\begin{document}
\title[The slice spectral sequence for the $C_{4}$ analog of real
$K$-theory]{The slice spectral sequence for the
$C_{4}$ analog of real $K$-theory}
\author{Michael~A.~Hill}
\address{Department of Mathematics \\ University of California Los Angeles\\
Los Angeles, CA 90095}
\email{[email protected]}
\author{Michael~J.~Hopkins}
\address{Department of Mathematics \\ Harvard University
\\Cambridge, MA 02138}
\email{[email protected]}
\author{Douglas~C.~Ravenel}
\address{Department of Mathematics \\ University of Rochester
\\Rochester, NY 14627}
\email{[email protected]}
\thanks{The authors were supported by DARPA Grant
FA9550-07-1-0555 and NSF Grants DMS-0905160, DMS-1307896\dots }
\date{\today}
\begin{abstract}
We describe the slice spectral sequence
of a 32-periodic $C_{4}$-spectrum $\KH$ related to the
$C_{4}$ norm
\begin{displaymath}
{MU^{((C_{4}))}=N_{C_{2}}^{C_{4}}MU_{\reals}}
\end{displaymath}
\noindent of the real cobordism spectrum $MU_{\reals}$. We will give
it as a {\SS} of Mackey functors converging to the graded Mackey
functor $\upi_{*}\KH$, complete with differentials and exotic
extensions in the Mackey functor structure.
The slice spectral sequence for the 8-periodic real $K$-theory
spectrum $K_{\reals}$ was first analyzed by Dugger. The $C_{8}$
analog of $\KH$ is 256-periodic and detects the Kervaire invariant
classes $\theta_{j}$. A partial analysis of its slice spectral
sequence led to the solution to the Kervaire invariant problem, namely
the theorem that $\theta_{j}$ does not exist for $j\geq 7$.
\end{abstract}
\keywords{equivariant stable homotopy theory, Kervaire invariant,
Mackey functor, slice spectral sequence}
\subjclass[2010]{55Q10 (primary), 55Q91, 55P42, 55R45, 55T99 (secondary)}
\maketitle
\tableofcontents
\listoftables
\section{Introduction}\label{sec-intro}
In \cite{HHR} we derived the main theorem about the Kervaire invariant
elements from some properties of a $C_{8}$-{\eqvr} spectrum we called
$\Omega $ constructed as follows. We started with the
$C_{2}$-spectrum $MU_{\reals}$, meaning the usual complex cobordism
spectrum $MU$ equipped with a $C_{2}$ action defined in terms of
complex conjugation.
Then we defined a functor $N_{C_{2}}^{C_{8}}$, the norm of
\cite[\S2.2.3]{HHR} which we abbreviate here by $N_{2}^{8}$, from the
category of $C_{2}$-spectra to that of $C_{8}$-spectra. Roughly
speaking, given a $C_{2}$-spectrum $X$, $N_{2}^{8}X$ is underlain by
the fourfold smash power $X^{\wedge 4}$ where a generator $\gamma $ of
$C_{8}$ acts by cyclically permuting the four factors, each of which
is invariant under the given action of the subgroup $C_{2}$. In a similar
way one can define a functor $N_{H}^{G}$ from $H$-spectra to
$G$-spectra for any finite groups $H\subseteq G$.
A $C_{8}$-spectrum such as $N_{2}^{8}MU_{\reals}$, which is a
commutative ring spectrum, has {\eqvr} homotopy groups indexed by $RO
(C_{8})$, the orthogonal {\rep} ring for the group $C_{8}$. One
element of the latter is $\rho_{8}$, the regular {\rep}. In
\cite[\S9]{HHR} we defined a certain element $D\in
\pi_{19\rho_{8}}N_{2}^{8}MU_{\reals}$ and then formed the associated
mapping telescope, which we denoted by $\Omega_{\mathbb{O}}$. The
symbol $\mathbb{O}$ was chosen to suggest a connection with the
octonions, but there really is none apart from the fact that the
octonions are 8-dimensional like $\rho _{8}$.
$\Omega_{\mathbb{O}}$ is also a
$C_{8}$-{\eqvr} commutative ring spectrum. We then proved that it is
{\eqvr}ly equivalent to $\Sigma^{256}\Omega_{\mathbb{O}}$; we call
this result the Periodicity Theorem. Then our spectrum $\Omega $ is
$\Omega_{\mathbb{O}}^{C_{8}}$, the fixed point spectrum of
$\Omega_{\mathbb{O}}$.
It is possible to do this with $C_{8}$ replaced by $C_{2^{n}}$ for any
$n$. The dimension of the periodicity is then $2^{1+n+2^{n-1}}$. For
example it is 32 for the group $C_{4}$ and $2^{13}$ for $C_{16}$. We
chose the group $C_{8}$ because it is the smallest that suits our
purposes, namely it is the smallest one yielding a fixed point
spectrum that detects the Kervaire invariant elements $\theta_{j}$.
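Explicitly, the periodicity formula gives
\begin{displaymath}
2^{1+2+2^{1}}=2^{5}=32,\qquad
2^{1+3+2^{2}}=2^{8}=256
\qquad \mbox{and}\qquad
2^{1+4+2^{3}}=2^{13}
\end{displaymath}
\noindent for $n=2$, $3$ and $4$ respectively.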
We know almost nothing about $\pi_{*}\Omega $, only that it is
periodic with period 256, that $\pi_{-2}=0$ (the Gap Theorem of
\cite[\S8]{HHR}), and that when $\theta_{j}$ exists its image in
$\pi_{*}\Omega $ is nontrivial (the Detection Theorem of
\cite[\S11]{HHR}).
We also know, although we did not say so in \cite{HHR}, that more
explicit computations would be much easier if we cut
$N_{2}^{8}MU_{\reals}$ down to size in the following way. Its
underlying homotopy, meaning that of the spectrum $MU^{\wedge 4}$, is
known classically to be a polynomial algebra over the integers with
four generators (cyclically permuted up to sign by the group action)
in every positive even dimension. This can be proved with methods
described by Adams in \cite{Ad:SHGH}. For the cyclic group $C_{2^{n}}$
one has $2^{n-1}$ generators in each positive even degree.
Specific generators $r_{i,j}\in \pi_{2i}MU^{\wedge 2^{n-1}}$ for $i>0$
and $0\leq j<2^{n-1}$ are defined in \cite[\S5.4.2]{HHR}.
{\em There is a way to kill all the generators above dimension $2k$}
that was described in \cite[\S2.4]{HHR}. Roughly speaking, let $A$ be
a wedge of suspensions of the sphere spectrum, one for each monomial in
the generators one wants to kill. One can define a multiplication and
group action on $A$ corresponding to the ones in $\pi_{*}MU^{\wedge
4}$. Then one has a map $A\to MU^{\wedge 4}$ whose restriction to
each summand represents the corresponding monomial, and a map $A\to
S^0$ (where the target is the sphere spectrum, not the space $S^{0}$)
sending each positive dimensional summand to a point. This leads to
two maps
\begin{displaymath}
S^{0}\wedge A\wedge MU^{\wedge 4}\rightrightarrows S^{0}\wedge MU^{\wedge 4}
\end{displaymath}
\noindent whose coequalizer we denote by $S^{0}\smashove{A}MU^{\wedge
4}$. Its homotopy is the quotient of $\pi_{*}MU^{\wedge 4}$ obtained
by killing the polynomial generators above dimension $2k$. The
construction is {\eqvr}, meaning that $S^{0}\smashove{A}MU^{\wedge 4}$
underlies a $C_{8}$-spectrum.
In \cite[\S7]{HHR} we showed that for $k=0$ the spectrum we get is the
integer {\SESM} spectrum $H\Z$; we called this result the Reduction
Theorem. In the non{\eqvr} case this is obvious. We are in effect
attaching cells to $MU^{\wedge 4}$ to kill all of its homotopy groups
in positive dimensions, which amounts to constructing the 0th
Postnikov section. In the {\eqvr} case the proof is more
delicate.
Now consider the case $k=1$, meaning that we are killing the
polynomial generators above dimension 2. Classically we know that
doing this to $MU$ (without the $C_{2}$-action) produces the
connective complex $K$-theory spectrum, sometimes denoted by $k$,
$bu$ or (2-locally) $BP\langle 1\rangle$. Inverting the Bott element
via a mapping telescope gives us $K$ itself, which is of course
2-periodic. In the $C_{2}$-{\eqvr} case one gets the ``real $K$-theory''
spectrum $K_{\reals}$ first studied by Atiyah in \cite{Atiyah:KR}. It
turns out to be 8-periodic and its fixed point spectrum is $KO$, which
is also referred to in other contexts as real $K$-theory.
The spectrum we get by killing the generators above dimension 2 in
the\linebreak $C_{8}$-spectrum $N_{2}^{8}MU_{\reals}$ will be denoted
analogously by $k_{[3]}$. We can invert the image of $D$ by forming a
mapping telescope, which we will denote by $K_{[3]}$. More
generally we denote by $k_{[n]}$ the spectrum obtained from
$N_{C_{2}}^{C_{2^{n}}}MU_{\reals}$ by killing all generators above
dimension 2. In particular $k_{[1]}=k_{\reals}$. Then we denote the
mapping telescope (after defining a suitable $D$) by $K_{[n]}$ and its
fixed point set by $KO_{[n]}$.
For $n\geq 3$, $KO_{[n]}$ also has a Periodicity, Gap and Detection
Theorem, so it could be used to prove the Kervaire invariant theorem.
{\em Thus $K_{[3]}$ is a substitute for $\Omega _{\mathbb{O}}$
with much smaller and therefore more tractable homotopy groups.} A
detailed study of them might shed some light on the fate of
$\theta_{6}$ in the 126-stem, the one hypothetical Kervaire invariant
element whose status is still open. {\em If we could show that
$\pi_{126}KO_{[3]}=0$, that would mean that
$\theta_{6}$ does not exist.}
The computation of the {\eqvr} homotopy $\upi_{*}K_{[3]}$ at this time
is daunting. {\em The purpose of this paper is to do a similar
computation for the group $C_{4}$ as a warmup exercise.} In the
process of describing it we will develop some techniques that are
likely to be needed in the $C_{8}$ case. We start with
$N_{2}^{4}MU_{\reals}$, kill its polynomial generators (of which there
are two in every positive even dimension) above dimension 2 as
described previously, and then invert a certain element in
$\pi_{4\rho_{4}}$. We denote the resulting spectrum by $\KH$; see
Definition \ref{def-kH} below. This spectrum is known to be 32-periodic. In an
earlier draft of this paper it was denoted by $K_{\mathbf{H}}$.
The computational tool for finding these homotopy groups is the slice
spectral sequence introduced in \cite[\S4]{HHR}. Indeed we do not
know of any other way to do it. For $K_{\reals}$ it was first
analyzed by Dugger \cite{Dugger} and his work is described below in
\S\ref{sec-Dugger}. In this paper we will study the slice spectral
sequence of Mackey functors associated with $\KH$. We will rely
extensively on the results, methods and terminology of \cite{HHR}.
{\em We warn the reader that the computation for $\KH$ is more
intricate than the one for $\KR$.} For example, the slice spectral
sequence for $\KR$, which is shown in Figure \ref{fig-KR}, involves
five different Mackey functors for the group $C_{2}$. We abbreviate
them with certain symbols indicated in Table \ref{tab-C2Mackey}. The
one for $\KH$, partly shown in Figure \ref{fig-KH}, involves over
twenty Mackey functors for the group $C_{4}$, with symbols indicated
in Table \ref{tab-C4Mackey}.
\begin{figure}\label{fig-poster}
\end{figure}
Part of this {\SS} is also illustrated in an unpublished poster produced in
late 2008 and shown in Figure \ref{fig-poster}. It shows the {\SS}
converging to the homotopy of the fixed point spectrum $\KH^{C_{4}}$.
The corresponding {\SS} of Mackey functors converges to the graded
Mackey functor $\upi_{*}\KH$.
In both illustrations some patterns of $d_{3}$s and families of
elements in low filtration are excluded to avoid clutter. In the
poster, representative examples of these are shown in the second and
fourth quadrants, the {\SS} itself being concentrated in the first and
third quadrants. In this paper those patterns are spelled out in
\S\ref{sec-C2diffs} and \S\ref{sec-C4diffs}.
We now outline the rest of the paper. Briefly, the next five sections
introduce various tools we need. Our objects of study, the spectra
$\kH$ and $\KH$, are formally introduced in \S\ref{sec-kH}. Dugger's
computation for $\KR$ is recalled in \S\ref{sec-Dugger}. The final
six sections describe the computation for $\kH$ and $\KH$.
In more detail, \S\ref{sec-nonsense} collects some notions from
{\eqvr} stable homotopy theory with an emphasis on Mackey functors.
Definition \ref{def-graded} introduces new notation that we will
occasionally need.
\S\ref{sec-HZZ} concerns the {\eqvr} analog of the homology of a point,
namely the $RO (G)$-graded homotopy of the integer {\SESM} spectrum
$H\Z$. In particular Lemma \ref{lem-aeu} describes some relations
among certain elements in it including the ``gold relation'' between
$a_{V}$ and $u_{V}$.
\S\ref{sec-gendiffs} describes some general properties of spectral
sequences of Mackey functors. These include Theorem \ref{thm-exotic}
about the relation between differentials and exotic extensions in the
Mackey functor structure and Theorem \ref{thm-normdiff} on the norm of
a differential.
\S\ref{sec-C4} lists some concise symbols for various specific Mackey functors
for the groups $C_{2}$ and $C_{4}$ that we will need. Such functors
can be spelled out explicitly by means of Lewis diagrams
(\ref{eq-Lewis}), which we usually abbreviate by symbols shown in
Tables \ref{tab-C2Mackey} and \ref{tab-C4Mackey}.
In \S\ref{sec-chain} we study some chain complexes of Mackey functors
that arise as cellular chain complexes for $G$-CW complexes of the
form $S^{V}$.
In \S\ref{sec-kH} we formally define (in \ref{def-kH}) the
$C_{4}$-spectra of interest in this paper, $\kH$ and $\KH$.
In \S\ref{sec-Dugger} we describe the slice {\SS} for an easier case,
the $C_{2}$-spectrum for real $K$-theory, $K_{\reals}$. This is due to
Dugger \cite{Dugger} and serves as a warmup exercise for us. It turns
out that everything in the {\SS } is formally determined by the
structure of its $\EE_{2}$-term and Bott periodicity.
In \S\ref{sec-more} we introduce various elements
in the homotopy groups of $\kH$ and $\KH$. They are collected in Table
\ref{tab-pi*}, which spans several pages.
In \S\ref{sec-slices} we determine the $\EE_{2}$-term of the
slice {\SS } for $\kH$ and $\KH$.
In \S\ref{sec-diffs} we use the Slice Differentials Theorem of
\cite{HHR} to determine some differentials in our {\SS}.
In \S\ref{sec-C2diffs} we examine the $C_{4}$-spectrum $\kH$ as a
$C_{2}$-spectrum. This leads to a calculation only slightly more
complicated than Dugger's. It gives a way to remove a lot of clutter
from the $C_{4}$ calculation.
In \S\ref{sec-C4diffs} we determine the $\EE_{4}$-term of our {\SS}.
It is far smaller than $\EE_{2}$ and the results of
\S\ref{sec-C2diffs} enable us to ignore most of it. What is left is
small enough to be shown legibly in the {\SS } charts of Figures
\ref{fig-E4} and \ref{fig-KH}. They illustrate integrally graded (as
opposed to $RO (C_{4})$-graded) spectral sequences of Mackey functors,
which are discussed in \S\ref{sec-C4}. In order to read these charts
one needs to refer to Table \ref{tab-C4Mackey} which defines the
``hieroglyphic'' symbols we use for the specific Mackey functors that
we need.
We finish the calculation in \S\ref{sec-higher} by dealing with the
remaining differentials and exotic Mackey functor extensions. It
turns out that they are all formal consequences of $C_{2}$
differentials of the previous section along with the results of
\S\ref{sec-gendiffs}.
The result is a complete description of the {\em integrally graded}
portion of $\upi_{\star}\kH$. It is best seen in the {\SS } charts of
Figures \ref{fig-E4} and \ref{fig-KH}. Unfortunately we do not have a
clean description, much less an effective way to display the full $RO
(C_{4})$-graded homotopy groups.
For $G=C_{2}$, the two irreducible orthogonal {\rep}s are the trivial
one of degree 1, denoted by the symbol 1, and the sign {\rep} denoted
by $\sigma $. Thus $RO (G)$ is additively a free abelian group of rank
2, and the {\SS } of interest is trigraded. In the $RO (C_{2})$-graded
homotopy of $\KR$, a certain element of degree $1+\sigma $ (the degree
of the regular {\rep} $\rho_{2}$) is invertible. This means that each
component of $\upi_{\star}\KR$ is canonically isomorphic to a Mackey
functor indexed by an ordinary integer. See Theorem
\ref{thm-ROG-graded} for a more precise statement. Thus the full
(trigraded) $RO (C_{2})$-graded slice {\SS } is determined by the
bigraded one shown in Figure \ref{fig-KR}.
For $G=C_{4}$, the {\rep} ring $RO (G)$ is additively a free abelian group of
rank 3, so it leads to a quadrigraded {\SS }. The three irreducible
{\rep}s are the trivial and sign {\rep}s 1 and $\sigma $ (each having
degree one) and a degree two {\rep} $\lambda $ given by a rotation of
the plane $\reals^{2}$ of order 4. The regular {\rep} $\rho_{4}$ is
isomorphic to $1+\sigma +\lambda $. As in the case of $\KR$, there is
an invertible element $\normrbar_{1}$ (see Table \ref{tab-pi*}) in
$\upi_{\star}\KH$ of degree $\rho_{4}$. This means we can reduce the
quadrigraded slice {\SS } to a trigraded one, but finding a full
description of it is a problem for the future.
\section{Recollections about {\eqvr} stable homotopy theory}
\label{sec-nonsense}
We first discuss some structure on the {\eqvr} homotopy groups of a\linebreak
$G$-spectrum $X$. {\em We will assume throughout that $G$ is a finite
cyclic $p$-group.} This means that its subgroups are well ordered by
inclusion and each is uniquely determined by its order. The results
of this section hold for any prime $p$, but the rest of the paper
concerns only the case $p=2$. We will define several maps indexed by
pairs of subgroups of $G$. {\em We will often replace these indices
by the orders of the subgroups, sometimes denoting $|H|$ by $h$.}
The homotopy groups can be defined in terms of finite $G$-sets $T$.
Let
\begin{displaymath}
\upi_{0}^{G}X (T) = [T_{+}, X]^{G},
\end{displaymath}
\noindent be the set of homotopy classes of {\eqvr} maps from $T_{+}$,
the suspension spectrum of the union of $T$ with a disjoint base
point, to the spectrum $X$. We will often omit $G$ from the notation
when it is clear from the context. For an orthogonal {\rep} $V$ of
$G$, we define
\begin{displaymath}
\upi_{V}X (T) = [S^{V}\wedge T_{+}, X]^{G}.
\end{displaymath}
\noindent As an $RO (G)$-graded contravariant abelian group valued
functor of $T$, this converts disjoint unions to direct sums. This
means it is determined by its values on the sets $G/H$ for subgroups
$H\subseteq G$.
Since $G$ is abelian, $H$ is normal and $\upi_{V}X (G/H)$ is a $\Z[G/H]$-module.
Given subgroups $K\subseteq H\subseteq G$, one has pinch and fold maps
between the $H$-spectra $H/H_{+}$ and $H/K_{+}$. This leads to a diagram
\begin{numequation}\label{eq-pinch-fold}
\begin{split}
\xymatrix
@R=1mm
@C=4mm
{
&H/H_{+}\ar@<.5ex>[rr]^(.5){\mbox{pinch} }
&{}
&H/K_{+}\ar@<.5ex>[ll]^(.5){\mbox{fold} }\\
& &\ar@{=>}[ddd]^(.6){G_{+}\smashove{H} (\cdot)}\\
& &\\
& &\\
& &\\
G/H_{+}\ar@{=}[r]^(.5){}
&G_{+}\smashove{H}H/H_{+}\ar@<.5ex>[rr]^(.5){\mbox{pinch} }
&{} &G_{+}\smashove{H}H/K_{+}\ar@{=}[r]^(.5){}
\ar@<.5ex>[ll]^(.5){\mbox{fold} }
&G_{+}\smashove{K}K/K_{+}\ar@{=}[r]^(.5){}
&G/K_{+}.
}
\end{split}
\end{numequation}
\noindent Note that while the fold map is induced by a map of
$H$-sets, the pinch map is not. It only exists in the stable
category.
\begin{defin}\label{def-fixed-point-maps}
{\bf The Mackey functor structure maps in $\upi_{V}^{G}X$.}
The fixed point transfer
and restriction maps
\begin{displaymath}
\xymatrix
@R=5mm
@C=15mm
{
{\upi_{V}X (G/H)}\ar@<-.5ex>[r]_(.5){\Res_{K}^{H} }
&{\upi_{V}X (G/K)}\ar@<-.5ex>[l]_(.5){\Tr_{K}^{H} }
}
\end{displaymath}
\noindent are the ones induced by the composite maps in the bottom row
of (\ref{eq-pinch-fold}).
\end{defin}
These satisfy the formal properties needed to make $\upi_{V}X$ into a
Mackey functor; see \cite[Def. 3.1]{HHR}. They are usually referred
to simply as the transfer and restriction maps. We use the words
``fixed point'' to distinguish them from another similar pair of maps
specified below in Definition \ref{def-gp-action}.
We remind the reader that a Mackey functor $\underline{M}$ for a
finite group $G$ assigns an abelian group $\underline{M} (T)$ to every
finite $G$-set $T$. It converts disjoint unions to direct sums. It is
therefore determined by its values on orbits, meaning $G$-sets of the
form $G/H$ for various subgroups $H$ of $G$. For subgroups
$K\subseteq H\subseteq G$, one has a map of $G$-sets $G/K\to G/H$. In
categorical language $\underline{M}$ is actually a pair of functors,
one covariant and one contravariant, both behaving the same way on
objects. Hence we get maps both ways between $\underline{M} (G/K)$
and $\underline{M} (G/H)$. For the Mackey functor $\upi_{V}X$, these
are the two maps of \ref{def-fixed-point-maps}.
One can generalize the definition of a Mackey functor by replacing the
target category of abelian groups by one's favorite abelian category,
such as that of $R$-modules or graded abelian groups.
\begin{defin}\label{def-green}
A {\bf graded Green functor $\underline{R}_{*}$ for a group $G$} is a
Mackey functor for $G$ with values in the category of graded abelian
groups such that $\underline{R}_{*} (G/H)$ is a graded commutative
ring for each subgroup $H$ and for each pair of subgroups $K\subseteq
H\subseteq G$, the restriction map $\Res_{K}^{H}$ is a ring
homomorphism and the transfer map $\Tr_{K}^{H}$ satisfies the {\bf
Frobenius relation}
\begin{displaymath}
\Tr_{K}^{H}(\Res_{K}^{H}(a)b)=a (\Tr_{K}^{H}(b))\qquad
\mbox{for $a\in \underline{R}_{*} (G/H)$ and $b\in
\underline{R}_{*} (G/K)$}.
\end{displaymath}
\end{defin}
When $X$ is a ring spectrum, we have the {\em fixed point Frobenius
relation}
\begin{numequation}
\label{eq-Frob} \Tr_{K}^{H}(\Res_{K}^{H}(a)b)=a (\Tr_{K}^{H}(b))\qquad
\mbox{for $a\in \upi_{\star}X (G/H)$ and $b\in
\upi_{\star}X (G/K)$}.
\end{numequation}
In particular this means that
\begin{numequation}
\label{eq-Frobcor}
a(\Tr_{K}^{H}(b)) = 0\qquad \mbox{when } \Res_{K}^{H}(a)=0.
\end{numequation}
For a {\rep} $V$ of $G$, the group
\begin{displaymath}
\upi_{V}^{G}X (G/H)=\pi_{V}^{H}X=[S^{V},X]^{H}
\end{displaymath}
\noindent is isomorphic to
\begin{displaymath}
[S^{0},S^{-V}\wedge X]^{H}=\pi_{0} (S^{-V}\wedge X)^{H}.
\end{displaymath}
\noindent However fixed points do not respect smash products, so we
cannot equate this group with
\begin{displaymath}
\pi_{0} (S^{-V^{H}}\wedge X^{H}) = [S^{V^{H}},X^{H}]=\pi_{|V^{H}|}X^{H}
=\upi^{G}_{|V^{H}|}X (G/H).
\end{displaymath}
Conversely a $G$-{\eqvr} map $S^{V}\to X$ represents an element in
\begin{displaymath}
[S^{V},X]^{G}=\pi_{V}^{G}X=\upi^{G}_{V}X (G/G).
\end{displaymath}
The following notion is useful.
\begin{defin}\label{def-MF-induction}
{\bf Mackey functor induction and restriction.} For a subgroup $H$ of $G$
and an $H$-Mackey functor $\underline{M}$, the
induced $G$-Mackey functor $\uparrow_{H}^{G}\underline{M}$ is given by
\begin{displaymath}
\uparrow_{H}^{G}\underline{M} (T)=\underline{M} (i_{H}^{*}T)
\end{displaymath}
\noindent for each finite $G$-set $T$, where $i_{H}^{*}$ denotes the
forgetful functor from $G$-sets (or spaces or spectra) to $H$-sets.
For a $G$-Mackey functor $\underline{N}$, the restricted $H$-Mackey functor
$\downarrow_{H}^{G}\underline{N}$ is given by
\begin{displaymath}
\downarrow_{H}^{G}\underline{N} (S)=\underline{N} (G\times_{H}S)
\end{displaymath}
\noindent for each finite $H$-set $S$.
\end{defin}
This notation is due to Th\'evenaz-Webb \cite{Thevenaz-Webb}. They put
the decorated arrow on the right and denote $G\times_{H}S$ by
$S\uparrow_{H}^{G}$ and $i_{H}^{*}T$ by $T\downarrow_{H}^{G}$.
We also need notation for $X$ as an $H$-spectrum
for subgroups $H\subseteq G$. For this purpose we will enlarge the
orthogonal {\rep} ring of $G$, $RO (G)$, to the {\rep} ring Mackey
functor $\underline{RO} (G)$ defined by $\underline{RO} (G) (G/H)=RO
(H)$. This was the motivating example for the definition of a Mackey
functor in the first place. In it the transfer map on a {\rep} $V$ of
$H$ is the induced {\rep} of a supergroup $K\supseteq H$, and its
restriction to a subgroup is defined in the obvious way. In particular
the restriction of the transfer of $V$ is $|K/H|V$.
More generally for a finite $G$-set $T$, $\underline{RO} (G) (T)$ is
the ring (under pointwise direct sum and tensor product) of functors
to the category of finite dimensional orthogonal real vector spaces
from $B_{G}T$, the split groupoid (see \cite[A1.1.22]{Rav:MU}) whose
objects are the elements of $T$ with morphisms defined by the action
of $G$.
\begin{defin}\label{def-graded}
{\bf $\underline{RO} (G)$-graded homotopy groups.} For each
$G$-spectrum $X$ and each pair $(H,V)$ consisting of a subgroup
$H\subseteq G$ and a virtual orthogonal {\rep} $V$ of $H$, let the
$G$-Mackey functor $\upi_{{H,V}} (X)$ be defined by
\begin{displaymath}
\upi_{{H,V}} (X) (T)
:= \left[(G_{+}\smashove{H}S^{V})\wedge T_{+}, X \right]^{G}
\cong \left[S^{V}\wedge i_{H}^{*}T_{+}, i_{H}^{*}X \right]^{H}
= \upi_{V}^{H} (i_{H}^{*}X) (i_{H}^{*}T),
\end{displaymath}
\noindent for each finite $G$-set $T$. Equivalently, $\upi_{{H,V}}
(X)=\uparrow_{H}^{G}\upi_{V}^{H} (i_{H}^{*}X)$ (see
\ref{def-MF-induction}) as Mackey functors. We will often denote
$\upi_{G,V}$ by $\upi_{V}^{G}$ or $\upi_{V}$.
\end{defin}
We will be studying the $\underline{RO} (G)$-graded slice {\SS }
$\left\{\EE_{r}^{s,\star} \right\}$ of Mackey functors with
$r,s\in \Z$ and $\star\in \underline{RO} (G)$. We will use the
notation $\EE_{r}^{s,(H,V)}$ for such Mackey functors,
abbreviating to $\EE_{r}^{s,V}$ when the subgroup is
$G$. Most of our spectral sequence charts will display the values of
$\EE_{2}^{s,t}$ for integral values of $t$ only.
The following definition should be compared with \cite[(2.3)]{Ad:Prereq}.
\begin{defin}\label{def-homeo}
{\bf An {\eqvr} homeomorphism.} Let $X$ be a $G$-space and $Y$ an
$H$-space for a subgroup $H\subseteq G$. We define the {\eqvr}
homeomorphism
\begin{displaymath}
\tilde{u}_{H}^{G} (Y,X):G\times_{H} (Y\times i_{H}^{*}X) \to
(G\times_{H}Y)\times X
\end{displaymath}
\noindent by $(g,y,x)\mapsto (g,y,g (x))$ for $g\in G$, $y\in Y$ and
$x\in X$. We will use the same notation for a
similarly defined homeomorphism
\begin{displaymath}
\tilde{u}_{H}^{G} (Y,X):G_{+}\smashove{H} (Y\wedge i_{H}^{*}X) \to
(G_{+}\smashove{H}Y)\wedge X
\end{displaymath}
\noindent for a $G$-spectrum $X$ and $H$-spectrum $Y$. We will abbreviate
\begin{displaymath}
\tilde{u}_{H}^{G} (S^{0},X):G_{+}\smashove{H} i_{H}^{*}X \to
G/H_{+}\wedge X
\end{displaymath}
\noindent by $\tilde{u}_{H}^{G} (X)$.
For {\rep}s $V$ and $V'$ of $G$ both restricting to $W$ on $H$, but
having distinct restrictions to all larger subgroups, we define
$\tilde{u}_{V-V'}=\tilde{u}_{H}^{G} (S^{V})\tilde{u}_{H}^{G} (S^{V'})^{-1}$,
so the following diagram of {\eqvr} homeomorphisms commutes:
\begin{numequation}\label{eq-u{V-V'}}
\begin{split}
\xymatrix
@R=2mm
@C=10mm
{ & &G/H\wedge S^{V}\\
G_{+}\smashove{H}S^{W}\ar[rru]^(.5){\tilde{u}_{H}^{G} (S^{V})}
\ar[rrd]_(.5){\tilde{u}_{H}^{G} (S^{V'})}\\
& &G/H\wedge S^{V'}\ar[uu]_(.5){\tilde{u}_{V-V'}}.
}
\end{split}
\end{numequation}
\noindent When $V'=|V|$ (meaning that $H=G_{V}$ acts trivially on
$W$), then we abbreviate $\tilde{u}_{V-V'}$ by $\tilde{u}_{V}$.
\end{defin}
If $V$ is a {\rep} of $H$ restricting to $W$ on $K$, we can smash the
diagram (\ref{eq-pinch-fold}) with $S^{V}$ and get
\begin{numequation}\label{eq-pinch-fold-VW}
\begin{split}
\xymatrix
@R=1mm
@C=5mm
{
S^{V}\ar@<.5ex>[rr]^(.4){\mbox{pinch} }
&{}
& H/K_{+}\wedge S^{V}\ar@<.5ex>[ll]^(.6){\mbox{fold} }\\
&\ar@{=>}[ddd]^(.6){G_{+}\smashove{H} (\cdot)}\\
&\\
&\\
&\\
G_{+}\smashove{H}S^{V}\ar@<.5ex>[rr]^(.4){\mbox{pinch} }
&{} &G_{+}\smashove{H} (H/K_{+}\wedge S^{V} )\ar[r]^(.55){\approx }
\ar@<.5ex>[ll]^(.6){\mbox{fold} }
&G_{+}\smashove{H} (H_{+}\smashove{K} S^{W} )\ar@{=}[r]^(.5){}
&G_{+}\smashove{K}S^{W},
}
\end{split}
\end{numequation}
\noindent where the homeomorphism is induced by that of Definition
\ref{def-homeo}.
\begin{defin}\label{def-gp-action}
{\bf The group action restriction and transfer maps.} For subgroups
$K\subseteq H\subseteq G$, let $V\in RO (H)$ be a virtual {\rep} of $H$ restricting to $W\in RO (K)$. The group action transfer
and restriction maps
\begin{displaymath}
\xymatrix
@R=5mm
@C=15mm
{
{\uparrow_{H}^{G}\upi_{V}^{H} (i_{H}^{*}X) } \ar@{=}[r]^(.5){}
&{\upi_{H,V}X }\ar@<-.5ex>[r]_(.5){\underline{r}_{K}^{H} }
&{\upi_{K,W}X }\ar@<-.5ex>[l]_(.5){\underline{t}_{K}^{H,V} }
\ar@{=}[r]^(.5){}
&{\uparrow_{K}^{G}\upi_{W}^{K} (i_{K}^{*}X) }
}
\end{displaymath}
\noindent (see \ref{def-MF-induction}) are the ones induced by the
composite maps in the bottom row of (\ref{eq-pinch-fold-VW}). The
symbols $t$ and $r$ here are underlined because they are maps {\em of}
Mackey functors rather than maps within Mackey functors.
\end{defin}
We include $V$ as an index for the group action transfer
$\underline{t}_{K}^{H,V}$ because its target is not determined by its
source.
Thus we have abelian groups $\upi_{{H',V}} (X) (G/H'')$ for all
subgroups $H', H''\subseteq G$ and {\rep}s $V$ of $H'$. Most of them
are redundant in view of Theorem \ref{thm-module} below. In what
follows, we will use the notation $H_{\cup}=H'\cup H''$ and
$H_{\cap}=H'\cap H''$.
\begin{lem}\label{lem-module}
{\bf An {\eqvr} module structure.} For a $G$-spectrum $X$ and
$H'$-spectrum $Y$,
\begin{displaymath}
[G_{+}\smashove{H'}Y, X]^{H''}
=\Z[G/H_{\cup }]\otimes [H_{\cup +}\smashove{H'}Y,X]^{H''}
\end{displaymath}
\noindent as $\Z[G/H'']$-modules.
\end{lem}
\proof
As abelian groups,
\begin{align*}
[G_{+}\smashove{H'}Y, X]^{H''}
& = [i_{H''}^{*} (G_{+}\smashove{H'}Y), X]^{H''} \\
& = \left[\bigvee_{|G/H_{\cup }|}H_{\cup +}\smashove{H'}Y,
X \right]^{H''}\\
& = \bigoplus_{|G/H_{\cup }|}[H_{\cup +}\smashove{H'}Y, X]^{H''}
\end{align*}
\noindent and $G/H''$ permutes the wedge summands of
$\bigvee_{|G/H_{\cup }|}H_{\cup +}\smashove{H'}Y$ as it
permutes the elements of $G/H_{\cup }$. \qed
\begin{thm}\label{thm-module}
{\bf The module structure for $\underline{RO} (G)$-graded
homotopy\linebreak groups.} For subgroups $H', H''\subseteq G$ with
$H_{\cup}=H'\cup H''$ and $H_{\cap}=H'\cap H''$, and a virtual {\rep}
$V$ of $H'$ restricting to $W$ on $H_{\cap}$,
\begin{displaymath}
\upi_{H',V}X (G/H'')
\cong \Z[G/H_{\cup }]\otimes \upi_{H_{\cap},W}X (G/G)
\cong \Z[G/H_{\cup }]\otimes \upi_{W}^{H_{\cap}}
i_{H_{\cap}}^{*}X (H_{\cap}/H_{\cap})
\end{displaymath}
\noindent as $\Z[G/H'']$-modules.
Suppose that $H''$ is a proper subgroup of $H'$ and $\gamma \in H'$
is a generator. Then as an element in $\Z[G/H'']$, $\gamma $ induces
multiplication by $-1$ in $\upi_{H',V}X (G/H'')$ iff $V$ is
nonorientable.
\end{thm}
\proof We start with the definition and use the homeomorphism of
Definition \ref{def-homeo} and the module structure of Lemma \ref{lem-module}.
\begin{align*}
\upi_{H',V}X (G/H'')
& = [(G_{+}\smashove{H'} S^{V})\wedge G/H''_{+},X]^{G} \\
& = [G_{+}\smashove{H''}(G_{+}\smashove{H'}S^{V}),X]^{G} \\
& = [G_{+}\smashove{H'}S^{V},X]^{H''}
= \Z[G/H_{\cup }] \otimes [H_{\cup +}\smashove{H'}S^{V}, X]^{H''}\\
\aand
[H_{\cup +}\smashove{H'}S^{V}, X]^{H''}
& = [S^{W}, X]^{H_{\cap}}
= [G_{+}\smashove{H_{\cap}}S^{W}, X]^{G}\\
& = \upi_{W}^{H_{\cap}} (i_{H_{\cap}}^{*}X) (H_{\cap}/H_{\cap})
= \upi_{H_{\cap},W}X (G/G).
\end{align*}
For the statement about nonorientable $V$, we have
\begin{displaymath}
\upi_{H',V}X (G/H'') = \Z[G/H']\otimes \upi_{W}^{H''}i_{H''}^{*}X (H''/H'')
= \Z[G/H']\otimes [S^{W},X]^{H''}.
\end{displaymath}
\noindent
Then $\gamma $ induces a map of degree $\pm 1$ on the sphere depending
on the orientability of $V$. \qed
Theorem \ref{thm-module} means that we need only consider the groups
\begin{displaymath}
\upi_{H,V}X (G/G) \cong \upi_{V}^{H}i^{*}_{H}X (H/H).
\end{displaymath}
When $H\subset G$ and $V$ is a virtual {\rep} of $G$, we have
\begin{numequation}\label{eq-easy-iso}
\begin{split}
\upi_{V}X (G/H)
\cong \upi_{H,i^{*}_{H}V}X (G/G)
\cong \upi^{H}_{i^{*}_{H}V}i^{*}_{H}X (H/H).
\end{split}
\end{numequation}
\noindent This isomorphism makes the following diagram commute for
$K\subseteq H$.
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
{\upi_{V}X (G/H)}\ar[r]^(.45){\cong }
\ar@<-.5ex>[d]_(.5){\Res_{K}^{H} }
&{\upi_{H,i^{*}_{H}V}X (G/G)}\ar[r]^(.5){\cong }
\ar@<-.5ex>[d]_(.5){\underline{r}_{K}^{H} }
&{\upi^{H}_{i^{*}_{H}V}i^{*}_{H}X (H/H)}
\\
{\upi_{V}X (G/K)}\ar[r]^(.45){\cong }
\ar@<-.5ex>[u]_(.5){\Tr_{K}^{H} }
&{\upi_{K,i_{K}^{*}V}X (G/G)}\ar[r]^(.5){\cong }
\ar@<-.5ex>[u]_(.5){\underline{t}_{K}^{H,i^{*}_{H}V} }
&{\upi^{K}_{i_{K}^{*}V}i^{*}_{K}X (K/K)}
}
\end{displaymath}
\noindent {\em We will use the three groups of (\ref{eq-easy-iso})
interchangeably as convenient and use the same notation for elements
in each related by this canonical isomorphism.} Note that the group on
the left is indexed by $RO (G)$ while the two on the right are indexed
by $RO (H)$. This means that if $V$ and $V'$ are {\rep}s of $G$ each
restricting to $W$ on $H$, then $\upi_{V}X (G/H)$ and $\upi_{V'}X
(G/H)$ are canonically isomorphic. The first of these is
\begin{displaymath}
[G/H_{+}\wedge S^{V}, X]^{G}\cong
[G_{+}\smashove{H}S^{W}, X]^{G} \cong
[S^{W}, i_{H}^{*}X]^{H}
\end{displaymath}
\noindent where the first isomorphism is induced by the homeomorphism
$\tilde{u}_{H}^{G} (X)$ of Definition \ref{def-homeo} and the second
is the fact that $G_{+}\smashove{H} (\cdot )$ is the left adjoint of
the forgetful functor $i_{H}^{*}$.
\begin{rem}\label{rem-via}
{\bf Factorization via restriction}. For a ring spectrum $X$, such as
the one we are studying in this paper, an indecomposable element in
$\upi_{\star}X (G/H)$ may map to a product $xy\in \upi_{H,\star}X
(G/G)$ of elements in groups indexed by {\rep}s of $H$ that are not
restrictions of {\rep}s of $G$. When this happens we may denote the
indecomposable element in $\upi_{\star}X (G/H)$ by $[xy]$. This
factorization can make some computations easier.
\end{rem}
\section{The $RO (G)$-graded homotopy of $\HZ$}\label{sec-HZZ}
We describe part of the $RO (G)$-graded Green functor
$\upi_{\star}(\HZ)$, where $\HZ $ is the integer {\SESM} spectrum
in the $G$-{\eqvr} category, for some finite cyclic 2-group $G$. For
each actual (as opposed to virtual) $G$-{\rep} $V$ we have an {\eqvr}
reduced cellular chain complex $C^{V}_{*}$ for the space $S^{V}$. It
is a complex of $\ints [G]$-modules with $H_{*} (C^{V})=H_{*}
(S^{|V|})$.
One can convert such a chain complex $C_{*}^{V}$ of $\Z[G]$-modules to
one of Mackey functors as follows. Given a $\Z[G]$-module $M$, we get
a Mackey functor $\uM$ defined by
\begin{numequation}
\label{eq-fpm}
\uM (G/H)= M^{H}\qquad \mbox{for each subgroup $H\subseteq G$.}
\end{numequation}
\noindent We call this a {\em fixed point Mackey functor}. In it each
restriction map $\Res_{K}^{H}$ (for $K\subseteq H\subseteq G$) is one
to one. When $M$ is a permutation module, meaning the free abelian
group on a $G$-set $B$, we call $\uM$ a {\em permutation Mackey
functor} \cite[\S3.2]{HHR}.
In particular the $\Z[G]$-module $\Z$ with trivial group action (the
free abelian group on the $G$-set $G/G$) leads to a Mackey functor
$\underline{\Z}$ in which each restriction map is an isomorphism and
the transfer map $\Tr_{K}^{H}$ is multiplication by $|H/K|$. For each
Mackey functor $\underline{M}$ there is an {\SESM} spectrum
$H\underline{M}$ \cite[\S5]{Greenlees-May}, and $\HZZ$ is the same as
$\HZ$ with trivial group action.
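For example, when $G=C_{4}$, so that its subgroups are
$\ee\subseteq C_{2}\subseteq C_{4}$, the Mackey functor
$\underline{\Z}$ has value $\Z$ at each of $G/G$, $G/C_{2}$ and
$G/\ee$, with
\begin{displaymath}
\Res_{C_{2}}^{C_{4}}=\Res_{\ee}^{C_{2}}=1
\quad \mbox{and}\quad
\Tr_{C_{2}}^{C_{4}}=\Tr_{\ee}^{C_{2}}=2.
\end{displaymath}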
Given a finite $G$-CW spectrum $X$, meaning one
built out of cells of the form $G_{+}\smashove{H} e^{n}$, we get a
reduced cellular chain complex of $\Z[G]$-modules $C_{*}X$, leading to
a chain complex of fixed point Mackey functors $\uC_{*}X$.
Its homology is a graded Mackey functor $\uH_{*}X$ with
\begin{displaymath}
\uH_{*}X (G/H)
= \upi_{*} (X\wedge \HZZ) (G/H)
= \pi_{*} (X\wedge \HZZ)^{H}.
\end{displaymath}
\noindent In particular $\uH_{*}X (G/\ee) = H_{*}X$, the
underlying homology of $X$. In general $\uH_{*}X (G/H)$
is not the same as $H_{*} (X^{H})$ because fixed points do not commute
with smash products.
For a finite cyclic 2-group $G=C_{2^{k}}$, the irreducible {\rep}s are
the 2-dimensional ones $\lambda (m)$ corresponding to rotation through
an angle of $2\pi m/2^{k}$ for $0<m<2^{k-1}$, the sign {\rep} $\sigma
$ and the trivial one of degree one, which we denote by 1. The
2-local {\eqvr} homotopy type of $S^{\lambda (m)}$ depends only on the
2-adic valuation of $m$, so we will only consider $\lambda (2^{j})$
for $0\leq j\leq k-2$ and denote it by $\lambda_{j}$. The planar
rotation $\lambda_{k-1}$ through angle $\pi$ is the same {\rep} as
$2\sigma $. {\em We will denote $\lambda (1)=\lambda_{0}$ simply by
$\lambda $.}
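For example, for $G=C_{8}$ (so $k=3$) the 2-dimensional irreducible
{\rep}s are $\lambda (1)$, $\lambda (2)$ and $\lambda (3)$. Since 3
has 2-adic valuation 0, $S^{\lambda (3)}$ is 2-locally equivalent to
$S^{\lambda (1)}$, so we need only consider $\lambda_{0}=\lambda $ and
$\lambda_{1}=\lambda (2)$, while $\lambda_{2}$, rotation through angle
$\pi $, is $2\sigma $.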
We will describe the chain complex $C^{V}$ for
\begin{displaymath}
V=a+b\sigma +\sum_{2\leq j\leq k}c_{j}\lambda_{k-j}.
\end{displaymath}
\noindent where $a$, $b$ and $c_{j}$ are nonnegative integers.
The isotropy group of $V$ (the largest subgroup fixing all of $V$) is
\begin{displaymath}
G_{V}=\mycases{
C_{2^{k}}=G
&\mbox{for }b=c_{2}=\dotsb =c_{k}=0\\
C_{2^{k-1}}=:G'
&\mbox{for $b>0$ and $c_{2}=\dotsb =c_{k}=0$}\\
C_{2^{k-\ell }}
&\mbox{for $c_{\ell }>0$ and $c_{1+\ell }=\dotsb =c_{k}=0$}
}
\end{displaymath}
The sphere $S^{V}$ has a $G$-CW structure with reduced cellular chain complex
$C^{V}$ of the form
\begin{numequation}\label{eq-CVn}
\begin{split}
C^{V}_{n}=\mycases{
\ints
&\mbox{for }n=d_{0}\\
\ints[G/G']
&\mbox{for }d_{0}<n\leq d_{1}\\
\ints[G/C_{2^{k-j}}]
&\mbox{for $d_{j-1}<n\leq d_{j}$ and $2\leq j\leq \ell $}\\
0 &\mbox{otherwise.}
}
\end{split}
\end{numequation}
\noindent where
\begin{displaymath}
d_{j}=\mycases{
a &\mbox{for $j=0$}\\
a+b
&\mbox{for $j=1$}\\
a+b+2c_{2}+\dotsb +2c_{j}
&\mbox{for $2\leq j\leq \ell $,}
}
\end{displaymath}
\noindent so $d_{\ell }=|V|$.
The boundary map $\partial_{n}:C_{n}^{V}\to C_{n-1}^{V}$ is determined
by the fact that\linebreak ${H_{*}(C^{V})=H_{*}(S^{|V|})}$. More
explicitly, let $\gamma $ be a generator of $G$ and
\begin{displaymath}
\zeta_{j}=\sum_{0\leq t<2^{j }}\gamma^{t}
\qquad \mbox{for }1\leq j \leq k.
\end{displaymath}
\noindent Then we have
\begin{displaymath}
\partial_{n}=\mycases{
\nabla
&\mbox{for }n=1+d_{0}\\
(1-\gamma )x_{n}
&\mbox{for $n-d_{0}$ even and $2+d_{0}\leq n\leq |V|$}\\
x_{n}
&\mbox{for $n-d_{0}$ odd and $2+d_{0}\leq n\leq |V|$}\\
0 &\mbox{otherwise,}
}
\end{displaymath}
\noindent where $\nabla$ is the fold map sending $\gamma \mapsto 1$,
and $x_{n}$ denotes multiplication by an element in $\Z[G]$ to be
named below. We will use the same symbol below for the quotient map
$\Z[G/H]\to\Z[G/K]$ for $H\subseteq K\subseteq G$. The elements
$x_{n}\in \Z[G]$ for $2+d_{0}\leq n\leq 1+|V|$ are determined
recursively by $x_{2+d_{0}}=1$ and
\begin{displaymath}
x_{n}x_{n-1}= \zeta_{j } \qquad \mbox{for }2+d_{j-1}<n\leq 2+d_{j}.
\end{displaymath}
\noindent It follows that $H_{|V|}C^{V}=\Z$ generated by either
$x_{1+|V|}$ or its product with $1-\gamma $, depending on the parity
of $b$.
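For example, take $G=C_{4}$ (so $k=2$, with $G'=C_{2}$) and $V=\sigma
+\lambda $, so that $a=0$, $b=1$, $c_{2}=1$, $d_{0}=0$, $d_{1}=1$ and
$d_{2}=|V|=3$. The recursion gives $x_{2}=1$,
$x_{3}=\zeta_{1}=1+\gamma $ and $x_{4}=1+\gamma^{2}$, and $C^{V}$ is
the complex
\begin{displaymath}
\xymatrix
@C=15mm
{
{\Z[G]}\ar[r]^(.5){1+\gamma }
&{\Z[G]}\ar[r]^(.5){1-\gamma }
&{\Z[G/G']}\ar[r]^(.5){\nabla }
&{\Z}
}
\end{displaymath}
\noindent in degrees 3 through 0, where the second map is
multiplication by $1-\gamma $ followed by the quotient $\Z[G]\to
\Z[G/G']$. Since $b$ is odd, $H_{3}C^{V}=\Z$ is generated by $(1-\gamma
)x_{4}=1-\gamma +\gamma^{2}-\gamma^{3}$, which spans the kernel of
multiplication by $1+\gamma $ on $\Z[G]$.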
This complex is
\begin{displaymath}
C^{V} = \Sigma^{|V_{0}|} C^{V/V_{0}}
\end{displaymath}
\noindent where $V_{0}=V^{G}$. This means we can assume without loss
of generality that $V_{0}=0$.
An element
\begin{displaymath}
x\in H_{n} \uC^{V} (G/H)
= \uH_{n} S^{V} (G/H)
\end{displaymath}
\noindent
corresponds to an element $x\in \upi_{n-V} \HZ (G/H) $.
We will denote the dual complex $\Hom_{\ints } (C^{V},\ints )$ by
$C^{-V}$. Its chains lie in dimensions $-n$ for $0\leq n\leq |V|$.
An element $x\in \uH_{-n} (S^{-V}) (G/H)$ corresponds to an
element $x\in \upi_{V-n} \HZ (G/H)$.
The method we have just described determines only a portion of the\linebreak
$RO(G)$-graded Mackey functor $\upi_{(G, \star)}\HZZ$, namely
the groups in which the index differs by an integer from an actual
{\rep} $V$ or its negative. For example it does not give us
$\upi_{\sigma -\lambda }\HZZ$ for $|G|\geq 4$.
We leave the proof of the following as an exercise for the reader.
\begin{prop}\label{prop-top}
{\bf The top (bottom) homology groups for $S^{V}$ ($S^{-V}$).} Let $G$
be a finite cyclic 2-group and $V$ a nontrivial {\rep} of $G$ of
degree $d$ with $V^{G}=0$ and isotropy group $G_{V}$. Then
$C_{d}^{V}=C_{-d}^{-V}=\Z[G/G_{V}]$ and
\begin{enumerate}
\item [(i)] If $V$ is oriented then
$\uH_{d}S^{V}=\underline{\Z}$, the constant $\Z$-valued
Mackey functor in which each restriction map is an isomorphism and
each transfer $\Tr_{H}^{K}$ is multiplication by $|K/H|$.
\item [(ii)]
$\uH_{-d}S^{-V}=\underline{\Z} (G,G_{V})$, the
$\Z$-valued Mackey functor in which
\begin{displaymath}
\Res_{H}^{K}=\mycases{
1 &\mbox{for }K\subseteq G_{V}\\
{|K/H|} &\mbox{for }G_{V}\subseteq H
}
\end{displaymath}
\noindent and
\begin{displaymath}
\Tr_{H}^{K}=\mycases{
{|K/H| } &\mbox{for }K\subseteq G_{V}\\
1 &\mbox{for }G_{V}\subseteq H.
}
\end{displaymath}
\noindent (The above completely describes the cases where
$|K/H|=2$, and they determine all other restrictions and transfers.) The
functor $\underline{\Z}(G,e)$ is also known as the dual
$\underline{\Z}^{*}$. These isomorphisms are induced by the maps
\begin{displaymath}
\xymatrix
@R=0mm
@C=15mm
{
{\uH_{d}S^{V}}\ar@{=}[dd]^(.5){}
& &{\uH_{-d}S^{-V}}\ar@{=}[dd]^(.5){}\\
& &\\
{\underline{\Z}}\ar[r]^(.4){\Delta}
&{\underline{\Z[G/G_{V}]}}\ar[r]^(.5){\nabla}
&{\underline{\Z} (G,G_{V}).}
}
\end{displaymath}
\item [(iii)] If $V$ is not oriented then
$\uH_{d}S^{V}=\underline{\Z}_{-}$, where
\begin{displaymath}
\underline{\Z}_{-} (G/H)=\mycases{
0 &\mbox{for }H=G\\
\Z_{-}:=\Z[G]/ (1+\gamma )
&\mbox{otherwise}
}
\end{displaymath}
\noindent where each restriction map $\Res_{H}^{K}$ is an isomorphism
and each transfer $\Tr_{H}^{K}$ is multiplication by $|K/H|$ for each
proper subgroup $K$.
\item [(iv)] We also have
$\uH_{-d}S^{-V}=\underline{\Z} (G,G_{V})_{-}$, where
\begin{displaymath}
\underline{\Z} (G,G_{V})_{-} (G/H)=\mycases{
0 &\mbox{for $H=G$ and $V=\sigma $}\\
\Z/2 &\mbox{for $H=G$ and $V\neq \sigma $}\\
\Z_{-} &\mbox{otherwise}
}
\end{displaymath}
\noindent with the same restrictions and transfers as $\underline{\Z}
(G,G_{V})$. These isomorphisms are induced by the maps
\begin{displaymath}
\xymatrix
@R=0mm
@C=15mm
{
{\uH_{d}S^{V}}\ar@{=}[dd]^(.5){}
& &{\uH_{-d}S^{-V}}\ar@{=}[dd]^(.5){}\\
& &\\
{\underline{\Z}_{-}}\ar[r]^(.4){\Delta_{-} }
&{\underline{\Z[G/G_{V}]}}\ar[r]^(.5){\nabla_{-}}
&{\underline{\Z} (G,G_{V})_{-}.}
}
\end{displaymath}
\end{enumerate}
\end{prop}
The Mackey functor $\underline{\Z} (G,G_{V})$ is one of those defined
(with different notation) in \cite[Def. 2.1]{HHR:RO(G)}.
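For example, take $G=C_{4}$ and $V=\lambda $, an oriented {\rep} with
$G_{V}=\ee$. Then $\uH_{2}S^{\lambda }=\underline{\Z}$, while
$\uH_{-2}S^{-\lambda }=\underline{\Z} (G,\ee )=\underline{\Z}^{*}$ has
value $\Z$ at each of $G/G$, $G/C_{2}$ and $G/\ee$, with each
restriction map given by multiplication by 2 and each transfer by the
identity.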
\begin{defin}\label{def-aeu}
{\bf Three elements in $\upi_{\star}^{G}(\HZ)$.} Let $V$
be an actual (as opposed to virtual) {\rep} of the finite cyclic
2-group $G$ with $V^{G}=0$ and isotropy group $G_{V}$.
\begin{enumerate}
\item [(i)] The {\eqvr} inclusion ${S^{0}\to S^{V}}$ defines an
element in $\upi_{-V}S^{0} (G/G)$ via the isomorphisms
\begin{displaymath}
\upi_{-V}S^{0} (G/G)=
\upi_{0}S^{V} (G/G)= \pi_{0}S^{V^{G}}=\pi_{0}S^{0}=\Z,
\end{displaymath}
\noindent and we will use the symbol $a_{V}$ to denote its image in
$\upi_{-V}\HZZ (G/G)$.
\item [(ii)] The underlying equivalence $S^{V}\to S^{|V|}$ defines an
element in
\begin{displaymath}
\upi_{V}S^{|V|} (G/G_{V}) = \upi_{V-|V|}S^{0} (G/G_{V})
\end{displaymath}
\noindent and we will use the symbol $e_{V}$ to denote its image in
$\upi_{V-|V|}\HZZ (G/G_{V})$.
\item [(iii)] If $W$ is an oriented {\rep} of $G$ (we do not require
that $W^{G}=0$), there is a map
\begin{displaymath}
\Delta :\ints \to C^{W}_{|W|}= \Z[G/G_{W}]
\end{displaymath}
\noindent as in Proposition \ref{prop-top} giving an element
\begin{displaymath}
u_{W}\in \uH_{|W|}S^{W} (G/G) = \upi_{|W|-W} \HZ(G/G).
\end{displaymath}
For nonoriented $W$, Proposition \ref{prop-top} gives a map
\begin{displaymath}
\Delta_{-} :\ints_{-} \to C^{W}_{|W|}
\end{displaymath}
\noindent and an element
\begin{displaymath}
u_{W}\in \uH_{|W|}S^{W} (G/G') = \upi_{|W|-W} \HZ(G/G').
\end{displaymath}
\noindent
\end{enumerate}
\end{defin}
The element $u_{W}$ above is related to the element $\tilde{u}_{V}$ of
(\ref{eq-u{V-V'}}) as follows.
\begin{lem}\label{lem-uV}
{\bf The restriction of $u_{W}$ to a unit and permanent cycle.} Let $W$ be a
nontrivial {\rep} of $G$ with $H=G_{W}$. Then the homeomorphism
\begin{displaymath}
\Sigma^{-W}\tilde{u}_{W}:G/H_{+}\wedge S^{|W|-W}\to G/H_{+}
\end{displaymath}
\noindent of
(\ref{eq-u{V-V'}}) induces an isomorphism $\upi_{0}\HZZ (G/H)\to
\upi_{|W|-W}\HZZ (G/H)$ sending the unit to $\Res_{H}^{K}(u_{W})$
for $u_{W}$ as defined in (iii) above and $K=G$ or $G'$ depending on
the orientability of $W$.
The product
\begin{displaymath}
\Res_{H}^{K}(u_{W})e_{W}\in \upi_{0}\HZZ (G/H)=\Z
\end{displaymath}
\noindent is a generator, so $e_{W}$ and $\Res_{H}^{K}(u_{W})$ are
units in the ring $\upi_{\star}\HZZ (G/H)$, and $\Res_{H}^{K}(u_{W})$
is in the Hurewicz image of $\upi_{\star}S^{0} (G/H)$.
\end{lem}
\proof
The diagram
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{G/K_{+}\wedge S^{|W|-W}
&G/H_{+}\wedge S^{|W|-W}\ar[r]^(.6){\tilde{u}_{W}}
\ar[l]_(.5){\mbox{fold} }
&G/H_{+}
}
\end{displaymath}
\noindent induces (via the functor $[\cdot , \HZZ]^{G}$)
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{{\upi_{|W|-W}\HZZ (G/K)}\ar[r]^(.5){\Res_{H}^{K}}\ar@{=}[d]^(.5){}
&{\upi_{|W|-W}\HZZ (G/H)}\ar@{=}[d]^(.5){}
&{\upi_{0}\HZZ (G/H)}\ar[l]_(.4){\cong }\ar@{=}[d]^(.5){} \\
{\underline{H}_{|W|}S^{W} (G/K)}
&{\underline{H}_{|W|}S^{W} (G/H)}
&\Z
}
\end{displaymath}
\noindent The restriction map is an isomorphism by Proposition
\ref{prop-top} and the group on the left is generated by $u_{W}$.
The product is the composite of $H$-maps
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
{S^{W}}\ar[r]^(.5){e_{W}}
&{S^{|W|}}\ar[r]^(.5){\Res_{H}^{K}(u_{W})}
&{\Sigma^{W}\HZZ,}
}
\end{displaymath}
\noindent which is the standard inclusion.
\qed
Note that $a_{V}$ and $e_{V}$ are induced by maps to {\eqvr} spheres
while $u_{W}$ is not. This means that in any {\SS} based on a
filtration where the subquotients are {\eqvr} $\HZ $-modules, elements
defined in terms of $a_{V}$ and $e_{V}$ will be permanent cycles,
while multiples and powers of $u_{W}$ can support nontrivial
differentials. Lemma \ref{lem-uV} says a certain restriction of
$u_{W}$ is a permanent cycle.
Each nonoriented $V$ has the form $W+\sigma $ where $\sigma $ is the
sign {\rep} and $W$ is oriented. It follows that
\begin{displaymath}
u_{V}=u_{\sigma}\Res_{G'}^{G}(u_{W})\in \upi_{|V|-V}H\Z (G/G').
\end{displaymath}
Note also that $a_{0}=e_{0}=u_{0}=1$. The trivial
representations contribute nothing to $\upi_{\star}(\HZ)$, so we can
limit our attention to {\rep}s $V$ with $V^{G}=0$. Among such {\rep}s
of cyclic 2-groups, the oriented ones are precisely the
ones of even degree.
\begin{lem}\label{lem-aeu}
{\bf Properties of $a_{V}$, $e_{V}$ and $u_{W}$.} The elements
$a_{V}\in \upi_{-V}\HZ(G/G)$, $e_{V}\in
\upi_{V-|V|}\HZ(G/G_{V})$ and $u_{W}\in \upi_{|W|-W}\HZ(G/G)$ for
$W$ oriented of Definition \ref{def-aeu} satisfy the following.
\begin{enumerate}
\item \label{aeu:i}
$a_{V+W}=a_{V}a_{W}$ and $u_{V+W}=u_{V}u_{W}$.
\item \label{aeu:ii}
$|G/G_{V}|a_{V}=0$ where $G_{V}$ is the isotropy group of $V$.
\item \label{aeu:iii} For oriented $V$, $\Tr_{G_{V}}^{G}(e_{V})$
and $\Tr_{G_{V}}^{G'}(e_{V+\sigma})$ have infinite order, while\linebreak
$\Tr_{G_{V}}^{G}(e_{V+\sigma})$ has order 2 if
$|V|>0$ and $\Tr_{G_{V}}^{G}(e_{\sigma})=\Tr_{G'}^{G}(e_{\sigma})=0$.
\item \label{aeu:iv} For oriented $V$ and $G_{V}\subseteq H\subseteq G$
\begin{align*}
\Tr_{G_{V}}^{G}(e_{V})u_{V}
& = |G/G_{V}|\in \upi_{0}\HZZ (G/G)=\Z \\
\aand
\Tr_{G_{V}}^{G'}(e_{V+\sigma })u_{V+\sigma }
& = |G'/G_{V}|\in \upi_{0}\HZZ (G/G')=\Z
\qquad \mbox{for }|V|>0.
\end{align*}
\item \label{aeu:v}
$a_{V+W}\Tr_{G_{V}}^{G}(e_{V+U})=0$ if $|V|>0$.
\item \label{aeu:vi}
For $V$ and $W$ oriented, $u_{W}\Tr_{G_{V}}^{G}(e_{V+W})
=|G_{V}/G_{V+W}|\Tr_{G_{V}}^{G}(e_{V})$.
\item \label{aeu:vii}
{\bf The gold (or $au$) relation. } For $V$ and $W$ oriented {\rep}s
of degree 2 with $G_{V}\subseteq G_{W}$, ${a_{W}u_{V} =
|G_{W}/G_{V}|a_{V}u_{W}}$.
\end{enumerate}
For nonoriented $W$, similar statements hold in $\upi_{\star}\HZ
(G/G')$. The {\rep} $2W$ is oriented, and $u_{2W}$ is defined in
$\upi_{2|W|-2W}\HZZ (G/G)$ with ${\Res_{G'}^{G}(u_{2W})=u_{W}^{2}}$.
\end{lem}
\proof (\ref{aeu:i})
This follows from the existence of the pairing
$C^{V}\otimes C^{W}\to C^{V+W}$. It induces an isomorphism in $H_{0}$
and (when both $V$ and $W$ are oriented) in $H_{|V+W|}$.
(\ref{aeu:ii}) This holds because $H_{0} (V)$ is killed by $|G/G_{V}|$.
(\ref{aeu:iii}) This follows from Proposition \ref{prop-top}.
(\ref{aeu:iv})
Using the Frobenius relation we have
\begin{align*}
\Tr_{G_{V}}^{G}(e_{V})u_{V}
& = \Tr_{G_{V}}^{G}(e_{V}\Res_{G_{V}}^{G}(u_{V}))
= \Tr_{G_{V}}^{G}(1) \qquad \mbox{by Lemma \ref{lem-uV} } \\
& = |G/G_{V}| \\
\Tr_{G_{V}}^{G'}(e_{V+\sigma })u_{V+\sigma }
& = \Tr_{G_{V}}^{G'}(e_{V+\sigma }\Res_{G_{V}}^{G'}(u_{V+\sigma }))
= \Tr_{G_{V}}^{G'}(1) = |G'/G_{V}|.
\end{align*}
(\ref{aeu:v}) We have
\begin{displaymath}
a_{V+W}\Tr_{G_{V}}^{G}(e_{V+U}):S^{-|V|-|U|} \to S^{W-U}.
\end{displaymath}
\noindent It is null because the bottom cell of ${S^{W-U}}$ is in
dimension ${-|U|}$.
(\ref{aeu:vi}) Since $V$ is oriented, we are computing in a
torsion-free group, so we can tensor with the rationals. It follows
from (\ref{aeu:iv}) that
\begin{align*}
\Tr_{G_{V+W}}^{G}(e_{V+W})
& = \frac{|G/G_{V+W}|}{u_{V}u_{W}}\\
\aand
\Tr_{G_{V}}^{G}(e_{V})
& = \frac{|G/G_{V}|}{u_{V}}\\
\sso
u_{W}\Tr_{G_{V+W}}^{G}(e_{V+W} )
& = \frac{|G/G_{V+W}|}{u_{V}}
= |G_{V}/G_{V+W}|\Tr_{G_{V}}^{G}(e_{V}).
\end{align*}
(\ref{aeu:vii})
For $G=C_{2^{n}}$, each oriented {\rep} of degree 2 is $2$-locally
equivalent to a $\lambda_{j}$ for $0\leq j<n$. The isotropy group is
$G_{\lambda_{j}} = C_{2^{j}}$. Hence the assumption that
$G_{V}\subset G_{W}$ can be replaced with $V=\lambda_{j}$ and
$W=\lambda_{k}$ with $0\leq j<k<n$. The statement we wish to prove is
\begin{displaymath}
a_{\lambda_{k}}u_{\lambda_{j}}=2^{k-j}a_{\lambda_{j}}u_{\lambda_{k}}.
\end{displaymath}
One has a map $S^{\lambda_{j}}\to S^{\lambda_{k}}$ which is the
suspension of the $2^{k-j}$th power map on the equatorial circle.
Hence its underlying degree is $2^{k-j}$. We will denote it by
$a_{\lambda_{k}}/a_{\lambda_{j}}$ since there is a diagram
\begin{displaymath}
\xymatrix
@R=3mm
@C=20mm
{
&S^{\lambda_{j}}\ar[dd]^(.5){a_{\lambda_{k}}/a_{\lambda_{j}}}\\
S^{0}\ar[ru]^(.5){a_{\lambda_{j}}}\ar[rd]_(.5){a_{\lambda_{k}}}\\
&S^{\lambda_{k}}.
}
\end{displaymath}
We claim there is a similar diagram
\begin{numequation}\label{eq-au-claim}
\begin{split}
\xymatrix
@R=3mm
@C=20mm
{
&S^{\lambda_{k}}\wedge \HZZ\ar[dd]^(.5){u_{\lambda_{j}}/u_{\lambda_{k}}}\\
S^{2}\ar[ru]^(.5){u_{\lambda_{k}}}\ar[rd]_(.5){u_{\lambda_{j}}}\\
&S^{\lambda_{j}}\wedge \HZZ.
}
\end{split}
\end{numequation}
\noindent in which the underlying degree of the vertical map is one.
Smashing $a_{\lambda_{k}}/a_{\lambda_{j}}$ with $\HZZ$ and composing
with $u_{\lambda_{j}}/u_{\lambda_{k}}$ gives a factorization of the
degree $2^{k-j}$ map on $S^{\lambda_{j}}\wedge \HZZ$. Thus we have
\begin{align*}
\frac{u_{\lambda_{j}}}{u_{\lambda_{k}}}
\frac{a_{\lambda_{k}}}{a_{\lambda_{j}}}
& = 2^{k-j} \\
u_{\lambda_{j}}a_{\lambda_{k}}
& = 2^{k-j} u_{\lambda_{k}}a_{\lambda_{j}}
\end{align*}
\noindent as desired.
The vertical map in (\ref{eq-au-claim}) would follow from a map
\begin{displaymath}
S^{\lambda_{k}-\lambda_{j}}\to \HZZ
\end{displaymath}
\noindent with underlying degree one. Let $G=C_{2^{n}}$ and $G \supset
H=C_{2^{j}}$. Then $S^{-\lambda_{j}}$ has a cellular structure of the
form
\begin{displaymath}
G/H_{+}\wedge S^{-2} \cup G/H_{+} \wedge e^{-1} \cup e^{0}.
\end{displaymath}
\noindent We need to smash this with $S^{\lambda_{k}}$. Since
$\lambda_{k}$ restricts trivially to $H$,
\begin{displaymath}
G/H_{+}\wedge
S^{\lambda_{k}}=G/H_{+}\wedge S^{2}.
\end{displaymath}
\noindent This means
\begin{displaymath}
S^{\lambda_{k}-\lambda_{j}}
= S^{\lambda _{k}}\wedge S^{-\lambda_{j}}
= G/H_{+}\wedge S^{0} \cup
G/H_{+} \wedge e^{1} \cup e^{0}\wedge S^{\lambda_{k}}.
\end{displaymath}
\noindent Thus its cellular chain complex has the form
\begin{displaymath}
\xymatrix
@R=5mm
@C=15mm
{
2 &\Z[G/K] \ar[d]^(.5){1-\gamma }\ar[drr]^(.5){\Delta }\\
1 &\Z[G/K] \ar[d]^(.5){\nabla }\ar[drr]^(.5){-\Delta }
& &\Z[G/H]\ar[d]^(.5){1-\gamma }\\
0 &\Z & &\Z[G/H]
}
\end{displaymath}
\noindent where $K=C_{2^{k}}$ and the left column is the chain
complex for $S^{\lambda_{k}}$.
There is a corresponding chain complex of fixed point Mackey functors.
Its value on the $G$-set $G/L$ for an arbitrary subgroup $L$ is
\begin{displaymath}
\xymatrix
@R=5mm
@C=15mm
{
2 &\Z[G/\max(K,L)] \ar[d]^(.5){1-\gamma }\ar[drr]^(.5){\Delta }\\
1 &\Z[G/\max(K,L)] \ar[d]^(.5){\nabla }\ar[drr]^(.5){-\Delta }
& &\Z[G/\max(H,L)]\ar[d]^(.5){1-\gamma }\\
0 &\Z & &\Z[G/\max(H,L)]
}
\end{displaymath}
\noindent For each $L$ the map $\Delta $ is injective and maps the
kernel of the first $1-\gamma $ isomorphically to the kernel of the
second one. This means we can replace the above by a diagram of the
form
\begin{displaymath}
\xymatrix
@R=5mm
@C=15mm
{
1 &\coker (1-\gamma ) \ar[d]^(.5){\nabla }\ar[drr]^(.5){-\Delta }\\
0 &\Z & &\coker (1-\gamma )
}
\end{displaymath}
\noindent where each cokernel is isomorphic to $\Z$ and each
map is injective.
This means that $\underline{H}_{*}S^{\lambda_{k}-\lambda_{j}}$ is
concentrated in degree 0 where it is the pushout of the diagram above,
meaning a Mackey functor whose value on each subgroup is $\Z$. Any
such Mackey functor admits a map to $\underline{\Z}$ with underlying
degree one. This proves the claim of (\ref{eq-au-claim}). \qed
The $\Z$-valued Mackey functor
$\underline{H}_{0}S^{\lambda_{k}-\lambda_{j}}$ is discussed in more
detail in \cite{HHR:RO(G)}, where it is denoted by $\underline{\Z} (k,j)$.
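For example, for $G=C_{4}$ (so $n=2$) the only case is $j=0$ and
$k=1$, where $\lambda_{1}=2\sigma $ and
$a_{\lambda_{1}}=a_{\sigma }^{2}$ by Lemma
\ref{lem-aeu}(\ref{aeu:i}). The gold relation then reads
\begin{displaymath}
a_{\sigma }^{2}u_{\lambda }=2a_{\lambda }u_{2\sigma }
\in \upi_{2-\lambda -2\sigma }\HZZ (G/G).
\end{displaymath}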
\section{Generalities on differentials and Mackey functor extensions}
\label{sec-gendiffs}
Before proceeding with a discussion of spectral sequences, we need
the following.
\begin{rem}\label{rem-abuse}
{\bf Abusive {\SS } notation}. When $d_{r} (x)$ is a nontrivial
element of order 2, the elements $2x$ and $x^{2}$ both survive to
$\underline{E}_{r+1}$, but in that group they are not the products
indicated by these symbols since $x$ itself is no longer present.
More generally if $d_{r} (x)=y$ and $\alpha y=0$ for some $\alpha $,
then $\alpha x$ is present in $\underline{E}_{r+1}$. This abuse of
notation is customary because it would be cumbersome to rename these
elements when passing from $\underline{E}_{r}$ to
$\underline{E}_{r+1}$. We will sometimes denote them by $[2x]$,
$[x^{2}]$ and $[\alpha x]$ respectively to emphasize their
indecomposability.
\end{rem}
Now we make some observations relating exotic
transfers and restrictions to certain differentials in the slice
{\SS}. By ``exotic'' we mean occurring in a higher filtration. In a {\SS} of
Mackey functors converging to $\upi_{\star}X$, it can happen that an
element $x\in \upi_{V}X (G/H)$ has filtration $s$, but its restriction
or transfer has a higher filtration. {\em In the {\SS} charts in this
paper, exotic transfers and restrictions will be indicated by
{\color{blue}blue} and {\color{green} dashed green} lines respectively.}
\begin{lem}\label{lem-hate}
{\bf Restriction kills $a_{\sigma}$ and $a_{\sigma }$ kills
transfers.} Let $G$ be a finite cyclic 2-group with sign {\rep}
$\sigma $ and index 2 subgroup $G'$, and let $X$ be a $G$-spectrum.
Then in $\upi_{*}X (G/G)$ the image of $\Tr_{G'}^{G}$ is the kernel of
multiplication by $a_{\sigma}$, and the kernel of $\Res_{G'}^{G}$ is
the image of multiplication by $a_{\sigma}$.
Suppose further that 4 divides the order of $G$ and let $\lambda $ be
the degree 2 representation sending a generator $\gamma \in G$ to a
rotation of order 4. Then restriction kills $2a_{\lambda }$ and
$2a_{\lambda }$ kills transfers.
\end{lem}
\proof
Consider the cofiber sequence obtained by smashing $X$ with
\begin{numequation}\label{eq-hate}
\begin{split}
\xymatrix
@R=5mm
@C=10mm
{
S^{-1}\ar[r]^(.5){a_{\sigma}}
&S^{\sigma -1}\ar[r]^(.5){}
&G_{+}\smashove{G'}S^{0}\ar[r]^(.5){}
&S^{0}\ar[r]^(.5){a_{\sigma}}
&S^{\sigma}
}
\end{split}
\end{numequation}
\noindent Since $(G_{+}\smashove{G'}X)^{G}$ is equivalent to $X^{G'}$,
passage to fixed point spectra gives
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
\Sigma^{-1}X^{G}\ar[r]^(.5){}
&\left(\Sigma^{\sigma -1}X \right)^{G}\ar[r]^(.5){}
&X^{G'}\ar[r]^(.5){}
&X^{G}\ar[r]^(.5){}
&\left(\Sigma^{\sigma}X \right)^{G}
}
\end{displaymath}
\noindent so the exact sequence of homotopy groups is
\begin{center}
\includegraphics[width=12cm]{hate-diagram.pdf}
\end{center}
\noindent Note that the map labeled $u_{\sigma}$ is an
isomorphism. This gives the exactness required by both statements.
For the statements about $a_{\lambda }$, note that $\lambda $ restricts
to $2\sigma_{G'}$, where $\sigma_{G'}$ is the sign {\rep} for the
index 2 subgroup $G'$. It follows that $\Res_{G'}^{G}(a_{\lambda
})=a_{\sigma_{G'}}^{2}$, which has order 2. Using the Frobenius
relation, we have for $x\in \upi_{*}X (G/G')$,
\begin{displaymath}
2a_{\lambda }\Tr_{G'}^{G}(x)
= \Tr_{G'}^{G}(\Res_{G'}^{G}(2a_{\lambda })x)
= \Tr_{G'}^{G}(2a_{\sigma_{G'} }^{2}x)
= 0.
\end{displaymath}
\qed
This implies that when $a_{\sigma}x$ is killed by a differential but
$x\in \EE_{r} (G/G)$ is not, then $x$ represents an element
that is $\Tr_{G'}^{G}(y)$ for some $y$ in lower filtration. Similarly
if $x$ supports a nontrivial differential but $a_{\sigma}x$ is a
nontrivial permanent cycle, then the latter represents an element with
a nontrivial restriction to $G'$ of higher filtration. In both cases
the converse also holds.
\begin{thm}\label{thm-exotic}
{\bf Exotic transfers and restrictions in the $RO (G)$-graded slice
{\SS}.} Let $G$ be a finite cyclic 2-group with index 2 subgroup $G'$
and sign {\rep} $\sigma $, and let $X$ be a $G$-{\eqvr} spectrum
with $x\in \EE_{r}^{s,V}X (G/G)$ (for $V\in RO (G)$) in the
slice {\SS} for $X$. Then
\begin{enumerate}
\item [(i)] Suppose there is a permanent cycle $y'\in
\EE_{r}^{s+r,V+r-1}X (G/G')$. Then there is a nontrivial
differential $d_{r} (x)=\Tr_{G'}^{G}(y')$ iff $[a_{\sigma}x]$ is a
permanent cycle with $\Res_{G'}^{G}(a_{\sigma}x)=u_{\sigma}y'$. In
this case $[a_{\sigma }x]$ represents the Toda bracket $\langle
a_{\sigma },\,\Tr_{G'}^{G} ,\,y' \rangle$.
\item [(ii)] Suppose there is a permanent cycle $y\in
\EE_{r}^{s+r-1 ,V+r+\sigma -2}X (G/G)$. Then there is a
nontrivial differential $d_{r}(x)=a_{\sigma}y$ iff $\Res_{G'}^{G}(x)$
is a permanent cycle with
$\Tr_{G'}^{G}(u_{\sigma}^{-1}\Res_{G'}^{G}(x))=y$. In this case
$\Res_{G'}^{G}(x)$ represents the Toda bracket $\langle
\Res_{G'}^{G},\,a_{\sigma } ,\,y \rangle$.
\end{enumerate}
\end{thm}
In each case a nontrivial $d_{r}$ is equivalent to a Mackey functor
extension raising filtration by $r-1$. In (i) the permanent cycle
$a_{\sigma}x$ is not divisible in $\upi_{\star}X$ by $a_{\sigma}$
and therefore could have a nontrivial restriction in a higher
filtration. Similarly in (ii) the element denoted by
$\Res_{G'}^{G}(x)$ is not a restriction in $\upi_{\star}X$, so we
cannot use the Frobenius relation to equate
$\Tr_{G'}^{G}(u_{\sigma}^{-1}\Res_{G'}^{G}(x))$ with
$\Tr_{G'}^{G}(u_{\sigma}^{-1})x$.
We remark that the proof below makes no use of any properties
specific to the slice filtration. The result holds for any {\eqvr}
filtration with suitable formal properties.
Before giving the proof we need the following.
\begin{lem}\label{lem-formal}
{\bf A formal observation}. Suppose we have a
commutative diagram up to sign
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
A_{0,0}\ar[r]^(.5){a_{0,0}}\ar[d]^(.5){b_{0,0}}
&A_{0,1}\ar[r]^(.5){a_{0,1}}\ar[d]^(.5){b_{0,1}}
&A_{0,2}\ar[r]^(.5){a_{0,2}}\ar[d]^(.5){b_{0,2}}
&\Sigma A_{0,0}\ar[d]^(.5){b_{0,0}}\\
A_{1,0}\ar[r]^(.5){a_{1,0}}\ar[d]^(.5){b_{1,0}}
&A_{1,1}\ar[r]^(.5){a_{1,1}}\ar[d]^(.5){b_{1,1}}
&A_{1,2}\ar[r]^(.5){a_{1,2}}\ar[d]^(.5){b_{1,2}}
&\Sigma A_{1,0}\ar[d]^(.5){b_{1,0}}\\
A_{2,0}\ar[r]^(.5){a_{2,0}}\ar[d]^(.5){b_{2,0}}
&A_{2,1}\ar[r]^(.5){a_{2,1}}\ar[d]^(.5){b_{2,1}}
&A_{2,2}\ar[r]^(.5){a_{2,2}}\ar[d]^(.5){b_{2,2}}
&\Sigma A_{2,0}\ar[d]^(.5){b_{2,0}}\\
\Sigma A_{0,0}\ar[r]^(.5){a_{0,0}}
&\Sigma A_{0,1}\ar[r]^(.5){a_{0,1}}
&\Sigma A_{0,2}\ar[r]^(.5){a_{0,2}}
&\Sigma^{2} A_{0,0}
}
\end{displaymath}
\noindent in which each row and column is a cofiber sequence. Then
suppose that from some spectrum $W$ we have a map $f_{3}$ and
hypothetical maps $f_{1}$ and $f_{2}$ making the following diagram
commute up to sign, where $c_{i,j}=b_{i,j+1}a_{i,j}=a_{i+1,j}b_{i,j}$.
\begin{numequation}\label{eq-formal}
\begin{split}
\xymatrix
@R=5mm
@C=10mm
{
W\ar[rrr]^(.5){f_{3}}\ar@{-->}[ddr]^(.5){f_{1}}
\ar@{-->}[drr]^(.5){f_{2}}\ar[ddd]^(.5){f_{3}}
& & &\Sigma A_{0,0}\ar[d]^(.5){b_{0,0}}\ar[dr]^(.5){c_{0,0}}\\
& &A_{1,2}\ar[r]^(.5){a_{1,2}}\ar[d]^(.5){b_{1,2}}\ar[dr]^(.5){c_{1,2}}
&\Sigma A_{1,0}\ar[d]^(.5){b_{1,0}}\ar[r]^(.5){a_{1,0}}
&\Sigma A_{1,1}\ar[d]^(.5){b_{1,1}}\\
&A_{2,1}\ar[r]^(.5){a_{2,1}}\ar[d]^(.5){b_{2,1}}\ar[dr]^(.5){c_{2,1}}
&A_{2,2}\ar[r]^(.5){a_{2,2}}\ar[d]^(.5){b_{2,2}}
&\Sigma A_{2,0}\ar[r]^(.5){a_{2,0}}
&\Sigma A_{2,1}\\
\Sigma A_{0,0}\ar[r]^(.5){a_{0,0}}\ar[rd]_(.5){c_{0,0}}
&\Sigma A_{0,1}\ar[r]^(.5){a_{0,1}}\ar[d]^(.5){b_{0,1}}
&\Sigma A_{0,2}\ar[d]^(.5){b_{0,2}}
&\\
&\Sigma A_{1,1}\ar[r]^(.5){a_{1,1}}
&\Sigma A_{1,2}
}
\end{split}
\end{numequation}
\noindent Then $f_{1}$ exists iff $f_{2}$ does. When this happens,
$c_{0,0}f_{3}$ is null and we have Toda brackets
\begin{displaymath}
\langle a_{1,1},\, c_{0,0} ,\,f_{3} \rangle \ni f_{2}
\qquad \aand
\langle b_{1,1},\, c_{0,0} ,\,f_{3} \rangle \ni f_{1}.
\end{displaymath}
\end{lem}
\proof Let $R$ be the pullback of $a_{2,1}$ and $b_{1,2}$, so we have a diagram
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
{}
&{A_{0,2}}\ar@{=}[r]^(.5){}\ar[d]^(.5){}
&{A_{0,2}}\ar[d]^(.5){b_{0,2}}
&{}\\
{A_{2,0}}\ar@{=}[d]^(.5){}\ar[r]^(.5){}
&{R}\ar[d]^(.5){}\ar[r]^(.5){}
&{A_{1,2}}\ar[d]^(.5){b_{1,2}}\ar[r]^(.5){c_{1,2}}
&{\Sigma A_{2,0}}\ar@{=}[d]^(.5){}\\
{A_{2,0}}\ar[r]^(.5){a_{2,0}}
&{A_{2,1}}\ar[d]^(.5){c_{2,1}}\ar[r]^(.5){a_{2,1}}
&{A_{2,2}}\ar[d]^(.5){b_{2,2}}\ar[r]^(.5){a_{2,2}}
&{\Sigma A_{2,0}}\\
{}
&{\Sigma A_{0,2}}\ar@{=}[r]^(.5){}
&{\Sigma A_{0,2}}
&{}
}
\end{displaymath}
\noindent in which each row and column is a cofiber sequence. Thus we
see that $R$ is the fiber of both $c_{1,2}$ and $c_{2,1}$. If $f_{1}$
exists, then
\begin{displaymath}
c_{2,1}f_{1}=a_{0,1}b_{2,1}f_{1}=a_{0,1}a_{0,0}f_{3}
\end{displaymath}
\noindent which is null homotopic, so $f_{1}$ lifts to $R$, which
comes equipped with a map to $A_{1,2}$, giving us $f_{2}$. Conversely
if $f_{2}$ exists, it lifts to $R$, which comes equipped with a map to
$A_{2,1}$, giving us $f_{1}$.
The statement about Toda brackets follows from the way they are defined.
\qed
\noindent {\em Proof of Theorem \ref{thm-exotic}}.
For a $G$-spectrum $X$ and integers $a<b<c\leq
\infty $ there is a cofiber sequence
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
P_{b+1}^{c}X\ar[r]^(.5){i}
&P_{a}^{c}X\ar[r]^(.5){j}
&P_{a}^{b}X\ar[r]^(.5){k}
&\Sigma P_{b+1}^{c}X.
}
\end{displaymath}
\noindent When $c=\infty $, we omit it from the notation. We will
combine this with the cofiber sequence of (\ref{eq-hate}) to get a diagram similar
to (\ref{eq-formal}) with $W=S^{V}$ to prove our two statements.
For (i) note that $x\in \EE_{1}^{s,t}X (G/G)$ is by definition an
element in $\upi_{t-s}P^{s}_{s}X (G/G)$. We will assume for
simplicity that $s=0$, so $x$ is represented by a map from some
$S^{V}$ to $(P_{0}^{0}X)^{G}$. Its survival to $\EE_{r}$ and
supporting a nontrivial differential means that it lifts to
$(P_{0}^{r-2}X)^{G}$ but not to $(P_{0}^{r-1}X)^{G}$. The value of
$d_{r} (x)$ is represented by the composite $kx$ in the diagram below,
where we can use Lemma \ref{lem-formal}.
\begin{displaymath}
\xymatrix
@R=4mm
@C=12mm
{
S^{V-1}\ar[ddd]^(.5){y'}\ar@{-->}[ddr]^(.5){x}
\ar@{-->}[drr]^(.5){w}\ar[rrr]^(.5){y'}
& & &(P_{r-1}X)^{G'} \ar[d]^(.5){i}\\
& &(\Sigma^{\sigma -1}P_{0}X)^{G}\ar[d]^(.5){j}
\ar[r]^(.55){u_{\sigma}^{-1}\Res_{G'}^{G}}
&(P_{0}X )^{G'}\ar[d]^(.5){j}\\
&(\Sigma^{-1}P_{0}^{r-2}X)^{G}\ar[d]^(.5){k}\ar[r]^(.5){a_{\sigma}}
&(\Sigma^{\sigma-1 }P_{0}^{r-2}X)^{G}\ar[d]^(.5){k}
\ar[r]^(.55){u_{\sigma}^{-1}\Res_{G'}^{G}}
&(P_{0}^{r-2}X)^{G'} \\
(P_{r-1}X)^{G'}\ar[r]^(.5){\Tr_{G'}^{G}}
&(P_{r-1}X)^{G}\ar[r]^(.5){a_{\sigma}}
&(\Sigma^{\sigma} P_{r-1}X)^{G}
}
\end{displaymath}
\noindent The commutativity of the lower left trapezoid is the
differential of (i),\linebreak ${d_{r} (x)=\Tr_{G'}^{G}(y')}$. The
existence of the map $w$ making the diagram commute follows from that
of $x$ and $y'$. It is the representative of $a_{\sigma}x$ as a
permanent cycle, which represents the indicated Toda bracket. The
commutativity of the upper right trapezoid identifies $y'$ as
$u_{\sigma}^{-1}\Res_{G'}^{G}(x)$ as claimed. For the converse we
have the existence of $y'$ and $w$ and hence that of $x$.
The second statement follows by a similar argument based on the diagram
\begin{displaymath}
\xymatrix
@R=4mm
@C=13mm
{
S^{V+\sigma -1}\ar[ddd]^(.5){y}\ar@{-->}[ddr]^(.5){x}
\ar@{-->}[drr]^(.5){w}\ar[rrr]^(.5){y}
& & &(P_{r-1}X)^{G} \ar[d]^(.5){i}\\
& &(P_{0}X)^{G'}\ar[d]^(.5){j}
\ar[r]^(.5){\Tr_{G'}^{G}}
&( P_{0}X )^{G}\ar[d]^(.5){j}\\
&(\Sigma^{\sigma -1}P_{0}^{r-2}X)^{G}\ar[d]^(.5){k}
\ar[r]^(.55){u_{\sigma}^{-1}\Res_{G'}^{G}}
&(P_{0}^{r-2}X)^{G'}\ar[d]^(.5){k}
\ar[r]^(.5){\Tr_{G'}^{G}}
&( P_{0}^{r-2}X)^{G} \\
( P_{r-1}X)^{G}\ar[r]^(.5){a_{\sigma}}
&(\Sigma^{\sigma} P_{r-1}X)^{G}
\ar[r]^(.55){u_{\sigma}^{-1}\Res_{G'}^{G}}
&(\Sigma P_{r-1}X)^{G'}.
}
\end{displaymath}
\noindent Here $w$ represents $u_{\sigma }^{-1}\Res_{G'}^{G}(x)$ as a
permanent cycle, so we get a Toda bracket containing
$\Res_{G'}^{G}(x)$ as indicated. \qed
Next we study the way differentials interact with the norm.
Suppose we have a subgroup $H\subset G$ and an $H$-{\eqvr} ring
spectrum $X$ with $Y=N_{H}^{G}X$. Suppose we have {\SS}s converging
to $\upi_{\star}X$ and $\upi_{\star}Y$ based on towers
\begin{displaymath}
\dotsb \to P_{n}^{H}X \to P_{n-1}^{H}X \to \dotsb
\qquad \aand
\dotsb \to P_{n}^{G}Y \to P_{n-1}^{G}Y \to \dotsb
\end{displaymath}
\noindent for functors $P_{n}^{H}$ and $P_{n}^{G}$ equipped with
suitable maps
\begin{displaymath}
P^{H}_{m}X \wedge P^{H}_{n}X \to P^{H}_{m+n}X ,\quad
P^{G}_{m}Y \wedge P^{G}_{n}Y \to P^{G}_{m+n}Y
\quad \mbox{and}\quad
N_{H}^{G}P^{H}_{m}X \to P^{G}_{m|G/H|}Y.
\end{displaymath}
Our slice {\SS } for each of the spectra studied in this paper fits
this description.
\begin{thm}\label{thm-normdiff}
{\bf The norm of a differential.} Suppose we have spectral sequences
as described above and a differential $d_{r} (x)=y$ for $x\in
\EE_{r}^{s,\star}X (H/H)$. Let $\rho=\Ind_{H}^{G}1 $ and suppose that
$a_{\rho }$ has filtration $|G/H|-1$. Then in the {\SS } for
$Y=N_{H}^{G}X$,
\begin{displaymath}
d_{|G/H| (r-1) +1} (a_{\rho } N_{H}^{G}x)= N_{H}^{G}y
\in \EE_{|G/H| (r-1) +1}^{|G/H| (s+r),\star}Y (G/G).
\end{displaymath}
\end{thm}
\proof The differential can be represented by a diagram
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
S^{V}\ar@{=}[r]^(.5){}
&S (1+V)\ar[r]^(.5){}\ar[d]_(.5){y}
&D (1+V)\ar[r]^(.5){}\ar[d]^(.5){}
&S^{1+V}\ar[d]^(.5){x}\\
&P_{s+r}^{H}X\ar[r]^(.5){}
&P_{s}^{H}X\ar[r]^(.5){}
&P^{H}_{s}X/P^{H}_{s+r}X
}
\end{displaymath}
\noindent for some orthogonal {\rep} $V$ of $H$, where each row is a
cofiber sequence. We want to apply the norm functor $N_{H}^{G}$ to
it. Let $W=\Ind_{H}^{G}V$. Then we get
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
S^{W}\ar@{=}[r]^(.5){}
&N_{H}^{G}S (1+V)\ar[r]^(.5){}\ar[d]_(.5){N_{H}^{G}y}
&D (\rho +W)\ar[r]^(.5){}\ar[d]^(.5){}
&S^{\rho +W}\ar[d]^(.5){N_{H}^{G}x}\\
&N_{H}^{G}P_{s+r}^{H}X\ar[r]^(.5){}
&N_{H}^{G}P_{s}^{H}X\ar[r]^(.5){}
&N_{H}^{G} (P_{s}^{H}X/P_{s+r}^{H}X).
}
\end{displaymath}
\noindent Neither row of this diagram is a cofiber sequence, but we
can enlarge it to one where the top and bottom rows are, namely
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
S^{W}\ar[r]^(.5){}\ar@{=}[d]
&D (1+W)\ar[r]^(.5){}\ar[d]^(.5){a_{\rho }}
&S^{1 +W}\ar[d]^(.5){a_{\rho }}\\
S^{W}\ar[r]^(.5){}\ar[d]_(.5){N_{H}^{G}y}
&D (\rho +W)\ar[r]^(.5){}\ar[d]^(.5){}
&S^{\rho +W}\ar[d]^(.5){N_{H}^{G}x}\\
N_{H}^{G}P_{s+r}^{H}X\ar[r]^(.5){}\ar[d]^(.5){}
&N_{H}^{G}P_{s}^{H}X\ar[r]^(.5){}\ar[d]^(.5){}
&N_{H}^{G} (P_{s}^{H}X/P_{s+r}^{H}X)\ar[d]^(.5){}\\
P_{(s+r)|G/H|}^{G}Y\ar[r]^(.5){}
&P_{s|G/H|}^{G}Y\ar[r]^(.5){}
&P_{s|G/H|}^{G}Y/P_{(s+r) |G/H|}^{G}Y
}
\end{displaymath}
\noindent Here the first two bottom vertical maps are part of the
multiplicative structure the {\SS } is assumed to have. Composing the
maps in the three columns gives us the diagram for the desired
differential. \qed
Given a $G$-{\eqvr} ring spectrum $X$, let $X'=i^{*}_{H}X$ denote its
restriction as an $H$-spectrum. Then $N_{H}^{G}X'=X^{(|G/H|)}$ and
the multiplication on $X$ gives us a map from this smash product to
$X$. This gives us a map $\pi_{\star}X' \to \pi_{\star}X$ called the
{\em internal norm}, which we denote abusively by $N_{H}^{G}$. The
argument above yields the following.
\begin{cor}\label{cor-normdiff}
{\bf The internal norm of a differential.} With notation as above,
suppose we have a differential $d_{r} (x)=y$ for $x\in
\EE_{r}^{s,\star}X' (H/H)$. Then
\begin{displaymath}
d_{|G/H| (r-1) +1} (a_{\rho } N_{H}^{G}x)= N_{H}^{G}y
\in \EE_{|G/H| (r-1) +1}^{|G/H| (s+r),\star}X (G/G).
\end{displaymath}
\end{cor}
The following is useful in making such calculations. It is very
similar to \cite[Lemma 3.13]{HHR}.
\begin{lem}\label{lem-norm-au}
{\bf The norm of $a_{V}$ and $u_{V}$}. With notation as above, let
$V$ be a {\rep} of $H$ with $V^{H}=0$ and let $W=\Ind_{H}^{G}V$. Then
$N_{H}^{G} (a_{V})=a_{W}$. If $V$ is oriented (and hence even
dimensional, making $|V|\rho $ oriented), then
\begin{displaymath}
u_{|V|\rho }N_{H}^{G} (u_{V})=u_{W}.
\end{displaymath}
\end{lem}
\proof The element $a_{V}$ is represented by the map $S^{0}\to
S^{V}$, the inclusion of the fixed point set. Applying the norm
functor to this map gives
\begin{displaymath}
S^{0}=N_{H}^{G}S^{0}\to N_{H}^{G}S^{V}=S^{W},
\end{displaymath}
\noindent which is $a_{W}$.
When $V$ is oriented, $u_{V}$ is represented by a map $S^{|V|}\to
S^{V}\wedge \HZZ$. Applying the norm functor and using the
multiplication in $\HZZ$ leads to a map
\begin{displaymath}
\xymatrix
@R=8mm
@C=5mm
{
{S^{|V|\rho }}\ar@{=}[r]^(.5){}
&{N_{H}^{G}S^{|V|}}\ar[rr]^(.5){N_{H}^{G}u_{V}}
& &{S^{W}\wedge \HZZ}
}
\end{displaymath}
\noindent Now smash both sides with $\HZZ$, precompose with
$u_{|V|\rho }$ and follow with the multiplication on $\HZZ$, giving
\begin{displaymath}
\xymatrix
@R=8mm
@C=8mm
{
{S^{|V||\rho |}}\ar[r]^(.4){u_{|V|\rho }}
&{S^{|V|\rho }\wedge \HZZ}\ar[rr]^(.45){N_{H}^{G}u_{V}\wedge \HZZ}
& &{S^{W}\wedge \HZZ\wedge \HZZ}\ar[r]^(.5){}
&{S^{W}\wedge \HZZ,}
}
\end{displaymath}
\noindent which is $u_{W}$ since $|W|=|V||\rho |$.
\qed
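For example, for $H=G'\subset G=C_{4}$ the sign {\rep} $\sigma $ of
$G'$ satisfies $\Ind_{G'}^{G}\sigma =\lambda $, the 2-dimensional
rotation {\rep} of $G$, so the first statement gives
\begin{displaymath}
N_{G'}^{G} (a_{\sigma })=a_{\lambda }.
\end{displaymath}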
\section{Some Mackey functors for $C_{4}$ and $C_{2}$}\label{sec-C4}
We need some notation for Mackey functors to be used in {\SS} charts.
{\em In this paper, when a cyclic group or subgroup appears as an
index, we will often replace it by its order.} We can specify Mackey
functors $\underline{M}$ for the group $C_{2}$ and $\underline{N}$ for
$C_{4}$ by means of Lewis diagrams (first introduced in
\cite{Lewis:ROG}),
\begin{numequation}\label{eq-Lewis}
\begin{split}
\xymatrix
@R=8mm
@C=10mm
{
{\underline{M} (C_{2}/C_{2})}\ar@/_/[d]_(.5){\Res_{1}^{2}}\\
{\underline{M} (C_{2}/e)}\ar@/_/[u]_(.5){\Tr_{1}^{2}}
}
\qquad \aand
\xymatrix
@R=4mm
@C=10mm
{
{\underline{N} (C_{4}/C_{4})}\ar@/_/[d]_(.5){\Res_{2}^{4}}\\
{\underline{N} (C_{4}/C_{2})}\ar@/_/[d]_(.5){\Res_{1}^{2}}
\ar@/_/[u]_(.5){\Tr_{2}^{4}}\\
{\underline{N} (C_{4}/e).}\ar@/_/[u]_(.5){\Tr_{1}^{2}}
}
\end{split}
\end{numequation}
\noindent We omit Lewis' looped arrow indicating the Weyl group action
on $\underline{M} (G/H)$ for proper subgroups $H$. This notation is
prohibitively cumbersome in {\SS } charts, so we will abbreviate
specific examples by more concise symbols. These are shown in Tables
\ref{tab-C2Mackey} and \ref{tab-C4Mackey}. {\em Admittedly some of
these symbols are arbitrary and take some getting used to, but we have
to start somewhere.} Lewis denotes the fixed point Mackey functor
for a $\Z G$-module $M$ by $R (M)$. He abbreviates $R (\Z)$ and $R
(\Z_{-})$ by $R$ and $R_{-}$. He also defines (with similar
abbreviations) the orbit group Mackey functor $L (M)$ by
\begin{displaymath}
L (M) (G/H)=M/H.
\end{displaymath}
\noindent In this case each transfer map is the surjection of the
orbit space for a smaller subgroup onto that of a larger one. The
functors $L$ and $R$ are the left and right adjoints of the forgetful
functor $\underline{M}\mapsto \underline{M} (G/e)$ from Mackey
functors to $\Z G$-modules.
Over $C_{2}$ we have short exact sequences
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
0 \ar[r]^(.5){}
&{\diagbox} \ar[r]^(.5){}
&\Box \ar[r]^(.5){}
&\bullet \ar[r]^(.5){}
&0\\
0\ar[r]^(.5){}
&\bullet \ar[r]^(.5){}
&{\dot{\Box}} \ar[r]^(.5){}
&{\oBox} \ar[r]^(.5){}
&0\\
0\ar[r]^(.5){}
&\Box \ar[r]^(.5){}
&{\widehat{\Box}} \ar[r]^(.5){}
&{\oBox} \ar[r]^(.5){}
&0
}
\end{displaymath}
\noindent We can apply the induction functor to each of them to get a
short exact sequence of Mackey functors over $C_{4}$.
Five of the Mackey functors in Table \ref{tab-C4Mackey} are fixed
point Mackey functors (\ref{eq-fpm}), meaning they are fixed points of
an underlying $\Z[G]$-module $M$, such as $\Z[G]$ or
\begin{displaymath}
\begin{array}[]{rcllcl}
\Z & = & \Z[G]/ (\gamma -1)\qquad \qquad &
\Z[G/G']& = &\Z[G]/ (\gamma^{2} -1)\\
\Z_{-} & = &\Z[G]/ (\gamma +1)&
\Z[G/G']_{-}
& = & \Z[G]/ (\gamma^{2}+1)
\end{array}
\end{displaymath}
\begin{table}
\caption[Some {$C_{2}$}-Mackey functors]{Some $C_{2}$-Mackey functors}
\label{tab-C2Mackey}
\begin{center}
\begin{tabular}{|p{2.1cm}|c|c|c|c|c|
c|}
\hline
Symbol
&$\Box$
&$\oBox $
&$\bullet$
&$\diagbox$
&$\dot{\Box} $
&$\widehat{\Box}$\\
\hline
Lewis\newline diagram
&$
\xymatrix
@R=5mm
@C=10mm
{
\Z\ar@/_/[d]_(.5){1}\\
\Z\ar@/_/[u]_(.5){2}
}$
&
$
\xymatrix
@R=4mm
@C=10mm
{
0\ar@/_/[d]_(.5){}\\
\Z_{-}\ar@/_/[u]_(.5){}
}$
&
$
\xymatrix
@R=4mm
@C=10mm
{
\Z/2\ar@/_/[d]_(.5){}\\
0\ar@/_/[u]_(.5){}
}
$ &
$\xymatrix
@R=5mm
@C=10mm
{
\Z\ar@/_/[d]_(.5){2}\\
\Z\ar@/_/[u]_(.5){1}
}$ &
$\xymatrix
@R=5mm
@C=10mm
{
\Z/2\ar@/_/[d]_(.5){0}\\
\Z_{-}\ar@/_/[u]_(.5){1}
}$
&
$\xymatrix
@R=5mm
@C=10mm
{
\Z\ar@/_/[d]_(.5){\Delta}\\
\Z[C_{2}]\ar@/_/[u]_(.5){\nabla}
}$\\
\hline
Lewis symbol
&$R$&$R_{-}$
&$\langle \Z/2 \rangle$
&$L$&$L_{-}$
&$R (\Z^{2})$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption[Some {$C_{4}$}-Mackey functors]{Some $C_{4}$-Mackey functors,
where $G=C_{4}$ and $G'$ is its index 2 subgroup. The notation
$\underline{\Z}(G,H)$ is defined in \ref{prop-top}(i).}
\label{tab-C4Mackey}
\begin{center}
\includegraphics[width=12cm]{fig-C4-Mackeya.pdf}
\end{center}
\end{table}
We will use the following notational conventions for $C_{4}$-Mackey functors.
\begin{enumerate}
\item [$\bullet$] Given a $C_{2}$-Mackey functor $\underline{M}$ with Lewis diagram
\begin{displaymath}
\xymatrix
@R=8mm
@C=10mm
{
{A}\ar@/_/[d]_(.5){\alpha }\\
{B}\ar@/_/[u]_(.5){\beta }
}
\end{displaymath}
\noindent with $A$ and $B$ cyclic, we will use the symbols
$\underline{M}$, $\underline{\overline{M} }$ and $\dot{\underline{M}}$
for the $C_{4}$-Mackey functors with Lewis diagrams
\begin{displaymath}
\xymatrix
@R=8mm
@C=10mm
{
{A}\ar@/_/[d]_(.5){\alpha }\\
{B}\ar@/_/[u]_(.5){\beta }\ar@/_/[d]_(.5){1 }\\
{B,}\ar@/_/[u]_(.5){2 }
}
\qquad
\xymatrix
@R=8mm
@C=10mm
{
{0}\ar@/_/[d]_(.5){ }\\
{A_{-}}\ar@/_/[d]_(.5){\alpha }\ar@/_/[u]_(.5){ }\\
{B_{-}}\ar@/_/[u]_(.5){\beta }
}
\qquad \aand
\xymatrix
@R=8mm
@C=10mm
{
{\Z/2}\ar@/_/[d]_(.5){0 }\\
{A_{-}}\ar@/_/[d]_(.5){\alpha }\ar@/_/[u]_(.5){\tau }\\
{B_{-}}\ar@/_/[u]_(.5){\beta }
}\end{displaymath}
\noindent where a generator $\gamma \in C_{4}$ acts via multiplication
by $-1$ on $A$ and $B$ in the second two, and the transfer $\tau $ is
nontrivial.
\item [$\bullet$] For a $C_{2}$-Mackey functor $\underline{M}$ we
will denote $\uparrow_{2}^{4}\underline{M}$ (see \ref{def-MF-induction}) by
$\widehat{\underline{M}}$. For a Mackey functor $\underline{M}$
defined over the trivial group, we will denote
$\uparrow_{1}^{2}\underline{M}$ and $\uparrow_{1}^{4}\underline{M}$ by
$\widehat{\underline{M}}$ and $\widehat{\widehat{\underline{M}}}$.
\end{enumerate}
Over $C_{4}$, in addition to the short exact sequences induced up from
$C_{2}$, we have
\begin{numequation}\label{eq-C4-SES}
\begin{split}
\xymatrix
@R=2mm
@C=10mm
{
0\ar[r]
&{\bullet}\ar[r]
&{\dot{\Box}}\ar[r]
&{\oBox}\ar[r]
&0\\
0\ar[r]
&{\JJ}\ar[r]
&{\circ }\ar[r]
&\bullet\ar[r]
&0\\
0\ar[r]
&{\JJ}\ar[r]
&{\JJbox}\ar[r]
&{\widehat{\oBox}}\ar[r]
&0\\
0\ar[r]
&{\bullet}\ar[r]
&{\circ }\ar[r]
&\JJJ\ar[r]
&0\\
0\ar[r]
&{\fourbox}\ar[r]
&\Box\ar[r]
&\circ\ar[r]
&0\\
0\ar[r]
&{\diagbox}\ar[r]
&\Box\ar[r]
&\bullet\ar[r]
&0
}
\end{split}
\end{numequation}
\begin{defin}\label{def-enriched}
{\bf A $C_{4}$-enriched $C_{2}$-Mackey functor.}
For a $C_{2}$-Mackey functor $\uM$ as above,
$\widetilde{\uM}$ will denote the $C_{2}$-Mackey functor
enriched over $\Z[C_{4}]$ defined by
\begin{displaymath}
\widetilde{\uM} (S)
=\Z[C_{4}]\otimes_{\Z[C_{2}]}\uM (S)
\end{displaymath}
\noindent for a finite $C_{2}$-set $S$. Equivalently, in the notation
of Definition \ref{def-MF-induction}, $\widetilde{\uM}=
\downarrow_{2}^{4}\uparrow_{2}^{4}\uM$.
\end{defin}
\section{Some chain complexes of Mackey functors}\label{sec-chain}
As noted above, a $G$-CW complex $X$, meaning one built out of cells
of the form $G_{+}\smashove{H} e^{n}$, has a reduced cellular chain
complex of $\Z[G]$-modules $C_{*}X$, leading to a chain complex of
fixed point Mackey functors (see (\ref{eq-fpm})) $\uC_{*}X$. When
$X=S^{V}$ for a {\rep} $V$, we will denote this complex by
$\uC_{*}^{V}$; see (\ref{eq-CVn}). Its homology is the graded Mackey
functor $\uH_{*}X$. Here we will apply the methods of \S\ref{sec-HZZ}
to three examples.
(i) Let $G=C_{2}$ with generator $\gamma $, and $X=S^{n\rho
}$ for $n>0$, where $\rho $ denotes the regular {\rep}. We have seen
before \cite[Ex. 3.7]{HHR} that it has a reduced cellular chain complex $C$
with
\begin{numequation}
\label{eq-C2chain}
C_{i} ^{n\rho_{2}} =\mycases{
\Z[G]/ (\gamma -1)
&\mbox{for }i=n\\
\Z[G]
&\mbox{for }n<i\leq 2n\\
0 &\mbox{otherwise}.
}
\end{numequation}
\noindent Let $c_{i}^{(n)}$ denote a generator of $C_{i}^{n\rho_{2}}$.
The boundary operator $d$ is given by
\begin{numequation}
\label{eq-C2boundary}
d (c_{i+1}^{(n)}) =\mycases{
c_{i}^{(n)}
&\mbox{for }i=n\\
\gamma_{i+1-n} (c_{i}^{(n)})
&\mbox{for }n<i<2n\\
0 &\mbox{otherwise}
}
\end{numequation}
\noindent where $\gamma_{i}=1- (-1)^{i}\gamma $. For future reference, let
\begin{displaymath}
\epsilon_{i}=1- (-1)^{i}=\mycases{
0 &\mbox{for $i$ even}\\
2 &\mbox{for $i$ odd.}
}
\end{displaymath}
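\noindent One can check directly that $d\circ d=0$: for $n<i<2n$ the
composite of successive boundaries is multiplication by
\begin{displaymath}
\gamma_{i}\gamma_{i+1}
= (1- (-1)^{i}\gamma ) (1+ (-1)^{i}\gamma )
=1-\gamma^{2}=0\in \Z[G],
\end{displaymath}
\noindent while at the bottom $\gamma_{2}=1-\gamma $ acts trivially on
$C_{n}^{n\rho_{2}}=\Z[G]/ (\gamma -1)$.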
\noindent This chain complex has the form
\begin{displaymath}
\xymatrix
@R=3mm
@C=10mm
{
n &n+1
&n+2
&n+3
& &2n\\
\Box
&{\widehat{\Box}}\ar[l]_(.5){\nabla}
&{\widehat{\Box}}\ar[l]_(.5){\gamma_{2} }
&{\widehat{\Box}}\ar[l]_(.5){\gamma_{3} }
&\dotsb \ar[l]_(.5){}
&{\widehat{\Box}}\ar[l]_(.5){\gamma_{n}}\\
\Z\ar@/_/[d]_(.5){1}
&\Z\ar@/_/[d]_(.5){\Delta}\ar[l]_(.5){2}
&\Z\ar@/_/[d]_(.5){\Delta}\ar[l]_(.5){0}
&\Z\ar@/_/[d]_(.5){\Delta}\ar[l]_(.5){2}
&\dotsb \ar[l]_(.5){}
&\Z\ar@/_/[d]_(.5){\Delta}\ar[l]_(.5){\epsilon_{n}}\\
\Z\ar@/_/[u]_(.5){2}
&\Z[G]\ar@/_/[u]_(.5){\nabla}\ar[l]_(.5){\nabla}
&\Z[G]\ar@/_/[u]_(.5){\nabla}\ar[l]_(.5){\gamma_{2}}
&\Z[G]\ar@/_/[u]_(.5){\nabla}\ar[l]_(.5){\gamma_{3} }
&\dotsb \ar[l]_(.5){}
&\Z[G]\ar@/_/[u]_(.5){\nabla}\ar[l]_(.5){\gamma_{n}}
}
\end{displaymath}
\noindent Passing to homology we get
\begin{displaymath}
\xymatrix
@R=3mm
@C=7mm
{
n &n+1
&n+2
&n+3
& &2n\\
\bullet
&0 &\bullet
&0 &\dotsb
&{\uH_{2n}} \\
\Z/2\ar@/_/[d]_(.5){}
&0\ar@/_/[d]_(.5){ }
&\Z/2\ar@/_/[d]_(.5){ }
&0\ar@/_/[d]_(.5){ }
&\dotsb
&{\uH_{2n} (G/G)}
\ar@/_/[d]_(.5){\Delta }\\
0\ar@/_/[u]_(.5){}
&0\ar@/_/[u]_(.5){}
&0\ar@/_/[u]_(.5){}
&0\ar@/_/[u]_(.5){}
&\dotsb
&\Z[G]/ (\gamma_{n+1} )\ar@/_/[u]_(.5){\nabla}
}
\end{displaymath}
\noindent where
\begin{displaymath}
\uH_{2n} (G/G)
=\mycases{
\Z
&\mbox{for $n$ even}\\
0
&\mbox{for $n$ odd}
}
\qquad \aand
\uH_{2n}
=\mycases{
\Box
&\mbox{for $n$ even}\\
\overline{\Box}
&\mbox{for $n$ odd}
}
\end{displaymath}
\noindent Here $\Box$ and $\overline{\Box}$ are fixed point Mackey
functors but $\bullet$ is not.
Similar calculations can be made for $S^{n\rho_{2}}$ for $n<0$. The
results are indicated in Figure \ref{fig-sseq-1}. This is originally
due to unpublished work of Stong and is reported in \cite[Theorem 2.1
and Table 2.2]{Lewis:ROG}. This information will be used in
\S\ref{sec-Dugger}.
\begin{figure}\label{fig-sseq-1}
\end{figure}
In other words, the $RO (G)$-graded Mackey functor valued homotopy of
$\HZZ$ is as follows. For $n\geq -1$ we have
\begin{displaymath}
\upi_{i}\Sigma^{n\rho_{2}}\HZZ
=\upi_{i-n\rho_{2} }\HZZ =
\mycases{
\Box
&\mbox{for $n$ even and $i=2n$}\\
\overline{\Box}
&\mbox{for $n$ odd and $i=2n$}\\
\bullet
&\mbox{for $n\leq i<2n$ and $i+n$ even} \\
0 &\mbox{otherwise}
}
\end{displaymath}
\noindent For $n\leq -2$ we have
\begin{displaymath}
\upi_{i}\Sigma^{n\rho_{2}}\HZZ
=\upi_{i-n\rho_{2} }\HZZ =
\mycases{
\diagbox
&\mbox{for $n$ even and $i=2n$}\\
\dot{\Box}
&\mbox{for $n$ odd and $i=2n$}\\
\bullet
&\mbox{for $2n<i\leq n-3$ and $i+n$ odd}\\
0 &\mbox{otherwise}
}
\end{displaymath}
We can use Definition \ref{def-aeu} to name some elements of these groups.
Note that $\HZZ$ is a commutative ring spectrum, so there is a
commutative multiplication in $\upi_{\star}\HZZ$, making it a
commutative $RO (G)$-graded Green functor. For such a functor $\uM$
on a general group $G$, the restriction maps are ring homomorphisms
while the transfer maps satisfy the Frobenius relations
(\ref{eq-Frob}).
Then the generators of various groups in $\upi_{\star}\HZZ$ are
\begin{align*}
\mbox{\sc $(4m-2)$-slices for $m>0$ :}\hspace{-1cm} \\
a^{2m-1-2i}u^{i}
& = a_{(2m-1-2i)\sigma}u_{2i\sigma}\\
& \in \upi_{2m-1+2i }\Sigma^{(2m-1)\rho_{2}}\HZZ (G/G)\\
& = \upi_{2i- (2m-1)\sigma}\HZZ (G/G)\\
&\qquad \mbox{for $0\leq i<m$}\\
x^{2m-1}=u_{(2m-1)\sigma}
& \in \upi_{4m-2}\Sigma^{(2m-1)\rho_{2}}\HZZ (G/\ee)\\
& = \upi_{(2m-1) (1-\sigma ) }\HZZ (G/\ee)\\
&\qquad \mbox{with $\gamma (x)=-x$}\\
\mbox{\sc $4m$-slices for $m>0$ :} \\
a^{2m-2i}u^{i}
& = a_{(2m-2i)\sigma}u_{2i\sigma}\\
& \in \upi_{2m+2i }\Sigma^{2m\rho_{2}}\HZZ (G/G)\\
& = \upi_{2i- 2m\sigma}\HZZ (G/G)\\
&\qquad \mbox{for $0\leq i\leq m$}\\
&\qquad \mbox{ and with }\Res (u)=x^{2}\\
\mbox{\sc negative slices:} \\
z_{n}=e_{2n\rho_{2} }
& \in \upi_{-4n}\Sigma^{-2n\rho_{2}}\HZZ (G/\ee)\\
& = \upi_{2n (\sigma -1)}\HZZ (G/\ee)
\qquad \mbox{for $n>0$}\\
a^{-i}\Tr (x^{-2n-1})
& \in \upi_{-4n-2-i}\Sigma^{-(2n+1+i)\rho_{2}}\HZZ (G/G)\\
& = \upi_{(2n+1) (\sigma -1)+ i\sigma}\HZZ (G/G)\\
&\qquad
\qquad \mbox{for $n>0$ and $i\geq 0$} .
\end{align*}
\noindent We have
relations
\begin{displaymath}
\begin{array}[]{rclrcl}
2a &\hspace{-2.5mm} = & 0\qquad &
\Res (a)
&\hspace{-2.5mm} = & 0\\
z_{n} &\hspace{-2.5mm} = & x^{-2n}\qquad &
\Tr (x^{n}) &\hspace{-2.5mm} = &\mycases{
2u^{n/2}
&\mbox{for $n$ even and $n\geq 0$}\\
\Tr (z_{-n/2})
&\mbox{for $n$ even and $n< 0$}\\
0 &\mbox{for $n$ odd and $n>-3$.}
}
\end{array}
\end{displaymath}
(ii) Let $G=C_{4}$ with generator $\gamma $, $G'=C_{2}\subseteq G$, the
subgroup generated by $\gamma^{2}$, and $\widehat{S} (n,G')
=G_{+}\smashove{G'}S^{n\rho_{2}}$. Thus we have
\begin{displaymath}
C_{*} (\widehat{S} (n,G')) = \Z[G]\otimes_{\Z[G']} C_{*}^{n\rho_{2}}
\end{displaymath}
\noindent with $C_{*}^{n\rho_{2}}$ as in (\ref{eq-C2chain}). The
calculations of the previous example carry over verbatim by the
exactness of Mackey functor induction of Definition \ref{def-MF-induction}.
(iii) Let $G=C_{4}$ and $X=S^{n\rho_{4}}$. Then the reduced
cellular chain complex (\ref{eq-CVn}) has the form
\begin{displaymath}
C_{i}^{n\rho_{4}} = \mycases{
\Z &\mbox{for }i=n\\
\Z[G/G']
&\mbox{for }n<i\leq 2n\\
\Z[G]
&\mbox{for }2n<i\leq 4n\\
0 &\mbox{otherwise}
}
\end{displaymath}
\noindent in which generators $c_{i}^{(n)}\in C_{i}^{n\rho_{4}}$ satisfy
\begin{displaymath}
d (c_{i+1}^{(n)})= \mycases{
c_{i}^{(n)}
&\mbox{for }i=n\\
\gamma_{i+1-n} c_{i}^{(n)}
&\mbox{for $n<i\leq 2n$}\\
\mu_{i+1-n} c_{i}^{(n)}
&\mbox{for $2n< i<4n$ and $i$ even}\\
\gamma_{i+1-n} c_{i}^{(n)}
&\mbox{for $2n<i<4n$ and $i$ odd}\\
0 &\mbox{otherwise,}
}
\end{displaymath}
\noindent where
\begin{displaymath}
\mu_{i}=\gamma_{i} (1+\gamma^{2})= (1- (-1)^{i}\gamma ) (1+\gamma^{2}).
\end{displaymath}
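\noindent Again $d\circ d=0$: in $\Z[G]$ we have
\begin{displaymath}
\gamma_{i}\mu_{i+1}=\mu_{i}\gamma_{i+1}
= (1-\gamma^{2}) (1+\gamma^{2})=1-\gamma^{4}=0,
\end{displaymath}
\noindent while $\gamma_{i}\gamma_{i+1}=1-\gamma^{2}$ vanishes on
$\Z[G/G']=\Z[G]/ (\gamma^{2}-1)$.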
The values of $\underline{H}_{*}S^{n\rho_{4}}$ are illustrated in Figure \ref{fig-sseq-3}. The Mackey
functors in filtration 0 (the horizontal axis) are the ones described
in Proposition \ref{prop-top}.
\begin{figure}\label{fig-sseq-3}
\end{figure}
As in (i), we name some of these elements. Let $G=C_{4}$ and
$G'=C_{2}\subseteq G$. Recall that the regular {\rep} $\rho_{4}$ is
$1+\sigma +\lambda$ where $\sigma $ is the sign {\rep} and $\lambda$
is the 2-dimensional {\rep} given by a rotation of order 4.
Note that while Figure \ref{fig-sseq-1} shows all of $\underline{\pi
}_{\star}\HZZ$ for $G=C_{2}$, Figure \ref{fig-sseq-3} shows only a
bigraded portion of this trigraded Mackey functor for $G=C_{4}$,
namely the groups for which the index differs by an integer from a
multiple of $\rho_{4}$. We will need to refer to some elements not
shown in the latter chart, namely
\begin{numequation}\label{eq-au}
\begin{split}\left\{
\begin{array}[]{rlrlrl}
a_{\sigma }
&\hspace{-3mm} \in \underline{H}_{0}S^{\sigma } (G/G)
&a_{\lambda }
&\in \underline{H}_{0}S^{\lambda } (G/G)
&\overline{a}_{\lambda }
&=\Res_{2}^{4}(a_{\lambda })
\\
u_{2\sigma }
&\hspace{-3mm} \in \underline{H}_{2}S^{2\sigma } (G/G)
&u_{\sigma }
&\in \underline{H}_{1}S^{\sigma } (G/G')
&\overline{u} _{\sigma }
&=\Res_{1}^{2}(u_{\sigma })
\\
u_{\lambda }
&\hspace{-3mm} \in \underline{H}_{2}S^{\lambda } (G/G)
&\overline{u}_{\lambda }
&=\Res_{2}^{4}(u_{\lambda })
&\overline{\overline{u}} _{\lambda }
&=\Res_{1}^{4}(u_{\lambda })
\end{array} \right.
\end{split}
\end{numequation}
\noindent subject to the relations
\begin{numequation}\label{eq-au-rel}
\begin{split}
\left\{\begin{array}[]{rlrlrl}
2a_{\sigma }
&\hspace{-2.5mm} = 0
&\Res_{2}^{4}(a_{\sigma })
&=0 \\
4a_{\lambda }
&\hspace{-2.5mm} =0
&2\overline{a}_{\lambda }
&=0
&\Res_{1}^{4}(a_{\lambda })
&=0 \\
\Res_{2}^{4}(u_{2\sigma })
&\hspace{-2.5mm} = u_{\sigma }^{2}
&a_{\sigma }^{2}u_{\lambda }
&=2a_{\lambda }u_{2\sigma }
&\mbox{ (gold relation)};\hspace{-1cm}
\end{array} \right.
\end{split}
\end{numequation}
\noindent see Definition \ref{def-aeu} and Lemma \ref{lem-aeu}.
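As a consistency check on the gradings, both sides of the gold
relation lie in the same group:
\begin{displaymath}
a_{\sigma }^{2}u_{\lambda },\ a_{\lambda }u_{2\sigma }
\in \underline{H}_{2}S^{2\sigma +\lambda } (G/G),
\end{displaymath}
\noindent since $a_{\sigma }^{2}$ and $a_{\lambda }$ lie in
$\underline{H}_{0}$ of $S^{2\sigma }$ and $S^{\lambda }$, while
$u_{\lambda }$ and $u_{2\sigma }$ lie in $\underline{H}_{2}$ of
$S^{\lambda }$ and $S^{2\sigma }$.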
We will denote the generator of $\EE_{2}^{s,t} (G/H)$ (when
it is nontrivial) by $x_{t-s,s}$, $y_{t-s,s}$ and $z_{t-s,s}$ for
$H=G$, $G'$ and $\ee$ respectively. Then the generators
for the groups in the 4-slice are
\begin{align*}
y_{4,0} = u_{\rho_{4}}
=u_{\sigma}\Res^{4}_{2}(u_{\lambda})
& \in \upi_{4}\Sigma^{\rho_{4}}\HZZ (G/G')
= \upi_{3-\sigma-\lambda}\HZZ (G/G')\\
& \qquad \mbox{with }\gamma (y_{4,0})=-y_{4,0}\\
x_{3,1} = a_{\sigma}u_{\lambda}
& \in \upi_{3}\Sigma^{\rho_{4}}\HZZ (G/G)
= \upi_{2-\sigma-\lambda}\HZZ (G/G)\\
y_{2,2} = \Res^{4}_{2} (a_{\lambda})u_{\sigma}
& \in \upi_{2}\Sigma^{\rho_{4}}\HZZ (G/G')
= \upi_{1-\sigma -\lambda}\HZZ (G/G')\\
x_{1,3} = a_{\rho_{4}}
= a_{\sigma} a_{\lambda}
& \in \upi_{1}\Sigma^{\rho_{4}}\HZZ (G/G)
= \upi_{-\sigma -\lambda}\HZZ (G/G)
\end{align*}
\noindent and the ones for the 8-slice are
\begin{align*}
x_{8,0} = u _{2\lambda+2\sigma}=u_{2\rho_{4}}
& \in \upi_{8}\Sigma^{2\rho_{4}}\HZZ (G/G)
= \upi_{6-2\sigma -2\lambda}\HZZ (G/G)\\
& \qquad \mbox{with }y_{4,0}^{2}=y_{8,0}=\Res_{2}^{4}(x_{8,0})\\
x_{6,2} = a_{\lambda} u _{\lambda+2\sigma}
& \in \upi_{6}\Sigma^{2\rho_{4}}\HZZ (G/G)
= \upi_{4-2\sigma -2\lambda}\HZZ (G/G)\\
& \qquad \mbox{with }x_{3,1}^{2}=2x_{6,2}\\
& \qquad \mbox{and }y_{4,0}y_{2,2}=y_{6,2}=\Res_{2}^{4}(x_{6,2})\\
x_{4,4} = a_{2\lambda}u_{2\sigma}
& \in \upi_{4}\Sigma^{2\rho_{4}}\HZZ (G/G)
= \upi_{2-2\sigma -2\lambda}\HZZ (G/G)\\
&\qquad \mbox{with }y_{2,2}^{2}=y_{4,4}=\Res_{2}^{4}(x_{4,4})\\
&\qquad \mbox{and }
x_{1,3}x_{3,1}=2x_{4,4}\\
x_{2,6} = x_{1,3}^{2}
& \in \upi_{2}\Sigma^{2\rho_{4}}\HZZ (G/G)
= \upi_{-2\sigma -2\lambda}\HZZ (G/G).
\end{align*}
These elements and their restrictions generate
$\upi_{*}\Sigma^{m\rho_{4}}\HZZ$ for $m=1$ and 2. For
$m>2$ the groups are generated by products of these elements.
The element
\begin{displaymath}
z_{4,0} = \Res^{2}_{1}(y_{4,0}) =\Res^{2}_{1} (u_{\rho_{4}} )\in
\upi_{4}\Sigma^{\rho_{4}}\HZZ (G/\ee)
\end{displaymath}
\noindent is invertible with $\gamma (z_{4,0})=-z_{4,0}$,
$z_{4,0}^{2}=z_{8,0}=\Res_{1}^{4}(x_{8,0})$ and
\begin{displaymath}
z_{-4m,0}:= z_{4,0}^{-m}=e_{m\rho_{4}}
\in \upi_{-4m}\Sigma^{-m\rho_{4}}\HZZ (G/\ee)
\qquad \mbox{for }m>0,
\end{displaymath}
\noindent where $e_{m\rho_{4}}$ is as in Definition
\ref{def-aeu}. These elements and their transfers generate the groups
in
\begin{displaymath}
\upi_{-4m}\Sigma^{-m\rho_{4}}\HZZ \qquad \mbox{for }m>0.
\end{displaymath}
\begin{thm}\label{thm-div}
{\bf Divisibilities in the negative regular slices for $C_{4}$.}
There are the following infinite divisibilities in the third quadrant
of the {\SS} in Figure \ref{fig-sseq-3}.
\begin{enumerate}
\item [(i)] \label{div:i}
$x_{-4,0}=\Tr_{1}^{4}(z_{-4,0})$ is
divisible by any monomial in $x_{1,3}$ and $x_{4,4}$, meaning that
\begin{displaymath}
x_{1,3}^{i}x_{4,4}^{j}x_{-4-i-4j,-3i-4j}=x_{-4,0}
\qquad \mbox{for }i,j\geq 0.
\end{displaymath}
\noindent Moreover, no other basis element killed by $x_{1,3}$ and
$x_{4,4}$ has this property.
\item [(ii)] \label{div:ii}
$x_{-4,0}$ and $x_{-7,-1}$ are divisible
by any monomial in $x_{4,4}$, $x_{6,2}$ and $x_{8,0}$, subject to the
relation $x_{6,2}^{2}=x_{8,0}x_{4,4}$. Note here that
$x_{3,1}^{2}=2x_{6,2}$.
\noindent Moreover, no other basis element killed by $x_{4,4}$,
$x_{6,2}$ and $x_{8,0}$ has this property.
\item [(iii)] \label{div:iv}
$y_{-7,-1}=\Res_{2}^{4}(x_{-7,-1})$ is
divisible by any monomial in $y_{2,2}$ and $y_{4,0}$, meaning that
\begin{displaymath}
y_{2,2}^{j}y_{4,0}^{k}y_{-7-2j-4k,-1-2j}=y_{-7,-1}
\qquad \mbox{for }j,k\geq 0.
\end{displaymath}
\noindent Moreover, no other basis element killed by $y_{2,2}$ and
$y_{4,0}$ has this property.
\end{enumerate}
\end{thm}
We will prove Theorem \ref{thm-div} as a corollary of a more general
statement (Lemma \ref{lem-div} and Corollary \ref{cor-div}) in which
we consider all {\rep}s of the form $m\lambda +n\sigma $ for $m,n\geq
0$. Let
\begin{displaymath}
\underline{R}=\bigoplus_{m,n\geq 0}\underline{H}_{*}S^{m\lambda +n\sigma }.
\end{displaymath}
\noindent It is generated by the elements of (\ref{eq-au}) subject to
the relations of (\ref{eq-au-rel}).
In the larger ring
\begin{displaymath}
\underline{\tilde{R}}
=\bigoplus_{m,n\in \Z \atop mn\geq 0}
\underline{H}_{*}S^{m\lambda +n\sigma },
\end{displaymath}
\noindent the elements $u_{\sigma }$, $\overline{u}_{\sigma } $ and
$\overline{\overline{u} }_{\lambda } $ are invertible with
\begin{align*}
e_{\sigma }
& = u_{\sigma }^{-1} \in \underline{H}_{-1}S^{-\sigma } (G/G')
&e_{\lambda }
= \overline{\overline{u} }_{\lambda }^{-1}
\in \underline{H}_{-2}S^{-\lambda } (G/e).
\end{align*}
Define spectra $L_{m}$ and $K_{n}$ to be the cofibers of $a_{m\lambda
}$ and $a_{n\sigma }$. Thus we have cofiber sequences
\begin{displaymath}
\xymatrix
@R=2mm
@C=15mm
{
{\Sigma^{-1}L_{m}}\ar[r]^(.5){c_{m\lambda }}
&{S^{0}}\ar[r]^(.5){a_{m\lambda }}
&{S^{m\lambda }}\ar[r]^(.5){b_{m\lambda }}
&{L_{m}}\\
{\Sigma^{-1}K_{n}}\ar[r]^(.5){c_{n\sigma}}
&{S^{0}}\ar[r]^(.5){a_{n\sigma}}
&{S^{n\sigma}}\ar[r]^(.5){b_{n\sigma}}
&{K_{n}}
}
\end{displaymath}
\noindent Dualizing gives
\begin{displaymath}
\xymatrix
@R=2mm
@C=15mm
{
{DL_{m}}\ar[r]^(.5){Db_{m\lambda }}
&{S^{-m\lambda }}\ar[r]^(.5){Da_{m\lambda }}
&{S^{0}}\ar[r]^(.5){Dc_{m\lambda }}
&{\Sigma DL_{m}}\\
{DK_{n}}\ar[r]^(.5){Db_{n\sigma}}
&{S^{-n\sigma }}\ar[r]^(.5){Da_{n\sigma}}
&{S^{0}}\ar[r]^(.5){Dc_{n\sigma}}
&{\Sigma DK_{n}}
}
\end{displaymath}
\noindent The maps $Da_{m\lambda }$ and $Da_{n\sigma }$ are the same
as desuspensions of $a_{m\lambda }$ and $a_{n\sigma }$, which implies that
\begin{displaymath}
DL_{m}=\Sigma^{-1-m\lambda }L_{m}
\qquad \aand
DK_{n}=\Sigma^{-1-n\sigma }K_{n}.
\end{displaymath}
\noindent Inspection of the cellular chain complexes for $L_{m}$ and
$K_{n}$ and certain of their suspensions reveals that
\begin{displaymath}
\Sigma^{2-\lambda }L_{m}\wedge \HZZ
=L_{m}\wedge \HZZ=\Sigma^{2-2\sigma }L_{m}\wedge \HZZ
\end{displaymath}
\noindent and
\begin{displaymath}
\Sigma^{2-2\sigma }K_{n}\wedge \HZZ=K_{n}\wedge \HZZ,
\end{displaymath}
\noindent while $\Sigma^{1-\sigma }$ alters both $L_{m}\wedge \HZZ$
and $K_{n}\wedge \HZZ$. We will denote $\Sigma^{k (1-\sigma
)}L_{m}\wedge \HZZ$ by $L_{m}^{(-1)^{k}}\wedge \HZZ$ and similarly for
$K_{n}$.
The homology groups of $L_{m}^{\pm }$ and $K_{n}^{\pm }$ for $m,n>0$
are indicated in Figures \ref{fig-Lm} and \ref{fig-Kn}, and those for
$S^{m\lambda }$ and $S^{n\sigma }$ are shown in
Figure \ref{fig-spheres}.
\begin{figure}\label{fig-Lm}
\end{figure}
\begin{figure}\label{fig-Kn}
\end{figure}
\begin{figure}\label{fig-spheres}
\end{figure}
In the following diagrams we will use the same notation for a map and
its smash product with any identity map. Let $V=m\lambda +n\sigma $
with $m,n>0$, and let $R_{V}$ denote the cofiber of $a_{V}$. Since
$a_{V}$ is self-dual up to suspension, we have
$DR_{V}=\Sigma^{-1-V}R_{V}$. In the following each row and column is
a cofiber sequence.
\begin{numequation}\label{eq-newfirst}
\begin{split}
\xymatrix
@R=5mm
@C=10mm
{
{}
&{}
&{\Sigma^{n\sigma -1}L_{m}}\ar@{=}[r]\ar[d]^(.5){c_{m\lambda }}
&{\Sigma^{n\sigma -1}L_{m}}\ar[d]^(.5){}\\
{\Sigma^{-1}K_{n}}\ar[r]^(.5){c_{n\sigma }}\ar[d]^(.5){}
&{S^{0}}\ar[r]^(.5){a_{n\sigma }}\ar@{=}[d]^(.5){}
&{S^{n\sigma }}\ar[r]^(.5){b_{n\sigma }}\ar[d]^(.5){a_{m\lambda }}
&{K_{n}}\ar[d]^(.5){}\\
{\Sigma^{-1}R_{V}}\ar[r]^(.5){c_{V}}
&{S^{0}}\ar[r]^(.5){a_{V }}
&{S^{V}}\ar[r]^(.5){b_{V}}\ar[d]^(.5){b_{m\lambda }}
&{R_{V}}\ar[d]^(.5){}\\
{}
&{}
&{\Sigma^{n\sigma }L_{m}}\ar@{=}[r]
&{\Sigma^{n\sigma }L_{m}}
}
\end{split}
\end{numequation}
\noindent The homology sequence for the third column is the easiest
way to compute $\underline{H}_{*}S^{V}$. That column is
\begin{numequation}\label{eq-third-column}
\begin{split}
\xymatrix
@R=5mm
@C=10mm
{
{\Sigma^{n\sigma -1}L_{m}}\ar[r]^(.6){c_{m\lambda }}
&{S^{n\sigma }}\ar[r]^(.5){a_{m\lambda }}
&{S^{V}}\ar[r]^(.4){b_{m\lambda }}
&{\Sigma^{n\sigma }L_{m}},
}
\end{split}
\end{numequation}
\noindent which dualizes to
\begin{displaymath}
\xymatrix
@R=2mm
@C=10mm
{
{\Sigma^{1-n\sigma }DL_{m}}\ar@{=}[d]^(.5){}
&{S^{-n\sigma }}\ar[l]_(.4){c_{m\lambda }}
&{S^{-V}}\ar[l]_(.5){a_{m\lambda }}
&{\Sigma^{-n\sigma }DL_{m}}\ar[l]_(.6){c_{m\lambda }}
\ar@{=}[d]^(.5){}\\
{\Sigma^{-V}L_{m}}
& & &{\Sigma^{-1-V}L_{m}}
}
\end{displaymath}
\noindent or
\begin{numequation}\label{eq-dual-third-column}
\begin{split}
\xymatrix
@R=5mm
@C=10mm
{
{\Sigma^{ -1-V}L_{m}}\ar[r]^(.6){c_{m\lambda }}
&{S^{-V}}\ar[r]^(.5){a_{m\lambda }}
&{S^{-n\sigma }}\ar[r]^(.4){b_{m\lambda }}
&{\Sigma^{-V}L_{m}}.
}
\end{split}
\end{numequation}
For (\ref{eq-third-column}) the {\LES} in homology includes
\begin{displaymath}
\xymatrix
@R=5mm
@C=8mm
{
{\underline{H}_{i+1-n}L_{m}^{(-1)^{n}}}\ar[r]^(.6){c_{m\lambda }}
&{\underline{H}_{i}S^{n\sigma }}\ar[r]^(.5){a_{m\lambda }}
&{\underline{H}_{i}S^{V }}\ar[r]^(.4){b_{m\lambda }}
&{\underline{H}_{i-n}L_{m}^{(-1)^{n}}}\ar[r]^(.5){c_{m\lambda }}
&{\underline{H}_{i-1}S^{n\sigma }}
}
\end{displaymath}
{\bf Divisibility by $a_{\lambda }$.} Multiplication by $a_{\lambda }$ leads to
\begin{displaymath}
\xymatrix
@R=5mm
@C=8mm
{
{\underline{H}_{i+1-n}L_{m}^{(-1)^{n}}}\ar[r]^(.6){c_{m\lambda }}
\ar[d]^(.5){a'_{\lambda }}
&{\underline{H}_{i}S^{n\sigma }}\ar[r]^(.5){a_{m\lambda }}\ar@{=}[d]^(.5){}
&{\underline{H}_{i}S^{V }}\ar[r]^(.4){b_{m\lambda }}
\ar[d]^(.5){a_{\lambda }}
&{\underline{H}_{i-n}L_{m}^{(-1)^{n}}}\ar[r]^(.5){c_{m\lambda }}
\ar[d]^(.5){a'_{\lambda }}
&{\underline{H}_{i-1}S^{n\sigma }}\ar@{=}[d]^(.5){}\\
{\underline{H}_{i+1-n}L_{m'}^{(-1)^{n}}}\ar[r]^(.6){c_{m'\lambda }}
&{\underline{H}_{i}S^{n\sigma }}\ar[r]^(.5){a_{m'\lambda }}
&{\underline{H}_{i}S^{V+\lambda }}\ar[r]^(.4){b_{m'\lambda }}
&{\underline{H}_{i-n}L_{m'}^{(-1)^{n}}}\ar[r]^(.5){c_{m'\lambda }}
&{\underline{H}_{i-1}S^{n\sigma },}
}
\end{displaymath}
\noindent where $m'=m+1$ and $a'_{\lambda }$ is induced by the
inclusion $L_{m}\to L_{m'}$.
In the dual case we get
\begin{numequation}\label{eq-a-lambda}
\begin{split}
\xymatrix
@R=5mm
@C=2mm
{
{\underline{H}_{i+1}S^{-n\sigma }}\ar[r]^(.4){b}\ar@{=}[d]^(.5){}
&{\underline{H}_{i+1+|V|}L_{m}^{(-1)^{n}}}\ar[r]^(.6){c}
&{\underline{H}_{i}S^{-V }}\ar[r]^(.5){a}
&{\underline{H}_{i}S^{-n\sigma }}\ar[r]^(.4){b}
\ar@{=}[d]^(.5){}
&{\underline{H}_{i+|V|}L_{m}^{(-1)^{n}}}
\\
{\underline{H}_{i+1}S^{-n\sigma }}\ar[r]^(.4){b}
&{\underline{H}_{i+3+|V|}L_{m'}^{(-1)^{n}}}\ar[r]^(.6){c}
\ar[u]_(.5){Da'_{\lambda }}
&{\underline{H}_{i}S^{-V-\lambda }}\ar[r]^(.5){a}
\ar[u]_(.5){a_{\lambda }}
&{\underline{H}_{i}S^{-n\sigma }}\ar[r]^(.4){b}
&{\underline{H}_{i+2+|V|}L_{m'}^{(-1)^{n}}}
\ar[u]_(.5){Da'_{\lambda }}
}
\end{split}
\end{numequation}
\noindent Here the subscripts on the horizontal maps ($m\lambda $ in
the top row and $m'\lambda $ in the bottom row) have been omitted
to save space. The five lemma implies that the middle
vertical map is onto when the left-hand $Da'_{\lambda }$ is onto
and the right-hand one is one-to-one.
The left-hand instance of $Da'_{\lambda }$ is onto in every
case except $i=-|V|$, and the right-hand one is one-to-one in all
cases except $i=-|V|$ and $i=-1-|V|$. This is illustrated for small
$m$ in the following diagram in which trivial Mackey functors are
indicated by blank spaces.
\begin{displaymath}
\xymatrix
@R=-1mm
@C=1mm
{
{j}
&{\underline{H}_{j}L_{1}}
&{\underline{H}_{j}L_{2}}
&{\underline{H}_{j}L_{3}}
&{\underline{H}_{j}L_{4}}
&
&{\underline{H}_{j}L_{1}^{-}}
&{\underline{H}_{j}L_{2}^{-}}
&{\underline{H}_{j}L_{3}^{-}}
&{\underline{H}_{j}L_{4}^{-}}
\\
-1 & & & & &
& & & &\\
0 & & & & &
& & & &\\
1
&{\fourbox }
&{\fourbox }\ar[luu]^(.5){}
&{\fourbox }\ar[luu]^(.5){}
&{\fourbox }\ar[luu]^(.5){}
&
&{\dot{\diagbox}}
&{\dot{\diagbox}}\ar[luu]^(.5){}
&{\dot{\diagbox}}\ar[luu]^(.5){}
&{\dot{\diagbox}}\ar[luu]^(.5){}\\
2
&{\Box}
&{\circ} \ar[luu]^(.5){}
&{\circ}\ar[luu]^(.5){}
&{\circ} \ar[luu]^(.5){}
&
&{\oBox}
&{\obull}\ar[luu]^(.5){}
&{\obull}\ar[luu]^(.5){}
&{\obull}\ar[luu]^(.5){}\\
3
& & & & &
& &{\bullet}\ar[luu]^(.5){}
&{\bullet}\ar[luu]^(.5){}
&{\bullet}\ar[luu]^(.5){}\\
4
& &{\Box}\ar@{=}[luu]^(.5){}
&{\circ} \ar@{=}[luu]^(.5){}
&{\circ}\ar@{=}[luu]^(.5){}
&
& &{\oBox}\ar@{=}[luu]^(.5){}
&{\obull}\ar@{=}[luu]^(.5){}
&{\obull}\ar@{=}[luu]^(.5){}\\
5
& & & & &
& & &{\bullet}\ar@{=}[luu]^(.5){}
&{\bullet}\ar@{=}[luu]^(.5){}\\
6
& & &{\Box} \ar@{=}[luu]^(.5){}
&{\circ}\ar@{=}[luu]^(.5){}
&
& & &{\oBox}\ar@{=}[luu]^(.5){}
&{\obull}\ar@{=}[luu]^(.5){}\\
7
& & & & &
& & & &{\bullet}\ar@{=}[luu]^(.5){}\\
8
& & & &{\Box} \ar@{=}[luu]^(.5){}
&
& & & &{\oBox}\ar@{=}[luu]^(.5){}\\
}
\end{displaymath}
\noindent It follows that the map $a_{\lambda }$ in
(\ref{eq-a-lambda}) is onto for all $i$ except $-|V|$. {\em This is a
divisibility result.} Note that $a_{\lambda }$ is trivial on
$\underline{H}_{*}X (G/e)$ for any $X$ since $\Res_{1}^{4}(a_{\lambda
})=0$.
{\bf Divisibility by $u_{\lambda }$.} For $u_{\lambda }$
multiplication we use the diagram
\begin{numequation}\label{eq-u-lambda}
\begin{split}
\xymatrix
@R=5mm
@C=3mm
{
{\underline{H}_{i+1}S^{-n\sigma }}\ar[r]^(.5){b}
&{\underline{H}_{i+1}L_{m}^{(-1)^{n}}}\ar[r]^(.5){c}
&{\underline{H}_{i}S^{-V}}\ar[r]^(.5){a}
&{\underline{H}_{i}S^{-n\sigma }}\ar[r]^(.5){b}
&{\underline{H}_{i}L_{m}^{(-1)^{n}}}
\\
{\underline{H}_{i-1}S^{-n\sigma-\lambda}}\ar[r]^(.5){b}
\ar[u]^(.5){u_{\lambda }}
&{\underline{H}_{i+1}L_{m}^{(-1)^{n}}}\ar[r]^(.5){c}
\ar@{=}[u]^(.5){}
&{\underline{H}_{i-2}S^{-V-\lambda }}\ar[r]^(.5){a}
\ar[u]^(.5){u_{\lambda }}
&{\underline{H}_{i-2}S^{-n\sigma-\lambda}}\ar[r]^(.5){b}
\ar[u]^(.5){u_{\lambda }}
&{\underline{H}_{i}L_{m}^{(-1)^{n}}}\ar@{=}[u]^(.5){}
}
\end{split}
\end{numequation}
\noindent The rightmost $u_{\lambda }$ is onto in all cases except
$i=-n$ and $n$ even. This is illustrated for $n=6$ and 7 in the
following diagram.
\begin{displaymath}
\xymatrix
@R=-1mm
@C=2mm
{
j &-1 &-2 &-3 &-4 &-5 &-6 &-7 &-8 &-9\\
{\underline{H}_{j}S^{-6\sigma }}
& & &{\bullet}
&{}
&{\bullet}
&{\diagbox }
\\
{\underline{H}_{j}S^{-6\sigma-\lambda }}
& & &{\bullet}\ar[llu]^(.5){}
&{}
&{\bullet}\ar@{=}[llu]^(.5){}
&{} &{\circ}\ar[llu]^(.5){}
&{\fourbox }\ar[llu]^(.5){}
\\
{}\\
{\underline{H}_{j}S^{-7\sigma }}
& & &{\bullet}
&{}
&{\bullet}
& &{\dot{\diagbox }}
\\
{\underline{H}_{j}S^{-7\sigma-\lambda }}
& & &{\bullet}\ar[llu]^(.5){}
&{}
&{\bullet}\ar@{=}[llu]^(.5){}
&{} &{\bullet}\ar@{=}[llu]^(.5){}
& &{\dot{\diagbox }}\ar@{=}[llu]^(.5){}
\\
}
\end{displaymath}
\noindent
Thus the central $u_{\lambda }$ in (\ref{eq-u-lambda}) fails to be
onto only when $i=-n$ and $n$ is even.
{\bf Divisibility by $a_{\sigma }$.} The corresponding
diagram is
\begin{displaymath}
\xymatrix
@R=5mm
@C=3mm
{
{\underline{H}_{i+1}S^{-n\sigma }}\ar[r]^(.4){b}
&{\underline{H}_{i+1+|V|}L_{m}^{(-1)^{n}}}\ar[r]^(.6){c}
&{\underline{H}_{i}S^{-V }}\ar[r]^(.5){a}
&{\underline{H}_{i}S^{-n\sigma }}\ar[r]^(.4){b}
&{\underline{H}_{i+|V|}L_{m}^{(-1)^{n}}}
\\
{\underline{H}_{i+1}S^{- n'\sigma }}\ar[r]^(.4){b}
\ar[u]_(.5){a_{\sigma }}
&{\underline{H}_{i+2+|V|}L_{m}^{(-1)^{n'}}}\ar[r]^(.6){c}
\ar[u]_(.5){a_{\sigma}}
&{\underline{H}_{i}S^{-V-\sigma }}\ar[r]^(.5){a}
\ar[u]_(.5){a_{\sigma }}
&{\underline{H}_{i}S^{- n'\sigma }}\ar[r]^(.35){b}
\ar[u]_(.5){a_{\sigma }}
&{\underline{H}_{i+1+|V|}L_{m}^{(-1)^{n'}}}
\ar[u]_(.5){a_{\sigma}}
}
\end{displaymath}
\noindent Here we have abbreviated ${n+1}$ by $n'$. Since
$\Res_{2}^{4}(a_{\sigma })=0$, the map $a_{\sigma }$ must vanish on
$\underline{H}_{*}X (G/G')$ and $\underline{H}_{*}X (G/e)$. It can be
nontrivial only on $G/G$.
By Lemma \ref{lem-hate}, the image of $a_{\sigma }$ is the kernel of
the restriction map $u_{\sigma }^{-1}\Res_{2}^{4}$ and the kernel of
$a_{\sigma }$ is the image of the transfer $\Tr_{2}^{4}$. From Figure
\ref{fig-spheres} we see that $\Res_{2}^{4}$ kills
$\underline{H}_{i}S^{-n\sigma } (G/G)$ except the case $i=-n$ for even
$n$. From Figure \ref{fig-Lm} we see that it kills
$\underline{H}_{j}L_{m}^{-} (G/G)$ for all $j$ and
$\underline{H}_{j}L_{m}(G/G)$ for odd $j>1$, but not the generators
for $j=1$ nor the ones for even values of $j$ from $2$ to $2m$. The
transfer has nontrivial image in $\underline{H}_{j}L_{m}^{-}$ only for
$j=1$ and in $\underline{H}_{j}L_{m}$ only for $j=1$ and for even $j$
from 2 to $2m$.
{\em It follows that for odd $n$, each element of
$\underline{H}_{i}S^{-V} (G/G)$ is divisible by $a_{\sigma }$ except
when $i=-|V|=-2m-n$. For even $n$, each element is divisible by
$a_{\sigma }$ except when $i=-n$, when $i=-n-2m$, and when $i$ is odd,
ranging from $1-n-2m$ to $-1-n$. }
{\bf Divisibility by $u_{2\sigma }$.} For $u_{2\sigma }$
multiplication, the diagram is
\begin{displaymath}
\xymatrix
@R=5mm
@C=4mm
{
{\underline{H}_{i+1}S^{-n\sigma }}\ar[r]^(.5){b}
&{\underline{H}_{i+1}L_{m}^{(-1)^{n}}}\ar[r]^(.5){c}
&{\underline{H}_{i}S^{-V}}\ar[r]^(.5){a}
&{\underline{H}_{i}S^{-n\sigma }}\ar[r]^(.5){b}
&{\underline{H}_{i}L_{m}^{(-1)^{n}}}
\\
{\underline{H}_{i-1}S^{- (n+2)\sigma}}\ar[r]^(.5){b}
\ar[u]^(.5){u_{2\sigma }}
&{\underline{H}_{i+1}L_{m}^{(-1)^{n}}}\ar[r]^(.5){c}
\ar@{=}[u]^(.5){}
&{\underline{H}_{i-2}S^{-V-2\sigma }}\ar[r]^(.5){a}
\ar[u]^(.5){u_{2\sigma }}
&{\underline{H}_{i-2}S^{- (n+2)\sigma}}\ar[r]^(.55){b}
\ar[u]^(.5){u_{2\sigma }}
&{\underline{H}_{i}L_{m}^{(-1)^{n}}}\ar@{=}[u]^(.5){}
}
\end{displaymath}
\noindent The rightmost $u_{2\sigma }$ is onto in all cases, so {\em every
element in $\underline{H}_{*}S^{-V}$ is divisible by $u_{2\sigma }$.}
The arguments above prove the following.
\begin{lem}\label{lem-div}
{\bf $RO (G)$-graded divisibility.} Let $G=C_{4}$ and $V=m\lambda
+n\sigma $ for $m,n\geq 0$.
\begin{enumerate}
\item [(i)] Each element in
$\underline{H}_{i}S^{-V} (G/G)$ or $\underline{H}_{i}S^{-V} (G/G')$ is
divisible by $a_{\lambda }$ or $\overline{a}_{\lambda }$ except when
$i=-|V|$.
\item [(ii)] Each element in
$\underline{H}_{i}S^{-V} (G/H)$ is
divisible by a suitable restriction of $u_{\lambda }$ except when
$i=-n$ for even $n$.
\item [(iii)] Each element in $\underline{H}_{i}S^{-V} (G/G)$ for odd
$n$ is divisible by $a_{\sigma }$ except when $i=-|V|$. For even $n$ it is
divisible by $a_{\sigma }$ except when $i=-n$, $i=-|V|$ and $i$ is odd
from $i=1-|V|$ to $-1-n$.
\item [(iv)] Each element in $\underline{H}_{i}S^{-V} (G/H)$ is
divisible by $u_{2\sigma }$, $u_{\sigma }$ or $\overline{u}_{\sigma } $.
\end{enumerate}
\end{lem}
In Theorem \ref{thm-div} we are looking for
divisibility by
\begin{numequation}\label{eq-divisors}
\begin{split}\left\{
\begin{array}[]{rl}
x_{1,3}
= a_{\sigma }a_{\lambda }&\in
\underline{H}_{0}S^{\sigma +\lambda } (G/G)
= \underline{H}_{1}S^{\rho } (G/G)\\
x_{4,4}
= a_{\lambda}^{2}u_{2\sigma } &\in
\underline{H}_{2}S^{2\lambda +2\sigma } (G/G)
= \underline{H}_{4}S^{2\rho } (G/G)\\
y_{2,2}
= \overline{a} _{\lambda}u_{\sigma } &\in
\underline{H}_{1}S^{\lambda +\sigma } (G/G')
= \underline{H}_{2}S^{\rho } (G/G')\\
x_{6,2}
= a_{\lambda }u_{2\sigma }u_{\lambda }&\in
\underline{H}_{4}S^{2\lambda +2\sigma } (G/G)
=\underline{H}_{6}S^{2\rho } (G/G) \\
x_{8,0}
= u_{2\sigma }u_{\lambda }^{2}&\in
\underline{H}_{6}S^{2\lambda +2\sigma } (G/G)
=\underline{H}_{8}S^{2\rho } (G/G) \\
y_{4,0}
= u_{\sigma }\overline{u} _{\lambda }&\in
\underline{H}_{3}S^{\lambda +\sigma } (G/G')
=\underline{H}_{4}S^{\rho } (G/G')
\end{array}\right.
\end{split}
\end{numequation}
\noindent In view of Lemma \ref{lem-div}(iv), we can ignore the
factors $u_{2\sigma }$ and $u_{\sigma }$ when analyzing such
divisibility.
\begin{cor}\label{cor-div}
{\bf Infinite divisibility by the divisors of (\ref{eq-divisors}).}
Let
\begin{displaymath}
V=m\lambda +n\sigma \qquad \mbox{for }m,n\geq 0.
\end{displaymath}
\noindent Then
\begin{itemize}
\item [$\bullet$] Each element of $\underline{H}_{i}S^{-V} (G/G)$ is
infinitely divisible by ${x_{1,3}=a_{\sigma }a_{\lambda }}$ for
${i>-n}$ when $n$ is even and for $i\geq -n$ when $n$ is odd.
\item [$\bullet$] Each element of $\underline{H}_{i}S^{-V} (G/G)$ is
infinitely divisible by ${x_{4,4}=a_{\lambda }^{2}u_{2\sigma }}$ for
$i>-|V|$.
\item [$\bullet$] Each element of $\underline{H}_{i}S^{-V} (G/G')$ is
infinitely divisible by ${y_{2,2}=\overline{a} _{\lambda }u_{\sigma
}}$ for $i>-|V|$.
\item [$\bullet$] Each element of $\underline{H}_{i}S^{-V} (G/G)$ is
infinitely divisible by ${x_{6,2}=a_{\lambda }u_{2\sigma }u_{\lambda
}}$ for $i>-|V|$ when $n$ is odd and for $-|V|<i<-n$ when $n$ is even.
\item [$\bullet$] Each element of $\underline{H}_{i}S^{-V} (G/G)$ is
infinitely divisible by ${x_{8,0}=u_{2\sigma }u_{\lambda }^{2}}$ for
$i<-n$ when $n$ is even and for all $i$ when $n$ is odd.
\item [$\bullet$] Each element of $\underline{H}_{i}S^{-V} (G/G')$ is
infinitely divisible by ${y_{4,0}=u_{\sigma }\overline{u} _{\lambda
}}$ for $i<-n$ when $n$ is even and for all $i$ when $n$ is odd.
\end{itemize}
\end{cor}
This implies Theorem \ref{thm-div}.
\section{The spectra $\kR$ and $\kH$}\label{sec-kH}
Before defining our spectrum we need to recall some definitions and
formulas from \cite{HHR}. Let $H\subset G$ be finite groups. In
\cite[\S2.2.3]{HHR} we define a norm functor $N_{H}^{G}$ from the
category of $H$-spectra to that of $G$-spectra. Roughly speaking, for
an $H$-spectrum $X$, $N_{H}^{G}X$ is the $G$-spectrum underlain by the
smash power $X^{(|G/H|)}$ with $G$ permuting the factors and $H$ leaving
each one invariant. When $G$ is cyclic, we will denote the orders of
$G$ and $H$ by $g$ and $h$, and the norm functor by $N_{h}^{g}$.
There is a $C_{2}$-spectrum $MU_{\reals}$ underlain by the complex
cobordism spectrum $MU$ with group action given by complex
conjugation. Its construction is spelled out in
\cite[\S B.12]{HHR}. For a finite cyclic 2-group $G$ we define
\begin{displaymath}
MU^{((G))}= N_{2}^{g}MU_{\reals}.
\end{displaymath}
\noindent Choose a generator $\gamma $ of $G$. In
\cite[(5.47)]{HHR} we defined generators
\begin{numequation}
\label{eq-rbar}
\orr_{k}=\orr^{G}_{k}\in
\upi^{C_{2}}_{k\rho_{2}}i_{C_{2}}^{*}MU^{((G))} (C_{2}/C_{2})
\cong \upi_{C_{2},k\rho_{2}}MU^{((G))} (G/G)
\end{numequation}
\noindent (note that this group is a module over $G/C_{2}$) and
\begin{align*}
r_{k}=\underline{r}_{1}^{2}(\orr_{k})
& \in \pi_{\ee, 2k}^{u}MU^{((G))} (G/G)
\cong \upi^{\ee}_{2k}MU^{((G))} (\ee/\ee)
=\pi_{2k}^{u}MU^{((G))}.
\end{align*}
\noindent The Hurewicz images of the $\orr_{k}$ (for which we use the
same notation) are defined in terms of the coefficients (see
Definition \ref{def-graded})
\begin{displaymath}
\om_{k}\in
\upi^{C_{2}}_{k\rho_{2}}\HZZ_{(2)}\wedge MU^{((G))} (C_{2}/C_{2})
= \upi_{C_{2},k\rho_{2}}\HZZ_{(2)}\wedge MU^{((G))} (G/G)
\end{displaymath}
\noindent of the logarithm of the formal group law $\overline{F} $
associated with the left unit map from $MU$ to $MU^{((G))}$. The
formula is
\begin{displaymath}
\sum _{k\geq 0}\overline{r}_{k}x^{k+1}
=\left(x+\sum_{\ell>0}\gamma(\overline{m}_{2^{\ell}-1})x^{2^{\ell}}\right)^{-1}
\circ \log_{\overline{F} }(x)
\end{displaymath}
\noindent where
\begin{displaymath}
\log_{\overline{F} }(x)=x+\sum_{k>0}\overline{m}_{k}x^{k+1}.
\end{displaymath}
For small $k$ we have
\begin{align*}
\orr_{1}
& = (1-\gamma ) (\om_{1})\\
\orr_{2}
& = \om_{2}-2\gamma (\om_{1} )(1-\gamma ) (\om_{1})\\
\orr_{3}
& = (1-\gamma ) (\om_{3})-\gamma (\om_{1})
(5\gamma (\om_{1})^{2}-6\gamma (\om_{1})\om_{1}+\om_{1}^{2} +2\om_{2})
\end{align*}
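These expressions can be checked mechanically. The following script is a sketch only: it reverts the series $x+\sum_{\ell>0}\gamma (\overline{m}_{2^{\ell}-1})x^{2^{\ell}}$ through degree 4 and composes it with $\log_{\overline{F} }$, modeling the coefficients $\overline{m}_{k}$ and $\gamma (\overline{m}_{k})$ as sample integers (both sides of each formula are polynomial in them, so this is a meaningful test).

```python
# Sketch: verify the formulas for rbar_1, rbar_2, rbar_3 by formal power
# series arithmetic through degree 4.  The values of mbar_k and
# gamma(mbar_k) below are hypothetical sample integers.

def mul(f, g, deg):
    """Product of two power series given as coefficient lists, truncated."""
    h = [0] * (deg + 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if fi and i + j <= deg:
                h[i + j] += fi * gj
    return h

def compose(f, g, deg):
    """f(g(x)) through degree deg; g must have zero constant term."""
    out = [0] * (deg + 1)
    p = [0] * (deg + 1)
    p[0] = 1                          # running power g^k, starting with g^0
    for k in range(deg + 1):
        out = [o + f[k] * pc for o, pc in zip(out, p)]
        p = mul(p, g, deg)
    return out

def revert(f, deg):
    """Compositional inverse of f = x + ..., solved degree by degree."""
    g = [0] * (deg + 1)
    g[1] = 1
    for n in range(2, deg + 1):
        g[n] = -compose(f, g, n)[n]   # kill the x^n coefficient of f(g)
    return g

m1, m2, m3 = 3, -7, 5             # sample values of mbar_1, mbar_2, mbar_3
g1, g3 = 2, -4                    # sample values of gamma(mbar_1), gamma(mbar_3)
logF = [0, 1, m1, m2, m3]         # log_F(x) = x + m1 x^2 + m2 x^3 + m3 x^4
P = [0, 1, g1, 0, g3]             # x + gamma(m1) x^2 + gamma(m3) x^4
R = compose(revert(P, 4), logF, 4)    # sum of rbar_k x^{k+1}

assert R[2] == m1 - g1
assert R[3] == m2 - 2 * g1 * (m1 - g1)
assert R[4] == (m3 - g3) - g1 * (5 * g1**2 - 6 * g1 * m1 + m1**2 + 2 * m2)
print("rbar_1, rbar_2, rbar_3 verified")
```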
Now let $G=C_{2}$ or $C_{4}$ and, in the latter case
$G'=C_{2}\subseteq G$. The generators $\orr^{G}_{k}$ are the
$\orr_{k}$ defined above. We also have elements $\orr^{G'}_{k}$
defined by similar formulas with $\gamma $ replaced by $\gamma^{2}$;
recall that $\gamma^{2} (\om_{k})= (-1)^{k}\om_{k}$.
They are the images of similar generators of
\begin{displaymath}
\upi^{C_{2}}_{k\rho_{2}}MU^{((G'))} (C_{2}/C_{2})
\cong \upi_{C_{2},k\rho_{2}}MU^{((G'))} (G'/G')
\end{displaymath}
\noindent under the left unit map
\begin{displaymath}
MU^{((G'))}\to MU^{((G'))}\wedge MU^{((G'))}\cong i^{*}_{G'} MU^{((G))}.
\end{displaymath}
\noindent
Thus we have
\begin{align*}
\orr_{1}^{G'}
& = 2 \om_{1} \\
\orr_{2}^{G'}
& = \om_{2}+4\om_{1}^{2}\\
\orr_{3}^{G'}
& = 2 \om_{3}+2\om_{1}\om_{2}+12\om_{1}^{3}
\end{align*}
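These three expressions can likewise be checked mechanically: substituting $\gamma \mapsto \gamma^{2}$ and $\gamma^{2} (\om_{k})= (-1)^{k}\om_{k}$ into the formulas for $\orr_{1}$, $\orr_{2}$ and $\orr_{3}$ above should reproduce them. A sketch, with the $\om_{k}$ modeled as random integers (both sides are polynomial in them, so agreement at random points is convincing):

```python
# Sketch: substitute gamma -> gamma^2, with gamma^2(mbar_k) = (-1)^k mbar_k,
# into the general formulas for rbar_1, rbar_2, rbar_3 and compare with the
# displayed rbar_k^{G'} formulas.
import random

random.seed(0)
for _ in range(100):
    m1, m2, m3 = (random.randint(-50, 50) for _ in range(3))
    g1, g3 = -m1, -m3   # gamma^2(mbar_1), gamma^2(mbar_3)
    r1 = m1 - g1
    r2 = m2 - 2 * g1 * (m1 - g1)
    r3 = (m3 - g3) - g1 * (5 * g1**2 - 6 * g1 * m1 + m1**2 + 2 * m2)
    assert r1 == 2 * m1
    assert r2 == m2 + 4 * m1**2
    assert r3 == 2 * m3 + 2 * m1 * m2 + 12 * m1**3
print("rbar_k^{G'} formulas verified")
```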
\noindent If we set $\orr_{2}=0$ and $\orr_{3}=0$, we get
\begin{numequation}
\label{eq-r3}
\begin{split}
\left\{\begin{array}[]{rl}
\orr_{1}^{G'}
&\hspace{-2.5mm} = \orr_{1,0}+\orr_{1,1}\\
\orr_{2}^{G'}
&\hspace{-2.5mm} = 3\orr_{1,0} \orr_{1,1}+\orr_{1,1}^{2}\\
\orr_{3}^{G'}
&\hspace{-2.5mm} = 5 \orr_{1,0}^{2}\orr_{1,1} +5\orr_{1,0} \orr_{1,1}^{2}
+\orr_{1,1}^{3}
= \orr_{1,1}( 5 \orr_{1,0}^{2} +5\orr_{1,0} \orr_{1,1}
+\orr_{1,1}^{2})\\
\gamma (\orr_{3}^{G'})
&\hspace{-2.5mm} = -\orr_{1,0}( 5 \orr_{1,1}^{2} -5\orr_{1,0} \orr_{1,1}
+\orr_{1,0}^{2})\\
\lefteqn{-\orr_{3}^{G'}\gamma (\orr_{3}^{G'})/\orr_{1,0} \orr_{1,1}}
\qquad\qquad\\
&\hspace{-2.5mm} = \left(5 \orr_{1,1}^2-5
\orr_{1,0} \orr_{1,1}+\orr_{1,0}^2\right) \left(\orr_{1,1}^2+5
\orr_{1,0} \orr_{1,1}+5 \orr_{1,0}^2\right)\\
&\hspace{-2.5mm} = (5 \orr_{1,0}^{4}-20\orr_{1,0}^{3}\orr_{1,1}
+\orr_{1,0}^{2}\orr_{1,1}^{2}
+20\orr_{1,0}\orr_{1,1}^{3}+5\orr_{1,1}^{4})\\
&\hspace{-2.5mm} = \left(5 (\orr_{1,0}^{2}-\orr_{1,1}^{2})^{2}
-20\orr_{1,0}\orr_{1,1}(\orr_{1,0}^{2}-\orr_{1,1}^{2})
+11 (\orr_{1,0}\orr_{1,1})^{2} \right)
\end{array} \right.\\
\end{split}
\end{numequation}
\noindent where $\orr_{1,0}=\orr_{1}$ and $\orr_{1,1}=\gamma (\orr_{1})$.
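The polynomial identities in (\ref{eq-r3}) can be verified by evaluating both sides at random integer values of $\orr_{1,0}$ and $\orr_{1,1}$ (written $a$ and $b$ below); since all the expressions involved are polynomials, agreement at enough points is conclusive. A sketch:

```python
# Sketch: check the identities in (eq-r3) with a = rbar_{1,0}, b = rbar_{1,1}
# at random integer points.
import random

random.seed(0)
for _ in range(100):
    a, b = random.randint(-30, 30), random.randint(-30, 30)
    r3p = b * (5 * a**2 + 5 * a * b + b**2)    # rbar_3^{G'}
    gr3p = -a * (5 * b**2 - 5 * a * b + a**2)  # gamma(rbar_3^{G'})
    prod = (5 * b**2 - 5 * a * b + a**2) * (b**2 + 5 * a * b + 5 * a**2)
    # -rbar_3^{G'} gamma(rbar_3^{G'}) / (a b) is the product of the factors...
    assert -r3p * gr3p == a * b * prod
    # ...which equals both the expanded and the regrouped forms:
    assert prod == (5 * a**4 - 20 * a**3 * b + a**2 * b**2
                    + 20 * a * b**3 + 5 * b**4)
    assert prod == (5 * (a**2 - b**2)**2
                    - 20 * a * b * (a**2 - b**2)
                    + 11 * (a * b)**2)
print("identities in (eq-r3) verified")
```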
\begin{defin}\label{def-kH}
{\bf $\kR$, $\KR$, $\kH$ and $\KH$.}
The $C_{2}$-spectrum $k_{\reals}$ (connective real $K$-theory), is the
spectrum obtained from $MU_{\reals}$ by killing the $r_{n}$s for
$n\geq 2$. Its periodic counterpart $\KR$ is the telescope obtained
from $\kR$ by inverting\linebreak ${\orr_{1}\in \upi_{\rho_{2}}\kR
(C_{2}/C_{2})}$.
The $C_{4}$-spectrum $\kH$ is obtained
from $MU^{((C_{4}))}$ by killing the $r_{n}$s and their conjugates for
$n\geq 2$. Its periodic counterpart $\KH$ is the telescope obtained
from $\kH$ by inverting a certain element $D\in \upi_{4\rho_{4}}\kH
(C_{4}/C_{4})$ defined below in (\ref{eq-D}) and Table
\ref{tab-pi*}.
\end{defin}
The image of $D$ in $\upi^{C_{2}}_{8\rho_{2}}\kH
(C_{2}/C_{2})\cong \upi_{C_{2},8\rho_{2}}\kH(C_{4}/C_{4})$ is
\begin{numequation}\label{eq-r24D}
\begin{split}
\left\{ \begin{array}{rl}
\underline{r}_{2}^{4} (D)
&\hspace{-2.5mm} = \orr_{1,0}\orr_{1,1}\orr_{3}^{G'}\gamma (\orr_{3}^{G'}) \\
&\hspace{-2.5mm} = \orr_{1,0}^{2}\orr_{1,1}^{2}\left( -5 \orr_{1,0}^{4}
+20 \orr_{1,0}^{3}\orr_{1,1}-\orr_{1,0}^{2}\orr_{1,1}^{2}
-20 \orr_{1,0} \orr_{1,1}^{3}-5 \orr_{1,1}^{4} \right)\\
&\hspace{-2.5mm} = -\orr_{1,0}^{2}\orr_{1,1}^{2}
\left(5(\orr_{1,0}^{2}-\orr_{1,1}^{2})^{2}
-20\orr_{1,0}\orr_{1,1}(\orr_{1,0}^{2}-\orr_{1,1}^{2})\right.\\
&\qquad \qquad \qquad \left. +11
(\orr_{1,0}\orr_{1,1})^{2} \right).
\end{array} \right.
\end{split}
\end{numequation}
\noindent It is fixed by the action of $G/G'$, while its factors
$\orr_{1,0}\orr_{1,1}$ and $\orr_{3}^{G'}\gamma (\orr_{3}^{G'})$ are
each negated by the action of the generator $\gamma $.
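The expansion in (\ref{eq-r24D}) can be checked the same way, using the factored forms of $\orr_{3}^{G'}$ and $\gamma (\orr_{3}^{G'})$ from (\ref{eq-r3}). A sketch, again with $a=\orr_{1,0}$ and $b=\orr_{1,1}$ modeled as random integers:

```python
# Sketch: check the expansion of r_2^4(D) in (eq-r24D).
import random

random.seed(0)
for _ in range(100):
    a, b = random.randint(-30, 30), random.randint(-30, 30)
    r3p = b * (5 * a**2 + 5 * a * b + b**2)    # rbar_3^{G'} from (eq-r3)
    gr3p = -a * (5 * b**2 - 5 * a * b + a**2)  # gamma(rbar_3^{G'})
    lhs = a * b * r3p * gr3p                   # rbar_{1,0} rbar_{1,1} rbar_3 gamma(rbar_3)
    rhs = a**2 * b**2 * (-5 * a**4 + 20 * a**3 * b - a**2 * b**2
                         - 20 * a * b**3 - 5 * b**4)
    assert lhs == rhs
print("(eq-r24D) verified")
```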
We remark that while $MU^{((C_{4}))}$ is $MU_{\reals}\wedge
MU_{\reals}$ as a $C_{2}$-spectrum, $\kH$ is {\em not}
$k_{\reals}\wedge k_{\reals}$ as a $C_{2}$-spectrum. The former has
torsion-free underlying homotopy but the latter does not.
\section{The slice {\SS} for $\KR$}\label{sec-Dugger}
In this section we describe the slice {\SS} for $\KR$. These results
are originally due to Dugger \cite{Dugger}, to which we refer for many
of the proofs. This case is far simpler than that of $\KH$, but it is
very instructive.
\begin{thm}\label{thm-Dugger-slice}
{\bf The slice $\EE_{2}$-terms for $\KR$ and $\kR$. }
The slices of $\KR$ are
\begin{displaymath}
P_{t}^{t}\KR=\mycases{
\Sigma^{(t/2)\rho_{2}}\HZZ
&\mbox{for $t$ even}\\
* &\mbox{otherwise}
}
\end{displaymath}
\noindent For $\kR$ they are the same in nonnegative dimensions, and
contractible below dimension 0.
\end{thm}
Hence we know the integrally graded homotopy groups of these slices by
the results of \S \ref{sec-chain}, and they are shown in Figure
\ref{fig-sseq-1}. It shows the $\EE_{2}$-term for the wedge of all of
the slices of $\KR$, and $\KR$ itself has the same $\EE_{2}$-term. It
turns out that the differentials and Mackey functor extensions are
determined by the fact that $\upi_{*}\KR$ is 8-periodic, while the
$\EE_{2}$-term is far from it. This explanation is admittedly
circular in that the proof of the Periodicity Theorem itself of
\cite[\S9]{HHR} relies on the existence of certain differentials
described below in (\ref{eq-slicediffs}).
\begin{thm}\label{thm-KR}
{\bf The slice {\SS} for $\KR$.} The differentials and extensions in the
{\SS} are as indicated in Figure \ref{fig-KR}.
\end{thm}
\proof
There are four phenomena we need to establish:
\begin{enumerate}
\item [(i)] The differentials in the first quadrant, which are
indicated by red lines.
\item [(ii)] The differentials in the third quadrant.
\item [(iii)] The exotic transfers in the first quadrant, which are
indicated by blue lines.
\item [(iv)] The exotic restrictions in the third quadrant, which are
indicated by dashed green lines.
\end{enumerate}
For (i), note that there is a nontrivial element in
$\EE_{2}^{3,6} (G/G)$, which is part of the 3-stem, but
nothing in the $(-5)$-stem. This means the former element must be
killed by a differential, and the only possibility is the one
indicated. The other differentials in the first quadrant follow from
this one and the multiplicative structure.
For (ii), we know that $\upi_{7}\KR=0$, so the same must be true
of $\upi_{-9}$. Hence the element in $\EE_{2}^{-3,-12}$
cannot survive, leading to the indicated third quadrant differentials.
For (iii), note that $\upi_{2}$ and $\upi_{-6}$ must be the same as
Mackey functors. This forces the indicated exotic transfers. For
each $m\geq 0$ one has a nonsplit short exact sequence of $C_{2}$
Mackey functors
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
0\ar[r]^(.5){}
&{\EE_{2}^{2,8m+4}}\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&{\upi_{8m+2}\KR}\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&{\EE_{2}^{0,8m+2}}\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&0\\
&\bullet
&\dot{\Box}
&\overline{ \Box}
}
\end{displaymath}
For (iv), note that $\upi_{-8}$ and $\upi_{0}$ must also agree. This
forces the indicated exotic restrictions. For each $m<0$ one has a
nonsplit short exact sequence
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
0\ar[r]^(.5){}
&{\EE_{2}^{0,8m}}\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&{\upi_{8m}\KR}\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&{\EE_{2}^{-2,8m-2}}\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&0\\
&{\diagbox}
&\dot{\Box}
&\bullet
}
\end{displaymath}
\qed
\begin{figure}\label{fig-KR}
\end{figure}
In order to describe $\upi_{*}\KR$ as a graded Green functor, meaning
a graded Mackey functor with multiplication, we recall some notation
from \S\ref{sec-chain}(i) and Definition \ref{def-aeu}. For $G=C_{2}$
we have elements
\begin{numequation}\label{eq-upi-HZ}
\begin{split}\left\{
\begin{array}{rlrl}
a=a_{\sigma }
&\hspace{-3mm} \in \upi_{-\sigma }\HZZ (G/G) \\
u=u_{2\sigma }
&\hspace{-3mm} \in \upi_{2-2\sigma }\HZZ (G/G) \\
x=u_{\sigma }
&\hspace{-3mm} \in \upi_{1-\sigma }\HZZ (G/\ee) &&\mbox{with $x^{2}=\Res(u)$} \\
z_{n} = e_{2n\rho_{2} }
&\hspace{-3mm} \in \upi_{2n (\sigma -1)}\HZZ (G/\ee)
&&\mbox{for $n>0$}\\
a^{-i}\Tr (x^{-2n-1})
&\hspace{-3mm} \in \upi_{(2n+1) (\sigma -1)+ i\sigma}\HZZ (G/G)
&&\mbox{for $i\geq 0$ and $n>0$}
\end{array} \right.
\end{split}
\end{numequation}
\noindent We will use the same symbols for the representatives of
these elements in the slice $E_{2}$-term. The filtrations of $u$, $x$
and $z_{n}$ are zero while that of $a$ is one. It follows that
$a^{-i}\Tr (x^{-2n-1})$ has filtration $-i$. The element $x$ is invertible.
In $\underline{E}_{2}^{*,*}$ we have relations in
\begin{numequation}\label{eq-upi-HZ-relns}
\begin{split}
\left\{
\begin{array}[]{rclrcl}
2a &\hspace{-2.5mm} = & 0\qquad &
\Res (a)
&\hspace{-2.5mm} = & 0\\
z_{n} &\hspace{-2.5mm} = & x^{-2n}\qquad &
\Tr (x^{n}) &\hspace{-2.5mm} = &\mycases{
2u^{n/2}
&\hspace{-1cm}\mbox{for $n$ even and $n\geq 0$}\\
\Tr_{}^{}(z_{-n/2})\neq 0\\
&\hspace{-1cm}\mbox{for $n$ even and $n< 0$}\\
0 &\hspace{-1cm}\mbox{for $n$ odd and $n>-3$}\\
\neq 0 &\hspace{-1cm}\mbox{for $n$ odd and $n\leq -3$.}
}
\end{array}
\right.
\end{split}
\end{numequation}
We also have the element $\orr_{1}\in \upi_{1+\sigma }\kR (G/G)$, the
image of the element of the same name in
$\upi_{1+\sigma}MU_{\reals} (G/G)$ of (\ref{eq-rbar}). We use the same
symbol for its representative in $\underline{E}_{2}^{0,1+\sigma }
(G/G)$. Then we have integrally graded elements
\begin{align*}
\eta
= a \orr_{1}
&\in \underline{E}_{2}^{1,2} (G/G) \\
v_{1}
= x\cdot \Res(\orr_{1})
&\in \underline{E}_{2}^{0,2} (G/\ee)
\qquad \mbox{with }\gamma (v_{1})=-v_{1} \\
u\orr_{1}^{2}
&\in \underline{E}_{2}^{0,4} (G/G) \\
w=2u\orr_{1}^{2}
&\in \underline{E}_{2}^{0,4} (G/G) \\
b=u^{2}\orr_{1}^{4}
&\in \underline{E}_{2}^{0,8} (G/G)
\qquad \mbox{with }w^{2}=4b,
\end{align*}
\noindent where $\eta $ and $v_{1}$ are the images of the elements of
the same name in $\pi_{1}S^{0}$ and $\pi_{2}k$, and $w$ and $b$ are
permanent cycles. The elements $x$, $v_{1}$ and $b$ are invertible.
Note that for $n<0$,
\begin{align*}
\underline{E}_{2}^{0,2n} (G/G)
& = \mycases{
0 &\mbox{for }n=-1\\
\Z \mbox{ generated by } \Tr_{}^{}(v_{1}^{-n})
&\mbox{for $n$ even}\\
\Z/2 \mbox{ generated by } \Tr_{}^{}(v_{1}^{-n})
&\mbox{for $n$ odd and }n<-1
}
\end{align*}
\noindent so each group is killed by $\eta =a\orr_{1}$ by Lemma \ref{lem-hate}.
Then we have
\begin{align*}
d_{3} (u)
& = a^{3}\orr_{1}
\qquad \mbox{by (\ref{eq-slicediffs2}) below,} \\
\mbox{so }
d_{3} (u\orr_{1}^{2}) = d_{3} (u)\orr_{1}^{2}
& = a^{3}\orr_{1}^{3} =\eta^{3}\\
\Tr_{1}^{2}(x)
& = a^{2}\orr_{1}
\qquad \mbox{by (\ref{eq-exotic-transfers}), raising filtration by 2,} \\
\mbox{so }
\Tr_{1}^{2}(v_{1})
& = \eta^{2}.
\end{align*}
Thus we get
\begin{thm}\label{thm-upi-KR}
{\bf The homotopy of $\KR$ as an integrally graded Green functor}.
With notation as above,
\begin{align*}
\upi_{*}\KR (G/\ee)
& = \Z[v_{1}^{\pm 1}] \\
\upi_{*}\KR (G/G)
& = \Z[b^{\pm 1}, w, \eta ]/ (2\eta ,\eta^{3},w\eta ,w^{2}-4b)
\end{align*}
\noindent with
\begin{align*}
\Tr(v_{1}^{i})
& = \mycases{
2b^{j}
&\mbox{for }i=4j\\
\eta^{2}b^{j}
&\mbox{for }i=4j+1\\
wb^{j}
&\mbox{for }i=4j+2\\
0 &\mbox{for }i=4j+3
} \\
\Res(b)
& = v_{1}^{4},\quad
\Res(w)
= 2 v_{1}^{2},\mbox{ and }
\Res(\eta )
= 0.
\end{align*}
\noindent For each $j<0$, $b^{j}$ has filtration $-2$ and supports an
exotic restriction in the slice spectral sequence as indicated in
Figure \ref{fig-KR}. Both $v_{1}\Res(b^{j})$ and
$\eta^{2}b^{j}$ have filtration zero, so the transfer relating them
does not raise filtration.
\end{thm}
Now we will describe the $RO (G)$-graded slice spectral sequence and
homotopy of $\KR$. The former is trigraded since $RO (G)$ itself is
bigraded, being isomorphic as an abelian group to $\Z \oplus \Z$. For
each integer $k$, one can imagine a chart similar to Figure
\ref{fig-KR} converging to the graded Mackey functor $\upi_{k\sigma
+*}\KR$. Figure \ref{fig-KR} itself is the one for $k=0$. The product
of elements in the $k$th and $\ell $th charts lies in the $(k+\ell )$th
chart. We have elements as in (\ref{eq-upi-HZ})
\begin{align*}
a=a_{\sigma }
& \in \underline{E}_{2}^{1,1-\sigma } (G/G) \\
u=u_{2\sigma }
& \in \underline{E}_{2}^{0,2-2\sigma } (G/G) \\
x=u_{\sigma }
& \in \underline{E}_{2}^{0,1-\sigma } (G/\ee)
&&\mbox{with $\gamma (x)=-x$ and $x^{2}=\Res(u)$} \\
z_{n} = x^{-2n}
& \in \underline{E}_{2}^{0,-2n+2n\sigma } (G/\ee)
&&\mbox{for $n>0$}\\
a^{-i}\Tr (x^{-2n-1})
& \in \underline{E}_{2}^{-i,-i-2n+2n\sigma} (G/G)
&&\mbox{for $i\geq 0$ and $n>0$}\\
\orr_{1}
& \in \underline{E}_{2}^{0,1+\sigma } (G/G)
\end{align*}
\noindent where $a$, $x$, $z_{n}$ and $\orr_{1}$ are permanent cycles,
both $x$ and $\orr_{1}$ are invertible, and there are relations as in
(\ref{eq-upi-HZ-relns}). We also know that
\begin{align*}
d_{3} (u)
& = a^{3}\orr_{1} \qquad \mbox{by (\ref{eq-slicediffs2}) below} \\
\aand
\Tr_{1}^{2}(x)
& = a^{2}\orr_{1} \qquad \mbox{by (\ref{eq-exotic-transfers})} .
\end{align*}
\begin{thm}\label{thm-ROG-graded}
{\bf The $RO (G)$-graded slice spectral sequence for $\KR$} can be
obtained by tensoring that of Figure \ref{fig-KR} with
$\Z[\orr_{1}^{\pm 1}]$; that is, for any integer $k$,
\begin{align*}
\underline{E}_{2}^{s,t +k\sigma } (G/G)
& \cong \orr_{1}^{k} \underline{E}_{2}^{s,t -k }(G/G ) \\
\aand
\underline{E}_{2}^{s,t +k\sigma } (G/\ee)
& \cong \Res(\orr_{1}^{k}) \underline{E}_{2}^{s,t -k }(G/\ee )
\end{align*}
\noindent and $\upi_{t+k\sigma}\KR$ has a similar description.
\end{thm}
\proof The element $\orr_{1}$ and its restriction are invertible
permanent cycles, so multiplication by either induces an isomorphism
in the spectral sequence. \qed
\begin{rem}\label{rem-a3}
In {\bf the $RO (G)$-graded slice {\SS } for $\kR$} one has $d_{3}
(u)=\orr_{1}a^{3}$, but $a^{3}$ itself, and indeed all higher powers
of $a$, survive to $\underline{E}_{4}=\underline{E}_{\infty }$. Hence
the $\underline{E}_{\infty }$-term of this {\SS} does {\bf not} have
the horizontal vanishing line that we see in $\underline{E}_{4}$-term
of Figure \ref{fig-KR}. However when we pass from $\kR$ to $\KR$,
$\orr_{1}$ becomes invertible and we have
\begin{displaymath}
d_{3} (\orr_{1}^{-1}u)=a^{3}.
\end{displaymath}
\end{rem}
We can keep track of the groups in this trigraded {\SS } with the
help of a four-variable {\Ps} $g (\underline{E}_{r} (G/G))\in \Z[[x,y,z,t]]$
in which the rank of $\underline{E}_{r}^{s,i+j\sigma } (G/G)$ is the
coefficient in $\Z[[t]]$ of $x^{i-s}y^{j}z^{s}$. The variable $t$
keeps track of powers of two. Thus a copy of the integers
is represented by $1/ (1-t)$ or (when it is the kernel of a differential
of the form $\Z \to \Z/2$) $t/ (1-t)$. Let
\begin{numequation}\label{eq-aur}
\begin{split}
\wa =y^{-1}z,\qquad
\uu =x^{2}y^{-1}\qquad \aand
\rr =xy.
\end{split}
\end{numequation}
\noindent Since
\begin{displaymath}
\underline{E}_{2} (G/G) =\Z[a,u,\orr_{1}]/ (2a),
\end{displaymath}
\noindent we have
\begin{align*}
g (\underline{E}_{2} (G/G))
& = \left(\frac{1}{1-t}+ \frac{\wa}{1-\wa}\right)
\frac{1}{(1-\uu) (1-\rr)} \\
g (\underline{E}_{4} (G/G))
& = g (\underline{E}_{2} (G/G))
-\frac{\uu+\rr\wa^{3}}
{(1-\wa) (1-\uu^{2}) (1-\rr)}.
\end{align*}
\noindent We subtract the indicated expression from $g
(\underline{E}_{2} (G/G))$ because we have differentials
\begin{displaymath}
d_{3} (a^{i}\orr_{1}^{j}u^{2k+1})= a^{i+3}\orr_{1}^{j+1}u^{2k}
\qquad \mbox{for all }i,j,k\geq 0.
\end{displaymath}
\noindent Pursuing this further we get
\begin{align*}
g (\underline{E}_{4} (G/G))
& = \left(\frac{1}{1-t}+ \frac{\wa}{1-\wa}\right)
\frac{1}{(1-\uu) (1-\rr)}
-\frac{\uu}
{ (1-\uu^{2}) (1-\rr)}\\
& \qquad
-\frac{\wa\uu+\rr\wa^{3}}
{(1-\wa) (1-\uu^{2}) (1-\rr)} \\
& = \frac{1+\uu-\uu (1-t)}
{(1-t) (1-\uu^{2}) (1-\rr)}
+\frac{\wa (1+\uu)
-\wa (\uu+\wa^{2}\rr)}
{(1-\wa) (1-\uu^{2}) (1-\rr)}\\
& = \frac{1+t\uu}
{(1-t) (1-\uu^{2}) (1-\rr)}
+\frac{\wa-\wa^{3}+\wa^{3}-\wa^{3}\rr}
{(1-\wa) (1-\uu^{2}) (1-\rr)}\\
& = \frac{1+t\uu}
{(1-t) (1-\uu^{2}) (1-\rr)}
+\frac{\wa+\wa^{2}}
{ (1-\uu^{2}) (1-\rr)}
+\frac{\wa^{3}}
{(1-\wa) (1-\uu^{2})} .
\end{align*}
\noindent The third term of this expression represents the elements of
filtration above two (referred to in \ref{rem-a3}) which disappear
when we pass to $\KR$. The first term represents the elements of
filtration zero, which include
\begin{numequation}\label{eq-[2u]]}
\begin{split}
1,\qquad
[2u]\in \langle 2,\,a ,\,a^{2}\orr_{1} \rangle
\qquad \aand
[u^{2}]\in \langle a ,\,a^{2}\orr_{1},\,a ,\,a^{2}\orr_{1} \rangle
\end{split}
\end{numequation}
\noindent Here we use the notation $[2u]$ and $[u^{2}]$ to indicate
the images in $\underline{E}_{4}$ of the elements $2u$ and $u^{2}$ in
$\underline{E}_{2}$; see Remark \ref{rem-abuse} below. The former is
{\em not} divisible by 2 and the latter is not a square, since $u$
itself is not present in $\underline{E}_{4}$, where the Massey
products are defined. For an introduction to Massey products, we
refer the reader to \cite[A1.4]{Rav:MU}.
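The closed form for $g (\underline{E}_{4} (G/G))$ obtained above is an identity of rational functions in $t$, $\wa$, $\uu$ and $\rr$, so it can be verified exactly at random rational points. A sketch using exact rational arithmetic:

```python
# Sketch: check the closed form for g(E_4(G/G)) as a rational-function
# identity in t, a-hat (A), u-hat (U), r-hat (R) at random rational points.
from fractions import Fraction as F
import random

random.seed(0)
for _ in range(20):
    t, A, U, R = (F(random.randint(2, 9), random.randint(10, 19))
                  for _ in range(4))
    e2 = (1 / (1 - t) + A / (1 - A)) / ((1 - U) * (1 - R))
    e4 = e2 - (U + R * A**3) / ((1 - A) * (1 - U**2) * (1 - R))
    closed = ((1 + t * U) / ((1 - t) * (1 - U**2) * (1 - R))
              + (A + A**2) / ((1 - U**2) * (1 - R))
              + A**3 / ((1 - A) * (1 - U**2)))
    assert e4 == closed
print("g(E_4(G/G)) closed form verified")
```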
We now make a similar computation where we enlarge $\underline{E}_{2} (G/G)$
by adjoining $\orr_{1}^{-1}u$ and denote the resulting {\SS } terms by
$\underline{E}'_{2}$ and $\underline{E}'_{4}$.
Let
\begin{displaymath}
\www = \rr^{-1}\uu= xy^{-3}.
\end{displaymath}
\noindent Then since
\begin{displaymath}
\underline{E}'_{2} (G/G) =\Z[a,\orr_{1}^{-1}u,\orr_{1}]/ (2a),
\end{displaymath}
\noindent we have
\begin{align*}
g (\underline{E}'_{2} (G/G))
& = \left(\frac{1}{1-t}+ \frac{\wa}{1-\wa}\right)
\frac{1}{(1-\www) (1-\rr)} \\
g (\underline{E}'_{4} (G/G))
& = g (\underline{E}'_{2} (G/G))
-\frac{\www+\wa^{3}}
{(1-\wa) (1-\www^{2}) (1-\rr)} \\
& = \left(\frac{1}{1-t}+ \frac{\wa}{1-\wa}\right)
\frac{1}{(1-\www) (1-\rr)}
-\frac{\www}
{ (1-\www^{2}) (1-\rr)}\\
& \qquad
-\frac{\wa\www+\wa^{3}}
{(1-\wa) (1-\www^{2}) (1-\rr)} \\
& = \frac{1+\www-\www (1-t)}
{(1-t) (1-\www^{2}) (1-\rr)}
+\frac{\wa (1+\www)
-\wa (\www+\wa^{2})}
{(1-\wa) (1-\www^{2}) (1-\rr)}\\
& = \frac{1+t\www}
{(1-t) (1-\www^{2}) (1-\rr)}
+\frac{\wa+\wa^{2}}
{ (1-\www^{2}) (1-\rr)}
\end{align*}
\noindent and there is nothing in $\underline{E}'_{4}$ with filtration
above two. As far as we know there is no modification of the spectrum
$\kR$ corresponding to this modification of $\underline{E}_{r}$.
However, the map $\underline{E}_{r}\kR \to \underline{E}_{r}\KR$
clearly factors through $\underline{E}'_{r}$.
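The primed computation can be checked the same way; note that the closed form now has no term with a $1/ (1-\wa)$ factor, reflecting the absence of elements of filtration above two. A sketch:

```python
# Sketch: check the closed form for g(E'_4(G/G)) as a rational-function
# identity in t, a-hat (A), w-hat (W), r-hat (R) at random rational points.
from fractions import Fraction as F
import random

random.seed(0)
for _ in range(20):
    t, A, W, R = (F(random.randint(2, 9), random.randint(10, 19))
                  for _ in range(4))
    e2 = (1 / (1 - t) + A / (1 - A)) / ((1 - W) * (1 - R))
    e4 = e2 - (W + A**3) / ((1 - A) * (1 - W**2) * (1 - R))
    closed = ((1 + t * W) / ((1 - t) * (1 - W**2) * (1 - R))
              + (A + A**2) / ((1 - W**2) * (1 - R)))
    assert e4 == closed
print("g(E'_4(G/G)) closed form verified")
```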
\section{Some elements in the homotopy groups of $\kH$ and
$\KH$}\label{sec-more}
For $G=C_{4}$ we will often use a (second) subscript $\epsilon $ on
elements such as $r_{n}$ to indicate the action of a generator $\gamma
$ of $G=C_{4}$, so $\gamma (x_{\epsilon })=x_{1+\epsilon }$ and
$x_{2+\epsilon }=\pm x_{\epsilon }$. Then we have
\begin{numequation}
\label{eq-pikH}
\pi_{*}^{u}\kH=\upi_{*}\kH (G/\ee)=\upi_{\ee,*}\kH (G/G)
=\Z[r_{1},\,\gamma (r_{1})]=\Z[r_{1,0},\,r_{1,1}]
\end{numequation}
\noindent where $\gamma^{2} (r_{1,\epsilon })=-r_{1,\epsilon }$. Here
we use $r_{1,\epsilon }$ and $\orr_{1,\epsilon }$ to denote the images
of elements of the same name in the homotopy of $MU^{((G))}$.
\begin{numequation}\label{eq-bigraded}
\begin{split}
\includegraphics[width=9cm]{fig-small-ss.pdf}
\end{split}
\end{numequation}
\noindent Here the vertical coordinate is $s$ and the horizontal
coordinate is $|t|-s$. More information about these elements can be
found in Table \ref{tab-pi*} below.
\noindent
We are using the following notational convention. When
$x=\Tr_{2}^{4}(y)$ for some element ${y\in \upi_{\star}\kH (G/G')}$,
we will write $x'=\Tr_{2}^{4}(u_{\sigma}y)$. Examples above include
the cases $x=\eta $ and $x=\ot_{2}$. The primes could be iterated,
{\ie } we might write $x^{(k)}=\Tr_{2}^{4}(u_{\sigma}^{k}y)$, but
this turns out to be unnecessary.
The group action (by $G'$ on $\orr_{1,\epsilon }$, $a_{\sigma_{2}}$
and $u_{\sigma_{2}}$, and by $G$ on all the others) fixes each
generator except $u_{\sigma}$ and $u_{\sigma_{2}}$.
For them the action is given by
\begin{displaymath}
\xymatrix
@R=1mm
@C=8mm
{
u_{\sigma}\ar@{<->}[r]^(.5){\gamma}
&-u_{\sigma}
&{\mbox{and} }
&u_{\sigma_{2} }\ar@{<->}[r]^(.5){\gamma^{2}}
&-u_{\sigma_{2} }
}
\end{displaymath}
\noindent by Theorem \ref{thm-module}. This is compatible with the
following $G$-action:
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
r_{1,0}\ar[r]^(.5){\gamma}
&r_{1,1}\ar[d]^(.5){\gamma}\\
-r_{1,1}\ar[u]^(.5){\gamma} &-r_{1,0}\ar[l]^(.5){\gamma} }
\qquad \mbox{where } r_{1,\epsilon} =
\underline{r}_{1}^{2}(\orr_{1,\epsilon })\in \upi_{\ee,2}\kH (G/G) .
\end{displaymath}
We will see below (Theorem \ref{thm-d3ulambda}) that $d_{5}
(u_{2\sigma})=a_{\sigma}^{3}a_{\lambda}\normrbar_{1}$ and
$[u_{2\sigma}^{2}]$ is a permanent cycle. Since all transfers are
killed by $a_{\sigma}$ multiplication (Lemma \ref{lem-hate}), this
implies that $[u_{2\sigma}x]$ is a permanent
cycle representing the Toda bracket
\begin{displaymath}
[u_{2\sigma}x]
= [u_{2\sigma}\Tr_{2}^{4}(y)]
= \langle x,\,a_{\sigma} ,\,a_{\sigma}^{2}a_{\lambda}\normrbar_{1} \rangle.
\end{displaymath}
\noindent This element is $x''$ since in $\EE_{2}$ we have
(using the Frobenius relation (\ref{eq-Frob}))
\begin{displaymath}
x''=\Tr_{2}^{4}(u_{\sigma}^{2}y)=\Tr_{2}^{4}(\Res_{2}^{4}(u_{2\sigma})y)
= u_{2\sigma}\Tr_{2}^{4}(y)= u_{2\sigma}x.
\end{displaymath}
\noindent Similarly $x'''=u_{2\sigma}x'$. For $k\geq 4$,
$x^{(k)}=u_{2\sigma}^{2}x^{(k-4)}$ in $\upi_{\star}$ as well as
$\EE_{2}$.
The Periodicity Theorem \cite[Thm. 9.19]{HHR} states that inverting a class
in $\upi_{4\rho_{4}}\kH (G/G)$ whose image under
$\ur_{2}^{4}\Res_{2}^{4}$ is divisible by
$\orr_{3,0}^{G'}\orr_{3,1}^{G'}$ (see (\ref{eq-r3})) and
$\orr_{1,0}\orr_{1,1}=\orr_{1,0}^{G}\orr_{1,1}^{G}$ makes
$u_{8\rho_{4}}$ a permanent cycle. One such class is
\noindent
\begin{numequation}\label{eq-D}
\begin{split}
\begin{array}[]{rl}
D&\hspace{-2.5mm} = N_{2}^{4} (\normrbar_{2}^{G'})\normrbar_{1}^{G}
= u_{2\sigma}^{-2}(\ur_{2}^{4}\Res_{2}^{4})^{-1}
\left(\orr_{1,0}^{G}\orr_{1,1}^{G}
\orr_{3,0}^{G'}\orr_{3,1}^{G'} \right)\\
&\hspace{-2.5mm} = \normrbar_{1}^{2} (-5\ot_{2}^{2}+20 \ot_{2}\normrbar_{1}
+9\normrbar_{1}^{2}) \in \upi_{4\rho_{4}}\kH (G/G),\\
&\qquad \mbox{where }\ot_{2}=\Tr_{2}^{4}(u_{\sigma }^{-1}[\orr_{1,0}^{2}])
\mbox{ and $\normrbar_{1}$ is as in (\ref{eq-normrbar}) below,}
\end{array}
\end{split}
\end{numequation}
\noindent
and $\KH = D^{-1}\kH$. It follows that $\Sigma^{32}\KH$ is
equivalent to $\KH$.
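The exponent $32$ can be seen by a degree count. Since $|\rho_{4}|=4$,
we have $D^{2}\in \upi_{8\rho_{4}}\KH (G/G)$, and the Periodicity
Theorem makes $u_{8\rho_{4}}\in \EE_{2}^{0,32-8\rho_{4}} (G/G)$ a
permanent cycle. The product
\begin{displaymath}
D^{2}u_{8\rho_{4}}\in \upi_{32}\KH (G/G)
\end{displaymath}
\noindent is then invertible, and multiplication by it induces the
equivalence.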
The Slice and Reduction Theorems \cite[Thms. 6.1 and 6.5]{HHR} imply that the
$2k$th slice of $\kH$ is the $2k$th wedge summand of
\begin{displaymath}
\HZZ \wedge N_{2}^{4}\left(\bigvee_{i\geq 0}S^{i\rho_{2}} \right).
\end{displaymath}
\noindent It follows that over $G'$ the $2k$th slice is a wedge of
$k+1$ copies of $\HZZ \wedge S^{k\rho_{2}}$. Over $G$ we get the
wedge of the appropriate number of copies of $G_{+}\smashove{G'}\HZZ
\wedge S^{k\rho_{2}}$, wedged with a single copy of $\HZZ \wedge
S^{(k/2)\rho_{4}}$ for even $k$. This is spelled out in Theorem
\ref{thm-sliceE2} below.
The group $\upi^{G'}_{\rho_{2}}\kH (G'/\ee)$ is {\em
not} in the image of the group action restriction
$\ur_{2}^{4}$ because $\rho_{2}$ is not the restriction of a
{\rep} of $G$. However, $\pi_{2}^{u}\kH$ is refined (in the
sense of \cite[Def. 5.28]{HHR}) by a map from
\begin{numequation}
\label{eq-s1}
\xymatrix
@R=5mm
@C=10mm
{
S_{\rho_{2}} : = G_{+}\smashove{G'}S^{\rho_{2}}
\ar[r]^(.7){\os_{1} }
&\kH.
}
\end{numequation}
\noindent The Reduction Theorem implies that the 2-slice
$P_{2}^{2}\kH$ is $S_{\rho_{2}}\wedge \HZZ$. We know that
\begin{displaymath}
\upi_{2} ( S_{\rho_{2}}\wedge \HZZ) = \widehat{\oBox }.
\end{displaymath}
\noindent We use the symbols $r_{1}$ and $\gamma (r_{1})$ to denote
the generators of the underlying abelian group of
$\widehat{\oBox } (G/\ee)=\Z[G/G']_{-}$. These elements have
trivial fixed point transfers and
\begin{displaymath}
\upi_{2} ( S_{\rho_{2}}\wedge \HZZ) (G/G')=0.
\end{displaymath}
Table \ref{tab-pi*} describes some elements in the slice {\SS} for $\kH$ in
low dimensions, which we now discuss.
Given an element in $\pi_{\star}MU^{((G))}$, we will often use the
same symbol to denote its image in $\pi_{\star}\kH$. For example, in
\cite[\S9.1]{HHR}
\begin{numequation}
\label{eq-normrbar}
\normrbar_{n}\in \pi_{(2^{n}-1)\rho_{4}}^{G}MU^{((G))}
=\upi_{(2^{n}-1)\rho_{4}}^{G}MU^{((G))} (G/G)
\end{numequation}
\noindent was defined to be the composite
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
S^{(2^{n}-1)\rho_{4}}\ar@{=}[r]^(.5){}
&N_{2}^{4}S^{(2^{n}-1)\rho_{2}}\ar[rr]^(.5){N_{2}^{4}\orr_{2^{n}-1} }
& &N_{2}^{4}MU^{((G))}\ar[r]^(.5){}
&MU^{((G))}.
}
\end{displaymath}
\noindent We will use the same symbol to denote its image in
$\upi_{(2^{n}-1)\rho_{4}}^{G}\kH (G/G)$.
The element $\eta \in \pi_{1}S^{0}$ (coming from the Hopf map
$S^{3}\to S^{2}$) has image ${a_{\sigma}\orr_{1} \in
\upi^{G'}_{1}\kR (G'/G')}$. There are two corresponding
elements
\begin{displaymath}
\eta_{\epsilon }\in \upi^{G'}_{1}\kH (G'/G')
\qquad \mbox{for } \epsilon =0,1.
\end{displaymath}
\noindent We use the same symbol for their preimages under
$\ur_{2}^{4}$ in $\upi^{G}_{1}\kH (G/G')$, and there we have
\begin{displaymath}
\eta_{\epsilon}=a_{\sigma_{2}}\orr_{1,\epsilon}.
\end{displaymath}
\noindent We denote by $\eta $ again the image of either under the
transfer $\Tr_{2}^{4}$, so
\begin{displaymath}
{\Res_{2}^{4}(\eta)=\eta_{0}+\eta_{1}}.
\end{displaymath}
Its cube is killed by a $d_{3}$ in the slice {\SS}, as is the sum of
any two monomials of degree 3 in the $\eta_{\epsilon }$. It follows
that in $\EE_{4}$ each such monomial is equal to
$\eta_{0}^{3}$. This common value has a nontrivial transfer, which we
denote by $x_{3}$.
In \cite[Def. 5.51]{HHR} we defined
\begin{numequation}
\label{eq-fk}
f_{k}=a_{\overline{\rho } }^{k}N_{2}^{g} (\orr_{k})
\in \upi_{k}MU^{((G))} (G/G)
\end{numequation}
\noindent for a finite cyclic 2-group $G$. In particular,
$f_{2^{n-1}}=a_{\overline{\rho } }^{2^{n}-1}\normrbar_{n}$ for
$\normrbar_{n}$ as in (\ref{eq-normrbar}). The slice filtration of
$f_{k}$ is $k(g-1)$ and we will see below (Lemma \ref{lem-hate} and,
for $G=C_{4}$, Theorem \ref{thm-d3ulambda}) that
\begin{numequation}
\label{eq-Tr(usigma)}
\Tr_{G'}^{G}(u_{\sigma}) = a_{\sigma}f_{1}.
\end{numequation}
Note that $u_{\sigma}\in \EE_{2}^{0,1-\sigma} (G/G')$
since the maximal subgroup for which the sign {\rep} $\sigma $ is
oriented is $G'$, on which it restricts to the trivial {\rep} of
degree 1. This group depends only on the restriction of the $RO
(G)$-grading to $G'$, and the isomorphism extends to differentials as
well. This means that $u_{\sigma}$ is a placeholder corresponding
to the permanent cycle $1\in \EE_{2}^{0,0} (G/G')$.
For $G=C_{4}$ (\ref{eq-Tr(usigma)}) implies
\begin{displaymath}
\Tr_{2}^{4}(u_{\sigma}) = a_{\sigma}f_{1}
= a_{\sigma}^{2}a_{\lambda}\normrbar_{1}.
\end{displaymath}
\noindent For example
\begin{align*}
\Tr_{2}^{4}(\eta_{0}\eta_{1})
& = \Tr_{2}^{4}(a_{\sigma_{2}}^{2}\orr_{1,0}\orr_{1,1})
= \Tr_{2}^{4}(u_{\sigma}\Res_{2}^{4}(a_{\lambda}\normrbar_{1}))\\
& = \Tr_{2}^{4}(u_{\sigma})a_{\lambda}\normrbar_{1}
= a_{\sigma}f_{1}a_{\lambda}\normrbar_{1}
= f_{1}^{2}.
\end{align*}
The Hopf element $\nu \in \pi_{3}S^{0}$ has image
\begin{displaymath}
a_{\sigma}u_{\lambda}\normrbar_{1}\in \upi_{3}\kH (G/G),
\end{displaymath}
\noindent so we also denote the latter by $\nu $. (We will see below
in (\ref{eq-d-ul}) that $u_{\lambda}$ is not a permanent cycle, but
$\overline{\nu } := a_{\sigma}u_{\lambda}$ is (\ref{eq-alphax}).) It has an
exotic restriction $\eta_{0}^{3}$ (filtration jump two), which implies
that
\begin{displaymath}
2\nu =\Tr_{2}^{4}(\Res_{2}^{4}(\nu ))=\Tr_{2}^{4}(\eta_{0}^{3})=x_{3}.
\end{displaymath}
\noindent One way to see this is to use the Periodicity Theorem to
equate $\upi_{3}\kH$ with $\upi_{-29}\kH$,
which can be shown to be the Mackey functor $\circ $ in slice
filtration $-32$. Another argument not relying on periodicity is given
below in Theorem \ref{thm-d3ulambda}.
The exotic restriction on $\nu $ implies
\begin{displaymath}
\Res_{2}^{4}(\nu^{2})=\eta_{0}^{6},
\end{displaymath}
\noindent with filtration jump 4.
\begin{thm}\label{thm-Hur}
{\bf The Hurewicz image.} The elements $\nu \in \upi_{3}\kH (G/G)$,
\linebreak ${\epsilon \in \upi_{8}\kH (G/G)}$, $\kappa \in
\upi_{14}\kH (G/G)$, and $\overline{\kappa} \in \upi_{20}\kH (G/G)$
are the images of elements of the same names in $\pi_{*}S^{0}$. The
image of the Hopf map $\eta\in\pi_{1}S^{0}$ is either
$\eta=\Tr_{2}^{4}(\eta_{\epsilon})$ or its sum with $f_{1}$.
\end{thm}
We refer the reader to \cite[Table A3.3]{Rav:MU} for more information
about these elements.
\proof Suppose we know this for $\nu $ and $\overline{\kappa} $.
Then $\Delta_{1}^{-4}\nu $ is represented by an element of filtration
$-3$ whose product with $\nu^{2}$ is nontrivial. This implies that
$\nu^{3}$ has nontrivial image in $\underline{\pi }_{9}\kH (G/G)$.
This is a nontrivial multiplicative extension in the first quadrant,
but not in the third. The spectral sequence representative of
$\nu^{3}$ has filtration 11 instead of 3. We will see later that
$\nu^{3}=2n$ where $n$ has filtration 1, and $\nu^{3}$ is the transfer
of an element in filtration 1.
Since $\nu^{3}=\eta \epsilon $ in $\pi_{*}S^{0}$, this implies that
$\eta $ and $\epsilon $ are both detected and have the images stated
in Table \ref{tab-pi*}. It follows that $\epsilon \overline{\kappa}$
has nontrivial image in $\upi_{*}\kH (G/G)$ as well. Since $\kappa^{2}=\epsilon
\overline{\kappa}$ in $\pi_{*}S^{0}$, $\kappa $ must also be
detected. Its only possible image is the one indicated.
Both $\nu $ and $\overline{\kappa}$ have images of order $8$ in
$\pi_{*}TMF$ and its $K (2)$ localization. The latter is the homotopy
fixed point set of the action of the binary tetrahedral group $G_{24}$
on $E_{2}$. This in turn is a retract of the homotopy fixed
point set of the quaternion group $Q_{8}$. A restriction and transfer
argument shows that both elements have order at least 4 in the
homotopy fixed point set of $C_{4}\subset Q_{8}$.
There is an orientation map $MU\to E_{2}$, which extends to a
$C_{2}$-{\eqvr} map $MU_{\reals}\to E_{2}$. Norming up and
multiplying on the right gives us a $C_{4}$-{\eqvr} map
$N_{2}^{4}MU_{\reals}\to E_{2}$. This $C_{4}$-action on the target is
compatible with the $G_{24}$-action leading to $L_{K (2)}TMF$.
The image of $\eta\in\pi_{1}S^{0}$ must restrict to
$\eta_{0}+\eta_{1}$, so modulo the kernel of $\Res_{2}^{4}$ it is the
element $\Tr_{2}^{4}(\eta_{\epsilon})$, which we are calling $\eta$.
The kernel of $\Res_{2}^{4}$ is generated by $f_{1}$. \qed
We now discuss the norm $N_{2}^{4}$, which is a functor from the
category of $C_{2}$-spectra to that of $C_{4}$-spectra. As explained
above in connection with Corollary \ref{cor-normdiff}, for a
$C_{4}$-ring spectrum $X$ we have an internal norm
\begin{displaymath}
\upi_{V}^{G'}i^{*}_{G'}X (G'/G')
\cong \upi_{G',V}X (G/G)
\to \upi_{\Ind_{2}^{4}V }^{G}X (G/G)
\end{displaymath}
\noindent and a similar functor on the slice spectral sequence for
$X$. It preserves multiplication but not addition. Its source is a
module over $\Z[G/G']$, and $G/G'$ acts trivially on its target. Consider the
diagram
\begin{displaymath}
\xymatrix
@R=8mm
@C=8mm
{
{\upi_{G',V}X (G/G)}\ar[r]^(.45){\cong }
&{\upi_{V}^{G'}i^{*}_{G'}X (G'/G')}\ar[r]^(.5){N_{2}^{4}}
&{\upi_{\Ind_{2}^{4}V }^{G}X (G/G)}\ar[d]^(.5){\Res_{2}^{4}}\\
{\upi_{G',2V}X (G/G)}\ar[r]^(.45){\cong }
&{\upi^{G'}_{2V}i^{*}_{G'}X (G'/G')}\ar[r]^(.5){\cong }
&{\upi_{\Ind_{2}^{4}V }^{G}X (G/G')}
}
\end{displaymath}
\noindent
For $x\in \upi_{V}^{G'}i^{*}_{G'}X (G'/G')$ we have $x\gamma (x)\in
\upi_{2V}^{G'}i^{*}_{G'}X (G'/G')$ and $2V$ is the restriction of some
$W\in RO (G)$. The group $\upi_{W}^{G}X (G/G')$ depends only on the
restriction of $W$ to $RO (G')$. If $W'\in RO (G)$ is another virtual
{\rep} restricting to $2V$, then $W-W'=k (1-\sigma )$ for some integer
$k$. The canonical isomorphism between $\upi_{W}^{G}X (G/G')$ and
$\upi_{W'}^{G}X (G/G')$ is given by multiplication by
$u_{\sigma}^{k}$.
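For example, for $V=\rho_{2}$ we have $2V=2+2\sigma_{2}$, which is the
restriction of both $W=2+\lambda$ and $W'=\rho_{4}=1+\sigma +\lambda$,
since $\sigma $ and $\lambda $ restrict to $1$ and $2\sigma_{2}$
respectively. Here $W-W'=1-\sigma $, so the two groups are identified
by multiplication by $u_{\sigma}$. This identification is what allows
us to write $u_{\sigma }^{-1}[\orr_{1,\epsilon }^{2}]\in
\upi_{\rho_{4}}^{G}\kH (G/G')$ below.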
\begin{defin}\label{def-bracket}
{\bf A second use of square bracket notation}. For $0\leq i\leq 2d$, let $f
(\orr_{1,0},\orr_{1,1})$ be a homogeneous polynomial of degree $2d-i$,
so
\begin{displaymath}
a_{\sigma_{2}}^{i}f (\orr_{1,0},\orr_{1,1})\in
\upi_{(2d-i)+ (2d-2i)\sigma_{2}}^{G'}i^{*}_{G'}\kH (G'/G').
\end{displaymath}
\noindent We will denote by
$[a_{\sigma_{2}}^{i}f(\orr_{1,0},\orr_{1,1})]$ its preimage in
$\upi_{2d-i+ (d-i)\lambda }\kH (G/G')$ under the isomorphism of
(\ref{eq-easy-iso}).
\end{defin}
The first use of square bracket notation is that of Remark
\ref{rem-abuse}. Note that ${\orr_{1,\epsilon }\in
\upi_{\rho_{2}}^{G'}i^{*}_{G'}\kH}$ is not the target of such an
isomorphism since ${\rho_{2}\in RO (G')}$ is not the restriction of
any element in $RO (G)$, hence the requirement that $f$ has even
degree.
We will denote $u_{\sigma }^{-1}[\orr_{1,\epsilon }^{2}]\in
\upi_{\rho_{4}}^{G}\kH (G/G')$ by $\overline{s}_{2,\epsilon }$. Then
we have $\gamma(\overline{s}_{2,0})=-\overline{s}_{2,1}$ and
$\gamma(\overline{s}_{2,1})=-\overline{s}_{2,0}$. We define
\begin{displaymath}
\ot_{2}:= (-1)^{\epsilon }\Tr_{2}^{4}(\overline{s}_{2,\epsilon }),
\end{displaymath}
\noindent which is independent of $\epsilon $, and we have
\begin{displaymath}
\Res_{2}^{4}(\ot_{2})=\overline{s}_{2,0}-\overline{s}_{2,1}.
\end{displaymath}
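\noindent The independence of $\epsilon $ is immediate: since the
transfer is invariant under the action of $G$, we have
\begin{displaymath}
\Tr_{2}^{4}(\overline{s}_{2,0})
=\Tr_{2}^{4}(\gamma (\overline{s}_{2,0}))
=\Tr_{2}^{4}(-\overline{s}_{2,1})
=-\Tr_{2}^{4}(\overline{s}_{2,1}).
\end{displaymath}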
\noindent
Then we have
\begin{displaymath}
\Res_{2}^{4}(N_{2}^{4} (\orr_{1,0}))
=\Res_{2}^{4}(\normrbar_{1}^{})
=u_{\sigma}^{-1}[\orr_{1,0}\orr_{1,1}]
\in \upi_{_{\rho_{4} }}\kH (G/G') .
\end{displaymath}
\noindent More generally, for integers $m$ and $n$,
\begin{align*}
\lefteqn{\Res_{2}^{4}(N_{2}^{4} (m\orr_{1,0}+n\orr_{1,1}))}\qquad\qquad\\
& = u_{\sigma}^{-1}[(m\orr_{1,0}+n\orr_{1,1})
(m\orr_{1,1}-n\orr_{1,0})]\\
& = u_{\sigma}^{-1} ((m^{2}-n^{2}) [\orr_{1,0}\orr_{1,1}]
+mn ([\orr_{1,1}^{2}]-[\orr_{1,0}^{2}]))\\
& = (m^{2}-n^{2})\Res_{2}^{4}(\normrbar_{1}^{})
-mn\,\Res_{2}^{4}(\ot_{2})
\end{align*}
\noindent so
\begin{numequation}\label{eq-norm-orr}
\begin{split}
N_{2}^{4} (m\orr_{1,0}+n\orr_{1,1})
= (m^{2}-n^{2})\normrbar_{1} -mn\ot_{2}.
\end{split}
\end{numequation}
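\noindent Two special cases of (\ref{eq-norm-orr}) serve as
consistency checks: $m=1$, $n=0$ recovers
$N_{2}^{4}(\orr_{1,0})=\normrbar_{1}$, while $m=n=1$ gives
\begin{displaymath}
N_{2}^{4}(\orr_{1,0}+\orr_{1,1})=-\ot_{2}.
\end{displaymath}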
Similarly for integers $a$, $b$ and $c$,
\begin{align*}
\lefteqn{u_{\sigma }^{2}\Res_{2}^{4}(N_{2}^{4}
(a\orr_{1,0}^{2}+b\orr_{1,0}\orr_{1,1}+c\orr_{1,1}^{2}))}\quad\\
& = [(a\orr_{1,0}^{2}+b\orr_{1,0}\orr_{1,1}+c\orr_{1,1}^{2})
(a\orr_{1,1}^{2}-b\orr_{1,0}\orr_{1,1}+c\orr_{1,0}^{2})] \\
& = [ac (\orr_{1,0}^{4}+\orr_{1,1}^{4})
+b (c-a)\orr_{1,0}\orr_{1,1} (\orr_{1,0}^{2}-\orr_{1,1}^{2})
+ (a^{2}-b^{2}+c^{2})\orr_{1,0}^{2}\orr_{1,1}^{2}] \\
& = [ac (\orr_{1,0}^{2}-\orr_{1,1}^{2})^{2}
+b (c-a)\orr_{1,0}\orr_{1,1} (\orr_{1,0}^{2}-\orr_{1,1}^{2})
+ ((a+c)^{2}-b^{2})\orr_{1,0}^{2}\orr_{1,1}^{2}]
\end{align*}
\noindent so
\begin{numequation}\label{eq-norm-quad}
\begin{split}
N_{2}^{4}
(a\orr_{1,0}^{2}+b\orr_{1,0}\orr_{1,1}+c\orr_{1,1}^{2})
= ac\,\ot_{2}^{2}+b (c-a)\normrbar_{1}\ot_{2}
+((a+c)^{2}-b^{2})\normrbar_{1}^{2}.
\end{split}
\end{numequation}
For future reference we need
\begin{align*}
\lefteqn{N_{2}^{4} ( 5 \orr_{1,0}^{2}\orr_{1,1} +5\orr_{1,0}\orr_{1,1}^{2}
+\orr_{1,1}^{3}) }\qquad\qquad\\
& = N_{2}^{4} (\orr_{1,1})N_{2}^{4} (5 \orr_{1,0}^{2} +5\orr_{1,0}\orr_{1,1}
+\orr_{1,1}^{2}) \\
& = -\normrbar_{1}(5\ot_{2}^{2}-20\normrbar_{1}\ot_{2}
+11\normrbar_{1}^{2}).
\end{align*}
\noindent Compare with (\ref{eq-r3}).
We also denote by
\begin{displaymath}
\eta_{\epsilon }
=[a_{\sigma_{2}}\orr_{1,\epsilon }]
\in \upi_{1}\kH (G/G')
\end{displaymath}
\noindent the preimage of $a_{\sigma_{2}}\orr_{1,\epsilon } \in
\upi_{1}^{G'}i^{*}_{G'}\kH (G'/G')$ and by $[a_{\sigma_{2}}^{2}]\in
\upi_{-\lambda }\kH (G/G')$ the preimage of $a_{\sigma_{2}}^{2}$. The
latter is $\Res_{2}^{4}(a_{\lambda })$.
The values of $N_{2}^{4} (a_{\sigma_{2}})$ and $N_{2}^{4}
(u_{2\sigma_{2}})$ are given by Lemma \ref{lem-norm-au}, namely
\begin{align*}
N_{2}^{4} (a_{\sigma_{2}})
& = a_{\lambda } \\
\aand
N_{2}^{4} (u_{2\sigma_{2}})
& = u_{2\lambda }/u_{2\sigma }.
\end{align*}
\noindent
\begin{center}
\begin{longtable}{|p{5cm}|p{6.8cm}|}
& \kill \caption[Some elements in the slice {\SS} and homotopy
groups of {$\kH$}.]
{Some elements in the slice {\SS} and homotopy groups of $\kH$, listed in
order of ascending filtration.}\\
\hline \multicolumn{1}{|c|}{Element}
&\multicolumn{1}{|c|}{Description}\\
\hline\hline
\multicolumn{2}{|c|}{Filtration 0}\\
\hline\endfirsthead
\caption[Some elements in the slice {\SS} and homotopy groups of {$\kH$}, continued.]
{Some elements in the slice {\SS} and homotopy groups of $\kH$, continued.}\\
\hline
\multicolumn{1}{|c|}{Element}
&\multicolumn{1}{|c|}{Description}\\
\hline\endhead
\hline \multicolumn{2}{|r|}{Continued on next page} \\ \hline
\endfoot
\hline
\endlastfoot
\label{tab-pi*}
$\orr_{1,\epsilon}
\in \upi_{\rho_{2}}^{G'}i_{G'}^{*}\kH (G'/G')$
\newline $\phantom{\orr_{1,\epsilon}\,\,} \cong \upi_{G', \rho_{2}}\kH (G/G)$
\newline
with $\orr_{1,2}=-\orr_{1,0}$
&Images from (\ref{eq-rbar}) defined in \cite[(5.47)]{HHR}\\
\hline
$r_{1,\epsilon}
\in \upi_{\ee,2}\kH (G/G)$\newline
$\phantom{\orr_{1,\epsilon}}\cong \upi_{G,2}\kH (G/\ee)\cong \pi_{2}^{u}\kH$
&$\underline{r}_{1}^{2}(\orr_{1,\epsilon})$, generating \newline
$\upi_{2}^{G}\kH/\mbox{torsion}=\widehat{\oBox } $ \\
\hline $u_{2\sigma}\in \EE_{2}^{0,2-2\sigma} (G/G)$ with
&Element corresponding to
\newline $\phantom{aaaaa} u_{2\sigma}\in \upi_{2-2\sigma}\HZZ (G/G)$\\
$d_{5} (u_{2\sigma}) = a_{\sigma}^{3}a_{\lambda}\normrbar_{1}^{}$
&Slice differential of (\ref{eq-slicediffs2}) \\
$[2u_{2\sigma}]
=\langle 2,\,a_{\sigma},\,a_{\sigma}^{2}a_{\lambda}\normrbar_{1}
\rangle$
\newline $\phantom{[2u_{2\sigma}]} \in \EE_{6}^{0,2-2\sigma} (G/G)$
&Image of $2u_{2\sigma}$ in $\EE_{6}^{0,2-2\sigma} (G/G)$,
which is a permanent cycle\\
$[u_{2\sigma}^{2}]
=\langle a_{\sigma}^{3}a_{\lambda} ,\,\normrbar_{1} ,\,
a_{\sigma}^{3}a_{\lambda} ,\,\normrbar_{1} \rangle$\newline
$\phantom{[u_{2\sigma}^{2}]} \in \EE_{6}^{0,4-4\sigma} (G/G)$
&Image of $u_{2\sigma}^{2}$ in $\EE_{6}^{0,4-4\sigma} (G/G)$,
which is a permanent cycle\\
\hline $u_{\sigma}\in \upi_{1-\sigma}\kH (G/G') $
\newline $\cong \upi_{G',0}\kH (G/G)$ with
\newline $\Res_{2}^{4}(u_{2\sigma})=u_{\sigma}^{2}$,
\newline $\gamma (u_{\sigma})=-u_{\sigma}$
& Isomorphic image of
\newline $\phantom{aaaaa}
1\in \upi_{0}\kH (G/G')\cong \upi_{G',0}\kH (G/G)$\\
$\Tr_{2}^{4}(u_{\sigma}^{4k+1})=a_{\sigma}f_{1}u_{2\sigma}^{2k}$
\newline $\phantom{aaaaa}$(exotic transfer)
\newline $\Tr_{2}^{4}(u_{\sigma}^{2k})=2u_{2\sigma}^{k}$
\newline $\Tr_{2}^{4}(u_{\sigma}^{4k+3})=0$
&Follows from Theorem \ref{thm-exotic} and $d_{5} (u_{2\sigma})$ in
(\ref{eq-slicediffs2}) \\
\hline $u_{\lambda}\in \EE_{2}^{0,2-\lambda} (G/G)$ with
&Element corresponding to\newline
$\phantom{aaaaa}u_{\lambda}\in \upi_{2-\lambda}\HZZ (G/G)$\\
$[2u_{\lambda}]\in \upi_{2-\lambda}\KH (G/G)$
&$\langle 2,\,\eta ,\,a_{\lambda} \rangle$\\
$a_{\sigma}^{3}u_{\lambda}=0$
&Follows from the gold relation,\newline Lemma \ref{lem-aeu}(vii)\\
$d_{3} (u_{\lambda})=\eta a_{\lambda}
=\Tr_{2}^{4}([a_{\sigma_{2}}^{3}\orr_{1,0}])$
&Slice differential of Theorem \ref{thm-d3ulambda}\\
$d_{5} ([u_{\lambda}^{2}]) =\overline{\nu } a_{\lambda}^{2}\normrbar_{1}$
&Slice differential of Theorem \ref{thm-d3ulambda}\\
$d_{7} ([2u_{\lambda}^{2}]) =\eta ' a_{\lambda}^{3}\normrbar_{1}$
& $2\overline{\nu } a_{\lambda}^{2}\normrbar_{1}$\\
$[4u_{\lambda}^{2}]\in \upi_{4-2\lambda}\KH (G/G)$
&$\langle 2,\,\eta ,\,a_{\lambda} \rangle^{2}
=\langle 2,\,\eta ' ,\,a_{\lambda}^{3}\normrbar_{1} \rangle$\\
$[2a_{\sigma}u_{\lambda}^{2}]\in \upi_{4-\sigma -2\lambda}\KH (G/G)$
&$\langle a_{\sigma},\,\eta ' ,\,a_{\lambda}^{3}\normrbar_{1} \rangle$\\
$d_{7} ([u_{\lambda}^{4}])
=\langle \eta',\,\overline{\nu } ,\,a_{\lambda}^{2}\normrbar_{1}\rangle
a_{\lambda}^{3}\normrbar_{1}$
&$[2u_{\lambda}^{2}d (u_{\lambda}^{2})]$\\
$[2u_{\lambda}^{4}]\in \upi_{8-4\lambda}\KH (G/G)$
&$\Tr_{2}^{4}(\ou_{\lambda}^{4} )$\\
\hline
$\ou _{\lambda}\in \EE_{2}^{0,2-\lambda}
(G/G')$ with
&$\Res_{2}^{4}(u_{\lambda})$\\
$d_{3} (\ou_{\lambda} )
= [a_{\sigma_{2}}^{3} (\orr_{1,0}+\orr_{1,1})]$\newline
$\phantom{d_{3} (\ou_{\lambda} )}
=\Res_{2}^{4}(a_{\lambda })(\eta_{0}+\eta_{1})$
&$\Res_{2}^{4}(d_{3} ( u_{\lambda}))$\\
$[2\ou_{\lambda}]\in \upi_{2-\lambda}\KH (G/G') $
&$[\langle 2,\,a_{\sigma_{2}}^{3} ,\,\orr_{1,0}+\orr_{1,1} \rangle]
=\langle 2,\,[a_{\sigma_{2}}^{2}] ,\,\eta_{0}+\eta_{1} \rangle$\\
$d_{7} ([\ou_{\lambda}^{2}] )
= a_{\sigma_{2}}^{7}\orr_{1,0}^{3}$
&$\Res_{2}^{4}(d_{5} ( u_{\lambda}^{2}))$\\
$[2\ou_{\lambda}^{2}]\in \upi_{4-2\lambda}\KH (G/G') $
&$[\langle 2,\,a_{\sigma_{2}}^{7} ,\,\orr_{1,0}^{3} \rangle]
=\langle 2,\,[a_{\sigma_{2}}^{2}]^{2} ,\,\eta_{0}^{3} \rangle$\\
$[\ou_{\lambda}^{4}]\in \upi_{8-4\lambda}\KH (G/G') $
&$[\langle a_{\sigma_{2}}^{7},\,\orr_{1,0}^{3} ,\,
a_{\sigma_{2}}^{7},\,\orr_{1,0}^{3}\rangle]$\newline
$\phantom{a_{\sigma_{2}}^{7},\,\orr_{1,0}^{3}}
=\langle [a_{\sigma_{2}}^{2}]^{2} ,\,\eta_{0}^{3},\,
[a_{\sigma_{2}}^{2}]^{2} ,\,\eta_{0}^{3} \rangle$\\
\hline
$u_{\sigma_{2}}\in \upi_{(G',1-\sigma_{2})}\kH (G/e)$ with\newline
$\Res_{1}^{2}(\ou_{\lambda} )=u_{\sigma_{2}}^{2}$,\newline
$\gamma^{2} (u_{\sigma_{2}})=-u_{\sigma_{2}}$ and \newline
$\Tr_{1}^{2}(u_{\sigma_{2}})=a_{\sigma_{2}}^{2}(\orr_{1,0}+\orr_{1,1})$\newline
$\phantom{aaaaa}$(exotic transfer)
& Isomorphic image of $1\in \upi_{0}\kH (G/e)$\\
\hline
$\os_{2,\epsilon }
\in \upi_{\rho_{4}}^{G}\kH (G/G')$
&$u_{\sigma}^{-1}[\orr_{1,\epsilon }^{2}]$\\
\hline
$\normrbar_{1}
\in \upi_{\rho_{4}}^{G}\kH (G/G)$ with \newline
$\Res_{2}^{4}(\normrbar_{1})
=u_{\sigma}^{-1}[\orr_{1,0}\orr_{1,1}] $
&Image from (\ref{eq-normrbar}) defined in \cite[\S9.1]{HHR}\\
\hline
$\ot_{2}
\in \upi_{\rho_{4}}^{G}\kH (G/G) $ with\newline
$\Res_{2}^{4}(\ot_{2})= \os_{2,0}-\os_{2,1}$
&$(-1)^{\epsilon }\Tr_{2}^{4}(\os_{2,\epsilon })$
for either value of $\epsilon $\\
\hline $\ot_{2}' \in \upi_{2+\lambda}^{G}\kH (G/G) $ with\newline
$\Res_{2}^{4}(\ot_{2}')= [\orr_{1,0}^{2}]+[\orr_{1,1 }^{2}]$
&$\Tr_{2}^{4}([\orr_{1,\epsilon }^{2}])$
for either value of $\epsilon $\\
\hline $D\in \upi_{4\rho_{4}}\kH (G/G)$, \newline
the periodicity element
&$-\normrbar_{1}^{2} (5\ot_{2}^{2}-20 \ot_{2}\normrbar_{1}
+11\normrbar_{1}^{2})$\\
\hline
$\Sigma_{2, \epsilon}
\in \EE_{2}^{0,4}\kH (G/G')$ with \newline
$\Sigma_{2,2}=\Sigma_{2,0}$ and\newline
$d_{3} (\Sigma_{2, \epsilon}) = \eta_{\epsilon } ^{2} (\eta_{0}+\eta_{1} )$
&$(-1)^{\epsilon } u_{\rho_{4}}\os_{2, \epsilon}
= (-1)^{\epsilon }\ou_{\lambda} [\orr_{1,\epsilon }^{2}]$\\
\hline
$T_{2}\in \EE_{2}^{0,4}\kH (G/G)$ with \newline
${\Res_{2}^{4}(T_{2})= \Sigma_{2,0}-\Sigma_{2,1}}$ and\newline
$d_{3} (T_{2}) =\eta^{3}$
&$\Tr_{2}^{4}(\Sigma_{2,\epsilon})
= (-1)^{\epsilon }u_{\lambda}
\Tr_{2}^{4}([\orr_{1,\epsilon }^{2}])$\newline
for either value of $\epsilon $\\
\hline
$T_{4}\in \EE_{2}^{0,8}\kH (G/G)$ with \newline
$T_{4}^{2}=\Delta_{1} (T_{2}^{2}-4\Delta_{1})$,\newline
$\Res_{2}^{4}(T_{4})= (\Sigma_{2,0}-\Sigma_{2,1})\delta_{1}$ and \newline
$d_{3} (T_{4})=0$
&$(-1)^{\epsilon }\Tr_{2}^{4}(\Sigma_{2,\epsilon}\delta_{1})
= u_{2\sigma}u_{\lambda}^{2}\ot_{2}\normrbar_{1}^{}$\newline
for either value of $\epsilon $ \\
\hline
$\delta_{1}\in \EE_{2}^{0,4}\kH (G/G')$ with\newline
$\gamma (\delta_{1})=-\delta_{1}$, $\Tr_{2}^{4}(\delta_{1})=0$ \newline
and
$d_{3} (\delta_{1})=\eta_{0}\eta_{1} (\eta_{0}+\eta_{1})$
&$u_{\rho_{4}}\Res_{2}^{4}(\normrbar_{1})
=\ou_{\lambda}[\orr_{1,0}\orr_{1,1}]$\\
\hline
$\Delta_{1}\in \EE_{2}^{0,8}\kH (G/G)$ with \newline
$\Res_{2}^{4}(\Delta_{1})=\delta_{1}^{2}$, \newline
$\Res_{1}^{4}(\Delta_{1})=r_{1,0}^{2}r_{1,1}^{2}$ and \newline
$d_{5} (\Delta_{1})=\nu x_{4}$
&$u_{2\rho_{4}}\normrbar_{1}^{2}
= u_{2\sigma}u_{\lambda}^{2}\normrbar_{1}^{2}$\\
\hline \hline
\multicolumn{2}{|c|}{Filtration 1}\\
\hline
$a_{\sigma_{2}}
\in \upi_{G', -\sigma_{2}}\kH (G/G)$\newline
$\phantom{a_{\sigma_{2}}}\cong \upi_{-\sigma_{2}}^{G'}\kH (G'/G')$\newline
with $2a_{\sigma_{2}} =0$
& See Definition \ref{def-aeu} \\
\hline
$\eta_{\epsilon }
\in \upi_{1}\kH (G/G')$\newline
$\phantom{\eta_{\epsilon }}\cong \upi^{G'}_{1}\kH (G'/G')$
with $2\eta_{\epsilon } =0$
& $[a_{\sigma_{2}}\orr_{1,\epsilon} ]$\\
\hline
$\eta \in \upi^{G}_{1}\kH (G/G)$ with\newline
$\Res_{2}^{4}(\eta )= \eta_{0}+\eta_{1}$ \newline
$\phantom{\Res_{2}^{4}(\eta )}\in \upi^{G}_{1}\kH (G/G')$
& $\Tr_{2}^{4}(\eta_{\epsilon })
=\Tr_{2}^{4}([a_{\sigma_{2}}\orr_{1,0}])
=\Tr_{2}^{4}([a_{\sigma_{2}}\orr_{1,1}])$ \\
\hline
$\eta'\in \underline{\pi }_{2-\sigma}\kH (G/G)$ with\newline
$\Res_{2}^{4}(\eta')
=u_{\sigma}(\eta_{0}+\eta_{1})$
\newline
$\phantom{\Res_{2}^{4}(\eta')}\in \upi^{G}_{2-\sigma}\kH (G/G')$
&$\Tr_{2}^{4}(\eta_{0}u_{\sigma})
=\Tr_{2}^{4}([a_{\sigma_{2}}\orr_{1,0}]u_{\sigma})$\newline
$\phantom{\Tr_{2}^{4}(\eta_{0}u_{\sigma})}
=\Tr_{2}^{4}([a_{\sigma_{2}}\orr_{1,1}]u_{\sigma})$\\
\hline
$\overline{\nu } \in \upi_{2-\sigma -\lambda}\kH (G/G)$ with
&$[a_{\sigma}u_{\lambda} ]
=\langle a_{\sigma},\,\eta ,\,a_{\lambda}\rangle$\\
$\Res_{2}^{4}(\overline{\nu } ) = u_{\sigma}[a_{\sigma_{2} }^{3}\orr_{1,0}]$
\newline $\phantom{aaaaa}$ (exotic restriction)
&Follows from Theorem \ref{thm-exotic} and $d_{3} (u_{\lambda})$ in
\newline Theorem \ref{thm-d3ulambda} \\
$2\overline{\nu } =\eta 'a_{\lambda}$
& Transfer of the above\\
$\eta \overline{\nu } =a_{\lambda}\langle 2,\,a_{\sigma} ,\,f_{1}^{2} \rangle$\newline
$\phantom{\eta \overline{\nu } }
=a_{\lambda}\langle 2,\,a_{\sigma} ,\,\Tr_{2}^{4}(\eta_{0}\eta_{1}) \rangle$
&\\
$\eta' \overline{\nu } =0$
&\\
$a_{\sigma } \overline{\nu } =a_{\lambda }\Tr_{2}^{4}(u_{\sigma }^{2})$
&$[a_{\sigma }^{2}u_{\lambda }]
= a_{\lambda }[2u_{2\sigma }]$
by the gold relation, Lemma \ref{lem-aeu}(vii)\\
$a_{\sigma }^{2} \overline{\nu } =0$
&$[a_{\sigma }^{3}u_{\lambda }]
=a_{\lambda }a_{\sigma }\Tr_{2}^{4}(u_{\sigma }^{2})=0$\\
\hline
$\xi\in \upi_{4-3\sigma -\lambda}\kH (G/G)$ with
&$[\overline{\nu } u_{2\sigma}]
=\langle\overline{\nu } ,\,a_{\sigma}^{2} ,\,f_{1} \rangle$\\
$\Res_{2}^{4}(\xi)=a_{\sigma_{2}}^{3}u_{\sigma}^{3}\orr_{1,0}$
&Follows from value of $\Res_{2}^{4}(\overline{\nu } )$\\
$2\xi=a_{\lambda}\langle \eta ',\,a_{\sigma}^{2} ,\,f_{1} \rangle$
& Transfer of the above\\
$d_{5} (u_{2\sigma}u_{\lambda}^{2}) = \xi a_{\lambda}^{2}\normrbar_{1}$
&\\
$\eta \xi= 2a_{\lambda}u_{2\sigma}^{2}\normrbar_{1}^{}$
\newline $\phantom{aaaaa}$ (exotic multiplication)
&\\
$\eta' \xi= a_{\sigma}^{2}a_{\lambda}^{3}u_{2\sigma}^{2}\normrbar_{1}^{2}$
\newline $\phantom{aaaaa}$ (exotic multiplication)
&\\
\hline $\nu \in \upi_{3}\kH (G/G)$ with
&$a_{\sigma}u_{\lambda}\normrbar_{1}=\overline{\nu } \normrbar_{1}$, generating
$\circ=\upi_{3}\kH$\\
$\Res_{2}^{4}(\nu)=\eta_{0}^{3}$
and $2\nu =x_{3}$
\newline (exotic restriction
\newline and group extension)
&
\newline Follows from those on $\overline{\nu } $\\
\hline
\hline
\multicolumn{2}{|c|}{Filtration 2}\\
\hline $[a_{\sigma_{2}}^{2}]\in \upi_{-\lambda }\kH (G/G')$
&Preimage of $a_{\sigma_{2}}^{2}
\in \upi_{-2\sigma_{2}}i_{G'}^{*}\kH (G'/G')$\\
\hline $a_{\lambda }\in \upi_{-\lambda }\kH (G/G)$ with\newline
$4a_{\lambda }=0$ and
$\Res_{2}^{4}(a_{\lambda })=[a_{\sigma_{2}}^{2}]$
& See Definition \ref{def-aeu}\\
\hline $ \eta_{\epsilon }^{2} ,\, \eta_{0}\eta_{1}
\in \underline{\pi }^{G}_{2}\kH (G/G')$ with\newline
$\Tr_{2}^{4}(\eta_{\epsilon }^{2})= (-1)^{\epsilon }a_{\lambda}\ot_{2}'$
and \newline
$\Tr_{2}^{4}(\eta_{0}\eta_{1})=f_{1}^{2}$ (exotic transfer)
&$u_{\sigma}[a_{\sigma_{2}}^{2}]\os_{2,\epsilon}$ and
$u_{\sigma}[a_{\sigma_{2}}^{2}]\Res_{2}^{4}(\normrbar_{1})$, \newline
generating the torsion $\widehat{\bullet}\oplus \JJ $ in
$\upi^{G}_{2}\kH$\\
\hline
$\eta^{2}
=a_{\lambda} (\ot_{2}'+a_{\sigma }^{2}a_{\lambda }\normrbar_{1}^{2})
=a_{\lambda} \ot_{2}'+f_{1}^{2} $
& $a_{\lambda}\ot_{2}'$ has order 2 by Lemma \ref{lem-hate}\\
$\eta \eta '=a_{\lambda}[u_{2\sigma} \ot_{2}]$\newline
$(\eta ')^{2}=a_{\lambda}[u_{2\sigma}\ot_{2}']$
&See (\ref{eq-order2}) for the definition of
$[u_{2\sigma} \ot_{2}]$ and $[u_{2\sigma} \ot_{2}']$
\\
\hline $\nu^{2}\in \upi_{6}\kH (G/G)$
&$2a_{\lambda}u_{\lambda}u_{2\sigma}\normrbar_{1}^{2}
=\langle 2,\,\eta ,\,f_{1} ,\,f_{1}^{2} \rangle$\\
\hline
$\kappa \in \upi_{14}\kH (G/G)$
&$2a_{\lambda}u_{2\sigma}^{2}u_{\lambda}^{3}\normrbar_{1}^{4}$\\
\hline \hline
\multicolumn{2}{|c|}{Filtration 3}\\
\hline
$f_{1} \in \upi_{1}\kH (G/G)$
&$a_{\sigma}a_{\lambda}\normrbar_{1}^{}$,
generating the summand $\bullet$ of $\upi_{1}\kH$\\
\hline
$\eta_{0}^{3}=\eta_{0}^{2}\eta_{1}=\eta_{0}\eta_{1}^{2}=\eta_{1}^{3}$
\newline $\phantom{ab}\in \upi^{G}_{3}\kH (G/G')$
& $\eta_{\epsilon }u_{\sigma}[a_{\sigma_{2}}^{2}]\Res_{2}^{4}(\normrbar_{1})
= \eta_{\epsilon }u_{\sigma}[a_{\sigma_{2}}^{2}]\os_{2,\epsilon }$\\
\hline $x_{3}\in \upi_{3}\kH (G/G)$\newline
with $\Res_{2}^{4}(x_{3})=0$
& $\Tr_{2}^{4}(\eta_{0}^{2}\eta_{1})
=a_{\lambda} \eta' \normrbar_{1}$\\
\hline \hline
\multicolumn{2}{|c|}{Filtration 4}\\
\hline $x_{4}\in \EE_{2}^{4,8} (G/G)$\newline
with $d_{5} (x_{4})=f_{1}^{3}$,\newline
$\Res_{2}^{4}(x_{4} )=(\eta_{0}\eta_{1})^{2}=\eta_{0}^{4}$ \newline
and $2x_{4}=f_{1}\nu $
&$a_{\lambda}^{2}u_{2\sigma
}\normrbar_{1}^{2}$\\
\hline
$\overline{\kappa} \in \upi_{20}\kH (G/G)$
&$a_{\lambda}^{2}u_{2\sigma}^{3}u_{\lambda}^{4}\normrbar_{1}^{6}$\\
$2\overline{\kappa }
=\Tr_{2}^{4}(u_{\sigma }
\Res_{2}^{4}(u_{2\sigma }^{2}u_{\lambda }^{5}\normrbar_{1}^{5}))$
\newline
$\phantom{aaaaa}$(exotic transfer)
&\\
\hline \hline
\multicolumn{2}{|c|}{Filtration 8}\\
\hline $\epsilon \in \upi_{8}\kH (G/G)$
&
$x_{4}^{2}=\langle f_{1},\, f_{1}^{2} ,\, f_{1} ,\,f_{1}^{2} \rangle
\in \EE_{6}^{8,16} (G/G)$\\
\hline \hline
\multicolumn{2}{|c|}{Filtration 11}\\
\hline $\nu^{3}=\eta \epsilon \in \upi_{9}\kH (G/G)$
&Represents $f_{1}x_{4}^{2}\in \EE_{2}^{11,20} (G/G)$\\
\hline
\end{longtable}
\end{center}
\section{Slices for $\kH$ and $\KH$}\label{sec-slices}
In this section we will identify the slices for $\kH$ and $\KH$ and
the generators of their integrally graded homotopy groups. For the
latter we will use the notation of Table \ref{tab-pi*}. Let
\begin{numequation}\label{eq-Xmn}
\begin{split}
X_{m,n}=\mycases{
\Sigma^{m\rho_{4}}\HZZ
&\mbox{for }m=n\\
G_{+}\smashove{G'}\Sigma^{(m+n)\rho_{2}}\HZZ
&\mbox{for }m<n.
}
\end{split}
\end{numequation}
\noindent The slices of $\kH$ are certain finite wedges of these, and
those of $\KH$ are certain infinite wedges. Fortunately we can
analyze these slices by considering just one value of $m$ at a time,
this index being preserved by the first differential $d_{3}$. These
are illustrated below in Figures \ref{fig-sseq-7a}--\ref{fig-sseq-8b}.
They show both $\EE_{2}$ and $\EE_{4}$ in four
cases depending on the sign and parity of $m$.
\begin{thm}\label{thm-sliceE2}
{\bf The slice $\EE_{2}$-term for $\kH$.}
The slices of $\kH$ are
\begin{displaymath}
P_{t}^{t}\kH=\mycases{
\bigvee_{0\leq m\leq t/4}X_{m,t/2-m}
&\mbox{for $t$ even and $t\geq 0$}\\
* &\mbox{otherwise}
}
\end{displaymath}
\noindent where $X_{m,n}$ is as in (\ref{eq-Xmn}).
The structure of $\pi^{u}_{*}\kH$ as a $\Z[G]$-module (see
(\ref{eq-pikH})) leads to four types of orbits and slice summands:
\begin{enumerate}
\item [(1)] $\left\{(r_{1,0}r_{1,1})^{2\ell } \right\}$ leading to
$X_{2\ell ,2\ell }$ for $\ell \geq 0$; see the leftmost diagonal in
Figure \ref{fig-sseq-7a}. On the 0-line we have a copy of $\Box$ (defined in
Table \ref{tab-C4Mackey}) generated under restrictions by
\begin{displaymath}
\Delta_{1}^{\ell }=u_{2\ell \rho_{4}}\normrbar_{1}^{2\ell}=
u_{2\sigma}^{\ell }u_{\lambda}^{2\ell }\normrbar_{1}^{2\ell}
\in \EE_{2}^{0,8\ell } (G/G).
\end{displaymath}
\noindent In positive filtrations we have
\begin{align*}
\circ &\subseteq \EE_{2}^{2j,8\ell }
\qquad \mbox{generated by } \\
a_{\lambda}^{j}u_{2\sigma}^{\ell }
u_{\lambda}^{2\ell -j}\normrbar_{1}^{2\ell }
& \in \EE_{2}^{2j,8\ell } (G/G)
\qquad \mbox{for }0<j\leq 2\ell \mbox{ and} \\
\bullet &\subseteq \EE_{2}^{2k+4\ell ,8\ell }
\qquad \mbox{generated by } \\
a_{\sigma}^{2k}a_{\lambda}^{2\ell }u_{2\sigma}^{\ell -k}
\normrbar_{1}^{2\ell }
& \in \EE_{2}^{2k+4\ell,8\ell } (G/G)
\qquad \mbox{for }0<k\leq \ell.
\end{align*}
\item [(2)] $\left\{(r_{1,0}r_{1,1})^{2\ell+1 } \right\}$ leading
to $X_{2\ell +1,2\ell +1}$ for $\ell \geq 0$; see the leftmost
diagonal in Figure \ref{fig-sseq-7b}. On the 0-line we have a copy of
$\oBox$ generated under restrictions by
\begin{align*}
\delta_{1}^{2\ell +1 }
= u_{\sigma}^{2\ell +1}
\Res_{2}^{4}(u_{\lambda}\normrbar_{1})^{2\ell +1}
&\in \EE_{2}^{0,8\ell+4 } (G/G').
\end{align*}
\noindent In positive filtrations we have
\begin{align*}
\obull &\subseteq \EE_{2}^{2j,8\ell+4 }
\qquad \mbox{generated by } \\
u_{\sigma}^{2\ell +1}
\Res_{2}^{4}(a_{\lambda}^{j}u_{\lambda}^{2\ell +1-j}
\normrbar_{1}^{2\ell +1})
& \in \EE_{2}^{2j,8\ell+4 } (G/G')
\qquad \mbox{for }0<j\leq 2\ell+1 , \\
\bullet &\subseteq \EE_{2}^{2j+1 ,8\ell+4 }
\qquad \mbox{generated by } \\
a_{\sigma}a_{\lambda}^{j}
u_{2\sigma}^{\ell}u_{\lambda}^{2\ell +1-j}
\normrbar_{1}^{2\ell +1}
& \in \EE_{2}^{2j+1,8\ell+4 } (G/G)
\qquad \mbox{for }0\leq j\leq 2\ell+1\mbox{ and} \\
\bullet &\subseteq \EE_{2}^{2k+4\ell +3 ,8\ell+4 }
\qquad \mbox{generated by } \\
a_{\sigma}^{2k+1}a_{\lambda}^{2\ell +1}
u_{2\sigma}^{\ell-k}
\normrbar_{1}^{2\ell +1}
& \in \EE_{2}^{2k+4\ell +3,8\ell+4 } (G/G)
\qquad \mbox{for }0< k\leq \ell.
\end{align*}
\item [(3)] $\left\{r_{1,0}^{i}r_{1,1}^{2\ell -i}, r_{1,0}^{2\ell
-i}r_{1,1}^{i} \right\}$ leading to $X_{i,2\ell -i}$ for $0\leq i<\ell
$; see other diagonals in Figure \ref{fig-sseq-7a}. On the 0-line we
have a copy of $\widehat{\Box}$ generated (under $\Tr_{2}^{4}$,
$\Res_{1}^{2}$ and the group action) by
\begin{displaymath}
u_{\sigma }^{\ell } \os_{2}^{\ell -i}
\Res_{2}^{4}(u_{\lambda}^{\ell }\normrbar_{1}^{i})
\in \EE_{2}^{0,4\ell } (G/G')
\end{displaymath}
\noindent In positive filtrations we have
\begin{align*}
\widehat{\bullet} &\subseteq \EE_{2}^{2j,4\ell }
\qquad \mbox{generated by } \\
\lefteqn{u_{\sigma }^{\ell } \os_{2}^{\ell -i}
\Res_{2}^{4}(a_{\lambda}^{j}u_{\lambda}^{\ell-j }\normrbar_{1}^{i})
}\qquad\qquad\\
&\in \EE_{2}^{2j,4\ell } (G/G')\qquad \mbox{for }0<j\leq \ell \\
& = \eta_{\epsilon }^{2j}u_{\sigma}^{\ell -j}\os_{2}^{\ell -i-j}
\Res_{2}^{4}(u_{\lambda}^{\ell -j}\normrbar_{1}^{i})
\qquad \mbox{for }0 < j < \ell -i.
\end{align*}
\item [(4)] $\left\{r_{1,0}^{i}r_{1,1}^{2\ell+1 -i},
r_{1,0}^{2\ell+1 -i}r_{1,1}^{i} \right\}$ leading to
$X_{i,2\ell+1 -i}$ for $0\leq i\leq \ell $; see other diagonals in
Figure \ref{fig-sseq-7b}. On the 0-line we have a copy of
$\widehat{\oBox}$ generated (under transfers and the group
action) by
\begin{displaymath}
r_{1,0}\Res_{1}^{2}(u_{\sigma }^{\ell } \os_{2}^{\ell -i})
\Res_{1}^{4}(u_{\lambda}^{\ell }\normrbar_{1}^{i})
\in \EE_{2}^{0,4\ell +2} (G/\ee)
\end{displaymath}
\noindent In positive filtrations we have
\begin{align*}
\widehat{\bullet} &\subseteq \EE_{2}^{2j+1,4\ell+2 }
\qquad \mbox{generated by } \\
\lefteqn{\eta_{\epsilon } u_{\sigma }^{\ell } \os_{2}^{\ell -i}
\Res_{2}^{4}(a_{\lambda}^{j}u_{\lambda}^{\ell-j }\normrbar_{1}^{i})
}\qquad\qquad\\
&\in \EE_{2}^{2j+1,4\ell+2 } (G/G')
\qquad \mbox{for }0\leq j\leq \ell \\
& = \eta_{\epsilon }^{2j+1}u_{\sigma}^{\ell -j}\os_{2}^{\ell -i-j}
\Res_{2}^{4}(u_{\lambda}^{\ell -j}\normrbar_{1}^{i})
\qquad \mbox{for }0 \leq j \leq \ell -i .
\end{align*}
\end{enumerate}
\end{thm}
\begin{cor}\label{cor-subring}
{\bf A subring of the slice $E_{2}$-term.} The ring
$\EE_{2}\kH (G/G')$ contains
\begin{displaymath}
\Z[\delta_{1},\Sigma_{2,\epsilon},\eta_{\epsilon }
\colon \epsilon =0,\,1]/
\left(2\eta _{\epsilon },\delta_{1}^{2}-\Sigma_{2,0}\Sigma_{2,1},
\eta_{\epsilon } \Sigma_{2,\epsilon +1}+\eta _{1+\epsilon }\delta_{1} \right);
\end{displaymath}
\noindent see Table \ref{tab-pi*} for the
definitions of its generators. In particular the elements $\eta_{0} $
and $\eta_{1}$ are algebraically independent mod 2 with
\begin{displaymath}
\gamma^{\epsilon } (\eta_{0}^{m}\eta_{1} ^{n})
\in \upi_{m+n}X_{m,n} (G/G')\qquad \mbox{for }m\leq n.
\end{displaymath}
\noindent
The element $(\eta_{0}\eta_{1})^{2}$ is the fixed point restriction of
\begin{displaymath}
u_{2\sigma}a_{\lambda}^{2}\normrbar_{1}^{2}
\in \EE_{2}^{4,8}\kH (G/G),
\end{displaymath}
\noindent which has order 4, and the transfer of the former is twice
the latter. The element $\eta_{0}\eta_{1}$ is not in the image of
$\Res_{2}^{4}$ and has trivial transfer in $\EE_{2}$.
\end{cor}
\proof
We detect this subring with the monomorphism
\begin{displaymath}
\xymatrix
@R=1mm
@C=10mm
{
{\EE_{2}\kH (G/G')}\ar[r]^(.5){\ur_{2}^{4}}
&{\EE_{2}\kH (G'/G')}\\
\eta_{\epsilon}\ar@{|->}[r]
&{\,a_{\sigma}\orr_{1,\epsilon} }\\
\Sigma_{2,\epsilon}\ar@{|->}[r]
&{\,u_{2\sigma}\orr_{1,\epsilon} ^{2}}\\
\delta_{1}\ar@{|->}[r]
&{\,u_{2\sigma}\orr_{1,0}\orr_{1,1} },
}
\end{displaymath}
\noindent in which all the relations are transparent.
\qed
\begin{cor}\label{cor-KH}
{\bf Slices for $\KH$.} The slices of $\KH$ are
\begin{displaymath}
P_{t}^{t}\KH=\mycases{
\bigvee_{m\leq t/4}X_{m,t/2-m}
&\mbox{for $t$ even}\\
* &\mbox{otherwise}
}
\end{displaymath}
\noindent where $X_{m,n}$ is as in Theorem \ref{thm-sliceE2}. Here
$m$ can be any integer, and we still require that $m\leq n$.
\end{cor}
\proof Recall that $\KH$ is obtained from $\kH$ by inverting a certain
element
\begin{displaymath}
{D\in \upi_{4\rho_{4}}\kH (G/G)}
\end{displaymath}
\noindent described in Table \ref{tab-pi*}.
Thus $\KH$ is the homotopy colimit of the diagram
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
k_{[2]}\ar[r]^(.4){D}
&\Sigma^{-4\rho_{4}}\kH\ar[r]^(.5){D}
&\Sigma^{-8\rho_{4}}\kH\ar[r]^(.6){D}
&\dotsb
}
\end{displaymath}
\noindent Desuspending by $4\rho_{4}$ converts slices to slices, so
for even $t$ we have
\begin{align*}
P_{t}^{t}\KH
& = \lim_{k\to \infty }\Sigma^{-4k\rho_{4}}P^{t+16k}_{t+16k}\kH \\
& = \lim_{k\to \infty }\Sigma^{-4k\rho_{4}}
\bigvee_{0\leq m\leq t/4+4k}X_{m,t/2+8k-m} \\
& = \lim_{k\to \infty }
\bigvee_{0\leq m\leq t/4+4k}X_{m-4k,t/2+4k-m} \\
& = \lim_{k\to \infty }
\bigvee_{-4k\leq m\leq t/4}X_{m,t/2-m} \\
& = \bigvee_{m\leq t/4}X_{m,t/2-m}.
\qed
\end{align*}
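The reindexing in this proof is elementary but worth checking. The following Python sketch (ours, not part of the paper's argument; it only manipulates the index pairs $(m,n)$ of the summands $X_{m,n}$) confirms that the shifted index sets are nested and exhaust $\{m\colon m\leq t/4\}$ as $k$ grows:

```python
# Index pairs (m, n) of the summands X_{m,n} of
# Sigma^{-4k rho_4} P^{t+16k}_{t+16k} kH, for t even:
# desuspending X_{m, t/2+8k-m} by 4k*rho_4 gives X_{m-4k, t/2+4k-m}.
def slice_summands(t, k):
    assert t % 2 == 0
    return {(m - 4 * k, t // 2 + 4 * k - m)
            for m in range(0, t // 4 + 4 * k + 1)}

t = 8
for k in range(1, 4):
    s = slice_summands(t, k)
    # Each stage is exactly {(m, t/2 - m) : -4k <= m <= t/4} ...
    assert s == {(m, t // 2 - m) for m in range(-4 * k, t // 4 + 1)}
    # ... and the stages are nested, so the colimit is their union.
    assert slice_summands(t, k - 1) <= s
print("reindexing check passed")
```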
\begin{cor}\label{cor-filtration}
{\bf A filtration of $\kH$. }
Consider the diagram
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
{\kH}\ar[d]^(.5){}
&{\Sigma^{\rho_{4}}\kH}\ar[d]^(.5){} \ar[l]_(.5){\normrbar_{1}}
&{\Sigma^{2\rho_{4}}\kH}\ar[d]^(.5){}\ar[l]_(.45){\normrbar_{1}}
&\dotsb \ar[l]_(.4){\normrbar_{1}}\\
y_{0}
&y_{1}=\Sigma^{\rho_{4}}y_{0}
&y_{2}=\Sigma^{2\rho_{4}}y_{0}
}
\end{displaymath}
\noindent where $y_{0}$ is the cofiber of the map induced by
$\normrbar_{1}$. Then the slices of $y_{m}$ are
\begin{displaymath}
P_{t}^{t}y_{m}=\mycases{
X_{m,t/2-m}
&\mbox{for $t$ even and $t\geq 4m$}\\
* &\mbox{otherwise.}
}
\end{displaymath}
\end{cor}
\begin{cor}\label{cor-Filtration}
{\bf A filtration of $\KH$.} Let $R=\Z_{(2)}[x]/ (11x^{2}-20x+5)$.
After tensoring with $R$ (by smashing with a suitable Moore spectrum
$M$) there is a diagram
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{\dotsb \ar[r]^(.5){}
&\Sigma^{2\rho_{4}}\kH\ar[r]^(.5){f_{2}}\ar[d]^(.5){}
&\Sigma^{\rho_{4}}\kH\ar[r]^(.5){f_{1}}\ar[d]^(.5){}
&k_{[2]}\ar[r]^(.4){f_{0}}\ar[d]^(.5){}
&\Sigma^{-\rho_{4}}\kH\ar[r]^(.6){f_{-1}}\ar[d]^(.5){}
&\dotsb
\\
&Y_{2}
&Y_{1}
&Y_{0}
&Y_{-1}
&
}
\end{displaymath}
\noindent where the homotopy colimit of the top row is $\KH$ and each
$Y_{m}$ has slices similar to those of $y_{m}$ as in Corollary
\ref{cor-filtration}.
\end{cor}
\proof The periodicity element $D=-\normrbar_{1}^{2} (5\ot_{2}^{2}-20
\ot_{2}\normrbar_{1} +11\normrbar_{1}^{2})$ can be factored as
\begin{displaymath}
D=D_{0}D_{1}D_{2}D_{3}
\end{displaymath}
\noindent where $D_{i}=a_{i}\normrbar_{1}+b_{i}\ot_{2}$ with $a_{i}\in
\Z_{(2)}^{\times }$ and $b_{i}\in R$. Then
let $f_{4n+i}$ be multiplication by $D_{i}$. It follows that the
composite of any four successive $f_{m}$s is $D$, making the colimit
$\KH$ as desired. The fact that $a_{i}$ is a unit means that the
$Y$'s here have the same slices as the $y$'s in Corollary
\ref{cor-filtration}. \qed
\begin{rem}\label{rem-Witt}
The 2-adic completion of $R$ is
the Witt ring $W (\F{4})$ used in Morava $E_{2}$-theory. This follows
from the fact that the roots of the quadratic polynomial involve
$\,\sqrt[]{5}$, which is in $W (\F{4})$ but is not a 2-adic integer.
Moreover if we assume that $D_{0}D_{1}= 5\ot_{2}^{2}-20
\ot_{2}\normrbar_{1} +11\normrbar_{1}^{2}$, then the composite maps
$f_{4n}f_{4n+1}$, as well as $f_{4n+2}$ and $f_{4n+3}$, can be
constructed without adjoining $\,\sqrt[]{5}$.
\end{rem}
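The arithmetic behind this remark can be checked directly. The sketch below (using sympy; purely illustrative and not part of the paper's argument) verifies that the discriminant of $11x^{2}-20x+5$ is $180=6^{2}\cdot 5$, that its roots are $(10\pm 3\sqrt{5})/11$, and that $5\not\equiv 1 \bmod 8$, so $\sqrt{5}$ is not a square among the odd $2$-adic units:

```python
import sympy as sp

x = sp.symbols('x')
p = 11 * x**2 - 20 * x + 5

# Discriminant 180 = 2^2 * 3^2 * 5, so the splitting field is Q(sqrt(5)).
disc = sp.discriminant(p, x)
assert disc == 180 and sp.factorint(disc) == {2: 2, 3: 2, 5: 1}

# Roots are (10 +- 3*sqrt(5))/11; check Vieta's formulas, and that
# each root genuinely involves sqrt(5).
roots = sp.solve(p, x)
assert sum(roots) == sp.Rational(20, 11)
assert sp.expand(roots[0] * roots[1]) == sp.Rational(5, 11)
assert all(r.has(sp.sqrt(5)) for r in roots)

# An odd 2-adic unit is a square iff it is 1 mod 8; 5 is not, so sqrt(5)
# is not a 2-adic integer and the roots live in the unramified quadratic
# extension, whose ring of integers is W(F_4).
assert 5 % 8 != 1
print("Witt ring check passed")
```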
It turns out that $y_{m}\wedge M$ and $Y_{m}$ for $m\geq 0$ not only have the
same slices, but the same slice spectral sequence, which is shown in
Figures \ref{fig-sseq-7a}--\ref{fig-sseq-8b}. See Remark \ref{rem-Ym}
below. We do not know if they have the same homotopy type.
\section{Some differentials in the slice {\SS} for $\kH$}\label{sec-diffs}
Now we turn to differentials. The only
generators in (\ref{eq-bigraded}) that are not permanent cycles are
the $u$'s. We will see that it is easy to account for the elements in
$\EE_{2}^{0,|V|-V} (G/H)$ for proper subgroups $H$ of
$G=C_{4}$. From (\ref{eq-bigraded}) we see that
\begin{numequation}\label{eq-sparse}
\EE_{2}^{s,t}= 0 \qquad \mbox{for $|t|$ odd.}
\end{numequation}
\noindent Since $d_{r}$ changes $t$ by $r-1$, this sparseness condition
implies that $d_{r}$ can be nontrivial only for odd values of $r$.
Our starting point is the Slice Differentials Theorem of
\cite[Thm. 9.9]{HHR}, which is derived from the fact that the geometric
fixed point spectrum of $MU^{((G))}$ is $MO$. It says that in the
slice {\SS} for $MU^{((G))}$ for an arbitrary finite cyclic 2-group
$G$ of order $g$, the first nontrivial differential on various powers
of $u_{2\sigma}$ is
\begin{numequation}
\label{eq-slicediffs}
d_{r} (u_{2\sigma}^{2^{k-1}})
= a_{\sigma}^{2^{k}}a_{\orho}^{2^{k}-1}
N_{2}^{g} (\orr_{2^{k}-1}^{G})
\in \EE_{r}^{r,r+2^{k} (1-\sigma )-1}MU^{((G))}(G/G),
\end{numequation}
\noindent where $r=1+(2^{k}-1)g$ and $\orho$ is the
reduced regular {\rep} of $G$.
In particular
\begin{numequation}
\label{eq-slicediffs2}
\begin{split}
\left\{
\begin{array}[]{rlll}
d_{5} (u_{2\sigma})
&\hspace{-2.5mm}
= a_{\sigma}^{3}a_{\lambda}\normrbar_{1}
&\hspace{-2.5mm}\in \EE_{5}^{5,6-2\sigma}MU^{((G))} (G/G)
&\mbox{for }G=C_{4}\\
d_{13} ([u_{2\sigma}^{2}])
&\hspace{-2.5mm}
= a_{\sigma}^{7}a_{\lambda}^{3}\normrbar_{3}
&\hspace{-2.5mm}\in \EE_{13}^{13,16-4\sigma}MU^{((G))} (G/G)
&\mbox{for }G=C_{4}\\
d_{3} (u_{2\sigma})
&\hspace{-2.5mm}
= a_{\sigma}^{3}\orr_{1}
&\hspace{-2.5mm}\in \EE_{3}^{3,4-2\sigma}MU_{\reals} (G/G)
&\mbox{for }G=C_{2}\\
d_{7} ([u_{2\sigma}^{2}])
&\hspace{-2.5mm}
= a_{\sigma}^{7}\orr_{3}
&\hspace{-2.5mm}\in \EE_{7}^{7,10-4\sigma}MU_{\reals} (G/G)
&\mbox{for }G=C_{2}.
\end{array}
\right.
\end{split}
\end{numequation}
The first of these leads directly to a similar differential in the
slice {\SS } for $\kH$. The target of the second one has
trivial image in $\kH$ and we shall see that $[u_{2\sigma }^{2}]$
turns out to be a permanent cycle.
There are two ways to leverage the third and fourth differentials of
(\ref{eq-slicediffs2}) into information about $\kH$.
\begin{enumerate}
\item [(i)] They both lead to differentials in the slice {\SS} for the
$C_{2}$-spectrum $i^{*}_{G'}\kH$. They are spelled out in
(\ref{eq-d3-and-d7}) and will be studied in detail below in
\S\ref{sec-C2diffs}. They completely determine the slice {\SS }
$\underline{E}_{*}^{*,\star} (G/G')$ for both $\kH$ and $\KH$.
Since $u_{\lambda }$ restricts to $\ou_{\lambda }$, which is
isomorphic to $u_{2\sigma_{2}}$, we get some information about
differentials on powers of $u_{\lambda }$. The $d_{3}$ on
$u_{2\sigma_{2}}$ forces a $d_{3} (u_{\lambda })=\eta a_{\lambda }$.
The target of $d_{7} ([u_{2\sigma_{2}}^{2}])$ turns out to be the
exotic restriction of an element in filtration 5, leading to $d_{5}
([u_{\lambda }]^{2})=\nu a_{\lambda }^{2}$. We will also see that
even though $[u_{2\sigma_{2}}^{4}]$ is a permanent cycle, $[u_{\lambda
}^{4}]$ (its preimage under the restriction map $\Res_{2}^{4}$) is
not.
\item [(ii)] One can norm up the differentials on $u_{2\sigma_{2}}$
and its square using Corollary \ref{cor-normdiff}, converting the
$d_{3}$ and $d_{7}$ to a $d_{5}$ and a $d_{13}$. The source of the
latter is $[a_{\sigma }u_{\lambda }^{4}]$, which implies that
$[u_{\lambda }^{4}]$ is not a permanent cycle.
\end{enumerate}
The differentials of (\ref{eq-slicediffs2}) lead to Massey products
which are permanent cycles,
\begin{align*}
\langle 2,\,a_{\sigma}^{2} ,\,f_{1} \rangle
& = [2u_{2\sigma}]
= \Tr_{G'}^{G}(u_{\sigma}^{2})
\in \mycases{
\EE_{6 }^{0,2-2\sigma}MU^{((G))} (G/G)
&\mbox{for }G=C_{4}\\
\EE_{4}^{0,2-2\sigma}MU_{\reals} (G/G)
&\mbox{for }G=C_{2}
}\\
\langle 2,\,a_{\sigma}^{4} ,\,f_{3} \rangle
& = [2u_{2\sigma}^{2}]
= \Tr_{G'}^{G}(u_{\sigma}^{4})
\in \mycases{
\EE_{14 }^{0,4-4\sigma}MU^{((G))} (G/G)
&\mbox{for }G=C_{4}\\
\EE_{8}^{0,4-4\sigma}MU_{\reals} (G/G)
&\mbox{for }G=C_{2}
}
\end{align*}
\noindent and (by Theorem \ref{thm-exotic}) to exotic transfers
\begin{numequation}\label{eq-exotic-transfers}
\begin{split}\left\{
\begin{array}[]{rl}
a_{\sigma}f_{1}
&\hspace{-2.5mm} = \left\{
\begin{array}{lll}
\Tr_{2}^{4} (u_{\sigma})
&\in\EE_{\infty }^{4,5-\sigma}MU^{((G))} (G/G)
&\mbox{(filtration jump 4)}\\
& \qquad \mbox{for }G=C_{4}\\
\Tr_{1}^{2} (u_{\sigma})
&\in \EE_{\infty }^{2,3-\sigma}MU_{\reals} (G/G)
&\mbox{(filtration jump 2)}\\
& \qquad \mbox{for }G=C_{2}
\end{array} \right. \\
a_{\sigma}^{3}f_{3}
&\hspace{-2.5mm} = \left\{
\begin{array}{lll}
\Tr_{2}^{4} (u_{\sigma}^{3})
&\in \EE_{\infty }^{12,15-3\sigma}MU^{((G))} (G/G)
&\mbox{(filtration jump 12)}\\
& \qquad \mbox{for }G=C_{4}\\
\Tr_{1}^{2} (u_{\sigma}^{3})
&\in \EE_{\infty }^{6,9-3\sigma}MU_{\reals} (G/G)
&\mbox{(filtration jump 6)}\\
& \qquad \mbox{for }G=C_{2}
\end{array} \right.
\end{array} \right.
\end{split}
\end{numequation}
Since $a_{\sigma }$ and $2a_{\lambda }$ kill transfers by Lemma
\ref{lem-hate}, we have Massey products,
\begin{numequation}\label{eq-order2}
\begin{split}
[u_{2\sigma }\Tr_{2}^{4}(x)] = \Tr_{2}^{4}(u_{\sigma }^{2}x)
=\langle a_{\sigma }f_{1},\,a_{\sigma } ,\, \Tr_{2}^{4}(x)\rangle
\quad \mbox{with }2a_{\lambda } [u_{2\sigma }\Tr_{2}^{4}(x)]=0.
\end{split}
\end{numequation}
Now, as before, let $G=C_{4}$ and $G'=C_{2}\subseteq G$. We need to
translate the $d_{3}$ above in the slice {\SS} for $MU_{\reals}$ into
a statement about the one for $\kH$ as a $G'$-spectrum. We have an
{\eqvr} multiplication map $m$ of $G'$-spectra
\begin{displaymath}
\xymatrix
@R=2mm
@C=10mm
{
&MU^{((G))}\ar@{=}[d]^(.5){}\\
MU_{\reals}\ar[r]^(.4){\eta_{L}}
& MU_{\reals}\wedge MU_{\reals}\ar[r]^(.6){m}
& MU_{\reals}\\
{\orr_{1}^{G'}}\ar@{|->}[r]^(.5){}
&\,{\orr_{1,0}^{G}+\orr_{1,1}^{G}}\ar@{|->}[r]^(.5){}
&{\,\orr_{1}^{G'}}\\
&a_{\sigma}^{3} (\orr_{1,0}^{G}+\orr_{1,1}^{G})\ar@{|->}[r]^(.5){}
&\,a_{\sigma}^{3}\orr_{1}^{G'}\\
{\orr_{3}^{G'}}\ar@{|->}[r]^(.5){}
&{\left(\begin{array}[]{r}
5\orr_{1,0}^{G}\orr_{1,1}^{G} (\orr_{1,0}^{G}+\orr_{1,1}^{G})
+(\orr_{1,1}^{G})^{3}\\ \bmod (\orr_{2}^{G},\orr_{3}^{G})
\end{array} \right)}
\ar@{|->}[r]^(.5){}
&\,\orr_{3}^{G'}\\
}
\end{displaymath}
\noindent where the elements lie in
$\upi^{G'}_{\rho_{2}}(\cdot)(G'/G')$ and
$\upi^{G'}_{3\rho_{2}}(\cdot)(G'/G')$. In the slice {\SS} for
$MU^{((G))}$ as a $G'$-spectrum, $d_{3} (u_{2\sigma})$ and
$d_{7}(u_{2\sigma}^{2})$ must be $G$-invariant since $u_{2\sigma}$ is,
and they must map respectively to $a_{\sigma}^{3}\orr_{1}^{G'}$ and
$a_{\sigma}^{7}\orr_{3}^{G'}$, so we have
\begin{numequation}\label{eq-d3-and-d7}
\begin{split}
\left\{\begin{array}{rl}
d_{3} (u_{2\sigma_{2} }) = d_{3} (\ou_{\lambda})
&\hspace{-2.5mm} = a_{\sigma_{2} }^{3} (\orr_{1,0}^{G}+\orr_{1,1}^{G})
= a_{\sigma_{2} }^{2} (\eta_{0}+\eta_{1})\\
d_{7} ([u_{2\sigma_{2} }^{2}]) = d_{7} ([\ou_{\lambda}^{2}])
&\hspace{-2.5mm} = a_{\sigma_{2} }^{7}\left(5\orr_{1,0}^{G}\orr_{1,1}^{G}
(\orr_{1,0}^{G}+\orr_{1,1}^{G})
+ (\orr_{1,1}^{G})^{3}+\dotsb \right)\\
&\hspace{-2.5mm} = a_{\sigma_{2} }^{7} (\orr_{1,0}^{G})^{3}+\dotsb\\
&
\qquad \mbox{since $a_{\sigma_{2}}^{3}(\orr_{1,0}^{G}+\orr_{1,1}^{G})=0$
in $\EE_{4}$}
\end{array} \right.
\end{split}
\end{numequation}
\noindent We get similar differentials in the slice spectral sequence
for $\kH$ as a $C_{2}$-spectrum in which the missing terms in $d_{7}
(\ou_{\lambda}^{2})$ vanish.
Pulling back along the isomorphism $\ur_{2}^{4}$ gives
\begin{numequation}\label{eq-d-ul}
\begin{split}\left\{
\begin{array}[]{rl}
d_{3} (\Res_{2}^{4}(u_{\lambda}))
= d_{3} (\overline{u}_{\lambda} )
&\hspace{-2.5mm} = [a_{\sigma_{2}}^{2}] (\eta_{0}+\eta_{1})
= \Res_{2}^{4}(a_{\lambda}\eta)\\
d_{7} (\Res_{2}^{4}(u_{\lambda}^{2}))
= d_{7} (\overline{u}_{\lambda}^{2} )
&\hspace{-3mm}
= \Res_{2}^{4}(a_{\lambda }^{2})\eta_{0}^{3}
= \Res_{2}^{4}(a_{\lambda }^{2}\nu )
\end{array}
\right.
\end{split}
\end{numequation}
\noindent These imply that
\begin{displaymath}
d_{3} (u_{\lambda})=a_{\lambda}\eta
\qquad \aand
d_{5} (u_{\lambda }^{2}) = a_{\lambda }^{2}\nu .
\end{displaymath}
The differential on $u_{\lambda}$ leads to the following Massey
products, the second two of which are permanent cycles.
\begin{numequation}\label{eq-alphax}
\begin{split}\left\{
\begin{array}[]{rl}
\left[u_{\lambda}^{2} \right]
= \langle a_{\lambda },\,\eta ,\, a_{\lambda },\,\eta\rangle
&\hspace{-3mm} \in \EE_{4}^{0,4-2\lambda} (G/G)\\
\left[2u_{\lambda} \right]
= \langle 2,\,\eta ,\, a_{\lambda} \rangle
&\hspace{-3mm} \in \EE_{4}^{0,2-\lambda} (G/G) \\
\overline{\nu } := [a_{\sigma}u_{\lambda}]
= \langle a_{\sigma},\,\eta ,\,a_{\lambda} \rangle
&\hspace{-3mm} \in \EE_{4}^{1,3-\sigma-\lambda} (G/G)
\end{array}
\right.
\end{split}
\end{numequation}
\noindent where $\overline{\nu } $ satisfies
\begin{align*}
a_{\sigma}^{2}\overline{\nu }
& = \langle a_{\sigma}^{3},\,\eta ,\, a_{\lambda}\rangle
= a_{\sigma }[a_{\sigma}^{2}u_{\lambda}]
= a_{\sigma} [2a_{\lambda}u_{2\sigma}]
= [2a_{\sigma }a_{\lambda}u_{2\sigma}]
= 0\\
\Res_{2}^{4}(\overline{\nu } )
& = [a_{\sigma_{2}}^{3}\orr_{1,\epsilon}]u_{\sigma}
\in \EE_{4}^{3,5-\sigma -\lambda} (G/G') \\
& \qquad \mbox{(exotic restriction with filtration jump 2 by Theorem
\ref{thm-exotic}(i))} \\
2\overline{\nu } & = \Tr_{2}^{4}(\Res_{2}^{4}(\overline{\nu } ))
= \Tr_{2}^{4}(u_{\sigma}
[a_{\sigma_{2}}^{3}\orr_{1,\epsilon}])\\
& = \eta' a_{\lambda}
\in \EE_{4}^{3,5-\sigma -\lambda} (G/G)\\
&
\qquad \mbox{(exotic group extension with jump 2)} \\
\Tr_{2}^{4}(x)\overline{\nu }
& = \Tr_{2}^{4}(x \cdot \Res_{2}^{4}(\overline{\nu } ))
= \Tr_{2}^{4}(x [a_{\sigma_{2}}^{3}\orr_{1,0}]
u_{\sigma})\\
\eta \overline{\nu }
& = \Tr_{2}^{4}([a_{\sigma_{2}}\orr_{1,1}])\overline{\nu }
= \Tr_{2}^{4}([a_{\sigma_{2}}^{4}\orr_{1,0}\orr_{1,1}]
u_{\sigma})
= a_{\lambda}^{2}\normrbar_{1}^{}\Tr_{2}^{4}(u_{\sigma}^{2})\\
& = a_{\lambda}\normrbar_{1}^{}
\langle 2,\,a_{\sigma} ,\,a_{\sigma}f_{1} \rangle
= \langle 2,\,a_{\sigma} ,\,f_{1}^{2} \rangle\\
\eta' \overline{\nu }
& = a_{\lambda}^{2}\normrbar_{1}^{}\Tr_{2}^{4}(u_{\sigma}^{3})
= 0\\
d_{7} ([\ou_{\lambda}^{2}])
& = [a_{\sigma_{2}}^{7}\orr_{1,0}^{3}]
\quad \mbox{in }\EE_{4}\\
& = \Res_{2}^{4}(\overline{\nu } )\Res_{2}^{4}(a_{\lambda}^{2}\normrbar_{1})
= \Res_{2}^{4}(\overline{\nu } a_{\lambda}^{2}\normrbar_{1})
= \Res_{2}^{4}(d_{5} (u_{\lambda}^{2} ))\\
d_{5} ([u_{\lambda}^{2}])
& = \overline{\nu } a_{\lambda}^{2}\normrbar_{1}=a_{\lambda }^{2}\nu \\
d_{7} ([2u_{\lambda}^{2}])
& = ( 2\overline{\nu } ) a_{\lambda}^{2}\normrbar_{1}
= a_{\lambda}^{3}\eta' \normrbar_{1}.
\end{align*}
\noindent Note that $\nu =\overline{\nu } \normrbar_{1}^{}$, with the exotic
restriction and group extension on $\overline{\nu } $ being consistent with
those on $\nu $.
The differential on $[u_{\lambda}^{2}]$ yields Massey products
\begin{numequation}\label{eq-brackets3}
\begin{split}
\left\{
\begin{array}[]{rl}
[a_{\sigma}^{2}u_{\lambda}^{2}]
&\hspace{-2.5mm} = \langle a_{\sigma}^{2},\,\overline{\nu }
,\,a_{\lambda}^{2}\normrbar_{1} \rangle\\
\left[\eta' u_{\lambda}^{2} \right]
&\hspace{-2.5mm} = \langle \eta ',\,\overline{\nu }
,\,a_{\lambda}^{2}\normrbar_{1} \rangle.
\end{array} \right.
\end{split}
\end{numequation}
\begin{thm}\label{thm-normed-slice-diffs}
{\bf Normed up slice differentials for $\kH$ and $\KH$.} In the slice
{\SS}s for $\kH$ and $\KH$,
\begin{align*}
d_{5} ( [a_{\sigma }u_{\lambda }^{2}])
& = 0 \\
\aand
d_{13} ([a_{\sigma }u_{\lambda }^{4}])
& = a_{\lambda }^{7}[u_{2\sigma }^{2}]\normrbar_{1}^{3} .
\end{align*}
\end{thm}
\proof
The two slice differentials over $G'$ are
\begin{align*}
d_{3} (u_{2\sigma_{2}})
& = a_{\sigma_{2}}^{3}\orr_{1}^{G'}
= a_{\sigma_{2}}^{3} (\orr_{1,0}+\orr_{1,1}) \\
\aand
d_{7} ([u_{2\sigma_{2}}^{2}])
& = a_{\sigma_{2}}^{7}\orr_{3}^{G'}
= a_{\sigma_{2}}^{7} ( 5 \orr_{1,0}^{2}\orr_{1,1} +5\orr_{1,0}
\orr_{1,1}^{2} +\orr_{1,1}^{3})
\end{align*}
\noindent We need to find the norms of both sources and targets.
Lemma \ref{lem-norm-au} tells us that
\begin{align*}
N_{2}^{4} (a_{\sigma_{2}}^{k})
& = a_{\lambda }^{k}\\
\aand N_{2}^{4} (u_{2\sigma_{2}}^{k}) & = u_{\lambda }^{2k}/u_{2\sigma
}^{k} \qquad \mbox{in }\underline{E}_{2}(G/G) .
\end{align*}
\noindent Previous calculations give
\begin{align*}
N_{2}^{4} (\orr_{1,0}+\orr_{1,1})
& = -\overline{t}_{2}\qquad \mbox{by (\ref{eq-norm-orr})} \\
\aand
N_{2}^{4} ( 5 \orr_{1,0}^{2}\orr_{1,1} +5\orr_{1,0}\orr_{1,1}^{2}
+\orr_{1,1}^{3})
& = -\normrbar_{1} (5\ot_{2}^{2}-20 \ot_{2}\normrbar_{1}
+11\normrbar_{1}^{2})\\
& \phantom{ = -\overline{t}_{2}}\, \qquad \mbox{by (\ref{eq-norm-quad})}.
\end{align*}
For the first differential, Corollary \ref{cor-normdiff} tells us that
\begin{align*}
a_{\lambda }^{3}\overline{t}_{2}
& = d_{5} (a_{\sigma }u_{\lambda }^{2}/u_{2\sigma }) \\
& = d_{5} (a_{\sigma } u_{\lambda }^{2})/u_{2\sigma }
-a_{\sigma }u_{\lambda }^{2}d_{5} (u_{2\sigma })/[u_{2\sigma }^{2}]\\
& = d_{5} (a_{\sigma } u_{\lambda }^{2})/u_{2\sigma }
-a_{\sigma }u_{\lambda }^{2}a_{\sigma }^{3}
a_{\lambda }\normrbar_{1}/[u_{2\sigma }^{2}]
\end{align*}
\noindent Multiplying both sides by the permanent cycle
$[u_{2\sigma}^{2}]$ gives
\begin{align*}
[u_{2\sigma }d_{5} (a_{\sigma } u_{\lambda }^{2})]
& = a_{\lambda }^{3}[u_{2\sigma }^{2}] \ot_{2}
+ a_{\sigma }u_{\lambda }^{2}a_{\sigma }^{3}
a_{\lambda }\normrbar_{1} \\
& = a_{\lambda }^{3}[u_{2\sigma }^{2}] \ot_{2}
+ 4a_{\lambda }^{3}[u_{2\sigma }^{2}]\normrbar_{1}\\
& = a_{\lambda }^{3}[u_{2\sigma }^{2}] \ot_{2}\\
d_{5} (a_{\sigma } u_{\lambda }^{2})
& = a_{\lambda }^{3}[u_{2\sigma } \ot_{2}].
\end{align*}
\noindent We have seen that
\begin{displaymath}
\eta \eta ' = a_{\lambda }[u_{2\sigma } \overline{t}_{2}].
\end{displaymath}
\noindent This implies that $a_{\lambda }^{2}[u_{2\sigma }
\overline{t}_{2}]$ vanishes in $\underline{E}_{5}$ since $a_{\lambda
}\eta $ is killed by $d_{3}$. It follows that $d_{5} (a_{\sigma }
u_{\lambda }^{2}) = a_{\lambda }^{3}[u_{2\sigma } \ot_{2}]=0$ as claimed.
For the second differential we have
\begin{align*}
d_{13} ([a_{\sigma }u_{\lambda }^{4}/u_{2\sigma }^{2}])
& = a_{\lambda }^{7} \normrbar_{1} (-5\ot_{2}^{2}+20 \ot_{2}\normrbar_{1}
+9\normrbar_{1}^{2})\\
d_{13} ([a_{\sigma }u_{\lambda }^{4}])
& = a_{\lambda }^{7} [u_{2\sigma }^{2}]\normrbar_{1}
(-5\ot_{2}^{2}+20 \ot_{2}\normrbar_{1}
+9\normrbar_{1}^{2})\\
& = a_{\lambda }^{7}[u_{2\sigma}^{2}]\normrbar_{1}
(-\ot_{2}^{2}+\normrbar_{1}^{2})
\end{align*}
\noindent since $a_{\lambda }$ has order 4. As we saw above,
$a_{\lambda }^{2}[u_{2\sigma } \overline{t}_{2}]$ vanishes in
$\underline{E}_{5}$, so $d_{13} ([a_{\sigma }u_{\lambda }^{4}])$ is as
claimed. \noindent\qed
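The coefficient bookkeeping in this proof is reduction mod 4, since $a_{\lambda }$ (hence any class divisible by $a_{\lambda }^{7}$) has additive order 4. A small sympy check (our illustration; $t$ and $n$ stand in for $\ot_{2}$ and $\normrbar_{1}$ as commuting indeterminates) verifies both reductions used above:

```python
# The three quadratic forms appearing in the d_13 computation agree
# modulo 4, which is all that matters after multiplying by a_lambda^7.
from sympy import symbols, expand, Poly

t, n = symbols('t n')
q_norm = -(5 * t**2 - 20 * t * n + 11 * n**2)   # from the norm computation
q_used = -5 * t**2 + 20 * t * n + 9 * n**2      # as written in the proof
q_final = -t**2 + n**2                          # the simplified form

for other in (q_used, q_final):
    coeffs = Poly(expand(q_norm - other), t, n).coeffs()
    assert all(c % 4 == 0 for c in coeffs)      # equal modulo 4
print("mod 4 reduction check passed")
```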
We can use this to find the differential on $[u_{\lambda}^{4}]$.
We have
\begin{numequation}\label{eq-d7}
\begin{split}\left\{
\begin{array}[]{rl}
d ([u_{\lambda}^{4}])
&\hspace{-2.5mm} = [2u_{\lambda}^{2}]d ([u_{\lambda}^{2}])
= [2u_{\lambda}^{2}]\overline{\nu } a_{\lambda}^{2}\normrbar_{1}^{}
= (2\overline{\nu } )a_{\lambda}^{2}[u_{\lambda}^{2}]\normrbar_{1}\\
&\hspace{-2.5mm} = \eta 'a_{\lambda}^{3}[u_{\lambda}^{2}]\normrbar_{1}^{}
= [\eta 'u_{\lambda}^{2}]a_{\lambda}^{3}\normrbar_{1}^{}
= \langle \eta',\,\overline{\nu } ,\,a_{\lambda}^{2}\normrbar_{1}\rangle
a_{\lambda}^{3}\normrbar_{1}.
\end{array}
\right.
\end{split}
\end{numequation}
The differential on $u_{2\sigma}$ yields
\begin{displaymath}
[x u_{2\sigma}]
=\langle x,\, a_{\sigma}^{2}
,\, f_{1}\rangle
\end{displaymath}
\noindent for any permanent cycle $x$ killed by $a_{\sigma}^{2}$.
Possible values of $x$ include 2, $\eta $, $\eta '$ (each of which is
killed by $a_{\sigma}$ as well) and $\overline{\nu } $. For the last of these
we write
\begin{numequation}\label{eq-brackets2}
\xi:= [\overline{\nu } u_{2\sigma}]
= \langle \overline{\nu } ,\,a_{\sigma}^{2}
,\, f_{1}\rangle
= \langle[ a_{\sigma} u_{\lambda }] ,\,a_{\sigma}^{2}
,\, f_{1}\rangle
\in \EE_{6}^{1,5-3\sigma-\lambda} (G/G),
\end{numequation}
\noindent which satisfies
\begin{align*}
\Res_{2}^{4}(\xi)
& = a_{\sigma_{2}}^{3}u_{\sigma}^{3}\orr_{1,\epsilon}
\in \EE_{4}^{3,7-3\sigma -\lambda} (G/G')\\
&
\qquad \mbox{(exotic restriction with jump 2)}\\
2\xi
& = \Tr_{2}^{4}(\Res_{2}^{4}(\xi))
= \eta' a_{\lambda}u_{2\sigma}
\in \EE_{4}^{3,7-3\sigma -\lambda} (G/G)\\
&
\qquad \mbox{(exotic group extension with jump 2)}\\
d_{5} ([u_{2\sigma}u_{\lambda}^{2}])
& = a_{\sigma}^{3}a_{\lambda}u_{\lambda}^{2}\normrbar_{1}
+\overline{\nu } a_{\lambda}^{2}u_{2\sigma}\normrbar_{1}
= (a_{\sigma}^{3}u_{\lambda}^{2}+\overline{\nu } u_{2\sigma})
a_{\lambda}^{2}\normrbar_{1}\\
& = (2a_{\sigma}a_{\lambda}u_{\lambda}u_{2\sigma}+\xi)
a_{\lambda}^{2}\normrbar_{1}
= \xi a_{\lambda}^{2}\normrbar_{1}\\
d_{7} ([2u_{2\sigma }u_{\lambda }^{2}])
& = 2\xi \cdot a_{\lambda}^{2}\normrbar_{1}
= \eta' a_{\lambda}^{3}u_{2\sigma}\normrbar_{1}\\
\Res_{2}^{4}(d_{5} ([u_{2\sigma}u_{\lambda}^{2}]))
& = u_{\sigma}^{3}a_{\sigma_{2}}^{3}\orr_{1,\epsilon}
\Res_{2}^{4}(a_{\lambda}^{2}\normrbar_{1})
= u_{\sigma}^{2}a_{\sigma_{2}}^{7}\orr_{1,0}^{3}
= u_{\sigma}^{2}d_{7} (\overline{u}_{\lambda}^{2} ).
\end{align*}
\begin{thm}\label{thm-d3ulambda}
{\bf The differentials on powers of $u_{\lambda}$ and $u_{2\sigma}$.}
The following differentials occur in the slice {\SS} for $\kH$. Here
$\ou_{\lambda}$ denotes $\Res_{2}^{4}(u_{\lambda})$.
\begin{align*}
d_{3} (u_{\lambda})
& = a_{\lambda}\eta
= \Tr_{2}^{4}(a_{\sigma_{2}}^{3}\orr_{1,\epsilon })\\
d_{3} (\ou_{\lambda})
& = \Res_{2}^{4}(a_{\lambda}) (\eta_{0}+\eta_{1})
= \left[a_{\sigma_{2}}^{3} (\orr_{1,0}+\orr_{1,1}) \right]\\
d_{5} (u_{2\sigma})
& = a_{\sigma}^{3}a_{\lambda}\normrbar_{1}^{} \\
d_{5} ([u_{\lambda}^{2}])
& = a_{\lambda}^{2}a_{\sigma}u_{\lambda}\normrbar_{1}
= a_{\lambda}^{2}\overline{\nu } \normrbar_{1}
= a_{\lambda}^{2}\nu\qquad
\mbox{for $\overline{\nu }$
as in (\ref{eq-alphax})} \\
d_{5} ([u_{2\sigma}u_{\lambda}^{2}])
& = a_{\sigma}^{3}a_{\lambda}u_{\lambda}^{2}\normrbar_{1}
+\overline{\nu } a_{\lambda}^{2}u_{2\sigma}\normrbar_{1}
= (a_{\sigma}^{3}u_{\lambda}^{2}+\overline{\nu } u_{2\sigma})
a_{\lambda}^{2}\normrbar_{1}\\
& = \xi a_{\lambda}^{2}\normrbar_{1}
\qquad \mbox{for $\xi $ as in (\ref{eq-brackets2})} \\
d_{7} ([2u_{2\sigma}u_{\lambda}^{2}])
& = \eta' a_{\lambda}^{3}u_{2\sigma}\normrbar_{1}\\
d_{7} ([2u_{\lambda}^{2}])
& = 2 a_{\lambda}^{2}\overline{\nu } \normrbar_{1}
= a_{\lambda}^{3}\eta '\normrbar_{1}^{}\\
d_{7} ([\ou_{\lambda}^{2}])
& = \Res_{2}^{4}(a_{\lambda}^{2})\eta_{0}^{3}
= a_{\sigma_{2}}^{7}\orr_{1,0}^{3}\\
d_{7} ([u_{\lambda}^{4}])
& = [\eta 'u_{\lambda}^{2}]a_{\lambda}^{3}\normrbar_{1}
= \langle \eta',\,\overline{\nu } ,\,a_{\lambda}^{2}\normrbar_{1}\rangle
a_{\lambda}^{3}\normrbar_{1}.
\end{align*}
\noindent The elements
\begin{align*}
u_{\sigma},
& &[2u_{\lambda}]
& = \langle 2,\,\eta ,\,a_{\lambda} \rangle
,\\
[2u_{2\sigma}]
& = \langle 2,\,a_{\sigma}^{2} ,\,f_{1} \rangle
=\Tr_{2}^{4}(u_{\sigma}^{2}),
&[4u_{\lambda}^{2}]
& = \langle 2,\,\eta '
,\,a_{\lambda}^{3}\normrbar_{1} \rangle
=\Tr_{1}^{4}(u_{\sigma_{2} }^{4}),\\
[u_{2\sigma}^{2} ]
&=\langle a_{\sigma}^{2},\,f_{1} ,\,a_{\sigma}^{2},\,f_{1} \rangle
& [2\ou_{\lambda}^{2}]
& = \langle 2,\,a_{\sigma_{2} }^{6}
,\,a_{\sigma_{2}}\orr_{1,0}^{3} \rangle
=\Tr_{1}^{2}(u_{\sigma_{2} }^{4}), \\
[2u_{2\sigma}u_{\lambda}]
&=\langle [2u_{2\sigma}],\,\eta ,\,a_{\lambda} \rangle,
& [2u_{\lambda}^{4}]
& = \langle 2,\,\eta ',\overline{\nu }
,\,a_{\lambda}^{5}\normrbar_{1}^{2} \rangle
=\Tr_{2}^{4}(\ou_{\lambda}^{4}), \\
[\ou_{\lambda}^{4}]
& = \langle a_{\sigma_{2}}^{7},\,\orr_{1,0}^{3} , \,
a_{\sigma_{2}}^{7},\,\orr_{1,0}^{3}\rangle
&\aand [u_{\lambda}^{8}]
& = \langle [\eta 'u_{\lambda }^{2}]
,\,a_{\lambda}^{3}\normrbar_{1}
,\, [\eta 'u_{\lambda }^{2}]
,\,a_{\lambda}^{3}\normrbar_{1} \rangle
\end{align*}
\noindent are permanent cycles.
We also have the following exotic restriction and transfers.
\begin{align*}
\Res_{2}^{4}(a_{\sigma}u_{\lambda})
& = u_{\sigma}\Res_{2}^{4}(a_{\lambda})\eta_{\epsilon }
= u_{\sigma}a_{\sigma_{2}}^{3}\orr_{1,\epsilon }
\qquad \mbox{(filtration jump 2)} \\
\Tr_{2}^{4}(u_{\sigma}^{k})
& = \mycases{
a_{\sigma}^{2}a_{\lambda}\normrbar_{1}u_{2\sigma}^{(k-1)/2}
= a_{\sigma}f_{1}u_{2\sigma}^{(k-1)/2}\\
\qquad \mbox{(filtration jump 4)}
&\mbox{for $k\equiv 1$ mod 4}\hspace{4cm}\\
2u_{2\sigma}^{k/2}
&\mbox{for $k$ even}\\
0 &\mbox{for $k\equiv 3$ mod 4}
}\\
\Tr_{1}^{2}(u_{\sigma_{2}}^{k})
& = \mycases{
a_{\sigma_{2}}^{2} (\orr_{1,0}+\orr_{1,1})\ou_{\lambda}^{(k-1) /2}
= a_{\sigma_{2}} (\eta_{0}+\eta_{1})\ou_{\lambda}^{(k-1) /2}
\\
\qquad \mbox{(filtration jump 2)}
&\hspace{-2.4cm} \mbox{for $k\equiv 1$ mod 4 }\\
2\ou_{\lambda}^{k/2}
&\hspace{-2.4cm}\mbox{for $k$ even}\\
a_{\sigma_{2}}^{6} \orr_{1,0}^{3}\ou_{\lambda}^{(k-3) /2}\\
\qquad \mbox{(filtration jump 6)}
&\hspace{-2.4cm}\mbox{for $k\equiv 3$ mod 4 }
}
\end{align*}
\end{thm}
\proof All differentials were established above.
The differential on $u_{\lambda}^{2}$ does {\em not} lead to an exotic
transfer because neither $[\ou_{\lambda}^{2}]$ nor
$[u_{\lambda} a_{\lambda}^{2}\normrbar_{1}]$ is a permanent cycle as
required by Theorem \ref{thm-exotic}.
We need to discuss the element $[2u_{2\sigma}u_{\lambda}] =\langle
[2u_{2\sigma}],\,\eta ,\,a_{\lambda} \rangle$. To see that this Toda
bracket is defined, we need to verify that $[2u_{2\sigma}]\eta =0$.
For this we have
\begin{displaymath}
[2u_{2\sigma}] \eta
= [2u_{2\sigma}]\Tr_{2}^{4}(\eta_{0})
= \Tr_{2}^{4}(2u_{\sigma}^{2}\eta_{0})
= \Tr_{2}^{4}(0)=0.
\end{displaymath}
The exotic restriction and transfers are applications of Theorem
\ref{thm-exotic} to the differentials on $u_{\lambda}$ and on
$\left[u_{2\sigma}^{(k+1)/2} \right]$ and
$\left[\ou_{\lambda}^{(k+1)/2} \right]$ for odd $k$. For even $k$ we
have
\begin{displaymath}
\Tr_{2}^{4}(u_{\sigma}^{k})
= \Tr_{2}^{4}\left(\Res_{2}^{4}
\left(\left[u_{2\sigma}^{k/2}\right]\right) \right)
= \left[2u_{2\sigma}^{k/2} \right]
\quad \mbox{since } \Tr_{2}^{4}(\Res_{2}^{4}(x))= (1+\gamma )x,
\end{displaymath}
\noindent and similarly for even powers of $u_{\sigma_{2}}$.
As remarked above, we lose no information by inverting the class $D$,
which is divisible by $\normrbar_{1}$. It is shown in
\cite[Thm. 9.16]{HHR} that inverting the latter makes
$u_{2\sigma}^{2}$ a permanent cycle. One can also see this from
(\ref{eq-slicediffs2}). Since $d_{5}
(u_{2\sigma})=a_{\sigma}^{3}a_{\lambda}\normrbar_{1}$, $d_{5}
(u_{2\sigma}\normrbar_{1}^{-1})=a_{\sigma}^{3}a_{\lambda}$. This
means that $d_{13}
([u_{2\sigma}^{2}])=a_{\sigma}^{7}a_{\lambda}^{3}\normrbar_{3}$ is
trivial in $\EE_{6} (G/G)$. It turns out that there is no possible
target for a higher differential. \qed
\section{$\kH$ as a $C_{2}$-spectrum}
\label{sec-C2diffs}
Before studying the slice {\SS} for the $C_{4}$-spectrum $\kH$
further, it is helpful to explore its restriction to $G'=C_{2}$, for
which the $\Z$-bigraded portion
\begin{displaymath}
\EE_{2}^{*,*}i_{G'}^{*}\kH (G'/G')
\cong \EE_{2}^{*,(G',*)}\kH (G/G)
\cong \EE_{2}^{*,*}\kH (G/G')
\end{displaymath}
\noindent (see Theorem \ref{thm-module} for these isomorphisms) is the
isomorphic image of the subring of Corollary \ref{cor-subring}. In
the following we identify $\Sigma_{2,\epsilon }$, $\delta_{1}$ and
$\orr_{1,\epsilon }$ (see Table \ref{tab-pi*}) with their images under
$\ur_{2}^{4}$. From the differentials of (\ref{eq-d3-and-d7}) we get
\begin{numequation}\label{eq-k2-C2-diffs}
\begin{split}
\left\{\begin{array}{rl}
d_{3} (\Sigma_{2,\epsilon })
&\hspace{-2.5mm} = \eta_{\epsilon }^{2} (\eta_{0}+\eta_{1})
= a_{\sigma }^{3} \orr_{1,\epsilon }^{2} (\orr_{1,0}+\orr_{1,1}) \\
d_{3} (\delta_{1})
&\hspace{-2.5mm} = \eta_{0}^{2}\eta_{1}+\eta_{0}\eta_{1}^{2}
= a_{\sigma }^{3} \orr_{1,0 } \orr_{1,1 } (\orr_{1,0}+\orr_{1,1}) \\
d_{7} ([\delta_{1}^{2}])
&\hspace{-2.5mm} = d_{7} (u_{2\sigma}^{2}) \orr_{1,0}^{2}\orr_{1,1}^{2}
= a_{\sigma}^{7}\orr_{3}^{G'}
\orr_{1,0}^{2}\orr_{1,1}^{2}\\
&\hspace{-2.5mm} = a_{\sigma}^{7} (5\orr_{1,0}^{2}\orr_{1,1}
+5\orr_{1,0}\orr_{1,1}^{2}+\orr_{1,1}^{3})
\orr_{1,0}^{2}\orr_{1,1}^{2}.
\end{array} \right.
\end{split}
\end{numequation}
\noindent The $d_{3}$s above make all monomials in $\eta_{0}$ and
$\eta_{1}$ of any given degree $\geq 3$ the same in $\EE_{4}
(G/G')$ and $\EE_{4} (G'/G')$, so $d_{7}
(\delta_{1}^{2})=\eta_{0}^{7}$. Similar calculations show that
\begin{displaymath}
d_{7} ([\Sigma_{2,\epsilon }^{2}])=\eta_{0}^{7}
= a_{\sigma}^{7}\orr_{1,0}^{7}.
\end{displaymath}
\noindent The image of the periodicity element $D$ here is as in
(\ref{eq-r24D}).
We have the following values of the transfer on powers of $u_{\sigma }$.
\begin{displaymath}
\Tr_{1}^{2}(u_{\sigma }^{i})=\mycases{
[2u_{2\sigma }^{i/2}]
&\mbox{for $i$ even} \\
{}
[a_{\sigma }^{2} u_{2\sigma }^{(i-1) /2}] (\orr_{1,0}+\orr_{1,1})
&\mbox{for $i\equiv 1$ mod 4} \\
{}
[u_{2\sigma }^{4}]^{(i-3) /8}a_{\sigma }^{6}\orr_{1,0}^{3}
= [ u_{2\sigma }^{4}]^{(i-3) /8}a_{\sigma }^{6}\orr_{1,1}^{3}
&\mbox{for $i\equiv 3$ mod 8}\\
0 &\mbox{for $i\equiv 7$ mod 8}
}
\end{displaymath}
\noindent
This leads to the following, for which Figure
\ref{fig-G'SS} is a visual aid.
\begin{figure}\label{fig-G'SS}
\end{figure}
\begin{thm}\label{thm-G'SS}
{\bf The slice {\SS} for $\kH$ as a $C_{2}$-spectrum.} Using the
notation of Table \ref{tab-C2Mackey} and Definition
\ref{def-enriched}, we have
\begin{align*}
\EE_{2}^{*,*} (G'/\ee)
& = \Z[r_{1,0}, r_{1,1}]\qquad
\mbox{with }r_{1,\epsilon }\in \EE_{2}^{0,2} (G'/\ee) \\
\EE_{2}^{*,*} (G'/G')
& = \Z[\delta_{1},\Sigma_{2,\epsilon},\eta_{\epsilon }
\colon \epsilon =0,\,1]/\\
&\qquad
\left(2\eta _{\epsilon },\delta_{1}^{2}-\Sigma_{2,0}\Sigma_{2,1},
\eta_{\epsilon } \Sigma_{2,\epsilon +1}+\eta _{1+\epsilon }\delta_{1} \right),
\end{align*}
\noindent so
\begin{displaymath}
\EE_{2}^{s,t} =\mycases{
\Box \oplus \bigoplus_{\ell}\widetilde{\Box}
&\mbox{for $(s,t)= (0,4\ell )$ with $\ell \geq 0$}
\\
\bigoplus_{\ell +1}\widetilde{\oBox}
&\mbox{for $(s,t)= (0,4\ell+2 )$ with $\ell \geq 0$}
\\
\bullet\oplus \bigoplus_{u+\ell}\widetilde{\bullet}
&\mbox{for $(s,t)= (2u,4\ell+4u )$
with $\ell \geq 0$ and $u>0$}
\\
\bigoplus_{u+\ell}\widetilde{\bullet}
&\mbox{for $(s,t)= (2u-1,4\ell+4u -2)$
with $\ell \geq 0$ and $u> 0$}
\\
0 &\mbox{otherwise.}
}
\end{displaymath}
The first set of differentials is determined by
\begin{displaymath}
d_{3} (\Sigma_{2,\epsilon })=\eta_{\epsilon }^{2} (\eta_{0}+\eta_{1})
\qquad \aand
d_{3} (\delta_{1}) = \eta_{0}\eta_{1} (\eta_{0}+\eta_{1})
\end{displaymath}
\noindent and there is a second set of differentials determined by
\begin{displaymath}
d_{7} (\Sigma_{2,\epsilon }^{2})=d_{7} (\delta_{1}^{2})=\eta_{0}^{7}
\end{displaymath}
\end{thm}
\begin{cor}\label{cor-G'SS}
{\bf Some nontrivial permanent cycles.} The elements listed below in
$\EE_{2}^{s,8i+2s}\kH (G/G')$ are nontrivial permanent
cycles. Their transfers in $\EE_{2}^{s,8i+2s}\kH (G/G)$ are also
permanent cycles.
\begin{itemize}
\item [$\bullet$] $\Sigma_{2,\epsilon }^{2i-j}\delta_{1}^{j}$ for
$0\leq j\leq 2i$ ($4i+1$ elements of infinite order including
$\delta_{1}^{2i}$), $i$ even and $s=0$.
\item [$\bullet$] $\eta_{\epsilon }\Sigma_{2,\epsilon
}^{2i-j}\delta_{1}^{j}$ for $0\leq j<2i$ and $\eta_{\epsilon
}\delta_{1}^{2i}$ ($4i+2$ elements of order 2) for $i$ even and $s=1$.
\item [$\bullet$] $\eta_{\epsilon }^{2}\Sigma_{2,\epsilon
}^{2i-j}\delta_{1}^{j}$ for $0\leq j<2i$ and
$\delta_{1}^{2i}\left\{\eta_{0}^{2},\,\eta_{0}\eta_{1},\,\eta_{1}^{2}
\right\}$ ($4i+3$ elements of order 2) for $i$ even and $s=2$.
\item [$\bullet$] $\eta_{0}^{s}\delta_{1}^{2i}$ for $3\leq s\leq 6$ (4
elements of order 2) and $i$ even.
\item [$\bullet$]
$\Sigma_{2,\epsilon}^{2i-j}\delta_{1}^{j}+\delta_{1}^{2i}$ for $0\leq
j\leq 2i$ ($4i+1$ elements of infinite order including
$2\delta_{1}^{2i}$), $i$ odd and $s=0$.
\item [$\bullet$]
$\eta_{\epsilon}\Sigma_{2,\epsilon}^{2i-j}\delta_{1}^{j}+\delta_{1}^{2i}$
for $0\leq j\leq 2i-1$ and $\eta_{0}\delta_{1}^{2i-1}
(\Sigma_{2,1}+\delta_{1}) = \eta_{1}\delta_{1}^{2i-1}
(\Sigma_{2,0}+\delta_{1})$ ($4i+1$ elements of order 2), $i$ odd and $s=1$.
\item [$\bullet$]
$\eta_{\epsilon}^{2}\Sigma_{2,\epsilon}^{2i-j}\delta_{1}^{j}+\delta_{1}^{2i}$
for $0\leq j\leq 2i-1$,
$\eta_{0}^{2}\delta_{1}^{2i-1}(\Sigma_{2,1}+\delta_{1}) =
\eta_{0}\eta_{1}\delta_{1}^{2i-1}(\Sigma_{2,0}+\delta_{1})$ and
$\eta_{0}\eta_{1}\delta_{1}^{2i-1}(\Sigma_{2,1}+\delta_{1}) =
\eta_{1}^{2}\delta_{1}^{2i-1}(\Sigma_{2,0}+\delta_{1})$ ($4i+2$
elements of order 2) for $i$ odd and $s=2$.
\end{itemize}
In $\EE_{2}^{0,8i+4}\kH (G/G')$ we have $2\Sigma_{2,\epsilon
}^{2i+1-j}\delta_{1}^{j}$ for $0\leq j\leq 2i$ and $2\delta_{1}^{2i+1}$,
$4i+3$ elements of infinite order, each in the image of the transfer
$\Tr_{1}^{2}$.
\end{cor}
\begin{rem}\label{rem-a7}
In {\bf the $RO (G)$-graded slice {\SS } for $k_{[2]}$} one has
\begin{displaymath}
d_{3}(u_{2\sigma })=a_{\sigma }^{3} (\orr_{1,0}+\orr_{1,1})
\qquad \aand
d_{7}([u_{2\sigma }^{2}])=a_{\sigma }^{7} \orr_{3}^{G'}
=a_{\sigma }^{7} \orr_{1,0}^{3},
\end{displaymath}
\noindent but $a_{\sigma }^{7}$ itself, and indeed all higher powers of $a_{\sigma }$,
survive to $\underline{E}_{8}=\underline{E}_{\infty }$. Hence the
$\underline{E}_{\infty }$-term of this {\SS} does {\bf not} have the
horizontal vanishing line that we see in the $\underline{E}_{8}$-term of
Figure \ref{fig-KR}. However when we pass from $k_{[2]}$ to $K_{[2]}$,
$\orr_{3}^{G'}=5\orr_{1,0}^{2}\orr_{1,1}
+5\orr_{1,0}\orr_{1,1}^{2}+\orr_{1,1}^{3}$ becomes invertible and we
have
\begin{displaymath}
d_{7} ((\orr_{3}^{G'})^{-1}[u_{2\sigma }^{2}])
= d_{7} (\orr_{1,0}^{-3}[u_{2\sigma }^{2}])
= a_{\sigma }^{7}.
\end{displaymath}
\noindent On the other hand, $\orr_{1,0}+\orr_{1,1}$ is not invertible, so we cannot divide $u_{2\sigma }$ by it.
\end{rem}
We now give the {\Ps} computation analogous to the one following
Remark \ref{rem-a3}, using the notation of (\ref{eq-aur}). In the $RO
(G')$-graded slice {\SS } for $k_{[2]}$ we have
\begin{displaymath}
\underline{E}_{2} (G'/G')
=\Z[a_{\sigma },u_{2\sigma },\orr_{1,0},\orr_{1,1}]/ (2a_{\sigma }),
\end{displaymath}
\noindent so
\begin{align*}
g (\underline{E}_{2} (G'/G'))
& = \left(\frac{1}{1-t}+ \frac{\wa}{1-\wa}\right)
\frac{1}{(1-\uu) (1-\rr)^{2}} \\
g (\underline{E}_{4} (G'/G'))
& = g (\underline{E}_{2} (G'/G'))
-\frac{\uu+\rr\wa^{3}}
{(1-\wa) (1-\uu^{2}) (1-\rr)^{2}} \\
& = \frac{1+t\uu}
{(1-t) (1-\uu^{2}) (1-\rr)^{2}}
+\frac{\wa+\wa^{2}}
{ (1-\uu^{2}) (1-\rr)^{2}}
+\frac{\wa^{3}}
{(1-\wa) (1-\uu^{2})(1-\rr)}
\end{align*}
\noindent as before. The next differential leads to
\begin{align*}
g (\underline{E}_{8} (G'/G'))
& = g (\underline{E}_{4} (G'/G')) -\frac{\uu^{2}+\rr^{3}\wa^{7}}
{(1-\wa) (1-\uu^{4}) (1-\rr)} \\
& = g (\underline{E}_{4} (G'/G')) -\frac{\uu^{2}}
{ (1-\uu^{4}) (1-\rr)}\\
& \qquad -\frac{\uu^{2}\wa}
{(1-\wa) (1-\uu^{4}) (1-\rr)} -\frac{\rr^{3}\wa^{7}}
{(1-\wa) (1-\uu^{4}) (1-\rr)} \\
& = \frac{1+t\uu}{(1-t) (1-\uu^{2}) (1-\rr)^{2}}
+\frac{\wa+\wa^{2}}{ (1-\uu^{2}) (1-\rr)^{2}}\\
&\qquad
+\frac{\wa^{3}}{(1-\wa) (1-\uu^{2})(1-\rr)}
-\frac{\uu^{2}}{ (1-\uu^{4}) (1-\rr)} \\
& \qquad
-\frac{\uu^{2} (\wa+\wa^{2})}{ (1-\uu^{4}) (1-\rr)}
-\frac{\uu^{2}\wa^{3}}{(1-\wa) (1-\uu^{4}) (1-\rr)}
-\frac{\rr^{3}\wa^{7}}{(1-\wa) (1-\uu^{4}) (1-\rr)} \\
& = \frac{(1+t\uu) (1+\uu^{2})- (1-t) (1-\rr)\uu^{2}}
{(1-t) (1-\uu^{4}) (1-\rr)^{2}}
+ \frac{(\wa+\wa^{2}) ( 1+\uu^{2} -\uu^{2} (1-\rr))}
{ (1-\uu^{4}) (1-\rr)^{2}} \\
&\qquad + \frac{\wa^{3} (1+\uu^{2})-\uu^{2}\wa^{3}-\rr^{3}\wa^{7}}
{(1-\wa) (1-\uu^{4})(1-\rr)}\\
& = \frac{1+t\uu+ (t+\rr-t\rr)\uu^{2}+t\uu^{3}}
{(1-t) (1-\uu^{4}) (1-\rr)^{2}}
+ \frac{(\wa+\wa^{2}) (1+\uu^{2}\rr)}
{(1-\uu^{4}) (1-\rr)^{2}} \\
&\qquad +\frac{\wa^{3}-\wa^{7}+\wa^{7}-\rr^{3}\wa^{7}}
{(1-\wa) (1-\uu^{4})(1-\rr)}\\
& = \frac{1+t\uu+ (t+\rr-t\rr)\uu^{2}+t\uu^{3}}
{(1-t) (1-\uu^{4}) (1-\rr)^{2}}
+ \frac{(\wa+\wa^{2}) (1+\uu^{2}\rr)}
{(1-\uu^{4}) (1-\rr)^{2}} \\
&\qquad +\frac{\wa^{3}+\wa^{4}+\wa^{5}+\wa^{6}}
{ (1-\uu^{4})(1-\rr)}
+\frac{\wa^{7} (1+\rr+\rr^{2})}
{(1-\wa) (1-\uu^{4})}
\end{align*}
\noindent The fourth term of this expression represents the elements
with filtration above six, and the first term represents the elements of
filtration 0. The latter include
\begin{align*}
[2u_{2\sigma }]
& \in \langle 2,\,a_{\sigma }^{2}
,\,a_{\sigma }(\orr_{1,0}+\orr_{1,1}) \rangle , \\
[2u_{2\sigma }^{2}]
& \in \langle 2,\,a_{\sigma } ,\,a_{\sigma }^{6}\orr_{1,0}^{3} \rangle , \\
[(\orr_{1,0}+\orr_{1,1}) u_{2\sigma }^{2}]
& \in \langle a_{\sigma }^{4} ,\,a_{\sigma }^{3}\orr_{1,0}^{3}
,\, \orr_{1,0}+\orr_{1,1}\rangle \\
& \qquad \mbox{with }
(\orr_{1,0}+\orr_{1,1}) [2u_{2\sigma }^{2}]
=2 [(\orr_{1,0}+\orr_{1,1}) u_{2\sigma }^{2}], \\
[2u_{2\sigma }^{3}]
& \in \langle 2,\, a_{\sigma }^{2}(\orr_{1,0}+\orr_{1,1})
,\, a_{\sigma}^{2},\,a_{\sigma}^{6}\orr_{1,0}^{3}\rangle \\
\aand
[u_{2\sigma }^{4}]
& \in \langle a_{\sigma }^{4} ,\,a_{\sigma }^{3}\orr_{1,0}^{3}
,\,a_{\sigma }^{4} ,\,a_{\sigma }^{3}\orr_{1,0}^{3} \rangle
\end{align*}
\noindent with notation as in Remark \ref{rem-abuse}.
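Since the series manipulations above are identities between rational
functions, they can be verified mechanically. The following sketch (our
notation: it assumes the computer algebra package sympy, with symbols
t, u, r, a standing for $t$, $\uu$, $\rr$, $\wa$) checks that subtracting
the terms removed by the two differentials from
$g (\underline{E}_{2} (G'/G'))$ yields the closed forms displayed above for
$g (\underline{E}_{4} (G'/G'))$ and $g (\underline{E}_{8} (G'/G'))$:

```python
import sympy as sp

t, u, r, a = sp.symbols('t u r a')

# g(E_2(G'/G')) as displayed above
gE2 = (1/(1 - t) + a/(1 - a)) / ((1 - u)*(1 - r)**2)

# subtract the terms removed by d_3 and compare with the
# three-term closed form for g(E_4)
gE4 = gE2 - (u + r*a**3)/((1 - a)*(1 - u**2)*(1 - r)**2)
gE4_closed = ((1 + t*u)/((1 - t)*(1 - u**2)*(1 - r)**2)
              + (a + a**2)/((1 - u**2)*(1 - r)**2)
              + a**3/((1 - a)*(1 - u**2)*(1 - r)))
assert sp.cancel(gE4 - gE4_closed) == 0

# subtract the terms removed by d_7 and compare with the
# four-term closed form for g(E_8)
gE8 = gE4 - (u**2 + r**3*a**7)/((1 - a)*(1 - u**4)*(1 - r))
gE8_closed = ((1 + t*u + (t + r - t*r)*u**2 + t*u**3)
              / ((1 - t)*(1 - u**4)*(1 - r)**2)
              + (a + a**2)*(1 + u**2*r)/((1 - u**4)*(1 - r)**2)
              + (a**3 + a**4 + a**5 + a**6)/((1 - u**4)*(1 - r))
              + a**7*(1 + r + r**2)/((1 - a)*(1 - u**4)))
assert sp.cancel(gE8 - gE8_closed) == 0
```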
As indicated in Remark \ref{rem-a7}, we can get rid of the elements in
filtrations above 6 by formally
adjoining $w:=(\orr_{3}^{G'})^{-1}u_{2\sigma }^{2}$ to $\underline{E}_{2}
(G'/G')$. As before we denote the enlarged {\SS } terms by
$\underline{E}'_{r} (G'/G')$. This time let
\begin{displaymath}
\www= \rr^{-3}\uu^{2}=xy^{-7}.
\end{displaymath}
\noindent Then we have
\begin{displaymath}
g (\underline{E}'_{r} (G'/G'))
=\left(\frac{1-\uu^{2}}{1-\www} \right) g (\underline{E}_{r} (G'/G'))
\qquad \mbox{for $r=2$ and $r=4$}
\end{displaymath}
\noindent and
\begin{align*}
g (\underline{E}'_{8} (G'/G'))
& = g (\underline{E}'_{4} (G'/G'))
-\frac{\www+\wa^{7}}{(1-\wa) (1-\www^{2}) (1-\rr)} \\
& = g (\underline{E}'_{4} (G'/G'))
-\frac{\www}{ (1-\www^{2}) (1-\rr)}
-\frac{(\wa+\wa^{2})\www}{ (1-\www^{2}) (1-\rr)} \\
&\qquad -\frac{\wa^{3}\www+\wa^{7}}{(1-\wa) (1-\www^{2}) (1-\rr)} \\
& = \frac{1+t\uu}{(1-t) (1-\www) (1-\rr)^{2}}
+\frac{\wa+\wa^{2}}{ (1-\www) (1-\rr)^{2}}\\
&\qquad
+\frac{\wa^{3}}{(1-\wa) (1-\www)(1-\rr)}
-\frac{\www}{ (1-\www^{2}) (1-\rr)} \\
&\qquad -\frac{(\wa+\wa^{2})\www}{ (1-\www^{2}) (1-\rr)}
-\frac{\wa^{3}\www+\wa^{7}}{(1-\wa) (1-\www^{2}) (1-\rr)} \\
& = \frac{(1+t\uu) (1+\www) - (1-t) (1-\rr)\www}
{(1-t) (1-\www^{2}) (1-\rr)^{2}}
+\frac{(\wa+\wa^{2}) (1- (1-\rr)\www)}{ (1-\www^{2}) (1-\rr)^{2}}\\
&\qquad +\frac{\wa^{3} (1+\www) -\wa^{3}\www-\wa^{7}}
{(1-\wa) (1-\www^{2})(1-\rr)} \\
& = \frac{1+t\uu+ (t+\rr-t\rr)\www+t\www\uu}
{(1-t) (1-\www^{2}) (1-\rr)^{2}}
+\frac{(\wa+\wa^{2}) (1+\rr\www)}{ (1-\www^{2}) (1-\rr)^{2}}\\
&\qquad +\frac{\wa^{3}+\wa^{4}+\wa^{5}+\wa^{6}}{ (1-\www^{2})(1-\rr)} .
\end{align*}
\noindent Again the first term represents the elements of
filtration 0. These include
\begin{align*}
[2u_{2\sigma }]
& \in \langle 2,\,a_{\sigma }^{2}
,\,a_{\sigma }(\orr_{1,0}+\orr_{1,1}) \rangle , \\
[2w]
& \in \langle 2,\,a_{\sigma } ,\,a_{\sigma }^{6}\rangle , \\
[(\orr_{1,0}+\orr_{1,1})w]
& \in \langle a_{\sigma }^{4} ,\,a_{\sigma }^{3}
,\, \orr_{1,0}+\orr_{1,1}\rangle \\
& \qquad \mbox{with }
(\orr_{1,0}+\orr_{1,1}) [2w]
=2 [(\orr_{1,0}+\orr_{1,1}) w], \\
[2u_{2\sigma }w]
& \in \langle 2,\, a_{\sigma }^{2}(\orr_{1,0}+\orr_{1,1})
,\, a_{\sigma}^{2},\,a_{\sigma}^{6}\rangle \\
\aand
[w^{2}]
& \in \langle a_{\sigma }^{4} ,\,a_{\sigma }^{3}
,\,a_{\sigma }^{4} ,\,a_{\sigma }^{3} \rangle
\end{align*}
\noindent where, as indicated above,
$w=(\orr_{3}^{G'})^{-1}u_{2\sigma}^{2}$.
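The primed computation admits the same mechanical check; a minimal sympy
sketch (our notation: the symbol W stands for $\www$, treated as a formal
variable independent of u and r) confirms the closed form displayed above
for $g (\underline{E}'_{8} (G'/G'))$:

```python
import sympy as sp

t, u, r, a, W = sp.symbols('t u r a W')  # W stands for the class \www

# g(E'_4(G'/G')) as displayed above
gE4p = ((1 + t*u)/((1 - t)*(1 - W)*(1 - r)**2)
        + (a + a**2)/((1 - W)*(1 - r)**2)
        + a**3/((1 - a)*(1 - W)*(1 - r)))

# subtract the terms removed by d_7 and compare with the closed form
gE8p = gE4p - (W + a**7)/((1 - a)*(1 - W**2)*(1 - r))
gE8p_closed = ((1 + t*u + (t + r - t*r)*W + t*W*u)
               / ((1 - t)*(1 - W**2)*(1 - r)**2)
               + (a + a**2)*(1 + r*W)/((1 - W**2)*(1 - r)**2)
               + (a**3 + a**4 + a**5 + a**6)/((1 - W**2)*(1 - r)))
assert sp.cancel(gE8p - gE8p_closed) == 0
```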
\section{The effect of the first differentials over $C_{4}$}
\label{sec-C4diffs}
Theorem \ref{thm-sliceE2} lists elements in the slice {\SS} for
$\kH$ over $C_{4}$ in terms of
\begin{displaymath}
r_{1}, \, \os_{2} , \,\normrbar_{1};\quad
\eta , \,a_{\sigma}, \,a_{\lambda};\quad
u_{\lambda},\,u_{\sigma},\,\mbox{ and }u_{2\sigma}.
\end{displaymath}
\noindent All but the $u$'s are permanent cycles, and the action of
$d_{3}$ on $u_{\lambda}$, $u_{\sigma}$ and $u_{2\sigma}$ is
described above in Theorem \ref{thm-d3ulambda}.
\begin{prop}\label{prop-d3}
{\bf $d_{3}$ on elements in Theorem \ref{thm-sliceE2}.}
We have the following $d_{3}$s, subject to the conditions on $i$, $j$,
$k$ and $\ell $ of Theorem \ref{thm-sliceE2}:
\begin{itemize}
\item [$\bullet$] On $X_{2\ell ,2\ell }$:
\begin{align*}
d_{3} (a_{\lambda}^{j}u_{2\sigma}^{\ell }
u_{\lambda}^{2\ell -j}\normrbar_{1}^{2\ell })
& = \mycases{
a_{\lambda}^{j+1}\eta u_{2\sigma}^{\ell}u_{\lambda}^{2\ell -j-1}
\normrbar_{1}^{2\ell}\hspace{-1.7cm}\\
\qquad
\in \upi_{*}X_{2\ell,2\ell +1} (G/G)
&\mbox{for $j$ odd}\\
0\hspace{2cm}&\mbox{for $j$ even}
}\\
d_{3} (a_{\sigma}^{2k}a_{\lambda}^{2\ell }u_{2\sigma}^{\ell-k }
\normrbar_{1}^{2\ell })
& = 0
\end{align*}
\item [$\bullet$] On $X_{2\ell+1 ,2\ell+1 }$:
\begin{align*}
d_{3} (\delta_{1}^{2\ell +1})
& = \eta u_{\sigma}^{2\ell+1}
\Res_{2}^{4}(a_{\lambda}u_{\lambda}^{2\ell }\normrbar_{1}^{2\ell+1})\\
& \in \upi_{*}X_{2\ell+1,2\ell +2} (G/G')\\
\lefteqn{ d_{3} (u_{\sigma}^{2\ell +1}
\Res_{2}^{4}(a_{\lambda}^{j}u_{\lambda}^{2\ell +1-j}
\normrbar_{1}^{2\ell +1}))}\qquad\qquad\\
& = \mycases{
\eta u_{\sigma}^{2\ell +1}
\Res_{2}^{4}(a_{\lambda}^{j+1}u_{\lambda}^{2\ell -j}
\normrbar_{1}^{2\ell +1})\\
\qquad
\in \upi_{*}X_{2\ell+1,2\ell +2} (G/G')
&\mbox{for $j$ even}\\
0\hspace{2cm}&\mbox{for $j$ odd}
}\\
\lefteqn{ d_{3} (a_{\sigma}a_{\lambda}^{j}
u_{\sigma}^{2\ell}u_{\lambda}^{2\ell +1-j}
\normrbar_{1}^{2\ell +1})}\qquad\qquad\\
& = \mycases{
\eta a_{\sigma}a_{\lambda}^{j+1}
u_{\sigma}^{2\ell}u_{\lambda}^{2\ell -j}
\normrbar_{1}^{2\ell +1}\\
\qquad
\in \upi_{*}X_{2\ell+1,2\ell +2} (G/G)
&\mbox{for $j$ even}\\
0\hspace{2cm}&\mbox{for $j$ odd}
}\\
\lefteqn{d_{3} (a_{\sigma}^{2k+1}a_{\lambda}^{2\ell +1}
u_{2\sigma}^{\ell-k}
\normrbar_{1}^{2\ell +1})}\qquad\qquad\\
& = 0
\end{align*}
\item [$\bullet$] On $X_{i ,2\ell-i }$:
\begin{align*}
d_{3} (u_{\sigma }^{\ell } \os_{2}^{\ell -i}
\Res_{2}^{4}(u_{\lambda}^{\ell }\normrbar_{1}^{i}) )
& = \mycases{
\eta^{3} u_{\sigma }^{\ell-1 } \os_{2}^{\ell -i-1}
\Res_{2}^{4}(u_{\lambda}^{\ell-1 }\normrbar_{1}^{i})\\% \hspace{-2cm}\\
\qquad
\in \upi_{*}X_{i,2\ell +1-i} (G/G')\\%\hspace{-3cm}\\
&\hspace{-2cm}\mbox{for $\ell $ odd}\\
0 &\hspace{-2cm}\mbox{for $\ell $ even}
}\\
d_{3} (\eta^{2j}u_{\sigma}^{\ell -j}\os_{2}^{\ell -i-j}
\Res_{2}^{4}(u_{\lambda}^{\ell -j}\normrbar_{1}^{i}) )
& = \mycases{
\eta^{2j+1}u_{\sigma}^{\ell -j}\os_{2}^{\ell -i-j}
\Res_{2}^{4}(a_{\lambda}u_{\lambda}^{\ell -j-1}\normrbar_{1}^{i})
\\
\qquad
\in \upi_{*}X_{i,2\ell +1-i} (G/G')\\
&\hspace{-3.0cm} \mbox{for $\ell -j$ odd}\\
0
&\hspace{-3.0cm} \mbox{for $\ell -j$ even}
}
\end{align*}
\item [$\bullet$] On $X_{i ,2\ell+1-i }$:
\begin{align*}
d_{3} (r_{1}\Res_{1}^{2}(u_{\sigma }^{\ell } \os_{2}^{\ell -i})
\Res_{1}^{4}(u_{\lambda}^{\ell }\normrbar_{1}^{i}) )
& = 0\\
d_{3} ( \eta^{2j+1}u_{\sigma}^{\ell -j}\os_{2}^{\ell -i-j}
\Res_{2}^{4}(u_{\lambda}^{\ell -j}\normrbar_{1}^{i}) )
& = \mycases{
\eta^{2j+2}u_{\sigma}^{\ell -j}\os_{2}^{\ell -i-j}
\Res_{2}^{4}(a_{\lambda}u_{\lambda}^{\ell -j-1}\normrbar_{1}^{i})
\\
\qquad
\in \upi_{*}X_{i,2\ell +2-i} (G/G')\hspace{-3cm}\\
&\hspace{-3cm} \mbox{for $\ell -j$ odd}\\
0 &\hspace{-3cm} \mbox{for $\ell -j$ even}
}
\end{align*}
\end{itemize}
\end{prop}
Note that in each case the first index of $X$ is unchanged by the
differential, and the second one is increased by one. Since $X_{m,n}$
is a summand of the $2 (m+n)$th slice, each $d_{3}$ raises the slice
degree by 2 as expected.
\begin{rem}\label{rem-Ym}
{\bf The spectra $y_{m}$ and $Y_{m}$ of Corollaries \ref{cor-filtration} and
\ref{cor-Filtration}}. Similar statements can be proved for the case
$\ell <0$. We leave the details to the reader, but illustrate the
results in Figures \ref{fig-sseq-8a} and \ref{fig-sseq-8b}.
The source of each differential in Proposition \ref{prop-d3} is the product of
some element in $\upi_{\star}\HZZ$ with a power of $\normrbar_{1}^{}$
or $\delta_{1}$. The target is the product of a different element in
$\upi_{\star}\HZZ$ with the same power. This means they are
differentials in the slice {\SS } for the spectra $y_{m}$ of
Corollary \ref{cor-filtration}.
Similar differentials occur when we replace $\normrbar_{1}^{i}$ by any
homogeneous polynomial of degree $i$ in $\normrbar_{1}^{}$ and
$\ot_{2}$ in which the coefficient of $\normrbar_{1}^{i}$ is odd.
This means they are also differentials in the slice {\SS } for the
spectra $Y_{m}$ of Corollary \ref{cor-Filtration}.
\end{rem}
These differentials are illustrated in the upper charts in Figures
\ref{fig-sseq-7a}--\ref{fig-sseq-8b}. In order to pass to $\EE_{4}$
we need the following exact sequences of Mackey functors.
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
0 \ar[r]^(.5){}
&\bullet\ar[r]^(.5){}
&\circ\ar[r]^(.5){d_{3}}
&{\widehat{\bullet}}\ar[r]^(.5){}
&{\obull }\ar[r]^(.5){}
&0\\
0\ar[r]^(.5){}
&{\widehat{\diagbox}}\ar[r]^(.5){}
&{\widehat{\Box}}\ar[r]^(.5){d_{3}}
&{\widehat{\bullet} }\ar[r]^(.5){}
&0\\
&0\ar[r]^(.5){}
&\obull\ar[r]^(.5){d_{3}}
&{\widehat{\bullet}}\ar[r]^(.5){}
&{\JJ }\ar[r]^(.5){}
&0\\
0 \ar[r]^(.5){}
&\odbox\ar[r]^(.5){}
&\oBox\ar[r]^(.5){d_{3}}
&{\widehat{\bullet}}\ar[r]^(.5){}
&\JJ\ar[r]^(.5){}
&0
}
\end{displaymath}
The resulting subquotients of $\EE_{4}$ are shown in the lower charts
of Figures \ref{fig-sseq-7a}--\ref{fig-sseq-8b} and described below in
Theorem \ref{thm-sliceE4}. In the latter the slice summands are
organized as shown in the Figures rather than by orbit type as in
Theorem \ref{thm-sliceE2}.
\begin{figure}\label{fig-sseq-7a}
\end{figure}
\begin{figure}\label{fig-sseq-7b}
\end{figure}
\begin{figure}\label{fig-sseq-8a}
\end{figure}
\begin{figure}\label{fig-sseq-8b}
\end{figure}
\begin{thm}\label{thm-sliceE4}
{\bf The slice $\EE_{4}$-term for $\kH$.} The elements of
Theorem \ref{thm-sliceE2} surviving to $\EE_{4}$, which live
in the appropriate subquotients of $\upi_{*}X_{m,n}$, are
as follows.
\begin{enumerate}
\item
In $\upi_{*}X_{2\ell ,2\ell }$ (see the
leftmost diagonal in Figure \ref{fig-sseq-7a}), on the 0-line we still
have a copy of $\Box$ generated under fixed point restrictions by
$\Delta_{1}^{\ell }\in \EE_{4}^{0,8\ell }$. In positive
filtrations we have
\begin{align*}
\circ &\subseteq \EE_{4}^{2j,8\ell }
\qquad \mbox{generated by } \\
&\mycases{
a_{\lambda}^{j}u_{2\sigma}^{\ell }
u_{\lambda}^{2\ell -j}\normrbar_{1}^{2\ell }
\in \EE_{4}^{2j,8\ell } (G/G)\\
&\hspace{-5cm} \mbox{for $j$ even and }0<j\leq 2\ell, \\
2a_{\lambda}^{j}u_{2\sigma}^{\ell }
u_{\lambda}^{2\ell -j}\normrbar_{1}^{2\ell }
=a_{\sigma}^{2}a_{\lambda}^{j-1}u_{2\sigma}^{\ell +1}
u_{\lambda}^{2\ell -j-1}\normrbar_{1}^{2\ell }
\in \EE_{4}^{2j,8\ell } (G/G)\\
&\hspace{-5cm} \mbox{for $j$ odd and }0<j\leq 2\ell \mbox{ and}}\\
\bullet &\subseteq \EE_{4}^{2k+2\ell ,8\ell }
\qquad \mbox{generated by } \\
&a_{\sigma}^{2k}a_{\lambda}^{2\ell }u_{2\sigma}^{\ell -k}
\normrbar_{1}^{2\ell }
\in \EE_{4}^{2k+2\ell ,8\ell } (G/G)
\qquad \mbox{for }0<k\leq \ell.
\end{align*}
\item
In $\upi_{*}X_{2\ell ,2\ell +1}$ (see the
second leftmost diagonal in Figure \ref{fig-sseq-7a}), in filtration 0
we have $\widehat{\oBox }$, generated (under transfers and
the group action) by
\begin{displaymath}
r_{1}\Res_{1}^{2}(u_{\sigma }^{2\ell })
\Res_{1}^{4}(u_{\lambda}^{2\ell }\normrbar_{1}^{2\ell })
\in \EE_{4}^{0,8\ell+2} (G/\ee) .
\end{displaymath}
\noindent In positive filtrations we have
\begin{displaymath}
\begin{array}[]{rll}
\widehat{\bullet}
& \subseteq
\EE_{4}^{1,8\ell +2}
&\mbox{generated (under transfers and the group action) by} \\
& & \eta u_{\sigma}^{2\ell }\Res_{2}^{4}(u_{\lambda}
\normrbar_{1})^{2\ell } \in \EE_{4}^{1,8\ell +2} (G/G')\\
\obull
& \subseteq
\EE_{4}^{4k+1,8\ell +2}
&\mbox{for $0<k\leq \ell $ generated by} \\
& & x=\eta^{4k+1} u_{\sigma}^{2\ell-2k}
\Res_{2}^{4}(u_{\lambda}\normrbar_{1})^{2\ell -2k}
\in \EE_{4}^{4k+1,8\ell +2} (G/G')\\
& & \mbox{with $(1-\gamma )x=\Tr_{2}^{4}(x)=0$.}
\end{array}
\end{displaymath}
\item
In $\upi_{*}X_{2\ell+1 ,2\ell +1}$ (see the leftmost diagonal
in Figure \ref{fig-sseq-7b}), on the 0-line we have a copy of $\odbox$
generated under fixed point restrictions by $\Delta_{1}^{(2\ell +1)/2 }\in
\EE_{4}^{0,8\ell+4}$. In positive filtrations we have
\begin{displaymath}
\begin{array}[]{rll}
\obull
&\subseteq
\EE_{4}^{2j,8\ell+4 }
& \mbox{generated by }\\
& & u_{\sigma}^{2\ell +1}
\Res_{2}^{4}(a_{\lambda}^{j}u_{\lambda}^{2\ell +1-j}
\normrbar_{1}^{2\ell +1})\in \EE_{4}^{2j,8\ell+4 } (G/G')\\
& &\mbox{for }0<j\leq 2\ell+1 , \\
\bullet
&\subseteq
\EE_{4}^{2j+1 ,8\ell+4 }
& \mbox{generated by }\\
& & a_{\sigma}a_{\lambda}^{j}
u_{\sigma}^{2\ell}u_{\lambda}^{2\ell +1-j}
\normrbar_{1}^{2\ell +1}\in \EE_{4}^{2j+1 ,8\ell+4 } (G/G)\\
& & \mbox{for }0\leq j\leq 2\ell+1\mbox{ and} \\
\bullet
&\subseteq
\EE_{4}^{2k+4\ell +3 ,8\ell+4 }
&\mbox{generated by }\\
& &a_{\sigma}^{2k+1}a_{\lambda}^{2\ell +1}
u_{2\sigma}^{\ell-k}
\normrbar_{1}^{2\ell +1}
\in \EE_{4}^{2k+4\ell +3 ,8\ell+4 } (G/G)\\
& &\mbox{for }0< k\leq \ell.
\end{array}
\end{displaymath}
\item
In $\upi_{*}X_{2\ell+1 ,2\ell +2}$ (see the
second leftmost diagonal in Figure \ref{fig-sseq-7b}), in filtration 0
we have $\widehat{\oBox }$, generated (under transfers and
the group action) by
\begin{displaymath}
r_{1}\Res_{1}^{2}(u_{\sigma }^{2\ell+1 })
\Res_{1}^{4}(u_{\lambda}^{2\ell+1 }\normrbar_{1}^{2\ell+1 })
\in \EE_{4}^{0,8\ell+6} (G/\ee) .
\end{displaymath}
\noindent In positive filtrations we have
\begin{displaymath}
\begin{array}[]{rll}
\JJ & \subseteq
\EE_{4}^{4k+3,8\ell+6}
&\mbox{for $0\leq k\leq \ell $ generated under transfer by } \\
& &x=\eta^{4k+3}\Delta_{1}^{\ell -k}
\in \EE_{4}^{4k+3,8\ell+6} (G/G')\\
& &\mbox{with $(1-\gamma )x=0$.}
\end{array}
\end{displaymath}
\noindent The generator of $\EE_{4}^{4k+3,8\ell+6} (G/G')$ is the
exotic restriction of the one in $\EE_{4}^{4k+1,8\ell+4} (G/G)$.
\item
In $\upi_{*}X_{m,m+i}$ for $i\geq 2$ (see the
rest of Figures \ref{fig-sseq-7a} and \ref{fig-sseq-7b}), in
filtration 0 we have
\begin{displaymath}
\begin{array}[]{rll}
\widehat{\oBox }
& \subseteq
\EE_{4}^{0,4m+4j+2}
&\mbox{generated under transfers and group action by }\\
& &r_{1}\Res_{1}^{2}(u_{\sigma }^{m +j} \os_{2}^{j})
\Res_{1}^{4}(u_{\lambda}^{m +j}\normrbar_{1}^{m })\\
& &\quad\in \EE_{4}^{0,4m+4j+2} (G/\ee)\mbox{ for $j\geq 0$,}\\
\widehat{\diagbox}
& \subseteq
\EE_{4}^{0,8\ell +4}
&\mbox{generated under transfers and group action by }\\
& &r_{1}\Res_{1}^{2}(u_{\sigma }^{m +j} \os_{2}^{j})
\Res_{1}^{4}(u_{\lambda}^{m +j}\normrbar_{1}^{m })\\
& &\quad\in \EE_{4}^{0,8\ell +4} (G/\ee)
\mbox{ for $\ell \geq m/2$ and} \\
\widehat{\Box}
& \subseteq
\EE_{4}^{0,8\ell }
&\mbox{generated under transfers, restriction and group action by}\\
& &x_{8\ell ,m}
=\Sigma_{2,0}^{2\ell -m}\delta_{1}^{m}+\ell \delta_{1}^{2\ell }
\mbox{ where } \\
& &\Sigma_{2,\epsilon }=u_{\rho_{2}}\os_{2,\epsilon } \\
& &\mbox{and } \delta_{1}=u_{\rho_{2}}\Res_{2}^{4}(\normrbar_{1})\\
& &\quad\in \EE_{4}^{0,8\ell } (G/G')\mbox{ for $0\leq m\leq 2\ell -1$}.
\end{array}
\end{displaymath}
\noindent In positive filtrations we have
\begin{displaymath}
\begin{array}[]{rll}
\widehat{\bullet}
& \subseteq
\EE_{4}^{2,8\ell +4}
&\mbox{generated under transfers and group action by }\\
& &\eta_{0}^{2}\Res_{2}^{4}(\Delta_{1}^{\ell})
= \eta_{0}^{2}\delta_{1}^{2\ell } =
\eta_{0}^{2}u_{\sigma}^{2\ell}\Res_{2}^{4}(u_{\lambda}\normrbar_{1})^{2\ell}\\
& &\quad\in \EE_{4}^{2,8\ell +4} (G/G')
\quad \mbox{and}\\
\widehat{\bullet}
& \subseteq
\EE_{4}^{s,8\ell +2s}
&\mbox{generated under transfers and group action by }\\
& &\eta_{\epsilon }^{s}x_{8\ell ,m}\in\EE_{4}^{s,8\ell +2s} (G/G')\\
& & \mbox{for $s=1,2$ and $0\leq m\leq 2\ell-1 $}.
\end{array}
\end{displaymath}
\noindent Each generator of $\EE_{4}^{2,8\ell +4} (G/G')$ is an exotic
transfer of one in\linebreak $\EE_{4}^{0,8\ell +2} (G/\ee)$.
\end{enumerate}
\end{thm}
\begin{prop}\label{prop-perm}
{\bf Some nontrivial permanent cycles.} The elements listed in Theorem
\ref{thm-sliceE4}(v) other than $\eta_{\epsilon }^{2}\delta_{1}^{2\ell
}$ are all nontrivial permanent cycles.
\end{prop}
\proof Each such element either lies in the image of
$\EE_{4}^{0,*} (G/\ee)$ under the transfer, and is
therefore a nontrivial permanent cycle, or is among those listed
in Corollary \ref{cor-G'SS}. \qed
{\em In subsequent discussions and charts, starting with Figure
\ref{fig-E4}, we will omit the elements in Proposition
\ref{prop-perm}. These elements all occur in $\EE_{4}^{s,t}$ for
$0\leq s\leq 2$.}
Analogous statements can be made about the slice {\SS} for $\KH$. Each
of its slices is a certain infinite wedge spelled out in Corollary
\ref{cor-KH}. Their homotopy groups are determined by the chain
complex calculations of Section \ref{sec-chain} and illustrated in
Figures \ref{fig-sseq-1} (with Mackey functor induction
$\uparrow_{2}^{4}$ applied) and \ref{fig-sseq-3}. Analogs of Figures
\ref{fig-sseq-7a}--\ref{fig-sseq-7b} are shown in Figures
\ref{fig-sseq-8a}--\ref{fig-sseq-8b}. In each figure, exotic
transfers and restrictions are indicated by blue and dashed green lines
respectively. As in the $\kH$ case, most of the elements shown in
this chart can be ignored for the purpose of calculating higher
differentials. {\em In the third quadrant the elements we are
ignoring all occur in $\EE_{4}^{s,t}$ for $-2\leq s\leq 0$.}
The resulting reduced $\EE_{4}$ for $\KH$ is shown in Figure
\ref{fig-KH}. The information shown there is very useful for
computing differentials and extensions. The periodicity theorem tells
us that $\upi_{n}\KH$ and $\upi_{n-32}\KH$ are isomorphic. For $0\leq
n<32$ these groups appear in the first and third quadrants
respectively, and the information visible in the {\SS} can be quite
different.
For example, we see that $\upi_{0}\KH$ has a summand of the form $\Box$,
while $\upi_{-32}\KH$ has a subgroup isomorphic to $\fourbox $. The
quotient $\Box/\fourbox $ is isomorphic to $\circ $. This leads to
the exotic restrictions and transfer in dimension $-32$ shown in
Figure \ref{fig-KH}. Information that is transparent in dimension 0
implies subtle information in dimension $-32$. Conversely, we see
easily that $\upi_{-4}\KH=\dot{\diagbox}$ while $\upi_{28}\KH$ has a
quotient isomorphic to $\odbox$. This leads to the ``long transfer''
(which raises filtration by 12) in dimension 28.
\section{Higher differentials and exotic Mackey functor extensions}
\label{sec-higher}
\begin{figure}\label{fig-G'SS-3}
\end{figure}
We can use the results of \S\ref{sec-C2diffs} to study higher
differentials and extensions. The $\EE_{7}$-term implied by them is
illustrated in Figure \ref{fig-G'SS-3}. For each $\ell ,s\geq 0$ there is
a generator
\begin{displaymath}
y_{8\ell +s,s}:=\eta_{0}^{s} \delta_{1}^{2\ell }
\in \EE_{7}^{s,8\ell +2s} (G/G')
\end{displaymath}
\noindent with
\begin{displaymath}
d_{7} (y_{16k+s+8,s}) = y_{16k+s+7,s+7}.
\end{displaymath}
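The subscript bookkeeping here can be sanity-checked: in the convention
$y_{8\ell +s,s}\in \EE_{7}^{s,8\ell +2s}$ the first subscript is the stem
$t-s$ and the second the filtration, so a $d_{7}$ must lower the first by 1
and raise the second by 7. A small sketch (the function name bidegree is
ours) confirms that the displayed differential has the bidegree shift
$(s,t)\mapsto (s+7,t+6)$ of a $d_{7}$:

```python
# y_{n,s} has n = stem and s = filtration, hence bidegree (s, t) with
# t = n + s.
def bidegree(n, s):
    return (s, n + s)

for k in range(3):
    for s in range(3, 8):
        src = bidegree(16*k + s + 8, s)        # y_{16k+s+8, s}
        tgt = bidegree(16*k + s + 7, s + 7)    # y_{16k+s+7, s+7}
        assert tgt[0] - src[0] == 7            # filtration rises by 7
        assert tgt[1] - src[1] == 6            # internal degree rises by 6
```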
\noindent
Recall that
\begin{displaymath}
\delta_{1}=\ou_{\lambda }\orr_{1,0}\orr_{1,1}
\in \underline{E}_{2}^{0,4}\kH (G/G')
\cong \underline{E}_{2}^{0,(G',4)}\kH (G/G),
\end{displaymath}
\noindent and in the latter group we denote $\ou_{\lambda }$ by
$u_{2\sigma }$. We have
\begin{displaymath}
d_{3} (\delta_{1})= d_{3} (\ou_{\lambda })\orr_{1,0}\orr_{1,1}
\cong d_{3} (u_{2\sigma })\orr_{1,0}\orr_{1,1}
=a_{\sigma }^{3} (\orr_{1,0}+\orr_{1,1})\orr_{1,0}\orr_{1,1}.
\end{displaymath}
\noindent
If the source has the form $\Res_{2}^{4}(x_{16k+s+8,s})$, then such
an $x$ must support a nontrivial $d_{r}$ for $r\leq 7$. If it has a
nontrivial transfer $x'_{16k+s+8,s}$, then such an $x'$ cannot support an
earlier differential, and we must have
\begin{displaymath}
d_{r} (x'_{16k+s+8,s})
=\Tr_{2}^{4}(d_{7} (y_{16k+s+8,s}))
=\Tr_{2}^{4}(y_{16k+s+7,s+7}) \qquad \mbox{for some $r\geq 7$}.
\end{displaymath}
\noindent We could get a higher differential (meaning $r>7$) if
$y_{16k+s+7,s+7}$ supports an exotic transfer.
We have seen (Figure \ref{fig-E4} and Theorem \ref{thm-sliceE4}) that
for $s\geq 3$ and $k\geq 0$,
\begin{numequation}\label{eq-eighth-diagonls}
\begin{split}
\EE_{5}^{s,16k+8+2s}
= \mycases{
\circ
&\mbox{for $s\equiv 0$ mod 4}\\
\overline{\bullet}
&\mbox{for $s\equiv 1,2$ mod 4}\\
\JJ
&\mbox{for $s\equiv 3$ mod 4.}
}
\end{split}
\end{numequation}
\noindent For $s=1,2$, $\EE_{5}^{s,16k+8+2s}$ has
$\overline{\bullet}$ as a direct summand. For $s=0$ it has $\Box$ as a
summand, and the differentials on it factor through its quotient
$\circ$; see (\ref{eq-C4-SES}).
The corresponding statement in the third quadrant is
\begin{displaymath}
\EE_{5}^{-s,-16k-2s-24}
= \mycases{
\circ
&\mbox{for $s\equiv 3$ mod 4}\\
\overline{\bullet}
&\mbox{for $s\equiv 1,2$ mod 4}\\
\JJ
&\mbox{for $s\equiv 0$ mod 4.}
}
\end{displaymath}
\noindent for $s\geq 3$ and $k\geq 0$. For $s=1,2$ the groups have
similar summands, and for $s=0$ there is a summand of the form
$\JJbox$, which has $\JJ$ as a subgroup; again see (\ref{eq-C4-SES}).
This is illustrated in Figure \ref{fig-KH}.
\begin{thm}\label{thm-d7}
{\bf Differentials for $C_{4}$ related to the $d_{7}$s for $C_{2}$.}
The differential
\begin{displaymath}
d_{7} (y_{16k+s+8,s})=y_{16k+s+7,s+7}\qquad \mbox{with }s\geq 3
\end{displaymath}
\noindent has the following implications for the congruence
classes of $s$ modulo 4.
\begin{enumerate}
\item [(i)] For $s\equiv 0$, $\EE_{7}^{s,16k+8+2s}=\circ$ and
$\EE_{7}^{s+7,16k+14+2s}=\JJ$. Hence $y_{16k+s+8,s}$ is a
restriction with a nontrivial transfer, and
\begin{align*}
d_{5} (x_{16k+s+8,s})
& = x_{16k+s+7,s+5} \\
\aand
d_{7} (2x_{16k+s+8,s})&=d_{7} (\Tr_{2}^{4}(y_{16k+s+8,s}))\\
& = \Tr_{2}^{4}(y_{16k+s+7,s+7})= x_{16k+s+7,s+7}.
\end{align*}
\item [(ii)] For $s\equiv 1$,
\begin{align*}
d_{7} (y_{16k+s+8,s})
& = y_{16k+s+7,s+7}\\
\aand
d_{5} (x_{16k+s+8,s+2})
& = \Tr_{2}^{4}(y_{16k+s+7,s+7}) = 2x_{16k+s+7,s+7}
\end{align*}
\noindent This leaves the fate of $x_{16k+s+7,s+7}$ undecided; see below.
\item [(iii)] For $s\equiv 2$,
$\EE_{7}^{s,16k+8+2s}=\overline{\bullet} $ and
$\EE_{7}^{s+7,16k+14+2s}=\overline{\bullet} $. Neither the
source nor target is a restriction or has a nontrivial transfer, so no
additional differentials are implied.
\item [(iv)] For $s\equiv 3$, $\EE_{7}^{s,16k+8+2s}=\JJ $ and
$\EE_{7}^{s+7,16k+14+2s}=\overline{\bullet} $. In this case
the source is an exotic restriction; again see Figure
\ref{fig-sseq-7b}. Thus we have
\begin{align*}
d_{7} (y_{16k+s+8,s})
& = y_{16k+s+7,s+7}\\
\aand
d_{5} (x_{16k+s+8,s-2})
& = x_{16k+s+7,s+3}\\
\mbox{with }
\Res_{2}^{4}(x_{16k+s+7,s+3})
& =y_{16k+s+7,s+7}.
\end{align*}
\noindent Moreover, $\Tr_{2}^{4}(y_{16k+8+s,s})$ is nontrivial and it
supports a nontrivial $d_{11}$ when $4k+s\equiv 3$ mod 8. The other
case, $4k+s\equiv 7$, will be discussed below.
\end{enumerate}
\end{thm}
\proof (i) The target Mackey functor is $\JJ$ and $y_{16k+s+7,s+7}$ is
the exotic restriction of $x_{16k+s+7,s+5}$; see Figure
\ref{fig-sseq-7b} and Theorem \ref{thm-sliceE4}. The indicated
$d_{5}$ and $d_{7}$ follow.
(ii) The differential is nontrivial on the $G/G'$ component of
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
{\obull=\EE_{7}^{s,16k+8+2s}}\ar[r]^(.5){d_{7}}
&{\EE_{7}^{s+7,16k+14+2s}=\circ}
}
\end{displaymath}
\noindent Thus the target has a nontrivial transfer, so the source
must have an exotic transfer. The only option is $x_{16k+s+8,s+2}$,
and the result follows.
(iv) We prove the statement about $d_{11}$ by showing that
\begin{displaymath}
{y_{16k+s+7,s+7}=\eta_{0}^{s+7}\delta_{1}^{4k}}
\end{displaymath}
\noindent supports an exotic
transfer that raises filtration by 4. First note that
\begin{align*}
\Tr_{2}^{4}(\eta_{0}\eta_{1})
& = \Tr_{2}^{4}(a_{\sigma_{2}}^{2}\overline{r}_{1,0}\overline{r}_{1,1} )
= \Tr_{2}^{4}(u_{\sigma }\Res_{2}^{4}(a_{\lambda }\normrbar_{1}))
= \Tr_{2}^{4}(u_{\sigma })a_{\lambda }\normrbar_{1} \\
& = a_{\sigma }^{2}a_{\lambda }\normrbar_{1}\cdot a_{\lambda }\normrbar_{1}
= f_{1}^{2}
\qquad \mbox{by (\ref{eq-exotic-transfers})}.
\end{align*}
\noindent Next note that the three elements
\begin{displaymath}
y_{8,8}=\eta_{0}^{8}=\Res_{2}^{4}(\epsilon ),\quad
y_{20,4}= \eta_{0}^{4}\delta_{1}^{4}=\Res_{2}^{4}(\overline{\kappa } )
\quad \aand
y_{32,0}= \delta_{1}^{8}=\Res_{2}^{4} (\Delta^{4})
\end{displaymath}
\noindent are all permanent cycles, so the same
is true of all
\begin{displaymath}
y_{16m +4\ell ,4\ell }=\eta_{0}^{4\ell }\delta_{1}^{4m }
\qquad \mbox{for $m,\ell \geq 0$ and $m+\ell $ even.}
\end{displaymath}
\noindent It follows that for such $\ell $ and $m$,
\begin{align*}
\eta_{0}\eta_{1}y_{16m+4\ell ,4\ell }
& = \eta_{0}\eta_{1}\eta_{0}^{4\ell }\delta_{1}^{4m}
=\eta_{0}^{4\ell+2 }\delta_{1}^{4m}
=y_{16m+4\ell+2 ,4\ell+2 } \\
& = \eta_{0}\eta_{1}\Res_{2}^{4}(x_{16m+4\ell ,4\ell }),
\end{align*}
\noindent so
\begin{displaymath}
\Tr_{2}^{4}(y_{16m+4\ell+2 ,4\ell+2 })
= \Tr_{2}^{4}(\eta_{0}\eta_{1})x_{16m+4\ell ,4\ell }
= f_{1}^{2}x_{16m+4\ell ,4\ell }.
\end{displaymath}
\noindent This is the desired exotic transfer.
\qed
We now turn to the unsettled part of Theorem \ref{thm-d7}(iv).
\begin{thm}\label{thm-d7a}
{\bf The fate of $x_{16k+s+8,s}$ for $4k+s\equiv 7$ mod 8 and $s\geq
7$.} Each of these elements is the target of a $d_{7}$ and hence a
permanent cycle.
\end{thm}
\proof Consider the element $\Delta_{1}^{2}\in
\underline{E}_{2}^{0,16} (G/G)$. We will show that
\begin{displaymath}
d_{7} (\Delta_{1}^{2}) = x_{15,7}=\Tr_{2}^{4}(y_{15,7}).
\end{displaymath}
\noindent This is the case $k=0$ and $s=7$. The remaining cases will
follow via repeated multiplication by $\epsilon $, $\overline{\kappa } $ and
$\Delta_{1}^{4}$.
We begin by looking at
\begin{displaymath}
\Delta_{1}=u_{2\sigma }u_{\lambda }^{2}\normrbar_{1}^{2}.
\end{displaymath}
\noindent From Theorem \ref{thm-d3ulambda} we have
\begin{displaymath}
d_{5} (u_{2\sigma }) = a_{\sigma}^{3}a_{\lambda}\normrbar_{1}
\qquad \aand
d_{5} (u_{\lambda}^{2}) = a_{\sigma }a_{\lambda}^{2}u_{\lambda }\normrbar_{1}
\end{displaymath}
\noindent Using the gold relation $a_{\sigma }^{2}u_{\lambda }=2a_{\lambda
}u_{2\sigma }$, we have
\begin{align*}
d_{5} (\Delta_{1})
= d_{5} (u_{2\sigma }u_{\lambda }^{2})\normrbar_{1}^{2}
& = (a_{\sigma}^{3}a_{\lambda}u_{\lambda}^{2}\normrbar_{1}
+ a_{\sigma }a_{\lambda}^{2}u_{\lambda }u_{2\sigma }\normrbar_{1})
\normrbar_{1}^{2} \\
& = a_{\sigma }a_{\lambda }u_{\lambda }(a_{\sigma}^{2}u_{\lambda}
+ a_{\lambda}u_{2\sigma })\normrbar_{1}^{3} \\
& = a_{\sigma }a_{\lambda }u_{\lambda }(2a_{\lambda }u_{2\sigma }
+ a_{\lambda}u_{2\sigma })\normrbar_{1}^{3} \\
& = a_{\sigma }a_{\lambda }^{2}u_{\lambda }u_{2\sigma }\normrbar_{1}^{3}
\qquad \mbox{since }2a_{\sigma }=0 \\
& = \nu x_{4}.
\end{align*}
\noindent Since $\nu $ supports an exotic group extension, $2\nu
=x_{3}$, we have
\begin{displaymath}
2d_{5} (\Delta_{1})= d_{7} (2\Delta_{1})= x_{3}x_{4}.
\end{displaymath}
\noindent From this it follows that
\begin{displaymath}
d_{7} (\Delta_{1}^{2})=\Delta_{1}d_{7} (2\Delta_{1})=x_{15,7}
\end{displaymath}
\noindent as claimed.
\qed
The resulting reduced $\underline{E}_{12}$-term is shown in Figure
\ref{fig-E12}. It is sparse enough that the only possible remaining
differentials are the indicated $d_{13}$s. In order to establish them
we need the following.
\begin{figure}\label{fig-E4}
\end{figure}
\begin{figure}\label{fig-E12}
\end{figure}
The surviving class in $\underline{E}_{12}^{3,20} (G/G)$ is
\begin{displaymath}
x_{17,3}=f_{1}\Delta_{1}^{2}
= a_{\sigma }a_{\lambda }\normrbar_{1}\cdot
[u_{2\sigma }^{2}]u_{\lambda }^{4}\normrbar_{1}^{4} = (a_{\sigma
}u_{\lambda }^{4}) (a_{\lambda }[u_{2\sigma }^{2}]\normrbar_{1}^{5}).
\end{displaymath}
\noindent The second factor is a permanent cycle, so Theorem
\ref{thm-normed-slice-diffs} gives
\begin{displaymath}
d_{13} (f_{1}\Delta_{1}^{2})
= (a_{\lambda }^{7}[u_{2\sigma }^{2}]\normrbar_{1}^{3})
(a_{\lambda }[u_{2\sigma }^{2}]\normrbar_{1}^{5})
=a_{\lambda }^{8}[u_{2\sigma }^{2}]^{2}\normrbar_{1}^{8}
= \epsilon^{2} = x_{4}^{4}.
\end{displaymath}
\noindent The surviving class
\begin{displaymath}
x_{30,2}
=a_{\sigma }^{2}u_{2\sigma }^{3}u_{\lambda }^{8}\normrbar_{1}^{8}
\in \underline{E}_{12}^{2,32} (G/G)
\end{displaymath}
\noindent satisfies
\begin{displaymath}
\epsilon x_{30,2}
=f_{1}\overline{\kappa }x_{17,3}
= f_{1}^{2}x_{4}\Delta_{1}^{2},
\end{displaymath}
\noindent so we have proved the following.
\begin{thm}\label{thm-d13}
{\bf $d_{13}$s in the slice {\SS } for $\kH$.} There are
differentials
\begin{displaymath}
d_{13} (f_{1}^{\epsilon }x_{4}^{m}\Delta_{1}^{2n})
= f_{1}^{\epsilon -1}x_{4}^{m+4}\Delta_{1}^{2 (n-1)}
\end{displaymath}
\noindent for $\epsilon =1,2$, $m+n$ odd, $n\geq 1$ and $m
\geq 1-\epsilon $. The {\SS } collapses from $\underline{E}_{14}$.
\end{thm}
To finish the calculation we have
\begin{thm}\label{thm-double}
{\bf Exotic transfers from and restrictions to the 0-line.} In
$\upi_{*}\kH$, for $i\geq 0$ we have
\begin{align*}
\Tr_{1}^{2} (r_{1,\epsilon }r_{1,0}^{4i}r_{1,1}^{4i})
& = \eta _{\epsilon }^{2}\orr_{1,0}^{4i}\orr_{1,1}^{4i}
&&\in \upi_{8i+2}
&\mbox{(filtration jump 2)}\\
\Tr_{1}^{4} (r_{1,0}^{8i+1}r_{1,1}^{8i+1})
& = 2x_{4}\Delta_{1}^{4i}
&&\in \upi_{32i+4}
&\mbox{(filtration jump 4)}\\
\Tr_{1}^{2} ((r_{1,0}^{3}+r_{1,1}^{3}) r_{1,0}^{8i}r_{1,1}^{8i})
& = \eta _{0}^{3}\eta _{1}^{3}\delta_{1}^{8i}
&&\in \upi_{32i+6}
&\mbox{(filtration jump 6)}\\
\Tr_{1}^{4}(r_{1,0}^{8i+5}r_{1,1}^{8i+5})
& = 2x_{4}\Delta_{1}^{4i+2}
&&\in \upi_{32i+20}
&\mbox{(filtration jump 4)}\\
\Tr_{1}^{2} ((r_{1,0}^{3}+r_{1,1}^{3}) r_{1,0}^{8i+4}r_{1,1}^{8i+4})
& = \eta _{0}^{3}\eta _{1}^{3}\delta_{1}^{8i+4}
&&\in \upi_{32i+22}
&\mbox{(filtration jump 6)}\\
\aand
\Tr_{2}^{4}(2\delta_{1}^{8i+7})
& = x_{4}^{3}\Delta_{1}^{4i+2}
&&\in \upi_{32i+28}
&\mbox{(filtration jump 12,}\\
& && &\mbox{the long transfer)}.
\end{align*}
Let $\underline{M}_{k}$ denote the reduced value of $\upi_{k}\kH$,
meaning the one obtained by removing the elements of Proposition
\ref{prop-perm}. Its values are shown in purple in Figure
\ref{fig-E14a}, and each has at most two summands. For even $k$ one
of them contains torsion free elements, and we denote it by
$\underline{M}'_{k}$. Its values depend on $k$ mod 32 and are as
follows, with symbols as in Table \ref{tab-C4Mackey}.
\begin{center}
\begin{tabular}[]{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$k$
&0 &2 &4 &6 &8 &10&12&14
&16&18&20&22&24&26&28&30\\
\hline
$\underline{M}'_{k}$
&$\Box$
&$\widehat{\dot{\Box}}$
&$\JJoldiagbox$
&$\JJJbox $
&$\fourbox$
&$\widehat{\dot{\Box}}$
&$\overline{\diagbox} $
&$\widehat{\oBox}$
&$\diagbox $
&$\widehat{\dot{\Box}}$
&$\circoldiagbox$
&$\circbox$
&$\fourbox $
&$\widehat{\dot{\Box}}$
&$\dot{\diagbox}$
&$\widehat{\oBox}$\\
\hline
\end{tabular}
\end{center}
\end{thm}
\proof We have two tools at our disposal: the
periodicity theorem and Theorem \ref{thm-exotic}, which relates exotic
transfers to differentials.
Figure \ref{fig-KH} shows that $\underline{M}'_{k}$ has the
indicated value for $-8\leq k\leq 0$ because the same is true of
$\EE_{4}^{0,k}$ and there is no room for any exotic extensions. On the
other hand $\EE_{4}^{0,k+32}$ does not have the same value for $k=-8$, $k=-6$
and $k=-4$. This comparison via periodicity forces
\begin{itemize}
\item [$\bullet$] the indicated $d_{5}$ and $d_{7}$ in dimension 24,
which together convert $\Box$ to $\fourbox$. These were also established in
Theorem \ref{thm-d7}.
\item [$\bullet$] the short transfer in dimension 26, which converts
$\widehat{\oBox}$ to $\widehat{\dot{\Box}}$. It also follows from
the results of Section \ref{sec-C2diffs}.
\item [$\bullet$] the long transfer in dimension 28, which converts
$\overline{\diagbox}$ to $\dot{\diagbox}$.
\end{itemize}
The differential corresponding to the long transfer is
\begin{align*}
d_{13} ([2u_{\lambda }^{7}])
& = a_{\sigma }a_{\lambda }^{6}u_{2\sigma }u_{\lambda }^{4}
\normrbar_{1}^{3}, \\
\mbox{so}\qquad
d_{13} (a_{\sigma } [2u_{\lambda }^{7}])
& = a_{\sigma }^{2}a_{\lambda }^{6}u_{2\sigma }u_{\lambda }^{4}
\normrbar_{1}^{3}
= 2a_{\lambda }^{7}[u_{2\sigma }^{2}]u_{\lambda }^{3}
\normrbar_{1}^{3}.
\end{align*}
\noindent This compares well with the $d_{13}$ of Theorem
\ref{thm-normed-slice-diffs}, namely
\begin{displaymath}
d_{13} (a_{\sigma }[u_{\lambda }^{4}])
=a_{\lambda }^{7}[u_{2\sigma }^{2}]\normrbar_{1}^{3}.
\end{displaymath}
The statements in dimensions 4 and 20 have similar proofs, and we will
only give the details for the former. It is based on comparing the
$\EE_{4}$-term for $\KH$ in dimensions $-28$ and $4$. They must
converge to the same thing by periodicity. From the slice
$\EE_{4}$-term in dimension 4 we see there is a short exact sequence
\begin{numequation}\label{eq-4-stem}
\begin{split}
\xymatrix
@R=4mm
@C=18mm
{
0\ar[r]
&\JJ \ar[r]
&{\underline{M}'_{4}}\ar[r]
&{\odbox}\ar[r]
&0\\
&\Z/2\ar@/_/[dd]_(.5){0}\ar@{=}[r]
&\Z/2\ar@/_/[dd]_(.5){\left[\begin{array}{c}1\\0\end{array} \right]}
\ar[r]
&0\ar@/_/[dd]\\
{}\\
&\Z/2\ar@/_/[dd]\ar@/_/[uu]_(.5){1}
\ar[r]^(.4){\left[\begin{array}{c}1\\0\end{array} \right]}
&\Z/2\oplus \Z_{-}
\ar@/_/[dd]_(.5){\left[\begin{array}{cc}0&2\end{array} \right]}
\ar@/_/[uu]_(.5){\left[\begin{array}{cc}1&a\end{array} \right]}
\ar[r]^(.6){\left[\begin{array}{cc}0&1\end{array} \right]}
&\Z_{-}\ar@/_/[dd]_(.5){2}\ar@/_/[uu]\\
{}\\
&0\ar@/_/[uu]\ar[r]
&\Z_{-}\ar@/_/[uu]_(.5){\left[\begin{array}{c}b\\1\end{array} \right]}
\ar@{=}[r]
&\Z_{-}\ar@/_/[uu]_(.5){1},
}
\end{split}
\end{numequation}
\noindent while the $(-28)$-stem gives
\begin{displaymath}
\xymatrix
@R=4mm
@C=17mm
{
0\ar[r]
&\dot{\diagbox}\ar[r]
&{\underline{M}'_{4}}\ar[r]
&\obull\ar[r]
&0 \\
&\Z/2\ar@/_/[dd]_(.5){0}\ar@{=}[r]^(.5){}
&\Z/2\ar@/_/[dd]_(.5){\left[\begin{array}{c}1\\0\end{array} \right]}
\ar[r]^(.5){}
&0\ar@/_/[dd]_(.5){} \\
{}\\
&\Z_{-}\ar@/_/[dd]_(.5){2}\ar@/_/[uu]_(.5){1}
\ar[r]^(.4){\left[\begin{array}{c}c\\1\end{array} \right]}
&\Z/2\oplus\Z_{-}
\ar@/_/[dd]_(.5){\left[\begin{array}{cc}0&2\end{array} \right]}
\ar@/_/[uu]_(.5){\left[\begin{array}{cc}1&a\end{array} \right]}
\ar[r]^(.6){\left[\begin{array}{cc}1&d\end{array} \right]}
&\Z/2\ar@/_/[uu]_(.5){}\ar@/_/[dd]_(.5){}\\
{}\\
&\Z_{-}\ar@/_/[uu]_(.5){1}\ar@{=}[r]
&\Z_{-}\ar@/_/[uu]_(.5){\left[\begin{array}{c}b\\1\end{array} \right]}
\ar[r]^(.5){}
&0\ar@/_/[uu]_(.5){}
}
\end{displaymath}
\noindent The commutativity of the second diagram requires that
\begin{displaymath}
a+b=c=1 \qquad \aand b+d=c+d=0,
\end{displaymath}
\noindent giving $(a,b,c,d)= (0,1,1,1)$. The diagram
for $\underline{M}'_{4}$ is that of $\JJoldiagbox$ in Table \ref{tab-C4Mackey}.
In dimension 20 the short exact sequence of (\ref{eq-4-stem}) is replaced by
\begin{displaymath}
\xymatrix
@R=5mm
@C=10mm
{
0\ar[r]
&\circ \ar[r]
&{\underline{M}'_{20}}\ar[r]
&{\odbox}\ar[r]
&0
}
\end{displaymath}
\noindent and the resulting diagram for $\underline{M}'_{20}$ is that
of $\circoldiagbox$.
Similar arguments can be made in dimensions 6 and 22.
\qed
We could prove a similar statement about exotic restrictions hitting
the 0-line in the third quadrant in dimensions congruent to 0, 4, 6,
14, 16, 20 (where there is an exotic transfer) and 22. The problem is
naming the elements involved.
\begin{table}[h]
\caption
[Infinite Mackey functors in the slice {\SS }.]
{Infinite Mackey functors in the reduced
$\EE_{\infty}$-term for $\KH$. In each even degree there is an
infinite Mackey functor on the 0-line that is related to a summand of
$\upi_{2k}\KH$ in the manner indicated. The rows in each diagram are
short or 4-term exact sequences with the summand appearing in both
rows.} \label{tab-0-line}
\begin{center}
\begin{tabular}{|c|c||c|c|}
\hline
Dimension
&Third quadrant
&Dimension
&Third quadrant\\
mod 32
&First quadrant
&mod 32
&First quadrant\\
\hline
0 &$
\xymatrix
@R=1mm
@C=7mm
{
{\fourbox} \ar[r]^(.5){\phantom{d_{7}}}
&\Box \ar[r]^(.5){}\ar@{=}[d]^(.5){}
&\circ \\
0\ar[r]^(.5){}
&\Box \ar@{=}[r]^(.5){}
&\Box
}$
&16 &$
\xymatrix
@R=1mm
@C=7mm
{
{\fourbox} \ar[r]^(.5){}
&{\diagbox} \ar[r]^(.5){}\ar@{=}[d]^(.5){}
&{\JJ}\\
&{\diagbox} \ar[r]^(.5){}
&\Box\ar[r]^(.5){d_{7}}
&\bullet
}$\\
\hline
2, 10 &$
\xymatrix
@R=1mm
@C=7mm
{
{\widehat{\dot{\Box}}}
\ar[r]^(.5){}
&{\widehat{\dot{\Box}}}
\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&0 \\
{\widehat{\bullet}}
\ar[r]^(.5){}
&{\widehat{\dot{\Box}}}
\ar[r]^(.5){}
&{\widehat{\oBox}}
}$
&18, 26 &$
\xymatrix
@R=1mm
@C=7mm
{
{\widehat{\dot{\Box}}}
\ar[r]^(.5){}
&{\widehat{\dot{\Box}}}
\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&0 \\
{\widehat{\bullet}}
\ar[r]^(.5){}
&{\widehat{\dot{\Box}}}
\ar[r]^(.5){}
&{\widehat{\oBox}}
}$\\
\hline
4 &$
\xymatrix
@R=1mm
@C=7mm
{
{\dot{\diagbox}}
\ar[r]^(.5){}
&{\JJoldiagbox }
\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&{\obull}\\
{\JJ}
\ar[r]^(.5){}
&{\JJoldiagbox }
\ar[r]^(.5){}
&{\overline{\diagbox} }
}$
&20 &$
\xymatrix
@R=1mm
@C=7mm
{
{\dot{\diagbox}}
\ar[r]^(.5){}
&{\circoldiagbox }
\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&{\circ} \\
{\circ}
\ar[r]^(.5){}
&{\circoldiagbox }
\ar[r]^(.5){}
&{\overline{\diagbox} }
}$\\
\hline
6 &$
\xymatrix
@R=1mm
@C=7mm
{
{\bullet}\ar[r]^(.5){d_{7}}
&{\JJbox }
\ar[r]^(.5){}
&{\JJJbox }
\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&{\bullet}\\
&{\widehat{\oBox}}
\ar[r]^(.5){}
&{\JJJbox }
\ar[r]^(.5){}
&{\JJJ}
}$
&22 &$
\xymatrix
@R=1mm
@C=7mm
{
{\JJbox }
\ar[r]^(.5){}
&{\circbox}
\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&{\bullet} \\
{\circ}
\ar[r]^(.5){}
&{\circbox}
\ar[r]^(.5){}
&{\widehat{\oBox}}
}$\\
\hline
8 &$
\xymatrix
@R=1mm
@C=7mm
{
{\fourbox}
\ar[r]^(.5){\phantom{d_{7}}}
&{\fourbox}
\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&0\\
&{\fourbox}
\ar[r]^(.5){}
&{\Box}
\ar[r]^(.5){d_{5}, d_{7}}
&{\circ}
}$
&24 &$
\xymatrix
@R=1mm
@C=7mm
{
{\fourbox}
\ar[r]^(.5){}
&{\fourbox}
\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&0\\
&{\fourbox}
\ar[r]^(.5){}
&{\Box}
\ar[r]^(.5){d_{5}, d_{7}}
&{\circ}
}$\\
\hline
12 &$
\xymatrix
@R=1mm
@C=7mm
{
{\bullet}
\ar[r]^(.5){d_{13}}
&{\dot{\diagbox}}
\ar[r]^(.5){}
&{\overline{\diagbox} }\ar@{=}[d]^(.5){} \\
&0
\ar[r]^(.5){}
&{\overline{\diagbox} }
\ar[r]^(.5){}
&{\overline{\diagbox} }
}$
&28 &$
\xymatrix
@R=1mm
@C=7mm
{
{\dot{\diagbox}}
\ar[r]^(.5){}
&{\dot{\diagbox}}
\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&0 \\
{\bullet}
\ar[r]^(.5){}
&{\dot{\diagbox}}
\ar[r]^(.5){}
&{\overline{\diagbox} }
}$\\
\hline
14 &$
\xymatrix
@R=1mm
@C=7mm
{
{\JJ} \ar[r]^(.5){d_{7}}
&{\JJbox }
\ar[r]^(.5){}
&{\widehat{\oBox}}
\ar@{=}[d]^(.5){}
& \\
&0
\ar[r]^(.5){}
&{\widehat{\oBox}}
\ar[r]^(.5){}
&{\widehat{\oBox}}
}$
&30 &$
\xymatrix
@R=1mm
@C=7mm
{
{\widehat{\oBox}}
\ar[r]^(.5){}
&{\widehat{\oBox}}
\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&0 \\
0
\ar[r]^(.5){}
&{\widehat{\oBox}}
\ar[r]^(.5){}
&{\widehat{\oBox}}
}$\\
\hline
\end{tabular}
\end{center}
\end{table}
In Table \ref{tab-0-line} we show short or 4-term exact sequences in the
16 even-dimensional congruence classes. In each case the value of
$\underline{M}'_{k}$ is the symbol appearing in both rows of the diagram.
For even $k$ with $0\leq k<32$, we typically have short exact
sequences
\begin{displaymath}
\xymatrix
@R=2mm
@C=10mm
{
0\ar[r]^(.5){}
&{\EE_{4}^{0,k-32}}\ar[r]^(.5){}
&{\underline{M}'_{k}}\ar[r]^(.5){}\ar@{=}[d]^(.5){}
&{\mbox{quotient} }\ar[r]^(.5){}
&{0}\\
0\ar[r]^(.5){}
&{\mbox{subgroup} }\ar[r]^(.5){}
&{\underline{M}'_{k}}\ar[r]^(.5){}
&{\EE_{4}^{0,k}}\ar[r]^(.5){}
&{0,}
}
\end{displaymath}
\noindent where the quotient or subgroup is finite and may be spread
over several filtrations. This happens for the quotient in dimensions
$-32$, $-16$ and $-12$, and for the subgroup in dimensions 6 and 22.
This is the situation in dimensions where no differential hits
[originates on] the 0-line in the third [first] quadrant. When such a
differential occurs, we may need a 4-term sequence, such as the one in
dimension $-22$.
In dimensions 8 and 24 there is more than one such differential,
the targets being a quotient and subgroup of the Mackey functor
$\circ=\Box/\fourbox$.
In dimension $-18$ we have a $d_{7}$ hitting the 0-line. Its source
is written as ${\circ \subseteq \EE_{4}^{-7,-24}}$ in Figure
\ref{fig-KH}. Its generator supports a $d_{5}$, leaving a copy of
$\JJ$ in $\EE_{7}^{-7,-24}$.
There is no case in which we have such
differentials in both the first and third quadrants.
\begin{cor}\label{cor-higher}
{\bf The $\EE_{\infty }$-term of the slice {\SS} for $\KH$.}
The surviving elements in the spectral sequence for $\KH$ are shown in
Figure \ref{fig-E14a}.
\end{cor}
\begin{landscape}
\begin{figure}\label{fig-KH}
\end{figure}
\end{landscape}
\begin{landscape}
\begin{figure}\label{fig-E14a}
\end{figure}
\end{landscape}
\end{document}
\begin{document}
\begin{center}
\Large
Solution of a problem on the game of \emph{heads or tails}
\end{center}
\begin{center}
Alberto Costa \\
LIX, \'Ecole Polytechnique\\
91128 Palaiseau, France\\
\[email protected]+
\end{center}
\textbf{Abstract}
\\
In this paper we describe the solution of a problem concerning certain properties of binary sequences. The problem, proposed by Xavier Grandsart in the form of a mathematical contest \cite{1}, has also been solved by Maher Younan, a Ph.D.\ student in theoretical physics at the University of Geneva, and by Pierre Deligne, professor at Princeton and Fields Medalist, using approaches different from the one presented in this work. All the proofs can be found in \cite{1} (in French).
\section{Introduction}
Consider $n$ tosses in the game of heads or tails, recorded as 0 for
heads and 1 for tails. The outcome can be represented as a binary
sequence $D= d_0,\,d_1,\,\dots,\, d_{n-1}$ of length $n$.
We consider the four pairs of mutually reversed patterns of length 4
(the ``inverse sequences of order 4''): $0100-0010$, $1101-1011$,
$0011-1100$, $1010-0101$. For our binary sequence $D$ we compute, for
each pair, the difference between the numbers of occurrences of its two
patterns. Only patterns formed by consecutive terms are counted, and the
sequence is regarded as circular; that is, we consider only the
subsequences
$d_{{i}_{|n}},\,d_{{(i+1)}_{|n}},\,d_{{(i+2)}_{|n}},\,d_{{(i+3)}_{|n}}$
for $i \in \{0,\dots,n-1\}$ (where $i_{|n}$ means $i$ mod $n$).
The problem is to prove that, for every binary sequence $D$, this
difference is the same for all four pairs. For instance, if the number
of occurrences of the pattern 0011 equals the number of occurrences of
1100 minus $t$, why does the number of occurrences of 1101 equal the
number of occurrences of 1011 minus $t$?
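Though not part of the original note, the statement is easy to check by brute force for small $n$: the following Python sketch counts circular occurrences of each pattern directly and verifies that the four differences coincide for every binary sequence of length at most 10.

```python
from itertools import product

# The four pairs of mutually reversed length-4 patterns from the problem.
PAIRS = [("0100", "0010"), ("1101", "1011"),
         ("0011", "1100"), ("1010", "0101")]

def circular_count(seq, pattern):
    """Number of occurrences of `pattern` in `seq`, read circularly."""
    n = len(seq)
    return sum(
        all(seq[(i + j) % n] == pattern[j] for j in range(4))
        for i in range(n)
    )

def pair_differences(seq):
    """For each pair (w, reversed w), the difference c(w) - c(reversed w)."""
    return [circular_count(seq, a) - circular_count(seq, b)
            for a, b in PAIRS]

# Exhaustive check for all circular binary sequences with 4 <= n <= 10:
# the four differences always coincide (though they need not be zero).
for n in range(4, 11):
    for bits in product("01", repeat=n):
        assert len(set(pair_differences("".join(bits)))) == 1
```

For instance, the sequence $010110$ has each of the four differences equal to $-1$, so the common value need not vanish once $n>4$.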
\section{The solution}
We consider a binary sequence $D= d_0,\,d_1,\,
\dots,\, d_{n-1}$ of length $n$. We have to prove that, for every
sequence $D$, the difference between the numbers of occurrences of the
two patterns in each of the four pairs of mutually reversed patterns of
length 4 on the circular sequence ($0100-0010$, $1101-1011$,
$0011-1100$, $1010-0101$) is the same.
To do this, we use induction on the length of the sequence $D$.
\\
\textsc{Theorem 1.}
\label{po01}
For every $D=d_0,\,d_1,\,\dots,\,d_{n-1}$ with
$d_i\in\{0,\,1\}$, $0\leq i<n$, the differences between the numbers of
occurrences of the two patterns in the pairs
$0100-0010$, $1101-1011$, $1010-0101$ and $0011-1100$ are all equal.
\\
\textit{Proof}. The basis of the induction consists of the sequences of
length 4; we then prove that the statement holds for every $n>4$.
\noindent \textbf{Basis step.}\\
There are 16 possible binary sequences $D$ of length 4. Table
\ref{tab:4kDs} shows the number of occurrences of each pattern and the
difference between the numbers of occurrences of the two patterns in
each pair. As can be seen, this difference is equal to 0 for every
sequence.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Sequence & 0100 & 0010 & {\color{red}1101} & {\color{red}1011}& {\color{green} 1010} & {\color{green}0101} & {\color{blue} 0011} & {\color{blue} 1100} & Difference \\
\hline
0000 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0001 & 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0}&0\\ \hline
0010 & 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0}&0\\ \hline
0011 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1}&0\\ \hline
0100 & 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0}&0\\\hline
0101 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 2} & {\color{green} 2} &{\color{blue} 0} &{\color{blue} 0}&0\\\hline
0110 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1}&0\\\hline
0111 & 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0}&0\\\hline
1000 & 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0}&0\\\hline
1001 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0}&0\\ \hline
1010 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 2} & {\color{green} 2} &{\color{blue} 0} &{\color{blue} 0}&0\\ \hline
1011 & 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0}&0\\ \hline
1100 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1}&0\\\hline
1101 & 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0}&0\\ \hline
1110 & 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0}&0\\ \hline
1111 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0}&0\\ \hline
\end{tabular}
\caption{Sequences of size 4}
\label{tab:4kDs}
\end{table}
\noindent \textbf{Inductive step.}\\
Suppose that the theorem holds for a sequence
$D=d_0,\,d_1,\,\dots,\,d_{n-1}$. We show that it also holds
for the sequence $D'=d_0,\,d_1,\,\dots,\,d_{n-1},d_n$. The difference
between the circular length-4 subsequences of $D$ and those of $D'$ is
that, in passing from $D$ to $D'$, the following subsequences are lost:
\begin{itemize}
\item $d_{n-3},\,d_{n-2},\,d_{n-1},\,d_0$
\item $d_{n-2},\,d_{n-1},\,d_{0},\,d_1$
\item $d_{n-1},\,d_{0},\,d_{1},\,d_2$
\end{itemize}
On the other hand, four new subsequences appear:
\begin{itemize}
\item $d_{n-3},\,d_{n-2},\,d_{n-1},\,d_n$
\item $d_{n-2},\,d_{n-1},\,d_{n},\,d_0$
\item $d_{n-1},\,d_{n},\,d_{0},\,d_1$
\item $d_{n},\,d_{0},\,d_{1},\,d_2$.
\end{itemize}
Hence the digits involved in this change are
$d_{n-3},\,d_{n-2},\,d_{n-1},\,d_0,\,d_1,\,d_2$ and $d_n$. Since each
$d_i$ is 0 or 1 and there are 7 digits involved, there are $2^7$
possible cases to consider; we show that, in every one of them,
Theorem 1 remains valid.
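Before tabulating the cases, note that this $2^7$-fold case analysis can also be delegated to a machine. The following Python sketch (an addition for the reader's convenience, not part of the original argument) lists, for each assignment of the seven digits, the three windows lost and the four windows gained, and checks that the resulting change in the difference is the same for all four pairs, which is exactly what the inductive step requires.

```python
from itertools import product

# The four pairs of mutually reversed length-4 patterns.
PAIRS = [("0100", "0010"), ("1101", "1011"),
         ("0011", "1100"), ("1010", "0101")]

def step_changes(p, dn):
    """Change in each pair difference when d_n is adjoined.

    p  = (d_{n-3}, d_{n-2}, d_{n-1}, d_0, d_1, d_2), each "0" or "1";
    dn = d_n.  Returns the net change, per pair, in
    c(pattern) - c(reversed pattern).
    """
    a, b, c, e, f, g = p
    lost   = [a + b + c + e, b + c + e + f, c + e + f + g]
    gained = [a + b + c + dn, b + c + dn + e, c + dn + e + f, dn + e + f + g]

    def delta(w):  # net change in the count of the window w
        return gained.count(w) - lost.count(w)

    return [delta(x) - delta(y) for x, y in PAIRS]

# All 2^7 cases: the change is identical for the four pairs, so the
# equality of the four differences is preserved in passing from D to D'.
for p in product("01", repeat=6):
    for dn in "01":
        assert len(set(step_changes(p, dn))) == 1
```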
Let $P$ be the sequence $d_{n-3},\,d_{n-2},\,d_{n-1},\,d_0,\,d_1,\,d_2$.
The first step is to compute, for each case, the number of subsequences
lost; Table \ref{tab:spd} shows the result.
\begin{longtable}{|c|c|c|c|c|c|c|c|c|}
\hline
$P$ & 0100 & 0010 & {\color{red}1101} & {\color{red}1011}& {\color{green} 1010} & {\color{green}0101} & {\color{blue} 0011} & {\color{blue} 1100} \\
\hline
000000 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
000001 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
000010& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
000011& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
000100& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
000101& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
000110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
000111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
001000& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
001001& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
001010& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
001011& 0 & 1 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
001100& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} \\ \hline
001101& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
001110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
001111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
010000& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
010001& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
010010& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
010011& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
010100& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
010101& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 2} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
010110& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
010111& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
011000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
011001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
011010& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
011011& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
011100& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
011101& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
011110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
011111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
100000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
100001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
100010& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
100011& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
100100& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
100101& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
100110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
100111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
101000& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
101001& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
101010& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 2} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
101011& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
101100& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
101101& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
101110& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
101111& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
110000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
110001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
110010& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
110011& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} \\ \hline
110100& 1 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
110101& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
110110& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
110111& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
111000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
111001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
111010& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
111011& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
111100& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
111101& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
111110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
111111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
\caption{Sequences lost}
\label{tab:spd}
\end{longtable}
After adjoining $d_n$, there are four new sequences, as previously described. First let $d_n=0$ and write $P'=d_{n-3},\,d_{n-2},\,d_{n-1},\,0,\,d_0,\,d_1,\,d_2$; table
\ref{tab:sadj0} shows the number of sequences adjoined in each
case.
\begin{longtable}{|c|c|c|c|c|c|c|c|c|}
\hline
$P'$ & 0100 & 0010 & {\color{red}1101} & {\color{red}1011}& {\color{green} 1010} & {\color{green}0101} & {\color{blue} 0011} & {\color{blue} 1100} \\
\hline
0000000 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0000001 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0000010& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0000011& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0000100& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0000101& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0000110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0000111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0010000& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0010001& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0010010& 1 & 2 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0010011& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0010100& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0010101& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 2} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0010110& 0 & 1 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0010111& 0 & 1 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0100000& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0100001& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0100010& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0100011& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0100100& 2 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0100101& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0100110& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0100111& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0110000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
0110001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
0110010& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
0110011& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} \\ \hline
0110100& 1 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0110101& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0110110& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0110111& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1000000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1000001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1000010& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1000011& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
1000100& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1000101& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1000110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
1000111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
1010000& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1010001& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1010010& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1010011& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
1010100& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 2} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1010101& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 2} & {\color{green} 2} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1010110& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1010111& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1100000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1100001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1100010& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1100011& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} \\ \hline
1100100& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1100101& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1100110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} \\ \hline
1100111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1110000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1110001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1110010& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1110011& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} \\ \hline
1110100& 1 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1110101& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1110110& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1110111& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
\caption{Sequences adjoined, $d_n=0$}
\label{tab:sadj0}
\end{longtable}
Table \ref{tab:sadj1} shows the number of sequences adjoined in each case when $d_n=1$, writing $P''=d_{n-3},\,d_{n-2},\,d_{n-1},\,1,\,d_0,\,d_1,\,d_2$.
\begin{longtable}{|c|c|c|c|c|c|c|c|c|}
\hline
$P''$ & 0100 & 0010 & {\color{red}1101} & {\color{red}1011}& {\color{green} 1010} & {\color{green}0101} & {\color{blue} 0011} & {\color{blue} 1100} \\
\hline
0001000 & 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0001001 & 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0001010& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0001011& 0 & 1 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0001100& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} \\ \hline
0001101& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0001110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0001111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0011000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} \\ \hline
0011001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} \\ \hline
0011010& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0011011& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0011100& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} \\ \hline
0011101& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0011110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0011111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
0101000& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0101001& 1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0101010& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 2} & {\color{green} 2} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0101011& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 1} & {\color{green} 2} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0101100& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
0101101& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0101110& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0101111& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0111000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
0111001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
0111010& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0111011& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0111100& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
0111101& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0111110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
0111111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1001000& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1001001& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1001010& 0 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1001011& 0 & 1 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1001100& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} \\ \hline
1001101& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
1001110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
1001111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 0} \\ \hline
1011000& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1011001& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1011010& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1011011& 0 & 0 & {\color{red} 1} & {\color{red} 2} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1011100& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1011101& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1011110& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1011111& 0 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1101000& 1 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1101001& 1 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1101010& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 2} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1101011& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1101100& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1101101& 0 & 0 & {\color{red} 2} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1101110& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1101111& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1111000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1111001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1111010& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1111011& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1111100& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} \\ \hline
1111101& 0 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1111110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
1111111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} \\ \hline
\caption{Sequences adjoined, $d_n=1$}
\label{tab:sadj1}
\end{longtable}
It now remains only to prove that the difference between the numbers
of occurrences of the two terms of each periodic sequence is the same
for every $P'$ and $P''$. In each case we must remove the
sequences lost (computed in table \ref{tab:spd}) and add
the sequences adjoined (computed in table \ref{tab:sadj0} when $d_n=0$
and in table \ref{tab:sadj1} when $d_n=1$). The results
are displayed in tables \ref{tab:ecaj0} and \ref{tab:ecaj1}. To better understand these tables, consider the example $P'=1010101$ in table
\ref{tab:ecaj0}. In moving from
$D=101,\,d_3,\,d_4,\,\dots,\,d_{n-4},\,101$ to $D'=101,\,d_3,\,d_4,\,\dots,\,d_{n-4},\,1010$ we lose one sequence
$1101$ and one sequence $1011$, as shown by table \ref{tab:spd}
($P=101101$), but we gain two sequences $1010$ and two sequences
$0101$, as shown by table \ref{tab:sadj0}
($P'=1010101$). In this case the difference between the two counts is the same for $D$ and $D'$; had there been a $\Delta$Difference of $+1$, this would mean that if the sequence $D$ has a difference $E$ between the numbers of occurrences of
the two terms of each periodic sequence, then $D'$ has a difference
of $E+1$.
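The bookkeeping in this example can be checked mechanically. The following sketch is an illustration, not taken from the text: it assumes occurrences are counted cyclically and uses the eight length-4 patterns from the table headers. It compares the pattern counts of a hypothetical $D$ ending in $101$ (with the middle block $d_3,\dots,d_{n-4}$ taken empty) against the corresponding $D'$ ending in $1010$:

```python
# Eight length-4 patterns, in the order of the table headers.
PATTERNS = ["0100", "0010", "1101", "1011", "1010", "0101", "0011", "1100"]

def cyclic_counts(d):
    """Occurrences of each length-4 pattern in d, read cyclically."""
    doubled = d + d[:3]  # wrap around so windows may cross the end of d
    return {p: sum(doubled[i:i + 4] == p for i in range(len(d)))
            for p in PATTERNS}

def delta(d, d_prime):
    """Change in each pattern count when passing from D to D'."""
    a, b = cyclic_counts(d), cyclic_counts(d_prime)
    return {p: b[p] - a[p] for p in PATTERNS}

# Hypothetical instance of the example: D = 101,101 and D' = 101,1010.
change = delta("101101", "1011010")
# lose one 1101 and one 1011; gain two 1010 and two 0101
print(change)
```

Running this reproduces the counts quoted above for $P'=1010101$: the entries for $1101$ and $1011$ drop by one while those for $1010$ and $0101$ rise by two, so the difference between the paired counts is unchanged.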
\begin{longtable}{|c|c|c|c|c|c|c|c|c|c|}
\hline
$P'$ & 0100 & 0010 & {\color{red}1101} & {\color{red}1011}& {\color{green} 1010} & {\color{green}0101} & {\color{blue} 0011} & {\color{blue} 1100}& $\Delta$Difference \\
\hline
0000000 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0000001 & 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
0000010& 0 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
0000011& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} &0 \\ \hline
0000100& 1-1 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} &0 \\ \hline
0000101& 0 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 1-1} &{\color{blue} 0} &{\color{blue} 0} &0 \\ \hline
0000110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} &0\\ \hline
0000111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} &0\\ \hline
0010000& 1-1 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} &0\\ \hline
0010001& 1-1 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} &0\\ \hline
0010010& 1 & 2-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} -1} & {\color{green} -1} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0010011& 1 & 1-1 & {\color{red} 0} & {\color{red} -1} & {\color{green} 0} & {\color{green} -1} &{\color{blue} 1} &{\color{blue} 0} & +1 \\ \hline
0010100& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} -1} &{\color{blue} -1} & 0\\ \hline
0010101& 0 & 1 & {\color{red} -1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 2} &{\color{blue} -1} &{\color{blue} 0} & -1 \\ \hline
0010110& 0 & 1 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} -1} &{\color{blue} 0} & -1\\ \hline
0010111& 0 & 1 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} -1} &{\color{blue} 0} & -1 \\ \hline
0100000& 1-1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
0100001& 1-1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
0100010& 1-1 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
0100011& 1-1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} & 0 \\ \hline
0100100& 2-1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} -1} & {\color{green} -1} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0100101& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} -1} & {\color{green} 1-2} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0100110& 1 & 0 & {\color{red} 0} & {\color{red} -1} & {\color{green} 0} & {\color{green} -1} &{\color{blue} 1} &{\color{blue} 0} & +1\\ \hline
0100111& 1 & 0 & {\color{red} 0} & {\color{red} -1} & {\color{green} 0} & {\color{green} -1} &{\color{blue} 1} &{\color{blue} 0} & +1\\ \hline
0110000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0 \\ \hline
0110001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0 \\ \hline
0110010& 0 & 1 & {\color{red} -1} & {\color{red} 0} & {\color{green} -1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} & -1 \\ \hline
0110011& 0 & 0 & {\color{red} -1} & {\color{red} -1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} & 0\\ \hline
0110100& 1 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} -1} & 1\\ \hline
0110101& 0 & 0 & {\color{red} 1-1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0110110& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0110111& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1000000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1000001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1000010& 0 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
1000011& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} & 0 \\ \hline
1000100& 1-1 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
1000101& 0 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 1-1} &{\color{blue} 0} &{\color{blue} 0} &0\\ \hline
1000110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} & 0\\ \hline
1000111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} & 0\\ \hline
1010000& 1-1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1-1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1010001& 1-1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1-1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1010010& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1-2} & {\color{green} -1} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1010011& 1 & 0 & {\color{red} 0} & {\color{red} -1} & {\color{green} 1-1} & {\color{green} -1} &{\color{blue} 1} &{\color{blue} 0} & +1\\ \hline
1010100& 1 & 0 & {\color{red} 0} & {\color{red} -1} & {\color{green} 2} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} -1} & +1\\ \hline
1010101& 0 & 0 & {\color{red} -1} & {\color{red} -1} & {\color{green} 2} & {\color{green} 2} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1010110& 0 & 0 & {\color{red} 0} & {\color{red} 1-1} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0}& 0 \\ \hline
1010111& 0 & 0 & {\color{red} 0} & {\color{red} 1-1} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1100000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0\\ \hline
1100001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} &0\\ \hline
1100010& 0 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0 \\ \hline
1100011& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 1-1} &0\\ \hline
1100100& 1-1 & 1 & {\color{red} -1} & {\color{red} 0} & {\color{green} -1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} &-1\\ \hline
1100101& 0 & 1 & {\color{red} -1} & {\color{red} 0} & {\color{green} -1} & {\color{green} 1-1} &{\color{blue} 0} &{\color{blue} 1} & -1\\ \hline
1100110& 0 & 0 & {\color{red} -1} & {\color{red} -1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} & 0 \\ \hline
1100111& 0 & 0 & {\color{red} -1} & {\color{red} -1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1110000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0\\ \hline
1110001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0 \\ \hline
1110010& 0 & 1 & {\color{red} -1} & {\color{red} 0} & {\color{green} -1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} & 0\\ \hline
1110011& 0 & 0 & {\color{red} -1} & {\color{red} -1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} & 0\\ \hline
1110100& 1 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} -1} & +1 \\ \hline
1110101& 0 & 0 & {\color{red} 1-1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1110110& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1110111& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
\caption{$\Delta$Difference, $d_n=0$}
\label{tab:ecaj0}
\end{longtable}
\begin{longtable}{|c|c|c|c|c|c|c|c|c|c|}
\hline
$P''$ & 0100 & 0010 & {\color{red}1101} & {\color{red}1011}& {\color{green} 1010} & {\color{green}0101} & {\color{blue} 0011} & {\color{blue} 1100} & $\Delta$Difference \\
\hline
0001000 & 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
0001001 & 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0001010& 0 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0001011& 0 & 1 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} -1} &{\color{blue} 0} & -1 \\ \hline
0001100& -1 & -1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} & 0 \\ \hline
0001101& 0 & -1 & {\color{red} 1} & {\color{red} 0} & {\color{green} 0} & {\color{green} -1} &{\color{blue} 1} &{\color{blue} 0} & 1 \\ \hline
0001110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} & 0\\ \hline
0001111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} & 0 \\ \hline
0011000& -1 & -1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} & 0 \\ \hline
0011001& -1 & -1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} & 0 \\ \hline
0011010& 0 & -1 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1-1} & {\color{green} -1} &{\color{blue} 1} &{\color{blue} 0} & +1 \\ \hline
0011011& 0 & -1 & {\color{red} 1} & {\color{red} 1-1} & {\color{green} 0} & {\color{green} -1} &{\color{blue} 1} &{\color{blue} 0} & +1\\ \hline
0011100& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 1-1} & 0\\ \hline
0011101& 0 & 0 & {\color{red} 1-1} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} & 0 \\ \hline
0011110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} & 0 \\ \hline
0011111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0}& 0 \\ \hline
0101000& 1-1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
0101001& 1-1 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} &0 \\ \hline
0101010& -1 & -1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 2} & {\color{green} 2} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
0101011& -1 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} 1} & {\color{green} 2} &{\color{blue} -1} &{\color{blue} 0} & -1\\ \hline
0101100& -1 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} -1} & {\color{green} 1-1} &{\color{blue} 0} &{\color{blue} 1} & -1\\ \hline
0101101& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} -1} & {\color{green} 1-2} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0101110& 0 & 0 & {\color{red} 0} & {\color{red} 1-1} & {\color{green} 0} & {\color{green} 1-1} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0101111& 0 & 0 & {\color{red} 0} & {\color{red} 1-1} & {\color{green} 0} & {\color{green} 1-1} &{\color{blue} 0} &{\color{blue} 0} &0\\ \hline
0111000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1}& 0 \\ \hline
0111001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0\\ \hline
0111010& 0 & 0 & {\color{red} 1-1} & {\color{red} 0} & {\color{green} 1-1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0111011& 0 & 0 & {\color{red} 1-1} & {\color{red} 1-1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0111100& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0 \\ \hline
0111101& 0 & 0 & {\color{red} 1-1} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0111110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
0111111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1001000& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1001001& 1 & 1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1001010& 0 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 1} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
1001011& 0 & 1 & {\color{red} 0} & {\color{red} 1} & {\color{green} 0} & {\color{green} 1} &{\color{blue} -1} &{\color{blue} 0} & -1 \\ \hline
1001100& 1-1 & 1-1 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1} &{\color{blue} 1} & 0 \\ \hline
1001101& 0 & -1 & {\color{red} 1} & {\color{red} 0} & {\color{green} 0} & {\color{green} -1} &{\color{blue} 1} &{\color{blue} 0} &+1\\ \hline
1001110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} & 0 \\ \hline
1001111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 1-1} &{\color{blue} 0} & 0\\ \hline
1011000& -1 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} -1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} & -1\\ \hline
1011001& -1 & 0 & {\color{red} 0} & {\color{red} 1} & {\color{green} -1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1} & -1\\ \hline
1011010& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 1-2} & {\color{green} -1} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
1011011& 0 & 0 & {\color{red} 1} & {\color{red} 2-1} & {\color{green} -1} & {\color{green} -1} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1011100& 0 & 0 & {\color{red} 0} & {\color{red} 1-1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0\\ \hline
1011101& 0 & 0 & {\color{red} 1-1} & {\color{red} 1-1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1011110& 0 & 0 & {\color{red} 0} & {\color{red} 1-1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
1011111& 0 & 0 & {\color{red} 0} & {\color{red} 1-1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1101000& 1 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} -1} & +1 \\ \hline
1101001& 1 & 0 & {\color{red} 1} & {\color{red} 0} & {\color{green} 1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} -1} & +1 \\ \hline
1101010& 0 & -1 & {\color{red} 1} & {\color{red} 0} & {\color{green} 2} & {\color{green} 1} &{\color{blue} 0} &{\color{blue} -1} & +1 \\ \hline
1101011& 0 & 0 & {\color{red} 1} & {\color{red} 1} & {\color{green} 1} & {\color{green} 1} &{\color{blue} -1} &{\color{blue} -1} & 0 \\ \hline
1101100& -1 & 0 & {\color{red} 1-1} & {\color{red} 1} & {\color{green} -1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1}&-1 \\ \hline
1101101& 0 & 0 & {\color{red} 2-1} & {\color{red} 1} & {\color{green} -1} & {\color{green} -1} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1101110& 0 & 0 & {\color{red} 1-1} & {\color{red} 1-1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1101111& 0 & 0 & {\color{red} 1-1} & {\color{red} 1-1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1111000& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0\\ \hline
1111001& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0\\ \hline
1111010& 0 & 0 & {\color{red} 1-1} & {\color{red} 0} & {\color{green} 1-1} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1111011& 0 & 0 & {\color{red} 1-1} & {\color{red} 1-1} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1111100& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 1-1} & 0 \\ \hline
1111101& 0 & 0 & {\color{red} 1-1} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
1111110& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0\\ \hline
1111111& 0 & 0 & {\color{red} 0} & {\color{red} 0} & {\color{green} 0} & {\color{green} 0} &{\color{blue} 0} &{\color{blue} 0} & 0 \\ \hline
\caption{$\Delta$Difference, $d_n=1$}
\label{tab:ecaj1}
\end{longtable}
\section{Conclusion}
The theorem allows us to answer the question by showing that the property holds for every binary sequence $D$.
This theorem can be generalized to prove (or refute) the same properties for inverse sequences of length greater than 4.
\end{document}
\begin{document}
\title{On convergence to equilibria of flows of compressible viscous fluids under in/out--flux boundary conditions}
\author{Jan B\v rezina \and Eduard Feireisl
\thanks{The work of E.F. was partially supported by the
Czech Sciences Foundation (GA\v CR), Grant Agreement
18--05974S. The Institute of Mathematics of the Academy of Sciences of
the Czech Republic is supported by RVO:67985840.}
\and Anton\' \i n Novotn\' y { \thanks{The work of A.N. was supported by Brain Pool program funded by the Ministry of Science and ICT through the National Research Foundation of Korea (NRF-2019H1D3A2A01101128).}}
}
\maketitle
\centerline{Faculty of Arts and Science, Kyushu University;}
\centerline{744 Motooka, Nishi-ku, Fukuoka, 819-0395, Japan}
\centerline{[email protected]}
\centerline{and}
\centerline{Institute of Mathematics of the Academy of Sciences of the Czech Republic;}
\centerline{\v Zitn\' a 25, CZ-115 67 Praha 1, Czech Republic}
\centerline{Institute of Mathematics, Technische Universit\"{a}t Berlin,}
\centerline{Stra{\ss}e des 17. Juni 136, 10623 Berlin, Germany}
\centerline{[email protected]}
\centerline{and}
\centerline{IMATH, EA 2134, Universit\'e de Toulon,}
\centerline{BP 20132, 83957 La Garde, France}
\centerline{[email protected]}
\begin{abstract}
We consider the barotropic Navier--Stokes system describing the motion of a compressible Newtonian fluid in a bounded domain
with in and out flux boundary conditions. We show that if the boundary velocity coincides with that of a rigid motion, all solutions
converge to an equilibrium state for large times.
\end{abstract}
{\bf Keywords:} compressible Newtonian fluid, Navier--Stokes system, in/out--flux boundary conditions, long--time behavior
{\bf MSC:}
\section{Introduction}
\label{i}
The barotropic Navier--Stokes system:
\begin{equation} \label{i1}
\begin{split}
\partial_t \varrho + {\rm div}_x (\varrho \vc{u}) &= 0,\\
\partial_t (\varrho \vc{u}) + {\rm div}_x (\varrho \vc{u} \otimes \vc{u}) + \nabla_x p(\varrho) &=
{\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u}) + \varrho \nabla_x G,\\
\mathbb{S}(\mathbb{D}_x \vc{u}) &= \mu \left( \nabla_x \vc{u} + \nabla_x^t \vc{u} - \frac{2}{d} {\rm div}_x \vc{u} \mathbb{I} \right) +
\lambda {\rm div}_x \vc{u} \mathbb{I},\ \mu > 0, \ \lambda \geq 0,\\ \mbox{with}\ \mathbb{D}_x \vc{u} &\equiv \frac{1}{2} \Big( \nabla_x \vc{u} + \nabla_x^t \vc{u} \Big),
\end{split}
\end{equation}
is a well--established model in continuum fluid mechanics governing the time evolution of the mass density $\varrho = \varrho(t,x)$ and the
velocity $\vc{u} = \vc{u}(t,x)$ of a compressible viscous fluid. If the fluid is confined to a bounded domain $\Omega \subset R^d$, $d=1,2,3$,
suitable boundary conditions must be prescribed to obtain a well--posed problem. Here we consider the realistic situation with a given
boundary velocity,
\begin{equation} \label{i2}
\vc{u}|_{\partial \Omega} = \vc{u}_b,
\end{equation}
and, decomposing the boundary as
\[
\partial \Omega = \Gamma_{\rm in} \cup \Gamma_{\rm out},
\ \Gamma_{\rm in} = \left\{ x \in \partial \Omega \ \Big|\
\ \mbox{the outer normal}\ \vc{n}(x) \ \mbox{exists, and}\ \vc{u}_b(x) \cdot \vc{n}(x) < 0 \right\},
\]
we prescribe the density on the in--flow component,
\begin{equation} \label{i3}
\varrho|_{\Gamma_{\rm in}} = \varrho_b.
\end{equation}
Our goal is to describe the long--time behavior of finite energy weak solutions to the problem \eqref{i1}--\eqref{i3}.
Note that the long--time behavior of solutions is well understood under the no--slip boundary conditions $\vc{u}_b \equiv 0$, see
\cite{FP7}, \cite{FP9}, \cite{NOS1}, \cite{NOST} for general results if $d=2,3$ and Melinand and Zumbrun \cite{MelZum} for refined arguments if $d=1$. The $\omega$--limit set of
any solution trajectory $t \mapsto [\varrho(t, \cdot), (\varrho \vc{u})(t, \cdot)]$ is contained in the set of stationary (static) solutions
$[\varrho_E, 0]$,
\begin{equation} \label{i4}
\nabla_x p(\varrho_E) = \varrho_E \nabla_x G,\ \varrho_E \geq 0,\ \intO{\varrho_E} = \intO{\varrho(0, \cdot)} = M_0.
\end{equation}
If the problem \eqref{i4} admits a unique solution, any trajectory converges to it. The same is true if the set of solutions
of \eqref{i4} consists of isolated points. The case when \eqref{i4} admits a continuum of solutions
remains an outstanding open problem. Note that in this case the equilibria $\varrho_E$ necessarily contain vacuum, meaning $\varrho_E$ vanishes on a set of non--zero measure, see \cite{FP7}.
Much less is known in the case of a non--trivial in/out flow velocity. Melinand and Zumbrun \cite{MelZum} studied the problem in the one--dimensional
case $d=1$, with $G = 0$, in the framework of strong solutions. They show (non--linear) stability of the stationary solutions with constant velocity $\vc{u}_b$ and of their small perturbations. They also show that linear stability implies non--linear stability in the
general case.
Motivated by \cite{FP9}, we study stability and convergence to the static states in the multi--dimensional case, with the velocity $\vc{u}_E$ associated to a
\emph{rigid motion}, meaning
\begin{equation} \label{i5}
\mathbb{D}_x \vc{u}_E = 0.
\end{equation}
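For orientation, a standard sketch (not part of the argument above): in $d = 3$, every velocity field satisfying \eqref{i5} on a connected domain has the rigid form
\[
\vc{u}_E(x) = \vc{a} + \boldsymbol{\omega} \times x, \qquad
\nabla_x \vc{u}_E = \begin{pmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{pmatrix},
\]
with constant vectors $\vc{a}, \boldsymbol{\omega} \in R^3$; the gradient is skew--symmetric, so $\mathbb{D}_x \vc{u}_E = 0$ and, in particular, ${\rm div}_x \vc{u}_E = 0$.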
The corresponding density $\varrho_E$ satisfies
\begin{equation} \label{i6}
\begin{split}
{\rm div}_x (\varrho_E \vc{u}_E) &= 0,\\
{\rm div}_x (\varrho_E \vc{u}_E \otimes \vc{u}_E) + \nabla_x p(\varrho_E) &= \varrho_E \nabla_x G.
\end{split}
\end{equation}
Accordingly, we consider the problem \eqref{i1}--\eqref{i3} with the boundary conditions
\begin{equation} \label{i7}
\vc{u}_b = \vc{u}_E, \ \varrho_b = \varrho_E.
\end{equation}
Under the hypothesis \eqref{i7}, and if the stationary density $\varrho_E$ is strictly positive, the problem \eqref{i1}--\eqref{i3} admits a Lyapunov function, namely the relative energy
\[
\intO{ E\left(\varrho, \vc{u} \Big| \varrho_E, \vc{u}_E \right) },\ E\left(\varrho, \vc{u} \Big| \varrho_E, \vc{u}_E \right)
\equiv \left[ \frac{1}{2} \varrho |\vc{u} - \vc{u}_E|^2 + P(\varrho) - P'(\varrho_E)(\varrho - \varrho_E) - P(\varrho_E) \right],
\]
see Section \ref{S}. The situation becomes more delicate if $\varrho_E$ vanishes on a non--trivial part of $\Omega$. In that case, the stationary problem may admit several (even infinitely many) solutions even if the total mass is prescribed.
Our main result asserts that any \emph{weak} solution of the problem \eqref{i1}--\eqref{i3}, satisfying
a suitable form of energy inequality, approaches the equilibrium solution
$[\varrho_E, \vc{u}_E]$ as $t \to \infty$ as long as the stationary problem \eqref{i6} admits a unique solution. To the best of our knowledge, this is the first result of this kind in the multi--dimensional case under the non--zero in/out flow boundary conditions. Note
that such a result does not follow from ``standard'' arguments, even if $\varrho_E > 0$, as the Lyapunov function
\[
t \mapsto \intO{ E\left(\varrho, \vc{u} \Big| \varrho_E, \vc{u}_E \right)(t, \cdot) }
\]
is not continuous on the trajectories generated by weak solutions. In addition, we show that the convergence is uniform with respect to bounded energy initial data.
The paper is organized as follows. In Section \ref{M}, we recall the concept of weak solution to the Navier--Stokes system and state our main result. Section \ref{S} is devoted to the stationary problem \eqref{i6}. In particular, we establish several conditions sufficient for its unique solvability. The main convergence result is shown in Section \ref{C}.
\section{Weak solutions, energy inequality, main results}
\label{M}
We start by introducing the main hypotheses imposed on the structural properties of the potential $G$ and the pressure $p$.
In what follows, we shall always assume that $\Omega \subset R^d$ is a bounded Lipschitz domain.
Keeping in mind the iconic example of the gravitational potential, we require only
\begin{equation} \label{MH1}
G \in C^1(\Ov{\Omega}).
\end{equation}
As for the pressure, we assume
\begin{equation} \label{MH2}
p \in C^1[0, \infty),\ p(0) = 0, \ p'(\varrho) > 0 \ \mbox{for}\ \varrho > 0, \ p'(\varrho) \approx \varrho^{\gamma - 1}, \ \gamma > 1
\ \mbox{as}\ \varrho \to \infty.
\end{equation}
Here, the symbol $p'(\varrho) \approx \varrho^{\gamma - 1}$ as $\varrho \to \infty$ means
\[
\underline{p} \varrho^{\gamma - 1} \leq p'(\varrho) \leq \Ov{p} \varrho^{\gamma - 1} \ \mbox{for all}\
\varrho > 1, \ \mbox{where}\ \underline{p} > 0.
\]
Accordingly, the pressure potential $P$ defined as
\[
P'(\varrho) \varrho - P(\varrho) = p(\varrho),\ P(0) = 0, \ \Rightarrow \ P''(\varrho) = \frac{p'(\varrho)}{\varrho} \ \mbox{for}\ \varrho > 0,
\]
is a strictly convex function on $[0, \infty)$. Without loss of generality, we may therefore assume
\[
P'(\varrho) \to - \infty \ \mbox{if} \ \varrho \to 0+ \ \mbox{or}\ P'(\varrho) \to 0 \ \mbox{if}\ \varrho \to 0+,
\]
adding a linear function to $P$ in the latter case if necessary.
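For illustration, the two alternatives can be realized by the standard constitutive laws (a sketch; $a > 0$ is a hypothetical constant, and the isothermal case lies outside the hypothesis $\gamma > 1$):
\[
p(\varrho) = a \varrho^\gamma \ \Rightarrow \ P(\varrho) = \frac{a}{\gamma - 1} \varrho^\gamma, \quad
P'(\varrho) = \frac{a \gamma}{\gamma - 1} \varrho^{\gamma - 1} \to 0 \ \mbox{as}\ \varrho \to 0+;
\]
\[
p(\varrho) = a \varrho \ \Rightarrow \ P(\varrho) = a \varrho \log \varrho, \quad
P'(\varrho) = a \left( \log \varrho + 1 \right) \to - \infty \ \mbox{as}\ \varrho \to 0+.
\]
In both cases one checks directly that $P'(\varrho) \varrho - P(\varrho) = p(\varrho)$ and $P(0) = 0$.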
\subsection{Weak solutions to the Navier--Stokes system}
\label{MS1}
The functions $[\varrho, \vc{u}]$ represent a weak solution of the Navier--Stokes system \eqref{i1}--\eqref{i3} in $[0, \infty) \times \Omega$, with the boundary data
\[
\vc{u}_b = \vc{u}_E|_{\partial \Omega},\ \varrho_b = \varrho_E|_{\partial \Omega},
\]
if:
\begin{itemize}
\item
\[
\begin{split}
\varrho \in C_{{\rm weak,loc}}([0, \infty); L^\gamma (\Omega)),\ \varrho \geq 0,\
\vc{m} \equiv \varrho \vc{u} &\in C_{{\rm weak,loc}}([0, \infty); L^{\frac{2 \gamma}{\gamma + 1}}(\Omega; R^d)),\\
(\vc{u} - \vc{u}_E) \in L^2_{\rm loc}([0,\infty); W^{1,2}(\Omega; R^d)),\ \varrho &\in L^\gamma_{{\rm loc}}([0, \infty);
L^\gamma (\Gamma_{\rm out}; {\rm d} |\vc{u}_b \cdot \vc{n}|)).
\end{split}
\]
\item
Equation of continuity
\begin{equation} \label{M1}
\begin{split}
\left[ \intO{ \varrho \varphi } \right]_{t = 0}^{t = \tau} &+
\int_0^\tau \int_{\Gamma_{\rm out}} \varphi \varrho \vc{u}_E \cdot \vc{n} \ {\rm d} S_x \,{\rm d} t
+
\int_0^\tau \int_{\Gamma_{\rm in}} \varphi \varrho_E \vc{u}_E \cdot \vc{n} \ {\rm d} S_x \,{\rm d} t \\ &=
\int_0^\tau \intO{ \Big[ \varrho \partial_t \varphi + \varrho \vc{u} \cdot \nabla_x \varphi \Big] } \,{\rm d} t
\end{split}
\end{equation}
holds for any $0 \leq \tau < \infty$ and any test function
$\varphi \in C^1_c([0,\infty) \times \Ov{\Omega})$.
In addition, we require also the renormalized version of \eqref{M1},
\begin{equation} \label{M1a}
\begin{split}
\left[ \intO{ b(\varrho) \varphi } \right]_{t = 0}^{t = \tau} &+
\int_0^\tau \int_{\Gamma_{\rm out}} \varphi b(\varrho) \vc{u}_E \cdot \vc{n} \ {\rm d} S_x \,{\rm d} t
+
\int_0^\tau \int_{\Gamma_{\rm in}} \varphi b(\varrho_E) \vc{u}_E \cdot \vc{n} \ {\rm d} S_x \,{\rm d} t \\ &=
\int_0^\tau \intO{ \Big[ b(\varrho) \partial_t \varphi + b(\varrho) \vc{u} \cdot \nabla_x \varphi -
\Big( b'(\varrho) \varrho - b(\varrho) \Big) {\rm div}_x \vc{u} \Big] } \,{\rm d} t
\end{split}
\end{equation}
to be satisfied for any $0 \leq \tau < \infty$, any test function
$\varphi \in C^1_c([0,\infty) \times \Ov{\Omega})$, and any $b \in C^1[0, \infty)$ with $b' \in C_c[0, \infty)$.
\item
Momentum equation
\begin{equation} \label{M2}
\begin{split}
\left[ \intO{ \varrho \vc{u} \cdot \boldsymbol{\varphi} } \right]_{t=0}^{t = \tau} &=
\int_0^\tau \intO{ \Big[ \varrho \vc{u} \cdot \partial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u} : \nabla_x \boldsymbol{\varphi}
+ p(\varrho) {\rm div}_x \boldsymbol{\varphi} - \mathbb{S}(\mathbb{D}_x \vc{u}) : \nabla_x \boldsymbol{\varphi} \Big] } \,{\rm d} t\\
&+ \int_0^\tau \intO{\varrho \nabla_x G \cdot \boldsymbol{\varphi} } \,{\rm d} t
\end{split}
\end{equation}
holds for any $0 \leq \tau < \infty$, and any test function
$\boldsymbol{\varphi} \in C^1_c([0,\infty) \times {\Omega}; R^d)$.
\end{itemize}
\subsection{Energy balance}
The energy inequality is an indispensable part of the definition of weak solution.
In view of the direct calculations presented in the Appendix, it takes the form
\begin{equation} \label{M3}
\begin{split}
-&\int_0^\infty \partial_t \Psi \intO{\left[ \frac{1}{2} \varrho |\vc{u} - \vc{u}_E|^2 + P(\varrho) \right] }\,{\rm d} t +
\int_0^\infty \Psi \intO{ \mathbb{S}(\mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } \,{\rm d} t \\
&+ \int_0^\infty \Psi \int_{\Gamma_{\rm out}} P(\varrho) \vc{u}_E \cdot \vc{n} \ {\rm d} S_x \,{\rm d} t +
\int_0^\infty \Psi \int_{\Gamma_{\rm in}} P(\varrho_E) \vc{u}_E \cdot \vc{n} \ {\rm d} S_x \,{\rm d} t
\\
&\leq
\Psi(0) \intO{\left[ \frac{1}{2} \varrho(0, \cdot) | \vc{u}(0, \cdot) - \vc{u}_E|^2 + P(\varrho(0, \cdot)) \right]} \\
&-
\int_0^\infty \Psi \intO{ \left[ \varrho \vc{u} \otimes \vc{u} + p(\varrho) \mathbb{I} \right] : \nabla_x \vc{u}_E } \,{\rm d} t + \int_0^\infty \Psi \intO{ {\varrho} { \vc{u} \cdot \frac12 \nabla_x |\vc{u}_E|^2} }
\,{\rm d} t \\ &+ \int_0^\infty \Psi \intO{ \mathbb{S}(\mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u}_E } \,{\rm d} t + \int_0^\infty \Psi \intO{ \varrho \nabla_x G \cdot (\vc{u} - \vc{u}_E) }\,{\rm d} t
\end{split}
\end{equation}
for any $\Psi \in C^1_c[0, \infty)$, $\Psi \geq 0$.
\begin{Remark} \label{MR3}
The energy can be defined in terms of the density and the momentum, which are weakly continuous in time:
\[
E \left(\varrho, \vc{u} \ \Big| \vc{u}_E \right) \equiv \left[ \frac{1}{2} \varrho |\vc{u} - \vc{u}_E|^2 + P(\varrho) \right] =
\left[ \frac{1}{2} \frac{|\vc{m}|^2}{\varrho} - \vc{m} \cdot \vc{u}_E + \frac{1}{2} \varrho |\vc{u}_E|^2 + P(\varrho) \right], \ \vc{m} \equiv
\varrho \vc{u}.
\]
Moreover, with the convention
\[
E \left(\varrho, \vc{u} \ \Big| \vc{u}_E \right) = \infty \ \mbox{if}\ \varrho < 0 \ \mbox{or}\ \varrho = 0, \vc{m} \ne 0,\
E \left(\varrho, \vc{u} \ \Big| \vc{u}_E \right) = 0 \ \mbox{if}\ \varrho = 0, \ \vc{m} = 0,
\]
$E$ is a convex l.s.c. function of $[\varrho, \vc{m}] \in R^{d + 1}$.
\end{Remark}
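The convexity claimed in the remark can be verified through the following standard dual formula (a sketch):
\[
\frac{1}{2} \frac{|\vc{m}|^2}{\varrho} = \sup_{\vc{v} \in R^d} \left[ \vc{m} \cdot \vc{v} - \frac{1}{2} \varrho |\vc{v}|^2 \right] \ \mbox{for}\ \varrho > 0,
\]
which exhibits the kinetic part of $E$ as a supremum of functions affine in $[\varrho, \vc{m}]$, hence convex and l.s.c.; the same formula reproduces the convention above at $\varrho = 0$, while $P$ is convex by \eqref{MH2}.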
\begin{Definition}[Finite energy weak solution] \label{MD1}
A weak solution $[\varrho, \vc{u}]$ as specified in Section \ref{MS1} satisfying the energy inequality
\eqref{M3} is called a \emph{finite energy weak solution} of the Navier--Stokes system \eqref{i1}--\eqref{i3} in
$[0, \infty) \times \Omega$.
\end{Definition}
The existence of finite energy weak solutions for the Navier--Stokes system with in/out flux boundary conditions has been proved
in \cite{ChJiNo}, \cite{ChNoYa}, \cite{KwoNovSat} (see also Girinon \cite{GI}) under additional assumptions on the smoothness of the domain $\Omega$ and for $\gamma > \frac{d}{2}$.
At this stage the total energy
\[
\intO{ E \left( \varrho, \vc{u} \ \Big| \vc{u}_E \right) }
\]
is not necessarily a decreasing function of time. Further assumptions on $\vc{u}_E$ and $\varrho_b$ specified below are necessary to convert
it to a kind of Lyapunov function for the system.
\subsection{Main result}
We are ready to state our main result.
\begin{Theorem}[Convergence to equilibrium] \label{MT1}
Let $\Omega \subset R^d$, $d=2,3$, be a bounded Lipschitz domain. Let $G$ and $p$ satisfy the hypotheses \eqref{MH1}, \eqref{MH2}, with
\[
\gamma > \frac{d}{2}.
\]
Let $\vc{u}_E$ be a given field such that
\begin{equation} \label{M4}
\mathbb{D}_x \vc{u}_E = 0,\ \nabla_x G \cdot \vc{u}_E = 0.
\end{equation}
Let $\varrho_E$ be a density field solving the stationary problem \eqref{i6} with the given $\vc{u}_E$ such that
\[
\varrho_E \geq 0, \ \mbox{the set}\ \left\{ x \in \Omega \ \Big| \ \varrho_E(x) > 0 \right\} \ne \emptyset
\ \mbox{is connected in}\ \Omega, \ \varrho_E|_{\Gamma_{\rm in}} > 0.
\]
Let $[\varrho, \vc{u}]$ be a finite energy weak solution of the problem \eqref{i1}--\eqref{i3} in $[0, \infty) \times \Omega$, with
the boundary conditions \eqref{i7}, and
\[
\intO{ E \left( \varrho, \vc{u} \ \Big|\ \vc{u}_E \right)(0, \cdot) } \leq E_0,
\]
\[
\intO{ \varrho(0, \cdot) } = M_0 > 0, \ M_0 = \intO{\varrho_E} \ \mbox{if}\ \Gamma_{\rm in} = \emptyset.
\]
Then for any $\varepsilon > 0$, there exists $T = T(\varepsilon)$, depending only on $\varepsilon$ and $E_0$, such that
\[
\| \varrho(t, \cdot) - \varrho_E \|_{L^\gamma(\Omega)} +
\| \varrho (\vc{u} - \vc{u}_E)(t, \cdot) \|_{L^{\frac{2 \gamma}{\gamma + 1}}(\Omega; R^d)} < \varepsilon
\ \mbox{for all}\ t > T(\varepsilon).
\]
\end{Theorem}
\begin{Remark} \label{MR1}
It follows from the equation \eqref{i6} that the pressure $p(\varrho_E)$ is a continuously differentiable function in $\Ov{\Omega}$; in particular, $\varrho_E \in C(\Ov{\Omega})$.
\end{Remark}
\begin{Remark} \label{MR2}
As ${\rm div}_x \vc{u}_E = 0$, we have
\[
\int_{\partial \Omega} \vc{u}_E \cdot \vc{n}\ {\rm d} S_x = 0.
\]
Consequently, if $\Gamma_{\rm in} = \emptyset$, then necessarily
\begin{equation} \label{M12}
\vc{u}_E \cdot \vc{n}|_{\partial \Omega} = 0.
\end{equation}
As $\vc{u}_E$ is the velocity of a rigid motion and $\Omega$ is bounded, relation \eqref{M12} implies that
either $\Omega$ is rotationally symmetric or $\vc{u}_E = 0$. In both cases, the total mass
\[
\intO{ \varrho(t, \cdot) } = M_0 \ \mbox{is a constant of motion.}
\]
\end{Remark}
The following two sections are devoted to the proof of Theorem \ref{MT1}.
\section{Stationary problem}
\label{S}
In view of the hypothesis \eqref{M4}, the energy inequality \eqref{M3} simplifies to
\begin{equation} \label{M9}
\begin{split}
&-\int_0^\infty \partial_t \Psi \intO{\left[ \frac{1}{2} \varrho |\vc{u} - \vc{u}_E|^2 + P(\varrho) - \varrho \left( \frac{1}{2} |\vc{u}_E|^2 + G \right) \right] } \,{\rm d} t +
\int_0^\infty \Psi \intO{ \mathbb{S}(\mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } \,{\rm d} t \\
&+\int_0^\infty \Psi \int_{\Gamma_{\rm out}} \left[ P(\varrho) - (\varrho - \varrho_E) \left(\frac{1}{2} |\vc{u}_E|^2 + G \right) -
P(\varrho_E) \right] \vc{u}_E \cdot \vc{n} \ {\rm d} S_x \,{\rm d} t \\&+
\int_0^\infty \Psi \int_{\partial \Omega} \left[ P(\varrho_E) - \varrho_E \left( \frac{1}{2} |\vc{u}_E|^2 + G \right) \right] \vc{u}_E \cdot \vc{n} \ {\rm d} S_x \,{\rm d} t \\
&
\leq \Psi(0) \intO{ \left[ E \left(\varrho, \vc{u} \ \Big|\ \vc{u}_E \right) - \varrho \left( \frac{1}{2} |\vc{u}_E|^2 + G \right)\right](0, \cdot) }
\end{split}
\end{equation}
for any $\Psi \in C^1_c[0, \infty)$, $\Psi \geq 0$.
\subsection{Stationary equation of continuity}
Next we use the hypothesis that the boundary data for the density, $\varrho_b = \varrho_E|_{\Gamma_{\rm in}}$, are determined by the stationary
density $\varrho_E$ satisfying, in particular, the equation of continuity
\begin{equation} \label{M10}
{\rm div}_x(\varrho_E \vc{u}_E) = 0 \ \mbox{in}\ \mathcal{D}'(\Omega).
\end{equation}
It follows from \eqref{M10} that
\[
\int_{\partial \Omega} \varrho_E \left( \frac{1}{2} |\vc{u}_E|^2 + G \right) \vc{u}_E \cdot \vc{n} \ {\rm d} S_x
= \intO{ \varrho_E \nabla_x \left( \frac{1}{2} |\vc{u}_E|^2 + G \right) \cdot \vc{u}_E } = 0,
\]
where the last equality follows from \eqref{M4} and
\begin{equation} \label{M8}
\nabla_x |\vc{u}_E|^2 \cdot \vc{u}_E = - 2 \vc{u}_E \cdot \nabla_x \vc{u}_E \cdot \vc{u}_E =
- \vc{u}_E \cdot \nabla_x \vc{u}_E \cdot \vc{u}_E - \vc{u}_E \cdot \nabla_x^t \vc{u}_E \cdot \vc{u}_E = - 2 \vc{u}_E \cdot \mathbb{D}_x \vc{u}_E \cdot \vc{u}_E = 0.
\end{equation}
Similarly, using ${\rm div}_x \vc{u}_E = 0$, we get by renormalization
\[
{\rm div}_x (P(\varrho_E) \vc{u}_E ) = 0 \ \Rightarrow \ \int_{\partial \Omega} P(\varrho_E) \vc{u}_E \cdot \vc{n}\ {\rm d} S_x = 0.
\]
Consequently, the energy inequality \eqref{M9} takes the form
\begin{equation} \label{M11}
\begin{split}
&- \int_0^\infty \partial_t \Psi \intO{\left[ \frac{1}{2} \varrho |\vc{u} - \vc{u}_E|^2 + P(\varrho) - \varrho \left( \frac{1}{2} |\vc{u}_E|^2 + G \right) \right] } \,{\rm d} t +
\int_0^\infty \Psi \intO{ \mathbb{S}(\mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } \,{\rm d} t \\
&+\int_0^\infty \Psi \int_{\Gamma_{\rm out}} \left[ P(\varrho) - (\varrho - \varrho_E) \left(\frac{1}{2} |\vc{u}_E|^2 + G \right) -
P(\varrho_E) \right] \vc{u}_E \cdot \vc{n} \ {\rm d} S_x \,{\rm d} t \\ &\leq
\Psi(0) \intO{ \left[ E \left( \varrho, \vc{u} \ \Big| \vc{u}_E \right) - \varrho \left( \frac{1}{2} |\vc{u}_E|^2 + G \right) \right] (0, \cdot) }
\end{split}
\end{equation}
for any $\Psi \in C^1_c[0, \infty)$, $\Psi \geq 0$.
Note that the result holds under general assumptions on $\varrho_E$; in particular, it is enough that $\varrho_E \in C(\Ov{\Omega})$ and
$\varrho_E \geq 0$, not necessarily $\varrho_E > 0$.
\subsection{Stationary momentum equation}
\label{SME}
In view of
\begin{equation} \label{M7}
\vc{u}_E \cdot \nabla_x \vc{u}_E = 2 \vc{u}_E \cdot \mathbb{D}_x \vc{u}_E - \vc{u}_E \cdot \nabla_x^t \vc{u}_E = - \frac{1}{2} \nabla_x |\vc{u}_E|^2,
\end{equation}
the stationary momentum equation can be written in the form
\begin{equation} \label{S2}
\nabla_x p(\varrho_E) = \varrho_E \nabla_x \left( G + \frac{1}{2} |\vc{u}_E|^2 \right),
\end{equation}
in particular $p(\varrho_E) \in C^1(\Ov{\Omega})$ and $\varrho_E \in C(\Ov{\Omega})$. We point out that $\varrho_E$ need not be continuously differentiable on the boundary of its domain of positivity.
If $\varrho_E > 0$, we can rewrite \eqref{S2} as
\begin{equation} \label{S3}
\nabla_x P'(\varrho_E) = \nabla_x \left( G + \frac{1}{2} |\vc{u}_E|^2 \right)
\ \Rightarrow \ P'(\varrho_E) = G + \frac{1}{2} |\vc{u}_E|^2 - C_E,
\end{equation}
where $C_E$ is a constant. In accordance with the hypotheses of Theorem \ref{MT1}, the domain of positivity of $\varrho_E$,
\[
\left\{ x \in \Omega \ \Big| \ \varrho_E(x) > 0 \right\}
\]
is bounded and connected in $\Omega$; whence $\varrho_E$ is given by the formula
\begin{equation} \label{S4}
\begin{split}
\varrho_E(x) &= (P')^{-1} \left[ G(x) + \frac{1}{2} |\vc{u}_E(x)|^2 - C_E \right]^+ \ \mbox{if}\ P'(0+) = 0, \\
\varrho_E(x) &= (P')^{-1} \left( G(x) + \frac{1}{2} |\vc{u}_E(x)|^2 - C_E \right)\ \mbox{if}\ P'(0+) = -\infty.
\end{split}
\end{equation}
Note that in the latter case vacuum does not occur: $\varrho_E > 0$ in $\Omega$. The constant $C_E \in R$ is uniquely determined by the
boundary value $\varrho_b = \varrho_E|_{\Gamma_{\rm in}}$ if $\Gamma_{\rm in} \ne \emptyset$, or by the total mass
\[
M_0 = \intO{ \varrho_E }
\]
in the case $\Gamma_{\rm in} = \emptyset$.
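As a concrete sketch, for the hypothetical isentropic law $p(\varrho) = a \varrho^\gamma$, $a > 0$, $\gamma > 1$, one has $P'(\varrho) = \frac{a \gamma}{\gamma - 1} \varrho^{\gamma - 1}$, $P'(0+) = 0$, and the first formula in \eqref{S4} becomes
\[
\varrho_E(x) = \left( \frac{\gamma - 1}{a \gamma} \left[ G(x) + \frac{1}{2} |\vc{u}_E(x)|^2 - C_E \right]^+ \right)^{\frac{1}{\gamma - 1}},
\]
so vacuum occurs precisely on the set where $G + \frac{1}{2} |\vc{u}_E|^2 \leq C_E$.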
Finally, we rewrite the energy inequality \eqref{M11} in the form
\begin{equation} \label{S5}
\begin{split}
&- \int_0^\infty \partial_t \Psi \intO{\left[ \frac{1}{2} \varrho |\vc{u} - \vc{u}_E|^2 + P(\varrho) - (\varrho -
\varrho_E) \left( G+ \frac{1}{2} |\vc{u}_E|^2 - C_E\right) - P(\varrho_E) \right] } \,{\rm d} t \\ &+
\int_0^\infty \Psi \intO{ \mathbb{S}(\mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } \,{\rm d} t \\
&+\int_0^\infty \Psi \int_{\Gamma_{\rm out}} \left[ P(\varrho) - (\varrho - \varrho_E) \left(G+ \frac{1}{2} |\vc{u}_E|^2 - C_E\right) -
P(\varrho_E) \right] \vc{u}_E \cdot \vc{n} \ {\rm d} S_x \,{\rm d} t \\ &\leq
\Psi(0) \intO{ \left[ E \left( \varrho, \vc{u} \ \Big| \vc{u}_E \right) - (\varrho - \varrho_E) \left( G + \frac{1}{2} |\vc{u}_E|^2 - C_E \right)
- P(\varrho_E) \right] (0, \cdot) }
\end{split}
\end{equation}
for any $\Psi \in C^1_c[0, \infty)$, $\Psi \geq 0$. Here, we have
\begin{equation} \label{S6}
\begin{split}
P(\varrho) &- (\varrho -
\varrho_E) \left( G+ \frac{1}{2} |\vc{u}_E|^2 - C_E\right) - P(\varrho_E) \\ &=
P(\varrho) - (\varrho -
\varrho_E) P'(\varrho_E) - P(\varrho_E) \geq 0 \ \mbox{whenever}\ \varrho_E > 0,
\end{split}
\end{equation}
and
\begin{equation} \label{S7}
\begin{split}
P(\varrho) &- (\varrho -
\varrho_E) \left( G+ \frac{1}{2} |\vc{u}_E|^2 - C_E\right) - P(\varrho_E) \\ &=
P(\varrho) - \varrho \left( G+ \frac{1}{2} |\vc{u}_E|^2 - C_E\right) \geq P(\varrho) \geq 0 \ \mbox{if}\ \varrho_E = 0.
\end{split}
\end{equation}
In particular, the function $\mathcal{E}$,
\[
\mathcal{E} : t \mapsto \intO{ \left[ \frac{1}{2} \varrho |\vc{u} - \vc{u}_E|^2 + P(\varrho) - (\varrho -
\varrho_E) \left( G+ \frac{1}{2} |\vc{u}_E|^2 - C_E\right) - P(\varrho_E) \right](t, \cdot) }
\]
coincides on a set of full measure in $(0, \infty)$ with a non--increasing function, and moreover
\begin{equation} \label{S8}
\mathcal{E}(t) \to 0 \ \mbox{as}\ t \to \infty \ \Rightarrow \
\| \varrho(t, \cdot) - \varrho_E \|_{L^\gamma(\Omega_\varepsilonga)} +
\| \varrho (\vc{u} - \vc{u}_E)(t, \cdot) \|_{L^{\frac{2 \gamma}{\gamma + 1}}(\Omega_\varepsilonga; R^d)} \to 0
\ \mbox{as}\ t \to \infty.
\end{equation}
\section{Convergence to equilibria}
\label{C}
Our goal is to show Theorem \ref{MT1}. We start with the following auxiliary result:
\begin{Lemma} \label{CL1}
Let $\{ \varrho_n, \vc{u}_n \}_{n=1}^\infty$ be a sequence of finite energy weak solutions to the Navier--Stokes system
\eqref{i1}--\eqref{i3} on a time interval $(0, 1)$ such that
\[
\intO{ E \left( \varrho_n, \vc{u}_n \ \Big| \ \vc{u}_E \right) } \leq E_0,
\]
\[
\int_0^{1} \intO{ \mathbb{S}(\mathbb{D}_x \vc{u}_n) : \mathbb{D}_x \vc{u}_n } \,{\rm d} t \leq E_0
\]
uniformly for $n = 1,2,\dots$, and
\[
{\rm div}_x \vc{u}_n \to 0 \ \mbox{in}\ L^2(0, 1; L^2(\Omega_\varepsilonga)).
\]
Then we have
\[
\begin{split}
\varrho_n &\to \varrho \ \mbox{in}\ L^{\gamma + \alpha}((0, 1) \times \Omega_\varepsilonga),\\
\vc{u}_n &\to \vc{u} \ \mbox{weakly in}\ L^2(0,1; W^{1,2}(\Omega_\varepsilonga; R^d)),\\
\varrho_n \vc{u}_n \otimes \vc{u}_n &\to \varrho \vc{u} \otimes \vc{u} \ \mbox{in}\ L^{1 + \alpha}((0,1) \times \Omega_\varepsilonga; R^{d \times d})
\end{split}
\]
for a certain $\alpha > 0$, passing to a suitable subsequence if necessary.
\end{Lemma}
The proof of Lemma \ref{CL1} is based on by now standard arguments from the theory of the compressible Navier--Stokes system and may be found in \cite{FeiPr}.
\subsection{Convergence to equilibria}
\label{CTE}
To show convergence we introduce the sequence of time--shifts:
\[
\varrho_n (t,x) = \varrho(t + n,x), \ \vc{u}_n(t,x) = \vc{u}(t + n,x),\ n=1,2,\dots
\]
where $[\varrho, \vc{u}]$ is a global--in--time finite energy weak solution to the Navier--Stokes system. It follows from the energy inequality
\eqref{S5} that
\[
\int_0^1 \intO{ \mathbb{S}(\mathbb{D}_x \vc{u}_n) : (\mathbb{D}_x \vc{u}_n) } \,{\rm d} t =
\int_0^1 \intO{ \mathbb{S}(\mathbb{D}_x (\vc{u}_n - \vc{u}_E)) : (\mathbb{D}_x (\vc{u}_n - \vc{u}_E)) } \,{\rm d} t \to 0
\ \mbox{as}\ n \to \infty;
\]
whence, by virtue of Korn--Poincar\' e inequality,
\[
\vc{u}_n \to \vc{u}_E \ \mbox{in}\ L^2(0,1; W^{1,2}(\Omega_\varepsilonga; R^d)).
\]
Moreover, applying Lemma \ref{CL1} we may perform the limit in the equations \eqref{M1}, \eqref{M2} obtaining
\begin{equation} \label{C1}
\begin{split}
&
\int_0^1 \int_{\Gamma_{\rm out}} \varphi \varrho \vc{u}_E \cdot \vc{n} \,{\rm d} S_x \,{\rm d} t
+
\int_0^1 \int_{\Gamma_{\rm in}} \varphi \varrho_E \vc{u}_E \cdot \vc{n} \,{\rm d} S_x \,{\rm d} t \\ &=
\int_0^1 \intO{ \Big[ \varrho \mathbb{P}artial_t \varphi + \varrho \vc{u}_E \cdot \nabla_x \varphi \Big] } \,{\rm d} t
\end{split}
\end{equation}
for any test function $\varphi \in C^1_c((0,1) \times \Ov{\Omega_\varepsilonga})$,
\begin{equation} \label{C2}
-\int_0^1 \intO{ \Big[ \varrho \vc{u}_E \cdot \partial_t \boldsymbol{\varphi} + \varrho \vc{u}_E \otimes \vc{u}_E : \nabla_x \boldsymbol{\varphi}
+ p(\varrho) {\rm div}_x \boldsymbol{\varphi} \Big] } \,{\rm d} t
= \int_0^1 \intO{\varrho \nabla_x G \cdot \boldsymbol{\varphi} } \,{\rm d} t
\end{equation}
for any test function
$\boldsymbol{\varphi} \in C^1_c((0,1) \times {\Omega_\varepsilonga}; R^d)$.
It follows from \eqref{C1} that
\begin{equation} \label{C3}
-\int_0^1 \intO{ \varrho \vc{u}_E \cdot \partial_t \boldsymbol{\varphi} } \,{\rm d} t =
\int_0^1 \intO{ \varrho \vc{u}_E \cdot \nabla_x \vc{u}_E \cdot \boldsymbol{\varphi} } \,{\rm d} t +
\int_0^1 \intO{ \varrho \vc{u}_E \otimes \vc{u}_E : \nabla_x \boldsymbol{\varphi} } \,{\rm d} t
\end{equation}
for any $\boldsymbol{\varphi} \in C^1_c((0,1) \times {\Omega_\varepsilonga}; R^d)$. In particular, we deduce from \eqref{C2} using \eqref{M7} that
\begin{equation} \label{C1+}
- \int_0^1 \intO{
p(\varrho) {\rm div}_x \boldsymbol{\varphi} }
= \int_0^1 \intO{\varrho \nabla_x \left( G + \frac{1}{2} |\vc{u}_E|^2 \right) \cdot \boldsymbol{\varphi} } \,{\rm d} t
\end{equation}
for any $\boldsymbol{\varphi} \in C^1_c((0,1) \times \Omega_\varepsilonga; R^d)$. Thus we get
\[
\nabla_x p(\varrho(t, \cdot)) = \varrho(t, \cdot) \nabla_x \left( G + \frac{1}{2} |\vc{u}_E|^2 \right) \ \mbox{in}\
\mathcal{D}'(\Omega_\varepsilonga) \ \mbox{for a.a.}\ t \in (0,1),
\]
from which, by a simple bootstrap argument, we deduce
\begin{equation} \label{C2+}
p(\varrho(t, \cdot)) \in C^1(\Ov{\Omega_\varepsilonga}),\
\varrho(t, \cdot) \in C(\Ov{\Omega_\varepsilonga}), \ \mbox{and}\ \nabla_x p(\varrho(t, \cdot)) = \varrho(t, \cdot) \nabla_x \left( G + \frac{1}{2} |\vc{u}_E|^2 \right)
\ \mbox{for a.a.}\ t \in (0,1).
\end{equation}
If $\Gamma_{\rm in} = \emptyset$, then \[
\intO{\varrho(t, \cdot)} = M_0
\] for any $t \in (0,1)$, whence we deduce from \eqref{C2+}, exactly as in Section \ref{SME}, that
\begin{equation} \label{C3+}
\varrho(t, \cdot) = \varrho_E \ \mbox{for a.a.}\ t \in (0,1).
\end{equation}
Similarly, if $\Gamma_{\rm in} \ne \emptyset$, as ${\rm div}_x \vc{u}_E = 0$ and $\varrho(t, \cdot)$ is continuous, we deduce from \eqref{C1} that
\[
\varrho(t, \cdot)|_{\Gamma_{\rm in}} = \varrho_E \ \mbox{for a.a.}\ t \in (0,1),
\]
which yields the same conclusion \eqref{C3+}.
Consequently, there is a sequence $t_n \to \infty$ such that
\[
\mathcal{E}(t_n) \to 0 \ \mbox{as}\ n \to \infty.
\]
As $\mathcal{E}(t)$ is non--increasing, this yields the desired conclusion
\begin{equation} \label{CC1}
\| \varrho(t, \cdot) - \varrho_E \|_{L^\gamma(\Omega_\varepsilonga)} +
\| \varrho (\vc{u} - \vc{u}_E)(t, \cdot) \|_{L^{\frac{2 \gamma}{\gamma + 1}}(\Omega_\varepsilonga; R^d)} \to 0
\ \mbox{as}\ t \to \infty.
\end{equation}
\subsection{Uniform convergence}
To show the uniform convergence claimed in Theorem \ref{MT1}, it is enough to show
\[
\mathcal{E}(t) = \intO{\left[ \frac{1}{2} \varrho |\vc{u} - \vc{u}_E|^2 + P(\varrho) - (\varrho -
\varrho_E) \left( G+ \frac{1}{2} |\vc{u}_E|^2 - C_E\right) - P(\varrho_E) \right](t, \cdot) } \to 0
\ \mbox{as}\ t \to \infty
\]
uniformly for
\[
\mathcal{E}(0+) \leq E_0.
\]
Arguing by contradiction, we suppose there is $\delta > 0$, a sequence of times $t_m \to \infty$, and a sequence of global--in--time solutions
$\{ \varrho_m, \vc{u}_m \}_{m=1}^\infty$, with the associated energies $\mathcal{E}_m$ such that
\begin{equation} \label{CC2}
\mathcal{E}_m(0+) \leq E_0,\ \mathcal{E}_m(t) \geq \delta > 0 \ \mbox{for any}\ t \in [0, t_m].
\end{equation}
However, as the functions $\mathcal{E}_m$ are non--increasing in time, non--negative, and satisfy the energy inequality
\eqref{S5}, we get
\[
\int_0^{t_m} \intO{ \mathbb{S}(\mathbb{D}_x \vc{u}_m) : \mathbb{D}_x (\vc{u}_m) } \,{\rm d} t \leq E_0 \ \mbox{uniformly for}\ m = 1,2,\dots.
\]
Consequently, there must be another sequence $\tau_m \to \infty$ such that
\[
(\tau_m, \tau_{m} + 1) \subset [0, t_m),\ \int_{\tau_m}^{\tau_m + 1} \intO{ \mathbb{S}(\mathbb{D}_x \vc{u}_m) : \mathbb{D}_x (\vc{u}_m) } \,{\rm d} t
\to 0 \ \mbox{as}\ m \to \infty.
\]
Thus repeating the arguments of Section \ref{CTE} we would obtain another sequence $s_m \to \infty$,
$s_m \leq t_m$ such that
\[
\mathcal{E}_m(s_m) \to 0 \ \mbox{as}\ m \to \infty
\]
in contradiction with \eqref{CC2}.
We have proved Theorem \ref{MT1}.
\section{Appendix}
Below we present the formal derivation of energy balance \eqref{M3}.
Since $\vc{u}_E$ is time independent we get from the balance of momentum
\begin{equation}
\partial_t (\varrho \vc{u}) + {\rm div}_x (\varrho \vc{u} \otimes \vc{u}) + \nabla_x p(\varrho) =
{\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u}) + \varrho \nabla_x G
\end{equation}
that
\begin{equation}
\partial_t \varrho\, \vc{u}+\varrho \partial_t (\vc{u} - \vc{u}_E) + {\rm div}_x (\varrho \vc{u} \otimes \vc{u}) + \nabla_x p(\varrho) =
{\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u}) + \varrho \nabla_x G
\end{equation}
and consequently
\begin{equation} \label{AP1}
\begin{split}
\partial_t \varrho\, \vc{u}\cdot (\vc{u} - \vc{u}_E)+\varrho \partial_t (\vc{u} - \vc{u}_E)\cdot (\vc{u} - \vc{u}_E) + {\rm div}_x (\varrho \vc{u} \otimes \vc{u})\cdot (\vc{u} - \vc{u}_E) + \nabla_x p(\varrho)\cdot (\vc{u} - \vc{u}_E) &\\
=
{\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u})\cdot (\vc{u} - \vc{u}_E) + \varrho \nabla_x G\cdot (\vc{u} - \vc{u}_E).
\end{split}
\end{equation}
Since
$\partial_t (\frac12 \varrho |\vc{u} - \vc{u}_E|^2) = \frac12 \partial_t \varrho |\vc{u} - \vc{u}_E|^2 + \varrho (\vc{u} - \vc{u}_E)\cdot \partial_t (\vc{u} - \vc{u}_E) $ we rewrite \eqref{AP1} as
\begin{equation}
\begin{split}
\frac12 \partial_t \varrho |\vc{u} - \vc{u}_E|^2 - \frac12 \partial_t \varrho |\vc{u} - \vc{u}_E|^2 +\partial_t \varrho\, \vc{u}\cdot (\vc{u} - \vc{u}_E)+\varrho \partial_t (\vc{u} - \vc{u}_E)\cdot (\vc{u} - \vc{u}_E)& \\
+ {\rm div}_x (\varrho \vc{u} \otimes \vc{u})\cdot (\vc{u} - \vc{u}_E) + \nabla_x p(\varrho)\cdot (\vc{u} - \vc{u}_E) =
{\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u})\cdot (\vc{u} - \vc{u}_E) + \varrho \nabla_x G\cdot (\vc{u} - \vc{u}_E)
\end{split}
\end{equation}
and then simplify it to
\begin{equation} \label{AP2}
\begin{split}
\partial_t (\frac12 \varrho |\vc{u} - \vc{u}_E|^2) - \frac12 \partial_t \varrho |\vc{u} - \vc{u}_E|^2 +\partial_t \varrho\, \vc{u}\cdot (\vc{u} - \vc{u}_E)& \\
+ {\rm div}_x (\varrho \vc{u} \otimes \vc{u})\cdot (\vc{u} - \vc{u}_E) + \nabla_x p(\varrho)\cdot (\vc{u} - \vc{u}_E) =
{\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u})\cdot (\vc{u} - \vc{u}_E) + \varrho \nabla_x G\cdot (\vc{u} - \vc{u}_E).
\end{split}
\end{equation}
Next, we first integrate \eqref{AP2} over $\Omega_\varepsilonga$, then multiply by $\Psi \in C^1_c[0, \infty)$, $\Psi \geq 0$, and finally integrate over $(0,\infty)$ to get
\begin{equation} \label{AP3}
\begin{split}
\int_0^\infty \Psi \int_{\Omega_\varepsilonga} \partial_t (\frac12 \varrho |\vc{u} - \vc{u}_E|^2) \,{\rm d} {x} \,{\rm d} t - \int_0^\infty \Psi \int_{\Omega_\varepsilonga} \frac12 \partial_t \varrho |\vc{u} - \vc{u}_E|^2\,{\rm d} {x} \,{\rm d} t +\int_0^\infty \Psi \int_{\Omega_\varepsilonga}\partial_t \varrho\, \vc{u}\cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} \,{\rm d} t & \\
+ \int_0^\infty \Psi \int_{\Omega_\varepsilonga}{\rm div}_x (\varrho \vc{u} \otimes \vc{u})\cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} \,{\rm d} t + \int_0^\infty \Psi \int_{\Omega_\varepsilonga}\nabla_x p(\varrho)\cdot (\vc{u} - \vc{u}_E) \,{\rm d} {x} \,{\rm d} t & \\
= \int_0^\infty \Psi \int_{\Omega_\varepsilonga} {\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u})\cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} \,{\rm d} t + \int_0^\infty \Psi \int_{\Omega_\varepsilonga} \varrho \nabla_x G\cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} \,{\rm d} t .
\end{split}
\end{equation}
Using integration by parts we can rewrite the first term in \eqref{AP3} as
\begin{equation} \label{AP4}
\int_0^\infty \Psi \int_{\Omega_\varepsilonga} \partial_t (\frac12 \varrho |\vc{u} - \vc{u}_E|^2) \,{\rm d} {x} \,{\rm d} t = - \int_0^\infty \partial_t \Psi \int_{\Omega_\varepsilonga} \frac12 \varrho |\vc{u} - \vc{u}_E|^2 \,{\rm d} {x} \,{\rm d} t - \Psi(0) \int_{\Omega_\varepsilonga} \frac12 \varrho(0,\cdot) |\vc{u}(0,\cdot) - \vc{u}_E|^2 \,{\rm d} {x}.
\end{equation}
The rest of the terms in \eqref{AP3} can be treated as follows (below we ignore the time integration and the multiplication by $\Psi$, as they play no role in the calculations):
\begin{itemize}
\item
\begin{equation}
\begin{split}
\int_{\Omega_\varepsilonga}{\rm div}_x (\varrho \vc{u} \otimes \vc{u})\cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} &= - \int_{\Omega_\varepsilonga}(\varrho \vc{u} \otimes \vc{u}): \nabla_x(\vc{u} - \vc{u}_E)\,{\rm d} {x} + \int_{\partial \Omega_\varepsilonga} (\varrho \vc{u} \otimes \vc{u})\cdot (\vc{u} - \vc{u}_E)\cdot \vc{n} \,{\rm d} S_x \\
&= - \int_{\Omega_\varepsilonga}(\varrho \vc{u} \otimes \vc{u}): \nabla_x \vc{u} \,{\rm d} {x} + \int_{\Omega_\varepsilonga}(\varrho \vc{u} \otimes \vc{u}): \nabla_x \vc{u}_E\,{\rm d} {x}
\end{split}
\end{equation}
where the boundary term vanishes thanks to the boundary conditions $\vc{u} = \vc{u}_E$ on $\partial \Omega_\varepsilonga$.
\item
\begin{equation}
\begin{split}
& - \int_{\Omega_\varepsilonga} \frac12 \partial_t \varrho |\vc{u} - \vc{u}_E|^2\,{\rm d} {x} + \int_{\Omega_\varepsilonga}\partial_t \varrho\, \vc{u}\cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} - \int_{\Omega_\varepsilonga}(\varrho \vc{u} \otimes \vc{u}): \nabla_x \vc{u} \,{\rm d} {x} \\
&= - \int_{\Omega_\varepsilonga} \frac12 \partial_t \varrho (\vc{u} - \vc{u}_E)\cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} + \int_{\Omega_\varepsilonga}\partial_t \varrho\, \vc{u}\cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} - \int_{\Omega_\varepsilonga}(\varrho \vc{u} \otimes \vc{u}): \nabla_x \vc{u} \,{\rm d} {x} \\
&= \int_{\Omega_\varepsilonga} \frac12 \partial_t \varrho\, \vc{u} \cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} + \int_{\Omega_\varepsilonga} \frac12 \partial_t \varrho\, \vc{u}_E\cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} - \int_{\Omega_\varepsilonga}(\varrho \vc{u} \otimes \vc{u}): \nabla_x \vc{u} \,{\rm d} {x} \\
&= \frac12 \int_{\Omega_\varepsilonga} \partial_t \varrho (\vc{u}+ \vc{u}_E) \cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} - \int_{\Omega_\varepsilonga}(\varrho \vc{u} \otimes \vc{u}): \nabla_x \vc{u} \,{\rm d} {x} \\
&= - \frac12 \int_{\Omega_\varepsilonga} {\rm div}_x(\varrho \vc{u}) (|\vc{u}|^2 - |\vc{u}_E|^2)\,{\rm d} {x} - \int_{\Omega_\varepsilonga}(\varrho \vc{u} \otimes \vc{u}): \nabla_x \vc{u} \,{\rm d} {x} \\
&= \frac12 \int_{\Omega_\varepsilonga} \varrho \vc{u} \cdot (\nabla_x |\vc{u}|^2 - \nabla_x|\vc{u}_E|^2)\,{\rm d} {x} - \frac12 \int_{\partial \Omega_\varepsilonga} \varrho \vc{u} \cdot \vc{n} ( |\vc{u}|^2 - |\vc{u}_E|^2)\,{\rm d} S_x - \int_{\Omega_\varepsilonga}(\varrho \vc{u} \otimes \vc{u}): \nabla_x \vc{u} \,{\rm d} {x} \\
&= - \frac12 \int_{\Omega_\varepsilonga} \varrho \vc{u} \cdot \nabla_x|\vc{u}_E|^2 \,{\rm d} {x} + \frac12 \int_{\Omega_\varepsilonga} \varrho \vc{u}\cdot \nabla_x |\vc{u}|^2\,{\rm d} {x} - \int_{\Omega_\varepsilonga}(\varrho \vc{u} \otimes \vc{u}): \nabla_x \vc{u} \,{\rm d} {x} \\
&= - \frac12 \int_{\Omega_\varepsilonga} \varrho \vc{u} \cdot \nabla_x|\vc{u}_E|^2 \,{\rm d} {x}
\end{split}
\end{equation}
\item
\begin{equation}
\begin{split}
\int_{\Omega_\varepsilonga} {\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u})\cdot (\vc{u} - \vc{u}_E)\,{\rm d} {x} &= - \int_{\Omega_\varepsilonga} \mathbb{S}(\mathbb{D}_x \vc{u}): \nabla_x (\vc{u} - \vc{u}_E)\,{\rm d} {x} +
\int_{\partial \Omega_\varepsilonga} \mathbb{S}(\mathbb{D}_x \vc{u})\cdot (\vc{u} - \vc{u}_E)\cdot \vc{n} \,{\rm d} S_x\\
& = - \int_{\Omega_\varepsilonga} \mathbb{S}(\mathbb{D}_x \vc{u}): \mathbb{D}_x \vc{u} \,{\rm d} {x} + \int_{\Omega_\varepsilonga} \mathbb{S}(\mathbb{D}_x \vc{u}): \mathbb{D}_x \vc{u}_E\,{\rm d} {x} ,
\end{split}
\end{equation}
where the boundary term vanishes thanks to the boundary conditions on $\vc{u}$ and
\begin{equation}
\mathbb{S}(\mathbb{D}_x \vc{u}): \nabla_x (\vc{u} - \vc{u}_E) = \mathbb{S}(\mathbb{D}_x \vc{u}): \mathbb{D}_x (\vc{u} - \vc{u}_E)
\end{equation} thanks to the symmetry of $\mathbb{S}(\mathbb{D}_x \vc{u})$.
\item
\begin{equation}
\begin{split} \label{AP5}
\int_{\Omega_\varepsilonga}\nabla_x p(\varrho)\cdot (\vc{u} - \vc{u}_E) \,{\rm d} {x} &=\int_{\Omega_\varepsilonga}\nabla_x p(\varrho)\cdot \vc{u} \,{\rm d} {x} + \int_{\Omega_\varepsilonga} p(\varrho){\rm div}_x \vc{u}_E \,{\rm d} {x} - \int_{\partial \Omega_\varepsilonga} p(\varrho) \vc{u}_E \cdot \vc{n} \,{\rm d} S_x \\
&= \int_{\Omega_\varepsilonga}\nabla_x p(\varrho)\cdot \vc{u} \,{\rm d} {x} + \int_{\Omega_\varepsilonga} p(\varrho) \mathbb{I} : \nabla_x \vc{u}_E \,{\rm d} {x}- \int_{\partial \Omega_\varepsilonga} p(\varrho) \vc{u}_E \cdot \vc{n} \,{\rm d} S_x.
\end{split}
\end{equation}
\end{itemize}
Finally, thanks to the boundary conditions on $\vc{u}$ we get
\begin{equation}
\begin{split}
\int_{\Omega_\varepsilonga} \partial_t P(\varrho) \,{\rm d} {x} & = \int_{\Omega_\varepsilonga} P'(\varrho)\partial_t \varrho \,{\rm d} {x} = - \int_{\Omega_\varepsilonga} P'(\varrho){\rm div}_x (\varrho \vc{u})\,{\rm d} {x}\\
& = \int_{\Omega_\varepsilonga} \nabla_x P'(\varrho) \cdot (\varrho \vc{u}) \,{\rm d} {x} - \int_{\partial \Omega_\varepsilonga} P'(\varrho) \varrho \vc{u} \cdot \vc{n} \,{\rm d} S_x
= \int_{\Omega_\varepsilonga} P''(\varrho) \varrho \nabla_x \varrho \cdot \vc{u} \,{\rm d} {x} - \int_{\partial \Omega_\varepsilonga} P'(\varrho) \varrho \vc{u} \cdot \vc{n} \,{\rm d} S_x\\
& = \int_{\Omega_\varepsilonga} p'(\varrho) \nabla_x \varrho \cdot \vc{u} \,{\rm d} {x} - \int_{\partial \Omega_\varepsilonga} P'(\varrho) \varrho \vc{u} \cdot \vc{n} \,{\rm d} S_x = \int_{\Omega_\varepsilonga} \nabla_x p(\varrho) \cdot \vc{u} \,{\rm d} {x} - \int_{\partial \Omega_\varepsilonga} P'(\varrho) \varrho \vc{u}_E \cdot \vc{n} \,{\rm d} S_x
\end{split}
\end{equation}
and hence
\begin{equation}
\begin{split} \label{APL}
- \int_0^\infty \partial_t \Psi \int_{\Omega_\varepsilonga} P(\varrho) \,{\rm d} {x} \,{\rm d} t &- \Psi(0) \int_{\Omega_\varepsilonga} P(\varrho(0,\cdot))\,{\rm d} {x} = \int_0^\infty \Psi \partial_t \int_{\Omega_\varepsilonga} P(\varrho) \,{\rm d} {x} \,{\rm d} t \\
& = \int_0^\infty \Psi \int_{\Omega_\varepsilonga} \nabla_x p(\varrho) \cdot \vc{u} \,{\rm d} {x} \,{\rm d} t - \int_0^\infty \Psi \int_{\partial \Omega_\varepsilonga} P'(\varrho) \varrho \vc{u}_E \cdot \vc{n} \,{\rm d} S_x \,{\rm d} t ,
\end{split}
\end{equation}
for any $\Psi \in C^1_c[0, \infty)$, $\Psi \geq 0$.
Relations \eqref{AP4}--\eqref{AP5}, \eqref{APL} put together with \eqref{AP3} and the boundary conditions $\varrho=\varrho_E$ on $\Gamma_{\rm in}$ yield the energy inequality \eqref{M3} for all $\Psi \in C^1_c[0, \infty)$, $\Psi \geq 0$.
\end{document}
\begin{document}
\title{Shuffle product formulas of multiple zeta values}
\date{\today\thanks{The first author is supported by the National Natural Science Foundation of
China (Grant No. 11471245) and Shanghai Natural Science Foundation (grant no. 14ZR1443500). The authors are grateful to the referee for his/her useful remarks.} }
\author{Zhonghua Li \quad and \quad Chen Qin}
\address{Department of Mathematics, Tongji University, No. 1239 Siping Road,
Shanghai 200092, China}
\email{zhonghua\[email protected]}
\address{Department of Mathematics, Tongji University, No. 1239 Siping Road,
Shanghai 200092, China}
\email{2014chen\[email protected]}
\keywords{Multiple zeta values, Shuffle product}
\subjclass[2010]{11M32}
\begin{abstract}
Using the combinatorial description of shuffle product, we prove or reformulate several shuffle product formulas of multiple zeta values, including a general formula of the shuffle product of two multiple zeta values, some restricted shuffle product formulas of the product of two multiple zeta values, and a restricted shuffle product formula of the product of $n$ multiple zeta values.
\end{abstract}
\maketitle
\section{Introduction}\label{Sec:Intro}
For positive integers $n,k_1,k_2,\ldots,k_n$ with $k_1>1$, a multiple zeta value is the real number defined by
\begin{align}
\zeta(k_1,k_2,\ldots,k_n)=\sum\limits_{m_1>m_2>\cdots>m_n>0}\frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_n^{k_n}}.
\label{Eq:MZV}
\end{align}
When $n=1$, we get the Riemann zeta values, which are the special values of the Riemann zeta function at positive integer arguments. There are many works on these real numbers. A recent theorem of F. Brown \cite{Brown} states that all periods of mixed Tate motives unramified over $\mathbb{Z}$ are $\mathbb{Q}\left[\frac{1}{2\pi i}\right]$-linear combinations of multiple zeta values, and the multiple zeta values indexed by $2$ and $3$ are linear generators of the $\mathbb{Q}$-vector space spanned by all multiple zeta values. In \cite{Minh2013}, Hoang Ngoc Minh showed that there exists also a family of algebraic generators, made up of the multiple zeta values indexed by irreducible Lyndon compositions.
Besides the infinite series representation \eqref{Eq:MZV}, N. Nielsen \cite{Nielsen04,Nielsen06} first noticed that a multiple zeta value can also be obtained via the iterated integral representation of the multiple polylogarithm, as $z$ tends to $1$ (see also \cite{Zagier}):
\begin{align}
\operatorname{Li}_{k_1,k_2,\ldots,k_n}(z)=\int\limits_{z>t_1>t_2>\cdots>t_k>0}\frac{dt_1}{f_1(t_1)}\frac{dt_2}{f_2(t_2)}\cdots\frac{dt_k}{f_k(t_k)},\quad \text{for\;} |z|<1,
\label{Eq:MZV-shuffle}
\end{align}
where $k=k_1+k_2+\cdots+k_n$ and
$$f_i(t)=\begin{cases}
1-t, & \text{if\;} i=k_1,k_1+k_2,\ldots,k_1+k_2+\cdots+k_{n-1},k,\\
t, & \text{otherwise}.
\end{cases}$$
Note that in \eqref{Eq:MZV-shuffle}, $k_1$ can be $1$. Using the iterated integral representation \eqref{Eq:MZV-shuffle}, we can express a product of two multiple zeta values as a sum of multiple zeta values. For example, we have
\begin{align*}
&\zeta(2)\zeta(2)=\int\limits_{1>t_1>t_2>0}\frac{dt_1}{t_1}\frac{dt_2}{1-t_2}\int\limits_{1>s_1>s_2>0}\frac{ds_1}{s_1}\frac{ds_2}{1-s_2}\\
=&\left(\,\int\limits_{1>t_1>t_2>s_1>s_2>0}+\int\limits_{1>t_1>s_1>t_2>s_2>0}+\int\limits_{1>t_1>s_1>s_2>t_2>0}\right.\\
&\left.+\int\limits_{1>s_1>t_1>t_2>s_2>0}+\int\limits_{1>s_1>t_1>s_2>t_2>0}+\int\limits_{1>s_1>s_2>t_1>t_2>0}\right)\\
&\quad\times \frac{dt_1}{t_1}\frac{dt_2}{1-t_2}\frac{ds_1}{s_1}\frac{ds_2}{1-s_2}\\
=&2\zeta(2,2)+4\zeta(3,1).
\end{align*}
Such products are called shuffle products. The shuffle product used here was first introduced by S. Eilenberg and S. Mac Lane in \cite{Eilenberg}, and the recursive formula and the notation for the product described below are those of M. Fliess \cite{Fliess}.
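The identity $\zeta(2)\zeta(2)=2\zeta(2,2)+4\zeta(3,1)$ computed above can also be checked numerically by truncating the nested series \eqref{Eq:MZV}; the Python sketch below does so (the truncation point $N$ is our arbitrary choice, and the tolerances reflect the $O(1/N)$ truncation error).

```python
import math

def mzv(ks, N=600):
    """Truncation of the defining nested series: sum over N >= m_1 > ... > m_n > 0."""
    # T[m] holds the inner nested sum over indices strictly below m.
    T = [1.0] * (N + 2)
    for k in reversed(ks):
        new = [0.0] * (N + 2)
        for m in range(2, N + 2):
            new[m] = new[m - 1] + T[m - 1] / float(m - 1) ** k
        T = new
    return T[N + 1]

# zeta(3,1) = pi^4/360 (Euler); the truncation error here is far below 1e-3.
print(abs(mzv([3, 1]) - math.pi**4 / 360) < 1e-3)
# zeta(2)^2 = 2 zeta(2,2) + 4 zeta(3,1), up to the O(1/N) truncation error.
print(abs(mzv([2])**2 - (2 * mzv([2, 2]) + 4 * mzv([3, 1]))) < 5e-2)
```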
To treat the shuffle products of multiple zeta values formally, we adopt the following algebraic setting (see \cite{Hoffman,Hoffman-Ohno} for example). Let $A=\{x,y\}$ be an alphabet with two non-commutative letters, and let $A^{\ast}$ be the set of all words on $A$ with the empty word $1_{A^{\ast}}$. We denote by $\mathfrak{h}$ the non-commutative $\mathbb{Q}$-polynomial algebra generated by the set $A$, and by $\mathfrak{h}^1$ and $\mathfrak{h}^0$ the subalgebras
$$\mathfrak{h}^1=\mathbb{Q}1_{A^{\ast}}+\mathfrak{h}y, \quad\mathfrak{h}^0=\mathbb{Q}1_{A^\ast}+x\mathfrak{h}y,$$
respectively. As rational vector spaces, $\mathfrak{h}$ is spanned by $A^{\ast}$, $\mathfrak{h}^1$ is spanned by $1_{A^{\ast}}$ and words ending with $y$ and $\mathfrak{h}^0$ is spanned by $1_{A^{\ast}}$ and words starting from $x$ and ending with $y$.
The shuffle product $\,\mbox{\bf \scyr X}\,$ on $\mathfrak{h}$ is defined by $\mathbb{Q}$-bilinearity and the rules:
\begin{align*}
& 1_{A^{\ast}}\,\mbox{\bf \scyr X}\, w=w\,\mbox{\bf \scyr X}\, 1_{A^{\ast}}=w,\\
& aw_1\,\mbox{\bf \scyr X}\, bw_2=a(w_1\,\mbox{\bf \scyr X}\, bw_2)+b(aw_1\,\mbox{\bf \scyr X}\, w_2),
\end{align*}
for all letters $a,b\in A$ and all words $w,w_1,w_2\in A^\ast$. Under the shuffle product, $\mathfrak{h}$ becomes a commutative $\mathbb{Q}$-algebra, and $\mathfrak{h}^1$ and $\mathfrak{h}^0$ are also subalgebras. As a commutative algebra, the shuffle algebra $\mathfrak{h}$ is free with a pure transcendence basis consisting of all Lyndon words \cite{Reutenauer}.
We define a $\mathbb{Q}$-linear map $\zeta:\mathfrak{h}^0\rightarrow \mathbb{R}$ by $\zeta(1_{A^{\ast}})=1$ and
\begin{align*}
\zeta(x^{k_1-1}yx^{k_2-1}y\cdots x^{k_n-1}y)=\zeta(k_1,\ldots,k_n),
\end{align*}
where $n,k_1,k_2,\ldots,k_n$ are positive integers with $k_1>1$. Then it is easy to see that the map $\zeta: (\mathfrak{h}^0,\,\mbox{\bf \scyr X}\,)\rightarrow \mathbb{R}$ is an algebra homomorphism (see \cite{Hoffman-Ohno} for example). In other words, we have
$$\zeta(w_1\,\mbox{\bf \scyr X}\, w_2)=\zeta(w_1)\zeta(w_2)$$
for any $w_1,w_2\in\mathfrak{h}^0$.
Note that for any $w\in\mathfrak{h}^1$, one can define the multiple polylogarithm $\operatorname{Li}_w(z)$ by $\mathbb{Q}$-linearity, $\operatorname{Li}_{1_{A^{\ast}}}(z)=1$ and
$$\operatorname{Li}_{x^{k_1-1}yx^{k_2-1}y\cdots x^{k_n-1}y}(z)=\operatorname{Li}_{k_1,k_2,\ldots,k_n}(z)$$
for positive integers $n,k_1,k_2,\ldots,k_n$. Then it is known that the map $\operatorname{Li}$ is an algebra isomorphism from $(\mathfrak{h}^1,\,\mbox{\bf \scyr X}\,)$ onto the algebra of multiple polylogarithms \cite{Minh-Petitot}.
Hence to get shuffle product formulas of multiple zeta values (or multiple polylogarithms), one can treat the shuffle products of $\mathfrak{h}^0$ first. For example, since
$$xy\,\mbox{\bf \scyr X}\, xy=2xyxy+4x^2y^2,$$
applying the map $\zeta$, we get the shuffle product formula $\zeta(2)\zeta(2)=2\zeta(2,2)+4\zeta(3,1)$ proved above.
The shuffle product formula of two Riemann zeta values is
\begin{align}
\zeta(k)\zeta(l)=&\sum\limits_{i=1}^k\binom{k+l-i-1}{l-1}\zeta(k+l-i,i)\nonumber\\
&+\sum\limits_{i=1}^l\binom{k+l-i-1}{k-1}\zeta(k+l-i,i),
\label{Eq:Euler-Decom}
\end{align}
where $k,l\geqslant 2$. The formula \eqref{Eq:Euler-Decom} was first found by Euler \cite{Euler}, and is called Euler's decomposition formula. The corresponding Euler's decomposition formula in $\mathfrak{h}^1$ is
\begin{align}
x^{a}y\,\mbox{\bf \scyr X}\, x^by=\sum\limits_{i=0}^a\binom{a+b-i}{b}x^{a+b-i}yx^iy+\sum\limits_{i=0}^b\binom{a+b-i}{a}x^{a+b-i}yx^iy,
\label{Eq:Shuffle-Euler}
\end{align}
where $a$ and $b$ are nonnegative integers. We remark that the shuffle product formula \eqref{Eq:Shuffle-Euler} is equivalent to
\begin{align*}
\operatorname{Li}_k(z)\operatorname{Li}_l(z)=&\sum\limits_{i=1}^k\binom{k+l-i-1}{l-1}\operatorname{Li}_{k+l-i,i}(z)\\
&+\sum\limits_{i=1}^l\binom{k+l-i-1}{k-1}\operatorname{Li}_{k+l-i,i}(z),
\end{align*}
where $k$ and $l$ are positive integers \cite{Minh-Petitot}.
Some generalizations of Euler's decomposition formula were found. In \cite[Theorem 2.1, Theorem 2.2]{Guo-Xie}, L. Guo and B. Xie gave an explicit shuffle product formula in a very general setting, and as applications, shuffle product formulas of $\zeta(k)\zeta(k_1,k_2)$ and $\zeta(k_1,k_2)\zeta(l_1,l_2)$ were given. By an analytic method, M. Eie and C.-S. Wei obtained in \cite{Eie-Wei} a shuffle product formula of the product of two multiple zeta values of the form $\zeta(m,\{1\}^k)$ with one string of $1$'s: $\{1\}^k=\underbrace{1,\ldots,1}_{k\text{\, terms}}$. This result was generalized to the product of $n$ multiple zeta values with one string of $1$'s in \cite[Main Theorem]{Eie-Liaw-Ong}, which also generalized the formula for the product of $n$ Riemann zeta values in \cite[Theorem 1.2]{Chung-Eie-Liaw-Ong}. By an algebraic method, P. Lei, L. Guo and B. Ma also gave a shuffle product formula of two multiple zeta values with one string of $1$'s, and obtained a formula of the product of two multiple zeta values, one of which has two strings of $1$'s, in \cite[Theorem 1.1, Theorem 1.3]{Lei-Guo-Ma}.
The shuffle product has a combinatorial description. By the definition of shuffle product $\,\mbox{\bf \scyr X}\,$ in $\mathfrak{h}$, we easily get
$$a_1\cdots a_n\,\mbox{\bf \scyr X}\, a_{n+1}\cdots a_{n+m}=\sum\limits_{\sigma\in\mathfrak{S}_{n,m}}a_{\sigma(1)}a_{\sigma(2)}\cdots a_{\sigma(n+m)},$$
where $a_1, a_2,\ldots,a_{n+m}\in A$ are letters, and
$$\mathfrak{S}_{n,m}=\left\{\sigma\in\mathfrak{S}_{n+m}\left|\begin{array}{l}
\sigma^{-1}(1)<\sigma^{-1}(2)<\cdots<\sigma^{-1}(n),\\
\sigma^{-1}(n+1)<\sigma^{-1}(n+2)<\cdots<\sigma^{-1}(n+m)
\end{array}\right.\right\}.$$
In other words, the shuffle product of the words $a_1\cdots a_n$ and $a_{n+1}\cdots a_{n+m}$ is the sum of all permutations of $a_1,a_2,\ldots,a_{n+m}$ which simultaneously preserve the relative order of $a_1,a_2,\ldots,a_n$ and the relative order of $a_{n+1},a_{n+2},\ldots,a_{n+m}$. In this paper, we use this simple description to study shuffle products of words in $\mathfrak{h}y$, reformulate the formulas mentioned in the last paragraph, and find some new shuffle product formulas. We remark that in many cases, the method used here is very simple and natural. The idea used here goes back to Hoang Ngoc Minh (see \cite{Minh96}-\cite{Minh-Petitot}).
We outline the contents of this paper. In Section \ref{Sec:Shu-MZV}, we give a general formula of the shuffle product of two words in $\mathfrak{h}y$ and provide some concrete examples. In Section \ref{Sec:Shu-MZV-1}, we give formulas of the shuffle products $x^ay^r\,\mbox{\bf \scyr X}\, x^by^s$, $x^ay^r\,\mbox{\bf \scyr X}\, x^{b_1}y^{s_1}x^{b_2}y^{s_2}$ and $x^{a_1}y^{r_1}x^{a_2}y^{r_2}\,\mbox{\bf \scyr X}\, x^{b_1}y^{s_1}x^{b_2}y^{s_2}$. And we give a shuffle product formula of the products $x^{a_1}y^{r_1}\,\mbox{\bf \scyr X}\,\cdots\,\mbox{\bf \scyr X}\, x^{a_n}y^{r_n}$ in Section \ref{Sec:Shu-MZV-n}. There are two appendices, in which we prove that the formulas found in \cite[Theorem 1.1, Theorem 1.3]{Lei-Guo-Ma} are essentially the same as \eqref{Eq:Shuffle-Res-1-1} and \eqref{Eq:Shuffle-Res-1-2}, respectively.
\section{Shuffle product formulas}\label{Sec:Shu-MZV}
\subsection{The general shuffle product formula}
As the most simple and typical example, we reprove Euler's decomposition formula \eqref{Eq:Shuffle-Euler} here.
Let $a$ and $b$ be two nonnegative integers. We want to compute the shuffle product $x^ay\,\mbox{\bf \scyr X}\, x^by$. We may assume that
$$x^ay\,\mbox{\bf \scyr X}\, x^by=\sum\limits_{\alpha_1+\alpha_2=a+b\atop \alpha_1,\alpha_2\geqslant 0}c_{\alpha_1,\alpha_2}x^{\alpha_1}yx^{\alpha_2}y.$$
For $\alpha_1,\alpha_2\geqslant 0$ with $\alpha_1+\alpha_2=a+b$, we have to determine the coefficients $c_{\alpha_1,\alpha_2}$. The idea is that we shuffle the two $y$'s first, and then consider where the $x$'s in each $x^{\alpha_i}$ come from. To distinguish the two $y$'s, we write the $y$ in $x^ay$ as $y_1$ and the $y$ in $x^by$ as $y_2$. Then there are two cases when we shuffle $y_1$ and $y_2$:
\begin{description}
\item[(i)] $y_1\quad y_2$;
\item[(ii)] $y_2\quad y_1$.
\end{description}
For case (i), the number of $x$'s in $x^{\alpha_1}$ coming from $x^ay$ is $a$, and the number coming from $x^by$ is $\alpha_1-a$. And all $x$'s in $x^{\alpha_2}$ come from $x^by$. Hence the number of possibilities is $\binom{\alpha_1}{a}$. Similarly, for case (ii), the number of possibilities is $\binom{\alpha_1}{b}$. Then we find the coefficient
$$c_{\alpha_1,\alpha_2}=\binom{\alpha_1}{a}+\binom{\alpha_1}{b}.$$
Finally we get the shuffle product formula
\begin{align}
x^ay\,\mbox{\bf \scyr X}\, x^by=\sum\limits_{\alpha_1+\alpha_2=a+b\atop \alpha_1,\alpha_2\geqslant 0}\left[\binom{\alpha_1}{a}+\binom{\alpha_1}{b}\right]x^{\alpha_1}yx^{\alpha_2}y,
\label{Eq:Shuffle-Euler-New}
\end{align}
which is just Euler's decomposition formula \eqref{Eq:Shuffle-Euler}. In the summation of \eqref{Eq:Shuffle-Euler-New}, besides the condition $\alpha_1+\alpha_2=a+b$, the nonnegative integers $\alpha_1$ and $\alpha_2$ must also satisfy $\alpha_1\geqslant \min\{a,b\}$; otherwise both binomial coefficients vanish. Recall that for integers $\alpha$ and $a$, the binomial coefficient $\binom{\alpha}{a}$ is zero if $a<0$ or $\alpha<a$. Therefore, for simplicity, here and below we do not write such conditions explicitly.
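Formula \eqref{Eq:Shuffle-Euler-New} can be verified for small $a,b$ by brute force, enumerating the shuffles via the combinatorial description (choosing which slots of the resulting word receive the letters of the first factor). The Python sketch below is ours, not from the text.

```python
from collections import Counter
from itertools import combinations
from math import comb

def shuffle(w1: str, w2: str) -> Counter:
    """Brute-force shuffle: sum over the choices of slots for the first word."""
    n = len(w1) + len(w2)
    out = Counter()
    for slots in combinations(range(n), len(w1)):
        chosen, it1, it2 = set(slots), iter(w1), iter(w2)
        out[''.join(next(it1) if i in chosen else next(it2) for i in range(n))] += 1
    return out

def euler_rhs(a: int, b: int) -> Counter:
    """Right-hand side of the rederived Euler decomposition formula."""
    out = Counter()
    for a1 in range(a + b + 1):
        coeff = comb(a1, a) + comb(a1, b)   # math.comb(n, k) = 0 when k > n
        if coeff:
            out['x' * a1 + 'y' + 'x' * (a + b - a1) + 'y'] = coeff
    return out

print(all(shuffle('x' * a + 'y', 'x' * b + 'y') == euler_rhs(a, b)
          for a in range(5) for b in range(5)))
```

Note that the convention $\binom{\alpha}{a}=0$ for $\alpha<a$ is matched by `math.comb`, so the coefficient formula needs no case distinctions.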
Applying the same method to the general case, we get a general shuffle product formula, which is stated in the following theorem.
\begin{thm}\label{Thm:Shuffle-General}
Let $r,s$ be two positive integers and let $a_1,\ldots,a_r,b_1,\ldots,b_s$ be nonnegative integers. Then we have
\begin{align}
&x^{a_1}y\cdots x^{a_r}y\,\mbox{\bf \scyr X}\, x^{b_1}y\cdots x^{b_s}y\nonumber\\
=&\sum\limits_{\alpha_1+\cdots+\alpha_{r+s}=\sum\limits_{i=1}^ra_i+\sum\limits_{j=1}^sb_j\atop \alpha_1,\ldots,\alpha_{r+s}\geqslant 0}c_{\alpha_1,\ldots,\alpha_{r+s}}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{r+s}}y,
\label{Eq:Shuffle-General}
\end{align}
where the coefficients
\begin{align}
c_{\alpha_1,\ldots,\alpha_{r+s}}=&\sum\limits_{{l_1+\cdots+l_{p+1}=r\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}\prod\limits_{i=1}^{L_p+s}\binom{\alpha_i}{\beta_i}\prod\limits_{j=L_p+s+2}^{r+s}\delta_{\alpha_j,a_{j-s}}\nonumber\\
&+\sum\limits_{{l_1+\cdots+l_{p}=r\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}\prod\limits_{i=1}^{r+N_{p-1}}\binom{\alpha_i}{\beta_i}\prod\limits_{j=r+N_{p-1}+2}^{r+s}\delta_{\alpha_j,b_{j-r}}\nonumber\\
&+\sum\limits_{{l_1+\cdots+l_{p}=r\atop n_1+\cdots+n_{p+1}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}\prod\limits_{i=1}^{r+N_p}\binom{\alpha_i}{\gamma_i}\prod\limits_{j=r+N_p+2}^{r+s}\delta_{\alpha_j,b_{j-r}}\nonumber\\
&+\sum\limits_{{l_1+\cdots+l_{p}=r\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}\prod\limits_{i=1}^{L_{p-1}+s}\binom{\alpha_i}{\gamma_i}\prod\limits_{j=L_{p-1}+s+2}^{r+s}\delta_{\alpha_j,a_{j-s}}.
\label{Eq:Shuffle-General-Coe}
\end{align}
Here, for the positive integers $l_1,\ldots,l_p$ (and possibly $l_{p+1}$) and $n_1,\ldots,n_p$ (and possibly $n_{p+1}$) appearing in the summations above, we define
\begin{align}
\begin{cases}
\begin{array}{l}
\beta_{L_j+N_j+1}=\sum\limits_{i=1}^{L_j+1}a_i+\sum\limits_{i=1}^{N_j}b_i-\sum\limits_{i=1}^{L_j+N_j}\alpha_i, \\
\beta_{L_j+N_j+t}=a_{L_j+t}, \quad (2\leqslant t\leqslant l_{j+1})
\end{array} & \text{for\;} j=0,1,\ldots,p,\\
\begin{array}{l}
\beta_{L_{j+1}+N_j+1}=\sum\limits_{i=1}^{L_{j+1}}a_i+\sum\limits_{i=1}^{N_j+1}b_i-\sum\limits_{i=1}^{L_{j+1}+N_j}\alpha_i, \\
\beta_{L_{j+1}+N_j+t}=b_{N_j+t}, \quad (2\leqslant t\leqslant n_{j+1})
\end{array} & \text{for\;} j=0,1,\ldots,p-1,
\end{cases}
\label{Eq:Coe-Beta}
\end{align}
and
\begin{align}
\begin{cases}
\begin{array}{l}
\gamma_{L_j+N_j+1}=\sum\limits_{i=1}^{L_j}a_i+\sum\limits_{i=1}^{N_j+1}b_i-\sum\limits_{i=1}^{L_j+N_j}\alpha_i, \\
\gamma_{L_j+N_j+t}=b_{N_j+t}, \quad (2\leqslant t\leqslant n_{j+1})
\end{array} & \text{for\;} j=0,1,\ldots,p,\\
\begin{array}{l}
\gamma_{L_{j}+N_{j+1}+1}=\sum\limits_{i=1}^{L_{j}+1}a_i+\sum\limits_{i=1}^{N_{j+1}}b_i-\sum\limits_{i=1}^{L_{j}+N_{j+1}}\alpha_i, \\
\gamma_{L_{j}+N_{j+1}+t}=a_{L_j+t}, \quad (2\leqslant t\leqslant l_{j+1})
\end{array} & \text{for\;} j=0,1,\ldots,p-1,
\end{cases}
\label{Eq:Coe-Gamma}
\end{align}
with $L_j=l_1+\cdots+l_j$ and $N_j=n_1+\cdots+n_j$ for $j\geqslant 1$, and $L_0=N_0=0$. And $\delta_{ij}$ is Kronecker's delta symbol, defined as
$$\delta_{ij}=\begin{cases}
1, & \text{if\;} i=j,\\
0, & \text{otherwise}.
\end{cases}$$
\end{thm}
\noindent {\bf Proof.\;} We write the $y$'s in $x^{a_1}y\cdots x^{a_r}y$ as $y_1$, and the $y$'s in $x^{b_1}y\cdots x^{b_s}y$ as $y_2$. Then there are four cases when we shuffle the $y_1$'s and $y_2$'s:
\begin{description}
\item[(i)] $\underbrace{y_1 \cdots y_1}_{l_1} \underbrace{y_2\cdots y_2}_{n_1}\cdots \underbrace{y_1\cdots y_1}_{l_p}\underbrace{y_2\cdots y_2}_{n_p}\underbrace{y_1\cdots y_1}_{l_{p+1}}$,\\
where $l_1+\cdots+l_{p+1}=r$, $n_1+\cdots+n_p=s$ with $p,l_i,n_j\geqslant 1$;
\item[(ii)] $\underbrace{y_1\cdots y_1}_{l_1} \underbrace{y_2\cdots y_2}_{n_1}\cdots \underbrace{y_1\cdots y_1}_{l_p} \underbrace{y_2\cdots y_2}_{n_p}$,\\
where $l_1+\cdots+l_p=r$, $n_1+\cdots+n_p=s$ with $p,l_i,n_j\geqslant 1$;
\item[(iii)] $\underbrace{y_2\cdots y_2}_{n_1} \underbrace{y_1\cdots y_1}_{l_1} \cdots \underbrace{y_2\cdots y_2}_{n_p} \underbrace{y_1\cdots y_1}_{l_p}\underbrace{y_2\cdots y_2}_{n_{p+1}}$,\\
where $l_1+\cdots+l_p=r$, $n_1+\cdots+n_{p+1}=s$ with $p,l_i,n_j\geqslant 1$;
\item[(iv)] $\underbrace{y_2\cdots y_2}_{n_1} \underbrace{y_1\cdots y_1}_{l_1} \cdots \underbrace{y_2\cdots y_2}_{n_p} \underbrace{y_1\cdots y_1}_{l_p}$,\\
where $l_1+\cdots+l_p=r$, $n_1+\cdots+n_{p}=s$ with $p,l_i,n_j\geqslant 1$.
\end{description}
We compute the contribution of each case to the coefficient $c_{\alpha_1,\ldots,\alpha_{r+s}}$. Then $c_{\alpha_1,\ldots,\alpha_{r+s}}$ is just the sum of all these contributions. For case (i), we define
$$\beta_{L_j+N_j+t}=\text{the number of\;} x\text{'s coming from\;} x^{a_{L_j+t}}y \text{\;in\;} x^{\alpha_{L_j+N_j+t}}$$
for $0\leqslant j\leqslant p$ and $1\leqslant t\leqslant l_{j+1}$, and
$$\beta_{L_{j+1}+N_j+t}=\text{the number of\;} x\text{'s coming from\;} x^{b_{N_j+t}}y \text{\;in\;} x^{\alpha_{L_{j+1}+N_j+t}}$$
for $0\leqslant j\leqslant p-1$ and $1\leqslant t\leqslant n_{j+1}$. Note that the $x$'s in $x^{\alpha_{L_j+N_j+t}}$ are either from $x^{a_{L_j+t}}y$ or from $x^{b_{N_{j}+1}}y$, and these $x$'s can be in any order. Similarly, the $x$'s in $x^{\alpha_{L_{j+1}+N_j+t}}$ are either from $x^{b_{N_j+t}}y$ or from $x^{a_{L_{j+1}+1}}y$, and these $x$'s can also be in any order. Hence the contribution of case (i) to the coefficient $c_{\alpha_1,\ldots,\alpha_{r+s}}$ is
$$\sum\limits_{{l_1+\cdots+l_{p+1}=r\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}\prod\limits_{i=1}^{r+s}\binom{\alpha_i}{\beta_i}.$$
We prove that the $\beta_i$'s satisfy the formulas given in \eqref{Eq:Coe-Beta} by induction on $j$.
When $j=0$, it is obvious that
\begin{align*}
&\beta_1=a_1,\quad \beta_2=a_2,\quad \ldots, \quad \beta_{L_1}=a_{L_1},\\
&\beta_{L_1+1}=b_1-\sum\limits_{i=1}^{L_1}(\alpha_i-\beta_i)=\sum\limits_{i=1}^{L_1}a_i+b_1-\sum\limits_{i=1}^{L_1}\alpha_i,\\
&\beta_{L_1+2}=b_2,\quad \ldots,\quad \beta_{L_1+n_1}=b_{n_1}.
\end{align*}
Now assume that $j>0$ and that the formulas hold for $j-1$. Then we have
\begin{align*}
&\beta_{L_j+N_j+1}=a_{L_j+1}-\sum\limits_{i=L_j+N_{j-1}+1}^{L_j+N_j}(\alpha_i-\beta_i)\\
=&a_{L_j+1}+\sum\limits_{i=1}^{L_j}a_i+\sum\limits_{i=1}^{N_{j-1}+1}b_i-\sum\limits_{i=1}^{L_j+N_{j-1}}\alpha_i+\sum\limits_{t=2}^{n_j}b_{N_{j-1}+t}
-\sum\limits_{i=L_j+N_{j-1}+1}^{L_j+N_j}\alpha_i\\
=&\sum\limits_{i=1}^{L_j+1}a_i+\sum\limits_{i=1}^{N_{j}}b_i-\sum\limits_{i=1}^{L_j+N_{j}}\alpha_i.
\end{align*}
For $2\leqslant t\leqslant l_{j+1}$, it is obvious that $\beta_{L_j+N_j+t}=a_{L_j+t}$. Next we have
\begin{align*}
&\beta_{L_{j+1}+N_j+1}=b_{N_j+1}-\sum\limits_{i=L_j+N_j+1}^{L_{j+1}+N_j}(\alpha_i-\beta_i)\\
=&b_{N_j+1}+\sum\limits_{i=1}^{L_j+1}a_i+\sum\limits_{i=1}^{N_{j}}b_i-\sum\limits_{i=1}^{L_j+N_{j}}\alpha_i+\sum\limits_{t=2}^{l_{j+1}}a_{L_j+t}
-\sum\limits_{i=L_j+N_j+1}^{L_{j+1}+N_j}\alpha_i\\
=&\sum\limits_{i=1}^{L_{j+1}}a_i+\sum\limits_{i=1}^{N_j+1}b_i-\sum\limits_{i=1}^{L_{j+1}+N_j}\alpha_i.
\end{align*}
Finally, it is obvious that $\beta_{L_{j+1}+N_j+t}=b_{N_j+t}$ for $2\leqslant t\leqslant n_{j+1}$. Thus we have proved that the $\beta_i$'s defined above are given by \eqref{Eq:Coe-Beta}.
It is easy to see that for $L_p+N_p+2\leqslant j\leqslant r+s$ we must have $\alpha_j=a_{j-N_p}$. We also know that $\alpha_{L_p+N_p+1}=\beta_{L_p+N_p+1}$. Hence the contribution of case (i) to the coefficient $c_{\alpha_1,\ldots,\alpha_{r+s}}$ is
$$\sum\limits_{{l_1+\cdots+l_{p+1}=r\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}\prod\limits_{i=1}^{L_p+s}\binom{\alpha_i}{\beta_i}\prod\limits_{j=L_p+s+2}^{r+s}\delta_{\alpha_j,a_{j-s}}$$
with $\beta_i$'s given by \eqref{Eq:Coe-Beta}.
Similarly to case (i), we find that the contribution of case (ii) to the coefficient $c_{\alpha_1,\ldots,\alpha_{r+s}}$ is
$$\sum\limits_{{l_1+\cdots+l_{p}=r\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}\prod\limits_{i=1}^{r+N_{p-1}}\binom{\alpha_i}{\beta_i}\prod\limits_{j=r+N_{p-1}+2}^{r+s}\delta_{\alpha_j,b_{j-r}},$$
where the $\beta_i$'s are again given by \eqref{Eq:Coe-Beta}. Finally, by symmetry, we get the contribution of case (iii) from case (i), and the contribution of case (iv) from case (ii). Thus the theorem is proved.
\qed
Applying the algebra homomorphism $\zeta:(\mathfrak{h}^0,\,\mbox{\bf \scyr X}\,)\rightarrow \mathbb{R}$, we get the shuffle product formulas of multiple zeta values.
\begin{cor}
Let $r,s$ be two positive integers and let $a_1,\ldots,a_r,b_1,\ldots,b_s$ be nonnegative integers with $a_1,b_1\geqslant 1$. Then we have
\begin{align}
&\zeta(a_1+1,\ldots, a_r+1)\zeta(b_1+1,\ldots,b_s+1)\nonumber\\
=&\sum\limits_{\alpha_1+\cdots+\alpha_{r+s}=\sum\limits_{i=1}^ra_i+\sum\limits_{j=1}^sb_j\atop \alpha_1\geqslant 1,\alpha_2,\ldots,\alpha_{r+s}\geqslant 0}c_{\alpha_1,\ldots,\alpha_{r+s}}\zeta(\alpha_1+1,\alpha_2+1,\ldots, \alpha_{r+s}+1),
\label{Eq:MZV-Shuffle-General}
\end{align}
where the coefficients $c_{\alpha_1,\ldots,\alpha_{r+s}}$ are given by \eqref{Eq:Shuffle-General-Coe}.
\end{cor}
\begin{rem}
The shuffle product formula \eqref{Eq:MZV-Shuffle-General} is essentially the same as that of \cite[Corollary 2.5]{Guo-Xie}. The proof we supply here seems much simpler. In fact, one can prove \cite[Equation (22) in Theorem 2.1]{Guo-Xie} by the method provided in this paper.
\end{rem}
\begin{rem}
Applying the isomorphism $\operatorname{Li}$ to the shuffle product formula \eqref{Eq:MZV-Shuffle-General} in Theorem \ref{Thm:Shuffle-General}, one can obtain an equivalent formula which holds for multiple polylogarithms.
\end{rem}
\begin{rem}
The quantities $c_{\alpha_1,\ldots,\alpha_{r+s}}$ appearing in \eqref{Eq:MZV-Shuffle-General} are the structure constants of the shuffle algebra $\mathfrak{h}$, obtained when one identifies the coefficients in the rational expressions \cite{Berstel} $(x+y)^{\ast}\,\mbox{\bf \scyr X}\,(x+y)^\ast=(2x+2y)^{\ast}$.
\end{rem}
\begin{rem}
There is another commutative product among multiple zeta values, which is called stuffle product \cite{Hoffman}. If one can get the formula for the stuffle product
$x^{a_1}y\cdots x^{a_r}y\ast x^{b_1}y\cdots x^{b_s}y$ (we consider this product in \cite{Li-Qin}), then comparing with \eqref{Eq:Shuffle-General}, one can get some double shuffle relations of multiple zeta values.
However, it seems that the double shuffle relations obtained in this way are not terse and elegant. For example, the formula deduced from the difference $x^ay\,\mbox{\bf \scyr X}\, x^{b_1}y\cdots x^{b_s}y-x^ay\ast x^{b_1}y\cdots x^{b_s}y$ is not so terse (see Proposition \ref{Prop:Shu-1-s} below). On the other hand, if one considers the difference of the sums
$$\sum\limits_{a+b_1+\cdots+b_s=k}x^ay\,\mbox{\bf \scyr X}\, x^{b_1}y\cdots x^{b_s}y,\quad \sum\limits_{a+b_1+\cdots+b_s=k}x^ay\ast x^{b_1}y\cdots x^{b_s}y,$$
one obtains an elegant formula, called the weighted sum formula (see \cite[Theorems 2.2-2.6]{Guo-Xie-09}).
\end{rem}
\subsection{Some concrete examples}
We give some applications of Theorem \ref{Thm:Shuffle-General}. First, we consider the case $r=1$ and get the following result.
\begin{prop}\label{Prop:Shu-1-s}
Let $s$ be a positive integer and let $a,b_1,\ldots,b_s$ be nonnegative integers. Then we have
\begin{align}
&x^ay\,\mbox{\bf \scyr X}\, x^{b_1}y\cdots x^{b_s}y=\sum\limits_{\alpha_1+\cdots+\alpha_{s+1}=a+b_1+\cdots+b_s\atop \alpha_1,\ldots,\alpha_{s+1}\geqslant 0}\left\{\binom{\alpha_1}{a}\prod\limits_{j=3}^{s+1}\delta_{\alpha_j,b_{j-1}}+\prod\limits_{i=1}^s\binom{\alpha_i}{b_i}\right.\nonumber\\
&\quad +\left.\sum\limits_{k=1}^{s-1}\prod\limits_{i=1}^{k}\binom{\alpha_i}{b_i}\binom{\alpha_{k+1}}{b_{k+1}-\alpha_{k+2}}\prod\limits_{j=k+3}^{s+1}
\delta_{\alpha_j,b_{j-1}}\right\}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{s+1}}y.
\label{Eq:Shuffle-1-s}
\end{align}
If further $a,b_1\geqslant 1$, then we have
\begin{align}
&\zeta(a+1)\zeta(b_1+1,\ldots, b_s+1)=\sum\limits_{\alpha_1+\cdots+\alpha_{s+1}=a+b_1+\cdots+b_s\atop \alpha_1\geqslant 1,\alpha_2,\ldots,\alpha_{s+1}\geqslant 0}\left\{\binom{\alpha_1}{a}\prod\limits_{j=3}^{s+1}\delta_{\alpha_j,b_{j-1}}\right.\nonumber\\
&\quad \left. +\prod\limits_{i=1}^s\binom{\alpha_i}{b_i}+\sum\limits_{k=1}^{s-1}\prod\limits_{i=1}^{k}\binom{\alpha_i}{b_i}\binom{\alpha_{k+1}}{b_{k+1}-\alpha_{k+2}}\prod\limits_{j=k+3}^{s+1}
\delta_{\alpha_j,b_{j-1}}\right\}\nonumber\\
&\qquad \times \zeta(\alpha_1+1,\alpha_2+1,\ldots, \alpha_{s+1}+1).
\label{Eq:MZV-Shuffle-1-s}
\end{align}
\end{prop}
\noindent {\bf Proof.\;} Applying the map $\zeta$, we get \eqref{Eq:MZV-Shuffle-1-s} from \eqref{Eq:Shuffle-1-s}. Hence we only need to prove \eqref{Eq:Shuffle-1-s}. By Theorem \ref{Thm:Shuffle-General}, we have
$$x^ay\,\mbox{\bf \scyr X}\, x^{b_1}y\cdots x^{b_s}y=\sum\limits_{\alpha_1+\cdots+\alpha_{s+1}=a+b_1+\cdots+b_s\atop \alpha_1,\ldots,\alpha_{s+1}\geqslant 0}c_{\alpha_1,\ldots,\alpha_{s+1}}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{s+1}}y,$$
where
\begin{align*}
c_{\alpha_1,\ldots,\alpha_{s+1}}=&\sum\limits_{l_1=r,n_1=s}\binom{\alpha_1}{\beta_1}\prod\limits_{j=3}^{s+1}\delta_{\alpha_j,b_{j-1}}
+\sum\limits_{l_1=r,n_1+n_2=s\atop n_1,n_2\geqslant 1}\prod\limits_{i=1}^{n_1+1}\binom{\alpha_i}{\gamma_i}\prod\limits_{j=n_1+3}^{s+1}\delta_{\alpha_j,b_{j-1}}\\
&+\sum\limits_{l_1=r,n_1=s}\prod\limits_{i=1}^s\binom{\alpha_i}{\gamma_i}.
\end{align*}
Now for $l_1=r,n_1=s$, we have $\beta_1=a$ and
$$\gamma_i=b_i,\quad (i=1,\ldots,n_1).$$
And for $l_1=r, n_1+n_2=s$, we have
$$\gamma_i=b_i,\quad (i=1,\ldots,n_1),$$
and
$$\gamma_{n_1+1}=a+\sum\limits_{i=1}^{n_1}b_i-\sum\limits_{i=1}^{n_1}\alpha_i.$$
Then we get the formula \eqref{Eq:Shuffle-1-s}.
\qed
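Formula \eqref{Eq:Shuffle-1-s} can also be tested against a brute-force computation of the shuffle product. The Python sketch below (all helper names are ours) does this for one instance with $s=3$; the binomial-coefficient convention of the text is encoded in \texttt{binom}.

```python
from itertools import combinations
from math import comb

def binom(n, k):
    # Binomial coefficient with the text's convention: zero unless 0 <= k <= n.
    return comb(n, k) if 0 <= k <= n else 0

def shuffle(u, v):
    # Brute-force shuffle product of two words as a dict: word -> coefficient.
    result = {}
    n, m = len(u), len(v)
    for pos in combinations(range(n + m), n):
        posset, it_u, it_v = set(pos), iter(u), iter(v)
        w = ''.join(next(it_u) if k in posset else next(it_v)
                    for k in range(n + m))
        result[w] = result.get(w, 0) + 1
    return result

def coeff_1_s(a, b, alpha):
    # Coefficient of x^{alpha_1} y ... x^{alpha_{s+1}} y predicted by the
    # proposition; b = (b_1, ..., b_s), alpha = (alpha_1, ..., alpha_{s+1}),
    # both 0-indexed in the code.
    s = len(b)
    c = binom(alpha[0], a) * all(alpha[j] == b[j - 1] for j in range(2, s + 1))
    t = 1
    for i in range(s):
        t *= binom(alpha[i], b[i])
    c += t
    for k in range(1, s):
        if all(alpha[j] == b[j - 1] for j in range(k + 2, s + 1)):
            t = binom(alpha[k], b[k] - alpha[k + 1])
            for i in range(k):
                t *= binom(alpha[i], b[i])
            c += t
    return c

def compositions(total, parts):
    # All ordered decompositions of `total` into `parts` nonnegative integers.
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

a, b = 2, (1, 0, 1)
prod = shuffle('x' * a + 'y', ''.join('x' * bi + 'y' for bi in b))
for alpha in compositions(a + sum(b), len(b) + 1):
    word = ''.join('x' * ai + 'y' for ai in alpha)
    assert prod.get(word, 0) == coeff_1_s(a, b, alpha)
```

The enumeration over compositions covers every word occurring in the product, since each shuffle of the two inputs has exactly $s+1$ letters $y$ and ends in $y$.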
Setting $s=1$ and $b_1=b$ in \eqref{Eq:Shuffle-1-s}, we recover Euler's decomposition formula \eqref{Eq:Shuffle-Euler-New}. Setting $s=2$, we have
\begin{align}
x^ay\,\mbox{\bf \scyr X}\, x^{b_1}yx^{b_2}y=\sum\limits_{\alpha_1+\alpha_2+\alpha_3=a+b_1+b_2\atop \alpha_1,\alpha_2,\alpha_3\geqslant 0}\left\{\binom{\alpha_1}{a}\delta_{\alpha_3,b_2}+\binom{\alpha_1}{b_1}\binom{\alpha_2}{b_2}\right.\nonumber\\
\left.+\binom{\alpha_1}{b_1}\binom{\alpha_2}{b_2-\alpha_3}\right\}x^{\alpha_1}yx^{\alpha_2}yx^{\alpha_3}y.
\label{Eq:Shuffle-1-2}
\end{align}
Under the conditions $a,b_1\geqslant 1$, after applying the map $\zeta$ to \eqref{Eq:Shuffle-1-2}, we get \cite[Equation (3)]{Guo-Xie}.
Similarly, setting $s=3$, we get
\begin{align}
&x^ay\,\mbox{\bf \scyr X}\, x^{b_1}yx^{b_2}yx^{b_3}y=\sum\limits_{{\alpha_1+\alpha_2+\alpha_3+\alpha_4\atop =a+b_1+b_2+b_3}\atop \alpha_1,\ldots,\alpha_4\geqslant 0}\left\{\binom{\alpha_1}{a}\delta_{\alpha_3,b_2}\delta_{\alpha_4,b_3}
+\binom{\alpha_1}{b_1}\binom{\alpha_2}{b_2-\alpha_3}\delta_{\alpha_4,b_3}\right.\nonumber\\
&\qquad \left.+\binom{\alpha_1}{b_1}\binom{\alpha_2}{b_2}
\left[\binom{\alpha_3}{b_3}+\binom{\alpha_3}{b_3-\alpha_4}\right]\right\}x^{\alpha_1}yx^{\alpha_2}yx^{\alpha_3}yx^{\alpha_4}y.
\label{Eq:Shuffle-1-3}
\end{align}
Next we compute the formula for the case $r=s=2$. By Theorem \ref{Thm:Shuffle-General}, we have
$$x^{a_1}yx^{a_2}y\,\mbox{\bf \scyr X}\, x^{b_1}yx^{b_2}y=\sum\limits_{{\alpha_1+\alpha_2+\alpha_3+\alpha_4\atop=a_1+a_2+b_1+b_2}\atop \alpha_1,\ldots,\alpha_4\geqslant 0}(c_1+c_2+c_1'+c_2') x^{\alpha_1}yx^{\alpha_2}yx^{\alpha_3}yx^{\alpha_4}y,$$
where
\begin{align*}
c_1=&\sum\limits_{{l_1=l_{2}=1,n_1=2}}\prod\limits_{i=1}^{3}\binom{\alpha_i}{\beta_i}=\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\binom{\alpha_3}{b_2},\\
c_2=&\sum\limits_{{l_1=2,n_1=2}}\prod\limits_{i=1}^{2}\binom{\alpha_i}{\beta_i}\delta_{\alpha_4,b_{2}}+\sum\limits_{{l_1=l_{2}=1,n_1=n_{2}=1}}
\prod\limits_{i=1}^{3}\binom{\alpha_i}{\beta_i}\\
=&\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_2}\delta_{\alpha_4,b_2}+\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\binom{\alpha_3}{a_1+a_2+b_1-\alpha_1-\alpha_2}\\
=&\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_2}\delta_{\alpha_4,b_2}+\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\binom{\alpha_3}{b_2-\alpha_4},
\end{align*}
and $c_i'$ is obtained from $c_i$ by interchanging $a_1$ with $b_1$ and $a_2$ with $b_2$ simultaneously for $i=1,2$. Then we get the shuffle product formula
\begin{align}
&x^{a_1}yx^{a_2}y\,\mbox{\bf \scyr X}\, x^{b_1}yx^{b_2}y=\sum\limits_{{\alpha_1+\alpha_2+\alpha_3+\alpha_4\atop=a_1+a_2+b_1+b_2}\atop \alpha_1,\ldots,\alpha_4\geqslant 0}\left\{\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_2}\delta_{\alpha_4,b_2}+\binom{\alpha_1}{b_1}\binom{\alpha_2}{b_2}\delta_{\alpha_4,a_2}\right.\nonumber\\
&\quad +\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\left[\binom{\alpha_3}{b_2}+\binom{\alpha_3}{b_2-\alpha_4}\right]\nonumber\\
&\quad\left.+\binom{\alpha_1}{b_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\left[\binom{\alpha_3}{a_2}+\binom{\alpha_3}{a_2-\alpha_4}\right]\right\}x^{\alpha_1}yx^{\alpha_2}yx^{\alpha_3}yx^{\alpha_4}y.
\label{Eq:Shuffle-2-2}
\end{align}
Under the conditions $a_1,b_1\geqslant 1$, after applying the map $\zeta$ to \eqref{Eq:Shuffle-2-2}, we get \cite[Equation (4)]{Guo-Xie}.
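The same brute-force comparison applies to \eqref{Eq:Shuffle-2-2}. The Python sketch below (helper names are ours) verifies it for one choice of exponents.

```python
from itertools import combinations
from math import comb

def binom(n, k):
    # Binomial coefficient with the text's convention: zero unless 0 <= k <= n.
    return comb(n, k) if 0 <= k <= n else 0

def shuffle(u, v):
    # Brute-force shuffle product of two words as a dict: word -> coefficient.
    result = {}
    n, m = len(u), len(v)
    for pos in combinations(range(n + m), n):
        posset, it_u, it_v = set(pos), iter(u), iter(v)
        w = ''.join(next(it_u) if k in posset else next(it_v)
                    for k in range(n + m))
        result[w] = result.get(w, 0) + 1
    return result

def coeff_2_2(a1, a2, b1, b2, alpha):
    # Coefficient of x^{alpha_1} y x^{alpha_2} y x^{alpha_3} y x^{alpha_4} y
    # in the r = s = 2 shuffle product formula.
    x1, x2, x3, x4 = alpha
    return (binom(x1, a1) * binom(x2, a2) * (x4 == b2)
            + binom(x1, b1) * binom(x2, b2) * (x4 == a2)
            + binom(x1, a1) * binom(x2, a1 + b1 - x1)
              * (binom(x3, b2) + binom(x3, b2 - x4))
            + binom(x1, b1) * binom(x2, a1 + b1 - x1)
              * (binom(x3, a2) + binom(x3, a2 - x4)))

a1, a2, b1, b2 = 1, 1, 1, 0
total = a1 + a2 + b1 + b2
prod = shuffle('x' * a1 + 'y' + 'x' * a2 + 'y',
               'x' * b1 + 'y' + 'x' * b2 + 'y')
for x1 in range(total + 1):
    for x2 in range(total - x1 + 1):
        for x3 in range(total - x1 - x2 + 1):
            alpha = (x1, x2, x3, total - x1 - x2 - x3)
            word = ''.join('x' * ai + 'y' for ai in alpha)
            assert prod.get(word, 0) == coeff_2_2(a1, a2, b1, b2, alpha)
```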
Finally, it is not difficult, though it requires patience, to find the formulas for $r=2,s=3$ and $r=s=3$. The shuffle product formulas are
\begin{align}
&x^{a_1}yx^{a_2}y\,\mbox{\bf \scyr X}\, x^{b_1}yx^{b_2}yx^{b_3}y=\sum\limits_{{\alpha_1+\cdots+\alpha_5\atop =a_1+a_2+b_1+b_2+b_3}\atop \alpha_1,\ldots,\alpha_5\geqslant 0}\left\{\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_2}\delta_{\alpha_4,b_2}\delta_{\alpha_5,b_3}\right.\nonumber\\
&\quad+\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\binom{\alpha_3}{b_2-\alpha_4}\delta_{\alpha_5,b_3}\nonumber\\
&\quad+\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\binom{\alpha_3}{b_2}\left[\binom{\alpha_4}{b_3}+\binom{\alpha_4}{b_3-\alpha_5}\right]\nonumber\\
&\quad+\binom{\alpha_1}{b_1}\binom{\alpha_2}{b_2}\binom{\alpha_3}{b_3}\delta_{\alpha_5,a_2}
+\binom{\alpha_1}{b_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\binom{\alpha_3}{a_2}\delta_{\alpha_5,b_3}\nonumber\\
&\quad+\binom{\alpha_1}{b_1}\binom{\alpha_2}{b_2}\binom{\alpha_3}{a_2+b_3-\alpha_4-\alpha_5}\left[\binom{\alpha_4}{a_2}+\binom{\alpha_4}{a_2-\alpha_5}\right]
\nonumber\\
&\quad\left.+\binom{\alpha_1}{b_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\binom{\alpha_3}{a_2+b_3-\alpha_4-\alpha_5}\left[\binom{\alpha_4}{b_3}
+\binom{\alpha_4}{b_3-\alpha_5}\right]\right\}\nonumber\\
&\qquad \times x^{\alpha_1}yx^{\alpha_2}yx^{\alpha_3}yx^{\alpha_4}yx^{\alpha_5}y,
\label{Eq:Shuffle-2-3}
\end{align}
and
\begin{align}
&x^{a_1}yx^{a_2}yx^{a_3}y\,\mbox{\bf \scyr X}\, x^{b_1}yx^{b_2}yx^{b_3}y=\sum\limits_{{\alpha_1+\cdots+\alpha_6\atop =a_1+a_2+a_3+b_1+b_2+b_3}\atop \alpha_1,\ldots,\alpha_6\geqslant 0}[c(a_1,a_2,a_3;b_1,b_2,b_3)\nonumber\\
&\qquad \qquad +c(b_1,b_2,b_3;a_1,a_2,a_3)]x^{\alpha_1}yx^{\alpha_2}yx^{\alpha_3}yx^{\alpha_4}yx^{\alpha_5}yx^{\alpha_6}y,
\label{Eq:Shuffle-3-3}
\end{align}
with
\begin{align*}
&c(a_1,a_2,a_3;b_1,b_2,b_3)\\
=&\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_2}\binom{\alpha_3}{a_3}\delta_{\alpha_5,b_2}\delta_{\alpha_6,b_3}+\binom{\alpha_1}{a_1}
\binom{\alpha_2}{a_1+b_1-\alpha_1}\binom{\alpha_3}{b_2}\binom{\alpha_4}{b_3}\delta_{\alpha_6,a_3}\\
&+\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\binom{\alpha_3}{a_1+a_2+b_1-\alpha_1-\alpha_2}\binom{\alpha_4}{a_3}\delta_{\alpha_6,b_3}\\
&+\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_2}\binom{\alpha_3}{a_1+a_2+b_1-\alpha_1-\alpha_2}\binom{\alpha_4}{b_2-\alpha_5}\delta_{\alpha_6,b_3}\\
&+\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_2}\binom{\alpha_3}{a_1+a_2+b_1-\alpha_1-\alpha_2}\binom{\alpha_4}{b_2}
\left[\binom{\alpha_5}{b_3}+\binom{\alpha_5}{b_3-\alpha_6}\right]\\
&+\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\binom{\alpha_3}{b_2}\binom{\alpha_4}{a_3+b_3-\alpha_5-\alpha_6}
\left[\binom{\alpha_5}{a_3}+\binom{\alpha_5}{a_3-\alpha_6}\right]\\
&+\binom{\alpha_1}{a_1}\binom{\alpha_2}{a_1+b_1-\alpha_1}\binom{\alpha_3}{a_1+a_2+b_1-\alpha_1-\alpha_2}\binom{\alpha_4}{a_3+b_3-\alpha_5-\alpha_6}\\
&\quad \times \left[\binom{\alpha_5}{b_3}+\binom{\alpha_5}{b_3-\alpha_6}\right].
\end{align*}
We omit the proofs of \eqref{Eq:Shuffle-2-3} and \eqref{Eq:Shuffle-3-3}. Of course, applying the map $\zeta$, we get shuffle product formulas of multiple zeta values from \eqref{Eq:Shuffle-2-3} and \eqref{Eq:Shuffle-3-3}.
\section{Restricted shuffle product formulas}\label{Sec:Shu-MZV-1}
In this section, we study shuffle product formulas of multiple zeta values with strings of $1$'s. Such formulas are called restricted shuffle product formulas.
\subsection{The formula of $x^ay^r\,\mbox{\bf \scyr X}\, x^by^s$}
In this subsection, we consider the shuffle product of multiple zeta values of the form $\zeta(m,\{1\}^n)$, which has been studied in \cite{Eie-Wei,Lei-Guo-Ma}.
Equivalently, we have to compute $x^ay^r\,\mbox{\bf \scyr X}\, x^by^s$ for any nonnegative integers $a,b$ and any positive integers $r,s$. For that purpose, we can use Theorem \ref{Thm:Shuffle-General}. We have
$$x^ay^r\,\mbox{\bf \scyr X}\, x^by^s=\sum\limits_{\alpha_1+\cdots+\alpha_{r+s}=a+b\atop \alpha_1,\ldots,\alpha_{r+s}\geqslant 0}[c(a,r;b,s)+c(b,s;a,r)]x^{\alpha_1}y\cdots x^{\alpha_{r+s}}y,$$
where $c(a,r;b,s)=\Sigma_1+\Sigma_2$ with
\begin{align*}
\Sigma_1=&\sum\limits_{{l_1+\cdots+l_{p+1}=r\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}\prod\limits_{i=1}^{L_p+s}\binom{\alpha_i}{\beta_i}\prod\limits_{j=L_p+s+2}^{r+s}\delta_{\alpha_j,a_{j-s}},\\
\Sigma_2=&\sum\limits_{{l_1+\cdots+l_{p}=r\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}\prod\limits_{i=1}^{r+N_{p-1}}\binom{\alpha_i}{\beta_i}\prod\limits_{j=r+N_{p-1}+2}^{r+s}\delta_{\alpha_j,b_{j-r}}.
\end{align*}
For $\Sigma_1$, we have
$$\begin{cases}
\beta_1=a, & \\
\beta_{L_j+N_j+1}=a+b-\sum\limits_{i=1}^{L_j+N_j}\alpha_i, & (j=1,\ldots,p),\\
\beta_{L_j+N_j+t}=0, & (2\leqslant t\leqslant l_{j+1},j=0,1,\ldots,p),\\
\beta_{L_{j+1}+N_j+1}=a+b-\sum\limits_{i=1}^{L_{j+1}+N_j}\alpha_i, & (j=0,1,\ldots,p-1),\\
\beta_{L_{j+1}+N_j+t}=0, & (2\leqslant t\leqslant n_{j+1},j=0,1,\ldots,p-1).
\end{cases}$$
Since $\alpha_{l_1+1}\geqslant \beta_{l_1+1}$, we get $\sum\limits_{i=1}^{l_1+1}\alpha_i\geqslant a+b$. Since also $a+b=\sum\limits_{i=1}^{r+s}\alpha_i$, we must have $\alpha_{l_1+2}=\cdots=\alpha_{r+s}=0$ and $\sum\limits_{i=1}^{l_1+1}\alpha_i=a+b$. Then we get
\begin{align*}
\Sigma_1=&\sum\limits_{{l_1+\cdots+l_{p+1}=r\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}\binom{\alpha_1}{a}\prod\limits_{j=l_1+2}^{r+s}\delta_{\alpha_j,0}\\
=&\sum\limits_{l=1}^{r-1}\left(\sum\limits_{{l_2+\cdots+l_{p+1}=r-l\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}1\right)\binom{\alpha_1}{a}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}\\
=&\sum\limits_{l=1}^{r-1}\left(\sum\limits_{p\geqslant 1}\binom{r-l-1}{p-1}\binom{s-1}{p-1}\right)\binom{\alpha_1}{a}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}.
\end{align*}
Using the well-known combinatorial identity
\begin{align}
\sum\limits_{i=0}^n\binom{k}{i}\binom{l}{n-i}=\binom{k+l}{n},
\label{Eq:Com-Id}
\end{align}
we get
$$\Sigma_1=\sum\limits_{l=1}^{r-1}\binom{r+s-l-2}{r-l-1}\binom{\alpha_1}{a}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}.$$
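The combinatorial identity \eqref{Eq:Com-Id} is Vandermonde's identity. A quick machine check over a small range:

```python
from math import comb

# Vandermonde's identity: sum_i C(k, i) * C(l, n - i) = C(k + l, n),
# checked here for all k, l < 7 and 0 <= n <= k + l.
for k in range(7):
    for l in range(7):
        for n in range(k + l + 1):
            assert sum(comb(k, i) * comb(l, n - i)
                       for i in range(n + 1)) == comb(k + l, n)
```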
Similarly, we have
\begin{align*}
\Sigma_2=&\sum\limits_{{l_1+\cdots+l_{p}=r\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 1,l_i\geqslant 1,n_j\geqslant 1}\binom{\alpha_1}{a}\prod\limits_{j=l_1+2}^{r+s}\delta_{\alpha_j,0}\\
=&\sum\limits_{l=1}^{r-1}\left(\sum\limits_{{l_2+\cdots+l_{p}=r-l\atop n_1+\cdots+n_{p}=s}\atop p\geqslant 2,l_i\geqslant 1,n_j\geqslant 1}1\right)\binom{\alpha_1}{a}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}+\binom{\alpha_1}{a}\prod\limits_{j=r+2}^{r+s}\delta_{\alpha_j,0}\\
=&\sum\limits_{l=1}^{r-1}\left(\sum\limits_{p\geqslant 2}\binom{r-l-1}{p-2}\binom{s-1}{p-1}\right)\binom{\alpha_1}{a}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}+\binom{\alpha_1}{a}\prod\limits_{j=r+2}^{r+s}\delta_{\alpha_j,0}\\
=&\sum\limits_{l=1}^{r-1}\binom{r+s-l-2}{r-l}\binom{\alpha_1}{a}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}
+\binom{\alpha_1}{a}\prod\limits_{j=r+2}^{r+s}\delta_{\alpha_j,0}.
\end{align*}
Hence we get
\begin{align*}
c(a,r;b,s)=&\sum\limits_{l=1}^{r-1}\binom{\alpha_1}{a}\binom{r+s-l-1}{r-l}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}
+\binom{\alpha_1}{a}\prod\limits_{j=r+2}^{r+s}\delta_{\alpha_j,0}\\
=&\sum\limits_{l=1}^{r}\binom{\alpha_1}{a}\binom{r+s-l-1}{r-l}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}.
\end{align*}
Finally, we obtain the following shuffle product formula.
\begin{prop}\label{Prop:Shuffle-Res-1-1}
For any nonnegative integers $a,b$ and any positive integers $r,s$, we have
\begin{align}
&x^ay^r\,\mbox{\bf \scyr X}\, x^by^s=\sum\limits_{\alpha_1+\cdots+\alpha_{r+s}=a+b\atop \alpha_1,\ldots,\alpha_{r+s}\geqslant 0}\left\{\sum\limits_{l=1}^{r}\binom{\alpha_1}{a}\binom{r+s-l-1}{r-l}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}\right.\nonumber\\
&\qquad\qquad\left.+\sum\limits_{l=1}^{s}\binom{\alpha_1}{b}\binom{r+s-l-1}{s-l}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}\right\}x^{\alpha_1}y\cdots x^{\alpha_{r+s}}y.
\label{Eq:Shuffle-Res-1-1}
\end{align}
\end{prop}
If $a,b\geqslant 1$, applying the map $\zeta$ to \eqref{Eq:Shuffle-Res-1-1} yields the shuffle product formula \cite[Equation (1.1)]{Eie-Wei} for multiple zeta values of the form $\zeta(m,\{1\}^n)$.
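Proposition \ref{Prop:Shuffle-Res-1-1} itself is also easy to test numerically. The Python sketch below (helper names are ours) compares \eqref{Eq:Shuffle-Res-1-1} with a direct enumeration of all shuffles of $x^ay^r$ and $x^by^s$.

```python
from itertools import combinations
from math import comb

def binom(n, k):
    # Binomial coefficient with the text's convention: zero unless 0 <= k <= n.
    return comb(n, k) if 0 <= k <= n else 0

def shuffle(u, v):
    # Brute-force shuffle product of two words as a dict: word -> coefficient.
    result = {}
    n, m = len(u), len(v)
    for pos in combinations(range(n + m), n):
        posset, it_u, it_v = set(pos), iter(u), iter(v)
        w = ''.join(next(it_u) if k in posset else next(it_v)
                    for k in range(n + m))
        result[w] = result.get(w, 0) + 1
    return result

def coeff_res(a, r, b, s, alpha):
    # Coefficient of x^{alpha_1} y ... x^{alpha_{r+s}} y predicted by the
    # proposition; alpha is 0-indexed in the code.
    c = 0
    for l in range(1, r + 1):
        if all(alpha[j] == 0 for j in range(l + 1, r + s)):
            c += binom(alpha[0], a) * binom(r + s - l - 1, r - l)
    for l in range(1, s + 1):
        if all(alpha[j] == 0 for j in range(l + 1, r + s)):
            c += binom(alpha[0], b) * binom(r + s - l - 1, s - l)
    return c

def compositions(total, parts):
    # All ordered decompositions of `total` into `parts` nonnegative integers.
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

a, r, b, s = 2, 2, 1, 2
prod = shuffle('x' * a + 'y' * r, 'x' * b + 'y' * s)
for alpha in compositions(a + b, r + s):
    word = ''.join('x' * ai + 'y' for ai in alpha)
    assert prod.get(word, 0) == coeff_res(a, r, b, s, alpha)
```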
Deducing the shuffle product formula \eqref{Eq:Shuffle-Res-1-1} from Theorem \ref{Thm:Shuffle-General} is somewhat tedious and not very natural. In fact, we can get \eqref{Eq:Shuffle-Res-1-1} immediately from the combinatorial description of the shuffle product. We want to compute $x^ay^r\,\mbox{\bf \scyr X}\, x^by^s$, and we write this product as $x^ay_1^r\,\mbox{\bf \scyr X}\, x^by_2^s$. There are two possibilities for the positions of the $y$'s:
\begin{description}
\item[(i)] $\underbrace{y_1\cdots y_1}_l y_2\begin{array}{l}
\underbrace{y_{1}\cdots y_{1}}_{r-l}\\
\underbrace{y_2 \cdots y_{2}}_{s-1}
\end{array}$, \qquad ($1\leqslant l\leqslant r$);
\item[(ii)] $\underbrace{y_2\cdots y_{2}}_{l}y_1\begin{array}{l}
\underbrace{y_1\cdots y_{1}}_{r-1}\\
\underbrace{y_{2}\cdots y_2}_{s-l}
\end{array}$,\qquad ($1\leqslant l\leqslant s$).
\end{description}
Here and below $\begin{array}{l}
\underbrace{y_{1}\cdots y_{1}}_{m}\\
\underbrace{y_2 \cdots y_{2}}_{n}
\end{array}$ means the shuffle $y_{1}^{m}\,\mbox{\bf \scyr X}\, y_2^{n}$. For case (i), in $x^{\alpha_1}y$ there are $a$ $x$'s from $x^a$ and $\alpha_1-a$ $x$'s from $x^b$. For $2\leqslant j\leqslant l+1$, all $x$'s in $x^{\alpha_j}y$ come from $x^b$. And for $l+2\leqslant j\leqslant r+s$, we must have $\alpha_j=0$. Hence the contribution of case (i) to the coefficient of $x^{\alpha_1}y\cdots x^{\alpha_{r+s}}y$ is
$$\sum\limits_{l=1}^r\binom{\alpha_1}{a}\binom{r+s-l-1}{r-l}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}.$$
By the symmetry between the $y_1$'s and the $y_2$'s, the contribution of case (ii) to the coefficient of $x^{\alpha_1}y\cdots x^{\alpha_{r+s}}y$ is
$$\sum\limits_{l=1}^s\binom{\alpha_1}{b}\binom{r+s-l-1}{s-l}\prod\limits_{j=l+2}^{r+s}\delta_{\alpha_j,0}.$$
Thus we have reproved the shuffle product formula \eqref{Eq:Shuffle-Res-1-1} in a simple way.
In \cite[Theorem 1.1]{Lei-Guo-Ma}, P. Lei, L. Guo and B. Ma also studied the shuffle product of two multiple zeta values of the form $\zeta(m,\{1\}^n)$. The formula they obtained looks different from the one deduced from \eqref{Eq:Shuffle-Res-1-1}. We show in Appendix \ref{AppSec:Proof-1-1} that these two formulas in fact essentially coincide with each other.
\subsection{The formula of $x^ay^r\,\mbox{\bf \scyr X}\, x^{b_1}y^{s_1}x^{b_2}y^{s_2}$}
In this subsection, we compute the shuffle product $x^ay^r\,\mbox{\bf \scyr X}\, x^{b_1}y^{s_1}x^{b_2}y^{s_2}$, where $a,b_1,b_2$ are nonnegative integers and $r,s_1,s_2$ are positive integers. To distinguish these $y$'s, we write
$$x^ay^r=x^ay_1^r,\quad x^{b_1}y^{s_1}x^{b_2}y^{s_2}=x^{b_1}y_2^{s_1}x^{b_2}y_2^{s_2}.$$
There are four cases for the order of the $y$'s:
\begin{description}
\item[(i)] $\underbrace{y_1 \cdots y_{1}}_{r_1}y_2\begin{array}{l}
\underbrace{y_{1}\cdots y_{1}}_{r_2}\\
\underbrace{y_2\cdots y_{2}}_{s_1-2}
\end{array}y_{2}\underbrace{y_{1}\cdots y_{1}}_{r_3}y_2\begin{array}{l}
\underbrace{y_{1} \cdots y_{1}}_{r_4}\\
\underbrace{y_2\cdots y_2}_{s_2-1}
\end{array}$,\\
where $r_1+r_2+r_3+r_4=r$ with $r_1\geqslant 1$ and $r_2,r_3,r_4\geqslant 0$;
\item[(ii)] $\underbrace{y_2\cdots y_2}_l y_1\begin{array}{l}
\underbrace{y_1 \cdots y_{1}}_{r_1-1}\\
\underbrace{y_{2}\cdots y_{2}}_{s_1-l-1}
\end{array}y_{2}\underbrace{y_{1}\cdots y_{1}}_{r_2}y_2\begin{array}{l}
\underbrace{y_{1}\cdots y_1}_{r_3}\\
\underbrace{y_2\cdots y_{2}}_{s_2-1}
\end{array}$,\\
where $r_1+r_2+r_3=r$ with $r_1\geqslant 1$, $r_2,r_3\geqslant 0$ and $1\leqslant l\leqslant s_1-1$;
\item[(iii)] $\underbrace{y_2\cdots y_{2}}_{s_1}\underbrace{y_1\cdots y_{1}}_{r_1}y_2\begin{array}{l}
\underbrace{y_{1}\cdots y_1}_{r_2}\\
\underbrace{y_2\cdots y_2}_{s_2-1}
\end{array}$,\\
where $r_1+r_2=r$ with $r_1\geqslant 1$ and $r_2\geqslant 0$;
\item[(iv)] $\underbrace{y_2\cdots y_{2}}_{s_1}\underbrace{y_2\cdots y_2}_l y_1\begin{array}{l}
\underbrace{y_{1}\cdots y_1}_{r-1}\\
\underbrace{y_{2}\cdots y_2}_{s_2-l}
\end{array}$,\\
where $1\leqslant l\leqslant s_2$.
\end{description}
For each case, we consider the places and the numbers of the $x$'s needed to obtain $x^{\alpha_i}y$. Then we can write down the contribution of each case to the coefficient of $x^{\alpha_1}y\cdots x^{\alpha_{r+s_1+s_2}}y$ in the product $x^ay^r\,\mbox{\bf \scyr X}\, x^{b_1}y^{s_1}x^{b_2}y^{s_2}$, and obtain the following shuffle product formula.
\begin{prop}
For any nonnegative integers $a,b_1,b_2$ and any positive integers $r,s_1,s_2$, we have
\begin{align}
&x^{a}y^r\,\mbox{\bf \scyr X}\, x^{b_1}y^{s_1}x^{b_2}y^{s_2}=\sum\limits_{\alpha_1+\cdots+\alpha_{r+s_1+s_2}=a+b_1+b_2\atop \alpha_1,\ldots,\alpha_{r+s_1+s_2}\geqslant 0}\left\{\sum\limits_{r_1+r_2+r_3+r_4=r\atop r_1\geqslant 1,r_2,r_3,r_4\geqslant 0}\binom{\alpha_1}{a}\binom{r_2+s_1-2}{r_2}\right.\nonumber\\
&\quad \times\binom{r_4+s_2-1}{r_4}\delta_{\alpha_1+\cdots+\alpha_{r_1+1},a+b_1}\prod\limits_{i=r_1+2}^{r_1+r_2+s_1}\delta_{\alpha_i,0}
\prod\limits_{i=r_1+r_2+r_3+s_1+2}^{r+s_1+s_2}\delta_{\alpha_i,0}\nonumber\\
&+\sum\limits_{{r_1+r_2+r_3=r\atop r_1\geqslant 1,r_2,r_3\geqslant 0}\atop 1\leqslant l\leqslant s_1-1}\binom{\alpha_1}{b_1}\binom{r_1+s_1-l-2}{r_1-1}\binom{r_3+s_2-1}{r_3}\delta_{\alpha_1+\cdots+\alpha_{l+1},a+b_1}\prod\limits_{i=l+2}^{r_1+s_1}\delta_{\alpha_i,0}
\nonumber\\
&\quad\times\prod\limits_{i=r_1+r_2+s_1+2}^{r+s_1+s_2}\delta_{\alpha_i,0}+\sum\limits_{r_1+r_2=r\atop r_1\geqslant 1,r_2\geqslant 0}\binom{\alpha_1}{b_1}\binom{\alpha_{s_1+1}}{a+b_1-\sum\limits_{i=1}^{s_1}\alpha_i}\binom{r_2+s_2-1}{r_2}\nonumber\\
&\quad\left.\times\prod\limits_{i=r_1+s_1+2}^{r+s_1+s_2}\delta_{\alpha_i,0}+\sum\limits_{l=1}^{s_2}\binom{\alpha_1}{b_1}\binom{\alpha_{s_1+1}}{b_2}\binom{r+s_2-l-1}{r-1}
\prod\limits_{i=s_1+l+2}^{r+s_1+s_2}\delta_{\alpha_i,0}\right\}\nonumber\\
&\quad\qquad \times x^{\alpha_1}y\cdots x^{\alpha_{r+s_1+s_2}}y.
\label{Eq:Shuffle-Res-1-2}
\end{align}
\end{prop}
When $a,b_1\geqslant 1$, applying the map $\zeta$ to the formula \eqref{Eq:Shuffle-Res-1-2}, we get the formula for the product of the form $\zeta(m,\{1\}^n)\zeta(m_1,\{1\}^{n_1},m_2,\{1\}^{n_2})$. The formula obtained here looks different from that in \cite[Theorem 1.3]{Lei-Guo-Ma}. We show in Appendix \ref{AppSec:Proof-1-2} that these two formulas are essentially identical.
\subsection{The formula of $x^{a_1}y^{r_1}x^{a_2}y^{r_2}\,\mbox{\bf \scyr X}\, x^{b_1}y^{s_1}x^{b_2}y^{s_2}$}
In this subsection, we study the restricted shuffle product formula for the product $x^{a_1}y^{r_1}x^{a_2}y^{r_2}\,\mbox{\bf \scyr X}\, x^{b_1}y^{s_1}x^{b_2}y^{s_2}$, where $a_1,a_2,b_1,b_2$ are nonnegative integers and $r_1,r_2,s_1,s_2$ are positive integers. As before, we write the $y$'s in $x^{a_1}y^{r_1}x^{a_2}y^{r_2}$ as $y_1$ and the $y$'s in $x^{b_1}y^{s_1}x^{b_2}y^{s_2}$ as $y_2$. When shuffling the $y_1$'s and $y_2$'s, by symmetry, we only need to consider the cases beginning with $y_1$. Then there are ten cases:
\begin{description}
\item[(i)] $\underbrace{y_1\cdots y_1}_{r_1}\underbrace{y_1\cdots y_1}_{l_1}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{l_2}\\
\underbrace{y_2\cdots y_2}_{s_1-2}
\end{array}y_2\underbrace{y_1\cdots y_1}_{l_3}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{l_4}\\
\underbrace{y_2\cdots y_2}_{s_2-1}
\end{array}$,\\
where $l_1+l_2+l_3+l_4=r_2$ with $l_1\geqslant 1$, $l_2,l_3,l_4\geqslant 0$;
\item[(ii)] $\underbrace{y_1\cdots y_1}_{r_1}\underbrace{y_2\cdots y_2}_ky_1\begin{array}{l}
\underbrace{y_1\cdots y_1}_{l_1-1}\\
\underbrace{y_2\cdots y_2}_{s_1-k-1}
\end{array}y_2\underbrace{y_1\cdots y_1}_{l_2}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{l_3}\\
\underbrace{y_2\cdots y_2}_{s_2-1}
\end{array}$,\\
where $l_1+l_2+l_3=r_2$ with $l_1\geqslant 1$, $l_2,l_3\geqslant 0$ and $1\leqslant k\leqslant s_1-1$;
\item[(iii)] $\underbrace{y_1\cdots y_1}_{r_1}\underbrace{y_2\cdots y_2}_{s_1}\underbrace{y_1\cdots y_1}_{l_1}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{l_2}\\
\underbrace{y_2\cdots y_2}_{s_2-1}
\end{array}$,\\
where $l_1+l_2=r_2$ with $l_1\geqslant 1$ and $l_2\geqslant 0$;
\item[(iv)] $\underbrace{y_1\cdots y_1}_{r_1}\underbrace{y_2\cdots y_2}_{s_1}\underbrace{y_2\cdots y_2}_ky_1\begin{array}{l}
\underbrace{y_1\cdots y_1}_{r_2-1}\\
\underbrace{y_2\cdots y_2}_{s_2-k}
\end{array}$,\\
where $1\leqslant k\leqslant s_2$;
\item[(v)] $\underbrace{y_1\cdots y_1}_ly_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{r_1-l-1}\\
\underbrace{y_2\cdots y_2}_{k_1-1}
\end{array}y_1\underbrace{y_2\cdots y_2}_{k_2}y_1\begin{array}{l}
\underbrace{y_1\cdots y_1}_{l_1-1}\\
\underbrace{y_2\cdots y_2}_{k_3-1}
\end{array}y_2\underbrace{y_1\cdots y_1}_{l_2}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{l_3}\\
\underbrace{y_2\cdots y_2}_{s_2-1}
\end{array}$,\\
where $1\leqslant l\leqslant r_1-1$, $k_1+k_2+k_3=s_1$ with $k_1,k_3\geqslant 1$, $k_2\geqslant 0$, and $l_1+l_2+l_3=r_2$ with $l_1\geqslant 1$, $l_2,l_3\geqslant 0$;
\item[(vi)] $\underbrace{y_1\cdots y_1}_{l_1}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{r_1-l_1-1}\\
\underbrace{y_2\cdots y_2}_{k-1}
\end{array}y_1\underbrace{y_2\cdots y_2}_{s_1-k}\underbrace{y_1\cdots y_1}_{l_2}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{r_2-l_2}\\
\underbrace{y_2\cdots y_2}_{s_2-1}
\end{array}$,\\
where $1\leqslant l_1\leqslant r_1-1$, $1\leqslant l_2\leqslant r_2$ and $1\leqslant k\leqslant s_1-1$;
\item[(vii)] $\underbrace{y_1\cdots y_1}_{l}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{r_1-l-1}\\
\underbrace{y_2\cdots y_2}_{k_1-1}
\end{array}y_1\underbrace{y_2\cdots y_2}_{s_1-k_1}\underbrace{y_2\cdots y_2}_{k_2}y_1\begin{array}{l}
\underbrace{y_1\cdots y_1}_{r_2-1}\\
\underbrace{y_2\cdots y_2}_{s_2-k_2}
\end{array}$,\\
where $1\leqslant l\leqslant r_1-1$, $1\leqslant k_1\leqslant s_1-1$ and $1\leqslant k_2\leqslant s_2$;
\item[(viii)] $\underbrace{y_1\cdots y_1}_{l_1}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{l_2}\\
\underbrace{y_2\cdots y_2}_{s_1-2}
\end{array}y_2\underbrace{y_1\cdots y_1}_{l_3}\underbrace{y_1\cdots y_1}_{l}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{r_2-l}\\
\underbrace{y_2\cdots y_2}_{s_2-1}
\end{array}$,\\
where $l_1+l_2+l_3=r_1$ with $l_1,l_3\geqslant 1$, $l_2\geqslant 0$, and $1\leqslant l\leqslant r_2$;
\item[(ix)] $\underbrace{y_1\cdots y_1}_{l_1}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{l_2}\\
\underbrace{y_2\cdots y_2}_{s_1-2}
\end{array}y_2\underbrace{y_1\cdots y_1}_{l_3}\underbrace{y_2\cdots y_2}_ky_1\begin{array}{l}
\underbrace{y_1\cdots y_1}_{r_2-1}\\
\underbrace{y_2\cdots y_2}_{s_2-k}
\end{array}$,\\
where $l_1+l_2+l_3=r_1$ with $l_1,l_3\geqslant 1$, $l_2\geqslant 0$, and $1\leqslant k\leqslant s_2$;
\item[(x)] $\underbrace{y_1\cdots y_1}_{l_1}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{l_2}\\
\underbrace{y_2\cdots y_2}_{s_1-2}
\end{array}y_2\underbrace{y_1\cdots y_1}_{l_3}y_2\begin{array}{l}
\underbrace{y_1\cdots y_1}_{l_4-1}\\
\underbrace{y_2\cdots y_2}_{k_1-1}
\end{array}y_1\underbrace{y_2\cdots y_2}_{k_2}y_1\begin{array}{l}
\underbrace{y_1\cdots y_1}_{r_2-1}\\
\underbrace{y_2\cdots y_2}_{k_3}
\end{array}$,\\
where $l_1+l_2+l_3+l_4=r_1$ with $l_1,l_4\geqslant 1$, $l_2,l_3\geqslant 0$, and $k_1+k_2+k_3=s_2$ with $k_1\geqslant 1$, $k_2,k_3\geqslant 0$.
\end{description}
Considering then the possible positions of the $x$'s, we obtain the following shuffle product formula.
\begin{prop}
For any nonnegative integers $a_1,a_2,b_1,b_2$ and any positive integers $r_1,r_2$, $s_1,s_2$, we have
\begin{align}
&x^{a_1}y^{r_1}x^{a_2}y^{r_2}\,\mbox{\bf \scyr X}\, x^{b_1}y^{s_1}x^{b_2}y^{s_2}=\sum\limits_{{\alpha_1+\cdots+\alpha_{r_1+r_2+s_1+s_2}\atop=a_1+a_2+b_1+b_2}\atop \alpha_1,\ldots,\alpha_{r_1+r_2+s_1+s_2}\geqslant 0}[c(a_1,a_2,r_1,r_2;b_1,b_2,s_1,s_2)\nonumber\\
&\qquad +c(b_1,b_2,s_1,s_2;a_1,a_2,r_1,r_2)]x^{\alpha_1}y\cdots x^{\alpha_{r_1+r_2+s_1+s_2}}y,
\label{Eq:Shuffle-Res-2-2}
\end{align}
where
\begin{align*}
&c(a_1,a_2,r_1,r_2;b_1,b_2,s_1,s_2)=\sum\limits_{l_1+l_2+l_3+l_4=r_2\atop l_1\geqslant 1,l_2,l_3,l_4\geqslant 0}\binom{\alpha_1}{a_1}\binom{\alpha_{r_1+1}}{a_2}\binom{l_2+s_1-2}{l_2}\\
&\;\times \binom{l_4+s_2-1}{l_4}\delta_{\alpha_1+\cdots+\alpha_{r_1+l_1+1},a_1+a_2+b_1}
\prod\limits_{i=r_1+l_1+2}^{r_1+l_1+l_2+s_1}\delta_{\alpha_i,0}\prod\limits_{i=r_1+l_1+l_2+l_3+s_1+2}^{r_1+r_2+s_1+s_2}\delta_{\alpha_i,0}\\
&\;+\sum\limits_{{l_1+l_2+l_3=r_2\atop l_1\geqslant 1,l_2,l_3\geqslant 0}\atop 1\leqslant k\leqslant s_1-1}\binom{\alpha_1}{a_1}\binom{\alpha_{r_1+1}}{a_1+b_1-\sum\limits_{i=1}^{r_1}\alpha_i}\binom{l_1+s_1-k-2}{l_1-1}\binom{l_3+s_2-1}{l_3}\\
&\;\times\delta_{\alpha_1+\cdots+\alpha_{r_1+k+1},a_1+a_2+b_1}\prod\limits_{i=r_1+k+2}^{r_1+l_1+s_1}\delta_{\alpha_i,0}\prod\limits_{i=r_1+l_1+l_2+s_1+2}^{r_1+r_2+s_1+s_2}
\delta_{\alpha_i,0}\\
&\;+\sum\limits_{l_1+l_2=r_2\atop l_1\geqslant 1,l_2\geqslant 0}\binom{\alpha_1}{a_1}\binom{\alpha_{r_1+1}}{a_1+b_1-\sum\limits_{i=1}^{r_1}\alpha_i}\binom{\alpha_{r_1+s_1+1}}{a_1+a_2+b_1-\sum\limits_{i=1}^{r_1+s_1}\alpha_i}
\binom{l_2+s_2-1}{l_2}\\
&\;\times\prod\limits_{i=r_1+l_1+s_1+2}^{r_1+r_2+s_1+s_2}\delta_{\alpha_i,0}+\sum\limits_{k=1}^{s_2}\binom{\alpha_1}{a_1}
\binom{\alpha_{r_1+1}}{a_1+b_1-\sum\limits_{i=1}^{r_1}\alpha_i}\binom{\alpha_{r_1+s_1+1}}{b_2}\\
&\;\times\binom{r_2+s_2-k-1}{r_2-1}\prod\limits_{i=r_1+s_1+k+2}^{r_1+r_2+s_1+s_2}\delta_{\alpha_i,0}+\sum\limits_{{k_1+k_2+k_3=s_1\atop l_1+l_2+l_3=r_2}\atop {k_1,k_3,l_1\geqslant 1,k_2,l_2,l_3\geqslant 0\atop 1\leqslant l\leqslant r_1-1}}\binom{\alpha_1}{a_1}\\
&\;\times \binom{r_1+k_1-l-2}{k_1-1}\binom{\alpha_{r_1+k_1+k_2+1}}{a_2-\sum\limits_{i=r_1+k_1+1}^{r_1+k_1+k_2}\alpha_i}\binom{k_3+l_1-2}{l_1-1}
\binom{l_3+s_2-1}{l_3}\\
&\;\times \delta_{\alpha_1+\cdots+\alpha_{l+1},a_1+b_1}\prod\limits_{i=l+2}^{r_1+k_1}\delta_{\alpha_i,0}\prod\limits_{i=r_1+k_1+k_2+2}^{r_1+k_1+k_2+k_3+l_1}\delta_{\alpha_i,0}
\prod\limits_{i=r_1+l_1+l_2+s_1+2}^{r_1+r_2+s_1+s_2}\delta_{\alpha_i,0}\\
&\;+\sum\limits_{{1\leqslant l_1\leqslant r_1-1\atop 1\leqslant l_2\leqslant r_2}\atop 1\leqslant k\leqslant s_1-1}\binom{\alpha_1}{a_1}\binom{k+r_1-l_1-2}{k-1}\binom{\alpha_{r_1+s_1+1}}{a_2-\sum\limits_{i=k+r_1+1}^{r_1+s_1}\alpha_i}\binom{r_2+s_2-l_2-1}{s_2-1}\\
&\times\delta_{\alpha_1+\cdots+\alpha_{l_1+1},a_1+b_1}\prod\limits_{i=l_1+2}^{k+r_1}\delta_{\alpha_i,0}\prod\limits_{i=r_1+l_2+s_1+2}^{r_1+r_2+s_1+s_2}\delta_{\alpha_i,0}
+\sum\limits_{{1\leqslant l\leqslant r_1-1\atop 1\leqslant k_1\leqslant s_1-1}\atop 1\leqslant k_2\leqslant s_2}\binom{\alpha_1}{a_1}\\
&\;\times\binom{k_1+r_1-l-2}{k_1-1}\binom{\alpha_{r_1+s_1+1}}{b_2}\binom{r_2+s_2-k_2-1}{r_2-1}\delta_{\alpha_1+\cdots+\alpha_{l+1},a_1+b_1}\\
&\;\times\prod\limits_{i=l+2}^{k_1+r_1}\delta_{\alpha_i,0}\prod\limits_{i=k_2+r_1+s_1+2}^{r_1+r_2+s_1+s_2}\delta_{\alpha_i,0}+\sum\limits_{{l_1+l_2+l_3=r_1\atop l_1,l_3\geqslant 1,l_2\geqslant 0}\atop 1\leqslant l\leqslant r_2}\binom{\alpha_1}{a_1}\binom{l_2+s_1-2}{l_2}\binom{\alpha_{r_1+s_1+1}}{a_2}\\
&\;\times \binom{r_2+s_2-l-1}{s_2-1}\delta_{\alpha_1+\cdots+\alpha_{l_1+1},a_1+b_1}
\prod\limits_{i=l_1+2}^{l_1+l_2+s_1}\delta_{\alpha_i,0}\prod\limits_{i=r_1+s_1+l+2}^{r_1+r_2+s_1+s_2}\delta_{\alpha_i,0}\\
&\;+\sum\limits_{{l_1+l_2+l_3=r_1\atop l_1,l_3\geqslant 1,l_2\geqslant 0}\atop 1\leqslant k\leqslant s_2}\binom{\alpha_1}{a_1}\binom{l_2+s_1-2}{l_2}\binom{\alpha_{r_1+s_1+1}}{b_2-\sum\limits_{i=l_1+l_2+s_1+1}^{r_1+s_1}\alpha_i}\binom{r_2+s_2-k-1}{r_2-1}\\
&\;\times \delta_{\alpha_1+\cdots+\alpha_{l_1+1},a_1+b_1}
\prod\limits_{i=l_1+2}^{l_1+l_2+s_1}\delta_{\alpha_i,0}\prod\limits_{i=k+r_1+s_1+2}^{r_1+r_2+s_1+s_2}\delta_{\alpha_i,0}+\sum\limits_{{l_1+l_2+l_3+l_4=r_1\atop k_1+k_2+k_3=s_2}\atop l_1,l_4,k_1\geqslant 1,l_2,l_3,k_2,k_3\geqslant 0}\binom{\alpha_1}{a_1}\\
&\;\times \binom{l_2+s_1-2}{l_2}
\binom{\alpha_{l_1+l_2+l_3+s_1+1}}{b_2-\sum\limits_{i=l_1+l_2+s_1+1}^{l_1+l_2+l_3+s_1}\alpha_i}\binom{k_1+l_4-2}{l_4-1}\binom{k_3+r_2-1}{k_3}\\
&\;\times\delta_{\alpha_1+\cdots+\alpha_{l_1+1},a_1+b_1}\prod\limits_{i=l_1+2}^{l_1+l_2+s_1}\delta_{\alpha_i,0}\prod\limits_{i=l_1+l_2+l_3+s_1+2}^{k_1+r_1+s_1}\delta_{\alpha_i,0}
\prod\limits_{i=k_1+k_2+r_1+s_1+2}^{r_1+r_2+s_1+s_2}\delta_{\alpha_i,0}.
\end{align*}
\end{prop}
\section{Shuffle products of several multiple zeta values}\label{Sec:Shu-MZV-n}
There is another approach to obtain the formula \eqref{Eq:Shuffle-Res-1-1} for the product $x^ay^r\,\mbox{\bf \scyr X}\, x^by^s$. We want to find the coefficient of $x^{\alpha_1}y\cdots x^{\alpha_{r+s}}y$ in the product $x^ay^r\,\mbox{\bf \scyr X}\, x^by^s$ for $\alpha_1+\cdots+\alpha_{r+s}=a+b$ with $\alpha_i\geqslant 0$. We write this product as $x_1^ay_1^r\,\mbox{\bf \scyr X}\, x_2^by_2^s$, and place the $y$'s first and the $x$'s second. To get $r+s$ $y$'s, by symmetry, we only need to consider the case
$$\fbox{$y_1$} \underbrace{y\cdots y}_{l-1} \fbox{$y_2$}\underbrace{y\cdots y}_{r+s-l-1},$$
where $1\leqslant l\leqslant r$ and $\fbox{$y_1$}$ denotes the first $y_1$. The $y$'s between $\fbox{$y_1$}$ and $\fbox{$y_2$}$ can only be $y_1$'s, while the $y$'s after $\fbox{$y_2$}$ are either $y_1$ or $y_2$. The position of $\fbox{$y_2$}$ is fixed, and there are $r+s-l-1$ positions in which to place the other $y_2$'s. Once all $y_2$'s are placed, there is only one way to place the $y_1$'s. Hence in this case there are $\binom{r+s-l-1}{s-1}$ ways to obtain $y^{r+s}$. Now we place the $x$'s. All $x_1$'s must be placed before $\fbox{$y_1$}$, and all $x_2$'s can be placed freely before $\fbox{$y_2$}$. Hence no $x$'s appear before the $y$'s that come after $\fbox{$y_2$}$; in other words, $\alpha_i=0$ for $l+2\leqslant i\leqslant r+s$. There are $\alpha_1$ positions in which to place the $x_1$'s, and once all $x_1$'s are placed, there is only one way to place the $x_2$'s. Hence we get $\binom{\alpha_1}{a}$ possibilities. Finally, the contribution of this case to the coefficient of $x^{\alpha_1}y\cdots x^{\alpha_{r+s}}y$ is
$$\sum\limits_{l=1}^r\binom{\alpha_1}{a}\binom{r+s-l-1}{s-1}\prod\limits_{i=l+2}^{r+s}\delta_{\alpha_i,0}.$$
Then we get \eqref{Eq:Shuffle-Res-1-1}.
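The counting argument above is easy to check by machine. The following Python sketch (the helper names are ours, not from the text; words are modeled as tuples over $\{x,y\}$) expands $x^ay^r\,\mbox{\bf \scyr X}\, x^by^s$ by brute force and compares each coefficient with the closed form obtained from the two symmetric cases:

```python
from collections import Counter
from itertools import combinations
from math import comb

def shuffle(u, v):
    """Multiset of all shuffles of the words u and v (tuples of letters)."""
    n, m = len(u), len(v)
    result = Counter()
    for slots in combinations(range(n + m), n):
        slots = set(slots)
        iu, iv = iter(u), iter(v)
        result[tuple(next(iu) if i in slots else next(iv)
                     for i in range(n + m))] += 1
    return result

def exponents(word):
    """Decode a word x^{a1} y x^{a2} y ... y into the tuple (a1, a2, ...)."""
    alphas, run = [], 0
    for letter in word:
        if letter == 'x':
            run += 1
        else:  # a 'y' closes the current block of x's
            alphas.append(run)
            run = 0
    return tuple(alphas)

def coeff(alpha, a, r, b, s):
    """Closed-form coefficient of x^{a1}y ... x^{a_{r+s}}y in x^a y^r sh x^b y^s."""
    def half(a, r, b, s):
        total = 0
        for l in range(1, r + 1):
            # the delta factors force alpha_i = 0 for l+2 <= i <= r+s
            if all(alpha[i] == 0 for i in range(l + 1, r + s)):
                total += comb(alpha[0], a) * comb(r + s - l - 1, s - 1)
        return total
    return half(a, r, b, s) + half(b, s, a, r)

def formula_matches(a, r, b, s):
    """Compare brute-force shuffle coefficients with the closed form."""
    u, v = ('x',) * a + ('y',) * r, ('x',) * b + ('y',) * s
    return all(mult == coeff(exponents(w), a, r, b, s)
               for w, mult in shuffle(u, v).items())
```

Here `half(a, r, b, s)` is the contribution computed above, and the symmetric call `half(b, s, a, r)` accounts for the case beginning with $y_2$.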
Generalizing the arguments above, we obtain a formula for the shuffle product $x^{a_1}y^{r_1}\,\mbox{\bf \scyr X}\, x^{a_2}y^{r_2}\,\mbox{\bf \scyr X}\,\cdots\,\mbox{\bf \scyr X}\, x^{a_n}y^{r_n}$.
\begin{prop}
Let $n,r_1,\ldots,r_n$ be positive integers and let $a_1,\ldots,a_n$ be nonnegative integers. Then we have
\begin{align}
&x^{a_1}y^{r_1}\,\mbox{\bf \scyr X}\, x^{a_2}y^{r_2}\,\mbox{\bf \scyr X}\,\cdots\,\mbox{\bf \scyr X}\, x^{a_n}y^{r_n}\nonumber\\
=&\sum\limits_{\alpha_1+\cdots+\alpha_{r_1+\cdots+r_n}=a_1+\cdots+a_n\atop \alpha_i\geqslant 0}c_{\alpha_1,\ldots,\alpha_{r_1+\cdots+r_n}}x^{\alpha_1}y\cdots x^{\alpha_{r_1+\cdots+r_n}}y,
\label{Eq:Shuffle-Res-nProd}
\end{align}
with the coefficient
\begin{align*}
&c_{\alpha_1,\ldots,\alpha_{r_1+\cdots+r_n}}=\sum\limits_{l_1+\cdots+l_n=r_1+\cdots+r_n\atop l_i\geqslant 1}\sum\limits_{\sigma\in\mathfrak{S}_n}\sigma_r\left\{\prod\limits_{j=2}^n\begin{pmatrix}
{\sum\limits_{i=j}^nl_i-\sum\limits_{i=j+1}^nr_i-1}\\
{r_j-1}
\end{pmatrix}\right\}\\
&\quad\;\times\sigma_a\left\{\prod\limits_{j=1}^{n-1}\begin{pmatrix}
{\sum\limits_{i=1}^{L_{j-1}+1}\alpha_i-\sum\limits_{i=1}^{j-1}a_i}\\
{a_j}
\end{pmatrix}\right\}\prod\limits_{j=L_{n-1}+2}^{r_1+\cdots+r_n}
\delta_{\alpha_j,0},
\end{align*}
where $L_0=0$ and $L_j=l_1+\cdots+l_j$ for $1\leqslant j\leqslant n$, and $\sigma_r$, $\sigma_a$ are induced permutations of $\sigma\in\mathfrak{S}_n$ on the set $\{r_1,\ldots,r_n\}$, $\{a_1,\ldots,a_n\}$, respectively.
\end{prop}
\noindent {\bf Proof.\;} We write this product as $x_1^{a_1}y_1^{r_1}\,\mbox{\bf \scyr X}\, x_2^{a_2}y_2^{r_2}\,\mbox{\bf \scyr X}\,\cdots\,\mbox{\bf \scyr X}\, x_n^{a_n}y_n^{r_n}$, and, using the combinatorial description of the shuffle product, place the $y$'s first and the $x$'s second to obtain $x^{\alpha_1}y\cdots x^{\alpha_{r_1+\cdots+r_n}}y$. By symmetry, we only need to consider the case
$$\fbox{$y_1$}\underbrace{y\cdots y}_{l_1-1}\fbox{$y_2$}\underbrace{y\cdots y}_{l_2-1}\cdots\fbox{$y_{n-1}$}\underbrace{y\cdots y}_{l_{n-1}-1}\fbox{$y_n$}\underbrace{y\cdots y}_{l_n-1},$$
where $l_1+\cdots+l_n=r_1+\cdots+r_n$ with $l_i\geqslant 1$.
We compute the contribution of this case to the coefficient $c_{\alpha_1,\ldots,\alpha_{r_1+\cdots+r_n}}$. The position of $\fbox{$y_n$}$ is fixed, and there are $l_n-1$ positions in which to place the other $y_n$'s, so the number of choices is $\binom{l_n-1}{r_n-1}$. For the $y_{n-1}$'s, the position of $\fbox{$y_{n-1}$}$ is fixed, and there are $l_{n-1}+l_n-1$ positions in which to place the other $y_{n-1}$'s, except that $r_n$ of them are occupied by $y_n$'s; the number of choices is therefore $\binom{l_{n-1}+l_n-r_n-1}{r_{n-1}-1}$. Repeating this argument, for the $y_2$'s, the position of $\fbox{$y_2$}$ is fixed, and there are $l_2+\cdots+l_n-1$ positions in which to place the other $y_2$'s, except that $r_3+\cdots+r_n$ of them are occupied by $y_3$'s, $\ldots$, $y_n$'s; the number of choices is $\binom{l_2+\cdots+l_n-r_3-\cdots-r_n-1}{r_2-1}$. Once all $y_2$'s, $\ldots$, $y_n$'s are placed, there is only one way to place the $y_1$'s. Then we get the product
$$\prod\limits_{j=2}^n\begin{pmatrix}
{\sum\limits_{i=j}^nl_i-\sum\limits_{i=j+1}^nr_i-1}\\
{r_j-1}
\end{pmatrix}.$$
Now we place the $x$'s. There are $\alpha_1$ positions in which to place the $x_1$'s, so the number of choices is $\binom{\alpha_1}{a_1}$. For the $x_2$'s, there are $\alpha_1+\cdots+\alpha_{L_1+1}$ positions in which to place them, except that $a_1$ of them are occupied by $x_1$'s; the number of choices is $\binom{\alpha_1+\cdots+\alpha_{L_1+1}-a_1}{a_2}$. Repeating this argument, for the $x_{n-1}$'s, there are $\alpha_1+\cdots+\alpha_{L_{n-2}+1}$ positions, except that $a_1+\cdots+a_{n-2}$ of them are occupied by $x_1$'s, $\ldots$, $x_{n-2}$'s; the number of choices is $\binom{\alpha_1+\cdots+\alpha_{L_{n-2}+1}-a_1-\cdots-a_{n-2}}{a_{n-1}}$. Once all $x_1$'s, $\ldots$, $x_{n-1}$'s are placed, there is only one way to place the $x_n$'s. Hence we get the product
$$\prod\limits_{j=1}^{n-1}\begin{pmatrix}
{\sum\limits_{i=1}^{L_{j-1}+1}\alpha_i-\sum\limits_{i=1}^{j-1}a_i}\\
{a_j}
\end{pmatrix}.$$
Finally, it is easy to see that we must have $\alpha_j=0$ for $L_{n-1}+2\leqslant j\leqslant r_1+\cdots+r_n$.
\qed
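The proposition can also be verified numerically for small parameters. The Python sketch below (hypothetical helper names; an unoptimized brute-force check, where the permutation $\sigma$ is applied simultaneously to the $a_i$'s and $r_i$'s as in the proof) expands the $n$-fold shuffle product directly and compares every coefficient with $c_{\alpha_1,\ldots,\alpha_{r_1+\cdots+r_n}}$:

```python
from collections import Counter
from itertools import combinations, permutations
from math import comb

def shuffle(u, v):
    """Multiset of all shuffles of two words (tuples of letters)."""
    n, m = len(u), len(v)
    result = Counter()
    for slots in combinations(range(n + m), n):
        slots = set(slots)
        iu, iv = iter(u), iter(v)
        result[tuple(next(iu) if i in slots else next(iv)
                     for i in range(n + m))] += 1
    return result

def shuffle_many(words):
    """Iterated shuffle product of a list of words (the product is associative)."""
    acc = Counter({(): 1})
    for w in words:
        nxt = Counter()
        for u, c in acc.items():
            for v, d in shuffle(u, w).items():
                nxt[v] += c * d
        acc = nxt
    return acc

def compositions(total, parts):
    """All tuples of `parts` positive integers summing to `total`."""
    if parts == 1:
        if total >= 1:
            yield (total,)
        return
    for first in range(1, total - parts + 2):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def coeff(alpha, a, r):
    """The coefficient c_alpha of the proposition (0-indexed translation)."""
    n, R = len(a), sum(r)
    total = 0
    for ls in compositions(R, n):
        L = [0]
        for l in ls:
            L.append(L[-1] + l)
        # the delta factors force alpha_j = 0 for L_{n-1}+2 <= j <= R
        if any(alpha[i] != 0 for i in range(L[n - 1] + 1, R)):
            continue
        for sigma in permutations(range(n)):
            aa = [a[i] for i in sigma]
            rr = [r[i] for i in sigma]
            term = 1
            for j in range(1, n):      # y-placements, paper index j = 2..n
                top = sum(ls[j:]) - sum(rr[j + 1:]) - 1
                term *= comb(top, rr[j] - 1) if top >= 0 else 0
            for j in range(n - 1):     # x-placements, paper index j = 1..n-1
                top = sum(alpha[:L[j] + 1]) - sum(aa[:j])
                term *= comb(top, aa[j]) if top >= 0 else 0
            total += term
    return total

def proposition_matches(a, r):
    """Check the closed formula against the brute-force n-fold shuffle."""
    words = [('x',) * ai + ('y',) * ri for ai, ri in zip(a, r)]
    for w, mult in shuffle_many(words).items():
        alpha, run = [], 0
        for letter in w:
            if letter == 'x':
                run += 1
            else:
                alpha.append(run)
                run = 0
        if mult != coeff(tuple(alpha), a, r):
            return False
    return True
```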
If $r_1=\cdots=r_n=1$, then the formula \eqref{Eq:Shuffle-Res-nProd} becomes
\begin{align}
&x^{a_1}y\,\mbox{\bf \scyr X}\, x^{a_2}y\,\mbox{\bf \scyr X}\,\cdots\,\mbox{\bf \scyr X}\, x^{a_n}y\nonumber\\
=&\sum\limits_{\alpha_1+\cdots+\alpha_{n}=a_1+\cdots+a_n\atop \alpha_i\geqslant 0}\sum\limits_{\sigma\in\mathfrak{S}_n}\sigma_a\left\{\prod\limits_{j=1}^{n-1}\begin{pmatrix}
{\sum\limits_{i=1}^{j}\alpha_i-\sum\limits_{i=1}^{j-1}a_i}\\
{a_j}
\end{pmatrix}\right\}x^{\alpha_1}y\cdots x^{\alpha_{n}}y.
\label{Eq:Shuffle-nProd}
\end{align}
In the case $a_1,\ldots,a_n\geqslant 1$, applying the map $\zeta$ recovers \cite[Main Theorem]{Eie-Liaw-Ong} from \eqref{Eq:Shuffle-Res-nProd} and \cite[Theorem 1.3]{Chung-Eie-Liaw-Ong} from \eqref{Eq:Shuffle-nProd}.
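In the height-one case $r_1=\cdots=r_n=1$, the coefficient reduces to a single sum over permutations, which makes a direct check cheap. The sketch below (hypothetical names; a brute-force comparison under the same word model as above) verifies \eqref{Eq:Shuffle-nProd} for small tuples $(a_1,\ldots,a_n)$:

```python
from collections import Counter
from itertools import combinations, permutations
from math import comb

def shuffle(u, v):
    """Multiset of all shuffles of two words (tuples of letters)."""
    n, m = len(u), len(v)
    out = Counter()
    for slots in combinations(range(n + m), n):
        slots = set(slots)
        iu, iv = iter(u), iter(v)
        out[tuple(next(iu) if i in slots else next(iv)
                  for i in range(n + m))] += 1
    return out

def coeff_height_one(alpha, a):
    """Coefficient of x^{a1}y ... x^{an}y in x^{a_1}y sh ... sh x^{a_n}y."""
    n, total = len(a), 0
    for sigma in permutations(range(n)):
        aa = [a[i] for i in sigma]
        term = 1
        for j in range(n - 1):
            top = sum(alpha[:j + 1]) - sum(aa[:j])
            term *= comb(top, aa[j]) if top >= 0 else 0
        total += term
    return total

def height_one_matches(a):
    """Compare the specialized formula against the brute-force product."""
    prod = Counter({(): 1})
    for ai in a:  # iterated shuffle with each factor x^{a_i} y
        nxt = Counter()
        for u, c in prod.items():
            for v, d in shuffle(u, ('x',) * ai + ('y',)).items():
                nxt[v] += c * d
        prod = nxt
    for w, mult in prod.items():
        alpha, run = [], 0
        for letter in w:
            if letter == 'x':
                run += 1
            else:
                alpha.append(run)
                run = 0
        if mult != coeff_height_one(tuple(alpha), a):
            return False
    return True
```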
\appendix
\section{The equivalence of two formulas of the product $x^ay^r\,\mbox{\bf \scyr X}\, x^by^s$}\label{AppSec:Proof-1-1}
The shuffle product formula of two multiple zeta values of the form $\zeta(m,\{1\}^n)$ in \cite[Theorem 1.1 or Equation (23)]{Lei-Guo-Ma} can be expressed as
\begin{align}
&x^ay^r\,\mbox{\bf \scyr X}\, x^by^s=\sum\limits_{{0\leqslant k\leqslant b \atop r_1+r_2=r,r_i\geqslant 0}\atop \alpha_1+\cdots+\alpha_{r_1+1}=b-k,\alpha_i\geqslant 0}\binom{a-1+k}{a-1}\binom{r_2+s-1}{s-1}x^{\alpha_1+a+k}yx^{\alpha_2}y\nonumber\\
&\quad \cdots x^{\alpha_{r_1+1}}y^{r_2+s}+\sum\limits_{{1\leqslant l\leqslant s \atop a_1+a_2=a-1,a_i\geqslant 0}\atop \alpha_1+\cdots+\alpha_{l+1}=a_2,\alpha_i\geqslant 0}\binom{a_1+b-1}{b-1}\binom{r+s-l}{r}\nonumber\\
&\qquad \times x^{\alpha_1+a_1+b}yx^{\alpha_2}y\cdots x^{\alpha_{l}}yx^{\alpha_{l+1}+1}y^{r+s-l},
\label{Eq:Shuffle-Res-1-1-Old}
\end{align}
where $a,b,r,s$ are positive integers. We show that the shuffle product formula \eqref{Eq:Shuffle-Res-1-1-Old} can be deduced from \eqref{Eq:Shuffle-Res-1-1}.
Denote the first sum on the right-hand side of \eqref{Eq:Shuffle-Res-1-1-Old} by $\Sigma_1$ and the second by $\Sigma_2$. Then we have
\begin{align*}
\Sigma_1=&\sum\limits_{{0\leqslant k\leqslant b \atop 0\leqslant l\leqslant r}\atop {\alpha_1+\cdots+\alpha_{l+1}=a+b,\atop \alpha_1\geqslant a+k,\alpha_i\geqslant 0}}\binom{a-1+k}{a-1}\binom{r+s-l-1}{s-1}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l+1}}y^{r+s-l}\\
=&\sum\limits_{{\alpha_1+\cdots+\alpha_{l+1}=a+b}\atop {0\leqslant l\leqslant r,\alpha_i\geqslant 0}}\left(\sum\limits_{k=0}^{\alpha_1-a}\binom{a-1+k}{a-1}\right)\binom{r+s-l-1}{s-1}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l+1}}y^{r+s-l}\\
=&\sum\limits_{{\alpha_1+\cdots+\alpha_{l+1}=a+b}\atop {0\leqslant l\leqslant r,\alpha_i\geqslant 0}}\binom{\alpha_1}{a}\binom{r+s-l-1}{s-1}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l+1}}y^{r+s-l},
\end{align*}
and
\begin{align*}
\Sigma_2=&\sum\limits_{{1\leqslant l\leqslant s \atop 0\leqslant k\leqslant a-1}\atop {\alpha_1+\cdots+\alpha_{l+1}=a+b,\atop \alpha_1\geqslant k+b,\alpha_{l+1}\geqslant 1,\alpha_i\geqslant 0}}\binom{k+b-1}{b-1}\binom{r+s-l}{r}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l}}yx^{\alpha_{l+1}}y^{r+s-l}\\
=&\sum\limits_{{\alpha_1+\cdots+\alpha_{l+1}=a+b,\atop \alpha_{l+1}\geqslant 1,\alpha_i\geqslant 0}\atop 1\leqslant l\leqslant s}\left(\sum\limits_{k=0}^{\alpha_1-b}\binom{k+b-1}{b-1}\right)\binom{r+s-l}{r}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l+1}}y^{r+s-l}\\
=&\sum\limits_{{\alpha_1+\cdots+\alpha_{l+1}=a+b,\atop \alpha_{l+1}\geqslant 1,\alpha_i\geqslant 0}\atop 1\leqslant l\leqslant s}\binom{\alpha_1}{b}\binom{r+s-l}{r}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l+1}}y^{r+s-l}.
\end{align*}
Moreover, we have
\begin{align*}
\Sigma_2=&\sum\limits_{\alpha_1+\cdots+\alpha_{l+1}=a+b,\atop \alpha_i\geqslant 0,1\leqslant l\leqslant s}\binom{\alpha_1}{b}\binom{r+s-l}{r}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l+1}}y^{r+s-l}\\
&-\sum\limits_{\alpha_1+\cdots+\alpha_{l}=a+b,\atop \alpha_i\geqslant 0,1\leqslant l\leqslant s}\binom{\alpha_1}{b}\binom{r+s-l}{r}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l}}y^{r+s-l+1}\\
=&\sum\limits_{\alpha_1+\cdots+\alpha_{l+1}=a+b,\atop \alpha_i\geqslant 0,1\leqslant l\leqslant s}\binom{\alpha_1}{b}\binom{r+s-l}{r}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l+1}}y^{r+s-l}\\
&-\sum\limits_{\alpha_1+\cdots+\alpha_{l+1}=a+b,\atop \alpha_i\geqslant 0,0\leqslant l\leqslant s-1}\binom{\alpha_1}{b}\binom{r+s-l-1}{r}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l+1}}y^{r+s-l}\\
=&\sum\limits_{\alpha_1+\cdots+\alpha_{l+1}=a+b,\atop \alpha_i\geqslant 0,1\leqslant l\leqslant s}\binom{\alpha_1}{b}\binom{r+s-l-1}{r-1}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l+1}}y^{r+s-l}\\
&\quad -\binom{a+b}{b}\binom{r+s-1}{r}x^{a+b}y^{r+s},
\end{align*}
which implies that
\begin{align*}
\Sigma_1+\Sigma_2=&\sum\limits_{{\alpha_1+\cdots+\alpha_{l+1}=a+b}\atop {1\leqslant l\leqslant r,\alpha_i\geqslant 0}}\binom{\alpha_1}{a}\binom{r+s-l-1}{s-1}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l+1}}y^{r+s-l}\\
&+\sum\limits_{\alpha_1+\cdots+\alpha_{l+1}=a+b,\atop \alpha_i\geqslant 0,1\leqslant l\leqslant s}\binom{\alpha_1}{b}\binom{r+s-l-1}{r-1}x^{\alpha_1}yx^{\alpha_2}y\cdots x^{\alpha_{l+1}}y^{r+s-l}.
\end{align*}
Thus the formula \eqref{Eq:Shuffle-Res-1-1-Old} is a consequence of \eqref{Eq:Shuffle-Res-1-1}.
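As an independent sanity check, \eqref{Eq:Shuffle-Res-1-1-Old} can be verified numerically. The Python sketch below (hypothetical names; words modeled as tuples over $\{x,y\}$) expands $\Sigma_1+\Sigma_2$ as a polynomial in exponent tuples and compares it with a brute-force shuffle of $x^ay^r$ and $x^by^s$, for positive $a,b,r,s$ as the formula requires:

```python
from collections import Counter
from itertools import combinations
from math import comb

def shuffle(u, v):
    """Multiset of all shuffles of two words (tuples of letters)."""
    n, m = len(u), len(v)
    out = Counter()
    for slots in combinations(range(n + m), n):
        slots = set(slots)
        iu, iv = iter(u), iter(v)
        out[tuple(next(iu) if i in slots else next(iv)
                  for i in range(n + m))] += 1
    return out

def weak_compositions(total, parts):
    """All tuples of `parts` nonnegative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in weak_compositions(total - first, parts - 1):
            yield (first,) + rest

def old_formula(a, b, r, s):
    """Exponent-tuple polynomial of the right-hand side of the old formula."""
    poly = Counter()
    # Sigma_1: terms x^{a1+a+k} y x^{a2} y ... x^{a_{r1+1}} y^{r2+s}
    for k in range(b + 1):
        for r1 in range(r + 1):
            r2 = r - r1
            for al in weak_compositions(b - k, r1 + 1):
                expo = (al[0] + a + k,) + al[1:] + (0,) * (r2 + s - 1)
                poly[expo] += comb(a - 1 + k, a - 1) * comb(r2 + s - 1, s - 1)
    # Sigma_2: terms x^{a1+a1+b} y ... x^{al} y x^{a_{l+1}+1} y^{r+s-l}
    for l in range(1, s + 1):
        for a1 in range(a):
            a2 = a - 1 - a1
            for al in weak_compositions(a2, l + 1):
                expo = ((al[0] + a1 + b,) + al[1:-1]
                        + (al[-1] + 1,) + (0,) * (r + s - l - 1))
                poly[expo] += comb(a1 + b - 1, b - 1) * comb(r + s - l, r)
    return poly

def brute_force(a, b, r, s):
    """Exponent-tuple polynomial of x^a y^r sh x^b y^s, by brute force."""
    poly = Counter()
    u, v = ('x',) * a + ('y',) * r, ('x',) * b + ('y',) * s
    for w, c in shuffle(u, v).items():
        expo, run = [], 0
        for letter in w:
            if letter == 'x':
                run += 1
            else:
                expo.append(run)
                run = 0
        poly[tuple(expo)] += c
    return poly
```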
\section{The equivalence of two formulas of the product $x^ay^r\,\mbox{\bf \scyr X}\, x^{b_1}y^{s_1}x^{b_2}y^{s_2}$}\label{AppSec:Proof-1-2}
In \cite[Theorem 1.3 or the last equation]{Lei-Guo-Ma}, the authors showed that for any positive integers $a,b_1,b_2,r,s_1,s_2$, the following shuffle product formula holds:
\begin{align}
x^ay^r\,\mbox{\bf \scyr X}\, x^{b_1}y^{s_1}x^{b_2}y^{s_2}=\Sigma_1+\Sigma_2+\Sigma_3+\Sigma_4,
\label{Eq:Shuffle-Res-1-2-Old}
\end{align}
where
\begin{align*}
\Sigma_1=&\sum\limits_{{0\leqslant k\leqslant b_1\atop r_1+r_2+r_3+r_4=r,r_i\geqslant 0}\atop {\alpha_1+\cdots+\alpha_{r_1+1}=b_1-k,\alpha_i\geqslant 0\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_3+1}=b_2-1,\widetilde{\alpha}_j\geqslant 0}}\binom{a-1+k}{a-1}\binom{r_2+s_1-1}{s_1-1}\binom{r_4+s_2-1}{s_2-1}\\
&\; \times x^{\alpha_1+a+k}yx^{\alpha_2}y\cdots x^{\alpha_{r_1}}yx^{\alpha_{r_1+1}}y^{r_2+s_1}x^{\widetilde{\alpha}_1+1}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_3}}yx^{\widetilde{\alpha}_{r_3+1}}y^{r_4+s_2},\\
\Sigma_2=&\sum\limits_{{{1\leqslant l\leqslant s_1\atop a_1+a_2=a-1,a_i\geqslant 0}\atop r_1+r_2+r_3=r,r_i\geqslant 0}\atop{\alpha_1+\cdots+\alpha_{l+1}=a_2,\alpha_i\geqslant 0\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2-1,\widetilde{\alpha}_i\geqslant 0}}\binom{a_1+b_1-1}{b_1-1}\binom{r_1+s_1-l}{s_1-l}\binom{r_3+s_2-1}{s_2-1}\\
&\;\times x^{\alpha_1+a_1+b_1}yx^{\alpha_2}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}+1}y^{r_1+s_1-l}x^{\widetilde{\alpha}_1+1}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2},\\
\Sigma_3=&\sum\limits_{{{1\leqslant k\leqslant b_2\atop a_1+a_2+a_3=a-1,a_i\geqslant 0}\atop r_1+r_2=r,r_i\geqslant 0}\atop{\alpha_1+\cdots+\alpha_{s_1+1}=a_2,\alpha_i\geqslant 0\atop\widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_1+1}=b_2-k,\widetilde{\alpha}_i\geqslant 0}}\binom{a_1+b_1-1}{b_1-1}\binom{a_3+k-1}{k-1}\binom{r_2+s_2-1}{s_2-1}\\
&\;\times x^{\alpha_1+a_1+b_1}yx^{\alpha_2}y\cdots x^{\alpha_{s_1}}yx^{\alpha_{s_1+1}+\widetilde{\alpha}_1+a_3+k+1}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2},\\
\Sigma_4=&\sum\limits_{{1\leqslant l\leqslant s_2\atop a_1+a_2+a_3+a_4=a-1,a_i\geqslant 0}\atop{\alpha_1+\cdots+\alpha_{s_1}=a_2,\alpha_i\geqslant 0\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{l+1}=a_4,\widetilde{\alpha}_i\geqslant 0}}\binom{a_1+b_1-1}{b_1-1}\binom{a_3+b_2-1}{b_2-1}\binom{r+s_2-l}{r}\\
&\;\times x^{\alpha_1+a_1+b_1}yx^{\alpha_2}y\cdots x^{\alpha_{s_1}}yx^{\widetilde{\alpha}_1+a_3+b_2}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_l}yx^{\widetilde{\alpha}_{l+1}+1}y^{r+s_2-l}.
\end{align*}
We show that the formulas \eqref{Eq:Shuffle-Res-1-2} and \eqref{Eq:Shuffle-Res-1-2-Old} are essentially equivalent.
We have
\begin{align*}
\Sigma_1=&\sum\limits_{{0\leqslant k\leqslant b_1\atop r_1+r_2+r_3+r_4=r}\atop {\alpha_1+\cdots+\alpha_{r_1+1}=a+b_1,\alpha_1\geqslant a+k\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_3+1}=b_2,\widetilde{\alpha}_1\geqslant 1}}\binom{a-1+k}{a-1}\binom{r_2+s_1-1}{r_2}\binom{r_4+s_2-1}{r_4}\\
&\; \times x^{\alpha_1}y\cdots x^{\alpha_{r_1}}yx^{\alpha_{r_1+1}}y^{r_2+s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_3}}yx^{\widetilde{\alpha}_{r_3+1}}y^{r_4+s_2}\\
=&\sum\limits_{{r_1+r_2+r_3+r_4=r}\atop {\alpha_1+\cdots+\alpha_{r_1+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_3+1}=b_2,\widetilde{\alpha}_1\geqslant 1}}\binom{\alpha_1}{a}\binom{r_2+s_1-1}{r_2}\binom{r_4+s_2-1}{r_4}\\
&\; \times x^{\alpha_1}y\cdots x^{\alpha_{r_1}}yx^{\alpha_{r_1+1}}y^{r_2+s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_3}}yx^{\widetilde{\alpha}_{r_3+1}}y^{r_4+s_2}.
\end{align*}
Here and below, unless stated otherwise, all indices appearing in the summations are nonnegative integers. Removing the condition $\widetilde{\alpha}_1\geqslant 1$ and subtracting the extra terms, we have
\begin{align*}
\Sigma_1=&\sum\limits_{{r_1+r_2+r_3+r_4=r}\atop {\alpha_1+\cdots+\alpha_{r_1+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_3+1}=b_2}}\binom{\alpha_1}{a}\binom{r_2+s_1-1}{r_2}\binom{r_4+s_2-1}{r_4}\\
&\; \times x^{\alpha_1}y\cdots x^{\alpha_{r_1}}yx^{\alpha_{r_1+1}}y^{r_2+s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_3}}yx^{\widetilde{\alpha}_{r_3+1}}y^{r_4+s_2}\\
&\;-\sum\limits_{{r_1+r_2+r_3+r_4=r}\atop {\alpha_1+\cdots+\alpha_{r_1+1}=a+b_1\atop \widetilde{\alpha}_2+\cdots+\widetilde{\alpha}_{r_3+1}=b_2,r_3\geqslant 1}}\binom{\alpha_1}{a}\binom{r_2+s_1-1}{r_2}\binom{r_4+s_2-1}{r_4}\\
&\; \times x^{\alpha_1}y\cdots x^{\alpha_{r_1}}yx^{\alpha_{r_1+1}}y^{r_2+s_1+1}x^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_3}}yx^{\widetilde{\alpha}_{r_3+1}}y^{r_4+s_2}.
\end{align*}
The second sum on the right-hand side of the above equation is
\begin{align*}
&\sum\limits_{{r_1+r_2+r_3+r_4=r,r_2\geqslant 1}\atop {\alpha_1+\cdots+\alpha_{r_1+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_3+1}=b_2}}\binom{\alpha_1}{a}\binom{r_2+s_1-2}{r_2-1}\binom{r_4+s_2-1}{r_4}\\
&\; \times x^{\alpha_1}y\cdots x^{\alpha_{r_1}}yx^{\alpha_{r_1+1}}y^{r_2+s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_3}}y x^{\widetilde{\alpha}_{r_3+1}}y^{r_4+s_2},
\end{align*}
which implies that
\begin{align*}
\Sigma_1=&\sum\limits_{{r_1+r_2+r_3+r_4=r,r_2\geqslant 1}\atop {\alpha_1+\cdots+\alpha_{r_1+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_3+1}=b_2}}\binom{\alpha_1}{a}\binom{r_2+s_1-2}{r_2}\binom{r_4+s_2-1}{r_4}\\
&\; \times x^{\alpha_1}y\cdots x^{\alpha_{r_1}}yx^{\alpha_{r_1+1}}y^{r_2+s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_3}}yx^{\widetilde{\alpha}_{r_3+1}}y^{r_4+s_2}\\
&\;+\sum\limits_{{r_1+r_3+r_4=r}\atop {\alpha_1+\cdots+\alpha_{r_1+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_3+1}=b_2}}\binom{\alpha_1}{a}\binom{r_4+s_2-1}{r_4}\\
&\; \times x^{\alpha_1}y\cdots x^{\alpha_{r_1}}yx^{\alpha_{r_1+1}}y^{s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_3}}yx^{\widetilde{\alpha}_{r_3+1}}y^{r_4+s_2}\\
=&\sum\limits_{{r_1+r_2+r_3+r_4=r}\atop {\alpha_1+\cdots+\alpha_{r_1+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_3+1}=b_2}}\binom{\alpha_1}{a}\binom{r_2+s_1-2}{r_2}\binom{r_4+s_2-1}{r_4}\\
&\; \times x^{\alpha_1}y\cdots x^{\alpha_{r_1}}yx^{\alpha_{r_1+1}}y^{r_2+s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_3}}yx^{\widetilde{\alpha}_{r_3+1}}y^{r_4+s_2}.
\end{align*}
Separating the terms with $r_1\geqslant 1$ from those with $r_1=0$, we have
\begin{align}
\Sigma_1=&\sum\limits_{{r_1+r_2+r_3+r_4=r,r_1\geqslant 1}\atop {\alpha_1+\cdots+\alpha_{r_1+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_3+1}=b_2}}\binom{\alpha_1}{a}\binom{r_2+s_1-2}{r_2}\binom{r_4+s_2-1}{r_4}\nonumber\\
&\; \times x^{\alpha_1}y\cdots x^{\alpha_{r_1}}yx^{\alpha_{r_1+1}}y^{r_2+s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_3}}yx^{\widetilde{\alpha}_{r_3+1}}y^{r_4+s_2}\nonumber\\
&\;+\sum\limits_{r_1+r_2+r_3=r\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2}\binom{a+b_1}{a}\binom{r_1+s_1-2}{r_1}\binom{r_3+s_2-1}{r_3}\nonumber\\
&\; \times x^{a+b_1}y^{r_1+s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2}.
\label{Eq:Sigma-1}
\end{align}
For $\Sigma_2$, we have
\begin{align*}
\Sigma_2=&\sum\limits_{{{1\leqslant l\leqslant s_1\atop 0\leqslant a_1\leqslant a-1}\atop r_1+r_2+r_3=r}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1,\alpha_1\geqslant a_1+b_1,\alpha_{l+1}\geqslant 1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2,\widetilde{\alpha}_1\geqslant 1}}\binom{a_1+b_1-1}{b_1-1}\binom{r_1+s_1-l}{r_1}\binom{r_3+s_2-1}{r_3}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{r_1+s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2}\\
=&\sum\limits_{{1\leqslant l\leqslant s_1\atop r_1+r_2+r_3=r}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1,\alpha_{l+1}\geqslant 1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2,\widetilde{\alpha}_1\geqslant 1}}\binom{\alpha_1}{b_1}\binom{r_1+s_1-l}{r_1}\binom{r_3+s_2-1}{r_3}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{r_1+s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2}.
\end{align*}
Removing the condition $\widetilde{\alpha}_1\geqslant 1$, we get
\begin{align*}
\Sigma_2=&\sum\limits_{{1\leqslant l\leqslant s_1\atop r_1+r_2+r_3=r}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1,\alpha_{l+1}\geqslant 1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2}}\binom{\alpha_1}{b_1}\binom{r_1+s_1-l}{r_1}\binom{r_3+s_2-1}{r_3}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{r_1+s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2}\\
&\;-\sum\limits_{{1\leqslant l\leqslant s_1\atop r_1+r_2+r_3=r,r_1\geqslant 1}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1,\alpha_{l+1}\geqslant 1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2}}\binom{\alpha_1}{b_1}\binom{r_1+s_1-l-1}{r_1-1}\binom{r_3+s_2-1}{r_3}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{r_1+s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2}\\
=&\sum\limits_{{1\leqslant l\leqslant s_1\atop r_1+r_2+r_3=r,r_1\geqslant 1}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1,\alpha_{l+1}\geqslant 1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2}}\binom{\alpha_1}{b_1}\binom{r_1+s_1-l-1}{r_1}\binom{r_3+s_2-1}{r_3}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{r_1+s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2}\\
&\;+\sum\limits_{{1\leqslant l\leqslant s_1\atop r_1+r_2=r}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1,\alpha_{l+1}\geqslant 1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_1+1}=b_2}}\binom{\alpha_1}{b_1}\binom{r_2+s_2-1}{r_2}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}.
\end{align*}
We denote the first sum on the right-hand side of the above equation by $\Sigma_{21}$ and the second sum by $\Sigma_{22}$. Then, removing the condition $\alpha_{l+1}\geqslant 1$, we have
\begin{align*}
\Sigma_{21}=&\sum\limits_{{1\leqslant l\leqslant s_1\atop r_1+r_2+r_3=r,r_1\geqslant 1}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2}}\binom{\alpha_1}{b_1}\binom{r_1+s_1-l-1}{r_1}\binom{r_3+s_2-1}{r_3}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{r_1+s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2}\\
&\;-\sum\limits_{{0\leqslant l\leqslant s_1-1\atop r_1+r_2+r_3=r,r_1\geqslant 1}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2}}\binom{\alpha_1}{b_1}\binom{r_1+s_1-l-2}{r_1}\binom{r_3+s_2-1}{r_3}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{r_1+s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2}\\
=&\sum\limits_{{1\leqslant l\leqslant s_1-1\atop r_1+r_2+r_3=r,r_1\geqslant 1}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2}}\binom{\alpha_1}{b_1}\binom{r_1+s_1-l-2}{r_1-1}\binom{r_3+s_2-1}{r_3}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{r_1+s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2}\\
&\;-\sum\limits_{{r_1+r_2+r_3=r,r_1\geqslant 1}\atop {\widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2}}\binom{a+b_1}{b_1}\binom{r_1+s_1-2}{r_1}\binom{r_3+s_2-1}{r_3}\\
&\;\times x^{a+b_1}y^{r_1+s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2},
\end{align*}
and
\begin{align*}
\Sigma_{22}=&\sum\limits_{{1\leqslant l\leqslant s_1\atop r_1+r_2=r}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_1+1}=b_2}}\binom{\alpha_1}{b_1}\binom{r_2+s_2-1}{r_2}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}\\
&\;-\sum\limits_{{0\leqslant l\leqslant s_1-1\atop r_1+r_2=r}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_1+1}=b_2}}\binom{\alpha_1}{b_1}\binom{r_2+s_2-1}{r_2}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}\\
=&\sum\limits_{{r_1+r_2=r}\atop{\alpha_1+\cdots+\alpha_{s_1+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_1+1}=b_2}}\binom{\alpha_1}{b_1}\binom{r_2+s_2-1}{r_2}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\alpha_{s_1+1}+\widetilde{\alpha}_1}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}\\
&\;-\sum\limits_{{r_1+r_2=r}\atop{\widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_1+1}=b_2}}\binom{a+b_1}{b_1}\binom{r_2+s_2-1}{r_2}\\
&\;\times x^{a+b_1}y^{s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}.
\end{align*}
Hence we get
\begin{align}
\Sigma_2=&\sum\limits_{{1\leqslant l\leqslant s_1-1\atop r_1+r_2+r_3=r,r_1\geqslant 1}\atop{\alpha_1+\cdots+\alpha_{l+1}=a+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2}}\binom{\alpha_1}{b_1}\binom{r_1+s_1-l-2}{r_1-1}\binom{r_3+s_2-1}{r_3}\nonumber\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_l}yx^{\alpha_{l+1}}y^{r_1+s_1-l}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2}\nonumber\\
&\;-\sum\limits_{{r_1+r_2+r_3=r}\atop {\widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_2+1}=b_2}}\binom{a+b_1}{b_1}\binom{r_1+s_1-2}{r_1}\binom{r_3+s_2-1}{r_3}\nonumber\\
&\;\times x^{a+b_1}y^{r_1+s_1}x^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_{r_2}}yx^{\widetilde{\alpha}_{r_2+1}}y^{r_3+s_2}\nonumber\\
&\;+\sum\limits_{{r_1+r_2=r}\atop{\alpha_1+\cdots+\alpha_{s_1}+\beta+\widetilde{\alpha}_2+\cdots+\widetilde{\alpha}_{r_1+1}=a+b_1+b_2\atop a+b_1\geqslant \alpha_1+\cdots+\alpha_{s_1}\geqslant a+b_1-\beta}}\binom{\alpha_1}{b_1}\binom{r_2+s_2-1}{r_2}\nonumber\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\beta}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}.
\label{Eq:Sigma-2}
\end{align}
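The combination of the two sums defining $\Sigma_{21}$ into a single sum above is an instance of Pascal's rule $\binom{n}{k}-\binom{n-1}{k}=\binom{n-1}{k-1}$, applied termwise with $n=r_1+s_1-l-1$ and $k=r_1$ over the common range $1\leqslant l\leqslant s_1-1$. As a sanity check, the rule can be verified numerically (a minimal Python sketch; the helper name is ours):

```python
from math import comb

def pascal_difference_holds(n, k):
    # Pascal's rule in difference form: C(n, k) - C(n-1, k) = C(n-1, k-1)
    return comb(n, k) - comb(n - 1, k) == comb(n - 1, k - 1)

# Check on a grid covering the parameters n = r_1 + s_1 - l - 1, k = r_1
# that occur in the sums above (comb returns 0 when k > n).
assert all(pascal_difference_holds(n, k)
           for n in range(1, 30) for k in range(1, n + 1))
```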
Similarly, for $\Sigma_3$, we have
\begin{align*}
\Sigma_3=&\sum\limits_{{{1\leqslant k\leqslant b_2\atop a_1+a_3\leqslant a-1}\atop{r_1+r_2=r\atop \alpha_1+\cdots+\alpha_{s_1+1}=a+b_1-a_3}}\atop {\widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_1+1}=a_3+b_2\atop \alpha_1\geqslant a_1+b_1,\alpha_{s_1+1}\geqslant 1,\widetilde{\alpha}_1\geqslant a_3+k}}\binom{a_1+b_1-1}{b_1-1}\binom{a_3+k-1}{k-1}\binom{r_2+s_2-1}{r_2}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\alpha_{s_1+1}+\widetilde{\alpha}_1}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}\\
=&\sum\limits_{{{a_1+a_3\leqslant a-1}\atop{r_1+r_2=r\atop \alpha_1+\cdots+\alpha_{s_1+1}=a+b_1-a_3}}\atop {\widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_1+1}=a_3+b_2\atop \alpha_1\geqslant a_1+b_1,\alpha_{s_1+1}\geqslant 1,\widetilde{\alpha}_1\geqslant a_3+1}}\binom{a_1+b_1-1}{b_1-1}\binom{\widetilde{\alpha}_1}{a_3+1}\binom{r_2+s_2-1}{r_2}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\alpha_{s_1+1}+\widetilde{\alpha}_1}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}\\
=&\sum\limits_{{{r_1+r_2=r\atop \alpha_1+\cdots+\alpha_{s_1+1}=a+b_1-a_3}}\atop {\widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{r_1+1}=a_3+b_2\atop \alpha_{s_1+1}\geqslant 1,\widetilde{\alpha}_1\geqslant a_3+1}}\binom{\alpha_1}{b_1}\binom{\widetilde{\alpha}_1}{a_3+1}\binom{r_2+s_2-1}{r_2}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\alpha_{s_1+1}+\widetilde{\alpha}_1}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}\\
=&\sum\limits_{{r_1+r_2=r\atop a_3\leqslant a+b_1-\alpha_1-\cdots-\alpha_{s_1}-1}\atop {\alpha_1+\cdots+\alpha_{s_1}\geqslant a+b_1-\beta+1\atop\alpha_1+\cdots+\alpha_{s_1}+\beta+\widetilde{\alpha}_2+\cdots+\widetilde{\alpha}_{r_1+1}=a+b_1+b_2}}\binom{\alpha_1}{b_1}
\binom{\beta+\alpha_1+\cdots+\alpha_{s_1}-a-b_1+a_3}{a_3+1}\\
&\;\times\binom{r_2+s_2-1}{r_2}x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\beta}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}\\
=&\sum\limits_{{r_1+r_2=r\atop a+b_1\geqslant\alpha_1+\cdots+\alpha_{s_1}\geqslant a+b_1-\beta}\atop \alpha_1+\cdots+\alpha_{s_1}+\beta+\widetilde{\alpha}_2+\cdots+\widetilde{\alpha}_{r_1+1}=a+b_1+b_2}\binom{\alpha_1}{b_1}
\left[\binom{\beta}{a+b_1-\alpha_1-\cdots-\alpha_{s_1}}-1\right]\\
&\;\times\binom{r_2+s_2-1}{r_2}x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\beta}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}\\
=&\sum\limits_{{r_1+r_2=r}\atop \alpha_1+\cdots+\alpha_{s_1}+\beta+\widetilde{\alpha}_2+\cdots+\widetilde{\alpha}_{r_1+1}=a+b_1+b_2}\binom{\alpha_1}{b_1}
\binom{\beta}{a+b_1-\alpha_1-\cdots-\alpha_{s_1}}\\
&\;\times\binom{r_2+s_2-1}{r_2}x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\beta}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}\\
&\;-\sum\limits_{{r_1+r_2=r\atop a+b_1\geqslant \alpha_1+\cdots+\alpha_{s_1}\geqslant a+b_1-\beta}\atop \alpha_1+\cdots+\alpha_{s_1}+\beta+\widetilde{\alpha}_2+\cdots+\widetilde{\alpha}_{r_1+1}=a+b_1+b_2}\binom{\alpha_1}{b_1}\binom{r_2+s_2-1}{r_2}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\beta}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}.
\end{align*}
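The collapsing of the sums over $k$ (first equality) and over $a_3$ (fourth equality) in the computation of $\Sigma_3$ rests on the hockey-stick identity $\sum_{j=0}^{c}\binom{m+j}{j}=\binom{m+c+1}{c}$; in the latter case it enters in the form $\sum_{a_3=0}^{c-1}\binom{\beta-c+a_3}{a_3+1}=\binom{\beta}{c}-1$ with $c=a+b_1-\alpha_1-\cdots-\alpha_{s_1}$. A quick numerical check of both forms (Python sketch; the helper names are ours):

```python
from math import comb

def hockey_stick(m, c):
    # sum_{j=0}^{c} C(m+j, j) == C(m+c+1, c)
    return sum(comb(m + j, j) for j in range(c + 1)) == comb(m + c + 1, c)

def a3_sum_form(beta, c):
    # sum_{a3=0}^{c-1} C(beta-c+a3, a3+1) == C(beta, c) - 1
    total = sum(comb(beta - c + a3, a3 + 1) for a3 in range(c))
    return total == comb(beta, c) - 1

assert all(hockey_stick(m, c) for m in range(15) for c in range(15))
assert all(a3_sum_form(beta, c)
           for beta in range(1, 20) for c in range(beta + 1))
```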
Splitting off the terms with $r_1=0$ from the first sum in the above equation, we have
\begin{align}
\Sigma_3=&\sum\limits_{{r_1+r_2=r,r_1\geqslant 1}\atop \alpha_1+\cdots+\alpha_{s_1}+\beta+\widetilde{\alpha}_2+\cdots+\widetilde{\alpha}_{r_1+1}=a+b_1+b_2}\binom{\alpha_1}{b_1}
\binom{\beta}{a+b_1-\alpha_1-\cdots-\alpha_{s_1}}\nonumber\\
&\;\times\binom{r_2+s_2-1}{r_2}x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\beta}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}\nonumber\\
&\;+\sum\limits_{ \alpha_1+\cdots+\alpha_{s_1}+\beta=a+b_1+b_2}\binom{\alpha_1}{b_1}
\binom{\beta}{a+b_1-\alpha_1-\cdots-\alpha_{s_1}}\nonumber\\
&\;\times\binom{r+s_2-1}{r}x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\beta}y^{r+s_2}\nonumber\\
&\;-\sum\limits_{{r_1+r_2=r\atop a+b_1\geqslant\alpha_1+\cdots+\alpha_{s_1}\geqslant a+b_1-\beta}\atop \alpha_1+\cdots+\alpha_{s_1}+\beta+\widetilde{\alpha}_2+\cdots+\widetilde{\alpha}_{r_1+1}=a+b_1+b_2}\binom{\alpha_1}{b_1}\binom{r_2+s_2-1}{r_2}\nonumber\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\beta}yx^{\widetilde{\alpha}_2}y\cdots x^{\widetilde{\alpha}_{r_1}}yx^{\widetilde{\alpha}_{r_1+1}}y^{r_2+s_2}.
\label{Eq:Sigma-3}
\end{align}
Finally, for $\Sigma_4$, we have
\begin{align*}
\Sigma_4=&\sum\limits_{{{1\leqslant l\leqslant s_2\atop a_1+a_2+a_3+a_4=a-1}\atop{\alpha_1+\cdots+\alpha_{s_1}=a_1+a_2+b_1\atop \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{l+1}=a_3+a_4+b_2+1}}\atop \alpha_1\geqslant a_1+b_1,\widetilde{\alpha}_1\geqslant a_3+b_2,\widetilde{\alpha}_{l+1}\geqslant 1}\binom{a_1+b_1-1}{b_1-1}\binom{a_3+b_2-1}{b_2-1}\binom{r+s_2-l}{r}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_l}yx^{\widetilde{\alpha}_{l+1}}y^{r+s_2-l}\\
=&\sum\limits_{{{1\leqslant l\leqslant s_2\atop \alpha_1+\cdots+\alpha_{s_1}+\widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{l+1}=a+b_1+b_2}\atop{a_1\leqslant \alpha_1+\cdots+\alpha_{s_1}-b_1\atop a_3\leqslant \widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{l+1}-b_2-1}}\atop \alpha_1\geqslant a_1+b_1,\widetilde{\alpha}_1\geqslant a_3+b_2,\widetilde{\alpha}_{l+1}\geqslant 1}\binom{a_1+b_1-1}{b_1-1}\binom{a_3+b_2-1}{b_2-1}\binom{r+s_2-l}{r}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_l}yx^{\widetilde{\alpha}_{l+1}}y^{r+s_2-l}\\
=&\sum\limits_{{\alpha_1+\cdots+\alpha_{s_1}+\widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{l+1}=a+b_1+b_2}\atop 1\leqslant l\leqslant s_2,\widetilde{\alpha}_{l+1}\geqslant 1}\binom{\alpha_1}{b_1}\binom{\widetilde{\alpha}_1}{b_2}\binom{r+s_2-l}{r}\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_l}yx^{\widetilde{\alpha}_{l+1}}y^{r+s_2-l}.
\end{align*}
Without the condition $\widetilde{\alpha}_{l+1}\geqslant 1$, we get
\begin{align}
\Sigma_4=&\sum\limits_{{\alpha_1+\cdots+\alpha_{s_1}+\widetilde{\alpha}_1+\cdots+\widetilde{\alpha}_{l+1}=a+b_1+b_2}\atop 1\leqslant l\leqslant s_2}\binom{\alpha_1}{b_1}\binom{\widetilde{\alpha}_1}{b_2}\binom{r+s_2-l-1}{r-1}\nonumber\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\widetilde{\alpha}_1}y\cdots x^{\widetilde{\alpha}_l}yx^{\widetilde{\alpha}_{l+1}}y^{r+s_2-l}\nonumber\\
&-\sum\limits_{\alpha_1+\cdots+\alpha_{s_1}+\widetilde{\alpha}_1=a+b_1+b_2}\binom{\alpha_1}{b_1}\binom{\widetilde{\alpha}_1}{b_2}\binom{r+s_2-1}{r}
\nonumber\\
&\;\times x^{\alpha_1}y\cdots x^{\alpha_{s_1}}yx^{\widetilde{\alpha}_1}y^{r+s_2}.
\label{Eq:Sigma-4}
\end{align}
Equations \eqref{Eq:Sigma-1}--\eqref{Eq:Sigma-4} show that the sum $\Sigma_1+\Sigma_2+\Sigma_3+\Sigma_4$ equals the right-hand side of formula \eqref{Eq:Shuffle-Res-1-2}. This proves that formula \eqref{Eq:Shuffle-Res-1-2-Old} can be deduced from \eqref{Eq:Shuffle-Res-1-2}.
\end{document}
\begin{document}
\title[Characterizations using positive semi-definite functions]{Characterizations of Herglotz-Nevan\-linna functions using positive semi-definite functions and the Nevanlinna kernel in several variables}
\author{Mitja Nedic}
\address{Mitja Nedic, Department of Mathematics and Statistics, University of Helsinki, PO Box 68, FI-00014 Helsinki, Finland, orc-id: 0000-0001-7867-5874}
\curraddr{}
\email{[email protected]}
\thanks{\textit{Key words.} Herglotz-Nevan\-linna functions, positive semi-definite functions, Poisson-type functions, Nevanlinna kernel.
}
\subjclass[2010]{32A26, 32A99.}
\date{2018-12-19}
\begin{abstract}
In this paper, we give several characterizations of Herglotz-Nevan\-linna functions in terms of a specific type of positive semi-definite functions called Poisson-type functions. This allows us to propose a multidimensional analogue of the classical Nevanlinna kernel and a definition of generalized Nevanlinna functions in several variables. Furthermore, a characterization of the symmetric extension of a Herglotz-Nevan\-linna function is also given. The subclass of Loewner functions is also discussed, as well as an interpretation of the main result in terms of holomorphic functions on the unit polydisk with non-negative real part.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:intro}
Let us denote by $\mathbb C^{+n}$ the poly-upper half-plane in $\mathbb C^n$, \textit{i.e.}\/
$$\mathbb C^{+n}:= (\mathbb C^+)^n = \big\{\bm{z}\in\mathbb C^n \,\big |\,\forall j=1,2,\ldots, n: \mathrm{Im}[z_j]>0 \big\}.$$
Here, we consider the following class of functions.
\begin{define}\label{def:hn_functions}
A function $q \colon \mathbb C^{+n} \to \mathbb C$ is called a \emph{Herglotz-Nevan\-linna function} if it is holomorphic and has non-negative imaginary part.
\end{define}
This is a well-studied class of functions, appearing \textit{e.g.}\/ in \cite{AglerEtal2012,AglerEtal2016,LugerNedic2017,LugerNedic2019,LugerNedic2020,Nedic2019,Savchuk2006,Vladimirov1969,Vladimirov1979}. Particularly when considering such functions of one variable, they appear in many areas of complex and functional analysis and numerous applications. Some examples of these include the moment problem \cite{Akhiezer1965,Nevanlinna1922,Simon1998}, the theory of Sturm-Liouville problems and perturbations \cite{Aronszajn1957,AronszajnBrown1970,Donoghue1965,KacKrein1974}, when deriving physical bounds for passive systems \cite{BernlandEtal2011} or as approximating functions in certain convex optimization problems \cite{IvanenkoETAL2019a,IvanenkoETAL2020}. Applications concerning functions of several variables appear \textit{e.g.}\/ when considering operator monotone functions \cite{AglerEtal2012} or with representations of multidimensional passive systems \cite{Vladimirov1979}.
Herglotz-Nevan\-linna functions admit a powerful integral representation theorem, \textit{cf.}\/ Theorem \ref{thm:intRep_Nvar}, that characterizes this class of functions in terms of a real number $a$, a vector $\bm{b} \in [0,\infty)^n$ and a positive Borel measure $\mu$ on $\mathbb R^n$ satisfying two conditions. The one-variable version of this representation is considered a classical result attributed to Nevanlinna \cite{Nevanlinna1922} and Cauer \cite{Cauer1932}, see also \cite{KacKrein1974}. The multi-dimensional case of this representation was considered in \textit{e.g.}\/ \cite{LugerNedic2017,LugerNedic2019,Vladimirov1969,Vladimirov1979}.
For Herglotz-Nevan\-linna functions of one variable, it follows as an immediate consequence of the integral representation theorem mentioned above that a holomorphic function $q\colon\mathbb C^+ \to \mathbb C$ has non-negative imaginary part if and only if the function
$$(z,w) \mapsto \frac{q(z) - \overline{q(w)}}{z-\overline{w}}$$
is positive semi-definite on $\mathbb C^+ \times \mathbb C^+$, see also Section \ref{sec:main_thm}. This result gives rise to the so-called Nevanlinna kernel, see also Section \ref{subsec:Nevanlinna_kernel}, which plays a fundamental role in \textit{e.g.}\/ the theory of generalized Nevanlinna functions and their applications \cite{DijksmaShondin2000,KreinLanger1977,KurasovLuger2011,LangerLangerSasvari2004}.
The main goal of this paper is to first give a characterization when a function $q\colon\mathbb C^{+n} \to \mathbb C$ is a Herglotz-Nevan\-linna function by investigating the difference $q(\bm{z}) - \overline{q(\bm{w})}$ for $\bm{z},\bm{w} \in \mathbb C^{+n}$, \textit{cf.}\/ Theorem \ref{thm:positive_semi_decomposition}. Here, a crucial role is played by Poisson-type functions which are positive semi-definite functions of a very particular type, see Section \ref{subsec:poisson_type}. This result then allows us to derive an analogous characterization to the one mentioned above using a suitable analogue of the Nevanlinna kernel in several variables, \textit{cf.}\/ Theorem \ref{thm:Nevan_kernel_Nvar}.
The structure of the paper is as follows. After the introduction in Section \ref{sec:intro} we recall the integral representation theorem for Herglotz-Nevan\-linna function along with the necessary corollaries in Section \ref{sec:hn_basics}. Required prerequisites concerning positive semi-definite functions are collected in Section \ref{sec:psd}, with Section \ref{subsec:poisson_type} focusing on Poisson-type functions. The main results of the paper are presented in Section \ref{sec:main_thm}, with Section \ref{subsec:Nevanlinna_kernel} focusing on the Nevanlinna kernel in several variables, Section \ref{subsec:symmetric_extension} presenting an extension of the main result to symmetric extensions of Herglotz-Nevan\-linna functions and Section \ref{subsec:loewner} discussing the subclass of Loewner functions. Finally, Section \ref{sec:polydisk} interprets the main result of the paper in the language of holomorphic functions on the unit polydisk with non-negative real part.
\section{The integral representation theorem}
\label{sec:hn_basics}
Herglotz-Nevan\-linna functions are primarily studied via their corresponding integral representation theorem, the statement of which requires us to first introduce some notation. Given numbers $z \in \C\setminus\R$ and $t \in \mathbb R$, consider first the expressions
$$\begin{array}{RCL}
N_{-1}(z,t) & := & \frac{1}{2\,\mathfrak{i}}\left(\frac{1}{t - z} - \frac{1}{t - \mathfrak{i}}\right), \\[0.35cm]
N_{0}(z,t) & := & \frac{1}{2\,\mathfrak{i}}\left(\frac{1}{t - \mathfrak{i}} - \frac{1}{t + \mathfrak{i}}\right), \\[0.35cm]
N_{1}(z,t) & := & \frac{1}{2\,\mathfrak{i}}\left(\frac{1}{t + \mathfrak{i}} - \frac{1}{t - \overline{z}}\right).
\end{array}$$
Note that $N_0(z,t) \in \mathbb R$ and is independent of $z \in \C\setminus\R$, while
$$\overline{N_{-1}(z,t)} = N_{1}(z,t)$$
for all $z \in \C\setminus\R$ and $t \in \mathbb R$. Define now the kernel $K_n \colon (\C\setminus\R)^n \times \mathbb R^n \to \mathbb C$ as
\begin{multline}\label{eq:kernel_Kn}
K_n(\bm{z},\bm{t}) := \mathfrak{i}\bigg(2\prod_{j=1}^n\big(N_{-1}(z_j,t_j)+N_{0}(\mathfrak{i},t_j)\big)-\prod_{j=1}^nN_{0}(\mathfrak{i},t_j)\bigg) \\
= \mathfrak{i}\left(\frac{2}{(2\,\mathfrak{i})^n}\prod_{\ell=1}^n\left(\frac{1}{t_\ell-z_\ell}-\frac{1}{t_\ell+\mathfrak{i}}\right)-\frac{1}{(2\,\mathfrak{i})^n}\prod_{\ell=1}^n\left(\frac{1}{t_\ell-\mathfrak{i}}-\frac{1}{t_\ell+\mathfrak{i}}\right)\right).
\end{multline}
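Both displayed expressions for $K_n$ in \eqref{eq:kernel_Kn} describe the same function, since $N_{-1}(z,t)+N_{0}(\mathfrak{i},t)=\frac{1}{2\,\mathfrak{i}}\big(\frac{1}{t-z}-\frac{1}{t+\mathfrak{i}}\big)$. A numerical comparison of the two forms can serve as a sanity check (Python sketch; the function names are ours):

```python
I = 1j

def N_minus1(z, t):
    return (1 / (t - z) - 1 / (t - I)) / (2 * I)

def N_0(t):
    # N_0(z, t) does not depend on z
    return (1 / (t - I) - 1 / (t + I)) / (2 * I)

def K_first(z, t):
    # first line of eq. (kernel_Kn)
    p1 = p2 = 1
    for zj, tj in zip(z, t):
        p1 *= N_minus1(zj, tj) + N_0(tj)
        p2 *= N_0(tj)
    return I * (2 * p1 - p2)

def K_second(z, t):
    # second line of eq. (kernel_Kn)
    n = len(z)
    p1 = p2 = 1
    for zj, tj in zip(z, t):
        p1 *= 1 / (tj - zj) - 1 / (tj + I)
        p2 *= 1 / (tj - I) - 1 / (tj + I)
    return I * (2 * p1 - p2) / (2 * I) ** n

z = (0.3 + 1.2j, -0.7 + 0.4j)
t = (1.5, -2.25)
assert abs(K_first(z, t) - K_second(z, t)) < 1e-12
```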
We may now recall the integral representation theorem for Herglotz-Nevan\-linna functions of several variables, \textit{cf.}\/ \cite[Thm. 4.1]{LugerNedic2019}.
\begin{thm}\label{thm:intRep_Nvar}
A function $q\colon \mathbb C^{+n} \to \mathbb C$ is a Herglotz-Nevan\-linna function if and only if $q$ can be written as
\begin{equation}\label{eq:intRep_Nvar}
q(\bm{z}) = a + \sum_{\ell=1}^nb_\ell z_\ell + \frac{1}{\pi^n}\int_{\mathbb R^n}K_n(\bm{z},\bm{t})\mathrm{d}\mu(\bm{t}),
\end{equation}
where $a \in \mathbb R$, $\bm{b} \in [0,\infty)^n$ and $\mu$ is a positive Borel measure on $\mathbb R^n$ satisfying the growth condition
\begin{equation}
\label{eq:measure_growth}
\int_{\mathbb R^n}\prod_{\ell=1}^n\frac{1}{1+t_\ell^2}\mathrm{d}\mu(\bm{t}) < \infty
\end{equation}
and the Nevanlinna condition
\begin{equation}
\label{eq:measure_Nevan}
\sum_{\substack{\bm{\rho} \in \{-1,0,1\}^n \\ -1\in\bm{\rho} \wedge 1\in\bm{\rho}}}\int_{\mathbb R^n}\prod_{j=1}^nN_{\rho_j}(z_j,t_j)\mathrm{d}\mu(\bm{t}) = 0
\end{equation}
for all $\bm{z} \in \mathbb C^{+n}$. Furthermore, for a given function $q$, the triple of representing parameters $(a,\bm{b},\mu)$ is unique.
\end{thm}
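For $n=1$, the kernel \eqref{eq:kernel_Kn} reduces after partial fractions to the classical kernel $\frac{1}{t-z}-\frac{t}{1+t^2}$, and the choice $(a,\bm{b},\mu)=(0,0,\pi\delta_0)$ in \eqref{eq:intRep_Nvar} recovers the Herglotz-Nevan\-linna function $q(z)=-1/z$. A numerical sketch of both claims (Python; the function names are ours):

```python
I = 1j

def K1(z, t):
    # n = 1 instance of the kernel K_n from eq. (kernel_Kn)
    return I * (2 * (1 / (t - z) - 1 / (t + I)) / (2 * I)
                - (1 / (t - I) - 1 / (t + I)) / (2 * I))

def K1_classical(z, t):
    return 1 / (t - z) - t / (1 + t ** 2)

z = 0.4 + 0.9j
for t in (-3.0, -0.5, 0.0, 1.7):
    assert abs(K1(z, t) - K1_classical(z, t)) < 1e-12

# mu = pi * delta_0 and a = b = 0: q(z) = (1/pi) * pi * K1(z, 0) = -1/z
q = K1(z, 0.0)
assert abs(q - (-1 / z)) < 1e-12
assert q.imag >= 0  # Herglotz-Nevanlinna: non-negative imaginary part
```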
An important corollary of this result is the description of the growth of a Herglotz-Nevan\-linna function at infinity in a Stoltz domain. Generally, an \emph{(upper) Stoltz domain} with centre $t_0 \in \mathbb R$ and angle $\theta \in (0,\frac{\pi}{2}]$ is the set
$$\{z \in \mathbb C^+~|~\theta \leq \arg(z-t_0) \leq \pi-\theta\}.$$
The symbol $z \:\scriptsize{\xrightarrow{\vee\:}}\: \infty$ then denotes the limit $|z| \to \infty$ in any Stoltz domain with centre $0$. The growth of a Herglotz-Nevan\-linna function is then captured by the vector $\bm{b}$, as it holds for any $\ell \in \{1,\ldots,n\}$ that
\begin{equation}
\label{eq:b_parameters}
b_\ell = \lim\limits_{z_\ell \:\scriptsize{\xrightarrow{\vee\:}}\: \infty}\frac{q(\bm{z})}{z_\ell},
\end{equation}
see \textit{e.g.}\/ \cite[Cor. 4.6(iv)]{LugerNedic2019}. In particular, the above limit is independent of the entries of the vector $\bm{z}$ at the non-$\ell$-th positions.
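As a concrete instance of \eqref{eq:b_parameters}, the one-variable Herglotz-Nevan\-linna function $q(z)=2z-1/z$ (here $b=2$) satisfies $q(z)/z\to 2$ along the imaginary axis, which lies inside every Stoltz domain with centre $0$. A minimal sketch (Python):

```python
def q(z):
    return 2 * z - 1 / z

# Approach infinity along the imaginary axis (inside every Stoltz domain);
# q(iy)/(iy) = 2 + 1/y^2 -> 2 as y -> infinity.
ratios = [q(1j * y) / (1j * y) for y in (1e2, 1e4, 1e6)]
assert abs(ratios[-1] - 2) < 1e-10
```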
\section{Positive semi-definite functions}
\label{sec:psd}
Consider the following definition, \textit{cf.}\/ \cite[pg. 3002]{AglerEtal2016}.
\begin{define}\label{def:positive_semi_def_function}
Let $\Omega \subseteq \mathbb C^n$ be an open set. A function $F\colon \Omega \times \Omega \to \mathbb C$ is called \emph{positive semi-definite} if for all $m \geq 1$ and every choice of $m$ vectors $\bm{z}_1,\bm{z}_2,\ldots,\bm{z}_m \in \Omega$ and $m$ numbers $c_1,c_2,\ldots,c_m \in \mathbb C$ it holds that
\begin{equation}
\label{eq:positive_semi_def_function}
\sum_{i,j = 1}^mF(\bm{z}_i,\bm{z}_j)\,c_i\,\overline{c}_j \geq 0.
\end{equation}
\end{define}
Note that we do not impose any regularity (or analyticity) on the function $F$, although the majority of the positive semi-definite functions that will be considered in this paper will be holomorphic in the first $n$ variables and anti-holomorphic in the second $n$ variables.
Alternatively, one can say that a function $F\colon \Omega \times \Omega \to \mathbb C$ is positive semi-definite if for all $m \geq 1$ and every choice of $m$ vectors $\bm{z}_1,\bm{z}_2,\ldots,\bm{z}_m \in \Omega$ it holds that the matrix
$$\left[\begin{array}{cccc}
F(\bm{z}_1,\bm{z}_1) & F(\bm{z}_1,\bm{z}_2) & \ldots & F(\bm{z}_1,\bm{z}_m) \\
F(\bm{z}_2,\bm{z}_1) & F(\bm{z}_2,\bm{z}_2) & \ldots & F(\bm{z}_2,\bm{z}_m) \\
\vdots & \vdots & \ddots & \vdots \\
F(\bm{z}_m,\bm{z}_1) & F(\bm{z}_m,\bm{z}_2) & \ldots & F(\bm{z}_m,\bm{z}_m)
\end{array}\right]_{m \times m}$$
is a positive semi-definite matrix.
\begin{remark}
We note that what we call positive semi-definite \emph{functions} here are sometimes referred to as positive semi-definite \emph{kernels}, \textit{e.g.}\/ in \cite{Aronszajn1950,BuescuPaixao2014,Sasvari1994}. Our terminology stems from \textit{e.g.}\/ \cite[pg. 3002]{AglerEtal2016} and refers to functions of $2n$ complex variables. These functions should not be confused with positive semi-definite functions in the sense of \textit{e.g.}\/ \cite[Def. 1.3.1]{Sasvari1994}, which refers to functions of one real variable.
\end{remark}
The following elementary properties now follow from the definition and will be used later on, see \textit{e.g.}\/ \cite[Sec. 1.3]{Sasvari1994} for a proof.
\begin{lemma}\label{lem:positive_semi_def_properties}
If $F_1$ and $F_2$ are two positive semi-definite functions on $\Omega \times \Omega$, then the functions $F_1 + F_2$ and $F_1F_2$ are also positive semi-definite. Furthermore, for any positive semi-definite function $F$ on $\Omega \times \Omega$, it holds that $F(\bm{z},\bm{z}) \geq 0$ and
$$F(\bm{z},\bm{w}) = \overline{F(\bm{w},\bm{z})}$$
for any $\bm{z},\bm{w} \in \Omega$.
\end{lemma}
\begin{example}
\label{ex:psd_simple}
Let $\Omega \subseteq \mathbb C^n$ and let $f\colon \Omega \to \mathbb C$ be a holomorphic function. Then, the function
$$F\colon (\bm{z},\bm{w}) \mapsto f(\bm{z})\overline{f(\bm{w})}$$
is positive semi-definite on $\Omega \times \Omega$. Indeed, if $m \geq 1$, $\bm{z}_1,\bm{z}_2,\ldots,\bm{z}_m \in \Omega$ and $c_1,c_2,\ldots,c_m \in \mathbb C$, then
$$\sum_{i,j = 1}^mF(\bm{z}_i,\bm{z}_j)\,c_i\,\overline{c}_j = \sum_{i,j = 1}^mf(\bm{z}_i)c_i\,\overline{f(\bm{z}_j)}\overline{c}_j = |f(\bm{z}_1)c_1 + \ldots + f(\bm{z}_m)c_m|^2 \geq 0.$$
Note that the last equality above holds by a combinatorial expansion of the square of an absolute value of a sum of complex numbers, \textit{i.e.}\/ if $m \in \mathbb N$ and $\zeta_1,\ldots,\zeta_m \in \mathbb C$, then
\begin{multline*}
|\zeta_1 + \ldots + \zeta_m|^2 = (\zeta_1 + \ldots + \zeta_m)(\overline{\zeta}_1 + \ldots + \overline{\zeta}_m) \\ = \zeta_1\overline{\zeta}_1 + \zeta_1\overline{\zeta}_2 + \zeta_2\overline{\zeta}_1 + \ldots + \zeta_m\overline{\zeta}_m = \sum_{i,j=1}^m\zeta_i\overline{\zeta}_j
\end{multline*}
as desired.
$\lozenge$
\end{example}
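The quadratic form in the example above can also be checked numerically: for $F(\bm{z},\bm{w})=f(\bm{z})\overline{f(\bm{w})}$ it equals $|\sum_i f(\bm{z}_i)c_i|^2$ and is therefore real and non-negative for any choice of points and coefficients. A sketch (Python; the sample function $f$ is ours):

```python
def f(z):
    # an arbitrary holomorphic sample function
    return 1 / (z + 1j) + z ** 2

def F(z, w):
    return f(z) * f(w).conjugate()

points = [0.5 + 1j, -1.2 + 0.3j, 2.0 + 2.5j]
coeffs = [1.0 + 2.0j, -0.5j, 3.0]

quad = sum(F(zi, zj) * ci * cj.conjugate()
           for zi, ci in zip(points, coeffs)
           for zj, cj in zip(points, coeffs))
# The quadratic form equals |sum_i f(z_i) c_i|^2, hence real and >= 0.
assert abs(quad.imag) < 1e-10
assert quad.real >= 0
```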
\subsection{Poisson-type functions}
\label{subsec:poisson_type}
In later sections, we will mostly look at functions on the poly-upper half-plane. There, the following class of functions is of major importance.
\begin{define}\label{def:poisson_type}
A function $F\colon \mathbb C^{+n} \times \mathbb C^{+n} \to \mathbb C$ is called a \emph{Poisson-type function} if there exists a positive Borel measure $\mu$ on $\mathbb R^n$ satisfying the growth condition \eqref{eq:measure_growth} such that
$$F(\bm{z},\bm{w}) = \frac{1}{\pi^n}\int_{\mathbb R^n}\prod_{\ell = 1}^n\frac{1}{(t_\ell - z_\ell)(t_\ell - \overline{w_\ell})}\mathrm{d}\mu(\bm{t})$$
for all $\bm{z},\bm{w} \in \mathbb C^{+n}$. In this case, we say that the function $F$ is given by the measure $\mu$.
\end{define}
There are several important remarks to be made here. First, the name of these functions refers to their connection to the Poisson kernel of $\mathbb C^{+n}$ which will become apparent in the proof of Lemma \ref{lem:Stieltjes_for_Poisson} later on. Second, the above definition makes sense, as the assumption that the measure $\mu$ satisfies the growth condition \eqref{eq:measure_growth} ensures that the integral
$$\int_{\mathbb R^n}\prod_{\ell = 1}^n\frac{1}{(t_\ell - z_\ell)(t_\ell - \overline{w_\ell})}\mathrm{d}\mu(\bm{t})$$
is well-defined for $\bm{z},\bm{w} \in \mathbb C^{+n}$. Third, the normalizing factor $\pi^{-n}$ is present so that, in the case $n=1$, the function given by the Lebesgue measure $\lambda_{\mathbb R}$ equals $2\,\mathfrak{i}(z-\overline{w})^{-1}$. Finally, it is not immediately clear from Definition \ref{def:poisson_type} whether the correspondence between a function $F$ and a measure $\mu$ constitutes a bijection, though this will turn out to be the case later on, \textit{cf.}\/ Lemma \ref{lem:Stieltjes_for_Poisson}.
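The normalization claim for the Lebesgue measure can be verified numerically: with $n=1$ and $\mu=\lambda_{\mathbb R}$, the substitution $t=\tan\theta$ turns the defining integral into one over a finite interval, amenable to a plain midpoint rule. A quadrature sketch (Python; all names are ours):

```python
from math import pi, tan, cos

def poisson_type_lebesgue(z, w, steps=200_000):
    # (1/pi) * integral over R of dt / ((t - z)(t - conj(w)))
    # via t = tan(theta), dt = sec^2(theta) d(theta), midpoint rule.
    h = pi / steps
    total = 0j
    for k in range(steps):
        theta = -pi / 2 + (k + 0.5) * h
        t = tan(theta)
        total += (1 / cos(theta) ** 2) / ((t - z) * (t - w.conjugate()))
    return total * h / pi

z, w = 1j, 2j
exact = 2j / (z - w.conjugate())   # claimed closed form 2i/(z - conj(w))
assert abs(poisson_type_lebesgue(z, w) - exact) < 1e-6
```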
We will now show three elementary, but important, properties of Poisson-type functions.
\begin{lemma}
\label{lem:positive_semi_poisson}
Let $F$ be a Poisson-type function given by a measure $\mu$. Then, the function $F$ is positive semi-definite on $\mathbb C^{+n} \times \mathbb C^{+n}$.
\end{lemma}
\proof
Let $m \in \mathbb N$ be arbitrary and choose freely $m$ vectors $\bm{z}_1,\bm{z}_2,\ldots,\bm{z}_m \in \mathbb C^{+n}$ and $m$ numbers $c_1,c_2,\ldots,c_m \in \mathbb C$. In this case, we calculate that
$$\begin{array}{RCL}
\multicolumn{3}{L}{\sum_{i,j=1}^m F(\bm{z}_i,\bm{z}_j)\,c_i\,\overline{c}_j = \frac{1}{\pi^n}\int_{\mathbb R^n}\sum_{i,j=1}^m\bigg[c_i\,\overline{c}_j\prod_{\ell=1}^{n}\frac{1}{(t_\ell - (\bm{z}_i)_\ell)(t_\ell - \overline{(\bm{z}_j)}_\ell)}\bigg]\mathrm{d}\mu(\bm{t})} \\[0.6cm]
~~~~~~~ & = & \frac{1}{\pi^n}\int_{\mathbb R^n}\sum_{i,j=1}^m\bigg[c_i\prod_{\ell=1}^{n}\frac{1}{t_\ell - (\bm{z}_i)_\ell}\,\cdot\,\overline{c_j \prod_{\ell=1}^{n}\frac{1}{t_\ell - (\bm{z}_j)_\ell}}\bigg]\mathrm{d}\mu(\bm{t}) \\[0.6cm]
~ & = & \frac{1}{\pi^n}\int_{\mathbb R^n}\bigg|\sum_{i=1}^m\bigg[c_i\prod_{\ell=1}^{n}\frac{1}{t_\ell - (\bm{z}_i)_\ell}\bigg]\,\bigg|^2\mathrm{d}\mu(\bm{t}) \geq 0,
\end{array}$$
where the last equality follows by the same combinatorial argument used in Example \ref{ex:psd_simple}. This finishes the proof.
\endproof
\begin{lemma}
\label{lem:poisson_growth}
Let $F$ be a Poisson-type function given by a measure $\mu$. Then, it is holomorphic in its first $n$ variables, anti-holomorphic in its second $n$ variables and satisfies the growth condition that for every $j \in \{1,\ldots,n\}$ we have
\begin{equation}
\label{eq:positive_semi_growth}
\lim\limits_{z_j \:\scriptsize{\xrightarrow{\vee\:}}\: \infty}F(\bm{z},\bm{w}) = \lim\limits_{w_j \:\scriptsize{\xrightarrow{\vee\:}}\: \infty}F(\bm{z},\bm{w}) = 0.
\end{equation}
\end{lemma}
\proof
Fix first an arbitrary $\bm{w} \in \mathbb C^{+n}$ and consider the function $\bm{z} \mapsto F(\bm{z},\bm{w})$ on $\mathbb C^{+n}$. This function is holomorphic as the kernel
$$\prod_{\ell = 1}^n\frac{1}{(t_\ell - z_\ell)(t_\ell - \overline{w_\ell})}$$
is holomorphic in the $\bm{z}$-variables for every $\bm{t} \in \mathbb R^n$ while the function
$$\bm{z} \mapsto \frac{1}{\pi^n}\int_{\mathbb R^n}\prod_{\ell = 1}^n\frac{1}{|t_\ell - z_\ell||t_\ell - \overline{w_\ell}|}\mathrm{d}\mu(\bm{t})$$
is locally uniformly bounded on compact subsets of $\mathbb C^{+n}$ due to the fact that the measure $\mu$ satisfies the growth condition \eqref{eq:measure_growth}. An analogous argument may now be repeated to show that, for every $\bm{z} \in \mathbb C^{+n}$, the function $\bm{w} \mapsto F(\bm{z},\bm{w})$ is anti-holomorphic on $\mathbb C^{+n}$.
To prove that the function $F$ satisfies the growth conditions \eqref{eq:positive_semi_growth}, it suffices to only consider the case when $z_1 \:\scriptsize{\xrightarrow{\vee\:}}\: \infty$, as all other cases may be treated analogously. In this case, we note, for $\bm{z},\bm{w} \in \mathbb C^{+n}$, that
\begin{multline*}
|F(\bm{z},\bm{w})| \leq \frac{1}{\pi^n}\int_{\mathbb R^n}\prod_{j=1}^n\frac{1}{|t_j-z_j||t_j - \overline{w}_j|}\mathrm{d}\mu(\bm{t}) \\ = \frac{1}{\pi^n}\int_{\mathbb R^n}\left|\frac{t_1-\mathfrak{i}}{t_1-z_1}\right|\frac{1}{|t_1-\mathfrak{i}||t_1-\overline{w}_1|}\prod_{j=2}^n\frac{1}{|t_j-z_j||t_j - \overline{w}_j|}\mathrm{d}\mu(\bm{t}).
\end{multline*}
As $z_1 \:\scriptsize{\xrightarrow{\vee\:}}\: \infty$, we may assume that $\mathrm{Im}[z_1] > 1$, yielding that
$$\left|\frac{t_1-\mathfrak{i}}{t_1-z_1}\right| \leq 1$$
for all $t_1 \in \mathbb R$. Furthermore, the function
$$\bm{t} \mapsto \frac{1}{|t_1-\mathfrak{i}||t_1-\overline{w}_1|}\prod_{j=2}^n\frac{1}{|t_j-z_j||t_j - \overline{w}_j|}$$
is integrable with respect to the measure $\mu$ on $\mathbb R^n$ as $\mu$ satisfies the growth condition \eqref{eq:measure_growth}. Thus, by Lebesgue's dominated convergence theorem, we have that
$$\lim\limits_{z_1 \:\scriptsize{\xrightarrow{\vee\:}}\: \infty}F(\bm{z},\bm{w}) = \frac{1}{\pi^n}\int_{\mathbb R^n}\lim\limits_{z_1 \:\scriptsize{\xrightarrow{\vee\:}}\: \infty}\prod_{\ell=1}^{n}\frac{1}{(t_\ell - z_\ell)(t_\ell - \overline{w}_\ell)}\mathrm{d}\mu(\bm{t}) = 0.$$
Note now that the function $F$ is positive semi-definite by Lemma \ref{lem:positive_semi_poisson}, implying, by Lemma \ref{lem:positive_semi_def_properties}, that $F(\bm{z},\bm{w}) = \overline{F(\bm{w},\bm{z})}$ for any $\bm{z},\bm{w} \in \mathbb C^{+n}$. Hence,
$$\lim\limits_{w_j \:\scriptsize{\xrightarrow{\vee\:}}\: \infty}F(\bm{z},\bm{w}) = \lim\limits_{w_j \:\scriptsize{\xrightarrow{\vee\:}}\: \infty}\overline{F(\bm{w},\bm{z})} = 0.$$
This finishes the proof.
\endproof
\begin{lemma}
\label{lem:Stieltjes_for_Poisson}
Let $F$ be a Poisson-type function given by a measure $\mu$. Then, the measure $\mu$ may be reconstructed from the function $F$ via the Stieltjes inversion formula. More precisely, let $\psi\colon \mathbb R^n \to \mathbb R$ be a $\mathcal C^1$-function for which there exists some constant $C \geq 0$ such that $|\psi(\bm{x})| \leq C\prod_{j=1}^n(1+x_j^2)^{-1}$ for all $\bm{x} \in \mathbb R^n$. Then, it holds that
\begin{equation}
\label{eq:Stieltjes_for_Poisson}
\lim\limits_{\bm{y} \to \bm{0}^+}\int_{\mathbb R^n}\psi(\bm{x})\cdot\prod_{j=1}^ny_j\cdot F(\bm{x}+\mathfrak{i}\,\bm{y},\bm{x}+\mathfrak{i}\,\bm{y})\mathrm{d}\bm{x} = \int_{\mathbb R^n}\psi(\bm{t})\mathrm{d}\mu(\bm{t}).
\end{equation}
\end{lemma}
\proof
We only need to observe that for a Poisson-type function $F$, it holds that
$$\prod_{j=1}^ny_j\cdot F(\bm{x}+\mathfrak{i}\,\bm{y},\bm{x}+\mathfrak{i}\,\bm{y}) = \frac{1}{\pi^n}\int_{\mathbb R^n}\mathcal P_n(\bm{x}+\mathfrak{i}\,\bm{y},\bm{t})\mathrm{d}\mu(\bm{t}),$$
where $\mathcal P_n$ denotes the Poisson kernel of the poly-upper half-plane which, we recall, is defined for $\bm{z} \in \mathbb C^{+n}$ and $\bm{t} \in \mathbb R^n$ as
$$\mathcal P_n(\bm{z},\bm{t}) := \prod_{j=1}^n\frac{\mathrm{Im}[z_j]}{|t_j-z_j|^2}.$$
The statement then follows immediately from the properties of the Poisson kernel as in the proof of the Stieltjes inversion formula for Herglotz-Nevan\-linna functions of several variables, see \cite[pg. 1197]{LugerNedic2019}.
\endproof
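As an illustration of Lemma \ref{lem:Stieltjes_for_Poisson} in one variable, take $\mu=\pi\delta_0$, so that $F(z,w)=(z\overline{w})^{-1}$, and $\psi(x)=(1+x^2)^{-1}$. The left-hand side of \eqref{eq:Stieltjes_for_Poisson} before the limit is then $\int_{\mathbb R}\psi(x)\,y\,(x^2+y^2)^{-1}\mathrm{d}x=\pi/(1+y)$, which tends to $\pi=\int_{\mathbb R}\psi\,\mathrm{d}(\pi\delta_0)$ as $y\to 0^+$. A quadrature sketch (Python; all names are ours):

```python
from math import pi, tan, cos

def psi(x):
    return 1 / (1 + x ** 2)

def smeared_mass(y, steps=200_000):
    # integral over R of psi(x) * y * F(x+iy, x+iy) dx, where
    # F(z, w) = 1/(z * conj(w)), so y * F = y / (x^2 + y^2);
    # substitute x = tan(theta) and apply the midpoint rule.
    h = pi / steps
    total = 0.0
    for k in range(steps):
        theta = -pi / 2 + (k + 0.5) * h
        x = tan(theta)
        total += psi(x) * y / (x ** 2 + y ** 2) / cos(theta) ** 2
    return total * h

# Closed form: smeared_mass(y) = pi / (1 + y); the limit y -> 0+ gives
# pi = pi * psi(0) = integral of psi against pi * delta_0.
for y in (0.5, 0.2, 0.1):
    assert abs(smeared_mass(y) - pi / (1 + y)) < 1e-6
```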
Recall now that the representing measure of a Herglotz-Nevan\-linna function satisfies the Nevanlinna condition \eqref{eq:measure_Nevan} in addition to the growth condition \eqref{eq:measure_growth}. As satisfying condition \eqref{eq:measure_growth} suffices for a positive Borel measure on $\mathbb R^n$ to define a Poisson-type function, it is apparent that Poisson-type functions given by representing measures of Herglotz-Nevan\-linna functions constitute a smaller subclass. This is reflected by the following condition.
\begin{lemma}
\label{lem:poisson_pluriharmonic}
Let $F$ be a Poisson-type function given by a measure $\mu$. Then, the function
\begin{equation}
\label{eq:poisson_pluriharmonic}
\bm{z} \mapsto \prod_{j=1}^n\mathrm{Im}[z_j]\cdot F(\bm{z},\bm{z})
\end{equation}
is pluriharmonic on $\mathbb C^{+n}$ if and only if the measure $\mu$ satisfies the Nevanlinna condition \eqref{eq:measure_Nevan}.
\end{lemma}
\proof
We have already noted in the proof of Lemma \ref{lem:Stieltjes_for_Poisson} that
$$\prod_{j=1}^n\mathrm{Im}[z_j]\cdot F(\bm{z},\bm{z}) = \frac{1}{\pi^n}\int_{\mathbb R^n}\mathcal P_n(\bm{z},\bm{t})\mathrm{d}\mu(\bm{t}).$$
The statement of the lemma now follows from \cite[Prop. 5.2 and 5.3]{LugerNedic2019}.
\endproof
\begin{example}
Consider the Poisson-type function $F$ given by the measure $\pi^2\delta_{(0,0)}$, where $\delta_{(0,0)}$ denotes the Dirac measure at $(0,0) \in \mathbb R^2$, \textit{i.e.}\/
$$F(\bm{z},\bm{w}) = \frac{1}{z_1\,\overline{w}_1\,z_2\,\overline{w}_2}.$$
Via Lemma \ref{lem:poisson_pluriharmonic}, we may determine whether this measure satisfies the Nevanlinna condition \eqref{eq:measure_Nevan}. Calculating \textit{e.g.}\/ that
\begin{multline*}
\frac{\partial^2}{\partial z_1 \partial \overline{z}_2}\mathrm{Im}[z_1]\,\mathrm{Im}[z_2]\,F(\bm{z},\bm{z}) = -\frac{1}{4}\,\frac{\partial^2}{\partial z_1 \partial \overline{z}_2}\,\frac{(z_1-\overline{z}_1)(z_2-\overline{z}_2)}{|z_1|^2|z_2|^2} \\
= -\frac{1}{4} \cdot \frac{\partial}{\partial z_1}\,\frac{z_1-\overline{z}_1}{|z_1|^2}\cdot\frac{\partial}{\partial \overline{z}_2}\,\frac{z_2-\overline{z}_2}{|z_2|^2} = -\frac{1}{4}\cdot \frac{1}{{z_1}^2}\cdot\frac{-1}{{\overline{z}_2}^2} \neq 0,
\end{multline*}
which suffices to conclude that the function $\bm{z} \mapsto \mathrm{Im}[z_1]\,\mathrm{Im}[z_2]\,F(\bm{z},\bm{z})$ is not pluriharmonic. Hence, the measure $\pi^2\delta_{(0,0)}$ does not satisfy the Nevanlinna condition \eqref{eq:measure_Nevan}.
$\lozenge$
\end{example}
\section{Characterization via positive semi-definite functions}
\label{sec:main_thm}
Let us begin by briefly recalling how Herglotz-Nevan\-linna functions in one variable are characterized via positive semi-definite functions. When $n=1$, Theorem \ref{thm:intRep_Nvar} states that $q\colon \mathbb C^+ \to \mathbb C$ is a Herglotz-Nevan\-linna function if and only if there exist a number $a \in \mathbb R$, a number $b \geq 0$ and a positive Borel measure $\mu$ on $\mathbb R$ satisfying $\int_\mathbb R(1+t^2)^{-1}\mathrm{d}\mu(t) < \infty$ such that
$$q(z) = a + b\,z + \frac{1}{\pi}\int_\mathbb R\left(\frac{1}{t-z}-\frac{t}{1+t^2}\right)\mathrm{d}\mu(t)$$
for all $z \in \mathbb C^+$. This representation implies immediately that
\begin{equation}
\label{eq:HN_decomposition_1var}
q(z) - \overline{q(w)} = b\,(z-\overline{w}) + (z-\overline{w})\,\frac{1}{\pi}\int_\mathbb R\frac{1}{(t-z)(t-\overline{w})}\mathrm{d}\mu(t)
\end{equation}
for all $z,w \in \mathbb C^+$. Equivalently, one may reformulate the above equality to say that for every Herglotz-Nevan\-linna function $q$ the function
$$(z,w) \mapsto \frac{q(z) - \overline{q(w)}}{z-\overline{w}} = b + \frac{1}{\pi}\int_\mathbb R\frac{1}{(t-z)(t-\overline{w})}\mathrm{d}\mu(t)$$
is positive semi-definite on $\mathbb C^+ \times \mathbb C^+$. Conversely, if $q\colon \mathbb C^+ \to \mathbb C$ is a holomorphic function for which the function
$$D\colon (z,w) \mapsto \frac{q(z) - \overline{q(w)}}{z-\overline{w}}$$
is positive semi-definite on $\mathbb C^+ \times \mathbb C^+$, then $q$ must be a Herglotz-Nevan\-linna function. This is seen by evaluating the function $D$ at $z=w$ and using Lemma \ref{lem:positive_semi_def_properties}. This characterization also leads to the introduction of the Nevanlinna kernel and generalized Nevanlinna functions, to which we will return in Section \ref{subsec:Nevanlinna_kernel}.
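For instance, consider the Herglotz-Nevan\-linna function $q(z) = -1/z$, which is represented by the data $(0,0,\pi\delta_0)$. One computes directly that
$$\frac{q(z) - \overline{q(w)}}{z-\overline{w}} = \frac{1}{z-\overline{w}}\left(\frac{1}{\overline{w}} - \frac{1}{z}\right) = \frac{1}{z\,\overline{w}} = \frac{1}{z}\cdot\overline{\left(\frac{1}{w}\right)},$$
which is positive semi-definite on $\mathbb C^+ \times \mathbb C^+$ since it is of the form $(z,w) \mapsto f(z)\,\overline{f(w)}$ with $f(z) = z^{-1}$.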
The next objective is therefore to determine whether a decomposition analogous to \eqref{eq:HN_decomposition_1var} holds for Herglotz-Nevan\-linna functions of several variables.
\subsection{The main theorem}
\label{subsec:main_thm}
Our main result is the following.
\begin{thm}\label{thm:positive_semi_decomposition}
Let $n \in \mathbb N$ and let $q\colon \mathbb C^{+n} \to \mathbb C$ be a holomorphic function. Then, $q$ is a Herglotz-Nevan\-linna function if and only if there exist a vector $\bm{d} \in [0,\infty)^n$ and a positive semi-definite function $D$ on $\mathbb C^{+n} \times \mathbb C^{+n}$ satisfying the growth condition \eqref{eq:positive_semi_growth} such that the equality
\begin{equation}
\label{eq:positive_semi_decomposition}
q(\bm{z}) - \overline{q(\bm{w})} = \sum_{j=1}^nd_j(z_j - \overline{w}_j) + \frac{1}{(2\,\mathfrak{i})^{n-1}}\prod_{j=1}^n(z_j - \overline{w}_j)\cdot D(\bm{z},\bm{w})
\end{equation}
holds for all $\bm{z},\bm{w} \in \mathbb C^{+n}$.
Furthermore, let $(a,\bm{b},\mu)$ be the representing parameters of the function $q$ in the sense of Theorem \ref{thm:intRep_Nvar}. If $a = 0$, the correspondence between the function $q$ and the parameters $\bm{d}$ and $D$ is unique and it holds that $\bm{d} = \bm{b}$ and that $D$ is the Poisson-type function given by the measure $\mu$.
\end{thm}
\proof
\textit{PART 1:} Assume first that $q$ is a Herglotz-Nevan\-linna function. Then, we wish to deduce that it admits a decomposition of the form \eqref{eq:positive_semi_decomposition} for some vector $\bm{d}$ and some positive semi-definite function $D$ on $\mathbb C^{+n} \times \mathbb C^{+n}$. Since a Herglotz-Nevan\-linna function $q$ is uniquely determined by its data $(a,\bm{b},\mu)$ in the sense of Theorem \ref{thm:intRep_Nvar}, it may be uniquely written as the sum
$$q = q_a + q_b + q_c,$$
where $q_a$ is given by the data $(a,\bm{0},0)$, $q_b$ is given by the data $(0,\bm{b},0)$ and $q_c$ is given by the data $(0,\bm{0},\mu)$. Hence, it suffices to prove the desired result for each of these three special cases separately.
\textit{Case 1.a:} If a Herglotz-Nevan\-linna function $q$ is given by the data $(a,\bm{0},0)$ in the sense of Theorem \ref{thm:intRep_Nvar}, then it holds, for every $\bm{z},\bm{w} \in \mathbb C^{+n}$, that $q(\bm{z}) - \overline{q(\bm{w})} = 0$. Thus, to satisfy equality \eqref{eq:positive_semi_decomposition}, one may choose $\bm{d} = \bm{0}$ and $D \equiv 0$.
\textit{Case 1.b:} If a Herglotz-Nevan\-linna function $q$ is given by the data $(0,\bm{b},0)$ in the sense of Theorem \ref{thm:intRep_Nvar}, then it holds, for every $\bm{z},\bm{w} \in \mathbb C^{+n}$, that
$$q(\bm{z}) - \overline{q(\bm{w})} = \sum_{j=1}^nb_j(z_j - \overline{w}_j).$$
Thus, to satisfy equality \eqref{eq:positive_semi_decomposition}, one may choose $\bm{d} = \bm{b}$ and $D \equiv 0$.
\textit{Case 1.c:} If a Herglotz-Nevan\-linna function $q$ is given by the data $(0,\bm{0},\mu)$ in the sense of Theorem \ref{thm:intRep_Nvar}, it holds, for every $\bm{z},\bm{w} \in \mathbb C^{+n}$, that
$$q(\bm{z}) - \overline{q(\bm{w})} = \frac{1}{\pi^n}\int_{\mathbb R^n}\big(K_n(\bm{z},\bm{t}) - \overline{K_n(\bm{w},\bm{t})}\big)\mathrm{d}\mu(\bm{t}).$$
In order to be able to describe the difference $K_n(\bm{z},\bm{t}) - \overline{K_n(\bm{w},\bm{t})}$ more precisely, we introduce the expression
\begin{multline}\label{eq:extended_Poisson}
\mathcal P_n(\bm{z},\bm{w},\bm{t}) := \prod_{\ell=1}^n\bigg(N_{-1}(z_\ell,t_\ell) + N_{0}(\mathfrak{i},t_\ell) + N_{1}(w_\ell,t_\ell)\bigg) \\ = \frac{1}{(2\,\mathfrak{i})^n}\prod_{\ell=1}^n\left(\frac{1}{t_\ell-z_\ell}-\frac{1}{t_\ell-\overline{w_\ell}}\right) = \frac{1}{(2\,\mathfrak{i})^n}\prod_{\ell=1}^n\frac{z_\ell-\overline{w_\ell}}{(t_\ell - z_\ell)(t_\ell-\overline{w_\ell})}.
\end{multline}
This expression can be thought of as an ``extended'' Poisson kernel of $\mathbb C^{+n}$, as $\mathcal P_n(\bm{z},\bm{z},\bm{t})$ equals the usual Poisson kernel of $\mathbb C^{+n}$.
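In the simplest case $n = 1$, the extended kernel reduces to
$$\mathcal P_1(z,w,t) = \frac{1}{2\,\mathfrak{i}}\,\frac{z-\overline{w}}{(t-z)(t-\overline{w})},$$
and setting $w = z$ indeed recovers
$$\mathcal P_1(z,z,t) = \frac{1}{2\,\mathfrak{i}}\,\frac{z-\overline{z}}{|t-z|^2} = \frac{\mathrm{Im}[z]}{|t-z|^2},$$
the usual Poisson kernel of $\mathbb C^+$.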
We now claim that the equality
\begin{equation}\label{eq:kernel_difference}
\frac{1}{2\,\mathfrak{i}}\big(K_n(\bm{z},\bm{t}) - \overline{K_n(\bm{w},\bm{t})}\big) = \mathcal P_n(\bm{z},\bm{w},\bm{t}) - \sum_{\substack{\bm{\rho} \in \{-1,0,1\}^n \\ -1\in\bm{\rho} \wedge 1\in\bm{\rho}}}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j)
\end{equation}
holds for every $\bm{t} \in \mathbb R^n$ and every $\bm{z},\bm{w} \in \C\setminus\RN$, where the choice of variable $\epsilon_{\ell}$ is determined by
\begin{equation}\label{eq:epsilon_choice}
\epsilon_{\ell}(\alpha,\beta) :=
\left\{\begin{array}{rcl}
\alpha & ; & \ell = -1, \\
\mathfrak{i} & ; & \ell = 0, \\
\beta & ; & \ell = 1,
\end{array}\right.
\end{equation}
\textit{i.e.}\/ the choice $\epsilon_\ell$ ensures that the first input of the term $N_{-1}$ is always taken from the vector $\bm{z}$ and that the first input of the term $N_{1}$ is always taken from the vector $\bm{w}$. Note that this is a stronger statement than needed, as for our current goal it would suffice to consider $\bm{z},\bm{w} \in \mathbb C^{+n}$.
The proof of equality \eqref{eq:kernel_difference} closely follows the proof of the special case $\bm{z} = \bm{w}$, presented in \cite[Prop. 3.3]{LugerNedic2019}. To that end, we observe that
$$\overline{K_n(\bm{w},\bm{t})} = -\mathfrak{i}\bigg(2\prod_{j=1}^n\big(N_{1}(w_j,t_j) + N_{0}(\mathfrak{i},t_j)\big)-\prod_{j=1}^nN_{0}(\mathfrak{i},t_j)\bigg).$$
Expanding now the following products as sums yields
$$\begin{array}{RCL}
\multicolumn{3}{L}{\prod_{j=1}^n\big(N_{-1}(z_j,t_j)+N_{0}(\mathfrak{i},t_j)+N_{1}(w_j,t_j)\big)} \\[0.45cm]
~ & = & \sum_{\bm{\rho} \in \{-1,0,1\}^n}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j), \\[0.45cm]
\prod_{j=1}^n\big(N_{1}(w_j,t_j)+N_{0}(\mathfrak{i},t_j)\big) & = & \sum_{\substack{\bm{\rho} \in \{-1,0,1\}^n \\ -1\not\in\bm{\rho}}}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j), \\[0.45cm]
\prod_{j=1}^n\big(N_{-1}(z_j,t_j)+N_{0}(\mathfrak{i},t_j)\big) & = & \sum_{\substack{\bm{\rho} \in \{-1,0,1\}^n \\ 1\not\in\bm{\rho}}}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j), \\[0.45cm]
\prod_{j=1}^nN_{0}(\mathfrak{i},t_j) & = & \sum_{\substack{\bm{\rho} \in \{-1,0,1\}^n \\ -1\not\in\bm{\rho} \wedge 1\not\in\bm{\rho}}}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j),
\end{array}$$
and hence
$$\begin{array}{RCL}
\multicolumn{3}{L}{\frac{1}{2\,\mathfrak{i}}\big(K_n(\bm{z},\bm{t}) - \overline{K_n(\bm{w},\bm{t})}\big)} \\[0.45cm]
~ & = & \prod_{j=1}^n\big(N_{-1}(z_j,t_j)+N_{0}(\mathfrak{i},t_j)\big) + \prod_{j=1}^n\big(N_{1}(w_j,t_j)+N_{0}(\mathfrak{i},t_j)\big) - \prod_{j=1}^nN_{0}(\mathfrak{i},t_j) \\[0.45cm]
~ & = & \sum_{\bm{\rho} \in \{-1,0,1\}^n}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j)-\sum_{\substack{\bm{\rho} \in \{-1,0,1\}^n \\ -1\in\bm{\rho} \wedge 1\in\bm{\rho}}}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j) \\[0.45cm]
~ & = & \mathcal P_n(\bm{z},\bm{w},\bm{t}) - \sum_{\substack{\bm{\rho} \in \{-1,0,1\}^n \\ -1\in\bm{\rho} \wedge 1\in\bm{\rho}}}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j),
\end{array}$$
as desired.
Using formula \eqref{eq:kernel_difference}, we deduce that
\begin{multline}\label{eq:proof_symmetric_temp1}
q(\bm{z}) - \overline{q(\bm{w})} \\ = \frac{2\,\mathfrak{i}}{\pi^n}\int_{\mathbb R^n} \mathcal P_n(\bm{z},\bm{w},\bm{t})\mathrm{d}\mu(\bm{t}) - \sum_{\substack{\bm{\rho} \in \{-1,0,1\}^n \\ -1\in\bm{\rho} \wedge 1\in\bm{\rho}}}\frac{2\,\mathfrak{i}}{\pi^n}\int_{\mathbb R^n}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j)\mathrm{d}\mu(\bm{t}).
\end{multline}
Since
\begin{multline*}
\frac{2\,\mathfrak{i}}{\pi^n}\int_{\mathbb R^n} \mathcal P_n(\bm{z},\bm{w},\bm{t})\mathrm{d}\mu(\bm{t}) \\ = \frac{1}{(2\,\mathfrak{i})^{n-1}}\prod_{j=1}^n(z_j - \overline{w}_j) \cdot \frac{1}{\pi^n}\int_{\mathbb R^n}\prod_{j=1}^n\frac{1}{(t_j-z_j)(t_j-\overline{w}_j)}\mathrm{d}\mu(\bm{t}),
\end{multline*}
due to the definition of $\mathcal P_n$, we may choose $D$ as the Poisson-type function given by the measure $\mu$. This is indeed a valid choice, as a Poisson-type function is positive semi-definite on $\mathbb C^{+n} \times \mathbb C^{+n}$ due to Lemma \ref{lem:positive_semi_poisson} and satisfies the growth condition \eqref{eq:positive_semi_growth} due to Lemma \ref{lem:poisson_growth}.
Finally, we claim that
$$\int_{\mathbb R^n}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j)\mathrm{d}\mu(\bm{t}) = 0$$
for all vectors $\bm{z},\bm{w} \in \mathbb C^{+n}$ and every indexing vector $\bm{\rho}$ as in the sum in formula \eqref{eq:proof_symmetric_temp1}, \textit{i.e.}\/ with at least one entry equal to $1$ and at least one entry equal to $-1$. To infer this, we observe that once an indexing vector $\bm{\rho}$ has been chosen, we may combine the vectors $\bm{z},\bm{w} \in \mathbb C^{+n}$ into a single vector $\bm{\xi} \in \mathbb C^{+n}$ via setting
$$\xi_j := \left\{\begin{array}{rcl}
z_j & ; & \rho_j = -1, \\
\mathfrak{i} & ; & \rho_j = 0, \\
w_j & ; & \rho_j = 1.
\end{array}\right.$$
Hence,
$$N_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j) = N_{\rho_j}(\xi_j,t_j)$$
and the desired result follows from the fact that the measure $\mu$ satisfies the Nevanlinna condition \eqref{eq:measure_Nevan}, \textit{cf.}\/ \cite[Thm. 5.1]{LugerNedic2019}. Note that it is important that we may write the choice $\epsilon_{\rho_j}$ in terms of a single vector from $\mathbb C^{+n}$, as \cite[Thm. 5.1 (b)]{LugerNedic2019} only implies the desired result in the case where the first inputs of the terms $N_{-1}$ and $N_{1}$ are taken from the same vector.
In conclusion, formula \eqref{eq:proof_symmetric_temp1} provides a decomposition of the form \eqref{eq:positive_semi_decomposition} where the function $D$ is chosen as described above and $\bm{d} = \bm{0}$. This finishes part 1 of the proof.
\textit{PART 2:} Assume now that $q\colon \mathbb C^{+n} \to \mathbb C$ is a holomorphic function for which there exist some vector $\bm{d} \in [0,\infty)^n$ and some positive semi-definite function $D$ on $\mathbb C^{+n} \times \mathbb C^{+n}$ satisfying the growth condition \eqref{eq:positive_semi_growth} such that equality \eqref{eq:positive_semi_decomposition} holds for all $\bm{z},\bm{w} \in \mathbb C^{+n}$. To show that $q$ must be a Herglotz-Nevan\-linna function, we only need to check that $\mathrm{Im}[q] \geq 0$. To that end, we may choose $\bm{z} = \bm{w}$ in equality \eqref{eq:positive_semi_decomposition} to get
$$2\,\mathfrak{i}\,\mathrm{Im}[q(\bm{z})] = 2\,\mathfrak{i}\sum_{j=1}^nd_j\mathrm{Im}[z_j] + 2\,\mathfrak{i}\,\prod_{j=1}^n\mathrm{Im}[z_j]\cdot D(\bm{z},\bm{z}).$$
Dividing both sides by $2\,\mathfrak{i}$ and noting that $D(\bm{z},\bm{z}) \geq 0$ by Lemma \ref{lem:positive_semi_def_properties} yields the desired result.
As such, the only remaining step is to show that the vector $\bm{d}$ must be equal to $\bm{b}$ and that the function $D$ must be the Poisson-type function given by the measure $\mu$. For the former, we calculate using formula \eqref{eq:b_parameters} that, on one hand,
$$\lim\limits_{z_\ell \:\scriptsize{\xrightarrow{\vee\:}}\: \infty}\frac{q(\bm{z}) - \overline{q(\bm{w})}}{z_\ell} = b_\ell.$$
On the other hand, using decomposition \eqref{eq:positive_semi_decomposition}, we have
$$\lim\limits_{z_\ell \:\scriptsize{\xrightarrow{\vee\:}}\: \infty}\frac{q(\bm{z}) - \overline{q(\bm{w})}}{z_\ell} = d_\ell + \frac{1}{(2\,\mathfrak{i})^{n-1}}\prod_{\substack{j = 1 \\ j \neq \ell}}^n(z_j - \overline{w}_j)\cdot\lim\limits_{z_\ell \:\scriptsize{\xrightarrow{\vee\:}}\: \infty}D(\bm{z},\bm{w}) = d_\ell$$
due to the assumption that the function $D$ satisfies the growth condition \eqref{eq:positive_semi_growth}.
Since we know by now that the function $q$ is a Herglotz-Nevan\-linna function, it is represented by some data $(a,\bm{b},\mu)$ in the sense of Theorem \ref{thm:intRep_Nvar}. By Part 1 of the proof, it holds that
$$q(\bm{z}) - \overline{q(\bm{w})} = \sum_{j=1}^nb_j(z_j - \overline{w}_j) + q_c(\bm{z}) - \overline{q_c(\bm{w})},$$
where the function $q_c$ is a Herglotz-Nevan\-linna function given by the data $(0,\bm{0},\mu)$. However, by assumption, the function $q$ also admits a decomposition of the form \eqref{eq:positive_semi_decomposition}, where we already know that $b_\ell = d_\ell$ for all $\ell \in \{1,\ldots,n\}$. Comparing this decomposition with the one above yields that
$$D(\bm{z},\bm{w}) = \frac{(2\,\mathfrak{i})^{n-1}}{\prod_{j=1}^n(z_j - \overline{w}_j)}\big(q_c(\bm{z}) - \overline{q_c(\bm{w})}\big).$$
However, by Case 1.c of Part 1 of the proof, it holds that
$$q_c(\bm{z}) - \overline{q_c(\bm{w})} = \frac{1}{(2\,\mathfrak{i})^{n-1}}\prod_{j=1}^n(z_j - \overline{w}_j)\cdot\frac{1}{\pi^n}\int_{\mathbb R^n}\prod_{\ell=1}^{n}\frac{1}{(t_\ell - z_\ell)(t_\ell - \overline{w}_\ell)}\mathrm{d}\mu(\bm{t}),$$
implying that the function $D$ is necessarily the Poisson-type function given by the measure $\mu$, finishing Part 2 of the proof.
\endproof
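When $n = 1$, Theorem \ref{thm:positive_semi_decomposition} recovers decomposition \eqref{eq:HN_decomposition_1var}: the prefactor $(2\,\mathfrak{i})^{-(n-1)}$ equals $1$ and equality \eqref{eq:positive_semi_decomposition} reads
$$q(z) - \overline{q(w)} = d\,(z-\overline{w}) + (z-\overline{w})\cdot D(z,w)$$
with $d = b$ and $D$ the Poisson-type function
$$D(z,w) = \frac{1}{\pi}\int_\mathbb R\frac{1}{(t-z)(t-\overline{w})}\mathrm{d}\mu(t).$$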
A slight reformulation of this result can be stated as follows.
\begin{coro}
\label{coro:D_rep}
A function $q\colon \mathbb C^{+n} \to \mathbb C$ is a Herglotz-Nevan\-linna function if and only if there exist a number $a \in \mathbb R$, a vector $\bm{b} \in [0,\infty)^n$ and a Poisson-type function $D$ satisfying condition \eqref{eq:poisson_pluriharmonic} such that the formula
\begin{equation}
\label{eq:D_rep}
q(\bm{z}) = \big(a - \mathfrak{i}\,D(\mathfrak{i}\,\bm{1},\mathfrak{i}\,\bm{1})\big) + \sum_{j=1}^nb_j\,z_j + \frac{1}{(2\,\mathfrak{i})^{n-1}}\prod_{j=1}^n(z_j + \mathfrak{i}) \cdot D(\bm{z},\mathfrak{i}\,\bm{1})
\end{equation}
holds for all $\bm{z} \in \mathbb C^{+n}$.
\end{coro}
\proof
Note first that the integral of the kernel $K_n$ with respect to a measure $\mu$ satisfying the growth condition may be written in terms of the Poisson-type function $D$ given by the same measure as
$$\frac{1}{\pi^n}\int_{\mathbb R^n}K_n(\bm{z},\bm{t})\mathrm{d}\mu(\bm{t}) = \frac{1}{(2\,\mathfrak{i})^{n-1}}\prod_{j=1}^n(z_j + \mathfrak{i}) \cdot D(\bm{z},\mathfrak{i}\,\bm{1}) - \mathfrak{i}\,D(\mathfrak{i}\,\bm{1},\mathfrak{i}\,\bm{1}).$$
The statement of the corollary now follows from Theorem \ref{thm:intRep_Nvar}, Theorem \ref{thm:positive_semi_decomposition} and Lemma \ref{lem:poisson_pluriharmonic}.
\endproof
\subsection{The Nevanlinna kernel in several variables}
\label{subsec:Nevanlinna_kernel}
For any holomorphic function $q\colon \mathbb C^+ \to \mathbb C$, one can consider its \emph{Nevanlinna kernel} $\mathcal K_q\colon \mathbb C^+ \times \mathbb C^+ \to \mathbb C$, defined by
$$\mathcal{K}_{q}(z,w) := \frac{q(z) - \overline{q(w)}}{z-\overline{w}}.$$
In general, this kernel may be considered for a scalar-, matrix- or operator-valued function $q$, see \textit{e.g.}\/ \cite{DahoLanger1985,KreinLanger1973,KreinLanger1977,KreinLanger1981}. As summarized in the introduction to Section \ref{sec:main_thm}, it holds that a function $q\colon \mathbb C^+ \to \mathbb C$ is a Herglotz-Nevan\-linna function if and only if $\mathcal K_q$ is positive semi-definite.
An analogous characterization for Herglotz-Nevan\-linna functions of several variables, based on Theorem \ref{thm:positive_semi_decomposition}, is the following.
\begin{thm}
\label{thm:Nevan_kernel_Nvar}
Let $q$ be a holomorphic function on $\mathbb C^{+n}$. Then, $q$ is a Herglotz-Nevan\-linna function if and only if the function
$$(\bm{z},\bm{w}) \mapsto \frac{(2\,\mathfrak{i})^{n-1}}{\prod_{j=1}^n(z_j - \overline{w}_j)}\big(q(\bm{z}) - \overline{q(\bm{w})}\big)$$
is positive semi-definite on $\mathbb C^{+n} \times \mathbb C^{+n}$. In that case, it holds that
$$\frac{(2\,\mathfrak{i})^{n-1}}{\prod_{j=1}^n(z_j - \overline{w}_j)}\big(q(\bm{z}) - \overline{q(\bm{w})}\big) = \sum_{j=1}^nb_j\prod_{\substack{\ell = 1 \\ \ell \neq j}}^n\frac{2\,\mathfrak{i}}{z_\ell - \overline{w}_\ell} + D(\bm{z},\bm{w}),$$
where the vector $\bm{b}$ is as in representation \eqref{eq:intRep_Nvar} and the function $D$ is as described in Theorem \ref{thm:positive_semi_decomposition}.
\end{thm}
\proof
If the function $q$ is represented by the data $(a,\bm{0},0)$ or $(0,\bm{0},\mu)$, then the result follows immediately by Cases 1.a and 1.c of the proof of Theorem \ref{thm:positive_semi_decomposition}. If, instead, the function $q$ is represented by the data $(0,\bm{b},0)$, it holds that
$$\frac{(2\,\mathfrak{i})^{n-1}}{\prod_{j=1}^n(z_j - \overline{w}_j)}\big(q(\bm{z}) - \overline{q(\bm{w})}\big) = \sum_{j=1}^nb_j\prod_{\substack{\ell = 1 \\ \ell \neq j}}^n\frac{2\,\mathfrak{i}}{z_\ell - \overline{w}_\ell}.$$
Since the function
$$(z,w) \mapsto \frac{2\,\mathfrak{i}}{z-\overline{w}}$$
is positive semi-definite on $\mathbb C^+ \times \mathbb C^+$, the result then follows by Lemma \ref{lem:positive_semi_def_properties}.
For the converse statement, we only need to show that $\mathrm{Im}[q(\bm{z})] \geq 0$ which may be done as in Part 2 of the proof of Theorem \ref{thm:positive_semi_decomposition}.
\endproof
\begin{coro}
Let $F$ be a positive semi-definite function on $\mathbb C^{+n} \times \mathbb C^{+n}$. Then,
\begin{equation}
\label{eq:F_till_q}
F(\bm{z},\bm{w}) = \frac{(2\,\mathfrak{i})^{n-1}}{\prod_{j=1}^n(z_j - \overline{w}_j)}\big(q(\bm{z}) - \overline{q(\bm{w})}\big)
\end{equation}
for some Herglotz-Nevan\-linna function $q$ if and only if there exists a vector $\bm{d} \in [0,\infty)^n$ and a Poisson-type function $D$ satisfying condition \eqref{eq:poisson_pluriharmonic} such that
\begin{equation}
\label{eq:D_till_F}
F(\bm{z},\bm{w}) = \sum_{j=1}^nd_j\prod_{\substack{\ell = 1 \\ \ell \neq j}}^n\frac{2\,\mathfrak{i}}{z_\ell - \overline{w}_\ell} + D(\bm{z},\bm{w})
\end{equation}
for all $\bm{z},\bm{w} \in \mathbb C^{+n}$.
\end{coro}
\proof
If the function $F$ can be written in terms of a Herglotz-Nevan\-linna function $q$ as in formula \eqref{eq:F_till_q}, then the result follows by Theorem \ref{thm:positive_semi_decomposition} and Theorem \ref{thm:Nevan_kernel_Nvar}.
Conversely, assume that there exists a vector $\bm{d}$ and a Poisson-type function $D$ as in the Corollary such that equality \eqref{eq:D_till_F} holds. Let $\sigma$ be the positive Borel measure satisfying the growth condition \eqref{eq:measure_growth} that gives the function $D$. By Lemma \ref{lem:poisson_pluriharmonic}, the measure $\sigma$ also satisfies the Nevanlinna condition \eqref{eq:measure_Nevan} due to the assumption on the function $D$. Hence, we may define a Herglotz-Nevan\-linna function $q$ via Theorem \ref{thm:intRep_Nvar} using the data $(0,\bm{d},\sigma)$. The result now follows by Theorem \ref{thm:Nevan_kernel_Nvar}.
\endproof
Using Theorem \ref{thm:Nevan_kernel_Nvar}, we may now propose a multidimensional analogue of the classical Nevanlinna kernel that generalizes its core property of characterizing Herglotz-Nevan\-linna functions.
\begin{define}
\label{def:Nevan_kernel_Nvar}
Let $q\colon\mathbb C^{+n} \to \mathbb C$ be a holomorphic function. Then, its \emph{Nevanlinna kernel} $\mathcal K_q\colon \mathbb C^{+n} \times \mathbb C^{+n} \to \mathbb C$ is defined as
$$\mathcal{K}_{q}(\bm{z},\bm{w}) := \frac{(2\,\mathfrak{i})^{n-1}}{\prod_{j=1}^n(z_j - \overline{w}_j)}\big(q(\bm{z}) - \overline{q(\bm{w})}\big).$$
\end{define}
When $n=1$, the Nevanlinna kernel is used to define \emph{generalized Nevanlinna functions}. A meromorphic function $q\colon \mathbb C^+ \to \mathbb C$ with domain of holomorphy $\mathrm{Dom}(q) \subseteq \mathbb C^+$ is called a generalized Nevanlinna function of class $\mathcal{N}_\kappa(\mathbb C^+)$ if its Nevanlinna kernel $\mathcal K_q$ has $\kappa$ negative squares \cite[pg. 187]{KreinLanger1977}. We recall that $\mathcal{K}_{q}$ having $\kappa$ negative squares means that for arbitrary $N \in \mathbb N$ and arbitrary $z_1,\ldots,z_N \in \mathbb C^+$ the matrix
$$\left(\mathcal{K}_{q}(z_i,z_j)\right)_{i,j = 1}^N$$
has at most $\kappa$ negative eigenvalues and $\kappa$ is minimal with this property.
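A standard example when $n = 1$ is the function $q(z) = -z$, for which
$$\mathcal K_q(z,w) = \frac{-z+\overline{w}}{z-\overline{w}} = -1,$$
so that every matrix $\left(\mathcal K_q(z_i,z_j)\right)_{i,j=1}^N$ has exactly one negative eigenvalue and hence $q \in \mathcal N_1(\mathbb C^+)$.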
Using Definition \ref{def:Nevan_kernel_Nvar}, generalized Nevanlinna functions of several variables may be introduced completely analogously.
\begin{define}
A meromorphic function $q\colon \mathbb C^{+n} \to \mathbb C$ with domain of holomorphy $\mathrm{Dom}(q) \subseteq \mathbb C^{+n}$ is called a generalized Nevanlinna function of class $\mathcal{N}_\kappa(\mathbb C^{+n})$ if its Nevanlinna kernel $\mathcal K_q$ has $\kappa$ negative squares.
\end{define}
The detailed study of this class of functions lies outside the scope of this paper.
\subsection{Decomposition of the symmetric extension}
\label{subsec:symmetric_extension}
We recall that the integral representation in formula \eqref{eq:intRep_Nvar} is well-defined for any $\bm{z} \in \C\setminus\RN$, which may be used to extend any Herglotz-Nevanlinna function $q$ from $\mathbb C^{+n}$ to $\C\setminus\RN$. This extension is called the \emph{symmetric extension} of the function $q$ and is denoted as $q_\mathrm{sym}$. We note that the symmetric extension of a Herglotz-Nevan\-linna function $q$ differs from its possible analytic extension as soon as $\mu \neq 0$, \textit{cf.}\/ \cite[Prop. 6.10]{LugerNedic2019}. The symmetric extension also satisfies a particular variable-dependence property, \textit{cf.}\/ \cite[Prop. 6.9]{LugerNedic2019}.
Just as a Herglotz-Nevan\-linna function $q$ can always be symmetrically extended to $\C\setminus\RN$ via its integral representation \eqref{eq:intRep_Nvar}, so too can we consider the symmetric extension of a Poisson-type function. This symmetric extension will automatically be positive semi-definite on $\C\setminus\RN \times \C\setminus\RN$ as the proof of Lemma \ref{lem:positive_semi_poisson} still remains valid even if the variables $\bm{z}$ and $\bm{w}$ are taken from $\C\setminus\RN$ instead. However, a direct analogue of Theorem \ref{thm:positive_semi_decomposition} does not hold, as the following example shows.
\begin{example}\label{ex:standard}
Consider the Herglotz-Nevan\-linna function $q$ given by $q(z_1,z_2) = -(z_1+z_2)^{-1}$ for $(z_1,z_2) \in \mathbb C^{+2}$. This function is represented by the data $(0,\bm{0},\mu)$ in the sense of Theorem \ref{thm:intRep_Nvar}, where the measure $\mu$ is defined, for any Borel set $U \subseteq \mathbb R^2$, as
$$\mu(U) := \pi\int_\mathbb R\chi_U(t,-t)\mathrm{d} t.$$
Here, $\chi_U$ denotes the characteristic function of the set $U$. The function $D$ from Theorem \ref{thm:positive_semi_decomposition} can then be calculated using standard residue calculus and equals
$$D(\bm{z},\bm{w}) = \frac{2\,\mathfrak{i}\,(z_1+z_2-\overline{w}_1 -\overline{w}_2)}{(z_1-\overline{w}_1)(z_2-\overline{w}_2)(z_1+z_2)(\overline{w}_1+\overline{w}_2)}.$$
Furthermore, also using standard residue calculus, the symmetric extension of the function $q$ can be calculated to be
$$q_\mathrm{sym}(z_1,z_2) = \left\{\begin{array}{rcl}
-(z_1+z_2)^{-1} & ; & (z_1,z_2) \in \mathbb C^+ \times \mathbb C^+, \\
(\mathfrak{i} - z_1)^{-1} & ; & (z_1,z_2) \in \mathbb C^- \times \mathbb C^+, \\
(\mathfrak{i} - z_2)^{-1} & ; & (z_1,z_2) \in \mathbb C^+ \times \mathbb C^-, \\
(z_1+z_2)^{-1}+(\mathfrak{i} - z_1)^{-1}+(\mathfrak{i} - z_2)^{-1} & ; & (z_1,z_2) \in \mathbb C^- \times \mathbb C^-,
\end{array}\right.$$
while the values of $D_\mathrm{sym}$ are shown in Table \ref{tab:symetric_extension}.
\begin{table}[!ht]
\centering
\small{$$\begin{array}{C|C|C}
D_\mathrm{sym}(\bm{z},\bm{w}) & \bm{z} \in & \bm{w} \in \\
\hline
\frac{2\,\mathfrak{i}\,(z_1+z_2-\overline{w}_1 -\overline{w}_2)}{(z_1-\overline{w}_1)(z_2-\overline{w}_2)(z_1+z_2)(\overline{w}_1+\overline{w}_2)} & \mathbb C^+ \times \mathbb C^+ & \mathbb C^+ \times \mathbb C^+ \\[0.4cm]
\frac{2\,\mathfrak{i}}{(z_1+\overline{w}_2)(z_2-\overline{w}_2)(\overline{w}_1+\overline{w}_2)} & \mathbb C^- \times \mathbb C^+ & \mathbb C^+ \times \mathbb C^+ \\[0.4cm]
\frac{2\,\mathfrak{i}}{(z_2+\overline{w}_1)(z_1-\overline{w}_1)(\overline{w}_1+\overline{w}_2)} & \mathbb C^+ \times \mathbb C^- & \mathbb C^+ \times \mathbb C^+ \\[0.4cm]
\frac{2\,\mathfrak{i}\,(z_1+z_2+\overline{w}_1 +\overline{w}_2)}{(z_1+\overline{w}_2)(z_2+\overline{w}_1)(z_1+z_2)(\overline{w}_1+\overline{w}_2)} & \mathbb C^- \times \mathbb C^- & \mathbb C^+ \times \mathbb C^+ \\[0.4cm]
\hline
\frac{2\,\mathfrak{i}}{(z_1 + z_2)(z_2 + \overline{w}_1)(z_2 - \overline{w}_2)} & \mathbb C^+ \times \mathbb C^+ & \mathbb C^- \times \mathbb C^+ \\[0.4cm]
\frac{2\,\mathfrak{i}\,(z_1-z_2-\overline{w}_1 +\overline{w}_2)}{(z_1-\overline{w}_1)(z_2+\overline{w}_1)(z_1 + \overline{w}_2)(z_2 - \overline{w}_2)} & \mathbb C^- \times \mathbb C^+ & \mathbb C^- \times \mathbb C^+ \\[0.4cm]
0 & \mathbb C^+ \times \mathbb C^- & \mathbb C^- \times \mathbb C^+ \\[0.4cm]
\frac{-2\,\mathfrak{i}}{(z_1 + z_2)(z_1 - \overline{w}_1)(z_1 + \overline{w}_2)} & \mathbb C^- \times \mathbb C^- & \mathbb C^- \times \mathbb C^+ \\[0.4cm]
\hline
\frac{2\,\mathfrak{i}}{(z_1 + z_2)(z_1 - \overline{w}_1)(z_1 + \overline{w}_2)} & \mathbb C^+ \times \mathbb C^+ & \mathbb C^+ \times \mathbb C^- \\[0.4cm]
0 & \mathbb C^- \times \mathbb C^+ & \mathbb C^+ \times \mathbb C^- \\[0.4cm]
\frac{2\,\mathfrak{i}\,(-z_1+z_2+\overline{w}_1-\overline{w}_2)}{(z_1-\overline{w}_1)(z_2+\overline{w}_1)(z_1 + \overline{w}_2)(z_2 - \overline{w}_2)} & \mathbb C^+ \times \mathbb C^- & \mathbb C^+ \times \mathbb C^- \\[0.4cm]
\frac{-2\,\mathfrak{i}}{(z_1 + z_2)(z_2 + \overline{w}_1)(z_2 - \overline{w}_2)} & \mathbb C^- \times \mathbb C^- & \mathbb C^+ \times \mathbb C^- \\[0.4cm]
\hline
\frac{-2\,\mathfrak{i}\,(z_1+z_2+\overline{w}_1 +\overline{w}_2)}{(z_1+\overline{w}_2)(z_2+\overline{w}_1)(z_1+z_2)(\overline{w}_1+\overline{w}_2)} & \mathbb C^+ \times \mathbb C^+ & \mathbb C^- \times \mathbb C^- \\[0.4cm]
\frac{-2\mathfrak{i}}{(z_1 - \overline{w}_1)(z_2 + \overline{w}_1)(\overline{w}_1 + \overline{w}_2)} & \mathbb C^- \times \mathbb C^+ & \mathbb C^- \times \mathbb C^- \\[0.4cm]
\frac{-2\mathfrak{i}}{(z_2 - \overline{w}_2)(z_1 + \overline{w}_2)(\overline{w}_1 + \overline{w}_2)} & \mathbb C^+ \times \mathbb C^- & \mathbb C^- \times \mathbb C^- \\[0.4cm]
\frac{2\mathfrak{i}(-z_1 - z_2 + \overline{w}_1 + \overline{w}_2)}{(z_1 + z_2)(z_1 - \overline{w}_1)(z_2 - \overline{w}_2)(\overline{w}_1 + \overline{w}_2)} & \mathbb C^- \times \mathbb C^- & \mathbb C^- \times \mathbb C^-
\end{array}$$}
\caption{The symmetric extension of the function $D$ from Example \ref{ex:standard}}
\label{tab:symetric_extension}
\end{table}
If we now choose \textit{e.g.}\/ $\bm{z} \in \mathbb C^+ \times \mathbb C^-$ and $\bm{w} \in \mathbb C^- \times \mathbb C^+$, we see that
$$\frac{1}{\mathfrak{i} - z_2} + \frac{1}{\mathfrak{i}+\overline{w}_1} = q_\mathrm{sym}(\bm{z}) - \overline{q_\mathrm{sym}(\bm{w})} \neq \frac{1}{(2\,\mathfrak{i})^{n-1}}\prod_{j=1}^n(z_j - \overline{w}_j)\cdot D_\mathrm{sym}(\bm{z},\bm{w}) = 0,$$
implying the presence of an error term.
$\lozenge$
\end{example}
The following proposition gives a characterization of the symmetric extension of a Herglotz-Nevan\-linna functions in analogy to Theorem \ref{thm:positive_semi_decomposition}.
\begin{prop}\label{prop:symmetric_positive_semi_decomposition}
Let $n \in \mathbb N$ and let $f\colon \C\setminus\RN \to \mathbb C$ be a holomorphic function. Then, $f = q_\mathrm{sym}$ for some Herglotz-Nevan\-linna function $q$ if and only if there exist a vector $\bm{d} \in [0,\infty)^n$ and a positive Borel measure $\sigma$ on $\mathbb R^n$ satisfying the growth condition \eqref{eq:measure_growth} and the Nevanlinna condition \eqref{eq:measure_Nevan} such that the equality
\begin{multline}
\label{eq:symmetric_positive_semi_decomposition}
f(\bm{z}) - \overline{f(\bm{w})} \\ = \sum_{j=1}^nd_j(z_j - \overline{w}_j) + \frac{1}{(2\,\mathfrak{i})^{n-1}}\prod_{j=1}^n(z_j - \overline{w}_j)\cdot D_\mathrm{sym}(\bm{z},\bm{w}) - E(\bm{z},\bm{w})
\end{multline}
holds for all $\bm{z},\bm{w} \in \C\setminus\RN$. Here, $D_\mathrm{sym}$ denotes the symmetric extension of the Poisson-type function $D$ given by the measure $\sigma$, the error term $E$ is defined as
$$E(\bm{z},\bm{w}) := \sum_{\substack{\bm{\rho} \in \{-1,0,1\}^n \\ -1\in\bm{\rho} \wedge 1\in\bm{\rho}}}\frac{2\,\mathfrak{i}}{\pi^n}\int_{\mathbb R^n}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j)\mathrm{d}\sigma(\bm{t})$$
and the choice of variable $\epsilon_\ell$ is defined by formula \eqref{eq:epsilon_choice}.
\end{prop}
\proof
Assume first that $f = q_\mathrm{sym}$ for some Herglotz-Nevan\-linna function $q$ given by the data $(a,\bm{b},\mu)$ in the sense of Theorem \ref{thm:intRep_Nvar}. As in the proof of Theorem \ref{thm:positive_semi_decomposition}, we separately investigate three cases with respect to the representing parameters of the function $q$. Cases 1.a and 1.b, \textit{i.e.}\/ when the function $q$ is represented by data of the form $(a,\bm{0},0)$ or $(0,\bm{b},0)$, can be treated completely analogously to before.
If, instead, a Herglotz-Nevan\-linna function $q$ is given by the data $(0,\bm{0},\mu)$ in the sense of Theorem \ref{thm:intRep_Nvar}, it holds that
$$q_\mathrm{sym}(\bm{z}) - \overline{q_\mathrm{sym}(\bm{w})} = \frac{1}{\pi^n}\int_{\mathbb R^n}\big(K_n(\bm{z},\bm{t}) - \overline{K_n(\bm{w},\bm{t})}\big)\mathrm{d}\mu(\bm{t})$$
for every $\bm{z},\bm{w} \in \C\setminus\RN$. Using formula \eqref{eq:kernel_difference}, which was originally proved to hold for all $\bm{z},\bm{w} \in \C\setminus\RN$, we deduce that
\begin{multline*}
q_\mathrm{sym}(\bm{z}) - \overline{q_\mathrm{sym}(\bm{w})} \\ = \frac{2\,\mathfrak{i}}{\pi^n}\int_{\mathbb R^n} \mathcal P_n(\bm{z},\bm{w},\bm{t})\mathrm{d}\mu(\bm{t}) - \sum_{\substack{\bm{\rho} \in \{-1,0,1\}^n \\ -1\in\bm{\rho} \wedge 1\in\bm{\rho}}}\frac{2\,\mathfrak{i}}{\pi^n}\int_{\mathbb R^n}\prod_{j=1}^nN_{\rho_j}(\epsilon_{\rho_j}(z_j,w_j),t_j)\mathrm{d}\mu(\bm{t}).
\end{multline*}
Since
$$\frac{2\,\mathfrak{i}}{\pi^n}\int_{\mathbb R^n} \mathcal P_n(\bm{z},\bm{w},\bm{t})\mathrm{d}\mu(\bm{t}) = \frac{1}{(2\,\mathfrak{i})^{n-1}}\prod_{j=1}^n(z_j - \overline{w}_j)\cdot D_\mathrm{sym}(\bm{z},\bm{w}),$$
the result follows.
Conversely, define a Herglotz-Nevan\-linna function $q$ via representation \eqref{eq:intRep_Nvar} using the data $(\mathrm{Re}[f(\mathfrak{i},\ldots,\mathfrak{i})],\bm{d},\sigma)$. By what we just proved, it holds that
\begin{equation}
\label{eq:temp1}
f(\bm{z}) - \overline{f(\bm{w})} = q_\mathrm{sym}(\bm{z}) - \overline{q_\mathrm{sym}(\bm{w})}
\end{equation}
for all $\bm{z},\bm{w} \in \C\setminus\RN$. When $\bm{z} = \bm{w}$, the above equality implies that
$$\mathrm{Im}[f(\bm{z})] = \mathrm{Im}[q_\mathrm{sym}(\bm{z})]$$
for all $\bm{z} \in \C\setminus\RN$. Therefore,
$$f(\bm{z}) = q_\mathrm{sym}(\bm{z}) + C(\bm{z}),$$
where the function $C$ equals a real constant on each connected component of $\C\setminus\RN$. By construction,
$$\mathrm{Re}[q(\mathfrak{i},\ldots,\mathfrak{i})] = \mathrm{Re}[f(\mathfrak{i},\ldots,\mathfrak{i})],$$
implying that $C \equiv 0$ on $\mathbb C^{+n}$ and hence $f = q_\mathrm{sym}$ on $\mathbb C^{+n}$. Finally, when $\bm{z} \in \C\setminus\RN$ is arbitrary and $\bm{w} = (\mathfrak{i},\ldots,\mathfrak{i})$, equality \eqref{eq:temp1} implies that $f = q_\mathrm{sym}$ on $\C\setminus\RN$. This finishes the proof.
\endproof
\subsection{Loewner functions}
\label{subsec:loewner}
Theorem \ref{thm:positive_semi_decomposition} provides a universal description of the difference $q(\bm{z}) - \overline{q(\bm{w})}$ for a Herglotz-Nevan\-linna function $q$ and $\bm{z},\bm{w} \in \mathbb C^{+n}$. Another approach one can take is to ask instead that this difference may be written in a specific form. This leads us to consider the following class of functions, \textit{cf.}\/ \cite[pg. 3003]{AglerEtal2016}.
\begin{define}\label{def:lowner_class}
A holomorphic function $h\colon \mathbb C^{+n} \to \mathbb C$ is called a \emph{Loewner function} (on $\mathbb C^{+n}$) if there exist $n$ positive semi-definite functions $F_1,\ldots,F_n$ on $\mathbb C^{+n} \times \mathbb C^{+n}$, such that the equality
\begin{equation}
\label{eq:lowner_functions}
h(\bm{z}) - \overline{h(\bm{w})} = \sum_{\ell = 1}^n(z_\ell - \overline{w}_\ell)F_\ell(\bm{z},\bm{w}).
\end{equation}
holds for all $\bm{z},\bm{w} \in \mathbb C^{+n}$.
\end{define}
Due to Lemma \ref{lem:positive_semi_def_properties}, it is easily seen that every Loewner function is also a Herglotz-Nevan\-linna function. The converse depends on the number of variables $n$. When $n=1$, it follows from the results summarized in the beginning of Section \ref{sec:main_thm} that every Herglotz-Nevan\-linna function is also a Loewner function. If $n=2$, this still holds, with the result being a consequence of a theorem concerning Schur functions on the polydisk, see \cite{Agler1990} and \cite[Thm. 1.4]{AglerEtal2016}. When $n \geq 3$, Loewner functions form a proper subclass of Herglotz-Nevan\-linna functions, with the result being a consequence of the theory of commuting contractions \cite{Parrott1970,Varopoulos1974}.
Using the Nevanlinna kernel, we may describe Loewner functions in the following way.
\begin{prop}
Let $h\colon \mathbb C^{+n} \to \mathbb C$ be a holomorphic function. Then, $h$ is a Loewner function if and only if there exist $n$ positive semi-definite functions $F_1,\ldots,F_n$ on $\mathbb C^{+n} \times \mathbb C^{+n}$ such that the equality
\begin{equation}
\label{eq:Loewner_kernel}
\mathcal K_h(\bm{z},\bm{w}) = \sum_{\ell=1}^nF_\ell(\bm{z},\bm{w})\prod_{\substack{j=1\\j\neq\ell}}^n\frac{2\,\mathfrak{i}}{z_j-\overline{w}_j}
\end{equation}
holds for all $\bm{z},\bm{w} \in \mathbb C^{+n}$.
\end{prop}
\proof
If $h$ is a Loewner function, it admits a decomposition of the form \eqref{eq:lowner_functions} for some positive semi-definite functions $F_1,\ldots,F_n$ on $\mathbb C^{+n} \times \mathbb C^{+n}$. Using this to rewrite the difference $h(\bm{z}) - \overline{h(\bm{w})}$ in the definition of $\mathcal K_h$ gives equality \eqref{eq:Loewner_kernel}.
Conversely, assume that equality \eqref{eq:Loewner_kernel} holds for some positive semi-definite functions $F_1,\ldots,F_n$ on $\mathbb C^{+n} \times \mathbb C^{+n}$. Then, multiplying both sides of the equality by $(2\,\mathfrak{i})^{-n+1}\prod_{j=1}^n(z_j-\overline{w}_j)$ gives a decomposition of the form \eqref{eq:lowner_functions}, as desired.
\endproof
For $n=1$, decomposition \eqref{eq:lowner_functions} coincides with decomposition \eqref{eq:positive_semi_decomposition}, implying that a given Loewner function admits precisely one decomposition of the form \eqref{eq:lowner_functions}. For $n \geq 2$, decomposition \eqref{eq:lowner_functions} is not necessarily unique and the functions $F_1,\ldots,F_n$ are not necessarily Poisson-type functions. To illustrate this, consider the following two examples.
\begin{example}
Let $h(z_1,z_2) := \mathfrak{i}$ for all $(z_1,z_2) \in \mathbb C^{+2}$. Then, it holds that
\begin{eqnarray*}
h(z_1,z_2) - \overline{h(w_1,w_2)} & = & (z_1 - \overline{w}_1)\cdot\frac{2\,\mathfrak{i}}{z_1-\overline{w}_1} + (z_2-\overline{w}_2)\cdot0 \\
~ & = & (z_1 - \overline{w}_1)\cdot0 + (z_2-\overline{w}_2)\cdot\frac{2\,\mathfrak{i}}{z_2-\overline{w}_2} \\
~ & = & (z_1 - \overline{w}_1)\cdot\frac{2k_1\mathfrak{i}}{z_1-\overline{w}_1} + (z_2-\overline{w}_2)\cdot\frac{2k_2\mathfrak{i}}{z_2-\overline{w}_2},
\end{eqnarray*}
where $k_1,k_2 \geq 0$ with $k_1+k_2 = 1$. Thus, this function $h$ admits infinitely many different decompositions of the form \eqref{eq:lowner_functions}. It is also noteworthy that while the function
$$(z_1,w_1) \mapsto \frac{2\,\mathfrak{i}}{z_1-\overline{w}_1}$$
is a Poisson-type function on $\mathbb C^+ \times \mathbb C^+$, the function
$$F\colon ((z_1,z_2),(w_1,w_2)) \mapsto \frac{2\,\mathfrak{i}}{z_1-\overline{w}_1}$$
is not a Poisson-type function on $\mathbb C^{+2} \times \mathbb C^{+2}$, though it still is positive semi-definite on $\mathbb C^{+2} \times \mathbb C^{+2}$. Indeed, if it were, it would be given by some measure $\mu$, which one would be able to reconstruct via the Stieltjes inversion formula \eqref{eq:Stieltjes_for_Poisson}. Taking $\psi$ to be any function as in Lemma \ref{lem:Stieltjes_for_Poisson}, we calculate that
\begin{multline*}
0 \leq \iint_{\mathbb R^2}|\psi(x_1,x_2)|\,y_1\,y_2\,F(\bm{x}+\mathfrak{i}\,\bm{y},\bm{x}+\mathfrak{i}\,\bm{y})\mathrm{d}\bm{x} = \iint_{\mathbb R^2}|\psi(x_1,x_2)|\,y_2\mathrm{d}\bm{x} \\
\leq C\,\pi^2\,y_2 \xrightarrow{\bm{y} \to \bm{0}} 0,
\end{multline*}
where the constant $C$ comes from the assumptions on the function $\psi$. Therefore, the measure $\mu$ would have to be the zero-measure, an impossibility given that the function $F$ is not identically zero.
As a Herglotz-Nevan\-linna function, the function $h$ can be written as
$$h(z_1,z_2) - \overline{h(w_1,w_2)} = \frac{1}{2\,\mathfrak{i}}(z_1-\overline{w}_1)(z_2-\overline{w}_2)D(\bm{z},\bm{w}),$$
where the function $D$ is as specified in Theorem \ref{thm:positive_semi_decomposition} and equals
$$D((z_1,z_2),(w_1,w_2)) := \frac{-4}{(z_1-\overline{w}_1)(z_2-\overline{w}_2)}.$$
Let us now attempt to rewrite the above decomposition as
$$h(z_1,z_2) - \overline{h(w_1,w_2)} = (z_1-\overline{w}_1)\cdot\underbrace{\left(\frac{z_2-\overline{w}_2}{2\,\mathfrak{i}}D(\bm{z},\bm{w})\right)}_{:= F_1(\bm{z},\bm{w})} + (z_2-\overline{w}_2)\cdot 0.$$
The function $F_1$, defined in the above way, can be shown to be equal to
$$F_1((z_1,z_2),(w_1,w_2)) = \frac{2\,\mathfrak{i}}{z_1-\overline{w}_1},$$
thus providing a valid decomposition of the form \eqref{eq:lowner_functions}.
$\lozenge$
\end{example}
\begin{example}
Let $h(z_1,z_2) := -(z_1+z_2)^{-1}$ for all $(z_1,z_2) \in \mathbb C^{+2}$. Then, it holds that
\begin{multline*}
h(z_1,z_2) - \overline{h(w_1,w_2)} = -\frac{1}{z_1+z_2} + \frac{1}{\overline{w}_1 + \overline{w}_2} = \frac{z_1+z_2-\overline{w}_1 - \overline{w}_2}{(z_1+z_2)(\overline{w}_1+ \overline{w}_2)} \\
= (z_1 - \overline{w}_1)\,\frac{1}{(z_1+z_2)(\overline{w}_1+ \overline{w}_2)} + (z_2 - \overline{w}_2)\,\frac{1}{(z_1+z_2)(\overline{w}_1+ \overline{w}_2)}
\end{multline*}
for all $\bm{z},\bm{w} \in \mathbb C^{+2}$. The function
$$F\colon ((z_1,z_2),(w_1,w_2)) \mapsto \frac{1}{(z_1+z_2)(\overline{w}_1+ \overline{w}_2)}$$
is indeed positive semi-definite on $\mathbb C^{+2} \times \mathbb C^{+2}$, as it is of the same form as the functions in Example \ref{ex:psd_simple}. However, it can be shown by reasoning analogous to that of the previous example that this function is not a Poisson-type function.
As a Herglotz-Nevan\-linna function, it admits a decomposition of the form
\begin{multline*}
h(z_1,z_2) - \overline{h(w_1,w_2)} \\ = \frac{1}{2\,\mathfrak{i}}(z_1-\overline{w}_1)(z_2-\overline{w}_2)\cdot\underbrace{\frac{2\,\mathfrak{i}\,(z_1+z_2-\overline{w}_1 -\overline{w}_2)}{(z_1-\overline{w}_1)(z_2-\overline{w}_2)(z_1+z_2)(\overline{w}_1+\overline{w}_2)}}_{= D(\bm{z},\bm{w})}
\end{multline*}
for all $\bm{z},\bm{w} \in \mathbb C^{+2}$. Investigating whether this decomposition can be rewritten into a decomposition of the form \eqref{eq:lowner_functions} as in the previous examples leads us to consider the function
$$(\bm{z},\bm{w}) \mapsto \frac{z_1 - \overline{w}_1}{2\,\mathfrak{i}}\,D(\bm{z},\bm{w}) = \frac{z_1+z_2-\overline{w}_1 -\overline{w}_2}{(z_2-\overline{w}_2)(z_1+z_2)(\overline{w}_1+\overline{w}_2)}.$$
However, this function is no longer positive semi-definite. Indeed, choose $m = 2$, $\bm{z}_1 = (\mathfrak{i},-2+\mathfrak{i})$, $\bm{z}_2 = (2+\mathfrak{i},2\,\mathfrak{i})$, $c_2 = 1$ and
$$c_1 = \frac{\sqrt{377}-7}{2132}(-33-113\,\mathfrak{i}).$$
Then, it holds that
$$\sum_{i,j=1}^2\frac{(\bm{z}_i)_1 - \overline{(\bm{z}_j)}_1}{2\,\mathfrak{i}}D(\bm{z}_i,\bm{z}_j)c_i\overline{c}_j = \frac{4901-255\sqrt{377}}{8528} \approx -0.00588701 < 0.$$
Hence, a decomposition of the form \eqref{eq:lowner_functions} cannot always be constructed from a decomposition of the form \eqref{eq:positive_semi_decomposition} via the process that was exhibited in the previous example.
$\lozenge$
\end{example}
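The negative value claimed in the last example can be checked numerically. The following sketch (the helper names \texttt{D} and \texttt{G} are ours) evaluates the quadratic form at the points and coefficients given above, assuming only the explicit formula for $D$ stated in the example.

```python
# Helper names D and G are ours; the formulas are the ones in the example.
def D(z, w):
    # D((z1,z2),(w1,w2)) from the example, with conj(w) written out
    z1, z2 = z
    w1c, w2c = w[0].conjugate(), w[1].conjugate()
    num = 2j * (z1 + z2 - w1c - w2c)
    return num / ((z1 - w1c) * (z2 - w2c) * (z1 + z2) * (w1c + w2c))

def G(z, w):
    # (z1 - conj(w1)) / (2i) * D(z, w) -- the candidate first factor
    return (z[0] - w[0].conjugate()) / 2j * D(z, w)

pts = [(1j, -2 + 1j), (2 + 1j, 2j)]          # z_1 and z_2 from the example
k = (377 ** 0.5 - 7) / 2132
c = [k * (-33 - 113j), 1.0]                  # c_1 and c_2 from the example

S = sum(G(pts[i], pts[j]) * c[i] * c[j].conjugate()
        for i in range(2) for j in range(2))
print(S.real)  # ≈ -0.00588701 < 0, so the form is not positive semi-definite
```

The real part agrees with $(4901-255\sqrt{377})/8528$ and the imaginary part vanishes up to rounding.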
\section{Holomorphic functions on the unit polydisk with non-negative real part}
\label{sec:polydisk}
Historically, the idea to investigate the difference (or sum) of the values of a holomorphic function with a prescribed sign of its imaginary or real part dates back to the 1910s. A fundamental result from this era is due to Pick \cite[pg. 8]{Pick1915}, who proved, in modern terms, that for any holomorphic function $f\colon\mathbb D \to \mathbb C$ with non-negative real part it holds that
$$\frac{f(\xi) + \overline{f(\eta)}}{1-\xi\,\overline{\eta}} = \frac{1}{\pi}\int_{[0,2\pi)}\frac{1}{({\mathrm{e}}^{\mathfrak{i}\,s}-\xi)({\mathrm{e}}^{-\mathfrak{i}\,s}-\overline{\eta})}\mathrm{d}\nu(s),$$
where $\xi,\eta \in \mathbb D$ and $\nu$ is the representing measure of the function $f$ in the sense of the Riesz-Herglotz representation theorem \cite[pg. 76]{ShilovGurevich1977}. Using the Cayley transform and its inverse, recalled explicitly later on, this result may be reinterpreted in the language of Herglotz-Nevan\-linna functions where it reproduces the results summarized in the beginning of Section \ref{sec:main_thm}.
When considering functions of several variables, Kor\'anyi and Puk\'anszky gave an integral representation for holomorphic functions on the unit polydisk with non-negative real part \cite[Thm. 1]{KoranyiPukanszky1963}. Additionally, they gave a criterion to determine when a function defined \emph{a priori} on a subset of $\mathbb D^n$ of a particular type can be extended to a holomorphic function on the whole of $\mathbb D^n$ with non-negative real part \cite[Thm. 2]{KoranyiPukanszky1963}. The condition of the latter theorem is expressed in terms of a certain function being positive semi-definite. Interestingly, this positive semi-definite function turns out to be a real constant multiple of the one appearing in the following adaptation of Theorem \ref{thm:positive_semi_decomposition}.
\begin{prop}
\label{prop:szego_kernel}
Let $f\colon \mathbb D^n \to \mathbb C$ be a holomorphic function. Then, $f$ has non-negative real part if and only if the function
\begin{equation}
\label{eq:szego_kernel}
(\bm{\xi},\bm{\eta}) \mapsto \frac{2^{n-1}}{\prod_{j=1}^n(1-\xi_j\overline{\eta}_j)}\cdot\big(f(\bm{\xi}) + \overline{f(\bm{\eta})}\big)
\end{equation}
is positive semi-definite on $\mathbb D^n \times \mathbb D^n$.
\end{prop}
\begin{remark}
The normalizing factor $2^{n-1}$ has no impact on the conclusion of the proposition, unlike the factor $(2\,\mathfrak{i})^{n-1}$ appearing in Theorem \ref{thm:Nevan_kernel_Nvar}. Furthermore, the function
$$(\bm{\xi},\bm{\eta}) \mapsto \prod_{j=1}^n\frac{1}{1-\xi_j\overline{\eta}_j},$$
where $\bm{\xi},\bm{\eta} \in \mathbb D^n$, is sometimes referred to as the \emph{Szeg\H{o} kernel} of $\mathbb D^n$, \textit{cf.}\/ \cite[pg. 450]{KoranyiPukanszky1963}.
\end{remark}
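Proposition \ref{prop:szego_kernel} lends itself to a quick numerical illustration. In the sketch below (the sample function and all variable names are ours), we take $n=2$ and $f(\bm{\xi}) = \frac{1+\xi_1}{1-\xi_1} + \frac{1+\xi_2}{1-\xi_2}$, which is holomorphic on $\mathbb D^2$ with non-negative real part, and check that the Gram matrix of the function \eqref{eq:szego_kernel} at random points of $\mathbb D^2$ has no significantly negative eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(xi):
    # A sample function on D^2 with non-negative real part (our choice):
    # Re[(1+xi)/(1-xi)] = (1-|xi|^2)/|1-xi|^2 >= 0 in each variable.
    return (1 + xi[0]) / (1 - xi[0]) + (1 + xi[1]) / (1 - xi[1])

def kernel(xi, eta):
    # The candidate positive semi-definite function for n = 2.
    prod = (1 - xi[0] * np.conj(eta[0])) * (1 - xi[1] * np.conj(eta[1]))
    return 2.0 / prod * (f(xi) + np.conj(f(eta)))

# Random points in the bidisk, kept away from the boundary.
m = 30
radii = 0.9 * np.sqrt(rng.uniform(size=(m, 2)))
pts = radii * np.exp(2j * np.pi * rng.uniform(size=(m, 2)))

K = np.array([[kernel(pts[i], pts[j]) for j in range(m)] for i in range(m)])
print(np.linalg.eigvalsh(K).min())  # non-negative up to rounding error
```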
\proof
Recall first that the Cayley transform $\varphi\colon \mathbb C \to \mathbb C^+$ is defined as
$$\varphi(\zeta) = \mathfrak{i}\,\frac{1+\zeta}{1-\zeta},$$
while its inverse $\varphi^{-1}\colon\mathbb C^+ \to \mathbb D$ is given by
$$\varphi^{-1}(z) = \frac{z - \mathfrak{i}}{z + \mathfrak{i}}.$$
Furthermore, if a change of variables between $s \in (0,2\pi)$ and $t \in \mathbb R$ is given by ${\mathrm{e}}^{\mathfrak{i}\,s} = \varphi^{-1}(t)$, then
$$\mathrm{d} s = \frac{2}{1+t^2}\mathrm{d} t \quad\text{or}\quad \mathrm{d} t = \frac{1}{1-\cos(s)}\mathrm{d} s.$$
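The consistency of the two forms of this change of variables can be verified numerically; the sketch below (the name \texttt{phi\_inv} is ours) checks that $\varphi^{-1}$ maps $\mathbb R$ to the unit circle and that $(1+t^2)/2 = 1/(1-\cos(s))$ at sample points, which is precisely the statement that the two Jacobians are reciprocal.

```python
import cmath
import math

def phi_inv(z):
    # Inverse Cayley transform, as defined above.
    return (z - 1j) / (z + 1j)

for t in [-3.0, -0.5, 0.25, 2.0]:
    w = phi_inv(t)                       # the point e^{is} on the unit circle
    assert abs(abs(w) - 1) < 1e-12
    s = cmath.phase(w) % (2 * math.pi)
    # ds/dt = 2/(1+t^2)  is equivalent to  dt/ds = 1/(1-cos(s)):
    assert abs((1 + t * t) / 2 - 1 / (1 - math.cos(s))) < 1e-9
print("ok")
```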
Let now $f\colon\mathbb D^n \to \mathbb C$ be a holomorphic function with non-negative real part. Then, the function
$$q(\bm{z}) := \mathfrak{i}\,f(\varphi^{-1}(z_1),\ldots,\varphi^{-1}(z_n))$$ is a Herglotz-Nevan\-linna function.
By Theorem \ref{thm:positive_semi_decomposition}, it holds that
\begin{multline}
\label{eq:temp2}
q(\bm{z}) - \overline{q(\bm{w})} = \sum_{j=1}^nb_j(z_j - \overline{w}_j) \\ + \frac{1}{(2\,\mathfrak{i})^{n-1}}\prod_{j=1}^n(z_j - \overline{w}_j)\cdot\frac{1}{\pi^n}\int_{\mathbb R^n}\prod_{j=1}^n\frac{1}{(t_j - z_j)(t_j - \overline{w}_j)}\mathrm{d}\mu(\bm{t}),
\end{multline}
where $\bm{b}$ and $\mu$ are the representing parameters of the function $q$ in the sense of Theorem \ref{thm:intRep_Nvar}.
Define now $\bm{\xi},\bm{\eta} \in \mathbb D^n$ and $\bm{s} \in (0,2\pi)^n$ via $\xi_j := \varphi^{-1}(z_j)$, $\eta_j := \varphi^{-1}(w_j)$ and ${\mathrm{e}}^{\mathfrak{i}\,s_j} := \varphi^{-1}(t_j)$ for $j=1,2,\ldots,n$. In this notation, $f(\bm{\xi}) = -\mathfrak{i}\,q(\bm{z})$ and equality \eqref{eq:temp2} becomes
\begin{multline}
\label{eq:temp3}
\mathfrak{i}\,f(\bm{\xi}) + \mathfrak{i}\,\overline{f(\bm{\eta})} \\
= \mathfrak{i}\,\sum_{j=1}^nb_j\left(\frac{1+\xi_j}{1-\xi_j} + \frac{1+\overline{\eta}_j}{1-\overline{\eta}_j}\right) + \frac{\mathfrak{i}^n}{(2\,\mathfrak{i})^{n-1}}\prod_{j=1}^n\left(\frac{1+\xi_j}{1-\xi_j} + \frac{1+\overline{\eta}_j}{1-\overline{\eta}_j}\right) \\
\cdot\frac{1}{(2\pi)^n}\int_{(0,2\pi)^n}\prod_{j=1}^n\frac{(1-\xi_j)(1-\overline{\eta}_j)}{({\mathrm{e}}^{\mathfrak{i}\,s_j}-\xi_j)({\mathrm{e}}^{-\mathfrak{i}\,s_j}-\overline{\eta}_j)}\mathrm{d}\nu(\bm{s}) \\
= 2\,\mathfrak{i}\,\sum_{j=1}^nb_j\frac{1-\xi_j\overline{\eta}_j}{(1-\xi_j)(1-\overline{\eta}_j)} + \frac{\mathfrak{i}}{2^{n-1}}\prod_{j=1}^n(1-\xi_j\overline{\eta}_j) \\
\cdot\frac{1}{\pi^n}\int_{(0,2\pi)^n}\prod_{j=1}^n\frac{1}{({\mathrm{e}}^{\mathfrak{i}\,s_j}-\xi_j)({\mathrm{e}}^{-\mathfrak{i}\,s_j}-\overline{\eta}_j)}\mathrm{d}\nu(\bm{s}).
\end{multline}
Here, the measure $\nu$ on $(0,2\pi)^n$ is a reparametrization of the measure $\mu$ obtained by setting
$$\mathrm{d}\mu(\bm{t}) = \prod_{j=1}^n\frac{1}{1-\cos(s_j)}\mathrm{d}\nu(\bm{s}).$$
Note now that by standard residue calculus it holds for every $\xi,\eta \in \mathbb D$ that
$$\frac{1}{\pi}\int_{[0,2\pi)}\frac{1}{({\mathrm{e}}^{\mathfrak{i}\,s}-\xi)({\mathrm{e}}^{-\mathfrak{i}\,s}-\overline{\eta})}\mathrm{d} s = \frac{2}{1-\xi\,\overline{\eta}}.$$
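This residue identity is easy to confirm numerically; the following sketch (the name \texttt{lhs} is ours) approximates the integral with an equispaced Riemann sum, which converges rapidly for this smooth periodic integrand.

```python
import cmath

def lhs(xi, eta, N=4096):
    # Equispaced Riemann sum for
    # (1/pi) * int_0^{2 pi} ds / ((e^{is} - xi)(e^{-is} - conj(eta)))
    total = 0j
    for k in range(N):
        s = 2 * cmath.pi * k / N
        total += 1 / ((cmath.exp(1j * s) - xi) *
                      (cmath.exp(-1j * s) - eta.conjugate()))
    return total * (2 / N)  # = total * (2*pi/N) / pi

xi, eta = 0.3 + 0.4j, -0.5 + 0.2j
print(abs(lhs(xi, eta) - 2 / (1 - xi * eta.conjugate())) < 1e-9)  # True
```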
Hence, for every $j = 1,2,\ldots,n$, we may write
\begin{multline*}
2\,\mathfrak{i}\,b_j\,\frac{1-\xi_j\overline{\eta}_j}{(1-\xi_j)(1-\overline{\eta}_j)} = \frac{\mathfrak{i}}{2^{n-1}}\prod_{k=1}^n(1-\xi_k\overline{\eta}_k) \\ \cdot\frac{1}{\pi^n}\int_{[0,2\pi)^n}\prod_{k=1}^n\frac{1}{({\mathrm{e}}^{\mathfrak{i}\,s_k}-\xi_k)({\mathrm{e}}^{-\mathfrak{i}\,s_k}-\overline{\eta}_k)}\mathrm{d}\sigma_j(\bm{s}),
\end{multline*}
where $\sigma_j$ is a measure on $[0,2\pi)^n$ defined as
$$\sigma_j := \lambda_{[0,2\pi)} \times \ldots \times \lambda_{[0,2\pi)} \times \underbrace{2\pi b_j \delta_0}_{j-\text{th coordinate}} \times \lambda_{[0,2\pi)} \times \ldots \times \lambda_{[0,2\pi)}.$$
Here, $\lambda_{[0,2\pi)}$ denotes the Lebesgue measure on ${[0,2\pi)}$ and $\delta_0$ denotes the Dirac measure at zero. We also extend the measure $\nu$ to a measure $\widetilde{\nu}$ on $[0,2\pi)^n$ by setting
$$\widetilde{\nu}|_{(0,2\pi)^n} := \nu \quad\text{and}\quad \widetilde{\nu}|_{[0,2\pi)^n\setminus(0,2\pi)^n} := 0.$$
Denote now
$$\widetilde{\sigma} := \widetilde{\nu} + \sum_{j=1}^n\sigma_j.$$
Using this, equality \eqref{eq:temp3} may be rewritten as
$$f(\bm{\xi}) + \overline{f(\bm{\eta})} = \frac{1}{2^{n-1}}\prod_{j=1}^n(1-\xi_j\overline{\eta}_j)\cdot\frac{1}{\pi^n}\int_{[0,2\pi)^n}\prod_{j=1}^n\frac{1}{({\mathrm{e}}^{\mathfrak{i}\,s_j}-\xi_j)({\mathrm{e}}^{-\mathfrak{i}\,s_j}-\overline{\eta}_j)}\mathrm{d}\widetilde{\sigma}(\bm{s}).$$
Note finally that the function
\begin{equation}
\label{eq:temp4}
(\bm{\xi},\bm{\eta}) \mapsto \frac{1}{\pi^n}\int_{[0,2\pi)^n}\prod_{j=1}^n\frac{1}{({\mathrm{e}}^{\mathfrak{i}\,s_j}-\xi_j)({\mathrm{e}}^{-\mathfrak{i}\,s_j}-\overline{\eta}_j)}\mathrm{d}\widetilde{\sigma}(\bm{s})
\end{equation}
is positive semi-definite on $\mathbb D^n \times \mathbb D^n$, which follows by a calculation analogous to the one in the proof of Lemma \ref{lem:positive_semi_poisson}. This finishes the first part of the proof.
Conversely, assume that we have a function $f\colon \mathbb D^n \to \mathbb C$ for which the function \eqref{eq:szego_kernel} is positive semi-definite on $\mathbb D^n \times \mathbb D^n$. To show that $f$ has non-negative real part, we only need to evaluate the function \eqref{eq:szego_kernel} at $\bm{\xi} = \bm{\eta}$ and use Lemma \ref{lem:positive_semi_def_properties}. This finishes the proof.
\endproof
\begin{remark}
The Nevanlinna kernel and the function \eqref{eq:szego_kernel} are not equivalent under the Cayley transform. More precisely, let $\varphi$ and $\varphi^{-1}$ be the Cayley transform and its inverse as in the proof of Proposition \ref{prop:szego_kernel}, let $\bm{z},\bm{w} \in \mathbb C^{+n}$ and $\bm{\xi},\bm{\eta} \in \mathbb D^n$ be such that $\xi_j := \varphi^{-1}(z_j)$ and $\eta_j := \varphi^{-1}(w_j)$ and let $q\colon\mathbb C^{+n} \to \mathbb C$ be a Herglotz-Nevan\-linna function and $f\colon\mathbb D^n \to \mathbb C$ be a holomorphic function with non-negative real part such that $q(\bm{z}) = \mathfrak{i}\,f(\bm{\xi})$. Then,
$$\frac{(2\,\mathfrak{i})^{n-1}}{\prod_{j=1}^n(z_j - \overline{w}_j)}\big(q(\bm{z}) - \overline{q(\bm{w})}\big) \neq \frac{2^{n-1}}{\prod_{j=1}^n(1-\xi_j\overline{\eta}_j)}\cdot\big(f(\bm{\xi}) + \overline{f(\bm{\eta})}\big).$$
An analogous non-equivalence under the Cayley transform holds for Poisson-type functions and functions of the type \eqref{eq:temp4}.
\end{remark}
\end{document}
\begin{document}
\mainmatter
\title{Confidence-Weighted Bipartite Ranking}
\titlerunning{Confidence-Weighted Bipartite Ranking}
\author{Majdi Khalid \and Indrakshi Ray \and Hamidreza Chitsaz}
\institute{Colorado State University, Fort Collins, USA\\
\mailsa\\
\mailsb\\
\url{www.colostate.edu}
}
\toctitle{Confidence-Weighted Bipartite Ranking}
\maketitle
\begin{abstract}
Bipartite ranking is a fundamental machine learning and data mining problem. It commonly concerns the maximization of the AUC metric. Recently, a number of studies have proposed online bipartite ranking algorithms to learn from massive streams of class-imbalanced data. These methods suggest both linear and kernel-based bipartite ranking algorithms based on first- and second-order online learning. Unlike kernelized rankers, linear rankers are more scalable learning algorithms. The existing linear online bipartite ranking algorithms lack either the ability to handle non-separable data or to construct an adaptive large margin. These limitations yield unreliable bipartite ranking performance. In this work, we propose a linear online confidence-weighted bipartite ranking algorithm (CBR) that adopts soft confidence-weighted learning. The proposed algorithm leverages the same properties of soft confidence-weighted learning in a framework for bipartite ranking. We also develop a diagonal variation of the proposed confidence-weighted bipartite ranking algorithm to deal with high-dimensional data by maintaining only the diagonal elements of the covariance matrix. We empirically evaluate the effectiveness of the proposed algorithms on several benchmark and high-dimensional datasets. The experimental results validate the reliability of the proposed algorithms. The results also show that our algorithms outperform or are at least comparable to the competing online AUC maximization methods.
\keywords{online ranking, imbalanced learning, AUC maximization}
\end{abstract}
\section{Introduction}
Bipartite ranking is a fundamental machine learning and data mining problem because of its wide range of applications such as recommender systems, information retrieval, and bioinformatics \cite{agarwal2005study,liu2009learning,rendle2009learning}. Bipartite ranking has also been shown to be an appropriate learning algorithm for imbalanced data \cite{cortes2004auc}. The aim of a bipartite ranking algorithm is to maximize the Area Under the Curve (AUC) by learning a function that scores positive instances higher than negative instances. Therefore, the optimization problem of such a ranking model is formulated as the minimization of a pairwise loss function. This ranking problem can be solved by applying a binary classifier to pairs of positive and negative instances, where the classification function learns to classify a pair as positive or negative based on the first instance in the pair. The key problem of this approach is the high complexity of the learning algorithm, which grows quadratically or subquadratically \cite{joachims2006training} with respect to the number of instances.
Recently, significant efforts have been devoted to developing scalable bipartite ranking algorithms to optimize the AUC in both batch and online settings \cite{ding2015adaptive,gao2013one,hu2015kernelized,li2014top,zhao2011online}. The online learning approach is appealing because of its scalability and effectiveness. Online bipartite ranking algorithms can be classified, on the basis of the learned function, into linear and nonlinear ranking models. While kernelized online ranking algorithms are advantageous over linear online rankers in modeling nonlinearity in the data, they require a kernel computation for each new training instance. Further, the decision function of a nonlinear kernel ranker depends on support vectors to construct the kernel and to make a decision.
Online bipartite ranking algorithms can also be grouped into two different schemes. The first scheme maintains random instances from each class label in a finite buffer \cite{kar2013generalization,zhao2011online}. Once a new instance is received, the buffer is updated based on stream oblivious policies such as Reservoir Sampling (RS) and First-In-First-Out (FIFO). Then the ranker function is updated based on pairs formed by the new instance and each opposite-class instance stored in the corresponding buffer. These online algorithms are able to deal with non-separable data, but they are based on simple first-order learning. The second scheme maintains first- and second-order statistics \cite{gao2013one}, and is able to adapt the ranker to the importance of the features \cite{ding2015adaptive}. However, these algorithms assume the data are linearly separable.
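As an illustration of the buffered scheme, the following sketch (all names are ours, not an implementation from the cited papers) shows a Reservoir Sampling update for one class buffer; after $n$ instances of that class have been seen, each of them occupies the buffer with probability $\min(1, \mathrm{capacity}/n)$.

```python
import random

def reservoir_update(buffer, capacity, x, n_seen):
    """Stream-oblivious Reservoir Sampling update for a single class buffer.

    n_seen is the number of instances of this class seen so far,
    including x (1-indexed)."""
    if len(buffer) < capacity:
        buffer.append(x)
    else:
        j = random.randrange(n_seen)  # uniform over all instances seen so far
        if j < capacity:
            buffer[j] = x             # keep x with probability capacity/n_seen

random.seed(0)
buf_pos = []
for n, x in enumerate(range(100), start=1):  # toy stream of 100 "positive" instances
    reservoir_update(buf_pos, capacity=5, x=x, n_seen=n)
print(len(buf_pos))  # 5
```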
Moreover, these algorithms make no attempt to exploit confidence information, which has been shown to be very effective in improving classification performance \cite{crammer2009multi,dredze2008confidence,wang2012exact}. Confidence-weighted (CW) learning takes advantage of the underlying structure between features by modeling the classifier (i.e., the weight vector) as a Gaussian distribution parameterized by a mean vector and covariance matrix \cite{dredze2008confidence}. This model captures the notion of confidence for each weight coordinate via the covariance matrix. A large diagonal value corresponding to the $i$-th feature in the covariance matrix results in less confidence in its weight (i.e., its mean). Therefore, an aggressive update is performed on the less confident weight coordinates. This is analogous to the adaptive subgradient method \cite{duchi2011adaptive} that involves the geometric structure of the data seen so far in regularizing the weights of sparse features (i.e., less frequently occurring features), as they are deemed more informative than dense features. The confidence-weighted algorithm \cite{dredze2008confidence} has also been improved by introducing adaptive regularization (AROW), which deals with inseparable data \cite{crammer2009adaptive}. The soft confidence-weighted (SCW) algorithm improves upon AROW by maintaining an adaptive margin \cite{wang2012exact}.
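To make the idea concrete, the following sketch shows a single diagonal AROW-style update (an illustrative stand-in, not the exact SCW update adopted in this paper; the regularizer \texttt{r} and all names are ours). Coordinates with large variance receive aggressive mean updates, and every update shrinks the variance, i.e., increases confidence.

```python
import numpy as np

def arow_diag_update(mu, sigma, x, y, r=1.0):
    """One AROW-style update with a diagonal covariance sigma (a vector).

    mu: mean weight vector, sigma: per-coordinate variances,
    x: feature vector, y: label in {-1, +1}, r: regularization parameter."""
    margin = y * (mu @ x)
    v = sigma @ (x * x)                 # confidence x^T diag(sigma) x
    loss = max(0.0, 1.0 - margin)       # hinge loss
    if loss > 0.0:
        beta = 1.0 / (v + r)
        alpha = loss * beta
        mu = mu + alpha * y * sigma * x          # larger step where sigma is large
        sigma = sigma - beta * (sigma * x) ** 2  # variances shrink on update
    return mu, sigma

mu, sigma = np.zeros(3), np.ones(3)
mu, sigma = arow_diag_update(mu, sigma, np.array([1.0, 0.0, -1.0]), +1)
print(mu, sigma)  # mu -> [1/3, 0, -1/3], sigma -> [2/3, 1, 2/3]
```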
In this paper, we propose a novel framework that solves linear online bipartite ranking using the soft confidence-weighted algorithm \cite{wang2012exact}. The proposed confidence-weighted bipartite ranking algorithm (CBR) enjoys the fast training and testing phases of linear bipartite ranking. It also inherits the capability of the soft confidence-weighted algorithm in learning a confidence-weighted model, handling linearly inseparable data, and constructing an adaptive large margin. The proposed framework follows an online bipartite ranking scheme that maintains a finite buffer for each class label while updating the buffer by one of the stream oblivious policies such as Reservoir Sampling (RS) and First-In-First-Out (FIFO) \cite{vitter1985random}. We also provide a diagonal variation (CBR-diag) of the proposed algorithm to handle high-dimensional datasets.
The remainder of the paper is organized as follows. In Section 2, we briefly review closely related work. We present the confidence-weighted bipartite ranking (CBR) and its diagonal variation (CBR-diag) in Section 3. The experimental results are presented in Section 4. Section 5 concludes the paper and presents some future work.
\section{Related Work}
The proposed bipartite ranking algorithm is closely related to the online learning and bipartite ranking algorithms. What follows is a brief review of recent studies related to these topics.
\textbf{Online Learning}. The proliferation of big data and massive streams of data emphasizes the importance of online learning algorithms. Online learning algorithms have shown classification performance comparable to batch learning while being more scalable. Some online learning algorithms, such as the Perceptron algorithm \cite{rosenblatt1958perceptron}, the Passive-Aggressive (PA) algorithm \cite{crammer2006online}, and online gradient descent \cite{zinkevich2003online}, update the model based on a first-order learning approach. These methods do not take into account the underlying structure of the data during learning. This limitation is addressed by exploring second-order information to exploit the underlying structure between features in improving learning performance \cite{cesa2005second,crammer2009adaptive,dredze2008confidence,duchi2011adaptive,orabona2010new,wang2012exact}. Moreover, kernelized online learning methods have been proposed to deal with nonlinearly distributed data \cite{cavallanti2007tracking,dekel2008forgetron,orabona2009bounded}.
\textbf{Bipartite Ranking}. Bipartite ranking learns a real-valued function that induces an order on the data in which positive instances precede negative instances. The common measure used to evaluate the success of a bipartite ranking algorithm is the AUC \cite{hanley1982meaning}. Hence the minimization of the bipartite ranking loss function is equivalent to the maximization of the AUC metric. The AUC represents the probability that a model will rank a randomly drawn positive instance higher than a randomly drawn negative instance. In the batch setting, a considerable number of studies have investigated the optimization of linear and nonlinear kernel ranking functions \cite{chapelle2010efficient,joachims2005support,kuo2014large,lee2014large}. More scalable methods are based on stochastic or online learning \cite{sculley2009large,wan2015online,wangsolar}. However, these methods are not specifically designed to optimize the AUC metric. Recently, a few studies have focused on the optimization of the AUC metric in the online setting. The first approach adopted a framework that maintains a buffer with limited capacity to store random instances in order to deal with the pairwise loss function \cite{kar2013generalization,zhao2011online}. The other methods maintain only first- and second-order statistics for each received instance and optimize the AUC in one pass over the training data \cite{ding2015adaptive,gao2013one}. The work \cite{ding2015adaptive} exploited the second-order technique of \cite{duchi2011adaptive} to make the ranker aware of the importance of less frequently occurring features, hence updating their weights with a higher learning rate.
Our proposed method follows the online framework that maintains fixed-sized buffers to store instances from each class label. Further, it exploits the online second-order method of \cite{wang2012exact} to learn a robust bipartite ranking function. This distinguishes the proposed method from \cite{kar2013generalization,zhao2011online}, which employ first-order online learning. Also, the proposed method differs from \cite{ding2015adaptive,gao2013one} in learning a confidence-weighted ranker capable of dealing with non-separable data and learning an adaptive large margin. The most similar approaches to our method are \cite{wan2015online,wangsolar}. However, these are not designed to directly maximize the AUC. They also use classical first-order and second-order online learning, whereas we use the soft variation of confidence-weighted learning, which has shown robust performance in the classification task \cite{wang2012exact}.
\section{Online Confidence-Weighted Bipartite Ranking}
\subsection{Problem Setting}
We consider a linear online bipartite ranking function that learns on imbalanced data to optimize the AUC metric \cite{hanley1982meaning}. Let $\mathcal{S} = \{x_{i}^{+} \cup x_{j}^{-} \in \mathcal{R}^{d} \mid i = 1,\ldots,n ,\ j = 1, \ldots, m \}$ denote the input space of dimension $d$, generated from an unknown distribution $\mathcal{D}$, where $x_{i}^{+}$ is the $i$-th positive instance and $x_{j}^{-}$ is the $j$-th negative instance. Here $n$ and $m$ denote the numbers of positive and negative instances, respectively. The linear bipartite ranking function $f:\mathcal{S} \rightarrow \mathcal{R}$ is a real-valued function that maximizes the AUC metric by minimizing the following loss function:
\begin{eqnarray*}\label{eq1}
\mathcal{L}(f;\mathcal{S}) = \frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} I(f(x_{i}^{+}) \leq f(x_{j}^{-})),
\end{eqnarray*}
where $f(x) = w^{T}x$ and $I(\cdot)$ is an indicator function that outputs $1$ if the condition holds and $0$ otherwise. It is common to replace the indicator function with a convex surrogate function,
\begin{eqnarray}\label{eq2}
\mathcal{L}(f;\mathcal{S}) = \frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \ell(f(x_{i}^{+}) - f(x_{j}^{-}) ),
\end{eqnarray}
\noindent
where $\ell(\cdot)$ is a surrogate loss function such as the hinge loss $\ell(z) = \max(0,1-z)$.
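As an illustration, the AUC and its pairwise hinge surrogate from (\ref{eq2}) can be computed directly when all instances are available at once; the following sketch uses our own function names and is not part of the proposed online algorithm.

```python
import numpy as np

def pairwise_hinge_loss(w, X_pos, X_neg):
    """Empirical surrogate (eq2): mean hinge loss over all positive/negative pairs."""
    s_pos = X_pos @ w                      # scores f(x_i^+) = w^T x_i^+
    s_neg = X_neg @ w                      # scores f(x_j^-) = w^T x_j^-
    margins = s_pos[:, None] - s_neg[None, :]   # f(x_i^+) - f(x_j^-) for every pair
    return np.maximum(0.0, 1.0 - margins).mean()

def empirical_auc(w, X_pos, X_neg):
    """AUC: fraction of positive/negative pairs ranked correctly."""
    s_pos = X_pos @ w
    s_neg = X_neg @ w
    return (s_pos[:, None] > s_neg[None, :]).mean()
```

Both functions enumerate all $nm$ pairs, which makes the quadratic cost discussed next explicit.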
It is easy to see that the complexity of optimizing (\ref{eq2}) grows quadratically with the number of training instances. Following the approach suggested by \cite{zhao2011online} to deal with the complexity of the pairwise loss function, we reformulate the pairwise loss (\ref{eq2}) as a sum of instance-wise losses,
\begin{eqnarray} \label{eq4}
\sum_{t=1}^{T} I_{(y_{t} = +1)} g_{t}^{+} (w) + I_{(y_{t} = -1)} g_{t}^{-} (w),
\end{eqnarray}
where $T = n + m$, and the losses $g_{t}^{\pm}(w)$ are defined as follows
\begin{eqnarray}\label{1eq5}
g_{t}^{+}(w) = \sum_{t'=1}^{t-1} I_{(y_{t'} = -1)} \ell(f(x_{t}) - f(x_{t'})),
\end{eqnarray}
\begin{eqnarray}\label{eq6}
g_{t}^{-}(w) = \sum_{t'=1}^{t-1} I_{(y_{t'} = +1)} \ell(f(x_{t'}) - f(x_{t})) .
\end{eqnarray}
Instead of maintaining all received instances to compute the gradients $\nabla g_{t}(w)$, we store random instances from each class in a corresponding buffer. Therefore, two buffers $B_{+}$ and $B_{-}$ with predefined capacities are maintained for the positive and negative classes, respectively. The buffers are updated using a stream-oblivious policy. The instances currently stored in a buffer are used to update the ranker as in equation (\ref{eq4}) whenever a new instance of the opposite class label is received.
The framework of the online confidence-weighted bipartite ranking is shown in Algorithm \ref{alg1}. The two main components of this framework are UpdateBuffer and UpdateRanker, which are explained in the following subsections.
\begin{algorithm}[t]
\caption{A Framework for Confidence-Weighted Bipartite Ranking (CBR)} \label{alg1}
\begin{algorithmic}
\STATE {\bf Input}: \begin{itemize}
\item[\textbullet] the penalty parameter $C$ \\
\item[\textbullet] the capacity of the buffers $M_{+}$ and $M_{-}$ \\
\item[\textbullet] the probability parameter $\eta$ \\
\item[\textbullet] $a_{i} = 1$ for $i \in \{1,\dots,d\}$
\end{itemize}
\STATE {\bf Initialize}: $\mu_{1} = \{0,\dots,0\}^{d}$, $B_{+} = B_{-} = \emptyset$, $M^{1}_{+} = M^{1}_{-} = 0 $\\
$\qquad \qquad \quad \Sigma_{1} = diag(a)$ or $G_{1} = a $
\FOR{$t = 1, \ldots, T$}
\STATE Receive a training instance $(x_{t},y_{t})$
\IF{$y_{t} = +1$}
\STATE $B_{-}^{t+1} = B_{-}^{t}$, $M^{t+1}_{+}= M^{t}_{+} + 1$, $M^{t+1}_{-} = M^{t}_{-}$ \\
\STATE $C_{t} = C$
\STATE $B_{+}^{t+1} = $ UpdateBuffer($x_{t},B_{+}^{t},M_{+}$,$M^{t+1}_{+}$) \\
\STATE $[ \mu_{t+1},\Sigma_{t+1}] =$ UpdateRanker($\mu_{t},\Sigma_{t},x_{t},y_{t},C_{t},B_{-}^{t+1}, \eta$) or \\
\STATE $[ \mu_{t+1},G_{t+1}] =$ UpdateRanker($\mu_{t},G_{t},x_{t},y_{t},C_{t},B_{-}^{t+1}, \eta$) (CBR-diag)
\ELSE
\STATE $B_{+}^{t+1} = B_{+}^{t}$,$M^{t+1}_{-}= M^{t}_{-} + 1$, $M^{t+1}_{+} = M^{t}_{+}$ \\
\STATE $C_{t} = C$
\STATE $B_{-}^{t+1} = $ UpdateBuffer($x_{t},B_{-}^{t},M_{-}$,$M^{t+1}_{-}$) \\
\STATE $[\mu_{t+1},\Sigma_{t+1}]=$ UpdateRanker($\mu_{t},\Sigma_{t},x_{t},y_{t},C_{t},B_{+}^{t+1},\eta$) or \\
\STATE $[\mu_{t+1},G_{t+1}]=$ UpdateRanker($\mu_{t},G_{t},x_{t},y_{t},C_{t},B_{+}^{t+1},\eta$) (CBR-diag)
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Update Buffer}
One effective approach to pairwise learning algorithms is to maintain a buffer with a fixed capacity. This raises the problem of updating the buffer so as to store the most informative instances. In our online bipartite ranking framework, we investigate the following two stream-oblivious policies for updating the buffer:
Reservoir Sampling \textbf{(RS)}: Reservoir sampling is a common oblivious policy for streaming data \cite{vitter1985random}. In this approach, the new instance $(x_{t},y_{t})$ is added to the corresponding buffer if its capacity is not reached, $|B_{y_{t}}^{t}| < M_{y_{t}}$. If the buffer is full, it is updated with probability $\frac{M_{y_{t}}}{M^{t+1}_{y_{t}}}$ by replacing a randomly chosen instance in $B^{t}_{y_{t}}$ with $x_{t}$. Algorithm \ref{resersam} shows the steps of the reservoir sampling approach for updating the buffers.
First-In-First-Out \textbf{(FIFO)}: This simple strategy replaces the oldest instance with the new instance if the corresponding buffer reaches its capacity. Otherwise, the new instance is simply added to the buffer.
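The two buffer policies can be sketched as follows; this is an illustrative implementation with our own function names, where `n_seen` plays the role of the arrival count $M^{t+1}_{y_t}$.

```python
import random
from collections import deque

def update_buffer_rs(buffer, x, capacity, n_seen):
    """Reservoir sampling: after n_seen arrivals, every instance seen so far
    remains in the buffer with equal probability capacity / n_seen."""
    if len(buffer) < capacity:
        buffer.append(x)
    elif random.random() < capacity / n_seen:
        # replace a uniformly random stored instance with the new one
        buffer[random.randrange(capacity)] = x
    return buffer

def update_buffer_fifo(buffer, x, capacity):
    """First-in-first-out: evict the oldest instance once the buffer is full."""
    buffer.append(x)
    if len(buffer) > capacity:
        buffer.popleft()  # buffer is a collections.deque
    return buffer
```

FIFO is cheaper and deterministic, while reservoir sampling keeps a uniform random sample of the whole stream; the experiments later compare the two empirically.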
\begin{algorithm}[t]
\caption{Reservoir Sampling Approach} \label{resersam}
\begin{algorithmic}
\STATE {\bf Input}: $x_{t}$,$\;B^{t}$,$\;M$,$\;M_{t+1}$\\
\STATE {\bf Output}: updated buffer $B^{t+1}$\\
\IF{$|B^{t}| < M$}
\STATE $B^{t+1} = B^{t} \cup \{x_{t}\}$
\ELSE
\STATE Sample $Z$ from a Bernoulli distribution with $Pr(Z = 1) = M/M_{t+1}$ \\
\IF{$Z=1$}
\STATE Randomly delete an instance from $B^{t}$ \\
\STATE $B^{t+1} = B^{t} \cup \{x_{t}\}$ \\
\ENDIF
\ENDIF
\STATE Return $B^{t+1}$
\end{algorithmic}
\end{algorithm}
\subsection{Update Ranker}
Inspired by the robust performance of second-order learning algorithms, we apply the soft confidence-weighted learning approach \cite{wang2012exact} to update the bipartite ranking function. Our confidence-weighted bipartite ranking model (CBR) is therefore formulated as a ranker with a Gaussian distribution parameterized by a mean vector $\mu \in \mathcal{R}^{d}$ and a covariance matrix $\Sigma \in \mathcal{R}^{d \times d}$. The mean vector $\mu$ represents the model of the bipartite ranking function, while the covariance matrix captures the confidence in the model: the ranker is more confident about the model value $\mu_{p}$ the smaller its diagonal entry $\Sigma_{p,p}$ is. The model distribution is updated when a new instance is received, while staying close to the old model distribution. This is achieved by minimizing the Kullback-Leibler divergence between the new and old distributions of the model. The online confidence-weighted bipartite ranking (CBR) update is formulated as follows:
\begin{eqnarray} \label{eq7}
(\mu_{t+1} , \Sigma_{t+1}) = \underset{\mu,\Sigma}{\operatorname{argmin}} D_{KL} (\mathcal{N}(\mu,\Sigma) || \mathcal{N}(\mu_{t},\Sigma_{t})) \\
+ C \ell^{\phi}(\mathcal{N}(\mu,\Sigma); (z,y_{t})), \nonumber{}
\end{eqnarray}
where $z = (x_{t}-x)$, $C$ is the penalty hyperparameter, $\phi = \Phi^{-1}(\eta)$, and $\Phi$ is the normal cumulative distribution function. The loss function $\ell^{\phi}(\cdot)$ is defined as:
\begin{eqnarray*}\label{eq8}
\ell^{\phi}(\mathcal{N}(\mu,\Sigma); (z,y_{t})) = \max(0,\phi \sqrt{z^{T}\Sigma z} - y_{t}\, \mu \cdot z).
\end{eqnarray*}
The solution of (\ref{eq7}) is given by the following proposition.
\begin{proposition}\label{prop1}
The optimization problem (\ref{eq7}) has a closed-form solution as follows:
\begin{equation*}
\mu_{t+1} = \mu_{t} + \alpha_{t}y_{t}\Sigma_{t}z,
\end{equation*}
\begin{equation*}
\Sigma_{t+1} = \Sigma_{t} - \beta_{t} \Sigma_{t} z z^{T} \Sigma_{t}.
\end{equation*}
The coefficients $\alpha_{t}$ and $\beta_{t}$ are defined as follows: \\
$\alpha_{t} = \min\{ C, \max \{ 0,\frac{1}{\upsilon_{t} \zeta} (-m_{t} \psi + \sqrt{m_{t}^{2} \frac{\phi^{4}}{4} + \upsilon_{t} \phi^{2} \zeta } ) \} \}$, \\
$\beta_{t} = \frac{\alpha_{t} \phi}{ \sqrt{u_{t}} + \upsilon_{t} \alpha_{t} \phi}$,
where $u_{t} = \frac{1}{4} ( - \alpha_{t} \upsilon_{t} \phi + \sqrt{\alpha_{t}^{2} \upsilon_{t}^{2} \phi^{2} + 4 \upsilon_{t} })^{2}$,
$\upsilon_{t} = z^{T} \Sigma_{t} z$, $m_{t} = y_{t}(\mu_{t} \cdot z)$ , $ \phi = \Phi^{-1}(\eta)$, $ \psi = 1 + \frac{\phi^{2}}{2}$, $\zeta = 1 + \phi^{2} $, and
$z = x_{t} - x$.
\end{proposition}
\noindent Proposition \ref{prop1} is analogous to the one derived in \cite{wang2012exact}.\\
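A minimal sketch of the closed-form update of Proposition \ref{prop1}, assuming $\phi=\Phi^{-1}(\eta)$ is supplied directly; the function name is ours and numerical safeguards are omitted.

```python
import numpy as np

def cw_update(mu, Sigma, z, y, C, phi):
    """One soft confidence-weighted update on the pair difference z = x_t - x."""
    v = float(z @ Sigma @ z)            # upsilon_t = z^T Sigma_t z
    m = y * float(mu @ z)               # margin m_t = y_t (mu_t . z)
    psi = 1.0 + phi**2 / 2.0
    zeta = 1.0 + phi**2
    alpha = (-m * psi + np.sqrt(m**2 * phi**4 / 4.0 + v * phi**2 * zeta)) / (v * zeta)
    alpha = min(C, max(0.0, alpha))     # clip to [0, C] as in the proposition
    u = 0.25 * (-alpha * v * phi + np.sqrt(alpha**2 * v**2 * phi**2 + 4.0 * v))**2
    beta = alpha * phi / (np.sqrt(u) + v * alpha * phi)
    Sz = Sigma @ z                      # Sigma_t z
    mu_new = mu + alpha * y * Sz
    Sigma_new = Sigma - beta * np.outer(Sz, Sz)   # rank-one: Sigma_t z z^T Sigma_t
    return mu_new, Sigma_new
```

Note that a confidently correct instance (large positive margin) gives $\alpha_t = 0$, so the ranker is left unchanged.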
Though modeling the full covariance matrix lends CW algorithms powerful learning capabilities \cite{crammer2012confidence,ma2010exploiting,wang2012exact}, it raises practical concerns with high-dimensional data: the covariance matrix grows quadratically with the data dimension, which makes the CBR algorithm impractical for high-dimensional data due to its computational and memory requirements.
We remedy this deficiency by a diagonalization technique \cite{crammer2012confidence,duchi2011adaptive}. We thus present a diagonal confidence-weighted bipartite ranking algorithm (CBR-diag) that models the ranker by a mean vector $\mu \in \mathcal{R}^{d}$ and a diagonal matrix $\hat{\Sigma} \in \mathcal{R}^{d \times d}$. Let $G$ denote $diag(\hat{\Sigma})$; the optimization problem of CBR-diag is then formulated as follows:
\begin{eqnarray} \label{eq9}
(\mu_{t+1} , G_{t+1}) = \underset{\mu,G}{\operatorname{argmin}} D_{KL} (\mathcal{N}(\mu,G) || \mathcal{N}(\mu_{t},G_{t})) \\
+ C \ell^{\phi}(\mathcal{N}(\mu,G); (z,y_{t})). \nonumber{}
\end{eqnarray}
\begin{proposition}\label{prop2}
The optimization problem (\ref{eq9}) has a closed-form solution as follows:\\
\begin{equation*}
\mu_{t+1} = \mu_{t} + \frac{\alpha_{t}y_{t}z}{G_{t}},
\end{equation*}
\begin{equation*}
G_{t+1} = G_{t} + \beta_{t} z^{2}.
\end{equation*}
The coefficients $\alpha_{t}$ and $\beta_{t}$ are defined as follows: \\
$\alpha_{t} = \min\{ C, \max \{ 0,\frac{1}{\upsilon_{t} \zeta} (-m_{t} \psi + \sqrt{m_{t}^{2} \frac{\phi^{4}}{4} + \upsilon_{t} \phi^{2} \zeta } ) \} \}$, \\
$\beta_{t} = \frac{\alpha_{t} \phi}{ \sqrt{u_{t}} + \upsilon_{t} \alpha_{t} \phi}$,
where $u_{t} = \frac{1}{4} ( - \alpha_{t} \upsilon_{t} \phi + \sqrt{\alpha_{t}^{2} \upsilon_{t}^{2} \phi^{2} + 4 \upsilon_{t} })^{2}$, $\upsilon_{t} = \sum_{i=1}^{d} \frac{z_{i}^{2}}{G_{i}+C}$, $m_{t} = y_{t}(\mu_{t} \cdot z)$,
$\phi = \Phi^{-1}(\eta)$, $\psi = 1 + \frac{\phi^{2}}{2}$, $\zeta = 1 + \phi^{2}$, and $z = x_{t} - x$.
\end{proposition}
Propositions \ref{prop1} and \ref{prop2} can be proved similarly to the proofs in \cite{wang2012exact}. The steps for updating the online confidence-weighted bipartite ranker, with the full covariance matrix or with only its diagonal elements, are summarized in Algorithm \ref{alg2}.
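Under the same assumptions as the previous sketch, the diagonal update of Proposition \ref{prop2} keeps only $d$ confidence parameters, so both memory and per-update time are linear in the dimension; again the function name is ours.

```python
import numpy as np

def cw_diag_update(mu, G, z, y, C, phi):
    """Diagonal soft confidence-weighted update (CBR-diag), O(d) memory."""
    v = float(np.sum(z**2 / (G + C)))   # upsilon_t = sum_i z_i^2 / (G_i + C)
    m = y * float(mu @ z)               # margin m_t = y_t (mu_t . z)
    psi = 1.0 + phi**2 / 2.0
    zeta = 1.0 + phi**2
    alpha = (-m * psi + np.sqrt(m**2 * phi**4 / 4.0 + v * phi**2 * zeta)) / (v * zeta)
    alpha = min(C, max(0.0, alpha))
    u = 0.25 * (-alpha * v * phi + np.sqrt(alpha**2 * v**2 * phi**2 + 4.0 * v))**2
    beta = alpha * phi / (np.sqrt(u) + v * alpha * phi)
    mu_new = mu + alpha * y * z / G     # element-wise, as in Proposition 2
    G_new = G + beta * z**2
    return mu_new, G_new
```

Since $G$ only ever grows, coordinates that have been updated often receive smaller subsequent steps, mirroring the adaptive behavior discussed for \cite{duchi2011adaptive}.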
\begin{algorithm}
\caption{Update Ranker} \label{alg2}
\begin{algorithmic}
\STATE {\bf Input}:
\begin{itemize}
\item[\textbullet] $ \mu_{t}\;\;\;\;\;\;\;\;:$ current mean vector \\
\item[\textbullet] $ \Sigma_{t} \; or \; G_{t}:$ current covariance matrix or diagonal elements \\
\item[\textbullet] $(x_{t},y_{t}):$ a training instance \\
\item[\textbullet] $ B\;\;\;\;\;\;\;\;:$ the buffer storing instances from the opposite class label \\
\item[\textbullet] $ C_{t}\;\;\;\;\;\;\;:$ class-specific weighting parameter \\
\item[\textbullet] $ \eta\;\;\;\;\;\;\;\;\;\;$: the predefined probability
\end{itemize}
\STATE {\bf Output}: updated ranker:
\begin{itemize}
\item[\textbullet] $\mu_{t+1}$
\item[\textbullet] $\Sigma_{t+1} \; or \; G_{t+1}$
\end{itemize}
\STATE{\bf Initialize}: $\mu^{1} = \mu_{t}$, $(\Sigma^{1} = \Sigma_{t}$ or $G^{1} = G_{t})$, $i=1$
\FOR{$x \in B$}
\STATE Update the ranker $(\mu^{i},\Sigma^{i})$ with $z = x_{t}-x$ and $y_{t}$ by
\STATE $(\mu^{i+1} , \Sigma^{i+1}) = \underset{\mu,\Sigma}{\operatorname{argmin}} D_{KL} (\mathcal{N}(\mu,\Sigma) || \mathcal{N}(\mu^{i},\Sigma^{i})) + C \ell^{\phi}(\mathcal{N}(\mu,\Sigma); (z,y_{t}))$
\STATE or \\
\STATE Update the ranker $(\mu^{i},G^{i})$ with $z = x_{t}-x$ and $y_{t}$ by \\
\STATE $ (\mu^{i+1} , G^{i+1}) = \underset{\mu,G}{\operatorname{argmin}} D_{KL} (\mathcal{N}(\mu,G) || \mathcal{N}(\mu^{i},G^{i})) + C \ell^{\phi}(\mathcal{N}(\mu,G); (z,y_{t}))$ \\
\STATE $i = i + 1$
\ENDFOR
\STATE Return $\mu_{t+1} = \mu^{|B|+1}$
\STATE $\qquad \quad \Sigma_{t+1} = \Sigma^{{|B|+1}}$ or $G_{t+1} = G^{{|B|+1}}$
\end{algorithmic}
\end{algorithm}
\section{Experimental Results}
In this section, we conduct extensive experiments on several real-world datasets to demonstrate the effectiveness of the proposed algorithms. We also compare the performance of our methods with existing online learning algorithms in terms of AUC and classification accuracy at the optimal operating point of the ROC curve (OPTROC). A running time comparison is also presented.
\subsection{Real World Datasets}
We conduct extensive experiments on various benchmark and high-dimensional datasets. All datasets can be downloaded from LibSVM\footnote{https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/} and the UCI machine learning repository\footnote{https://archive.ics.uci.edu/ml/}, except the Reuters\footnote{http://www.cad.zju.edu.cn/home/dengcai/Data/TextData.html} dataset, which is used in \cite{cai2011locally}. If a dataset is provided as separate training and test sets, we combine them into one set. For the cod-rna data, only the training and validation sets are grouped together. For rcv1 and news20, we use only their training sets in our experiments. The multi-class datasets are transformed randomly into class-imbalanced binary datasets. Tables \ref{table1} and \ref{table2} show the characteristics of the benchmark and high-dimensional datasets, respectively.
\begin{table*}
\caption{Benchmark datasets}
\label{table1}
\centering
\begin{tabular}{|c|c c|c|c c|c|c c|} \hline
Data & \#inst & \#feat & Data & \#inst & \#feat & Data & \#inst & \#feat \\
\hline
glass & 214 & 10 & cod-rna & 331,152 & 8 & australian & 690 & 14 \\ \hline
ionosphere & 351 & 34 & spambase & 4,601 & 57 & diabetes & 768 & 8 \\ \hline
german & 1,000 & 24 & covtype & 581,012 & 54 & acoustic & 78,823 & 50 \\ \hline
svmguide4 & 612 & 10 & magic04 & 19,020 & 11 & vehicle & 846 & 18 \\ \hline
svmguide3 & 1284 & 21 & heart & 270 & 13 & segment & 2,310 & 19 \\ \hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{ High-dimensional datasets}
\label{table2}
\centering
\begin{tabular}{ |P{1.8cm}|P{1.2cm} P{1.2cm}| } \hline
Data & \#inst & \#feat \\ \hline
farm-ads & 4,143 & 54,877 \\ \hline
rcv1 & 15,564 & 47,236 \\ \hline
sector & 9,619 & 55,197 \\ \hline
real-sim & 72,309 & 20,958 \\ \hline
news20 & 15,937 & 62,061 \\ \hline
Reuters & 8,293 & 18,933 \\ \hline
\end{tabular}
\end{table*}
\subsection{Compared Methods and Model Selection}
\textbf{Online Uni-Exp} \cite{kotlowski2011bipartite}: An online pointwise ranking algorithm that optimizes the weighted univariate exponential loss. The learning rate is tuned by 3-fold cross validation on the training set by searching in $2^{[-10:10]}$.
\noindent \textbf{OPAUC} \cite{gao2013one}: An online learning algorithm that optimizes the AUC in one pass via the square loss function. The learning rate is tuned by 3-fold cross validation by searching in $2^{[-10:10]}$, and the regularization hyperparameter is set to a small value of 0.0001.
\noindent \textbf{OPAUCr} \cite{gao2013one}: A variation of OPAUC that approximates the covariance matrices using low-rank matrices. The model selection step is carried out similarly to OPAUC, while the value of rank $\tau$ is set to 50 as suggested in \cite{gao2013one}.
\noindent \textbf{OAM$_{\text{seq}}$} \cite{zhao2011online}: The online AUC maximization (OAM) is the state-of-the-art first-order learning method. We implement the algorithm with the Reservoir Sampling as a buffer updating scheme. The size of the positive and negative buffers is fixed at 50. The penalty hyperparameter $C$ is tuned by 3-fold cross validation on the training set by searching in $2^{[-10:10]}$.
\noindent \textbf{AdaOAM} \cite{ding2015adaptive}: A second-order AUC maximization method that adapts the classifier to the importance of features. The smoothing hyperparameter $\delta$ is set to 0.5, and the regularization hyperparameter is set to 0.0001. The learning rate is tuned by 3-fold cross validation on the training set by searching in $2^{[-10:10]}$.
\noindent \textbf{CBR$_{\text{RS}}$} and \textbf{CBR$_{\text{FIFO}}$}: The proposed confidence-weighted bipartite ranking algorithms with the Reservoir Sampling and First-In-First-Out buffer updating policies, respectively. The size of the positive and negative buffers is fixed at 50. The hyperparameter $\eta$ is set to 0.7, and the penalty hyperparameter $C$ is tuned by 3-fold cross validation by searching in $2^{[-10:10]}$.
\noindent \textbf{CBR-diag$_{\text{FIFO}}$}: The proposed diagonal variation of confidence-weighted bipartite ranking that uses the First-In-First-Out policy to update the buffer. The buffers are set to 50, and the hyperparameters are tuned similarly to CBR$_{\text{FIFO}}$.
For a fair comparison, the datasets are scaled identically in all experiments. We randomly divide each dataset into 5 folds: 4 folds are used for training and one fold as the test set. For benchmark datasets, we randomly choose 8000 instances if the data exceed this size. For high-dimensional datasets, we limit the sample size to 2000 due to the high dimensionality of the data. The results on the benchmark and high-dimensional datasets are averaged over 10 and 5 runs, respectively, with a random permutation of the data performed for each run. All experiments are conducted with Matlab 15 on a workstation with an 8-core 2.6 GHz CPU and 32 GB memory.
\subsection{Results on Benchmark Datasets}
The comparison in terms of AUC is shown in Table \ref{table3}, while the comparison in terms of classification accuracy at OPTROC is shown in Table \ref{table4}. The running time (in milliseconds) comparison is illustrated in Figure \ref{fig1}.
The results show the robust performance of the proposed methods CBR$_{\text{RS}}$ and CBR$_{\text{FIFO}}$ in terms of AUC and classification accuracy compared to the other first- and second-order online learning algorithms. We observe that the improvement of second-order methods such as OPAUC and AdaOAM over the first-order method OAM$_{\text{seq}}$ is not reliable, while our CBR algorithms often outperform OAM$_{\text{seq}}$. The proposed methods are also faster than OAM$_{\text{seq}}$, though they incur more running time than AdaOAM except on the spambase, covtype, and acoustic datasets. The pointwise method online Uni-Exp has the fastest running time, but at the expense of AUC and classification accuracy. We also notice that CBR$_{\text{FIFO}}$ often performs slightly better than CBR$_{\text{RS}}$ in terms of AUC, classification accuracy, and running time.
\begin{table*}
\caption{Comparison of AUC performance on benchmark datasets}
\label{table3}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Data & CBR$_{\text{RS}}$ & CBR$_{\text{FIFO}}$ & Online Uni-Exp & OPAUC & OAM$_{\text{seq}}$ & AdaOAM \\ \hline
glass & \textbf{ 0.825} $\pm$ 0.043 & 0.823 $\pm$ 0.046 & 0.714 $\pm$ 0.075 & 0.798 $\pm$ 0.061 & 0.805 $\pm$ 0.047 & 0.794 $\pm$ 0.061 \\ \hline
ionosphere & 0.950 $\pm$ 0.027 & \textbf{ 0.951} $\pm$ 0.028 & 0.913 $\pm$ 0.018 & 0.943 $\pm$ 0.026 & 0.946 $\pm$ 0.025 & 0.943 $\pm$ 0.029 \\ \hline
german &\textbf{ 0.782} $\pm$ 0.024 & 0.780 $\pm$ 0.019 & 0.702 $\pm$ 0.032 & 0.736 $\pm$ 0.034 & 0.731 $\pm$ 0.028 & 0.770 $\pm$ 0.024 \\ \hline
svmguide4 & 0.969 $\pm$ 0.013 & \textbf{0.974} $\pm$ 0.013 & 0.609 $\pm$ 0.096 & 0.733 $\pm$ 0.056 & 0.771 $\pm$ 0.063 & 0.761 $\pm$ 0.053 \\ \hline
svmguide3 & 0.755 $\pm$ 0.022 & \textbf{0.764} $\pm$ 0.036 & 0.701 $\pm$ 0.025 & 0.737 $\pm$ 0.029 & 0.705 $\pm$ 0.033 &0.738 $\pm$ 0.033 \\ \hline
cod-rna & 0.983 $\pm$ 0.000 & \textbf{0.984} $\pm$ 0.000 & 0.928 $\pm$ 0.000 & 0.927 $\pm$ 0.001 & 0.951 $\pm$ 0.025 & 0.927 $\pm$ 0.000 \\ \hline
spambase & 0.941 $\pm$ 0.006 & \textbf{0.942} $\pm$ 0.006 & 0.866 $\pm$ 0.016 & 0.849 $\pm$ 0.020 & 0.897 $\pm$ 0.043 & 0.862 $\pm$ 0.011 \\ \hline
covtype & 0.816 $\pm$ 0.003 & \textbf{0.835} $\pm$ 0.001 & 0.705 $\pm$ 0.033 & 0.711 $\pm$ 0.041 & 0.737 $\pm$ 0.023 & 0.770 $\pm$ 0.010 \\ \hline
magic04 & 0.799 $\pm$ 0.006 & \textbf{0.801} $\pm$ 0.006 & 0.759 $\pm$ 0.006 & 0.748 $\pm$ 0.033 & 0.757 $\pm$ 0.015 & 0.773 $\pm$ 0.006 \\ \hline
heart & 0.908 $\pm$ 0.019 & \textbf{0.909} $\pm$ 0.021 & 0.733 $\pm$ 0.039 & 0.788 $\pm$ 0.054 & 0.806 $\pm$ 0.059 & 0.799 $\pm$ 0.079 \\ \hline
australian &0.883 $\pm$ 0.028 & \textbf{0.889} $\pm$ 0.019 & 0.710 $\pm$ 0.130 & 0.735 $\pm$ 0.138 & 0.765 $\pm$ 0.107 & 0.801 $\pm$ 0.037 \\ \hline
diabetes &0.700 $\pm$ 0.021 & \textbf{0.707} $\pm$ 0.033 & 0.633 $\pm$ 0.036 & 0.667 $\pm$ 0.041 & 0.648 $\pm$ 0.040 & 0.675 $\pm$ 0.034 \\ \hline
acoustic & 0.879 $\pm$ 0.006 & \textbf{0.892} $\pm$ 0.003 & 0.876 $\pm$ 0.003 & 0.878 $\pm$ 0.003 & 0.863 $\pm$ 0.011 & 0.882 $\pm$ 0.003 \\ \hline
vehicle & \textbf{0.846} $\pm$ 0.031 & \textbf{ 0.846} $\pm$ 0.034 & 0.711 $\pm$ 0.053 & 0.764 $\pm$ 0.073 & 0.761 $\pm$ 0.078 & 0.792 $\pm$ 0.049 \\ \hline
segment & 0.900 $\pm$ 0.013 & \textbf{0.903} $\pm$ 0.008 & 0.689 $\pm$ 0.061 & 0.828 $\pm$ 0.024 & 0.812 $\pm$ 0.035 & 0.855 $\pm$ 0.008 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table*}
\begin{table*}
\caption{Comparison of classification accuracy at OPTROC on benchmark datasets}
\label{table4}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Data & CBR$_{\text{RS}}$ & CBR$_{\text{FIFO}}$ & Online Uni-Exp & OPAUC & OAM$_{\text{seq}}$ & AdaOAM \\ \hline
glass & \textbf{0.813} $\pm$ 0.044 & 0.811 $\pm$ 0.049 & 0.732 $\pm$ 0.060 & 0.795 $\pm$ 0.046 & 0.788 $\pm$ 0.040 & 0.783 $\pm$ 0.047 \\ \hline
ionosphere & \textbf{0.946} $\pm$ 0.028 & \textbf{0.946} $\pm$ 0.022 & 0.902 $\pm$ 0.028 & 0.936 $\pm$ 0.018 & 0.943 $\pm$ 0.017 & 0.938 $\pm$ 0.018 \\ \hline
german & 0.780 $\pm$ 0.022 & \textbf{0.787} $\pm$ 0.019 & 0.741 $\pm$ 0.027 & 0.754 $\pm$ 0.022 & 0.751 $\pm$ 0.028 & 0.770 $\pm$ 0.030 \\ \hline
svmguide4 & 0.951 $\pm$ 0.014 & \textbf{0.956} $\pm$ 0.012 & 0.829 $\pm$ 0.021 & 0.843 $\pm$ 0.024 & 0.839 $\pm$ 0.022 & 0.848 $\pm$ 0.020 \\ \hline
svmguide3 & 0.784 $\pm$ 0.015 & \textbf{0.793} $\pm$ 0.016 & 0.784 $\pm$ 0.019 & 0.777 $\pm$ 0.024 & 0.780 $\pm$ 0.020 & 0.777 $\pm$ 0.024 \\ \hline
cod-rna & 0.948 $\pm$ 0.002 & \textbf{0.949} $\pm$ 0.000 & 0.887 $\pm$ 0.001 & 0.887 $\pm$ 0.001 & 0.910 $\pm$ 0.019 & 0.887 $\pm$ 0.001 \\ \hline
spambase & \textbf{ 0.899} $\pm$ 0.009 & 0.898 $\pm$ 0.009 & 0.818 $\pm$ 0.019 & 0.795 $\pm$ 0.022 & 0.849 $\pm$ 0.053 & 0.809 $\pm$ 0.014 \\ \hline
covtype & 0.746 $\pm$ 0.005 & \textbf{0.766} $\pm$ 0.003 & 0.672 $\pm$ 0.018 & 0.674 $\pm$ 0.021 & 0.685 $\pm$ 0.016 & 0.709 $\pm$ 0.008 \\ \hline
magic04 & 0.769 $\pm$ 0.011 & \textbf{0.771} $\pm$ 0.006 & 0.734 $\pm$ 0.007 & 0.731 $\pm$ 0.015 & 0.736 $\pm$ 0.013 & 0.752 $\pm$ 0.008 \\ \hline
heart & \textbf{0.883} $\pm$ 0.032 & 0.875 $\pm$ 0.026 & 0.716 $\pm$ 0.021 & 0.753 $\pm$ 0.038 & 0.777 $\pm$ 0.043 & 0.772 $\pm$ 0.053 \\ \hline
australian & 0.841 $\pm$ 0.023 & \textbf{ 0.842} $\pm$ 0.022 & 0.711 $\pm$ 0.056 & 0.725 $\pm$ 0.070 & 0.742 $\pm$ 0.064 & 0.768 $\pm$ 0.036 \\ \hline
diabetes & \textbf{0.714} $\pm$ 0.029 & 0.705 $\pm$ 0.032 & 0.683 $\pm$ 0.037 & 0.692 $\pm$ 0.040 & 0.694 $\pm$ 0.044 & 0.689 $\pm$ 0.040 \\ \hline
acoustic & 0.844 $\pm$ 0.005 & \textbf{0.850} $\pm$ 0.003 & 0.840 $\pm$ 0.005 & 0.839 $\pm$ 0.002 & 0.832 $\pm$ 0.005 & 0.841 $\pm$ 0.003 \\ \hline
vehicle & \textbf{0.816} $\pm$ 0.018 & 0.814 $\pm$ 0.018 & 0.764 $\pm$ 0.027 & 0.797 $\pm$ 0.014 & 0.790 $\pm$ 0.029 & 0.805 $\pm$ 0.021 \\ \hline
segment & \textbf{0.838} $\pm$ 0.015 & 0.836 $\pm$ 0.008 & 0.691 $\pm$ 0.031 & 0.768 $\pm$ 0.027 & 0.755 $\pm$ 0.024 & 0.796 $\pm$ 0.014 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table*}
\begin{figure}
\caption{Running time (in milliseconds) of CBR and the other online learning algorithms on the benchmark datasets.}
\label{fig1}
\end{figure}
\subsection{Results on High-Dimensional Datasets}
We study the performance of the proposed CBR-diag$_{\text{FIFO}}$ and compare it with online Uni-Exp, OPAUCr, and OAM$_{\text{seq}}$, which avoid constructing the full covariance matrix. Table \ref{table5} compares our method with the other online algorithms in terms of AUC, while Table \ref{table6} shows the classification accuracy at OPTROC. Figure \ref{fig2} displays the running time (in milliseconds) comparison.
The results show that the proposed method CBR-diag$_{\text{FIFO}}$ yields better performance on both measures. CBR-diag$_{\text{FIFO}}$ has a competitive running time compared to its counterpart OAM$_{\text{seq}}$, as shown in Figure \ref{fig2}. While CBR-diag$_{\text{FIFO}}$ takes more running time than OPAUCr, it achieves better AUC and classification accuracy. The online Uni-Exp algorithm requires the least running time, but delivers lower AUC and classification accuracy than our method.
\begin{table*}[t!]
\caption{Comparison of AUC on high-dimensional datasets}
\label{table5}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Data & CBR-diag$_{\text{FIFO}}$ & Online Uni-Exp & OPAUCr & OAM$_{\text{seq}}$ \\ \hline
farm-ads & \textbf{0.961} $\pm$ 0.004 & 0.942 $\pm$ 0.006 & 0.951 $\pm$ 0.004 & 0.952 $\pm$ 0.005 \\ \hline
rcv1 & \textbf{0.950} $\pm$ 0.007 & 0.927 $\pm$ 0.015 & 0.914 $\pm$ 0.016 & 0.945 $\pm$ 0.008 \\ \hline
sector & \textbf{0.927} $\pm$ 0.009 & 0.846 $\pm$ 0.019 & 0.908 $\pm$ 0.013 & 0.857 $\pm$ 0.008 \\ \hline
real-sim & \textbf{ 0.982} $\pm$ 0.001 & 0.969 $\pm$ 0.003 & 0.975 $\pm$ 0.002 & 0.977 $\pm$ 0.001 \\ \hline
news20 &\textbf{ 0.956} $\pm$ 0.003 & 0.939 $\pm$ 0.005 & 0.942 $\pm$ 0.006 & 0.944 $\pm$ 0.005 \\ \hline
Reuters & \textbf{0.993} $\pm$ 0.001 & 0.985 $\pm$ 0.003 & 0.988 $\pm$ 0.002 & 0.989 $\pm$ 0.003 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table*}
\begin{table*}
\caption{Comparison of classification accuracy at OPTROC on high-dimensional datasets}
\label{table6}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Data & CBR-diag$_{\text{FIFO}}$ & Online Uni-Exp & OPAUCr & OAM$_{\text{seq}}$ \\ \hline
farm-ads & \textbf{0.897} $\pm$ 0.007 & 0.872 $\pm$ 0.012 & 0.885 $\pm$ 0.008 & 0.882 $\pm$ 0.007 \\ \hline
rcv1 & \textbf{0.971} $\pm$ 0.001 & 0.967 $\pm$ 0.002 & 0.966 $\pm$ 0.003 & 0.970 $\pm$ 0.001 \\ \hline
sector & \textbf{0.850} $\pm$ 0.012 & 0.772 $\pm$ 0.011 & 0.831 $\pm$ 0.015 & 0.776 $\pm$ 0.008 \\ \hline
real-sim & \textbf{0.939} $\pm$ 0.003 & 0.913 $\pm$ 0.005 & 0.926 $\pm$ 0.002 & 0.929 $\pm$ 0.001 \\ \hline
news20 & \textbf{0.918} $\pm$ 0.005 & 0.895 $\pm$ 0.005 & 0.902 $\pm$ 0.009 & 0.907 $\pm$ 0.006 \\ \hline
Reuters & \textbf{0.971} $\pm$ 0.004 & 0.953 $\pm$ 0.006 & 0.961 $\pm$ 0.006 & 0.961 $\pm$ 0.006 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table*}
\begin{figure}
\caption{Running time (in milliseconds) of CBR-diag$_{\text{FIFO}}$ and the other online learning algorithms on the high-dimensional datasets.}
\label{fig2}
\end{figure}
\section{Conclusions and Future Work}
In this paper, we proposed a linear online soft confidence-weighted bipartite ranking algorithm that maximizes the AUC metric by optimizing a pairwise loss function. The complexity of the pairwise loss function is mitigated by employing finite buffers that are updated using stream-oblivious policies. We also developed a diagonal variant of the proposed algorithm that handles high-dimensional data by maintaining only the diagonal elements of the covariance matrix instead of the full covariance matrix. Experimental results on several benchmark and high-dimensional datasets show that our algorithms yield robust performance and outperform first- and second-order AUC maximization methods on most of the datasets. As future work, we plan to conduct a theoretical analysis of the proposed method. We also aim to investigate the use of online feature selection \cite{wang2014online} within our framework to handle high-dimensional data even more effectively.
\end{document}
\begin{document}
\selectlanguage{english}
\title[Supnorm of Modular Forms of half-integral Weight]{Supnorm of Modular Forms of half-integral Weight in the Weight Aspect}
\author{Raphael S. Steiner}
\address{Department of Mathematics, University of Bristol, Bristol BS8 1TW, UK}
\email{[email protected]}
\begin{abstract} We bound the supnorm of half-integral weight Hecke eigenforms in the Kohnen plus space of level $4$ in the weight aspect, by combining bounds obtained from the Fourier expansion with the amplification method using a Bergman kernel.
\end{abstract}
\maketitle
\section{Introduction}
The question of supremum norms of holomorphic and Maass Hecke eigenforms is connected to the $L$-functions attached to them. In the case of holomorphic half-integral weight Hecke eigenforms, they are directly related to the critical values of quadratic twists of the $L$-functions associated to their Shimura lifts. Supnorms have therefore been studied by many authors in various ways: by Iwaniec-Sarnak \cite{IS95} in the eigenvalue aspect; by Harcos-Templier \cite{HT2}, \cite{HT3} and Saha \cite{Saha14} in the level aspect, as well as Kiral \cite{halflevel} in the case of half-integral weight; and by Templier \cite{Thybrid} in the level and eigenvalue aspects simultaneously, unifying the best known results in both. In the weight aspect they have been studied by Xia \cite{Supnormintweight}, Das-Sengupta \cite{DasSeng}, Rudnick \cite{R05}, Friedman-Jorgenson-Kramer \cite{FJK} and the author himself \cite{realweight}, where in the last three the condition of being a Hecke eigenform is not necessary.\\
In this paper we are concerned with the supremum norm, in the weight aspect, of holomorphic half-integral weight Hecke eigenforms in the Kohnen plus space of level 4. Assuming the Lindel\"of hypothesis, we are able to prove an analogue of Xia's result \cite{Supnormintweight}, which states that for a holomorphic Hecke eigenform $f$ of integral weight $k$ for the full modular group $\SL2({\mathbb{Z}})$ we have $\sup_{z \in {\mathbb{H}}} y^{\frac{k}{2}} |f(z)| \ll_{\varepsilon} k^{\frac{1}{4}+\varepsilon}$. Our theorem reads as follows.
\begin{thm} Let $k \in \frac{1}{2}+{\mathbb{Z}}$ with $k \ge \frac{5}{2}$ and let $f \in S^+_k(\Gamma_0(4)^{\star})$ be an $L^2$-normalised Hecke eigenform ($\langle f,f \rangle_{\Gamma_0(4)}=1$) of half-integral weight $k$ contained in the Kohnen plus space. Assume the Lindel\"of hypothesis for the family of $L$-functions $L(F,\chi,s)$, where $F$ is any modular form of weight $2k-1$ on $\SL2({\mathbb{Z}})$ and $\chi$ any primitive quadratic character. Then we have
$$
\sup_{z \in {\mathbb{H}}} y^{\frac{k}{2}}|f(z)| \ll_{\varepsilon} k^{\frac{1}{4}+\varepsilon}.
$$
\label{thm:2}
\end{thm}
Unconditionally we are able to prove the following.
\begin{thm} Let $k \in \frac{1}{2}+{\mathbb{Z}}$ with $k \ge \frac{5}{2}$ and let $f \in S^+_k(\Gamma_0(4)^{\star})$ be an $L^2$-normalised Hecke eigenform ($\langle f,f \rangle_{\Gamma_0(4)}=1$) of half-integral weight $k$ contained in the Kohnen plus space. Then we have
$$
\sup_{z \in {\mathbb{H}}} y^{\frac{k}{2}}|f(z)| \ll_{\varepsilon} k^{\frac{3}{7}+\varepsilon}.
$$
\label{thm:1}
\end{thm}
We now give a brief overview of the significance of the various exponents and the methods that go into them. If $f$ is not assumed to be a Hecke eigenform, then the best exponent one can prove in general is $k^{\frac{3}{4}}$. Indeed, this has been shown for arbitrary real weight $k$ by the author \cite{realweight} and relies on estimates for the Fourier coefficients of Poincar\'e series. However, when $f$ is an eigenform of half-integral weight, as in the current paper, it follows from a result of Kohnen and Zagier (or, more generally, of Waldspurger) that the squares of its Fourier coefficients are essentially central $L$-values. Using the convexity bound on these $L$-functions, one achieves a bound for the Fourier expansion which is especially good near the cusps. Combining this estimate with a Bergman kernel for the range away from the cusps gives the bound $k^{\frac{1}{2}+\varepsilon}$ for the supnorm. Any sub-convexity result on those central $L$-values easily allows the removal of the $\varepsilon$, yielding the bound $k^{\frac{1}{2}}$; this was shown by the author in his master's thesis. To decrease the exponent further, one can either use deeper techniques or assume unproven bounds, e.g. the Lindel\"of hypothesis as in Theorem \ref{thm:2}. The bound $k^{\frac{1}{4}+\varepsilon}$ is essentially best possible, as the next theorem shows that the best uniform bound one can hope for is $k^{\frac{1}{4}}$ if one takes the dimension of the space into consideration. The bound $k^{\frac{3}{7}}$ comes from combining the best known bound for these central $L$-values, due to Petrow \cite{Pet14} and Young \cite{Young14}, with the amplification method using the Bergman kernel.
\begin{thm}Let $k \in \frac{1}{2}+{\mathbb{Z}}$ with $k \ge \frac{5}{2}$ and let $\{f_j\} \subseteq S^+_k(\Gamma_0(4)^{\star})$ be an orthonormal basis of Hecke eigenforms of half-integral weight $k$ contained in the Kohnen plus space. Let $\{F_j\} \subseteq S_{2k-1}(\SL2({\mathbb{Z}}))$ be the corresponding arithmetically normalised Hecke eigenforms ($\widehat{F_j}(1)=1$) under the Shimura map. Then we have the following lower bounds:
$$
\sup_{z \in {\mathbb{H}}} y^{\frac{k}{2}}|f_j(z)| \gg_{\varepsilon} \max\left\{1, \ k^{\frac{1}{4}-\varepsilon} \sup_{\substack{D \text{ fund. disc.},\\ (-1)^{k-\frac{1}{2}}D>0}} L(F_j,\slegendre{D}{\cdot},1/2)^{\frac{1}{2}} |D|^{-\frac{1}{2}}\right\},
$$
$$
\sum_j \sup_{z \in {\mathbb{H}}} y^k |f_j(z)|^2 \ge \sup_{z \in {\mathbb{H}}} \sum_j y^k |f_j(z)|^2 \gg k^{\frac{3}{2}}.
$$
\label{thm:3}
\end{thm}
Although we restrict ourselves in this paper to the Kohnen plus space of level 4, the methods certainly generalise to larger level, though slightly weaker results are to be expected. Nevertheless the author strongly believes that even the convexity bound on the central values of the corresponding $L$-functions is sufficient to break the bound of $k^{\frac{3}{4}}$, as is indeed the case in the Kohnen plus space of level 4.
\section{Notation and Preliminaries} Throughout let $k\in \frac{1}{2}+{\mathbb{Z}}$ be a half-integer with $k \ge \frac{5}{2}$. For a complex number $z\in {\mathbb{C}}^{\times}$ we define $z^k=\exp(k\cdot \mathop{\rm Log}\nolimits(z))$, where $\mathop{\rm Log}\nolimits(z)=\log|z|+i\arg(z)$ with $-\pi < \arg(z)\le \pi$. The notation $f(x) \ll_{A,B} g(x)$ means that $|f(x)|\le K g(x)$, where $K$ is some constant depending at most on $A$ and $B$. Further let $e(z)=\exp(2 \pi i z)$ for $z \in {\mathbb{C}}$.\\
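The branch convention can be checked numerically; the following Python sketch (purely illustrative, not part of the argument) evaluates $z^k$ for a half-integral $k$ via the principal logarithm, which is the convention fixed above:

```python
import cmath

def principal_power(z, k):
    # z^k = exp(k * Log z) with the principal branch -pi < arg(z) <= pi,
    # matching Python's cmath.log.
    return cmath.exp(k * cmath.log(z))

# Example with half-integral exponent: arg(-1) = +pi, so
# (-1)^{5/2} = exp((5/2) * i*pi) = exp(i*pi/2) = i.
w = principal_power(-1, 2.5)
```

In particular $(-1)^{5/2}=i$ rather than $-i$; the choice $\arg(z)\le\pi$ (rather than $<\pi$) matters exactly on the negative real axis.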
As usual we define the M\"obius action of $\gamma \in \GL2^{+}({\mathbb{Q}})$, the set of all $2\times 2$ matrices with rational coefficients and positive determinant, on ${\mathbb{H}}$, the upper half plane, as
$$
\gamma \cdot z= \gamma z = \frac{az+b}{cz+d}, \quad \forall \gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \GL2^{+}({\mathbb{Q}}), \forall z \in {\mathbb{H}}.
$$
The action is extended to the set of cusps $\overline{{\mathbb{Q}}}={\mathbb{P}}^{1}({\mathbb{Q}})={\mathbb{Q}} \sqcup \{\infty\}$. We further define
$$
j(\gamma,z)=cz+d, \quad \forall \gamma= \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \GL2^{+}({\mathbb{Q}}), \forall z \in {\mathbb{H}}
$$
and
$$
j_{\Theta}(\gamma,z)=\frac{\Theta(\gamma z)}{\Theta(z)}, \quad \forall \gamma= \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \Gamma_0(4), \forall z \in {\mathbb{H}},
$$
where $\Theta(z)=\sum_{n\in {\mathbb{Z}}}e(n^2 z)$. By ${\mathfrak{S}}_k$ we denote the group whose elements are pairs $(\gamma, \varphi)$, where $\gamma \in \GL2^{+}({\mathbb{Q}})$ and $\varphi:{\mathbb{H}} \to {\mathbb{C}}$ is a holomorphic function with $|\varphi(z)|=(\det \gamma)^{-\frac{k}{2}} |j(\gamma,z)|^k$, and whose composition is given by:
$$
(\gamma,\varphi)\circ(\gamma^{'},\varphi^{'})=(\gamma \gamma^{'}, (\varphi \circ \gamma^{'})\cdot \varphi^{'}).
$$
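The multiplier $j_{\Theta}$ satisfies $|j_{\Theta}(\gamma,z)|^{2}=|j(\gamma,z)|$ for $\gamma\in\Gamma_0(4)$ (the classical weight-$\frac{1}{2}$ automorphy of $\Theta$), and this can be checked numerically by truncating the theta series. A minimal Python sketch with the sample matrix $\gamma=\left(\begin{smallmatrix}1&0\\4&1\end{smallmatrix}\right)$, used purely as a sanity check:

```python
import cmath

def theta(z, N=60):
    # Truncation of Theta(z) = sum_{n in Z} e(n^2 z); the tail is
    # O(exp(-2*pi*N^2*Im z)) and negligible for the points used here.
    return sum(cmath.exp(2j * cmath.pi * n * n * z) for n in range(-N, N + 1))

# gamma = [[1, 0], [4, 1]] lies in Gamma_0(4); here j(gamma, z) = 4z + 1.
z = 0.3 + 1.2j
gz = z / (4 * z + 1)
j_theta = theta(gz) / theta(z)
# weight-1/2 automorphy: |j_Theta(gamma, z)|^2 = |j(gamma, z)| = |4z + 1|
```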
For each $k$ we have a group homomorphism
$$\begin{aligned}
^{\star}: \Gamma_0(4)& \to {\mathfrak{S}}_k \\
\gamma & \mapsto \gamma^{\star}=(\gamma,j_{\Theta}(\gamma,\cdot)^{2k}).
\end{aligned}$$
We further have an inclusion as sets $\GL2^+({\mathbb{Q}}) \hookrightarrow {\mathfrak{S}}_k$, where we identify the element $\gamma \in \GL2^+({\mathbb{Q}})$ with $(\gamma,(\det \gamma)^{-\frac{k}{2}} j(\gamma,\cdot)^k)$. Among all elements in ${\mathfrak{S}}_k$ we would like to distinguish two special elements $W_4$ and $V_4$, which we are going to use to translate the cusps $0,\frac{1}{2}$ to $\infty$,
$$\begin{aligned}
W_4 &= \left( \begin{pmatrix} 0 & -\frac{1}{2} \\ 2 & 0 \end{pmatrix}, (-2iz)^k \right),\\
V_4 &= \left( \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}, (-i(2z+1))^k \right).
\end{aligned}$$
\begin{defn} For $\tau \in \SL2({\mathbb{Z}})$ we define the \emph{cusp width} $n_{\tau}$ and the \emph{cusp parameter} $\kappa_{\tau} \in [0,1)$ in such a way that the stabilizer group at $\infty$ of $\tau^{-1}\Gamma_0(4)^{\star}\tau$ is generated by
$$
\pm \left(\begin{pmatrix} 1 & n_{\tau} \\ 0 & 1 \end{pmatrix}, e(\kappa_{\tau}) \right).
$$
\end{defn}
\begin{rem} For $\Gamma_0(4)^{\star}$, the cusps $0,\frac{1}{2},\infty$ have cusp width $4,1,1$ and cusp parameter $0,\frac{1}{2}-(-1)^{k-\frac{1}{2}}\frac{1}{4},0$ respectively.
\end{rem}
The group ${\mathfrak{S}}_k$ acts on the set of meromorphic functions on ${\mathbb{H}}$ as follows:
$$
(f|_k (\gamma, \varphi))(z)=\varphi(z)^{-1} f(\gamma z).
$$
\begin{defn} A holomorphic function $f$ on the upper half-plane satisfying
$$
f|_k \xi=f, \quad \forall \xi \in \Gamma_0(4)^{\star},
$$
and having a Fourier expansion of the form
$$
(f|_k \tau)(z)=\sum_{m+\kappa_{\tau}>0} \widehat{(f|_k\tau)}(m) \, e\!\left(\frac{m+\kappa_{\tau}}{n_{\tau}}z\right)
$$
for every $\tau \in \SL2({\mathbb{Z}})$ is called a \emph{cusp form} of weight $k$ with respect to $\Gamma_0(4)^{\star}$. The set of such functions we denote by $S_k(\Gamma_0(4)^{\star})$.
\end{defn}
The space $S_k(\Gamma_0(4)^{\star})$ is finite dimensional and can be made into a Hilbert space by defining the Petersson inner product:
$$
\langle f,g \rangle_{\Gamma_0(4)} = \frac{1}{6} \int_{{\mathbb{F}}_{\Gamma_0(4)}} f(z)\overline{g(z)}y^k \frac{dxdy}{y^2},
$$
where ${\mathbb{F}}_{\Gamma_0(4)}$ is a fundamental domain for $\Gamma_0(4)$ and $z=x+iy$. Furthermore a theory of Hecke operators can be established on $S_k(\Gamma_0(4)^{\star})$. For $l$ a square, one defines
$$
f|_k T(l)= l^{\frac{k}{2}-1} \sum_{\xi \in \rquotient{\Gamma_0(4)^{\star}}{\Gamma_0(4)^{\star}\xi_l \Gamma_0(4)^{\star}}} f|_k \xi,
$$
where
$$
\xi_l= \left( \begin{pmatrix} 1 & 0 \\ 0 & l \end{pmatrix}, l^{\frac{k}{2}} \right).
$$
These operators commute and thus one gets an orthonormal basis of Hecke eigenforms. Shimura \cite{MFhiw} has shown that, given such a Hecke eigenform $f$, one can use its Fourier coefficients to construct a classical Hecke eigenform $F \in S_{2k-1}(\Gamma_0(M))$ of weight $2k-1$ for some level $M$ with the same Hecke eigenvalues. Later Niwa \cite{MFohiwatIocTF} showed that one can always take $M=2$; moreover Kohnen \cite{Ko80} showed that one can take $M=1$ if the eigenform comes from a certain subspace, the Kohnen plus space, which is defined as follows:
$$
S^+_k(\Gamma_0(4)^{\star})= \{f \in S_k(\Gamma_0(4)^{\star}) | \widehat{f}(m)=0, \, \forall m \text{ such that } (-1)^{k-\frac{1}{2}}m \equiv 2,3 \mathop{\rm mod}\nolimits(4) \}.
$$
The plus space has some nice properties, one of which is that it comes with a projection $S_k(\Gamma_0(4)^{\star}) \to S^+_k(\Gamma_0(4)^{\star})$. For this reason the subspace has its own Poincar\'e series, which have been computed by Kohnen \cite{Ko85}.
\begin{prop} Let $k\in \frac{1}{2}+ {\mathbb{Z}}$ with $k \ge \frac{5}{2}$ and $m \in {\mathbb{N}}, (-1)^{k-\frac{1}{2}}m \equiv 0,1 \mathop{\rm mod}\nolimits(4)$. The Poincar\'e series $G_I^+( \Gamma_0(4)^{\star},k,z,m )$ given by the Fourier expansion:
$$
G_I^+( \Gamma_0(4)^{\star},k,z,m ) = \sum_{\substack{n \ge 1, \\ (-1)^{k-\frac{1}{2}}n \equiv 0,1 \mathop{\rm mod}\nolimits(4)}}g_{k,m}(n)e^{2 \pi i n z},
$$
with
$$
g_{k,m}(n)=\frac{2}{3} \left[\delta_{m,n}+(-1)^{ \left\lfloor \frac{k+\frac{1}{2}}{2} \right\rfloor} \pi \sqrt{2} \left( \frac{n}{m}\right)^{\frac{k-1}{2}} \sum_{c \ge 1}H_c(n,m)J_{k-1}\left( \frac{\pi}{c} \sqrt{nm} \right) \right],
$$
where $H_c(n,m)$ is given by
$$\begin{aligned}
H_c(n,m) &=(1-(-1)^{k-\frac{1}{2}}i) \left(1+\legendre{4}{c} \right)\frac{1}{4c} \sum_{\substack{\delta \mathop{\rm mod}\nolimits(4c),\\ (\delta,4c)=1}}\legendre{4c}{\delta}\legendre{-4}{\delta}^{k} e\left(\frac{n\delta+m\delta^{-1}}{4c} \right),
\end{aligned}$$
satisfies
$$
\langle f, G_I^+(\Gamma_0(4)^{\star},k, \cdot ,m) \rangle_{\Gamma_0(4)} = \frac{\Gamma(k-1)}{6\cdot(4 \pi m)^{k-1}} \widehat{f}(m) \quad \forall f \in S^+_k(\Gamma_0(4)^{\star})
$$
and $G_I^+(\Gamma_0(4)^{\star},k, \cdot ,m) \in S^+_k(\Gamma_0(4)^{\star})$.
\end{prop}
\begin{proof} See Proposition 4 of \cite{Ko85}.
\end{proof}
The following Corollary is immediate.
\begin{cor} Let $k \in \frac{1}{2}+{\mathbb{Z}}$ with $k \ge \frac{5}{2}$ and $\{f_j\}$ be an orthonormal basis of $S^+_k(\Gamma_0(4)^{\star})$, then we have
$$
\sum_j \left|\widehat{f_j}(m)\right|^2=\frac{6 \cdot (4\pi m)^{k-1}}{\Gamma(k-1)} \cdot \frac{2}{3} \left[1+(-1)^{ \left\lfloor \frac{k+\frac{1}{2}}{2} \right\rfloor} \pi \sqrt{2} \sum_{c \ge 1}H_c(m,m)J_{k-1}\left( \frac{\pi m}{c} \right) \right].
$$
\label{cor:halfpluscoeff}
\end{cor}
Furthermore cusp forms in the Kohnen plus space have special relations among their Fourier coefficients at different cusps as the next lemma shows.
\begin{lem} Let $k \in \frac{1}{2}+{\mathbb{Z}}$ and $f \in S^+_k(\Gamma_0(4)^{\star})$. Then the Fourier coefficients of $f$ at the cusps $0,\frac{1}{2}$ can be given in terms of the Fourier coefficients at $\infty$:
$$\begin{aligned}
(f|_k W_4)(z) &= \legendre{2}{2k}2^{\frac{1}{2}-k} \sum_{m \ge 1} \widehat{f}(4m)e(mz), \\
(f|_k V_4)(z) &= \legendre{2}{2k}2^{\frac{1}{2}-k} \sum_{\substack{m \ge 1, \\ (-1)^{k-\frac{1}{2}}m \equiv 1 \mathop{\rm mod}\nolimits (4)}} i^{\frac{m}{2}}\widehat{f}(m)e\left(\frac{m}{4}z\right).
\end{aligned}$$
We note here that $\legendre{2}{2k}$ denotes the Jacobi symbol.
\label{lem:Fourrierpluscusp}
\end{lem}
\begin{proof} In \cite{Ko80}, Proposition 2, Kohnen showed that $(f| U_4 |_k W_4)(z)= \legendre{2}{2k} 2^{k-\frac{1}{2}} f(z)$, where $(f|U_4)(z)=\sum_{m\ge 1} \widehat{f}(4m)e(mz)$.
Applying $|_k W_4$ to both sides gives the desired result, by noting that $|_k W_4^2$ is the identity map. The second identity follows from:
$$ \begin{aligned}
(f|_k V_4)(z) &= (-i(2z+1))^{-k} \sum_{\substack{m \ge 1,\\ (-1)^{k-\frac{1}{2}}m \equiv 0,1 \mathop{\rm mod}\nolimits(4)}}\widehat{f}(m) e\left(\frac{m}{2}-\frac{m}{4z+2}\right) \\
&= (-i(2z+1))^{-k} \left[ 2 \sum_{\substack{m \ge 1,\\ m \equiv 0 \mathop{\rm mod}\nolimits(4)}} -\sum_{\substack{m \ge 1,\\ (-1)^{k-\frac{1}{2}}m \equiv 0,1 \mathop{\rm mod}\nolimits(4)}} \right] \widehat{f}(m) e\left(\frac{-m}{4z+2} \right) \\
&= (-i(2z+1))^{-k} 2(f|U_4)\left( \frac{-1}{z+\frac{1}{2}}\right) - (f|_k W_4)\left(z+\frac{1}{2}\right) \\
&= 4^{\frac{1}{2}-k}(f| U_4 |_k W_4)\left(\frac{z+\frac{1}{2}}{4} \right) - (f|_k W_4)\left(z+\frac{1}{2}\right) \\
&= \legendre{2}{2k}2^{\frac{1}{2}-k} \left[ f\left( \frac{z+\frac{1}{2}}{4} \right)- (f|U_4)\left(z+\frac{1}{2}\right) \right] \\
&= \legendre{2}{2k} 2^{\frac{1}{2}-k} \sum_{\substack{m \ge 1,\\ (-1)^{k-\frac{1}{2}}m \equiv 1 \mathop{\rm mod}\nolimits (4)}} i^{\frac{m}{2}} \widehat{f}(m) e\left(\frac{m}{4}z\right).
\end{aligned}$$
\end{proof}
If we now assume $f \in S^+_k(\Gamma_0(4)^{\star})$ to be a Hecke eigenform, we can say even more about its Fourier coefficients. In this case Waldspurger has shown that the squares of the Fourier coefficients are proportional to the central value of a certain twist of the $L$-function associated to its Shimura lift. We only need a special case, which has been made explicit by Kohnen and Zagier.
\begin{prop} Let $k \in \frac{1}{2}+ {\mathbb{Z}}$ with $k \ge \frac{5}{2}$, $f \in S^+_k(\Gamma_0(4)^{\star})$ a Hecke eigenform and let $F \in S_{2k-1}(\SL2({\mathbb{Z}}))$ be the corresponding arithmetically normalised Hecke eigenform ($\widehat{F}(1)=1$) of $f$ under the Shimura map. Further let $D$ be a fundamental discriminant with $(-1)^{k-\frac{1}{2}}D>0$ and $L(F,\legendre{D}{\cdot},s)$ the analytic continuation of the Dirichlet $L$-series $\sum_{n=1}^{\infty}\legendre{D}{n} \frac{\widehat{F}(n)}{n^{k-1}}n^{-s}$. Then
$$
\frac{|\widehat{f}(|D|)|^2}{\langle f,f \rangle_{\Gamma_0(4)}} = \frac{\Gamma(k-\frac{1}{2})}{\pi^{k-\frac{1}{2}}} |D|^{k-1} \frac{L(F,\legendre{D}{\cdot},\frac{1}{2})}{\langle F,F \rangle_{\SL2({\mathbb{Z}})}},
$$
where
$$
\langle F,F \rangle_{\SL2({\mathbb{Z}})}=\int_{{\mathbb{F}}_{\SL2({\mathbb{Z}})}} |F(z)|^2y^{2k-1}\frac{dxdy}{y^2}
$$
and ${\mathbb{F}}_{\SL2({\mathbb{Z}})}$ is a fundamental domain of $\SL2({\mathbb{Z}})$.
\label{prop:coeffscritvalue}
\end{prop}
\begin{proof} We refer to \cite{KZ81}.\end{proof}
Concerning the size of $\langle F,F \rangle_{\SL2({\mathbb{Z}})}$ we have the following two propositions.
\begin{prop} \label{prop:norm} Let $F \in S_{2k-1}(\SL2({\mathbb{Z}}))$ be an arithmetically normalised Hecke eigenform. Then we have:
$$
\langle F,F \rangle_{\SL2({\mathbb{Z}})} = \frac{\Gamma(2k-1)}{2^{4k-3}\pi^{2k}} L( {\mathop{\rm sym}}^2 F,1),
$$
where $L(\mathop{\rm sym}^2 F,s)$ is the analytic continuation of
$$\begin{aligned}
\prod_p \left( 1- \alpha_p^2 p^{2-2k-s} \right)^{-1} &\left( 1-\alpha_p \overline{\alpha_p} p^{2-2k-s} \right)^{-1} \left( 1- \overline{\alpha_p}^2 p^{2-2k-s} \right)^{-1}\\
=&\frac{\zeta(2s)}{\zeta(s)} \sum_{n=1}^{\infty} \frac{\widehat{F}(n)^2}{n^{s+2k-2}}
\end{aligned}$$
and $\alpha_p,\overline{\alpha_p}$ are the solutions to $\alpha_p+\overline{\alpha_p}=\widehat{F}(p), \, \alpha_p\overline{\alpha_p}=p^{2k-2}$.
\end{prop}
\begin{proof} See \cite{ContrRamfunc}.
\end{proof}
\begin{prop} \label{prop:sym} Let $F \in S_{2k-1}(\SL2({\mathbb{Z}}))$ be an arithmetically normalised Hecke eigenform. Then we have:
$$
k^{-\varepsilon} \ll_{\varepsilon} L({\mathop{\rm sym}}^2 F,1) \ll_{\varepsilon} k^{\varepsilon}.
$$
\end{prop}
\begin{proof} See page 41 equation 2.16 of \cite{ANTALfunc}.
\end{proof}
If we adopt the notation of Proposition \ref{prop:coeffscritvalue} all the remaining Fourier coefficients of our Hecke eigenform $f\in S^+_k(\Gamma_0(4)^{\star})$ satisfy the following equation
\begin{equation}
\widehat{f}(n^2|D|)=\widehat{f}(|D|) \sum_{d|n}\mu(d) \legendre{D}{d}d^{k-\frac{3}{2}}\widehat{F}\left( \frac{n}{d} \right).
\label{eq:sqrcoeff}
\end{equation}
\section{Proof of Theorems}
Let $f=f_1\in S^+_k(\Gamma_0(4)^{\star})$ be a Hecke eigenform of norm $\langle f,f \rangle_{\Gamma_0(4)}=1$. Then $y^{\frac{k}{2}}|f(z)|$ is $\Gamma_0(4)$-invariant. Moreover we have that $(\mathop{{\rm Im}}\nolimits \tau z)^{\frac{k}{2}}|f(\tau z)|=y^{\frac{k}{2}}|(f|_k \tau)(z)|$ holds for all $\tau \in \GL2^+({\mathbb{Q}})$. This and the fact that the set
$$
\left\{z|\mathop{{\rm Im}}\nolimits z \ge \frac{\sqrt{3}}{8}\right\} \cup \left\{W_4 z|\mathop{{\rm Im}}\nolimits z \ge \frac{\sqrt{3}}{8}\right\} \cup \left\{V_4 z|\mathop{{\rm Im}}\nolimits z \ge \frac{\sqrt{3}}{8}\right\}
$$
covers a fundamental domain of $\Gamma_0(4)$ imply the following equality
\begin{equation}
\sup_{z \in {\mathbb{H}}} y^{\frac{k}{2}} |f(z)|= \max_{\xi \in \{I,W_4,V_4\}} \sup_{y \ge \frac{\sqrt{3}}{8}} y^{\frac{k}{2}} |(f|_k \xi)(z)|.
\label{eq:reducing}
\end{equation}
The proofs of Theorems \ref{thm:2} and \ref{thm:1} are split into two parts. In the first part we use the Fourier expansion and bounds on the Fourier coefficients to bound the sup-norm near a cusp. Away from the cusps we can use the Bergman kernel in combination with an amplifier to get superior results; this is described in the second part. In a third part we give the proof of Theorem \ref{thm:3}.
\subsection{Bounding the Fourier expansion}At first it is tempting to use classical estimates such as
$$
\sum_{n\le N} |\widehat{f}(n)|^2 \ll_f N^k
$$
to bound the Fourier expansion, but it turns out that the implied constant depends heavily on $f$; in fact the sup-norm of $f$ itself appears as a factor. Thus one might try to use deeper techniques, or one can use the currently best known results towards the Ramanujan-Petersson conjecture. We follow the latter path.\\
Throughout we assume we have a uniform bound of the shape
\begin{equation}
L\left(F,\chi,\frac{1}{2}\right) \ll k^{\alpha} q^{\beta}
\label{eq:uniformbound}
\end{equation}
for all arithmetically normalised Hecke eigenforms $F \in S_{2k-1}(\SL2({\mathbb{Z}}))$ and quadratic characters $\chi$ of conductor $q$. Through the work of Petrow \cite{Pet14} and Young \cite{Young14} we now know that the pair $(\alpha,\beta)=(\frac{1}{3}+\varepsilon,\frac{1}{3}+\varepsilon)$ is permissible for all $\varepsilon>0$. The Lindel\"of hypothesis corresponds of course to the pair $(\alpha,\beta)=(\varepsilon,\varepsilon)$.\\
Using Deligne's bound for the Fourier coefficients of $F\in S_{2k-1}(\SL2({\mathbb{Z}}))$ in equation \eqref{eq:sqrcoeff} we find that:
\begin{equation}
|\widehat{f}(n^2|D|)| \ll_{\varepsilon} |\widehat{f}(|D|)| \cdot \sum_{d|n} d^{k-\frac{3}{2}} \left( \frac{n}{d} \right)^{k-1+\varepsilon} \ll_{\varepsilon} |\widehat{f}(|D|)| \cdot (n^2)^{\frac{k-1}{2}+\varepsilon}.
\label{eq:sqrbound}
\end{equation}
Combining the Propositions \ref{prop:coeffscritvalue}, \ref{prop:norm} and \ref{prop:sym} with the bound \eqref{eq:uniformbound} we get:
\begin{equation}
|\widehat{f}(|D|)| \ll_{\varepsilon} \frac{(4\pi)^{\frac{k}{2}}}{\Gamma(k)^{\frac{1}{2}}} \cdot |D|^{\frac{k-1+\beta}{2}} k^{\frac{\alpha}{2}+\varepsilon}.
\label{eq:discbound}
\end{equation}
Thus we conclude the following proposition.
\begin{prop} Let $k \in \frac{1}{2}+{\mathbb{Z}}$ with $k\ge \frac{5}{2}$ and $f \in S_k^+(\Gamma_0(4)^{\star})$ an $L^2$-normalised Hecke eigenform. Further assume we have a uniform bound as in \eqref{eq:uniformbound} with $\beta>0$. Then we have the following estimate for its Fourier coefficients:
$$
|\widehat{f}(m)| \ll_{\varepsilon} \frac{(4\pi)^{\frac{k}{2}}k^{\frac{\alpha}{2}+\varepsilon}}{\Gamma(k)^{\frac{1}{2}}} \cdot m^{\frac{k-1+\beta}{2}}.
$$
\label{prop:fouriercoefbound}
\end{prop}
For convenience let us introduce the sum
\begin{equation}
S(\alpha,\beta,\kappa)=\sum_{m+\kappa>0} (m+\kappa)^{\alpha}e^{-\beta(m+\kappa)}, \quad \alpha,\beta,\kappa>0.
\label{eq:Sab}
\end{equation}
We will further need two lemmata for this sum.
\begin{lem} \label{lem:Sabest} $S(\alpha,\beta,\kappa)$ as defined by \eqref{eq:Sab} satisfies the following inequalities:
$$
S(\alpha,\beta,\kappa) \le \beta^{-\alpha-1}\Gamma(\alpha+1)+\beta^{-\alpha} \alpha^{\alpha} e^{-\alpha}
$$
and for $\alpha \le \beta \kappa$ we have:
$$
S(\alpha,\beta,\kappa) \le \beta^{-\alpha-1}\Gamma(\alpha+1) + \kappa^{\alpha} e^{-\beta\kappa}.
$$
\end{lem}
\begin{proof} This is Lemma 1 of \cite{realweight}.
\end{proof}
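Both inequalities can be checked numerically for sample parameters by direct summation; a small Python sketch (the parameter choices are illustrative only):

```python
import math

def S(alpha, beta, kappa, terms=10000):
    # Direct evaluation of S(alpha, beta, kappa); for these parameters the
    # tail beyond `terms` is negligible.
    return sum((m + kappa) ** alpha * math.exp(-beta * (m + kappa)) for m in range(terms))

# First inequality, sample values alpha = 3, beta = 1/2, kappa = 1:
a, b = 3.0, 0.5
bound1 = b ** (-a - 1) * math.gamma(a + 1) + b ** (-a) * a ** a * math.exp(-a)

# Second inequality needs alpha <= beta * kappa, e.g. alpha = 1/2, beta = kappa = 1:
bound2 = math.gamma(1.5) + 1.0 ** 0.5 * math.exp(-1.0)
```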
\begin{lem} \label{lem:expdecay} The following inequality holds for $\kappa \ge 6 \frac{\alpha}{\beta}, \ \alpha,\beta>0$:
$$
\kappa^{\alpha}e^{-\beta \kappa} \le \alpha^{\alpha}\beta^{-\alpha}e^{-\alpha} \cdot e^{-\frac{\beta \kappa}{2}}.
$$
\end{lem}
\begin{proof} This is Lemma 2 of \cite{realweight}.
\end{proof}
Using Proposition \ref{prop:fouriercoefbound} in Lemma \ref{lem:Fourrierpluscusp} we find:
\begin{equation}\begin{aligned}
y^{\frac{k}{2}}|f(z)| & \ll_{\varepsilon} \frac{y^{\frac{k}{2}}(4 \pi)^{\frac{k}{2}}k^{\frac{\alpha}{2}+\varepsilon}}{\Gamma(k)^{\frac{1}{2}}} S \left(\frac{k-1+\beta}{2},2 \pi y,1 \right) , \\
y^{\frac{k}{2}}|(f|_k W_4)(z)| & \ll_{\varepsilon} \frac{y^{\frac{k}{2}}(4 \pi)^{\frac{k}{2}}k^{\frac{\alpha}{2}+\varepsilon}}{\Gamma(k)^{\frac{1}{2}}} S \left(\frac{k-1+\beta}{2},2 \pi y,1 \right), \\
y^{\frac{k}{2}}|(f|_k V_4)(z)| & \ll_{\varepsilon} \frac{\left(\frac{y}{4}\right)^{\frac{k}{2}}(4 \pi)^{\frac{k}{2}}k^{\frac{\alpha}{2}+\varepsilon}}{\Gamma(k)^{\frac{1}{2}}} S \left(\frac{k-1+\beta}{2},\frac{2 \pi y}{4}, 1 \right).
\label{eq:halfintysmall}
\end{aligned}\end{equation}
Using Lemma \ref{lem:Sabest} we find:
$$\begin{aligned}
S \left(\frac{k-1+\beta}{2},2 \pi y,1 \right) & \ll (4 \pi)^{-\frac{k}{2}+\frac{1}{2}-\frac{\beta}{2}} y^{-\frac{k}{2}-\frac{1}{2}-\frac{\beta}{2}} k^{\frac{k}{2}+\frac{\beta}{2}}e^{-\frac{k}{2}} \left(1+yk^{-\frac{1}{2}} \right).
\end{aligned}$$
Thus we get the following proposition.
\begin{prop} Let $k \in \frac{1}{2}+{\mathbb{Z}}$ with $k \ge \frac{5}{2}$ and $f \in S^+_k(\Gamma_0(4)^{\star})$ an $L^2$-normalised Hecke eigenform. If \eqref{eq:uniformbound} holds with $\beta>0$, then we have for $y \ge \frac{\sqrt{3}}{8}$:
$$\begin{aligned}
y^{\frac{k}{2}}|f(z)| & \ll_{\varepsilon} \frac{k^{\frac{1}{4}+\frac{\alpha}{2}+\frac{\beta}{2}+\varepsilon}}{y^{\frac{1}{2}+\frac{\beta}{2}}} \left( 1+yk^{-\frac{1}{2}}\right), \\
y^{\frac{k}{2}}|(f|_k W_4)(z)| & \ll_{\varepsilon} \frac{k^{\frac{1}{4}+\frac{\alpha}{2}+\frac{\beta}{2}+\varepsilon}}{y^{\frac{1}{2}+\frac{\beta}{2}}} \left( 1+yk^{-\frac{1}{2}}\right), \\
y^{\frac{k}{2}}|(f|_k V_4)(z)| & \ll_{\varepsilon} \frac{k^{\frac{1}{4}+\frac{\alpha}{2}+\frac{\beta}{2}+\varepsilon}}{y^{\frac{1}{2}+\frac{\beta}{2}}} \left( 1+yk^{-\frac{1}{2}}\right).
\end{aligned}$$
\label{prop:halfsmallab}
\end{prop}
If $y \ge \frac{3k}{\pi}$ (and $\beta\le k$) we can use the second part of Lemma \ref{lem:Sabest} with Lemma \ref{lem:expdecay} to get
$$\begin{aligned}
S \left(\frac{k-1+\beta}{2},2 \pi y,1 \right) & \ll (4 \pi y)^{-\frac{k}{2}-\frac{1}{2}-\frac{\beta}{2}} k^{\frac{k}{2}+\frac{\beta}{2}} e^{-\frac{k}{2}} \left(1+k^{\frac{1}{2}} e^{-\pi y} \right).
\end{aligned}$$
Thus we conclude the following proposition.
\begin{prop} Let $k \in \frac{1}{2}+{\mathbb{Z}}$ with $k\ge \frac{5}{2}$ and $f \in S^+_k(\Gamma_0(4)^{\star})$ an $L^2$-normalised Hecke eigenform. If \eqref{eq:uniformbound} holds with $k\ge\beta>0$, then we have for $y \ge \frac{12k}{\pi}$:
$$\begin{aligned}
y^{\frac{k}{2}}|f(z)| & \ll_{\varepsilon} \frac{k^{\frac{1}{4}+\frac{\alpha}{2}+\frac{\beta}{2}+\varepsilon}}{y^{\frac{1}{2}+\frac{\beta}{2}}} \left( 1+k^{\frac{1}{2}}e^{-\pi y}\right), \\
y^{\frac{k}{2}}|(f|_k W_4)(z)| & \ll_{\varepsilon} \frac{k^{\frac{1}{4}+\frac{\alpha}{2}+\frac{\beta}{2}+\varepsilon}}{y^{\frac{1}{2}+\frac{\beta}{2}}} \left( 1+k^{\frac{1}{2}} e^{- \pi y}\right), \\
y^{\frac{k}{2}}|(f|_k V_4)(z)| & \ll_{\varepsilon} \frac{k^{\frac{1}{4}+\frac{\alpha}{2}+\frac{\beta}{2}+\varepsilon}}{y^{\frac{1}{2}+\frac{\beta}{2}}} \left( 1+k^{\frac{1}{2}}e^{-\frac{\pi y}{4}}\right).
\end{aligned}$$
\label{prop:halfbigab}
\end{prop}
If we assume the Lindel\"of hypothesis, then the conjunction of Propositions \ref{prop:halfsmallab} and \ref{prop:halfbigab} with the observation \eqref{eq:reducing} gives Theorem \ref{thm:2}. If we use the unconditional result $(\alpha,\beta)=(\frac{1}{3}+\varepsilon,\frac{1}{3}+\varepsilon)$ instead, we find that:
\begin{equation}
\max_{\xi \in \{I,W_4,V_4\}} \sup_{y \ge k^{\frac{1}{4}}} y^{\frac{k}{2}} |(f|_k \xi)(z)| \ll_{\varepsilon} k^{\frac{1}{4}+\frac{1}{6}+\varepsilon}.
\label{eq:nearcusp}
\end{equation}
The remaining region will be dealt with in the next section.
\subsection{Amplification}
We start by using the Bergman kernel as given in Theorem 4 of \cite{realweight} to deduce the identity
\begin{equation}
\sum_j \overline{f_j(w)}f_j(z)= \frac{3 (k-1)}{4 \pi} \sum_{\xi \in \Gamma_0(4)^{\star}} \frac{1}{\left(\frac{z-\overline{w}}{2i}\right)^k} \Bigg |_k \xi,
\label{eq:plainkernel}
\end{equation}
where $|_k \xi$ is taken with respect to the variable $z$ and $\{f_j\}$ is an orthonormal basis of the whole space $S_k(\Gamma_0(4)^{\star})$. If we apply the Hecke operator $|_k T(m)$ to both sides with respect to the variable $z$, we get
\begin{equation}
\sum_j \lambda_j(m)\overline{f_j(w)}f_j(z)= \frac{3 (k-1)}{4 \pi} m^{\frac{k}{2}-1}\sum_{\xi \in \Gamma_0(4)^{\star} \xi_{1,m} \Gamma_0(4)^{\star}} \frac{1}{\left(\frac{z-\overline{w}}{2i}\right)^k} \Bigg |_k \xi,
\label{eq:heckekernel}
\end{equation}
with
$$
\xi_{1,m}= \left( \begin{pmatrix} 1 & 0 \\ 0 & m \end{pmatrix}, m^{\frac{1}{4}} \right).
$$
Let us denote by $A_j(m)=\lambda_j(m)m^{-\frac{k-1}{2}}$ the normalised Hecke eigenvalues. Further let ${\mathcal M}$ be a finite set of squares of odd integers and $x_m$ arbitrary real numbers for $m \in {\mathcal M}$. Using the identity
$$
\lambda_j(m^2)\lambda_j(n^2)=\sum_{d|(m,n)} d^{k-1}\lambda_j\left( \frac{m^2n^2}{d^4} \right)
$$
we get the following equation
\begin{equation}\begin{aligned}
&\sum_{j} \left | \sum_{m \in {\mathcal M}} x_m A_j(m) \right |^2 \overline{f_j(w)}f_j(z) \\ =& \sum_{m_1,m_2 \in {\mathcal M}} x_{m_1}x_{m_2} (m_1m_2)^{-\frac{k-1}{2}} \sum_j \lambda_j(m_1) \lambda_j(m_2) \overline{f_j(w)}f_j(z) \\
=& \sum_l y_l l^{-\frac{k-1}{2}} \sum_j \lambda_j(l) \overline{f_j(w)}f_j(z) \\
=& \frac{3 (k-1)}{4 \pi} \sum_l y_l l^{-\frac{1}{2}} \sum_{\xi \in \Gamma_0(4)^{\star} \xi_{1,l} \Gamma_0(4)^{\star}} \frac{1}{\left(\frac{z-\overline{w}}{2i}\right)^k} \Bigg |_k \xi,
\label{eq:amplified}
\end{aligned}\end{equation}
where
$$
y_l= \sum_{\substack{m_1,m_2 \in {\mathcal M}, \\ d^2|(m_1,m_2), \\l=\frac{m_1m_2}{d^{4}}}} x_{m_1}x_{m_2}.
$$
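The combinatorics of the weights $y_l$ can be illustrated with a toy amplifier; the following Python sketch uses $x_m=1$ in place of $\mathop{\rm sign}\nolimits(A_1(m))$ and a small hypothetical prime set $\{3,5,7\}$, chosen only for illustration:

```python
from math import gcd, isqrt
from collections import defaultdict

def amplifier_weights(M, x):
    # y_l = sum over m1, m2 in M and d with d^2 | gcd(m1, m2),
    # l = m1*m2/d^4, of x[m1]*x[m2], as in the display above.
    y = defaultdict(int)
    for m1 in M:
        for m2 in M:
            g = gcd(m1, m2)
            for d in range(1, isqrt(g) + 1):
                if g % (d * d) == 0:
                    y[m1 * m2 // d ** 4] += x[m1] * x[m2]
    return dict(y)

# Toy amplifier M_1 = {p^2 : p in {3, 5, 7}}: the diagonal terms with
# d = p all land on l = 1, so y_1 = |M_1|, while each l = p^2 q^2
# receives only a bounded contribution.
y = amplifier_weights([9, 25, 49], {9: 1, 25: 1, 49: 1})
```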
Specialising to $w=z$ we get the inequality we are interested in:
\begin{equation}
\left |\sum_{m \in {\mathcal M}} x_m A_1(m) \right|^2 \cdot y^k |f_1(z)|^2 \le \frac{3(k-1)}{4 \pi} \sum_l |y_l| l^{-\frac{1}{2}} \sum_{\gamma \in G_l(4)} d_{\gamma}(z)^{-k},
\label{eq:domain1}
\end{equation}
where
$$
d_{\gamma}(z)= \frac{|\gamma z-\overline{z}|\cdot|j(\gamma,z)|}{2yl^{\frac{1}{2}}}
$$
and
$$
G_l(4)= \{\gamma \in \mathrm{M}_2({\mathbb{Z}}) \mid \det \gamma =l \text{ and } \gamma \equiv \begin{pmatrix} \star & \star \\ 0 & \star \end{pmatrix} \mathop{\rm mod}\nolimits(4)\}.
$$
Note that
$$
d_{\gamma}(z)^2=u(\gamma z, z)+1,
$$
where
$$
u(z,w)=\frac{|z-w|^2}{4 \mathop{{\rm Im}}\nolimits z \mathop{{\rm Im}}\nolimits w}.
$$
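For small parameters the matrix counts introduced below can be checked by brute force; a Python sketch (the entry bound $B$ is merely a search cutoff for this illustration, not part of any definition):

```python
from itertools import product

def u(z, w):
    # u(z, w) = |z - w|^2 / (4 Im z Im w), as defined above.
    return abs(z - w) ** 2 / (4 * z.imag * w.imag)

def count_M(z, l, delta, B=6):
    # Brute-force count of integral matrices [[a, b], [c, d]] with
    # det = l and c = 0 mod 4 such that u(gamma z, z) <= delta.
    count = 0
    for a, b, c, d in product(range(-B, B + 1), repeat=4):
        if a * d - b * c != l or c % 4 != 0:
            continue
        gz = (a * z + b) / (c * z + d)
        if u(gz, z) <= delta:
            count += 1
    return count

# At z = 0.1 + 2i, l = 1 and delta = 0.1, only the translations
# +-[[1, b], [0, 1]] with b in {-1, 0, 1} stay this close to z.
```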
Now we want the same inequality with $f_1$ replaced by $f_1|_k W_4$ and $f_1|_k V_4$. For this we replace \eqref{eq:plainkernel} with the following
\begin{equation}
\sum_j \overline{(f_j|_k B)(w)}(f_j|_k B)(z)= \frac{3 (k-1)}{4 \pi} \sum_{\xi \in B^{-1}\Gamma_0(4)^{\star}B} \frac{1}{\left(\frac{z-\overline{w}}{2i}\right)^k} \Bigg |_k \xi,
\label{eq:twistplainkernel}
\end{equation}
where $B \in \{W_4,V_4\}$. Now we apply $|_k B^{-1}T(m) B$ to both sides and proceed as before leading to
\begin{equation}
\left |\sum_{m \in {\mathcal M}} x_m A_1(m) \right|^2 \cdot y^k |(f_1|_k B)(z)|^2 \le \frac{3(k-1)}{4 \pi} \sum_l |y_l| l^{-\frac{1}{2}} \sum_{\gamma \in B^{-1}G_l(4)B} d_{\gamma}(z)^{-k}.
\label{eq:domain2}
\end{equation}
Now we just have to note that both $W_4$ and $V_4$ stabilise $G_l(4)$ for odd $l$.\\
We now consider two sets ${\mathcal M}_1,{\mathcal M}_2$ given by
$$\begin{aligned}
{\mathcal M}_1 &= \{p^2 | \Lambda \le p < 2 \Lambda, p \neq 2 \},\\
{\mathcal M}_2 &= \{p^4 | \Lambda \le p < 2 \Lambda, p \neq 2 \},
\end{aligned}$$
with
$$
x_m=\mathop{\rm sign}\nolimits(A_1(m)), \forall m \in {\mathcal M}_1 \text{ (respectively ${\mathcal M}_2$)},
$$
for which we have
\begin{equation}\begin{aligned}
|y_l| &\ll \begin{cases} \Lambda, & l=1,\\ 1, & l=p^2q^2 \text{ with } p^2,q^2 \in {\mathcal M}_1, \\ 0, & \text{otherwise},\end{cases} \\
|y_l| &\ll \begin{cases} \Lambda, & l=1,\\ 1, & l=p^4, p^4q^4 \text{ with } p^4,q^4 \in {\mathcal M}_2, \\ 0, & \text{otherwise},\end{cases}
\label{eq:ybound}
\end{aligned}\end{equation}
respectively. We now add the two inequalities \eqref{eq:domain1} for ${\mathcal M}={\mathcal M}_1,{\mathcal M}_2$, and by Cauchy--Schwarz we see that the left-hand side has a lower bound of
$$
\left( \sum_{m \in {\mathcal M}_1\cup {\mathcal M}_2} |A_1(m)| \right)^2 y^k|f_1(z)|^2 \gg_{\varepsilon} \Lambda^{2-\varepsilon} y^k|f_1(z)|^2
$$
as $\max(|A_1(p^2)|,|A_1(p^4)|)\gg 1$. We get the same inequality also for the other cusps and conclude
\begin{equation}
\Lambda^{2-\varepsilon} \max_{B \in \{I,W_4,V_4\}} y^k|(f_1|_kB)(z)|^2 \ll_{\varepsilon} \sum_{{\mathcal M} = {\mathcal M}_1,{\mathcal M}_2 } k\sum_l |y_l| l^{-\frac{1}{2}} \sum_{\gamma \in G_l(4)} (u(\gamma z,z)+1)^{-\frac{k}{2}}.
\label{eq:ampli}
\end{equation} Thus we are left to bound the right-hand side. To this end we define the following counting quantities:
\begin{equation}\begin{aligned}
M(z,l,\delta) &= |\{\gamma \in G_l(4) | u(\gamma z,z) \le \delta\}|, \\
M_{\star}(z,l,\delta) &= |\{\gamma \in G_l(4) | c\neq 0, (a+d)^2 \neq 4l \text{ and } u(\gamma z,z) \le \delta\}|, \\
M_{u}(z,l,\delta) &= |\{\gamma \in G_l(4) | c=0, a \neq d \text{ and } u(\gamma z,z) \le \delta\}|, \\
M_{p}(z,l,\delta) &= |\{\gamma \in G_l(4) | (a+d)^2 = 4l \text{ and } u(\gamma z,z) \le \delta\}|.
\label{eq:counting}
\end{aligned}\end{equation}
\begin{lem} For $z=x+iy \in {\mathbb{H}}$ with $|x| \ll 1$ and $y \gg 1$ we have
\begin{equation}
\sum_{\substack{1 \le l \le L, \\ l \text{ is a square}}} M_{\star}(z,l,\delta) \ll_{\varepsilon} \left(\frac{L^{\frac{1}{2}}}{y}+L\delta^{\frac{1}{2}}+L^{\frac{3}{2}}\delta\right)L^{\varepsilon}.
\label{eq:Mstar}
\end{equation}
\label{lem:gen}
\end{lem}
\begin{proof} This is basically Lemma 4.1 of \cite{Thybrid}. The same proof carries through with ease, as we do not need the level aspect.
\end{proof}
\begin{lem} For $z=x+iy \in {\mathbb{H}}$ with $|x| \ll 1$, $y \gg 1$ and $l \in {\mathbb{N}}$ with $d(l)\ll 1$ we have
\begin{equation}
M_{u}(z,l,\delta) \ll 1+l^{\frac{1}{2}}\delta^{\frac{1}{2}}y.
\label{eq:Mu}
\end{equation}
\label{lem:upper}
\end{lem}
\begin{proof} This is part of the variant of Lemma 1.3 given in the appendix of \cite{IS95}.
\end{proof}
\begin{lem} For $z=x+iy \in {\mathbb{H}}$ with $|x|\ll 1$ and $y\gg 1$ we have
\begin{equation}
M_{p}(z,l,\delta) \ll 1+l^{\frac{1}{2}}\delta^{\frac{1}{2}}y.
\end{equation}
\label{lem:para}
\end{lem}
\begin{proof} This is Lemma 4.4 of \cite{Thybrid}. Although we do not restrict ourselves to such a fundamental domain, the same proof carries through.
\end{proof}
It is now not hard to bound the expression
$$
\sum_{\gamma \in G_l(4)} (1+u(\gamma z,z))^{-\frac{k}{2}}
$$
polynomially in $l,k,y$ for $k \ge \frac{5}{2}$; we omit the details. Instead we record the following observation: if $u(\gamma z,z) \ge k^{-1+\eta}$ for some positive real $\eta$, then the expression
$$
(1+u(\gamma z,z))^{-\frac{k}{2}}
$$
has super-polynomial decay in $k$. Thus, if $l$ and $y$ depend only polynomially on $k$, we can completely neglect that part as follows:
\begin{equation}
\sum_{\gamma \in G_l(4)} (1+u(\gamma z,z))^{\frac{-k}{2}} \le \!\!\sum_{\substack{\gamma \in G_l(4),\\ u(\gamma z,z)\le k^{-1+\eta}}}\!\!1+(1+k^{-1+\eta})^{-\frac{k}{2}+\frac{5}{4}}\sum_{\gamma \in G_l(4)} (1+u(\gamma z,z))^{\frac{-5}{4}}.
\label{eq:superpoly}
\end{equation}
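A quick numerical illustration of this super-polynomial decay (sample values $k=10^{4}$ and $\eta=\frac{1}{2}$, chosen only for the sketch):

```python
k, eta = 10_000, 0.5
tail = (1 + k ** (-1 + eta)) ** (-k / 2)
# (1 + k^{-1/2})^{-k/2} = exp(-(k/2) * log(1 + k^{-1/2})) is roughly
# exp(-sqrt(k)/2), here about e^{-50}: negligible against any fixed power of k.
```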
From now on we will assume that $\Lambda$ and $y$ depend polynomially on $k$, so that \eqref{eq:superpoly} becomes
\begin{equation}
\sum_{\gamma \in G_l(4)} (1+u(\gamma z,z))^{\frac{-k}{2}} \ll_{\eta} M(z,l,k^{-1+\eta}).
\label{eq:neglect}
\end{equation}
We now use this inequality to estimate the right-hand side of \eqref{eq:ampli}. We first consider the case ${\mathcal M}={\mathcal M}_1$. The contribution of $l=1$ is
$$
k \Lambda \left(1+yk^{\frac{-1+\eta}{2}} \right)
$$
by Lemma \ref{lem:gen} with $L=1$ and Lemmata \ref{lem:upper}, \ref{lem:para}.
The contribution of $l>1$ is
$$
k\Lambda^{\varepsilon}\left(\frac{1}{y}+\Lambda^2k^{\frac{-1+\eta}{2}}+\Lambda^4k^{-1+\eta} \right)
$$
for the generic matrices by Lemma \ref{lem:gen} with $L=2^4\Lambda^4$, and by Lemmata \ref{lem:upper} and \ref{lem:para} the contribution of the upper triangular and parabolic matrices is
$$
k \sum_{l=p^2q^2} l^{-\frac{1}{2}} (1+l^{\frac{1}{2}}yk^{\frac{-1+\eta}{2}}) \ll_{\eta} k\left(1+\Lambda^2k^{\frac{-1+\eta}{2}}y\right).
$$
Thus we get that the sum over ${\mathcal M}_1$ is bounded by
\begin{equation}
k\Lambda^{\varepsilon}\left(\Lambda+\Lambda^2yk^{\frac{-1+\eta}{2}}+\Lambda^4k^{-1+\eta}\right).
\label{eq:m1}
\end{equation}
For ${\mathcal M}={\mathcal M}_2$ the contribution of $l=1$ is again
$$
k\Lambda \left(1+yk^{\frac{-1+\eta}{2}} \right).
$$
For $l>1$ the contribution of the generic matrices is
$$
k\Lambda^{\varepsilon}\left(\frac{1}{y}+\Lambda^4k^{\frac{-1+\eta}{2}}+\Lambda^8k^{-1+\eta} \right)
$$
by Lemma \ref{lem:gen} with $L=2^8\Lambda^8$ and by Lemmata \ref{lem:upper} and \ref{lem:para} the contribution of the upper triangular and parabolic matrices is
$$
k \left(\sum_{l=p^4}+\sum_{l=p^4q^4}\right) l^{-\frac{1}{2}} (1+l^{\frac{1}{2}}yk^{\frac{-1+\eta}{2}}) \ll_{\eta} k \left( \frac{1}{\Lambda}+\Lambda y k^{\frac{-1+\eta}{2}}+\frac{1}{\Lambda^2}+\Lambda^2yk^{\frac{-1+\eta}{2}} \right).
$$
Thus we get that the sum over ${\mathcal M}_2$ is bounded by
\begin{equation}
k\Lambda^{\varepsilon} \left(\Lambda+\Lambda^2yk^{\frac{-1+\eta}{2}} + \Lambda^4k^{\frac{-1+\eta}{2}}+\Lambda^8k^{-1+\eta} \right).
\label{eq:m2}
\end{equation}
Combining \eqref{eq:m1} and \eqref{eq:m2} with \eqref{eq:ampli} and letting $\varepsilon,\eta$ be suitably small we get that
\begin{equation}
\max_{B \in \{I,W_4,V_4\}} y^k |f_1|_k B(z)|^2 \ll_{\varepsilon} k^{1+\varepsilon}\Lambda^{\varepsilon}\left(\frac{1}{\Lambda}+ y k^{-\frac{1}{2}}+\Lambda^2k^{-\frac{1}{2}}+\Lambda^6k^{-1}\right).
\label{eq:sup}
\end{equation}
Since we may assume $y \le k^{\frac{1}{4}}$ by \eqref{eq:nearcusp} we can choose $\Lambda=k^{\frac{1}{7}}$ and we achieve
\begin{equation}
\sup_{z \in {\mathbb{H}}} y^{\frac{k}{2}}|f_1(z)| \ll_{\varepsilon} k^{\frac{3}{7}+\varepsilon},
\label{eq:supi}
\end{equation}
completing the proof of Theorem \ref{thm:1}.
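For the reader's convenience, the exponent can be checked term by term: with $y \le k^{\frac{1}{4}}$ and $\Lambda = k^{\frac{1}{7}}$,

```latex
\frac{1}{\Lambda} = k^{-\frac{1}{7}}, \qquad
y k^{-\frac{1}{2}} \le k^{-\frac{1}{4}}, \qquad
\Lambda^{2} k^{-\frac{1}{2}} = k^{-\frac{3}{14}}, \qquad
\Lambda^{6} k^{-1} = k^{-\frac{1}{7}},
```

so the right hand side of \eqref{eq:sup} is $\ll_{\varepsilon} k^{\frac{6}{7}+\varepsilon}$, and taking square roots yields \eqref{eq:supi}.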
\subsection{Lower bounds} As in Theorem \ref{thm:3} let $\{f_j\} \subseteq S^+_k(\Gamma_0(4)^{\star})$ be an orthonormal basis of Hecke eigenforms of half-integral weight $k$ contained in the Kohnen plus space and let $\{F_j\} \subseteq S_{2k-1}(\SL2({\mathbb{Z}}))$ be the corresponding arithmetically normalised Hecke eigenforms under the Shimura map.\\
The first part of the first lower bound is trivial as
$$
\sup_{z \in {\mathbb{H}}} y^{k} |f_j(z)|^2 \gg \langle f_j,f_j \rangle_{\Gamma_0(4)} = 1.
$$
The second part follows from the inequality
\begin{equation}\begin{aligned}
y^{\frac{k}{2}} |\widehat{f_j}(|D|)| &= \left|\int_0^1 y^{\frac{k}{2}} f_j(z) e(-|D|z) dx \right| \\
& \le e^{2 \pi |D| y} \cdot \int_0^1 y^{\frac{k}{2}}|f_j(z)| dx \\
& \le e^{2 \pi |D| y} \cdot \sup_{z' \in {\mathbb{H}}} y'^{\frac{k}{2}} |f_j(z')|.
\label{eq:fourcoeflow}
\end{aligned} \end{equation}
This inequality in conjunction with Propositions \ref{prop:coeffscritvalue}, \ref{prop:norm} and \ref{prop:sym} gives
$$
\sup_{z' \in {\mathbb{H}}} y'^{\frac{k}{2}} |f_j(z')| \gg_{\varepsilonilon} \frac{(4 \pi y)^{\frac{k}{2}}}{\Gamma(k)^{\frac{1}{2}}} \cdot |D|^{\frac{k-1}{2}}k^{-\varepsilonilon}e^{-2 \pi |D|y} \cdot L\left(F_j,\legendre{D}{\cdot},\frac{1}{2}\right).
$$
The choice $y=\frac{k}{4 \pi |D|}$ gives the desired inequality. Similarly we have
\begin{equation}\begin{aligned}
\sup_{z' \in {\mathbb{H}}} \sum_j y'^{k} |f_j(z')|^2 &\ge \sum_j \left(\int_0^1 dx \right) \left(\int_0^1 y^k |f_j(z)|^2 dx \right) \\
& \ge \sum_j \left(\int_0^1 y^{\frac{k}{2}}|f_j(z)|dx\right)^2 \\
& \ge y^k e^{-4 \pi y} \cdot \sum_j \left|\widehat{f_j}(1)\right|^2.
\label{eq:lowerplus}
\end{aligned}\end{equation}
By Corollary \ref{cor:halfpluscoeff} we have
\begin{equation}\begin{aligned}
\sum_j \left|\widehat{f_j}(1)\right|^2 &= \frac{6 \cdot (4\pi)^{k-1}}{\Gamma(k-1)} \cdot \frac{2}{3} \left[1+(-1)^{ \left\lfloor \frac{k+\frac{1}{2}}{2} \right\rfloor} \pi \sqrt{2} \sum_{c \ge 1}H_c(1,1)J_{k-1}\left( \frac{\pi}{c} \right) \right] \\
& \gg \frac{(4\pi)^{k-1}}{\Gamma(k-1)} \cdot \left[1-2 \pi \sum_{c \ge 1}\left|J_{k-1}\left( \frac{\pi}{c} \right)\right| \right].
\label{eq:pluscoe}
\end{aligned}\end{equation}
Now we use the following proposition.
\begin{prop}\label{prop:JBesselverysmall}
One has for $\rho \ge 2 x^2$:
$$
|J_{\rho}(x)| \ll \frac{\left(\frac{x}{2} \right)^{\rho}}{\Gamma(\rho+1)}.
$$
\end{prop}
\begin{proof} See Proposition 8 of \cite{realweight}.
\end{proof}
For $k \ge 21$ we may apply it to obtain the estimate
$$
\sum_{c \ge 1}\left|J_{k-1}\left( \frac{\pi}{c} \right)\right| \ll \frac{\left(\frac{\pi}{2} \right)^{k-1}}{\Gamma(k)} \sum_{c \ge 1} \frac{1}{c^{k-1}} = o(1).
$$
Combining this with the equations \eqref{eq:lowerplus} and \eqref{eq:pluscoe} and making the choice $y=\frac{k}{4 \pi}$ gives the last lower bound.
\end{document}
\begin{document}
\acrodef{cnn}[CNN]{Convolutional Neural Network}
\acrodef{dnn}[DNN]{Deep Neural Network}
\acrodef{relu}[ReLU]{Rectified Linear Unit}
\title{ADaPTION: Toolbox and Benchmark for Training Convolutional Neural Networks with Reduced Numerical Precision Weights and Activation}
\begin{abstract}
\acp{dnn} and \acp{cnn} are useful for many practical tasks in machine learning.
Synaptic weights, as well as neuron activation functions within the deep network are typically stored with high-precision formats, e.g. 32 bit floating point.
However, since storage capacity is limited and each memory access consumes power, both storage capacity and memory access are crucial factors in these networks.
Here we present a method, and the accompanying ADaPTION toolbox, to extend the popular deep learning library Caffe to support training of deep \acp{cnn} with reduced numerical precision of weights and activations using fixed-point notation.
ADaPTION includes tools to measure the dynamic range of weights and activations.
Using the ADaPTION tools, we quantized several \acp{cnn} including VGG16 down to 16-bit weights and activations with only 0.8\% drop in Top-1 accuracy.
The quantization, in particular of the activations,
leads to an increase in sparsity of up to 50\%, especially in early and intermediate layers, which we exploit to skip multiplications with zero, thus performing faster and computationally cheaper inference.
\end{abstract}
\begin{figure}
\caption{ADaPTION allows the adaptation of high-precision \acp{cnn}.}
\label{fig:motivation}
\end{figure}
\section{Introduction}
In the last decade, machine learning applications based on \acfp{cnn} have gained substantial attention due to their high performance on classification and localization tasks~\cite{lecun_deep_2015}.
In parallel, dedicated hardware accelerators have been proposed to speed up inference of such networks after training is completed \cite{Aimar_etal17, Chen2016, Han,Conti,Jouppi2017}.
Since the targets of such hardware accelerators are mobile devices, IoT devices, and robots, reducing memory consumption, memory accesses, and computation time is crucial (see Fig.~\ref{fig:memory_motivation}).
Pruning and model compression \cite{Han_etal15}, quantization methods \cite{Courbariaux_etal14, Courbariaux_etal15,Mueller_Indiveri15, Gupta_etal15,Hubara_etal16}, as well as toolboxes \cite{Gysel_etal16, Tensorflow15,pytorch} have been developed to reduce the numbers of neurons and connections.
The deployment of \acp{cnn} on embedded platforms with only fixed point computation capabilities and the development of dedicated hardware accelerators has also spurred the development of toolboxes that can train CNNs for fixed point representations \cite{Tensorflow15,pytorch,Gysel_etal16}.
The popular toolbox Ristretto ~\cite{Gysel_etal16}, for example, can be used to train \acp{cnn} with fixed-point weights, but not fixed-point activations.
To adapt both weights and activations of \acp{cnn} trained on conventional GPUs using 32 bit floating-point representation to fixed-point hardware, we developed the ADaPTION toolbox\footnote{Code available: https://github.com/NeuromorphicProcessorProject/ADaPTION} (see Fig.~\ref{fig:motivation}).
Furthermore, our toolkit also supports training \acp{cnn} from scratch with specified precision for both weights and activations to run on the recently developed hardware accelerator NullHop \cite{Aimar_etal17}.\\
\begin{figure*}
\caption{Motivation for reduced precision in weights and activations of deep \aclp{cnn}.}
\label{fig:memory_motivation}
\end{figure*}
In the next section we will introduce new functionalities and parameters added to Caffe, as well as the workflow of ADaPTION.
In Section \ref{sec:discussion} we will discuss the crucial components to achieve state-of-the-art classification accuracy with reduced precision weights and activations and compare ADaPTION directly to other existing toolboxes.\\
\section{Low-precision add-on for Caffe}
\noindent Caffe is a deep learning library developed by the Berkeley Vision and Learning Center \cite{Jia_etal14}.
It provides state-of-the-art error backpropagation and gradient descent tools such as Adam and AdaGrad, many different layer types, as well as network architectures.
To incorporate low-precision training within this framework we added three new layer types: LPInnerProduct, LPConvolution and LPAct, where the LP stands for low-precision.
These layers operate the same way as their high-precision counterpart, except that during the forward pass the values are quantized to the respective fixed-point representation given a specified bit-precision and decimal point, e.g. signed 16 bit, Q1.15\footnote{The notation Qm.f sets the decimal point of a fixed-point number, where m represents the integer part and f the fractional part of the number.}.
To round the weights and activations we introduced three additional parameters: \textsf{BD}, \textsf{AD} and \textsf{rounding\_scheme}, which are specified in the network configuration file.
The parameter \textbf{BD} (\textbf{B}efore \textbf{D}ecimal point) specifies the \textit{maximum integer value} that can be represented.
It determines the number of bits the integer part of the respective value is allowed to occupy, including the sign bit.
The parameter \textbf{AD} (\textbf{A}fter \textbf{D}ecimal point) specifies the \textit{precision} that can be represented.
It determines the number of bits the fractional part of the respective value is allowed to occupy.
The rounding\_scheme parameter is a flag that sets the option to round weights and activations either deterministically or stochastically.
The notation Qm.f is implemented in Caffe as QBD.AD.\\
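As an illustration, the quantization these layers apply during the forward pass can be sketched as follows. This is a minimal Python sketch, not the Caffe/ADaPTION implementation, and \texttt{quantize\_qmf} is an illustrative name:

```python
import math

def quantize_qmf(x, bd, ad):
    """Deterministically round x to signed fixed point with bd integer
    bits (including the sign bit) and ad fractional bits, saturating
    at the representable range -- i.e. the QBD.AD notation."""
    scale = 2.0 ** ad
    lo = -(2.0 ** (bd - 1))
    hi = 2.0 ** (bd - 1) - 1.0 / scale
    q = math.floor(x * scale + 0.5) / scale  # round-to-nearest
    return min(max(q, lo), hi)
```

For example, Q1.15 keeps values in $[-1, 1-2^{-15}]$ with resolution $2^{-15}$; values outside this range saturate.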
Our toolkit ADaPTION has the following features (see also Fig. \ref{fig:pipeline}):
\begin{itemize}
\item extraction of network structure into a \textit{net\_descriptor}
\item dynamic, layer-wise distribution of predefined available number of bits for weights and activations independently according to the respective dynamic range
\item creation of a new low-precision network based on the extracted or user-defined net\_descriptor, as well as layer-wise or global bit-distribution
\item fine-tuning\footnote{A network subject to fine-tuning is initialized with pre-trained weights, which are subject to further training on the same data set.} of extracted high-precision weights or re-training from scratch with user-specified rounding\_scheme
\item exporting the network to a NullHop \cite{Aimar_etal17} compatible file format
\end{itemize}
\noindent These changes are fully compatible with the original version of Caffe and can be merged or used as a stand-alone Caffe version.\\
\begin{figure}
\caption{Pipeline overview: ADaPTION extracts the network structure and the pre-trained weights. The total number of available bits is distributed for weights and activations independently, as well as independently for each layer to create a low-precision model. The model is then fine-tuned or trained from scratch to solve the desired task. Once acceptable classification accuracy is reached the low-precision model can be converted to Nullhop compatible file format.}
\label{fig:pipeline}
\end{figure}
\subsection{ADaPTION workflow and method}
For adapting the quantized weights and activations, we used the method called \textit{power2quant}, formerly known as dual copy rounding, developed by \cite{Stromatias_etal15} and concurrently by \cite{Courbariaux_etal14}. ADaPTION works in the following way: it extracts the network structure and the pre-trained weights of a given \ac{cnn} model.
The structure is adapted to use low-precision convolutional, \ac{relu} and fully-connected layers provided as separated layers.
The low-precision activation layer is separate from the actual activation layer in Caffe.
The activations are quantized before they are sent to the activation layer, i.e. the \ac{relu} layer.
The separation of quantization procedure and activation function has the potential advantage that new activation functions can be directly used in ADaPTION.
The pre-trained weights are converted into the low-precision Caffe blob\footnote{A blob is a data storage structure that is used by Caffe to store the neuron parameters, such as the weights or biases.} structure.
An ADaPTION method measures the dynamic range of weights and activations by inferring a random set of training images.
The measured dynamic ranges are used by another method to iteratively allocate the total number of bits (specified by the user according to their needs) between the integer and fractional part of the fixed-point representation, as explained in Sec.~\ref{sec:layerwise}.
The low-precision blob structure, as well as the layer-wise bit distribution is then used to generate the low-precision model. The model can either be initialized using random weights or using the pre-trained high-precision weights. The latter normally results in faster convergence, as well as higher classification accuracy.
In the beginning, we allocate two weight blobs and two bias blobs for each layer.
One blob is used to perform inference, which will quantize the weights, biases and activations to its specified bit precision.
The second blob is used to calculate the gradients during training.
Once the classification accuracy is close to the floating-point network's level, or no longer changes, we stop the training.
The resulting Caffe model can then be converted to a specific hardware accelerator format, such as NullHop \cite{Aimar_etal17}.\\
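The two-blob scheme described above can be sketched as follows. This is an assumed simplification of power2quant; the function name and the plain-SGD update are illustrative, not the toolbox's API:

```python
import math

def quantize(x, bd, ad):
    """Round-to-nearest signed fixed point Q(bd).(ad), saturating."""
    scale = 2.0 ** ad
    hi = 2.0 ** (bd - 1) - 1.0 / scale
    q = math.floor(x * scale + 0.5) / scale
    return min(max(q, -(2.0 ** (bd - 1))), hi)

def train_step(w_hp, grad_fn, lr, bd, ad):
    """One dual-copy iteration: the forward/backward pass uses the
    quantized weight blob, while the update is accumulated in the
    high-precision blob."""
    w_q = [quantize(w, bd, ad) for w in w_hp]        # inference copy
    grads = grad_fn(w_q)                             # gradients at quantized weights
    return [w - lr * g for w, g in zip(w_hp, grads)]  # update HP copy
```

Because the update lands in the high-precision copy, sub-resolution gradient contributions accumulate over iterations instead of being rounded away each step.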
\begin{figure}
\caption{Comparison of sparsity in activations of VGG16: Sparsity within each layer of VGG16 classifying 25,000 random test images from ImageNet. The high sparsity (up to $\sim$50\% increase) can only be achieved if the network is fine-tuned to a given fixed-point bit precision.}
\label{fig:sparsity}
\end{figure}
\subsection{Effect of one-time rounding on sparsity \& classification accuracy}
Training large networks, such as VGG16, is time consuming.
Thus, we investigated if it is possible to perform quantization in a single step, without fine-tuning or training from scratch.
One-time rounding of weights to 16 bits with a reasonable decimal point does not impair classification accuracy, if the activations are kept at 32-bit floating point.
If one quantizes the weights down to 16-bit fixed-point, it turns out that $\sim$ 90\% of them can be represented with only 4 bits and $\sim$99\% with 8 bits.
Even reducing the maximum number of available bits for weights down to 8 bits does not severely affect classification accuracy ($\sim$65\%). This finding clearly shows that the weights are not the limiting factor when a full-precision network is quantized to reduced precision.
On the other hand, one-time rounding (deterministic or stochastic; see Sec.~\ref{sec:rounding}) of weights, combined with reducing activations to 16-bit fixed point without fine-tuning of weights, reduces accuracy to chance level.
The level of sparsity is also increased much less than if we fine-tune the network.
These results suggest that quantizing the activations is actually a difficult problem,
since the network's performance depends more on the available dynamic range of activations than on the weights.
The average sparsity in activations of VGG16 is 57\% after quantizing weights and activation down to 16 bit without fine-tuning, however the classification accuracy drops to chance level.
In contrast, if we fine-tune the network with quantized weights and activations to a single global fixed point representation (e.g. Q8.8), we achieve an average sparsity of 82\% and a Top-1 classification accuracy of 59.4\%.
Sparsity values are obtained using 25,000 random images from the test set (Fig. \ref{fig:sparsity}).
The increase in sparsity of activations, especially in early layers of VGG16, of up to $\sim$50\% can only be achieved if the network is trained from scratch or if it is fine-tuned using reduced precision weights and activations.
This sparsity can optimally be exploited by the NullHop hardware accelerator to efficiently skip computations \cite{Aimar_etal17}.
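A toy illustration of why quantization alone raises sparsity: activations smaller than half the quantization step (here $2^{-9}$, for 8 fractional bits as in Q8.8) round to zero. The values below are invented for illustration:

```python
import math

def quantize(x, ad=8):
    """Round-to-nearest with ad fractional bits (the Q8.8 case)."""
    scale = 2.0 ** ad
    return math.floor(x * scale + 0.5) / scale

def sparsity(xs):
    """Fraction of zero-valued elements."""
    return sum(1 for x in xs if x == 0.0) / len(xs)

acts = [0.0, 0.001, 0.3, 1.2, 0.0005, 2.5]   # hypothetical activations
qacts = [quantize(a) for a in acts]          # sub-resolution values collapse to zero
```

Here `sparsity(acts)` is 1/6 before quantization, while after quantization the two sub-resolution activations become exact zeros and the sparsity rises to 1/2.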
\subsection{Layer-wise quantization}
\label{sec:layerwise}
The activations in the original VGG16 typically span 9 orders of magnitude from $10^{-4}$ up to $10^5$ (see Fig.~\ref{fig:act_comp}), whereas the dynamic range of the activations, achieved with training the network with reduced precision spans only 4 orders of magnitude ($10^{-3}$ to $10^1$).
Thus, constraining the weights and activations to a given fixed-point representation has the effect that the dynamic range requirement is reduced by 5 orders of magnitude.
However, even if we choose two different decimal point locations for weights and activations, i.e. Q2.14 for weights and Q14.2 for the activations, globally for all layers, the Top-1 classification accuracy drops to 8.7\% after quantizing the network once.
After fine-tuning, we could achieve 59.4\% Top-1 classification accuracy, which is $\sim$9\% below its high-precision counterpart.
The observed drop in classification accuracy suggests that the original VGG16 needs the entire dynamic range in each layer of its activation and is not able to produce state-of-the-art classification accuracy if the activations are bounded by a single given fixed-point representation.
\begin{figure}
\caption{Dynamic range of activations in high (orange) and low (blue) precision setting after training. Quantization acts as regularizer to keep the activations in a resolvable range and also prevents the activations from saturating.}
\label{fig:act_comp}
\end{figure}
We investigated the dynamic range for each layer separately.
We found that the layer-by-layer dynamic range, especially of the activations, differs significantly.
Therefore, we implemented a per-layer decimal point.
A similar approach, compressing the activation precision per layer, has been proposed earlier \cite{Wang_etal16}.
However, while \cite{Wang_etal16} relies on rigorous mathematical assumptions on how to distribute the available number of bits for a given layer, we propose a simple, computationally cheap iterative scheme.
In our method, we keep the overall number of available bits fixed and check iteratively how many weights/activations cannot be represented if we reduce the available number of bits for the integer part.
In a second step we check if the percentage of lost weights/activations is below a user defined threshold (usually below 1\%).
We look at the integer part of the weight/activation, since it can be checked against the maximum value present in a given layer.
If we looked at the smallest value instead, we could not directly link it to the required number of bits, since the precision needed to represent small values is not easily accessible.
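The iterative allocation just described can be sketched as follows; this is an illustrative simplification (\texttt{allocate\_bits} is not the toolbox's actual method name):

```python
def allocate_bits(values, total_bits, loss_threshold=0.01):
    """Keep the total bit width fixed and shrink the integer part:
    pick the smallest integer width bd (sign bit included) such that
    the fraction of values exceeding the representable maximum stays
    below the user-defined threshold; the rest go to the fraction."""
    best = (total_bits, 0)
    for bd in range(total_bits, 0, -1):
        ad = total_bits - bd
        hi = 2.0 ** (bd - 1) - 2.0 ** (-ad)   # largest representable value
        lost = sum(1 for v in values if abs(v) > hi) / len(values)
        if lost <= loss_threshold:
            best = (bd, ad)
        else:
            break
    return best
```

For example, for a layer whose values peak near 3.2, a 16-bit budget yields Q3.13: three integer bits cover the maximum, and the remaining thirteen bits maximize the resolution.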
By locating the decimal point individually for each layer according to our proposed scheme, we achieve a Top-1 classification accuracy of 64.5\%.
Even though in this study we used 16 bits with a per-layer decimal point, ADaPTION can be used to choose any precision and any decimal point for each layer separately.
\subsection{Deterministic vs. Stochastic rounding}
\label{sec:rounding}
With the aforementioned additions to ADaPTION we were able to increase the Top-1 classification accuracy from 59.4\% up to 64.5\% by fine-tuning the network using deterministic rounding.
However, training with reduced precision and the dual-copy scheme introduces inescapable fixed points in parameter space (see Sec. \ref{sec:insights} for more details).\\
To counteract these fixed points and to increase classification accuracy we investigated the effect of using stochasticity during training.
Stochastic rounding, in contrast to deterministic rounding, introduces pseudo-randomness into each training step. Stochastically rounded values have the correct expectation value, being drawn from a Bernoulli distribution over the two nearest fixed-point values.
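A minimal sketch of stochastic rounding onto a fixed-point grid (illustrative, not the Caffe implementation):

```python
import math
import random

def stochastic_round(x, ad):
    """Round x onto a grid with spacing 2**-ad, choosing between the
    two nearest grid values with probability proportional to
    proximity, so that the expected result equals x."""
    scale = 2.0 ** ad
    y = x * scale
    lo = math.floor(y)
    p = y - lo                              # probability of rounding up
    up = 1 if random.random() < p else 0
    return (lo + up) / scale
```

For instance, with 3 fractional bits the value 0.3 rounds down to 0.25 with probability 0.6 and up to 0.375 with probability 0.4, so the mean over many draws recovers 0.3.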
Using any kind of stochasticity usually yields networks that have better generalization properties, tend to have higher classification accuracies, and prevent overfitting~\cite{Srivastava_etal14}.
In the context of low-precision quantization, stochastic rounding not only provides higher classification accuracies, but it is helpful, if not crucial, to successfully avoid the inescapable local fixed points introduced by the low-precision training.
The training convergence time stayed the same as in the deterministic case.
Using the same number of training examples, we finally achieved a Top-1 classification accuracy of 67.5\%, which is only 0.8\% below its full-precision counterpart.
We added the option to ADaPTION to control the rounding scheme, and added the necessary function in the quantization algorithm.
To use stochastic rounding for a given layer the \texttt{rounding\_scheme} option must be set to \textit{STOCHASTIC}.
\subsection{Training VGG16 with reduced-precision on weights and activations}
\paragraph{Insights}
\label{sec:insights}
Like in other studies~\cite{Courbariaux_etal16}, we found that hyperparameters such as the learning rate must be 10 to 100 times smaller during training compared to the high-precision network to ensure convergence: smaller jumps further ensure that fixed points in parameter space caused by quantization are avoided.
These fixed points represent parameter combinations that lead to sudden tremendous decreases in accuracy. For example, we observed single-iteration jumps from 59\% accuracy down to chance level. After these large jumps, the network has to start training again from scratch.
The lower learning rate results in longer training time \cite{Mishra_etal17}, especially when training low precision networks and also using low-precision gradients \cite{Courbariaux_etal14, Courbariaux_etal16, Zhou_etal16}.\\
During training, the gradient is calculated based on the full-precision 32-bit floating-point weights and activations, which are quantized only during inference, i.e. once per batch.
This training scheme is different from low-precision training in which the gradient is calculated based on the quantized weights and activations and the gradient itself is constrained to a low-precision fixed-point number.
Low- or even extreme low-precision training has been investigated by \cite{Rastegari_etal16,Zhou_etal16}, but so far the resulting accuracies are not competitive.
Furthermore, weights must be initialized taking the respective fan-in and fan-out of each unit into consideration \cite{Glorot_Bengio10}.
In order to keep the activation introduced by the stimulus (image) in a range that can be resolved by the first low-precision convolution layer, we use scaling. The scaling parameter normalizes pixel intensities so that the highest value does not saturate the integer bit range precision.
Scaling uses a single scalar value that multiplies each pixel value.
Without scaling the input, the activations saturate at the maximal possible value and the images are harder to classify.
Hence, with scaling the full dynamic range of possible values can be used, which speeds up training and leads to higher classification accuracies.
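A sketch of such a scaling factor (illustrative; the function name and signature are assumptions, not the exact ADaPTION implementation):

```python
def input_scale(pixels, bd, ad):
    """Single scalar that maps the largest input intensity onto the
    largest value representable in Q(bd).(ad), so the first layer
    uses the full dynamic range without saturating."""
    hi = 2.0 ** (bd - 1) - 2.0 ** (-ad)   # largest representable value
    return hi / max(abs(p) for p in pixels)
```

Multiplying every pixel by this scalar places the brightest pixel exactly at the representable maximum, so no input saturates and the available resolution is fully used.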
These insights and factors are necessary to reach state-of-the-art classification accuracy.
Stochastic rounding enabled the low-precision network to reach state-of-the-art classification accuracy (Top-1 67.5\% vs 68.3\% in high-precision network \cite{Simonyan_Zisserman14}).
\section{Discussion}
\label{sec:discussion}
\subsection{Benefits \& Limitations of ADaPTION}
The direct effect of quantizing the activations to an intermediate precision, such as 16-bit fixed point, is that the average sparsity, i.e. the number of zero elements divided by the total number of elements, is 82\%, whereas in the high-precision case the average sparsity is only 57\%.
Especially early and early-intermediate layers show a much higher sparsity when quantized.
In these layers the sparsity increased by up to 50\%, while the average increase was 25\%.
This suggests that features in early-intermediate layers are not as crucial as those in late layers for correct image classification.
This sparsity can be exploited by a hardware accelerator that skips multiplications when the input includes zeros.
The NullHop accelerator is an example \cite{Aimar_etal17}.\\
The secondary effect of quantization is that it acts as a regularizer to keep the activations and the weights in resolvable range, without saturation effects (Fig. \ref{fig:act_comp}).\\
Sparsity in weights is not as strongly affected as activations.
The smaller weight sparsity is because weights tend to be quite small and centered around zero in 32 bit floating-point.
Thus, weight quantization has only an effect if the available number of bits drops below 8 bits.
ADaPTION supports any bit-distribution down to a single bit.
However, detailed analysis of extreme low-precision quantization is beyond the scope of this paper and has been analyzed in \cite{Courbariaux_etal15,Lin_etal15,Rastegari_etal16, Mishra_etal17}.
We can propose a close-to-optimal decimal point location per layer, if we can check the dynamic range, using for example a pre-trained high-precision model.
Without this check, for example with a complete new architecture directly trained with reduced precision, it is not a straightforward process to allocate the available number of bits.
One way to allocate the available number of bits is to set a fixed-global distribution, e.g. Q8.8 for weights and activations. For smaller networks this was sufficient \cite{Lungu_etal17,FaceNet}.
Another way of doing this is to first train a model without quantizing the weights and activations and fine-tune it afterwards to the desired bit precision.
This approach has the drawback of long training time and high computational cost.\\
A trend we observe is that weights tend to be very small and always centered around zero.
Reserving just 2 bits (including the sign) for the integer part is usually enough.
Activations, in contrast, tend to be quite large in early layers, thus requiring more bits for the integer part compared to later layers.
Activations decrease significantly with depth in the network and thus require more bits for the fractional part.\\
\noindent As a direct consequence of reduced bit precision, we achieve a reduction of the overall memory consumption: this reduced memory footprint can improve the performance of generic hardware, but we expect it to be particularly significant for custom hardware architectures.
Similarly, quantization has the useful side effect that sparsity, i.e. the percentage of zero-valued elements, is increased, which can be exploited by hardware supporting sparse convolution operations to save power and reduce the computation time.
\subsection{ADaPTION vs. Ristretto}
During the development of ADaPTION, Gysel and colleagues developed an add-on for Caffe called Ristretto \cite{Gysel_etal16}.
Ristretto positions itself in a similar role as ADaPTION, but lacks key features that make quantization attractive for hardware accelerators, namely fixed-point activations that allow skipping multiplications with zero.
Furthermore, this key feature is crucial for our NullHop accelerator.
Ristretto does not support a pre-defined number of bits to distribute between integer and fractional part of a fixed-point number.
Furthermore, the number of bits provided by Ristretto does not stay constant across the network, which is crucial for hardware accelerators such as NullHop.
Probably most importantly, Ristretto does not support fixed-point rounding of activations, which, as we showed, contributes the most to the sparsity in activations and is the hardest to constrain to a given bit precision while still providing state-of-the-art classification accuracy.
\section{Conclusion}
We presented ADaPTION, a new toolbox to quantize existing high-precision \acp{cnn} to be efficiently implemented on dedicated mobile-oriented hardware accelerators. The toolbox adapts weights and activations to a globally fixed or layer-wise fixed-point notation.\\
Quantization of weights and activations has the advantage that the overall sparsity in the network increases while preserving state-of-the-art Top-1 ImageNet classification accuracy.
\section*{New Benchmark Networks}
A major problem while comparing \ac{cnn} accelerators is that many works use custom networks, making the comparison between different architectures hard.
Even if some architectures (e.g. VGG16, GoogLeNet, ResNet) are more popular than others, they cover a limited range of hyperparameters and computational costs and are not ideally-suited for realistic hardware benchmarking.
In order to address this issue we used our software framework to train a new \ac{cnn} architecture characterized by a variety of kernel sizes and number of channels, useful to verify hardware computational capabilities in multiple scenarios.
Due to the fast-changing landscape of \ac{cnn} and other \ac{dnn} accelerators, we will update this paper, adding new networks suited for benchmarking new emerging hardware designs.
\subsection*{Giga1Net}
\textsf{Giga1Net}, defined in Table~\ref{tab:giga1net_specs}, requires 1~GOp/frame to classify an image from the ImageNet dataset. The prototxt necessary for the training is available in the ADaPTION repository; Giga1Net achieves 38\% ImageNet Top-1 accuracy after 36h of training on a GTX980 Ti.
\begin{table}
\centering
\caption{Giga1Net inference parameters}
\label{tab:giga1net_specs}
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
\begin{tabular}[c]{@{}l@{}}Layer\end{tabular}
& \begin{tabular}[c]{@{}l@{}} Input\\feature\\maps\end{tabular}
& \begin{tabular}[c]{@{}l@{}} Output\\feature\\maps\end{tabular}
& \begin{tabular}[c]{@{}l@{}} Kernel\\Size \end{tabular}
& \begin{tabular}[c]{@{}l@{}} Input\\Width/Height\end{tabular}
& Pooling
& \begin{tabular}[c]{@{}l@{}}ReLU\end{tabular}
& Stride
\\ \hline
1 - conv& 3 & 16 & 1 & 224x224 & Yes & Yes & 1\\
2 - conv& 16 & 16 & 7 & 112x112 & Yes & Yes & 1\\
3 - conv& 16 & 32 & 7 & 54x54 & Yes & Yes & 1\\
4 - conv& 32 & 64 & 5 & 24x24 & No & Yes & 1\\
5 - conv& 64 & 64 & 5 & 22x22 & No & Yes & 1\\
6 - conv& 64 & 64 & 5 & 20x20 & No & Yes & 1\\
7 - conv& 64 & 128 & 3 & 18x18 & No & Yes & 1\\
8 - conv& 128 & 128 & 3 & 18x18 & No & Yes & 1\\
9 - conv& 128 & 128 & 3 & 18x18 & No & Yes & 1\\
10 - conv& 128 & 128 & 3 & 18x18 & No & Yes & 1\\
11 - conv& 128 & 128 & 3 & 18x18 & Yes & Yes & 1\\
12 - FC& 128 & 4096 & - & - & No & Yes & -\\
13 - FC& 4096 & 1000 & - & - & - & - & -\\
\hline
\end{tabular}
\end{table}
\end{document}
\begin{document}
\frontmatter
\pagestyle{headings}
\mainmatter
\title{Combining Individual and Joint Networking Behavior for Intelligent IoT Analytics
\thanks{
\scriptsize This work was completed during J. Vikranth’s summer
internship in 2019 at Arm Research. Prof. M. Srivastava and J. Vikranth are partially supported by the CONIX Research Center, one of six centers
in JUMP, a Semiconductor Research Corporation (SRC) program sponsored
by DARPA.}}
\author{Jeya Vikranth Jeyakumar$^{1,4}$, Ludmila Cherkasova$^1$, Saina Lajevardi$^2$, \hspace{10mm} Moray Allan$^3$, Yue Zhao$^2$, John Fry$^2$, Mani Srivastava$^4$}
\institute{$^1$Arm Research, San Jose, USA \hspace{2mm} $^2$Arm Inc, San Jose, USA \hspace{2mm} $^3$Arm Inc, Glasgow, UK
$^4$University of California, Los Angeles, CA, USA
}
\titlerunning{IoTelligent}
\authorrunning{J.Vikranth et al.}
\tocauthor{Jeya Vikranth Jeyakumar, Ludmila Cherkasova, Saina Lajevardi, Moray Allan, Yue Zhao, John Fry, Mani Srivastava}
\maketitle
\begin{abstract}
{
The IoT vision of a trillion connected devices over the next decade requires reliable end-to-end connectivity and automated device management platforms. While we have seen successful efforts in maintaining small IoT testbeds, there are multiple challenges for the efficient management of large-scale device deployments. With Industrial IoT incorporating millions of devices, traditional management methods do not scale well. In this work, we address these challenges by designing a set of novel machine learning techniques, which form the foundation of a new tool, {\it IoTelligent}, for IoT device management, using traffic characteristics obtained at the network level.
The design of our tool is driven by the analysis of 1-year-long networking data, collected from 350 companies with IoT deployments. The exploratory analysis of this data reveals that IoT environments follow the famous Pareto principle: (i) 10\% of the companies in the dataset contribute to 90\% of the entire traffic; (ii) 7\% of all the companies in the set own 90\% of all the devices. We designed and evaluated CNN, LSTM, and Convolutional LSTM models for demand forecasting and conclude that the Convolutional LSTM model performs best. However, maintaining and updating individual company models is expensive. In this work, we design a novel, scalable approach, where a general demand forecasting model is built using the combined data of all the companies with a normalization factor. Moreover, we introduce a novel technique for device management, based on autoencoders, which automatically extract relevant device features to identify groups of devices with similar behavior and to flag anomalous devices.
}
\end{abstract}
\section{Introduction}
The high-tech industry expects a trillion new IoT devices will be produced between now and 2035~\cite{masa-1T,hima-mbed-2018, 1T}. These devices could range from simple sensors in everyday objects to complex devices, defined by the industrial and manufacturing processes. The Internet of Things ecosystem should include the necessary components that enable businesses, governments, and consumers to seamlessly connect to their IoT devices.
This vision requires reliable end-to-end connectivity and a device management platform that makes it easier for device owners to access their IoT data and exploit the opportunity to derive real business value from it. The benefits of leveraging this data are greater business efficiencies, faster time to market, cost savings, and new revenue streams. Embracing these benefits ultimately comes down to ensuring the data is secure and readily accessible for meaningful insights.
The Arm Mbed IoT Device Management Platform~\cite{pelion-devices} addresses these requirements by enabling organizations to securely develop, provision, and manage connected devices at scale and by enabling the connectivity management~\cite{pelion-connectivity} of every device regardless of its location or network type. The designed platform supports physical connectivity across all major wireless protocols (such as cellular, LoRa, Satellite, etc.) that can be managed through a single user interface. Seamlessly connecting all IoT devices is important in ensuring their data is accessible at the appropriate time and cost across any use case. While we can point to successful examples of deploying and maintaining small IoT testbeds, {\it there are multiple challenges in designing an efficient management platform for large-scale device deployments}. The operators of IoT environments may not be fully aware of their IoT assets, let alone whether each IoT device is functioning and connected properly, and whether enough networking resources and bandwidth are allocated to support the performance objectives of their IoT networks. With IoT devices projected to scale to billions, the traditional (customized or manual) methods of device and IoT network management do not scale to meet the required performance objectives.
In this work, we aim to address these challenges by designing a set of novel machine learning techniques, which form a foundation of a new tool, {\it IoTelligent}, for IoT networks and device management, using traffic characteristics obtained at the network level.
One of the main objectives of {\it IoTelligent} is to build effective demand forecasting methods for owners of IoT ecosystems to manage trends, predict performance, and detect failures. The insights and prediction results of the tool will be of interest to the operators of IoT environments.
For designing the tool and appropriate techniques, we utilize the unique set of real (anonymized) data, which were provided to us by our business partners. This dataset represents 1-year of networking data collected from 350 companies with IoT deployments, utilizing the Arm Mbed IoT Device Management Platform.
The exploratory analysis of the underlying dataset reveals a set of interesting insights into the nature of such IoT deployments. It shows that IoT environments exhibit properties similar to the earlier studied web and media sites~\cite{arlitt1996, arlitt2001, cherkasova2004, tang2007} and can be described by the famous Pareto principle~\cite{pareto}, with the data distribution following a power law~\cite{power-law}. The Pareto principle (also known as the 80/20 rule or the ``law of the vital few'') states that for many events or data distributions roughly 80\% of the effects come from 20\% of the causes. For example, in the early web sites, 20\% of the web pages were responsible for 80\% of all user accesses~\cite{arlitt1996}. Later, popular web sites followed a slightly different proportion rule: they are often described by 90/10 or 90/5 distributions, i.e., 90\% of all user accesses target a small subset of popular web pages, which represents 5\% or 10\% of the entire set of web pages.
The interesting findings from the studied IoT networking dataset can be summarized as follows:
\begin{itemize}
\item 10\% of the companies in the dataset contribute to 90\% of the entire traffic;
\item 7\% of all the companies in the dataset own 90\% of all the devices.
\end{itemize}
The {\it IoTelligent} tool applies machine learning techniques to forecast the companies' traffic demands over time, visualize traffic trends, identify and cluster devices, and detect device anomalies and failures. We designed and evaluated CNN, LSTM, and Convolutional LSTM models for demand forecasting and conclude that the Convolutional LSTM model performs best. To avoid maintaining and upgrading tens (or hundreds) of models (a different model per company), we designed and implemented a novel, scalable approach, where a global demand forecasting model is built using the combined data of all the companies. The accuracy of the designed approach is further improved by normalizing the ``contribution'' of individual company data in the combined global dataset. To solve the scalability issues with managing millions of devices, we designed and evaluated a novel technique based on: (i) autoencoders, which extract the relevant features automatically from the network traffic stream; (ii) DBSCAN clustering to identify the groups of devices that exhibit similar behavior and to flag anomalous devices.
The designed management tool paves the way for the industry to monitor its IoT assets for presence, functionality, and behavior at scale, without the need to develop device-specific models.
\section{Dataset and the Exploratory Data Analysis}
The network traffic data was collected from more than 350 companies for a total duration of one year. The traffic data is binned using a 15~minute time window, used for billing purposes. Each record in the dataset contains the following fields:
\begin{itemize}
\item Unix timestamp;
\item Anonymous company ids;
\item Anonymous device ids per company;
\item The direction of the network traffic (to and from the device);
\item Number of bytes transmitted in the 15 minute interval;
\item Number of packets transmitted in the 15 minute interval.
\end{itemize}
Preliminary analysis was done to find the most impactful and well-established companies. We found that both essential metrics of the companies' data, the networking traffic volume and the number of deployed IoT devices, follow the Pareto law.
The main findings from the studied IoT networking dataset can be summarized as follows:
\begin{itemize}
\item 10\% of the companies in the dataset contribute to 90\% of the entire traffic;
\item 7\% of all the companies in the dataset own 90\% of all the devices.
\end{itemize}
Figure~\ref{fig:traffic} shows, on the left, the log-scale CDF (Cumulative Distribution Function) of traffic (where one can see that 10\% of the companies in the dataset contribute to 90\% of the entire traffic) and, on the right, the CDF of the devices-per-company distribution (where one can see that 7\% of all the companies in the dataset own 90\% of all the devices). It is also quite interesting to note how significant the contributions of the first 5-10 companies are in those graphs, both for the networking traffic volume and for the overall number of devices.
\begin{figure}
\caption{\small (left) CDF of networking traffic per company; (right) CDF of devices per company.}
\label{fig:traffic}
\end{figure}
Another interesting observation is that the companies with the highest number of devices did not correspond to the companies with the maximum amount of traffic, and vice versa, the high-volume traffic companies did not have many devices. This makes sense: consider, for example, the difference between the outputs of hundreds of simple sensors and the output of a single recording camera.
Among other insights into the special properties of many IoT environments (at the networking level), we observe pronounced diurnal and weekly patterns, as well as changes in the traffic patterns around seasonal events and holidays. This can be explained by the fact that many IoT environments are tied to human and business activities.
\section{Demand Forecasting}
The {\it demand forecasting problem} is formulated in the following way. Given a recent month's traffic pattern for a company, what is the expected traffic for this company a week ahead?
This problem requires that a predictive model forecast the total number of bytes for each hour over the next seven days. Technically, this framing is referred to as a multi-step time series forecasting problem, given the multiple forecast steps. Choosing the right time granularity for (i) making the prediction and (ii) the data used in the model is another important decision for this type of problem.
We found that 1~hour granularity offers a reasonable trade-off: it smooths out small noise in the traffic and also ensures that we have sufficient data to train our models on.
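The resulting framing (three weeks of hourly input, one week of hourly output) amounts to a sliding-window transform of each company's traffic series. The sketch below is illustrative; \texttt{make\_windows} and the one-day stride are our assumptions, not details stated in the paper.

```python
import numpy as np

def make_windows(series, input_hours=3 * 7 * 24, output_hours=7 * 24, step=24):
    """Slice an hourly traffic series into (3-week input, 1-week target) pairs.

    `series` is a 1-D array of hourly byte counts; `step` controls how far
    the window slides between consecutive samples (here one day).
    """
    X, y = [], []
    last = len(series) - input_hours - output_hours
    for start in range(0, last + 1, step):
        X.append(series[start:start + input_hours])
        y.append(series[start + input_hours:start + input_hours + output_hours])
    return np.array(X), np.array(y)

# 10 weeks of synthetic hourly traffic
traffic = np.random.rand(10 * 7 * 24)
X, y = make_windows(traffic)
print(X.shape, y.shape)  # each input spans 504 hours, each target 168 hours
```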
\subsection{Modeling Approach}
Based on our exploratory data analysis, we select the 33 companies with the largest traffic and the 5 companies with the largest number of devices.
These companies are responsible for 90\% of the networking traffic volume and 90\% of the IoT devices.
Therefore, by designing and evaluating the modeling approach for these companies, we can efficiently cover demand forecasting for 90\% of the traffic volume and assess the monitoring solution for 90\% of the devices.
The specific goal is to predict the company traffic for the next week given the previous three weeks of traffic data at an hourly time granularity. We use {\bf a deep learning} based approach for demand forecasting, because deep learning methods are robust to noise, highly scalable, and generalizable.
We have considered three different deep learning architectures for demand forecasting: CNN, LSTM, and Convolutional LSTM in order to compare their outcome and accuracy.
\noindent{\textbf{Convolutional Neural Network (CNN)~\cite{krizhevsky2012imagenet}:} }
It is a biologically inspired variant of the fully connected network, designed to use minimal amounts of preprocessing. CNNs are made of convolutional layers that exploit spatially-local correlation by enforcing a local connectivity pattern between neurons of adjacent layers. The main operations in convolutional layers are convolution, activation (ReLU), batch normalization, and pooling or sub-sampling. The CNN architecture used in our experiments has 4 main layers. The first three layers are one-dimensional convolutional layers, each with 64 filters and ReLU activation, that operate over the 1D traffic sequence. Each convolutional layer is followed by a max-pooling layer of size 2, whose job is to distill the output of the convolutional layer to the most salient elements. A flatten layer is used after the convolutional layers to reduce the feature maps to a single one-dimensional vector. The final layer is a dense fully connected layer with 168 neurons (24 hours $\times$ 7 days) and linear activation that produces the forecast by interpreting the features extracted by the convolutional part of the model.
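A minimal Keras sketch of this CNN follows. The filter count (64), pooling size (2), and 168-neuron output match the description above; the convolution kernel size and padding are our assumptions, and \texttt{build\_cnn} is a hypothetical helper name.

```python
import tensorflow as tf
from tensorflow.keras import layers

INPUT_HOURS = 3 * 7 * 24   # three weeks of hourly traffic
OUTPUT_HOURS = 7 * 24      # one-week hourly forecast

def build_cnn(kernel_size=3):  # kernel size is an assumption, not stated in the text
    model = tf.keras.Sequential([
        layers.Input(shape=(INPUT_HOURS, 1)),
        layers.Conv1D(64, kernel_size, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Flatten(),                                 # single 1-D feature vector
        layers.Dense(OUTPUT_HOURS, activation="linear"),  # 24 hours x 7 days
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```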
\noindent{\textbf{Long Short Term Memory (LSTM)~\cite{gers1999learning,lstm-orig}:}}
It is a type of Recurrent Neural Network (RNN), which takes current inputs and remembers what it has perceived previously in time. An LSTM layer has a chain-like structure of repeating units and each unit is composed of a cell, an input gate, an output gate, and a forget gate, working together. It is well-suited to classify, process, and predict time series with time lags of unknown size and duration between important events. Because LSTMs can remember values over arbitrary intervals, they usually have an advantage over alternative RNNs, Hidden Markov models, and other sequence learning methods in numerous applications. The model architecture, used in our experiments, consists of two stacked LSTM layers, each with 32 LSTM cells, followed by a dense layer with 168 neurons to generate the forecast.
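The stacked-LSTM model described above can be sketched as follows; \texttt{build\_lstm} is a hypothetical helper, and the cell count (32) and output size (168) come from the description in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_lstm(input_hours=3 * 7 * 24, output_hours=7 * 24):
    model = tf.keras.Sequential([
        layers.Input(shape=(input_hours, 1)),
        layers.LSTM(32, return_sequences=True),  # first stacked LSTM layer
        layers.LSTM(32),                         # second layer emits the final state
        layers.Dense(output_hours),              # 168-neuron forecasting head
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```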
\noindent{\textbf{Convolutional LSTM~\cite{xingjian2015convolutional}:}}
Convolutional LSTM is a hybrid deep learning architecture that consists of both convolutional and LSTM layers. The first two layers are the one-dimensional Convolutional layers that help in capturing the high-level features from the input sequence of traffic data. Each convolutional layer is followed by a max-pooling layer to reduce the sequence length. They are followed by two LSTM layers, that help in tracking the temporal information from the sequential features, captured by the convolutional layers. The final layer is a dense fully connected layer, that gives a forecasting output.
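The hybrid layer ordering above (two Conv1D + max-pooling stages, two LSTM layers, a dense head) can be sketched in Keras as below. The filter counts, kernel size, and LSTM width are our assumptions, since the text specifies only the ordering; \texttt{build\_conv\_lstm} is a hypothetical helper name.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_conv_lstm(input_hours=3 * 7 * 24, output_hours=7 * 24):
    model = tf.keras.Sequential([
        layers.Input(shape=(input_hours, 1)),
        layers.Conv1D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling1D(2),                  # halve the sequence length
        layers.Conv1D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.LSTM(32, return_sequences=True),  # track temporal structure
        layers.LSTM(32),
        layers.Dropout(0.2),                     # regularization
        layers.Dense(output_hours),              # one-week hourly forecast
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```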
We use batch-normalization and dropout layers in all our models to reduce over-fitting and to improve the training speed of our models.
To evaluate the prediction accuracy of the designed models, we compare the predicted value $X_n^{pred}$ with the true, measured value $X_n$ using the following metrics:
\indent{\textbf{Mean Absolute Error (MAE)}}:
{\small $$MAE = \frac{1}{N}\Sigma_{n=1}^{N} { {|X_n - X_n^{pred}|}}$$}
\indent{\textbf{Mean Squared Error (MSE)}}:
{\small $$MSE = \frac{1}{N}\Sigma_{n=1}^{N}{(X_n - X_n^{pred})}^2$$}
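Both metrics are straightforward to compute; a minimal NumPy sketch:

```python
import numpy as np

def mae(x, x_pred):
    """Mean Absolute Error over N measurements."""
    return np.mean(np.abs(x - x_pred))

def mse(x, x_pred):
    """Mean Squared Error over N measurements."""
    return np.mean((x - x_pred) ** 2)

x = np.array([1.0, 2.0, 4.0])
x_pred = np.array([1.5, 2.0, 3.0])
print(mae(x, x_pred))  # 0.5
```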
\subsection{Individual Model Per Company}
This is the naive approach where each company has its own demand forecasting model, that is, the model for each company is trained using only the data from that particular company, as shown in Figure~\ref{fig:timing-ind}~(a).
\begin{figure}
\caption{\scriptsize (a) Each company has its own prediction model, (b) Using one model for all the companies, trained on the combined dataset.}
\label{fig:timing-ind}
\end{figure}
\begin{figure}
\caption{\scriptsize (a) Model Architecture of Convolutional LSTM Model; (b) Comparing performance of the three architectures: Convolutional LSTM achieves best performance.}
\label{fig:relative}
\end{figure}
\begin{figure}
\caption{\scriptsize Company A: the 4th week demand forecast based on data from the previous 3 weeks.}
\label{fig:individual}
\end{figure}
So, for each company, we trained three models with the architectures described above (i.e., CNN, LSTM, and Convolutional LSTM).
Figure~\ref{fig:relative}~(a) presents the detailed parameters of the designed Convolutional LSTM, while
Figure~\ref{fig:relative}~(b) reflects the relative performance of three different architectures (with the MAE error metrics).
We found that for both error metrics the Convolutional LSTM model performs better than the other two architectures.
When comparing architectures accuracy by using MAE and MSE, we can see that Convolutional LSTM outperforms CNN by {\bf 16\%} and {\bf 23\%} respectively, and outperforms LSTM by {\bf 43\%} and {\bf 36\%} respectively.
Therefore, only the {\bf Convolutional LSTM} architecture is considered for the rest of the paper.
Finally, Figure~\ref{fig:individual} shows an example of company A (in the studied dataset): its measured networking traffic over time and the forecasting results with the Convolutional LSTM model.
Building an individual model per each company has a benefit that this approach is simple to implement.
However, it is not scalable as the number of required forecasting models is directly proportional to the number of companies. The service provider has to deal with the models' maintenance, their upgrades, and retraining (with new data) over time.
Therefore, in the next Section~\ref{sec:global-no-norm}, we aim to explore a different approach, which enables a service provider to use all the collected, global data for building a single (global) model, while using it for individual company demand forecasting.
\subsection{One Model for All Companies - Without Normalization}
\label{sec:global-no-norm}
In this approach, we train a single Convolutional LSTM model for demand forecasting by using data from all the companies. The networking traffic data from all the companies were combined. The data from January to October were used for training the model, and the data from November and December were used as the test set.
This method is highly scalable since it trains and utilizes a single model for demand forecasting of all companies. While this approach is very attractive and logical, it did not always produce good forecasting results.
Figure~\ref{fig:plotglobalno} shows the forecast made by this global model for Company A (already familiar from Figure~\ref{fig:individual}). As we can see, the model fails to capture a well-established traffic pattern.
\begin{figure}
\caption{\scriptsize Demand forecasting using the Global model trained on data without normalization.}
\label{fig:plotglobalno}
\end{figure}
One explanation of the observed issue is that this company's traffic constitutes a very small fraction of the combined dataset. The globally trained model has ``learned'' the traffic patterns of the larger companies while ``downplaying'' those of the smaller ones: for a low-traffic company, the prediction loss in absolute terms remains small even when a well-established pattern is missed. This is a major drawback of the global model, as we would like it to capture the traffic patterns even of companies with low traffic.
\subsection{One Model for All Companies - With Normalization}\label{sec:globalnorm}
This method aims to address the issues of the previous two approaches. In this method, the data from each company is normalized, that is, all the data subsets are scaled so that they lie within the same range. We use the min-max scaling approach to normalize the data subsets so that the values for all companies lie between 0 and 1. Equation~\ref{eq:1} shows the formula used for min-max scaling, where $i$ refers to the $i$-th company.
{\small
\begin{equation}\label{eq:1}
X_{norm}^{i} = \frac{X^{i} - X_{min}^{i}}{X_{max}^{i} - X_{min}^{i}}
\end{equation}
}
Then a single deep learning model for forecasting is trained using the normalized data of all companies. The predicted demand (forecast) is then re-scaled using Equation~\ref{eq:2} to the original scale.
{\small
\begin{equation}\label{eq:2}
X^{i} = X_{norm}^{i} * (X_{max}^{i} - X_{min}^{i}) + X_{min}^{i}
\end{equation}
}
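Equations~(\ref{eq:1}) and~(\ref{eq:2}) amount to the following per-company scaling round trip; \texttt{fit\_minmax}, \texttt{normalize}, and \texttt{denormalize} are hypothetical helper names used only for illustration.

```python
import numpy as np

def fit_minmax(series):
    """Per-company scaling constants X_min and X_max."""
    return series.min(), series.max()

def normalize(series, lo, hi):
    return (series - lo) / (hi - lo)          # Eq. (1)

def denormalize(series_norm, lo, hi):
    return series_norm * (hi - lo) + lo       # Eq. (2): back to the original scale

company_a = np.array([10.0, 30.0, 50.0])      # toy traffic for one company
lo, hi = fit_minmax(company_a)
scaled = normalize(company_a, lo, hi)         # all values now in [0, 1]
restored = denormalize(scaled, lo, hi)
```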
This method of training the global model gives equal importance to the data from all companies and treats them fairly. The designed model does not over-fit and is generalizable since it is trained on the data from multiple companies. Figure~\ref{fig:timing} graphically reflects the process of the global model creation with normalized data from different companies.
\begin{figure}
\caption{\scriptsize One global prediction model is trained by using the normalized data from all the companies.}
\label{fig:timing}
\end{figure}
Figure~\ref{fig:plotglobalyes} shows that the designed forecasting model can capture well the patterns of companies with low traffic volume (such as Company~A).
\begin{figure}
\caption{\scriptsize Demand forecasting using the Global model trained on data with normalization.}
\label{fig:plotglobalyes}
\end{figure}
\section{Introducing Uncertainty to Forecasting Models}\label{sec:uncertainty}
In the previous section, we designed a single global model with normalization that can be used to forecast for multiple companies. But demand forecasting is a field where an element of uncertainty exists in all predictions, and therefore representing model uncertainty is of crucial importance. The standard deep learning tools for forecasting do not capture model uncertainty. Gal et al.~\cite{gal2016dropout} propose a simple approach to quantify neural network uncertainty, showing that the use of dropout in neural networks can be interpreted as a Bayesian approximation of a Gaussian process, a well-known probabilistic model. Dropout is used in many deep learning models as a way to avoid over-fitting, acting as a regularizer. However, by leaving it ``on'' during prediction, we end up with the equivalent of an ensemble of subnetworks, within our single larger network, that have slightly different views of the data. If we create a set of $T$ predictions from our model, we can use the mean and variance of these predictions to estimate the prediction set uncertainty. Figure~\ref{fig:forecastunc} shows the forecast with uncertainty for Company A, using the global model with normalization.
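A minimal sketch of this Monte Carlo dropout procedure follows, using a toy Keras model whose layer sizes are placeholders (not the forecasting model itself); \texttt{predict\_with\_uncertainty} is a hypothetical helper name.

```python
import numpy as np
import tensorflow as tf

def predict_with_uncertainty(model, x, n_samples=100):
    """Monte Carlo dropout: keep dropout active at prediction time
    (training=True) and summarize the resulting forecast distribution."""
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

# toy stand-in model with a dropout layer that stays "on" during sampling
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(4),
])
x = np.random.rand(1, 8).astype("float32")
mean, std = predict_with_uncertainty(model, x, n_samples=20)
```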
\begin{figure}
\caption{\scriptsize Demand forecasting with uncertainty for Global model trained on data with normalization.}
\label{fig:forecastunc}
\end{figure}
To evaluate the quality of a forecast with uncertainty, we introduce the Prediction Interval Coverage Probability (PICP) metric.
\subsection{Prediction Interval Coverage Probability (PICP)}
PICP tells us the percentage of time an interval contains the actual value of the prediction. Equations 3-5 show the calculation of the PICP metric, where $l$ is the lower bound, $u$ is the upper bound, $x_i$ is the value at timestep $i$, $\hat{y}$ is the mean of the predicted distribution, $z$ is the number of standard deviations of the Gaussian distribution (e.g., 1.96 for a 95\% interval), and $\sigma$ is the standard deviation of the predicted distribution.
{\small
\begin{equation}
l(x_{i}) = \hat{y}_{i} - z \, \sigma_{i}
\end{equation}
\begin{equation}
u(x_{i}) = \hat{y}_{i} + z \, \sigma_{i}
\end{equation}
\begin{equation}
PICP_{l(x),u(x)} = \frac{1}{N}\sum_{i=1}^{N}h_{i}, \quad \text{where} \quad h_{i}=
\begin{cases}
1, & \text{if } l(x_{i})\leq y_{i}\leq u(x_{i})\\
0, & \text{otherwise}
\end{cases}
\end{equation}
}
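The PICP computation can be sketched directly in NumPy; \texttt{picp} is a hypothetical helper name, and the arrays below are toy values.

```python
import numpy as np

def picp(y_true, y_mean, y_std, z=1.96):
    """Fraction of observations that fall inside the predicted interval
    [y_mean - z*sigma, y_mean + z*sigma]."""
    lower = y_mean - z * y_std
    upper = y_mean + z * y_std
    return np.mean((y_true >= lower) & (y_true <= upper))

y_true = np.array([1.0, 2.0, 10.0, 4.0])
y_mean = np.array([1.1, 2.2, 3.0, 4.1])
y_std = np.array([0.2, 0.2, 0.2, 0.2])
print(picp(y_true, y_mean, y_std))  # 0.75: the outlier 10.0 falls outside
```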
\subsection{Evaluating Forecast with Uncertainty}
We evaluate the overall performance of our global forecast model, introduced in Section~\ref{sec:globalnorm}, based on the PICP metric described above. The forecasting is done 100 times for each company with a dropout probability of 0.2, and then the mean and standard deviations are obtained for each company. Figure~\ref{fig:forecast} shows the global model's forecast for the third week of December for two different companies: Company B and Company C. As we can see from the plot, the model captures the traffic pattern, but the predicted values still show some deviations from the actual values. This results in some errors under the traditional error metrics (MAE and MSE) introduced earlier, even though the model is performing very well. Therefore, introducing uncertainty helps the model generate a reasonable forecast distribution. Figure~\ref{fig:uncertainty} shows the forecast with uncertainty, where the different shades of blue indicate the uncertainty interval obtained for different values of the uncertainty multiplier. As we can see from the plot, the single global forecasting model captures well the general traffic trends across multiple companies. Figure~\ref{fig:PICP} shows the mean PICP calculated across all the companies for the different uncertainty multipliers.
\begin{figure}
\caption{\small Demand forecasting with Global model trained on data with normalization.}
\label{fig:forecast}
\end{figure}
\begin{figure}
\caption{\small Demand forecasting with Uncertainty using the Global model trained on data with normalization.}
\label{fig:uncertainty}
\end{figure}
We find that on average 50\% of the forecast values lie within the predicted interval for one standard deviation, 74\% for two standard deviations, and 85\% for three standard deviations. The forecast samples that lay outside the predicted interval were mostly due to the fact that the months of November and December contain many holidays, and hence those days did not follow the captured traffic pattern.
\begin{figure}
\caption{Mean PICP for different values of sigma multiplier.}
\label{fig:PICP}
\end{figure}
\section{Device Monitoring and Diagnostics}
Once an Internet of Things (IoT) ecosystem is installed, it does not follow a ``fire and forget'' scenario. There will be unforeseen operational issues; some devices will fail and/or need to be repaired or replaced. Each time this happens, the company is on a mission to minimize the downtime and ensure that its devices function properly to protect its revenue stream. However, to address the issues of failed or misbehaving devices, we need to identify such devices in the first place. The ability to monitor device health and to detect when something is amiss, such as higher-than-normal network traffic or ``unusual'' device behavior, is therefore essential to proactively identify and diagnose potential bugs and issues. In large-scale IoT deployments this becomes a critical and challenging problem: when there are thousands of devices in the IoT ecosystem, it is practically impossible to monitor each device individually. So, we need an efficient way to analyze the observed device behaviors and identify devices that show anomalous (``out of the usual norm'') behavior.
Anomalous or failed devices can be categorized into two types:
\begin{enumerate}
\item The devices that behave significantly different from the other devices;
\item The devices whose observed behavior suddenly changes from its ``normal" behavior over time.
\end{enumerate}
The following section describes the designed technique for device monitoring and diagnostics via device categorization over time.
\subsection{Clustering to Identify Anomalies and Group Devices Based on Their Traffic Patterns}
\label{sec:clustering}
When there are thousands of devices in a given IoT ecosystem, there usually exist multiple devices of the same type or with similar behavior. We identify these groups of devices in an unsupervised manner based on their network traffic pattern over a given month. Figure~\ref{fig:device management} shows an overview of the proposed method and its steps to obtain the groups of ``similar'' devices:
\begin{itemize}
\item The monthly network traffic from the thousands of IoT devices is passed through an autoencoder to extract features in the latent space in an unsupervised manner.
\item Then we use a density-based clustering algorithm, DBSCAN, on the latent space to identify the groups of similar devices. The objective is to learn what normal data points look like and then use that to detect abnormal instances. Any instance that has a low affinity to all the clusters is likely to be an anomaly.
\end{itemize}
\begin{figure}
\caption{Identifying similar groups of devices.}
\label{fig:device management}
\end{figure}
\subsubsection{Autoencoder~\cite{masci2011stacked}}
An autoencoder is a neural network capable of learning dense representations of the input data, called latent space representations, in an unsupervised manner. The latent space has low dimensionality, which helps with visualization and dimensionality reduction. An autoencoder has two parts: an encoder network that encodes the input values $x$ using an encoder function $f$, and a decoder network that decodes the encoded values $f(x)$ using a decoder function $g$, to produce output values identical to the input values. The autoencoder's objective is to minimize the reconstruction error between the input and the output. This drives autoencoders to capture the important features and patterns present in the data in a low-dimensional space: when a representation allows a good reconstruction of its input, it has retained much of the information present in the input. In our experiment, an autoencoder is trained using the monthly traffic data from the IoT devices, which captures the important features, or the encoding, of the devices in the latent space.
\noindent \textbf{Architecture of the Autoencoder:}
We use a stacked autoencoder with two fully connected hidden layers each in the encoder and the decoder. The central bottleneck layer is a fully connected layer with just three neurons, which reduces the dimensionality. We use the mean squared error as the reconstruction loss function.
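As an illustration, the forward pass of such a stacked autoencoder can be sketched in a few lines of numpy. This is our own minimal sketch, not the production model: the layer widths other than the three-neuron bottleneck, the activation choice, and all names are assumptions for illustration, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Randomly initialised fully connected layer (weights, bias)."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

def dense(x, w, b, act=np.tanh):
    return act(x @ w + b)

# Hypothetical sizes: 30 traffic features -> 16 -> 8 -> 3 (bottleneck).
sizes = [30, 16, 8, 3]
encoder = [layer(a, b) for a, b in zip(sizes, sizes[1:])]
decoder = [layer(a, b) for a, b in zip(sizes[::-1], sizes[::-1][1:])]

def encode(x):
    for w, b in encoder:
        x = dense(x, w, b)
    return x                      # 3-dimensional latent representation

def decode(z):
    for w, b in decoder[:-1]:
        z = dense(z, w, b)
    w, b = decoder[-1]
    return z @ w + b              # linear output layer

x = rng.normal(size=(5, 30))      # 5 devices, 30 traffic features each
z = encode(x)
x_hat = decode(z)
mse = np.mean((x - x_hat) ** 2)   # reconstruction loss to be minimised
```

Training would adjust the weights to minimize `mse`; the three-dimensional `z` is then used for visualization and clustering.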
\subsubsection{DBSCAN~\cite{ester1996density}}
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based clustering algorithm that captures the insight that clusters are dense groups of points: if a particular point belongs to a cluster, it should be near lots of other points in that cluster. The algorithm works as follows. First, we choose two parameters: a positive number, epsilon, and a natural number, minPoints. We then pick an arbitrary point in the dataset. If there are more than minPoints points within a distance of epsilon from that point (including the point itself), we consider all of them to be part of a ``cluster''. We then expand that cluster by checking each of the new points to see whether it, too, has more than minPoints points within a distance of epsilon, growing the cluster recursively if so. Eventually, we run out of points to add to the cluster; we then pick a new arbitrary point and repeat the process. It is entirely possible that a point we pick has fewer than minPoints points in its epsilon ball and is also not part of any other cluster. In that case, it is considered a ``noise point'' belonging to no cluster, and we mark it as an anomaly.
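The procedure above can be sketched directly in numpy. This is our own simplified implementation for illustration (real deployments would use an optimized library version); noise points, i.e., the anomaly candidates, receive the label $-1$.

```python
import numpy as np

def dbscan(points, eps, min_points):
    """Minimal DBSCAN sketch: one label per point, -1 marks noise/anomalies."""
    n = len(points)
    # Pairwise distances and epsilon-neighbourhoods (each point counts itself).
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbours = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        # Skip points already claimed by a cluster and non-core points.
        if labels[i] != -1 or len(neighbours[i]) < min_points:
            continue
        labels[i] = cluster
        frontier = list(neighbours[i])
        while frontier:                       # grow the cluster recursively
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbours[j]) >= min_points:
                    frontier.extend(neighbours[j])
        cluster += 1
    return labels

# Two dense groups of devices plus one isolated (anomalous) point.
pts = np.array([[0.0, 0.0], [0.0, 0.1], [0.1, 0.0],
                [5.0, 5.0], [5.0, 5.1], [5.1, 5.0],
                [10.0, 10.0]])
labels = dbscan(pts, eps=0.5, min_points=3)   # last point is flagged as noise
```

In the latent space, the same call groups devices of the same type into one cluster and leaves devices with unusual traffic unlabeled.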
\begin{figure}
\caption{Number of Clusters per company when visualized in the latent space}
\label{fig:clusters}
\end{figure}
Figure~\ref{fig:clusters} shows the latent space and the clusters obtained for Company A (left) and Company B (right). Companies A and B had more than 30000 devices each installed in their IoT ecosystems, with three and nine unique types of devices, respectively. Based on the traffic patterns observed over the period of a month, the autoencoder mapped devices of the same type close to each other in the latent space, while devices of different types were mapped far apart. When DBSCAN clustering was applied in the latent space, the number of distinct clusters formed was exactly the same as the corresponding number of device types per company. The devices that did not fall into these well-formed clusters because of their different traffic patterns were marked as anomalies; they are represented by the black points.
\section{Related Work}
\label{sec:related}
Demand forecasting has been broadly studied due to the importance of the problem and its significance for utility companies. {\it Statistical methods} use historical data to make the forecast as a function of the most significant variables. A detailed survey of regression analysis for the prediction of residential energy consumption is offered in~\cite{regr-survey}. The authors argue that, among statistical models, linear regression analysis has shown promising results because of its satisfactory accuracy and simpler implementation compared to other methods. In many cases, the choice of framework and the modeling effort are driven by the specifics of the problem formulation.
While different studies have shown that demand forecasting depends on multiple factors and hence can benefit from multivariate modeling, univariate methods like ARMA and ARIMA~\cite{ARMA2002,ARIMA2017} might be sufficient for a short-term forecast. {\it Machine learning} (ML) and {\it artificial intelligence} (AI) methods based on neural networks~\cite{NN2016,NN2017}, support vector machines (SVM)~\cite{SVM1}, and fuzzy logic~\cite{fuzzy1} have been applied to capture complex non-linear relationships between inputs and outputs. When comparing ARIMA, traditional machine learning, and artificial neural network (ANN) modeling, recent articles provide contradictory results: in~\cite{arima-vs-ANN-1}, ARIMA achieves better results than ANN, while the study~\cite{arima-vs-ANN-2} claims that ANNs perform slightly better than ARIMA methods.
In our work, we construct a deep-learning-based Convolutional LSTM forecasting model (a hybrid model with both convolutional and LSTM layers). The Convolutional LSTM model works well for long-term (weekly) demand prediction and automatically captures non-linear daily patterns.
In general, the quality and the prediction power of models designed using ML and AI methods critically depend on the quality and quantity of historical data. To create a good forecasting model, several approaches have been developed in the literature. One such approach is an ensemble of multiple forecasting methods applied to the same time series, with a weighted average of their forecasts used as the final result~\cite{automate-forecast4}. In our work, we pursue a different approach: we make use of normalized data from multiple companies and train a single global model to make traffic predictions. This makes our method highly scalable.
\section{Conclusion}
In our work, we proposed {\it IoTelligent}, a tool that applies machine learning techniques to forecast companies' traffic demands over time, visualize traffic trends, identify and cluster devices, and detect device anomalies and failures. We showed that, among the different neural network architectures, the Convolutional LSTM model performed best for demand forecasting. In order to avoid maintaining and upgrading tens (or hundreds) of models (a different model per company), we designed and implemented a novel, scalable approach in which a global demand forecasting model is built using the combined data of all the companies. This method was improved by normalizing the ``contribution'' of individual company data in the combined global dataset. We also introduced uncertainty intervals to the forecasts to provide better information to the users. To solve the scalability issues of managing millions of devices, we designed and evaluated a novel technique based on: (i) autoencoders, which extract the relevant features automatically from the network traffic stream; and (ii) DBSCAN clustering, which identifies the groups of devices that exhibit similar behavior, in order to flag anomalous devices. The designed management tool paves the way for the industry to monitor its IoT assets for presence, functionality, and behavior at scale, without the need to develop device-specific models.
{\scriptsize}
\input{Sources/References.bib}
\end{document}
\begin{document}
\author{Oksana Busel}
\affiliation{Nano and Molecular Systems Research Unit, University of Oulu, Oulu 90014, Finland}
\author{Sami Laine}
\affiliation{Nano and Molecular Systems Research Unit, University of Oulu, Oulu 90014, Finland}
\affiliation{Department of Information Technology, Oulu University of Applied Sciences, Oulu 90101, Finland}
\author{Olli Mansikkam\"{a}ki}
\affiliation{Nano and Molecular Systems Research Unit, University of Oulu, Oulu 90014, Finland}
\author{Matti Silveri}
\affiliation{Nano and Molecular Systems Research Unit, University of Oulu, Oulu 90014, Finland}
\date{\today}
\title{Dissipation and Dephasing of Interacting Photons in Transmon Arrays}
\begin{abstract}
Transmon arrays are one of the most promising platforms for quantum information science. Despite being often considered simply as qubits, transmons are inherently quantum mechanical multilevel systems. Being experimentally controllable with high fidelity, the higher excited states beyond the qubit subspace provide an important resource for hardware-efficient many-body quantum simulations, quantum error correction, and quantum information protocols. Alas, dissipation and dephasing phenomena generated by couplings to various uncontrollable environments yield a practical limiting factor to their utilization. To quantify this in detail, we present here the primary consequences of single-transmon dissipation and dephasing to the many-body dynamics of transmon arrays. We use analytical methods from perturbation theory and quantum trajectory approach together with numerical simulations, and deliberately consider the full Hilbert space including the higher excited states. The three main non-unitary processes are many-body decoherence, many-body dissipation, and heating/cooling transitions between different anharmonicity manifolds. Of these, the many-body decoherence -- being proportional to the squared distance between the many-body Fock states -- gives the strictest limit for observing effective unitary dynamics. Considering experimentally relevant parameters, including also the inevitable site-to-site disorder, our results show that the state-of-the-art transmon arrays should be ready for the task of demonstrating coherent many-body dynamics using the higher excited states. However, the wider utilization of transmons for ternary-and-beyond quantum computing calls for improving their coherence properties.
\end{abstract}
\maketitle
\section{Introduction}
Transmon arrays have recently taken substantial advances in size, coherence, and controllability, opening doors for exciting demonstrations of quantum information protocols~\cite{Ofek16, Rosenblum18, Hu19, Arute2019, CI20, Wu21, Chen21, Marques21, Gong21, Krinner22, Zhao22, Acharya22, Chen22} and many-body simulations~\cite{Roushan2017, Ma2019, HF20, Carusotto2020, Guo2021, Mi21, Satzinger21, Blok21, Zanner22, Braumuller2022, Morvan22, Mi22, Zhu22, Saxberg22}. Transmons are typically operated as quantum two-level systems, qubits, despite their inherent nature as anharmonic oscillators with approximately $d\sim 5-10$ well-defined quantum states~\cite{Koch07}. Treating them as proper quantum multilevel systems can be leveraged in several ways, including enhanced hardware efficiency and functionality of quantum error correction~\cite{Muralidharan17, Elder20}, fault-tolerant protocols~\cite{Campbell14, Rosenblum18}, and versatile quantum simulations with less mapping overhead~\cite{Orell2019, Wang20, Macdonell21, Olli2021, Olli2022}. As a result, the utilization of the higher excited states has recently garnered substantial interest, witnessed through the demonstrations of qutrit operations and algorithms with transmons~\cite{Peterer15, Zhang19, Morvan21, Blok21, Steinmetz22, Cervera-lierta22, Cao22, Roy22, Luo22, Goss22}, other superconducting quantum devices~\cite{Neeley09, Kononenko21}, as well as trapped ion and photonic platforms~\cite{chi22, Ringbauer22}.
Taking the higher excited states into account, a transmon array can be accurately described using the Bose-Hubbard model with attractive interactions~\cite{deGrandi15, Orell2019, Roushan2017, Carusotto2020}. In our previous works, we have derived an effective model for the unitary dynamics of highly-excited states of coupled transmons based on nearly-degenerate perturbation theory~\cite{Olli2021,Olli2022}. In the typical parameter regime where the transmon anharmonicity $U$ dominates the hopping rate $J$, these states can be interpreted as quasiparticles exhibiting, for example, edge localization and effective long-range interactions. This has since found an application in explaining emergent soliton dynamics~\cite{Soliton22}. In addition to the unitary dynamics of systems of transmons, the dissipation and dephasing rates of the higher excited states of individual transmons have also been quite well characterized~\cite{Peterer15, Morvan21, Blok21}. However, the combination of these two topics -- a quantitative understanding of the non-unitary dynamics of the higher excited states in transmon arrays -- has not been studied in detail before.
In this work, we include the dissipation and dephasing processes into the many-body dynamics of transmon arrays both analytically and numerically. We identify three main processes, listed here in descending order of their typical effective rates: many-body decoherence, many-body dissipation, and heating/cooling induced by the combination of pure transmon dephasing processes and many-body dynamics.
The Bose-Hubbard model with attractive interactions conserves the total boson number, meaning that the unitary dynamics neither adds nor removes excitations from the system. This is broken by the many-body dissipation process inducing transitions between the different boson-number manifolds and occurring at a rate proportional to the instantaneous total boson number. The many-body decoherence process, on the other hand, reduces the coherence of many-body superpositions at a rate proportional to the squared distance between the many-body Fock states. Finally, in the parameter regime of strongly interacting bosons, $U/J\gg 1$, the many-body spectrum of the Bose-Hubbard model is split into well-separated regions with almost-conserved interaction energy~\cite{Olli2021, Olli2022}. The transmon dephasing process combined with the many-body dynamics breaks this quasi-conserved symmetry by inducing heating and cooling transitions between these so-called anharmonicity manifolds.
Our results show that, with experimentally realistic values for dissipation and dephasing, it should be possible to observe the many-body dynamics of the higher excited states between coupled transmons in state-of-the-art transmon arrays, even including the inevitable site-to-site disorder. We clearly see that the dephasing of the highly-excited states is one of the critical factors in their wider utilization in ternary-and-beyond quantum computation and simulations.
The article is organized as follows. In Sec.~\ref{sec:intro2}, we introduce the attractive Bose-Hubbard model of a transmon array, together with the typical dissipation and dephasing processes in transmons through a master equation formalism. Sections~\ref{sec:diss} and~\ref{sec:dephasing} focus individually on the effects of dissipation and dephasing processes. In Section~\ref{sec:disorder}, we describe phenomena induced by disorder in the dissipation and dephasing rates between the individual transmons. Conclusions and future outlook are presented in Sec.~\ref{sec:conc}.
\section{Open many-body dynamics in~a~transmon~array}
\label{sec:intro2}
A transmon is made of Josephson junctions and capacitor plates, realizing an anharmonic oscillator with natural frequency $\omega$ and anharmonicity $U$, see Fig.~\ref{fig:dndbhm-border-gradients-12-01-2022}(a). Nearby transmons interact with each other through a capacitive interaction $J$. In many-body language, the anharmonicity~$U$ describes the strength of the on-site many-body interactions between bosonic excitations, while $J$ is the hopping rate between neighboring transmons. Hence, an array of $L$ transmons is effectively described by the Bose-Hubbard model with attractive interactions~\cite{Orell2019,Carusotto2020},
\begin{align}
\frac{\hat{H}_{\rm BH}}{\hbar} = \sum_{\ell=1}^{L}\omega_{\ell} \hat{n}_{\ell} & - \sum_{\ell=1}^L\frac{U_\ell}{2} \hat{n}_{\ell} \left( \hat{n}_{\ell}-1 \right) \notag \\ & + \sum_{\ell=1}^{L-1} J^{}_\ell \left(\hat{a}_{\ell}^{\dagger} \hat{a}^{}_{\ell+1}+\hat{a}^{}_{\ell} \hat{a}^{\dagger}_{\ell+1}\right),
\label{eq:FullHam}
\end{align}
written here in the basis of the local bosonic annihilation $\hat{a}_{\ell}$, creation $\hat{a}_{\ell}^{\dagger}$, and occupation number $\hat{n}_{\ell} = \hat{a}_{\ell}^{\dagger} \hat{a}_{\ell}$ operators, with the reduced Planck constant $\hbar$. The bosonic excitations described by the model are microwave photons, but we use the term `boson' for generality in this article. In modern arrays of transmons~\cite{Ma2019, Zhu22, Morvan22}, we have $\omega_\ell / 2 \pi \sim \SI{5}{\giga\hertz}$ for the on-site energies, $J_\ell / 2 \pi \sim \SIrange{10}{30}{\mega\hertz}$ for the hopping frequencies, and $U_\ell / 2 \pi \sim \SIrange{200}{250}{\mega\hertz}$ for the on-site interactions. Due to the inevitable small differences between manufactured devices, the parameters of any two transmons are usually not equal. However, for the sake of simplicity, we will assume in most parts of this work that there is no disorder in the parameters of the Hamiltonian, i.e., $\omega_\ell = \omega$, $J_{\ell} = J$, and $U_\ell = U$.
The transmon anharmonicity dominates the hopping frequency, $U \gg J$, resulting in an energy spectrum where states with the same total anharmonicity~$\hat A=-\sum_\ell \hat{n}_\ell (\hat{n}_\ell - 1) / 2$ form well-separated bands, see Fig.~\ref{fig:dndbhm-border-gradients-12-01-2022}(b). Due to the conservation of energy, the unitary dynamics of the model takes place mostly within the anharmonicity manifold of the initial state. The ratio~$J/U \ll 1$ can be considered a perturbation parameter. A highly-excited transmon can then be seen as a bosonic excitation -- a quasiparticle -- located at one site of the transmon chain and interacting with other quasiparticles, single bosons, or array edges~\cite{Olli2022}. The term `hard-core bosons' refers to the state manifold where the value of the total anharmonicity equals zero, having no excitations beyond the qubit subspace and having the highest energy, see Fig.~\ref{fig:dndbhm-border-gradients-12-01-2022}(b).
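For concreteness, the Hamiltonian of Eq.~\eqref{eq:FullHam} can be assembled numerically for a short chain with each transmon truncated to $d$ levels. The following numpy construction is our own illustrative sketch (parameter values are representative, in units of $2\pi\,$GHz); it also makes it easy to verify the conservation of the total boson number used in the analysis of dissipation below.

```python
import numpy as np

def site_op(op, ell, L, d):
    """Embed a single-site operator at site ell of an L-site, d-level chain."""
    eye = np.eye(d)
    out = op if ell == 0 else eye
    for k in range(1, L):
        out = np.kron(out, op if k == ell else eye)
    return out

def bose_hubbard(L, d, omega, U, J):
    """Attractive Bose-Hubbard Hamiltonian, Eq. (FullHam), with hbar = 1
    and uniform parameters omega, U, J."""
    a = np.diag(np.sqrt(np.arange(1, d)), k=1)      # truncated annihilation op
    n = a.conj().T @ a
    # On-site energies and attractive interaction -U/2 * n (n - 1).
    H = sum(omega * site_op(n, l, L, d)
            - 0.5 * U * site_op(n @ (n - np.eye(d)), l, L, d)
            for l in range(L))
    # Nearest-neighbour hopping J (a_l^dag a_{l+1} + h.c.).
    for l in range(L - 1):
        al, ar = site_op(a, l, L, d), site_op(a, l + 1, L, d)
        H = H + J * (al.conj().T @ ar + al @ ar.conj().T)
    return H

L, d = 3, 4
H = bose_hubbard(L, d, omega=5.0, U=0.22, J=0.02)
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
N_tot = sum(site_op(a.conj().T @ a, l, L, d) for l in range(L))
```

One can check numerically that $\hat H_{\rm BH}$ is Hermitian and commutes with $\hat N=\sum_\ell \hat n_\ell$, which underlies the block structure exploited in Sec.~\ref{sec:diss}.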
\begin{figure}
\caption{(a) A schematic of a 1D transmon array, where the transmons are represented as anharmonic oscillators with frequencies~$\omega_\ell$, anharmonicities~$U_\ell$, nearest-neighbor hopping rates~$J_\ell$, dissipation rates~$\gamma_\ell$, and dephasing rates~$\kappa_\ell$. (b)~A~many-body energy level spectrum of a transmon array and a schematic showing many-body transitions due to the dissipation~(yellow) and dephasing~(green) processes. The colored bands denote the anharmonicity manifolds containing several many-body eigenstates. The red and blue colors of the energy levels represent the relative contributions of the hopping energy and anharmonicity, respectively.}
\label{fig:dndbhm-border-gradients-12-01-2022}
\end{figure}
Transmons experience non-unitary dissipation and dephasing processes. The master equation yielding the non-unitary evolution of the density matrix of the system is given by
\begin{align}
\frac{d \hat{\rho}}{d t} = - \frac{i}{\hbar}[ \hat{H}_{\rm BH}, \hat{\rho}] &+ \sum_{\ell=1}^{L} \frac{{\gamma}_{\ell}}{2} ( 2 \hat{a}_{\ell} \hat{\rho} \hat{a}_{\ell}^{\dagger} - \hat{a}_{\ell}^{\dagger} \hat{a}_{\ell} \hat{\rho} - \hat{\rho} \hat{a}_{\ell}^{\dagger} \hat{a}_{\ell})\nonumber\\
&+ \sum_{\ell=1}^{L} \frac{\kappa_{\ell}}{2} ( 2 \hat{n}_{\ell} \hat{\rho} \hat{n}_{\ell} - \hat{n}^2_{\ell} \hat{\rho} - \hat{\rho} \hat{n}^2_{\ell}).
\label{eq:MEF}
\end{align}
In transmon arrays, typical experimental values~\cite{Ma2019, Arute2020, Gong21, Zhao22, Zhu22, Saxberg22} for the mean dissipation rates $\gamma$ and the mean dephasing rates $\kappa$ are $\gamma / 2 \pi \sim \SIrange{5}{10}{\kilo\hertz}$ ($T_1 \sim \SIrange{15}{30}{\micro\second}$) and $\kappa / 2 \pi \sim \SIrange{50}{300}{\kilo\hertz}$ ($T^\star_2 \sim \SIrange{1}{6}{\micro\second}$). In quantum information setups, where the devices are better isolated, one can achieve much better values~\cite{Place2021, Krinner22}, $T_1 \sim \SIrange{30}{300}{\micro\second}$ and $T_2 \sim \SIrange{50}{100}{\micro\second}$. Here, we use the more conservative values from the array setups. The rates usually differ quite significantly from site to site, with typical standard deviations of $\sigma_\gamma/\gamma \sim \SIrange{0.2}{0.5}{}$ and $\sigma_\kappa/\kappa \sim \SIrange{0.3}{0.5}{}$.
Despite the disorder, we first assume that all the rates are identical, $\gamma_\ell = \gamma$ and $\kappa_\ell = \kappa$, and then briefly discuss the effects of the disorder in Sec.~\ref{sec:disorder}. The higher excited states of a transmon have a pronounced susceptibility to charge noise~\cite{Koch07}. This means that in practice their dephasing rates are larger than implied by Eq.~\eqref{eq:MEF}, see, e.g., Refs.~\cite{Peterer15, Morvan21, Blok21}. We first consider the simple dephasing model of Eq.~\eqref{eq:MEF} and discuss its extension in Sec.~\ref{sec:disorder}. In what follows, we consider the many-body dynamics under dissipation and dephasing separately.
\section{Dissipation} \label{sec:diss}
Let us first focus on uniform dissipation ($\gamma_\ell=\gamma$) considering the master equation \begin{align}
\frac{d \hat{\rho}}{d t} &= - \frac{i}{\hbar}[ \hat{H}_{\rm BH}, \hat{\rho}] +\sum_{\ell=1}^{L} \frac{\gamma}{2} \left( 2 \hat{a}^{}_{\ell} \hat{\rho} \hat{a}_{\ell}^{\dagger} - \hat{a}_{\ell}^{\dagger} \hat{a}^{}_{\ell} \hat{\rho} - \hat{\rho} \hat{a}_{\ell}^{\dagger} \hat{a}^{}_{\ell} \right)\notag\\
& = -\frac{i}{\hbar}\left(\hat{H}^{}_{\rm NJ}\hat\rho-\hat\rho\hat{H}_{\rm NJ}^\dag\right)+\sum_{\ell=1}^L\gamma\hat{a}^{}_{\ell} \hat{\rho} \hat{a}_{\ell}^{\dagger}.
\label{eq:MEDiss}
\end{align}
On the second line, we have split the equation into two parts according to the quantum trajectory approach~\cite{Daley14}. There, the non-unitary dynamics is described by the no-jump evolution under the non-Hermitian Hamiltonian
\begin{equation}
\hat H^{(\gamma)}_{\rm NJ}=\hat H_{\rm BH}-i \sum_{\ell=1}^L\frac{\hbar\gamma}{2}\hat a_{\ell}^\dagger\hat a_{\ell}^{}, \label{eq:diss_NJ}
\end{equation}
interrupted by random quantum jumps via the jump operators~$\sqrt{\gamma}\hat a_\ell$. The rate at which a particular quantum jump occurs is $\braket{\psi(t)|\gamma\hat a_\ell^\dagger \hat a_\ell^{}|\psi(t)}=\gamma\braket{\hat n_\ell}$. The total rate is then given by $\braket{\psi(t)|\sum_\ell \gamma\hat a_\ell^\dagger \hat a_\ell^{}|\psi(t)}=\gamma N$. Here, the instantaneous jump events remove a photon at the site $\ell$ and change the state discontinuously,
\begin{equation}
\ket{\psi_{\rm QJ}(t)}= \frac{ \sqrt{\gamma} \hat{a}_{\ell} \ket{\psi(t)}} {\sqrt{ \braket{ \psi(t) | \gamma \hat{a}_\ell^{\dagger} \hat a^{}_\ell |\psi(t)}}}.
\label{eq:QJ}
\end{equation}
Since the Bose-Hubbard Hamiltonian~\eqref{eq:FullHam} commutes with the total photon number operator $\hat N=\sum_{\ell}\hat n_\ell$, the no-jump evolution can be split into two independent parts, Hermitian evolution and damping at the rate $\gamma N$,
\begin{align}
\ket{\psi_{\rm NJ}(t+\tau)} & =\frac{e^{-i \hat H^{(\gamma)}_{\rm NJ} \tau/\hbar }\ket{\psi(t)}}{\sqrt{\braket{\psi(t)|\big(e^{-i \hat H^{(\gamma)}_{\rm NJ} \tau/\hbar }\big)^\dagger e^{-i \hat H^{(\gamma)}_{\rm NJ} \tau/\hbar }|\psi(t)}}} \notag \\ & =e^{-\gamma N \tau}e^{-i \hat H_{\rm BH} \tau /\hbar }\ket{\psi(t)}, \label{eq:diss_NJ:evol}
\end{align}
assuming that one starts from a quantum state in a single photon-number sector with $N=\braket{\hat N}$. Physically, this means that between the quantum jumps, the evolution of the system is identical to one experiencing no dissipation. This conclusion holds even if we include the dephasing process, since it induces no transitions between the photon-number sectors. Thus, if the experimental setting allows, one can recover the quantum dynamics without any dissipation effects by post-selecting on the total number of bosons.
In order to solve the master equation~\eqref{eq:MEDiss}, we split the full density matrix into different photon number sectors, $\hat \rho=\sum_{N=0}^{N_{\rm max}} P_N(t)\hat\rho_N$, where $\hat \rho_N=\hat \Pi_N \hat \rho \hat \Pi_N$ and $\hat \Pi_N$ is a projector to the space of states with the total photon number $N$. The master equation then reduces to the rate equations $\dot P_N(t)=-\gamma N P_N(t)+\gamma (N+1)P_{N+1}(t)$ describing transitions between the blocks $N \to N-1$ at the rate $\gamma N$. By solving the rate equations, we obtain the probabilities
\begin{align}
P_N(t) =\binom{N_{\rm max}}{N}e^{-\gamma N t}(1-e^{-\gamma t})^{N_{\rm max}-N}
\label{eq:LDiss}
\end{align}
of being in the photon number sector $N$. These are depicted in Fig.~\ref{fig:analytics}.
\begin{figure}
\caption{The populations $P_N$ of the different photon number sectors as a function of time for $N=0,1,\ldots,4$, as given by Eq.~\eqref{eq:LDiss}.}
\label{fig:analytics}
\end{figure}
The anharmonicity manifolds are spanned by the many-body Fock states $\ket{n_1, n_2, \ldots, n_L}=\ket{\bm{n}}$ with a fixed value for the anharmonicity $a=-\sum_\ell n_\ell(n_\ell-1)/2$ and the total photon number $N=\sum_\ell n_\ell$. For example, the lowest anharmonicity manifold is spanned by the states $\ket{N_\ell}$ where all bosons reside at a single site $\ell=1,2,\ldots,L$. The second-lowest anharmonicity manifold is spanned by the states $\ket{(N-1)_\ell, 1_k}$. Notice that in our shorthand notation of the many-body Fock states, we write explicitly only the boson numbers of the occupied sites. A photon-loss event by a quantum jump can lead to transitions between the anharmonicity manifolds. Note that the states in the lowest anharmonicity manifold are always mapped to a state in the lowest anharmonicity manifold of the next photon number sector, $\ket{N_\ell}\to \ket{(N-1)_\ell}$. Similarly, the states in the highest anharmonicity manifold, consisting of the hard-core boson states, are mapped to the states in the highest anharmonicity manifold of the next photon number sector. In between, however, there are different decay channels depending on which site decays, although the total decay rate always equals~$\gamma N$. As an example, the states $\ket{(N-1)_\ell, 1_k}$ decay to the manifold $\ket{(N-2)_\ell, 1_k}$ at a rate $\gamma (N-1)$ and to the manifold $\ket{(N-1)_\ell}$ at a rate~$\gamma$, see Fig.~\ref{fig:dndbhm-border-gradients-12-01-2022}(b).
The photon-loss event, occurring at a single site, generally has a localizing effect. For example, if the state of the system is a superposition in the lowest anharmonicity manifold, $\ket{\psi}=\sum_{\ell=1}^L c_\ell\ket{N_\ell}$ with some coefficients $c_\ell$, then a quantum jump of Eq.~\eqref{eq:QJ} at the site $\ell'$ reduces it to the localized state $\ket{\psi}=\ket{N_{\ell'}}$, wiping out all the information on the coefficients $c_\ell$.
In addition to transitions, dissipation also causes decoherence. By considering the evolution of the off-diagonal term of the density matrix $\hat\rho$ between any many-body Fock states $\ket{n_1, n_2, \ldots, n_L}=\ket{\bm{n}}$ and $\ket{m_1, m_2, \ldots, m_L}=\ket{\bm{m}}$,
\begin{equation}
\frac{d \braket{\bm{n}|\hat \rho|\bm{m}}}{d t}=\frac{\braket{\bm{n}|[\hat H,\hat{\rho}]|\bm{m}}}{i\hbar}-\frac{\gamma}{2}\sum_{\ell=1}^L(n_\ell +m_\ell)\braket{\bm{n}|\hat\rho|\bm{m}},
\end{equation}
we see that the decoherence rate $K^\gamma_{\bm{n},\bm{m}}$ between the states is simply proportional to the total number of photons they contain,
\begin{equation}
K^\gamma_{\bm{n},\bm{m}}=\frac{\gamma}{2}\sum_{\ell=1}^L(n_\ell+m_\ell). \label{eq.diss.decoher}
\end{equation}
In our case the system is limited to a single total photon number sector, and so the decoherence rate due to dissipation becomes
\begin{equation}
K^\gamma_{\bm{n},\bm{m}}=\gamma N.
\end{equation}
To summarize, dissipation in a transmon array is a rather simple process, leading essentially to a cascade $N \to N~-~1\to\ldots \to 0$ between the photon number sectors and to a loss of coherence as described by Eq.~\eqref{eq.diss.decoher}.
\section{Dephasing} \label{sec:dephasing}
The dephasing process originates from temporal fluctuations of the transmon frequencies, and it can be modeled using the master equation
\begin{equation}
\frac{d \hat{\rho}}{d t} = - \frac{i}{\hbar}[ \hat{H}_{\rm BH}, \hat{\rho}] + \sum_{\ell=1}^{L} \frac{\kappa}{2} \left( 2 \hat{n}_{\ell} \hat{\rho} \hat{n}_{\ell} - \hat{n}^2_{\ell} \hat{\rho} - \hat{\rho} \hat{n}^2_{\ell} \right),
\label{eq:MEDeph}
\end{equation}
with the jump operators $\sqrt{\kappa}\hat n_\ell$ for each transmon. We have here ignored the dissipation altogether for simplicity. By writing down the non-Hermitian Hamiltonian corresponding to the no-jump evolution of the dephasing process,
\begin{equation}
\hat H_{\rm NJ}^{(\kappa)}=\hat H_{\rm BH}-i\sum_{\ell=1}^L\frac{\hbar \kappa}{2}\hat n^2_\ell,
\end{equation}
we see that the anti-Hermitian part $-i\sum_{\ell=1}^L\hbar \kappa\hat n^2_\ell/2$ does not commute with the Hermitian part $\hat H_{\rm BH}$. This implies that the dephasing process, and even its no-jump evolution generated by $\hat H_{\rm NJ}^{(\kappa)}$, induces transitions between the eigenstates of the attractive Bose-Hubbard Hamiltonian. Thus, in general, it is a more complicated process than the dissipation considered above.
\subsection{Decoherence of the many-body Fock states}
Let us first focus on the pure decoherence rates, introduced through the off-diagonal elements of the density matrix between many-body Fock states. Multiplication of Eq.~\eqref{eq:MEDeph} by a pair of arbitrary Fock states $\bra{\bm n}$ and $\ket{\bm{m}}$, with $\bm{n}\neq\bm{m}$, results in
\begin{align}
\frac{d \braket{\bm{n}|\hat \rho|\bm{m}}}{d t}
=&\frac{\braket{\bm{n}|[\hat H,\hat{\rho}]|\bm{m}}}{i\hbar}-\frac{\kappa}{2}\sum_{\ell=1}^L(n_\ell-m_\ell)^2\braket{\bm{n}|\hat \rho|\bm{m}}.
\end{align}
From the last term we see that the decoherence rate is given by
\begin{equation}
K_{\bm{n},\bm{m}}^\kappa=\frac{\kappa}{2}|\bm{n}-\bm{m}|^2, \label{eq.deph.decoher}
\end{equation}
where $|\bm{n}-\bm{m}|$ is the Euclidean distance between the occupation vectors $\bm{n}=(n_1, n_2, n_3, \ldots, n_L)$ and $\bm{m}$.
Let us elucidate the effect of the decoherence on the dynamics of the attractive Bose-Hubbard model through four examples. First, in the lowest anharmonicity manifold, the state of the system is always some superposition of the $N$-boson stacks, $\ket{\psi}=\sum_{\ell} c_\ell \ket{N_\ell}$. Now, the decoherence process between the Fock states $\ket{N_\ell}$ and $\ket{N_{\ell'}}$ occurs at the rate of $K^\kappa_{\bm n,\bm m}=\kappa N^2$, see Fig.~\ref{fig:dephasing}(a). This means that the steady state of the system is a mixed state of $\ket{N_\ell}$, and it is reached at the rate of $\kappa N^2$. We study this in more detail in Sec.~\ref{sec:disorder}.
Next, in the second-lowest anharmonicity manifold, the dynamics occurs between the states $\ket{(N-1)_\ell, 1_k}$. There are now a few possibilities for the speed of the decoherence process between the states $\ket{(N-1)_\ell, 1_k}$ and $\ket{(N-1)_{\ell'}, 1_{k'}}$, depending on the relative positions of the $(N-1)$-boson stacks and the single bosons. The slowest rate is obtained when the stacks are located at the same site ($\ell' = \ell$); in this case, the decoherence occurs at the rate of $K^\kappa_{\bm{n},\bm{m}} = \kappa$. At the other extreme, if neither the stacks nor the bosons line up ($\ell' \neq \ell$, $k' \neq k$), the decoherence rate is given by $K^\kappa_{\bm{n},\bm{m}} = \kappa [(N - 1)^2 + 1]$. Regardless of the different decoherence rates, the final steady state is a mixed state of $\ket{(N-1)_\ell, 1_k}$, reached on the time scales of~$\kappa^{-1}$.
As a third example, in the hard-core boson manifold, the lowest decoherence rate is attained between pairs of states that differ only by a single excitation pair, such as $\ket{\ldots0101\ldots}$ and $\ket{\ldots1001\ldots}$. For these, the decoherence rate is just $K^\kappa_{\bm{n},\bm{m}}=\kappa$. Correspondingly, the maximum decoherence rate is between the state pairs whose local excitation structure differs the most: two hard-core states with disjoint supports differ at $2N$ sites, giving $K^\kappa_{\bm{n},\bm{m}}=\kappa N$. When $N\leq L/2$, this maximum is achieved, for example, between the states $\ket{1010\ldots}$ and $\ket{0101\ldots}$.
Finally, in the same way that the dephasing leads to decoherence between states inside an anharmonicity manifold, it degrades coherence between states in different manifolds, see Fig.~\ref{fig:dephasing}(a).
To summarize, on the time scales of the order of $\kappa^{-1}$ or shorter, the density matrix is reduced to a diagonal matrix representing a mixed state within the initial anharmonicity manifold.
\subsection{Transitions between the anharmonicity manifolds}
The states in an anharmonicity manifold are actually weakly coupled to the states in the neighboring manifold via the hopping interaction. As derived in Ref.~\cite{Olli2021} through first-order non-degenerate perturbation theory in $J/U$, the $N$-boson stack state belonging to the anharmonicity manifold $a=-N(N-1)/2$ and the state of $N-1$ stacked bosons plus a lone boson belonging to the anharmonicity manifold $b=-(N-1)(N-2)/2$ are more accurately given by
\begin{align}
&\ket{N_\ell}_a=\ket{N_\ell}-\frac{J\sqrt{N}}{U(N-1)}\sum_{\sigma=\pm1}\ket{(N-1)_\ell, 1_{\ell+ \sigma }}, \label{eq:stack1} \\
&\ket{(N-1)_\ell, 1_{\ell+ \sigma }}_b=\ket{(N-1)_\ell, 1_{\ell+ \sigma }}+\frac{J\sqrt{N}}{U(N-1)}\ket{N_\ell}.
\end{align}
Under unitary dynamics, the effect of this non-degenerate coupling is typically weak, and leads to fast oscillations between the two manifolds~\cite{Olli2022}. However, when combined with the dephasing process, it results in actual transitions between the manifolds. Intuitively, we can understand this by considering the state $\ket{N_\ell}_a$ of Eq.~\eqref{eq:stack1} under the action of the quantum jump operator $\sqrt{\kappa}\hat n_\ell$,
\begin{align}
\ket{\psi^{(\kappa)}_{\rm QJ}}&=\frac{\sqrt{\kappa}\hat n_\ell \ket{N_\ell}_a}{\sqrt{{}_a\!\braket{N_\ell|\kappa \hat n_\ell^2|N_\ell}_a}} \\
&= \ket{N_\ell}_a+\frac{J\sqrt{N}}{U(N-1)}\sum_{\sigma=\pm1}\ket{(N-1)_\ell, 1_{\ell+ \sigma }}_b.\notag
\end{align}
This holds to first order in $J/U$. The additional contribution from the states $\ket{(N-1)_\ell, 1_{\ell+ \sigma }}_b$ exemplifies the transitions between the anharmonicity manifolds. A similar phenomenon occurs for quantum jumps generated by the operators $\sqrt{\kappa}\hat n_{\ell+\sigma}$.
\begin{figure}
\caption{(a) The decay of the off-diagonal elements of the density matrix, such as those involving the state $\ket{0300}$, due to dephasing. (b) Populations of the anharmonicity manifolds: the solution of the rate equation~\eqref{eq:ratePa} with the rates of Eq.~\eqref{eq:transition_rate} compared to the full numerical solution of the master equation.}
\label{fig:dephasing}
\end{figure}
For a more rigorous derivation, we first note that the dephasing reduces the density matrix to a diagonal form $\hat\rho=\sum_a P_a \hat{\Pi}_a$ on a fast time scale $\kappa^{-1}$. Here, $\hat{\Pi}_a$ is a projector onto the anharmonicity manifold $a$ and the coefficient $P_a$ is the corresponding population. What we now consider are the slow transition rates between the anharmonicity manifolds. In other words, we consider the longer-time (slower) dynamics based on the concept of local equilibrium: we assume that the density matrix always retains the form determined by the equilibrium of the leading-order dynamics, while the relative weights of the different manifolds may vary. Inserting this diagonal density matrix into the master equation~\eqref{eq:MEDeph} yields the rate equation
\begin{equation}
\dot P_a=\sum_b(P_b-P_a)\Gamma_{ab}. \label{eq:ratePa}
\end{equation}
The effective transition rates $\Gamma_{ab}$ between the anharmonicity manifolds $a$ and $b$ are given by (see details on the derivation in App.~\ref{sec:app1})
\begin{align}
\Gamma_{ab} & = \frac{\kappa}{\Tr(\hat{\Pi}_a) [\hbar U (a - b)]^2} \sum_{\ell=1}^L \Tr \left(\hat{\Pi}_a [\hat{n}_\ell, \hat{H}_J] \hat{\Pi}_b [\hat{H}_J, \hat{n}_\ell]\right) \notag \\
& = \frac{\kappa}{\Tr(\hat{\Pi}_a) [\hbar U (a - b)]^2} \sum_{\bm{n}_a, \bm{m}_b}|\bm{n}_a-\bm{m}_b|^2 |\braket{\bm{n}_a|\hat H_J|\bm{m}_b}|^2
\label{eq:transition_rate}
\end{align}
to first order in $J/U$, akin to the quantum jump consideration above. This implies that the dynamics between the anharmonicity manifolds occurs at rates of the order of $\kappa (J/U)^2$, assuming all the internal dynamics is fast compared to this.
As an example, let us again consider the lowest anharmonicity manifold $a_1 = -N(N - 1) /2$ spanned by the states $\ket{N_\ell}$. Applying $\hat{H}_J$ to any state in $a_1$ always gives us a state in the second-lowest manifold $b_1 = -(N - 1)(N - 2)/2$ spanned by the states $\ket{(N - 1)_\ell, 1_m}$. Thus, the only non-zero rate away from $a_1$ is
\begin{equation}
\Gamma_{a_1b_1} = 4 \kappa \left(\frac{J}{U} \right)^2 \frac{L - 1}{L} \frac{N}{(N - 1)^2}.
\label{eq:rate}
\end{equation}
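As a sanity check, Eq.~\eqref{eq:transition_rate} can be evaluated by brute force for a small chain and compared to Eq.~\eqref{eq:rate}. The sketch below assumes an open-chain hopping Hamiltonian $\hat H_J=-\hbar J\sum_\ell(\hat a^\dagger_{\ell+1}\hat a_\ell+{\rm h.c.})$ with $\hbar=1$; the exact form of $\hat H_J$ is not restated in this excerpt, so this normalization is an assumption (the check is insensitive to the overall sign of $J$):

```python
import numpy as np

def fock_states(L, N):
    """All occupation vectors of N bosons on L sites."""
    if L == 1:
        return [(N,)]
    return [(k,) + rest for k in range(N + 1) for rest in fock_states(L - 1, N - k)]

def anharmonicity(n):
    return -sum(x * (x - 1) for x in n) // 2

def hop_element(n, m, J):
    """<m|H_J|n> for H_J = -J sum_l (a^dag_{l+1} a_l + h.c.), open chain (hbar = 1)."""
    elem = 0.0
    for l in range(len(n) - 1):
        for src, dst in ((l, l + 1), (l + 1, l)):
            if n[src] > 0:
                t = list(n)
                t[src] -= 1
                t[dst] += 1
                if tuple(t) == m:
                    elem += -J * np.sqrt(n[src] * (n[dst] + 1))
    return elem

def gamma(a, b, L, N, J, U, kappa):
    """Transition rate a -> b from Eq. (transition_rate), uniform dephasing."""
    states = fock_states(L, N)
    A = [s for s in states if anharmonicity(s) == a]
    B = [s for s in states if anharmonicity(s) == b]
    tot = sum(sum((x - y) ** 2 for x, y in zip(n, m)) * hop_element(n, m, J) ** 2
              for n in A for m in B)
    return kappa * tot / (len(A) * (U * (a - b)) ** 2)

L, N, J, U, kappa = 4, 3, 0.05, 1.0, 1.0
a1 = -N * (N - 1) // 2
b1 = -(N - 1) * (N - 2) // 2
analytic = 4 * kappa * (J / U) ** 2 * (L - 1) / L * N / (N - 1) ** 2
assert np.isclose(gamma(a1, b1, L, N, J, U, kappa), analytic)
```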
Other transition rates can be derived similarly, see App.~\ref{sec:app1}. Figure~\ref{fig:dephasing}(b) compares the solution of the rate equation~\eqref{eq:ratePa}, with the rates of Eq.~\eqref{eq:transition_rate}, to the full numerical solution of the master equation, and demonstrates very good agreement between the two approaches. Due to the $\kappa (J/U)^2$-dependence, the cooling and heating rates of Eq.~\eqref{eq:transition_rate} are slow with respect to the many-body decoherence and dissipation when calculated using realistic transmon array parameters. This is also evident when comparing the time scales of Fig.~\ref{fig:dephasing}(b) to those of Fig.~\ref{fig:dephasing}(a) or Fig.~\ref{fig:analytics}.
\subsection{Transitions out of the hard-core boson manifold}
Above, we focused on the lowest anharmonicity manifolds corresponding to the higher excited states of transmons. The formalism derived in Eqs.~\eqref{eq:ratePa}~--~\eqref{eq:transition_rate} also applies to the hard-core boson manifold, that is, the highest anharmonicity manifold with $a=0$. The hopping term of the Bose-Hubbard Hamiltonian couples the hard-core boson states to the states residing in the second-highest anharmonicity manifold $b=-1$. For example, the state $\ket{1101\ldots}$ is coupled to the state $\ket{0201\ldots}$. The dephasing-induced transition rate away from the hard-core boson manifold is
\begin{equation}
\Gamma_{\rm h.c.}=8\kappa \left(\frac{J}{U}\right)^2 \frac{N(N-1)}{L},\label{eq:hardcore}
\end{equation}
which, interestingly, increases quadratically with the total excitation number $N$, in contrast to the transition rates at the lower end of the energy spectrum for the boson stacks. The transition rate of Eq.~\eqref{eq:hardcore} is the non-unitary counterpart of the always-on $ZZ$ interaction of hard-core bosons~\cite{Braumuller2022, Olli2022}. Both of them are corrections resulting from the exclusion of the higher excited states beyond the qubit subspace. Notice that the states with no neighboring excitations, such as $\ket{10101\ldots}$, experience no transitions, and the corresponding population does not leak out of the manifold.
\section{Effects of dissipation, dephasing, and disorder in boson stack dynamics}\label{sec:disorder}
In the absence of dissipation and dephasing, a characteristic property of the dynamics of a transmon array is the confinement of the state of the system into the anharmonicity manifold of the initial state~\cite{Olli2022}. As a consequence, an excited state of an individual transmon behaves like a quasiparticle which moves around the array without splitting into less-excited states. Its effective hopping rate is given by $\widetilde{J} = J[N/(N-1)!](J/U)^{N-1}$, where $N$ is the number of bosons comprising the quasiparticle, that is, the local occupation number. When multiple transmons in the array are highly excited, the dynamics can be further limited by effective interactions between the quasiparticles. Noting that dissipation and dephasing both cause transfer between the manifolds, it is worth looking into how they affect the quasiparticle dynamics. We will do this by studying two example cases in both of which the unitary dynamics is well described by the effective Hamiltonians of Ref.~\cite{Olli2022}.
In our first example, we have a chain of $L = 4$ transmons with one of them prepared in the third excited level, so that $N = 3$. With the initial state $\ket{3_2}$, the unitary time-evolution results in oscillations along the array at the rate $\widetilde{J} = 3J (J/U)^2/2$. Due to the effective edge repulsion experienced by the quasiparticle, these oscillations are limited to the states $\ket{3_2}$ and $\ket{3_3}$. Now, with dephasing included, based on Eq.~\eqref{eq.deph.decoher} and the large distance between the states, we expect the oscillations to decay quite rapidly, at the rate $K^{\kappa}_{\bm n, \bm m}=\kappa N^2=9\kappa$ (on the time scales of $\approx\SI{0.4}{\micro\second}$). Figure~\ref{fig:stacks}(a) shows that this is indeed the case. We can also see that the small oscillations between the center sites and the edges allow dephasing to cause mixing within the entire initial anharmonicity manifold. Dissipation, which occurs on the time scales of $(3\gamma)^{-1} \approx \SI{7}{\micro\second}$, has only a rather weak effect on the time scales of the quasiparticle oscillations.
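The numbers in this example can be checked in a few lines. The values of $J$ and $U$ below are arbitrary placeholders; only the $N=3$ reduction of $\widetilde J$ and the $1/(9\kappa)$ decay time, with $\kappa/2\pi=\SI{40}{\kilo\hertz}$ as used elsewhere in the text, are verified:

```python
import numpy as np
from math import factorial

def effective_hopping(J, U, N):
    """Effective stack hopping rate J~ = J * [N/(N-1)!] * (J/U)^(N-1)."""
    return J * N / factorial(N - 1) * (J / U) ** (N - 1)

# For N = 3 the general formula reduces to (3/2) J (J/U)^2, as quoted for |3_2>
J, U = 1.0, 10.0  # arbitrary illustrative units
assert np.isclose(effective_hopping(J, U, 3), 1.5 * J * (J / U) ** 2)

# Dephasing decay of the oscillations: rate kappa*N^2 = 9*kappa, kappa/2pi = 40 kHz
kappa = 2 * np.pi * 40e3          # 1/s
tau_us = 1 / (9 * kappa) * 1e6    # decay time in microseconds
print(round(tau_us, 2))           # ~0.44, i.e. the ~0.4 us quoted in the text
```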
The second example we consider is a chain of $L = 5$ transmons with three quasiparticles, shown in Fig.~\ref{fig:stacks}(b). With the initial state $\ket{2_1, 3_3, 3_5}$, edge-localization and effective repulsive interactions between the quasiparticles cause the unitary dynamics to be mostly limited to the subspace $\mathcal{S} = \{\ket{2_1, 3_3, 3_5}, \ket{3_1, 2_3, 3_5}, \ket{3_1, 3_3, 2_5} \}$. Here, the two-boson quasiparticle moves along the chain via effective exchange interactions at the rate $\Xi=3 J (J/U) /4$. Within the subspace $\mathcal{S}$, the distances between the Fock vectors are much smaller than in the first example. Consequently, the decoherence rate due to dephasing inside $\mathcal{S}$ is considerably slower, $K^\kappa_{\bm n, \bm m}=\kappa$, and the oscillations should therefore persist significantly longer. Note, however, that the distances between the subspace $\mathcal{S}$ and the rest of the initial anharmonicity manifold can be larger, and so mixing within the manifold might be faster.
Based on these examples, we can conclude that the effective many-body dynamics of the higher excited states of transmons can survive the presence of dissipation and dephasing at short time scales. However, even with a relatively large hopping frequency of $J/2\pi = \SI{20}{\mega\hertz}$, any dynamical effects occurring at third order or above in $J/U$ are too slow for the current analog simulators. Since the dissipation times are usually significantly larger than the dephasing times -- and one can, in principle, also remove the effects of dissipation by post-selecting based on the total boson number -- we conclude that the dephasing time sets the upper limit to the time scales available for operations involving the higher excited levels of transmons. Perhaps the easiest engineering approach to try and circumvent this problem is to increase the value of the hopping rate $J$ by enhancing the capacitive coupling between the transmons, either by using larger capacitors or by geometric means~\cite{Dalmonte15}. As the characteristic frequencies of the dynamics of the higher excited states scale as $(J/U)^N$, already a \SIrange{10}{25}{\percent} increase in the value of $J$ can have a drastic effect.
\begin{figure}
\caption{The local occupations $\braket{\hat{n}_\ell}$ in the presence of dissipation and dephasing for (a) the initial state $\ket{3_2}$ in a chain of $L=4$ transmons and (b) the initial state $\ket{2_1, 3_3, 3_5}$ in a chain of $L=5$ transmons.}
\label{fig:stacks}
\end{figure}
\begin{figure}
\caption{The local occupations $\langle \hat{n}_\ell \rangle$ of the non-unitary time-evolution with and without parametric site-to-site disorder in the dissipation and dephasing rates.}
\label{fig:disordered}
\end{figure}
\subsection{Parametric disorder in dissipation and dephasing}
Dissipation and dephasing in transmons arise from various sources, such as microscopic defects and fluctuators, quasiparticles, delicate device properties, and electromagnetic environments~\cite{Krantz19}. The degree to which these sources affect the dissipation and dephasing properties is mostly set by the device fabrication process, and thus cannot be much altered after the transmon array has been assembled. Since it is impossible to fabricate perfectly identical transmons, the transmon arrays exhibit rather strong parametric site-to-site disorder in the values of $\gamma_\ell$ and $\kappa_\ell$, the standard deviations being in the range of \SIrange[]{20}{50}{\percent}, see, e.g., Refs.~\cite{Ma2019, Arute2020, Gong21, Zhao22, Zhu22, Saxberg22}.
Let us briefly analyze the effects of dissipation disorder and dephasing disorder on the system by comparing to the ideal uniform case. Now, when the dissipation rates $\gamma_\ell$ are site-dependent, the anti-Hermitian part $-i\sum_{\ell}\hbar \gamma_\ell \hat a^\dagger_\ell \hat a_\ell$ in Eq.~\eqref{eq:diss_NJ} no longer commutes with the Hermitian part~$\hat H_{\rm BH}$, and so the no-jump evolution is not just the Hermitian dynamics modified by the damping factor, as was the case in Eq.~\eqref{eq:diss_NJ:evol}. Furthermore, the total rate of photon loss due to quantum jumps becomes time-dependent, $\braket{\psi(t)|\sum_\ell \gamma_\ell \hat a^\dagger_\ell \hat a_\ell^{}|\psi(t)}=\sum_\ell \gamma_\ell \braket{\hat n_\ell}(t)$. However, when the effective dynamics in the array is faster than $\gamma^{-1}$, with $\gamma$ being the site-averaged mean dissipation rate, then, by time-averaging, we see that the photon loss rate is still given by $\gamma N$. The decoherence rate between the many-body Fock states $\ket{\bm n}$ and $\ket{\bm m}$ due to the disordered dissipation becomes
\begin{equation}
K^\gamma_{\bm n, \bm m}=\frac{1}{2}\sum_{\ell=1}^L\gamma_\ell (n_\ell+m_\ell)=\frac{1}{2}\bm{\gamma}\cdot(\bm{n}+\bm{m}),
\end{equation}
where we have defined the vector $\bm{\gamma}=(\gamma_1, \gamma_2, \ldots, \gamma_L)$.
Similarly, the decoherence rate due to disordered dephasing becomes
\begin{equation}
K^\kappa_{\bm n, \bm m}=\frac{1}{2}\sum_{\ell=1}^L\kappa_\ell (n_\ell-m_\ell)^2=\frac{1}{2} (\bm{n}-\bm{m}) \cdot \bm{\underline{\kappa}} \cdot (\bm{n}-\bm{m}),
\end{equation}
where we have defined the diagonal matrix $\bm{\underline{\kappa}} = \mathrm{diag}(\kappa_1, \kappa_2, \ldots, \kappa_L)$. The expression \eqref{eq:transition_rate} for the transition rates between the anharmonicity manifolds also generalizes to the case of disordered dephasing, see App.~\ref{sec:app1}. But since it considers transitions between uniformly distributed initial and final states in the manifolds, the averaged transition rate of Eq.~\eqref{eq:transition_rate} should also describe the disordered situation very well, with $\kappa$ now being the site-averaged mean dephasing rate.
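The two disordered decoherence rates above can be sketched as a small helper (the function name is ours); uniform rate vectors reproduce the earlier expressions $K^\gamma_{\bm n,\bm m}=\gamma N$ and $K^\kappa_{\bm n,\bm m}=\kappa|\bm n-\bm m|^2/2$:

```python
import numpy as np

def decoherence_rates(n, m, gamma_vec, kappa_vec):
    """Many-body decoherence rates with site-dependent dissipation and dephasing:
    K^gamma = (1/2) gamma . (n + m),  K^kappa = (1/2) (n - m) . diag(kappa) . (n - m)."""
    n, m = np.asarray(n, float), np.asarray(m, float)
    K_gamma = 0.5 * np.dot(gamma_vec, n + m)
    K_kappa = 0.5 * np.dot((n - m) * np.asarray(kappa_vec), n - m)
    return K_gamma, K_kappa

# Uniform rates recover the earlier results for |3_1> vs |3_2> (N = 3)
n, m = [3, 0, 0], [0, 3, 0]
Kg, Kk = decoherence_rates(n, m, np.full(3, 0.05), np.full(3, 0.25))
assert np.isclose(Kg, 0.05 * 3)       # gamma * N
assert np.isclose(Kk, 0.25 * 3 ** 2)  # kappa * N^2
```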
In Fig.~\ref{fig:disordered}, we have numerically compared the non-unitary time-evolution with and without parametric disorder in dissipation and dephasing. To be as experimentally relevant as possible, the disorder pattern is taken from Ref.~\cite{Zhu22}. We see that disorder indeed produces only minor deviations when many sites are traversed within a coherence/dissipation time. If one instead studies frozen dynamics due to, for example, edge-localization, then disorder can naturally have notable effects.
Finally, we point out that the dephasing model of the master equation~\eqref{eq:MEF} is a simplification for the higher excited states of transmons. Equation~\eqref{eq.deph.decoher} implies that the dephasing rate between two consecutive higher excited states $\ket{n}$ and $\ket{n+1}$ of a single transmon would be independent of $n$: $K^\kappa_{n, n+1}=\kappa/2$. Due to their increased sensitivity to charge fluctuations, the higher excited states of a transmon have coherence properties worse than this~\cite{Blok21, Peterer15}. Formally, this phenomenon can easily be accounted for by replacing the operators $\sqrt{\kappa_\ell}\hat n_\ell$ with some other diagonal operators $\hat d_\ell$ in the dephasing part of the master equation~\eqref{eq:MEF}. For example, one possibility for modeling the enhanced decoherence is $\hat d_\ell=\sqrt{\kappa_\ell}\exp[a_\ell (\hat n_\ell -1)]$ with suitably chosen coefficients $\kappa_\ell$ and $a_\ell$, together with the definition $\hat d_\ell\ket{0}=0$. The results presented in Sec.~\ref{sec:dephasing} can be straightforwardly generalized to the operators $\hat d_\ell$. A detailed study of this topic is left as a subject for future research.
\section{Conclusions} \label{sec:conc}
In this work, we studied non-unitary many-body dynamics in transmon arrays, taking into account dissipation and dephasing. Our focus was specifically on the dynamics of the higher excited states lying beyond the hard-core boson approximation, that is, we treated transmons as proper bosonic quantum multilevel systems in the experimentally relevant parameter regime of state-of-the-art devices. Instead of a local view of a single transmon, we investigated the non-unitary effects of dissipation and dephasing on global many-body states. In particular, we considered the many-body Fock states $\ket{\bm n}=\ket{n_1, n_2, \ldots, n_L}$, which can be grouped into different anharmonicity manifolds based on the values of the anharmonicity $A=-\sum_{\ell}n_\ell(n_\ell-1)/2$ and the total photon number $N=\sum_\ell n_\ell$.
The main findings demonstrated three clearly distinguishable processes: many-body decoherence, many-body dissipation, and transitions between the anharmonicity manifolds. The total decoherence rate between the many-body Fock states $\ket{\bm n}$ and $\ket{\bm m}$ is $K_{\bm n, \bm m} = \gamma N + \kappa |\bm n-\bm m|^2 / 2$. In the worst case, the decoherence rate scales as $N^2$. Furthermore, the dephasing rates $\kappa$ are typically an order of magnitude larger than the dissipation rates $\gamma$. Our numerical simulations with the dissipation rate $\gamma/2\pi\approx \SI{8}{\kilo\hertz}$ ($T_1=\SI{20}{\micro\second}$) and the dephasing rate $\kappa/2\pi\approx \SI{40}{\kilo\hertz}$ ($T^\star_2=\SI{8}{\micro\second}$) show that the dynamics involving the higher excited states occurring on the time scales of $(J/U)^{-2} J^{-1}$ or less should be readily realizable with hopping rates $J/2\pi \lesssim \SI{20}{\mega\hertz}$. For higher-order many-body dynamics on the time scales of $(J/U)^{-3} J^{-1}$, the hopping rate $J$ needs to be modestly increased to values of $\gtrsim 2\pi \times \SI{25}{\mega\hertz}$. The other two processes -- transitions between the photon number manifolds due to dissipation and heating/cooling transitions between the anharmonicity manifolds due to dephasing -- are both slower than the decoherence. To summarize, the dephasing times of the higher excited states put a practical limit on observing coherent many-body higher-excited-state dynamics in transmon arrays.
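The quoted rate/time pairs are consistent under the standard identifications $T_1=1/\gamma$ and, for the qubit states $\ket{0}$ and $\ket{1}$ of the dephasing model used here, $T_2^\star\approx 2/\kappa$ (from $K^\kappa_{0,1}=\kappa/2$); these identifications are our reading, not statements taken from the text:

```python
import numpy as np

gamma = 2 * np.pi * 8e3    # dissipation rate, 1/s
kappa = 2 * np.pi * 40e3   # dephasing rate, 1/s

T1_us = 1 / gamma * 1e6           # lifetime in microseconds
T2_us = 2 / kappa * 1e6           # pure-dephasing coherence time in microseconds
print(round(T1_us, 1), round(T2_us, 1))  # ~19.9 and ~8.0 microseconds
```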
As an outlook, an interesting extension of the present work is to study potential realizations of dynamical quantum phase transitions under non-unitary conditions. Moreover, since the dissipation and dephasing processes are inherently local, our results are readily applicable to general array geometries.
\section*{Acknowledgments}
We thank Tuure Orell for useful discussions. The authors acknowledge financial support from the Academy of Finland (Grants No. 316619, No. 320086, No. 346035), the Kvantum Institute at the University of Oulu, and the Scientific Advisory Board for Defence (MATINE) of the Ministry of Defence of Finland.
\appendix
\section{Dephasing-induced transition rates between anharmonicity manifolds}\label{sec:app1}
We derive here in detail the expression for the transition rates between the anharmonicity manifolds induced by the combination of dephasing and unitary Bose-Hubbard dynamics. We work now in the Heisenberg picture with respect to the Hamiltonian $\hat H_{\rm BH}$ of Eq.~\eqref{eq:FullHam}, and denote operators in this picture as $\check{n}_\ell$ to distinguish them from the Schrödinger picture operators $\hat n_\ell$. The starting point is that the dephasing has rendered the density matrix fully diagonal, $\check{\rho} = \sum_a P_a \hat{\Pi}_a$. Here, the operators $\hat{\Pi}_a$ are projectors onto the states in the anharmonicity manifold $a$ and $P_a$ is a coefficient describing the population in that manifold. Then, the master equation~\eqref{eq:MEDeph} can be written in the form of a rate equation
\begin{equation}
\dot P_a = \sum_{b} (P_b - P_a) \Gamma_{ab},
\end{equation}
where the effective transition rate $\Gamma_{ab}$ from the anharmonicity manifold $a$ to the manifold $b$ is given by
\begin{equation}\label{eq:transition_rate:app}
\Gamma_{ab} = \frac{1}{\Tr \hat{\Pi}_a} \sum_{\ell = 1}^L \kappa_\ell \Tr\left [ \left(\hat{\Pi}_a \check{n}_\ell \hat{\Pi}_b\right)\left(\hat{\Pi}_b \check{n}_\ell \hat{\Pi}_a\right) \right],
\end{equation}
expressed in terms of the projectors $\hat{\Pi}_a$ and $\hat{\Pi}_b$ and allowing for disorder in the dephasing rates $\kappa_\ell$ for generality.
We can expand the Heisenberg-picture number operators in each anharmonicity manifold $a$ by considering the hopping Hamiltonian $\hat{H}_J$ as a small perturbation,
\begin{equation}
\check{n}_\ell = \check{n}_\ell^{(0)} + \check{n}_\ell^{(1)} + \cdots,
\end{equation}
with the zeroth-order and first-order terms in $J/U$ given by
\begin{align}
\check{n}_\ell^{(0)} &= \sum_{aj,bk} e^{i(E_{aj} - E_{bk}) t / \hbar} \braE{a}{j}{0} \hat{n}_\ell \ketE{b}{k}{0} \ketE{a}{j}{0} \braE{b}{k}{0}, \\
\check{n}_\ell^{(1)} &= \sum_{aj, bk} e^{i(E_{aj} - E_{bk}) t / \hbar} \left ( \braE{a}{j}{1} \hat{n}_\ell \ketE{b}{k}{0} \ketE{a}{j}{0} \braE{b}{k}{0} \right. \notag \\
&\hspace{40pt}+ \braE{a}{j}{0} \hat{n}_\ell \ketE{b}{k}{1} \ketE{a}{j}{0} \braE{b}{k}{0} \notag \\
&\hspace{40pt}+ \braE{a}{j}{0} \hat{n}_\ell \ketE{b}{k}{0} \ketE{a}{j}{1} \braE{b}{k}{0} \notag \\
&\hspace{40pt}+ \left. \braE{a}{j}{0} \hat{n}_\ell \ketE{b}{k}{0} \ketE{a}{j}{0} \braE{b}{k}{1} \right) .
\end{align}
Here, $E_{a j}$ and $\ket{E_{a j}}$ are the eigenenergies and the corresponding eigenstates of the Bose-Hubbard Hamiltonian~\eqref{eq:FullHam}, and we have further expanded the states in powers of $J/U$ as $\ket{E_{a j}} = \ketE{a}{j}{0} + \ketE{a}{j}{1} + \ldots$, see also Ref.\ \cite{Olli2022}. Now, the state $\ketE{a}{j}{0}$ belongs to the anharmonicity manifold $a$. Since different manifolds are orthogonal to each other, and operating with $\hat{n}_\ell$ keeps us within a manifold, we always have $\braE{a}{j}{0} \hat{n}_\ell \ketE{b}{k}{0} = 0$ for $b \neq a$. Thus, multiplying the above equations with projectors from both sides, we obtain
\begin{align}
\hat{\Pi}_a \check{n}_\ell^{(0)} \hat{\Pi}_b &= 0, \\
\hat{\Pi}_a \check{n}_\ell^{(1)} \hat{\Pi}_b &= \sum_{j, k} e^{i(E_{aj} - E_{bk}) t / \hbar} \ketE{a}{j}{0} \braE{b}{k}{0} \\
&\hspace{30pt}\times \left(\braE{a}{j}{1} \hat{n}_\ell \ketE{b}{k}{0} + \braE{a}{j}{0} \hat{n}_\ell \ketE{b}{k}{1} \right). \notag
\end{align}
This means that the transition rates $\Gamma_{a b}$ of Eq.~\eqref{eq:transition_rate:app} are second order in $J/U$, for
\begin{align}
&\left(\hat{\Pi}_a \check{n}_\ell \hat{\Pi}_b \right) \left(\hat{\Pi}_b \check{n}_\ell \hat{\Pi}_a \right) \approx \left(\hat{\Pi}_a \check{n}_\ell^{(1)} \hat{\Pi}_b \right) \left(\hat{\Pi}_b \check{n}_\ell^{(1)} \hat{\Pi}_a \right) \\
&\qquad= \sum_{jkm} e^{i(E_{aj} - E_{am}) t / \hbar} \ketE{a}{j}{0} \braE{a}{m}{0} \notag \\
&\hspace{55pt}\times \left(\braE{a}{j}{1} \hat{n}_\ell \ketE{b}{k}{0} + \braE{a}{j}{0} \hat{n}_\ell \ketE{b}{k}{1} \right) \notag \\
&\hspace{55pt}\times \left(\braE{b}{k}{1} \hat{n}_\ell \ketE{a}{m}{0} + \braE{b}{k}{0} \hat{n}_\ell \ketE{a}{m}{1} \right) \notag,
\end{align}
where we used the orthonormality of the zeroth-order eigenstates to eliminate one of the sums. Taking the trace removes the time-dependent exponential factor, yielding
\begin{align}
&\Tr\left[ \left(\hat{\Pi}_a \check{n}_\ell \hat{\Pi}_b \right) \left(\hat{\Pi}_b \check{n}_\ell \hat{\Pi}_a \right) \right] \\
&\qquad\approx \sum_{jk} \left(\braE{a}{j}{1} \hat{n}_\ell \ketE{b}{k}{0} + \braE{a}{j}{0} \hat{n}_\ell \ketE{b}{k}{1} \right) \notag \\
&\hspace{55pt} \times \left(\braE{b}{k}{1} \hat{n}_\ell \ketE{a}{j}{0} + \braE{b}{k}{0} \hat{n}_\ell \ketE{a}{j}{1} \right) \notag.
\end{align}
Using again the orthogonality of the different anharmonicity manifolds, we do not need to know the components of $\ketE{a}{j}{1}$ lying in $a$, but only their projections
\begin{equation}
\hat{\Pi}_b \ketE{a}{j}{1} = \frac{\hat{\Pi}_b \hat{H}_J}{\hbar U (a - b)} \ketE{a}{j}{0}
\end{equation}
to the manifold $b$. Substituting these into the equation above, and noting that $\sum_k \ketE{b}{k}{0}\braE{b}{k}{0} = \hat{\Pi}_b$ and $\sum_j \braket{E_{a j}^{(0)} | \hat{O} | E_{a j}^{(0)}} = \Tr[\hat{\Pi}_a \hat{O} \hat{\Pi}_a]$, we can write the leading-order approximation for the effective transition rates as
\begin{align}
\Gamma_{ab} &= \sum_{\ell=1}^L \kappa_\ell \frac{\Tr \left(\hat{\Pi}_a [\hat{n}_\ell, \hat{H}_J] \hat{\Pi}_b [\hat{H}_J, \hat{n}_\ell] \hat{\Pi}_a\right)}{(\Tr\hat{\Pi}_a) [\hbar U (a - b)]^2} \label{eq:app:transrate}\\
&= \frac{1}{\Tr\hat{\Pi}_a} \sum_{\bm{n}_a, \bm{m}_b}\frac{(\bm{n}_a-\bm{m}_b)\cdot \bm{\underline{\kappa}}\cdot(\bm{n}_a-\bm{m}_b)}{[\hbar U (a-b)]^2} |\braket{\bm{n}_a|\hat H_J|\bm{m}_b}|^2. \notag
\end{align}
In the main text, we give the transition rate from the lowest to the second-lowest anharmonicity manifold in the case of uniform dephasing, see Eq.~\eqref{eq:rate}. Equation~\eqref{eq:app:transrate} is simple enough to allow us to compute explicitly transition rates also between other anharmonicity manifold pairs. For example, from the second-lowest anharmonicity manifold spanned by the states $\ket{(N - 1)_\ell, 1_m}$, we can get to three different manifolds using the hopping Hamiltonian $\hat H_{\rm J}$: (i) to the lowest anharmonicity manifold $a_1 = -N(N - 1)/2$ spanned by the states $\ket{N_\ell}$; (ii) to the manifold $b_2 = -(N - 2)(N - 3)/2 - 1$, spanned by the states $\ket{(N - 2)_\ell, 2_m}$; and (iii) to the manifold $b_3 = -(N - 2)(N - 3)/2$ spanned by the states $\ket{(N - 2)_\ell, 1_m, 1_n}$. The corresponding rates are
\begin{align}
\Gamma_{b_1a_1} &= 4 \kappa \left(\frac{J}{U} \right)^2 \frac{N}{L(N - 1)^2}, \label{eq:rateOne}\\
\Gamma_{b_1b_2} &= 8 \kappa \left(\frac{J}{U} \right)^2 \frac{N - 1}{L(N - 3)^2}, \label{eq:rateTwo}\\
\Gamma_{b_1b_3} &= 4 \kappa \left(\frac{J}{U} \right)^2 \frac{L - 2}{L} \frac{N - 1}{(N - 2)^2}.\label{eq:rateThree}
\end{align}
The expression~\eqref{eq:app:transrate} is also relatively simple to be used numerically to compute transition rates between arbitrary anharmonicity manifold pairs.
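Indeed, a brute-force enumeration of Eq.~\eqref{eq:app:transrate} reproduces Eqs.~\eqref{eq:rateOne}--\eqref{eq:rateThree}. The sketch below assumes the open-chain hopping Hamiltonian $\hat H_J=-\hbar J\sum_\ell(\hat a^\dagger_{\ell+1}\hat a_\ell+{\rm h.c.})$ with $\hbar=1$ and uniform dephasing; this normalization is our assumption, since $\hat H_J$ is not restated in this excerpt:

```python
import numpy as np

def fock_states(L, N):
    """All occupation vectors of N bosons on L sites."""
    if L == 1:
        return [(N,)]
    return [(k,) + rest for k in range(N + 1) for rest in fock_states(L - 1, N - k)]

def anharmonicity(n):
    return -sum(x * (x - 1) for x in n) // 2

def hop_element(n, m, J):
    """<m|H_J|n> for H_J = -J sum_l (a^dag_{l+1} a_l + h.c.), open chain (hbar = 1)."""
    elem = 0.0
    for l in range(len(n) - 1):
        for src, dst in ((l, l + 1), (l + 1, l)):
            if n[src] > 0:
                t = list(n)
                t[src] -= 1
                t[dst] += 1
                if tuple(t) == m:
                    elem += -J * np.sqrt(n[src] * (n[dst] + 1))
    return elem

def gamma(a, b, L, N, J, U, kappa):
    """Transition rate a -> b, Eq. (app:transrate) with uniform dephasing."""
    states = fock_states(L, N)
    A = [s for s in states if anharmonicity(s) == a]
    B = [s for s in states if anharmonicity(s) == b]
    tot = sum(sum((x - y) ** 2 for x, y in zip(n, m)) * hop_element(n, m, J) ** 2
              for n in A for m in B)
    return kappa * tot / (len(A) * (U * (a - b)) ** 2)

L, N, J, U, kappa = 4, 5, 0.05, 1.0, 1.0
a1 = -N * (N - 1) // 2            # |N_l>
b1 = -(N - 1) * (N - 2) // 2      # |(N-1)_l, 1_m>
b2 = -(N - 2) * (N - 3) // 2 - 1  # |(N-2)_l, 2_m>
b3 = -(N - 2) * (N - 3) // 2      # |(N-2)_l, 1_m, 1_n>
r = kappa * (J / U) ** 2
assert np.isclose(gamma(b1, a1, L, N, J, U, kappa), 4 * r * N / (L * (N - 1) ** 2))
assert np.isclose(gamma(b1, b2, L, N, J, U, kappa), 8 * r * (N - 1) / (L * (N - 3) ** 2))
assert np.isclose(gamma(b1, b3, L, N, J, U, kappa), 4 * r * (L - 2) / L * (N - 1) / (N - 2) ** 2)
```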
\end{document}
\begin{document}
\begin{abstract}
A criterion for the existence of a birational embedding of an algebraic curve into a projective plane with two Galois points is presented.
As an application, several new examples of plane curves with two inner Galois points are described.
\end{abstract}
\maketitle
\section{Introduction}
Let $C$ be a (reduced, irreducible) smooth projective curve over an algebraically closed field $k$ of characteristic $p \ge 0$ with $k(C)$ as its function field.
We consider a rational map $\varphi$ from $C$ to $\Bbb P^2$, which is birational onto its image.
For a point $P \in \Bbb P^2$, if the function field extension $k(\varphi(C))/\pi_P^*k(\Bbb P^1)$ induced by the projection $\pi_P$ is Galois, then $P$ is called a Galois point for $\varphi(C)$.
This notion was introduced by Yoshihara (\cite{miura-yoshihara, yoshihara1}).
Furthermore, if a Galois point $P$ is a smooth point of $\varphi(C)$ (resp. is contained in $\Bbb P^2 \setminus \varphi(C)$), then $P$ is said to be inner (resp. outer).
The associated Galois group at $P$ is denoted by $G_P$.
If $\varphi: C \rightarrow \varphi(C)$ is an isomorphism, then the number of Galois points is completely determined (\cite{fukasawa2, yoshihara1}).
However, determining the number in general is difficult.
For example, only seven types of plane curves with two inner Galois points are known (see the Table in \cite{yoshihara-fukasawa}).
It is important to find a condition for the existence of two Galois points.
The following proposition is presented after discussions with Takahashi \cite{takahashi}, Terasoma \cite{terasoma} and Yoshihara \cite{yoshihara3}.
\begin{proposition} \label{ftt}
Let $C$ be a smooth projective curve.
Assume that there exist two finite subgroups, $G_1$ and $G_2$, of the full automorphism group ${\rm Aut}(C)$ such that $G_1 \cap G_2=\{1\}$ and $C/{G_i} \cong \Bbb P^1$ for $i=1, 2$.
Let $f$ and $g$ be generators of function fields of $C/{G_1}$ and $C/{G_2}$, respectively.
Then, the rational map
$$ \varphi: C \dashrightarrow \Bbb P^2; \ (f:g:1)$$
is birational onto its image, and two points $P_1=(0:1:0)$ and $P_2=(1:0:0)$ are Galois points for $\varphi(C)$.
\end{proposition}
For both points $P_1$ and $P_2$ to be inner, or outer, we need additional conditions.
In this article, we present the following criterion.
\begin{theorem} \label{main}
Let $C$ be a smooth projective curve and let $G_1$ and $G_2$ be different finite subgroups of ${\rm Aut}(C)$.
Then, there exist a morphism $\varphi: C \rightarrow \Bbb P^2$ and different inner Galois points $\varphi(P_1)$ and $\varphi(P_2) \in \varphi(C)$ such that $\varphi$ is birational onto its image and $G_{\varphi(P_i)}=G_i$ for $i=1, 2$, if and only if the following conditions are satisfied.
\begin{itemize}
\item[(a)] $C/{G_1} \cong \Bbb P^1$ and $C/{G_2} \cong \Bbb P^1$.
\item[(b)] $G_1 \cap G_2=\{1\}$.
\item[(c)] There exist two different points $P_1$ and $P_2 \in C$ such that
$$P_1+\sum_{\sigma \in G_1} \sigma (P_2)=P_2+\sum_{\tau \in G_2} \tau (P_1) $$
as divisors.
\end{itemize}
\end{theorem}
\begin{remark}
\begin{itemize}
\item[(1)] For outer Galois points, we only have to replace (c) by
\begin{itemize}
\item[(c')] There exists a point $Q \in C$ such that $\sum_{\sigma \in G_1} \sigma (Q)=\sum_{\tau \in G_2} \tau(Q)$ as divisors.
\end{itemize}
\item[(2)] Yoshihara \cite{yoshihara3} gave a different criterion from ours, for two outer Galois points.
Conditions (a) and (b) appear, but condition (c') does not appear in the statement of Yoshihara's criterion.
\end{itemize}
\end{remark}
We present the following application for rational or elliptic curves.
\begin{theorem} \label{main_rational}
Let $p \ne 2$.
Then, there exist the following morphisms $\varphi: \Bbb P^1 \rightarrow \Bbb P^2$, which are birational onto their images.
\begin{itemize}
\item[(1)] $\deg \varphi(C)=5$ and there exist two Galois points $\varphi(P_1)$ and $\varphi(P_2) \in \varphi(C)$ such that $G_{\varphi(P_i)} \cong \Bbb Z/4\Bbb Z$ for $i=1, 2$, when $p \ne 3$.
\item[(2)] $\deg \varphi(C)=5$ and there exist two Galois points $\varphi(P_1)$ and $\varphi(P_2) \in \varphi(C)$ such that $G_{\varphi(P_i)} \cong (\Bbb Z/2\Bbb Z)^{\oplus 2}$ for $i=1, 2$.
\item[(3)] $\deg \varphi(C)=5$ and there exist two Galois points $\varphi(P_1)$ and $\varphi(P_2) \in \varphi(C)$ such that $G_{\varphi(P_1)} \cong \Bbb Z/4\Bbb Z$ and $G_{\varphi(P_2)} \cong (\Bbb Z/2\Bbb Z)^{\oplus 2}$.
\item[(4)] $\deg \varphi(C)=6$ and there exist two Galois points $\varphi(P_1)$ and $\varphi(P_2) \in \varphi(C)$ such that $G_{\varphi(P_i)} \cong \Bbb Z/5\Bbb Z$ for $i=1, 2$.
\end{itemize}
\end{theorem}
\begin{theorem} \label{main_elliptic}
Let $p \ne 3$ and let $E \subset \Bbb P^2$ be the curve defined by $X^3+Y^3+Z^3=0$.
Then, there exists a morphism $\varphi: E \rightarrow \Bbb P^2$ such that $\varphi$ is birational onto its image, $\deg \varphi(E)=4$, and there exist two inner Galois points for $\varphi(E)$.
\end{theorem}
\section{Proof of the main theorem}
\begin{proof}[Proof of Proposition \ref{ftt}]
Let $P_1=(0:1:0)$ and $P_2=(1:0:0)$.
Then, the projection $\pi_{P_1}$ (resp. $\pi_{P_2}$) is given by $(x:y:1) \mapsto (x:1)$ (resp. $(x:y:1) \mapsto (y:1)$), and hence, $\pi_{P_1} \circ \varphi=(f:1)$ (resp. $\pi_{P_2} \circ \varphi=(g:1)$).
We only have to show that $k(C)=k(f, g)$.
Since $k(C)/k(f)$ is Galois, there exists a subgroup $H_1$ of $G_1$ such that $H_1={\rm Gal}(k(C)/k(f, g))$.
Similarly, there exists a subgroup $H_2$ of $G_2$ such that $H_2={\rm Gal}(k(C)/k(f, g))$.
Since $G_1 \cap G_2=\{1\}$, $H_1=H_2=\{1\}$.
Therefore, $k(C)=k(f, g)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main}]
We consider the only-if part.
Let $\varphi(P_1)$ and $\varphi(P_2) \in \varphi(C)$ be inner Galois points such that $G_{\varphi(P_i)}=G_i$ for $i=1, 2$.
Assertion (a) is obvious.
The proof of assertion (b) is similar to \cite[Lemma 7]{fukasawa1}.
Let $D$ be the divisor induced by the intersection of $C$ and the line $\overline{P_1P_2}$, where $\overline{P_1P_2}$ is the line passing through $P_1$ and $P_2$.
We can consider the line $\overline{P_1P_2}$ as a point in the images of $\pi_{P_1} \circ \varphi$ and $\pi_{P_2} \circ \varphi$.
Since $\pi_{P_1} \circ \varphi$ (resp. $\pi_{P_2} \circ \varphi$) is a Galois covering and $P_2 \in C \cap \overline{P_1P_2}$ (resp. $P_1 \in C \cap \overline{P_1P_2}$),
$$(\pi_{P_1} \circ \varphi)^*(\overline{P_1P_2})=\sum_{\sigma \in G_1}\sigma(P_2) \ \ \left(\mbox{resp. } (\pi_{P_2} \circ \varphi)^*(\overline{P_1P_2})=\sum_{\tau \in G_2}\tau(P_1)\right)$$ as divisors (see, for example, \cite[III.7.1, III.7.2, III.8.2]{stichtenoth}).
On the other hand, it follows that $(\pi_{P_1} \circ \varphi)^*(\overline{P_1P_2})=D-P_1$ (resp. $(\pi_{P_2} \circ \varphi)^*(\overline{P_1P_2})=D-P_2$).
Therefore,
$$ D=P_1+\sum_{\sigma \in G_1}\sigma(P_2)=P_2+\sum_{\tau \in G_2}\tau(P_1), $$
which is nothing but assertion (c).
We now consider the if part.
Let $D$ be the divisor
$$ D=P_1+\sum_{\sigma \in G_1}\sigma(P_2)=P_2+\sum_{\tau \in G_2}\tau(P_1), $$
given by condition (c).
Let $f$ and $g \in k(C)$ be generators of $k(C/{G_1})$ and $k(C/{G_2})$ such that $(f)_{\infty}=D-P_1$ (i.e. $f \in \mathcal{L}(D-{P_1})$) and $(g)_{\infty}=D-P_2$, by (a), where $(f)_{\infty}$ is the pole divisor of $f$.
Then, $f, g \in \mathcal{L}(D)$.
Let $\varphi: C \rightarrow \Bbb P^2$ be given by $(f:g:1)$.
As in the proof of Proposition \ref{ftt}, condition (b) implies that $\varphi$ is birational onto its image.
The sublinear system of $|D|$ corresponding to $\langle f, g, 1\rangle$ is base-point-free, since ${\rm supp}(D) \cap {\rm supp}((f)+D)=\{P_1\}$ and ${\rm supp}(D) \cap {\rm supp}((g)+D)=\{P_2\}$.
Therefore, $\deg \varphi(C)=\deg D$, and the morphism $(f:1)$ (resp. $(g:1)$) coincides with the projection from the smooth point $\varphi(P_1) \in \varphi(C)$ (resp. $\varphi(P_2) \in \varphi(C)$).
\end{proof}
\section{Applications}
First, we consider rational curves.
\begin{proof}[Proof of Theorem \ref{main_rational}]
(1).
Let $\sigma, \tau \in {\rm Aut}(\Bbb P^1)$ be represented by
$$ \left(\begin{array}{cc}
1 & -1 \\
1 & 1
\end{array} \right), \
\left(\begin{array}{cc}
0 & 1 \\
-\frac{1}{2} & 1
\end{array} \right) $$
respectively, assuming $p \ne 2$.
Let $G_1=\langle \sigma \rangle$, $G_2=\langle \tau \rangle$, $P_1=(2:1)$ and $P_2=(-1:1)$.
If $p \ne 3$, then $P_1 \ne P_2$ and condition (a) in Theorem \ref{main} is obviously satisfied.
Note that
$$\{\sigma^i(P_2)| i=1, 2, 3\}=\{(1:0), (1:1), (0:1)\}=\{\tau^i(P_1)|i=1, 2, 3\}. $$
Condition (c) in Theorem \ref{main} is satisfied.
Furthermore, $\sigma^4=1$ and $\tau^4=1$.
We prove condition (b) in Theorem \ref{main}.
Assume, by way of contradiction, that $\sigma^i=\tau^j$ for some $i, j$ with $1 \le i, j \le 3$.
If $i=1$ or $3$, then there exists an integer $l$ such that $(\sigma^i)^l=\sigma$.
Then, $\tau^{jl}(0:1)=\sigma (0:1)=(-1:1)$.
However, there is no integer $m$ such that $\tau^m(0:1)=(-1:1)$.
This is a contradiction.
Therefore, $i=2$ and $j=2$.
However, $\sigma^2(1:0)=(0:1) \ne (1:1)=\tau^2(1:0)$, a contradiction. Therefore, condition (b) is satisfied.
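These matrix identities can be checked mechanically. The following sketch (Python, exact arithmetic over $\mathbb Q$, so it is only a characteristic-$0$ sanity check and not part of the proof) verifies the orbit equality $\{\sigma^i(P_2)\}=\{\tau^i(P_1)\}$ and that $\sigma^4$ and $\tau^4$ are scalar matrices, hence trivial in ${\rm PGL}_2$:

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(A, n):
    R = [[F(1), F(0)], [F(0), F(1)]]
    for _ in range(n):
        R = matmul(R, A)
    return R

def act(A, P):
    # action on a point P = (x : y) of P^1, returned in normalized form
    x = A[0][0] * P[0] + A[0][1] * P[1]
    y = A[1][0] * P[0] + A[1][1] * P[1]
    return (x / y, F(1)) if y != 0 else (F(1), F(0))

def is_scalar(A):
    return A[0][1] == A[1][0] == 0 and A[0][0] == A[1][1]

sigma = [[F(1), F(-1)], [F(1), F(1)]]      # the matrices from case (1)
tau = [[F(0), F(1)], [F(-1, 2), F(1)]]
P1, P2 = (F(2), F(1)), (F(-1), F(1))

orbit_sigma = {act(matpow(sigma, i), P2) for i in (1, 2, 3)}
orbit_tau = {act(matpow(tau, i), P1) for i in (1, 2, 3)}
print(orbit_sigma == orbit_tau)                                # True
print(is_scalar(matpow(sigma, 4)), is_scalar(matpow(tau, 4)))  # True True
```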
(2).
Let $\alpha \ne 0, 1, -1$ and let $\sigma_{\alpha}, \tau_{\alpha} \in {\rm Aut}(\Bbb P^1)$ be represented by
$$ \left(\begin{array}{cc}
0 & 1 \\
\alpha & 0
\end{array} \right), \
\left(\begin{array}{cc}
1 & -\frac{1}{\alpha} \\
1 & -1
\end{array} \right), $$
respectively.
Let $G_\alpha=\langle \sigma_{\alpha}, \tau_{\alpha}\rangle$.
Since $\sigma_{\alpha}\tau_{\alpha}=\tau_{\alpha}\sigma_{\alpha}$, $G_{\alpha} \cong (\Bbb Z/2\Bbb Z)^{\oplus 2}$.
We take $\alpha' \ne 0, 1, -1, \alpha$.
Let $G_1=G_{\alpha}$, $G_2=G_{\alpha'}$, $P_1=(1:\alpha')$ and $P_2=(1:\alpha)$.
Note that
$$ \{\sigma_{\alpha}(P_2), \tau_{\alpha}(P_2), \sigma_{\alpha}\tau_{\alpha}(P_2)\}=\{(1:1), (0:1), (1:0)\}=\{\sigma_{\alpha'}(P_1), \tau_{\alpha'}(P_1), \sigma_{\alpha'}\tau_{\alpha'}(P_1)\}. $$
Condition (c) in Theorem \ref{main} is satisfied.
Conditions (a) and (b) are obviously satisfied in this case.
(3).
We take $\sigma$ as in (1) and $\sigma_{\alpha}, \tau_{\alpha}$ as in (2).
Let $P_1=(1:\alpha)$ and $P_2=(1: -1)$.
Note that
$$ \{\sigma^i(P_2)|i=1, 2, 3\}=\{(1:0), (1:1), (0:1)\}=\{\sigma_{\alpha}(P_1), \tau_{\alpha}(P_1), \sigma_{\alpha}\tau_{\alpha}(P_1)\}. $$
Furthermore, $\sigma^2 \not\in \langle \sigma_{\alpha}, \tau_{\alpha}\rangle$.
Similar to the proof of (2), the assertion follows by Theorem \ref{main}.
(4).
Let $\alpha^2+\alpha-1=0$ and let $\sigma, \tau\in {\rm Aut}(\Bbb P^1)$ be represented by
$$ \left(\begin{array}{cc}
1 & -1 \\
1 & -\alpha
\end{array} \right), \
\left(\begin{array}{cc}
0 & 1 \\
\alpha-1 & 1
\end{array} \right) $$
respectively.
Let $G_1=\langle \sigma \rangle$, $G_2=\langle \tau \rangle$, $P_1=(\alpha:2\alpha-1)$ and $P_2=(1:1+\alpha)$.
If $p \ne 2$, then $P_1 \ne P_2$.
Note that
$$\{\sigma^i(P_2)| 1 \le i \le 4 \}=\{(1:0), (0:1), (1:1), (1:\alpha)\}=\{\tau^i(P_1)|1 \le i \le 4\}. $$
Condition (c) in Theorem \ref{main} is satisfied.
Furthermore, $\sigma^5=1$ and $\tau^5=1$.
Conditions (a) and (b) are obviously satisfied in this case.
\end{proof}
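The order-$5$ claims in case (4) of the proof above can likewise be verified by exact computation in $\mathbb Q(\alpha)$ with $\alpha^2=1-\alpha$. In this Python sketch (a characteristic-$0$ sanity check, not part of the proof), elements $a+b\alpha$ are stored as pairs $(a,b)$ of rationals; one finds $\sigma^5=(5\alpha-3)\cdot I$ and $\tau^5=(3-5\alpha)\cdot I$, both scalar, hence trivial in ${\rm PGL}_2$:

```python
from fractions import Fraction as F

# Elements of Q(alpha), alpha^2 = 1 - alpha, stored as (a, b) <-> a + b*alpha
def mul(u, v):
    a, b = u
    c, d = v
    # (a + b*alpha)(c + d*alpha) = ac + bd + (ad + bc - bd)*alpha
    return (a * c + b * d, a * d + b * c - b * d)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

neg = lambda u: (-u[0], -u[1])
ONE, ZERO, ALPHA = (F(1), F(0)), (F(0), F(0)), (F(0), F(1))

def matmul(A, B):
    return [[add(mul(A[i][0], B[0][j]), mul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

def matpow(A, n):
    R = [[ONE, ZERO], [ZERO, ONE]]
    for _ in range(n):
        R = matmul(R, A)
    return R

def is_scalar(A):
    return A[0][1] == A[1][0] == ZERO and A[0][0] == A[1][1]

sigma = [[ONE, neg(ONE)], [ONE, neg(ALPHA)]]        # matrices from case (4)
tau = [[ZERO, ONE], [add(ALPHA, neg(ONE)), ONE]]

print(is_scalar(matpow(sigma, 5)), is_scalar(matpow(tau, 5)))  # True True
```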
\begin{remark}
For $\sigma \in {\rm Aut}(\Bbb P^1)$ as in the proof of Theorem \ref{main_rational}(1), we can show that the rational function $\sum_{i=0}^3(\sigma^i)^*(t) \in k(\Bbb P^1)=k(t)$ is a generator of $k(\Bbb P^1/\langle \sigma \rangle)$.
Then, the birational embedding $\varphi: \Bbb P^1 \rightarrow \Bbb P^2$ is represented by
$$ (2(t^4-6t^2+1)(2t-1):(4t^4-12t^2+8t-1)(t+1):2t(t+1)(t-1)(2t-1)). $$
Similarly, for Theorem \ref{main_rational}(2), we have a birational embedding
$$ ((t^2-\alpha)^2(t-\alpha'):(t^2-\alpha')^2(t-\alpha): t(t-1)(t-\alpha)(t-\alpha')).$$
\end{remark}
\begin{remark}
According to \cite[Theorem 1]{miura}, if $p=0$, $C=\Bbb P^1$ and $\deg \varphi(C)=6$, then the number of inner Galois points is bounded by two.
Our curve in Theorem \ref{main_rational}(4) attains this bound.
\end{remark}
Next, we consider elliptic curves.
Let $p \ne 3$.
Note that if an elliptic curve $E$ admits a Galois covering over $\Bbb P^1$ of degree three, then $E$ has an embedding $\varphi: E \rightarrow \Bbb P^2$ such that the image is the Fermat cubic.
To consider the case where $\deg{\varphi(E)}=4$, we assume that $E \subset \Bbb P^2$ is the curve defined by $X^3+Y^3+Z^3=0$.
\begin{proof}[Proof of Theorem \ref{main_elliptic}]
Let $\sigma$ be the automorphism of $E$ given by $(X:Y:Z) \mapsto (\omega X:Y:Z)$, where $\omega^2+\omega+1=0$.
Then, $\sigma$ is of order three and $E/\langle \sigma \rangle \cong \Bbb P^1$.
We take a point $Q \in E \setminus \{YZ=0\}$ such that $\sigma(Q) \ne Q$ and $\sigma^2(Q) \ne Q$.
Note that there exists an involution $\eta$ such that $\eta(Q)=\sigma(Q)$, by the linear system $|Q+\sigma(Q)|$.
We take $\tau:=\eta \sigma^2 \eta$.
Then, $\tau(Q)=\sigma(Q)$, $\tau$ is of order three and $E/\langle \tau \rangle \cong \Bbb P^1$.
Let $G_1=\langle \sigma \rangle$ and $G_2=\langle \tau \rangle$.
Then, condition (a) in Theorem \ref{main} is satisfied for $G_1$ and $G_2$.
Furthermore, we take $P_1=\tau^2(Q)$ and $P_2=\sigma^2(Q)$.
To prove (b) and (c) in Theorem \ref{main}, we only have to show that $\sigma^2(Q) \ne \tau^2(Q)$.
Let $R_1=(1:0:0)$, $R_2=(0:1:0)$ and $R_3=(0:0:1)$.
Assume by contradiction that $\tau^2(Q)=\sigma^2(Q)$.
Then, $\eta(\sigma^2(Q))=\sigma^2(Q)$.
For the divisor $D=Q+\sigma(Q)+\sigma^2(Q)$, $\eta^*(D)=D$.
Since $D$ is cut out on $E$ by the line $\overline{R_1Q}$ passing through $R_1$ and $Q$, and the linear system $|D|$ is complete, $\eta$ is the restriction of some linear transformation $\hat{\eta}: \Bbb P^2 \to \Bbb P^2$.
It is well known that the set of outer Galois points for $E$ is equal to $\Delta=\{R_1, R_2, R_3\}$ (see, for example, \cite{yoshihara2}).
Since $\hat{\eta}(\Delta)=\Delta$ and $\hat{\eta}(\overline{R_1Q})=\overline{R_1Q} \not\ni R_2, R_3$, it can be noted that $\hat{\eta}(R_1)=R_1$ and $\hat{\eta}(\overline{R_2R_3})=\overline{R_2R_3}$.
Then, $\hat{\eta}$ fixes points $R_1$, $Q$ and the point given by $\overline{R_1Q} \cap \overline{R_2R_3}$.
This implies that $\hat{\eta}$ is the identity on the line $\overline{R_1Q}$.
This is a contradiction.
By Theorem \ref{main}, the assertion follows.
\end{proof}
\begin{remark}
According to \cite[Theorem 1]{miura}, if $p=0$ and $\deg \varphi(C)=4$, then the number of inner Galois points is bounded by four.
When $C$ is an elliptic curve, it is not known whether or not the bound is sharp.
\end{remark}
\begin{center} {\bf Acknowledgements} \end{center}
The author is grateful to Professor Takeshi Takahashi, Professor Tomohide Terasoma and Professor Hisao Yoshihara for helpful conversations.
\end{document}
\begin{document}
\title[On equidistribution of Gauss sums of cuspidal representations of
$GL_d(\mathbb F_q)$]{On equidistribution of Gauss sums of \\cuspidal representations of
$GL_d(\mathbb F_q)$}
\author{Sameer Kulkarni}
\address{Tata Institute of Fundamental Research, Homi Bhabha Road,
Bombay - 400 005, INDIA.}
\email{[email protected]}
\author{C.~S.~Rajan}
\address{Tata Institute of Fundamental Research, Homi Bhabha Road,
Bombay - 400 005, INDIA.} \email{[email protected]}
\subjclass{Primary 11T23; Secondary 20G05}
\begin{abstract}
We investigate the distribution of the angles of Gauss sums attached to cuspidal representations of general linear groups over finite fields. In particular, we show that they are equidistributed with respect to the Haar measure. However, for representations of $PGL_2(\mathbb F_q)$, they are clustered around $1$ and $-1$ for odd $p$, and around $1$ for $p=2$.
\end{abstract}
\maketitle
Let $p$ be a prime number, $q$ a power of $p$ and $\mathbb F_q$ a finite field with $q$ elements. Fix a non-trivial additive character $\psi_p: \mathbb F_p\to \mathbb C^*$, and let $\psi_q$ be the character on $\mathbb F_q$ obtained by composing $\psi_p$ with the trace map $\mathbb F_q\to \mathbb F_p$. Given a non-trivial multiplicative character $\chi: \mathbb F_q^*\to \mathbb C^*$, the Gauss sum,
\[g(\chi, \psi_q) =\sum_{a\in \mathbb F_q^*} \chi(a)\psi_q(a),\]
has absolute value $\sqrt{q}$. The following equidistribution theorem for the angles of Gauss sums (\cite[1.3.3]{Katz_sommes_exponentielles}) is a consequence of Deligne's bound for Kloosterman sums obtained from his work on Weil conjectures:
\begin{theorem}\label{theorem:Deligne}
As $q$ tends to infinity, the set of $(q-2)$ points $\{g(\chi, \psi_q)/\sqrt{q}: \chi ~\mbox{non-trivial}\}$ is equidistributed with respect to the normalized Haar measure $\frac{1}{2 \pi}dx$ on the unit circle $S^1$, i.e., for any continuous function $f:S^1 \to \mathbb C$,
\[ \frac{1}{2\pi} \int_{S^1} f(x)dx=\lim_{q\to \infty} \frac{1}{q-2}\sum_{\chi\neq 1} f\left( g(\chi, \psi_q)/\sqrt{q}\right).\]
\end{theorem}
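That the points $g(\chi,\psi_q)/\sqrt q$ lie on $S^1$ at all rests on the classical fact $|g(\chi,\psi_q)|=\sqrt q$ for non-trivial $\chi$. This can be confirmed numerically; the Python sketch below is illustrative only, and the choice $q=13$ with primitive root $2$ is specific to this example:

```python
import cmath

p = 13                                              # a small prime, q = p
psi = lambda a: cmath.exp(2j * cmath.pi * a / p)    # additive character psi_q

# F_p^* is cyclic, generated by 2 when p = 13; a discrete-log table lets us
# write every multiplicative character as chi_k(2^m) = exp(2*pi*i*k*m/(p-1))
r = 2
log = {pow(r, m, p): m for m in range(p - 1)}

def gauss_sum(k):
    chi = lambda a: cmath.exp(2j * cmath.pi * k * log[a] / (p - 1))
    return sum(chi(a) * psi(a) for a in range(1, p))

assert abs(gauss_sum(0) + 1) < 1e-9                 # trivial chi: g = -1
for k in range(1, p - 1):                           # the q - 2 non-trivial chi
    assert abs(abs(gauss_sum(k)) - p ** 0.5) < 1e-9
print("all", p - 2, "non-trivial Gauss sums have absolute value sqrt(p)")
```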
In this article we will study the equidistribution properties of the angles of Gauss sums attached to irreducible cuspidal representations of general linear groups over finite fields. For a natural number $d$, denote by $\psi_d$ (or just $\psi$ by abuse of notation) the additive character of the ring of $d \times d$ matrices $M_d(\mathbb F_q)$ defined by $\psi_d=\psi_q\circ \mbox{Tr}$. Given a complex representation $\rho$ of $GL_d(\mathbb F_q)$ of degree $N$, the matrix valued Gauss sum was introduced by Lamprecht (\cite{Lamprecht}):
\begin{equation} \label{GSmatrix}
G(\rho, \psi):= \sum_{x \in GL_d(\mathbb F_q)} \rho(x)\psi(\mbox{Tr}(x)) \in M_N(\mathbb C).
\end{equation}
Suppose $\rho$ is irreducible. Then by Schur's lemma $G(\rho, \psi)$ is a scalar matrix:
\begin{equation}\label{GS}
G(\rho, \psi)= g(\rho, \psi)I_N,
\end{equation}
where $g(\rho, \psi) \in \mathbb C$ and $I_N$ is the $N\times N$ identity matrix.
The irreducible characters of $GL_d(\mathbb F_q)$ were classified by Green \cite{Green_s_paper} in terms of conjugacy classes. Using Green's work, Kondo showed
\[
|g(\rho,\psi)|=q^{\tfrac{d^2-\kappa(\rho)}{2}},
\]
where $\kappa(\rho)$ is the multiplicity of $1$ as a root of the characteristic polynomial attached to the conjugacy class of $GL_d(\mathbb F_q)$ corresponding to the irreducible representation $\rho$. Let $a(\rho):= q^{-\tfrac{d^2-\kappa(\rho)}{2}}g(\rho,\psi)$ denote the `angle', i.e., the normalized Gauss sum of absolute value $1$ attached to $\rho$.
We recall that an irreducible representation $(\rho, V)$ of $GL_d(\mathbb F_q)$ is said to be cuspidal if the space of $N(\mathbb F_q)$-invariants $V^{N(\mathbb F_q)}$ vanishes for every proper parabolic subgroup $P$ of $GL_d$ defined over $\mathbb F_q$, where $N$ denotes the unipotent radical of $P$. The cuspidal representations are the building blocks for all representations of $GL_d(\mathbb F_q)$, in the sense that any irreducible representation of $GL_d(\mathbb F_q)$ occurs in a representation parabolically induced from a cuspidal representation of the Levi component of a suitable parabolic. Via the Green correspondence, cuspidal representations correspond to elliptic conjugacy classes in $GL_d(\mathbb F_q)$, i.e., conjugacy classes of semisimple elements of
$GL_d(\mathbb F_q)$ whose characteristic polynomial is irreducible over $\mathbb F_q$. For such representations, $\kappa(\rho)=0$. We show an analogue of Deligne's theorem for the angles of the Gauss sums corresponding to cuspidal representations of $GL_d(\mathbb F_q)$:
\begin{theorem}\label{theorem:main}
Let $R_0(d, q)$ denote the set of isomorphism classes of irreducible, cuspidal representations of $GL_d(\mathbb F_q)$. The set of normalized Gauss sums $\{q^{-d^2/2}g(\rho, \psi)\mid \rho\in R_0(d,q)\}$ is equidistributed with respect to the normalized Haar measure on $S^1$, as $q$ tends to infinity.
\end{theorem}
We now consider equidistribution results for irreducible representations of $GL_d(\mathbb F_q)$ with trivial central character. Such representations can be considered as representations of $PGL_d(\mathbb F_q)$.
\begin{theorem}\label{theorem:centralchar}
Let $R_0^0(d, q)$ denote the set of isomorphism classes of irreducible, cuspidal representations of $GL_d(\mathbb F_q)$ with trivial central character. As $q$ tends to infinity, the set of normalized Gauss sums $\{q^{-d^2/2}g(\rho, \psi)\mid \rho\in R_0^0(d,q)\}$ is equidistributed with respect to the normalized Haar measure on $S^1$ for $d\geq 3$.
When $d=2$ and $p$ is odd, these normalized Gauss sums are equidistributed, as $q$ tends to infinity, with respect to the measure
\[\frac{1}{2}(\delta_1+\delta_{-1}),\]
where $\delta_a$ denotes the Dirac measure supported at $a\in S^1$.
When $d=2, ~p=2$, these normalized Gauss sums are equidistributed, as $q$ tends to infinity, with respect to the Dirac measure $\delta_1$ supported at $1$. \end{theorem}
The answer for $d=2$ and odd $p$ suggests that the family of cuspidal representations of $PGL(2,\mathbb F_q)$ should split into two subfamilies, such that the normalized Gauss sums belonging to these subfamilies are equidistributed with respect to $\delta_1$ and $\delta_{-1}$ respectively. The cuspidal representations of $GL(2,\mathbb F_q)$ with trivial central character are parametrized by pairs of characters
\[\{\chi,\chi^{\sigma}\} \quad\mbox{of $\mathbb F_{q^2}^*$, such that} \quad \chi\neq \chi^{\sigma}
\quad \mbox{and} \quad \chi|_{\mathbb F_q^*}=1,\]
where $\sigma$ is the non-trivial element of $\mbox{Gal}(\mathbb F_{q^2}/\mathbb F_q)$. Denote by $\rho_{\chi}$ the representation corresponding to $\chi$ as above. The collection of characters $\chi$ of $\mathbb F_{q^2}^*$ whose restriction to $\mathbb F_{q}^*$ is trivial is a cyclic group of order $q+1$. Of these, only the trivial and the quadratic characters are equivalent to their own Galois conjugates. The natural guess regarding the above expectation turns out to be valid:
\begin{theorem} \label{theorem:rep-square}
Let $p$ be an odd prime. Let $C_{pr}^0(2,q)$ denote the collection of primitive characters of $\mathbb F_{q^2}^*$ with respect to $\mathbb F_q$, whose restriction to $\mathbb F_q^*$ is trivial. As $q$ tends to infinity, the set of normalized Gauss sums $\{g(\rho_{\chi}, \psi)/q^2\}$ where $\chi\in C_{pr}^0(2,q)$ is a square of a character of $\mathbb F_{q^2}^*$ (resp. non-square ) is equidistributed with respect to the Dirac measure $\delta_1$ (resp. $\delta_{-1}$) supported at $1$ (resp. $-1$) of $S^1$.
\end{theorem}
\begin{remark}
One reason to study Gauss sums is their relation to $L$-functions. Specifically, Gauss sums $g(\chi,\psi)$ have absolute value $q^\frac{1}{2}$, and they satisfy the Hasse-Davenport relation: $-g(\chi \circ N_{\mathbb F_{q^n}/ \mathbb F_q}, \psi\circ Tr_{\mathbb F_{q^n}/ \mathbb F_q})=(-g(\chi,\psi))^n$. The former says that the $L$-function of $\mathbb F_q[T]$ satisfies the Riemann hypothesis, while the latter follows from the Euler product expansion (\cite{Ireland_Rosen}).
The Sato-Tate type of conjectures in automorphic forms predict for a tempered cusp form $\pi$ on a connected, reductive algebraic group $G$ defined over a number field $K$, that the Langlands-Satake parameters attached to an unramified component $\pi_v$ at finite places $v$ of $K$, should be equidistributed with respect to the projection of the Haar measure of $M$ onto its set of conjugacy classes, where $M$ is a maximal compact subgroup of the connected component of identity of the Langlands dual $^LG$ over $\mathbb C$. This led us to consider the equidistribution question for cuspidal representations with trivial central character. Such representations can be considered as representations of the adjoint group $PGL_d(\mathbb F_q)=GL_d(\mathbb F_q)/\mathbb F_q^*$. The identity component of the Langlands dual group of $PGL_d$ is $SL_d$, whereas that of $GL_d$ is $GL_d$ itself.
However, this reasoning does not seem to suggest the above answer. We refer also to Remark \ref{remark_epsilon_factor} for a connection with epsilon factors.
\end{remark}
\section{Preliminaries and reduction to abelian Gauss sums} In this section we recall relevant results about equidistribution, representation theory of $GL_d(\mathbb F_q)$, Gauss sums, and reduce the statement of the theorems to one involving the classical abelian Gauss sums.
\subsection{Equidistribution} Let $X$ be a compact, Hausdorff space, and $\mu$ a normalized Borel measure on $X$. A sequence $S_N$ of subsets of $X$ is said to be {\em equidistributed} in $X$ with respect to $\mu$ if, for any continuous function $f$ on $X$,
\begin{equation}
\lim_{N \longrightarrow \infty} \frac{1}{\abs{S_N}} \sum\limits_{x \in S_N} f(x)=\int_{X}f d\mu.
\end{equation}
Suppose $\{f_i\}_{i\in I}$ is a family of continuous functions whose linear span is dense in $C(X)$, the space of continuous functions on $X$ with respect to the supremum norm. Assume further that the limits,
\[\lim_{N \longrightarrow \infty} \frac{1}{\abs{S_N}} \sum\limits_{x \in S_N} f_i(x),\]
exist for all $i\in I$. Then there exists a unique measure $\mu$ on $X$ such that the sequence $S_N$ is equidistributed with respect to $\mu$ (\cite[Appendix, Chapter 1]{Serre_book}). Taking $X=S^1$, it suffices to work with the functions $z\mapsto z^n$ for $n\in \mathbb Z$. Hence, to check that the sequence of subsets $S_N$ of $S^1$ is equidistributed with respect to the normalized Haar measure on $S^1$, it suffices to show the following:
\begin{equation} \label{equation:eqdisHaar}
\lim_{N \longrightarrow \infty} \frac{1}{\abs{S_N}}\sum\limits_{z \in S_N} z^n = 0\quad \mbox{for}~ n \neq 0, ~n\in \mathbb Z.
\end{equation}
The sequence $S_N$ is equidistributed with respect to the Dirac measure supported at $1$ (resp. $-1$) provided
\begin{equation}
\lim_{N \longrightarrow \infty} \frac{1}{\abs{S_N}}\sum\limits_{z \in S_N} z^n = 1\quad \mbox{(resp. $(-1)^n$)} \quad \mbox{for $n\in \mathbb Z$}.
\end{equation}
If we further know that the sets $S_N$ are closed under complex conjugation, we need to verify the foregoing equations only for $n\geq 0$.
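These moment criteria are easy to see in action on toy samples. The sketch below (Python, purely illustrative) compares the $N$-th roots of unity, which model Haar-equidistributed angles, with the constant sample modeling $\delta_1$:

```python
import cmath

def moment(S, n):
    # n-th Weyl moment: (1/|S|) * sum of z^n over the sample S
    return sum(z ** n for z in S) / len(S)

N = 20
roots = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]   # ~ Haar
dirac = [1.0] * N                                              # ~ delta_1

for n in range(1, N):
    assert abs(moment(roots, n)) < 1e-9       # Haar: moments vanish
    assert abs(moment(dirac, n) - 1) < 1e-9   # delta_1: moments are all 1
print("Weyl moments distinguish Haar measure from delta_1")
```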
\subsection{Kloosterman sums and Deligne's bound.}
Let $\psi: \mathbb F_q \to \mathbb C^*$ be a non-trivial additive character. For $a \in \mathbb F_q$, the \textit{Kloosterman sum} is defined to be the sum
\begin{equation}
\textrm{Kl}_n(a,q):=\sum\limits_{\substack{x_i \in \mathbb F_q\\ x_1 \cdots x_n=a}} \psi(x_1+\ldots + x_n).
\end{equation}
Kloosterman sums are related to Gauss sums in the following way:
\begin{equation}\label{equation:GaussandKlooseterman}
g(\chi,\psi)^n=\sum\limits_{a \in \mathbb F_q^*} \chi(a) \textrm{Kl}_n(a,q).
\end{equation}
That is, powers of Gauss sums are the Fourier transform (over the group $\mathbb F_q^*$) of Kloosterman sums. The absolute values of Gauss sums are known classically: $\abs{g(\chi,\psi)}=\sqrt{q}$ if $\chi \neq \mathbf 1$ and $g(\mathbf {1}, \psi)=-1$. Hence by the Parseval identity, we have the equality
\[
\sum\limits_{a} \abs{\textrm{Kl}_n(a,q)}^2=\frac{(q-2)q^n+1}{q-1}=q^n - \frac{q^n-1}{q-1}.
\]
From this we can conclude crudely that $\abs{\textrm{Kl}_n(a,q)} =O(q^{n/2})$, and that $\abs{\textrm{Kl}_n(a,q)}^2 \leq \tfrac{q^n}{q-1}$ for at least one $a$.
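Equation (\ref{equation:GaussandKlooseterman}) can be verified directly for a small prime. The following Python sketch (with the illustrative choices $p=7$ and primitive root $3$) checks the identity for $n=1,2,3$ and all characters by brute force:

```python
import cmath
from itertools import product
from math import prod

p = 7
psi = lambda a: cmath.exp(2j * cmath.pi * (a % p) / p)
r = 3                                               # a primitive root mod 7
log = {pow(r, m, p): m for m in range(p - 1)}
chi = lambda a, k: cmath.exp(2j * cmath.pi * k * log[a] / (p - 1))

def gauss(k):
    return sum(chi(a, k) * psi(a) for a in range(1, p))

def kloosterman(n, a):
    # brute force: sum psi(x_1 + ... + x_n) over tuples with product a
    return sum(psi(sum(x)) for x in product(range(1, p), repeat=n)
               if prod(x) % p == a)

for n in (1, 2, 3):
    for k in range(p - 1):
        rhs = sum(chi(a, k) * kloosterman(n, a) for a in range(1, p))
        assert abs(gauss(k) ** n - rhs) < 1e-8
print("g(chi,psi)^n is the Fourier transform of Kl_n(., 7) for n = 1, 2, 3")
```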
Deligne obtained a uniform bound of $O(q^{\tfrac{n-1}{2}})$, valid for all $a$, as a consequence of his work on the Weil conjectures:
\begin{theorem}\label{Deligne_bound}\cite[Equation 7.1.3]{SGA_4_half}
If $a \in \mathbb F_q^*$, then
\begin{equation}
\abs{\textrm{Kl}_n(a,q)} \leq nq^{\frac{n-1}{2}}.
\end{equation}
\end{theorem}
This saving of half a power of $q$ over the trivial estimate is optimal, and it is crucial in the computations that follow.
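Deligne's bound can be observed numerically for small parameters. The sketch below (Python, brute force, with $p=11$ chosen for speed) checks $|\mathrm{Kl}_n(a,p)|\le n\,p^{(n-1)/2}$ for $n=2,3$ and all $a\in\mathbb F_p^*$:

```python
import cmath
from itertools import product
from math import prod

def kloosterman(n, a, p):
    # brute-force Kl_n(a, p) with psi(t) = exp(2*pi*i*t/p)
    psi = lambda t: cmath.exp(2j * cmath.pi * (t % p) / p)
    return sum(psi(sum(x)) for x in product(range(1, p), repeat=n)
               if prod(x) % p == a)

p = 11
for n in (2, 3):
    worst = max(abs(kloosterman(n, a, p)) for a in range(1, p))
    bound = n * p ** ((n - 1) / 2)
    assert worst <= bound + 1e-9
    print(f"n={n}: max |Kl_n(a,{p})| = {worst:.2f} <= {bound:.2f}")
```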
\subsection{Deligne-Lusztig representations} The classification of the irreducible characters of $GL_d(\mathbb F_q)$ was carried out by Green (\cite{Green_s_paper}). More generally, the classification of irreducible representations of finite groups of Lie type was carried out by Deligne and Lusztig (\cite{Del_Lusztig_main_paper}) using geometric methods. In this article, we will use the Deligne-Lusztig parametrization of irreducible representations.
Choose a prime $\ell$ different from $p$. We work over the field $\bar{\mathbb Q}_{\ell}$ instead of $\mathbb C$.
The maximal tori of $GL_d$ over $\mathbb F_q$, up to $GL_d(\mathbb F_q)$-conjugacy, are classified in terms of conjugacy classes of the Weyl group $W\simeq S_d$ of $GL_d$. For $w\in S_d$, let ${\mathbb T}_w$ be the associated torus, where to the identity element of $S_d$ we associate the diagonal torus. Let $T_w= {\mathbb T}_w(\mathbb F_q)$. Associated to a character $\chi: T_w\to \bar{\mathbb Q}_{\ell}^*$, Deligne and Lusztig construct a virtual representation $R^{\chi}_w$ (or $R^{\chi}_{T_{w}}$) of $GL_d(\mathbb F_q)$, and showed that every irreducible representation $\rho$ of $ GL_d(\mathbb F_q)$ is a constituent of some (not necessarily unique) $R^\chi_w$.
\subsection{Cuspidal representations}
\begin{defn}
For a reductive group $\mathbb G$ defined over $\mathbb F_q$ with Frobenius $F$, we say that an irreducible representation $\rho$ of $G=\mathbb G^F$ is \textbf{cuspidal} if, for every proper parabolic subgroup $\mathbb P$ of $\mathbb G$ with unipotent radical $\mathbb U$, the space of fixed vectors $\rho^{\mathbb U(\mathbb F_q)}$ is $0$.
\end{defn}
A theorem due to Harish-Chandra says that cuspidal representations are the building blocks of all representations of $G$; in other words, the non-cuspidal irreducible representations occur in representations induced from cuspidal representations of suitable proper parabolic subgroups \cite{Bhama_book}.
Let $E$ be a finite extension of $\mathbb F_q$. A character $\chi: E^*\to \mathbb C^*$ is said to be {\em primitive} if it does not factor via the norm map $E^*\to E'^*$ for any proper subextension $\mathbb F_q\subset E'\subsetneq E$. Equivalently, for any non-trivial $\sigma\in \mbox{Gal}(E/\mathbb F_q)$, the conjugate $\chi^{\sigma}$ is not equal to $\chi$. Define two characters to be {\em equivalent} if they are equal up to a Galois twist.
The Deligne-Lusztig parametrization of cuspidal representations of $GL_d(\mathbb F_q)$ is as follows: let $w_0\in S_d$ be a $d$-cycle, an element of longest length with respect to the standard generators for $S_d$. The torus attached to $w_0$ corresponds to the Weil restriction of scalars
$R_{\mathbb F_{q^d}/\mathbb F_q}(\mathbb G_m)$ embedded inside $GL_d$, and there is an isomorphism $T_{w_0}\simeq \mathbb F_{q^d}^*$. Such tori are not contained in any proper parabolic subgroup.
Given a primitive character $\chi$ of $\mathbb F_{q^d}^*$, the Deligne-Lusztig representation $\varepsilon_{\mathbb G} \varepsilon_{\mathbb T_{w_0}} R^{\chi}_{w_0}$ is an irreducible, cuspidal representation, where $\varepsilon_{\mathbb G}:=(-1)^{\mathbb F_q\textrm{-rank of}\,\, \mathbb G}$. For $GL_d$, we have $\varepsilon_{GL_{d}}=(-1)^d$, while the torus ${\mathbb T}_{w_0}$ has $\mathbb F_q$-rank $1$, so that $\varepsilon_{\mathbb T_{w_0}}=-1$ and $\varepsilon_{\mathbb G} \varepsilon_{\mathbb T_{w_0}} R^{\chi}_{w_0}=(-1)^{d+1}R^{\chi}_{w_0}$. Further, if the characters $\chi$ and $\chi'$ are not conjugate under the action of ${\rm Gal}(\mathbb F_{q^d}/\mathbb F_q)$, the irreducible representations $(-1)^{d+1}R^{\chi}_{w_0}$ and $(-1)^{d+1}R^{\chi'}_{w_0}$ are distinct. This sets up a bijective correspondence between equivalence classes of primitive characters of $\mathbb F_{q^d}^*$ and isomorphism classes of cuspidal representations of $GL_d(\mathbb F_q)$ \cite[Chapter 6]{Bhama_book}.
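For $d=2$ this parametrization gives a concrete count: the non-primitive characters of $\mathbb F_{q^2}^*$ are the $q-1$ characters factoring through the norm, so there are $q^2-q$ primitive characters, falling into $(q^2-q)/2$ Galois orbits, i.e., cuspidal representations of $GL_2(\mathbb F_q)$. A quick computational check (Python sketch; only the abstract model of $\mathbb F_{q^2}^*$ as the cyclic group $\mathbb Z/(q^2-1)$ with Frobenius $k\mapsto qk$ is used):

```python
def cuspidal_count_gl2(q):
    # characters of F_{q^2}^* ~ Z/(q^2 - 1); Frobenius sends chi_k to chi_{qk}
    N = q * q - 1
    primitive = [k for k in range(N) if (q * k) % N != k]
    orbits = {frozenset({k, (q * k) % N}) for k in primitive}
    return len(primitive), len(orbits)

for q in (2, 3, 4, 5, 7, 8, 9):
    n_prim, n_cusp = cuspidal_count_gl2(q)
    assert n_prim == q * q - q            # exactly q - 1 non-primitive chi
    assert n_cusp == (q * q - q) // 2     # one cuspidal rep per Galois orbit
print("GL_2(F_q) has q(q-1)/2 cuspidal representations")
```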
\subsection{Central character}
Let $\rho$ be an irreducible representation of a group $G$ whose centre is $Z$. By Schur's Lemma $\rho(z)$ is a scalar matrix for all $z \in Z$. That is, there exists a character $\chi: Z \to \mathbb C^*$ such that $\rho(z)=\chi(z)\cdot \textrm{Id}$. The character $\chi$ is called the central character of $\rho$.
Suppose $\mathbb T$ is a maximal $\mathbb F_q$-torus of $\mathbb G=GL_d$ (more generally, of any reductive group $\mathbb G$). Let $\mathbb B=\mathbb T \mathbb U$ be a Borel subgroup of $\mathbb G$. Denote by $\mathcal L :\mathbb G \to \mathbb G$ the Lang isogeny $x\mapsto x^{-1}F(x)$, where $F$ is the Frobenius morphism. On the space $\mathcal L^{-1}(U)$, the product $\mathbb G(\mathbb F_q)\times \mathbb T(\mathbb F_q)$ acts by $(g,t)(x)=gxt$. Let $Z$ be the center of $\mathbb G$. Restricted to $Z(\mathbb F_q)$, the induced action is the same as the action of $Z(\mathbb F_q)\subset \mathbb T(\mathbb F_q)$.
Thus, given a character $\chi$ of $T=\mathbb T(\mathbb F_q)$, the action of $Z(\mathbb F_q)$ on the $\chi$-isotypical component of $\ell$-adic cohomology groups with compact support $H_c^i(\mathcal L^{-1}(U), \bar{\mathbb Q}_{\ell})$ is given by the character $\chi$.
Since the Deligne-Lusztig representations $R_T^{\chi}$ are afforded on the space
\[H_c^*(\mathcal L^{-1}(U), \bar{\mathbb Q}_{\ell})_{\chi}=\sum_{i=0}^{2 \textrm{dim}(U)} (-1)^iH_c^i(\mathcal L^{-1}(U), \bar{\mathbb Q}_{\ell})_{\chi},\]
it follows that the center $Z(\mathbb F_q)$ acts via the character $\chi$ in the representation $R_T^{\chi}$.
\subsection{$GL_d$-Gauss sums}
Suppose $\mathbb T$ is a maximal $\mathbb F_q$-torus in $GL_d$. Given a character $\chi: \mathbb T(\mathbb F_q)\to \bar{\mathbb Q}_{\ell}^*$, define the (abelian) Gauss sum,
\begin{equation}
g(\chi, \psi):= \sum_{x \in \mathbb T(\mathbb F_q)} \chi(x)\psi(\mbox{Tr}(x)),
\end{equation}
where $\mbox{Tr}$ denotes the trace of $x$ considered as a matrix element.
It was shown by Kondo (\cite{Kondo_original_paper}) using Green's work, and by Braverman and Kazhdan (\cite{BraKa}) using Lusztig's theory of character sheaves, that the Gauss sum defined as in Equations (\ref{GSmatrix}, \ref{GS}) for an irreducible representation $\rho$ of $GL_d(\mathbb F_q)$ is essentially an abelian Gauss sum.
\begin{theorem}[Kondo, Braverman-Kazhdan]\label{theorem:KBK}\cite[Theorem 1.3]{BraKa}
Suppose $\rho$ is an irreducible representation of $GL_d(\mathbb F_q)$ that is an irreducible constituent of $R_T^{\chi}$ for some maximal $\mathbb F_q$-torus $\mathbb T$ in $GL_d$. Then,
\[g(\rho, \psi)=q^{\tfrac{d^2-d}{2}}
g(\chi,\psi).\]
\end{theorem}
\subsection{Reduction to the abelian case} \label{section:reduction}
The cuspidal representations $\rho$ are of the form $\pm R^{\chi}_{w_0}$ for some primitive character $\chi$ of $\mathbb F_{q^d}^*$. Via the correspondence between cuspidal representations and primitive characters, the $d$ inequivalent Galois conjugates of $\chi$ give rise to isomorphic cuspidal representations. Further, by Theorem \ref{theorem:KBK},
\[g(\rho, \psi) =q^{(d^2-d)/2} g(\chi, \psi). \]
Thus, questions about equidistribution of the normalized Gauss sums $\{g(\rho, \psi)/q^{d^2/2}\mid \rho\in R_0(d,q)\}$ are reduced to questions about equidistribution of $\{g(\chi, \psi)/q^{d/2}\}$, where $\chi$ runs over the primitive characters of $\mathbb F_{q^d}^*$. Theorems \ref{theorem:main}, \ref{theorem:centralchar} and \ref{theorem:rep-square} are consequences of the following theorems:
\begin{theorem}\label{theorem:main-charvers}
Let $C_{pr}(d,q)$ denote the collection of primitive characters of $\mathbb F_{q^d}^*$ with respect to $\mathbb F_q$. The set of normalized Gauss sums $\{g(\chi, \psi)q^{-d/2}\mid \chi\in C_{pr}(d,q)\}$ is equidistributed with respect to the normalized Haar measure on $S^1$, as $q$ tends to infinity.
\end{theorem}
\begin{theorem}\label{theorem:centralchar-charvers}
Let $C_{pr}^0(d,q)$ denote the collection of primitive characters of $\mathbb F_{q^d}^*$ with respect to $\mathbb F_q$, whose restriction to $\mathbb F_q^*$ is trivial. Suppose $d\geq 3$. As $q$ tends to infinity, the set of normalized Gauss sums $\{g(\chi, \psi)q^{-d/2}\mid \chi\in C_{pr}^0(d,q)\}$ is equidistributed with respect to the normalized Haar measure on $S^1$.
\end{theorem}
\begin{theorem} \label{theorem:evenodd}
Let $p$ be an odd prime. Let $C_{pr}^0(2,q)$ denote the collection of primitive characters of $\mathbb F_{q^2}^*$ with respect to $\mathbb F_q$, whose restriction to $\mathbb F_q^*$ is trivial. As $q$ tends to infinity, the set of normalized Gauss sums $\{g(\chi, \psi)/q\}$ where $\chi$ is a square of a character of $\mathbb F_{q^2}^*$ (resp. non-square ) is equidistributed with respect to the Dirac measure $\delta_1$ (resp. $\delta_{-1}$) supported at $1$ (resp. $-1$) of $S^1$.
When $p=2$, the set of normalized Gauss sums $\{g(\chi, \psi)/q\mid \chi \in C_{pr}^0(2,q) \}$ is equidistributed with respect to the Dirac measure $\delta_1$ supported at $1$ as $q\to \infty$.
\end{theorem}
\section{Proof of Theorem \ref{theorem:main-charvers}}
In order to prove Theorem \ref{theorem:main}, we are reduced to proving Theorem \ref{theorem:main-charvers} showing the equidistribution of $\{g(\chi, \psi)/q^{d/2}\}$, where $\chi$ runs over the primitive characters of $\mathbb F_{q^d}^*$.
We need to show for any non-zero integer $n$ that
\begin{equation} \label{eqn:primeqdis}
\frac{1}{|C_{pr}(d,q)|} \sum\limits_{\chi\,\, \textrm{primitive}} \Big( \frac{g(\chi, \psi)}{q^{d/2}} \Big)^n \longrightarrow 0,
\end{equation}
as $q \to \infty$. Since $g(\chi, \psi)^{-1}=g(\bar{\chi}, \bar{\psi})/q^d$, it suffices to verify the above limits for $n\geq 1$.
Suppose $\chi$ is a non-primitive character of $\mathbb F_{q^d}^*$. Then $\chi$ is left invariant by a non-trivial subgroup $H$ of $\mbox{Gal}(\mathbb F_{q^d}/\mathbb F_q)$. Hence $\chi$ is of the form $\chi'\circ N_{\mathbb F_{q^d}/\mathbb F_{q^d}^H}$, for some character $\chi'$ of the fixed field $\mathbb F_{q^d}^H$ under $H$. Here for an extension of finite fields $E/F$, $N_{E/F}$ denotes the norm map from $E^*$ to $F^*$. The cardinality of $\mathbb F_{q^d}^H$ is at most $q^{d/2}$. Since there are only finitely many fields contained in the finite extension $\mathbb F_{q^d}/\mathbb F_{q}$ independent of $q$, the number of non-primitive characters is bounded by $Cq^{d/2}$ for some constant $C$.
Thus the number of primitive characters is at least $q^d-Cq^{d/2}$. For any non-trivial character $\chi$ of $\mathbb F_{q^d}^*$, $|g(\chi, \psi)|=q^{d/2}$ and $g(1,\psi)=-1$. Hence, in order to show the validity of Equation (\ref{eqn:primeqdis}), we can work with the set of all characters of $\mathbb F_{q^d}^*$.
From Equation (\ref{equation:GaussandKlooseterman}) relating Gauss and Kloosterman sums,
we are reduced to showing for $n\geq 1$,
\[ \frac{1}{q^{d+nd/2}} \sum_{\chi\neq 1} ~~\sum_{a\in \mathbb F_{q^d}^*} \chi(a)Kl_n(a, q^d)\rightarrow 0, \]
as $q\to \infty$. Interchanging the order of summation and using orthogonality of characters, together with $\sum_{a\in \mathbb F_{q^d}^*} Kl_n(a, q^d)=(-1)^n$ (the contribution of the trivial character),
\[ \frac{1}{q^{d+nd/2}} \sum_{\chi\neq 1} ~~\sum_{a\in \mathbb F_{q^d}^*} \chi(a)Kl_n(a, q^d)=\frac{(q^d-1)Kl_n(1, q^d)-(-1)^n}{q^{d+nd/2}}.\]
Theorem \ref{theorem:main-charvers}, and with it Theorem \ref{theorem:main}, now follows by appealing to Deligne's bound given by Theorem \ref{Deligne_bound},
\[ |Kl_n(1, q^d)|\leq nq^{d(n-1)/2}.\]
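As an illustrative sanity check (no part of the argument depends on it), the identity $g(\chi,\psi)^n=\sum_a\chi(a)Kl_n(a,q^d)$, the absolute value $|g(\chi,\psi)|=q^{d/2}$ for non-trivial $\chi$, and Deligne's bound can all be verified by brute force in the simplest case $d=1$ over a small prime field. The choices $p=7$ and the primitive root $3$ in the Python sketch below are ours, purely for illustration.

```python
import cmath

# Toy check over F_p (the case d = 1): the Gauss--Kloosterman identity
# g(chi, psi)^n = sum_a chi(a) Kl_n(a, p), the modulus |g| = sqrt(p) for
# non-trivial chi, and Deligne's bound |Kl_n(1, p)| <= n p^{(n-1)/2}.
p = 7
g = 3                                   # a primitive root mod 7
dlog = {pow(g, m, p): m for m in range(p - 1)}   # discrete logarithms

def psi(a):                             # additive character a -> e^{2 pi i a/p}
    return cmath.exp(2j * cmath.pi * (a % p) / p)

def chi(k, a):                          # multiplicative character indexed by k
    return cmath.exp(2j * cmath.pi * k * dlog[a % p] / (p - 1))

def gauss(k):                           # g(chi_k, psi)
    return sum(chi(k, a) * psi(a) for a in range(1, p))

def kloosterman(n, a):                  # Kl_n(a, p), peeling off the last variable
    if n == 1:
        return psi(a)
    return sum(kloosterman(n - 1, a * pow(x, p - 2, p)) * psi(x)
               for x in range(1, p))

for k in range(1, p - 1):               # non-trivial characters: |g| = sqrt(p)
    assert abs(abs(gauss(k)) - p ** 0.5) < 1e-8
for n in (1, 2):                        # the Gauss--Kloosterman identity
    for k in range(p - 1):
        rhs = sum(chi(k, a) * kloosterman(n, a) for a in range(1, p))
        assert abs(gauss(k) ** n - rhs) < 1e-8
assert abs(kloosterman(2, 1)) <= 2 * p ** 0.5    # Deligne's bound for n = 2
```

The same brute-force definitions work, more slowly, for any small prime $p$.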
\section{Proof of Theorem \ref{theorem:centralchar-charvers} for $d\geq 3$}
We now consider equidistribution of Gauss sums of cuspidal representations with trivial central character. Equivalently, by the arguments in Section \ref{section:reduction}, we consider equidistribution of Gauss sums of primitive characters of $\mathbb F_{q^d}^*$ that restrict trivially to $\mathbb F_q^*$. As seen in the foregoing section, the number of non-primitive characters is of order $O(q^{d/2})$. The group $C^0(d,q)$ of characters of $\mathbb F_{q^d}^*$ that restrict trivially to $\mathbb F_q^*$ has order $(q^d-1)/(q-1)$.
Suppose $d\geq 3$. Then the ratio of the number of non-primitive characters to that of primitive characters with trivial central character goes to zero as $q$ tends to infinity. Arguing as in the proof of Theorem \ref{theorem:main-charvers} given above, in order to prove Theorem \ref{theorem:centralchar-charvers}, it is sufficient to work with the entire character group $C^0(d,q)$ rather than restrict to just the primitive elements in $C^0(d,q)$. Thus we are reduced to showing
for any natural number $n\geq 1$ that
\begin{equation} \label{eqn:centralchar-moment}
\frac{1}{|C^{0}(d,q)|} \sum\limits_{\chi\in C^0(d,q)} \Big( \frac{g(\chi, \psi)}{q^{d/2}} \Big)^n \longrightarrow 0,
\end{equation}
as $q \to \infty$. In terms of Kloosterman sums, we need to show
for $n\geq 1$, that
\begin{equation}\label{eqn:d3}
\frac{1}{|C^0(d,q)|q^{nd/2}} \sum_{\chi\in C^0(d,q)} ~~\sum_{a\in \mathbb F_{q^d}^*} \chi(a)Kl_n(a, q^d)\rightarrow 0,
\end{equation}
as $q\to \infty$.
Let $\widehat{\mathbb F_{q^d}^*}$ denote the group of characters of ${\mathbb F_{q^d}^*}$. Under the non-degenerate pairing,
\begin{equation}\label{eqn:pairing}
{\mathbb F_{q^d}^*}\times \widehat{\mathbb F_{q^d}^*}\to \mathbb C^*,
\end{equation}
the left annihilator of the group $C^0(d,q)$ is precisely $\mathbb F_q^*$. Hence,
\[
\sum_{\chi\in C^0(d,q)}\chi(a)=\begin{cases} |C^0(d,q)| & \text{if $a\in \mathbb F_q^*$}\\
0& \text{if $a\not\in \mathbb F_q^*$.}
\end{cases}
\]
Interchanging the order of summation in Equation (\ref{eqn:d3}), we get
\[ \frac{1}{|C^0(d,q)|q^{nd/2}} \sum_{\chi\in C^0(d,q)}~~ \sum_{a\in \mathbb F_{q^d}^*} \chi(a)Kl_n(a, q^d)=\frac{1}{q^{nd/2}} \sum_{a\in \mathbb F_{q}^*} Kl_n(a, q^d).\]
Using Deligne's bound,
\[\frac{1}{q^{nd/2}} \sum_{a\in \mathbb F_{q}^*} |Kl_n(a, q^d)|\leq \frac{(q-1)nq^{d(n-1)/2}}{q^{nd/2}}=\frac{n(q-1)}{q^{d/2}}.\]
Since $d\geq 3$, this bound tends to zero as $q\to\infty$, and this proves Theorem \ref{theorem:centralchar-charvers}.
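Since $\mathbb F_{q^d}^*$ is cyclic, the annihilator computation above can be checked in a purely combinatorial discrete-log model, with no finite-field arithmetic: $\mathbb F_{q^d}^*\cong\mathbb Z/(q^d-1)$, the subgroup $\mathbb F_q^*$ corresponds to the exponents divisible by $(q^d-1)/(q-1)$, and $C^0(d,q)$ to the characters $\chi_k$ with $(q-1)\mid k$. The parameters $q=3$, $d=3$ in the sketch below are illustrative.

```python
import cmath

# Discrete-log model of the orthogonality relation: the sum of chi(g^m)
# over chi in C^0(d,q) equals |C^0(d,q)| when g^m lies in F_q^* (the
# annihilator of C^0) and vanishes otherwise.
q, d = 3, 3
N = q ** d - 1                          # |F_{q^d}^*|
step = N // (q - 1)                     # exponents of elements of F_q^*
C0 = [k for k in range(N) if k % (q - 1) == 0]   # characters trivial on F_q^*

def char_sum(m):                        # sum_{chi in C^0} chi(g^m)
    return sum(cmath.exp(2j * cmath.pi * k * m / N) for k in C0)

assert len(C0) == (q ** d - 1) // (q - 1)        # |C^0(d,q)|
for m in range(N):
    expected = len(C0) if m % step == 0 else 0   # annihilator is F_q^*
    assert abs(char_sum(m) - expected) < 1e-8
```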
\section{Proof of Theorem \ref{theorem:evenodd}}
Let $C^0=C^0(2,q)$ be the subgroup of characters of $\mathbb F_{q^2}^*$ whose restriction to $\mathbb F_q^*$ is trivial. Its cardinality is $q+1$. Suppose $\chi$ is a character of $\mathbb F_q^*$ such that $\chi\circ N_{\mathbb F_{q^2}/\mathbb F_q}$ belongs to $C^0$. This is equivalent to saying that $\chi(x^2)$ is identically $1$. When $p=2$, the only such character is the trivial character, and when $p$ is odd the condition implies that $\chi$ is trivial or quadratic. Hence the number of primitive characters of $\mathbb F_{q^2}^*$ that are trivial upon restriction to $\mathbb F_q^*$ is precisely $q-1$ when $p$ is odd and $q$ when $p=2$. Since the number of non-primitive characters in $C^0$ is at most $2$, arguing as before, in order to prove equidistribution we can work with $C^0$ rather than just its subset of primitive characters.
For odd $p$, let
\[\begin{split}
C^{0,s}&=\{\chi\in C^0\mid \chi=\eta^2 \quad\mbox{for some $\eta\in C^0$}\}\\
C^{0,ns}&=\{\chi\in C^0\mid \chi\neq \eta^2 \quad\mbox{for any $\eta\in C^0$}\}.
\end{split}
\]
Both sets have cardinality $(q+1)/2$ when $p$ is odd. In what follows, when $p=2$, by an abuse of notation we will work with the $C^{0,s}$-version, where we take $C^{0,s}$ to be equal to $C^0$.
The quadratic character belongs to $C^{0,s}$ precisely when $4|(q+1)$.
In order to prove Theorem \ref{theorem:evenodd} for $p$ odd, we need to show that for $n\geq 1$, the $n$-th moments of the normalized Gauss sums have the following limiting behaviour as $q\to \infty$:
\begin{align}
\frac{1}{|C^{0,s}|} \sum\limits_{\chi\in C^{0,s}} \Big( \frac{g(\chi, \psi)}{q} \Big)^n&=\frac{2}{(q+1)q^{n}} \sum_{\chi\in C^{0,s}} ~~\sum_{a\in \mathbb F_{q^2}^*} \chi(a)Kl_n(a, q^2)\rightarrow 1, \label{eqn:cosmoment} \\
\text{and} \quad \frac{1}{|C^{0,ns}|} \sum\limits_{\chi\in C^{0,ns}} \Big( \frac{g(\chi, \psi)}{q} \Big)^n&=\frac{2}{(q+1)q^{n}} \sum_{\chi\in C^{0,ns}} ~~\sum_{a\in \mathbb F_{q^2}^*} \chi(a)Kl_n(a, q^2)\rightarrow (-1)^n.\label{eqn:consmoment}
\end{align}
For $p=2$, we need to show that as $q\to \infty$,
\[ \frac{1}{(q+1)q^{n}} \sum_{\chi\in C^{0}} ~~\sum_{a\in \mathbb F_{q^2}^*} \chi(a)Kl_n(a, q^2)\rightarrow 1. \]
The left annihilator of the subgroup $C^0$ of $\widehat{\mathbb F_{q^2}^*}$ with respect to the non-degenerate bilinear pairing given by Equation (\ref{eqn:pairing}) is precisely $\mathbb F_q^*$.
Let $\sigma$ be the non-trivial automorphism of $\mathbb F_{q^2}$ over $\mathbb F_q$. Define
\begin{equation*}\label{defn:S}
\begin{split}
S^0&=\{x\in \mathbb F_{q^2}^*\mid \sigma(x)=x\}=\mathbb F_q^*,\\
S^{-}&=\{x\in \mathbb F_{q^2}^*\mid \sigma(x)=-x\}=\{x\in \mathbb F_{q^2}^*\mid Tr(x)=0\},\\
S&=S^0\cup S^{-}=\{x\in \mathbb F_{q^2}^*\mid x^2\in \mathbb F_q^*\},
\end{split}
\end{equation*}
where $Tr$ denotes the trace from $ \mathbb F_{q^2}$ to $\mathbb F_q$. When $p=2$, $S^0=S^-$. The annihilator of the subgroup $C^{0,s}$ contains the set $S$.
When $p$ is odd, the cardinality of $S$ is $2(q-1)$, and hence it is precisely the annihilator of $C^{0,s}$. Consequently, for $x\in \mathbb F_{q^2}^*$,
\begin{equation} \label{eqn:cosorth}
\sum_{\chi\in C^{0,s}}\chi(x)=\begin{cases}
0 & \text{if $x\not \in S$},\\
(q+1)/2 & \text{ if $x\in S$}.
\end{cases}
\end{equation}
The value of a character $\chi\in C^{0,ns}$ at an element $x\in S$ is given by
\begin{equation}\label{eqn:consvalue}
\chi(x)=\begin{cases}
1 & \text{if $x\in S^0$},\\
-1& \text{ if $x\in S^-$}.
\end{cases}
\end{equation}
Since $C^{0,ns}$ is a coset of $C^{0,s}$ in $C^0$, equations (\ref{eqn:cosorth}) and (\ref{eqn:consvalue}) yield,
\begin{equation} \label{eqn:consorth}
\sum_{\chi\in C^{0,ns}}\chi(x)=\begin{cases} 0 & \text{if $x\not \in S$},\\
(q+1)/2 & \text{ if $x\in S^0$},\\
-(q+1)/2 & \text{ if $x\in S^-$}.
\end{cases}
\end{equation}
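These character sums admit the same kind of discrete-log check as before: writing $\mathbb F_{q^2}^*=\langle g\rangle$ of order $N=q^2-1$, the exponents of $S^0$ are the multiples of $q+1$, those of $S^-$ are congruent to $(q+1)/2$ modulo $q+1$, a character of $C^0$ corresponds to $t\in\mathbb Z/(q+1)$ acting by $g^m\mapsto e^{2\pi i tm/(q+1)}$, and the squares are the even $t$. The sums take the value $|C^{0,s}|=(q+1)/2$ (with the indicated signs) on $S$ and vanish off $S$. The choice $q=5$ below is illustrative.

```python
import cmath

# Discrete-log check of the character sums over C^{0,s} (even t) and
# C^{0,ns} (odd t) for q = 5: value (q+1)/2 on S for the squares, and
# +/-(q+1)/2 on S^0 / S^- for the non-squares, 0 off S in both cases.
q = 5
N = q * q - 1
S0 = {m for m in range(N) if m % (q + 1) == 0}            # exponents of F_q^*
Sminus = {m for m in range(N) if m % (q + 1) == (q + 1) // 2}  # trace zero
S = S0 | Sminus

def coset_sum(m, residue):    # sum of chi_t(g^m) over t = residue mod 2
    return sum(cmath.exp(2j * cmath.pi * t * m / (q + 1))
               for t in range(q + 1) if t % 2 == residue)

assert len(S) == 2 * (q - 1)
for m in range(N):
    s_even, s_odd = coset_sum(m, 0), coset_sum(m, 1)
    assert abs(s_even - ((q + 1) // 2 if m in S else 0)) < 1e-8
    expected = (q + 1) // 2 if m in S0 else (-(q + 1) // 2 if m in Sminus else 0)
    assert abs(s_odd - expected) < 1e-8
```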
Define for $n\geq 1$, the following averages of Kloosterman sums:
\[\begin{split}
\text{(Invariant)}\quad I_n&=\sum_{a\in \mathbb F_q^*}Kl_n(a, q^2)=
\sum_{\substack{x_1, \cdots, x_n \in \mathbb F_{q^2}^*\\
\prod_i x_i\in \mathbb F_q^*}} \psi\circ Tr(x_1+\cdots+x_n)\\
\text{(Anti-invariant)}\quad A_n&=
\sum_{\substack{a\in \mathbb F_{q^2}^*\\ Tr(a)=0}}Kl_n(a, q^2)=\sum_{a\in S^-}Kl_n(a, q^2)=\sum_{\substack{x_1, \cdots, x_n \in \mathbb F_{q^2}^*\\
Tr(\prod_i x_i)=0}}\psi\circ Tr(x_1+\cdots+x_n).
\end{split}
\]
Substituting the values obtained from Equations (\ref{eqn:cosorth}, \ref{eqn:consorth}) into Equations (\ref{eqn:cosmoment}, \ref{eqn:consmoment}) we need to show for $n\geq 1$ and $q\to \infty$,
\begin{align}
\frac{1}{|C^{0,s}|} \sum\limits_{\chi\in C^{0,s}} \Big( \frac{g(\chi, \psi)}{q} \Big)^n&=\frac{1}{q^{n}} (I_n+A_n)\rightarrow 1, \label{eqn:cosmoment-IA} \\
\text{and} \quad \frac{1}{|C^{0,ns}|} \sum\limits_{\chi\in C^{0,ns}} \Big( \frac{g(\chi, \psi)}{q} \Big)^n&=\frac{1}{q^{n}} (I_n-A_n)\rightarrow (-1)^n.\label{eqn:consmoment-IA}
\end{align}
We observe that \begin{equation}\label{eqn:I1odd}
I_1=-1\quad \text{and} \quad A_1=q-1.
\end{equation}
Hence,
\[ I_1+A_1= q-2\quad \text{and}\quad I_1-A_1= -q.\]
This gives us the validity of Equations (\ref{eqn:cosmoment-IA}) and (\ref{eqn:consmoment-IA}) for $n=1$.
When $p=2$,
\begin{equation} \label{eqn:co-orth2}
\sum_{\chi\in C^{0}}\chi(x)=\begin{cases} 0 & \text{if $x\not \in \mathbb F_q^*$},\\
(q+1)& \text{ if $x\in \mathbb F_q^*$}.\end{cases}
\end{equation}
Thus we need to show that as $q\to \infty$,
the sum
\begin{equation}\label{eqn:2moment-IA}
\frac{1}{|C^{0}|} \sum\limits_{\chi\in C^{0}} \Big( \frac{g(\chi, \psi)}{q} \Big)^n=\frac{1}{q^{n}} I_n\rightarrow 1.
\end{equation}
Further,
\begin{equation}\label{eqn:I1even}
I_1=\sum_{a\in \mathbb F_q^*}\psi\circ Tr(a)=\sum_{a\in \mathbb F_q^*}\psi(2a)=(q-1),
\end{equation}
and this yields the validity of equation (\ref{eqn:2moment-IA}) when $n=1$.
The following key proposition gives a recurrence relation between the sums $I_n$ and $A_n$, allowing an inductive procedure to calculate their values explicitly:
\begin{proposition}
For $n\geq 2$ and $p$ odd, the following recurrence relation holds:
\[ \begin{split}
I_n& =qA_{n-1}+(-1)^n\\
A_n& =qI_{n-1}+(-1)^n.
\end{split}
\]
When $p=2$,
\[ I_n =qI_{n-1}+(-1)^n.\]
\end{proposition}
\begin{proof}[Proof of Theorem \ref{theorem:evenodd}]
Inductively, it follows for $n\geq 2$,
\begin{align*}
I_n+A_n&= q(I_{n-1}+A_{n-1})+2(-1)^n \\
&=q^{n-1}(I_1+A_1)+2(-1)^n (1-q+q^2+\cdots +(-1)^{n-2}q^{n-2})\\
&=(q-2)q^{n-1}+2(-1)^n \frac{(1-(-q)^{n-1})}{(1+q)}.
\end{align*}
Similarly,
\[I_n-A_n= -q(I_{n-1}-A_{n-1})=(-q)^{n-1}(I_1-A_1)=(-q)^n.\]
For $p=2$ and $n\geq 2$, we have
\begin{align*}
I_n& = qI_{n-1}+(-1)^n\\
&=q^{n-1}I_1 +(-1)^n(1-q+q^2+\cdots +(-1)^{n-2}q^{n-2})\\
&=(q-1)q^{n-1}+(-1)^n\frac{(1-(-q)^{n-1})}{(1+q)}.
\end{align*}
Together with the values for $n=1$, it gives us the validity of Equations (\ref{eqn:cosmoment-IA}, \ref{eqn:consmoment-IA}, \ref{eqn:2moment-IA}) for $n\geq 1$, and with it a proof of Theorem \ref{theorem:evenodd} for all $p$.
\end{proof}
\begin{remark}
Substituting the above expressions for $I_n\pm A_n$ in Equations (\ref{eqn:cosmoment-IA}) and (\ref{eqn:consmoment-IA}), we see that the $n$-th moments are in general not precisely equal to those of the limiting Dirac measures. Hence the normalized Gauss sums, even accounting for the contribution from the trivial character (equal to $(-1/q)^n$), are not identically equal to $1$ or $-1$, but only tend to the respective Dirac measures as $q \to \infty$.
\end{remark}
\begin{proof}[Proof of Proposition]
Suppose $\prod_{i=1}^nx_i=a$, where each $x_i$ is an arbitrary element of $\mathbb F_{q^2}^*$. Let $b=\prod_{i=1}^{n-1}x_i$, so that $x_n=a/b$. With this decomposition,
\begin{align*}
I_n&= \sum_{\substack{x_1, \cdots, x_n \in \mathbb F_{q^2}^*\\
\prod_i x_i\in \mathbb F_q^*}} \psi\circ Tr(x_1+\cdots+x_n)\\
&=\sum_{a\in \mathbb F_q^*}~~\sum_{b\in \mathbb F_{q^2}^*}\sum_{\substack{x_1,\ldots, x_{n-1}\in \mathbb F_{q^2}^*\\
x_1\cdots x_{n-1}=b}} \psi\circ Tr(x_1+\cdots+x_{n-1})\psi\circ Tr(a/b)\\
&=\sum_{a\in \mathbb F_q^*}~~\sum_{b\in \mathbb F_{q^2}^*} Kl_{n-1}(b, q^2)\psi\circ Tr(a/b)\\
&=\sum_{b\in \mathbb F_{q^2}^*} ~~\sum_{a\in \mathbb F_q^*}Kl_{n-1}(b, q^2)\psi\circ Tr(a/b).
\end{align*}
Since $a\in \mathbb F_q^*$,
\[\psi\circ Tr(a/b)=\psi(a/b+\sigma(a/b))=\psi(aTr(b^{-1})).\]
Suppose $Tr(b^{-1})\neq 0$. As $a$ varies over $\mathbb F_q^*$, the collection of elements of the form $aTr(b^{-1})$ forms the entire set $\mathbb F_q^*$. Hence,
\[ \sum_{a\in \mathbb F_q^*}\psi\circ Tr(a/b)=\begin{cases} -1 & \text{if $Tr(b)\neq 0$},\\
(q-1) &\text{if $Tr(b)=0$,}
\end{cases}
\]
where we have used the fact that $Tr(b^{-1})=0$ iff $Tr(b)=0$. Substituting this in the above expression for $I_n$, we get
\begin{align*}
I_n&= \sum_{b\in \mathbb F_{q^2}^*} ~~\sum_{a\in \mathbb F_q^*}Kl_{n-1}(b, q^2)\psi\circ Tr(a/b)\\
&=(q-1)\sum_{\substack{b\in \mathbb F_{q^2}^*\\Tr(b)=0}} Kl_{n-1}(b, q^2)-\sum_{\substack{b\in \mathbb F_{q^2}^*\\Tr(b)\neq 0}}Kl_{n-1}(b, q^2)\\
&=qA_{n-1}-\sum_{b\in \mathbb F_{q^2}^*}Kl_{n-1}(b, q^2)\\
&=qA_{n-1}-\sum_{x_1, \cdots, x_{n-1}\in \mathbb F_{q^2}^*}\psi\circ Tr(x_1+\cdots+x_{n-1})\\
&=qA_{n-1}-\prod_{i=1}^{n-1}\left(\sum_{x_i\in \mathbb F_{q^2}^*}\psi\circ Tr(x_i)\right)\\
&=qA_{n-1}- (-1)^{n-1}.
\end{align*}
We observe here that the above proof carries over verbatim to $p=2$: in that case the condition $Tr(b)=0$ defines $\mathbb F_q^*$ itself, so $A_{n-1}$ is replaced by $I_{n-1}$.
The proof for $A_n$ is similar. The set of elements of trace $0$ in $\mathbb F_{q^2}$ consists of elements of the form $a\sqrt{\delta}$, where $\delta$ is a non-square in $\mathbb F_q^*$ and $a\in \mathbb F_q$.
\begin{align*}
A_n&= \sum_{\substack{x_1, \cdots, x_n \in \mathbb F_{q^2}^*\\
Tr(\prod_i x_i)=0}}\psi\circ Tr(x_1+\cdots+x_n)\\
&=\sum_{a\in \mathbb F_q^*}~~\sum_{b\in \mathbb F_{q^2}^*}\sum_{\substack{x_1,\ldots, x_{n-1}\in \mathbb F_{q^2}^*\\
x_1\cdots x_{n-1}=b}} \psi\circ Tr(x_1+\cdots+x_{n-1})\psi\circ Tr(a\sqrt{\delta}/b)\\
&=(q-1)\sum_{\substack{b\in \mathbb F_{q^2}^*\\Tr(\sqrt{\delta}/b)=0}} Kl_{n-1}(b, q^2)-\sum_{\substack{b\in \mathbb F_{q^2}^*\\Tr(\sqrt{\delta}/b)\neq 0}}Kl_{n-1}(b, q^2).
\end{align*}
Now
\[Tr(\sqrt{\delta}/b)=\sqrt{\delta}/b+\sigma(\sqrt{\delta}/b)=\sqrt{\delta}/b-\sqrt{\delta}/\sigma(b)=0\]
precisely when $b=\sigma(b)$, i.e., $b\in \mathbb F_q^*$. Hence,
\begin{align*}
A_n&=(q-1)\sum_{b\in \mathbb F_q^*} Kl_{n-1}(b, q^2)-\sum_{b\in \mathbb F_{q^2}^*\setminus \mathbb F_q^*}Kl_{n-1}(b, q^2)\\
&=q\sum_{b\in \mathbb F_q^*} Kl_{n-1}(b, q^2)-\sum_{b\in \mathbb F_{q^2}^*}Kl_{n-1}(b, q^2)\\
&=qI_{n-1}- (-1)^{n-1}.
\end{align*}
\end{proof}
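Both the initial values $I_1=-1$, $A_1=q-1$ and the recurrence of the Proposition can be confirmed by brute force in the smallest odd case $q=3$, realising $\mathbb F_9$ as $\mathbb F_3(i)$ with $i^2=-1$, so that $\sigma(a+bi)=a-bi$ and $Tr(a+bi)=2a$. This is an independent numerical check, not part of the proof.

```python
import cmath
from itertools import product

# Brute-force check of I_1, A_1 and the recurrence for q = 3, with
# F_9 = F_3(i), i^2 = -1; elements are pairs (a, b) = a + b*i.
q = 3
units = [(a, b) for a in range(q) for b in range(q) if (a, b) != (0, 0)]

def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % q, (a * d + b * c) % q)

def psi_tr(x):                 # psi(Tr(x)) = e^{2 pi i (2a)/3}
    return cmath.exp(2j * cmath.pi * ((2 * x[0]) % q) / q)

def kl(n, target):             # Kl_n(target, q^2) by brute force
    total = 0.0
    for xs in product(units, repeat=n):
        prod, val = (1, 0), 1.0
        for x in xs:
            prod = mul(prod, x)
            val *= psi_tr(x)   # psi(Tr(x_1+...+x_n)) = prod_i psi(Tr(x_i))
        if prod == target:
            total += val
    return total

def I_(n):                     # sum over a in F_q^*, embedded as (a, 0)
    return sum(kl(n, (a, 0)) for a in range(1, q))

def A_(n):                     # sum over trace-zero units (0, b)
    return sum(kl(n, (0, b)) for b in range(1, q))

assert abs(I_(1) - (-1)) < 1e-8 and abs(A_(1) - (q - 1)) < 1e-8
for n in (2, 3):               # the recurrence of the Proposition
    assert abs(I_(n) - (q * A_(n - 1) + (-1) ** n)) < 1e-8
    assert abs(A_(n) - (q * I_(n - 1) + (-1) ** n)) < 1e-8
```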
\begin{remark}\label{remark_epsilon_factor}
We now give a connection of the hypothesis of Theorem \ref{theorem:rep-square} to $\epsilon$ factors. We thank U. K. Anandavardhanan and Dipendra Prasad for pointing out this connection. We refer to the papers of Fröhlich and Queyrut and Deligne (\cite{Fr-Qu}, \cite{Deligne_local_const_orthogonal} ) for further details.
Let $K$ be a local field with residue field $k\cong \mathbb F_q$. Let $L$ be the unramified quadratic extension of $K$. There is a natural projection map $L^* \xrightarrow{\cong} \mathcal{O}_L^* \times \mathbb Z \to \mathbb F_{q^2}^*$. Via this projection, $\chi$ can be considered as a character of $L^*$, which we denote by $\chi'$. Assume now that $\chi$ restricts trivially to $\mathbb{F}_q^*$. Then $\chi'$ restricts trivially to $K^* \subset L^*$.
For a local field $K$, let $W_K$ denote its Weil group. By the isomorphism $W_L^{ab} \xrightarrow{\cong} L^*$, $\chi'$ can be considered as a character of $W_L^{ab}$, and therefore of $W_L$. Since $\chi'$ restricts trivially to $K^*$, the induced representation $\textrm{Ind}^{W_K}_{W_L}(\chi')$ can be realised over $\mathbb R$. Let $V=\textrm{Ind}^{W_K}_{W_L}([\chi']-[1])$ be the induction to $W_K$ of the virtual representation $[\chi']-[1]$. Then $V$ has dimension $0$ and determinant $1$. By Fröhlich-Queyrut and Deligne, the epsilon factor of $V$ satisfies
\begin{equation*}
\epsilon(V,\tfrac{1}{2})=\chi'(\Delta),
\end{equation*}
for any element $\Delta$ of $L^*$ whose trace to $K$ vanishes. By taking $\Delta \in L^*$ to correspond to an element $\sqrt{\delta}$ of $S^- \subset \mathbb F_{q^2}^*$ defined as in the proof of Theorem \ref{theorem:evenodd}, we see that for $\chi \in C^0_{pr}$, $\chi$ is a square if and only if $\chi(\sqrt{\delta})=1$. This holds precisely when $\chi'(\Delta)=1$, or equivalently when $\epsilon(V,\tfrac{1}{2})=1$.
\end{remark}
\nocite{*}
\end{document}
|
\begin{document}
\title{Witnessing quantumness of a system by observing only its classical features}
\author{C. Marletto$^{a}$ and V. Vedral $^{a,b}$
\\ {\small $^{a}$ Physics Department, University of Oxford}
\\{\small $^{b}$Centre for Quantum Technologies, National University of Singapore}}
\date{April 2016}
\begin{abstract}
\noindent Witnessing non-classicality in the gravitational field has been claimed to be practically impossible. This constitutes a deep problem, which has even led some researchers to question whether gravity should be quantised, due to the weakness of quantum effects. To counteract these claims, we propose a thought experiment that witnesses non-classicality of a physical system by probing it with a qubit. Remarkably, this experiment does not require any quantum control of the system, involving only measuring a single classical observable on that system. In addition, our scheme does not even assume any specific dynamics.
That non-classicality of a system can be established indirectly, by coupling it to a qubit, opens up the possibility that quantum gravitational effects could in fact be witnessed in the lab.
\end{abstract}
\maketitle
Direct evidence in favour of the quantisation of gravity is, at present, hard to obtain. Despite the recent success in detecting gravitational waves, the detection of gravitons -- quantum particles mediating the gravitational field -- has been argued to be practically impossible \cite{NOB}, \cite{NOB1}. These impossibility claims lead one to question whether gravity should be quantised in the first place.
Evidence for quantisation can also be gathered indirectly, by coupling the gravitational field to a quantum system. For example, Feynman \cite{FEY} considered a thought experiment where a test mass in a superposition of two different locations interacts with gravity (see figure 1).
\begin{figure}
\caption{An artist's impression of a mass in a superposition of two locations. In each of the two locations it interacts with gravity, thus creating correlations. Feynman's thought experiment explores the issue of whether these correlations are classical or quantum. }
\end{figure}
His point was that the physical state of the composite system would have different properties according to whether the mass is entangled with gravity, or just somehow classically correlated with it. These two situations could in principle be distinguished, but this requires to witness non-classicality in the gravitational field. Therefore, the thought experiment seems to conceal a circularity: witnessing non-classicality would seem either to require measuring two complementary observables on the field itself, or that the field undergoes some interference process (which itself frequently constitutes a measurement of two complementary observables). However, the possibility of performing either of those operations is precisely what the thought experiment is designed to assess.
This is an instance of a more general problem, affecting the predictions of theoretical arguments in favour of quantisation. Such arguments \cite{FEY, DeW, TER, HEI, BOHRING} claim that a subsystem of the universe (e.g. a field) interacting with a quantum system (which here means any physical system that can implement a qubit) must itself be quantised. The problem is, again, that testing these predictions might seem to require full quantum control of the system that is argued to be quantised; but the existence of such a control is what the test is supposed to probe.
In this paper we offer a solution to this problem. We propose a new thought-experiment to witness non-classicality of a system, by probing it via a qubit, without requiring any quantum control on the system. Specifically, the experiment is performed on the composite system of a qubit $S_Q$ and a classical system $S_C$, which is assumed to have only a single observable $T$; by \qq{classical} we mean precisely a system with only one observable. Our proposed test for non-classicality only requires measuring correlations involving just the observable $T$ on $S_C$, and the system $S_C$ need not undergo any interference. Appropriate values of these correlations, as we shall show, imply that the classical system $S_C$ must have at least one other observable that cannot be simultaneously sharp when the observable $T$ is. This is our indirect witness of non-classicality. We then conjecture the possibility of an indirect measurement of this complementary observable, by coupling the classical system with a qubit via a teleportation-type scheme.
Remarkably, our thought-experiment is formulated in a general, information-theoretic framework -- which is independent of the details of the dynamics of the system $S_C$. Thus, our result is relevant within a wide range of different contexts, going well beyond quantum gravity: for example, it applies to testing non-classicality of macroscopic systems, be they biological systems \cite{DAV, DAVE} or computational devices \cite{TRO}. After all, in any experiment quantum control can only be assumed to exist on a limited number of degrees of freedom, while the rest could for all practical purposes be classical.
Let $\hat q^{(1)}\doteq(\sigma_x\otimes I, \sigma_y\otimes I, \sigma_z\otimes I)$ denote the vector of generators $q_{\alpha}^{(1)}$ of the algebra of observables of the qubit $S_Q$, where $\sigma_{\alpha}, \alpha =x,y,z$, are the Pauli operators and $I$ is the single-qubit unit. Let $T$ be a binary observable on the classical system $S_C$ -- in other words, the classical system is supposed to be a single bit. Without loss of generality, we can represent it as an operator $q_z^{(2)}\doteq I\otimes\sigma_z$. For example, in the case of gravity, $T$ could be a discretised version of the position observable, representing two different locations of a mass which interacts with the quantum system through gravity. (If $S_C$ has a higher dimensionality, our result applies as long as one considers a quantum system $S_Q$ with the same dimensionality as $S_C$).
Consider now an operation defined so that it performs a classical copy of the values held by $S_Q$ with $S_C$ as the target, in the basis defined by the observable $q_z^{(1)}q_z^{(2)}$. In other words, in that basis, it is required to perform the computation $\{00\rightarrow 00, 10\rightarrow 11\}$, where the first slot represents the value held by $S_Q$ and the second slot the value held by $S_C$. However, it is unknown what effect it has on other input states. This is because our scheme is independent of the details of the dynamics. The thought experiment precisely investigates what happens when the input states are $\ket{\pm}\ket{0}$, where $q_z^{(1)}$ is not sharp. Those states therefore act as a probe.
Note that the copy-operation is not assumed to be coherent, unitary or reversible in any sense. For example, it could be thought of as a classical controlled-NOT gate, where the NOT gate is realised by some classical evolution, such as $\ket{0}\bra{0}\rightarrow \cos^2( t)\ket{0}\bra{0}+\sin^2(t)\ket{1}\bra{1}$ for appropriate arguments $t$ (though the evolution need not be continuous).
The thought experiment goes as follows. First, prepare the qubit $S_Q$ in the eigenstates $\ket{\pm}$ of $q_x^{(1)}$; and the classical system $S_C$ in some fixed state, which we denote as $\ket{0}$, representing a state where the observable $T$ is sharp with value $1$. Then, apply the copy-operation. Let us denote by $\rho_{\pm}$ the states of $S_Q\otimes S_C$ thus generated. At this point, measure the averages $\langle q_{\alpha}^{(1)}q_z^{(2)}\rangle_{\pm}$, $\alpha=x,y,z$, and $\langle q_{z}^{(1)}\rangle_{\pm}$ on the states $\rho_{\pm}$. Note that the global states $\rho_{\pm}$ of the composite system cannot be argued to be entangled by construction. This is because $S_C$ need not obey quantum theory.
Now consider a different procedure: prepare the states $\tilde{\rho}_{\pm}$ by applying the same copy-operation as above to each of the states $\rho_{\pm}$. Then measure $\langle A_i^{(1)}q_z^{(2)}\rangle_{\tilde \pm}$ for appropriate observables $A_i^{(1)}q_z^{(2)}$. See figure 2.
\begin{figure}
\caption{Quantum network to prepare the states $\rho_{\pm}$.}
\end{figure}
We shall now argue that certain values of the correlation functions $\langle q_{\alpha}^{(1)}q_z^{(2)}\rangle_{\pm}$ and $\langle A_i^{(1)}q_z^{(2)}\rangle_{\tilde \pm}$ imply that the classical system must have at least another observable that is complementary to $T$. Crucially, both correlation functions only require the \qq{classical} observable $T$ to be measured on $S_C$.
Note first that, in our representation, the most general form of a state of $S_Q\otimes S_C$ is $$\rho = \frac{1}{4} \left ( I +\underline{r}\cdot\hat q^{(1)}+s_z q_z^{(2)}+ \underline{t}\cdot\hat q^{(1)} q_z^{(2)}\right)\;,$$
for some real-valued vectors $\underline r$, $\underline t$ and some real coefficient $s_z$. This state, when interpreted as a two-qubit state, is separable and has no discord \cite{KAVAN}.
Now, suppose that $\langle q_z^{(1)}q_z^{(2)}\rangle_{\pm}=1$ and $\langle q_{\alpha}^{(1)}q_z^{(2)}\rangle_{\pm}=0$, $\forall \alpha\neq z$, are observed, for both $\rho_{\pm}$. This confirms that the quantum and the classical system have undergone some interaction, because that value differs from the same correlation function evaluated for the initial states $\ket{\pm}\ket{0}$. Suppose also that $\rho_{\pm}$ are ensemble-distinguishable from eigenstates of $q_z^{(1)}q_z^{(2)}$, by measuring $\langle q_{\alpha}^{(1)}\rangle_{\pm}$. This rules out the possibility that $\rho_{\pm}$ are themselves eigenstates in that basis. To satisfy these conditions, one must require $\underline{r}=\underline{0}$, $s_z=0$ and $\underline{t}=(0,0,1)$. Thus: $\rho_+=\rho_-=\frac{1}{4} \left ( I + q_z^{(1)} q_z^{(2)}\right)\;.$
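As a quick consistency check on this algebra, one can verify numerically that $\rho=\frac{1}{4}(I+q_z^{(1)}q_z^{(2)})$ reproduces exactly the assumed correlations and is a legitimate, indeed classically correlated, state. The sketch below (Python with numpy) treats both systems as qubits purely for bookkeeping, as in the representation used in the text.

```python
import numpy as np

# The state rho = (1/4)(I + q_z^(1) q_z^(2)) in the two-qubit
# representation: <q_z q_z> = 1, all other correlators with q_z^(2)
# vanish, all local averages vanish, and rho is a valid density matrix.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rho = 0.25 * (np.kron(I2, I2) + np.kron(sz, sz))

def avg(op):                       # <op> = Tr(rho op)
    return np.trace(rho @ op).real

assert abs(avg(np.kron(sz, sz)) - 1) < 1e-12
for s in (sx, sy):                 # <q_alpha q_z> = 0 for alpha != z
    assert abs(avg(np.kron(s, sz))) < 1e-12
for s in (sx, sy, sz):             # all local averages vanish (r = 0)
    assert abs(avg(np.kron(s, I2))) < 1e-12
# rho is a genuine density matrix: unit trace, positive semidefinite
assert abs(np.trace(rho).real - 1) < 1e-12
assert np.min(np.linalg.eigvalsh(rho)) > -1e-12
```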
Suppose further that it is possible to find observables $A_i$ of the qubit with the property that measuring $\langle A_i^{(1)}q_z^{(2)}\rangle_{\tilde\pm}$ can distinguish $\tilde\rho_{\pm}$. This implies that $\rho_+\neq\rho_-$, which contradicts the conclusion of the previous paragraph.
Hence, we conclude that in order to reproduce the above correlation functions, the classical system must have an additional observable $T'$ that cannot be simultaneously sharp when $T$ is. In our representation, that observable can be represented as an operator $q_x^{(2)}$ which does not commute with $q_z^{(2)}$. Our thought experiment therefore constitutes a witness of non-classicality on the system $S_C$, as promised.
This non-classicality could be more general than the strictly quantum one. For example, $T$ and $T'$ might be two overlapping distributions in phase space, corresponding to uncertainty in preparation, as in Spekkens' toy model, \cite{SPEK}. This would effectively correspond to $S_C$ consisting of two classical bits whose values cannot be perfectly resolved. However, this model does not have a natural dynamics, so it is unclear what it would imply as to the physics of this thought experiment, or about the physical constitution of $S_C$.
Crucially, our test only requires applying the copy-operation in the basis defined by $T$, in order to prepare the relevant states; and that those states can be discriminated by {ensemble measurements}, realised by measuring the local observables $A_i$ on the qubit and just $T$ on the $S_C$. These states could, in particular, be not perfectly distinguishable via single-shot measurements (thus the overall evolution might be non-unitary). Therefore our thought experiment does not presuppose the possibility of performing any interference on the classical system; nor the possibility of measuring other observables than $T$.
Whilst this test allows one to conclude that there must be an additional observable $T'$ which is necessary to describe the accessible states of $S_C$, that observable need not be measurable directly. It might be, for example, that there is some fundamental limitation to how well its eigenstates can be resolved from one another: this highlights an interesting distinction between an observable being directly measurable and its being necessary to describe the accessible states of a system.
We now discuss how assuming the possibility of a coherent interaction between the classical system and some other qubit allows one to measure indirectly that observable. The scheme goes as follows.
As before, prepare the states $\rho_{\pm}$ on $S_Q\otimes S_C$. At this point, apply the operation which in the basis defined by $q_z^{(1)}q_z^{(2)}$ realises the computation $\{00\rightarrow 00, 01\rightarrow 11\}$. This is a copy of the $q_z$ values held by $S_C$, this time with $S_Q$ as the target. As explained in \cite{MAVE}, this operation must generate, acting on each of the states $\rho_{\pm}$, two new (possibly mixed) states ${\alpha_+}$ and ${\alpha_-}$ on $S_C$. As we said, these states need not be distinguishable from one another; moreover, since it is not possible to measure any other observable than $T$ on $S_C$, one cannot reconstruct them by applying a procedure such as state tomography directly to $S_C$. However, by bringing in another qubit $S_Q'$, it is possible to apply on $S_C\otimes S_Q'$ a sequence of three CNOT gates in the $q_z$ basis to perform a logical swap \cite{NIE}. This allows one to prepare the qubit $S_Q'$ in each of those two states. At this point, state tomography on the qubit allows one to distinguish the states asymptotically, thus showing indirectly that those states existed on $S_C$. See figure 3.
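The three-CNOT realisation of the logical swap invoked here is the standard identity $\mathrm{CNOT}_{12}\,\mathrm{CNOT}_{21}\,\mathrm{CNOT}_{12}=\mathrm{SWAP}$ \cite{NIE}; a direct matrix check in the computational basis:

```python
import numpy as np

# CNOT with the first qubit as control, then with the second as control;
# their alternation three times equals the SWAP gate, which transfers
# the state held by S_C onto the probe qubit S_Q'.
CNOT12 = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]], dtype=float)
CNOT21 = np.array([[1, 0, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0],
                   [0, 1, 0, 0]], dtype=float)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

assert np.allclose(CNOT12 @ CNOT21 @ CNOT12, SWAP)
```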
\begin{figure}
\caption{Quantum network to prepare the states $\alpha_{\pm}$.}
\end{figure}
Incidentally, if the states ${\alpha_+}$ and ${\alpha_-}$ are not orthogonal to one another, an orthogonalisation procedure would allow one to argue for the existence of {\sl two} other observables of $S_C$, in addition to $T$. Each would be represented by an operator having, respectively, ${\alpha_+}$ and ${\alpha_-}$ as eigenstates. In the case of this test, too, the only observable measured on the classical system is $T$; however, overall coherence is required to realise the swap.
We emphasise that the central feature of our analysis is that it does not assume any particular dynamical model for the system whose non-classicality is to be witnessed, nor for the coupling between it and the quantum system. This is in contrast to the recent argument in \cite{tom} where quantumness of a given system is confirmed indirectly by its ability to entangle two other bona fide quantum systems. In that sense the argument in \cite{tom} is an extension and elaboration of Feynman's argument \cite{FEY}. In the latter the witness of non-classicality would be the existence of bipartite entangled states generated by coherent evolution, while in \cite{tom} the witness is the existence of states with discord between the classical and the quantum systems, which forces the classical system to have a complementary observable. In both these arguments, however, the interactions are all assumed to be unitary and furthermore the system to be tested is described within the same formalism; whereas in our scheme, crucially, there are no such assumptions.
Our thought experiment is also of practical importance as it illustrates how to manipulate quantumly a system on which there is no direct quantum control. Thus, our thought experiment is related to the experiments that prepare and subsequently witness \qq{Schr\"odinger cat} states of light, \cite{HAR}. There, the system $S_C$ corresponds to the EM field, initially prepared in a coherent state; $S_Q$ is an atom, initially prepared in a superposition of its two energy levels. The two systems interact in such a way that they evolve into an entangled state, which corresponds to one of our $ \rho_{\pm}$. By measuring the atom in a complementary basis, the field is left in one of two orthogonal {Schr\"odinger cat states}. The atom (or an ensemble of them) is then also used as a probe, to witness the {cat states}. In this way the complementary observables of the EM field are never measured directly, just like in our thought experiment.
Witnessing non-classicality of a physical system that need not obey quantum theory is a key task in contemporary physics. It is crucial for testing predictions that gravity is quantised; but also to explore the quantum-to-classical boundary. For example, it is necessary to test predictions that macroscopic systems (e.g. a bacterium) coupled to a quantum system are, themselves, quantum. In all such cases, one cannot assume full quantum control on the physical system $S_C$ whose non-classicality is to be witnessed. For instance, tests of non-classicality designed for quantum systems, e.g. violation of Bell inequalities, are inadequate. Our thought experiment is a proposal for a new approach to performing that task. Its strength is that, by using a quantum probe, it provides an indirect witness of non-classicality, which requires only measuring a single observable on the system $S_C$. In addition, it is remarkably general: it only relies on information-theoretic witnesses, without assuming any particular dynamics for the system $S_C$. Thus, our experiment is applicable to all the above open problems -- a task that we leave for future work.
\textit{Acknowledgments}:
The authors thank Tomek Paterek and Tanjung Krisnanda for helpful comments. CM's research was supported by the Templeton World Charity Foundation. VV thanks the Oxford Martin School, the John Templeton Foundation, the EPSRC (UK) and the Ministry of Manpower (Singapore). This research is also supported by the National Research Foundation, Prime Minister's Office, Singapore, under its Competitive Research Programme (CRP Award No. NRF-CRP14-2014-02) and administered by the Centre for Quantum Technologies, National University of Singapore.
\end{document}
\begin{document}
\title{New results in $t$-tone coloring of graphs}
\author{
Daniel W. Cranston\thanks{Mathematics Dept., Virginia Commonwealth University,
[email protected]},
Jaehoon Kim\thanks{Mathematics Dept., University of Illinois, [email protected].
Research partially supported by the Arnold O. Beckman Research Award of the
University of Illinois at Urbana-Champaign.},
and William B. Kinnersley\thanks{Mathematics Dept., University of Illinois,
[email protected]. Research partially supported by
NSF grant DMS 08-38434, ``EMSW21-MCTP: Research Experience for Graduate
Students''.}
}
\maketitle
\begin{abstract}
A $t$-tone $k$-coloring of $G$ assigns to each vertex of $G$ a set of $t$
colors from $\{1, \ldots, k\}$ so that vertices at distance $d$ share fewer
than $d$ common colors. The {\it $t$-tone chromatic number} of $G$, denoted
$\tau_t(G)$, is the minimum $k$ such that $G$ has a $t$-tone $k$-coloring.
Bickle and Phillips showed that always $\tau_2(G) \le [\Delta(G)]^2 +
\Delta(G)$, but conjectured that in fact $\tau_2(G) \le 2\Delta(G) + 2$; we
confirm this conjecture when $\Delta(G) \le 3$ and also show that always
$\tau_2(G) \le \ceil{(2 + \sqrt{2})\Delta(G)}$. For general $t$ we prove that
$\tau_t(G) \le (t^2+t)\Delta(G)$.
Finally, for each $t\ge 2$ we show that there exist constants $c_1$ and
$c_2$ such that for every tree $T$ we have $c_1 \sqrt{\Delta(T)} \le \tau_t(T)
\le c_2\sqrt{\Delta(T)}$.
\end{abstract}
\section{Introduction}
In standard vertex coloring, we give colors to the vertices of a graph so
that adjacent vertices get distinct colors. Several variants of graph
coloring place restrictions on the colors of vertices that are near each
other, but not necessarily adjacent. In a {\em distance-$k$ coloring}, any
vertices within distance $k$ of each other must receive distinct colors. In
an {\em injective coloring}, any two vertices with a common neighbor must
receive distinct colors, but adjacent vertices need not.
In an {\em $L(2,1)$-labeling}
each vertex receives a nonnegative integer as its label, such that the labels
on adjacent vertices differ by at least 2 and those on vertices at distance 2
differ by at least 1.
Bickle and Phillips~\cite{BP} introduced the related notion of {\em $t$-tone
coloring}. Intuitively, a {\em $t$-tone $k$-coloring} of $G$ assigns to each
vertex of $G$ a set of $t$ colors from $\{1, \ldots, k\}$ so that vertices at
distance $d$ share fewer than $d$ common colors. This notion is especially appealing when $t = 2$. In this case, each vertex receives a set of two colors; adjacent vertices receive disjoint sets and vertices at distance 2 receive distinct sets.
Before giving a formal definition, we first establish some basic notation and
terminology. We write $[k]$ as shorthand for $\{1, \ldots, k\}$ and denote by
${[k] \choose t}$ the family of $t$-element subsets of $[k]$. We denote the
distance between vertices $u$ and $v$ by $d(u,v)$. Vertices $u$ and $v$ are
{\em neighbors} if $d(u,v)=1$ and {\em second-neighbors} if $d(u,v) = 2$.
\begin{definition}\label{def_ttone}\cite{BP}
Let $G$ be a graph and $t$ a positive integer. A {\em $t$-tone $k$-coloring} of $G$ is a function $f: V(G) \rightarrow {[k] \choose t}$ such that $\size{f(u) \cap f(v)} < d(u,v)$ for all distinct vertices $u$ and $v$. A graph that has a $t$-tone $k$-coloring is {\em $t$-tone $k$-colorable}. The {\em $t$-tone chromatic number} of $G$, denoted $\tau_t(G)$, is the minimum $k$ such that $G$ is $t$-tone $k$-colorable.
\end{definition}
Given a $t$-tone coloring $f$ of $G$, we call $f(v)$ the {\em label} of $v$ and
the elements of $[k]$ {\em colors}. When the meaning is clear, we
omit set notation from labels; that is, we denote the label $\{a, b\}$ by $ab$.
Note that for each $t$, the parameter $\tau_t$ is monotone: when $H$ is a
subgraph of $G$, every $t$-tone $k$-coloring of $G$ restricts to a $t$-tone
$k$-coloring of $H$, so $\tau_t(H) \le \tau_t(G)$.
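Definition~\ref{def_ttone} can be checked mechanically. The following Python sketch (an illustration of ours, not part of \cite{BP}; the helper names are hypothetical) verifies a proposed labeling by comparing label intersections against BFS distances:

```python
from collections import deque

def distances_from(adj, s):
    # BFS distances from s in a graph given as an adjacency-list dict.
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_t_tone(adj, labels, t):
    # A t-tone coloring requires |f(u) & f(v)| < d(u, v) for all distinct u, v.
    if any(len(L) != t for L in labels.values()):
        return False
    for u in adj:
        dist = distances_from(adj, u)
        for v in dist:
            if v != u and len(labels[u] & labels[v]) >= dist[v]:
                return False
    return True

# A 2-tone 5-coloring of the 5-cycle: adjacent labels are disjoint,
# and labels at distance 2 share at most one color.
C5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
f = {0: {1, 2}, 1: {3, 4}, 2: {5, 1}, 3: {2, 3}, 4: {4, 5}}
print(is_t_tone(C5, f, 2))  # True
```

Pairs at infinite distance impose no constraint, which the BFS loop handles automatically by skipping unreachable vertices.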
Bickle and Phillips \cite{BP} established several basic results on $t$-tone
coloring, many of which focused on the relationship between
$\tau_2(G)$ and $\Delta(G)$. By looking at proper colorings of the graph
$G^2$, they proved that always $\tau_2(G) \le [\Delta(G)]^2 + \Delta(G)$.
However, they conjectured that this bound is far from tight:
\begin{conjecture}\cite{BP}\label{conj_maxdegree}
If $G$ is a graph with maximum degree $r$, then $\tau_2(G) \le 2r + 2$. If
$r\ge 3$, then equality holds only when $G$ contains $K_{r+1}$.
\end{conjecture}
When $G$ is 3-regular, they posed the following stronger conjecture:
\begin{conjecture}\cite{BP}\label{conj_cubic}
If $G$ is a 3-regular graph, then:
\begin{enumerate}
\item[(a)] $\tau_2(G) \le 8$;
\item[(b)] $\tau_2(G) \le 7$ when $G$ does not contain $K_4$;
\item[(c)] $\tau_2(G) \le 6$ when $G$ does not contain $K_4 - e$.
\end{enumerate}
\end{conjecture}
\noindent
Since they also characterized all 2-tone 5-colorable 3-regular graphs, this conjecture would yield a complete characterization of the 2-tone chromatic numbers of 3-regular graphs.
In Section \ref{2tone}, we focus on 2-tone colorings, with an eye toward
proving Conjectures~\ref{conj_maxdegree} and~\ref{conj_cubic}.
As progress toward Conjecture \ref{conj_maxdegree}, we give a short proof that
always $\tau_2(G) \le \ceil{(2 + \sqrt{2})\Delta(G)}$. Simple modifications of
this argument yield better bounds when $G$ is bipartite or chordal. We next
refute part (c) of Conjecture \ref{conj_cubic} by showing that the Heawood
graph, which has girth 6 and hence contains no $K_4 - e$, has 2-tone chromatic number 7. Finally, our main result in
Section~\ref{2tone} confirms part (a) of Conjecture \ref{conj_cubic}:
\begin{theorema}
If $G$ is a graph with $\Delta(G) \le 3$, then $\tau_2(G) \le 8$.
\end{theorema}
In Section \ref{ttone}, we consider $t$-tone colorings for general $t$. Our
main result is:
\begin{theorema}
For each $t$ there exists a constant $c=c(t)$ such that $\tau_t(T) \le
c\sqrt{\Delta(T)}$ whenever $T$ is a tree, and this bound is asymptotically
tight.
\end{theorema}
For general graphs, our best bound is $\tau_t(G) \le (t^2 + t)\Delta(G)$. This
result implies that, for fixed $\Delta(G)$, we have $\tau_t(G) \le ct^2$ for
some constant $c$. The asymptotics of this bound are near-optimal with respect
to $t$, since for each $r \ge 3$ there exist a constant $c$ and graphs $G_t$ such that $\Delta(G_t) = r$ and $\tau_t(G_t) \ge ct^2/\lg t$. Finally, when $G$ has degeneracy at most $k$, we prove $\tau_t(G) \le kt + kt^2[\Delta(G)]^{1-1/t}$.
\section{2-tone Coloring}\label{2tone}
In this section we focus on 2-tone coloring. We first attack Conjecture
\ref{conj_maxdegree}. It was shown in \cite{BP} that always $\tau_2(G) \le
[\Delta(G)]^2 + \Delta(G)$; we improve this result by giving an upper bound on
$\tau_2(G)$ that is linear in
$\Delta(G)$, rather than quadratic. This proof---along with several others
throughout the paper---proceeds by building a $t$-tone coloring of a graph
iteratively, coloring one vertex at a time.
\begin{definition}
A {\em partial $t$-tone $k$-coloring} of a graph $G$ is a function $f: S \rightarrow {[k] \choose t}$, with $S \subseteq V(G)$, such that $\size{f(u) \cap f(v)} < d(u,v)$ whenever $u,v \in S$. Vertices not in $S$ are {\em uncolored}. An {\em extension} of $f$ to an uncolored vertex $v$ is a partial coloring $f^\prime$ that assigns a label to $v$ but otherwise agrees with $f$.
\end{definition}
It is important to note that a $t$-tone $k$-coloring of a subgraph $H$ of $G$ need not be a partial $t$-tone $k$-coloring of $G$, since the distance between two vertices may be smaller in $G$ than in $H$.
\begin{theorem}\label{2tone_maxdegree}
For every graph $G$, we have $\tau_2(G) \le \ceil{(2 + \sqrt{2})\Delta(G)}$.
\end{theorem}
\begin{proof}
Let $k = \ceil{(2 + \sqrt{2})\Delta(G)}$ and let $V(G) = \{v_1, \ldots, v_n\}$.
Starting with all vertices uncolored, we extend our partial coloring to $v_1, v_2, \ldots, v_n$ in order.
When extending to $v_i$, we need only enforce two constraints. First, the label
on $v_i$ cannot contain any color appearing on $v_i$'s neighbors; there
remain at least $\ceil{\sqrt{2}\Delta(G)}$ other colors, so at least ${\sqrt{2}\Delta(G) \choose 2}$ labels are available. Next, the label on $v_i$ cannot appear on any second-neighbor of $v_i$; this condition forbids at most
$\Delta(G)(\Delta(G) - 1)$ labels. Since
${\sqrt{2}\Delta(G) \choose 2} > \Delta(G)(\Delta(G)-1)$, some label remains
for use on $v_i$.
\end{proof}
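The proof of Theorem~\ref{2tone_maxdegree} is constructive, and the iterative extension step translates directly into a greedy algorithm. The Python sketch below is ours (the function name is hypothetical), using $k = \ceil{(2+\sqrt{2})\Delta(G)}$ colors:

```python
import math
from itertools import combinations

def greedy_two_tone(adj):
    # Greedy 2-tone coloring following the proof: process vertices in order,
    # avoiding colors on colored neighbors and labels on colored second-neighbors.
    delta = max(len(adj[v]) for v in adj)
    k = math.ceil((2 + math.sqrt(2)) * delta)
    label = {}
    for v in adj:
        nbr_colors = {c for u in adj[v] if u in label for c in label[u]}
        second = {w for u in adj[v] for w in adj[u] if w != v and w not in adj[v]}
        used = {frozenset(label[w]) for w in second if w in label}
        for pair in combinations(sorted(set(range(1, k + 1)) - nbr_colors), 2):
            if frozenset(pair) not in used:
                label[v] = set(pair)
                break
    return k, label
```

By the counting argument in the proof, the inner loop always finds an available pair, so every vertex receives a label.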
Similar approaches yield tighter bounds on $\tau_2(G)$ for bipartite graphs and chordal graphs.
\begin{proposition}
If $G$ is a bipartite graph, then $\tau_2(G) \le 2\ceil{\sqrt{2}\Delta(G)}$.
\end{proposition}
\begin{proof}
A {\em palette} is a set of colors; we construct a 2-tone coloring of $G$ using
two disjoint palettes, each of size $\ceil{\sqrt{2}\Delta(G)}$. We assign each
partite set its own palette and color the vertices in each set using only
colors from its palette. Since adjacent vertices are assured disjoint labels,
it suffices to ensure that vertices at distance 2 receive distinct labels.
We color each partite set independently. Within a partite set, we order the
vertices arbitrarily and color iteratively. Each vertex $v$ has at most
$\Delta(G)(\Delta(G)-1)$ second-neighbors. Since each palette admits ${\sqrt{2}\Delta(G) \choose 2}$ labels, we may always extend a partial coloring to $v$.
\end{proof}
A {\em simplicial elimination ordering} of a graph $G$ is an ordering $v_1, \ldots, v_n$ of $V(G)$ such that the later neighbors of each vertex form a clique; it is well-known that chordal graphs are precisely those graphs having simplicial elimination orderings.
\begin{proposition}
If $G$ is a chordal graph, then $\tau_2(G) \le \ceil{(1 + \sqrt{6}/2)\Delta(G)} + 1$.
\end{proposition}
\begin{proof}
Let $k = \ceil{(1 + \sqrt{6}/2)\Delta(G)} + 1$. Let $v_1, \ldots, v_n$ be the
reverse of a simplicial elimination ordering of $G$.
We construct a 2-tone
$k$-coloring of $G$ by coloring iteratively with respect to this ordering.
Suppose we want to color $v_i$. Let $S$ be the set of
earlier neighbors of $v_i$, and let $d = \size{S}$.
If $v_j$ is a later neighbor of $v_i$, then
by our choice of ordering, all earlier neighbors of $v_j$ are adjacent to $v_i$.
Hence every earlier second-neighbor of $v_i$ is adjacent to some vertex in $S$.
Each vertex in $S$ is adjacent to $v_i$ itself along with the other $d-1$
vertices of $S$. Hence $v_i$ has at most $d(\Delta(G)-d)$ earlier second-neighbors.
As many as $2d$ colors may appear on $S$, so at least
$k-2d$ colors remain. We have ${k-2d \choose 2}$ labels using these colors,
so we need ${k-2d \choose 2} > d(\Delta(G)-d)$.
Straightforward computation shows that this inequality holds whenever $k\ge
\ceil{(1+\sqrt{6}/2)\Delta(G)}+1$.
\end{proof}
\begin{proposition}
For every $\epsilon > 0$, there exists an $r_0$ such that whenever $r > r_0$, if $G$ is a chordal graph with maximum degree $r$, then $\tau_2(G) \le (2 + \epsilon)r$.
\end{proposition}
\begin{proof}
Let $G$ be a chordal graph with maximum degree $r$. Kr\'al \cite{K} showed that, for some constant $c$, the graph $G^2$ is $cr^{3/2}$-degenerate. Thus, there is some ordering $v_1, \ldots, v_n$ of $V(G)$ such that each vertex has at most $cr^{3/2}$ earlier second-neighbors. Let us color iteratively with respect to this ordering using $k + 2r$ colors, for some $k$ to be specified later. When coloring $v_i$, as many as $2r$ colors may appear on its neighbors; at least $k$ other colors remain. Thus we may color $v_i$ so long as it has fewer than ${k \choose 2}$ earlier second-neighbors; taking $k \ge \sqrt{2c}r^{3/4} + 1$ suffices. Hence $\tau_2(G) \le 2r + \sqrt{2c}r^{3/4} + 1$, from which the claim follows.
\end{proof}
We next turn our attention to 3-regular graphs and Conjecture \ref{conj_cubic}.
Later in this section, we prove part (a) of Conjecture \ref{conj_cubic} by showing that $\tau_2(G) \le 8$ whenever $\Delta(G) \le 3$; first we disprove part (c) by showing that the Heawood graph, which has girth 6 (and hence contains no $K_4 - e$), has 2-tone chromatic number 7.
\begin{theorem}
The Heawood graph is not 2-tone 6-colorable.
\end{theorem}
\begin{proof}
Let $G$ denote the Heawood graph.
Recall that $G$ is the incidence graph of the Fano plane; thus it is bipartite,
and every two distinct vertices in the same partite set have exactly one common
neighbor (and hence lie at distance 2). Call a 2-tone 6-coloring of $G$ a {\em good coloring}. For distinct colors $a,b,c,d$, call the set of labels $\{ab, cd, ac, bd\}$ a {\em complementary pair}. For distinct colors $a,b,c,d,e,f$, call the set of labels $\{ab, cd, ef\}$ a {\em disjoint triple}.
Let $A$ and $B$ denote the partite sets of $G$.
We prove three claims: {\bf (1)} No good coloring uses all four labels in a
complementary pair on vertices in the same partite set; {\bf (2)} No good
coloring uses all three labels in a disjoint triple on vertices in the same
partite set; {\bf (3)} For any subset $L$ of ${[6]\choose 2}$ with $|L|=7$,
either $L$
contains a complementary pair or it contains a disjoint triple.
The theorem immediately follows from these claims by supposing $G$
has a good coloring and letting $L$ be the set of labels used on $A$.
{\bf (1)} Suppose instead that the claim is false. By symmetry, labels $12, 34,
13,$ and $24$ all appear on vertices in $A$. The common neighbor of the
vertices labeled $12$ and $34$ must receive label $56$, as must the common
neighbor of the vertices labeled $13$ and $24$. Since $G$ is 3-regular, the two
vertices labeled $56$ are distinct; since they lie at distance 2, the coloring
is invalid.
{\bf (2)} Suppose instead that the claim is false. By symmetry, labels $12,
34,$ and $56$ all appear on vertices in $A$. These vertices
cannot all have a common neighbor $u$, since then $u$ would have no valid
label. Thus they lie on a 6-cycle, and the three vertices of
this 6-cycle in $B$ must also have labels $12, 34,$ and $56$.
Consider a vertex $v\in A$ not
adjacent to any vertex of this 6-cycle. (There is exactly one such vertex.)
The label on $v$ cannot be $12, 34,$ or $56$, so without loss of
generality, it is $13$. The common neighbor of $v$ and the vertex in $A$ having label
$56$ must have label $24$, and the common neighbor of this vertex and the
vertex in $B$ having label $56$ must have label $13$. So two vertices in
$A$ have label $13$; they must be distinct, since only one is
adjacent to a vertex on the 6-cycle.
Since they lie at distance 2, the coloring is invalid.
{\bf (3)}
Consider a color appearing in the most elements of $L$; without loss of
generality, this color is 1. Let $L_1$ be the set of labels in
$L$ that contain
1. Note that $3 \le \size{L_1} \le 5$. We consider three cases.
If $\size{L_1} = 5$, then exactly two labels in $L$ do not
appear in $L_1$. If these labels are disjoint, then $L$
contains a disjoint triple; otherwise, $L$ contains a complementary pair.
If $\size{L_1} = 4$, then without loss of generality $L_1 =
\{12, 13, 14, 15\}$. If two labels in $L$ contain 6, then
$L$ contains a complementary pair. Similarly, if $L -
L_1$ contains two non-disjoint labels not using 6, then $L$
contains a complementary pair. Thus we may suppose that $L -
L_1$ contains two disjoint labels not using 6 and one label using 6.
Now the label using 6 is disjoint from one of the labels not using 6; these two
labels, together with some label from $L_1$, form a disjoint triple.
If $\size{L_1} = 3$, then without loss of generality $L_1 =
\{12, 13, 14\}$. Let $S_1 = \{23, 24, 34\}$, let $S_2 = \{25, 35, 45\}$, and
let $S_3 = \{26, 36, 46\}$. If $L$ contains two or more labels from
any single $S_i$, then these labels, together with two labels from
$L_1$, form a complementary pair. Thus we may suppose $L$
contains exactly one label from each $S_i$ and also contains the label $56$.
Now the label in $L \cap S_1$, the label $56$, and some element of
$L_1$ form a disjoint triple.
\end{proof}
Below we give a 2-tone 7-coloring of the Heawood graph, which completes the proof that its 2-tone chromatic number is 7.
\begin{center}
\begin{tikzpicture}[thick,scale=2]
\tikzstyle{blavert}=[circle, draw, fill=black, inner sep=0pt, minimum width=5pt]
\tikzstyle{redvert}=[circle, draw, fill=red, inner sep=0pt, minimum width=5pt]
\draw \foreach \i in {2, 4, ..., 14}
{
(\i*360/14:1) node[blavert]{} -- (\i*360/14+360/14:1)
(\i*360/14+360/14:1) node[redvert]{} -- (\i*360/14+720/14:1)
(\i*360/14:1) -- (\i*360/14+5*360/14:1)
};
\draw \foreach \i/\lab in
{1/14, 2/25, 3/36, 4/47, 5/51, 6/62, 7/73, 8/14, 9/25, 10/36, 11/47, 12/51, 13/62, 14/73}
{
(\i*360/14:1.2) node[]{$\lab$}
};
\draw (0,-1.5) node {Fig.~1: A 2-tone 7-coloring of the Heawood graph.};
\end{tikzpicture}
\end{center}
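The coloring in Fig.~1 can be verified mechanically. The sketch below (ours, for illustration) builds the Heawood graph exactly as drawn -- a 14-cycle on vertices $1, \ldots, 14$ with a chord from each even vertex $i$ to $i + 5 \pmod{14}$ -- and checks the 2-tone condition for every pair of vertices:

```python
from collections import deque

# Heawood graph as drawn in Fig. 1: vertices 1..14 on a cycle, plus a
# chord from each even vertex i to i + 5 (mod 14).
adj = {v: set() for v in range(1, 15)}
def add_edge(u, v):
    adj[u].add(v)
    adj[v].add(u)
for i in range(1, 15):
    add_edge(i, i % 14 + 1)
for i in range(2, 15, 2):
    add_edge(i, (i + 4) % 14 + 1)

# Labels from the figure; "51" denotes the set {5, 1}, and so on.
raw = [14, 25, 36, 47, 51, 62, 73, 14, 25, 36, 47, 51, 62, 73]
label = {i + 1: {raw[i] // 10, raw[i] % 10} for i in range(14)}

def dist(s, t):
    # BFS distance between vertices s and t.
    seen = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return seen[u]
        for w in adj[u]:
            if w not in seen:
                seen[w] = seen[u] + 1
                q.append(w)

ok = all(len(label[u] & label[v]) < dist(u, v)
         for u in adj for v in adj if u < v)
print(ok)  # True: the figure gives a valid 2-tone 7-coloring
```

Note that repeated labels (e.g. $14$ on antipodal vertices) are permitted because those vertices lie at distance 3.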
We next show that $\tau_2(G) \le 8$ whenever $\Delta(G) \le 3$, thus verifying part (a) of Conjecture \ref{conj_cubic}. The proof requires careful attention to detail, so we isolate some of the more delicate arguments in lemmas. Before stating the lemmas, we introduce some terminology.
\begin{definition}
Let $f$ be a partial $2$-tone coloring of a graph $G$ and let $v$ be an uncolored vertex. A {\em valid} label for $v$ is a label by which $f$ can be extended to $v$. A {\em free} color at $v$ is one not appearing on any neighbor of $v$. A {\em candidate} label for $v$ is a label containing only free colors. An {\em obstruction} of $v$ is a candidate label that is not valid (because it appears on some second-neighbor of $G$).
\end{definition}
Our first lemma is short and simple, but provides a good introduction to the techniques that appear throughout the proof.
\begin{lemma}\label{cubic_extend}
Let $G$ be a graph with maximum degree at most 3. Let $f$ be a partial 2-tone 8-coloring of $G$ and let $v$ be an uncolored vertex. If $v$ has at least one uncolored neighbor and at least one uncolored second-neighbor, then $f$ can be extended to $v$.
\end{lemma}
\begin{proof}
At least four colors are free at $v$, so it has at least six candidate labels.
Since $v$ has an uncolored second-neighbor, $v$ has at most five obstructions, so some candidate is valid.
\end{proof}
In the main proof we first color all vertices except for those on some induced
cycle $C$; we then iteratively extend our partial coloring along $C$. We
will need to maintain some flexibility while doing so, and the next two lemmas
provide this desired freedom.
\begin{lemma}\label{cubic_freedom_0}
Let $G$ be a 3-regular graph, let $v$ be a vertex of $G$, and let $w_1$ and $w_2$ be distinct neighbors of $v$. Let $f$ be a partial coloring of $G$ that leaves $v$, $w_1$, and $w_2$ uncolored, and let $f_1$ and $f_2$ be distinct extensions of $f$ to $w_1$. If two second-neighbors of $v$ do not yield obstructions under any $f_i$, then some $f_i$ can be extended to $v$ in three different ways.
\end{lemma}
\begin{proof}
Let $S_i$ be the set of free colors at $v$ under $f_i$. Under each $f_i$, at
most four colors appear on neighbors of $v$, so $\size{S_i} \ge 4$. Either
some $S_i$ contains at least five colors, or $S_1 \not = S_2$; in either case,
the $f_i$ yield at least nine candidate labels between them. Since $v$ has at
most four obstructions, the two $f_i$ together yield at least five valid
labels, so by the Pigeonhole Principle
some $f_i$ admits three extensions to
$v$.
\end{proof}
\begin{lemma}\label{cubic_freedom}
Let $G$ be a 3-regular graph, let $v$ be a vertex of $G$, and let $w_1$ and $w_2$ be distinct neighbors of $v$. Let $f$ be a partial coloring of $G$ that leaves $v$, $w_1$, and $w_2$ uncolored, and let $f_1$, $f_2$, and $f_3$ be distinct extensions of $f$ to $w_1$. If some second-neighbor of $v$ does not yield an obstruction under any $f_i$, then some $f_i$ can be extended to $v$ in three different ways.
\end{lemma}
\begin{proof}
Let $S_i$ be the set of colors free at $v$ under $f_i$. Under each $f_i$, at most four colors appear on neighbors of $v$, so $\size{S_i} \ge 4$. If some $S_i$ contains five or more colors, then $v$ has at least ten candidate labels and at most five obstructions under $f_i$, so $f_i$ admits at least five extensions to $v$. Otherwise, since the $f_i$ assign different labels to $w_1$, no two $S_i$ are the same. Since $v$ has at least six candidate labels under each $f_i$, it suffices to show that $v$ cannot have four obstructions under each $f_i$ simultaneously.
Without loss of generality, $S_1 = \{1, 2, 3, 4\}$. Since $S_2 \not = S_1$, we may assume $5 \in S_2$. If additionally $S_2$ contains some other color not in $S_1$, then at most one label is a candidate under both $f_1$ and $f_2$; in this case $v$ has at most one common obstruction under $f_1$ and $f_2$, so it cannot have four obstructions under both $f_1$ and $f_2$. Hence we may assume $S_2 = \{1, 2, 3, 5\}$. Now $f_1$ and $f_2$ yield three common candidates, namely $12, 13,$ and $23$; if $v$ does not have three valid labels under either $f_i$, then all three common candidates must be obstructions. Moreover, of the two remaining obstructions, one is in $\{14, 24, 34\}$ and the other is in $\{15, 25, 35\}$. If $S_3$ contains $1, 2,$ and $3$, then without loss of generality $S_3 = \{1, 2, 3, 6\}$, and $f_3$ can be extended via $16$, $26$, and $36$. Otherwise at most one of $12$, $13$, and $23$ is an obstruction under $f_3$, and again $f_3$ admits three extensions to $v$.
\end{proof}
Our final lemma helps us leverage the flexibility ensured by Lemma \ref{cubic_freedom} to complete a partial coloring.
\begin{lemma}\label{cubic_finish_1}
Let $G$ be a 3-regular graph. Let $v$ be a vertex of $G$, let $w_1,w_2,$ and $w_3$ be its neighbors, and let $x$ be one of its second-neighbors. Let $f$ be a partial coloring of $G$ that leaves $v$ and $w_1$ uncolored, and under which $w_2$ shares one color with $w_3$ and one with $x$. If $f$ has three extensions to $w_1$, then one of these extensions can itself be extended to $v$.
\end{lemma}
\begin{proof}
Let $f_1$, $f_2$, and $f_3$ be extensions of $f$ to $w_1$. Since $w_2$ and $x$ share a color, $x$ cannot yield an obstruction of $v$, so $v$ has at most five different obstructions between all three $f_i$. Since $w_2$ and $w_3$ share a color, at most five colors appear on neighbors of $v$ in each $f_i$, hence always at least three colors are free at $v$. Let $S_i$ be the set of free colors at $v$ under $f_i$. If any $S_i$ contains at least four colors, then $v$ has at least six candidate labels under $f_i$, one of which must be valid. Otherwise, each $S_i$ has size three; moreover, since the $f_i$ differ in the colors they assign to $w_1$, no two $S_i$ are identical. $S_1$ and $S_2$ together yield at least five different candidate labels for $v$, and $S_3$ yields a sixth; again we have six candidate labels, one of which must be valid. Thus some $f_i$ can be extended to $v$.
\end{proof}
We are now ready to present the main proof.
\begin{theorem}
If $G$ is a graph with $\Delta(G)\le 3$, then $\tau_2(G) \le 8$.
\end{theorem}
\begin{proof}
Suppose otherwise, and let $G$ be a smallest counterexample. Clearly $G$ is connected and is not $K_4$.
Suppose that $G$ is not 3-regular, and let $v$ be a vertex of degree 1 or 2. By Lemma \ref{cubic_extend}, iteratively coloring in non-increasing order of distance from $v$ yields a partial 2-tone 8-coloring of $G$ leaving only $N[v]$ uncolored. Each neighbor $u$ of $v$ now has at least four free colors (hence at least six candidate labels) and at most five second-neighbors, so we may extend the coloring to $u$. Likewise, $v$ itself now has at least four free colors and at most four second-neighbors, so we may extend to $v$ as well, completing the coloring and contradicting the choice of $G$. Hence $G$ must be 3-regular.
Next suppose that $G$ contains an induced $K_{2,3}$. Let $x_1, x_2, y_1, y_2$,
and $y_3$ be the vertices of this $K_{2,3}$, with the $x_i$ the vertices of
degree 3 and the $y_i$ the vertices of degree 2; let $u_i$ be the third
neighbor of each $y_i$. Let $G^\prime = G - \{x_1, x_2, y_1, y_2, y_3\}$.
Since $G^\prime$ is not 3-regular, it has a 2-tone 8-coloring, which is also a
partial 2-tone 8-coloring of $G$. Without loss of generality, the color 1 does
not appear on any $u_i$. We aim to color each $y_i$ with a label containing
color 1; each $y_i$ has five such candidate labels and at most four
second-neighbors, so this is possible. Now each $x_i$ has at least four free
colors, and hence at least six candidate labels. Since each $x_i$ has at most
four second-neighbors, we may extend the coloring to each $x_i$ in turn, again
contradicting the choice of $G$. Thus $G$ is $K_{2,3}$-free.
Let $C$ be a shortest cycle in $G$; label its vertices $v_1, \ldots, v_k$ in
cycle order. Let $u_1, \ldots, u_k$ be the neighbors off $C$ of $v_1, \ldots,
v_k$, respectively. The $u_i$ need not be distinct, but (since $G \not = K_4$)
cannot all be the same vertex. If $C$ is a triangle, then without loss of
generality $u_3 \not = u_1$. If $C$ is not a triangle, then for all $i$ we have $u_{i-1} \not =
u_{i+1}$: if $C$ is a four-cycle, this follows from the fact that $G$ is
$K_{2,3}$-free, and otherwise it follows from the minimality of $C$. In any
case, construct $G^\prime$ from $G$ by deleting the vertices of $C$ and adding
the edge $u_{k-1}u_{1}$ (if it is not already present); if $C$ is not a
triangle, then add the edge $u_ku_2$ as well. By the minimality of $G$, the
graph $G^\prime$ is 2-tone 8-colorable. A 2-tone 8-coloring of $G^\prime$ is
also a
partial 2-tone 8-coloring of $G$ in which only the $v_i$ are uncolored and in
which $u_{k-1}$ and $u_1$ have disjoint labels; if $C$ has at least four
vertices, then also $u_k$ and $u_2$ have disjoint labels. We use such a
coloring as a starting point in producing a 2-tone 8-coloring of $G$.
We have three cases to consider. {\bf (1)} If the label on $u_k$ is identical
to one of the labels on $u_{k-1}$ or $u_{1}$, then by symmetry we may suppose
that $u_{k-1}$, $u_k$, and $u_{1}$ have labels $12, 12$, and $34$. {\bf (2)}
If the label on $u_k$ is disjoint from the labels on $u_{k-1}$ and $u_{1}$,
then we may suppose that $u_{k-1}$, $u_k$, and $u_{1}$ have labels $12, 34$,
and $56$. {\bf (3)} Otherwise, we may suppose that $u_{k-1}$, $u_k$, and
$u_{1}$ have labels $12, 13$, and $L$, where $1 \not \in L$.
{\bf Case (1):} {\em $u_{k-1}$, $u_k$, and $u_{1}$ have labels 12, 12, 34.} We
aim to assign $v_1$ a label containing either 1 or 2; $v_1$ has nine such
candidate labels, and it has at most four obstructions, so at least five such
labels are valid. Since we have at least three ways to extend to $v_1$, by
Lemma \ref{cubic_freedom}, we subsequently have at least three ways to extend
to $v_2$, then to $v_3$, and so on up to $v_{k-2}$. Since the labels on
$u_{k-1}$ and $v_1$ have nonempty intersection, $v_1$ cannot yield an
obstruction of $v_{k-1}$, so again we have three ways to extend to $v_{k-1}$.
Now applying Lemma \ref{cubic_finish_1} (with $v = v_k, w_1 = v_{k-1}, w_2 =
v_1, w_3 = u_k,$ and $x = u_{k-1}$) lets us complete the coloring.
{\bf Case (2):} {\em $u_{k-1}$, $u_k$, and $u_{1}$ have labels 12, 34, 56.}
First suppose that $C$ is a triangle. Give $v_1$ a label from $\{13, 23, 37,
38\}$; since $v_1$ has at most two obstructions, this is possible. Next give
$v_2$ a label from $\{45, 46, 47, 48\}$; at most one of these labels has
nonempty intersection with the label on $v_1$, and $v_2$ has at most two
additional obstructions, so again some such label is valid. We have ensured
that four colors remain free at $v_3$. Thus $v_3$ has six candidate labels and
at most four obstructions, so we can complete the coloring.
Suppose now that $C$ is not a triangle. We aim to assign $v_1$ a label from
$\{13, 14, 23, 24\}$. Although $v_1$ has four colored second-neighbors, $u_k$ has label $34$, which is not an obstruction. Moreover, by construction the label on $u_2$ contains neither 3 nor 4, so it also cannot be an obstruction. Thus, at least two such labels are valid. By Lemma \ref{cubic_freedom_0}, this coloring admits three extensions to $v_2$. Now we may apply Lemma \ref{cubic_freedom} and Lemma \ref{cubic_finish_1} (with $v = v_k, w_1 = v_{k-1}, w_2 = v_1, w_3 = u_k,$ and $x = u_{k-1}$) as before to complete the coloring.
{\bf Case (3):} {\em $u_{k-1}$, $u_k$, and $u_{1}$ have labels $12, 13, L$,
where $1 \not \in L$.} We aim to give $v_1$ a label containing either 1 or 3.
If $3 \not \in L$, then $v_1$ has at least nine such candidates and at most
four obstructions, so at least five of the candidates are valid. Otherwise
$v_1$ has only five such candidate labels, but $u_k$ does not yield an
obstruction, so at least two of these candidates are valid. In each case, by
Lemma \ref{cubic_freedom_0} we may extend the coloring to $v_2$ in at least
three different ways. Now by Lemma \ref{cubic_freedom} and Lemma
\ref{cubic_finish_1} (with $v = v_k, w_1 = v_{k-1}, w_2 = u_k, w_3 = v_1,$ and
$x = u_{k-1}$) we can again complete the coloring.
\end{proof}
\section{General $t$-tone Coloring}\label{ttone}
We next study the behavior of $\tau_t$ for general $t$. We have already noted that $\tau_t(G)$ is monotone in $G$; that is, $\tau_t(H) \le \tau_t(G)$ whenever $H$ is a subgraph of $G$. It is also true that $\tau_t(G)$ is monotone in $t$.
\begin{proposition}\label{mono_tone}
If $t < t^{\prime}$ and $G$ is any graph, then $\tau_t(G) \le \tau_{t^\prime}(G)$.
\end{proposition}
\begin{proof}
Given a graph $G$ and a $t^\prime$-tone coloring of $G$, we arbitrarily
discard $t^\prime-t$ colors from each label of $G$. This yields a $t$-tone
coloring, since the process cannot increase the size of the intersection of
any two labels.
\end{proof}
Our first main result in this section is a generalization of Theorem
\ref{2tone_maxdegree}. In the case $t = 2$, Theorem \ref{2tone_maxdegree} gives
a better bound, since restricting to $t = 2$ allows tighter analysis.
\begin{theorem}\label{general_maxdegree}
For every positive integer $t$ and every graph $G$, we have $\tau_t(G) \le (t^2+t)\Delta(G)$.
\end{theorem}
\begin{proof}
Let $V(G) = \{v_1, v_2, \ldots, v_n\}$, let $r = \Delta(G)$, and let $k = (t^2 + t)r$. As in the proof of Theorem \ref{2tone_maxdegree}, we construct a $t$-tone $k$-coloring of $G$ by coloring iteratively with respect to the ordering $v_1, \ldots, v_n$.
When coloring $v_i$, at most $tr$ colors appear on neighbors of $v_i$, so at
least $t^2r$ other colors remain. We have ${t^2r \choose t}$ labels that use
only these colors, and each is a candidate label for $v_i$.
Given a label $L$, we say that vertex $u$ {\em forbids} $L$ if $L$ and
the label on $u$ have intersection size at least $d(u,v_i)$. Recall that we
have already discarded all labels forbidden by neighbors of $v_i$. For $2\le
d\le t$, each vertex at distance $d$ from $v_i$ forbids at most ${t \choose
d}{t^2r-d \choose t-d}$ labels.
At most $r(r-1)^{d-1}$ vertices lie at distance $d$ from $v_i$, so to show that
we may color $v_i$, it suffices to show that $$\sum_{d=2}^t {t \choose
d}{t^2r-d \choose t-d}r(r-1)^{d-1} < {t^2r \choose t},$$ or equivalently, that
$$\sum_{d=2}^t \frac{{t \choose d}{t^2r-d \choose t-d}r(r-1)^{d-1}}{{t^2r \choose t}} < 1.$$
Ultimately, we will show that the $d$th term of the sum is no more than $1/d!$,
and thus (since $1/d!\le 2^{1-d}$) the sum is less than 1.
We first simplify each term. For fixed $d$,
\begin{align*}
\frac{{t \choose d}{t^2r-d \choose t-d}r(r-1)^{d-1}}{{t^2r \choose t}} &= \frac{t!}{d!(t-d)!} \cdot \frac{(t^2r-d)!}{(t-d)!(t^2r-t)!}\cdot r(r-1)^{d-1} \cdot \frac{t!(t^2r-t)!}{(t^2r)!}\\
&= \frac{1}{d!} \cdot \left ( \frac{t!}{(t-d)!} \right )^2 \cdot \frac{(t^2r-d)!}{(t^2r)!} \cdot r(r-1)^{d-1}\\
&= \frac{1}{d!} \frac{\left (t(t-1)(t-2) \cdots (t-d+1) \right )^2 r(r-1)^{d-1}}{t^2r(t^2r-1)\cdots (t^2r-d+1)}\\
&= \frac{1}{d!} \cdot \frac{(t-1)^2(r-1)}{t^2r-1} \cdot \frac{(t-2)^2(r-1)}{t^2r-2} \cdots \frac{(t-d+1)^2(r-1)}{t^2r-d+1}.
\end{align*}
Now for $i$ between $1$ and $d-1$, we have
$$(t-i)^2(r-1) < (t-i)^2r = t^2r - i(2t-i)r \le t^2r - i,$$
hence
$$\frac{{t \choose d}{t^2r-d \choose t-d}r(r-1)^{d-1}}{{t^2r \choose t}} < \frac{1}{d!} \cdot 1 \cdot 1 \cdots 1 = \frac{1}{d!}.$$
Now
$$\sum_{d=2}^t \frac{{t \choose d}{t^2r-d \choose t-d}r(r-1)^{d-1}}{{t^2r \choose t}} < \sum_{d=2}^t \frac{1}{d!} \le \sum_{d=2}^t\frac{1}{2^{d-1}} < 1,$$
which completes the proof.
\end{proof}
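The greedy procedure in the proof can be carried out directly. The following sketch (a hypothetical illustration, with $t=2$ and a small random graph) builds a $t$-tone coloring using the theorem's $(t^2+t)\Delta(G)$ colors and verifies its validity.

```python
import itertools
import random
from collections import deque

random.seed(1)
t, n = 2, 18

# small random graph (illustrative)
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < 0.15:
            adj[u].add(v)
            adj[v].add(u)

Delta = max(len(adj[v]) for v in adj)
k = (t * t + t) * max(Delta, 1)  # number of colors from the theorem

def dists(s):
    # BFS distances from s
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

labels = {}
for v in range(n):  # color iteratively, as in the proof
    dist = dists(v)
    for L in itertools.combinations(range(k), t):
        Ls = set(L)
        # u forbids L if |L ∩ label(u)| >= d(u, v)
        if all(len(Ls & labels[u]) < dist[u]
               for u in labels if u in dist and dist[u] <= t):
            labels[v] = Ls
            break

# validity: vertices at distance d <= t share fewer than d colors
valid = all(len(labels[u] & labels[v]) < dists(u)[v]
            for u in range(n) for v in range(u + 1, n)
            if v in dists(u) and dists(u)[v] <= t)
```

The counting argument above guarantees that the inner loop always finds an unforbidden label, so the greedy pass never gets stuck.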
In \cite{BP} it was shown that for every tree $T$, we have $\tau_2(T) = \ceil{(5 + \sqrt{1 + 8\Delta(T)})/2}$. By Proposition \ref{mono_tone}, it thus follows that $\tau_t(T) \ge \ceil{(5 + \sqrt{1 + 8\Delta(T)})/2}$ whenever $t \ge 2$. In fact this bound is asymptotically best possible, as we show next.
\begin{theorem}\label{tree_upper}
For every positive integer $t$, there exists a constant $c=c(t)$ such that
for every tree $T$ we have $\tau_t(T)\le c\sqrt{\Delta(T)}$.
\end{theorem}
\begin{proof}
Fix a positive integer $t$ and a tree $T$. Let $k = \sqrt{\Delta(T)}$. Let
$T^\prime$ be the complete $(\Delta(T)-1)$-ary tree of height $\size{V(T)}$;
that is, $T^\prime$ is a rooted tree such that all vertices at distance $\size{V(T)}$ from the root are leaves, and all others have $\Delta(T) - 1$ children. By {\em level $i$} of $T^\prime$ we mean the set of vertices at distance $i$ from the root.
Clearly $T$ is contained in $T^\prime$, so by monotonicity of $\tau_t$ it
suffices to prove that $\tau_t(T^\prime) \le ck$ for some constant $c$ (to be
defined later, but independent of $T$). Moreover, by Proposition \ref{mono_tone}, we may suppose that $t$ is even.
A {\em palette} is a set of colors. We color $T^\prime$ using $t+1$ disjoint
palettes, each of size at most $c_1 k$ for some constant $c_1$. On level $i$ of the
tree we use only those colors in the $i$th palette (with $i$ taken modulo
$t+1$). This restriction ensures that, whenever $u$ and $v$ are within distance
$t$ of each other, either they lie on the same level of $T^\prime$ or they
receive colors from different palettes (and hence have disjoint labels). Thus,
we need only consider a single level of $T^\prime$ and show that the vertices
on that level can be colored with at most $c_1 k$ colors.
Within each level, color iteratively with respect to an arbitrary vertex
ordering. Note that any two vertices on the same level of $T^\prime$ lie at an
even distance. Fix a vertex $v$ and an integer $d$ between $1$ and $t/2$. Given a label $L$, say that vertex $u$ {\em forbids} $L$ if $L$ and the label on $u$ have intersection size at least $d(u,v)$. The
number of vertices at distance $2d$ from $v$, and on the same level as $v$, is
bounded above by $[\Delta(T)]^d$ and hence by $k^{2d}$; each such vertex forbids at
most ${t \choose 2d}{c_1k - 2d \choose t - 2d}$ labels in ${[c_1 k] \choose t}$. Thus the total number of forbidden labels is at most
$$\sum_{d=1}^{t/2} k^{2d}{t \choose 2d}{c_1k - 2d \choose t - 2d},$$
which is at most
$$k^t \sum_{d=1}^{t/2} \frac{t^{2d}c_1^{t-2d}}{(2d)!(t-2d)!}.$$
We have ${c_1 k \choose t}$ available labels; for fixed $t$ and large $k$, this is
at least $k^t \frac{(c_1-1)^t}{t!}$. For sufficiently large $c_1$ we have
$$\frac{(c_1-1)^t}{t!} > \sum_{d=1}^{t/2} \frac{t^{2d}c_1^{t-2d}}{(2d)!(t-2d)!},$$
since both sides of the inequality are polynomials in $c_1$, but the left side
has higher degree. Thus if $c_1$ is large enough, then we can color $v$.
\end{proof}
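The degree comparison at the end of the proof can be made concrete by evaluating both sides numerically; the values $t=4$ and $c_1=20$ below are illustrative choices, not the optimal constant.

```python
from math import factorial

t = 4  # t is even in the proof

def lhs(c1):
    # (c_1 - 1)^t / t!: lower bound (after factoring out k^t) on available labels
    return (c1 - 1) ** t / factorial(t)

def rhs(c1):
    # sum over d of t^(2d) c_1^(t-2d) / ((2d)! (t-2d)!): forbidden-label bound
    return sum(t ** (2 * d) * c1 ** (t - 2 * d)
               / (factorial(2 * d) * factorial(t - 2 * d))
               for d in range(1, t // 2 + 1))
```

Since the left side has degree $t$ in $c_1$ and the right side only degree $t-2$, the inequality holds once $c_1$ is large enough, while it can fail for small $c_1$.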
A graph is {\em $k$-degenerate} if each of its subgraphs contains a vertex of degree at most $k$; trees are precisely the connected 1-degenerate graphs. For $k \ge 2$, on the class of $k$-degenerate graphs we can improve the bound given by Theorem \ref{general_maxdegree}.
\begin{lemma}\label{degen_dist}
If $G$ is a $k$-degenerate graph, then $G$ has a vertex ordering such that, for
each integer $d\ge 1$ and for each vertex $v$, at most $dk\Delta(G)(\Delta(G) - 1)^{d-2}$ vertices preceding $v$ in the ordering lie at distance $d$ from $v$.
\end{lemma}
\begin{proof}
Construct an ordering of $V(G)$ by repeatedly deleting a vertex $v$ of minimum
degree and prepending $v$ to the ordering. We claim that this ordering has the desired properties.
Fix $v$ and consider the set of earlier vertices at distance $d$ from $v$. Each such vertex can be reached from $v$ via a walk of length $d$ in which at least one step moves backward in the ordering. For each $i$ between 1 and $d$, there are at most $k\Delta(G)(\Delta(G) - 1)^{d-2}$ such walks that move backward on step $i$, since we have at most $k$ choices for the $i$th step, at most $\Delta(G)$ choices for the first, and at most $\Delta(G) -1$ choices for each of the others.
\end{proof}
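The min-degree deletion ordering of the proof is easy to implement and check. The sketch below builds an illustrative random $2$-degenerate graph and verifies the lemma's bound for $d=2,3$.

```python
import random
from collections import deque

random.seed(0)
k, n = 2, 60

# random k-degenerate graph: each new vertex attaches to <= k earlier vertices
adj = {0: set()}
for v in range(1, n):
    adj[v] = set()
    for u in random.sample(range(v), min(k, v)):
        adj[v].add(u)
        adj[u].add(v)

def degeneracy_ordering(adj):
    # repeatedly delete a minimum-degree vertex; the last deleted comes first
    live = {v: set(ns) for v, ns in adj.items()}
    order = []
    while live:
        v = min(live, key=lambda u: len(live[u]))
        for u in live[v]:
            live[u].discard(v)
        del live[v]
        order.append(v)
    order.reverse()
    return order

def bfs(s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

order = degeneracy_ordering(adj)
pos = {v: i for i, v in enumerate(order)}
Delta = max(len(adj[v]) for v in adj)

# each vertex has at most k neighbors earlier in the ordering
back_ok = all(sum(1 for u in adj[v] if pos[u] < pos[v]) <= k for v in adj)

# the lemma's distance-d bound, checked for d = 2, 3
dist_ok = True
for v in adj:
    dist = bfs(v)
    for d in (2, 3):
        earlier = sum(1 for u in adj
                      if u != v and pos[u] < pos[v] and dist.get(u) == d)
        if earlier > d * k * Delta * (Delta - 1) ** (d - 2):
            dist_ok = False
```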
When $d$ is large, the bound in Lemma \ref{degen_dist} is worse than the easy
bound of $\Delta(G)(\Delta(G)-1)^{d-1}$ that holds for all graphs $G$,
regardless of degeneracy. However, when applying Lemma \ref{degen_dist}, we
will mainly care about small values of $d$.
\begin{theorem} \label{degen_maxdegree}
If $G$ is a $k$-degenerate graph, $k \ge 2$, and $\Delta(G) \le r$, then for
every $t$ we have $\tau_t(G) \le kt + kt^2 r^{1 - 1/t}$.
\end{theorem}
\begin{proof}
Let $c = kt^2r^{1 - 1/t}$. Let $v_1, \ldots, v_n$ be a vertex ordering of the form guaranteed by Lemma \ref{degen_dist}; we construct a $t$-tone $(c+kt)$-coloring of $G$ by coloring iteratively with respect to this ordering.
When coloring $v_i$, as many as $kt$ colors may appear on $v_i$'s neighbors; at least $c$ other colors remain. Thus $v_i$ has at least ${c \choose t}$ candidate labels using these $c$ colors. As in the proof of Theorem \ref{general_maxdegree}, say that a vertex $u$ {\em forbids} a label $L$ if $L$ and the label on $u$ have intersection of size at least $d(u,v_i)$. By Lemma \ref{degen_dist}, at most $dkr(r - 1)^{d-2}$ colored vertices lie at distance $d$ from $v_i$; each such vertex forbids at most ${t \choose d}{c-d \choose t-d}$ of the candidates.
Thus to show that we can color $v_i$, it suffices to show that
$$\sum_{d=2}^t {t \choose d}{c-d \choose t-d} dkr(r - 1)^{d-2} < {c \choose t},$$
or equivalently, that
$$\sum_{d=2}^t \frac{{t \choose d}{c-d \choose t-d} dkr(r - 1)^{d-2}}{{c \choose t}} < 1.$$
We proceed as in the proof of Theorem \ref{general_maxdegree}.
\begin{align*}
\frac{{t \choose d}{c-d \choose t-d} dkr(r - 1)^{d-2}}{{c \choose t}} &= \frac{t!}{d!(t-d)!} \cdot \frac{(c-d)!}{(t-d)!(c-t)!} \cdot dkr(r-1)^{d-2} \cdot \frac{t!(c-t)!}{c!}\\
&= \frac{dk}{d!} \cdot \left ( \frac{t!}{(t-d)!} \right )^2 \cdot \frac{(c-d)!}{c!} \cdot r(r-1)^{d-2}\\
&< \frac{k}{(d-1)!} \cdot \frac{(t(t-1)\cdots(t-d+1))^2}{c(c-1)\cdots(c-d+1)}\cdot r^{d-1}\\
&= \frac{k}{(d-1)!} \cdot \frac{t^2r^{1-1/d}}{kt^2r^{1 - 1/t}} \cdots \frac{(t-d+1)^2r^{1-1/d}}{kt^2r^{1 - 1/t} - d + 1}\\
&\le \frac{1}{(d-1)! k^{d-1}} \cdot \frac{t^2r^{1-1/d}}{t^2r^{1 - 1/t}} \cdots \frac{(t-d+1)^2r^{1-1/d}}{t^2r^{1 - 1/t} - d + 1}
\end{align*}
For $s$ between 0 and $d-1$, we have
$$(t-s)^2r^{1-1/d} \le (t-s)^2r^{1-1/t} = t^2r^{1-1/t} - s(2t-s)r^{1-1/t} \le t^2r^{1-1/t} - s,$$
so
$$\frac{{t \choose d}{c-d \choose t-d} dkr(r - 1)^{d-2}}{{c \choose t}} < \frac{1}{(d-1)!k^{d-1}}.$$
Thus
$$\sum_{d=2}^t \frac{{t \choose d}{c-d \choose t-d} dkr(r - 1)^{d-2}}{{c \choose t}} < \sum_{d=2}^t \frac{1}{(d-1)!k^{d-1}} < 1,$$
as desired.
\end{proof}
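A numerical sanity check of the key inequality, with illustrative parameters $k=2$, $t=3$, $r=8$ chosen so that $c = kt^2r^{1-1/t} = 72$ is an integer:

```python
from math import comb, factorial

k, t, r = 2, 3, 8
c = k * t * t * round(r ** (1 - 1 / t))  # 8^(2/3) = 4, so c = 72

# forbidden-label terms, one per distance d
terms = [comb(t, d) * comb(c - d, t - d) * d * k * r * (r - 1) ** (d - 2)
         for d in range(2, t + 1)]
total = comb(c, t)                        # candidate labels

# each ratio should fall below 1 / ((d-1)! k^(d-1)), as in the proof
ratios = [x / total for x in terms]
bounds = [1 / (factorial(d - 1) * k ** (d - 1)) for d in range(2, t + 1)]
```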
Bickle and Phillips \cite{BP} showed that $\tau_2(K_{1,k}) = \Theta(\sqrt{k})$. Thus by Proposition \ref{mono_tone}, the bound in Theorem \ref{degen_maxdegree} is asymptotically tight (in terms of $\Delta(G)$) when $t = 2$.
We have made several statements about the asymptotics of $\tau_t(G)$ when
$t$ is fixed and $\Delta(G)$ grows; we now consider what happens when
$\Delta(G)$ is fixed and $t$ grows. The bound in Theorem
\ref{general_maxdegree} shows that, for fixed values of $\Delta(G)$, we have
$\tau_t(G) \le ct^2$ for some constant $c$. Our final result shows that the asymptotics of this bound cannot be improved much, if at all.
\begin{theorem} \label{tree_lower}
For each $r \ge 3$, there exists a constant $c$ such that for all $t$, there is a graph $G$ for which $\Delta(G) = r$ and $\tau_t(G) \ge c t^2 / \lg t$.
\end{theorem}
\begin{proof}
Let $G$ be the complete $(r-1)$-ary tree of height $\ceil{\lg t}$.
Consider a $t$-tone coloring of $G$; examine the vertices of $G$ in any order. Since any two vertices of $G$ lie within distance $2\ceil{\lg t}$, each vertex we examine shares fewer than $2\ceil{\lg t}$ colors with each vertex already examined. Thus, the number of colors used in this coloring is at least
$$\sum_{i=0}^{\size{V(G)} - 1} \max\{0, t - 2 \ceil{\lg t} i\}.$$
When $i \le t / (4 \ceil{\lg t})$, the $i$th term of this sum is
at least $t/2$. Note that $\size{V(G)} > (r-1)^{\lg t} \ge t > t / (4 \ceil{\lg
t})$, so the sum has at least $t / (4 \ceil{\lg t})$ terms. Thus, the number of colors used is at least $t^2 / (8 \ceil{\lg t})$.
\end{proof}
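The counting in this proof can be checked numerically; the values $r=3$ and $t=16$ below are illustrative.

```python
from math import ceil, log2

r, t = 3, 16
h = ceil(log2(t))                              # height of the complete (r-1)-ary tree
V = sum((r - 1) ** i for i in range(h + 1))    # number of vertices

# the i-th examined vertex contributes at least max(0, t - 2*ceil(lg t)*i) new colors
lower = sum(max(0, t - 2 * h * i) for i in range(V))
claimed = t * t / (8 * h)                      # the bound asserted in the proof
```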
\end{document}
\begin{document}
\title{Chebyshev's bias for products of irreducible polynomials}
\author{Lucile Devin}
\email{[email protected]}
\address{Centre de recherches math\'ematiques,
Universit\'e de Montr\'eal,
Pavillon Andr\'e-Aisenstadt,
2920 Chemin de la tour,
Montr\'eal, Qu\'ebec, H3T 1J4, Canada}
\author{Xianchang Meng}
\email{[email protected]}
\address{Mathematisches Institut,
Georg-August Universit\"{a}t G\"{o}ttingen,
Bunsenstra{\ss}e 3-5,
D-37073 G\"{o}ttingen,
Germany}
\keywords{Chebyshev's bias, function fields, product of primes}
\subjclass[2010]{11T55, 11N45, 11K38}
\begin{abstract}
For any $k\geq 1$, this paper studies the number of polynomials having $k$ irreducible factors (counted with or without multiplicities) in $\mathbf{F}_q[t]$ among different arithmetic progressions.
We obtain asymptotic formulas for the difference of counting functions uniformly for $k$ in a certain range. In the generic case, the bias dissipates as the degree of the modulus or $k$ gets large, but there are cases where the bias is extreme. In contrast to the case of products of $k$ prime numbers, we show the existence of complete biases in the function field setting, that is, the difference function may have constant sign. Several examples illustrate this new phenomenon.
\end{abstract}
\maketitle
\section{Introduction}
\subsection{Background}
The notion of Chebyshev's bias originally refers to the observation in \cite{ChebLetter} that
there seems to be more primes congruent to $3 \bmod 4$ than to $1 \bmod 4$ in initial intervals of the integers.
More generally it is interesting to study the function $ \pi(x; q,a) -\pi(x;q, b)$
where $\pi(x; q,a)$ is the number of primes~$\leq x$ that are congruent to $a \bmod q$.
Under the Generalized Riemann Hypothesis (GRH) and the Linear Independence (LI) conjecture for zeros of the Dirichlet $L$-functions,
Rubinstein and Sarnak \cite{RS} gave a framework to study Chebyshev's bias quantitatively.
Precisely, they showed that the logarithmic density $\delta(q;a,b)$ of the set of $x\geq2$ for which $\pi(x; q,a)>\pi(x;q, b)$ exists, and in particular that $\delta(4;3,1)\approx 0.9959$.
Many related questions have been asked and answered since then; we refer to the expository articles of Ford and Konyagin \cite{FordKonyagin_expository} and of Granville and Martin \cite{GranvilleMartin} for detailed reviews of the subject.
In this article we consider products of $k$ irreducible polynomials among different congruence classes. Our results are uniform for $k$ in a certain range, and we show that in some cases the \emph{bias} (see Definition~\ref{Defn-bias}) in the distribution can approach any value between $0$ and $1$.
The idea of this paper is motivated by two different generalizations of Chebyshev's bias.
On one hand, Ford and Sneed \cite{FordSneed2010} adapted the study of Chebyshev's bias to quasi-primes, i.e.\ products of two primes $p_1 p_2$ (with $p_1=p_2$ allowed). They showed under GRH and LI that the direction of the bias for products of two primes is opposite to the bias among primes, and that the bias decreases. Similar results are developed in \cite{DummitGranvilleKisilevsky,Moree2004}.
Recently, under GRH and LI, the second author \cite{Meng2017, Meng2017L} generalized the results of \cite{RS}, \cite{FordSneed2010} and \cite{DummitGranvilleKisilevsky} to products of any $k$ primes among different arithmetic progressions, and observed that the bias changes direction according to the parity of $k$.
On the other hand, using the analogy between the ring of integers and polynomial rings over finite fields, Cha \cite{Cha2008} adapted the results of \cite{RS} to irreducible polynomials over finite fields. Cha discovered a surprising phenomenon: in the case of polynomial rings there are biases in unexpected directions. Further generalizations have been studied since then in \cite{ChaKim,ChaIm,CFJ,Perret-Gentil}.
Fix a finite field $\mathbf{F}_{q}$ and a polynomial $M \in \mathbf{F}_{q}[t]$ of degree $d \geq 1$; we study the distribution in congruence classes modulo $M$ of monic polynomials with $k$ irreducible factors.
More precisely, let $A, B\subset (\mathbf{F}_{q}[t]/(M))^{*}$ be subsets of invertible classes modulo $M$.
For any $k\geq 1$ and any $X \geq 1$,
we define the normalized\footnote{Using the Sathe--Selberg method, Afshar and Porritt \cite[Th. 2]{AfsharPorritt} gave an asymptotic formula for the number of monic polynomials of degree $X$ with $k$ irreducible factors in congruence classes modulo $M\in \mathbf{F}_{q}[t]$, whose main term is $\frac{q^{X}(\log X)^{k-1}}{X (k-1)! \phi(M)}$ when $k = o(\log X)$ and the modulus $M$ does not vary with $X$. In this paper, we focus on the error terms, where we expect square-root cancellation.} difference function
\begin{multline*}
\Delta_{f_k}(X; M, A, B) \\ := \frac{X (k-1)!}{q^{X/2}(\log X)^{k-1}} \Big(\frac{1}{\lvert A\rvert}\lvert \lbrace N \in \mathbf{F}_{q}[t]: N \text{ monic, } \deg{N} \leq X,~ f(N)=k, N \bmod M \in A \rbrace\rvert \\
- \frac{1}{\lvert B\rvert}\lvert \lbrace N \in \mathbf{F}_{q}[t]: N \text{ monic, } \deg{N} \leq X,~ f(N)=k, N \bmod M \in B \rbrace\rvert \Big)
\end{multline*}
where $f=\Omega$ or $f=\omega$ counts the irreducible factors with or without multiplicity, respectively.
We study the distribution of the values of the function $\Delta_{f_k}(X; M, A, B)$; in particular, we are interested in the bias of this function towards positive values.
\begin{defi}[\emph{Bias}]\label{Defn-bias}
Let $F:\mathbf{N}\rightarrow\mathbf{R}$ be a real function. We define the \emph{bias} of $F$ towards positive values as the natural density (if it exists) of the set of integers having positive image under $F$:
\begin{align*}
\dens(F >0) = \lim_{X\rightarrow\infty}\frac{ \lvert\lbrace n \leq X : F(n)>0 \rbrace\rvert }{X}.
\end{align*}
If the limit does not exist, we say that the bias is not well defined.
\end{defi}
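This definition can be illustrated with a toy almost periodic function: for $F(n) = 1 + 2\cos(n\sqrt2)$, equidistribution of $n\sqrt2$ modulo $2\pi$ gives $\dens(F>0) = 2/3$, the length fraction of $\lbrace \theta : \cos\theta > -1/2\rbrace$. A quick empirical sketch (the function and cutoff are illustrative):

```python
import math

N = 200_000
# count n <= N with F(n) = 1 + 2 cos(n sqrt(2)) positive
count = sum(1 for n in range(1, N + 1)
            if 1 + 2 * math.cos(n * math.sqrt(2)) > 0)
density = count / N  # should approach the arc-length fraction 2/3
```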
\subsection{Values of the bias}
In this section we present our main result which is the consequence of the asymptotic formula obtained in Theorem~\ref{Th_Difference_k_general_deg<X}.
Given a field~$\mathbf{F}_{q}$ with odd characteristic and a square-free polynomial $M$ in $\mathbf{F}_{q}[t]$, we examine more carefully the case of races between quadratic residues ($A = \square$) and non-quadratic residues ($B =\boxtimes$) modulo $M$.
We say that $M$ satisfies (LI\ding{70}) if the multi-set $$\lbrace\pi\rbrace\cup\bigcup\limits_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 , \chi \neq \chi_0}}\lbrace \gamma \in [0,\pi] : L(\frac{1}{2} + i\gamma, \chi) = 0 \rbrace$$ is linearly independent over $\mathbf{Q}$ (see Section~\ref{subsec_Lfunctions} for the definition of the $L$-functions).
We study the variation of the values of the bias when the degree of the modulus $M$ gets large.
In particular, we show that the values of the bias are dense in $[\tfrac{1}{2},1]$.
\begin{theo}\label{Th central limit for M under LI}
Let $\mathbf{F}_{q}$ be a finite field of odd characteristic and $k$ a positive integer.
Suppose that for every $d,r$ large enough, there exists a monic polynomial $M_{d,r} \in \mathbf{F}_q[t]$ with
\begin{enumerate}
\item $\deg M_{d,r} =d$,
\item $\omega(M_{d,r}) = r$,
\item $M_{d,r}$ satisfies \emph{(LI\ding{70})}.
\end{enumerate}
Then for $f=\Omega$ or $\omega$,
$$ \overline{\lbrace \dens ((\epsilon_f)^k \Delta_{f_k}(\cdot;M_{d,r},\square,\boxtimes) > 0) : d \geq1, r\geq1 \rbrace} = [\tfrac12,1], $$
where $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$.
\end{theo}
\begin{Rk}
Note that, when $k$ is odd, we obtain that the possible values of $\dens ( \Delta_{\Omega_k}(\cdot;M,\square,\boxtimes) > 0)$ are dense in $[0,\tfrac{1}{2}]$, when $M$ varies in $\mathbf{F}_q[t]$, while the function $\Delta_{\omega_k}(\cdot;M,\square,\boxtimes) $ is biased in the direction of quadratic residues independently of the parity of $k$.
\end{Rk}
From \cite[Prop.~1.1]{Kowalski2010}, we expect the hypothesis (LI\ding{70}) to hold for most monic square-free polynomials~$M \in \mathbf{F}_{q}[t]$ when $q$ is large enough. When $d$ and $r$ are large, the set of polynomials of degree $d$ having $r$ irreducible factors should be large enough to contain at least one polynomial satisfying (LI\ding{70}).
However, in Proposition~\ref{Prop Chebyshev many factors}, similarly to \cite[Th.~1.2]{Fiorilli_HighlyBiased}, we only need a hypothesis on the multiplicity of the zeros to prove the existence of extreme biases.
In \cite[Th.~6.2]{Cha2008}, Cha considered the case $k=1$ and showed that the values of the bias $ \dens (\Delta_{\Omega_1}(\cdot;M_{d,r},\square,\boxtimes) > 0)$ approach $\tfrac{1}{2}$ when $M$ varies among irreducible polynomials of increasing degree.
In the case $k=1$, Fiorilli \cite[Th.~1.1]{Fiorilli_HighlyBiased} proved that the values of the biases in prime number races between non-quadratic and quadratic residues are dense in $[\tfrac{1}{2},1]$.
We also show in Proposition~\ref{Prop using Honda Tate} that the values $\tfrac12$ and $1$ can be obtained as values of $\dens ((-1)^k \Delta_{f_k}(\cdot;M,\square,\boxtimes) > 0)$, when $q>3$. These values are obtained for polynomials $M$ not satisfying (LI\ding{70}).
In the case $q =5$, Cha showed that there exists $M \in \mathbf{F}_5[t]$ with $\dens ( \Delta_{\Omega_{1}}(\cdot;M,\square,\boxtimes) > 0) = 0.6$, uncovering a bias in ``the wrong direction''; we wonder whether such a phenomenon occurs for every $q$ and $k$.
\subsection{Asymptotic formulas}\label{subsec_Lfunctions}
Before stating the asymptotic formulas, let us set some notations.
For $M \in \mathbf{F}_{q}[t]$ we write $\phi(M) = \lvert \left( \mathbf{F}_{q}[t]/ (M) \right)^* \rvert$ for the number of invertible congruence classes modulo~$M$.
Recall that we define the Dirichlet $L$-function associated to a Dirichlet character $\chi$ by
$$L(s, \chi) = \sum_{a \text{ monic }} \frac{\chi(a)}{\lvert a \rvert^{s}}$$
where $\lvert a \rvert = q^{\deg a}$.
It can also be written as an Euler product over the irreducible polynomials:
$$L(s, \chi) = \prod_{P \text{ irreducible }}\left( 1- \frac{\chi(P)}{\lvert P \rvert^s}\right)^{-1}.$$
Recall (e.g. \cite[Prop.~4.3]{Rosen2002}) that, for $\chi \neq \chi_0$ a Dirichlet character modulo~$M \in\mathbf{F}_{q}[t]$, the Dirichlet~$L$-function $L(s,\chi)$ is a polynomial in $u = q^{-s}$ of degree at most $\deg M-1$. Thanks to the deep work of Weil \cite{Weil_RH}, we know that the analogue of the Riemann Hypothesis is satisfied.
In the following we denote $\alpha_{j}(\chi)= \sqrt{q}e^{i\gamma_{j}(\chi)}$, $\gamma_{j}(\chi) \in (-\pi, \pi)\setminus \lbrace 0 \rbrace$, the distinct non-real inverse zeros of $\mathcal{L}(u,\chi) = \mathcal{L}(q^{-s},\chi) = L(s,\chi)$ of norm $\sqrt{q}$, with multiplicity $m_j(\chi)$.
The real inverse zeros will play an important role; we denote by $m_{\pm}(\chi)$ the multiplicity of $\pm \sqrt{q}$ as an inverse zero of $\mathcal{L}(u,\chi)$, and by $d_{\chi}$ the number of distinct non-real inverse zeros of norm $\sqrt{q}$.
We summarize the notations in the following formula:
\begin{equation}\label{Not_L-function}
\mathcal{L}(u,\chi) = (1-u \sqrt{q})^{m_+(\chi)}(1+u \sqrt{q})^{m_-(\chi)} \prod_{j=1}^{d_{\chi}} (1 - u \alpha_{j}(\chi) )^{m_{j}(\chi)}\prod_{j'=1}^{d'_{\chi}} (1- u\beta_{j'}(\chi))
\end{equation}
where $\lvert \beta_{j'}(\chi) \rvert =1$.
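As a concrete illustration of \eqref{Not_L-function}, the coefficients of $\mathcal{L}(u,\chi)$ can be computed by brute force for a small modulus; the choices $q=3$, $M = t^2+1$ (irreducible over $\mathbf{F}_3$) and the quadratic character below are illustrative. The sketch verifies that $c_n = 0$ for $n \geq \deg M$, so $\mathcal{L}(u,\chi)$ has degree at most $\deg M - 1$, and that the coefficient respects the Weil bound $\lvert c_1 \rvert \leq \sqrt{q}$.

```python
from itertools import product

q = 3
M = (1, 0, 1)  # t^2 + 1 over F_3, coefficients listed from degree 0 up

def reduce_mod(poly):
    # reduce a coefficient list modulo the monic polynomial M
    res = [x % q for x in poly]
    while len(res) >= len(M):
        d = len(res) - len(M)
        top = res[-1]
        for i in range(len(M)):
            res[d + i] = (res[d + i] - top * M[i]) % q
        res.pop()
    res += [0] * (len(M) - 1 - len(res))
    return tuple(res)

def mul(a, b):
    # multiply two residues modulo M
    coeffs = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            coeffs[i + j] += x * y
    return reduce_mod(coeffs)

units = [u for u in product(range(q), repeat=len(M) - 1) if any(u)]
squares = {mul(u, u) for u in units}

def chi(res):  # the quadratic character modulo M
    return 1 if res in squares else -1

# c_n = sum of chi over monic polynomials of degree n coprime to M
coeffs = []
for n in range(4):
    c = 0
    for low in product(range(q), repeat=n):
        res = reduce_mod(list(low) + [1])
        if any(res):  # coprime to M
            c += chi(res)
    coeffs.append(c)
```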
Recently Wanlin Li proved \cite[Th.~1.2]{Li2018} that $m_{+}(\chi) >0$ for some primitive quadratic character $\chi$ over $\mathbf{F}_{q}[t]$ for any odd $q$. This result disproves the analogue of a conjecture of Chowla about the existence of central zeros.
We present some such examples in Section~\ref{subsec_examples_realZero} to exhibit large biases. As $k$ increases, we observe a new phenomenon: such characters can induce complete biases in races between quadratic and non-quadratic residues (see Section \ref{subsec_examples_realZero}), and those biases do not dissipate as $k$ gets large (see Proposition~\ref{Prop k limit}).
Denote
$$\gamma(M) = \min\limits_{\chi \bmod M}\min\limits_{1\leq i\neq j\leq d_{\chi}}(\lbrace \lvert \gamma_i(\chi) - \gamma_j(\chi) \rvert, \lvert \gamma_{i}(\chi) \rvert, \lvert \pi -\gamma_{i}(\chi) \rvert \rbrace).$$
The following asymptotic formulas hold unconditionally, uniformly for $k$ in a reasonable range (for example, $k\leq \frac{0.99\log\log X}{\log d}$).
\begin{theo}\label{Th_Difference_k_general_deg<X}
Let $M \in \mathbf{F}_{q}[t]$ be a non-constant polynomial of degree $d$,
and let $A, B \subset (\mathbf{F}_{q}[t]/(M))^*$ be two sets of invertible classes modulo $M$.
For any integer $k \geq1 $,
with $k=o((\log X)^{\frac12})$, one has
\begin{multline}\label{Form_deg=n}
\Delta_{\Omega_k}(X; M, A,B) \\
= (-1)^k \Bigg\{ \sum_{\chi \bmod M}c(\chi,A,B)\bigg( \left(m_+(\chi) +\tfrac{\delta(\chi^2)}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q}-1} +\left(m_-(\chi)+\tfrac{\delta(\chi^2)}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+\sum_{\gamma_j(\chi)\neq 0, \pi} m_j(\chi)^k \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} e^{iX\gamma_{j}(\chi)} \bigg) + O\bigg(\frac{d^k k(k-1)}{\gamma(M)\log X} + dq^{-X/6}\bigg) \Bigg\},
\end{multline}
and if $q\geq 5$,
\begin{multline}\label{Form_deg=n_littleomega}
\Delta_{\omega_k}(X; M, A,B) \\
= (-1)^k \Bigg\{ \sum_{\chi \bmod M}c(\chi,A,B)\bigg( \left(m_+(\chi) -\tfrac{\delta(\chi^2)}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q}-1} +\left(m_-(\chi)-\tfrac{\delta(\chi^2)}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+\sum_{\gamma_j(\chi)\neq 0, \pi} m_j(\chi)^k \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} e^{iX\gamma_{j}(\chi)} \bigg) + O\bigg(\frac{d^k k(k-1)}{\gamma(M)\log X} + dq^{-X/6}\bigg) \Bigg\},
\end{multline}
where the implicit constants are absolute, $\delta(\chi^2) = 1$ if $\chi$ is real and $0$ otherwise,
and
$$c(\chi,A,B) =
\frac{1}{\phi(M)}\bigg( \frac{1}{\lvert A\rvert}\sum_{a\in A}\bar\chi(a) -
\frac{1}{\lvert B\rvert}\sum_{b\in B}\bar\chi(b) \bigg).$$
\end{theo}
This theorem follows from the asymptotic formula obtained in Section~\ref{Sec_proof_deg<X}.
The method we use here is not a straightforward generalization of that of \cite{Cha2008}, since the analogue of the weighted counting function is not suited to detecting products of irreducible elements (see \cite{Meng2017}). Unlike the results in \cite{RS}, \cite{FordSneed2010} and \cite{Meng2017}, we obtain asymptotic formulas for the corresponding difference functions unconditionally, and the density we derive in this paper is the natural density rather than the logarithmic density. Our starting point is motivated by a combinatorial idea in \cite{Meng2017}, but the main proof is not a parallel translation, since the desired counting function is not obtained, as in \cite{Meng2017}, via Perron's formula.
\begin{Rk}
\label{Rk_on_Th_deg<n}
\begin{enumerate}[label = \roman*)]
\item\label{Item_IntegerCase} Note that in the case $\Delta_{\Omega_1}$ this result is \cite[Th.~2.5]{Cha2008}. Our formulas are more general analogues of the result in \cite{Meng2017} including the multiplicities of the zeros.
\item\label{Item_CompareOmegas} As $\lvert m_+(\chi) -\frac{\delta(\chi^2)}{2} \rvert \leq \lvert m_+(\chi) +\frac{\delta(\chi^2)}{2} \rvert $, we expect a more important bias in the race between polynomials with $\Omega(N) = k$ than in the race between polynomials with $\omega(N) = k$. Note also that if $m_{+}(\chi) =0$ for all $\chi$, the two mean values have different sign when $k$ is odd, hence we expect the two biases to be in different directions when $k$ is odd.
\item\label{Item_largestMult} We observe from the formula that the inverse zeros with largest multiplicity will determine the behavior of the function as $k$ grows. This is the point of Proposition~\ref{Prop k limit} below. Moreover the real zeros play an important role in determining the bias.
\item\label{Item_Range_k} For degree $X$ polynomials, the typical number of irreducible factors is $\log X$. Hence, one may expect an asymptotic formula which holds for $k\ll \log X$, or at least for $k=o(\log X)$. However, we are not able to reach this range in general, and the factor $d^k$ in the error term is inevitable in our proof. Through personal communication, we know that Sam Porritt is currently using a different method to study these asymptotic formulas \cite{Porritt}.
\end{enumerate}
\end{Rk}
In the case of the race of quadratic residues against non-quadratic residues modulo $M$, the expressions in \eqref{Form_deg=n} and \eqref{Form_deg=n_littleomega} can be simplified.
This is studied in more detail in Section~\ref{Sec_Examples}.
For the race between polynomials with $\Omega(N) = k$, we expect a bias in the direction of quadratic residues or non-quadratic residues according to the parity of $k$.
We show that the existence of the real zero $\sqrt{q}$ sometimes leads to extreme biases.
In the generic case, we expect $m_{\pm} = 0$ and that the other zeros are simple,
then the asymptotic formulas in Theorem~\ref{Th_Difference_k_general_deg<X} give a connection between $\Delta_{f_k}(X; M, A,B)$ and $\Delta_{f_1}(X; M, A,B)$ ($f=\Omega$ or $\omega$), similar to the case of products of primes \cite[Cor.~1.1, Cor.~2.1]{Meng2017}.
In this case we expect the polynomials with $\Omega(N) = k$ to prefer quadratic non-residue classes when $k$ is odd, and quadratic residue classes when $k$ is even, while the polynomials with $\omega(N) = k$ always prefer quadratic residue classes. Moreover, as $k$ increases, the biases become smaller and smaller in both cases. This observation is justified by Proposition~\ref{Prop k limit} (expected generic case).
\subsection{Further behaviour of the bias}
The asymptotic formula of Theorem~\ref{Th_Difference_k_general_deg<X} helps us understand the bias in the distribution of polynomials with a given number of irreducible factors in congruence classes.
For a sequence of polynomials with few irreducible factors, the following result gives a precise rate of convergence to $\tfrac{1}{2}$ for the bias studied in Theorem~\ref{Th central limit for M under LI}.
\begin{theo}\label{Th_CentralLimit}
Let $\lbrace M \rbrace$ be a sequence of square-free polynomials in $\mathbf{F}_{q}[t]$ satisfying \emph{(LI\ding{70})} and such that $\frac{2^{\omega(M)}}{\deg M} \rightarrow 0$.
Then, for $f = \Omega$ or $\omega$, as $\deg M \rightarrow\infty$,
the limiting distribution $\mu_{M,f_k}^{\mathrm{norm}}$ of
\begin{align*}
&\Delta_{f_k}^{\mathrm{norm}}(X;M) := \frac{\lvert \boxtimes\rvert \sqrt{q-1} }{\sqrt{q2^{\omega(M)-1}\deg M}}\Delta_{f_k}(X;M,\square,\boxtimes)
\end{align*}
exists and converges weakly to the standard Gaussian distribution.
More precisely, one has
\begin{equation*}
\sup_{x\in\mathbf{R}}\left\lvert \int_{-\infty}^{x}\diff\mu_{M,f_k}^{\mathrm{norm}} - \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-t^2/2} \diff t \right\rvert \ll \frac{\sqrt{2^{\omega(M)}}}{2^k\sqrt{\deg M}} + \frac{\log \deg M}{\deg M}.
\end{equation*}
In particular the bias dissipates as $\deg M$ gets large.
\end{theo}
\begin{Rk}
Note that a sequence of irreducible polynomials $M$ of increasing degree satisfies the hypothesis $\frac{2^{\omega(M)}}{\deg M} \rightarrow 0$; thus Theorem~\ref{Th_CentralLimit} generalizes \cite[Th.~6.2]{Cha2008}.
We observe in particular that the rate of convergence to the Gaussian distribution increases with $k$; this justifies an observation in the number field setting \cite{Meng2017}: the race seems to be less biased when $k$ is large.
\end{Rk}
In the other direction, fixing a modulus and letting $k$ grow, we obtain the following result.
\begin{theo}
\label{Th central limit for k under LI}
Let $\mathbf{F}_{q}$ be a finite field of odd characteristic and $M \in \mathbf{F}_q[t]$ satisfying \emph{(LI\ding{70})}.
Then, for $f =\Omega$ or $\omega$, the bias in the distributions of $\Delta_{f_k}(X;M,\square,\boxtimes)$ dissipates as $k\rightarrow\infty$.
\end{theo}
This is a corollary of Proposition~\ref{Prop k limit}, which is more general and unconditional.
\section{Limiting distribution and bias}\label{Sec_LimDist}
In this section, the assertions are given in the context of almost periodic functions as in \cite{ANS}, as we expect these to be useful for other work on Chebyshev's bias over function fields.
Our main results are based on the existence of a limiting distribution for functions defined over the integers; let us briefly recall the definitions and ideas used to obtain such results.
\begin{defi}
Let $F:\mathbf{N}\rightarrow\mathbf{R}$ be a real function.
We say that $F$ admits a limiting distribution if there exists a probability measure $\mu$ on the Borel sets of $\mathbf{R}$ such that
for any bounded Lipschitz continuous function $g$, we have
\begin{align*}
\lim_{Y\rightarrow\infty}\frac{1}{Y}\sum_{n\leq Y}g(F(n)) =
\int_{\mathbf{R}}g(t)\diff\mu(t).
\end{align*}
We call $\mu$ the limiting distribution of the function $F$.
\end{defi}
\begin{Rk}
Note that if the function $F$ admits a limiting distribution $\mu$ with $\mu(\lbrace 0\rbrace) = 0$, then by the dominated convergence theorem, the bias of $F$ towards positive values (see Definition~\ref{Defn-bias}) is well defined and we have $\dens(F >0) = \mu((0,\infty))$.
\end{Rk}
We focus on the limiting distribution to study the bias of the difference function.
For $f = \Omega$ or $\omega$, and for any $k\geq 1$, the fact that the function $\Delta_{f_k}(\cdot; M, A,B)$ admits a limiting distribution follows directly from the asymptotic formula of Theorem~\ref{Th_Difference_k_general_deg<X} and the following result.
\begin{prop}\label{Prop_limitingDist}
Let $\gamma_2,\ldots,\gamma_N \in (0,\pi)$ be distinct real numbers.
For any $C_0,c_1\in \mathbf{R}$, $c_2,\ldots, c_N \in \mathbf{C}^{*}$,
let $F:\mathbf{N} \rightarrow \mathbf{R}$ be a function satisfying
\begin{equation}\label{Almost periodic function}
F(n) = C_0 + c_1e^{in\pi} + \sum_{j=2}^{N}\left( c_je^{in\gamma_j} + \overline{c_j}e^{-in\gamma_j}\right) + o(1)
\end{equation}
as $n\rightarrow \infty$.
Then the function $F$ admits a limiting distribution $\mu$ with mean value $C_0$ and variance $c_1^2 + 2\sum\limits_{j=2}^{N}\lvert c_j\rvert^2$.
Moreover
\begin{enumerate}[label = \roman*)]
\item\label{Item_Support} the measure $\mu$ has support in $\left[C_0 - \lvert c_1\rvert - \sum_{j=2}^{N}2\lvert c_j\rvert, C_0 + \lvert c_1\rvert +\sum_{j=2}^{N}2\lvert c_j\rvert\right]$,\\
in particular, if $\lvert C_0 \rvert > \lvert c_1\rvert + \sum_{j=2}^{N}2\lvert c_j\rvert $ then $\dens(C_0F >0) = 1$;
\item\label{Item_continuous} if there exists $j\in \lbrace 2, \ldots, N\rbrace$ such that $\gamma_j \notin \mathbf{Q}\pi$, then $\mu$ is continuous,\\
in particular $\dens(F >0) = \mu((0,\infty))$;
\item\label{Item_symmetry} if the smallest sub-torus of $\mathbf{T}^{N}$ containing $\lbrace (n\pi, n\gamma_2,\ldots, n\gamma_N) : n\in \mathbf{Z} \rbrace$ is symmetric,
then the distribution $\mu$ is symmetric with respect to $C_0$;
\item\label{Item_Fourier} if the set $\lbrace \pi, \gamma_2,\ldots,\gamma_N\rbrace$ is linearly independent over $\mathbf{Q}$, then the Fourier transform $\hat\mu$ of the measure $\mu$ is given by
\[
\hat\mu(\xi) = e^{-iC_{0}\xi}\cos(c_1\xi)
\prod_{j=2}^{N}J_{0}\left( 2\lvert c_j \rvert \xi \right),
\]
where $J_{0}(z) = \int_{-\pi}^{\pi}\exp\left(iz\cos(\theta)\right) \frac{\diff\theta}{2\pi}$ is the $0$-th Bessel function of the first kind.
\end{enumerate}
\end{prop}
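As a numerical illustration of \ref{Item_Fourier} (this is a sanity check, not part of the argument), one can sample an almost periodic function of the form \eqref{Almost periodic function} and compare the empirical average of $e^{-i\xi F(n)}$ with the predicted product formula; the constants $C_0$, $c_1$, $c_2$ and the irrational angle $\gamma_2=\sqrt2$ below are arbitrary choices, not taken from the text.

```python
import cmath
import math

def J0(z, steps=4000):
    # 0-th Bessel function of the first kind via its integral
    # representation J0(z) = (1/2pi) int_{-pi}^{pi} exp(i z cos(t)) dt
    # (midpoint rule; very accurate for smooth periodic integrands).
    total = 0.0 + 0.0j
    for s in range(steps):
        t = -math.pi + (s + 0.5) * (2 * math.pi / steps)
        total += cmath.exp(1j * z * math.cos(t))
    return (total / steps).real

# Sample F(n) = C0 + c1*(-1)^n + 2*Re(c2*e^{i n g2}) with g2 irrational,
# so that {pi, g2} is linearly independent over Q.
C0, c1, c2, g2 = 0.3, 0.2, 0.1 + 0.05j, math.sqrt(2.0)

def F(n):
    return C0 + c1 * (-1) ** n + 2 * (c2 * cmath.exp(1j * n * g2)).real

xi = 1.3
Y = 200_000
empirical = sum(cmath.exp(-1j * xi * F(n)) for n in range(1, Y + 1)) / Y
predicted = cmath.exp(-1j * C0 * xi) * math.cos(c1 * xi) * J0(2 * abs(c2) * xi)
print(abs(empirical - predicted))  # small: the orbit equidistributes
```

The agreement reflects the Kronecker--Weyl averaging of Lemma~\ref{Th_KW} applied to the two-component orbit $(n\pi, n\gamma_2)$.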
Kowalski, \cite[Prop.~1.1]{Kowalski2010}, showed that in certain families of polynomials $M \in \mathbf{F}_q[t]$, the hypothesis of Linear Independence (LI) is satisfied generically when $q$ is large (with fixed characteristic) for the $L$-function of the primitive quadratic character modulo $M$.
That is, the imaginary parts of the zeros of $L(\cdot,\chi_{M})$ are linearly independent over $\mathbf{Q}$. In particular the hypotheses in \ref{Item_continuous}, \ref{Item_symmetry} and \ref{Item_Fourier} are satisfied generically for $F = \pi_{k}(\cdot,\chi_M)$ (see \eqref{Eq defi pi(chi)}).
We expect this to hold more generally, for example when racing quadratic residues against non-quadratic residues as in Section~\ref{Sec_Examples}.
Linear Independence has also been proved generically in other families of $L$-functions over function fields \cite{CFJ_Indep,Perret-Gentil}.
Proposition~\ref{Prop_limitingDist} is a consequence of a general version of the Kronecker--Weyl Equidistribution Theorem (see \cite[Lem.~2.7]{Humphries}, \cite[Th.~4.2]{Devin2018}, also \cite[Lem.~B.3]{MartinNg}).
\begin{lem}[Kronecker--Weyl]\label{Th_KW}
Let $\gamma_1,\ldots,\gamma_N \in \mathbf{R}$ be real numbers.
Denote by $A(\gamma)$ the closure of the subgroup $\lbrace y(\gamma_{1},\ldots,\gamma_{N}) \bmod (2\pi\mathbf{Z})^{N} : y\in\mathbf{Z}\rbrace$ in the $N$-dimensional torus $\mathbf{T}^{N}:= (\mathbf{R}/2\pi\mathbf{Z})^{N}$.
Then $A(\gamma)$ is a sub-torus of $\mathbf{T}^{N}$ and we have
for any continuous function $h: \mathbf{T}^{N}\rightarrow \mathbf{C}$,
\begin{equation*}
\lim_{Y\rightarrow\infty}\frac{1}{Y}\sum_{n=0}^{Y}h(n\gamma_{1},\ldots,n\gamma_{N})
= \int_{A(\gamma)}h(a)\diff\omega_{A(\gamma)}(a)
\end{equation*}
where $\omega_{A(\gamma)}$ is the normalized Haar measure on $A(\gamma)$.
\end{lem}
\begin{proof}[Proof of Proposition~\ref{Prop_limitingDist}]
As in the proof of \cite[Th.~3.2]{Cha2008}, we combine Lemma~\ref{Th_KW} with the asymptotic hypothesis \eqref{Almost periodic function} and Helly's selection theorem \cite[Th.~25.8 and Th.~25.10]{Billingsley}. From this, one can show that the corresponding limiting distribution exists and is a push-forward of the Haar measure on the sub-torus generated by the $\gamma_j$'s.
Then \ref{Item_Support} is straightforward, and since the measure has compact support, its moments can be computed using compactly supported approximations of polynomials; this gives the result on the mean value and variance.
The point \ref{Item_continuous} follows along the same lines as \cite[Th.~2.2]{Devin2018}, using the fact that the set of zeros is finite and being more careful about the rational multiples of $\pi$ to ensure that the sub-torus is not discrete (see also \cite[Th.~4]{Devin2019}).
The point \ref{Item_symmetry} follows directly from the proof of \cite[Th.~2.3]{Devin2018}.
To prove the point \ref{Item_Fourier}, we compute the Fourier transform:
\begin{align*}
\hat\mu(\xi) &= \lim_{Y\rightarrow\infty} \frac{1}{Y} \sum_{n\leq Y}\exp(-i\xi F(n)) \\
&= e^{-iC_0 \xi} \int_{A(\pi,\gamma_2,\ldots,\gamma_N)}
\exp\Bigg(-i\xi\bigg( c_1(-1)^{a_1} + \sum_{j=2}^{N}2\re(c_je^{ia_j}) \bigg)\Bigg) \diff\omega(a) \\
&= e^{-iC_0\xi} \frac{1}{2}\left( e^{i\xi c_1} + e^{-i\xi c_1} \right) \prod_{j=2}^{N}\int_{-\pi}^{\pi}\exp\left(i\xi 2 \lvert c_j\rvert \cos(\theta)\right) \frac{\diff\theta}{2\pi},
\end{align*}
where in the last line we use the linear independence to write $A(\pi,\gamma_2,\ldots,\gamma_N) = \lbrace 0, \pi\rbrace \times \mathbf{T}^{N-1}$, and the corresponding Haar measure as the product of the Haar measures. This concludes the proof.
\end{proof}
\begin{Rk}
Note that in the case where all the $\gamma_j$'s are rational multiples of $\pi$, the main term in the asymptotic expansion \eqref{Almost periodic function} is a periodic function. Thus the limiting distribution obtained in Proposition~\ref{Prop_limitingDist} is a linear combination of Dirac deltas supported on the image of this periodic function. If this image does not contain $0$, the limiting distribution has no mass at the point $0$, hence the bias is well defined.
Otherwise the determination of the bias requires studying lower order terms in the asymptotic expansion, which are for now out of reach.
\end{Rk}
\section{Special values of the bias}\label{Sec_Examples}
In this section, we assume that the field $\mathbf{F}_{q}$ has characteristic $\neq 2$ and that the polynomial $M$ is square-free. When $q$ and the degree of $M$ are small, it is possible to compute the Dirichlet $L$-functions associated to the quadratic characters modulo $M$ explicitly.
In particular, we illustrate our results in the case of races between quadratic residues ($\square$) and non-quadratic residues ($\boxtimes$) modulo~$M$.
In this case the asymptotic formula of Theorem~\ref{Th_Difference_k_general_deg<X} is a sum over quadratic characters.
Indeed, if $\chi$ is a non-trivial, non-quadratic character, then it induces a non-trivial character on the subgroup $\square$ of quadratic residues, and by orthogonality one has
$c(\chi,\square,\boxtimes) = 0.$
For $\chi$ a quadratic character, one has
\begin{align*}
c(\chi,\square,\boxtimes) &=
\frac{1}{\phi(M)}\left( \frac{1}{\lvert \square\rvert}\sum_{a\in \square}1 -
\frac{-1}{\lvert \boxtimes\rvert} \sum_{a\in \square}1 \right)\\
&= \frac{1}{\phi(M)}\left( 1 +
\frac{\lvert \square\rvert}{\lvert \boxtimes\rvert} \right) =
\frac{1}{\lvert \boxtimes\rvert}.
\end{align*}
Thus, for $ k =o((\log X)^{\frac12})$, one has
\begin{multline}\label{Formula_Res_vs_NonRes}
\Delta_{\Omega_k}(X; M, \square,\boxtimes) \\
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}}\Bigg(\ \left(m_+(\chi) +\tfrac{1}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q}-1} +\left(m_-(\chi)+\tfrac{1}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+\sum_{\gamma_j\neq 0, \pi} m_j(\chi)^k \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} e^{iX\gamma_{j}(\chi)} \Bigg) + O\left(\frac{d^k k^2}{\gamma(M)\log X}\right) \Bigg\},
\end{multline}
and, if $q \geq 5$,
\begin{multline}\label{Formula_Res_vs_NonRes_littleomega}
\Delta_{\omega_k}(X; M, \square,\boxtimes) \\
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}}\Bigg(\ \left(m_+(\chi) -\tfrac{1}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q}-1} +\left(m_-(\chi)-\tfrac{1}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+\sum_{\gamma_j\neq 0, \pi} m_j(\chi)^k \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} e^{iX\gamma_{j}(\chi)} \Bigg) + O\left(\frac{d^k k^2}{\gamma(M)\log X}\right) \Bigg\}.
\end{multline}
By Proposition~\ref{Prop_limitingDist}, we know that for all $k$ the function in \eqref{Formula_Res_vs_NonRes} admits a limiting distribution $\mu_{M,\Omega_k}$ with mean value
\begin{equation}\label{Form_MeanValue}
\mathbf{E}\mu_{M,\Omega_k} = \frac{(-1)^k}{\lvert \boxtimes\rvert} \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}}\ \left(m_+(\chi) +\frac{1}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q}-1},
\end{equation}
and variance
\begin{align*}
&\Var(\mu_{M,\Omega_k}) \\
&= \frac{1}{\lvert \boxtimes\rvert^2} \Bigg( \Bigg(\sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}}\left(m_-(\chi)+\frac{1}{2}\right)^{k}\Bigg)^2 \frac{q}{(\sqrt{q} + 1)^2}
+\sum_{\alpha_j\neq \pm \sqrt{q}}\Bigg( \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}} m_j^k(\chi) \frac{\lvert\alpha_j\rvert}{\lvert\alpha_j -1\rvert}\Bigg)^2 \Bigg).
\end{align*}
The results are similar for $\Delta_{\omega_k}(X; M, \square,\boxtimes)$, with $\left(m_{\pm}(\chi)+\frac{1}{2}\right)$ replaced by $\left(m_{\pm}(\chi)-\frac{1}{2}\right)$; we denote by $\mu_{M,\omega_k}$ the corresponding limiting distribution.
In the following section we study various square-free polynomials $M$ and we denote by $\chi_M$ the primitive quadratic character modulo $M$.
In the case of prime numbers, it has been observed that the bias tends to disappear as $k\rightarrow\infty$. Moreover, in the case of the race with fixed $\Omega$, the bias changes direction with the parity of $k$, whereas in the race with fixed $\omega$ the bias always stays in the direction of the quadratic residues. We present here various examples where this does (or does not) happen in the context of irreducible polynomials.
\subsection{Case with no real inverse zero}
In the generic case, we expect that $m_{\pm}(\chi) =0$.
In particular, for $k$ even, $\mu_{M,\Omega_k}= \mu_{M,\omega_k}$, and, if the non-real zeros are independent of $\pi$, for $k$ odd, $\mu_{M,\Omega_k}$ is the reflection of $\mu_{M,\omega_k}$ with respect to $0$.
Moreover, for $f = \Omega$ or $\omega$, the mean value of $\mu_{M,f_k}$ becomes negligible as $k$ grows.
This situation is very similar to the case of primes in $\mathbf{Z}$ (see \cite{Meng2017}).
More precisely, we can simplify the expression of the mean value in \eqref{Form_MeanValue}: there are $2^{r}-1$ non-trivial quadratic characters modulo $M$, where $r$ is the number of irreducible factors of $M$, and $\lvert\square\rvert = 2^{-r}\phi(M)$, so one has
\begin{align*}
\epsilon_f^k\mathbf{E}\mu_{M,f_k} = \frac{1}{\lvert \boxtimes\rvert} \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 \\ \chi\neq \chi_0}} \left(\frac{1}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q}-1}
= \frac{1}{\lvert \square\rvert} \frac{1}{2^k} \frac{\sqrt{q}}{\sqrt{q}-1},
\end{align*}
where $\epsilon_{\Omega} = -1$, $\epsilon_{\omega} = 1$.
Note that in this case, $\mathbf{E}\mu_{M,\Omega_k}$ alternates sign as $k$ changes parity and $\mathbf{E}\mu_{M,\omega_k}$ has the same absolute value but stays positive.
Finally, if the sum over the non-real inverse zeros is not empty, one has
\begin{equation*}
\mathbf{E}\mu_{M,f_k} \ll_M \frac{\sqrt{\Var(\mu_{M,f_k})}}{2^k}.
\end{equation*}
This hints towards a vanishing bias as $k$ gets large, see Proposition~\ref{Prop k limit} for a precise statement.
Let us start with an irreducible polynomial $M$ (as in \cite[Sec.~5]{Cha2008}).
Assume that the $L$-function $\mathcal{L}(\cdot,\chi_M)$ has only simple zeros that are not real.
Then for $k\geq 1$ we have the formulas
\begin{multline*}
\Delta_{\Omega_k}(X; M, \square,\boxtimes)\\
=(-1)^{k+1} \left\{ \Delta_{\Omega_1}(X; M, \square,\boxtimes)
+\frac{ 1 - \frac{1}{2^{k-1}} }{2\lvert\square\rvert} \left[ \frac{\sqrt{q}}{\sqrt{q}-1}+(-1)^X \frac{\sqrt{q}}{\sqrt{q}+1} \right] + O_{M}\left(\frac{d^k k^2}{\log X} \right)\right\},
\end{multline*}
\begin{multline*}
\Delta_{\omega_k}(X; M, \square,\boxtimes)\\
=(-1)^{k+1} \left\{ \Delta_{\Omega_1}(X; M, \square,\boxtimes)
+\frac{1}{2\lvert\square\rvert} \left[ \frac{\sqrt{q}}{\sqrt{q}-1}+(-1)^X \frac{\sqrt{q}}{\sqrt{q}+1} \right]\right\} \\ +\frac{1}{2^k \lvert\square\rvert} \left[ \frac{\sqrt{q}}{\sqrt{q}-1}+(-1)^X \frac{\sqrt{q}}{\sqrt{q}+1} \right] + O_{M}\left(\frac{d^k k^2}{\log X} \right).
\end{multline*}
Note that the term $\frac{ -1 }{2\lvert \square\rvert} \frac{\sqrt{q}}{\sqrt{q}-1}$ above is the mean value $\mathbf{E}\mu_{M,\Omega_1}$ of the limiting distribution associated to the function $ \Delta_{\Omega_1}(X; M, \square,\boxtimes)$.
Thus, up to a change of sign, the function $\Delta_{f_k}(\cdot; M, \square,\boxtimes)$ satisfies properties similar to those of the function $\Delta_{\Omega_1}(\cdot; M, \square,\boxtimes)$ regarding the behavior at infinity and the limiting distribution, with the mean value of the limiting distribution going to $0$ as $k$ grows.
\begin{ex}[Bias in the ``wrong direction'']
In \cite[Ex.~5.3]{Cha2008}, Cha studies the polynomial $M = t^5 + 3t^4 + 4t^3 + 2t+ 2 \in \mathbf{F}_{5}[t]$; from his work, we observe that the function\footnote{Note that \cite[Ex.~5.3]{Cha2008} contains a typo; we have $\mathcal{L}(u,\chi_M) = (1 - 2 \sqrt{5} \cos(\tfrac{\pi}{5})u + 5u^2 )(1 - 2 \sqrt{5} \cos(\tfrac{2\pi}{5})u + 5u^2 ).$}
$$X \mapsto\Delta_{\Omega_1}(X; M, \square,\boxtimes)
+\frac{1}{2\lvert\square\rvert} \left[ \frac{\sqrt{5}}{\sqrt{5}-1}+(-1)^X \frac{\sqrt{5}}{\sqrt{5}+1} \right]$$
is periodic of period~$10$ and takes positive values larger than $\frac{1}{2\lvert\square\rvert} \left[ \frac{\sqrt{5}}{\sqrt{5}-1}+(-1)^X \frac{\sqrt{5}}{\sqrt{5}+1} \right]$ for~$6$ values of~$X \bmod 10$.
Thus there is a bias in the ``wrong direction'': one has for all $k\geq 1$
\begin{align*}
\dens((-1)^k\Delta_{\Omega_k}(\cdot;M,\square,\boxtimes) > 0) = \frac{4}{10} < \frac12.
\end{align*}
Contrary to what is expected in the generic case, when $k$ is odd the bias is in the direction of the quadratic residues.
Similarly we obtain that
\begin{align*}
\dens(\Delta_{\omega_1}(\cdot;M,\square,\boxtimes) > 0) = \frac{7}{10} > \frac12,
\end{align*}
and for $k \geq 2$,
\begin{align*}
\dens((-1)^{k+1}\Delta_{\omega_k}(\cdot;M,\square,\boxtimes) > 0) = \frac{6}{10} > \frac12.
\end{align*}
In particular the bias changes direction according to the parity of $k$, and when $k$ is odd the bias is in the direction of the quadratic residues.
\end{ex}
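The densities $4/10$ and $7/10$ above can be recovered numerically from the main terms of \eqref{Formula_Res_vs_NonRes} with $k=1$: by the factorization of $\mathcal{L}(u,\chi_M)$ recalled in the footnote, the inverse zeros are $\sqrt5\,e^{\pm i\pi/5}$ and $\sqrt5\,e^{\pm 2i\pi/5}$, all simple, and the sign pattern over one period of length $10$ gives the counts. The sketch below is an illustration, not part of the argument.

```python
import cmath
import math

sq = math.sqrt(5.0)
a = 0.5 * sq / (sq - 1)   # constant coefficient: (m_+ + 1/2)^1 = 1/2, k = 1
b = 0.5 * sq / (sq + 1)   # coefficient of the (-1)^X term
gammas = [math.pi / 5, 2 * math.pi / 5]  # zeros in (0, pi), all simple

def zero_sum(X):
    # sum over non-real inverse zeros of 2*Re(alpha/(alpha-1) * e^{iX gamma})
    s = 0.0
    for g in gammas:
        alpha = sq * cmath.exp(1j * g)
        s += 2 * (alpha / (alpha - 1) * cmath.exp(1j * X * g)).real
    return s

# Main periodic terms (period 10) of -|NQR| * Delta_{Omega_1}(X) and of
# |NQR| * Delta_{omega_1}(X); only the signs matter for the densities.
minus_Delta_Omega1 = [a + b * (-1) ** X + zero_sum(X) for X in range(10)]
Delta_omega1 = [a + b * (-1) ** X - zero_sum(X) for X in range(10)]
print(sum(v > 0 for v in minus_Delta_Omega1),
      sum(v > 0 for v in Delta_omega1))  # 4 and 7 residues out of 10
```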
As observed in \cite{Li2018}, when $M$ is not irreducible, the $L$-function $\mathcal{L}(\cdot,\chi_M)$ can have non-simple zeros and real zeros. Moreover in Proposition~\ref{Prop Chebyshev many factors}, we obtain extreme biases in races modulo polynomials $M$ with many irreducible factors.
We now focus on square-free, non-irreducible polynomials.
\begin{ex}[Dissipating bias in case of double non-real zeros]
\label{Ex_double}
Take $q=5$, and
$M= t^6 + 2t^4 + 3t + 1$ in $\mathbf{F}_5[t]$.
One has
\[\mathcal{L}(u,\chi_M) = (1 + u + 5u^2)^2(1-u) = (1 - 2\sqrt{5}\cos(\theta_1) u + 5u^2)^2 (1-u),\]
where $\theta_1 = \pi + \arctan\sqrt{19}$.
The polynomial $M$ has two irreducible factors of degree $3$ in $\mathbf{F}_{5}[t]$. We denote $M=M_1 M_2$, and for $i=1$, $2$ let $\chi_i$ be the character modulo $M$ induced by the character $\chi_{M_i}$.
We have
\[\mathcal{L}(u,\chi_1) = (1-u +5u^2)(1-u^{3}) = (1 - 2\sqrt{5}\cos(\theta_1 - \pi) u + 5u^2) (1-u^3) \]
and
\[\mathcal{L}(u,\chi_2) = (1 +3u +5u^2)(1-u^{3})
= (1 - 2\sqrt{5}\cos(\theta_2) u + 5u^2) (1-u^3), \]
where $\theta_2 = \pi + \arctan(\sqrt{11}/3)$;
the factor $(1-u^3)$ comes from the fact that the $\chi_i$ are not primitive (see e.g. \cite[Prop.~6.4]{Cha2008}).
Inserting this information in \eqref{Formula_Res_vs_NonRes}
we obtain
\begin{multline*}
\Delta_{\Omega_k}(X; M, \square,\boxtimes)
\\ = \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg(\ \frac{3}{2^{k+2}} \left(5+ \sqrt{5} + (-1)^{X} (5-\sqrt{5})\right)
+ 2^{k+1}\re\left(\frac{10}{11 + i\sqrt{19}} e^{iX\theta_1}\right) \\ +
2\re\left(\frac{10}{9+ i\sqrt{19}}
e^{iX(\theta_1 -\pi)}\right)+
2\re\left(\frac{10}{13 + i\sqrt{11}} e^{iX \theta_2}\right) \Bigg) + O_{M}\left(\frac{6^k k^2}{\log X}\right) .
\end{multline*}
We observe that $\theta_1$ is not a rational multiple of $\pi$.
This follows from the fact that for any $n\in\mathbf{N}$ the $5$-adic valuation of $\cos(n\theta_1)$ is $-n/2$, so that $\cos(n\theta_1)=\pm 1$ is impossible except for $n=0$.
Hence by Proposition~\ref{Prop_limitingDist}.\ref{Item_continuous},
for each $k\geq 1$, the corresponding limiting distribution is continuous.
Moreover it has mean value $ \mathbf{E} \asymp\frac{(-1)^k}{2^{k}(k-1)!} $ and variance $\Var \asymp\frac{2^{2k}}{(k-1)!^2}$.
Note that LI is not satisfied in this example. However, Damien Roy and Luca Ghidelli observed that the set $\lbrace \pi, \theta_1,\theta_2 \rbrace$ is linearly independent over $\mathbf{Q}$: for any $(a,b,c) \in \mathbf{Z}^3$, one sees using the Chebyshev polynomials of the second kind that $\sin(a\pi + b\theta_1) \in \sqrt{19}\mathbf{Q}(\sqrt{5})$ and $\sin(c\theta_2) \in \sqrt{11}\mathbf{Q}(\sqrt{5})$, hence they can be equal only if both vanish.
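The fact that $\theta_1\notin\mathbf{Q}\pi$ can also be double-checked for small $n$ with exact arithmetic in $\mathbf{Q}(\sqrt5)$: writing $\cos(n\theta_1)=a_n+b_n\sqrt5$ with $a_n,b_n\in\mathbf{Q}$ and $\cos\theta_1=-\sqrt5/10$, the recurrence $\cos((n{+}1)\theta)=2\cos\theta\cos(n\theta)-\cos((n{-}1)\theta)$ shows that $\cos(n\theta_1)$ never equals $\pm1$. This is an illustration, not a proof.

```python
from fractions import Fraction

# cos(theta_1) = -1/(2 sqrt(5)) = -sqrt(5)/10 lies in Q(sqrt(5)); represent
# cos(n theta_1) = a_n + b_n sqrt(5) with rational a_n, b_n and iterate
# cos((n+1)t) = 2 cos(t) cos(nt) - cos((n-1)t), where multiplication gives
# 2 cos(theta_1) * (a + b sqrt5) = -b - (a/5) sqrt5.
def cos_multiples(N):
    vals = [(Fraction(1), Fraction(0)), (Fraction(0), Fraction(-1, 10))]
    for _ in range(2, N + 1):
        a, b = vals[-1]        # cos(n theta_1)
        pa, pb = vals[-2]      # cos((n-1) theta_1)
        vals.append((-b - pa, -(a / 5) - pb))
    return vals

vals = cos_multiples(200)
hits = [n for n, (a, b) in enumerate(vals) if n > 0 and abs(a) == 1 and b == 0]
print(hits)  # []: cos(n theta_1) != +-1 for 1 <= n <= 200
```

For instance $\cos(2\theta_1)=2\cos^2\theta_1-1=-9/10$, which the recurrence reproduces exactly.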
\begin{table}
\begin{tabular}{|r|c|c|}
\hline
$k$ & $\#\{ X \leq 10^9 : {\Delta}_{\Omega_k}(X; M, \square,\boxtimes) >0 \} $& $\#\{ X \leq 10^9 : {\Delta}_{\omega_k}(X; M, \square,\boxtimes) >0 \} $\\
\hline
1& $194\ 355\ 543$ & $805\ 644\ 606$\\
2 & $563\ 506\ 459$ &$ 563\ 506\ 459 $\\
3 & $484\ 542\ 923$ &$515\ 457\ 280$\\
4 & $503\ 903\ 947$ &$503\ 903\ 947$ \\
5 & $499\ 014\ 553$ & $500\ 985\ 439$\\
6 & $500\ 247\ 844$ & $500\ 247\ 844 $ \\
7 & $499\ 937\ 823$ & $500\ 062\ 193$\\
8 & $ 500\ 015\ 580$ & $500\ 015\ 580$ \\
9 & $499\ 996\ 073$ &$500\ 003\ 876$ \\
10 & $500\ 000\ 986$ & $500\ 000\ 986$ \\
\hline
\end{tabular}
\caption{Approximation of the bias of $\Delta_{\Omega_k}$ and $\Delta_{\omega_k}$ for $k \in \lbrace 1,\ldots, 10\rbrace$ }\label{Table_Ex2}
\end{table}
We observe that the term $2^{k+1}\re\left(\frac{10}{11 + i\sqrt{19}} e^{iX\theta_1}\right)$ will become the leading term as $k$ grows. This term corresponds to a symmetric distribution with mean value equal to zero. Proposition~\ref{Prop k limit} predicts that the bias tends to $\frac{1}{2}$ as $k$ grows.
We observe this tendency in the data;
in Table~\ref{Table_Ex2} we present an approximation of the bias for the functions $\Delta_{f_k}(X; M, \square,\boxtimes)$ (up to the $o(1)$ term), with $f = \Omega$ or $\omega$,
computed for $1\leq X \leq 10^9$ and $1\leq k\leq 10$.
\end{ex}
\subsection{Case where $\sqrt{q}$ or $-\sqrt{q}$ is an inverse zero.}\label{subsec_examples_realZero}
In \cite{Li2018}, Li showed the existence of a family of polynomials $M$ satisfying $m_{+}(\chi_M) >0$.
We now use some of these polynomials to obtain completely biased races between quadratic residues and non-quadratic residues.
\begin{ex}[Complete bias in case of a zero at $\tfrac{1}{2}$]
\label{Ex_sqrt(q)}
Taking $q=9$, we study polynomials with coefficients in $\mathbf{F}_{9}= \mathbf{F}_{3}[a]$ (i.e. $a$ is a generator of $\mathbf{F}_9$ over $\mathbf{F}_3$).
Let
$M= t^4 + 2t^3 + 2t + a^7$.
This polynomial is square-free and has the particularity that $m_{+}(\chi_{M}) =2$ where $\chi_{M}$ is the primitive quadratic character modulo $M$ (see \cite{Li2018}).
More precisely,
\[\mathcal{L}(u,\chi_M) = (1 - 3u)^2.\]
The polynomial $M$ has two irreducible factors of degree $2$ in $\mathbf{F}_{9}[t]$. We denote $M=M_1 M_2$, and for $i=1$, $2$, let $\chi_i$ be the character modulo $M$ induced by the character $\chi_{M_i}$.
Then for $i = 1$, $2$, one has
\[\mathcal{L}(u,\chi_i) = (1-u)(1-u^{2}).\]
In particular, the only inverse zero of a quadratic character modulo $M$ with norm $\sqrt{9} = 3$ is the real zero $\alpha = 3$ with multiplicity $2$.
Inserting this information in \eqref{Formula_Res_vs_NonRes} and \eqref{Formula_Res_vs_NonRes_littleomega}, we obtain
\begin{equation*}
\Delta_{\Omega_k}(X; M, \square,\boxtimes)
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ \frac{2 + 5^k}{2^k} \frac{3}{2} +\frac{3}{2^k} \frac{3}{4}(-1)^{X} \Bigg\}
+ O_{M}\left(\frac{4^k k^2}{\log X}\right),
\end{equation*}
and
\begin{equation*}
\Delta_{\omega_k}(X; M, \square,\boxtimes)
= \frac{1}{\lvert \boxtimes\rvert} \Bigg\{ \frac{2 + (-3)^k}{2^k} \frac{3}{2} +\frac{3}{2^k} \frac{3}{4}(-1)^{X} \Bigg\}
+ O_{M}\left(\frac{4^k k^2}{\log X}\right).
\end{equation*}
In each case, for each $k\geq 1$, the limiting distribution is a sum of two Dirac deltas, symmetric with respect to the mean value.
One can observe that, in each case and for any $k\geq 2$, the constant term is larger in absolute value than the oscillating term.
We deduce that, for $k\geq 2$,
\[\dens ((-1)^k \Delta_{\Omega_k}(\cdot;M,\square,\boxtimes) > 0)
= \dens ((-1)^k \Delta_{\omega_k}(\cdot;M,\square,\boxtimes) > 0)= 1.
\]
We say that the bias is complete.
Note that in this case, contrary to the case of prime numbers, when $k$ is odd, the function~$\Delta_{\omega_k}(\cdot;M,\square,\boxtimes)$ does not have a bias towards quadratic residues.
\end{ex}
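The complete-bias claim can be checked mechanically from the two displayed formulas: for $k\geq 2$ the constant term exceeds the oscillating term in absolute value, for both $\Omega_k$ and $\omega_k$. A quick sanity check (with $\sqrt q = 3$):

```python
def main_terms(k):
    # Main terms of |NQR|*(-1)^k*Delta_{Omega_k} and |NQR|*Delta_{omega_k}
    # for M = t^4 + 2t^3 + 2t + a^7 over F_9: the only inverse zero of
    # norm 3 is alpha = 3, with multiplicity 2, so m_+ +- 1/2 = 5/2, 3/2.
    osc = 3 / 2 ** k * 3 / 4                       # coefficient of (-1)^X
    const_Omega = (2 + 5 ** k) / 2 ** k * 3 / 2
    const_omega = (2 + (-3) ** k) / 2 ** k * 3 / 2
    return const_Omega, const_omega, osc

for k in range(2, 21):
    const_Omega, const_omega, osc = main_terms(k)
    # the constant term dominates, so the sign does not depend on X
    assert abs(const_Omega) > osc and abs(const_omega) > osc
print("complete bias confirmed for 2 <= k <= 20")
```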
\begin{Rk}
We note that the complete bias obtained in Example~\ref{Ex_sqrt(q)} could be one of the simplest ways to observe such a phenomenon.
Previously, in the setting of prime number races, Fiorilli \cite{Fiorilli_HighlyBiased} observed that arbitrarily large biases could be obtained in the race between quadratic residues and non-quadratic residues modulo an integer with many prime factors (see also Proposition~\ref{Prop Chebyshev many factors} for a translation in our setting).
Fiorilli's large bias is due to the squares of prime numbers.
Note that over number fields, the infinitude of zeros of the $L$-functions is (under the GRH) an obstruction to the existence of complete biases in prime number races with positive coefficients (see \cite[Rk.~2.5]{RS}).
The first observation of a complete bias is in \cite[Th.~1.5]{CFJ} in the context of Mazur's question on Chebyshev's bias for elliptic curves over function fields.
As in \cite{CFJ}, our complete bias is due to a ``large rank'' i.e. a vanishing of the $L$-function at the central point.
\end{Rk}
\begin{ex}[Absence of bias in case of a zero at $\tfrac{1}{2} + i\pi$]
\label{Ex_-sqrt(q)}
Taking $q=9$, we study polynomials with coefficients in $\mathbf{F}_{9}= \mathbf{F}_{3}[a]$ (as in Example~\ref{Ex_sqrt(q)}).
Let
$M= t^3 -t$.
This polynomial is square-free and has the particularity that $m_{-}(\chi_{M}) =2$.
More precisely,
\[\mathcal{L}(u,\chi_M) = (1 + 3u)^2.\]
The polynomial $M$ has three irreducible factors of degree $1$ in $\mathbf{F}_{9}[t]$. We denote $M=M_1 M_2 M_3$, and for $i=1$, $2$, $3$ let $\chi_i$ be the character modulo $M$ induced by the character $\chi_{M_i}$.
For $i\neq j \in \lbrace 1,2,3\rbrace$ one has
\begin{equation*}
\mathcal{L}(u,\chi_i) = (1-u)^2, \quad \text{ and } \quad
\mathcal{L}(u,\chi_i\chi_j) = (1-u)^2.
\end{equation*}
In particular, the only inverse zero of a quadratic character modulo $M$ with norm $\sqrt{9} = 3$ is the real zero $\alpha = -3$ with multiplicity $2$.
Inserting this information into \eqref{Formula_Res_vs_NonRes} and \eqref{Formula_Res_vs_NonRes_littleomega}, we obtain
\begin{equation*}
\Delta_{\Omega_k}(X; M, \square,\boxtimes)
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ \ \frac{7}{2^k} \frac{3}{2} + \frac{6 + 5^k}{2^k} \frac{3}{4}(-1)^{X} \Bigg\}
+ O_{M}\left(\frac{3^k k^2}{\log X}\right) ,
\end{equation*}
and
\begin{equation*}
\Delta_{\omega_k}(X; M, \square,\boxtimes)
= \frac{1}{\lvert \boxtimes\rvert} \Bigg\{ \ \frac{7}{2^k} \frac{3}{2} + \frac{6 + (-3)^k}{2^k} \frac{3}{4}(-1)^{X} \Bigg\}
+ O_{M}\left(\frac{3^k k^2}{\log X}\right).
\end{equation*}
In each case, for each fixed $k$, the limiting distribution is again a sum of two Dirac deltas, symmetric with respect to the mean value.
We observe that for $k =1$ the constant term dominates the sign of the function so there are complete biases $\dens(\Delta_{\Omega_1}(X;M,\square,\boxtimes) > 0) = 0$ and $\dens(\Delta_{\omega_1}(X;M,\square,\boxtimes) > 0) = 1$.
For $k\geq 2$, the two Dirac deltas are each on one side of zero, hence
\[
\dens(\Delta_{\Omega_k}(\cdot;M,\square,\boxtimes) > 0) = \dens(\Delta_{\omega_k}(\cdot;M,\square,\boxtimes) > 0)= \frac{1}{2},
\]
the race is unbiased.
\end{ex}
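As in the previous example, both claims reduce to comparing the constant and oscillating coefficients in the two displays: the constant term dominates for $k=1$ (complete bias), and the oscillating term dominates for $k\geq2$ (no bias). A quick check:

```python
def terms(k):
    # For M = t^3 - t over F_9 the only inverse zero of norm 3 is
    # alpha = -3, with multiplicity 2; there are seven non-trivial
    # quadratic characters, whence the constant 7/2^k * 3/2.
    const = 7 / 2 ** k * 3 / 2
    osc_Omega = (6 + 5 ** k) / 2 ** k * 3 / 4      # coefficient of (-1)^X
    osc_omega = (6 + (-3) ** k) / 2 ** k * 3 / 4
    return const, osc_Omega, osc_omega

const, osc_Omega, osc_omega = terms(1)
assert const > abs(osc_Omega) and const > abs(osc_omega)  # k = 1: complete bias
for k in range(2, 21):
    const, osc_Omega, osc_omega = terms(k)
    assert abs(osc_Omega) > const and abs(osc_omega) > const  # k >= 2: unbiased
print("ok")
```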
The examples in this section illustrate the following more general result: we can always find unbiased and completely biased races.
\begin{prop}
\label{Prop using Honda Tate}
Let $\mathbf{F}_{q}$ be a finite field of odd characteristic. Then there exists $M_{\frac12}\in \mathbf{F}_{q}[t]$ such that,
for $f= \Omega$ or $\omega$, and for $k$ large enough, one has
$$ \dens(\Delta_{f_k}(\cdot;M_{\frac12},\square,\boxtimes)>0) = \frac12.$$
Moreover, if $q >3$, there exists $M_{1}\in \mathbf{F}_{q}[t]$ such that,
for $f= \Omega$ or $\omega$, and for $k$ large enough, one has
$$ \dens((-1)^k\Delta_{f_k}(\cdot;M_1,\square,\boxtimes)>0) = 1.$$
\end{prop}
\begin{Rk}
It is interesting to note that in the case of an extreme bias, the bias for the function $\Delta_{\omega_k}(\cdot;M,\square,\boxtimes)$ changes direction with the parity of $k$, whereas in the case of integers \cite{Meng2017} the analog function has a bias towards squares independently of the parity of $k$.
\end{Rk}
\begin{proof}
In the case where $q$ is a square,
this result is a consequence of the Honda--Tate theorem for elliptic curves: by \cite[Th.~4.1]{Waterhouse} there exist two elliptic curves $E_{\pm}$ over $\mathbf{F}_{q}$ whose Weil polynomials are $P_{\pm}(u) = (1 \mp \sqrt{q}u)^2$.
If $q >3$ is not a square, by \cite[Th.~4.1]{Waterhouse}, there exists an elliptic curve $E_{\frac12}$ over $\mathbf{F}_{q}$ whose Weil polynomial is $P_{\frac12}(u) = 1 + qu^2$, and by \cite[Th.~1.2]{HNR}, there exists a hyperelliptic curve $C_1$ of genus $2$ whose Weil polynomial is $P_1(u) = (1 - qu^2)^2$.
Since $q$ is odd, using a Weierstrass form, each of the elliptic curves $E_{a}$, where $a = +,-$ or $\frac12$, has an affine model with equation $y^2 = M_{a}(x)$, where $M_{a} \in \mathbf{F}_q[t]$ has degree $3$.
Similarly the hyperelliptic curve $C_1$ has an affine model with equation $y^2 = M_{1}(x)$, with $M_1 \in \mathbf{F}_q[t]$ of degree $5$.
Then $\mathcal{L}(u,\chi_{M_{a}}) = P_{a}(u)$, for $a\in \lbrace +,-,\frac12,1\rbrace$.
For $a\in \lbrace +,-,\frac12\rbrace$, let $D$ be a strict divisor of $M_a$, then $\deg D\leq 2$ so $\mathcal{L}(u,\chi_{D})$ does not have inverse zeros of norm $\sqrt{q}$.
If $D$ is a strict divisor of $M_1$, then $\mathcal{L}(u,\chi_{D}) \in \mathbf{Z}[u]$ has at most two inverse zeros of norm $\sqrt{q}$, and they are conjugate; in particular, these inverse zeros are simple.
Thus the case where $q$ is a square follows in the same way as in Examples~\ref{Ex_sqrt(q)} and~\ref{Ex_-sqrt(q)}.
In the case where $q$ is not a square, using the information above in~\eqref{Formula_Res_vs_NonRes} we obtain, for $f = \Omega$ or $\omega$,
\begin{multline*}
\Delta_{f_k}(X; M_{\frac12}, \square,\boxtimes)
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ \left( -\tfrac{\epsilon_{f}}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q}-1} +\left(-\tfrac{\epsilon_f }{2}\right)^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+2\re\left( \frac{i\sqrt{q}}{i\sqrt{q} -1} e^{iX\frac{\pi}2}\right)\Bigg\} + O_k\left((\log X)^{-1}\right),
\end{multline*}
where $\epsilon_{\Omega} = -1$, $\epsilon_{\omega} = 1$.
The periodic part inside the brackets takes $4$ different values: $$2q\left(\left( -\tfrac{\epsilon_{f}}{2}\right)^k\tfrac{1}{q-1} \pm \tfrac{1}{q+1}\right), \quad 2\sqrt{q}\left(\left( -\tfrac{\epsilon_{f}}{2}\right)^k\tfrac{1}{q-1} \pm \tfrac{1}{q+1}\right). $$
Exactly $2$ of them are positive and $2$ are negative when $q>3$ or $k\geq 2$, so the race is unbiased.
Similarly for $M_1$ we have
\begin{multline*}
\Delta_{f_k}(X; M_1, \square,\boxtimes)
= \frac{(-1)^k}{\lvert \boxtimes\rvert} \Bigg\{ \left(2 -\epsilon_{f}\tfrac{1}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q}-1} +\left(2 -\epsilon_f \tfrac{1}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q} + 1}(-1)^{X} \\
+2\re\left( m_1 \frac{\alpha_1}{\alpha_1 -1} e^{iX\gamma_{1}}\right) + O_k\left((\log X)^{-1}\right) \Bigg\},
\end{multline*}
where $m_1 = 0$ or $1$ and $\alpha_1 = \sqrt{q}e^{i\gamma_1}$ is an inverse zero of $\mathcal{L}(u,\chi_{D})$, for $D$ a strict divisor of $M_1$.
We observe that the constant term dominates for $k$ large enough; one has an extreme bias:
$\dens((-1)^k \Delta_{f_k}(X;M_1,\square,\boxtimes) > 0) = 1,$ with the direction of the bias alternating with the parity of $k$.
\end{proof}
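As a numerical cross-check of the sign count for $M_{\frac12}$ in the proof above (with $q>3$ odd, or $k\geq2$), one can tabulate the four values of the periodic part and verify that exactly two are positive; the sample values of $q$ below are arbitrary.

```python
import math

def four_values(q, k, eps):
    # The four values of the periodic part for M_{1/2}; eps = -1 for
    # f = Omega, +1 for f = omega.  The positive scalings 2q and
    # 2*sqrt(q) do not affect the signs.
    base = (-eps / 2) ** k / (q - 1)
    return [scale * (base + sign / (q + 1))
            for scale in (2 * q, 2 * math.sqrt(q))
            for sign in (1, -1)]

for q in (5, 7, 11, 13):
    for k in range(1, 11):
        for eps in (-1, 1):
            vals = four_values(q, k, eps)
            # two positive and two negative values: the race is unbiased
            assert sum(v > 0 for v in vals) == 2
print("unbiased sign pattern confirmed")
```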
\section{Limit behaviours}\label{subsec central limit}
In this section we study the limit behaviour of the measures $\mu_{M,f_k}$, for $f= \Omega$ or $\omega$, as $k$ or $\deg M$ gets large.
We present the results by increasing strength of assumption needed.
\subsection{Unconditional results as $k$ grows}
\label{subsec k limit}
First we focus on $k$ getting large while the modulus $M$ is fixed.
We obtain the following unconditional result (see also Remark~\ref{Rk_on_Th_deg<n}.\ref{Item_largestMult}) regarding the $k$-limit of the limiting distributions.
\begin{prop}\label{Prop k limit}
Fix $M\in \mathbf{F}_{q}[t]$, let $f= \Omega$ or $\omega$, and $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$.
We define
\[m_{f,\max} = \max_{\chi, j}\lbrace m_{\pm}(\chi) -\epsilon_{f} \frac{1}{2}, m_j(\chi)\rbrace,\]
where the maximum is taken over all non-trivial quadratic characters $\chi$ modulo $M$.
Then, as $k\rightarrow\infty$, the limiting distribution of
\begin{align*}
&\Delta_{f_k}^{\mathrm{norm}}(\cdot;M) := \frac{(-1)^k\lvert \boxtimes\rvert \sqrt{q-1}}{m_{f,\max}^{k}\sqrt{q}}\Delta_{f_k}(\cdot;M,\square,\boxtimes)
\end{align*}
converges weakly to a probability measure $\mu_{M,f}$ depending only on the set of zeros of maximal multiplicity.
In particular,
\begin{enumerate}[label = \roman*)]
\item\label{Item max integer}\emph{(Expected generic case)} if $m_{\Omega,\max}$ is an integer and if the set of zeros of maximal multiplicity generates a symmetric sub-torus, then $\mu_{M,\Omega}=\mu_{M,\omega}$ is symmetric, so the bias dissipates as $k$ gets large;
\item\label{Item max is m+} if for some non-trivial quadratic character $\chi_1$ modulo $M$ one has
\[\max_{\chi}\lbrace m_{-}(\chi) \rbrace < m_{+}(\chi_1) = m_{\Omega,\max} - \frac{1}{2}\]
then $\mu_{M,\Omega}$ is a Dirac delta, so the bias tends to be extreme as $k$ gets large.
\end{enumerate}
\end{prop}
\begin{Rk}
Note that in the generic case, we expect LI to be satisfied and to have $m_{\Omega,\max}=1$ an integer. So, for most $M \in \mathbf{F}_{q}[t]$ square-free, the bias should dissipate in the race between polynomials with $k$ irreducible factors in the quadratic residues and non-quadratic residues modulo~$M$.
\end{Rk}
\begin{proof}
Let $\varphi_{M,f_k}$ be the Fourier transform of the limiting distribution of $\Delta_{f_k}^{\mathrm{norm}}(\cdot;M)$.
One has
\begin{multline*}
\varphi_{M,f_k}(\xi)
= \exp\left(-i\xi \frac{\sqrt{q-1} \sum_{\chi} \left( m_{+}(\chi) - \epsilon_f \frac{1}{2} \right)^k}{m_{f,\max}^{k}(\sqrt{q} - 1)} \right) \\ \times
\int_{A}\exp\Bigg\lbrace -i\xi\Bigg( \frac{\sqrt{q-1}\sum_{\chi} \left( m_{-}(\chi) - \epsilon_f \frac{1}{2} \right)^k}{m_{f,\max}^{k}(\sqrt{q} +1)}(-1)^{a_1}
\\+ \sqrt{\frac{q-1}{q}}\sum_{j=2}^{N} 2\frac{m_{j}^k}{m_{\max}^k}\re\left(\frac{\sqrt{q}e^{i\gamma_j}}{\sqrt{q}e^{i\gamma_j} -1}e^{ia_j}\right) \Bigg)\Bigg\rbrace \diff\omega_{A}(a),
\end{multline*}
where we write the ordered list of inverse-zeros with multiplicities
\begin{equation}\label{Def zeros multiplicities}
\lbrace (\gamma_{2},m_2), \ldots, (\gamma_{N},m_N)\rbrace = \bigcup\limits_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}\lbrace (\gamma,m)\in (0,\pi)\times\mathbf{N}_{>0} : L(\tfrac{1}{2} + i\gamma, \chi) = 0 \text{ with multiplicity } m \rbrace,
\end{equation}
and $A$ is the closure in $\mathbf{T}^{N}$ of the subgroup $\lbrace y(\pi,\gamma_2,\ldots,\gamma_N) \bmod (2\pi\mathbf{Z})^{N} : y \in \mathbf{Z} \rbrace$.
Now, by the dominated convergence theorem, there are four cases according to which zeros have maximal multiplicity. In each case it is easy to see that the limit function $\varphi_{M,f}$ is indeed the Fourier transform of a measure $\mu_{M,f}$, and the conclusion follows by L\'evy's Continuity Theorem.
Suppose that $m_{f,\max}$ is an integer, i.e. the zeros of maximal order are not real.
Up to reordering, we can assume that the first $d$ zeros in \eqref{Def zeros multiplicities} have maximal multiplicity: $m_2 = \ldots = m_d = m_{\max} > \max_{j>d}\lbrace m_j \rbrace$.
We have for $f=\Omega$ or $\omega$ and for every $\xi \in\mathbf{R}$,
\begin{align*}
\varphi_{M,f}(\xi) := \lim_{k\rightarrow\infty} \varphi_{M,f_k}(\xi) = \int_{A(\max)}
\exp\left(-i\xi\left( \sqrt{\frac{q-1}{q}}\sum_{j=2}^{d} 2\re\left(\frac{\alpha_j}{\alpha_j -1}e^{ia_j}\right) \right)\right) \diff\omega_{A(\max)}(a),
\end{align*}
where $A(\max)$ is the closure of the $1$-parameter group $\lbrace y(\gamma_2,\ldots,\gamma_d) : y \in \mathbf{Z} \rbrace/2\pi\mathbf{Z}^{d-1}$.
This follows from the fact that the projection $A\rightarrow A(\max)$ induces a bijection between a sub-torus of $A$ and $A(\max)$; by uniqueness of the normalized Haar measure, the measure induced on this sub-torus by the normalized Haar measure of $A$ is exactly the normalized Haar measure of $A(\max)$.
By Proposition~\ref{Prop_limitingDist}.\ref{Item_symmetry},
the function $\varphi_{M,f}$ is even if the sub-torus $A(\max)$ is symmetric. This concludes the proof of Proposition~\ref{Prop k limit}.\ref{Item max integer}.
Note that in the case where $m_{\Omega,\max}$ is an integer, we have $m_{\omega,\max} = m_{\Omega,\max}$ and the zeros of maximal order are the same for the two functions $\Delta_{\Omega_k}$ and $\Delta_{\omega_k}$. In particular $\mu_{M,\Omega} = \mu_{M,\omega}$.
In the case where the zeros of maximal order are real, we have the following three possibilities.
\begin{enumerate}[label = \roman*)]\addtocounter{enumi}{1}
\item The maximum $m_{f,\max}$ is reached only by $m_{+}(\chi_1), \ldots, m_{+}(\chi_d)$, for $\chi_1,\ldots,\chi_d$ non-trivial quadratic characters modulo $M$,
then for every $\xi \in\mathbf{R}$, we have
\begin{align*}
\varphi_{M,f}(\xi) := \lim_{k\rightarrow\infty} \varphi_{M,f_k}(\xi) = \exp\left(-i\xi \frac{d \sqrt{q-1}}{\sqrt{q} - 1} \right).
\end{align*}
This is the Fourier transform of a Dirac delta at a positive value, thus \begin{equation*}
\lim_{k\rightarrow\infty}\dens((-1)^k\Delta_{f_k}(X;M,\square,\boxtimes) >0) = 1.
\end{equation*}
\item The maximum $m_{f,\max}$ is reached only by $m_{-}(\chi_1), \ldots, m_{-}(\chi_d)$, for $\chi_1,\ldots,\chi_d$ non-trivial quadratic characters modulo $M$,
then for every $\xi \in\mathbf{R}$, we have
\begin{align*}
\varphi_{M,f}(\xi) := \lim_{k\rightarrow\infty} \varphi_{M,f_k}(\xi) = \cos\left(\xi \frac{d\sqrt{q-1}}{\sqrt{q} +1}\right).
\end{align*}
This is the Fourier transform of a combination of two half-Dirac deltas symmetric with respect to~$0$, thus \begin{equation*}
\lim_{k\rightarrow\infty}\dens(\Delta_{f_k}(X;M,\square,\boxtimes) >0) = \frac{1}{2}.
\end{equation*}
\item The maximum $m_{f,\max}$ is reached by $m_{+}(\chi_1), \ldots, m_{+}(\chi_d)$ and by $m_{-}({\chi'}_1), \ldots, m_{-}({\chi'}_{d'})$ for $\chi_1,\ldots,\chi_d, {\chi'}_1, \ldots, {\chi'}_{d'}$ non-trivial quadratic characters modulo $M$,
then for every $\xi \in\mathbf{R}$, we have
\begin{align*}
\varphi_{M,f}(\xi) := \lim_{k\rightarrow\infty} \varphi_{M,f_k}(\xi) = \exp\left(-i\xi \frac{d \sqrt{q-1}}{\sqrt{q} - 1} \right)\cos\left(\xi \frac{d' \sqrt{q-1}}{\sqrt{q} +1}\right).
\end{align*}
This is the Fourier transform of a combination of two half-Dirac deltas symmetric with respect to~$ \frac{d \sqrt{q-1}}{\sqrt{q} - 1}$.
Thus the limit measure has a complete bias, or is unbiased, or its bias is not well defined depending on whether $\frac{d}{\sqrt{q} -1} > \frac{d'}{\sqrt{q} + 1}$, or $<$, or $=$.
\end{enumerate}
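To see the trichotomy in the last case, note that the two half-Dirac masses sit at the abscissae
\begin{equation*}
\frac{d \sqrt{q-1}}{\sqrt{q} - 1} \pm \frac{d' \sqrt{q-1}}{\sqrt{q} +1} = \sqrt{q-1}\left( \frac{d}{\sqrt{q} - 1} \pm \frac{d'}{\sqrt{q} +1} \right),
\end{equation*}
so both masses lie at positive abscissae exactly when $\frac{d}{\sqrt{q} -1} > \frac{d'}{\sqrt{q} + 1}$, one mass lies at a negative abscissa when the inequality is reversed, and one mass sits at the origin in the case of equality.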
Finally, note that when $m_{\Omega,\max} \notin \mathbf{N}$, the sets of zeros of maximal order for $\Delta_{\Omega_k}$ and $\Delta_{\omega_k}$ can differ; they coincide if $m_{\omega,\max} \notin \mathbf{N}$.
\end{proof}
\subsection{Existence of extreme biases for moduli with many irreducible factors}
\label{subsec extreme bias M limit}
We now keep $k$ fixed and vary the modulus $M$. Following the philosophy of \cite[Th.~1.2]{Fiorilli_HighlyBiased}, we obtain that, as the number of irreducible factors of $M$ increases, extreme biases appear in the race between quadratic and non-quadratic residues modulo $M$.
Thus $1$ is in the closure of the values of the densities in Theorem~\ref{Th central limit for M under LI}.
As in the work of Fiorilli, the full strength of (LI\ding{70}) is not necessary here.
\begin{prop}\label{Prop Chebyshev many factors}
Let $\lbrace M \rbrace$ be a sequence of polynomials in $\mathbf{F}_{q}[t]$
such that
the multi-set $\mathcal{Z}(M) = \bigcup\limits_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 , \chi \neq \chi_0}}\lbrace \gamma \in [0,\pi] : L(\frac{1}{2} + i\gamma, \chi) = 0 \rbrace$ is linearly independent of $\pi$ over $\mathbf{Q}$. Assume also that the multiplicities of the zeros are bounded: there exists $B>0$ such that for each $M$ and each $\gamma \in \mathcal{Z}(M)$ one has
$$\sum_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 , \chi \neq \chi_0}} m_{\gamma}(\chi) \leq B.$$
Then, for $f=\Omega$ or $\omega$, as $\omega(M)\rightarrow\infty$, one has
\begin{equation*}
\dens((\epsilon_f)^k\Delta_{f_k}(\cdot;M,\square,\boxtimes) >0) \geq 1 - O\left( \frac{(2B)^{2k} q \deg M}{2^{\omega(M)}} \right),
\end{equation*}
where $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$.
\end{prop}
\begin{proof}
The proof follows the idea of \cite[Th.~1.2]{Fiorilli_HighlyBiased}, \cite[Cor.~5.8]{Devin2018}, using Chebyshev's inequality (e.g. \cite[(5.32)]{Billingsley}). However, unlike the results in loc. cit. that use a limiting density for a function over $\mathbf{R}$, we need to be careful about the influence of $\pi$.
Thanks to the hypothesis of linear independence applied to \eqref{Formula_Res_vs_NonRes}, we have that $\mu_{M,f_k}$ is the convolution of two probability measures: $\mu_{M,f_k} = D_{M,f_k} \ast \nu_{M,k}$,
where $D_{M,f_k}$ is the combination of two half-Dirac deltas at $\frac{2(2^{\omega(M)} -1)}{\lvert \boxtimes\rvert} \left(\frac{\epsilon_f}{2}\right)^k \frac{\sqrt{q}}{q-1}$ and $\frac{2(2^{\omega(M)} -1)}{\lvert \boxtimes\rvert} \left(\frac{\epsilon_f}{2}\right)^k \frac{q}{q-1}$, with $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$,
while $\nu_{M,k}$ has mean value $0$ and variance
\begin{align*}
\Var(\nu_{M,k}) = \frac{1}{\lvert \boxtimes\rvert^2} \sum_{\gamma \in \mathcal{Z}(M)}\Bigg( \sum_{\substack{\chi \bmod M \\ \chi^2 = \chi_0 , \chi\neq \chi_0}} m_{\gamma}^{k}(\chi) \frac{\lvert\sqrt{q}e^{i\gamma}\rvert}{\lvert\sqrt{q}e^{i\gamma}-1\rvert}\Bigg)^2
\ll \frac{B^{2k} 2^{\omega(M)} \deg M}{\lvert \boxtimes\rvert^2}.
\end{align*}
To obtain this bound, note that there are $2^{\omega(M)} -1$ quadratic characters modulo $M$ (see e.g. \cite[Prop.~1.6]{Rosen2002}) and for each of them the associated Dirichlet~$L$-function is of degree smaller than $\deg M$. (Note also that $\nu_{M,k}$ is independent of $f=\Omega$ or $\omega$.)
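Explicitly, since $\sum_{\chi} m_{\gamma}(\chi) \leq B$ forces $\sum_{\chi} m_{\gamma}^{k}(\chi) \leq B^{k}$, and since $\frac{\lvert\sqrt{q}e^{i\gamma}\rvert}{\lvert\sqrt{q}e^{i\gamma}-1\rvert} \leq \frac{\sqrt{q}}{\sqrt{q}-1} \ll 1$, each summand in the expression of $\Var(\nu_{M,k})$ above is $\ll B^{2k}$, so that
\begin{equation*}
\Var(\nu_{M,k}) \ll \frac{B^{2k}}{\lvert \boxtimes\rvert^{2}}\, \lvert \mathcal{Z}(M)\rvert \ll \frac{B^{2k}\, 2^{\omega(M)} \deg M}{\lvert \boxtimes\rvert^{2}}.
\end{equation*}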
Thus, by Proposition~\ref{Prop_limitingDist}.\ref{Item_continuous} and Chebyshev's inequality,
\begin{align*}
\dens\left((\epsilon_f)^k\Delta_{f_k}(X;M,\square,\boxtimes) >0\right) &\geq \nu_{M,k}\left(\bigg( \frac{-\sqrt{q}}{(q-1)}\frac{(2^{\omega(M)} -1)}{ 2^{k-1}\lvert \boxtimes\rvert} , \infty\bigg)\right) \\
&\geq 1 - O\left( \frac{B^{2k} 2^{\omega(M)} \deg M}{\lvert \boxtimes\rvert^2} \left(\frac{\sqrt{q}}{(q-1)} \frac{(2^{\omega(M)} -1)}{ 2^{k-1}\lvert \boxtimes\rvert} \right)^{-2} \right)
\end{align*}
which concludes the proof.
\end{proof}
\subsection{Limit behaviour for moduli satisfying the linear independence}
\label{subsec M limit under LI}
In this section, following~\cite[Sec.~3]{Fiorilli_HighlyBiased}, we generalize \cite[Th.~6.2]{Cha2008} on the central limit behaviour of the measure $\mu_{M,f_k}$ for $f=\Omega$ or $\omega$.
In particular we prove Theorem~\ref{Th central limit for M under LI}; under (LI\ding{70}), the bias can approach any value in $[\tfrac12 ,1]$ as the degree of $M$ gets large.
We already proved that $1$ can be approached by the values of the bias without assuming (LI\ding{70}) in Proposition~\ref{Prop Chebyshev many factors}, thus it remains to prove Theorem~\ref{Th central limit for M under LI} for the interval $[\frac12,1)$.
As noted in Section~\ref{subsec extreme bias M limit} when using Chebyshev's inequality, assuming enough linear independence of the zeros, the distribution $\mu_{M,f_k}$ is well described by the data
\begin{equation*}
B(M) :=\frac{\lvert \mathbf{E}\mu_{M,f_k}\rvert} {\sqrt{\Var(\nu_{M,k})}}= \frac{(2^{\omega(M)} -1)\sqrt{q}}{2^k(\sqrt{q} -1)}\Bigg( \sum_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}\sum_{j=1}^{d_{\chi}} \left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert^2 \Bigg)^{-1/2}
\end{equation*}
where $\nu_{M,k}$ is as defined in the proof of Proposition~\ref{Prop Chebyshev many factors}.
By \cite[(44)]{Cha2008} and \cite[Prop.~6.4]{Cha2008}, we have for every non-trivial quadratic character~$\chi$ modulo~$M$,
\begin{align*}
\sum_{j=1}^{d_{\chi}}\left\lvert\frac{\alpha_j(\chi)}{\alpha_{j}(\chi)-1}\right\rvert^2 = \frac{q}{q-1}(\deg M^*(\chi) -2) + O(\log (\deg M^*(\chi) +1)),
\end{align*}
where $M^*(\chi)$ is the modulus of the primitive character that induces $\chi$.
Note that the sum is empty if $\deg M^*(\chi) \leq 2$.
Thus, summing over the non-trivial quadratic characters we have
\begin{align}\label{bound IM}
I(M) :=\sum_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}
\sum_{j=1}^{d_{\chi}} \left\lvert \frac{\alpha_j(\chi)}{\alpha_{j}(\chi)-1}\right\rvert^2
&= \frac{q}{q-1}\sum_{\substack{D\mid M' \\ D \neq 1}} (\deg D -2) + O(2^{\omega(M)} \log (\deg M' +1))\nonumber\\
&= \frac{q}{q-1}(2^{\omega(M)} -1) \frac{\deg M' -4}{2} + O(2^{\omega(M)} \log (\deg M' +1)),
\end{align}
where $M'$ is the largest square-free divisor of $M$.
Thus, if $\deg M'>4$,
\begin{equation*}
B(M)= 2^{-k}\frac{\sqrt{2(q-1) (\tau(M') -1)}}{(\sqrt{q} -1)\sqrt{\deg M' -4}} \left( 1 + O\left(\frac{\log(\deg M') }{\deg M'}\right)\right),
\end{equation*}
where $\tau$ counts the number of divisors.
We show that, as $\deg M' \rightarrow\infty$, $B(M)$ can approach any non-negative real number.
\begin{lem}\label{Lem any possible value}
For any fixed $0\leq c <\infty$, there exists a sequence of square-free monic polynomials $M_n \in \mathbf{F}_q[t]$
such that
\begin{equation*}
\deg M_n \rightarrow\infty \text{ and } \tau(M_n) = c\deg(M_n) + O(1).
\end{equation*}
\end{lem}
\begin{proof}
The case $c=0$ follows from taking a sequence of irreducible polynomials.
Now, fix $0<c<\infty$. For $\omega$ large enough,
there exist integers $0<d_1 < d_2 < \ldots < d_{\omega}$ such that
\begin{equation*}
\left[\frac{2^{\omega}}{c}\right] = d_1 + d_2 + \ldots + d_{\omega}.
\end{equation*}
For each $1\leq i \leq \omega$, there exists an irreducible polynomial $P_i \in \mathbf{F}_{q}[t]$ of degree $d_i$.
Then the polynomial $M := P_1 P_2 \ldots P_\omega$ is square-free and satisfies
\begin{align*}
\tau(M) = 2^{\omega} = c( \deg M + O(1)).
\end{align*}
This concludes the proof.
\end{proof}
\begin{Rk}
\label{Rk the measure is as smooth as you want}
Note that we only used the fact that there exist irreducible polynomials of each degree in $\mathbf{F}_{q}[t]$.
In the sequence we constructed, we have $\deg M_n\rightarrow\infty$, hence $I(M_n)\rightarrow \infty$; we deduce that the number of zeros~$\lvert \mathcal{Z}(M_n) \rvert$
gets large too.
In particular, we can always assume that $\lvert \mathcal{Z}(M_n) \rvert\geq 3$
so that the limiting distribution $\mu_{M_n,f_k}$ is absolutely continuous (see \cite[Th.~1.5]{MartinNg}).
\end{Rk}
In the case of a sequence of polynomials $(M_n)_{n\in \mathbf{N}}$ satisfying (LI\ding{70}) and for which $B(M_n)$ converges, we show that the limiting bias can be precisely described.
\begin{prop}
\label{Prop M limit using Berry Esseen}
Let $b \in [0,\infty)$, and
suppose there exists a sequence of polynomials $M_n \in \mathbf{F}_q[t]$ with $\deg M_n \rightarrow\infty$, $B(M_n) \rightarrow b$, such that for each $n$, $M_n$ satisfies \emph{(LI\ding{70})}.
Then, for $f = \Omega$ or $\omega$, as $\deg M_n \rightarrow\infty$, the limiting distribution $\mu_{M,f_k}^{\mathrm{norm}}$ of
\begin{align*}
&\Delta_{f_k}^{\mathrm{norm}}(\cdot;M) := \frac{(\epsilon_f)^k}{\sqrt{\Var\nu_{M,k}}}\Delta_{f_k}(\cdot;M,\square,\boxtimes)
\end{align*}
converges weakly to the distribution $\frac{1}{2}(\delta_{2b/(\sqrt{q} +1)} + \delta_{2b\sqrt{q}/(\sqrt{q} +1)})\ast \mathcal{N}$,
where $\mathcal{N}$ is the standard Gaussian distribution, and where $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$.
More precisely, one has
\begin{equation*}
\sup_{x\in\mathbf{R}}\left\lvert \int_{-\infty}^{x}\diff\mu_{M,f_k}^{\mathrm{norm}} - \frac{1}{2\sqrt{2\pi}}\int_{-\infty}^{x} \left( e^{-\frac{1}{2}(t - b\frac{2}{\sqrt{q} +1})^2} + e^{-\frac{1}{2}(t - b\frac{2\sqrt{q}}{\sqrt{q} +1})^2} \right) \diff t \right\rvert \ll 2^{-\omega(M_n)}(\deg M'_n)^{-1} + \lvert B(M_n) - b\rvert
\end{equation*}
where $M'_n$ is the square-free part of $M_n$.
\end{prop}
\begin{proof}
The proof follows ideas from the proof of \cite[Th.~1.1]{Fiorilli_HighlyBiased} and \cite[Th.~4.5]{CFJ}, and is based on the Berry--Esseen inequality \cite[Chap.~II, Th.~2a]{Esseen}.
Let $M\in\mathbf{F}_{q}[t]$ be a polynomial satisfying (LI\ding{70}).
We begin by computing the Fourier transform $\varphi_{M,f_k}$ of the limiting distribution $\mu^{\mathrm{norm}}_{M,f_k}$ of $\frac{(\epsilon_f)^k}{\sqrt{\Var \nu_{M,k}}}\Delta_{f_k}$ using Proposition~\ref{Prop_limitingDist}.\ref{Item_Fourier} for \eqref{Formula_Res_vs_NonRes}, where we assume linear independence.
With the notations of~\eqref{Not_L-function}, one has
\begin{align*}
\varphi_{M,f_k}(\xi) &= \hat\mu_{M,f_k}\left(\frac{(\epsilon_f)^k}{\sqrt{\Var \nu_{M,k}}}\xi\right) \\
&= \exp\left(-i B(M)\xi \right)
\cos\left( \frac{\sqrt{q} - 1}{\sqrt{q} + 1} B(M)\xi \right)
\prod_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}\prod_{j=1}^{d_{\chi}/2}J_{0}\Big( 2 I(M)^{-1/2}
\left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert (-\epsilon_f)^k\xi \Big),
\end{align*}
where, using the functional equation, we assume that the first $\frac{d_{\chi}}{2}$ non-real inverse zeros have positive imaginary part (recall that, since $\chi$ is real, up to reordering we can write $\alpha_{d_{\chi} - j}(\chi) = \overline{\alpha_{j}(\chi)}$ for $j\in \lbrace1,\ldots,d_{\chi}\rbrace$).
Using the parity of the Bessel function and the power series expansion $\log J_0(z) = - \frac{z^2}{4} + O(z^4)$, for~$\lvert z\rvert <\frac{12}{5}$ (see e.g. \cite[Lem.~2.8]{FiorilliMartin}), we get that, for any $\lvert \xi\rvert < \frac{1}{2} I(M)^{1/2}$,
\begin{multline}
\label{Asympt Fourier deg M}
\log \left\lbrace\varphi_{M,f_k}(\xi) \exp\left(i B(M)\xi \right)
\cos\left( \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1} B(M)\xi \right)^{-1} \right\rbrace \\
= - \frac{1}{2}\xi^2
+ O\Big( I(M)^{-2}\sum_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}
\sum_{j=1}^{d_{\chi}/2} \left\lvert \frac{\alpha_j(\chi)}{\alpha_{j}(\chi)-1}\right\rvert^4 \xi^4 \Big).
\end{multline}
Since $\left\lvert \frac{\alpha_j(\chi)}{\alpha_{j}(\chi)-1}\right\rvert \leq \frac{\sqrt{q}}{\sqrt{q} - 1}$, the error term in~\eqref{Asympt Fourier deg M} is $O( \xi^4 I(M)^{-1})$.
In the other direction, in the range $\lvert \xi \rvert > \tfrac{1}{2}I(M)^{\frac14}$, we have
\begin{align*}
2 I(M)^{-1/2}
\left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert \lvert\xi\rvert > I(M)^{-1/4}
\left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert \in [0,\tfrac{5}{3}]
\end{align*}
for $I(M)$ large enough.
Since $J_0$ is positive and decreasing on the interval $[0,\tfrac{5}{3}]$, and since for all $z \geq \tfrac{5}{3}$ we have $\lvert J_0(z) \rvert \leq J_0(\tfrac{5}{3})$, we deduce
\begin{multline} \label{Asympt Fourier xi large}
\log \left\lbrace\varphi_{M,f_k}(\xi) \exp\left(i B(M)\xi \right)
\cos\left( \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1} B(M)\xi \right)^{-1} \right\rbrace \\ \leq \sum_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}\sum_{j=1}^{d_{\chi}/2}\log J_{0}\Big( I(M)^{-1/4}
\left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert \Big)
= - \frac{1}{4} I(M)^{\frac12}
+ O(1).
\end{multline}
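The two properties of $J_0$ used here (positivity and monotonicity on $[0,\tfrac{5}{3}]$, and $\lvert J_0(z)\rvert \leq J_0(\tfrac{5}{3})$ for $z \geq \tfrac{5}{3}$) are classical. As a quick numerical sanity check, independent of the argument, they can be verified from the power series $J_0(z) = \sum_{m\geq 0}(-1)^m (z/2)^{2m}/(m!)^2$; the sketch below is purely illustrative and the helper name is ours.

```python
def j0(z):
    # Bessel function J_0 from its power series
    # J_0(z) = sum_{m>=0} (-1)^m (z/2)^{2m} / (m!)^2,
    # accurate in double precision for moderate |z|.
    term, total = 1.0, 1.0
    for m in range(1, 120):
        term *= -(z / 2) ** 2 / m ** 2
        total += term
    return total

# J_0 is positive and decreasing on [0, 5/3] ...
grid = [5 / 3 * i / 200 for i in range(201)]
values = [j0(z) for z in grid]
assert all(v > 0 for v in values)
assert all(a > b for a, b in zip(values, values[1:]))

# ... and |J_0(z)| <= J_0(5/3) for z >= 5/3: the first sign change is
# near z = 2.405, and the later extrema have smaller absolute value.
tail_peak = max(abs(j0(5 / 3 + 0.01 * i)) for i in range(1, 1800))
assert tail_peak < j0(5 / 3)
```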
Note that \eqref{Asympt Fourier deg M} is enough to show, by L\'evy's Continuity Theorem, that $\nu_{M,k}^{\mathrm{norm}}$ converges weakly to the standard Gaussian distribution as $I(M) \rightarrow\infty$.
Since weak convergence is compatible with convolution,
we deduce that if $B(M)\rightarrow b$, then $\mu_{M,f_k}^{\mathrm{norm}}$ converges weakly to the average of two Gaussian distributions centered at $b(1 \pm \tfrac{\sqrt{q} -1}{\sqrt{q} +1})$.
The precise rate of convergence of the distribution function is obtained via the Berry--Esseen inequality \cite[Chap.~II, Th.~2a]{Esseen}.
Let $F$ and $G$ be the cumulative distribution functions of $\frac{1}{2}(\delta_{2b/(\sqrt{q} +1)} + \delta_{2b\sqrt{q}/(\sqrt{q} +1)})\ast \mathcal{N}$ and $\mu_{M,f_k}^{\mathrm{norm}}$ respectively; precisely:
\begin{equation*}
F(x) = \frac{1}{2\sqrt{2\pi}}\int_{-\infty}^{x}( e^{-\frac{1}{2}(t - b\frac{2}{\sqrt{q} +1})^2} + e^{-\frac{1}{2}(t - b\frac{2\sqrt{q}}{\sqrt{q} +1})^2} )\diff t
\quad \text{ and } \quad G(x) = \int_{-\infty}^{x}\diff\mu_{M,f_k}^{\mathrm{norm}}.
\end{equation*}
As observed in Remark~\ref{Rk the measure is as smooth as you want}, when $\deg M$ is large enough, the function $G$ is differentiable.
For any $T>0$, we have
\begin{equation}\label{Eq Berry Esseen}
\lvert G(x) - F(x) \rvert \ll \int_{-T}^{T}\left\lvert \frac{\varphi_{M,f_k}(\xi) - \exp(-ib\xi) \cos\left( \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1} b\xi \right)e^{-\frac{1}{2}\xi^2}}{\xi} \right\rvert\diff\xi
+ \frac{\lVert G' \rVert_{\infty}}{T}.
\end{equation}
Let us first estimate the second term on the right-hand side of \eqref{Eq Berry Esseen}.
We have, for all $x\in \mathbf{R}$,
\begin{align*}
G'(x) = \frac{1}{2\pi}\int_{\mathbf{R}}e^{-ix\xi}\varphi_{M,f_k}(\xi)\diff\xi \ll \int_{\mathbf{R}} \prod_{\substack{\chi \bmod M \\ \chi^{2} = \chi_0 \\ \chi \neq \chi_0}}\prod_{j=1}^{d_{\chi}/2}\left\lvert J_{0}\Big( 2 I(M)^{-1/2}
\left\lvert \frac{\alpha_j(\chi)}{\alpha_j(\chi) -1} \right\rvert \xi \Big) \right\rvert\diff\xi.
\end{align*}
Using the bound $\lvert J_0(z) \rvert \ll \min(1,\lvert z\rvert^{-\frac12})$, and the fact that $\lvert \mathcal{Z}(M) \rvert \geq 3$, we obtain that
\begin{equation}\label{Bound BerryEsseen derivative}
\lVert G' \rVert_{\infty} \ll \int_{\mathbf{R}} \min(1, I(M)^{3/4}\lvert\xi\rvert^{-3/2})
\diff\xi \ll I(M)^{3/4}.
\end{equation}
To bound the integral in \eqref{Eq Berry Esseen}, we cut the interval of integration into two ranges.
First, by \eqref{Asympt Fourier deg M}, the integral in the range $\lvert\xi\rvert \leq \frac{1}{2}I(M)^{1/4}$ is
\begin{align}\label{Ineq Berry--Esseen first range}
\int_{-\frac{1}{2}I(M)^{1/4}}^{\frac{1}{2}I(M)^{1/4}}
\frac{e^{-\frac{1}{2}\xi^2}} {2\lvert\xi\rvert} &
\left\lvert \sum_{\pm} e^{-ib(1 \pm \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1})\xi} \left( 1 - \exp\left(-i(B(M) - b)(1 \pm \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1})\xi + O(\xi^4 I(M)^{-1})\right)\right) \right\rvert
\diff\xi \nonumber\\
\ll&
\int_{-\frac{1}{2}I(M)^{1/4}}^{\frac{1}{2}I(M)^{1/4}}
e^{-\frac{1}{2}\xi^2}\left( \lvert B(M) - b \rvert + \lvert \xi\rvert^3 I(M)^{-1} \right)
\diff\xi \\
\ll& \lvert B(M) - b \rvert + I(M)^{-1}.\nonumber
\end{align}
In the range $\frac{1}{2}I(M)^{1/4} \leq \lvert\xi\rvert \leq T$, we use the bound from~\eqref{Asympt Fourier xi large}:
\begin{align}\label{Ineq Berry--Esseen second range}
\int_{\frac{1}{2}I(M)^{1/4} \leq \lvert\xi\rvert \leq T}&
\left\lvert \frac{\varphi_{M,f_k}(\xi) - \exp(-ib\xi) \cos\left( \tfrac{\sqrt{q} - 1}{\sqrt{q} + 1} b\xi \right)e^{-\frac{1}{2}\xi^2}}{\xi} \right\rvert\diff\xi \\
\ll& \int_{\frac{1}{2}I(M)^{1/4} \leq \lvert\xi\rvert \leq T} e^{-\frac14 I(M)^{\frac12}} \frac{\diff\xi}{\lvert \xi \rvert} +
\int_{\frac{1}{2}I(M)^{1/4} \leq \lvert\xi\rvert \leq T}
e^{-\frac{1}{2}\xi^2}\frac{\diff\xi}{\lvert \xi\rvert }\nonumber \\
\ll& e^{-\frac14 I(M)^{\frac12}}\log T + e^{-\frac18 I(M)^{\frac12}}.\nonumber
\end{align}
Now combining \eqref{Bound BerryEsseen derivative}, \eqref{Ineq Berry--Esseen first range} and \eqref{Ineq Berry--Esseen second range} in \eqref{Eq Berry Esseen} with $T= I(M)^{\frac74}$, together with the estimate~\eqref{bound IM}, gives the result as $\deg M\rightarrow\infty$.
\end{proof}
The proof of Theorem~\ref{Th central limit for M under LI} follows.
\begin{proof}[Proof of Theorem~\ref{Th central limit for M under LI}]
Let $\eta \in [\tfrac12,1]$. If $\eta = 1$, then by Proposition~\ref{Prop Chebyshev many factors}, there exists a sequence of polynomials $M\in \mathbf{F}_q[t]$ with $\dens((\epsilon_f)^k\Delta_{f_k}(\cdot;M,\square,\boxtimes) > 0) \rightarrow \eta$ as $\deg M \rightarrow \infty$.
Now assume $\eta \in [\frac12,1)$. Since the function $b \mapsto \frac{1}{2\sqrt{2\pi}}\int_{0}^{\infty}( e^{-\frac{1}{2}(t - b\frac{2}{\sqrt{q} +1})^2} + e^{-\frac{1}{2}(t - b\frac{2\sqrt{q}}{\sqrt{q} +1})^2} )\diff t $ is continuous, increasing, and takes values in $[\tfrac12,1)$ when $b\in [0,\infty)$, there exists a unique $b$ such that $\frac{1}{2\sqrt{2\pi}}\int_{0}^{\infty}( e^{-\frac{1}{2}(t - b\frac{2}{\sqrt{q} +1})^2} + e^{-\frac{1}{2}(t - b\frac{2\sqrt{q}}{\sqrt{q} +1})^2} )\diff t = \eta$. Lemma~\ref{Lem any possible value} ensures the existence of a sequence of square-free monic polynomials $M$ with
\begin{align*}
B(M) = \begin{cases} b\left(1 + O\left(\frac{\log\deg M}{\deg M}\right)\right) \text{ if } b > 0,\\
O\left( 2^{-k} 2^{\omega(M)/2} (\deg M)^{-\frac12} \right) \text{ if } b=0,
\end{cases}
\end{align*} as $\deg M\rightarrow\infty$. Those polynomials are specified only by their degree and number of divisors; hence, by the hypothesis of Theorem~\ref{Th central limit for M under LI}, we can assume that each of them satisfies (LI\ding{70}).
Then applying Proposition~\ref{Prop M limit using Berry Esseen} to this sequence, we get
\begin{equation*}
\lvert \mu_{M,f_k}^{\mathrm{norm}}([0,\infty)) - \eta\rvert \ll \begin{cases} (2^{-\omega(M)} + \log \deg M)(\deg M)^{-1} \text{ if } b >0, \\
2^{-\omega(M)}(\deg M)^{-1} + 2^{-k} 2^{\omega(M)/2} (\deg M)^{-\frac12} \text{ if } b=0.
\end{cases}
\end{equation*}
Since we assume (LI\ding{70}) for $M$, one has $$\mu_{M,f_k}^{\mathrm{norm}}([0,\infty)) = \dens ((\epsilon_f)^k \Delta_{f_k}(\cdot;M,\square,\boxtimes) > 0),$$
which concludes the proof.
\end{proof}
\section{Character sums over polynomials of degree $n$ with $k$ irreducible factors}\label{Sec_proof_deg=n}
For $k\geq 1$, $\chi$ a Dirichlet character modulo $M$, and $f = \Omega$ or $\omega$ we define
\begin{equation}\label{Eq defi pi(chi)}
\pi_{f_k}(n, \chi)=\sum_{\substack{N \text{ monic, } (N,M)=1\\ \deg(N) = n,~ f(N)=k }} \chi(N).
\end{equation}
In this section, we prove the following result about the asymptotic expansion of $\pi_{f_k}(n,\chi)$ by induction over the number of irreducible factors $k$.
\begin{theo}\label{Prop_k_general}
Let $M \in \mathbf{F}_{q}[t]$ be of degree $d \geq 1$.
Let $k$ be a positive integer.
Let $\chi$ be a non-trivial Dirichlet character modulo $M$, and
$$\gamma(\chi) = \min\limits_{1\leq i\neq j\leq d_{\chi}}\left(\lbrace \lvert \gamma_i(\chi) - \gamma_j(\chi) \rvert, \lvert \gamma_{i}(\chi) \rvert, \lvert \pi -\gamma_{i}(\chi) \rvert \rbrace\right).$$
With notations as in \eqref{Not_L-function}, for $f = \Omega$ or $\omega$, under the conditions $k =o((\log n)^{\frac{1}{2}})$, and $q\geq 5$ if $f= \omega$, one has
\begin{align*}
\pi_{f_k}(n,\chi) = \frac{(-1)^k}{(k-1)!} & \Bigg\{
\left( \left(m_+(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}\right)^k+(-1)^n \left(m_-(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}\right)^k \right)\frac{q^{n/2}(\log n)^{k-1}}{n} \\
+ &\sum_{\alpha_j\neq\pm\sqrt{q}} m_j(\chi)^k \frac{\alpha_j(\chi)^n (\log n)^{k-1}}{n} +O \left( d^k \frac{k(k-1)}{\gamma(\chi)} \frac{q^{n/2}(\log n)^{k-2}}{n} + d\frac{q^{n/3}}{n} \right) \Bigg\},
\end{align*}
where the implicit constant is absolute, $\delta(\chi^2) = 1$ if $\chi^2 = \chi_0$ and $0$ otherwise, $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$.
\end{theo}
\subsection{Case $k=1$}
We start by recalling the usual setting of the race between irreducible polynomials, which is the base case of our induction.
In this situation we obtain a better error term.
\begin{prop}\label{Prop_k=1}
Let $\chi$ be a non-trivial Dirichlet character modulo $M$.
Its Dirichlet $L$-function $\mathcal{L}(u,\chi)$ is a polynomial;
let $\alpha_{1}(\chi), \ldots, \alpha_{d_{\chi}}(\chi)$ denote the distinct non-real inverse zeros of norm $\sqrt{q}$ of $\mathcal{L}(u,\chi)$, and let $m_{1},\ldots,m_{d_{\chi}} \in \mathbf{Z}_{>0}$ be their multiplicities.
For $f=\Omega$ or $\omega$, one has
\begin{align*}
\pi_{f_1}(n,\chi)&
=-\sum_{j=1}^{d_{\chi}} m_{j}(\chi)\frac{\alpha_{j}(\chi)^n}{n} - \left(-\epsilon_f \delta\left(\frac{n}{2},\chi^{2}\right) +m_{+}(\chi) +(-1)^{n}m_{-}(\chi)\right)\frac{q^{n/2}}{n} + O\left(\frac{d\,q^{n/3}}{n}\right),
\end{align*}
where $\epsilon_{\Omega} = -1$, $\epsilon_{\omega} =1$, and $$\delta\left(\frac{n}{2},\chi^{2}\right) =\begin{cases}
1, & \text{if} ~n~\text{is even and} ~\chi^{2} = \chi_{0};\\
0, & \text{otherwise.}
\end{cases} $$
\end{prop}
\begin{proof}
We write the Dirichlet $L$-function in two different ways.
First it is defined as an Euler product:
\begin{align*}
\mathcal{L}(u,\chi) = \prod_{n=1}^{\infty} \prod_{\substack{P \text{ irred.}\\ \deg(P) = n \\ P\nmid M}} (1- \chi(P)u^n)^{-1}.
\end{align*}
As $\chi \neq \chi_{0}$, the function $\mathcal{L}(u,\chi)$ is a polynomial in $u$; using the notations of \eqref{Not_L-function}, we write
\begin{align*}
\mathcal{L}(u,\chi) = (1-\sqrt{q}u)^{m_+}(1+\sqrt{q}u)^{m_-}\prod_{j=1}^{d_{\chi}} (1- \alpha_{j}(\chi)u)^{m_{j}} \prod_{j'=1}^{d'_{\chi}} (1- \beta_{j'}(\chi)u),
\end{align*}
where $\lvert \beta_{j'}(\chi)\rvert =1$ for every $j'$.
By comparing the coefficients of degree $n$ in the two expressions of the logarithm we obtain
\begin{align*}
\sum_{\ell\mid n} \frac{\ell}{n}\sum_{\substack{P \text{ irred.}\\ \deg(P) = \ell \\ P\nmid M}} \chi(P)^{n/\ell} &= -\frac{q^{n/2}}{n}(m_+ +(-1)^{n}m_-) -\sum_{j=1}^{d_\chi} m_{j}\frac{\alpha_{j}(\chi)^n}{n} - \sum_{j'=1}^{d'_\chi}\frac{\beta_{j'}(\chi)^n}{n}.
\end{align*}
Thus
\begin{align*}
\pi_{\Omega_1}(n,\chi) &= -\frac{q^{n/2}}{n}(m_+ +(-1)^{n}m_-) -\sum_{j=1}^{d_\chi} m_{j}\frac{\alpha_{j}(\chi)^n}{n} + O\left(\frac{d'_{\chi}}{n}\right) - \sum_{\substack{\ell\mid n \\ \ell\neq n}} \frac{\ell}{n}\sum_{\substack{P \text{ irred.}\\ \deg(P) = \ell \\ P\nmid M}} \chi(P)^{n/\ell} \\
&=-\frac{q^{n/2}}{n}(m_+ +(-1)^{n}m_-) -\sum_{j=1}^{d_\chi} m_{j}\frac{\alpha_{j}(\chi)^n}{n} - \frac{1}{2}\pi_{\Omega_1}\left(\frac{n}{2},\chi^{2}\right) + O\left(\frac{d + q^{n/3}}{n}\right),
\end{align*}
and
\begin{align*}
\pi_{\omega_1}(n,\chi) &= -\frac{q^{n/2}}{n}(m_+ +(-1)^{n}m_-) -\sum_{j=1}^{d_\chi} m_{j}\frac{\alpha_{j}(\chi)^n}{n} + O\left(\frac{d'_{\chi}}{n}\right) + \sum_{\substack{\ell\mid n \\ \ell\neq n}} (1 -\frac{\ell}{n})\sum_{\substack{P \text{ irred.}\\ \deg(P) = \ell \\ P\nmid M}} \chi(P)^{n/\ell} \\
&=-\frac{q^{n/2}}{n}(m_+ +(-1)^{n}m_-) -\sum_{j=1}^{d_\chi} m_{j}\frac{\alpha_{j}(\chi)^n}{n} + \frac{1}{2}\pi_{\Omega_1}\left(\frac{n}{2},\chi^{2}\right) + O\left(\frac{d+q^{n/3}}{n}\right),
\end{align*}
where $\pi_{\Omega_1}\left(\frac{n}{2},\chi^{2}\right) =0$ if $n$ is odd, and it can be included in the error term if $\chi^{2} \neq \chi_{0}$.
If $n$ is even and $\chi^{2} = \chi_{0}$, one has (\cite[Th.~2.2]{Rosen2002})
$$\pi_{\Omega_1}\left(\frac{n}{2},\chi^{2}\right) = 2\frac{q^{n/2}}{n} + O(q^{n/4}).$$
This concludes the proof.
\end{proof}
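The irreducible-counting input used in the last step is classical; a quick numerical sanity check is possible via Gauss's formula $N_q(d) = \frac{1}{d}\sum_{e\mid d}\mu(e)q^{d/e}$ for the number of monic irreducibles of degree $d$, together with the unique-factorization identity $q^{n} = \sum_{d\mid n} d\,N_q(d)$. The sketch below is purely illustrative (the helper names are ours, not part of the argument).

```python
def mobius(n):
    # Möbius function by trial factorization.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0        # square factor
            result = -result
        else:
            p += 1
    if n > 1:
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def num_irreducibles(q, d):
    # Gauss's formula: N_q(d) = (1/d) * sum_{e|d} mu(e) * q^(d/e).
    return sum(mobius(e) * q ** (d // e) for e in divisors(d)) // d

# Unique factorization of monic polynomials gives q^n = sum_{d|n} d*N_q(d).
for q in (2, 3, 5):
    for n in range(1, 9):
        assert q ** n == sum(d * num_irreducibles(q, d) for d in divisors(n))

print(num_irreducibles(2, 8))   # → 30, against the main term 2^8/8 = 32
```

The last line illustrates the size of the error term in the prime polynomial theorem for a small case.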
\subsection{Newton's formula}
To prove the general case of Theorem~\ref{Prop_k_general}, we use a combinatorial argument.
Let $x_1, x_2, \ldots$ be an infinite collection of indeterminates. If a formal power series $P(x_1, x_2, \ldots)$ of bounded degree is invariant under all finite permutations of the variables, we call it a \textit{symmetric function}. We define
the $n$-th \textit{homogeneous symmetric function} $h_n=h_n(x_1, x_2, \ldots)$ by the following generating function
$$\sum_{n=0}^{\infty} h_n z^n=\prod_{i=1}^{\infty}\frac{1}{1-x_i z}.$$
Thus,
$h_n$ is the sum of all possible monomials of degree $n$. The \textit{$n$-th elementary symmetric function} $e_n=e_n(x_1, x_2, \ldots)$ is defined by $$\sum_{n=0}^{\infty}e_n z^n=\prod_{i=1}^{\infty}(1+x_i z).$$ Precisely, $e_n$ is the sum of all square-free monomials of degree $n$. Finally the $n$-th \textit{power symmetric function}
$p_n=p_n(x_1, x_2, \cdots)$ is defined to be
$$p_n=x_1^n+x_2^n+\cdots.$$
The following result is due to Newton or Girard (see \cite[Chap.~1, (2.11)]{Mac}, or \cite[Th.~2.8]{Me-Re}).
\begin{lem}\label{lem-Newton}
For any integer $k\geq 1$, we have
\begin{equation*}
kh_k=\sum_{\ell=1}^k h_{k-\ell}p_{\ell},
\qquad
ke_k=\sum_{\ell=1}^k (-1)^{\ell-1} e_{k-\ell} p_\ell.
\end{equation*}
\end{lem}
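For instance, for $k=2$, Newton's identities give $2h_2 = h_1p_1 + h_0p_2$ and $2e_2 = e_1p_1 - e_0p_2$, that is,
\begin{equation*}
h_2 = \tfrac{1}{2}\left(p_1^2 + p_2\right), \qquad e_2 = \tfrac{1}{2}\left(p_1^2 - p_2\right),
\end{equation*}
which one checks directly from $p_1^{2} = \sum_i x_i^2 + 2\sum_{i<j}x_i x_j = p_2 + 2e_2$ and $h_2 = e_2 + p_2$.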
\subsection{Products of $k$ irreducible polynomials --- Induction step}
We will prove Theorem~\ref{Prop_k_general} by induction on $k$. First we use the combinatorial arguments from Lemma~\ref{lem-Newton} to obtain a relation between $\pi_{f_k}$ and $\pi_{f_{k-1}}$; the two relations are obtained by different calculations according to whether $f= \Omega$ or $\omega$.
\begin{lem}\label{lem recurence big Omega}
Let $M\in \mathbf{F}_q[t]$ be of degree $d\geq1$, and $\chi$ be a non-trivial Dirichlet character modulo $M$. For any positive integer $k \geq 2$, assume that for all $1\leq \ell\leq k-1$ there exists $A_{\Omega,\ell}>0$ such that one has
$\lvert\pi_{\Omega_{\ell}}(n,\chi)\rvert \leq A_{\Omega,\ell} \frac{d^{\ell}}{(\ell -1)!} \frac{q^{n/2}}{n}(\log n)^{\ell -1}$ for all~$n\geq 1$.
Then one has
\begin{align*}
\pi_{\Omega_k}(n, \chi)=\frac{1}{k}\sum_{n_{1} + n_{2} = n}&\pi_{\Omega_{k-1}}(n_{1},\chi)\pi_{\Omega_1}(n_{2},\chi)+O_k \left(\frac{q^{n/2}(\log n)^{k-2}}{n} \right),
\end{align*}
where the implicit constant depends on $k$ and is bounded by $$\frac{d^{k}}{k!}\sum_{\ell=2}^{k} ( 2 + \frac{\ell}{\log n})A_{\Omega,k-\ell}\frac{d^{-\ell} q^{-\ell/2+1}(\log n)^{2-\ell}(k-1)!}{(k-\ell -1)!}$$ for all~$n$.
\end{lem}
\begin{proof}
We study the function
$$F_{\Omega_k}(u,\chi) = \sum_{n=1}^{\infty} \sum_{\substack{N \text{ monic} \\ \deg(N) = n \\ \Omega(N)=k}} \chi(N) u^{n} = \sum_{n=1}^{\infty}\pi_{\Omega_k}(n,\chi) u^{n}.$$
Adapting the idea of \cite{Meng2017}, we choose $x_P=\chi(P)u^{\deg P}$ for each irreducible polynomial $P$.
Using Lemma~\ref{lem-Newton},
we obtain
\begin{equation}\label{general-k-series}
F_{\Omega_k}(u,\chi) = \frac{1}{k}\sum_{\ell=1}^{k}F_{\Omega_{k-\ell}}(u,\chi)F_{\Omega_1}(u^{\ell},\chi^{\ell}),
\end{equation}
where we use the convention $F_{\Omega_0}(u,\chi) = 1$.
Comparing the coefficients of degree $n$, we see that the first term will give the main term and the other terms contribute to the error term.
For $2\leq \ell\leq k-1$, using the trivial bound for $\pi_{\Omega_1}$ together with the hypothesis, the coefficient of degree~$n$ of~$F_{\Omega_{k-\ell}}(u,\chi)F_{\Omega_1}(u^{\ell},\chi^{\ell})$ is bounded as follows:
\begin{align}\label{general-k-error}
\sum_{n_{1} + \ell n_{2} = n}&\pi_{\Omega_{k-\ell}}(n_{1},\chi)\pi_{\Omega_1}\left(n_{2},\chi^{\ell} \right) \\
&\leq A_{\Omega,k-\ell} \frac{d^{k-\ell}}{(k-\ell-1)!}\sum_{n_{1} + \ell n_{2} = n}\frac{q^{n_{1}/2} q^{n_{2}} }{n_{1} n_{2}} (\log n_1)^{k-\ell-1}\nonumber\\
&\leq A_{\Omega,k-\ell} \frac{d^{k-\ell}}{(k-\ell -1)!} q^{n/2-\ell/2+1}(\log n)^{k-\ell -1} \sum_{n_1+\ell n_2=n}\frac{1}{n_1 n_2} \nonumber\\
&\leq A_{\Omega,k-\ell}\frac{d^{k-\ell} q^{n/2-\ell/2+1}(\log n)^{k-\ell -1}}{(k-\ell -1)! n}(2 \log n + \ell) \nonumber.
\end{align}
The coefficient of degree~$n$ of~$F_{\Omega_{0}}(u,\chi)F_{\Omega_1}(u^{k},\chi^{k})$ is non-zero only when $k\mid n$, and it is bounded by $\lvert \pi_{\Omega_1}(\tfrac{n}{k},\chi^{k})\rvert \ll \frac{k q^{\frac{n}{k}}}{n} \leq 2A_{\Omega,0} \frac{q^{n/2 - k/2 + 1}}{n}$, for a good choice of $A_{\Omega,0} >0$.
Then, by \eqref{general-k-series} and \eqref{general-k-error}, summing over $2\leq \ell\leq k$ we obtain Lemma~\ref{lem recurence big Omega}.
\end{proof}
\begin{lem}\label{lem recurence small omega}
Let $M\in \mathbf{F}_q[t]$ be of degree $d\geq1$, and $\chi$ be a non-trivial Dirichlet character modulo $M$. For any positive integer $k \geq 2$, assume that for all $2\leq \ell\leq k-1$ there exists $A_{\omega, \ell}>0$ such that one has
$\lvert\pi_{\omega_{\ell}}(n,\chi)\rvert \leq A_{\omega,\ell} \frac{d^{\ell}}{(\ell -1)!} \frac{q^{n/2}}{n}(\log n)^{\ell -1}$
for all $n\geq 1$.
Then one has
\begin{align*}
\pi_{\omega_k}(n,\chi) = \frac{1}{k} \sum_{n_1 + n_2 = n}\pi_{\omega_{k-1}}(n_1,\chi)\pi_{\omega_1}(n_2,\chi)
+ O_{k}\left(\frac{q^{n/2}(\log n)^{k-2}}{n} \right),
\end{align*}
where the implicit constant depends on $k$ and is bounded by $$2\frac{d^{k}}{k!} \sum_{\ell = 2}^{k}\sum_{j = \ell}^{n-1}A_{\omega,k-\ell}j\binom{j-1}{\ell-1}\frac{q^{1-\frac{j}{2}}d^{1-\ell}(\log n)^{2-\ell}(k-1)!}{(k-\ell -1)!} $$ for all~$n$.
\end{lem}
\begin{proof}
We study the function
$$F_{\omega_k}(u,\chi) = \sum_{n=1}^{\infty} \sum_{\substack{N \text{ monic} \\ \deg(N) = n \\ \omega(N)=k}} \chi(N) u^{n} = \sum_{n=1}^{\infty}\pi_{\omega_k}(n,\chi) u^{n}.$$
Adapting the idea of \cite{Meng2017} for $x_{P} = \sum_{j\geq 1}\chi(P)^j u^{j\deg P}$, and
using Lemma~\ref{lem-Newton},
we obtain
\begin{equation}\label{littleomega-general-k-series}
F_{\omega_k}(u,\chi) = \frac{1}{k}\sum_{\ell=1}^{k}(-1)^{\ell +1}F_{\omega_{k-\ell}}(u,\chi)\tilde{F}(u,\chi;\ell),
\end{equation}
where
\begin{align*}
\tilde{F}(u,\chi;\ell) = \sum\limits_{P\text{ irred.}}\left( \sum\limits_{j\geq 1} \chi(P)^{j}u^{j\deg P} \right)^{\ell}
= \sum\limits_{P\text{ irred.}}\sum\limits_{j\geq \ell}\binom{j-1}{\ell -1} \chi(P)^{j}u^{j\deg P} .
\end{align*}
Note that $\tilde{F}(u,\chi;1) = F_{\omega_1}(u,\chi)$, and we use the convention $F_{\omega_0}(u,\chi) = 1$.
We then compare the coefficients of $u^n$ in \eqref{littleomega-general-k-series} and show that the terms for $\ell \geq 2$ all contribute to the error term.
For $2\leq \ell \leq k-1$, the coefficient of degree $n$ of $F_{\omega_{k-\ell}}(u,\chi)\tilde{F}(u,\chi;\ell)$ satisfies, by hypothesis:
\begin{align*}
\sum_{j\geq \ell}\binom{j-1}{\ell -1}\sum_{n_{1} + j n_{2} = n}&\pi_{\omega_{k-\ell}}(n_{1},\chi)\pi_{\Omega_1}\left(n_{2},\chi^{j} \right) \\
&\leq A_{\omega,k-\ell}\sum_{j = \ell}^{n-1}\binom{j-1}{\ell -1} \frac{d^{k-\ell +1}}{(k-\ell-1)!}\sum_{n_{1} + j n_{2} = n}\frac{q^{n_{1}/2} q^{n_{2}} }{n_{1} n_{2}} (\log n_1)^{k-\ell-1}\nonumber\\
&\leq A_{\omega,k-\ell} \frac{d^{k-\ell+1}(\log n)^{k-\ell-1} q^{n/2}}{(k-\ell-1)!}\sum_{j = \ell}^{n-1}\binom{j-1}{\ell -1}\sum_{n_1+ j n_2=n} \frac{q^{-(j/2-1)n_2}}{n_1 n_2} \nonumber\\
&\leq 2 A_{\omega,k-\ell}\frac{d^{k}}{(k-\ell -1)!} \frac{q^{n/2}(\log n)^{k-\ell}}{n} \sum_{j = \ell}^{n-1} j\binom{j-1}{\ell -1} q^{1-j/2} d^{1-\ell}.\nonumber
\end{align*}
The coefficient of degree $n$ of $F_{\omega_{0}}(u,\chi)\tilde{F}(u,\chi;k)$ is bounded by
$$\sum_{\substack{k\leq j \leq n-1 \\ j\mid n }}\binom{j-1}{k-1}\frac{j q^{\frac{n}{j}}}{n} \leq 2A_{\omega,0} \frac{q^{n/2}}{n} \sum_{j = k}^{n-1} j\binom{j-1}{k -1} q^{1-j/2},$$
for a good choice of $A_{\omega,0}>0$.
Summing over $2\leq \ell\leq k$ we obtain Lemma~\ref{lem recurence small omega}.
\end{proof}
To avoid confusion with the complete sum over all zeros, in what follows we use $\sum'$ to denote the sum over the non-real zeros of the $L$-function. Throughout this section, all multiplicities and zeros depend on $\chi$.
For $f = \Omega$ or $\omega$, $\ell \geq 1$ and $\chi \bmod M$ (with the convention $0!=1$), we denote
\begin{align*}
Z_{\ell}(n,\chi)& = \frac{(-1)^{\ell}}{(\ell-1)!} \sideset{}{'}\sum_{1\leq j\leq d_{\chi}} m_j^{\ell} \frac{\alpha_j^n(\chi) (\log n)^{\ell-1}}{n},\\
B_{f_\ell}(n,\chi) &= \frac{(-1)^{\ell}}{(\ell-1)!} \left( \left(m_+ -\epsilon_f\frac{\delta(\chi^2)}{2}\right)^{\ell} +(-1)^n \left(m_- -\epsilon_f \frac{\delta(\chi^2)}{2}\right)^\ell\right)\frac{q^{n/2}(\log n)^{\ell-1}}{n},
\end{align*}
where $\epsilon_{\Omega} = -1$ and $\epsilon_{\omega} = 1$.
With these notations, we rewrite the formulas of Theorem~\ref{Prop_k_general} and Proposition~\ref{Prop_k=1} in the following form: there exist positive constants $C_{f,\ell}$ such that
\begin{align}\label{formula Th deg=n}
E_{f_\ell}(n,\chi) := \lvert\pi_{f_\ell}(n,\chi) - Z_{\ell}(n,\chi) - B_{f_\ell}(n,\chi)\rvert \leq \begin{cases} C_{f,\ell}\frac{d^{\ell}}{(\ell-1)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2} (\log n)^{\ell-2}}{n}& \text{ for } \ell \geq 2\\
C_{f,1}d \frac{q^{n/3}}{n} & \text{ for } \ell =1,
\end{cases}
\end{align}
where, for $2\leq \ell = o((\log n)^{\frac12})$ (and with $q>3$ when $f = \omega$), we need to show that $C_{f,\ell} \leq C\ell(\ell -1)$ with $C$ an absolute constant.
By Lemma~\ref{lem recurence big Omega} (resp. \ref{lem recurence small omega}), it suffices to study the coefficient of $u^n$ in $F_{f_{k-1}}(u,\chi)F_{f_1}(u,\chi)$, that is:
\begin{align}\label{general-k-main}
\sum_{n_{1} + n_{2} = n}&\pi_{f_{k-1}}(n_{1},\chi)\pi_{f_1}(n_{2},\chi)\nonumber \\
=& \sum_{n_{1} + n_{2} = n} \big\{ Z_{k-1}(n_{1},\chi)+B_{f_{k-1}}(n_{1},\chi)+E_{f_{k-1}}(n_{1},\chi) \big\}\big\{Z_1(n_{2},\chi)+B_{f_1}(n_{2},\chi)+E_{f_1}(n_{2},\chi)\big\}\nonumber\\
=&\sum_{n_{1} + n_{2} = n}Z_{k-1}(n_1, \chi)Z_1(n_2, \chi)+\sum_{n_{1} + n_{2} = n}B_{f_{k-1}}(n_1, \chi)B_{f_1}(n_2, \chi)\nonumber\\
&\quad + \sum_{n_{1} + n_{2} = n}\big\{ Z_{k-1}(n_1, \chi)B_{f_1}(n_2, \chi)+B_{f_{k-1}}(n_1, \chi)Z_1(n_2, \chi)\big\}\nonumber \\
&\quad +\sum_{n_{1} + n_{2} = n} \big\{ \left(Z_{k-1}(n_{1},\chi)+B_{f_{k-1}}(n_{1},\chi)\right) E_{f_1}(n_2, \chi) \big\}
+\sum_{n_{1} + n_{2} = n} E_{f_{k-1}}(n_{1},\chi) \pi_{f_1}(n_2,\chi).
\end{align}
We will now study each of these sums separately.
\subsection{Bounds for certain exponential sums}
We first give a bound for certain exponential sums that appear several times in the proof of Lemmas~\ref{Lem_general_k_Zeros}--\ref{Lem-mixed-term}.
The following result follows from partial summation.
\begin{lem}\label{Lem_Abel}
Let $f$ be a differentiable function on $[1,+\infty)$ such that $f'(x) \in L^{1}[1,\infty)$.
Then for every $\theta \in (-\frac{\pi}{2}, \frac{\pi}{2}]$, $\theta \neq 0$,
one has
\begin{align*}
\sum_{n=1}^{N} e^{i\theta n}f(n) = O\left(\frac{ \lVert f' \rVert_{L^{1}} + \lVert f \rVert_{\infty}}{\lvert\theta\rvert}\right)
\end{align*}
as $N \rightarrow +\infty$, with an absolute implicit constant.
\end{lem}
\begin{proof}
As $e^{i\theta} \neq 1$, one has
\begin{align*}
H(x) := \sum_{n\leq x} e^{i\theta n} = \frac{e^{i\theta}- e^{i\theta ([x]+1)}}{1-e^{i\theta}} = O\left(\frac{1}{\lvert\theta\rvert}\right).
\end{align*}
So applying Abel's identity, one has
\begin{align*}
\sum_{n=1}^{N} e^{i\theta n}f(n) = H(N)f(N) + \int_{1}^{N} H(t)f'(t) \diff t
= O\left(\frac{\lvert f(N)\rvert}{\lvert\theta\rvert}\right) + O\left(\frac{1}{\lvert\theta\rvert}\int_{1}^{N} \lvert f'(t) \rvert \diff t\right).
\end{align*}
\end{proof}
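As a quick numerical illustration of the lemma (our addition, not part of the paper), take $f(x)=1/x$, for which $\lVert f'\rVert_{L^1[1,\infty)}=\lVert f\rVert_\infty=1$, so the bound reads $O(1/\lvert\theta\rvert)$ with an absolute constant:

```python
import cmath, math

# |sum_{n<=N} e^{i theta n} f(n)| for f(x) = 1/x
def exp_sum(theta, N):
    return abs(sum(cmath.exp(1j * theta * n) / n for n in range(1, N + 1)))

worst = 0.0
for k in range(1, 200):
    theta = math.pi * k / 200          # theta ranges over (0, pi)
    for N in (10, 100, 1000):
        worst = max(worst, theta * exp_sum(theta, N))
print("sup over the grid of |theta| * |S_N(theta)| =", round(worst, 3))
assert worst < 2 * math.pi             # consistent with the proof's bound pi/|theta|
```

The grid of values of $\theta$ and $N$ is arbitrary; the point is that $\lvert\theta\rvert\cdot\lvert S_N(\theta)\rvert$ stays bounded uniformly in $N$.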
\subsection{Sum over non-real zeros}
\begin{lem}\label{Lem_general_k_Zeros}
For any $k \geq 2$, one has
\begin{multline*}
\sum_{n_{1} + n_{2} = n}Z_{k-1}(n_{1},\chi)Z_{1}(n_{2},\chi) \\
= \frac{(-1)^{k}k}{(k-1)!} \left\{ \sideset{}{'}\sum_{j=1}^{d_{\chi}} m_{j}^{k} \frac{\alpha_{j}(\chi)^{n}(\log n)^{k-1}}{n} + O\left( d^k \left( k + \frac{1}{\gamma(\chi)}\right)\frac{q^{n/2}(\log n)^{k-2}}{n}\right) \right\},
\end{multline*}
where the implicit constant is absolute.
\end{lem}
\begin{proof}
We split the sum into a diagonal term and an off-diagonal term:
\begin{align*}
\sideset{}{'}\sum_{j_{1}=1}^{d_{\chi}} \sideset{}{'}\sum_{j_{2}=1}^{d_{\chi}} \sum_{n_{1}+ n_{2} = n}\frac{(-1)^k}{(k-2)!}m_{j_{1}}^{k-1}m_{j_{2}}\frac{\alpha_{j_{1}}(\chi)^{n_{1}}\alpha_{j_{2}}(\chi)^{n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}} = \Sigma_{1} + \Sigma_{2},
\end{align*}
where
\begin{align*}
\Sigma_{1} =\frac{(-1)^k}{(k-2)!}\sideset{}{'}\sum_{j=1}^{d_{\chi}}\sum_{n_{1}+ n_{2} = n} m_{j}^{k}\frac{\alpha_{j}(\chi)^{n_{1} + n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}},
\end{align*}
and
\begin{align*}
\Sigma_{2} = \frac{(-1)^k}{(k-2)!}\sideset{}{'}\sum_{j_{1}\neq j_2} \sum_{n_{1}+ n_{2} = n}m_{j_{1}}^{k-1}m_{j_{2}}\frac{\alpha_{j_{1}}(\chi)^{n_{1}}\alpha_{j_{2}}(\chi)^{n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}}.
\end{align*}
The diagonal term gives the main term: for $1\leq j \leq d_{\chi}$ one has
\begin{align}\label{main-zero}
\sum_{n_{1}+ n_{2} = n} m_{j}^{k}\frac{\alpha_{j}(\chi)^{n_{1} + n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}}= m_{j}^{k}\frac{\alpha_{j}(\chi)^{n}}{n} \sum_{n_{1} + n_{2} = n}\left(\frac{(\log n_{1})^{k-2}}{n_{1}} + \frac{(\log n_{1})^{k-2}}{n_{2}} \right).
\end{align}
By partial summation, we have
\begin{equation}\label{sum1-1}
\sum_{n_1+n_2=n} \frac{(\log n_{1})^{k-2}}{n_{1}}=\sum_{n_1=1}^{n-1} \frac{(\log n_{1})^{k-2}}{n_{1}}=\frac{1}{k-1} \left( (\log n)^{k-1}+O\left( k(\log n)^{k-2}\right) \right).
\end{equation}
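The elementary estimate \eqref{sum1-1} is easy to confirm numerically; the following Python snippet (our addition, with illustrative parameters $n=10^6$, $k=5$) checks it:

```python
import math

# Check  sum_{m < n} (log m)^{k-2} / m  =  (log n)^{k-1}/(k-1) + O(k (log n)^{k-2})
n, k = 10 ** 6, 5
s = sum(math.log(m) ** (k - 2) / m for m in range(1, n))
main = math.log(n) ** (k - 1) / (k - 1)
err = abs(s - main)
assert err <= 2 * k * math.log(n) ** (k - 2)
print(f"sum = {s:.1f}, main term = {main:.1f}, error/bound = {err / (k * math.log(n) ** (k - 2)):.3f}")
```

The constant $2$ in the assertion is a comfortable margin; the true discrepancy is of the size of the total variation of $(\log t)^{k-2}/t$, which is $O(1)$ here.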
For the second sum in \eqref{main-zero}, we have
\begin{align}\label{sum1-2-0}
\sum_{n_1+n_2=n}\frac{(\log n_{1})^{k-2}}{n_{2}}&=\sum_{1\leq n_2\leq n/2} \frac{(\log(n-n_2))^{k-2}}{n_{2}}+\sum_{n/2<n_2<n}\frac{(\log(n-n_2))^{k-2}}{n_{2}} \nonumber \\
&=\sum_{1\leq n_2\leq n/2} \frac{(\log n+\log (1-n_2/n))^{k-2}}{n_{2}}+O\left(\frac{n}{2}\cdot \frac{(\log n)^{k-2}}{n} \right),
\end{align}
where we used that $\lvert\log(1-n_2/n)\rvert<1$ for $1\leq n_2\leq n/2$. Thus
\begin{align}\label{sum1-2}
\sum_{n_1+n_2=n}\frac{(\log n_{1})^{k-2}}{n_{2}}&=\sum_{1\leq n_2\leq n/2}\left( \frac{(\log n)^{k-2} }{n_2}+ \frac{O(k(\log n)^{k-3})}{n_2}\right)+O\left((\log n)^{k-2}\right) \nonumber\\
&=(\log n)^{k-1}+O\left(k(\log n)^{k-2}\right).
\end{align}
Inserting \eqref{sum1-1} and \eqref{sum1-2} into \eqref{main-zero}, we get
\begin{equation}\label{sum1-3}
\sum_{n_{1}+ n_{2} = n} m_{j}^{k}\frac{\alpha_{j}(\chi)^{n_{1} + n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}}=\frac{k m_j^k}{k-1}\left( \frac{\alpha_j(\chi)^{n}(\log n)^{k-1}}{n}+O\left(k\frac{q^{n/2}(\log n)^{k-2}}{n}\right) \right).
\end{equation}
Thus,
\begin{align*}
\Sigma_1=\frac{(-1)^k k}{(k-1)!}\left\{\sideset{}{'}\sum_{j=1}^{d_{\chi}} m_j^k \frac{\alpha_j(\chi)^{n}(\log n)^{k-1}}{n}+O\left(d^k k\frac{q^{n/2}(\log n)^{k-2}}{n}\right)\right\}.
\end{align*}
For $\alpha_{j_{1}} \neq \alpha_{j_{2}}$, one has
\begin{align}\label{sum-2-1}
\sum_{n_{1}+ n_{2} = n}\frac{\alpha_{j_{1}}(\chi)^{n_{1}}\alpha_{j_{2}}(\chi)^{n_{2}}(\log n_{1})^{k-2}}{n_{1}n_{2}}
= &\frac{\alpha_{j_{2}}(\chi)^{n}}{n}\sum_{n_{1} =1}^{n-1}\frac{\left(\alpha_{j_{1}}(\chi)/\alpha_{j_{2}}(\chi)\right)^{n_{1}}(\log n_{1})^{k-2}}{n_{1}}\nonumber\\& +
\frac{\alpha_{j_{1}}(\chi)^{n}}{n}\sum_{n_{2} =1}^{n-1}\frac{\left(\alpha_{j_{2}}(\chi)/\alpha_{j_{1}}(\chi)\right)^{n_{2}}(\log (n-n_{2}))^{k-2}}{n_{2}},
\end{align}
where $\lvert \alpha_{j_{1}}(\chi)/\alpha_{j_{2}}(\chi) \rvert =1$, and $\alpha_{j_{1}}(\chi)/\alpha_{j_{2}}(\chi) \neq 1$.
We apply Lemma~\ref{Lem_Abel} with $f(x) = \frac{(\log x)^{k-2}}{x}$ to the first sum to deduce that this sum is $O\left( (\log n)^{k-2} \lvert \gamma_{j_1} - \gamma_{j_2} \rvert^{-1} \right)$.
The second term can be split at $\frac{n}{2}$ as in \eqref{sum1-2-0}, which yields
\begin{equation*}
\sum_{1\leq n_2\leq n/2}\left(\frac{\left(\alpha_{j_{2}}(\chi)/\alpha_{j_{1}}(\chi)\right)^{n_{2}} (\log n)^{k-2} }{n_2}+ \frac{O(k(\log n)^{k-3})}{n_2}\right)+O\left((\log n)^{k-2}\right).
\end{equation*}
Then we apply Lemma~\ref{Lem_Abel} with $f(x) = \frac{1}{x}$ to the first term above.
In the end we obtain
\begin{align*}
\Sigma_{2} = O\left( d^k \frac{k\left(k + \frac{1}{\gamma(\chi)}\right)}{(k-1)!}\frac{q^{n/2}(\log n)^{k-2}}{n}\right).
\end{align*}
The proof of Lemma~\ref{Lem_general_k_Zeros} is complete.
\end{proof}
\subsection{Bias term}
\begin{lem}\label{Lem_general_k_Bias} For $f=\Omega$ or $\omega$, and for any $k\geq 2$, we have
\begin{multline*}
\sum_{n_{1} + n_{2} = n} B_{f_{k-1}}(n_1, \chi)B_{f_1}(n_2, \chi)
= \frac{(-1)^k k}{(k-1)!}\frac{q^{n/2}(\log n)^{k-1}}{n}\Bigg\{ \left(m_+ - \epsilon_f\frac{\delta(\chi^2)}{2}\right)^k \\ +(-1)^n \left(m_- -\epsilon_f\frac{\delta(\chi^2)}{2}\right)^k +O \left(d^k k (\log n)^{-1} \right) \Bigg\},
\end{multline*}
where $\epsilon_{\Omega} = -1$, $\epsilon_{\omega} =1$ and the implicit constant is absolute.
\end{lem}
\begin{proof}
We write the sum as sum of four parts,
\begin{align*}
\sum_{n_{1} + n_{2} = n} B_{f_{k-1}}(n_1, \chi)&B_{f_1}(n_2, \chi) \nonumber \\
=\frac{(-1)^k q^{n/2}}{(k-2)!} \sum_{n_1+n_2=n}\Bigg\{&\left( m_+ -\epsilon_f \frac{\delta\left(\chi^2\right)}{2}\right)^{k-1}
\left( m_+ -\epsilon_f \frac{\delta\left(\chi^2\right)}{2} \right) \nonumber\\
&+
\left( m_+ -\epsilon_f \frac{\delta\left(\chi^2\right)}{2}\right)^{k-1}
(-1)^{n_2}\left( m_{-} -\epsilon_f \frac{\delta\left(\chi^2\right)}{2}\right) \nonumber\\
&+
(-1)^{n_1}\left( m_- -\epsilon_f \frac{\delta\left(\chi^2\right)}{2}\right)^{k-1}
\left( m_{+} -\epsilon_f \frac{\delta\left(\chi^2\right)}{2}\right) \nonumber\\
&+
(-1)^{n_1}\left( m_- -\epsilon_f \frac{\delta\left(\chi^2\right)}{2}\right)^{k-1}
(-1)^{n_2}\left( m_{-} -\epsilon_f \frac{\delta\left(\chi^2\right)}{2}\right)
\Bigg\} \frac{ (\log n_1)^{k-2}}{n_1 n_2}\nonumber\\
=&: \frac{(-1)^k q^{n/2}}{(k-2)!} \left\{ S_1+S_2+S_3+S_4 \right\}.
\end{align*}
First, we see that $S_1$ and $S_4$ should give the main term, and we expect $S_2$ and $S_3$ to be in the error term.
Using \eqref{sum1-1} and \eqref{sum1-2}, we have
\begin{align*}
\sum_{n_1+n_2=n} \frac{(\log n_1)^{k-2}}{n_1 n_2} = \frac{k}{k-1}\frac{(\log n)^{k-1}}{n} + O\left( k\frac{(\log n)^{k-2}}{n} \right).
\end{align*}
Thus
\begin{align*}
S_1 &= \frac{k}{k-1}\left( m_+ -\epsilon_f \frac{\delta\left(\chi^2\right)}{2}\right)^{k}\frac{(\log n)^{k-1}}{n} + O\left( k\left( m_+ -\epsilon_f \frac{1}{2}\right)^{k} \frac{(\log n)^{k-2}}{n} \right), \\
S_4 &= (-1)^n\frac{k}{k-1}\left( m_- -\epsilon_f \frac{\delta\left(\chi^2\right)}{2}\right)^{k}\frac{(\log n)^{k-1}}{n} + O\left( k\left( m_- -\epsilon_f \frac{1}{2}\right)^{k} \frac{(\log n)^{k-2}}{n} \right).
\end{align*}
Similar to \eqref{sum-2-1}, we have
\begin{align*}
\sum_{n_1+n_2=n} (-1)^{n_1} \frac{(\log n_1)^{k-2}}{n_1 n_2} = O\left( \left(k + \frac{1}{\pi}\right)\frac{(\log n)^{k-2}}{n} \right).
\end{align*}
Thus
\begin{align*}
S_2 + S_3 = O\left( d^k k\frac{(\log n)^{k-2}}{n} \right).
\end{align*}
Combining $S_1$, $S_2$, $S_3$ and $S_4$ we obtain Lemma~\ref{Lem_general_k_Bias}.
\end{proof}
\subsection{Other error terms}
\begin{lem}\label{lem_general_k_mixed-bias-zero}
For $f=\Omega$ or $\omega$ and for any $k\geq 2$, one has
\begin{align*}
\sum_{n_{1} + n_{2} = n}\big\{ Z_{k-1}(n_1, \chi)B_{f_1}(n_2, \chi)+B_{f_{k-1}}(n_1, \chi)Z_1(n_2, \chi)\big\} = O\left( d^k \frac{k \left( k + \frac{1}{\gamma(\chi)}\right)}{(k-1)!}\frac{q^{n/2} (\log n)^{k-2}}{n}\right),
\end{align*}
where the implicit constant is absolute.
\end{lem}
\begin{proof}
Let $\alpha_{j}$ be a non-real inverse zero of the $L$-function. One has
\begin{multline}\label{sum_zero_bias_1-2}
\sum_{n_1+n_2=n}m_j^{k-1} \left(\left(m_{+} -\epsilon_f \frac{\delta(\chi^2)}{2}\right) +(-1)^{n_2}\left(m_{-} -\epsilon_f \frac{\delta(\chi^2)}{2}\right)\right)\frac{\alpha_{j}^{n_{1}}(\log n_1)^{k-2}}{n_{1}} \frac{q^{n_{2}/2}}{n_{2}} \\
= O \left( m_j^{k-1} ( \max\left(m_{+},m_{-}\right) + \tfrac{1}{2}) \left( k + \frac{1}{\min(\lvert \gamma_j\rvert, \lvert\pi - \gamma_j\rvert) }\right)\frac{q^{n/2} (\log n)^{k-2}}{n}\right),
\end{multline}
which follows from the same idea as for \eqref{sum-2-1}.
We sum \eqref{sum_zero_bias_1-2} over the zeros to obtain
\begin{align*}
\sum_{n_{1} + n_{2} = n} Z_{k-1}(n_1, \chi)B_{f_1}(n_2, \chi)
&= O\left( d^k \frac{k \left( k + \frac{1}{\gamma(\chi)}\right)}{(k-1)!}\frac{q^{n/2} (\log n)^{k-2}}{n}\right).
\end{align*}
The proof is similar for the other term.
\end{proof}
\begin{lem}\label{Lem-mixed-term}
For $f = \Omega$ or $\omega$ and for any $k\geq 2$, one has
\begin{align*}
\sum_{n_{1} + n_{2} = n} & \left(Z_{k-1}(n_{1},\chi)+B_{f_{k-1}}(n_{1},\chi)\right) E_{f_1}(n_2, \chi) = O\left( d^k \frac{k}{(k-1)!} \frac{q^{n/2} (\log n)^{k-2}}{n} \right).
\end{align*}
\end{lem}
\begin{proof}
We use the following bound, for $k-1 \geq 1$:
\begin{align*}
\lvert Z_{k-1}(n,\chi) + B_{f_{k-1}}(n,\chi)\rvert \ll d^{k-1} \frac{1}{(k-2)!} \frac{q^{n/2}(\log n)^{k-2}}{n},
\end{align*}
where the implicit constant is absolute.
In particular the term evaluated in Lemma~\ref{Lem-mixed-term} satisfies
\begin{align*}
\sum_{n_{1} + n_{2} = n}& \left(Z_{k-1}(n_{1},\chi)+B_{f_{k-1}}(n_{1},\chi)\right) E_{f_1}(n_2, \chi) \\
&= \sum_{n_{1} + n_{2} = n} O\left( d^{k} \frac{1}{(k-2)!} \frac{q^{n_1/2}q^{n_2/3}(\log n_1)^{k-2}}{n_1}\right) \\
&= O \left( d^{k} \frac{q^{n/2}(\log n)^{k-2}}{(k-2)!} \left( \frac{1}{n}\sum_{n_{2} \leq n/2} q^{-{n_2}/6} + q^{-n/6}\sum_{n_{1} \leq n/2}\frac{q^{n_{1}/6}}{n_1} \right) \right) \\
&= O \left(\frac{d^k}{(k-2)!} \frac{q^{\frac{n}{2}}(\log n)^{k-2}}{n} \right),
\end{align*}
with an absolute implicit constant.
This concludes the proof.
\end{proof}
\subsection{Proof of Theorem~\ref{Prop_k_general}}
We now have all the ingredients to finish the proof of Theorem~\ref{Prop_k_general}.
\begin{proof}[Proof of Theorem~\ref{Prop_k_general}]
We proceed by induction on $k$; the base case $k=1$ is Proposition~\ref{Prop_k=1}. Now suppose that for all $2\leq \ell\leq k-1$ we have
\begin{align}\label{Eq bound Error}
E_{f_\ell}(n,\chi) \leq C_{f,\ell} \frac{d^{\ell}}{(\ell-1)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2} (\log n)^{\ell-2}}{n},
\end{align}
where $C_{f,\ell}\leq C\ell(\ell-1)$ as stated in \eqref{formula Th deg=n}.
In particular, the condition of Lemma~\ref{lem recurence big Omega} (resp. Lemma~\ref{lem recurence small omega}) is satisfied for $k$, and one has
\begin{align}
\lvert \pi_{f_\ell}(n,\chi) \rvert &\leq \lvert Z_{\ell}(n,\chi) + B_{f_{\ell}}(n,\chi)\rvert + C_{f,\ell} \frac{d^{\ell}}{(\ell-1)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2} (\log n)^{\ell-2}}{n}\nonumber \\
&\leq \Big(1 + \frac{C_{f,\ell}}{\gamma(\chi)\log n}\Big) \frac{d^{\ell}}{(\ell -1)!} \frac{q^{n/2}}{n}(\log n)^{\ell -1},
\label{Eq bound on A Omega}
\end{align}
for $1\leq \ell \leq k-1$ and all $n\geq 2$.
Thus, taking $A_{\Omega, \ell}=1 + \frac{C_{\Omega,\ell}}{\gamma(\chi)\log n}$ in Lemma~\ref{lem recurence big Omega} and evaluating each sum in Equation~\eqref{general-k-main} by Lemmas~\ref{Lem_general_k_Zeros}--\ref{Lem-mixed-term}, we obtain
\begin{align}\label{Eq induction Omega}
\frac{n}{q^{n/2}(\log n)^{k-2}}\lvert E_{\Omega_{k}}(n,\chi) \rvert \leq& \frac{d^{k}}{k!}\sum_{\ell=2}^{k} \Big(1 + \frac{C_{\Omega,k-\ell}}{\gamma(\chi)\log n}\Big)\Big(2 + \frac{\ell}{\log n}\Big)\frac{d^{-\ell} q^{-\ell/2+1}(\log n)^{2-\ell}(k-1)!}{(k-\ell -1)!} \nonumber\\ &+ \frac{C_0}{2} d^k \frac{k + \frac{1}{\gamma(\chi)}}{(k-1)!} + \frac{1}{k} \frac{n}{q^{n/2}(\log n)^{k-2}}\sum_{n_1 + n_2 = n}\lvert E_{\Omega_{k-1}}(n_1,\chi) \pi_{f_1}(n_2,\chi) \rvert,
\end{align}
where $C_0$ is an absolute constant.
In the case $k=2$, we get
\begin{align*}
\frac{n}{q^{n/2}}\lvert E_{\Omega_{2}}(n,\chi) \rvert \ll \frac{d^2}{\gamma(\chi)} + \frac{n}{q^{n/2}} \sum_{n_1 + n_2 = n} d\frac{q^{n_1 /3}}{n_1} d\frac{q^{n_2/2}}{n_2}
\ll \frac{d^2}{\gamma(\chi)}
\end{align*}
which is the expected bound.
For $k\geq 3$, using the bound~\eqref{Eq bound Error}, we have
\begin{align} \label{Eq applying induction Omega}
\sum_{n_1 + n_2 = n}&\lvert E_{\Omega_{k-1}}(n_1,\chi) \pi_{f_1}(n_2,\chi) \rvert\\ &\leq \sum_{n_1 + n_2 = n}C_{\Omega,k-1}\frac{d^{k-1}}{(k-2)!}\frac{1}{\gamma(\chi)}\frac{q^{n_1/2}(\log n_1)^{k-3}}{n_1} \Big(1 + C_{\Omega,1}q^{-n_2/6}\Big) d \frac{q^{n_2/2}}{n_2}\nonumber \\
&\leq C_{\Omega,k-1}\frac{d^k}{(k-2)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2}}{n}\sum_{n_1 + n_2 = n} \left(\frac1{n_1} + \frac{1}{n_2}\right)(\log n_1)^{k-3}\Big(1 + C_{\Omega,1}q^{-n_2/6}\Big)\nonumber \\
&\leq C_{\Omega,k-1}\frac{d^k}{(k-2)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2}}{n}\left( \frac{k-1}{k-2}(\log n)^{k-2} + O(k(\log n)^{k-3} ) \right),\nonumber
\end{align}
which, together with the bound~\eqref{Eq induction Omega}, proves the existence of $C_{\Omega,k}$ satisfying
\begin{align*}
E_{\Omega_k}(n,\chi) \leq C_{\Omega,k} \frac{d^{k}}{(k-1)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2} (\log n)^{k-2}}{n}.
\end{align*}
Now, when $k = o((\log n)^{1/2})$, by the induction hypothesis~\eqref{Eq bound Error}, one has $C_{\Omega,\ell} \leq C\ell(\ell -1) = o(\log n)$ for $2\leq \ell\leq k-1$ and some absolute constant $C$. In the following, we show how to choose $C$ and close the induction. We simplify the bounds~\eqref{Eq induction Omega} and \eqref{Eq applying induction Omega} to obtain
\begin{align*}
C_{\Omega,k} \leq& \frac{2(k-1)(k-2)}{k}\sum_{\ell=2}^{k} (\gamma(\chi) + o(1))d^{-\ell} q^{-\ell/2+1} + \frac{C_0}{2}k + C_{\Omega,k-1}\frac{k-1}{k}\left( \frac{k-1}{k-2} + o(k^{-1}) \right) \\
\leq& C\left(\frac{(k-1)(k-2)}{2k} + \frac{k}{2} + \frac{(k-1)^3}{k} + o(k) \right) \leq C k(k-1),
\end{align*}
provided we choose $C\geq \max \lbrace C_0, 6\pi \rbrace \geq 4\gamma(\chi)\sum_{\ell=2}^{k} d^{-\ell} q^{-\ell/2+1}$ and $k$ is large enough (say $k\geq K$ for some finite $K$). Finally, choosing $C\geq \max\{ \frac{C_{\Omega, 2}}{2}, \cdots, \frac{C_{\Omega, K}}{K(K-1)}, C_0,6\pi \}$, we deduce that $C_{\Omega, k}\leq Ck(k-1)$ for all $2\leq k=o((\log n)^{1/2})$. This closes the induction step for $C_{\Omega,k}$.
The proof works similarly for $C_{\omega,k}$ using Lemma~\ref{lem recurence small omega}. For $k\geq 2$, we have
\begin{align}\label{Eq induction omega}
\frac{n}{q^{n/2}(\log n)^{k-2}}\lvert E_{\omega_{k}}(n,\chi) \rvert \leq& 2\frac{d^{k}}{k!} \sum_{\ell = 2}^{k}\sum_{j = \ell}^{n-1}\left(1 + \frac{C_{\omega,k-\ell}}{\gamma(\chi)\log n}\right)j\binom{j-1}{\ell-1}\frac{q^{1-\frac{j}{2}}d^{1-\ell}(\log n)^{2-\ell}(k-1)!}{(k-\ell -1)!}\nonumber
\\ &+ \frac{C_0}{2} d^k \frac{k + \frac{1}{\gamma(\chi)}}{(k-1)!} + \frac{1}{k} \frac{n}{q^{n/2}(\log n)^{k-2}}\sum_{n_1 + n_2 = n}\lvert E_{\omega_{k-1}}(n_1,\chi) \pi_{f_1}(n_2,\chi) \rvert.
\end{align}
The last term is handled as in~\eqref{Eq applying induction Omega}. The first term is bounded independently of $n$ (but a priori not independently of $q$ if $q = 3$) by observing that the series
$$ \sum_{j \geq \ell} \frac{j!}{(j-\ell)!} q^{-\frac{j}{2}} = \frac{\sqrt{q}}{\sqrt{q}-1} \ell ! (\sqrt{q} - 1)^{-\ell}$$
is convergent.
Up to increasing the constant to include the case $q=3$, this proves the existence of $C_{\omega,k}$ satisfying
\begin{align*}
E_{\omega_k}(n,\chi) \leq C_{\omega,k} \frac{d^{k}}{(k-1)!}\frac{1}{\gamma(\chi)}\frac{q^{n/2} (\log n)^{k-2}}{n}.
\end{align*}
Now, assuming $q \geq 5$, one has
\begin{align*}
\sum_{\ell = 2}^{k}\sum_{j = \ell}^{n-1}j\binom{j-1}{\ell-1}q^{1-\frac{j}{2}}d^{1-\ell} &\leq 2dq\sum_{\ell = 2}^{k}\ell (d(\sqrt{q} -1))^{-\ell}.
\end{align*}
The series converges and can be bounded independently of $q$ and $d$; we may choose $C \geq \max\{ C_0, 8\gamma(\chi)dq\sum_{\ell \geq 2}\ell (d(\sqrt{q} -1))^{-\ell}\}$.
Thus, for $q\geq 5$, $k = o((\log n)^{1/2})$, using the induction hypothesis $C_{\omega,\ell} \leq C\ell(\ell -1)$ for $2\leq \ell\leq k-1$, \eqref{Eq induction omega} becomes
\begin{align*}
C_{\omega,k}\leq C\left(\frac{(k-1)(k-2)}{2k} + \frac{k}{2} + \frac{(k-1)^3}{k} + o(k) \right). \end{align*}
By the same argument as for $C_{\Omega, k}$, we conclude that $C_{\omega, k}\leq C k(k-1)$ for some absolute constant $C$.
\end{proof}
\section{Counting polynomials of degree $\leq n$ with $k$ irreducible factors in congruence classes}\label{Sec_proof_deg<X}
The asymptotic formula in Theorem~\ref{Th_Difference_k_general_deg<X} is obtained as a corollary of Theorem~\ref{Prop_k_general}, by summing over the characters and over the degree of the polynomials.
For $A \subset (\mathbf{F}_q[t]/(M))^{*}$, for $f = \Omega$ or $\omega$ and for any integers $n,k \geq 1$,
we define the function
\begin{equation*}\pi_{f_k}(n; M, A) = \lvert\lbrace N \in\mathbf{F}_{q}[t] : N \text{ monic, } \deg{N} = n,~ f(N)=k,~ N \bmod M \in A \rbrace\rvert,\end{equation*}
so that
\begin{align*}
\Delta_{f_k}(X; M, A, B)
= \frac{X (k-1)!}{q^{X/2}(\log X)^{k-1}}\sum_{n\leq X}\left( \frac{1}{\lvert A\rvert }\pi_{f_k}(n; M, A) - \frac{1}{\lvert B \rvert}\pi_{f_k}(n; M, B)\right).
\end{align*}
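For very small parameters, the counting functions $\pi_{f_k}(n;M,A)$ can be computed by brute force and checked against elementary facts. The following Python sketch (our addition; the choices $q=2$, $M=t^2+t+1$, $n=7$ are illustrative, not from the paper) enumerates monic polynomials over $\mathbf{F}_2[t]$ as bitmasks and tabulates $\pi_{\Omega_k}(7;M,\{a\})$:

```python
# Brute-force computation of pi_{Omega_k}(n; M, A) over F_2[t].
# Polynomials are encoded as bitmasks (bit i = coefficient of t^i).

def pdeg(a):
    return a.bit_length() - 1

def pdivmod(a, m):
    # division with remainder in F_2[t]: subtraction is XOR
    q, dm = 0, pdeg(m)
    while a and pdeg(a) >= dm:
        s = pdeg(a) - dm
        q |= 1 << s
        a ^= m << s
    return q, a

def big_omega(a):
    # Omega(a): irreducible factors counted with multiplicity, by trial
    # division (the smallest unremoved divisor is automatically irreducible)
    cnt, d = 0, 2
    while pdeg(a) > 0:
        q, r = pdivmod(a, d)
        if r == 0:
            a, cnt = q, cnt + 1
        else:
            d += 1
    return cnt

M, n = 0b111, 7                        # M = t^2 + t + 1, irreducible over F_2
counts = {}                            # (k, residue mod M) -> count
for a in range(1 << n, 1 << (n + 1)):  # monic polynomials of degree n
    r = pdivmod(a, M)[1]
    if r == 0:
        continue                       # keep only N coprime to M
    counts[(big_omega(a), r)] = counts.get((big_omega(a), r), 0) + 1

# Check 1: the classes partition the 2^n - 2^(n-2) monic polynomials of degree
# n coprime to M (those divisible by M are M times a monic of degree n-2).
assert sum(counts.values()) == 2 ** n - 2 ** (n - 2)
# Check 2: summing pi_{Omega_1}(7; M, {a}) over the three units recovers the
# number of irreducibles of degree 7, namely (2^7 - 2)/7 = 18.
assert sum(counts.get((1, r), 0) for r in (1, 2, 3)) == 18
for r in (1, 2, 3):
    print("residue", r, {k: c for (k, rr), c in sorted(counts.items()) if rr == r})
```

At this scale one can already observe the uneven distribution of the counts across residue classes that the theorem quantifies asymptotically.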
Before we give the proof of Theorem \ref{Th_Difference_k_general_deg<X}, let us prove the following preliminary lemma.
\begin{lem}\label{Lem_SumOverN}
Let $k\geq 0$ be an integer.
For any complex number $\alpha$ with $\lvert\alpha\rvert \geq \sqrt{2}$, as $X\rightarrow\infty$ we have that
\begin{align*}
\frac{X}{\alpha^{X}(\log X)^{k}}\sum_{n=1}^{X}\frac{\alpha^n(\log n)^{k}}{n} = \frac{\alpha}{\alpha -1} + O\left( \frac{1}{\lvert\alpha\rvert^{X}}+\frac{1 + \frac{k}{\log X}}{X\log X} \right).
\end{align*}
\end{lem}
\begin{proof}
The proof is adapted from \cite[Lem.~2.2]{Cha2008}.
Applying Abel's identity yields
\begin{align*}
\sum_{n=1}^{X} \frac{\alpha^n(\log n)^{k}}{n} &= \frac{\alpha^{X+1} -{\alpha}}{\alpha -1}\frac{(\log X)^k}{X} + \int_{1}^{X} \frac{\alpha^{[t]+1} -{\alpha}}{\alpha -1} \frac{(k-1)(\log t)^{k-2} - (\log t)^{k-1}}{t^2} \diff t \\
&= \frac{\alpha}{\alpha -1}(\alpha^{X} +O(1)) \frac{(\log X)^k}{X}
+ O\left( \left(k(\log X)^{k-2} + (\log X)^{k-1}\right)\int_{1}^{X} \frac{ \lvert \alpha\rvert^{t}}{t^2} \diff t \right).
\end{align*}
Cha proved that $\int_{1}^{X} \frac{ \lvert \alpha\rvert^{t}}{t^2} \diff t = O\left( \frac{\lvert \alpha\rvert^X}{X^2}\right)$ via integration by parts. This concludes the proof.
\end{proof}
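The limit $\frac{\alpha}{\alpha-1}$ in Lemma~\ref{Lem_SumOverN} is easy to observe numerically. The following snippet (our addition, with illustrative parameters $\alpha=2$, $k=2$, $X=400$) checks the normalized sum against the limit for a real $\alpha$ with $\lvert\alpha\rvert\geq\sqrt2$:

```python
import math

# Check  (X / (alpha^X (log X)^k)) * sum_{n<=X} alpha^n (log n)^k / n  ->  alpha/(alpha-1)
alpha, k, X = 2.0, 2, 400
s = sum(alpha ** n * math.log(n) ** k / n for n in range(1, X + 1))
val = X * s / (alpha ** X * math.log(X) ** k)
limit = alpha / (alpha - 1)
print(f"normalized sum = {val:.4f}, limit = {limit:.4f}")
assert abs(val - limit) < 0.05
```

The discrepancy here is of order $1/X$, consistent with the error term of the lemma once the logarithmic factors are accounted for.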
\begin{proof}[Proof of Theorem~\ref{Th_Difference_k_general_deg<X}]
Let us first sum over the characters.
By orthogonality of characters, for every $A \subset (\mathbf{F}_{q}[t]/(M))^*$, one has
\begin{equation*}\pi_{f_k}(n; M, A) =
\frac{1}{\phi(M)}\sum_{\chi \bmod M}\sum_{a\in A} \bar\chi(a) \pi_{f_k}(n,\chi).
\end{equation*}
Hence for any $A, B \subset (\mathbf{F}_{q}[t]/(M))^*$, one has
\begin{align*}
\frac{1}{\lvert A\rvert}\pi_{f_k}(n; M, A) - \frac{1}{\lvert B\rvert}\pi_{f_k}(n; M, B)
&= \frac{1}{\phi(M)}\sum_{\chi \bmod M}\left(\frac{1}{\lvert A\rvert}\sum_{a\in A} \bar\chi(a) - \frac{1}{\lvert B\rvert}\sum_{b\in B} \bar\chi(b) \right) \pi_{f_k}(n,\chi) \\
&= \sum_{\chi \bmod M}c(\chi,A,B) \pi_{f_k}(n,\chi).
\end{align*}
Note that the case $\chi = \chi_0$ is trivial: one has $c(\chi_0,A,B) = 0$.
We have $\lvert c(\chi,A,B) \rvert \leq 2$, so when we sum over the degree $n$, the implicit constants in the error terms are at most multiplied by $2$.
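The orthogonality relation used above is the function-field analogue of the classical one for Dirichlet characters. As a hedged illustration (our addition, in the integer setting rather than $\mathbf{F}_q[t]$), the following snippet verifies it for the four characters modulo $5$:

```python
import cmath, math

# (Z/5)^* is cyclic, generated by 2; its characters are chi_j(2^t) = e^(2 pi i j t / 4).
m, g = 5, 2
dlog = {pow(g, t, m): t for t in range(4)}              # discrete log base 2

def chi(j, a):
    return cmath.exp(2j * math.pi * j * dlog[a % m] / 4)

# (1/phi(m)) * sum_chi  conj(chi(a)) chi(n)  =  1  iff  n = a  (mod m).
for a in (1, 2, 3, 4):
    for n in (1, 2, 3, 4):
        s = sum(chi(j, a).conjugate() * chi(j, n) for j in range(4)) / 4
        assert abs(s - (1 if n % m == a % m else 0)) < 1e-12
print("orthogonality relations verified mod 5")
```

The same averaging over characters is what isolates $\pi_{f_k}(n;M,A)$ from the twisted counts $\pi_{f_k}(n,\chi)$ in the display above.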
Now, let us sum over the degree. We divide the range $n\leq X$ into the two parts $n\leq \frac{X}{3}$ and $\frac{X}{3}<n\leq X$. For $n\leq \frac{X}{3}$, we use the trivial bound $\pi_{f_k}(n; M, A)\leq q^{n}$. We have
\begin{align*}
\Delta_{f_k}(X; M, A, B)&= \sum_{\chi \bmod M}c(\chi,A,B) \frac{X (k-1)!}{q^{X/2}(\log X)^{k-1}}\Bigg\{\sum_{n\leq \frac{X}{3}}+\sum_{\frac{X}{3}<n\leq X}\Bigg\} \pi_{f_k}(n,\chi)\nonumber\\
&= \sum_{\chi \bmod M}c(\chi,A,B)\frac{X (k-1)!}{q^{X/2}(\log X)^{k-1}}\sum_{\frac{X}{3}<n\leq X} \pi_{f_k}(n,\chi) +O\left(Xq^{1 - \frac{X}{6}}\frac{(k-1)!}{(\log X)^{k-1}}\right).
\end{align*}
When $\frac{X}{3}<n\leq X$, we have $k=o((\log X)^{\frac12})=o((\log n)^{\frac12})$, so the asymptotic formula in Theorem~\ref{Prop_k_general} yields
\begin{align*}
&\sum_{\frac{X}{3}<n\leq X} \pi_{f_k}(n,\chi)\nonumber\\
&= \frac{(-1)^k}{(k-1)!} \sum_{\frac{X}{3}<n\leq X} \Bigg\{\frac{q^{n/2}(\log n)^{k-1}}{n}
\left( \left(m_+(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}\right)^k+(-1)^n \left(m_-(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}\right)^k \right) \\
&\quad + \sum_{\alpha_j\neq\pm\sqrt{q}} m_j(\chi)^k \frac{\alpha_j^n(\chi) (\log n)^{k-1}}{n}
+O \left( d^k k(k-1)\frac{1}{\gamma(\chi)} \frac{q^{n/2}(\log n)^{k-2}}{n} + d\frac{q^{n/3}}{n}\right) \Bigg\}\\
&= \frac{(-1)^k}{(k-1)!} \sum_{n\leq X} \Bigg\{\frac{q^{n/2}(\log n)^{k-1}}{n}
\left( \left(m_+(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}\right)^k+(-1)^n \left(m_-(\chi) -\epsilon_f\frac{\delta(\chi^2)}{2}\right)^k \right) \\
&\quad + \sum_{\alpha_j\neq\pm\sqrt{q}} m_j(\chi)^k \frac{\alpha_j^n(\chi) (\log n)^{k-1}}{n}
+O \left( d^k \frac{k(k-1)}{(k-1)!\gamma(\chi)} \frac{q^{n/2}(\log n)^{k-2}}{n} + d\frac{q^{n/3}}{n} \right) \Bigg\} \\
&\qquad+ O \left( d^k \frac{1}{(k-1)!} \frac{q^{X/6}(\log X)^{k-1}}{X} \right).
\end{align*}
Now, applying Lemma~\ref{Lem_SumOverN} for each $\alpha_j = \sqrt{q}e^{i\gamma_j(\chi)}$ (real or not), and using $k= o(\log X)$,
one has
\begin{align*}
\frac{X}{(\log X)^{k-1}q^{X/2}}\sum_{n=1}^{X}\frac{\alpha_j^n(\log n)^{k-1}}{n} = \frac{\alpha_j}{\alpha_j -1}\left(\frac{\alpha_j}{\sqrt{q}}\right)^{X} + O\left( \frac{1}{X\log X}\right).
\end{align*}
We also apply Lemma~\ref{Lem_SumOverN} and \cite[Lem.~2.2]{Cha2008} to the sum of the error terms and derive that
\begin{align*}
&\Delta_{f_k}(X; M, A, B)\nonumber\\
&= (-1)^k \sum_{\chi}c(\chi,A,B)\Bigg(\ \left(m_+(\chi) +\frac{\delta(\chi^2)}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q}-1} +(-1)^X \left(m_-(\chi)+\frac{\delta(\chi^2)}{2}\right)^k \frac{\sqrt{q}}{\sqrt{q}+1} \nonumber\\
&\qquad +\sideset{}{'}\sum_{j=1}^{d_{\chi}} m_j^k e^{iX\gamma_{j}(\chi)} \frac{\alpha_j(\chi)}{\alpha_j(\chi)-1} \Bigg) + O\left( \frac{d^k k(k-1)}{\gamma(M)\log X} + dq^{-X/6}\right).
\end{align*}
This concludes the proof of Theorem~\ref{Th_Difference_k_general_deg<X}.
\end{proof}
\end{document}
\begin{document}
\title{\bf Local central limit theorems \\ in stochastic geometry}
\author{
Mathew D. Penrose$^1$ and Yuval Peres$^2$
\\
{\normalsize{\em
University of Bath and
Microsoft Research
}} }
\maketitle
\footnotetext{ $~^1$ Department of
Mathematical Sciences, University of Bath, Bath BA1 7AY, United
Kingdom: {\texttt [email protected]} }
\footnotetext{ $~^1$ Partially supported by
the Alexander von Humboldt Foundation through
a Friedrich Wilhelm Bessel Research Award.}
\footnotetext{$~^2$ Microsoft Research, Redmond, WA USA: {\texttt [email protected]}}
\begin{abstract}
We give a general local central limit theorem
for the sum of two independent random variables, one of
which satisfies a central limit theorem while
the other satisfies a local central limit theorem
with the same order variance. We apply
this result to various quantities
arising in stochastic geometry, including: size of the largest
component for percolation on a box;
number of components, number of edges, or number
of isolated points, for random geometric graphs; covered volume
for germ-grain coverage models; number of accepted points for
finite-input random sequential adsorption; sum of nearest-neighbour
distances for a random
sample from a continuous
multidimensional
distribution.
\end{abstract}
{\em Key words and phrases}: Local central limit theorem, stochastic geometry,
percolation, random geometric graph, nearest neighbours.
{\em AMS classifications}: 60F05, 60D05, 60K35, 05C80
\tableofcontents
\section{Introduction}
\label{secintro}
A number of general central limit theorems (CLTs) have
been proved recently for quantities arising in stochastic geometry
subject to a certain local dependence.
See \cite{Penbk,PenCLT,PenEJP,PY1,PY2} for some examples.
The present work is concerned with
{\em local} central limit theorems
for such quantities.
The local CLT for a binomial $(n,p)$ variable
says that
for large $n$ with $p$ fixed, the difference between its probability
mass function and that of the corresponding normal variable
rounded to the nearest integer
is uniformly $o(n^{-1/2})$.
The classical local CLT provides
similar results for sums of i.i.d.\ variables with an arbitrary distribution
possessing a finite second moment.
Here we are concerned with
sums of variables with some weak dependence, in the sense that
the summands can be thought of as contributions from spatial
regions with only local interactions between different regions.
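The binomial local CLT stated above is easy to check numerically. The following Python sketch (our addition, with illustrative parameters $n=2000$, $p=0.3$) compares the Binomial$(n,p)$ mass function with the matching normal density and confirms that the sup-norm difference is far below $n^{-1/2}$:

```python
import math

# Binomial(n, p) pmf via log-gamma, to avoid overflow/underflow
def binom_pmf(n, p, j):
    return math.exp(math.lgamma(n + 1) - math.lgamma(j + 1) - math.lgamma(n - j + 1)
                    + j * math.log(p) + (n - j) * math.log(1 - p))

n, p = 2000, 0.3
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
sup_err = max(abs(binom_pmf(n, p, j)
                  - math.exp(-((j - mu) / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi)))
              for j in range(n + 1))
print(f"sqrt(n) * sup_j |pmf - density| = {math.sqrt(n) * sup_err:.4f}")
assert math.sqrt(n) * sup_err < 0.05
```

For this skewed $p$ the leading error is the $O(n^{-1})$ Edgeworth correction, so $\sqrt{n}\cdot\sup$-error indeed tends to $0$.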
Among the examples for which we obtain local CLTs here are the following.
In Section \ref{secperc} we give local CLTs for
the number of clusters in percolation on a large finite lattice box,
and for the size of the largest open cluster for supercritical
percolation on a large finite box, as the box size becomes large.
In Sections \ref{secRGG} and \ref{secstogeo}
we consider continuum models, starting with random geometric
graphs \cite{Penbk} for which we demonstrate local CLTs for the
number of copies of a fixed subgraph (for example the number of
edges) both in the thermodynamic limit
(in which the mean degree is $\Theta(1)$)
and in the sparse limit (in which the mean degree vanishes).
For the thermodynamic limit we also derive local CLTs for
the number of components of a given type (for example the number
of isolated points), as an example of a more general local CLT
for functionals which have finite range interactions
or which are sums of functions determined by nearest neighbours
(Theorem \ref{finranthm}). This
also yields local CLTs for quantities associated
with a variety of other models, including germ-grain models and
random sequential adsorption in the continuum.
We derive these local CLTs
using the following idea which has been seen (in somewhat different
form) in \cite{DMcD}, in \cite{Be},
and no doubt
elsewhere.
If the random variable of interest
is known to satisfy a CLT, and can be decomposed
(with high probability) as the sum
of two independent parts, one of which satisfies a local CLT with
the same order of variance growth, then one can find a local CLT for the
original variable.
Theorem \ref{genthm2} below formalises this idea. The statement of
this result has no geometrical content and it could be of use elsewhere.
In the geometrical context, one can often use the geometrical
structure to effect such a decomposition. Loosely speaking,
in these examples one can represent a positive proportion
of the spatial region under consideration as a union of
disjoint boxes or balls, in such a way that with high probability a
non-vanishing proportion of the boxes are `good' in some sense,
where the contributions to the variable of interest from a good box,
given the configuration outside the box and given that it has
the `good' property, are i.i.d.
Then the classical
local CLT applies to the total contribution from good boxes,
and one can represent
the variable of interest as the sum of two independent contributions,
one of which (namely the contribution from good boxes)
satisfies a local CLT, and then apply Theorem \ref{genthm2}.
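A minimal numerical illustration of this decomposition idea (the toy distributions below are our own assumption, not taken from the paper): convolving an arbitrary fixed integer-valued "remainder" $Y$ with an independent i.i.d.\ sum $S_n \sim {\rm Bin}(n,1/2)$ already yields a lattice local CLT for $Z_n = Y + S_n$, because the binomial part supplies all the required smoothness.

```python
import math

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def binom_pmf(n, k):
    return math.comb(n, k) / 2.0**n

def z_lclt_error(n):
    """sup_k | sigma * P(Z_n = k) - phi((k - E Z_n) / sigma) |
    for Z_n = Y + S_n, with S_n ~ Bin(n, 1/2) independent of Y."""
    # Y: a fixed (assumed) three-point lattice distribution with span 1.
    y_pmf = {0: 0.5, 1: 0.3, 5: 0.2}
    ey = sum(y * p for y, p in y_pmf.items())
    vary = sum(y * y * p for y, p in y_pmf.items()) - ey**2
    mu = ey + n / 2.0
    sigma = math.sqrt(vary + n / 4.0)
    err = 0.0
    for k in range(n + 6):  # support of Z_n is {0, ..., n + 5}
        pz = sum(p * binom_pmf(n, k - y) for y, p in y_pmf.items()
                 if 0 <= k - y <= n)
        err = max(err, abs(sigma * pz - phi((k - mu) / sigma)))
    return err

print(z_lclt_error(100), z_lclt_error(400))
```

Here the variance of $Y$ stays bounded while ${\rm Var}[S_n]$ grows linearly, mirroring the requirement in the theorem that the i.i.d.\ part carries the same order of variance growth.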
This technique is related to a method used by Avram and Bertsimas
\cite{AB} to find lower bounds on the variance for certain
quantities in stochastic geometry, although the examples considered
here are mostly different from those considered in \cite{AB}.
In any case, our results provide extra information
on the CLT behaviour for variables for numerous geometrical and
multivariate stochastic settings, which have arisen in a variety
of applications (see the examples in Section \ref{secstogeo}).
\section{A general local CLT}
\label{secgenresult}
In the sequel we let $\phi$ denote the standard
(${\cal N}(0,1)$)
normal density function, i.e. $\phi(x)= (2\pi)^{-1/2}
\exp(-(1/2)x^2)$. Note that for $\sigma >0$,
the probability density function of the
${\cal N}(0,\sigma^2)$
distribution
is then $\sigma^{-1}\phi(x/\sigma)$, $x \in \mathbb{R}$.
Define the ${\cal N}(0,0)$ distribution
to be that of a random variable that
is identically zero.
We say a random variable $X$
is {\em integrable} if $\mathbb{E}[|X|]< \infty$. We say
$X$ has a {\em lattice} distribution if there
exists $h >0$
such that $(X-a)/h \in \mathbb{Z}$ almost surely
for some $a \in \mathbb{R}$.
If $X$ is lattice, then the largest
such $h$ is called the {\em span} of $X$, and here
denoted $h_X$. If $X$ is non-lattice, then we
set $h_X :=0$. If $X$ is degenerate, i.e.
if ${\rm Var}[X]=0$, then we set
$h_X := +\infty$.
As usual with local central limit theorems, we need to
distinguish between the lattice and non-lattice cases.
For real numbers $a \geq 0,b >0$, we shall write $a|b$
to mean that either
$b$ is an integer multiple of $a$ or $a =0$.
When $a= +\infty$ and $b< \infty$,
we shall say by convention that $a|b$ does not hold.
\begin{theo}
\label{genthm2}
Let
$V,V_1,V_2, V_3,\ldots$ be independent identically
distributed random variables.
Suppose for each $n \in \mathbb{N}$ that
$(Y_n,S_n,Z_n)$ is a triple of integrable
random variables on the same sample space such that
(i) $Y_n$ and $S_n$ are independent, with
$S_n \eqd \sum_{j=1}^n V_j$;
(ii) both
$ n^{-1/2} \mathbb{E}[|Z_n - (Y_n+ S_n)|]$ and
$ n^{1/2} P[Z_n \neq Y_n+ S_n]$ tend to zero as
$n \to \infty$;
and (iii) for some $\sigma \in [0,\infty)$,
\begin{eqnarray}
n^{-1/2} (Z_n - \mathbb{E} Z_n ) \tod {\cal N}(0,\sigma^2)
{\rm ~~~as ~~} n \to \infty.
\label{normlim1a}
\end{eqnarray}
Then ${\rm Var} [ V ] \leq \sigma^2$
and if $b, c_1, c_2, c_3, \ldots$ are positive constants with $h_V | b$
and $c_n \sim n^{1/2}$ as $n \to \infty$, then
\begin{eqnarray}
\sup_{u \in \mathbb{R} }
\left\{ \left|
c_n P[Z_n \in [u,u +b) ] -
\sigma^{-1} b \phi \left(\frac{u - \mathbb{E} Z_n}{ c_n
\sigma
}
\right)
\right|
\right\}
\to 0
~~~~
{\rm as} ~ n \to \infty.
\label{1102c2}
\end{eqnarray}
Also,
\begin{eqnarray}
n^{-1/2} (Y_n- \mathbb{E} Y_n) \tod {\cal N}(0,\sigma^2 - {\rm Var} [V ] ).
\label{0110a}
\end{eqnarray}
\end{theo}
\noindent{\bf Remarks.} The main case to consider is $c_n=n^{1/2}$. The more general formulation above is convenient in some applications, e.g., in the proof of Theorem \ref{Gthm}.
Theorem \ref{genthm2} is proved in Section \ref{secpfgen}.
Our main interest is in the conclusion (\ref{1102c2}),
but (\ref{0110a}), which comes out
for free from the proof, is also of interest.
\section{Percolation}
\label{secperc}
\eqco \thco \prco \laco \coco \cjco \deco
Most of our applications of Theorem \ref{genthm2} will be in the continuum,
but we start with applications to percolation on the lattice.
We consider {\em site percolation} with parameter $p$, where
each site (element) of $\mathbb{Z}^d$ is open with probability $p$ and closed otherwise,
independently of all the other sites. Given a finite set
$B \subset \mathbb{Z}^d$,
the {\em open clusters in $B$} are
defined to be
the components of the (random) graph with vertex set
consisting of the open sites in $B$, and edges
between each pair of open sites in $B$ that are
at unit Euclidean distance from each other.
Let ${\cal C}l(B)$ denote the {\em number of open clusters} in $B$.
Listing the open clusters in $B$ as ${\bf C}_1,\ldots,{\bf C}_{{\cal C}l(B)}$,
and denoting by $|{\bf C}_j|$ the order
(i.e., the number of vertices)
of the cluster ${\bf C}_j$, we
denote by $L(B)$ the random variable $\max(|{\bf C}_1|, \ldots,
|{\bf C}_{{\cal C}l(B)}|)$, and refer to this as the {\em size of
the largest open cluster in $B$}.
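For concreteness, ${\cal C}l(B)$ and $L(B)$ can be computed from a single realization by a standard union–find pass over the open sites. The sketch below (ours, not part of the paper) takes $d = 2$, where unit-Euclidean-distance adjacency is 4-neighbour adjacency.

```python
def cluster_stats(open_sites):
    """Return (number of open clusters, size of largest open cluster)
    for a set of open sites in Z^2; adjacency = unit Euclidean distance."""
    parent = {s: s for s in open_sites}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path halving
            s = parent[s]
        return s

    def union(s, t):
        rs, rt = find(s), find(t)
        if rs != rt:
            parent[rs] = rt

    # it suffices to scan the right and upper neighbour of each open site
    for (x, y) in open_sites:
        for nb in ((x + 1, y), (x, y + 1)):
            if nb in open_sites:
                union((x, y), nb)

    sizes = {}
    for s in open_sites:
        r = find(s)
        sizes[r] = sizes.get(r, 0) + 1
    if not sizes:
        return 0, 0
    return len(sizes), max(sizes.values())

# e.g. open sites {(0,0),(0,1),(2,0),(2,2)} form 3 clusters, the largest of size 2
```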
Given a growing sequence of regions $(B_n)_{n \geq 1}$ in $\mathbb{Z}^d$,
we shall demonstrate local CLTs for
the random variables ${\cal C}l(B_n)$ and $L(B_n)$,
subject to some conditions on the sets $B_n$
which are satisfied, for example, if they are cubes of side $n$.
There should not be any difficulty adapting these
results to bond percolation.
For $B \subset \mathbb{Z}^d$ let $|B|$ denote the number of elements
of $B$. Let $| \partial B|$ denote the number of elements of $\mathbb{Z}^d \setminus B$
lying at unit Euclidean distance from some element of $B$.
We say a sequence $(B_n)_{n \geq 1}$ of nonempty finite sets in $\mathbb{Z}^d$
has
{\em vanishing relative boundary}
if
\begin{eqnarray}
\lim_{n \to \infty}
|\partial B_n | /|B_n| =
0.
\label{vrb}
\end{eqnarray}
We write $\liminf(B_n)$ for
$\cup_{n \geq 1} \cap_{m \geq n} B_m$.
\begin{theo}
\label{thclus}
Suppose $d \geq 2$ and $p \in (0,1)$.
Then there
exists $\sigma >0$ such that if $(B_n)_{n \geq 1}$ is
any sequence of nonempty finite subsets of $\mathbb{Z}^d$ with vanishing
relative boundary and with $\liminf(B_n)= \mathbb{Z}^d$,
then
\begin{eqnarray}
|B_n|^{-1/2} ({\cal C}l(B_n) - \mathbb{E} {\cal C}l(B_n) ) \tod {\cal N}(0,\sigma^2)
\label{ClusCLT}
\end{eqnarray}
and
\begin{eqnarray}
\sup_{j \in \mathbb{Z} }
\left | |B_n|^{1/2} P [{\cal C}l(B_n) = j ] - \sigma^{-1} \phi \left(
\frac{j- \mathbb{E} {\cal C}l(B_n)}{\sigma |B_n|^{1/2}} \right) \right| \to 0.
\label{ClusLLT}
\end{eqnarray}
\end{theo}
For the size of the largest open cluster we consider a more restricted
class of sequences $(B_n)_{n \geq 1}$.
Let us say
that
$(B_n)_{n \geq 1}$
is a {\em cube-like sequence of lattice boxes} if each set $B_n$
is of the form $\prod_{j=1}^d([-a_{j,n},b_{j,n} ] \cap \mathbb{Z})$, where
$a_{j,n} \in \mathbb{N} $ and $b_{j,n} \in \mathbb{N}$ for all $j,n$, and moreover
\begin{eqnarray}
\liminf_{n \to \infty} \frac{\inf \{a_{1,n},b_{1,n},a_{2,n},b_{2,n},
\ldots,a_{d,n},b_{d,n} \}}{
\sup \{a_{1,n},b_{1,n},a_{2,n},b_{2,n},
\ldots,a_{d,n},b_{d,n} \}} >0
\label{cubelike}
\end{eqnarray}
which says, loosely speaking, that the sets $B_n$ are not too far away from
all being cubes.
Given $d \geq 2$, and $p \in (0,1)$, let $\theta_d(p)$ denote the
percolation probability, that is, the probability
that the graph with vertices consisting of all open sites in
$\mathbb{Z}^d$ and edges between any two open sites that are unit Euclidean
distance apart includes an infinite component containing the origin.
Let $p_c(d)$ denote the critical value of $p$ for site percolation
in $d$ dimensions, i.e., the infimum of all $p \in (0,1)$ such
that $\theta_d(p) >0$.
It is well known that $p_c(d) \in (0,1) $ for all $d \geq 2$.
\begin{theo}
\label{thlargest}
Suppose $d \geq 2$ and $p \in (p_c(d),1)$.
Then there
exists $\sigma >0$ such that if $(B_n)_{n \geq 1}$ is
any cube-like sequence of lattice boxes in $\mathbb{Z}^d$ with
$\liminf(B_n)= \mathbb{Z}^d$,
we have
\begin{eqnarray}
|B_n|^{-1/2} (L(B_n) - \mathbb{E} L(B_n) ) \tod {\cal N}(0,\sigma^2)
\label{LargCLT}
\end{eqnarray}
and
\begin{eqnarray}
\sup_{j \in \mathbb{Z} }
\left | |B_n|^{1/2} P [L(B_n) = j ] - \sigma^{-1} \phi \left(
\frac{j- \mathbb{E} L(B_n)}{\sigma |B_n|^{1/2}} \right) \right| \to 0.
\label{LargLLT}
\end{eqnarray}
\end{theo}
Theorems \ref{thclus} and \ref{thlargest}
are proved in Section \ref{secpfperc}.
Theorem \ref{thclus} is the simplest of our applications of Theorem
\ref{genthm2}
and we give its proof with some extra detail for instructional purposes.
\section{Random geometric graphs}
\label{secRGG}
\eqco \thco \prco \laco \coco \cjco \deco
For our results in this section and the next,
on continuum stochastic geometry,
let
$X_1,X_2,\ldots$ be i.i.d. $d$-dimensional random vectors
with common density $f$.
Assume throughout that $f_{{\rm max}} := \sup_{x \in \mathbb{R}^d} f(x) < \infty$,
and that $f$ is almost everywhere continuous.
Define the induced {\em binomial point processes}
\begin{equation} \label{bin}
{\cal X}_n:= {\cal X}_{n}(f): = \{X_{1},\ldots,X_{n}\}, ~~~ n \in \mathbb{N}.
\end{equation}
In the special case where $f$ is the density of the uniform distribution
on the unit cube $[0,1]^d$ we write $f \equiv f_U$.
For locally finite ${\cal X} \subset \mathbb{R}^d$ and $r >0$,
let ${\cal G}({\cal X},r)$ denote the graph with vertex set ${\cal X}$ and
with edges connecting each pair of vertices $x,y$ in ${\cal X}$ with
$|y-x| \leq r$; here $|\cdot|$ denotes the Euclidean norm though there
should not be any difficulty extending our results to other norms.
Sometimes ${\cal G}({\cal X},r)$ is called a {\em geometric graph} or
{\em Gilbert graph}.
Let $(r_n)_{n \geq 1}$ be a sequence with $r_n \to 0$ as $n \to \infty$,
and write ${\cal G}_n := {\cal G}({\cal X}_n,r_n)$.
Graphs of the type of ${\cal G}_n$ are the
subject of the monograph \cite{Penbk}.
Among the quantities of interest
associated with ${\cal G}_n$ are the number
of edges, the number of triangles, and so on; also
the number of isolated points, the number of isolated edges, and so on.
CLTs for such quantities are given in
Chapter 3 of \cite{Penbk} (see the notes therein for other
references) for a
large class of limiting regimes for $r_n$.
Here we give some associated local CLTs.
Let $\kappa \in \mathbb{N}$ and let $\Gamma$ be a
fixed connected graph with $\kappa$ vertices.
We follow terminology in \cite{Penbk}. With $\sim$ denoting
graph isomorphism, let $G_n$ be
the number of $\kappa$-subsets ${\cal Y}$ of ${\cal X}_n$
such that ${\cal G}({\cal Y},r_n) \sim \Gamma$
(i.e., the number of induced subgraphs of ${\cal G}_n$ that are isomorphic
to $\Gamma$). Let
$G^*_n$ (denoted
$J_n$ in \cite{Penbk}) denote the number of {\em components} of ${\cal G}_n$ that
are isomorphic to $\Gamma$. To avoid
certain trivialities, assume that $\Gamma$ is {\em feasible}
in the sense of \cite{Penbk}, i.e. that ${\cal G}({\cal X}_\kappa,r)$
is isomorphic to $\Gamma$ with strictly positive probability for
some $r >0$. When considering $G_n$,
we shall also assume that $\kappa \geq 2$.
We shall give local CLTs for $G_n$ and $G^*_n$.
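As a concrete illustration of $G_n$ (ours, not from the paper): for $\Gamma = K_2$ it counts edges, and for $\Gamma = K_3$ it counts triangles. A brute-force Python sketch for small planar point sets:

```python
import math
from itertools import combinations

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def num_edges(points, r):
    """G_n with Gamma = K_2: pairs of points within distance r."""
    return sum(1 for p, q in combinations(points, 2) if dist(p, q) <= r)

def num_triangles(points, r):
    """G_n with Gamma = K_3: triples that are mutually within distance r."""
    return sum(1 for p, q, s in combinations(points, 3)
               if dist(p, q) <= r and dist(p, s) <= r and dist(q, s) <= r)

pts = [(0.0, 0.0), (0.0, 0.5), (0.3, 0.3), (5.0, 5.0)]
print(num_edges(pts, 0.6), num_triangles(pts, 0.6))
```

For these two complete graphs every subgraph on the chosen vertex set is automatically induced, which is why the pair and triple checks above suffice.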
We assume existence of the limit
\begin{eqnarray}
\rho :=
\lim_{n \to \infty} (n r_n^d)
< \infty ,
\label{rhofin}
\end{eqnarray}
so that $\rho $ could be zero.
If $\rho > 0$ then we are taking the {\em thermodynamic limit}.
We also assume that
\begin{eqnarray}
\tau_n^2 := n (n r_n^d)^{\kappa -1} \to \infty ~~~ {\rm as}
~~
n \to \infty.
\label{taubig}
\end{eqnarray}
Then (see Theorems 3.12 and 3.13 of \cite{Penbk})
there exists a constant $\sigma = \sigma(f,\Gamma,\rho) >0$,
given explicitly in terms of $f,\Gamma$ and $\rho$ in \cite{Penbk},
such that
\begin{eqnarray}
\lim_{n \to \infty} \tau_n^{-2} {\rm Var}( G_n) = \sigma^2;
\label{varlim1}
\\
\tau_n^{-1}
(G_n - \mathbb{E} G_n)
\tod {\cal N}(0,\sigma^2).
\label{normlim1}
\end{eqnarray}
We prove here an associated local central limit theorem for the case $f
\equiv f_U$.
\begin{theo}
\label{Gthm}
Suppose $f \equiv f_U$.
Suppose $\kappa \geq 2 $, and suppose assumptions (\ref{rhofin}) and (\ref{taubig}) hold.
Then as $n \to \infty$,
\begin{eqnarray}
\sup_{j \in \mathbb{Z} }
\left | \tau_n P [G_n = j ] - \sigma^{-1} \phi \left(
\frac{j- \mathbb{E} G_n}{\sigma \tau_n} \right) \right| \to 0.
\label{LLT_Ga}
\end{eqnarray}
\end{theo}
We prove Theorem \ref{Gthm} in Section \ref{secpfedges}.
It should be possible to obtain similar results for $G^*_n$, but we
shall do so only for the thermodynamic limit with $\rho >0$,
as an example in the next section. There we shall also see that
in the case $\rho >0 $ it
is possible to relax the assumption that $f \equiv f_U$ in Theorem \ref{Gthm};
when $\rho =0$,
a similar extension to non-uniform densities should be possible, but
we content ourselves
here with the case $f \equiv f_U$ so as to provide one example where the
simplicity and the appeal of the approach
do not get buried.
\section{General local CLTs in stochastic geometry}
\label{secstogeo}
\eqco \thco \prco \laco \coco \cjco \deco
In this section we present some general local central limit theorems
in stochastic geometry. We shall illustrate these by
some examples in the next section.
For our general local CLTs in stochastic geometry,
we consider {\em marked}
point sets in $\mathbb{R}^d$. Let ${\cal M}$ be an arbitrary measurable
space (the {\em mark space}), and let $\mathbb{P}_{\cal M}$ be
a probability distribution on ${\cal M}$.
Given ${\bf x} = (x,t) \in \mathbb{R}^d \times {\cal M}$ and
given $y \in \mathbb{R}^d$, set $ y + {\bf x} := (y+x,t)$.
Given also $a \in \mathbb{R}$, set $a {\bf x} := (ax,t)$.
We think of $t$ as a mark attached to the point $x\in \mathbb{R}^d$
that is unaffected by translation or scalar multiplication.
Given ${\cal X}^* \subset \mathbb{R}^d \times {\cal M}$, $y \in \mathbb{R}^d$,
and
$a \in (0,\infty)$, let $y + a{\cal X}^* := \{y + a{\bf x}: {\bf x} \in {\cal X}^* \}.$
Let $0$ denote the origin of $\mathbb{R}^d$.
For $x \in \mathbb{R}^d$
and $r>0$, let $B(x;r)$ denote the
Euclidean ball $\{ y \in \mathbb{R}^d: |y-x| \leq r\}$,
and set $B^*(x;r) := B(x;r) \times {\cal M}$.
Set $B(r) := B(0;r)$ and $B^*(r):=B^*(0;r)$.
Given nonempty ${\cal X}^* \subset \mathbb{R}^d \times {\cal M}$ and $ {\cal Y}^* \subset \mathbb{R}^d \times {\cal M}$,
write
$$
D({\cal X}^*,{\cal Y}^*):= \inf\{|x-y|:(x,t) \in {\cal X}^*, (y,u) \in {\cal Y}^* ~~{\rm for~ some~}
t,u \in {\cal M}\}.
$$
Let $\omega_d$ denote the volume of the
$d$-dimensional unit ball $B(1)$.
Suppose $H({\cal X}^*)$ is a measurable $\mathbb{R}$-valued function
defined for all
finite ${\cal X}^* \subset \mathbb{R}^d \times {\cal M}$.
Suppose $H$ is translation invariant,
i.e. $H(y+{\cal X}^*)= H({\cal X}^*)$ for all $y {\bf i}n \mathbb{R}^d$ and all
${\cal X}^*$.
Throughout this section we consider the thermodynamic limit;
let $r_n, n \geq 1 $ be a sequence of constants
such that (\ref{rhofin}) holds with $\rho >0$.
Define
\begin{eqnarray}
H_n({\cal X}^*) := H( r_n^{-1} {\cal X}^* ).
\label{0110b}
\end{eqnarray}
Let the point process ${\cal X}_n := \{X_1, \ldots,X_n\} $ in $\mathbb{R}^d$
be as given in (\ref{bin}),
with $f$ as in Section \ref{secRGG} (so $f_{{\rm max}} < \infty$ and
$f$ is Lebesgue-almost everywhere continuous). Define
the corresponding marked point process (i.e., point
process in $\mathbb{R}^d \times {\cal M}$) by
\begin{eqnarray*}
{\cal X}_n^* := \{(X_1,T_1),\ldots,(X_n,T_n)\} ,
\end{eqnarray*}
where $(T_1,T_2,T_3, \ldots)$ is a sequence of independent ${\cal M}$-valued
random variables with distribution $\mathbb{P}_{\cal M}$, independent
of everything else.
We are interested in local CLTs for $H_n({\cal X}^*_n)$, for
general functions $H$.
We give two distinct
types of condition on $H$, either of which is sufficient
to obtain a local CLT.
We shall say that $H$ has {\em finite range interactions}
if there exists a constant $\tau \in (0,\infty)$ such that
\begin{eqnarray}
H({\cal X}^* \cup {\cal Y}^*) = H({\cal X}^*) + H({\cal Y}^*) ~~~ {\rm whenever}
~~
D({\cal X}^*,{\cal Y}^*) > \tau.
\label{finraneq}
\end{eqnarray}
In many examples it is natural to write $H({\cal X}^*)$ as a sum.
Suppose $\xi({\bf x}; {\cal X}^*)$ is a measurable $\mathbb{R}$-valued function
defined for all pairs $({\bf x},{\cal X}^*)$, where ${\cal X}^* \subset \mathbb{R}^d\times {\cal M}$
is finite and ${\bf x}$
is an element of ${\cal X}^*$.
Suppose $\xi$ is translation invariant,
i.e. $\xi(y+{\bf x};y+{\cal X}^*)= \xi({\bf x};{\cal X}^*)$ for all $y \in \mathbb{R}^d$ and all
${\bf x},{\cal X}^*$. Then $\xi$ induces a
translation-invariant functional $H^{(\xi)}$ defined on finite
point sets ${\cal X}^* \subset \mathbb{R}^d \times {\cal M}$ by
\begin{eqnarray} H^{(\xi)}({\cal X}^*) :=
\sum_{{\bf x} \in {\cal X}^*} \xi({\bf x}; {\cal X}^*).
\label{induceh}
\end{eqnarray}
Given $r \in (0,\infty)$ we say $\xi$
has range $r$ if $\xi((x,t); {\cal X}^*) = \xi ((x,t); {\cal X}^* \cap B^*(x;r))$
for all finite ${\cal X}^* \subset \mathbb{R}^d \times {\cal M}$ and all $ (x,t) \in {\cal X}^*$.
It is easy to see that if $\xi$ has range $r$ for some (finite) $r$ then
$H^{(\xi)}$ has finite range interactions,
although not all $H$ with finite range interactions
arise in this way.
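A simple (assumed, unmarked) instance of an induced functional of finite range: $\xi(x;{\cal X}) = 1$ if $x$ has no other point of ${\cal X}$ within distance $r$, so that $H^{(\xi)}({\cal X})$ is the number of $r$-isolated points. The additivity (\ref{finraneq}) over well-separated sets can then be checked directly:

```python
import math

def xi_isolated(x, points, r=1.0):
    """xi(x; X): 1 if x has no other point of X within distance r.
    This xi has range r in the sense defined above."""
    return int(all(math.dist(x, y) > r for y in points if y != x))

def H(points, r=1.0):
    """Induced functional H^(xi): number of r-isolated points of X."""
    return sum(xi_isolated(x, points, r) for x in points)

# finite range interactions: H(X u Y) = H(X) + H(Y) when D(X, Y) > r
X = [(0.0, 0.0), (0.4, 0.0), (3.0, 0.0)]   # one isolated point
Y = [(10.0, 0.0), (10.5, 0.0)]             # no isolated points
print(H(X), H(Y), H(X + Y))
```

Here $D(X,Y) = 7 > r$, so the two contributions simply add, as in (\ref{finraneq}).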
Let $\kappa \in \mathbb{N}$.
Given any set ${\cal X}^* \subset \mathbb{R}^d \times {\cal M}$
with more than $\kappa$ elements, and given ${\bf x} =(x,t) \in {\cal X}^*$, set
$R_\kappa({\bf x};{\cal X}^*)$ to be the $\kappa$-nearest neighbour distance
from $x$ to ${\cal X}^*$, i.e. the smallest $r \geq 0$ such that
${\cal X}^* \cap B^*(x;r)$ has at least $\kappa$ elements other than
${\bf x}$ itself. If ${\cal X}^*$ has $\kappa$ or fewer elements, set
$R_\kappa({\bf x};{\cal X}^*):= \infty$.
We say that $\xi$ {\em depends only on the $\kappa$ nearest neighbours} if
for all ${\bf x}$ and ${\cal X}^*$, writing ${\bf x} = (x,t)$ we have
$$
\xi({\bf x};{\cal X}^*) = \xi({\bf x} ; {\cal X}^* \cap B^*(x; R_\kappa({\bf x};{\cal X}^*)) ).
$$
We give local CLTs
for $H$ under two alternative sets of conditions: either (i)
when $H$ has finite range interactions, or (ii) when $H$ is induced,
according to the definition (\ref{induceh}),
by a functional $\xi({\bf x};{\cal X}^*)$ which depends only on the $\kappa$
nearest neighbours, for some fixed $\kappa$.
Given $K>0$ and $n \in \mathbb{N}$, define point processes
${\cal U}_{n,K}$ and
${\cal Z}_n$
in $\mathbb{R}^d$, and
point processes
$ {\cal U}^*_{n,K}$
and ${\cal Z}_n^*$
in $\mathbb{R}^d \times {\cal M}$, as follows.
Let ${\cal U}_{n,K}$ denote the point process consisting of
$n$ independent uniform random points $U_{1,K},\ldots, U_{n,K}$
in $B(K)$, and let
${\cal Z}_n$ be the point process consisting of $n$ independent
points $Z_1,\ldots,Z_n$ in $\mathbb{R}^d$, each with a $d$-dimensional standard normal
distribution (any other positive continuous density on
$\mathbb{R}^d$ would do just as well).
The corresponding marked point processes
are defined by
\begin{eqnarray*}
{\cal U}_{n,K}^* := \{(U_{1,K},T_1),\ldots,(U_{n,K},T_n)\} ;
\\
{\cal Z}_n^* := \{(Z_1,T_1),\ldots,(Z_n,T_n)\}.
\end{eqnarray*}
Define the limiting span
\begin{eqnarray}
h(H) := \liminf_{n \to \infty} h_{H({\cal Z}^*_n)}.
\label{limspandef}
\end{eqnarray}
\begin{theo}
\label{finranthm}
Suppose that either (i) $H$ has finite range interactions
and $h_{H({\cal Z}_n^*)} < \infty$ for some $n \in \mathbb{N}$,
or (ii) for some $\kappa \in \mathbb{N}$,
$H$ is induced by a functional $\xi({\bf x};{\cal X}^*)$
which depends only on the $\kappa$ nearest neighbours,
and $h_{H({\cal Z}^*_n)} < \infty$ for some $n \in \mathbb{N}$
with $n > \kappa$. Suppose also that $H_n({\cal X}_n^*)$ and
$H({\cal U}^*_{n,K})$ are integrable for all $n \in \mathbb{N}$ and $K > 0$.
Finally suppose that
\begin{eqnarray}
n^{-1/2} ( H_{n}({\cal X}^*_n) - \mathbb{E} H_{n}({\cal X}^*_n) ) \tod {\cal N} (0,\sigma^2)
~~~~ {\rm as} ~ n \to \infty.
\label{Hclteq}
\end{eqnarray}
Then
$\sigma >0$ and
$h(H) < \infty$ and
for any $b \in (0,\infty)$ with $h(H)| b$,
\begin{eqnarray}
\sup_{u \in \mathbb{R} }
\left\{ \left|
n^{1/2} P[H_n ({\cal X}^*_n) \in [u,u +b) ] -
\sigma^{-1} b \phi \left(\frac{u - \mathbb{E} H_n({\cal X}^*_n)}{ n^{1/2} \sigma}
\right)
\right|
\right\}
\to 0
~~~~
{\rm as} ~ n \to \infty.
\nonumber \\
\label{1205a}
\end{eqnarray}
\end{theo}
We prove Theorem \ref{finranthm} in Section \ref{secpfSG}.
Analogues to this result and to Theorem \ref{Gthm} should
also hold if one Poissonizes the number of points in the
sample, but we do not give details.
The corresponding result for unmarked point sets in
$\mathbb{R}^d$ goes as follows; we adapt our terminology to
this case in an obvious manner.
\begin{coro}
\label{finrancoro}
Suppose $H({\cal X})$ is $\mathbb{R}$-valued and defined for all
finite ${\cal X} \subset \mathbb{R}^d$. Suppose $H$ is translation
invariant, and set $H_n({\cal X}):= H(r_n^{-1} {\cal X})$.
Suppose that either (i) $H$ has finite range interactions
and $h_{H({\cal Z}_n)} < \infty$ for some $n \in \mathbb{N}$,
or (ii) for some $\kappa \in \mathbb{N}$, $H$ is induced by a functional
$\xi(x;{\cal X})$
which depends only on the $\kappa$ nearest neighbours,
and $h_{H({\cal Z}_n)}< \infty$ for some $n \in \mathbb{N}$ with $n > \kappa$.
Suppose also that $H_n({\cal X}_n)$ and
$H({\cal U}_{n,K})$ are integrable
for all $n \in \mathbb{N}$ and $K > 0$.
Finally suppose
\begin{eqnarray}
n^{-1/2} ( H_{n}({\cal X}_n) - \mathbb{E} H_{n}({\cal X}_n) ) \tod {\cal N} (0,\sigma^2)
~~~ {\rm as} ~ n \to \infty.
\label{0623a}
\end{eqnarray}
Then $\sigma >0$ and $h(H) < \infty$ and
for any $b \in (0,\infty)$ with $h(H) | b$,
\begin{eqnarray}
\sup_{u \in \mathbb{R} }
\left\{ \left|
n^{1/2} P[H_n ({\cal X}_n) \in [u,u +b) ] -
\sigma^{-1} b \phi \left(\frac{u - \mathbb{E} H_n({\cal X}_n)}{ n^{1/2} \sigma}
\right)
\right|
\right\}
\to 0
~~~~
{\rm as} ~ n \to \infty.
\nonumber \\
\label{0623b}
\end{eqnarray}
\end{coro}
Corollary \ref{finrancoro} is easily obtained
from Theorem \ref{finranthm}
by taking ${\cal M}$ to have just a single element, denoted $t_0$ say,
and identifying each element $(x,t_0)\in \mathbb{R}^d \times {\cal M}$
with the corresponding element $x$ of $\mathbb{R}^d$.
To apply Theorem \ref{finranthm} in examples, we need
to check condition (\ref{Hclteq}).
For some examples this is best done directly.
However, if we strengthen the other
hypotheses of Theorem \ref{finranthm}, we can obtain
(\ref{Hclteq}) from known results and so do not
need to include it as an extra hypothesis.
The next three theorems illustrate this.
As well as (\ref{Hclteq}), these results
give us the associated variance
convergence result
\begin{eqnarray}
\lim_{n \to \infty} n^{-1} {\rm Var} [ H_n({\cal X}_n^*) ] = \sigma^2.
\label{varconv}
\end{eqnarray}
In the next three theorems, we impose
some extra assumptions besides those of
Theorem \ref{finranthm}.
Writing ${\rm supp}(f)$ for the support of $f$,
we shall assume that ${\rm supp}(f)$ is compact, and
also that the $r_n$ satisfy
\begin{eqnarray}
\label{strongrn}
|r_n^{-d} -n| = O(n^{1/2}),
\end{eqnarray}
which implies (\ref{rhofin}) with $\rho =1$.
We also assume certain polynomial growth bounds; see
(\ref{polyxi}), (\ref{polybd}) and (\ref{polynbr}) below.
First consider the case
where $H = H^{(\xi)}$ is induced by a functional
$\xi({\bf x};{\cal X}^*)$ with finite range $r >0$.
For any set $A$, let ${\rm card}(A)$ denote the number of elements of
$A$.
\begin{theo}
\label{fin0theo}
Suppose $H = H^{(\xi)} $ is induced by a translation
invariant functional $\xi({\bf x};{\cal X}^*)$ having finite range $r$
and satisfying for some $\gamma >0$ the polynomial growth bound
\begin{eqnarray}
|\xi((x,t);{\cal X}^*) | \leq \gamma ({\rm card} ( {\cal X}^* \cap B^*(x;r)))^\gamma ~~~
\forall ~{\rm finite}~
{\cal X}^* \subset \mathbb{R}^d \times {\cal M}, ~ \forall ~ (x,t) \in {\cal X}^*.
\nonumber \\
\label{polyxi}
\end{eqnarray}
Suppose $h_{H({\cal Z}^*_n)} < \infty$ for some $n \in \mathbb{N}$,
and suppose ${\rm supp}(f)$ is
compact.
Finally, suppose that
(\ref{strongrn}) holds.
Then there exists $\sigma \in (0,\infty)$
such that (\ref{Hclteq}) and (\ref{varconv}) hold, and
$h(H) < \infty$ and
(\ref{1205a}) holds for all $b$ with $h(H)|b$.
\end{theo}
Now we turn to the general case of Condition (i)
in Theorem \ref{finranthm}, where $H$ has finite range interactions
but is not induced by a finite
range $\xi$. For this case we shall borrow some concepts from
continuum percolation.
For $\lambda >0$,
let ${\cal H}_\lambda$ denote a homogeneous Poisson point process
in $\mathbb{R}^d$ with intensity $\lambda$.
Let ${\cal H}_{\lambda}^*$ denote the same Poisson point process
with each point given an independent ${\cal M}$-valued mark
with the distribution $\mathbb{P}_{\cal M}$.
Let $\lambda_c$ be the critical
value for percolation in $d$ dimensions, that is, the
supremum of the set of all $\lambda >0$ such that
the component of the geometric (Gilbert) graph
${\cal G}({\cal H}_\lambda \cup\{0\},1)$
containing the origin is almost surely finite. It is known (see
e.g. \cite{Penbk})
that $0 < \lambda_c < \infty$ when $d \geq 2$ and $\lambda_c = \infty$
when $d =1$.
For nonempty ${\cal X} \subset \mathbb{R}^d$, write ${\rm diam}({\cal X})$ for
$\sup \{ |x-y|: x,y \in {\cal X}\}$.
For ${\cal X}^* \subset \mathbb{R}^d \times {\cal M}$, write ${\rm diam}({\cal X}^*)$
for ${\rm diam} (\pi({\cal X}^*))$,
where $\pi$ denotes the canonical projection from
$\mathbb{R}^d \times {\cal M} $ onto $\mathbb{R}^d$.
\begin{theo}
\label{fintheo}
Suppose $H({\cal X}^*)$ is a measurable $\mathbb{R}$-valued function defined for
all finite ${\cal X}^* \subset \mathbb{R}^d \times {\cal M}$, and is translation invariant.
Suppose ${\rm supp}(f)$ is compact. Suppose for some $\tau >0$
that the finite range interaction condition
(\ref{finraneq}) holds, and suppose $f$ and $ \tau$ satisfy
the subcriticality condition
\begin{eqnarray}
\tau^d f_{{\rm max}} < \lambda_c.
\label{subcrit}
\end{eqnarray}
Assume $(r_n)_{n \geq 1}$ satisfies (\ref{strongrn}),
and suppose also that
$ h_{H({\cal Z}^*_n)} <\infty$ for some $n \in \mathbb{N}$,
and that there exists a constant $\gamma >0$
such that for all finite non-empty ${\cal X}^* \subset \mathbb{R}^d \times {\cal M}$ we have
\begin{eqnarray}
H({\cal X}^*) \leq \gamma ({\rm diam} ({\cal X}^*) + {\rm card} ({\cal X}^*) )^\gamma.
\label{polybd}
\end{eqnarray}
Then there exists $\sigma \in (0,\infty)$ such that
(\ref{Hclteq}) and (\ref{varconv}) hold, and $h(H) < \infty$ and if
$b \in (0,\infty)$ with $h(H) |b$,
then (\ref{1205a}) holds.
\end{theo}
Now we turn to condition (ii) in Theorem \ref{finranthm}.
Following
\cite{PYmfld},
we say that a closed region $A \subset \mathbb{R}^d$ is
a {\em $d$-dimensional $C^1$ submanifold-with-boundary of
$\mathbb{R}^d$} if it has a differentiable boundary in the following
sense: for every $x$ in the boundary $\partial A$ of $A$,
there is an open $U \subset \mathbb{R}^d$, and
a continuously differentiable injection $g$ from $U$ to $\mathbb{R}^d$,
such that $0 \in U$ and $g(0)= x$ and
$g(U \cap ([0,\infty) \times \mathbb{R}^{d-1})) = g(U) \cap A$.
\begin{theo}
\label{nnlclt}
Let $\kappa \in \mathbb{N}$. Suppose
$H = H^{(\xi)}$ is induced by a $\xi$ which depends only on the
$\kappa$ nearest neighbours, and
for some $\gamma \in (0,\infty)$ suppose we have for all
$({\bf x},{\cal X}^*)$ that
\begin{eqnarray}
|\xi ({\bf x};{\cal X}^*) | \leq \gamma (1+R_\kappa({\bf x};{\cal X}^*))^\gamma.
\label{polynbr}
\end{eqnarray}
Suppose also that ${\rm supp}(f)$ is either a compact convex region
in $\mathbb{R}^d$ or a compact $d$-dimensional $C^1$ submanifold-with-boundary of
$\mathbb{R}^d$, and suppose $f$ is bounded away from zero on ${\rm supp}(f)$.
Finally suppose that the sequence $(r_n)_{n \geq 1}$
satisfies (\ref{strongrn}), and that
$h_{H({\cal Z}^*_n)} < \infty$ for some $n \in \mathbb{N}$ with $n > \kappa$.
Then there exists $\sigma \in (0,\infty)$ such that
(\ref{Hclteq}) and (\ref{varconv}) hold, and
$h(H) < \infty$ and if $b \in (0,\infty)$ with
$h(H) |b$ then (\ref{1205a}) also holds.
\end{theo}
We prove Theorems \ref{fin0theo}, \ref{fintheo}
and \ref{nnlclt} in Section \ref{secusePEJP}.
In proving each of these results,
we apply Theorem \ref{finranthm},
and check the CLT condition (\ref{Hclteq})
using a general CLT from \cite{PenEJP}, stated below as
Theorem \ref{lemPEJP}.
The conclusion that $\sigma >0$ in Theorems \ref{finranthm}--\ref{nnlclt}
and Corollary \ref{finrancoro}
is noteworthy because the result from \cite{PenEJP}
on its own does not guarantee this.
Our approach to showing $\sigma >0$ here is
related to that given in \cite{AB} (and elsewhere)
but is more generic.
A different approach to providing generic variance lower
bounds was used in \cite{PY1} and \cite{BY} but
is less well suited to the present setting.
\section{Applications}
\eqco \thco \prco \laco \coco \cjco \deco
This section contains discussion of some examples of concrete
models in stochastic geometry, to which the
general local central limit theorems presented
in Section \ref{secstogeo}
are applicable.
Further
examples
where the conditions for these
general theorems
can be verified
are discussed in \cite{PenEJP,PY1,PY2,PYLLN}.
\subsection{Further quantities associated with random geometric graphs}
Suppose the graph ${\cal G}_n$ is as in Section \ref{secRGG}.
We assume here that (\ref{rhofin}) holds with
$\rho >0$. Theorem \ref{finranthm}
enables us to extend the case $\rho >0$ of
Theorem \ref{Gthm} to non-uniform $f$.
It also yields local CLTs
for some graph quantities not covered by
Theorem \ref{Gthm}; we now give some examples. \\
{{\bf e}m Number of components for ${\cal G}n$}. This quantity can be written in
the form $H_n({\cal X}_n)$, where $H({\cal X})$ is the number of components of
the geometric graph ${\cal G}({\cal X},1)$ (which clearly has finite
range interactions).
In the the thermodynamic limit,
this quantity satisfies the CLT {\bf e}q{0623a} (see Theorem 13.26 of
\cite{Penbk}). Therefore,
Corollary \ref{finrancoro}
is applicable here
and shows that it satisfies the local CLT {\bf e}q{0623b}. \\
{{\bf e}m Number of components for ${\cal G}n$ isomorphic to
a given feasible graph ${\cal G}amma$}.
This quantity,
denoted $G^*_n$ in Section \ref{secRGG},
can be written in the form
$H_n({\cal X}_n)$, with $H({\cal X})$ the number of
components of ${\cal G}({\cal X},1)$ isomorphic to ${\cal G}amma$.
Clearly, this $H$ has finite range interactions since
{\bf e}q{finraneq} holds for $\tau =2$.
Also, it satisfies {\bf e}q{0623a} by Theorem 3.14 of \cite{Penbk}.
Therefore we can apply Corollary \ref{finrancoro} to deduce {\bf e}q{0623b}
in this case. \\
{{\bf e}m Independence number.}
The independence number of a finite graph is the maximal number
$k$ such that the graph contains a set of $k$ vertices,
no two of which are adjacent.
Clearly this quantity is
the sum of the independence numbers of the graph's components,
and therefore if for ${\cal X} \subset \mathbb{R}^d$ we set $H({\cal X})$ to be the independence
number of ${\cal G}({\cal X},\tau)$ (also known as the {{\bf e}m off-line packing
number} since it is the maximum number of balls of radius $\tau/2$
that can be packed
centred at points of ${\cal X}$) then
$H$ satisfies the finite range interactions condition
{\bf e}q{finraneq} with $r=2$.
Therefore
we can apply Theorem \ref{fintheo}
to derive a local CLT for the independence number of ${\cal G}n$, as follows.
\begin{equation}gin{theo}
Let $\tau >0$ and suppose
{\bf e}q{subcrit} holds. Suppose $r_n$ satisfies
{\bf e}q{strongrn}.
For ${\cal X} \subset \mathbb{R}^d$, let $H({\cal X})$ be the independence
number of ${\cal G}({\cal X},\tau)$.
Then there exists $\sigma {\bf i}n (0,{\bf i}nfty)$ such that
{\bf e}q{0623a} holds, and if $b {\bf i}n \mathbb{N}$ then {\bf e}q{0623b} holds.
{\bf e}nd{theo}
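Computing the independence number exactly is NP-hard in general, but for small point sets it can be brute-forced. A minimal sketch (ours; exponential time, for illustration only):

```python
import math
from itertools import combinations

def independence_number(points, tau=1.0):
    """Independence number of the geometric graph G(points, tau),
    by brute force over vertex subsets (small inputs only)."""
    n = len(points)

    def adjacent(i, j):
        return math.dist(points[i], points[j]) <= tau

    # search subset sizes from largest to smallest
    for k in range(n, 0, -1):
        for subset in combinations(range(n), k):
            if all(not adjacent(i, j) for i, j in combinations(subset, 2)):
                return k
    return 0
```

For three collinear points at mutual distances at most $\tau$ the graph is complete and the independence number is 1; moving one point far away raises it to 2.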
\subsection{Germ-grain models}
Consider a coverage process in which each point $X_i$ has an associated
mark $T_i$, the $T_i$ (defined for $i \geq 1$) being i.i.d. nonnegative
random variables with a distribution having
bounded support (i.e., with
$P[T_i \leq K] =1$ for some finite $K$).
Define the random coverage process
\begin{equation}a
{\cal X}i_n := \cup_{i=1}^n B(r_n^{-1}X_i; T_i).
\label{0130a}
{\bf e}ea
For $U$ a finite union of
convex sets in $\mathbb{R}^d$,
let $|U|$
denote the volume
of $U$ (i.e. its Lebesgue measure) and let
$|\partial U|$
denote the surface area
of $U$ (i.e. the $(d-1)$-dimensional Hausdorff
measure of its boundary).
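The volume of such a union of balls has no simple closed form in general, but it is easy to estimate by Monte Carlo. A sketch for $d=2$ (ours; the function name, bounding-box construction and sample size are assumptions, not part of the model):

```python
import math
import random

def coverage_volume_mc(centres, radii, trials=20000, seed=0):
    """Monte Carlo estimate of the area of a union of discs B(c_i; t_i)
    in d=2: sample uniform points in a bounding box, count those covered."""
    rng = random.Random(seed)
    rmax = max(radii)
    x0 = min(c[0] for c in centres) - rmax
    x1 = max(c[0] for c in centres) + rmax
    y0 = min(c[1] for c in centres) - rmax
    y1 = max(c[1] for c in centres) + rmax
    box_area = (x1 - x0) * (y1 - y0)
    hits = 0
    for _ in range(trials):
        p = (rng.uniform(x0, x1), rng.uniform(y0, y1))
        if any(math.dist(p, c) <= t for c, t in zip(centres, radii)):
            hits += 1
    return box_area * hits / trials
```

For a single disc of radius 1 the estimate is close to $\pi$; for two disjoint unit discs it is close to $2\pi$.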
\begin{equation}gin{theo}
Under the above assumptions, if {\bf e}q{strongrn}
holds then
there exist $\sigma >0$ and $\tilde{\sigma} >0$ such that
$n^{-1/2} (|{\cal X}i_n| - \mathbb{E} |{\cal X}i_n|) \tod {\cal N} (0,\sigma^2)$
and
$n^{-1/2} (|\partial {\cal X}i_n| - \mathbb{E} |\partial {\cal X}i_n|) \tod
{\cal N} (0,\tilde{\sigma}^2)$, and moreover for any $b {\bf i}n (0,{\bf i}nfty) $,
\begin{equation}a
\sup_{u {\bf i}n \mathbb{R} }
\left\{ \left|
n^{1/2} P[|{\cal X}i_n| {\bf i}n [u,u +b) ] -
\sigma^{-1} b \phi \left(\frac{u - \mathbb{E} |{\cal X}i_n|}{ n^{1/2} \sigma}
\right)
\right|
\right\}
\to 0
~~~~
{\rm as} ~ n \to {\bf i}nfty.
\nonumber \\
\label{0623c}
{\bf e}ea
and
\begin{equation}a
\sup_{u {\bf i}n \mathbb{R} }
\left\{ \left|
n^{1/2} P[|\partial {\cal X}i_n| {\bf i}n [u,u +b) ] -
\tilde{\sigma}^{-1} b \phi \left(\frac{u - \mathbb{E} |\partial {\cal X}i_n|}{ n^{1/2}
\tilde{\sigma}}
\right)
\right|
\right\}
\to 0
~~~~
{\rm as} ~ n \to {\bf i}nfty.
\nonumber \\
\label{0623d}
{\bf e}ea
{\bf e}nd{theo}
{{\bf e}m Proof.}
The volume
$|{\cal X}i_n|$
can be viewed as a functional $H_n({\cal X}_n^*)$, where
$H({\cal X}) = H^{({\bf x}i)}({\cal X}^*)$ with ${\bf x}i((x,t);{\cal X}^*)$
given by the volume of the part of the ball
centred at $x$, with radius given by the associated mark $t$,
that is not covered by the corresponding
ball of any other point $x' {\bf i}n {\cal X}$
with $x'$ preceding $x$ in the lexicographic ordering.
Since we assume
the support of the distribution of the $T_i$ is bounded,
this ${\bf x}i$ has finite range $r=2K$.
Moreover, it satisfies
the polynomial growth bound {\bf e}q{polyxi}
so
by Theorem \ref{fin0theo}
we get the CLT {\bf e}q{Hclteq} and local CLT
{\bf e}q{1205a} for any $b >0$
(in this example $h(H) =0$).
Thus we have {\bf e}q{0623c}.
Turning to the surface area $|\partial {\cal X}i_n|$,
this can also be
viewed as a functional $H_n({\cal X}_n^*)$ for a different $H = H^{({\bf x}i)}$,
this time taking ${\bf x}i({\bf x};{\cal X}^*)$ to be the
uncovered surface area of the ball at $x$,
which again has range $r=2K$ and satisfies
{\bf e}q{polyxi}.
Hence by Theorem \ref{fin0theo}
we get the CLT {\bf e}q{Hclteq} and local CLT
{\bf e}q{1205a} for any $b >0$ for this choice of $H$
(in this example, again $h(H) =0$).
Thus we have {\bf e}q{0623d}.
$\qed$ \\
{{\bf e}m Remark.}
The preceding argument still works if
the independent balls of random radius in the preceding discussion are
replaced by independent copies of a random compact shape that
is almost surely contained in the ball $B(K)$ for some $K$
(cf. Section 6.1 of \cite{PenEJP}). \\
{{\bf e}m Other functionals for the germ-grain model.}
When $f {\bf e}quiv f_U$, the scaled point process $r_n^{-1/d} {\cal X}_n$
can be viewed as a uniform point process in a window of side $r_n^{-1/d}$.
CLTs for a large class of other
functionals on germ-grain models in such a window are considered in
\cite{HM}, for the Poissonised point process with a Poisson distributed
number of points. Since the Poissonised version of Theorems
\ref{finranthm} and \ref{fin0theo}
should also hold, it should be possible to
derive local CLTs for many of the quantities considered in \cite{HM},
at least in the case where the grains (i.e., the balls or other shapes
attached to the random points) are of
uniformly bounded diameter.
\subsection{Random sequential adsorption (RSA)}
RSA (on-line packing) is a model of irreversible deposition
in which particles of fixed
finite size arrive sequentially at random locations in
an initially empty region $A$ of a $d$-dimensional
space (typically $d=1$ or $d=2$),
and each successive particle is accepted if it does
not overlap any previously accepted particle.
The region $A$ is taken to be compact and convex.
The locations of successive particles are independent
and governed by some density $f$ on $A$.
In the present setting, we take the mark space ${\cal M}$
to be $[0,1]$ with $\mathbb{P}_{\cal M}$
the uniform distribution. Each
point ${\bf x} = (x,t)$ of ${\cal X}^*$
represents an incoming particle with arrival time $t$.
The marks determine the order in which
particles arrive, and two particles at ${\bf x} =(x,t)$ and ${\bf y} =(y,u)$
are said to
overlap if $|x-y| \leq 1$.
Let $H({\cal X}^*)$ denote the number of accepted particles.
This choice of
$H$ clearly has finite range interactions ({\bf e}q{finraneq} holds for $\tau =2$).
Then $H_n({\cal X}^*_n)$ represents the number of
accepted particles for the re-scaled marked point process $r^{-1}_n {\cal X}^*_n$;
note that the density $f$, and hence the region $A$ on which
the particles are deposited, does not vary with $n$.
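A minimal simulation of the RSA acceptance rule (ours, for illustration; here the arrival order is simply the order of generation, which is equivalent to attaching i.i.d. uniform marks):

```python
import math
import random

def rsa_accepted(n, r, d=2, seed=0):
    """RSA in [0,1]^d: n particles arrive one by one at uniform random
    locations; a particle is accepted iff its centre is at distance > r
    from the centre of every previously accepted particle.
    Returns the number of accepted particles."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n):
        x = tuple(rng.random() for _ in range(d))
        if all(math.dist(x, y) > r for y in accepted):
            accepted.append(x)
    return len(accepted)
```

With $r$ larger than the diameter of the region only the first particle is accepted; with $r=0$ every particle is accepted (almost surely all locations are distinct).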
At least for $r_n = n^{-1/d}$,
the central limit theorem for $H_n({\cal X}_n)$
is known to hold; see \cite{PY2} for the case when
$A= [0,1]^d$ and $f {\bf e}quiv f_U$
and \cite{BY} for the extension to the non-uniform case on arbitrary
compact convex $A$ (note that these results do not require the
sub-criticality condition {\bf e}q{subcrit} to be satisfied).
Thus, the $H$ under consideration here satisfies the
condition {\bf e}q{Hclteq}.
Therefore we can apply Theorem \ref{finranthm}
to obtain a local CLT
for the number of accepted particles in this model.
\begin{equation}gin{theo}
Suppose $f$ has compact convex support and is bounded away
from zero and infinity on its support.
Suppose $r_n = n^{-1/d}$, and let $Z_n = H_n({\cal X}_n^*)$ be the
number of accepted particles in the rescaled RSA model described above,
that is, the number of particles accepted when
RSA is performed on ${\cal X}_n$ with distance parameter
$r_n = n^{-1/d}$.
Then there is a constant $\sigma {\bf i}n (0,{\bf i}nfty)$
such that {\bf e}q{normlim1a} holds
and for $b=1$ and $c_n = n^{1/2}$, {\bf e}q{1102c2} holds.
{\bf e}nd{theo}
It is likely that in the preceding result the
condition
$r_n = n^{-1/d}$ can be relaxed to
{\bf e}q{rhofin} holding with $\rho > 0$.
We have not checked the details.
In the {{\bf e}m infinite input} version of RSA with range of interaction $r$,
particles continue
to arrive until the region $A$ is saturated, and the total number
of accepted particles is a random variable with its
distribution determined by $r$.
A central limit theorem
for the (random) total number of accepted particles
(in the limit $r \to 0$) is known to hold,
at least for $f {\bf e}quiv f_U$; see \cite{SPY}.
It would be interesting to know if a corresponding local central limit
theorem holds here as well.
\subsection{Nearest neighbour functionals}
Many functionals have arisen in the applied literature
which can be expressed as sums of functionals of
$\kappa$-nearest neighbours,
for such problems as multidimensional goodness-of-fit tests \cite{BB,BPY},
multidimensional two-sample tests \cite{Henze},
entropy estimation of probability
distributions \cite{LPS}, dimension estimation \cite{LB},
and nonparametric regression \cite{EJ}.
Functionals considered
include: sums of power-weighted nearest neighbour distances,
sums of logarithmic functions of the nearest-neighbour distances,
number of nearest-neighbours from the same sample
in a two-sample problem, and others.
Central limit theorems have been obtained explicitly for
some of these examples \cite{BB,Henze,BPY}
and in other cases
they can often
be derived from more general results
\cite{AB,PenEJP,PY1,Chatt}. Thus, for many of these examples
it should be possible
to check the conditions of
Theorem \ref{finranthm} (case (ii)).
We consider just one simple example where Theorem \ref{nnlclt}
is applicable. Suppose for some fixed ${\alpha}pha >0$
that $H({\cal X})$ is the
sum of the ${\alpha}pha$-power-weighted
nearest neighbour distances in ${\cal X}$ (for ${\alpha}pha =1$ this
is known as the
total length of the directed nearest neighbour graph on ${\cal X}$).
That is, suppose
$H({\cal X}) = H^{({\bf x}i)}({\cal X})$ with ${\bf x}i(x;{\cal X})$ given by
$\min \{|y-x|^{\alpha}pha:y {\bf i}n {\cal X} \setminus \{x\}\}$.
Then $H_n({\cal X}) = r_n^{-{\alpha}pha} H({\cal X})$, and ${\bf x}i$ clearly satisfies
{\bf e}q{polynbr} for some $\gamma$, so
provided $f$ is supported by a compact convex region
in $\mathbb{R}^d$ or by a compact $d$-dimensional submanifold-with-boundary
of $\mathbb{R}^d$, and provided $f$ is bounded away from zero on its support,
Theorem \ref{nnlclt}
is applicable with $\kappa=1$. Hence in this case
there exists $\sigma {\bf i}n (0,{\bf i}nfty)$ such that
{\bf e}q{Hclteq} and (for any $b {\bf i}n (0,{\bf i}nfty)$) {\bf e}q{1205a} are valid.
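The functional in this last example is easy to compute directly. A sketch (ours, illustration only) of the sum of $\alpha$-power-weighted nearest neighbour distances:

```python
import math

def nn_power_sum(points, alpha=1.0):
    """H(X) = sum over x in X of (distance from x to its nearest
    neighbour in X minus {x})^alpha; for alpha = 1 this is the total
    edge length of the directed nearest neighbour graph on X."""
    total = 0.0
    for i, x in enumerate(points):
        d = min(math.dist(x, y) for j, y in enumerate(points) if j != i)
        total += d ** alpha
    return total
```

For points at $0$, $1$, $3$ on a line the nearest-neighbour distances are $1,1,2$, so the sum is $4$ for $\alpha=1$ and $6$ for $\alpha=2$.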
\section{Proof of Theorem \ref{genthm2}}
\label{secpfgen}
{\bf e}qco \thco \prco \laco \coco \cjco {\delta}co
Let $V, V_1,V_2, V_3,\ldots$ be independent
identically distributed
random variables.
Define $\sigma_V := \sqrt{{\rm Var}(V)} {\bf i}n [0,{\bf i}nfty]$.
In the case $\sigma_V=0$, Theorem \ref{genthm2}
is trivial, so from now on in this section, we assume $\sigma_V > 0$.
Let $b, c_1, c_2, c_3, \ldots$ be positive constants with $h_V | b$
and $c_n \sim n^{1/2}$
as $n \to {\bf i}nfty$.
We prove Theorem \ref{genthm2} first
in the special case where $Z_n = S_n$, then
in the case where $Z_n = Y_n + S_n$,
and then in full generality. Before starting we recall
a fact about characteristic functions.
\begin{equation}gin{lemm}
\label{Vakhlem}
If $\sigma_V = {\bf i}nfty$ then for all $t {\bf i}n \mathbb{R} \setminus \{0\}$,
as $n \to {\bf i}nfty$
$$
\mathbb{E} \left[ {\bf e}xp \left({\bf i}mag t
n^{-1/2} \sum_{j=1}^n (V_j- \mathbb{E}[V])
\right) \right] \to 0.
$$
{\bf e}nd{lemm}
{{\bf e}m Proof.} See for example Section 3, and in particular the final
display, of \cite{Vakh}.
$\qed$
\begin{equation}gin{lemm}
\label{classiclem}
Suppose $S_n {\bf e}qd \sum_{j=1}^n V_j$ and $\sigma_V < {\bf i}nfty$.
Then as $n \to {\bf i}nfty$,
\begin{equation}a
\sup_{u {\bf i}n \mathbb{R} }
\left\{ \left|
c_n P[S_n {\bf i}n [u,u +b) ] -
\sigma_V^{-1} b \phi \left(\frac{u - \mathbb{E} S_n}{ c_n
\sigma_V
}
\right)
\right|
\right\}
\to 0
\label{0203a}
{\bf e}ea
{\bf e}nd{lemm}
{{\bf e}m Proof.}
First consider
the special case with $c_n = n^{1/2}$.
In this case, {\bf e}q{0203a} holds by
the classical local central limit theorem
for sums of i.i.d. non-lattice variables
with finite second moment in the case where $h_V=0$
(see
page 232 of \cite{Breiman}, or Theorem 2.5.4 of
\cite{Durr}),
and by the local central limit theorem for sums of i.i.d. lattice
variables in the case where $h_V > 0$ and $b/h_V {\bf i}n \mathbb{Z}$
(see Theorem XV.5.3 of \cite{Feller}, or Theorem 2.5.2 of
\cite{Durr}).
To extend this to the general case with $c_n \sim n^{1/2}$,
observe first that by the special case considered above,
$n^{1/2} P[S_n {\bf i}n [u,u+b)]$
remains bounded uniformly in $u$ and $n$, and hence
\begin{equation}a
\sup_{u {\bf i}n \mathbb{R}} \{
| (n^{1/2} - c_n ) P[S_n {\bf i}n [u,u+b) ] | \}
= \sup_{u {\bf i}n \mathbb{R}} \left\{
n^{1/2}
\left| 1 - \frac{ c_n }{ n^{1/2} } \right|
P[S_n {\bf i}n [u,u+b) ] \right\}
\nonumber \\
\to 0. ~~~~~~~~~~
\label{1207a}
{\bf e}ea
Also, for any $K > 1$,
\begin{equation}a
\sup_{|x| \leq K n^{1/2}}
\left\{ \left|
\phi \left(\frac{x}{ n^{1/2} } \right)
- \phi \left(\frac{x}{ c_n } \right)
\right|
\right\}
\leq (2 \pi e)^{-1/2}
\sup_{|x| \leq K n^{1/2}}
\left\{
\left|
\left(\frac{x}{ n^{1/2} } \right)
- \left(\frac{x}{c_n } \right)
\right|
\right\}
\nonumber \\
\leq (2 \pi e)^{-1/2}
\left(
\frac{K n^{1/2}}{ n^{1/2} }
\right)
\left| 1 -
\frac{ n^{1/2} }{ c_n } \right|
\to 0.
~~~~~~~~~~~
\label{1207b}
{\bf e}ea
Also, for large enough $n$,
\begin{equation}gin{eqnarray*}
\sup_{|x| \geq K n^{1/2}} \max
\left( \phi \left( \frac{x}{c_n } \right),
\phi \left( \frac{x}{n^{1/2} } \right) \right) \leq \phi(K-1)
{\bf e}ean
and since $K$ can be taken arbitrarily large, this,
combined with {\bf e}q{1207b}, shows that
\begin{equation}gin{eqnarray*}
\sup_{x {\bf i}n \mathbb{R} }
\left\{ \left| \phi \left(\frac{x}{ n^{1/2} } \right)
- \phi \left(\frac{x}{ c_n } \right) \right| \right\} \to 0.
{\bf e}ean
Combined with {\bf e}q{1207a}, this shows that we can deduce
{\bf e}q{0203a}
for general $c_n $ satisfying $c_n \sim n^{1/2}$
from the special case with $c_n = n^{1/2}$ which was
established earlier. $\qed$ \\
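As a numerical illustration of Lemma \ref{classiclem} (ours, not part of the proof): for Bernoulli($p$) summands the lattice span is $h_V=1$, so with $b=1$ and $c_n=n^{1/2}$ the quantity $n^{1/2}P[S_n=u]$ should approach $\sigma_V^{-1}\phi((u-np)/(n^{1/2}\sigma_V))$ uniformly in $u$ as $n$ grows:

```python
import math

def normal_pdf(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def local_clt_gap(n, p):
    """Max over integer u of |n^{1/2} P[S_n = u] -
    sigma_V^{-1} phi((u - n p) / (n^{1/2} sigma_V))|
    for S_n ~ Bin(n, p), sigma_V = sqrt(p(1-p)).
    Here b = 1 = h_V, c_n = n^{1/2}, and P[S_n in [u, u+1)] = P[S_n = u]."""
    sigma_v = math.sqrt(p * (1.0 - p))
    gap = 0.0
    for u in range(n + 1):
        pmf = math.comb(n, u) * p ** u * (1.0 - p) ** (n - u)
        approx = normal_pdf((u - n * p) / (math.sqrt(n) * sigma_v)) / sigma_v
        gap = max(gap, abs(math.sqrt(n) * pmf - approx))
    return gap
```

The gap shrinks roughly like $n^{-1/2}$, consistent with the classical Edgeworth correction for lattice sums.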
\begin{equation}gin{lemm}
\label{lemZXY}
Theorem \ref{genthm2} holds in the special case where $Z_n = Y_n + S_n$.
{\bf e}nd{lemm}
{{\bf e}m Proof.}
Assume, along with the hypotheses of Theorem \ref{genthm2}, that
$Z_n = Y_n + S_n$. Considering characteristic functions,
by {\bf e}q{normlim1a} we have for $t {\bf i}n \mathbb{R}$ that
\begin{equation}a
\mathbb{E} \left[ {\bf e}xp \left(
{\bf i}mag t n^{-1/2} (Y_n- \mathbb{E} Y_n)
\right) \right] \mathbb{E} \left[ {\bf e}xp \left(
{\bf i}mag t n^{-1/2} (S_n- \mathbb{E} S_n) \right) \right]
\nonumber \\
\to {\bf e}xp ( - \frac{1}{2} t^2 \sigma^2).
~~~~~~~
\label{cfeq}
{\bf e}ea
If $\sigma_V = {\bf i}nfty$ then
by Lemma \ref{Vakhlem}, the second factor
in the left hand side of {\bf e}q{cfeq}
tends to zero, giving a contradiction. Hence we may assume $\sigma_V < {\bf i}nfty$
from now on.
By the Central Limit Theorem,
\begin{equation}a
n^{-1/2} ( S_n - \mathbb{E} S_n )
\tod N(0, \sigma_V^2).
\label{normlim2a}
{\bf e}ea
By {\bf e}q{cfeq} and {\bf e}q{normlim2a}, $\sigma_V^2 \leq \sigma^2$ and
setting $\sigma_Y^2 := \sigma^2 - \sigma_V^2 \geq 0$, we have that
$ n^{-1/2} (Y_n- \mathbb{E} Y_n) $
is asymptotically $ {\cal N}(0,\sigma_Y^2 ). $ Hence,
\begin{equation}a
c_n^{-1} (Y_n- \mathbb{E} Y_n) \tod {\cal N}(0,\sigma_Y^2 ).
\label{normlimX}
{\bf e}ea
That is, {\bf e}q{0110a} holds.
Let $u {\bf i}n \mathbb{R}$ and set
\begin{equation}a
t := t(u,n) := c_n^{-1}(u - \mathbb{E} Z_n).
\label{0204a}
{\bf e}ea
By independence of $Y_n$ and $S_n$,
\begin{equation}gin{eqnarray*}
P[Z_n {\bf i}n [u,u +b) ]
=
P [ c_n^{-1} ( Z_n - \mathbb{E}[Z_n]) {\bf i}n c_n^{-1}[u-\mathbb{E} Z_n,u+b - \mathbb{E} Z_n) ]
\\
= {\bf i}nt_{-{\bf i}nfty}^{{\bf i}nfty} P \left[ \frac{Y_n - \mathbb{E} Y_n }{c_n} {\bf i}n dx
\right]
P\left[ \frac{S_n - \mathbb{E} S_n }{c_n} {\bf i}n
c_n^{-1}[u-\mathbb{E} Z_n,u+b - \mathbb{E} Z_n) -x
\right]
{\bf e}ean
so that
\begin{equation}gin{eqnarray*}
c_n
P[Z_n {\bf i}n [u,u +b) ]
= {\bf i}nt_{-{\bf i}nfty}^{{\bf i}nfty} P \left[ \frac{Y_n - \mathbb{E} Y_n }{c_n} {\bf i}n dx
\right]
\\
\times
\left( c_n P\left[ S_n - \mathbb{E} S_n {\bf i}n
[u-\mathbb{E} Z_n - x c_n,u - \mathbb{E} Z_n -x c_n +b)
\right]
\right)
\\
= {\bf i}nt_{-{\bf i}nfty}^{{\bf i}nfty} P \left[ \frac{Y_n - \mathbb{E} Y_n }{ c_n} {\bf i}n dx
\right]
\left( c_n P\left[ S_n - \mathbb{E} S_n {\bf i}n
[ (t - x ) c_n,(t -x) c_n +b)
\right]
\right) .
{\bf e}ean
By Lemma \ref{classiclem},
\begin{equation}gin{eqnarray*}
c_n P\left[ S_n - \mathbb{E} S_n {\bf i}n [ y c_n, y c_n +b) \right] =
\frac{b}{\sigma_V} \phi \left( \frac{ y}{\sigma_V} \right) + g_n(y)
{\bf e}ean
where
\begin{equation}a
\sup_{y {\bf i}n \mathbb{R}} | g_n(y) | \to 0 ~~~~~ {\rm as}~~~ n \to {\bf i}nfty.
\label{1102b}
{\bf e}ea
Hence,
\begin{equation}gin{eqnarray*}
c_n P[Z_n {\bf i}n [u,u +b) ] = \mathbb{E} \left[
\frac{b}{\sigma_V} \phi \left( \frac{ t- c_n^{-1} (Y_n-\mathbb{E} Y_n) }{
\sigma_V } \right) +
g_n \left( t- c_n^{-1}(Y_n - \mathbb{E} Y_n) \right)
\right],
{\bf e}ean
so by {\bf e}q{1102b}, to prove {\bf e}q{1102c2},
it suffices to prove
\begin{equation}a
\sup_{u {\bf i}n \mathbb{R} }
\left\{
\left|
\mathbb{E} \left[ \sigma_V^{-1} \phi \left( \frac{t(u,n)-
c_n^{-1} (Y_n - \mathbb{E} Y_n )}{ \sigma_V} \right) \right]
-
\sigma^{-1}\phi \left(\frac{u - \mathbb{E} Z_n}{ c_n \sigma}
\right)
\right|
\right\}
\to 0. \nonumber \\
\label{1102d}
{\bf e}ea
Suppose this fails. Then there is a strictly increasing sequence of
natural numbers $(n(m), m \geq 1)$
and a
sequence of real numbers $(u_m,m \geq 1)$
such that with $t_m := t(u_m,n(m)),$ we have
\begin{equation}a
\liminf_{m \to {\bf i}nfty}
\left|
\mathbb{E} \left[ \sigma_V^{-1} \phi \left( \frac{t_m -
c_{n(m)}^{-1} (Y_{n(m)} - \mathbb{E} Y_{n(m)} )}{ \sigma_V} \right) \right]
-
\sigma^{-1}\phi \left(\frac{u_m - \mathbb{E} Z_{n(m)}}{ c_{n(m)} \sigma}
\right)
\right|
> 0.
\nonumber \\
\label{1102e}
{\bf e}ea
By taking a subsequence if necessary,
we may assume without loss of generality either that
$t_m \to t $ for some $t {\bf i}n \mathbb{R}$,
or that $|t_m | \to {\bf i}nfty$ as $m \to {\bf i}nfty$.
Consider first the latter case. If
$|t_m | \to {\bf i}nfty$ as $m \to {\bf i}nfty$, then
by {\bf e}q{normlimX},
\begin{equation}gin{eqnarray*}
P[
|t_m - c_{n(m)}^{-1}(Y_{n(m)}- \mathbb{E} Y_{n(m)}) | \leq |t_m |/2]
\leq P [ | c_{n(m)}^{-1}(Y_{n(m)}- \mathbb{E} Y_{n(m)}) | \geq |t_m |/2]
\\
\to 0,
{\bf e}ean
and hence
$$
\mathbb{E} \left[ \sigma_V^{-1} \phi \left( \frac{t_m -
c_{n(m)}^{-1} (Y_{n(m)} - \mathbb{E} Y_{n(m)} )}{ \sigma_V} \right) \right]
\to 0.
$$
Since $ c_{n(m)}^{-1}(u_m - \mathbb{E} Z_{n(m) })$ is
equal to $t_{m}$ by {\bf e}q{0204a}, we also have under this assumption
that
$\sigma^{-1}\phi \left(\frac{u_m - \mathbb{E} Z_{n(m)}}{ c_{n(m)} \sigma}
\right)$ tends to zero, and thus we obtain
a contradiction of {\bf e}q{1102e}.
In the case where $t_m \to t$ for some finite $t$, we have
by {\bf e}q{normlimX} that
$ t_m - c_{n(m)}^{-1}(Y_{n(m)}- \mathbb{E} Y_{n(m)}) $ converges in distribution
to $t-W_1$, where
$W_1 \sim {\cal N}(0, \sigma_Y^2 )$. Hence
as $m \to {\bf i}nfty$,
\begin{equation}gin{eqnarray*}
\mathbb{E} \left[ \sigma_V^{-1} \phi \left( \frac{t_m - c_{n(m)}^{-1}(Y_{n(m)} - \mathbb{E}
Y_{n(m)} ) }{ \sigma_V } \right)
\right]
\to \sigma_V^{-1} \mathbb{E} \phi((t-W_1)/ \sigma_V)
\\
= \mathbb{E} f_{W_2} (t - W_1),
{\bf e}ean
where $W_2 \sim N(0,\sigma_V^2)$, with
probability density function
$f_{W_2}(x) : = \sigma_V^{-1} \phi(x/\sigma_V)$.
Taking $W_1$ and $W_2$ to be
independent,
$\mathbb{E} f_{W_2} (t - W_1)$ is the convolution formula for the
probability density function of $W_1 + W_2$ evaluated at $t$,
and $W_1 + W_2$ is
$ {\cal N}(0, \sigma^2)$, so that
$$
\mathbb{E} f_{W_2} (t - W_1)
= f_{W_1 + W_2}(t) = \sigma^{-1}\phi(t/\sigma).
$$
On the other hand, since $ c_{n(m)}^{-1}(u_m - \mathbb{E} Z_{n(m) })$ is
equal (by {\bf e}q{0204a}) to $t_{m}$ which we assume converges to $t$,
we also have that
$$
\sigma^{-1}\phi \left(\frac{u_m - \mathbb{E} Z_{n(m)}}{ c_{n(m)} \sigma}
\right)
\to
\sigma^{-1}\phi \left(\frac{t}{ \sigma}
\right),
$$
and therefore we obtain a contradiction of
{\bf e}q{1102e} in this case too.
Thus
{\bf e}q{1102e} fails, and therefore
{\bf e}q{1102d} holds. Hence,
{\bf e}q{1102c2} holds in the case with
$Z_n = Y_n + S_n$.
$\qed$ \\
{{\bf e}m Proof of Theorem \ref{genthm2}.}
Set $Z'_n:=Y_n + S_n$. By the integrability assumptions,
$Z'_n$ is integrable. By
{\bf e}q{normlim1a} and the assumption that
$n^{-1/2}\mathbb{E}[|Z_n-Z'_n|] \to 0$ as $n \to {\bf i}nfty$,
\begin{equation}a
n^{-1/2} (Z'_n- \mathbb{E} Z'_n) \tod {\cal N}(0,\sigma^2 )
{\rm ~~~as ~~} n \to {\bf i}nfty.
{\bf e}ea
Let $b >0$ with $h_V | b$. By Lemma \ref{lemZXY},
$\sigma^2 \geq {\rm Var}(V)$, {\bf e}q{0110a} holds, and
\begin{equation}gin{eqnarray*}
\sup_{u {\bf i}n \mathbb{R} }
\left\{ \left|
c_n
P[Z'_n {\bf i}n [u,u +b) ] -
\sigma^{-1} b \phi \left(\frac{u - \mathbb{E} Z'_n}{ c_n
\sigma
}
\right)
\right|
\right\}
\to 0
~~~~
{\rm ~~~as ~~} n \to {\bf i}nfty.
{\bf e}ean
Hence, by the assumption $n^{1/2} P[Z_n \neq Z'_n] \to 0$,
\begin{equation}gin{eqnarray*}
\sup_{u {\bf i}n \mathbb{R} }
\left\{ \left|
c_n
P[Z_n {\bf i}n [u,u +b) ] -
\sigma^{-1} b \phi \left(\frac{u - \mathbb{E} Z'_n}{ c_n
\sigma
}
\right)
\right|
\right\}
\to 0
~~~~
{\rm ~~~as ~~} n \to {\bf i}nfty,
{\bf e}ean
and since the assumption
$n^{-1/2}\mathbb{E}[|Z_n-Z'_n|] \to 0$ implies that
$ c_n^{-1}(\mathbb{E}[Z_n] - \mathbb{E} [Z'_n]) \to 0$
as $n \to {\bf i}nfty$, and $\phi$ is uniformly continuous on $\mathbb{R}$,
we can then deduce {\bf e}q{1102c2}.
$\qed$
\section{Proof of theorems for percolation}
\label{secpfperc}
{\bf e}qco \thco \prco \laco \coco \cjco {\delta}co
We shall repeatedly use the following Chernoff-type tail bounds
for the binomial and Poisson distributions.
For $a >0$ set ${\rm Var}phi(a):=1-a + a \log a$.
Then ${\rm Var}phi(1)=0$ and ${\rm Var}phi(a) >0$
for $a {\bf i}n (0,{\bf i}nfty)\setminus \{1\}$.
\begin{equation}gin{lemm}
\label{PenL1.1}
Suppose $X$ is a binomial or Poisson distributed random variable
with $\mathbb{E}[X] = \mu >0$.
Then
\begin{equation}a
P[X \geq x] \leq {\bf e}xp( - \mu {\rm Var}phi(x/\mu)), ~~~~
x \geq \mu; \label{Biup}
\\
P[X \leq x] \leq {\bf e}xp( - \mu {\rm Var}phi(x/\mu)), ~~~~
x \leq \mu. \label{Bilo}
{\bf e}ea
{\bf e}nd{lemm}
{{\bf e}m Proof.} See e.g. Lemmas 1.1 and 1.2 of \cite{Penbk}. $\qed$ \\
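As a quick numerical check of the Poisson case of the upper-tail bound in Lemma \ref{PenL1.1} (ours, for illustration; the helper names are assumptions):

```python
import math

def varphi(a):
    """varphi(a) = 1 - a + a log a, as in the statement of the lemma."""
    return 1.0 - a + a * math.log(a)

def poisson_upper_tail(mu, x):
    """Exact P[X >= x] for X ~ Poisson(mu), x a positive integer."""
    below = sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(x))
    return 1.0 - below

def chernoff_bound(mu, x):
    """The bound exp(-mu * varphi(x / mu))."""
    return math.exp(-mu * varphi(x / mu))
```

For instance, with $\mu=10$ and $x=15$ or $x=20$ (both $\geq\mu$) the exact tail probability is well below the Chernoff-type bound.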
{{\bf e}m Proof of Theorem \ref{thclus}.}
Let $(B_n)_{n \geq 1}$ be a sequence of nonempty finite subsets
in $\mathbb{Z}^d$ with vanishing relative boundary. The first conclusion
{\bf e}q{ClusCLT} follows from Theorem 3.1 of \cite{PenCLT}, so it
remains to prove {\bf e}q{ClusLLT}.
For $x {\bf i}n \mathbb{Z}^d$ let $\|x\|_{\bf i}nfty$ denote the ${\bf e}ll_{\bf i}nfty$-norm of $x$,
i.e., the maximum absolute value of its coordinates.
Let $B_n^{o}$
be the set of points $x$ in $B_n$ such that all $y{\bf i}n\mathbb{Z}^d$ with
$\|y-x\|_{\bf i}nfty \leq 1$ are also in $B_n$.
Since $|B_n \setminus B_n^o|/|\partial B_n|$ is bounded by a constant
depending only on $d$, the vanishing
relative boundary condition
{\bf e}q{vrb} implies $|B^o_n|/|B_n| \to 1$ as $n \to {\bf i}nfty$.
Hence, by the pigeonhole principle,
for all large enough $n$ we can choose a set of
points $x_{n,1}, x_{n,2}, \ldots, x_{n,\lfloor 5^{-d} |B_n|/2\rfloor}$
in $B^o_n$ such that $\|x_{n,j} - x_{n,k}\|_{\bf i}nfty \geq 3$
for each distinct $j,k$ in $\{1,2,\ldots,\lfloor 5^{-d} |B_n|/2\rfloor\}$
(let these points be chosen by some arbitrary deterministic rule).
For $1 \leq j \leq \lfloor 5^{-d} |B_n|/2\rfloor$, let $I_{n,j}$
be the indicator of the event that each vertex $y {\bf i}n \mathbb{Z}^d$ with
$\|y-x_{n,j}\|_{\bf i}nfty =1$ is closed,
and list the $j$ for which $I_{n,j} =1$, in increasing order,
as $J(n,1) \ldots, J(n,N_n)$, where
$N_{n} := \sum_{j=1}^{ \lfloor 5^{-d} |B_n|/2\rfloor} I_{n,j}$.
Let $I'_{n,j}$ be the indicator of the event that the vertex
$x_{n,j}$ is itself open.
Then $N_n$ is binomially distributed with
parameters $\lfloor 5^{-d} |B_n|/2\rfloor$ and
$(1-p)^{3^d -1}$,
so by Lemma \ref{PenL1.1},
\begin{equation}a
\label{0129a}
\limsup_{n \to {\bf i}nfty} |B_n|^{-1} \log P[
N_n <
5^{-d} (1-p)^{3^d -1} |B_n|/4] <0.
{\bf e}ea
Set $b_n:=
\lfloor 5^{-d} (1-p)^{3^d -1} |B_n|/4 \rfloor.
$
Let $V_1,V_2,\ldots$ be a sequence of independent Bernoulli
variables with parameter $p$, independent of
everything else.
Recalling that ${\cal C}l(B)$ denotes the number of open clusters in $B$,
set
\begin{equation}gin{eqnarray*}
S'_n :=
\sum_{j=1}^{\min(b_n ,N_n)
}
I'_{n,J(n,j)}; ~~~~~
Y_n:= {\cal C}l(B_n)-S'_n,
{\bf e}ean
and
\begin{equation}gin{eqnarray*}
S_n := S'_n + \sum_{j=1}^{(b_n - N_n)^+} V_j,
{\bf e}ean
where $x^+ : = \max(x,0)$ as usual, and the sum $\sum_{j=1}^0$ is
taken to be zero.
In this case, the `good boxes' discussed in Section \ref{secintro}
are the unit ${\bf e}ll_{\bf i}nfty$-neighbourhoods of the sites
$x_{n,J(n,1)},
x_{n,J(n,2)}, \ldots
x_{n,J(n,\min(b_n,N_n))}
$. If $x_{n,j}$ is at the centre of a good box, it is (if open)
isolated from other open sites, so that $Y_n$ is simply the number
of open clusters in $B_n$ if one ignores
all sites $x_{n,J(n,j)}$ $(1 \leq j \leq
\min(b_n,N_n))$.
Hence $Y_n$ does not depend on
the open/closed status of these sites.
Thus $S_n$ has the ${\rm Bin}(b_n,p)$ distribution, and its distribution
given $Y_n$ is unaffected by the value of $Y_n$, so $S_n$
is independent of $Y_n$.
Also,
\begin{equation}gin{eqnarray*}
{\cal C}l(B_n) - ( Y_n + S_n) = S'_n -S_n = - \sum_{j=1}^{(b_n-N_n)^+} V_j
{\bf e}ean
so that by {\bf e}q{0129a}, both $|B_n|^{1/2} P[{\cal C}l(B_n) \neq Y_n+S_n]$
and $|B_n|^{-1/2}\mathbb{E}[|{\cal C}l(B_n) -(Y_n+S_n)|]$ tend to
zero as $n \to {\bf i}nfty$. Combined with {\bf e}q{ClusCLT} this shows
that Theorem \ref{genthm2} is applicable, with $h_V =1$,
and that result shows that
{\bf e}q{ClusLLT} holds. $\qed$ \\
In the proof of Theorem \ref{thlargest}, and again later on,
we shall use the following.
\begin{equation}gin{lemm}
\label{Azulem}
Suppose $m {\bf i}n \mathbb{N}$, and suppose ${\bf x}i_1,\ldots,{\bf x}i_m$ are independent
identically distributed random elements
of some measurable space $(E,{\cal E})$.
Suppose $\psi:E^m \to \mathbb{R}$ is measurable and
suppose for some finite $K$ that for $j =1,\ldots,m$,
$$
K \geq \sup_{(x_1,\ldots,x_m,x'_j) {\bf i}n E^{m+1}}
|\psi(x_1,\ldots,x_j,\ldots,x_m)- \psi(x_1,\ldots,x'_j,\ldots,x_m)|.
$$
Set $Y = \psi ({\bf x}i_1,\ldots,{\bf x}i_m)$.
Then
for any $t >0$,
$$
P[ |Y- \mathbb{E} Y| \geq t ] \leq 2 {\bf e}xp ( - t^2/(2 m K^2)).
$$
{\bf e}nd{lemm}
{{\bf e}m Proof.}
The argument is similar to e.g. the proof of Theorem 3.15 of \cite{Penbk};
we include it for completeness. For $1 \leq i \leq m$
let ${\cal F}_i$ be the $\sigma$-algebra generated
by ${\bf x}i_1,\ldots, {\bf x}i_i$, and let ${\cal F}_0$ be the
trivial $\sigma$-algebra. Then $Y- \mathbb{E}[Y]= \sum_{i=1}^m D_i$
with $D_i := \mathbb{E}[Y|{\cal F}_i] - \mathbb{E}[Y|{\cal F}_{i-1}]$, the $i$th martingale
difference. Then with ${\bf x}i'_i$ independent of ${\bf x}i_1,\ldots,{\bf x}i_m$
with the same distribution as them, we have
$$
D_i = \mathbb{E}[\psi({\bf x}i_1,\ldots, {\bf x}i_i, \ldots, {\bf x}i_m) -
\psi({\bf x}i_1,\ldots,{\bf x}i'_i, \ldots, {\bf x}i_m)|{\cal F}_i]
$$
so that $|D_i| \leq K$ almost surely and hence by
Azuma's inequality (see e.g. \cite{Penbk}) we
have the result. $\qed$ \\
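Lemma \ref{Azulem} can be illustrated empirically. A sketch (ours, not from the text) comparing the empirical tail of a bounded-differences functional, namely a sum of $m$ independent Uniform$[0,1]$ variables (for which $K=1$), against the bound $2\exp(-t^2/(2mK^2))$:

```python
import math
import random

def empirical_tail_vs_azuma(m=200, t=30.0, reps=2000, seed=1):
    """Compare the empirical P[|Y - EY| >= t], for Y a sum of m
    independent Uniform[0,1] variables (bounded differences with K = 1),
    against the Azuma-type bound 2 exp(-t^2 / (2 m K^2))."""
    rng = random.Random(seed)
    mean = m * 0.5
    exceed = 0
    for _ in range(reps):
        y = sum(rng.random() for _ in range(m))
        if abs(y - mean) >= t:
            exceed += 1
    return exceed / reps, 2.0 * math.exp(-t * t / (2.0 * m))
```

The empirical tail is far smaller than the bound here, as expected: the martingale bound ignores the variance of the summands and is loose for sums of i.i.d. variables.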
{{\bf e}m Proof of Theorem \ref{thlargest}.}
Assume $d \geq 2 $ and $p > p_c(d)$.
Let $(B_n)_{n \geq 1}$ be a cube-like sequence of lattice boxes in $\mathbb{Z}^d$.
For finite nonempty $A \subset \mathbb{Z}^d$ we
define the {{\bf e}m diameter} of $A$, written ${\rm diam}(A)$, to be
$\max \{\|x-y\|_{\bf i}nfty: x {\bf i}n A, y {\bf i}n A\}$.
Set ${\bf x}di_n:= \lceil {\rm diam}(B_n)^{1/(4d)}\rceil$.
Let $B_n^{{\rm in}}$ be the set of points
$x$ in $B_n$ such that all $y{\bf i}n\mathbb{Z}^d$ with
$\|y-x\|_{\bf i}nfty \leq {\bf x}di_n $ are also in $B_n$.
Then we claim that
$|B^{{\rm in}}_n|/|B_n| \to 1$ as $n \to {\bf i}nfty$.
Indeed, writing $B_n = \prod_{j=1}^d([-a_{j,n},b_{j,n} ] \cap \mathbb{Z})$,
from the cube-like condition {\bf e}q{cubelike} we have for
$1 \leq j \leq d$ that
${\bf x}di_n = o(a_{j,n} + b_{j,n})$ as $n \to {\bf i}nfty$, and therefore
\begin{equation}gin{eqnarray*}
|B^{{\rm in}}_n| =
\prod_{j=1}^d (b_{j,n} +a_{j,n} - 2 {\bf x}di_n) =
(1 +o(1))
\prod_{j=1}^d (a_{j,n} +b_{j,n} ),
{\bf e}ean
justifying the claim.
By the preceding claim, and the pigeonhole principle,
for all large enough $n$ there is a deterministic set of points
$x_{n,1}, x_{n,2}, \ldots, x_{n,\lfloor 5^{-d} |B_n|/2\rfloor}$
in $B^{{\rm in}}_n$ such that $\|x_{n,j} - x_{n,k}\|_{\bf i}nfty \geq 3$
for each distinct $j,k$ in $\{1,2,\ldots,\lfloor 5^{-d} |B_n|/2\rfloor\}$.
For $1 \leq j \leq \lfloor 5^{-d} |B_n|/2 \rfloor$, let $I_{n,j}$
be the indicator of the event that (i) each vertex
$y {\bf i}n \mathbb{Z}^d$ with $\|y-x_{n,j}\|_{\bf i}nfty =1$ is open, and (ii)
the open cluster in $B_n$ containing all
$y {\bf i}n \mathbb{Z}^d$ with $\|y-x_{n,j}\|_{\bf i}nfty =1$
has diameter at least ${\bf x}di_n$.
Set $m(n) : = \lfloor 5^{-d} p^{3^d-1} \theta_d(p) |B_n|/8 \rfloor$,
with $\theta_d(p)$ denoting the percolation probability.
List the $j$ for which $I_{n,j} =1$
as $J(n,1),$ $ \ldots,$ $J(n,N_n)$, with
$N_{n} := \sum_{j=1}^{ \lfloor 5^{-d} |B_n|/2\rfloor} I_{n,j}.$
Then
we have for $n$ large that
$$
\mathbb{E} [ N_n ] \geq
\lfloor 5^{-d} |B_n|/2\rfloor
p^{3^d-1} \theta_d(p) \geq 2 m(n).
$$
Changing the open/closed status of a single site
$z$ in $B_n$ can change the value of $I_{n,j}$ only for
those $j$ for which
$\|x_{n,j} -z\|_{\bf i}nfty \leq {\bf x}di_n$, and the number of such $j$
is at most $(2 {\bf x}di_n +1)^d$. Moreover,
for $n$ large
$$
(2 {\bf x}di_n +1)^d \leq ( 2 ({\rm diam} B_n)^{1/(4d)} +3 )^d \leq
3^d ({\rm diam} B_n)^{1/4}
\leq 3^d |B_n|^{1/4}
$$
so that the total change in $N_n$ due to
changing the status of a single site $z$ is at most
$3^d|B_n|^{1/4}$.
So by Lemma \ref{Azulem},
\begin{equation}gin{eqnarray*}
P[ N_n \leq
m(n)
]
\leq
P [ |N_n- \mathbb{E} N_n| \geq m(n)
]
\leq 2 {\bf e}xp \left( - \frac{ m(n)^2
}{ 2 |B_n| (3^d|B_n|^{1/4})^2 }
\right)
{\bf e}ean
and hence
\begin{equation}a
\label{0129b}
\limsup_{n \to {\bf i}nfty} |B_n|^{-1/2} \log P[
N_n \leq
m(n)
] <0.
{\bf e}ea
Let $V_1,V_2,\ldots$ be a sequence of independent Bernoulli
variables with parameter $p$, independent of everything else.
For $1 \leq j \leq \lfloor 5^{-d} |B_n|/2\rfloor$, let
$I'_{n,j}$ be the indicator of the event that the vertex $x_{n,j}$
is open.
Set
\begin{equation}gin{eqnarray*}
S'_n :=
\sum_{j=1}^{\min(m(n) ,N_n)
}
I'_{n,J(n,j)}; ~~~~~
S_n := S'_n + \sum_{j=1}^{(m(n) - N_n)^+} V_j.
{\bf e}ean
Let $Y_n$ be the size of the largest open cluster in $B_n$ if
the status of $x_{n,j}$ is set to `closed' for the first
$\min(m(n),N_n)$ values of $j$ for which $I_{n,j} =1$.
Then $S_n$ has the ${\rm Bin}(m(n),p)$ distribution, and we assert that
its distribution
given $Y_n$ is unaffected by the value of $Y_n$, so $S_n$
is independent of $Y_n$. Indeed, $Y_n$ is obtained without sampling
the status of the sites $x_{n,j}$ for
the first
$\min(m(n),N_n)$ values of $j$ for which $I_{n,j} =1$.
To explain this further, consider algorithmically sampling the
open/closed status
of sites in $B_n$ as follows.
First sample the status of
sites outside $\cup_{j}\{x_{n,j}\}$. Then sample the status
of those $x_{n,j}$ for which the $\ell_\infty$-neighbouring sites are not
all open (for these sites, $I_{n,j}$ must be zero).
At this stage,
it remains to sample the status of sites
$x_{n,j}$ for which the $\ell_\infty$-neighbouring sites are all open,
and for these sites one can tell, without revealing the
value of $x_{n,j}$, whether or not $I_{n,j} =1$ (and in particular
one can determine the value of $N_{n}$).
At the next step sample the status of all $x_{n,j}$
except for the first $\min(N_{n},m(n))$
values of $j$ for which $I_{n,j} =1$. At this point, the value
of $Y_n$ is determined. However, the value of $S_n$ is
determined by the status of the remaining unsampled sites
together with some extra Bernoulli variables in the
case where $N_{n} < m(n)$, so its distribution is independent
of the value of $Y_n$ as asserted.
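The staged revealing just described is an instance of a general principle: if the choice of which coins to leave unexamined is made using only already-revealed coins, the deferred coins remain i.i.d.\ Bernoulli$(p)$, independent of everything revealed. A seeded toy simulation of that principle (entirely our construction, not part of the proof; the adaptive rule is arbitrary):

```python
import random

def deferred_sum(rng, n, p, k):
    """Reveal Bernoulli(p) coins in order, adaptively deferring exactly k of
    them based only on already-revealed values (never peeking at a deferred
    coin).  Returns (revealed_sum, deferred_sum); the deferred sum is exactly
    Bin(k, p) and independent of the revealed information."""
    coins = [rng.random() < p for _ in range(n)]
    deferred, revealed_sum = [], 0
    for i in range(n):
        # forced deferral so that exactly k coins end up deferred:
        must = (n - i) <= (k - len(deferred))
        if len(deferred) < k and (revealed_sum % 2 == 1 or must):
            deferred.append(i)            # defer this coin without looking
        else:
            revealed_sum += coins[i]
    return revealed_sum, sum(coins[i] for i in deferred)
```

Over many trials, the deferred sum matches the ${\rm Bin}(k,p)$ law and is uncorrelated with any function of the revealed coins, mirroring the independence of $S_n$ and $Y_n$ asserted above.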
Next, we establish that $L(B_n) = Y_n +S_n$
with high probability.
One way in which this could fail would be if $N_n < m(n)$,
but we know from (\ref{0129b}) that this has small probability.
Also, we claim that with high probability, all sites $x_{n,j}$
for which $I_{n,j} =1$ have all their neighbouring sites
as part of the largest open cluster, regardless
of the status of $x_{n,j}$. To see this, let $A_n$ be the event
that (i) there is a unique open cluster for $B_n$
that crosses $B_n$ in all directions (in the
sense of \cite{PenCLT}) and
(ii) all other clusters in $B_n$ have diameter less than
$\xdi_n$. Then we claim that
$P[A^c_n]$ decays exponentially in $\gamma_n$ in the sense that
\begin{eqnarray}
\limsup_{n \to \infty} ({\rm diam} B_n)^{-1/(4d)} \log P[A_n^c] < 0.
\label{0129c}
\end{eqnarray}
The proof of (\ref{0129c}) proceeds as in the
proof of Lemma 3.4 of \cite{PenCLT}; we include a sketch
of this argument here for completeness.
First suppose $d=2$. For a given rectangle of
dimensions $(\gamma_n/3) \times \gamma_n$,
the probability that it fails to have an open crossing the long way
decays exponentially in $\gamma_n$ (see Lemma 3.1 of \cite{PenCLT}).
Consider the family of all rectangles
of dimensions $(\gamma_n/3) \times \gamma_n$ or of dimensions
$\gamma_n \times (\gamma_n/3)$,
with all corners in $(\gamma_n/3) \mathbb{Z}^2$, having non-empty intersection
with $B_n$. The number of such rectangles is $O({\rm diam}(B_n)^{d-1/2})$.
By the preceding probability estimate,
all rectangles in this family
have an open crossing
the long way, except on an event
of probability decaying exponentially in $\gamma_n$. However,
if all these rectangles have an open crossing the long way, then
event $A_n$ occurs, and we have justified (\ref{0129c})
for $d=2$.
For $d\geq3$, by the well-known result of Grimmett and Marstrand
\cite{GM},
there exists a finite $K$
such that there is an infinite
open cluster in the slab $[0,K] \times \mathbb{R}^{d-1}$ with
strictly positive probability.
By dividing $B_n$ into slabs of thickness
$K$ we see for $1 \leq i \leq d$
that the probability that there is no open crossing of $B_n$
in the $i$-direction decays exponentially
in ${\rm diam}(B_n)$. Moreover, for $i \neq j$,
by a similar slab argument
(consider successive slabs of thickness $K$ in the $i$ direction),
the probability that there is an open cluster
in $B_n$ that crosses $B_n$ in the $i$ direction
but not the $j$ direction decays exponentially in ${\rm diam}(B_n)$.
Similarly the probability that there are two or more disjoint
open clusters in $B_n$ which cross in the $i$ direction
decays exponentially in ${\rm diam}(B_n)$.
Finally by a further slab argument,
the probability that there is an open cluster
which has diameter at least $\gamma_n/d$ in the $i$ direction
but fails to cross the whole of $B_n$ in the $j$ direction,
decreases exponentially in $\gamma_n$. This
justifies (\ref{0129c}) for $d\geq 3$.
Note that the occurrence or otherwise of $A_n$
is unaffected by the open/closed status of those $x_{n,j}$ for which
$I_{n,j}=1$.
Also, for large enough $n$, on
event $A_n$, whatever status we give to
these $x_{n,j}$, the unique crossing cluster is the largest one
because it has at least ${\rm diam}(B_n)$ elements while all other
clusters have at most $O({\rm diam}(B_n)^{1/4})$ elements.
If $N_n \geq m(n)$ and event $A_n$ occurs, then for each $j\leq m(n)$,
the site $x_{n,J(n,j)}$ is in the largest open cluster
if and only if it is open, since if it is open then it is in
an open cluster of diameter at least $\xdi_n$. This shows that
if $N_n \geq m(n)$ and event $A_n$ occurs,
we do indeed have
$L(B_n) = Y_n + S_n$.
Together with
the previous probability
estimates (\ref{0129b}) and (\ref{0129c}), this shows that
$|B_n|^{1/2} P [L(B_n) \neq Y_n + S_n] \to 0$ as $n\to \infty$.
Moreover, by the Cauchy-Schwarz inequality,
\begin{eqnarray*}
\mathbb{E} [|L(B_n) - ( Y_n + S_n)|]
=
\mathbb{E} [|L(B_n) - ( Y_n + S_n)|{\bf 1}_{\{N_n < m(n) \} \cup A_n^c}]
\\
\leq ( P[N_n < m(n)] + P[A_n^c] )^{1/2} (\mathbb{E}[ (L(B_n) -( Y_n + S_n))^2])^{1/2}
\\
\leq ( P[N_n < m(n)] + P[A_n^c] )^{1/2} (|B_n| + m(n)) \to 0.
\end{eqnarray*}
By Theorem 3.2 of \cite{PenCLT}, the first conclusion (\ref{LargCLT})
holds, and by the preceding discussion, we can then apply
Theorem \ref{genthm2} with $h_V=1$
to derive the second conclusion (\ref{LargLLT}).
$\qed$
\section{Proof of Theorem \ref{Gthm}}
\label{secpfedges}
\eqco \thco \prco \laco \coco \cjco \deco
We are now in the setting of Section \ref{secRGG}.
Assume $f \equiv f_U$, and fix a feasible connected graph $\Gamma$ with
$\kappa$ vertices $(2 \leq \kappa < \infty)$.
Assume also that the sequence $(r_n)_{n \geq 1}$ is given
and satisfies (\ref{rhofin}) and (\ref{taubig}).
Then
$P[{\cal G}({\cal X}_\kappa,1/(\kappa+3)) \sim \Gamma] \in ( 0,1)$.
Let $Q_{n,1},Q_{n,2},\ldots,Q_{n,m(n)}$ be disjoint cubes
of side $(\kappa+5)r_n$, contained in the unit cube,
with $m(n) \sim ((\kappa+5) r_n)^{-d}$ as $n \to \infty$.
For $1 \leq j \leq m(n)$, let
$I_{n,j}$ be the indicator of the event
that
${\cal X}_n \cap Q_{n,j}$
consists of exactly $\kappa$ points, all of them at a
Euclidean distance greater than $r_n$ from the boundary of $Q_{n,j}$.
List the indices $j \leq m(n)$ such that
$I_{n,j}=1$,
in increasing order, as
$J_{n,1},\ldots,J_{n,N_n}$, with $N_n: = \sum_{j=1}^{m(n)} I_{n,j}$.
Then
\begin{eqnarray}
\mathbb{E} [N_n] = m(n) ((\kappa+3)/(\kappa+5))^{d\kappa}
P[{\rm Bin}(n, ((\kappa+5)r_n)^d) =\kappa] ,
\label{1027a}
\end{eqnarray}
and hence as $n \to \infty$, since
$n r_n^d$ is bounded by our assumption (\ref{rhofin}),
\begin{eqnarray}
\mathbb{E} [N_n]
\sim
\kappa!^{-1}(\kappa+3)^{d\kappa}
(\kappa+5)^{-d}
n^\kappa r_n^{d(\kappa-1)}
\exp(-n (\kappa+5)^d r_n^d).
\label{1113a}
\end{eqnarray}
Recalling from (\ref{taubig}) that $\tau_n := \sqrt{ n (nr_n^d)^{\kappa-1}}$,
we can rewrite (\ref{1113a}) as
\begin{eqnarray}
\mathbb{E} [N_n]
\sim
\kappa!^{-1}(\kappa+3)^{d\kappa}
(\kappa+5)^{-d}
\tau_n^2
\exp(-n (\kappa+5)^d r_n^d)
\label{1031a}
\end{eqnarray}
as $n \to \infty$.
Moreover, for the Poissonised version of this model
where the number of points is Poisson distributed
with mean $n$, we have the same asymptotics
for the quantity corresponding to $N_n$ (the binomial
probability in (\ref{1027a}) is asymptotic to the corresponding
Poisson probability).
Set $\alpha$ to be one-quarter of the coefficient of
$\tau_n^2$ in (\ref{1031a}), if the exponential factor is
replaced by its smallest value in the sequence, i.e.\ set
\begin{eqnarray}
\alpha :=
(4 \kappa!)^{-1}(\kappa+3)^{d\kappa}
(\kappa+5)^{-d}
\inf_n \exp( -n (\kappa+5)^d r_n^d).
\label{0113c}
\end{eqnarray}
Then
$\alpha > 0$ by our assumption (\ref{rhofin}) on $r_n$.
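The asymptotics (\ref{1113a}) and the positivity of $\alpha$ from (\ref{0113c}) can be checked numerically. In the sketch below (Python; the function names are ours, and we take $r_n = (\rho/n)^{1/d}$ so that $nr_n^d \equiv \rho$, an illustrative choice consistent with (\ref{rhofin})), the exact expression (\ref{1027a}) and the right-hand side of (\ref{1113a}) already agree to within a few percent at moderate $n$:

```python
import math

def exact_mean(n, kappa, d, rho):
    """E[N_n] from (1027a), with r_n = (rho/n)^(1/d) so that n*r_n^d = rho."""
    rnd = rho / n                                  # r_n^d
    m = int(((kappa + 5) ** d * rnd) ** -1)        # m(n) ~ ((kappa+5) r_n)^{-d}
    p = (kappa + 5) ** d * rnd                     # ((kappa+5) r_n)^d
    pmf = math.comb(n, kappa) * p ** kappa * (1.0 - p) ** (n - kappa)
    return m * ((kappa + 3) / (kappa + 5)) ** (d * kappa) * pmf

def asym_mean(n, kappa, d, rho):
    """Right-hand side of the asymptotic formula (1113a)."""
    rnd = rho / n
    return ((kappa + 3) ** (d * kappa) / math.factorial(kappa)
            * (kappa + 5) ** (-d) * n ** kappa * rnd ** (kappa - 1)
            * math.exp(-n * (kappa + 5) ** d * rnd))

def alpha(kappa, d, rho):
    """alpha from (0113c); with n*r_n^d = rho the infimum of the exponential
    factor is exp(-(kappa+5)^d * rho)."""
    return ((kappa + 3) ** (d * kappa) / (4 * math.factorial(kappa))
            * (kappa + 5) ** (-d) * math.exp(-(kappa + 5) ** d * rho))
```

For example, with $\kappa=d=2$, $\rho=1$ and $n=10^5$ the ratio of exact to asymptotic mean is within about one percent of 1.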
\begin{lemm}
\label{lem1}
It is the case that
$$
\limsup_{n \to \infty} \tau_n^{-2} \log
P \left[ N_n < \alpha \tau_n^2 \right] < 0.
$$
\end{lemm}
{\em Proof.}
Let $\delta >0$ (to be chosen later).
Let $M_n$ be Poisson distributed with parameter
$(1-\delta)n$, independent of the sequence
of random $d$-vectors
$X_1,X_2,\ldots$. Define the
Poisson point process
$$
{\cal P}_{n(1-\delta)} := \{X_1,\ldots,X_{M_n}\} .
$$
Let $N'_n $ be defined in the same manner as $N_n$
but in terms of ${\cal P}_{n(1-\delta)}$ rather than ${\cal X}_n$.
That is, set
$$
N'_n := \sum_{j=1}^{m(n)} I'_{n,j}
$$
with $I'_{n,j}$ denoting the indicator
of the event that ${\cal P}_{n(1-\delta)} \cap Q_{n,j}$
consists of exactly $\kappa$ points, all
at distance greater than $r_n$ from the boundary
of $Q_{n,j}$.
List the indices $j \leq m(n)$ such that
$I'_{n,j}=1$ as $J'_{n,1}, \ldots, J'_{n,N'_n}$.
Since (\ref{1031a}) holds in the Poisson setting too,
using the definition of $\tau_n$ we have as $n \to \infty$ that
\begin{eqnarray}
\mathbb{E} [N'_n]
\sim
\kappa!^{-1}(\kappa+3)^{d\kappa}
(\kappa+5)^{-d}
(1-\delta)^\kappa
\tau_n^2
\exp(-n (1-\delta) (\kappa+5)^d r_n^d).
\label{0113b}
\end{eqnarray}
By (\ref{1031a}) and (\ref{0113b}),
we can and do choose $\delta >0$ to be small enough
so that
$\mathbb{E} N'_n > (3/4) \mathbb{E} N_n$
for large $n$.
By (\ref{1031a}) and (\ref{0113c}) we have for large $n$
that $2 \alpha \tau_n^2 \leq (5/8) \mathbb{E} N_n$.
Also, $N'_n$ is binomially distributed,
and hence by Lemma \ref{PenL1.1},
$P[N'_n < 2 \alpha \tau_n^2 ]$
decays exponentially in $\tau_n^2$.
By Lemma \ref{PenL1.1},
except on an event of probability
decaying exponentially in $n$, the value of $M_n$ lies between
$n(1 - 2 \delta)$
and
$n$.
If this happens, the discrepancy between $N_n$ and $N'_n$
is due to the addition of at most an extra $2 \delta n$ points
to ${\cal P}_{n(1-\delta)}$.
If also
$N'_n \geq 2 \alpha \tau_n^2$ then to have
$N_n < \alpha \tau_n^2$, at least $\alpha \tau_n^2$
of the added points must land in the union of the first
$ \lceil 2 \alpha \tau_n^2 \rceil$ cubes
contributing to $N'_n$.
To spell out the preceding argument in more detail,
let $1 \leq j \leq m(n)$.
If $M_n < n$ and $I'_{n,j}=1$ and $X_k \notin Q_{n,j}$
for $M_n < k \leq n$, then $I_{n,j}=1$, since in
this case $
{\cal X}_{n} \cap Q_{n,j}
=
{\cal P}_{n(1-\delta)} \cap Q_{n,j}
$.
Therefore if $M_n < n$ and $N'_n \geq 2 \alpha \tau_n^2$ and
$$
\sum_{k=M_n+1}^n {\bf 1}\{X_k \in \cup_{j=1}^{\lceil 2 \alpha \tau_n^2
\rceil } Q_{n, J'_{n,j}}
\} < \alpha \tau_n^2 ,
$$
then $
{\cal X}_{n} \cap Q_{n,J'_{n,j}}
\neq
{\cal P}_{n(1-\delta)} \cap Q_{n,J'_{n,j}}
$ for at most $\lfloor \alpha \tau_n^2 \rfloor$
values of $j \in [1, 2 \alpha \tau_n^2]$, and hence
$$
N_n \geq
\sum_{j=1}^{\lceil 2 \alpha \tau_n^2 \rceil}
I_{n,J'_{n,j}}
\geq \lceil 2 \alpha \tau_n^2 \rceil -\alpha \tau_n^2 \geq \alpha \tau_n^2.
$$
Hence, if $n(1-2\delta) < M_n < n$
and
$ N'_n \geq 2 \alpha \tau_n^2$
and
$
\sum_{k=M_n+1}^{M_n + \lceil 2 \delta n \rceil }
{\bf 1}\{X_k \in \cup_{j=1}^{\lceil 2 \alpha \tau_n^2
\rceil } Q_{n, J'_{n,j}}
\} < \alpha \tau_n^2
$, then $N_n \geq \alpha \tau_n^2$.
Hence
\begin{eqnarray*}
P [ N_n < \alpha \tau_n^2
| N'_n \geq 2 \alpha \tau_n^2, n-2 \delta n <
M_n < n ]
\\
\leq P [ {\rm Bin} (\lceil 2 \delta n \rceil , \lceil 2 \alpha
\tau_n^2 \rceil ((\kappa+5) r_n)^d ) >
\alpha \tau_n^2
].
\end{eqnarray*}
Since $nr_n^d$ is assumed bounded, we can choose
$\delta$ small enough so that
the expectation of the binomial variable in the last line is less
than $(\alpha/2) \tau_n^2$, and then appeal once more
to Lemma \ref{PenL1.1}
to see that the above conditional probability
decays exponentially in $\tau_n^2$. Combining all these probability
estimates gives the desired result. $\qed$ \\
{\em Proof of Theorem \ref{Gthm}.}
Set $p := P[{\cal G}({\cal X}_\kappa,1/(\kappa +3)) \sim \Gamma]$.
Let $V_1,V_2,\ldots$ be a sequence of independent Bernoulli
variables with parameter $p$, independent of ${\cal X}_n$.
Let
\begin{eqnarray*}
S'_n :=
\sum_{j=1}^{\min(\lfloor \alpha \tau_n^2 \rfloor,N_n)
}
{\bf 1} \{{\cal G}({\cal X}_n \cap Q_{n,J(n,j)} ; r_n)
\sim \Gamma\};
~~~~~
Y_n:= G_n-S'_n,
\end{eqnarray*}
and
\begin{eqnarray*}
S_n := S'_n + \sum_{j=1}^{(\lfloor \alpha \tau_n^2 \rfloor - N_n)^+} V_j,
\end{eqnarray*}
where $x^+ : = \max(x,0)$ as usual, and the sum $\sum_{j=1}^0$ is
taken to be zero.
For each $j$, given that $I_{n,j}=1$, the distribution
of the contribution to $G_n$ from points in $Q_{n,j}$ is Bernoulli
with parameter
$P[ {\cal G}((\kappa+3)r_n {\cal X}_\kappa, r_n) \sim \Gamma]$, which is $p$.
Hence $S_n$ is binomial ${\rm Bin}(\lfloor \alpha
\tau^2_n \rfloor, p).$
Moreover, the conditional distribution
of $S_n,$ given the value of $Y_n$, does not depend on the value
of $Y_n$, and therefore $S_n$ is independent of $Y_n$.
By (\ref{normlim1}),
\begin{eqnarray*}
\lfloor \alpha \tau_n^2 \rfloor^{-1/2}
(G_n-\mathbb{E} G_n) \tod {\cal N} (0, \alpha^{-1} \sigma^2 ).
\end{eqnarray*}
Moreover,
\begin{eqnarray*}
\mathbb{E}[|G_n - (Y_n + S_n)| ] =
\mathbb{E} \left[ \sum_{j=1}^{(\lfloor \alpha \tau_n^2 \rfloor - N_n)^+} V_j \right]
\leq p \lfloor \alpha \tau_n^2 \rfloor P[N_n < \alpha \tau_n^2]
\end{eqnarray*}
so that by Lemma \ref{lem1}, both $\tau_n P[G_n \neq Y_n + S_n] $ and
$\tau_n^{-1}\mathbb{E}[|G_n - Y_n-S_n|]$ tend to zero as $n \to \infty$.
Hence,
Theorem \ref{genthm2} (with $h_V=1$) is applicable,
with
$ \lfloor \alpha \tau_n^2 \rfloor$
playing the role of $n$ in that result and
$\alpha^{1/2} \tau_n$ playing the role of $ c_n$,
yielding
\begin{eqnarray*}
\sup_{k \in \mathbb{Z} }
\left\{ \left| \alpha^{1/2} \tau_n P[G_n =k] -
\alpha^{1/2} \sigma^{-1}
\phi \left(\frac{k - \mathbb{E} G_n}{
(\alpha^{1/2} \tau_n) \alpha^{-1/2} \sigma}
\right)
\right|
\right\}
\to 0,
\end{eqnarray*}
as $ n \to \infty$.
Multiplying through by $\alpha^{-1/2}$
yields (\ref{LLT_Ga}).
$\qed$
\section{Proof of Theorem \ref{finranthm}
}
\label{secpfSG}
\eqco \thco \prco \laco \coco \cjco \deco
Recall the definition of $h_X$ (the span of $X$) from Section
\ref{secgenresult}.
\begin{lemm}
\label{sumlatlem}
If $X$ and $Y$ are independent
random variables then $h_{X+Y}| h_X$.
\end{lemm}
{\em Proof.}
If $h_{X+Y}=0$ there is nothing to prove. Otherwise, set
$h= h_{X+Y}$.
Then,
considering characteristic functions, observe that
$$
1 = | \mathbb{E} \exp(2 \pi \mathrm{i} (X +Y )/h)| =
| \mathbb{E} \exp(2 \pi \mathrm{i} X /h)| \times
| \mathbb{E} \exp(2 \pi \mathrm{i} Y/h)|
$$
so that
$| \mathbb{E} \exp(2 \pi \mathrm{i} X /h)| =1$ and hence $h|h_X$. $\qed$ \\
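For integer-valued variables with finite support, the span is simply the gcd of the differences of the support points, and Lemma \ref{sumlatlem} can be seen concretely. A small illustration (our helper function; finite integer supports only, with the convention that a point mass has span $0$, which may differ from the paper's convention in the degenerate case):

```python
from math import gcd

def span(support):
    """Span of an integer-valued distribution with the given finite support:
    the gcd of all differences of support points (0 for a point mass)."""
    pts = sorted(set(support))
    g = 0
    for x in pts[1:]:
        g = gcd(g, x - pts[0])
    return g

# X supported on {0, 2, 4} has span 2; Y supported on {0, 3} has span 3.
sum_support = sorted({x + y for x in (0, 2, 4) for y in (0, 3)})
# span(X + Y) = 1, which divides both 2 and 3, as the lemma requires.
```

The divisibility can only decrease the span: adding an independent summand enlarges the lattice on which the distribution lives.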
We are in the setup of Section \ref{secstogeo}.
Recall that the point process ${\cal Z}_n$
consists of $n$ normally distributed marked points
in $\mathbb{R}^d$,
while ${\cal U}_{n,K}$
consists of $n$ uniformly distributed marked points
in $B(K)$.
Set $ h_{n,K} := h_{H({\cal U}^*_{n,K})} $.
Set $h_n:= h_{H({\cal Z}^*_n)}$,
and recall from (\ref{limspandef}) that
$h(H) := \liminf_{n \to \infty} h_n$.
\begin{lemm}
\label{lemlat}
Suppose either (i) $H$ has finite range interactions
and $h_{H({\cal Z}^*_n)} < \infty$ for some $n$, or (ii)
$H = H^{(\xi)}$ is
induced by a $\kappa$-nearest neighbour functional $\xi({\bf x};{\cal X}^*)$,
and $h_{H({\cal Z}^*_n)} < \infty$ for some $n > \kappa$.
Then
$h(H) < \infty$, and if
$h(H) > 0$,
there exists $\mu \in \mathbb{N}$
and $K >0$ such that
$h_{\mu,K} = h(H)$.
If $h(H) = 0$, then for any $\varepsilon > 0$
there exists $\mu \in \mathbb{N}$ and $K >0 $ such that
$h_{\mu,K} < \varepsilon$.
In case (ii), we can take $\mu$ such that additionally
$\mu \geq \kappa +1$.
\end{lemm}
{\em Proof.}
The support of the distribution of $H({\cal U}^*_{n,K})$ is
increasing with $K$,
so
$h_{n,K'}|h_{n,K}$
for $K' \geq K$.
Hence, there exists a limit $h_{n,\infty} $ such that
\begin{eqnarray}
h_{n,\infty} = \lim_{K \to \infty} h_{n,K}
\label{lanidef}
\end{eqnarray}
and also we have the implication
\begin{eqnarray}
h_{n,\infty} >0
\implies
\exists K:
h_{n,K} =
h_{n,\infty}.
\label{eqlani}
\end{eqnarray}
Also, for all $K$
the support of the distribution of $H({\cal U}^*_{n,K})$ is
contained in the support of $H({\cal Z}^*_n)$, so that
\begin{eqnarray}
h_n = h_{H({\cal Z}^*_n)} \leq
h_{n,K}, ~~~
\forall K,
\label{1207e}
\end{eqnarray}
and hence
$
h_{n} \leq h_{n,\infty} $
for all $n$.
We assert that in fact
\begin{eqnarray}
h_{n,\infty} = h_n.
\label{1207d}
\end{eqnarray}
This is clear
when $h_{n,\infty} =0$.
When $h_{n,\infty} > 0$, there exists
a countable set $S$ with span $h_{n,\infty}$ such that
$P[H({\cal U}^*_{n,K}) \in S ] = 1$ for all $K$.
But then it is easily deduced that $P[H({\cal Z}^*_n) \in S] =1$,
so that $h_{n} \geq h_{n,\infty}$, and combined
with (\ref{1207e}) this gives (\ref{1207d}).
We shall show in both cases (i) and (ii) that $h_n$ tends to a finite
limit;
that is, for both cases we shall show that
\begin{eqnarray}
h(H) = \lim_{n \to \infty} h_n = \lim_{n \to \infty} h_{n,\infty}
< \infty.
\label{1210a}
\end{eqnarray}
Also, we show in both cases
that
\begin{eqnarray}
h(H) > 0
\implies \exists n_0 \in \mathbb{N}: h_n = h(H) ~~ \forall n \geq n_0.
\label{1211a}
\end{eqnarray}
If $h(H) >0$, the desired conclusion follows from
(\ref{1211a}), (\ref{1207d}) and (\ref{eqlani}).
If $h(H) =0$, the desired conclusion follows
from (\ref{1210a}) and (\ref{lanidef}).
Consider the case (i), where
$H$ has finite range interactions. In this case, we
shall show that
for all $n$,
\begin{eqnarray}
h_{n+1} | h_n, \label{ladown}
\end{eqnarray}
and since we assume $h_n < \infty$ for some $n$,
(\ref{ladown}) clearly implies (\ref{1210a}) and (\ref{1211a}).
We now demonstrate (\ref{ladown}) in case (i) as follows.
By (\ref{1207d}) and (\ref{eqlani}),
to prove (\ref{ladown}) it suffices to prove that
$h_{n+1} | h_{n,K}$
for all $K$.
Choose $\tau$ such that (\ref{finraneq}) holds.
There is a strictly positive probability
that the first $n$ points of ${\cal Z}_{n+1}$ lie in $B(K)$ while
the last one lies outside $B(K+\tau)$. Hence by (\ref{finraneq})
and translation-invariance,
the support of the distribution of $H({\cal Z}^*_{n+1})$
contains the support of the distribution of $H({\cal U}^*_{n,K}) +
H(\{(0,T)\})$, where $T$ is a $\mathbb{P}_{\cal M}$-distributed
element of ${\cal M}$, independent of ${\cal U}^*_{n,K}$. Hence
by Lemma \ref{sumlatlem},
$h_{n+1} | h_{n,K}$, so
(\ref{ladown}) holds as claimed in this case.
Now consider case (ii), where we assume $H = H^{(\xi)}$ with
$\xi({\bf x};{\cal X})$ determined by the $\kappa$ nearest neighbours.
We claim that if
$j \geq \kappa+1$ and $\ell \geq \kappa+1$ then
\begin{eqnarray}
h_{j+\ell} | h_j
~~~{\rm and}~~~
h_{j+\ell} | h_\ell.
\label{1207j}
\end{eqnarray}
By (\ref{eqlani}) and (\ref{1207d}), to
verify (\ref{1207j})
it suffices to show that
\begin{eqnarray}
h_{j+\ell} | h_{j,K} ~~~ \forall K >0.
\label{0126b}
\end{eqnarray}
Given $K$, let $B$ and $B'$ be disjoint balls of radius $K$, distant
more than $2K$ from each other.
There is a positive probability that ${\cal Z}_{j+\ell}$ consists
of $j$ points in $B$ and $\ell$ points in $B'$,
and if this happens then (since we assume $\min(j,\ell) >\kappa$)
the $\kappa$ nearest neighbours of
the points in $B$ are also in $B$, while
the $\kappa$ nearest neighbours of
the points in $B'$ are also in $B'$, so that
$H({\cal Z}^*_{j+\ell})$ is the sum of conditionally independent contributions
from the points in
$B$ and those in $B'$.
Hence the support of the distribution of $H({\cal Z}^*_{j+\ell})$ contains
the support of the distribution of $H({\cal U}^*_{j,K})+ H(\tilde{{\cal U}}^*_{\ell,K})$,
where $H(\tilde{{\cal U}}^*_{\ell,K})$ is defined to be a
variable with the distribution
of $H({\cal U}^*_{\ell,K})$, independent of $H({\cal U}^*_{j,K})$.
Then (\ref{0126b}) follows from Lemma \ref{sumlatlem}.
Define
\begin{eqnarray*}
h' = \inf_{n \geq \kappa+1} h_n.
\end{eqnarray*}
Then
for all $\varepsilon > 0$ we can pick $j \geq \kappa+1$ with
$h_j \leq h' + \varepsilon$, and then by
(\ref{1207j}) we have $h_\ell \leq h' + \varepsilon$
for $\ell \geq j + \kappa +1$. This demonstrates (\ref{1210a}) for this case
(with $h(H) = h'$), since we assume $h_n < \infty$ for some
$n$.
Moreover, if $h(H) > 0,$ then in the argument just given we can
take $\varepsilon < h(H)$ and then for
$\ell \geq j + \kappa +1$ we must have
$h_\ell |h_j$, which can happen
only if $h_\ell =h_j$,
so by (\ref{1210a}),
in fact $h_\ell = h_j = h(H)$.
That is, we also have (\ref{1211a}) for this case.
$\qed$ \\
Since we are in the setting of Section \ref{secstogeo}, we assume (as in
Section \ref{secRGG}) that $f$ is an almost everywhere continuous
probability density function on $\mathbb{R}^d$ with $f_{{\rm max}} < \infty$.
The point process ${\cal X}_n \subset \mathbb{R}^d$ is a sample from this density, and
the marked point process ${\cal X}_n^* \subset \mathbb{R}^d \times {\cal M}$
is obtained by giving each point of ${\cal X}_n$ a
$\mathbb{P}_{\cal M}$-distributed mark. Recall also that
we are given a sequence $(r_n)$ with $\rho := \lim_{n \to \infty}
n r_n^d \in (0,\infty)$.
Recall from (\ref{0110b}) that $H_n({\cal X}^*):= H(r_n^{-1} {\cal X}^*)$
for a given translation-invariant $H$.
Our strategy for proving Theorem \ref{finranthm} goes as follows.
First we choose $\mu,K$ as in Lemma \ref{lemlat}.
Then we choose constants $\beta \geq K$ and $m \geq \mu$
in a certain way (see below),
and use the continuity of $f$ to
pick $\Theta(n)$ disjoint deterministic balls of
radius $\beta r_n$ such
that $f$ is positive and almost constant on each of these balls.
We use a form of rejection sampling to
make the density of points of ${\cal X}_n$ in each (unrejected) ball uniform.
We also
reject all balls which do not contain exactly
$m$ points of ${\cal X}_n$ in a certain
`good' configuration (of non-vanishing probability).
The definition of `good' is chosen in such a way that
the contribution to $H_n$
from inside an inner ball of radius $K r_n$ is shielded from
everything outside the outer ball of radius $\beta r_n$.
We end up with $\Theta(n)$ (in probability) unrejected balls,
and the contributions to $H_n({\cal X}^*_n)$ from the
corresponding inner balls
are independent (because of the shielding) and
identically distributed (because of the uniformly distributed
points), so the sum of the contributions of these inner balls can play the
role of $S_n$ in Theorem \ref{genthm2}.
In the proof of Theorem \ref{finranthm}, we need to
consider certain functions, sets and sequences, defined for $\beta >0$.
For $x \in \mathbb{R}^d$ with $f(x)>0$,
define the function
\begin{eqnarray}
g_{n,\beta}(x) := \frac{ \inf\{ f(y):y \in B(x;\beta r_n) \} }{
\sup\{ f(y):y \in B(x;\beta r_n) \} } ,
\label{gndef}
\end{eqnarray}
and for
$x \in \mathbb{R}^d$ with $f(x)>0$
and
$g_{n,\beta}(x) >0,$ and $ z \in B(x;\beta r_n)$,
define
\begin{eqnarray}
p_{n,\beta}(x,z) := \frac{ \inf\{ f(y):y \in B(x;\beta r_n) \} }{
f(z) } .
\label{pndef}
\end{eqnarray}
Since we assume $f$ is almost everywhere continuous, the function
$g_{n,\beta}$
converges almost everywhere on $\{x:f(x) > 0\}$ to 1.
By Egorov's theorem (see e.g. \cite{Durr}), given $\beta>0$
there is a set $A_\beta$ with $\int_{A_\beta} f(x) dx \geq 1/2$,
such that $f(x)$ is bounded away from zero on $A_\beta$ and
$g_{n,\beta}(x) \to 1$ uniformly on $A_\beta$.
Since we assume (\ref{rhofin}) with $\rho >0$ here, for $n$ large enough $nr_n^d
< 2 \rho$.
Set
$$
\eta(\beta) : = 2^{-(d+2)} \omega_d^{-1} \beta^{-d} f_{{\rm max}}^{-1} \rho^{-1}.
$$
Given $\beta>0$, we claim that for $n$ large enough so that
$nr_n^d < 2 \rho$, we can (and do) choose points
$x_{\beta,n,1},\ldots,x_{\beta,n,\lfloor \eta(\beta) n\rfloor } $
in $A_\beta$ with
$|x_{\beta,n,j} - x_{\beta,n,k}| > 2 \beta r_n$ for $1 \leq j<k
\leq \lfloor \eta(\beta) n\rfloor .$
To see this we use
a measure-theoretic version of the pigeonhole
principle, as follows.
Suppose inductively that we have chosen $x_{\beta,n,1},\ldots,x_{\beta,n,k}$,
with $k < \lfloor \eta(\beta) n\rfloor .$
Then let $x_{\beta,n,k+1}$ be the first point, according
to the lexicographic ordering, in the set
$A_{\beta} \setminus \cup_{j=1}^k B(x_{\beta,n,j};2 \beta r_n)$.
This is possible because this set is non-empty: by
subadditivity of measure,
\begin{eqnarray*}
\int_{\cup_{j=1}^k B(x_{\beta,n,j};2 \beta r_n)}
f(x) dx
\leq k
\omega_d (2 \beta r_n)^d f_{{\rm max}}
< \eta(\beta) n \omega_d (2 \beta r_n)^d f_{{\rm max}}
\\
= nr_n^d/(4 \rho)
< 1/2 \leq \int_{A_{\beta}} f(x) dx,
\end{eqnarray*}
justifying the claim.
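The inductive selection just described is in effect a greedy packing: scan candidate points in lexicographic order and keep any point farther than $2\beta r_n$ from everything kept so far. A sketch (Python; `candidates` stands in for a discretisation of $A_\beta$, and the names are ours):

```python
def greedy_separated(candidates, min_dist):
    """Greedily keep points whose Euclidean distance to every previously
    kept point exceeds min_dist (the role of 2*beta*r_n); candidates are
    scanned in lexicographic order, as in the construction above."""
    chosen = []
    for p in sorted(candidates):
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) > min_dist ** 2
               for q in chosen):
            chosen.append(p)
    return chosen
```

The measure-theoretic pigeonhole argument guarantees that, as long as the separation balls cannot cover $A_\beta$, the greedy scan never runs out of candidates.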
Define the balls
$$
B_{\beta,n,j} := B(x_{\beta,n,j}, \beta r_n); ~~~~~
B^*_{\beta,n,j} := B(x_{\beta,n,j}, \beta r_n) \times {\cal M}.
$$
The
balls $B_{\beta,n,1},\ldots,B_{\beta,n,\lfloor \eta (\beta) n\rfloor }$ are disjoint.
Let $W_1,W_2,W_3,\ldots$ be uniformly distributed random variables
in $[0,1]$,
independent of each other and of $({\bf X}_j)_{j=1}^n$, where
${\bf X}_j = (X_j,T_j)$. For $k \in \mathbb{N}$, think of
$W_k$ as an extra mark attached to the point $X_k$.
This is used in the rejection sampling procedure.
Given $\beta$, if $X_k \in B_{\beta,n,j}$,
let us say that the point $X_k$ is
$\beta$-{\em red} if the associated mark
$W_k$ is less than $p_{n,\beta}( x_{\beta,n,j}, X_k)$.
Given that $X_k$ lies in $B_{\beta,n,j}$ and
is $\beta$-red, the conditional distribution
of $X_k$ is uniform over $B_{\beta,n,j}$.
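This is the standard rejection (thinning) step: a point at $z$ is kept with probability $\inf_B f / f(z)$, so the kept points have density proportional to $f(z)\cdot\inf_B f/f(z)$, which is constant on the ball, i.e.\ uniform. A one-dimensional seeded sketch (our construction, not from the text; we take $f(x)=2x$ on $(0,1)$ and the ``ball'' $B=(0.5,1)$, on which $\inf_B f = 1$):

```python
import math
import random

def red_points(seed, n):
    """Sample n points with density f(x) = 2x on (0,1) (inverse CDF: square
    root of a uniform); a point x landing in B = (0.5, 1) is kept ('red')
    with probability inf_B f / f(x) = 1/(2x).  Kept points are uniform on B."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        x = math.sqrt(rng.random())
        if 0.5 < x < 1.0 and rng.random() < 1.0 / (2.0 * x):
            kept.append(x)
    return kept
```

The empirical mean of the kept points is close to $0.75$, the midpoint of $B$, rather than the $f$-weighted mean $7/9\approx 0.778$, which is exactly the flattening that the $\beta$-red thinning achieves.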
Now let $m \in \mathbb{N}$, and suppose ${\cal S}$ is a measurable set of configurations
of $m$ points in $B(\beta)$ such that $P[{\cal U}_{m,\beta} \in {\cal S}] > 0$.
The number $m$ and the set ${\cal S}$ will be chosen so that
given there are $m$ points of ${\cal X}_n$ in ball $B_{\beta,n,j}$, and
given that their rescaled configuration
lies in the set ${\cal S}$, there is a subset of these
$m$ points which are `shielded' from the rest of ${\cal X}_n$.
Given ${\cal S}$ (and by implication $\beta$ and $m$),
for $1 \leq j \leq \lfloor \eta(\beta) n\rfloor $,
let $I_{{\cal S},n,j}$ be the indicator of the event that
the following conditions hold:
\begin{itemize}
\item
The point set
${\cal X}_n \cap B_{\beta,n,j}$ consists of $m$ points, all of them
$\beta$-red;
\item The configuration $r_n^{-1}(-x_{\beta,n,j} +
( {\cal X}_n \cap B_{\beta,n,j} )) $ is in ${\cal S}$.
\end{itemize}
Let $N_{{\cal S},n} : = \sum_{j=1}^{\lfloor \eta (\beta) n\rfloor } I_{{\cal S},n,j}$,
and list the $j$ for which $I_{{\cal S},n,j} =1$ in increasing order
as $J({\cal S},n,1), \ldots,$ $ J({\cal S},n,N_{{\cal S},n})$.
\begin{lemm}
\label{redlem}
Let $\beta > 0 $, and $m \in \mathbb{N}$. Let ${\cal S}$ be a measurable set of
configurations of $m$ points in $B(\beta)$ such that
$P[{\cal U}_{m,\beta} \in {\cal S}] > 0$.
Then: (i) there exists $\delta > 0$ such that
\begin{eqnarray}
\limsup_{n \to \infty}
\left(
n^{-1} \log
P [N_{{\cal S},n} < \delta n] \right) < 0,
\label{1207c}
\end{eqnarray}
and (ii) conditional on
the values of $I_{{\cal S},n,i}$ for $1 \leq i \leq \lfloor \eta(\beta)n\rfloor$
and the configuration of ${\cal X}_n$
outside $B_{\beta,n,J({\cal S},n,1)} \cup \cdots \cup
B_{\beta,n,J({\cal S},n,N_{{\cal S},n})}$,
the joint distribution of the point sets
$$
r_n^{-1}(-x_{\beta,n,J({\cal S},n,1)} +( {\cal X}_n \cap B_{\beta,n,J({\cal S},n,1)}) )
, \ldots,
r_n^{-1} (-x_{\beta,n,J({\cal S},n,N_{{\cal S},n})} +
({\cal X}_n \cap B_{\beta,n,J({\cal S},n,N_{{\cal S},n})}
))
$$
is that of
$N_{{\cal S},n}$
independent copies of ${\cal U}_{m,\beta}$ each conditioned
to be in ${\cal S} $.
\end{lemm}
{\em Proof.}
Consider first the asymptotics for $\mathbb{E} [N_{{\cal S},n}]$.
Given a finite point set ${\cal X} \subset \mathbb{R}^d$
and a set $B \subset \mathbb{R}^d$, let ${\cal X}(B)$ denote
the number of points of ${\cal X}$ in $B$.
Fix $m$.
Since
$f$ is bounded away from zero and infinity on $A_\beta$ and $g_{n,\beta} \to 1$
uniformly on $A_\beta$,
we have
uniformly over $x \in A_\beta$ that
$$
n \int_{B(x; \beta r_n)} f(y)dy = n f(x)
\int_{B(x; \beta r_n)} (f(y)/f(x))dy \to \beta^d \omega_d \rho f(x).
$$
Hence by
binomial approximation to Poisson,
\begin{eqnarray*}
P [{\cal X}_n(B(x;\beta r_n)) = m ] \to
\frac{ ( \beta^d \omega_d \rho f(x) )^m
\exp( - \beta^d \omega_d \rho f(x) ) }{ m! }
~~~ {\rm as} ~n \to \infty,
\end{eqnarray*}
and this convergence is also uniform over $x \in A_\beta$.
Given $m$ points $X_k$ in $B_{\beta,n,j}$, the probability
that these are all $\beta$-red is at least
$g_{n,\beta}(x_{\beta,n,j})^m$, so exceeds
$\frac{1}{2}$
if $n$ is large enough, since
$g_{n,\beta} \to 1$
uniformly on $A_\beta$.
Given that $m$ of the points $X_k $ lie in $ B_{\beta,n,j} $,
and given that they are all $\beta$-red, their spatial locations are
independently uniformly distributed over
$ B_{\beta,n,j}$; hence the conditional probability that
$r_n^{-1}(- x_{\beta,n,j} + ( {\cal X}_n \cap B_{\beta,n,j} ) )$ lies in $ {\cal S}$
is a strictly positive constant.
These arguments show that
$ \liminf_{n \to \infty} n^{-1} \mathbb{E} [ N_{{\cal S},n} ] > 0$.
They also demonstrate part (ii) in the statement of
the lemma.
Take $\delta > 0$ with
$2 \delta < \liminf_{n \to \infty} n^{-1} \mathbb{E} [N_{{\cal S},n} ] $.
We shall show that $P[N_{{\cal S},n} < \delta n]$ decays exponentially in $n$,
using Lemma \ref{Azulem}.
The variable $N_{{\cal S},n}$ is a function of $n$ independent
identically distributed
triples (marked points) $(X_k,T_k,W_k)$.
Consider the effect of changing the value of one of the marked
points ($(X,T,W)$ to $(X',T',W')$, say). The change could
affect the value of $I_{{\cal S},n,j}$
for at most two values of $j$,
namely the $j$ with $X \in B_{\beta,n,j}$ and the $j'$ with
$X' \in B_{\beta,n,j'}$.
So by Lemma \ref{Azulem},
$$
P[ | N_{{\cal S},n} - \mathbb{E} N_{{\cal S},n} | > \delta n]
\leq 2 \exp ( - \delta^2 n / 8),
$$
and (\ref{1207c}) follows.
$\qed$ \\
{\em Proof of Theorem \ref{finranthm} under condition (i) (finite range
interactions).} Recall that $h(H)$ is given by \eq{limspandef}.
Since condition (i) includes the assumption
that $h_{H({\cal Z}^*_n)} < \infty$ for some
$n$, by Lemma \ref{lemlat}
we have $h(H)< \infty$.
Let $b>0$ with $h(H)|b$. Let $\eps \in ( 0,b)$.
Let $\mu \in \mathbb{N}$, and $K >0$, be
as given by Lemma \ref{lemlat}.
Then $h_{\mu,K} = h(H)$ if $h(H) > 0$, and
$h_{\mu,K} < \eps $ if $h(H) = 0$.
Moreover,
$ H({\cal U}^*_{\mu,K})$ is integrable by
assumption.
Set
\begin{eqnarray}
b_1 :=
\begin{cases}
h_{\mu,K} \lfloor b/h_{\mu,K} \rfloor &
\textrm{~if~} h_{\mu,K} > 0 \\
b
& \textrm{~if~} h_{\mu,K} = 0 .
\end{cases}
\label{b1def}
\end{eqnarray}
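The rounding in \eq{b1def} simply replaces $b$ by the largest multiple of the lattice span $h_{\mu,K}$ not exceeding $b$ (and keeps $b$ itself in the non-lattice case). As an illustrative sketch (the function name is ours):

```python
import math

def b1(b, h_muK):
    # Largest multiple of the span h_{mu,K} that is <= b; if the span is
    # zero (non-lattice case), b itself is kept, as in the displayed cases.
    if h_muK > 0:
        return h_muK * math.floor(b / h_muK)
    return b

print(b1(1.0, 0.25))  # 0.25 * floor(4.0) = 1.0
print(b1(1.0, 0.3))   # 0.3 * 3, i.e. approximately 0.9
print(b1(1.0, 0.0))   # 1.0
```

Note that $b_1 \leq b$ always, and $b_1 = b$ exactly when $h_{\mu,K}$ divides $b$; both facts are used repeatedly below.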
Choose $\tau \in (0, \infty)$ such that \eq{finraneq} holds.
We shall apply Lemma \ref{redlem} with
$\beta = K+ \tau$.
Let ${\cal S}$ be the set of configurations
of $\mu$ points in $B(K+\tau)$ such that
in fact all of the points are in $B(K)$.
By Lemma \ref{redlem},
we can find $\delta > 0$ such that, writing $N_n$ for $N_{{\cal S},n}$,
we have exponential decay of $P[N_n < \delta n]$.
Let $V_1,V_2,\ldots$ be random variables distributed as independent
copies of $H({\cal U}^*_{\mu,K})$, independently of ${\cal X}^*_n$.
Set
\begin{eqnarray*}
S'_n :=
\sum_{\ell=1}^{\min(\lfloor \delta n \rfloor,N_n) } H_n(
{\cal X}^*_n \cap B^*_{K+\tau,n,J({\cal S},n,\ell)} ); ~~~~
S_n = S'_n + \sum_{j=1 }^{(\lfloor \delta n \rfloor -N_n)^+} V_j.
\end{eqnarray*}
Thus, $S'_n$ is the total contribution to $H_n({\cal X}^*_n)$ from points
in $\cup_{\ell=1}^{\min(\lfloor \delta n\rfloor
,N_n) } B^*_{K+\tau,n,J({\cal S},n,\ell)}$.
By Part (ii) of Lemma \ref{redlem},
given that $N_n \geq \delta n$, for each $\ell$
we know that $r_n^{-1}(- x_{\beta,n,J({\cal S},n,\ell)} + {\cal X}^*_n ) \cap
B^*(K+\tau)$
is conditionally distributed as ${\cal U}^*_{\mu,K+ \tau}$ conditioned
on ${\cal U}^*_{\mu,K+\tau} \in {\cal S}$; in other words, distributed
as ${\cal U}^*_{\mu,K}$. Therefore the distribution of $S_n$
is that of the sum of $\lfloor \delta n\rfloor$
independent copies of $H({\cal U}^*_{\mu,K})$,
independent of the contribution of the other points.
Let $Y_n$ denote the contribution of the other points, i.e.
\begin{eqnarray*}
Y_n:= H_n ({\cal X}_n^* ) - S'_n.
\end{eqnarray*}
Since the distribution
of $S_n$, given the value of $Y_n$, does not depend on the value
of $Y_n$, $S_n$ is independent of $Y_n$.
By assumption, $H_n({\cal X}^*_n)$ and $S_n$ are integrable.
Clearly $n^{1/2}P[H_n({\cal X}^*_n) \neq Y_n + S_n]$ is at most
$n^{1/2}P[N_{n} < \delta n]$,
which tends to zero by \eq{1207c}.
Also, by conditioning on $N_{n}$, we have that
\begin{eqnarray}
n^{-1/2}\mathbb{E} [|H_n({\cal X}^*_n) - (Y_n + S_n) |]
= n^{-1/2}\mathbb{E} \left[ \left|\sum_{j=1}^{(\lfloor \delta n \rfloor
- N_{n})^+} V_j \right| \right]
\nonumber \\
\leq n^{-1/2} \mathbb{E} [ (\lfloor \delta n \rfloor - N_n)^+ ]
\mathbb{E} \left[ \left| V_1 \right| \right]
\nonumber \\
\leq n^{-1/2} \lfloor \delta n \rfloor
P[ N_{n} \leq \delta n] \mathbb{E} \left[ \left| V_1 \right| \right],
\label{0113e}
\end{eqnarray}
which tends to zero by \eq{1207c}.
This also shows that $Y_n$ is integrable.
By the assumption \eq{Hclteq},
\begin{eqnarray}
\lfloor\delta n\rfloor^{-1/2}
(H_n({\cal X}^*_n)-\mathbb{E} H_n({\cal X}^*_n)) \tod {\cal N} (0, \delta^{-1} \sigma^2 ),
\label{lim3}
\end{eqnarray}
and so, since $h_{\mu,K}|b_1$,
Theorem \ref{genthm2} is applicable, and
yields
\begin{eqnarray}
\sup_{u \in \mathbb{R} }
\left\{ \left| (\delta n)^{1/2} P[H_n({\cal X}^*_n) \in [u,u+b_1)] -
\delta^{1/2} \sigma^{-1} b_1 \phi \left( \frac{u- \mathbb{E} H_n({\cal X}^*_n) }{(\delta
n)^{1/2} ( \delta^{-1} \sigma^2)^{1/2}}
\right)
\right| \right\} \to 0,
\nonumber \\
\label{0113d}
\end{eqnarray}
and dividing through by $\delta^{1/2}$ gives \eq{1205a}
in all cases where $b = b_1$.
In general, suppose $b \neq b_1$.
Then $h(H) =0$ (else $h_{\mu,K}= h(H)$ and $h(H) |b$,
so $b = b_1$ by \eq{b1def}), and hence
$h_{\mu,K} < \eps$.
Since $b_1 \leq b$ by \eq{b1def},
we have
that
\begin{eqnarray*}
\inf_{u \in \mathbb{R} }
\left\{
n^{1/2} P[H_n({\cal X}^*_n) \in [u,u+b)] -
\sigma^{-1} b \phi \left( \frac{u- \mathbb{E} H_n({\cal X}^*_n) }{
n^{1/2} \sigma}
\right)
\right\}
\\
\geq
\inf_{u \in \mathbb{R} }
\left\{
n^{1/2} P[H_n({\cal X}^*_n) \in [u,u+b_1)] -
\sigma^{-1} b_1
\phi \left( \frac{u- \mathbb{E} H_n({\cal X}^*_n) }{
n^{1/2} \sigma}
\right)
\right\}
\\
+ \sigma^{-1}(b_1-b)
(2 \pi)^{-1/2}
\end{eqnarray*}
so that by \eq{0113d}, since $b_1 \geq b-\eps$,
\begin{eqnarray*}
\liminf_{n \to \infty}
\inf_{u \in \mathbb{R} }
\left\{
n^{1/2} P[H_n({\cal X}^*_n) \in [u,u+b)] -
\sigma^{-1} b \phi \left( \frac{u- \mathbb{E} H_n({\cal X}^*_n) }{
n^{1/2} \sigma}
\right)
\right\}
\geq - \frac{\eps}{\sigma} (2 \pi)^{-1/2}.
\end{eqnarray*}
Similarly, setting
$b_2 := h_{\mu,K} \lceil b/h_{\mu,K} \rceil$,
we have that
\begin{eqnarray*}
\sup_{u \in \mathbb{R} }
\left\{
n^{1/2} P[H_n({\cal X}^*_n) \in [u,u+b)] -
\sigma^{-1} b \phi \left( \frac{u- \mathbb{E} H_n({\cal X}^*_n) }{
n^{1/2} \sigma}
\right)
\right\}
\\
\leq
\sup_{u \in \mathbb{R} }
\left\{
n^{1/2} P[H_n({\cal X}^*_n) \in [u,u+b_2)] -
\sigma^{-1} b_2
\phi \left( \frac{u- \mathbb{E} H_n({\cal X}^*_n) }{
n^{1/2} \sigma}
\right)
\right\}
\\
+ \sigma^{-1}(b_2-b)
(2 \pi)^{-1/2}
\end{eqnarray*}
so that, since $b_2 -b \leq \eps$,
\begin{eqnarray*}
\limsup_{n \to \infty}
\sup_{u \in \mathbb{R} }
\left\{
n^{1/2} P[H_n({\cal X}^*_n) \in [u,u+b)] -
\sigma^{-1} b \phi \left( \frac{u- \mathbb{E} H_n({\cal X}^*_n) }{
n^{1/2} \sigma}
\right)
\right\}
\leq \frac{\eps}{\sigma} (2 \pi)^{-1/2}.
\end{eqnarray*}
Since $\eps >0 $ is arbitrary, this gives us \eq{1205a}.
$\qed$ \\
{\em Proof of Theorem \ref{finranthm} under condition (ii).}
We now assume
that $H$, instead of having finite range,
is given by \eq{induceh} with $\xi$ depending
only on the $\kappa$ nearest neighbours.
Again, by Lemma \ref{lemlat}, $h(H)$,
given by \eq{limspandef}, is finite.
Let $b>0$ with $h(H)|b$. Let $\eps \in ( 0,b)$.
Let $\mu \in \mathbb{N}$ and $K >0$, with $\mu \geq \kappa +1$,
be as given by
Lemma \ref{lemlat}.
Then $h_{\mu,K} = h(H)$ if $h(H) > 0$, and
$h_{\mu,K} < \eps $ if $h(H) = 0$.
Also, $ H({\cal U}^*_{\mu,K}) $ is
integrable, by the integrability assumption
in the statement of the result being proved.
Let ${\cal B}_1,{\cal B}_2,\ldots, {\cal B}_{\nu}$ be a minimal collection of
open balls of radius $K$, each of them centred at a point on
the boundary of $B(4K)$, such that their union contains
the boundary of $B(4K)$.
Let ${\cal B}_0$ be the ball $B(K)$.
We shall apply Lemma \ref{redlem} with $\beta= 5K$,
with $m = (\nu +1) \mu$, and with ${\cal S}$ as follows:
${\cal S}$ is the set of configurations of $m =(\nu +1) \mu$
points in $B(\beta) = B(5K)$ such that each of
${\cal B}_1,\ldots,{\cal B}_{\nu}$ contains at least $\mu$ points,
$\cup_{i=1}^\nu {\cal B}_i$ contains exactly $\nu \mu$ points,
and the ball ${\cal B}_0$ contains exactly $\mu$ points
(so that consequently there are no points in $B(5K)
\setminus \cup_{i=0}^\nu {\cal B}_i$).
A similar construction (using squares rather than balls, and with a diagram)
was given by Avram and Bertsimas \cite{AB} for a related problem.
With this choice of $\beta$ and ${\cal S}$, let the locations
$x_{\beta,n,j} = x_{5K,n,j}$, the balls $B_{\beta,n,j} = B_{5K,n,j}$,
the indicators
$I_{{\cal S},n,j}$, and the variables $N_{{\cal S},n}$ and
$J({\cal S},n,\ell)$ be as described just before
Lemma \ref{redlem}.
By that result, we can (and do) choose $\delta > 0$ such that
\eq{1207c} holds.
For $ 1 \leq \ell \leq N_{{\cal S},n}$, the point process
$r_n^{-1}( - x_{5K,n,J({\cal S},n,\ell)} + ({\cal X}_n \cap B_{5K,n,J({\cal S},n,\ell)})) $
has $\mu $ points within distance
$K$ of the origin, and
also at least $\mu$ points in each of the balls ${\cal B}_1,\ldots,{\cal B}_\nu$.
Since $\mu \geq \kappa +1$, for any point configuration in ${\cal S}$,
each point inside $B(K)$ has its $\kappa$ nearest neighbours
also inside $B(K)$. Also, none of the points in $B(5K) \setminus B(K)$
has any of its $\kappa$ nearest neighbours in $B(K)$.
Finally, any further added point outside $B(5K)$ cannot have
any of its $\kappa$ nearest neighbours inside $B(K)$, since
the line segment from such a point to any point in
$B(K)$ passes through the boundary of $B(4K)$ at a location
inside some
${\cal B}_i$, and any of the $\mu$ or more points inside ${\cal B}_i$ are
closer to the outside point than the point in $B(K)$ is.
To summarise this discussion: the points in $B(K)$ are shielded from those
outside $B(5K)$.
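The geometric inequality behind this shielding can be checked numerically. The sketch below (illustration only; all names are ours) verifies that for $y \in B(K)$ and $z$ outside $B(5K)$, any point within distance $2K$ of the crossing point $w$ of the segment $[y,z]$ with the boundary of $B(4K)$ (hence any point of a radius-$K$ ball ${\cal B}_i$ containing $w$) is strictly closer to $z$ than $y$ is:

```python
import math
import random

# Numerical sanity check of the shielding inequality: |z - p| < |z - y|
# whenever |y| <= K, |z| > 5K, w is on the segment [y, z] with |w| = 4K,
# and |p - w| <= 2K.  (Along the segment, |z - y| = |z - w| + |w - y| and
# |w - y| >= 3K > 2K >= |p - w|.)
def check_shielding(trials, seed, K=1.0):
    rng = random.Random(seed)
    for _ in range(trials):
        y = [rng.uniform(-K / 2, K / 2) for _ in range(2)]    # a point in B(K)
        ang = rng.uniform(0, 2 * math.pi)
        z_rad = rng.uniform(5 * K + 0.01, 10 * K)
        z = [z_rad * math.cos(ang), z_rad * math.sin(ang)]    # outside B(5K)
        lo, hi = 0.0, 1.0                  # bisect for the crossing point w
        for _ in range(60):
            mid = (lo + hi) / 2
            w = [y[k] + mid * (z[k] - y[k]) for k in range(2)]
            if math.hypot(w[0], w[1]) < 4 * K:
                lo = mid
            else:
                hi = mid
        p_ang = rng.uniform(0, 2 * math.pi)
        p_rad = rng.uniform(0, 2 * K)
        p = [w[0] + p_rad * math.cos(p_ang), w[1] + p_rad * math.sin(p_ang)]
        if not math.dist(z, p) < math.dist(z, y):
            return False
    return True

print(check_shielding(1000, 1))
```

The margin in the inequality is at least $K$, so the bisection error is immaterial.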
Given $n$, let
${\cal W}_{(\nu+1)\mu,5K}^{(1)},\ldots,
{\cal W}_{(\nu+1)\mu,5K}^{(\lfloor \delta n \rfloor )}$
be a collection of (marked) point processes, each
distributed as ${\cal U}^*_{(\nu+1)\mu,5K}$ conditioned on
${\cal U}^*_{(\nu+1)\mu,5K} \in {\cal S}$,
independently of each other and of ${\cal X}^*_n$.
For $1 \leq j \leq \lfloor \delta n\rfloor $
set $V_j := H({\cal W}_{(\nu+1)\mu,5K}^{(j)} \cap B^*(K))$,
so that $V_1,V_2,\ldots, V_{\lfloor \delta n\rfloor }$ are
random variables distributed as
independent copies
of $H({\cal U}^*_{\mu,K})$, independent of ${\cal X}^*_n$.
Define $S'_n$ and $S_n$ by
\begin{eqnarray*}
S'_n :=
\sum_{\ell=1}^{\min(\lfloor \delta n \rfloor,N_{{\cal S},n}) }
H_n( {\cal X}^*_n \cap B^*(x_{5K,n,J({\cal S},n,\ell)},K r_n) ) ; ~~~
S_n := S'_n + \sum_{j=1}^{( \lfloor \delta n \rfloor - N_{{\cal S},n})^+ } V_j.
\end{eqnarray*}
Also set
$
Y_n:= H_n({\cal X}^*_n) -S'_n.
$
Thus $S'_n$ is the total contribution to $H_n({\cal X}^*_n)$
from points in $B^*(x_{5K,n,J({\cal S},n,\ell)},Kr_n)$,
$1 \leq \ell \leq \min(\lfloor \delta n\rfloor ,N_{{\cal S},n})$.
On account of the shielding effect described above,
$S_n$ is the
sum of $\lfloor \delta n \rfloor $ independent copies of a random variable
with the distribution of $H({\cal U}^*_{\mu,K})$.
Moreover, we assert that the distribution
of $S_n$, given the value of $Y_n$, does not depend on the value
of $Y_n$, and therefore $S_n$ is independent of $Y_n$.
Essentially, this assertion holds because
for any triple of sub-$\sigma$-algebras
${\cal F}_1,{\cal F}_2,{\cal F}_3$, if ${\cal F}_1 \vee {\cal F}_2$ is independent of
${\cal F}_3$ and ${\cal F}_1$ is independent of ${\cal F}_2$, then ${\cal F}_1$ is independent of
${\cal F}_2 \vee {\cal F}_3$ (here ${\cal F}_i \vee {\cal F}_j$ is the smallest $\sigma$-algebra
containing both ${\cal F}_i$ and ${\cal F}_j$).
In the present instance, to define these $\sigma$-algebras we
first define the marked point processes ${\cal Y}_j $ for
$1 \leq j \leq \lfloor \delta n\rfloor$
by
$$
{\cal Y}_j :=
\begin{cases}
r_n^{-1} ( -x_{5K,n,J({\cal S},n,j)} + ({\cal X}^*_n \cap B^*_{5K,n,J({\cal S},n,j)} ))
&
\textrm{~if~} 1 \leq j \leq \min (\lfloor \delta n \rfloor , N_{{\cal S},n} )
\\
{\cal W}^{(j- N_{{\cal S},n})}_{(\nu +1)\mu,5K}
& \textrm{~if~}
N_{{\cal S},n} < j \leq \lfloor \delta n \rfloor .
\end{cases}
$$
Take ${\cal F}_3$ to be the
$\sigma$-algebra generated by
the values of $J({\cal S},n,1), \ldots,$
\linebreak
$ J({\cal S},n,\min(\lfloor \delta n\rfloor ,N_{{\cal S},n}))$ and the
locations and marks of points of ${\cal X}_n$ outside the union of the balls
$B_{5K,n,J({\cal S},n,1)}, \ldots,$ $ B_{5K,n,J({\cal S},n,\min(\lfloor
\delta n \rfloor,N_{{\cal S},n}))}$.
Take ${\cal F}_2$ to be the
$\sigma$-algebra generated by the point processes
$ {\cal Y}_{j} \cap (B^*(5K) \setminus B^*(K)), 1 \leq j \leq \lfloor \delta n \rfloor$.
Take ${\cal F}_1$ to be the
$\sigma$-algebra generated by the point processes
${\cal Y}_{j} \cap B^*(K), 1 \leq j \leq \lfloor \delta n \rfloor$.
Then by Lemma \ref{redlem} and the definition of ${\cal S}$,
${\cal F}_1 \vee {\cal F}_2$ is independent of
${\cal F}_3$ and ${\cal F}_1$ is independent of ${\cal F}_2$, so ${\cal F}_1$ is independent of
${\cal F}_2 \vee {\cal F}_3$.
The variable $S_n$ is measurable with respect to ${\cal F}_1$, and
by shielding, the variable $Y_n$ is measurable with respect to ${\cal F}_2 \vee {\cal F}_3$,
justifying our assertion of independence.
By the assumptions of the result being proved,
$H_n({\cal X}^*_n)$ and $S_n$ are integrable.
Clearly $n^{1/2}P[H_n({\cal X}^*_n) \neq Y_n + S_n]$ is at most
$n^{1/2}P[N_{{\cal S},n} < \delta n]$,
which tends to zero. Also, as
with \eq{0113e} in Case (i), we have that
$n^{-1/2}\mathbb{E} [|H_n({\cal X}^*_n) - (Y_n + S_n) |]$
tends to zero by \eq{1207c}, and $Y_n$ is
integrable. By \eq{Hclteq},
\begin{eqnarray}
\lfloor \delta n\rfloor^{-1/2}
(H_n({\cal X}^*_n) -\mathbb{E} H_n({\cal X}^*_n) ) \tod {\cal N} (0, \delta^{-1} \sigma^2 ),
\end{eqnarray}
and so, since $h_{\mu,K}|b_1$,
Theorem \ref{genthm2} is applicable
with $Z_n = H_n({\cal X}^*_n)$, yielding
\begin{eqnarray*}
\sup_{u \in \mathbb{R} }
\left\{ \left| (\delta n)^{1/2} P[H_n({\cal X}^*_n) \in [u,u+b_1)] -
\delta^{1/2} \sigma^{-1} b_1
\phi \left(\frac{u - \mathbb{E} H_n({\cal X}^*_n)}{
(\delta n)^{1/2} \delta^{-1/2} \sigma}
\right)
\right|
\right\}
\to 0
\end{eqnarray*}
as $ n \to \infty$.
Multiplying through by $\delta^{-1/2}$
yields \eq{1205a} for this case when $b_1 =b$. If $b_1 \neq b$,
we can complete the proof in the same manner as in the proof
for Case (i). $\qed$
\section{Proof of Theorems
\ref{fin0theo}, \ref{fintheo} and \ref{nnlclt}}
\label{secusePEJP}
\eqco \thco \prco \laco \coco \cjco \deco
The proofs of Theorems \ref{fin0theo}, \ref{fintheo} and \ref{nnlclt}
all rely heavily on
Theorem 2.3 of \cite{PenEJP}, so for convenience we state
that result here in the form in which we shall use it.
This requires some further notation, beyond the notation
we set up earlier in Section \ref{secstogeo}.
As before, we assume $\xi({\bf x},{\cal X}^*)$
is a
translation-invariant,
measurable $\mathbb{R}$-valued function
defined for all pairs
$({\bf x},{\cal X}^*)$, where ${\cal X}^* \subset \mathbb{R}^d\times {\cal M}$
is finite and ${\bf x}$
is an element of ${\cal X}^*$.
We extend the definition of $\xi({\bf x},{\cal X}^*)$
to the case where ${\cal X}^* \subset \mathbb{R}^d\times {\cal M}$
and ${\bf x} \in ( \mathbb{R}^d \times {\cal M}) \setminus {\cal X}^*$,
by setting $\xi({\bf x},{\cal X}^*)$
to be $\xi({\bf x},{\cal X}^* \cup \{{\bf x}\})$ in this case.
Recall that
$H^{(\xi)}$ is defined by \eq{induceh}.
Let $T$ be an ${\cal M}$-valued random variable with
distribution $\mathbb{P}_{\cal M}$, independent of everything else.
For $\lambda >0$ let
$M_{\lambda}$ be a Poisson variable with parameter $\lambda$,
independent of everything else, and let ${\cal P}_{\lambda}$ be the
point process $\{X_1,\ldots,X_{M_{\lambda}}\}$, which is a Poisson
point process with intensity $\lambda f(\cdot)$.
Let ${\cal P}_\lambda^* := \{(X_1,T_1),\ldots,
(X_{M_{\lambda}},T_{M_\lambda})\}$ be the corresponding
marked Poisson process.
Given $\lambda >0$,
we say $\xi$ is {\em $\lambda$-homogeneously stabilizing} if
there is an almost surely finite positive random variable $R$ such that
with probability 1,
$$
\xi((0,T); ({\cal H}_\lambda^* \cap B^*(0;R)) \cup {\cal Y} )
=
\xi((0,T); {\cal H}_\lambda^* \cap B^*(0;R))
$$
for all finite ${\cal Y} \subset (\mathbb{R}^d \setminus B(0;R)) \times {\cal M}$.
Recall that ${\rm supp}(f)$ denotes the support of $f$.
We say that $\xi$ is {\em exponentially stabilizing} if
for $\lambda \geq 1$ and $x \in {\rm supp}(f)$ there exists
a random variable $R_{x,\lambda}$ such that
\begin{eqnarray*}
\xi((\lambda^{1/d}x,T);
\lambda^{1/d} ({\cal P}_{\lambda}^*
\cap B^*(x;\lambda^{-1/d} R_{x,\lambda} )) \cup {\cal Y} )
\\
=
\xi((\lambda^{1/d}x,T);
\lambda^{1/d} ( {\cal P}_{\lambda}^* \cap B^*(x;\lambda^{-1/d} R_{x,\lambda} ) ))
\end{eqnarray*}
for all finite ${\cal Y} \subset (\mathbb{R}^d \setminus B(x;
\lambda^{-1/d} R_{x,\lambda}) ) \times {\cal M}$,
and there exists a finite positive
constant $C $ such that
$$
P[ R_{x,\lambda} >s] \leq C \exp(-C^{-1}s), ~~~ s \geq 1, ~
\lambda \geq 1,~ x \in {\rm supp}(f).
$$
For $k \in \mathbb{N} \cup \{0\} $,
let ${\cal T}_k$ be the collection of all
subsets of ${\rm supp}(f)$ with at most $k$ elements.
For $k \geq 1$ and
${\cal A} =\{x_1,\ldots,x_k\} \in {\cal T}_k \setminus {\cal T}_{k-1}$,
let ${\cal A}^*$ be the corresponding marked point set
$\{(x_1,T_1),\ldots,(x_k,T_k)\}$, where
$T_1,\ldots,T_k$ are independent ${\cal M}$-valued
variables with distribution $\mathbb{P}_{\cal M}$, independent of everything else.
If ${\cal A} \in {\cal T}_0$ (so ${\cal A} = \emptyset$),
let ${\cal A}^*$ also be the empty set.
We say that $\xi$ is {\em binomially exponentially
stabilizing} if
there exist finite positive constants $C,\eps $ such that
for all $x \in {\rm supp}(f)$,
all $\lambda \geq 1 $,
all $n \in \mathbb{N} \cap ( (1-\eps) \lambda, (1+\eps)\lambda) $,
and all ${\cal A} \in {\cal T}_2$,
there is a random variable $R_{x,\lambda,n,{\cal A}}$ such that
\begin{eqnarray}
\xi((\lambda^{1/d}x,T);
\lambda^{1/d} (({\cal X}_n^* \cup {\cal A}^*)
\cap B^*(x;\lambda^{-1/d} R_{x,\lambda,n,{\cal A}} )) \cup {\cal Y} )
\nonumber \\
=
\xi((\lambda^{1/d}x,T);
\lambda^{1/d} ( ({\cal X}_{n}^* \cup {\cal A}^*) \cap B^*(x;\lambda^{-1/d} R_{x,\lambda,n,{\cal A}} ) ))
\label{BiRS}
\end{eqnarray}
for all finite ${\cal Y} \subset (\mathbb{R}^d \setminus B(x;
\lambda^{-1/d} R_{x,\lambda,n,{\cal A}}) ) \times {\cal M}$, and such that for
all $\lambda \geq 1$,
all $n \in \mathbb{N} \cap ( (1-\eps) \lambda, (1+\eps)\lambda) $,
all $x \in {\rm supp}(f)$ and all ${\cal A} \in {\cal T}_2$,
$$
P[ R_{x,\lambda,n,{\cal A}} >s] \leq C \exp(-C^{-1}s), ~~~ s \geq 1.
$$
Given
$p >0$ and
$\eps >0$, we
consider the moments conditions
\begin{eqnarray}
\label{Pomoments}
\sup_{\lambda \geq 1, x \in {\rm supp}(f), {\cal A} \in {\cal T}_1} \mathbb{E}[
| \xi((\lambda^{1/d}x,T); \lambda^{1/d} ({\cal P}^*_\lambda \cup {\cal A}^* ))|^p]
< \infty
\end{eqnarray}
and
\begin{eqnarray}
\label{Bimoments}
\sup_{\lambda \geq 1, n \in \mathbb{N} \cap ((1-\eps)\lambda,(1+\eps)\lambda) ,
x \in {\rm supp}(f), {\cal A} \in {\cal T}_3} \mathbb{E}[|
\xi((\lambda^{1/d}x,T); \lambda^{1/d} ( {\cal X}_n^* \cup {\cal A}^* ))|^p]
< \infty.
\end{eqnarray}
\begin{theo}
\label{lemPEJP}
Suppose $H = H^{(\xi)} $ is induced by a translation-invariant
$\xi$. Suppose that $\xi$ is
$f(x)$-homogeneously
stabilizing for Lebesgue-almost all $x \in {\rm supp}(f)$, that
$\xi$ is exponentially stabilizing and binomially exponentially
stabilizing, and that for some $\eps >0$ and $p >2$ it
satisfies \eq{Pomoments}
and \eq{Bimoments}.
Suppose $f_{{\rm max}} < \infty$ and ${\rm supp}(f)$ is bounded.
Suppose $(\lambda(n),n \geq 1)$
is a sequence taking values in $\mathbb{R}^+$
with
$|\lambda(n)-n| = O(n^{1/2})$
as $n \to \infty$. Then
there exists $\sigma \geq 0$ such that
$$
n^{-1/2} (H^{(\xi)}( \lambda(n)^{1/d} {\cal X}^*_n)
- \mathbb{E} H^{(\xi)} (\lambda(n)^{1/d} {\cal X}^*_n)) \tod {\cal N} (0,\sigma^2),
$$
and
$n^{-1}{\rm Var}(H^{(\xi)}(\lambda(n)^{1/d} {\cal X}^*_n)) \to \sigma^2$
as $n \to \infty$.
\end{theo}
Theorem \ref{lemPEJP} is a special case of Theorem 2.3 of \cite{PenEJP},
which also provides an expression for $\sigma$
in terms of integrated two-point correlations;
that paper considers random measures
given by a sum of contributions from each point,
whereas here we consider only the total measure. The sets
$\Omega_\infty$ and (for all $\lambda \geq 1$) $\Omega_\lambda $
in \cite{PenEJP} are taken to be
${\rm supp}(f)$. Our $\xi$ is translation invariant,
and these assumptions lead to some simplification
of the notation in \cite{PenEJP}. \\
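The variance convergence $n^{-1}{\rm Var}(H) \to \sigma^2$ in Theorem \ref{lemPEJP} can be observed empirically for a simple finite-range functional. The Monte Carlo sketch below (illustration only; the functional, parameters and names are our own choices, not from the theorem) counts pairs within distance $\tau n^{-1/2}$ among $n$ uniform points in the unit square ($d=2$, so $r_n \sim n^{-1/2}$):

```python
import math
import random

# H_n = number of pairs of points within distance r = tau * n^(-1/2);
# a finite-range, translation-invariant functional in the thermodynamic
# scaling, for which n^(-1) Var(H_n) approaches a positive limit.
def H(points, r):
    c = 0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if dx * dx + dy * dy <= r * r:
                c += 1
    return c

def normalized_variance(n, tau=1.0, reps=200, seed=0):
    rng = random.Random(seed)
    r = tau / math.sqrt(n)
    vals = [H([(rng.random(), rng.random()) for _ in range(n)], r)
            for _ in range(reps)]
    m = sum(vals) / reps
    return sum((v - m) ** 2 for v in vals) / (reps - 1) / n

print(normalized_variance(100))
```

For moderate $n$ the normalized variance is already stable and strictly positive, consistent with $\sigma > 0$ for such a functional.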
{\bf Proof of Theorem \ref{fin0theo}.}
The condition that $\xi({\bf x};{\cal X}^*)$ has finite range
implies that $H = H^{(\xi)} $ has finite range interactions.
Since $\xi$ has finite range $r$,
$\xi$ is $\lambda$-homogeneously stabilizing for all $\lambda >0$,
exponentially stabilizing and binomially exponentially
stabilizing
(just take $R = r$, $R_{x,\lambda}=r$ and $R_{x,\lambda,n,{\cal A}}=r$).
We shall establish \eq{Hclteq} by applying
Theorem \ref{lemPEJP}. We need to check the
moments conditions \eq{Pomoments} and \eq{Bimoments}
in the present setting.
Since we assume that $f_{{\rm max}}< \infty$,
for any $\lambda >0$, any $n \in \mathbb{N}$ with
$n \leq 2 \lambda$, and any $x \in {\rm supp}(f)$, the variable
${\rm card}({\cal X}_n^* \cap B^*(x;r \lambda^{-1/d}))$
is binomially distributed with mean at most $\omega_d f_{{\rm max}} 2 r^d$.
Hence by Lemma \ref{PenL1.1}, there is a
constant $C$ such that whenever $n \leq 2 \lambda$ and
$x \in {\rm supp}(f)$ we have
\begin{eqnarray}
P[ {\rm card} ({\cal X}_n^* \cap B^*(x;r \lambda^{-1/d})) > u ]
\leq C \exp(- u/C), ~~~ u \geq 1.
\label{0721a}
\end{eqnarray}
Moreover, by \eq{polyxi}
and the assumption that $\xi$ has range $r$,
for ${\cal A} \in {\cal T}_3$
we have
\begin{eqnarray*}
\mathbb{E} [\xi((\lambda^{1/d}x,T);\lambda^{1/d}({\cal X}^*_n \cup {\cal A}^* ) )^4]
\leq \gamma^4 \mathbb{E}[ (4+ {\rm card} ({\cal X}_n^* \cap B^*(x;r \lambda^{-1/d})))^{4 \gamma} ],
\end{eqnarray*}
so by \eq{0721a} we can bound the fourth moments of
$\xi((\lambda^{1/d}x,T);\lambda^{1/d}({\cal X}^*_n \cup {\cal A}^* ) )$
uniformly over $(x,\lambda,n, {\cal A}) \in {\rm supp}(f) \times [1,\infty)
\times \mathbb{N} \times {\cal T}_3$ with $n \leq 2 \lambda$.
This gives us \eq{Bimoments} (for $p=4$ and $\eps = 1/2$),
and \eq{Pomoments}
may be deduced similarly.
Hence, the assumptions
of Theorem \ref{lemPEJP} are satisfied, with $\lambda(n)$
in that result given by $\lambda(n) = r_n^{-d}$.
By Theorem \ref{lemPEJP}, for some
$\sigma \geq 0$ we have \eq{Hclteq} and
\eq{varconv}. Then by Theorem \ref{finranthm},
we can deduce that $\sigma >0$, that $h(H) < \infty$, and that
\eq{1205a} holds whenever $h(H) | b$.
$\qed$ \\
{\bf Proof of Theorem \ref{fintheo}.}
Under condition \eq{finraneq}, the
functional $H({\cal X}^*)$ can be expressed as
a sum of contributions from components of the geometric
(Gilbert) graph ${\cal G}({\cal X},\tau)$, where ${\cal X} := \pi({\cal X}^*)$
is the unmarked point set
corresponding to ${\cal X}^*$
(recall that $\pi$ denotes the canonical projection from
$\mathbb{R}^d \times {\cal M}$ onto $\mathbb{R}^d$).
Hence, $H({\cal X}^*)$ can be written as $H^{(\xi)}({\cal X}^*)$,
where $\xi({\bf x};{\cal X}^*)$ denotes the contribution to $H({\cal X}^*)$ from the
component containing $\pi({\bf x})$, divided by the number of vertices
in that component.
Then $\xi({\bf x};{\cal X}^*)$ is unaffected by changes to ${\cal X}^*$ that do not
affect the component of ${\cal G}({\cal X},\tau)$ containing $\pi({\bf x})$, and
we shall use this to demonstrate that the
conditions of Theorem \ref{lemPEJP} hold,
as follows
(the argument is similar to that in Section 11.1 of \cite{Penbk}).
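This decomposition of a component-wise functional into per-vertex contributions can be sketched concretely. The code below (illustration only; we take the component contribution to be its edge count, a hypothetical choice) computes $\xi$ via union-find on the Gilbert graph and checks that the values sum back to $H$:

```python
import math

# Components and edges of the Gilbert graph G(X, tau) via union-find.
def components(points, tau):
    parent = list(range(len(points)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= tau:
                edges.append((i, j))
                parent[find(i)] = find(j)
    comp = {}
    for i in range(len(points)):
        comp.setdefault(find(i), []).append(i)
    return comp, edges

# xi(x; X) = (contribution of x's component) / (number of vertices in it),
# so that sum of xi over all vertices equals the total functional H.
def xi_values(points, tau):
    comp, edges = components(points, tau)
    root_of = {i: r for r, vs in comp.items() for i in vs}
    edge_count = {r: 0 for r in comp}
    for i, j in edges:
        edge_count[root_of[i]] += 1
    return [edge_count[root_of[i]] / len(comp[root_of[i]])
            for i in range(len(points))]

pts = [(0.0, 0.0), (0.1, 0.0), (0.05, 0.08), (0.9, 0.9)]
print(sum(xi_values(pts, 0.15)))  # -> 3.0, the total edge count H
```

Changing points far from the component containing a given vertex leaves that vertex's $\xi$-value untouched, which is the stabilization property exploited below.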
Consider first the homogeneous stabilization
condition.
For $\lambda >0$, let $R(\lambda)$ be
the maximum Euclidean distance from
the origin of vertices in the graph
${\cal G}({\cal H}_{\lambda}\cup \{0\}, \tau)$
that are pathwise connected to the origin,
which by scaling (see the Mapping Theorem in
\cite{Kingman}) has the same distribution as
$\tau $ times the maximum Euclidean distance from
the origin of vertices in ${\cal G}({\cal H}_{\tau^d \lambda} \cup \{0\},1)$
that are pathwise connected to the origin.
Then
$R(\lambda)$ is almost surely finite for any
$\lambda \in (0,\tau^{-d}\lambda_c)$.
Changes to ${\cal H}_{\lambda}$ at a distance more than $R(\lambda) + \tau$
from the origin do not affect the component of
${\cal G}({\cal H}_{\lambda} \cup \{0\},\tau)$ containing
the origin, and therefore do not affect $\xi((0,T);{\cal H}^*_{\lambda})$.
This shows that
$\xi$ is $\lambda$-homogeneously stabilizing
for any $\lambda < \tau^{-d} \lambda_c$, and therefore,
by assumption \eq{subcrit},
the homogeneous stabilization condition of Theorem \ref{lemPEJP}
holds.
Next we consider the binomial stabilization condition.
Let $x \in {\rm supp}(f) $.
Let $R_{x,\lambda,n}$ be equal to $\tau$ plus
the maximum Euclidean distance from $\lambda^{1/d}x$
of vertices in ${\cal G}(\lambda^{1/d}({\cal X}_n \cup \{x\}),\tau)$
that are pathwise connected to $\lambda^{1/d}x$.
Changes to ${\cal X}_n$ at a Euclidean distance greater than
$\lambda^{-1/d} R_{x,\lambda,n}$
from $x$ have no effect on $\xi((\lambda^{1/d}x,T);
\lambda^{1/d}{\cal X}_n^* )$.
Using \eq{subcrit}, choose $\eps \in (0,1/2)$ with $(1 + \eps)^2 \tau^d
f_{{\rm max}} < \lambda_c$.
The Poisson point process ${\cal P}_{n(1+\eps)}:=
\{X_1,\ldots,X_{M_{n(1+\eps)}}\}$
is stochastically dominated by ${\cal H}_{n f_{{\rm max}} (1+\eps)}$
(we say a point process ${\cal X}$ is stochastically dominated
by a point process ${\cal Y}$ if there exist coupled
point processes ${\cal X}', {\cal Y}'$ with ${\cal X}' \subset {\cal Y}'$
almost surely, with ${\cal X}'$ having the distribution
of ${\cal X}$ and ${\cal Y}'$ having the distribution of ${\cal Y}$).
Hence by scaling, $\lambda^{1/d} {\cal P}_{n(1+\eps)}$ is stochastically
dominated by ${\cal H}_{n f_{{\rm max}} (1+\eps)/ \lambda}$,
and hence for $n \leq \lambda (1+\eps) $ we have that
$\lambda^{1/d} {\cal P}_{n(1+\eps)}$ is stochastically
dominated by ${\cal H}_{ f_{{\rm max}} (1+\eps)^2 }$. Therefore
for $u >0$,
\begin{eqnarray}
P[ R_{x,\lambda,n} > u]
\leq
P[ M_{n(1+\eps)} < n ]
+ P[ R((1+ \eps)^2 f_{{\rm max}}) > u- \tau].
\label{0713a}
\end{eqnarray}
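The stochastic domination used here is realized by the standard thinning coupling: sample the higher-intensity process and retain each point independently with probability equal to the intensity ratio, so the retained points form the lower-intensity process as a subset by construction. A minimal sketch (names are ours; counts only, ignoring marks and locations):

```python
import math
import random

# Knuth's algorithm for sampling a Poisson(lam) count with stdlib only.
def poisson_knuth(lam, rng):
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Thinning coupling: realize a Poisson(lam1) count inside a Poisson(lam2)
# count (lam1 <= lam2) by keeping each point with probability lam1/lam2.
def coupled_counts(lam1, lam2, rng):
    n2 = poisson_knuth(lam2, rng)
    n1 = sum(1 for _ in range(n2) if rng.random() < lam1 / lam2)
    return n1, n2

rng = random.Random(3)
for _ in range(500):
    n1, n2 = coupled_counts(2.0, 5.0, rng)
    assert n1 <= n2
print("coupling gives n1 <= n2 on every draw")
```

The containment ${\cal X}' \subset {\cal Y}'$ holds surely under this coupling, which is what makes monotone events transfer between the two processes.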
By scaling, the second probability
in \eq{0713a} equals the probability that there is a path
from the origin in ${\cal G}({\cal H}_{\tau^d (1+ \eps)^2 f_{{\rm max}}} \cup \{0\},1)$
to a point at Euclidean distance
greater than $\tau^{-1}u -1$ from the origin.
By the exponential decay for subcritical continuum percolation
(see e.g. Lemma 10.2 of \cite{Penbk}),
this probability decays exponentially in $u$ (and does
not depend on $n$).
Let $\Delta:= {\rm diam}({\rm supp}(f))$
(here assumed finite).
By Lemma \ref{PenL1.1}, the first term
on the right hand side of \eq{0713a}
decays exponentially in $n$. Hence,
there is a finite positive constant $C$, independent of $\lambda$, such that
provided $n > (1-\eps) \lambda^{1/d}$, we have
for all $ u \leq \lambda^{1/d} (\Delta + \tau)$ that
$$
P[ M_{n(1+\eps)} < n ] \leq C \exp (- C^{-1} \lambda^{1/d})
\leq C \exp (- ((\Delta +\tau)C)^{-1} u) .
$$
On the other hand, $P[ R_{x,\lambda,n} > u]=0$
for $u > \lambda^{1/d}(\Delta + \tau)$. Combined with \eq{0713a}, this shows that
there is a constant $C$ such that for all
$(x,n,\lambda,u) \in {\rm supp}(f) \times \mathbb{N} \times [1,\infty)^2 $
with $n \leq (1+\eps) \lambda$, we have
\begin{eqnarray}
P[ R_{x,\lambda,n} > u]
\leq C \exp(- u/C).
\label{0715a}
\end{eqnarray}
Now suppose ${\cal A} \in {\cal T}_3 $ and $x \in {\rm supp}(f) $.
Let $R_{x,\lambda,n,{\cal A}}$ be equal to $\tau$ plus
the maximum Euclidean distance from $\lambda^{1/d}x$
of vertices in ${\cal G}(\lambda^{1/d}({\cal X}_n \cup {\cal A} \cup \{x\}),\tau)$
that are pathwise connected to $\lambda^{1/d}x$.
Changes to ${\cal X}_n \cup {\cal A}$ at a Euclidean distance greater than
$\lambda^{-1/d} R_{x,\lambda,n,{\cal A}}$
from $x$ have no effect on $\xi((\lambda^{1/d}x,T);
\lambda^{1/d}({\cal X}_n^* \cup {\cal A}^*) )$; that is,
\eq{BiRS} holds.
To check the tail behaviour of $R_{x,\lambda,n,{\cal A}}$,
suppose for example that ${\cal A}$ has three elements,
$x_1$, $x_2$ and $x_3$.
Then it is not hard to see that
$$
R_{x,\lambda,n,{\cal A}}
\leq
R_{x,\lambda,n}
+ R_{x_1,\lambda,n}
+ R_{x_2,\lambda,n}
+ R_{x_3,\lambda,n},
$$
and likewise when ${\cal A}$ has fewer than three elements.
Using this together with \eq{0715a}, it is easy to
deduce that there is a constant $C$ such that
for all
$(x,n,{\cal A},\lambda,u) \in {\rm supp}(f) \times \mathbb{N} \times {\cal T}_3
\times [1,\infty)^2 $
with $n \leq (1+\eps) \lambda$, we have
\begin{eqnarray}
P[ R_{x,\lambda,n,{\cal A}} > u]
\leq C \exp(- u/C).
\label{0715a2}
\end{eqnarray}
In
other words, $\xi$ is binomially exponentially stabilizing.
Next we check the moments condition \eq{Bimoments},
with $p=4$ and with the same choice of $\eps$ as before.
By our definition of $\xi$ and the growth bound
\eq{polybd}, we have
for all $(x,n, {\cal A}, \lambda) \in {\rm supp}(f) \times \mathbb{N} \times
{\cal T}_3 \times [1,\infty) $ with $n \leq \lambda(1+ \eps)$
that
\begin{eqnarray}
\mathbb{E}[ \xi(( \lambda^{1/d} x,T); \lambda^{1/d} ({\cal X}_n^* \cup {\cal A}^*)) ^4]
\leq
\gamma^4 \mathbb{E}[ ( {\rm card} ( {\cal C}) + {\rm diam} ({\cal C}) )^{4\gamma} ],
\label{0715b}
\end{eqnarray}
where ${\cal C}$ is the vertex set of the
component of ${\cal G}(\lambda^{1/d}({\cal X}_n \cup {\cal A} \cup \{x\}),
\tau)$ containing $\lambda^{1/d} x$.
By \eq{0715a2}, there is a constant $C$ such
that for all $(x,n, {\cal A}, \lambda,u) \in {\rm supp}(f) \times \mathbb{N}
\times {\cal T}_3 \times [1,\infty)^2 $ with $n \leq \lambda(1+ \eps)$ we have
\begin{eqnarray}
P[ {\rm diam} ({\cal C}) >u ] \leq C \exp(-u/C);
\label{0715c}
\end{eqnarray}
moreover,
\begin{eqnarray}
P[
{\rm card} ({\cal C}) >u ] \leq
P[{\rm diam} ({\cal C}) >u^{1/(2d)} ] +
P [ {\rm card} ( {\cal X}_n \cap B(x; \lambda^{-1/d} u^{1/(2d)})) > u-4]
\label{0719a}
\end{eqnarray}
and the first term on the right hand
side of \eq{0719a} decays exponentially in $u^{1/(2d)}$
by \eq{0715c}.
Since
$ {\rm card} ( {\cal X}_n \cap B(x; \lambda^{-1/d} u^{1/(2d)})) $
is binomially distributed with
$$
\mathbb{E} [ {\rm card} ( {\cal X}_n \cap B(x; \lambda^{-1/d} u^{1/(2d)})) ]
\leq u^{1/2} \omega_d f_{{\rm max}} n/\lambda,
$$
by Lemma \ref{PenL1.1} there is a constant $C$ such that
for all $(x,n,\lambda,u)$ with $n \leq \lambda(1+\eps)$
we have that
$$
P[ {\rm card} ( {\cal X}_n \cap B(x; \lambda^{-1/d} u^{1/(2d)})) > u-4]
\leq C \exp (- C^{-1} u^{1/2} ).
$$
Thus by \eq{0719a} there is a constant, also denoted $C$, such that
for all $(x,n,{\cal A},\lambda,u)$ with $n \leq \lambda(1+ \eps)$ we have
$$
P[{\rm card} ({\cal C}) >u ] \leq C \exp ( -C^{-1} u^{1/(2d)}),
$$
and combining this with \eq{0715c} and
using \eq{0715b} gives us a uniform tail bound
which is enough to ensure \eq{Bimoments}.
The argument for \eq{Pomoments} is similar.
Thus our $\xi$ satisfies all the assumptions of
Theorem \ref{lemPEJP}, and we can deduce \eq{Hclteq} and
\eq{varconv} for some $\sigma \geq 0$ by applying
that result with $\lambda(n) = r_n^{-d}$.
Then by applying Theorem \ref{finranthm},
we can deduce that $\sigma >0$ and $h(H) < \infty$,
and that \eq{1205a} holds whenever $h(H)|b$.
$\qed$ \\
{\bf Proof of Theorem \ref{nnlclt}.}
Suppose the hypotheses of Theorem \ref{nnlclt}
hold, and
assume without loss of generality
that $\xi({\bf x},{\cal X}^*)=0$ whenever ${\cal X}^* \setminus \{{\bf x}\}$
has fewer than $\kappa$ elements.
We assert that under these hypotheses,
there exists a constant $C$ such that
for all $(x,n,\lambda,u) \in {\rm supp}(f) \times \mathbb{N} \times [1,\infty)^2$
with $n \in [\lambda/2,3\lambda/2]$
and $n \geq \kappa$,
we have
\begin{eqnarray}
P[ \lambda^{1/d} R_\kappa( ( x,T);
{\cal X}_n^*) > u] \leq C \exp (-C^{-1} u ).
\label{0714a}
\end{eqnarray}
Indeed, if ${\rm supp}(f)$ is a compact convex
region in $\mathbb{R}^d$ and $f$ is bounded away from zero on ${\rm supp}(f)$,
then \eqref{0714a} is demonstrated in Section 6.3 of \cite{PenEJP},
while if $ {\rm supp}(f)$ is
a compact $d$-dimensional submanifold-with-boundary of
$\mathbb{R}^d$, and $f$ is bounded away from zero on ${\rm supp}(f)$,
then \eqref{0714a} comes from the proof of
Lemma 6.1 of \cite{PYmfld}.
It is easy to see that $\xi$ is
$\lambda$-homogeneously stabilizing for all $\lambda >0$.
Also, for any $(x,{\cal A}) \in {\rm supp}(f) \times {\cal T}_3$ we clearly have
$R_\kappa ((x,T); {\cal X}^* \cup {\cal A}^*) \leq
R_\kappa ((x,T); {\cal X}^* ) $, and hence by
\eqref{0714a}, $\xi$ is binomially exponentially
stabilizing;
exponential stabilization follows from a similar
estimate with a Poisson sample.
We need to check the moments conditions in order
to deduce \eqref{Hclteq} via
Theorem \ref{lemPEJP}.
With $\gamma$ as in the growth bound \eqref{polynbr}, we claim that
there is a constant $C$ such that
for all $(x,n, {\cal A} ,\lambda,u) \in {\rm supp}(f) \times \mathbb{N} \times {\cal T}_3
\times [1,\infty)^2$ with
$\lambda/2 \leq n \leq 3\lambda/2$
and $n \geq \kappa$, we have
\begin{eqnarray}
P [| \xi((\lambda^{1/d} x,T);\lambda^{1/d} (
{\cal X}^*_n \cup {\cal A}^* ))| > u]
\leq
P [ \gamma ( 1 + \lambda^{1/d} R_\kappa((x,T),{\cal X}_n^*) )^\gamma > u ]
\nonumber
\\
\leq C \exp( - C^{-1} u^{1/\gamma}).
\label{0714b}
\end{eqnarray}
Indeed, the first bound comes from
the growth bound \eqref{polynbr}, and the second bound comes from
\eqref{0714a}. Using \eqref{0714b},
we can deduce the moments bound
\eqref{Bimoments} for $p=4$ and $\varepsilon =1/2$.
We can derive \eqref{Pomoments} similarly.
Thus Theorem \ref{lemPEJP}
is applicable, and enables us
to deduce \eqref{Hclteq} and \eqref{varconv} for some $\sigma \geq 0$,
in the present setting.
Then by using Theorem \ref{finranthm},
we can deduce that $\sigma >0$ and $h(H) >0$ and
\eqref{1205a} holds whenever $h(H) \mid b$.
$\qed $ \\
{\bf Acknowledgments.} We thank
the Oberwolfach Mathematical Research Institute
for hosting
the 2008 workshop `New Perspectives in Stochastic Geometry',
at which
this work was started. We also thank Antal J\'arai for helpful discussions.
\begin{thebibliography}{}
\bibitem{AB} Avram, F. and Bertsimas, D.
(1993). On central limit theorems in geometrical probability. {\em
Ann. Appl. Probab.} {\bf 3}, 1033--1046.
\bibitem{BPY} Baryshnikov, Yu., Penrose, M. D. and Yukich, J. E. (2009).
Gaussian limits for generalized spacings. {\em Ann.
Appl. Probab.} {\bf 19}, 158--185.
\bibitem{BY}
Baryshnikov, Yu. and Yukich, J. E. (2005).
Gaussian limits for random measures in geometric probability.
{\em Ann. Appl. Probab.} {\bf 15}, 213--253.
\bibitem{Be} Bender, E. A. (1973).
Central and local limit theorems applied to asymptotic enumeration.
{\em J. Combinatorial Theory} {\bf A 15}, 91--111.
\bibitem{BB}
Bickel, P. J. and Breiman, L. (1983).
Sums of functions of nearest neighbor distances, moment bounds,
limit theorems and a goodness of fit test.
{\em Ann. Probab.} {\bf 11}, 185--214.
\bibitem{Breiman}
Breiman, L. (1992). {\em Probability.}
SIAM, Philadelphia.
\bibitem{Chatt} Chatterjee, S. (2008). A new method of normal
approximation. {\em Ann. Probab.} {\bf 36}, 1584--1610.
\bibitem{DMcD}
Davis, B. and McDonald, D.
(1995).
An elementary proof of the local central limit theorem.
{\em J. Theoret. Probab.} {\bf 8}, 693--701.
\bibitem{Durr}
Durrett, R. (1996).
{\em Probability: Theory and Examples.} 2nd Edition, Wadsworth, Belmont, CA.
\bibitem{EJ}
Evans, D. and Jones, A. J. (2002).
A proof of the gamma test.
{\em R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci.}
{\bf 458}, 2759--2799.
\bibitem{Feller}
Feller, W. (1966). {\em
An Introduction to Probability Theory and its Applications. Vol. II.}
John Wiley \& Sons, New York.
\bibitem{GM}
Grimmett, G. and Marstrand, J. M.
(1990). The supercritical phase of percolation is well behaved.
{\em Proc. Royal Soc. London A} {\bf 430},
439--457.
\bibitem{HM} Heinrich, L. and Molchanov, I. S. (1999).
Central limit theorem for a class of random measures associated with
germ-grain models.
{\em Adv. Appl. Probab.} {\bf 31}, 283--314.
\bibitem{Henze}
Henze, N. (1988).
A multivariate two-sample test based on the
number of nearest neighbor type coincidences.
{\em Ann. Statist.} {\bf 16}, 772--783.
\bibitem{Kingman}
Kingman, J. F. C. (1993). {\em Poisson Processes.}
Oxford University Press, Oxford.
\bibitem{LB}
Levina, E. and Bickel, P. J.
(2005). Maximum likelihood
estimation of intrinsic dimension, in {\em Advances in NIPS}, {\bf
17}, Eds. L. K. Saul, Y. Weiss, L. Bottou.
\bibitem{LPS} Leonenko, N., Pronzato, L. and Savani, V.
(2008).
A class of R\'enyi information estimators for
multidimensional densities.
{\em Ann. Statist.} {\bf 36}, 2153--2182.
\bibitem{Penbk} Penrose, M. (2003). {\em Random Geometric Graphs}.
Oxford University Press, Oxford.
\bibitem{PenCLT}
Penrose, M. D. (2001).
A central limit theorem with applications to percolation,
epidemics and Boolean models.
{\em Ann. Probab.} {\bf 29}, 1515--1546.
\bibitem{PenEJP}
Penrose, M. D. (2007).
Gaussian limits for random geometric measures.
{\em Electron. J. Probab.} {\bf 12}, 989--1035.
\bibitem{PY1} Penrose, M. D. and Yukich, J. E. (2001). Central limit theorems
for some graphs in computational geometry. {\em Ann. Appl.
Probab.} {\bf 11}, 1005--1041.
\bibitem{PY2}
Penrose, M. D. and Yukich, J. E. (2002).
Limit theory for random sequential packing and deposition.
{\em Ann. Appl. Probab.} {\bf 12}, 272--301.
\bibitem{PYLLN}
Penrose, M. D. and Yukich, J. E. (2003).
Weak laws of large numbers in geometric probability.
{\em Ann. Appl. Probab.} {\bf 13}, 277--303.
\bibitem{PYmfld}
Penrose, M. D. and Yukich, J. E. (2011).
Limit theory for point processes in manifolds.
Preprint, arXiv:1104.0914.
\bibitem{SPY}
Schreiber, T., Penrose, M. D. and Yukich, J. E. (2007).
Gaussian limits for multidimensional random sequential packing at saturation.
{\em Comm. Math. Phys.} {\bf 272}, 167--183.
\bibitem{Vakh}
Vakhania, N. N. (1993).
Elementary proof of Polya's characterization theorem and of the necessity
of second moment in the CLT.
{\em Theory Probab. Appl.} {\bf 38}, 166--168.
\end{thebibliography}
\end{document}
\begin{document}
\title[Subcritical wave equations]{Scattering for defocusing energy subcritical nonlinear wave equations}
\author{B. Dodson}
\author{A. Lawrie}
\author{D. Mendelson}
\author{J. Murphy}
\begin{abstract}
We consider the Cauchy problem for the defocusing power-type nonlinear wave equation in $(1+3)$-dimensions for energy subcritical powers $p$ in the super-conformal range $3 < p< 5$. We prove that any solution is global-in-time and scatters to free waves in both time directions as long as its critical Sobolev norm stays bounded on the maximal interval of existence.
\end{abstract}
\thanks{B. Dodson gratefully acknowledges support from NSF DMS-1500424 and NSF DMS-1764358. A. Lawrie gratefully acknowledges support from NSF grant DMS-1700127. D. Mendelson gratefully acknowledges support from NSF grant DMS-1800697. J. Murphy gratefully acknowledges support from DMS-1400706. The authors thank the MSRI program ``New Challenges in PDE: Deterministic Dynamics and Randomness in High and Infinite Dimensional Systems,'' where this work began. The first and second authors also thank IHES program ``Trimester on Nonlinear Waves,'' where part of this work was completed. The first and third authors gratefully acknowledge support from the Institute for Advanced Study, where part of this work was completed.
}
\maketitle
\section{Introduction}
We study the Cauchy problem for the power-type nonlinear wave equation in $\mathbb{R}^{1+3}$,
\begin{equation} \label{eq:nlw}
\left\{
\begin{aligned}
&\Box u = \pm u \abs{u}^{p-1} \\
& \vec u(0) = (u_{0}, u_{1}), \quad u = u(t, x), \quad (t, x) \in \mathbb{R}^{1+3}_{t, x} .
\end{aligned}
\right.
\end{equation}
Here $\Box = - \partial_{t}^{2} + \Delta$, so the $+$ sign above yields the defocusing equation and the $-$ sign the focusing equation. The equation has the following scaling symmetry: if $\vec u(t, x) = (u, \partial_t u)(t, x)$ is a solution, then so is
\EQ{ \label{eq:scaling}
\vec u_{\lambda}(t, x) = \left( \lambda^{-\frac{2}{p-1}}u( t/ \lambda, x/ \lambda ), \lambda^{-\frac{2}{p-1} -1} \partial_t u (t/ \lambda, x/ \lambda)\right).
}
The conserved energy, or Hamiltonian, is
\EQ{ \label{eq:en}
E( \vec u(t)) = \int_{\{t\} \times \mathbb{R}^{3}} \frac{1}{2} \left(\abs{u_{t}}^{2} + \abs{\nabla u}^{2}\right) \pm \frac{1}{p+1} \abs{u}^{p+1} \, \mathrm{d} x = E(\vec u(0)),
}
which scales like
\[
E( \vec u_{\lambda}) = \lambda^{3 - 2\frac{p+1}{p-1}} E( \vec u).
\]
The energy is invariant under the scaling of the equation only when $p= 5$, which is referred to as the energy critical exponent. The range $p<5$ is called energy subcritical, since concentration of a solution by rescaling requires divergent energy, i.e., $\lambda \to 0 \Rightarrow E( \vec u_{\lambda}) \to \infty$. Conversely, the range $p>5$ is called energy supercritical, and here $E( \vec u_{\lambda}) \to 0$ as $\lambda \to 0$, i.e., concentration by rescaling is energetically favorable.
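It is worth recording the exponent explicitly (the notation $e(p)$ is ours, introduced only for this computation):
\[
E( \vec u_{\lambda}) = \lambda^{e(p)} E( \vec u), \qquad
e(p) := 3 - 2\,\frac{p+1}{p-1} = \frac{p-5}{p-1},
\]
which makes the trichotomy explicit: $e(p)<0$ for $1<p<5$, $e(5)=0$, and $e(p)>0$ for $p>5$.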
Fixing $p$, the critical Sobolev exponent $s_p:= \frac{3}{2} - \frac{2}{p-1}$ is defined to be the unique $s_p \in \mathbb{R}$ such that $\dot{H}^{s_p} \times \dot{H}^{s_p-1}( \mathbb{R}^3)$ is invariant under the scaling~\eqref{eq:scaling}. We will often use the shorthand notation
\EQ{
\dot{\mathcal{H}}^s:= \dot{H}^s \times \dot{H}^{s-1}(\mathbb{R}^3).
}
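For later reference, the endpoint values are immediate from the definition:
\[
s_5 = \frac{3}{2} - \frac{2}{4} = 1, \qquad
s_3 = \frac{3}{2} - \frac{2}{2} = \frac{1}{2},
\]
so for the range $3<p<5$ treated here, $s_p \in (\tfrac{1}{2},1)$ and the critical space lies strictly between $\dot{\mathcal{H}}^{1/2}$ and the energy space $\dot{\mathcal{H}}^{1}$.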
The power-type wave equation on $\mathbb{R}_{t, x}^{1+ 3}$ has been extensively studied. In the defocusing setting, the positivity of the conserved energy can be used to extend a local existence result to a global one for sufficiently regular initial data. In 1961, J\"orgens showed global existence for the defocusing equation for smooth compactly supported data~\cite{Jorg}. In 1968, Strauss proved global existence for smooth solutions and moreover that these solutions decay in time and scatter to free waves~\cite{Strauss68} -- this remarkable paper was the first work that proved scattering for \emph{any} nonlinear wave equation. There are many works extending the local well-posedness theorem of Lindblad and Sogge~\cite{LinS} in $\dot{\mathcal{H}}^{s}$ for $s > s_p$ to an unconditional global well-posedness statement, and we refer the reader to \cite{KPV00, GP03, BC06, Roy09} and references therein. These works do not address the global dynamics of the solution, in particular scattering. In the radial setting the first author has made significant advances in this direction, proving in~\cite{D16, D17} an unconditional global well-posedness and scattering result for the defocusing cubic equation for data in a Besov space with the same scaling as $\dot{\mathcal{H}}^{\frac{1}{2}}$. In very recent work \cite{D18a, D18b}, the first author has proved unconditional scattering for the defocusing equation for radial data in the critical Sobolev space in the entire range $3\leq p<5$.
The goal of this paper is to address the global dynamics for \eqref{eq:nlw} in the non-radial setting. Our main result is the following theorem.
\begin{thm}[Main Theorem] \label{t:main}
Consider~\eqref{eq:nlw} for energy subcritical exponents $3<p<5$ and with the \emph{defocusing} sign. Let $\vec u(t) \in \dot{\mathcal{H}}^{s_p}(\mathbb{R}^3)$ be a solution to~\eqref{eq:nlw} on its maximal interval of existence $I_{\max}$. Suppose that
\EQ{ \label{eq:fantasy}
\sup_{t \in I_{\max}} \| \vec u(t) \|_{\dot{\mathcal{H}}^{s_p} (\mathbb{R}^{3})} < \infty.
}
Then $\vec u(t)$ is defined globally in time, i.e., $I_{\max} = \mathbb{R}$. In addition, we have
\EQ{ \label{scattering}
\| u \|_{L^{2(p-1)}_{t, x}( \mathbb{R}^{1+3})} < \infty,
}
which implies that $\vec u(t)$ scatters to a free wave in both time directions, i.e., there exist solutions $\vec v_{L}^{\pm}(t) \in \dot{\mathcal{H}}^{s_p}(\mathbb{R}^3)$ to the free wave equation, $\Box v_L^{\pm} = 0$, so that
\EQ{
\| \vec u(t)- \vec v^{\pm}_L(t) \|_{\dot{\mathcal{H}}^{s_p} (\mathbb{R}^3)} \longrightarrow 0 \quad \text{as} \quad t \rightarrow \pm \infty.
}
\end{thm}
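We remark (a quick computation, not part of the statement) that the exponent $2(p-1)$ in \eqref{scattering} is the scaling-critical one: under the scaling~\eqref{eq:scaling},
\[
\| u_{\lambda} \|_{L^{q}_{t,x}(\mathbb{R}^{1+3})}^{q}
= \lambda^{\,4 - \frac{2q}{p-1}} \| u \|_{L^{q}_{t,x}(\mathbb{R}^{1+3})}^{q},
\]
so the norm in \eqref{scattering} is invariant exactly when $q = 2(p-1)$.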
A version of Theorem~\ref{t:main} restricted to radially symmetric data was established in the earlier work of Shen \cite{Shen}; see also \cite{DL1} for the cubic power. This type of conditional scattering result first appeared in the work of Kenig and Merle \cite{KM10} in the setting of the 3$d$ cubic NLS and has since attracted a great deal of research activity; see, e.g., \cite{KM11a, KM11b, KV11a, KV11b, Bul12a, Bul12b, DL2, CR5d, DKM5, DR} for this type of result for the NLW.
In the energy critical regime, the bound \eqref{eq:fantasy} is guaranteed by energy conservation, and the analogue of Theorem \ref{t:main} was proved in the seminal works of Shatah and Struwe \cite{SS93,SS94}, Bahouri and Shatah \cite{BS}, and Bahouri and G\'erard \cite{BG}. In the energy supercritical regime, the analogue of Theorem \ref{t:main} was obtained by Killip and Visan in \cite{KV11b}.
The regime treated in this work, namely energy-subcritical with non-radial data, necessitates several new technical developments, which may prove useful in contexts beyond the scope of Theorem~\ref{t:main}.
\begin{rem}
It is conjectured that for the defocusing equation all solutions with data in $\dot{\mathcal{H}}^{s_p}$ scatter in both time directions, as in the energy critical case $p=5$. Theorem~\ref{t:main} is a conditional result: specifically, we do not determine a priori which data satisfy~\eqref{eq:fantasy}. It is perhaps useful to think of the theorem in its contrapositive formulation: if initial data in the critical space $\dot{\mathcal{H}}^{s_p}$ were to lead to an evolution that does not scatter in forward time, then the $\dot{\mathcal{H}}^{s_p}$ norm of the solution must diverge along at least one sequence of times tending to the maximal forward time of existence.
\end{rem}
\begin{rem}
The dynamics are much different in the case of the energy subcritical \emph{focusing} equation. In remarkable works, Merle and Zaag~\cite{MZ03ajm, MZ05ma} classified the blow-up dynamics by showing that all blow-up solutions must develop the singularity \emph{at the self-similar rate}. In the radial case, an infinite family of smooth self-similar solutions is constructed by Bizo\'n et al. in~\cite{BBMW10}. In~\cite{DS12, DS17}, Donninger and Sch\"orkhuber address the stability of the self-similar blow up.
\end{rem}
\subsection{Comments about the proof}
The proof of Theorem~\ref{t:main} follows the fundamental concentration compactness/rigidity method of Kenig and Merle, which first appeared in~\cite{KM06, KM08}. The proof is by contradiction -- if Theorem~\ref{t:main} were to fail, the profile decomposition of Bahouri--G\'erard~\cite{BG} yields a minimal nontrivial solution to~\eqref{eq:nlw}, referred to as a \emph{critical element} and denoted by $\vec u_c$, that does not scatter. Here `minimal' refers to the size of the norm in~\eqref{eq:fantasy}. This standard construction is outlined in Section~\ref{s:cc}. The key feature of a critical element is that its trajectory is pre-compact modulo symmetries in the space $\dot{\mathcal{H}}^{s_p}$; see Proposition \ref{p:ce}. The proof is completed by showing that this compactness property is too rigid for a nontrivial solution and thus the critical element cannot exist.
The major obstacle to ruling out a critical element $\vec u_{c}(t)$ in this energy subcritical setting is the fact that $\vec u_{c}(t)$ is \emph{a priori} at best an $\dot{\mathcal{H}}^{s_p}$ solution, while all known global monotonicity formulae, e.g., the conserved energy, virial, and Morawetz type inequalities, require more regularity. In general, solutions to a semilinear wave equation are only as regular as their initial data because of the free propagator $S(t)$ in the Duhamel representation of a solution
\EQ{\label{eq:du}
\vec u_c(t_0) = S(t_0-t) \vec u_c(t) + \int_{t}^{t_0} S(t_0-\tau) (0, \pm \abs{u}^{p-1} u (\tau)) \, \mathrm{d} \tau.
}
However, for a \emph{critical element} the pre-compactness of its trajectory is at odds with the dispersion of the free part, $S(t_0-t) \vec u(t)$, which means the first term on the right-hand side above must vanish weakly as $t \to \sup I_{\max} $ or as $t \to \inf I_{\max}$, where $I_{\max}$ is as in Theorem \ref{t:main}. Thus, the Duhamel integral on the right-hand side of~\eqref{eq:du} encodes the regularity of a critical element, and additional regularity can be expected due to the nonlinearity. As in~\cite{DL1} the gain in regularity at a fixed time $t_0$ is observed via the so-called ``double Duhamel trick'', which refers to the analysis of the pairing
\EQ{ \label{eq:pair}
\bigg\langle \int_{T_1}^{t_0} S(t_0-t) (0, \pm \abs{u}^{p-1} u) \, \mathrm{d} t, \, \int^{T_2}_{t_0} S(t_0-\tau) (0, \pm \abs{u}^{p-1} u) \, \mathrm{d}\tau \bigg\rangle
}
where we take $T_1<t_0$ and $T_2>t_0$. The basic outline of this technique was introduced by Tao in~\cite{Tao07} and was used within the Kenig-Merle framework for nonlinear Schr{\"o}dinger equations by Killip and Visan~\cite{KV10CPDE, KV10AJM, KVClay}, and for nonlinear wave equations in, e.g.,~\cite{KV11b, Shen}. This method is also closely related to the in/out decomposition used by Killip, Tao, and Visan in~\cite[Section~$6$]{KTV09}.
Here we employ several novel interpretations of the double Duhamel trick, substantially building on the simple implementation developed by the first two authors in the radial setting in~\cite{DL1, DL2} for $p=3$, which exploited the sharp Huygens principle to overcome the difficulties arising from both the slow $\ang{t}^{-1}$ decay of $S(t)$ in dimension $3$ and the small power $p=3$ that precluded this case from being treated by techniques introduced in earlier works. The general case (non-radial data) considered here requires several new ideas.
\medskip
We briefly describe the set-up and several key components of the proof. A critical element has compact trajectory up to the action of one-parameter families (indexed by $t \in I_{\max}( \vec u_c)$) of translations $x(t)$ that mark the \emph{spatial center} of the bulk of $\vec u_c(t)$, and rescalings $N(t)$ that record the \emph{frequency scale} at which $\vec u_c(t)$ is concentrated. In Section~\ref{s:cc} we perform a reduction to four distinct behaviors of the parameters $x(t)$ and $N(t)$. First, following the language of~\cite{KV11b} we distinguish between $x(t)$ that are \emph{subluminal}, meaning roughly that $|x(t) - x(\tau)| \le (1-\delta) \abs{t-\tau}$ for some $\delta>0$, and those that \emph{fail to be subluminal}, i.e., $x(t)$ forever moves at the speed of light, or more precisely, $\abs{x(t)} \simeq \abs{t}$ (in a certain sense) for all $t$. The latter case is quite delicate in this energy-subcritical setting and we introduce several new ideas to treat it; see Section \ref{s:hans}. We elaborate further on these two cases.
\medskip
\emph{Subluminal critical elements.} When $x(t)$ is subluminal, we distinguish between what we call a \emph{soliton-like critical element}, where $N(t) = 1$; a \emph{self-similar-like critical element}, where $N(t) = t^{-1}$, $t >0$; and a \emph{global concentrating critical element}, where $\limsup_{t \to \infty} N(t) = \infty$. These distinct cases are treated in Sections $4$, $5$, and $6$, respectively.
In Section~\ref{s:soliton}, we set out to show, as in~\cite{DL1}, that soliton-like critical elements must be uniformly bounded in $\dot{\mathcal{H}}^{1+\epsilon} \cap \dot{\mathcal{H}}^{s_p}$ and hence the trajectory is pre-compact in $\dot{\mathcal{H}}^1$. Once this is accomplished we can access nonlinear monotonicity formulae to show that such critical elements cannot exist. In this latter step we employ a version of a standard argument based on the virial identity, after shifting the spatial center of the solution to $x =0$ by the Lorentz group, which is compactified by the bound in $\dot{\mathcal{H}}^1$. The heart of the argument in Section $4$ is thus establishing the additional regularity of a soliton-like critical element. The goal, roughly, is to show that the pairing~\eqref{eq:pair} can be estimated in $\dot{\mathcal{H}}^1$. In~\cite{DL1} the proof relied crucially on radial Sobolev embedding. As this is no longer at our disposal in the current, non-radial setting, we have introduced a substantial reworking of the argument from~\cite{DL1} that both simplifies it and removes the reliance on radial Sobolev embedding. Examining the pairing~\eqref{eq:pair} at time $t_0 =0$ we divide spacetime into three types of regions; see Figure~\ref{f:bowtie}. The first region is a fixed time interval of the form $[t_0-R, t_0+ R]$, where $R>0$ is chosen so that the bulk of $\vec u_c(t)$ is captured by the light cone emanating from $(t_0, 0)$ in both time directions. In this region~\eqref{eq:pair} is estimated using an argument based on Strichartz estimates, using crucially that $R>0$ is finite and can be chosen independently of $t_0$ by compactness. The second region is the part of spacetime exterior to this time interval and exterior to the cone.
Here the $\dot{\mathcal{H}}^{s_p}$ norm of the solution is small on any fixed time slice, and hence an argument based on the small data theory can be used to absorb the time integrations in~\eqref{eq:pair}. Lastly, the heart of the double Duhamel trick is employed to observe that the interaction between the two regions in the interior of the light cone, one for times $t< -R$ and the other for times $t>R$, vanishes identically by the sharp Huygens principle!
In Section~\ref{s:ss} we show that a self-similar-like critical element cannot exist. Here we again use a double Duhamel argument centered at $t_0 \in (0, \infty)$, but with $T_1 = \inf I_{\max} = 0$ and $T_2 =\sup I_{\max} = \infty$ in~\eqref{eq:pair}. The argument exploiting the Huygens principle given in Section $4$ no longer applies, since the forward and backward cones emanating from a time, say $t_0 = 1$, can never capture the bulk of the solution: $N(T) =T^{-1}$ is an expression of the fact that the solution is localized to the physical scale $T$ at time $T$; see \eqref{r:Reta}. Instead, we use a different argument based on a version of the long-time Strichartz estimates introduced by the first author in, e.g.,~\cite{D-JAMS, D-Duke}, which allow us to control Strichartz norms of the projection of $\vec u_c$ to high frequencies $k \gg 1$ on time intervals $J$ which are long in the sense that $\abs{J} \simeq 2^{\alpha k}$ for $\alpha\ge1$.
In Section~\ref{s:sword}, $N(t)$ is no longer a given fixed function. We establish a dichotomy which we refer to colloquially as the sword or the shield: either additional regularity for the critical element can be established using essentially the same argument as in Section~\ref{s:soliton-reg}, or a self-similar-like critical element can be extracted by passing to a suitable limit. To apply the argument from Section $4$ the following must be true -- fixing any time $t_0$, the amount of time (\emph{but where now time is measured relative to the scale $N(t)$}) that one has to wait until the bulk of the solution is absorbed by the cone emanating from time $t_0$ must be uniform in $t_0$. We define functions $C_{\pm}(t_0)$ whose boundedness (or unboundedness) measures whether or not this criterion is satisfied; see the introduction to Section~\ref{s:sword}. The rest of the section is devoted to showing how to apply the arguments from Section~\ref{s:soliton} in the case where $C_{\pm}(t_0)$ are uniformly bounded, and how to extract a self-similar-like critical element in the case that one of $C_\pm(t_0)$ is not bounded.
\medskip
\emph{Critical elements that are not subluminal:} In Section $7$ we show that critical elements with spatial center $x(t)$ traveling at the speed of light cannot exist. The technique in this section is novel and may be useful in other settings. First we note that such critical elements are easily ruled out for solutions with finite energy, as is shown in~\cite{KM08, Tao37, NakS} using an argument based on the conserved momentum, and even in the energy supercritical setting; see~\cite{KV11b}, using the energy/flux identity. None of these techniques (which provide an \emph{a priori} limit on the speed of $x(t)$) apply in our setting, so we must rule out this critical element by other means, namely, \emph{by first showing that such critical elements have additional regularity}.
In Section \ref{subsec:analysis_of_props} we lay the necessary groundwork and show, using the finite speed of propagation, that any such critical element must have a fixed scale, i.e., $N(t) = 1$, and $x(t)$ must choose a fixed preferred direction up to deviation in angle by $\frac{1}{\sqrt{t}}$. The model case one should consider is $x(t) = (t, 0, 0)$ for all $t \in \mathbb{R}$, which means that the bulk of $\vec u_c(t)$ travels along the $x_1$-axis at the speed of light. We are able to show that such critical elements have up to $1-\nu$ derivatives in the $(x_2, x_3)$ directions for any $\nu>0$. This is enough to show that such critical elements cannot exist via a Morawetz estimate adapted to the direction of $x(t)$ -- this is the only place in the paper where the arguments are limited to the defocusing equation.
The technical heart of this section is the proof of extra regularity ($1-\nu$ derivatives) in the $x_{2, 3}$ directions. We again divide spacetime into three regions. For a solution projected to a fixed frequency $N \gg 1$, we call region $A$ the strip $[0, N^{1-\epsilon}] \times \mathbb{R}^3$ for $\epsilon>0$ sufficiently small relative to $\nu$. On this region we can control the solution by a version of the long-time Strichartz estimates proved in Section~\ref{s:ltse}. At time $t = N^{1-\epsilon}$ we then divide the remaining part of spacetime for positive times into two regions. Region $B$ is the set including all times $t \ge N^{1-\epsilon}$ exterior to the light cone of initial width $R(\eta_0)$ emanating from the point $(t, x) = (N^{1-\epsilon}, x(N^{1-\epsilon}))$, where $R(\eta_0)$ is chosen large enough so that $\vec u_c(N^{1-\epsilon})$ has $\dot{\mathcal{H}}^{s_p}$ norm less than $\eta_0$ exterior to the ball of radius $R(\eta_0)$ centered at $x(N^{1-\epsilon})$. The solution is then controlled on region $B$ using the small data theory. Estimating the interaction of the two terms in the pairing~\eqref{eq:pair} on the remaining region $C$ (the region $\{\abs{x - x(N^{1-\epsilon})} \le R(\eta_0) + t- N^{1-\epsilon},\, t \ge N^{1-\epsilon} \}$) and the analogous region $C'$ for negative times $\tau \le -N^{1-\epsilon}$ provides the most delicate challenge. Any naive implementation of the double Duhamel trick based on the Huygens principle is doomed to fail here, since the left and right-hand components of the pairing~\eqref{eq:pair} restricted to $C, C'$ interact \emph{in the wave zone} $\abs{x} \simeq \abs{t}$. Furthermore, since we are in dimension $d=3$, the $\ang{t}^{-1}$ decay from the wave propagator $S(t)$ in~\eqref{eq:pair} is not sufficient for integration in time.
For this reason we introduce an auxiliary frequency localization to frequencies $\abs{(\xi_{2}, \xi_{3})} \simeq M$ in the $(\xi_{2}, \xi_{3})$ directions after first localizing in all directions to frequencies $\abs{\xi} \simeq N$. We call this angular frequency localization $\hat P_{N, M}$. The key observation is that the intersection of the wave zone $\{\abs{x} \simeq \abs{t}\}$ with region $C$ requires the spatial variable $x = (x_1, x_{2, 3})$ to satisfy
\EQ{
\frac{\abs{x_{2, 3}}}{\abs{x}} \ll \frac{M}{N}
}
for all $M \ge N^{\frac{s_p}{1-\nu}}$ as long as $\epsilon>0$ is chosen small enough relative to $\nu$, whereas
application of $\hat P_{N, M}$ restricts to frequencies $\xi = (\xi_1, \xi_{2, 3})$ with
\EQ{
\frac{\abs{\xi_{2, 3}}}{ \abs{\xi}} \simeq \frac{M}{N}.
}
This yields \emph{angular separation} in the kernel of $\hat P_{N, M} S(t)$ and allows us to deduce arbitrary time decay for the worst interactions in~\eqref{eq:pair}; see Lemma \ref{l:angle}. The remaining interactions in~\eqref{eq:pair} are dealt with using an argument based on the sharp Huygens principle, which is complicated due to the blurring of supports caused by $\hat P_{N, M}$.
\begin{rem}\label{r:p=3a}
The proof of Theorem~\ref{t:main} serves as the foundation for the more complicated case of the cubic equation, $p=3$, as well as for the analogous result for the focusing equation; see for example~\cite{DL1}, where the focusing and defocusing equations are treated in the same framework in the radial setting.
Much of the argument given here carries over to the defocusing equation when $p=3$. However, in this case we have $s_p = 1/2$, and the critical space $\dot{\mathcal{H}}^{\frac{1}{2}}$ is the unique Sobolev space that is invariant under Lorentz transforms. This introduces several additional difficulties, described in more detail in Remark~\ref{r:p=3b}. Additionally, certain estimates in Section \ref{s:hans} fail at the $p=3$ endpoint and would require modification.
Similarly, the argument in Sections 4--6 applies equally well to the focusing equation. However, the argument in Section \ref{s:hans} used to rule out the traveling wave critical element is specific to the defocusing equation, as it relies on a Morawetz-type estimate only valid in that setting.
\end{rem}
\varsigmagmaection{Preliminaries}
\varsigmagmaubsection{Notation, definitions, inequalities}
We write $A\lesssim B$ or $B\gtrsim A$ to denote $A\leq CB$ for some $C>0$. Dependence of implicit constants will be denoted with subscripts. If $A\lesssim B\lesssim A$, we write $A\simeq B$. We will use the notation $a\pm$ to denote the quantity $a\pm\epsilon$ for some sufficiently small $\epsilon>0$.
We will denote by $P_N$ the Littlewood-Paley projections onto frequencies of size $\abs{\xi} \simeq N$ and by $P_{\le N}$ the projections onto frequencies of size $\abs{\xi} \lesssim N$. Often we will consider the case when $N = 2^k$, $k \in \mathbb{Z}$, is a dyadic number, and in this case we will employ the following notation: when we write $P_k$ with a \emph{lowercase} subscript $k$, this will mean projection onto frequencies $\abs{\xi} \simeq 2^k$. We will often write $u_N$ for $P_N u$, and similarly for $P_{\leq N}$, $P_{>N}$, $P_k$, and so on.
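For concreteness, these projections can be realized by the standard construction (the particular choice of bump function is immaterial for our purposes): fix a radial $\varphi \in C^\infty_c(\mathbb{R}^3)$ with $\varphi(\xi) = 1$ for $\abs{\xi} \le 1$ and $\varphi(\xi) = 0$ for $\abs{\xi} \ge 2$, and set
\ant{
\widehat{P_{\le N} f}(\xi) := \varphi(\xi/N) \hat{f}(\xi), \qquad P_N := P_{\le N} - P_{\le N/2}, \qquad P_{>N} := I - P_{\le N}.
}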
These projections satisfy Bernstein's inequalities, which we state here.
\begin{lem}[Bernstein's inequalities]\emph{\cite[Appendix A]{Taobook}} \label{l:bern} Let $1 \le p \le q \le \infty$ and $s \ge 0$. Let $f: \mathbb{R}^d \to \mathbb{R}$. Then
\EQ{ \label{bern}
&\|P_{\ge N} f\|_{L^p} \lesssim N^{-s} \| \abs{\nabla}^s P_{\ge N} f\|_{L^p},\\
&\|P_{\le N} \abs{\nabla}^s f\|_{L^p} \lesssim N^{s} \| P_{\le N} f\|_{L^p}, \quad
\|P_{N} \abs{\nabla}^{\pm s} f\|_{L^p} \simeq N^{\pm s} \| P_{N} f\|_{L^p},\\
&\|P_{\le N} f\|_{L^q} \lesssim N^{\frac{d}{p}- \frac{d}{q}} \| P_{\le N} f\|_{L^p}, \quad
\|P_{N} f\|_{L^q} \lesssim N^{\frac{d}{p}- \frac{d}{q}} \| P_{N} f\|_{L^p}.
}
\end{lem}
We will write either
\[
\|u\|_{L_t^q L_x^r(I\times\mathbb{R}^3)} \quad\text{or}\quad \|u\|_{L_t^q(I;L_x^r(\mathbb{R}^3))}
\]
to denote the space-time norm
\[
\biggl(\int_I \biggl(\int_{\mathbb{R}^3} |u(t,x)|^{r}\,dx\biggr)^{\frac{q}{r}}\,dt\biggr)^{\frac{1}{q}},
\]
with the usual modifications if $q$ or $r$ equals infinity.
Given $s\in\mathbb{R}$ we define the space $\dot{\mathcal{H}}^s$ by
\[
\dot{\mathcal{H}}^s=\dot H^s(\mathbb{R}^3)\times\dot H^{s-1}(\mathbb{R}^3).
\]
For example, we work with initial data in $\dot{\mathcal{H}}^{s_p}$.
We also require the notion of a frequency envelope.
\begin{defn}{\cite[Definition~$1$]{Tao1}} \label{d:fren} A \emph{frequency envelope} is a sequence $\beta = \{\beta_k\}$ of positive numbers with $\beta \in \ell^2$ satisfying the local constancy condition
\ant{
2^{-\sigma\abs{j-k}} \beta_k \lesssim \beta_j \lesssim 2^{\sigma\abs{j-k}} \beta_k,
}
where $\sigma>0$ is a small, fixed constant. If $\beta$ is a frequency envelope and $(f, g) \in \dot{H}^s \times \dot{H}^{s-1}$, then we say that \emph{$(f, g)$ lies underneath $\beta$} if
\ant{
\| (P_k f, P_k g)\|_{\dot H^s \times \dot{H}^{s-1}} \le \beta_k \quad \forall k \in \mathbb{Z}.
}
Note that if $(f, g)$ lies underneath $\beta$ then we have
\ant{
\| (f, g)\|_{\dot H^s \times \dot{H}^{s-1}} \lesssim \| \beta\|_{\ell^2(\mathbb{Z})}.
}
\end{defn}
In practice, we will need to choose the parameter $\sigma$ in the definition of a frequency envelope sufficiently small depending on the power $p$ of the nonlinearity.
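One standard way to produce such an envelope (cf.~\cite{Tao1}) is to generate it from the data itself: given $(f, g) \in \dot{H}^s \times \dot{H}^{s-1}$, set
\ant{
\beta_k := \sum_{j \in \mathbb{Z}} 2^{-\sigma \abs{j-k}} \| (P_j f, P_j g)\|_{\dot H^s \times \dot{H}^{s-1}}.
}
This $\beta$ satisfies the local constancy condition by construction, lies in $\ell^2(\mathbb{Z})$ by Young's inequality, and $(f, g)$ lies underneath it.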
We next record a commutator estimate.
\begin{lem}\label{L:commutator} Let $\chi_R$ be a smooth cutoff to $|x|\geq R$. For $0\leq s\leq 1$ and $N\geq 1$,
\begin{align*}
\|P_N\chi_R f - \chi_R P_N f\|_{L^2} &\lesssim N^{-s}(N R)^{-(1-s)}\|f\|_{\dot H^s}, \\
\|P_N\chi_R f - \chi_R P_N f\|_{L^2} &\lesssim R^{-s}(N R)^{-(1-s)}\|f\|_{\dot H^{-s}}.
\end{align*}
\end{lem}
\begin{proof} We write the commutator as an integral operator in the form
\[
[P_N\chi_R f - \chi_R P_N f](x) = N^{d}\int K(N(x-y))[\chi_R(x)-\chi_R(y)]f(y)\,dy.
\]
Thus, using the pointwise bound
\[
|\chi_R(x)-\chi_R(y)| \lesssim N|x-y|\cdot N^{-1}R^{-1}
\]
and Schur's test, we first find
\[
\|P_N\chi_R f - \chi_R P_N f\|_{L^2} \lesssim N^{-1}R^{-1} \|f\|_{L^2}.
\]
Next, a crude estimate via the triangle inequality, Bernstein's inequality, H\"older's inequality, and Sobolev embedding gives
\begin{align*}
\|P_N\chi_R f - \chi_R P_N f\|_{L^2} & \lesssim N^{-1}\|\nabla(\chi_R f)\|_{L^2} + N^{-1}\|\nabla f\|_{L^2} \lesssim N^{-1}\|f\|_{\dot H^1}.
\end{align*}
The first bound now follows by interpolating between these two estimates: for $0\le s\le 1$,
\[
\|P_N\chi_R f - \chi_R P_N f\|_{L^2} \lesssim (N^{-1}R^{-1})^{1-s}(N^{-1})^{s}\|f\|_{\dot H^s} = N^{-s}(NR)^{-(1-s)}\|f\|_{\dot H^s}.
\]
For the second bound, we write
\[
[P_N\chi_R f - \chi_R P_N f](x) = N^{d}\int K(N(x-y))[\chi_R(x)-\chi_R(y)]\nabla\cdot\nabla\Delta^{-1}f(y)\,dy
\]
and integrate by parts. Estimating as above via Schur's test, we deduce
\[
\|P_N\chi_R f - \chi_R P_Nf\|_{L_x^2} \lesssim R^{-1} \||\nabla|^{-1}f\|_{L^2},
\]
so that the second bound also follows from interpolation. \end{proof}
\subsection{Strichartz estimates}
The main ingredients for the small data theory are Strichartz estimates for the linear wave equation in $\mathbb{R}^{1+3}$,
\EQ{ \label{eq:lw}
&\Box v = F,\\
&\vec v(0) = (v_0, v_1).
}
A free wave is a solution to~\eqref{eq:lw} with $F=0$ and will often be denoted using the propagator notation $\vec v(t) = S(t) \vec v(0)$. We define a pair $(q, r)$ to be wave-admissible in $3d$ if
\EQ{ \label{adm}
q, r \ge 2, \quad \frac{1}{q} + \frac{1}{r} \le \frac{1}{2}, \quad (q, r) \neq (2, \infty).
}
The Strichartz estimates stated below are standard and we refer to~\cite{Kee-Tao, LinS}, the book~\cite{Sogge}, and references therein for more.
\begin{prop}[Strichartz Estimates]\emph{\cite{Kee-Tao, LinS, Sogge}} \label{p:strich} Let $\vec v(t)$ solve~\eqref{eq:lw} with data $\vec v(0) \in \dot{H}^s \times \dot{H}^{s-1}(\mathbb{R}^3)$, with $s >0$. Let $(q, r)$ and $(a, b)$ be admissible pairs satisfying the gap condition
\EQ{
\frac{1}{q} + \frac{3}{r} = \frac{1}{a'} + \frac{3}{b'} - 2 = \frac{3}{2} - s,
}
where $(a', b')$ are the conjugate exponents of $(a, b)$.
Then, for any time interval $I \ni 0$ we have the bounds
\EQ{\label{eq:str}
\| v \|_{L^{q}_t(I; L^r_x)} \lesssim \| \vec v(0) \|_{\dot{H}^s \times \dot{H}^{s-1}} + \|F\|_{L^{a'}_t(I; L^{b'}_x)}.
}
\end{prop}
\subsection{Small data theory -- global existence, scattering, perturbative theory}
A standard argument based on Proposition~\ref{p:strich} yields the scaling-critical small data well-posedness and scattering theory. We first fix notation for a collection of function spaces that we will use extensively. In this subsection we fix $p \in [3, 5]$ (later we will fix $p \in (3, 5)$) and let $I \subset \mathbb{R}$ be a time interval. We define
\EQ{
&S(I) := L^{2(p-1)}_t ( I; L^{2(p-1)}_x( \mathbb{R}^3)).
}
For example, when $p = 3$, $S = L^{4}_{t, x}$, while for $p = 5$ we have $S = L^{8}_{t, x}$.
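We note that the exponents in $S(I)$ are compatible with Proposition~\ref{p:strich}: recalling $s_p = \frac{3}{2} - \frac{2}{p-1}$, the diagonal pair $(q, r) = (2(p-1), 2(p-1))$ is wave-admissible for $p \ge 3$ and satisfies the gap condition at regularity $s = s_p$, since
\ant{
\frac{1}{q} + \frac{1}{r} = \frac{1}{p-1} \le \frac{1}{2}, \qquad \frac{1}{q} + \frac{3}{r} = \frac{2}{p-1} = \frac{3}{2} - s_p.
}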
\begin{rem}
There are a few other function spaces related to
\EQ{ \label{eq:Hdef}
\dot{\mathcal{H}}^{s_p}:= \dot{H}^{s_p} \times \dot{H}^{s_p-1}(\mathbb{R}^3)
}
that will appear repeatedly in our analysis. First note the Sobolev embedding $\dot{H}^{s_p}(\mathbb{R}^3) \hookrightarrow L^{\frac{3}{2}(p-1)}(\mathbb{R}^3)$, which means
\EQ{
\| f \|_{L^{\frac{3}{2}(p-1)}(\mathbb{R}^3)} \lesssim \| f \|_{\dot{H}^{s_p}(\mathbb{R}^3)}.
}
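For the reader's convenience, we verify the exponent: with $s_p = \frac{3}{2} - \frac{2}{p-1}$, the Sobolev exponent $q$ for $\dot{H}^{s_p}(\mathbb{R}^3) \hookrightarrow L^q(\mathbb{R}^3)$ is determined by
\ant{
\frac{1}{q} = \frac{1}{2} - \frac{s_p}{3} = \frac{2}{3(p-1)}, \qquad \text{i.e.,} \qquad q = \tfrac{3}{2}(p-1).
}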
\end{rem}
\begin{prop}[Small data theory]\label{small data}
Let $3 \le p <5$ and suppose that $\vec u(0) = (u_0, u_1) \in \dot{H}^{s_p} \times \dot{H}^{s_p-1}(\mathbb{R}^3)$. Then there is a unique solution $\vec u(t) \in \dot{\mathcal{H}}^{s_p}$ with maximal interval of existence $I_{\max}( \vec u )= (T_-(\vec u), T_+( \vec u))$. Moreover, for any compact interval $J \subset I_{\max}$,
\ant{
\| u\|_{S(J)} < \infty.
}
Additionally, a globally defined solution $\vec u(t)$ on $t\in [0, \infty)$ scatters as $t \to \infty$ to a free wave
if and only if $\|u \|_{S([0, \infty))}< \infty$. In particular, there exists a constant $\delta_0>0$ so that
\EQ{ \label{eq:apsmall}
\| \vec u(0) \|_{\dot{H}^{s_p} \times \dot{H}^{s_p-1}} < \delta_0 \Longrightarrow \| u\|_{S(\mathbb{R})} \lesssim \|\vec u(0) \|_{\dot{H}^{s_p} \times \dot{H}^{s_p-1}} \lesssim \delta_0,
}
and thus $\vec u(t)$ scatters to free waves as $t \to \pm \infty$. Finally, we have the standard finite time blow-up criterion:
\EQ{ \label{ftbuc}
T_+( \vec u)< \infty \Longrightarrow \|u \|_{S([0,T_+( \vec u)))} = + \infty.
}
An analogous statement holds if $T_-( \vec u) > -\infty$.
\end{prop}
The concentration compactness procedure in Section~\ref{s:cc} requires the following nonlinear perturbation lemma for approximate solutions to~\eqref{eq:nlw}.
\begin{lem}[Perturbation Lemma]\emph{\cite{KM06, KM08}} \label{l:pert}
There exist continuous functions $\eps_0, C_0: (0,\infty) \to (0,\infty)$ so that the following holds true.
Let $I\subset \mathbb{R}$ be an open interval (possibly unbounded), and let $\vec u, \vec v\in C(I; \dot{H}^{s_p} \times \dot{H}^{s_p-1})$ satisfy, for some $A>0$,
\ant{
\|\vec v\|_{L^\infty(I;\dot{H}^{s_p} \times \dot H^{s_p-1})} + \|v\|_{S(I)} & \le A, \\
\|\abs{\nabla}^{s_p-\frac{1}{2}}\mathrm{eq}(u)\|_{L^{\frac{4}{3}}_t(I; L^{\frac{4}{3}}_x)}
+ \|\abs{\nabla}^{s_p-\frac{1}{2}}\mathrm{eq}(v)\|_{L^{\frac{4}{3}}_t(I; L^{\frac{4}{3}}_x)} + \|w_0\|_{S(I)} &\le \eps \le \eps_0(A),
}
where $\mathrm{eq}(u):=\Box u \pm \abs{u}^{p-1} u$ in the sense of distributions, and $\vec w_0(t):=S(t-t_0)(\vec u-\vec v)(t_0)$ with $t_0\in I$ fixed, but arbitrary. Then
\ant{
\|\vec u-\vec v-\vec w_0\|_{L^\infty(I;\dot{H}^{s_p} \times \dot H^{s_p-1})}+\|u-v\|_{S(I)} \le C_0(A)\eps.
}
In particular, $\|u\|_{S(I)}<\infty$.
\end{lem}
\section{Concentration compactness and the reduction of Theorem~\ref{t:main}} \label{s:cc}
We begin the proof of Theorem~\ref{t:main} using the concentration compactness and rigidity method of Kenig and Merle~\cite{KM06, KM08}. The concentration compactness aspect of the argument is by now standard, and we follow the scheme from~\cite{KM10}, which is a refinement of the scheme in~\cite{KM06, KM08}. The main conclusion of this section is the following: if Theorem~\ref{t:main} fails, there exists a minimal, nontrivial, non-scattering solution to~\eqref{eq:nlw}, which we call a \emph{critical element}.
We follow the notation from~\cite{KM10} for convenience. Given initial data $(u_0, u_1) \in \dot H^{s_p} \times \dot H^{s_p-1}$ we let $\vec u(t) \in \dot H^{s_p} \times \dot H^{s_p-1}$ be the unique solution to~\eqref{eq:nlw} with data $\vec u(0) = (u_0, u_1)$ and maximal interval of existence $I_{\max}(\vec u) := (T_-( \vec u), T_+( \vec u))$.
Given $A>0$, set
\EQ{
\mathcal{B}(A):= \{ (u_0, u_1) \in\dot H^{s_p} \times \dot H^{s_p-1} \, : \, \|\vec u(t)\|_{L^{\infty}_t(I_{\max}(\vec u); \dot H^{s_p} \times \dot H^{s_p-1})} \le A\}.
}
\begin{defn} We say that $\mathcal{SC}(A; \vec u(0))$ holds if $\vec u(0) \in \mathcal{B}(A)$, $I_{\max} (\vec u) = \mathbb{R}$, and $\|u \|_{S(\mathbb{R})} < \infty$. In addition, we will say that $\mathcal{SC}(A)$ holds if for every $(u_0, u_1) \in \mathcal{B}(A)$ one has $I_{\max}(\vec u) = \mathbb{R}$ and $\|u \|_{S(\mathbb{R})} < \infty$.
\end{defn}
\begin{rem}
Recall from Proposition~\ref{small data} that a global solution $\vec u(t)$ scatters to free waves as $t \to \pm \infty$ if and only if $\| u \|_{S(\mathbb{R})}< \infty$. Thus, Theorem~\ref{t:main} is equivalent to the statement that $\mathcal{SC}(A)$ holds for all $A>0$.
\end{rem}
Now suppose that Theorem~\ref{t:main} {\em fails to be true}. By Proposition~\ref{small data}, there exists an $A_0>0$ small enough so that $\mathcal{SC}(A_0)$ holds. Since we are assuming that Theorem~\ref{t:main} fails, we can find a threshold value $A_C$ so that for $A<A_C$, $\mathcal{SC}(A)$ holds, and for $A>A_C$, $\mathcal{SC}(A)$ fails. Note that we must have $0<A_0<A_C$. The Kenig-Merle concentration compactness argument is now used to produce a \emph{critical element}, namely a minimal non-scattering solution $\vec u_{\textrm{c}}(t)$ to~\eqref{eq:nlw} so that $\mathcal{SC}(A_C; \vec u_{\textrm{c}}(0))$ fails, and which enjoys certain compactness properties.
We state a refined version of this result below, and we refer the reader to~\cite{KM10, Shen, TVZ, TVZ08} for the details. As usual, the foundation of the concentration compactness part of the Kenig-Merle framework is the profile decomposition of Bahouri and G\'erard~\cite{BG}, used in conjunction with the nonlinear perturbation theory in Lemma~\ref{l:pert}.
\begin{prop} \label{p:ce} Suppose Theorem~\ref{t:main} fails to be true. Then there exists a solution $\vec u(t)$ such that $\mathcal{SC}(A_C; \vec u(0))$ fails, which we call a \emph{critical element}. We can assume that $\vec u(t)$ does not scatter in either time direction, i.e.,
\EQ{\label{blow up}
\|u\|_{S((T_-(\vec{u}), 0])} = \|u\|_{S([0, T_+(\vec u)))} = \infty,
}
and moreover, there exist continuous functions
\ant{
&N: I_{\max}(\vec u) \to (0, \infty), \quad
x: I_{\max}(\vec u) \to \mathbb{R}^3
}
so that the set
{\small\EQ{\label{eq:K}
\left\{ \left(\tfrac{1}{N(t)^{\frac{2}{p-1}}} u\left( t, x(t) + \tfrac{ \cdot}{N(t)} \right), \tfrac{1}{N(t)^{\frac{2}{p-1}+1}} u_t\left( t, x(t) + \tfrac{\cdot}{N(t)} \right) \right) \mid t \in I_{\max}\right\}
} }
is pre-compact in $\dot{\mathcal{H}}^{s_p}$.
\end{prop}
We make a few observations and reductions concerning the critical element found in Proposition~\ref{p:ce}. It will be convenient to proceed slightly more generally, starting by giving a name to the compactness property~\eqref{eq:K} satisfied by a critical element.
\begin{defn} \label{d:cp} Let $I \ni 0$ be an interval and let $\vec u(t)$ be a nonzero solution to~\eqref{eq:nlw} on $I$. We will say $\vec u(t)$ has the \emph{compactness property on $I$} if there are continuous functions $N: I \to (0, \infty)$ and $x: I \to \mathbb{R}^3$ so that the set
{\small\ant{
K_I:= \left\{ \left(\frac{1}{N^{\frac{2}{p-1}}(t)} u\left( t, \, x(t) + \frac{ \cdot}{N(t)} \right), \, \frac{1}{N^{\frac{2}{p-1}+1}(t)} u_t\left( t, \, x(t) + \frac{\cdot}{N(t)} \right) \right) \mid t \in I\right\}
}}
is pre-compact in $\dot{\mathcal{H}}^{s_p}$.
\end{defn}
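For orientation, note that the normalization in $K_I$ reflects the scaling symmetry of~\eqref{eq:nlw}: if $u$ is a solution, then so is
\ant{
u_\lambda(t, x) := \lambda^{\frac{2}{p-1}} u(\lambda t, \lambda x), \qquad \lambda > 0,
}
and $s_p = \frac{3}{2} - \frac{2}{p-1}$ is precisely the regularity for which $\| \vec u_\lambda(0)\|_{\dot{\mathcal{H}}^{s_p}} = \| \vec u(0)\|_{\dot{\mathcal{H}}^{s_p}}$.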
We make the following standard remarks about solutions with the compactness property. We begin with a local constancy property for the modulation parameters.
\begin{lem}\emph{\cite[Lemma 5.18]{KVClay}}\label{l:const}
Let $\vec u(t)$ have the compactness property on a time interval $I \subset \mathbb{R}$ with parameters $N(t)$ and $x(t)$. Then there exist constants $\epsilon_0>0$ and $C_0>0$ so that for every $t_0 \in I$ we have
\EQ{
&[ t_0 - \frac{\epsilon_0}{ N(t_0)}, t_0 + \frac{\epsilon_0}{N(t_0)}] \subset I, \\
& \frac{1}{C_0} \le \frac{N(t)}{N(t_0)} \le C_0 \quad \text{if} \quad \abs{ t- t_0} \le \frac{\epsilon_0}{N(t_0)}, \\
& \abs{ x(t) - x(t_0)} \le \frac{C_0}{N(t_0)} \quad \text{if} \quad \abs{ t- t_0} \le \frac{\epsilon_0}{N(t_0)}.
}
\end{lem}
\begin{rem}\label{r:Reta}
For a solution with the \emph{compactness property} on an interval $I$, we can, after modulation, control the $\dot{\mathcal{H}}^{s_p}$ tails uniformly in $t \in I$. Indeed, for any $\eta > 0$ there exists $R(\eta) < \infty$ such that
\EQ{\label{3.3}
&\int_{|x - x(t)| \geq \frac{R(\eta)}{N(t)}}||\nabla|^{s_p} u(t,x)|^{2} \, \mathrm{d} x + \int_{|\xi| \geq R(\eta)N(t)} |\xi|^{2s_p} |\hat{u}(t,\xi)|^{2} \, \mathrm{d}\xi \le \eta, \\
&\int_{|x-x(t)| \geq \frac{R(\eta)}{N(t)}}||\nabla|^{s_p-1} u_t(t,x)|^{2} \, \mathrm{d} x + \int_{|\xi| \geq R(\eta)N(t)} |\xi|^{2(s_p-1)}| \hat{u}_{t}(t,\xi)|^{2} \, \mathrm{d}\xi \le \eta,
}
for all $t \in I$. We call $R(\cdot)$ the \emph{compactness modulus}.
\end{rem}
We also remark that any Strichartz norm of the linear part of the evolution of a solution with the compactness property on $I_{\max}$ vanishes as $t \to T_-$ and as $t \to T_+$. A concentration compactness argument then implies that the linear part of the evolution vanishes weakly in $\dot{\mathcal{H}}^{s_p}$; that is,
for each $t_{0} \in I_{\max}$,
\begin{equation}
S(t_{0} - t) \vec u(t) \rightharpoonup 0
\end{equation}
weakly in $\dot{\mathcal{H}}^{s_p}$ as $t \nearrow \sup I$ or $t \searrow \inf I$;
see \cite[Section $6$]{TVZ08} and \cite[Proposition $3.6$]{Shen}.
This implies the following lemma, which we use crucially in the proof of Theorem~\ref{t:main}.
\begin{lem}\emph{\cite[Section $6$]{TVZ08}, \cite[Proposition $3.6$]{Shen}}\label{l:weak} Let $\vec u(t)$ be a solution to~\eqref{eq:nlw} with the compactness property on its maximal interval of existence $I = (T_-, T_+)$. Then for any $t_0 \in I$ we can write
\ant{
\int_{t_0}^T S(t_0 - s) (0, \abs{ u}^{p-1}u ) \, ds \rightharpoonup \vec u(t_0) \quad \text{as } T \nearrow T_+ \quad \text{weakly in } \dot{\mathcal{H}}^{s_p},\\
-\int^{t_0}_T S(t_0 - s) (0, \abs{u}^{p-1} u ) \, ds \rightharpoonup \vec u(t_0) \quad \text{as } T \searrow T_- \quad \text{weakly in } \dot{\mathcal{H}}^{s_p}.
}
\end{lem}
Remark~\ref{r:Reta} indicates that solutions $\vec u(t)$ with the compactness property have uniformly small tails in $\dot{\mathcal{H}}^{s_p}$, where ``tails'' are taken to be centered at $x(t)$ and measured relative to the frequency scale $N(t)$ at which the solutions are concentrating. We would like to use this fact to obtain lower bounds for norms of the solution $u(t)$. The immediate issue that arises is that the object that obeys compactness properties is the pair $\vec u(t, x) = (u(t, x), u_t(t, x))$ and, \emph{a priori}, the solution could satisfy $u(t, \cdot) = 0$ at a fixed time $t$. Nonetheless, by averaging in time, such a lower bound still holds for the solution itself, $u(t)$. We can quantify this bound in several ways, starting with a result proved in~\cite{KV11b}.
\begin{lem}\emph{\cite[Lemma 3.4]{KV11b}}\label{l:mass} Let $\vec u(t)$ be a solution with the compactness property on $I_{\max} = \mathbb{R}$. Then for any $A>0$, there exists $\eta = \eta(A)$ such that
\EQ{
\abs{ \{ t \in [t_0, t_0+ \frac{A}{N(t_0)}] \mid \| u (t) \|_{L^{\frac{3}{2}(p-1)}_x} \ge \eta \} } \ge \frac{\eta}{N(t_0)}
}
for all $t_0 \in \mathbb{R}$.
\end{lem}
Lemma~\ref{l:mass} means that the $L^{\frac{3}{2}(p-1)}_x$ norm of $u(t)$ is nontrivial when averaged over intervals around $t_0$ of length comparable to $N(t_0)^{-1}$, uniformly in $t_0$. By combining this lemma with Remark~\ref{r:Reta} and Sobolev embedding we obtain the following as an immediate consequence.
\begin{cor}[Averaged concentration around $x(t)$]\label{C:acax}
Fix any $\delta_0>0$. Let $\vec u(t)$ be a solution with the compactness property on $I_{\max} = \mathbb{R}$. There exists a constant $C>0$ so that
\EQ{
N(t_0) \int_{t_0}^{t_0 + \frac{\delta_0}{N(t_0)}} \int_{\abs{ x- x(t)} \le \frac{C}{N(t)}} \abs{ u(t, x)}^{\frac{3}{2}(p-1)} \, \mathrm{d} x \, \mathrm{d} t \gtrsim 1
}
for all $t_0 \in \mathbb{R}$.
\end{cor}
One can also deduce the following corollary, also proved in~\cite{KV11b}, which gives a lower bound on the localized $S$ norm of $u(t)$.
\begin{cor}[$S$-norm concentration around $x(t)$] \label{c:Sloc} Let $\vec u(t)$ be a solution with the compactness property on $I_{\max} = \mathbb{R}$.
Then there exist constants $c, C>0$ so that
\EQ{ \label{eq:Sloc}
\int_{t_1}^{t_2} \int_{ \abs{x - x(t)} \le \frac{C}{N(t)}} \abs{ u(t, x) }^{2(p-1)} \, \mathrm{d} x \, \mathrm{d} t \ge c \int_{t_1}^{t_2} N(t) \, \mathrm{d} t
}
for any $t_1, t_2$ such that $t_2 - t_1 \ge \frac{1}{N(t_1)}$.
\end{cor}
\begin{proof}
The proof runs completely parallel to the argument in~\cite[Proof of Corollary 3.5]{KV11b} given for the averaged potential energy.
\end{proof}
The fact that we have only averaged lower bounds on, e.g., the $L^{\frac{3}{2}(p-1)}$ norm of a critical element will not cause too much trouble. We will often pair the above with the fact that the compactness parameters $N(t)$ and $x(t)$ are approximately locally constant; see Lemma~\ref{l:const}.
Lastly, we also need the following estimate, proved in~\cite[Lemma 4.5]{DL1}.
\begin{lem}\emph{\cite[Lemma 4.5]{DL1}} \label{l:DL1lem} Let $\vec u(t)$ have the compactness property on a time interval $I \subset \mathbb{R}$ with scaling parameter $N(t)$. Let $\eta>0$. Then there exists $\delta>0$ such that
\EQ{
\| u \|_{L^{2(p-1)}_{t, x}( [t_0 - \delta/ N(t_0), \, t_0 + \delta/ N(t_0)] \times \mathbb{R}^3)} \le \eta
}
uniformly in $t_0 \in I$.
\end{lem}
\subsection{Analysis of solutions with the compactness property}\label{subsec:analysis_of_props}
In the next subsection, we will prove a classification result for solutions with the compactness property. Our goal is to gather together a list of possibilities for the compactness parameters $N(t)$ and $x(t)$ that is exhaustive in the sense that if we rule out the existence of all members of the list, then Theorem~\ref{t:main} is true. Before stating these cases, we need to distinguish between two scenarios based on how fast $x(t)$ is moving relative to the speed of light. To make this distinction precise, we have the following definition.
\begin{defn} \label{d:sl}
Let $\vec u(t)$ be a solution to~\eqref{eq:nlw} with the compactness property on $I = \mathbb{R}$ with parameters $x(t)$ and $N(t) \ge 1$. We will say that $x(t)$ is \emph{subluminal} if there exists a constant $A>1$ so that for all $t_0 \in \mathbb{R}$ there exists $t \in [t_0, t_0 + \frac{A}{N(t_0)}]$ such that
\EQ{
\abs{x(t) - x(t_0)} \le \abs{ t - t_0} - \frac{1}{ A N(t_0)}.
}
\end{defn}
\begin{prop}\label{p:cases}
Suppose $\vec u(t)$ is a solution to~\eqref{eq:nlw} with the compactness property on its maximal interval of existence $I_{\max}$ with compactness parameters $N(t)$ and $x(t)$. We can assume without loss of generality in the arguments that follow that $I_{\max}$, $N(t)$, and $x(t)$ fall into one of the following four scenarios:
\begin{itemize}
\item[(I)] \emph{The soliton-like critical element}: $I_{\max} = \mathbb{R}$, $N(t) \equiv 1$ for all $t \in\mathbb{R}$, and $x(t)$ is subluminal in the sense of Definition~\ref{d:sl}.
\item[(II)] \emph{The two-sided concentrating critical element}: $I_{\max} = \mathbb{R}$, $N(t) \ge 1$ for all $t \in \mathbb{R}$, $\limsup_{t \to \pm \infty} N(t) = \infty$, and $x(t)$ is subluminal.
\item[(III)] \emph{The self-similar-like critical element}: $I_{\max} = (0, \infty)$, $N(t) = \frac{1}{t}$, and $x(t)\equiv 0$.
\item[(IV)] \emph{The traveling wave critical element}: $I_{\max} = \mathbb{R}$, $N(t) \equiv 1$ for all $t \in \mathbb{R}$, and $|x(t) - (t, 0, 0)| \lesssim \sqrt{\abs{t}}$ for all $t \in \mathbb{R}$.
\end{itemize}
\end{prop}
\begin{rem} \label{r:p=3b}
In the case $p=3$, one must take into account the action of the Lorentz group, which will introduce additional cases to the list of critical elements in Proposition~\ref{p:cases}. For $p \neq 3$, the hypothesis~\eqref{eq:fantasy} compactifies the action of the Lorentz group in the Bahouri-G\'erard profile decomposition at regularity~$\dot{\mathcal{H}}^{s_p}$, which is why only translations $x(t)$ and scalings $N(t)$ appear in the descriptions of critical elements. However, because $\dot{\mathcal{H}}^{\frac{1}{2}}$ is invariant under the action of the Lorentz group, one must confront critical elements with velocity $\ell(t)$ approaching the speed of light. See the work of Ramos~\cite{Ramos1, Ramos2} for Bahouri-G\'erard type profile decompositions in this setting.
\end{rem}
Before proving Proposition~\ref{p:cases}, we note that ruling out Cases (I)--(IV) in the statement of the proposition will prove our main result, Theorem~\ref{t:main}. Hence we will now focus on establishing Proposition~\ref{p:cases} and proving that such critical elements cannot exist.
\medskip
We will prove this proposition in several steps. First, we will reduce the frequency parameter $N(t)$ to one of three possible cases. We state these reductions for $N(t)$, but we omit the proof, as it follows readily from arguments similar to those in \cite[Theorem 5.25]{KVClay}.
\begin{prop}\label{p:3cases}
Let $\vec u(t)$ denote the critical element found in Proposition~\ref{p:ce}. Passing to subsequences, taking limits, and using scaling considerations and time reversal, we can assume, without loss of generality, that $T_+(\vec u) = + \infty$ and that the frequency scale $N(t)$ and maximal interval of existence $I_{\max} =I_{\max}(\vec u)$ satisfy one of the following three possibilities:
\begin{itemize}
\item \emph{(Soliton-like scale)} $I_{\max} = \mathbb{R}$ and
\ant{
N(t) \equiv 1 \quad \forall \, t \in \mathbb{R}.
}
\item \emph{(Doubly concentrating scale)} $I_{\max} = (-\infty, \infty)$ and
\ant{
\limsup_{t \to -\infty} N(t) =\infty, \quad \limsup_{t \to \infty} N(t) = \infty, \quad \text{and} \quad N(t) \ge 1 \quad \forall \, t \in \mathbb{R}.
}
\item \emph{(Self-similar scale)} $I_{\max} = (0, \infty)$ and $N(t) = t^{-1}$.
\end{itemize}
\end{prop}
We will now make a few further reductions, mostly concerning the spatial center $x(t)$ of a critical element that is global in time.
We will show that in all cases where we have a solution with the compactness property whose translation parameter $x(t)$ fails to be subluminal, we may extract a traveling wave solution. To prove this, we will need to analyze the properties of solutions with the compactness property, and more specifically, properties of their spatial centers $x(t)$. We turn to this analysis now. First, we note that in the case that $x(t)$ is subluminal (see Definition~\ref{d:sl}) we can derive the following consequence.
\begin{lem}\emph{\cite[Proposition 4.3]{KV11b}} \label{l:sl}
Let $\vec u(t)$ be a solution to~\eqref{eq:nlw} with the compactness property on $I = \mathbb{R}$ with parameters $x(t)$ and $N(t) \ge 1$. Suppose $x(0)=0$ and that $x(t)$ is subluminal in the sense of Definition~\ref{d:sl}. Then there exists a $\delta_0>0$ so that
\EQ{
\abs{x(t) - x(\tau)} \le (1- \delta_0) \abs{t - \tau}
}
for all $t, \tau$ with
\EQ{
\abs{t- \tau} \ge \frac{1}{\delta_0 N_{t, \tau}},
}
where $N_{t, \tau} := \inf_{s \in [t, \tau]} N(s)$.
\end{lem}
\begin{proof} See the proof of Proposition 4.3 in \cite{KV11b}.
\end{proof}
Using Lemma~\ref{l:const} together with Lemma~\ref{l:mass} and a domain of dependence argument based on the finite speed of propagation, we obtain a preliminary bound on how fast $x(t)$ can grow. (See e.g. \cite[Proposition~4.1]{KV11b}.)
\begin{lem}
Let $\vec u(t)$ have the compactness property on a time interval $I \subset \mathbb{R}$ with parameters $N(t)$ and $x(t)$. Then there exists a constant $C>0$ so that for any $t_1, t_2 \in I$ we have
\EQ{ \label{eq:dxt}
\abs{x(t_1) - x(t_2)} \le \abs{t_1 - t_2} + \frac{C}{N(t_1)} + \frac{C}{N(t_2)}.
}
In fact, if $\vec u(t)$ is global in time, then $N(t) \abs{t} \to \infty$ as $\abs{t} \to \infty$; normalizing so that $x(0) = 0$, the above then yields
\EQ{ \label{eq:fsp2}
\limsup_{t \to \pm \infty} \frac{\abs{ x(t)}}{\abs{t}} \le 1.
}
\end{lem}
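For the reader's convenience, \eqref{eq:fsp2} follows from~\eqref{eq:dxt}: taking $t_1 = 0$ and $t_2 = t$ and using $x(0) = 0$,
\ant{
\frac{\abs{x(t)}}{\abs{t}} \le 1 + \frac{C}{\abs{t} N(t)} + \frac{C}{\abs{t} N(0)} \longrightarrow 1 \quad \text{as } \abs{t} \to \infty,
}
since $N(t)\abs{t} \to \infty$.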
\begin{rem}
We remark that, by finite speed of propagation and compactness, we can assume that
\EQ{
\lim_{t \to T_{\pm}(\vec u)} \abs{t} N(t) \in [1, \infty).
}
\end{rem}
Note that according to the definition of the compactness property, the function $x(t)$ is not uniquely defined; indeed, one can always modify $x(t)$ up to a radius of $\mathcal{O}(N(t)^{-1})$, provided one also modifies the compactness modulus appropriately.
Note, however, that the compactness property, together with monotone convergence, prevents $\vec{u}$ from concentrating on very narrow strips, as measured in units of $N(t)^{-1}$. See \cite[Lemma~4.2]{KV11b}.
\begin{lem}\label{L:strips} Let $\vec u$ be a solution to \eqref{eq:nlw} with the compactness property on an interval $I$. Then for any $\eta>0$, there exists $c(\eta)>0$ so that
\[
\sup_{\omega\in\mathbb{S}^2} \int_{|\omega\cdot[x-x(t)]|\leq c(\eta)N(t)^{-1}} ||\nabla|^{s_p} u|^2 + ||\nabla|^{s_p-1} u_t|^2\,\mathrm{d} x <\eta.
\]
\end{lem}
To deal with the ambiguity in the definition of $x(t)$, we use the notion of a `centered' spatial center as in \cite{KV11b}, that is, a choice of $x(t)$ such that each plane through $x(t)$ partitions $\vec u(t)$ into two non-trivial pieces.
\begin{defn}\label{D:centered} Let $\vec u$ be a solution to \eqref{eq:nlw} with the compactness property on an interval $I$ with spatial center $x(t)$. We call $x(t)$ \emph{centered} if there exists $C(u)>0$ such that for all $\omega\in\mathbb{S}^2$ and $t\in I$,
\[
\int_{\omega\cdot[x-x(t)]>0} ||\nabla|^{s_p} u(t,x)|^2 + ||\nabla|^{s_p-1} u_t(t,x)|^2 \,\mathrm{d} x \geq C(u).
\]
\end{defn}
\begin{prop}\label{P:centered} Let $\vec u$ be a global solution to \eqref{eq:nlw} with the compactness property. Then there exists a centered spatial center for $\vec u$.
\end{prop}
\begin{proof} The argument is similar to the proof of \cite[Proposition~4.1]{KV11b}. Let $x(t)$ be any spatial center for $\vec{u}$. To shorten formulas, we introduce the notation
\[
\varphi(t,x)=||\nabla|^{s_p} u(t,x)|^2 + ||\nabla|^{s_p-1} u_t(t,x)|^2.
\]
By compactness, there exists $C=C(u)$ large enough that
\[
\inf_{t\in\mathbb{R}}\int_{B(t)} \varphi(t,x)\,\mathrm{d} x \gtrsim_u 1,\qtq{where}B(t):=\{x:|x-x(t)|\leq CN(t)^{-1}\}.
\]
Now set
\[
\widetilde x(t) = x(t) + \frac{\int_{B(t)}[x-x(t)]\varphi(t,x)\,\mathrm{d} x}{\int_{B(t)} \varphi(t,x)\,\mathrm{d} x}.
\]
By definition, $|x(t)-\widetilde x(t)| \leq CN(t)^{-1}$, and hence $\widetilde x(t)$ is a valid spatial center for $\vec u$ (one only needs to add $C$ to the compactness modulus). We now claim that $\widetilde x(t)$ is centered. To see this, first note that by construction one has
\[
\int_{B(t)} \omega\cdot[x-\widetilde x(t)] \varphi(t,x)\,\mathrm{d} x = 0.
\]
On the other hand, combining the non-triviality on $B(t)$ with Lemma~\ref{L:strips}, we have
\[
\int_{B(t)\cap \{|\omega\cdot[x-\widetilde x(t)]|>cN(t)^{-1}\}} \varphi(t,x)\,\mathrm{d} x\gtrsim_u 1
\]
for some $c=c(u)>0$. Thus
\[
\int_{B(t)}|\omega\cdot[x-\widetilde x(t)]|\varphi(t,x)\,\mathrm{d} x \gtrsim_u N(t)^{-1},
\]
and so, writing $y:=\omega\cdot[x-\widetilde x(t)]$ and noting that the mean-zero identity above forces $\int_{B(t)} y_+\varphi\,\mathrm{d} x = \int_{B(t)} y_-\varphi\,\mathrm{d} x = \frac12\int_{B(t)}|y|\varphi\,\mathrm{d} x$,
\[
\int_{B(t)} \{\omega\cdot[x-\widetilde x(t)]\}_+ \varphi(t,x)\,\mathrm{d} x \gtrsim_u N(t)^{-1},
\]
where $\{\cdot\}_+$ denotes the positive part. As $|x-\widetilde x(t)|\leq 2CN(t)^{-1}$ for $x\in B(t)$, we finally deduce
\[
1\lesssim_u \int_{B(t)} \frac{\{\omega\cdot[x-\widetilde x(t)]\}_+}{2CN(t)^{-1}} \varphi(t,x)\,\mathrm{d} x\lesssim_u \int_{\omega\cdot[x-\widetilde x(t)]>0}\varphi(t,x)\,\mathrm{d} x
\]
for all $\omega\in\mathbb{S}^2$, as needed.\end{proof}
\begin{prop}\label{p:hans} Suppose that $\vec u(t)$ is a solution with the compactness property on $\mathbb{R}$ with parameters $N(t)$ and $x(t)$. Suppose in addition that $N(t) = 1$ for all $t \in \mathbb{R}$, and that $x(t)$ fails to be subluminal in the sense of Definition~\ref{d:sl}. Then there exists a (possibly different) solution $\vec w(s)$ to~\eqref{eq:nlw} with the compactness property on $\mathbb{R}$ with parameters $N(s)$ and $x(s)$ satisfying
\EQ{
N(s) \equiv 1, \quad |x(s) - (s, 0, 0)| \lesssim \sqrt{\abs{s}} \quad \forall s \in \mathbb{R}.
}
\end{prop}
\begin{proof}
Let $\vec u(t)$ be a solution to~\eqref{eq:nlw} with the compactness property on $\mathbb{R}$ with parameters $N(t) \equiv 1$ and $x(t)$ failing to be subluminal. This means we can find a sequence $t_m$ and intervals
\ant{
I_m:= [t_m, t_m + m]
}
such that
\EQ{\label{eq:nsl1}
\abs{x(t_m) - x(t)} \ge \abs{t_m - t} - \frac{1}{m} \quad \forall t \in I_m.
}
We construct a sequence as follows. Set
\EQ{
\vec u_m(0) := \vec u(t_m, \cdot - x(t_m)).
}
Using the pre-compactness of the trajectory of $\vec u$ modulo the translations by $x(t)$ we can (passing to a subsequence) extract a strong limit
\EQ{
\vec u_m(0) \to \vec u_\infty(0) \in \dot\mathcal{H}^{s_p} \quad\text{as}\quad m \to \infty.
}
Let $\vec u_\infty(\tau)$ be the solution to~\eqref{eq:nlw} with initial data $\vec u_{\infty}(0)$. One can show that we must have $[0, \infty) \subset I_{\max}(\vec u_{\infty})$ and that $\vec u_{\infty}$ satisfies the following compactness property on $[0, \infty)$: the set
\EQ{
K_\infty := \{ \vec u_\infty( \tau, \cdot - x_\infty(\tau)) \mid \tau \in [0, \infty)\}
}
is pre-compact in $\dot\mathcal{H}^{s_p}(\mathbb{R}^3)$, where for each $\tau>0$ the function $x_\infty(\tau)$ is defined by
\EQ{
x_\infty(\tau):= \lim_{m \to \infty} ( x(t_m + \tau) - x( t_m)).
}
Note that for each $\epsilon>0$ and for all $\tau \in [0, \infty)$ we can choose $M>0$ large enough so that for all $m \ge M$ we have
\ant{
\abs{ x_\infty (\tau)} + \epsilon \ge \abs{ x(t_m + \tau) - x( t_m)} \ge \abs{\tau} - \frac{1}{m},
}
where the last inequality follows from~\eqref{eq:nsl1}. Letting $m \to \infty$ above, we conclude that in fact
\EQ{
\abs{x_\infty(\tau)} \ge \tau \quad \forall \tau \in [0, \infty).
}
By finite speed of propagation (see~\eqref{eq:fsp2}) we conclude that in fact
\ant{
\lim_{ \tau \to \infty} \frac{ \abs{x_\infty(\tau)}}{\tau} = 1.
}
We now refine our solution again, this time constructing a suitable limit from $\vec u_{\infty}(\tau)$. Indeed, using the previous two displays, choose a sequence $\tau_m \to \infty$ so that
\EQ{
\tau \le \abs{x_\infty(\tau)} \le \tau + 2^{-m} \quad \forall \tau \in J_m:= [\tau_m- m, \tau_m +m].
}
It is clear from elementary geometry that for large enough $m$,
\EQ{\label{eq:dxtau}
\abs{ x_\infty(\tau) - x_\infty(t)} \ge \abs{ t - \tau} - \frac{1}{m} \quad \forall t, \tau \in J_m.
}
As before we extract a limit from the sequence
\ant{
\vec u_{\infty,m}(0) := \vec u_\infty(\tau_m, \cdot - x_\infty(\tau_m)) \to \vec v(0) \in \dot\mathcal{H}^{s_p}
}
and we note that the solution $\vec v(s)$ to~\eqref{eq:nlw} with data $\vec v(0)$ has the compactness property on $\mathbb{R}$ with parameters $\widetilde N(s) \equiv 1$ and $\widetilde x(s)$ defined by
\ant{
\widetilde x(s):= \lim_{m \to \infty} ( x_{\infty}(\tau_m + s) - x_{\infty}(\tau_m)).
}
Using~\eqref{eq:dxtau} along with~\eqref{eq:dxt} we see that for all $s_1, s_2 \in \mathbb{R}$ we have
\EQ{ \label{eq:tiC}
\abs{s_1 - s_2} \le \abs{ \widetilde x(s_1) - \widetilde x(s_2)} \le \abs{s_1 -s_2} + \widetilde C
}
for some absolute constant $\widetilde C>0$, and moreover we still have
\EQ{ \label{xbigs}
1 \le \frac{\abs{\widetilde x(s)}}{\abs{s}} \to 1 \quad\text{as}\quad s \to \pm \infty.
}
Now we express $\widetilde x(s)$ in polar coordinates, finding $r(s) \ge 0$ and $\omega(s) \in \mathbb{S}^2$ so that
\EQ{
\widetilde x(s) = r(s) \omega (s) \quad \forall s \in [0, \infty).
}
Note that by~\eqref{xbigs} we have
\EQ{
\frac{r(s)}{s} \to 1 \quad\text{as}\quad s \to \infty.
}
Since $\omega(s) \in \mathbb{S}^2$, we can find a sequence $s_m \to \infty$ and (up to passing to a subsequence) a limit $\omega_0$ so that
\EQ{
\omega(s_m) \to \omega_0 \quad\text{as}\quad m \to \infty.
}
To prove the claim, it suffices to verify that
\[
|\widetilde{x}(s) - s\omega_0| \leq C \sqrt{s},
\]
since the desired result then follows by applying a fixed spatial rotation. Note that
\[
|s_2\omega(s_2)-s_1\omega(s_1)|^2 = |s_1-s_2|^2 + s_1 s_2 |\omega(s_2)-\omega(s_1)|^2,
\]
which by finite speed of propagation yields
\[
|\omega(s_2)-\omega(s_1)| \leq \sqrt{\frac{2C|s_1-s_2|+C^2}{s_1s_2}}.
\]
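To clarify the deduction of the last bound: writing $C$ for the constant $\widetilde C$ from \eqref{eq:tiC} and identifying $r(s_i)$ with $s_i$ (which is harmless at the level of the $O(\sqrt{s})$ error being tracked), the identity above combines with the upper bound in \eqref{eq:tiC} to give
\[
s_1 s_2 |\omega(s_2)-\omega(s_1)|^2 = |s_2\omega(s_2)-s_1\omega(s_1)|^2 - |s_1-s_2|^2 \leq \bigl(|s_1-s_2|+C\bigr)^2 - |s_1-s_2|^2 = 2C|s_1-s_2|+C^2.
\]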
Then
\begin{align*}
|&(s_m+s)\omega(s_m+s)-s_m\omega(s_m)-s\omega_0| \\
&\leq |s_m+s|\bigl| \omega(s_m+s)-\omega(s_m)\bigr| + s|\omega(s_m)-\omega_0| \\
& \leq \sqrt{(2Cs+C^2)(1+\tfrac{s}{s_m})} + s|\omega(s_m)-\omega_0|,
\end{align*}
which, upon letting $m\to\infty$ (so that $\tfrac{s}{s_m}\to 0$ and $|\omega(s_m)-\omega_0|\to 0$), implies
\[
|\widetilde{x}(s)-s\omega_0| \leq \sqrt{2Cs+C^2},
\]
as required.
\end{proof}
In the case that $N(t) \ge 1$ and $x(t)$ is not subluminal, we will now show that we can also reduce to the case when $N(t) = 1$ for all $t \in \mathbb{R}$ and $x(t) = (t, 0, 0) + O(\sqrt{\abs{t}})$. We will need the following lemma.
\begin{lem}\label{l:mon}
Let $\vec u(t)$ have the compactness property on $I \subset \mathbb{R}$ with parameters $N(t)$ and $x(t)$. Let $t_1, t_2 \in I$ be any times so that $N(t_1) \le N(t_2)$. Then there exists a uniform constant $c \in (0, 1)$ such that
\EQ{ \label{eq:mon}
\abs{x(t_1) - x(t_2)} \ge \abs{ t_1 - t_2} - \frac{c}{ N(t_1)} \Longrightarrow N(t_2) \le \frac{1}{c^2} N(t_1).
}
\end{lem}
\begin{proof}[Proof of Lemma~\ref{l:mon}]
The argument adapts readily from \cite[Lemma~4.4]{KV11b}, using the arguments from \S\ref{subsec:analysis_of_props}. Exploiting time-reversal symmetry, space-translation symmetry, and rotation symmetry, we may assume $t_1<t_2$, $x(t_1)=0$, and $x(t_2)=(x_1(t_2),0,0)$ with $x_1(t_2)\geq 0$. Further, we may choose $x(t)$ to be centered by Proposition \ref{P:centered}.
Suppose for contradiction that for times $t_1, t_2$ as in the statement of the lemma,
\[
\abs{x(t_1) - x(t_2)} \ge \abs{ t_1 - t_2} - \frac{c}{ N(t_1)}
\]
but $cN(t_1)^{-1} \geq c^{-1} N(t_2)^{-1}$, where $c=c(u)$ will be chosen sufficiently small below.
Let $\psi:\mathbb{R}\to[0,\infty)$ be a cutoff so that $\psi=1$ for $x\leq -1$ and $\psi =0$ for $x\geq -\frac12$. Set $\psi_2(x_1)=\psi(\frac{x_1-x_1(t_2)}{cN(t_1)^{-1}})$. Then, given $\eta>0$ and choosing $c=c(\eta)$ sufficiently small, we have
\[
\|(\psi_2u(t_2),\psi_2u_t(t_2))\|_{\mathcal{H}^{s_p}} < \eta.
\]
Choosing $\eta$ small enough, the small-data theory and finite speed of propagation for \eqref{eq:nlw} imply
\[
\int_{\Omega} ||\nabla|^{s_p} u(t_1,x)|^2 + ||\nabla|^{s_p-1} u_t(t_1,x)|^2 \,\mathrm{d} x \lesssim \eta^2,
\]
where
\[
\Omega = \{x:x_1 \leq x_1(t_2)-(t_2-t_1)-cN(t_1)^{-1}\}.
\]
Using the assumption on $|x(t_2)-x(t_1)|$ and the normalizations above, one finds
\[
\Omega\supset\{x:-e_1\cdot[x-x(t_1)]\geq 2cN(t_1)^{-1}\},
\]
so that
\[
\int_{-e_1\cdot[x-x(t_1)]\geq 2cN(t_1)^{-1}} ||\nabla|^{s_p} u(t_1,x)|^2 + ||\nabla|^{s_p-1} u_t(t_1,x)|^2 \,\mathrm{d} x \lesssim \eta^2.
\]
On the other hand, choosing $c=c(\eta)$ sufficiently small, Lemma~\ref{L:strips} implies
\[
\int_{0<-e_1\cdot[x-x(t_1)]< 2cN(t_1)^{-1}} ||\nabla|^{s_p} u(t_1,x)|^2 + ||\nabla|^{s_p-1} u_t(t_1,x)|^2 \,\mathrm{d} x < \eta^2.
\]
We now choose $\eta^2\ll C(u)$, where $C(u)$ is as in Definition~\ref{D:centered}. The two estimates above combine to bound the integral of the energy density over the half-space $\{-e_1\cdot[x-x(t_1)]> 0\}$ by $O(\eta^2)$, contradicting the fact that $x(t)$ is centered.\end{proof}
We are now in a position to prove that we can extract a traveling wave solution from any solution with the compactness property whose translation parameter $x(t)$ fails to be subluminal.
\begin{prop} \label{p:nsl}
Suppose that $\vec u(t)$ is a solution with the compactness property on $\mathbb{R}$ with parameters $N(t)$ and $x(t)$. Suppose that either $N(t)$ is soliton-like or doubly concentrating in the sense of Proposition~\ref{p:3cases} and that $x(t)$ fails to be subluminal in the sense of Definition~\ref{d:sl}. Then there exists a (possibly different) solution $\vec w(s)$ to~\eqref{eq:nlw} with the compactness property
on $\mathbb{R}$ with parameters $N(s)$ and $x(s)$ satisfying
\EQ{
N(s) \equiv 1, \quad |x(s) - (s, 0, 0)| \lesssim \sqrt{\abs{s}} \quad \forall s \in \mathbb{R}.
}
\end{prop}
\begin{proof}[Proof of Proposition~\ref{p:nsl}]
Note that by Proposition~\ref{p:hans} it suffices to show that we can extract a solution with the compactness property on $\mathbb{R}$ with parameters $N(t) =1$ and $x(t)$ failing to be subluminal. By our assumption that $x(t)$ fails to be subluminal, for each $m \in \mathbb{N}$ there exists $t_m \in \mathbb{R}$ so that
\EQ{\label{eq:nsl2}
\abs{ x(t_m) - x(t)} \ge \abs{t - t_m} - \frac{1}{m N(t_m)} \quad \forall t \in I_m:= [t_m, \, t_m + \tfrac{m}{N(t_m)}].
}
We will show that $N(t) \simeq N(t_m)$ for all $t \in I_m$, with constants independent of $m$. First assume that $N(t_m) \le N(t)$. Then
by Lemma~\ref{l:mon} we can find a constant $c>0$ so that
\EQ{\label{eq:ub2}
c^2 N(t) \le N(t_m) \le N(t) \quad \forall t \in I_m.
}
Next assume that
\ant{
N(t) \le N(t_m).
}
This means that
\ant{
-\frac{1}{N(t_m)} \ge - \frac{1}{N(t)},
}
and thus from~\eqref{eq:nsl2} we see that
\EQ{
\abs{ x(t_m) - x(t)} \ge \abs{t - t_m} - \frac{1}{m N(t_m)} \ge \abs{t - t_m} -\frac{1}{m N(t)}.
}
Another application of Lemma~\ref{l:mon} then gives
\EQ{
N(t) \le N(t_m) \le \frac{1}{c^2} N(t).
}
As we can assume in Lemma~\ref{l:mon} that $c<1$, we deduce that
\EQ{\label{eq:cnc}
c^2 N(t) \le N(t_m) \le \frac{1}{c^2} N(t) \quad \forall t \in I_m.
}
We can then extract, in the usual manner, a new solution $\vec w(s)$ with the compactness property on $[0, \infty)$ with
\EQ{
&\widetilde N(s):= \lim_{m \to \infty} \frac{N( t_m + \frac{s}{N(t_m)})}{N(t_m)}, \\
&\widetilde x(s) := \lim_{m \to \infty} N(t_m) \Big( x\big(t_m + \tfrac{s}{N(t_m)}\big) - x(t_m)\Big).
}
Note that by~\eqref{eq:cnc} we must have
\EQ{
c_1 \le \widetilde N(s) \le C_1 \quad \forall \, s \in[0, \infty).
}
Moreover, using~\eqref{eq:nsl2}, for each $\epsilon>0$ we can find $M>0$ large enough so that for each $m \ge M$ we have
\ant{
\abs{ \widetilde x(s)} + \epsilon &\ge \abs{N(t_m) \big( x\big(t_m + \tfrac{s}{N(t_m)}\big) - x(t_m)\big) }\\
& \ge N(t_m) \Big( \frac{s}{N(t_m)} - \frac{1}{m N(t_m)}\Big) \ge s - \frac{1}{m}.
}
Letting $m \to \infty$ we obtain
\EQ{
\abs{\widetilde x(s)} \ge s \quad \forall s \in [0, \infty).
}
Noting that $\widetilde x(0) = 0$ and combining the above with~\eqref{eq:fsp2}, we conclude that
\EQ{
1 \le \frac{ \abs{\widetilde x(s)}}{s} \to 1 \quad\text{as}\quad s \to \infty.
}
From here it is straightforward to obtain a new solution $\vec w(s)$ with the compactness property on all of $\mathbb{R}$ with parameters $N(s) \equiv 1$ and $x(s)$ failing to be subluminal in the sense of Definition~\ref{d:sl}, and we apply Proposition \ref{p:hans} to conclude.
\end{proof}
Finally, we now have the ingredients necessary to prove Proposition~\ref{p:cases}.
\begin{proof}[Proof of Proposition~\ref{p:cases}]
Suppose $\vec u(t)$ is a solution to~\eqref{eq:nlw} with the compactness property on its maximal interval of existence $I_{\max}$ with compactness parameters $N(t)$ and $x(t)$. By Proposition~\ref{p:3cases}, if the solution has the compactness property with $N(t) = t^{-1}$, then we may also assume without loss of generality that it has the compactness property with translation parameter $x(t) = 0$: by finite speed of propagation, $x(t)$ must remain bounded, and hence we may, up to passing to a subsequence, obtain a pre-compact solution with $x(t) = 0$ by applying a fixed translation. Thus, in the case that $N(t) = t^{-1}$ we obtain a self-similar solution, i.e., we have reduced to case (III).
In the remaining cases we must address different scenarios depending on whether or not $x(t)$ is subluminal in the sense of Definition~\ref{d:sl}. If $x(t)$ is subluminal, then we have reduced ourselves to cases (I) and (II). If $x(t)$ fails to be subluminal, then by Proposition~\ref{p:nsl} we can find a critical element as in the traveling wave scenario, i.e., case (IV). This concludes the proof.
\end{proof}
\section{The soliton-like critical element} \label{s:soliton}
In this section we show that the soliton-like critical element, that is, case (I) from Proposition~\ref{p:cases}, cannot exist. The main result is the following proposition:
\begin{prop}\label{prop:soliton_zero}
There are no soliton-like critical elements for~\eqref{eq:nlw}, in the sense of Case (I) of Proposition \ref{p:cases}.
\end{prop}
We recall that soliton-like means that $\vec u(t)$ is a global solution to~\eqref{eq:nlw} with the compactness property on $\mathbb{R}$ as in Definition~\ref{d:cp}, with parameters $N(t) \equiv 1$ and $x(t)$ subluminal in the sense of Definition~\ref{d:sl}. We will show that any such solution with the compactness property is necessarily $\equiv 0$.
The proof will be accomplished in two main steps. We are ultimately aiming to employ a rigidity argument based on a virial identity, which will show that any such critical element must be identically $0$. The key point here is that in order to access the virial identity, which is at $\dot\mathcal{H}^1$ regularity, and to use it to prove Proposition \ref{prop:soliton_zero}, we first must prove that our critical element actually lies in a pre-compact subset of~$\dot\mathcal{H}^1$. Thus, we must first show that a soliton-like critical element is more regular than expected. In fact, we will prove that the trajectory $K$ of any soliton-like critical element (see Definition \ref{d:cp}) must be pre-compact in $\dot\mathcal{H}^1 \cap \dot\mathcal{H}^{s_p}$.
Throughout this section, we assume towards a contradiction that $\vec u(t)$ is a critical element with $x(t)$ subluminal in the sense of Definition~\ref{d:sl} and $N(t)\equiv 1$. In particular, by Lemma \ref{l:sl} there exists $\delta_0>0$ so that
\[
|x(t)-x(\tau)|<(1-\delta_0)|t-\tau|\qtq{for all}|t-\tau|>\frac1{\delta_0}.
\]
\subsection{Additional regularity} \label{s:soliton-reg} We first prove that if the soliton-like critical element $\vec u$ has some additional regularity to begin with, then we can achieve $\dot \mathcal{H}^1$ regularity. The key ingredient in our proof will be a double Duhamel argument, which will enable us to gain the requisite regularity for critical elements, while our main technical tool will be the use of a frequency envelope which controls the $\dot \mathcal{H}^1$ norm (see Definition~\ref{d:fren}). In order to exploit the sharp Huygens principle, we will use the following modified frequency projection operators: let $\psi\geq 0$ be a smooth function supported on $|x|\leq 2$ satisfying $\psi=1$ on $|x|\leq 1$. For $k\geq 0$, let
\begin{align}\label{qk_def}
Q_{<k} f(x) = \int_{\mathbb{R}^3} 2^{3k} \psi(2^k(x-y))f(y)\,dy.
\end{align}
These satisfy the same estimates as the usual Littlewood--Paley projections (which instead use sharp cutoffs in frequency space), e.g.\ the Bernstein estimates in Lemma~\ref{l:bern}.
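For instance, the analogue of the Bernstein estimates follows directly from Young's convolution inequality: since $Q_{<k}$ is convolution with the kernel $2^{3k}\psi(2^k\cdot)$, for exponents $1\leq r\leq q\leq\infty$ and $1+\frac1q=\frac1a+\frac1r$ we have
\[
\|Q_{<k} f\|_{L^q(\mathbb{R}^3)} \leq \bigl\|2^{3k}\psi(2^k\cdot)\bigr\|_{L^a(\mathbb{R}^3)}\|f\|_{L^r(\mathbb{R}^3)} = 2^{3k(\frac1r-\frac1q)}\|\psi\|_{L^a}\|f\|_{L^r}.
\]
We record this standard computation only for the reader's convenience.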
We summarize the main ingredient in the proof of Proposition \ref{prop:soliton_zero}, the aforementioned additional regularity result, in the following proposition.
\begin{prop} \label{p:sol-reg}
Suppose $\vec u$ is a soliton-like critical element. Then
\[
\vec u\in L_t^\infty\dot\mathcal{H}^{s_p} \quad \Longrightarrow \quad \vec u\in L_t^\infty \dot\mathcal{H}^{s}
\]
for some $s > 1$. In particular, the set
\EQ{
K := \{ \vec u(t, \cdot - x(t)) \mid t \in \mathbb{R}\} \subset \dot{\mathcal{H}}^{s_p} \cap \dot\mathcal{H}^{1}
}
is pre-compact in $\dot{\mathcal{H}}^{s_p} \cap \dot\mathcal{H}^{1}$.
\end{prop}
We will prove Proposition \ref{p:sol-reg} in several steps. To make this precise, we define the parameter
\EQ{ \label{eq:s0def}
s_0=s_p+\frac{5-p}{2p(p-1)}=\frac32-\frac5{2p}.
}
This exponent is chosen so that $\dot \mathcal{H}^{s_0}$ has the same scaling as $L_t^p L_x^{2p}$, and we note that, crucially, $s_p < s_0 < 1$.
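For the reader's convenience, we record the arithmetic behind \eqref{eq:s0def}. With the usual critical exponent $s_p=\frac32-\frac{2}{p-1}$ for \eqref{eq:nlw} in three space dimensions,
\[
s_p+\frac{5-p}{2p(p-1)}
=\frac32-\frac{2}{p-1}+\frac{5-p}{2p(p-1)}
=\frac32+\frac{-4p+(5-p)}{2p(p-1)}
=\frac32-\frac{5(p-1)}{2p(p-1)}
=\frac32-\frac{5}{2p},
\]
and both inequalities $s_p<s_0$ and $s_0<1$ reduce to $p<5$.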
\subsection{The jump from \texorpdfstring{$\dot {\mathcal{H}}^{s_0}({\mathbb R}^3)$}{H} regularity to \texorpdfstring{$\dot {\mathcal{H}}^{1}({\mathbb R}^3)$}{H} regularity} We begin with the first, easier gain in regularity, namely passing from $\dot \mathcal{H}^{s_0}(\mathbb{R}^3)$ to $\dot \mathcal{H}^1(\mathbb{R}^3)$.
\begin{prop}\label{P:reg-jump} Suppose $\vec u$ is a soliton-like critical element. Let $s_0>s_p$ be defined as in~\eqref{eq:s0def}. Then
\[
\vec u\in L_t^\infty\dot\mathcal{H}^{s_0} \quad \Longrightarrow \quad \vec u\in L_t^\infty \dot\mathcal{H}^{1}.
\]
\end{prop}
\begin{proof}
By time-translation symmetry, it suffices to estimate the $\dot \mathcal{H}^{1}$-norm at time $t=0$. We complexify the solution, letting
\[
w = u+ \frac{i}{\sqrt{-\Delta}} u_t.
\]
Then
\[
\|w(t) \|_{\dot H^1} \simeq \|\vec u(t) \|_{\dot H^1 \times L^2},
\]
and if $\vec u(t)$ solves \eqref{eq:nlw}, then $w(t)$ is a solution to
\begin{align}
w_t = -i \sqrt{-\Delta} w \pm \frac{i}{\sqrt{-\Delta}} |u|^{p-1} u.
\end{align}
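Indeed, this is a direct computation from \eqref{eq:nlw} (with the same choice of sign for the nonlinearity): using $\frac{i}{\sqrt{-\Delta}}\Delta = -i\sqrt{-\Delta}$,
\[
w_t = u_t + \frac{i}{\sqrt{-\Delta}}u_{tt}
= u_t + \frac{i}{\sqrt{-\Delta}}\bigl(\Delta u \pm |u|^{p-1}u\bigr)
= -i\sqrt{-\Delta}\Bigl(u + \frac{i}{\sqrt{-\Delta}}u_t\Bigr) \pm \frac{i}{\sqrt{-\Delta}}|u|^{p-1}u,
\]
where we also used $u_t = -i\sqrt{-\Delta}\cdot\frac{i}{\sqrt{-\Delta}}u_t$.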
By Duhamel's principle, for any $T$, we have
\[
w(0) = e^{iT \sqrt{-\Delta}} w(T) \pm \frac{i}{\sqrt{-\Delta}} \int_T^0 e^{i \tau \sqrt{-\Delta}} F(u)(\tau) \,\mathrm{d}\tau,
\]
where $F(u) = |u|^{p-1} u$. By compactness (see Lemma~\ref{l:weak}),
\begin{equation}\label{weakH10}
\lim_{T\to\infty} Q_{<k} e^{-i T\sqrt{-\Delta}} w (T) = \lim_{T\to\infty} Q_{<k} e^{iT \sqrt{-\Delta} } w(-T) = 0
\end{equation}
as weak limits in $\dot H^{1}$ for any $k\geq 0$. We next write
\begin{align*}
Q_{<k} w (0) & = e^{-iT\sqrt{-\Delta} }Q_{<k} w (T) \mp \frac{i}{\sqrt{-\Delta}} \int_0^T e^{-it\sqrt{-\Delta} }Q_{<k} F (u(t))\,\mathrm{d} t \\
& = e^{iT\sqrt{-\Delta} }Q_{<k} w (-T) \mp \frac{i}{\sqrt{-\Delta}} \int_{-T}^0 e^{-it\sqrt{-\Delta} }Q_{<k} F(u(t))\,\mathrm{d} t.
\end{align*}
Using \eqref{weakH10} and arguing as in~{\cite[Section 4]{DL1}}, we can deduce
\begin{align}
&\langle Q_{<k} w(0),Q_{<k} w (0)\rangle_{\dot H^{1}} \nonumber \\
& = \lim_{T\to\infty} \Big\langle\int_0^T e^{-it\sqrt{-\Delta}}Q_{<k} F(u(t))\,\mathrm{d} t, \int_{-T}^0 \hspace{-2mm} e^{-i\tau \sqrt{-\Delta} }Q_{<k} F(u(\tau))\,\mathrm{d}\tau\Big\rangle_{L^{2}}. \label{scdd1}
\end{align}
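Here the inner product on the right-hand side of \eqref{scdd1} is taken in $L^2$ rather than $\dot H^1$ because each Duhamel integral carries a factor of $\frac{1}{\sqrt{-\Delta}}$; since $|\nabla| = \sqrt{-\Delta}$, one simply has
\[
\Big\langle \frac{1}{\sqrt{-\Delta}} f,\ \frac{1}{\sqrt{-\Delta}} g\Big\rangle_{\dot H^1}
= \Big\langle |\nabla|\frac{1}{\sqrt{-\Delta}} f,\ |\nabla|\frac{1}{\sqrt{-\Delta}} g\Big\rangle_{L^2}
= \langle f, g\rangle_{L^2}.
\]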
We fix a large parameter $R>0$ to be determined below. Let $\delta_0$ be as in the statement of Lemma \ref{l:sl} and take $T=2 R\delta_0^{-1}$. We define
\begin{equation}\label{eq:regions}
\begin{split}
\textup{Region }\textbf{A} &:= \{(t,x) \,:\, 0 \leq t \leq T\}, \\
\textup{Region }\textbf{B} &:=\{(t,x) \,:\, |x- x(T)| \geq R + |t - T|\},\\
\textup{Region }\textbf{C} &:= \{(t,x) \,:\, |x- x(T)| < R + |t - T|\}.
\end{split}
\end{equation}
See Figure~\ref{f:bowtie}.
\begin{figure}[h]
\centering
\includegraphics[width=14cm]{drawing_soliton.pdf}
\caption{A depiction of the spacetime regions \textbf{A, A', B, B'} and \textbf{C, C'} in the case that $x(t) = 0$.}\label{f:bowtie}\label{fig:regions_soliton}
\end{figure}
We will treat these regions separately. Our goal is to bound $u$ on region \textbf{A} using the fact that we are estimating the solution on a compact time interval, and on region \textbf{B} using the small data theory and finite speed of propagation. We will then use the double Duhamel trick, together with the sharp Huygens principle on region \textbf{C}, to conclude the proof.
Let $\chi_R$ denote a smooth cutoff to the set
\[
\{|x-x(T)| > R\} \subseteq \mathbb{R}^3.
\]
Now fix a small parameter $\eta>0$. By compactness of $\vec u$, if $R = R(\eta)$ is sufficiently large then we have
\begin{align}\label{equ:small_data_soliton}
\| \chi_R \vec u(T)\|_{\mathcal{H}^{s_p}} \le \eta.
\end{align}
We let $\vec v = (v,v_t)$ be the solution to \eqref{eq:nlw} with initial data
\[
\vec v(T)=\chi_R \vec u (T).
\]
By finite speed of propagation, we have that
\[
u \equiv v \qtq{for} |x-x(T)|\geq R+|t-T|.
\]
We now rewrite \eqref{scdd1}; abusing notation slightly, we define
\begin{equation}\label{scdd-decomp1}
\begin{split}
&\int_0^\infty e^{-it\sqrt{-\Delta}}Q_{<k} F(u(t))\,\mathrm{d} t = A+B+C, \\
&A=\int_0^T e^{-it\sqrt{-\Delta} } Q_{<k} F(u(t))\,\mathrm{d} t, \\
&B=\int_T^\infty e^{-it \sqrt{-\Delta} } Q_{<k} F(v(t))\,\mathrm{d} t, \\
&C = \int_T^\infty e^{-it\sqrt{-\Delta}} Q_{<k}[ F(u(t))- F(v(t))]\,\mathrm{d} t.
\end{split}
\end{equation}
Note that the notation in \eqref{scdd-decomp1} is such that each term relates to an estimate for the solution on the correspondingly named region from \eqref{eq:regions}.
We can carry out a similar construction at time $-T$, yielding a small solution $\widetilde v$ that agrees with $u$ whenever $|x-x(-T)|\geq R+|t+T|$, and we obtain three terms in the negative time direction:
\begin{equation}\label{scdd-decomp2}
\begin{aligned}
&\int_{-\infty}^0 e^{-i \tau \sqrt{-\Delta}}Q_{<k} F(u(\tau))\,d\tau = A'+B'+C', \\
&A'=\int_{-T}^0 e^{-i\tau\sqrt{-\Delta}} Q_{<k} F(u(\tau))\,d\tau, \\
& B'=\int_{-\infty}^{-T} e^{-i\tau\sqrt{-\Delta}} Q_{<k} F(\widetilde v(\tau))\,d\tau, \\
&C' = \int_{-\infty}^{-T} e^{-i\tau\sqrt{-\Delta}} Q_{<k}[ F(u(\tau))-F(\widetilde v(\tau))]\,d\tau.
\end{aligned}
\end{equation}
Using the elementary linear algebra estimate
\begin{equation}\label{linal}
|\langle A+B+C,A'+B'+C'\rangle| \lesssim |A|^2 + |A'|^2 + |B|^2 + |B'|^2 + |\langle C,C'\rangle|
\end{equation}
whenever $A+B+C=A'+B'+C'$, we may estimate
\[
\langle Q_{<k} w(0),Q_{<k} w(0)\rangle_{\dot H^{1}}
\]
by obtaining bounds for $A, A'$ and $B, B'$ and $\langle C, C' \rangle$.
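For completeness, here is one way to verify \eqref{linal}. Setting $S:=A+B+C=A'+B'+C'$ and expanding,
\[
\begin{aligned}
|S|^2 &= \langle S, A'+B'\rangle + \langle A+B, C'\rangle + \langle C, C'\rangle\\
&= \langle S, A'+B'\rangle + \langle A+B, S\rangle - \langle A+B, A'+B'\rangle + \langle C,C'\rangle,
\end{aligned}
\]
so Cauchy--Schwarz gives $|S|^2 \le |S|\bigl(|A|+|B|+|A'|+|B'|\bigr) + \bigl(|A|+|B|\bigr)\bigl(|A'|+|B'|\bigr) + |\langle C,C'\rangle|$, and \eqref{linal} follows after absorbing the first term on the right via $ab\le\frac14 a^2+b^2$.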
\subsection*{Region A} To estimate the $A$ and $A'$ terms, we first establish the bound
\begin{align}\label{equ:a_est1}
\|u\|_{L_{t,x}^{2(p-1)}([-T,T]\times\mathbb{R}^3)}\lesssim \left(\frac{T}{\epsilon} \right)^{\frac{1}{2(p-1)}}
\end{align}
for some suitably small $\epsilon>0$. To prove this, we rely on the fact that $\vec u$ is a soliton-like critical element. Fix $\eta >0$. Since $N(t)= 1$, there exists $\epsilon >0$ small enough that the $L_{t,x}^{2(p-1)}$-norm is bounded by $\eta$ on any interval of length $\epsilon$; see Lemma~\ref{l:DL1lem}. Thus to obtain the desired bound, we divide $[-T,T]$ into $\sim \lceil T / \epsilon \rceil$ intervals $J_k$ of length $\epsilon$, so that
\[
\|u\|_{L_{t,x}^{2(p-1)}([-T,T]\times\mathbb{R}^3)}^{2(p-1)} \simeq \sum_{k=1}^{\lceil T / \epsilon \rceil} \|u\|_{L_{t,x}^{2(p-1)}(J_k \times\mathbb{R}^3)}^{2(p-1)} \lesssim \frac{T}{\epsilon}.
\]
Using a similar argument together with Strichartz estimates and the hypothesis
\[
\|u\|_{L_t^\infty \mathcal{H}^{s_0}} \lesssim 1,
\]
we obtain that
\begin{align}\label{equ:a_est2}
\| u\|_{L_t^p L_x^{2p}([-T,T]\times\mathbb{R}^3)}\lesssim \left(\frac{T}{\epsilon} \right)^{\frac{1}{p}}\|u\|_{L_t^\infty \mathcal{H}^{s_0}}.
\end{align}
Thus, using \eqref{equ:a_est1}, \eqref{equ:a_est2} and Strichartz estimates, we can estimate
\begin{align*}
|A|^2 + |A'|^2
&\lesssim \|u\|_{L_t^p L_x^{2p}([-T,T]\times\mathbb{R}^3)}^{p} \lesssim \left(\frac{T}{\epsilon} \right) \|u\|_{L_t^\infty \mathcal{H}^{s_0}}^p.
\end{align*}
\subsection*{Region B} For the estimates of $B$ and $B'$, we use the small data theory to bound the solutions $v$ and $\widetilde v$. We argue only for $v$, as the estimates for $\widetilde{v}$ are identical. By the small data theory, for
$\eta$ chosen sufficiently small in \eqref{equ:small_data_soliton}, we have
\[
\|v\|_{L_{t,x}^{2(p-1)}(\mathbb{R}^{1+3})} \lesssim \eta.
\]
Using Strichartz estimates, we bound
\begin{align*}
\| |\nabla|^{\frac{3(p-3)}{2p}}v\|_{L_t^p L_x^{\frac{2p}{p-2}}} & \lesssim \|v(T)\|_{\mathcal{H}^{s_0}}+\| |\nabla|^{\frac{3(p-3)}{2p}}(|v|^{p-1}v)\|_{L_t^{\frac{2p}{p+2}}L_x^{\frac{p}{p-1}}} \\
& \lesssim \|u(T)\|_{\mathcal{H}^{s_0}} + \|v\|_{L_{t,x}^{2(p-1)}}^{p-1}\| |\nabla|^{\frac{3(p-3)}{2p}}v\|_{L_t^p L_x^{\frac{2p}{p-2}}},
\end{align*}
with all space-time norms over $\mathbb{R}^{1+3}$. Note that $(p,\frac{2p}{p-2})$ is wave admissible. Thus, for $\eta$ sufficiently small, we deduce
\[
\| |\nabla|^{\frac{3(p-3)}{2p}}v\|_{L_t^p L_x^{\frac{2p}{p-2}}} \lesssim \|u\|_{L_t^\infty \mathcal{H}^{s_0}(\mathbb{R}^{1+3})},
\]
and hence it follows from Sobolev embedding that
\[
\| v\|_{L_t^p L_x^{2p}(\mathbb{R}^{1+3})}\lesssim \|u\|_{L_t^\infty \mathcal{H}^{s_0}}.
\]
Thus, we have shown that
\[
|B|^2+|B'|^2 \lesssim \|v\|_{L_t^p L_x^{2p}(\mathbb{R}^{1+3})}^{p} \lesssim \|u\|_{L_t^\infty \mathcal{H}^{s_0}}^p.
\]
\subsection*{Region C} Finally, we claim that
\begin{align}\label{equ:c_vanish}
\langle C,C'\rangle \equiv 0.
\end{align}
To see this, write
\begin{align*}
\langle C,C'\rangle = \int_T^\infty\hspace{-1.5mm}\int_{-\infty}^{-T} &\langle e^{i(\tau-t)\sqrt{-\Delta}}Q_{<k}[F(u(t))- F(v(t))], \\
&\quad Q_{<k}[F(u(\tau))- F(\widetilde v(\tau))]\rangle\,d\tau\,\mathrm{d} t,
\end{align*}
and note that by subluminality and the fact that $x(0)=0$, we have for $T=2 R\delta_0^{-1}$ the inclusion
\begin{equation}\label{scsl}
\{|x-x(\pm T)|\leq R\}\subset \{|x|\leq (1- 2^{-1} \delta_0) T\}.
\end{equation}
We recall that the operator $Q_{<k}$ defined in \eqref{qk_def} is given by convolution with the function $2^{3k} \psi(2^k x)$ for a fixed function $\psi \in C_0^\infty(\mathbb{R}^3)$. Hence for $k \geq k_0$, a sufficiently large, fixed constant depending on the support of $\psi$, $\delta_0$ and $T$, we can ensure
\[
\textup{supp } \left( Q_{<k}[F(u(\tau))- F(\widetilde v(\tau))] \right) \subseteq \bigl\{|x|\leq |\tau| - 4^{-1} \delta_0 T \bigr\}.
\]
Similarly, using the properties of the $Q_{<k}$ and the sharp Huygens principle, we can ensure that for $k$ sufficiently large,
\[
\textup{supp } \left(e^{i(\tau-t)\sqrt{-\Delta}}Q_{<k}[F(u(t))- F(v(t))] \right) \subseteq \bigl\{ |x| > |t - \tau| - 4^{-1} \delta_0 T \bigr\}.
\]
Since $t > 0$ and $\tau < 0$, we have $|t-\tau| > |\tau|$, so the two supports above are disjoint; this yields \eqref{equ:c_vanish}, as required.
\medskip
Collecting these estimates, we obtain that
\[
\|Q_{<k} w(0)\|_{\dot H^{1}}^2 = \langle Q_{<k} w(0),Q_{<k} w(0)\rangle_{\dot H^{1}} \lesssim 1
\]
uniformly in $k\geq 0$. The desired result then follows. \end{proof}
\subsection{The jump from \texorpdfstring{$\dot {\mathcal{H}}^{s_p}({\mathbb R}^3)$}{H} regularity to \texorpdfstring{$\dot {\mathcal{H}}^{s_0}({\mathbb R}^3)$}{H} regularity}
Now we turn to the more difficult estimates. Here, we will need a finer analysis based on frequency envelope machinery. We prove the following.
\begin{prop}\label{P:improve-reg}
Suppose $\vec u$ is a soliton-like critical element. Then
\[
\vec u\in L_t^\infty\dot\mathcal{H}^{s_p} \quad \Longrightarrow \quad \vec u\in L_t^\infty \dot\mathcal{H}^{s}
\]
for any $s_p \leq s < 1$.
\end{prop}
\begin{proof}
Once again, we define
\begin{align*}
\textup{Region }\textbf{A} &:= \{(t,x) \,:\, |t| \leq T\}, \\
\textup{Region }\textbf{B} &:=\{(t,x) \,:\, |x- x(T)| \geq R + |t - T|\},\\
\textup{Region }\textbf{C} &:= \{(t,x) \,:\, |x- x(T)| < R + |t - T|\},
\end{align*}
with corresponding regions $A', B' , C'$ in the negative time direction. We further introduce
\[
Q_k = Q_{<2k}-Q_{<k} \qtq{for}k >0, \qquad Q_0 = Q_{<0}.
\]
By Schur's test, we can conclude that these frequency projections form a good partition of frequency space, in the sense that
\[
\|f\|_{\dot H^s}^2 \sim \|Q_0 f\|_{\dot H^s}^2 + \sum_{k>0} 2^{2ks} \|Q_k f\|_{L^2}^2.
\]
We will also need an exponent $q$ satisfying
\[
2<q<\frac{2}{s_p},
\]
which is possible since $s_p<1$.
\subsection*{Region \textbf{A}}
We begin by defining suitable frequency envelopes with a parameter $\sigma>0$ to be determined shortly. We set
\begin{equation}\label{sc-fe}
\begin{aligned}
\gamma_k(t_0)&=\sum_j 2^{-\sigma |j-k|}\bigl[ 2^{s_p j}\|Q_j u(t_0)\|_{L^2}+2^{j(s_p-1)}\|Q_j\partial_t u(t_0)\|_{L^2}\bigr],\\
\alpha_k (J) &= \sum_j 2^{-\sigma |j-k|}\bigl[ 2^{-j(\frac{2}{q}-s_p)}\|Q_j u\|_{L_t^q L_x^{\frac{2q}{q-2}}(J\times\mathbb{R}^3)} \\
& \hspace{24mm} + 2^{j\left(\frac{2}{q}-1+s_p\right)}\|Q_ju\|_{L_t^{\frac{2q}{q-2}} L_x^q(J\times\mathbb{R}^3)}\bigr]
\end{aligned}
\end{equation}
for $k \geq 0$. Note that $(q,\frac{2q}{q-2})$ is sharp admissible and that each of the quantities appearing in the definition of $\alpha_k$ has the same scaling as $\dot {\mathcal{H}}^{s_p}$. We will choose
\[
0<\sigma<\frac2q-s_p.
\]
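The following side computation (our addition, not part of the original argument) verifies that both exponent pairs appearing in \eqref{sc-fe} are sharp wave-admissible, and explains the derivative weights; here $\mu(a,b)=\frac32-\frac3b-\frac1a$ is the standard scaling regularity for a sharp admissible pair in three space dimensions.

```latex
% Sharp wave admissibility in 3d: (a,b) with 1/a + 1/b = 1/2.
\frac1q+\frac{q-2}{2q}=\frac12,
\qquad
\frac{q-2}{2q}+\frac1q=\frac12 .
% Scaling regularity \mu(a,b) = 3/2 - 3/b - 1/a for these two pairs:
\mu\Bigl(q,\tfrac{2q}{q-2}\Bigr)=\frac32-\Bigl(\frac32-\frac3q\Bigr)-\frac1q=\frac2q,
\qquad
\mu\Bigl(\tfrac{2q}{q-2},q\Bigr)=\frac32-\frac3q-\Bigl(\frac12-\frac1q\Bigr)=1-\frac2q .
```

Thus data in $\dot{\mathcal{H}}^{s_p}$ naturally controls $|\nabla|^{s_p-\frac2q}u$ in $L_t^q L_x^{\frac{2q}{q-2}}$ and $|\nabla|^{s_p-1+\frac2q}u$ in $L_t^{\frac{2q}{q-2}} L_x^q$, matching the weights $2^{-j(\frac2q-s_p)}$ and $2^{j(\frac2q-1+s_p)}$ above; the constraint $q<2/s_p$ ensures $\frac2q-s_p>0$.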
Our goal is to prove that
\begin{equation}\label{alphak}
\alpha_k([-T, T]) \lesssim \gamma_k(0) + C_02^{-k \sigma},
\end{equation}
where $C_0=C_0(T)$.

We begin by recording some space-time estimates for $\vec u$ that are consequences of the pre-compactness of the set $K$; see Definition \ref{d:cp}. We fix $\eta >0$. Since $N(t)= 1$, there exists $\epsilon >0$ small enough that the $L_{t,x}^{2(p-1)}$ norm of $u$ is $< \eta$ on any interval of length $\epsilon$; see Lemma~\ref{l:DL1lem}. Furthermore, we can find $k_0 = k_0(\eta)$ such that for any $k >k_0$,
\[
\|Q_{>k}u\|_{L_{t,x}^{2(p-1)}([-T,T]\times\mathbb{R}^3)} < \eta T^{\frac{1}{2(p - 1)}}.
\]
With these bounds in hand, we turn to the proof of \eqref{alphak}. In the following, all space-time norms will be taken over $[-T,T]\times\mathbb{R}^3$. Writing $u_{\leq j}=Q_{\leq j}u$ (and similarly for $u_{>j}$), for any $j$ we decompose the nonlinearity as
\[
F(u)=F(u_{>k_0}) +F(u)-F(u_{>k_0}),
\]
where $k_0(\eta)$ is as above. By Taylor's theorem, we have that
\[
F(u)= F(u_{>k_0}) + u_{\leq k_0 } \int_0^1 F'(\theta u_{\leq k_0 } + u_{ > k_0 })\, d\theta,
\]
and hence to estimate the nonlinearity, it suffices to estimate three types of terms:
\begin{equation}\label{sc-nonlinear-decomp}
u_{>k_0}^{p-1}u_{>j}, \qquad u_{>k_0}^{p-1}u_{\leq j}, \qquad u_{\leq k_0}u^{p-1}.
\end{equation}
Using the inhomogeneous Strichartz estimates, we obtain
\begin{align} \label{equ:strichartz_duhamel}
&2^{-j(\frac2q-s_p)}\biggl\|\int_0^t e^{i(t-s)\sqrt{-\Delta}} Q_j F(u(s))\,ds\biggr\|_{L_t^q L_x^{\frac{2q}{q-2}}} \nonumber\\
& \quad + 2^{j \left(\frac2q-1+s_p\right)}\biggl\|\int_0^t e^{i(t-s)\sqrt{-\Delta} } Q_j F(u(s))\,ds\biggr\|_{L_t^{\frac{2q}{q-2}}L_x^q} \nonumber\\
& \lesssim \min\bigl\{ 2^{j (\frac2q-1+s_p)}\|F(u)\|_{L_t^{\frac{q}{q-1}}L_x^{\frac{2q}{q+2}}}, 2^{-j(\frac2q-s_p)}\|F(u)\|_{L_t^{\frac{2q}{q+2}}L_x^{\frac{q}{q-1}}}\bigr\}.
\end{align}
Now let $J$ be an interval with $|J| < \epsilon$, and let $t_0 = \inf J$. In the next estimates, all norms will be taken over $J \times \mathbb{R}^3$. Using Strichartz estimates, we estimate
\begin{align*}
2^{-j(\frac2q-s_p)}&\|Q_ju\|_{L_t^q L_x^{\frac{2q}{q-2}}(J \times \mathbb{R}^3)} + 2^{j (\frac2q-1+s_p)}\|Q_j u\|_{L_t^{\frac{2q}{q-2}} L_x^q(J \times \mathbb{R}^3) } \\
&\lesssim 2^{js_p}\|u_j(t_0)\|_{L_x^2} + 2^{j(s_p-1)}\|\partial_t u_j(t_0)\|_{L_x^2} \\
& \quad + 2^{j \left(\frac2q-1+s_p\right)}\|u_{>k_0}^{p-1}u_{>j}\|_{L_t^{\frac{q}{q-1}}L_x^{\frac{2q}{q+2}}} \\
& \quad + 2^{-j(\frac2q-s_p)}\|u_{>k_0}^{p-1}u_{\leq j}\|_{L_t^{\frac{2q}{q+2}}L_x^{\frac{q}{q-1}}} \\
& \quad + 2^{-j(\frac2q-s_p)}\|u_{\leq k_0} u^{p-1}\|_{L_t^{\frac{2q}{q+2}}L_x^{\frac{q}{q-1}}} \\
& =: 2^{js_p}\|u_j(t_0)\|_{L_x^2} + 2^{j(s_p-1)}\|\partial_t u_j(t_0)\|_{L_x^2} + I + II + III.
\end{align*}
We first estimate Term $I$. We obtain
\begin{align*}
&2^{j (\frac2q-1+s_p)}\|u_{>k_0}^{p-1}u_{>j}\|_{L_t^{\frac{q}{q-1}}L_x^{\frac{2q}{q+2}}} \\
& \lesssim 2^{j(\frac2q-1+s_p)}\|u_{>k_0}\|_{L_{t,x}^{2(p-1)}}^{p-1}\|u_{>j}\|_{L_t^{\frac{2q}{q-2}} L_x^q} \\
& \lesssim \eta^{p-1} T^{1/2} 2^{j (\frac2q-1+s_p)}\sum_{\ell>j} 2^{-\ell(\frac2q-1+s_p)} \bigl[2^{\ell(\frac2q-1+s_p)}\|Q_\ell u\|_{L_t^{\frac{2q}{q-2}} L_x^q}\bigr].
\end{align*}
Similarly, for Term $II$ we obtain
\begin{align*}
&2^{-j(\frac2q-s_p)}\|u_{>k_0}^{p-1}u_{\leq j}\|_{L_t^{\frac{2q}{q+2}}L_x^{\frac{q}{q-1}}} \\
&\lesssim 2^{-j(\frac2q-s_p)}\|u_{>k_0}\|_{L_{t,x}^{2(p-1)}}^{p-1}\|u_{\leq j}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \\
&\lesssim 2^{-j(\frac2q-s_p)}\|u_{>k_0}\|_{L_{t,x}^{2(p-1)}}^{p-1} \sum_{ \ell \leq j} 2^{\ell (\frac2q-s_p)} \bigl[2^{-\ell(\frac2q-s_p)} \|u_{\ell}\|_{L_t^q L_x^{\frac{2q}{q-2}}}\bigr] .
\end{align*}
Finally, for Term $III$, using the smallness of the interval, we obtain
\begin{align*}
2^{-j(\frac2q-s_p)}&\|u_{\leq k_0} u^{p-1}\|_{L_t^{\frac{2q}{q+2}}L_x^{\frac{q}{q-1}}} \\
&\lesssim 2^{-j(\frac2q-s_p)}\|u\|_{L_{t,x}^{2(p-1)}}^{p-1}\|u_{\leq k_0}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \\
&\lesssim 2^{-j(\frac2q-s_p)}2^{k_0(\frac2q-s_p)} \eta^{p-1} T^{1/2}.
\end{align*}
Multiplying the above bounds by $2^{-\sigma |j-k|}$ and summing over $j$, recalling that $\sigma<\frac2q-s_p$ in our definition of the frequency envelopes in \eqref{sc-fe}, it follows that for $t_0 = \inf J$ and any $t_1 \in J$ we have
\[
\gamma_k(t_1) + \alpha_{k}(J) \lesssim \gamma_k(t_0) + T^{1/2} \eta^{p-1} \alpha_{k}(J) + C_0 (T) 2^{-k\sigma} .
\]
For $\eta \equiv \eta (T)$ small enough that
\[
C \eta^{p-1} T^{1/2} < \frac{1}{2},
\]
with $C$ the implicit constant in the Strichartz estimates, this implies
\[
\alpha_k(J) \lesssim \gamma_k(t_0) + C_0 2^{-k\sigma}.
\]
Iterating this procedure $\lceil T/\epsilon \rceil$ times on $[-T,T]$, we may also conclude that
\[
\gamma_k(t_0) \lesssim \gamma_k(0)
\]
for any $t_0 \in [-T,T]$, from which \eqref{alphak} follows by summing up these estimates.
\subsection*{Region \textbf{B}}
To implement the double Duhamel argument, we will again consider the solution $v$ to \eqref{eq:nlw} with data $\vec v(T)=\chi_R \vec u(T)$. To control this solution, we define the frequency envelopes
\[
\widetilde\gamma_k(t_0)\qtq{and} \beta_k
\]
analogously to \eqref{sc-fe}, but with space-time norms over $\mathbb{R}^{1+3}$. We will prove
\begin{equation}\label{betak}
\beta_k \lesssim \gamma_k(0) + C_0 2^{-k\sigma}.
\end{equation}
First observe that
\[
\|v\|_{L_{t,x}^{2(p-1)}(\mathbb{R}^{1+3})}\lesssim \eta.
\]
Thus
\begin{align*}
\| |\nabla|^{-(\frac2q-s_p)} &v\|_{L_t^q L_x^{\frac{2q}{q-2}}} + \| |\nabla|^{\frac2q-1+s_p} v\|_{L_t^{\frac{2q}{q-2}} L_x^q} \\
& \lesssim \|v(T)\|_{\mathcal{H}^{s_p}} + \| |\nabla|^{\frac2q-1+s_p}(v|v|^{p-1})\|_{L_t^{\frac{q}{q-1}}L_x^{\frac{2q}{q+2}}} \\
& \lesssim \eta + \|v\|_{L_{t,x}^{2(p-1)}}^{p-1}\||\nabla|^{\frac2q-1+s_p} v\|_{L_t^{\frac{2q}{q-2}} L_x^q} \\
& \lesssim \eta + \eta^{p-1}\||\nabla|^{\frac2q-1+s_p} v\|_{L_t^{\frac{2q}{q-2}} L_x^q},
\end{align*}
which implies in particular that
\begin{equation}\label{bk-vgd}
\|v_{\leq 1}\|_{L_t^q L_x^{\frac{2q}{q-2}}(\mathbb{R}^{1+3})} \lesssim\eta.
\end{equation}
We now estimate $\beta_k$ in essentially the same manner as $\alpha_k$. The main difference is that we split at frequency $1$ instead of at frequency $k_0$ as above. Estimating as above, but using \eqref{bk-vgd}, we deduce
\[
\beta_k \lesssim \widetilde\gamma_k(T) + \eta^{p-1} \beta_k + C_0 2^{-k\sigma},
\]
which implies
\begin{align} \label{equ:first_est}
\beta_k \lesssim \widetilde \gamma_k(T) + C_0 2^{-k\sigma}.
\end{align}
In order to prove \eqref{betak}, we need to relate $\widetilde\gamma_k(T)$ to $\gamma_k(0)$. Similar arguments to those above yield
\[
\gamma_k(T) \lesssim \gamma_k(0) + \eta^{p-1}\beta_k + C_0 2^{-k\sigma} \lesssim \gamma_k(0) + C_0 2^{-k\sigma},
\]
so it suffices to relate $\widetilde \gamma_k(T)$ to $\gamma_k(T)$. Using that $\vec v(T)=\chi_R \vec u(T)$, we apply the commutator estimate of Lemma~\ref{L:commutator} to deduce
\begin{align*}
2^{ks_p}\|Q_k v(T)\|_{L^2} & \lesssim 2^{ks_p}\|Q_k u(T)\|_{L^2} + (2^k R)^{-(1-s_p)}\|u\|_{L_t^\infty \dot H^{s_p}}, \\
2^{k(s_p-1)}\|Q_k\partial_t v(T)\|_{L^2} & \lesssim 2^{k(s_p-1)}\|Q_k\partial_t u(T)\|_{L^2} + 2^{-k}R^{-1}\|\partial_t u\|_{L_t^\infty \dot H^{s_p-1}}.
\end{align*}
In particular, since $\sigma<\frac2q-s_p<1-s_p$, we deduce that
\[
\widetilde\gamma_k(T) \lesssim \gamma_k(T) + C_0 2^{-k\sigma}.
\]
Putting this together with \eqref{equ:first_est} above, we conclude
\[
\beta_k \lesssim \gamma_k(0)+C_02^{-k\sigma},
\]
which completes the proof of \eqref{betak}.
\medskip

We will now carry out the double Duhamel argument with the complexified solution $w$. We write
\begin{align}\label{equ:double_duh_term}
&\bigl\langle Q_jw(0),Q_jw(0)\bigr\rangle_{\dot H^1} \nonumber\\
& = \lim_{T \to\infty}\Bigl\langle \int_0^T e^{-i t\sqrt{-\Delta} }Q_j F(u(t))\,\mathrm{d} t,\int_{-T}^0 e^{-i \tau \sqrt{-\Delta}}Q_j F(u(\tau))\,\mathrm{d}\tau\Bigr\rangle_{\dot H^1}
\end{align}
and (as before) decompose
\begin{align*}
\int_0^\infty e^{-i t\sqrt{-\Delta} }Q_j F(u(t))\,\mathrm{d} t &= A+B+C,\\
\int_{-\infty}^0 e^{-i \tau \sqrt{-\Delta}}Q_jF(u(\tau))\,\mathrm{d}\tau &= A'+B'+C',
\end{align*}
with components as in \eqref{scdd-decomp1} and \eqref{scdd-decomp2}. Once again, we rely on the algebraic inequality
\begin{align} \label{regularity1}
\bigl\langle Q_jw(0),Q_jw(0)\bigr\rangle_{\dot H^1} \lesssim |A|^2 + |A'|^2 + |B|^2 + |B'|^2 + |\langle C,C'\rangle|
\end{align}
and note that, by construction and the argument above relying on the sharp Huygens principle, $\langle C,C'\rangle_{\dot H^1}\equiv 0$.
To treat the other terms, we recall the definition of the frequency envelope $\alpha_k$ in \eqref{sc-fe}, and we use \eqref{alphak} and \eqref{betak}. To this end, we multiply the left-hand side of \eqref{equ:double_duh_term} by $2^{-\sigma| j-k|}$ and sum over $j \geq 0$ to obtain
\begin{align}\label{equ:outcome0}
\gamma_k(0) \lesssim \eta^{p-1} \gamma_k(0) + C_0 2^{-k \sigma},
\end{align}
which, choosing $\eta$ sufficiently small depending only on the implicit constant, implies
\begin{align}\label{equ:outcome1}
\gamma_k(0) \lesssim C_0 2^{-k \sigma}.
\end{align}
This yields $\vec u\in L_t^\infty\dot {\mathcal{H}}^{s}$ for any $s_p\leq s<s_p+\sigma$. Since we may choose any $\sigma<\frac2q-s_p$ and $q$ arbitrarily close to $2$, we deduce $\vec u\in L_t^\infty \dot {\mathcal{H}}^{s}$ for any $s_p\leq s<1$. This completes the proof of Proposition \ref{P:improve-reg}. \end{proof}
Propositions \ref{P:reg-jump} and \ref{P:improve-reg} immediately yield the following corollary.
\begin{cor}
Suppose $\vec u$ is a soliton-like critical element. Then
\[
\vec u\in L_t^\infty\dot{\mathcal{H}}^{s_p} \quad \Longrightarrow \quad \vec u\in L_t^\infty \dot{\mathcal{H}}^{1}.
\]
\end{cor}
\subsection{The jump from \texorpdfstring{$\dot {\mathcal{H}}^{1}({\mathbb R}^3)$}{H} regularity to \texorpdfstring{$\dot {\mathcal{H}}^{s}({\mathbb R}^3)$}{H} regularity}
As mentioned above, in order to employ the rigidity argument based on a certain virial identity, we also need to prove that the trajectory of a critical element in fact lies in a pre-compact subset of $\dot {\mathcal{H}}^1$. We will achieve this by proving that we can gain a bit more regularity; specifically, we can place the solution in $\dot {\mathcal{H}}^s$ for some $s > 1$. The key idea here is that the additional assumption of $\dot {\mathcal{H}}^1$ regularity gives us a bit of room in the previous estimates, and this provides some extra decay which we can use to establish the additional regularity.
\begin{prop}\label{P:improve-reg2}
Suppose $\vec u$ is a soliton-like critical element. Then $\vec u\in L_t^\infty\dot{\mathcal{H}}^{s}$ for some $s > 1$.
\end{prop}
\epsnd{prop}
\betagin{proof}
Let $v$ and $\widetilde{v}$ be the solutions to the small data Cauchy problems defined above. By small data arguments $v(T) \in \dot {\mathcal{H}}^{1}({\mathbb R}^3)$ and $\| v(T) \|_{\dot {\mathcal{H}}^{s_p}}$ small implies that
\betagin{align}
\| v \|_{ L_t^{\frac{2q}{q-2}} L_x^q({\mathbb R} \widetildemes {\mathbb R}^{3})} + \| |\nablabla|^{1- s_p} v \|_{L_{t,x}^{2(p-1)}({\mathbb R} \widetildemes {\mathbb R}^{3})} &\lesssim_T 1\\
\| \widetildelde{v} \|_{ L_t^{\frac{2q}{q-2}} L_x^q({\mathbb R} \widetildemes {\mathbb R}^{3})} + \| |\nablabla|^{1 - s_p} \widetildelde{v} \|_{L_{t,x}^{2(p-1)}({\mathbb R} \widetildemes {\mathbb R}^{3})} &\lesssim_{T} 1.
\epsnd{align}
Furthermore, arguing as above and partitioning $[-T, T]$ into sufficiently small intervals, we obtain
\[
\| u \|_{ L_t^{\frac{2q}{q-2}} L_x^q ([-T, T] \widetildemes {\mathbb R}^{3})} + \| |\nablabla|^{1- s_p} u \|_{L_{t,x}^{2(p-1)}([-T,T] \widetildemes {\mathbb R}^{3})} \lesssim_{T} 1.
\]
These inequalities, together with the argument used to prove Proposition \mathop{\mathrm{Re}}f{P:improve-reg}, as well as \epsqref{regularity1} and \epsqref{equ:strichartz_duhamel} establishes that
\betagin{align}
\| Q_{k} u(0) \|_{\mathbf{m}athcal H^{1}}^{2} &\lesssim_{T} 2^{-k\varsigmagmaigma} 2^{-k \left(\frac{2}{q} - 1+ s_p\right)}.
\epsnd{align}
Since we may choose any
\[
\varsigmagmaigma < \frac{2}{q} - s_p,
\]
and $q$ arbitrarily close to $2$, we have then shown that
\[
\varsigmagmaum_{k} 2^{2 \alphaphaha k} \| Q_{k} u(0) \|_{\mathbf{m}athcal H^{1}}^{2} < \infty
\]
for any $\alphaphaha < \frac{1}{2}$, which concludes the proof.
\epsnd{proof}
\subsection{Rigidity for the soliton-like critical element}
Now we may prove that the soliton-like critical element is identically zero. We summarize this in the following proposition.
\begin{prop} \label{p:srig}
Let $\vec u(t) \in \dot{\mathcal{H}}^1$ be a global-in-time solution to~\eqref{eq:nlw} such that for some subluminal path $x(t)$ the set
\EQ{
K = \{ (u( t, \cdot - x(t)), \partial_t u (t, \cdot - x(t))) \mid t \in \mathbb{R}\} \subset \dot{\mathcal{H}}^1 \cap \dot {\mathcal{H}}^{s_p}
}
is a pre-compact subset of $\dot {\mathcal{H}}^1 \cap \dot {\mathcal{H}}^{s_p}$. Then $\vec u(t) \equiv 0$.
\end{prop}
As mentioned in the introduction, we include a proof of rigidity for the soliton-like critical element in the \textit{focusing setting} as well. The arguments that we use are similar to the ones given in~\cite[Section 3]{CKLS1} and~\cite{DL1, CR5d}, but with a modification. The key new ingredient here is that the subluminality of $x(t)$ compactifies the subset of the Lorentz group taking $(t, x(t))$ to $(t', 0)$; see also~\cite{KM06, NakS} for a somewhat different approach that uses the Lorentz transform to show that critical elements must have zero momentum. The main ingredients in the proof are the following virial identities.

In what follows we let $r=|x|$ and denote $\partial_r u = \nabla u \cdot \tfrac{x}{|x|}$.
\begin{lem}[Virial Identities]
Let $\chi \in C^\infty_0$ be a smooth radial function such that $\chi(r) = 1$ if $r \le 1$ and $\supp \chi \subset \{ r \le 2 \}$. For any $R>0$ we define $\chi_R(r) = \chi(r/ R)$ and let $\vec u(t)$ be a solution to \eqref{eq:nlw}. Denoting
\EQ{\label{eq:Omdef}
\Omega_{u(t)}(R):= \int_{\abs{ x} \ge R} | \nabla u |^2 + | \partial_t u|^2 + \frac{\abs{u}^2}{\abs{x}^2} + \abs{ u}^{p+1} \, \mathrm{d} x,
}
we have
\EQ{\label{eq:vir3d}
\frac{\mathrm{d} }{ \mathrm{d} t} \ang{ \partial_t u \mid \chi_R ( r \partial_r u + u) } & = -E( \vec u) \pm \big( \tfrac{p-3}{p+1} \big) \| u \|_{L^{p+1}}^{p+1} \\
& \quad + O( \Omega_{u(t)} (R)),
}
where the $+$ sign above corresponds to the focusing equation and the $-$ sign corresponds to the defocusing equation.
If $\vec u(t)$ solves the focusing equation, we have
\EQ{ \label{eq:vfoc}
\frac{\mathrm{d} }{ \mathrm{d} t} \ang{ \partial_t u \mid \chi_R ( r \partial_r u + \frac{1}{2}u) } &= -\frac{1}{2} \int \abs{\partial_t u}^2 - 3 \left( \frac{1}{p+1} - \frac{1}{6} \right) \int \abs{u}^{p+1} \\
& \quad + O (\Omega_{u(t)} (R)).
}
\end{lem}
\begin{proof}[Proof of Proposition~\ref{p:srig} for the focusing equation]
We may assume that $x(0) = 0$. Since $x(t)$ is subluminal we can find $\delta >0$ so that
\EQ{ \label{eq:sl1}
\abs{ x (t) - x( \tau) } \le (1-\delta) \abs{t - \tau}, \quad \abs{x(t)} \le (1-\delta) \abs{t}
}
for all $t, \tau \in \mathbb{R}$.
For convenience, we consider only the special case where
\EQ{
x(t) = (x_1(t), 0, 0) \quad \forall t >0,
}
as this contains the essential difficulties and the general argument is an easy modification of the one presented below. Recall that for each $\nu \in (-1, 1)$ we have a Lorentz transform $L_\nu$ defined by
\EQ{
L_{\nu}(t, x_1, x_2, x_3) = \Big( \frac{ t -\nu x_1}{\sqrt{1- \nu^2}}, \frac{ x_1 - \nu t}{\sqrt{1- \nu^2}}, x_2, x_3 \Big)=:(t', x').
}
For any $T >0$, set
\EQ{
\nu(T) := \frac{x_1(T)}{T}.
}
Then
\EQ{ \label{eq:alde}
-(1-\delta) \le \nu(T) \le 1-\delta
}
and the Lorentz transform $L_{\nu(T)}$ gives
\EQ{
L_{\nu(T)} (T, x_1(T), 0, 0) = (T', 0, 0, 0),
}
where
\EQ{ \label{eq:T'}
T' = \sqrt{T^2 - x_1(T)^2}.
}
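As a quick verification (our addition): with $\nu = \nu(T) = x_1(T)/T$, the spatial component of the image of $(T, x_1(T), 0,0)$ indeed vanishes, and the time component matches~\eqref{eq:T'}:

```latex
\frac{x_1(T) - \nu T}{\sqrt{1-\nu^2}} = 0,
\qquad
\frac{T - \nu x_1(T)}{\sqrt{1-\nu^2}}
  = \frac{\bigl(T^2 - x_1(T)^2\bigr)/T}{\sqrt{T^2 - x_1(T)^2}\,/\,T}
  = \sqrt{T^2 - x_1(T)^2} = T'.
```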
Since $x(t)$ satisfies~\eqref{eq:sl1}, we have $T' = \sqrt{T^2 - x_1(T)^2} \ge T\sqrt{1-(1-\delta)^2}$, and hence the bounds
\EQ{
c_\delta T \le T' \le T
}
for $c_\delta:= \sqrt{1 - (1-\delta)^2}>0$, which means that $T'$ is comparable to $T$. For each $T>0$ define
\EQ{
v_{\nu(T)}(t', x') := u \circ L_{\nu(T)} (t, x).
}
Then, since $K$ above is pre-compact for $x(t)$ subluminal and since $\vec v_{\nu(T)}(t')$ as above is a fixed Lorentz transform of $\vec u(t, x)$, we can find a subluminal translation parameter $x'(t')$ with
\EQ{
x'(T') = 0
}
such that the trajectory
\EQ{\label{eq:K'}
K' := \{ \vec v_{\nu(T)}(t', x - x'(t')) \mid t' \in \mathbb{R}\}
}
is pre-compact in $\dot {\mathcal{H}}^1 \cap \dot {\mathcal{H}}^{s_p}$. We will now establish the following.
\begin{claim} \label{c:Tn}
Consider a critical element for the focusing equation with $3 \le p <5$. For each $n$ there exists a time $T_n>0$ such that for $T_n'$ as in~\eqref{eq:T'} we have
\EQ{
\frac{1}{T'_n} \int_0^{T_n'} \int_{\mathbb{R}^3} \abs{\partial_t v_{\nu(T_n)}(t, x) }^2 + \abs{v_{\nu(T_n)}(t, x)}^{p+1} \, \mathrm{d} x \, \mathrm{d} t < \frac{1}{4n}.
}
\end{claim}
\begin{proof}[Proof of Claim~\ref{c:Tn}]
Let $T>0$. Since $\vec v_{\nu(T)}$ solves the focusing equation, we average~\eqref{eq:vfoc} with $R = C_\delta T$ over the time interval $[0, T']$ for some constant $C_\delta$ to be specified below, yielding
\EQ{ \label{eq:vir1v}
\frac{1}{T'} \int_0^{T'} \int_{\mathbb{R}^3} &\abs{\partial_t v_{\nu(T)}(t, x) }^2 + \abs{v_{\nu(T)}(t, x)}^{p+1} \, \mathrm{d} x \, \mathrm{d} t \\
& \lesssim \frac{1}{T'} \abs{ \ang{ \partial_t v_{\nu(T)}(t) \mid \chi_{2T} r \partial_r v_{\nu(T)} } \big\vert^{T'}_0} \\
& \quad + \frac{1}{T'} \abs{ \ang{ \partial_t v_{\nu(T)}(t) \mid \chi_{2T} v_{\nu(T)} } \big\vert^{T'}_0}\\
& \quad +\frac{1}{T'} \int_0^{T'} \Omega_{v_{\nu(T)}(t)}(C_\delta T)\, \mathrm{d} t,
}
where $\Omega_{v_{\nu(T)}}(C_\delta T)$ is defined as in~\eqref{eq:Omdef}. Given $n>0$, by~\eqref{eq:K'}, the subluminality of $x'(t')$, and the fact that
\EQ{
x'(0) = 0, \quad x'(T') = 0,
}
we can choose $C_\delta$ and $T = T_n$ large enough so that
\EQ{
\frac{1}{T_n'} \int_0^{T_n'} \Omega_{v_{\nu(T_n)}(t)}(C_\delta T)\, \mathrm{d} t \ll \frac{1}{n}.
}
Note that $C_\delta$ can be chosen independently of $n$. Next we estimate the first term on the right-hand side of~\eqref{eq:vir1v}. We treat only the case where the inner product is evaluated at $t = T'$, as the case when it is evaluated at $t =0$ is similar. We have
\EQ{
\frac{1}{T'} &\abs{ \ang{ \partial_t v_{\nu(T)}(T') \mid \chi_{2T}\cdot r \, \partial_r v_{\nu(T)}(T') }} \\
&\lesssim \frac{T^{\frac{1}{2}}}{T'} \|\partial_t v_{\nu(T)}(T') \|_{L^2} \| \nabla v_{\nu(T)}(T') \|_{L^2( \abs{x} \le T^{\frac{1}{2}})} \\
& \quad + \frac{C_\delta}{c_\delta} \|\partial_t v_{\nu(T)}(T') \|_{L^2} \| \nabla v_{\nu(T)}(T') \|_{L^2( T^{\frac{1}{2}} \le \abs{x} \le C_\delta T)}.
}
Since $T' \simeq_{\delta} T$, the first term on the right-hand side above can be made as small as we like by choosing $T_n$ large enough so that
\EQ{
\frac{T_n^{\frac{1}{2}}}{T_n'} \ll \frac{1}{n}.
}
Similarly, for the second term on the right, we rely on the pre-compactness of $K'$ in $\dot {\mathcal{H}}^1 \cap \dot {\mathcal{H}}^{s_p}$ and the fact that $x'(T'_n) = 0$, which yields
\EQ{
\| \nabla v_{\nu(T_n)}(T'_n) \|_{L^2( \abs{x} \ge T_n^{\frac{1}{2}}) } \ll \frac{1}{n}
}
for $T_n$ large enough. The second term on the right-hand side of~\eqref{eq:vir1v} is estimated in a similar fashion. This completes the proof of Claim~\ref{c:Tn}.
\end{proof}
Now, given the sequence of times $T_n$ guaranteed by Claim~\ref{c:Tn}, consider the sequence $\nu(T_n):= x_1(T_n)/ T_n$. By~\eqref{eq:alde} we can, passing to a subsequence that we still denote by $\nu(T_n)$, find a fixed $\nu_0 \in [-(1-\delta), 1-\delta]$ with
\EQ{ \label{eq:allim}
\nu(T_n) \to \nu_0 \quad\text{as}\quad n \to \infty.
}
Define
\EQ{
v_{\nu_0}(t', x'): = u \circ L_{\nu_0}(t, x)
}
and note that this is a \emph{fixed} Lorentz transform of $u$.
It follows from Claim~\ref{c:Tn}, \eqref{eq:allim}, and a continuity argument that in fact
\EQ{
\frac{1}{T'_n} \int_0^{T_n'} \int_{\mathbb{R}^3} \abs{\partial_t v_{\nu_0}(t, x) }^2 + \abs{v_{\nu_0}(t, x)}^{p+1} \, \mathrm{d} x \, \mathrm{d} t < \frac{1}{2n}
}
after passing to a further subsequence. Using yet another continuity argument, we can assume without loss of generality that $T_n' = M_n \in \mathbb{N}$, i.e.,
\EQ{\label{eq:Mn}
\frac{1}{M_n} \int_0^{M_n} \int_{\mathbb{R}^3} \abs{\partial_t v_{\nu_0}(t, x) }^2 + \abs{v_{\nu_0}(t, x)}^{p+1} \, \mathrm{d} x \, \mathrm{d} t < \frac{1}{n}
}
for some sequence $\{M_n \} \subset \mathbb{N}$ with $M_n \to \infty$. Now we claim that there exists a sequence of positive integers $m_n \to \infty$ such that
\EQ{ \label{eq:mn}
\int_{m_n}^{m_n +1} \int_{\mathbb{R}^3} \abs{\partial_t v_{\nu_0}(t, x) }^2 + \abs{v_{\nu_0}(t, x)}^{p+1} \, \mathrm{d} x \, \mathrm{d} t \to 0 \quad\text{as}\quad n \to \infty.
}
If not, we could find $\epsilon_0>0$ such that for all $m \in \mathbb{N}$ we have
\EQ{
\int_{m}^{m +1} \int_{\mathbb{R}^3} \abs{\partial_t v_{\nu_0}(t, x) }^2 + \abs{v_{\nu_0}(t, x)}^{p+1} \, \mathrm{d} x \, \mathrm{d} t \ge \epsilon_0.
}
However, summing up from $0$ to $M_n -1$ we would then have
\EQ{
\int^{M_n}_{0} \int_{\mathbb{R}^3} \abs{\partial_t v_{\nu_0}(t, x) }^2 + \abs{v_{\nu_0}(t, x)}^{p+1} \, \mathrm{d} x \, \mathrm{d} t \ge \epsilon_0 M_n,
}
which contradicts~\eqref{eq:Mn}. Now, by~\eqref{eq:mn} we have
\EQ{ \label{eq:0lim}
\int_{0}^{1} \int_{\mathbb{R}^3} \abs{\partial_t v_{\nu_0}(m_n+t, x) }^2 + \abs{v_{\nu_0}(m_n+t, x)}^{p+1} \, \mathrm{d} x \, \mathrm{d} t \to 0
}
as $n\to\infty$. On the other hand, passing to a further subsequence, we can find $(V_0, V_1) \in \dot {\mathcal{H}}^1 \cap \dot {\mathcal{H}}^{s_p}$ such that
\EQ{
\vec v_{\nu_0}( m_n, \cdot - x'(m_n)) \to (V_0, V_1)\in \dot {\mathcal{H}}^1 \cap \dot {\mathcal{H}}^{s_p} \quad\text{as}\quad n \to \infty.
}
Let $\vec V(t)$ be the solution to~\eqref{eq:nlw} with data $(V_0, V_1)$. Then for some $t_0>0$ sufficiently small we have
\begin{align}\label{val_conv}
\lim_{n \to \infty} \sup_{t \in [0, t_0]} \| \vec v_{\nu_0}(m_n + t, \cdot - x'(m_n)) - \vec V(t) \|_{\dot {\mathcal{H}}^1 \cap \dot {\mathcal{H}}^{s_p}} = 0.
\end{align}
From~\eqref{eq:0lim} we can then conclude that
\EQ{
\vec V \equiv 0,
}
and it then follows from \eqref{val_conv} and small data arguments that
\EQ{
\vec v_{\nu_0} \equiv 0.
}
This means $\vec u \equiv 0$ as well, which finishes the proof.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{p:srig} for the defocusing equation]
The argument is much easier if either $p =3$ or the equation is defocusing, since~\eqref{eq:vir3d} gives us coercive control over the energy. Indeed, arguing as in the proof of Claim~\ref{c:Tn}, but using~\eqref{eq:vir3d} instead of~\eqref{eq:vfoc}, we see that
\EQ{
E(\vec v_{\nu(T)}) = \frac{1}{T'} \int_0^{T'} E(\vec v_{\nu(T)} ) \, \mathrm{d} t = o(1) \quad\text{as}\quad T \to \infty,
}
since for each fixed $T$ the energy of $v_{\nu(T)}(t)$ is constant in time. However, since
\EQ{
v_{\nu(T)}(t', x') = u \circ L_{\nu(T)} (t, x),
}
we must have either $\limsup_{T \to \infty} \abs{\nu(T)} = 1$ or $E(\vec u) =0$. The former is impossible by~\eqref{eq:alde}. Hence $E(\vec u) = 0$, and therefore $\vec u \equiv 0$.
\end{proof}
\begin{rem}
Note that the argument given above for the defocusing equation also works for the cubic focusing equation, since~\eqref{eq:vir3d} yields control of the full energy for $p=3$. Arguing as above, one can conclude that $E(\vec u) =0$. Since the only nonzero solutions with zero energy must blow up in both time directions~\cite{KSV}, we conclude that the global-in-time solution $\vec u \equiv 0$; see~\cite{DL1}, where a version of this argument was carried out in detail.
\end{rem}
\section{The self-similar critical element} \label{s:ss}
In this section, we assume towards a contradiction that $\vec u$ is a self-similar-like critical element as in Proposition~\ref{p:cases}, case (III). We will prove that any such $\vec u$ has finite energy, and in fact that $E(\vec u)=0$. Since we are treating the defocusing equation, this implies $\vec u \equiv 0$. The arguments in this section can be readily adapted to the focusing setting as well.

More precisely, we will prove the following result.
\begin{prop}\label{ssim_0}
There are no self-similar-like critical elements, in the sense of case (III) of Proposition~\ref{p:cases}.
\end{prop}
As in Section \ref{s:soliton}, we will prove this proposition via two additional regularity arguments. We fix the following notation: let
\EQ{ \label{eq:s0}
s_0=s_p+\frac{5-p}{2p(p-1)}=\frac32-\frac5{2p}.
}
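A quick algebra check (our addition), using the standard critical exponent $s_p=\frac32-\frac{2}{p-1}$ for the three-dimensional equation, confirms that the two expressions for $s_0$ in \eqref{eq:s0} agree:

```latex
s_p+\frac{5-p}{2p(p-1)}
  =\frac32-\frac{2}{p-1}+\frac{5-p}{2p(p-1)}
  =\frac32+\frac{-4p+5-p}{2p(p-1)}
  =\frac32-\frac{5(p-1)}{2p(p-1)}
  =\frac32-\frac{5}{2p}.
```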
\begin{prop} \label{ad_ref_ss}
Let $\vec u$ be a self-similar-like critical element as in Proposition~\ref{p:cases}. Then
\begin{equation}
\|\vec u(T)\|_{\dot{\mathcal{H}}^{s_0}}\lesssim T^{-(s_0-s_p)}
\end{equation}
uniformly in $T>0$.
\end{prop}
\begin{prop}\label{P:reg-jump-ss} Let $\vec u$ be a self-similar-like critical element as in Proposition~\ref{p:cases}. Let $s_0$ be as in~\eqref{eq:s0} and suppose that
\begin{equation}\label{s0pluseps}
\|\vec u(T)\|_{\dot {\mathcal{H}}^{s_0}}\lesssim T^{-(s_0-s_p)}
\end{equation}
uniformly in $T>0$. Then
\[
\|\vec u(T)\|_{\dot {\mathcal{H}}^{1}}\lesssim T^{-p(s_0-s_p)}
\]
uniformly in $T>0$.
\end{prop}
Proposition \ref{P:reg-jump-ss} will immediately imply Proposition \ref{ssim_0}.
\begin{proof}[Proof of Proposition \ref{ssim_0} assuming Proposition \ref{P:reg-jump-ss}]
Note that the nonlinear component of the energy is controlled by the $ \dot H^{\frac{3}{2} - \frac{3}{p+1}}({\mathbb R}^3)$ norm by Sobolev embedding, and by interpolation we have
\[
\dot H^{s_p}({\mathbb R}^3) \cap \dot H^1({\mathbb R}^3) \subseteq \dot H^{\frac{3}{2} - \frac{3}{p+1}}({\mathbb R}^3).
\]
Thus, sending $T \to \infty$ in Proposition~\ref{P:reg-jump-ss}, the conserved energy satisfies $E(\vec u) = 0$. Since we are treating the defocusing equation, this implies that $\vec u \equiv 0$, which contradicts the fact that $\vec u$ is a critical element.
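To make the interpolation step explicit (our addition, again using $s_p=\frac32-\frac{2}{p-1}$): the Sobolev exponent $\frac32-\frac{3}{p+1}$ lies in $[s_p,1]$ exactly when $3\le p\le 5$, since

```latex
s_p\le\frac32-\frac{3}{p+1}
  \iff \frac{3}{p+1}\le\frac{2}{p-1}
  \iff 3(p-1)\le 2(p+1)
  \iff p\le 5,
\qquad
\frac32-\frac{3}{p+1}\le 1
  \iff \frac{3}{p+1}\ge\frac12
  \iff p\le 5 .
```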
\end{proof}
Proposition~\ref{P:reg-jump-ss} is the easier of the two additional regularity arguments, so we turn to it first.
\subsection{The jump from \texorpdfstring{$\dot {\mathcal{H}}^{s_0}({\mathbb R}^3)$}{H} to \texorpdfstring{$\dot {\mathcal{H}}^{1}({\mathbb R}^3)$}{H} regularity.}
We first prove that if $\vec{u}$ has some additional regularity, then we can achieve $\dot {\mathcal{H}}^1$ regularity, and hence reach the desired contradiction.
\begin{proof}[Proof of \protect{Proposition~\ref{P:reg-jump-ss}}]
Using $N(t)=t^{-1}$, we have
\[
\|u\|_{L_{t,x}^{2(p-1)}([2^k,2^{k+1}]\times\mathbb{R}^3)} \lesssim 1
\]
uniformly in $k$. Thus for any $0<\eta\ll 1$, we can partition $[2^{k},2^{k+1}]$ into $C(\eta)$ intervals $I_j$ so that
\[
\|u\|_{L_{t,x}^{2(p-1)}(I_j\times\mathbb{R}^3)} < \eta.
\]
On each such interval, we may argue using Strichartz estimates and a continuity argument together with \eqref{s0pluseps} to deduce that
\[
\| u\|_{L_t^p L_x^{2p}(I_j\times\mathbb{R}^3)} \lesssim 2^{-k(s_0-s_p)}
\]
for each $j$. This implies
\[
\|u\|_{L_t^p L_x^{2p}([2^k,2^{k+1}]\times\mathbb{R}^3)}\lesssim 2^{-k(s_0-s_p)}
\]
uniformly in $k$. We once again complexify the solution, setting
\[
w = u+ \frac{i}{\sqrt{-\Delta}} u_t.
\]
If $\vec u(t)$ solves \eqref{eq:nlw}, then $w(t)$ is a solution to
\begin{align}
w_t = -i \sqrt{-\Delta} w \pm \frac{i}{\sqrt{-\Delta}} |u|^{p-1} u.
\end{align}
By compactness,
\betagin{equation}\lambdabel{weakH1}
\lim_{T\to\infty} P_{\leq k} e^{iT \varsigmagmaqrt{-\mathbf{m}athbb{D}elta} } w(-T) = 0
\epsnd{equation}
as weak limits in $\dot H^{s_0}$ for any $k \mathbf{m}athbf{g}eq 0$. By Strichartz estimates, we have
\betagin{align*}
\|P_{\leq k} w(T)\|_{\dot H^{1}} \lesssim \|u|u|^{p-1}\|_{L_t^1 L_x^2([T,\infty)\widetildemes\mathbf{m}athbb{R}^3)} & \lesssim \varsigmagmaum_{2^k \mathbf{m}athbf{g}eq \frac{T}{2}} \|u|u|^{p-1}\|_{L_t^1 L_x^2([2^k, 2^{k+1}]\widetildemes\mathbf{m}athbb{R}^3)} \\
& \lesssim \varsigmagmaum_{2^k \mathbf{m}athbf{g}eq \frac{T}{2}} 2^{-k p(s_0-s_p)} \lesssim T^{-p(s_0-s_p)},
\epsnd{align*}
which completes the proof.
\epsnd{proof}
\subsection*{The jump from \texorpdfstring{$\dot {\mathcal{H}}^{s_p}$}{H} to \texorpdfstring{$\dot {\mathcal{H}}^{s_0}$}{H} regularity.}
It remains to prove Proposition~\ref{ad_ref_ss}. The main technical ingredient in the proof is a long-time Strichartz estimate.
\begin{prop}[Long-time Strichartz estimate]\label{prop-ss-lts} Let $\alpha \geq 1$ and
\[
2<q<\frac{2}{s_p}.
\]
Suppose $\vec u$ is a self-similar-like critical element as in Proposition~\ref{p:cases} with compactness modulus function $R(\cdot)$. For any $\eta_0>0$, there exists $k_0=k_0(R(\eta_0),\alpha)$ so that for every $k > k_0$,
\begin{align*}
&\| |\nabla|^{\frac{3(p-3)}{2(p-1)}} u_{>k}\|_{L_t^{2(p-1)}L_x^{\frac{2(p-1)}{p-2}}([1,2^{\alpha(k-k_0)}]\times\mathbb{R}^3)} \\
&\hspace{15mm}+ \| |\nabla|^{-(\frac{2}{q}-s_p)}u_{>k}\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha(k-k_0)}]\times\mathbb{R}^3)}<\eta_0.
\end{align*}
\end{prop}
\begin{proof} We proceed by induction on $k > k_0$. Let $\eta_0>0$. Using compactness and the fact that $N(t)=t^{-1}$, we may find $k_0$ large enough that
\begin{align*}
\| & |\nabla|^{\frac{3(p-3)}{2(p-1)}} u_{>k_0}\|_{L_t^{2(p-1)}L_x^{\frac{2(p-1)}{p-2}}([1,2^{3\alpha}]\times\mathbb{R}^3)} \\
& \quad + \| |\nabla|^{-(\frac2q-s_p)}u_{>k_0}\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{3\alpha}]\times\mathbb{R}^3)} <\tfrac12\eta_0.
\end{align*}
This implies the result for $k_0< k\leq k_0+3$, which serves as the base case.
To establish the induction step, by Taylor's theorem, we may decompose
\begin{align*}
F(u) &= F(u_{> k-3 }) + u_{\leq k-3 } \int_0^1 F'(\theta u_{\leq k-3 } + u_{ > k-3 })\,d\theta\\
&= F(u_{> k-3 }) + u_{\leq k-3} F'(u_{> k-3 }) \\
& \hspace{24mm}+ u_{\leq k-3 }^2 \int_0^1 \int_0^1 F''(\theta_1 \theta_2 u_{\leq k-3} + u_{> k-3})\, d\theta_1\, d\theta_2.
\end{align*}
Hence, we can write the high-frequency projection of the nonlinearity $F(u)$ as a sum of terms
\[
P_{>k} F(u) = P_{>k}F(u_{> k - 3}) + P_{>k}(u_{\leq k-3 }F'(u_{> k-3 }) ) + P_{>k}(u_{\leq k-3 }^2 P_{>k-3} F_2),
\]
where
\[
F_2 = \int_0^1\!\!\int_0^1 F''(\theta_1 \theta_2 u_{\leq k-3} + u_{> k-3})\, d\theta_1\, d\theta_2,
\]
and we have used in the last term that
\[
P_{>k}(u_{\leq k-3 }^2 F_2) = P_{>k}(u_{\leq k-3 }^2 P_{>k-3} F_2).
\]
Note that $|F'(u_{> k-3 })| \lesssim |u_{> k-3}|^{p-1}$ and $|F_2| \lesssim |u|^{p-2}$, and so we may replace these terms with $|u_{>k-3}|^{p-1}$ and $|u|^{p-2}$, respectively, once we have chosen a dual space.
Fix exponents
\begin{equation}\label{ss-ab}
\gamma=\frac{q}{2},\quad \rho=\frac{6q(p-1)}{12-12p-21q+13pq},
\end{equation}
and note that $\gamma\in(1,2)$ for $q\in(2,4)$, while for $q=2$, we have $\rho=\frac{6(p-1)}{7p-15}\in(\frac65,2)$. In particular, by choosing $q$ close to $2$ we can guarantee that $\gamma,\rho\in(1,2)$. Furthermore,
\[
\frac{1}{\gamma}+\frac{1}{\rho}-\frac32 = \frac{2(p-3)}{3(p-1)}\geq 0,
\]
which guarantees that the conjugate exponent pair $(\gamma',\rho')$ is wave-admissible.
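For the reader's convenience, the identity above is the following elementary computation:
\[
\frac{1}{\gamma}+\frac{1}{\rho}
= \frac{2}{q} + \frac{12-12p-21q+13pq}{6q(p-1)}
= \frac{q(13p-21)}{6q(p-1)}
= \frac{13p-21}{6(p-1)},
\]
and subtracting $\frac32=\frac{9(p-1)}{6(p-1)}$ leaves $\frac{4p-12}{6(p-1)}=\frac{2(p-3)}{3(p-1)}$, which is nonnegative precisely for $p\geq 3$.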
By Strichartz estimates,
\begin{align}
\| &|\nabla|^{\frac{3(p-3)}{2(p-1)}} u_{>k} \|_{L_t^{2(p-1)}L_x^{\frac{2(p-1)}{p-2}}} + \| |\nabla|^{-(\frac2q-s_p)}u_{>k}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \nonumber \\
& \lesssim \|u_{>k}(1)\|_{\dot{\mathcal{H}}^{s_p}} + \| |\nabla|^{\frac{3(p-3)}{2(p-1)}}[u_{> k-3}]^p\|_{L_t^{\frac{2(p-1)}{p}}L_x^{\frac{2(p-1)}{2p-3}}} \label{J-ss-lts1}\\
& \quad + \||\nabla|^{-(\frac{2}{q}-s_p)}P_{>k}(u_{\leq k-3} u_{>k-3}^{p-1})\|_{L_t^{\frac{2q}{q+2}}L_x^{\frac{q}{q-1}}} \label{J-ss-lts2} \\
& \quad + \| |\nabla|^{-\frac4q+3s_p}P_{>k}(u_{\leq k-3}^2P_{>k}(u^{p-2}))\|_{L_t^\gamma L_x^\rho}\label{J-ss-lts3}\\
&=: \|u_{>k}(1)\|_{\dot{\mathcal{H}}^{s_p}} + I + II + III, \nonumber
\end{align}
where all space-time norms are over $[1,2^{\alpha(k-k_0)} ]\times\mathbb{R}^3$. We estimate Term I as follows:
\begin{align}
\| |\nabla|^{\frac{3(p-3)}{2(p-1)}}&[u_{> k-3}]^p\|_{L_t^{\frac{2(p-1)}{p}}L_x^{\frac{2(p-1)}{2p-3}}} \nonumber \\
& \lesssim \| u_{> k-3}\|_{L_{t,x}^{2(p-1)}}^{p-1} \| |\nabla|^{\frac{3(p-3)}{2(p-1)}} u_{> k-3} \|_{L_t^{2(p-1)} L_x^{\frac{2(p-1)}{p-2}}} \nonumber \\
& \lesssim \| |\nabla|^{\frac{3(p-3)}{2(p-1)}} u_{> k-3} \|_{L_t^{2(p-1)} L_x^{\frac{2(p-1)}{p-2}}}^{p}. \label{equ:ss-term1}
\end{align}
By the inductive hypothesis applied at frequency $k-3$, we have
\[
\| |\nabla|^{\frac{3(p-3)}{2(p-1)}} u_{> k-3} \|_{L_t^{2(p-1)} L_x^{\frac{2(p-1)}{p-2}}([1,2^{\alpha(k-3-k_0)}]\times\mathbb{R}^3)}\leq \eta_0.
\]
Thus, using $N(t)=t^{-1}$ and that
\[
\int_{2^{\alpha(k-3-k_0)}}^{2^{\alpha(k-k_0 )}} t^{-1}\,\mathrm{d} t = \log 2^{3\alpha} \sim_\alpha 1,
\]
together with the fact that $N(t)\leq 1$ on $[2^{\alpha(k-3-k_0)},\,2^{\alpha(k-k_0 )}]$ for $k > k_0 \gg 1$, we can deduce
\[
\| |\nabla|^{\frac{3(p-3)}{2(p-1)}} u_{> k-3} \|_{L_t^{2(p-1)} L_x^{\frac{2(p-1)}{p-2}}([2^{\alpha(k-3-k_0)},\,2^{\alpha(k-k_0 )}]\times\mathbb{R}^3)}\lesssim \eta_0.
\]
In particular, using \eqref{equ:ss-term1}, we obtain
\[
\| |\nabla|^{\frac{3(p-3)}{2(p-1)}}[u_{> k-3}]^p\|_{L_t^{\frac{2(p-1)}{p}}L_x^{\frac{2(p-1)}{2p-3}}([1,2^{\alpha(k-k_0)}]\times\mathbb{R}^3)} \lesssim \eta_0^p.
\]
For Term II, we estimate
\begin{align*}
II \lesssim 2^{-k(\frac{2}{q}-s_p)} \|u_{\leq k-3}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \|u_{> k-3}\|_{L_{t,x}^{2(p-1)}}^{p-1} \lesssim \eta_0^{p-1} 2^{-k(\frac2q-s_p)}\|u_{\leq k-3 }\|_{L_t^q L_x^{\frac{2q}{q-2}}}.
\end{align*}
Fix $C_0\geq 1$ to be determined below. We write
\begin{align}
\|u_{\leq k-3}\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha(k-k_0)}]\times\mathbb{R}^3)} & \lesssim \|u_{\leq C_0} \|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha(k-k_0)}] \times\mathbb{R}^3)} \label{ss-lts-1} \\
& \quad + \| u_{C_0<\cdot\leq k_0} \|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha(k-k_0)} ] \times\mathbb{R}^3)} \label{ss-lts-2} \\
& \quad + \sum_{k_0\leq j \leq k-3 } \|u_j\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha(k-k_0)}] \times\mathbb{R}^3)} \label{ss-lts-3}.
\end{align}
For \eqref{ss-lts-2}, we have
\[
\| u_{C_0<\cdot\leq k_0} \|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha(k-k_0)}] \times\mathbb{R}^3)} \lesssim C_0^{\frac2q-s_p}\log(2^{k-k_0}).
\]
On the other hand, for $C_0=C_0(\eta_0)$ large enough, we can estimate \eqref{ss-lts-1} by
\[
\|u_{\leq C_0}\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha(k-k_0)}]\times\mathbb{R}^3)} \lesssim \eta_0 2^{k_0(\frac2q-s_p)}\log(2^{k-k_0}).
\]
Finally, for $k_0\leq j \leq k-3$ we first use the inductive hypothesis to write
\[
\|P_j u\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha(j-k_0)} ]\times\mathbb{R}^3)} \lesssim 2^{j(\frac2q-s_p)}\eta_0.
\]
Arguing as we did for the high-frequency piece,
\[
\|P_j u\|_{L_t^q L_x^{\frac{2q}{q-2}}([2^{\alpha(j-k_0)},2^{\alpha(k-k_0)}]\times\mathbb{R}^3)} \lesssim 2^{j(\frac2q-s_p)}\eta_0\log(2^{k-j}).
\]
Thus
\begin{align*}
\sum_{k_0\leq j \leq k-3 }& \|u_j\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha(k-k_0)} ]\times\mathbb{R}^3)} \\
&\lesssim \sum_{k_0\leq j\leq k-3 }\eta_0 2^{j(\frac2q-s_p)}[1+\log(2^{k-j})] \lesssim \eta_0 2^{k(\frac2q-s_p)},
\end{align*}
where we have used that the sum over dyadic $L>1$ satisfies
\[
\sum_{L>1} L^{-(\frac2q-s_p)}\log(L) \lesssim 1.
\]
Collecting these estimates, we find
\begin{align}
&\|u_{\leq k-3}\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha(k-k_0)} ] \times\mathbb{R}^3)} \nonumber\\
& \lesssim [C_0^{\frac2q-s_p}+\eta_0 2^{k_0(\frac2q-s_p)}]\log(2^{k-k_0}) + \eta_0 2^{k(\frac2q-s_p)}, \label{J-ss-lts-low}
\end{align}
which yields
\begin{align*}
\||\nabla|^{-(\frac{2}{q}-s_p)}&P_{>k}(u_{\leq k-3} u_{>k-3}^{p-1})\|_{L_t^{\frac{2q}{q+2}}L_x^{\frac{q}{q-1}}} \\
& \lesssim \eta_0^{p-1} 2^{-k(\frac2q-s_p)} [C_0^{\frac2q-s_p}+\eta_0 2^{k_0(\frac2q-s_p)}]\log(2^{k-k_0}) + \eta_0^p.
\end{align*}
Choosing $k_0$ possibly even larger, we deduce
\[
\||\nabla|^{-(\frac{2}{q}-s_p)}P_{>k}(u_{\leq k-3} u_{>k-3}^{p-1})\|_{L_t^{\frac{2q}{q+2}}L_x^{\frac{q}{q-1}}} \lesssim \eta_0^p.
\]
Finally, we estimate Term III. Since $-\tfrac{2}{q}+s_p<0$, we use the fractional chain rule and Bernstein estimates to obtain
\begin{align*}
\| &|\nabla|^{-\frac4q+3s_p}P_{>k}(u_{\leq k-3 }^2P_{>k}(u^{p-2}))\|_{L_t^\gamma L_x^\rho} \\
& \lesssim 2^{-k(\frac4q-2s_p)}\|u_{\leq k-3}\|_{L_t^q L_x^{\frac{2q}{q-2}}}^2\| |\nabla|^{s_p}(u^{p-2})\|_{L_t^\infty L_x^{\frac{6(p-1)}{7p-15}}} \\
& \lesssim 2^{-2k(\frac{2}{q}-s_p)} \|u_{\leq k-3}\|_{L_t^q L_x^{\frac{2q}{q-2}}}^2 \| u\|_{L_t^\infty L_x^{\frac{3(p-1)}{2}}}^{p-3} \| |\nabla|^{s_p} u\|_{L_t^\infty L_x^2}.
\end{align*}
Using \eqref{J-ss-lts-low} (and the conditions on $k_0,C_0$ given above), we conclude
\[
\| |\nabla|^{-\frac4q+3s_p}P_{>k}(u_{\leq k-3}^2P_{>k}(u^{p-2}))\|_{L_t^\gamma L_x^\rho} \lesssim \eta_0^2.
\]
Combining our estimates for Terms I, II, and III and choosing $\eta_0$ small, we conclude that
\[
\| |\nabla|^{\frac{3(p-3)}{2(p-1)}} u_{>k} \|_{L_t^{2(p-1)}L_x^{\frac{2(p-1)}{p-2}}} + \| |\nabla|^{-(\frac2q-s_p)}u_{>k}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \leq \eta_0
\]
on $[1,2^{\alpha(k-k_0)}]\times\mathbb{R}^3$, thus closing the induction and completing the proof. \end{proof}
Finally, we arrive at the proof of the additional regularity statement, Proposition~\ref{ad_ref_ss}.
\begin{proof}[Proof of \protect{Proposition~\ref{ad_ref_ss}}]
We compute
\begin{align}
\|\vec u(1)\|^2_{\dot{\mathcal{H}}^{s_0}} \simeq \| w (1) \|^2_{\dot H^{s_0}} \lesssim 1 + \sum_{k \gg 1} 2^{2ks_0} \langle P_k w(1), P_k w(1)\rangle,
\end{align}
since the low-frequency contribution is controlled by the $\dot H^{s_p}$ norm. We use the double Duhamel argument based at $t=1$. For fixed $k\gg 1$, we write
\[
\langle P_k w(1), P_k w(1)\rangle = \int_0^1 \int_1^\infty \langle e^{i(1-t)\sqrt{-\Delta}}P_k F(u(t)), e^{i(1-\tau)\sqrt{-\Delta}} P_k F(u(\tau))\rangle \,d\tau \,\mathrm{d} t.
\]
We fix $\alpha \geq 1$, to be determined below, and split
\[
\int_1^\infty e^{i(1-t)\sqrt{-\Delta}}P_k F(u(t))\,\mathrm{d} t = A_k + B_k,
\]
where
\begin{align*}
A_k =\int_1^{2^{k\alpha }} e^{i(1-t)\sqrt{-\Delta}}P_k F(u(t))\,\mathrm{d} t,\quad B_k&=\int_{2^{k\alpha}}^\infty e^{i(1-t)\sqrt{-\Delta}}P_k F(u(t))\,\mathrm{d} t.
\end{align*}
We also write
\[
Z_k = \int_0^1 e^{i(1-\tau)\sqrt{-\Delta} }P_k F(u(\tau))\,d\tau.
\]
We will use the estimate
\[
|\langle A_k+B_k, Z_k\rangle| \leq \|A_k\|^2+2|\langle B_k,Z_k\rangle|,
\]
which follows from the fact that $A_k+B_k=Z_k$.
\medskip
We first estimate the $\langle B_k,Z_k\rangle$ term. We expand
\[
|\langle B_k,Z_k\rangle| \leq \sum_{\ell \leq 0} \sum_{ j \geq k\alpha} \int_{2^{\ell}}^{2^{\ell+1}} \hspace{-2mm} \int_{2^j}^{2^{j+1}} \bigl| \langle e^{-i(t-\tau)\sqrt{-\Delta} }P_k F(u(t)), P_k F(u(\tau))\rangle\bigr| \,\mathrm{d} t\,d\tau.
\]
We claim that
\begin{equation}\label{01071}
\|P_k(u|u|^{p-1})\|_{L_t^2 L_x^1([2^{\ell},2^{\ell+1}]\times\mathbb{R}^3)} \lesssim 2^{-ks_p}
\end{equation}
uniformly in $\ell$. Indeed, arguing as above, we can decompose the nonlinearity into two types of terms,
\[
u_{> k-1}u^{p-1} \qquad \textup{and} \qquad uP_{> k-1}(u^{p-1}) ,
\]
since if both $u$ and $|u|^{p-1}$ are projected to low frequencies, the product vanishes when projected to high frequencies.
We thus have by Bernstein's inequality, H\"older's inequality, and the fractional chain rule that
\begin{align*}
\|&P_k(u|u|^{p-1})\|_{L_t^2 L_x^1} \\ & \lesssim \|u\|_{L_{t,x}^{2(p-1)}}^{p-1}\|u_{>k-1}\|_{L_t^\infty L_x^2} + \|u\|_{L_{t,x}^{2(p-1)}}\|P_{>k-1}(u^{p-1})\|_{L_t^{\frac{2(p-1)}{p-2}} L_x^{\frac{2(p-1)}{2p-3}}} \\
& \lesssim 2^{-ks_p} \|u\|_{L_{t,x}^{2(p-1)}}^{p-1}\| |\nabla|^{s_p}u\|_{L_t^\infty L_x^2} \lesssim 2^{-ks_p},
\end{align*}
where all spacetime norms are over $[2^{\ell}, 2^{\ell+1}]\times\mathbb{R}^3$.
Using dispersive estimates, we have for any $j \geq k \alpha$ and $\ell \leq 0$,
\begin{align*}
\int_{2^{\ell}}^{2^{\ell+1}} & \int_{2^j}^{2^{j+1}} \bigl| \langle e^{-i(t-\tau)\sqrt{-\Delta} }P_k F(u(t)), P_k F(u(\tau))\rangle\bigr| \,\mathrm{d} t\,d\tau \\
& \lesssim \int_{2^{\ell}}^{2^{\ell+1}} \int_{2^j}^{2^{j+1}} t^{-1} 2^k \|P_k(u|u|^{p-1})(t)\|_{L_x^1} \|P_k(u|u|^{p-1})(\tau)\|_{L_x^1}\,\mathrm{d} t\,d\tau \\
& \lesssim 2^{\frac \ell 2} 2^{-\frac j 2} \|P_k(u|u|^{p-1})\|_{L_t^2 L_x^1([2^{\ell},2^{\ell+1}]\times\mathbb{R}^3)}\|P_k(u|u|^{p-1})\|_{L_t^2 L_x^1([2^j,2^{j+1}]\times\mathbb{R}^3)} \\
& \lesssim 2^{\frac \ell 2} 2^{-\frac j2} 2^{k(1-2s_p)}.
\end{align*}
Summing over $\ell \leq 0$ and $j \geq k\alpha $, we deduce that
\begin{equation}\label{ss-yz}
|\langle B_k , Z_k \rangle_{\dot H^{s_p}_x}| \lesssim 2^{k(1-\frac{\alpha}{2})}.
\end{equation}
We now turn to estimating the $\|A_k\|^2$ term. We will use a frequency envelope argument to establish the required bounds. Once again, we fix an exponent $q$ satisfying
\[
2 < q < \frac{2}{s_p}.
\]
Let
\begin{align}\label{sigma}
\sigma < \min\left\{s_p,\frac2q-s_p,\frac4q-1-s_p \right\},
\end{align}
and define
\[
\gamma_k = \sum_{j} 2^{-\sigma|j-k|} \|w_j\|_{L_t^\infty \dot H^{s_p}([1,\infty)\times\mathbb{R}^3)}.
\]
We will establish the following: Let $\eta_0>0$ and let $R(\cdot)$ denote the compactness modulus function of $\vec u$. Then there exists $k_0\equiv k_0(\eta_0,R(\eta_0))$ sufficiently large that
\begin{align}\label{ss-ak}
\|A_k\|_{\dot H^{s_p}_x}\lesssim C(k_0)2^{-k (\frac2q-s_p)+} + \eta_0\sum_{j} 2^{-\sigma |j-k|} \|u_j\|_{L_t^\infty \dot H_x^{s_p}([1,\infty)\times\mathbb{R}^3)}
\end{align}
for all $k \geq k_0$. For $p > 3$, we write the nonlinearity as $F(u)=|u|^{p-3}u^3$ and then decompose $u^3$ by writing $u=u_{\leq k}+u_{>k}$. By further decomposing $u_{\leq k}=u_{\leq k_0}+u_{k_0<\cdot\leq k}$, we are led to terms of the form
\begin{align}
F(u)&=|u|^{p-3}u_{>k}^3 \label{HHH}\\
& \quad + 3|u|^{p-3}u_{>k}^2 u_{\leq k_{0}} \label{HHL} \\
& \quad + 3|u|^{p-3}u_{>k}^2 u_{k_0<\cdot\leq k} \label{HHM} \\
& \quad + |u|^{p-3} u_{\leq k_{0}} F_3 \label{MML} \\
& \quad + |u|^{p-3}u_{k_0<\cdot\leq k}^3 \label{MMM}\\
& \quad + 3|u|^{p-3}u_{>k} u_{\leq k}u_{\leq k_0} \label{HML} \\
& \quad + 3|u|^{p-3}u_{>k} u_{\leq k}u_{k_0<\cdot\leq k}, \label{HMM}
\end{align}
where we have written
\[
F_3 = u_{\leq k_0}^2 + 2u_{\leq k_0}u_{k_0<\cdot\leq k}+ u_{k_0<\cdot\leq k}^2.
\]
By Proposition~\ref{prop-ss-lts}, for any $\beta \geq 1$ there exists $k_0\equiv k_0(R(\eta_0),\beta)$ so that for every $k > k_0$ we have
\begin{align*}
\| |\nabla|^{\frac{3(p-3)}{2(p-1)}}& u_{>k}\|_{L_t^{2(p-1)}L_x^{\frac{2(p-1)}{p-2}}([1,2^{\beta(k-k_0)}]\times\mathbb{R}^3)}\\
&+ \| |\nabla|^{-(\frac2q-s_p)}u_{>k}\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\beta(k-k_0)}]\times\mathbb{R}^3)}<\eta_0.
\end{align*}
Fix $\beta>\alpha$ and $k_1=k_1(R(\eta_0),\alpha,\beta)\geq k_0$ which satisfies
\[
2^{k_1(\beta - \alpha)}\geq 2^{k_0 \beta}.
\]
Then $2^{\beta(k- k_0)} \geq 2^{k \alpha}$ for $k \geq k_1$, and hence for every $k \geq k_1$, we have
\begin{equation}\label{cor-ss-lts}
\begin{split}
\| |\nabla|^{\frac{3(p-3)}{2(p-1)}} &u_{>k}\|_{L_t^{2(p-1)}L_x^{\frac{2(p-1)}{p-2}}([1,2^{\alpha k}]\times\mathbb{R}^3)}\\
&+\| |\nabla|^{-(\frac2q-s_p)}u_{>k}\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha k}]\times\mathbb{R}^3)} <\eta_0.
\end{split}
\end{equation}
We will use this estimate repeatedly below. Furthermore, we may also establish identical long-time Strichartz estimates for
\[
\| |\nabla|^{s_p-\frac2r} u_{>k}\|_{L_t^r L_x^{\frac{2r}{r-2}}},
\]
where $\tfrac2{s_p}<r<4$.
To estimate \eqref{HHH}, we use the dual Strichartz pair
\[
\left(\frac{r}{2},\frac{6r(p-1)}{12-12p-21r+13pr}\right),
\]
with $\frac2{s_p}<r<4$. We note that this pair is dual admissible: writing the pair as $(A,B)$, we have $\tfrac{1}{A}+\tfrac{1}{B}=\tfrac{13p-21}{6p-6}\geq \tfrac32$ for $p\geq 3$. Note that $A\in(1,2)$ since $r\in(2,4)$, and $B>1$ for $r<\tfrac{12(p-1)}{7p-15}$. This is compatible with $r>\tfrac{2}{s_p}$ when $p\in[3,5)$. We can thus bound
\begin{align*}
2^{k(3s_p-\frac4r)}&\|u\|_{L_t^\infty L_x^{\frac{3(p-1)}{2}}}^{p-3}\|u_{>k}\|_{L_t^r L_x^{\frac{2r}{r-2}}}^2 \sum_{k\leq j} \|u_{j}\|_{L_t^\infty L_x^2} \\
& \lesssim \| |\nabla|^{s_p-\frac2r}u_{>k}\|_{L_t^r L_x^{\frac{2r}{r-2}}}^2\sum_{k\leq j} 2^{(k-j)s_p}\|u_j\|_{L_t^\infty \dot H_x^{s_p}} \\
& \lesssim \eta_0 \sum_{j>k}2^{(k-j) s_p}\|u_j\|_{L_t^\infty \dot H_x^{s_p}}.
\end{align*}
For \eqref{HHL}, we use the dual Strichartz pair
\begin{align}\label{eq:hhl_pair}
\left(\frac{2q(p-1)}{2p+q-2},\frac{6q(p-1)}{6-15q+2p(5q-3)}\right).
\end{align}
We bound the contribution of this term by
\begin{align*}
2^{k(2s_p-\frac2q)}&\|u\|_{L_t^\infty L_x^{\frac{3(p-1)}{2}}}^{p-3}\|u_{>k}\|_{L_{t,x}^{2(p-1)}} \|u_{\leq k_0}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \|u_{>k}\|_{L_t^\infty L_x^2} \\
& \lesssim \eta_0 2^{-k(\frac2q-s_p)}2^{k_0(\frac2q-s_p)}\log 2^k \|u\|_{L_t^\infty \dot H_x^{s_p}} \\
& \lesssim \eta_0 2^{-k(\frac2q-s_p)}2^{k_0(\frac2q-s_p)}\log 2^k.
\end{align*}
For \eqref{HHM}, we use the same dual pair as in \eqref{eq:hhl_pair} and obtain
\begin{align*}
2^{k(2s_p-\frac2q)}&\|u\|_{L_t^\infty L_x^{\frac{3(p-1)}{2}}}^{p-3}\|u_{>k}\|_{L_{t,x}^{2(p-1)}} \sum_{k_0\leq j_1\leq k\leq j_2} \|u_{j_1}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \|u_{j_2}\|_{L_t^\infty L_{x}^{2}}\\
& \lesssim \eta_0 \sum_{k_0\leq j_1\leq k\leq j_2} 2^{j_1(\frac2q-s_p)}\log(2^{k-j_1}) 2^{-j_2s_p}\|u_{j_2}\|_{L_t^\infty \dot H_x^{s_p}} \\
& \lesssim \eta_0\sum_{k\leq j}2^{(k-j)s_p}\|u_j\|_{L_t^\infty \dot H_x^{s_p}}.
\end{align*}
To estimate \eqref{MML}, we use the admissible dual pair $(\frac{q}{2},\frac{6q}{7q-8}+)$. We choose $\rho$ so that
\[
\frac{3}{\rho}=\frac{2}{q}+\frac{4}{p-1}-\frac{3}{2}.
\]
We bound the contribution of this term by
\begin{align*}
2^{-k(\frac2q-s_p)}&\|u\|_{L_t^\infty L_x^{\frac{3(p-1)}{2}}}^{p-3}\|u_{\leq k_0}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \sum_{j_1\leq j_2\leq k} \|u_{j_1}\|_{L_t^q L_x^{\frac{2q}{q-2}}}\|u_{j_2}\|_{L_t^\infty L_x^{\rho+}} \\
& \lesssim 2^{-k(\frac2q-s_p)}2^{k_0(\frac2q-s_p)}\log 2^k \sum_{j_1\leq j_2\leq k} 2^{j_1(\frac2q-s_p)}2^{j_2(s_p-\frac2q+)}\log(2^{k-j_1}) \\
& \lesssim 2^{-k(\frac2q-s_p)+}.
\end{align*}
Now we estimate \eqref{MMM} as follows: we apply Strichartz estimates with the dual (sharp) admissible pair $(\tfrac{q}{2},\tfrac{2q}{3q-4})$. Then we obtain
\begin{align*}
2^{k(1-\frac4q+s_p)}\|u\|_{L_t^\infty L_x^{\frac{3(p-1)}{2}}}^{p-3} \sum \|u_{j_1}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \|u_{j_2}\|_{L_t^q L_x^{\frac{2q}{q-2}}}\|u_{j_3}\|_{L_t^\infty L_x^{\frac{6(p-1)}{9-p}}},
\end{align*}
where the sum is over $k_0\leq j_1\leq j_2\leq j_3\leq k$.
Now, for $k_0\leq j\leq k$, we can estimate
\begin{align*}
\|u_{j}\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha k}]\times\mathbb{R}^3)} & \lesssim\|u_{j}\|_{L_t^q L_x^{\frac{2q}{q-2}}([1,2^{\alpha j}]\times\mathbb{R}^3)}+ \|u_j\|_{L_t^q L_x^{\frac{2q}{q-2}}([2^{\alpha j},2^{\alpha k}]\times\mathbb{R}^3)} \\
& \lesssim \eta_0 \log(2^{k-j}) 2^{j(\frac2q-s_p)},
\end{align*}
using the long-time Strichartz estimate of Proposition~\ref{prop-ss-lts}; the logarithm arises from the second term. We also have
\[
\|u_j\|_{L_t^\infty L_x^{\frac{6(p-1)}{9-p}}} \lesssim 2^{-j(1-s_p)}\|u_j\|_{L_t^\infty \dot H_x^{s_p}}.
\]
This yields
\begin{align*}
\eta_0 2^{k(1-\frac4q+s_p)}& \sum_{k_0\leq j_1\leq j_2\leq j_3\leq k} 2^{j_1(\frac2q-s_p)}\log(2^{k-j_1}) 2^{j_2(\frac2q-s_p)}\log(2^{k-j_2})2^{-j_3(1-s_p)}\|u_{j_3}\|_{L_t^\infty \dot H_x^{s_p}} \\
& \lesssim \eta_0 \sum_{k_0\leq j\leq k} \log(2^{k-j}) 2^{(j-k)(\frac4q-1-s_p)}\|u_j\|_{L_t^\infty \dot H_x^{s_p}}.
\end{align*}
Note that for this estimate, we need
\[
q<\frac{4}{1+s_p} = \frac{8(p-1)}{5p-9},
\]
which is compatible with $q>2$ for $p\in[3,5)$.
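The identification of the two expressions for this threshold is the following arithmetic, recorded for the reader's convenience: since $s_p = \frac{3}{2}-\frac{2}{p-1}$,
\[
1+s_p = \frac{5}{2}-\frac{2}{p-1} = \frac{5p-9}{2(p-1)},
\qquad\text{so}\qquad
\frac{4}{1+s_p} = \frac{8(p-1)}{5p-9},
\]
and $\frac{8(p-1)}{5p-9}>2$ if and only if $8p-8>10p-18$, that is, $p<5$.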
For \eqref{HML}, we use the dual Strichartz pair
\begin{align}\label{equ:hml_pair}
\left(\frac{q}{2},\frac{6q(p-1)}{12-12p-21q+13pq}\right).
\end{align}
We bound the contribution of this term by
\begin{align*}
2^{k(3s_p-\frac4q)}&\|u_{\leq k_0}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \sum_{j_1\leq k \leq j_2} \|u_{j_1}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \|u_{j_2}\|_{L_t^\infty L_x^2} \\
& \lesssim 2^{k(3s_p-\frac4q)} 2^{k_0(\frac2q-s_p)} \sum_{j_1\leq k\leq j_2} 2^{j_1(\frac2q-s_p)}\log(2^{k-j_1}) 2^{-j_2s_p} \|u_{j_2}\|_{L_t^\infty \dot H_x^{s_p}} \\
& \lesssim 2^{-k(\frac2q-s_p)}2^{k_0(\frac2q-s_p)}.
\end{align*}
Finally, for \eqref{HMM}, we use the same dual pair as in \eqref{equ:hml_pair} and estimate the contribution of this term by
\begin{align*}
2^{k(3s_p-\frac4q)}& \|u\|_{L_t^\infty L_x^{\frac{3(p-1)}{2}}}^{p-3} \sum_{j_1\leq k,\; k_0\leq j_2\leq k\leq j_3} \hspace{-4mm}\|u_{j_1}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \|u_{j_2}\|_{L_t^q L_x^{\frac{2q}{q-2}}} \|u_{j_3}\|_{L_t^\infty L_x^2} \\
& \lesssim \eta_0 2^{k(3s_p-\frac4q)} \sum_{j_1\leq k,\; k_0\leq j_2\leq k \leq j_3} \hspace{-4mm} 2^{j_1(\frac2q-s_p)}\log(2^{k-j_1})\log(2^{k-j_2})2^{j_2(\frac2q-s_p)}2^{-j_3 s_p}\|u_{j_3}\|_{L_t^\infty \dot H_x^{s_p}} \\
& \lesssim \eta_0\sum_{k\leq j} 2^{(k-j)s_p}\|u_j\|_{L_t^\infty \dot H_x^{s_p}}.
\end{align*}
Putting together all the estimates, we establish \eqref{ss-ak}, which, together with \eqref{ss-yz} and the conditions on $\sigma$ from \eqref{sigma}, yields
\[
\|w_k(1)\|_{\dot H^{s_p}_x} \lesssim 2^{k(\frac12-\frac{\alpha}{4})}+ 2^{-k(\frac2q-s_p)+} + \eta_0\sum_{j} 2^{-\sigma|j-k|} \|w_j\|_{L_t^\infty \dot H_x^{s_p}([1,\infty)\times\mathbb{R}^3)}
\]
for all $k \gg 1$. For $\alpha$ large enough, we can guarantee that the second term dominates the first, and hence
\[
\|w_k(1)\|_{\dot H^{s_p}_x} \lesssim 2^{-k(\frac2q-s_p)+} + \eta_0\sum_{j} 2^{-\sigma|j-k|} \|w_j\|_{L_t^\infty \dot H_x^{s_p}([1,\infty)\times\mathbb{R}^3)}
\]
for all $k \gg 1$. We now rescale the solution $u$ and use the fact that the rescaled solution $T^{\frac{2}{p-1}}u(Tt,Tx)$ is also a self-similar solution for any $T>1$ (with the {same} compactness modulus function as $u$). This yields
\begin{equation}\label{01072}
\|w_k\|_{L_t^\infty \dot H^{s_p}([1,\infty)\times\mathbb{R}^3)} \lesssim 2^{-k(\frac2q-s_p)+} + \eta_0 \sum_{j} 2^{-\sigma|j-k|} \|w_j\|_{L_t^\infty \dot H_x^{s_p}([1,\infty)\times\mathbb{R}^3)}.
\end{equation}
Let $0<\eta< \sigma$. Then \eqref{01072} implies that for $k \gg k_0$,
\[
\gamma_k \lesssim 2^{-k\eta} +\eta_0\gamma_k,
\]
and hence, taking $\eta_0$ sufficiently small, we may conclude that
\[
\| w(1)\|_{\dot H^{s_p+\eta}} \lesssim 1 \qtq{for any}0<\eta<\sigma.
\]
Using the same rescaling argument as above, and the relation between $w$ and $u$, we ultimately deduce that
\[
\| u(T)\|_{\dot{\mathcal{H}}^{s_p+\eta}} \lesssim T^{-\eta},
\]
which yields \eqref{s0pluseps} provided we can choose $\eta=\frac{5-p}{2p(p-1)}$. Combining with the constraint $\eta<\tfrac{2}{q}-s_p$, this requires that we choose
\[
2<q<\frac{4p}{3p-5},
\]
which is possible whenever $p\in[3,5)$. For the other term appearing in the definition of $\sigma$, we find that we can choose $\eta=\frac{5-p}{2p(p-1)}$ provided we take
\[
q<\frac{8p}{5(p-1)},
\]
which is similarly allowed by the requirement that $q>2$ for $p\in[3,5)$. This completes the proof of Proposition~\ref{ad_ref_ss} and hence completes our treatment of the self-similar scenario.
\end{proof}
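For the reader's convenience, we record the arithmetic behind the two thresholds appearing at the end of the preceding proof. With $s_p=\frac32-\frac2{p-1}=\frac{3p-7}{2(p-1)}$ and $\eta=\frac{5-p}{2p(p-1)}$,
\[
s_p+\eta = \frac{3p-7}{2(p-1)}+\frac{5-p}{2p(p-1)} = \frac{(3p-5)(p-1)}{2p(p-1)} = \frac{3p-5}{2p},
\]
so that $\eta<\tfrac2q-s_p$ is equivalent to $\tfrac2q>\tfrac{3p-5}{2p}$, i.e. $q<\tfrac{4p}{3p-5}$. Similarly,
\[
1+s_p+\eta = \frac{5p^2-10p+5}{2p(p-1)} = \frac{5(p-1)}{2p},
\]
so that $\eta<\tfrac4q-1-s_p$ is equivalent to $q<\tfrac{8p}{5(p-1)}$.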
\section{Doubly concentrating critical element: the sword and shield} \label{s:sword}
We now consider the case of the doubly concentrating critical element, that is, $N(t) \geq 1$ on $\mathbb{R} = I_{\textup{max}}$ and
\EQ{ \label{eq:Ntoinf}
\limsup_{t \to \pm \infty} N(t) = \infty.
}
By Proposition~\ref{p:cases} we may assume in this case that $x(t)$ is subluminal in the sense of Definition~\ref{d:sl}. By Lemma~\ref{l:sl} there exists $\delta_0>0$ so that
\begin{align} \label{eq:sl}
|x(t) - x(\tau)| \leq (1- \delta_0)|t - \tau|
\end{align}
for all $t, \tau$ with
\EQ{
|t - \tau| \geq \frac{1}{\delta_0 \inf_{s \in [t, \tau]} N(s)}.
}
The goal of this section is to prove the following proposition:
\begin{prop}
There are no doubly concentrating critical elements, in the sense of Case (III) of Proposition~\ref{p:cases}.
\end{prop}
To prove this proposition, we establish the following dichotomy: either additional regularity for the critical element can be established using essentially the same arguments used in Section~\ref{s:soliton}, or a self-similar-like critical element can be extracted by passing to a suitable limit. To this end we define the function
$\tau : \mathbb{R} \rightarrow \mathbb{R}$ by
\[
\tau(t) = \int_{0}^{t} N(s)\, \mathrm{d} s.
\]
Since $N(t) > 0$ and $\lim_{t \to \pm \infty} |\tau(t)| = \infty$, the function $\tau : [0,\infty) \to [0,\infty)$ is bijective. Hence for any $t_0 > 0 $ and any $C_+ > 0$, there exists a unique $\kappa_+ = \kappa_+( t_0, C_+) > 0$ such that
\[
t_0 + \frac{\kappa_{+}( t_0, C_{+})}{N(t_0)}= \tau^{-1}(\tau(t_{0}) + C_{+}).
\]
Similarly, for $t_0 < 0$ and any $ C_- > 0$, we can define $\kappa_- = \kappa_-(t_0, C_-)>0$ by
\[
t_0 - \frac{\kappa_{-}( t_0, C_{-})}{N(t_0)} = \tau^{-1}(\tau(t_{0}) - C_{-}).
\]
Fix $\eta > 0$ as in the small data theory of Proposition~\ref{small data}, and let $R = R(\eta)$ be such that for all $t \in \mathbb{R}$,
\begin{align}\label{equ:compactness_small_data}
&\int_{|x - x(t)| \geq \frac{R(\eta)}{N(t)}}||\nabla|^{s_p} u(t,x)|^{2}\, \mathrm{d} x + \int_{|x-x(t)| \geq \frac{R(\eta)}{N(t)}}||\nabla|^{s_p-1} u_t(t,x)|^{2}\, \mathrm{d} x \leq \eta;
\end{align}
see Remark~\ref{r:Reta}. Now let $\chi(t) = \chi_{R, N}(t)$ be a smooth cutoff to the set
\[
\{ |x - x(t) | \geq R(\eta) / N(t) \}.
\]
By our choice of $R(\eta)$ we have
\[
\| \chi(t) \vec u \|_{\dot {\mathcal{H}}^{s_p}}^2 \lesssim \eta.
\]
Since $N(t) \geq 1$ and by~\eqref{eq:sl}, for any $t_{0}$, there exists $C_{+}(t_{0}) \ge 1$ sufficiently large so that
\begin{align} \label{eq:Cpmdef}
\biggl|x\Big(t_{0} +& \frac{\kappa_{+}(t_{0}, C_{+}(t_{0}))}{ N(t_{0})}\Big) - x(t_{0})\biggr|\nonumber\\
&\leq \abs{\frac{\kappa_{+}(t_{0}, C_{+}(t_{0}))}{N(t_{0})}} - \frac{R(\eta)}{N(t_0 + \kappa_+(t_0, C_+(t_0)) N(t_0)^{-1})},
\end{align}
and similarly for $C_-(t_0)$. By continuity we may assume that $C_{\pm}(t_0)$ are minimal with this property.
Furthermore, for every $t_{0}$ there exists $C(t_{0})$ such that for some $t_{1} \in \mathbb{R}$ satisfying
\begin{equation}
\tau(t_{1}) - \tau(t_{0}) \leq C(t_{0}),
\end{equation}
there exist $t_{-} < t_{1} < t_{+}$ with
\begin{equation}
\tau(t_{1}) - \tau(t_{-}) \leq 2 C(t_{0}), \quad \tau(t_{+}) - \tau(t_{1}) \leq 2 C(t_{0}),
\end{equation}
which satisfy
\begin{equation}
|x(t_{-}) - x(t_{1})| \leq |t_{-} - t_{1}| - \frac{R(\eta)}{N(t_{-})}, \quad \text{and} \quad |x(t_{+}) - x(t_{1})| \leq |t_{+} - t_{1}| - \frac{R(\eta)}{N(t_{+})}.
\end{equation}
It is clear from the definition that $C(t_{0}) \leq \max(C_{+}(t_{0}), C_{-}(t_{0}))$, and thus is finite. However, $C_{\pm}(t_{0})$ need not be uniformly bounded for $t_{0} \in \mathbb{R}$, and hence neither need $C(t_0)$ be.
We will now analyze several cases based on whether $C(t_0)$ is uniformly bounded for $t_0 \in \mathbb{R}$.
\subsection{Case 1: \texorpdfstring{$C(t_0)$}{C} is uniformly bounded} Here we work under the assumption that there exists a constant $C > 0$ such that $C(t_0) \leq C$ for all $t_0 \in \mathbb{R}$.
We show that essentially the same argument used in Section~\ref{s:soliton-reg} can be used to show that such a critical element necessarily has the compactness property in $\dot{\mathcal{H}}^{s_p} \cap \dot{\mathcal{H}}^1$.
\begin{prop}[Additional regularity] \label{p:casc-reg} Let $\vec u(t) \in \dot{\mathcal{H}}^{s_p}$ be a solution with the compactness property that is subluminal and doubly concentrating, as in Case (III) of Proposition~\ref{p:cases}. Assume in addition that $C(t)$ is uniformly bounded as a function of $t \in \mathbb{R}$. Then $\vec u(t) \in \dot{\mathcal{H}}^1$ and satisfies the bound
\EQ{ \label{eq:h1-regN}
\| \vec u(t) \|_{\dot{\mathcal{H}}^1} \lesssim N(t)^{\frac{5-p}{2(p-1)}}
}
uniformly in $t \in \mathbb{R}$.
\end{prop}
For the moment, we will assume Proposition \mathop{\mathrm{Re}}f{p:casc-reg}, and we will use it to prove the following corollary.
\betagin{cor} \lambdabel{c:casc-reg}
Let $\vec u(t)$ satisfy the hypotheses of Proposition~\mathop{\mathrm{Re}}f{p:casc-reg}. Then $\vec u(t) \epsquiv 0$.
\epsnd{cor}
\betagin{proof}[Proof of Corollary~\mathop{\mathrm{Re}}f{c:casc-reg} assuming Proposition~\mathop{\mathrm{Re}}f{p:casc-reg}]
We begin by extracting from $\vec u(t)$ another solution with the compactness property on a half-infinite time interval $[0, \infty)$ but with new scaling parameter $\widetilde N(s) \to 0$ as $s \to \infty$. Let $\widetilde t_m$ be any sequence of times with
\begin{align}
\widetilde t_m \to - \infty, \quad N(\widetilde t_m) \to \infty \quad \text{as} \quad m \to \infty.
\end{align}
Next, choose another sequence $t_m \to - \infty$ such that
\begin{align}
N(t_m) := \max_{t \in [\widetilde t_m, 0]} N(t).
\end{align}
Now define a sequence as follows: set
\begin{align}
&w_m(s, y) := \frac{1}{N( t_m)^{\frac{2}{p-1}}} u \Bigl( t_m + \frac{s}{N(t_m)}, \, x( t_m) + \frac{y}{N( t_m)}\Bigr), \\
&\partial_t w_m(s, y):= \frac{1}{N( t_m)^{\frac{2}{p-1} +1}} \partial_t u \Bigl( t_m + \frac{s}{N(t_m)}, \, x( t_m) + \frac{y}{N(t_m)}\Bigr),
\end{align}
and set
\begin{align}
\vec w_m := ( w_m(0, y), \partial_t w_m(0, y)).
\end{align}
Then by the pre-compactness in $\dot{\mathcal{H}}^{s_p}$, there exists (after passing to a subsequence) $\vec w_{\infty}(y) \neq 0$ so that
\begin{align} \label{eq:strongw}
\vec{w}_m \to \vec{w}_{\infty} \in \dot{\mathcal{H}}^{s_p}.
\end{align}
It is standard to show that $\vec w(s)$ (the evolution of $\vec w_{\infty} = \vec w(0)$) has the compactness property on $I = [0, \infty)$ with frequency parameter $\widetilde N(s)$ defined by
\begin{align}
\widetilde N(s):= \lim_{m \to \infty} \frac{N\bigl( t_m + \frac{s}{N(t_m)}\bigr)}{ N( t_m)},
\end{align}
and moreover that
\begin{align} \label{bigN}
&\widetilde N(s) \le 1 \quad \forall s \in [0, \infty), \\
&\liminf_{s \to \infty} \widetilde N(s) = 0.
\end{align}
By the uniform bounds of~\eqref{eq:h1-regN}, we see that
\begin{align}
\| \vec w(s) \|_{\dot{\mathcal{H}}^1} \lesssim \widetilde N(s)^{\frac{5-p}{2(p-1)}} \quad \forall \, s \in [0, \infty),
\end{align}
and hence there exists a sequence of times $s_n \to \infty$ along which
\begin{align}
\| \vec w(s_n) \|_{\dot{\mathcal{H}}^1} \lesssim \widetilde N(s_n)^{\frac{5-p}{2(p-1)}} \to 0 \quad \text{as} \quad n \to \infty.
\end{align}
Using the above, Sobolev embedding, and interpolation, along the same sequence of times we have
\begin{align}
\| w(s_n) \|_{L^{p+1}} \lesssim \| w(s_n) \|_{\dot H^{\frac{3(p-1)}{2(p+1)}}} \to 0 \quad \text{as} \quad n \to \infty.
\end{align}
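For the reader's convenience, the numerology behind this last display is as follows (here $\bar s$ is shorthand introduced only for this remark). Writing $\bar s := \frac{3(p-1)}{2(p+1)}$, the Sobolev embedding $\dot H^{\bar s}(\mathbb{R}^3) \hookrightarrow L^{p+1}(\mathbb{R}^3)$ applies because
\[
\frac{1}{2} - \frac{\bar s}{3} = \frac{1}{2} - \frac{p-1}{2(p+1)} = \frac{(p+1)-(p-1)}{2(p+1)} = \frac{1}{p+1},
\]
and since $s_p < \bar s \le 1$ for $3 < p \le 5$, interpolation gives
\[
\| w(s_n) \|_{\dot H^{\bar s}} \le \| w(s_n) \|_{\dot H^{s_p}}^{1-\lambda} \, \| w(s_n) \|_{\dot H^{1}}^{\lambda}, \qquad \bar s = (1-\lambda) s_p + \lambda,
\]
so the uniform $\dot H^{s_p}$ bound coming from pre-compactness, together with the decay of $\|\vec w(s_n)\|_{\dot{\mathcal{H}}^1}$, yields the claimed convergence.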
But then, since the energy of $\vec w(s)$ is well-defined and conserved, we must have
\begin{align}
E( \vec w) = 0.
\end{align}
For the defocusing equation we may immediately conclude that $\vec w(s) \equiv 0$. \end{proof}
\begin{rem}
As in Section~\ref{s:soliton}, these arguments readily adapt to the focusing setting.
\end{rem}
\begin{proof}[Sketch of the proof of Proposition~\ref{p:casc-reg}] The argument is nearly identical to the proof of Proposition~\ref{p:sol-reg} in Section~\ref{s:soliton}, hence rather than repeat the entire proof, we instead summarize how the uniform boundedness of the numbers $C(t_0)$ allows us to proceed as in Section~\ref{s:soliton-reg}. The main idea is that the boundedness of these constants means that for each $t_0 \in \mathbb{R}$ we only have to wait a uniformly bounded amount of time, where time is measured relative to the scale $N(t)$, for the forward and backward light cones based at $(t_0, x(t_0))$ to capture the bulk of the solution. Consequently, we can apply the same techniques that were developed in Section~\ref{s:soliton-reg} directly and implement a double Duhamel argument. In order to estimate the norm at a time $t=t_0$, we recall the definitions of $t_1, t_{\pm}$ above and decompose space-time into three regions:
\begin{enumerate}
\item[A)] \textbf{Region A}: $[t_-, t_+] \times \mathbb{R}^3$,
\item[B)] \textbf{Region B}: the forward (resp.\ backward) light-cones from
\[
\{t_+\} \times \{x \,:\, |x - x(t_1)| \geq |t_+ - t_1|\},
\]
and
\[
\{t_-\} \times \{x \,:\, |x - x(t_1)| \geq |t_- - t_1|\},
\]
\item[C)] \textbf{Region C}: $\mathbb{R} \times \mathbb{R}^3 \,\setminus$ (\textbf{Region A} $\cup$ \textbf{Region B}).
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[width=14cm]{drawing_section6.pdf}
\caption{A depiction of the regions \textbf{A, B} and \textbf{C} in the case that $x(t) = 0$.} \label{fig:regions_swordandshield}
\end{figure}
On \textbf{Region A}, we control the solution by dividing the time interval $[t_-, t_+]$ into finitely many sufficiently small time strips on which we can use Lemma~\ref{l:DL1lem}.
The main difficulty is that we need to ensure that we can uniformly control the number of small strips needed to accomplish this (this type of uniform control was guaranteed in Section~\ref{s:soliton} because we had $N(t) \equiv 1$ there). Here, the boundedness of the constants $C(t_0)$ is used to achieve this uniformity.
From Lemma~\ref{l:DL1lem} we know that for each $\eta>0$ there exists $\delta>0$ such that
\begin{align}
\|u\|_{L^{2(p-1)}_{t,x}([t - \frac{\delta}{N(t)}, t+ \frac{\delta}{N(t)}] \times \mathbb{R}^3)} \leq \eta \quad \forall t \in \mathbb{R}.
\end{align}
Fix this $\delta > 0$. Examining the proof of the estimates used to control the solution on \textbf{Region A} in Section~\ref{s:soliton}, see \eqref{eq:regions}, we need to show that there exists a uniformly (in $t_0$) bounded number $M>0$ of times $t_m$, $-M \le m \le M$, with $t_- \le t_{m}\le t_+$, such that the corresponding intervals $I_{-M}, \dots, I_{M}$ with
\begin{align}
I_m := \Bigl[t_m - \frac{\delta}{N(t_m)}, \, t_m+ \frac{\delta}{N(t_m)}\Bigr]
\end{align}
satisfy
\begin{align}
[t_-, t_+] \subset \bigcup_{m = -M}^M I_m.
\end{align}
In this case we obtain
\[
\|u\|^{2(p-1)}_{L^{2(p-1)}_{t,x}([t_-, t_+] \times \mathbb{R}^3)} \lesssim \sum_{m=-M}^{M} \|u\|^{2(p-1)}_{L^{2(p-1)}_{t,x}(I_m \times \mathbb{R}^3)} \lesssim \int_{t_-}^{t_+} N(t)\,\mathrm{d} t,
\]
and, since
\begin{align}\label{equ:int_bds}
\int_{t_-}^{t_+} N(t) \, \mathrm{d} t = \tau(t_+) - \tau(t_-) \le 4C
\end{align}
by construction, this would yield the desired upper bound.
Hence, we now turn to the argument that the intervals on which we can control the $L^{2(p-1)}_{t,x}$ norm will exhaust the time interval $[t_-, t_+]$ after finitely many steps. Since $|N'(t)| \lesssim N(t)^2$, for any $t_1, t_2 \in [t_-, t_+]$ which satisfy $|t_1 - t_2| \leq \delta / N(t_1)$, we have
\[
N(t_1) - \delta N(t_1) \lesssim N(t_2) \lesssim N(t_1) + \delta N(t_1).
\]
Consequently, for any $t_1 \in [t_-, t_+]$ we must have
\[
\int_{t_1 - \delta / N(t_1)}^{t_1 + \delta / N(t_1)} N(t) \, \mathrm{d} t \geq 2\delta - 2\delta^2,
\]
which for any $0 < \delta < \frac{1}{2}$ yields
\begin{align}\label{equ:lwr_bds}
\int_{t_1 - \delta / N(t_1)}^{t_1 + \delta / N(t_1)} N(t) \, \mathrm{d} t \geq \delta.
\end{align}
By \eqref{equ:int_bds} and \eqref{equ:lwr_bds},
\begin{align}
4C \geq \int_{t_-}^{t_+} N(t) \, \mathrm{d} t &= \int_{t_-}^{t_- + \delta / N(t_-)} N(t) \, \mathrm{d} t + \int_{t_- + \delta / N(t_-)}^{t_+} N(t ) \, \mathrm{d} t \\
&\geq \int_{t_- + \delta / N(t_-)}^{t_+} N(t) \, \mathrm{d} t + \delta,
\end{align}
hence the positivity of $N(t)$ implies that by iterating this procedure we are able to cover the whole interval $[t_-, t_+]$ by at most $4C / \delta$ many intervals of length $\delta / N(t)$ on which we can control the $L^{2(p-1)}_{t,x}$ norm of the critical element.
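To make the counting explicit (a sketch, with unoptimized constants): choose the times $t_m$ greedily from left to right so that consecutive windows abut. Each window $I_m$ carries mass $\int_{I_m} N(t)\,\mathrm{d} t \geq \delta$ by \eqref{equ:lwr_bds}, while each point of $[t_-, t_+]$ lies in at most two windows, whence
\[
(2M+1)\, \delta \leq \sum_{m=-M}^{M} \int_{I_m} N(t)\,\mathrm{d} t \leq 2 \int_{t_-}^{t_+} N(t)\,\mathrm{d} t \leq 8C,
\]
so in any case $O(C/\delta)$ windows suffice, uniformly in $t_0$.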
On \textbf{Region B}, we use \eqref{equ:compactness_small_data} to apply the small data theory at times $t_{\pm}$, which, together with finite speed of propagation, yields a uniform bound on the solution. Finally, on \textbf{Region C}, we may use the sharp Huygens principle exactly as in Section~\ref{s:soliton-reg}.
Altogether, using arguments from Section~\ref{s:soliton}, this will yield that
\begin{equation}
\| u(t_{1}) \|_{\dot {\mathcal{H}}^1} \lesssim N(t_{1})^{\frac{5 - p}{2(p - 1)}}.
\end{equation}
For more details, we refer the reader to \cite{DL2}. By continuation of regularity and \eqref{equ:lwr_bds}, this implies
\begin{equation}
\| u(t_{0}) \|_{\dot {\mathcal{H}}^1} \lesssim N(t_{1})^{\frac{5 - p}{2(p - 1)}},
\end{equation}
where the implicit constant again depends on $C$. Finally, since $|N'(t)| \lesssim N(t)^{2}$,
\begin{equation}
N(t_{0}) \sim_{C} N(t_{1}),
\end{equation}
which completes the proof.
\end{proof}
\subsection{Case 2: \texorpdfstring{$C(t)$}{C} is not uniformly bounded} In this case we will show how to extract a self-similar-like critical element by taking an appropriate limit. The arguments from Section~\ref{s:ss}, specifically Proposition~\ref{P:reg-jump-ss}, then allow us to conclude that any such solution must be $\equiv 0$, which is a contradiction.
By assumption, there exists a sequence $\{t_n\}$ such that
\[
C(t_{n}) \geq 2n.
\]
Now define
\[
I_{n} = [t_{n} - \kappa_{-}(t_{n}, n) N(t_{n})^{-1}, \, t_{n} + \kappa_{+}(t_{n}, n) N(t_{n})^{-1}].
\]
Borrowing language from \cite{TVZ}, since $C(t_{n}) \geq 2n$, we show that either all sufficiently late times $t \in I_n$ are future focusing, that is,
\begin{equation}
\forall \tau \in I_{n} : \tau > t, \quad |x(\tau) - x(t)| \geq |\tau - t| - \frac{R(\eta)}{N(\tau)},
\end{equation}
or all sufficiently early times $t \in I_{n}$ are past focusing, that is,
\begin{equation}
\forall \tau \in I_{n} : \tau < t, \quad |x(t) - x(\tau)| \geq |t - \tau| - \frac{R(\eta)}{N(\tau)}.
\end{equation}
Indeed, suppose that there exist $t_{-}^{n}, t_{+}^{n} \in I_{n}$ such that $\tau(t_{+}^{n}) - \tau(t_{-}^{n}) \geq C_{n}$ for some $C_{n} \nearrow \infty$ as $n \nearrow \infty$, $t_{-}^{n}$ is future focusing, $t_{+}^{n}$ is past focusing, and $t_{-}^{n} < t_{+}^{n}$. In that case,
\begin{equation}
N(t) \sim N(\tau), \quad \forall t, \tau \in [t_{-}^{n}, t_{+}^{n}],
\end{equation}
with constant independent of $n$. For $n$ sufficiently large this violates subluminality.
Therefore, suppose without loss of generality that for $n$ sufficiently large, all sufficiently late times, say all
\begin{equation}
t \in [t_{n} + \kappa_{+}(t_{n}, n/2) N(t_{n})^{-1}, \, t_{n} + \kappa_{+}(t_{n}, n) N(t_{n})^{-1}] =: I_{n}',
\end{equation}
are future focusing.
First, we note that if $t \in I_{n}$ is future focusing, then for any $\tau \in I_{n}$, $\tau > t$,
\begin{equation}\label{monotone}
N(\tau) \leq \frac{R(\eta)}{c} \inf_{t < s < \tau} N(s).
\end{equation}
Indeed, for any $\tau \in I_{n}$, $\tau > t$, $|x(\tau) - x(t)| \geq |\tau - t| - \frac{R(\eta)}{N(\tau)}$. Then if $N(t) \leq \frac{c N(\tau)}{R(\eta)}$,
\begin{equation}
|x(t) - x(\tau)| \geq |t - \tau| - \frac{c}{N(t)},
\end{equation}
and therefore we conclude that $N(\tau) \leq \frac{1}{c^{2}} N(t) \leq \frac{N(\tau)}{c R(\eta)}$, which is a contradiction for $R(\eta)$ sufficiently large. Note that in the case of past focusing times, a similar argument yields a lower bound in place of \eqref{monotone}.
Consequently, for any $t \in I_{n}'$,
\begin{equation}
N(t) \leq \inf_{\tau < t : \tau \in I_{n}'} N(\tau).
\end{equation}
In particular, modifying by a constant, $N(t)$ may be replaced by $\widetilde{N}(t)$ on $I_{n}'$, where
\begin{equation}
\widetilde{N}(t) = \inf_{t_{n} + \kappa_{+}(t_{n}, \frac{n}{2})N(t_{n})^{-1} < \tau < t} N(\tau).
\end{equation}
Clearly, $\widetilde{N}(t)$ is monotone decreasing. Furthermore, taking $n \to \infty$, we will extract from this construction a genuinely self-similar solution.
The main idea is that forward in time, on longer and longer time intervals, the pre-compact solution expands to fill the light cone. This observation will enable us to extract a solution which ``looks self-similar'' on $[1, \infty)$, and we can then rescale that solution to extract a true self-similar solution on $[0,\infty)$. We proceed with this argument now.
We begin by simplifying our notation, setting
\begin{align}
t_{-}^{n} &= t_{n} + \kappa_{+}(t_{n}, \tfrac{n}{2}) N(t_{n})^{-1},\\
t_{+}^{n} &= t_{n} + \kappa_{+}(t_{n}, n) N(t_{n})^{-1}.
\end{align}
By definition of subluminality (see Definition~\ref{d:sl}), it holds uniformly for all $t \in I_{n}'$ that
\begin{equation}
\widetilde{N}(t) (t - t_{-}^{n}) \lesssim 1,
\end{equation}
independently of $n$. By finite propagation speed, we further have the uniform lower bound
\begin{equation}
\widetilde{N}(t) (t - t_{-}^{n}) \gtrsim 1
\end{equation}
for all $t \in I_{n}'$ such that $t - t_{-}^{n} \geq \frac{\delta}{N(t_{-}^{n})}$.
Now set
\[
K_n := [\kappa_{+}(t_{n}, n) - \kappa_{+}(t_{n}, n/2)] N(t_{n})^{-1} \cdot N(t_{-}^{n}).
\]
Since
\begin{align}\label{equ:cn_unbdd}
\int_{t_{-}^{n}}^{t_{+}^{n}} \widetilde{N}(t) \, \mathrm{d} t \sim \frac{n}{2} \to \infty,
\end{align}
and $\widetilde{N}(t) \leq \widetilde{N}(t_{-}^{n})$ for all $t \in I_{n}'$, we see that if $K_n \leq C$ for all $n \in \mathbb{N}$, then
\[
\int_{t_{-}^{n}}^{t_{+}^{n} } \widetilde{N}(t) \, \mathrm{d} t \lesssim 1,
\]
which contradicts \eqref{equ:cn_unbdd}. Hence we may conclude that $K_n$ is unbounded. We can then define a rescaled sequence as follows: set
\[
u_n(0,x) = \frac{1}{\widetilde{N}(t_{-}^{n})^\frac{2}{p-1}} u \biggl(t_{-}^{n}, \, x(t_{-}^{n}) + \frac{x}{\widetilde{N}(t_{-}^{n})}\biggr),
\]
\[
\partial_t u_n(0,x) = \frac{1}{\widetilde{N}(t_-^n)^{\frac{2}{p-1} + 1}} \partial_t u \biggl(t_-^n, \, x(t_-^n) + \frac{x}{\widetilde{N}(t_-^n)}\biggr),
\]
and let
\[
\vec w_n(1) = \left( u_n(0, x), \, \partial_t u_n(0,x) \right).
\]
By pre-compactness of the trajectory of $\vec u$ in $\dot {\mathcal{H}}^{s_p}$ (modulo symmetries), the rescaled initial data converges, that is, $\vec w_n(1) \to \vec w_\infty$ in $\dot {\mathcal{H}}^{s_p}$. We let $\vec w(s)$ be the evolution of $\vec w_\infty =: \vec w(1)$; then $\vec w(s)$ has the compactness property with a new scaling parameter $\widehat{N}(s)$, given by
\[
\widehat{N}(s) = \lim_{n \to \infty} \frac{\widetilde{N}\bigl(t_{-}^{n} + \frac{s}{\widetilde{N}(t_{-}^{n})}\bigr)}{\widetilde{N}(t_{-}^{n})}.
\]
Hence we have
\[
cs \leq \frac{1}{\widehat{N}(s)} \leq s \quad \textup{for all } s > 1.
\]
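This last display is simply the rescaled form of the two preceding bounds: at rescaled time $s$, the corresponding original time is $t = t_{-}^{n} + s/\widetilde{N}(t_{-}^{n})$, so that $t - t_{-}^{n} = s/\widetilde{N}(t_{-}^{n})$ and
\[
\widetilde{N}(t) \, (t - t_{-}^{n}) = \frac{\widetilde{N}\bigl(t_{-}^{n} + \frac{s}{\widetilde{N}(t_{-}^{n})}\bigr)}{\widetilde{N}(t_{-}^{n})} \, s \longrightarrow \widehat{N}(s) \, s \quad \text{as } n \to \infty.
\]
The uniform bounds $\widetilde{N}(t)(t - t_{-}^{n}) \lesssim 1$ and $\gtrsim 1$ then translate, after adjusting constants, into the two-sided bound on $1/\widehat{N}(s)$ for $s > 1$.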
We may also assume without loss of generality that $\vec w_\infty$ has the compactness property with translation parameter $\widetilde{x}(s) = 0$: by finite speed of propagation, $\widetilde{x}(s)$ must remain bounded, and hence we may, up to passing to a subsequence, obtain a pre-compact solution with $\widetilde{x}(s) = 0$ by applying a fixed translation. Finally, we consider one last sequence of times $\{s_n\}$ with $s_n \to \infty$ and we define
\[
w_n(1,x) = \frac{1}{(s_n)^\frac{2}{p-1}} w \biggl(s_n, \frac{x}{s_n}\biggr), \quad \partial_t w_n(1,x) = \frac{1}{(s_n)^{\frac{2}{p-1} + 1}} \partial_t w \biggl(s_n, \frac{x}{s_n}\biggr).
\]
We set
\[
\vec v_n(1) = \left( w_n(1, x), \, \partial_t w_n(1,x) \right),
\]
which gives rise to a corresponding solution $\vec v_n(\tilde{s})$ with $\widehat{N}(\tilde{s}) = \tilde{s}^{-1}$ on $[\frac{1}{s_n}, \infty)$. We can then take the limit $n \to \infty$, which yields convergence $\vec v_n \to \vec v_\infty$ in $\dot {\mathcal{H}}^{s_p}$ and a solution $\vec v$ with initial data $\vec v_\infty$ which is self-similar on $[0, \infty)$.
\section{The traveling wave critical element} \label{s:hans}
In this section we preclude the possibility of the existence of a `traveling wave' critical element.
Recall the definition of a traveling wave critical element.
\begin{defn}[Traveling wave]
We say $\vec u(t) \neq 0$ is a \emph{traveling wave critical element} if $\vec u(t)$ is a global-in-time solution to~\eqref{eq:nlw} such that the set
\begin{align}
K:= \left\{ \left( u\left( t, \, x(t) + \cdot \right), \, \partial_t u\left( t, \, x(t) + \cdot \right) \right) \mid t \in \mathbb{R} \right\}
\end{align}
is pre-compact in $\dot{H}^{s_p} \times \dot H^{s_p-1} (\mathbb{R}^3)$, where the function $x: \mathbb{R} \to \mathbb{R}^3$ satisfies
\begin{align}
x(0)&= 0, \label{eq:xhans0}\\
\abs{t} - C_1 &\le \abs{x(t)} \le \abs{t} + C_1, \label{eq:xhans1} \\
\abs{ x(t) - (t, 0, 0)} &\le C_1 \abs{t}^{\frac{1}{2}}, \label{eq:xhans2}
\end{align}
for some uniform constant $C_1>0$.
\end{defn}
The main result of this section is the following proposition.
\begin{prop} \label{p:hans_zero} There are no traveling wave critical elements, in the sense of Case (IV) of Proposition~\ref{p:cases}.
\end{prop}
To prove Proposition~\ref{p:hans_zero}, we will show that any traveling wave critical element would enjoy additional regularity in the $x_2$ and $x_3$ directions. This will allow us to utilize a direction-specific Morawetz-type estimate to reach a contradiction. We will require an additional technical ingredient, namely, a long-time Strichartz estimate in the spirit of~\cite{D-JAMS, D-Duke}.
\subsubsection{Main ingredients in the proof}
The long-time Strichartz estimates take the following form:
\begin{prop}[Long-time Strichartz estimate]\label{P:tw-lts} Suppose $\vec u(t)$ is a traveling wave critical element for \eqref{eq:nlw}. Let $\epsilon \in (0, 1)$ be arbitrary. Then
\[
\|u_{>N}\|_{S([t_0,t_0+N^{1- \epsilon}])} = o_N(1) \quad \text{as} \quad N \to \infty,
\]
where $S(I)$ denotes any admissible, non-endpoint Strichartz norm at Sobolev regularity $s = s_p$ on the time interval $I$.
\end{prop}
With the help of Proposition~\ref{P:tw-lts}, we will also prove the following additional regularity result.
\begin{prop}[Additional regularity]\label{padditional}
Suppose $\vec u(t)$ is a traveling wave critical element for \eqref{eq:nlw}. Then for any $0<\nu<\frac12$,
\begin{equation}\label{equ:hans_added_reg}
\| |\partial_{2}|^{1 - \nu} u \|_{L_{t}^{\infty} L_x^{2}(\mathbb{R}\times\mathbb{R}^3)} + \| |\partial_{3}|^{1 - \nu} u \|_{L_{t}^{\infty} L_x^{2}(\mathbb{R}\times\mathbb{R}^3)} < \infty.
\end{equation}
\end{prop}
Using Proposition~\ref{P:tw-lts} and Proposition~\ref{padditional}, we can then prove the following Morawetz-type estimate. In the sequel, we use the notation
\[
x=(x_1, x_{2,3}).
\]
\begin{prop}[Morawetz-type estimate]\label{P:tw-morawetz} Suppose $\vec u(t)$ is a traveling wave critical element for \eqref{eq:nlw}. Then there exist $\delta>0$ and $\epsilon>0$ such that
\[
\lim_{T\to\infty} \frac{1}{T^{1-\epsilon}}\int_0^{T^{1-\epsilon}} \int_{|x_{2,3}|\leq T^\delta} |u_{\leq T}(t,x)|^{p+1}\,\mathrm{d} x\,\mathrm{d} t = 0.
\]
\end{prop}
Combining Proposition~\ref{P:tw-morawetz} with the nontriviality of critical elements will yield a contradiction and complete the proof of Proposition~\ref{p:hans_zero}.
We turn to the proofs of the three preceding propositions. In Section~\ref{s:mor} we also give the proof of Proposition~\ref{p:hans_zero}.
\subsection{Long-time Strichartz estimates} \label{s:ltse}
In this subsection we prove the long-time Strichartz estimate, Proposition~\ref{P:tw-lts}, and then deduce a few technical corollaries.
\begin{proof}[Proof of Proposition~\ref{P:tw-lts}] For technical reasons we fix a small parameter $0<\theta\ll1$ and introduce the following norm: given a time interval $I$,
\begin{align}\label{equ:lts_space}
\|u\|_{S_{\theta}(I)} & = \|u\|_{L_{t,x}^{2(p-1)}} + \| |\nabla|^{-\frac{2 -3\theta}{2(p-1)}}u\|_{L_t^{p-1}L_x^{\frac{2(p-1)}{\theta}}} \\
& \quad + \| |\nabla|^{-\frac{1-\theta}{p-1}} u\|_{L_t^{\frac{2(p-1)}{2-\theta}}L_x^{\frac{2(p-1)}{\theta}}} + \| |\nabla|^{s_p-\theta}u \|_{L_t^{\frac{2}{\theta}} L_x^{\frac{2}{1-\theta}}} \\
& \quad + \| |\nabla|^{\frac{2 s_p}{3} - \frac{1}{3} }u \|_{L^\frac{6}{1+s_p}_t L^\frac{6}{2-s_p}_x } + \| |\nabla| ^{\frac{3}{4} - \frac{3}{2(p-1)}} u \|_{L^{2(p-1)}_t L^4_x},
\end{align}
where all space-time norms are over $I\times\mathbb{R}^3$. Restrictions will be put on $\theta$ below. One can check that each of these norms corresponds to a wave-admissible exponent pair at $\dot H^{s_p}$ regularity; this already requires $0<\theta<p-3$. We will prove Proposition~\ref{P:tw-lts} for the space $S_{\theta}$ and note here that the same estimates then easily follow for the whole family of admissible Strichartz norms. We also remark that a nearly identical (but simpler) argument works in the case $p=3$, with the caveat that we need to perturb away from the inadmissible $(2,\infty)$ endpoint.
Let $\eta_0>0$ and $\epsilon>0$. We will actually prove that there exists $N_0\gg 1$ such that for $N\geq N_0$, we have
\[
\|u_{>N}\|_{S_\theta([t_0,t_0+(\frac{N}{N_0})^{1-\epsilon}])} <\eta_0
\]
for any $t_0\in\mathbb{R}$ and $\theta < 2 \epsilon/3$. This implies the estimate appearing in the statement of Proposition~\ref{P:tw-lts} upon enlarging $\epsilon$ and $N_0$; indeed, $N^{1-\epsilon'}\leq (\tfrac{N}{N_0})^{1-\epsilon}$ provided $N\geq N_0^{\frac{1-\epsilon}{\epsilon'-\epsilon}}$.
By compactness and $N(t)\equiv 1$, there exists $N_0$ sufficiently large such that
\[
\|u_{>N_0}\|_{S_\theta([t_0,t_0+9^{1-\epsilon}])} <\tfrac12\eta_0
\]
for any $t_0\in\mathbb{R}$. This implies the desired estimate for $N_0\leq N\leq 9N_0$. We will prove the result for larger $N$ by induction.
Note that by choosing $N_0$ possibly even larger, we can guarantee
\begin{equation}\label{tw-lts-small1}
\|P_{>N}\vec u\|_{L_t^\infty \dot{\mathcal{H}}^{s_p}(\mathbb{R} \times \mathbb{R}^{3})} <\tfrac12\eta_0
\end{equation}
for any $N\geq N_0$.
Before completing the inductive step, we make a few simplifications. First, by time-translation invariance, it suffices to consider $t_0=0$. Next, to keep formulas within the margins, we will assume all space-time norms are over $[0,(\tfrac{N}{N_0})^{1-\epsilon}]\times\mathbb{R}^3$ unless otherwise stated.
By Taylor's theorem, we can write (suppressing the integration measures $\mathrm{d}\theta_i$ and multiplicative constants)
\begin{align}
F(u) &= F(u_{\leq N}) + u_{> N} \int_0^1 F'(u_{< N} + \theta_1 u_{>N})\\
&= F(u_{\leq N}) + u_{> N} F'(u_{< N}) + u_{>N}^2 \iint_0^1 F''(u_{< N} + \theta_1 \theta_2 u_{>N})\\
&= F(u_{\leq N}) + u_{> N} F'(u_{< N}) + u_{>N}^2 F''(u_{< N}) \\
& \hspace{14mm}+ u_{>N}^3 \iiint_0^1 F'''(u_{< N} + \theta_1 \theta_2 \theta_3 u_{>N})
\end{align}
for any $N$. Thus (ignoring absolute values and constants) we need to estimate four types of terms
\[
u_{>\frac{N}{8}} u_{\leq \frac{N}{8}}^{p-1} + u_{>\frac{N}{8}}^2 u_{<\frac{N}{8}}^{p-2} + u_{\leq \frac{N}{8}}^p + u_{>\frac{N}{8}}^3 F_2 = : I + II + III + IV,
\]
where
\[
F_2 = \iiint_0^1 F'''\bigl(u_{< \frac{N}{8}} + \theta_1 \theta_2 \theta_3 u_{>\frac{N}{8}}\bigr).
\]
We will estimate the contribution of each term using Strichartz estimates.
\subsection*{Term I} We let $0\leq \theta<p-3$ as in \eqref{equ:lts_space} and further impose $\theta<\tfrac23\epsilon$. We estimate
\begin{align*}
\| |\nabla|^{s_p-1}& P_{>N}( u_{\leq \frac{N}{8}}^{p-1}u_{\geq \frac{N}{8}})\|_{L_t^{1}L_x^{2} } \\&\lesssim N^{s_p-1} \|u_{\leq \frac{N}{8}}\|^{p-1}_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \|u_{>\frac{N}{8}}\|_{L^{\infty}_t L^{\frac{2}{1-\theta}}_x}\\
& \lesssim N^{-1+\frac{3\theta}{2}} \|u_{\leq \frac{N}{8}}\|^{p-1}_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \||\nabla|^{s_p - \frac{3\theta}{2} }u_{>\frac{N}{8}}\|_{L^{\infty}_t L^{\frac{2}{1-\theta}}_x} \\
& \lesssim \bigl[N^{-\frac{2-3\theta}{2(p-1)}}\|u_{\leq \frac{N}{8}}\|_{L_t^{p-1} L_x^{\frac{2(p-1)}{\theta}}}\bigr]^{p-1} \| |\nabla|^{s_p}u_{>\frac{N}{8}}\|_{L_t^\infty L_x^2}.
\end{align*}
Recalling \eqref{tw-lts-small1}, it remains to prove
\begin{equation}\label{tw-lts-term2}
N^{-\frac{2-3\theta}{2(p-1)}}\|u_{\leq \frac{N}{8}}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \lesssim \eta_0.
\end{equation}
We let $C_0\gg1$ to be determined shortly and begin by splitting
\begin{align*}
N^{-\frac{2 -3\theta}{2(p-1)}} \|u_{\leq \frac{N}{8}}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} &\lesssim N^{-\frac{2 -3\theta}{2(p-1)}}\|u_{\leq C_0}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \\
& \quad +N^{-\frac{2 -3\theta}{2(p-1)}} \|u_{C_0\leq\cdot\leq N_0}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \\
& \quad + N^{-\frac{2 -3\theta}{2(p-1)}} \sum_{N_0\leq M\leq\frac{N}{8}} \|u_{M}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}}.
\end{align*}
By Bernstein's inequality and $N(t)\equiv 1$, we can estimate
\begin{align*}
N^{-\frac{2 -3\theta}{2(p-1)}} \|u_{\leq C_0}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} & \lesssim N^{-\frac{2 -3\theta}{2(p-1)}}C_0^{\frac{2-3\theta}{2(p-1)}} \bigl(\tfrac{N}{N_0}\bigr)^{\frac{1-\epsilon}{p-1}}
\end{align*}
on $[0,(\tfrac{N}{N_0})^{1-\epsilon}]\times\mathbb{R}^3$. To guarantee that the overall power of $N$ is negative, we need
\[
\frac{3\theta}{2} < \epsilon.
\]
Thus, for $N_0$ sufficiently large depending on $C_0$, we may guarantee that
\[
N^{-\frac{2 -3\theta}{2(p-1)}} \|u_{\leq C_0}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \lesssim \eta_0.
\]
Next, choosing $C_0=C_0(\eta_0)$ large enough and using $N(t)\equiv 1$, we estimate
\begin{align*}
&N^{-\frac{2 -3\theta}{2(p-1)}}\|u_{C_0\leq \cdot \leq N_0} \|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \\
& \lesssim N^{-\frac{2 -3\theta}{2(p-1)}}N_0^{\frac{2-3\theta}{2(p-1)}}\||\nabla|^{- \frac{2-3\theta}{2(p-1)}}u_{>C_0}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \\
& \lesssim \eta_0 \bigl(\tfrac{N}{N_0}\bigr)^{-\frac{2 -3\theta}{2(p-1)}+\frac{1-\epsilon}{p-1}} \lesssim \eta_0.
\end{align*}
For the final term, we begin by estimating
\begin{align*}
N^{-\frac{2 -3\theta}{2(p-1)}}&\sum_{N_0\leq M\leq \frac{N}{8}} \|u_M\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \\
& \lesssim \sum_{N_0\leq M\leq \frac{N}{8}} \bigl(\tfrac{M}{N}\bigr)^{\frac{2 -3\theta}{2(p-1)}}\| |\nabla|^{-\frac{2 -3\theta}{2(p-1)}} u_M\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}}.
\end{align*}
We now apply the inductive hypothesis to the last term. To do so, we divide the interval $[0,(\tfrac{N}{N_0})^{1-\epsilon}]$ into $\approx (\tfrac{N}{M})^{1-\epsilon}$ intervals of length $(\tfrac{M}{N_0})^{1-\epsilon}$. Continuing from above, this leads to
\begin{align*}
N^{-\frac{2 -3\theta}{2(p-1)}}&\sum_{N_0\leq M\leq \frac{N}{8}} \|u_M\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \lesssim \sum_{N_0\leq M\leq \frac{N}{8}} \bigl(\tfrac{M}{N}\bigr)^{\frac{2 -3\theta}{2(p-1)}-\frac{1-\epsilon}{p-1}} \eta_0 \lesssim \eta_0,
\end{align*}
where we have used that the exponent appearing is, in this case, positive. This completes the estimation of Term I.
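For completeness, the positivity of this exponent is precisely the constraint $\theta < \tfrac23\epsilon$ imposed at the outset:
\[
\frac{2-3\theta}{2(p-1)} - \frac{1-\epsilon}{p-1} = \frac{2 - 3\theta - 2(1-\epsilon)}{2(p-1)} = \frac{2\epsilon - 3\theta}{2(p-1)} > 0 \iff \theta < \frac{2\epsilon}{3},
\]
and once the exponent $\alpha := \frac{2\epsilon - 3\theta}{2(p-1)}$ is positive, the dyadic sum $\sum_{N_0 \leq M \leq N/8} (M/N)^{\alpha}$ is a geometric series and hence bounded uniformly in $N$.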
\subsection*{Term II}
We estimate
\begin{align}
\| |\nabla|^{s_p - 1}& P_{>N}( u_{\leq \frac{N}{8}}^{p-2}u_{\geq \frac{N}{8}}^2)\|_{L_t^{1}L_x^{2} }\\
& \lesssim N^{ s_p-1 } \|u_{> \frac{N}{8}} \|^2_{L^{2(p-1)}_t L^4_x} \|u_{\leq \frac{N}{8}} \|_{L^{p-1}_t L^\infty_x}^{p-2}\\
& \lesssim N^{ s_p-1+ 1 - \frac{1}{p-1}} \|u_{> \frac{N}{8}} \|^2_{L^{2(p-1)}_t L^4_x} N^{- \frac{p-2}{p-1}} \|u_{\leq \frac{N}{8}} \|_{L^{p-1}_t L^\infty_x}^{p-2}\\
& \lesssim \bigl[ N^{ \frac{3}{4}- \frac{3}{2(p-1)}} \|u_{> \frac{N}{8}} \|_{L^{2(p-1)}_t L^4_x} \bigr]^2 N^{- \frac{p-2}{p-1}} \|u_{\leq \frac{N}{8}} \|_{L^{p-1}_t L^\infty_x}^{p-2}.
\end{align}
We can argue as above (now with $\theta = 0$) for the low frequency term, and we note that $(2(p-1), 4)$ is a wave-admissible pair at regularity
\[
\frac{3}{2} - \frac{1}{2(p-1)} - \frac{3}{4} = s_p - \left(\frac{3}{4} - \frac{3}{2(p-1)} \right),
\]
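For the reader's convenience, the admissibility bookkeeping behind this identity (using the standard convention that a wave-admissible pair $(q,r)$ with $\frac1q + \frac1r \le \frac12$ sits at scaling regularity $s = \frac32 - \frac3r - \frac1q$): for $(q,r) = (2(p-1), 4)$,
\[
\frac{1}{q} + \frac{1}{r} = \frac{1}{2(p-1)} + \frac14 \le \frac12 \quad \text{for } p \ge 3, \qquad s = \frac32 - \frac34 - \frac{1}{2(p-1)} = \frac34 - \frac{1}{2(p-1)},
\]
which, since $s_p = \frac32 - \frac{2}{p-1}$, is exactly $s_p - \bigl(\frac34 - \frac{3}{2(p-1)}\bigr)$.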
and we conclude using the inductive hypothesis on
\[
\| |\nabla| ^{\frac{3}{4} - \frac{3}{2(p-1)}} u_{> \frac{N}{8}} \|_{L^{2(p-1)}_t L^4_x}.
\]
\subsection*{Term III}
Next, using the fractional chain rule, we estimate
\begin{align*}
\| |\nabla|^{s_p-1} &P_{>N}(u_{\leq \frac{N}{8}}^{p})\|_{L_t^{1}L_x^{2} }\\
& \lesssim N^{s_p - 2} \|u_{\leq \frac{N}{8}}\|^{p-1}_{L_t^{\frac{2(p-1)}{2-\theta}}L_x^\frac{2(p-1)}{\theta}} \||\nabla| u_{\leq \frac{N}{8}}\|_{L^{\frac{2}{\theta}}_t L^{\frac{2}{1-\theta}}_x}\\
& \lesssim N^{-1 + \theta } \|u_{\leq \frac{N}{8}}\|^{p-1}_{L_t^{{\frac{2(p-1)}{2-\theta}}}L_x^\frac{2(p-1)}{\theta}} N^{- \theta + s_p - 1} \||\nabla| u_{\leq\frac{N}{8}}\|_{L^{\frac{2}{\theta}}_t L^{\frac{2}{1-\theta}}_x}.
\end{align*}
To complete the estimation of Term III, we need to prove
\begin{align*}
N^{-\frac{1-\theta}{p-1}}\|u_{\leq \frac{N}{8}}\|_{L_t^{\frac{2(p-1)}{2-\theta}} L_x^{\frac{2(p-1)}{\theta}}}+N^{-\theta+s_p-1}\| |\nabla| u_{\leq \frac{N}{8}}\|_{L_t^{\frac{2}{\theta}} L_x^{\frac{2}{1-\theta}}}& \lesssim \eta_0.
\end{align*}
For this, we argue as in Term I; that is, we split
\[
u_{\leq \frac{N}{8}} = u_{\leq C_0}+u_{C_0\leq\cdot\leq N_0}+\sum_{N_0\leq M \leq \frac{N}{8}} u_M
\]
and estimate each term separately, relying on the inductive hypothesis (and a splitting of the time interval) for the final sum. Comparing with those estimates, we see that this requires
\[
\tfrac{1-\theta}{p-1}-\tfrac{1-\epsilon}{p-1}>0
\]
to deal with the first term and
\[
\theta+1-s_p-\tfrac{\theta(1-\epsilon)}{2}>0
\]
to deal with the second term. These conditions are satisfied provided $0<\theta<\epsilon$.
\subsection*{Term IV}
We estimate
\[
\|u_{> \frac{N}{8}}^3 F_2 \|_{L^\frac{2}{1+s_p}_t L^\frac{2}{2-s_p}_x } \lesssim \|u_{> \frac{N}{8}}^p \|_{L^\frac{2}{1+s_p}_t L^\frac{2}{2-s_p}_x } + \|u_{> \frac{N}{8}}^3 u_{\leq \frac{N}{8}}^{p-3} \|_{L^\frac{2}{1+s_p}_t L^\frac{2}{2-s_p}_x }.
\]
For the first expression we estimate
\[
\|u_{> \frac{N}{8}}^p \|_{L^\frac{2}{1+s_p}_t L^\frac{2}{2-s_p}_x } = \|u_{> \frac{N}{8}} \|^p_{L^\frac{2p}{1+s_p}_t L^\frac{2p}{2-s_p}_x } \lesssim \eta_0^p,
\]
while for the second expression we have
\begin{align}
\|u_{> \frac{N}{8}}^3 u_{\leq \frac{N}{8}}^{p-3} \|_{L^\frac{2}{1+s_p}_t L^\frac{2}{2-s_p}_x } \lesssim \|u_{> \frac{N}{8}} \|_{L^\frac{6}{1+s_p}_t L^\frac{6}{2-s_p}_x }^3 \|u_{\leq \frac{N}{8}} \|^{p-3}_{L^\infty_{t,x}}.
\end{align}
Now,
\begin{align}
\|u_{\leq \frac{N}{8}} \|^{p-3}_{L^\infty_{t,x}} \lesssim N^{\frac{2(p-3)}{p-1}}\| u_{< \frac{N}{8}}\|^{p-3}_{L^{\infty}_{t} L^{\frac{3(p-1)}{2}}_x}\lesssim N^{\frac{2(p-3)}{p-1}}.
\end{align}
For the first term, we see that $(6/(1+s_p), 6/(2- s_p))$ is an admissible Strichartz pair at regularity
\[
s_p + \frac{1}{3} - \frac{2 s_p}{3} < s_p,
\]
and hence
\betagin{align}
&\|u_{> \frac{N}{8}}^3 u_{\leq \frac{N}{8}}^{p-3} \|_{L^\frac{2}{1+s_p}_t L^\frac{2}{2-s_p}_x } \\
& \lesssim N^{ 1 - 2s_p }\| |\nablabla|^{\frac{2 s_p}{3} - \frac{1}{3} }u_{> \frac{N}{8}} \|^3_{L^\frac{6}{1+s_p}_t L^\frac{6}{2-s_p}_x } N^{\frac{2(p-3)}{p-1}}\| u_{< \frac{N}{8}}\|^{p-3}_{L^{\infty}_{t} L^{\frac{3(p-1)}{2}}}.
\epsnd{align}
Finally, note that
\[
-2s_p + 1 + \frac{2(p-3)}{p-1} = - 3 +1 + \frac{4}{p-1} + \frac{2p - 6}{p-1} = 0.
\]
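Here we are using the scaling-critical regularity for \eqref{eq:nlw} in three dimensions, namely

```latex
s_p = \frac32 - \frac{2}{p-1},
\qquad\text{so that}\qquad
-2s_p = -3 + \frac{4}{p-1},
```

which is the substitution made in the first equality of the display above.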
Hence, by the inductive hypothesis, putting all the pieces of the argument together, we obtain
\[
\|u_{>N}\|_{S_{\theta}([0,(\frac{N}{N_0})^{1-\epsilon}])} \leq \tfrac12\eta_0 + C\eta_0^{3},
\]
which suffices to complete the induction for $\eta_0$ sufficiently small.
\end{proof}
We will need the following corollary of Proposition~\ref{P:tw-lts}, which provides some control over the low frequencies as well.
\begin{cor}[Control of low frequencies]\label{C:tw-lts} Suppose $\vec u$ is a traveling wave critical element for \eqref{eq:nlw}. Let $\epsilon>0$ and $0<\theta<\tfrac23\epsilon$. For any $\eta_0>0$ there exists $N$ sufficiently large such that
\begin{align}
\|u_{\leq N}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}([t_0,t_0+N^{1-\epsilon}]\times\mathbb{R}^3)} &\lesssim \eta_0 N^{\frac{2-3\theta}{2(p-1)}},\label{tw-clf1}\\
\|u_{\leq N}\|_{L_t^{\frac{2(p-1)}{2-\theta}}L_x^\frac{2(p-1)}{\theta}([t_0,t_0+N^{1-\epsilon}]\times\mathbb{R}^3)} &\lesssim \eta_0 N^{\frac{1-\theta}{p-1}},\nonumber \\
\| |\nabla| u_{\leq N}\|_{L^{\frac{2}{\theta}}_t L^{\frac{2}{1-\theta}}_x([t_0,t_0+N^{1-\epsilon}]\times\mathbb{R}^3)} &\lesssim \eta_0 N^{\theta - s_p + 1},\nonumber
\end{align}
uniformly over $t_0\in\mathbb{R}$.
\end{cor}
\begin{proof} We let $\eta_0>0$ be given and choose $N_0=N_0(\eta_0)\geq 1$ as in Proposition~\ref{P:tw-lts}. By time-translation invariance, it suffices to consider $t_0=0$. We focus our attention on \eqref{tw-clf1}, as the other estimates follow similarly. For $N\geq N_0$, we estimate
\begin{align*}
\|u_{\leq N}&\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}([0,N^{1-\epsilon}]\times\mathbb{R}^3)} \\
& \lesssim \|u_{\leq N_0}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}([0,N^{1-\epsilon}]\times\mathbb{R}^3)} \\
& \quad + \sum_{N_0\leq M\leq N} \|u_{M}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}([0,M^{1-\epsilon}]\times\mathbb{R}^3)} \\
& \quad + \sum_{N_0\leq M\leq N}\|u_{M}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}([M^{1-\epsilon},N^{1-\epsilon}]\times\mathbb{R}^3)}.
\end{align*}
For the first term, we use Bernstein's inequality and $N(t)\equiv 1$ to get
\[
\|u_{\leq N_0}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}([0,N^{1-\epsilon}]\times\mathbb{R}^3)} \lesssim N_0^{\frac{2-3\theta}{2(p-1)}}N^{\frac{1-\epsilon}{p-1}}.
\]
Recalling that $\frac{1-\epsilon}{p-1}<\frac{2-3\theta}{2(p-1)}$, we see that this term is acceptable provided we choose $N$ sufficiently large.
Next, we use Proposition~\ref{P:tw-lts} to estimate
\[
\sum_{N_0\leq M\leq N} \|u_M\|_{L_t^{p-1}L_x^{\frac{2(p-1)}{\theta}}([0,M^{1-\epsilon}]\times\mathbb{R}^3)} \lesssim \eta_0 \sum_{N_0\leq M \leq N} M^{\frac{2-3\theta}{2(p-1)}} \lesssim \eta_0N^{\frac{2-3\theta}{2(p-1)}},
\]
which is also acceptable.
For the remaining term, we split $[M^{1-\epsilon},N^{1-\epsilon}]$ into $\approx (\tfrac{N}{M})^{1-\epsilon}$ intervals of length $M^{1-\epsilon}$. Applying Proposition~\ref{P:tw-lts} once more, we have
\begin{align*}
\sum_{N_0\leq M\leq N}&\|u_{M}\|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}([M^{1-\epsilon},N^{1-\epsilon}]\times\mathbb{R}^3)} \\
& \lesssim \eta_0\sum_{N_0\leq M\leq N}M^{\frac{2-3\theta}{2(p-1)}}\bigl(\tfrac{N}{M}\bigr)^{\frac{1-\epsilon}{p-1}} \lesssim \eta_0N^{\frac{2-3\theta}{2(p-1)}},
\end{align*}
where we recall $0 <\theta<\tfrac23\epsilon$ in order to sum. This term is also acceptable, and so we complete the proof of \eqref{tw-clf1} and Corollary~\ref{C:tw-lts}. \end{proof}
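To make the role of the constraint $0<\theta<\tfrac23\epsilon$ in the preceding proof explicit, note that the final dyadic sum is geometric:

```latex
\sum_{N_0\leq M\leq N} M^{\frac{2-3\theta}{2(p-1)}}\bigl(\tfrac{N}{M}\bigr)^{\frac{1-\epsilon}{p-1}}
 = N^{\frac{1-\epsilon}{p-1}}\sum_{N_0\leq M\leq N} M^{\frac{2\epsilon-3\theta}{2(p-1)}}
 \lesssim N^{\frac{1-\epsilon}{p-1}}\,N^{\frac{2\epsilon-3\theta}{2(p-1)}}
 = N^{\frac{2-3\theta}{2(p-1)}},
```

where the sum over dyadic $M$ is dominated by its top term precisely because the exponent $\frac{2\epsilon-3\theta}{2(p-1)}$ is positive when $\theta<\tfrac23\epsilon$.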
Finally, we will need certain long-time Strichartz estimates with regularity in the $x_2, x_3$ directions.
\begin{cor}[Long-time Strichartz estimates for $\nabla_{x_2, x_3} u$]\label{cor:ltse_23}
Suppose that Proposition~\ref{padditional} holds with $\nu>0$. Then for any $\nu_0 > \nu$,
\[
\| |\nabla_{x_{2,3}}|^{1- \nu_0} u_{N}\|_{L^{\frac{2}{1-s_p}}_{t} L^{\frac{2}{s_p}}_x ([t_0, t_0 + N^{1-\epsilon}])} \lesssim N^{1-s_p}.
\]
\end{cor}
\begin{proof}
We only sketch this argument, as it follows in the same manner as the standard long-time Strichartz estimates with some additional technical details. First we note that $(2/(1-s_p), 2/s_p)$ is an admissible Strichartz pair at regularity $1- s_p$. By compactness, it suffices to argue with $t_0 = 0$. Let $S(I)$ denote the norm associated to any collection of Strichartz pairs at regularity $s= 0$. We will show that
\begin{align}\label{equ:tlong23}
\| |\nabla_{x_{2,3}}|^{1- \nu_0} u_{N}\|_{S ([t_0, t_0 + N^{1-\epsilon}])} \lesssim 1,
\end{align}
for $\nu_0 > \nu$, from which the result follows.
Let $\vec u$ be a solution with the compactness property on $\mathbb{R}$ with $N(t) = 1$. By the Gagliardo--Nirenberg inequality,
\begin{align}\label{gn_ineq}
\||\nabla_{2,3}|^{1- \nu_0} u \|_{L^2_{x_2, x_3}} \leq C \||\nabla_{2,3}|^{1- \nu} u \|_{L^2_{x_2, x_3}}^{\alpha} \| u\|_{L_{x_2,x_3}^{\frac{2}{1-s_p}}}^{1-\alpha}
\end{align}
for
\[
\alpha = \frac{1 - s_p - \nu_0}{1- s_p - \nu}.
\]
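This value of $\alpha$ is the natural interpolation bookkeeping: in the $x_{2,3}$ variables the $L^{2/(1-s_p)}$ norm scales like $s_p$ derivatives in $L^2$ (by two-dimensional Sobolev embedding), so matching regularities requires

```latex
\alpha(1-\nu) + (1-\alpha)\,s_p = 1-\nu_0
\quad\Longleftrightarrow\quad
\alpha = \frac{1-s_p-\nu_0}{1-s_p-\nu}.
```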
Next, we observe the Sobolev embedding
\begin{align}\label{equ:sobolev_embed}
\dot H_{x}^{s_p} \hookrightarrow L_{x_1}^2 L_{x_{2,3}}^{\frac{2}{1-s_p}},
\end{align}
which follows from Sobolev embedding in $\mathbb{R}^2$ and Plancherel:
\begin{align*}
\int\biggl(\int & |u(x_1,x_{2,3})|^{\frac{2}{1-s_p}}\,dx_{2,3} \biggr)^{1-s_p}\,dx_1 \\
& \lesssim \int \bigl| |\nabla_{x_{2,3}}|^{s_p} u\bigr|^2 \,\mathrm{d} x \sim \int \bigl| | \xi_{2,3} |^{s_p} \hat u(\xi)\bigr|^2 \,d\xi \lesssim \int \bigl| |\xi|^{s_p}\hat u(\xi)\bigr|^2 \,d\xi.
\end{align*}
Thus we may take the $L^2_{x_1}$ norm of both sides of \eqref{gn_ineq} and use H\"older's inequality on the right to conclude that the trajectory $\abs{\nabla_{2, 3}}^{1-\nu_0} u$ has the compactness property in $L^2_x$, and hence there exists $N_0 = N_0 (\eta_0)$ such that
\begin{align}\label{equ:base_case}
\|P_{>N} \abs{\nabla_{2, 3}}^{1- \nu_0} u\|_{S([0, \, 9^{1- \varepsilon}])} \leq \eta_0 \qquad \textup{for all } N \geq N_0(\eta_0),
\end{align}
which proves the base case, that is, \eqref{equ:tlong23} holds for $N_0 \leq N \leq 9 N_0$.
We now proceed to the inductive step. Suppose that \eqref{equ:tlong23} holds up to frequency $N_1$ for $N_1 \geq 9 N_0$. We will show that \eqref{equ:tlong23} holds for $N = 2N_1$. The argument we employ is similar to a persistence of regularity argument. Note that $\abs{\nabla_{2, 3}}^{1- \nu_0} u$ solves the equation
\[
\partial_{tt} \abs{\nabla_{2, 3}}^{1- \nu_0} u - \Delta \abs{\nabla_{2, 3}}^{1- \nu_0} u = \abs{\nabla_{2, 3}}^{1- \nu_0} F(u).
\]
By the Strichartz estimates we have
\begin{align}
&\| P_{> N} \abs{\nabla_{2, 3}}^{1- \nu_0} u \|_{S([0, \left( N /N_0\right)^{1-\varepsilon} ])} \\
&\lesssim \| P_{> N} \abs{\nabla_{2, 3}}^{1- \nu_0} \vec u \|_{L^\infty_t \dot {\mathcal{H}}_x^{0} ([0, \left( N /N_0\right)^{1-\varepsilon} ])} + \|P_{>N} \abs{\nabla_{2, 3}}^{1- \nu_0} F(u) \|_{{\mathcal{N}}([0, \left( N /N_0\right)^{1-\varepsilon} ])},
\end{align}
where ${\mathcal{N}}$ is the dual space to $S$. Let $\widehat{P}_M$ denote a Fourier projection in the $\xi_2, \xi_3$ variables. The first term can be bounded using compactness, so we focus on the second term. We again write
\begin{align}
F(u) &= F(u_{\leq N}) + u_{> N} \int_0^1 F'(u_{< N} + \theta u_{>N})\\
&= F(u_{\leq N}) + u_{> N} F'(u_{< N}) + u_{>N}^2 \iint_0^1 F''(u_{< N} + \theta_1 \theta_2 u_{>N})\\
&= F(u_{\leq N}) + u_{> N} F'(u_{< N}) + u_{>N}^2 F''(u_{< N}) \\
& \hspace{14mm}+ u_{>N}^3 \iiint_0^1 F'''(u_{< N} + \theta_1 \theta_2 \theta_3 u_{>N}).
\end{align}
We will estimate the first term as an example, since the other terms will be similar generalizations of the proof of Proposition~\ref{P:tw-lts}. We have
\begin{align}
& \||\nabla|^{-1} |\nabla_{2,3}|^{1-\nu_0} P_{>N} F(u_{\leq \frac{N}{8}})\|_{L^1_t L^2_x} \\
& \lesssim N^{- 1} \| |\nabla_{2,3}|^{1-\nu_0} P_{>N} F(u_{\leq \frac{N}{8}})\|_{L^1_t L^2_x} \\
& \lesssim N^{- 2} \| |\nabla_{2,3}|^{1-\nu_0} u_{\leq \frac{N}{8}}\|^{p-1}_{L_t^{\frac{2(p-1)}{2-\theta}}L_x^\frac{2(p-1)}{\theta}} \||\nabla| u_{\leq \frac{N}{8}}\|_{L^{\frac{2}{\theta}}_t L^{\frac{2}{1-\theta}}_x} \\
& \hspace{14mm} + N^{- 2} \| u_{\leq \frac{N}{8}}\|^{p-1}_{L_t^{\frac{2(p-1)}{2-\theta}}L_x^\frac{2(p-1)}{\theta}} \||\nabla||\nabla_{2,3}|^{1-\nu_0} u_{\leq \frac{N}{8}}\|_{L^{\frac{2}{\theta}}_t L^{\frac{2}{1-\theta}}_x}\\
& \lesssim N^{-1 + \theta - s_p} \| |\nabla_{2,3}|^{1-\nu_0} u_{\leq \frac{N}{8}}\|^{p-1}_{L_t^{\frac{2(p-1)}{2-\theta}}L_x^\frac{2(p-1)}{\theta}} N^{-1 - \theta + s_p} \||\nabla| u_{\leq \frac{N}{8}}\|_{L^{\frac{2}{\theta}}_t L^{\frac{2}{1-\theta}}_x} \\
& \hspace{14mm} + N^{- 1 + \theta} \| u_{\leq \frac{N}{8}}\|^{p-1}_{L_t^{\frac{2(p-1)}{2-\theta}}L_x^\frac{2(p-1)}{\theta}} N^{-1 - \theta} \||\nabla||\nabla_{2,3}|^{1-\nu_0} u_{\leq \frac{N}{8}}\|_{L^{\frac{2}{\theta}}_t L^{\frac{2}{1-\theta}}_x},
\end{align}
and all four terms can be treated analogously to the low frequency component in Term I in Proposition~\ref{P:tw-lts}. \end{proof}
\subsection{Proof of Proposition~\ref{P:tw-morawetz} and Proposition~\ref{p:hans_zero}, assuming Proposition~\ref{padditional}} \label{s:mor}
As mentioned above, the long-time Strichartz estimate (Proposition~\ref{P:tw-lts}) will be a key ingredient in proving additional regularity (Proposition~\ref{padditional}). Before turning to the rather technical proof, let us use Proposition~\ref{padditional} (together with Proposition~\ref{P:tw-lts} and Corollary~\ref{C:tw-lts}) to prove the Morawetz estimate, Proposition~\ref{P:tw-morawetz}. With the Morawetz estimate in hand, we can then quickly rule out the possibility of traveling waves and hence complete the proof of the main result, Proposition~\ref{p:hans_zero}.
We recall the notation $x=(x_1,x_{2,3})$ and similarly write $\xi=(\xi_1,\xi_{2,3})$.
\begin{proof}[Proof of Proposition~\ref{P:tw-morawetz}] Let $\psi:[0,\infty)\to\mathbb{R}$ be a smooth cutoff satisfying
\[
\begin{cases}
\psi(\rho) = 1 & \rho \leq 1, \\
\psi(\rho) = 0 & \rho >2.
\end{cases}
\]
We fix $R>0$ to be determined below and let $\psi_R(\rho) = \psi(\tfrac{\rho}{R})$. Next, let
\[
\chi_R(r) = \frac{1}{r}\int_0^r \psi_R(s)\,ds.
\]
We collect a few useful identities:
\begin{equation}\label{tw-chi-id}
\partial_k[x^k\chi_R] = \chi_R+\psi_R, \quad r\partial_r \chi_R = -\chi_R + \psi_R,
\end{equation}
and we recall the Sobolev embedding \eqref{equ:sobolev_embed}.
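Both identities follow by differentiating the defining relation $r\chi_R(r)=\int_0^r\psi_R(s)\,ds$:

```latex
\chi_R + r\chi_R' = \psi_R
\quad\Longrightarrow\quad
r\partial_r\chi_R = -\chi_R + \psi_R,
\qquad
\partial_k[x^k\chi_R] = 2\chi_R + r\partial_r\chi_R = \chi_R + \psi_R,
```

where $k$ is summed over $\{2,3\}$ (so that $\partial_k x^k = 2$ and $x^k\partial_k\chi_R = r\partial_r\chi_R$), anticipating the summation convention used below.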
In the following, we consider $\chi_R$ as a function of $|x_{2,3}|$. For $T>0$ and
\[
I:=P_{\leq T},
\]
we define the Morawetz quantity
\[
M(t) = \int_{\mathbb{R}^3} \chi_R Iu_t\, x^k \partial_k Iu \,\mathrm{d} x + \frac12\int_{\mathbb{R}^3} (\chi_R + \psi_R) Iu_t Iu\,\mathrm{d} x,
\]
where repeated indices are summed over $k\in\{2,3\}$.
We first compute the derivative of $M(t)$:
\begin{align*}
M'(t) & = \int \chi_R Iu_t (x^k\partial_k Iu_t) + \frac12 \int (\chi_R+\psi_R)(Iu_t)^2 \\
& \quad + \int \chi_R [x^k\partial_k Iu] Iu_{tt} + \frac12 \int(\chi_R+\psi_R)Iu Iu_{tt}.
\end{align*}
By \eqref{tw-chi-id} and integration by parts, we have
\[
\int [x^k \chi_R]\partial_k\tfrac 12 (Iu_t)^2 = -\frac12\int (\chi_R+\psi_R) (Iu_t)^2,
\]
so we are left to estimate
\begin{align}\label{tw-mor2}
\int \chi_R [x^k\partial_k Iu] Iu_{tt} + \frac12 \int(\chi_R+\psi_R)Iu Iu_{tt}.
\end{align}
Using the equation for $u$ yields
\[
Iu_{tt} = \Delta Iu -F(Iu)+ [F(Iu)-IF(u)],\qtq{where} F(z) = |z|^{p-1}z.
\]
We first consider the contribution of $\Delta Iu$ to \eqref{tw-mor2}. We claim
\begin{equation}\label{tw-mor3}
\int x^k\chi_R [\partial_k Iu] \Delta Iu + \frac12(\chi_R+\psi_R)Iu\Delta Iu\,\mathrm{d} x \leq \frac 12 \int \Delta(\chi_R+\psi_R)(Iu)^2\,\mathrm{d} x.
\end{equation}
In the proof of \eqref{tw-mor3} we will simplify notation by suppressing the operator $I$ and the dependence on $R$, and by writing $u_k=\partial_k u$.
We begin by considering the first term on the left-hand side of \eqref{tw-mor3}. Integrating by parts yields
\[
\int x^k\chi u_k u_{jj} = -\int\partial_j[x^k \chi]u_k u_j - \frac12 \int x^k\chi\, \partial_k(u_j^2),
\]
where $k\in\{2,3\}$ and $j\in\{1,2,3\}$. Writing $r=|x_{2,3}|$ and using \eqref{tw-chi-id}, we have
\begin{align*}
\int \partial_j[x^k\chi] u_k u_j & = \int \delta_{jk}\chi u_j u_k + \frac{x^k x^j}{r^2}r \chi' u_j u_k \\
& = \int \delta_{jk}\chi u_j u_k + \frac{x^k x^j}{r^2}\psi u_j u_k - \frac{x^k x^j}{r^2}\chi u_j u_k,
\end{align*}
where we may now restrict to $j\in\{2,3\}$. Using the other identity in \eqref{tw-chi-id}, we also have
\[
\frac12 \int x^k \chi\partial_k(u_j)^2 = -\frac12 \int (\chi+\psi) u_j^2.
\]
As for the second term on the left-hand side of \eqref{tw-mor3}, we have
\[
\frac12 \int (\chi+\psi)u u_{jj} = \frac 12 \int \partial_{jj}(\chi+\psi)u^2- \frac12(\chi+\psi) u_j^2.
\]
Collecting the computations above, we find
\begin{align*}
\int &\bigl( x^k\chi_R [\partial_k Iu] \Delta Iu + \frac12(\chi_R+\psi_R)Iu\Delta Iu\,\bigr) \mathrm{d} x \\
& = \frac12 \int \Delta(\chi+\psi)u^2 -\int\bigl[\delta_{jk}-\frac{x^j x^k}{r^2}\bigr]\chi u_j u_k - \int \psi\bigl[\frac{x_{2,3}}{r}\cdot \nabla_{x_{2,3}}u\bigr]^2\,\mathrm{d} x,
\end{align*}
which yields \eqref{tw-mor3}.
We next consider the contribution of $-F(Iu)$ to \eqref{tw-mor2}. Using \eqref{tw-chi-id} and integration by parts,
\begin{equation}\label{tw-mor4}
\begin{aligned}
-\int&\bigl[x^k\chi_R \partial_k Iu + \frac12(\chi_R+\psi_R) Iu\bigr]F(Iu) \,\mathrm{d} x\\
& = -\int(x^k\chi_R)\frac{1}{p+1}\partial_k|Iu|^{p+1} + \frac 12 (\chi_R+\psi_R)|Iu|^{p+1}\,\mathrm{d} x \\
& = \int \left(\frac{1}{p+1}-\frac{1}{2}\right)(\chi_R+\psi_R)|Iu|^{p+1}\,\mathrm{d} x.
\end{aligned}
\end{equation}
Hence, by \eqref{tw-mor2}, \eqref{tw-mor3}, and \eqref{tw-mor4} and the fundamental theorem of calculus, we deduce
\begin{equation}\label{TWM1}
\begin{aligned}
\iint_{|x_{2,3}| \leq R}& | Iu|^{p+1}\,\mathrm{d} x\,\mathrm{d} t \\
& \lesssim \sup_{t\in J} | M(t)| + \iint \Delta(\chi_R + \psi_R) |Iu|^2 \,\mathrm{d} x\,\mathrm{d} t \\
& \quad + \biggl| \iint [x^k\chi_R \partial_k Iu+(\chi_R+\psi_R)Iu][F(Iu)-IF(u)]\,\mathrm{d} x \,\mathrm{d} t \biggr|
\end{aligned}
\end{equation}
for any interval $J$. In the following, we choose $J=[0,T^{1-\epsilon}]$, where $\epsilon>0$ will be chosen below and $T$ is large enough that Proposition~\ref{P:tw-lts} and Corollary~\ref{C:tw-lts} hold.
We need to estimate the terms on the right-hand side of this inequality. We first bound $|M(t)|$. By Bernstein's inequality, the Sobolev embedding \eqref{equ:sobolev_embed}, and Proposition~\ref{padditional}, we have
\begin{equation}
\label{tw-mor-bd}
\begin{split}
\sup_t |&M(t)|\\
&\lesssim \| I u_t \|_{L_t^\infty L_x^2}\bigl(R \|\nabla_{x_{2,3}} Iu\|_{L_t^\infty L_x^2} +\|Iu\|_{L_t^\infty L_{x_1}^2 L_{x_{2,3}}^{\frac{2}{1-s_p}}}\|\chi_R+\psi_R\|_{L_{x_{2,3}}^{\frac{2}{1+s_p}}} \bigr) \\
& \lesssim T^{1-s_p}\bigl( RT^\nu + R^{1+s_p}\|u\|_{L_t^\infty \dot H_x^{s_p}}\bigr) \\
& \lesssim T^{1-s_p}\bigl( RT^\nu + R^{1+s_p}\bigr),
\end{split}
\end{equation}
for $\nu>0$ to be chosen sufficiently small below.
For the next term, we have
\begin{equation}\label{TWM2}
\begin{aligned}
\iint \Delta(\chi_R + \psi_R)|Iu|^2\,\mathrm{d} x \,\mathrm{d} t & \lesssim
T^{1-\epsilon}\|Iu\|_{L_t^\infty L_{x_1}^2 L_{x_{2,3}}^{\frac{2}{1-s_p}}}^2 \|\Delta(\chi_R+\psi_R)\|_{L_{x_{2,3}}^{\frac{2}{1+s_p}}} \\
& \lesssim T^{1-\epsilon} \| u\|_{L_t^\infty \dot H_x^{s_p}}^2 R^{-(2-2s_p)}\\
& \lesssim T^{1-\epsilon}R^{-(2-2s_p)}.
\end{aligned}
\end{equation}
Now we turn to the final term. Arguing as in the long-time Strichartz estimates, we need to estimate terms of the form
\[
u_{\leq T}^{p} + u_{> T} u_{\leq T}^{p-1} + u_{> T}^2 u_{\leq T}^{p-2} + u_{>T}^3 F_2,
\]
where $F_2$ involves both high and low frequencies.
Thus we estimate
\begin{align}
\iint [x^k\chi_R \partial_k &Iu +(\chi_R+\psi_R)Iu][F(Iu)-IF(u)]\,\mathrm{d} x \,\mathrm{d} t\\
& \lesssim\iint x^k \chi_R \partial_k u_{\leq T}P_{>T}[u_{\leq T}^p]\,\mathrm{d} x\,\mathrm{d} t \label{tw-emor1}\\
&\quad + \iint x^k \chi_R \partial_k u_{\leq T} P_{>T}[|u_{>T}| |u_{\leq T}|^{p-1}]\,\mathrm{d} x\,\mathrm{d} t \label{tw-emor2} \\
& \quad +\iint x^k \chi_R \partial_k u_{\leq T}P_{>T}[|u_{>T}|^2 |u_{\leq T}|^{p-2}]\,\mathrm{d} x\,\mathrm{d} t \label{tw-emor3} \\
&\quad + \iint x^k \chi_R \partial_k u_{\leq T}P_{>T} [|u_{>T}|^3 F_2 ]\,\mathrm{d} x\,\mathrm{d} t \label{tw-emor4} \\
& \quad+\iint (\chi_R + \psi_R)u_{\leq T} P_{>T} [ u_{\leq T}^p]\,\mathrm{d} x\,\mathrm{d} t \label{tw-emor5}\\
& \quad+\iint (\chi_R + \psi_R)u_{\leq T } P_{>T} [ |u_{> T}| |u_{\leq T} |^{p-1}]\,\mathrm{d} x\,\mathrm{d} t \label{tw-emor6}\\
& \quad+\iint (\chi_R+\psi_R) u_{\leq T}P_{>T}[ |u_{> T}|^2 |u_{\leq T} |^{p-2}]\,\mathrm{d} x\,\mathrm{d} t \label{tw-emor7}\\
& \quad+\iint (\chi_R+\psi_R) u_{\leq T}P_{>T}[|u_{> T}|^3 F_2]\,\mathrm{d} x\,\mathrm{d} t, \label{tw-emor8}
\end{align}
where all the integrals are taken over $[0,T^{1-\epsilon}]\times\mathbb{R}^3$. We treat each of these terms separately.
We first consider \eqref{tw-emor1}. Estimating as in the long-time Strichartz estimates and using Corollary \ref{C:tw-lts}, we obtain
\begin{align*}
&\left|\iint x^k \chi_R \partial_k u_{\leq T}P_{>T}[u_{\leq T}^p]\,\mathrm{d} x\,\mathrm{d} t\right|\\
& \lesssim RT^{1- s_p} \| \nabla_{x_{2,3}}u_{\leq T}\|_{L_t^\infty L_x^2}T^{s_p - 2} \||\nabla| P_{>T}(u_{\leq T})^p\|_{L_t^1 L_x^2} \\
&\lesssim RT^{1 - s_p + \nu}\cdot T^{s_p - 2} \| u_{\leq T}\|_{L_t^{\frac{2(p-1)}{2-\theta}}L_x^{\frac{2(p-1)}{\theta}}}^{p-1} \|\nabla u_{\leq T}\|_{L_t^{\frac{2}{\theta}}L_x^{\frac{2}{1-\theta}}} \\
&\lesssim R T^{1-s_p+\nu}.
\end{align*}
We next consider \eqref{tw-emor2}. We let $0 < \theta <\tfrac23\epsilon$. By Bernstein's inequality, Proposition~\ref{padditional}, and Corollary~\ref{C:tw-lts}, we obtain
\begin{align*}
&\left| \iint x^k \chi_R \partial_k u_{\leq T}[|u_{>T}| |u_{\leq T}|^{p-1}]\,\mathrm{d} x\, \mathrm{d} t \right| \\
& \lesssim R \|\nabla_{x_{2,3}} u_{\leq T}\|_{L^\infty_t L^2_x} \|u_{\leq T}\|^{p-1}_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \|u_{>T}\|_{L^{\infty}_t L^{\frac{2}{1-\theta}}_x} \\
& \lesssim R T^{1- s_p} \|\nabla_{x_{2,3}} u_{\leq T}\|_{L^\infty_t L^2_x} T^{s_p - 1} \|u_{\leq T}\|^{p-1}_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \|u_{>T}\|_{L^{\infty}_t L^{\frac{2}{1-\theta}}_x} \\
& \lesssim R T^{1 - s_p + \nu}.
\end{align*}
For \eqref{tw-emor3} we again argue as in the proof of the long-time Strichartz estimates, and using Corollary~\ref{C:tw-lts}, we obtain
\begin{align*}
&\left| \iint x^k \chi_R \partial_k u_{\leq T}[|u_{>T}| ^2|u_{\leq T}|^{p-2}]\,\mathrm{d} x\, \mathrm{d} t \right| \\
& \lesssim R \|\nabla_{x_{2,3}} u_{\leq T}\|_{L^\infty_t L^2_x} \|u_{\leq T}\|^{p-2}_{L_t^{p-1}L_x^{\infty}} \|u_{>T}\|_{L^{2(p-1)}_t L^{4}_x} \\
& \lesssim R T^{ \nu + s_p - 1} \||\nabla_{x_{2,3}}|^{1-\nu} u_{\leq T}\|_{L^\infty_t L^2_x} T^{1 - s_p} \|u_{\leq T}\|^{p-2}_{L_t^{p-1}L_x^{\infty}} \|u_{>T}\|_{L^{2(p-1)}_t L^{4}_x} \\
& \lesssim R T^{1 - s_p + \nu}.
\end{align*}
For \eqref{tw-emor4}, we once again use the bounds from the proof of the long-time Strichartz estimates as well as Corollary \ref{cor:ltse_23}, and we obtain
\begin{align*}
&\left| \iint x^k \chi_R \partial_k u_{\leq T} P_{>T} [|u_{>T}| ^3F_2]\,\mathrm{d} x\, \mathrm{d} t \right| \\
& \lesssim R \|\nabla_{x_{2,3}} u_{\leq T}\|_{L^{\frac{2}{1-s_p}}_{t} L^{\frac{2}{s_p}}_x} \|P_{>T} [|u_{>T}| ^3F_2]\|_{L^{\frac{2}{1+s_p}}_t L^{\frac{2}{2-s_p}}_x} \\
& \lesssim R \sum_{N \leq T} N^{\nu_0} \left(\frac{T}{N}\right)^{\frac{(1-\epsilon)(1-s_p)}{2}} \| |\nabla_{x_{2,3}}|^{1- \nu_0} u_{N}\|_{L^{\frac{2}{1-s_p}}_{t} L^{\frac{2}{s_p}}_x} \\
& \lesssim R T^{\nu_0} \sum_{N \leq T} \left(\frac{T}{N}\right)^{\frac{(1-\epsilon)(1-s_p)}{2}} N^{1-s_p} \\
& \lesssim R T^{1- s_p + \nu_0} \sum_{N \leq T} \left(\frac{N}{T}\right)^{1- s_p -\frac{(1-\epsilon)(1-s_p)}{2}} \lesssim R T^{ 1- s_p + \nu_0}
\end{align*}
for any $\nu_0 > \nu$, where $\nu > 0$ is as in Proposition \ref{padditional}.
Arguing analogously for the remaining terms, the estimates are almost identical, up to noting that by H\"older's inequality in the $x_{2,3}$ variables we have
\begin{align}
\|(\chi_R + \psi_R) u_{\leq T}\|_{L^\infty_t L^2_{x}} \lesssim R^{s_p} \| u_{\leq T}\|_{L^\infty_t L^2_{x_1}L^{\frac{2}{1-s_p}}_{x_2, x_3}},
\end{align}
which is controlled by the $\dot H^{s_p}$ norm by the Sobolev embedding \eqref{equ:sobolev_embed}. Thus we obtain for \eqref{tw-emor5}--\eqref{tw-emor7} the estimates:
\begin{align}
\eqref{tw-emor5} &\lesssim R^{s_p} T^{1- s_p} \| u_{\leq T}\|_{L^\infty_t L^2_{x_1} L^{\frac{2}{1-s_p}}_{x_2, x_3}} T^{s_p - 2} \| |\nabla| P_{>T}[u_{\leq T}^p] \|_{L^1_t L^2_x}
\lesssim R^{s_p} ,\\
\eqref{tw-emor6} &\lesssim R^{s_p} T^{1 - s_p} \| u_{\leq T}\|_{L^\infty_t L^2_{x_1}L^{\frac{2}{1-s_p}}_{x_2, x_3}} T^{s_p - 1} \| P_{>T}[ u_{>T} u_{\leq T}^{p-1}] \|_{L^1_t L^2_x} \lesssim R^{s_p} ,\\
\eqref{tw-emor7}& \lesssim R^{s_p} T^{1 - s_p} \| u_{\leq T}\|_{L^\infty_tL^2_{x_1} L^{\frac{2}{1-s_p}}_{x_2, x_3}} T^{s_p - 1} \| P_{>T}[ u_{>T}^2 u_{\leq T}^{p-2}] \|_{L^1_t L^2_x}\lesssim R^{s_p}.
\end{align}
For the last term \eqref{tw-emor8}, we note that
\[
\frac{2}{s_p} \leq \frac{2}{1-s_p},
\]
and we use H\"older's inequality in the $x_{2,3}$ variables to estimate
\begin{align}
\eqref{tw-emor8} &\lesssim \|(\chi_R + \psi_R) u_{\leq T}\|_{L^{\frac{2}{1-s_p}}_{t} L^{\frac{2}{s_p}}_x} \| P_{>T}[ u_{>T}^3 F_2] \|_{L^{\frac{2}{1+s_p}}_{t} L^{\frac{2}{2-s_p}}_x}\\
& \lesssim T^{\frac{(1-\epsilon)(1-s_p)}{2}} T^{\frac{1 -s_p}{2}} R^{2s_p - 1} \|(\chi_R + \psi_R) u_{\leq T}\|_{L^\infty_{t} L^2_{x_1} L^{\frac{2}{1-s_p}}_{x_{2,3}}} \\
& \lesssim T^{1- s_p} R^{2s_p - 1} \|(\chi_R + \psi_R) u_{\leq T}\|_{L^\infty_{t} L^2_{x_1} L^{\frac{2}{1-s_p}}_{x_{2,3}}} .
\end{align}
Now, using \eqref{tw-mor-bd}, \eqref{TWM2}, and our estimates for \eqref{tw-emor1}--\eqref{tw-emor8}, we have established that
\begin{align*}
&\iint_{|x_{2,3}|\leq R}|u_{\leq T}|^{p+1}\,\mathrm{d} x\,\mathrm{d} t \lesssim RT^{1+\nu-s_p}+R^{s_p} + R^{2s_p - 1} T^{1 - s_p }.
\end{align*}
We now choose $R=T^{\frac{1}{2}+}$ to obtain that the right-hand side is $o(T^{1-\epsilon})$. This can be achieved provided $\nu+\epsilon<s_p-\frac12$, and hence we complete the proof.
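Concretely, writing $R=T^{\frac12+\delta}$ with a small auxiliary $\delta>0$, the first (dominant) term satisfies

```latex
R\,T^{1+\nu-s_p} = T^{\frac32+\delta+\nu-s_p} = o\bigl(T^{1-\epsilon}\bigr)
\quad\Longleftrightarrow\quad
\nu+\epsilon+\delta < s_p-\tfrac12,
```

while $R^{2s_p-1}T^{1-s_p}=T^{\frac12+\delta(2s_p-1)}$ and $R^{s_p}=T^{s_p(\frac12+\delta)}$ are compared with $T^{1-\epsilon}$ in the same way, and are acceptable for $\delta$ sufficiently small.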
\end{proof}
As mentioned above, with the Morawetz estimate Proposition~\ref{P:tw-morawetz} in hand, we can quickly rule out traveling waves. The final ingredient we will need is the non-triviality of compact solutions appearing in Corollary~\ref{C:acax}. Combining this corollary with Proposition~\ref{P:tw-morawetz}, we can now prove Proposition~\ref{p:hans_zero}.
\begin{proof}[Proof of Proposition~\ref{p:hans_zero}] Suppose toward a contradiction that $\vec u$ is a traveling wave critical element for \eqref{eq:nlw}. It suffices to prove that
\begin{equation}\label{tw-lb}
\int_0^{T^{1-\epsilon}}\int_{|x_{2,3}|\leq T^{\frac{1}{2}+}} |u_{\leq T}(t,x)|^{p+1}\,\mathrm{d} x\,\mathrm{d} t \gtrsim T^{1-\epsilon}
\end{equation}
for $T$ sufficiently large, as this contradicts Proposition~\ref{P:tw-morawetz}. By Corollary~\ref{C:acax}, the definition of the critical element, and the fact that $N(t)\equiv 1$, there exist $C\gg1$ and $T\gg 1$ large enough that
\begin{equation}\label{tw-lb2}
\int_{t_0}^{t_0+1} \int_{|x-x(t)|\leq C} |u_{\leq T}(t,x)|^{\frac{3(p-1)}{2}} \,\mathrm{d} x\,\mathrm{d} t \gtrsim_u 1
\end{equation}
for all $t_0\in\mathbb{R}$. Recalling $|x(t) -(t,0,0)| \lesssim \sqrt{t}$, we see that for $T> C^{2}$, we have
\[
\{|x-x(t)| \leq C\} \subset \{|x_{2,3}|\leq T^{\frac{1}{2}+}\}
\]
for all $t\in[0,T^{1-\epsilon}]$. Thus \eqref{tw-lb2} implies \eqref{tw-lb}, as desired.
\end{proof}
\subsection{Additional Regularity: Proof of Proposition~\ref{padditional}}
Our final task is to prove Proposition~\ref{padditional}, namely, additional regularity for traveling waves. More precisely, we establish additional regularity in the directions orthogonal to the direction of travel.
Recall the notation $x=(x_1,x_{2,3})$. We similarly use $\xi=(\xi_1,\xi_{2,3})$ for the frequency variable. We also introduce the following modified Littlewood--Paley operators:
For $N,M\in 2^{\mathbb{Z}}$, we let $\widehat P_{N,>M}$ be the Fourier multiplier operator that is equal to one where
\[
|\xi|\simeq N \qtq{and} |\xi_{2,3}|\gtrsim M.
\]
We let $\widehat P_{N, M} = \widehat P_{N,>M} - \widehat P_{N,>2M}$, and we let $P_N = \widehat P_{N,\leq M}+\widehat P_{N,>M}$.
\medskip
We will occasionally abuse notation slightly and apply these multipliers to a vector, where this should be taken to mean applying them component-wise. We note that this notation differs from that of the previous sections; however, we would like to make explicit that $N$ corresponds to $\xi$ frequencies while $M$ corresponds to those of $\xi_{2,3}$.
We fix $\nu>0$. We begin with the observation that
\begin{align}\label{equ:order1}
\| P_{\leq N_0} u\|_{L_t^\infty \dot H_x^1} \lesssim_{N_0} 1.
\end{align}
We will choose the precise value of $N_0 \gg 1$ in the course of the proof. On the other hand,
we have
\begin{align}\label{equ:lowm}
\sum_{N>N_0}& \| |\nabla_{x_{2,3}}|^{1-\nu} \widehat P_{N,\leq N^{\frac{s_p}{1-\nu}}}u(t)\|_{L_x^2}^2\\
& \lesssim \sum_{N>N_0} N^{2[\frac{s_p}{1-\nu}(1-\nu)-s_p]}\| |\nabla|^{s_p}u_N(t)\|_{L_x^2}^2 \lesssim \| |\nabla|^{s_p} u(t)\|_{L_x^2}^2.
\end{align}
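The frequency threshold $N^{s_p/(1-\nu)}$ is chosen exactly so that $1-\nu$ derivatives in the $x_{2,3}$ variables below this threshold cost no more than $s_p$ full derivatives; this is the Bernstein-type bound used in the first inequality:

```latex
\bigl|\xi_{2,3}\bigr|^{1-\nu}
\leq \bigl(N^{\frac{s_p}{1-\nu}}\bigr)^{1-\nu}
= N^{s_p}
\qquad\text{for } |\xi_{2,3}|\leq N^{\frac{s_p}{1-\nu}}.
```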
Therefore, we are left to show that
\begin{equation}\label{tw-addreg-nts0}
\sum_{N \geq N_0}\sum_{C_0 N^{\frac{s_p}{1-\nu}}\leq M \leq N} M^{2(1-\nu)}\|\widehat P_{N, \geq M}u(t)\|^2_{L_x^2} \lesssim 1,
\end{equation}
for some fixed $C_0 > 0$ (uniformly in $t$).
We will use a double Duhamel argument together with a frequency envelope to estimate this expression. We will estimate
\begin{align}
\| \widehat{P}_{N, \geq M} u(t_0) \|_{L^{2}({\mathbb R}^{3})}^2 &\simeq N^{-2s_p} \| \widehat{P}_{N, \geq M} u(t_0) \|_{\dot H^{s_p} ({\mathbb R}^{3})}^2\\
& = N^{-2s_p} \langle \widehat{P}_{N, \geq M} u(t_0),\, \widehat{P}_{N, \geq M} u(t_0) \rangle_{\dot H^{s_p}({\mathbb R}^{3})}.
\end{align}
We will show that there exists a frequency envelope $\gamma_{N,M}$ such that
\[
\| \widehat{P}_{N, \geq M} u(t_0) \|_{\dot H^{s_p}({\mathbb R}^{3})} \lesssim \gamma_{N,M}(t_0)
\]
and such that
\begin{align}\label{equ:frequency_env}
\sum_{N \geq N_0} \biggl(\sum_{C_0 N^{\frac{s_p}{1-\nu}}\leq M \leq N} M^{2(1 - \nu)} N^{-2s_p} \gamma_{N,M}(t_0)^2 \biggr) \lesssim 1.
\end{align}
Consequently, this will show that
\begin{align}
\sum_{N \geq N_0}&\sum_{C_0 N^{\frac{s_p}{1-\nu}}\leq M \leq N} M^{2(1-\nu)}\|\widehat P_{N, \geq M}u(t_0)\|^2_{L_x^2} \\
&\lesssim \sum_{N \geq N_0}\sum_{C_0 N^{\frac{s_p}{1-\nu}}\leq M \leq N} M^{2(1-\nu)} N^{-2s_p } \|\widehat P_{N, \geq M}u(t_0)\|^2_{\dot H^{s_p}_x} \lesssim 1.
\end{align}
Together with \eqref{equ:order1} and \eqref{equ:lowm} (and time translation invariance), this will imply that
\[
\| |\partial_{2}|^{1 - \nu} u \|_{L_{t}^{\infty} L_x^{2}} + \| |\partial_{3}|^{1 - \nu} u \|_{L_{t}^{\infty} L_x^{2}} < \infty,
\]
and hence prove Proposition \ref{padditional}.
Thus, we let
\[
\Gamma_{N,M}(t_0) = N^{s_p} \| \widehat{P}_{N, \geq M} u(t_0) \|_{L^{2}({\mathbb R}^{3})},
\]
and we fix some $\sigma > 0$ to be specified later.
We define the frequency envelope
\begin{align}\label{equ:alpha_env}
\gamma_{N,M}(t_0) =\sum_{N'} \sum_{M' \leq M} \min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma}\cdot \left(\frac{M'}{M}\right)^{\sigma}\Gamma_{N',M'}(t_0).
\end{align}
By time-translation symmetry, it suffices to consider the case $t_0 = 0$. Once again, we complexify the solution, letting
\[
w = u+ \frac{i}{\sqrt{-\Delta}} u_t.
\]
Then
\[
\|w(t) \|_{\dot H^1} \simeq \|\vec u(t) \|_{\dot H^1 \times L^2},
\]
and if $\vec u(t)$ solves \eqref{eq:nlw}, then $w(t)$ is a solution to
\begin{align}
w_t = -i \sqrt{-\Delta} w \pm \frac{i}{\sqrt{-\Delta}} |u|^{p-1} u.
\end{align}
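The norm equivalence above can be verified directly: since $u$ and $u_t$ are real-valued and $|\nabla| w = |\nabla| u + i u_t$, the cross term vanishes by Plancherel, so that in fact
\[
\| w(t) \|_{\dot H^1}^2 = \| u(t) \|_{\dot H^1}^2 + \| u_t(t) \|_{L^2}^2.
\]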
By Duhamel's principle, for any $T$, we have
\[
w(0) = e^{iT \sqrt{-\Delta}} w(T) \pm \frac{i}{\sqrt{-\Delta}} \int_T^0 e^{i \tau \sqrt{-\Delta}} F(u)(\tau)\, \mathrm{d}\tau,
\]
where $F(u) = |u|^{p-1} u$. To estimate $\gamma_{N,M}$, we write
\begin{align}
\widehat{P}_{N, \geq M} w(0) &= \widehat{P}_{N, \geq M} e^{-iT\sqrt{-\Delta}} w(T) - \frac{1}{\sqrt{-\Delta}} \int_0^T e^{-it\sqrt{-\Delta}} \widehat{P}_{N, \geq M} F(u)\, \mathrm{d} t \\
&=\widehat{P}_{N, \geq M} e^{-iT\sqrt{-\Delta}} w(-T) - \frac{1}{\sqrt{-\Delta}} \int_{-T}^0 e^{-i\tau\sqrt{-\Delta}} \widehat{P}_{N, \geq M} F(u)\, \mathrm{d}\tau.
\end{align}
When we pair these expressions and take $T \to \infty$, we use the facts that
\[
e^{-iT\sqrt{-\Delta}} \widehat{P}_{N, \geq M} w(T) \rightharpoonup 0 \quad\text{and}\quad e^{iT\sqrt{-\Delta}} \widehat{P}_{N, \geq M} w(-T) \rightharpoonup 0,
\]
and ultimately we are left to estimate
\begin{align}\label{equ:double_duhamel_term_hans}
\left \langle \int_0^\infty S(-t) \widehat{P}_{N, \geq M} F(u)\, \mathrm{d} t ,\, \int_{-\infty}^0 S(-\tau) \widehat{P}_{N, \geq M} F(u)\, \mathrm{d}\tau \right \rangle_{\dot H^{s_p}_x},
\end{align}
where we have introduced the notation
\begin{align}
S(t):= \frac{1}{\sqrt{-\Delta}} e^{i t\sqrt{-\Delta}}.
\end{align}
As we have done in previous sections, we will estimate this expression by dividing space-time into three regions: a compact time interval, an outer region, and a region inside the light cone. We note, however, that the arguments on the compact time interval and in the region inside the light cone will be considerably different from those in previous sections.
Thus, we let $\eta_0>0$ and $\epsilon>0$ be sufficiently small parameters and define the cut-off
\begin{align}\label{equ:chi_0}
\chi_{0}(t,x) = 1_{\{ |x - x(N^{1-\epsilon})| \geq R(\eta_0) + (t - N^{1 - \epsilon}),\, t \geq N^{1 - \epsilon}\}},
\end{align}
where $R(\eta_0)$ is such that
\begin{align}\label{equ:small1}
\| \chi_{0}(N^{1-\epsilon},x)u(N^{1 - \epsilon}, x) \|_{\dot{H}^{s_p}} + \| \chi_0(N^{1-\epsilon},x) \partial_t u(N^{1 - \epsilon}, x) \|_{\dot{H}^{s_p - 1}} \leq \eta_0.
\end{align}
By the small-data theory, we may solve the Cauchy problem
\begin{equation}
\left\{ \begin{aligned}
&v_{tt} - \Delta v + F(v) = 0 \text{ on } {\mathbb R} \times {\mathbb R}^3, \\
& (v, \partial_t v)|_{t={0}} = \bigl( \chi_{0}(N^{1-\epsilon},x) u(N^{1- \epsilon}, x), \chi_{0}(N^{1-\epsilon},x) u_{t}(N^{1- \epsilon}, x)\bigr) \in \dot{\mathcal{H}}^{s_p}({\mathbb R}^3).
\end{aligned} \right.
\end{equation}
Note that by finite propagation speed, $v = u$ on the set
\begin{align}\label{equ:supp_set}
\{ (t,x) : |x - x(N^{1-\epsilon})| \geq R(\eta_0) + (t - N^{1 - \epsilon}), \,\, t \geq N^{1 - \epsilon} \}.
\end{align}
We now write
\begin{align*}
&\int_0^\infty S(-t) \widehat P_{N, \geq M} F(u)\,\mathrm{d} t = A+B+C,
\end{align*}
where
\begin{equation}\label{tw-ABC}
\begin{aligned}
& A = \int_{N^{1-\epsilon}}^\infty S(-t) \widehat P_{N, \geq M} F(v)\,\mathrm{d} t, \\
& B = \int_0^{N^{1-\epsilon}} S(-t)\widehat P_{N, \geq M} F(u)\,\mathrm{d} t,\\
& C = \int_{N^{1-\epsilon}}^\infty S(-t)\widehat P_{N, \geq M}[ F(u)- F(v)]\,\mathrm{d} t,
\end{aligned}
\end{equation}
and perform a similar decomposition in the negative time direction, yielding quantities $A',B',C'$. We will use the estimate
\begin{equation}\label{eq:ABC}
\begin{split}
|\langle A+B+&C,A'+B'+C'\rangle| \\
& \lesssim \|A\|_{\dot H^{s_p}_x}^2 + \|A'\|_{\dot H^{s_p}_x}^2+\|B\|_{\dot H^{s_p}_x}^2+\|B'\|_{\dot H^{s_p}_x}^2+|\langle C,C'\rangle_{\dot H^{s_p}_x}|
\end{split}
\end{equation}
whenever $A+B+C=A'+B'+C'$.
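For the reader's convenience, we note how \eqref{eq:ABC} is obtained. Writing $w_0 := A+B+C = A'+B'+C'$, we have
\begin{align*}
\langle C, C' \rangle &= \langle w_0 - A - B,\, w_0 - A' - B' \rangle \\
&= \|w_0\|^2 - \langle w_0, A' + B' \rangle - \langle A + B, w_0 \rangle + \langle A + B, A' + B' \rangle,
\end{align*}
so by Cauchy--Schwarz and Young's inequality (absorbing the terms linear in $\|w_0\|$ into the left-hand side),
\[
\|w_0\|^2 \lesssim \|A\|^2 + \|A'\|^2 + \|B\|^2 + \|B'\|^2 + |\langle C, C'\rangle|,
\]
with all norms and pairings taken in $\dot H^{s_p}_x$.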
\subsection*{Term A} We first estimate $\langle A,A\rangle_{\dot H^{s_p}_x}$ and $\langle A',A'\rangle_{\dot H^{s_p}_x}$, where
\[
A = \int_{N^{1 - \epsilon}}^\infty S(-t) \widehat{P}_{N, \geq M} F(v)\, \mathrm{d} t\qtq{and} A' = \int_{-\infty}^{-N^{1 - \epsilon}} S(-\tau) \widehat{P}_{N, \geq M} F(v)\, \mathrm{d}\tau.
\]
We introduce two parameters $q$ and $r$ satisfying
\begin{equation}\label{tw-parameters}
2<q<\min\{p-1,\tfrac{2}{s_p},\tfrac{5p-9}{3p-7}\}\qtq{and}\tfrac{2}{s_p}\leq r\leq\min\{\tfrac{2p(p-1)}{2p-3},4+\},
\end{equation}
and let $I = [N^{1 - \epsilon}, \infty)$. We fix $\sigma > 0$ to be specified later, and we define
\[
a_{N', M} = \biggl[ (N')^{-(\frac{2}{q} -s_p ) } \| \widehat{P}_{N', \geq M} v \|_{L_{t}^{q} L_{x}^{\frac{2q}{q-2}}} + (N')^{\frac{2}{r} - 1 + s_p} \| \widehat{P}_{N', \geq M} v \|_{L_{t}^{\frac{2r}{r-2}} L_{x}^{r}}\biggr],
\]
and let
\begin{align}
\alpha_{N,M}=\sum_{N'} \sum_{M' \leq M} \left( \frac{M'}{M}\right)^\sigma&\min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma}a_{N', M'}.
\end{align}
All space-time norms are taken over $I\times\mathbb{R}^3$.
Our goal is to prove the following result.
\begin{lem}\label{l:AA'}
Let $A$, $A'$, and $\alpha_{N,M}$ be as above. Then
\begin{align}\label{equ:a_ests}
\sum_{N'}\sum_{M' \leq M} \left( \frac{M'}{M} \right)^\sigma \min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma} (\|A\|_{\dot H^{s_p}_x} + \|A'\|_{\dot H^{s_p}_x})
&\lesssim \eta_0^{p-1}\, \alpha_{N, M},
\end{align}
and we also have
\begin{align}\label{alpha_bds}
\alpha_{N, M} \lesssim \gamma_{N,M}(N^{1-\epsilon}) + \eta_0^{p-1} \alpha_{N, M}.
\end{align}
\end{lem}
\begin{proof}
On this region, we will use the small-data theory, which implies, in particular, that
\begin{align} \label{equ:small_v}
\| v \|_{L_{t,x}^{2(p-1)}(\mathbb{R}\times\mathbb{R}^3)} \lesssim \eta_0.
\end{align}
By Strichartz estimates, we may write
\begin{align}\label{equ:first_a_bds}
&N^{-\left(\frac{2}{q} -s_p \right) } \| \widehat{P}_{N, \geq M} v \|_{L_{t}^{q} L_{x}^{\frac{2q}{q-2}}} + N^{\frac{2}{r} - 1 + s_p} \| \widehat{P}_{N, \geq M} v \|_{L_{t}^{\frac{2r}{r-2}} L_{x}^{r}} \\
& \lesssim \| \widehat{P}_{N, \geq M} (v, v_{t})(N^{1 - \epsilon}) \|_{\dot{{\mathcal{H}}}^{s_p}} + \|P_{N, \geq M} F(v)\|_{N(\mathbb{R})},
\end{align}
and recall that, by definition, we have
\begin{align}\label{equ:data_bds}
\| \widehat{P}_{N, \geq M} (v, v_{t})(N^{1 - \epsilon}) \|_{\dot{H}^{s_p}} \lesssim \Gamma_{N,M}(N^{1-\epsilon}).
\end{align}
Here, we let the norm $\|F\|_{N(\mathbb{R})}$ denote any finite combination $\sum_j \|F_j\|_{N_j(\mathbb{R})}$, with $F=\sum_j F_j$ and each $N_j(\mathbb{R})$ a dual admissible Strichartz space with the appropriate scaling and number of derivatives.
It will be useful to introduce the quantities
\[
v_{lo} = \sum_{N'} \widehat{P}_{N',\leq M}v\qtq{and} v_{hi} = \sum_{N'}\widehat{P}_{N',>M}v,
\]
where ``low'' and ``high'' refer to the $\xi_{2,3}$ frequency component. We decompose the nonlinearity via
\[
F(v)=F(v_{lo})+v_{hi}\int_0^1 F'(v_{lo}+\theta v_{hi})\,\mathrm{d}\theta,
\]
which we write schematically as
\[
F(v)=F(v_{lo})+v_{hi} v^{p-1}.
\]
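Here and below, the schematic notation is justified by the pointwise bound $|F'(z)| \lesssim |z|^{p-1}$ for $F(z) = |z|^{p-1}z$, so that, for instance,
\[
\Bigl| v_{hi} \int_0^1 F'(v_{lo} + \theta v_{hi})\,\mathrm{d}\theta \Bigr| \lesssim |v_{hi}|\bigl(|v_{lo}| + |v_{hi}|\bigr)^{p-1};
\]
the estimates below use only bounds of this type, together with the boundedness of the Littlewood--Paley projections on the spaces involved.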
For the high-frequency (in $M$) component, we write
\[
v_{hi} v^{p-1} = (P_{\leq N} v_{hi} ) v^{p-1} + (P_{\geq N} v_{hi} ) v^{p-1},
\]
and to estimate these terms we may use the dual Strichartz spaces
\[
L_t^{\frac{2q}{q+2}} \dot H_x^{-(\frac2q-s_p),\frac{q}{q-1}} \quad \textup{and} \quad L_t^{\frac{r}{r-1}}\dot H_x^{\frac2r-1 +s_p,\frac{2r}{r+2} }
\]
respectively. This yields
\begin{align}
\|&P_{N', \geq M} v_{hi} v^{p-1} \|_{N(\mathbb{R})} \\
& \lesssim (N')^{-\left(\frac{2}{q} -s_p \right) } \sum_{N'' \leq N'} \| \widehat{P}_{N'', \geq M} v \|_{L_{t}^{q} L_{x}^{\frac{2q}{q-2}}(I \times {\mathbb R}^{3})} \| v \|_{L_{t,x}^{2(p-1)}}^{p-1}\\
& + (N')^{\frac{2}{r} - 1 + s_p} \sum_{N'' \geq N'} \| \widehat{P}_{N'', \geq M } v \|_{L_{t}^{\frac{2r}{r-2}}L_{x}^{r} (I \times {\mathbb R}^{3})} \| v \|_{L_{t,x}^{2(p-1)}}^{p-1}\phantom{\int}\\
& := \mathcal{N}_1 + \mathcal{N}_2,
\end{align}
hence we conclude that
\begin{align}
\sum_{N'}\min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma}& \|P_{N', \geq M} v_{hi} v^{p-1} \|_{N(\mathbb{R})} \\
& \lesssim \sum_{N'}\min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma} (\mathcal{N}_1 + \mathcal{N}_2).
\end{align}
It thus suffices to bound the $\mathcal{N}_1$ and $\mathcal{N}_2$ terms.
We only treat the first term, since the other follows analogously. We obtain
\begin{align}
&\sum_{N' \leq N}\sum_{N'' \leq N'} \left( \frac{N'}{N}\right)^{\sigma} (N')^{-\left(\frac{2}{q} -s_p \right) } \| \widehat{P}_{N'', \geq M} v \|_{L_{t}^{q} L_{x}^{\frac{2q}{q-2}}}\\
& \lesssim \sum_{N' \leq N} \sum_{N'' \leq N'} \left( \frac{N'}{N}\right)^{\sigma}\left(\frac{N''}{N'}\right)^{\frac2q-s_p} (N'')^{-\left(\frac{2}{q} -s_p \right) } \| \widehat{P}_{N'', \geq M} v \|_{L_{t}^{q} L_{x}^{\frac{2q}{q-2}}}\\
&\lesssim\sum_{N'' \leq N} \left( \frac{N''}{N}\right)^{\sigma} a_{N'', M} \sum_{N'\geq N''}\left(\frac{N'}{N''}\right)^{-(\frac{2}{q}-s_p)+\sigma}.
\end{align}
Hence this term can be bounded by $\alpha_{N, M}$ provided $\sigma<\frac{2}{q}-s_p$.
We also have
\begin{align}
&\sum_{N' \geq N} \left( \frac{N}{N'}\right)^{\sigma} (N')^{-\left(\frac{2}{q} -s_p \right) } \sum_{N'' \leq N'} \| \widehat{P}_{N'', \geq M} v \|_{L_{t}^{q} L_{x}^{\frac{2q}{q-2}}(I \times {\mathbb R}^{3})}\\
&\lesssim \sum_{N'' \leq N} \left( \frac{N''}{N}\right)^{\sigma} a_{N'', M} \sum_{N'' \leq N', N \leq N'} \left( \frac{N^2}{N' N''}\right)^{\sigma} \left(\frac{N '}{N''} \right)^{-\left(\frac{2}{q} -s_p \right) } \\
& \hspace{14mm} + \sum_{N'' \geq N} \left( \frac{N}{N''}\right)^{\sigma} a_{N'', M} \sum_{N'' \leq N'} \left( \frac{N''}{N'}\right)^{\sigma} \left(\frac{N '}{N''} \right)^{-\left(\frac{2}{q} -s_p \right) } .
\end{align}
Using that in the first term
\[
\left( \frac{N^2}{N' N''}\right)^{\sigma} \leq \left( \frac{(N')^2}{N' N''}\right)^{\sigma} = \left( \frac{N'}{N''}\right)^{\sigma},
\]
we can bound this expression by $\alpha_{N,M}$ provided
\[
\sum_{N'' \leq N'} \left( \frac{N'}{N''}\right)^{\sigma} \left(\frac{N '}{N''} \right)^{-\left(\frac{2}{q} -s_p \right) } \lesssim 1 \quad \Longleftrightarrow \quad \sigma < \frac{2}{q} - s_p,
\]
and so we obtain
\begin{align}\label{gamma_bd1}
\sum_{N'} \sum_{M' \leq M} \left( \frac{M'}{M} \right)^\sigma& \min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma} \|P_{N', \geq M'} v_{hi} v^{p-1} \|_{N(\mathbb{R})} \\
& \lesssim \eta_0^{p-1} \alpha_{N, M}.
\end{align}
Next we estimate the contribution of the low-frequency piece. We can write
\begin{align}
\|P_{N, \geq M} F(v_{lo})\|_{N(\mathbb{R})} \leq M^{ - 2 } \|P_{N, \geq M} \Delta_{x_2,x_3} F(v_{lo})\|_{N(\mathbb{R})}.
\end{align}
Applying the chain rule and decomposing $v = P_{\geq N} v + P_{\leq N} v$, we obtain for $j=2,3$ that
\begin{align}
\partial_{x_j} F(v_{lo}) &= \partial_{x_j}v_{lo} F'(v_{lo})\\
& = \partial_{x_j} P_{\leq N} v_{lo} F'(v_{lo}) + \partial_{x_j} P_{\geq N} v_{lo} F'(v_{lo}) ,
\end{align}
and hence
\begin{align}
\|P_{N, \geq M} \Delta_{x_2,x_3} F(v_{lo})\|_{N(\mathbb{R})} &\leq \sum_{j=2}^{3} \|\partial_{x_j} P_{N, \geq M} (\partial_{x_j}v_{lo} )F'(v_{lo})\|_{N(\mathbb{R})}\\
& \leq \sum_{j=2}^{3} M \|P_{N, \geq M} (\partial_{x_j}v_{lo} )F'(v_{lo})\|_{N(\mathbb{R})}.
\end{align}
Estimating as above, using the dual Strichartz spaces
\[
L_t^{\frac{2q}{q+2}} \dot H_x^{-(\frac2q-s_p),\frac{q}{q-1}} \quad \textup{and} \quad L_t^{\frac{r}{r-1}}\dot H_x^{\frac2r-1 +s_p,\frac{2r}{r+2} },
\]
we conclude that
\begin{align}
&\|P_{N,\geq M}F(v_{lo}) \|_{N(\mathbb{R})} \\
& \lesssim N^{-\left(\frac{2}{q} -s_p \right) } \sum_{N' \leq N} \sum_{M' \leq M} M^{-1} \| \nabla_{x_2,x_3} \widehat{P}_{N', M'} v_{lo} \|_{L_{t}^{q} L_{x}^{\frac{2q}{q-2}}(I \times {\mathbb R}^{3})} \| v \|_{L_{t,x}^{2(p-1)}}^{p-1}\\
& + N^{\frac{2}{r} - 1 + s_p} \sum_{N' \geq N} \sum_{M' \leq M} M^{-1} \| \nabla_{x_2,x_3} \widehat{P}_{N', M' } v_{lo} \|_{L_{t}^{\frac{2r}{r-2}}L_{x}^{r} (I \times {\mathbb R}^{3})} \| v \|_{L_{t,x}^{2(p-1)}}^{p-1}\phantom{\int}\\
& \lesssim N^{-\left(\frac{2}{q} -s_p \right) } \sum_{N' \leq N} \sum_{M' \leq M} \left( \frac{M'}{M} \right) \| \widehat{P}_{N', M'} v_{lo} \|_{L_{t}^{q} L_{x}^{\frac{2q}{q-2}}(I \times {\mathbb R}^{3})} \| v \|_{L_{t,x}^{2(p-1)}}^{p-1}\\
& + N^{\frac{2}{r} - 1 + s_p} \sum_{N' \geq N} \sum_{M' \leq M} \left( \frac{M'}{M} \right) \| \widehat{P}_{N', M' } v_{lo} \|_{L_{t}^{\frac{2r}{r-2}}L_{x}^{r} (I \times {\mathbb R}^{3})} \| v \|_{L_{t,x}^{2(p-1)}}^{p-1}\phantom{\int}.
\end{align}
To establish a bound for this expression, it is useful to introduce the notation
\[
\widetilde{a}_{N, M'} = \sum_{N'} \min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma}a_{N', M'}.
\]
Thus, summing over $N'$ and $M' \leq M$, we can again argue exactly as above to bound this expression by
\begin{align}
&\eta_0^{p-1} \sum_{M' \leq M} \left( \frac{M'}{M} \right)^\sigma \sum_{M'' \leq M'} \left( \frac{M''}{M'} \right) \widetilde{a}_{N, M''} \\
&\leq \eta_0^{p-1} \sum_{M'' \leq M} \left( \frac{M''}{M} \right)^\sigma \sum_{M'' \leq M'} \left( \frac{M''}{M'} \right) \left( \frac{M'}{M''}\right)^\sigma \widetilde{a}_{N, M''} \\
&\leq \eta_0^{p-1} \sum_{M'' \leq M} \left( \frac{M''}{M} \right)^\sigma \widetilde{a}_{N, M''} \sum_{M'' \leq M'} \left( \frac{M''}{M'} \right) \left( \frac{M'}{M''}\right)^\sigma\\
& \lesssim \eta_0^{p-1} \alpha_{N, M},
\end{align}
provided $\sigma < 1$. Thus, we obtain
\begin{align}
\sum_{N'} &\sum_{M' \leq M} \left( \frac{M'}{M} \right)^\sigma \min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma} \|P_{N', \geq M'} F(v_{lo}) \|_{N(\mathbb{R})} \\ & \lesssim \eta_0^{p-1} \alpha_{N, M}.\label{gamma_bd2}
\end{align}
By Strichartz estimates, we have
\[
\|A'\|_{\dot H^{s_p}_x} + \|A\|_{\dot H^{s_p}_x} \lesssim \|P_{N, \geq M} F(v)\|_{N(I)}.
\]
Hence, putting these bounds together with \eqref{gamma_bd1} and \eqref{gamma_bd2}, we obtain
\begin{align}
\sum_{N'}\sum_{M' \leq M} \left( \frac{M'}{M} \right)^\sigma \min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma} (\|A\|_{\dot H^{s_p}_x} + \|A'\|_{\dot H^{s_p}_x})
&\lesssim \eta_0^{p-1}\, \alpha_{N, M}.
\end{align}
Together with \eqref{equ:first_a_bds} and \eqref{equ:data_bds}, we also have
\begin{align}
\alpha_{N, M} \lesssim \gamma_{N,M}(N^{1-\epsilon}) + \eta_0^{p-1} \alpha_{N, M},
\end{align}
as required.
\end{proof}
\subsection*{Term B} We next estimate the terms $\langle B,B\rangle$ and $\langle B',B'\rangle$ from \eqref{tw-ABC}. On this region, we use the long-time Strichartz estimates (Proposition~\ref{P:tw-lts}) and another frequency envelope argument. In what follows we assume, unless otherwise specified, that all norms are taken over
\[
I:= [0, N^{1 - \epsilon}] \times {\mathbb R}^{3}.
\]
We define
\begin{align}
b_{N', M} &= \biggl[ (N')^{-\left(\frac{2}{q} -s_p \right) } \| \widehat{P}_{N', \geq M} u \|_{L_{t}^{q} L_{x}^{\frac{2q}{q-2}}} + (N')^{\frac{3(p-3)}{2(p-1)}} \| P_{N', \geq M} u\|_{L_t^{2(p-1)} L_x^{\frac{2(p-1)}{p-2}}} \\
& + (N')^{-\frac{1}{p-1} + \frac{3\theta}{2(p-1)}} \|P_{N', \geq M} u \|_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \\
&+(N')^{s_p - \frac{3\theta}{2}} \| P_{N', \geq M} u\|_{L^{\infty}_t L^{\frac{2}{1-\theta}}_x} + (N')^{- \theta + s_p } \| P_{N' , \geq M} u \|_{L^{\frac{2}{\theta}}_t L^{\frac{2}{1-\theta}}_x} \\
& + (N')^{\frac{\ell}{2}} \|P_{N', \geq M} u \|_{L^\frac{2p}{1+s_p - p\ell}_t L^\frac{2p}{2-s_p}_x } + (N')^{\frac{2 s_p}{3} - \frac{1}{3} } \| P_{N',\geq M} u\|_{L^\frac{6}{1+s_p}_t L^\frac{6}{2-s_p}_x } \biggr],
\end{align}
where $\ell>0$ will be determined more precisely below and $\theta$ is as in Proposition~\ref{P:tw-lts}. These are just a collection of admissible Strichartz pairs at regularity $s_p$. We then define the frequency envelope
\begin{align} \label{equ:beta_env}
\beta_{N,M} = \sum_{N'} \sum_{M' \leq M} \left( \frac{M'}{M}\right)^\sigma \min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma} b_{N', M'} .
\end{align}
Our goal in this section is to prove the following result.
\begin{lem}\label{l:BB'}
Let $B$, $B'$, and $\beta_{N,M}$ be as above. Then
\begin{align}\label{equ:b_ests}
\sum_{N'}\hspace{-.5mm} \sum_{M' \leq M} \hspace{-1mm} \left( \frac{M'}{M} \right)^\sigma\hspace{-1mm} \min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma}\hspace{-1.5mm} (\|B\|_{\dot H^{s_p}_x} + \|B'\|_{\dot H^{s_p}_x})
&\lesssim \eta_0^{p-1}\, \beta_{N, M},
\end{align}
and we also have
\begin{align}\label{equ:beta_bds}
\gamma_{N,M}(N^{1-\epsilon}) + \beta_{N, M} \lesssim \gamma_{N, M}(0) + \eta_0^{p-1} \beta_{N, M}.
\end{align}
\end{lem}
\begin{proof}
Fix $t_0 = 0$. Throughout, we will assume that $N \gg N_0$ as in the statement of the long-time Strichartz estimates. By Strichartz estimates,
\begin{align}\label{equ:first_b_bds}
\|P_{N', \geq M} (u, u_t) \|_{L^\infty_t \dot {\mathcal{H}}^{s_p}([0, N^{1-\epsilon}])} &+ b_{N',M} \\
& \lesssim \|P_{N', \geq M} (u, u_t)(0) \|_{\dot{\mathcal{H}}^{s_p}} + \|P_{N', \geq M} F \|_{N(I)}.
\end{align}
Once again, it will be useful to introduce the quantities
\[
u_{lo} = \sum_{N'} \widehat{P}_{N',\leq M}u\qtq{and} u_{hi} = \sum_{N'}\widehat{P}_{N',>M}u,
\]
where ``low'' and ``high'' refer to the $\xi_{2,3}$ frequency component. We decompose the nonlinearity via
\[
F(u)=F(u_{lo})+u_{hi}\int_0^1 F'(u_{lo}+\theta u_{hi})\,\mathrm{d}\theta,
\]
which we write schematically as
\[
F(u)=F(u_{lo})+u_{hi} u^{p-1}.
\]
These two expressions will be estimated almost identically, up to requiring additional gains for the low-frequency (in $M$) term, $F(u_{lo})$. We will only estimate this term, since the other is easier. Arguing as above via the chain rule with the Laplacian in the $x_{2,3}$ directions, we have
\begin{align}
\|P_{N, \geq M} F(u_{lo})\|_{N(I)} \leq M^{-1} \| P_{N, \geq M} (\nabla_{2,3} u_{lo})\: F'(u_{lo})\|_{N(I)}.
\end{align}
We write
\begin{equation}\label{EQX}
(\nabla_{2,3} u_{lo})\: F'(u_{lo}) = (\nabla_{2,3} P_{\geq N} u_{lo})\: F'(u_{lo}) + (\nabla_{2,3} P_{\leq N} u_{lo})\: F'(u_{lo}) := 1 + 2,
\end{equation}
and we begin with Term 1. We set
\[
P_{\geq N} u_{lo} := u_{lo, \geq N}, \quad P_{\leq N} u_{lo} := u_{lo, \leq N},
\]
and decompose
\begin{align}
(\nabla_{2,3} u_{lo, \geq N})\: F'(u_{lo}) &= (\nabla_{2,3} u_{lo, \geq N})\: F'(u_{lo, \leq N}) \\
& \hspace{4mm}+ (\nabla_{2,3} u_{lo, \geq N})\, u_{lo, \geq N} \int_0^1 F''(u_{lo, \leq N} + \theta u_{lo, \geq N})\,\mathrm{d}\theta \\
& = (\nabla_{2,3} u_{lo, \geq N})\: F'(u_{lo, \leq N}) + (\nabla_{2,3} u_{lo, \geq N})\, u_{lo, \geq N} F''(u_{lo, \leq N})\\
& \hspace{4mm}+ (\nabla_{2,3} u_{lo, \geq N})\, u_{lo, \geq N}^2 \iint_0^1 F'''(u_{lo, \leq N} + \theta_1 \theta_2 u_{lo, \geq N})\,\theta_1\,\mathrm{d}\theta_1\,\mathrm{d}\theta_2
\\
& := 1.I + 1.II + 1.III.
\end{align}
\subsection*{Term 1.I}
We estimate using Corollary~\ref{C:tw-lts} to get
\begin{align}
&M^{-1} \||\nabla|^{s_p - 1}P_{N} (\nabla_{2,3} u_{lo, \geq N})\: F'(u_{lo, \leq N}) \|_{L^1_t L^2_x} \\
& \leq M^{-1} N^{s_p - 1} \|(\nabla_{2,3} u_{lo, \geq N})\: F'(u_{lo, \leq N}) \|_{L^1_t L^2_x}\\
& \leq M^{-1} N^{s_p - 1} \|u_{lo, \leq N} \|^{p-1}_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \|\nabla_{2,3} u_{lo, \geq N}\|_{L^{\infty}_t L^{\frac{2}{1-\theta}}_x}\\
& \leq N^{s_p - 1} \|u_{lo, \leq N} \|^{p-1}_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \sum_{M' \leq M} \sum_{N' \geq N} \left(\frac{M'}{M} \right) \|P_{N', M'} u\|_{L^{\infty}_t L^{\frac{2}{1-\theta}}_x}\\
& \leq N^{s_p - 1} N^{-s_p + \frac{3\theta}{2}} \|u_{lo, \leq N} \|^{p-1}_{L_t^{p-1}L_x^\frac{2(p-1)}{\theta}} \\
& \hspace{14mm} \times \sum_{M' \leq M} \sum_{N' \geq N} \left(\frac{M'}{M} \right) \left(\frac{N'}{N} \right)^{-s_p + \frac{3\theta}{2}} (N')^{s_p - \frac{3\theta}{2}} \|P_{N', M'} u\|_{L^{\infty}_t L^{\frac{2}{1-\theta}}_x}\\
& \leq \eta_0^{p-1} \sum_{M' \leq M} \sum_{N' \geq N} \left(\frac{M'}{M} \right) \left(\frac{N'}{N} \right)^{-s_p + \frac{3\theta}{2}} b_{N', M'}.
\end{align}
\subsection*{Term 1.II}
We estimate
\begin{align}
&M^{-1} \||\nabla|^{s_p - 1}P_{N} (\nabla_{2,3} u_{lo, \geq N})\: u_{lo, \geq N} F''(u_{lo, \leq N}) \|_{L^1_t L^2_x}\\
& \lesssim M^{-1} N^{ s_p-1 } \|\nabla_{2,3} u_{lo, \geq N}\|_{L^{2(p-1)}_t L^4_x} \|u_{lo, \geq N}\|_{L^{2(p-1)}_t L^4_x} \|u_{lo, \leq N} \|_{L^{p-1}_t L^\infty_x}^{p-2}\\
& \lesssim \eta_0^{p-1} \sum_{M' \leq M} \sum_{N' \geq N} \left(\frac{M'}{M}\right) \left(\frac{N}{N'} \right)^{ \frac{3}{4} - \frac{3}{2(p-1)} } (N')^{ \frac{3}{4} - \frac{3}{2(p-1)} }\|P_{N', M'} u\|_{L^{2(p-1)}_t L^4_x} \\
& \lesssim \eta_0^{p-1} \sum_{M' \leq M} \sum_{N' \geq N} \left(\frac{M'}{M}\right) \left(\frac{N}{N'} \right)^{ \frac{3}{4} - \frac{3}{2(p-1)} } b_{N', M'} .
\end{align}
\subsection*{Term 1.III}
As in the proof of Term IV in the long-time Strichartz estimates, there are two terms. For the first we estimate
\begin{align}
&M^{-1} \|(\nabla_{2,3} u_{lo, \geq N})\: u_{lo, \geq N}^2 u_{lo, \geq N}^{p-3} \|_{L^\frac{2}{1+s_p}_t L^\frac{2}{2-s_p}_x }\\
& \lesssim M^{-1} N^{\frac{\ell}{2}} \|(\nabla_{2,3} u_{lo, \geq N})\: u_{lo, \geq N}^2 u_{lo, \geq N}^{p-3} \|_{L^{\frac{2}{1+s_p - \ell}}_t L^\frac{2}{2-s_p}_x }\\
& \lesssim M^{-1} N^{\frac{\ell}{2}} \|u_{> \frac{N}{8}} \|^{p-1}_{L^\frac{2p}{1+s_p}_t L^\frac{2p}{2-s_p}_x } \|u_{lo, \geq N} \|_{L^\frac{2p}{1+s_p - p\ell}_t L^\frac{2p}{2-s_p}_x }\\
& \lesssim \eta_0^{p-1} \sum_{M' \leq M} \sum_{N' \geq N} \left(\frac{M'}{M}\right) \left(\frac{N}{N'} \right)^{\frac{\ell}{2}} (N')^{\frac{\ell}{2}} \|P_{N', M'} u \|_{L^\frac{2p}{1+s_p - p\ell}_t L^\frac{2p}{2-s_p}_x }\\
& \lesssim \eta_0^{p-1} \sum_{M' \leq M} \sum_{N' \geq N} \left(\frac{M'}{M}\right) \left(\frac{N}{N'} \right)^{\frac{\ell}{2}} b_{N',M'},
\end{align}
where we have used that for $p > 3$ and $\ell > 0$, the pair
\[
\left( \frac{2p}{1+s_p - p\ell}, \frac{2p}{2-s_p} \right)
\]
is wave admissible at regularity
\[
\frac{3}{2} - \frac{1+s_p - p\ell}{2p} - \frac{6-3s_p}{2p} = s_p - \frac{\ell}{2}.
\]
For the second term, we have
\begin{align}
&M^{-1} \|(\nabla_{2,3} u_{lo, \geq N})\: u_{lo, \geq N}^2 u_{lo, \leq N}^{p-3} \|_{L^\frac{2}{1+s_p}_t L^\frac{2}{2-s_p}_x }\\
& \lesssim \eta_0^{p-1} \sum_{M' \leq M} \sum_{N' \geq N} \left(\frac{M'}{M}\right) \left(\frac{N}{N'} \right)^{\frac{2 s_p}{3} - \frac{1}{3} }(N')^{\frac{2 s_p}{3} - \frac{1}{3} } \| P_{N',M'} u\|_{L^\frac{6}{1+s_p}_t L^\frac{6}{2-s_p}_x } \\
& \lesssim \eta_0^{p-1} \sum_{M' \leq M} \sum_{N' \geq N} \left(\frac{M'}{M}\right) \left(\frac{N}{N'} \right)^{\frac{2 s_p}{3} - \frac{1}{3} } b_{N', M'}.
\end{align}
This completes the estimation of Term 1 in \eqref{EQX}.
Now we turn to Term 2 in \eqref{EQX}, namely
\[
(\nabla_{2,3} P_{\leq N} u_{lo})\: F'(u_{lo}).
\]
We decompose
\begin{align}
(\nabla_{2,3} u_{lo, \leq N})\: F'(u_{lo}) &= (\nabla_{2,3} u_{lo, \leq N})\: F'(u_{lo, \leq N}) \\
& \hspace{4mm}+ (\nabla_{2,3} u_{lo, \leq N})\, u_{lo, \geq N} \int_0^1 F''(u_{lo, \leq N} + \theta u_{lo, \geq N})\,\mathrm{d}\theta \\
& = (\nabla_{2,3} u_{lo, \leq N})\: F'(u_{lo, \leq N}) + (\nabla_{2,3} u_{lo, \leq N})\, u_{lo, \geq N} F''(u_{lo, \leq N})\\
& \hspace{4mm}+ (\nabla_{2,3} u_{lo, \leq N})\, u_{lo, \geq N}^2 \iint_0^1 F'''(u_{lo, \leq N} + \theta_1 \theta_2 u_{lo, \geq N})\,\theta_1\,\mathrm{d}\theta_1\,\mathrm{d}\theta_2
\\
& := 2.I + 2.II + 2.III.
\end{align}
We omit the estimates for the first two terms since they follow as above, and we focus on
\[
(\nabla_{2,3} u_{lo, \leq N})\, u_{lo, \geq N}^2 \iint_0^1 F'''(u_{lo, \leq N} + \theta_1 \theta_2 u_{lo, \geq N})\,\theta_1\,\mathrm{d}\theta_1\,\mathrm{d}\theta_2 =: (\nabla_{2,3} u_{lo, \leq N})\, u_{lo, \geq N}^2\, F_3.
\]
Here, we will need to introduce some new exponent pairs compared to the proof of the long-time Strichartz estimates. We divide this expression into two parts:
\begin{align}
\| &|\nabla|^{- \frac{p-3}{2(p-1)} +s_p} P_{N}\bigl( (\nabla_{2,3} u_{lo, \leq N})\, u_{lo, \geq N}^2\, F_3\bigr) \|_{L_t^{\frac{p-1}{p-2}}L_x^{1} }\\
& \lesssim N^{ - \frac{p-3}{2(p-1)} + s_p}\| (\nabla_{2,3} u_{lo, \leq N})\, u_{lo, \geq N}^{p-1} \|_{L_t^{\frac{p-1}{p-2}}L_x^{1} } \\
& \qquad + N^{ - \frac{p-3}{2(p-1)} + s_p}\| (\nabla_{2,3} u_{lo, \leq N})\, u_{lo, \geq N}^{2} u_{lo, \leq N}^{p-3} \|_{L_t^{\frac{p-1}{p-2}}L_x^{1} }.
\end{align}
Note that $(\tfrac{p-1}{p-2},1)$ is dual wave admissible for $p\geq 3$.
For the second term, we have a bound of
\begin{align}
&N^{ - \frac{p-3}{2(p-1)} - s_p} \||\nabla|^{s_p} u_{> N } \|_{L^\infty_t L^2_x}^2 \|u_{\leq N} \|_{L^{p-1}_t L^\infty_x}^{p-3 } \sum_{M' \leq M} M' \| P_{M'} u _{\leq N } \|_{L^{p-1}_t L^\infty_x} \\
&\lesssim \eta_0^{p-1} \sum_{M' \leq M} \sum_{N' \leq N} M' \left( \frac{N'}{N}\right)^{\frac{1}{p-1}} (N')^{-\frac{1}{p-1}} \| P_{N',M'} u \|_{L^{p-1}_t L^\infty_x}.
\end{align}
For the first term, we estimate
\begin{align}
&N^{ - \frac{p-3}{2(p-1)} + s_p}\| (\nabla_{2,3} u_{lo, \leq N})\, u_{lo, \geq N}^{p-1} \|_{L_t^{\frac{p-1}{p-2}}L_x^{1} } \\
& \lesssim N^{ - \frac{p-3}{2(p-1)} + s_p} \|u_{\geq N} \|_{L^{2(p-1)}_{t,x}}^{p-3} \|u_{\geq N} \|^2_{ L^{4}_t L^{\frac{4(p-1)}{p+1}}_x }\\
& \hspace{12mm} \times \sum_{M' \leq M} \sum_{N' \leq N} M' \left( \frac{N'}{N} \right)^{\frac{2}{p-1}} (N')^{-\frac{2}{p-1}} \| P_{N', M'}u \|_{L^{\infty}_t L^{\infty}_x} .
\end{align}
Now we note that the pair
\[
\left(4, \frac{4(p-1)}{p+1} \right)
\]
is wave admissible at regularity
\[
\frac{3}{2} - \frac{1}{4} - \frac{3p + 3}{4(p-1)} = s_p + \frac{2}{p-1} - \frac{1}{4} - \frac{3p + 3}{4(p-1)}.
\]
Noting that
\[
\frac{3p + 3}{4(p-1)} = \frac{3(p + 1)}{4(p-1)} > \frac{3}{p-1} \quad\text{for } p > 3,
\]
we see that this number is strictly less than $s_p$. Thus, we obtain a bound of
\[
\eta_0^{p-1} \sum_{M' \leq M} \sum_{N' \leq N} M' \left( \frac{N'}{N} \right)^{\frac{2}{p-1}} (N')^{-\frac{2}{p-1}} \| P_{N', M'}u \|_{L^{\infty}_t L^{\infty}_x}.
\]
Arguing as in the estimates for Term A, we may determine the restrictions on $\sigma$. First, we need $\sigma < 1$ so that we can perform the summation in $M$, and we further require that $\sigma$ be bounded above by the power appearing on the $N'/N$ factor when $N' \leq N$ and on the $N/N'$ factor when $N \leq N'$. Examining the exponents in the definition of $b_{N, M}$, this amounts to requiring that $\sigma$ be smaller than the smallest (in absolute value) of the exponents appearing there; the most restrictive of these is $\sigma < \ell/2$, arising in Term 1.III.
Provided this is the case, we obtain
\begin{align}
\sum_{N'}\sum_{M' \leq M} \left( \frac{M'}{M} \right)^\sigma \min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma} \|P_{N', \geq M'} F(u) \|_{N(I)}
&\lesssim \eta_0^{p-1}\, \beta_{N, M},
\end{align}
and since, by Strichartz estimates,
\[
\|B\|_{\dot H^{s_p}_x} + \|B'\|_{\dot H^{s_p}_x} \lesssim \|P_{N, \geq M} F \|_{N(I)},
\]
we have
\begin{align}
\sum_{N'}\hspace{-.5mm} \sum_{M' \leq M} \hspace{-1mm} \left( \frac{M'}{M} \right)^\sigma\hspace{-1mm} \min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma}\hspace{-1.5mm} (\|B\|_{\dot H^{s_p}_x} + \|B'\|_{\dot H^{s_p}_x})
&\lesssim \eta_0^{p-1}\, \beta_{N, M},
\end{align}
as well as the estimate
\begin{align}
\gamma_{N, M}(N^{1-\epsilon}) + \beta_{N, M} \lesssim \gamma_{N, M}(0) + \eta_0^{p-1} \beta_{N, M},
\end{align}
as required.
\end{proof}
\subsection*{Term C} We turn to the $\langle C,C'\rangle$ term (cf. \eqref{tw-ABC} and \eqref{eq:ABC}).
In this section we prove the following lemma.
\begin{lem} \label{l:CC'}
Let $C, C'$ be defined as in~\eqref{tw-ABC}, and let $M \geq C_0 N^{\frac{s_p}{1-\nu}}$. Then, for any $L \in \mathbb{N}$ we have
\EQ{
\abs{\ang{C, C'}_{\dot H^{s_p}_x}} \lesssim_L \frac{1}{M^L},
}
where the implicit constant above depends only on $L$.
\end{lem}
\begin{proof}[Proof of Lemma~\ref{l:CC'}]
We introduce the notation
\[
G(u,v)(t)=F(u(t))-F(v(t)),
\]
which we may abbreviate as $G(t)$ or even $G$. We are faced with estimating
\[
\ang{C, C'}_{\dot H^{s_p}_x} \simeq N^{2s_p} \ang{C, C'}_{L^2_x},
\]
where
\begin{align} \label{equ:cc_prime}
&\ang{C, C'}_{L^2_x}
\\&= \int_{-\infty}^{-N^{1 - \epsilon}} \hspace{-3mm}\int_{N^{1 - \epsilon}}^{\infty} \langle S(-t) \widehat{P}_{N, M} G(u,v)(t), \, S(-\tau)\widehat{P}_{N,M}G(u,v)(\tau) \rangle_{L_x^2}\, \mathrm{d} t\, \mathrm{d}\tau \\
&= \int_{-\infty}^{-N^{1 - \epsilon}} \hspace{-3mm}\int_{N^{1 - \epsilon}}^{\infty} \langle G(u,v)(t), \, S(t-\tau)\widehat{P}_{N,M}^2\frac{1}{\abs{\nabla}}G(u,v)(\tau) \rangle_{L_x^2}\, \mathrm{d} t\, \mathrm{d}\tau.
\end{align}
Since $M \geq C_0 N^{\frac{s_p}{1-\nu}}$, it suffices to show that
\[
|\ang{C, C'}_{L^2_x} |\lesssim_L \frac{1}{M^L},
\]
and all inner products in this proof will be $L^2_x$ inner products.
For each fixed $t, \tau$ as above we estimate the pairing
\EQ{ \label{eq:ddpair}
\langle G(u,v)(t) ,\, S(t- \tau)\widehat{P}_{N, M}^2 G(u,v)(\tau) \rangle_{L_x^2}.
}
Recall that by the definition of $\vec v(\tau)$, $G(u, v)(\tau)$ is supported in the region
\EQ{ \label{eq:calG}
\mathcal G_{\pm}(\tau):= \{ x \mid \abs{x - x(\pm N^{1-\epsilon})} \le R(\eta_0) + \abs{\tau}- N^{1-\epsilon}\}, \quad \pm \tau \ge N^{1-\epsilon}.
}
This points to an immediate problem in any naive implementation of the double Duhamel trick by way of the Huygens principle as performed in previous sections. Namely, the support of the $S(t-\tau)$ evolution of $G(u, v)(\tau)$ intersects the support of $G(u, v)(t)$ in the ``wave zone,'' i.e., near the boundary of the light cone, where the kernel of $S(t -\tau)$ only yields $\ang{t-\tau}^{-1}$ decay, which is not sufficient for integration in time. However, we are saved here by a gain in \emph{angular separation} in the wave zone guaranteed by our directional frequency localization $\hat P_{N, M}$. Indeed, application of $\hat P_{N, M}$ restricts to frequencies $\xi = (\xi_1, \xi_{2, 3})$ with
\EQ{
\frac{\abs{\xi_{2, 3}}}{ \abs{\xi}} \simeq \frac{M}{N},
}
whereas for any $x = (x_1, x_{2, 3}) \in \mathcal G(t) \cap \{(t, x) \mid \abs{x} \ge t - R(\eta_0)\}$ we claim that
\EQ{
\frac{\abs{x_{2, 3}}}{\abs{x}} \ll \frac{M}{N}
}
for all $M \ge N^{\frac{s_p}{1-\nu}}$. We establish this fact in Lemma~\ref{l:angle} below.
We introduce some additional notation. Let $R(\cdot)$ be the compactness modulus function. For given $t \in \mathbb{R}$ let
\begin{equation}
\label{eq:calC}
\begin{split}
\mathcal C_{\mathrm{ext}}(t) &:= \{ x \mid \abs{x} \ge \abs{t} - R(\eta_0)\}, \\
\mathcal C_{{\mathrm{int}}}(t) &:= \{ x \mid \abs{x} \le \abs{t} - R(\eta_0)\}.
\end{split}
\end{equation}
We decompose $\ang{C, C'}$ as follows. First, we write
\begin{align}
G(u,v)(t) &= G(u,v)(t)1_{\mathcal C_{\mathrm{ext}}(t)} +G(u,v)(t)1_{\mathcal C_{{\mathrm{int}}}(t)}.
\end{align}
Using this decomposition in \eqref{equ:cc_prime} leads to four terms:
\begin{align}\label{c_outout}
\iint\bigl\langle S(t - \tau) \frac{1}{\abs{\nabla}}\widehat{P}_{N, M} 1_{\mathcal C_{\mathrm{ext}}}(\tau) G(\tau) , \, 1_{\mathcal C_{\mathrm{ext}}}(t) G(t) \bigr \rangle \,\mathrm{d} t \,\mathrm{d}\tau,
\end{align}
\begin{align}\label{c_outin}
\iint \bigl\langle S(t - \tau) \frac{1}{\abs{\nabla}}\widehat{P}_{N, M} 1_{\mathcal C_{\mathrm{ext}}}(\tau) G(\tau) , \, 1_{\mathcal C_{{\mathrm{int}}}}(t) G(t) \bigr \rangle \,\mathrm{d} t \,\mathrm{d}\tau,
\end{align}
\begin{align}\label{c_inout}
\iint\bigl\langle S(t - \tau) \frac{1}{\abs{\nabla}} \widehat{P}_{N, M} 1_{\mathcal C_{{\mathrm{int}}}}(\tau) G(\tau) , \, 1_{\mathcal C_{\mathrm{ext}}}(t) G(t) \bigr \rangle \,\mathrm{d} t \,\mathrm{d}\tau,
\end{align}
\begin{align}\label{c_inin}
\iint \bigl\langle S(t - \tau) \frac{1}{\abs{\nabla}}\widehat{P}_{N, M} 1_{\mathcal C_{{\mathrm{int}}}}(\tau) G(\tau) , \, 1_{\mathcal C_{{\mathrm{int}}}}(t) G(t) \bigr \rangle \,\mathrm{d} t \,\mathrm{d}\tau,
\end{align}
where the integrals are over $(-\infty,-N^{1-\epsilon}]\times[N^{1-\epsilon},\infty)$. We will refer to these terms as $C_{\mathrm{ext} - \mathrm{ext}}$, $C_{\mathrm{ext} - {\mathrm{int}}}$, $C_{{\mathrm{int}} - \mathrm{ext}}$ and $C_{{\mathrm{int}} - {\mathrm{int}}}$, respectively, and we will handle these terms separately below.
Were it not for the frequency localization $\hat P_{N, M}$, all but the first term above would vanish by the support properties of $G(u, v)$, together with the particular pairing of the cutoffs $1_{\mathcal C_{{\mathrm{int}}}}$ and $1_{\mathcal C_{\mathrm{ext}}}$ and the sharp Huygens principle. On the other hand, whereas in previous scenarios (e.g., the subluminal soliton) the first term would vanish, in the present setting there truly is an interaction between these two terms. This is the origin of the essential technical difficulty faced in the present scenario, and indeed we will find that the first term~\eqref{c_outout} requires the most careful analysis. The crucial observation is that in this setting we can rely on angular separation to exhibit decay.
\subsection*{The term \texorpdfstring{$C_{\mathrm{ext} - \mathrm{ext}}$}{C ext-ext}:} We will rely crucially on the following two lemmas, which together make precise the gain in decay from angular separation.
\begin{lem}[Angular separation in the wave zone] \label{l:angle} For any $c>0$ there exists $N_0 = N_0(c)>0$ with the following property. Fix $\nu \in (0, 1)$ and let $\epsilon>0$ be any number with $\epsilon< \frac{2s_p}{1-\nu} -1$. Let $(t, x)$ satisfy
\EQ{
\abs{t} \ge N^{1-\epsilon}, \quad x = ( x_1, x_{2, 3}) \in \mathcal G(t) \cap \mathcal C_{\mathrm{ext}}(t),
}
where $\mathcal G(t)$ and $\mathcal C_{\mathrm{ext}}(t)$ are defined in~\eqref{eq:calG} and~\eqref{eq:calC}. Then,
\EQ{ \label{eq:angle}
\frac{\abs{x_{2, 3}}}{\abs{x}} \lesssim \frac{1}{N^{\frac{1}{2}- \frac{\epsilon}{2}}} \le c \frac{M}{N}
}
for all $N \ge N_0$ and $M \ge N^{\frac{s_p}{1-\nu}}$.
\end{lem}
See Figure~\ref{f:angle} for a depiction of Lemma~\ref{l:angle}.
\begin{figure}[h]
\centering
\includegraphics[width=14cm]{drawing_hans.pdf}
\caption{The dark gray region above represents the region $\mathcal G_+(t) \cap \mathcal C_{\mathrm{ext}}(t)$ in space at fixed time $t > N^{1-\epsilon}$.} \label{f:angle}
\end{figure}
Next, we show that if we restrict to those $x \in \mathbb{R}^3$ satisfying~\eqref{eq:angle}, then we get strong pointwise decay for the kernel of the operator $S(t)\frac{1}{ \abs{\nabla}}\widehat{P}_{N, M}^2$.
To state the result, we define
\EQ{ \label{eq:calS}
\mathcal S_{N} := \Big\{ x \in \mathbb{R}^3 \mid \frac{\abs{x_{2, 3}}}{\abs{x}} \lesssim \frac{1}{N^{\frac{1}{2}- \frac{\epsilon}{2}}} \Big\}.
}
\begin{lem}[Kernel estimates via angular separation] \label{lem:ang}
Let $K_{N, M}(t,x)$ denote the kernel of the operator $S(t)\frac{1}{ \abs{\nabla}}\widehat{P}_{N, M}^2$, and let $N \ge N_0$, where $N_0$ is as in the hypothesis of Lemma~\ref{l:angle}. Then for any $L$,
\begin{align}\label{equ:ang}
| 1_{\mathcal S_N}(x)K_{N, M}(t, x)| \lesssim_{L} N \frac{N^{L}}{M^L} \frac{1}{ \ang{ M |x|}^{L}}, \quad \forall t \ge N^{1-\epsilon},
\end{align}
where $\mathcal S_N$ is the set defined in~\eqref{eq:calS} and where we have used the notation $\ang{z}:= (1 + \abs{z}^2)^{\frac{1}{2}}$ above.
\end{lem}
\begin{proof}[Proof of Lemma~\ref{l:angle}]
We assume that $t \ge 0$. Since we are assuming $0< \epsilon< \frac{2s_p}{1-\nu} -1$ and that $M \ge N^{\frac{s_p}{1-\nu}}$, it suffices to show the first inequality in~\eqref{eq:angle}, i.e., that
\EQ{ \label{eq:angleest}
\frac{\abs{x_{2, 3}}}{\abs{x}} \lesssim \frac{1}{ N^{\frac{1}{2} -\frac{\epsilon}{2}}}
}
for all $x \in \mathcal G(t) \cap \mathcal C_{\mathrm{ext}}(t)$, with a uniform constant.
First, we claim that~\eqref{eq:angleest} holds at time $t = N^{1-\epsilon}$. Suppose
\EQ{
x \in \mathcal G_+(N^{1-\epsilon}) \cap \mathcal C_{\mathrm{ext}}(N^{1-\epsilon}).
}
By the definition of the traveling wave (i.e., \eqref{eq:xhans1} and \eqref{eq:xhans2}) we have
\EQ{
\abs{x} \simeq N^{1-\epsilon}, \quad \abs{x_{2, 3}} \lesssim N^{\frac{1}{2} - \frac{\epsilon}{2}},
}
and thus,
\EQ{
\frac{\abs{x_{2, 3}}}{\abs{x}} \lesssim N^{\frac{\epsilon}{2} - \frac{1}{2}},
}
as desired.
Now suppose $t > N^{1- \epsilon}$. We introduce some notation. Let $\theta_{x(N^{1-\epsilon})}$ denote the angle between the unit vector $\vec e_1$ (the unit vector in the positive $x_1$-direction) and the vector $x(N^{1-\epsilon})$, where we recall that $x(t)$ denotes the spatial center of $\vec u$. Above, we have just shown that
\EQ{
\abs{\sin(\theta_{x(N^{1-\epsilon})})} \simeq \abs{\theta_{x(N^{1-\epsilon})}} \le A_1 N^{\frac{\epsilon}{2} - \frac{1}{2}}
}
for some uniform constant $A_1>0$. To finish the proof it will suffice to show that for any $x \in \mathcal G(t) \cap \mathcal C_{\mathrm{ext}}(t)$, the angle $\theta_{(x, x(N^{1-\epsilon}))}$ formed between the vectors $x$ and $x(N^{1-\epsilon})$ satisfies
\EQ{
\abs{\theta_{(x, x(N^{1-\epsilon}))}} \le A_2 N^{\frac{\epsilon}{2} - \frac{1}{2}}
}
for some other uniform constant $A_2 >0$, as then the sine of the total angle between $x$ and the $x_1$-axis, i.e., $\frac{\abs{x_{2, 3}}}{\abs{x}}$, would satisfy~\eqref{eq:angleest}. To get a hold of $\theta_{(x, x(N^{1-\epsilon}))}$ we square both sides of the inequality defining the set $\mathcal G_+(t)$. For $x \in \mathcal G(t)$ we have
\EQ{
\abs{x}^2 - 2 x \cdot x(N^{1-\epsilon}) + \abs{x(N^{1-\epsilon})}^2 \le \left( R(\eta_0) + t - N^{1-\epsilon}\right)^2.
}
Using that $x \cdot x(N^{1-\epsilon}) = \abs{x} \abs{x(N^{1-\epsilon})} \cos \theta_{(x, x(N^{1-\epsilon}))}$, the above yields the inequality
\EQ{
-2 \abs{x} \abs{x(N^{1-\epsilon})} \cos\theta_{(x, x(N^{1-\epsilon}))}\le \left( R(\eta_0) + t - N^{1-\epsilon}\right)^2 - \abs{x}^2 - \abs{x(N^{1-\epsilon})}^2.
}
Bootstrapping, we may assume that $\theta_{(x, x(N^{1-\epsilon}))}$ is small enough to use the estimate
\EQ{
\cos \theta_{(x, x(N^{1-\epsilon}))} \le 1 - \frac{\theta_{(x, x(N^{1-\epsilon}))}^2}{4}.
}
Plugging the above in, we arrive at the inequality
\EQ{
\frac{\theta_{(x, x(N^{1-\epsilon}))}^2}{2} &\le 2 + \frac{1}{\abs{x}\abs{x(N^{1-\epsilon})}}\left( \left( R(\eta_0) + t - N^{1-\epsilon}\right)^2 - \abs{x}^2 - \abs{x(N^{1-\epsilon})}^2 \right) \\
& \le \frac{ 2 \abs{x}\abs{x(N^{1-\epsilon})} + \left( R(\eta_0) + t - N^{1-\epsilon}\right)^2 - \abs{x}^2 - \abs{x(N^{1-\epsilon})}^2}{\abs{x}\abs{x(N^{1-\epsilon})}}.
}
The requirement that $x \in \mathcal C_{\mathrm{ext}}(t)$, finite speed of propagation, and~\eqref{eq:xhans1} imply that
\EQ{
t - R(\eta_0) \le \abs{x} \le t + R(\eta_0) \quad \text{and} \quad N^{1-\epsilon} - R(\eta_0) \le \abs{x(N^{1-\epsilon})} \le N^{1-\epsilon} + R(\eta_0).
}
Plugging the above into the previous line gives
\begin{align}
\frac{\theta_{(x, x(N^{1-\epsilon}))}^2}{2} &\le \frac{ 2 (t + R(\eta_0))(N^{1-\epsilon} + R(\eta_0))+ \left( R(\eta_0) + t - N^{1-\epsilon}\right)^2}{(t - R(\eta_0)) ( N^{1-\epsilon} - R(\eta_0) )} \\
& \hspace{34mm} - \frac{ (t - R(\eta_0))^2 - (N^{1-\epsilon} - R(\eta_0))^2}{(t - R(\eta_0)) ( N^{1-\epsilon} - R(\eta_0) )}\\
& = \frac{ 6 t R(\eta_0) + 2 N^{1-\epsilon} R(\eta_0) + R(\eta_0)^2}{(t - R(\eta_0)) ( N^{1-\epsilon} - R(\eta_0) )} \\
& \lesssim \frac{1}{N^{1-\epsilon}} + \frac{1}{t}.
\end{align}
Taking the square root and noting that $t \ge N^{1-\epsilon}$, we arrive at
\EQ{
\abs{\theta_{(x, x(N^{1-\epsilon}))}} \lesssim \frac{1}{N^{\frac{1}{2} - \frac{\epsilon}{2}}},
}
as desired. This completes the proof.
\end{proof}
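For the reader's convenience we spell out the final triangle-inequality step, which is implicit in the proof above (with $A_1, A_2$ the uniform constants introduced there): for $x \in \mathcal G(t) \cap \mathcal C_{\mathrm{ext}}(t)$,
\[
\frac{\abs{x_{2,3}}}{\abs{x}} = \abs{\sin \theta_x} \le \abs{\theta_{x(N^{1-\epsilon})}} + \abs{\theta_{(x,\, x(N^{1-\epsilon}))}} \le (A_1 + A_2)\, N^{\frac{\epsilon}{2} - \frac{1}{2}},
\]
which is exactly the bound~\eqref{eq:angleest}.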
Next, we prove Lemma~\ref{lem:ang}.
\begin{proof}[Proof of Lemma~\ref{lem:ang}]
The kernel $K_{N, M}$ of the operator $S(t)\frac{1}{ \abs{\nabla}}\widehat{P}_{N, M}^2$ is given by
\EQ{
K_{N, M}(t, x) := \int e^{i x \cdot \xi} \abs{\xi}^{-2} e^{ it |\xi|} \phi^2\bigl(\tfrac{|\xi|}{N}\bigr) \phi^2 \bigl(\tfrac{|(\xi_2, \xi_3)|}{M}\bigr) \,\mathrm{d}\xi,
}
where $\phi \in C^{\infty}_{0}(\mathbb{R})$ satisfies $\phi(r) = 1$ if $1 \le r \le 2$ and $\operatorname{supp} \phi \subset (\frac{1}{4}, 4)$.
Now, recall that we are restricting to only those $x \in \mathcal S_N$, as defined in~\eqref{eq:calS}. We express any such $x$ in spherical coordinates
\EQ{
x = \abs{x}( \cos \theta_x, \sin \theta_x \cos \omega, \sin \theta_x \sin \omega),
}
where $\theta_x$ denotes the angle formed by $x$ and the unit vector in the $e_1$-direction. Recall that any $x \in \mathcal S_N$ satisfies
\EQ{ \label{eq:xa}
\frac{\abs{x_{2, 3}}}{\abs{x}} = \sin \theta_x \simeq \abs{\theta_x} \lesssim \frac{1}{N^{\frac{1}{2} - \frac{\epsilon}{2}}}.
}
Similarly, we change to the spherical variables
\EQ{
\xi = \abs{\xi}( \cos \theta_{\xi}, \sin \theta_\xi \cos \alpha, \sin \theta_\xi \sin \alpha)
}
in the integral defining $K_{N, M}$ and note that because of the frequency localization $\hat P_{N, M}$ we have
\EQ{\label{eq:xia}
\frac{\abs{\xi_{2, 3}}}{\abs{\xi}} = \sin \theta_\xi \simeq \frac{M}{N}.
}
This yields
\begin{align*}
K_{N, M}(t, x) = \int_0^{2\pi}\int_0^\pi\int_{N/4}^{4N} & e^{i \abs{x}\abs{\xi}f( \theta_x, \theta_\xi, \omega, \alpha)} \abs{\xi}^{-2} e^{ it |\xi|} \phi^2\bigl(\tfrac{|\xi|}{N}\bigr) \\
& \quad\times \phi^2 \bigl(\tfrac{ \abs{\xi}\sin \theta_\xi }{M}\bigr) \, \abs{\xi}^2 \sin \theta_\xi \,\mathrm{d} \abs{\xi}\, \mathrm{d} \theta_\xi \,\mathrm{d} \alpha,
\end{align*}
where the angular phase function $f( \theta_x, \theta_\xi, \omega, \alpha)$ is given by
\EQ{
f( \theta_x, \theta_\xi, \omega, \alpha) = \cos \theta_x \cos \theta_\xi + \sin \theta_x \sin \theta_\xi( \cos \omega \cos \alpha + \sin \omega \sin \alpha).
}
The idea is that the angular separation between $x$ and $\xi$ given by~\eqref{eq:xa} and~\eqref{eq:xia} allows us to integrate by parts in $\theta_\xi$. Indeed, using~\eqref{eq:xa} and~\eqref{eq:xia} we have the lower bound
\EQ{
\biggl| \frac{\mathrm{d}}{\mathrm{d} \theta_\xi} &[\abs{x} \abs{\xi} f( \theta_x, \theta_\xi, \omega, \alpha) ] \biggr|\\
&= \abs{x}\abs{\xi} \Bigl| -\cos \theta_x \sin \theta_\xi + \sin \theta_x \cos \theta_\xi( \cos \omega \cos \alpha + \sin \omega \sin \alpha ) \Bigr| \\
& \ge \abs{x}\abs{\xi} \Bigl( \frac{M}{N} - O\Bigl( \frac{1}{N^{\frac{1}{2}-\frac{\epsilon}{2}}}\Bigr) \Bigr) \\
& \gtrsim \abs{x} M.
}
Moreover, note that for any $L \in \mathbb{N}$ and $M\lesssim N$,
\EQ{
\Bigl|\frac{\mathrm{d}^L}{\mathrm{d} \theta_\xi^L} \Bigl(\phi^2 \bigl(\tfrac{ \abs{\xi}\sin \theta_\xi }{M}\bigr) \sin \theta_\xi \Bigr)\Bigr| \lesssim \frac{N^{L}}{M^{L}}.
}
Thus, integrating by parts $L$ times in $\theta_\xi$ yields the estimate
\EQ{
\abs{K_{N, M}(t, x)} \lesssim_L N^2 \frac{N^{L}}{M^{L}} \frac{1}{ \ang{M\abs{x}}^L}, \quad \forall t \ge N^{1-\epsilon}, \quad x \in \mathcal G(t) \cap \mathcal C_{\mathrm{ext}}(t),
}
as desired. \end{proof}
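Schematically, and suppressing the radial and azimuthal variables, each integration by parts in $\theta_\xi$ trades one derivative of the amplitude (which costs at most $N/M$) against the lower bound $\abs{x}M$ on the phase derivative; a heuristic count consistent with the bounds above is
\[
\Bigl| \int e^{i\abs{x}\abs{\xi} f}\, a(\theta_\xi)\, \mathrm{d}\theta_\xi \Bigr| \lesssim_L \Bigl( \frac{1}{\abs{x} M} \cdot \frac{N}{M} \Bigr)^{L} \sup_{\theta_\xi} \abs{a},
\]
and combining this with the trivial bound on the kernel in the regime $M\abs{x} \lesssim 1$ yields the stated estimate with the Japanese bracket $\ang{M\abs{x}}$.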
We can now estimate~\eqref{c_outout}. Here we will rely crucially on Lemma~\ref{l:angle} and
Lemma~\ref{lem:ang}. First we write
\begin{multline}
\bigl\langle S(t - \tau) \frac{1}{\abs{\nabla}} \widehat{P}_{N, M} 1_{\mathcal C_{\mathrm{ext}}}(\tau) G(u,v)(\tau) , \, 1_{\mathcal C_{\mathrm{ext}}}(t) G(u,v)(t) \bigr \rangle \\
= \ang{ K_{N, M}(t- \tau) \ast 1_{\mathcal C_{\mathrm{ext}}}(\tau) G(u,v)(\tau) , \, 1_{\mathcal C_{\mathrm{ext}}}(t) G(u,v)(t)}.
\end{multline}
We claim that, in fact, the above can be expressed as
\begin{multline}
\ang{ K_{N, M}(t- \tau) \ast 1_{\mathcal C_{\mathrm{ext}}}(\tau) G(\tau) , \, 1_{\mathcal C_{\mathrm{ext}}}(t) G(t)} \label{eq:SNttau} \\
= \ang{ \big(1_{\mathcal S_N}( \cdot) 1_{ \{\abs{ \cdot} \ge\frac{1}{2} \abs{t -\tau} \}}(\cdot) K_{N, M}(t- \tau) \big) \ast 1_{\mathcal C_{\mathrm{ext}}}(\tau) G(\tau) , \, 1_{\mathcal C_{\mathrm{ext}}}(t) G(t)},
\end{multline}
where the set $\mathcal S_N$ is defined in~\eqref{eq:calS}.
Indeed, note that above we have
\EQ{ \label{eq:xyout}
x \in \mathcal G_+(t) \cap \mathcal C_{\mathrm{ext}}(t) \quad \text{and} \quad y \in \mathcal G_-(\tau) \cap \mathcal C_{\mathrm{ext}}(\tau),
}
where $\mathcal G_{\pm}$ are as in~\eqref{eq:calG} and $\mathcal C_{\mathrm{ext}}$ is as in~\eqref{eq:calC}.
Thus,
\EQ{
\abs{x- y} \ge \abs{t - \tau} - 2 R(\eta_0) \ge \frac{1}{2} \abs{t - \tau}
}
as long as $N$ is chosen large enough. Similarly, by~\eqref{eq:xyout} we have $\abs{x-y} \ge \abs{x}$ and $\abs{x-y} \ge \abs{y}$, and thus
\EQ{
\frac{\abs{x_{2, 3} - y_{2, 3}}}{ \abs{x - y}} \le \frac{\abs{x_{2, 3}}}{\abs{x}} + \frac{\abs{y_{2, 3}}}{\abs{y}} \lesssim \frac{1}{N^{\frac{1}{2} - \frac{\epsilon}{2}}},
}
where in the last inequality above we used Lemma~\ref{l:angle}. This proves the equality in~\eqref{eq:SNttau}.
Now, let $q_{p}$ denote the Sobolev embedding exponent for $\dot H^{s_p}$, i.e., $q_p=\frac{3(p-1)}{2}$. Note that $q_p\geq p$ for $p\geq 3$ and $(q_p/p)'\geq 2$ for $p>0$ (where $x'$ denotes the H\"older dual of $x$). By H\"older's and Young's inequalities we then have
\begin{align*}
&\abs{\ang{ \big(1_{\mathcal S_N}( \cdot) 1_{ \{\abs{ \cdot} \ge\frac{1}{2} \abs{t -\tau} \}}(\cdot)K_{N, M}(t- \tau) \big) \ast 1_{\mathcal C_{\mathrm{ext}}}(\tau) G(\tau) , \, 1_{\mathcal C_{\mathrm{ext}}}(t) G(t)}} \\
&\quad \le \| 1_{\mathcal S_N}( \cdot) 1_{ \{\abs{ \cdot} \ge\frac{1}{2} \abs{t -\tau} \}}(\cdot) K_{N, M}(t- \tau) \|_{L_x^{\left(\frac{q_{p}}{p} \right)'/2}} \\
& \quad\times \| G(u, v)(t) \|_{L_x^{\frac{q_{p}}{p}}}\| G(u, v)(\tau) \|_{L_x^{\frac{q_{p}}{p}}}.
\end{align*}
Using~\eqref{equ:ang} we see that
\begin{align} \label{eq:MNL}
&\| 1_{\mathcal S_N}( \cdot) 1_{ \{\abs{ \cdot} \ge\frac{1}{2} \abs{t -\tau} \}}(\cdot) K_{N, M}(t- \tau) \|_{L_x^{\left(\frac{q_{p}}{p} \right)'/2}} \\
&\lesssim \frac{N^{L+1}}{M^{2L}} \left( \int_{\abs{x} \ge \frac{1}{2}\abs{t-\tau}} \frac{1}{ \abs{x}^{2L\left(\frac{q_{p}}{p} \right)'} } \, \mathrm{d} x \right)^{\frac{2}{\left(\frac{q_{p}}{p} \right)'}} \\
& \lesssim \frac{N^{L+1}}{M^{2L}} \frac{1}{ \abs{t-\tau}^{L-1}}.
\end{align}
Since $G(u, v) = F(u) - F(v)$, we have
\EQ{
\| G(u, v)(t) \|_{L_x^{\frac{q_p}{p}}} \lesssim \| u(t) \|_{L_x^{q_p}}^p + \| v(t) \|_{L_x^{q_p}}^p \lesssim \|u(t) \|_{\dot H^{s_p}_x}^p + \|v(t) \|_{ \dot H^{s_p}}^p.
}
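The last display uses only the pointwise bound $\abs{F(u)-F(v)} \lesssim \abs{u}^{p} + \abs{v}^{p}$ together with the Sobolev embedding behind the exponent $q_p$; as a quick consistency check (with $s_p = \frac{3}{2} - \frac{2}{p-1}$, as used throughout),
\[
\dot H^{s_p}_x(\mathbb{R}^3) \hookrightarrow L^{q}_x(\mathbb{R}^3) \quad \text{for} \quad \frac{1}{q} = \frac{1}{2} - \frac{s_p}{3} = \frac{2}{3(p-1)}, \quad \text{i.e.,} \quad q = q_p = \frac{3(p-1)}{2},
\]
so that $\| \abs{u}^p \|_{L_x^{q_p/p}} = \| u \|_{L_x^{q_p}}^{p} \lesssim \| u \|_{\dot H^{s_p}_x}^{p}$.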
Putting this all together, we arrive at the estimate
\begin{align}
&\abs{ \int_{-\infty}^{-N^{1 - \epsilon}} \hspace{-3mm} \int_{N^{1 - \epsilon}}^{\infty} \bigl\langle S(t - \tau) \widehat{P}_{N, M} 1_{\mathcal C_{\mathrm{ext}}}(\tau) G(u,v)(\tau) , \, 1_{\mathcal C_{\mathrm{ext}}}(t) G(u,v)(t) \bigr \rangle \,\mathrm{d} t \, \mathrm{d} \tau } \\
&\lesssim_L \int_{-\infty}^{-N^{1 - \epsilon}} \hspace{-3mm} \int_{N^{1 - \epsilon}}^{\infty} \frac{N^{L+1}}{M^{2L}} \frac{1}{ \abs{t-\tau}^{L-1}} \left(\|u(t) \|_{\dot H^{s_p}_x}^p + \|v(t) \|_{ \dot H^{s_p}}^p \right)\\
& \hspace{54mm} \times \left(\|u(\tau) \|_{\dot H^{s_p}_x}^p + \|v(\tau) \|_{ \dot H^{s_p}}^p \right) \mathrm{d} t \, \mathrm{d} \tau \\
& \lesssim_L N^{L + 1 + (1-\epsilon)(2- L)} M^{-2L} \left(\|u \|_{L^\infty_t \dot H^{s_p}}^{2p} + \|v \|_{L^\infty_t \dot H^{s_p}}^{2p} \right) \\
& \lesssim_L M^{-L},
\end{align}
where to obtain the last line we ensure that $\epsilon>0$ is small enough so that when $M \ge N^{\frac{s_p}{1-\nu}}$ we also have $M^L \ge N^{4+ \epsilon L}$. We have proved that
\EQ{
\eqref{c_outout} \lesssim_L M^{-L},
}
as desired. This completes the treatment of the $C_{\mathrm{ext} - \mathrm{ext}}$ term.
\subsection*{The term \texorpdfstring{$C_{{\mathrm{int}} - {\mathrm{int}}}$}{C int-int}:}
Here we will use a combination of arguments based on the sharp Huygens principle and the techniques developed to deal with the previous term $C_{\mathrm{ext}-\mathrm{ext}}$.
First we record an estimate for the kernel of the modified frequency projection.
\begin{lem}
Let $p_{N, M}^2$ denote the kernel of the operator $\widehat{P}_{N, M}^2$. Then,
\EQ{ \label{eq:pnm}
\abs{p_{N, M}^2 (x)} \lesssim_L \frac{N^3}{\ang{N \abs{x}}^L} + \frac{N^3}{\ang{M \abs{x}}^L}.
}
\end{lem}
Next, consider the following decomposition of the forward cone centered at $(t, x) = (N^{1-\epsilon}, x(N^{1-\epsilon}))$ of width $R(\eta_0)$, i.e., the set
\EQ{
\mathcal G_+ := \bigcup_{t \ge N^{1-\epsilon}}\mathcal G_+(t),
}
where $\mathcal G_+(t)$ is defined as in~\eqref{eq:calG}. This decomposition is depicted in Figure~\ref{f:CG}.
We write
\EQ{
\mathcal G_+ = \bigcup_{j \ge 1} \mathcal C_{+, j} \cup \bigcup_{j \ge 0} \mathcal G_{+, j}.
}
We define the sets $\mathcal C_{+, j}, \mathcal G_{+, j}$ as follows. First, set
\EQ{
\widetilde{\mathcal C}_{+, 1}&:= \{(t, x) \mid \abs{x - x(2N^{1-\epsilon})} \ge R(\eta_0) + t - 2N^{1-\epsilon} , \\
& \hspace{74mm} t \ge 2N^{1-\epsilon} \} \cap \mathcal G_+
}
and for $j \ge 2$,
\EQ{
\widetilde{\mathcal C}_{+, j}&:= \bigg\{\{ (t, x) \mid \abs{x - x(2^jN^{1-\epsilon})} \ge R(\eta_0) + t - 2^jN^{1-\epsilon}, \\
& \hspace{64mm} t \ge 2^jN^{1-\epsilon} \} \cap \mathcal G_+ \bigg\} \setminus \mathcal C_{+, j-1}.
}
For $j \ge 0$, define the sets $\widetilde{\mathcal G}_{+, j}$ to be the regions
\EQ{
\widetilde{\mathcal G}_{+, j} &= \{(t, x) \mid \abs{x - x(2^jN^{1-\epsilon})} \le R(\eta_0) + t - 2^jN^{1-\epsilon}, \\
& \hspace{64mm} 2^j N^{1-\epsilon} \le t \le 2^{j+1}N^{1-\epsilon} \} \cap \mathcal G_+.
}
Then we define
\EQ{
&\mathcal C_{+, j} := \widetilde{\mathcal C}_{+, j} \cap \{ (t, x) \mid \abs{x} \le t - 2^{j}N^{1-\epsilon}, \quad t \ge 2^{j}N^{1-\epsilon}\}, \\
&\mathcal G_{+, j}:= \widetilde{\mathcal G}_{+, j} \cup [\widetilde{\mathcal C}_{+, j+1} \setminus \mathcal C_{+, j+1}].
}
The regions $\mathcal C_{+,j}$ and $\mathcal G_{+, j}$ are depicted in Figure~\ref{f:CG}.
\begin{figure}[h]
\centering
\includegraphics[width=14cm]{drawing_hans_2.pdf}
\caption{A depiction of the first few regions $\mathcal C_{+, j}$ and $\mathcal G_{+, j}$ within the region $C$.} \label{f:CG}
\end{figure}
Now, split the integrand of~\eqref{c_inin} into four pieces,
\begin{align}
\eqref{c_inin} = \int_{-\infty}^{-N^{1 - \epsilon}} \hspace{-3mm} \int_{N^{1 - \epsilon}}^{\infty} (I + II + III + IV) \,\mathrm{d} t\, \mathrm{d} \tau,
\end{align}
where
\begin{align}
\hspace{10mm} I &= \sum_{j,k} \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G](\tau) , [1_{\mathcal C_{+, k}}1_{\mathcal C_{{\mathrm{int}}}} G](t) \bigr \rangle,
\label{eq:CC} \\
II&= \sum_{j,k} \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G](\tau) , [1_{\mathcal G_{+, k}}1_{\mathcal C_{{\mathrm{int}}}} G](t) \bigr \rangle, \label{eq:GC} \\
III &=\sum_{j,k} \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal G_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G](\tau) , [1_{\mathcal C_{+, k}}1_{\mathcal C_{{\mathrm{int}}}}G](t) \bigr \rangle, \label{eq:CG} \\
IV &= \sum_{j,k} \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal G_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G](\tau) , [1_{\mathcal G_{+, k}}1_{\mathcal C_{{\mathrm{int}}}} G](t) \bigr \rangle. \label{eq:GG}
\end{align}
First we estimate the term~\eqref{eq:CC} above. The key points are the following. First,
by the support properties of $1_{\mathcal C_{+, k}}1_{\mathcal C_{{\mathrm{int}}}} G(t)$ and $1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(\tau)$ and the sharp Huygens principle, we must have
\begin{align} \label{eq:gjgk}
&\abs{x - y} \gtrsim (2^j +2^k) N^{1-\epsilon} \\
& \forall\, x \in \operatorname{supp} (1_{\mathcal C_{+, k}}1_{\mathcal C_{{\mathrm{int}}}} G(u,v))(t), \, \, y \in \operatorname{supp} [S(t- \tau) 1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) .
\end{align}
Second, by the definitions of the spacetime cutoffs $1_{\mathcal C_{-, j}}$ and $1_{\mathcal C_{+, k}}$, the functions $1_{\mathcal C_{-, j}} u(\tau)$ and $1_{\mathcal C_{+, k}} u(t)$ are restricted to the exterior \emph{small data} regime, and we thus have
\EQ{ \label{eq:cjck}
\|1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v) \|_{\mathcal{N}( (-\infty, -2^j N^{1-\epsilon}])} \lesssim \| \vec u \|_{L^\infty_t \dot{\mathcal{H}}^{s_p}} \lesssim 1\\
\|1_{\mathcal C_{+, k}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v) \|_{\mathcal{N}( [ 2^k N^{1-\epsilon}, \infty))} \lesssim \| \vec u \|_{L^\infty_t \dot{\mathcal{H}}^{s_p}} \lesssim 1,
}
where $\mathcal{N}$ denotes suitable dual spaces.
We argue as follows. For any $q\ge2$, and up to fattening the projection $\hat P_{N, M}$, we have
\begin{align*}
&\abs{\bigl\langle \widehat{P}_{N, M}^2\frac{1}{\abs{\nabla}} S(t - \tau) [1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) , \, [1_{\mathcal C_{+, k}}1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](t) \bigr \rangle} \\
&\lesssim \| 1_{ \{\abs{ \cdot} \gtrsim (2^j +2^k) N^{1-\epsilon} \} }p_{N, M}^2 \|_{L^1} \\
& \quad\times \| P_{N} \abs{\nabla}^{-1-s_p + \frac{2}{q}}S(t - \tau) [1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) \|_{L^{q}_x} \\
&\quad \times \| \abs{\nabla}^{s_p - \frac{2}{q}} [1_{\mathcal C_{+, k}}1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](t) \|_{L^{q'}}.
\end{align*}
We estimate the factors above as follows. Note that by~\eqref{eq:pnm} and~\eqref{eq:gjgk} (and the lower bound on $M$), we have
\EQ{
\| 1_{ \{\abs{ \cdot} \gtrsim (2^j +2^k) N^{1-\epsilon} \} }p_{N, M}^2 \|_{L^1} \lesssim_L \frac{N^3}{ [(2^j + 2^k) N^{1-\epsilon}]^{L-1}}.
}
By the dispersive estimate for the wave equation (noting that $\abs{t-\tau} \ge 2 N^{1-\epsilon}$), we have
\begin{align}
& \| P_{N} \abs{\nabla}^{-1-s_p + \frac{2}{q}}S(t - \tau) [1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) \|_{L^{q}_x} \\
&\lesssim \frac{1}{\abs{t-\tau}^{1- \frac{2}{q}}} N^{1-\frac{4}{q}} \| P_N \abs{\nabla}^{-1-s_p + \frac{2}{q}}[1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) \|_{L^{q'}_x} \\
& \lesssim \frac{1}{\abs{t-\tau}^{1- \frac{2}{q}}} N^{-2s_p} \| P_N \abs{\nabla}^{s_p - \frac{2}{q}}[1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) \|_{L^{q'}_x}.
\end{align}
Thus, using the above, Bernstein's inequality, the Hardy--Littlewood--Sobolev inequality, and~\eqref{eq:cjck} in the last line below, we have
\begin{align}
&\int_{N^{1-\epsilon}}^\infty \int_{-\infty}^{-N^{1-\epsilon}}\eqref{eq:CC} \, \mathrm{d} t \, \mathrm{d} \tau \\
& \lesssim_L \sum_{j, k \ge 1} \frac{N^3N^{-2s_p}}{ [(2^j + 2^k) N^{1-\epsilon}]^{L-1}} \int_{N^{1-\epsilon}}^\infty \int_{-\infty}^{-N^{1-\epsilon}} \hspace{-4mm}\Big( \frac{1}{\abs{t-\tau}^{1- \frac{2}{q}}}
\\
& \quad \times \| \abs{\nabla}^{s_p- \frac{2}{q}} [1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) \|_{L^{q'}_x} \\
& \, \quad \qquad \times \,\quad \| \abs{\nabla}^{s_p- \frac{2}{q}} [1_{\mathcal C_{+, k}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](t) \|_{L^{q'}_x} \Big)
\, \mathrm{d} t \, \mathrm{d} \tau \\
& \lesssim_L \sum_{j, k \ge 1} \frac{N^3N^{-2s_p}}{ [(2^j + 2^k) N^{1-\epsilon}]^{L-1}} \| \abs{\nabla}^{s_p- \frac{2}{q}} [1_{\mathcal C_{+, k}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)] \|_{L_t^{\frac{2q}{q+2}}L^{q'}_x} \\
& \quad \qquad \times \quad \| \abs{\nabla}^{s_p- \frac{2}{q}} [1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)] \|_{L_t^{\frac{2q}{q+2}}L^{q'}_x} \\
& \lesssim_L \sum_{j, k \ge 1} \frac{N^3N^{-2s_p}}{ [(2^j + 2^k) N^{1-\epsilon}]^{L-1}} \lesssim_L N^{-L/2},
\end{align}
where in the second to last line we fixed $q>2$ and used that the norms above are dual sharp admissible Strichartz pairs (e.g., one can take $q=4$).
Next, consider the term~\eqref{eq:GG}. Here we cannot rely exclusively on separation of supports because the $S(t-\tau)$ evolution of the term localized to $\mathcal G_{-, j}$ has some of its support within $2 R(\eta_0)$ of the term localized to $\mathcal G_{+, k}$ for all $j, k$. The saving grace is that the pieces of the supports of $S(t - \tau) [1_{\mathcal G_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau)$ and $[1_{\mathcal G_{+, k}}1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](t) $ that are close to each other (say within $2^{\alpha j} + 2^{\alpha k}$ for some small parameter $\alpha>0$) come along with \emph{angular separation} in the sense of Lemma~\ref{lem:ang}. To make this precise we must further subdivide $\mathcal G_{\pm, k}$ as follows.
Let $\alpha>0$ be a small parameter to be fixed below. Let
\EQ{
&\mathcal G_{+, k, in}:= \mathcal G_{+, k} \cap \{ (t, x) \mid \abs{x} \le t - 2^{\alpha k}N^{\alpha(1-\epsilon)} \}, \\
& \mathcal G_{+, k, out} := \mathcal G_{+, k} \cap \{ (t, x) \mid \abs{x} \ge t - 2^{\alpha k}N^{\alpha(1-\epsilon)} \}.
}
We decompose~\eqref{eq:GG} as follows, noting that symmetry in $j, k$ means it suffices to consider only the sum for $j \ge k$. We write \eqref{eq:GG} in the form
\begin{align}
&\sum \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal G_{-, j, in}} 1_{\mathcal C_{{\mathrm{int}}}} G] , \, [1_{\mathcal G_{+, k, in}}1_{\mathcal C_{{\mathrm{int}}}} G] \bigr \rangle \label{eq:Ginin}
\\
& +\sum \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal G_{-, j, in}} 1_{\mathcal C_{{\mathrm{int}}}} G] , \, [1_{\mathcal G_{+, k, out}}1_{\mathcal C_{{\mathrm{int}}}} G] \bigr \rangle \label{eq:Ginout} \\
& + \sum \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal G_{-, j, out}} 1_{\mathcal C_{{\mathrm{int}}}} G] , \, [1_{\mathcal G_{+, k, in}}1_{\mathcal C_{{\mathrm{int}}}} G]\bigr \rangle \label{eq:Goutin}
\\
& + \sum \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal G_{-, j, out}} 1_{\mathcal C_{{\mathrm{int}}}} G] , \, [1_{\mathcal G_{+, k, out}}1_{\mathcal C_{{\mathrm{int}}}} G] \bigr \rangle \label{eq:Goutout} ,
\end{align}
where the sums are over $j,k\geq 0$ with $j\geq k$, $G=G(u,v)$, and the pairings are evaluated at $\tau,t$.
The key point will be that on the outer regions $\mathcal G_{-, j, out}$, $\mathcal G_{+, k, out}$ we can recover the same angular separation used to treat the term $C_{\mathrm{ext}-\mathrm{ext}}$, and on the inner regions $\mathcal G_{-, j, in}$ and $\mathcal G_{+, k, in}$ we obtain sufficient separation in support between the two factors after evolution by $S(t-\tau)$ to get enough decay in $j, k$ after the application of $\hat P_{N, M}^2 \frac{1}{\abs{\nabla}}$.
\begin{lem}[Angular separation in $\mathcal G_{\pm, j, out}$] \label{l:Gout} Let $\alpha>0$ and let $\mathcal S_{N, \alpha}$ be the set
\EQ{ \label{eq:SNal}
\mathcal S_{N, \alpha} := \biggl \{ x \in \mathbb{R}^3 \mid \frac{\abs{x_{2, 3}}}{\abs{x}} \lesssim \frac{1}{N^{\frac{(1-\alpha)(1-\epsilon)}{2} }} \biggr\}.
}
Then there exists $\alpha>0$ small enough and $N_0>0$ large enough so that for all $x \in \mathcal G_{\pm, j, out}$ we have
\EQ{ \label{eq:angGout}
x \in \mathcal S_{N, \alpha} \mand \frac{1}{N^{\frac{(1-\alpha)(1-\epsilon)}{2} }} \ll \frac{M}{N}
}
for all $ N \ge N_0$ and $M \ge N^{\frac{s_p}{1-\nu}}$ and for all $j \ge 0$.
\end{lem}
\begin{proof}
It suffices to consider $x \in\mathcal G_{+, j, out}$. The proof is nearly identical to the proof of Lemma~\ref{l:angle}, but here we have allowed the region $\mathcal G_{+, j, out}$ to deviate farther from the boundary of the cone as $j$ (and hence $t$) gets larger. As in Lemma~\ref{lem:ang} we have
\EQ{
\abs{\sin(\theta_{x(2^j N^{1-\epsilon})})} \simeq \abs{\theta_{x(2^jN^{1-\epsilon})}} \le A_1 N^{\frac{\epsilon}{2} - \frac{1}{2}}
}
independently of $j \ge 0$. To finish the proof it suffices to show that for any $x \in \mathcal G_{+, j, out}$, the angle $\theta_{(x, x(2^jN^{1-\epsilon}))}$ formed between the vectors $x$ and $x(2^jN^{1-\epsilon})$ satisfies
\EQ{
\abs{\theta_{(x, x(2^jN^{1-\epsilon}))}} \le A_2 \frac{1}{N^{\frac{(1-\alpha)(1-\epsilon)}{2} }}
}
for some other uniform constant $A_2 >0$, as then the sine of the total angle between $x$ and the $x_1$-axis, i.e., $\frac{\abs{x_{2, 3}}}{\abs{x}}$, would satisfy~\eqref{eq:SNal}. Note that for any $(t, x) \in \mathcal G_{+, j, out}$,
\EQ{
2^{j}N^{1-\epsilon} - 2^{\alpha j} N^{\alpha(1-\epsilon)} \le \abs{x} \le 2^{j+1} N^{1-\epsilon} + 2^{\alpha j} N^{\alpha(1-\epsilon)}.
}
Arguing as in the proof of Lemma~\ref{l:angle} we see that for any $(t, x) \in \mathcal G_{+, j, out} $,
\EQ{
\theta_{(x, x(2^jN^{1-\epsilon}))}^2 \lesssim \frac{ 2 t\, 2^{\alpha j} N^{\alpha(1-\epsilon)}}{ (t - 2^{\alpha j} N^{\alpha(1-\epsilon)})( 2^{j}N^{1-\epsilon})} \lesssim \frac{1}{ 2^{(1-\alpha)j} N^{(1-\alpha){(1-\epsilon)}}},
}
as desired. \end{proof}
With Lemma~\ref{l:Gout} in hand, we can estimate the term~\eqref{eq:Goutout} in an identical fashion to the term~\eqref{c_outout}, noting that the applications of Lemma~\ref{lem:ang} are still valid in this new setting because for $x \in \mathcal G_{+, k, out}$ and $y \in \mathcal G_{-, j, out}$ we have
\EQ{
\frac{ \abs{x_{2, 3} - y_{2, 3}}}{ \abs{x-y}} \lesssim \frac{\abs{x_{2, 3}}}{\abs{x}} + \frac{\abs{y_{2, 3}}}{\abs{y}} \lesssim \frac{1}{N^{(1-\alpha)(1-\epsilon)}} \ll \frac{M}{N},
}
i.e., sufficient angular separation, since the Fourier variable $\xi$ satisfies
\[
\abs{\xi_{2, 3}}/ \abs{\xi} \simeq M/N.
\]
Moreover we have
\EQ{
\abs{x-y} \simeq (2^{j} + 2^k) N^{1-\epsilon} \mif x \in \mathcal G_{+, k, out}, \, y \in \mathcal G_{-, j, out}.
}
This means that we are free to write
\EQ{
& \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal G_{-, j, out}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) , \, [1_{\mathcal G_{+, k, out}}1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](t) \bigr \rangle \\
& = \bigl\langle [1_{\mathcal S_{N, \alpha}} 1_{\{\abs{\cdot} \simeq (2^{j} + 2^k) N^{1-\epsilon} \}} K_{N, M} ] \ast [1_{\mathcal G_{-, j, out}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) , \\ & \quad\quad\quad \, [1_{\mathcal G_{+, k, out}}1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](t) \bigr \rangle.
}
Mimicking the estimates of~\eqref{eq:Goutout} we see that, as in~\eqref{eq:MNL}, we have
\EQ{
\| 1_{\mathcal S_{N, \alpha}}&( \cdot) 1_{\{\abs{\cdot} \simeq (2^{j} + 2^k) N^{1-\epsilon} \}} (\cdot) K_{N, M}(t- \tau) \|_{L_x^{\left(\frac{q_p}{p} \right)'/2}} \\
& \lesssim_L \frac{N^{L+1}}{M^{2L}} \frac{1}{[(2^j + 2^k)N^{1-\epsilon}]^L}.
}
This allows us to sum in $j, k$, and we obtain
\EQ{
\int_{- \infty}^{- N^{1-\epsilon}} \int_{ N^{1-\epsilon}}^{\infty} \eqref{eq:Goutout} \, \mathrm{d} t \, \mathrm{d} \tau \lesssim_L \frac{1}{M^L}.
}
To handle the term~\eqref{eq:Ginin} we rely on the following observation: by the support properties of $1_{\mathcal C_{+, k}}1_{\mathcal C_{{\mathrm{int}}}}(\tau, y)$, $1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}}(t, y) $ and the sharp Huygens principle, we must have
\begin{equation} \label{eq:gjgkin}
\abs{x - y} \gtrsim (2^j +2^k) N^{1-\epsilon}
\end{equation}
for all
\[
x \in \supp (1_{\mathcal G_{+, k, in}}1_{\mathcal C_{{\mathrm{int}}}} G(u,v))(t)
\]
and
\[
y \in \supp S(t- \tau) [1_{\mathcal G_{-, j, in}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau).
\]
Hence,
\begin{align}
&\abs{\bigl\langle \widehat{P}_{N, M}^2\frac{1}{\abs{\nabla}} S(t - \tau) [1_{\mathcal G_{-, j, in}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) , \, [1_{\mathcal G_{+, k, in}}1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](t) \bigr \rangle} \\
& \lesssim \| 1_{ \{\abs{ \cdot} \gtrsim (2^j +2^k) N^{1-\epsilon} \} }p_{N, M} \|_{L_x^{\left(\frac{q_p}{p}\right)'/2}} N^{-1} \\
& \quad \times \| P_NS(t - \tau) [1_{\mathcal C_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) \|_{L^{\frac{q_p}{p}}_x}\| [1_{\mathcal C_{+, k}}1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](t) \|_{L_x^{\frac{q_p}{p}}} \\
& \lesssim_L \frac{1}{[(2^j+ 2^k)N^{1-\epsilon}]^L}
\left(\|u \|_{L^\infty_t \dot H^{s_p}}^{2p} + \|v \|_{L^\infty_t \dot H^{s_p}}^{2p} \right).
\end{align}
Hence,
\EQ{
\int_{- \infty}^{- N^{1-\epsilon}} \int_{ N^{1-\epsilon}}^{\infty} \eqref{eq:Ginin} \, \mathrm{d} t \, \mathrm{d} \tau \lesssim_{L} \frac{1}{N^{L}}.
}
Next, for the term~\eqref{eq:Ginout} we note that the same argument used to treat~\eqref{eq:Ginin} applies. However, here we only obtain spatial separation of $2^j N^{1-\epsilon}$. Nonetheless, since $j \ge k$ we have
\EQ{
2^jN^{1-\epsilon} \simeq (2^j + 2^k) N^{1-\epsilon}
}
and hence we are able to sum in $j, k$, obtaining
\EQ{
\int_{- \infty}^{- N^{1-\epsilon}} \int_{ N^{1-\epsilon}}^{\infty} \eqref{eq:Ginout} \, \mathrm{d} t \, \mathrm{d} \tau \lesssim_L \frac{1}{M^L}.
}
Lastly, consider the term~\eqref{eq:Goutin}. Here we use a mix of the arguments used to control~\eqref{eq:Ginin} and~\eqref{eq:Goutout}. In particular, we split the sum into two pieces, noting that if $j \simeq k$ then the same argument used to estimate~\eqref{eq:Ginin} applies, since the spatial supports are separated by $\simeq 2^{k} N^{1-\epsilon} \simeq (2^j + 2^k) N^{1-\epsilon}$. If $j \gg k$ we obtain enough angular separation to use the same argument used to bound~\eqref{eq:Goutout}, since in this case we have
\EQ{
\frac{ \abs{x_{2, 3} - y_{2, 3}}}{ \abs{x-y}} \simeq \frac{\abs{y_{2, 3}}}{\abs{y}} \lesssim \frac{1}{N^{(1-\alpha)(1-\epsilon)}} \ll \frac{M}{N}
}
for all $x \in \mathcal G_{+, k, in}$ and $y \in \mathcal G_{-, j, out}$ as long as $j \gg k$.
We obtain
\EQ{
\int_{- \infty}^{- N^{1-\epsilon}} \int_{ N^{1-\epsilon}}^{\infty} \eqref{eq:Goutin} \, \mathrm{d} t \, \mathrm{d} \tau \lesssim_{L} \frac{1}{N^{L}} + \frac{1}{M^L}.
}
This completes the estimation of \eqref{eq:GG}.
At this point, the mixed terms \eqref{eq:GC} and \eqref{eq:CG} (i.e., the remaining contributions to the $C_{{\mathrm{int}}-{\mathrm{int}}}$ term), as well as the $C_{{\mathrm{int}}-\mathrm{ext}}$ and $C_{\mathrm{ext}-{\mathrm{int}}}$ terms (\eqref{c_outin} and \eqref{c_inout}), can be handled with a combination of the techniques developed above. For example, after further subdividing $\mathcal G_{-}$ into the regions $\mathcal C_{-, j}$ and $\mathcal G_{-, j}$, consider the term of the form
\EQ{
&\sum_{j \ge 0} \int_{-2^{j+2} N^{1-\epsilon}}^{-2^j N^{1-\epsilon}} \int_{N^{1-\epsilon}}^\infty \\
& \quad \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal G_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) , \, [1_{\mathcal C_{\mathrm{ext}}}G(u,v)](t) \bigr \rangle\, \mathrm{d} t \, \mathrm{d} \tau.
}
Fixing a large constant $K_1>0$, we can divide the above into two further pieces, namely
\EQ{
& \sum_{j \ge 0} \int_{-2^{j+2} N^{1-\epsilon}}^{-2^j N^{1-\epsilon}} \int_{N^{1-\epsilon}}^{K_1 2^{j} N^{1-\epsilon}} \\
&\quad \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal G_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) , \, [1_{\mathcal C_{\mathrm{ext}}}G(u,v)](t) \bigr \rangle\, \mathrm{d} t \, \mathrm{d} \tau \\
& + \sum_{j \ge 0} \int_{-2^{j+2} N^{1-\epsilon}}^{-2^j N^{1-\epsilon}} \int_{K_1 2^j N^{1-\epsilon}}^\infty \\
& \quad \bigl\langle \widehat{P}_{N, M}^2 \frac{1}{\abs{\nabla}}S(t - \tau) [1_{\mathcal G_{-, j}} 1_{\mathcal C_{{\mathrm{int}}}} G(u,v)](\tau) , \, [1_{\mathcal C_{\mathrm{ext}}}G(u,v)](t) \bigr \rangle\, \mathrm{d} t \, \mathrm{d} \tau.
}
For the first term on the right-hand side above we can copy the argument used to estimate~\eqref{eq:Ginin}. Indeed, by the sharp Huygens principle the spatial supports (before application of $P_{N, M}$) are separated for each fixed $t, \tau$ by a distance of at least $\simeq 2^{j}N^{1-\epsilon} \simeq_{K_1} \abs{t -\tau}$. For the second term above we can choose $K_1 \gg 1$ large enough to guarantee enough angular separation between the spatial and Fourier variables to mimic a combination of the arguments used to estimate~\eqref{c_outout} (where one integrates in $t$) and~\eqref{eq:Goutout} (where one sums in $j$). The remaining interactions are handled similarly. We omit the details.
We have thus proved that
\EQ{
\abs{\ang{C, C'} } \lesssim_L \frac{1}{N^L} + \frac{1}{M^L} \lesssim_L \frac{1}{M^L},
}
which finally completes the proof of Lemma~\ref{l:CC'}.
\end{proof}
\medskip
We are now prepared to conclude the frequency envelope argument and the proof of Proposition~\ref{padditional}.
\begin{proof}[Proof of Proposition~\ref{padditional}]
Recall that we are trying to prove that
\begin{equation}
\sum_{N \geq N_0}\sum_{C_0 N^{\frac{s_p}{1-\nu}} \leq M \leq N} M^{2(1-\nu)}\|\widehat P_{N, \geq M}u(t)\|^2_{L_x^2} \lesssim 1,
\end{equation}
for some fixed $C_0 > 0$, for which it suffices to prove that
\begin{equation}
\sum_{N \geq N_0}\sum_{C_0 N^{\frac{s_p}{1-\nu}} \leq M \leq N} M^{2(1-\nu)}N^{-2 s_p} \|\widehat P_{N, \geq M}u(t)\|^2_{\dot H^{s_p}_x} \lesssim 1.
\end{equation}
Once again, by time-translation invariance, we argue for $t = 0$. Recall that
\begin{align}
| \langle \widehat{P}_{N, \geq M} u(0),\, \widehat{P}_{N, \geq M} u(0) \rangle_{\dot H^{s_p}_x}|& \lesssim \|A\|_{\dot H^{s_p}}^2 + \|A'\|_{\dot H^{s_p}}^2 \\
& \quad +\|B\|_{\dot H^{s_p}}^2+\|B'\|_{\dot H^{s_p}}^2+|\langle C,C'\rangle_{\dot H^{s_p}_x}|,
\end{align}
and hence by Lemmas \ref{l:AA'}, \ref{l:BB'} and \ref{l:CC'}, we obtain
\begin{align}\label{equ:first_env_bd}
\gamma_{N,M}(0) &= \sum_{N',M' \geq M }\min\biggl\{\frac{N}{N'},\frac{N'}{N}\biggr\}^{\sigma}\left( \frac{M'}{M} \right)^{\sigma} \| \widehat{P}_{N', \geq M'} u(0)\|_{\dot H^{s_p}_x}^2 \\
& \lesssim \eta_0^{p-1} \alpha_{N,M} + \eta_0^{p-1} \beta_{N,M} + M^{-L}.
\end{align}
Furthermore, by \eqref{alpha_bds} and \eqref{equ:beta_bds},
\[
\alpha_{N,M} \lesssim \gamma_{N, M}(N^{1-\epsilon}) + \eta_0^{p-1} \alpha_{N,M},
\]
and
\[
\gamma_{N, M}(N^{1-\epsilon}) + \beta_{N,M} \lesssim \gamma_{N, M}(0) + \eta_0^{p-1} \beta_{N,M}.
\]
Hence
\[
\beta_{N,M} \lesssim \gamma_{N, M}(0) , \qquad \alpha_{N,M} \lesssim \gamma_{N, M}(N^{1-\epsilon}) \lesssim \gamma_{N, M}(0),
\]
and we conclude from \eqref{equ:first_env_bd} that
\begin{align}\label{equ:second_env_bd}
\gamma_{N,M}(0) \lesssim \eta_0^{p-1} \gamma_{N, M}(0) + M^{-L},
\end{align}
which implies
\begin{align}\label{equ:final_env_bds}
\gamma_{N,M}(0) \lesssim M^{-L}
\end{align}
for any $L \gg 1$. Consequently, we have established that
\begin{align}
\sum_{N \geq N_0}\sum_{M \geq C_0 N^{\frac{s_p}{1-\nu}}} M^{2(1-\nu)}N^{-2 s_p} \gamma_{N,M}(0)^2 \lesssim 1,
\end{align}
which concludes the proof.
\end{proof}
\medskip
\centerline{\scshape Benjamin Dodson}
\smallskip
{\footnotesize
\centerline{Department of Mathematics, Johns Hopkins University}
\centerline{404 Krieger Hall, Baltimore, MD 21218}
\centerline{\email{[email protected]}}
}
\medskip
\centerline{\scshape Andrew Lawrie}
\smallskip
{\footnotesize
\centerline{Department of Mathematics, Massachusetts Institute of Technology}
\centerline{77 Massachusetts Ave, 2-267, Cambridge, MA 02139, U.S.A.}
\centerline{\email{[email protected]}}
}
\medskip
\centerline{\scshape Dana Mendelson}
\smallskip
{\footnotesize
\centerline{Department of Mathematics, University of Chicago}
\centerline{5734 S. University Avenue, Chicago, IL 60637}
\centerline{\email{[email protected]}}
}
\medskip
\centerline{\scshape Jason Murphy}
\smallskip
{\footnotesize
\centerline{Department of Mathematics and Statistics, Missouri University of Science and Technology}
\centerline{400 West 12th St., Rolla, MO 65409}
\centerline{\email{[email protected]}}
}
\epsnd{document}
\begin{document}
\title{Independent Distributions on a Multi-Branching AND-OR Tree of Height 2
}
\author{Mika Shigemizu${}^{1}$, Toshio Suzuki${}^{2}$\thanks{Corresponding author. This work was partially supported by Japan Society for the Promotion of Science (JSPS) KAKENHI (C) 16K05255.} and Koki Usami${}^{3}$
\\
Department of Mathematical Sciences,
Tokyo Metropolitan University, \\
Minami-Ohsawa, Hachioji, Tokyo 192-0397, Japan\\
1: [email protected]
\quad
2: [email protected]
\\
3: [email protected]
}
\date{\today}
\maketitle
\begin{abstract}
We investigate an AND-OR tree $T$ and a probability distribution $d$
on the truth assignments to the leaves.
Tarsi (1983) showed that if $d$ is an independent and identical distribution (IID)
such that the probability of a leaf having value 0 is neither 0 nor 1,
then, under certain assumptions, there exists an optimal
algorithm that is depth-first.
We investigate the case where $d$ is an independent distribution (ID)
and the probability may depend on the leaf.
It is known that in this general case, if the height is at least 3,
the Tarsi-type result does not hold.
It is also known that for a complete binary tree of height 2, the Tarsi-type result
does hold.
In this paper, we ask whether the Tarsi-type result holds for an AND-OR tree of
height 2. Here, a child node of the root is either an OR-gate or a leaf;
the number of child nodes of an internal node is arbitrary,
and may depend on the internal node.
We give an affirmative answer.
Our strategy of proof is to reduce the problem to the case of
directional algorithms. We perform induction on the number of leaves,
and modify Tarsi's method to suit height-2 trees.
We also discuss why our proof does not apply to height-3 trees.
Keywords:
Depth-first algorithm; Independent distribution; Multi-branching tree; Computational complexity; Analysis of algorithms
MSC[2010] 68T20; 68W40
\end{abstract}
\section{Introduction}
A depth-first algorithm is a well-known type of tree search algorithm.
An algorithm $A$ on a tree $T$ is \emph{depth-first} if the following holds for each internal node $x$ of $T$:
once $A$ probes a leaf that is a descendant of $x$, $A$ does not probe leaves
that are not descendants of $x$ until $A$ finds the value of $x$.
With respect to the analysis of algorithms, the concept of a depth-first algorithm has the advantage of being well-suited for induction on subtrees. An example of this type of induction may be found in our former paper \cite{SN15}.
Thus, given a problem on a tree, it is a theoretically interesting question
to ask whether there exists an optimal algorithm that is depth-first.
Here, the cost denotes the (expected) number of leaves probed during
the computation, and an algorithm is optimal if it achieves the minimum cost
among the algorithms considered in the question.
If the associated evaluation function of a mini-max tree is bi-valued
and the label of the root is MIN (MAX, respectively), the tree is equivalent to
an AND-OR tree (an OR-AND tree, respectively).
In other words, the root is labeled by AND (OR), and
AND layers and OR layers alternate.
Each leaf has a truth value 0 or 1, where we identify 0 with false,
and 1 with true.
A fundamental result on optimal algorithms on
AND-OR trees is due to Tarsi.
A tree is \emph{balanced} (in the sense of Tarsi) if
(1) any two internal nodes of the same depth (distance from the root)
have the same number of child nodes, and
(2) all of the leaves have the same depth.
A probability distribution $d$ on the truth assignments to the leaves
is an \emph{independent distribution} (ID)
if the probability of each leaf having value 0 may depend on
the leaf, and the values of the leaves are determined independently.
If, in addition, all the probabilities of the leaves are the same
(say, $p$),
then $d$ is an \emph{independent and identical distribution} (IID).
Algorithm $A$ is \emph{directional} \cite{Pe80} if there is a fixed linear order
of the leaves and for any truth assignment, the order of probing by $A$
is consistent with the order.
The result of Tarsi is as follows.
Suppose that an AND-OR tree $T$ is balanced and $d$ is an IID such that
$p\ne 0, 1$. Then there exists an optimal algorithm that is depth-first
and directional \cite{Ta83}.
This was shown by an elegant induction.
For an integer $n$ such that $0 \leq n \leq h = $ (the height of the tree),
an algorithm $A$ is called \emph{$n$-straight} if for each node $x$
whose distance from the leaves is at most $n$,
once $A$ probes a leaf that is a descendant of $x$,
$A$ does not probe leaves
that are not descendants of $x$ until $A$ finds the value of $x$.
The proof by Tarsi is induction on $n$.
Under the induction hypothesis, we take an optimal algorithm $A$ that is
$(n-1)$-straight but not $n$-straight.
Then we modify $A$ and obtain two new algorithms.
By the assumption,
the expected cost of $A$ is not greater than the costs of the modified
algorithms, so we get inequalities. By means of these inequalities,
we can eliminate a non-$n$-straight move from $A$
without increasing the cost.
In the above mentioned proof by Tarsi, derivation of the inequalities
heavily depends on the hypothesis that the distribution is an IID.
In the same paper, Tarsi gave an example of an ID on an AND-OR
tree with the following properties:
the tree is of height 4 and not balanced, and no optimal algorithm is
depth-first. Later, we gave another example of such an ID where the tree is
balanced, binary, and of height 3 \cite{Su17b}.
On the other hand, as is observed in \cite{Su17b},
in the case of a balanced binary AND-OR tree of height 2,
the Tarsi-type result holds for IDs in place of IIDs.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|p{.25\textwidth}||p{.28\textwidth}|p{.28\textwidth}|}
\hline
& ID & IID \\
\hline
\hline
height 2, binary & Yes. S. \cite{Su17b} & \\ \cline{1-2}
height 2, general & & \\ \cline{1-2}
height $\geq$ 3 & No. S. \cite{Su17b} & Yes. Tarsi \cite{Ta83} \\
& (see also Tarsi \cite{Ta83}) & \\
\hline
\end{tabular}
\end{center}
\caption{Existence of an optimal algorithm that is depth-first}
\label{table:1}
\end{table}
Table~\ref{table:1} summarizes whether the Tarsi-type result holds or not.
In the table, we assume that the AND-OR tree is balanced and that
the probability of each leaf having value 0 is neither 0 nor 1.
In this paper, we ask whether the Tarsi-type result holds in
the case where the tree has height 2 and the number of child nodes is
arbitrary. We give an affirmative answer.
We show a slightly stronger result.
We are going to investigate a tree of the following properties.
The root is an AND-gate,
and a child node of the root is either an OR-gate or a leaf.
The number of child nodes of an internal node is arbitrary,
and may depend on the node.
Figure~\ref{fig:orandtreeh2n1kai} is an example of such a tree.
Now, suppose that an ID on the tree is given and that at each leaf,
the probability of having value 0 is neither 0 nor 1.
Under these mild assumptions, we show that there exists an optimal
algorithm that is depth-first and directional.
\begin{figure}
\caption{Example of a height 2 AND-OR tree that is not balanced}
\label{fig:orandtreeh2n1kai}
\end{figure}
Our strategy of proof is to reduce the problem to the case of directional algorithms.
We perform induction on the number of leaves, and modify Tarsi's method to go along with properties particular to height-2 trees.
The first author and the third author showed, in their talk \cite{SU18},
a restricted version of the present result in which only directional (possibly non-depth-first) algorithms are taken into consideration.
In the talk, the core of the strategy was suggested by the first author.
The second author reduced the general case, in which non-directional algorithms are taken into consideration, to the case of directional algorithms.
The paper \cite{Su17a} gives an exposition of the background.
It is a short survey on the work of Liu and Tanaka
\cite{LT07} and its subsequent developments \cite{SN12,SN15,POLT16}.
The paper \cite{PPNTY17} is also in this line.
Some important classical results up to the 1980's may be
found in the papers \cite{KM75,Pe80,Pe82,Ta83} and \cite{SW86}.
We introduce notation in section~\ref{section:preliminaries}.
We show our result in section~\ref{section:results}.
In section~\ref{section:summary}, we discuss why our proof
does not apply to the case of height 3, and discuss future directions.
\section{Preliminaries} \label{section:preliminaries}
If $T$ is a balanced tree (in the sense of Tarsi, see Introduction)
and there exists a positive integer $n$ such that all of the internal nodes
have exactly $n$ child nodes, we say $T$ is a \emph{complete $n$-ary tree}.
For algorithm $A$ and distribution $d$, we denote (expected value of)
the cost by $\mathrm{cost} (A,d)$.
We are interested in multi-branching AND-OR trees of height 2,
where a child node of the root is either a leaf or an OR-gate,
and the number of leaves depends on each OR-gate.
For simplicity of notation, we investigate a slightly larger class
of trees.
Hereafter, by ``a multi-branching AND-OR tree of height at most 2'',
we denote an AND-OR tree $T$ of the following properties.
\begin{itemize}
\item The root is an AND-gate.
\item We allow an internal node to have only one child,
provided that the tree has at least two leaves.
\item All of the child nodes of the root are OR-gates.
\end{itemize}
The concept of a ``multi-branching AND-OR tree of height at most 2'' includes
the multi-branching AND-OR trees of height 2 in the original sense
(because an OR-gate with one leaf is equivalent to a leaf),
the AND-trees of height 1
(this case is achieved when all of the OR-gates have one leaf),
and the OR-trees of height 1
(this case is achieved when the root has one child).
The simplest case is a tree with just two leaves,
and this case is achieved exactly in one of the following two ways:
(1) the tree is equivalent to a binary AND-tree of height 1;
(2) the tree is equivalent to a binary OR-tree of height 1.
Suppose that $T$ is a multi-branching AND-OR tree of height at most 2.
\begin{itemize}
\item We let $x_{\lambda}$ denote the root.
\item By $r$ we denote the number of child nodes of the root.
$x_{0}, \dots, x_{r-1}$ are the child nodes of the root.
\item For each $i$ ($0 \leq i < r$), we let $a(i)$ denote
the number of child leaves of $x_{i}$.
$x_{i,0}, \dots, x_{i,a(i)-1}$ are the child leaves of $x_{i}$.
\end{itemize}
Figure~\ref{fig:orandtreeh2n1} is an example of such a tree,
where $r=5$, $a(0)=1$ and $a(4)=3$.
The tree is equivalent to the tree in Figure~\ref{fig:orandtreeh2n1kai}.
\begin{figure}
\caption{Example of an AND-OR tree of height at most 2}
\label{fig:orandtreeh2n1}
\end{figure}
Suppose that $d$ is an ID on $T$.
For each $i$ ($0 \leq i < r$) and $j$ ($0 \leq j < a(i)$),
we use the following symbols.
\begin{itemize}
\item $p(i,j)$ is the probability of $x_{i,j}$ having value 0.
\item $p(i)$ is the probability of $x_{i}$ having value 0.
\item Since $d$ is an ID, its restriction to the child nodes of
$x_{i}$ is an ID on the subtree whose root is $x_{i}$.
Here, we denote it by the same symbol $d$.
Then we define $c(i)$ as follows.
\[
c(i) = \min_{A} \mathrm{cost}(A,d),
\]
\noindent
where $A$ runs over the algorithms finding the value of $x_{i}$,
and $\mathrm{cost}(A,d)$ is the expected cost.
\end{itemize}
Thus, $p(i)$ is the product $p(i,0)\cdots p(i,a(i)-1)$.
If $a(i)=1$ then $c(i)=1$.
If $a(i)\geq 2$ and we have $p(i,0) \leq \dots \leq p(i,a(i)-1)$, then
$c(i) =1+p(i,0)+p(i,0)p(i,1)+\dots +p(i,0)\cdots p(i,a(i)-2)$.
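As a concrete illustration (our own sketch, not part of the paper; the function name is hypothetical), the optimal expected cost of a single OR-gate can be computed by probing its leaves in increasing order of their probability of being 0; the $j$-th leaf in that order is probed only when all earlier leaves evaluated to 0:

```python
def or_gate_cost(probs):
    """Expected number of leaf probes to evaluate an OR-gate whose leaves
    have probability probs[j] of having value 0 (each strictly in (0,1)),
    probed in increasing order of probability."""
    probs = sorted(probs)
    cost, reach = 0.0, 1.0  # reach = probability that the next leaf is probed
    for p in probs:
        cost += reach
        reach *= p          # the next leaf is reached only if this one is 0
    return cost
```

For example, two leaves with probabilities $0.5$ and $0.8$ give cost $1 + 0.5 = 1.5$: the second leaf is probed only when the first one has value 0.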
Tarsi \cite{Ta83} investigated a depth-first algorithm that probes the leaves
from left to right, skipping a leaf whenever there is sufficient information
to evaluate one of its ancestors, and he called it $\mathrm{SOLVE}$.
We investigate a similar algorithm depending on a given independent
distribution.
\begin{defi}
Suppose that $T$ is a multi-branching AND-OR tree of height at most 2.
\begin{enumerate}
\item
Suppose that $d$ is an ID on $T$ and for each $i$ ($0 \leq i < r$)
and $j$ ($0 \leq j < a(i)$), we have $p(i,j) \neq 0,1$.
By $\mathrm{SOLVE}_{d}$, we denote the unique depth-first directional
algorithm such that the following hold for all $i,s, j, k$
($0 \leq i < r$, $0 \leq s < r$, $0 \leq j < a(i)$, $0 \leq k < a(i)$).
\begin{enumerate}
\item If $c(i)/p(i) < c(s)/p(s)$ then priority of (probing the leaves under) $x_{i}$
is higher than that of $x_{s}$.
\item If $c(i)/p(i) = c(s)/p(s)$ and $i<s$ then
priority of $x_{i}$ is higher than that of $x_{s}$.
\item If $p(i,j) < p(i,k)$ then priority of (probing) $x_{i,j}$ is higher than that of $x_{i,k}$.
\item If $p(i,j) = p(i,k)$ and $j<k$ then priority of $x_{i,j}$ is higher than that of $x_{i,k}$.
\end{enumerate}
\item
Suppose that we remove some nodes (except for the root of $T$) from $T$, and if a removed node has descendants, we remove them too. Let $T^{\prime}$ be the resulting tree.
Suppose that $\delta$ is an ID on $T^{\prime}$ and for each $i$ ($0 \leq i < r$)
and $j$ ($0 \leq j < a(i)$) such that $x_{i,j}$ is a leaf of $T^{\prime}$,
we have $p(i,j) \neq 0,1$.
By $\mathrm{SOLVE}^{T}_{\delta}$, we denote the unique depth-first directional
algorithm of the following properties.
For all $i,s, j, k$
($0 \leq i < r$, $0 \leq s < r$, $0 \leq j < a(i)$, $0 \leq k < a(i)$),
if $x_{i}$ and $x_{s}$ are nodes of $T^{\prime}$, the above-mentioned assertions (a) and (b) hold;
and if $x_{i,j}$ and $x_{i,k}$ are leaves of $T^{\prime}$, the above-mentioned assertions (c) and (d) hold.
\end{enumerate}
\end{defi}
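The probe order selected by $\mathrm{SOLVE}_{d}$ on a height-2 tree can be sketched as follows (an illustration with hypothetical names, assuming $c(i)$ is the expected probe count with the leaves under $x_{i}$ taken in ascending order of probability):

```python
from math import prod

def solve_order(p):
    """Return the probe order of SOLVE_d as (child, leaf) index pairs.

    p[i] lists the leaf probabilities under child x_i; every p[i][j]
    is assumed to lie strictly between 0 and 1."""
    def cost(ps):
        # expected number of probes when probing in the given order
        c, reach = 0.0, 1.0
        for q in ps:
            c += reach
            reach *= q
        return c
    order = []
    # children ranked by c(i)/p(i), ties broken by smaller index (rules a, b)
    for i in sorted(range(len(p)), key=lambda i: (cost(sorted(p[i])) / prod(p[i]), i)):
        # leaves ranked by probability, ties broken by smaller index (rules c, d)
        for j in sorted(range(len(p[i])), key=lambda j: (p[i][j], j)):
            order.append((i, j))
    return order
```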
\begin{lemm} \label{lemm:1}
Suppose that $T$ is a multi-branching AND-OR tree of height at most 2.
Suppose that $d$ is an ID on $T$ and for each $i$ $(0 \leq i < r)$
and $j$ $(0 \leq j < a(i) )$, we have $p(i,j) \neq 0,1$.
Then $\mathrm{SOLVE}_{d}$ achieves the minimum cost among
depth-first directional algorithms.
To be more precise, if $A$ is a depth-first directional algorithm
then $\mathrm{cost} (\mathrm{SOLVE}_{d},d) \leq \mathrm{cost}(A,d)$.
\end{lemm}
\begin{proof}
It is straightforward.
\end{proof}
\section{Result} \label{section:results}
\begin{theo} \label{theo:main}
Suppose that $T$ is a multi-branching AND-OR tree of height at most 2.
Suppose that $d$ is an ID on $T$ and for each $i$ $(0 \leq i < r)$
and $j$ $(0 \leq j < a(i) )$, we have $p(i,j) \neq 0,1$.
Then $\mathrm{SOLVE}_{d}$ achieves the minimum cost among all of the algorithms
(depth-first or non-depth-first, directional or non-directional).
Therefore, there exists a depth-first directional algorithm that is
optimal among all of the algorithms.
\end{theo}
\begin{proof}
We perform induction on the number of leaves. The base cases are
the binary AND-trees of height 1 and the binary OR-trees of height 1.
More generally, if $T$ is equivalent to a tree of height 1,
the assertion of the theorem clearly holds.
For the induction step, we assume that $T$ has at least
three leaves. Our induction hypothesis is
that for any multi-branching AND-OR tree $T^{\prime}$ of height at most 2,
if the number of leaves of $T^{\prime}$ is less than that of $T$,
then the assertion of the theorem holds for $T^{\prime}$.
We fix an algorithm $A$ that minimizes $\mathrm{cost}(A,d)$
among all of the algorithms
(depth-first or non-depth-first, directional or non-directional).
Case 1: At the first move of $A$, $A$ makes a query to a leaf $x_{i,0}$
such that $a(i)=1$.
In this case, if $A$ finds that $x_{i,0}$ has value 0 then $A$ returns 0 and finishes.
Otherwise, $A$ calls a certain algorithm (say, $A^{\prime}$)
on $T - x_{i}$, that is, the tree given by removing
$x_{i}$ and $x_{i,0}$ from $T$.
The probability distribution given by restricting $d$
to $T - x_{i}$ is an ID, and the probability of any leaf is neither 0 nor 1.
Therefore, by the induction hypothesis, without loss of generality,
we may assume that $A^{\prime}$ is a depth-first directional algorithm
on $T - x_{i}$. Therefore, $A$ is a depth-first directional algorithm.
Hence, by Lemma~\ref{lemm:1},
the same cost as $A$ is achieved by $\mathrm{SOLVE}_{d}$.
Case 2: Otherwise. At the first move of $A$,
$A$ makes a query to a leaf $x_{i,j}$ such that $a(i)\geq 2$.
Let $T_{0}:=T - x_{i,j}$, the tree given by removing $x_{i,j}$ from $T$.
In addition, let $T_{1}:=T - x_{i}$, the tree given by removing $x_{i}$ and all of the leaves under $x_{i}$ from $T$.
Here, $T_{0}$ and $T_{1}$ inherit all of the indices (for example, ``$3,1$'' of $x_{3,1}$) from $T$.
If $T_{1}$ is empty then $T$ is equivalent to a tree of height 1, and this case
reduces to our observation in the base case. Thus, throughout rest of Case 2, we assume that $T_{1}$ is non-empty.
If $A$ finds that $x_{i,j}$ has value 0 then
$A$ calls a certain algorithm (say, $A_{0}$) on $T_{0}$.
If $A$ finds that $x_{i,j}$ has value 1 then
$A$ calls a certain algorithm (say, $A_{1}$)
on $T_{1}$.
For each $s=0,1$, let $d[s]$ be the restriction of $d$ to $T_{s}$.
In the same way as in Case 1, without loss of generality,
we may assume that $A_{s}$ is $\mathrm{SOLVE}^{T}_{d[s]}$ on $T_{s}$.
Hence, there is a permutation
$X=\langle x_{i,s(0)}, \dots, x_{i,s(a(i)-2)}\rangle$
of the leaves under $x_{i}$ except $x_{i,j}$,
and there are possibly empty sequences of leaves,
$Y=\langle y_{0}, \dots, y_{k-1}\rangle$ and
$Z=\langle z_{0}, \dots, z_{m-1}\rangle$, with the following properties.
\begin{itemize}
\item The three sets
$\{ x_{i} \}$,
$Y^{\ast} = \{ y : y$ is a parent of a leaf in $Y \}$, and
$Z^{\ast} = \{ z : z$ is a parent of a leaf in $Z \}$
are mutually disjoint,
and their union equals $\{ x_{0}, \dots, x_{r-1} \}$,
the set of all child nodes of $x_{\lambda}$.
\item
The search priority of $A_{0}$ is in accordance with $YXZ$ (thus, $y_{0}$ is the first).
\item
The search priority of $A_{1}$ is in accordance with $YZ$.
\end{itemize}
Case 2.1: $Z$ is non-empty.
We are going to show that $Y$ is empty. Assume not. Let $B$ be
the depth-first directional algorithm on $T$
whose search priority is $Y~x_{i,j}~XZ$.
Let $A_{Y}$ ($A_{X}$, $A_{Z}$, respectively) denote
the depth-first directional algorithm on the subtree given by $Y$ ($X$, $Z$),
whose search priority is in accordance with $Y$ ($X$, $Z$).
Thus, we may write $A_{0}$ as ``$A_{Y}$; $A_{X}$; $A_{Z}$'',
$A_{1}$ as ``$A_{Y}$; $A_{Z}$'', and
$B$ as ``$A_{Y}$; Probe $x_{i,j}$; $A_{X}$; $A_{Z}$''.
We look at the following events.
Recall that $Y^{\ast}$ is the set of all parent nodes of leaves in $Y$.
$E_{Y}$: ``At least one element of $Y^{\ast}$ has value 0.''
$E_{X}$: ``All of the elements of $X$ have value 0.''
Since the tree has height 2 and $Z$ is non-empty, in each of $A_{0}$, $A_{1}$, and $B$, the following holds:
``$A_{Y}$ finds the value of $x_{\lambda}$ if and only if $E_{Y}$ happens.''
In the same way, in $A_{0}$ and in $B$, under the assumption that $A_{X}$ is called,
$A_{X}$ finds the value of the root if and only if $E_{X}$ happens.
Thus, flowcharts of $A$ and $B$ as Boolean decision trees are as described in Figure~\ref{fig:flowchart02} and Figure~\ref{fig:flowchartb1}, respectively.
\begin{figure}
\caption{Flowchart of $A$ (Case 2.1, in the presence of $Y$)}
\label{fig:flowchart02}
\end{figure}
\begin{figure}
\caption{Flowchart of $B$ (in the presence of $Y$)}
\label{fig:flowchartb1}
\end{figure}
Therefore, letting $p_{Y} = \mathrm{prob} [\neg E_{Y}] $ (that is, the probability of the negation) and
$p_{X} = \mathrm{prob} [\neg E_{X}]$,
the costs of $A$ and $B$ are as follows.
In the following formulas, $C_Y$ denotes
$\mathrm{cost}(A_{Y}, d_{Y})$,
where $d_{Y}$ denotes the probability distribution
given by restricting $d$ to $Y$.
$C_X$ and $C_Z$ are similarly defined.
\begin{eqnarray}
& & \mathrm{cost}(A,d) \notag \\
&=& 1 + p(i,j) \{ C_Y + p_{Y} (C_X + p_{X} C_Z) \} + (1-p(i,j)) (C_Y+p_{Y}C_Z)
\notag \\
&=& 1 + C_Y + p(i,j)p_{Y}(C_X+p_{X}C_Z) + (1-p(i,j))p_{Y}C_Z
\end{eqnarray}
\begin{eqnarray}
& & \mathrm{cost}(B,d) \notag \\
&=& C_Y + p_{Y} [ 1 + \{ p(i,j)(C_X+p_{X}C_Z) + (1 - p(i,j))C_Z \} ]
\notag \\
&=& C_Y + p_{Y} + p_{Y}p(i,j)(C_X+p_{X}C_Z) + p_{Y}(1-p(i,j))C_Z
\end{eqnarray}
Therefore, $\mathrm{cost}(A,d) - \mathrm{cost}(B,d) = 1 - p_{Y}$.
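This difference can be sanity-checked numerically; the sketch below (an illustration, not part of the proof) encodes the two cost formulas directly:

```python
def cost_A(p, pY, pX, CY, CX, CZ):
    # 1 + C_Y + p(i,j) p_Y (C_X + p_X C_Z) + (1 - p(i,j)) p_Y C_Z
    return 1 + CY + p * pY * (CX + pX * CZ) + (1 - p) * pY * CZ

def cost_B(p, pY, pX, CY, CX, CZ):
    # C_Y + p_Y + p_Y p(i,j) (C_X + p_X C_Z) + p_Y (1 - p(i,j)) C_Z
    return CY + pY + pY * p * (CX + pX * CZ) + pY * (1 - p) * CZ
```

For any choice of the parameters, the difference `cost_A(...) - cost_B(...)` equals $1 - p_{Y}$.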
However, by our assumption on $d$ that the probability of each leaf having value 0
is neither 0 nor 1, $E_{Y}$ has positive probability, and thus $p_{Y} < 1$.
Hence $\mathrm{cost}(A,d) - \mathrm{cost}(B,d)$ is positive,
which contradicts the assumption that $A$ achieves the minimum cost.
Hence, we have shown that $Y$ is empty. Therefore, $A$ is the following algorithm
(Figure~\ref{fig:flowchart12}):
``Probe $x_{i,j}$. If $x_{i,j}=0$ then perform depth-first directional search
on $T_{0}=T - x_{i,j}$, where search priority is in accordance with $XZ$.
Otherwise (that is, $x_{i,j}=1$), perform depth-first directional search
on $T_{1}=T - x_{i}$, where search priority is in accordance with $Z$.''
\begin{figure}
\caption{Flowchart of $A$ (in the absence of $Y$)}
\label{fig:flowchart12}
\end{figure}
Thus, $A$ is a depth-first directional algorithm.
Hence, by Lemma~\ref{lemm:1},
the same cost as $A$ is achieved by $\mathrm{SOLVE}_{d}$.
Case 2.2: Otherwise. In this case, $Z$ is empty. The proof is similar to
Case 2.1.
\end{proof}
\section{Concluding remarks} \label{section:summary}
\subsection{Difference between the height 2 case and the height 3 case}
As mentioned in the Introduction, the counterpart to Theorem~\ref{theo:main}
does not hold for the case of height 3.
We now discuss why the proof of Theorem~\ref{theo:main} does not
work for the case of height 3.
Figure~\ref{fig:orandtreeh3n1} is a complete binary OR-AND tree of height 3.
\begin{figure}
\caption{A binary OR-AND tree of height 3}
\label{fig:orandtreeh3n1}
\end{figure}
Suppose that algorithm $A$ is as follows.
Let $Y=\langle x_{100}, x_{101} \rangle$, $X=\langle x_{001} \rangle$, and
$Z=\langle x_{010}, x_{011}, x_{110}, x_{111}\rangle$.
At the first move, $A$ probes $x_{000}$.
If $x_{000} = 0$ then $A$ probes in accordance with order $YXZ$.
If $x_{000} = 1$ then $A$ probes in accordance with order $YZ$.
Let $A^{\prime}_{Y}$ be the algorithm on $Y$ such that $x_{100}$ has higher priority
of probing than $x_{101}$.
Suppose that an ID $d$ on the tree is given, and that, at each leaf,
probability of having value 0 is neither 0 nor 1.
We investigate the following event.
$E^{\prime}_{Y}$: ``$x_{10}$ has value 0.''
On the one hand, by our assumption on ID $d$, $E^{\prime}_{Y}$ has positive probability.
On the other hand, whether $E^{\prime}_{Y}$ happens or not, $A^{\prime}_{Y}$ does not find the value of $x_{\lambda}$. In other words, the probability of ``$A^{\prime}_{Y}$ finds the value of $x_{\lambda}$'' is 0.
Therefore, $E^{\prime}_{Y}$ is not equivalent to the assertion
``$A^{\prime}_{Y}$ finds the value of $x_{\lambda}$''.
Hence, the counterpart to our observation in Case 2.1 of the proof of Theorem~\ref{theo:main}
does not work in the present setting.
\subsection{Summary and future directions}
Given a tree $T$, let $\mathrm{IID}_{T}^{+}$ ($\mathrm{ID}_{T}^{+}$, respectively)
denote the set of all IIDs on $T$ (IDs on $T$) such that, at each leaf,
the probability of having value 0 is neither 0 nor 1.
Now we know the following.
\begin{enumerate}
\item (Tarsi \cite{Ta83}) Suppose that $T$ is a balanced AND-OR tree of any height,
and that $d \in \mathrm{IID}_{T}^{+}$.
Then there exists an optimal algorithm that is depth-first and directional.
\item (S. \cite{Su17b}) Suppose that $T$ is a complete binary OR-AND tree of height 3.
Then there exists $d \in \mathrm{ID}_{T}^{+}$ such that
no optimal algorithm is depth-first.
\item (Theorem~\ref{theo:main}) Suppose that $T$ is an AND-OR tree of height 2,
and that $d \in \mathrm{ID}_{T}^{+}$.
Then there exists an optimal algorithm that is depth-first and directional.
\end{enumerate}
Suppose that $T$ is a complete binary AND-OR tree of height $h \geq 3$.
There is still hope of finding a subset $\mathcal{D}$ of $\mathrm{ID}_{T}^{+}$
with the following properties.
\begin{itemize}
\item $\mathrm{IID}_{T}^{+} \subsetneq \mathcal{D} \subsetneq \mathrm{ID}_{T}^{+}$
\item For each $d \in \mathcal{D}$, there exists an optimal algorithm that is depth-first and directional.
\end{itemize}
\end{document}
\begin{document}
\title{Central Trajectories}
\begin{abstract}
An important task in trajectory analysis is clustering. The results of a
clustering are often summarized by a single representative trajectory and
an associated size for each
cluster. We study the problem of computing a suitable representative of a
set of similar trajectories. To this
end we define a \emph{central trajectory} \mkpzc{C}, which consists of pieces of the
input trajectories, switches from one entity to another only if they are
within a small distance of each other, and such that at any time $t$, the
point $\mkpzc{C}(t)$ is as central as possible. We measure centrality in terms of
the radius of the smallest disk centered at $\mkpzc{C}(t)$ enclosing all entities at
time $t$, and discuss how the techniques can be adapted to other measures of
centrality. We first study the problem in $\mkmbb{R}^1$, where we show that an
optimal central trajectory \mkpzc{C} representing $n$ trajectories, each consisting
of $\tau$ edges, has complexity $\Theta(\tau n^2)$ and can be computed in
$O(\tau n^2 \log n)$ time. We then consider trajectories in $\mkmbb{R}^d$ with $d\geq 2$,
and show that the complexity of \mkpzc{C} is at most $O(\tau n^{5/2})$, and that \mkpzc{C}
can be computed in $O(\tau n^3)$ time.
\end{abstract}
\thispagestyle{empty}
\clearpage
\setcounter{page}{1}
\section{Introduction}
\label{sec:Introduction}
A \emph {trajectory} is a sequence of time-stamped locations in the plane, or more generally in $\mkmbb{R}^d$.
Trajectory data is obtained by tracking the movements of e.g. animals \cite{BovetB88,Calenge200934,gal-nmibc-09}, hurricanes \cite{Stohl1998947}, traffic \cite{lltx-dftf-10}, or other moving entities \cite{dwf-rpm-09} over time.
Large amounts of such data have recently been collected in a variety of research fields.
As a result, there is a great demand for tools and
techniques to analyze trajectory data.
One important task in trajectory analysis is \emph {clustering}: subdividing a large collection of trajectories into groups of ``similar'' ones. This problem has been studied
extensively, and many different techniques are available~\cite{bbgll-dcpcs-11,grsc-pcecu-07,gs-tcmrm-99,lhw-tc-07,vgk-dsmt-02}.
Once a suitable clustering has been determined, the result needs to be stored
or prepared for further processing. Storing the whole collection of
trajectories in each cluster is often not feasible, because follow-up analysis
tasks may be computation-intensive. Instead, we wish to represent each cluster
by a signature: the number of trajectories in the cluster, together
with a \emph {representative} trajectory which should capture the defining
features of all trajectories in the cluster.
Representative trajectories are also useful for visualization
purposes. Displaying large amounts of trajectories often leads to visual
clutter. Instead, if we show only a number of representative trajectories, this
reduces the visual clutter, and allows for more effective data
exploration. The original trajectories can still be shown if desired,
using the details-on-demand principle in information visualization~\cite{Shneiderman96}.
\paragraph {Representative trajectories}
When choosing a representative trajectory for a group of similar trajectories,
the first obvious choice would be to pick one of the trajectories in the
group. However, one can argue that no single element in a group may be a
good representative, e.g. because each individual trajectory has some prominent
feature that is not shared by the rest (see Fig.~\ref {fig:single_bad}), or no
trajectory is sufficiently in the middle all the time. On the
other hand, it is desirable to output a trajectory that does consist of \emph
{pieces} of input trajectories, because otherwise the representative trajectory
may display behaviour that is not present in the input, e.g. because of
contextual information that is not available to the algorithm (see Fig.~\ref
{fig:lake}).
\tweeplaatjes {single_bad} {lake}
{ (a) Every trajectory has a peculiarity that is not representative for the set.
(b) Taking, for example, the pointwise average of a set of trajectories may result in one that ignores context.
}
To determine what a good representative trajectory of a group of similar trajectories is, we identify two main categories: \emph {time-dependent} and \emph {time-independent} representatives.
Trajectories are typically collected as a discrete sequence of time-stamped
locations. By linearly interpolating the locations we obtain a continuous
piecewise-linear curve as the image of the function. Depending on the application, we may be interested in the curve with attached time stamps (say, when studying a flock of animals that moved together) or in just the curve (say, when considering hikers that took the same route, but possibly at different times and speeds).
When time is not important, one can select a representative based directly on the geometry or
topology of the set of curves~\cite{bbklsww-mt-12,hpr-fdre-11}. When time is
important, we would like to have the property that at each time $t$ our
representative point $c(t)$ is a good representative of the set of points
$P(t)$. To this end, we may choose any static representative point of a point
set, for which many examples are available: the Fermat-Weber point (which
minimizes the sum of distances to the points in $P$), the center of mass (which
minimizes the sum of squared distances), or the center of the smallest
enclosing circle (which minimizes the distance to the farthest point in $P$).
\paragraph {Central trajectories}
In this work, we focus on time-dependent measures based on static concepts of centrality. We choose the distance to the farthest point, but discuss in Section~\ref {sec:Extensions} how our results can be adapted to other measures.
Ideally, we would output a trajectory \mkpzc{C} such that at any time $t$, $\mkpzc{C}(t)$ is
the point (entity) that is closest to its farthest entity. Unfortunately, when
the entities move in $\mkmbb{R}^d$ for $d > 1$, this may cause discontinuities. Such
discontinuities are unavoidable: if we insist that the output trajectory
consists of pieces of input trajectories \emph {and} is continuous, then in
general, there will be no opportunities to switch from one trajectory to
another, and we are effectively choosing one of the input trajectories
again. At the same time, we do not want to output a trajectory with arbitrarily
large discontinuities. An acceptable compromise is to allow discontinuities, or
\emph{jumps}, but only over small distances, controlled by a parameter
$\varepsilon$. We note that this problem of discontinuities shows up for
time-independent representatives for entities moving in $\mkmbb{R}^d$, with $d \geq
3$, as well, because the traversed curves generally do not intersect.
\paragraph{Related work}
Buchin et al.\xspace~\cite{bbklsww-mt-12} consider the problem of computing a \emph {median} trajectory for a set of trajectories without time information. Their method produces a trajectory that consists of pieces of the input.
Agarwal~et al.\xspace~\cite{agarwal2005staying} consider trajectories with time
information and compute a representative trajectory that follows the median (in $\mkmbb{R}^1$) or a point of high \emph {depth} (in $\mkmbb{R}^2$) of the input entities.
The resulting trajectory does not necessarily stay close to the input trajectories.
They give exact and approximate algorithms.
Durocher and Kirkpatrick~\cite{durocher2009projection} observe that a trajectory minimizing the sum of distances to the other entities is \emph {unstable}, in the sense that arbitrarily small movement of the entities may cause an arbitrarily large movement in the location of the representative entity.
They proceed to consider alternative measures of centrality, and define the \emph {projection median}, which they prove is more stable.
Basu~et al.\xspace~\cite{basu2012projection} extend this concept to higher dimensions.
\paragraph{Problem description} We are given a set \mkmcal{X} of $n$ entities, each
moving along a piecewise linear trajectory in $\mkmbb{R}^d$ consisting of $\tau$
edges. We assume that all trajectories have their vertices at the same times,
i.e.~times $t_0,..,t_\tau$. Fig.~\ref {fig:intro_example_standardandtime}
shows an example.
\tweeplaatjes [scale=1.18] {intro_example_standardandtime}
{intro_example_slabsanddistances} { (a) Two views of five moving entities and
their trajectories. (b) On the top the pairwise distances between the
entities as a function over time. On the bottom the functions $D_\sigma$, and
in yellow the area representing $\mkmcal{D}(\mkpzc{C})$. }
For an entity $\sigma$, let $\sigma(t)$ denote the position of $\sigma$ at
time~$t$. With slight abuse of notation we will write $\sigma$ for both entity
$\sigma$ and its trajectory. At a given time $t$, we denote the distance from
$\sigma$ to the entity farthest away from $\sigma$ by $D_\sigma(t) =
D(\sigma,t) = \max_{\psi \in \mkmcal{X}} \|\sigma(t)\psi(t)\|$, where $\|pq\|$ denotes
the Euclidean distance between points $p$ and $q$ in $\mkmbb{R}^d$.
Fig.~\ref{fig:intro_example_slabsanddistances} illustrates the pairwise
distances and resulting $D$ functions for five example trajectories. For ease
of exposition, we assume that the trajectories are in general position: that is, no
three trajectories intersect in the same point, and no two pairs of entities
are at distance $\varepsilon$ from each other at the same time.
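At a single fixed time, $D(\sigma,t)$ reduces to a farthest-point computation; a minimal sketch (names are not from the paper):

```python
import math

def farthest_distance(positions, s):
    """Distance from entity s to the entity farthest from it, given the
    positions of all entities (points in R^d) at one fixed time."""
    p = positions[s]
    return max(math.dist(p, q) for q in positions)
```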
A \emph{trajectoid\xspace} is a function that maps time to the set of entities \mkmcal{X},
with the restriction that at discontinuities the distance between the entities
involved is at most $\varepsilon$. Intuitively, a trajectoid\xspace corresponds to a
concatenation of pieces of the input trajectories in such a way that two
consecutive pieces match up in time, and the end point of the former piece is
within distance $\varepsilon$ from the start point of the latter piece. In Fig.~\ref
{fig:intro_example_slabsanddistances}, a trajectoid\xspace may switch between a pair
of entities when their pairwise distance function lies in the bottom strip of
height $\varepsilon$. More formally, for a trajectoid\xspace \mkpzc{T} we have that
\begin{itemize}[nosep]
\item at any time $t$, $\mkpzc{T}(t) = \sigma$ for some $\sigma \in \mkmcal{X}$, and
\item at every time $t$ where \mkpzc{T} has a discontinuity, that is, \mkpzc{T} \emph{jumps}
from entity $\sigma$ to entity $\psi$, we have that $\|\sigma(t)\psi(t)\|
\leq \varepsilon$.
\end{itemize}
Note that this definition still allows for a series of jumps within an
arbitrarily short time interval $[t,t+\delta]$, essentially simulating a jump
over distances larger than $\varepsilon$. To make the formulation cleaner, we slightly
weaken the second condition, and allow a trajectoid to have discontinuities
with a distance larger than $\varepsilon$, provided that such a large jump can be
realized by a sequence of small jumps, each of distance at most $\varepsilon$. When it
is clear from the context, we will write $\mkpzc{T}(t)$ instead of $\mkpzc{T}(t)(t)$ to mean
the location of entity $\mkpzc{T}(t)$ at time $t$. We now wish to compute a trajectoid
\mkpzc{C} that minimizes the function
\[ \mkmcal{D}(\mkpzc{T}) = \int_{t_0}^{t_\tau} D(\mkpzc{T},t) \dd t. \]
\noindent
So, at any time $t$, all entities lie in a disk of radius $D(\mkpzc{C},t)$ centered at
$\mkpzc{C}(t)$.
\paragraph{Outline and results} We first study the situation where entities
move in $\mkmbb{R}^1$. In Section~\ref{sec:oned} we show that the worst case
complexity of a central trajectory in $\mkmbb{R}^1$ is $\Theta(\tau n^2)$, and that we
can compute one in $O(\tau n^2 \log n)$ time. We then extend our approach to
entities moving in $\mkmbb{R}^d$, for any constant $d$, in Section~\ref
{sec:higher_dimensions}. For this case, we prove that the maximal complexity of
a central trajectory \mkpzc{C} is $O(\tau n^{5/2})$. Computing \mkpzc{C} takes $O(\tau n^3)$
time and requires $O(\tau n^2 \log n)$ working space. We briefly discuss various
extensions to our approach in Section~\ref{sec:Extensions}. Omitted proofs can
be found in Appendix~\ref{app:Omitted_Proofs}.
Even though we do not expect this to happen in practice, the worst case
complexity of our central trajectories can be significantly higher than the input size.
If this occurs, we can use traditional line simplification algorithms like Imai and
Iri~\cite{imai1998computational} to simplify the resulting central
trajectory. This gives us a representative that still is always close
---for instance within distance $2\varepsilon$--- to one of the input trajectories.
Alternatively, we can use dynamic programming combined with our methods
to force the output trajectory to have at most $k$ vertices, for any $k$,
and to always lie on the input trajectories.
Computing such a central trajectory is more expensive
than our current algorithms, however. Furthermore, enforcing a low output
complexity may not be necessary. For example, in applications like
visualization, the number of trajectories shown often has a larger impact on
visual clutter than the length or complexity of the individual trajectories. It
may be easier to follow a single trajectory that has many vertices than
to follow many trajectories that have fewer vertices each.
\section{Entities moving in $\mkmbb{R}^1$}
\label{sec:oned}
\tweeplaatjes [scale=.8] {1d_slabs} {1d_slabs_straight} {(a) A set of trajectories and the
ideal trajectory \mkpzc{I}. The breakpoints in the ideal trajectory partition time
into $O(n\tau)$ intervals. (b) The trajectories after transforming \mkpzc{I} to a
horizontal line.}
Let \mkmcal{X} be the set of entities moving in $\mkmbb{R}^1$. The trajectories of these
entities can be seen as polylines in $\mkmbb{R}^2$: we associate time with the
horizontal axis, and $\mkmbb{R}^1$ with the vertical axis (see
Fig.~\ref{fig:1d_slabs}). We observe that the distance between two points $p$
and $q$ in $\mkmbb{R}^1$ is simply their absolute difference, that is, $\|pq\|=|p-q|$.
Let \mkpzc{I} be the \emph{ideal} trajectory, that is, the trajectory that minimizes
\mkmcal{D} but is not restricted to lie on the input trajectories. It follows that at
any time $t$, $\mkpzc{I}(t)$ is simply the average of the highest entity $\mkpzc{U}(t)$ and
the lowest entity $\mkpzc{L}(t)$. We further subdivide each time interval
$J_i=[t_i,t_{i+1}]$ into \emph{elementary intervals}, such that \mkpzc{I} is a single
line segment inside each elementary interval.
\begin{lemma}
\label{lem:num_elementary_intervals}
The total number of elementary intervals is $\tau(n+2)$.
\end{lemma}
\begin{proof}
The ideal trajectory \mkpzc{I} changes direction when $\mkpzc{U}(t)$ or $\mkpzc{L}(t)$
changes. During a single interval $[t_i,t_{i+1}]$ all entities move along lines, so \mkpzc{U}
and \mkpzc{L} are the upper and lower envelope of a set of $n$ lines. So by standard
point-line duality, \mkpzc{U} and \mkpzc{L} correspond to the upper and lower hull of $n$
points. The summed complexity of the upper and lower hull is at most $n+2$.
\end{proof}
We assume without loss of generality that within each elementary interval \mkpzc{I}
coincides with the $x$-axis. To simplify the description of the proofs and
algorithms, we also assume that the entities never move parallel to the ideal
trajectory, that is, there are no horizontal edges.
\begin{lemma}
\label{lem:central_ideal}
\mkpzc{C} is a central trajectory in $\mkmbb{R}^1$ if and only if it minimizes the function
\[ \mkmcal{D}'(\mkpzc{T}) = \int_{t_0}^{t_\tau} |\mkpzc{T}(t)| \dd t. \]
\end{lemma}
\begin{proof}
A central trajectory \mkpzc{C} is a trajectoid\xspace that minimizes the function
\begin{align*}
\mkmcal{D}(\mkpzc{T}) &= \int_{t_0}^{t_\tau} D(\mkpzc{T},t) \dd t
= \int_{t_0}^{t_\tau} \max_{\psi \in \mkmcal{X}} \|\mkpzc{T}(t)\psi(t)\| \dd t
= \int_{t_0}^{t_\tau} \max_{\psi \in \mkmcal{X}} |\mkpzc{T}(t) - \psi(t)| \dd t \\
&= \int_{t_0}^{t_\tau} \max \{ |\mkpzc{T}(t) - \mkpzc{U}(t)|, |\mkpzc{T}(t) - \mkpzc{L}(t)| \} \dd t.
\end{align*}
Since $(\mkpzc{U}(t) + \mkpzc{L}(t))/2 = 0$, we have that $|\mkpzc{T}(t) - \mkpzc{U}(t)| > |\mkpzc{T}(t) - \mkpzc{L}(t)|$
if and only if $\mkpzc{T}(t) < 0$. So, we split the integral, depending on $\mkpzc{T}(t)$,
giving us
\begin{align*}
\mkmcal{D}(\mkpzc{T}) &= \int_{t_0 \leq t \leq t_\tau \land \mkpzc{T}(t) \geq 0} \mkpzc{T}(t) - \mkpzc{L}(t) \dd t +
\int_{t_0 \leq t \leq t_\tau \land \mkpzc{T}(t) < 0} \mkpzc{U}(t) - \mkpzc{T}(t) \dd t \\
&= \int_{t_0 \leq t \leq t_\tau \land \mkpzc{T}(t) \geq 0} \mkpzc{T}(t) \dd t -
\int_{t_0 \leq t \leq t_\tau \land \mkpzc{T}(t) \geq 0} \mkpzc{L}(t) \dd t~+ \\
&\quad \int_{t_0 \leq t \leq t_\tau \land \mkpzc{T}(t) < 0} \mkpzc{U}(t) \dd t -
\int_{t_0 \leq t \leq t_\tau \land \mkpzc{T}(t) < 0} \mkpzc{T}(t) \dd t.
\end{align*}
We now use that $-\int_{\mkpzc{T}(t) < 0}\mkpzc{T}(t) = \int_{\mkpzc{T}(t) < 0} |\mkpzc{T}(t)|$, and that
$-\int \mkpzc{L}(t) = \int \mkpzc{U}(t)$ (since $(\mkpzc{U}(t) + \mkpzc{L}(t))/2 = 0$). After rearranging the
terms we then obtain
\begin{align*}
\mkmcal{D}(\mkpzc{T}) &= \int_{t_0 \leq t \leq t_\tau \land \mkpzc{T}(t) \geq 0} \mkpzc{T}(t) \dd t +
\int_{t_0 \leq t \leq t_\tau \land \mkpzc{T}(t) < 0} |\mkpzc{T}(t)| \dd t~+ \\
&\quad \int_{t_0 \leq t \leq t_\tau \land \mkpzc{T}(t) \geq 0} \mkpzc{U}(t) \dd t +
\int_{t_0 \leq t \leq t_\tau \land \mkpzc{T}(t) < 0} \mkpzc{U}(t) \dd t \\
&= \int_{t_0 \leq t \leq t_\tau} |\mkpzc{T}(t)| \dd t~+
\int_{t_0 \leq t \leq t_\tau} \mkpzc{U}(t) \dd t.
\end{align*}
The last term is independent of $\mkpzc{T}$, so we have $\mkmcal{D}(\mkpzc{T}) = \mkmcal{D}'(\mkpzc{T}) + c$,
for some $c \in \mkmbb{R}$. The lemma follows.
\end{proof}
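The pointwise identity underlying this proof (after normalizing \mkpzc{I} to $0$, so that $\mkpzc{L}(t) = -\mkpzc{U}(t)$ with $\mkpzc{U}(t) \geq 0$) is $\max \{ |\mkpzc{T}(t) - \mkpzc{U}(t)|, |\mkpzc{T}(t) - \mkpzc{L}(t)| \} = |\mkpzc{T}(t)| + \mkpzc{U}(t)$, which can be sanity-checked numerically (an illustration only):

```python
def radius(T, U):
    """max(|T - U|, |T - L|) with L = -U and U >= 0: the distance from a
    candidate point T to the farther of the extreme entities U and L."""
    return max(abs(T - U), abs(T + U))

# the identity radius(T, U) == abs(T) + U holds for all T and all U >= 0
```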
By Lemma~\ref{lem:central_ideal} a central trajectory \mkpzc{C} is a trajectoid\xspace that
minimizes the area $\mkmcal{D}'(\mkpzc{T})$ between \mkpzc{T} and the ideal trajectory \mkpzc{I}. Hence, we
can focus on finding a trajectoid that minimizes $\mkmcal{D}'$.
\subsection{Complexity}
\label{sub:complexity_1d}
\eenplaatje[scale=.8]{quadratic_ideal}{Lower bound construction that shows that
\mkpzc{C} (red) may have quadratic complexity. The ideal trajectory \mkpzc{I} is shown in green.}
\begin{lemma}
\label{lem:lowerbound_complexity_1D}
For a set of $n$ trajectories in $\mkmbb{R}^1$, each with vertices at times
$t_0,..,t_\tau$, a central trajectory \mkpzc{C} may have worst case complexity
$\Omega(\tau n^2)$.
\end{lemma}
\begin{proof}
We describe a construction for the entities that shows that within a single
time interval $J=[t_i,t_{i+1}]$ the complexity of \mkpzc{C} may be
$\Omega(n^2)$. Repeating this construction $\tau$ times gives us $\Omega(\tau
n^2)$ as desired.
Within $J$ the entities move linearly. So we construct an arrangement \mkmcal{A} of
lines that describes the motion of all entities. We place $m=n/3$ lines such
that the upper envelope of \mkmcal{A} has linear complexity. We do the same for the
lower envelope. We position these lines such that the ideal trajectory \mkpzc{I}
---which is the average of the upper and lower envelope--- makes a vertical
``zigzagging'' pattern (see
Fig.~\ref{fig:quadratic_ideal}).
The remaining set $H$ of $m$ lines are horizontal. Two consecutive lines are
placed at (vertical) distance at most $\varepsilon$. We place all lines such that
they all intersect \mkpzc{I}. It follows that \mkpzc{C} jumps $\Omega(n^2)$ times between the
lines in $H$. The lemma follows.
\end{proof}
Two entities $\sigma$ and $\psi$ are $\varepsilon$\emph{-connected} at time $t$ if
there is a sequence $\sigma=\sigma_0,..,\sigma_k=\psi$ of entities such that
for all $i$, $\sigma_i$ and $\sigma_{i+1}$ are within distance $\varepsilon$ of each
other at time $t$. A subset $\mkmcal{X}' \subseteq \mkmcal{X}$ of entities is $\varepsilon$-connected
at time $t$ if all entities in $\mkmcal{X}'$ are pairwise $\varepsilon$-connected at time
$t$. The set $\mkmcal{X}'$ is $\varepsilon$-connected during an interval $I$, if they are
$\varepsilon$-connected at any time $t \in I$. We now observe:
\begin{observation}
\label{obs:jump}\hspace{-3pt}
\mkpzc{C} can jump from entity $\sigma$ to $\psi$ at time $t$ if and only if
$\sigma$ and $\psi$ are $\varepsilon$-connected at time $t$.
\end{observation}
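At a fixed time, the maximal $\varepsilon$-connected sets are the connected components of the graph that links two entities whenever they are within distance $\varepsilon$ of each other; a brute-force sketch (an illustration, not an algorithm from this paper):

```python
import math

def eps_components(positions, eps):
    """Partition entity indices into maximal eps-connected sets at one
    fixed time, via a simple union-find over all close pairs."""
    n = len(positions)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) <= eps:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

This matches the jump condition of Observation~\ref{obs:jump}: \mkpzc{C} may jump between any two entities that lie in the same component.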
At any time $t$, we can partition \mkmcal{X} into maximal sets of $\varepsilon$-connected
entities. The central trajectory \mkpzc{C} must be in one such maximal set $\mkmcal{X}'$:
it uses the trajectory of an entity $\sigma \in \mkmcal{X}'$ (at time $t$) if and only
if $\sigma$ is the entity from $\mkmcal{X}'$ closest to \mkpzc{I}. More formally,
let $f_\sigma(t) = |\sigma(t)|$, and let $\mkpzc{L}(\mkmcal{F}) = \min_{f \in \mkmcal{F}} f$ denote
the lower envelope of a set of functions \mkmcal{F}.
\begin{observation}
\label{obs:lower_envelope_1d}
Let $\mkmcal{X}' \ni \sigma$ be a set of entities that is $\varepsilon$-connected during
interval $J$, and assume that $\mkpzc{C} \in \mkmcal{X}'$ during $J$. For any time $t \in
J$, we have that $\mkpzc{C}(t) = \sigma(t)$ if and only if $f_\sigma$ is on the
lower envelope of the set $\mkmcal{F}' = \{f_\psi \mid \psi \in \mkmcal{X}'\}$ at time $t$,
that is, $f_\sigma(t) = \mkpzc{L}(\mkmcal{F}')(t)$.
\end{observation}
Let $\mkmcal{X}_1,..,\mkmcal{X}_m$ denote a collection of maximal sets of entities that are
$\varepsilon$-connected during time intervals $J_1,..,J_m$, respectively. Let $\mkmcal{F}_i =
\{ f_\sigma \mid \sigma \in \mkmcal{X}_i \}$, and let $\mkpzc{L}_i$ be the lower envelope
$\mkpzc{L}(\mkmcal{F}_i)$ of $\mkmcal{F}_i$ restricted to interval $J_i$. A lower envelope $\mkpzc{L}_i$
has a break point at time $t$ if $f_\sigma(t) = f_\psi(t)$, for $\sigma,\psi
\in \mkmcal{X}_i$. There are two types of break points: (i) $\sigma(t) = \psi(t)$,
or (ii) $\sigma(t) = -\psi(t)$. At events of type (i) the modified trajectories
of $\sigma$ and $\psi$ intersect. At events of the type (ii), $\sigma$ and
$\psi$ are equally far from \mkpzc{I}, but on different sides of \mkpzc{I}. Let $B = \{
(t,\sigma,\psi) \mid \mkpzc{L}_i(t) = f_\sigma(t) = f_\psi(t) \land i \in \{1,..,m\}\}$
denote the collection of break points from all lower envelopes $\mkpzc{L}_1,..,\mkpzc{L}_m$.
\begin{lemma}
\label{lem:break_points_in_at_most_one_set}
Consider a triplet $(t,\sigma,\psi) \in B$. There is at most one lower
envelope $\mkpzc{L}_i$ such that $t$ is a break point in $\mkpzc{L}_i$.
\end{lemma}
\begin{proof}
Assume by contradiction that $t$ is a break point in both $\mkpzc{L}_i$ and
$\mkpzc{L}_j$. At any time $t$, an entity can be in at most one maximal set
$\mkmcal{X}_\ell$. So if $\mkmcal{X}_i$ and $\mkmcal{X}_j$ share either entity $\sigma$ or $\psi$,
then the intervals $J_i$ and $J_j$ are disjoint. It follows that $t$ cannot lie
in both intervals, and thus cannot be a break point in both $\mkpzc{L}_i$ and
$\mkpzc{L}_j$. Contradiction.
\end{proof}
\begin{lemma}
\label{lem:entities_on_boundary_zone}
Let \mkmcal{A} be an arrangement of $n$ lines, describing the movement of $n$
entities during an elementary interval $J$. If there is a break point
$(t,\sigma,\psi) \in B$, with $t \in J$, of type (ii), then $\sigma(t)$ and
$\psi(t)$ lie on the boundary $\partial\mathcal{Z}$ of the zone $\mathcal{Z}$
of \mkpzc{I} in \mkmcal{A}.
\end{lemma}
\begin{proof}
Let $\mkmcal{X}_j$ be the maximal $\varepsilon$-connected set containing $\sigma$ and $\psi$, and
assume without loss of generality that $f_\sigma(t) = \sigma(t) = -\psi(t) =
f_\psi(t)$. Now, assume by contradiction that $\sigma$ is not on $\partial
\mathcal{Z}$ at time $t$ (the case that $\psi(t)$ is not on
$\partial\mathcal{Z}$ is symmetric). This means that there is an entity
$\rho$ with $0 \leq \rho(t) < \sigma(t)$. If $\rho \in \mkmcal{X}_j$, this
contradicts that $f_\sigma(t)$ was on the lower envelope of $\mkmcal{X}_j$ at time
$t$. So $\rho$ is not $\varepsilon$-connected to $\sigma$ at time $t$. Hence, their
distance is at least $\varepsilon$. We then have $\sigma(t) > \rho(t) + \varepsilon >
\varepsilon$. It now follows that $\sigma$ and $\psi$ cannot be $\varepsilon$-connected at
time $t$: the distance between $\sigma$ and $\psi$ is bigger than $\varepsilon$ so
they are not directly connected, and $f_\sigma$ and $f_\psi$ are on $\mkpzc{L}_j$,
so there are also no other entities in $\mkmcal{X}_j$ through which they can be
$\varepsilon$-connected. Contradiction.
\end{proof}
\begin{lemma}
\label{lem:equidistant_jumps}
Let \mkmcal{A} be an arrangement of $n$ lines, describing the movement of $n$
entities during an elementary interval $J$. The total number of break points
$(t,\sigma,\psi) \in B$, with $t \in J$, of type (ii) is at most $6.5n$.
\end{lemma}
\begin{wrapfigure}[10]{r}{0.4\textwidth}
\centering
\includegraphics{jumps_in_zone}
\caption{The jumps of \mkpzc{L} (dashed arrows) involving edges $e$ and $g$.}
\label{fig:jumps_in_zone}
\end{wrapfigure}
\begin{proof}
By Lemma~\ref{lem:break_points_in_at_most_one_set} all break points can be
charged to exactly one set $\mkmcal{X}_j$. From
Lemma~\ref{lem:entities_on_boundary_zone} it follows that break points of
type (ii) involve only entities whose lines in \mkmcal{A} participate in the zone of
\mkpzc{I}.
Let $E$ be the set of edges of $\partial\mathcal{Z}$. We have that $|E| \leq
5.5n$~\cite{bepy-htlp-91,pa-cg-95}. We now split every edge that intersects
\mkpzc{I}, at the intersection point. Since every line intersects \mkpzc{I} at most once,
this means the number of edges in $E$ increases to at most $6.5n$. For every pair
of edges $(e,g)$, that lie on opposite sides of \mkpzc{I}, there is at most one time
$t$ where a lower envelope $\mkpzc{L}=\mkpzc{L}_j$, for some $j$, has a break point of type
(ii).
Consider a break point of type (ii), that is, a time $t$ such that \mkpzc{L}
switches (jumps) from an entity $\sigma$ to an entity $\psi$, with $\sigma$
and $\psi$ on opposite sides of \mkpzc{I}. Let $e \in E$ and $g \in E$ be the
edges containing $\sigma(t)$ and $\psi(t)$, respectively. If the arriving
edge $g$ has not been charged before, we charge the jump to $g$. Otherwise,
we charge it to $e$. We show below that every edge in $E$ is charged at most
once. Since $E$ has at most $6.5n$ edges, the number of break points of type
(ii) is also at most $6.5n$.
We now show that either $e$ or $g$ has not been charged before. Assume, by
contradiction, that both $e$ and $g$ have been charged before time $t$, at
times $t_e$ and $t_g$, respectively. Consider the case that $t_g < t_e$
(see Fig.~\ref{fig:jumps_in_zone}). At time $t_e$, the lower envelope \mkpzc{L}
jumps from an edge $h$ onto $e$ or vice versa. Since there is a jump
involving edge $g$ at time $t_g$ and one at time $t$ it follows that at time
$t_e$, $g$ is the closest edge in $E$ opposite to $e$. Hence, $h = g$. This
means we jump twice between $e$ and $g$. Contradiction. The case $t_e < t_g$
is symmetrical and the case $t_e = t_g$ cannot occur. It follows that $e$ or
$g$ was not charged before time $t$, and thus all edges in $E$ are charged at
most once.
\end{proof}
\begin{lemma}
\label{lem:complexity_lower_envelopes_1d}
The total complexity of all lower envelopes $\mkpzc{L}_1,..,\mkpzc{L}_m$ on $[t_i,t_{i+1}]$
is $O(n^2)$.
\end{lemma}
\begin{proof}
The break points in the lower envelopes are either of type (i) or of type
(ii). We now show that there are at most $O(n^2)$ break points of either
type.
The break points of type (i) correspond to intersections between the
trajectories of two entities. Within interval $[t_i,t_{i+1}]$ the entities
move along lines, hence there are at most $O(n^2)$ such intersections. By
Lemma~\ref{lem:break_points_in_at_most_one_set} all break points can be
charged to exactly one set $\mkmcal{X}_i$. It follows that the total number of break
points of type (i) is $O(n^2)$.
To show that the number of events of the second type is at most $O(n^2)$ as
well, we divide $[t_i,t_{i+1}]$ into $O(n)$ elementary intervals such that
\mkpzc{I} coincides with the $x$-axis. By Lemma~\ref{lem:equidistant_jumps} each such
elementary interval contains at most $O(n)$ break points of type (ii).
\end{proof}
\begin{theorem}
\label{thm:complexity_c_1D}
Given a set of $n$ trajectories in $\mkmbb{R}^1$, each with vertices at times
$t_0,..,t_\tau$, a central trajectory \mkpzc{C} has worst case complexity
$O(\tau n^2)$.
\end{theorem}
\begin{proof}
A central trajectory \mkpzc{C} is a piecewise-defined function. From
Observations~\ref{obs:jump} and~\ref{obs:lower_envelope_1d} it now follows
that \mkpzc{C} has a break point at time $t$ only if (a) two subsets of entities become
$\varepsilon$-connected or $\varepsilon$-disconnected, or (b) the lower envelope of a set
of $\varepsilon$-connected entities has a break point at time $t$. Within a single
time interval $J_i=[t_i,t_{i+1}]$ there are at most $O(n^2)$ times when two
entities are at distance exactly $\varepsilon$. Hence, the number of events of type
(a) during interval $J_i$ is also $O(n^2)$. By
Lemma~\ref{lem:complexity_lower_envelopes_1d} the total complexity of all
lower envelopes of $\varepsilon$-connected sets during $J_i$ is also
$O(n^2)$. Hence, the number of break points of type (b) within interval $J_i$
is also $O(n^2)$. The theorem follows.
\end{proof}
\subsection{Algorithm}
\label{sub:Algorithm_1d}
\begin{wrapfigure}[14]{r}{0.45\textwidth}
\centering
\includegraphics{1d_reeb}
\caption{The Reeb graph for the moving entities from
Fig.~\ref{fig:1d_slabs}. The dashed lines indicate that two entities are at
distance $\varepsilon$.}
\label{fig:1d_reeb}
\end{wrapfigure}
We now present an algorithm to compute a trajectoid\xspace \mkpzc{C} minimizing $\mkmcal{D}'$. By
Lemma~\ref{lem:central_ideal} such a trajectoid is a central trajectory. The
basic idea is to construct a weighted (directed acyclic) graph that represents
a set of trajectoid\xspace{}s containing an optimal trajectoid\xspace. We can then find \mkpzc{C}
by computing a minimum weight path in this graph.
The graph that we use is a weighted version of the Reeb graph that Buchin
et al.\xspace~\cite{grouping2013} use to model the trajectory grouping structure. We
review their definition here. The \emph{Reeb graph} \mkmcal{R} is a directed acyclic
graph. Each edge $e=(u,v)$ of \mkmcal{R} corresponds to a maximal subset of entities $C_e
\subseteq \mkmcal{X}$ that is $\varepsilon$-connected during the time interval
$[t_u,t_v]$. The vertices represent times at which the sets of $\varepsilon$-connected
entities change, that is, the times at which two entities $\sigma$ and $\psi$
are at distance $\varepsilon$ from each other and the set containing $\sigma$ merges with
or splits from the set containing $\psi$. See Fig.~\ref{fig:1d_reeb} for an illustration.
By Observation~\ref{obs:jump} a central trajectory \mkpzc{C} can jump from $\sigma$ to
$\psi$ if and only if $\sigma$ and $\psi$ are $\varepsilon$-connected, that is, if
$\sigma$ and $\psi$ are in the same component $C_e$ of edge $e$. From
Observation~\ref{obs:lower_envelope_1d} it follows that on each edge $e$, \mkpzc{C}
uses only the trajectories of entities $\sigma$ for which $f_\sigma$ occurs on
the lower envelope of the functions $\mkmcal{F}_e = \{ f_\sigma \mid \sigma \in
C_e\}$. Hence, we can then express the cost for \mkpzc{C} using edge $e$ by
\[
\omega_e = \int_{t_u}^{t_v} \mkpzc{L}(\mkmcal{F}_e)(t) \dd t.
\]
It now follows that \mkpzc{C} follows a path in the Reeb graph \mkmcal{R}, that is, the set
of trajectoids represented by \mkmcal{R} contains a trajectoid minimizing $\mkmcal{D}'$. So we
can compute a central trajectory by finding a minimum weight path in \mkmcal{R}
from a source to a sink.
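As a concrete (hypothetical) illustration of the edge weight $\omega_e$: within an elementary interval each $f_\sigma$ is linear, so $\omega_e$ is the exact integral of the lower envelope of a set of lines over $[t_u,t_v]$. The sketch below assumes lines are given as slope/intercept pairs; these inputs are illustrative, not part of the paper's data.

```python
def edge_weight(lines, t_u, t_v):
    """Integrate the lower envelope of linear functions f(t) = a*t + b
    over [t_u, t_v].  Candidate breakpoints are the pairwise intersection
    times; between consecutive candidates the envelope is a single line,
    so the trapezoid rule is exact on each piece."""
    times = {t_u, t_v}
    for i, (a1, b1) in enumerate(lines):
        for a2, b2 in lines[i + 1:]:
            if a1 != a2:
                t = (b2 - b1) / (a1 - a2)
                if t_u < t < t_v:
                    times.add(t)
    ts = sorted(times)
    env = lambda t: min(a * t + b for a, b in lines)
    return sum((env(l) + env(r)) / 2 * (r - l) for l, r in zip(ts, ts[1:]))
```

For instance, the envelope of $f_1(t)=t$ and $f_2(t)=2-t$ on $[0,2]$ has one break point at $t=1$ and integrates to $1$.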
\paragraph{Analysis} First we compute the Reeb graph as defined by Buchin
et al.\xspace~\cite{grouping2013}. This takes $O(\tau n^2\log n)$ time. Second we
compute the weight $\omega_e$ for each edge $e$. The Reeb graph \mkmcal{R} is a DAG,
so once we have the edge weights, we can use dynamic programming to compute a
minimum weight path in $O(|\mkmcal{R}|) = O(\tau n^2)$ time. So all that remains is
to compute the edge weights $\omega_e$. For this, we need the lower envelope
$\mkpzc{L}_e$ of each set $\mkmcal{F}_e$ on the interval $J_e$. To compute the lower
envelopes, we need the ideal trajectory \mkpzc{I}, which we can compute in $O(\tau n\log
n)$ time by computing the lower and upper envelope of the trajectories in each
time interval $[t_i,t_{i+1}]$.
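The dynamic-programming step mentioned above can be sketched as follows; since \mkmcal{R} is a DAG, relaxing edges in a topological order yields a minimum-weight source-to-sink path in time linear in the graph size. The vertex count, edge list, sources, and sinks here are hypothetical placeholders.

```python
from collections import defaultdict

def min_weight_path(n_vertices, edges, sources, sinks):
    """Minimum-weight path in a DAG given as (u, v, w) triples, via
    dynamic programming over a topological order (Kahn's algorithm)."""
    adj = defaultdict(list)
    indeg = [0] * n_vertices
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1

    # Kahn's algorithm produces a topological order of the DAG.
    queue = [v for v in range(n_vertices) if indeg[v] == 0]
    topo = []
    while queue:
        u = queue.pop()
        topo.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    INF = float('inf')
    dist = [INF] * n_vertices
    for s in sources:
        dist[s] = 0.0
    for u in topo:  # relax outgoing edges in topological order
        if dist[u] < INF:
            for v, w in adj[u]:
                dist[v] = min(dist[v], dist[u] + w)
    return min(dist[t] for t in sinks)
```

On the toy DAG with edges $0\to1$ (weight 1), $0\to2$ (5), $1\to2$ (1), $2\to3$ (1), the minimum weight from vertex 0 to vertex 3 is 3.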
Lemma~\ref{lem:complexity_lower_envelopes_1d} implies that the total complexity
of all lower envelopes is $O(\tau n^2)$. To compute them we have two
options. We can simply compute the lower envelope from scratch for every edge
of \mkmcal{R}. This takes $O(\tau n^2 \cdot n\log n) = O(\tau n^3 \log n)$
time. Instead, for each time interval $J_i=[t_i,t_{i+1}]$, we compute the
arrangement \mkmcal{A} representing the modified trajectories on the interval $J_i$,
and use it to trace $\mkpzc{L}_e$ in \mkmcal{A} for every edge $e$ of \mkmcal{R}.
An arrangement of $m$ line segments can be built in $O(m \log m + A)$ time,
where $A$ is the output complexity~\cite{as-aa-00}. In total \mkmcal{A} consists of
$O(n^2)$ line segments: $n+2$ per entity. Since each pair of trajectories
intersects at most once during $J_i$, we have that $A = O(n^2)$. Thus, we can
build \mkmcal{A} in $O(n^2 \log n)$ time. The arrangement \mkmcal{A} represents all break
points of type (i), of all functions $f_\sigma$. We now compute all pairs of
points in \mkmcal{A} corresponding to break points of type (ii). We do this in $O(n^2)$
time by traversing the zone of \mkpzc{I} in \mkmcal{A}.
We now trace the lower envelopes through \mkmcal{A}: for each edge $e=(u,v)$ in the
Reeb graph with $J_e \subseteq J_i$, we start at the point $\sigma(t_u)$,
$\sigma \in C_e$, that is closest to \mkpzc{I}, and then follow the edges in \mkmcal{A}
corresponding to $\mkpzc{L}_e$, taking care to jump when we encounter break points of
type (ii). Our lower envelopes are all disjoint (except at endpoints), so we
traverse each edge in \mkmcal{A} at most once. The same holds for the jumps. We can
avoid costs for searching for the starting point of each lower envelope by
tracing the lower envelopes in the right order: when we are done tracing
$\mkpzc{L}_e$, with $e=(u,v)$, we continue with the lower envelope of an outgoing edge
of vertex $v$. If $v$ is a split vertex where $\sigma$ and $\psi$ are at
distance $\varepsilon$, then the starting point of the lower envelope of the other edge is
either $\sigma(t_v)$ or $\psi(t_v)$, depending on which of the two is farthest
from \mkpzc{I}. It follows that when we have \mkmcal{A} and the list of break points of type
(ii), we can compute all lower envelopes in $O(n^2)$ time. We conclude:
\begin{theorem}
\label{thm:central_trajectory_1d}
Given a set of $n$ trajectories in $\mkmbb{R}^1$, each with vertices at times
$t_0,..,t_\tau$, we can compute a central trajectory \mkpzc{C} in $O(\tau n^2\log n)$ time
using $O(\tau n^2)$ space.
\end{theorem}
\paragraph{A central trajectory without jumps} When our entities move in
$\mkmbb{R}^1$, it is not necessary to have discontinuities in \mkpzc{C}, i.e.~we can set
$\varepsilon = 0$. In this case we can give a more precise bound on the complexity of
\mkpzc{C}, and we can use a slightly easier algorithm. The details can be found in
Appendix~\ref{app:oned_no_jumps}.
\section{Entities moving in $\mkmbb{R}^d$}
\label{sec:higher_dimensions}
In the previous section, we used the ideal trajectory \mkpzc{I}, which minimizes the
distance to the farthest entity, ignoring the requirement to stay on an input
trajectory. The problem was then equivalent to finding a trajectoid\xspace that
minimizes the distance to the ideal trajectory. In $\mkmbb{R}^d$, with $d > 1$,
however, this approach fails, as the following example shows.
\eenplaatje {2d_not_ideal} {Point $p$ is closest to the ideal point $m$,
however the smallest enclosing disk centered at $q$ is smaller than that of $p$.}
\begin {observation}
Let $P$ be a set of points in $\mkmbb{R}^2$. The point in $P$ that minimizes the distance to
the ideal point (i.e., the center of the smallest enclosing disk of $P$) is
not necessarily the same as the point in $P$ that minimizes the distance to
the farthest point in $P$.
\end {observation}
\begin {proof}
See Fig.~\ref {fig:2d_not_ideal}. Consider three points $a$, $b$ and $c$
at the corners of an equilateral triangle, and two points $p$ and $q$ close
to the center $m$ of the circle through $a$, $b$ and $c$. Now $p$ is closer
to $m$ than $q$, yet $q$ is closer to $b$ than $p$ (and $q$ is as far from
$a$ as from $b$).
\end {proof}
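The configuration from this proof can be checked numerically; the concrete coordinates below (unit circle, $p=(0.1,0)$, $q=(0,0.12)$) are our own instantiation of the construction, chosen for illustration.

```python
from math import dist, sqrt

# Unit circle centered at the ideal point m = (0, 0); a, b, c are the
# corners of an equilateral triangle inscribed in it, and p, q are two
# points near m.
m = (0.0, 0.0)
a, b, c = (0.0, 1.0), (sqrt(3) / 2, -0.5), (-sqrt(3) / 2, -0.5)
p, q = (0.1, 0.0), (0.0, 0.12)

# Distance to the farthest of the three triangle corners.
far = lambda x: max(dist(x, v) for v in (a, b, c))

print(dist(p, m) < dist(q, m))  # p is closer to the ideal point m ...
print(far(q) < far(p))          # ... yet q is closer to its farthest point.
```

Both comparisons hold, confirming that minimizing the distance to the ideal point and minimizing the distance to the farthest entity pick different points.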
\subsection{Complexity}
\label{sub:Complexity_2D}
It follows from Lemma~\ref{lem:lowerbound_complexity_1D} that the complexity of
a central trajectory for entities moving in $\mkmbb{R}^d$ is at least $\Omega(\tau
n^2)$. In this section, we prove that the complexity of \mkpzc{C} within a single time
interval $[t_i,t_{i+1}]$ is at most $O(n^{5/2})$. Thus, the complexity over all
$\tau$ time intervals is $O(\tau n^{5/2})$.
Let \mkmcal{F} denote the collection of functions $D_\sigma$, for $\sigma \in \mkmcal{X}$. We
partition time into intervals $J'_1,..,J'_{k'}$ such that in each interval
$J'_i$ all functions $D_\sigma$ restricted to $J'_i$ are \emph{simple}, that
is, they consist of just one piece. We now show that each function $D_\sigma$
consists of at most $\tau(2n-1)$ pieces, and thus the total number of intervals
is at most $O(\tau n^2)$. See Fig.~\ref{fig:intro_example_slabsanddistances}
for an illustration.
\begin{lemma}
\label{lem:D_hyperbolic}
Each function $D_\sigma$ is piecewise hyperbolic and
consists of at most $\tau(2n-1)$ pieces.
\end{lemma}
\begin{proof}
Consider a time interval $J_i=[t_i,t_{i+1}]$. For any entity $\psi$ and any
time $t \in J_i$, the function $\|\sigma(t)\psi(t)\| = \sqrt{at^2 + bt + c}$,
with $a,b,c \in \mkmbb{R}$, is hyperbolic in $t$.
Each pair of such functions can intersect at most twice. During $J_i$,
$D_\sigma$ is the upper envelope of these functions, so it consists of
$\lambda_2(n)$ pieces, where $\lambda_s$ denotes the maximum complexity of
a Davenport-Schinzel sequence of order $s$~\cite{as-dssga-00}. We have
$\lambda_2(n) = 2n -1$, so the lemma follows.
\end{proof}
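The hyperbolic form $\sqrt{at^2+bt+c}$ used in this proof follows directly from expanding the squared distance between two linearly moving points. A small sketch computing the coefficients (the start positions and velocities are illustrative):

```python
def distance_coeffs(p0, v0, p1, v1):
    """For entities moving linearly, sigma(t) = p0 + t*v0 and
    psi(t) = p1 + t*v1, the squared distance expands to a*t^2 + b*t + c,
    so the distance itself is the hyperbolic function sqrt(a*t^2+b*t+c)."""
    dp = [x - y for x, y in zip(p0, p1)]  # difference of start positions
    dv = [x - y for x, y in zip(v0, v1)]  # difference of velocities
    a = sum(x * x for x in dv)
    b = 2 * sum(x * y for x, y in zip(dp, dv))
    c = sum(x * x for x in dp)
    return a, b, c

# Example: sigma starts at (0,0) with velocity (1,0); psi starts at (0,1)
# with velocity (0,1).  The squared distance is 2t^2 + 2t + 1.
```

At $t=3$ this gives $\sqrt{2\cdot9+2\cdot3+1}=5$, matching the direct distance between $(3,0)$ and $(0,4)$.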
\begin{lemma}
\label{lem:intersections_F}
The total number of intersections of all functions in \mkmcal{F} is at most $O(\tau n^3)$.
\end{lemma}
\begin{proof}
Fix a pair of entities $\sigma,\psi$. By Lemma~\ref{lem:D_hyperbolic} there
are at most $\tau(2n-1)$ time intervals $J$, such that $D_\sigma$ restricted
to $J$ is simple. The same holds for $D_\psi$. So, there are at most
$\tau(4n-2)$ intervals in which both $D_\sigma$ and $D_\psi$ are simple (and
hyperbolic). In each interval $D_\sigma$ and $D_\psi$ intersect at most
twice.
\end{proof}
We again observe that \mkpzc{C} can only jump from one entity to another if they are
$\varepsilon$-connected. Hence, Observation~\ref{obs:jump} holds for entities moving in
$\mkmbb{R}^d$ as well. As before, this means that at any time $t$, we can partition \mkmcal{X}
into maximal sets of $\varepsilon$-connected entities. Let $\mkmcal{X}' \ni \sigma$ be a
maximal subset of $\varepsilon$-connected entities at time $t$. This time, a central
trajectory \mkpzc{C} uses the trajectory of entity $\sigma$ at time $t$, if and only
if $\sigma$ is the entity from $\mkmcal{X}'$ whose function $\mkmcal{D}_\sigma$ is
minimal. Hence, if we define $f_\sigma = D_\sigma$,
Observation~\ref{obs:lower_envelope_1d} holds again as well.
Consider all $m'=O(n^2)$ intervals $J'_1,..,J'_{m'}$ that together form $[t_j,t_{j+1}]$. We
subdivide these intervals at points where the distance between two entities is
exactly $\varepsilon$. Let $J_1,..,J_m$ denote the set of resulting intervals. Since
there are $O(n^2)$ times at which two entities are at distance exactly $\varepsilon$,
we still have $O(n^2)$ intervals. Note that for all intervals $J_i$ and all
entities $\sigma$, $f_\sigma$ is simple and totally defined on
$J_i$.
In each interval $J_i$, a central trajectory \mkpzc{C} uses the trajectories of only
one maximal set of $\varepsilon$-connected entities. Let $\mkmcal{X}'_i$ be this set, let
$\mkmcal{F}'_i = \{f_\sigma \mid \sigma \in \mkmcal{X}'_i\}$ be the set of corresponding
functions, and let $\mkpzc{L}_i$ be the lower envelope of $\mkmcal{F}'_i$, restricted to
interval $J_i$. We now show that the total complexity of all these lower
envelopes is $O(n^{5/2})$. It follows that the maximal complexity of \mkpzc{C} in
$J_i$ is at most $O(n^{5/2})$ as well.
\begin{lemma}
\label{lem:complexity_L_in_interval}
Let $J$ be an interval, let $\mkmcal{F}$ be a set of hyperbolic functions that are
simple and totally defined on $J$, and let $k$ denote the complexity of the
lower envelope $\mkpzc{L}$ of \mkmcal{F} restricted to $J$. Then there are $\Omega(k^2)$
intersections of functions in $\mkmcal{F}$ that do not lie on $\mkpzc{L}$.
\end{lemma}
\frank{It seems this should work for arbitrary functions that intersect at most
twice and span $J$. So we can generalize the lemma a bit further. Probably the
same holds for Lemma 24 then.}
\maarten{Or you could define it for arbitrary functions, and let $k$ be the number of distinct ones that appear on the lower envelope.}
\begin{wrapfigure}[14]{r}{0.50\textwidth}
\centering
\includegraphics[scale=.8]{lower_envelope_in_interval}
\caption{The function $f$ (blue) is intersected by at least $\ell_{i-2} = \lfloor (i-1)/2\rfloor$
functions from $\mkmcal{F}_i$ (black).}
\label{fig:intersections_interval}
\end{wrapfigure}
\begin{proof}
Let $\mkpzc{L} = L_1,..,L_k$ denote the pieces of the lower envelope, ordered from
left to right. Consider any subsequence $\mkpzc{L}'=L_1,...,L_i$ of the pieces. The
functions in \mkmcal{F} are all hyperbolic, so every pair of functions intersect at
most twice. Therefore $\mkpzc{L}'$ consists of at most $\lambda_2(|\mkmcal{F}|) = 2|\mkmcal{F}|-1$
pieces. Hence, $i \leq 2|\mkmcal{F}|-1$. The same argument gives us that there must
be at least $\ell_i = \lfloor (i+1)/2 \rfloor$ distinct functions of $\mkmcal{F}$
contributing to $\mkpzc{L}'$.
Consider a piece $L_i = [a,b]$ such that $a$ is the first time that a
function $f$ contributes to the lower envelope. That is, $a$ is the first
time such that $f(t) = \mkpzc{L}(t)$. Clearly, there are at least $\ell_k$ such
pieces. Furthermore, there are at least $\ell_{i-2}$ distinct functions
corresponding to the pieces $L_1,..,L_{i-2}$. Let $\mkmcal{F}_i$ denote the set of
those functions.
All functions in $\mkmcal{F}$ are continuous and totally defined, so they span time
interval $J$. It follows that all functions in $\mkmcal{F}_i$ must intersect $f$ at
some time after the start of interval $J$, and before time $a$. Since $a$ was the
first time that $f$ lies on \mkpzc{L}, all these intersection points do not lie on
\mkpzc{L}. See Fig.~\ref{fig:intersections_interval}. In total we have at least
\[ \sum_{i=2}^{\ell_k} \ell_{i-2}
= \sum_{i=2}^{\lfloor (k+1)/2 \rfloor} \lfloor (i-1)/2 \rfloor
= \sum_{i=1}^{\lfloor (k+1)/2 \rfloor -1} \lfloor i/2 \rfloor
= \Omega(k^2)
\]
such intersections.
\end{proof}
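As a sanity check of the closing summation, the count of forced off-envelope intersections indeed grows quadratically in the envelope complexity $k$ (roughly $k^2/16$); this numeric check is ours, not part of the proof.

```python
def hidden_intersections(k):
    """Lower bound from the proof: the sum of floor(i/2) for
    i = 1 .. floor((k+1)/2) - 1, counting intersections that are
    forced to lie strictly below the lower envelope's complexity k."""
    return sum(i // 2 for i in range(1, (k + 1) // 2))

# The ratio hidden_intersections(k) / k^2 approaches a positive constant,
# so the bound is Omega(k^2).
for k in (10, 100, 1000):
    print(k, hidden_intersections(k), hidden_intersections(k) / k**2)
```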
\begin{lemma}
\label{lem:summed_lower_envelope_complexity}
Let $\mkmcal{F}_1,..,\mkmcal{F}_m$ be a collection of $m$ sets of hyperbolic partial
functions, let $J_1,..,J_m$ be a collection of intervals such that:
\begin{itemize}[nosep]
\item the total number of intersections between functions in $\mkmcal{F}_1,..,\mkmcal{F}_m$
is at most $O(n^3)$,
\item for any two intersecting intervals $J_i$ and $J_j$, $\mkmcal{F}_i$ and $\mkmcal{F}_j$
are disjoint, and
\item for every set $\mkmcal{F}_i$, all functions in $\mkmcal{F}_i$ are simple and totally
defined on $J_i$.
\end{itemize}
Let $\mkpzc{L}_i$ denote the lower envelope of $\mkmcal{F}_i$ restricted to
$J_i$.
The total complexity of the lower envelopes $\mkpzc{L}_1,..,\mkpzc{L}_m$ is $O((m +
n^2)\sqrt{n})$.
\end{lemma}
\begin{proof}
Let $k_i$ denote the complexity of the lower envelope $\mkpzc{L}_i$. An interval
$J_i$ is \emph{heavy} if $k_i > \sqrt{n}$ and \emph{light}
otherwise. Clearly, the total complexity of all light intervals is at most
$O(m\sqrt{n})$. What remains is to bound the complexity of all heavy
intervals.
Relabel the intervals such that $J_1,..,J_h$ are the heavy intervals. By
Lemma~\ref{lem:complexity_L_in_interval} we have that in each interval $J_i$,
there are at least $ck^2_i$ intersections involving the functions $\mkmcal{F}_i$, for
some $c \in \mkmbb{R}$.
Since for every pair of intervals $J_i$ and $J_j$ that overlap the sets
$\mkmcal{F}_i$ and $\mkmcal{F}_j$ are disjoint, we can associate each intersection with at
most one interval. There are at most $O(n^3)$ intersections in total, thus we
have $c'n^3 \geq \sum_{i=1}^m ck^2_i \geq \sum_{i=1}^h ck^2_i$, for some $c'
\in \mkmbb{R}$. Using that for all heavy intervals $k_i > \sqrt{n}$ we obtain
\[ c'n^3
\geq \sum_{i=1}^h ck^2_i
\geq \sum_{i=1}^h c\sqrt{n}k_i
= c\sqrt{n} \sum_{i=1}^h k_i.
\]
It follows that the total complexity of the heavy intervals is $\sum_{i=1}^h
k_i \leq c'n^3/c\sqrt{n} = O(n^2\sqrt{n})$.
\end{proof}
By Lemma~\ref{lem:intersections_F} we have that the number of intersections
between functions in \mkmcal{F} in time interval $[t_j,t_{j+1}]$ is $O(n^3)$. Hence,
the total number of intersections over all functions in all sets $\mkmcal{F}'_i$ is
also $O(n^3)$. All functions in each set $\mkmcal{F}'_i$ are simple and totally defined
on $J_i$, and all intervals $J_1,..,J_m$ are pairwise disjoint, so we can use
Lemma~\ref{lem:summed_lower_envelope_complexity}. It follows that the total
complexity of $\mkpzc{L}_1,..,\mkpzc{L}_m$ is at most $O(n^{5/2})$. Thus, in a single time
interval the worst case complexity of \mkpzc{C} is also at most $O(n^{5/2})$. We conclude:
\begin{theorem}
\label{thm:complexity_c_d}
Given a set of $n$ trajectories in $\mkmbb{R}^d$, each with vertices at times
$t_0,..,t_\tau$, a central trajectory \mkpzc{C} has worst case complexity
$O(\tau n^{5/2})$.
\end{theorem}
\subsection{Algorithm}
\label{sub:Algorithm_higher_dim}
We use the same global approach as before: we represent a set of
trajectoids containing an optimal solution by a graph, and then compute a
minimum weight path in this graph.
The graph that we use, is a slightly modified Reeb graph. We split an edge
$e$ into two edges at time $t$ if there is an entity $\sigma \in C_e$ such that
$D_\sigma = f_\sigma$ has a break point at time $t$. All functions $f_\sigma$,
with $\sigma \in C_e$, are now simple and totally defined on $J_e$. This
process adds a total of $O(\tau n^2)$ degree-two vertices to the Reeb
graph. Let \mkmcal{R} denote the resulting Reeb graph (see Fig.~\ref{fig:overview_2d}).
\begin{figure}\label{fig:overview_2d}
\end{figure}
To find all the times where we have to insert vertices, we explicitly compute
the functions $D_\sigma$. This takes $O(\tau n \lambda_2(n)\log n) = O(\tau n^2
\log n)$ time, where $\lambda_s$ denotes the maximum complexity of a
Davenport-Schinzel sequence of order $s$~\cite{as-dssga-00}, since within each
time interval $[t_i,t_{i+1}]$ each $D_\sigma$ is the upper envelope of a set of
$n$ functions that intersect each other at most twice. After we sort these
break points in $O(\tau n^2 \log n)$ time, we can compute the modified Reeb
graph \mkmcal{R} in $O(\tau n^2 \log n)$ time~\cite{grouping2013}.\footnote{The
algorithm to compute the Reeb graph is presented for entities moving in
$\mkmbb{R}^2$ in~\cite{grouping2013}, but it can easily be extended to entities
moving in $\mkmbb{R}^d$.}
Next, we compute all weights $\omega_e$, for each edge $e$. This means we have
to compute the lower envelope $\mkpzc{L}_e$ of the functions $\mkmcal{F}_e = \{ f_\sigma \mid
\sigma \in C_e \}$ on the interval $J_e$. All these lower envelopes have a
total complexity of at most $O(\tau n^{5/2})$:
\begin{lemma}
\label{lem:complexity_Fes}
The total complexity of the lower envelopes for all edges of the
Reeb graph is $O(\tau n^{5/2})$.
\end{lemma}
\begin{proof}
We consider each time interval $J_i=[t_i,t_{i+1}]$ separately. Let $\mkmcal{R}_i$
denote the Reeb graph restricted to $J_i$. We now show that for each $\mkmcal{R}_i$,
the total complexity of all lower envelopes $\mkpzc{L}_e$ of edges $e$ in $\mkmcal{R}_i$ is
$O(n^2\sqrt{n})$. The lemma then follows.
By Lemma~\ref{lem:intersections_F}, the total number of intersections of all
functions $\mkmcal{F}_e$, with $e$ in $\mkmcal{R}_i$, is $O(n^3)$. Each set $\mkmcal{F}_e$
corresponds to an interval $J_e$, on which all functions in $\mkmcal{F}_e$ are simple
and totally defined. Furthermore, at any time, every entity is in at most one
component $C_e$. So, if two intervals $J_e$ and $J_{e'}$ overlap, the sets of
entities $C_e$ and $C_{e'}$, and thus also the sets of functions $\mkmcal{F}_e$ and
$\mkmcal{F}_{e'}$ are disjoint. It follows that we can apply
Lemma~\ref{lem:summed_lower_envelope_complexity}. Since $\mkmcal{R}_i$ has
$O(n^2)$ edges the total complexity of all lower envelopes is
$O(n^2\sqrt{n})$.
\end{proof}
We again have two options to compute all lower envelopes: either we compute all
of them from scratch in $O(\tau n^2 \cdot \lambda_2(n) \log n) = O(\tau n^3
\log n)$ time, or we use a similar approach as before. For each time interval,
we compute the arrangement \mkmcal{A} of all functions \mkmcal{F}, and then trace $\mkpzc{L}_e$ in \mkmcal{A}
for every edge $e$. For $n^2$ functions that pairwise intersect at most twice,
the arrangement can be built in $O(n^2 \log n + A)$ expected time, where $A$ is
the output complexity~\cite{as-aa-00}. The complexity of \mkmcal{A} is $O(n^3)$, so we
can construct it in $O(n^3)$ time. As before, every edge is traversed at most
once so tracing all lower envelopes $\mkpzc{L}_e$ takes $O(n^3)$ time. It follows that
we can compute all edge weights in $O(\tau n^3)$ time, using $O(n^3)$ working
space.
Computing a minimum weight path takes $O(\tau n^2)$ time, and uses $O(\tau
n^2)$ space as before. Thus, we can compute \mkpzc{C} in $O(\tau n^3)$ time and $O(n^3
+ \tau n^2)$ space.
\paragraph{Reducing the required working space} We can reduce the amount of
working space required to $O(n^2 \log n + \tau n^2)$ as follows. Consider
computing the edge weights in the time interval $J=[t_i,t_{i+1}]$. Interval $J$
is subdivided into $O(n^2)$ smaller intervals $J_1,..,J_m$ as described in
Section~\ref{sub:Complexity_2D}. We now consider groups of $r$ consecutive
intervals. Let $J'$ be the union of $r$ consecutive intervals; we compute
the arrangement \mkmcal{A} of the functions \mkmcal{F}, restricted to time interval $J'$. Since
every interval $J_i$ has at most $O(n^2)$ intersections \mkmcal{A} has worst case
complexity $O(rn^2)$. Thus, at any time we need at most $O(rn^2)$ space to
store the arrangement. In total this takes $O(\sum_{i=1}^{n^2/r} (n_i \log n_i
+ A_i))$ time, where $n_i$ is the number of functions in the $i^\mathrm{th}$
group of intervals, and $A_i$ is the complexity of the arrangement in group
$i$. The total complexity of all arrangements is again $O(n^3)$. Since we cut
each function $D_\sigma$ into an additional $O(n^2/r)$ pieces, the total number
of functions is $O(n^3/r + n^2)$. Hence, the total running time is
$O((n^3/r)\log n + n^3)$. We now choose $r=\Theta(\log n)$ to compute all edge
weights in $[t_i,t_{i+1}]$ in $O(n^3)$ time and $O(n^2\log n)$ space. We
conclude:
\begin{theorem}
\label{thm:central_trajectory_2d}
Given a set of $n$ trajectories in $\mkmbb{R}^d$, each with vertices at times
$t_0,..,t_\tau$, we can compute a central trajectory \mkpzc{C} in $O(\tau n^3)$ time
using $O(n^2 \log n + \tau n^2)$ space.
\end{theorem}
\section{Extensions}
\label{sec:Extensions}
We now briefly discuss how our results can be extended in various directions.
\paragraph {Other measures of centrality}
We based our central trajectory on the center of the smallest enclosing disk of
a set of points. Instead, we could choose other static measures of centrality,
such as the Fermat-Weber point, which minimizes the sum of distances to the
other points, or the center of mass, which minimizes the sum of squared
distances to the other points. In both cases we can use the same general
approach as described in Section~\ref{sec:higher_dimensions}.
Let $\hat{D}^2_\sigma(t) = \sum_{\psi \in \mkmcal{X}} \|\sigma(t)\psi(t)\|^2$ denote
the sum of the squared Euclidean distances from $\sigma$ to all other entities
at time $t$. This function $\hat{D}^2_\sigma$ is piecewise quadratic in $t$,
and consists of (only) $O(\tau)$ pieces. It follows that the total number of
intersections between all functions $\hat{D}^2_\sigma$, $\sigma \in \mkmcal{X}$, is at
most $O(\tau n^2)$. We again split the domain of these functions into
elementary intervals. The Reeb graph \mkmcal{R} representing the $\varepsilon$-connectivity
of the entities still has $O(\tau n^2)$ vertices and edges. Each vertex of \mkpzc{C}
corresponds either to an intersection between two functions $\hat{D}^2_\sigma$
and $\hat{D}^2_\psi$, or to a jump, occurring at a vertex of \mkmcal{R}. It now
follows that \mkpzc{C} has complexity $O(\tau n^2)$.
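To make the piecewise-quadratic structure concrete: on a single elementary interval every entity moves linearly, so $\hat{D}^2_\sigma$ restricted to that interval is one quadratic whose coefficients can be read off from positions and velocities. A minimal sketch (representing an entity as a (position, velocity) pair per interval is our own convention, not from the paper):

```python
def squared_distance_sum(sigma, others):
    """Coefficients (a, b, c) of the quadratic t -> sum of squared
    distances from sigma to the other entities on one elementary
    interval.  Each entity is a (position, velocity) pair in R^d,
    moving as p + t*v on the interval."""
    ps, vs = sigma
    a = b = c = 0.0
    for p, v in others:
        for d in range(len(ps)):
            dp = ps[d] - p[d]          # componentwise difference is
            dv = vs[d] - v[d]          # linear in t: dp + t*dv
            a += dv * dv               # expand (dp + t*dv)^2 and sum
            b += 2 * dp * dv
            c += dp * dp
    return (a, b, c)
```

Summing these per-entity quadratics is what keeps $\hat{D}^2_\sigma$ at $O(\tau)$ pieces overall: the breakpoints come only from the trajectory vertices, not from the summation.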
To compute a central trajectory, we compute a shortest path in the
(weighted) Reeb graph \mkmcal{R}. To compute the weights we again construct the
arrangement of all curves $\hat{D}^2_\sigma$, and trace the lower envelope
$\mkpzc{L}_e$ of the curves associated to each edge $e \in \mkmcal{R}$. This can be done in $O(\tau n^2 \log n)$ time
in total.
The sum of Euclidean distances $\hat{D}_\sigma(t) = \sum_{\psi \in \mkmcal{X}}
\|\sigma(t)\psi(t)\|$ is a sum of square roots, and cannot be represented analytically in an efficient
manner. Hence, we cannot efficiently compute a central trajectory for this measure.
Similarly, depending on the application, we may prefer a different way of integrating over time. Instead of the integral of $D$, we may, for example, wish to minimize $\max_t D(\cdot,t)$ or $\int D^2(\cdot,t) \dd t$.
Again, the same general approach still works, but now, after constructing the Reeb graph, we compute the weights of each edge differently.
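For the $\max_t$ variant, observe that each piece of a lower envelope of our distance functions is convex (or unimodal with a minimum), so its maximum over a closed subinterval is attained at an endpoint; it therefore suffices to evaluate the envelope at its break points. A small sketch under that assumption (the piece representation as triples is hypothetical):

```python
def max_over_envelope(pieces):
    """Maximum of a lower envelope given as pieces (t0, t1, f).
    Assumes each piece f is convex, or unimodal with a minimum, on
    [t0, t1], so its maximum over the piece is attained at an endpoint."""
    best = float("-inf")
    for t0, t1, f in pieces:
        best = max(best, f(t0), f(t1))
    return best
```

With this, the edge weight for the $\max_t$ objective costs no more than for the integral: both are computed from the same list of envelope break points.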
\paragraph{Minimizing the distance to the Ideal Trajectory \mkpzc{I}} We saw that for
entities moving in $\mkmbb{R}^1$, minimizing the distance from \mkpzc{C} to the farthest
entity is identical to minimizing the distance from \mkpzc{C} to the ideal trajectory
\mkpzc{I} (which itself minimizes the distance to the farthest entity, but is not
constrained to lie on an input trajectory). We also saw that for entities
moving in $\mkmbb{R}^d$, $d > 1$, these two problems are not the same. So, a natural
question is if we can also minimize the distance to \mkpzc{I} in this case. It turns
out that, at least for $\mkmbb{R}^2$, we can again use our general approach, albeit
with different complexities.
Demaine et al.\xspace~\cite{demaine2010kinetic} show that for entities moving along
lines\footnote{Or, more generally, along a curve described by a low degree
polynomial.} in $\mkmbb{R}^2$ the ideal trajectory \mkpzc{I} has complexity
$O(n^{3+\delta})$ for any $\delta > 0$. It follows that the function
$\check{D}_\sigma(t) = \|\mkpzc{I}(t)\sigma(t)\|$ is a piecewise hyperbolic function
with at most $O(\tau n^{3+\delta})$ pieces. The total number of intersections
between all functions $\check{D}_\sigma$, for $\sigma \in \mkmcal{X}$, is then $O(\tau
n^{5+\delta})$. Similar to Lemma~\ref{lem:summed_lower_envelope_complexity}, we
can then show that all lower envelopes in \mkmcal{R} together have complexity $O(\tau
n^{4+\delta})$.
We then also obtain an $O(\tau n^{4+\delta})$ bound on the complexity of a
central trajectory \mkpzc{C} minimizing the distance to \mkpzc{I}.
To compute such a central trajectory \mkpzc{C} we again construct \mkmcal{R}. To compute the
edge weights it is now more efficient to recompute the lower envelope $\mkpzc{L}_e$
for each edge $e$ from scratch. This takes $O(\tau n^3 \cdot n \log n) = O(\tau
n^4 \log n)$ time, whereas constructing the entire arrangement may take up to
$O(\tau n^{5+\delta})$ time.
We note that the $O(n^{3+\delta})$ bound on the complexity of \mkpzc{I} by Demaine
et al.\xspace~\cite{demaine2010kinetic} is not known to be tight. The best known lower bound is
only $\Omega(n^2)$. So, a better upper bound for this problem immediately also
gives a better bound on the complexity of \mkpzc{C}.
\paragraph {Relaxing the input pieces requirement} We require each piece of the
central trajectory to be part of one of the input trajectories, and allow small
jumps between the trajectories. This is necessary, because in general no two
trajectories may intersect. Another interpretation of this dilemma is to
relax the requirement that the
output trajectory stays on an input trajectory at all times, and just require
it to be \emph {close} (within distance $\varepsilon$) to an input trajectory at all times.
In this case, no discontinuities in the output trajectory are necessary.
We can model this by replacing each point entity by a disk of radius $\varepsilon$. The
goal then is to compute a shortest path within the union of disks at all times.
We now observe that
if at time $t$ the ideal trajectory \mkpzc{I} is contained in the same component of
$\varepsilon$-disks as \mkpzc{C}, the central trajectory will follow \mkpzc{I}. If \mkpzc{I} lies outside
of the component, \mkpzc{C} travels on the border of the $\varepsilon$-disk (in the component
containing \mkpzc{C}) minimizing $D(\cdot,t)$. In terms of the distance functions,
this behavior again corresponds to following the lower envelope of a set of
functions. We can thus identify the following types of break points of \mkpzc{C}: (i)
break points of \mkpzc{I}, (ii) breakpoints in one of the lower envelopes
$\mkpzc{L}_1,..,\mkpzc{L}_m$ corresponding to the distance functions of the entities in each
component, and (iii) break points at which \mkpzc{C} switches between following \mkpzc{I} and
following a lower envelope $\mkpzc{L}_j$. There are at most $O(\tau n^{3+\delta})$
break points of type (i)~\cite{demaine2010kinetic}, and at most $O(\tau
n^2\sqrt{n})$ of type (ii). The break points of type (iii) correspond to
intersections between \mkpzc{I} and the manifold that we get by tracing the
$\varepsilon$-disks over the trajectory. The number of such intersections is at most
$O(\tau n^{4+\delta})$. Hence, in this case \mkpzc{C} has complexity $O(\tau
n^{4+\delta})$. We can thus get an $O(\tau n^{5+\delta} \log n)$ algorithm by
computing the lower envelopes from scratch.
\small
\section*{Acknowledgments}
M.L. and F.S. are supported by the Netherlands Organisation for Scientific
Research (NWO) under grant 639.021.123 and 612.001.022, respectively.
\clearpage
\appendix
\section{A continuous central trajectory for entities moving in $\mkmbb{R}^1$}
\label{app:oned_no_jumps}
\subsection{Complexity}
\label{sub:Complexity}
We analyze the complexity of a central trajectory \mkpzc{C}, a trajectoid\xspace that
minimizes $\mkmcal{D}'$, in the case $\varepsilon = 0$. We now show that on each elementary
interval \mkpzc{C} has complexity $24n$. It follows that the complexity of \mkpzc{C} during a
time interval $[t_i,t_{i+1}]$ is $24n^2+48n$, and that the total complexity is
$O(\tau n^2)$.
We are given $n$ lines representing the movement of all the entities during an
elementary interval. We split each line into two half lines at the point where
it intersects \mkpzc{I} (the $x$-axis). This gives us an arrangement \mkmcal{A} of $2n$
half-lines. A half-line $\ell$ is \emph{positive} if it lies above the
$x$-axis, and \emph{negative} otherwise. If the slope of $\ell$ is positive,
$\ell$ is \emph{increasing}. Otherwise it is \emph{decreasing}. For a given
point $p$ on $\ell$, we denote the ``sub''half-line of $\ell$ starting at $p$
by $\ell^{p\to}$.
Let \mkpzc{C} be a trajectoid\xspace that minimizes $\mkmcal{D}'$.
\begin{lemma}
\label{lem:opt_clip_half_lines_at_v}
Let $\ell$ and $m$ be two positive increasing half-lines, of which $\ell$ has
the largest slope, and let $v$ be their intersection point. At vertex $v$, \mkpzc{C} does
not continue along $\ell^{v\to}$.
\end{lemma}
\begin{proof}
Assume for contradiction that \mkpzc{C} starts to travel along $\ell^{v\to}$ at
vertex $v$ (see Fig.~\ref{fig:opt_clip_half_lines}). Let $t^*$, with $t^* >
t_v$, be the first time where \mkpzc{C} intersects $m$ again after visiting
$\ell^{v\to}$, or $\infty$ if no such time exists. Now consider the
trajectoid\xspace \mkpzc{T}, such that $\mkpzc{T}(t) = m(t)$ for all times $t \in
[t_v,t^*]$, and $\mkpzc{T}(t) = \mkpzc{C}(t)$ for all other times $t$. At any time $t$
in the interval $(t_v,t^*)$ we have $0 \leq \mkpzc{T}(t) < \mkpzc{C}(t)$. It follows that
$\mkmcal{D}'(\mkpzc{T}) < \mkmcal{D}'(\mkpzc{C})$. Contradiction.
\begin{figure}\label{fig:opt_clip_half_lines}
\end{figure}
\end{proof}
\begin{corollary}
\label{cor:opt_clip_half_lines}
Let $\ell$ and $m$ be two positive increasing half-lines, of which $\ell$ is
the steepest, and let $v$ be their intersection point. \mkpzc{C} does not visit
$\ell^{v\to}$, that is, $\ell^{v\to} \cap \mkpzc{C} = \emptyset$.
\end{corollary}
Consider two positive increasing half-lines $\ell$ and $\ell'$ in \mkmcal{A}, of which
$\ell$ is the steepest, and let $v$ be their intersection
point. Corollary~\ref{cor:opt_clip_half_lines} guarantees that \mkpzc{C} never uses
the half-line $\ell^{v\to}$. Hence, we can remove it from \mkmcal{A} (thus replacing
$\ell$ by a line segment) without affecting \mkpzc{C}. By
symmetry, it follows that we can \emph{clip} one half-line from every pair of
half-lines that are both increasing or decreasing. Let $\mkmcal{A}'$ be the arrangement
that we obtain this way. See Fig.~\ref{fig:clipped_arrangement} for an
illustration.
\begin{figure}\label{fig:clipped_arrangement}
\end{figure}
Consider the set \mkmcal{Z} of open faces of $\mkmcal{A}'$ intersected by \mkpzc{I}, and let
$E$ be the set of edges bounding them. We refer to $\mkmcal{Z} \cup E$ as the
\emph{zone} of \mkpzc{I} in $\mkmcal{A}'$.
\begin{lemma}
\label{lem:opt_in_zone}
\mkpzc{C} is contained in the zone of \mkpzc{I} in $\mkmcal{A}'$.
\end{lemma}
\begin{proof}
We can use essentially the same argument as in
Lemma~\ref{lem:opt_clip_half_lines_at_v}. Assume by contradiction that \mkpzc{C}
lies outside the zone from $u$ to $v$. The path from $u$ to $v$ along the
border of the zone is $x$-monotone, hence there is a trajectoid\xspace \mkpzc{T} that
follows this path during $[t_u,t_v]$ and for which $\mkpzc{T}(t) = \mkpzc{C}(t)$ at any
other time. It follows that $\mkmcal{D}'(\mkpzc{T}) < \mkmcal{D}'(\mkpzc{C})$, giving us a contradiction.
\end{proof}
\newcommand{\ensuremath{\mkmcal{Z}_\ell}\xspace}{\ensuremath{\mkmcal{Z}_\ell}\xspace}
\begin{lemma}
\label{lem:zonecomplex_Ap}
The zone \ensuremath{\mkmcal{Z}_\ell}\xspace of a line $\ell$ in $\mkmcal{A}'$ has maximum complexity $8n$.
\end{lemma}
\begin{proof}
We show that the complexity of \ensuremath{\mkmcal{Z}_\ell}\xspace restricted to the positive half-plane is
at most $4n$. A symmetric argument holds for the negative half-plane, thus
proving the lemma. Since we restrict ourselves to the positive half-plane,
the half-lines and segments of $\mkmcal{A}'$ correspond to two forests: a purple
forest with $p$ segments\footnote{Note that some of these segments ---the
segments corresponding to the roots of the trees--- are actually
half-lines.}, and a brown forest with $b$ segments. Furthermore, we have
$p+b = n$.
Rotate all segments such that $\ell$ is horizontal. We now show that the
number of \emph{left-bounding} edges in \ensuremath{\mkmcal{Z}_\ell}\xspace is $2n$. Similarly, the number of
right-bounding edges is also $2n$. Consider just the purple forest. Clearly,
there are at most $p$ left-bounding edges in the zone of $\ell$ in the purple
forest. We now iteratively add the edges of the brown forest in some order
that maintains the following invariant: the already-inserted brown segments
form a forest in which every tree is rooted at an unbounded segment (a
half-line). We then show that every new left-bounding edge in the zone
either replaces an old left-bounding edge or can be charged to a purple
vertex or a brown segment. In total we gather $p + 2b$ charges which,
together with the at most $p$ initial purple left-bounding edges, gives a total of $2p+2b = 2n$ left-bounding edges.
Let $s$ be a new brown leaf segment that we add to $\mkmcal{A}'$, and consider the
set $J$ of all intersection points of $s$ with edges that form \ensuremath{\mkmcal{Z}_\ell}\xspace in the
arrangement so far. The points in $J$ subdivide $s$ into subsegments
$s_1,..,s_k$ (See Fig.~\ref{fig:subsegments_in_zone}). All new edges in \ensuremath{\mkmcal{Z}_\ell}\xspace
are subsegments of $s$. We charge the subsegment $s_i$ that intersects $\ell$
(if any), and $s_k$ to $s$ itself. The remaining subsegments replace either
a brown edge or a purple vertex from \ensuremath{\mkmcal{Z}_\ell}\xspace, or they yield no new left bounding
edges.
Clearly, segments
replacing edges on \ensuremath{\mkmcal{Z}_\ell}\xspace do not increase the complexity of \ensuremath{\mkmcal{Z}_\ell}\xspace. We charge the
segments replacing purple vertices to those vertices.
We claim that a vertex $v$ gets charged at most once.
Indeed, each vertex has three incident edges, only two of which may intersect $\ell$.
The vertex gets charged when a brown segment intersects those two edges between $v$ and $\ell$. After this, $v$ is no longer part of \ensuremath{\mkmcal{Z}_\ell}\xspace in that face (though it may still be in \ensuremath{\mkmcal{Z}_\ell}\xspace in its other faces).
It follows that the total number of charges is $p + 2b$.
\begin{figure}\label{fig:subsegments_in_zone}
\end{figure}
\end{proof}
\begin{theorem}
\label{thm:complexity_lines}
Given $n$ lines, a trajectoid \mkpzc{C} that minimizes $\mkmcal{D}'$ has worst case
complexity $8n$.
\end{theorem}
\begin{proof}
\mkpzc{C} is contained in the zone of \mkpzc{I} in $\mkmcal{A}'$. So the intersection vertices of \mkpzc{C} are
vertices in the zone. By Lemma~\ref{lem:zonecomplex_Ap}, the zone has at most $8n$ vertices,
so \mkpzc{C} has at most $8n$ vertices as well.
\end{proof}
\eenplaatje [scale=0.8] {global_zone} {A piece of the zone in three consecutive slabs.}
Let $\mkmcal{A}^*$ be the total arrangement of all restricted functions; that is, a
concatenation of the arrangements $\mkmcal{A}'$ restricted to the vertical slabs
defined by their elementary time intervals. Define the \emph {global zone} as the zone of $J$ in $\mkmcal{A}^*$. Note that the global zone is more than just the union of the individual zones in the slabs, since cells can be connected along break points (and are not necessarily convex anymore). Nonetheless, we can still show that the complexity of the global zone is linear.
\begin {lemma}
The global zone has complexity $24\tau n^2 + 48\tau n$.
\end {lemma}
\begin {proof}
The global zone is a subset of the union of the zones of $J$ and the vertical lines $x=t_i$, for $i \in \{0,..,k\}$, in the arrangements $\mkmcal{A}'$.
By Lemma~\ref {lem:zonecomplex_Ap}, a line intersecting a single slab has zone complexity $8n$.
Each slab is bounded by two vertical lines and intersected by $J$, so applying the lemma three times yields a $24n$ upper bound on the complexity in a single slab.
Since there are $\tau(n+2)$ elementary intervals,
we conclude that the total complexity is at most $24 \tau n^2 + 48\tau n$.
\end {proof}
As before, it follows that \mkpzc{C} is in the zone of \mkpzc{I} in $\mkmcal{A}^*$. Thus, we conclude:
\begin{theorem}
\label{thm:complexity_opt_central trajectory}
Given a set of $n$ trajectories in $\mkmbb{R}^1$, each with vertices at times
$t_0,..,t_\tau$, a central trajectory \mkpzc{C} with $\varepsilon = 0$ has worst case
complexity $O(\tau n^2)$.
\end{theorem}
\subsection{Algorithm}
\label{sub:Algorithm}
We now present an algorithm to compute a trajectoid\xspace \mkpzc{C} minimizing $\mkmcal{D}'$. It
follows from Lemma~\ref{lem:central_ideal} that such a trajectoid is a central
trajectory. The basic idea is to construct a weighted graph that represents a
set of trajectoid\xspace{}s, and is known to contain an optimal trajectoid\xspace. We then
find \mkpzc{C} by computing a minimum weight path in this graph.
The graph that we use is simply a weighted version of the global zone of \mkpzc{I}. We
augment each edge $e=(u,v)$ in the global zone with a weight $\int_{t_u}^{t_v}
|e(t)|\dd t$. Hence, we obtain a weighted graph \mkmcal{G}. Finally, we add one source
vertex that we connect to the vertices at time $t_0$ with an edge of weight
zero, and one target vertex that we connect with all vertices at time
$t_\tau$. This graph represents a set of trajectoid\xspace{}s, and contains an
optimal trajectoid\xspace \mkpzc{C}.
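In this $\varepsilon = 0$ setting the edges of the global zone are line segments, so each weight $\int_{t_u}^{t_v} |e(t)| \dd t$ has a closed form: a trapezoid when $e$ keeps its sign on the edge, and two triangles when it crosses zero. A sketch of that evaluation (function name and interface are our own):

```python
def edge_weight(tu, tv, yu, yv):
    """Integral of |e(t)| over [tu, tv] for the linear function e with
    e(tu) = yu and e(tv) = yv."""
    if yu * yv >= 0:
        # no sign change: area of a trapezoid
        return (abs(yu) + abs(yv)) * (tv - tu) / 2
    # e crosses zero at tz; the integral is the area of two triangles
    tz = tu + (tv - tu) * abs(yu) / (abs(yu) + abs(yv))
    return (abs(yu) * (tz - tu) + abs(yv) * (tv - tz)) / 2
```

Each edge weight is thus computed in constant time from the coordinates of its two endpoints.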
We find \mkpzc{C} by computing a minimum weight path from the source to the target
vertex. All vertices except the source and target vertex have constant
degree. Furthermore, all zones have linear complexity. It follows that \mkmcal{G} has
$O(\tau n^2)$ vertices and edges, and thus, if we have \mkmcal{G}, we can compute \mkpzc{C} in
$O(\tau n^2 \log (\tau n))$ time.
We compute \mkmcal{G} by computing the zone(s) of the arrangement $\mkmcal{A}'$ in each
elementary interval. We can find the zone of $\mkmcal{A}'$ in $O((n+k)\alpha(n+k)\log
n) = O(n\alpha(n)\log n)$ expected time, where $k$ is the complexity of \mkpzc{C} and
$\alpha$ is the inverse Ackermann function, using the algorithm of
Har-Peled~\cite{harpeled2000walk}. Since $\mkmcal{A}'$ has a special shape, we can
improve on this slightly as follows.
We use a sweep line algorithm which sweeps $\mkmcal{A}'$ with a vertical line from left
to right. We describe only computing the upper border of the zone (the part
that lies above \mkpzc{I}). Computing the lower border is analogous. So in the
following we consider only positive half-lines.
We maintain two balanced binary search trees as status structures. One storing
all increasing half-lines, and one storing all decreasing half-lines. The
binary search trees are ordered on increasing $y$-coordinate where the
half-lines intersect the sweep line. We use a priority queue to store all
events. We distinguish the following events: (i) a half-line starts or stops at
\mkpzc{I}, (ii) an increasing (decreasing) half-line stops (starts) because it
intersects another increasing (decreasing) half-line, and (iii) we encounter
an intersection vertex between an increasing half-line and a decreasing
half-line that lies in the zone. In total there are $O(n)$ events.
The events of type (ii) involve only neighboring lines in the status structure,
and the events of type (iii) involve the lowest increasing (decreasing)
half-line and the decreasing (increasing) half-lines that are in the zone when
they are intersected by the sweep line. To maintain the status structures and
compute new events we need only a constant number of queries and updates to our status
structures and event queue. Hence, each event can be handled in $O(\log n)$
time.
The events of type (i) are known initially. The first events of type (ii) and
(iii) can be computed in $O(\log n)$ time per event (by inserting the
half-lines in the status structures). So, initializing our status structures
and event queue takes $O(n \log n)$ time. During the sweep we handle $O(n)$
events, each taking $O(\log n)$ time. Therefore, we can compute the zone of
$\mkmcal{A}'$ in $O(n \log n)$ time in total.
\paragraph{Computing a minimum weight path} We can slightly improve the running
time by reducing the time required to compute a minimum weight path. If,
instead of a general graph $\mkmcal{G}=(V,E)$ we have a directed acyclic graph (DAG),
we can compute a minimum weight path in only $O(|V|+|E|) = O(\tau n^2)$ time using
dynamic programming. We transform \mkmcal{G} into a DAG by orienting all edges $e=(u,v)$, with $t_u <
t_v$, from $u$ to $v$.
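The dynamic program over the resulting DAG is the standard one: process vertices in topological order (here, by time) and relax outgoing edges. A minimal sketch assuming vertices are numbered so that $t_u < t_v$ implies $u < v$ (an encoding we choose for illustration):

```python
def min_weight_path(n, edges, source, target):
    """Minimum-weight path in a DAG with vertices 0..n-1 numbered in
    topological order and edges (u, v, w) with u < v.  Returns the
    minimum total weight from source to target, or None if unreachable."""
    INF = float("inf")
    out = [[] for _ in range(n)]
    for u, v, w in edges:
        out[u].append((v, w))
    dist = [INF] * n
    dist[source] = 0.0
    for u in range(n):              # relax in topological order
        if dist[u] == INF:
            continue
        for v, w in out[u]:
            dist[v] = min(dist[v], dist[u] + w)
    return dist[target] if dist[target] < INF else None
```

Recording, for each vertex, the predecessor that achieved the minimum recovers the path itself, and hence \mkpzc{C}, in the same time bound.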
The running time is now dominated by constructing the graph. We conclude:
\begin{theorem}
\label{thm:central trajectory_1d_no_jumps}
Given a set of $n$ trajectories in $\mkmbb{R}^1$, each with vertices at times
$t_0,..,t_\tau$, we can compute a central trajectory \mkpzc{C} for $\varepsilon=0$ in
$O(\tau n^2 \log n)$ time using $O(\tau n^2)$ space.
\end{theorem}
\end{document}
\begin{document}
\baselineskip=13pt
\pagestyle{empty}
\
\noindent {\LARGE\bf Regularity jumps for powers of ideals
}
\noindent Aldo \ Conca, \\ Dipartimento di Matematica, Universita' di
Genova, \\ Via Dodecaneso 35, I-16146 Genova, Italia. \\ {\it E-mail}: {\tt
[email protected]}
\section{Introduction
\break}
The Castelnuovo-Mumford regularity $\reg(I)$ is one of the most
important invariants of a homogeneous ideal $I$ in a polynomial ring.
A basic
question is how the regularity behaves with respect to taking powers of ideals. It is
known that in the long-run $\reg(I^k)$ is a linear function of $k$. We show that in the
short-run the regularity of
$I^k$ can be quite ``irregular''. For any given integer $d>1$ we
construct an ideal $J$ generated by $d+5$ monomials of degree $d+1$ in $4$ variables
such that $\reg(J^k)=k(d+1)$ for every
$k<d$ and $\reg(J^d)\geq d(d+1)+d-1$.
\section{Generalities
\break } \label{ng}
Let $K$ be a field. Let $R=K[x_1,\ldots,x_n]$ be the polynomial ring over
$K$. Let
$I=\oplus_{i\in {\mathbb N }} I_i$ be a homogeneous ideal. For every $i,j\in
{\mathbb N }$ one defines the $ij$th graded Betti number of $I$ as
$$\beta_{ij}(I)=\dim_K \Tor^R_i(I,K)_j$$ and sets
$$t_i(I)=\max\{ j \, | \, \beta_{ij}(I)\neq 0\}$$
with $t_i(I)=-\infty$ if it happens that
$\Tor^R_i(I,K)=0$. The Castelnuovo-Mumford regularity
$\reg(I)$ of $I$ is defined as
$$\reg(I)=\sup \{ t_i(I)-i \, : i\in {\mathbb N } \}. $$ By construction
$t_0(I)$ is the largest degree of a minimal generator of $I$. The
initial degree of $I$ is the smallest degree of a minimal generator
of $I$, i.e. it is the least index $i$ such that $I_i\neq 0$. The ideal
$I$ has a linear resolution if its regularity is equal to its
initial degree. In other words,
$I$ has a linear resolution if its minimal generators all have the same
degree and the non-zero entries of the matrices of the minimal free resolution of
$I$ all have degree $1$.
The highest degree of a generator of the $k$-th power $I^k$ of $I$ is
bounded above by
$k$ times the highest degree of a generator of $I$, i.e.
$t_0(I^k)\leq k t_0(I)$. One may wonder whether the same
relation holds also for the Castelnuovo-Mumford regularity, that is,
whether the inequality
$$ \reg(I^k)\leq k\reg(I) \eqno(1) $$ holds for every $k$. For some classes of ideals
$(1)$ holds, see e.g.~\cite{CH}, but in general it does not.
In Section 3 we present some examples of ideals with linear resolution
whose square does not have a linear resolution. On the other hand, it is known that for
every ideal
$I$ one has $$\reg(I^k)=a(I)k+b(I) \quad \mbox{ for all } k\geq c(I)
\eqno(2)$$ where $a(I), b(I)$ and $c(I)$ are integers. Cutkosky, Herzog
and Trung \cite{CHT}, and Kodiyalam \cite{K} proved independently that $(2)$ holds.
They also showed that $a(I)$ is bounded above by the largest degree of a generator of
$I$. Bounds for $b(I)$ and $c(I)$ are given in \cite{CHT}.
Note that if the ideal $I$ is generated in a single degree, say $s$, then $a(I)=s$
and hence $\reg(I^k)-\reg(I^{k-1})=s$ for large $k$. We say that the regularity of the
powers of $I$ jumps at place $k$ if $\reg(I^k)-\reg(I^{k-1})>s$.
One of the most powerful tools in proving that an ideal has a linear
resolution is the following notion:
\begin{Definition}
An ideal $I$ generated in a single degree is said to have linear
quotients if there exists a system of minimal generators $f_1,\dots, f_s$ of $I$ such
that for every $k\leq s$ the colon ideal
$(f_1,\dots, f_{k-1}):f_k$ is generated by linear forms.
\end{Definition}
One has (see \cite{CH}):
\begin{Lemma}
\begin{itemize}
\item[(a)] If $I$ has linear quotients then $I$ has a linear resolution.
\item[(b)] If $I$ is a monomial ideal, then the property of having linear
quotients with respect to its monomial generators is independent of the
characteristic of the base field.
\end{itemize}
\end{Lemma}
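For monomial ideals the linear-quotients condition is mechanical to check: the colon ideal $(f_1,\dots,f_{k-1}):f_k$ is generated by the monomials $f_i/\gcd(f_i,f_k)$, and such a monomial ideal is generated by variables exactly when every generator is divisible by a generator that is itself a single variable. A small sketch (monomials encoded as exponent tuples; the encoding is our own):

```python
def colon(f, g):
    """Exponent vector of the monomial f : g = f / gcd(f, g)."""
    return tuple(max(a - b, 0) for a, b in zip(f, g))

def has_linear_quotients(gens):
    """Check whether the monomial ideal generated by `gens` (a list of
    exponent tuples, taken in the given order) has linear quotients:
    each colon ideal (f_1,...,f_{k-1}) : f_k must be generated by variables."""
    for k in range(1, len(gens)):
        quots = [colon(f, gens[k]) for f in gens[:k]]
        # quotients that are single variables (total degree 1)
        lin = {q for q in quots if sum(q) == 1}
        # every quotient must be divisible by one of the variable quotients
        for q in quots:
            if not any(all(v[i] <= q[i] for i in range(len(q))) for v in lin):
                return False
    return True
```

Running this on the generators of Example~\ref{ConcaHerzog}, in the order given there, the check succeeds, in accordance with part (b) of the Lemma: the verification is purely combinatorial and independent of the base field.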
\section{Examples}
In this section we present some examples (some known, some new) of ideals with a linear
resolution whose square has non-linear syzygies.
The first example of such an ideal was discovered by Terai. It is an
ideal well-known for having another pathology: it is a
square-free monomial ideal whose Betti numbers, regularity and projective
dimension depend on the characteristic of the base field.
\begin{Example}\label{terai} Consider the ideal
$$I=(abc, abd, ace, adf, aef, bcf, bde, bef, cde, cdf)$$
of $K[a,b,c,d,e,f]$. In
characteristic $0$ one has
$\reg(I)=3$ and $\reg(I^2)=7$. The only non-linear syzygy for $I^2$ comes
at the very end of the resolution. In characteristic $2$ the ideal $I$
does not have a linear resolution.
\end{Example}
The second example is taken from \cite{CH}. It is monomial and characteristic free.
The ideal is defined by $5$ monomials and very likely there are no such examples
with fewer than $5$ generators.
\begin{Example}\label{ConcaHerzog} Consider the ideal
$$I=(a^2b, a^2c, ac^2, bc^2, acd)$$
of $K[a,b,c,d]$. It is easy to check that $I$ has
linear quotients (with respect to the monomial generators in the given
order). It follows that $I$ has a linear resolution independently of
$\chara K$. Furthermore
$I^2$ has a quadratic first-syzygy in characteristic $0$. But the first
syzygies of a monomial ideal are independent of $\chara K$. So we may
conclude that $\reg(I)=3$ and
$\reg(I^2)>6$ for every base field $K$.
\end{Example}
The third example is due to Sturmfels \cite{S}. It is monomial, square-free and
characteristic free. It is defined by $8$ square-free monomials and, according
to Sturmfels \cite{S}, there are no such examples with less than $8$
generators.
\begin{Example}\label{Sturmfels} Consider the ideal
$$I=(def, cef, cdf, cde, bef, bcd, acf, ade)$$
of $K[a,b,c,d,e,f]$. One checks
that $I$ has linear quotients (with respect to the monomial generators in the given
order) and so it has a linear resolution independently of $\chara K$. Furthermore
$I^2$ has a quadratic first-syzygy. One concludes that $\reg(I)=3$ and
$\reg(I^2)>6$ for every base field $K$.
\end{Example}
So far all the examples were monomial ideals generated in degree $3$.
Can we find examples generated in degree $2$? In view of the main result of
\cite{HHZ}, we have to allow also non-monomial generators.
One binomial generator is enough:
\begin{Example} Consider the ideal
$$I=(a^2, ab, ac, ad, b^2, ae + bd, d^2)$$
of $K[a,b,c,d,e]$.
One checks that $\reg(I)=2$ and $\reg(I^2)=5$ in characteristic
$0$. Very likely the same holds in any characteristic. The ideal has
linear quotients with respect to the generators in the given order.
\end{Example}
One may wonder whether there exists a prime ideal with this behavior.
Surprisingly, one can find such an example already among the most beautiful and
studied prime ideals, the generic determinantal ideals.
\begin{Example} \label{prime} Let $I$ be the ideal of $K[x_{ij} : 1\leq
i\leq j\leq 4]$ generated by the $3$-minors of the generic symmetric
matrix $(x_{ij})$. It is well-known that $I$ is a prime ideal defining a
Cohen-Macaulay ring and that $I$ has a linear resolution. One checks that $I^2$
does not have a linear resolution.
\end{Example}
\begin{Remark} {\rm
Denote by $I$ the ideal of \ref{prime} and by $J$ the ideal of
\ref{terai}. It is interesting to note that the graded Betti numbers
of $I$ and $J$ as well as those of $I^2$ and $J^2$ coincide. Is this just an
accident? There might be some hidden relationship between the two ideals,
e.g. $J$ could be an initial ideal or a specialization (or an initial
ideal of a specialization) of the ideal $I$. Concretely, we may ask
whether $J$ can be represented as the (initial) ideal of (the ideal of)
$3$-minors of a
$4\times 4$ symmetric matrix of linear forms in $6$ variables.
We have not been able to answer this question but we believe that
something like that should be true. Note however that the most natural way of
filling a $4\times 4$ symmetric matrix with $6$ variables would be to put $0$'s
on the main diagonal and to fill the remaining positions with the $6$
variables. Taking $3$-minors one gets an ideal, say $J_1$, which shares
many invariants with $J$. For instance, we have checked that $J_1$ and
$J$ as well as their squares have the same graded Betti
numbers (respectively of course). The ideals $J$ and $J_1$ are both reduced but $J$
has $10$ components of degree $1$ while $J_1$ has $4$ components of degree $1$
and $3$ of degree $2$. We have also checked that, in the given
coordinates, $J$ cannot be an initial ideal of $J_1$. }
\end{Remark}
\section{Regularity Jumps}
The goal of this section is to show that the regularity of the powers of an ideal can
jump for the first time at any place. This happens already in $4$
variables. Let $R$ be the polynomial ring
$K[x_1,x_2,z_1,z_2]$ and for $d>1$ define the ideal
$$J=(x_1z_1^d, x_1z_2^d, x_2z_1^{d-1}z_2)+(z_1, z_2)^{d+1}$$
We will prove the following:
\begin{Theorem}
\label{main} The ideal $J^k$ has linear quotients for all $k<d$ and
$J^d$ has a first-syzygy of degree $d$. In particular
$\reg(J^k)=k(d+1)$ for all $k<d$ and $\reg(J^d)\geq d(d+1)+d-1$ and this holds
independently of $K$.
\end{Theorem}
In order to prove that $J^k$ has linear quotients we need the following
technical construction.
Given three sets of variables $x=x_1,\dots,x_m$, $z=z_1,\dots,z_n$ and
$t=t_1,\dots,t_k$ we consider monomials
$m_1,\dots,m_k$ of degree $d$ in the variables $z$. Let $\phi$ be a
map
$$\phi:\{t_1,\dots,t_k\}\to
\{x_1,\dots,x_m\}. $$
Extend its action to arbitrary monomials by
setting
$$\phi(\prod t_i^{a_i})=\prod \phi( t_{i})^{a_i}.$$ Set
$W=(m_1,\dots,m_k)$. Consider the bigraded presentation
$$\Phi:K[z_1,\dots,z_n,t_1,\dots,t_k] \to R(W)=K[z_1,\dots,z_n ,m_1s,\dots, m_ks]$$
of the Rees algebra of the ideal $W$ obtained by setting $\Phi(z_i)=z_i$ and
$\Phi(t_i)=m_is$ ($s$ a new variable) and giving degree $(1,0)$ to the $z$'s and degree
$(0,1)$ to the $t$'s. We set
$$H=\Ker \Phi$$
and note that $H$ is a binomial ideal.
\begin{Definition}\label{pseudo} We say that
$m_1,\dots,m_k$ are pseudo-linear of order
$p$ with respect to $\phi$ if for every $1\leq b\leq a\leq p$ and for
every binomial
$MA-NB$ in $H$ with $M,N$ monomials in the
$z$ of degree $(a-b)(d+1)$ and $A,B$ monomials in the $t$ of degree $b$
such that
$\phi(A)>\phi(B)$ in the lex-order there exists an element of the form
$M_1t_i-N_1t_j$ in $H$ where $M_1,N_1$ are monomials in the $z$ such
that the following conditions are satisfied:
\begin{itemize}
\item[(1)] $N_1|N$,
\item[(2)] $t_i|A$, $t_j|B$,
\item[(3)] $\phi(t_i)|\phi(A)/\GCD(\phi(A),\phi(B))$,
\item[(4)] $\phi(t_i)>\phi(t_j)$ in the lex-order.
\end{itemize}
\end{Definition}
The important consequence is the following:
\begin{Lemma}
\label{pl1} Assume that $m_1,\dots,m_k$ are monomials of degree $d$ in the
$z$ which are pseudo-linear of order $p$ with respect to $\phi$. Set
$$J=(z_1,\dots,z_n)^{d+1}+(\phi(t_1)m_1,\phi(t_2)m_2,\dots,
\phi(t_k)m_k).$$ Then $J^a$ has linear quotients for all $a=1,\dots,p$.
\end{Lemma}
\noindent{\bf Proof: } Set $Z=(z_1,\dots,z_n)^{d+1}$ and
$I=(\phi(t_1)m_1,\phi(t_2)m_2,\dots,
\phi(t_k)m_k)$. Take $a$ with $1\leq a\leq p$ and order the generators of
$J^a$ according to the following decomposition:
$J^a=Z^a+Z^{a-1}I+\dots+Z^bI^{a-b}+\dots+I^a$. In the block
$Z^a$ we order the generators so that they have linear quotients; this is
easy since $Z^a$ is just a power of the ideal $(z_1,\dots,z_n)$. In the
block $Z^bI^{a-b}$ with $b<a$ we order the generators extending (in any
way) the lex-order in the $x$. We claim that, with this order, the
ideal $J^a$ has linear quotients. Let us check this. As long as we deal
with elements of the block $Z^a$ there is nothing to check. So let us
take some monomial, say $u$, from the block $Z^bI^{a-b}$ with $b<a$ and
denote by $V$ the ideal generated by the monomials which come earlier
in the list. We have to show that the colon ideal $V:(u)$ is generated
by variables. Note that $V:(u)$ contains
$(z_1,\dots,z_n)$ since $(z_1,\dots,z_n)u\subset Z^{b+1}I^{a-b-1}\subset
V$. Let $v$ be a generator of $V$. If $v$ comes from a block
$Z^cI^{a-c}$ with $c>b$ then we are done since
$(v):(u)$ is contained in $(z_1,\dots,z_n)$ for degree reasons. So we may
assume that
$v$ also comes from the block $Z^bI^{a-b}$. Again, if the generator
$v/\GCD(v,u)$ of $(v):(u)$ involves the variables
$z$ we are done. So we are left with the case in which $v/\GCD(v,u)$
does not involve the variables $z$. It is now time to use the
assumption that the
$m_i$'s are pseudo-linear. Say $u=Nm_{s_1}\phi(t_{s_1}) \cdots
m_{s_{a-b}}\phi(t_{s_{a-b}})$ and
$v=Mm_{r_1}\phi(t_{r_1}) \cdots m_{r_{a-b}}\phi(t_{r_{a-b}})$ with $M,N$ monomials
of degree
$b(d+1)$ in the $z$. Set $A=t_{r_1} \cdots t_{r_{a-b}}$ and $B=t_{s_1}
\cdots t_{s_{a-b}}$.
Since $v$ is earlier than $u$ in the generators of $J^a$ we have
$\phi(A) >\phi(B)$ in the lex-order. Note also that $(v):(u)$ is
generated by $\phi(A)/\GCD(\phi(A),\phi(B))$.
Now the fact that
$v/\GCD(v,u)$ does not involve the variables $z$ is equivalent to saying
that
$MA-NB$ belongs to $H$. By assumption there exists
$L=M_1t_{i}-N_1t_{j}$
in $H$ such that the conditions (1)--(4) of Definition \ref{pseudo} hold. Multiplying
$L$ by $(N/N_1)(B/t_j)$, we see that
$M_1(N/N_1)t_i(B/t_j)-NB$ is in $H$, and hence
$$v_1=M_1 (N/N_1) m_i\phi(t_i) m_{s_1}\phi(t_{s_1}) \cdots
m_{s_{a-b}}\phi(t_{s_{a-b}}) / m_{j}\phi(t_{j})$$
is a monomial of the block $Z^bI^{a-b}$ which
belongs to $V$ by construction and satisfies
$(v):(u)\subseteq (v_1):(u)=(\phi(t_i))$.
This concludes the proof.
$\square$\break
Now we can prove:
\begin{Lemma}\label{pl2} For every integer $d>1$ the monomials
$$m_1=z_1^d,\ \ m_2=z_2^d,\ \ m_3=z_1^{d-1}z_2$$ are pseudo-linear of
order $(d-1)$ with respect to the map
$$\phi:\{t_1,t_2,t_3\}\to \{x_1,x_2\}$$ defined by $\phi(t_1)=x_1$,
$\phi(t_2)=x_1$,
$\phi(t_3)=x_2$.
\end{Lemma}
\noindent{\bf Proof: } It is easy to see that the defining ideal $H$ of the Rees algebra of
$W=(m_1,m_2,m_3)$ is generated by
$$(3)\ z_2t_1-z_1t_3\qquad (4)\ z_1^{d-1}t_2-z_2^{d-1}t_3 \qquad (5)\
t_1^{d-1}t_2-t_3^d.$$
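As a quick check that these binomials indeed lie in $H$, one can apply $\Phi$ directly (recall $\Phi(t_i)=m_is$):
$$\Phi(z_2t_1-z_1t_3)=z_2\cdot z_1^ds-z_1\cdot z_1^{d-1}z_2s=0,
\qquad
\Phi(t_1^{d-1}t_2-t_3^d)=(z_1^ds)^{d-1}(z_2^ds)-(z_1^{d-1}z_2s)^d=0,$$
and similarly for $(4)$.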
Let $1\leq b\leq a\leq d-1$ and $F=MA-NB$ a binomial of bidegree
$((a-b)(d+1), b)$ in $H$ such that $\phi(A)>\phi(B)$ in the lex-order.
Denote by
$v=(v_1,v_2), u=(u_1,u_2)$ the exponents of $M$ and $N$ and by
$\alpha=(\alpha_1,\alpha_2,\alpha_3)$ and
$\beta=(\beta_1,\beta_2,\beta_3)$ the exponents of $A$ and $B$. We
collect all the relations that hold by assumption:
$$\begin{array}{ll} (i) & 1\leq b\leq a\leq d-1\\ (ii) &
\alpha_1+\alpha_2+\alpha_3=\beta_1+\beta_2+\beta_3=b\\ (iii) &
v_1+v_2=u_1+u_2=(a-b)(d+1)\\ (iv) &
v_1+d\alpha_1+(d-1)\alpha_3=u_1+d\beta_1+(d-1)\beta_3\\ (v) &
v_2+d\alpha_2+\alpha_3=u_2+d\beta_2+\beta_3\\ (vi) &
\alpha_1+\alpha_2>\beta_1+\beta_2\\
\end{array}
$$
Note that $(vi)$ holds since $\phi(A)>\phi(B)$ in the lex-order. If
$$ \alpha_1>0 \mbox{ and } \beta_3>0 \mbox{ and } u_1>0 \eqno{(6)}$$
then equation (3) does the job. If instead
$$ \alpha_2>0 \mbox{ and } \beta_3>0 \mbox{ and } u_2\geq d-1
\eqno{(7)}$$ then equation (4) does the job. So it is enough to show
that either $(6)$ or $(7)$ hold. By contradiction, assume that both $(6)$
and $(7)$ do not hold. Note that $(ii)$ and $(vi)$ imply that
$\beta_3>0$. Also note that if $a=b$ then the binomial $F$ has bidegree
$(0,a)$ and hence must be divisible by $(5)$, which is impossible since
$a<d$. So we may assume that
$b<a$. But then
$u_1+u_2=(a-b)(d+1)\geq d+1$ and hence either $u_1>0$ or $u_2\geq d-1$.
Summing up, if both
$(6)$ and $(7)$ do not hold and taking into consideration that
$\beta_3>0$, that either
$u_1>0$ or $u_2\geq d-1$ and that $\alpha_1+\alpha_2>0$, then one of the
following conditions holds:
$$\begin{array}{ll} (8) & \alpha_1=0 \mbox{ and } u_2<d-1 \\ (9) &
\alpha_2=0 \mbox{ and } u_1=0
\end{array}
$$
If $(8)$ holds then $\alpha_2>\beta_1+\beta_2$. Using $(v)$ we may write
$$
u_2=v_2+d(\alpha_2-\beta_1-\beta_2)+d\beta_1+\alpha_3-\beta_3.\eqno{(10)}$$
If $\beta_1>0$ we conclude that $u_2\geq d+d-\beta_3$ and hence $u_2\geq
d+1$ since
$\beta_3\leq b\leq d-1$, and this is a contradiction.
If instead $\beta_1=0$ then $\alpha_2+\alpha_3=\beta_2+\beta_3$ and hence
$(10)$ yields
$u_2=(d-1)(\alpha_2-\beta_2)+v_2\geq (d-1)$, a contradiction.
If $(9)$ holds then $\alpha_1>\beta_1+\beta_2$. By $(iv)$ we have
$$v_1+d(\alpha_1-\beta_1)+(d-1)(\alpha_3-\beta_3)=0$$ Hence
$$v_1+d(\alpha_1-\beta_1-\beta_2)+(d-1)(\alpha_3-\beta_3)+d\beta_2=0$$
But $\alpha_3-\beta_3=-\alpha_1+\beta_1+\beta_2$, so that
$$v_1+(\alpha_1-\beta_1-\beta_2)+d\beta_2=0$$ which is impossible since
$\alpha_1-\beta_1-\beta_2>0$ by assumption.
$\square$\break
Now we are ready to complete the proof of the theorem.
\noindent{\bf Proof of Theorem \ref{main}: } Combining Lemmas \ref{pl1} and \ref{pl2} we have
that
$J^k$ has linear quotients, and hence a linear resolution, for all
$k<d$. It remains to show that $J^d$ has a first-syzygy of degree
$d$.
Denote by
$V$ the ideal generated by all the monomial generators of $J^d$ but
$u=(z_1^{d-1}z_2x_2)^d$. We claim that
$x_1^d$ is a minimal generator of $V:u$ and this is clearly enough to
conclude that
$J^d$ has a first-syzygy of degree $d$. First note that
$x_1^du=x_2^d(z_1^dx_1)^{d-1}(z_2^dx_1)\in V$, hence $x_1^d\in V:u$.
Suppose, by contradiction, $x_1^d$ is not a minimal generator of $V:u$.
Then there exists an integer
$s<d$ such that $x_1^su\in V$. In other words we may write $x_1^su$ as
the product of $d$ generators of $J$, say $f_1,\dots,f_d$, not all equal
to $z_1^{d-1}z_2x_2$, times a monomial
$m$ of degree $s$. Since the total degree in the $x$ of $x_1^su$ is
$s+d$ and each generator of $J$ has degree at most $1$ in the $x$, it
follows that the $f_i$ are all of the type $z_1^dx_1, z_2^dx_1,
z_1^{d-1}z_2x_2$ and $m$ involves only $x$. Since
$x_2$ has degree $d$ in $u$, $s<d$ and $z_1^{d-1}z_2x_2$ is the only
generator of $J$ containing $x_2$ it follows that at least one of the
$f_i$ is equal to $z_1^{d-1}z_2x_2$. Getting rid of those common factors
we obtain a relation of type
$x_1^s(z_1^{d-1}z_2x_2)^r=m(z_1^{d}x_1)^{r_1}(z_2^{d}x_1)^{r_2}$ with
$r=r_1+r_2<d$. In the $z$-variables it gives
$ (z_1^{d-1}z_2)^r=(z_1^{d})^{r_1}(z_2^{d})^{r_2}$ with $r=r_1+r_2<d$,
which is impossible: comparing the exponents of $z_2$ gives $r=dr_2$,
incompatible with $1\leq r<d$.
$\square$\break
What is the regularity of $J^k$ for $k\geq d$? There is some computational evidence
that the first guess, i.e.\ $\reg(J^k)=k(d+1)+d-1$ for $k\geq d$,
might be correct.
\section{Variations}
The ideas and the strategy of the previous section can be used, in
principle, to create other kinds of ``bad'' behavior. We give in this section
some hints and examples but no detailed proofs.
\begin{Hint} {\rm Given $d>1$ consider the ideal
$$H=(x_1z_1^d, x_1z_2^d, x_2z_1^{d-1}z_2)+z_1z_2(z_1, z_2)^{d-1}$$
We believe that $\reg(H^k)=k(d+1)$ for all $k<d$ and $\reg(H^d)\geq
d(d+1)+d-1$. Note that $H$ has two generators fewer than
$J$. In the case $d=2$, $H$ is exactly the ideal of \ref{ConcaHerzog}.
}
\end{Hint}
One can ask whether there are radical ideals behaving like the ideal
of Theorem \ref{main}. One would need a square-free version of the construction
of the previous section. This suggests the following:
\begin{Hint} {\rm For every $d$ consider variables $z_1,z_2,\dots,z_{2d}$
and $x_1,x_2$ and the ideal $$J=(
\begin{array}{ll} x_1z_1z_2, x_1z_3z_4, \dots, x_1z_{2d-1}z_{2d},\\
x_2z_2z_3, x_2z_4z_5, \dots, x_2z_{2d}z_{1}
\end{array} )+\Sq^3(z)$$
where $\Sq^3(z)$ denotes the square-free cube of
$(z_1,\dots,z_{2d})$, i.e. the ideal generated by the square-free monomials of degree
$3$ in the $z$'s. We conjecture that $\reg(J^k)=3k$ for $k<d$ and $\reg(J^d)>3d$.
Note that for $d=2$ one obtains Sturmfels' Example \ref{Sturmfels}. }
\end{Hint}
We have no idea how to construct prime ideals behaving like the ideal of
Theorem \ref{main}. If one wants two (or more) jumps one can try with:
\begin{Hint} {\rm Let $1<a<b$ be integers. Define the ideal
$$I=(y_2z_1^b, y_2z_2^b, xz_1^{b-1}z_2)+
z_1^{b-a} (y_1z_1^a, y_1z_2^a,xz_1^{a-1}z_2)+
z_1z_2(z_1,z_2)^{b-1}$$
of the polynomial ring $K[x,y_1,y_2, z_1,z_2]$.
We expect that $\reg(I)=b+1$ and
$\reg(I^k)-\reg(I^{k-1})>(b+1)$ if $k=a$ or $k=b$.
}\end{Hint}
\begin{center}
{\Large\bf Acknowledgments}
\end{center}
The author wishes to thank the organizers
of the Lisbon Conference on Commutative
Algebra (Lisbon, June 2003) for the kind invitation and for their warm hospitality.
Some parts of this research project were carried out while the author was visiting
MSRI (Berkeley) within the frame of the Special Program 2002/03 on Commutative
Algebra. The results and examples presented in this paper have been inspired
and suggested by computations performed by the computer algebra system CoCoA
\cite{CNR}.
\end{document}
|
\begin{document}
\title{
The Generalized Witt Algebras using Additive Maps }
\begin{abstract}
Kawamoto generalized the Witt algebra using $F[x_1^{\pm 1},\cdots,x_n^{\pm 1}]$
instead of $F[x_1,\cdots,x_n].$
We construct the generalized Witt algebra $W(g_p,n)$ by using
an additive map $g_p$ from a set of integers into
a field of characteristic zero where $1\leq p \leq n.$
We show that the Lie algebra $W(g_p,n)$ is simple if
$g_p$ is injective, and also the Lie algebra
$W(g_p,n)$ has no ad-diagonalizable elements.
\end{abstract}
\newtheorem{lemma}{Lemma}
\newtheorem{prop}{Proposition}
\newtheorem{thm}{Theorem}
\newtheorem{coro}{Corollary}
\newtheorem{definition}{Definition}
\section{Introduction}
Let $F$ be a field of characteristic
zero.
The Witt algebra
is called the general algebra by Rudakov [8].
Kac [2] studied the generalized Witt algebra on
the $F$-algebra of formal power series
$F[[x_1,\cdots ,x_n]]$ for a positive integer $n$.
Nam
[5] constructed the Lie algebra on the $F$-subalgebra
$F[e^{\pm x_1},\cdots ,e^{\pm x_n}, x_1,\cdots ,x_{n+m}]$
of the formal power series ring
$F[[x_1,\cdots ,x_{n+m}]]$
for positive integers $n$ and $m.$
The Witt algebra $W(n)$ has a basis
$$\{x_1^{a_1}\cdots x_n^{a_n}\partial_i |a_1,\cdots ,a_n\in N,
1\leq i\leq n\}$$
with Lie bracket on basis elements
\begin{eqnarray*}
& &[x_1^{a_1}\cdots x_n^{a_n}\partial_i,
x_1^{b_1}\cdots x_n^{b_n}\partial_j]\\
&=&b_ix_1^{a_1+b_1}\cdots x_n^{a_n+b_n}x_i^{-1}\partial_j
-a_jx_1^{a_1+b_1}\cdots x_n^{a_n+b_n}x_j^{-1}\partial_i
\end{eqnarray*}
where $N$ is the set of non-negative integers.
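For instance, when $n=1$ this bracket reduces to the classical Witt relation
$$[x^a\partial,x^b\partial]=bx^{a+b-1}\partial-ax^{a+b-1}\partial
=(b-a)x^{a+b-1}\partial.$$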
Consider the generalized Witt algebra $W(n,m)$ having a basis
$$\{e^{a_1 x_1}\cdots e^{a_n x_n} x_1^{u_1}\cdots x_{n+m}^{u_{n+m}}\partial_k|
a_1,\cdots ,a_n,u_1,\cdots ,u_{n+m}\in Z,1\leq k\leq n+m\}$$
with Lie bracket on basis elements given by
$$[e^{a_1 x_1}\cdots e^{a_n x_n} x_1^{u_1}\cdots x_{n+m}^{u_{n+m}}\partial_i,
e^{b_1 x_1}\cdots e^{b_n x_n} x_1^{t_1}\cdots x_{n+m}^{t_{n+m}}\partial_j]$$
$$=b_ie^{a_1x_1+b_1 x_1}
\cdots e^{a_nx_n+b_n x_n} x_1^{u_1+t_1}\cdots x_{n+m}^{u_{n+m}+t_{n+m}}
\partial_j$$
$$+t_ie^{a_1x_1+b_1 x_1}
\cdots e^{a_nx_n+b_n x_n} x_1^{u_1+t_1}\cdots x_{n+m}^{u_{n+m}+t_{n+m}}
x_i^{-1}\partial_j$$
$$-a_ie^{a_1x_1+b_1 x_1}
\cdots e^{a_nx_n+b_n x_n} x_1^{u_1+t_1}\cdots x_{n+m}^{u_{n+m}+t_{n+m}}
\partial_i$$
$$-u_ie^{a_1x_1+b_1 x_1}
\cdots e^{a_nx_n+b_n x_n} x_1^{u_1+t_1}\cdots x_{n+m}^{u_{n+m}+t_{n+m}}
x_j^{-1}\partial_i$$
where $b_i=0$ if $n+1\leq i \leq n+m,$
and $a_j=0$ if $n+1\leq j \leq n+m$ (see [4, 5, 7]).
In [5, 8], it is noted that
the Witt algebra $W(n)$ on
$F[x_1,\cdots ,x_n]$ is a Lie subalgebra of $W(n,m)$.
Let $g_p$ be an additive map from $Z$ into $F$ where
$1\leq p \leq n$. We define $W(g_p,n)$ to be the Lie algebra with basis
\begin{eqnarray}\label{map1}
& &B:=\{{a_1 \choose i_1} \cdots {a_n \choose i_n}_k|a_1,\cdots ,a_n,i_1,\cdots ,i_n\in Z,
1\leq k \leq n\}
\end{eqnarray}
and a Lie bracket on basis elements given by
\begin{eqnarray}\label{map10}
& &[{a_1 \choose i_1} \cdots {a_n \choose i_n}_k,
{b_1 \choose j_1} \cdots {b_n \choose j_n}_l]\\ \nonumber
&=&g_k(b_k){a_1+b_1 \choose i_1+j_1} \cdots {a_n+b_n \choose i_n+j_n}_l\\ \nonumber
&+&j_k{a_1+b_1 \choose i_1+j_1} \cdots {a_k+b_k \choose i_k+j_k-1}
{a_{k+1} +b_{k+1} \choose i_{k+1}+j_{k+1}} \cdots {a_n+b_n \choose i_n+j_n}_l\\ \nonumber
&-&g_l(a_l){a_1+b_1 \choose i_1+j_1} \cdots {a_n+b_n \choose i_n+j_n}_k\\ \nonumber
&-&i_l{a_1+b_1 \choose i_1+j_1} \cdots
{a_{l}+b_l \choose i_l+j_l-1}
{a_{l+1}+b_{l+1} \choose i_{l+1}+j_{l+1}}
\cdots {a_n+b_n \choose i_n+j_n}_k.
\end{eqnarray}
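For instance, when $n=1$ and $k=l=1$, the bracket (\ref{map10}) collapses to
$$[{a \choose i}_1,{b \choose j}_1]
=(g_1(b)-g_1(a)){a+b \choose i+j}_1+(j-i){a+b \choose i+j-1}_1;$$
in particular, on elements with vanishing lower indices it gives the relation
$[{a\choose 0}_1,{b\choose 0}_1]=(g_1(b)-g_1(a)){a+b\choose 0}_1$.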
It follows from [1, 4, 5, 6] that
the above bracket extends linearly to the given basis $B$.
Also, it is not hard to show
that the above bracket satisfies the Jacobi identity. In section 2,
we will prove the following main theorems. Throughout
the paper, a given map $g_p$ means an
additive and injective map from a set of integers
into a field of characteristic zero.
\noindent
{\bf Theorem 1} The Lie algebra $W(g_p,n)$ is simple.
\noindent
{\bf Theorem 2} For any automorphism $\theta \in Aut (W(g,1))$,
$$\theta ({0 \choose 1}_1 )=\sum_j C_j
({0 \choose j}_1 )$$
where $C_j\in F.$
\noindent
{\bf Theorem 3} Each derivation of $W(g,1)_+$ can be written
as a sum of an inner derivation and a scalar derivation \cite{Nam}.
\section{ Simplicities of $W(g_p,n)$}
The Lie algebra $W(g_p,n)$ has a $Z^n$-gradation as mentioned in
[3]: that is,
\begin{eqnarray}\label{map30}
& &W(g_p,n)=\bigoplus _{(a_1,\cdots ,a_n)\in Z^n} W_{(a_1,\cdots ,a_n)}
\end{eqnarray}
where $W_{(a_1,\cdots ,a_n)}$ is the subspace of $W(g_p,n)$ with basis
$$\{{a_1 \choose i_1}\cdots {a_n \choose i_n}_l|
i_1,\cdots,i_n\in Z,1\leq l \leq n\},$$
and we write $\bar B$ for the basis of $W(g_p,n)$ consisting of all the
elements ${a_1 \choose i_1}\cdots {a_n \choose i_n}_l$ with
$a_1,\cdots,a_n,i_1,\cdots,i_n\in Z$ and $1\leq l \leq n$.
We call $W_{(a_1,\cdots ,a_n)}$ the $(a_1,\cdots ,a_n)$-homogeneous
component of $W(g_p,n)$ and its elements
$(a_1,\cdots ,a_n)$-homogeneous elements. Note that the
$(0,\cdots ,0)$-homogeneous component
is isomorphic to the Witt algebra $W(n)$ [8].
From now on we call the $(0,\cdots,0)$-homogeneous
component the $0$-homogeneous component.
For the simplicity of $W(g_p,n)$,
we require the map $g_p$ to be additive and injective.
Now we introduce a lexicographic ordering of two basis
elements of $W(g_p,n)$ as follows :
for any two elements
${a_1 \choose i_1}\cdots {a_n \choose i_n}_l$ and
${b_1 \choose j_1}\cdots {b_n \choose j_n}_k$, we have
$${a_1 \choose i_1}\cdots {a_n \choose i_n}_l>
{b_1 \choose j_1}\cdots {b_n \choose j_n}_k$$
if $(a_1,\cdots ,a_n,i_1,\cdots ,i_n,l)
>(b_1,\cdots ,b_n,j_1,\cdots ,j_n,k)$ by the natural
lexicographic ordering in $Z^{2n+1}$.
Any element $l\in W(g_p,n)$
can be written as follows,
using the ordering and the gradation:
$$l=\sum _{i_1,\cdots,i_n,p}C(i_1,\cdots ,i_n,p){a_{11} \choose i_1}
\cdots {a_{1n} \choose i_n}_p +\cdots $$
$$+\sum _{j_1,\cdots,j_n,q}C(j_1,\cdots ,j_n,q){a_{t1} \choose j_1}
\cdots {a_{tn} \choose j_n}_q$$
where $1\leq p,\cdots, q\leq n,$ $(a_{11},\cdots ,a_{1n})
>\cdots >(a_{t1},\cdots ,a_{tn})$
and
$$C(i_1,\cdots ,i_n,p),\cdots ,
C(j_1,\cdots ,j_n,q)\in F.$$
Next, define the string number $st(l)=t$ for $l$ (see [5, 6]), and $l_p(l)$
as $max \{i_1,\cdots ,i_n,\cdots ,j_1,\cdots ,j_n\}.$
For any basis element ${a_1 \choose i_1} \cdots {a_n \choose i_n}_l$
in $\bar B$, let us refer to $a_1,\cdots ,a_n$
as upper indices and $i_1,\cdots ,i_n$ as lower indices.
\noindent
{\bf Remark 2.1} If $g_p$ is an inclusion, then $W(g_p,n)=W(n,0)$
is the generalized
Witt algebra which is studied by Nam [5, 6].
\begin{lemma} If $l$ is any non-zero element, then the ideal $<l>$
generated by $l$
contains an element whose lower indices are positive.
\end{lemma}
{\it Proof.} Let $l$ be a nonzero element of $W(g_p,n)$.
Take an element $M={0 \choose j_1} \cdots {0 \choose j_n}_t$
such
that $j_1>> \cdots >>j_n$ and $t$ such that either $a_t\neq 0$ or
$i_t\neq 0$ in $l$, where $a>>b$ means $a$ is
sufficiently larger than $b$. Then $0\neq [M,l]$
is the required element.
\quad $\Box$
\begin{lemma} If an ideal $I$ of $W(g_p,n)$ contains ${0 \choose 0}\cdots {0 \choose 0}_l$
for $1\leq l \leq n,$ then $I=W(g_p,n)$.
\end{lemma}
{\it Proof.} Since $W(n)$ is a simple subalgebra of $W(g_p,n)$
with the basis
\noindent
$\{ {0 \choose i_1}\cdots {0 \choose i_n}_l|i_1,\cdots ,i_n\in Z,
$1\leq l \leq n\},$ the ideal
$< {0 \choose 0}\cdots {0 \choose 0}_t>$
contains $W(n)$, where
$<{0 \choose 0}\cdots {0 \choose 0}_t>$ is the
ideal generated by ${0 \choose 0}\cdots {0 \choose 0}_t$
for a fixed $t\in \{1,\cdots ,n\}$
(see [8]). It follows from Lemma 2 of [5] and Lemma 3 of [6]
that the
basis elements of $W(g_p,n)$ are contained in $I$
by using the injectivity of $g_p$.
\quad $\Box$
\begin{thm} The Lie algebra $W(g_p,n)$ is simple.
\end{thm}
{\it Proof. } It is not difficult to prove
this theorem by induction on $st(l)$ for any element $l$ in
any ideal $I$ of $W(g_p,n)$ and by using
the injectivity of $g_p$.
\quad $\Box$
\begin{coro} The Lie algebra $W(n,0)$ is simple.
\end{coro}
{\it Proof.} If we take an additive
embedding $g_p:Z\to F$, $1\leq p \leq n$,
then we get the required result (see
[5, 6]).
\quad $\Box$
It is an interesting problem to find all the automorphisms
of the subalgebra
$W(g,1)$ of $W(g_p,n)$.
\begin{thm} For any automorphism $\theta \in Aut(W(g,1))$
$$\theta ({0 \choose 1}_1)=\sum _j C_j {0 \choose j }_1$$
where $C_j\in F$.
\end{thm}
{\it Proof.} It is not
difficult to prove this theorem using the gradation (\ref{map30})
and the action of ${0 \choose 0}_1$, as an ad-map, on the
zero homogeneous component $W_0$, whose vector basis is $\{{0 \choose i}_1|i\in Z\}$.
\quad $\Box$
If we consider the Lie subalgebra of $W(g,1)$
such that all the lower indices are zero, then
this subalgebra is a Block algebra $W(1)$
which is also called the centerless Virasoro
algebra \cite{Blo}.
Thus all the automorphisms
of this Lie algebra can be determined by Theorem 3 in [1].
Consider the Lie subalgebra $W(1,1)^+$ of $W(n,m)$ in
\cite{Nam} with basis
$$\{e^{ax}x^iy^j\partial_x,
e^{bx}x^ly^m\partial_y| a,b\in Z, i,j,l,m\in N\}$$
where $N$ is the set of non-negative integers.
{\noindent}
{\bf Conjecture}
For any $\theta \in Aut (W(1,1)^+)$
$\theta (y\partial_y)=(\alpha y +\beta)\partial_y$
for some $0\neq \alpha, \beta\in F.$
The element $l\in W(g_p,n)$ is
an ad-diagonalizable element if $[l,m]=\alpha (m) m$
for any $m\in B$ given in (\ref{map1}) and for some $\alpha (m)\in F$.
\begin{prop} The Lie algebra $W(g,n)$
has no ad-diagonalizable element with respect to
the basis given in (\ref{map1}).
\end{prop}
{\it Proof.} Since $W(g,n)$ is a $Z^n$-graded Lie algebra,
all the ad-diagonalizable elements are in
the $(0,\cdots,0)$-homogeneous component.
$W_{(0,\cdots ,0)}$ is isomorphic to $W(n)$ as a Lie algebra,
where $W(n)$ is the well known Witt algebra [8].
Thus all the ad-semisimple elements of $W(g_p,n)$
are of the form $\sum _{i=1}^n C_i {0 \choose 1 }_i$,
where $C_i\in F$. But
$[\sum _{i=1}^n C_i {0 \choose 1 }_i,
{a \choose 0}_j]\neq \alpha {a \choose 0}_j$
for any $\alpha \in F.$
This proves the proposition.
\quad $\Box$
\noindent
{\bf Remark 2.2} For another proof of
Proposition 2.6, see Corollary 1 of [5].
\section{Derivation of $W(g,1)_+$}
Consider the subalgebra $W(g,1)_+$ of $W(g_p,n)$ with basis
$$B_+:=\{{a\choose i}_1 |a\in Z,i\in N\}$$
where $N$ is the set of non-negative integers.
In this section we determine all the derivations of the
Lie algebra $W(g,1)_+$. Ikeda and Kawamoto found all
the derivations of the Kawamoto algebra $W(G,I)$ \cite{Kaw}
in their paper \cite{Ik}. It is very important to
find all the derivations of a given Lie algebra to
compare with other Lie algebras.
Let $L$ be a Lie algebra over any field $F.$
An $F$-linear map $D$ from $L$ to $L$ is a derivation
if $D([l_1,l_2])=[D(l_1),l_2]+[l_1,D(l_2)]$
for any $l_1,l_2\in L.$
Let $L$ be a Lie algebra over any field $F$. Define
the derivation $D$ of $L$ to be a scalar derivation if
for all basis elements $l$ of $L$,
$D(l)=f_l l$ for some scalar $f_l\in F$ \cite{Blo}, \cite{Nam1}.
We need the following lemma.
\begin{lemma}
Let $D$ be a derivation of $W(g,1)_+$. If $D(\partial)=0,$
then $D=f ad_{\partial} +S$ where
$f\in F$ and $S$ is a scalar derivation.
\end{lemma}
{\it Proof.}
It is not difficult to prove this lemma
using the gradation and the ordering of the Lie algebra
$W(g,1)_+$ \cite{Nam}.
\quad $\Box$
\begin{thm}
Each derivation of $W(g,1)_+$ can be written
as a sum of an inner derivation and a scalar derivation.
\end{thm}
{\it Proof.}
Let $D$ be any derivation of $W(g,1)_+$. Then
$D(\partial)=f\partial$ for some
$f\in F[e^{\pm x},x]$. Since
$\partial : F[e^{\pm x},x] \to
F[e^{\pm x},x] $ is onto, there is a function
$g\in
F[e^{\pm x},x] $
such that $\partial (g)=f.$ Then
$ad_{g\partial} (\partial)=[\partial,g\partial]=f\partial=D(\partial)$.
We have $(D-ad_{g\partial})(\partial)=0.$
By Lemma 3 we have $D=ad_{g\partial}+c ad_{\partial} +S$
\cite{Nam}. Therefore, we have proved the theorem.
\quad $\Box$
\noindent
{\bf Remark 3.1}
Let $Z^+$ and $F^+$ be the additive group of $Z$ and
$F$ respectively. For any $g\in Hom(Z^+,F^+),$ we have
$g(1)=m$ for some fixed $m\in F,$ thus the Witt algebra
$W(n)$ cannot be changed by using the idea of the additive
map in this paper \cite{Kaw1}.
{\it \small Department of Mathematics, UW-Madison, WI 53706}
{\it \small e-mail :[email protected]}
{\it \small Department of Mathematics, Hanyang University-Ansan, Ansan, Korea}
\end{document}
|
\begin{document}
\title{On locally quasiconformal Teichm\"uller spaces}
\author{Alastair Fletcher}
\address{ AF: Department of Mathematical Sciences, Northern Illinois University,
Dekalb, IL 60115, USA. E-mail address: [email protected]}
\author{ Zhou Zemin}
\address{ ZZ: School of Mathematics, Renmin
University of China, Beijing, 100872, People's Republic of China.
E-mail address: [email protected] }
\subjclass[2000]{Primary 30C62, Secondary 30C75}
\keywords{ Quasiconformal mapping, Locally quasiconformal mapping,
Generalized Teichm\"uller space,
Generalized maximum dilatation.}
\thanks{AF is supported by a grant from the Simons Foundation (\#352034, Alastair Fletcher). ZZ is partially
supported by the National Natural Science Foundation of China (Grant
11571362).}
\date{}
\maketitle
\newtheorem{Theorem}{Theorem}
\begin{abstract}
We define a universal Teichm\"uller space for locally quasiconformal mappings whose dilatation grows not faster than a certain rate. Paralleling the classical Teichm\"uller theory, we prove results of existence and uniqueness for extremal mappings in the generalized Teichm\"uller class. Further, we analyze the circle maps that arise.
\end{abstract}
\section{Introduction}
Teichm\"uller theory is a major area of research in modern mathematics, bringing together analysis, geometry, topology and dynamics. The Teichm\"uller space of a topological surface parameterizes the set of complex structures that can be equipped on the surface. For example, not all tori are conformally equivalent and the space of complex structures of a genus one surface can be parameterized by the upper half-plane. A fundamental object in Teichm\"uller theory is the universal Teichm\"uller space of the disk, denoted $T(\Delta)$. Via the Uniformization Theorem, every Teichm\"uller space of a hyperbolic surface is embedded in $T(\Delta)$, and so it is an important object to understand. We refer to \cite{10,11,18,19} for introductions to Teichm\"uller theory.
There are various ways of modelling points of universal Teichm\"uller space. One can consider equivalence classes of quasiconformal maps $f:\Delta\rightarrow \Delta$ under the Teichm\"uller equivalence relation or, equivalently via solving the Beltrami differential equation $f_{\overline{z}} = \mu f_z$, equivalence classes of Beltrami differentials. We recall that Beltrami differentials are elements $\mu \in L^{\infty}(\Delta)$ with $||\mu ||_{\infty} <1$. Since every quasiconformal map $f:\Delta\rightarrow \Delta$ extends to a quasisymmetric homeomorphism $\widetilde{f} :\partial \Delta \rightarrow \partial \Delta$, points of Teichm\"uller space can also be modelled as quasisymmetric maps of the circle which fix the three points $1,-1,i$.
The defining property of a quasiconformal map $f:\Delta\rightarrow \Delta$ is that it has uniformly bounded distortion. Every quasiconformal map has a complex dilatation $\mu_f = f_{\overline{z}} / f_z$ which is defined almost everywhere. The quasiconformality condition implies that there exists $0\leq k<1$ such that $||\mu_f||_{\infty} \leq k$ almost everywhere. Solving the Beltrami equation provides the converse to this statement. Recently, there has been interest in the consequences of allowing $||\mu||_{\infty}=1$ in the Beltrami equation and investigating properties of the solutions that occur.
In the literature, various classes of such mappings have been studied in dimension two, for example David mappings \cite{9,Zakeri}, $\mu$-homeomorphisms \cite{4,5,6,7,8,13,14,17,24} and locally quasiconformal mappings \cite{21,25}. These all sit in the larger framework of mappings of finite distortion in Euclidean spaces, see for example \cite{12,HK,16}. Such mappings are far from novelties: Petersen and Zakeri used David mappings in \cite{PZ} to study Siegel disks in complex dynamics. Moreover, in the theory of length spectrum Teichm\"uller spaces, it is known that in certain circumstances the length spectrum space differs from the quasiconformal Teichm\"uller space, see for example the work of Shiga \cite{Shiga}. In particular, in the latter paper a certain map is constructed which has a uniform bound on the distortion of hyperbolic lengths of essential curves, but is (in our language) locally quasiconformal. It is therefore conceivable that locally quasiconformal mappings could play a role in the study of length spectrum Teichm\"uller spaces.
In the current paper, we will work in the setting of locally quasiconformal mappings of the disk and initiate the study of a universal Teichm\"uller space of such mappings. Our aim is to set up a workable definition, solve an extremality problem in this setting and study the boundary mappings that arise.
The paper is organized as follows: we recall some preliminary material on quasiconformal and locally quasiconformal mappings in section 2. In section 3, we define locally quasiconformal Teichm\"uller spaces and state our main results on them. In section 4, we provide proofs of our results. Finally in section 5, we give some concluding remarks indicating directions of future research.
{\bf Acknowledgements:} The second named author would like to thank Professor Chen Jixiu for his many useful suggestions and help.
\section{Preliminaries}
We recall some basic facts about quasiconformal mappings which can be found in many texts, for example \cite{1,10,11,16,18,19}.
Let $U\subset \mathbb{C}$ be a domain. The {\it distortion} of a homeomorphism $f:U\rightarrow V\subset \mathbb{C}$ is defined by
\[ D_f(z) = \frac{|f_z(z)| + |f_{\overline{z}}(z)|}{|f_z(z)| - |f_{\overline{z}}(z)|}.\]
A homeomorphism $f:U\rightarrow V$ is called $K$-quasiconformal if $f$ is absolutely continuous on almost every horizontal and vertical line in $U$ and moreover $\sup_{z\in U} D_f(z) \leq K$. The smallest such $K$ is called the {\it maximal dilatation} of $f$ and denoted $K_f$. If we do not need to specify $K$, then we just call the map {\it quasiconformal}. The {\it complex dilatation} of $f$ is $\mu_f = f_{\overline{z}}/f_z$ and satisfies the equation
\[ D_f(z) = \frac{1+|\mu_f(z)|}{1-|\mu_f(z)|}.\]
If $f$ is quasiconformal, then there exists $0\leq k<1$ such that $||\mu_f ||_{\infty} \leq k$ and
\[ K_f = \frac{1+ ||\mu_f||_{\infty}}{1-||\mu_f ||_{\infty} }.\]
Every quasiconformal map $f:\Delta\rightarrow \Delta$ extends to a homeomorphism of $\overline{\Delta}$ and, moreover, the boundary map $\widetilde{f} :\partial \Delta\rightarrow \partial \Delta$ is quasisymmetric, that is, there exists $M\geq 1$ so that
\[ \frac{1}{M} \leq \frac{ | \widetilde{f}(e^{i(\theta +t)}) - \widetilde{f}(e^{i\theta}) |}{ | \widetilde{f}(e^{i\theta}) - \widetilde{f}(e^{i(\theta - t)}) |} \leq M \]
holds for all $\theta \in [0,2\pi)$ and $t>0$.
We now define the class of mappings that will form the basis of our study.
\begin{definition}
\label{def:lqc}
A homeomorphism $f:\Delta\rightarrow \Delta$ is called {\it locally quasiconformal} if and only if for every compact set $E\subset \Delta$, $f|_{E}$ is quasiconformal.
\end{definition}
This definition means we allow the distortion of our map to blow up as we approach the boundary. Moreover, the complex dilatation of a locally quasiconformal mapping is defined almost everywhere in the disk and we allow $||\mu_f||_{\infty}=1$. However, we have to restrict the types of locally quasiconformal mappings we study.
\begin{example}
\begin{enumerate}[(i)]
\item The map $f(z) = z(1-|z|^2)^{-1}$ from \cite{21} is a locally quasiconformal map from $\Delta$ onto $\mathbb{C}$. In this paper, we want to consider only locally quasiconformal self-mappings of $\Delta$.
\item The spiral map $f(re^{i\theta}) = r\exp(i(\theta + \ln \frac{1}{1-r} ) )$ is a locally quasiconformal map $f:\Delta\rightarrow \Delta$ but it does not extend continuously to the boundary. In this paper, we want to consider locally quasiconformal mappings which extend homeomorphically to the boundary.
\end{enumerate}
\end{example}
\begin{definition}
\label{def:rho}
We say that a continuous increasing function $\rho:[0,1) \rightarrow [1,\infty)$ is {\it allowable} if the following conditions hold:
\begin{enumerate}[(i)]
\item $\rho(0)=1$,
\item $\int_0^1 \rho(r) \, dr < \infty$,
\item for some constant $R>0$ and every $\xi \in \partial \Delta$,
\[ \lim_{t\rightarrow 0^+} \int_t^R \frac{dr}{r\rho^*(r)} = +\infty,\]
where $\rho^*(r)$ is defined by
\begin{equation}
\label{eq:rhostar}
\rho^*(r) = \int_{S(\xi ,r) \cap \Delta} \rho(|z|) d\theta,
\end{equation}
with $z=\xi+r e^{i\theta}$,
and $S(\xi,r)$ is the circle centred at $\xi$ of radius $r$.
\end{enumerate}
\end{definition}
Each allowable $\rho$ yields a family of locally quasiconformal mappings.
\begin{definition}
Suppose $\rho$ is allowable. The family $QC_{\rho}(\Delta)$ consists of locally quasiconformal mappings $f:\Delta\rightarrow \Delta$ such that there exists $C>0$ with
\[ D_f(z) \leq C\rho(|z|),\]
for all $z\in \Delta$.
\end{definition}
Condition (i) in Definition \ref{def:rho} is a convenient normalization condition which implies that $D_{f}(z) / \rho (|z|) \leq 1$ for every conformal map $f:\Delta\rightarrow \Delta$ with equality at the origin. Condition (ii) implies by \cite[Theorem 1]{21} that if $\mu \in L^{\infty}(\Delta)$ with $|\mu(z)|<1$ then there exists a locally quasiconformal map $f:\Delta\rightarrow \Delta$ with complex dilatation $\mu$. Finally, condition (iii) implies by, for example, \cite[Theorem 1]{6} that $f$ extends continuously to $\partial \Delta$.
Observe that if $f$ is quasiconformal, then $f\in QC_{\rho}(\Delta)$ for every allowable $\rho$. Typically for a locally quasiconformal mapping, the maximal dilatation is not finite, but there is a maximal dilatation with respect to $\rho$.
\begin{definition}
Let $\rho$ be allowable and let $f\in QC_{\rho}(\Delta)$. Then the {\it maximal dilatation with respect to $\rho$} is defined by
\[ K^{\rho}_f:= \sup_{z\in \Delta} \frac{ D_f(z)}{\rho(|z|)}.\]
\end{definition}
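Note that, directly from the definitions and the fact that $\rho \geq 1$, every quasiconformal map $f$ satisfies
\[ K^{\rho}_f = \sup_{z\in \Delta} \frac{D_f(z)}{\rho(|z|)} \leq \sup_{z\in \Delta} D_f(z) = K_f,\]
so $K^{\rho}_f$ is finite for every allowable $\rho$ whenever $f$ is quasiconformal.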
As remarked above, condition (i) in Definition \ref{def:rho} implies that $K^{\rho}_{f}=1$ for every conformal map $f$ and every allowable $\rho$.
\begin{example}
By \cite[Theorem 4]{21}, if $\rho(r) = \log \frac{1}{1-r}$, then any locally quasiconformal map $f:\Delta \rightarrow \Delta$ with $D_f(z) \leq C \rho (|z|)$ extends homeomorphically to $\overline{\Delta}$. It is a short computation to show that, if $\sigma$ denotes Lebesgue measure, then any such $f$ satisfies
\[ \sigma \{ z\in \Delta : D_f(z) >K \} < \pi e^{-2K/C}.\]
In particular, this means that any such $f$ is a David mapping. We don't know how $QC_{\rho}(\Delta)$ and David mappings are related in general.
\end{example}
Since $\rho(0)=1$, we have $K^{\rho}_f \geq 1$, and so $K^{\rho}_f$ gives a quantity that describes how far $f$ is from a conformal map, except that here $K^{\rho}_f = 1$ does not imply that $f$ is conformal. To see this, we only need $D_f(z)$ to grow more slowly than $\rho(|z|)$.
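To illustrate this point (a sketch of our own, not used in what follows): assuming $\sqrt{\rho}$ also satisfies condition (ii) of Definition \ref{def:rho}, the integration result of \cite[Theorem 1]{21} quoted above yields a locally quasiconformal $f:\Delta\rightarrow\Delta$ with
\[ D_f(z) = \sqrt{\rho(|z|)}, \qquad \text{whence} \qquad K^{\rho}_f = \sup_{z\in \Delta} \rho(|z|)^{-1/2} = 1,\]
using that $\rho(r) \geq \rho(0) = 1$ for an increasing allowable $\rho$; such an $f$ is not conformal as soon as $\rho$ is not identically $1$.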
\section{Locally quasiconformal Teichm\"uller spaces}
In this section we define locally quasiconformal Teichm\"uller spaces with respect to an allowable $\rho$ and state our results.
\begin{definition}
\label{def:lqct}
Let $\rho$ be allowable. Then the set $\mathcal{L}_{\rho}(\Delta)$ consists of locally quasiconformal mappings $f:\Delta\rightarrow \Delta$ such that $f^{-1} \in QC_{\rho}(\Delta)$.
\end{definition}
We need to control the growth of $f^{-1}$ for our results. Note that such a condition on $f$ does not imply the same condition holds for $f^{-1}$, in contrast to the fact that the inverse of a $K$-quasiconformal map is also a $K$-quasiconformal map.
\begin{example}
\label{ex:radial}
Consider radial maps given in polar coordinates by $f_a(re^{i\theta}) = [1-(1-r)^a]e^{i\theta}$ for $a>0$.
A computation shows that
\[ D_f(re^{i\theta}) = \frac{1-(1-r)^a}{ar(1-r)^{a-1}},\]
and the right hand side is the appropriate $\rho$ to consider for such a map. However, condition (ii) in Definition \ref{def:rho} is only satisfied when $0<a<2$. Since $f_a^{-1} = f_{1/a}$ and $f_a\circ f_b = f_{ab}$, we see that condition (ii) is not closed under inverses and compositions.
\end{example}
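For completeness, here is a sketch of the computation in the example above (our sketch; the displayed formula corresponds to the branch where $h(r)/r \geq h'(r)$, and on the other branch one takes the reciprocal). For a radial map $f(re^{i\theta}) = h(r)e^{i\theta}$ with $h$ increasing, one finds $|f_z| = \tfrac{1}{2}\left( h'(r) + h(r)/r \right)$ and $|f_{\overline{z}}| = \tfrac{1}{2}\left| h'(r) - h(r)/r \right|$, so that
\[ D_f(re^{i\theta}) = \frac{\max\{ h'(r),\, h(r)/r \}}{\min\{ h'(r),\, h(r)/r \}}.\]
With $h(r) = 1-(1-r)^a$ we have $h'(r) = a(1-r)^{a-1}$, which yields the displayed formula.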
\begin{lemma}
\label{lem:1}
If $f\in \mathcal{L}_{\rho}(\Delta)$, then $f$ can be extended homeomorphically to a map $\overline{\Delta}\rightarrow \overline{\Delta}$.
\end{lemma}
\begin{proof}
Since $\rho$ is allowable and $f\in\mathcal{L}_{\rho}(\Delta)$, then $D_{f^{-1}}(z) \leq C \rho(|z|)$, where $\rho$ satisfies the conditions in Definition \ref{def:rho}. It immediately follows from \cite[Theorem 1.1]{25} that $\mu_{f^{-1}}$
can be integrated to give a locally quasiconformal map $g$ which extends to a homeomorphism of $\overlineverline{\Delta}$.
Now, for $z\in \Delta$, the complex dilatation of $g\circ f$ satisfies
\[ \mu_{g\circ f}(f^{-1}(z)) = \omega (z) \cdot \frac{ \mu_g(z) - \mu_{f^{-1}}(z) }{1-\overline{\mu_{f^{-1}}(z) }\mu_g(z) } \equiv 0,\]
since $\mu_g = \mu_{f^{-1}}$, and where $|\omega(z)|=1$ for all $z\in \Delta$. Hence $g\circ f$ is a conformal map from $\Delta$ to itself which extends to the boundary. Hence $f$, and also $f^{-1}$, extend homeomorphically to $\overline{ \Delta}$.
\end{proof}
Recall that $A(\Delta)$ is the Bergman space of integrable holomorphic functions on $\Delta$, that is,
\[ A(\Delta) = \{ \varphi : \int_{\Delta} |\varphi| < \infty \} .\]
Complex dilatations $\mu(z) = k\overline{\varphi}/|\varphi|$ for $0<k<1$ and $\varphi \in A(\Delta)$ are said to be of Teichm\"uller-type
and play an important role in extremal problems in Teichm\"uller theory. We show that an analogue exists for locally quasiconformal mappings.
\begin{lemma}
\label{lem:2}
Let $\varphi_0 \in A(\Delta)$ be not identically zero,
$K_0\geq 1$ be a constant and let $\rho$ be allowable. Then there exists a locally quasiconformal mapping $f_0\in \mathcal{L}_{\rho}(\Delta)$ with Teichm\"uller-type complex dilatation
\begin{equation}
\label{eq:lem2}
\mu_{f_0}(z) = \frac{\rho(|f_0(z)|)K_0-1}{\rho(|f_0(z)|)K_0+1}\cdot \frac{\overline{\varphi_0 (z)}}{|\varphi_0 (z)|}.
\end{equation}
\end{lemma}
We prove this lemma in the next section. Since elements of $\mathcal{L}_{\rho}(\Delta)$ extend homeomorphically to the boundary, we may always assume that we have post-composed by a M\"obius map so that elements of $\mathcal{L}_{\rho}(\Delta)$ fix $1,-1$ and $i$.
\begin{definition}
\label{def:teich}
Let $\rho$ be allowable. Then $f,g\in \mathcal{L}_{\rho}(\Delta)$ are Teichm\"uller related with respect to $\rho$, denoted by $f\sim g$, if and only if the boundary extensions of $f$ and $g$, normalized to fix $1,-1,i$, agree. We then define the {\it generalized Teichm\"uller space with respect to $\rho$} by
\[ T_{\rho}(\Delta) = \mathcal{L}_{\rho}(\Delta) / \sim.\]
\end{definition}
Elements of $T_{\rho}(\Delta)$ are Teichm\"uller equivalence classes denoted by $[f]_{\rho}$, or simply $[f]$ if the context is clear.
Given $[f_0] \in T_{\rho}(\Delta)$, every representative of $[f_0]$ is a locally quasiconformal map $f$ whose boundary values agree with $f_0$ and so that $K^{\rho}_{f^{-1}} < \infty$.
\begin{definition}
\label{def:extremal}
Let $[f_0] \in T_{\rho}(\Delta)$.
\begin{enumerate}[(i)]
\item We say that $f\in [f_0]$ is {\it extremal} if $K^{\rho}_{f^{-1}} \leq K^{\rho}_{g^{-1}}$ for all $g\in [f_0]$.
\item We say that $f\in [f_0]$ is {\it uniquely extremal} if $K^{\rho}_{f^{-1}} < K^{\rho}_{g^{-1}}$ for all $g\in [f_0] \setminus \{ f \}$.
\end{enumerate}
\end{definition}
Our first result on extremal maps is that extremal representatives always exist.
\begin{theorem}
\label{thm:extremal}
Let $\rho$ be allowable and let $[f_0] \in T_{\rho}(\Delta)$. Then there exists an extremal representative $f\in [f_0]$.
\end{theorem}
We show next that uniquely extremal representatives exist.
\begin{theorem}
\label{thm:ue}
Let $\rho$ be allowable, $K_0>1$, $\varphi_0 \in A(\Delta)$ and suppose $f_0 \in \mathcal{L}_{\rho}(\Delta)$ has Teichm\"uller-type complex dilatation
\[ \mu_{f_0}(z) = \frac{ \rho(|f_0(z)|)K_0 - 1}{\rho(|f_0(z)|)K_0+1} \cdot \frac{ \overline{\varphi_0(z)}}{|\varphi_0(z)|}.\]
Then $f_0$ is uniquely extremal in $[f_0]$.
\end{theorem}
We next turn to the boundary mapping induced by an element $[f] \in T_{\rho}(\Delta)$. It is well-known that every quasiconformal self-map of $\Delta$ extends to a quasisymmetric map of the unit circle and, conversely, every quasisymmetric map of the unit circle extends to a quasiconformal map of $\Delta$. For $h:\partial \Delta \rightarrow \partial \Delta$, we define the quasisymmetric function
\[ \lambda_h(\xi, t) = \frac{ | h(\xi e^{it}) - h(\xi) |}{ | h(\xi) - h(\xi e^{-it}) |} .\]
This is the circle version of the standard quasisymmetric function considered by many authors, for example \cite{6,Zakeri}.
\begin{theorem}
\label{thm:qs}
Let $\rho$ be allowable and $[f]\in T_{\rho}(\Delta)$ with associated circle map $h$. Then there exists a function $\lambda$ depending only on $\rho$ such that
\[ \frac{1}{\lambda(t)} \leq \lambda_{h^{-1}}(\theta, t) \leq \lambda(t)\]
for all $\theta \in [0,2\pi)$.
\end{theorem}
In proving this result we will give an explicit formula for $\lambda(t)$. Note that we may have $\lambda(t) \rightarrow \infty$ as $t\rightarrow 0$.
Theorem \ref{thm:qs} is a circle version of a result proved for the extended real line in \cite{6}. However, the extended real line has a special point at infinity, whereas the circle has no special point. In particular, it is not a trivial task to obtain our result from \cite{6}.
\section{Proofs of results}
\subsection{Generalized Teichm\"uller-type dilatations}
Here we prove Lemma \ref{lem:2}. First, we recall some results on non-linear elliptic systems from \cite[Chapter 8]{3}.
Assume that $H:\mathbb{C} \times\mathbb{C} \times\mathbb{C} \rightarrow \mathbb{C} $ satisfies the following conditions:
\begin{enumerate}[(i)]
\item the homogeneity condition that $ f_{\overline{z}} = 0$ whenever $f_z = 0$, or equivalently,
\[ H(z,w,0)\equiv 0,\,\, \text{for almost every}\,\, (z,w)\in \mathbb{C} \times \mathbb{C} ; \]
\item the uniform ellipticity condition that for almost every $z,w \in \mathbb{C} $ and all $\zeta,\xi \in \mathbb{C} ,$
\[ |H(z,w,\zeta)-H(z,w,\xi)|\leq k|\zeta -\xi|,\]
for some $0\leq k<1$;
\item $H$ is Lusin measurable (see \cite[p.238]{3} for further details).
\end{enumerate}
A solution $f\in W^{1,2}_{loc}(\mathbb{C} )$ to
\begin{equation}
\label{eq:elliptic}
\frac{\partial f}{ \partial \overline{ z}}= H(z,\,f,\,\frac{\partial f}{\partial z}),
\end{equation}
for $z\in \mathbb{C} $ and normalized by the condition
\[ f(z)=z+a_1z^{-1}+a_2z^{-2}+\cdots\]
outside a compact set will be called a {\it principal solution}. A homeomorphic solution $f\in W^{1,2}_{loc}(\mathbb{C} )$ to \eqref{eq:elliptic} is called {\it normalized} if $f(0)=0$ and $f(1)=1$. Naturally any such solution fixes the point at infinity too.
\begin{lemma}[Theorem 8.2.1, \cite{3}]
\label{lem:h}
Under the hypotheses above, equation \eqref{eq:elliptic} admits a normalized solution. If, in addition, $H(z,w,\zeta)$ is compactly supported in the $z$-variable, then the equation admits a principal solution.
\end{lemma}
We now prove Lemma \ref{lem:2}.
\begin{proof}[Proof of Lemma \ref{lem:2}]
Recalling \eqref{eq:lem2}, the equation we want to solve is
\[ \frac{\partial f_0}{\partial \overline{z}}=\frac{\rho(|f_0(z)|)K_0-1}{\rho(|f_0(z)|)K_0+1}\cdot \frac{\overline{\varphi_0}}{|\varphi_0|}\cdot \frac{\partial f_0}{\partial z}.\]
To that end, let
\[ H(z,w,\zeta)=H(z,f_0,\frac{\partial f_0}{\partial z})=\frac{\rho(|f_0|)K_0-1}{\rho(|f_0|)K_0+1}\cdot \frac{\overline{\varphi_0(z)}}{|\varphi_0(z)|}\cdot\frac{\partial f_0}{\partial z}.\]
For $n= 2,3,\ldots$, set $\Delta_n = \{z:\,|z|<1-\frac{1}{n}\}$ and $D_n=\{z:\,1-\frac{1}{n}< |z|<1-\frac{1}{n^2}\}$.
By interpolating in $\overlineverline{D_n}$ with a continuous function $F_n$ satisfying $|F_n(z,w,\zeta)| \leq |H(z,w,\zeta)|$, there exists a continuous function
\[ H_n(z,w,\zeta):=\begin{cases}H(z,w,\zeta)\,\,\quad&\text{when}\,\,z\in \Delta_n \\ F_n(z,w,\zeta)\,\,\quad&\text{when}\,\,z\in \overline{ D_n} \\
0&\text{when}\,\,z\in
\mathbb{C} -\overline{\Delta_n}-\overline{D_n}\end{cases}.\]
Applying Lemma \ref{lem:h}, we see that the equation
\[ \frac{\partial f}{\partial \overline{z}}= H_n(z,w,\frac{\partial f}{\partial z})\]
admits a principal and normalized solution $f_n:\mathbb{C} \rightarrow \mathbb{C} $.
For every fixed $j$, the family $\{f_n|_{\Delta_j} \}$ consists of quasiconformal mappings with a uniform bound on the distortion. Since $f_n(0) = 0$ and $f_n(1)=1$ for all $n$, the family is uniformly bounded in $\Delta_j$. It follows from the quasiconformal version of Montel's Theorem (see \cite{Miniowitz}) that the family $\{f_n|_{\Delta_j} \}$ is normal. By a standard diagonal argument, we can find a subsequence $(f_{n_p})_{p=1}^{\infty}$ which converges locally uniformly in $\Delta$. Suppose $f_0$ is the limit function. The image $f_0(\Delta)$ may not be $\Delta$, but it must be a simply connected proper subset of $\mathbb{C} $. By post-composing by a suitable conformal map, via the Riemann Mapping Theorem, we can assume that $f_0:\Delta\rightarrow \Delta$ and $f_0$ still fixes $0$ and $1$. Then $f_0$ is either a locally quasiconformal mapping (since it is quasiconformal on each $\Delta_j$) with complex dilatation \eqref{eq:lem2} or a constant. However, since $f_0(0) \neq f_0(1)$, $f_0$ cannot be constant.
Finally, we need to show that $f_0 \in \mathcal{L}_{\rho}(\Delta)$. Set $g=f_0^{-1}$. Then, by the formula for the complex dilatation of an inverse (see \cite[p.6]{10}), for $z\in \Delta$ we have
\[ |\mu_g(z)| = |\mu_{f_0}(f_0^{-1}(z))| = \left | \frac{ \rho(|z|){K_0}-1}{\rho(|z|)K_0+1} \cdot \frac{ \overline{\varphi_0 (f_0^{-1}(z))}}{|\varphi_0(f_0^{-1}(z)) |} \right | = \frac{\rho(|z|)K_0-1}{\rho(|z|)K_0+1}.\]
We therefore obtain
\[ \frac{1+|\mu_g(z)|}{1-|\mu_g(z)|} \leq K_0\rho(|z|)\]
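(in fact equality holds here, since $1+|\mu_g(z)| = \frac{2\rho(|z|)K_0}{\rho(|z|)K_0+1}$ and $1-|\mu_g(z)| = \frac{2}{\rho(|z|)K_0+1}$, so that the left-hand side equals $\rho(|z|)K_0$ exactly)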
and conclude that $g\in QC_{\rho}(\Delta)$ and hence $f_0 \in \mathcal{L}_{\rho}(\Delta)$. The proof is complete.
\end{proof}
\subsection{Extremal Mappings}
We will show that extremal mappings always exist, but first we need to prove a normal family result which generalizes \cite[Theorem 3.1]{7}.
\begin{lemma}
\label{lem:normal}
Let $\rho$ be allowable, suppose $\mathcal{F} \subset QC_{\rho}(\Delta)$ and there exists a constant $C>0$ so that
\[ D_f(z) \leq C\rho(|z|)\]
for all $f\in \mathcal{F}$. Then $\mathcal{F}$ is a normal family and relatively compact in $QC_{\rho}(\Delta)$ viewed as a subset of continuous functions from the disk to itself.
\end{lemma}
\begin{proof}
Let $r<1$. Then on $\Delta_r =\{z:|z|<r \}$, every $f\in \mathcal{F}$ is $C\rho(r)$-quasiconformal with image contained in $\Delta$. By Montel's Theorem for quasiconformal mappings (see \cite{Miniowitz}), if $(f_n)_{n=1}^{\infty}$ is any sequence in $\mathcal{F}$, then $(f_n|_{\Delta_r})_{n=1}^{\infty}$ contains a subsequence $(f_{n_k}|_{\Delta_r})_{k=1}^{\infty}$ which converges to either a $C\rho(r)$-quasiconformal map or a constant.
By a standard diagonal sequence argument, we find a subsequence $(f_{n_p})_{p=1}^{\infty}$ with $f_{n_p}$ converging to $f_0$ locally uniformly on $\Delta$. Then $f_0$ is either a constant or a locally quasiconformal map with $D_{f_0}(z) \leq C\rho(|z|)$. Hence $f_0 \in QC_{\rho}(\Delta)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:extremal}]
Let $[f_0]\in T_{\rho}(\Delta)$. Then for each $g\in [f_0]$ we have $1\leq K^{\rho}_{g^{-1}} <\infty$. Let
\[ K = \inf_{g\in [f_0]} K^{\rho}_{g^{-1}},\]
and find $f_n \in [f_0]$ with $K^{\rho}_{f_n^{-1}} \rightarrow K$. Without loss of generality, we may assume that $K^{\rho}_{f_n^{-1}}$ is decreasing. Set $C = K^{\rho}_{f_1^{-1}}$. Then for all $n\in \mathbb{N}$, we have
\[ D_{f_n^{-1}}(z) \leq C \rho(|z|).\]
By Lemma \ref{lem:normal}, the family $\{ f_n^{-1} : n\in \mathbb{N} \}$ is normal and hence there exists a subsequence $(f_{n_k}^{-1})_{k=1}^{\infty}$ which converges locally uniformly on $\Delta$ to a continuous map $h$. For each $k$, $f_{n_k}^{-1}$ agrees with $f_0^{-1}$ on $\partial \Delta$ and hence $h$ cannot be a constant. We conclude that $h\in QC_{\rho}(\Delta)$ and $K^{\rho}_h = K$. Setting $f=h^{-1}$ we see that $f$ is extremal in $[f_0]$.
\end{proof}
\subsection{Uniquely Extremal Mappings}
\begin{proof}[Proof of Theorem \ref{thm:ue}]
By Lemma \ref{lem:2}, there exists a locally quasiconformal map $f_0$ with complex dilatation of Teichm\"uller type given by \eqref{eq:lem2}. Consequently, the Teichm\"uller class $[f_0]$ is well-defined and at least contains the representative $f_0$.
Let $f\in [f_0]$ and set $g=f^{-1}\circ f_0$. As we observed in Example \ref{ex:radial}, elements of $QC_{\rho}(\Delta)$ are not necessarily preserved by taking inverses or compositions. However, $g$ is locally quasiconformal in $\Delta$, extends to a homeomorphism from $\overline{\Delta}$ to itself, has generalized partial derivatives on $\Delta$ and is the identity on $\partial \Delta$. We can therefore apply the generalized version of the Reich-Strebel Main Inequality proved by Markovi\'c and Mateljevi\'c \cite[Theorem 1]{22} to see that for any $\varphi \in A(\Delta)$, we have
\begin{equation}
\label{eq:ue1}
\int_{\Delta} |\varphi| \leq \int_{\Delta} |\varphi| \frac{ |1+ \mu_g \varphi/|\varphi| |^2}{1-|\mu_g|^2}.
\end{equation}
For convenience, we write $\mu_1(z) = \mu_{f^{-1}}(f_0(z))$, $\mu = \mu_{f_0}$ and $\tau = \overline{(f_0)_z} / (f_0)_z$.
Since
\[ \mu_g=\frac{\mu+\mu_1\tau}{1+\overline{\mu}\mu_1\tau}\]
we obtain
\begin{equation}
\label{align:ue}
\frac{|1+\mu_g\varphi/|\varphi||^2}{1-|\mu_g|^2} \leq \frac{|1+\mu\varphi/|\varphi||^2}{(1-|\mu|^2)}
\frac{|1+\mu_1\tau\frac{\varphi}{|\varphi|}(1+\overline{\mu}\frac{\overline{\varphi}}{|\varphi|})/(1+\mu\frac{\varphi}{|\varphi|})|^2}{(1-|\mu_1|^2)}.
\end{equation}
Consequently
\begin{equation}
\label{eq:ue2}
\int_{\Delta}|\varphi| \leq \int_{\Delta} |\varphi| \frac{|1+\mu\varphi/|\varphi||^2}{(1-|\mu|^2)}
\frac{|1+\mu_1\tau\frac{\varphi}{|\varphi|}(1+\overline{\mu}\frac{\overline{\varphi}}{|\varphi|})/(1+\mu\frac{\varphi}{|\varphi|})|^2}{(1-|\mu_1|^2)}.
\end{equation}
Setting $\varphi = -\varphi_0$ in \eqref{eq:ue2} and using \eqref{eq:lem2}, we obtain
\begin{align*}
\int_{\Delta} |\varphi_0| & \leq \int_{\Delta} |\varphi_0| \frac{1}{K_0\rho( |f_0(z) |)} \frac{|1+\mu_1\tau|^2}{1-|\mu_1|^2} \\
&\leq \frac{1}{K_0} \int_{\Delta} |\varphi_0| \frac{1}{\rho( |f_0(z)| ) } \frac{1+|\mu_1|}{1-|\mu_1|} \\
&\leq \frac{1}{K_0} \int_{\Delta} |\varphi_0| \left\| \frac{ 1+|\mu_1|}{1-|\mu_1|} \frac{1}{\rho(|f_0(z)|)}\right\|_{\infty} \\
&= \frac{K^{\rho}_{f^{-1}} }{K_0} \int_{\Delta} |\varphi_0|.
\end{align*}
We conclude that $K_0 \leq K^{\rho}_{f^{-1}}$. Since $K^{\rho}_{f_0^{-1}} = K_0$, we see that $f_0$ is extremal.
To show that $f_0$ is uniquely extremal, suppose $f\in [f_0]$ is extremal and keep the same notation as above. Therefore $K^{\rho}_{f^{-1}} = K^{\rho}_{f_0^{-1}}$. We obtain from \eqref{eq:lem2}, \eqref{eq:ue2} and setting $\varphi = -\varphi_0$ that
\[ \int_{\Delta} |\varphi| \leq \int_{\Delta} |\varphi| \frac{1}{\rho(|f_0(z)|)K_0} \frac{ |1-\mu_1 \tau \varphi_0/|\varphi_0||^2}{1-|\mu_1|^2}.\]
Since
\[ \frac{ |1-\mu_1 \tau \varphi_0/|\varphi_0||^2}{1-|\mu_1|^2} \leq \frac{ (1+|\mu_{f^{-1}}(f_0(z))|)^2}{1-|\mu_{f^{-1}}(f_0(z))|^2} \leq \rho(|f_0(z)|) K^{\rho}_{f^{-1}},\]
it follows that
\[ \int_{\Delta} |\varphi| \leq \int_{\Delta} |\varphi| \frac{1}{\rho(|f_0(z)|)K_0} \rho(|f_0(z)|) K^{\rho}_{f^{-1}} = \int_{\Delta} |\varphi|.\]
Since there must be equality everywhere, in particular we must have
\[ \mu_1(z) = \mu_{f^{-1}}(f_0(z))= -\frac{1}{\tau} \frac{ K_0\rho(|f_0(z)|)-1}{K_0\rho(|f_0(z)|)+1} \frac{\overline{\varphi_0}}{|\varphi_0|}.\]
However, we also have from the definition of $f_0$ that
\[ \mu_{f_0^{-1}}(f_0(z)) = -\frac{1}{\tau} \mu_{f_0}(z) =
-\frac{1}{\tau} \frac{ K_0\rho(|f_0(z)|)-1}{K_0\rho(|f_0(z)|)+1} \frac{\overline{\varphi_0}}{|\varphi_0|}.\]
Since $f_0^{-1}$ and $f^{-1}$ have the same complex dilatation, then the same argument as in the proof of Lemma \ref{lem:1} shows that they are related via post-composition by a conformal map. However, since both $f_0^{-1}$ and $f^{-1}$ agree on $\partial \Delta$, the conformal map must be the identity. We conclude that $f= f_0$ and so $f_0$ is uniquely extremal.
\end{proof}
\subsection{Quasisymmetry functions}
Before we prove Theorem \ref{thm:qs}, we need to recall some results. If $\Gamma$ is a curve family, then let $M(\Gamma)$ denote its modulus. We refer to \cite[Chapter II]{Vuorinen} for the precise definition. In particular, if $E,F \subset \mathbb{C} $ are disjoint continua, then $\Delta(E,F)$ denotes the family of curves starting in $E$ and terminating in $F$ and $M(\Delta(E,F))$ is the corresponding modulus.
For $\xi \in \partial \Delta$ and $0<r<R$, let $Q(\xi,r,R)$ be the quadrilateral
\[ Q(\xi,r,R) = \{ z: r\leq |z-\xi | \leq R, z\in \Delta \} \]
with vertices taken in order as the intersections of the circles $\{ |z-\xi| = r \}$ and $\{ |z - \xi| = R \}$ with the unit circle. The modulus $\operatorname{mod} Q(\xi,r,R)$ is then defined as the modulus of the curve family joining the two components of $Q(\xi,r,R) \cap \partial \Delta$. If $f:\overline{\Delta}\rightarrow \overline{\Delta}$ is a homeomorphism, then we define $\operatorname{mod} f(Q(\xi,r,R))$ analogously.
\begin{lemma}[Lemma 2.1, \cite{25}]
\label{lem:qs}
Let $\rho$ be allowable and let $f\in QC_{\rho}(\Delta)$. Then
\[ \int_r^R \frac{dt}{t\rho^*(t)} \leq \operatorname{mod} f( Q(\xi,r,R)),\]
where \[\rho^*(t) = \int_{S(\xi ,t) \cap \Delta} \rho(|z|) d\theta,\]
and $z=\xi+t e^{i\theta}$.
\end{lemma}
Let $\tau:(0,\infty) \rightarrow (0,\infty)$ be the Teichm\"uller capacity function (see \cite[p.66]{Vuorinen}). Observe that $\tau$ is decreasing.
\begin{lemma}[Lemma 7.34, \cite{Vuorinen}]
\label{lem:v}
If $\Omega \subset \mathbb{C} $ is an open ring with complementary components $E,F$ and $a,b\in E$, $c,\infty \in F$, then
\[ M(\Delta(E,F)) \geq \tau \left ( \frac{ |a-c|}{|a-b|} \right ).\]
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm:qs}]
Since $[f] \in T_{\rho}(\Delta)$, if $g=f^{-1}$, then $g \in QC_{\rho}(\Delta)$ and extends to a boundary map that, by abuse of notation, we will also call $g$.
Let $\xi \in \partial \Delta$, $t>0$ and $w = \xi e^{it/2} \in \partial \Delta$ be the midpoint of the arc of $\partial \Delta$ from $\xi$ to $\xi e^{it}$. Let $Q$ be the quadrilateral $Q=Q(w,s,S)$, where $s=|\xi-w| = |e^{it/2}-1|$ and $S = |\xi e^{-it} - w| = |e^{3it/2}-1|$. Then since $g\in QC_{\rho}(\Delta)$, by Lemma \ref{lem:qs},
\begin{equation}
\label{eq:qs1}
\int_s^S \frac{dr}{r \rho^*(r) } \leq \operatorname{mod} g(Q).
\end{equation}
Let $I(z) = z/|z|^2$ be inversion in the unit circle. Then $\Omega:= g(Q) \cup I(g(Q))$ is a ring domain. If $\Gamma$ is the curve family separating the boundary components of $\Omega$, by symmetry we have
\begin{equation}
\label{eq:qs2}
M(\Gamma) = \frac{ \operatorname{mod} g(Q)}{2}.
\end{equation}
Now, if $\Gamma'$ is the curve family connecting the complementary components of $\Omega$, then
\begin{equation}
\label{eq:qs3}
M(\Gamma ') = 1/ M(\Gamma).
\end{equation}
Applying Lemma \ref{lem:v} with $a=g(\xi)$, $b = g(\xi e^{it})$ and $c = g(\xi e^{-it})$, we have
\begin{equation}
\label{eq:qs4}
M(\Gamma ') \geq \tau \left ( \frac{ | g(\xi) - g(\xi e^{-it}) |}{| g(\xi) - g(\xi e^{it})| } \right)
= \tau \left ( \frac{1}{\lambda_g(\xi, t) }\right ).
\end{equation}
Combining \eqref{eq:qs1}, \eqref{eq:qs2}, \eqref{eq:qs3} and \eqref{eq:qs4}, we conclude that
\[ \int_s^S \frac{dr}{r \rho^*(r) } \leq 2\left ( \tau \left ( \frac{1}{\lambda_g(\xi, t) }\right ) \right )^{-1}.\]
Rearranging in terms of $\lambda_g(\xi, t)$ and using the fact that $\tau$ is decreasing, we obtain
\[ \lambda_g(\xi, t) \leq \left [ \tau^{-1} \left ( \frac{2}{\int_s^S \frac{dr}{r \rho^*(r) }} \right ) \right ] ^{-1}.\]
For the reverse inequality, we apply the same argument as above, except this time we let $w' = \xi e^{-it/2}$ be the midpoint of the arc of $\partial \Delta$ between $\xi$ and $\xi e^{-it}$ and we let $Q'$ be the quadrilateral $Q(w',s,S)$. Using the same notation as above, the argument is the same until we reach \eqref{eq:qs4} and we obtain
\[ M(\Gamma ') \geq \tau \left ( \frac{ | g(\xi) - g(\xi e^{it}) |}{| g(\xi) - g(\xi e^{-it})| } \right) = \tau \left ( \lambda_g(\xi, t) \right ).\]
This yields
\[ \lambda_g(\xi ,t) \geq \tau^{-1} \left ( \frac{2}{\int_s^S \frac{dr}{r \rho^*(r) }} \right ).\]
Consequently, we obtain the desired quasisymmetry estimate with
\[ \lambda (t) = \left [ \tau^{-1} \left ( \frac{2}{\int_s^S \frac{dr}{r \rho^*(r) }} \right ) \right ] ^{-1},\]
recalling that $s = |e^{it/2}-1|$ and $S = |e^{3it/2}-1|$.
\end{proof}
\section{Concluding remarks}
\subsection{Boundary maps}
In Theorem \ref{thm:qs}, we showed that if $[f] \in T_{\rho}(\Delta)$ and $f_0$ is any representative, then $f_0^{-1}$ extends to the boundary and the boundary map has controlled quasisymmetry function $\lambda$ depending on $\rho$ which may, however, blow up. What is not clear is whether given a boundary map with quasisymmetry controlled by $\lambda$, there is a locally quasiconformal extension contained in $QC_{\rho}(\Delta)$. If so, then we would have an alternate parameterization of $T_{\rho}(\Delta)$ through boundary maps of controlled quasisymmetry, in analogy with the quasisymmetric parameterization of universal Teichm\"uller space.
There are various extensions available. The Douady-Earle extension \cite{DE} extends a homeomorphism of the circle to a diffeomorphism of the (open) disk and hence this extension will be locally quasiconformal. It would be interesting to know how the distortion of the extension is controlled by $\lambda$.
Another extension is obtained through the Beurling-Ahlfors extension of homeomorphisms of the real line to homeomorphisms of the upper half-plane. In \cite{6}, it is shown that control of the quasisymmetry function of the boundary map leads to control of the distortion of the Beurling-Ahlfors extension, but again we don't know whether we can obtain $\rho$ from $\lambda$.
\subsection{Pseudo-metrics and metrics}
It is well-known that universal Teichm\"uller space carries the Teichm\"uller metric, and this has been well studied. For $T_{\rho}(\Delta)$, it is not clear how to make it into a metric space. One immediate barrier is that different classes can have extremal representatives $f_1,f_2$ which both have $K^{\rho}_{f_i^{-1}} = 1$ for $i=1,2$. Moreover, the fact that $QC_{\rho}(\Delta)$ need not be closed under compositions and inverses makes a direct analogy of the Teichm\"uller metric impossible.
On the other hand, it is easy to turn $T_{\rho}(\Delta)$ into a pseudo-metric space by considering maps $F:T_{\rho}(\Delta) \rightarrow (X,d_X)$, where $X$ is a metric space with distance function $d_X$. We then define $d_{T,X}$ on $T_{\rho}(\Delta)$ via
\[ d_{T,X}( [f] , [g] ) = d_X( F([f]) , F([g]) ).\]
For example $F([f]) = \inf_{f_0\in [f]} K^{\rho}_{f_0}$ mapping $T_{\rho}(\Delta)$ into $\mathbb{R} ^+$ with the usual Euclidean metric yields a pseudo-metric. We can therefore ask how to construct a metric on $T_{\rho}(\Delta)$ or, slightly nebulously, how to construct as interesting a pseudo-metric as possible.
\subsection{Riemann surfaces}
Our construction of generalized universal Teichm\"uller spaces leads to an obvious generalization to generalized Teichm\"uller spaces of hyperbolic Riemann surfaces, that is, those surfaces covered by the unit disk. It seems plausible that a study of such objects could yield information about the interplay between length-spectrum and quasiconformal Teichm\"uller spaces. In particular, does length spectrum Teichm\"uller space sit inside a generalized Teichm\"uller space for every infinite-type Riemann surface?
\end{document}
\begin{document}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{question}[theorem]{Question}
\newtheorem{remark}[theorem]{Remark}
\numberwithin{equation}{section}
\title[Local estimates for the Chern-Ricci flow]{Local Calabi and curvature estimates for the Chern-Ricci flow$^{\dagger}$}
\author[M. Sherman]{Morgan Sherman}
\address{Department of Mathematics, California Polytechnic State University, San Luis Obispo, CA 93407}
\author[B. Weinkove]{Ben Weinkove}
\address{Department of Mathematics, Northwestern University, 2033 Sheridan Road, Evanston, IL 60208}
\thanks{$^{\dagger}$Supported in part by NSF grant DMS-1105373. Part of this work was carried out while the second-named author was a member of the mathematics department of the University of California, San Diego.}
\begin{abstract}
Assuming local uniform bounds on the metric for a solution of the Chern-Ricci flow, we establish local Calabi and curvature estimates using the maximum principle. \end{abstract}
\maketitle
\section{Introduction}
Let $(M, \hat{g})$ be a Hermitian manifold. The \emph{Chern-Ricci flow} starting at $\hat{g}$ is a smooth flow of Hermitian metrics $g=g(t)$ given by
\begin{equation}\label{crf0}
\ddt g_{i\ov{j}} = - R^C_{i\ov{j}}, \qquad g_{i\ov{j}}|_{t=0} = \hat{g}_{i\ov{j}},
\end{equation}
where $R^C_{i\ov{j}} := - \partial_i \partial_{\ov{j}} \log \det g$ is the \emph{Chern-Ricci} curvature of $g$. If $\hat{g}$ is K\"ahler, then the Chern-Ricci flow coincides with the K\"ahler-Ricci flow.
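Indeed, when $\hat{g}$ is K\"ahler the coincidence can be seen directly: the solution of the K\"ahler-Ricci flow starting at $\hat{g}$ consists of K\"ahler metrics, and for a K\"ahler metric the Ricci curvature is given in local coordinates by
\begin{equation*}
R_{i\ov{j}} = - \partial_i \partial_{\ov{j}} \log \det g = R^C_{i\ov{j}},
\end{equation*}
so that solution also solves (\ref{crf0}).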
The Chern-Ricci flow was introduced by Gill \cite{G} and further investigated by Tosatti and the second-named author \cite{TW1, TW2}. This flow has many of the same properties as the K\"ahler-Ricci flow. For example: on manifolds with vanishing first Bott-Chern class the Chern-Ricci flow converges to a Chern-Ricci flat metric \cite{G}; on manifolds with negative first Chern class, the Chern-Ricci flow takes any Hermitian metric to the K\"ahler-Einstein metric \cite{TW1};
when $M$ is a compact complex surface and $\hat{g}$ is $\partial \ov{\partial}$-closed, the Chern-Ricci flow exists until either the volume of the manifold goes to zero or the volume of a curve of negative self-intersection goes to zero \cite{TW1}; if in addition $M$ is non-minimal with nonnegative Kodaira dimension, the Chern-Ricci flow shrinks exceptional curves in finite time \cite{TW2} in the sense of Gromov-Hausdorff. These results are closely analogous to results for the K\"ahler-Ricci flow \cite{Cao, FIK, TZ, SW0, SW}.
In this note, we establish local derivative estimates for solutions of the Chern-Ricci flow assuming local uniform bounds on the metric, generalizing our previous work \cite{ShW} on the K\"ahler-Ricci flow. Our estimates are local, so we work in a small open subset of $\mathbb{C}^n$. Write $B_r$ for the ball of radius $r$ centered at the origin in $\mathbb{C}^n$, and fix $T<\infty$. We have the following result (see Section \ref{sectionprelim} for more details about the notation).
\begin{theorem} \label{maintheorem} Fix $r$ with $0< r<1$. Let $g(t)$ solve the Chern-Ricci flow (\ref{crf0}) in a neighborhood of $B_r$ for $t\in [0,T]$.
Assume $N>1$ satisfies
\begin{equation} \label{uniformestimatemetric}
\frac{1}{N} \hat{g} \le g(t) \le N \hat{g} \qquad \textrm{on } B_r \times [0,T].
\end{equation}
Then there exist positive constants $C, \alpha, \beta$ depending only on $\hat{g}$ such that
\begin{enumerate}
\item[(i)] $\displaystyle{| \hat{\nabla} g|^2_{g} \le \frac{C N^{\alpha}}{r^2}}$ on $B_{r/2} \times [0,T],$ where $\hat{\nabla}$ is the Chern connection of $\hat{g}$.
\item[(ii)] $\displaystyle{|\emph{Rm}|_g^2 \le \frac{C N^{\beta}}{r^4}}$ on $B_{r/4} \times [0,T]$, for $\emph{Rm}$ the Chern curvature tensor of $g$.
\end{enumerate}
\end{theorem}
Note that the estimates are independent of the time $T$, and so the result holds also for time intervals $[0,T)$ or $[0,\infty)$. The constants depend on $\hat{g}$ through up to three derivatives of its torsion and one derivative of its Chern curvature (see Remarks \ref{remark1} and \ref{remark2}). We call the bound (i) a \emph{local Calabi estimate} \cite{Ca} (see \cite{ZZ} for a similar estimate in the elliptic case).
As a consequence of Theorem \ref{maintheorem}, we have local derivative estimates for $g$ to all orders:
\begin{corollary} \label{corollary}
With the assumptions of Theorem \ref{maintheorem}, for any $\varepsilon>0$ with $0< \varepsilon<T$, there exist constants $C_m$, $\alpha_m$ and $\gamma_m$ for $m= 1, 2,3, \ldots$ depending only on $\hat{g}$ and $\varepsilon$ such that
$$| \hat{\nabla}^m_{\mathbb{R}} g|^2_{\hat{g}} \le \frac{C_m N^{\alpha_m}}{r^{\gamma_m}} \qquad \textrm{on } B_{r/8} \times [\varepsilon,T],$$
where $\hat{\nabla}_{\mathbb{R}}$ is the Levi-Civita covariant derivative associated to $\hat{g}$.
\end{corollary}
Note that our assumption (\ref{uniformestimatemetric}) often holds for the Chern-Ricci flow on compact subsets away from a subvariety. For example, this always occurs for the Chern-Ricci flow on a non-minimal complex surface of nonnegative Kodaira dimension \cite{TW1, TW2}. It has already been shown by Gill \cite{G} that local derivative estimates exist using the method of Evans-Krylov \cite{E,K} adapted to this setting. The purpose of this note is to give a direct maximum principle proof of Gill's estimates, and in the process identify evolution equations for the Calabi quantity $|\hat{\nabla} g|^2_g$ and the Chern curvature tensor $R_{i\ov{j}k\ov{l}}$, which were previously unknown for this flow. In addition, we more precisely determine the form of dependence on the constants $N$ and $r$. We anticipate that this may be useful, for example in generalizations of arguments of \cite{SW}.
In the case when $\hat{g}$ is K\"ahler, so that $g(t)$ solves the K\"ahler-Ricci flow, the above result follows from results of the authors in \cite{ShW}. The more general case dealt with here presents additional difficulties, arising from the torsion tensors of $g$ and $\hat{g}$. For this reason, our conclusions here are slightly weaker: for example, we cannot obtain the small values ($\alpha=3$ and $\beta=8$) in the estimates of (i) and (ii) that we achieved in \cite{ShW}.
The second-named author thanks Valentino Tosatti and Xiaokui Yang for some helpful discussions.
\section{Preliminaries} \label{sectionprelim}
In this section we introduce the basic notions that we will use throughout the paper. We largely follow the notation of \cite{TW1}. Given a Hermitian metric $g$ we write $\nabla$ for the \emph{Chern connection} associated to $g$, defined as follows. Define the Christoffel symbols $\Gamma_{ik}^l = g^{{\overline{s}} l} \partial_i g_{k{\overline{s}}}$. Let $X = X^l \frac{\partial}{\partial z^l}$ be a vector field and let $a = a_k \, dz^k$ be a $(1,0)$-form. Then
\begin{equation}
\nabla_i X^l = \partial_i X^l + \Gamma_{i r}^l X^r ,
\quad \nabla_i a_j = \partial_i a_j - \Gamma_{i j}^r a_r .
\label{Chern connection}
\end{equation}
We can, in a natural way, extend $\nabla$ to act on any tensor. Note that $\nabla$ makes $g$ parallel, i.e.\ $\nabla g = 0$. Similarly we let $\hat\nabla$ denote the Chern connection associated to $\hat g$.
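Indeed, applying (\ref{Chern connection}) to the metric itself gives
\[
\nabla_i g_{k \ov{l}} = \partial_i g_{k \ov{l}} - \Gamma_{ik}^r g_{r \ov{l}}
= \partial_i g_{k \ov{l}} - g^{\ov{s} r} \partial_i g_{k \ov{s}} \, g_{r \ov{l}} = 0,
\]
and $\nabla_{\ov{j}} g_{k \ov{l}} = 0$ follows by conjugation.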
Define the torsion tensor $T$ of $g$ by
\begin{equation}
T_{ij}{}^k = \Gamma_{ij}^k - \Gamma_{ji}^k.
\label{torsion}
\end{equation}
We note that $g$ is K\"ahler precisely when $T = 0$. We write
$T_{{\overline{i}} {\overline{j}}}{}^{\overline{k}} := \Gamma_{{\overline{i}}{\overline{j}}}^{\overline{k}} - \Gamma_{{\overline{j}}{\overline{i}}}^{\overline{k}}
= \overline{\Gamma_{ij}^k} - \overline{\Gamma_{ji}^k}$ for the components of the tensor $\overline{T}$.
We lower and raise indices using the metric $g$. For example, $T^{ij}{}_{k} = g^{\ov{a}i} g^{\ov{b}j} g_{k\ov{l}} T_{\ov{a} \ov{b}}{}^{\ov{l}}$.
We define the \emph{Chern curvature tensor} of $g$ to be the tensor written locally as
\begin{equation}
R_{i {\overline{j}} k}{}^l = - \partial_{{\overline{j}}} \Gamma_{ik}^l.
\label{curvature}
\end{equation}
Then
\begin{equation}
R_{i {\overline{j}} k {\overline{l}}} = - \partial_i \partial_{\overline{j}} g_{k{\overline{l}}}
+ g^{{\overline{s}} r} \partial_i g_{k {\overline{s}}} \partial_{\overline{j}} g_{r {\overline{l}}} ,
\label{curvature in terms of metric}
\end{equation}
where again we have lowered an index using the metric $g$.
Note that $\overline{R_{i {\overline{j}} k {\overline{l}}}} = R_{j {\overline{i}} l {\overline{k}}}$ holds.
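In detail, (\ref{curvature in terms of metric}) follows from (\ref{curvature}) by lowering the index and using $\partial_{\ov{j}} g^{\ov{s} m} = - g^{\ov{s} r} (\partial_{\ov{j}} g_{r \ov{t}}) g^{\ov{t} m}$:
\[
R_{i \ov{j} k \ov{l}} = - g_{m \ov{l}} \, \partial_{\ov{j}} \bigl( g^{\ov{s} m} \partial_i g_{k \ov{s}} \bigr)
= - \partial_i \partial_{\ov{j}} g_{k \ov{l}}
+ g^{\ov{s} r} \partial_i g_{k \ov{s}} \, \partial_{\ov{j}} g_{r \ov{l}} .
\]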
The commutation formulas for the Chern connection are given by
\begin{gather}
[\nabla_i, \nabla_{\overline{j}}] X^l = R_{i {\overline{j}} k}{}^l X^k,
\quad [\nabla_i, \nabla_{\overline{j}}] \overline{X^k}
= - R_{i {\overline{j}}}{}^{{\overline{k}}}{}_{{\overline{l}}} \overline{X^l} \nonumber\\
[\nabla_i, \nabla_{\overline{j}}] a_k = - R_{i {\overline{j}} k}{}^l a_l,
\quad [\nabla_i, \nabla_{\overline{j}}] \overline{a_l}
= R_{i {\overline{j}}}{}^{{\overline{k}}}{}_{{\overline{l}}} \overline{a_k}.
\label{curvature via difference in connections}
\end{gather}
Because $g$ is not assumed to be a K\"ahler metric the \emph{Bianchi identities} will not necessarily hold for $R_{i {\overline{j}} k {\overline{l}}}$. However their failure to hold can be measured with the torsion tensor $T$ defined above:
\begin{align}
& R_{i {\overline{j}} k {\overline{l}}} - R_{k {\overline{j}} i {\overline{l}}} = - \nabla_{\overline{j}} T_{i k {\overline{l}}}
\nonumber\\
& R_{i {\overline{j}} k {\overline{l}}} - R_{i {\overline{l}} k {\overline{j}}} = - \nabla_i T_{{\overline{j}} {\overline{l}} k}
\nonumber\\
& R_{i {\overline{j}} k {\overline{l}}} - R_{k {\overline{l}} i {\overline{j}}}
= - \nabla_{\overline{j}} T_{i k {\overline{l}}} - \nabla_k T_{{\overline{j}} {\overline{l}} i}
= - \nabla_i T_{{\overline{j}} {\overline{l}} k} - \nabla_{\overline{l}} T_{i k {\overline{j}}}
\nonumber\\
& \nabla_p R_{i {\overline{j}} k {\overline{l}}} - \nabla_i R_{p {\overline{j}} k {\overline{l}}}
= - T_{p i}{}^r R_{r {\overline{j}} k {\overline{l}}}
\nonumber\\
& \nabla_{\overline{q}} R_{i {\overline{j}} k {\overline{l}}} - \nabla_{\overline{j}} R_{i {\overline{q}} k {\overline{l}}}
= - T_{{\overline{q}} {\overline{j}}}{}^{\overline{s}} R_{i {\overline{s}} k {\overline{l}}}.
\label{Bianchi}
\end{align}
These identities are well-known (see \cite{TWY} for example). Indeed,
it is routine to verify the first line; the second line follows from the first by conjugation, and the third from the first two. Furthermore, the fifth line follows from the fourth by conjugation. For the fourth line we calculate:
\[
\nabla_p R_{i {\overline{j}} k}{}^l
= - \nabla_p (\partial_{\overline{j}} \Gamma_{ik}^l)
= - \partial_p \partial_{\overline{j}} \Gamma_{ik}^l
- \Gamma_{pr}^l \partial_{\overline{j}} \Gamma_{ik}^r
+ \Gamma_{pi}^r \partial_{\overline{j}} \Gamma_{rk}^l
+ \Gamma_{pk}^r \partial_{\overline{j}} \Gamma_{ir}^l.
\]
Swapping the $p$ and $i$ indices, subtracting, and combining terms, we find
\[
\nabla_p R_{i {\overline{j}} k}{}^l - \nabla_i R_{p {\overline{j}} k}{}^l
= - T_{pi}{}^r R_{r {\overline{j}} k}{}^l
+ \partial_{\overline{j}} \left(
\partial_i \Gamma_{pk}^l - \partial_p \Gamma_{ik}^l
+ \Gamma_{ir}^l \Gamma_{pk}^r
- \Gamma_{pr}^l \Gamma_{ik}^r
\right).
\]
Now one checks that the quantity in parentheses vanishes.
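Similarly, the first line of (\ref{Bianchi}) can be checked directly from (\ref{curvature in terms of metric}): antisymmetrizing in $i$ and $k$,
\[
R_{i \ov{j} k \ov{l}} - R_{k \ov{j} i \ov{l}}
= - \partial_{\ov{j}} T_{i k \ov{l}}
+ g^{\ov{s} r} T_{i k \ov{s}} \, \partial_{\ov{j}} g_{r \ov{l}}
= - \nabla_{\ov{j}} T_{i k \ov{l}},
\]
since $\Gamma_{\ov{j} \ov{l}}^{\ov{s}} = g^{\ov{s} r} \partial_{\ov{j}} g_{r \ov{l}}$.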
We define the Chern-Ricci curvature tensor $R^C_{i {\overline{j}}}$ by
\begin{equation}
R^C_{i {\overline{j}}} = g^{{\overline{l}} k} R_{i {\overline{j}} k {\overline{l}}} = - \partial_i \partial_{{\overline{j}}} \log \det g.
\label{Chern-Ricci curvature}
\end{equation}
Note that $\sqrt{-1} R^C_{i\ov{j}}dz^i \wedge dz^{\ov{j}}$ is a real closed (1,1) form.
We will suppose that $g = g(t)$ satisfies the \emph{Chern-Ricci flow}:
\begin{equation}
\ddt g_{i {\overline{j}}} = - R^C_{i {\overline{j}}}, \quad g_{i\ov{j}}|_{t=0} = {\hat g}_{i\ov{j}},
\label{Chern-Ricci flow}
\end{equation}
for $t \in [0,T]$ for some fixed positive time $T$. We will use $\hat{\nabla}$, $\hat{\Gamma}^l_{ik}$, $\hat{T}_{ik}{}^l$, $\hat{R}_{i\ov{j}k\ov{l}}$ etc to denote the corresponding quantities with respect to the metric $\hat{g}$.
Define a real (1,1) form $\omega=\omega(t)$ by $\omega = \frac{\sqrt{-1}}2 g_{i {\overline{j}}} dz^i \wedge dz^{\overline{j}}$ and similarly for $\hat\omega$. From (\ref{Chern-Ricci flow}) we have that
\begin{equation}
\omega = \hat\omega + \eta(t)
\label{omega = omegahat + eta}
\end{equation}
for a closed $(1,1)$ form $\eta$. Hence
\begin{equation}
T_{i k {\overline{l}}} = \hat T_{i k {\overline{l}}} .
\label{T = T hat}
\end{equation}
Here we raise and lower indices of $\hat{T}$ using the metric $\hat{g}$, in the same manner as for $g$ above. Note that
$T_{i k {\overline{l}}} = g_{r {\overline{l}}} T_{i k}{}^r = \partial_i g_{k \ov{l}} - \partial_k g_{i \ov{l}}$ and
$\hat T_{i k {\overline{l}}} = \hat g_{r {\overline{l}}} \hat T_{i k}{}^r = \partial_i \hat{g}_{k \ov{l}} - \partial_k \hat{g}_{i \ov{l}}$.
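Indeed, writing $\eta = \frac{\sqrt{-1}}{2} \, \eta_{i \ov{j}} \, dz^i \wedge dz^{\ov{j}}$, the closedness of $\eta$ implies $\partial_i \eta_{k \ov{l}} = \partial_k \eta_{i \ov{l}}$, so that
\[
T_{i k \ov{l}} = \partial_i g_{k \ov{l}} - \partial_k g_{i \ov{l}}
= \partial_i \hat{g}_{k \ov{l}} - \partial_k \hat{g}_{i \ov{l}}
= \hat{T}_{i k \ov{l}} .
\]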
It is convenient to introduce the tensor $\Psi_{ik}{}^l = \Gamma_{ik}^l - \hat\Gamma_{ik}^l$. We raise and lower indices of $\Psi$ using the metric $g$, and write $\Psi_{\ov{i} \ov{k}}{}^{\ov{l}}$ for the components of $\ov{\Psi}$.
We note here that $\Psi$ can be used to switch between the connections $\nabla$ and $\hat\nabla$. For example given a tensor of the form $X_i{}^j$ we have
\begin{equation}
\nabla_p X_i{}^j - \hat\nabla_p X_i{}^j = - \Psi_{p i}{}^r X_r{}^j + \Psi_{p r}{}^j X_i{}^r .
\label{nabla - nablahat = Psi}
\end{equation}
Observe that
\begin{equation}
\nabla_{\overline{j}} \Psi_{i k}{}^l = - R_{i {\overline{j}} k}{}^l + \hat R_{i {\overline{j}} k}{}^l.
\label{nablabar Psi}
\end{equation}
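This follows since $\nabla_{\ov{j}}$ involves no Christoffel terms on unbarred indices, so that by (\ref{curvature}),
\[
\nabla_{\ov{j}} \Psi_{i k}{}^l
= \partial_{\ov{j}} \bigl( \Gamma_{ik}^l - \hat\Gamma_{ik}^l \bigr)
= - R_{i \ov{j} k}{}^l + \hat{R}_{i \ov{j} k}{}^l .
\]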
We write $\Delta$ for the ``rough Laplacian'' of $g$, $\Delta = \nabla^{{\overline{q}}} \nabla_{{\overline{q}}}$, where $\nabla^{\overline{q}} = g^{{\overline{q}} p} \nabla_p$. Finally note that we will write all norms $|\cdot|$ with respect to the metric $g$.
\section{Local Calabi estimate}
In this section we prove part (i) of Theorem \ref{maintheorem}.
We consider the Calabi-type \cite{Ca, Y} quantity
\begin{equation}
S := | \Psi |^2 = | \hat \nabla g |^2.
\label{definition of S}
\end{equation}
Our goal in this section is to bound $S$ uniformly on the set $B_{r/2}$, which we will do using a maximum principle argument. First we compute its evolution equation. We calculate
\begin{align*}
\Delta S ={} & g^{{\overline{q}} p} \nabla_p \nabla_{{\overline{q}}}
\left(g^{{\overline{a}} i} g^{{\overline{b}} j} g_{k{\overline{c}}} \Psi_{ij}{}^k \overline{\Psi_{ab}{}^c}
\right) \\
={} & g^{{\overline{q}} p} g^{{\overline{a}} i} g^{{\overline{b}} j} g_{k {\overline{c}}} \nabla_p \left(
\nabla_{\overline{q}} \Psi_{ij}{}^k \overline{\Psi_{ab}{}^c} + \Psi_{ij}{}^k
\overline{\nabla_q \Psi_{ab}{}^c } \right) \\
={} & |\overline{\nabla} \Psi|^2 + |\nabla \Psi|^2
+ g^{{\overline{a}} i} g^{{\overline{b}} j} g_{k {\overline{c}}} \Bigl(
\Delta\Psi_{ij}{}^k \overline{\Psi_{ab}{}^c} \\
&+ \Psi_{ij}{}^k \overline{\left(
\Delta\Psi_{ab}{}^c
+ g^{{\overline{q}} p} R_{p {\overline{q}} a}{}^r \Psi_{rb}{}^c
+ g^{{\overline{q}} p} R_{p {\overline{q}} b}{}^r \Psi_{ar}{}^c
- g^{{\overline{q}} p} R_{p {\overline{q}} r}{}^c \Psi_{ab}{}^r
\right)}
\Bigr) \\
={} & |\overline{\nabla} \Psi|^2 + |\nabla \Psi|^2
+ 2 \mathrm{Re} \left( ( \Delta \Psi_{ij}{}^k ) \Psi^{ij}{}_k \right) \\
&+ (R_p{}^p{}_r{}^i \Psi^{rj}{}_k
+ R_p{}^p{}_r{}^j \Psi^{ir}{}_k
- R_p{}^p{}_k{}^r \Psi^{ij}{}_r ) \Psi_{ij}{}^k.
\end{align*}
From (\ref{nablabar Psi}) we have
\begin{equation}
\Delta \Psi_{ij}{}^k = - \nabla^{{\overline{q}}} R_{i {\overline{q}} j}{}^k
+ \nabla^{{\overline{q}}} \hat{R}_{i {\overline{q}} j}{}^k.
\label{laplacian Psi}
\end{equation}
For the time derivative of $S$, first compute (cf. \cite{PSS} in the K\"ahler case),
\begin{equation}
\ddt \Psi_{ij}{}^k = \ddt \Gamma_{ij}^k = - \nabla_i (R^C)_j{}^k .
\label{evolution of Psi}
\end{equation}
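Here the first equality holds because $\hat{\Gamma}$ is independent of $t$, and the second is the standard variation of the Christoffel symbols:
\[
\ddt \Gamma_{ij}^k
= \ddt \left( g^{\ov{l} k} \partial_i g_{j \ov{l}} \right)
= g^{\ov{l} k} \left( \partial_i \, \ddt g_{j \ov{l}} - \Gamma_{ij}^p \, \ddt g_{p \ov{l}} \right)
= - g^{\ov{l} k} \nabla_i R^C_{j \ov{l}}
= - \nabla_i (R^C)_j{}^k .
\]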
Then
\begin{align*}
\ddt S ={} & \ddt
\left( g^{{\overline{a}} i} g^{{\overline{b}} j} g_{k{\overline{c}}} \Psi_{ij}{}^k \overline{\Psi_{ab}{}^c}
\right) \\
={} & \left(\ddt g^{{\overline{a}} i}\right) \Psi_{ij}{}^k \Psi_{{\overline{a}}}{}^{j}{}_k
+ \left(\ddt g^{{\overline{b}} j}\right) \Psi_{ij}{}^k \Psi^i{}_{{\overline{b}} k}
+ \left(\ddt g_{k{\overline{c}}}\right) \Psi_{ij}{}^k \Psi^{i j {\overline{c}}}
+ 2\mathrm{Re} \left( \left(\ddt \Psi_{ij}{}^k\right) \Psi^{ij}{}_k \right) \\
={} & (R^C)^{{\overline{a}} i} \Psi_{ij}{}^k \Psi_{{\overline{a}}}{}^{j}{}_k
+ (R^C)^{{\overline{b}} j} \Psi_{ij}{}^k \Psi^i{}_{{\overline{b}} k}
- (R^C)_{k{\overline{c}}} \Psi_{ij}{}^k \Psi^{i j {\overline{c}}} - 2\mathrm{Re} \left( (\nabla_i (R^C)_j{}^k) \Psi^{ij}{}_k \right).
\end{align*}
Therefore
\begin{align*}
\left( \ddt - \Delta \right) S
={}& - |\overline{\nabla} \Psi|^2 - |\nabla \Psi|^2
+ \left( R^{{\overline{r}} i}{}_p{}^p - R_p{}^{p {\overline{r}} i} \right)
\Psi_{ij}{}^k \Psi_{{\overline{r}}}{}^j{}_k
+ \left( R^{{\overline{r}} j}{}_p{}^p - R_p{}^{p {\overline{r}} j} \right)
\Psi_{ij}{}^k \Psi^{i}{}_{{\overline{r}} k} \\
&- \left( R_{k {\overline{r}}}{}_p{}^p - R_p{}^p{}_{k {\overline{r}} } \right)
\Psi_{ij}{}^k \Psi^{i j {\overline{r}}}
- 2 \mathrm{Re} \left[
\left( \nabla_i R_j{}^k{}_p{}^p + \Delta \Psi_{ij}{}^k \right)
\Psi^{i j}{}_k \right].
\end{align*}
By (\ref{Bianchi}) we can rewrite the terms involving a difference in curvature using the torsion tensor $T$. For the term in square brackets we compute, using (\ref{laplacian Psi}) and again (\ref{Bianchi}), that
\begin{align*}
\nabla_i R_j{}^k{}_p{}^p + \Delta \Psi_{ij}{}^k
=& \nabla_i \left(
R_{p}{}^p{}_j{}^k + \nabla_j T^{p k}{}_p + \nabla^p T_{pj}{}^k
\right)
- \nabla^{\overline{q}} R_{i {\overline{q}} j}{}^k
+ \nabla^{\overline{q}} \hat{R}_{i {\overline{q}} j}{}^k \\
=& \left(
\nabla_p R_{i}{}^p{}_j{}^k - T_{ip}{}^r R_{r}{}^p{}_j{}^k
+ \nabla_i \nabla_j T^{p k}{}_p + \nabla_i \nabla^p T_{pj}{}^k
\right)
- \nabla^{\overline{q}} R_{i {\overline{q}} j}{}^k + \nabla^{\overline{q}} \hat{R}_{i {\overline{q}} j}{}^k \\
=& - T_{ip}{}^r R_r{}^p{}_j{}^k
+ \nabla_i \nabla_j T^{p k}{}_p
+ \nabla_i \nabla^p T_{pj}{}^k
+ \nabla^{\overline{q}} \hat{R}_{i {\overline{q}} j}{}^k.
\end{align*}
Hence $S$ satisfies the following evolution equation
\begin{align}
\left(\ddt - \Delta\right) S = {}& - |\overline{\nabla} \Psi|^2 - |\nabla \Psi|^2 \nonumber\\
&+ \left( \nabla_r T_{{\overline{q}}}{}^{i {\overline{q}}} + \nabla_{{\overline{q}}} T^{{\overline{q}}}{}_r{}^i
\right) \Psi_{i j}{}^k \Psi^{r j}{}_k \nonumber
+ \left( \nabla_r T_{{\overline{q}}}{}^{j {\overline{q}}}
+ \nabla_{{\overline{q}}} T^{{\overline{q}}}{}_{r}{}^j
\right) \Psi_{ij}{}^k \Psi^{ir}{}_k \nonumber \\
& - \left( \nabla_k T_{{\overline{q}}}{}^{r {\overline{q}}} + \nabla_{{\overline{q}}} T^{{\overline{q}}}{}_k{}^r
\right) \Psi_{ij}{}^k \Psi^{ij}{}_r \nonumber\\
&- 2 \mathrm{Re} \left[ \left(
\nabla_i \nabla_j T^{p k}{}_p
+ \nabla_i \nabla_{{\overline{q}}} T^{{\overline{q}}}{}_j{}^k
- T_{ip}{}^r R_r{}^p{}_j{}^k
+ g^{{\overline{q}} p} \nabla_p \hat{R}_{i{\overline{q}} j}{}^k
\right) \Psi^{ij}{}_k \right].
\label{evolution of S}
\end{align}
There are similar calculations to (\ref{evolution of S}) in the literature which generalize Calabi's argument \cite{Ca, Y}: in the elliptic Hermitian case \cite{Che, ZZ}; in the case of the K\"ahler-Ricci flow (see also \cite{ShW}) in \cite{Cao, PSS}; and in other settings \cite{TWY, To, ST}.
For the remainder of this section we will write $C$ for a constant of the form $CN^{\alpha}$ for $C$ and $\alpha$ depending only on $\hat{g}$. Our goal is to show that $S \le C/r^2$. The constant $C$ will be used repeatedly and may change from line to line, and we may at times use $C'$ or $C_1$ etc.
We would like to bound the right-hand side of (\ref{evolution of S}). First, from (\ref{T = T hat}) and (\ref{nabla - nablahat = Psi}) we have, for example,
\begin{equation}
\nabla_{\overline{a}} T_{ij}{}^k = g^{{\overline{l}} k} ( \hat \nabla_{\overline{a}} \hat T_{ij{\overline{l}}}
- \Psi_{{\overline{a}}{\overline{l}}}{}^{\overline{r}} \hat T_{ij{\overline{r}}} ).
\label{nabla T}
\end{equation}
This and similar calculations show that the second and third lines of (\ref{evolution of S}) can be bounded by $C(S^{3/2} + 1)$. Next we address the terms in the last line of the evolution equation for $S$.
\begin{itemize}
\item
Building on (\ref{nabla T}) we find
\begin{align} \nonumber
\nabla_a \nabla_b T_{{\overline{i}} {\overline{j}}}{}^{\overline{k}}
={} & g^{\ov{k}l} \left( \nabla_a (\hat{\nabla}_b \hat{T}_{\ov{i} \ov{j}l} - \Psi_{bl}{}^r \hat{T}_{\ov{i}\ov{j} r} ) \right) \\
= {} & g^{ {\overline{k}} l} \left( \hat{\nabla}_a \hat{\nabla}_b \hat{T}_{{\overline{i}}{\overline{j}} l}
- \Psi_{a b}{}^r \hat{\nabla}_r \hat{T}_{{\overline{i}} {\overline{j}} l}
- \Psi_{a l}{}^r \hat{\nabla}_b \hat{T}_{{\overline{i}} {\overline{j}} r} \nonumber \right. \\
&\left. - (\nabla_a \Psi_{bl}{}^r) \hat{T}_{\ov{i} \ov{j}r}
- \Psi_{bl}{}^r \hat\nabla_a \hat{T}_{\ov{i} \ov{j} r} + \Psi_{b l}{}^{r} \Psi_{ar}{}^s \hat{T}_{\ov{i} \ov{j} s} \right)
\label{nabla nabla T},
\end{align}
and hence $|\nabla_i \nabla_j T^{p k}{}_p|$ can be bounded by $C (S+ | \nabla \Psi | + 1)$.
\item Similarly,
\begin{align} \nonumber
\nabla_a \nabla_{\ov{b}} T_{ij}{}^k ={} & g^{\ov{l}k} \nabla_a ( \hat{\nabla}_{\ov{b}} \hat{T}_{ij\ov{k}} - \Psi_{\ov{b} \ov{k}}{}^{\ov{q}} \hat{T}_{ij\ov{q}}) \\ \nonumber
={} & g^{\ov{l}k} \left( \hat{\nabla}_a \hat{\nabla}_{\ov{b}} \hat{T}_{ij\ov{k}} - \Psi_{ai}{}^p \hat{\nabla}_{\ov{b}} \hat{T}_{pj\ov{k}} - \Psi_{aj}{}^p \hat{\nabla}_{\ov{b}} \hat{T}_{ip\ov{k}} - (\nabla_a \Psi_{\ov{b} \ov{k}}{}^{\ov{q}}) \hat{T}_{ij\ov{q}} \right. \\
& \left. - \Psi_{\ov{b} \ov{k}}{}^{\ov{q}} ( \hat{\nabla}_a \hat{T}_{ij\ov{q}} - \Psi_{ai}{}^p \hat{T}_{pj \ov{q}} - \Psi_{aj}{}^p \hat{T}_{ip\ov{q}}) \right),
\end{align}
and so $|\nabla_i \nabla_{{\overline{q}}} T^{{\overline{q}}}{}_j{}^k|$ can be bounded by $C (S+ | \ov{\nabla} \Psi | + 1)$.
\item
Next, using (\ref{T = T hat}) and (\ref{nablabar Psi}):
\[
T_{ip}{}^r R_r{}^p{}_j{}^k = g^{{\overline{s}} r}g^{{\overline{q}} p} \hat T_{ip{\overline{s}}}
\left( \hat R_{r{\overline{q}} j}{}^k - \nabla_{\overline{q}} \Psi_{rj}{}^k \right),
\]
so we can bound $|T_{ip}{}^r R_r{}^p{}_j{}^k|$ by $C(|\overline{\nabla}\Psi|+1)$.
\item
Finally, compute
\begin{align*}
\nabla_p \hat R_{i{\overline{q}} j}{}^k & = \hat\nabla_p \hat R_{i{\overline{q}} j}{}^k - \Psi_{pi}{}^r \hat R_{r{\overline{q}} j}{}^k
- \Psi_{pj}{}^r \hat R_{i{\overline{q}} r}{}^k
+ \Psi_{pr}{}^k \hat R_{i{\overline{q}} j}{}^r.
\end{align*}
So $|g^{{\overline{q}} p} \nabla_p \hat{R}_{i{\overline{q}} j}{}^k|$ can be bounded by $C(S^{1/2}+1)$.
\end{itemize}
Putting this all together we arrive at the bound
\begin{equation}
\left( \ddt - \Delta \right) S \le C ( S^{3/2} + 1 )
- \frac12 ( |\overline{\nabla}\Psi|^2 + |\nabla\Psi|^2 ).
\label{bound on evolution of S}
\end{equation}
We note here the bounds:
\begin{align}
&\left| \nabla {\mathrm{tr}_{\hat{g}}g} \right|^2 \le C S
\label{bound grad trace g} \\
&\left| \nabla S \right|^2 \le 2 S ( |\overline{\nabla}\Psi|^2+|\nabla\Psi|^2 ) .
\label{bound grad S}
\end{align}
The first follows from
$ \nabla_p \left( \hat{g}^{{\overline{j}} i} g_{i {\overline{j}}} \right)
= \hat\nabla_p \left( \hat{g}^{{\overline{j}} i} g_{i {\overline{j}}} \right)
= \hat{g}^{{\overline{j}} i} \hat{\nabla}_p g_{i {\overline{j}}} $
and the second follows from
$
| \nabla S |^2 = \bigl|\nabla |\Psi|^2\bigr| \, \bigl|\overline{\nabla} |\Psi|^2\bigr|
\le 2 |\Psi|^2 \, ( |\nabla\Psi|^2 + | \overline{\nabla}\Psi|^2)
$.
Furthermore from \cite[Proposition 3.1]{TW1} (see also \cite{Che} in the elliptic case), we also have the following evolution equation for ${\mathrm{tr}_{\hat{g}}g}$:
\begin{align}
\left( \ddt - \Delta \right) & {\mathrm{tr}_{\hat{g}}g} \ =\
-g^{{\overline{j}} p} g^{{\overline{q}} i} \hat\nabla_k g_{i{\overline{j}}} \hat\nabla^k g_{p{\overline{q}}}
-2\operatorname{Re}\left(g^{{\overline{j}} i} \hat T_{ki}{}^p \hat\nabla^k g_{p{\overline{j}}} \right)
\nonumber\\
& + g^{{\overline{j}} i} \left( \hat\nabla_i \hat T_{{\overline{j}}}{}^{k {\overline{q}}}
- \hat R_{i}{}^{k{\overline{q}}}{}_{{\overline{j}}} \right) g_{k{\overline{q}}}
- g^{{\overline{j}} i} \left(
\hat\nabla_i \hat T_{{\overline{j}}{\overline{q}}}{}^{{\overline{q}}}
+ \hat\nabla^k \hat T_{ik{\overline{j}}}
\right)
\nonumber\\
& + g^{{\overline{j}} i} \hat T_{{\overline{j}}}{}^{k{\overline{q}}} \hat T_{ik}{}^p (\hat g - g)_{p{\overline{q}}} .
\label{evolution of trace g}
\end{align}
(Here $\hat\nabla^k = \hat g^{{\overline{l}} k}\hat\nabla_{\overline{l}}$ and we have raised indices on the tensor $\widehat{\mathrm{Rm}}$ using $\hat g$.)
This generalizes the second order evolution inequality for the K\"ahler-Ricci flow \cite{Cao} (cf. \cite{Y,A}).
Hence we have the estimate
\begin{equation}
\left( \ddt - \Delta \right) {\mathrm{tr}_{\hat{g}}g} \le - \frac{S}{C_0} + C (S^{1/2}+1),
\label{bound on evolution of trace g}
\end{equation}
for a uniform positive constant $C_0$ (in fact we can take $C_0=N$).
We now would like to show that the evolution inequalities (\ref{bound on evolution of S}, \ref{bound on evolution of trace g}) imply a uniform bound on $S = |\hat \nabla g |^2$ on $\overline{B_{r/2}} \times [0,T]$. Choose a smooth cutoff function $\rho$ which is supported in $B_r$ and is identically 1 on $\ov{B_{r/2}}$. We may assume that
$ |\nabla \rho|^2, |\Delta \rho|$ are bounded by $C /r^2$. Let $K$ be a large uniform constant, to be specified later, which is at least large enough so that $$\frac K2 \le K - {\mathrm{tr}_{\hat{g}}g} \le K.$$ Let $A$ denote another large positive constant to be specified later. We will use a maximum principle argument with the function (cf. \cite{CY})
$$f = \rho^2 \frac{S}{K-{\mathrm{tr}_{\hat{g}}g}} + A {\mathrm{tr}_{\hat{g}}g}$$ to show that $S$ is bounded on $B_{r/2}$.
Suppose that the maximum of $f$ on $\ov{B_r} \times [0,T]$ occurs at a point $(x_0, t_0)$. We assume for the moment that $t_0>0$ and that $x_0$ does not lie in the boundary of $\ov{B}_r$.
We wish to show that at $(x_0, t_0)$, $S$ is bounded from above by a uniform constant $C$. Hence we
may assume without loss of generality that $S>1$ at $(x_0, t_0)$. In particular, we have
\begin{equation} \label{evolvestr}
\left( \ddt - \Delta \right) S \le C S^{3/2}
- \frac12 ( |\overline{\nabla}\Psi|^2 + |\nabla\Psi|^2 ),
\quad
\left( \ddt - \Delta\right) {\mathrm{tr}_{\hat{g}}g}
\le -\frac{S}{2C_0} + C.
\end{equation}
We compute at $(x_0, t_0)$,
\begin{align*}
\left( \ddt - \Delta \right) f = {} & A \left( \ddt - \Delta \right) {\mathrm{tr}_{\hat{g}}g} + (-\Delta (\rho^2)) \frac{S}{K-{\mathrm{tr}_{\hat{g}}g}} + \rho^2 \frac{S}{(K-{\mathrm{tr}_{\hat{g}}g})^2} \left( \ddt - \Delta \right) {\mathrm{tr}_{\hat{g}}g} \\
& + \rho^2 \frac{1}{K-{\mathrm{tr}_{\hat{g}}g}} \left( \ddt - \Delta \right) S - 4 \textrm{Re} \left[ \rho \frac{S}{(K-{\mathrm{tr}_{\hat{g}}g})^2} \nabla {\mathrm{tr}_{\hat{g}}g} \cdot \ov{\nabla} \rho \right] \\
& - 4 \textrm{Re} \left[ \rho \frac{1}{K-{\mathrm{tr}_{\hat{g}}g}} \nabla \rho \cdot \ov{\nabla} S \right] - 2 \textrm{Re} \left[ \rho^2 \frac{1}{(K-{\mathrm{tr}_{\hat{g}}g})^2} \nabla {\mathrm{tr}_{\hat{g}}g} \cdot \ov{\nabla} S \right] \\
& - \frac{2\rho^2 S}{(K-{\mathrm{tr}_{\hat{g}}g})^3} | \nabla {\mathrm{tr}_{\hat{g}}g}|^2.
\end{align*}
But since a maximum occurs at $(x_0, t_0)$ we have $\ov{\nabla} f=0$ at this point, and hence
$$2\rho \ov{\nabla}\rho \frac{S}{K-{\mathrm{tr}_{\hat{g}}g}} + \rho^2 \frac{\ov{\nabla} S}{K-{\mathrm{tr}_{\hat{g}}g}} + \rho^2 \frac{S \ov{\nabla} {\mathrm{tr}_{\hat{g}}g}}{(K-{\mathrm{tr}_{\hat{g}}g})^2} + A \ov{\nabla} {\mathrm{tr}_{\hat{g}}g} =0.$$
Then at $(x_0, t_0)$,
\begin{align*}
\left( \ddt - \Delta \right) f = {} & A \left( \ddt - \Delta \right) {\mathrm{tr}_{\hat{g}}g} + (-\Delta (\rho^2)) \frac{S}{K-{\mathrm{tr}_{\hat{g}}g}} + \rho^2 \frac{S}{(K-{\mathrm{tr}_{\hat{g}}g})^2} \left( \ddt - \Delta \right) {\mathrm{tr}_{\hat{g}}g} \\
& + \rho^2 \frac{1}{K-{\mathrm{tr}_{\hat{g}}g}} \left( \ddt - \Delta \right) S - 4 \textrm{Re} \left[ \rho \frac{1}{K-{\mathrm{tr}_{\hat{g}}g}} \nabla \rho \cdot \ov{\nabla} S \right] + \frac{2A | \nabla {\mathrm{tr}_{\hat{g}}g}|^2}{K-{\mathrm{tr}_{\hat{g}}g}}.
\end{align*}
Making use of (\ref{bound grad trace g}, \ref{bound grad S}, \ref{evolvestr}) and Young's inequality, we obtain at $(x_0, t_0)$,
\begin{align*}
0 \le \left( \ddt - \Delta \right) f \le & \left( -\frac{A}{2C_0} S + CA \right) + \left( \frac{CS}{r^2K} \right) + \left( - \frac{\rho^2}{2K^2 C_0} S^2 + \frac{C\rho^2}{K^2} S \right) \\
& + \left( -\frac{\rho^2}{2K} ( | \ov{\nabla} \Psi|^2 + | \nabla \Psi|^2 ) + \frac{\rho^2}{4K^2C_0} S^2 + C \rho^2 S\right) \\
& + \left( \frac{\rho^2}{4K} ( | \ov{\nabla} \Psi|^2 + | \nabla \Psi|^2 ) + \frac{C}{Kr^2} S \right) + \frac{CA}{K} S \\
& \le - \frac{A}{2C_0} S + CA + \frac{C'}{r^2} S + \frac{CA}{K} S.
\end{align*}
Now pick $K\ge4C_0 C$ so that at $(x_0, t_0)$,
$$0 \le - \frac{A}{4C_0}S + CA + \frac{C'}{r^2} S.$$
Then choose $A= \frac{8C'C_0}{r^2}$ so that at $(x_0, t_0)$,
$$\frac{C'}{r^2} S \le CA,$$
giving a uniform upper bound for $S$. It follows that $f$ is bounded from above by $Cr^{-2}$ for a uniform $C$. Hence $S$ on $\ov{B_{r/2}}$ is bounded above by $Cr^{-2}$.
It remains to deal with the cases when $t_0=0$ or $x_0$ lies on the boundary of $\ov{B_r}$. In either case we have $f(x_0, t_0) \le A {\mathrm{tr}_{\hat{g}}g} (x_0, t_0) \le Cr^{-2}$ and the same bound holds.
\begin{remark} \label{remark1} \emph{
Tracing through the argument, one can see that the constants only depend on uniform bounds for the torsion and curvature of $\hat{g}$, and one and two derivatives (with respect to $\hat{\nabla}$ or $\ov{\hat{\nabla}}$) of torsion and one derivative of curvature.}
\end{remark}
\section{Local curvature bound}
In this section we prove part (ii) of Theorem \ref{maintheorem}.
As in the previous section, we write $C$ for a constant of the form $CN^{\gamma}$ for some uniform $C, \gamma$. We compute in the ball $\ov{B_{r/2}}$ on which we already have the bound $S\le C/r^2$.
Let $\Delta_{\mathbb R} = \frac12 g^{{\overline{q}} p}( \nabla_p \nabla_{\overline{q}} + \nabla_{\overline{q}} \nabla_p )$. First we need an evolution equation for the curvature tensor.
We begin with
\[
\ddt R_{i {\overline{j}} k}{}^l = \ddt \left( - \partial_{\overline{j}} \Gamma_{ik}^l \right)
= - \partial_{\overline{j}} \ddt \left( \Gamma_{ik}^l \right)
= - \partial_{\overline{j}} ( - \nabla_i (R^C)_k{}^l )
= \nabla_{\overline{j}} \nabla_i R_k{}^l{}_p{}^p
\]
and therefore,
\begin{equation}
\ddt R_{i {\overline{j}} k {\overline{l}}} = - R_{q {\overline{l}} p}{}^p R_{i {\overline{j}} k}{}^q
+ \nabla_{\overline{j}} \nabla_i R_{k {\overline{l}} p}{}^p.
\end{equation}
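In more detail, lowering an index with $g$ and using (\ref{Chern-Ricci flow}):
\[
\ddt R_{i \ov{j} k \ov{l}}
= \left( \ddt g_{q \ov{l}} \right) R_{i \ov{j} k}{}^q
+ g_{q \ov{l}} \, \nabla_{\ov{j}} \nabla_i R_k{}^q{}_p{}^p
= - R^C_{q \ov{l}} R_{i \ov{j} k}{}^q
+ \nabla_{\ov{j}} \nabla_i R_{k \ov{l} p}{}^p ,
\]
where we used $\nabla g = 0$ and $R^C_{q \ov{l}} = R_{q \ov{l} p}{}^p$.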
Now, computing in coordinates where $g$ is the identity, we find
\begin{align*}
\Delta_{\mathbb R} R_{i {\overline{j}} k {\overline{l}}} =& \frac12 ( \nabla_p \nabla_{\overline{p}} + \nabla_{\overline{p}} \nabla_p ) R_{i {\overline{j}} k {\overline{l}}} \\
=& \nabla_p \nabla_{\overline{p}} R_{i {\overline{j}} k {\overline{l}}} + \frac12 (
R_{p {\overline{p}} i {\overline{q}}} R_{q {\overline{j}} k {\overline{l}}}
- R_{p {\overline{p}} q {\overline{j}}} R_{i {\overline{q}} k {\overline{l}}}
+ R_{p {\overline{p}} k {\overline{q}}} R_{i {\overline{j}} q {\overline{l}}}
- R_{p {\overline{p}} q {\overline{l}}} R_{i {\overline{j}} k {\overline{q}}}
) \\
=& \nabla_p ( \nabla_{\overline{j}} R_{i {\overline{p}} k {\overline{l}}} - T_{{\overline{p}} {\overline{j}} q} R_{i {\overline{q}} k {\overline{l}}} )
+ \frac12 (
R_{p {\overline{p}} i {\overline{q}}} R_{q {\overline{j}} k {\overline{l}}}
- R_{p {\overline{p}} q {\overline{j}}} R_{i {\overline{q}} k {\overline{l}}}
+ R_{p {\overline{p}} k {\overline{q}}} R_{i {\overline{j}} q {\overline{l}}}
- R_{p {\overline{p}} q {\overline{l}}} R_{i {\overline{j}} k {\overline{q}}}
) \\
=& \nabla_{\overline{j}} \nabla_p R_{i {\overline{p}} k {\overline{l}}}
- R_{p {\overline{j}} i {\overline{q}}} R_{q {\overline{p}} k {\overline{l}}}
+ R_{p {\overline{j}} q {\overline{p}}} R_{i {\overline{q}} k {\overline{l}}}
- R_{p {\overline{j}} k {\overline{q}}} R_{i {\overline{p}} q {\overline{l}}}
+ R_{p {\overline{j}} q {\overline{l}}} R_{i {\overline{p}} k {\overline{q}}}
- \nabla_p ( T_{{\overline{p}} {\overline{j}} q} R_{i {\overline{q}} k {\overline{l}}} ) \\
& + \frac12 (
R_{p {\overline{p}} i {\overline{q}}} R_{q {\overline{j}} k {\overline{l}}}
- R_{p {\overline{p}} q {\overline{j}}} R_{i {\overline{q}} k {\overline{l}}}
+ R_{p {\overline{p}} k {\overline{q}}} R_{i {\overline{j}} q {\overline{l}}}
- R_{p {\overline{p}} q {\overline{l}}} R_{i {\overline{j}} k {\overline{q}}}
) \\
=& \nabla_{\overline{j}} ( \nabla_i R_{p {\overline{p}} k {\overline{l}}} - T_{p i {\overline{q}}} R_{q {\overline{p}} k {\overline{l}}} )
- R_{p {\overline{j}} i {\overline{q}}} R_{q {\overline{p}} k {\overline{l}}}
+ R_{p {\overline{j}} q {\overline{p}}} R_{i {\overline{q}} k {\overline{l}}}
- R_{p {\overline{j}} k {\overline{q}}} R_{i {\overline{p}} q {\overline{l}}}
+ R_{p {\overline{j}} q {\overline{l}}} R_{i {\overline{p}} k {\overline{q}}} \\
& - \nabla_p ( T_{{\overline{p}} {\overline{j}} q} R_{i {\overline{q}} k {\overline{l}}} )
+ \frac12 (
R_{p {\overline{p}} i {\overline{q}}} R_{q {\overline{j}} k {\overline{l}}}
- R_{p {\overline{p}} q {\overline{j}}} R_{i {\overline{q}} k {\overline{l}}}
+ R_{p {\overline{p}} k {\overline{q}}} R_{i {\overline{j}} q {\overline{l}}}
- R_{p {\overline{p}} q {\overline{l}}} R_{i {\overline{j}} k {\overline{q}}}
) \\
=& \nabla_{\overline{j}} \nabla_i ( R_{k {\overline{l}} p {\overline{p}}}
- \nabla_p T_{{\overline{p}} {\overline{l}} k}
- \nabla_{\overline{l}} T_{p k {\overline{p}}}
)
- \nabla_{\overline{j}} ( T_{p i {\overline{q}}} R_{q {\overline{p}} k {\overline{l}}} ) \\
& - R_{p {\overline{j}} i {\overline{q}}} R_{q {\overline{p}} k {\overline{l}}}
+ R_{p {\overline{j}} q {\overline{p}}} R_{i {\overline{q}} k {\overline{l}}}
- R_{p {\overline{j}} k {\overline{q}}} R_{i {\overline{p}} q {\overline{l}}}
+ R_{p {\overline{j}} q {\overline{l}}} R_{i {\overline{p}} k {\overline{q}}} \\
& - \nabla_p ( T_{{\overline{p}} {\overline{j}} q} R_{i {\overline{q}} k {\overline{l}}} )
+ \frac12 (
R_{p {\overline{p}} i {\overline{q}}} R_{q {\overline{j}} k {\overline{l}}}
- R_{p {\overline{p}} q {\overline{j}}} R_{i {\overline{q}} k {\overline{l}}}
+ R_{p {\overline{p}} k {\overline{q}}} R_{i {\overline{j}} q {\overline{l}}}
- R_{p {\overline{p}} q {\overline{l}}} R_{i {\overline{j}} k {\overline{q}}}
) .
\end{align*}
Hence
\begin{align} \nonumber
\left( \ddt - \Delta_{\mathbb R} \right) R_{i {\overline{j}} k {\overline{l}}}
=& - R_{q {\overline{l}} p {\overline{p}}} R_{i {\overline{j}} k {\overline{q}}}
+ R_{p {\overline{j}} i {\overline{q}}} R_{q {\overline{p}} k {\overline{l}}}
- R_{p {\overline{j}} q {\overline{p}}} R_{i {\overline{q}} k {\overline{l}}}
+ R_{p {\overline{j}} k {\overline{q}}} R_{i {\overline{p}} q {\overline{l}}}
- R_{p {\overline{j}} q {\overline{l}}} R_{i {\overline{p}} k {\overline{q}}} \\ \nonumber
& - \frac12 (
R_{p {\overline{p}} i {\overline{q}}} R_{q {\overline{j}} k {\overline{l}}}
- R_{p {\overline{p}} q {\overline{j}}} R_{i {\overline{q}} k {\overline{l}}}
+ R_{p {\overline{p}} k {\overline{q}}} R_{i {\overline{j}} q {\overline{l}}}
- R_{p {\overline{p}} q {\overline{l}}} R_{i {\overline{j}} k {\overline{q}}}
) \\ \label{evRm}
& + \nabla_p ( \hat{T}_{{\overline{p}} {\overline{j}} q} R_{i {\overline{q}} k {\overline{l}}} )
+ \nabla_{\overline{j}} ( \hat{T}_{p i {\overline{q}}} R_{q {\overline{p}} k {\overline{l}}} )
+ \nabla_{\overline{j}} \nabla_i ( \nabla_p T_{{\overline{p}} {\overline{l}} k}
+ \nabla_{\overline{l}} T_{p k {\overline{p}}} ).
\end{align}
To estimate this, we first compute
$$\nabla_p (\hat{T}_{\ov{p} \ov{j} q} R_{i\ov{q}k \ov{l}}) = (\hat{\nabla}_p \hat{T}_{\ov{p} \ov{j} q} - \Psi_{pq \ov{r}} \hat{T}_{\ov{p}\ov{j}r}) R_{i\ov{q}k\ov{l}} + \hat{T}_{\ov{p}\ov{j} q} \nabla_p R_{i\ov{q}k\ov{l}},$$
and this is bounded by $C(|\textrm{Rm}|/r + | \nabla \textrm{Rm}|)$. Using the fact that $R_{i\ov{j}k}{}^l = - \nabla_{\ov{j}} \Psi_{ik}{}^l + \hat{R}_{i\ov{j}k}{}^l$ we have
\begin{equation} \label{bdRmPsi}
|\textrm{Rm}| \le |\ov{\nabla} \Psi|+C,
\end{equation}
and hence
\begin{equation} \label{bdThatR}
|\nabla_p (\hat{T}_{\ov{p} \ov{j} q} R_{i\ov{q}k \ov{l}}) | \le C \left( | \nabla \textrm{Rm}| + \frac{|\ov{\nabla}\Psi|}{r} + \frac{1}{r} \right).
\end{equation}
A similar bound holds for the term $ \nabla_{\overline{j}} ( \hat{T}_{p i {\overline{q}}} R_{q {\overline{p}} k {\overline{l}}} )$.
The last two terms of (\ref{evRm}) involve three derivatives of torsion. We claim that
\begin{equation} \label{claimT}
| \overline{\nabla} \nabla \nabla \overline T |, \ | \overline{\nabla} \nabla \overline{\nabla} T |
\le C \left(| \nabla \textrm{Rm}| + \frac{|\nabla\Psi|+|\overline{\nabla}\Psi|}{r} + \frac{1}{r^3} \right).
\end{equation}
Indeed, applying $\nabla_{\ov{c}}$ to (\ref{nabla nabla T}), we have
\begin{align} \nonumber
\nabla_{\ov{c}} \nabla_a \nabla_b T_{\ov{i} \ov{j}}{}^{\ov{k}} = {} & g^{\ov{k}l} \left( \hat{\nabla}_{\ov{c}} \hat{\nabla}_a \hat{\nabla}_b \hat{T}_{\ov{i}\ov{j}l} - \Psi_{\ov{c}\ov{i}}{}^{\ov{q}} \hat{\nabla}_a \hat{\nabla}_b \hat{T}_{\ov{q} \ov{j} l} - \Psi_{\ov{c} \ov{j}}{}^{\ov{q}} \hat{\nabla}_a \hat{\nabla}_b \hat{T}_{\ov{i}\ov{q}l} \right. \\ \nonumber
& - \nabla_{\ov{c}} (\Psi_{ab}{}^r \hat{\nabla}_r \hat{T}_{\ov{i} \ov{j}l}) - \nabla_{\ov{c}} (\Psi_{al}{}^r \hat{\nabla}_b \hat{T}_{\ov{i}\ov{j}r}) - \nabla_{\ov{c}} (\Psi_{bl}{}^r \hat{\nabla}_a \hat{T}_{\ov{i}\ov{j}r}) \\ \label{3derivsT}
&\left. - \nabla_{\ov{c}} (\nabla_a \Psi_{bl}{}^r \hat{T}_{\ov{i} \ov{j}r}) + \nabla_{\ov{c}} (\Psi_{bl}{}^r \Psi_{ar}{}^s \hat{T}_{\ov{i} \ov{j}s}) \right).
\end{align}
The first three terms on the right hand side of (\ref{3derivsT}) are bounded by $C (\sqrt{S}+1)$ and hence by $C/r$. Next compute
\begin{align} \label{1}
\nabla_{\ov{c}} (\Psi_{ab}{}^r \hat{\nabla}_r \hat{T}_{\ov{i}\ov{j}l}) = (\nabla_{\ov{c}} \Psi_{ab}{}^r) \hat{\nabla}_r \hat{T}_{\ov{i}\ov{j}l} + \Psi_{ab}{}^r \hat{\nabla}_{\ov{c}} \hat{\nabla}_r \hat{T}_{\ov{i} \ov{j}l} - \Psi_{ab}{}^r \Psi_{\ov{c}\ov{i}}{}^{\ov{q}} \hat{\nabla}_r \hat{T}_{\ov{q}\ov{j}l} - \Psi_{ab}{}^r \Psi_{\ov{c}\ov{j}}{}^{\ov{q}} \hat{\nabla}_r \hat{T}_{\ov{i}\ov{q}l},
\end{align}
which is bounded by $C | \ov{\nabla} \Psi| + C\sqrt{S} + C S$ and hence by $C(| \ov{\nabla} \Psi| + 1/r^2)$. The same bound holds for the other two terms on the second line of (\ref{3derivsT}).
For the third line, compute
\begin{align*}
\nabla_{\ov{c}} (\nabla_a \Psi_{bl}{}^r \hat{T}_{\ov{i}\ov{j}r}) = {}& \left( \nabla_a \nabla_{\ov{c}} \Psi_{bl}{}^r + R_{a\ov{c}b}{}^p \Psi_{pl}{}^r + R_{a\ov{c}l}{}^p \Psi_{bp}{}^r - R_{a\ov{c}p}{}^r \Psi_{bl}{}^p\right) \hat{T}_{\ov{i}\ov{j}r} \\
& + (\nabla_a \Psi_{bl}{}^r) \left( \hat{\nabla}_{\ov{c}} \hat{T}_{\ov{i} \ov{j} r} - \Psi_{\ov{c} \ov{i}}{}^{\ov{q}} \hat{T}_{\ov{q} \ov{j}r} - \Psi_{\ov{c} \ov{j}}{}^{\ov{q}} \hat{T}_{\ov{i} \ov{q}r} \right),
\end{align*}
and using the fact that $\nabla_{\ov{c}} \Psi_{bl}{}^r = - R_{b\ov{c} l}{}^r + \hat{R}_{b\ov{c} l}{}^r$ we obtain
\begin{align*}
\nabla_{\ov{c}} (\nabla_a \Psi_{bl}{}^r \hat{T}_{\ov{i}\ov{j}r}) = {}& \big( - \nabla_a R_{b\ov{c} l}{}^r + \hat{\nabla}_a \hat{R}_{b\ov{c}l}{}^r - \Psi_{ab}{}^p \hat{R}_{p\ov{c}l}{}^r - \Psi_{al}{}^p \hat{R}_{b\ov{c}p}{}^r + \Psi_{ap}{}^r \hat{R}_{b\ov{c}l}{}^p \\
& + R_{a\ov{c}b}{}^p \Psi_{pl}{}^r + R_{a\ov{c}l}{}^p \Psi_{bp}{}^r - R_{a\ov{c}p}{}^r \Psi_{bl}{}^p\big) \hat{T}_{\ov{i}\ov{j}r} \\
& + (\nabla_a \Psi_{bl}{}^r) \left( \hat{\nabla}_{\ov{c}} \hat{T}_{\ov{i} \ov{j} r} - \Psi_{\ov{c} \ov{i}}{}^{\ov{q}} \hat{T}_{\ov{q} \ov{j}r} - \Psi_{\ov{c} \ov{j}}{}^{\ov{q}} \hat{T}_{\ov{i} \ov{q}r} \right).
\end{align*}
It follows that
\begin{equation} \label{2}
| \nabla_{\ov{c}} (\nabla_a \Psi_{bl}{}^r \hat{T}_{\ov{i}\ov{j}r})| \le C \left( | \nabla \textrm{Rm}| + \frac{| \textrm{Rm}|}{r} + \frac{|\nabla \Psi|}{r} +\frac{1}{r} \right).
\end{equation}
Finally,
\begin{align*}
\nabla_{\ov{c}} (\Psi_{bl}{}^r \Psi_{ar}{}^s \hat{T}_{\ov{i}\ov{j}s}) = {} & ( - R_{b\ov{c} l}{}^r + \hat{R}_{b\ov{c}l}{}^r) \Psi_{ar}{}^s \hat{T}_{\ov{i}\ov{j}s} + \Psi_{bl}{}^r ( -R_{a\ov{c} r}{}^s + \hat{R}_{a\ov{c}r}{}^s) \hat{T}_{\ov{i}\ov{j}s} \\
& + \Psi_{bl}{}^r \Psi_{ar}{}^s ( \hat{\nabla}_{\ov{c}} \hat{T}_{\ov{i} \ov{j}s} - \Psi_{\ov{c}\ov{i}}{}^{\ov{q}} \hat{T}_{\ov{q}\ov{j}s} - \Psi_{\ov{c}\ov{j}}{}^{\ov{q}} \hat{T}_{\ov{i}\ov{q}s}),
\end{align*}
giving
\begin{equation} \label{3}
|\nabla_{\ov{c}} (\Psi_{bl}{}^r \Psi_{ar}{}^s \hat{T}_{\ov{i}\ov{j}s})| \le C\left( \frac{|\textrm{Rm}|}{r} + \frac{1}{r^3} \right).
\end{equation}
Putting together (\ref{3derivsT}, \ref{1}, \ref{2}, \ref{3}), and making use of (\ref{bdRmPsi}), we obtain
$$| \overline{\nabla} \nabla \nabla \overline T |
\le C \left(| \nabla \textrm{Rm}| + \frac{|\nabla\Psi|+|\overline{\nabla}\Psi|}{r} + \frac{1}{r^3} \right),$$
and the bound for $| \overline{\nabla} \nabla \overline{\nabla} T |$ follows similarly. This completes the proof of the claim (\ref{claimT}).
From (\ref{bdThatR}) and the claim we just proved, since the first two lines of (\ref{evRm}) are of the order $|\textrm{Rm}|^2$, we have the bound
\begin{equation}
\left| \left( \ddt - \Delta_{\mathbb R} \right) {\mathrm{Rm}} \right|
\le C \left( |\textrm{Rm}|^2 + | \nabla \textrm{Rm} |+ \frac{| \nabla \Psi|+ | \ov{\nabla} \Psi|}{r} + \frac{1}{r^3} \right).
\label{bound on evolution of curvature tensor}
\end{equation}
Now
\begin{align}
\left( \ddt - \Delta \right) | {\mathrm{Rm}} |^2
={} & g^{{\overline{j}} b} g^{{\overline{c}} k} g^{{\overline{l}} d} (R^C)^{{\overline{a}} i}
R_{i {\overline{j}} k {\overline{l}}} \overline{R_{a {\overline{b}} c {\overline{d}}}} \nonumber\\
&+ g^{{\overline{a}} i} g^{{\overline{c}} k} g^{{\overline{l}} d} (R^C)^{{\overline{j}} b}
R_{i {\overline{j}} k {\overline{l}}} \overline{R_{a {\overline{b}} c {\overline{d}}}} \nonumber\\
&+ g^{{\overline{a}} i} g^{{\overline{j}} b} g^{{\overline{l}} d} (R^C)^{{\overline{c}} k}
R_{i {\overline{j}} k {\overline{l}}} \overline{R_{a {\overline{b}} c {\overline{d}}}} \nonumber\\
&+ g^{{\overline{a}} i} g^{{\overline{j}} b} g^{{\overline{c}} k} (R^C)^{{\overline{l}} d}
R_{i {\overline{j}} k {\overline{l}}} \overline{R_{a {\overline{b}} c {\overline{d}}}} \nonumber\\
&+ 2 \mathrm{Re} \left[
g^{{\overline{a}} i} g^{{\overline{j}} b} g^{{\overline{c}} k} g^{{\overline{l}} d}
\left( (\ddt-\Delta_{\mathbb R}) R_{i {\overline{j}} k {\overline{l}}} \right)
\overline{R_{a {\overline{b}} c {\overline{d}}}}
\right] \nonumber\\
&- 2 |\nabla {\mathrm{Rm}}|^2.
\label{evolution of Rm squared}
\end{align}
This together with (\ref{bound on evolution of curvature tensor}) and (\ref{bdRmPsi}) implies
\begin{align} \nonumber
\left( \ddt - \Delta \right) | {\mathrm{Rm}} |^2 \le {} & C \left( |\textrm{Rm}|^2 + |\textrm{Rm}|^3 + |\nabla \textrm{Rm}| \cdot |\textrm{Rm}| + \frac{(| \nabla \Psi|+ |\ov{\nabla}\Psi|) | \textrm{Rm}|}{r} + \frac{| \textrm{Rm}|}{r^3} \right) \\
\le {} & C \left( |{\mathrm{Rm}}|^3+\frac{1}{r} + \frac{ |\nabla\Psi|^2+|\overline{\nabla}\Psi|^2}{r} + \frac{|\textrm{Rm}|}{r^3} \right) - |\nabla{\mathrm{Rm}}|^2.
\label{bound on evolution of Rm squared}
\end{align}
To show $|{\mathrm{Rm}}|^2$ is locally uniformly bounded we will use an argument similar to the previous section. Let $\rho$ now denote a cutoff function which is identically 1 on $\ov{B_{r/4}}$, and supported in $B_{r/2}$. From the previous section we know that $S$ is bounded by $C/r^2$ on $B_{r/2}$. As before we can assume $|\nabla\rho|^2$ and $|\Delta\rho|$ are bounded by $C/r^2$. Let $K= C_1/r^2$ where $C_1$ is a constant to be determined later, and is at least large enough so that $\frac{K}2 \le K-S \le K$. Let $A$ denote a constant to be specified later. We will apply the maximum principle argument to the quantity
$$f = \rho^2 \frac{|{\mathrm{Rm}}|^2}{K-S} + A S.$$
As in the previous section, we calculate at a point $(x_0, t_0)$ where a maximum of $f$ is achieved, and we first assume that $t_0>0$ and that $x_0$ does not occur at the boundary of $\ov{B_{r/2}}$. We use the fact that $\nabla f=0$ at this point, giving us
\begin{align*}
\left( \ddt - \Delta \right) f
=& A ( \ddt - \Delta ) S
+ (- \Delta (\rho^2) ) \frac{ |{\mathrm{Rm}}|^2 }{K - S} + \rho^2 \frac{|{\mathrm{Rm}}|^2}{(K - S)^2} ( \ddt - \Delta ) S \\
& + \rho^2 \frac1{K - S} ( \ddt - \Delta ) |{\mathrm{Rm}}|^2
- 4 \mathrm{Re} \left(
\frac1{K - S} \rho \nabla \rho \cdot \overline{\nabla} |{\mathrm{Rm}}|^2
\right) + \frac{2A | \nabla S|^2}{K - S}.
\end{align*}
Our goal is to show that at $(x_0, t_0)$, we have $|\textrm{Rm}|^2 \le C/r^4$. Hence without loss of generality, we may assume that $1/r + | \textrm{Rm}|/r^3 \le C |\textrm{Rm}|^3$ and hence (\ref{bound on evolution of Rm squared}) becomes
$$\left( \ddt - \Delta \right) | {\mathrm{Rm}} |^2
\le C \left( |{\mathrm{Rm}}|^3 + \frac{Q}{r} \right) - |\nabla{\mathrm{Rm}}|^2,
$$
where for convenience we are writing $Q = |\nabla\Psi|^2+|\overline{\nabla}\Psi|^2$. For later purposes, recall from (\ref{bdRmPsi}) that $|\textrm{Rm}|^2 \le Q+C$ and from (\ref{bound grad S}) that $|\nabla S|^2 \le 2SQ$.
Also note that $\bigl|\nabla|{\mathrm{Rm}}|^2\bigr| \le 2 |{\mathrm{Rm}}| |\nabla{\mathrm{Rm}}|$.
By (\ref{bound on evolution of S}) we find that on $B_{r/2}$ we have $$(\ddt - \Delta)S \le \frac{C}{r^3} - \frac12 Q.$$ Using these, we find at $(x_0, t_0)$,
\begin{align*}
\left( \ddt -\Delta \right) f \le {} & \left( \frac{CA}{r^3} - \frac{AQ}{2} \right) + \left( \frac{C|\textrm{Rm}|^2}{Kr^2} \right) + \left( \frac{C \rho^2 |\textrm{Rm}|^2}{K^2r^3} - \frac{\rho^2 | \textrm{Rm}|^2 Q}{2K^2} \right) \\
& + \left( \frac{C\rho^2 |\textrm{Rm}|^3}{K} + \frac{C\rho^2Q}{Kr} - \frac{\rho^2}{K} | \nabla \textrm{Rm} |^2 \right) + \left( \frac{\rho^2 | \nabla \textrm{Rm}|^2}{2K} + C \frac{| \textrm{Rm}|^2}{Kr^2} \right) + \left( \frac{8ASQ}{K} \right).
\end{align*}
First choose $C_1$ in the definition of $K$ to be sufficiently large so that
$$\frac{8ASQ}{K} \le \frac{AQ}{4},$$
which is possible since $S\le C/r^2$ gives $8S/K = 8Sr^2/C_1 \le 8C/C_1$, so it suffices to take $C_1 \ge 32C$. Next observe that
$$\frac{C \rho^2 |\textrm{Rm}|^3}{K} \le \frac{\rho^2 |\textrm{Rm}|^2 Q}{2K^2} + C' \rho^2 |\textrm{Rm}|^2,$$
and hence
\begin{align*}
\left( \ddt - \Delta \right) f \le & \frac{CA}{r^3} - \frac{AQ}{4} + C''Q + C.
\end{align*}
Now we may choose $A$ sufficiently large so that $A \ge 8C''$ and we obtain at $(x_0, t_0)$,
$$Q \le \frac{C}{r^3},$$
which implies that $|\textrm{Rm}|^2 \le C/r^3$ at this point. It follows that at $(x_0, t_0)$, $f$ is bounded from above by $C/r^2$. The same bound holds if $x_0$ lies in the boundary of $\ov{B_{r/2}}$ or if $t_0=0$. Hence on $\ov{B_{r/4}}$ we obtain
$$|\textrm{Rm}|^2 \le \frac{C}{r^4},$$
as required. This completes the proof of Theorem \ref{maintheorem}.
\begin{remark} \label{remark2} \emph{In addition to the dependence discussed in Remark \ref{remark1}, the constants also depend on \emph{three} derivatives of the torsion of $\hat{g}$, with respect to $\hat{\nabla}$ or $\ov{\hat{\nabla}}$}.
\end{remark}
\section{Higher order estimates}
In this last section, we prove Corollary \ref{corollary} by establishing the estimates for $|\hat{\nabla}_{\mathbb{R}}^m g|^2_{\hat{g}}$ for $m=2, 3, \ldots$. For this part, we essentially follow the method of Gill \cite{G} (cf. \cite{Chau, CK, CLN, PSSW} in the K\"ahler case), but since the setting here is slightly more general, we briefly outline the argument. In this section, we say that a quantity is uniformly bounded if it can be bounded by $CN^{\alpha} r^{-\gamma}$ for uniform $C, \alpha, \gamma$.
We work on the ball $B_{r/4}$, and assume the bounds established in Theorem \ref{maintheorem}.
As in \cite{TW1}, define reference tensors $(\hat{g}_t)_{i\ov{j}} = \hat{g}_{i\ov{j}} - t \hat{R}^C_{i\ov{j}}$, where $\hat{R}^C_{i\ov{j}}$ is the Chern-Ricci curvature of $\hat{g}$.
For each fixed $x\in M$, let $\varphi=\varphi(x,t)$ solve
$$\frac{\partial \varphi}{\partial t} = \log \frac{\det g(t)}{\det \hat{g}}, \quad \varphi|_{t=0} =0.$$
Then $g_{i\ov{j}} = (\hat{g}_t)_{i\ov{j}} +\partial_i \partial_{\ov{j}} \varphi$ is the solution of the Chern-Ricci flow starting at $\hat{g}$.
Consider the first order differential operator $D = \frac{\partial}{\partial x^{\gamma}}$, where $x^{\gamma}$ is a real coordinate.
Applying $D$ to the equation for $\varphi$, we have
\begin{equation*}
\frac{\partial}{\partial t} (D\varphi) = g^{\ov{j}i} D g_{i\ov{j}} - \hat{g}^{\ov{j}i} D (\hat{g})_{i \ov{j}}
= g^{\ov{j}i} \partial_i \partial_{\ov{j}} (D\varphi) + g^{\ov{j}i} D (\hat{g}_t)_{i\ov{j}} - \hat{g}^{\ov{j}i} D \hat{g}_{i \ov{j}}.
\end{equation*}
Hence, working in real coordinates, the function $u=D(\varphi)$ satisfies a linear parabolic PDE of the form
\begin{equation} \label{linearPDE}
\partial_t u = a^{\alpha \beta} \partial_{x^{\alpha}} \partial_{x^{\beta}} u + f,
\end{equation}
where $A=(a^{\alpha \beta})$ is a real $2n \times 2n$ positive definite symmetric matrix whose largest and smallest eigenvalues $\Lambda$ and $\lambda$ satisfy
\begin{equation} \label{evalues}
C^{-1} \le \lambda \le \Lambda \le C,
\end{equation}
for a uniform positive constant $C$.
Moreover, the entries of $A$ are uniformly bounded in the $C^{\delta/2,\delta}$ parabolic norm for $0< \delta<1$. Indeed our Calabi-type estimate from part (i) of Theorem \ref{maintheorem},
$$| \hat{\nabla}g|^2 \le C,$$
implies that the Riemannian metric $g_R$ associated to $g$ is bounded in the $C^1$ norm in the space direction. On the other hand,
$$\ddt{} g_{i\ov{j}} = - R^{C}_{i\ov{j}} = - g^{\ov{l}k} R_{i\ov{j}k\ov{l}}.$$
From the curvature bound of Theorem \ref{maintheorem}, we know that $ g^{\ov{l}k} R_{i\ov{j}k\ov{l}}$ is uniformly bounded for any fixed $i,j$.
It follows that $\ddt{} (g_R)_{\alpha \beta}$ is also uniformly bounded for any fixed $\alpha, \beta$. Thus each entry $a^{\alpha \beta}$ of the matrix $A$ is uniformly bounded together with one space derivative and one time derivative. This implies that $a^{\alpha \beta}$ is uniformly bounded in the $C^{\delta/2, \delta}$ parabolic norm for any $0< \delta<1$.
Next, note that $u= \frac{\partial \varphi}{\partial x^{\gamma}}$ in (\ref{linearPDE}) is bounded in the $C^0$ norm since $g(t)$ is uniformly bounded and hence $| \sqrt{-1}\partial \overline{\partial} \varphi|_{C^0}$ is uniformly bounded. Moreover, $f$ in (\ref{linearPDE}) is uniformly bounded in the $C^{\delta/2, \delta}$ norm.
We can then apply Theorem 8.11.1 in \cite{Kr} to (\ref{linearPDE}) to see that $u$ is bounded in the parabolic $C^{1+\delta/2, 2+\delta}$ norm on a slightly smaller parabolic domain: $[\varepsilon', T] \times B_{r'}$ for any $\varepsilon'$ and $r'$ with $0 < \varepsilon' < \varepsilon$ and $r/8 < r' <r/4$.
Tracing through the argument in \cite{Kr}, one can check that the estimates we obtain indeed are of the desired form.
Now apply $D$ to the equality $g_{i\ov{j}}(t) = (\hat{g}_t)_{i\ov{j}} + \partial_i \partial_{\ov{j}} \varphi$. We get
$$Dg_{i\ov{j}} = D(\hat{g}_t)_{i\ov{j}} + \partial_i \partial_{\ov{j}} u,$$
where we recall that $D= \partial/\partial x^{\gamma}$ for some $\gamma$. Since we have bounds for $u$ in $C^{1+\delta/2, 2+\delta}$ this implies that $\partial_i \partial_{\ov{j}} u$ is bounded in $C^{\delta/2, \delta}$. Since $ D(\hat{g}_t)_{i\ov{j}} $ is uniformly bounded in all norms we get that $D g_{i\ov{j}}$ is uniformly bounded in $C^{\delta/2, \delta}$ for all $i,j$. Since $D= \partial/\partial x^{\gamma}$ and $\gamma$ was an arbitrary index, it follows that $\partial_{\gamma} a^{\alpha \beta}$ is uniformly bounded in $C^{\delta/2, \delta}$ for all $\alpha, \beta, \gamma$. We have a similar estimate for $\partial_{\gamma} f$. Now apply Theorem 8.12.1 in \cite{Kr} (with $k=1$) to see that, for any $\alpha$, $\partial_{\alpha} u$ is uniformly bounded in $C^{1+\delta/2, 2+\delta}$ on a slightly smaller parabolic domain. This means that $D^{{\alpha}} \varphi$ is uniformly bounded in $C^{1+\delta/2, 2+\delta}$ for any multi-index $\alpha\in \mathbb{Z}_{\geq 0}^{2n}$ with $| \alpha| \le 2$.
We can then iterate this procedure and obtain the required $C^k$ bounds for $g(t)$ for all $k$. This completes the proof of the corollary.
\begin{remark} \emph{In \cite{ShW}, we showed how to obtain higher derivative estimates for curvature using simple maximum principle arguments (following \cite{H, Shi}). However, in the case of the Chern-Ricci flow, there are difficulties in using this approach because of torsion terms that need to be controlled. An alternative method to proving the estimates in this section may be to generalize the work of Gill on the K\"ahler-Ricci flow \cite{G2}. This could give an ``elementary'' maximum principle proof, but the technical difficulties in carrying this out seem to be substantial.}
\end{remark}
\end{document}
\begin{document}
\title[A problem about reflecting lights inside grids]{A problem about reflecting lights inside grids and a mapping about finite discrete sets}
\author{Yangcheng Li}
\address{School of Mathematics and Statistics, Changsha University of Science and Technology; Hunan Provincial Key Laboratory of Mathematical Modeling and Analysis in Engineering, Changsha 410114, People's Republic of China}
\email{[email protected]}
\subjclass[2010]{11A05, 11D04, 20K01, 05A99.}
\date{}
\keywords{finite discrete sets, mapping, linear Diophantine equations, finite abelian groups}
\begin{abstract}
We first raise a problem about reflecting lights inside grids, and give two solutions to this problem. Next, we consider a similar problem in plane grids and give its solution. Moreover, we extend these two problems to $p$-dimensional space, where $p\geq2$. In this process, we introduce two mappings on finite discrete sets, and obtain two finite abelian groups.
\end{abstract}
\maketitle
\section{Introduction}
In 1978, Rauch \cite{Rauch} studied the problem of how many light sources are needed to illuminate the interior of a bounded domain. He considered the problem from the perspective of geometry, using some simple geometric properties of the ellipse and an elementary compactness argument. In this paper, we investigate a similar problem in a plane grid. Our tools are methods in number theory and algebra. It is worth mentioning that Perucca, De Sousa, and Tronto \cite{Perucca} studied the problem of arithmetic billiards and got some interesting results.
Let us describe this problem first. A plane grid consists of several horizontal and vertical line segments (as shown in Figure 1).
\begin{figure}
\caption{A $6\times4$ plane grid}
\end{figure}
We consider the problem of how to connect all points in the plane grid with lines. We assume that the boundary of the plane grid is composed of mirrors, which means that light can be reflected. If a beam of light enters the plane grid from the boundary and reflects when it meets the boundary, then the beam of light will pass through several points in the plane grid. We want to know at least how many beams of light are needed to pass through all the points in this plane grid. Therefore, we raise a problem as follows.
\begin{question}
Given an $m\times n$ plane grid whose boundary is composed of mirrors, what is the minimum number of beams of light needed to pass through all points in the plane grid?
\end{question}
\begin{example}
The following is an example of a $6\times4$ plane grid, and this plane grid needs at least $3$ beams of light to pass through all points.
\begin{figure}
\caption{At least $3$ beams of light needed for a $6\times4$ plane grid}
\end{figure}
\end{example}
Since a plane grid has four vertices, a beam of light that enters the plane grid at one vertex is bound to come out at another vertex. Therefore, for any plane grid, at least two beams of light are needed, and we call the trajectories of these two beams open paths. All light trajectories other than these two open paths are called closed paths. Obviously, we only need to determine the number of closed paths, which we write as $C(m,n)$. Example 1.2 shows that $C(6,4)=1$.
For any $m,n\in \mathbb{N}^{*}$, we have the following theorem.
\begin{theorem}
$C(m,n)=\gcd(m,n)-1$ holds for any $m,n\in \mathbb{N}^*$, where $\gcd(m,n)$ is the greatest common divisor of $m$ and $n$.
\end{theorem}
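Before turning to the proofs, Theorem 1.3 can be checked numerically. The sketch below is an illustration, not part of the proofs: it uses a hypothetical phase encoding in which each coordinate is unfolded to a phase in $\mathbb{Z}_{2m}$, so that one diagonal step of the light adds $(1,1)$ to the phase pair; an orbit and its time reversal describe the same path, and a self-reversed orbit is an open (corner-to-corner) path.

```python
from math import gcd

def count_paths(m, n):
    # Unfold each coordinate to a phase in Z_{2m} (resp. Z_{2n});
    # one diagonal step of the light adds (1, 1) to the phase pair.
    states = {(a, b) for a in range(2 * m) for b in range(2 * n)}
    closed, open_ = 0, 0
    while states:
        a, b = next(iter(states))
        orbit, x, y = set(), a, b
        while (x, y) not in orbit:
            orbit.add((x, y))
            x, y = (x + 1) % (2 * m), (y + 1) % (2 * n)
        # Reversing time sends the phase pair (u, v) to (-u, -v).
        rev = {((-u) % (2 * m), (-v) % (2 * n)) for u, v in orbit}
        if rev == orbit:
            open_ += 1        # trajectory retraces itself: an open path
            states -= orbit
        else:
            closed += 1       # an orbit and its reversal form one closed path
            states -= orbit | rev
    return closed, open_
```

Here `count_paths(6, 4)` returns `(1, 2)`: one closed path and two open paths, matching Example 1.2.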
\section{The first proof of Theorem 1.3}
The first proof of Theorem 1.3 is based on the Euclidean algorithm. We first prove two lemmas.
\begin{lemma}
$C(n,n)=n-1$ holds for any $n\in \mathbb{N}^*$.
\end{lemma}
\begin{proof}
In Figure 3, we see that the light starting from a point on the boundary of the $n\times n$ plane grid returns to the starting point after three reflections.
\begin{figure}
\caption{A closed path in $n\times n$ plane grid}
\end{figure}
Specifically, its trajectory is $(s,0)\rightarrow (n,t)\rightarrow (t,n)\rightarrow (0,s) \rightarrow (s,0)$, where $s+t=n$. That is to say, any closed path uniquely corresponds to a point on the boundary that is not a vertex. This proves Lemma 2.1, because each side of the $n\times n$ plane grid contains $n-1$ points that are not vertices.
\end{proof}
\begin{lemma}
i) $C(m,n)=C(n,m)$ holds for any $m,n\in \mathbb{N}^*$.
ii) $C(m,n)=C(m+n,n)$ holds for any $m,n\in \mathbb{N}^*$.
\end{lemma}
\begin{proof}
Statement i) is obvious. In Figure 4, we see that a beam of light entering the $n\times n$ plane grid on the right from point $P$ will pass through point $P$ again and return along its original path. Therefore, when we remove the $n\times n$ plane grid on the right, the number of closed paths of the new plane grid remains unchanged. This proves statement ii).
\begin{figure}
\caption{A closed path in $(m+n)\times n$ plane grid}
\end{figure}
\end{proof}
\begin{proof}[{\bf\text{The first proof of Theorem 1.3}}]
Based on Lemma 2.1 and Lemma 2.2, Theorem 1.3 follows immediately from the Euclidean algorithm: repeated application of Lemma 2.2 reduces $C(m,n)$ to $C(\gcd(m,n),\gcd(m,n))$, which equals $\gcd(m,n)-1$ by Lemma 2.1.
\end{proof}
\section{The second proof of Theorem 1.3}
The second proof of Theorem 1.3 is based on a mapping about points contained in plane grids and the concept of the step length of closed paths.
The set of all points contained in the $m_1\times m_2$ plane grid is
\[\mathcal{S}^2=\{(x_1,x_2)~\vert~0\leq x_i\leq m_i,m_i\in\mathbb{N}^*,i=1,2\}.\]
\begin{definition}
We define a mapping $F$ from the set $\mathcal{S}^2$ to the set $\mathcal{S}^2$ as follows.
\[F:~\mathcal{S}^2\rightarrow \mathcal{S}^2,\quad (x_1,x_2)\mapsto F(x_1,x_2)=(f_1(x_1),f_2(x_2)),\]
where the component $f_i(x_i)$ satisfies the following rule:
\begin{equation}
t=f_i^{2nm_i-x_i-t}(x_i)=f_i^{2nm_i-x_i+t}(x_i), \label{3-2}
\end{equation}
where $0\leq t\leq m_i,~i=1,2,~n\in\mathbb{Z}$, with $f_i^0(x_i)=x_i$.
\end{definition}
\begin{remark}
The definition of $f_i(x_i)$ is equivalent to the following processes.
\begin{equation}
\begin{split}
t&\mapsto f_i(t)=t+1\mapsto f_i(t+1)=t+2\mapsto\cdots\\
&\mapsto f_i(m_i-1)=m_i\mapsto f_i(m_i)=m_i-1\\
&\mapsto f_i(m_i-1)=m_i-2\mapsto\cdots\mapsto f_i(1)=0\\
&\mapsto f_i(0)=1\mapsto f_i(1)=2\mapsto\cdots, \label{3-1}
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
t&\mapsto f_i^{-1}(t)=t-1\mapsto f^{-1}_i(t-1)=t-2\mapsto\cdots\\
&\mapsto f_i^{-1}(2)=1\mapsto f_i^{-1}(1)=0\mapsto f_i^{-1}(0)=1\\
&\mapsto\cdots\mapsto f_i^{-1}(m_i-1)=m_i\mapsto f_i^{-1}(m_i)=m_i-1\\
&\mapsto f_i^{-1}(m_i-1)=m_i-2\mapsto\cdots, \label{3-3}
\end{split}
\end{equation}
where $0\leq t\leq m_i,~i=1,2$.
In fact, we can use a circle to represent the changing law of component $f_i(x_i)$. Take $m_i=5$ as an example.
\begin{figure}
\caption{The changing law of component $f_i(x_i)$ when $m_i=5$.}
\end{figure}
\end{remark}
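Concretely, the cyclic rule (\ref{3-1})--(\ref{3-3}) is a triangle wave, which can be sketched as follows (an illustrative encoding; the names `fold` and `f_cycle` are not from the text): the phase advances by one at each application of $f_i$, and the coordinate is recovered by folding the phase into $[0,m_i]$.

```python
def fold(t, m):
    # Triangle wave: phase t in Z_{2m} -> coordinate in {0, ..., m}.
    t %= 2 * m
    return t if t <= m else 2 * m - t

def f_cycle(m, steps, start=0):
    # Successive values x, f(x), f(f(x)), ... of the component map f_i,
    # obtained by advancing the phase by one step and folding.
    return [fold(start + k, m) for k in range(steps)]
```

For $m_i=5$ this reproduces the cycle $0,1,2,3,4,5,4,3,2,1,0,1,\dots$ of Figure 5.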
It is easy to see that when a beam of light starts from $(x_1,x_2)$, $F(x_1,x_2)$ is the next point that this beam of light passes through. Let $F^k=F\circ F\circ\cdots\circ F$ be the composite of $k$ mappings $F$ with $F^0(x_1,x_2)=(x_1,x_2)$. The chain $F^1,F^2,\cdots,F^k$ is the trajectory of the light starting from point $(x_1,x_2)$.
Before giving the proof of Theorem 1.3, we first study a more general question:
\begin{question}
For any two points $P_1=(x_1,x_2)$ and $P_2=(y_1,y_2)$ in $\mathcal{S}^2$, will the light starting from $P_1$ pass through $P_2$?
\end{question}
This is not possible for arbitrary pairs of points; consider, for example, the points $P_1=(0,0)$ and $P_2=(0,1)$. But we have the following theorem.
\begin{theorem}
The light starting from $P_1=(x_1,x_2)$ will pass through $P_2=(y_1,y_2)$ if and only if there are $j~(j=0,1)$ and $k~(k=0,1)$ such that
\[2\gcd{(m_1,m_2)}\mid(x_2-x_1)+((-1)^jy_2+(-1)^ky_1).\]
\end{theorem}
\begin{proof}
For any point $P_1=(x_1,x_2)\in\mathcal{S}^2$, if the light starting from $P_1=(x_1,x_2)$ will pass through $P_2=(y_1,y_2)$, there is an integer $k$ such that
\[F^{k}(x_1,x_2)=(f_1^{k}(x_1),f_2^{k}(x_2))=(y_1,y_2).\]
By the definition of $f_i(x_i)$, we let $t=y_i$ in (\ref{3-2}) and get
\[t=y_i=f_i^{2n_im_i-x_i-y_i}(x_i)=f_i^{2n_im_i-x_i+y_i}(x_i),~i=1,2,\]
which is equivalent to
\begin{equation*}
\begin{cases}
\begin{aligned}
&y_1=f_1^{2n_1m_1-x_1-y_1}(x_1)=f_1^{2n_1m_1-x_1+y_1}(x_1),\\
&y_2=f_2^{2n_2m_2-x_2-y_2}(x_2)=f_2^{2n_2m_2-x_2+y_2}(x_2).
\end{aligned}
\end{cases}
\end{equation*}
Thus, we get the following four linear Diophantine equations.
\begin{equation}
\begin{split}
2n_1m_1-x_1-y_1&=2n_2m_2-x_2-y_2;\\
2n_1m_1-x_1-y_1&=2n_2m_2-x_2+y_2;\\
2n_1m_1-x_1+y_1&=2n_2m_2-x_2-y_2;\\ \label{3-4}
2n_1m_1-x_1+y_1&=2n_2m_2-x_2+y_2.
\end{split}
\end{equation}
Eq. (\ref{3-4}) can be rewritten as
\begin{equation}
2m_2n_2-2m_1n_1=(x_2-x_1)+((-1)^jy_2+(-1)^ky_1), \label{3-5}
\end{equation}
where $j=0,1,~k=0,1$.
Eq. (\ref{3-5}) has integer solutions with respect to $n_1$ and $n_2$ if and only if
\[2\gcd{(m_1,m_2)}\mid(x_2-x_1)+((-1)^jy_2+(-1)^ky_1).\]
\end{proof}
\begin{example}
When $m_1=6,m_2=4$, the light starting from $P_1=(0,3)$ will pass through $P_2=(3,4)$ because
\[2\gcd{(6,4)}=4\mid\pm4=(x_2-x_1)+((-1)^jy_2-y_1),~j=0,1.\]
But the light starting from $P_3=(0,2)$ cannot reach $P_2=(3,4)$ because
\[2\gcd{(6,4)}=4\nmid3,-5,9,1=(x_2-x_1)+((-1)^jy_2+(-1)^ky_1),~j,k=0,1.\]
\begin{figure}
\caption{A $6\times4$ plane grid}
\end{figure}
\end{example}
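The criterion of Theorem 3.4 can be checked against a direct simulation of the beam. In the sketch below (an illustrative encoding, not from the text), the beam starting from $P_1=(x_1,x_2)$ is represented by the phase pair $(x_1,x_2)$, each step adds $(1,1)$, and coordinates are recovered by folding; the second function restates the divisibility condition.

```python
from math import gcd

def fold(t, m):
    # Triangle wave: phase t in Z_{2m} -> coordinate in {0, ..., m}.
    t %= 2 * m
    return t if t <= m else 2 * m - t

def reachable_simulated(m1, m2, p1, p2):
    # Follow the beam through p1 = (x1, x2), whose phases start at (x1, x2),
    # for one full period; report whether it ever sits at p2.
    x, y = p1
    for _ in range(4 * m1 * m2):      # at least lcm(2*m1, 2*m2) steps
        if (fold(x, m1), fold(y, m2)) == p2:
            return True
        x, y = (x + 1) % (2 * m1), (y + 1) % (2 * m2)
    return False

def reachable_criterion(m1, m2, p1, p2):
    # The divisibility criterion of Theorem 3.4.
    (x1, x2), (y1, y2) = p1, p2
    d = 2 * gcd(m1, m2)
    return any(((x2 - x1) + s * y2 + r * y1) % d == 0
               for s in (1, -1) for r in (1, -1))
```

An exhaustive comparison over small grids confirms that the two agree, and the two computations of Example 3.5 come out as stated.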
Next, we need to give the concept of the closed path in set $\mathcal{S}^2$. Note that if there is a $k\in\mathbb{Z}^+$ such that
\[F^k(x_1,x_2)=(x_1,x_2),\]
then this means that the light starting from $(x_1,x_2)$ returns to point $(x_1,x_2)$. But the trajectory at this time may not necessarily form a closed path.
\begin{example}
When $m_1=4,m_2=3$ and $P=(2,2)$, we have
\[F^{8}(2,2)=(2,2).\]
Clearly, this trajectory is not a closed path.
\begin{figure}
\caption{A $4\times3$ plane grid}
\end{figure}
\end{example}
\begin{definition}
For any point $(x_1,x_2)\in\mathcal{S}^2$, if there is a $k\in\mathbb{Z}^+$ such that
\begin{equation}
\begin{cases}
\begin{aligned}
&F^k(x_1,x_2)=(x_1,x_2),\\
&F^{k-1}(x_1,x_2)=F^{-1}(x_1,x_2), \label{3-6}
\end{aligned}
\end{cases}
\end{equation}
where $F^{-1}(x_1,x_2)=(f_1^{-1}(x_1),f_2^{-1}(x_2))$, then the chain
\[F^1,F^2,\cdots,F^{k}\]
is called a closed path starting from $(x_1,x_2)$. An open path is a path satisfying (\ref{3-6}) whose second half retraces its first half.
\end{definition}
\begin{remark}
From Theorem 3.4, it is easy to see that the light starting from $(x_1,x_2)$ will definitely return to $(x_1,x_2)$, because
\[2\gcd{(m_1,m_2)}\mid0=(x_2-x_1)+((-1)x_2+x_1).\]
This means that the positive integer $k$ in (\ref{3-6}) must exist. It is easy to see that $k\geq 2$, and $k=2$ if and only if $m_1=m_2=1$.
\end{remark}
\begin{definition}
We call the smallest $k(x_1,x_2)\in\mathbb{Z}^+$ that satisfies
\begin{equation}
\begin{cases}
\begin{aligned}
&F^{k(x_1,x_2)}(x_1,x_2)=(x_1,x_2),\\
&F^{k(x_1,x_2)-1}(x_1,x_2)=F^{-1}(x_1,x_2). \label{3-7}
\end{aligned}
\end{cases}
\end{equation}
the step length of this closed path. Correspondingly, the step length of an open path is naturally $\frac{k(x_1,x_2)}{2}$.
\end{definition}
\begin{theorem}
The step length $k(x_1,x_2)$ is an invariant independent of $x_1$ and $x_2$, denoted as $\mathcal{K}$. Specifically,
\[\mathcal{K}=2\text{lcm}(m_1,m_2),\]
where $\text{lcm}(m_1,m_2)$ is the least common multiple of $m_1$ and $m_2$.
\end{theorem}
\begin{proof}
For any point $(x_1,x_2)\in\mathcal{S}^2$, we need
\begin{equation*}
\begin{cases}
\begin{aligned}
&F^{k(x_1,x_2)}(x_1,x_2)=(f_1^{k(x_1,x_2)}(x_1),f_2^{k(x_1,x_2)}(x_2))=(x_1,x_2),\\
&F^{k(x_1,x_2)-1}(x_1,x_2)=(f_1^{k(x_1,x_2)-1}(x_1),f_2^{k(x_1,x_2)-1}(x_2))=(f_1^{-1}(x_1),f_2^{-1}(x_2)).
\end{aligned}
\end{cases}
\end{equation*}
\underline{Case 1}. If $x_i>0,~i=1,2$, then $f_i^{-1}(x_i)=x_i-1$, and we thus have
\[F^{-1}(x_1,x_2)=(f_1^{-1}(x_1),f_2^{-1}(x_2))=(x_1-1,x_2-1).\]
In order to get the step length $k(x_1,x_2)$, we let $t=x_i$ and $t=x_i-1$, respectively, in (\ref{3-2}) and get
\[x_i=f_i^{2n_im_i-2x_i}(x_i)=f_i^{2n_im_i}(x_i),~i=1,2,\]
\[x_i-1=f_i^{2n_im_i-2x_i+1}(x_i)=f_i^{2n_im_i-1}(x_i)=f_i^{-1}(x_i),~i=1,2,\]
which is equivalent to
\begin{equation*}
\begin{cases}
\begin{aligned}
&x_1=f_1^{2n_1m_1-2x_1}(x_1)=f_1^{2n_1m_1}(x_1),\\
&x_2=f_2^{2n_2m_2-2x_2}(x_2)=f_2^{2n_2m_2}(x_2),
\end{aligned}
\end{cases}
\end{equation*}
and
\begin{equation*}
\begin{cases}
\begin{aligned}
&x_1-1=f_1^{2n_1m_1-2x_1+1}(x_1)=f_1^{2n_1m_1-1}(x_1)=f_1^{-1}(x_1),\\
&x_2-1=f_2^{2n_2m_2-2x_2+1}(x_2)=f_2^{2n_2m_2-1}(x_2)=f_2^{-1}(x_2).
\end{aligned}
\end{cases}
\end{equation*}
Hence, we have
\begin{equation}
k(x_1,x_2)=2n_1m_1=2n_2m_2. \label{3.1}
\end{equation}
Therefore, we need to find $n_1$ and $n_2$ minimizing the common value $2n_1m_1=2n_2m_2$. Let $d=\gcd(m_1,m_2)$ be the greatest common divisor of $m_1$ and $m_2$, so that $m_1=dm'_1$ and $m_2=dm'_2$ with $\gcd(m'_1,m'_2)=1$. The equation (\ref{3.1}) is equivalent to
\begin{equation*}
\frac{n_2}{n_1}=\frac{m_1}{m_2}=\frac{m'_1}{m'_2},
\end{equation*}
which leads to
\begin{equation}
\begin{split}
n_1&=m'_2=\frac{m_2}{d}=\frac{m_2}{\gcd(m_1,m_2)},\\
n_2&=m'_1=\frac{m_1}{d}=\frac{m_1}{\gcd(m_1,m_2)}. \label{3.2}
\end{split}
\end{equation}
Therefore, the step length of the closed path is
\[k(x_1,x_2)=2n_1m_1=2n_2m_2=\frac{2m_1m_2}{\gcd(m_1,m_2)}=2\text{lcm}(m_1,m_2),\]
where $\text{lcm}(m_1,m_2)$ is the least common multiple of $m_1$ and $m_2$.
\underline{Case 2}. If there exists $i~(i=1,2)$ such that $x_i=0$, then $f_i^{-1}(x_i)=1$. We take $(x_1,x_2)=(0,x_2)$ as an example; the other cases can be discussed similarly. We have
\[F^{-1}(x_1,x_2)=(f_1^{-1}(x_1),f_2^{-1}(x_2))=(1,x_2-1).\]
In order to get the step length $k(x_1,x_2)$, we let $t=0,x_2,1$ and $t=x_2-1$, respectively, in (\ref{3-2}) and get
\begin{equation*}
\begin{cases}
\begin{aligned}
&0=f_1^{2n_1m_1}(x_1),\\
&x_2=f_2^{2n_2m_2-2x_2}(x_2)=f_2^{2n_2m_2}(x_2),
\end{aligned}
\end{cases}
\end{equation*}
and
\begin{equation*}
\begin{cases}
\begin{aligned}
&1=f_1^{2n_1m_1-1}(x_1)=f_1^{2n_1m_1+1}(x_1)=f_1^{-1}(x_1),\\
&x_2-1=f_2^{2n_2m_2-2x_2+1}(x_2)=f_2^{2n_2m_2-1}(x_2)=f_2^{-1}(x_2).
\end{aligned}
\end{cases}
\end{equation*}
Therefore, we again obtain Eq. (\ref{3.1}) and hence the same result.
Obviously, the step length is independent of $x_1$ and $x_2$, so the step length of any closed path is the same, and the step length of an open path is half that of a closed path.
\end{proof}
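Theorem 3.10 can also be verified directly: in the phase encoding (an illustrative sketch, not part of the proof), the step length is simply the period of translation by $(1,1)$ on $\mathbb{Z}_{2m_1}\times\mathbb{Z}_{2m_2}$, which is $\mathrm{lcm}(2m_1,2m_2)=2\,\mathrm{lcm}(m_1,m_2)$ independently of the starting point.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def step_length(m1, m2, start=(0, 0)):
    # Number of diagonal steps after which the light returns to its
    # starting state; phases advance by (1, 1) modulo (2*m1, 2*m2).
    a, b = start
    x, y, k = (a + 1) % (2 * m1), (b + 1) % (2 * m2), 1
    while (x, y) != (a, b):
        x, y = (x + 1) % (2 * m1), (y + 1) % (2 * m2)
        k += 1
    return k
```

The value is $\mathcal{K}=2\mathrm{lcm}(m_1,m_2)$ for every starting point, and $\mathcal{K}=2$ exactly when $m_1=m_2=1$, as in Remark 3.6.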
\begin{proof}[{\bf\text{The second proof of Theorem 1.3}}]
Note that each mapping $F$ corresponds to a diagonal of a minimum unit in the plane grid. An $m_1\times m_2$ plane grid has a total of $m_1m_2$ minimum units and $2m_1m_2$ diagonals. Since any plane grid has exactly two open paths, each of step length $\text{lcm}(m_1,m_2)$, we get that the number of closed paths is
\[C(m_1,m_2)=\frac{2m_1m_2-2\text{lcm}(m_1,m_2)}{2\text{lcm}(m_1,m_2)}=\gcd(m_1,m_2)-1.\]
\end{proof}
We are also interested in how many boundary points the closed path will pass through, which corresponds to how many times the light is reflected in $\mathcal{S}^2$.
The set of all points on the boundary of the $m_1\times m_2$ plane grid is
\[\overline{\mathcal{S}}^2=\{(x_1,x_2)\in\mathcal{S}^2~\vert~\exists i,x_i=0~\text{or}~m_i,~i=1,2\}.\]
\begin{corollary}
From (\ref{3.2}), the number of points in $\overline{\mathcal{S}}^2$ that any closed path passes through, that is, the number of boundary points, is
\[\begin{split}
B(m_1,m_2)&=2n_1+2n_2\\
&=2\times\frac{m_2}{\gcd(m_1,m_2)}+2\times\frac{m_1}{\gcd(m_1,m_2)}\\
&=\frac{2(m_1+m_2)}{\gcd(m_1,m_2)}.
\end{split}\]
\end{corollary}
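Corollary 3.11 can likewise be checked by counting reflections along one closed orbit in the phase encoding (an illustrative sketch; in this encoding an orbit through $(a,b)$ is self-reversed exactly when $a\equiv b \pmod{\gcd(m_1,m_2)}$, so the phase pair $(1,0)$ always lies on a closed path once $\gcd(m_1,m_2)>1$).

```python
from math import gcd

def reflections_on_closed_path(m1, m2):
    # Count boundary points passed by the closed path through the
    # phase pair (1, 0); a coordinate sits on the boundary exactly
    # when its phase is congruent to 0 modulo m_i.
    assert gcd(m1, m2) > 1, "closed paths exist only when gcd > 1"
    a, b = 1, 0
    x, y, hits = a, b, 0
    while True:
        if x % m1 == 0:
            hits += 1    # first coordinate equals 0 or m1: a reflection
        if y % m2 == 0:
            hits += 1    # second coordinate equals 0 or m2: a reflection
        x, y = (x + 1) % (2 * m1), (y + 1) % (2 * m2)
        if (x, y) == (a, b):
            return hits
```

For the $6\times4$ grid this counts $B(6,4)=2(6+4)/\gcd(6,4)=10$ reflections.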
\begin{corollary}
Another interesting fact is that the sum of the coordinates of all points (counted with multiplicity) that any closed path passes through is the same. Specifically,
\[\sum_{k=1}^{\mathcal{K}}f^{k}(x_i)=2n_i\bigg(\sum_{t=0}^{m_i}t\bigg)-n_im_i=n_im_i^2=m_i\text{lcm}(m_1,m_2),\]
and
\[\sum_{k=1}^{\mathcal{K}}\sum_{i=1}^{2}f^{k}(x_i)=(m_1+m_2)\text{lcm}(m_1,m_2).\]
\end{corollary}
In Definition 3.7, we call the chain $F^1,F^2,\cdots,F^{k}$ a closed path starting from $(x_1,x_2)$. Now let us consider the set $\mathcal{F}$ composed of the mappings $F^k$, i.e.,
\[\mathcal{F}=\{F^1,F^2,\cdots,F^k,\cdots\}.\]
\begin{corollary}
By Theorem 3.10, the step length $\mathcal{K}$ of any closed path is an invariant and $\mathcal{K}=2\text{lcm}(m_1,m_2)$, so the set $\mathcal{F}$ is a finite set, and its order is $\mathcal{K}$. Furthermore, the set $\mathcal{F}$ forms an Abelian group generated by $F^1$ under the composition of the mappings $F^k$. In particular,
\[\mathcal{F}\simeq\mathbb{Z}_{\mathcal{K}}.\]
\end{corollary}
Now let us consider another interesting question:
\begin{question}
Given any two points $P_1$ and $P_2$ in the $m_1\times m_2$ plane grid, can we reach $P_2$ from $P_1$ along the diagonal of a minimum unit?
\end{question}
\begin{example}
When $m_1=6,m_2=4$, the point $P_1$ can reach $P_2$ but cannot reach $P_3$.
\begin{figure}
\caption{A $6\times4$ plane grid}
\end{figure}
\end{example}
To solve Question 3.14, we define a mapping $F_{s,t}$ from the set $\mathcal{S}^2$ to the set $\mathcal{S}^2$ as follows.
\[F_{s,t}:~\mathcal{S}^2\rightarrow \mathcal{S}^2,\quad (x_1,x_2)\mapsto F_{s,t}(x_1,x_2)=(f_1^{(-1)^s}(x_1),f_2^{(-1)^t}(x_2)),~s,t=0,1,\]
with
\[F_{s,t}^k(x_1,x_2)=(f_1^{(-1)^sk}(x_1),f_2^{(-1)^tk}(x_2)),~k\in\mathbb{Z},\]
where the component $f_i(x_i)$ satisfies the rule (\ref{3-2}).
Let $\mathcal{G}$ be the set composed of some composites of the mappings $F_{s,t}^k$, i.e.,
\[\mathcal{G}=\{F_{s_1,t_1}^{k_1}\circ\cdots\circ F_{s_r,t_r}^{k_r}~|~s_i,t_i=0,1,k_i\in\mathbb{Z},1\leq i\leq r,r\in\mathbb{N}^*\}.\]
It is easy to verify that the set $\mathcal{G}$ forms an Abelian group under the composition of the mappings $F_{s,t}^k$. Furthermore, since $F_{1,1}=F_{0,0}^{-1}$ and $F_{1,0}=F_{0,1}^{-1}$, the group $\mathcal{G}$ is generated by the two elements $F_{0,0}$ and $F_{0,1}$. Therefore,
\[\mathcal{G}=\{F_{0,0}^{k_1}\circ F_{0,1}^{k_2}~|~k_i\in\mathbb{Z},i=1,2\}.\]
By Theorem 3.10, for any $F\in\mathcal{G}$, we have
\[F^{\mathcal{K}}=\mathrm{id}.\]
Hence $\mathcal{G}$ is a finitely generated torsion Abelian group, and is therefore finite. Then
\[\mathcal{G}\simeq\mathbb{Z}_{\mathcal{K}}^2=\mathbb{Z}_{\mathcal{K}}\oplus\mathbb{Z}_{\mathcal{K}}.\]
Suppose that $F\in\mathcal{G}$ and $P\in\mathcal{S}^2$. An action of $\mathcal{G}$ on $\mathcal{S}^2$ is given by the mapping
\[\mathcal{G}\times\mathcal{S}^2\rightarrow\mathcal{S}^2,\quad (F,P)\mapsto F(P).\]
Note that
\[f_1^{(-1)^s}(x_1)\equiv x_1+1\pmod{2},~~f_2^{(-1)^t}(x_2)\equiv x_2+1\pmod{2},\]
thus
\[f_1^{(-1)^s}(x_1)+f_2^{(-1)^t}(x_2)\equiv x_1+x_2\pmod{2}\]
holds for any $s~(s=0,1)$ and $t~(t=0,1)$. We have
\begin{theorem}
The set $\mathcal{S}^2$ is divided into the following two orbits.
\[\begin{split}
S_e&=\{P=(x_1,x_2)~|~x_1+x_2\equiv0\pmod{2}\},\\
S_o&=\{P=(x_1,x_2)~|~x_1+x_2\equiv1\pmod{2}\},
\end{split}\]
with
\[S_e\cup S_o=\mathcal{S}^2,~~S_e\cap S_o=\varnothing,\]
and
\begin{equation*}
|S_e|=
\begin{cases}
|S_o|+1, &m_1\equiv m_2\equiv 0\pmod{2},\\
|S_o|, &\text{otherwise}.
\end{cases}
\end{equation*}
\end{theorem}
Now the answer to Question 3.14 is clear. The point $P_1$ can reach $P_2$ if and only if $P_1$ and $P_2$ are in the same orbit.
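This orbit description can be confirmed by a direct search over single diagonal moves. The sketch below (our own encoding) computes the reachability classes of a $6\times4$ grid and checks that they are exactly the two parity classes, with $|S_e|=|S_o|+1$ since both side lengths are even:

```python
from itertools import product

def fold(j, m):
    """Reflect a single +-1 step back into [0, m]."""
    return -j if j < 0 else (2 * m - j if j > m else j)

def orbits(m1, m2):
    """Label each grid point with its reachability class under diagonal moves."""
    comp, label = {}, 0
    for p in product(range(m1 + 1), range(m2 + 1)):
        if p in comp:
            continue
        comp[p], stack = label, [p]
        while stack:
            x1, x2 = stack.pop()
            for d1, d2 in product((1, -1), repeat=2):
                q = (fold(x1 + d1, m1), fold(x2 + d2, m2))
                if q not in comp:
                    comp[q] = label
                    stack.append(q)
        label += 1
    return comp

comp = orbits(6, 4)
assert len(set(comp.values())) == 2
# each class is exactly one parity class of x1 + x2
assert all(comp[p] == comp[q] for p in comp for q in comp
           if (p[0] + p[1]) % 2 == (q[0] + q[1]) % 2)
sizes = sorted(sum(1 for p in comp if comp[p] == c) for c in set(comp.values()))
assert sizes == [17, 18]   # |S_o| = 17, |S_e| = 18 = |S_o| + 1
```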
\section{The generalization of the problem}
In this section, we first extend Questions 1.1 and 3.3 to $p$-dimensional space, where $p\geq2$. The second proof of Theorem 1.3 provides us with a way to define Questions 1.1 and 3.3 abstractly in high-dimensional space.
\begin{definition}
An $m_1\times m_2\times\cdots\times m_p$ spatial grid is defined as the following point set:
\[\mathcal{S}^p=\{(x_1,x_2,\cdots,x_p)~\vert~0\leq x_i\leq m_i,m_i\in\mathbb{N}^*,i=1,2,\cdots,p,~p\geq2\}.\]
The set of vertices of an $m_1\times m_2\times\cdots\times m_p$ spatial grid is defined as
\[\hat{\mathcal{S}^p}=\{(x_1,x_2,\cdots,x_p)\in\mathcal{S}^p~\vert~\text{for all}~i,~x_i=0~\text{or}~m_i\}.\]
\end{definition}
\begin{definition}
We define a mapping $F$ from the set $\mathcal{S}^p$ to the set $\mathcal{S}^p$ as follows.
\[F:~\mathcal{S}^p\rightarrow \mathcal{S}^p,\quad (x_1,x_2,\cdots,x_p)\mapsto F(x_1,x_2,\cdots,x_p)=(f_1(x_1),f_2(x_2),\cdots,f_p(x_p)),\]
where the component $f_i(x_i)$ satisfies the following rule:
\begin{equation}
t=f_i^{2nm_i-x_i-t}(x_i)=f_i^{2nm_i-x_i+t}(x_i), \label{4-1}
\end{equation}
where $0\leq t\leq m_i,~i=1,2,\cdots,p,~n\in\mathbb{Z}$, with $f_i^0(x_i)=x_i$.
\end{definition}
The definition of $f_i(x_i)$ is also equivalent to the processes (\ref{3-1}) and (\ref{3-3}).
We first consider the generalization of Question 3.3. For any two points $P_1=(x_1,x_2,\cdots,x_p)$ and $P_2=(y_1,y_2,\cdots,y_p)$, we have
\begin{theorem}
The light starting from $P_1=(x_1,x_2,\cdots,x_p)$ will pass through $P_2=(y_1,y_2,\cdots,y_p)$ if and only if there are $k_i~(k_i=0,1),i=1,2,\cdots,p$ such that the following Diophantine equations
\begin{equation}
\begin{split}
2n_1m_1-x_1+(-1)^{k_1}y_1&=2n_2m_2-x_2+(-1)^{k_2}y_2\\
&=\cdots=2n_pm_p-x_p+(-1)^{k_p}y_p \label{4-2}
\end{split}
\end{equation}
has an integer solution with respect to $n_i,~i=1,2,\cdots,p$.
\end{theorem}
\begin{proof}
For any point $P_1=(x_1,x_2,\cdots,x_p)\in\mathcal{S}^p$, if the light starting from $P_1$ will pass through $P_2=(y_1,y_2,\cdots,y_p)$, there is an integer $k$ such that
\[F^{k}(x_1,x_2,\cdots,x_p)=(f_1^{k}(x_1),f_2^{k}(x_2),\cdots,f_p^{k}(x_p))=(y_1,y_2,\cdots,y_p).\]
By the definition of $f_i(x_i)$, we let $t=y_i$ in (\ref{4-1}) and get
\[y_i=f_i^{2n_im_i-x_i+(-1)^{k_i}y_i}(x_i),~k_i=0,1,~i=1,2,\cdots,p.\]
Therefore, we get Theorem 4.3.
\end{proof}
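The criterion of Theorem 4.3 can be cross-checked against direct trajectory membership. Both functions below rely on the closed form $f_i^{k}(x_i)=\text{fold}(x_i+k,m_i)$, a triangle-wave reformulation of rule (\ref{4-1}) introduced only for this sketch:

```python
from math import lcm
from itertools import product

def fold(j, m):
    r = j % (2 * m)
    return r if r <= m else 2 * m - r

def passes_through(P1, P2, dims):
    """Trajectory test, using the closed form f_i^k(x_i) = fold(x_i + k, m_i)."""
    K = 2 * lcm(*dims)
    return any(all(fold(x + k, m) == y for x, y, m in zip(P1, P2, dims))
               for k in range(K))

def diophantine(P1, P2, dims):
    """Criterion of Theorem 4.3: a common value k = 2 n_i m_i - x_i + (-1)^{k_i} y_i,
    i.e. k congruent to -x_i + s y_i (mod 2 m_i) simultaneously, with s = +-1."""
    K = 2 * lcm(*dims)
    return any(all((k + x - s * y) % (2 * m) == 0
                   for x, y, m, s in zip(P1, P2, dims, signs))
               for signs in product((1, -1), repeat=len(dims))
               for k in range(K))

dims = (4, 3, 2)
pts = list(product(*(range(m + 1) for m in dims)))
assert all(passes_through(P, Q, dims) == diophantine(P, Q, dims)
           for P in pts for Q in pts)
```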
\begin{remark}
A necessary condition for Eq. (\ref{4-2}) to have a solution is that for any $1\leq i<j\leq p$, there are $k_i,k_j\in\{0,1\}$ such that
\begin{equation}
2\gcd{(m_i,m_j)}\mid(x_j-x_i)+((-1)^{k_j}y_j+(-1)^{k_i}y_i). \label{4-3}
\end{equation}
Especially, when $p=2$, condition (\ref{4-3}) is sufficient, which is Theorem 3.4.
\end{remark}
\begin{definition}
For any point $(x_1,x_2,\cdots,x_p)\in\mathcal{S}^p$, if there is a $k\in\mathbb{Z}^+$ such that
\begin{equation}
\begin{cases}
\begin{aligned}
&F^k(x_1,x_2,\cdots,x_p)=(x_1,x_2,\cdots,x_p),\\
&F^{k-1}(x_1,x_2,\cdots,x_p)=F^{-1}(x_1,x_2,\cdots,x_p), \label{4-4}
\end{aligned}
\end{cases}
\end{equation}
where $F^{-1}(x_1,x_2,\cdots,x_p)=(f_1^{-1}(x_1),f_2^{-1}(x_2),\cdots,f_p^{-1}(x_p))$, then the chain
\[F^1,F^2,\cdots,F^{k}\]
is called a closed path starting from $(x_1,x_2,\cdots,x_p)$. An open path is defined as a closed path whose second half retraces its first half.
\end{definition}
\begin{remark}
From Theorem 4.3, it is easy to see that the light starting from $(x_1,x_2,\cdots,x_p)$ will definitely return to $(x_1,x_2,\cdots,x_p)$, because when $k_i=0$ for all $i=1,2,\cdots,p$, the following Diophantine equation
\[2n_1m_1=2n_2m_2=\cdots=2n_pm_p\]
has infinitely many integer solutions. This means that the positive integer $k$ in (\ref{4-4}) must exist.
\end{remark}
\begin{definition}
We call the smallest $k(x_1,x_2,\cdots,x_p)\in\mathbb{Z}^+$ satisfying
\begin{equation}
\begin{cases}
\begin{aligned}
&F^{k(x_1,x_2,\cdots,x_p)}(x_1,x_2,\cdots,x_p)=(x_1,x_2,\cdots,x_p),\\
&F^{k(x_1,x_2,\cdots,x_p)-1}(x_1,x_2,\cdots,x_p)=F^{-1}(x_1,x_2,\cdots,x_p) \label{4-5}
\end{aligned}
\end{cases}
\end{equation}
the step length of this closed path. Correspondingly, the step length of an open path is naturally $\frac{k(x_1,x_2,\cdots,x_p)}{2}$.
\end{definition}
\begin{theorem}
The step length $k(x_1,x_2,\cdots,x_p)$ is an invariant independent of $x_1,x_2,\cdots,x_p$, denoted as $\mathcal{K}$. Specifically,
\[\mathcal{K}=2\text{lcm}(m_1,m_2,\cdots,m_p).\]
where $\text{lcm}(m_1,m_2,\cdots,m_p)$ is the least common multiple of $m_1,m_2,\cdots,m_p$.
\end{theorem}
\begin{proof}
For any point $(x_1,x_2,\cdots,x_p)\in\mathcal{S}^p$, we need
\[\begin{split}
&F^{k(x_1,x_2,\cdots,x_p)}(x_1,x_2,\cdots,x_p)=(x_1,x_2,\cdots,x_p)\\
=~&(f_1^{k(x_1,x_2,\cdots,x_p)}(x_1),f_2^{k(x_1,x_2,\cdots,x_p)}(x_2),\cdots,f_p^{k(x_1,x_2,\cdots,x_p)}(x_p)),
\end{split}\]
and
\[\begin{split}
&F^{k(x_1,x_2,\cdots,x_p)-1}(x_1,x_2,\cdots,x_p)=(f_1^{-1}(x_1),f_2^{-1}(x_2),\cdots,f_p^{-1}(x_p))\\
=~&(f_1^{k(x_1,x_2,\cdots,x_p)-1}(x_1),f_2^{k(x_1,x_2,\cdots,x_p)-1}(x_2),\cdots,f_p^{k(x_1,x_2,\cdots,x_p)-1}(x_p)).
\end{split}\]
\underline{Case 1}. If $x_i>0$ for all $1\leq i\leq p$, then $f_i^{-1}(x_i)=x_i-1$, and we thus have
\[\begin{split}
F^{-1}(x_1,x_2,\cdots,x_p)&=(f_1^{-1}(x_1),f_2^{-1}(x_2),\cdots,f_p^{-1}(x_p))\\
&=(x_1-1,x_2-1,\cdots,x_p-1).
\end{split}\]
In order to get the step length $k(x_1,x_2,\cdots,x_p)$, we let $t=x_i$ and $t=x_i-1$, respectively, in (\ref{4-1}) and get
\[x_i=f_i^{2n_im_i-2x_i}(x_i)=f_i^{2n_im_i}(x_i),~1\leq i\leq p,\]
\[x_i-1=f_i^{2n_im_i-2x_i+1}(x_i)=f_i^{2n_im_i-1}(x_i)=f_i^{-1}(x_i),~1\leq i\leq p.\]
Hence, we have
\begin{equation}
k(x_1,x_2,\cdots,x_p)=2n_1m_1=2n_2m_2=\cdots=2n_pm_p. \label{4-6}
\end{equation}
Therefore, we need to find $n_i,1\leq i\leq p$ that make
\[2n_1m_1=2n_2m_2=\cdots=2n_pm_p\]
the smallest. Thus, we have
\begin{equation}
n_i=\frac{\text{lcm}(m_1,m_2,\cdots,m_p)}{m_i}, \label{4.1}
\end{equation}
where $\text{lcm}(m_1,m_2,\cdots,m_p)$ is the least common multiple of $m_i,1\leq i\leq p$.
\underline{Case 2}. If there exists $i~(1\leq i\leq p)$ such that $x_i=0$, then $f_i^{-1}(x_i)=1$. Take $(x_1,x_2,\cdots,x_p)=(0,x_2,\cdots,x_p)$ as an example; the other cases can be discussed similarly. We have
\[\begin{split}
F^{-1}(x_1,x_2,\cdots,x_p)&=(f_1^{-1}(x_1),f_2^{-1}(x_2),\cdots,f_p^{-1}(x_p))\\
&=(1,x_2-1,\cdots,x_p-1).
\end{split}\]
In order to get the step length $k(x_1,x_2,\cdots,x_p)$, we let $t=0$ and $t=x_i$, and then $t=1$ and $t=x_i-1$, respectively, in (\ref{4-1}) and get
\begin{equation*}
\begin{cases}
\begin{aligned}
&0=f_1^{2n_1m_1}(x_1),\\
&x_i=f_i^{2n_im_i-2x_i}(x_i)=f_i^{2n_im_i}(x_i),~2\leq i\leq p,
\end{aligned}
\end{cases}
\end{equation*}
and
\begin{equation*}
\begin{cases}
\begin{aligned}
&1=f_1^{2n_1m_1-1}(x_1)=f_1^{2n_1m_1+1}(x_1)=f_1^{-1}(x_1),\\
&x_i-1=f_i^{2n_im_i-2x_i+1}(x_i)=f_i^{2n_im_i-1}(x_i)=f_i^{-1}(x_i),~2\leq i\leq p.
\end{aligned}
\end{cases}
\end{equation*}
Therefore, we again obtain Eq. (\ref{4-6}) and reach the same conclusion.
Hence, the step length of the closed path is
\[k(x_1,x_2,\cdots,x_p)=2n_1m_1=2n_2m_2=\cdots=2n_pm_p=2\text{lcm}(m_1,m_2,\cdots,m_p).\]
Obviously, the step length is independent of $x_i,1\leq i\leq p$, so the step length of any closed path is the same, and the step length of an open path is half that of a closed path.
\end{proof}
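Theorem 4.8 admits a quick numerical check using the triangle-wave closed form $f_i^{k}(x_i)=\text{fold}(x_i+k,m_i)$, our reformulation of rule (\ref{4-1}):

```python
from math import lcm
from itertools import product

def fold(j, m):
    """Closed form for the bouncing coordinate: f_i^k(x_i) = fold(x_i + k, m_i)."""
    r = j % (2 * m)
    return r if r <= m else 2 * m - r

def step_length(x, dims):
    """Smallest k > 0 with F^k(x) = x and F^{k-1}(x) = F^{-1}(x)."""
    k = 1
    while True:
        if all(fold(xi + k, m) == xi and fold(xi + k - 1, m) == fold(xi - 1, m)
               for xi, m in zip(x, dims)):
            return k
        k += 1

dims = (4, 3, 2)
K = 2 * lcm(*dims)   # claimed invariant: K = 2 lcm(m_1, ..., m_p) = 24
assert all(step_length(x, dims) == K
           for x in product(*(range(m + 1) for m in dims)))
```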
\begin{theorem}
The number of closed paths (not open paths) is
\[C(m_1,m_2,\cdots,m_p)=2^{p-2}\frac{m_1m_2\cdots m_p}{\text{lcm}(m_1,m_2,\cdots,m_p)}-2^{p-2}.\]
\end{theorem}
When $p=2$, we have
\[C(m_1,m_2)=\frac{m_1m_2}{\text{lcm}(m_1,m_2)}-1=\gcd{(m_1,m_2)}-1,\]
which is exactly Theorem 1.3.
\begin{proof}
Note that each mapping $F$ corresponds to a diagonal of a minimum unit in the spatial grid. An $m_1\times m_2\times\cdots\times m_p$ spatial grid has a total of $m_1m_2\cdots m_p$ minimum units, each with $2^{p-1}$ main diagonals, and hence $2^{p-1}m_1m_2\cdots m_p$ diagonals.
The set $\hat{\mathcal{S}^p}$ contains $2^p$ points. Each open path joins two points of $\hat{\mathcal{S}^p}$, so the number of open paths in the spatial grid is $2^{p-1}$.
Therefore, since each open path consists of $\text{lcm}(m_1,m_2,\cdots,m_p)$ diagonals and each closed path consists of $2\text{lcm}(m_1,m_2,\cdots,m_p)$ diagonals, the number of closed paths is
\[\begin{aligned}
C(m_1,m_2,\cdots,m_p)=&\frac{2^{p-1}m_1m_2\cdots m_p-2^{p-1}\text{lcm}(m_1,m_2,\cdots,m_p)}{2\text{lcm}(m_1,m_2,\cdots,m_p)}\\
=&~2^{p-2}\frac{m_1m_2\cdots m_p}{\text{lcm}(m_1,m_2,\cdots,m_p)}-2^{p-2}.
\end{aligned}\]
\end{proof}
\begin{example}
When $p=3$ and $m_1=4,m_2=3,m_3=2$, we have
\[C(4,3,2)=2\times\frac{4\times3\times2}{\text{lcm}(4,3,2)}-2=2.\]
\begin{figure}
\caption{Two closed paths in the $4\times3\times2$ spatial grid}
\end{figure}
\end{example}
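The count can also be checked by brute force. One can encode a directed trajectory by phase coordinates $a_i\in\mathbb{Z}_{2m_i}$ with $x_i=\text{fold}(a_i)$, so that one light step is $a\mapsto a+(1,\cdots,1)$ and beam reversal is $a\mapsto-a$ (an encoding we introduce only for this sketch); closed paths are then the translation orbits not fixed by reversal:

```python
from math import lcm
from itertools import product

def count_closed_paths(dims):
    """Count closed paths via phase coordinates a_i in Z_{2 m_i}."""
    mods = [2 * m for m in dims]
    K = 2 * lcm(*dims)
    seen, closed = set(), 0
    for a in product(*(range(n) for n in mods)):
        if a in seen:
            continue
        # directed trajectory: the translation orbit a -> a + (1, ..., 1)
        forward = {tuple((ai + k) % n for ai, n in zip(a, mods)) for k in range(K)}
        # the reversed beam: a -> -a applied to the whole orbit
        backward = {tuple(-bi % n for bi, n in zip(b, mods)) for b in forward}
        seen |= forward | backward
        if forward != backward:   # self-reversing orbits are the open paths
            closed += 1
    return closed

assert count_closed_paths((4, 3, 2)) == 2   # matches the example above
assert count_closed_paths((6, 4)) == 1
```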
\begin{corollary}
The sum of the coordinates of all points (counted with multiplicity) that any closed path passes through is the same. Specifically,
\[\sum_{k=1}^{\mathcal{K}}f^{k}(x_i)=2n_i\bigg(\sum_{t=0}^{m_i}t\bigg)-n_im_i=n_im_i^2=m_i\text{lcm}(m_1,m_2,\cdots,m_p),\]
and
\[\sum_{k=1}^{\mathcal{K}}\sum_{i=1}^{p}f^{k}(x_i)=\bigg(\sum_{i=1}^{p}m_i\bigg)\text{lcm}(m_1,m_2,\cdots,m_p).\]
\end{corollary}
In Definition 4.5, we call the chain $F^1,F^2,\cdots,F^{k}$ a closed path starting from $(x_1,x_2,\cdots,x_p)$. Now let us consider the set $\mathcal{F}$ composed of the mappings $F^k$, i.e.,
\[\mathcal{F}=\{F^1,F^2,\cdots,F^k,\cdots\}.\]
\begin{corollary}
By Theorem 4.8, the step length $\mathcal{K}$ of any closed path is an invariant and $\mathcal{K}=2\text{lcm}(m_1,m_2,\cdots,m_p)$, so the set $\mathcal{F}$ is a finite set, and its order is $\mathcal{K}$. Furthermore, the set $\mathcal{F}$ forms an Abelian group generated by $F^1$ under the composition of the mappings $F^k$. In particular,
\[\mathcal{F}\simeq\mathbb{Z}_{\mathcal{K}}.\]
\end{corollary}
We can also consider Question 3.14 in $\mathcal{S}^p$.
\begin{question}
Given any two points $P_1$ and $P_2$ in the $m_1\times m_2\times\cdots\times m_p$ spatial grid, can we reach $P_2$ from $P_1$ along the diagonal of a minimum unit?
\end{question}
\begin{example}
When $m_1=m_2=m_3=1$, the point $P_1$ can reach $P_2$ but cannot reach $P_3$.
\begin{figure}
\caption{A $1\times1\times1$ spatial grid}
\end{figure}
\end{example}
To solve Question 4.13, we define a mapping $F_{s_1,s_2,\cdots,s_p}$ from the set $\mathcal{S}^p$ to the set $\mathcal{S}^p$ as follows.
\[F_{s_1,s_2,\cdots,s_p}:~\mathcal{S}^p\rightarrow \mathcal{S}^p,\quad (x_1,x_2,\cdots,x_p)\mapsto F_{s_1,s_2,\cdots,s_p}(x_1,x_2,\cdots,x_p),\]
where $s_i=0,1,i=1,2,\cdots,p$ and
\[F_{s_1,s_2,\cdots,s_p}(x_1,x_2,\cdots,x_p)=(f_1^{(-1)^{s_1}}(x_1),f_2^{(-1)^{s_2}}(x_2),\cdots,f_p^{(-1)^{s_p}}(x_p)),\]
with
\[F_{s_1,s_2,\cdots,s_p}^k(x_1,x_2,\cdots,x_p)=(f_1^{(-1)^{s_1}k}(x_1),f_2^{(-1)^{s_2}k}(x_2),\cdots,f_p^{(-1)^{s_p}k}(x_p)),~k\in\mathbb{Z},\]
where the component $f_i(x_i)$ satisfies the rule (\ref{4-1}).
Let $\mathcal{G}$ be the set composed of some composites of the mappings $F_{s_1,s_2,\cdots,s_p}^k$, i.e.,
\[\mathcal{G}=\{F_{s_{11},s_{21},\cdots,s_{p1}}^{k_1}\circ\cdots\circ F_{s_{1r},s_{2r},\cdots,s_{pr}}^{k_r}|s_{ij}=0,1,k_j\in\mathbb{Z},1\leq i\leq p,1\leq j\leq r,r\in\mathbb{N}^*\}.\]
It is easy to verify that the set $\mathcal{G}$ forms an Abelian group under the composition of the mappings $F_{s_1,s_2,\cdots,s_p}^k$. Furthermore, since each mapping with $s_1=1$ is the inverse of one with $s_1=0$, the group $\mathcal{G}$ is generated by the $2^{p-1}$ elements $F_{0,s_2,\cdots,s_p}$ with $s_2,\cdots,s_p\in\{0,1\}$. Therefore,
\[\mathcal{G}=\{F_{0,0,\cdots,0}^{k_1}\circ\cdots\circ F_{0,1,\cdots,1}^{k_{2^{p-1}}}~|~k_i\in\mathbb{Z},1\leq i\leq 2^{p-1}\}.\]
By Theorem 4.8, for any $F\in\mathcal{G}$, we have
\[F^{\mathcal{K}}=\mathrm{id}.\]
Hence $\mathcal{G}$ is a finitely generated torsion Abelian group, and is therefore finite. Then
\[\mathcal{G}\simeq\mathbb{Z}_{\mathcal{K}}^{2^{p-1}}=\mathbb{Z}_{\mathcal{K}}\oplus\cdots\oplus\mathbb{Z}_{\mathcal{K}}~~(2^{p-1}~\text{summands}).\]
Suppose that $F\in\mathcal{G}$ and $P\in\mathcal{S}^p$. An action of $\mathcal{G}$ on $\mathcal{S}^p$ is given by the mapping
\[\mathcal{G}\times\mathcal{S}^p\rightarrow\mathcal{S}^p,\quad (F,P)\mapsto F(P).\]
We now prove that the set $\mathcal{S}^p$ is divided into $2^{p-1}$ orbits. By the definition of $f^{(-1)^{s_i}}_i(x_i)$, we have
\[f^{(-1)^{s_i}}_i(x_i)\equiv x_i+1\pmod{2},\]
thus
\begin{equation}
f^{(-1)^{s_i}}_i(x_i)+f^{(-1)^{s_j}}_j(x_j)\equiv x_i+x_j\pmod{2} \label{4-7}
\end{equation}
holds for all $s_i,s_j\in\{0,1\}$ and $1\leq i,j\leq p$.
For any integer $a$, we introduce the symbol $[a]_2\in\{0,1\}$ such that
\[a\equiv[a]_2\pmod{2}.\]
Using the symbol $[a]_2$, we give the concept of the index of point $P\in\mathcal{S}^p$.
\begin{definition}
The index of point $P=(x_1,x_2,\cdots,x_p)\in\mathcal{S}^p$ is defined as
\[I(P)=([x_1+x_2]_2,[x_1+x_3]_2,\cdots,[x_1+x_p]_2).\]
\end{definition}
\begin{theorem}
The points $P_1=(x_1,x_2,\cdots,x_p)$ and $P_2=(y_1,y_2,\cdots,y_p)$ are in the same orbit if and only if the indexes of $P_1$ and $P_2$ are the same, i.e.,
\begin{equation}
I(P_1)=I(P_2). \label{4-8}
\end{equation}
\end{theorem}
\begin{proof}
If $P_1$ and $P_2$ are in the same orbit, from (\ref{4-7}), we have
\begin{equation}
[x_i+x_j]_2=[y_i+y_j]_2,~1\leq i,j\leq p. \label{4-9}
\end{equation}
Note that for any $1\leq i,j,k\leq p$, if
\[[x_i+x_j]_2=[y_i+y_j]_2\quad \text{and}\quad [x_i+x_k]_2=[y_i+y_k]_2,\]
then
\[\begin{split}
[x_j+x_k]_2&=[x_j+x_k+2x_i]_2\\
&=[(x_i+x_j)+(x_i+x_k)]_2\\
&=[x_i+x_j]_2+[x_i+x_k]_2\\
&=[y_i+y_j]_2+[y_i+y_k]_2\\
&=[2y_i+y_j+y_k]_2\\
&=[y_j+y_k]_2.
\end{split}\]
Therefore, (\ref{4-8}) and (\ref{4-9}) are equivalent.
\end{proof}
Since each of the $p-1$ components of $I(P)$ lies in $\{0,1\}$, there are $2^{p-1}$ different values of $I(P)$.
\begin{theorem}
The set $\mathcal{S}^p$ is divided into $2^{p-1}$ orbits. The points in each orbit have the same index. Denote the orbit with index $I(P)=(\delta_1,\delta_2,\cdots,\delta_{p-1})$ by
\[S_{(\delta_1,\delta_2,\cdots,\delta_{p-1})}=\{P=(x_1,x_2,\cdots,x_p)~|~I(P)=(\delta_1,\delta_2,\cdots,\delta_{p-1})\}.\]
We have
\[\begin{split}
|S_{(\delta_1,\delta_2,\cdots,\delta_{p-1})}|=&~\prod_{i=1}^{p}\frac{(m_i+1)+(-1)^{[\delta_{i-1}]_2}[m_i+1]_2}{2}\\
&+\prod_{i=1}^{p}\frac{(m_i+1)+(-1)^{[\delta_{i-1}+1]_2}[m_i+1]_2}{2},
\end{split}\]
where $\delta_0=0$.
\end{theorem}
When $p=2$, we get the following two orbits
\[\begin{split}
S_{(0)}&=\{P=(x_1,x_2)~|~I(P)=(0)\},\\
S_{(1)}&=\{P=(x_1,x_2)~|~I(P)=(1)\}.
\end{split}\]
These are exactly the orbits $S_e$ and $S_o$ in Theorem 3.16.
Now the answer to Question 4.13 is clear. The point $P_1$ can reach $P_2$ if and only if $P_1$ and $P_2$ are in the same orbit.
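The orbit-size formula can be verified by brute force; the sketch below (with helper names of our own choosing) compares a direct count of each index class with the formula:

```python
from itertools import product

def index(P):
    """I(P) = ([x_1 + x_2]_2, ..., [x_1 + x_p]_2)."""
    return tuple((P[0] + xi) % 2 for xi in P[1:])

def orbit_size(dims, delta):
    """Right-hand side of the orbit-size formula, with delta_0 = 0."""
    d = (0,) + tuple(delta)
    prod1 = prod2 = 1
    for m, di in zip(dims, d):
        prod1 *= ((m + 1) + (-1) ** (di % 2) * ((m + 1) % 2)) // 2
        prod2 *= ((m + 1) + (-1) ** ((di + 1) % 2) * ((m + 1) % 2)) // 2
    return prod1 + prod2

dims = (4, 3, 2)
pts = list(product(*(range(m + 1) for m in dims)))
for delta in product((0, 1), repeat=len(dims) - 1):
    assert sum(1 for P in pts if index(P) == delta) == orbit_size(dims, delta)
```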
\end{document}
\begin{document}
\begin{abstract}
It is proved in this note that the analogues of the Bennequin
inequality which provide an upper bound for
the Bennequin invariant of a Legendrian knot in the standard contact
three dimensional space in terms of the least degree in the framing
variable of the HOMFLY and the Kauffman polynomials are not sharp.
Furthermore, the relationships between these restrictions on the range
of the Bennequin invariant are investigated, which leads to a
new simple proof of the inequality involving
the Kauffman polynomial.
\end{abstract}
\title{On Legendrian knots and polynomial invariants}
\section{Introduction}
The {\em standard contact three dimensional space} is $\RM^3$ with
coordinates $u,p,q$ endowed with
the plane field induced by the contact form $\alpha=du-pdq$ and
the orientation induced by the volume form $du \wedge dp \wedge dq$.
A smooth knot embedded in $\RM^3$ is called {\em Legendrian}
if it is everywhere tangent to this
plane field. A Legendrian knot is completely determined by its
projection to the plane $(p,q)$, the other coordinate $u$ being the
integral
of the form $pdq$ along the knot projection. Any smooth plane curve
unambiguously defines a Legendrian immersion in $\RM^3$ provided that the integral of
$pdq$ vanishes along this curve (so that the Legendrian ``lift'' is
closed).
Any Legendrian knot is {\em horizontal} (i.e. the tangent vector is never vertical)
with respect to this projection.
Hence a {\em contact isotopy} (a one parameter family of Legendrian knots)
is a particular case of what is classically called a {\em regular
isotopy} (the first Reidemeister move is forbidden on knot
projections).
Two horizontal oriented knots are regular isotopic if and only if they are
isotopic and have the same {\em writhe} $w$ (self-linking number with
respect to the vertical framing) and the same {\em Whitney index} $r$
(degree of the Gauss map of the knot projection).
Let $l$ be some Legendrian knot.
By definition, its Bennequin invariant is
$tb(l)=-w(l)$ and its Maslov invariant is $\mu (l)=r(l)$.
The following restrictions are known (see below for a list of
the authors of these results):
{\bf Theorem.}
\begin{itemize}
\item[a).] $tb(l)+|\mu(l)| \leq 2g_4(l)-1$
\item[b).] $tb(l)+|\mu(l)| \leq e_P(l)$
\item[c).] $tb(l) \leq e_Y(l)$
\end{itemize}
Here $g_4(l)$ denotes the slice genus of $l$, $e_P(l)$ (resp. $e_Y(l)$)
the least degree of the framing variable $a$ in the HOMFLY
(resp. Kauffman) polynomial of $l$ (see below for the precise
definition and normalization of these polynomials).
In this paper, the relationships between these three
inequalities are investigated. A new proof of $c)$ is given.
It is shown that in spite of the misleading evidence provided by the
knot tables, there is no inequality like $e_P \leq 2g_4-1$,
from which it would follow that $b)$ implies $a)$.
We provide examples showing that inequality $e_Y \leq e_P$ is false,
again in spite of what could be expected from the tables.
As a consequence, none of the three inequalities above is sharp.
\section{Known results.}
\label{known}
\subsection{Inequalities}
\begin{itemize}
\item Consider a braid $\sigma$ with $n(\sigma)$ strands and whose exponent sum is $c(\sigma)$.
Denote by $\hat{\sigma}$ the closure of this braid and let $I$ be some knot invariant
that does not detect the orientation of knots.
It follows from \cite{Be} (theorem 8, proposition 6, and paragraph 24) that if the inequality\footnote{The sign difference with \cite{Be} is due to the fact that we use a different contact form and a different orientation.}
$-c(\sigma)-n(\sigma) \leq I(\hat{\sigma})$ holds for any braid $\sigma$,
then, for any Legendrian knot $l$ having
topological knot type $K$, $tb(l)+|\mu(l)| \leq I(K)$.
\item Bennequin \cite{Be} (theorem 3) proved that $|c(\sigma)|-n(\sigma) \leq 2g_3(\hat{\sigma})-1$, where $g_3$
denotes the genus. Hence, by the previous discussion, $tb(l)+|\mu(l)| \leq 2g_3(l)-1$
(\cite{Be}, theorem 11).
\item In \cite{Ru5}, Lee Rudolph proved that, for any braid $\sigma$, $|c(\sigma)|-n(\sigma) \leq 2g_4(\hat{\sigma})-1$,
and hence inequality a) follows.
\item In \cite{Mo} and \cite{FW}, Morton and Franks-Williams proved, using ``elementary'' combinatorial means,
that\footnote{We follow \cite{Kau} for the normalization of the Homfly
polynomial. This explains the difference with the original
inequality of \cite{Mo}, where different variables and a different normalization are assumed.}
$-c(\sigma)-n(\sigma) \leq e_P(\hat{\sigma})$.
As observed in \cite{FT}, inequality b) follows from this and the preceding discussion.
\item An inequality similar to inequality c) with $e_Y$
replaced by the least degree of the framing variable in the Kauffman
polynomial {\em reduced modulo} $2$ was obtained by Fuchs and Tabachnikov \cite{FT}.
\item Inequality c) was proved by Tabachnikov \cite{Ta} using Turaev's state model
for the Kauffman polynomial.
\item Using another approach, Chmutov, Goryunov and Murakami \cite{CGM} proved inequality b) and Chmutov and Goryunov \cite{CG}
proved inequality c). In \cite{CGM,CG,Ta}, inequalities are stated in the more general context of the contact manifold $ST^* \RM^2$.
Note also that an analogue of inequality b) for transversal knots
in $ST^* \RM^2$ is proved independently in \cite{GH} and in \cite{Ta}.
\item All the results mentioned in this section have a counterpart in Lee Rudolph's theory of
quasi-positive links.
See \cite{Ru1, Ru2, Ru3, Ru4, Ru5, Ru6} for analogues of inequalities a), b) and c).
\item Tanaka \cite{Tan} has shown that inequality $c)$ is a consequence of \cite{Yo},
lemma 1, which itself relies on Turaev's state model for the Kauffman polynomial.
\end{itemize}
\subsection{Non-Sharpness}
Below in this paper, when it is stated that an inequality of the
form
\centerline{(contact isotopy invariant) $\leq$ (topological invariant)}
is {\em not sharp}, this means that there exists some topological
knot type $K$ such that the supremum of all the values of
this contact isotopy invariant computed on all Legendrian
representatives of $K$ is less than the value of the
topological invariant computed on $K$.
\begin{itemize}
\item Using topological methods, Y. Kanda \cite{Ka} has computed
the maximal Bennequin number realizable by a Legendrian representative of some Pretzel knots, showing that the bound
$tb(l) \leq 2g_3(l)-1$ is not sharp for these knot types.
The same result follows from Rudolph's \cite{Ru3}, modulo the identification in
\cite{Ru6} of $TB(K)=\max\{tb(l) \hskip2mm ; \hskip2mm l
\textrm{ has topological type} \hskip2mm K\}$ with the invariant $q(K)$ (which was defined, using
another symbol, in \cite{Ru1}). See also \cite{Ru4}.
\item J. Epstein \cite{Ep} and L. Ng \cite{Ng} have conjectured
non-sharpness of inequality c) for the knot $8_{19}$. This was
proved by J. Etnyre and K. Honda \cite{EH}, as a byproduct of
their classification of Legendrian torus knots.
\item Sharpness has been established for some specific knot types (see,
for example, \cite{Tan,Ep,Ng}).
\item Inequality $a)$ is not optimal already
{\em at the level of concordance classes } (see \cite{Fe}).
\end{itemize}
\section{Knot polynomials.}
Here, the precise definition of the topological invariants $e_P$ and
$e_Y$ is given. We follow the normalization of \cite{Kau}, pp 215-222.
To any regular oriented knot projection we associate $R$,
a Laurent polynomial
in the variables $z$ and $a$ defined by
the following skein relations:
$$R(\rond)=\frac{ a-a^{-1}}{ z}$$
$$R(\fcpo)-R(\fcmo)=z\cdot R(\ouvertVff)$$
$$R(\boucled)=a \cdot R(\droit)$$
$R$ is a regular isotopy invariant and
the HOMFLY polynomial $P(z,a)=a^{-w}R(z,a)$ (where $w=\sharp \fcpo
- \sharp \fcmo$) is a knot invariant.
The least exponent of the variable $a$ in $P$ is
denoted by $e_P$. It is known that $P$ is independent of the orientation, and that $e_P+1$ is an additive
knot invariant with respect to connected sum.
{\bf Example.} $P(\postref)=a^{-3}(\frac{a-a^{-1}}{z})(2a-a^{-1}+az^2)$, hence
$e_P(\postref)=-5$.
To any regular knot projection we associate $D$,
a Laurent polynomial of
the variables $z$ and $a$
defined by
the following skein relations:
$$D(\rond)=\frac{ a-a^{-1}}{ z}+1$$
$$D(\fcp)-D(\fcm)=z\cdot(D(\ouvertV) -D(\ouvertH))$$
$$D(\boucled)=a \cdot D(\droit)$$
$D$ is a regular isotopy invariant and
the Kauffman polynomial $Y(z,a)=a^{-w}D(z,a)$ is a knot invariant.
The least exponent of the variable $a$ in $Y$ is
denoted by $e_Y$. It is known that $e_Y+1$ is an additive knot invariant
with respect to connected sum.
{\bf Example.} $Y(\postref)=a^{-3}(1+\frac{a-a^{-1}}{z})(2a-a^{-1}+z-a^{-2}z+az^2-a^{-1}z^2)$
hence $e_Y(\postref)=-6$.
\section{The Jaeger formula.}
This formula (see \cite{Kau}, pp 219-222) shows that
the Kauffman polynomial of some knot can be computed from the
HOMFLY polynomials of the knots obtained by ``splicing'' a regular
projection of this knot at some crossings.
Consider a link diagram $K$.
A {\em state} $\sigma$ is the following data: A link $K_\sigma$
obtained from $K$
by splicing some of the crossings (\fcp is modified into \ouvertH or \ouvertV, or is left unchanged),
and an orientation of $K_\sigma$.
A state $\sigma$ being given, a {\em local weight} is associated to
each crossing $x$ of $K$. If $x$ does not belong to the spliced
crossings, then the
local weight of $x$ is one.
Consider now an $x$ that belongs to the spliced crossings and suppose that $x=\fcp$ before splicing.
There are $8$ possible local pictures. If $x$ is spliced to
$\ouvertVn$ then the weight of $x$ is $(t-t^{-1})$. If $x$ is spliced
to $\ouvertHp$, then the weight of $x$ is
$-(t-t^{-1})$. The weight of $x$ vanishes in all remaining possibilities.
The weight of $\sigma$, denoted by $[K,\sigma]$, is the product of all
these local weights.
Denote by $r_\sigma$ the degree of the Gauss map (Whitney index) of
the oriented plane curve underlying the knot diagram $K_\sigma$.
{\bf Theorem.} (Jaeger)
$D(K)(t-t^{-1},a^2t^{-1})= \sum_{\sigma} (ta^{-1})^{r_\sigma}
[K,\sigma]R(K_\sigma)(t-t^{-1},a).$
\section{The Legendrian version of the Jaeger formula.}
It is a reformulation of the formula above in terms of the projection
of Legendrian knots in the plane $(q,u)$, called {\em fronts}.
These are not regular projections.
A generic front has transverse self-intersections \cross and semi-cubic
cusps like \cuspg or \cuspd. It has no vertical tangent. A typical front is \bifly.
A generic front completely determines the Legendrian knot which lies above, hence {\em
in the sequel a Legendrian knot $l$ is identified with its front when there is no ambiguity.}
To the (generic) front of some Legendrian knot $l$, a generic knot diagram,
called the {\em morsification}
of the front, is associated
by the following rule:
Each crossing \cross is modified to \fcp.
Each cusp pointing leftward \cuspg is modified to \gauche.
Each cusp pointing rightward \cuspd is modified to \boucled.
For example, the morsification of \bifly is \trifi.
{\bf Claims.}
The morsification of the front of $l$ is such that:
\begin{itemize}
\item The corresponding knot has the topological type of $l$.
\item The Whitney index of the morsification is $r=-\mu(l)$.
\item The writhe of the morsification is $w=-tb(l)$.
\item The regular isotopy type of the morsification is invariant under
Legendrian isotopy.
\item[$\square$]
\end{itemize}
$R$ and $D$ are
defined for regular knot diagrams.
Observe that $D$ is defined for unoriented diagrams, and that inverting the orientation leaves $R$,
$tb$ and $w$ invariant (but changes the sign of $\mu$). In the sequel,
$R(l)$ (resp. $D(l)$) denotes the
polynomial computed by applying skein relations to the morsification of the front of $l$.
However this is the same as the polynomial computed by applying skein relations to the (generically
regular) projection of $l$ in the plane $(p,q)$.
{\em A state} of the front of $l$ consists of the following data: A front $l_\sigma$
obtained from the
one of $l$ by splicing some crossings (\cross can be modified to \ouvertH, or
\cuspd \cuspg, or left unchanged),
and the choice of an orientation of the resulting $l_\sigma$.
A state $\sigma$ of $l$ being given, to each crossing $x=$\cross of $l$,
a local weight is associated.
If $x$ is left unspliced, then its weight is one.
Suppose now that $x$ belongs to the spliced crossings. There are $8$ possible local
pictures. If $x$ is spliced to \cuspdn \cuspgp, then its weight is $ta^{-2}(t-t^{-1})$.
If $x$ is spliced to \ouvertHp then its weight is $t^{-1}-t$.
The weight of $x$ vanishes in all remaining possibilities.
The weight of $\sigma$, denoted by $[l,\sigma]$ is the product of all the local weights.
Denote by $\sharp \cuspgp$ (resp. $ \sharp \cuspdn$) the number of
cusps of $l_\sigma$ which point leftward
(resp. rightward) and which are oriented upward (resp. downward).
Using this language, the Jaeger formula translates to (see the proof below):
$$ (LJ) \hskip1cm D(l)(t-t^{-1},a^2t^{-1})= \sum_{\sigma}
(at^{-1})^{\sharp \cuspgp +\sharp \cuspdn}
[l,\sigma]R(l_\sigma)(t-t^{-1},a).$$
{\bf Example 1.}
$R(\flying)=R(\infi)= \frac{a^2-1} {z}$, and
$D(\flying)=D(\infi)=a+\frac{a^2-1 }{z}$.
There are two states for $\flying$ (the two possible orientations).
Hence the right hand side of $(LJ)$ is:
$$(ta^{-1})^0 \frac{a^2-1}{ (t-t^{-1})} + (at^{-1})^2
\frac{a^2-1} {(t-t^{-1})}.$$
This is equal to $D(t-t^{-1}, a^2t^{-1})=
(a^2t^{-1})(1+\frac{a^2t^{-1}-a^{-2}t}{t-t^{-1}})$, as expected.
{\bf Example 2.}
$R(\bifly)=R(\trifi)=\frac{a^3-a} {z}$.
$D(\bifly)=D(\trifi)=a^2+\frac{a^3-a }{z}$.
There are 4 states whose weights do not vanish: \statun, \statdeux, \stattrois \stattrois,
and \stattrois.
The right hand side of $(LJ)$ is:
$$(at^{-1}) \frac{a^3-a} {t-t^{-1}}
+(at^{-1}) \frac{a^3-a} {t-t^{-1}}
+(at^{-1})^4 (ta^{-2})(t-t^{-1}) (\frac{a^2-1} {t-t^{-1}})^{2}
+(at^{-1})^2 (t^{-1}-t) \frac{a^2-1} {t-t^{-1}}.$$
This is equal to $D(t-t^{-1},a^2t^{-1})=(a^2t^{-1})^2(1+\frac{a^2t^{-1}-a^{-2}t}{t-t^{-1}})$,
as expected.
{\it Proof of (LJ).} Consider some Legendrian knot $l$, and the knot diagram $K$ obtained by
rounding all the cusps of the front of $l$ (\cuspg becomes \gauche and \cuspd becomes \droit).
Denote by $\nu$ half the number of cusps of $l$. By the axioms for $D$,
$$D(l)(t-t^{-1},a^2t^{-1})=(a^2t^{-1})^{\nu}D(K)(t-t^{-1},a^2t^{-1}).$$
There is a one-to-one correspondence between the states of $l$ and the states of $K$.
Writing the Jaeger formula for $K$ in terms of $l$ will give $(LJ)$:
Consider some state $\sigma$ of $l$. Denote by $\nu_\sigma$ half the
number of cusps of $l_\sigma$ and by $V$ (resp. by $H$) the number of crossings
of $l$ (or of $K$) that are spliced vertically (resp. horizontally) in $\sigma$.
The following relations hold: $R(K_\sigma)=a^{-\nu_\sigma}R(l_\sigma)$,
$[K,\sigma]=(-1)^H(t-t^{-1})^{V+H}$, $\nu=\nu_\sigma-V$, and
$\nu_\sigma-r(K_\sigma)={\sharp \cuspgp +\sharp \cuspdn}$.
Plug this into the expression of $D(l)$ above:
$$ D(l)(t-t^{-1},a^2t^{-1})= \sum_{\sigma}
(at^{-1})^{\sharp \cuspgp +\sharp \cuspdn}
(-1)^H(a^2t^{-1})^{-V}(t-t^{-1})^{V+H}R(l_\sigma)(t-t^{-1},a). $$
This is $(LJ)$.
$\square$
\section{Inequality $c)$ follows from inequality $b)$}
\label{b_implique_c}
Since $tb=-w$, inequality $b)$
is equivalent to the fact that there is no negative
power of $a$ occurring in $a^{-|\mu|}R(l)$, i.e., {\em it is a genuine polynomial in $a$}.
Similarly, we want to prove that $D(l)$ is a genuine polynomial in $a$.
This is a consequence of the following lemma.
{\bf Lemma.} The contribution of each state in the right hand side of $(LJ)$ is a genuine
polynomial in $a$.
{\it Proof.}
Consider a state $\sigma$ of $l$.
Denote by $V$ the number of crossings that are spliced to \cuspdn \cuspgp, and by
$H$ the number of crossings that are spliced to \ouvertHp.
The contribution of $\sigma$ is
$$(at^{-1})^{\sharp \cuspgp + \sharp \cuspdn}
(ta^{-2})^{V}(-1)^{H}(t-t^{-1})^{V+H}R(l_\sigma)(t-t^{-1},a).$$
The least exponent of $a$ in $R(l_\sigma)(t-t^{-1},a)$ is not less than
$|\mu(l_\sigma)|$, by inequality $b)$.
Denote by $E$ the least exponent of $a$ in the contribution of $\sigma$. Then
$E \geq \sharp \cuspgp + \sharp \cuspdn -2\cdot V +|\mu|$.
On the other hand $\mu=\sharp \cuspgp - \sharp \cuspdn$, hence $E \geq 2
(\sharp \cuspgp - V) +|\mu|-\mu$.
Since splicing \cross to \cuspdn \cuspgp creates one \cuspgp, $V$ is not bigger than
$\sharp \cuspgp$, and hence $E\geq0$. $\square$
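The final counting step can be checked by brute force. In the sketch below (our notation, not the paper's), `Lp` stands for $\sharp \cuspgp$, `Ld` for $\sharp \cuspdn$, and $V$ ranges only up to `Lp` since each vertical splice creates one \cuspgp:

```python
def E_lower_bound(Lp, Ld, V):
    # lower bound on the least exponent of a, from the proof:
    # E >= Lp + Ld - 2V + |mu|, with mu = Lp - Ld
    mu = Lp - Ld
    return Lp + Ld - 2 * V + abs(mu)

# the bound is nonnegative whenever V <= Lp
assert all(E_lower_bound(Lp, Ld, V) >= 0
           for Lp in range(8) for Ld in range(8) for V in range(Lp + 1))
```

The extremal case $V = \sharp \cuspgp$ gives $E \geq \sharp \cuspdn - \sharp \cuspgp + |\mu| = |\mu| - \mu \geq 0$, exactly as in the proof.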
{\bf Remark about this proof.} As explained in \cite{FT}, inequality b) has a ``simple'' and natural
proof by \cite{Be} and \cite{Mo} or \cite{FW}, much simpler than the known proofs of a) for instance.
Since the Jaeger formula is also proved (\cite{Kau}) by ``elementary'' means (such as checking its invariance
under the Reidemeister moves), this proof of $c)$ is, in my opinion, simple and natural.
I find it remarkable that the Jaeger formula fits so well between b) and c).
{\bf Remark.} Like $a)$, $b)$ follows from a more general inequality about transverse knots
(see \cite{Be,Ta,GH}).
The proof above, which lacks a natural transverse counterpart,
seems to indicate that $c)$ is an inequality about Legendrian knots only.
\section{Relationship between $g_4$ and $e_P$.}
It is proved here that inequality $a)$ can be stronger than
inequality $b)$.
{\bf Proposition.}
The difference between $e_P$ and $2 \cdot g_4 -1$ can be arbitrarily negative or positive.
{\bf Corollary.} Inequality b) is not
sharp, i.e., $$\max\{tb(l)+ |\mu(l)| \hskip2mm ; \hskip2mm l
\textrm{ has topological type} \hskip2mm K\}<e_P(K)$$ for some knot types $K$.
{\bf Remark.} $2 \cdot g_4 -1 < e_P$ seems much more difficult to realize than the converse:
The tables indicate no
contradiction to
$e_P(K) \leq 2 \cdot g_4(K)-1$ for the first $84$ knots (which arise from diagrams with
less than $10$ crossings).
{\bf Question.} This leaves the question of Morton \cite{Mo} open: Is it
true that $e_P(K) \leq 2 \cdot g_3(K)-1$? (Recall that $g_3$ denotes
the genus).
This inequality is true for alternating
knots and for positive knots, as proved in \cite{Cr}, and for knots
with braid index less than 4 \cite{DM}. It was checked by Alexander
Stoimenow for all knots which admit a diagram with less than 17 crossings.
{\it Proof of the proposition.}
Consider some knot $K$ such that $e_P+1$ is negative.
For instance $K=\postref$.
Since $e_P+1$ is additive under connected sum, there exist knots with arbitrarily negative $e_P$.
On the other hand, $g_4$ is never negative. Hence $e_P-(2 \cdot g_4-1)$ can be
made arbitrarily negative.
Let $K$ be the closure of the braid
$\sigma$= \blanc \sbb \sbb \scc \scc \saa \saa \sbb \sccc \sa \sa \sa \blanc.
This knot admits a projection with ten crossings.
Changing the first $\sbb$ \blanc of $\sigma$ to $\sbbb$ \blanc, one gets
a braid whose closure is the trivial knot. Hence $g_4(K) \leq 1$ (it is in fact $1$).
Computation shows that $e_P(K)=3$ (hence $K$ is an example for which $b)$ is not sharp).
Denote by $K^{\sharp d}$ the connected sum of $d$ copies of $K$. Then $g_4(K^{\sharp d}) \leq d$,
and $e_P(K^{\sharp d})=d(3+1)-1$. Hence $e_P-(2 \cdot g_4 - 1)$ can be made
arbitrarily positive. $\square$
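The arithmetic in the last step can be sketched as follows, with the values $e_P(K)=3$ and $g_4(K)\leq 1$ taken from the proof:

```python
# e_P + 1 is additive under connected sum, while g_4 is subadditive.
# Values below come from the proof: e_P(K) = 3 and g_4(K) <= 1.
def eP_sum(d):
    return d * (3 + 1) - 1       # e_P(K^{#d}) = d(3 + 1) - 1

def g4_bound(d):
    return d                     # g_4(K^{#d}) <= d * g_4(K) <= d

# e_P - (2 g_4 - 1) >= (4d - 1) - (2d - 1) = 2d, unbounded above
gaps = [eP_sum(d) - (2 * g4_bound(d) - 1) for d in range(1, 8)]
assert gaps == [2 * d for d in range(1, 8)]
```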
\section{Relationships between inequalities $b)$ and $c)$.} \label{ineq}
By section \ref{b_implique_c}, inequality $b)$ implies inequality $c)$.
However, inequalities $b)$ and $c)$ are independent in the following sense:
$b)$ implies that $tb(l)\leq e_P(l)$.
Inspection of the tables seems to indicate that $e_Y \leq e_P$, and hence that $tb(l)\leq e_P(l)$
is weaker than $tb(l) \leq e_Y(l)$ (inequality $c)$). This is however not true.
Among all the
prime knots which admit a diagram with less than 15 crossings (there
are roughly $60{,}000$ of them), there are 22 knots satisfying
$e_P < e_Y$. One of the two examples with 12 crossings is the closure
of the following braid:
\centerline{\Scc \Sbb \Saa \Sbbb \Scc \Sddd \Saaa \Sbb \Scc \Sddd \Sccc \Sbbb \Scc \Saaa
\Sbb \Saaa \Sdd \Scc
\Sbb \Saa }
{\bf Corollary.} Inequality $c)$ is not sharp.
{\bf Question.} Is it true that $e_Y \leq e_P$ for alternating knots
(none of the 13 examples cited above is alternating)?
\end{document}
\begin{document}
\title{Two Extensions of Topological Feedback Entropy\thanks{This work was supported
by Australian Research Council grant DP110102401.}
}
\titlerunning{Topological Feedback Entropy Extensions}
\author{Rika Hagihara \and
Girish N. Nair
}
\authorrunning{R. Hagihara, G.~N. Nair}
\institute{
G.~N. Nair \at
Department of Electrical and Electronic Engineering, University of Melbourne, VIC 3010 Australia\\
Tel.: +61-3-8344-6701\\
Fax.:+61-3-8344-6678\\
\email{[email protected]}
}
\date{}
\maketitle
\begin{abstract}
Topological feedback entropy (TFE) measures the intrinsic rate at which a continuous, fully observed, deterministic
control system generates information for controlled set-invariance.
In this paper we generalise this notion in two directions;
one is to continuous, partially observed systems, and the other is to discontinuous, fully observed systems.
In each case we show that the corresponding generalised TFE coincides with
the smallest feedback bit rate that allows a form of controlled invariance to be achieved.
\keywords{Topological entropy \and communication-limited control \and quantised systems}
\end{abstract}
\section{Introduction}
\label{intro}
In 1965, Adler, Konheim, and McAndrew \cite{adler} introduced {\em topological entropy} as a measure of the fastest rate at which a continuous, discrete-time, dynamical system in a compact space generates initial-state information.
Though related to the measure-theoretic notion of {\em Kolmogorov-Sinai entropy} (see e.g. \cite{walters}),
it is a purely deterministic notion and requires only a topology on the state space, not an invariant measure.
Subsequently, Bowen \cite{bowenTAMS71} and Dinaburg \cite{dinaburgDokl70} proposed an alternative, metric based definition of topological entropy.
This accommodates uniformly continuous dynamics on noncompact spaces and is equivalent to the original definition on compact spaces.
These concepts play an important role in dynamical systems but remained largely neglected in control theory.
However, the emergence of digitally networked control systems (see e.g. \cite{antsaklisSpecial07}) over the last four decades
renewed interest in the information theory of feedback, and in 2004 the techniques of Adler et al. were adapted
to introduce the notion of {\em topological feedback entropy (TFE)} \cite{nairTAC03}.
Unlike topological entropy, TFE quantifies the {\em slowest} rate at which a continuous,
deterministic, discrete-time dynamical system {\em with inputs} (i.e. a control plant)
generates information, with states confined in a specified compact set.
Equivalently, it describes the rate at which the plant generates
information relevant to the control objective of set-invariance.
From an engineering perspective, the operational significance of TFE arises from the fact that it
coincides with the smallest average bit-rate between the plant and controller that allows set invariance to be achieved.
In other words, if an errorless digital channel with limited bit rate $R$ connects
the plant sensor to the controller, then a coder, controller and decoder that achieve set invariance can be constructed if and almost only if $R$
exceeds the TFE of the plant.\footnote{Without loss of generality, there can also be an errorless digital channel
from the controller to the plant actuator; in this case $R$ is taken to be the minimum of the two channel rates.}
Thus set invariance is possible if and almost only if the digital channel can transport information
faster than the plant generates it.
Later, the notion of {\em invariance entropy} was introduced for continuous, deterministic control systems in continuous time \cite{coloniusSICON09,kawanThesis,kawanDCDS11} based on the metric-space techniques of Bowen.
This measures the smallest growth rate of the number of open-loop control functions
needed to confine the states within an arbitrarily small distance from a given compact set.
In contrast, TFE is defined in a topological space and counts the minimum rate at which initial state uncertainty sets are refined,
with states confined to a given compact set.
Despite these significant conceptual differences, it has been established that TFE and invariance entropy are essentially the same object.
A limitation of the formulations above of entropy for control systems is their restriction to plants with fully observed states
and continuous dynamics.
This makes them inapplicable when only a function of the state can be measured or
the plant has discontinuities such as a quantised internal feedback loop.
A recent article \cite{coloniusMCSS11} studied continuous, partially observed plants
in continuous time, with the objective being to keep the plant outputs arbitrarily close
to a given compact set for any initial state in another set.
The notion of {\em output invariance entropy} was defined as a lower bound on the required data rate.
However, the control functions in this formulation are chosen according to the initial state.
Thus when the controller has access to only the output not the state,
the lower bound may be loose.
For discontinuous systems, notions of topological entropy have been proposed for piecewise continuous, piecewise monotone (PCPM) maps
on an interval \cite{misiurewicz92,kopf05}, using a Bowen-style metric approach.
It is shown that the topological entropy of a PCPM map coincides with the exponential growth rate of the number
of subintervals on which the iteration of the map is continuous and monotone.
In \cite{savkinAUTO06}, a metric approach was also adopted to define
a topological entropy for a possibly discontinuous, open-loop, discrete-time system driven by a sequence of disturbances.
However, it is not clear how these constructions can be adapted to feedback control systems with vector-valued states.
In this paper we use open-set techniques to extend TFE in two directions;
one is to continuous plants with continuous, partial observations in section \ref{output_section}, and the other is to a class of discontinuous plants with full state observations in section \ref{piecewisesec}.
In each case we show that the extended TFE coincides with the smallest average bit-rate
between the plant and controller that allows weak invariance to be achieved,
thus giving these concepts operational relevance in communication-limited control.
Though both generalisations involve open covers, the assumptions and
techniques underlying them are significantly different.
The questions of how to compute bounds on these notions and how to construct a unified notion of feedback entropy for
discontinuous, partially observed systems are ongoing areas of research.
\noindent {\em Terminology.} The nonnegative integers are denoted by $\ensuremath{\mathbb{Z}_{0}}$ and the positive integers by $\mathbb N$.
Sequence segments $(x_s,\ldots, x_t)$ are denoted by $x_s^t$.
A collection $\alpha$ of open sets in a topological space $Z$ is called an {\em open cover}
of a set $W\subseteq Z$ if $\cup_{A\in\alpha}A\supseteq W$.
A subcollection $\beta\subseteq\alpha$ is called a {\em subcover} of $W$ if
$\cup_{B\in\beta}B\supseteq W$.
If $\alpha$ contains at least one finite subcover of $W$, then
$N(\alpha|W)\in\mathbb N$ denotes the minimal cardinality over them all;
in the case where $W=Z$, the second argument will be dropped.
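For finite covers, the quantity $N(\alpha|W)$ can be computed by brute force. A minimal Python sketch (the particular sets are illustrative assumptions, not from the text):

```python
from itertools import combinations

def N(alpha, W):
    """Minimal cardinality of a finite subcover of W drawn from the cover alpha."""
    for k in range(1, len(alpha) + 1):
        for beta in combinations(alpha, k):
            if W <= set().union(*beta):     # beta is a subcover of W
                return k
    raise ValueError("alpha contains no subcover of W")

W = set(range(10))
alpha = [set(range(6)), set(range(4, 10)), {0, 1}, {9}]
# no single element of alpha covers W, but {0,...,5} and {4,...,9} together do
assert N(alpha, W) == 2
```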
\section{Continuous, Partially Observed Systems}
\label{output_section}
In this section, we extend topological feedback entropy (TFE) to continuous, partially observed, discrete-time control systems.
We then prove that the TFE coincides with the smallest average feedback bit-rate
that allows a form of controlled set-invariance to be achieved via a digital channel.
\subsection{Weak Topological Feedback Entropy}
\label{output_WITFE}
To improve readability, most of the proofs in this subsection are deferred to the Appendix.
Let $X$ be a compact topological space and consider the continuous, partially observed, discrete-time control system
\begin{equation}
\begin{split}
x_{k+1}&=f(x_k, u_k)\in X \\
y_k&=g(x_k)\in Y
\end{split}
,\quad \forall k\in\ensuremath{\mathbb{Z}_{0}},
\label{postate}
\end{equation}
where the input $u_k$ is taken from a set $U$, the output $y_k$ lies in a topological space $Y$,
and the functions $g$ and $f(\cdot, u)$, $u\in U$, are continuous.
For simplicity write
$f_u(\cdot):= f(\cdot , u)$ and $f_{u_0^{s-1}}:=f_{u_{s-1}}f_{u_{s-2}} \cdots f_{u_0}$; for any $s\in\mathbb N$, let $g_{u_0^{s-1}}$ denote
the continuous function that maps $x_0$ to $y_0^s$ when the plant is fed the input sequence $u_0^{s-1}$.
Given a compact target set $K \subsetneqq X$ with nonempty interior,
assume the following:
\begin{itemize}
\item[(Ob)] The plant is \emph{uniformly controlled observable}: there exists $s \in \ensuremath{\mathbb{Z}_{0}}$ and an input sequence $v_0^{s-1}$
such that the map $g_{v_0^{s-1}}$ is continuously invertible on $g_{v_0^{s-1}}(X)$.
\item[(WCI)] The set $K$ is \emph{weakly controlled invariant}: there exists $t \in \mathbb N$
such that for any $x_0 \in X$ there exists a sequence $\{ H_k(x_0) \}_{k=0}^{t-1}$ of inputs in $U$ that ensures $x_t \in \mbox{int}K$.
\end{itemize}
Condition (Ob) states that there is a fixed input sequence $v_0^{s-1}$ that allows the initial state $x_0$ to be
recovered as a continuous function of the output sequence $y_0^{s}$. The states $x_1,\ldots, x_s$
are not required to lie inside $\mathrm{int}K$;
this gives the freedom to trade off transient control performance
for improved accuracy in state estimation.
We also remark that the main difference between WCI
and the usual definition of controlled set-invariance
is that the state only needs to be steerable to the target set in $t$ time steps, not one.
As a technicality, the topological methods we employ
also require the state $x_t$ to lie in the interior of $K$.
We now introduce tools to describe the information generation rate.
Pick $s \in \ensuremath{\mathbb{Z}_{0}}$ and an input sequence $v_0^{s-1}$.
Let $\alpha$ be an open cover of $g_{v_0^{s-1}}(X)\subseteq Y^{s+1}$, $\tau$ a positive integer,
and $G=\{ G_k: \alpha \to U \}_{k=0}^{\tau-1}$ a sequence of $\tau$ maps that assign input values to each element of $\alpha$.
Define $\mathcal{C}$ to be the set of all tuples $\left (s, v_0^{s-1}, \alpha, \tau, G\right )$
that satisfy the following constraint:
\begin{itemize}
\item[(C)] For any $A \in \alpha$ and $x_0 \in g_{v_0^{s-1}}^{-1}(A)$,
the concatenated input sequence $\left (v_0^{s-1},G(A)\right )$ yields $x_{s+\tau} \in \mbox{int}K$,
with $g_{v_0^{s-1}}$ continuously invertible on $g_{v_0^{s-1}}(X)$.
\end{itemize}
\begin{proposition}\label{output_feasibility}
If (Ob) and (WCI) hold, then $\mathcal{C}\neq\emptyset$, i.e. constraint (C) is feasible.
\end{proposition}
Next, we use $\alpha$ to track the orbits of the initial states.
Divide time up into cycles of duration $s+\tau$
and apply an input sequence $v_0^{s-1}$ that satisfies (Ob) for the first $s$ instants
of each cycle.
Let $A_0, A_1, \dots$ be elements of $\alpha$
and for each $j \in \mathbb N$ define
\begin{equation}\label{B_j}
B_j :=\left\{ x_0 \in X:
\begin{array}{l} x_{i(s+\tau)} \in g_{v_0^{s-1}}^{-1}(A_i), \, 0 \leq i \leq j-1, \, \mbox{and} \\
u_{i(s+\tau)}^{(i+1)(s+\tau)-1}= (v_0^{s-1}, G(A_i)), \, 0 \leq i \leq j-2
\end{array} \right\}.
\end{equation}
In other words, $B_j$ is the set of initial states
such that during the $(i+1)$th cycle,
the sequence $y_{i(s+\tau)}^{i(s+\tau)+s}$ of the first $s+1$ outputs
lies inside $A_i$ when the sequence of $s+\tau$ inputs over the cycle is
$\left (v_0^{s-1}, G(A_i)\right )\in U^{s+\tau}$ for every $i\in[0,\ldots, j-2]$.
\begin{proposition}\label{B_j_properties}
The set $B_j$ has the following properties.
\begin{enumerate}
\item Each $B_j$ is an open set.
\item Every $x_0 \in X$ must lie in some $B_j$.
\end{enumerate}
\end{proposition}
\begin{corollary}
For each $j \in \mathbb N$ the collection of sets $B_j$,
\begin{equation}\label{beta_j}
\beta_j := \{ B_j: A_0, A_1, \dots , A_{j-1} \in \alpha \},
\end{equation}
is an open cover of $X$.
\end{corollary}
Now, since $\beta_j$ is an open cover of $X$, compactness implies that it must contain a finite subcover.
Consider a minimal subcover with cardinality $N(\beta_j)\in\mathbb N$.
As no set in this minimal subcover of $\beta_j$ is contained in a union of other sets, each carries new information.
Thus as $N(\beta_j)$ increases, the amount of information gained about the initial state grows.
In order to measure the asymptotic rate of information generation, we need the following:
\begin{lemma}\label{subadditivity}
The sequence ${(\log_2 N(\beta_j))}_{j=1}^{\infty}$ is subadditive.
\end{lemma}
The next proposition follows from Fekete's lemma; see e.g. \cite[Theorem~4.9]{walters} for a proof.
\begin{proposition}\label{inf}
The following limit exists and equals the right hand infimum:
\begin{equation}\label{limit1}
\lim_{j \to \infty}\frac{\log_2 N(\beta_j)}{j(s+\tau)}
=\inf_{j \in \mathbb{N}} \frac{\log_2 N(\beta_j)}{j(s+\tau)}.
\end{equation}
\end{proposition}
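Fekete's lemma can be illustrated numerically. The sequence below is a stand-in of our own (not the paper's), chosen to be subadditive in the same way $\log_2 N(\beta_j)$ is:

```python
import math

def a(j):
    # subadditive: a(j + k) = j + k + log2(3) <= a(j) + a(k)
    return j + math.log2(3)

# the ratios a(j)/j decrease monotonically toward inf_j a(j)/j = 1
ratios = [a(j) / j for j in range(1, 201)]
assert all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:]))
assert abs(ratios[-1] - 1) < 0.01
```

Here the limit and the infimum coincide (both equal $1$), exactly as Fekete's lemma asserts for any subadditive sequence.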
We now define a topological feedback entropy for continuous, partially observed plants:
\begin{definition}
The \emph{weak topological feedback entropy (WTFE)} of the plant (\ref{postate})
with target set $K$ is defined as
\begin{equation}
h_{\mbox{w}}
:=\inf_{\left (s, v_0^{s-1},\alpha, \tau, G\right )\in\mathcal{C}}\lim_{j \to \infty} \frac{\log_2 N(\beta_j)}{j(s+\tau)}
\stackrel{(\ref{limit1})}{=}
\inf_{\left (s, v_0^{s-1},\alpha, \tau, G\right )\in\mathcal{C}, j\in\mathbb N}
\frac{\log_2N(\beta_j)}{j(s+\tau)},
\label{WITFE}
\end{equation}
where $\mathcal{C}$ is the set of all tuples
$\left (s, v_0^{s-1},\alpha, \tau, G\right )$ that satisfy constraint (C).
\end{definition}
This definition reduces to the {\em weak invariance TFE} of \cite{nairTAC03} if the plant is fully observed, since in that case condition (Ob) is trivially satisfied with $s=0$ and $g_{v_0^{s-1}}=g$ reducing to the identity map.
Like classical topological entropy for dynamical systems \cite{adler},
it measures the rate at which initial state uncertainty sets are refined as more and more observations are taken `via' an open cover.
The differences here are that control inputs must be accommodated, only partial state observations are possible, and the slowest rate is of interest, not the fastest.
Instead of `pulling back' and intersecting the open sets $A_0,A_1,\ldots\in\alpha$ to form an open cover $\begin{equation}ta_j$ of the initial state space $X$,
suppose we simply counted the smallest cardinality of minimal subcovers of
the open cover $\alpha$ itself, under constraint (C).
It turns out that this more direct approach yields the same number:
\begin{proposition}\label{inf_equation}
The weak topological feedback entropy (\ref{WITFE}) for the continuous, partially observed
plant (\ref{postate}) satisfies the identities
\begin{equation*}\label{inf_equation_formulas}
\begin{split}
h_{\mbox{w}}
&=\inf_{\left (s,v_0^{s-1},\alpha, \tau, G\right )\in\mathcal{C}} \frac{\log_2N(\beta_1)}{s+\tau} \\
&=\inf_{\left (s,v_0^{s-1},\alpha, \tau, G\right )\in\mathcal{C}} \frac{\log_2 N(\alpha| g_{v_0^{s-1}}(X))}{s+\tau}.
\end{split}
\end{equation*}
\end{proposition}
\begin{proof}
Note first that by plugging $j=1$ into the second equality in (\ref{WITFE}) we obtain
\begin{equation}
h_{\mbox{w}}\label{rewrite1}
= \inf_{(s,v_0^{s-1}, \alpha, \tau, G)\in\mathcal{C}, j\in\mathbb N }\frac{\log_2 N(\beta_j)}{j(s+\tau)}
\leq \inf_{(s,v_0^{s-1}, \alpha, \tau, G)\in\mathcal{C}}\frac{\log_2 N(\beta_1)}{s+\tau}.
\end{equation}
We now show that the right-hand infimum (with $j=1$) can be brought arbitrarily close to $h_{\mbox{w}}$.
The definition of the WTFE, combined with the fact that $\lim_{j \to \infty}j/(j-1)=1$ and Proposition \ref{inf},
implies that given $\ensuremath{\varepsilon} >0$ we can find a tuple $(s, v_0^{s-1}, \alpha, \tau, G)$ that satisfies constraint (C) and a large $j \in \mathbb N$ such that
\begin{equation}\label{rewrite2}
h_{\mbox{w}}
\leq \frac{\log_2 N(\beta_j)}{(j-1)(s+\tau)}
= \frac{\log_2 N(\beta_j)}{j(s+\tau)} \frac{j}{j-1}
\leq h_{\mbox{w}} +\ensuremath{\varepsilon}.
\end{equation}
We now construct a new tuple $(s', {v'}_0^{s-1}, \alpha', \tau', G')$.
Let $s'$ and ${v'}_0^{s-1}$ be the $s$ and $v_0^{s-1}$ chosen above.
The open cover $\alpha'$ of $g_{v_0^{s-1}}(X)$ in $Y^{s+1}$ is obtained as follows.
Recall that $\beta_j$ is an open cover of $X$.
We use the continuous map $g_{v_0^{s-1}}^{-1}$ to form a collection of open sets in $g_{v_0^{s-1}}(X)$, $\{ g_{v_0^{s-1}}(B_j): B_j \in \beta_j \}$.
Note that the union of sets $g_{v_0^{s-1}}(B_j)$ is $g_{v_0^{s-1}}(X)$.
Since $g_{v_0^{s-1}}(X)$ is equipped with the subspace topology of $Y^{s+1}$, each open set $g_{v_0^{s-1}}(B_j)$ in $g_{v_0^{s-1}}(X)$ can be expanded to an open set $B_j'$ in $Y^{s+1}$ so that $g_{v_0^{s-1}}^{-1}$ maps points in $B_j' \cap g_{v_0^{s-1}}(X)$ into $B_j$.
Now $\alpha'={\{ B_j': B_j \in \beta_j \}}$ is an open cover of $g_{v_0^{s-1}}(X)$ in $Y^{s+1}$.
Recall that each $B_j \in \beta_j$ is of the form
\begin{multline*}
B_j=g_{v_0^{s-1}}^{-1}(A_0) \cap \Phi_{G(A_0)}^{-1} g_{v_0^{s-1}}^{-1}(A_1) \cap \Phi_{G(A_0)}^{-1} \Phi_{G(A_1)}^{-1} g_{v_0^{s-1}}^{-1}(A_2) \cap \cdots \\
\dots \cap \Phi_{G(A_0)}^{-1} \dots \Phi_{G(A_{j-2})}^{-1} g_{v_0^{s-1}}^{-1}(A_{j-1}),
\end{multline*}
where $\Phi_{G(A)}= f_{v_0^{s-1}G(A)}$ with $A_0, \dots, A_{j-1}$ in $\alpha$.
Set $\tau'=(j-1)(s+\tau)-s$ and define a sequence of $\tau'$ maps $G'$ by
$G'(B_j')=G(A_0)v_0^{s-1}G(A_1)\dots v_0^{s-1}G(A_{j-2})$.
Since $(s, v_0^{s-1}, \alpha, \tau, G)$ satisfies constraint (C), by construction $(s', {v'}_0^{s-1}, \alpha', \tau', G')$ satisfies (C).
Clearly, $N(\beta_1') = N(\beta_j)$, where $\beta_1' = g_{v_0^{s-1}}^{-1}(\alpha')$, and from inequalities in (\ref{rewrite2}) we have
\[ h_{\mbox{w}} \leq \frac{\log_2 N(\beta_1')}{s'+\tau'} \leq h_{\mbox{w}}+\ensuremath{\varepsilon}. \]
Since $\ensuremath{\varepsilon} >0$ was arbitrary, the above inequalities and the earlier result in (\ref{rewrite1}) give the first equality.
For the second equality, observe that given any tuple $(s, v_0^{s-1}, \alpha, \tau, G)$ satisfying constraint (C)
we have $N(\beta_1)\equiv N(\beta_1|X)=N(\alpha | g_{v_0^{s-1}}(X))$,
where $\beta_1=g_{v_0^{s-1}}^{-1}(\alpha)$, since $\alpha$ is an open cover of
$g_{v_0^{s-1}}(X)$ in $Y^{s+1}$, $g_{v_0^{s-1}}(X)$ is equipped with the subspace topology of $Y^{s+1}$,
and $g_{v_0^{s-1}}$ is a homeomorphism from the compact space $X$ onto $g_{v_0^{s-1}}(X)$. Thus we obtain the desired equation.
\qed
\end{proof}
The first equality in this result is a technical simplification that allows the cycle index $j\in\mathbb N $ in the definition of WTFE
to be restricted to the value 1. The second equality follows almost immediately but is
conceptually more significant. In rough terms, it states that under constraint (C), the smallest
growth rate of the number of `topologically distinguishable' output sequences in $g_{v_0^{s-1}}(X)\subseteq Y^{s+1}$
coincides with the smallest growth rate of the number of such initial states in $X$.
\subsection{Data-Rate-Limited Weak Invariance}
The weak topological feedback entropy (WTFE) constructed in the previous subsection
is defined in abstract terms. We now show its relevance to the problem of
feedback control via a channel with finite bit-rate, for a general class of coding and control laws.
Suppose that the sensor that measures the outputs of the plant (\ref{postate}) transmits one discrete-valued symbol $s_k$ per sampling interval to the controller over a digital channel.
Each symbol transmitted by the coder may potentially depend on all past and present outputs and past symbols,
\begin{equation}\label{symbol}
s_k=\gamma_k({\{ y_i\}}_{i=0}^{k}, {\{s_i \}}_{i=0}^{k-1})\in S_k, \quad \forall k \in \ensuremath{\mathbb{Z}_{0}},
\end{equation}
where $S_k$ is a coding alphabet of time-varying size $\mu_k$ and
$\gamma_k: Y^{k+1}\times S_0 \times \dots \times S_{k-1} \to S_k$ is the coder map at time $k$.
Assuming that the digital channel is errorless, at time $k$ the controller has $s_0, \dots, s_k$ available and generates
\begin{equation}\label{control_input}
u_k=\delta_k({\{s_i\}}_{i=0}^{k}) \in U, \quad \forall k \in \ensuremath{\mathbb{Z}_{0}},
\end{equation}
where $\delta_k: S_0 \times \dots \times S_k \to U$ is the controller map at time $k$.
Define the \emph{coder-controller} as the triple
$(S, \gamma, \delta):=( {\{ S_k \}}_{k \in \ensuremath{\mathbb{Z}_{0}}}$, ${\{ \gamma_k \}}_{k \in \ensuremath{\mathbb{Z}_{0}}}$, ${\{ \delta_k \}}_{k \in \ensuremath{\mathbb{Z}_{0}}} )$
of the alphabet, coder and controller sequences.
Suppose that the performance objective of the coder-controller is to render $K$ \emph{weakly invariant},
i.e. to ensure that there exists a time $q \in \mathbb N$ such that for any $x_0 \in X$, $x_{q} \in \mathrm{int}K$.
Let $Q_\ensuremath{\mathrm{w}}\subseteq\mathbb N$ be the set of all the invariance times $q$ of a given coder-controller
that achieves this objective.
For each $q\in Q_\ensuremath{\mathrm{w}}$, a $q$-periodic coder-controller extension that ensures $x_{q}, x_{2q}, \ldots \in \mathrm{int}K$ can be constructed by `resetting the clock' to zero at times $q,2q,\ldots$.\footnote{See section III in \cite{nairTAC03}.}
The average transmission data rate of each $q$-periodic extension is simply $\frac{1}{q} \sum_{j=0}^{q-1} \log_2 \mu_j$,
and the (smallest) average data rate required by the coder-controller is then
\begin{equation}\label{data_rate}
R:=\inf_{q\in Q_\ensuremath{\mathrm{w}}} \frac{1}{q} \sum_{j=0}^{q-1} \log_2 \mu_j \quad \mbox{(bits/sample)}.
\end{equation}
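To make (\ref{data_rate}) concrete, here is a sketch for a hypothetical coder-controller (the alphabet sizes and invariance times below are our assumptions, not from the text) whose alphabet sizes repeat with period $3$ and whose invariance times are the multiples of $3$:

```python
import math

mu = [4, 2, 2]                   # alphabet sizes mu_k, repeating with period 3
Q_w = range(3, 31, 3)            # assumed invariance times q

def avg_rate(q):
    # average data rate of the q-periodic extension, in bits/sample
    return sum(math.log2(mu[j % 3]) for j in range(q)) / q

R = min(avg_rate(q) for q in Q_w)
# each cycle carries log2 4 + log2 2 + log2 2 = 4 bits, so R = 4/3 bits/sample
assert abs(R - 4 / 3) < 1e-9
```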
We remark that in \cite{nairTAC03}, the communication requirements of the coder-controller
were measured instead by the asymptotic average data rate, i.e. over all $j\in\ensuremath{\mathbb{Z}_{0}}$, and the
weak-invariance control objective was to ensure $x_q,x_{2q},\ldots\in\mathrm{int}K$.
Thus for that coder-controller, $q\mathbb N\subseteq Q_\ensuremath{\mathrm{w}}$, and applying (\ref{data_rate}) to it would
yield a less conservative number.
The main result of this subsection follows.
\begin{theorem}
Consider the continuous, partially observed, discrete-time plant (\ref{postate}).
Suppose that the uniformly controlled observability (Ob) and weak controlled invariance (WCI) conditions hold
for a given target set $K$.
For $K$ to be made weakly invariant by a coder-controller of the form (\ref{symbol}) and (\ref{control_input}),
the feedback data rate in $R$ (\ref{data_rate}) cannot be less than the weak topological feedback entropy:
\begin{equation*}\label{pctheorem}
R \geq h_{\ensuremath{\mathrm{w}}}.
\end{equation*}
Furthermore, this lower bound is tight: there exist coder-controllers that achieve weak invariance at data rates arbitrarily close to $h_{\ensuremath{\mathrm{w}}}$.
\end{theorem}
This theorem says that weak invariance can be achieved by some coder-controller if and (almost) only if the available data rate
exceeds the WTFE of the plant on the target set.
This gives operational significance to WTFE and justifies its interpretation as the rate at which
the plant generates information for weak invariance.
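As a toy illustration of the theorem (a fully observed scalar example of our own, not from the text), consider $x_{k+1}=2x_k+u_k$ on $X=[-1,1]$ with target $K=[-0.6,0.6]$; for such a plant the critical rate is $\log_2 2=1$ bit/sample, and a memoryless quantising coder-controller operating at $R=2$ bits achieves invariance in a single step:

```python
R_bits = 2                       # channel rate, strictly above log2(2) = 1
n_bins = 2 ** R_bits
width = 2.0 / n_bins             # uniform quantiser on X = [-1, 1]

def coder(x):
    # transmit the index of the quantisation cell containing x
    return min(int((x + 1.0) / width), n_bins - 1)

def controller(symbol):
    xhat = -1.0 + (symbol + 0.5) * width    # cell centre
    return -2.0 * xhat                       # cancel the estimated state

for x0 in [-1.0, -0.37, 0.0, 0.42, 0.999]:
    x1 = 2.0 * x0 + controller(coder(x0))
    # |x1| = 2|x0 - xhat| <= width = 0.5 < 0.6, so x1 lands in int K
    assert abs(x1) <= 0.5
```

Halving the rate to $R=1$ bit makes the post-control error bound $2\cdot(\text{cell width}/2)=1$, which no longer fits inside $K$: the data-rate threshold is visible even in this one-dimensional sketch.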
The remainder of this subsection comprises the proof of this result.
\paragraph{Necessity of the Lower Bound.}
Let the coder-controller $(S, \gamma, \delta)$ achieve invariance with data rate $R$. By (\ref{data_rate}),
$\forall\ensuremath{\varepsilon}>0$ there exists $q \in Q_\ensuremath{\mathrm{w}}$ such that $x_{q} \in \mbox{int}K$ for any $x_0\in X$ and
\begin{equation}\label{approximation}
\frac{1}{q}\sum_{k=0}^{q-1}\log_2 \mu_k
<R+\ensuremath{\varepsilon}.
\end{equation}
Let $(S^1,\gamma^1,\delta^1)$ be the $q$-periodic extension,
which achieves weak invariance at times $q,2q,\ldots$.
We wish to transform this periodic coder-controller into another that more closely matches
the structure of a WTFE quintuple and has nearly the same data rate.
Specifically, we seek a periodic coder-controller each cycle of which has two phases.
The first phase is an initial `observation' phase, during which the controller applies a pre-agreed input sequence so as to enable the coder
to determine the exact value of the current state.
This is followed by an `action' phase, where the current state value is used to generate symbols and controls so as to achieve weak invariance by the end of the cycle.
Let $s \in \mathbb N$ and the input sequence $v_0^{s-1}$ satisfy condition (Ob).
Set $\tau=q$ and construct an $(s+\tau)$-periodic coder-controller $(S^2, \gamma^2, \delta^2)$
as follows; to simplify notation, the coding and control laws are defined only for the first cycle.
First, the controller applies the input sequence $u_0^{s-1}=v_0^{s-1}$;
during this time, the coder transmits an `empty' symbol.
For the remainder of the cycle, i.e. $s\leq k\leq s+\tau-1$, the coder-controller implements $(S^1, \gamma^1, \delta^1)$
and obtains a new state $x_{s+\tau}\in \mbox{int}K$.
This can be achieved by defining the coder-controller $(S^2, \gamma^2, \delta^2)$ by
\begin{equation*}
\begin{split}
S^2_{k}&=
\begin{cases}
\{0\} & \mbox{if $0\leq k\leq s-1$}\\
S_{k-s}^1 & \mbox{if $s \leq k \leq s+\tau-1$}
\end{cases}\\
s_{k} \equiv \gamma^2_{k}(y_0^k, s_0^{k-1}) &=
\begin{cases}
0 & \mbox{if $0\leq k\leq s-1$}\\
\gamma^1_{k-s} (y_s^k, s_s^{k-1})
& \mbox{if $s \leq k \leq s+\tau-1$}
\end{cases}, \\
u_{k}\equiv \delta^2_{k}(s_0^{k})&=
\begin{cases}
v_k & \mbox{if $0\leq k\leq s-1$}\\
\delta^1_{k-s} (s_s^{k}) & \mbox{if $s \leq k \leq s+\tau-1$}
\end{cases}
\end{split}
\end{equation*}
Observe that the symbol sequence $s_s^{s+\tau-1}$ is completely determined by $x_{s}$ by a fixed map that incorporates both coder and controller laws.
Thus there exist maps $\Gamma$ and $\Delta$ such that
\begin{equation}\label{p_coder_controller}
\begin{split}
s_s^{s+\tau-1}&\equiv \Gamma (x_{s}), \\
u_s^{s+\tau-1} &\equiv \Delta \big( s_s^{s+\tau-1}\big).
\end{split}
\end{equation}
Now consider the disjoint coding regions $\Gamma^{-1}(c_s^{s+\tau-1})$,
$c_s^{s+\tau-1}\in$ $S^2_s \times \cdots \times S^2_{s+\tau-1}$.
The total number of distinct symbol sequences is
$\prod_{k=s}^{s+\tau-1}\mu_{k-s}= \prod_{k=0}^{\tau-1}\mu_{k}$,
which must be greater than or equal to the number $n$ of distinct coding regions.
Denote these coding regions by $C^1, \dots, C^n$ and note that $f_{v_0^{s-1}}(X) \subseteq \cup_{i=1}^n C^i$.
We can now rewrite the control law in (\ref{p_coder_controller}) as
\begin{equation}\label{new_p_coer_controller}
u_s^{s+\tau-1}=\Delta^*\big( C^i \big) \quad \mbox{if} \quad x_{s} \in C^i,
\end{equation}
where $\Delta^*(C^i)=\Delta(c_s^{s+\tau-1})$ iff $C^i=\Gamma^{-1}(c_s^{s+\tau-1})$.
We now construct the open cover $\alpha$ of $g_{v_0^{s-1}}(X)$
and the mapping sequence $G={\{ G_{k}: \alpha \to U \}}_{k=0}^{\tau-1}$ required in the definition of the WTFE.
To define $\alpha$, first construct an open cover of $X$ as follows.
Observe that for any $x_s$ in region $C^i$,
\[ \Phi_{\Delta^*(C^i)}(x_s)\in \mbox{int}K,\]
where the left hand side denotes $f_{u_s^{s+\tau-1}}$ with the control law (\ref{new_p_coer_controller}).
By the continuity of $f_u$ (hence of $\Phi_{\Delta^*(C^i)}$) and the openness of $\mbox{int}K$,
it then follows that for any $x_s \in C^i$ there is an open set $O(x_s)$ that contains $x_s$ and is such that
\[ \Phi_{\Delta^*(C^i)}(x)\in \mbox{int}K , \ \ \forall x \in O(x_s). \]
We can construct for each $i\in\{1,\ldots , n\}$ an open set $D^i=\cup_{x_s \in C^i} O(x_s)$ so that
\begin{equation*}\label{open_sets}
\Phi_{\Delta^*(C^i)}(x) \in \mbox{int}K, \ \ \forall x \in D^i,
\end{equation*}
which is equivalent to
\begin{equation*}
\Phi_{\Delta^*(C^i)}(D^i) \subseteq \mbox{int}K.
\end{equation*}
We then use the continuous map
$h_{v_0^{s-1}}=f_{v_0^{s-1}}g_{v_0^{s-1}}^{-1}$ to form the
collection $\{ h_{v_0^{s-1}}^{-1}(D^i) \}_{i=1}^n$ of open sets in $g_{v_0^{s-1}}(X)$.
As $g_{v_0^{s-1}}(X)$ is equipped with the subspace topology of $Y^{s+1}$, each open set
$h_{v_0^{s-1}}^{-1}(D^i)$ in $g_{v_0^{s-1}}(X)$ can be expanded to
an open set $L^i$ in $Y^{s+1}$ so that $h_{v_0^{s-1}}$ maps points in $L^i \cap g_{v_0^{s-1}}(X)$ into $D^i$.
The open cover $\alpha$ for $g_{v_0^{s-1}}(X)$ is defined by $\alpha = \{ L^1, \dots, L^n \}$.
Finally, construct the mapping sequence $G$ on $\alpha$ by
\[ G_{k}(L^i)=\,\mbox{the $k$th element of $\Delta^*(C^i)$ \quad for each $L^i \in \alpha$ and $0 \leq k \leq \tau -1$}. \]
It is evident that constraint (C) on $s, v_0^{s-1}, \alpha, \tau, G$ is satisfied.
The minimum cardinality $N(\alpha|g_{v_0^{s-1}}(X))$ of subcovers of $\alpha$ does not exceed the number $n$ of sets $L^i$ in $\alpha$.
From this we obtain
\begin{equation*}
\begin{split}
h_{\ensuremath{\mathrm{w}}}
&=\inf_{s, v_0^{s-1}, \alpha, \tau, G} \frac{\log_2 N(\alpha| g_{v_0^{s-1}}(X))}{s+\tau} \\
&\leq \frac{\log_2 N(\alpha | g_{v_0^{s-1}}(X))}{s+\tau} \\
&\leq \frac{\log_2 n}{s+\tau} \\
&\leq \frac{\log_2 \left( \prod_{k =0}^{\tau-1} \mu_{k } \right)}{s+\tau}
=\frac{1}{s+\tau}\sum_{k =0}^{\tau-1}\log_2 \mu_{k }
\leq \frac{1}{\tau}\sum_{k =0}^{\tau-1}\log_2 \mu_{k }.
\end{split}
\end{equation*}
\noindent From (\ref{approximation})
the last term of the right hand side is less
than $R+ \ensuremath{\varepsilon}$, since $\tau=q$.
As $\ensuremath{\varepsilon}$ was arbitrary, it follows that $R \geq h_{\ensuremath{\mathrm{w}}}$.
\paragraph{Achievability of the Lower Bound.}
To prove that a data rate arbitrarily close to $h_{\ensuremath{\mathrm{w}}}$ can be achieved,
we show that any tuple $(s, v_0^{s-1}, \alpha, \tau, G)$ that satisfies constraint (C) yields
an $(s+\tau)$-periodic coder-controller that renders $K$ weakly invariant in every cycle.
Let
\begin{equation*}\label{uninfimised_entropy}
H:=\frac{\log_2 N(\alpha|g_{v_0^{s-1}}(X))}{s+\tau}.
\end{equation*}
Recalling that $\alpha$ is an open cover of the compact set $g_{v_0^{s-1}}(X)$,
select a minimal subcover of $\alpha$ and denote it by $\{ A^1, \dots, A^m \}$, where $m=N(\alpha|g_{v_0^{s-1}}(X))$.
Since $g_{v_0^{s-1}}$ is a homeomorphism from $X$ onto $g_{v_0^{s-1}}(X)$ and $g_{v_0^{s-1}}(X)$ is equipped with the subspace topology of $Y^{s+1}$,
we see that $\{ g_{v_0^{s-1}}^{-1}(A^i)\}_{i=1}^m$ is a finite open cover of $X$.
We construct an $(s+\tau)$-periodic coding law using these overlapping sets as follows; to simplify notation, the coding and control laws are described only for the first cycle.
For each $k\in[0, s+\tau-1]$, set $S_k = \{1,\ldots , m\}$ if $k=s$ and $S_k=\{0 \}$ otherwise.
Then let
\begin{equation}\label{coding_rule}
s_k=
\begin{cases}
\min \{ i: x_{0} \in g_{v_0^{s-1}}^{-1}(A^i) \} &\mbox{if $k=s$} \\
0 & \mbox{otherwise}
\end{cases}.
\end{equation}
The coding alphabet is of size $\mu_k=m$ when $k=s$ and $1$ otherwise.
The next step is to construct the controller from the input sequence $v_0^{s-1}$ and mapping sequence $G$.
For the first $s$ instants of the cycle, the controller applies inputs $u_0^{s-1}=v_0^{s-1}$.
At time $s$, the coder determines the initial state $x_0=g_{v_0^{s-1}}^{-1}(y_0^s)$.
Upon receiving the symbol $s_{s}$ that indexes an open set $g_{v_0^{s-1}}^{-1}(A^i)$ containing $x_{0}$,
the controller applies control inputs via the rule $u_s^{s+\tau-1}=G(A^i)$.
The coder-controller thus constructed has period $s+\tau$.
By constraint (C) we have $x_{s+\tau} \in \mbox{int}K$, hence weak $(s+\tau)$-invariance is achieved.
By (\ref{coding_rule}), the average data rate over the cycle is
\begin{equation*}
\bar{R} =\frac{\log_2 m}{s+\tau}=\frac{\log_2 N(\alpha|g_{v_0^{s-1}}(X))}{s+\tau} \equiv H.
\end{equation*}
As $h_{\ensuremath{\mathrm{w}}}$ is the infimum of $H$, for any $\ensuremath{\varepsilon} >0$ we can find $(s, v_0^{s-1}, \alpha, \tau, G)$ yielding $H < h_{\ensuremath{\mathrm{w}}}+\ensuremath{\varepsilon}$.
Hence $\bar{R}<h_{\ensuremath{\mathrm{w}}}+\ensuremath{\varepsilon}$. The result follows by observing that $R\leq\bar{R}$ by
definition (\ref{data_rate}) and then choosing $\ensuremath{\varepsilon}$ arbitrarily small.
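The two-phase cycle built in this achievability proof can be sketched in code. The following Python is purely illustrative (the helpers \texttt{observe}, \texttt{g\_inv}, \texttt{contains} and the cover passed in are hypothetical stand-ins for the objects in the proof, and condition (Ob) is assumed so that \texttt{g\_inv} recovers $x_0$ exactly from $y_0^s$):

```python
import math

def run_cycle(x0, v, subcover, G, f, observe, g_inv, contains, tau):
    """One (s+tau)-periodic cycle: observation phase, then action phase.

    subcover : list of cover elements g^{-1}(A^i) (tested via `contains`)
    G        : dict mapping cover index -> input sequence of length tau
    f        : plant map f(x, u) -> next state
    observe  : output map y = g(x)
    g_inv    : reconstructs x_0 from the output block y_0^s (condition (Ob))
    """
    # Observation phase: apply the pre-agreed inputs v_0^{s-1}, record outputs,
    # transmitting only 'empty' symbols.
    x, ys = x0, []
    for u in v:
        ys.append(observe(x))
        x = f(x, u)
    ys.append(observe(x))          # y_s completes the block y_0^s
    # Coder: recover x_0 exactly and transmit the least index of a cover
    # element containing it -- the only informative symbol of the cycle.
    symbol = min(i for i, A in enumerate(subcover) if contains(A, g_inv(ys)))
    # Action phase: the controller decodes the symbol and applies G(A^i).
    for u in G[symbol]:
        x = f(x, u)
    rate = math.log2(len(subcover)) / (len(v) + tau)   # bits per time step
    return x, rate
```

On a contracting scalar toy plant $f(x,u)=x/2+u$ with a two-element subcover, one cycle already drives the state into the target interior at a rate of $\tfrac{1}{2}$ bit per step.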
\section{Piecewise Continuous, Fully Observed Plants}
\label{piecewisesec}
In this section we introduce the notion of {\em robust weak topological feedback entropy} for
a class of piecewise continuous, fully observed plants.
We then establish the relevance of this notion for bit-rate limited control.
\subsection{Robust Weak Topological Feedback Entropy}
As before, let $X$ be a compact topological space.
We consider the fully observed, piecewise continuous plant
\begin{equation}\label{pcstate}
x_{k+1}=F(x_k, u_k)\in X,
\ \ \forall k\in\ensuremath{\mathbb{Z}_{0}},
\end{equation}
where the input $u_k$ is taken from a set $U$.
For simplicity write $F_u:= F( \cdot, u)$ and $F_{u_0^k}:= F_{u_k}\cdots F_{u_0}$.
Assume that there exists a finite partition $\mathcal{P}$ of $X$
such that each connected component of any element of $\mathcal{P}$ has nonempty interior,
and that for each $u \in U$ the map $F_u$ is continuous on each element of $\mathcal{P}$, i.e. $F_u$ is piecewise continuous.
Let $K$ be a compact target set with nonempty interior,
and impose the following condition on the plant:
\begin{itemize}
\item[(RWI)] The compact set $K$ can be made \emph{robustly weakly controlled invariant} under $F$:
there exists $t \in \mathbb N$
such that for any $x \in X$,
there exists an open set $O(x)$ containing $x$ and an input sequence $\{ H_k(x) \}_{k=0}^{t-1}$ in $U$
that ensures $x_t \in \mbox{int}K$ for any $x_0 \in O(x)$.
\end{itemize}
Note that this is stricter than the weak controlled invariance (WCI) condition in the previous section.
It restricts our focus to plants with states that can be driven in finite time into the interior of
$K$,
despite an arbitrarily small error in the initial state measurement.
This requirement is not satisfied by all discontinuous plants, but is reasonable
if for instance the plant consists of an underlying continuous
system with an `inner' quantised control loop and an `outer' control loop to be designed,
i.e. $F(x,u)\equiv f(x,w,u)$, where the inner control input $w$ is a quantised function of $x$.
It can be shown that if $f$ is continuous and $K$ is WCI,
then it is always possible to find a sequence of quantised {\em joint} inputs
$(w,u)\equiv(q_1(x),q_2(x))$ that preserves WCI in the presence of small state measurement errors.
Condition (RWI) then applies if the inner loop control law is chosen to be such a robust law $q_1(x)$.
For reasons of space, details are omitted.
We now introduce a feedback entropy concept for this system.
The key difference from Section \ref{output_section} is that the output map $g$ is the identity and so the index $s$ in condition (Ob) there
is 0. This leads to taking an open cover $\alpha$ directly on $X$.
We form a triple $(\alpha, \tau, G)$, where $\tau \in \mathbb N$ and $G=\{ G_k: \alpha \to U \}_{k=0}^{\tau-1}$
is a sequence of $\tau$ maps that assign input variables to all elements of $\alpha$.
Let $\mathcal{RC}$ be the set of all triples $(\alpha, \tau, G)$
that satisfy the following constraint:
\begin{itemize}
\item[(RC)]
For any $A \in \alpha$ and $x_0 \in A$, the input sequence $G(A)$ yields $x_{\tau} \in \mbox{int}K$, i.e.
\[ F_{G_{\tau-1}(A)} \dots F_{G_0(A)}(A) \subseteq \mbox{int}K. \]
\end{itemize}
\begin{proposition}
If condition (RWI) is assumed, then constraint (RC) is feasible, i.e. $\mathcal{RC}\neq\emptyset$.
\end{proposition}
\begin{proof}
We give a complete proof to emphasise the difference from the setting in Section \ref{output_section}, cf.~Proposition \ref{output_feasibility}.
Assume (RWI).
We construct $(\alpha, \tau, G)$ that satisfies (RC) as follows.
Set $\tau$ equal to $t$ in (RWI).
By (RWI), for each $x \in X$ there is an open set $O(x)$ that contains $x$ and is such that $F_{H_{\tau-1}(x)} \dots F_{H_0(x)}(O(x)) \subseteq \mbox{int}K$.
Letting $x$ range over $X$, we form an open cover of $X$, $\{ O(x): x \in X \}$.
The compactness of $X$ implies that there exists a finite subcover $\alpha=\{ O (x_q) \}_{q=1}^r$.
Construct the sequence $G=\{ G_k \}_{k=0}^{\tau-1}$ of $\tau$ maps on $\alpha$ by
\begin{equation*}
G_k(O(x_q))=H_k(x_q) \quad \mbox{for each $1 \leq q \leq r$ and $0 \leq k \leq \tau-1$.}
\end{equation*}
By construction we have that if $x \in O(x_q)$ for some $1 \leq q \leq r$ then
\[ F_{G(O(x_q))}(x)=F_{\{ H_k(x_q) \}_{k=0}^{\tau-1}}(x) \in \mbox{int}K. \]
This confirms the feasibility of (RC) under (RWI).
\qed
\end{proof}
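The feasibility argument can be mirrored by a simple numerical check. The sketch below (Python for concreteness, with hypothetical helper names) tests constraint (RC) by pushing sample points from each cover element through the composed maps $F_{G_{\tau-1}(A)} \dots F_{G_0(A)}$; since only finitely many points are tested, a pass is merely supporting evidence, whereas a failure is a genuine counterexample:

```python
def satisfies_RC(F, cover, G, in_int_K, samples):
    """Sample-based check of constraint (RC).

    F(x, u) -> next state; cover: list of cover elements; G[i]: the length-tau
    input sequence assigned to cover element i; in_int_K(x): membership test
    for the interior of the target set; samples(A): finite points drawn from A.
    """
    for i, A in enumerate(cover):
        for x in samples(A):
            for u in G[i]:             # apply F_{G_{tau-1}(A)} ... F_{G_0(A)}
                x = F(x, u)
            if not in_int_K(x):
                return False           # witness that (RC) fails on A
    return True
```

For a contracting toy plant $F(x,u)=x/2+u$ on $[-1,1]$ with target interior $(-1/2,1/2)$, a two-element cover with inputs $\pm 1/4$ passes the check, while an ill-chosen input sequence fails it.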
Motivated by Proposition \ref{inf_equation}, we introduce the following concept:
\begin{definition}
The \emph{robust weak topological feedback entropy (RWTFE)} of the plant (\ref{pcstate}) is
\begin{equation}\label{RWITFE}
h_{\ensuremath{\mathrm{rw}}}:=\inf_{(\alpha, \tau, G)\in\mathcal{RC}}\frac{\log_2N(\alpha)}{\tau}.
\end{equation}
\end{definition}
Unlike TFE and the classical topological entropy, this definition does not involve
pulling back and intersecting the sets in $\alpha$ to measure the rate
at which initial state uncertainty sets are refined.
Thus it cannot be viewed {\em a priori} as an index of the rate at which the plant generates initial-state information.
However, such an interpretation becomes plausible due to the results in the next subsection.
\subsection{Robust Weak Invariance Under a Data-Rate Constraint}
We now show the relevance of robust weak topological entropy to the problem of
feedback control via a channel with finite bit-rate.
Consider the fully observed, piecewise continuous plant (\ref{pcstate}).
We now introduce a feedback loop with a coder-controller of the general form (\ref{symbol})--(\ref{control_input}),
but with the outputs $y_i$ replaced by states $x_i$.
The performance objective of the coder-controller is to render $K$ \emph{robustly weakly invariant}, that is, to guarantee that $\exists q \in \mathbb N$ such that $\forall x_0 \in X$, there is an open neighbourhood $O(x_0)\ni x_0$ with the property that the input sequence $u_0^{q-1}$ generated by the coder-controller $(S, \gamma, \delta)$ {\em acting on} $x_0$ ensures $F_{u_0^{q-1}} (x) \in \mbox{int}K$, $\forall x \in O(x_0)$.
In other words, we desire that the control inputs generated by (\ref{symbol})--(\ref{control_input}) for a plant with nominal initial state $x_0$ should still succeed in achieving weak $q$-invariance if the true initial state $x$ is sufficiently close.
Let $Q_{\ensuremath{\mathrm{rw}}}\subseteq\mathbb N$ be the set of all the robust weak invariance times $q$ of a given coder-controller that achieves this objective.
For each $q\in Q_{\ensuremath{\mathrm{rw}}}$, we can construct a $q$-periodic coder-controller extension by `resetting the clock' to zero
at times $q,2q,\ldots$.\footnote{See section III in \cite{nairTAC03}.}
As before, the (smallest) average data rate required by the coder-controller is
then
\begin{equation}\label{r_data_rate}
R:=\inf_{q\in Q_{\ensuremath{\mathrm{rw}}}} \frac{1}{q} \sum_{j=0}^{q-1} \log_2 \mu_j.
\end{equation}
The main result of this subsection follows:
\begin{theorem}
Consider the piecewise continuous, fully observed plant (\ref{pcstate}) with a compact target set $K$ having nonempty interior.
Assume that initial states are in the compact topological space $X$.
Suppose that the robust weak invariance condition (RWI) holds on $F, K, U$.
For $K$ to be made robustly weakly invariant
by a coder-controller of the form (\ref{symbol})--(\ref{control_input}), the feedback data rate $R$ in (\ref{r_data_rate})
cannot be less than the robust weak topological feedback entropy (\ref{RWITFE}):
\begin{equation*}\label{rtheorem}
R\geq h_{\ensuremath{\mathrm{rw}}}.
\end{equation*}
Furthermore, this lower bound is tight: there exist coder-controllers that achieve robust weak invariance
at data rates arbitrarily close to $h_{\ensuremath{\mathrm{rw}}}$.
\end{theorem}
The rest of this subsection consists of the proof of this result.
\paragraph{Necessity of the Lower Bound.}
Pick an arbitrarily small $\ensuremath{\varepsilon}>0$.
Given a coder-controller $(S, \gamma, \delta)$ that achieves robust weak invariance,
there exists a robust weak invariance time $q \in \mathbb N$ such that
\[ \frac{1}{q} \sum_{k=0}^{q-1}\log_2 \mu_k \leq R +\ensuremath{\varepsilon}. \]
Set $\tau=q$ and note that the symbol sequence $s_0^{\tau-1}$
is completely determined by the initial state $x_0$,
i.e. there exist maps $\Gamma$ and $\Delta$ such that
\begin{equation}\label{r_p_coder_controller}
\begin{split}
s_0^{\tau-1} &\equiv \Gamma (x_0), \\
u_0^{\tau-1} &\equiv \Delta ( s_0^{\tau-1} ).
\end{split}
\end{equation}
Now consider the disjoint regions $\Gamma^{-1}(c_{0}^{\tau -1}) \subseteq X$
as the symbol sequence $c_0^{\tau-1}$ varies over all possible sequences in $S_0 \times \dots \times S_{\tau-1}$.
The total number of distinct symbol sequences is $\prod_{k=0}^{\tau-1}\mu_k$, and
hence the total number $n$ of distinct coding regions does not exceed it.
Denote these coding regions by $C^1, \dots, C^n$ and note that $X= \cup_{i=1}^n C^i$.
We can then rewrite the control equation in (\ref{r_p_coder_controller}) as
\begin{equation}\label{new_r_p_coer_controller}
u_0^{\tau-1}=\Delta^*\big( C^i \big) \quad \mbox{if} \quad x_0 \in C^i
\end{equation}
by defining the map $\Delta^*(C^i)=\Delta(c_{0}^{\tau-1})$ iff $C^i=\Gamma^{-1}(c_{0}^{\tau-1})$.
We now construct the open cover $\alpha$ and the mapping sequence
$G={\{ G_k: \alpha \to U \}}_{k=0}^{\tau-1}$
required in the definition of RWTFE.
Before we define $\alpha$, observe that for any $x$ in coding region $C^i$,
\[ \Phi_{\Delta^*(C^i)}(x)\in \mbox{int}K,\]
where the left hand side denotes the dynamical map $F_{u_0^{\tau-1}}$ applied with the input sequence (\ref{new_r_p_coer_controller}).
By robust weak invariance, it then follows that for any $x \in C^i$ there is an open set $O(x)$ that contains $x$ and is such that
\[ \Phi_{\Delta^*(C^i)}(x_0)\in \mbox{int}K \quad \mbox{for any $x_0 \in O(x)$}. \]
We can construct for each $1 \leq i \leq n$ an open set $L^i=\cup_{x \in C^i}O(x)$ so that
\begin{equation*}\label{r_open_sets}
\Phi_{\Delta^*(C^i)}(x_0) \in \mbox{int}K \quad \mbox{for any $x_0 \in L^i$,}
\end{equation*}
which is equivalent to
\begin{equation*}
\Phi_{\Delta^*(C^i)}(L^i) \subseteq \mbox{int}K.
\end{equation*}
As $C^i \subseteq L^i$ and $X = \cup_{i=1}^n C^i$, we have an open cover $\alpha=\{ L^1, \dots, L^n \}$ for $X$.
Finally, construct the mapping sequence $G$ on $\alpha$ by
\[ G_k(L^i)=\,\mbox{the $k$th element of $\Delta^*(C^i)$ \quad for each $L^i \in \alpha$ and $0 \leq k \leq \tau -1$}. \]
\noindent It is evident that constraint (RC) on $(\alpha, \tau, G)$ is satisfied.
By construction we know that $N(\alpha)\leq n$ and we obtain
\begin{equation*}
\begin{split}
h_{\ensuremath{\mathrm{rw}}}&:=\inf_{(\alpha, \tau, G)\in\mathcal{RC}}\frac{\log_2 N(\alpha)}{\tau} \\
&\leq \frac{\log_2 N(\alpha)}{\tau}\\
&\leq\frac{\log_2 n}{\tau} \\
&\leq \frac{\log_2 \left (\prod_{k=0}^{\tau-1} \mu_k\right )}{\tau}=\frac{1}{\tau}\sum_{k=0}^{\tau-1}\log_2 \mu_k \leq R+\ensuremath{\varepsilon}.
\end{split}
\end{equation*}
Since $\ensuremath{\varepsilon}$ was arbitrary, we have the desired result.
\paragraph{Achievability of the Lower Bound.}
To prove that data rate arbitrarily close to $h_{\ensuremath{\mathrm{rw}}}$ can be achieved, we first show that any triple $(\alpha, \tau, G)$ that satisfies constraint (RC) with $K$ induces a coder-controller that renders $K$ robustly weakly invariant.
Given such a triple, define
\begin{equation*}\label{r_uninfimised_entropy}
H:=\frac{\log_2 N(\alpha)}{\tau}.
\end{equation*}
Recalling that $\alpha$ is an open cover of the compact set $X$, select a minimal subcover of $\alpha$ and denote it by
$\{ D^1, \dots, D^m \}$, where $m=N(\alpha)$.
We construct a $\tau$-periodic coding law using these overlapping sets via the rule
\begin{equation*}\label{r_coding_rule}
s_k=
\begin{cases}
\min \{ i: x_0 \in D^i \}& \mbox{if $k =0$} \\
0 &\mbox{otherwise}
\end{cases}, \ \ 0\leq k\leq \tau -1.
\end{equation*}
The coding alphabet is of size $\mu_k=m$ when $k=0$ and $1$ otherwise.
The next step is to construct the controller from the mapping sequence $G$.
Upon receiving the symbol $s_{0}=i$ indexing an open set $D^i$ of the minimal subcover of $\alpha$
that contains $x_{0}$, the controller applies the input sequence
\begin{equation*}\label{r_achiev_periodic_controller}
u_0^{\tau-1} =G(D^i).
\end{equation*}
By assumption $(\alpha, \tau, G)$ satisfies constraint (RC).
From this it is easy to see that $K$ is robustly weakly invariant with the coder-controller and $\tau$ defined above.
By (\ref{r_coding_rule}), the average data rate over the cycle is
\begin{equation*}
\bar{R} =\frac{\log_2 m}{\tau}=\frac{\log_2 N(\alpha)}{\tau} \equiv H.
\end{equation*}
As $h_{\ensuremath{\mathrm{rw}}}$ is the infimum of $H$, for any $\ensuremath{\varepsilon} >0$ we can find $(\alpha, \tau, G)$ yielding $H < h_{\ensuremath{\mathrm{rw}}}+\ensuremath{\varepsilon}$.
Hence $\bar{R}<h_{\ensuremath{\mathrm{rw}}}+\ensuremath{\varepsilon}$. The result follows by observing that $R\leq\bar{R}$ by
definition (\ref{r_data_rate}) and then choosing $\ensuremath{\varepsilon}$ arbitrarily small.
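For covers of a compact interval by open intervals, the covering number $N(\alpha)$ appearing in the rate $H=\log_2 N(\alpha)/\tau$ can be computed exactly by a greedy sweep. The following sketch is illustrative only (interval covers are an assumption made here for computability; the results above concern arbitrary open covers of a compact space):

```python
import math

def greedy_subcover(intervals, a, b):
    """Greedy minimal subcover of [a, b] drawn from a list of open intervals
    (l, r). Assumes the intervals really do cover [a, b]; for interval covers
    the greedy sweep is known to yield a subcover of minimum cardinality."""
    chosen, point = [], a
    while True:
        # among intervals containing `point`, take the one reaching farthest right
        best = max((iv for iv in intervals if iv[0] < point < iv[1]),
                   key=lambda iv: iv[1])
        chosen.append(best)
        if best[1] > b:
            return chosen
        point = best[1]

def rate_H(intervals, a, b, tau):
    # the uninfimised rate H = log2 N(alpha) / tau from the proof
    return math.log2(len(greedy_subcover(intervals, a, b))) / tau
```

For instance, with $\tau=2$ and a four-interval cover of $[0,1]$ whose minimal subcover has three elements, $H=\log_2 3/2\approx 0.79$ bits per step.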
\section{Conclusion}
In this paper we used open set techniques to propose two extensions of topological feedback entropy (TFE)
: one to continuous plants with continuous, partial observations, and the other to a class of discontinuous plants with full state observations.
In each case we showed that the extended TFE coincides with the smallest average bit-rate
between the plant and controller that allows weak invariance to be achieved,
thus giving these concepts operational meaning in communication-limited control.
Our focus here has been on formulating the concepts of TFE rigorously
and showing the fundamental limitations they impose on the communication rates needed for control.
Future work will focus on computing upper and lower bounds on them for various system classes,
and on strengthening the weak-invariance objective to controlled invariance.
Another important question left for future work is how to construct a unified notion of TFE for
discontinuous, partially observed plants.
Such a notion would be an important step toward a theory of information flows
for cooperative nonlinear control systems interconnected by digital communication channels.
\appendix
\section{Proof of Proposition \ref{output_feasibility}}
Assume (Ob) and (WCI).
Suppose that $s$ and $v_0^{s-1}$ satisfy (Ob).
We construct $\alpha, \tau, G$ that satisfy (C) as follows.
Set $\tau$ equal to $t$ in (WCI).
For $\alpha$, we first observe the following.
By (WCI), the continuity of $f_u$, and openness of $\mbox{int}K$, for any $x_s \in X$ there is an open set $O(x_s)$ in $X$ that contains $x_s$ and is such that $f_{H_{\tau-1}(x_s)} \dots f_{H_0(x_s)}(O(x_s)) \subset \mbox{int}K$.
Letting $x_s$ range over $f_{v_0^{s-1}}(X)$, we obtain an open cover $\{ O(x_s): x_s \in f_{v_0^{s-1}}(X) \}$ of $f_{v_0^{s-1}}(X)$ in $X$.
We then use the continuous map $h_{v_0^{s-1}}:=f_{v_0^{s-1}}g_{v_0^{s-1}}^{-1}$ to form the collection of open sets in $g_{v_0^{s-1}}(X)$, $\{ h_{v_0^{s-1}}^{-1}(O(x_s)): x_s \in f_{v_0^{s-1}}(X) \}$.
Note that the union of sets $h_{v_0^{s-1}}^{-1}(O(x_s))$ is $g_{v_0^{s-1}}(X)$.
Since $g_{v_0^{s-1}}(X)$ is equipped with the subspace topology of $Y^{s+1}$, each open set $h_{v_0^{s-1}}^{-1}(O(x_s))$ in $g_{v_0^{s-1}}(X)$ can be expanded to an open set $S'(x_s)$ in $Y^{s+1}$ so that $h_{v_0^{s-1}}$ maps points in $S'(x_s) \cap g_{v_0^{s-1}}(X)$ into $O(x_s)$.
Now $\{S'(x_s): x_s \in f_{v_0^{s-1}}(X) \}$ is an open cover of $g_{v_0^{s-1}}(X)$ in $Y^{s+1}$.
The open cover $\alpha$ of $g_{v_0^{s-1}}(X)$ in $Y^{s+1}$ is then chosen to be a finite subcover $\{ L^1, \dots, L^r \}$ of $\{S'(x_s)\}$.
The existence of a finite subcover is guaranteed by the compactness of $g_{v_0^{s-1}}(X)$ as the image of the compact set $X$ under the continuous map $g_{v_0^{s-1}}$.
Let $x_s^q$ be the point in $f_{v_0^{s-1}}(X)$ such that $L^q=S'(x_s^q)$, and let
\begin{equation*}
G_k(L^q)=H_k(x_s^q), \quad 1 \leq q \leq r, \, 0 \leq k \leq \tau-1.
\end{equation*}
\noindent By construction we have that if $x_0 \in g_{v_0^{s-1}}^{-1}(L^q)$ then
\begin{equation*}
f_{v_0^{s-1}G(L^q)} (x_0)
= f_{v_0^{s-1}{\{ H_k(x_s^q)\}}_{k=0}^{\tau-1}}(x_0) \in \mbox{int}K.
\end{equation*}
This confirms the feasibility of (C) under (Ob) and (WCI).
\section{Proof of Proposition \ref{B_j_properties}}
The openness of $B_j$ is confirmed by writing it as
\begin{multline*}
B_j=g_{v_0^{s-1}}^{-1}(A_0) \cap \Phi_{G(A_0)}^{-1} g_{v_0^{s-1}}^{-1}(A_1) \cap \Phi_{G(A_0)}^{-1} \Phi_{G(A_1)}^{-1} g_{v_0^{s-1}}^{-1}(A_2) \cap \cdots \\
\dots \cap \Phi_{G(A_0)}^{-1} \dots \Phi_{G(A_{j-2})}^{-1} g_{v_0^{s-1}}^{-1}(A_{j-1}),
\end{multline*}
where $\Phi_{G(A)}:= f_{v_0^{s-1}G(A)}$ is a continuous map.
Since $g_{v_0^{s-1}}$ is continuous and $\alpha$ is an open cover of $g_{v_0^{s-1}}(X)$ in $Y^{s+1}$, the collection $\{ g_{v_0^{s-1}}^{-1}(A) : A \in \alpha \}$ is an open cover of $X$.
Hence any $x_0 \in X$ must be in some $g_{v_0^{s-1}}^{-1}(A_0)$, $A_0 \in \alpha$.
Then constraint (C) implies that the $s+\tau$ inputs $v_0^{s-1}G(A_0)$ force $x_{s+\tau} \in \mbox{int}K \subset X$.
Repeating this process indefinitely, we see that for any $x_0 \in X$ there is a sequence $A_0, A_1, \dots$ of sets in $\alpha$ such that $x_{i(s+\tau)} \in g_{v_0^{s-1}}^{-1}(A_i)$ when the input sequences $u_{i(s+\tau)}^{(i+1)(s+\tau)-1}$
are used.
\section{Proof of Lemma \ref{subadditivity}}
For each $j, k \in \mathbb N$, the collection $\beta_{j+k}$ consists of all sets of the form
\begin{multline*}
B_{j+k}=\left[ g_{v_0^{s-1}}^{-1}(A_0) \cap \Phi_{G(A_0)}^{-1} g_{v_0^{s-1}}^{-1}(A_1) \cap \dots \cap \Phi_{G(A_0)}^{-1} \dots \Phi_{G(A_{j-2})}^{-1} g_{v_0^{s-1}}^{-1}(A_{j-1}) \right] \\
\cap \Phi_{G(A_0)}^{-1} \dots \Phi_{G(A_{j-1})}^{-1} \Big( g_{v_0^{s-1}}^{-1}(A_j) \cap \Phi_{G(A_j)}^{-1} g_{v_0^{s-1}}^{-1}(A_{j+1}) \cap \cdots \\
\dots \cap \Phi_{G(A_j)}^{-1} \dots \Phi_{G(A_{j+k-2})}^{-1} g_{v_0^{s-1}}^{-1}(A_{j+k-1}) \Big),
\end{multline*}
where $\Phi_{G(A)}= f_{v_0^{s-1}G(A)}$ with $A_0, \dots, A_{j+k-1}$ ranging over $\alpha$.
Note that the expression inside the square brackets runs over all sets in $\beta_j$, while the expression inside the large parentheses runs over all sets in $\beta_k$.
Constrain ${\{ A_i \}}_{i=0}^{j-1}$ to index sets in a minimal subcover $\beta_j'$ of $\beta_j$, and ${\{ A_i \}}_{i=j}^{j+k-1}$ to those in a minimal subcover $\beta_k'$ of $\beta_k$.
Denote the constrained family of sets $B_{j+k}$ thus formed by $\beta_{j+k}^{*}$.
We claim that $\beta_{j+k}^{*}$ is still an open cover for $X$.
To see this, observe that any $x_0 \in X$ must lie in a set $B_j' \in \beta_{j}'$ indexed by some sequence ${\{ A_i \}}_{i=0}^{j-1}$ in $\alpha$.
Furthermore, $\Phi_{G(A_{j-1})} \dots \Phi_{G(A_0)}(x_0) \in \mbox{int}K \subset X$ and thus lies in some set $B_k'$ in the minimal subcover $\beta_k'$.
Hence $x_0 \in \Phi_{G(A_0)}^{-1} \dots \Phi_{G(A_{j-1})}^{-1} (B_k')$ and so $\beta_{j+k}^{*}$ is still a cover for $X$.
As there are $N(\beta_j)$ sets in $\beta_j'$, and to each there correspond $N(\beta_k)$ possible sets in $\beta_k'$, the number of distinct elements in $\beta_{j+k}^{*}$ is less than or equal to $N(\beta_j) N(\beta_k)$.
By the definition of minimal subcovers we have that for any $j, k \in \mathbb N$,
\begin{equation*}
N(\beta_{j+k}) \leq N(\beta_j) N(\beta_k).
\end{equation*}
Taking the logarithm base $2$ of each side of the inequality yields the claimed subadditivity.
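The resulting inequality is precisely the hypothesis of Fekete's subadditivity lemma, which guarantees that $\lim_j \log_2 N(\beta_j)/j$ exists and equals $\inf_j \log_2 N(\beta_j)/j$. As a quick numerical sanity check, the following Python snippet compares the tail ratio with the infimum on a synthetic subadditive sequence (chosen for illustration, not derived from the covers in this proof):

```python
import math

def fekete(a, n_terms):
    """For a subadditive sequence a(j) (i.e. a(j+k) <= a(j) + a(k)), Fekete's
    lemma gives lim_j a(j)/j = inf_j a(j)/j; return both, approximately."""
    ratios = [a(j) / j for j in range(1, n_terms + 1)]
    return ratios[-1], min(ratios)

# synthetic subadditive example: a(j) = log2 of the submultiplicative count
# m(j) = 3**j + 5, since m(j+k) <= m(j) * m(k); a(j)/j decreases to log2(3)
a = lambda j: math.log2(3 ** j + 5)
```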
\end{document}
\begin{document}
\title{Manifold turnpikes of nonlinear \portHamiltonian descriptor systems under minimal energy supply}
\begin{abstract}
Turnpike phenomena of nonlinear port-Ha\-mil\-to\-ni\-an\xspace descriptor systems under minimal energy supply are studied.
Under assumptions on the smoothness of the system nonlinearities, it is shown that the optimal control problem is dissipative with respect to a manifold.
Then, under controllability assumptions, it is shown that the optimal control problem exhibits a manifold turnpike property.
\end{abstract}
\emph{Keywords: turnpike phenomenon, nonlinear systems, port-Ha\-mil\-to\-ni\-an\xspace systems}
\section{Introduction}
This paper is concerned with \emph{turnpike phenomena}.
These phenomena were first noticed in the context of economics~\cite{DorSS87,McK63} and have since been observed in many different situations, see, e.g.,~\cite{CarHL91,Zas06,TreZ18} and the references therein.
Usually, turnpike phenomena are studied in optimal control problems, where the goal is the minimization of a cost functional $C(u)$.
Here, the function $u$ acts as the control of a system of interest.
In many cases, it can be observed that an optimal control~$u^*$ will, for a majority of the time horizon, steer the associated state trajectory~$x^*$ to a point~\cite{PorZ13,PorZ16,TreZ15}, a set~\cite{Zas06,TreZ18,RapC04} or, as in our case, a manifold~\cite{FauFO22}.
In other words, optimal solutions depend mainly on the underlying system and the optimization objective and are more or less independent of the choice of the time horizon and other data, such as initial or final values.
The behaviour is reminiscent of an observation from daily life:
when traveling a long distance by car, it is usually faster to take a detour via a turnpike than to choose a more direct way on slower streets.
Also, the chosen turnpike usually does not heavily depend on the start and end of the route.
If one would start the journey a few blocks away, then the fastest path would remain more or less the same.
Here, we consider a special class of systems called \emph{port-Ha\-mil\-to\-ni\-an\xspace} (\textsf{pH}\xspace) systems.
Parts of the origins of port-Ha\-mil\-to\-ni\-an\xspace systems date back to the late 1950s~\cite{Pay61}, and the interested reader is referred to~\cite{VanJ14} for an overview on the origins of this system class.
Despite their long history, they continue to be the focus of active research~\cite{MosLYY,LamH22,SchM22,BreHU22,GerH22,MehU22}.
Arguably, the key feature of \textsf{pH}\xspace systems is their modeling perspective: they focus on taking energy as the \emph{lingua franca} between subsystems.
As a consequence, the class of \textsf{pH}\xspace systems is a promising class for modeling real world processes~\cite{MehU22}.
Benefits of port-Ha\-mil\-to\-ni\-an\xspace models include inherent stability and passivity, the invariance under Galerkin projection and congruence transformation, and the possibility to interconnect multiple~\textsf{pH}\xspace systems in a structure-preserving manner.
When \textsf{pH}\xspace systems are considered in an optimal control setting, the objective of minimizing the supplied energy is quite natural.
This results in a cost term of the form $\int_0^T y(t)^{\mathsf{T}} u(t) ~\mathrm{d}t$, where $y$ is a collocated observation of the system, and renders the corresponding optimal control problem singular.
In~\cite{SchPFWM21,FauMP22}, the authors have considered this objective for linear time invariant port-Ha\-mil\-to\-ni\-an\xspace (descriptor) systems.
They have shown that the optimal control problem has a measure turnpike property with respect to the dissipative part of the state space, given by the kernel of the matrix corresponding to the non-conservative system dynamics.
The infinite-dimensional linear case was discussed in~\cite{PhiSF21}.
In this paper, we are concerned with the finite-dimensional nonlinear case.
We show that, under smoothness assumptions on the nonlinearities and controllability assumptions on the system, nonlinear \textsf{pH}\xspace descriptor systems admit a turnpike phenomenon with respect to a submanifold of $\mathbb{R}^n$.
This submanifold corresponds, as in the linear case, to the energy dissipating part of the state space.
The structure of this paper is as follows.
In \Cref{sec:preliminaries}, we recall the definition of port-Ha\-mil\-to\-ni\-an\xspace systems and precisely state the optimal control problem that is considered.
After that, a short repetition of results on submanifolds of $\mathbb{R}^n$ follows in \Cref{sec:manifolds}.
In \Cref{sec:dissip}, we define manifold dissipativity and manifold turnpikes following~\cite{FauFO22} and recall that, under weak assumptions, manifold dissipativity implies a manifold turnpike property.
\Cref{sec:app-ph} contains the main results of this work, where the previously established results are applied to finite-dimensional nonlinear port-Ha\-mil\-to\-ni\-an\xspace descriptor systems.
The theoretical results are then illustrated by a numerical example in \Cref{sec:num}.
Finally, in \Cref{sec:conclusion} a conclusion is drawn and an outlook on future research is given.
\subsubsection*{Notation}
For a set $Z \subseteq \mathbb{R}^n$ we define $Z^\circ$ as the interior of $Z$.
We denote the Euclidean norm by $\norm{\cdot}$ and define the distance of a point $x\in\mathbb{R}^n$ to the set $\altmathcal{M}\subseteq\mathbb{R}^n$ as $\dist(x,\altmathcal{M}) := \inf_{p \in \altmathcal{M}} \norm{\hbox{$x-p$}}$.
We denote the set of all $k$-times continuously differentiable functions from $U$ to $V$ by $C^k(U,V)$ and define $C(U,V) := C^0(U,V)$.
When the spaces $U$ and $V$ are clear from context, we say that $f\in C^k(U,V)$ is of class $C^k$.
The derivative of a function~$f$ at point $x$ is denoted by $Df_x$.
Further, for a matrix $A\in\mathbb{R}^{n,n}$ we write $A \succeq 0$ if $x^{\mathsf{T}} A x \geq 0$ for all $x\in \mathbb{R}^{n}$, and $A \succ 0$ if $x^{\mathsf{T}} A x > 0$ for all $x\in\mathbb{R}^{n}\setminus\{0\}$, where $\cdot^{\mathsf{T}}$ denotes the transpose.
The kernel and range of the matrix~$A$ are denoted by $\ker(A)$ and $\ran(A)$, respectively.
The non-negative square-root of a positive semidefinite matrix $A$ is denoted by $A^{1/2}$.
Often, we suppress the time dependency of functions and write $z$ instead of~$z(t)$.
\section{Preliminaries and problem setting}\label{sec:preliminaries}
Following the definition of~\cite{MehU22}, we consider nonlinear port-Ha\-mil\-to\-ni\-an\xspace descriptor systems of the form
\begin{equation}\label{eq:nlph}
\begin{aligned}
E(x) \dot{x} = &~ (J(x) - R(x))\eta(x) + B(x) u, \\
y = &~ B(x)^{\mathsf{T}} \eta(x).
\end{aligned}
\tag{\textsf{pH}\xspace}
\end{equation}
Here, $x$, $u$ and $y$ are the state, input and output of the system, respectively.
We restrict our analysis to \textsf{pH}\xspace systems without feedthrough but note that the discussion can be extended to systems with a feedthrough term as introduced in~\cite{MehU22}.
We consider the state space $\altmathcal{X} = \mathbb{R}^n$ and a set of admissible controls $\altmathcal{U}_{\mathrm{ad}}$ and require that
\begin{equation*}
E,J, R \in C(\altmathcal{X}, \mathbb{R}^{n,n}),~~ \eta \in C(\altmathcal{X},\mathbb{R}^n) ~~ \text{and} ~~ B \in C(\altmathcal{X},\mathbb{R}^{n,m}).
\end{equation*}
Further, the functions $J$ and $R$ have to satisfy $J(x) = - J(x)^{\mathsf{T}}$ and $R(x) = R(x)^{\mathsf{T}} \succeq 0$ for all $x \in \altmathcal{X}$.
We assume that the system~\eqref{nlph} is associated with a Hamiltonian $\altmathcal{H} \in C^1(\altmathcal{X},\mathbb{R})$ which is bounded from below along any solution of~\eqref{nlph} and satisfies
\begin{equation*}
\ddx \altmathcal{H}(x) = E(x)^{\mathsf{T}} \eta(x)
\end{equation*}
for each $x\in\altmathcal{X}$.
Without loss of generality, as in~\cite{MehU22} we may assume that $\altmathcal{H}$ is nonnegative along any solution of~\eqref{nlph}.
\begin{remark}\label{rem:ph-time}
We only consider \textsf{pH}\xspace systems which do not explicitly depend on time.
The definition given here can be generalized to include explicit time dependence, but these systems can easily be made autonomous~\cite[Remark 4.2]{MehU22}.
\end{remark}
When it comes to the optimal control of port-Ha\-mil\-to\-ni\-an\xspace systems, the cost functional should reflect that \textsf{pH}\xspace modeling uses energy as the lingua franca.
Hence, choosing the supplied energy as the optimization objective is quite natural.
For this, usually the impedance supply $y^{\mathsf{T}} u$ is considered, which is related to the scattering supply $\norm{u}^2 - \norm{y}^2$ via the Cayley transform~\cite{Sta02}.
We focus on the former and thus consider the optimal control problem
\begin{equation}\label{eq:ocp}
\left.
\begin{aligned}
\min_{u \in \altmathcal{U}_{\mathrm{ad}}}~C_{\textsf{pH}\xspace,T}(u) := & \int_{0}^{T} y^{\mathsf{T}} u ~\mathrm{d}t \\
\text{subject to the dynamics~\eqref{nlph} and}\hspace{-2.1cm} \\
x(0) = x_0, ~~ x(T) = x_T. \hspace{-1.3cm}\\
\end{aligned}
\tag{$\text{\textsf{pH}\xspace OCP}_T$}
\hspace{.5cm}
\right\}
\end{equation}
Here and in the following, we assume $x_0, x_T \in K$, where $K \subseteq \mathbb{R}^n$ is a compact set.
Further, as we are interested in the properties of optimal solutions to~\eqref{ocp}, throughout the paper we assume that an optimal solution~$u^*$ and a corresponding trajectory $x^*$ exist.
This assumption is quite restrictive and may be violated.
The existence and uniqueness of optimal controls for nonlinear differential-algebraic equations is studied in, e.g.,~\cite{KunM08}.
It can be shown~\cite{MehU22} that the \emph{power-balance} equation
\begin{equation}\label{eq:powerbalance}
\ddt \altmathcal{H}(x) = -\eta(x)^{\mathsf{T}} R(x) \eta(x) + y^{\mathsf{T}} u
\end{equation}
holds along any solution $x$ of~\eqref{nlph}.
This allows us to rewrite the cost functional $C_{\textsf{pH}\xspace,T}(u)$ as
\begin{equation*}
C_{\textsf{pH}\xspace,T}(u) = \altmathcal{H}(x_T) - \altmathcal{H}(x_0) + \int_{0}^{T}\norm{R(x)^{1/2} \eta(x)}^2 ~\mathrm{d}t.
\end{equation*}
This equation is called the \emph{energy-balance} equation, and we can interpret each of the terms physically~\cite{OrtVM02}.
The term $\altmathcal{H}(x_T) - \altmathcal{H}(x_0)$ measures the conserved energy, while the integral term corresponds to the dissipated energy.
By rearranging and plugging in the definition of $C_{\textsf{pH}\xspace,T}$, we see that
\begin{equation*}
\altmathcal{H}(x_T) - \altmathcal{H}(x_0) = \int_{0}^{T}y^{\mathsf{T}} u - \norm{R(x)^{1/2}\eta(x)}^2 ~\mathrm{d}t.
\end{equation*}
Note that this implies dissipativity in the sense of Willems~\cite{Wil72a}, and as a consequence shows the aforementioned passivity of \textsf{pH}\xspace systems.
We will use both of these equations in \Cref{sec:app-ph}.
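Both identities can be checked numerically for a concrete linear \textsf{pH}\xspace system. The following sketch (an illustration of ours, not part of the paper; the matrices are hypothetical) verifies the power-balance equation~\eqref{powerbalance} pointwise, taking $E = I$ and $\eta(x) = Qx$, so that $\ddt \altmathcal{H}(x) = (Qx)^{\mathsf{T}}\dot{x}$:

```python
import numpy as np

# Illustrative linear pH system: E = I, eta(x) = Q x, constant J, R, B.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric
R = np.array([[0.5, 0.0], [0.0, 0.0]])    # symmetric positive semidefinite
Q = np.array([[2.0, 0.0], [0.0, 1.0]])    # Hamiltonian H(x) = 0.5 x^T Q x
B = np.array([[1.0], [0.0]])

def xdot(x, u):
    return (J - R) @ (Q @ x) + B @ u

def output(x):
    return B.T @ (Q @ x)

x = np.array([0.3, -1.2])
u = np.array([0.7])

eta = Q @ x
# Left-hand side: dH/dt = (grad H)^T xdot = (Q x)^T xdot.
lhs = eta @ xdot(x, u)
# Right-hand side: -eta^T R eta + y^T u.
rhs = -eta @ (R @ eta) + output(x) @ u
assert np.isclose(lhs, rhs)  # power balance holds pointwise
```

The skew-symmetry of $J$ makes the conservative term $(Qx)^{\mathsf{T}} J (Qx)$ vanish, which is exactly why only the dissipation and supply terms remain on the right-hand side.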
\section{Submanifolds of $\mathbb{R}^n$ and the orthogonal projection} \label{sec:manifolds}
This section repeats mostly well-known results regarding submanifolds of $\mathbb{R}^n$, with a focus on manifolds defined as the zero locus of some smooth function.
The main result of the section is \Cref{lem:dist}, which provides an upper bound for the distance of a point to such a manifold.
We begin by recalling the classical definition of submanifolds of $\mathbb{R}^n$.
Following~\cite{DudH94,LeoS21}, we distinguish manifolds whose tangent spaces locally satisfy a Lipschitz condition.
\begin{definition}[submanifolds of $\mathbb{R}^n$]\hphantom{abc}
\begin{itemize}
\item
Let $\altmathcal{M}$ be a subset of $\mathbb{R}^n$.
We call $\altmathcal{M}$ an \emph{$s$-dimensional $C^k$ manifold} if
for each $p\in \altmathcal{M}$ there exist an open neighborhood $U$ of $p$ and a $C^k$ diffeomorphism $\phi: U \to \phi(U)\subseteq \mathbb{R}^n$ such that
\begin{equation*}
\altmathcal{M} \cap U = \{ x\in U ~|~ \phi_{s+1}(x) = \dots = \phi_n(x) =0 \}.
\end{equation*}
The function $\phi$ is called a \emph{local coordinate system of $\altmathcal{M}$ at $p$}.
\item
Let $\altmathcal{M} \subseteq \mathbb{R}^n$ be an $s$-dimensional manifold, $p\in \altmathcal{M}$ and let $\phi: U \to \mathbb{R}^n$ be a local coordinate system of $\altmathcal{M}$ at $p$.
We define the \emph{tangent space at $p$ relative to $\altmathcal{M}$} as
\begin{equation*}
T_p\altmathcal{M} := D\phi_{\phi(p)}^{-1}(\left\{ y\in\mathbb{R}^n ~\middle|~ y_{s+1} = \dots = y_n = 0 \right\}).
\end{equation*}
The space $N_p\altmathcal{M} := T_p\altmathcal{M}^\perp$ is called the \emph{normal space at $p$ relative to $\altmathcal{M}$}.
\item
We call $\altmathcal{M}\subseteq \mathbb{R}^n$ an \emph{$s$-dimensional $C^{k,1}$ manifold} if $\altmathcal{M}$ is an $s$-di\-men\-sion\-al $C^k$ manifold and for all $p \in \altmathcal{M}$ there exists a set $V\subseteq \altmathcal{M}$ that is open relative to $\altmathcal{M}$ and a positive constant $L>0$ such that $p \in V$ and for all ${\widetilde{p}} \in V$ it holds that
\begin{equation*}
d_{\mathrm{H}}(T_p\altmathcal{M}, T_{\widetilde{p}}\altmathcal{M}) \leq L \norm{p - {\widetilde{p}}}.
\end{equation*}
Here, $d_{\mathrm{H}}$ denotes the \emph{Hausdorff distance} defined by
\begin{equation*}
d_{\mathrm{H}}(T_1, T_2) := \sup \left\{ \inf \left\{ \norm{t_2-t_1} ~\middle|~ t_2 \in T_2 \cap S \right\} ~\middle|~ t_1 \in T_1 \cap S \right\},
\end{equation*}
where $S := \left\{ z \in \mathbb{R}^n ~\middle|~ \norm{z} = 1 \right\}$ is the unit sphere.
\end{itemize}
\end{definition}
Next, we recall the definition of the orthogonal projection on a manifold from~\cite{LeoS21}.
For this, consider a manifold $\altmathcal{M}\subseteq \mathbb{R}^n$ and define the set of points with the \emph{unique nearest points property} as
\begin{equation*}
\unpp(\altmathcal{M}) := \left\{ x\in\mathbb{R}^n ~\middle|~ \text{there exists a unique}~\xi\in\altmathcal{M} ~\text{with}~ \dist(x,\altmathcal{M}) = \norm{x-\xi} \right\}.
\end{equation*}
Clearly, for each $x\in\unpp(\altmathcal{M})$ there exists a unique $p(x)\in\altmathcal{M}$ with the property
\begin{equation*}
\norm{x-p(x)} = \dist(x,\altmathcal{M}) = \inf_{p \in \altmathcal{M}} \norm{x-p}.
\end{equation*}
\begin{definition}[orthogonal projection on manifold,~\cite{LeoS21}]
Let $\altmathcal{M}\subseteq \mathbb{R}^n$ be a manifold.
The function $p: \unpp(\altmathcal{M}) \to \altmathcal{M}, ~ x \mapsto p(x)$ is called the \emph{orthogonal projection on $\altmathcal{M}$}.
\end{definition}
Often, we write $p_x$ instead of $p(x)$.
The maximal open set on which the orthogonal projection is defined plays a special role in~\cite{LeoS21} and also in our setting.
We will refer to this set as $\altmathcal{E}(\altmathcal{M}) := \unpp(\altmathcal{M})^\circ$.
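As a concrete illustration (our own, not taken from~\cite{LeoS21}): for the unit circle $\altmathcal{M} = \{x\in\mathbb{R}^2 ~|~ \norm{x}=1\}$ one has $\unpp(\altmathcal{M}) = \mathbb{R}^2\setminus\{0\}$ and $p_x = x/\norm{x}$, which the following sketch checks numerically:

```python
import numpy as np

def proj_circle(x):
    """Orthogonal projection onto the unit circle M = {x : ||x|| = 1},
    defined on unpp(M) = R^2 \\ {0}."""
    nx = np.linalg.norm(x)
    assert nx > 0, "x = 0 has no unique nearest point on M"
    return x / nx

x = np.array([3.0, 4.0])                              # ||x|| = 5
p = proj_circle(x)
assert np.isclose(np.linalg.norm(p), 1.0)             # p_x lies on M
assert np.isclose(np.linalg.norm(x - p), 4.0)         # dist(x, M) = | ||x|| - 1 |

# x - p_x is orthogonal to the tangent space T_{p_x}M,
# which is spanned by the 90-degree rotation of p_x.
tangent = np.array([-p[1], p[0]])
assert np.isclose(np.dot(x - p, tangent), 0.0)
```

The last assertion is exactly the orthogonality property $x - p_x \perp T_{p_x}\altmathcal{M}$ recalled in \Cref{prop:manifolds} below.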
The next proposition collects selected results on submanifolds of $\mathbb{R}^n$ defined in a particular manner and will be useful in \Cref{sec:app-ph}, where it will allow us to study the optimal control problem~\eqref{ocp}.
\begin{proposition}\label{prop:manifolds}
Suppose $f:\mathbb{R}^n \to \mathbb{R}^n$ is of class $C^2$, assume that
\begin{equation*}
\altmathcal{M} := \left\{ x\in\mathbb{R}^n ~\middle|~ f(x) = 0 \right\}
\end{equation*}
is nonempty and assume that there exists an open neighborhood $G\subseteq \mathbb{R}^n$ of $\altmathcal{M}$ such that for all $x\in G$ it holds that $\dim(\ker(Df_x)) = s$, where the constant $s$ is independent of $x$ and satisfies $0 < s < n$.
Then
\begin{enumerate}[(i)]
\item the set $\altmathcal{M}$ is an $s$-dimensional $C^2$ submanifold of $\mathbb{R}^n$,
\item the tangent space at $p \in\altmathcal{M}$ is given by $T_p\altmathcal{M} = \ker(Df_p)$,
\item the manifold $\altmathcal{M}$ is a $C^{2,1}$ manifold, and
\item it holds that $\altmathcal{M} \subseteq \altmathcal{E}(\altmathcal{M})$, and if $x\in\altmathcal{E}(\altmathcal{M})\setminus\altmathcal{M}$ then $x-p_x \perp T_{p_x}\altmathcal{M}$.
\end{enumerate}
\end{proposition}
\begin{proof} \hphantom{a}
\begin{enumerate}[(i)]
\item
Let $p\in \altmathcal{M}$.
It can be shown~\cite{Wal09} that there exist open neighborhoods $U_1\subseteq \mathbb{R}^n$ of $p$ and $U_2\subseteq \mathbb{R}^n$ of $f(p)$ and $C^2$ diffeomorphisms
\begin{equation*}
\phi: U_1 \to \phi(U_1)~~~~ \text{and} ~~~~ \psi: U_2 \to \psi(U_2)
\end{equation*}
such that
\begin{equation*}
f(U_1) \subseteq U_2
\end{equation*}
and
\begin{equation*}
\psi \circ f \circ \phi^{-1} (y_{1},\dots,y_n) = (y_1,\dots,y_{n-s}, 0,\dots,0)
\end{equation*}
for all $y\in \phi(U_1)$.
Thus, for each $v \in U_1$ we have
\begin{align*}
v \in \altmathcal{M}
& \Longleftrightarrow f(v) = 0 \\
& \Longleftrightarrow (\psi\circ f)(v) = 0 \\
& \Longleftrightarrow (\psi\circ f \circ \phi^{-1} \circ \phi)(v) = 0 \\
& \Longleftrightarrow \phi_1(v) = \dots = \phi_{n-s}(v) = 0.
\end{align*}
Hence, after permuting the components of $\phi$, the map $\phi$ is a local coordinate system of $\altmathcal{M}$ at $p$.
\item
Suppose that $p\in \altmathcal{M}$ and that $\phi: U \to \mathbb{R}^n$ is a local coordinate system of $\altmathcal{M}$ at $p$.
Since
\begin{equation*}
(f\circ\phi^{-1}) \big(\{ y\in\mathbb{R}^n ~|~ y_{s+1} = \dots = y_n = 0 \} \cap \phi(U)\big) = \{0\},
\end{equation*}
we have
\begin{equation*}
Df_p(T_p\altmathcal{M}) = Df_p\Big(D\phi_{\phi(p)}^{-1}\big(\left\{ y\in\mathbb{R}^n ~\middle|~ y_{s+1} = \dots = y_n = 0 \right\}\big)\Big) = 0.
\end{equation*}
The claim then follows from the fact that $\ker(Df_p)$ has dimension $s$.
\item
In~\cite[Equations (3.3) and (3.6)]{DudH94}, it is shown that a $C^k$ manifold with $k\geq 2$ is also a $C^{k,1}$ manifold, from which the claim follows.
\item
This claim was proven in~\cite{LeoS21}.\qedhere
\end{enumerate}
\end{proof}
We finish this section with \Cref{lem:dist} and a corresponding remark.
The lemma establishes an upper bound on the distance to the manifold $\altmathcal{M}$ defined in \Cref{prop:manifolds} in terms of the function $f$.
This result will be the key in our application to port-Ha\-mil\-to\-ni\-an\xspace systems, as it will allow us to deduce a dissipativity property for~\eqref{ocp}.
\begin{lemma}\label{lem:dist}
Suppose $f:\mathbb{R}^n \to \mathbb{R}^n$ is of class $C^2$, assume that
\begin{equation*}
\altmathcal{M} := \left\{ x\in\mathbb{R}^n ~\middle|~ f(x) = 0 \right\}
\end{equation*}
is nonempty and assume that there exists an open neighborhood $G\subseteq \mathbb{R}^n$ of $\altmathcal{M}$ such that for all $x\in G$ it holds that $\dim(\ker(Df_x)) = s$, where the constant $s$ is independent of $x$ and satisfies $0 < s < n$.
Further, assume that for each $x\in G$ the smallest nonzero singular value of $Df_x$ is bounded from below by $\widetilde{c} > 0$.
Then $\altmathcal{M}$ is a $C^{2}$ manifold and there exists an open set $V \subseteq \mathbb{R}^n$ and a constant $c>0$ with
$\altmathcal{M} \subseteq V \subseteq \altmathcal{E}(\altmathcal{M})$
and
\begin{equation}\label{eq:fdist}
c \dist(x,\altmathcal{M}) \leq \norm{f(x)}
\end{equation}
for all $x\in V$.
\end{lemma}
\begin{proof}
We will first show that~\eqref{fdist} is true locally.
Fix a point $p\in\altmathcal{M}$ and notice that due to $f\in C^2$, for all $x \in \mathbb{R}^n$ we have
\begin{equation}\label{eq:ftaylor}
f(x) = f(p) + Df_{p}(x-p) + g_{p}(x-p),
\end{equation}
where the remainder $g_p$ satisfies
\begin{equation}\label{eq:remainder}
\lim_{x\to p}\frac{\norm{g_p(x-p)}}{\norm{x-p}} = 0.
\end{equation}
To establish~\eqref{fdist} locally, our first goal is to show that there exists an open set $U_p \subseteq \altmathcal{E}(\altmathcal{M})$ such that for all $x\in V_p := U_p \cap N_p\altmathcal{M}$ we have
\begin{equation}\label{eq:gdurchd}
\frac{\norm{g_{p}(x-p)}}{\norm{Df_{p}(x-p)}} < \frac12.
\end{equation}
For the sake of simplicity, let us set $A_p := Df_{p}^{\mathsf{T}} Df_{p}$.
Then, by the definition of $\altmathcal{M}$ and \Cref{prop:manifolds}, we have
$
\ker(A_p) = \ker(Df_{p}) = T_{p}\altmathcal{M}.
$
Now, let us decompose $\mathbb{R}^n = \ker(A_p) \oplus \ran(A_p) = T_{p}\altmathcal{M} \oplus N_{p}\altmathcal{M}$ and accordingly also $A_p = 0 \oplus A_p^{(2)}$.
Since $A_p^{(2)}$ is symmetric positive definite, for any $x\in N_p\altmathcal{M}$ we obtain
\begin{equation}\label{eq:lambda_min_A2}
\norm{Df_{p}(x-{p})}^2 = (x-{p})^{\mathsf{T}} A_p (x-{p}) = (x-{p})^{\mathsf{T}} A_p^{(2)} (x-{p}) \geq \lambda_{\min}(A_p^{(2)}) \norm{x-{p}}^2.
\end{equation}
Now, using~\eqref{remainder} and~\eqref{lambda_min_A2}, we obtain
\begin{equation*}
0 = \lim_{x\to p}\frac{\norm{g_p(x-p)}}{\norm{x-p}} \geq \lim_{x\to p} c_p \frac{\norm{g_p(x-p)}}{\norm{Df_p(x-p)}} \geq 0,
\end{equation*}
where we set $c_p:=\lambda_{\min}(A_p^{(2)})^{1/2} > 0$.
In particular, we have
\begin{equation*}
\lim_{x \to p} \frac{\norm{g_p(x-p)}}{\norm{Df_p(x-p)}} = 0.
\end{equation*}
Thus, choosing $x \in N_p\altmathcal{M}$ sufficiently close to $p$ we obtain the estimate~\eqref{gdurchd}, and we deduce that an open set $U_p$ with the desired properties exists.
Now, since~\eqref{ftaylor} and $p\in \altmathcal{M}$ imply
\begin{equation*}
\norm{Df_{p}(x-{p})} \leq \norm{f(x)} + \norm{g_{p}(x-{p})}\quad \text{for all} ~~ x \in \mathbb{R}^n,
\end{equation*}
using~\eqref{gdurchd} we obtain
\begin{equation}\label{eq:12absch}
\tfrac12 \norm{Df_{p}(x-p)} \leq \norm{f(x)} \quad \text{for all} ~~ x \in V_p = U_p \cap N_p\altmathcal{M}.
\end{equation}
To finish the local argument, notice that by~\eqref{lambda_min_A2} we have
\begin{equation*}
c_p \dist(x,\altmathcal{M}) = c_p \norm{x-p} \leq \norm{Df_p(x-p)} \quad \text{for all} ~~ x \in V_p,
\end{equation*}
which together with~\eqref{12absch} shows that~\eqref{fdist} holds for all $x\in V_p$.
To construct the set $V$, first notice that the differentiability of $f$ implies that an expression of the form~\eqref{ftaylor} is possible on the set $\altmathcal{E}(\altmathcal{M})$.
In other words, there exists a function~$g_{\cdot}(\cdot)$ such that
\begin{equation*}
f(x) = f(p_x) + Df_{p_x}(x-p_x) + g_{p_x}(x-p_x)
\end{equation*}
for all $x\in\altmathcal{E}(\altmathcal{M})$.
Since the orthogonal projection $x\mapsto p_x$ is differentiable~\cite{LeoS21} and $f\in C^2$, the map $x \mapsto g_{p_x}(x-p_x)$ is continuous on $\altmathcal{E}(\altmathcal{M})$.
Define the function
\begin{equation*}
h: \altmathcal{E}(\altmathcal{M}) \to \mathbb{R}, ~ x \mapsto \frac{\norm{g_{p_x}(x-p_x)}}{\norm{Df_{p_x}(x-p_x)}},
\end{equation*}
where we set $h(x) := 0$ for $x\in\altmathcal{M}$, since in this case $x = p_x$ and the quotient is not defined.
Then $h$ is continuous and hence the preimage of the open set $(-\infty,\tfrac12)\subseteq \mathbb{R}$ under $h$ is open.
Note that $\altmathcal{M}$ is a subset of this preimage.
Define
\begin{equation*}
\altmathcal{M} \subseteq V := h^{-1}\big((-\infty,\tfrac12)\big) \subseteq \altmathcal{E}(\altmathcal{M}).
\end{equation*}
Then for each $p\in \altmathcal{M}$ we have
\begin{equation*}
V_p \subseteq V \cap N_p\altmathcal{M}.
\end{equation*}
The previous arguments can then be used to show that for $c := \tfrac{\widetilde{c}}{2}$ and $x \in V$ we have
\begin{equation*}
2c \dist(x,\altmathcal{M}) \leq c_{p_x} \dist(x,\altmathcal{M}) \leq \norm{Df_{p_x}(x-{p_x})} \leq 2 \norm{f(x)},
\end{equation*}
finishing the proof.
\end{proof}
\begin{remark}\label{rem:ineq}
The estimate~\eqref{fdist} is related to the {\L}ojasiewicz inequality~\cite{Loj59,JiKS92}, which states that for a real analytic function $g: U \to \mathbb{R}$ defined on an open set $U\subseteq \mathbb{R}^n$ and a compact set $K \subseteq U$, the distance of $x \in K$ to the zero locus $\altmathcal{Z} := \{ z \in U ~|~ g(z) = 0 \}$ of $g$ may be estimated by
\begin{equation*}
\dist(x, \altmathcal{Z})^\alpha \leq C~|g(x)|,
\end{equation*}
where $\alpha$ and $C$ are positive constants.
\end{remark}
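A one-dimensional illustration (our choice of $g$, hypothetical): the real analytic function $g(x) = x(1+x^2)$ has zero locus $\altmathcal{Z} = \{0\}$, and on $K = [-1,1]$ the {\L}ojasiewicz inequality holds with $\alpha = C = 1$, since $\dist(x,\altmathcal{Z}) = |x| \leq |x|(1+x^2) = |g(x)|$:

```python
import numpy as np

# Hypothetical real analytic function with zero locus Z = {0}.
g = lambda x: x * (1.0 + x**2)

# On the compact set K = [-1, 1]: dist(x, Z) = |x| <= |g(x)|,
# i.e. the Lojasiewicz inequality with alpha = 1 and C = 1.
xs = np.linspace(-1.0, 1.0, 201)
assert np.all(np.abs(xs) <= np.abs(g(xs)) + 1e-12)
```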
\section{Manifold dissipativity and manifold turnpikes}
\label{sec:dissip}
In this section, we recall the definition of dissipativity with respect to a manifold and the definition of manifold turnpikes as introduced in~\cite{FauFO22}.
Further, a theorem relating the two properties is stated.
Here and in the following, the set $\altmathcal{K}$ is defined as
\begin{equation*}
\altmathcal{K} := \{ \alpha: [0,\infty) \to [0,\infty) ~|~ \alpha(0)=0, ~ \alpha \text{ is continuous and strictly increasing}\}.
\end{equation*}
We consider the general optimal control problem
\begin{equation}\label{eq:gen-ocp}
\left.
\begin{aligned}
\min_{u \in \altmathcal{U}}~C_T(u) := & \int_{0}^{T} \ell(x,u) ~\mathrm{d}t \\
\text{subject to} \qquad & \\
h(x) \dot{x} = &~ g(x,u), \\
x(0) = x_0, \quad x(T) = x_T.\hspace{-1.95cm}
\end{aligned}
\tag{$\text{OCP}_T$}
\right\}
\end{equation}
As before, we assume $x_0, x_T \in K$, where $K \subseteq \mathbb{R}^n$ is a compact set.
Here, the function $g$ defines the dynamics of the system and the function $h$ corresponds to possible algebraic constraints.
We refrain from further specification of these functions as~\eqref{gen-ocp} is only used for general definitions.
Throughout this section, we assume that an optimal control~$u^*$ of~\eqref{gen-ocp} and an associated trajectory $x^*$ exist.
We begin with the definition of manifold dissipativity.
The definition is related to Willems' notion of dissipativity~\cite{Wil72a} and is also found in~\cite{FauFO22}.
\begin{definition}[manifold dissipativity]\label{def:dissip}
Consider the optimal control problem~\eqref{gen-ocp} together with the manifold $\altmathcal{M}\subseteq\mathbb{R}^n$.
We say that~\eqref{gen-ocp} is \emph{dissipative with respect to the manifold $\altmathcal{M}$} if there exists a function $\altmathcal{S}: \mathbb{R}^n \to [0,\infty)$ that is bounded on compact sets and a function $\alpha\in\altmathcal{K}$ such that all optimal controls $u^*$ and associated trajectories $x^*$ satisfy the dissipation inequality
\begin{equation}\label{eq:gen-dissip}
\altmathcal{S}(x_T) - \altmathcal{S}(x_0) \leq \int_{0}^{T} \ell(x^*,u^*) - \alpha(\dist(x^*,\altmathcal{M})) ~\mathrm{d}t
\end{equation}
for all $T >0$.
\end{definition}
The function $\altmathcal{S}$ from \Cref{def:dissip} is also called a storage function.
Note that we require the dissipation inequality~\eqref{gen-dissip} only to hold along optimal solutions of~\eqref{gen-ocp}.
This is not a severe restriction when turnpike phenomena are studied, as we are only interested in properties of optimal solutions.
Next, we define a manifold turnpike property, again following~\cite{FauFO22}.
The property is essentially a measure-turnpike notion; see, e.g.,~\cite{CarHL91,TreZ18,FauKJB17,Zas06}.
\begin{definition}[manifold turnpike]
Consider the optimal control problem~\eqref{gen-ocp}
together with the manifold $\altmathcal{M}\subseteq\mathbb{R}^n$.
We say that~\eqref{gen-ocp} has the \emph{manifold turnpike property with respect to the manifold $\altmathcal{M}$} if for all compact sets $K\subseteq \mathbb{R}^n$ and all $\varepsilon>0$ there exists a constant $C_{K,\varepsilon} > 0$ such that for all $T>0$ all optimal trajectories $x^*$ of~\eqref{gen-ocp} satisfy
\begin{equation*}
\lambda\big(\{ t\in[0,T] ~|~ \dist(x^*(t),\altmathcal{M}) > \varepsilon\}\big) \leq C_{K,\varepsilon}
\end{equation*}
for all $x_0,x_T \in K$.
Here, $\lambda$ denotes the Lebesgue measure.
\end{definition}
Variants of the next theorem can be found in~\cite{FauFO22,FauKJB17,GruG21}.
The theorem shows that manifold dissipativity implies a manifold turnpike property.
\begin{theorem}[manifold dissipativity implies manifold turnpike]\label{thm:diss->turn}
Consider the optimal control problem~\eqref{gen-ocp} together with a submanifold $\altmathcal{M} \subseteq \mathbb{R}^n$ and assume that
\begin{enumerate}[(i)]
\item \label{as:i-diss-turn} there exists a constant $C_\ell(K)>0$ such that for all optimal controls $u^*$ of~\eqref{gen-ocp} and the associated trajectories $x^*$ we have
\begin{equation*}
\int_0^T \ell(x^*,u^*) ~\mathrm{d}t < C_\ell(K)
\end{equation*}
for all $T > 0$, and
\item the optimal control problem is dissipative with respect to the manifold $\altmathcal{M}$.
\end{enumerate}
Then the optimal control problem~\eqref{gen-ocp} has the manifold turnpike property.
\end{theorem}
\begin{remark}\label{rem:cost-controllability}
In~\cite{FauFO22}, \Cref{thm:diss->turn} is stated with a stronger assumption in the place of~\ref{as:i-diss-turn}, which can be interpreted as a controllability property.
For the sake of simplicity, we do not consider this case.
\end{remark}
\section{Application to port-Ha\-mil\-to\-ni\-an\xspace systems}\label{sec:app-ph}
Finally, we are ready to apply the previous results to port-Ha\-mil\-to\-ni\-an\xspace systems and the optimal control problem~\eqref{ocp}.
First, recall that we can rewrite the cost functional $C_{\textsf{pH}\xspace,T}(u)$ as
\begin{equation}\label{eq:cu}
C_{\textsf{pH}\xspace,T}(u) = \int_{0}^{T} y^{\mathsf{T}} u ~\mathrm{d}t = \altmathcal{H}(x_T) - \altmathcal{H}(x_0) + \int_{0}^{T}\norm{R(x)^{1/2} \eta(x)}^2 ~\mathrm{d}t,
\end{equation}
and that rearranging gives
\begin{equation}\label{eq:cu2}
\altmathcal{H}(x_T) - \altmathcal{H}(x_0) = \int_{0}^{T}y^{\mathsf{T}} u - \norm{R(x)^{1/2}\eta(x)}^2 ~\mathrm{d}t.
\end{equation}
Equation~\eqref{cu} hints that any optimal trajectory will have to spend most of the time close to the set
\begin{align*}
\altmathcal{M}
:= & \big\{ x\in\mathbb{R}^n ~\big|~ R(x)^{1/2}\eta(x) = 0 \big\},
\end{align*}
and that $\altmathcal{H}$ can be used as a storage function to derive dissipativity notions with respect to~$\altmathcal{M}$.
Our aim will be to formalize these ideas.
The first step will be to ensure that~$\altmathcal{M}$ has the necessary manifold structure.
For that, we make the following assumptions.
\begin{assumption}\label{as:ph}\hphantom{abc}
\begin{enumerate}[({A}1)]
\item \label{as:A1} The map $f: \mathbb{R}^n \to \mathbb{R}^n, ~ x\mapsto R(x)^{1/2} \eta(x)$ is of class $C^2$.
\item \label{as:A2} The set $\altmathcal{M}$ is nonempty and there exists an open neighborhood $G\subseteq \mathbb{R}^n$ of $\altmathcal{M}$ such that for all $x\in G$ it holds that $\dim(\ker(Df_x)) = s$, where the constant $s$ is independent of $x$ and satisfies $0 < s < n$.
\item \label{as:A3} For each $x\in G$, the smallest nonzero singular value of $Df_x$ is bounded from below by a positive constant $\widetilde{c} > 0$.
\item \label{as:A4} Let $V$ be the open set from \Cref{lem:dist}. Any optimal trajectory $x^*$ of~\eqref{ocp} remains in $V$ for all times.
\end{enumerate}
\end{assumption}
With assumptions \ref{as:A1} and \ref{as:A2}, \Cref{prop:manifolds} ensures that $\altmathcal{M}$ is an $s$-dimensional $C^{2,1}$ submanifold of $\mathbb{R}^n$ and that $\altmathcal{M} \subseteq \altmathcal{E}(\altmathcal{M})$.
The next step is to show that the problem is dissipative.
As we will see shortly, this is ensured by assumptions \ref{as:A3} and \ref{as:A4}, which allow us to use \Cref{lem:dist} to conclude that the optimal control problem~\eqref{ocp} is dissipative with respect to the manifold~$\altmathcal{M}$.
Notice that \ref{as:A4} implies $x_0,x_T \in V$.
\begin{theorem}[\eqref{ocp} is dissipative]\label{thm:ph-dissip}
Under \Cref{as:ph}, the optimal control problem~\eqref{ocp} is dissipative with respect to the manifold $\altmathcal{M}$ with storage function $\altmathcal{H}$.
\end{theorem}
\begin{proof}
The proof is essentially an application of \Cref{lem:dist}.
Since all assumptions of \Cref{lem:dist} are satisfied under \Cref{as:ph}, there exists an open set $V\subseteq \mathbb{R}^n$ and constant $c>0$ such that $\altmathcal{M} \subseteq V \subseteq \altmathcal{E}(\altmathcal{M})$ and
\begin{equation}\label{eq:fdist-ph}
c \dist(x,\altmathcal{M}) \leq \norm{f(x)} = \norm{R(x)^{1/2} \eta(x)}
\end{equation}
holds for all $x\in V$.
In particular, assumption \ref{as:A4} ensures that the estimate holds along any optimal trajectory $x^*$ of~\eqref{ocp}.
With this and~\eqref{cu2}, we see that for any optimal control $u^*$ and the associated trajectory $x^*$ and output $y^*$, we have
\begin{align*}
\altmathcal{H}(x_T) - \altmathcal{H}(x_0)
& = \int_{0}^{T}{y^*}^{\mathsf{T}} u^* - \norm{R(x^*)^{1/2}\eta(x^*)}^2 ~\mathrm{d}t \\
& \leq \int_{0}^{T}{y^*}^{\mathsf{T}} u^* - c^2 \dist(x^*,\altmathcal{M})^2 ~\mathrm{d}t \\
& = \int_{0}^{T}{y^*}^{\mathsf{T}} u^* - \alpha(\dist(x^*,\altmathcal{M})) ~\mathrm{d}t,
\end{align*}
where $\alpha\in\altmathcal{K}$ is defined by $\alpha(s) := c^2 s^2$.
Finally, note that the Hamiltonian $\altmathcal{H}$, acting as a storage function here, is bounded on compact sets since it is differentiable.
\end{proof}
\begin{remark}\label{rem:ineq2}
Let us emphasize that the estimate~\eqref{fdist} from \Cref{lem:dist} was the key to conclude the dissipativity property of~\eqref{ocp}.
As we have mentioned in \Cref{rem:ineq}, the estimate is related to {\L}ojasiewicz' inequality~\cite{Loj59,JiKS92}.
In fact, if the map $g: x \mapsto \norm{f(x)}^2$ is real analytic, then we may use the {\L}ojasiewicz inequality to derive dissipativity without~\Cref{lem:dist}, as long as any optimal trajectory stays in some compact set $D\subseteq \mathbb{R}^n$.
\end{remark}
Now, an application of \Cref{thm:diss->turn} yields the following result, showing that the optimal control problem~\eqref{ocp} has the manifold turnpike property with respect to~$\altmathcal{M}$.
\vbox{
\begin{theorem}[\eqref{ocp} has a manifold turnpike]
\label{thm:ph-turn}
In addition to \Cref{as:ph}, assume
\begin{enumerate}[({A}1)]
\setcounter{enumi}{4}
\item \label{as:mta1}there exists a control $u_1 \in \altmathcal{U}_{\mathrm{ad}}$ that steers the associated trajectory $x_1$ from $x_0$ onto the manifold~$\altmathcal{M}$ in time $T_1 \geq 0$, and
\item \label{as:mta2}there exists a control $u_2 \in \altmathcal{U}_{\mathrm{ad}}$ that steers the associated trajectory $x_2$ from the manifold $\altmathcal{M}$ to $x_T$ in time $T_2 \geq 0$.
\end{enumerate}
Then the optimal control problem~\eqref{ocp} has a manifold turnpike at the manifold $\altmathcal{M}$.
\end{theorem}
}
\begin{proof}
Since the port-Ha\-mil\-to\-ni\-an\xspace system~\eqref{nlph} is autonomous and there is no control cost on the manifold $\altmathcal{M}$, the conditions \ref{as:mta1} and \ref{as:mta2} ensure that the total cost of the optimal control $u^*$ is bounded by
\begin{align*}
C_{\textsf{pH}\xspace,T}(u^*)
& = \int_{0}^{T}{y^*}^{\mathsf{T}} u^* ~\mathrm{d}t \\
& = \altmathcal{H}(x_T) - \altmathcal{H}(x_0) + \int_{0}^{T}\norm{R(x^*)^{1/2} \eta(x^*)}^2 ~\mathrm{d}t \\
& \leq C_{\altmathcal{H}}(K) + \int_{0}^{T_1} \norm{R(x_1)^{1/2} \eta(x_1)}^2 ~\mathrm{d}t + \int_{0}^{T_2} \norm{R(x_2)^{1/2} \eta(x_2)}^2 ~\mathrm{d}t \\
& \leq C_{\altmathcal{H}}(K) + C_1(K) + C_2(K) < \infty.
\end{align*}
Notice that the constants $C_{\altmathcal{H}}(K), ~C_1(K)$ and $C_2(K)$ are independent of the final time $T$.
Thus, using the results of \Cref{thm:ph-dissip}, we may apply \Cref{thm:diss->turn} to conclude that the optimal control problem~\eqref{ocp} has the manifold turnpike property at $\altmathcal{M}$.
\end{proof}
In \Cref{thm:ph-turn}, in order to show that a turnpike property holds true, we needed to make the controllability assumptions~\ref{as:mta1} and~\ref{as:mta2}.
This is a common pattern in turnpike results; similar controllability assumptions are made in~\cite{CarHL91,FauFO22} and~\cite{SchPFWM21}.
\begin{remark}
In~\cite{SchPFWM21}, the authors considered linear \textsf{pH}\xspace systems of the form
\begin{align*}
\dot{x} & = (J-R)Qx + Bu, \\
y & = B^{\mathsf{T}} Q x,
\end{align*}
where $J=-J^{\mathsf{T}}$, $R=R^{\mathsf{T}} \succeq 0$ and $Q=Q^{\mathsf{T}} \succ 0$.
They have shown that in this case the optimal control problem~\eqref{ocp} admits a subspace turnpike property with respect to $\ker(R^{1/2} Q)$.
We can interpret \Cref{thm:ph-turn} as a generalization of this result to the nonlinear case.
In the linear case, assumption \ref{as:A1} is immediately satisfied.
Further, the set $\altmathcal{M}$ is the kernel of $R^{1/2} Q$, and if the dimension of $\ker(R^{1/2} Q)$ is $0<s<n$, then assumptions \ref{as:A2} and \ref{as:A3} are also satisfied.
Since the distance estimate~\eqref{fdist-ph} can be shown to hold true globally~\cite[Lemma 13]{SchPFWM21}, the set $V$ from \Cref{lem:dist} is $V=\mathbb{R}^n$ and thus assumption \ref{as:A4} is also satisfied.
\end{remark}
We finish this section with a simple example illustrating \Cref{thm:ph-turn}.
\begin{example}\label{ex:ph}
Consider the functions $E,J,R,\eta$ and $B$ defined by
\begin{gather*}
E(x) = \vec{ 1 & 0 \\ 0 & 1}, ~~
J(x) = \vec{ 0 & 1 \\ -1 & 0 }, ~~
B(x) = \vec{ 1 \\ 0 }, \\
R(x) = \vec{ \tfrac14 (4\norm{x}^2 + 1)^2 & 0 \\ 0 & 0}, ~~
\eta(x) = \vec{2 & 0 \\ 0 & 1}x
\end{gather*}
for all $x\in \mathbb{R}^2$, which together with the Hamiltonian
\begin{equation*}
\altmathcal{H}(x) = \frac12 ~x^{\mathsf{T}}\!\vec{2 & 0 \\ 0 & 1} x
\end{equation*}
form the port-Ha\-mil\-to\-ni\-an\xspace system
\begin{equation}\label{eq:ph-ex-ode}
\begin{aligned}
E(x) \dot{x} = &~ \big(J(x) - R(x)\big)\eta(x) + B(x) u, \\
y = &~ B(x)^{\mathsf{T}} \eta(x).
\end{aligned}
\tag{\textsf{pH-1}}
\end{equation}
For the system~\eqref{ph-ex-ode}, the function $f$ reads as
\begin{equation*}
f: \mathbb{R}^2 \to \mathbb{R}^2,~ x \mapsto R(x)^{1/2} \eta(x) = \vec{4 \norm{x}^2 + 1 & 0 \\ 0 & 0}x = \vec{4(x_1^3 + x_2^2x_1) + x_1 \\ 0},
\end{equation*}
where we take $x=\vec{x_1 & x_2}^{\mathsf{T}}$.
Thus, the derivative $Df$ reads as
\begin{equation*}
Df(x) = \vec{ 12 x_1^2+ 4 x_2^2 + 1 & 8 x_2 x_1 \\ 0 & 0},
\end{equation*}
which is nonzero for every $x\in\mathbb{R}^2$, and the subspace $\ker(Df(x))$ is one-dimensional for all $x\in\mathbb{R}^2$.
A simple calculation shows that the square of the nonzero singular value $\sigma(x)$ of $Df(x)$ is given by
\begin{equation*}
\sigma(x)^2 = 144 x_1^4 + 160 x_1^2 x_2^2 + 24 x_1^2 + 16 x_2^4 + 8 x_2^2 + 1 \geq 1.
\end{equation*}
The zero locus $\altmathcal{M}$ of $f$ is given by
\begin{equation*}
\altmathcal{M} = \bigg\{ \vec{x_1 \\ x_2} \in \mathbb{R}^2 ~\bigg|~ (4 x_1^2 + 4x_2^2 + 1) x_1 = 0\bigg\} = \bigg\{ \vec{x_1 \\ x_2} \in \mathbb{R}^2 ~\bigg|~ x_1 = 0\bigg\}.
\end{equation*}
Hence, assumptions \ref{as:A1}, \ref{as:A2} and \ref{as:A3} are satisfied for~\eqref{eq:ph-ex-ode}.
As $\altmathcal{M}$ is a linear subspace, the orthogonal projection on $\altmathcal{M}$ is well defined globally and we have $\altmathcal{E}(\altmathcal{M})=\mathbb{R}^2$.
Further, for the set $V$ from \Cref{lem:dist} it holds that $V=\mathbb{R}^2$ since
\begin{equation*}
\dist(x,\altmathcal{M}) = |x_1| \leq | x_1 (4x_1^2 + 4x_2^2 + 1)| = \norm{f(x)}
\end{equation*}
for all $x\in \mathbb{R}^2$.
Thus, also assumption \ref{as:A4} is satisfied for~\eqref{eq:ph-ex-ode}.
Now, for $\xi = \vec{\xi_1 & \xi_2 & \xi_3}^{\mathsf{T}} \in \mathbb{R}^3$, let us define the functions $\widetilde{E},\widetilde{J},\widetilde{R},\widetilde{\eta}$ and $\widetilde{B}$ by
\begin{gather*}
\widetilde{E}(\xi) := \vec{1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0}, ~~
\widetilde{J}(\xi) := \vec{0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0}, ~~
\widetilde{B}(\xi) := \vec{1 \\ 2 \\ 0}, \\
\widetilde{R}(\xi) := \vec{ \tfrac14 (4\xi_1^2 + 4\xi_2^2 + 1)^2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0},
~~
\widetilde{\eta}(\xi) := \vec{2\xi_1 \\ \xi_2 \\ \xi_3}.
\end{gather*}
It is easy to see that the system
\begin{equation}\label{eq:ph-ex-dae}
\begin{aligned}
\widetilde{E}(\xi) \dot{\xi} = &~ \big(\widetilde{J}(\xi) - \widetilde{R}(\xi)\big)\widetilde{\eta}(\xi) + \widetilde{B}(\xi) \widetilde{u}, \\
\widetilde{y} = &~ \widetilde{B}(\xi)^{\mathsf{T}} \widetilde{\eta}(\xi)
\end{aligned}
\tag{\textsf{pH-2}}
\end{equation}
also satisfies assumptions \ref{as:A1}--\ref{as:A4}.
The zero locus of the map $\widetilde{f}: \xi \mapsto \widetilde{R}(\xi)^{1/2}\widetilde{\eta}(\xi)$ is
\begin{equation*}
\widetilde{\altmathcal{M}} = \left\{ \vec{\xi_1 & \xi_2 & \xi_3}^{\mathsf{T}} \in \mathbb{R}^3 ~\middle|~ \xi_1 = 0 \right\}.
\end{equation*}
\end{example}
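The algebra in \Cref{ex:ph} is easy to verify numerically. The following pure-Python sketch is our own illustration (the helper names \texttt{f} and \texttt{Df} are ours, not part of the model): it checks at random points that the squared nonzero singular value of $Df(x)$ equals the stated polynomial, and that the distance estimate $\dist(x,\altmathcal{M}) = |x_1| \leq \norm{f(x)}$ holds.

```python
import math
import random

def f(x1, x2):
    # f(x) = R(x)^{1/2} eta(x); only the first component is nonzero
    return (4.0 * (x1**3 + x2**2 * x1) + x1, 0.0)

def Df(x1, x2):
    # Jacobian of f; the second row is identically zero
    return ((12.0 * x1**2 + 4.0 * x2**2 + 1.0, 8.0 * x1 * x2),
            (0.0, 0.0))

random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-2, 2), random.uniform(-2, 2)
    (a, b), _ = Df(x1, x2)
    # squared nonzero singular value of Df(x) is a^2 + b^2
    sigma_sq = a * a + b * b
    poly = (144 * x1**4 + 160 * x1**2 * x2**2 + 24 * x1**2
            + 16 * x2**4 + 8 * x2**2 + 1)
    assert abs(sigma_sq - poly) < 1e-9 * max(1.0, poly)
    # distance estimate: dist(x, M) = |x1| <= ||f(x)||
    fx = f(x1, x2)
    assert abs(x1) <= math.hypot(fx[0], fx[1]) + 1e-12
print("checks passed")
```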
\section{Numerical example}\label{sec:num}
As an example, we consider the optimal control problem~\eqref{ocp} together with the \textsf{pH}\xspace systems~\eqref{eq:ph-ex-ode} and~\eqref{eq:ph-ex-dae} from \Cref{ex:ph}.
For the implementation, we use the open-source software package \texttt{CasADi}~\cite{AndGH18}.
In order to use \texttt{CasADi}, the optimal control problem~\eqref{ocp} is formulated as a minimization problem of the form
\begin{equation}\label{eq:cs-nlp}
\begin{gathered}
\min_{w} ~J(w) \\
\text{subject to }~ w_{\text{lb}} \leq w \leq w_{\text{ub}} ~~ \text{and} ~~ G(w) = 0.
\end{gathered}
\end{equation}
We follow a similar procedure as~\cite[Section 5.4]{AndGH18}.
In our implementation, $w$ contains the values $x(t_i)$ for the discretization points $t_i \in [0,T]$, and the values $u(t_i)$ for the discretization points $t_i \in [0,T], ~ i \neq 0$.
The initial condition and possible control constraints are incorporated in $w_{\text{lb}}$ and $w_{\text{ub}}$.
The function $G$ is used to enforce the final condition and a continuity condition on $x$ by using an integrator scheme to determine the value $x(t_{i+1})$ given the values $x(t_i)$ and $u(t_i)$.
This integrator scheme is also used to calculate the cost~$J$ via the \texttt{quad} option in \texttt{CasADi}'s \texttt{integrator} function.
For the solution of the nonlinear optimization problem~\eqref{eq:cs-nlp}, \textsf{Ipopt}~\cite{WaeB06} is used.
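To make the structure of this transcription concrete, the following schematic pure-Python sketch assembles the decision vector $w$ and the constraint function $G$ for the system~\eqref{eq:ph-ex-ode}, with a simple explicit Euler step standing in for \texttt{CasADi}'s integrator. It is our own illustration with illustrative names (\texttt{rhs}, \texttt{step}, \texttt{G}) and does not reproduce the \texttt{CasADi}/\textsf{Ipopt} implementation.

```python
# Schematic multiple-shooting transcription for the system (pH-1);
# an explicit Euler step stands in for CasADi's integrator.
N, dt = 50, 0.1  # number of shooting intervals and step size (illustrative)

def rhs(x, u):
    # E = I, so x' = (J(x) - R(x)) eta(x) + B(x) u for the example system
    x1, x2 = x
    d = 0.5 * (4.0 * (x1**2 + x2**2) + 1.0)**2  # damping acting on x1
    return (x2 - d * x1 + u, -2.0 * x1)

def step(x, u):
    # one explicit Euler step: x(t_{i+1}) ~ x(t_i) + dt * x'(t_i)
    dx = rhs(x, u)
    return (x[0] + dt * dx[0], x[1] + dt * dx[1])

def G(w, x0):
    # continuity defects; w = [x(t_1), ..., x(t_N), u(t_1), ..., u(t_N)]
    xs = [(w[2 * i], w[2 * i + 1]) for i in range(N)]
    us = w[2 * N:]
    defects, x = [], x0
    for i in range(N):
        xp = step(x, us[i])                  # predicted state
        defects += [xs[i][0] - xp[0], xs[i][1] - xp[1]]
        x = xs[i]                            # restart from the shooting node
    return defects
```

A vector $w$ obtained by simulating forward with the same integrator makes $G(w)=0$ exactly; a solver such as \textsf{Ipopt} instead treats $w$ as unknown and drives $G$ to zero while minimizing the cost $J$.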
In \Cref{fig:ocp-ex-ode}, the solution of the optimal control problem~\eqref{ocp} with the system~\eqref{eq:ph-ex-ode} under the control constraint $-50 \leq u(t) \leq 50$ is shown.
The turnpike behaviour is clearly visible; the first component $x_1$ of the optimal trajectory $x^*$ approaches the manifold $\altmathcal{M} = \{ x \in \mathbb{R}^2 ~|~ x_1 = 0 \}$ very quickly and remains there for the majority of the time horizon.
The same observation can be made for larger time horizons, which is not shown in \Cref{fig:ocp-ex-ode}.
In \Cref{fig:ocp-ex-dae}, the solution of the optimal control problem~\eqref{ocp} with the system~\eqref{eq:ph-ex-dae} under the control constraint $-200 \leq u(t) \leq 200$ is shown.
Again, the turnpike phenomenon can be observed.
\begin{figure}
\caption{Minimal energy control for~\eqref{eq:ph-ex-ode}}
\label{fig:ocp-ex-ode}
\end{figure}
\begin{figure}
\caption{Minimal energy control for~\eqref{eq:ph-ex-dae}}
\label{fig:ocp-ex-dae}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
In this paper, we have considered the optimal control of port-Ha\-mil\-to\-ni\-an\xspace systems under minimal energy supply with fixed initial and final values.
We have seen that a map $f$, corresponding to the energy-dissipating portion of the right-hand side, and its zero locus $\altmathcal{M} = \{x ~|~ f(x) = 0 \}$, which corresponds to the dissipative part of the state space, play an important role.
It was shown that under smoothness assumptions on $f$, the set $\altmathcal{M}$ forms a $C^2$ submanifold of $\mathbb{R}^n$.
In particular, using results from~\cite{LeoS21}, we observed that the orthogonal projection onto $\altmathcal{M}$ is well-defined in an open set $\altmathcal{E}(\altmathcal{M})$.
Further, we have shown that under these assumptions the distance of a point $x$ to $\altmathcal{M}$ can essentially be bounded by $\norm{f(x)}$ from above.
This fact allowed us to deduce that the considered optimal control problem is dissipative with respect to the manifold~$\altmathcal{M}$.
Our main result was a consequence of this dissipativity property.
Under additional controllability assumptions, we have seen that the problem has a manifold turnpike property with respect to $\altmathcal{M}$.
This theoretical observation was confirmed in a simple numerical example.
An open question from a theoretical perspective is the existence of optimal controls of~\eqref{ocp}.
Here, similar to~\cite{SchPFWM21,FauMP22,PhiSF21}, the particular structure of \textsf{pH}\xspace systems should be exploited.
Another open topic is the study of stronger turnpike properties, such as exponential turnpikes~\cite{TreZZ18,GruG21}.
Applications of the theoretical results to specific port-Ha\-mil\-to\-ni\-an\xspace systems such as gas networks~\cite{DomHL21} will be studied in future works.
\end{document}
\begin{document}
\author{R. Vilela Mendes \\
Grupo de F\'isica-Matem\'atica, Complexo II - Univ. de Lisboa\\
Av. Gama Pinto, 2 - P1699 Lisboa Codex, Portugal \and Ricardo Coutinho \\
Departamento de Matem\'atica, Instituto Superior T\'ecnico\\
Av. Rovisco Pais, 1096 Lisboa Codex, Portugal}
\title{On the computation of quantum characteristic exponents}
\date{}
\maketitle
\begin{abstract}
A quantum characteristic exponent may be defined, with the same operational
meaning as the classical Lyapunov exponent when the latter is expressed as a
functional of densities. Existence conditions and supporting measure
properties are discussed as well as the problems encountered in the
numerical computation of the quantum exponents. Although an example of true
quantum chaos may be exhibited, the taming effect of quantum mechanics on
chaos is quite apparent in the computation of the quantum exponents.
However, even when the exponents vanish, the functionals used for their
definition may still provide a characterization of distinct complexity
classes for quantum behavior.
Keywords: quantum chaos, characteristic exponents
\end{abstract}
\section{Introduction. Classical and quantum characteristic exponents.}
A notion of {\it quantum characteristic exponent} has been introduced in Ref.
\cite{Vilela2}, which has the same physical meaning as the corresponding
classical quantity (the Lyapunov exponent). The correspondence is
established by first rewriting the classical Lyapunov exponent as a
functional of densities and then constructing the corresponding quantity in
quantum mechanics. The construction is explained in detail in Ref.\cite
{Vilela1}, where the required functional spaces are identified and the
infinite-dimensional measure theoretic framework is developed. Here we just
recall the main definitions and emphasize some refinements concerning the
support properties of the quantum characteristic exponents, which turn out
to be relevant for the numerical computations of Sect.2.
Expressed as a functional of {\it admissible} $L^1-$densities, the classical
Lyapunov exponent is\cite{Vilela1}
\begin{equation}
\label{1.1}\lambda _v=\lim _{n\rightarrow \infty }\frac 1n\log \left|
-v^i\frac \partial {\partial x^i}D_{\delta _x}\left( \int d\mu (y)\, y\, P^n\rho (y)\right) \right|
\end{equation}
where $\rho $ is an initial condition density, $P$ the Perron-Frobenius
operator, $x$ a generic phase-space coordinate, $v$ a vector in the tangent
space, $\mu $ the invariant measure and $D_{\delta _x}$ the Gateaux
derivative along the generalized function $\delta _x$. The possibility to
define Gateaux derivatives along generalized functions with point support
and the need for a well-defined $\sigma $-additive measure in an
infinite-dimensional functional space lead almost uniquely to the choice of
the appropriate mathematical framework, that is, {\it admissible densities}
are required to belong to a nuclear space. Being ergodic invariants, the
Lyapunov exponents exist on the support of a measure. In the nuclear space
framework, measures with support on generalized functions, which are in
one-to-one correspondence with the usual measures in phase space, may be
constructed by the Bochner-Minlos theorem\cite{Vilela1}.
To construct, in quantum mechanics, a quantity with the same operational
meaning as (\ref{1.1}) let $U^n$ ($n$ continuous or discrete) be the unitary
operator of quantum evolution acting on the Hilbert space $H$ and $
\widetilde{X}$ a self-adjoint operator in $H$ belonging to a commuting set $S
$. For definiteness $\widetilde{X}$ is assumed to have a continuous
spectrum, to be for example a coordinate operator in configuration space.
One considers, as in the classical case, the propagation of a perturbation $
\partial _i\delta _x$, where by $x$ we now mean a point in the spectrum of $
\widetilde{X}$.
\begin{equation}
\label{1.2}v^iD_{\partial _i\delta _x}\left( U^n\Psi ,\widetilde{X}U^n\Psi
\right) =2\,\textnormal{Re}\, v^i\frac \partial {\partial x^i}<\delta _x,U^{-n}
\widetilde{X}U^n\Psi >
\end{equation}
For the proper definition of the right-hand side of (\ref{1.2}) one requires
$\Psi \in E$ to be in a Gelfand triplet\cite{Gelfand}
$$
E^{*}\supset H\supset E
$$
By $<\delta _x|$ or $<x|$ we denote a generalized eigenvector of $\widetilde{
X}$ in $E^{*}$. Notice also that $U^n$, being an element of the
infinite-dimensional unitary group, has a natural action both in $E$ and $
E^{*}$\cite{Hida}. One obtains then the following definition of {\it quantum
characteristic exponent}
\begin{equation}
\label{1.3}\lambda _{v,x}=\lim \sup _{n\rightarrow \infty }\frac 1n\log
\left| \textnormal{Re}\, v^i\frac \partial {\partial x^i}<\delta _x,U^{-n}
\widetilde{X}U^n\Psi >\right|
\end{equation}
The support properties of this quantum version of the Lyapunov exponent have
to be carefully analyzed. In Eq.(\ref{1.3}), $\Psi $ defines the state
which, in quantum mechanics, plays the role of a (non-commutative) measure
\cite{Connes1}. The quantum exponent may depend on the state, but the
measure that, as in the classical case, provides its support is not the
state but a measure in the space of the perturbations of the initial
conditions, that is, in the space where the Gateaux derivative operates. In
the classical case these two measures coincide, in the sense that to each
invariant measure in phase space there corresponds an infinite-dimensional
measure in the space of generalized functions\cite{Vilela1}. In the quantum case,
however, the two entities are different, the second one being the measure on
the spectrum of $\widetilde{X}$ induced by the projection-valued spectral
measure and the state, that is
\begin{equation}
\label{1.4}\nu (\Delta x)=(\Psi ,\int\limits_{\Delta x}dP_x\Psi )
\end{equation}
A particular case where an infinite-dimensional measure-theoretical setting,
similar to the classical one, may be used to define the quantum exponents
\cite{Vilela1}, is when the quantum evolution is implemented by substitution
operators in configuration space, as in some sectors of the configurational
quantum cat\cite{Weigert1}\cite{Vilela2}\cite{Weigert2}. However this
formulation is not very useful in general and the {\it state plus
spectrum-measure} framework seems to be the one that has general validity.
In this framework the following existence theorem holds
\underline{{\em Theorem}}: Let $\widetilde{X}$ be a self-adjoint operator, $E
$ a test function space in a Gelfand triplet containing the generalized
eigenvectors of $\widetilde{X}$ in its dual $E^{*}$ and $\Psi \in E$. Then
if $U^n\Phi \in E$ and $\widetilde{X}\Phi \in E$ ($\forall \Phi \in E$, $
\forall n\in Z$) and the following integrability condition is fulfilled
\begin{equation}
\label{1.4a}\left| \int d\nu (x)\log \frac{\left| \textnormal{Re}v^i\partial
_{x_i}<x|U^{-1}\Phi >\right| }{\left| \textnormal{Re}v^i\partial _{x_i}<x|\Phi
>\right| }\right| <M
\end{equation}
$\forall \Phi \in E$, the limit in Eq.(\ref{1.3}) exists as an $L^1(\nu )$-
function, that is, the average quantum characteristic exponent is defined
for any measurable set in the support of $\nu $.
\
Proof:
We write Eq.(\ref{1.3}) as
\begin{equation}
\label{1.4b}\lambda _v(x)=\lim \sup _{n\rightarrow \infty }\frac
1n\sum_{k=0}^{n-1}\log \frac{\left| \textnormal{Re}v^i\partial _{x_i}<x|U^{-n+k}
\widetilde{X}U^{n-k}\Phi >\right| }{\left| \textnormal{Re}v^i\partial
_{x_i}<x|U^{-n+k+1}\widetilde{X}U^{n-k-1}\Phi >\right| }
\end{equation}
Then from the integrability condition (\ref{1.4a}) the integral of the
sequence in the right-hand side of Eq.(\ref{1.4b}) is bounded and the
Bolzano-Weierstrass theorem ensures the existence of the $\lim \sup $.
Therefore $\lambda _v(x)$ is well defined as an $L^1(\nu )$-function. $\Box $
Notice that we really need the $\lim \sup $ in the definition of the
characteristic exponent because we have no natural $U$-invariant measure in $
E$ to be able, for example, to apply Birkhoff or Kingman's theorem and prove
$\lim \sup $=$\lim \inf $. Also the sense in which the measure $\nu $
provides the support for the quantum characteristic exponent is different
from the classical ergodic theorems. We have not proven pointwise existence
of the exponent a.e. in the support of a measure. What we have obtained here
is the possibility to define an average quantum characteristic exponent for
arbitrarily small $\nu -$measurable sets.
Other definitions of characteristic exponents in infinite-dimensional spaces
have been proposed by several authors\cite{Ruelle2} \cite{Vilela3} \cite
{Haake} \cite{Zycz} \cite{Majewski} \cite{Emch}. They characterize several
aspects of the dynamics of linear and non-linear systems. The definition
discussed here, proposed for the first time in \cite{Vilela2}, seems however
to be the one that is as close as possible to the spirit of the classical
definition of Lyapunov exponent.
Like the classical Lyapunov exponent the quantum analogue (\ref{1.3}) cannot
in general be obtained analytically. There is however a non-trivial example
where it can. This is the configurational quantum cat introduced by Weigert
\cite{Weigert1}\cite{Weigert2}. The phase space of this model is $T^2\times
R^2$. A mapping similar to the classical cat operates as a quantum kick in
the configuration space $T^2$, and the rest of the Floquet operator is a
free evolution. This system has the appealing features of actually
corresponding to the physical motion of a charged particle on a torus acted
on by an impulsive electromagnetic field and, as shown by
Weigert\cite{Weigert2}, of being exactly solvable.
The Floquet operator is
\begin{equation}
\label{1.5}U=U_FU_K
\end{equation}
\begin{equation}
\label{1.6}U_F=\exp [-i\frac T2\widetilde{p}^2];\qquad U_K=\exp [-\frac
i2(\widetilde{x}\cdot V\cdot \widetilde{p}+\widetilde{p}\cdot V\cdot
\widetilde{x})]
\end{equation}
$U_F$ is a free evolution and $U_K$ a kick that operates in a simple manner
on momentum eigenstates and on (generalized) position eigenstates
\begin{equation}
\label{1.7}U_K\left| p\right\rangle =\left| M^{-1}p\right\rangle
\end{equation}
\begin{equation}
\label{1.8}U_K\left| x\right\rangle =\left| M\textnormal{ }x\right\rangle
\end{equation}
$M$ being a hyperbolic matrix with integer entries and determinant equal to
1. The momentum has discrete spectrum, $p\in (2\pi Z)^2$.
To compute the quantum characteristic exponent (Eq.(\ref{1.3})), let the
operator $\widetilde{X}$ be
\begin{equation}
\label{1.9}\widetilde{X}=\sin (2\pi l\cdot x)
\end{equation}
$l\in Z^2$. This operator has the same set of generalized eigenvectors as
the position operator $\widetilde{x}$. To construct the measure $\nu $ (Eq.(
\ref{1.4})) in the spectrum of the operator $\widetilde{X}$ we cannot use
the energy eigenstates $\mid P\alpha \rangle $ because they are not
normalized. However all one requires is invariance of the measure, and using
the (normalizable) momentum eigenstates one such measure is obtained.
\begin{equation}
\label{1.10}\nu (A)=\langle p\mid \int_Adx\mid x\rangle \langle x\mid
p\rangle
\end{equation}
In this case this invariant measure happens to be simply the Lebesgue
measure on $T^2$.
Defining
\begin{equation}
\label{1.11}\gamma _n(x)=\langle x\mid U^{-n}\widetilde{X}U^n\mid p\rangle
\end{equation}
the result for the quantum characteristic exponent is
\begin{equation}
\label{1.12}
\begin{array}{c}
\lambda _v=\lim \sup _{n\rightarrow \infty }\frac 1n\log ^{+}\left|
\textnormal{Re}\,v^i\frac \partial {\partial x^i}\gamma _n(x)\right| \\
=\lim \sup _{n\rightarrow \infty }\frac 1n\log ^{+}\left| v^i(2\pi
M^nl)_i\{\cos \theta _n(p,l,x)+\cos \theta _n(p,-l,x)\}\right|
\end{array}
\end{equation}
with
$$
\theta _n(p,l,x)=\frac T2\left(
\sum_{k=0}^{n-1}(M^{-k}p)^2+\sum_{k=0}^{n-1}(M^k(2\pi l+M^{-n}p))^2+x\cdot
(2\pi M^nl+p)\right)
$$
For the $\lim \sup $ the cosine term plays no role and finally
\begin{equation}
\label{1.13}\lambda _v=\lim \sup _{n\rightarrow \infty }\frac 1n\log \left|
v^i(M^nl)_i\right|
\end{equation}
The characteristic exponent is then determined from the eigenvalues of the
hyperbolic matrix $M$ and is the same everywhere in the support of the
measure $\nu $. If $\mu _1$, $\mu _2$ ($\mu _1>\mu _2$) are
the eigenvalues of $M$, one obtains $\lambda _v=\log \mu _1$ for a generic
vector $v$ and $\lambda _v=\log \mu _2$ iff $v$ is orthogonal to the
eigenvector associated with $\mu _1$. Hence, in this case, one obtains a
positive quantum characteristic exponent whenever the corresponding
classical Lyapunov exponent is also positive.
This exact example will be used in Sect.2 as a testing ground for the
numerical algorithm and an illustration of the kind of precision problems
and support properties to be expected when computing quantum characteristic
exponents.
In the numerical calculation of the quantum characteristic exponents two
delicate points are identified. The first is that the calculation requires a
high degree of precision, because, if the exponent is positive, the
derivative of $U^{-n}\widetilde{X}U^n\Psi (x)$ grows very rapidly with $n$.
Therefore in the positive exponent case acceptable statistics is only
obtained by taking average values over the configuration space. Second, if
the situation is as in the classical case where different invariant measures
coexist in phase space, the quantum exponent may depend on the state $\Psi $
used to define the measure $\nu $ in the spectrum of $\widetilde{X}$.
Therefore, in all rigor, one should first construct stationary states and
then study the $\Psi -$dependence of the quantum exponent. Such study has
not yet been carried out and, in the calculations below, a flat wave
function is used as the initial state.
\section{Numerical computation of quantum characteristic exponents}
For kicked quantum systems corresponding to the Hamiltonian
\begin{equation}
\label{2.1}H=H_0+V(x)\sum_j\delta (t-j\tau )
\end{equation}
the Floquet operator is
\begin{equation}
\label{2.2}U=e^{-iV(x)}e^{-i\tau H_0(\frac \partial {\partial x_i})}
\end{equation}
in units where $\hbar =1$. For the computation of the action of $U$ on a
wave function $\psi (x)$, a fast Fourier transform algorithm $F$ and its
inverse $F^{-1}$ are used
\begin{equation}
\label{2.3}U\psi (x)=e^{-iV(x)}F^{-1}e^{-i\tau H_0(ik)}F\psi (x)
\end{equation}
In this way one obtains a uniform algorithm for any potential. In the
configurational quantum cat, Eq.(\ref{1.6}), the computation is similar with
the multiplicative kick $e^{-iV(x)}$ replaced by the substitution operator
\begin{equation}
\label{2.4}\psi (x)\rightarrow \psi (M^{-1}x)
\end{equation}
The quantum characteristic exponent is obtained from the calculation of
\begin{equation}
\label{2.5}\partial _xU^{-n}\widetilde{X}U^n\psi (x)
\end{equation}
in the limit of large $n$. The precision of the algorithm is checked by
ensuring that
\begin{equation}
\label{2.6}\left| \left( U^{-n}U^n-1\right) \psi (x)\right| <\epsilon
\end{equation}
for a small $\epsilon $, and that the finite difference used to compute the
derivative in (\ref{2.5}) does not approach the maximum possible value
allowed by the discretization.
\subsection{The configurational quantum cat}
Here the configuration space is the 2-torus $T^2$, the Floquet operator is
the one of Eq.(\ref{1.6}), and the kick is a substitution operator with
matrix
\begin{equation}
\label{2.7}M=\left(
\begin{array}{cc}
1 & 1 \\
1 & 2
\end{array}
\right)
\end{equation}
Numerically we have computed the quantities
\begin{equation}
\label{2.8}\frac 1n\left\langle D_n-D_0\right\rangle =\frac 1n\left\langle
\log \frac{\left| \textnormal{Re}\,v^i\frac \partial {\partial x^i}\left( U^{-n}\widetilde{X}U^n\Psi \right) (x)\right| }{\left| \textnormal{Re}\,v^i\frac \partial {\partial x^i}\left( \widetilde{X}\Psi \right) (x)\right| }
\right\rangle _{T^2}
\end{equation}
$\widetilde{X}$ being the operator in (\ref{1.9}). The initial wave function
is $\Psi (x)=1$ and the average is taken over the whole of configuration
space. It turns out that, in this case, the derivative in the numerator of (
\ref{2.8}) grows so fast at some points that one reaches, after a few
iterations, the maximum difference (2 in this case) for the wave function at
two nearby points in the discretization grid. When this happens the
calculation cannot be reliably taken to higher $n$ with that discretization.
In the numerical calculation of the classical Lyapunov exponent, the computation
becomes a local evaluation at each step by rescaling the transported tangent
vector. Here, because of the linearity of matrix elements and quantum
evolution, a similar procedure is not possible and one has to carefully
control the growth of the quantities in (\ref{2.8}). To improve statistics
the average over configuration space has been taken. This can be safely done
in this case because we know exactly the supporting measure (\ref{1.10}),
but in general there will be no guarantee that the supporting spectral
measure is uniform. In any case, average quantities like (\ref{2.8}) are
exactly what we expect to be able to compute reliably.
Fig.1 shows the evolution of $\frac 1n\left\langle D_n-D_0\right\rangle $
obtained with a discretization grid of 292681 points in the unit square, for
two different directions $v$. The calculation was interrupted when the
local finite differences reached one half of the maximum. The lines are fits
to the points constrained to approach the same value at large $n$. The
resulting numerical estimate for the largest quantum characteristic exponent
is 0.95. The exact value obtained from (\ref{1.13}) is 0.9624.
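The exact value can be recovered directly from Eq.(\ref{1.13}) and the matrix (\ref{2.7}): the eigenvalues of $M$ solve $\mu ^2-3\mu +1=0$. A short check (our own illustration):

```python
import math

# Eigenvalues of M = [[1, 1], [1, 2]] (Eq. (2.7)): trace 3, determinant 1,
# hence mu = (3 +/- sqrt(5)) / 2; the largest gives the exponent of Eq. (1.13)
mu1 = (3.0 + math.sqrt(5.0)) / 2.0
mu2 = (3.0 - math.sqrt(5.0)) / 2.0
assert abs(mu1 * mu2 - 1.0) < 1e-12      # det M = 1
lambda_exact = math.log(mu1)
print(round(lambda_exact, 4))            # 0.9624, matching the quoted value
```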
\subsection{Quantum kicked rotators}
The configuration space is the circle $S^1$,
\begin{equation}
\label{2.9}V(x)=q\cos (2\pi x)
\end{equation}
$x\in [0,1)$ and for $H_0$ the following two possibilities were explored
\begin{equation}
\label{2.10}
\begin{array}{c}
H_0^{(1)}=-\frac 1{2\pi }
\frac{d^2}{dx^2} \\ H_0^{(2)}=-2\pi \cos \left( \frac 1{2\pi i}\frac
d{dx}\right)
\end{array}
\end{equation}
The operator $\widetilde{X}$ is
\begin{equation}
\label{2.11}\widetilde{X}=\sin (2\pi x)
\end{equation}
The quantity that is numerically computed is
\begin{equation}
\label{2.12}\left\langle D_n\right\rangle =\left\langle \log \left| \textnormal{Re}\,\frac \partial {\partial x}\left( U^{-n}\widetilde{X}U^n\Psi \right) (x)\right| \right\rangle
\end{equation}
and, in all cases, one starts from a flat initial wave function. In both
cases and for the very many values of $q$ that were studied, this quantity
seems either to stabilize or to have a very small rate of growth for large $n
$. Fig.2, for example, shows the results for the $H_0^{(1)}$ case with $\tau
=\frac{\sqrt{5}}2$ and $q=5$. The (numerical) conclusion is that the quantum
characteristic exponent vanishes. This conclusion does not seem to be a
numerical artifact, because the discretization grid for the fast Fourier
transform has always been chosen fine enough to ensure a small local
finite difference at all iterations. For instance, in the example shown in
Fig.2 the grid has 4096 points, which keeps the observed local differences below
0.1. Also, the vanishing of the quantum characteristic exponent in quantum
kicked rotators does not depend on the phenomenon of localization, because
for quantum resonances as well it may be shown exactly to vanish\cite{Vilela2}.
In Fig.2 $\left\langle D_n\right\rangle $ seems to tend to a constant at
large $n$. In other cases very slow rates of growth are observed. This is
shown in Figs.3a,b for the $H_0^{(2)}$ case with $\tau =\frac{\sqrt{5}}2$
and $q=11$.
\section{Conclusions}
Both the classical Lyapunov exponent (\ref{1.1}) and its quantum counterpart
(\ref{1.3}) measure the exponential rate of separation of matrix elements
of $\widetilde{X}$ when the density (or the wave function) suffers a $\delta
_x^{^{\prime }}$ perturbation, $x$ being a point in the spectrum of $
\widetilde{X}$. The configurational quantum cat example shows that there are
instances of {\it true quantum chaos}, in the sense of exponential growth of
the matrix element separation. However, as the numerical study of the
quantum kicked rotators seems to show, exponential growth may be rather
exceptional in quantum mechanics. Furthermore the taming effect of quantum
mechanics on exponential chaos goes deeper than the phenomenon of
localization, because also for quantum resonances, where no localization is
present, the quantum characteristic exponent vanishes.
Although distinct from one another, all known approaches to
the problem of quantum chaos seem to agree on one point, namely
that quantum mechanics has a definite taming effect on chaos. This is now
probably the main issue in quantum chaos, not only from the theoretical
point of view, but also in the context of quantum control. Even if quantum
characteristic exponent, as defined in (\ref{1.3}) might be zero in most
quantum systems, the rate of growth index $D_n$ or its average $\left\langle
D_n\right\rangle $ might still be useful as a characterization of quantum
dynamics because, even if weaker than exponential, a growth of this quantity
would still be an indication of sensitivity to initial conditions. In
particular, as suggested by the numerical results, subexponential rates of
growth might characterize distinct complexity classes of quantum evolution.
\section{Figure captions}
Fig.1 - Calculation of $\frac 1n\left\langle D_n-D_0\right\rangle $, Eq.(\ref
{2.8}), in the configurational quantum cat for two orthogonal directions $
v$ and a fit constrained to the same limit at large $n$.
Fig.2 - $\left\langle D_n\right\rangle $, Eq.(\ref{2.12}), for the quantum
standard map at $q=5$, $\tau =\frac{\sqrt{5}}2$.
Fig.3 - (a) $\left\langle D_n\right\rangle $, Eq.(\ref{2.12}), for a kicked
rotator with kinetic Hamiltonian $H_0^{(2)}$ at $q=11$, $\tau =\frac{\sqrt{5}
}2$ ; (b) the same scaled by $\log (\log (n+1))$.
\end{document}
\begin{document}
\title{{Chaotic dynamics in three dimensions:\\ a topological proof for a triopoly game model}
}
\author{\textbf{Marina Pireddu} \\
Dipartimento di Matematica e Applicazioni, \\
Universit\`{a} degli Studi di Milano-Bicocca \\
via Cozzi 53, 20125 Milano\\
e-mail: [email protected]}
\maketitle
\begin{abstract}
\noindent
We rigorously prove the existence of chaotic dynamics for the triopoly game model already studied, mainly from a numerical viewpoint,
in \cite{NaTr->}. In the model considered, the three firms are heterogeneous and in fact each of them adopts a different decisional mechanism, i.e., linear approximation, best response and gradient mechanisms, respectively.\\
The method we employ is the so-called ``Stretching Along the Paths'' (SAP) technique in \cite{PiZa-07}, based on the Poincar\'e-Miranda Theorem and on the properties of the cutting surfaces.
\end{abstract}
\noindent \textbf{Keywords:} Chaotic dynamics; Stretching along the paths; triopoly games; heterogeneous players.
\noindent \textbf{2000 AMS subject classification:} 54H20, 54H25, 37B10, 37N40, 91B55.
\section{Introduction}\label{sec-a0}
$~$
In the economic literature, due to the complexity of the
models considered, an analytical study of the associated dynamical features
often turns out to be too difficult or simply impossible
to perform. That is why many dynamical systems are studied mainly from a numerical viewpoint (see, for instance, \cite{Ag-99,BiTr-10,Tretal-10,YoMaPo-00}).
Sometimes, however, even this kind of study turns out to be problematic, especially with
high-dimensional systems, where several variables are involved.\\
In particular, as observed in Naimzada and Tramontana's working paper \cite{NaTr->}, this may be the reason for
the relatively low number of works on triopoly games (see, for instance, \cite{ElAgEl-09,Pu-96,TrEl-12}), where the context is given by an oligopoly composed
of three firms. In such a framework, a local analysis can generally be performed in the
special case of homogeneous triopoly models, i.e., those in which the equations describing the dynamics are
symmetric (see, for instance, \cite{Ag-98, AgGaPu-00, RiSt-04}). \\
A more difficult task is that of studying heterogeneous triopolies, where instead the three
firms considered behave according to different strategies.
This has been done, for instance, in \cite{Eletal-07, Ji-09}, as well as in the above mentioned paper by Naimzada and Tramontana \cite{NaTr->} where,
in addition to the classical heterogeneity with interacting agents adopting gradient and best response mechanisms, it is assumed that one of the firms adopts a linear approximation mechanism, which means that the firm does not know
the shape of the demand function and thus builds a conjectured demand function through the local knowledge of the true demand function.
In regard to such a model, those authors perform a stability analysis of the Nash equilibrium and show numerically that,
according to the choice of the parameter values, it undergoes a flip bifurcation or a Neimark-Sacker bifurcation leading to chaos.\\
What we aim to do in the present paper is to complement that analysis by proving the existence of
chaotic sets via purely topological arguments. This task will be performed using the ``Stretching Along the Paths'' (from now on, SAP) technique, already employed in \cite{MePiZa-09} to rigorously prove the presence of chaos for some discrete-time one- and two-dimensional economic models of the classes of overlapping generations and duopoly game models. Notice however that, to the best of our knowledge, this is the first three-dimensional discrete-time application of the SAP technique, called in this way because it concerns maps that expand the arcs along one direction.
We stress that, differently from other methods for the search of fixed points and the detection of chaotic dynamics based on more sophisticated algebraic or geometric tools, such as the Conley index or the Lefschetz number (see, for instance, \cite{Ea-75,MiMr-95,SrWo-97}), the SAP method relies on relatively elementary arguments and is easy to apply in practical contexts, without the need for ad hoc constructions. No differentiability conditions are required for the map describing the dynamical system under analysis, and even continuity is needed only on particular subsets of its domain. Moreover, the SAP technique can be used to rigorously prove the presence of chaos also for continuous-time dynamical systems. In fact, in that framework it suffices to apply the results in Section \ref{sec-a1}, suitably modified, to the Poincar\'e map associated with the considered system, so that one is led back to working with a discrete-time dynamical system. However, the geometry required to apply the SAP method turns out to be quite different in the two contexts: in the case of discrete-time dynamical systems we look for ``topological horseshoes'' (see, for instance, \cite{BuWe-95, KeYo-01, ZgGi-04}), that is, a weaker version of the celebrated Smale horseshoe in \cite{Sm-65}, while in the case of continuous-time dynamical systems one has to consider switching systems, and the needed geometry is usually that of the so-called ``Linked Twist Maps'' (LTMs) (see \cite{BuEa-80, De-78, Pr-86}), as shown for the planar case in \cite{PaPiZa-08,PiZa-08}. We also stress that the Poincar\'e map is a homeomorphism onto its
image, while in the discrete-time framework the function describing the considered dynamical system need not be one-to-one, as happens for our example in Section \ref{sec-a2}. Hence, in the latter context, it is in general not possible to apply the results for the Smale horseshoe, where one deals with homeomorphisms or diffeomorphisms.
As regards three-dimensional continuous-time applications of the SAP method, these have recently been performed in \cite{RHZa-12}, in a higher-dimensional counterpart of the LTMs framework, and in \cite{RHZa->}, where a system switching between different regimes is considered.\\
For the reader's convenience, in Section \ref{sec-a1} we recall the basic mathematical ingredients behind the SAP method, as well as the main conclusions it allows one to draw about the chaotic features of the model under analysis.
It will then be shown in Section \ref{sec-a2} how it can be applied to the triopoly game model taken from \cite{NaTr->}.
Some further considerations and comments can be found in Section \ref{sec-a3}, which concludes the paper.
\section{The ``Stretching along the paths'' method}\label{sec-a1}
$~$
In this section we briefly recall what the ``Stretching along the paths'' (SAP) technique consists in,
referring the reader interested in further mathematical details to \cite{PiZa-07}, where the original planar theory
by Papini and Zanolin in \cite{PaZa-04a,PaZa-04b} has been extended to the $N$-dimensional setting, with $N\ge 2.$ \\
In the bidimensional setting, elementary
theorems from plane topology suffice, while in the higher-dimensional framework some
results from degree theory are needed, leading to the study of the so-called
``cutting surfaces''.
In fact, the proofs of the main results in \cite{PiZa-07} (and in particular of Theorem \ref{th-fp} below), which we do not recall here, are based on the properties of the cutting surfaces and on the Poincar\'e-Miranda Theorem, that is, an $N$-dimensional version of the Intermediate Value Theorem. \\
Since in Section \ref{sec-a2} we will deal with the three-dimensional setting only, we directly present the theoretical results in the special case in which $N=3.$
We start with some basic definitions.\\
A \textit{path} in a metric space $X$ is a continuous map $\gamma:
[t_0,t_1]\to X.$ We also set $\overline{\gamma}:=\gamma([t_0,t_1]).$
Without loss of generality, we usually take the unit interval
$[0,1]$ as the domain of $\gamma.$ A \textit{sub-path} $\sigma$ of
$\gamma$ is the restriction of $\gamma$ to a compact sub-interval
of its domain.
By a \textit{generalized parallelepiped}
we mean a set ${\mathcal P}\subseteq X$ which is homeomorphic to the
unit cube $I^3:=[0,1]^3,$ through a homeomorphism $h: {\mathbb R}^3\supseteq I^3 \to \mathcal P\subseteq X.$
We also set
$${\mathcal P}^{-}_{\ell}:= h([x_3 = 0])\,,\quad
{\mathcal P}^{-}_{r}:= h([x_3 = 1])$$ and call them the \textit{left} and
the \textit{right} faces of $\mathcal P,$ respectively,
where\footnote{Notice that the choice of privileging the third coordinate is purely conventional. In fact, any other choice would give the same results, since it is possible to compose the homeomorphism $h$ with a suitable permutation of the three coordinates, without modifying its image set.}
$$[x_3 = 0]:= \{(x_1,x_2,x_3)\in I^3: \, x_3 = 0\} \,\,\,
\mbox{ and } \,\,\, [x_3 = 1]:= \{(x_1,x_2,x_3)\in I^3: \, x_3 = 1\}.$$
Setting
$$\mathcal P^{-}:= \mathcal P^{-}_{\ell}\cup \mathcal P^{-}_{r}\,,$$
we call the pair
$${\widetilde{\mathcal P}}:= (\mathcal P, \mathcal P^-)$$
an {\textit{oriented parallelepiped}} \textit{of $X$}.
Although in the application discussed in the present paper the
space $X$ is simply $\mathbb R^3$ and the
generalized parallelepipeds are standard parallelepipeds, the generality of our definitions
makes them applicable in different contexts (see Figure 1).
We are now ready to introduce the {\it stretching along the paths} property for maps between oriented parallelepipeds.
\begin{definition}[SAP]\label{def-sap}
{\rm{Let ${\widetilde{\mathcal A}}:=
({\mathcal A},{\mathcal A}^-)$ and ${\widetilde{\mathcal B}}:=
({\mathcal B},{\mathcal B}^-)$ be oriented parallelepipeds of a metric
space $X.$ Let also $\psi: \mathcal A\to X$ be a function
and ${\mathcal K}\subseteq {\mathcal A}$
be a compact set. We say that \textit{$({\mathcal K},\psi)$ stretches
${\widetilde{\mathcal A}}$ to ${\widetilde{\mathcal B}}$ along the
paths}, and write
\begin{equation}\label{eq-sap}
({\mathcal K},\psi): {\widetilde{\mathcal A}} \Bumpeq{\!\!\!\!\!\!\!\!{\longrightarrow}} {\widetilde{\mathcal B}},
\end{equation}
if the following conditions hold:
\begin{itemize}
\item{} \; $\psi$ is continuous on ${\mathcal K}\,;$
\item{} \; for every path $\gamma: [0,1]\to {\mathcal A}$ with
$\gamma(0)$ and $\gamma(1)$ belonging to different components of ${\mathcal A}^-,$ there exists a sub-path
$\sigma:=\gamma|_{[t',t'']}:[0,1]\supseteq [t',t'']\to {\mathcal K},$ such that
$\psi(\sigma(t))\in {\mathcal B},\,
\forall\, t\in [t',t''],$ and, moreover, $\psi(\sigma(t'))$ and
$\psi(\sigma(t''))$ belong to different components of ${\mathcal B}^-.$
\end{itemize}
}}
\end{definition}
For a description of the relationship between the SAP relation and other ``covering relations'' in the literature on expansive-contractive maps, we refer the interested reader to \cite{MePiZa-09}.\\
A first crucial feature of the SAP relation is that, when it is satisfied with ${\widetilde{\mathcal A}}={\widetilde{\mathcal B}}$ \footnote{Note that this means both that $\mathcal A$ and $\mathcal B$ coincide as subsets of $X$ and that they have the same orientation. In fact, it is easy to find counterexamples to Theorem \ref{th-fp} if the latter property is violated (see, for instance, \cite{Pi-09}, p. 11).}, it ensures the existence of a fixed point localized in the compact set $\mathcal K.$ In fact, the following result holds true.
\begin{theorem}\label{th-fp}
Let ${\widetilde{\mathcal P}}:= ({\mathcal P},{\mathcal P}^-)$
be an oriented parallelepiped of a metric space $X$ and let $\psi: \mathcal P\to X$ be a function.
If ${\mathcal K}\subseteq {\mathcal P}$
is a compact set such that
$$({\mathcal K},\psi): {\widetilde{\mathcal P}} \Bumpeq{\!\!\!\!\!\!\!\!{\longrightarrow}} {\widetilde{\mathcal P}},$$
then there exists at least one point $z\in {\mathcal K}$ with $\psi(z) = z.$
\end{theorem}
For a proof, see \cite{PiZa-07}, pp. 307-308. Notice that the arguments employed therein are different from the ones used to prove the same result in the planar context (see, for instance, \cite{MePiZa-09}, pp. 3301-3302), where they are in fact much more elementary.\\
A graphical illustration of Theorem \ref{th-fp} can be found in Figure 1, where it is evident that, differently from the classical Rothe and Brouwer Theorems, we do not require that $\psi(\partial \mathcal A)\subseteq \mathcal A.$
\begin{figure}
\caption{The tubular sets $\mathcal A$ and $\mathcal B$ in the picture
are two generalized parallelepipeds, for which we
have put in evidence the compact set $\mathcal K$ and the boundary
sets $\mathcal A^{-}_{\ell},\,\mathcal A^{-}_{r}$ and $\mathcal B^{-}_{\ell},\,\mathcal B^{-}_{r}.$}
\label{fig-0}
\end{figure}
The most interesting case in view of detecting chaotic dynamics is when there exist pairwise disjoint compact sets playing the role of ${\mathcal K}$ in Definition \ref{def-sap}. Indeed, applying Theorem \ref{th-fp} with respect to each of them, we get a multiplicity of fixed points localized in those compact sets.
Another crucial property of the SAP relation is that it is preserved under composition of maps, and thus, when dealing with the iterates of the function under consideration, it allows one to detect the presence of periodic points of any period (see Lemma A.1 and Theorems A.1 and A.2 in \cite{MePiZa-09}, which can be directly transposed to the three-dimensional setting, with the same proofs).\\
We now describe in Definition \ref{def-ch} what we mean when we talk about ``chaos'' and explain in Theorem \ref{th-ch} the relationship between that concept and the stretching relation in Definition \ref{def-sap}.
We stress that Theorem \ref{th-ch} is the main theoretical result we are going to apply in Section \ref{sec-a2} and that it can be proved by exploiting the two properties of the SAP relation mentioned above. In fact, its proof follows by the same arguments as in Theorems 2.2 and 2.3 in \cite{MePiZa-09}.
\begin{definition}\label{def-ch}
\rm{Let $X$ be a metric space and let $\psi: X\supseteq {\mathcal D}\to X$
be a function. Let also $m\ge 2$ be an integer and let
${\mathcal K}_0,\dots,{\mathcal K}_{m-1}$ be nonempty pairwise disjoint compact subsets of ${\mathcal D}.$ We say that
\textit{$\psi$ induces chaotic dynamics on $m$ symbols on the set
${\mathcal D}$ relatively to ${\mathcal K}_0,\dots,{\mathcal K}_{m-1}$}
if, setting
$$
\mathcal K:=\bigcup_{i=0}^{m-1}\mathcal K_i\subseteq {\mathcal D}
$$
and defining the nonempty compact set
\begin{equation}\label{eq-lam}
{\mathcal I}_{\infty}:=\bigcap_{n=0}^{\infty}\psi^{-n}(\mathcal K),
\end{equation}
then there exists a nonempty compact set
$${\mathcal I}\subseteq {\mathcal I}_{\infty} \subseteq {\mathcal K},$$
on which the following conditions are fulfilled:
\begin{itemize}
\item[$(i)$] $\psi({\mathcal I}) = {\mathcal I};$
\item[$(ii)$] $\psi|_{\mathcal I}$ is semi--conjugate to the
Bernoulli shift on $m$ symbols, that is, there exists a continuous
map $\pi:{\mathcal I}\to\Sigma_m^+,$ where $\Sigma_m^+:=\{0,\dots,m-1\}^{\mathbb N}$ is endowed with the distance
\begin{equation*}
\hat d(\textbf{s}', \textbf{s}'') := \sum_{i\in {\mathbb N}}
\frac{d(s'_i, s''_i)}{m^{i + 1}}\,,\quad \mbox{ for }\;
\textbf{s}'=(s'_i)_{i\in {\mathbb N}}\,,\,
\textbf{s}''=(s''_i)_{i\in {\mathbb N}}\in \Sigma_m^+
\end{equation*}
($\,d(\cdot\,,\cdot)$ is the discrete distance on $\{0,\dots,m-1\},$ i.e.,
$d(s'_i, s''_i)=0$ for $s'_i=s''_i$ and $d(s'_i, s''_i)=1$ for
$s'_i\not=s''_i$), such that the diagram
\begin{equation}\label{diag-1}
\begin{diagram}
\node{{\mathcal I}} \arrow{e,t}{\psi} \arrow{s,l}{\pi}
\node{{\mathcal I}} \arrow{s,r}{\pi} \\
\node{\Sigma_m^+} \arrow{e,b}{\sigma}
\node{\Sigma_m^+}
\end{diagram}
\end{equation}
commutes, where $\sigma:\Sigma_m^+\to\Sigma_m^+$ is the Bernoulli
shift defined by $\sigma((s_i)_i):=(s_{i+1})_i,\,\forall
i\in\mathbb N\,;$
\item[$(iii)$] the set of periodic points of $\psi|_{{\mathcal I}_{\infty}}$ is dense in ${\mathcal I}$
and the pre--image $\pi^{-1}(\textbf{s})\subseteq {\mathcal I}$ of
every $k$-periodic sequence $\textbf{s} = (s_i)_{i\in {\mathbb N}}\in \Sigma_m^+$
contains at least one $k$-periodic point.
\end{itemize}
}
\end{definition}
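As a concrete illustration (ours, for expository purposes only), the distance $\hat d$ and the one-sided Bernoulli shift $\sigma$ of Definition \ref{def-ch} can be sketched numerically on finite truncations of symbol sequences:

```python
# Sketch (for illustration only) of the metric d-hat on Sigma_m^+ and of the
# Bernoulli shift sigma, evaluated on finite truncations of the sequences.
def d_hat(s1, s2, m, terms=60):
    # truncated series: d(s'_i, s''_i) is the discrete distance on the symbols
    return sum((s1[i] != s2[i]) / m ** (i + 1) for i in range(terms))

def shift(s):
    # sigma((s_i)_i) := (s_{i+1})_i, acting on a truncated sequence
    return s[1:]

s1 = [0, 1] * 40           # truncations of two sequences on m = 2 symbols
s2 = [1, 0] * 40           # differs from s1 in every coordinate
close = [0, 1] * 40
close[50] = 1 - close[50]  # differs from s1 only far along the sequence

print(d_hat(s1, s2, m=2))     # close to 1: the sequences differ everywhere
print(d_hat(s1, close, m=2))  # tiny: early agreement dominates the series
```

The example shows why $\hat d$ captures the right topology: two sequences agreeing on a long initial segment are close, regardless of their tails.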
\begin{remark}\label{cons}
According to Theorem 2.2 in \cite{MePiZa-09}, from $(ii)$ in Definition \ref{def-ch} it follows that:
\begin{itemize}
\item[$-$] $h_{\rm top}(\psi)\ge h_{\rm top}(\psi|_{\mathcal I})\geq h_{\rm top}(\sigma) = \log(m),$
where $h_{\rm top}$ is the topological entropy;
\item[$-$] there exists a compact invariant set
$\Lambda \subseteq {\mathcal I}$ such that $\psi\vert_{\Lambda}$
is semi--conjugate to the Bernoulli shift on $m$ symbols,
topologically transitive and displays sensitive dependence on initial
conditions.
\end{itemize}
\end{remark}
\begin{theorem}\label{th-ch}
Let ${\widetilde{\mathcal P}}:= ({\mathcal
P},{\mathcal P}^-)$ be an oriented parallelepiped of a metric space
$X$ and let $\psi: \mathcal P\to X$ be a function. If
${\mathcal K_0},\dots,{\mathcal K_{m-1}}$ are $m\ge 2$ pairwise disjoint compact
subsets of ${\mathcal P}$ such that
\begin{equation}\label{sap-rel}
({\mathcal K}_i,\psi): {\widetilde{\mathcal P}} \Bumpeq{\!\!\!\!\!\!\!\!{\longrightarrow}} {\widetilde{\mathcal P}}, \mbox{ for } i=0,\dots,m-1,
\end{equation}
then $\psi$ induces chaotic dynamics on $m$ symbols on
${\mathcal P}$ relatively to ${\mathcal K}_0,\dots,{\mathcal K}_{m-1}.$
\end{theorem}
Notice that if the function $\psi$ in the above statement is also one--to--one on $\mathcal K:=\bigcup_{i=0}^{m-1}{\mathcal K}_i,$ then it is additionally possible to prove that $\psi$ restricted to a suitable invariant subset of $\mathcal K$ is semi--conjugate to the two--sided Bernoulli shift $\sigma:\Sigma_m\to\Sigma_m,$ $\sigma((s_i)_i):=(s_{i+1})_i,\,\forall i\in\mathbb Z,$ where $\Sigma_m:=\{0,\dots,m-1\}^{\mathbb Z}$ (see \cite[Lemma 3.2]{PiZa-08})\,\footnote{This is not the case in our application in Section \ref{sec-a2}. Indeed, as is clear from Figure 2, the map $F$ in \eqref{eq-tg} is not injective on the set $\mathcal K_0\cup\mathcal K_1$ introduced in Theorem \ref{th1}.}.
We are now in a position to explain what the SAP method consists in.
Given a dynamical system generated by a map $\psi,$ our technique consists in finding
a subset $\mathcal P$ of the domain of $\psi$ homeomorphic to the unit cube and at least two disjoint compact subsets of $\mathcal P$ for which the stretching property in \eqref{sap-rel} is satisfied (when $\mathcal P$ is suitably oriented). In this way, Theorem \ref{th-ch} ensures the existence of chaotic dynamics in the sense of Definition \ref{def-ch} for the system under consideration and, in particular, the positivity of the topological entropy of $\psi,$ which is in fact generally considered one of the trademark features of chaos.
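To fix the ideas, the expansion-with-folding geometry required by Theorem \ref{th-ch} can be illustrated on a toy map of the unit cube (our example, not taken from the cited references): a contraction in the first two coordinates combined with a tent-like folding expansion in the third.

```python
# Toy map psi on the cube P = [0,1]^3, oriented by its bottom (z = 0) and
# top (z = 1) faces: contraction in x and y, tent-like folding expansion in z.
def psi(x, y, z):
    T = 3 * z if z <= 0.5 else 3 * (1 - z)   # expands and folds [0,1] in z
    return (0.25 + 0.5 * x, 0.25 + 0.5 * y, T)

# On each of the two slabs K0 = [0,1]^2 x [0,1/3] and K1 = [0,1]^2 x [2/3,1]
# the third component of psi sweeps the whole of [0,1], so any path joining
# the bottom and top faces admits sub-paths in K0 and in K1 whose psi-images
# again join them: (K_i, psi) stretches the oriented cube to itself, i = 0, 1.
vals_K0 = [psi(0.5, 0.5, i / 300)[2] for i in range(101)]          # z in [0, 1/3]
vals_K1 = [psi(0.5, 0.5, 2 / 3 + i / 300)[2] for i in range(101)]  # z in [2/3, 1]
```

With two such compact slabs, Theorem \ref{th-ch} would yield chaotic dynamics on two symbols for this toy map; the triopoly map of Section \ref{sec-a2} exhibits the same qualitative geometry.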
\section{The triopoly game model}\label{sec-a2}
$~$
In this section we apply the SAP method to an
economic model belonging to the class of triopoly games, taken from \cite{NaTr->}.\\
By oligopoly, economists denote a market form characterized by
the presence of a small number of firms. A triopoly is the special
case of oligopoly in which there are three firms. The term game
refers to the fact that the players - in our case the firms - make
their decisions reacting to each other's actual or
expected moves, following a suitable strategy.
In particular, we will deal with a dynamic game where
moves are repeated in time, at discrete, uniform intervals.\\
More precisely, the model analyzed
can be described as follows.\\
The economy consists of three firms producing an identical commodity
at a constant unit cost, not necessarily
equal for the three firms. The commodity is sold in a single market
at a price which depends on total output through a given inverse
demand function, known to one firm (say, Firm $2$) globally and to another firm (say, Firm $1$) locally.
In fact, Firm $1$ linearly approximates the demand function around the latest realized pair of quantity and market price.
Finally, Firm $3$ does not know anything about the demand function and adopts a myopic adjustment mechanism,
i.e., it increases or decreases its output according to the sign of the marginal profit from the last period.
The goal of each firm is
the maximization of profits, i.e., the difference between revenue
and costs. The problem of each firm is to decide at the beginning of every time period $t$
how much to produce in that period, on the basis of the limited
information available and, in particular, of the expectations about
its competitors' future decisions.\\
In what follows, we introduce the needed notation and the postulated assumptions:
\vskip .5cm
\noindent \bf 1. Notation \rm \vskip .25cm
$x_t$: output of Firm 1 at time $t\,;$
$y_t$: output of Firm 2 at time $t\,;$
$z_t$: output of Firm 3 at time $t\,;$
$p$: unit price of the single commodity\,. \vskip .5cm
\noindent \bf 2. Inverse demand function \rm
\begin{equation}\label{p}
p:=\frac{1}{x+y+z}\,.
\end{equation}
\noindent \bf 3. Technology \rm \vskip .25cm
The unit cost of production for Firm $i$ is equal to $c_i,\, i=1,2,3,$
where $c_1, c_2, c_3$ are (possibly different) positive constants. \vskip .5cm
\noindent \bf 4. Price approximation \rm \vskip .25cm
Firm 1 observes the
current market price $p_t$ and the corresponding total supplied quantity $Q_t=x_t+y_t+z_t.$ By
using market experiments, that player obtains the slope of
the demand function at the point $(Q_t, p_t)$ and, in the absence of other information, it
conjectures that the demand function, which has to pass through that point, is linear.
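With the inverse demand in \eqref{p}, this approximation can be made explicit (a sketch under the stated assumptions, not taken from \cite{NaTr->}): the slope observed at $(Q_t,p_t)$ is $p'(Q_t)=-1/Q_t^2,$ so the conjectured linear demand is the line through $(Q_t,p_t)$ with that slope.

```python
# Sketch of Firm 1's conjectured linear demand (our illustration): the true
# inverse demand is p(Q) = 1/Q, so the slope learnt at (Q_t, p_t) is
# p'(Q_t) = -1/Q_t**2, and the conjectured line passes through (Q_t, p_t).
def conjectured_demand(Q_t):
    p_t = 1.0 / Q_t
    slope = -1.0 / Q_t ** 2
    return lambda Q: p_t + slope * (Q - Q_t)

p_conj = conjectured_demand(2.0)
print(p_conj(2.0))  # 0.5 : the line passes through the observed point
```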
\vskip .5cm
\noindent \bf 5. Expectations \rm \vskip .25cm
In the presence of incomplete information concerning their competitors'
future decisions (and therefore about future prices), Firms 1 and 2 are assumed to use naive expectations. This means that at each time $t$
both Firms 1 and 2 expect that the other two firms will keep their outputs unchanged w.r.t. the
previous period. \vskip .35cm
\noindent
As shown in \cite{NaTr->}, the assumptions above lead to the following system of three difference equations in
the variables $x,\,y$ and $z$:
\begin{equation*}
\left\{
\begin{array}{ll}
x_{t+1}= \frac{2x_t+y_t+z_t-c_1(x_t+y_t+z_t)^2}{2}\\
\\
y_{t+1}= \sqrt{\frac{x_t+z_t}{c_2}}-x_t-z_t\\
\\
z_{t+1}= z_t+\alpha z_t\left(-c_3+\frac{x_t+y_t}{(x_t+y_t+z_t)^2}\right)\\
\end{array}
\right. \eqno{(\mbox{TG})}
\end{equation*}
where $\alpha$ is a positive parameter denoting the speed of Firm 3's adjustment to changes in profit and $c_1,\,c_2,\,c_3$ are the marginal costs.\\
We refer the interested reader to \cite{NaTr->} for a more detailed explanation of
the model, as well as for the derivation of (TG).
As mentioned in the Introduction, in \cite{NaTr->} Naimzada and Tramontana discuss the
equilibrium solution of system (TG) along with its stability and
provide numerical evidence of the presence of chaotic dynamics.
In particular, the existence of a double route
to chaos is shown: according to the parameter values, the Nash equilibrium
can undergo a flip bifurcation or a Neimark-Sacker bifurcation.
Moreover, in \cite{NaTr->} the authors numerically find multistability of different
coexisting attractors and identify their basins of attraction through a global analysis.\\
Hereinafter we will integrate that study
by rigorously proving that, for certain parameter configurations,
system (TG) exhibits chaotic behavior in the precise sense
discussed in Section \ref{sec-a1} \footnote{Notice that, as we shall stress in Section \ref{sec-a3}, we
only prove \textit{existence} of an invariant, chaotic set, not
its \textit{attractiveness}.}.\\
In order to apply the SAP method
to analyze system (TG), it is expedient to represent it
in the form of a continuous map ${F=(F_1,F_2,F_3):\mathbb R_+^3\to\mathbb R^3},$ with components
\begin{equation}\label{eq-tg}
\begin{array}{ll}
{F_1(x,y,z):=\frac{2x+y+z-c_1(x+y+z)^2}{2}},\\
\\
{F_2(x,y,z):=\sqrt{\frac{x+z}{c_2}}-x-z},\\
\\
{F_3(x,y,z):=z+\alpha z\left(-c_3+\frac{x+y}{(x+y+z)^2}\right)}.\\
\end{array}
\end{equation}
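For readers who wish to experiment, the map $F$ can be sketched as follows (our code, not the authors'). For the short orbit we use $\alpha=8,$ one of the values studied numerically in \cite{NaTr->}, since with larger values of $\alpha$ an arbitrarily chosen initial point may quickly leave the region where the square root in $F_2$ is defined.

```python
# Minimal sketch of the map F in (eq-tg); marginal costs as chosen below in
# the text, alpha = 8 (a value studied numerically in the cited paper).
from math import sqrt

def F(x, y, z, c1=0.4, c2=0.55, c3=0.6, alpha=8.0):
    Q = x + y + z
    return ((2 * x + y + z - c1 * Q ** 2) / 2,         # Firm 1: linear approximation
            sqrt((x + z) / c2) - (x + z),              # Firm 2: best response
            z + alpha * z * (-c3 + (x + y) / Q ** 2))  # Firm 3: gradient adjustment

# the plane z = 0 is invariant: an inactive Firm 3 stays inactive
assert F(0.6, 0.4, 0.0)[2] == 0.0

# a short orbit starting in the positive octant
state = (0.6, 0.4, 0.2)
for _ in range(5):
    state = F(*state)
```

The invariance of the plane $z=0$ is exactly the equality behind condition $(C1)$ in the proof of Theorem \ref{th1} below.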
We prove that the SAP property for the map $F$ is satisfied
when choosing a generalized parallelepiped in the family of parallelepipeds of the first octant described analytically by
\begin{equation}\label{eq-cu}
{\mathcal R=\mathcal R(x_i,\,y_i,\,z_i):=\left\{(x,y,z)\in\mathbb R^3: x_{\ell}\le x\le x_r,\,y_{\ell}\le y\le y_r,\,z_{\ell}\le z\le z_r\right\}},
\end{equation}
with $x_{\ell}<x_r,\,y_{\ell}<y_r,\,z_{\ell}<z_r$ and $x_i,\,y_i,\,z_i,\,i\in\{\ell,r\},$ satisfying the conditions in Theorem \ref{th1}.\\
The parallelepiped $\mathcal R$ can be oriented by setting
\begin{equation}\label{eq-or}
{\mathcal R^-_{\ell}:= [x_{\ell},x_r]\times [y_{\ell},y_r]\times\{z_{\ell}\}} \, \mbox{ and } \, {\mathcal R^-_{r}:=[x_{\ell},x_r]\times [y_{\ell},y_r]\times\{z_r\}\,}.
\end{equation}
Consistently with \cite{NaTr->}, we choose the marginal costs as
$c_1=0.4,\, c_2=0.55$ and $c_3=0.6.$ On the other hand, in order to easily apply the SAP method we need the parameter $\alpha$ to be close to $17,$ while in \cite{NaTr->} the presence of chaos is numerically detected for $\alpha$ around $8$\,\footnote{As explained below, it would be possible to apply our technique with a lower value of $\alpha,$ at the cost of changing the parameter conditions in Theorem \ref{th1} and of making the computations in the proof much more complicated. However, it does not seem possible to apply the SAP method to the first iterate of $F$ when $\alpha$ is close to $8,$ which is the largest value considered in \cite{NaTr->}.}. The implications of this discrepancy will be discussed in Section \ref{sec-a3}.
\begin{figure}
\caption{A possible choice of the parallelepiped $\mathcal
R$ for system (TG), according to conditions $(H1)$--$(H5).$ It has been oriented by
taking as $[\,\cdot\,]^{-}$-set the union of the faces $\mathcal R^-_{\ell}$ and $\mathcal R^-_{r}$ in \eqref{eq-or}. The same color is used to depict a set and its $F$-image set.}
\label{fig-1}
\end{figure}
\begin{figure}
\caption{This picture complements the previous one, by showing how the two vertical faces of $\mathcal R$ not considered in Figure 1 are transformed by the map $F.$ Again, the same color is used to depict a set and its $F$-image set.}
\label{fig-2}
\end{figure}
\noindent Our result on system (TG) can be stated as follows:
\begin{theorem}\label{th1}
If the parameters of the map $F$ defined in \eqref{eq-tg} assume the following values
\begin{equation}\label{par}
c_1=0.4,\,\, c_2=0.55,\,\,c_3=0.6,\,\,\alpha= 17,
\end{equation}
then, for any parallelepiped ${\mathcal R}=\mathcal R(x_i,y_i,z_i)$ belonging to the family described in \eqref{eq-cu}, with
$x_i,\,y_i,\,z_i,\,i\in\{\ell,r\},$ satisfying the conditions:
\begin{equation*}
\begin{array}{lll}
& {}\!\!\!\!\! (H1) & z_{\ell}=0\,;\\
& {}\!\!\!\!\! (H2) & x_{\ell}+y_{\ell}>z_r\ge\sqrt{\frac{\alpha}{\alpha c_3-1}(x_{\ell}+y_{\ell})}-(x_{\ell}+y_{\ell})>0\,;\\
& {}\!\!\!\!\! (H3) & 2\left(\sqrt{\frac{\alpha}{\alpha c_3+1}(x_{r}+y_{r})}-(x_{r}+y_{r})\right)>z_r\,;\\
& {}\!\!\!\!\! (H4) &\frac{1}{c_1}-x_r>y_{r}+z_{r}>\frac{1}{2c_1}-x_{\ell}>0\,,\quad \frac{1}{2c_1}-x_r > y_{\ell}+z_{\ell}\,,\quad x_{r}\ge\frac{1}{4c_1}\,,\\
& & \frac{1}{2c_1}\left(1-c_1(y_{\ell}+y_{r}+z_{\ell}+z_{r})\right)\ge x_{\ell}>0\,,\quad \sqrt{\frac{y_{\ell}+z_{\ell}}{c_1}}-(y_{\ell}+z_{\ell})\ge x_{\ell}\,;\\
& {}\!\!\!\!\! (H5) &
x_{\ell}+z_{\ell}>\frac{1}{4c_2}\,,\quad y_r\ge\sqrt{\frac{x_{\ell}+z_{\ell}}{c_2}}-(x_{\ell}+z_{\ell})>0\,,\quad\sqrt{\frac{x_{r}+z_{r}}{c_2}}-(x_{r}+z_{r})\ge y_{\ell}>0\,,
\end{array}
\end{equation*}
and oriented as in \eqref{eq-or}, there exist two disjoint compact subsets ${\mathcal K}_0={\mathcal K}_0(\mathcal R)$ and ${\mathcal K}_1={\mathcal K}_1(\mathcal R)$ of $\mathcal R$ such that
\begin{equation}\label{eq-ks}
({\mathcal K}_i,F): {\widetilde{\mathcal R}} \Bumpeq{\!\!\!\!\!\!\!\!{\longrightarrow}} {\widetilde{\mathcal R}}, \mbox{ for } i=0,1.
\end{equation}
Hence, the map $F$ induces chaotic dynamics on two symbols on
${\mathcal R}$ relatively to $\mathcal K_0$ and
$\mathcal K_1$ and displays all the properties listed in Theorem
\ref{th-ch}.
\end{theorem}
\begin{figure}
\caption{With reference to the parallelepiped $\mathcal R$ in Figure 2,
reproduced here at a different scale, we show that the $F$-image set of
an arbitrary path $\gamma$ joining in $\mathcal R$ the two components of
the boundary set $\mathcal R^-$ intersects $\mathcal R$ twice. In particular, this is due
to the fact that the horizontal faces $\mathcal R^-_{\ell}$ and $\mathcal R^-_{r}$ are mapped by $F$ outside $\mathcal R,$ in agreement with conditions $(C1)$ and $(C2)$ in the proof of Theorem \ref{th1}.}
\label{fig-3}
\end{figure}
\begin{figure}
\caption{Since $F(S)\cap\mathcal R=\emptyset$ (see Figure 4), then ${\mathcal R}\cap F(\mathcal R)={\mathcal K}_0\cup{\mathcal K}_1,$ where $\mathcal K_0$ and $\mathcal K_1$ are the disjoint compact sets introduced in the proof of Theorem \ref{th1}.}
\label{fig-4}
\end{figure}
\begin{figure}
\caption{Given the arbitrary path $\gamma$ in Figure $5$ joining in $\mathcal R$ the two components of
$\mathcal R^-,$ we show that the $F$-image sets of $\overline\gamma\cap{\mathcal K}_0$ and $\overline\gamma\cap{\mathcal K}_1$ join the two components of $\mathcal R^-$ as well.}
\label{fig-5}
\end{figure}
Before proving Theorem \ref{th1}, we make some comments on the conditions in $(H1)$--$(H5).$
First of all, notice that those conditions imply that $x_{\ell}+z_{\ell}>0$ and $x_{\ell}+y_{\ell}+z_{\ell}>0$ and thus there are no issues with the definition of $F$ on $\mathcal R$\,\,\footnote{Notice that, with our conditions on the parameters, it is immediate to check that also the functions we will introduce in the proof of Theorem \ref{th1} will be well defined, even when not explicitly remarked.}.
We also remark that we chose to split $(H1)$--$(H5)$ according to the corresponding conditions $(C1)$--$(C5)$ in the proof below which they allow us to verify.
Moreover, we stress that the assumptions in $(H1)$--$(H5)$ are consistent, i.e., there exist parameter configurations satisfying them all. For instance, we checked that they are fulfilled for $c_1=0.4,\,c_2=0.55,\,c_3=0.6,\,\alpha=17,$ $x_{\ell}=0.5766666668,\,x_r=0.6316666668,\,y_{\ell}=0.3366666668,\,y_r=0.4516666668,\,z_{\ell}=0,$ $z_r=0.3951779684.$ These are the same parameter values we used to draw Figures $2$--$6$, with the only exception of $z_{\ell},$ which in those pictures is slightly negative. Although this makes no sense from an economic viewpoint, as the variables $x,\,y$ and $z$ represent the outputs of the three firms, we made such a choice in order to make the pictures easier to read. In fact, choosing $z_{\ell}=0,$ we have
$F(\mathcal R^-_{\ell})\subseteq \mathcal R^-_{\ell}$ and thus the crucial set $F(\mathcal R^-_{\ell})$ would not have been visible in Figures 2--4.
In this respect, we also remark that in Figure 3 the $x$-axis has been reversed in order to make the double folding of $F(\mathcal R)$ more evident.\\
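The consistency of $(H1)$--$(H5)$ can also be checked numerically; the sketch below (ours, not part of the proof) encodes the five conditions and evaluates them at the configuration quoted above, reading the value of $y_r$ as $0.4516666668$ (as required by $y_{\ell}<y_r$).

```python
# Numerical check that the quoted parameter configuration satisfies (H1)-(H5).
from math import sqrt

c1, c2, c3, alpha = 0.4, 0.55, 0.6, 17.0
xl, xr = 0.5766666668, 0.6316666668
yl, yr = 0.3366666668, 0.4516666668
zl, zr = 0.0, 0.3951779684

H1 = (zl == 0.0)
H2 = (xl + yl > zr >= sqrt(alpha / (alpha * c3 - 1) * (xl + yl)) - (xl + yl) > 0)
H3 = (2 * (sqrt(alpha / (alpha * c3 + 1) * (xr + yr)) - (xr + yr)) > zr)
H4 = (1 / c1 - xr > yr + zr > 1 / (2 * c1) - xl > 0
      and 1 / (2 * c1) - xr > yl + zl
      and xr >= 1 / (4 * c1)
      and (1 - c1 * (yl + yr + zl + zr)) / (2 * c1) >= xl > 0
      and sqrt((yl + zl) / c1) - (yl + zl) >= xl)
H5 = (xl + zl > 1 / (4 * c2)
      and yr >= sqrt((xl + zl) / c2) - (xl + zl) > 0
      and sqrt((xr + zr) / c2) - (xr + zr) >= yl > 0)

print(all([H1, H2, H3, H4, H5]))  # True
```

Note that some inequalities, e.g. the last one in $(H4)$, hold with only a small margin, which reflects the delicacy of the choice of $\mathcal R$.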
In regard to the choice of the parameter values in \eqref{par}, as mentioned above, they are the same as in \cite{NaTr->}, except for $\alpha,$ which is larger here. In fact, numerical exercises we performed show that when $\alpha$ increases it becomes easier to find a domain on which to apply the SAP technique. On the other hand, it does not seem possible to apply our method for a considerably smaller value of $\alpha.$
The impossibility of reducing $\alpha$ much below $17$ comes from the fact that, as is immediate to verify, when such a parameter decreases it becomes more and more difficult to have all the conditions in $(H2)$ and $(H3)$ fulfilled, and with $\alpha=10$ it seems just impossible. The situation would slightly improve by dealing with
$(C2)$ and $(C3)$ below, instead of $(C2)$ and $(C3^{'})$ as we actually do in order to simplify our argument, but still computer plots suggest it is not possible to have both conditions satisfied when $\alpha=8,$ that is, the largest value considered in \cite{NaTr->}.
\begin{proof}
We show that, for the parameter values in \eqref{par}, any choice of $x_i,\,y_i,\,z_i,\,i\in\{\ell,r\},$ fulfilling
$(H1)$--$(H5)$ guarantees that the image under the map $F$ of any path
$\gamma=(\gamma_1,\gamma_2,\gamma_3):[0,1]\to\mathcal R=\mathcal R(x_i,y_i,z_i)$ joining the sets
$\mathcal R^-_{\ell}$ and $\mathcal
R^-_{r}$ defined in \eqref{eq-or} satisfies the following
conditions:
\begin{itemize}
\item [$(C1)$] $F_3(\gamma(0))\le z_{\ell}\,;$
\item [$(C2)$] $F_3(\gamma(1))\le z_{\ell}\,;$
\item [$(C3)$] $\exists \,\, t^*\in \, (0,1): F_3(\gamma(t^*))>z_{r}\,;$
\item [$(C4)$] $F_1(\gamma(t))\in [x_{\ell},x_{r}],\,\forall t\in [0,1]\,;$
\item [$(C5)$] $F_2(\gamma(t))\in [y_{\ell},y_{r}],\,\forall t\in [0,1]\,.$
\end{itemize}
Broadly speaking, conditions $(C1)$--$(C3)$ describe an expansion
with folding along the $z$--coordinate. In fact, the image $F\circ
\gamma$ of any path $\gamma$ joining in ${\mathcal R}$ the sides ${\mathcal R}^-_{\ell}$ and
${\mathcal R}^-_r$ crosses the parallelepiped ${\mathcal
R}$ a first time for $t\in(0,t^*)$ and then crosses ${\mathcal R}$ back
again for $t\in(t^*,1).$ Conditions $(C4)$ and $(C5)$ imply instead a
contraction along the $x$--coordinate and the $y$--coordinate, respectively.
\noindent
Actually, in order to simplify the exposition, instead of $(C3),$ we will check that the stronger condition
\begin{itemize}
\item [$(C3^{'})$] $F_3\left(x,y,\frac{z_{\ell}+z_{r}}{2}\right)>z_r,\,\forall (x,y)\in [x_{\ell},x_{r}]\times[y_{\ell},y_{r}],$
\end{itemize}
is satisfied, which means that the inequality in $(C3)$ holds for any
$t^*\in (0,1)$ such that
$\gamma(t^*)=\big(x,y,\frac{z_{\ell}+z_{r}}{2}\big),$ for some
$(x,y)\in [x_{\ell},x_{r}]\times[y_{\ell},y_{r}].$ Notice that
\begin{equation}\label{s}
S:=\Big\{\Big(x,y,\frac{z_{\ell}+z_{r}}{2}\Big):(x,y)\in [x_{\ell},x_{r}]\times[y_{\ell},y_{r}]\Big\}\subseteq\mathcal R
\end{equation}
is the flat surface of middle points w.r.t. the
$z$--coordinate in $\mathcal R$ depicted in Figure 4.\\
Setting
$$\mathcal R_0:=\Big\{(x,y,z)\in\mathbb R^3: (x,y)\in [x_{\ell},x_{r}]\times[y_{\ell},y_{r}],\,z\in\Big[z_{\ell},\frac{z_{\ell}+z_{r}}{2}\Big]\Big\},$$
$$\mathcal R_1:=\Big\{(x,y,z)\in\mathbb R^3: (x,y)\in [x_{\ell},x_{r}]\times[y_{\ell},y_{r}],\,z\in\Big[\frac{z_{\ell}+z_{r}}{2},z_r\Big]\Big\},$$
and
$$\mathcal K_0:=\mathcal R_0\cap F(\mathcal R) \quad \mbox{ and } \quad \mathcal K_1:=\mathcal R_1
\cap F(\mathcal R)$$
(see Figure 5), we claim that $(C1),(C2),(C3^{'}),(C4)$ and
$(C5)$ together imply \eqref{eq-ks}. Notice at first that $\mathcal K_0$
and $\mathcal K_1$ are disjoint because, thanks to
condition $(C3^{'}),$ the set $S$ in \eqref{s} is mapped by $F$ outside $\mathcal R$ (see Figure 4).
Furthermore, by $(C1),\,(C2)$ and
$(C3^{'}),$ for every path $\gamma: [0,1]\to {\mathcal R}$ such
that $\gamma(0)$ and $\gamma(1)$ belong to different components of ${\mathcal R}^-,$
there exist two disjoint
sub-intervals $[t_0',t_0''],\, [t_1',t_1'']\subseteq [0,1]$ such
that, setting
$\sigma_0:=\gamma|_{[t_0',t_0'']}:[t_0',t_0'']\to {\mathcal K}_0$ and
$\sigma_1:=\gamma|_{[t_1',t_1'']}:[t_1',t_1'']\to {\mathcal K}_1,$
it holds that $F(\sigma_0(t_0'))$ and
$F(\sigma_0(t_0''))$ belong to different components of ${\mathcal R}^-,$ as well as $F(\sigma_1(t_1'))$ and
$F(\sigma_1(t_1'')).$ Moreover, from $(C4)$ and $(C5)$ it follows that
$F(\sigma_0(t))\in {\mathcal R},\,\forall\, t\in [t_0',t_0'']$ and
$F(\sigma_1(t))\in {\mathcal R},\,\forall\, t\in [t_1',t_1''].$\\
This means that $({\mathcal K}_i,F): {\widetilde{\mathcal R}} \Bumpeq{\!\!\!\!\!\!\!\!{\longrightarrow}} {\widetilde{\mathcal R}},\,i=0,1,$ and
our claim is thus proved.\\
Once the stretching condition in \eqref{eq-ks} is achieved,
the conclusion of the theorem follows by Theorem \ref{th-ch} \footnote{Notice that, by the choice of $\mathcal K_0$ and $\mathcal K_1,$ the invariant chaotic set $\mathcal I\subseteq\mathcal K_0\cup\mathcal K_1$
in Definition \ref{def-ch}
lies entirely in the first octant and therefore makes economic sense for the application in question.}.
In order to complete the proof, let us verify that any choice of the parameters as in \eqref{par} and of the domain
${\mathcal R}=\mathcal R(x_i,y_i,z_i)$ in agreement with $(H1)$--$(H5)$
implies that conditions $(C1),(C2),(C3^{'}),(C4)$ and
$(C5)$ are fulfilled for any path
$\gamma:[0,1]\to\mathcal R$ joining
$\mathcal R^-_{\ell}$ and
$\mathcal R^-_{r}$ \footnote{Just to fix the ideas, in what follows we will assume that $\gamma(0)\in\mathcal R^-_{\ell}$ and $\gamma(1)\in\mathcal R^-_{r}.$}.
In so doing, we will prove that the inequality in $(C1)$ is indeed an equality.\\
Let us start with the verification of $(C1).$ Since $F_3(x,y,z)=z\left(1-\alpha c_3+\frac{\alpha (x+y)}{(x+y+z)^2}\right)$
and $\gamma(0)\in\mathcal R^-_{\ell}= [x_{\ell},x_r]\times [y_{\ell},y_r]\times\{z_{\ell}\}=[x_{\ell},x_r]\times [y_{\ell},y_r]\times\{0\}$ by $(H1),$
it then follows that $\gamma_3(0)=0$ and thus
$0=F_3(\gamma(0))\le z_{\ell}=0,$ as desired.\\
In regard to $(C2),$ we have to verify that $F_3|_{\mathcal R^-_{r}}\le 0,$ that is, $F_3(x,y,z_r)\le 0,$ $\forall (x,y)\in [x_{\ell},x_{r}]\times[y_{\ell},y_{r}].$
Setting $A:=x+y,$ we consider, instead of $F_3|_{\mathcal R^-_{r}},$ the one-dimensional function\footnote{In several steps of the proof,
instead of studying the original problem, through a substitution we will be led to consider a lower-dimensional one. Alternatively, we could use the Kuhn-Tucker Theorem for constrained maximization problems. We decided to follow the former approach because it is more elementary and requires fewer computations. However, we stress that the two approaches require imposing the same conditions $(H1)$--$(H5)$ on the parameters.}
$$\phi:[x_{\ell}+y_{\ell},x_r+y_r]\to\mathbb R,\quad \phi(A):=z_r\left(1-\alpha c_3+\frac{\alpha A}{(A+z_r)^2}\right).$$
Computing the first derivative of $\phi,$ we get
$\phi^{\,'}(A)=z_r\,\alpha\left(\frac{-A+z_r}{(A+z_r)^3}\right),$ which vanishes at $A=z_r.$ However, since by $(H2)$ we have $x_{\ell}+y_{\ell}>z_r,$ it follows that $\phi^{\,'}(A)<0,$ $\forall A\in[x_{\ell}+y_{\ell},x_r+y_r].$ Hence, $F_3|_{\mathcal R^-_{r}}\le F_3(x_{\ell},y_{\ell},z_r)$ and thus, in order to have $(C2)$ satisfied, it suffices that $F_3(x_{\ell},y_{\ell},z_r)\le 0.$ Imposing this condition, we find
$z_r\left(1-\alpha c_3+\frac{\alpha (x_{\ell}+y_{\ell})}{(x_{\ell}+y_{\ell}+z_r)^2}\right)\le 0,$ which is fulfilled when
$\frac{\alpha c_3-1}{\alpha}\ge\frac{x_{\ell}+y_{\ell}}{(x_{\ell}+y_{\ell}+z_r)^2}.$ Making $z_r$ explicit, this
holds when $z_r\ge\sqrt{\frac{\alpha}{\alpha c_3-1}(x_{\ell}+y_{\ell})}-(x_{\ell}+y_{\ell}),$ that is, when
$(H2)$ is fulfilled. Notice that the latter is a ``true'' restriction since, still by $(H2),$ the right-hand side of the above inequality is positive.
The verification of $(C2)$ is complete.\\
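For the reader's convenience, the elided algebra behind ``making $z_r$ explicit'' in the verification of $(C2)$ can be sketched as follows, writing $A:=x_{\ell}+y_{\ell}$ and using that $\alpha c_3>1$ (implicit in $(H2)$):
\begin{align*}
\frac{\alpha c_3-1}{\alpha}\ge\frac{A}{(A+z_r)^2}
\;&\Longleftrightarrow\; (A+z_r)^2\ge\frac{\alpha}{\alpha c_3-1}\,A\\
\;&\Longleftrightarrow\; A+z_r\ge\sqrt{\frac{\alpha}{\alpha c_3-1}\,A}
\;\Longleftrightarrow\; z_r\ge\sqrt{\frac{\alpha}{\alpha c_3-1}\,A}-A.
\end{align*}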
As regards $(C3^{'}),$ we need to check that $F_3\left(x,y,\frac{z_{\ell}+z_{r}}{2}\right)>z_r,\,\forall (x,y)\in [x_{\ell},x_{r}]\times[y_{\ell},y_{r}],$ that is, recalling the definition of $S$ in \eqref{s}, $F_3|_S>z_r.$ Notice that, by $(H1),$
$\frac{z_{\ell}+z_{r}}{2}=\frac{z_{r}}{2}.$
Analogously to what was done above, instead of $F_3|_S,$ let us consider the one-dimensional function
$$\varphi:[x_{\ell}+y_{\ell},x_r+y_r]\to\mathbb R,\quad \varphi(A):=\frac{z_{r}}{2}\left(1-\alpha c_3+\frac{\alpha A}{\big(A+\frac{z_{r}}{2}\big)^2}\right).$$
Since $x_{\ell}+y_{\ell}>z_r>\frac{z_r}{2},$ by the previous analysis we know that $\varphi(A)\ge \varphi(x_r+y_r)=F_3\left(x_r,y_r,\frac{z_{r}}{2}\right).$
Hence, in order to have $F_3|_S>z_r,$ it suffices that $F_3\left(x_r,y_r,\frac{z_{r}}{2}\right)>z_r,$ that is,
$$\frac{z_{r}}{2}\left(1-\alpha c_3+\frac{\alpha (x_r+y_r)}{\big(x_r+y_r+\frac{z_{r}}{2}\big)^2}\right)>z_r.$$
Since $z_{r}>0,$ making $z_r$ explicit, we find
$$z_r<2\left(\sqrt{\frac{\alpha}{\alpha c_3+1}(x_{r}+y_{r})}-(x_{r}+y_{r})\right)$$
and this condition is satisfied thanks to $(H3).$ Hence $(C3^{'})$ is verified.\\
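For completeness, the passage from the displayed inequality to the above bound on $z_r$ can be sketched as follows, writing $A:=x_r+y_r$ and dividing by $\frac{z_r}{2}>0$:
\begin{align*}
\frac{z_{r}}{2}\left(1-\alpha c_3+\frac{\alpha A}{\big(A+\frac{z_{r}}{2}\big)^2}\right)>z_r
\;&\Longleftrightarrow\; \frac{\alpha A}{\big(A+\frac{z_{r}}{2}\big)^2}>1+\alpha c_3\\
\;&\Longleftrightarrow\; \Big(A+\frac{z_{r}}{2}\Big)^2<\frac{\alpha}{\alpha c_3+1}\,A
\;\Longleftrightarrow\; z_r<2\left(\sqrt{\frac{\alpha}{\alpha c_3+1}\,A}-A\right).
\end{align*}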
In order to check $(C4),$ we need to show the two inequalities $F_1(x,y,z)\le x_r,$ $\forall (x,y,z)\in\mathcal R$ and
$F_1(x,y,z)\ge x_{\ell},\,\forall (x,y,z)\in\mathcal R,$ which are satisfied if
$${\displaystyle{\max_{(x,y,z)\in\mathcal R}F_1(x,y,z)}}\le x_r \qquad \mbox{and} \qquad {\displaystyle{\min_{(x,y,z)\in\mathcal R}F_1(x,y,z)}}\ge x_{\ell}\,,$$
respectively\footnote{Notice that such maximum and minimum values exist by the Weierstrass Theorem.}.\\
Instead of considering $F_1|_{\mathcal R},$ setting $B:=y+z$ and $T:=[x_{\ell},x_r]\times[y_{\ell}+z_{\ell},y_r+z_r],$ we deal with the bidimensional function
$$\Phi:T\to\mathbb R,\quad \Phi(x,B):=\frac{2x+B-c_1(x+B)^2}{2}\,,$$
whose partial derivatives are
$$\frac{\partial\Phi}{\partial x}=1-c_1(x+B)\qquad \mbox{ and } \qquad \frac{\partial\Phi}{\partial B}=\frac{1}{2}-c_1(x+B).$$
Since they do not vanish simultaneously, there are no critical points in the interior of $T.$ We then study
$\Phi$ on the boundary of its domain.\\
As concerns $\Phi_1(B):=\Phi|_{\{x_{\ell}\}\times[y_{\ell}+z_{\ell},y_r+z_r]}(x,B)=\Phi(x_{\ell},B),$ we have that
$\Phi_1^{\,'}(B)=\frac{1}{2}-c_1(x_{\ell}+B),$ which vanishes at $\overline B=\frac{1}{2c_1}-x_{\ell}.$ This is the maximum point of $\Phi_1$ if $\overline B\in [y_{\ell}+z_{\ell},y_r+z_r].$ But that is guaranteed by the conditions in $(H4).$\\
Similarly, setting $\Phi_2(B):=\Phi|_{\{x_{r}\}\times[y_{\ell}+z_{\ell},y_r+z_r]}(x,B)=\Phi(x_{r},B),$ we find that its maximum point, still by $(H4),$ is given by $\widehat B=\frac{1}{2c_1}-x_{r}\in [y_{\ell}+z_{\ell},y_r+z_r].$\\
In regard to $\Phi_3(x):=\Phi|_{[x_{\ell},x_r]\times\{y_{\ell}+z_{\ell}\}}(x,B)=\Phi(x,y_{\ell}+z_{\ell}),$ we have $\Phi_3^{\,'}(x)=1-c_1(x+y_{\ell}+z_{\ell}),$
which vanishes at $\overline x=\frac{1}{c_1}-(y_{\ell}+z_{\ell}).$ By the conditions in $(H4),$ $\overline x>x_r$ and thus $\Phi_3(x)$ is increasing on $[x_{\ell},x_r]$. Analogously, since $\widehat x=\frac{1}{c_1}-(y_r+z_r)>x_r,$ it holds that $\Phi_4(x):=\Phi|_{[x_{\ell},x_r]\times\{y_r+z_r\}}(x,B)=\Phi(x,y_r+z_r)$ is increasing on $[x_{\ell},x_r].$ Summarizing, the two candidates for the maximum point of $\Phi$ on $T$ are $\big(x_{\ell},\frac{1}{2c_1}-x_{\ell}\big)$ and $\big(x_{r},\frac{1}{2c_1}-x_{r}\big).$ A direct computation shows that $\Phi\big(x_{\ell},\frac{1}{2c_1}-x_{\ell}\big)<\Phi\big(x_{r},\frac{1}{2c_1}-x_{r}\big),$ and thus
${\displaystyle{\max_{(x,y,z)\in\mathcal R}F_1(x,y,z)}}=\Phi\big(x_{r},\frac{1}{2c_1}-x_{r}\big).$ Hence, it is now easy to verify that the inequality ${\displaystyle{\max_{(x,y,z)\in\mathcal R}F_1(x,y,z)}}\le x_r$ is satisfied when $x_r\ge \frac{1}{4c_1},$ the latter being among the assumptions in $(H4).$\\
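The last inequality is a one-line computation: at the candidate maximum point we have $x_r+B=\frac{1}{2c_1}$ with $B=\frac{1}{2c_1}-x_r,$ so that
\begin{align*}
\Phi\Big(x_{r},\frac{1}{2c_1}-x_{r}\Big)=\frac{2x_r+\big(\frac{1}{2c_1}-x_r\big)-c_1\big(\frac{1}{2c_1}\big)^2}{2}=\frac{x_r+\frac{1}{4c_1}}{2}\,,
\end{align*}
and $\frac{x_r+\frac{1}{4c_1}}{2}\le x_r$ is equivalent to $x_r\ge\frac{1}{4c_1}.$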
The analysis above also suggests that the two candidates for the minimum point of $\Phi$ on $T$ are $(x_{\ell},y_{\ell}+z_{\ell})$ and $(x_{\ell},y_r+z_r).$ Straightforward calculations show that, if $x_{\ell}\le \frac{1}{2c_1}\left(1-c_1(y_{\ell}+y_{r}+z_{\ell}+z_{r})\right),$ then
$\Phi(x_{\ell},y_{\ell}+z_{\ell})\le \Phi(x_{\ell},y_r+z_r).$ Hence, again by $(H4),$
${\displaystyle{\min_{(x,y,z)\in\mathcal R}F_1(x,y,z)}}=\Phi\big(x_{\ell},y_{\ell}+z_{\ell}\big).$ The inequality ${\displaystyle{\min_{(x,y,z)\in\mathcal R}F_1(x,y,z)}}\ge x_{\ell}$ is thus satisfied when $\sqrt{\frac{y_{\ell}+z_{\ell}}{c_1}}-(y_{\ell}+z_{\ell})\ge x_{\ell},$ which is among the conditions in $(H4).$\\
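Similarly, the straightforward calculation behind the last condition reads:
\begin{align*}
\Phi(x_{\ell},y_{\ell}+z_{\ell})\ge x_{\ell}
\;&\Longleftrightarrow\; \frac{2x_{\ell}+(y_{\ell}+z_{\ell})-c_1(x_{\ell}+y_{\ell}+z_{\ell})^2}{2}\ge x_{\ell}\\
\;&\Longleftrightarrow\; c_1(x_{\ell}+y_{\ell}+z_{\ell})^2\le y_{\ell}+z_{\ell}
\;\Longleftrightarrow\; x_{\ell}\le\sqrt{\frac{y_{\ell}+z_{\ell}}{c_1}}-(y_{\ell}+z_{\ell}).
\end{align*}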
This concludes the verification of $(C4).$\\
Let us finally turn to $(C5).$ In order to check it, we have to show that
\begin{equation}\label{mm}
{\displaystyle{\max_{(x,y,z)\in\mathcal R}F_2(x,y,z)}}\le y_r \qquad \mbox{and} \qquad {\displaystyle{\min_{(x,y,z)\in\mathcal R}F_2(x,y,z)}}\ge y_{\ell}\,.
\end{equation}
Instead of $F_2|_{\mathcal R},$ setting $D:=x+z,$ we deal with the one-dimensional function
$$\psi:[x_{\ell}+z_{\ell},x_r+z_{r}]\to\mathbb R,\quad \psi(D):=\sqrt{\frac{D}{c_2}}-D,$$
whose derivative is $\psi^{\,'}(D)=\frac{1}{2\sqrt{c_2 D}}-1.$ It vanishes at $\overline D=\frac{1}{4c_2},$ which by $(H5)$ is smaller than $x_{\ell}+z_{\ell}.$ Thus
${\displaystyle{\max_{(x,y,z)\in\mathcal R}F_2(x,y,z)}}=\psi(x_{\ell}+z_{\ell})$ and ${\displaystyle{\min_{(x,y,z)\in\mathcal R}F_2(x,y,z)}}=\psi(x_{r}+z_{r}).$ Hence, the first condition in \eqref{mm} is satisfied if $\psi(x_{\ell}+z_{\ell})\le y_r$ and the second condition
is fulfilled if $\psi(x_{r}+z_{r})\ge y_{\ell}.$ It is easy to see that both inequalities are fulfilled thanks to $(H5)$ and this concludes the verification of $(C5).$\\
The proof is complete.
\end{proof}
\section{Conclusions}\label{sec-a3}
In this paper we have recalled what the SAP method consists in and we have applied this topological technique to rigorously prove the existence of chaotic sets for the triopoly game model in \cite{NaTr->}. By ``chaotic sets'' we mean invariant domains on which the map describing the system under consideration is semiconjugate to the Bernoulli shift (implying the features in Remark \ref{cons}) and on which periodic points are dense.
However, we stress that we have said nothing about the attractivity of those chaotic sets. In fact, in general, the SAP method does not allow one to draw any conclusion in that direction.
For instance, when performing numeric simulations for the parameter values in \eqref{par},
no attractor appears on the computer screen. The same issue emerged with the bidimensional models
considered in \cite{MePiZa-09}. The fact that the chaotic set is repulsive can be a good sign for the overlapping generations model therein, for
which we studied a backward-moving system, since the forward-moving one was defined only implicitly and could not be inverted. Indeed, as argued in \cite{MePiZa-09}, a repulsive chaotic set for the backward-moving system may be transformed
into an attractive one for a related forward-moving system through Inverse Limit Theory (ILT).
In general, however, one deals only with a forward-moving dynamical system and this kind of argument cannot be employed.
For instance, both in the duopoly game model in \cite{MePiZa-09} and in the triopoly game model analyzed in the present paper, we are able to
prove the presence of chaos for the same parameter values considered in the literature, except for a slightly larger speed of adjustment $\alpha.$ It makes economic sense that complex dynamics arise when firms are more reactive, but unfortunately for such parameter values no chaotic attractors can be found
via numerical simulations.\\
What we want to stress is that this is not a limitation of the SAP method: the issue is instead related to the possibility of performing computations by hand. To see the point, let us consider the well-known case of the logistic map $f:[0,1]\to\mathbb R,$ $f(x)=\mu x(1-x),$ with $\mu>0.$
As observed in \cite{MePiZa-09}, if we want to show the presence of chaos for it via the SAP method by looking at the first iterate, then we need $\mu>4.$ In this case, however, the interval $[0,1]$ is not mapped into itself and, for almost all initial points in $[0,1],$ the forward iterates tend to $-\infty.$
If we consider instead the second iterate, then the SAP method may be applied for values less than $4,$ for which chaotic attractors do exist. Figure 6 shows a possible choice for the compact sets $\mathcal K_0$ and $\mathcal K_1$ (denoted in the picture by $I_0$ and $I_1,$ since they are intervals) for the stretching relation to be satisfied when $\mu\sim3.88.$
\begin{figure}
\caption{The graph of the second iterate of the logistic map with $\mu\sim3.88.$}
\label{fig-6}
\end{figure}
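The stretching property for the second iterate can also be checked numerically. The following minimal Python sketch uses an illustrative choice of intervals (not necessarily the $I_0,I_1$ of Figure 6): on each interval $f^2$ is monotone and its image covers a reference interval containing both, which is the one-dimensional stretching condition.

```python
# Numerical sanity check of the stretching condition for the second iterate
# of the logistic map f(x) = mu*x*(1-x) at mu = 3.88.  The intervals I0, I1
# are an illustrative choice: f^2 is monotone on each and maps each of them
# across the reference interval R = [0.15, 0.85] containing both.
mu = 3.88

def f(x):
    return mu * x * (1.0 - x)

def f2(x):
    return f(f(x))

R = (0.15, 0.85)   # reference interval containing I0 and I1
I0 = (0.16, 0.48)  # f^2 decreases across R on I0
I1 = (0.52, 0.84)  # f^2 increases across R on I1

def covers(interval, target, n=2000):
    # sample f^2 on a grid; by continuity its image contains [min, max]
    a, b = interval
    vals = [f2(a + (b - a) * k / n) for k in range(n + 1)]
    return min(vals) <= target[0] and max(vals) >= target[1]

for I in (I0, I1):
    assert R[0] <= I[0] and I[1] <= R[1]  # I0, I1 sit inside R
    assert covers(I, R)                   # f^2(I) stretches across R
print("stretching check passed for both intervals")
```

Running the script confirms that both intervals are stretched across $R$ by $f^2$, in agreement with the graphical picture.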
\noindent
This simple example suggests that working with higher iterates may make it possible to reconcile the conditions needed to employ the SAP method with those required to find chaotic attractors via numerical simulations.\\
A possible direction for future work is then the study of economically interesting but sufficiently simple models, for which it is
feasible to deal with higher iterates, in an attempt to rigorously prove the presence of chaos via the SAP technique for parameter values at which computer simulations also indicate the same kind of behavior.\\
Still in regard to chaotic attractors, we have observed that the SAP method works well for models presenting H\'enon-like attractors, due to the presence of a double folding, in turn related to the geometry required to apply our technique. On the other hand, a preliminary analysis seems to suggest that the SAP method is not easily applicable to models presenting a Neimark-Sacker bifurcation leading to chaos. A more detailed investigation of such kind of framework will be pursued, as well.\\
A further possible direction of future study is the analysis of continuous-time economic models with our technique, possibly in the context of
LTMs, for systems switching between two different regimes, such as gross complements and gross substitutes.
\noindent
\textbf{Acknowledgements}.
Many thanks to Dr. Naimzada, Prof. Pini and Prof. Zanolin for useful discussions during the preparation of the paper.
\end{document}
\begin{document}
\title{A necessary condition in a De Giorgi type conjecture for elliptic systems
in infinite strips}
\begin{dedication}
Dedicated to Ha\"im Brezis on his seventy-fifth birthday\\ with esteem
\end{dedication}
\begin{abstract}
Given a bounded Lipschitz domain \(\omega\subset\mathbb{R}^{d-1}\) and a lower semicontinuous function \(W:\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}\) that vanishes on a finite set and that is bounded from below by a positive constant at infinity, we show that every map \(u:\mathbb{R}\times\omega\to\mathbb{R}^N\) with
$$\int_{\mathbb{R}\times\omega} \big(\abs{\nabla u}^2+W(u)\big)\mathop{}\mathopen{}\mathrm{d} x_1\mathop{}\mathopen{}\mathrm{d} x'<+\infty$$ has a limit \(u^\pm\in\{W=0\}\) as \(x_1\to\pm\infty\). The convergence holds in \(L^2(\omega)\) and almost everywhere in \(\omega\). We also prove a similar result for more general potentials $W$ in the case where the considered maps $u$ are divergence-free in $\Omega$ with $\omega$ being the $(d-1)$-torus and $N=d$.
\noindent {\bf Keywords.} Nonlinear elliptic PDEs; De Giorgi conjecture; Energy estimates; Geodesic distance.
\end{abstract}
\section{Introduction}
Let $N\geq 1$, $d\geq 2$ and $\Omega=\mathbb{R}\times \omega$ be an infinite cylinder in $\mathbb{R}^d$, where $\omega\subset \mathbb{R}^{d-1}$ is an open connected bounded set with Lipschitz boundary. For a lower semicontinuous potential $W:\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$,
we consider the functional
\begin{equation}\label{EE}
E(u)= \int_\Omega \Big( |\nabla u|^2 + W(u) \Big) \mathop{}\mathopen{}\mathrm{d} x, \quad u\in\dot{H}^1(\Omega,\mathbb{R}^N),
\end{equation}
where $|\cdot|$ is the Euclidean norm
and
\begin{equation*}
\dot{H}^1(\Omega,\mathbb{R}^N)=\left\{u\in H^1_{loc}(\Omega,\mathbb{R}^N)\;:\;\nabla u=(\partial_j u_i)_{1\leq i\leq N, 1\leq j \leq d}\in L^2(\Omega,\mathbb{R}^{N\times d})\right\}.
\end{equation*}
A natural problem consists in studying optimal transition layers for the functional $E$ between two wells $u^\pm$ of $W$ (i.e., $W(u^\pm)=0$). In particular, motivated by the De Giorgi conjecture, one aim is to analyse under which conditions on the potential $W$ and on the dimensions $d$ and $N$, every minimizer $u$ of $E$ connecting $u^\pm$ as $x_1\to \pm \infty$ is one-dimensional, i.e., depending only on $x_1$. Obviously, such one-dimensional transition layers $u$ coincide with
their $x'$-average $\overline{u}:\mathbb{R}\to \mathbb{R}^N$ defined as
\begin{equation}\label{average}
\overline{u}(x_1):=\int_{\omega} \hspace{-0.4cm} - \, \, u(x_1,x')\mathop{}\mathopen{}\mathrm{d} x', \quad x_1\in\mathbb{R},
\end{equation}
where $x'=(x_2,\dots,x_d)$ denotes the $d-1$ variables in $\omega$ and the $x'$-average symbol is denoted by $\displaystyle \int_{\omega} \hspace{-0.4cm} - \, \,=\frac1{|\omega|} \int_{\omega}$.
\subsection{Main results}
The purpose of this note is to prove a necessary condition for finite energy configurations $u$ provided that $W$ satisfies the following two conditions:
\begin{description}
\item[\bf (H1)]
$W$ has a finite number of wells, i.e., $\textrm{card}(\{z\in \mathbb{R}^N\, :\, W(z)=0\})<\infty$;
\item[\bf (H2)]
$\liminf\limits_{|z|\to\infty} W(z)>0$.
\end{description}
More precisely, we prove that under these assumptions, there exist two wells $u^\pm$ of $W$ such that $u(x_1, \cdot)$ converges to $u^\pm$ in $L^2$ and a.e. in $\omega$ as $x_1\to \pm\infty$; in particular, the $x'$-average $\overline{u}$ (as a continuous map in $\mathbb{R}$) admits the limits
$\overline{u}(\pm \infty)=u^\pm$ as $x_1\to \pm\infty$. Here, $u(x_1,\cdot)$ stands for the trace of the Sobolev map $u\in\dot{H}^1(\Omega,\mathbb{R}^N)$ on the section $\{x_1\}\times \omega$ for every $x_1\in \mathbb{R}$.
\begin{theorem}\label{thm1}
Let $\Omega=\mathbb{R}\times \omega$, where $\omega\subset \mathbb{R}^{d-1}$ is an open connected bounded set with Lipschitz boundary.
If $W:\mathbb{R}^N\to \mathbb{R}_+\cup\{+\infty\}$ is a lower semicontinuous potential satisfying {\bf (H1)} and {\bf (H2)}, then every $u \in\dot{H}^1(\Omega,\mathbb{R}^N)$ with $E(u)<\infty$ connects two wells\footnote{$u^-$ and $u^+$ could be equal.} $u^\pm\in\mathbb{R}^N$ of $W$ at $x_1=\pm \infty$ (i.e., $W(u^\pm)=0$) in the sense that
\begin{equation}\label{BC}
\lim_{x_1\to \pm \infty}\|u(x_1,\cdot)-u^\pm\|_{L^2(\omega,\mathbb{R}^N)}=0\quad\text{and}\quad
\lim_{x_1\to \pm \infty} u(x_1, \cdot)=u^\pm\quad\text{a.e. in } \omega.
\end{equation}
In particular,
$$
\lim\limits_{x_1\to \pm \infty} \int_{\omega} \hspace{-0.4cm} - \, \, u(x_1, x')\, \mathop{}\mathopen{}\mathrm{d} x'=u^\pm.
$$
\end{theorem}
\begin{remark}
i) As a consequence of the Poincar\'e-Wirtinger inequality\footnote{The assumption that $\omega$ is connected with Lipschitz boundary is needed for the Poincar\'e-Wirtinger inequality.}, for $u\in \dot{H}^1(\Omega, \mathbb{R}^N)$ with $\bar u(\pm \infty)=u^\pm$, there exist two sequences $(R_n^+)_{n\in\mathbb{N}}$ and $(R_n^-)_{n\in\mathbb{N}}$ such that $(R_n^\pm)_{n\in\mathbb{N}}\to\pm\infty$ and
\begin{equation}
\label{poincare_wirtinger}
\| u(R_n^\pm,\cdot)- u^\pm\|_{H^1(\omega,\mathbb{R}^N)} \underset{n\to\infty}{\longrightarrow}0
\end{equation}
(see \cite[Lemma~3.2]{Ignat-Monteil}).
ii) Theorem \ref{thm1} also holds true if $\omega$ is a closed (i.e., compact, connected without boundary) Riemannian manifold.
iii) Theorem \ref{thm1} also applies for maps $u$ taking values into a closed set $\mathcal{N}\subset \mathbb{R}^N$ (e.g., $\mathcal{N}$ could be a compact manifold embedded in $\mathbb{R}^N$). More precisely, if the potential \(W:\mathbb{R}^N\to \mathbb{R}_+\cup\{+\infty\}\) satisfies {\bf (H1)}, {\bf (H2)}
and \(\mathcal{N}:=\{z\in \mathbb{R}^N\, :\, W(z)<+\infty\}\) is a closed set such that \(W_{\vert \mathcal{N}}:\mathcal{N}\to\mathbb{R}_+\) is lower semicontinuous, then Theorem \ref{thm1} handles the case where the nonlinear constraint \(u\in\mathcal{N}\) is present.
\end{remark}
The result in Theorem \ref{thm1} extends to slightly more general potentials $W$ in the following context of divergence-free maps. For that, let $d=N$ and $\Omega=\mathbb{R}\times \omega$ with $\omega=\mathbb{T}^{d-1}$ and $\mathbb{T}=\mathbb{R}/\mathbb{Z}$ being the flat torus.
We consider maps $u\in H^1_{loc}(\Omega, \mathbb{R}^d)$ periodic in $x'\in \omega$ and divergence-free, i.e.,
$$\nabla\cdot u=0 \quad \textrm{in} \quad \Omega.$$
Then the $x'$-average $\bar u:\mathbb{R}\to \mathbb{R}^d$ is continuous and its first component is constant, i.e., there is $a\in \mathbb{R}$ such that
$$\bar u_1(x_1)=a \quad \textrm{for every} \quad x_1\in \mathbb{R}$$
(see \cite[Lemma 3.1]{Ignat-Monteil}). For such maps $u$, we consider potentials $W$ satisfying the following two conditions:
\begin{description}
\item[$\textrm{\bf (H1)}_a$]
$W(a, \cdot)$ has a finite number of wells, i.e., $\textrm{card}(\{z'\in \mathbb{R}^{d-1}\, :\, W(a,z')=0\})<\infty$;
\item[$\textrm{\bf (H2)}_a$]
$\liminf\limits_{z_1\to a, \, |z'|\to\infty} W(z_1, z')>0$.
\end{description}
In this context, we have proved in our previous paper \cite{Ignat-Monteil} that the $x'$-average map $\bar u$ admits limits $u^\pm$ as $x_1\to \pm \infty$, where $u^\pm_1=a$ and they are two wells of $W(a, \cdot)$, see \cite[Lemma 3.7]{Ignat-Monteil}. As in Theorem \ref{thm1}, we will prove that $u(x_1, \cdot)$ converges to $u^\pm$ in $L^2$ and a.e. in $\omega$ as $x_1\to \pm\infty$.
\begin{theorem}\label{thm2}
Let $\Omega=\mathbb{R}\times \omega$ with $\omega=\mathbb{T}^{d-1}$ the $(d-1)$-dimensional torus and $u\in H^1_{loc}(\Omega, \mathbb{R}^d)$ such that $E(u)<\infty$ and $\bar u_1=a$ in $\mathbb{R}$ for some $a\in \mathbb{R}$.
If $W:\mathbb{R}^d\to \mathbb{R}_+\cup\{+\infty\}$ is a lower semicontinuous potential satisfying $\textrm{\bf (H1)}_a$ and
$\textrm{\bf (H2)}_a$, then there exist two wells $u^\pm\in\mathbb{R}^d$ of $W$ such that \eqref{BC}
holds true and $u^\pm_1=a$. In particular, $\bar u(\pm\infty)=u^\pm$.
\end{theorem}
Note that we do not assume in Theorem \ref{thm2} that $u$ is divergence-free; we only assume that
$\bar u_1$ is constant.
\subsection{Motivation}
Our main result is motivated by the well-known De Giorgi conjecture that consists in investigating
the one-dimensional symmetry of critical points of the functional $E$, i.e., solutions $u:\Omega\to\mathbb{R}^N$ to the nonlinear elliptic system
\begin{equation}
\label{sys}
\begin{cases}
\Delta u=\frac12 \nabla W(u) \quad & \text{in} \quad \Omega,\\
\frac{\partial u}{\partial \nu}=0 \quad & \text{on} \quad \partial\Omega=\mathbb{R}\times \partial \omega,
\end{cases}
\end{equation}
where $W$ is assumed to be locally Lipschitz in \eqref{sys} and $\nu$ is the unit outer normal vector field at $\partial \omega$.
Theorem \ref{thm1} states in particular that solutions $u$ of finite energy satisfy the boundary condition \eqref{BC} for two wells $u^\pm$ of $W$.
A natural question related to the De Giorgi conjecture arises in this context:
\noindent {\bf Question}: Under which assumptions on the potential $W$ and the dimensions $d$ and $N$,
is it true that every global minimizer $u$ of $E$ connecting two wells\footnote{We say that $u$ connects two wells $u^\pm$ of $W$ if \eqref{BC} is satisfied.} of $W$ is one-dimensional, i.e., $u=u(x_1)$?
\noindent {\it Link with the Gibbons and De Giorgi conjectures.} i) In the scalar case $N=1$ ($d$ is arbitrary) and $W(u)=\frac12(1-u^2)^2$, the answer to the above question is positive provided that the limits \eqref{BC} are replaced by uniform convergence (see \cite{Carbou,F1}); within these uniform boundary conditions, the problem is called the Gibbons conjecture. We mention that many articles have been written on Gibbons' conjecture in the case of the entire space $\Omega=\mathbb{R}^d$: more precisely, if a solution\footnote{Here, $u$ need not be a global minimizer of $E$ within the boundary condition \eqref{BC}, nor monotone in $x_1$, i.e., $\partial_1 u>0$. Obviously, this result applies also to global minimizers, as $|u|\le 1$ in $\mathbb{R}^d$ by the maximum principle.} $u:\mathbb{R}^d\to \mathbb{R}$ of the PDE
\begin{equation}
\label{scalarPDE}
\Delta u=\frac12\frac{dW}{du}(u) \quad \textrm{ in }\quad \mathbb{R}^d
\end{equation}
satisfies the convergence $\lim_{x_1\to\pm\infty}u(x_1,x')=\pm 1$ uniformly in $x'\in \mathbb{R}^{d-1}$ and \(|u|\le 1\) in $\mathbb{R}^d$,
then $u$ is one-dimensional (see \cite{BBG,BHM,CafCor,F2}).
Let us now speak about the long standing De Giorgi conjecture in the scalar case $N=1$. It predicts that any bounded solution $u$ of \eqref{scalarPDE} that is monotone in the $x_1$ variable is one-dimensional in dimension $d\leq 8$, i.e.,
the level sets $\{u=\lambda\}$ of $u$ are hyperplanes. The conjecture has been solved in dimension $d=2$ by Ghoussoub-Gui \cite{Ghoussoub:1998}, using a Liouville-type theorem and monotonicity formulas. Using similar techniques, Ambrosio-Cabr\'e \cite{AmbrosioCabre:2000} extended these results to dimension $d=3$, while Ghoussoub-Gui \cite{ghoussoub2003giorgi} showed that the conjecture is true for $d=4$ and $d=5$ under some antisymmetry condition on $u$. The conjecture was finally proved by Savin \cite{Savin:2009} in dimension $d\leq 8$ under the additional condition $\lim_{x_1\to\pm\infty}u(x_1,x')=\pm 1$ pointwise in $x'\in\mathbb{R}^{d-1}$, the proof being based on fine regularity results on the level sets of $u$. Lately, Del Pino-Kowalczyk-Wei \cite{del2011giorgi} gave a counterexample to the De Giorgi conjecture in dimension $d\geq 9$, which satisfies the pointwise limit conditions $\lim_{x_1\to\pm\infty}u(x_1,x')=\pm1$ for a.e. \(x'\in\mathbb{R}^{d-1}\). It would be interesting to investigate whether these results transfer (or not) to the context of the strip $\Omega=\mathbb{R}\times \omega$ as stated in Question. Theorem \ref{thm1} proves that the pointwise convergence as $x_1\to \pm \infty$ is a necessary condition in the context of a strip $\mathbb{R}\times \omega$ and for finite energy configurations.
\noindent ii) Less results are available for the vector-valued case $N\geq 2$. In the case $\Omega=\mathbb{R}^d$, $N=2$ and
$W(u_1, u_2)=\frac12(u_1^2-1)^2+\frac12(u_2^2-1)^2+{\Lambda}u_1^2 u_2^2-\frac12$ with $\Lambda\geq 1$
(so $W\geq 0$ and $W$ has exactly four wells $\{(0, \pm 1), (\pm 1, 0)\}$, thus, {\rm \bf (H1)} and {\rm \bf (H2)} are satisfied), the Gibbons and De Giorgi conjectures corresponding to the system \eqref{sys} are discussed in \cite{FSS}. Several other phase separation models (e.g., arising in a binary mixture of Bose-Einstein condensates) are studied in the vectorial case where $W$ has a non-discrete set of zeros (see e.g., \cite{berestycki2013phase, berestycki2013entire, fazly2013giorgi}).
We recall that in the study of the De Giorgi conjecture for \eqref{scalarPDE}, i.e., $N=1$, there is a link between monotonicity of solutions (e.g., the condition $\partial_1 u>0$), stability (i.e., the second variation of the corresponding energy at $u$ is nonnegative), and local minimality of $u$ (in the sense that the energy does not decrease under compactly supported perturbations of $u$). We refer to \cite[Section~4]{Alberti:2001} for a fine study of these properties. In particular, it is shown that the monotonicity condition in the De Giorgi conjecture implies that $u$ is a local minimizer of the energy (see \cite[Theorem~4.4]{Alberti:2001}). Therefore, it is natural to study Question under the monotonicity condition in $x_1$ (instead of the global minimality condition on $u$).
\noindent {\it Link with micromagnetic models.}
We have studied Question in the context of divergence-free maps $u:\mathbb{R}\times \omega\to \mathbb{R}^N$ where $d=N$ and $\omega=\mathbb{T}^{d-1}$ is the $(d-1)$-dimensional torus, see \cite{Ignat-Monteil}. By developing a theory of calibrations, we have succeeded in giving sufficient conditions on the potential $W$ in order that the answer to Question is positive, in particular in the case where $\textrm{\bf (H1)}_a$ and $\textrm{\bf (H2)}_a$ are satisfied, see \cite[Theorem 2.11]{Ignat-Monteil}. In that context, Question is related to some reduced model in micromagnetics in the regime where the so-called stray-field energy is strongly penalized favoring the divergence constraint $\nabla \cdot u=0$ of the magnetization $u$ (the unit-length constraint on $u$ being relaxed in the system). In the theory of micromagnetics, a challenging question concerns the symmetry of domain walls. Indeed, much effort has been devoted lately to identifying on the one hand, the domain walls that have one-dimensional symmetry, such as the so-called symmetric N\'eel and symmetric Bloch walls (see e.g. \cite{DKO, IO, Ignat:2011}), and on the other hand, the domain walls involving microstructures, such as the so-called cross-tie walls (see e.g., \cite{Alouges:2002,Riviere:2001}), the zigzag walls (see e.g., \cite{Ignat:2012, Moser_zigzag}) or the asymmetric N\'eel / Bloch walls
(see e.g. \cite{Doring:2013, DI}). Thus, answering Question would give a general approach to identifying the anisotropy potentials $W$ for which the domain walls are one-dimensional in the elliptic system \eqref{sys}.
\noindent {\it Link with heteroclinic connections.} One-dimensional\footnote{If \(u=u(x_1)\), the Neumann condition \(\frac{\partial u}{\partial \nu}=0\) is automatically satisfied.} solutions \(u=u(x_1)\) of the system \eqref{sys} are called heteroclinic connections. Given two wells \(u^\pm\) of a potential \(W\) satisfying {\rm\bf (H1)} and {\rm\bf (H2)}, it is known that there exists a heteroclinic connection \(\gamma:\mathbb{R}\to\mathbb{R}^N\) obtained by minimizing
\(\int_\mathbb{R} \abs{\frac{d}{dx_1}\gamma}^2+W(\gamma)\, dx_1\) under the condition \(\gamma(\pm\infty)=u^\pm\) (see \cite{MonteilSantambrogio2016,Sourdis:2016,Zuniga:2016}). In the vectorial case \(N\ge 2\), this connection may not be unique in the sense that there could exist two (minimizing) heteroclinic connections \(\gamma_1,\gamma_2\) such that \(\gamma_i(\pm\infty)=u^\pm\) for \(i=1,2\) but \(\gamma_1(\cdot)\) and \(\gamma_2(\cdot-\tau)\) are distinct for every \(\tau\in\mathbb{R}\). If this is the case, at least in dimension \(d=2\) and $\Omega=\mathbb{R}^2$, there also exists a solution \(u\) to \(\Delta u=\frac 12 \nabla W(u)\) which realizes an interpolation between \(\gamma_1\) and \(\gamma_2\) in the following sense (see \cite{Schatzman:2002,Alama:1997,MonteilSantambrogio2017}):
\[
\begin{cases}
u(x_1,x_2)\to u^\pm&\text{as \(x_1\to\pm\infty\) uniformly in \(x_2\),}\\
u(x_1,x_2)\to \gamma_1(x_1)&\text{as \(x_2\to -\infty\) uniformly in \(x_1\),}\\
u(x_1,x_2)\to \gamma_2(x_1)&\text{as \(x_2\to +\infty\) uniformly in \(x_1\).}
\end{cases}
\]
Moreover, this solution is a local minimizer of the energy, i.e., the energy cannot decrease under compactly supported perturbations of $u$. Solutions to the system \(\Delta u=\frac 12 \nabla W(u)\) naturally arise when looking at the local behavior of a transition layer near a point at the interface between two wells \(u^\pm\); solutions satisfying the preceding boundary conditions correspond to the case of an interface point where the 1D connection passes from \(\gamma_1\) to \(\gamma_2\). The existence of such stable entire solutions to the Allen-Cahn system makes a significant difference with the scalar case, i.e., \(N=1\), where only 1D solutions are present by the De Giorgi conjecture.
\section{Pointwise convergence and convergence of the $x'$-average}
\label{sec:2}
In this section we prove that under the assumptions in Theorem \ref{thm1},
the $x'$-average $\overline{u}$ (as a continuous map in $\mathbb{R}$) has limits $\overline{u}(\pm \infty)=u^\pm$ as $x_1\to \pm \infty$ corresponding to two wells of $W$. For that, we will follow the strategy that we developed in our previous paper (see \cite[Section 3.1]{Ignat-Monteil}). The idea consists
in introducing an ``averaged'' potential \(V\) in $\mathbb{R}^N$ with $W\geq V\geq 0$ and $\{V=0\}=\{W=0\}$ (see Lemma \ref{lemmaV}), and a new functional $E_V$ associated to the $x'$-average $\overline{u}$ of a map $u$ such that $\frac1{|\omega|}E(u)\geq E_V(\bar u)$. This can be seen as a dimension reduction technique since the new map $\bar u$ has only one variable. We will prove that every transition layer $\bar u$ connecting two wells $u^\pm$ has the energy $E_V(\bar u)$ bounded from below by the geodesic pseudo-distance $\mathrm{geod}_V$ between the wells $u^\pm$
(see Lemma \ref{regularization_lemma}). As the Euclidean distance in $\mathbb{R}^N$ is absolutely continuous with respect to $\mathrm{geod}_V$ (see Lemma \ref{cWgeqGeod}), we will conclude that $\bar u$ admits limits at $\pm \infty$ given by two wells of $W$ (see Lemma \ref{closed_boundary}). Note that in Section \ref{sec3}, we will give a second proof of the claim $\bar u(\pm\infty)=u^\pm$ without using the geodesic pseudo-distance $\mathrm{geod}_V$.
We first introduce the energy functional $E$ (defined in \eqref{EE}) restricted to appropriate subsets $A\subset\Omega$ (e.g., $A$ can be a subset of the form $I\times \omega$ for an interval $I\subset \mathbb{R}$, or a section $\{x_1\}\times \omega$): for every map $u\in \dot{H}^1(A,\mathbb{R}^N)$, we set
\[
E(u,A):=
\int_A |\nabla u|^2 + W(u) \mathop{}\mathopen{}\mathrm{d} x,
\]
so that $E(u)=E(u,\Omega)$. For any interval $I\subset\mathbb{R}$, the Jensen inequality yields
\[
E(u,I\times \omega)= \int_{I}\int_{\omega}\left( |\partial_{1}u|^2+|\nabla' u|^2+W(u)\right)\mathop{}\mathopen{}\mathrm{d} x'\mathop{}\mathopen{}\mathrm{d} x_1\geq
|\omega| \int_I \left(\Big|\frac{\mathop{}\mathopen{}\mathrm{d}}{\mathop{}\mathopen{}\mathrm{d} x_1}\overline{u}(x_1)\Big|^2+e(u(x_1,\cdot))\right)\mathop{}\mathopen{}\mathrm{d} x_1,
\]
where $\nabla'=(\partial_2, \dots, \partial_d)$, $\bar u$ is the $x'$-average of $u$ given in \eqref{average} and the $x'$-average energy $e$ is defined by
$$e(v):= \int_{\omega} \hspace{-0.4cm} - \, \,\left( |\nabla' v|^2+W(v)\right)\mathop{}\mathopen{}\mathrm{d} x' \quad \textrm{ for all }\, v\in H^1(\omega,\mathbb{R}^N).$$
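The Jensen step above (the $x'$-mean of $|\partial_1 u|^2$ dominating $|\frac{\mathrm{d}}{\mathrm{d}x_1}\overline{u}|^2$ pointwise in $x_1$) can be checked numerically. The following Python sketch is only a sanity check on a hypothetical smooth test field $u$ on $(-1,1)\times(0,1)$, not part of the proof; since averaging and differencing commute, the inequality is exactly the statement that a variance is non-negative.

```python
import numpy as np

# Sanity check (not part of the proof) of the Jensen step: at each x1,
# the x'-mean of |d1 u|^2 dominates |d/dx1 ubar|^2.  The field u below is a
# hypothetical smooth test function on (-1,1) x (0,1).
x1 = np.linspace(-1.0, 1.0, 201)
xp = np.linspace(0.0, 1.0, 101)
u = np.sin(np.outer(x1, np.ones_like(xp))) + 0.3 * np.cos(np.outer(x1, 5.0 * xp))

du1 = np.gradient(u, x1, axis=0)   # d1 u on the grid
ubar = u.mean(axis=1)              # x'-average ubar(x1)
dubar = np.gradient(ubar, x1)      # d/dx1 ubar

lhs = (du1 ** 2).mean(axis=1)      # x'-mean of |d1 u|^2
rhs = dubar ** 2                   # |d/dx1 ubar|^2
assert np.all(lhs >= rhs - 1e-12)  # Jensen's inequality, pointwise in x1
```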
Introducing the averaged potential $V:\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$ defined for all $z\in\mathbb{R}^N$ by
\begin{equation}
\label{defV}
V(z):=\inf\left\{ e(v)\;:\; v\in H^1(\omega,\mathbb{R}^N),\, \int_{\omega} \hspace{-0.4cm} - \, \, v\mathop{}\mathopen{}\mathrm{d} x'=z \right\}\geq 0,
\end{equation}
we have
\begin{equation}
\label{inegEV}
E(u,I\times \omega)\geq |\omega| \int_I \left(\Big|\frac{\mathop{}\mathopen{}\mathrm{d}}{\mathop{}\mathopen{}\mathrm{d} x_1}\overline{u}(x_1)\Big|^2+V(\overline{u}(x_1))\right)\mathop{}\mathopen{}\mathrm{d} x_1.
\end{equation}
This observation is the starting point in the proof of the following lemma:
\begin{lemma}\label{lemmaV}
Let $W:\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$ be a lower semicontinuous function satisfying {\rm \bf(H2)}. Then
the averaged potential $V:\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$ defined in \eqref{defV} satisfies the following:
\begin{enumerate}
\item
$V$ is lower semicontinuous in $\mathbb{R}^N$,
\item
for all $z\in\mathbb{R}^N$, $V(z)\leq W(z)$, the infimum in \eqref{defV} is achieved and\footnote{In particular, if $W$ satisfies {\rm \bf(H1)}, then $V$ satisfies {\rm \bf(H1)}, too.}
$\Big[V(z)=0\Leftrightarrow W(z)=0\Big]$,
\item
$V_\infty:=\liminf\limits_{|z|\to\infty}V(z)>0$,
\item
for every interval $I\subset\mathbb{R}$ and for every $u\in \dot{H}^1(I\times\omega,\mathbb{R}^N)$, one has
\[
\frac1{|\omega|}E(u,I\times \omega)\geq E_V(\overline{u},I), \quad E_V(\overline{u},I) :=\int_I \Big|\frac{\mathop{}\mathopen{}\mathrm{d}}{\mathop{}\mathopen{}\mathrm{d} x_1}\overline{u}(x_1)\Big|^2+V(\overline{u}(x_1))\mathop{}\mathopen{}\mathrm{d} x_1.
\]
\end{enumerate}
\end{lemma}
The new energy $E_V(\bar u):=E_V(\overline{u}, \mathbb{R})$ associated to the $x'$-average $\bar u$ will play an important role for proving the existence of the two limits $\bar u(\pm \infty)$.
\begin{proof}[Proof of Lemma \ref{lemmaV}] The claim {\it 4} follows from \eqref{inegEV}. We divide the rest of the proof into three steps.
\noindent\textsc{Step 1: proof of claim {\it 2}.} Clearly, for all $z\in\mathbb{R}^N$, one has $V(z)\leq e(z)=W(z)$. By the compact embedding $H^1(\omega)\hookrightarrow L^1(\omega)$, the lower semicontinuity of $W$, Fatou's lemma and the lower semicontinuity of the $L^2$ norm in the weak $L^2$-topology (see \cite{Brezis}), we deduce that $e$ is lower semicontinuous in the weak $H^1(\omega,\mathbb{R}^N)$-topology. Then the direct method in the calculus of variations implies that the infimum is achieved in \eqref{defV} (infimum that could be equal to $+\infty$ as $W$ can take the value $+\infty$).
If $W(z)=0$, then $V(z)=0$ (as $0\leq V\leq W$ in $\mathbb{R}^N$).
Conversely, if $V(z)=0$ with $z\in\mathbb{R}^N$, then a minimizer $v\in H^1(\omega,\mathbb{R}^N)$ in \eqref{defV} satisfies $V(z)=e(v)=0$ so that $v\equiv z$ and $W(z)=0$.
\noindent\textsc{Step 2: $V$ is lower semicontinuous in $\mathbb{R}^N$.} Let $(z_n)_{n\in\mathbb{N}}$ be a sequence converging to $z$ in $\mathbb{R}^N$. We need to show that
\[
V(z)\leq\liminf_{n\to\infty}V(z_n).
\]
Without loss of generality, one can assume that $(V(z_n))_{n\in\mathbb{N}}$ is a bounded sequence that converges to $\liminf_{n\to\infty} V(z_n)$. By Step 1, for each $n\in\mathbb{N}$, there exists $v_n\in H^1(\omega,\mathbb{R}^N)$ such that
\[
\int_{\omega} \hspace{-0.4cm} - \, \, v_n \mathop{}\mathopen{}\mathrm{d} x'=z_n\quad\text{and}\quad e(v_n)=V(z_n).
\]
Since $(z_n)_{n\in\mathbb{N}}$ and $(e(v_n))_{n\in\mathbb{N}}$ are bounded, we deduce that $(v_n)_{n\in\mathbb{N}}$ is bounded in $H^1(\omega,\mathbb{R}^N)$ by the Poincar\'e-Wirtinger inequality. Thus, up to extraction, one can assume that $(v_n)_{n\in\mathbb{N}}$ converges weakly in $H^1$, strongly in $L^1$ and a.e. in $\omega$ to a limit
$v\in H^1(\omega,\mathbb{R}^N)$. In particular, $ \int_{\omega} \hspace{-0.45cm} - \, \, v \mathop{}\mathopen{}\mathrm{d} x'=z$. Since $e$ is lower semicontinuous in the weak $H^1(\omega,\mathbb{R}^N)$-topology (by Step 1), we conclude
\[
V(z)\leq e(v)\leq \liminf\limits_{n\to\infty} e(v_n)= \liminf\limits_{n\to\infty} V(z_n).
\]
\noindent\textsc{Step 3: proof of claim {\it 3}.} Assume by contradiction that there exists a sequence $(z_n)_{n\in\mathbb{N}}\subset\mathbb{R}^N$ such that $|z_n|\to\infty$ and $V(z_n)\to 0$ as $n\to\infty$. Then, there exists a sequence of maps $(w_n)_{n\in\mathbb{N}}$ in $H^1(\omega,\mathbb{R}^N)$ satisfying
\[
\int_{\omega}w_n(x')\mathop{}\mathopen{}\mathrm{d} x'=0\quad\text{for each $n\in\mathbb{N}$}\quad\text{and}\quad e(z_n+w_n)\underset{n\to\infty}{\longrightarrow} 0.
\]
By the Poincar\'e-Wirtinger inequality, we have that $(w_n)_{n\in\mathbb{N}}$ is bounded in $H^1$. Thus, up to extraction, one can assume that it converges weakly in $H^1$, strongly in $L^1$ and a.e. to a map $w\in H^1(\omega,\mathbb{R}^N)$. We claim that $w$ is constant since
$$
\int_{\omega} \hspace{-0.4cm} - \, \, | \nabla' w|^2\mathop{}\mathopen{}\mathrm{d} x'\le \liminf\limits_{n\to\infty} \int_{\omega} \hspace{-0.4cm} - \, \, | \nabla' w_n|^2\mathop{}\mathopen{}\mathrm{d} x'\le \liminf\limits_{n\to\infty}e(z_n+w_n)=0.
$$
We deduce $w\equiv 0$ since $\int_{\omega} w=\lim_{n\to\infty}\int_{\omega} w_n=0$. Thus $w_n\to 0$ a.e.\ and
{\rm \bf{(H2)}} implies that for a.e.\ $x'\in \omega$,
$$\liminf_{n\to\infty}W(z_n+w_n(x'))\geq \liminf\limits_{|z|\to\infty} W(z)>0,$$
which contradicts the fact that $e(z_n+w_n)\to 0$.
\end{proof}
For every lower semicontinuous function $W:\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$ satisfying {\rm \bf{(H1)}} and {\rm \bf{(H2)}}, we introduce the geodesic pseudo-distance $\mathrm{geod}_W$ in $\mathbb{R}^N$ endowed with the singular pseudo-metric $4Wg_0$, $g_0$ being the standard Euclidean metric in $\mathbb{R}^N$; this geodesic pseudo-distance (that can take the value $+\infty$) is defined for every $ x,y\in\mathbb{R}^N$ by
\begin{multline}
\label{estgeod}
\mathrm{geod}_W(x,y):=\inf\bigg\{\int_{-1}^{1} 2\sqrt{W(\sigma(t))}|\dot{\sigma}|(t)\mathop{}\mathopen{}\mathrm{d} t\;:\; \sigma\in\mathrm{Lip}_{ploc}([-1,1],\mathbb{R}^N),\, \sigma(-1)=x,\,\sigma(1)=y\bigg\},
\end{multline}
where $\mathrm{Lip}_{ploc}([-1,1],\mathbb{R}^N)$ is the set of continuous and {\bf piecewise locally Lipschitz} curves
\footnote{In general, we cannot hope that a minimizing sequence in \eqref{estgeod} is better than piecewise locally Lipschitz because $W$ is not assumed locally bounded ($\dot \sigma$ is the derivative of $\sigma$). However, in the case of a locally bounded $W$, we could use a regularization procedure in order to restrict to Lipschitz curves $\sigma$.}
on $[-1,1]$:
\begin{align*}
\mathrm{Lip}_{ploc}([-1,1],\mathbb{R}^N):=\Big\{ \sigma\in \mathcal{C}^0([-1,1],\mathbb{R}^N)\;:\; & \textrm{there is a partition } -1= t_1<\dots< t_{k+1}= 1,\\
& \text{with } \sigma\in \mathrm{Lip}_{loc}((t_i,t_{i+1})) \, \textrm{ for every } 1\leq i\leq k\Big\}.
\end{align*}
By {\it pseudo-distance}, we mean that $\mathrm{geod}_W$ satisfies all the axioms of a distance; the only difference with respect to the standard definition is that a pseudo-distance can take the value $+\infty$. We will prove that $\mathrm{geod}_W$ yields a lower bound for the energy $E$ (see Lemma \ref{regularization_lemma}); this plays an important role in the proof of our claim $\overline{u}(\pm \infty)=u^\pm$.
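For a concrete feel for $\mathrm{geod}_W$, consider the hypothetical scalar double-well $W(z)=\frac14(1-z^2)^2$ with wells $\pm1$ (an illustration, not a potential from this paper). In one dimension the integrand $2\sqrt{W(\sigma)}|\dot\sigma|$ is invariant under monotone reparametrization, so the straight path $\sigma(t)=t$ is optimal and $\mathrm{geod}_W(-1,1)=\int_{-1}^1(1-t^2)\,\mathrm{d}t=4/3$. The Python sketch below confirms this by quadrature.

```python
import numpy as np

# Illustration only: scalar double-well W(z) = (1 - z^2)^2 / 4, wells at +-1.
W = lambda z: 0.25 * (1.0 - z ** 2) ** 2

# In 1D the geodesic integrand 2*sqrt(W(sigma))|sigma'| is invariant under
# monotone reparametrization, so the straight path sigma(t) = t is optimal.
t = np.linspace(-1.0, 1.0, 100001)
f = 2.0 * np.sqrt(W(t))                            # equals 1 - t^2 on [-1, 1]
geod = np.sum((f[1:] + f[:-1]) / 2 * np.diff(t))   # trapezoid rule

assert abs(geod - 4.0 / 3.0) < 1e-6                # geod_W(-1, 1) = 4/3
```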
We start by proving some elementary facts about the pseudo-metric structure induced by $\mathrm{geod}_W$ on $\mathbb{R}^N$:
\begin{lemma}\label{cWgeqGeod}
Let $W:\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$ be a lower semicontinuous function satisfying {\rm \bf (H1)} and {\rm \bf(H2)}. Then the function $\mathrm{geod}_W:\mathbb{R}^N\times\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$ defines a pseudo-distance over $\mathbb{R}^N$ and the Euclidean distance is absolutely continuous with respect to
$\mathrm{geod}_W$, i.e., for every $\delta>0$, there exists $\varepsilon>0$ such that for every $x,y\in \mathbb{R}^N$ with $\mathrm{geod}_W(x,y)< \varepsilon$, we have $|x-y|< \delta$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{cWgeqGeod}] In proving that $\mathrm{geod}_W:\mathbb{R}^N\times\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$ defines a pseudo-distance over $\mathbb{R}^N$, the only non-trivial axiom to check is the non-degeneracy, i.e., $\mathrm{geod}_W(x,y)>0$ whenever $x\neq y$. In fact, we prove the stronger property that for every $\delta>0$, there exists $\varepsilon>0$ such that for every $x,y\in \mathbb{R}^N$, $|x-y|\ge\delta$ implies $\mathrm{geod}_W(x,y)\ge\varepsilon$ which also yields the absolute continuity of the Euclidean distance with respect to
$\mathrm{geod}_W$. For that, we recall that the set $\{W=0\}$ is finite (by {\rm \bf(H1)}); therefore, w.l.o.g. we can assume that $\delta>0$ is small enough so that the open balls $B(p,\delta/2)$, for $p\in\{W=0\}$, are disjoint. We consider the following disjoint union of balls
\[
\Sigma_\delta:=\bigsqcup_{p\in\{W=0\}} B(p, \frac\delta 4),
\]
the distance between each ball being larger than $\delta/2$. We now take two points $x,y\in\mathbb{R}^N$ with $|x-y|\ge \delta$. In order to obtain a lower bound on $\mathrm{geod}_W(x,y)$, we take an arbitrary continuous and piecewise locally Lipschitz curve $\sigma:[-1,1]\to\mathbb{R}^N$ such that $\sigma(-1)=x$ and $\sigma(1)=y$. As $\abs{x-y}\ge\delta$ (so no ball in $\Sigma_\delta$ can contain both $x$ and $y$), by connectedness, the image $\sigma([-1,1])$ cannot be contained in $\Sigma_\delta$. Thus, there exists $t_0\in [-1,1]$ with $\sigma(t_0)\notin \Sigma_\delta$. This implies that $B(\sigma(t_0),\delta/8) \cap \Sigma_{\delta/2}=\emptyset$. Moreover, since $\abs{x-y}\ge\delta$, we have either $\abs{\sigma(t_0)-x}\ge \delta/2$ or $\abs{\sigma(t_0)-y}\ge \delta/2$; w.l.o.g., we may assume that $\abs{\sigma(t_0)-y}\ge \delta/2$. Then the (continuous) curve $\sigma\big|_{[t_0,1]}$ has to get out of the ball $B(\sigma(t_0),\delta/8)$; in particular, it has length larger than $\delta/8$ and
\[
\int_{-1}^1 2\sqrt{W(\sigma(t))}|\dot{\sigma}|(t)\mathop{}\mathopen{}\mathrm{d} t\ge\frac\delta{4} \inf_{z\in B(\sigma(t_0),\delta/8)}\sqrt{W(z)}\ge \frac\delta {4} \inf_{z\in \mathbb{R}^N\setminus\Sigma_{\delta/2}}\sqrt{W(z)}.
\]
Since $W$ is lower semicontinuous and bounded from below at infinity (by {\rm\bf (H2)}), we deduce that $W$ is bounded from below by a constant $c_\delta>0$ on $\mathbb{R}^N\setminus\Sigma_{\delta/2}$. Taking the infimum over curves $\sigma\in \mathrm{Lip}_{ploc}([-1,1],\mathbb{R}^N)$ connecting $x$ to $y$, we deduce from the preceding lower bound that
\[
\mathrm{geod}_W(x,y)\ge \frac{\delta\sqrt{c_\delta}}{4}>0.
\]
This finishes the proof of the result.
\end{proof}
We now use a regularization argument to derive the following lower bound on the energy:
\begin{lemma}
\label{regularization_lemma}
Let $W:\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$ be a lower semicontinuous function. Then, for every interval $I\subset \mathbb{R}$ and every map $\sigma\in \dot{H}^1(I,\mathbb{R}^N)$ having limits $\sigma(\inf I)$ and $\sigma(\sup I)$ at the endpoints of $I$, we have
\begin{equation}
\label{ineg_ener}
E_W(\sigma, I):=\int_I \Big(\abs{\dot\sigma(t)}^2+W(\sigma(t))\Big)\mathop{}\mathopen{}\mathrm{d} t\ge \mathrm{geod}_W\big(\sigma(\inf I),\sigma(\sup I)\big).
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{regularization_lemma}]
W.l.o.g. we assume that $I$ is an open interval. Since $\dot{H}^1(I,\mathbb{R}^N)\subset W^{1,1}_{loc}(I, \mathbb{R}^N)$, we can define the arc-length $s:I\to J:=s(I)\subset\mathbb{R}$ by
\[
s(t):=\int_{t_0}^t|\dot{\sigma}|(x_1)\mathop{}\mathopen{}\mathrm{d} x_1,\quad t\in I,
\]
where $t_0\in I$ is fixed. Thus $s$ is a nondecreasing continuous function with $\dot{s}=|\dot{\sigma}|$ a.e. in $I$. Then the arc-length reparametrization of $\sigma$, i.e.
\[
\tilde{\sigma}(s(t)):=\sigma(t),\quad t\in I,
\]
is well-defined and provides a Lipschitz curve $\tilde{\sigma}:J\to\mathbb{R}^N$ with constant speed on the interval $J$, i.e. $|\dot{\tilde\sigma}|=1$ a.e., and such that $\tilde{\sigma}(\inf J)={\sigma}(\inf I)$ and $\tilde{\sigma}(\sup J)={\sigma}(\sup I)$.
W.l.o.g. we may assume that $\sigma$ is not constant, so $J$ has a nonempty interior. Then we consider an arbitrary function $\varphi\in\mathrm{Lip}_{loc}((-1,1),\mathrm{int}J)$ which is nondecreasing and surjective onto the interior of the interval $J$ and we set
\[
\gamma(t):=\tilde{\sigma}(\varphi(t)),\quad t\in (-1,1).
\]
So $\gamma$ is a locally Lipschitz map that is continuous on $[-1,1]$ as $\tilde{\sigma}$ admits limits at $\inf J$ and
$\sup J$; thus, $\gamma\in \mathrm{Lip}_{ploc}([-1,1],\mathbb{R}^N)$. The changes of variable $s:=\varphi(t)$, resp. $s:=s(t)$, yield
\[
\int_{-1}^1 2\sqrt{W(\gamma(t))}|\dot{\gamma}|(t)\mathop{}\mathopen{}\mathrm{d} t=\int_J 2\sqrt{W(\tilde{\sigma}(s))}|\dot{\tilde{\sigma}}|(s)\mathop{}\mathopen{}\mathrm{d} s=\int_I 2\sqrt{W(\sigma(t))}\, |\dot{\sigma}|(t)\mathop{}\mathopen{}\mathrm{d} t.
\]
Combined with $\gamma(-1)=\sigma(\inf I)$ and $\gamma(1)=\sigma(\sup I)$, the definition of $\mathrm{geod}_W$ and the Young inequality imply
\[
E_W(\sigma, I)\ge \int_I 2\sqrt{W(\sigma(t))}\, |\dot{\sigma}|(t)\mathop{}\mathopen{}\mathrm{d} t=\int_{-1}^1 2\sqrt{W(\gamma(t))}|\dot{\gamma}|(t)\mathop{}\mathopen{}\mathrm{d} t\ge \mathrm{geod}_W\big(\sigma(\inf I),\sigma(\sup I)\big).
\]
This completes the proof.
\end{proof}
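The Young inequality used in the last display is saturated exactly when $|\dot\sigma|^2=W(\sigma)$ (equipartition of energy). For the illustrative scalar double-well $W(z)=\frac14(1-z^2)^2$ (a hypothetical example, as above), the heteroclinic $\sigma(t)=\tanh(t/2)$ satisfies $\dot\sigma=\sqrt{W(\sigma)}$, so $E_W(\sigma,\mathbb{R})$ should equal $\mathrm{geod}_W(-1,1)=4/3$; the sketch below checks this numerically.

```python
import numpy as np

# Equality case in E_W(sigma) >= geod_W: Young's inequality a^2 + b^2 >= 2ab
# is saturated when |sigma'|^2 = W(sigma).  For the illustrative double-well
# W(z) = (1 - z^2)^2 / 4 the profile sigma(t) = tanh(t/2) does exactly this.
t = np.linspace(-40.0, 40.0, 400001)
sigma = np.tanh(t / 2.0)
sdot = 0.5 / np.cosh(t / 2.0) ** 2        # sigma' = sqrt(W(sigma)) here
W = 0.25 * (1.0 - sigma ** 2) ** 2

integrand = sdot ** 2 + W                 # the energy density of E_W
E = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))

assert abs(E - 4.0 / 3.0) < 1e-6          # matches geod_W(-1, 1) = 4/3
```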
The convergence of the $x'$-average in Theorem \ref{thm1} stating that $\overline{u}(\pm \infty)=u^\pm$ is a consequence of the following lemma:
\begin{lemma}
\label{closed_boundary}
Let $W:\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$ be a lower semicontinuous function satisfying {\rm \bf (H1)} and {\rm \bf(H2)}. Then for every map $\sigma \in \dot{H}^1(\mathbb{R},\mathbb{R}^N)$ such that $E_W(\sigma,\mathbb{R})<+\infty$ with $E_W$ defined at \eqref{ineg_ener}, there exist two wells $u^-,\, u^+\in\{W=0\}$ such that
$
\lim\limits_{t\to\pm\infty}\sigma(t)=u^\pm.
$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{closed_boundary}]
We use the fact that the energy bound $E_W(\sigma,\mathbb{R})<+\infty$ yields a bound on the total variation of $\sigma:\mathbb{R}\to \mathbb{R}^N$ where $\mathbb{R}^N$ is endowed with the pseudo-metric $\mathrm{geod}_W$. More precisely, for every sequence $t_1<\dots< t_k$ in $\mathbb{R}$, we have by Lemma~\ref{regularization_lemma}:
$$
\sum_{i=1}^{k-1} \mathrm{geod}_W(\sigma(t_{i+1}),\sigma(t_i))\le \sum_{i=1}^{k-1} E_W(\sigma,[t_i,t_{i+1}])\le E_W(\sigma,\mathbb{R})<+\infty.
$$
In particular, for every $\varepsilon>0$, there exists $R>0$ such that for all $t,s\in\mathbb{R}$ with $t,s\ge R$ or $t,s\le -R$, one has $\mathrm{geod}_W(\sigma(t),\sigma(s))<\varepsilon$. Since by Lemma~\ref{cWgeqGeod}, smallness of $\mathrm{geod}_W(x,y)$ implies smallness of $|x-y|$, we deduce that $\sigma$ has a limit $u^\pm\in\mathbb{R}^N$ at $\pm\infty$. Since $W(\sigma(\cdot))$ is integrable in $\mathbb{R}$, we have furthermore that $W(u^\pm)=0$.
\end{proof}
Now we can prove the convergence of the $x'$-average $\bar u$ at $\pm \infty$ as stated in Theorem \ref{thm1}:
\begin{proof}[Proof of the convergence in $x'$-average in Theorem \ref{thm1}] By Lemma \ref{lemmaV}, we have $E_V(\overline{u},\mathbb{R})<+\infty$ for the lower semicontinuous function $V:\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$ satisfying {\rm \bf (H1)} and {\rm \bf(H2)}. By Lemma~\ref{closed_boundary} applied to $E_V$, we deduce that there exist $u^\pm\in\{V=0\}=\{W=0\}$ such that $\lim_{t\to\pm\infty}\overline{u}(t)=u^\pm$.
\end{proof}
The pointwise convergence of $u(x_1, \cdot)$ as $x_1\to \pm\infty$ stated in Theorem \ref{thm1} is proved in the following:
\begin{proof}[Proof of the pointwise convergence in Theorem \ref{thm1}] \label{pagina}
We prove that $u(x_1, \cdot)$ converges a.e. in $\omega$ to $u^\pm\in\{W=0\}$ as $x_1\to \pm \infty$, where $u^\pm$ are the limits $\bar u(\pm \infty)$ of the $x'$-average $\bar u$ proved above. For that, we have by Fubini's theorem:
$$E(u)\geq \int_\Omega |\partial_1 u|^2+W(u)\, \mathop{}\mathopen{}\mathrm{d} x\geq \int_{\omega} E_W(u(\cdot, x'), \mathbb{R}) \, \mathop{}\mathopen{}\mathrm{d} x'$$
with the usual notation
$$E_W(\sigma, \mathbb{R})=\int_{\mathbb{R}} \abs{\dot \sigma}^2+W(\sigma)\, \mathop{}\mathopen{}\mathrm{d} x_1,\quad \sigma\in \dot{H}^1(\mathbb{R}, \mathbb{R}^N).$$
As $E(u)<\infty$, we deduce that $E_W(u(\cdot, x'), \mathbb{R})<\infty$ for a.e. $x'\in \omega$.
By Lemma \ref{closed_boundary}, we deduce that for a.e. $x'\in \omega$, there exist two wells $u^\pm(x')$ of $W$
such that
\begin{equation}
\label{point}
\lim\limits_{x_1\to\pm\infty}u(x_1, x')=u^\pm(x').
\end{equation}
By \eqref{poincare_wirtinger}, as $\bar u(\pm \infty)=u^\pm$, we know that $\| u(R_n^\pm,\cdot)- u^\pm\|_{L^2(\omega,\mathbb{R}^N)}\to 0$ as $n\to\infty$ for two sequences $R_n^\pm\to \pm \infty$. Up to a subsequence, we deduce that
$u(R_n^\pm,\cdot)\to u^\pm$ a.e. in $\omega$ as $n\to \infty$. By \eqref{point}, we conclude that
$u^\pm(x')=u^\pm$ for a.e. $x'\in \omega$.
\end{proof}
\section{The $L^2$ convergence}
\label{sec3}
In this section, we prove that $u(x_1, \cdot)$ converges in $L^2(\omega, \mathbb{R}^N)$ to $u^\pm$ as $x_1\to \pm \infty$.
The idea is to go beyond the averaging procedure in Section \ref{sec:2} and keep the full information given by the
$x'$-average energy $e$ introduced in Section \ref{sec:2} over the set ${H}^1(\omega, \mathbb{R}^N)$. More precisely, we extend $e$ to the space $L^2(\omega,\mathbb{R}^N)$ as follows:
\begin{equation}
\label{defe}
e(v)=\begin{cases}
\displaystyle \int_{\omega} \hspace{-0.4cm} - \, \, \Big(|{\nabla'} v|^2+W(v)\Big) \mathop{}\mathopen{}\mathrm{d} x' \quad & \textrm{ if } \, v\in {H}^1(\omega, \mathbb{R}^N),\\
+\infty \quad & \textrm{ if } \, v\in L^2(\omega, \mathbb{R}^N)\setminus {H}^1(\omega, \mathbb{R}^N).
\end{cases}
\end{equation}
In particular, we have for every $u\in \dot{H}^1(\Omega, \mathbb{R}^N)$,
\begin{equation}
\label{supl}
E(u)= \int_{\mathbb{R}} \Big(\|\partial_1 u(x_1,\cdot)\|_{L^2(\omega, \mathbb{R}^N)}^2+|\omega| e(u(x_1,\cdot)) \Big)\mathop{}\mathopen{}\mathrm{d} x_1.
\end{equation}
In the sequel, we will also need the following properties of the energy $e$:
\begin{lemma}\label{lemma_e}
If $W:\mathbb{R}^N\to\mathbb{R}_+\cup\{+\infty\}$ is a lower semicontinuous function satisfying {\rm \bf(H2)}, then
\begin{enumerate}
\item
$e$ is lower semicontinuous in $L^2(\omega, \mathbb{R}^N)$,
\item
the sets of zeros of $e$ and $W$ coincide; moreover $\Sigma:=\{e=0\}=\{W=0\}\subset \mathbb{R}^N$ is compact,
\item
for every \(\varepsilon>0\), we have
\[
k_\varepsilon:=\inf\big\{e(v)\; : \; v\in L^2(\omega,\mathbb{R}^N)\text{ with } d_{L^2}(v,\Sigma)\ge\varepsilon\big\}>0.
\]
\end{enumerate}
\end{lemma}
\begin{proof} We divide the proof into several steps:
\noindent \textsc{Step 1. Lower semicontinuity of $e$ in $L^2(\omega, \mathbb{R}^N)$.} Indeed, let $v_n\to v$ in
$L^2(\omega, \mathbb{R}^N)$. W.l.o.g., we may assume that $(e(v_n))_n$ is bounded, in particular, $(v_n)_n$ is bounded in $H^1(\omega, \mathbb{R}^N)$; thus, $(v_n)_n$ converges to $v$ weakly in $H^1(\omega, \mathbb{R}^N)$. By Step 1 in the proof of Lemma \ref{lemmaV}, we know that $e\big|_{H^1(\omega, \mathbb{R}^N)}$ is lower semicontinuous w.r.t. the weak $H^1$ topology and the conclusion follows.
\noindent \textsc{Step 2. Zeros of $e$.} The equality of the zero sets of $e$ and $W$ is straightforward thanks to the connectedness of $\omega$. Thanks to the assumption
{\rm \bf(H2)}, the set of zeros $\Sigma$ of $W$ is bounded, and it is closed by the lower semicontinuity and non-negativity of $W$; thus, $\Sigma$ is compact in $\mathbb{R}^N$.
\noindent \textsc{Step 3. We prove that $k_\eps>0$.} Assume by contradiction that $k_\eps=0$ for some $\eps>0$. Then there exists a minimizing sequence $v_n\in L^2(\omega, \mathbb{R}^N)$ such that \(d_{L^2}(v_n,\Sigma)\ge\varepsilon\) for every \(n\in\mathbb{N}\) and \(\lim_{n\to\infty}e(v_n)=0\). W.l.o.g., we may assume that $v_n\in {H}^1(\omega, \mathbb{R}^N)$ for every $n$; note that $\|v_n\|_{\dot{H}^1(\omega)}\to 0$. Denoting by $\overline{v_n}$ the ($x'$-)average of $v_n$, the Poincar\'e-Wirtinger inequality implies that the sequence $(w_n:=v_n-\overline{v_n})_n$ converges in $H^1(\omega, \mathbb{R}^N)$ to $0$. Up to extracting a subsequence, we may assume that
$w_n\to 0$ for a.e. $x'\in \omega$.
\noindent\emph{Claim:} The sequence $(\overline{v_n})_n$ is bounded in $\mathbb{R}^N$.
\noindent Indeed, assume by contradiction that there exists a subsequence of $(\overline{v_n})_n$ (still denoted by $(\overline{v_n})_n$) such that $|\overline{v_n}|\to \infty$ as $n\to \infty$. As $W$ is l.s.c. and $w_n\to 0$ for a.e. $x'\in \omega$, the assumption ${\rm \bf (H2)}$ implies
$$\liminf_{n\to \infty} W(v_n(x'))=\liminf_{n\to \infty} W(w_n(x')+\overline{v_n})\geq \liminf_{|z|\to \infty} W(z)>0 \quad \textrm{for a.e. $x'\in \omega$}$$
which by integration over $x'\in \omega$ contradicts the assumption $e(v_n)\to 0$. This finishes the proof of the claim.
\noindent As a consequence of the claim, we deduce that \((v_n)_{n\in\mathbb{N}}\) is bounded in \(H^1(\omega,\mathbb{R}^N)\). In particular, \((v_n)_{n\in\mathbb{N}}\) has a subsequence that converges in \(L^2(\omega,\mathbb{R}^N)\) to a map \(v\in H^1(\omega,\mathbb{R}^N)\) and we deduce \(d_{L^2}(v,\Sigma)\ge\varepsilon\), in particular, $v$ is not a zero of $e$, i.e., $e(v)>0$. As $e$ is l.s.c. in
$L^2(\omega, \mathbb{R}^N)$, we have $0=\lim_{n\to\infty}e(v_n)\geq e(v)$, which contradicts that $e(v)>0$.
\end{proof}
Now we prove the $L^2$-convergence of $u(x_1, \cdot)$ to $u^\pm$ as $x_1\to \pm \infty$:
\begin{proof}[Proof of the $L^2$-convergence in Theorem \ref{thm1}]
Take \(u\in H^1_{loc}(\Omega,\mathbb{R}^N)\) such that \(E(u)<+\infty\) and set \(\sigma(t):=u(t, \cdot)\in H^1(\omega,\mathbb{R}^N)\) for a.e. \(t\in\mathbb{R}\). We prove that \(\sigma(t)\) converges in $L^2(\omega,\mathbb{R}^N)$ to a limit belonging to $\Sigma$ as \(t\to+\infty\) (the proof of the convergence as \(t\to-\infty\) is similar). Moreover, we will see that these limits are in fact the zeros $u^\pm$ of $W$ given by the $x'$-average $\bar u$ and the a.e. convergence of $u(x_1, \cdot)$ as $x_1\to \pm \infty$.
\medbreak
\noindent \textsc{Step 1: Continuity.} We prove that \(t\in \mathbb{R}\mapsto\sigma(t)\in L^2(\omega,\mathbb{R}^N)\) is continuous in \(\mathbb{R}\), and moreover, it is a \(\varphirac12\)-H\"older map. Indeed, for a.e. \(t,s\in\mathbb{R}\), we have
\[
d_{L^2}(\sigma(t),\sigma(s))^2=\int_\omega\Big|\int_t^s\partial_{x_1}u(x_1,x')\mathop{}\mathopen{}\mathrm{d} x_1\Big|^2\mathop{}\mathopen{}\mathrm{d} x'\le\abs{t-s}\|\partial_{x_1}u\|_{L^2(\Omega, \mathbb{R}^N)}^2.
\]
\medbreak
\noindent \textsc{Step 2: Convergence of a subsequence $(\sigma(t_n))_n$ to some \(u^+\in\Sigma\).} Since \(e(\sigma(\cdot))\in L^1(\mathbb{R})\) by \eqref{supl}, there is a sequence \((t_n)_{n\in\mathbb{N}}\to +\infty\) such that \(\lim_{n\to\infty}e(\sigma(t_n))=0\). Exactly like in Step~3 in the proof of Lemma \ref{lemma_e}, we deduce that \((\sigma(t_n))_{n\in\mathbb{N}}\) has a subsequence that converges strongly in \(L^2(\omega,\mathbb{R}^N)\) to some map \(\sigma_\infty\in L^2(\omega,\mathbb{R}^N)\) (the assumption {\rm \bf (H2)} is essential here). Since \(e\) is l.s.c. in \(L^2\) and $e\geq 0$ in $L^2$, we deduce that \(e(\sigma_\infty)=0\) and so, there exists \(u^+\in\Sigma\) such that \(\sigma_\infty\equiv u^+\).
\medbreak
\noindent \textsc{Step 3: Convergence to \(u^+\) in \(L^2\) as \(t\to+\infty\).} Assume by contradiction that
$\sigma(t)$ does not converge in \(L^2(\omega,\mathbb{R}^N)\) to $u^+$ as $t\to \infty$. Then there is a sequence \((s_n)_{n\in\mathbb{N}}\to +\infty\) such that \(\varepsilon:=\inf_{n\in\mathbb{N}}d_{L^2}(\sigma(s_n),u^+)>0\). Now, by Step 1, the curve \(t\in [s_n,+\infty)\mapsto\sigma(t)\in L^2(\omega,\mathbb{R}^N)\) is continuous. Moreover, $\sigma(s_n)$ does not belong to the \(L^2\)-ball centered at \(u^+\) with radius \(\frac{3\varepsilon}{4}\). By Step~2, it has to enter (at some time $t>s_n$) the \(L^2\)-ball centered at \(u^+\) with radius \(\frac \varepsilon 4\). Therefore, the curve \(\sigma\big|_{(s_n,+\infty)}\) has to cross the ring ${\cal R}:=B_{L^2}(u^+, \frac{3\eps}4)\setminus B_{L^2}(u^+, \frac{\eps}4)$, so it has $L^2$-length larger than $\frac \eps 2$, i.e.,
\[
\int_{ \{t\in (s_n,+\infty)\, :\, \sigma(t)\in {\cal R} \} } \|\partial_{x_1}u(t,\cdot)\|_{L^2(\omega, \mathbb{R}^N)} \, \mathop{}\mathopen{}\mathrm{d} t=
\int_{ \{t\in (s_n,+\infty)\, :\, \sigma(t)\in {\cal R} \} } \|\dot{\sigma}\|_{L^2(\omega, \mathbb{R}^N)}\, \mathop{}\mathopen{}\mathrm{d} t\ge \varphirac \eps 2.
\]
Moreover, by the third claim in Lemma \ref{lemma_e}, we know that $e(\sigma(t))\geq k_{\eps/4}$ if $\sigma(t)\in {\cal R}$ (up to lowering $\varepsilon$, we may assume that the other zeros of $\Sigma$ are placed at distance larger than $2\eps$ from $u^+$; the assumption {\rm \bf (H1)} is essential here). We obtain
\begin{align}
\label{antrad}
\int_{s_n}^{+\infty} \sqrt{e(u(t,\cdot))}\;\|\partial_{x_1}u(t,\cdot)\|_{L^2(\omega, \mathbb{R}^N)}\, \mathop{}\mathopen{}\mathrm{d} t&\ge
\int_{ \{t\in (s_n,+\infty)\, :\, \sigma(t)\in {\cal R} \} } \sqrt{e(u(t,\cdot))}\;\|\partial_{x_1}u(t,\cdot)\|_{L^2(\omega, \mathbb{R}^N)}\, \mathop{}\mathopen{}\mathrm{d} t\\
\nonumber
&\geq \frac{\varepsilon}{2} \sqrt{k_{\varepsilon/4}}.
\end{align}
This is a contradiction with the assumption \(E(u)<+\infty\) implying by \eqref{supl}:
\begin{align*}
2|\omega|^{\frac12} \int_{s_n}^{+\infty} \sqrt{e(u(t,\cdot))}\;\|\partial_{x_1}u(t,\cdot)\|_{L^2(\omega, \mathbb{R}^N)}\mathop{}\mathopen{}\mathrm{d} t
&\le \int_{s_n}^{+\infty} \Big(|\omega| e(u(t,\cdot)) + \|\partial_{x_1}u(t,\cdot)\|_{L^2(\omega, \mathbb{R}^N)}^2\Big)\mathop{}\mathopen{}\mathrm{d} t\\
&\underset{n\to\infty}{\longrightarrow}0.
\end{align*}
\medbreak
\noindent \textsc{Step 4: The \(L^2\) limits $u^\pm$ coincide with the average limits $\bar u(\pm\infty)$.} This is clear as \(L^2\) convergence implies convergence in average.
\end{proof}
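The $\frac12$-H\"older estimate of Step 1 above is nothing but the Cauchy-Schwarz inequality applied to $\int_t^s\partial_{x_1}u$. The following Python sketch checks the resulting bound $d_{L^2}(\sigma(t),\sigma(s))^2\le |t-s|\,\|\partial_{x_1}u\|^2_{L^2(\Omega)}$ on a discretized, hypothetical smooth field (with $\omega=(0,1)$; illustration only).

```python
import numpy as np

# Check of the 1/2-Hoelder bound d(sigma(t),sigma(s))^2 <= |t-s| * ||d1 u||^2
# on a grid; u is a hypothetical smooth field on (-2,2) x (0,1).
x1 = np.linspace(-2.0, 2.0, 401)
xp = np.linspace(0.0, 1.0, 101)
h1 = x1[1] - x1[0]
hp = xp[1] - xp[0]
u = np.tanh(np.outer(x1, np.ones_like(xp))) + 0.1 * np.sin(np.outer(x1, 3.0 * xp))

du1 = np.gradient(u, x1, axis=0)
norm_du1_sq = np.sum(du1 ** 2) * h1 * hp   # ||d1 u||^2 over the whole cylinder

i, j = 50, 350                             # the slices t = x1[i], s = x1[j]
dist_sq = np.sum((u[i] - u[j]) ** 2) * hp  # d_{L^2}(sigma(t), sigma(s))^2
assert dist_sq <= abs(x1[i] - x1[j]) * norm_du1_sq
```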
\begin{remark}
i) The above proof does not use (and is thus independent of) the almost everywhere convergence of $u(x_1, \cdot)$ as $x_1\to \pm \infty$ or the convergence of the $x'$-average $\bar u$. Therefore, one can obtain from this proof, as a direct consequence, the convergence of the $x'$-average $\bar u$ as well as the almost everywhere convergence of $u(x_1, \cdot)$ as $x_1\to \pm \infty$.\footnote{As the $L^2$-convergence implies almost everywhere convergence of $u(x_1, \cdot)$ only up to a subsequence, one should repeat the argument in the proof of the a.e. convergence in Theorem \ref{thm1} at page \pageref{pagina}.}
ii) The above argument also applies to Lemma \ref{closed_boundary}, leading to a second method that does not use the geodesic distance $\mathrm{geod}_W$.
iii) Hidden behind the above proof is the notion of a geodesic distance over $L^2(\omega, \mathbb{R}^N)$ with the degenerate weight $\sqrt{e}$ (see \eqref{antrad}). Therefore, one could repeat the arguments in the first proof of Theorem \ref{thm1} based on this geodesic distance.
\end{remark}
The above argument can also be used directly to obtain a second proof for the existence of limits of $\bar u$ at $\pm \infty$ without using the
geodesic pseudo-distance $\mathrm{geod}_W$ (as presented in the proof in Section \ref{sec:2}). For completeness, we redo the proof in the sequel:
\begin{proof}[Second proof of the convergence in $x'$-average in Theorem \ref{thm1}]
Let $u\in \dot{H}^1(\Omega, \mathbb{R}^N)$ such that $E(u)<\infty$. We want to prove that the $x'$-average $\bar u$ admits a limit $u^+$ as $x_1\to \infty$ and $W(u^+)=0$ (the proof of the convergence as \(x_1\to-\infty\) is similar). Let $V$ and $E_V$ be given by Lemma \ref{lemmaV}. Recall that $\Sigma:=\{V=0\}=\{W=0\}$ and $E_V(\bar u)\leq \frac1{|\omega|} E(u)<\infty$.
\noindent \textsc{Step 1. We prove that for every $\varepsilon>0$,}
\[
\kappa_\varepsilon:=\inf\big\{V(z)\; : \; z\in \mathbb{R}^N, \, d_{\mathbb{R}^N}(z,\Sigma)\ge\varepsilon\big\}>0.
\]
Assume by contradiction that there exists a sequence $(z_n)_n$ such that $V(z_n)\to 0$ and $d_{\mathbb{R}^N}(z_n,\Sigma)\geq \varepsilon$ for every $n$.
By the third claim in Lemma \ref{lemmaV}, we deduce that $(z_n)_n$ is bounded, so that, up to a subsequence, $z_n\to z$ for some $z\in \mathbb{R}^N$ yielding
$d_{\mathbb{R}^N}(z,\Sigma)\ge\varepsilon$ and $V(z)=0$, i.e., $z\in \Sigma$ (since $V$ is l.s.c. and $V\geq 0$), which is a contradiction.
\noindent \textsc{Step 2. There exists a sequence $(\bar u(t_n))_n$ converging to a well $u^+\in \Sigma$}. Indeed, as $V(\bar u)\in L^1(\mathbb{R})$, there exists a sequence $t_n\to \infty$ with $V( \bar u(t_n))\to 0$. By {\rm \bf (H2)}, $(\bar u(t_n))_n$ is bounded, so that up to a subsequence, $\bar u (t_n)\to u^+$ as $n\to \infty$ for some point $u^+\in \mathbb{R}^N$. As $V$ is l.s.c. and $V\geq 0$, we deduce that $V( u^+)=0$, i.e., $u^+\in \Sigma$.
\medbreak
\noindent \textsc{Step 3: Convergence of $\bar u$ to \(u^+\) as \(x_1\to+\infty\).} Assume by contradiction that
$\bar u(x_1)$ does not converge to $u^+$ as $x_1\to \infty$. Then there is a sequence \((s_n)_{n\in\mathbb{N}}\to +\infty\) such that \(\varepsilon:=\inf_{n\in\mathbb{N}}d_{\mathbb{R}^N}(\bar u(s_n),u^+)>0\). As $\bar u:[s_n,+\infty)\to\mathbb{R}^N$ is continuous, by Step 2, it has to get out of the ball $B(\bar u(s_n), \varepsilon/4)$ and to enter the ball $B(u^+, \varepsilon/4)$. Therefore, $\bar u$ has to cross the ring ${\cal R}:=B(u^+, \frac{3\eps}4)\setminus B(u^+, \frac{\eps}4)\subset \mathbb{R}^N$. Moreover, by Step 1, we know that $V(\bar u(x_1))\geq \kappa_{\eps/4}$ if $\bar u(x_1)\in {\cal R}$ (where we assumed w.l.o.g. that $\eps>0$ is small enough so that the other zeros of $\Sigma$ are placed at distance larger than $2\eps$ from $u^+$). We obtain
$$
\int_{s_n}^{+\infty} \sqrt{V(\bar u(x_1))}\;\big|\frac{\mathop{}\mathopen{}\mathrm{d}}{\mathop{}\mathopen{}\mathrm{d} x_1}\overline{u}(x_1)\big|\, \mathop{}\mathopen{}\mathrm{d} x_1\ge
\int_{ \{x_1\in (s_n,+\infty)\, :\, \bar u(x_1)\in {\cal R} \} } \sqrt{V(\bar u(x_1))}\; \big|\frac{\mathop{}\mathopen{}\mathrm{d}}{\mathop{}\mathopen{}\mathrm{d} x_1}\overline{u}(x_1)\big|\, \mathop{}\mathopen{}\mathrm{d} x_1\geq \frac{\varepsilon}{2} \sqrt{\kappa_{\varepsilon/4}}.
$$
This is a contradiction with the assumption \(E_V(\bar u)<+\infty\) implying
$$
2\int_{s_n}^{+\infty} \sqrt{V(\bar u(x_1))}\;\big|\frac{\mathop{}\mathopen{}\mathrm{d}}{\mathop{}\mathopen{}\mathrm{d} x_1}\overline{u}(x_1)\big|\mathop{}\mathopen{}\mathrm{d} x_1
\le \int_{s_n}^{+\infty} \Big(\big|\frac{\mathop{}\mathopen{}\mathrm{d}}{\mathop{}\mathopen{}\mathrm{d} x_1}\overline{u}(x_1)\big|^2+V(\bar u (x_1))\Big)\mathop{}\mathopen{}\mathrm{d} x_1
\underset{n\to\infty}{\longrightarrow}0.
$$
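Both displayed estimates rest on the elementary Young inequality $2\sqrt{ab}\le a+b$, applied pointwise; spelled out (a routine step left implicit in the text):

```latex
% Young's inequality with a = |du/dx_1|^2 and b = V(\bar u), pointwise in x_1:
2\sqrt{V(\bar u(x_1))}\,\Big|\frac{\mathrm{d}}{\mathrm{d} x_1}\bar u(x_1)\Big|
\;\le\;
\Big|\frac{\mathrm{d}}{\mathrm{d} x_1}\bar u(x_1)\Big|^2 + V(\bar u(x_1))
\qquad \text{for a.e. } x_1\in\mathbb{R},
```

so the weighted length of $\bar u$ over $(s_n,+\infty)$ is controlled by the energy tail, which vanishes as $s_n\to+\infty$.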
\end{proof}
\section{Proof of Theorem \ref{thm2}}
\label{sec4}
In this section, we consider $d=N$, $\Omega=\mathbb{R}\times \omega$ with $\omega=\mathbb{T}^{d-1}$ and $u\in H^1_{loc}(\Omega, \mathbb{R}^d)$ periodic in $x'\in \omega$ with $\bar u_1=a$ in $\mathbb{R}$ for some constant $a\in \mathbb{R}$ (recall that $\bar u$ is the $x'$-average of $u$). Note that $|\omega|=1$. We set
$$L^2_a(\omega, \mathbb{R}^d):=\left\{v=(v_1, \dots, v_d)\in L^2(\omega, \mathbb{R}^d)\, :\, \int_{\omega} v_1\, dx'=a\right\}$$ and
$H^1_a(\omega, \mathbb{R}^d):=H^1\cap L^2_a(\omega, \mathbb{R}^d)$. Note that for a.e. $x_1\in \mathbb{R}$, $u(x_1, \cdot)\in
H^1_a(\omega, \mathbb{R}^d)$.
We define the following energy $e_a$ on the convex closed subset $L^2_a(\omega,\mathbb{R}^d)$ of $L^2(\omega,\mathbb{R}^d)$:
\begin{equation}
\label{defea}
e_a(v)=\begin{cases}
\displaystyle \int_{\omega} \Big(|{\nabla'} v|^2+W(v)\Big) \mathop{}\mathopen{}\mathrm{d} x' \quad & \textrm{ if } \, v\in {H}^1_a(\omega, \mathbb{R}^d),\\
+\infty \quad & \textrm{ if } \, v\in L^2_a(\omega, \mathbb{R}^d)\setminus {H}^1(\omega, \mathbb{R}^d).
\end{cases}
\end{equation}
In particular, we have for every $u\in \dot{H}^1(\Omega, \mathbb{R}^d)$ with $\bar u_1=a$:
\begin{equation}
\label{supla}
E(u)= \int_{\mathbb{R}} \Big(\|\partial_1 u(x_1,\cdot)\|_{L^2(\omega, \mathbb{R}^d)}^2+ e_a(u(x_1,\cdot)) \Big)\mathop{}\mathopen{}\mathrm{d} x_1.
\end{equation}
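The identity \eqref{supla} is just Fubini's theorem applied to the definition of $E$, after splitting the gradient as $|\nabla u|^2=|\partial_1 u|^2+|\nabla' u|^2$; schematically (assuming, as noted above, that $u(x_1,\cdot)\in H^1_a(\omega,\mathbb{R}^d)$ for a.e. $x_1$):

```latex
E(u)
=\int_{\mathbb{R}}\int_{\omega}\Big(|\partial_1 u|^2+|\nabla' u|^2+W(u)\Big)\,\mathrm{d} x'\,\mathrm{d} x_1
=\int_{\mathbb{R}}\Big(\|\partial_1 u(x_1,\cdot)\|_{L^2(\omega,\mathbb{R}^d)}^2
  +\underbrace{\int_{\omega}\big(|\nabla' u(x_1,\cdot)|^2+W(u(x_1,\cdot))\big)\,\mathrm{d} x'}_{=\,e_a(u(x_1,\cdot))}\Big)\,\mathrm{d} x_1.
```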
The aim is to adapt the proof of Theorem \ref{thm1} given in Section \ref{sec3} to Theorem \ref{thm2}. We start by transferring the properties
of the energy $e$ in Lemma \ref{lemma_e} to the energy $e_a$ defined on $L^2_a(\omega, \mathbb{R}^d)$. More precisely,
if $W:\mathbb{R}^d\to\mathbb{R}_+\cup\{+\infty\}$ is a lower semicontinuous function, then
$e_a$ is lower semicontinuous in $L^2_a(\omega, \mathbb{R}^d)$ endowed with the strong $L^2$-norm and the sets of zeros of $e_a$ and $W(a, \cdot)$ coincide, i.e., $$\Sigma^a:=\{v\in L^2_a(\omega, \mathbb{R}^d)\, :\, e_a(v)=0\}=\{z=(a, z')\in \mathbb{R}^d\, :\, W(a, z')=0\}.$$ If in addition $W$ satisfies $\textrm{\bf (H2)}_a$, then
$\Sigma^a$ is compact in $\mathbb{R}^d$ and
for every \(\varepsilon>0\), we have
\[
k^a_{\varepsilon}:=\inf\big\{e_a(v)\; : \; v\in L^2_a(\omega,\mathbb{R}^d)\text{ with } d_{L^2}(v,\Sigma^a)\ge\varepsilon\big\}>0
\]
(the proof of these properties follows by the same arguments presented in the proof of Lemma \ref{lemma_e}).
\begin{proof}[Proof of Theorem \ref{thm2}]
Let \(u\in H^1_{loc}(\Omega,\mathbb{R}^d)\) such that \(E(u)<+\infty\) and $\bar u_1=a$ in $\mathbb{R}$. We set \(\sigma(t):=u(t, \cdot)\in H^1_a(\omega,\mathbb{R}^d)\) for a.e. \(t\in\mathbb{R}\). We prove that \(\sigma(t)\) converges in $L^2(\omega,\mathbb{R}^d)$ to a limit belonging to $\Sigma^a$ as \(t\to+\infty\) (the proof of the convergence as \(t\to-\infty\) is similar). As in Steps~1 and 2 in the proof of the $L^2$-convergence in Theorem \ref{thm1}, we have that
\(t\in \mathbb{R}\mapsto\sigma(t)\in L^2_a(\omega,\mathbb{R}^d)\) is a \(\frac12\)-H\"older continuous map in $\mathbb{R}$ and there is a sequence \((t_n)_{n\in\mathbb{N}}\to +\infty\) such that \(\sigma(t_n)\to u^+\) in \(L^2(\omega,\mathbb{R}^d)\) for a well \(u^+\in\Sigma^a\) (the assumption
$\textrm{\bf (H2)}_a$ is essential here). In order to prove the convergence of $\sigma(t)$ to \(u^+\) in \(L^2\) as \(t\to+\infty\), we argue by contradiction. If $\sigma(t)$ does not converge in \(L^2(\omega,\mathbb{R}^d)\) to $u^+$ as $t\to \infty$, then there is a sequence \((s_n)_{n\in\mathbb{N}}\to +\infty\) such that \(\varepsilon:=\inf_{n\in\mathbb{N}}d_{L^2}(\sigma(s_n),u^+)>0\). We repeat the argument in
Step 3 in the proof of the $L^2$-convergence in Theorem \ref{thm1} by restricting ourselves to $L^2_a(\omega,\mathbb{R}^d)$ endowed with the strong $L^2$ topology. More precisely, the continuous curve \(t\in [s_n,+\infty)\mapsto\sigma(t)\in L^2_a(\omega,\mathbb{R}^d)\) has to cross the ring ${\cal R}_a:=\big(B_{L^2}(u^+, \frac{3\eps}4)\setminus B_{L^2}(u^+, \frac{\eps}4)\big)\cap L^2_a(\omega,\mathbb{R}^d)$, so it has $L^2$-length larger than $\frac \eps 2$, i.e.,
\[
\int_{ \{t\in (s_n,+\infty)\, :\, \sigma(t)\in {\cal R}_a \} } \|\partial_{x_1}u(t,\cdot)\|_{L^2(\omega, \mathbb{R}^d)} \, \mathop{}\mathopen{}\mathrm{d} t=
\int_{ \{t\in (s_n,+\infty)\, :\, \sigma(t)\in {\cal R}_a \} } \|\dot{\sigma}\|_{L^2(\omega, \mathbb{R}^d)}\, \mathop{}\mathopen{}\mathrm{d} t\ge \frac \eps 2.
\]
As $e_a(\sigma(t))\geq k^a_{\eps/4}$ if $\sigma(t)\in {\cal R}_a$ (up to lowering $\varepsilon$, we may assume that the other points of $\Sigma^a$ lie at distance larger than $2\eps$ from $u^+$; the assumption $\textrm{\bf (H1)}_a$ is essential here), we obtain
\begin{align*}
\int_{ \{t\in (s_n,+\infty)\, :\, \sigma(t)\in {\cal R}_a \} } \sqrt{e_a(u(t,\cdot))}\;\|\partial_{x_1}u(t,\cdot)\|_{L^2(\omega, \mathbb{R}^d)}\, \mathop{}\mathopen{}\mathrm{d} t
\geq \frac{\varepsilon}{2} \sqrt{k^a_{\varepsilon/4}}.
\end{align*}
This is a contradiction with \eqref{supla}:
\begin{align*}
2\int_{s_n}^{+\infty} \sqrt{e_a(u(t,\cdot))}\;\|\partial_{x_1}u(t,\cdot)\|_{L^2(\omega, \mathbb{R}^d)}\mathop{}\mathopen{}\mathrm{d} t
&\le \int_{s_n}^{+\infty} \Big(e_a(u(t,\cdot)) + \|\partial_{x_1}u(t,\cdot)\|_{L^2}^2\Big)\mathop{}\mathopen{}\mathrm{d} t
\underset{n\to\infty}{\longrightarrow}0.
\end{align*}
Clearly, the \(L^2\) convergence also implies the convergence
of the average of $\sigma(t)$ over $\omega$ as $t\to \infty$, as well as the a.e. convergence $\sigma(t)\to u^+$ in $\omega$, but only up to a subsequence. For the full almost everywhere convergence of \(u(x_1,\cdot)\to u^+\), we proceed as follows. First, by the Poincar\'e-Wirtinger inequality on $\omega=\mathbb{T}^{d-1}$, we have for a.e. \(x_1\in \mathbb{R}\),
\[
\int_{\omega}\abs{\nabla' u_1(x_1,x')}^2\mathop{}\mathopen{}\mathrm{d} x'\ge 4\pi^2 \int_{\omega} \abs{u_1(x_1,x')-\bar u_1(x_1)}^2\mathop{}\mathopen{}\mathrm{d} x'=4\pi^2\int_{\omega} \abs{u_1(x_1,x')-a}^2\mathop{}\mathopen{}\mathrm{d} x'.
\]
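The constant $4\pi^2$ is the first nonzero eigenvalue of $-\Delta$ on the unit torus $\mathbb{T}^{d-1}$; a quick sketch via Fourier series, writing $v:=u_1(x_1,\cdot)$ and $\hat v_k$ for its Fourier coefficients (recall $|\omega|=1$, so $\hat v_0=\bar v$):

```latex
v(x')=\sum_{k\in\mathbb{Z}^{d-1}}\hat v_k\, e^{2\pi i k\cdot x'},
\qquad
\int_{\omega}|\nabla' v|^2\,\mathrm{d} x'
=\sum_{k\in\mathbb{Z}^{d-1}}4\pi^2|k|^2\,|\hat v_k|^2
\;\ge\; 4\pi^2\sum_{k\neq 0}|\hat v_k|^2
=4\pi^2\int_{\omega}|v-\bar v|^2\,\mathrm{d} x',
```

where the last equality is Parseval's identity; this is exactly the inequality above.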
By Fubini's theorem, we deduce that
\[
E(u)\ge\int_\Omega\big(\abs{\partial_1 u}^2+\abs{\nabla' u_1}^2+ W(u)\big)\mathop{}\mathopen{}\mathrm{d} x
\ge\int_{{{\mathbb{T}^{d-1}}}}E_{W_a}(u(\cdot,x'),\mathbb{R})\mathop{}\mathopen{}\mathrm{d} x',
\]
where \(W_a(z):=W(z)+4\pi^2\abs{z_1-a}^2\) and, as usual,
$$E_{W_a}(\sigma, \mathbb{R})=\int_{\mathbb{R}} \big(\abs{\dot \sigma}^2+W_a(\sigma)\big) \mathop{}\mathopen{}\mathrm{d} x_1,\quad \sigma\in \dot{H}^1(\mathbb{R}, \mathbb{R}^N).$$
Hence, $E_{W_a}(u(\cdot, x'), \mathbb{R})<\infty$ for a.e. $x'\in \omega$.
Note that \(W_a\) is lower semicontinuous and satisfies assumptions $\textrm{\bf (H1)}$ (the set of zeros of \(W_a\) coincides with
\(\Sigma^a\), which is finite by $\textrm{\bf (H1)}_a$) and the coercivity condition $\textrm{\bf (H2)}$ (thanks to $\textrm{\bf (H2)}_a$). Thus, Lemma
\ref{closed_boundary} implies that for a.e. $x'\in \omega$, there exist two wells $u^\pm(x')$ of $W_a$ such that
\begin{equation}
\label{pointa}
\lim\limits_{x_1\to\pm\infty}u(x_1, x')=u^\pm(x').
\end{equation}
By \eqref{poincare_wirtinger}, as $\bar u(\pm \infty)=u^\pm$, we know that $\| u(R_n^\pm,\cdot)- u^\pm\|_{L^2(\omega,\mathbb{R}^N)}\to 0$ as $n\to\infty$ for two sequences $(R_n^\pm)_{n\in\mathbb{N}}\to \pm \infty$. Up to a subsequence, we deduce that
$u(R_n^\pm,\cdot)\to u^\pm$ a.e. in $\omega$ as $n\to \infty$. By \eqref{pointa}, we conclude that
$u^\pm(x')=u^\pm$ for a.e. $x'\in \omega$.
\end{proof}
\paragraph{Acknowledgment.} R.I. acknowledges partial support by the ANR project ANR-14-CE25-0009-01.
\begin{thebibliography}{10}
\bibitem{Alama:1997}
S. Alama, L. Bronsard and C. Gui. Stationary layered solutions for an Allen--Cahn system with multiple well potential.
{\em Calc. Var. Partial Differential Equations} {\bf 5} (1997), no. 4, 359--390.
\bibitem{Alberti:2001}
G. Alberti, L. Ambrosio and X. Cabr{{\'e}}.
\newblock On a long-standing conjecture of {E}. {D}e {G}iorgi: symmetry in {3D}
for general nonlinearities and a local minimality property.
\newblock {\em Acta Applicandae Mathematica} {\bf 65} (2001) (1-3), 9--33.
\bibitem{Alouges:2002}
F. Alouges, T. Rivi{{\`e}}re and S. Serfaty.
\newblock N{\'e}el and cross-tie wall energies for planar micromagnetic configurations.
\newblock {\em ESAIM Control Optim. Calc. Var.} {\bf 8} (2002), 31--68.
\bibitem{AmbrosioCabre:2000}
L. Ambrosio and X. Cabr{\'e}.
\newblock Entire solutions of semilinear elliptic equations in $\mathbb{R}^3$ and a conjecture of {D}e {G}iorgi.
\newblock {\em J. Amer. Math. Soc.} {\bf 13} (2000) (4), 725--739.
\bibitem{BBG}
M. T. Barlow, R. F. Bass and C. Gui.
\newblock The Liouville property and a conjecture of De Giorgi.
\newblock {\em Comm. Pure Appl. Math.} {\bf 53} (2000) (8), 1007--1038.
\bibitem{BHM}
H. Berestycki, F. Hamel and R. Monneau.
\newblock One-dimensional symmetry of bounded entire solutions of some elliptic equations.
\newblock {\em Duke Math. J.} {\bf 103} (2000), 375--396.
\bibitem{berestycki2013phase}
H. Berestycki, T.-C. Lin, J. Wei and C. Zhao.
\newblock On phase-separation models: asymptotics and qualitative properties.
\newblock {\em Archive for Rational Mechanics and Analysis} {\bf 208} (2013) (1), 163--200.
\bibitem{berestycki2013entire}
H. Berestycki, S. Terracini, K. Wang and J. Wei.
\newblock On entire solutions of an elliptic system modeling phase separations.
\newblock {\em Advances in Mathematics} {\bf 243} (2013), 102--126.
\bibitem{Brezis}
H. Brezis.
\newblock Analyse fonctionnelle. Th\'eorie et applications.
\newblock Masson, Paris, 1983.
\bibitem{brezis-mironescu}
H. Brezis and P. Mironescu.
\newblock Sur une conjecture de E. De Giorgi relative \`a l'\'energie de Ginzburg-Landau.
\newblock {\em C. R. Acad. Sci. Paris Ser. I Math.} {\bf 319} (1994), 167--170.
\bibitem{CafCor}
L. A. Caffarelli and A. C\'ordoba.
\newblock Uniform convergence of a singular perturbation problem.
{\em Comm. Pure Appl. Math.} {\bf 48} (1995), 1--12.
\bibitem{Carbou}
G. Carbou.
\newblock Unicit\'e et minimalit\'e des solutions d'une \'equation de Ginzburg-Landau,
{\em Ann. Inst. H. Poincar\'e, Analyse non lin\'eaire,} {\bf 12} (1995), 305--318.
\bibitem{del2011giorgi}
M. Del~Pino, M. Kowalczyk and J. Wei.
\newblock On {D}e {G}iorgi's conjecture in dimension {$N\geq 9$}.
\newblock {\em Annals of Mathematics} {\bf 174} (2011) (3), 1485--1569.
\bibitem{DKO} A. DeSimone, H. Kn\"upfer and F. Otto.
\newblock $2$-d stability of the N\'eel wall.
\newblock {\em Calc. Var. Partial Differential Equations} {\bf 27} (2006), 233--253.
\bibitem{DI} L. D\"oring and R. Ignat.
\newblock Asymmetric domain walls of small angle in soft ferromagnetic films.
\newblock {\em Arch. Ration. Mech. Anal.} {\bf 220} (2016), 889--936.
\bibitem{Doring:2013}
L. D{\"o}ring, R. Ignat and F. Otto.
\newblock A reduced model for domain walls in soft ferromagnetic films at the cross-over from symmetric to asymmetric wall types.
\newblock {\em J. Eur. Math. Soc. (JEMS)} {\bf 16} (2014) (7), 1377--1422.
\bibitem{F1}
A. Farina.
\newblock Some remarks on a conjecture of De Giorgi.
{\em Calc. Var. Partial Differential Equations} {\bf 8} (1999), 233--245.
\bibitem{F2}
A. Farina.
\newblock Symmetry for solutions of semilinear elliptic equations in $\mathbb{R}^N$ and related conjectures.
{\em Ricerche di Matematica} {\bf XLVIII} (1999), 129--154.
\bibitem{FSS}
A. Farina, B. Sciunzi and N. Soave.
\newblock Monotonicity and rigidity of solutions to some elliptic systems with uniform limits.
{\em arXiv:1704.06430} (2017).
\bibitem{fazly2013giorgi}
M. Fazly and N. Ghoussoub.
\newblock De {G}iorgi type results for elliptic systems.
\newblock {\em Calculus of Variations and Partial Differential Equations} {\bf 47} (2013) (3-4), 1--15.
\bibitem{Ghoussoub:1998}
N. Ghoussoub and C. Gui.
\newblock On a conjecture of {D}e {G}iorgi and some related problems.
\newblock {\em Mathematische Annalen} {\bf 311} (1998) (3), 481--491.
\bibitem{ghoussoub2003giorgi}
N. Ghoussoub and C. Gui.
\newblock On {D}e {G}iorgi's conjecture in dimensions {4} and {5}.
\newblock {\em Annals of mathematics} {\bf 157} (2003) (1), 313--334.
\bibitem{Ignat:2011}
R. Ignat and B. Merlet.
\newblock Lower bound for the energy of {B}loch walls in micromagnetics.
\newblock {\em Arch. Ration. Mech. Anal.} {\bf 199} (2011) (2), 369--406.
\bibitem{Ignat-Monteil}
R. Ignat and A. Monteil.
\newblock A De Giorgi type conjecture for minimal solutions to a nonlinear Stokes equation,
\newblock {\em Comm. Pure Appl. Math.}, accepted (2018).
\bibitem{Ignat:2012}
R. Ignat and R. Moser.
\newblock A zigzag pattern in micromagnetics.
\newblock {\em J. Math. Pures Appl.} {\bf 98} (2012) (2), 139--159.
\bibitem{IO}
R. Ignat and F. Otto.
\newblock A compactness result in thin-film micromagnetics and the optimality of the N\'eel wall.
\newblock {\em J. Eur. Math. Soc. (JEMS)} {\bf 10} (2008), 909--956.
\bibitem{MonteilSantambrogio2016}
A. Monteil and F. Santambrogio.
\newblock Metric methods for heteroclinic connections.
\newblock {\em Math. Methods Appl. Sci.} {\bf 41} (2018) (3), 1019--1024.
\bibitem{MonteilSantambrogio2017}
A. Monteil and F. Santambrogio. Heteroclinic connections in infinite dimensional spaces.
To appear in Indiana Univ. Math. J.
\bibitem{Moser_zigzag}
R. Moser.
\newblock On the energy of domain walls in ferromagnetism.
\newblock {\em Interfaces Free Bound.}, {\bf 11} (2009) 399--419.
\bibitem{Riviere:2001}
T. Rivi{{\`e}}re and S. Serfaty.
\newblock Limiting domain wall energy for a problem related to micromagnetics.
\newblock {\em Comm. Pure Appl. Math.} {\bf 54} (2001) (3), 294--338.
\bibitem{Savin:2009}
O. Savin.
\newblock Regularity of flat level sets in phase transitions.
\newblock {\em Annals of Mathematics} {\bf 169} (2009) (1), 41--78.
\bibitem{Schatzman:2002}
M. Schatzman. Asymmetric heteroclinic double layers.
{\em ESAIM Control Optim. Calc. Var.} {\bf 8} (2002), no. 2, 965--1005.
\bibitem{Sourdis:2016}
C. Sourdis.
\newblock The heteroclinic connection problem for general double-well potentials.
\newblock {\em Mediterranean Journal of Mathematics} {\bf 13} (2016) (6), 4693--4710.
\bibitem{Zuniga:2016}
A. Zuniga and P. Sternberg.
\newblock On the heteroclinic connection problem for multi-well gradient systems.
\newblock {\em Journal of Differential Equations} {\bf 261} (2016) (7), 3987--4007.
\end{thebibliography}
\end{document}
\begin{document}
\title[Invariant Seifert surfaces]
{Invariant Seifert surfaces for strongly invertible knots}
\author{Mikami Hirasawa}
\address{Department of Mathematics\\
Nagoya Institute of Technology\\
Showa-ku, Nagoya city, Aichi, 466-8555, Japan}
\email{[email protected]}
\author{Ryota Hiura}
\address{Inuyama-Minami High School\\
Hasuike 2-21, Inuyama City, Aichi, 484-0835, Japan}
\email{[email protected]}
\author{Makoto Sakuma}
\address{Osaka Central Advanced Mathematical Institute\\
Osaka Metropolitan University\\
3-3-138, Sugimoto, Sumiyoshi, Osaka City
558-8585, Japan}
\email{[email protected]}
\makeatletter
\@namedef{subjclassname@2020}{
\textup{2020} Mathematics Subject Classification}
\makeatother
\keywords{strongly invertible knot, invariant Seifert surface, equivariant genus, Kakimizu complex\\
{\bf This article is to appear in the book,
Essays in geometry, dedicated to Norbert \mbox{A'Campo} (ed. A. Papadopoulos), European Mathematical Society Press, Berlin, 2023.}}
\subjclass[2020]{Primary 57K10; secondary 57M60}
\begin{abstract}
We study invariant Seifert surfaces for strongly invertible knots, and prove
that the gap between the equivariant genus
(the minimum of the genera of invariant Seifert surfaces)
of a strongly invertible knot and the (usual) genus of the underlying knot
can be arbitrarily large.
This forms a sharp contrast with Edmonds' theorem that
every periodic knot admits an invariant minimal genus Seifert surface.
We also prove variants of Edmonds' theorem,
which are useful in studying
invariant Seifert surfaces for strongly invertible knots.
\end{abstract}
\maketitle
\vspace*{-5mm}
\begin{center}
{\it Dedicated to Professor Norbert A'Campo on his 80th birthday}
\end{center}
\section{Introduction} \label{sec:intro}
A smooth knot $K$ in $S^3$ is said to be
({\it cyclically}) {\it periodic with period $n$}\index{knot!periodic knot}
if there is a periodic diffeomorphism $f$ of $S^3$
of period $n$
which leaves $K$ invariant and fixes a simple loop in the knot complement $S^3\setminus K$.
Since the seminal work by Trotter \cite{Trotter1961} and Murasugi \cite{Murasugi1971},
periodic knots have been studied extensively.
In particular, Edmonds and Livingston \cite{Edmonds-Livingston}
proved that every periodic knot admits an invariant incompressible Seifert surface,
and this was enhanced by
Edmonds \cite{Edmonds} to the existence of an invariant minimal genus Seifert surface.
(He applied this result to prove Fox's conjecture
that a given nontrivial knot has only finitely many periods.)
It is natural to ask if the same result holds for strongly invertible knots.
Recall that a smooth knot $K$ in $S^3$ is said to be
{\it strongly invertible}
if there is a smooth involution $h$ of $S^3$
which leaves $K$ invariant and fixes a simple loop intersecting $K$ in two points.
The involution $h$ is called a
{\it strong inversion}\index{strong inversion}
of $K$.
As in \cite{Sakuma1986},
we use the term
{\it strongly invertible knot}\index{knot!strongly invertible knot}
to mean a pair $(K,h)$ of a knot $K$ and a strong inversion $h$ of $K$,
and regard two strongly invertible knots $(K,h)$ and $(K',h')$
to be {\it equivalent}
if there is an orientation-preserving diffeomorphism $\varphi$
of $S^3$ mapping $K$ to $K'$ such that $h'=\varphi h\varphi^{-1}$.
Note that if $S$ is an $h$-invariant Seifert surface for a strongly invertible knot $(K,h)$
then $\operatorname{Fix}(h)\cap S$ is equal to one of the two subarcs of
$\operatorname{Fix}(h)\cong S^1$
bounded by
$\operatorname{Fix}(h)\cap K\cong S^0$.
So the problem of whether $K$ admits an invariant minimal genus Seifert surface
depends on the choice of the subarc of $\operatorname{Fix}(h)$, in addition to the choice of
the strong inversion $h$.
By a {\it marked strongly invertible knot}\index{knot!marked strongly invertible knot},
we mean a triple $(K,h,\delta)$ where $(K,h)$ is a strongly invertible knot
and $\delta$ is a subarc of $\operatorname{Fix}(h)$ bounded by $\operatorname{Fix}(h)\cap K$.
Two marked strongly invertible knots $(K,h,\delta)$ and $(K',h',\delta')$ are regarded to be
{\it equivalent} if
there is an orientation-preserving diffeomorphism $\varphi$
of $S^3$ mapping $K$ to $K'$ such that
$h'=\varphi h\varphi^{-1}$ and
$\delta'=\varphi(\delta)$.
\begin{definition}
\label{def:equivariant-genus}
{\rm
By an {\it invariant Seifert surface}\index{surface!invariant Seifert surface}
\index{Seifert surface!invariant Seifert surface}
(or a {\it Seifert surface} in brief)
{\it for a marked strongly invertible knot $(K,h,\delta)$},
we mean a Seifert surface $S$ for $K$
such that $h(S)=S$ and $\operatorname{Fix}(h)\cap S=\delta$.
The {\it equivariant genus}\index{genus!equivariant genus}
(or the {\it genus} in brief) $g(K,h,\delta)$ of $(K,h,\delta)$
is defined to be the minimum
of the genera of Seifert surfaces for $(K,h,\delta)$.
A Seifert surface for $(K,h,\delta)$ is said to be of
{\it minimal genus}\index{genus!minimal genus}
if its genus is equal to $g(K,h,\delta)$.
}
\end{definition}
Every marked strongly invertible knot admits an invariant Seifert surface
(Proposition \ref{prop:existence-invariant-Seifert}), and so
its equivariant genus is well-defined.
However, in general,
the equivariant genus is bigger than the (usual) genus.
In fact, the following theorem is proved in the second author's
master thesis \cite{Hiura} supervised by the third author with support by the first author.
\begin{theorem}
\label{thm:main}
For any integer $n\ge 0$,
there exists a marked strongly invertible knot $(K,h,\delta)$
such that $g(K,h,\delta)-g(K)=n$.
\end{theorem}
This theorem follows from a formula
of the equivariant genera for
certain marked strongly invertible knots
that arise from $2$-bridge knots.
See Examples \ref{example:8-3} and \ref{example:main-example} for
special simple cases.
However, there remained various marked strongly invertible $2$-bridge knots
whose equivariant genera were undetermined.
This paper and its sequel \cite{Hirasawa-Hiura-Sakuma2}
are motivated by the desire
to determine the equivariant genera of
all marked strongly invertible $2$-bridge knots.
In this paper, we prove the following two variants of Edmonds' theorem on periodic knots,
which are useful in
studying invariant Seifert surfaces for general strongly invertible knots:
Theorem \ref{thm:disjoint-Seifert2} is
used in \cite{Hirasawa-Hiura-Sakuma2} to give a unified
determination of the equivariant genera of
all marked strongly invertible $2$-bridge knots.
\begin{theorem}
\label{thm:disjoint-Seifert}
Let $(K,h)$ be a strongly invertible knot.
Then there is a minimal genus Seifert surface $F$ for $K$
such that $F$ and $h(F)$ have disjoint interiors.
\end{theorem}
\begin{theorem}
\label{thm:disjoint-Seifert2}
Let $(K,h,\delta)$ be a marked strongly invertible knot, and
let $F$ be a minimal genus Seifert surface for $K$
such that $F$ and $h(F)$ have disjoint interiors.
Then there is a minimal genus Seifert surface $S$ for $(K,h,\delta)$
whose interior is disjoint from the interiors of $F$ and $h(F)$.
\end{theorem}
Note that a Seifert surface for a fibered knot\index{knot!fibered knot}
is a fiber surface\index{surface!fiber surface}
if and only if it is of minimal genus.
For fibered knots, we have a stronger conclusion
as in Proposition \ref{prop:fibered} below.
\begin{proposition}
\label{prop:fibered}
Let $K$ be a strongly invertible fibered knot.
Then for any marked strongly invertible knot $(K,h,\delta)$,
there is an $h$-invariant fiber surface for $K$ containing $\delta$,
and hence
$g(K,h,\delta) = g(K)$.
\end{proposition}
Theorem \ref{thm:disjoint-Seifert}
is also motivated by our interest in the
{\it Kakimizu complex}\index{Kakimizu complex}
$MS(K)$ of the knot $K$
and in the natural action of
the symmetry group\index{symmetry group}
$\operatorname{Sym}(S^3,K)=\pi_0\operatorname{Diff}(S^3,K)$ on $MS(K)$.
The complex was introduced by Kakimizu \cite{Kakimizu1988}
as the flag simplicial complex
whose vertices correspond to the (isotopy classes of) minimal genus Seifert surfaces for $K$
and edges to pairs of such surfaces with disjoint interiors.
The following corollary of Theorem \ref{thm:disjoint-Seifert}
may be regarded as a refinement of a special case of
a theorem proved by Przytycki and Schultens \cite[Theorem 1.2]{Przytycki-Schultens}.
\begin{corollary}
\label{cor:disjoint-Seifert}
Let $(K,h)$ be a strongly invertible knot,
and let $h_*$ be the automorphism of $MS(K)$
induced by the strong inversion $h$.
Then one of the following holds.
\begin{enumerate}
\item
There exists a vertex of $MS(K)$
that is fixed by $h_*$.
\item
There exists a pair of adjacent vertices of $MS(K)$
that are exchanged by $h_*$.
(Thus
the midpoint of the edge spanned by these vertices is
fixed by $h_*$.)
\end{enumerate}
\end{corollary}
For a $2$-bridge knot $K$,
the structure of $MS(K)$ is described by \cite[Theorem 3.3]{Sakuma1994};
in particular, the underlying space $|MS(K)|$ is
identified with a linear quotient of a cube.
The actions of strong inversions
on $MS(K)$ will be described in \cite{Hirasawa-Hiura-Sakuma2}.
This paper is organized as follows.
In Section \ref{sec:basic-fact},
we recall basic facts about
strongly invertible knots.
In Section \ref{sec:invariant-Seifert-surface},
we give two constructions of invariant Seifert surfaces
and prove a basic proposition (Proposition \ref{prop:lifting})
concerning the quotient of an invariant Seifert surface
by the strong inversion.
In Section \ref{sec:Proof-MainTheorem},
we give an argument for determining the equivariant genera
(Proposition \ref{prop:band-number} and Corollary \ref{cor:band-number}),
and prove Theorem \ref{thm:main}
by using that argument (Example \ref{example:main-example}).
In Section \ref{sec:proof-SubMainTheorem},
we prove our main theorems, Theorems
\ref{thm:disjoint-Seifert} and
\ref{thm:disjoint-Seifert2}.
In the final section,
Section \ref{sec:fourgenus},
we briefly review old and new studies of
equivariant $4$-genera of symmetric knots.
Finally, we explain the relation of this paper to A'Campo's work.
Motivated by isolated singularities of complex hypersurfaces,
A'Campo \cite{A'Campo}
formulated a way to construct fibered links in the 3-sphere from
{\it divides}\index{divide},
i.e., immersions of copies of $1$-manifolds in a disk.
He proved that the link obtained from a connected divide is
strongly invertible and fibered.
In \cite[Section 3]{A'Campo}, we can find a beautiful description of a pair of
invariant fiber surfaces for the link.
This nicely illustrates Proposition \ref{prop:fibered}.
(See \cite{Hirasawa} for visualization of these links and their fiber surfaces.)
Furthermore, Couture \cite{Couture} introduced the more general notion of
ordered Morse signed divides, and proved that
every strongly invertible link is isotopic to the link of an ordered Morse signed divide.
As A'Campo suggested to the authors, it would be interesting to study invariant Seifert surfaces
from this viewpoint.
\section{Basic facts concerning strongly invertible knots}
\label{sec:basic-fact}
Recall that a knot $K$ in $S^3$ is said
to be {\it invertible}\index{knot!invertible knot}
if there is an orientation-preserving diffeomorphism $h$ that maps $K$ to itself
reversing orientation.
The existence of non-invertible knots was proved by Trotter \cite{Trotter1963}
using $2$-dimensional hyperbolic geometry.
(His proof is based on the fact that the pretzel knots admit the structure
of Seifert fibered orbifolds, where the base orbifolds are generically hyperbolic.)
If $h$ can be chosen to be an involution
then $K$ is said to be
{\it strongly invertible},\index{strongly invertible}\index{knot!strongly invertible knot}
and $h$ is called a {\it strong inversion}\index{strong inversion}.
Though strong invertibility of course implies invertibility,
the converse does not hold,
as shown by Whitten \cite{Whitten}.
However, for hyperbolic knots, the converse also holds
by the Mostow-Prasad rigidity theorem.
Moreover, a sufficient condition
for invertible knots to be strongly invertible was given by Boileau \cite{Boileau}.
The finiteness theorem of symmetries of $3$-manifolds proved by Kojima \cite{Kojima}
by using the orbifold theorem \cite{BLP, BoP, CHK, DL}
implies that any knot
admits only finitely many strong inversions up to equivalence.
Here two strong inversions $h$ and $h'$ of $K$ are regarded to be {\it equivalent}
if there is an orientation-preserving diffeomorphism $\varphi$
of $S^3$ such that
$h'=\varphi h\varphi^{-1}$.
However, the number of strong inversions up to equivalence
for a satellite knot can be arbitrarily large
\cite[Lemma 5.4]{Sakuma1986b}.
For torus knots and hyperbolic knots,
the number is at most $2$, as described in
Proposition \ref{prop:strong-inversion} below.
Recall that a knot $K$ in $S^3$ is said to have
{\it cyclic period}\index{cyclic period}\index{period!cyclic period} $2$
or {\it free period} $2$,\index{free period}\index{period!free period}
respectively,
if there is an orientation-preserving smooth involution $f$ of $S^3$ that maps $K$ to itself
preserving orientation, such that $\operatorname{Fix}(f)$ is $S^1$ or $\emptyset$.
\begin{proposition}
\label{prop:strong-inversion}
{\rm (1)} The trivial knot admits a unique strong inversion up to equivalence.
{\rm (2)} A nontrivial torus knot admits a unique strong inversion up to equivalence.
{\rm (3)} An invertible hyperbolic knot admits exactly two or one strong inversions up to equivalence
according to whether it has (cyclic or free) period $2$ or not.
\end{proposition}
The first assertion is due to Marumoto \cite[Proposition 2]{Marumoto}, and
the remaining assertions are proved in \cite[Proposition 3.1]{Sakuma1986}
(cf. \cite[Section 4]{ALSS})
by using the result of Meeks-Scott \cite{Meeks-Scott}
on finite group actions on Seifert fibered spaces
and the orbifold theorem.
Another key ingredient of the proof of the third assertion is
the following consequence of
Riley's observation \cite[p.124]{Riley} based on the positive solution of the Smith conjecture
\cite{Morgan-Bass}.
For a hyperbolic invertible knot $K$, the orientation-preserving isometry group
of the hyperbolic manifold $S^3\setminus K$ is the dihedral group $D_{2n}$
of order $2n$ for some $n\ge 1$, i.e.,
\begin{align*}
\label{dihdral-group}
\operatorname{Isom}^+(S^3\setminus K) \cong
\langle f, h \ | \ f^n=1,\ h^2=1, hfh^{-1}=f^{-1}\rangle.
\end{align*}
Here $h$ extends to a strong inversion of $K$ and $f$
extends to a periodic map of $S^3$ of period $n$ which maps $K$ to itself preserving orientation.
If $n$ is odd, then $K$ has neither cyclic nor free period $2$,
and any strong inversion of $K$ is equivalent to that obtained from $h$.
If $n=2m$ is even, then $f^m$ extends to an involution of $S^3$
which gives cyclic or free period $2$ of $K$,
and any strong inversion of $K$ is equivalent to that obtained from
exactly one of $h$ and $fh$.
By \cite[Proposition 1.2]{Kodama-Sakuma},
the above Proposition \ref{prop:strong-inversion}(3) is refined to the following proposition
concerning marked strongly invertible knots.
\begin{proposition}
\label{prop:marked-strong-inversion}
Let $K$ be an invertible hyperbolic knot.
{\rm (1)} Suppose $K$ has neither cyclic nor free period $2$.
Then $K$ admits a unique strong inversion up to equivalence,
and the two marked strongly invertible knots associated with $K$ are inequivalent.
Thus there are precisely two marked strongly invertible knots
associated with $K$.
{\rm (2)} Suppose $K$ has cyclic period $2$.
Then $K$ admits precisely two strong inversions up to equivalence,
and for each strong inversion,
the associated two marked strongly invertible knots are inequivalent.
Thus there are precisely four marked strongly invertible knots
associated with $K$.
{\rm (3)} Suppose $K$ has free period $2$.
Then $K$ admits precisely two strong inversions up to equivalence,
and for each strong inversion,
the associated two marked strongly invertible knots are equivalent.
Thus there are precisely two marked strongly invertible knots
associated with $K$.
\end{proposition}
Since every $2$-bridge knot admits cyclic period $2$,
we obtain the following corollary.
\begin{corollary}
\label{cor:nonhyperbolic-msi-2br-knot}
Every hyperbolic $2$-bridge knot has precisely four associated
marked strongly invertible knots up to equivalence.
\end{corollary}
The four marked strongly invertible knots
associated with a hyperbolic $2$-bridge knot
are (implicitly) presented in \cite[Proposition 3.6]{Sakuma1986} and
\cite[Section 4]{ALSS}.
\begin{remark}
\label{rem:nonhyperbolic-msi-2br-knot}
{\rm
By Proposition \ref{prop:strong-inversion}(1) and (2),
we can easily see that the trivial knot has a unique associated
marked strongly invertible knot and that
every torus knot has precisely two associated
marked strongly invertible knots.
}
\end{remark}
We note that
Barbensi, Buck, Harrington and Lackenby \cite{Barbensi-Buck-Harrington-Lackenby}
shed new light on the strongly invertible knots
in relation with the knotoids\index{knotoid}
introduced by Turaev \cite{Turaev}.
They prove that there is a one-to-one correspondence
between unoriented knotoids, up to \lq\lq rotation'',
and strongly invertible knots,
up to \lq\lq inversion'' \cite[Theorem 1.1]{Barbensi-Buck-Harrington-Lackenby}.
Proposition \ref{prop:marked-strong-inversion} is a variant of
their result \cite[Theorem 1.3]{Barbensi-Buck-Harrington-Lackenby}
concerning knotoids.
\section{Invertible diagrams and invariant Seifert surfaces}
\label{sec:invariant-Seifert-surface}
In this section, we describe two proofs
of the following basic proposition
which shows the existence of an invariant Seifert surface
for every marked strongly invertible knot.
One is due to Boyle and Issa \cite{Boyle-Issa2021a},
and the other is due to Hiura \cite{Hiura}.
\begin{proposition}
\label{prop:existence-invariant-Seifert}
Every marked strongly invertible knot admits an invariant Seifert surface.
Namely, for every marked strongly invertible knot $(K,h,\delta)$,
there is an $h$-invariant Seifert surface $S$ for $K$ such that
$\operatorname{Fix}(h)\cap S=\delta$.
\end{proposition}
Following \cite[Definition 3.3]{Boyle-Issa2021a}
we say that a symmetric diagram representing a strongly invertible knot $(K,h)$
is
\begin{enumerate}
\item
{\it intravergent}\index{knot diagram!intravergent diagram}
if $h$ acts as half-rotation around an axis perpendicular to the plane of the diagram
(see Figure \ref{fig:F1}(a)), and
\item
{\it transvergent}\index{knot diagram!transvergent diagram}
if $h$ acts as half-rotation around an axis contained within the plane of the diagram
(see Figure \ref{fig:F5}(a)).
\end{enumerate}
\figer{f1}{0.95}
{fig:F1}{An intravergent diagram and a symmetric knotoid}
\subsection{Construction from an intravergent diagram and knotoid}
As Boyle and Issa note
in \cite[Proposition 1]{Boyle-Issa2021a},
Seifert's algorithm applied to an intravergent diagram
produces an invariant Seifert surface.
To be precise, for a marked strongly invertible knot $(K,h,\delta)$,
let $\Gamma$ be an intravergent diagram
of the strongly invertible knot $(K,h)$,
such that $\delta$ is the crossing arc
at the crossing through which the axis passes.
Then, by applying Seifert's algorithm to $\Gamma$,
we obtain an invariant Seifert surface for $(K,h,\delta)$
(Figure \ref{fig:F1}(b)).
By cutting the over- or under-path which contains a fixed point,
we have a rotationally symmetric knotoid diagram $\Gamma'$ (Figure \ref{fig:F1}(c)).
By applying Seifert's smoothing to $\Gamma'$, one obtains
Seifert circles and an arc.
Then replacing the arc by a thin disk, we have an invariant
Seifert surface for $(K,h,\delta)$
(see Figure \ref{fig:F1}(d)).
Note that these surfaces in general have different genera.
\subsection{Construction from a transvergent diagram}
In order to construct an invariant Seifert surface
from a transvergent diagram of a strongly invertible knot $(K,h)$,
we consider the quotient $\theta$-curve
$\theta(K,h)$ defined as follows.
Let $\pi:S^3\to S^3/h\cong S^3$ be the projection.
Then $O:=\pi(\operatorname{Fix}(h))$ is a trivial knot and $k:=\pi(K)$ is an arc
such that $O\cap k=\partial k$.
Thus the union $\theta(K,h):=O\cup k$ forms a $\theta$-curve embedded in $S^3$:
we call it the {\it quotient $\theta$-curve}\index{$\theta$-curve}\index{quotient!quotient $\theta$-curve}
of
the strongly invertible knot $(K,h)$.
For a marked strongly invertible knot $(K,h,\delta)$,
set $\check\delta:=\pi(\delta)$ and $\check K:= k\cup \check\delta$
(Figure \ref{fig:F2}).
\figer{f2}{0.95}{fig:F2}{
The quotient $\theta$-curve $\theta(K,h)=O\cup k$
and the constituent knot
$\check K=k\cup\check\delta=\pi(K\cup \delta)$.
Note that $\theta(K,h)$ consists of three edges
$k=\pi(K)$, $\check\delta=\pi(\delta)$
and $\check\delta^c=\pi(\operatorname{cl}(\operatorname{Fix}(h)\setminus \delta))=\operatorname{cl}(O\setminus \check\delta)$.}
Observe that if $S$ is an $h$-invariant Seifert surface for $(K,h,\delta)$,
then its image $\check S:=\pi(S)$ in $S^3/h$ is
a {\it spanning surface}\index{surface!spanning surface}
for the knot $\check K$,
i.e., a compact surface in $S^3/h$ with boundary $\check K$,
which is disjoint from the interior of the arc
$\check\delta^c:=\operatorname{cl}(O\setminus \check\delta)$.
Conversely, if $\check S$ is a spanning surface for $\check K$
which is disjoint from the interior of the arc $\check\delta^c$,
then its inverse image $\pi^{-1}(\check S)$ is
an $h$-invariant spanning surface for $(K,h,\delta)$,
namely an $h$-invariant spanning surface for $K$
whose intersection with $\operatorname{Fix}(h)$ is equal to $\delta$.
However, $\pi^{-1}(\check S)$ is not necessarily orientable.
The following proposition gives a necessary and sufficient condition
for $\pi^{-1}(\check S)$ to be orientable and so an invariant Seifert surface for $(K,h,\delta)$.
\begin{proposition}
\label{prop:lifting}
Under the above notation, the following hold
for every marked strongly invertible knot $(K,h,\delta)$.
{\rm (1)}
If $S$ is an invariant Seifert surface for $(K,h,\delta)$,
then its image $\check S=\pi(S)$ in $S^3/h$
is a spanning surface for $\check K=k\cup \check\delta$
disjoint from the interior of $\check\delta^c$,
and satisfies the following condition.
\begin{itemize}
\item[\rm{(C)}]
For any loop $\gamma$ in $\operatorname{int}\check S$,
$\gamma$ is orientation-preserving or -reversing in $\check S$
according to whether the linking number
$\operatorname{lk}(\gamma, O)$ is even or odd.
\end{itemize}
{\rm (2)}
Conversely, if $\check S$ is a spanning surface for the knot $\check K$ in $S^3/h$
which is disjoint from the interior of $\check\delta^c$
and satisfies Condition {\rm (C)},
then its inverse image $\pi^{-1}(\check S)$ in $S^3$
is an invariant Seifert surface for $(K,h,\delta)$.
\end{proposition}
\begin{proof}
(1)
Note that
$\check S\cong S/h$ is regarded as an orbifold which has $\check\delta$ as
a reflector line.
We consider an \lq\lq orbifold handle decomposition'' $\check S=D\cup(\cup_i B_i)$,
where
(a) $D$ is an \lq\lq orbifold $0$-handle'', namely $D$ is a disk such that
$\partial \check S \cap D=\check\delta$,
and (b) $\{B_i\}$ are $1$-handles attached to $D$,
namely each $B_i$ is a disk such that
$B_i\cap D =\partial B_i\cap \partial D$
consists of two mutually disjoint arcs
which are disjoint from the reflector line $\check \delta$.
This handle decomposition of the orbifold $\check S$
lifts to the handle decomposition
$S=\pi^{-1}(D)\cup (\cup_i \pi^{-1}(B_i))$ of the surface $S$,
where $\pi^{-1}(D)$ is an $h$-invariant $0$-handle
and each $\pi^{-1}(B_i)$ consists of a pair of $1$-handles attached to the disk $\pi^{-1}(D)$.
For each $i$, $D\cup B_i$ is an annulus or a M\"obius band
which is obtained as the quotient of the orientable surface
$\pi^{-1}(D\cup B_i)\subset S$ by the restriction of the involution $h$.
Let $\gamma_i$ be a core loop of $D\cup B_i$.
Then we can see that the projection $\pi:\pi^{-1}(D\cup B_i)\to D\cup B_i$ is as shown
in Figure \ref{fig:F3}(a), (b)
according to whether $\operatorname{lk}(\gamma_i,O)$ is even or odd.
Hence $D\cup B_i$ is an annulus or a M\"obius band
accordingly.
Since the homology classes of $\{\gamma_i\}$ generate $H_1(\check S)$,
this observation implies that $\check S$ satisfies Condition (C).
(2) This can be proved by reversing the above argument.
\end{proof}
\figer{f3}{0.8}
{fig:F3}{The band $D\cup B_i$ (bottom) and its inverse image $\pi^{-1}(D\cup B_i)$ (top).
The linking number $\operatorname{lk}(\gamma_i,O)$ is even in (a) and odd in (b).}
\begin{remark}
\label{rem:lifting}
{\rm
(1) Condition (C) is equivalent to the condition that
the homomorphism $\iota:H_1(\check S;\mathbb{Z}/2\mathbb{Z}) \to \mathbb{Z}/2\mathbb{Z}$
defined by $\iota(\gamma):=\operatorname{lk}(O,\gamma) \pmod 2$ is
identical with the orientation homomorphism.
(2) We have $\beta_1(S)=2\beta_1(\check S)$, where $\beta_1$ denotes
the first Betti number,
because if $b$ is the number of $1$-handles $\{B_i\}$
in the proof, then
$\beta_1(\check S)=b$ and $\beta_1(S)=2b$.
}
\end{remark}
From a given transvergent diagram of a strongly invertible knot $(K,h)$,
we can easily draw a diagram of the quotient $\theta$-curve $\theta(K,h)$
(see Figure \ref{fig:F5}(b)).
There are various ways of modifying the diagram into a \lq\lq good'' diagram
from which we can construct a surface $\check S$ for $\theta(K,h)$
that satisfies the conditions in Proposition \ref{prop:lifting}.
Hiura \cite{Hiura} gave an algorithm for obtaining such a diagram of $\theta(K,h)=O \cup k$
as follows (see Figure \ref{fig:F4}).\\
(a)
Let $k$ hook
$O \setminus \check\delta$ an even number of times, in a uniform way.
\\
(b) Travel along $k$ and enumerate the hooks from $1$ to $2n$. Then
slide the hooks so that they are paired and arranged from top to bottom along $O$.
Note that one travels between the $(2i-1)^{\rm st}$ and $(2i)^{\rm th}$ hooks in
one of four possible ways according to the orientations of the hooks.\\
(c) Perform surgery on $k$ along the bands $\{B_i\}_{i=1}^n$ arising in the pairs of hooks
to obtain a set of arcs $k'$.
The arrangement in (b) allows us to reset the orientation of $k'$ as depicted.
Apply Seifert's algorithm to $k' \cup \check{\delta}$ to obtain a Seifert surface $S'$.
Then attaching the bands $\{B_i\}_{i=1}^n$ to $S'$
yields a spanning surface $\check S$ for
$k\cup \check{\delta}$ which satisfies the
conditions in Proposition \ref{prop:lifting}(2).
Therefore, the inverse image $S:=\pi^{-1}(\check S)$ is an invariant Seifert surface for
$(K,h,\delta)$.
\figer{f4}{0.9}
{fig:F4}{Hiura's algorithm to obtain an invariant Seifert surface from a transvergent diagram}
\begin{example}
\label{example:8-3}
{\rm
Let $(K,h,\delta)$ be the marked strongly invertible knot with $K=8_{3}$
as illustrated by
the transvergent diagram in Figure \ref{fig:F5}(a).
Then by applying the algorithm to the diagram,
we obtain a genus $2$ invariant Seifert surface $S$
for $(K,h,\delta)$ as shown in Figure \ref{fig:F5}(c).
This is a minimal genus Seifert surface for $(K,h,\delta)$
and so $g(K,h,\delta)=2$, as shown below.
By Hatcher-Thurston \cite[Theorem 1]{Hatcher-Thurston}
or by Kakimizu \cite{Kakimizu2005}, building on the work of Kobayashi \cite{Kobayashi},
$8_{3}$ has precisely two genus $1$ Seifert surfaces up to equivalence.
(They are obtained by applying Seifert's algorithm to the alternating diagram,
where there are two different ways of attaching a disk to the unique big Seifert circle.)
Obviously they are interchanged by $h$, and hence not $h$-invariant.
Kakimizu \cite{Kakimizu2005} in fact showed that the two genus $1$ Seifert surfaces
are the only incompressible Seifert surfaces for $8_{3}$.
Thus $8_{3}$ does not even admit an $h$-invariant incompressible Seifert surface.
So the result of Edmonds and Livingston \cite[Corollary 2.2]{Edmonds-Livingston}
that every periodic knot admits an invariant incompressible Seifert surface
does not hold for strongly invertible knots.
}
\end{example}
\figer{f5}{0.99}
{fig:F5}{A transvergent diagram for $8_{3}$ and an invariant Seifert surface}
\section{Proof of Theorem \ref{thm:main}}
\label{sec:Proof-MainTheorem}
By Proposition \ref{prop:lifting}
and Remark \ref{rem:lifting},
we can characterize the equivariant genus $g(K,h,\delta)$
in terms of the quotient $\theta$-curve $\theta(K,h)=k\cup \check \delta\cup \check\delta^c$
and the constituent knot $\check K=k\cup \check \delta$ as follows.
\begin{proposition}
\label{prop:band-number}
Let $(K,h,\delta)$ be a marked strongly invertible knot.
Then $g(K,h,\delta)$ is equal to the minimum of
$\beta_1(\check S)$, where
$\check S$ runs over the spanning surfaces for
the constituent knot $\check K=k\cup \check \delta$ of
the quotient $\theta$-curve $\theta(K,h)$
that are
disjoint from the interior of the remaining edge $\check\delta^c$
and satisfy Condition (C).
\end{proposition}
By relaxing the definition of the crosscap number\index{crosscap number}
$\gamma(K)$ of a knot $K$
introduced by Clark \cite{Clark},
we define the {\it band number}\index{band number}
$b(K)$ to be the
minimum of $\beta_1(G)$
over all spanning surfaces $G$ for $K$
(see Murakami-Yasuhara \cite{Murakami-Yasuhara}).
In other words, $b(K)=\min(2g(K),\gamma(K))$.
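For example, the trefoil knot $T$ bounds a M\"obius band
(the twisted band with three half-twists), so $\gamma(T)=1$;
since $g(T)=1$, we have
\[
b(T)=\min(2g(T),\gamma(T))=\min(2,1)=1,
\]
so the band number can be strictly smaller than $2g(K)$.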
Then we have the following corollary.
\begin{corollary}
\label{cor:band-number}
$g(K,h,\delta)\ge b(\check K)$.
\end{corollary}
For any marked strongly invertible knot $(K,h,\delta)$
with $K$ a $2$-bridge knot,
the constituent knot $\check K$ is either the trivial knot or a $2$-bridge knot
(see \cite[Proposition 3.6]{Sakuma1986}).
In his master's thesis \cite{Bessho}, supervised by the last author,
Bessho described a method for determining the crosscap numbers
of $2$-bridge knots by using the result of Hatcher and Thurston
\cite[Theorem 1(b)]{Hatcher-Thurston}
that classifies the
incompressible and boundary incompressible surfaces in the $2$-bridge knot exteriors.
Hirasawa and Teragaito \cite{Hirasawa-Teragaito}
promoted the method into a very effective algorithm.
For some classes of marked strongly invertible $2$-bridge knots,
we can apply Corollary \ref{cor:band-number} by using
the Hirasawa-Teragaito method, as shown in the following example.
\begin{example}
\label{example:main-example}
{\rm
For a positive integer $n$,
let $K_n$ be the plat closure of the $4$-string braid
$(\sigma_2^2 \sigma_1^4)^n$.
Then $K_n$ is the $2$-bridge knot\index{$2$-bridge knot}\index{knot!$2$-bridge knot}
whose slope $q/p$
has the continued fraction expansion $[2,4,2,4,\cdots,2,4]$ of length $2n$.
Here we employ the following convention of continued fraction expansion,
which is used in \cite{Hirasawa-Teragaito}.
\[
[a_1,a_2, \cdots ,a_m]:=\cfrac{1}
{a_1-\cfrac{1}
{a_2-\cfrac{1}
{\ddots -\cfrac{1}
{a_{m}
}}}}
\]
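For instance, for $n=1$ this convention gives
\[
[2,4]=\cfrac{1}{2-\cfrac{1}{4}}=\frac{1}{7/4}=\frac{4}{7},
\]
so $K_1$ is the $2$-bridge knot of slope $4/7$.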
Thus $K_n$ is the boundary of a linear plumbing of unknotted annuli
where the $i^{\rm th}$ band has $2$ or $4$ right-handed half-twists
according to whether $i$ is odd or even
(see \cite[Fig. 2]{Hatcher-Thurston}).
In particular, $g(K_n)=n$.
Note that $K_n$ is isotopic to the knot
admitting the strong inversion $h$ in Figure \ref{fig:F6}(a).
Let $(K_n,h,\delta)$ be the marked strongly invertible knot,
where $\delta$ is the long arc in $\operatorname{Fix}(h)$ bounded by $\operatorname{Fix}(h)\cap K_n$
illustrated in Figure \ref{fig:F6}(a).
Then we have
$g(K_n,h,\delta)=2n=g(K_n)+n$, as explained below.
\figer{f6}{1}{fig:F6}{A Seifert surface realizing
the equivariant genus}
Observe that $(K_n, h, \delta)$ is equivalent to the marked strongly invertible knot
bounding the $h$-invariant Seifert surface $S$ of genus
$2n$ in Figure \ref{fig:F6}(b).
Consider the quotient surface $\check S=S/h$,
which is a spanning surface for the knot $\check K:=k\cup \check \delta$.
We show that the band number $b(\check K)$ is equal to $\beta_1(\check S)$.
Then it follows from Corollary \ref{cor:band-number}
that $S$ is a minimal genus Seifert surface for $(K_n, h, \delta)$.
To this end, observe that $\check K$ is the $2$-bridge knot
whose slope has the continued fraction expansion
$\mathcal{C}:=[4,2,4,2,\cdots,4,2]$ of length $2n$
(Figure \ref{fig:F6}(c)).
Since $\mathcal{C}$ does not contain an entry in $\{0, -1, 1\}$,
and since $\mathcal{C}$ contains neither a subsequence of the form $\pm [\cdots, 2, 3, \cdots, 3, 2,\cdots]$
nor one of the form $\pm [\cdots, 2, 2,\cdots]$,
it follows from
\cite[Theorems 2 and 3]{Hirasawa-Teragaito}
that the length $2n$ of $\mathcal{C}$ is
minimal among all continued fraction expansions of
all fractions representing $\check K$.
Then it follows from \cite[Theorem 1(b)]{Hatcher-Thurston}
that $b(\check K)=\beta_1(\check S)=2n$ as desired.
}
\end{example}
Theorem \ref{thm:main} follows from the above example
and Proposition \ref{prop:fibered}, which is proved at the end of
Section \ref{sec:proof-SubMainTheorem}.
Though the above method is also applicable to a certain family of
marked strongly invertible $2$-bridge knots,
there are various cases
where this simple method does not work.
In fact, there is a case where the knot $\check K=k\cup\check\delta$
is trivial (see Figure \ref{fig:F5}).
The second author \cite{Hiura} treated such a family by applying
the method in \cite{Hatcher-Thurston}
to such spanning surfaces as in Proposition \ref{prop:band-number}.
However, there still remained cases where none of the methods described above work.
In the sequel [28] to this paper,
we give a unified determination of the equivariant genera of
all marked strongly invertible $2$-bridge knots using sutured manifolds.
Theorem \ref{thm:disjoint-Seifert2} enabled us to do that without
invoking \cite{Hatcher-Thurston}
or \cite{Hirasawa-Teragaito}.
\section{Proof of Theorems \ref{thm:disjoint-Seifert} and \ref{thm:disjoint-Seifert2}}
\label{sec:proof-SubMainTheorem}
For a knot $K$ in $S^3$, let
$E(K):=S^3\setminus \operatorname{int} N(K)$ be the {\it exterior}\index{exterior}\index{knot!knot exterior}
of $K$,
where $N(K)$ is a regular neighborhood of $K$.
If $F$ is a Seifert surface for $K$,
then after an isotopy,
$F$ intersects $N(K)$ in a collar neighborhood of $\partial F$,
and $F\cap E(K)$ is a surface properly embedded in $E(K)$
whose boundary is a {\it preferred longitude}\index{preferred longitude},
i.e., a longitude of the solid torus $N(K)$
whose linking number with $K$ is $0$.
Conversely, any such surface in $E(K)$ determines a
Seifert surface\index{Seifert surface}\index{surface!Seifert surface}
for $K$.
Thus we also call such a surface in $E(K)$ a {\it Seifert surface} for $K$.
\begin{proof}[Proof of Theorem \ref{thm:disjoint-Seifert}]
We give a proof imitating the arguments by Edmonds \cite{Edmonds}.
Let $(K,h)$ be a strongly invertible knot, and
let $E:=E(K)$ be an $h$-invariant exterior of $K$.
We continue to denote by $h$ the restriction of $h$ to $E$.
Then Theorem \ref{thm:disjoint-Seifert} is equivalent to the existence
of a minimal genus Seifert surface $F\subset E$ for $K$ such that $F\cap h(F)=\emptyset$.
Fix an $h$-invariant Riemannian metric on $E$
such that $\partial E$ is convex.
Choose a preferred longitude $\ell_0\subset \partial E$ of $K$
such that $\ell_0$ and $\ell_1:=h(\ell_0)$ are disjoint.
Let $F^*$ be a smooth, compact, connected, orientable surface of genus $g(K)$ with
one boundary component,
$\psi_0:\partial F^* \to \partial E$ an embedding such that $\psi_0(\partial F^*)=\ell_0$,
and set $\psi_1:=h \psi_0$.
For $i=0,1$, let $\mathcal{F}_i$ be the space of all piecewise smooth maps
$f:(F^*,\partial F^*)\to (E,\ell_i)$
properly homotopic to an embedding, such that $f|_{\partial F^*}=\psi_i$.
Then we have the following \cite[Proposition 1]{Edmonds}.
\begin{lemma}
Each
$\mathcal{F}_i$ contains an area minimizer\index{area minimizer},
namely,
there exists an element $f_i\in \mathcal{F}_i$
whose area
is minimum among the areas of all elements of $\mathcal{F}_i$.
Moreover, any area minimizer in $\mathcal{F}_i$ is an embedding.
\end{lemma}
Since $h$ is an isometric involution, it follows that
$f_i$ is an area minimizer in $\mathcal{F}_i$ if and only if
$h f_i$ is an area minimizer in $\mathcal{F}_j$,
where $\{i,j\}=\{0,1\}$.
Thus Theorem \ref{thm:disjoint-Seifert} follows from the
following analogue of \cite[Theorem 2]{Edmonds}.
\end{proof}
\begin{theorem}
\label{thm:Edomonds-Thm2}
For $i=0,1$,
let $f_i$ be an area minimizer in $\mathcal{F}_i$,
and $F_i\subset E$ the minimal genus Seifert surface for $K$ obtained as the image of $f_i$.
Then $F_0$ and $F_1$ are disjoint.
\end{theorem}
\begin{proof}
The proof is the same as \cite[Proof of Theorem 2]{Edmonds}, as explained below.
Suppose to the contrary that $F_0\cap F_1\ne \emptyset$.
We first assume that $F_0$ and $F_1$ intersect transversely.
Then, by
the arguments in \cite[the 3rd to the 6th paragraphs of the proof of Theorem 2]{Edmonds}
using the area minimality and the incompressibility of $F_i$ ($i=0,1$) and
the asphericity of $E$,
it follows that every component of $F_0\cap F_1$ is
essential in both $F_0$ and $F_1$.
By the arguments in
\cite[the 7th to the final paragraphs of the proof of Theorem 2]{Edmonds},
we see that there is a submanifold $W$ in $E$
satisfying the following conditions.
\begin{enumerate}
\item[(a)]
$\partial W=A\cup B$ where $A=W\cap F_0$ and $B=W\cap F_1$.
\item[(b)]
Both $(F_0\setminus A)\cup B$ and $(F_1\setminus B)\cup A$
are minimal genus Seifert surfaces.
\end{enumerate}
Since $(F_0\setminus A)\cup B$ and $(F_1\setminus B)\cup A$ have corners,
smoothing them reduces area and yields two minimal genus Seifert surfaces,
at least one of which has less area than $F_0$ or $F_1$,
a contradiction.
Finally, we explain the generic case where
$F_0$ and $F_1$ are not necessarily transversal.
As is noted in \cite[the 2nd paragraph of the proof of Theorem 2]{Edmonds},
by virtue of the Meeks-Yau trick\index{Meeks-Yau trick}
introduced in \cite{Meeks-Yau} and
discussed in \cite{Freedman-Hass-Scott},
we can reduce to the case where $F_0$ and $F_1$ intersect transversely as follows.
By \cite[Lemma 1.4]{Freedman-Hass-Scott}, we have a precise picture of the situation
where $F_0$ and $F_1$ are non-transversal
(see \cite[Figure 1.2]{Freedman-Hass-Scott}).
By using this fact, $F_0$ is perturbed slightly to a surface $F_0'$, so that
(a) $F_0'$ and $F_1$ intersect transversely and nontrivially, and
(b) the difference $\operatorname{Area}(F_0')-\operatorname{Area}(F_0)$ is less than a lower bound estimate for the area reduction to be achieved as in the preceding paragraphs.
Thus the assumption $F_0\cap F_1\ne \emptyset$ again leads to a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:disjoint-Seifert2}]
Throughout the proof we use the following terminology:
for a topological space $X$ and its subspace $Y$,
a {\it closed-up component}\index{closed-up component} of $X\setminus Y$
means the closure in $X$ of a component of $X\setminus Y$.
Let $(K,h,\delta)$ be a marked strongly invertible knot
and let $E$ and $h$ be as
in the proof of Theorem \ref{thm:disjoint-Seifert}.
Then $\operatorname{Fix}(h)\subset E$ is the disjoint union of two arcs $\delta\sqcup\delta^c$,
where $\delta$ denotes the intersection of the original $\delta$ with $E$
and $\delta^c=\operatorname{Fix}(h)\setminus \delta$.
Then, by the assumption of Theorem \ref{thm:disjoint-Seifert2},
there is
a minimal genus Seifert surface $F\subset E$
such that $F\cap h(F)=\emptyset$.
Let $E_{\delta}$ and $E_{\delta^c}$ be the closed-up components of
$E(K)\setminus (F\cup h(F))$ containing $\delta$ and
$\delta^c$, respectively.
Then there is a minimal genus Seifert surface $S \subset E$ for $(K,h,\delta)$
such that $\partial S$ is
contained in the interior of the annulus $\partial E\cap \partial E_{\delta}$.
We will construct from $S$
a minimal genus Seifert surface for $(K,h,\delta)$
which is properly embedded in
$E_{\delta}\setminus (F\cup h(F))$.
\begin{claim}
\label{claim:essential-intersection0}
We can choose $S$ so that
$S$ intersects $F\cup h(F)$ transversely and so that
every component of $S\cap(F\cup h(F))$ is essential
in both $S$ and $F\cup h(F)$.
\end{claim}
\begin{proof}
Since $F\cap h(F)=\emptyset$,
we can $h$-equivariantly isotope $S$ so that
it intersects $F\cup h(F)$ transversely.
Then $S\cap(F\cup h(F))$ consists of simple loops,
because the boundaries of $S$ and $F\cup h(F)$ are disjoint.
Suppose first that $S\cap(F\cup h(F))$ contains a component
that is inessential in $F\cup h(F)$.
Let $\alpha$ be one such component which is innermost in $F\cup h(F)$,
and let $\Delta$ be the disk in $F\cup h(F)$ bounded by $\alpha$.
Then $\Delta\cap S=\partial\Delta$, $h(\Delta)\cap S=\partial (h(\Delta))$, and
$\Delta\cap h(\Delta)=\emptyset$.
Let $S'$ be the $h$-invariant surface obtained from $S$
by surgery along $\Delta\cup h(\Delta)$,
i.e., $S'$ is obtained from $S$ by removing
an $h$-invariant open regular neighborhood of
$\partial(\Delta\cup h(\Delta))$
in $S$
and capping off the resulting four boundary circles
with nearby parallel $h$-equivariant copies of $\Delta$ and $h(\Delta)$.
Let $S'_b$ be the component of $S'$ containing $\partial S$.
Then $S'_b$ is a Seifert surface for $(K,h,\delta)$
such that $g(S'_b)\le g(S)=g(K,h,\delta)$.
Hence $S'_b$ is also a minimal genus Seifert surface for $(K,h,\delta)$.
Moreover $|S'_b\cap(F\cup h(F))|\le |S\cap(F\cup h(F))|-2$,
where $|\cdot|$ denotes the number of the connected components.
Suppose next that $S\cap(F\cup h(F))$ contains a component
that is inessential in $S$.
Let $\alpha$ be one such component which is innermost in $S$.
Then $\alpha$ is also inessential in $F\cup h(F)$
by the incompressibility of $F\cup h(F)$.
So, by the argument in the preceding paragraph,
we obtain a minimal genus Seifert surface $S'_b$
for $(K,h,\delta)$ such that
$|S'_b\cap(F\cup h(F))|\le |S\cap(F\cup h(F))|-2$.
By repeating the above arguments,
we can find a desired minimal genus Seifert surface for $(K,h,\delta)$.
\end{proof}
Let $p:\tilde E\to E$ be the infinite cyclic covering,
and let $E_j$ ($j\in\mathbb{Z}$) be the closed-up components of
$\tilde E\setminus p^{-1}(F\cup h(F))$
satisfying the following conditions.
\begin{enumerate}
\item
$\tilde E=\cup_{j\in\mathbb{Z}} E_j$.
\item
$E_j$ projects homeomorphically onto $E_{\delta}$ or $E_{\delta^c}$
according to whether $j$ is even or odd.
\item
$F_j:=E_{j-1}\cap E_j$
projects homeomorphically onto $F$ or $h(F)$
according to whether $j$ is even or odd.
\end{enumerate}
For each $j\in\mathbb{Z}$,
let $h_j$ be the involution on $\tilde E$
which is a lift of $h$ such that $h_j(E_j)=E_j$.
Note that $h_j(E_i)=E_{2j-i}$ and that $\operatorname{Fix}(h_j)$ is a properly embedded arc in $E_j$
which projects to $\delta$ or $\delta^c$ according to whether $j$ is even or odd.
The composition
$\tau:=h_{j+1}h_j$ is independent of $j$, and
gives a generator of the covering transformation group of $\tilde E$,
such that $\tau(E_i)=E_{i+2}$ and $\tau(F_i)=F_{i+2}$ for every $i$.
Note that $p^{-1}(S)=\sqcup_{i\in\mathbb{Z}}S_{2i}$,
where $S_{2i}$ is the lift of $S$ preserved by $h_{2i}$.
Let $r:=\max\{j\in\mathbb{Z} \ | \ S_0\cap E_j\ne \emptyset\}$, and
assume that the number $r$
is minimized among all minimal genus Seifert surfaces $S$ for $(K,h,\delta)$
that satisfy the conclusion of Claim \ref{claim:essential-intersection0}.
Then Theorem \ref{thm:disjoint-Seifert2} is equivalent to the assertion that $r=0$.
Suppose to the contrary that $r>0$.
Let $S_0^-$ be the closed-up component of $\tilde E\setminus S_0$
containing $S_{-2}=\tau^{-1}(S_0)$.
Set $\tilde W:=S_0^- \cap E_r$ and $W:=p(\tilde W)$.
Since $\tilde W\subset E_r$,
$p|_{\tilde W}:\tilde W\to W$ is a homeomorphism.
(See Figure \ref{fig:F7}.)
\figer{f7}{0.95}{fig:F7}{Schematic picture of $p:\tilde E\to E$, where $r=2$.
The picture does not reflect the assumption that
the boundaries of $F$, $h(F)$ and $S$ are disjoint.}
\begin{claim}
\label{claim:W}
$W$ is a (possibly disconnected) compact $3$-manifold contained in $\operatorname{int} E$
such that
$\partial W=A \cup B$,
where
$A=S\cap W$,
$B=p(F_r) \cap W$,
and $A \cap B= p(S_0\cap F_r)\subset S\cap p(F_r)$.
Here $p(F_r)=F$ or $h(F)$ according to whether $r$ is even or odd.
\end{claim}
\begin{proof}
Note that $S_0^-\cap \partial \tilde E$ is the half-infinite annulus in $\partial \tilde E$
which forms the closed-up component of $\partial \tilde E \setminus \partial S_0$
disjoint from $\partial F_1$.
On the other hand, $E_r\cap \partial \tilde E$ is the annulus in $\partial \tilde E$
bounded by $\partial F_r$ and $\partial F_{r+1}$.
Since $r>0$ by the assumption, these imply that
$S_0^-\cap \partial \tilde E$ and $E_r\cap \partial \tilde E$ are disjoint.
Hence $\tilde W$ is disjoint from $\partial \tilde E$ and
so $\tilde W \subset \operatorname{int}\tilde E$.
Note that $\operatorname{fr} S_0^-=S_0$ and $\operatorname{fr} E_r = F_r \sqcup F_{r+1}$
intersect transversely (Claim \ref{claim:essential-intersection0}), where $S_0^- \cap F_{r+1}=\emptyset$.
(Here $\operatorname{fr} Y$ with $Y=S_0^-, E_r$ denotes the frontier\index{frontier}
of $Y$ in $\tilde E$,
namely the closure of $Y$ in $\tilde E$ minus the interior of $Y$
in $\tilde E$.)
Hence $\tilde W$ is a compact $3$-manifold contained in $\operatorname{int} \tilde E$,
such that
$\partial \tilde W=\tilde A \cup \tilde B$,
where $\tilde A=(\operatorname{fr} S_0^-) \cap \tilde W=S_0\cap\tilde W$,
$\tilde B=(\operatorname{fr} E_r) \cap \tilde W=F_r \cap \tilde W$, and
$\tilde A\cap \tilde B
= (S_0\cap\tilde W) \cap (F_r \cap \tilde W)= S_0\cap F_r$.
Since $p|_{\tilde W}:\tilde W\to W$ is a homeomorphism,
these imply the claim.
\end{proof}
\begin{claim}
\label{claim:Wh}
$W\cap h(W)=\emptyset$.
\end{claim}
\begin{proof}
Since the restriction $p|_{E_r}$ is a homeomorphism
onto its image $p(E_r)$
(which is equal to $E_{\delta}$ or $E_{\delta^c}$ according to whether $r$ is even or odd)
and since the involution $h|_{p(E_r)}$ is pulled back to
the involution $h_r|_{E_r}$ by $p|_{E_r}$,
it follows that $p|_{E_r}$
restricts to a homeomorphism from $\tilde W \cap h_r(\tilde W)$
onto $W\cap h(W)$.
On the other hand, we have
\[
\tilde W \cap h_r(\tilde W)
=
(S_0^- \cap E_r) \cap h_r(S_0^- \cap E_r)
=
(S_0^-\cap h_r(S_0^-))\cap E_r
=
\emptyset \cap E_r=\emptyset,
\]
where the third identity is verified as follows.
Recall that $S_0^-$ is the closed-up component of $\tilde E\setminus S_0$
containing $S_{-2}$.
Thus $h_r(S_0^-)$ is the closed-up component of
$\tilde E\setminus h_r(S_0)=\tilde E\setminus S_{2r}$
containing $h_r(S_{-2})=S_{2r+2}$.
Since $r>0$, this implies $S_0^-\cap h_r(S_0^-)=\emptyset$.
Hence $W\cap h(W)= p(\tilde W \cap h_r(\tilde W)) =\emptyset$ as desired.
\end{proof}
We perform an $h$-equivariant cut and paste operation on $S\cup F\cup h(F)$ along $W\cup h(W)$,
and produce surfaces $S'=S'_b\sqcup S'_c$, $F'=F'_b\sqcup F'_c$ and
$h(F')=h(F'_b)\sqcup h(F'_c)$ as follows.
\begin{enumerate}
\item
$S':=(S\setminus (A\cup h(A)))\cup(B\cup h(B))$,
$S'_b$ is the component of $S'$
containing $\partial S$, and $S'_c:=S'\setminus S'_b$.
\item
$F':= (F\setminus B) \cup A$,
$F'_b$ is the component of $F'$ containing $\partial F$,
and $F'_c:=F'\setminus F'_b$.
\end{enumerate}
Then $S'_b$ is a Seifert surface for $(K,h,\delta)$, and
both $F'_b$ and $h(F'_b)$ are Seifert surfaces for $K$.
Let $\Sigma$ be the disjoint union of copies of
$S'_c$, $F'_c$ and $h(F'_c)$.
Then $\Sigma$ is a possibly empty, closed, orientable surface,
such that $\chi(\Sigma)=\chi(S'_c\cup F'_c\cup h(F'_c))$.
(Note that the intersection among $S'_c$, $F'_c$ and $h(F'_c)$
consists of disjoint loops.)
Since $S$ and $F\cup h(F)$
intersect in essential loops by Claim \ref{claim:essential-intersection0},
none of the components of $\Sigma$ is a $2$-sphere
and therefore $\chi(\Sigma)\le 0$.
Hence
\begin{align*}
\chi(S'_b)+\chi(F'_b)+\chi(h(F'_b))
& \ge \chi(S'_b)+\chi(F'_b)+\chi(h(F'_b))+\chi(\Sigma)\\
& = \chi(S) +\chi(F)+\chi(h(F)).
\end{align*}
Since $g(F)=g(K)\le g(F'_b)$, this implies
\[
\chi(S'_b) \ge
\chi(S)+(\chi(F)-\chi(F'_b))+(\chi(h(F))-\chi(h(F'_b)))
\ge \chi(S).
\]
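Indeed, each of $F$ and $F'_b$ is connected with a single boundary component, so
\[
\chi(F)-\chi(F'_b)=(1-2g(F))-(1-2g(F'_b))=2\bigl(g(F'_b)-g(F)\bigr)\ge 0,
\]
and the same inequality holds with $F$ replaced by $h(F)$.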
Hence $g(S'_b) \le g(S)=g(K,h,\delta)$, and therefore
$S'_b$ is also a minimal genus Seifert surface for $(K,h,\delta)$.
Note that the $h_0$-invariant lift of $S'_b$ is
obtained from $S_0 \subset \tilde E$ by cut and paste operation along
$\tilde W \cup h_0(\tilde W)$
and so it is
contained in the region of $\tilde E$
bounded by $F_{1-r}$ and $F_r$.
After a small $h$-equivariant isotopy of $S'_b$,
the lift is contained in the interior of that region,
and hence the number $r$ for $S'_b$ is strictly smaller than the original $r$.
This contradicts the minimality of $r$.
Hence we have $r=0$ as desired.
\end{proof}
The following example illustrates
Theorems \ref{thm:disjoint-Seifert} and \ref{thm:disjoint-Seifert2},
and Corollary \ref{cor:disjoint-Seifert}.
\figer{f8}{0.9}{fig:F8}{The invariant Seifert surface $S$ and its compressing loop $\alpha$
on the positive side}
\begin{example}
\label{example:compression}
{\rm
Let $S$ be the invariant Seifert surface for $K=8_3$
constructed in Example \ref{example:8-3}.
Then the loop $\alpha$ in Figure \ref{fig:F8} is a compressing loop of $S$
on the positive side,
and $h(\alpha)$ is a compressing loop of $S$
on the negative side.
Note that $\alpha$ and $h(\alpha)$ intersect nontrivially,
and therefore $\{\alpha, h(\alpha)\}$ does not yield
an $h$-equivariant compression of $S$.
Consider a parallel copy $S_+$ of $S$ on the positive side,
and let $F_+$ be the minimal genus Seifert surface
obtained by compressing $S_+$ along a parallel copy of $\alpha$ on $S_+$.
Then $F_-:=h(F_+)$ is obtained from the parallel copy $h(S_+)$ of $S$ on the negative side
through compression along a parallel copy of $h(\alpha)\subset h(S_+)$.
The minimal genus Seifert surfaces $F_+$ and $F_-=h(F_+)$ are disjoint, and $S$ lies in
a region between them.
As noted in Example \ref{example:8-3}, $K=8_3$ has precisely two
minimal genus Seifert surfaces up to equivalence,
and the Kakimizu complex $MS(K)$ consists of a single edge and two vertices.
The automorphism $h_*$ of $MS(K)$ induced by $h$ is the reflection in the center
of the $1$-simplex.
}
\end{example}
Finally,
we prove Proposition \ref{prop:fibered}
for fibered knots.
\begin{proof}[Proof of Proposition \ref{prop:fibered}]
Let $p:E(K) \to S^1$ be the fibering whose fibers are
minimal genus Seifert surfaces for $K$.
By the result of Tollefson \cite[Theorem 2]{Tollefson},
we may assume that the involution $h$ on $E(K)$ preserves the fibering
and that the involution $\check h$ on $S^1$ induced from $h$
is given by $\check h(z)=\bar z$,
where we identify $S^1$ with the unit circle on the complex plane.
Then the inverse image $p^{-1}(\pm 1)$ gives a pair of
$h$-invariant Seifert surfaces for $K$ with genus $g(K)$.
Since these give Seifert surfaces
for the two marked strongly invertible knots associated with $(K,h)$,
we have $g(K,h,\delta)=g(K)$ as desired.
\end{proof}
\section{Equivariant $4$-genus}\label{sec:fourgenus}
In this section,
we review old and new studies of
equivariant $4$-genera of symmetric knots.
For a periodic knot or a strongly invertible knot $K$,
one can define the {\it equivariant $4$-genus}\index{genus!equivariant $4$-genus}
$\tilde g_4(K)$
to be the minimum of the genera of smooth surfaces
in $B^4$ bounded by $K$ that are invariant under a periodic diffeomorphism of $B^4$
extending the periodic diffeomorphism realizing the symmetry of $K$.
(Here, the symbol expressing the symmetry is suppressed in the symbol $\tilde g_4(K)$.)
Of course, $\tilde g_4(K)$ is bounded below by the
$4$-genus $g_4(K)$\index{genus!$4$-genus},
and it is invariant under equivariant cobordism.
The equivariant cobordism of periodic knots was
studied by Naik \cite{Naik1997},
where she gave criteria
for a given periodic knot to be equivariantly slice,
in terms of the linking number and the homology
of the double branched covering.
By using the criteria, she presented examples of slice periodic knots which are not equivariantly slice.
(See Davis-Naik \cite{Davis-Naik} and Cha-Ko \cite{Cha-Ko}
for further development.)
The equivariant cobordism\index{equivariant cobordism}\index{cobordism!equivariant cobordism}
of strongly invertible knots was studied by
the third author \cite{Sakuma1986}, where
the notion of directed strongly invertible knots was introduced
so that the connected sum is well-defined, and it was observed that
the set $\tilde\mathcal{C}$ of directed strongly invertible knots
modulo equivariant cobordism forms
a group
with respect to the connected sum.
He then introduced a polynomial invariant $\eta:\tilde\mathcal{C}\to \mathbb{Z}\langle t\rangle$
which is a group homomorphism,
and presented a slice strongly invertible knot that is not equivariantly slice.
We note that it was recently proved by
Prisa \cite{Prisa} that the group $\tilde\mathcal{C}$ is not abelian.
These two old works show that there is a gap between the (usual) $4$-genus $g_4(K)$
and the equivariant $4$-genus $\tilde g_4(K)$
for both periodic knots and strongly invertible knots.
In a recent series of works
\cite{Boyle-Issa2021a, Boyle-Issa2021b, Alfieri-Boyle, Boyle-Musyt,
Boyle-Chen2022a, Boyle-Chen2022b},
Boyle and his coworkers
launched a project to systematically study the
equivariant $4$-genera of symmetric knots.
It should be noted that they treat not only periodic/strongly-invertible knots,
but also freely periodic knots and strongly negative amphicheiral knots.
In \cite[Theorems 2 and 3]{Boyle-Issa2021a},
Boyle and Issa gave a lower bound of $\tilde g_4(K)$
of a periodic knot $K$
in terms of the signatures
of $K$ and its quotient knot,
and showed that the gap between $g_4(K)$ and $\tilde g_4(K)$ can be arbitrarily large.
They also introduced the notion of the butterfly $4$-genus $\widetilde{bg}_4(K)$
of a directed (or marked) strongly invertible knot $K$,
gave an estimate of $\widetilde{bg}_4(K)$ in terms of the $g$-signature,
and showed that the gap between $g_4(K)$ and $\widetilde{bg}_4(K)$ can be arbitrarily large
\cite[Theorem 4]{Boyle-Issa2021a}.
In \cite{Dai-Mallick-Stoffregen},
Dai, Mallick and Stoffregen
introduced equivariant concordance invariants of strongly invertible knots
using knot Floer homology,
and showed that
the gap between the equivariant $4$-genus $\tilde g_4(K)$ and the genus $g(K)$
for marked strongly invertible knots can be arbitrarily large,
answering the question \cite[Question 1.1]{Boyle-Issa2021a} posed by Boyle and Issa.
In fact, they constructed a homomorphism from $\tilde\mathcal{C}$
to a certain group $\mathfrak{K}$, which is not a priori abelian.
By using the invariant,
they proved that, for
the knot $K_n$ defined as the connected sum
$
K_n:=(T_{2n,2n+1}\# T_{2n,2n+1})\# -(T_{2n,2n+1}\# T_{2n,2n+1})
$,
the equivariant $4$-genus $\tilde g_4(K_n)$,
with respect to some strong inversion,
is at least $2n-2$ \cite[Theorem 1.4]{Dai-Mallick-Stoffregen},
whereas $g_4(K_n)=0$.
Here $T_{2n,2n+1}$ is the torus knot of type $(2n,2n+1)$,
and \lq\lq$-$'' denotes the reversed mirror image.
On the other hand,
since $K_n$ is fibered,
the ($3$-dimensional) equivariant genera of
the marked strongly invertible knots associated with $K_n$
are equal to the genus $g(K_n)=4n(2n-1)$
(see Proposition \ref{prop:fibered}),
which is much bigger than the lower bound $2n-2$ for $\tilde g_4(K_n)$.
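As a quick sanity check on the last genus value (our computation, not part of the original text), recall that $g(T_{p,q})=(p-1)(q-1)/2$ and that the genus is additive under connected sum and unchanged by taking the reversed mirror image:

```latex
\[
g(T_{2n,2n+1})=\frac{(2n-1)(2n)}{2}=n(2n-1),
\qquad
g(K_n)=4\,g(T_{2n,2n+1})=4n(2n-1).
\]
```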
As far as the authors know, there is no known algebraic invariant
that gives an effective estimate of the ($3$-dimensional) equivariant genera
of marked strongly invertible knots,
though the ($3$-dimensional) genus of any knot can be estimated by the Alexander polynomial
and moreover it is determined by the Heegaard Floer homology
(Ozsvath-Szabo \cite[Theorem 1.2]{Ozsvath-Szabo}).
As is noted in \cite{Dai-Mallick-Stoffregen},
there has been a renewed interest in strongly invertible knots
from the viewpoint of more modern invariants
(see Watson \cite{Watson} and Lobb-Watson \cite{Lobb-Watson}).
Thus we would like to pose the following question.
\begin{question}
\label{question1}
{\rm
Is there a computable algebraic invariant of a marked strongly invertible knot
that gives an effective estimate of the equivariant genus,
or more strongly, determines it?
In other words, is there a computable,
algebraically defined,
integer-valued function
$I$:
$(K,h,\delta)\mapsto I(K,h,\delta)$
on the set of all marked strongly invertible knots up to equivalence,
such that $g(K,h,\delta) \ge I(K,h,\delta)$
and that it does not descend to a function on $\tilde\mathcal{C}$?
}
\end{question}
The last requirement means that we have a chance to have a nontrivial estimate of
$g(K,h,\delta)$ by using $I$ even if the equivariant $4$-genus
is strictly smaller than the equivariant genus.
If we drop this requirement, then
such invariants are constructed
in the above mentioned work by
Dai, Mallick and Stoffregen \cite{Dai-Mallick-Stoffregen}.
If we restrict to the marked strongly invertible knots $(K,h,\delta)$
such that
the constituent knot $\check K=k\cup \check \delta$ of
the quotient $\theta$-curve $\theta(K,h)$ is alternating,
then
the works by Kalfagianni and Lee \cite{Kalfagianni-Lee},
Ito and Takimura \cite{Ito-Takimura2020a, Ito-Takimura2020b}
and {Kindred} \cite{Kindred}
on crosscap numbers of alternating links
give such invariants.
In fact, their results together with the classical results
by Murasugi \cite{Murasugi} and Crowell \cite{Crowell}
on the genera of alternating links
estimate/determine the band number
$b(\check K)=\min(2g(\check K),\gamma(\check K))$
of the alternating knot $\check K$
in terms of the Jones polynomial of $\check K$.
Since $g(K,h,\delta)\ge b(\check K)$ by Corollary \ref{cor:band-number},
the function $I$ defined by
$I(K,h,\delta):=b(\check K)$
on the set of
marked strongly invertible knots $(K,h,\delta)$
with alternating $\check K$
satisfies the desired property.
We note that all of the above mentioned works
\cite{Kalfagianni-Lee, Ito-Takimura2020a, Ito-Takimura2020b, Kindred}
depend on the geometric study due to
Adams and Kindred \cite{Adams-Kindred}
which gives an algorithm for determining the crosscap numbers
of prime alternating links.
We also note that Burton and Ozlen \cite{Burton-Ozlen}
present an algorithm that utilizes normal surface theory and integer programming
to determine the crosscap numbers of generic knots.
However, as far as we know, there are no computable algebraic invariants
which give lower bounds of crosscap numbers of generic knots.
Moreover, what we really need to estimate is
the smallest Betti number of the spanning surfaces for $\check K$
that satisfy the conditions in Proposition \ref{prop:band-number}
(cf. Proposition \ref{prop:lifting}).
{\bf Acknowledgements}\
M.H. and M.S. learned the theory of divides
through a lecture by A'Campo in Osaka in 1999,
and R.H. learned it
through a lecture by A'Campo's former student
Masaharu Ishikawa in Hiroshima in 2015.
The theory of divides and discussions with A'Campo
have been sources of inspiration for the authors.
The authors would like to thank Professor Norbert A'Campo
for invaluable discussions and encouragement.
They would also like to thank the anonymous referee
for his or her valuable comments and suggestions
that helped them to improve the exposition.
M.H. is supported by JSPS KAKENHI JP18K03296.
M.S. is supported by JSPS KAKENHI JP20K03614
and by Osaka Central Advanced Mathematical Institute
(MEXT Joint Usage/Research Center on Mathematics and Theoretical Physics JPMXP0619217849).
\end{document}
\begin{document}
\title[Lie superalgebras of supermatrices of complex
size]{Lie superalgebras of supermatrices of complex
size. Their generalizations and related integrable systems}
\author{Pavel Grozman, Dimitry Leites}
\address{Dept. of Math., Univ. of Stockholm, Roslagsv. 101,
Kr\"aftriket hus 6, S-106 91, Stockholm,
Sweden\\e-mail:mleites\@matematik.su.se}
\thanks{We are thankful to V.~Kornyak (JINR, Dubna) who checked the
generators and relations with an independent program and compared
convenience of the Serre relations with that of our ones. Financial
support of the Swedish Institute and NFR is gratefully acknowledged.
We are thankful to B.~Feigin and Shi Kangjie-laoshi for their shrewd
questions and to S.~Shnider, G.~Post and M.~Vasiliev for the timely
information.}
\keywords {Defining relations, principal embeddings, Lie superalgebra,
Schr\"odinger operator, matrices of complex size, Gelfand--Dickey
bracket, KdV hierarchy, $W$-algebras, quantized Lie algebras}
\subjclass{17B01, 17A70; 17B35, 17B66}
\begin{abstract} We distinguish a class of simple filtered Lie
algebras $LU_\fg(\lambda)$ of polynomial growth with increasing
filtration and whose associated graded Lie algebras are not simple.
We describe presentations of such algebras. The Lie algebras
$LU_\fg(\lambda)$, where $\lambda$ runs over the projective space of
dimension equal to the rank of $\fg$, are quantizations of the Lie
algebras of functions on the orbits of the coadjoint representation of
$\fg$.
The Lie algebra $\fgl(\lambda)$ of matrices of complex size is the
simplest example; it is $LU_{\fsl(2)}(\lambda)$. The dynamical
systems associated with it in the space of pseudodifferential
operators in the same way as the KdV hierarchy is associated with
$\fsl(n)$ are those studied by Gelfand--Dickey and Khesin--Malikov.
For $\fg\neq \fsl(2)$ we get generalizations of $\fgl(\lambda)$ and
the corresponding dynamical systems, in particular, their superized
versions. The algebras $LU_{\fsl(2)}(\lambda)$ possess a trace and an invariant
symmetric bilinear form, hence, with these Lie algebras associated are
analogs of the Yang-Baxter equation, KdV, etc.
Our presentation of $LU_{\fs}(\lambda)$ for a simple $\fs$ is related to a
presentation of $\fs$ in terms of a certain pair of generators. For
$\fs=\fsl(n)$ there are just 9 such relations.
\end{abstract}
\maketitle
This is our paper published in: E.~Ram\'irez de Arellano et al.
(eds.), Proc. Internatnl. Symp. Complex Analysis and Related
Topics, Mexico, 1996, Birkh\"auser Verlag, 1999, 73--105. We just wish
to make it more accessible. Here we made minor corrections, e.g.,
replaced $\fp\fgl(\lambda)$ with $\fsl(\lambda)$: to write
$\fp\fgl(\lambda)$ is correct, but notation $\fsl(\lambda)$ is closer
to its finite dimensional particular case.
\section*{\S 0. Introduction}
This is an expanded transcript of the talk given at the
International Symposium on Complex Analysis and Related Topics,
Cuernavaca, Mexico, November 18 -- 22, 1996. We are thankful to
A.~Turbiner and N.~Vasilevski for hospitality.
\ssec{0.0. History} About 1966, V.~Kac and B.~Weisfeiler began the
study of simple {\it filtered} Lie algebras of {\it polynomial
growth}. Kac first considered the $\Zee$-{\it graded} Lie algebras
associated with the filtered ones and classified {\it simple graded}
Lie algebras of {\it polynomial growth} under a technical assumption
and conjectured the inessential nature of the assumption. It took
more than 20 years to get rid of the assumption: see very complicated
papers by O.~Mathieu, cf. \cite{K} and references therein. For a
similar list of simple $\Zee$-graded Lie {\it super}algebras of
polynomial growth see \cite{KS}, \cite{LSc}.
The Lie algebras Kac distinguished (or rather the algebras of
derivations of their nontrivial central extensions, the {\it
Kac--Moody} algebras) proved very interesting in applications. These
algebras aroused such interest that the study of filtered algebras was
arrested for two decades. Little by little, however, the simplest
representative of the new class of simple filtered Lie superalgebras
(of polynomial growth), namely, the Lie algebra $\fgl(\lambda)$ of
matrices of complex size, and its projectivization, i.e., the quotient
modulo the constants, $\fp\fgl(\lambda)$, drew its share of attention
\cite{F}, \cite{KM}, \cite{KR}.
While we typed this paper, Shoikhet \cite{Sh} published a description
of representations of $\fgl(\lambda)$; we are thankful to M.~Vasiliev
who informed us of still other applications of generalizations of
$\fgl(\lambda)$, see \cite{BWV}, \cite{KV}.
This paper begins a systematic study of a new class of Lie algebras:
simple filtered Lie algebras of polynomial growth (SFLAPG) for which
the graded Lie algebras associated with the filtration considered are
not simple; $\fsl(\lambda)$ is our first example. Actually, an
example of a Lie algebra of class SFLAPG was known even before the
notion of Lie algebras was introduced. Indeed, the only deformation
(physicists call it {\it quantization}) $Q$ of the Poisson Lie algebra
$\fpo(2n)$ sends $\fpo(2n)$ into $\fdiff(n)$, the Lie algebra of
differential operators with polynomial coefficients; the restriction
of $Q$ to $\fh(2n)=\fpo(2n)/center$, the Lie algebra of Hamiltonian
vector fields, sends $\fh(2n)$ to the projectivization
$\fp\fdiff(n)=\fdiff(n)/\Cee \cdot 1$ of $\fdiff(n)$. The Lie algebra
$\fp\fdiff(n)$ escaped Kac's classification, though it is the deform
of an algebra from his list, because its intrinsically natural
filtration given by $\deg q_i=-\deg \partial_{q_{i}}=1$ is not of
polynomial growth while the graded Lie algebra associated with the
filtration of polynomial growth (given by $\deg q_i=\deg
\partial_{q_{i}}=1$) is not simple.
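A minimal illustration of this point for $n=1$ (our sketch, in the notation above): the monomials $q^{a}\partial_q^{b}$ span $\fdiff(1)$, and the two choices of degree behave very differently:

```latex
% With \deg q=1, \deg\partial_q=-1 the homogeneous component of any
% fixed degree d is infinite dimensional, e.g., for d=0:
\[
1,\; q\partial_q,\; q^{2}\partial_q^{2},\; q^{3}\partial_q^{3},\;\dots
\]
% so this filtration is not of polynomial growth. With
% \deg q=\deg\partial_q=1 the degree-d component is spanned by the
% d+1 monomials q^{a}\partial_q^{b} with a+b=d, so the growth is
% polynomial --- but the associated graded algebra is not simple.
```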
Observe that from the point of view of dynamical systems the Lie
algebra $\fdiff(n)$ is not very interesting: it does not possess a
nondegenerate symmetric bilinear form; we will consider its
subalgebras that do.
\vskip 0.2 cm
In what follows we will usually denote the associative (super)algebras
by Latin letters; the Lie (super)algebras associated with them by
Gothic letters; e.g., $\fgl(n)=L(\Mat(n))$, $\fdiff(n)=L(\Diff(n))$,
where the functor $L$ replaces the dot product with the bracket.
\ssec{0.1. The construction. Problems related} Each of our Lie
algebras (and Lie superalgebras) $LU_\fg(\lambda)$ is realized as a
quotient of the Lie algebra of global sections of the sheaf of twisted
$D$-modules on the flag variety, cf. \cite{Ka}, \cite{Di}. The general
construction consists of the preparatory step 0), the main steps 1)
and 2) and two extra steps 3) and 4).
We distinguish two cases: A) $\dim \fg<\infty$ and $\fg$ possesses a
Cartan matrix and B) $\fg$ is a simple vectorial Lie (super)algebra.
Let $\fg=\fg_-\oplus
\fh\oplus\fg_+$, where $\fg_+=
\mathop{\oplus}\limits_{\alpha>0}\fg_\alpha$ and $\fg_-=
\mathop{\oplus}\limits_{\alpha<0}\fg_\alpha$, be one of the simple
$\Zee $-graded Lie algebras of polynomial growth, either finite
dimensional or of vector fields, represented as the sum of its maximal
torus (usually identical with the Cartan subalgebra) $\fh$ and the
root subspaces $\fg_\alpha$ corresponding to an order in the set $R$
of roots.
Observe that each order of $R$ is in one-to-one correspondence with a
system of simple roots. For the finite dimensional Lie algebras $\fg$
all systems of simple roots are equivalent, the equivalence is
established by the Weyl group. For Lie superalgebras and infinite
dimensional Lie algebras of vector fields there are inequivalent
systems of simple roots; nevertheless, there is an analog of the Weyl
group and the passage from system to system is described in \cite{PS}.
For vectorial Lie algebras and Lie superalgebras, even the dimension
of the superspaces $X=(\fg_-)^*$ associated with systems of simple
roots can vary. It is not clear if only {\it essential} (see
\cite{PS}) systems of simple roots are essential in the construction
of Verma modules (roughly speaking, each Verma module is isomorphic to
the space of functions on $X$) in which we will realize
$LU_{\fg}(\lambda)$, but hopefully not all.
\underline{Step 0): From $\fg$ to $\tilde\fg$}. From representation
theory it is clear that there exists a realization of the elements of
$\fg$ by differential operators of degree $\leq 1$ on the space
$X=(\fg_-)^*$. The realization has rank $\fg$ parameters (coordinates
$\lambda=(\lambda_1, \dots , \lambda_n)\in\fh^*$ of the highest weight
of the $\fg$-module $M^\lambda$). For the algorithms of construction
and its execution in some cases see \cite{BMP}, \cite{B}, \cite{BGLS}.
Let $\tilde\fg$ be the image of $\fg$ with respect to this
realization. Let $\tilde S^{\bcdot}(\tilde\fg)$ be the associative
subalgebra generated by $\tilde\fg$. Clearly,
$\tilde S^{\bcdot}(\tilde\fg)\subset \fdiff(\fg_-)$. Set
$$
U_\fg(\lambda)=\tilde
S^{\bcdot}(\tilde\fg)/J(\lambda), \text{ where $J(\lambda)$ is the
maximal ideal}.
$$
Observe that $J(\lambda)=0$ for generic $\lambda$.
Roughly speaking, $U_\fg(\lambda)$ is \lq\lq$\Mat$''$(L^\lambda)$, where
$L^\lambda$ is the quotient of $M^\lambda$ modulo the maximal
submodule $I(\lambda)$ (it can be determined and described with the
help of the Shapovalov form, see \cite{K}) and $\tilde
S^{\bcdot}(\tilde\fg)$ is the subalgebra generated by $\tilde\fg$ in
the symmetric algebra of $\tilde\fg$ modulo the relations between
differential operators. Clearly, $\tilde S^{\bcdot}(\tilde\fg)$ is
smaller than $S^{\bcdot}(\fg)$ due to the relations between the
differential operators that span $\tilde\fg$.
To explicitly describe the generators of $J(\lambda)$ is a main
technical problem. We solve it in this paper for $\rk \fg=1$. The
general case will be considered elsewhere.
\underline{Step 1) From $U_\fg(\lambda)$ to $LU_\fg(\lambda)$} Recall
that $LU_\fg(\lambda)$ is the Lie algebra whose space is the same as
that of $U_\fg(\lambda)$ and the bracket is the commutator.
\underline{Step 2) Montgomery's functor} S.~Montgomery suggested
\cite{M} a construction of simple Lie superalgebras:
$$
\text{$Mo$: a central simple $\Zee$-graded algebra
$\mapsto$ a simple Lie superalgebra.}\eqno{(Mo)}
$$
Observe that the associative algebras $U_\fg(\lambda)$ constructed
from simple Lie algebras $\fg$ are central simple. In \cite{LM} we
intend to consider {\it Montgomery superalgebras} $Mo(U_\fg(\lambda))$
and compare them with the Lie superalgebras $LU_\fs(\lambda)$
constructed from Lie superalgebras $\fs$. The Montgomery functor often
produces new Lie superalgebras, e.g., if $\fg$ is equal to $\ff_4$ or
$\fe_i$, though not always: $Mo(U_{\fsl(2)}(\lambda))\cong
LU_{\fosp(1|2)}(\lambda)$.
\underline{Step 3) Twisted versions} An outer automorphism $a$ of
$\fG=LU_\fg(\lambda)$ or $Mo(U_\fg(\lambda))$ might single out a new
simple Lie subsuperalgebra $a_0(\fG)$, the set of fixed points of
$\fG$ under $a$.
For example, the intersection of $LU_{\fsl(2)}(\lambda)$ with the set
of skew-adjoint differential operators is a new Lie algebra
$\fo/\fsp(\lambda)$ while the intersection of
$Mo(U_{\fsl(2)}(\lambda))=LU_{\fosp(1|2)}(\lambda)$ with the set of
superskew-adjoint operators is the Lie superalgebra
$\fo\fsp(\lambda+1|\lambda)$. For the description of the outer
automorphisms of $\fgl(\lambda)$ see \cite{LAS}. In general even the
definition is unclear.
\underline{Step 4) Deformations} The deformations of Lie algebras and
Lie superalgebras
obtained via steps 1) -- 3) may lead to new algebras of class SFLAPG,
cf. \cite{Go}. A.~Sergeev posed the following interesting problem:
{\sl what Lie algebras and Lie superalgebras can we get by applying the
above constructions 1) -- 3) to the quantum deformation $U_q(\fg)$ of
$U(\fg)$?}
\footnotesize
\begin{Remark} The above procedure can be also applied to (twisted)
loop algebras $\fg=\fh^{(k)}$ and the stringy algebras; the result
will be realized with differential operators of infinitely many
indeterminates; they resemble vertex operators. The algebra
$LU_{\fh^{(k)}}(\lambda)$ is a polynomial one but not of
polynomial growth.
\end{Remark}
\normalsize
\ssec{0.2. Another description of $U_\fg(\lambda)$} For the finite
dimensional simple $\fg$ there is an alternative description of
$U_\fg(\lambda)$ as the quotient of $U(\fg)$ modulo the central
character, i.e., modulo the ideal $C_{(\lambda)}$ generated by rank
$\fg$ elements $C_i-k_i(\lambda)$, where $C_i$ is the $i$-th
Casimir element and $k_i(\lambda)$ is the value of $C_i$ on $M^\lambda$
(computed by Harish-Chandra and Berezin). This
description of $U_\fg(\lambda)$ goes back, perhaps, to Kostant, cf.
\cite{Ka}. From this description it is clear that, after the shift by
$\rho$, the half sum of positive roots, we get
$$
LU_\fg(\sigma(\lambda))\cong LU_\fg(\lambda)\quad \text{for any}\;
\sigma\in W(\fg).
$$
A similar isomorphism holds for $Mo(U_\fg(\lambda))$. In particular,
over $\Ree$, it suffices to consider the $\lambda$ that belong to one
Weyl chamber only.
For vectorial Lie algebras the description of $U_\fg(\lambda)$ as
$U(\fg)/C_{(\lambda)}$ is inapplicable. For example, let $\fg=\fvect(n)$.
The highest weight Verma modules are (for the standard filtration of
$\fg$) identical with Verma modules over $\fsl(n+1)$, but the center
of $U(\fvect(n))$ consists of constants only. It is a research
problem to describe the generators of $C_{(\lambda)}$ in such cases.
Though the center of $U(\fg)$ is completely described by A.~Sergeev
for all simple finite dimensional Lie superalgebras \cite{S}, the
problem
{\sl describe the {\it generators of the ideal} $C_{(\lambda)}$}
is open for Lie superalgebras $\fg$ even if $\fg$ is of the form
$\fg(A)$ (i.e., if $\fg$ has Cartan matrix $A$) different from
$\fosp(1|2n)$: for them the center of $U(\fg)$ is not noetherian and
it is {\it a priori} unclear if $C_{(\lambda)}$ has infinitely or
finitely many generators. (As we will show elsewhere, $C_{(\lambda)}$
is generated for Lie superalgebras $\fg$ of the form $\fg(A)$ by the
first $\rk \fg$ Casimir operators and finitely many extra elements.
For algebras $\fg$ of other types we do not even have a conjecture.)
\ssec{0.3. Our result} The main result is the statement of the fact
that the above constructions 1) -- 4) yield a new class of simple Lie
(super) algebras of polynomial growth (some of which have nice
properties).
Observe that our Lie algebras $LU_{\fg}(\lambda)$ are quantizations of
the Lie algebras considered in \cite{DGS} which are also of class
SFLAPG and are contractions of our algebras. Indeed, Donin, Gurevich
and Shnider consider the Lie algebras of functions on the orbits of
the coadjoint representation of $\fg$ with respect to the Poisson
bracket. These DGS Lie algebras are naturally realized as the
quotients of the polynomial algebra modulo an inhomogeneous ideal that
singles out the orbit; we realize the result of quantization of DGS Lie
algebras by differential operators.
In this paper we consider the simplest case of the superization of
this construction: replace $\fsl(2)$ with $\fosp(1|2)$. The cases of
higher ranks will be considered elsewhere. The Khesin--Malikov
construction \cite{KM} can be applied almost literally to
the Lie (super)algebras $LU_\fg(\lambda)$ such that $\fg$
admits a (super)principal embedding, see, e.g., \cite{GL2}.
Our main theorems: 2.6 and 4.3. The structure of the algebras
$LU_\fg(\lambda)$ (real forms, automorphisms, root systems) will be
described elsewhere, see e.g., \cite{LAS}.
Observe that while the polynomial Poisson Lie algebra has only one
class of nontrivial deformations and all the deformed algebras are
isomorphic, cf. \cite{LSc1}, the dimension of the space of parameters
of deformations of Lie algebras of Donin, Gurevich and Shnider is
equal to the rank of $\fg$ and all of the deforms are pairwise
nonisomorphic, generally.
\ssec{0.4. The defining relations} The notion of defining relations
is clear for a nilpotent Lie algebra. This is one of the reasons why
the most conventional way to present a simple Lie algebra $\fg$ is to
split it into the direct sum of a (commutative) Cartan subalgebra and 2
maximal nilpotent subalgebras $\fg_{\pm}$ (positive and negative).
There are about $(2\cdot\rk\fg)^2$ relations between the $2\cdot\rk
\fg$ generators of $\fg_{\pm}$. The generators of $\fg_{+}$ together
with the generators of $\fg_{-}$ generate $\fg$ as well. In $\fg$,
there are about $(3\cdot\rk\fg)^2$ relations between these generators;
the relations additional to those in $\fg_{+}$ or $\fg_{-}$, i.e.,
between the positive and the negative generators, are easy to grasp.
Though numerous, all these relations --- called {\it Serre relations}
--- are neat and this is another reason for their popularity. These
relations are good to deal with not only for humans but for computers
as well, cf. sec. 7.3.
Nevertheless, it so happens that the Chevalley-type generators and,
therefore, the Serre relations are not always available. Besides, as
we will see, there are problems in which other generators and
relations naturally appear, cf. \cite{GL2}.
Though not so transparent as for nilpotent algebras, the notion of
generators and relations makes sense in the general case. For
instance, with the principal embeddings of $\fsl (2)$ into $\fg$ one
can associate only {\bf two} elements that generate $\fg$; we call
them {\it Jacobson's generators}, see \cite{GL1}. We explicitly describe
the presentations, associated with the principal embeddings of $\fsl (2)$,
of simple Lie algebras, finite dimensional and certain
infinite dimensional; namely, the Lie algebra \lq \lq of matrices of
complex size" realized as a subalgebra of the Lie algebra $\fdiff(1)$
of differential operators in 1 indeterminate or of $\fgl_+(\infty)$,
see \S 2.
The relations obtained are rather simple, especially for
nonexceptional algebras. In contradistinction with the conventional
presentation there are just 9 relations between Jacobson's generators
for $\fsl(\lambda)$ series (actually, 8 if
$\lambda\in\Cee\setminus\Zee $) and not many more for the other
algebras.
It is convenient to present $\fsl(\lambda)$ as the Lie algebra
generated by two differential operators: $X^+=u^2\frac{d}{du}-(\lambda
-1)u$ and $Z_{\fsl}= \frac{d^2}{du^2}$; its Lie subalgebra
$\fo{/}\fsp(\lambda)$ of skew-adjoint operators --- a hybrid of Lie
algebras of series $\fo$ and $\fsp$ (do not confuse with the Lie
superalgebra of $\fosp$ type!) --- is generated by the same $X^+$ and
$Z_{\fo{/}\fsp}= \frac{d^3}{du^3}$; to make relations simpler, we
always add the third generator $X^-=-\frac{d}{du}$. For integer
$\lambda$ each of these algebras has an ideal of finite codimension
and the quotient modulo the ideal is the conventional $\fsl(n)$ (for
$\lambda=n$ and $\fgl(\lambda)$) and either $\fo(2n+1)$ (for $\lambda=2n+1$) or $\fsp(2n)$ (for
$\lambda=2n$), respectively, for $\fo/\fsp(\lambda)$.
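As a consistency check (ours, not part of the original text), $X^{+}$ and $X^{-}$ indeed span an $\fsl(2)$-triple inside $\fdiff(1)$, realizing the principal embedding behind this presentation; using $[\frac{d}{du},u]=1$ one computes:

```latex
\[
H:=[X^{+},X^{-}]=2u\frac{d}{du}-(\lambda-1),
\qquad
[H,X^{\pm}]=\pm 2X^{\pm},
\]
% so (X^+, H, X^-) is an sl(2)-triple; the parameter \lambda enters
% only through the scalar term of H.
```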
In this paper we superize \cite{GL1}: replace $\fsl(2)$ with its
closest relative, $\fosp(1|2)$. We denote by
$\fsl(\lambda|\lambda+1)$ the Lie superalgebra generated by
$\nabla^+=x\partial_\theta +x\theta\partial _x-\lambda\theta$,
$Z=\partial_x\partial_\theta - \theta {\partial_x}^2$ and
$U=\partial_\theta - \theta \partial_x$, where $x$ is an even
indeterminate and $\theta$ is an odd one. We define
$\fosp(\lambda+1|\lambda)$ as the Lie subsuperalgebra of
$\fsl(\lambda|\lambda+1)$ generated by $\nabla^+$ and $Z$. The
presentations of $\fsl(\lambda|\lambda+1)$ and
$\fosp(\lambda+1|\lambda)$ are associated with the {\it
superprincipal} embeddings of $\fosp(1|2)$. For $\lambda\in\Cee
\setminus \Zee $ these algebras are simple. For integer $\lambda=n$
each of these algebras has an ideal of finite codimension and the
quotient modulo the ideal is the conventional $\fsl(n|n+1)$ and
$\fosp(2n+1|2n)$, respectively.
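A small consistency check (ours): the odd generator $U=\partial_\theta-\theta\partial_x$ is a square root of the even translation $-\partial_x$, as one expects from the superprincipal $\fosp(1|2)$. Acting on a superfunction $f(x)+\theta g(x)$:

```latex
\[
U(f+\theta g)=g-\theta f',
\qquad
U^{2}(f+\theta g)=-f'-\theta g'=-\partial_x(f+\theta g),
\]
% hence [U,U]=2U^{2}=-2\partial_x.
```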
\ssec{0.5. Some applications} (1) Integrable systems like continuous
Toda lattice or a generalization of the Drinfeld--Sokolov construction
are based on the superprincipal embeddings in the same way as the
Khesin--Malikov construction \cite{KM} is based on the principal
embedding, cf. \cite{GL2}.
(2) To $q$-quantize the Lie algebras of type $\fsl(\lambda)$ \`a la
Drinfeld, using only Chevalley generators, is impossible; our
generators indicate a way to do it.
\ssec{0.6. Related topics} We would like to draw the attention of the
reader to several other classes of Lie algebras. One of the reasons
is that, though some of these classes have empty intersections with
the class of Lie algebras we consider here, they naturally spring to
mind and are, perhaps, deformations of our algebras in some, yet
unknown, sense.
$\bullet$ {\it Krichever--Novikov algebras}, see \cite{SH} and refs.
therein. The KN-algebras are neither graded nor filtered (at least,
with respect to the degree usually considered). Observe that the same is true
of our algebras $LU_\fg(\lambda)$ with respect to the degree induced from $U(\fg)$, so
a search for a better grading is a tempting problem.
$\bullet$ {\it Odessky} or {\it Sklyanin algebras}, see \cite{FO} and
refs. therein.
$\bullet$ {\it Continuum algebras}, see \cite{SV} and refs. therein.
In particular cases these algebras coincide with Kac--Moody or loop
algebras, i.e., have a continuum analog of the Cartan matrix. But to
suspect that $\fgl(\lambda)$ has a Cartan matrix is wrong, see sec.
2.2. Nevertheless, in the simplest cases, if $\rk \fg=1$, the algebras
$LU_\fg(\lambda)$ and their ``relatives'' obtained in steps 1) -- 3)
(and, perhaps, 4)) of sec. 0.1 do possess Saveliev-Vershik's nonlinear
{\it Cartan operator} which replaces the Cartan matrix.
\section*{\S 1. Recapitulation: finite dimensional simple Lie algebras}
This section is a continuation of \cite{LP}, where the case of the
simplest base (system of simple roots) is considered and where
non-Serre relations for simple Lie algebras first appear, though in a
different setting. This paper is also the direct superization of
\cite{GL1}; we recall its results. For presentations of Lie superalgebras
with Cartan matrix via Chevalley generators, see \cite{LS}, \cite{GL3}.
What are ``natural'' generators and relations for a {\it simple finite
dimensional Lie algebra}? The answer is important in problems where
one needs to identify an algebra $\fg$ given its generators and
relations. (Examples of such problems are connected with
Estabrook--Wahlquist prolongations, Drinfeld's quantum algebras,
symmetries of differential equations, integrable systems, etc.)
\ssec{1.0. Defining relations} If $\fg$ is nilpotent, the problem of
its presentation has a natural and unambiguous solution:
representatives of the homology $H_1(\fg)\cong \fg/ [\fg, \fg]$ are
the generators of $\fg$ and the elements from $H_2(\fg)$ correspond to
relations.
On the other hand, if $\fg$ is simple, then $\fg =[\fg, \fg]$ and
there is no ``most natural'' way to select generators of $\fg$. The
choice of generators is not unique.
Still, among algebras with the property $\fg =[\fg, \fg]$ the simple
ones are distinguished by the fact that their structure is very well
known. By trial and error people discovered that for finite
dimensional simple Lie algebras there are certain ``first among
equals'' sets of generators:
1) {\it Chevalley generators} corresponding to positive and negative
simple roots;
2) a pair of generators that generate any finite dimensional simple
Lie algebra associated with the {\it principal $\fsl(2)$-subalgebra}
(considered below).
The relations associated with Chevalley generators are well-known, see
e.g., \cite{OV}, \cite{K}. These relations are called {\it Serre relations}.
The possibility to generate any simple finite dimensional Lie algebra
by two elements was first claimed by N.~Jacobson; for the first (as
far as we know) proof see \cite{BO}. We do not know what generators
Jacobson had in mind; in \cite{BO} linear combinations of positive and
negative root vectors with generic coefficients are taken for them;
nothing like the \lq\lq natural" choice that we suggest to refer to as
{\it Jacobson's generators} was ever proposed.
To generate a simple algebra with only two elements is tempting, but
nobody had yet explicitly described the relations between such
generators, perhaps because checking whether the relations between
these elements are nice-looking is impossible without a modern
computer (cf. an implicit description in \cite{F}). As far as we could
test, the relations for any other pair of generators chosen in a way
distinct from ours are too complicated. There seems to be, however,
one exception, cf. \cite{GL2}.
\ssec{1.1. The principal embeddings} There exists only one (up to
equivalence) embedding $r: \fsl(2)\tto \fg$ such that $\fg$,
considered as $\fsl(2)$-module, splits into $\rk \fg$ irreducible
modules, cf. \cite{D} or \cite{OV}. This embedding is called {\it
principal} and, sometimes, {\it minimal} because for the other
embeddings (there are plenty of them) the number of irreducible
$\fsl(2)$-modules is $>\rk \fg$. Example: for $\fg=\fsl(n)$,
$\fsp(2n)$ or $\fo(2n+1)$ the principal embedding is the one
corresponding to the irreducible representation of $\fsl(2)$ of
dimension $n$, $2n$, $2n+1$, respectively.
For completeness, let us recall what the irreducible highest weight
$\fsl(2)$-modules look like. (They are all of the form $L^{\mu}$,
where $L^{\mu}=M^{\mu}$ if $\mu\not\in \Zee _+$, and
$L^n=M^{n}/M^{-n-2}$ if $n\in \Zee _+$, and where $M^{\mu}$ is
described below.) Select the following basis in $\fsl(2)$:
$$
X^-=\begin{pmatrix} 0&0\\ -1&0 \end{pmatrix}, \quad
H=\begin{pmatrix} 1&0\\ 0&-1 \end{pmatrix}, \quad
X^+=\begin{pmatrix} 0&1\\ 0&0 \end{pmatrix}.
$$
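As a sanity check (ours, not part of the original text), the bracket relations of this basis can be verified numerically. Note that with the minus sign in $X^-$ as printed, $[X^+,X^-]=-H$; replacing $X^-$ by $-X^-$ gives the more common convention $[X^+,X^-]=H$.

```python
import numpy as np

# The printed basis of sl(2)
Xm = np.array([[0., 0.], [-1., 0.]])
H  = np.array([[1., 0.], [0., -1.]])
Xp = np.array([[0., 1.], [0., 0.]])

br = lambda a, b: a @ b - b @ a  # commutator

# [H, X^pm] = pm 2 X^pm
assert np.allclose(br(H, Xp), 2 * Xp)
assert np.allclose(br(H, Xm), -2 * Xm)
# with the minus sign chosen for X^- above, [X^+, X^-] = -H;
# this is a sign convention, not a property of sl(2)
assert np.allclose(br(Xp, Xm), -H)
```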
The $\fsl(2)$-module $M^{\mu}$ is illustrated with a graph whose nodes
correspond to the eigenvectors $l_{\mu-2i}$ of $H$ with the weight
indicated;
$$
\dots\overset{\mu-2i-2}{\circ} -\overset{\mu-2i}{\circ} -\dots
-\overset{\mu-2}{\circ} -\overset{\mu}{ \circ}
$$
the edges depict the action of $X^{\pm}$ (the action of
$X^+$ is directed to the right, that of $X^-$ to the left):
$X^-l_{\mu-2i}=l_{\mu-2i-2}$ and
$$
X^+l_{\mu-2i}=X^+((X^-)^il_{\mu})=i(\mu-i+1)l_{\mu-2i+2};\quad
X^+(l_{\mu})=0.\eqno{(1.1)}
$$
As follows from (1.1), the module $M^n$ for $n\in \Zee _+$ has an
irreducible submodule isomorphic to $M^{-n-2}$; the quotient,
obviously irreducible, as follows from the same (1.1), will be denoted
by $L^n$.
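The submodule just described is easy to check mechanically; the following sketch (our own illustration, not from the original) verifies that for $\mu=n\in\Zee_+$ the coefficient $i(\mu-i+1)$ in (1.1) vanishes exactly at $i=n+1$, so that $l_{-n-2}$ generates a submodule isomorphic to $M^{-n-2}$.

```python
# Weight-basis model of the Verma module M^mu over sl(2), following (1.1):
# X^- lowers the index, and X^+ l_{mu-2i} = i*(mu - i + 1) * l_{mu-2i+2}.
def x_plus_coeff(mu, i):
    """Coefficient of l_{mu-2i+2} in X^+ applied to l_{mu-2i}."""
    return i * (mu - i + 1)

n = 4  # any n in Z_+
# X^+ kills l_{-n-2} = l_{n-2(n+1)}: the coefficient vanishes at i = n+1,
# so l_{-n-2} is a highest weight vector inside M^n ...
assert x_plus_coeff(n, n + 1) == 0
# ... while l_n, l_{n-2}, ..., l_{-n} survive, spanning the
# (n+1)-dimensional quotient L^n
assert all(x_plus_coeff(n, i) != 0 for i in range(1, n + 1))
```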
There are principal $\fsl(2)$-subalgebras in every finite dimensional
simple Lie algebra, though, generally, not in infinite dimensional
ones, e.g., not in affine Kac-Moody algebras. The construction is as
follows. Let $X^{\pm}_1, \dots , X^{\pm}_{\rk \fg}$ be Chevalley
generators of $\fg$, i.e., the generators corresponding to simple
roots. Let the images of $X^{\pm}\in\fsl(2)$ in $\fg$ be
$$
X^-\mapsto \sum X^{-}_i;\quad X^+\mapsto \sum a_iX^{+}_i
$$
and select the $a_i$ from the relations $[[X^+, X^-], X^{\pm}]=\pm
2X^{\pm}$ true in $\fsl(2)$. For $\fg$ constructed from a Cartan
matrix $A$, there is a solution for $a_i$ if and only if $A$ is invertible.
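The coefficients $a_i$ can be exhibited concretely. For $\fg=\fsl(n)$ one solution is $a_i=i(n-i)$ (our choice here; it matches the generators listed in \S 3), and the defining relations of $\fsl(2)$ are then verified directly:

```python
import numpy as np

def E(i, j, n):
    """Matrix unit E_{ij} (1-based indices) in gl(n)."""
    m = np.zeros((n, n)); m[i - 1, j - 1] = 1.0
    return m

n = 5
# Principal sl(2) inside sl(n): X^- is the sum of the simple negative
# root vectors, X^+ = sum a_i X^+_i with a_i = i*(n - i).
Xm = sum(E(i + 1, i, n) for i in range(1, n))
Xp = sum(i * (n - i) * E(i, i + 1, n) for i in range(1, n))
H = Xp @ Xm - Xm @ Xp   # H = [X^+, X^-]

# [[X^+, X^-], X^pm] = pm 2 X^pm, the relations true in sl(2)
assert np.allclose(H @ Xp - Xp @ H, 2 * Xp)
assert np.allclose(H @ Xm - Xm @ H, -2 * Xm)
# H is the diagonal matrix diag(n-1, n-3, ..., 1-n)
assert np.allclose(H, np.diag([n + 1 - 2 * k for k in range(1, n + 1)]))
```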
In Table 1.1 a simple finite dimensional Lie algebra $\fg$ is
described as the $\fsl(2)$-module corresponding to the principal
embedding (cf. \cite{OV}, Table 4). The table introduces the number
$2k_2$ used in relations. We set $k_{1}=1$.
\ssec{Table 1.1. $\fg$ as the $\fsl(2)$-module}
$$
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|l|l|r|}
\hline
$\fg$&the $\fsl(2)$-spectrum of $\fg=L^2\oplus L^{2k_2}
\oplus L^{2k_3}
\dots$ &$2k_2$\cr
\hline
$\fsl(n)$&$L^2\oplus L^4 \oplus L^6 \dots \oplus L^{2n-2}$&4\cr
$\fo (2n+1)$, $\fsp(2n)$\; &$L^2\oplus L^6 \oplus L^{10} \dots \oplus L^{4n-2}$&6\cr
$\fo (2n)$&$L^2\oplus L^6 \oplus L^{10} \dots \oplus L^{4n-6}\quad\oplus L^{2n-2}$&6\cr
$\fg_2$&$L^2\oplus L^{10}$&10\cr
$\ff_4$&$L^2\oplus L^{10}\oplus L^{14}\oplus L^{22}$&10\cr
$\fe_6$&$L^2\oplus L^{8}\oplus L^{10}\oplus L^{14}\oplus L^{16}\oplus L^{22}$&8\cr
$\fe_7$&$L^2\oplus L^{10}\oplus L^{14}\oplus L^{18}\oplus L^{22}\oplus L^{26}\oplus
L^{34}$&10\cr
$\fe_8$&$L^2\oplus L^{14}\oplus L^{22}\oplus L^{26}\oplus L^{34}\oplus L^{38}
\oplus L^{46}\oplus L^{58}$&14\cr
\hline
\end{tabular}
$$
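A quick dimension count (our own consistency check, not part of the original) confirms Table 1.1: $\dim L^{m}=m+1$, and in each row the summands add up to $\dim\fg$.

```python
# Spectra read off Table 1.1: g = L^2 + L^{2k_2} + ...; dim L^m = m + 1.
# The dimensions of the summands must total dim g.
cases = {
    'sl(5)':  ([2, 4, 6, 8],                    5**2 - 1),
    'o(9)':   ([2, 6, 10, 14],                  4 * 9),   # dim o(2n+1) = n(2n+1), n = 4
    'sp(8)':  ([2, 6, 10, 14],                  4 * 9),   # dim sp(2n) = n(2n+1), n = 4
    'o(10)':  ([2, 6, 10, 14, 8],               5 * 9),   # dim o(2n) = n(2n-1), n = 5
    'g_2':    ([2, 10],                         14),
    'f_4':    ([2, 10, 14, 22],                 52),
    'e_6':    ([2, 8, 10, 14, 16, 22],          78),
    'e_7':    ([2, 10, 14, 18, 22, 26, 34],     133),
    'e_8':    ([2, 14, 22, 26, 34, 38, 46, 58], 248),
}
for name, (spectrum, dim_g) in cases.items():
    assert sum(m + 1 for m in spectrum) == dim_g, name
```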
One can show that $\fg$ can be generated by two elements: $x:=X^+\in
L^2=\fsl (2)$ and a lowest weight vector $z:=l_{-r}$ from an
appropriate module $L^r$ other than $L^2$ from Table 1.1. For the
role of this $L^r$ we take either $L^{2k_{2}}$ if $\fg\not=\fo(2n)$ or
the last module $L^{2n-2}$ in the above table if $\fg =\fo(2n)$.
(Clearly, $z$ is defined up to proportionality; we will assume that a
basis of $L^r$ is fixed and denote $z=t\cdot l_{-r}$ for some $t\in
\Cee $ that can be fixed at will, cf. \S 3.)
The exceptional choice for $\fo(2n)$ is occasioned by the fact that by
choosing $z\in L^{r}$ for $r\neq 2n-2$ instead, we generate
$\fo(2n-1)$.
We call the above $x$ and $z$, together with $y:=X^-\in L^2$ taken for
good measure, {\it Jacobson's generators}. The presence of $y$
considerably simplifies the form of the relations, though it slightly
increases their number. (One might think that taking $l_r$, the
element symmetric to $z$, would improve the relations even more, but
in reality just the opposite happens.)
Concerning $\fg =\fo(2n)$ see sec. 7.2.
\ssec{1.2. Relations between Jacobson's generators} First, observe
that if an ideal of a free Lie algebra is homogeneous (with respect to
the degrees of the generators of the algebra), then the number and the
degrees of the defining relations (i.e., the generators of the ideal)
are uniquely defined, provided the relations are homogeneous. This is
obvious.
A simple Lie algebra $\fg$, however, is the quotient of a free Lie
algebra $\fF$ modulo an inhomogeneous ideal $\fI$, an ideal without
homogeneous generators. Therefore, we can speak about the number and
the degrees of relations only conditionally. Our condition is the
possibility to express any element $x\in\fI$ via the generators $g_1,
\dots$ of $\fI$ by a formula of the form
$$
x=\sum [c_i,g_i],\text{ where $c_i\in \fF$ and $\deg c_i+\deg g_i\leq
\deg x$ for all $i$.} \eqno{(*)}
$$
Under condition $(*)$ the number of relations and their degrees are
uniquely determined. Now we can explain why we need an extra
generator $y$: without $y$ the weight relations would have been of
very high degree.
We divide the relations between the Jacobson generators into the types
corresponding to the number of occurrences of $z$ in them: {\bf 0}.
Relations in $L^2 = \fsl(2)$; {\bf 1}. Relations coming from the
$\fsl(2)$-action on $L^{2k_2}$; {\bf 2}. Relations coming from
$L^{2k_{1}}\wedge L^{2k_2}$; $\pmb{\geq 3}$. Relations coming from
$L^{2k_2}\wedge L^{2k_2}\wedge L^{2k_2}\wedge \dots$ with $\geq 3$
factors; among the latter relations we distinguish one --- of type
``$\pmb\infty$" --- the relation that shears the dimension. (For small
$\text{rank}\, \fg$ the relation of type $\infty$ can be of the above
types.)
Observe that, apart from relations of type $\infty$, the relations of
type $\geq 3$ are those of type 3, except for $\fe_7$, which satisfies
stray relations of types 4 and 5, cf. \cite{GL1}.
The relations of type 0 are the well-known relations in $\fsl(2)$
$$
\pmb{0.1}. \; \; [[x, y], \, x]=2x,\quad \quad \quad
\pmb{0.2}. \; \; [[x, y], \, y]=-2y.\eqno{(Rel 0)}
$$
The relations of type 1 mirror the fact that the space $L^{2k_2}$ is
the $(2k_2+1)$-dimensional $\fsl(2)$-module. To simplify notations we
denote: $z_i=(\ad x)^iz$. Then the type 1 relations are:
$$
\pmb{1.1}.\; \; [y, \, z] = 0,\; \pmb{1.2}. \; \; [[x, y], \, z] =
-2k_2z,\;
\pmb{1.3}.\; \; z_{2k_{2}+1} = 0\; \text{ with $2k_2$ from Table
1.1.}\eqno{(Rel 1)}
$$
\ssbegin{1.3}{Theorem} For the simple finite dimensional Lie algebras
all the relations between the Jacobson generators are the above
relations {\em (Rel 0), (Rel 1)} and the relations from {\em
\cite{GL1}}. \end{Theorem}
In \S 3 these relations from \cite{GL1} are reproduced for
the classical Lie algebras.
\section*{\S 2. The Lie algebra $\fsl(\lambda)$ as a quotient
algebra of $\fdiff (1)$ and a subalgebra of $\fsl_+ (\infty)$}
\ssec{2.1. $\fgl(\lambda)$ is endowed with a trace} The
Poincar\'e--Birkhoff--Witt theorem states that, as spaces,
$U(\fsl(2))\cong \Cee [X^-, H, X^+]$. We also know that to study
representations of $\fg$
is the same as to study representations of $U(\fg)$. Still, if we are
interested in irreducible representations, we do not need the whole of
$U(\fg)$ and can do with a smaller algebra, easier to study.
This observation is used now and again; Feigin applied it in \cite{F},
writing (as deciphered in \cite{PH}, \cite{GL1}, \cite{Sh}) that,
setting
$$
X^-=-\frac{d}{d u}, \quad
H=2u\frac{d}{d u}-(\lambda-1), \quad X^+=u^2\frac{d}{d
u}-(\lambda-1) u \eqno{(2.1)}
$$
we obtain a morphism of $\fsl(2)$-modules and, moreover, of
associative algebras: $U(\fsl(2))\longrightarrow \Cee [u, \frac{d}{d
u}]$. The kernel of this morphism is the ideal generated by
$\Delta-\lambda ^2+1$, where $\Delta=2(X^+X^-+X^-X^+)+H^2$. Observe
that this morphism is not an epimorphism either. The image of this
morphism is our Lie algebra of matrices of \lq\lq complex size".
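The assignment (2.1) can be checked mechanically. The following sketch (ours, with sympy) verifies the $\fsl(2)$ relations and that the Casimir element $\Delta=2(X^+X^-+X^-X^+)+H^2$ acts as the scalar $\lambda^2-1$, which is exactly why the kernel is generated by $\Delta-\lambda^2+1$.

```python
import sympy as sp

u, lam = sp.symbols('u lam')
g = sp.Function('g')(u)  # a generic function of u

# The operators of (2.1)
Xm = lambda f: -sp.diff(f, u)
H  = lambda f: 2*u*sp.diff(f, u) - (lam - 1)*f
Xp = lambda f: u**2*sp.diff(f, u) - (lam - 1)*u*f

def br(A, B, f):
    """Commutator [A, B] applied to f."""
    return sp.expand(A(B(f)) - B(A(f)))

# sl(2) relations: [H, X^pm] = pm 2 X^pm, [X^+, X^-] = H
assert sp.simplify(br(H, Xp, g) - 2*Xp(g)) == 0
assert sp.simplify(br(H, Xm, g) + 2*Xm(g)) == 0
assert sp.simplify(br(Xp, Xm, g) - H(g)) == 0

# The Casimir Delta = 2(X^+ X^- + X^- X^+) + H^2 acts as lambda^2 - 1
Delta = 2*(Xp(Xm(g)) + Xm(Xp(g))) + H(H(g))
assert sp.simplify(sp.expand(Delta) - (lam**2 - 1)*g) == 0
```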
\begin{rem*}{Remark}
In their proof of certain statements from \cite{F} that we will
recall, \cite{PH} made use of the well-known fact that the Casimir
operator $\Delta$ acts on the irreducible $\fsl(2)$-module $L^{\mu}$
(see sec 1.1) as the scalar operator of multiplication by
$\mu^2+2\mu$. The passage from \cite{PH}'s $\lambda$ to \cite{F}'s
$\mu$ is done with the help of a shift by the weight $\rho$, a half
sum of positive roots, which for $\fsl(2)$ can be identified with 1,
i.e., $(\lambda-1)^2+2(\lambda-1)=\lambda^2-1$ for $\lambda=\mu+1$.
\end{rem*}
Consider the Lie algebra $LU(\fsl(2))$ associated with the associative
algebra $U(\fsl(2))$. Set
$$
U_\lambda=U(\fsl(2))/(\Delta-\lambda ^2+1). \eqno{(2.2)}
$$
The definition directly implies that
$\fgl(-\lambda)\cong\fgl(\lambda)$, so speaking about real values of
$\lambda$ we can confine ourselves to the nonnegative values, cf.
sec. 0.2. It is easy to see that, as $\fsl(2)$-module,
$$
LU_\lambda=L^0\oplus L^2\oplus L^4\oplus\dots\oplus
L^{2n}\oplus \dots \eqno{(2.3)}
$$
It is not difficult to show (see \cite{PH} for details) that the Lie
algebra $LU_n$ for $n\in \Zee \setminus\{0\}$ contains an ideal
$J_{n}$ and the quotient $LU_n/J_{n}$ is the conventional $\fgl(n)$.
In \cite{PH} it is proved that for $\lambda \not\in \Zee\setminus\{0\}$
the Lie algebra $LU_\lambda$ has only two ideals --- the space $L^0$
of constants and its complement. Set
$$
\fp\fgl(\lambda)=\fgl(\lambda)/L^0, \text{ where }\fgl(\lambda)=
\cases LU_\lambda&\text{for $\lambda \not\in \Zee\setminus\{0\}$}\cr
LU_n/J_{n}&\text{for $n\in\Zee\setminus\{0\}$.}\endcases
\eqno{(2.4)}
$$
Observe that $\fgl(\lambda)$ is endowed with a trace. This
follows directly from (2.3) and the fact that
$$
\fgl(\lambda)\cong L^0\oplus [\fgl(\lambda), \fgl(\lambda)].
$$
Therefore, $\fp\fgl(\lambda)$ can be identified with $\fsl(\lambda)$,
the subalgebra of the traceless matrices in $\fgl(\lambda)$. We can
normalize the trace at will: for example, if we set $\tr
(id)=\lambda$, then the trace induced on the quotient of
$LU_{\fsl(2)}(n)$ modulo $J(n)$ coincides with the usual trace on
$\fgl(n)$ for $n\in\Nee$.
Another way to introduce the trace was suggested by J. Bernstein. We
decipher its description in \cite{KM} as follows. Look at the image
of $H\in\fsl(2)$ in $\fgl(M^\lambda)$. Bernstein observed that though
the trace of the image is an infinite sum, the sum of the first $D+1$
summands is a polynomial in $D$, call it $\tr (H)$. It is easy to see
that $\tr (H)$ vanishes if $D=\lambda$.
Similarly, for {\it any} $x\in LU_{\fg}(\lambda)$ considered as an
element of $\fgl(M^\lambda)$ set
$$
\tr(x; D)=\mathop{\sum}\limits_{i=1}^{D}x_{ii}.
$$
Let $D(\lambda)$ be the dimension of the irreducible
finite dimensional $\fg$-module with highest weight $\lambda$; for an
exact formula see \cite{D}, \cite{OV}. Set $\tr(x)=\tr(x;
D(\lambda))$; as is easy to see, this formula determines the trace on
$LU_{\fg}(\lambda)$ for arbitrary values of $\lambda$.
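Bernstein's partial trace is easy to reproduce symbolically. The sketch below (ours) assumes the $H$-eigenvalues on $M^\lambda$ are $\lambda, \lambda-2, \lambda-4, \dots$, the convention under which $\tr(H)$ vanishes at $D=\lambda$ as stated.

```python
import sympy as sp

D, lam, i = sp.symbols('D lam i')

# Assumption (ours): the H-eigenvalues on M^lambda are lambda - 2i, i >= 0.
# Sum of the first D + 1 diagonal entries of the image of H:
trH = sp.expand(sp.summation(lam - 2*i, (i, 0, D)))

# tr(H; D) is the polynomial (D + 1)(lambda - D) in D ...
assert sp.expand(trH - (D + 1)*(lam - D)) == 0
# ... and it vanishes exactly at D = lambda, as Bernstein observed
assert trH.subs(D, lam) == 0
```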
Observe that whereas for any irreducible finite dimensional module
over the simple Lie algebra $\fg$ there is just one formula for
$D(\lambda)$ (H.~Weyl's dimension formula), for Lie superalgebras
there are {\it several} distinct formulas depending on how ``typical''
$\lambda$ is.
\ssec{2.2. There is no Cartan matrix for $\fsl(\lambda)$. What
replaces it?} Are there {\it Chevalley generators} in $\fsl(\lambda)$?
In other words, are there elements $X^{\pm}_i$ of degree $\pm 2$ and
$H_i$ of degree 0 (the {\it degree} is the weight with respect to the
$\fsl(2)=L^2\subset \fsl(\lambda)$) such that
$$
[X^+_i, X^-_j]= \delta_{ij}H_i, \quad [H_i, H_j]=0\text{ and }[H_i,
X^{\pm}_j]=\pm A_{ij}X^{\pm}_j? \eqno{(2.5)}
$$
The answer is {\bf NO}: $\fsl(\lambda)$ is too small. To see what the
problem is, consider the following elements of degree $\pm 2$ from
$L^4$ and $L^6$ of $\fgl(\lambda)$:
$$
\renewcommand{\arraystretch}{1.4}
\begin{array}{ll}
\deg= -2: &-4uD^2-2(\lambda-2)D\\
\deg=2: &-4u^3D^2+6(\lambda-2)u^2D-2(\lambda-1)(\lambda-2)u
\end{array}
$$$$
\renewcommand{\arraystretch}{1.4}
\begin{array}{ll}\deg=-2: &15u^2D^3-15(\lambda-3)uD^2+3(\lambda-2)(\lambda-3)D\\
\deg=2: &15u^4D^3-30(\lambda-3)u^3D^2+
18(\lambda-2)(\lambda-3)u^2D-3(\lambda-1)(\lambda-2)(\lambda-3)u
\end{array}
$$
To satisfy $(2.5)$, we can complete $\fgl(\lambda)$ by considering
infinite sums of its elements, but the completion erases the
difference between different $\lambda$'s:
\begin{Proposition} For $\lambda\neq \rho$ the completion of
$\fsl(\lambda)$ generated by Jacobson's generators {\em (see Tables)}
is isomorphic to $\overline{\fp\fdiff (1)}$, the quotient of the Lie
algebra of differential operators with formal coefficients modulo
constants.
\end{Proposition}
Though there is no Cartan matrix, Saveliev and Vershik \cite{SV}
suggested an operator $K$ which replaces the Cartan matrix. For
further details see the paper by Shoihet and Vershik \cite{ShV}.
\ssec{2.3. The outer automorphism of $LU_{\fg}(\lambda)$}
The invariants of the mapping
$$
X\mapsto -SX^{t}S\; \text{for}\; X\in\fgl(n), \; \text{where}\;
S=\antidiag(1, -1, 1, -1\dots )\eqno{(2.6)}
$$
constitute $\fo(n)$ if $n\in 2\Nee +1$ and $\fsp(n)$ if $n\in 2\Nee $.
By analogy, Feigin defined $\fo(\lambda)$ and $\fsp(\lambda)$ as
subalgebras of $\fgl(\lambda)=\mathop{\oplus}\limits_{k\geq 0} L^{2k}$ invariant with respect to the
involution analogous to (2.6):
$$
X\mapsto \cases -X&\text{if $X\in L^{4k}$}\cr
X&\text{if $X\in L^{4k+2}$.}\endcases\eqno{(2.7)}
$$
Since $\fo(\lambda)$ and $\fsp(\lambda)$ --- the subalgebras of
$\fgl(\lambda)$ singled out by the involution (2.7) --- differ by a
shift of the parameter $\lambda$, it is natural to denote them
uniformly (but so as not to confuse them with the Lie superalgebras of
the series $\fosp$), namely, by $\fo{/}\fsp(\lambda)$. For integer
values of the parameter it is clear that
$$
\fo{/}\fsp(\lambda) = \left\{\begin{matrix} \fo(\lambda)\supplus
I_\lambda& \text{if $\lambda \in 2\Nee+1$}, \\
\fsp(\lambda)\supplus I_\lambda & \text{if $\lambda \in 2\Nee
$},\end{matrix}\right.\; \text{where}\; I_\lambda \; \text{is an
ideal}.
$$
In the realization of $\fsl(\lambda)$ by differential operators the
transposition is the passage to the adjoint operator; hence,
$\fo{/}\fsp(\lambda)$ is a subalgebra of $\fsl(\lambda)$ consisting of
self-skew-adjoint operators with respect to the involution
$$
a(u)\frac{d^k}{du^k}\mapsto (-1)^k\frac{d^k}{du^k}a(u)^*.\eqno{(2.8)}
$$
The superization of this formula is straightforward: via the Sign Rule.
\ssec{2.4. The Lie algebra $\fgl(\lambda)$ as a subalgebra of
$\fgl_+(\infty)$} Recall that $\fgl_+(\infty)$ often denotes the Lie
algebra of infinite (in one direction; the index $+$ indicates that)
matrices with nonzero elements inside a strip (depending on the
matrix) along the main diagonal and containing it. The subalgebras
$\fo(\infty)$ and $\fsp(\infty)$ are naturally defined, while
$\fsl(\infty)$ is, by abuse of language, sometimes used to denote
$\fp\fgl(\infty)$.
When it comes to superization, one should be very careful in selecting
an appropriate candidate for $\fsl(\infty|\infty)$ and its subalgebra,
cf. \cite{E}.
The realization (2.1) provides an embedding
$\fsl(\lambda)\subset\fsl_+(\infty)=\lq\lq\fsl(M^{\lambda})"$, so for
$\lambda\not\in \Nee$ the Verma module $M^{\lambda}$ with highest
weight $\lambda$ is an irreducible $\fsl(\lambda)$-module.
\begin{Proposition} The completion of $\fgl(\lambda)$ (generated by
the elements of degree $\pm 2$ with respect to $H\in
\fsl(2)\subset\fgl(\lambda)$) is isomorphic for any noninteger
$\lambda$ to $\fgl_+(\infty)=\lq\lq\fgl(M^{\lambda})"$.
\end{Proposition}
\ssec{2.5. The Lie algebras $\fsl(*)$ and $\fo{/}\fsp(*)$ for
$*\in\Cee P^1=\Cee \cup\{*\}$}
The \lq\lq dequantization" of the relations for $\fsl(\lambda)$ and
$\fo{/}\fsp(\lambda)$ (see \S 3) is performed by passage to the limit as
$\lambda\longrightarrow\infty$ under the change:
$$
t\mapsto
\left\{\renewcommand{\arraystretch}{1.4}
\begin{array}{ll}
\frac{t}{\lambda}&\text{for $\fsl(\lambda)$}\\
\frac{t}{\lambda^2}&\text{for $\fo{/}\fsp(\lambda)$}.\end{array}\right.
$$
So the parameter $\lambda$ above can actually run over $\Cee
P^1=\Cee\cup\{*\}$, not just $\Cee$. In the realization with the help
of deformation, cf. 2.7 below, this is obvious. Denote the limit
algebras by $\fsl(*)$ and $\fo{/}\fsp(*)$ in order to distinguish
them from $\fsl(\infty)$ and $\fo(\infty)$ or $\fsp(\infty)$ from sec.
2.4.
It is clear that it is impossible to embed $\fsl(*)$ and $\fo{/}\fsp(*)$ into the \lq\lq
quadrant" algebra $\fsl_+(\infty)$: indeed, $\fsl(*)$ and
$\fo{/}\fsp(*)$ are subalgebras of the whole
\lq\lq plane" algebras $\fsl(\infty)$ and
$\fo(\infty)$ or $\fsp(\infty)$.
\ssbegin{2.6}{Theorem} For the Lie algebras $\fsl(\lambda)$ and
$\fo{/}\fsp(\lambda)$, $\lambda\in \Cee P^1$, all the relations
between the Jacobson generators are the relations of types $0, 1$ with
$2k_2$ found from Table $1.1$ and the relations from \S $3$ borrowed
from {\em \cite{GL1}}.
\end{Theorem}
\section*{\S 3. Jacobson's generators and relations between them}
In what follows the $E_{ij}$ are the matrix units; $X^{\pm}_i$ stand
for the conventional Chevalley generators of $\fg$. For
$\fsl(\lambda)$ and $\fo{/}\fsp(\lambda)$ the generators $x =
u^2\frac{d}{du} - (\lambda-1) u$ and $y = - \frac{d}{d u}$ are the
same; $z_{\fsl} = t\frac{d^2}{d u^2}$ while $z_{\fo{/}\fsp} =
t\frac{d^3}{d u^3}$. For $\lambda\in\Cee \setminus \Zee $ there is no
shearing relation of type $\infty$; for $\lambda=*\in\Cee P^1$ the
relations are obtained with the substitution of sec.~2.5. The
parameter $t$ can be taken equal to 1; we kept it explicit to clarify
how to ``dequantize" the relations as $\lambda\tto\infty$.
\noindent $\underline{\fsl(*)}$.
$$
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{rl}
{\pmb 2.1}. & $3[z_1, z_2] - 2[z, z_3] = 24 y$,\cr
{\pmb 3.1}.& $[z, \, [z, z_1] ]= 0$,\cr
{\pmb 3.2}.& $4 [[z, z_1], z_3] + 3 [z_2, [z, z_2]]
= -576 z$.\cr
\end{tabular}
$$
\noindent $\underline{\fo{/}\fsp(*)}$.
$$
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{rl}
{\pmb 2.1}.& $2 [z_1, z_2] - [z, z_3] = 72 z$,\cr
{\pmb 2.2}.& $9 [z_2, z_3] - 5 [z_1, z_4] =
216 z_2 - 432 y$,\cr
{\pmb 3.1}.& $[z, \, [z, z_1] ]= 0$,\cr
{\pmb 3.2}.& $7 [[z, z_1], z_3] + 6 [z_2, [z, z_2]] =
- 720 [z, z_1] $.\cr
\end{tabular}
$$
\vskip 1mm
\noindent $\underline{\fsl(n)\text{ for }n\geq 3}$. Generators:
$$
x = \sum\limits_{1\leq i\leq n-1}i(n-i)E_{i, i+1}, \qquad
y = \sum\limits_{1\leq i\leq n-1}E_{i+1, i}, \qquad
z = t\sum\limits_{1\leq i\leq n-2} E_{i+2, i}.
$$
Relations:
$$
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{rl}
{\pmb 2.1}.& $3[z_1, \, z_2] - 2[z, \, z_3] = 24t^2(n^2-4) y$,\cr
{\pmb 3.1}.& $[z, \, [z, z_1]] = 0$,\cr
{\pmb 3.2}.& $4 [z_3, \, [z, z_1] ] - 3 [z_2, \, [z, z_2]]
= 576 t^2(n^2-9)z$.\cr
${\pmb \infty=n-1}$.& $(\ad z_1)^{n-2}z = 0$.\cr
\end{tabular}
$$
\noindent For $n=3, 4$ the degree of the last relation is lower than the
degree of some other relations; this yields simplifications.
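These generators can be fed to a computer algebra check. The sketch below (ours, numpy, for $n=5$, $t=1$) verifies the relations of types 0 and 1 (with $2k_2=4$) and relation 3.1; we do not re-derive 2.1 or 3.2 here.

```python
import numpy as np

n = 5
def E(i, j):
    m = np.zeros((n, n)); m[i - 1, j - 1] = 1.0
    return m
br = lambda a, b: a @ b - b @ a

t = 1.0
x = sum(i * (n - i) * E(i, i + 1) for i in range(1, n))
y = sum(E(i + 1, i) for i in range(1, n))
z = t * sum(E(i + 2, i) for i in range(1, n - 1))
h = br(x, y)

# type 0: the sl(2) relations
assert np.allclose(br(h, x), 2 * x) and np.allclose(br(h, y), -2 * y)

# type 1: z is a lowest weight vector of the 5-dimensional module L^4,
# so [y, z] = 0, [[x, y], z] = -2 k_2 z with 2 k_2 = 4, and z_5 = 0
assert np.allclose(br(y, z), 0 * z)
assert np.allclose(br(h, z), -4 * z)
zs = [z]
for _ in range(5):
    zs.append(br(x, zs[-1]))   # z_1, z_2, ..., z_5 with z_i = (ad x)^i z
assert np.allclose(zs[5], 0 * z)

# relation 3.1: [z, [z, z_1]] = 0
assert np.allclose(br(z, br(z, zs[1])), 0 * z)
```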
\noindent $\underline{\fo(2n+1)\text{ for }n\geq 3}$. Generators:
$$
x = n(n+1)(E_{n+1, 2n+1}-E_{n, n+1}) +
\sum\limits_{1\leq i\leq n-1} i(2n+1-i)(E_{i, i+1}-E_{n+i+2, n+i+1}),
$$
$$
y = (E_{2n+1, n+1}-E_{n+1, n}) +
\sum\limits_{1\leq i\leq n-1}(E_{i+1, i}-E_{n+i+1, n+i+2}),
$$
$$
z=t\bigl( (E_{2n-1, n+1}-E_{n+1, n-2})-(E_{2n+1, n-1}-E_{2n, n}) +
\sum\limits_{1\leq i\leq n-3} (E_{i+3, i}-E_{n+i+1, n+i+4}) \bigr).
$$
Relations:
$$
\begin{tabular}{rl}
{\pmb 2.1}.& $2 [z_1, \, z_2] - [z, \, z_3] = 144 t (2n^2+2n-9) z$,\cr
{\pmb 2.2}.& $9 [z_2, \, z_3] - 5 [z_1, \, z_4] =
432 t (2n^2+2n-9)z_2 + 1728 t^2(n-1)(n+2)(2n-1)(2n+3) y$,\cr
{\pmb 3.1}.& $[z, \, [z, z_1]] = 0$,\cr
{\pmb 3.2}.& $7 [z_3, \, [z, z_1]] - 6 [z_2, \, [z, z_2]] =
2880 t (n-3)(n+4)[z, z_1] $,\cr
$\pmb{\infty=n}$.& $(\ad z_1)^{n-1}z = 0$.\cr
\end{tabular}
$$
\noindent $\underline{\fsp(2n)\text{ for }n\geq 3}$. Generators:
$$
x=n^2E_{n, 2n} +
\sum\limits_{1\leq i\leq n-1} i(2n-i)(E_{i, i+1}-E_{n+i+1, n+i}),
$$
$$
y=E_{2n, n} + \sum\limits_{1\leq i\leq n-1}(E_{i+1, i}-E_{n+i, n+i+1}),
$$
$$
z=t\biggl((E_{2n, n-2}+E_{2n-2, n}) - E_{2n-1, n-1} +
\sum\limits_{1\leq i\leq n-3} (E_{i+3, i}-E_{n+i, n+i+3}) \biggr).
$$
Relations:
$$
\begin{tabular}{rl}
{\pmb 2.1}.& $2 [z_1, \, z_2] - [z, \, z_3] = 72 t (4n^2-19) z$,\cr
{\pmb 2.2}.& $9 [z_2, \, z_3] - 5 [z_1, \, z_4] =
216 t (4n^2-19)z_2 + 1728 t^2(n^2-1)(4n^2-9) y$,\cr
{\pmb 3.1}.& $[z, \, [z, z_1]] = 0$,\cr
{\pmb 3.2}.& $7 [z_3, \, [z, z_1] ] - 6 [z_2, \, [z, z_2]] =
720 t (4n^2-49)[z, z_1] $, \cr
$\pmb{\infty=n}$.& $(\ad z_1)^{n-1}z = 0$.\cr
\end{tabular}
$$
For Jacobson generators and corresponding defining relations for the
exceptional Lie algebras see \cite{GL1}.
\section*{\S 4. Lie superalgebras}
\ssec{4.0. Linear algebra in superspaces}
Superization has certain subtleties, often disregarded or expressed
too briefly. We will dwell on them a bit, see \cite{L2}.
A {\it superspace} is a $\Zee /2$-graded space; for a superspace
$V=V_{\bar 0}\oplus V_{\bar 1}$ denote by $\Pi (V)$ another copy of
the same superspace: with the shifted parity, i.e., $(\Pi(V))_{\bar
i}= V_{\bar i+\bar 1}$.
A superspace structure in $V$ induces one in the space $\End (V)$. A
{\it basis of a superspace} is always a basis consisting of {\it
homogeneous} vectors; let $\Par=(p_1, \dots, p_{\dim V})$ be an
ordered collection of their parities, called the {\it format} of (the
basis of) $V$. A square {\it supermatrix} of format (size) $\Par$ is
a $\dim V\times \dim V$ matrix whose $i$th row and $i$th column are
said to be of parity $p_i$. The matrix unit $E_{ij}$ is supposed to
be of parity $p_i+p_j$, and the bracket of supermatrices (of the same
format) is defined via the Sign Rule: {\it if something of parity $p$
moves past something of parity $q$, the sign $(-1)^{pq}$ accrues; the
formulas defined on homogeneous elements are extended to arbitrary
ones via linearity}. For example, $[X, Y]=XY-(-1)^{p(X)p(Y)}YX$; the
sign $\wedge$ in what follows is also understood in the supersense, etc.
Usually, $\Par$ is considered to be of the form $(\bar 0, \dots, \bar
0, \bar 1, \dots, \bar 1)$. Such a format is called {\it standard}.
The Lie superalgebra of supermatrices of size $\Par$ is denoted by
$\fgl(\Par)$, usually $\fgl(\bar 0, \dots, \bar 0, \bar 1, \dots, \bar
1)$ is abbreviated to $\fgl(\dim V_{\bar 0}|\dim V_{\bar 1})$.
For $\dim V_{\bar 0}=\dim V_{\bar 1}\pm 1$ we will often use another
format, the {\it alternating} one, $\Par_{alt}=(\bar 0, \bar 1, \bar
0, \bar 1, \dots )$.
The {\it supertrace} is the map $\fgl (\Par)\longrightarrow \Cee $,
$(A_{ij})\mapsto \sum (-1)^{p_{i}}A_{ii}$. The supertraceless
matrices constitute a Lie subsuperalgebra, $\fsl(\Par)$.
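Like the ordinary trace, the supertrace vanishes on (super)brackets of homogeneous supermatrices. A quick randomized check (ours) in the format $\Par=(\bar 0, \bar 0, \bar 1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
par = np.array([0, 0, 1])          # format Par = (even, even, odd)
n = len(par)

def supertrace(A):
    """str(A) = sum of (-1)^{p_i} A_{ii}."""
    return sum((-1) ** par[i] * A[i, i] for i in range(n))

def homogeneous(p):
    """Random supermatrix of parity p: entry (i,j) survives iff p_i + p_j = p mod 2."""
    A = rng.standard_normal((n, n))
    mask = (par[:, None] + par[None, :]) % 2 == p
    return A * mask

for pX in (0, 1):
    for pY in (0, 1):
        X, Y = homogeneous(pX), homogeneous(pY)
        # superbracket via the Sign Rule
        sbr = X @ Y - (-1) ** (pX * pY) * Y @ X
        assert abs(supertrace(sbr)) < 1e-12
```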
To the linear map $F$ of superspaces there corresponds the dual map
$F^*$ between the dual superspaces; if $A$ is the supermatrix
corresponding to $F$ in a format $\Par$, then to $F^*$ the {\it
supertransposed} matrix $A^{st}$ corresponds:
$$
(A^{st})_{ij}=(-1)^{(p_{i}+p_{j})(p_{i}+p(A))}A_{ji}.
$$
The supermatrices $X\in\fgl(\Par)$ such that
$$
X^{st}B+(-1)^{p(X)p(B)}BX=0\quad \text{for a homogeneous matrix
$B\in\fgl(\Par)$}
$$
constitute the Lie superalgebra $\faut (B)$ that preserves the
bilinear form on $V$ with matrix $B$.
The superspace of bilinear forms is denoted by $Bil_C(M, N)$, or
$Bil_C(M)$ if $M=N$. The {\it upsetting of forms} $uf:\ Bil_C(M,
N)\rightarrow Bil_C(N, M)$ is defined by the formula
$$
B^{uf}(n, m)=(-1)^{p(n) p(m) }B(m, n).
$$
A form $B\in Bil_C(M)$ is called {\it supersymmetric} if $B^{uf}=B$
and {\it superskew-symmetric} if $B^{uf}=-B$.
Given bases $\{m_i\}$ and $\{n_j\}$ of $C$-modules $M$ and $N$ and a
bilinear form $B: M\otimes N\rightarrow C$, we assign to $B$ the
matrix
$$
({}^{mf}\! B)_{ij}=(-1)^{p(m_i)p(B)}B(m_i, n_j).
$$
Consider a nondegenerate supersymmetric form whose matrix in the
standard format is
$$
B_{m, 2n}= \begin{pmatrix}
1_m&0\\
0&J_{2n}
\end{pmatrix},\quad \text{where $J_{2n}=\begin{pmatrix}
0&1_n\\-1_n&0\end{pmatrix}$}.
$$
The usual notation for $\faut (B_{m|2n})$ is $\fosp^{sy}(m|2n)$ or
just $\fosp(m|2n)$. (Observe that the passage from $V$ to $\Pi (V)$
sends the supersymmetric forms to superskew-symmetric ones, preserved
by $\fosp^{sk}(m|2n)$ which is isomorphic to $\fosp(m|2n)$ but has a
different matrix realization.)
We will need the orthosymplectic supermatrices in the alternating
format; in this format we take the matrix $B_{m, 2n}(\alt)=\antidiag
(1, \dots, 1, -1, \dots, -1)$ with the only nonzero entries on the
side diagonal, the last $n$ being $-1$'s. The Lie superalgebra of
such supermatrices will be denoted by $\fosp(\alt_{m|2n})$, where, as
is easy to see, either $m=2n\pm 1$ or $m=2n$.
There is a 1-parameter family of deformations $\fosp_\alpha(4|2)$ of
the Lie superalgebra $\fosp(4|2)$; the only explicit description of it
we know (apart from \cite{BGLS}, of course) is in terms of the Cartan
matrix \cite{GL3}.
\ssec{4.1. The superprincipal embeddings} Not every simple Lie
superalgebra, even a finite dimensional one, hosts a superprincipal
$\fosp(1|2)$-subsuperal\-geb\-ra. Let us describe those that do.
(Aside: an interesting problem is to describe {\it semiprincipal}
embeddings into $\fg$, defined as the ones with the least possible
number of irreducible components.)
We select the following basis in $\fosp(1|2)\subset \fsl(\bar 0|\bar 1|\bar 0)$:
$$
X^-=\begin{pmatrix}
0&0&0\\
0&0&0\\
-1&0&0\end{pmatrix}, \;
H=\begin{pmatrix}
1&0&0\\0&0&0\\
0& 0&-1\\ \end{pmatrix}, \;
X^+= \begin{pmatrix} 0& 0&1\\
0&0&0\\
0&0&0\\
\end{pmatrix}.
$$
$$
\nabla^-=\begin{pmatrix} 0&0&0\\
1&0&0\\
0&1&0\\
\end{pmatrix}, \;
\nabla^+=\begin{pmatrix}
0&1&0\\
0&0&-1\\
0&0&0\\ \end{pmatrix}.
$$
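The bracket relations of this basis can again be checked numerically; the sketch below (ours) verifies them, noting that with the printed sign choices $(\nabla^{\pm})^2=-X^{\pm}$, an artifact of the normalization of $X^{\pm}$ rather than a property of $\fosp(1|2)$.

```python
import numpy as np

def E(i, j):
    m = np.zeros((3, 3)); m[i - 1, j - 1] = 1.0
    return m

# The basis of osp(1|2) printed above (format (even, odd, even))
Xm, H, Xp = -E(3, 1), E(1, 1) - E(3, 3), E(1, 3)
Nm = E(2, 1) + E(3, 2)          # nabla^-
Np = E(1, 2) - E(2, 3)          # nabla^+

br  = lambda a, b: a @ b - b @ a   # bracket involving an even element
abr = lambda a, b: a @ b + b @ a   # bracket of two odd elements

# even part: the sl(2) relations [H, X^pm] = pm 2 X^pm
assert np.allclose(br(H, Xp), 2 * Xp) and np.allclose(br(H, Xm), -2 * Xm)
# the odd generators have weights pm 1 ...
assert np.allclose(br(H, Np), Np) and np.allclose(br(H, Nm), -Nm)
# ... and their anticommutator recovers H:
assert np.allclose(abr(Np, Nm), H)
# with these sign choices (nabla^pm)^2 = -X^pm; the overall signs
# depend on the chosen normalization of X^pm
assert np.allclose(Np @ Np, -Xp) and np.allclose(Nm @ Nm, -Xm)
```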
The highest weight $\fosp(1|2)$-module $\cM^\mu$ is illustrated with a graph
whose nodes correspond to the eigenvectors $l_i$ of $H$ with the weight
indicated; the horizontal edges depict the $X^{\pm}$-action (the
$X^+$-action is directed to the right, that of $X^-$ to the left);
each horizontal string is an irreducible $\fsl(2)$-submodule; two such
submodules are glued together into an $\fosp(1|2)$-module by the action of
$\nabla^{\pm}$ (we set $\nabla^+(l_\mu)=0$ and $\nabla^-(l_i)=l_{i-1}$; the
corresponding edges are not depicted below); we additionally assume that
$p(l_\mu)=\bar 0$:
$$
\begin{matrix}
\dots\overset{\mu-2i}{\circ} \longleftrightarrow\overset{\mu-2i+2}{\circ}
\longleftrightarrow ...\quad ... \quad ...
\longleftrightarrow\overset{\mu-2}{\circ} \longleftrightarrow\overset{\mu}{\circ}\\
\dots\overset{\mu-2i+1}{\circ} \longleftrightarrow\overset{\mu-2i+3}{\circ}
\longleftrightarrow ...\longleftrightarrow\overset{\mu-3}{\circ}
\longleftrightarrow\overset{\mu-1}{\circ}\end{matrix}
$$
As follows from the relations of type 0 below in sec. 4.2, the module $\cM^n$ for $n\in \Zee
_+$ has an irreducible submodule isomorphic to $\Pi(\cM^{-n-1})$; the quotient,
obviously irreducible as follows from the same formulas, will be denoted by $\cL^n$.
Serganova completely described superprincipal embeddings of
$\fosp(1|2)$ into a simple finite dimensional Lie superalgebra
\cite{LSS} (the main part of her result was independently obtained in
\cite{vJ}).
As the $\fosp(1|2)$-module corresponding to the superprincipal
embedding, a simple finite dimensional Lie superalgebra $\fg$ is as
follows (the missing simple algebras $\fg$ do not contain a
superprincipal $\fosp(1|2)$):
\ssec{Table 4.1. $\fg$ that admits a superprincipal subalgebra: as the
$\fosp(1|2)$-module}
$$
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|l|ll|l|}
\hline
$\fg$&$\fg=\cL^2\oplus
(\mathop{\oplus}\limits_{i>1} \cL^{2k_i})$ for $i\geq 2$&$
\oplus$&$(\mathop{\oplus}\limits_j\Pi(\cL^{m_j}))$ for $j\geq 1$\cr
\hline
$\fsl(n|n+1)$&$\cL^2\oplus \cL^4 \oplus \cL^6 \dots \oplus \cL^{2n-2}$&&$\oplus
\Pi(\cL^1)\oplus \Pi(\cL^3)\oplus\dots \oplus \Pi(\cL^{2n-1})$\cr
$\fosp (2n-1|2n)$&$\cL^2\oplus \cL^6 \oplus \cL^{10} \dots \oplus
\cL^{4n-6}$&&$\oplus \Pi(\cL^3)\oplus \Pi(\cL^7)\oplus\dots \oplus
\Pi(\cL^{4n-1})$\cr
($n>1$)&&&\cr
$\fosp (2n+1|2n)$&$\cL^2\oplus \cL^6 \oplus \cL^{10}
\dots \oplus \cL^{4n-2}$&&$\oplus \Pi(\cL^3)\oplus \Pi(\cL^7)\oplus\dots
\oplus \Pi(\cL^{4n-1})$\cr
$\fosp (2|2)\cong \fsl(1|2)$&$\cL^2$&&$\oplus \Pi(\cL^1)$\cr
$\fosp (4|4)$&$\cL^2\oplus \cL^6$&&$\oplus \Pi(\cL^3)\oplus \Pi(\cL^3)$\cr
$\fosp (2n|2n)$&$\cL^2\oplus \cL^6 \oplus \cL^{10} \dots \oplus
\cL^{4n-2}$&$\oplus \cL^{2n-2}$&$\oplus \Pi(\cL^3)\oplus \Pi(\cL^7)\oplus\dots \oplus
\Pi(\cL^{4n-1})$\cr
$\fosp (2n+2|2n)$&$\cL^2\oplus \cL^6 \oplus \cL^{10} \dots \oplus \cL^{4n+2}$&$\oplus
\cL^{2n}$&$\oplus \Pi(\cL^3)\oplus \Pi(\cL^7)\oplus\dots \oplus \Pi(\cL^{4n-1})$\cr
$\fosp_{\alpha}(4|2)$&$\cL^2$&$\oplus \cL^2$&$\oplus \Pi(\cL^3)$ \cr
\hline
\end{tabular}
$$
The Lie superalgebra $\fg$ of type $\fosp$ that contains a
superprincipal subalgebra $\fosp (1|2)$ can be generated by two
elements. For such elements we can take $X:=\nabla^+\in
\cL^2=\fosp(1|2)$ and a lowest weight vector $Z:=l_{-r}$ from the
module $M=\cL^r$ or $\Pi(\cL^r)$, where for $M$ we take $\Pi(\cL^3)$
if $\fg\not=\fosp(2n|2m)$ and otherwise the last module with the even
highest weight vector in the above table (i.e., $\cL^{2n-2}$ if $\fg
=\fosp(2n|2n)$ and $\cL^{2n}$ if $\fg =\fosp(2n+2|2n)$).
To generate $\fsl(n|n+1)$ we have to add to the above $X$ and $Z$ a
lowest weight vector $U$ from $\Pi(\cL^1)$. (Clearly, $Z$ and $U$ are
defined up to factors that we can select at our convenience; we will
assume that a basis of $\cL^r$ is fixed and denote $Z=t\cdot l_{-r}$ and
$U=s\cdot l_{-1}$ for $t, s\in \Cee $.)
We call the above $X$ and $Z$, together with $U$ and fortified by
$Y:=X^-\in \cL^2$, the {\it Jacobson generators}. The presence of $Y$
considerably simplifies the form of the relations, though it slightly
increases their number.
\ssec{4.2. Relations between the Jacobson generators} We repeat the
arguments of sec.~1.2. Since we obtain the relations recursively,
it can happen that a relation of higher degree implies a relation of
lower degree. This did not happen when we studied $\fsl(\lambda)$,
but it does happen in what follows: namely, relation 1.2 implies 1.1.
We divide the relations between the Jacobson generators into types
according to the number of occurrences of $Z$ in them: {\bf 0}.
Relations in $\fsl(1|2)$ or $\fosp(1|2)$; {\bf 1}. Relations coming
from the $\fosp(1|2)$-action on $\cL^{2k_2}$; {\bf 2}. Relations
coming from $\cL^{2k_{1}}\wedge \cL^{2k_2}$; {\bf 3}. Relations
coming from $\cL^{2k_2}\wedge \cL^{2k_2}$; $\pmb\infty$. Relations
that shear the dimension.
The relations of type 0 are the well-known relations in $\fsl(1|2)$;
those of them that do not involve $U$ (marked with an $*$) are the
relations for $\fosp(1|2)$. The relations of type 1 that do not
involve $U$ express the fact that $\cL^{2k_2}$ is the
$\fosp(1|2)$-module with highest weight $2k_2$. To simplify notation
we set $Z_i=(\ad X)^iZ$ and
$Y_i=(\ad X)^iY$. $$
\begin{matrix}
\pmb{0.1}^*.& [Y, Y_1]=0, &\pmb{0.2}^*.& [Y_2, Y]=2Y,&\pmb{0.3}^*.& [Y_2, X]=-X,\\
\pmb{0.4}.& [Y, U]=0,& \pmb{0.5}.& [U, U] = -2Y;&\pmb{0.6}.& [U, Y_1]=0,\\
\pmb{0.7}.& [[X, X], [X, U]]=0,& \pmb{0.8}.& [Y_2, U]=U.\\
\end{matrix}
$$
$$
\pmb{1.1}.\; \; [Y, \, Z] = 0\Longleftarrow \pmb{1.2}.\; \; [[X, Y], \, Z] = 0, \quad
\pmb{1.3}.\; \; Z_{4k_{1}} = 0, \quad \pmb{1.4}.\; \; [Y_2, Z]= 3Z.
$$
\ssbegin{4.3}{Theorem}
For the Lie superalgebras indicated, all the relations between Jacobson's
generators are the above relations of types $0, 1$ and the relations from
$\S 6$. \end{Theorem}
\section*{\S 5. The Lie superalgebra $\fgl (\lambda|\lambda+1)$ as the quotient of
$\fdiff (1|1)$ and a subalgebra of
$\fsl_+ (\infty|\infty)$}
There are several ways to superize $\fsl_+ (\infty|\infty)$. For a
description of \lq\lq the best" one from a certain point of view see
\cite{E}. For our purposes any version of $\fsl_+ (\infty|\infty)$
will do.
\ssec{5.1} The Poincar\'e--Birkhoff--Witt theorem states that
$U(\fosp(1|2))\cong \Cee [X^-, \nabla^-, H, \nabla^+, X^+]$, as
superspaces. Set
$U_\lambda=U(\fosp(1|2))/(\Delta-\lambda^2+\frac{9}{4})$.
Denote: $\partial_x=\frac{\partial}{\partial x}$, $\partial_\theta=
\frac{\partial}{\partial\theta}$ and set
$$
X^-=-\partial_x, \quad \nabla^-=\partial_\theta -\theta \partial_x,
\quad H=2x\partial _x+\theta\partial_\theta (\lambda-1),\quad
\nabla^+=x\partial_\theta +x\theta\partial _x-\lambda\theta, \quad
X^+=x^2\partial_x-(\lambda-1) x.
$$
These formulas establish a morphism of $\fosp(1|2)$-modules and,
moreover, of associative superalgebras: $U_\lambda\longrightarrow \Cee
[x, \theta , \partial_x, \partial_\theta ]$.
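The realization above can be experimented with mechanically. Below is a minimal sympy sketch (our own encoding, not part of the text): a superfunction $f_0(x)+\theta f_1(x)$ is stored as the pair $(f_0, f_1)$, using $\theta^2=0$, and we check that the odd operator $\nabla^-$ squares to the even operator $X^-$ --- a relation these particular formulas do satisfy, whatever normalization of $\fosp(1|2)$ one prefers.

```python
import sympy as sp

x = sp.symbols('x')

# A "superfunction" f0(x) + theta*f1(x) (theta odd, theta**2 = 0) is stored
# as the pair (f0, f1); the operators below encode the formulas of the text.
def nabla_minus(f):                 # nabla^- = d/dtheta - theta*d/dx
    f0, f1 = f
    return (f1, -sp.diff(f0, x))

def X_minus(f):                     # X^- = -d/dx, acting componentwise
    f0, f1 = f
    return (-sp.diff(f0, x), -sp.diff(f1, x))

# The odd operator nabla^- squares to the even operator X^-,
# i.e. {nabla^-, nabla^-} = 2 X^- in this realization.
f = (x**3 + 2*x, x**2)
assert nabla_minus(nabla_minus(f)) == X_minus(f)
```

The same pair-encoding lets one experiment with the other operators of the realization in exactly the same way.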
In what follows we will need a well-known fact: the Casimir operator
$$
\Delta=2(X^+X^-+X^-X^+)+\nabla^+\nabla^--\nabla^-\nabla^++H^2
$$
acts on the irreducible $\fosp(1|2)$-module $\cL^{\mu}$ as the scalar
operator of multiplication by $\mu^2+3\mu$. (The passage from $\mu$
to $\lambda$ is done with the help of a shift by $\frac{3}{2}$.)
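The shift can be checked in one line; the sketch below (ours, not the text's) verifies that under $\lambda=\mu+\frac32$ the eigenvalue $\mu^2+3\mu$ becomes $\lambda^2-\frac94$, which is exactly why $U_\lambda$ was defined as the quotient of $U(\fosp(1|2))$ by the ideal $(\Delta-\lambda^2+\frac94)$.

```python
import sympy as sp

mu, lam = sp.symbols('mu lambda')
# Delta acts on L^mu as the scalar mu**2 + 3*mu; with the shift
# lam = mu + 3/2 this scalar equals lam**2 - 9/4, so the element
# Delta - lam**2 + 9/4 annihilates L^mu.
eigenvalue = mu**2 + 3*mu
shifted = (lam**2 - sp.Rational(9, 4)).subs(lam, mu + sp.Rational(3, 2))
assert sp.expand(shifted - eigenvalue) == 0
```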
Consider the Lie superalgebra $LU_\lambda$ associated with the
associative superalgebra $U_\lambda$. It is easy to see that, as an
$\fosp(1|2)$-module,
$$
LU_\lambda=\cL^0\oplus \cL^2\oplus\dots\oplus
\cL^{2n}\oplus \dots \oplus\Pi (\cL^1\oplus \cL^3\oplus \dots)\eqno{(5.1)}
$$
In the same way as for Lie algebras one shows that $LU_n$ contains an ideal
$I_{n}$ for $n\in \Nee \setminus\{0\}$ and that the quotient $LU_n/I_{n}$ is the
conventional $\fsl(n|n+1)$. It is clear that for $\lambda \not\in \Zee$ the Lie
superalgebra $LU_\lambda$ has only one ideal --- the space $\cL^0$ of constants ---
and $LU_\lambda=\cL^0\oplus [LU_\lambda, LU_\lambda]$; hence, there is a supertrace
on $LU_\lambda$. This justifies the following notation:
$$
\fsl(\lambda|\lambda+1)=\fgl(\lambda|\lambda+1)/\cL^0,\quad
\text{ where }\fgl(\lambda|\lambda+1)=\left\{\begin{matrix} U_\lambda&\text{for
$\lambda \not\in
\Nee\setminus\{0\}$}\\
LU_n/I_n&\text{otherwise.}\end{matrix}\right.\eqno{(5.2)}
$$
The definition directly implies that
$\fgl(-\lambda|-\lambda+1)\cong\fgl(\lambda|\lambda+1)$, so speaking about real
values of $\lambda$ we can confine ourselves to the nonnegative values.
Define $\fosp(\lambda+1|\lambda)$ as the Lie subsuperalgebra of
$\fsl(\lambda|\lambda+1)$ invariant with respect to the
involution
$$
X\to \left\{\begin{matrix}
-X&\text{if $X\in \cL^{4k}$ or $X\in \Pi(\cL^{4k\pm 1})$}\cr
X&\text{if $X\in \cL^{4k\pm 2}$ or $X\in \Pi(\cL^{4k\pm
3})$},\end{matrix}\right.\eqno{(5.3)}
$$
which is the analogue of the map
$$
X\to -X^{st}\quad \text{for}\; \quad X\in\fgl(m|n).\eqno{(5.4)}
$$
\ssec{5.2. The Lie superalgebras $\fsl(*|*+1)$ and $\fosp(*+1|*)$,
for $*\in\Cee P^1=\Cee \cup\{\infty\}$} The \lq\lq dequantization" of the
relations for $\fsl(\lambda|\lambda+1)$ and $\fosp(\lambda+1|\lambda)$
is performed by passage to the limit as $\lambda\longrightarrow\infty$
under the change $ t\mapsto\frac{t}{\lambda}$. We denote the limit
algebras by $\fsl(*|*+1)$ and $\fosp(*+1|*)$ in order not to mix them
with $\fsl(\infty|\infty+1)$ and $\fosp(\infty+1|\infty)$,
respectively.
\section*{\S 6. Tables. The Jacobson generators and relations between them}
\ssec{Table 6.1. Infinite dimensional case}
$\bullet\quad \underline{\fosp (\lambda|\lambda+1)}$. Generators:
$$
X = x \partial_\theta + x \theta \partial_x - \lambda \theta, \; \;
Y = \partial_x,\; \;
Z = t(\partial_x\partial_\theta - \theta {\partial_x}^2).
$$
Relations:
$$
\begin{tabular}{rl}
{\pmb 2.1}.& $3 [Z, Z_3] + 2 [Z_1, Z_2] = 6t(2\lambda+1) Z$, \cr {\pmb
2.2}.& $[Z_1, Z_3] = 2 t^2(\lambda-1) (\lambda+2) Y +2t(2\lambda+1)
Z_1$, \cr {\pmb 3.1}.& $[Z_1, [Z, Z]] = 0$.
\end{tabular}
$$
\noindent $\bullet\quad \underline{\fosp (*|*+1)}$. Relations: the
same as in sec.~4.2 plus the following relations:
$$
\begin{tabular}{rl}
\pmb{2.1}.& $3[Z, Z_3]+2[Z_1, Z_2]=12t Z$,\cr
\pmb{2.2}.&$[Z_1, Z_3]=2t^2Y+4tZ_1$.
\end{tabular}
$$
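The \lq\lq dequantization" of sec.~5.2 can be watched directly on the coefficients: substituting $t\mapsto t/\lambda$ into relations 2.1 and 2.2 for $\fosp(\lambda|\lambda+1)$ and letting $\lambda\to\infty$ reproduces the relations just listed. A sympy check of the three coefficients (ours, not the text's):

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)

# Coefficients of relations 2.1 and 2.2 for osp(lambda|lambda+1),
# after the substitution t -> t/lambda prescribed in sec. 5.2.
c21 = (6*t*(2*lam + 1)).subs(t, t/lam)          # RHS coefficient in 2.1
c22_Y = (2*t**2*(lam - 1)*(lam + 2)).subs(t, t/lam)   # Y-coefficient in 2.2
c22_Z1 = (2*t*(2*lam + 1)).subs(t, t/lam)       # Z_1-coefficient in 2.2

assert sp.limit(c21, lam, sp.oo) == 12*t        # relation 2.1 for osp(*|*+1)
assert sp.limit(c22_Y, lam, sp.oo) == 2*t**2    # relation 2.2, Y term
assert sp.limit(c22_Z1, lam, sp.oo) == 4*t      # relation 2.2, Z_1 term
```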
\noindent $\bullet\quad \underline{\fsl (\lambda +1|\lambda)}$ for
$\lambda\in \Cee P^1$. Generators (for $\lambda\in \Cee $): the same
as for $\fosp (\lambda|\lambda+1)$ and $U=\partial_\theta -\theta
\partial_x$.
Relations: the same as for $\fosp (\lambda|\lambda+1)$ plus the
following
$$
\begin{tabular}{c l c l}
\pmb{1.5}.\; &$3 [Z, [X, U]] - [U, Z_1] = 0$,&\pmb{2.3}.\; &$[Z, [U, Z]] = 0$,\cr
\pmb{1.6}.\; &$[[X, U], Z_1] = 0$,&\pmb{2.4}.\; &$[Z_1, [U, Z]] = 0$.\cr
\end{tabular}
$$
\ssec{Table 6.2. Finite dimensional algebras}
In this table $E_{ij}$ are the matrix units; $X^{\pm}_i$ stand for the
conventional Chevalley generators of $\fg$.
$\bullet\quad \underline{\fsl(n+1|n)}$ for $n\geq 3$. Generators:
$$
\begin{matrix}
X = \sum\limits_{1\leq i \leq n}
\bigl((n-i+1)E_{2i-1, 2i} - i E_{2i, 2i+1}\bigr),& Y = \sum\limits_{1\leq i \leq
2n-1} E_{i+2, i},\\
U = \sum\limits_{1\leq i \leq 2n} (-1)^{i+1} E_{i+1, i},& Z =
\sum\limits_{1\leq i \leq 2n-2}(-1)^{i+1} E_{i+3, i}.
\end{matrix}
$$
{\bf Relations}: those for $\fsl(\lambda+1|\lambda)$ with $\lambda=n$ and an extra
relation to shear the dimension:
$$
(\ad Z)^n([X, X])=0.
$$
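The matrix generators are easy to sanity-check by machine. The numpy sketch below (ours) builds $Y$ and $U$ for several $n$, in the alternating parity format where $Y$ is even and $U$ is odd, so the superbrackets $[Y,U]$ and $[U,U]$ are an ordinary commutator and anticommutator respectively; relations 0.4 and 0.5 of sec.~4.2, $[Y,U]=0$ and $[U,U]=-2Y$, then hold on the nose.

```python
import numpy as np

def E(i, j, N):
    # matrix unit E_{ij} (1-based indices) of size N x N
    m = np.zeros((N, N), dtype=int)
    m[i - 1, j - 1] = 1
    return m

def gens_YU(n):
    # Y and U from Table 6.2 for sl(n+1|n), matrices of size 2n+1
    N = 2*n + 1
    Y = sum(E(i + 2, i, N) for i in range(1, 2*n))
    U = sum((-1)**(i + 1) * E(i + 1, i, N) for i in range(1, 2*n + 1))
    return Y, U

for n in range(2, 6):
    Y, U = gens_YU(n)
    # Y even, U odd: the superbrackets below are the relevant ones.
    assert np.array_equal(Y @ U - U @ Y, np.zeros_like(Y))   # 0.4: [Y, U] = 0
    assert np.array_equal(U @ U + U @ U, -2 * Y)             # 0.5: [U, U] = -2Y
```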
\noindent For $n=1$ the relations degenerate into relations of type 0.
$\bullet\quad \underline{\fosp (2n+1|2n)}$. Generators:
$$
\begin{tabular}{rl}
$X$=&$ \sum\limits_{1 \leq i \leq n}\bigl((2n-i+1)(E_{2i-1,
2i}+E_{4n+2-2i, 4n+3-2i}) -i(E_{2i, 2i+1}-E_{4n+1-2i,
4n+2-2i})\bigr)$, \cr $Y$=&$ E_{2n+2, 2n} + \sum\limits_{1 \leq i \leq
2n-1} (E_{i+2, i}-E_{4n+2-i, 4n-i})$, \cr $Z$=&$- E_{2n+2, 2n-1} -
E_{2n+3, 2n} +\sum\limits_{1 \leq i \leq 2n-2}\bigl((-1)^i E_{i+3, i}
+ E_{4n+2-i, 4n-1-i}\bigr)$.
\end{tabular}
$$
{\bf Relations}: those for $\fosp(2\lambda+1|2\lambda)$ with
$\lambda=n$ and an extra relation to shear the dimension (the form of
the relation is identical to that for $\fsl(n+1|n)$).
$\bullet\quad \underline{\fosp_\alpha (4|2)}$. Generators: As
$\fosp(1|2)$-module, the algebra $\fosp_\alpha (4|2)$ has 2 isomorphic
submodules. The generators $X$ and $Y$ belong to one of them. It so
happens that we can select $Z$ from either of the remaining submodules
and still generate the whole Lie superalgebra. The choice (a) is from
$\Pi(\cL^3)$; it is unique (up to a factor). The choices (b) and (c)
are from $\cL^2$; one of them seems to give simpler relations.
$$
\begin{tabular}{rl}
$X$=&$-\frac{\alpha+1}{\alpha}X^+_1+\frac{\alpha}{\alpha+1}X^+_2+
\frac{1}{\alpha(\alpha+1)}X^+_3,\quad Y = [X^-_1, X^-_2]+[X^-_1, X^-_3]+[X^-_2,
X^-_3]$, \cr
$Z$=&$\left \{\begin{matrix} \text{a)}&-[[X^-_1, X^-_2], X^-_3];\\
\text{b)}&-(1+2\alpha)[X^-_1, X^-_2]+\alpha^2(2+\alpha)[X^-_1, X^-_3]+
(\alpha -1)(1+\alpha)^2[X^-_2, X^-_3];\\
\text{c)}& -[X^-_2, X^-_3]-(\alpha+1)[X^-_1, X^-_2].\end{matrix}\right .$\cr
\end{tabular}
$$
Relations of type 0 are common for cases a) -- c):
$$
\pmb{0.1}\; [Y, [Y, [X, X]]]=4Y;\quad \quad \pmb{0.2}\; [Y_1, [X,
X]]=-2X.
$$
The other relations are as follows.
{\bf Relations a)}:
$$
\begin{tabular}{rl}
\pmb{1.1}& $[Y_1, Z_1]=3Z, \quad\quad\quad\quad\quad \pmb{1.2}\; (\ad [X,
X])^3Z_1=0$;\cr
\pmb{2.1}& $[Z, Z]=0;\quad\quad\quad\quad\quad\quad\quad \pmb{2.2}\; [Z_1, [[X,
X], Z]]=-4\frac{\alpha^2+\alpha+1}{\alpha(\alpha+1)}Z$, \cr
\pmb{3.1}& $[\ad [X, X](Z_1), [Z_1, \ad [X, X](Z_1)]]= $\cr
&$-\frac{16}{\alpha(\alpha+1)}Y+8\frac{\alpha^2+\alpha+1}{\alpha(\alpha+1)}[Z_1,
\ad [X, X](Z_1)]+16\frac{(\alpha^2+\alpha+1)^2}{\alpha^2(\alpha+1)^2}Z_1$.\cr
\end{tabular}
$$
{\bf Relations b)}:
$$
\begin{tabular}{rl}
\pmb{1.1}& $[Y_1, Z_1]=2Z; \quad \quad\quad\quad\quad \pmb{1.2}\; (\ad [X,
X])^2Z_1=0$;\cr
\pmb{2.1}*& $[Z_1, Z_1]=2[Z, [Z,[X, X]]]-18\alpha^2(1+\alpha)^2Y+
4(1-\alpha)(2+\alpha)(1+2\alpha)Z$; \cr
\pmb{3.1}& $(\ad Z)^3X=0$,\cr
\pmb{3.2}*& $[[Z, Z_1], (\ad [X, X])^2Z_1]=$\cr
&$(-1+\alpha)(2+\alpha)(1+2\alpha)[Z,
[Z,[X, X]]] +12(1-\alpha)(2+\alpha)(1+2\alpha)\alpha^2(1+\alpha)^2Y+$\cr
&$8
(1-3\alpha^2-\alpha^3)(-1-3\alpha+\alpha^3)Z$.\cr
\end{tabular}
$$
{\bf Relations c)}: same as for b) except that the relations marked in b) by an ${}*$
should be replaced with the following ones
$$
\begin{tabular}{rl}
\pmb{2.1}& $[Z_1, Z_1]=2[Z, [Z,[X, X]]]-2\alpha^2Y+4(2+\alpha)Z$; \cr
\pmb{2.2}& $(\ad [X, X])^2Z_1=(-2-\alpha)[Z, [Z,[X, X]]]
-8(1+\alpha)Z+4\alpha^2(2+\alpha)Y$.\cr
\end{tabular}
$$
\section*{\S 7. Remarks and problems}
\ssec{7.1. On proof} For the exceptional Lie algebras and the
superalgebras $\fosp_\alpha(4|2)$ the proof is direct: the quotient of
the free Lie algebra generated by $x$, $y$ and $z$ modulo our relations
is the needed finite dimensional algebra. For $\rk \fg\leq 12$ we
similarly computed relations for $\fg=\fsl(n)$, $\fo(2n+1)$ and
$\fsp(2n)$; as Post pointed out, together with the result of \cite{PH}
on deformations (cf.~2.7) this completes the proof for Lie algebras.
The results of \cite{PH} on deformations can be directly extended for
the case of $\fsl(2)$ replaced with $\fosp(1|2)$; this proves Theorem
4.3.
Our Theorem 2.6 elucidates Proposition 2 of \cite{F}; we just wrote
relations explicitly. Feigin claimed \cite{F} that for $\fsl(\lambda)$
the relations of type 3 follow from the decomposition of
$L^{2k_2}\wedge L^{2k_2}\subset L^{2k_2}\wedge L^{2k_2}\wedge
L^{2k_2}$. We verified that this is so not only in Feigin's case but
for all the above-considered algebras except $\fe_6$, $\fe_7$ and
$\fe_8$: for the latter one should consider the whole $L^{2k_2}\wedge
L^{2k_2}\wedge L^{2k_2}$, cf. \cite{GL1}. Theorem 4.3 is a direct
superization of Theorem 2.6.
\ssec{7.2. Problems} 1) How to present $\fo(2n)$ and $\fosp(2m|2n)$?
One can select $z$ as suggested in sec. 1.1. Clearly, the form of
$z$ (hence, relations of type 1) and the number of relations of type 3
depend on $n$, in contradistinction to the algebras considered above.
Besides, the relations are not as neat as for the above algebras. We
should, perhaps, have taken the generators as for $\fo(2n-1)$ and added
a generator from $L^{2n-2}$. We have no guiding idea; to try at
random is frustrating, cf. the relations we got for
$\fosp_\alpha(4|2)$.
2) We could have similarly realized the Lie algebra $\fsl(\lambda)$ as
the quotient of $U(\fvect(1))$, where $\fvect(1)=\fder\Cee [u]$.
However, $U(\fvect(1))$ has no center except the constants. What are
the generators of the ideal --- the analog of (2.0) --- modulo which
we should factorize $U(\fvect(1))$ in order to get $\fsl(\lambda)$?
(Observe that in case $U(\fg)$, where $\fg$ is a simple finite
dimensional Lie superalgebra such that $Z(U(\fg))$ is not noetherian,
the ideal --- the analog of (5.0) --- is, nevertheless, finitely
generated, cf. \cite{GL2}.)
3) Feigin realized $\fsl(*)$ on the space of functions on the open
cell of $\Cee P^1$, a hyperboloid, see \cite{F}. Examples of
\cite{DGS} are similarly realized. Give any realization of
$\fo/\fsp(*)$ and its superanalogs.
4) Other
problems are listed in sec. 8.1--8.3 below.
\ssec{7.3. Serre relations are more convenient than
ours} The following table represents the results of V.~Kornyak's
computations. $N_{GB}$ is the number of relations in the Gr\"obner basis,
$N_{comm}$ is the number of nonzero commutators in the multiplication
table, $D_{GB}$ is the maximum degree of the relations in the Gr\"obner basis,
and Space is measured in bytes. The
corresponding values for Chevalley generators/Serre relations are given in
brackets.
$$
\begin{matrix}
\text{alg}& N_{GB} & N_{comm} &D_{GB}& \text{Space}
& \text{Time}\\
\fsl(3) & 23 \; (24) & 21 \; (21) & 9\; (4) & 1300 \; (1188) &
<1 sec \; (<1 sec)\\
\fsl(4) & 69 \; (84) & 70 \; (60) &17\; (6) &3888 \; (3612) &
<1 sec \; (<1 sec)\\
\fsl(5) &193\; (218) & 220\; (126) &25\; (8) &13556 \; (8716) &
<1 sec \; (<1 sec)\\
\fsl(6) & 444\; (473) & 476\; (225) &33\; (10) &34692\; (18088) &
2 sec \; (<1 sec)\\
\fsl(7) & 893\; (908) & 937\; (363) &41\; (12) &80272\; (33700) &
10 sec \; (1 sec)\\
\fsl(8) &1615\; (1594) &1632\; (546) &49\; (14) &162128\; (57908) &34
sec \; (3 sec)\\
\fsl(9) &2705\; (2614) &2714\; (780) &57\; (16) &314056\; (93452) &109
sec\; (6 sec)\\
\fsl(10) &4263\; (4063) &4138\; (1071) &65\; (18) &534684\; (143456) &336
sec\; (10 sec)\\
\fsl(11) &6405\; (6048) & 6224\; (1425) &73\; (20) &921972\; (211428) &1058 sec
\; (19 sec)
\end{matrix}
$$
For the other Lie algebras, especially exceptional ones, the
comparison is even more unfavourable. Nevertheless, for
$\fsl(\lambda)$ with noninteger $\lambda$ there are only the Jacobson
generators and we have to use them.
\section*{\S 8. Lie algebras of higher ranks. The analogs of
the exponents and $W$-algebras}
The following Tables 8.1 and 8.2 introduce the generators for the Lie
algebras $U_\fg(\lambda)$ and the analogues of the exponents that
index the generalized $W$-algebras (for their definition in the
simplest cases from different points of view see \cite{FFr} and
\cite{KM}; we will follow the lines of \cite{KM}).
Recall that (see 0.1) one of the definitions of $U_\fg(\lambda)$ is as
the associative algebra generated by $\tilde \fg$; we loosely denote
it by $\tilde S^{\bcdot}(\tilde \fg)$. For the generators of
$LU_\fg(\lambda)$ we take the Chevalley generators of $\fg$ (since by
7.3 they are more convenient) and the lowest weight vectors of the
irreducible $\fg$-modules that constitute $\tilde S^2(\tilde \fg)$.
\ssec{8.1. The exponents} This section is just part of Table 1 from
\cite{OV} reproduced here for the convenience of the reader. Recall that
if $\fg$ is a simple (finite dimensional) Lie algebra, then $W=W_\fg$ is
its Weyl group, $l=\rk~\fg$, and $\alpha_1$, \dots , $\alpha_l$ are the simple
roots, $\alpha_0$ the lowest root; the $n_i$ are the coefficients of the
linear relation among the $\alpha_i$ normed so that $n_0=1$; let
$c=r_1\cdot \dots\cdot r_l$, where the $r_i$ are the reflections from $W$
associated with the simple roots, be the Killing--Coxeter element.
The order $h$ of $c$ (the Coxeter number) is equal to $\sum_{i\geq 0}
n_i$. The eigenvalues of $c$ are $\varepsilon^{k_{1}}$, \dots,
$\varepsilon^{k_{l}}$, where $\varepsilon$ is a primitive $h$-th root
of unity. The numbers $k_i$ are called the {\it exponents}; they are
the respective numbers $k_i$ from Table 1.1, e.g., $k_1=1$. The number
of roots of $\fg$ is equal to $l\sum_{i\geq 0} n_i= 2\sum_{i>0} k_i$.
The order of $W$ is equal to
$$
z\,l!\prod_{i>0} n_i=\prod_{i>0} (k_i+1),
$$
where $z$ is the number of 1's among the $n_i$'s for $i\geq 0$ (the number
$z$ is also equal to the order of the center $Z(G)$ of the simply
connected Lie group $G$ with the Lie algebra $\fg$). The algebra of
$W$-invariant polynomials on the maximal diagonalizable (Cartan)
subalgebra of $\fg$ is freely generated by homogeneous polynomials of
degrees $k_i+1$.
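The numerology above is easy to test in the simplest series. The sketch below (ours) does it for type $A_l$, i.e. $\fg=\fsl(l+1)$, where all the marks $n_i$ equal 1 and the exponents are $1,\dots,l$; here we count the 1's among all the marks, $n_0$ included, so that $z=l+1$, the order of the center of $SL(l+1)$.

```python
import math

# Type A_l = sl(l+1): marks n_0, ..., n_l all equal 1; exponents 1, ..., l.
for l in range(1, 8):
    n = [1] * (l + 1)                 # n_0, n_1, ..., n_l
    k = list(range(1, l + 1))         # the exponents
    h = sum(n)                        # Coxeter number = l + 1
    z = n.count(1)                    # 1's among all marks = l + 1 = |Z(SL(l+1))|
    W = math.factorial(l + 1)         # |W(A_l)| = (l + 1)!
    assert l * h == 2 * sum(k)                            # number of roots l(l+1)
    assert z * math.factorial(l) * math.prod(n[1:]) == W  # z * l! * prod n_i
    assert math.prod(ki + 1 for ki in k) == W             # prod (k_i + 1)
```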
We will use the following notation:
for finite dimensional irreducible representations of finite
dimensional simple Lie algebras, $R(\lambda)$ denotes
the irreducible representation with highest weight $\lambda$ and
$V(\lambda)$ the space of this representation;
$\rho=\frac12\sum\limits_{\alpha>0}\alpha$; equivalently, $\rho$ is the weight
such that $\rho(\alpha_i)=A_{ii}$ for each simple root $\alpha_i$.
The weights of the Lie algebras $\fo(2l)$ and $\fo(2l+1)$, $\fsp(2l)$
and $\ff_4$ ($l=4$) are expressed in terms of an orthonormal basis
$\varepsilon_1$, \dots , $\varepsilon_l$ of the space $\fh^*$ over
$\Qee$. The weights of the Lie algebras $\fsl(l+1)$ as well as
$\fe_7$, $\fe_8$ and $\fg_2$ ($l=7, 8$ and 2, respectively) are
expressed in terms of vectors $\varepsilon_1$, \dots ,
$\varepsilon_{l+1}$ of the space $\fh^*$ over $\Qee$ such that $\sum
\varepsilon_{i}=0$. For these vectors $(\varepsilon_{i},
\varepsilon_{i})=\frac{l}{l+1}$ and $(\varepsilon_{i},
\varepsilon_{j})=\frac{1}{l+1}\quad \text{for } i\neq j$. The indices
in the expression of any weight are assumed to be different.
The analogues of the exponents for $LU_{\fg}(\lambda)$ are the
highest weights of the representations that
constitute $\tilde S^k(\tilde \fg)$.
\begin{Problem} Interpret these exponents in terms of the analog of
the Weyl group of $LU_\fg(\lambda)$ in the sense of \cite{PS} and of
invariant polynomials on $LU_\fg(\lambda)$.
\end{Problem}
\ssec{8.2. Table. The Lie algebras $U_\fg(\lambda)$ as $\fg$-modules}
Columns 2 and 3 of this Table are derived from Table 5 in \cite{OV}.
Columns 4 and 5 are the results of a computer-aided study. To fill in the
gaps is a research problem, cf.~\cite{GL2} for the Lie algebras
of types other than $\fsl$.
$$
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|c|c|c|c|c|}
\hline
$\fg$&$\ad$&$\tilde S^2(\tilde \fg)$&$\tilde S^3(\tilde \fg)$&$\tilde S^k(\tilde \fg)$\cr
\hline
$\fsl(2)$&$R(2\pi)$&$R(4\pi)$&$R(6\pi)$&$R(2k\pi)$\cr
\hline
$\fsl(3)$&$R(\pi_1+\pi_2)$&$R(2\pi_1+2\pi_2)$&$R(3\pi_1+3\pi_2)$&$R(k\pi_1+k\pi_2)$\cr
&
&$R(\pi_1+\pi_2)$&$R(2\pi_1+2\pi_2)$&$R((k-1)\pi_1+(k-1)\pi_2)$\cr
\hline
\end{tabular}
$$
$$
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|c|c|c|c|c|}
\hline
$\fsl(4)$&$R(\pi_1+\pi_3)$&$R(2\pi_1+2\pi_3)$&$R(3\pi_1+3\pi_3)$ & \cr
& &$R(\pi_1+\pi_3)$&$R(2\pi_1+2\pi_3)$ & \cr
& &$R(2\pi_2)$ &$R(2\pi_1+\pi_2)$ & \cr
& & &$R(\pi_2+2\pi_3)$&\cr
& & &$R(\pi_1+\pi_3)$&\cr
& & &$R(\pi_1+2\pi_2+\pi_3)$&\cr
\hline
$\fsl(n+1)$&$R(\pi_1+\pi_n)$&$R(2\pi_1+2\pi_n)$&$R(3\pi_1+3\pi_n)$
&\quad\quad \cr
$n\geq 4$& &$R(\pi_1+\pi_n)$&$R(2\pi_1+2\pi_n)$& \cr
& &$R(\pi_2+\pi_{n-1})$&$R(2\pi_1+\pi_{n-1})$& \cr
& & &$R(\pi_2+\pi_{n-1})$&\cr
& & &$R(\pi_2+2\pi_n)$&\cr
& & &$R(\pi_1+\pi_n)$&\cr
& & &$R(\pi_1+\pi_2+\pi_{n-1}+\pi_n)$&\cr
\hline
\end{tabular}
$$
The generators of $LU_{\fg}(\lambda)$ are the Chevalley generators
$X^\pm_i$ of $\fg$ {\it and} the lowest weight vectors from $\tilde S^2$.
Denote the latter by $z_1$, $z_2$ (sometimes there is a third one,
$z_3$). Then the relations are (recall that $h_i=[X^+_i, X^-_i]$):

(type 0) the Serre relations in $\fg$;

(type 1) the relations between $X^\pm_i$ and $z_j$, namely:
$$
X^-_i(z_j)=0;\quad h_i(z_j)=\text{weight}_i(z_j);\quad (\ad
X^+_i)^{N_j}(z_j)=0,
$$
where the exponent $N_j$ is determined by the weight of $z_j$.
\begin{Problem} Give an explicit form of the relations of higher
types. \end{Problem}
\ssec{8.3. Tougher problems} Even if the explicit realization of the
exceptional Lie algebras by differential operators on the base affine
space were known at the moment, it is, nevertheless, a difficult
computer problem to fill in the blank spaces in the above table and
similar tables for Lie superalgebras. To make plausible conjectures
we have to compute $\tilde S^k(\tilde \fg)$ up to, at least, $k=4$.
Observe that for simple Lie algebras $\fg$ we have a remarkable
theorem by Kostant which states that $U_{\fg}(\lambda)$ contains
every finite dimensional irreducible $\fg$-module $V$ with
multiplicity equal to the multiplicity of the zero weight in $V$; in
view of which only the $\fsl(2)$-line is complete.
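For the $\fsl(2)$-line Kostant's theorem is immediate to check: the table gives one copy of $R(2k\pi)$ in each $\tilde S^k(\tilde \fg)$, and the zero weight occurs exactly once in $V(2k)$, whose weights are $-2k, -2k+2, \dots, 2k$. A trivial sketch (ours):

```python
# g = sl(2): V(2k) has weights -2k, -2k+2, ..., 2k, so the zero-weight
# multiplicity is 1, matching the single copy of R(2k*pi) in the table.
for k in range(1, 12):
    weights = list(range(-2*k, 2*k + 1, 2))
    assert weights.count(0) == 1
```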
\section*{\S 9. A connection with integrable dynamical systems}
We recall the basic steps of the Khesin--Malikov construction and
then superize them.
\ssec{9.1. The Hamilton reduction} Let $(M^{2n}, \omega)$ be a
symplectic manifold with an action $act$ of a Lie group $G$ on $M$ by
symplectomorphisms (i.e., $G$ preserves $\omega$). The derivative of
the $G$-action gives rise to a Lie algebra homomorphism $a:
\fg=Lie(G)\longrightarrow\fh(2n)$. The action $act$, or rather, $a$
is called a {\it Poisson} one, if $a$ can be lifted to a Lie algebra
homomorphism $\tilde a: \fg\longrightarrow\fpo(2n)$, where the Poisson
algebra $\fpo(2n)$ is the nontrivial central extension of $\fh(2n)$.
For any Poisson $G$-action on $M$ there arises a $G$-equivariant map
$p: M\longrightarrow\fg^*$, called the {\it moment map}, given by the
formula
$$
\langle p(x), g\rangle=\tilde a(g)(x)\quad\text{for any}\quad x\in M,
g\in\fg.
$$
Fix $b\in\fg^*$; let $G_b\subset G$ be the stabilizer of $b$. Under
certain regularity conditions (see \cite{Ar}) $p^{-1}(b)/G_b$ is a
manifold. This manifold is endowed with the symplectic form
$$
\begin{matrix}
\omega(\bar v, \bar w)=\omega(v, w)\quad\text{for arbitrary
preimages}\quad v, w \;\text{of} \; \bar v, \bar w,\;
\text{respectively}\\
\text{with respect to the natural projection}\quad T(p^{-1}(b))\longrightarrow
T(p^{-1}(b)/G_b).
\end{matrix}
$$
The passage from $M$ to $p^{-1}(b)/G_b$ is called the {\it Hamilton
reduction}. In the above picture $M$ can be a {\it Poisson
manifold}, i.e., $\omega$ is allowed to degenerate on part of
$M$; the submanifolds on which $\omega$ is nondegenerate are
called {\it symplectic leaves}.
{\it Example}. Let $\fg=\fsl(n)$ and $M=\fg^*$, let $G$ be the group
$N$ of upper triangular matrices with 1's on the diagonal. The coadjoint
$N$-action on $\fg^*$ is a Poisson one, the moment map is the natural
projection $\fg^*\longrightarrow\fn^*$ and $\fg^*/N$ is a Poisson
manifold.
\ssec{9.2. The Drinfeld--Sokolov reduction} Let $\fg=\hat \fa^{(1)}$,
where $\fa$ is a simple finite dimensional Lie algebra (the case
$\fa=\fsl(n)$ is the one considered by Gelfand and Dickii), hat
denotes the Kac--Moody central extension. The elements of $M=\fg^*$
can be identified with the $\fa$-valued differential operators:
$$
(f(t)dt, az^*)\mapsto \bigl(tf(t)+at\frac{d}{dt}\bigr)\frac{dt}{t}.
$$
Let $N$ be the loop group with values in the group generated by
positive roots of $\fa$. For the point $b$ above take the element
$y\in\fa\subset\hat\fg^*$ described in \S 3. If $\fa=\fsl(n)$, we can
represent every element of $p^{-1}(b)/N$ in the form
$$
t\frac{d}{dt}+y+\begin{pmatrix} b_1(t)&\dots&b_n(t)\\
0&\dots&0\\
0&\dots&0\\
\end{pmatrix}\longleftrightarrow \frac{d^n}{d\varphi^n}+\tilde
b_1(\varphi)\frac{d^{n-1}}{d\varphi^{n-1}}+\dots+\tilde b_n(\varphi).
$$
To generalize the above to $\fsl(\lambda)$, Khesin and Zakharevich
described the Poisson--Lie structure on symbols of pseudodifferential
operators, see \cite{KM} and references therein. Let us recall the main formulas.
\ssec{9.3.1. The Poisson bracket on symbols of $\Psi DO$}
Set $D=\frac{d}{dx}$; define
$$
D^\lambda \circ f=fD^\lambda +\sum\limits_{k\geq 1}\binom{\lambda}{k}
f^{(k)}D^{(\lambda-k)}, \quad\text{where}\;
\binom{\lambda}{k}=\frac{\lambda(\lambda-1)\dots (\lambda-k+1)}{k!}.
$$
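For an integer exponent $\lambda$ the composition rule above is just the Leibniz formula $D^n(fg)=\sum_k\binom{n}{k}f^{(k)}g^{(n-k)}$, which gives a quick consistency test of the binomial coefficients. A sympy sketch (ours):

```python
import math
import sympy as sp

x = sp.symbols('x')

def binom(lam, k):
    # generalized binomial coefficient lam(lam-1)...(lam-k+1)/k!
    # (here used for integer lam only)
    return sp.Rational(math.prod(lam - i for i in range(k)), math.factorial(k))

lam = 3
f, g = x**2 + 1, x**5
lhs = sp.diff(f*g, x, lam)                       # (D^lam o f) applied to g
rhs = sum(binom(lam, k)*sp.diff(f, x, k)*sp.diff(g, x, lam - k)
          for k in range(lam + 1))               # f*D^lam(g) plus the lower terms
assert sp.expand(lhs - rhs) == 0
```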
Set
$$
G_\lambda=\left\{D^\lambda (1+\sum\limits_{k\geq 1}u_k(x)D^{(-k)})\right\}
$$
and
$$
TG_\lambda=\left\{\sum\limits_{k\geq 1}v_k(x)D^{(-k)}\right\}\circ
D^\lambda,\quad T^* G_\lambda=D^{-\lambda}\circ DO.
$$
For $X=D^{-\lambda}\circ\sum\limits_{k\geq 0}u_k(x)D^{(k)}\in
T^*G_\lambda$ and $L=\left(\sum\limits_{k\geq
1}v_k(x)D^{(-k)}\right)\circ D^\lambda\in TG_\lambda $ define the
pairing $\langle X, L\rangle$ to be
$$
\langle X, L\rangle=Tr(L\circ X),\quad\text{where}\quad Tr(\sum
w_k(x)D^{(k)})=\Res |_{x=0}w_{-1}.
$$
The Poisson bracket on the space of pseudodifferential symbols $\Psi
DS_\lambda$ is defined on linear functionals by the formula
$$
\{X, Y\}(L)=X(H_Y(L)),\quad\text{where}\quad H_Y(L)=(LY)_+L-L(YL)_+.
$$
\begin{Theorem} {\em (Khesin--Malikov)} For $\fa=\fsl(\lambda)$ in the
Drinfeld--Sokolov picture, the Poisson manifolds $p^{-1}(b)/N_b$ and
$\Psi DS_\lambda$ are isomorphic. Each element of a symplectic leaf
has a representative in the form
$$
t\frac{d}{dt}+y+\begin{pmatrix} b_1(t)&\dots&b_n(t)&\dots\\
0&\dots&0&\dots\\
0&\dots&0&\dots\\
\end{pmatrix}\longleftrightarrow D^\lambda \left(1+\sum\limits_{k\geq
1}\tilde b_k(\varphi)D^{(-k)}\right).
$$
\end{Theorem}
The Drinfeld--Sokolov construction \cite{DS}, as well as its
generalization to $\fsl(\lambda)$ and $\fo{/}\fsp(\lambda)$ (\cite{KM}),
hinges on a certain element that can be identified with the image of
$X^+\in \fsl(2)$ under the principal embedding. For the case of
higher ranks this is the image in $U_\fg(\lambda)$ of the element
$y\in \fg$ described in \S 3 for Lie algebras. In $\fsl(\lambda)$ and
$\fo{/}\fsp(\lambda)$ this image is just $\frac{d}{dx}$ (or the matrix
whose only nonzero entries are the 1's under the main diagonal in the
realization of $\fsl(\lambda)$ and $\fo{/}\fsp(\lambda)$ by matrices).
\ssec{9.4. Superization}
\ssec{9.4.1. Basics} Further facts from Linear Algebra in Superspaces.
The {\it tensor algebra} $T(V)$ of the superspace
$V$ is naturally defined:
$T(V)=\bigoplus_{n\geq 0} T^n(V)$, where $T^0(V)=k$ and
$T^n(V)=V\otimes \dots \otimes V$ ($n$ factors) for $n>0$.
The {\it symmetric algebra} of the superspace $V$ is $S(V)=T(V)/I$,
where $I$ is the two-sided ideal generated by $v_1\otimes
v_2-(-1)^{p(v_1)p(v_2)}v_2\otimes v_1$ for $v_1, v_2\in V$.
The {\it exterior algebra} of the superspace $V$ is $E(V)=S(\Pi(V))$.
Clearly, both the exterior and symmetric algebras of the superspace
$V$ are supercommutative superalgebras. It is worthwhile to mention
that if $V_{\ev }\neq 0$ and $V_{\od}\neq 0$, then both $E(V)$ and
$S(V)$ are infinite dimensional.
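The last remark is easy to quantify: under the standard identification of $S(V)$ with the tensor product of a polynomial algebra on the even part and an exterior algebra on the odd part (our bookkeeping, not the text's), the dimension of $S^n(V)$ for $V$ of superdimension $(p|q)$ can be counted directly, and it never vanishes once both $p\geq 1$ and $q\geq 1$:

```python
from math import comb

def dim_Sn(p, q, n):
    # dim S^n(V) for sdim V = (p|q): a degree-b exterior part in the q odd
    # generators times a degree-(n-b) symmetric part in the p even ones.
    return sum(comb(q, b) * comb(p + (n - b) - 1, n - b)
               for b in range(min(n, q) + 1))

assert dim_Sn(3, 0, 2) == 6              # purely even: Sym^2 of a 3-dim space
assert dim_Sn(0, 3, 4) == 0              # purely odd: Lambda^4 of a 3-dim space
# with both parts nonzero no S^n vanishes, so S(V) is infinite dimensional:
assert all(dim_Sn(1, 1, n) > 0 for n in range(1, 60))
```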
A {\it Lie superalgebra} is defined with Sign Rule applied to the
definition of a Lie algebra. Its multiplication is called {\it
bracket} and is usually denoted by $[\cdot, \cdot]$ or $\{\cdot ,
\cdot\}$. If, however, we try to use this definition in attempts to
apply the standard group-theoretical methods to differential equations
on supermanifolds we will find ourselves at a loss: the supergroups
and their modules are objects from different categories! Accordingly,
the following (equivalent to the conventional, ``naive'' one, see
\cite{L1})
definition becomes useful: a {\it Lie superalgebra} is a superalgebra
$\fg$ (defined over a field or, more generally, a supercommutative
superalgebra $k$); the bracket should satisfy the following
conditions: $[X, X]=0$ and $[Y, [Y, Y]]=0$ for any $X\in (C\otimes
\fg)_{\ev }$ and $Y\in (C\otimes \fg)_{\od}$ and any supercommutative
superalgebra $C$ (the bracket in $C\otimes \fg$ is defined via
$C$-linearity and Sign Rule).
With an associative (super)algebra $A$ we associate Lie
(super)algebras (1) $A_L$ with the same (super)space $A$ and the
multiplication $(a, b)\mapsto [a, b]$ and (2) $\fder A$, the algebra
of derivations of $A$, defined via Sign and Leibniz Rules.
From a Lie superalgebra $\fg$ we construct an associative superalgebra
$U(\fg)$, called the {\it universal enveloping algebra} of the Lie
superalgebra $\fg$ by setting $U(\fg)= T(\fg)/I$, where $I$ is the
two-sided ideal generated by the elements $x\otimes
y-(-1)^{p(x)p(y)}y\otimes x-[x, y]$ for $x, y\in \fg$.
The {\it Poincar\'{e}--Birkhoff--Witt theorem} for Lie algebras
extends to Lie superalgebras with the same proof (beware Sign Rule)
and reads as follows:
{\it if $\{X_i\}$ is a basis in $\fg_{\ev }$ and $\{Y_j\}$ is a basis
in $\fg_{\od}$, then the monomials $X^{n_1}_{i_1}\dots
X^{n_r}_{i_r}Y^{\varepsilon_1}_{j_1}\dots Y^{\varepsilon_s}_{j_s}$,
where $n_i\in \Zee^+$ and $\varepsilon_j = 0, 1$, constitute a basis
in the space $U(\fg)$.}
A superspace $M$ is called a {\it left module} over a superalgebra $A$
(or a {\it left $A$-module}) if there is given an even map {\it act}:
$A\otimes M\rightarrow M$ such that if $A$ is an associative
superalgebra with unit, then $(ab)m=a(bm)$ and $1m=m$ and if $A$ is a
Lie superalgebra, then $[a, b]m=a(bm)-(-1)^{p(a)p(b)}b(am)$ for any
$a, b \in A$ and $m\in M$. The definition of a {\it right $A$-module}
is similar.
\begin{rem*}{Convention} We endow every module $M$ over a
supercommutative superalgebra $C$ with a two-sided module structure:
the left module structure is recovered from the right module one and
vice versa according to the formula $cm=(-1)^{p(m)p(c)}mc$ for any
$m\in M$ and $c\in C$. Such modules will be called $C$-{\it modules}.
(Over $C$, there are {\it two} canonical ways to define a two-sided
module structure, see \cite{L2}; the meaning of such an abundance is
obscure.)
\end{rem*}
The functor $\Pi$ is, actually, tensoring by $\Pi(\Zee)$. So there
are two ways to apply $\Pi$ to $C$-modules: to get $\Pi
(M)=\Pi(\Zee)\otimes_{\Zee}M$ and $(M)\Pi=M\otimes_{\Zee}\Pi(\Zee)$.
The two-sided module structures on $\Pi (M)$ and $(M)\Pi$ are given
via Sign Rule.
Sometimes, instead of the map {\it act} a morphism $\rho : A
\rightarrow \End M$ is defined if $A$ is an associative superalgebra
(or $\rho : A\longrightarrow (\End M)_L$ if $A$ is a Lie
superalgebra); $\rho$ is called a {\it representation} of $A$ in $M$.
The simplest (in a sense) modules are those which are {\it
irreducible}. We distinguish {\it irreducible modules of $G$-type}
(general); these do not contain invariant subspaces different from $0$
and the whole module; and their \lq\lq odd" counterparts, {\it
irreducible modules of $Q$-type}, which do contain an invariant
subspace which, however, is not a subsuperspace. Consequently, {\it
Schur's lemma} states that {\sl over $\Cee$ the centralizer of a set
of irreducible operators is either $\Cee$ or} $\Cee \otimes \Cee
^s=Q(1; \Cee)$, see the definition of the superalgebras $Q$ below.
The next in terms of complexity are {\it indecomposable} modules,
which cannot be split into the direct sum of invariant submodules.
A $C$-module is called {\it free} if it is isomorphic to a module of
the form $C\oplus \dots \oplus C\oplus\Pi (C)\oplus \dots \oplus\Pi
(C)$ ($C$ occurs $r$ times, $\Pi (C)$ occurs $s$ times).
The {\it rank} of a free $C$-module $M$ is the element $\rk
M=r+s\varepsilon$ from the ring $\Zee [\varepsilon]/
(\varepsilon^2-1)$. Over a field, $C=k$, we usually write just $\dim
M=(r, s)$ or $r|s$ and call this pair the {\it superdimension} of $M$.
The module $M^*=\Hom_C(M, C)$ is called {\it dual} to a $C$-module
$M$. If $(\cdot , \cdot)$ is the pairing of modules $M^*$ and $M$,
then to each operator $F\in \Hom_C(M, N)$, where $M$ and $N$ are
$C$-modules, there corresponds the dual operator $F^*\in \Hom_C(N^*,
M^*)$ defined by the formula
$$
(F(m), n^*)=(-1)^{p(F)p(m)}(m, F^*(n^*)) \;\text{ for any }\; m\in M,
\ n^*\in N^*.
$$
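In particular (a standard consequence of this sign convention, stated
here for the reader's convenience), for composable operators $F\in
\Hom_C(M, N)$ and $G\in \Hom_C(L, M)$ we get
$$
(F\circ G)^*=(-1)^{p(F)p(G)}G^*\circ F^*,
$$
so dualization reverses composition up to the Sign Rule.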
Over a supercommutative superalgebra $C$ a {\it supermatrix} is a
matrix with entries from $C$; the parity of the matrix whose only
nonzero entry is the $(i, j)$-th element $c$ is equal to $p_i+p_j+p(c)$.
Denote by $\Mat(\Par; C)$ the set of $\Par\times \Par$ matrices with
entries from a supercommutative superalgebra $C$.
The even invertible elements from $\Mat(\Par; C)$ constitute the {\it
general linear group} $GL(\Par ; C)$. Put $GQ(\Par; C)=Q(\Par; C)\cap
GL(\Par; C)$.
On the group $GL(\Par;C)$ an analogue of the determinant is defined;
it is called the {\it Berezinian} (in honour of F.~A.~Berezin who
discovered it). In the standard format the explicit formula for the
Berezinian is:
$$
\Ber\begin{pmatrix}
A & B\\
D & E\end{pmatrix}=\det (A-BE^{-1}D)\det E^{-1}.
$$
For the matrices from $GL(\Par; C)$ the identity $\Ber\ XY=\Ber\
X\cdot \Ber\ Y$ holds, i.e., $\Ber : GL(\Par; C)\rightarrow GL(1|0;
C)=GL(0|1; C)$ is a group homomorphism. Set $SL(\Par; C)=\{X \in
GL(\Par; C): \Ber\ X=1\}$. The {\it orthosymplectic} group of
automorphisms of the bilinear form with the even canonical matrix is
denoted (in the standard format) by $Osp(n|2m; C)$.
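For instance (a worked illustration in the smallest format, $1|1$,
under the assumption that $a, d$ are even and invertible while
$\beta, \gamma$ are odd), the formula reduces to
$$
\Ber\begin{pmatrix}
a & \beta\\
\gamma & d\end{pmatrix}=(a-\beta d^{-1}\gamma)d^{-1}
=ad^{-1}-\beta\gamma d^{-2},
$$
so for $\beta=\gamma=0$ we recover the quotient $ad^{-1}$, consistent
with the multiplicativity $\Ber\ XY=\Ber\ X\cdot \Ber\ Y$.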
\ssec{9.4.2. Pseudodifferential operators on the supercircle.
Residues} Let $V$ be a superspace. For $\theta=(\theta_1, \dots ,
\theta_n)$ set
$$
\begin{matrix}
V[x, \theta]=V\otimes \Kee [x, \theta];\; V[x^{-1}, x, \theta]=V\otimes
\Kee [x^{-1}, x, \theta];\\
V[[x^{-1}, \theta]]=V\otimes \Kee [[x^{-1},
\theta]];\\
V((x, \theta))=V\otimes \Kee [[x^{-1}]][x, \theta].\end{matrix}
$$
We call $V((x, \theta))x^\lambda$ the space of {\it pseudodifferential
symbols}. Usually, $V$ is a Lie (super)algebra. Such symbols correspond to
pseudodifferential operators (pdo) of the form
$$
\sum\limits_{i=-\infty}^n\sum\limits_{k_{0}+\dots+k_{n}=i}
a_i(\partial_x)^{k_{0}}\theta_1^{k_{1}}\dots \theta_n^{k_{n}},
$$
where $k_i=0$ or $1$ for $i>0$ and $a_i=a_i(x, \theta)\in V$.
For any $P=\sum\limits_{i\leq m} P_ix ^i\theta_0^k\theta^j\in V((x, \theta))$
we call
$P_{+}=\sum\limits_{i\geq 0} P_ix ^i\theta_0^k\theta^j$ the {\it
differential part} of $P$ and
$P_{-}=\sum\limits_{i< 0} P_ix ^i\theta_0^k\theta^j$ the {\it integral}
part of $P$.
The space $\Psi DO$ of pdos is, clearly, a left module over the
algebra $\cF $ of functions. Define the left $\Psi DO$-action on $\cF
$ via the Leibniz formula, thus making $\Psi DO$ into a superalgebra.
Define the involution in the superalgebra $\Psi DO$ by setting
$$
(a(t, \theta)D^i\tilde D^j)^*=(-1)^{jip(\tilde D)p(D)}\tilde D^jD^ia^*(x,
\theta).
$$
The following fact is somewhat unexpected. If $D$ is an odd
differential operator, then $D^2$ is well-defined as $\frac{1}{2}[D,
D]$. Hence, we can consider the set $V((x, \theta))x^\lambda$ for an
{\it odd} $x$! Therefore, there are two types of pdos: {\it contact}
ones, when $D^2\neq 0$ for odd $D$'s and general ones, when all odd
$D$'s are nilpotent.
For the definition of the distinguished stringy superalgebras crucial in
what follows, see \cite{GLS1}.
\begin{rem*}{Conjecture} There exists a residue for all distinguished
dimensions, i.e. for the contact type pdos in dimensions $1|n$ for
$n\leq 4$ and for the general pdos in dimensions $1|n$ for $n\leq 2$.
\end{rem*}
So far, however, the residue was defined only for contact type pdos, of
$\fk^L$ type, and only for $n=1$ at that.
Let us extend \cite{MR} and define the {\it residue} of $P=\sum\limits_{i\leq m}
P_ix ^i\theta_0^k\theta^j\in V((x, \theta_0, \theta))$ for $n=1$. We
can do it thanks to the following exceptional property of $\fk^L(1|1)$
and $\fk^M(1|1)$. Indeed, over $\fk^L(1|1)$, the volume form \lq\lq
$dt\pder{\theta}$" is, more or less, $d\theta$: consider the quotient
$\Omega^1/\cF\alpha$, where $\alpha$ is the contact form preserved by
$\fk^L(1|1)$; similarly, over $\fk^M(1|1)$, the transformation rules
of \lq\lq $dt\pder{\theta}$" and $\tilde\alpha$, where $\tilde\alpha$ is the
contact form preserved by $\fk^M(1|1)$, are identical.
Therefore, define the residue by the formula
$$
\Res~P= \text{ coefficient of }\frac{\theta}{x}\text{
in the expansion of } P_{-1}.
$$
\begin{rem*}{Remark} Manin and Radul \cite{MR} considered the
Kadomtsev--Petviashvili hierarchy associated with $\fns$, i.e., for
$D=K_\theta$. The formula for the residue allows one to directly
generalize their result and construct a similar hierarchy associated
with $\fr$, i.e., for $D=\tilde K_\theta$.
\end{rem*}
This new phenomenon --- an invertible odd symbol --- doubles the old
picture: let $\theta_0$ be the symbol of $D$, and let $x$ be the
symbol of the differential operator $D^2$. We see that the case of
the odd $D$ reduces to either $V((x, \theta_0, \theta))x^\lambda$ or
$V((x, \theta_0^{-1}, \theta))x^\lambda$.
\ssec{9.5. Continuous Toda lattices} Khesin and Malikov \cite{KM}
considered straightforward generalizations of the Toda lattices ---
the dynamical systems on the orbits of the coadjoint representation of
a simple finite-dimensional Lie group $G$, defined as follows. Let
$\cX$ be the image of $X^+\in\fsl(2)$ in $\fg=Lie(G)$ under the
principal embedding. Having identified $\fg$ with $\fg^{*}$ with the
help of the invariant nondegenerate form, consider the orbit
$\cO_{\cX}$. On $\cO_{\cX}$, the traces $H_{i}(A)=\tr(A+\cX)^i$ are
the commuting Hamiltonians.
In our constructions we only have to consider in $LU_{\fg}(\lambda)$
either (for Lie algebras) the image of $\cX$ or (for Lie
superalgebras) the image of $\nabla^+\in\fosp(1|2)$ under the
superprincipal embedding of $\fosp(1|2)$. For superalgebras we also
have to replace trace with the supertrace.
For the general description of dynamical systems on the orbits of the
coadjoint representations of Lie supergroups see \cite{LST}. A
possibility of odd mechanics is pointed out in \cite{LST} and in the
subsequent paper by R.~Yu.~Kirillova in the same Proceedings. To take
such a possibility into account, we have to consider analogs of the
principal embeddings for $\fsq(2)$. This is a full-time job; its
results will be considered elsewhere.
\end{document}
\begin{document}
\title{Quantum Relief Algorithm
}
\author{Wen-Jie Liu \and
Pei-Pei Gao \and
Wen-Bin Yu \and
Zhi-Guo Qu \and
Ching-Nung Yang
}
\institute{
W.-J. Liu
\at Jiangsu Engineering Center of Network Monitoring, Nanjing University of Information Science \& Technology, Nanjing 210044, P.R.China
\\
\email{[email protected]}
\and
W.-J. Liu \and
P.-P. Gao \and
W.-B. Yu \and
Z.-G. Qu
\at School of Computer and Software, Nanjing University of Information Science \& Technology, Nanjing 210044, P.R.China
\and
C.-N. Yang
\at Department of Computer Science and Information Engineering, National Dong Hwa University, Hualien 974, Taiwan
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
Relief algorithm, proposed by Kira and Rendell, is a feature selection algorithm used in binary classification, and its computational complexity increases remarkably with both the scale of samples and the number of features. In order to reduce the complexity, a quantum feature selection algorithm based on Relief algorithm, also called quantum Relief algorithm, is proposed. In the algorithm, all features of each sample are superposed in a certain quantum state through the \emph{CMP} and \emph{rotation} operations, then the \emph{swap test} and measurement are applied on this state to get the similarity between two samples. After that, \emph{Near-hit} and \emph{Near-miss} are obtained by calculating the maximal similarity, and further applied to update the feature weight vector $WT$ to get $WT'$, which determines the relevant features with the threshold $\tau$. In order to verify our algorithm, a simulation experiment based on IBM Q with a simple example is performed. Efficiency analysis shows the computational complexity of our proposed algorithm is \emph{O(M)}, while the complexity of the original Relief algorithm is \emph{O(NM)}, where $N$ is the number of features for each sample, and $M$ is the size of the sample set. Evidently, our quantum Relief algorithm achieves a significant speedup over the classical one.
\keywords{quantum Relief algorithm \and feature selection \and \emph{CMP} operation \and \emph{rotation} operation \and \emph{swap test} \and IBM Q}
\end{abstract}
\section{Introduction}
\label{intro}
Machine learning refers to an area of computer science in which patterns are derived (``learned'') from data with the goal of making sense of previously unknown inputs. As part of both artificial intelligence and statistics, machine learning algorithms process large amounts of information for tasks that come naturally to the human brain, such as image recognition, pattern identification and strategy optimization.
Machine learning tasks are typically classified into two broad categories \cite{1}, supervised and unsupervised machine learning, depending on whether there is a learning ``signal" or ``feedback" available to a learning system. In supervised machine learning, the learner is provided with a set of training examples, with features presented in the form of high-dimensional vectors and with corresponding labels to mark their categories. In unsupervised machine learning, on the contrary, no labels are given to the learning algorithm, leaving it on its own to find structure in the data. The core mathematical task for both supervised and unsupervised machine learning algorithms is to evaluate distances and inner products between high-dimensional vectors, which requires a time proportional to the size of the vectors on classical computers. As we know, calculating distances in high-dimensional settings suffers from the ``curse of dimensionality'' \cite{2}. One possible solution is dimension reduction \cite{3}, and another is feature selection \cite{4}\cite{5}.
Relief algorithm is one of the most representative feature selection algorithms; it was first proposed by Kira et al. \cite{6}. The algorithm devises a ``relevant statistic'' to weigh the importance of each feature, and it has been widely applied in many fields, such as hand gesture recognition \cite{7}, electricity price forecasting \cite{8} and power system transient stability assessment \cite{9}. However, Relief algorithm occupies an increasingly large amount of computational resources as the numbers of samples and features grow, which restricts its application.
Since the concept of the quantum computer was proposed by the famous physicist Feynman \cite{10}, a number of remarkable outcomes have been obtained. For example, Deutsch's algorithms \cite{11}\cite{12} embody the superiority of quantum parallel calculation, Shor's algorithm \cite{13} solves the problem of integer factorization in polynomial time, and Grover's algorithm \cite{14} achieves a quadratic speedup for the problem of searching through an unstructured search space. With the properties of superposition and entanglement, quantum computation has a potential advantage in dealing with high-dimensional vectors, which attracts researchers to apply quantum mechanics to classical machine learning tasks such as quantum pattern matching \cite{15}, quantum probably approximately correct learning \cite{16}, feedback learning for quantum measurement \cite{17}, quantum binary classifiers \cite{18}\cite{19}, and quantum support vector machines \cite{20}.
Even in quantum machine learning, we still face the ``curse of dimensionality'', so dimension reduction or feature selection is a necessary preliminary step before training on high-dimensional samples. Inspired by the idea of computing the inner product between two vectors \cite{21}\cite{22}, we propose a Relief-based quantum parallel algorithm, also named quantum Relief algorithm, to perform the feature selection effectively.
The outline of this paper is as follows. The classic Relief algorithm is briefly reviewed in Sect. 2, the quantum Relief algorithm is proposed in detail in Sect. 3, and a simulation experiment based on IBM Q with a simple example is given in Sect. 4. Subsequently, the efficiency of the algorithm is analyzed in Sect. 5, and a brief conclusion and the outlook of our work are given in the last section.
\section{Review of Relief algorithm}
\label{sec:2}
Relief algorithm is a feature selection algorithm used in binary classification (generalizable to polynomial classification by decomposition into a number of binary problems) proposed by Kira and Rendell \cite{6} in 1992. It is efficient in estimating features according to how well their values distinguish among samples that are near each other.
We can divide an $M$-sample set into two vector sets: $A{\rm{ = }}\left\{ {{v_j}{\rm{ | }}{v_j} \in {{\mathbb{R}}^{\rm{N}}}, j = 1,2, \cdots {M_1}} \right\}$, $B{\rm{ = }}\left\{ {{w_k}{\rm{ | }}{w_k} \in {{\mathbb{R}}^{\rm{N}}}, k = 1,2, \cdots {M_2}} \right\}$, where ${v_j}$, ${w_k}$ are $N$-feature samples: ${v_j} = {\left( {{v_{j1}},{v_{j2}}, \cdots ,{v_{jN}}} \right)^\mathsf{T}}$, ${w_k} = {\left( {{w_{k1}},{w_{k2}}, \cdots {w_{kN}}} \right)^\mathsf{T}}$, ${v_{j1}}, \cdots ,{v_{jN}},{w_{k1}}, \cdots ,{w_{kN}} \in \{ 0,1\} $, and the weight vector of $N$ features $WT = {\left( {w{t_1},w{t_2}, \cdots ,w{t_N}} \right)^\mathsf{T}}$ is initialized to all zeros. Suppose the upper limit of iteration is $T$, and the relevance threshold (that differentiate the relevant and irrelevant features) is $\tau (0\le \tau \le 1)$. The process of Relief algorithm is described in Algorithm~\ref{alg:one}.
\begin{algorithm}[t]
\SetAlgoNoLine
Init $WT = {\left( {0, \cdots ,0} \right)^\mathsf{T}}$. \\
\For{ t=1 to $T$}{
Pick a sample $u$ randomly. \\
Find the closest samples \emph{Near-hit} and \emph{Near-miss} in the two classes $A$ and $B$.\\
\For {i=1 to $N$}{
$WT\left[ i \right] = WT\left[ i \right]{\rm{ - }}\emph{diff}{\left( {i,u,\emph{Near-hit}} \right)^2}{\rm{ + }}\emph{diff}{\left( {i,u,\emph{Near-miss}} \right)^2}$.\\
}
}
Select the most relevant features according to $WT$ and $\tau$.\\
\caption{Relief($A$, $B$, $T$, $\tau$)}
\label{alg:one}
\end{algorithm}
At each iteration, pick a random sample $u$, and then select the samples closest to $u$ (by $N$-dimensional Euclidean distance) from each class. The closest same-class sample is called \emph{Near-hit}, and the closest different-class sample is called \emph{Near-miss}. Update the weight vector such that,
\begin{equation}
\label{eqn:01}
WT\left[ i \right] = WT\left[ i \right]{\rm{ - }}\emph{diff}{\left( {i,u,\emph{Near-hit}} \right)^2}{\rm{ + }}\emph{diff}{\left( {i,u,\emph{Near-miss}} \right)^2},
\end{equation}
where the function $\emph{diff}(i, u, v)$ is defined as below,
\begin{equation}
\label{eqn:02}
diff(i,u,v) = \left\{
\begin{array}{ll}
0 & {u_i} = {v_i} \\
1 & {u_i} \ne {v_i}
\end{array}
\right.
\end{equation}
Relief generalizes to polynomial classification by decomposition into a number of binary problems. However, as the scale of samples and the number of features increase, its efficiency drastically declines.
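As a concrete sketch, the classical procedure of Algorithm~\ref{alg:one} can be rendered as follows for binary features; the tie-breaking by \texttt{min} and the random choice of the class of $u$ are illustrative assumptions, not part of the original formulation.

```python
import random

def diff(i, u, v):
    # Eq. (2): 0 if the i-th feature values agree, 1 otherwise.
    return 0 if u[i] == v[i] else 1

def distance(u, v):
    # For binary features the squared Euclidean distance equals the Hamming distance.
    return sum(a != b for a, b in zip(u, v))

def relief(A, B, T, tau, seed=0):
    """Classical Relief for two classes of binary N-feature samples.
    Returns the indices of features whose mean weight reaches tau."""
    rng = random.Random(seed)
    N = len(A[0])
    WT = [0.0] * N
    for _ in range(T):
        u = rng.choice(A + B)                      # pick a sample at random
        own, other = (A, B) if u in A else (B, A)
        near_hit = min((v for v in own if v != u), key=lambda v: distance(u, v))
        near_miss = min(other, key=lambda v: distance(u, v))
        for i in range(N):
            WT[i] += -diff(i, u, near_hit) ** 2 + diff(i, u, near_miss) ** 2
    return [i for i, w in enumerate(WT) if w / T >= tau]
```

With the four samples of Sect. 4, features $F_0$ and $F_1$ (which separate the classes) are selected, while $F_2$ and $F_3$ are discarded for any threshold $\tau>0$.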
\section{Quantum Relief Algorithm}
\label{sec:3}
In order to handle the problem of large numbers of samples and features, we propose a quantum Relief algorithm. Suppose the sample sets $A = \left\{ {{v_j},j = 1,2, \cdots {M_1}} \right\}$, $B = \left\{ {{w_k},k = 1,2, \cdots {M_2}} \right\}$, the weight vector $WT$, the upper limit $T$ and the threshold $\tau$ are the same as in the classical Relief algorithm defined in Sect. 2. Different from the classical one, all the features of each sample are represented as a quantum superposition state, and the similarity between two samples can be calculated in parallel. Algorithm~\ref{alg:two} shows the detailed procedure of the quantum Relief algorithm.
\begin{algorithm}[t]
\SetAlgoNoLine
Init $WT = {\left( {0, \cdots ,0} \right)^\mathsf{T}}$.\\
Prepare the states for sample sets $A$ and $B$ through \emph{CMP} and \emph{rotation} operations,
$\begin{array}{l}
{\left| {{\phi _A}} \right\rangle _j} = \frac{1}{{\sqrt N }}\left| j \right\rangle \sum\limits_{i = 0}^{N-1} {\left| {i} \right\rangle \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{v_{ji}}} \right|}^2}} \left| 0 \right\rangle + {v_{ji}}\left| 1 \right\rangle } \right)} \\
{\left| {{\phi _B}} \right\rangle _k} = \frac{1}{{\sqrt N }}\left| k \right\rangle \sum\limits_{i = 0}^{N-1} {\left| {i} \right\rangle \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{w_{ki}}} \right|}^2}} \left| 0 \right\rangle + {w_{ki}}\left| 1 \right\rangle } \right)}
\end{array}$.\\
\For{t=1 to $T$}{
Select a state $\left| \phi \right\rangle$ randomly from the state set $\left\{ {{{\left| {{\phi _A}} \right\rangle }_j}{\kern 1pt} } \right\}$ or ${\left\{ {{{\left| {{\phi _B}} \right\rangle }_k}{\kern 1pt} } \right\}}$.\\
Perform a swap operation on $\left| \phi \right\rangle$ to get $\left| \varphi \right\rangle {\text{ = }}\frac{{\text{1}}}{{\sqrt N }}\left| l \right\rangle \sum\limits_{i = 0}^{N-1} {\left| i \right\rangle \left( {\sqrt {1 - {{\left| {{u_i}} \right|}^2}} \left| 0 \right\rangle + {u_i}\left| 1 \right\rangle } \right)\left| 1 \right\rangle } $.\\
Get $\begin{array}{l}
{\left| {\left\langle {u} \mathrel{\left | {\vphantom {u {{v_j}}}} \right. \kern-\nulldelimiterspace} {{{v_j}}} \right\rangle } \right|^2} = \left( {1 - 2P_j^{l\left( A \right)}\left( 1 \right)} \right){N^2}\\
{\left| {\left\langle {u} \mathrel{\left | {\vphantom {u {{w_k}}}} \right. \kern-\nulldelimiterspace} {{{w_k}}} \right\rangle } \right|^2} = \left( {1 - 2P_k^{l\left( B \right)}\left( 1 \right)} \right){N^2}
\end{array}$ through \emph{swap test} and measurement operation.\\
Obtain the maximum similarity: $\max \left\{ {{{\left| {\left\langle {u} \mathrel{\left | {\vphantom {u {{v_j}}}} \right. \kern-\nulldelimiterspace} {{{v_j}}} \right\rangle } \right|}^2}} \right\}$ and $\max \left\{ {{{\left| {\left\langle {u} \mathrel{\left | {\vphantom {u {{w_k}}}} \right. \kern-\nulldelimiterspace} {{{w_k}}} \right\rangle } \right|}^2}} \right\}$.\\
\eIf{$u$ belongs to class $A$}{
$\emph{Near-hit}={v_{\max }} , \emph{Near-miss}={\kern 1pt} {w_{\max }}$.\\
}{
$\emph{Near-hit}={w_{\max }}, \emph{Near-miss}={\kern 1pt} {v_{\max }}$.\\
}
\For{ i = 1 to $N$}{
$w{t_i} = w{t_i} - \emph{diff}{\left( {i, u, \emph{Near-hit}} \right)^2} + \emph{diff}{\left({i, u, \emph{Near-miss}} \right)^2}$.\\
}
}
$\overline{WT}$ = $\left( {{1 \mathord{\left/ {\vphantom {1 T}} \right. \kern-\nulldelimiterspace} T}} \right)WT$.\\
\For {i = 1 to $N$}{
\eIf{($\overline{WT}_i \ge \tau$)}{
the \emph{i-th} feature is relevant.\\
}{
the \emph{i-th} feature is not relevant.\\
}
}
\caption{QRelief($A$, $B$, $T$, $\tau $)}
\label{alg:two}
\end{algorithm}
\subsection{State preparation}
\label{sec:3_1}
At the beginning of the algorithm, the classical information is converted into quantum superposition states. Quantum superposition state sets $\left\{ {{{\left| {{\phi _A}} \right\rangle }_j}{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} |j = 1,2, \cdots ,{M_1}} \right\}$ and $\left\{ {{{\left| {{\phi _B}} \right\rangle }_k}{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} |k = 1,2, \cdots ,{M_2}} \right\}$, which store all the feature values of ${v_j} \in A$ and ${w_k} \in B$, are prepared as below,
\begin{equation}
\label{eqn:03}
\begin{array}{l}
{\left| {{\phi _A}} \right\rangle _j} = \frac{1}{{\sqrt N }}\left| j \right\rangle \sum\limits_{i = 0}^{N-1} {\left| {i} \right\rangle \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{v_{ji}}} \right|}^2}} \left| 0 \right\rangle + {v_{ji}}\left| 1 \right\rangle } \right)} \\
{\left| {{\phi _B}} \right\rangle _k} = \frac{1}{{\sqrt N }}\left| k \right\rangle \sum\limits_{i = 0}^{N-1} {\left| {i} \right\rangle \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{w_{ki}}} \right|}^2}} \left| 0 \right\rangle + {w_{ki}}\left| 1 \right\rangle } \right)}
\end{array},
\end{equation}
where ${v_{ji}}$ and ${w_{ki}}$ represent the \emph{i-th} feature values of the vectors ${v_{j}}$ and ${w_{k}}$, respectively. Suppose the initial state is $\left| j \right\rangle {\left| 0 \right\rangle ^{ \otimes n}}\left| 1 \right\rangle \left| 0 \right\rangle $ ($n = \left\lceil {{{\log }_2}\left( {N } \right)} \right\rceil $) and we want to prepare the state ${\left| {{\phi _A}} \right\rangle _j}$; its construction consists of the following steps.
First of all, the \emph{H} (i.e., Hadamard gate) and \emph{CMP} operations are performed on ${\left| 0 \right\rangle ^{ \otimes n}}$ to obtain the state ${\raise0.7ex\hbox{$1$} \!\mathord{\left/
{\vphantom {1 {\sqrt N }}}\right.\kern-\nulldelimiterspace}
\!\lower0.7ex\hbox{${\sqrt N }$}}\sum\limits_{i = 0}^{N-1} {\left| i \right\rangle } $,
\begin{equation}
\label{eqn:04}
{\left| 0 \right\rangle ^{ \otimes n}} \xrightarrow{\emph{H} {\kern 2pt} and {\kern 2pt}\emph{CMP}{\kern 2pt} operations}{{\rm{1}} \over {\sqrt N }}\sum\limits_{i = 0}^{N-1} {\left| i \right\rangle }.
\end{equation}
Fig.~\ref{fig:1} depicts the detailed circuit of these operations, where the \emph{CMP} operation is a key component which is used to tailor the state ${\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\sqrt {{2^n}} }}}\right.\kern-\nulldelimiterspace} \!\lower0.7ex\hbox{${\sqrt {{2^n}} }$}}\sum\limits_{i = 0}^{{2^n-1}} {\left| i \right\rangle } $ into the target state ${\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\sqrt N }}}\right.\kern-\nulldelimiterspace}\!\lower0.7ex\hbox{${\sqrt N }$}}\sum\limits_{i = 0}^{N-1} {\left| i \right\rangle } $, and its definition is
\begin{equation}
\label{eqn:05}
CMP\left| i \right\rangle \left| N \right\rangle \left| 0 \right\rangle = \left\{ {
\begin{array}{*{20}{c}}
{\left| i \right\rangle \left| N \right\rangle \left| 0 \right\rangle ,i < N}\\
{\left| i \right\rangle \left| N \right\rangle \left| 1 \right\rangle ,i \ge N}
\end{array}} .\right.
\end{equation}
After the \emph{CMP} operation, the basis states whose indices are not less than $N$ (i.e., whose last qubit is $\left| 1 \right\rangle $) will be clipped off.
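The effect of Eqs. (\ref{eqn:04})--(\ref{eqn:05}) can be checked with a small classical simulation (a sketch; reading ``clipped off'' as post-selecting the flag qubit on $\left| 0 \right\rangle$ is our assumption):

```python
import numpy as np

def tailor_superposition(N):
    """Simulate Hadamards on n = ceil(log2 N) qubits followed by CMP (Eq. (5)):
    the flag marks indices i >= N; dropping the marked branch and
    renormalizing leaves the uniform superposition over |0>..|N-1>."""
    n = int(np.ceil(np.log2(N)))
    amps = np.full(2 ** n, 2 ** (-n / 2))   # H on each qubit of |0...0>
    flag = np.arange(2 ** n) >= N           # CMP sets the flag to |1> here
    kept = np.where(flag, 0.0, amps)        # post-select flag = |0>
    return kept / np.linalg.norm(kept)

amps = tailor_superposition(3)              # amplitude 1/sqrt(3) on |0>,|1>,|2>
```

For $N$ a power of two the \emph{CMP} step changes nothing, and the Hadamards alone already produce the target state.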
\begin{figure}
\caption{The circuit of the quantum operations performed on ${\left| 0 \right\rangle ^{ \otimes n}}$.}
\label{fig:1}
\end{figure}
\begin{figure}
\caption{The circuit for performing \emph{CMP}, illustrated for single-qubit inputs $\left| {{i_q}} \right\rangle$.}
\label{fig:2}
\end{figure}
And then, a unitary rotation operation
\begin{equation}
\label{eqn:06}
{R_y}\left( {2{{\sin }^{ - 1}}{v_{ji}}} \right) = \left[ {
\begin{array}{*{20}{c}}
{\sqrt {1 - {{\left| {{v_{ji}}} \right|}^2}} }&{ - {v_{ji}}} \\
{{v_{ji}}}&{\sqrt {1 - {{\left| {{v_{ji}}} \right|}^2}} }
\end{array}} \right]
\end{equation}
is performed on the last qubit to obtain ${\left| {{\phi _A}} \right\rangle _j}{\kern 1pt} {\kern 1pt}$,
\begin{equation}
\label{eqn:07}
{{\rm{1}} \over {\sqrt N }}\left| j \right\rangle \sum\limits_{i = 0}^{N-1} {\left| {i} \right\rangle \left| 1 \right\rangle \left| 0 \right\rangle } \buildrel {{R_y}} \over
\longrightarrow {1 \over {\sqrt N }}\left| j \right\rangle \sum\limits_{i = 0}^{N-1} {\left| {i} \right\rangle \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{v_{ji}}} \right|}^2}} \left| 0 \right\rangle + {v_{ji}}\left| 1 \right\rangle } \right)} .
\end{equation}
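As a quick numerical check (a sketch, not the IBM Q implementation), applying the $R_y$ gate of Eq. (\ref{eqn:06}) to $\left| 0 \right\rangle$ indeed produces the qubit $\sqrt{1-|v_{ji}|^2}\left| 0 \right\rangle + v_{ji}\left| 1 \right\rangle$ of Eq. (\ref{eqn:07}):

```python
import numpy as np

def ry(v):
    # R_y(2 arcsin v) in matrix form, Eq. (6); requires |v| <= 1.
    c, s = np.sqrt(1 - v ** 2), v
    return np.array([[c, -s],
                     [s,  c]])

def encode_feature(v):
    # Rotate |0> into sqrt(1 - |v|^2)|0> + v|1>, as in Eq. (7).
    return ry(v) @ np.array([1.0, 0.0])
```

For binary features $v_{ji}\in\{0,1\}$ this yields exactly $\left| 0 \right\rangle$ or $\left| 1 \right\rangle$.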
\subsection{Similarity calculation}
\label{sec:3_2}
The similarity is a coefficient that describes how close two samples are, and it is obviously inversely related to the Euclidean distance. After the state preparation, the main work of this phase is to calculate the similarity between $\left| \phi \right\rangle$ and other states in state sets $\left\{ {{{\left| {{\phi _A}} \right\rangle }_j}{\kern 1pt} } \right\}$ and ${\left\{ {{{\left| {{\phi _B}} \right\rangle }_k}{\kern 1pt} } \right\}}$, where $\left| \phi \right\rangle$ is a state selected randomly from $\left\{ {{{\left| {{\phi _A}} \right\rangle }_j}{\kern 1pt} } \right\}$ or ${\left\{ {{{\left| {{\phi _B}} \right\rangle }_k}{\kern 1pt} } \right\}}$. For simplicity, suppose $\left| \phi \right\rangle$ is the \emph{l-th} state from $\left\{ {{{\left| {{\phi _A}} \right\rangle }_j}{\kern 1pt} } \right\}$,
\begin{equation}
\label{eqn:08}
\left| \phi \right\rangle {\text{ = }}\frac{1}{{\sqrt N }}\left| l \right\rangle \sum\limits_{i = 0}^{N-1} {\left| {i} \right\rangle \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{u_i}} \right|}^2}} \left| 0 \right\rangle + {u_i}\left| 1 \right\rangle } \right)} ,
\end{equation}
which corresponds to the chosen sample $u$ from sample set $A$ in the classical scenario. The detailed process is as follows.
First, a swap operation is performed on the last two qubits of $\left| \phi \right\rangle$ to obtain a new state,
\begin{equation}
\label{eqn:09}
\left| \varphi \right\rangle = \frac{1}{{\sqrt N }}\left| l \right\rangle \sum\limits_{i = 0}^{N-1} {\left| {i} \right\rangle \left( {\sqrt {1 - {{\left| {{u_i}} \right|}^2}} \left| 0 \right\rangle + {u_i}\left| 1 \right\rangle } \right)\left| 1 \right\rangle } .
\end{equation}
Second, the \emph{swap test} \cite{23} operation (its circuit is given in Fig.~\ref{fig:3}) is applied on $\left( {\left| \varphi \right\rangle ,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {{\left| {{\phi _A}} \right\rangle }_j}} \right)$,
\begin{equation}
\label{eqn:10}
\begin{array}{l}
\left| 0 \right\rangle \left| \varphi \right\rangle {\left| {{\phi _A}} \right\rangle _j}\xrightarrow{swap{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} test}\left[ {{1 \over 2}\left| 0 \right\rangle (\left| \varphi \right\rangle {{\left| \phi_A \right\rangle }_j} + {{\left| \phi_A \right\rangle }_j}\left| \varphi \right\rangle ) + {1 \over 2}\left| 1 \right\rangle (\left| \varphi \right\rangle {{\left| \phi_A \right\rangle }_j} - {{\left| \phi_A \right\rangle }_j}\left| \varphi \right\rangle )} \right]\\
\end{array}.
\end{equation}
\begin{figure}
\caption{The circuit of the \emph{swap test}.}
\label{fig:3}
\end{figure}
If we measure the first qubit with $\left| 1 \right\rangle \left\langle 1 \right| \otimes I \otimes I$, the probability that the measurement result is $\left| {\rm{1}} \right\rangle $ is
\begin{equation}
\label{eqn:11}
\begin{array}{l}
{P_j^l}(1) = \left[ {{1 \over 2}\left\langle 0 \right|(\left\langle \varphi \right|{{\left\langle \phi_A \right|}_j} + {{\left\langle \phi_A \right|}_j}\left\langle \varphi \right|) + {1 \over 2}\left\langle 1 \right|(\left\langle \varphi \right|{{\left\langle \phi_A \right|}_j} - {{\left\langle \phi_A \right|}_j}\left\langle \varphi \right|)} \right]\left| 1 \right\rangle \left\langle 1 \right| \otimes I \otimes I\\
\qquad \qquad\left[ {{1 \over 2}\left| 0 \right\rangle (\left| \varphi \right\rangle {{\left| \phi_A \right\rangle }_j} + {{\left| \phi_A \right\rangle }_j}\left| \varphi \right\rangle ) + {1 \over 2}\left| 1 \right\rangle (\left| \varphi \right\rangle {{\left| \phi_A \right\rangle }_j} - {{\left| \phi_A \right\rangle }_j}\left| \varphi \right\rangle )} \right]\\
\;\,\qquad= \left[ {{1 \over 2}\left\langle 1 \right|(\left\langle \varphi \right|{{\left\langle \phi_A \right|}_j} - {{\left\langle \phi_A \right|}_j}\left\langle \varphi \right|)} \right]\left| 1 \right\rangle \left\langle 1 \right| \otimes I \otimes I\left[ {{1 \over 2}\left| 1 \right\rangle (\left| \varphi \right\rangle {{\left| \phi_A \right\rangle }_j} - {{\left| \phi_A \right\rangle }_j}\left| \varphi \right\rangle )} \right]\\
\;\,\qquad= {1 \over 2} - {1 \over 2}{\left| {{{\left\langle {\varphi }
\mathrel{\left | {\vphantom {\varphi \phi }}
\right. \kern-\nulldelimiterspace}
{\phi_A } \right\rangle }_j}} \right|^2}
\end{array}.
\end{equation}
As we know, the inner product between $\left| \varphi \right\rangle $ and the prepared state ${\left| {{\phi _A}} \right\rangle _j}$ can be calculated as below,
\begin{equation}
\label{eqn:12}
\begin{array}{l}
{\left\langle {\varphi }
\mathrel{\left | {\vphantom {\varphi {{\phi _A}}}}
\right. \kern-\nulldelimiterspace}
{{{\phi _A}}} \right\rangle _j} = {1 \over N}\sum\limits_i {(u_i)^*{v_{ji}}} = {1 \over N}\left\langle {u}
\mathrel{\left | {\vphantom {u {{v_j}}}}
\right. \kern-\nulldelimiterspace}
{{{v_j}}} \right\rangle\\
\end{array}.
\end{equation}
From Eqs. (\ref{eqn:11}) and (\ref{eqn:12}), we can get the similarity between samples $u$ and $v_j$,
\begin{equation}
\label{eqn:13}
\begin{array}{l}
{\left| {\left\langle {u} \mathrel{\left | {\vphantom {u {{v_j}}}} \right. \kern-\nulldelimiterspace}
{{{v_j}}} \right\rangle } \right|^2} = \left( {1 - 2P_j^{l\left( A \right)}\left( 1 \right)} \right){N^2}\\
\end{array}.
\end{equation}
Finally, we can find out the max-similarity sample ${v_{\max }} \in A$ through the classical maximum traversal search algorithm among the set $ \left\{ {{{\left| {\left\langle {u} \mathrel{\left | {\vphantom {u {{v_j}}}} \right. \kern-\nulldelimiterspace} {{{v_j}}} \right\rangle } \right|}^2}} \right\}$.
Through the above method, we can also find out the other max-similarity sample ${w_{\max }} \in B$. After all these steps are finished, the algorithm turns to the next phase.
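The chain from Eq. (\ref{eqn:10}) to Eq. (\ref{eqn:13}) can be verified by a direct state-vector simulation (a sketch under our reading of the encoding in Eqs. (\ref{eqn:03}) and (\ref{eqn:09}); the sample vectors are the binary features of Sect. 4):

```python
import numpy as np

def encode(x, marker_first):
    """Amplitude vector over the basis |i>|a>|b>.
    marker_first=True gives the layout of Eq. (3) (the fixed |1> qubit
    before the rotated feature qubit); False gives Eq. (9), i.e. the
    layout after the swap operation."""
    N = len(x)
    psi = np.zeros((N, 2, 2))
    one = np.array([0.0, 1.0])
    for i, xi in enumerate(x):
        q = np.array([np.sqrt(1 - xi ** 2), xi])   # feature qubit, Eq. (7)
        psi[i] = (np.outer(one, q) if marker_first else np.outer(q, one)) / np.sqrt(N)
    return psi.ravel()

def similarity(u, v):
    """|<u|v>|^2 recovered from the swap-test probability, Eqs. (11)-(13)."""
    N = len(u)
    varphi = encode(u, marker_first=False)   # Eq. (9)
    phi = encode(v, marker_first=True)       # Eq. (3)
    p1 = 0.5 - 0.5 * abs(varphi @ phi) ** 2  # Eq. (11)
    return (1 - 2 * p1) * N ** 2             # Eq. (13)
```

For the samples $S_0=(1,0,1,0)$ and $S_1=(1,0,0,0)$, this returns the same value as the direct inner product $|\langle u|v\rangle|^2$.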
\subsection{Weight vector update}
\label{sec:3_3}
The first step is to determine the closest same-class sample \emph{Near-hit} and the closest different-class sample \emph{Near-miss}, which obey the following rule,
\begin{equation}
\label{eqn:14}
\left\{ {\begin{array}{*{20}{c}}
{\emph{Near-hit} = {v_{\max }},{\kern 1pt} {\kern 1pt} \emph{Near-miss} = {w_{\max }}}&{ {\kern 1pt} {\kern 1pt} if{\kern 1pt} {\kern 1pt} u \in A} \\
{\emph{Near-hit} = {w_{\max }},{\kern 1pt} {\kern 1pt} \emph{Near-miss} = {v_{\max }}}&{{\kern 1pt} {\kern 1pt} if{\kern 1pt} {\kern 1pt} u \in B}
\end{array}} .\right.
\end{equation}
After determining \emph{Near-hit} and \emph{Near-miss}, we can update every element of weight vector $WT = {\left( {w{t_1},w{t_2}, \cdots ,w{t_N}} \right)^\mathsf{T}}$ with them,
\begin{equation}
\label{eqn:15}
w{t_i} = w{t_i} - \emph{diff}{\left( {i, u, \emph{Near-hit}} \right)^2} + \emph{diff}{\left( {i, u, \emph{Near-miss}} \right)^2}{\kern 1pt} {\kern 1pt},
\end{equation}
where $1\le i \le N$.
\subsection{Feature selection}
\label{sec:3_4}
After iterating the similarity calculation and weight vector update $T$ times, the algorithm exits the loop with the final vector $WT$ as output. The remainder of the algorithm selects the ``real'' relevant features.
We firstly divide $WT$ by $T$, and obtain its mean vector $\overline{WT}$,
\begin{equation}
\label{eqn:16}
\overline{WT} = \frac{1}{T}WT.
\end{equation}
Then, we select relevant features according to the preset threshold $\tau$. To be specific, a feature is selected if its corresponding value in $\overline{WT}$ is not less than $\tau$, and discarded otherwise,
\begin{equation}
\label{eqn:17}
\left\{ {\begin{array}{*{20}{c}}
{the{\kern 2pt} \emph{i-th}{\kern 2pt} feature{\kern 2pt} is{\kern 2pt} relevant {\kern 36pt} if{\kern 3pt}\overline{WT}_i \ge \tau }\\
{the{\kern 2pt} \emph{i-th}{\kern 2pt} feature{\kern 2pt} is{\kern 2pt}NOT{\kern 2pt} relevant {\kern 8pt} if{\kern 3pt}\overline{WT}_i < \tau }
\end{array}} .\right.
\end{equation}
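The final selection step, Eqs. (\ref{eqn:16})--(\ref{eqn:17}), is a simple thresholding of the averaged weights (a sketch; the weight values in the usage note are illustrative):

```python
import numpy as np

def select_features(WT, T, tau):
    """Average the accumulated weight vector over T iterations (Eq. (16))
    and keep the features whose mean weight reaches tau (Eq. (17))."""
    wt_bar = np.asarray(WT, dtype=float) / T
    return np.flatnonzero(wt_bar >= tau).tolist()
```

For accumulated weights $WT=(8, 8, -8, 0)$ after $T=8$ iterations and $\tau=0.5$, features 0 and 1 are kept.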
\section{Example and its experiment}
\label{sec:4}
Suppose there are four samples (see Tab.~\ref{tab:1}), ${S_0} = (1,0,1,0)$, ${S_1} = (1,0,0,0)$, ${S_2} = (0,1,1,0)$, ${S_3} = (0,1,0,0)$, thus the $n$ in Eq. (\ref{eqn:04}) is 2, and they belong to two classes: $A = \{ {S_0},{S_1}\}$, $B = \{ {S_2},{S_3}\}$, which is illustrated in Fig.~\ref{fig:4}.
\begin{table}
\centering
\caption{The feature values of four samples. Each row represents all the feature values of a certain sample, while each column denotes a certain feature value of all the samples.}
\label{tab:1}
\begin{tabular}{cllll}
\hline\noalign{\smallskip}
& $F_0$ & $F_1$ & $F_2$ & $F_3$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$S_0$ & 1 & 0 & 1 & 0\\
$S_1$ & 1 & 0 & 0 & 0\\
$S_2$ & 0 & 1 & 1 & 0\\
$S_3$ & 0 & 1 & 0 & 0\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\begin{figure}
\caption{The simple example with four samples in classes $A$ and $B$.}
\label{fig:4}
\end{figure}
First, the four initial quantum states are prepared as follows,
\begin{equation}
\label{eqn:18}
\left\{ {\begin{array}{*{20}{c}}
{\left| \psi \right\rangle _{{{\rm{S}}_{\rm{0}}}}} = \left| {{\rm{00}}} \right\rangle {\left| {\rm{0}} \right\rangle ^{ \otimes 2}}\left| 1 \right\rangle \left| 0 \right\rangle \\
{\left| \psi \right\rangle _{{{\rm{S}}_{\rm{1}}}}} = \left| {{\rm{01}}} \right\rangle {\left| {\rm{0}} \right\rangle ^{ \otimes 2}}\left| 1 \right\rangle \left| 0 \right\rangle \\
{\left| \psi \right\rangle _{{{\rm{S}}_{\rm{2}}}}} = \left| {{\rm{10}}} \right\rangle {\left| {\rm{0}} \right\rangle ^{ \otimes 2}}\left| 1 \right\rangle \left| 0 \right\rangle \\
{\left| \psi \right\rangle _{{{\rm{S}}_{\rm{3}}}}} = \left| {{\rm{11}}} \right\rangle {\left| {\rm{0}} \right\rangle ^{ \otimes 2}}\left| 1 \right\rangle \left| 0 \right\rangle
\end{array}} .\right.
\end{equation}
Taking ${\left| \psi \right\rangle _{{{\rm{S}}_{\rm{0}}}}}$ as an example, the ${H^{ \otimes 2}}$ operation is applied on the third and fourth qubits,
\begin{equation}
\label{eqn:19}
\begin{array}{l}
\left| {{\rm{00}}} \right\rangle {\left| {\rm{0}} \right\rangle ^{ \otimes 2}}\left| 1 \right\rangle \left| 0 \right\rangle \xrightarrow{H^{ \otimes 2}}{1 \over 2}\left| {{\rm{00}}} \right\rangle \sum\limits_{i = 0}^3 {\left| i \right\rangle \left| 1 \right\rangle \left| 0 \right\rangle }
\end{array}.
\end{equation}
Then we perform ${R_y}$ rotation (see Eq. (\ref{eqn:06})) on the last qubit, where
\begin{equation}
\label{eqn:20}
\left\{ {\begin{array}{*{20}{c}}
{R_y}(2{\sin ^{ - 1}}{v_{00}}) = {R_y}(2{\sin ^{ - 1}}{v_{02}}) = -iY = \left[ {
\begin{array}{*{20}{c}}
0 & { - 1} \cr
1 & 0 \cr
\end{array} } \right]\\
{R_y}(2{\sin ^{ - 1}}{v_{01}}) = {R_y}(2{\sin ^{ - 1}}{v_{03}}) = I = \left[ {
\begin{array}{*{20}{c}}
1 & 0 \cr
0 & 1 \cr
\end{array} } \right]
\end{array}} ,\right.
\end{equation}
and can get
\begin{equation}
\label{eqn:21}
{\left| \phi \right\rangle _{{{\rm{S}}_{\rm{0}}}}}={1 \over 2}\left| {{\rm{00}}} \right\rangle \sum\limits_{i = 0}^3 {\left| i \right\rangle \left| 1 \right\rangle \left( {\sqrt {{\rm{1 - }}{{\left| {{v_{0i}}} \right|}^2}} \left| 0 \right\rangle + {v_{0i}}\left| 1 \right\rangle } \right)} .
\end{equation}
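The rotation angles in Eq.~(\ref{eqn:20}) follow from the general fact that $R_y(2\sin^{-1}v)$ maps $\left|0\right\rangle$ to $\sqrt{1-v^2}\left|0\right\rangle + v\left|1\right\rangle$; a quick NumPy check:

```python
import numpy as np

def Ry(theta):
    """Standard single-qubit y-rotation gate."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

for v in [0.0, 0.5, 1.0]:
    out = Ry(2 * np.arcsin(v)) @ np.array([1.0, 0.0])   # act on |0>
    assert np.allclose(out, [np.sqrt(1 - v**2), v])

# the extreme cases of Eq. (20): v = 1 gives -iY, v = 0 gives the identity
assert np.allclose(Ry(np.pi), [[0, -1], [1, 0]])
assert np.allclose(Ry(0), np.eye(2))
```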
The other quantum states are prepared in the same way and they are listed as below,
\begin{equation}
\label{eqn:22}
\left\{ {\begin{array}{*{20}{c}}
{\left| \phi \right\rangle _{{S_0}}} = {1 \over 2}\left| {00} \right\rangle \sum\limits_{i = 0}^3 {\left| i \right\rangle } \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{v_{0i}}} \right|}^2}} \left| 0 \right\rangle + {v_{0i}}\left| 1 \right\rangle } \right)\\
{\left| \phi \right\rangle _{{S_1}}} = {1 \over 2}\left| {01} \right\rangle \sum\limits_{i = 0}^3 {\left| i \right\rangle } \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{v_{1i}}} \right|}^2}} \left| 0 \right\rangle + {v_{1i}}\left| 1 \right\rangle } \right)\\
{\left| \phi \right\rangle _{{S_2}}} = {1 \over 2}\left| {10} \right\rangle \sum\limits_{i = 0}^3 {\left| i \right\rangle } \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{v_{2i}}} \right|}^2}} \left| 0 \right\rangle + {v_{2i}}\left| 1 \right\rangle } \right)\\
{\left| \phi \right\rangle _{{S_3}}} = {1 \over 2}\left| {11} \right\rangle \sum\limits_{i = 0}^3 {\left| i \right\rangle } \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{v_{3i}}} \right|}^2}} \left| 0 \right\rangle + {v_{3i}}\left| 1 \right\rangle } \right)
\end{array}} .\right.
\end{equation}
Second, we randomly select a sample (assume ${\left| \phi \right\rangle _{{{\rm{S}}_{\rm{0}}}}}$ is that one), and perform similarity calculation with other samples (i.e., ${\left| \phi \right\rangle _{{{\rm{S}}_{\rm{1}}}}}$, ${\left| \phi \right\rangle _{{{\rm{S}}_{\rm{2}}}}}$, ${\left| \phi \right\rangle _{{{\rm{S}}_{\rm{3}}}}}$). Taking ${\left| \phi \right\rangle _{{{\rm{S}}_{\rm{0}}}}}$ and ${\left| \phi \right\rangle _{{{\rm{S}}_{\rm{1}}}}}$ as an example, we perform a \emph{swap} operation between the last two qubits of ${\left| \phi \right\rangle _{{{\rm{S}}_{\rm{0}}}}}$,
\begin{equation}
\label{eqn:23}
\begin{array}{l}
{\left| \phi \right\rangle _{{{\rm{S}}_{\rm{0}}}}}\xrightarrow{swap}\left| {{\varphi}} \right\rangle = {1 \over 2}\left| {{\rm{00}}} \right\rangle \sum\limits_{i = 0}^3 {\left| i \right\rangle \left( {\sqrt {{\rm{1 - }}{{\left| {{v_{0i}}} \right|}^2}} \left| 0 \right\rangle + {v_{0i}}\left| 1 \right\rangle } \right)\left| 1 \right\rangle}
\end{array}.
\end{equation}
After that, the \emph{swap test} operation is applied on ($\left| {{\varphi}} \right\rangle $, $\left| {{\phi}} \right\rangle_{{{\rm{S}}_{\rm{1}}}} $),
\begin{equation}
\label{eqn:24}
\begin{array}{l}
\left| 0 \right\rangle \left| \varphi \right\rangle {\left| {{\phi}} \right\rangle_{{{\rm{S}}_{\rm{1}}}}}\xrightarrow{swap{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} test}{1 \over 2}\left| 0 \right\rangle \left( {\left| \varphi \right\rangle {{\left| {{\phi}} \right\rangle_{{{\rm{S}}_{\rm{1}}}} }} + {{\left| {{\phi}} \right\rangle_{{{\rm{S}}_{\rm{1}}}} }}\left| \varphi \right\rangle} \right) + {1 \over 2}\left| 1 \right\rangle \left( {{\left| \varphi \right\rangle{\left| {{\phi}} \right\rangle_{{{\rm{S}}_{\rm{1}}}} }} - {{\left| {{\phi}} \right\rangle_{{{\rm{S}}_{\rm{1}}}} }}\left| \varphi \right\rangle } \right)\\
\end{array},
\end{equation}
Then, we measure the first qubit of the state in Eq.~(\ref{eqn:24}); the probability of obtaining $\left| 1 \right\rangle $ is
\begin{equation}
\label{eqn:25}
P_1^0(1) = \frac{1}{2} - \frac{1}{2}\left| \langle \varphi | \phi \rangle_{S_1} \right|^2.
\end{equation}
In terms of Eq. (\ref{eqn:12}), the inner product between $\left| {{\varphi}} \right\rangle $ and $\left| {{\phi}} \right\rangle_{S_1}$ can be represented as
\begin{equation}
\label{eqn:26}
\langle \varphi | \phi \rangle_{S_1} = \frac{1}{4}\sum\limits_i {S_{0i}^* S_{1i}} = \frac{1}{4}\langle S_0 | S_1 \rangle.
\end{equation}
From Eqs. (\ref{eqn:25}) and (\ref{eqn:26}), the similarity between $\left| {{S_0}} \right\rangle$ and $\left| {{S_1}} \right\rangle$ is
\begin{equation}
\label{eqn:27}
\left| \langle S_0 | S_1 \rangle \right|^2 = 16(1 - 2P_1^0(1)),
\end{equation}
where the value of ${P_1^0}(1)$ is determined by measurement.
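The ideal (noise-free) values of Eqs. (\ref{eqn:25})--(\ref{eqn:27}) can be computed classically as a sanity check; the hardware values in Tab.~\ref{tab:2} deviate from these ideal numbers because of device noise. A sketch, with $\langle S_0|S_1\rangle$ the raw dot product of the feature vectors:

```python
import numpy as np

samples = {"S0": [1, 0, 1, 0], "S1": [1, 0, 0, 0],
           "S2": [0, 1, 1, 0], "S3": [0, 1, 0, 0]}

def swap_test_p1(u, v, n=4):
    """Ideal probability of measuring |1> in the swap test (Eq. 25),
    with the encoded inner product <varphi|phi> = (1/n) <u|v> (Eq. 26)."""
    ip = np.dot(u, v) / n
    return 0.5 - 0.5 * abs(ip) ** 2

def similarity(u, v, n=4):
    """|<u|v>|^2 recovered from the swap-test statistics (Eq. 27)."""
    return n ** 2 * (1 - 2 * swap_test_p1(u, v, n))

print(similarity(samples["S0"], samples["S1"]))   # ideal value: 1.0
```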
In order to obtain the measurement result and also verify our algorithm, we choose the IBM Q [24] to perform the quantum processing (Fig.~\ref{fig:5} gives the schematic diagram of our experimental circuit)\footnote{In the experiment, we program our algorithm based on the QISKit toolkit [25] and the Python language, and remotely connect to the online IBM QX5 device to execute the quantum processing.}. After the experiment, we obtain ${P_1^0}(1)$, which is shown in Tab.~\ref{tab:2}.
\begin{figure}
\caption{One of the ideal experiment circuits of the QRelief algorithm running on the IBM Q platform; $q[0]$--$q[5]$ represent the randomly selected quantum state.}
\label{fig:5}
\end{figure}
\begin{table}
\centering
\caption{The probabilities $P_j^l(1)$ of the first qubit being $\left| {\rm{1}} \right\rangle $}
\label{tab:2}
\begin{tabular}{clll}
\hline\noalign{\smallskip}
Iteration times ($T$) & $u$ & Sample & $P_j^l(1)$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{3}*{1} & \multirow{3}*{$S_0$} & $S_1$ & 0.49023438 \\
& & $S_2$ & 0.49902344 \\
& & $S_3$ & 0.49121094 \\
\multirow{3}*{2} & \multirow{3}*{$S_1$} & $S_0$ & 0.50097656 \\
& & $S_2$ & 0.52246094 \\
& & $S_3$ & 0.53417969 \\
\multirow{3}*{3} & \multirow{3}*{$S_2$} & $S_0$ & 0.50683594 \\
& & $S_1$ & 0.50878906 \\
& & $S_3$ & 0.49218750 \\
\multirow{3}*{4} & \multirow{3}*{$S_3$} & $S_0$ & 0.49804688 \\
& & $S_1$ & 0.49218750 \\
& & $S_2$ & 0.50195312 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
According to ${P_1^0}(1)$ (from Tab. \ref{tab:2}) and Eq. (\ref{eqn:27}), we can calculate the similarity between $\left| {{S_0}} \right\rangle$ and ${\left| S_1 \right\rangle}$,
\begin{equation}
\label{eqn:28}
\begin{array}{l}
\left| \langle S_0 | S_1 \rangle \right|^2 = 16(1 - 2P_1^0(1))\\
\qquad\qquad\qquad\; = 16(1 - 2 \times 0.49023438)\\
\qquad\qquad\qquad\; \approx 0.3125
\end{array}.
\end{equation}
In the same way, the other two similarities, ($\left| {{S_0}} \right\rangle$, ${\left| S_2 \right\rangle}$) and ($\left| {{S_0}} \right\rangle$, ${\left| S_3 \right\rangle}$), can also be obtained (see Tab.~\ref{tab:3}).
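For instance, the whole $S_0$ row of Tab.~\ref{tab:3} follows from the first three measured probabilities in Tab.~\ref{tab:2} via Eq.~(\ref{eqn:27}):

```python
# measured probabilities for u = S_0 (first iteration of Tab. 2)
P1 = {"S1": 0.49023438, "S2": 0.49902344, "S3": 0.49121094}
sim = {s: 16 * (1 - 2 * p) for s, p in P1.items()}   # Eq. (27)
print(sim)   # approximately {'S1': 0.3125, 'S2': 0.03125, 'S3': 0.28125}
```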
\begin{table}
\centering
\caption{Similarities between samples}
\label{tab:3}
\begin{tabular}{cllll}
\hline\noalign{\smallskip}
& $S_0$ & $S_1$ & $S_2$ & $S_3$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$S_0$ & $-$ & 0.3125 & 0.03125 & 0.28125 \\
$S_1$& & $-$ & 0.71875 & 1.09375 \\
$S_2$& & & $-$ & 0.25 \\
$S_3$& & & & $-$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Third, from Tab.~\ref{tab:3}, it is easy to find that \emph{Near-hit} is $S_1$ and \emph{Near-miss} is $S_3$ (as shown in Fig.~\ref{fig:6}). Then, the weight vector is updated by applying Eq.~(\ref{eqn:15}), and the resulting $WT$ for the first iteration is listed in the second row of Tab.~\ref{tab:4}.
\begin{figure}
\caption{Finding \emph{Near-hit} and \emph{Near-miss} of the selected sample $S_0$.}
\label{fig:6}
\end{figure}
\begin{table}
\centering
\caption{The update result of $WT$}
\label{tab:4}
\begin{tabular}{cc}
\hline\noalign{\smallskip}
Iteration times ($T$) & Weight vector ($WT$) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
1 & [1 1 0 0] \\
2 & [2 2 -1 0] \\
3 & [3 3 -1 0] \\
4 & [4 4 -2 0] \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
The algorithm iterates the above steps $T$ times (in our example, $T=4$) and obtains all the $WT$ results shown in Tab.~\ref{tab:4}. After the $T$-th iteration, $WT=[4,4,-2,0]$ and hence $\overline{WT}=[1,1,-1/2,0]$. Since the threshold is $\tau=0.5$, the selected features are $F_0$ and $F_1$, i.e., the first and second features.
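The four iterations above can be replayed classically; the sketch below uses the measured similarities of Tab.~\ref{tab:3} to pick \emph{Near-hit} and \emph{Near-miss} and reproduces the weight vectors of Tab.~\ref{tab:4}:

```python
import numpy as np

samples = {"S0": np.array([1, 0, 1, 0]), "S1": np.array([1, 0, 0, 0]),
           "S2": np.array([0, 1, 1, 0]), "S3": np.array([0, 1, 0, 0])}
label = {"S0": "A", "S1": "A", "S2": "B", "S3": "B"}
# measured similarities from Tab. 3 (symmetric)
sim = {frozenset(p): s for p, s in {
    ("S0", "S1"): 0.3125,  ("S0", "S2"): 0.03125, ("S0", "S3"): 0.28125,
    ("S1", "S2"): 0.71875, ("S1", "S3"): 1.09375, ("S2", "S3"): 0.25}.items()}

WT = np.zeros(4)
for u in ["S0", "S1", "S2", "S3"]:                 # T = 4 iterations
    hits = [s for s in samples if s != u and label[s] == label[u]]
    misses = [s for s in samples if label[s] != label[u]]
    near_hit = max(hits, key=lambda s: sim[frozenset((u, s))])
    near_miss = max(misses, key=lambda s: sim[frozenset((u, s))])
    WT = WT - (samples[u] - samples[near_hit]) ** 2 \
            + (samples[u] - samples[near_miss]) ** 2

print(WT, WT / 4)   # WT = [4, 4, -2, 0], mean = [1, 1, -0.5, 0]
selected = [i for i, w in enumerate(WT / 4) if w >= 0.5]   # tau = 0.5 -> [0, 1]
```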
\section{Efficiency analysis}
\label{sec:5}
In the classical Relief algorithm, it takes $O(N)$ time to calculate the Euclidean distance between $u$ and each of the $M$ samples, the complexity of finding \emph{Near-hit} and \emph{Near-miss} is proportional to $M$, and the loop iterates $T$ times, so the computational complexity of the classical Relief algorithm is $O(TMN)$. Since $T$ is a constant that affects the accuracy of the relevance levels but is chosen independently of $M$ and $N$, the complexity simplifies to $O(MN)$. In addition, an $N$-dimensional vector in the Hilbert space is represented by $N$ bits in a classical computer, and $M$ samples (i.e., $M$ $N$-dimensional vectors) need to be stored, so the classical Relief algorithm consumes $O(MN)$ bits of storage.
In our quantum Relief algorithm, all the features of each sample are superposed in a quantum state ${\left| {{\phi _A}} \right\rangle _j}$ or ${\left| {{\phi _B}} \right\rangle _k}$, so the similarity calculation between two states, shown in Eq. (\ref{eqn:13}), takes only $O(1)$ time. As in the classical Relief algorithm, the similarity between the selected state $\left| \varphi \right\rangle$ and each state in $\left\{ {\left| {{\phi _A}} \right\rangle } \right\}$, $\left\{ {\left| {{\phi _B}} \right\rangle } \right\}$ is calculated, taking $O(M)$ time, to obtain \emph{Near-miss} and \emph{Near-hit}, and the loop iterates $T$ times, so the proposed algorithm takes $O(TM)$ time in total. Since $T$ is a constant, the computational complexity can be rewritten as $O(M)$. From the viewpoint of resource consumption, each quantum state in the state sets $\left\{ {\left| {{\phi _A}} \right\rangle _j} \right\}$ and $\left\{ {\left| {{\phi _B}} \right\rangle _k} \right\}$ is represented as $${\left| {{\phi _A}} \right\rangle _j} = \frac{1}{{\sqrt N }}\left| j \right\rangle \sum\limits_{i = 0}^{N-1} {\left| {i} \right\rangle \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{v_{ji}}} \right|}^2}} \left| 0 \right\rangle + {v_{ji}}\left| 1 \right\rangle } \right)}$$ or $${\left| {{\phi _B}} \right\rangle _k} = \frac{1}{{\sqrt N }}\left| k \right\rangle \sum\limits_{i = 0}^{N-1} {\left| {i} \right\rangle \left| 1 \right\rangle \left( {\sqrt {1 - {{\left| {{w_{ki}}} \right|}^2}} \left| 0 \right\rangle + {w_{ki}}\left| 1 \right\rangle } \right)},$$ and consists of $O({\log _2}N)$ qubits. Since $j = 1,2, \cdots ,{M_1}$, $k = 1,2, \cdots ,{M_2}$, and $M={M_1}+{M_2}$, there are $M$ such quantum states to be stored, so the algorithm consumes $O(M\log N)$ qubits of storage.
Tab.~\ref{tab:5} illustrates the efficiency comparison, including the computational complexity and resource consumption, between the classical Relief and our quantum Relief algorithms. As shown in the table, the computational complexity of our algorithm is $O(M)$, which is clearly superior to the $O(MN)$ of the classical Relief algorithm. On the other hand, our algorithm consumes $O(M\log N)$ qubits, while the classical Relief algorithm consumes $O(MN)$ bits.
\begin{table}
\centering
\caption{Efficiency comparison between classical Relief and quantum Relief algorithms }
\label{tab:5}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
& Complexity & Resource consumption \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Relief algorithm & $O(MN)$ & $O(MN)$ bits\\
Quantum Relief algorithm & $O(M)$ & $O(M\log N)$ qubits\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Conclusion and Discussion}
\label{sec:6}
With quantum computing technologies nearing the era of commercialization and quantum supremacy, machine learning has recently been considered one of the ``killer'' applications. In this study, we utilize quantum computing technologies to solve a simple feature selection problem (for binary classification), and propose the quantum Relief algorithm, which consists of four phases: state preparation, similarity calculation, weight vector update, and feature selection. Furthermore, we verify our algorithm by performing quantum experiments on the IBM Q platform. Compared with the classical Relief algorithm, our algorithm has lower computational complexity and less resource consumption. To be specific, the complexity is reduced from $O(MN)$ to $O(M)$, and the consumed resource is reduced from $O(MN)$ bits to $O(M\log N)$ qubits.
Although this work focuses on the feature selection problem (indeed, simple binary classification), the method can be generalized to implement other Relief-like algorithms, such as ReliefF \cite{26} and RReliefF \cite{27}. Besides, we are interested in utilizing quantum technologies to deal with classical high-dimensional massive data processing, such as text classification, video stream processing, data mining, computer vision, information retrieval, and bioinformatics.
\end{document}
\begin{document}
\title{Time-Optimal Quantum Driving by Variational Circuit Learning}
\author{Tangyou Huang}
\affiliation{Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain}
\affiliation{International Center of Quantum Artificial Intelligence for Science and Technology (QuArtist) \\ and Department of Physics, Shanghai University, 200444 Shanghai, China}
\author{Yongcheng Ding}
\affiliation{Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain}
\author{L\'eonce Dupays}
\affiliation{Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg, Luxembourg}
\author{Yue Ban}
\affiliation{TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain}
\author{Man-Hong Yung}
\affiliation{Central Research Institute, 2012 Labs, Huawei Technologies, Shenzhen 518129, China}
\affiliation{Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China}
\author{Adolfo del Campo}
\affiliation{Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg, Luxembourg}
\affiliation{Donostia International Physics Center, E-20018 San Sebasti\'an, Spain}
\author{Xi Chen}
\affiliation{Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain}
\affiliation{EHU Quantum Center, University of the Basque Country UPV/EHU, Barrio Sarriena, s/n, 48940 Leioa, Spain}
\date{\today }
\begin{abstract}
The simulation of quantum dynamics on a digital quantum computer with parameterized circuits has widespread applications in fundamental and applied physics and chemistry. In this context, using the hybrid quantum-classical algorithm, combining classical optimizers and quantum computers, is a competitive strategy for solving specific problems. We put forward its use for optimal quantum control. We simulate the wave-packet expansion of a trapped quantum particle on a quantum device with a finite number of qubits. We then use circuit learning based on gradient descent to work out the intrinsic connection between the control phase transition and the quantum speed limit imposed by unitary dynamics.
We further discuss the robustness of our method against errors
and demonstrate the absence of barren plateaus in the circuit. The combination of digital quantum simulation and hybrid circuit learning opens up new prospects for quantum optimal control.
\end{abstract}
\maketitle
\section{Introduction}
Following the vision by Feynman \cite{feynman}, quantum simulation has acquired a potentially-disruptive role in the development of contemporary science and technology, given the prospects of harnessing the advantage of using a quantum computer (QC) for specific applications. In recent decades, quantum simulation has been used to probe the dynamics of condensed matter systems \cite{CondensateScience2011,CondensateSolano2015prx}, for quantum chemistry \cite{qChemistry2018prx, qChemistry2016prx}, and as a test-bed for nonequilibrium statistical mechanics, e.g., in studying thermalization and nonequilibrium behavior of many-body systems \cite{therm2o11prl,Nonequilibrium2011RMP}. Quantum simulation is also expected to impact high-energy physics,
given the potential to facilitate the study of lattice gauge theories \cite{Banuls2020} and gauge-gravity duality \cite{GarciaALvarez17,Luo2019}, among other examples.
The use of a digital quantum simulator (DQS) based on the gate model offers a prominent approach in current Noisy Intermediate-Scale Quantum (NISQ) devices \cite{NISQ} and has gained relevance with theoretical and experimental progress \cite{DQS2015prx,Nature2016gauge,Zoller2019nature}.
In particular, DQS can be used to implement
variational quantum algorithms (VQAs), under development for quantum optimization \cite{qaoa}, quantum machine learning \cite{DQML}, and quantum control \cite{DQcontrol}. Their formulation generally approximates the continuous time evolution by discrete, finite Trotter steps \cite{Trotter1,suzuki1993general,Trotter2,Trotter3} implemented by a sequence of quantum gates, with controlled accuracy, in principle. However, balancing the number of Trotter steps and imperfections of quantum circuits in experiments is still a fundamental challenge. In this sense, various optimization scenarios aim at quantum error mitigation \cite{GAs2016prl,ErrorCorrection2019prl,TrotterOptim2020pra} for achieving a good precision of the quantum simulation with limited quantum resources. Among those, the machine-learning-enhanced optimization protocol \cite{heyl2021prl,RL2018prx,RL2018prb,RL2018pra} utilizes a feedback loop between the quantum device and a classical optimizer. This approach is particularly useful in the field of hybrid quantum algorithms \cite{VQE1,VQE2} with current quantum hardware. Nonetheless, solving the quantum optimal control problem by VQAs in a NISQ device is still an open challenge \cite{DQcontrol}.
In this work, we propose a circuit learning scheme based on gradient descent (GD) for time-optimal quantum control. As a concrete example, we consider a quantum particle trapped in a time-varying parabolic potential. We use a qubit register and encode the spatial wave function using the basis of $n$-qubit states. We then reproduce the exact state evolution on a designed quantum circuit using a digital algorithm \cite{quadraticOperator,Molecular2020prl}. We optimize the control function to achieve maximum-fidelity control using GD-based circuit learning. We further unveil the connection between a control phase transition and the quantum speed limit, i.e., the minimum time for a quantum state to evolve into a distinguishable state under a given dynamics. We demonstrate that the fidelity-based GD method avoids a large number of measurements by comparison to the reinforcement learning protocol \cite{CPT2018prx} and show how it can be accelerated by choosing different quantum quantities as the cost function.
In the following two sections, we introduce the quantum algorithm for the circuit realization of a quadratic Hamiltonian and discuss the time-dependent harmonic oscillator as an example. We then explore a fidelity-based GD method for maximum-fidelity control in a nonequilibrium expansion process and characterize the efficiency through various cost functions.
The relation of the quantum speed limit to the control phase transition is then discussed. Finally, we establish the fault tolerance of our method against the quantum errors in experiments and also address the problem of barren plateaus in the parameterized circuit.
\section{Preliminaries and notation}
\subsection{Time-dependent quantum harmonic oscillator}
We exemplify our approach by considering the time-dependent harmonic oscillator (TDHO), described by the Hamiltonian
\begin{equation}
H(t) = \frac{\hat{p}^2}{2m} +\frac{1}{2}m \omega^2(t)[\hat{x}-x_0(t)]^2,
\end{equation}
where $\omega(t)$ and $x_0(t)$ are tunable and represent the trap frequency and the location of the trap center, respectively. The TDHO is an ideal model for benchmarking quantum control algorithms since its dynamics admits exact closed-form solutions. In particular, we focus on the case with $x_0(t)=0$ and look for the expansion of the wave packet induced by a modulation of the trap frequency $\omega(t)$ from an initial value $\omega_0$ to a final one $\omega_f$. This model has many applications, including the cooling of a particle in an optical trap \cite{chenprl104}, mechanical resonators \cite{LianAoPRA}, and tunable transmon superconducting qubits \cite{JuangoPRAppl}. The ground state of $H(0)$ evolves into the time-dependent Gaussian state \cite{chenprl104}
\begin{eqnarray}
\label{wavefunction}
\Psi(t,x) = \left(\frac{m\omega_0}{\pi\hbar b^2}\right)^{1/4}\exp\left[-\frac{i}{2} \int_0^t \frac{\hbar\omega_0}{b^2}dt'\right]\nonumber \\
\times\exp\left[\frac{im}{2\hbar}\left(\frac{\dot{b}}{b}+i\frac{\omega_0}{b^2}\right)x^2\right],
\end{eqnarray}
where the time-dependent scaling factor $b(t)>0$ characterizes the width of the wave packet and satisfies the auxiliary equation
\begin{equation}\label{ermakov}
\ddot{b}+\omega^2(t) b = \frac{\omega_0^2}{b^3}.
\end{equation}
A primary numerical solver of quantum dynamics is the so-called \textit{split-operator method} (SOM), also known as the split time propagation scheme \cite{Kosloff1988}. For the sake of convenience, one usually sets dimensionless variables based on physical units of energy $\epsilon= \hbar \omega_0$, length $b_{\text{HO}}= \sqrt{\hbar/m\omega_0}$, and time $\tau=1/\omega_0$. In a classical computer, one defines an $N$-dimensional vector $\Psi(\textbf{r})$ encoding the amplitude of the wave function on the space grid $\textbf{r}=[x_0,~x_1,\cdots,x_{N-1}]$. Note that the kinetic energy operator $\hat{T}=\hat{p}^2/2$ and the potential operator $\hat{V}=\omega^2(t)\hat{x}^2/2$ do not commute. Thus, the following approximation stands for small $dt$ with an error $\mathcal{O}(dt^3)$
\begin{equation}
e^{-iHdt} \approx e^{-\frac{i}{2}\hat{V}dt}e^{-i\hat{T}dt}e^{-\frac{i}{2}\hat{V}dt},
\end{equation}
evolving the wave function in a Trotter step. A common trick in implementing this method uses forward and inverse Fourier transforms to change the representation of the quantum state between the real space $\textbf{r}$ and momentum space $\textbf{k}$, in which the kinetic energy operator becomes diagonal in $\textbf{k}$, simplifying the numerical calculation.
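A minimal NumPy sketch of this split-operator scheme follows; the grid size, box length, trap frequency, and time step are arbitrary illustrative choices ($\hbar=m=1$), not values prescribed in the text:

```python
import numpy as np

def split_step(psi, x, k, omega, dt):
    """One Trotter step exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2)."""
    half_v = np.exp(-0.5j * (0.5 * omega**2 * x**2) * dt)
    psi = half_v * psi
    # kinetic part is diagonal in momentum space, reached via FFT
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
    return half_v * psi

N, L = 256, 10.0
x = np.linspace(-L, L, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # angular wave numbers
psi = (1 / np.pi)**0.25 * np.exp(-x**2 / 2)      # ground state for omega_0 = 1
for _ in range(1000):                            # evolve to t = 1 with dt = 1e-3
    psi = split_step(psi, x, k, omega=1.0, dt=1e-3)
# each step is unitary, so the norm is preserved; for constant omega
# the ground state remains (numerically) stationary
```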
\subsection{Time-optimal control}
The frictionless expansion of quantum particles trapped in a time-varying harmonic trap can be formulated as a time-optimal control problem by minimizing the time of the process $t_f$ \cite{chenprl104,stefanatos2010frictionless,hoffmann2011time,Chaos,Dupays21}. It follows from Pontryagin's maximum principle that the control Hamiltonian for all $t\in[0,t_f]$ takes the form \cite{stefanatos2010frictionless}
\begin{equation}
H_c[x_1,x_2,p_1,p_2] = p_1x_2 + \frac{p_2}{x_1^3} - p_2x_1u(t),
\end{equation}
where the state $x_1=b$,~$x_2=\dot{b}/\omega_0$, and the controller $u(t)=\omega^2(t)/\omega_0^2$ are governed by the Ermakov equation~\eqref{ermakov}. Here, $p_1$ and $p_2$ are the conjugate momentum of $x_1$ and $x_2$, respectively. Substituting the control Hamiltonian into the canonical equation leads to the cost equations
\begin{eqnarray}
\dot{p_1}&=&\left(u+\frac{3}{x_1^4}\right)p_2,\\
\dot{p_2}&=&-p_1.
\end{eqnarray}
If the controller is bounded as $\delta_1\leq u(t) \leq\delta_2$, the time-optimal control has a bang-bang form, i.e., it is a piece-wise function and constant in each interval. For a specific problem with $b(0)=1$ and $b(t_f) = \sqrt{\omega_0/\omega_f}=\gamma$, consider the feasible three-jump protocol
\begin{equation}\label{bangbang}
u(t) =
\begin{cases}
1 & t=0\\
\delta_1 & 0<t \leq t_1\\
\delta_2 & t_1 <t < t_1+t_2\\
1/\gamma^4 & t = t_f^{\text{opt}} = t_1+t_2
\end{cases},
\end{equation}
where the switching time $t_1$ and the optimal operation time $t_f^{\text{opt}} = t_1 + t_2$ can be calculated by integrating the Ermakov equation~\eqref{ermakov} by using boundary conditions. This yields the closed-form exact time-optimal driving protocol $u(t)$ with
\begin{eqnarray}
t_1 &=& \frac{1}{\sqrt{\delta_1}}\sinh^{-1}\sqrt{\frac{\delta_1(\gamma^2-1)(\gamma^2\delta_2-1)}{(\delta_1-\delta_2)\gamma^2(1-\delta_1)}}, \nonumber\\
t_2 &=& \frac{1}{\sqrt{\delta_2}} \sin^{-1}\sqrt{\frac{\delta_2(\gamma^2-1)(1-\gamma^2\delta_1)}{(\delta_1-\delta_2)(1-\gamma^4\delta_2)}}.
\label{optimaltime}
\end{eqnarray}
\begin{figure*}
\caption{The circuit realization of the time-evolution operator for a quadratic Hamiltonian as defined by Eq.~(\ref{quadratic operator}).}
\end{figure*}
\subsection{Quantum speed limit}
Quantum speed limits (QSLs) provide fundamental upper bounds on the speed of quantum evolution \cite{Deffner17rev}. They have wide-spread applications ranging from quantum metrology to optimal control \cite{qslwithOC,Funo17,Campbell17}. QSLs are formulated by choosing a notion of distance between quantum states and identifying a maximum speed of evolution. For isolated systems described by a time-independent Hamiltonian, two seminal results are known. The Mandelstam-Tamm QSL determines the maximum speed of evolution in terms of the energy dispersion \cite{MT45}, while the Margolus-Levitin bound uses the mean energy above the ground state instead \cite{ML98}. The interplay of these bounds has recently been demonstrated in a trapped system made of ultracold atoms that are suddenly quenched \cite{Ness21}. For a generic driven system, only an analog of the Mandelstam-Tamm bound is known \cite{AA90,Uhlmann92,deffner2013energy}.
Consider the quantum unitary dynamics generated by a time-dependent Hamiltonian according to the Schr\"odinger equation. The distance between the initial state and the time-dependent state in projective Hilbert space can be quantified by the Bures angle
\begin{equation}
\mathcal{L}(\psi_0,\psi_t)=\arccos( | \langle \psi_0 | \psi_t \rangle|)\in[0,\pi/2].
\end{equation}
The minimum time scale required to sweep a given Bures angle is lower bounded by
\begin{equation}\label{tau_QSL}
\tau _{\rm QSL} = \frac{1}{ \overline{\Delta E} }\mathcal{L}(\psi_0,\psi_t),
\end{equation}
where the speed of evolution is set by the time-averaged energy dispersion
\begin{equation}
\overline{\Delta E} = \frac{1}{t}\int_0^t ds\sqrt{\langle \psi_s |\hat{H}(s)^2|\psi_s\rangle-\langle \psi_s |\hat{H}(s)|\psi_s\rangle^2}.
\end{equation}
The QSL $\tau_{\rm QSL}$ is thus approached by maximizing the energy dispersion at all times. In control protocols for wave-packet expansion, we consider the evolution of the ground state of the trap with initial trapping frequency $\omega_0$ to the ground state of the trap with final frequency $\omega_f$. Provided that the control protocol, specified by $u(t)$, has unit efficiency in preparing the target state, the Bures angle is fixed, and the corresponding QSL reads
\begin{equation}
\label{qsl-ho}
\tau_{\rm QSL} = \frac{\hbar}{\overline{\Delta E}}\arccos\sqrt{\frac{2 \gamma}{1 +\gamma^2}}.
\end{equation}
We note that $\tau_{\rm QSL}$ should be distinguished from the minimum time $t^{\rm opt}_f$ in the preceding time-optimal control with the trap frequency bounded.
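The ground-state overlap that fixes the Bures angle for this expansion can be checked numerically; the sketch below (grid and frequencies are arbitrary illustrative choices, $\hbar=m=1$) verifies $|\langle\psi_0|\psi_{\rm tar}\rangle| = \sqrt{2\gamma/(1+\gamma^2)}$:

```python
import numpy as np

def ground_state(x, w):
    """Harmonic-oscillator ground state with trap frequency w."""
    return (w / np.pi)**0.25 * np.exp(-w * x**2 / 2)

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
w0, wf = 1.0, 0.1                       # illustrative initial/final frequencies
gamma = np.sqrt(w0 / wf)
overlap = np.sum(ground_state(x, w0) * ground_state(x, wf)) * dx
bures_angle = np.arccos(abs(overlap))
# overlap agrees with the closed form sqrt(2*gamma/(1 + gamma**2))
```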
\subsection{ Fidelity susceptibility}
Generally, the final state $\rho_{f} =|\Psi_{f}(x)\rangle\langle\Psi_{f}(x)|$ upon completion of a control protocol at $t=t_f$ differs from the target state $\rho_{\rm tar} =|\Psi_{\rm tar}(x)\rangle\langle\Psi_{\rm tar}(x)|$ one wishes to prepare.
Let us consider the fidelity $F$ between these two states
\begin{equation}
\label{fidelity}
F(\rho_{\rm tar},\rho_f)=\left[{\rm Tr}(\sqrt{\sqrt{\rho_{\text{tar}}}\rho_f\sqrt{\rho_{\text{tar}}}})\right]^2,
\end{equation}
where ${\rm Tr}(\cdot)$ denotes the trace operation.
The {\it fidelity susceptibility} $\chi_f$ quantifies the fidelity response to a slight change of the driving parameter \cite{FS2007pre,FS2008prb,FS2010GSJ}. For a functional Hamiltonian $H(f_t)$ parameterized by $f_t$, let $|\Psi_0(f_t)\rangle$ be the ground state. We assume $f_t$ to be a function of time and consider a variation of the control function $f_t\to f_t+\delta f$, where $\delta f\to 0$ is small enough to apply perturbation theory. As a result, the perturbed ground state is $|\Psi_0(f +\delta f)\rangle$. The fidelity susceptibility,
without loss of generality, is defined as
\begin{equation}
\chi_f \equiv \frac{-2\ln (F_{\delta f})}{\delta f},
\end{equation}
where the fidelity $F_{\delta f}=F[\rho_{0}(f), \rho_0(f+\delta f)]$. The \textit{fidelity susceptibility} quantifies the sensitivity of the fidelity to variations of the control functions. In other words, the fidelity susceptibility can be used as a cost function to accelerate the convergence of the optimization process. For the sake of simplicity, we assume that $\delta f\to 0$ is a time-independent real value.
\section{Quantum circuit realization of quadratic Hamiltonians}
Next, we present the algorithm for the circuit realization of quadratic Hamiltonians using a finite set of elementary quantum gates. We focus on DQS of the continuous-variables system
and encode a wave packet onto an $n$-qubit register. Quantum states of this register can be described in binary notation
\begin{equation}
|\Phi\rangle = \sum^{2^n-1}_{i=0} c_i|i\rangle,
\end{equation}
using the computational basis $|i\rangle=|q_{n-1}\rangle\otimes \cdots \otimes|q_1 \rangle\otimes |q_0 \rangle$ with $q_0,q_1,\dots,q_{n-1}\in\{0,1\}$, and the corresponding amplitudes $c_i$ normalized as $\sum^{2^n-1}_{i=0}|c_i|^2\equiv1$. To solve the time-dependent Schr\"odinger equation on a quantum computer with an $n$-qubit register, we discretize the continuous variables associated with the spatial coordinate $x$ and time $t$, and subsequently map the coordinate space $x$ into the Hilbert space of $n$ qubits.
Specifically, the compact continuous spatial domain $x\in[-L,L]$ is approximated by a lattice of $2^n$ points spaced by a constant interval $dx = L/(2^{n-1}-1)$.
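A minimal numpy sketch of this discretization and the amplitude encoding (our own illustration; we use numpy's endpoint-inclusive grid with spacing $2L/(2^n-1)$, which differs slightly from the text's convention):

```python
import numpy as np

n, L = 6, 10.0                          # qubit number and half-width of the domain
N = 2**n
x = np.linspace(-L, L, N)               # lattice of 2^n points on [-L, L]
psi = np.exp(-x**2 / 2)                 # Gaussian wave packet, omega_0 = 1
amps = psi / np.linalg.norm(psi)        # amplitudes c_i = Psi(x_i), normalized

print(len(amps), round(float(np.sum(np.abs(amps)**2)), 6))   # -> 64 1.0
```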
A wave packet can be encoded in the state of the $n$-qubit register as
\begin{eqnarray}\label{q_states}
|\Phi\rangle &=& \sum^{2^n-1}_{i=0} \Psi(x_i)|i\rangle \\
&=&\Psi(x_0)|0\cdots0\rangle+\dots+\Psi(x_{2^n-1})|1\cdots1\rangle,
\end{eqnarray}
which reproduces the vectorized wave function for the following quantum analog of the SOM in the Hilbert space. As in numerical discretization methods, this encoding of $\Psi(x)$ provides, in principle, satisfactory accuracy when the lattice spacing $dx$ is much smaller than any characteristic length scale of the wave packet. The preparation of an arbitrary initial qubit state $|\Phi\rangle$ encoding the initial wave packet $\Psi(x)$ can be approached by the variational quantum eigensolver (VQE)
\begin{equation}\label{vqe}
|\tilde{\Phi}\rangle = \prod_{i=1}^{p}\left[ \prod_{q=0}^{n-1}\left(U^{q,i}\right)U_{\text{ENT}}\right]|+\rangle^{\otimes n},
\end{equation}
where $|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ is a single-qubit state, the unitary $U^{q,i}(\theta)= R_z^{q,i}(\theta_1^{q,i})R_x^{q,i}(\theta_2^{q,i})R_z^{q,i}(\theta_3^{q,i})$ is a universal single-qubit gate, and $U_{\text{ENT}}$ represents CNOT gates that entangle neighboring qubits with periodic boundary conditions. In this way, an approximate initial state $|\tilde{\Phi}\rangle$ is prepared by optimizing $3pn$ parameters to minimize the cost function.
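A toy state-vector emulation of one such ansatz layer (pure numpy; the gate layout, qubit ordering, and seed are our own illustrative choices, not the paper's implementation):

```python
import numpy as np

def Rz(t): return np.array([[np.exp(-1j*t/2), 0], [0, np.exp(1j*t/2)]])
def Rx(t):
    c, s = np.cos(t/2), -1j*np.sin(t/2)
    return np.array([[c, s], [s, c]])

def one_qubit_on(U, q, n):
    """Embed a single-qubit gate U acting on qubit q (qubit 0 = leftmost factor)."""
    full = np.eye(1)
    for i in range(n):
        full = np.kron(full, U if i == q else np.eye(2))
    return full

def cnot(ctrl, targ, n):
    """CNOT as a permutation of computational-basis states."""
    dim = 2**n
    M = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - b)) & 1 for b in range(n)]
        if bits[ctrl]:
            bits[targ] ^= 1
        M[int("".join(map(str, bits)), 2), i] = 1
    return M

n, rng = 3, np.random.default_rng(0)
state = np.ones(2**n) / np.sqrt(2**n)          # |+>^n
for q in range(n):                             # one layer: Rz Rx Rz on each qubit
    t1, t2, t3 = rng.uniform(0, 2*np.pi, 3)
    state = one_qubit_on(Rz(t1) @ Rx(t2) @ Rz(t3), q, n) @ state
for q in range(n):                             # entangler: CNOT ring
    state = cnot(q, (q + 1) % n, n) @ state

print(round(float(np.linalg.norm(state)), 6))  # -> 1.0 (the layer is unitary)
```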
Next, consider the digital quantum simulation aimed at reproducing an equivalent SOM.
The dynamics of the wave packet is described by
\begin{equation}
\Phi(t+dt) \approx e^{-iH(t)dt}\Phi(t),
\end{equation}
where $e^{-iH(t)dt}$ is the time-evolution operator for the time step $dt$.
We use the quantum Fourier transform (QFT) as the quantum analog of the inverse discrete Fourier transform, which is key to the efficient implementation of the SOM. Hence, the wave function $|\Phi\rangle$ is evolved as
\begin{equation}
|\tilde{\Phi}(t+dt)\ranglengle = \mathcal{V}(t)_{dt/2}\text{QFT}\mathcal{T}(t)_{dt}\text{QFT}^\dag\mathcal{V}(t)_{dt/2}|\tilde{\Phi}(t)\ranglengle,
\end{equation}
where $\mathcal{V}(t)$ and $\mathcal{T}(t)$ are the potential operator in real space and the kinetic-energy operator in momentum space, respectively. Both are quadratic, and their diagonal elements can be written as
\begin{equation}\label{quadratic operator}
\mathcal{A}_{jj} = \exp\left\{-i dt [h(j dx +x_0)^2+\sigma]\right\},
\end{equation}
while all off-diagonal elements are zero. The preliminary result in Ref. \cite{quadraticOperator} demonstrates that the quadratic Hamiltonian can be exactly decomposed into a quantum circuit. In Fig.~\ref{figure1}, we plot the quantum circuit for implementing a quadratic Hamiltonian in the computational basis for DQS. The decomposition is verified by the DQS of nonadiabatic processes in molecular systems \cite{Molecular2020prl}.
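Classically, one Trotter step of this split-operator evolution can be sketched with numpy's FFT playing the role of QFT$^\dagger$/QFT (the grid, units, and harmonic potential are illustrative choices of ours):

```python
import numpy as np

def som_step(psi, x, dt, omega2):
    """One symmetric split-operator step for H = p^2/2 + omega^2 x^2 / 2."""
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)     # momentum grid
    half_V = np.exp(-1j * dt / 2 * omega2 * x**2 / 2)
    T = np.exp(-1j * dt * k**2 / 2)
    psi = half_V * psi
    psi = np.fft.ifft(T * np.fft.fft(psi))           # kinetic step in momentum space
    return half_V * psi

x = np.linspace(-10, 10, 256)
psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.linalg.norm(psi)
psi = som_step(psi, x, dt=0.01, omega2=1.0)

print(round(float(np.linalg.norm(psi)), 6))   # -> 1.0 (the step is unitary)
```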
\section{Fidelity-based Gradient Descent}
\subsection{Initial state preparation}
The first step is to encode the information of the wave packet into the state of the qubit register. The accuracy of the preparation of a target state of qubits by VQE depends on the qubit number $n$ and the parameter depth $p$, see Eq. (\ref{vqe}).
Note that the depth $p$ and the qubit number $n$ rapidly increase the complexity of VQE.
In Fig. \ref{figure2}(a), we compare the fidelity of state preparation on the parameter grid $\{n,p\}$.
In Fig. \ref{figure2}(b), without loss of accuracy, we choose $n = 6$, $p = 4$, and present the resulting $n$-qubit state for the corresponding density (\ref{wavefunction}) with $\omega_0 = 1$.
In what follows, the numerical results are produced by the quantum simulator {\tt statevector simulator} on the {\tt qiskit} platform, which is free of errors, decoherence, and imperfections. We will consider the noise of an actual quantum device in the Discussion.
\begin{figure}
\caption{The fidelity of state preparation by VQE as a function of the qubit number $n$ and parameter depth $p$ in (a), and the corresponding probability distribution of the qubits $|\Phi|^2$ compared with the density of the wave function $|\Psi(x)|^2$ in (b), with fidelity $F = 0.996$ for $n = 6$ and $p=4$.
}
\end{figure}
\subsection{The maximum-fidelity control}
A parametric optimization problem is usually mapped onto the minimization of a given cost function, whose local minimum is found with the Gradient Descent (GD) algorithm.
The optimal solution $M=\{m_0,m_1,\dots,m_i\}$ is obtained by minimizing the cost value $c =J(M)$ and can be expressed as
\begin{equation}
M^{\text{opt}} = \arg\min_{M}J(M).
\end{equation}
In our case, the control function is the trap frequency $f(t) =\omega^2(t)$, which is piecewise constant on the discrete time grid $t\in [0, t_f ]$ with $N_t$ intervals.
Accordingly, the control tuple $f(t) =\{f(0),f(dt),\dots,f(t_f)\}$ is constrained by $\delta_1\le|f(t)|\le\delta_2$ and $|f(t+dt)-f(t)|\le\Delta f$, and the boundary conditions $f(0)=1$ and $f(t_f) = 0.01$ are imposed for trap expansion with $\omega_f/\omega_0= 1/10$. Then, we optimize the $\omega$-dependent parameters $M = \{\theta_3(\omega)\}_t$ in the circuit shown in Fig. \ref{figure1} by minimizing the loss value generated by the measurements on the qubits at $t = t_f$.
Here, we exploit three different cost functions: the infidelity (IF) $(1-F)$, the fidelity susceptibility (FS) $\chi_f$, and the Bures angle (BA) $\mathcal{L}(\psi_t,\psi_{\text{tar}})$.
For simplicity, we
initialize the controller $f(t)$ with a linear dependence of the form $f(t)=(\omega_f^2-\omega_0^2)(t/t_f)+\omega_0^2$.
This yields a parametric constrained minimization problem in this case.
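A minimal sketch of this linear initial guess (parameter values from the text; the array layout is ours):

```python
import numpy as np

Nt, tf = 50, 3.152                 # Trotter intervals and total time
w0, wf = 1.0, 0.1                  # initial and final trap frequencies
t = np.linspace(0, tf, Nt + 1)
f = (wf**2 - w0**2) * (t / tf) + w0**2   # linear initial guess for f(t) = omega^2(t)

print(f[0], round(float(f[-1]), 6))      # -> 1.0 0.01 (boundary conditions satisfied)
```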
We exemplify the optimization process of finding the maximum-fidelity policy for different cost functions in Fig. \ref{figure3} by using the optimizer {\tt SLSQP} \cite{SLSQP} based on the {\tt scipy} \cite{scipy} platform.
\begin{figure}
\caption{(a) The infidelity $1-F$ as a function of the training iteration using three different loss functions: fidelity susceptibility (FS) with $\delta f=0.01$, infidelity (IF), and Bures angle (BA). (b)
The infidelity as a function of the iteration step for different $\delta f$ is compared when the fidelity susceptibility (FS) is taken as a loss function. Parameters: $N_t = 50$, $\omega_f = 0.1$, $n = 6$, $p = 4$, $\delta_1 = 10^{-6}$.}
\end{figure}
To this end, we assign the total time $t_f = t_f^{\text{opt}}$ calculated in (\ref{optimaltime}), and use the GD method to minimize the cost function for obtaining the maximum-fidelity control function.
In Fig. \ref{figure3}, we present infidelities versus the GD iteration when using each cost function.
The maximum-fidelity control is obtained upon their convergence.
One can see in (a) that FS converges faster than the other cost functions with the same optimizer, and in (b) we compare the FS-based GD for various coefficients $\delta f = [10^{-4}, 10^{-3},10^{-2},10^{-1},10^{0}]$.
Since the fastest convergence of the optimization process arises when $\delta f \le 10^{-3}$, we employ the FS $\chi_f$ as the cost function with the coefficient $\delta f = 10^{-3}$ to design the maximum-fidelity policy in the GD algorithm.
It is worth emphasizing that reducing the training iteration is equivalent to decreasing the accumulation of operation errors.
As a result, we can improve the accuracy of DQS with the same quantum volume.
By considering the trade-off between the complexity and accuracy of digitized circuits, we analyze the fidelity achieved by the maximum-fidelity policy for different numbers of Trotter steps $N_t$ and constraints on the control step $\Delta f$, which determine the depth of the circuit and the continuity of the control function, respectively. In Fig. \ref{figure4}(a), we present the fidelity density as a function of $N_t$ and $\Delta f$, where the contour curves corresponding to the fidelities $F=0.999, 0.99$ are marked. In addition, in Fig. \ref{figure4}(b), the maximum-fidelity control protocols for various $N_t$ and $\Delta f$ obtained from circuit learning are compared with the optimal-time protocol $t_f = t_f^{\text{opt}}$ produced by bang-bang control (\ref{bangbang}). Higher DQS accuracy requires a larger Trotter number $N_t$ and hence higher computational complexity.
On the other hand, the control function becomes smoother as $\Delta f \to 0$, at the cost of more Trotter steps and circuit complexity. In this context, we choose $N_t = 50$ and $\Delta f = 1$ in the following calculations.
\begin{figure}
\caption{(a) Fidelity as a function of $\Delta f$ and $N_t$ with total time $t_f=3.152$. The two dashed contour curves refer to $F = 0.99, 0.999$ and are labeled by $-2,-3$, respectively.
In (b), we present the trained fidelity-optimal controls compared to the bang-bang control (solid gray line). The fidelity takes the values
$F = 0.84,0.98,0.998,0.9998$ in the four cases listed in the legend from top to bottom.
Other parameters are chosen as in Fig. \ref{figure3}.}
\end{figure}
\subsection{Control phase transition at quantum speed limit}
In the previous section, we described an efficient GD-based hybrid algorithm to find the maximum-fidelity control in a quantum device by considering the loss function $J$, Trotter step $N_t$, and step length $\Delta f$.
Next, we shall show that a control phase transition (CPT) appears at a critical point due to the QSL.
The use of optimal control to reach the QSL in quantum state manipulation has been discussed in \cite{qslwithOC}.
However, the authors of \cite{qslwithOC} sought the maximum-fidelity control by reducing the infidelity, which is a highly time-consuming task. This can be improved by introducing the fidelity susceptibility $\chi_f$, as we have shown in the previous section.
In the space of protocols, the control phase transition is associated with abrupt changes in an optimal control function, satisfying given constraints as the duration of the process is varied \cite{CPT2018prx}.
In particular, the maximum-fidelity control function is unique when $t_f\le t_f^{\rm opt}$.
It is expected that the QSL can be reached at the point of CPT.
\begin{figure}
\caption{
(a) The control phase diagram, with the fidelity-optimal control sequence $f(t)=\omega^2(t)$ as a function of $t/t_f$ for different $t_f\in[2,5]$.
(b) Logarithm of the infidelity $\log_{10}(1-F)$.}
\end{figure}
In practice, we consider the expansion process from the initial state $\psi_0$ ($\omega_0 = 1$) to the target state $\psi_{\text{tar}}$ ($\omega_f = 0.1$) with constraints $\delta_1 =10^{-6}$, $\delta_2 = 1$, assuming $\omega^2 (t) >0$, and subsequently generate the maximum-fidelity control sequence $f(t)$ for $t_f\in[2,5]$, where the other parameters are the same as those in Fig. \ref{figure3}.
Consequently, we present the control phase diagram in Fig. \ref{figure5}(a) and illustrate several control functions of selected $t_f$ compared with the optimal bang-bang control (\ref{bangbang}).
The control function abruptly changes to a non-bang-bang-type phase at the transition point $t_f \approx t^{\text{opt}}_f$, i.e., the control phase transition point.
In addition, we note that there exists only one solution for the maximum-fidelity control function when $t_f \leq t^{\text{opt}}_f$, fulfilling the time-energy bound (\ref{tau_QSL}).
Here, we identify the minimum time $t_f^{\min}$ at which the maximum fidelity exceeds 0.999, and compare the result for different $N_t$ in Fig. \ref{figure5}(b).
It is evident in Fig. \ref{figure5}(c) that the accuracy of the optimal time produced by circuit learning essentially depends on $N_t$, and we set $N_t=50$ for the criterion $\log_{10}(1-F)\sim -4$ in the following calculations.
We emphasize that the time-optimal driving obtained here differs from the QSL but is closely related to it. Specifically, the time-optimal driving is bounded in terms of the trap frequency by contrast to the QSL, which is bounded in terms of the time-averaged standard deviation of the energy, see Eq. (\ref{qsl-ho}). The former is a weaker and more conservative bound, as the energy fluctuations can be upper-bounded in terms of the frequency.
In Fig. \ref{figure5}(d), we display the energy dispersion of maximum-fidelity control and compare it with the corresponding time-optimal driving.
The energy dispersion is likewise unique for $t_f \leq t^{\rm opt}_f$, i.e., before the CPT point. Moreover, the energy dispersion for the maximum-fidelity control is slightly smaller than in the time-optimal bang-bang control with bounded trap frequency.
In this sense, one can approach the QSL in time-optimal driving when an additional energy cost is allowed by relaxing the trap frequency bound.
\section{Discussion}
A significant source of discussion is the robustness of VQAs in an environment with stochastic perturbations.
In a real quantum computer with NISQ hardware, imperfections are unavoidably induced as a result of a finite number of measurements and a noisy environment.
As emphasized, the previous results were produced by the quantum simulator {\tt statevector simulator} on the {\tt qiskit} platform, which is free of errors, decoherence, and imperfections.
In this section, we implement our method in a noise-associated quantum device simulated by {\tt qasm simulator} with $N_m$ measurement shots.
The performance of the GD algorithm depends on the tolerance of the optimizer to errors. Thus, we shall balance the GD induced by noise and the parameter variance in a training landscape.
Let us recall the definition of noise in the framework of quantum information processing.
In general, the $n$-qubit register is coupled to an environment $\varepsilon$, leading to a nonunitary evolution of the system.
Initially, we assume the density operator of the register, $\rho(t_0)=\rho_0$, and that of the environment to be decoupled, so that the composite state is given by the tensor product $\rho\otimes \varepsilon$.
For any global unitary operator $U$ describing the dynamics of the composite state, the reduced evolution of the register reads
\begin{equation}
\rho(t) = {\rm Tr}_{\varepsilon}[U(\rho(t_0)\otimes \varepsilon)U^{\dagger}]\equiv \xi(\rho_0).
\end{equation}
This superoperator $\xi(\cdot)$ can be implemented for simulating a noise model in a quantum circuit.
The noisy quantum channel describes the nonunitary evolution of the time-varying density state in the Kraus representation
\begin{equation}
\rho(t) = \sum_kE_k\rho(t_0) E_k^{\dagger},
\end{equation}
where the $E_k$ satisfy the trace-preserving condition $\sum_kE_k^{\dagger}E_k = \textbf{1}$.
Since we perform the measurement on the qubits register only at $t=t_f$, imperfections induced by any kind of noise result in fluctuations of the measurement accuracy.
In this sense, we shall primarily consider the bit-flip error of measurements in a real quantum computer.
Assume the system's noise flips $|0\rangle$ and $|1\rangle$ with probability $\beta$. The superoperator for this bit-flip noise can be expressed as
\begin{equation}
\xi_{BF}(\rho) = (1-\beta)\rho+\beta X \rho X,
\end{equation}
where the corresponding Kraus operators are $\{\sqrt{1-\beta}\mathbb{I}, \sqrt{\beta}X\}$, in terms of the identity $\mathbb{I}$ and the Pauli operator $X$.
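As a quick numerical check (our own sketch), the channel and its Kraus operators can be verified directly:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def bit_flip(rho, beta):
    """Bit-flip channel xi_BF(rho) = (1 - beta) rho + beta X rho X."""
    return (1 - beta) * rho + beta * X @ rho @ X

# Kraus operators and the trace-preserving condition sum_k E_k^dag E_k = 1
beta = 0.1
E = [np.sqrt(1 - beta) * I2, np.sqrt(beta) * X]
print(np.allclose(sum(Ek.conj().T @ Ek for Ek in E), I2))   # -> True

rho = bit_flip(np.array([[1.0, 0.0], [0.0, 0.0]]), beta)    # start in |0><0|
print(rho[0, 0], rho[1, 1], np.trace(rho))                  # -> 0.9 0.1 1.0
```

The population leaking from $|0\rangle$ to $|1\rangle$ is exactly $\beta$, and the trace is preserved.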
\begin{figure}
\caption{(a) The average gradient $\overline{|\partial_{\theta_k}J|}$. (b) The infidelity as a function of the training iteration for various bit-flip probabilities $\beta$.}
\end{figure}
Next, we discuss whether the \textit{barren plateau} \cite{McClean2018nc} phenomenon occurs here. The barren plateau refers to the fact that the gradient of an observable vanishes exponentially as a function of the qubit number in the training landscape of VQAs. It has been widely studied in various ansatze of deep circuits \cite{VQA2021NP}. In general, the gradient of an objective function is calculated by means of the parameter-shift rule \cite{shift2018pra,analyticgrad2019pra}, expressed as $\partial_{\theta_k} J = \frac{1}{2}[J({\theta_k}+\frac{\pi}{2})- J({\theta_k}-\frac{\pi}{2})] $ for an arbitrary trainable parameter $\theta_k$ in the circuit.
In this sense, we define the average of the absolute gradient over $N_r$ random initializations
\begin{equation}\label{average_gradient}
\overline{|\partial_{\theta_k}J|} =\frac{1}{2N_r}\sum_{i=1}^{N_r} \left| J_i({\theta_k}+\frac{\pi}{2})- J_i({\theta_k}-\frac{\pi}{2})\right|.
\end{equation}
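A toy check of the parameter-shift rule on a single-qubit example (ours, not the paper's circuit): for $J(\theta) = \langle 0|R_x^\dagger(\theta) Z R_x(\theta)|0\rangle = \cos\theta$, the shifted difference reproduces the exact derivative $-\sin\theta$:

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def Rx(t):
    c, s = np.cos(t / 2), -1j * np.sin(t / 2)
    return np.array([[c, s], [s, c]])

def J(theta):
    """J(theta) = <0| Rx(theta)^dag Z Rx(theta) |0> = cos(theta)."""
    psi = Rx(theta) @ np.array([1.0, 0.0])
    return np.real(psi.conj() @ Z @ psi)

theta = 0.7
shift_grad = 0.5 * (J(theta + np.pi / 2) - J(theta - np.pi / 2))
print(np.isclose(shift_grad, -np.sin(theta)))   # -> True (exact, not a finite difference)
```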
Since the three objective functions proposed in the previous section all involve the fidelity, we here provide a numerical analysis of the average gradients of the fidelity $F$.
Essentially, the probability distribution of qubit states obeys a statistical precision of order $1/\sqrt{N_m}$, and the fidelity defined in Eq. (\ref{fidelity}) inherits the same precision.
{The derivative of an observable with respect to an arbitrary trainable parameter $\theta_k$ in the circuit is a linear function of the gradient with respect to the corresponding control parameter $f_k = f(k)$ at the $k$-th Trotter step: $\partial _{f_{k}} J = 2c\cdot\partial _{\theta_{k}} J$, with $c$ a real number. Thus, the gradient of the objective function with respect to the control parameter $f(t)$ shares the analytic expression in Eq. (\ref{average_gradient}); see also the details in Appendix \ref{App1}.}
Also, we calculate the average gradient in Eq. (\ref{average_gradient}) over $N_r = 50$ random initializations of $f(t)$ for various qubit numbers $n$, while the Trotter step is taken as a polynomial function $N_t = {\rm poly}(n)$. In Fig. \ref{figure6}(a), we demonstrate that the \textit{barren plateau} is avoided for the average gradient of a correlated parameter $\theta_k$.
The main reason for the absence of a \textit{barren plateau} is the reduced expressibility of the ansatz, due to the strong correlation of the parameters $\theta_k$ with the controller $f(t)$ in our method, a feature shared with the recent work in Ref. \cite{Expressibility2022prxQ}.
In this regard, we set measurement shots $N_m = 8192$ for statistical accuracy and energy saving by considering the gradient magnitudes, as shown in Fig. \ref{figure6}(a).
Moreover, we apply the optimizer of simultaneous perturbation stochastic approximation ({\tt SPSA}), which is widely used for solving optimization problems with statistical noise \cite{SPSA,MLbook}. In Fig. \ref{figure6}(b), we present the infidelity as a function of the training iteration for $\beta = 0,0.02,0.04,0.06$, where the infidelity for the noise-free case ($\beta = 0$) converges to $\sim10^{-2}$, consistent with the $\sim 1/\sqrt{N_m}$ criterion.
Moreover, the performance of {\tt SPSA} is compared with various optimizers in Appendix \ref{App2}.
Let us discuss the circuit complexity in terms of the qubit number $n$ and the number of Trotter steps $N_t$. The whole circuit consists of $N_t$ circuit units, each simulating one unitary operation $ \hat{U}(dt) =e^{-iHdt}$, as depicted in Fig. \ref{figure1}.
In the absence of $\theta_1$ and $\theta_2$, the gate number of each circuit unit, including the potential operator $\mathcal{V}(dt)$ and the kinetic-energy operator $\mathcal{T}(dt)$ in real space and momentum space, is $N_{\rm unit}\sim 2n^2$.
In addition, the operation of QFT and iQFT requires $\sim n^2/2$ controlled-phase gates. Consequently, the total number of gates for our ansatz is proportional to a quadratic function of $n$, yielding $5N_t n^2/2$.
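The stated counts can be tallied in a one-line helper (a sketch of the bookkeeping only):

```python
def total_gates(n, Nt):
    """Per Trotter step: ~2 n^2 gates for V and T, plus ~n^2/2 for QFT/iQFT."""
    return Nt * (2 * n**2 + n**2 / 2)   # = 5 Nt n^2 / 2

print(total_gates(6, 50))   # -> 4500.0 (for n = 6 qubits and N_t = 50 steps)
```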
To find the minimal-time control on the QC with reasonable precision, one can increase the qubit number $n$ and the Trotter step $N_t$, with an exponentially enlarged Hilbert space, at the cost of a quadratic increase in circuit complexity. Recently, alternative methods inspired by the Grover-Rudolph algorithm \cite{marin2021quantum} have been worked out for quantum state preparation and are expected to reduce this complexity.
Finally, we discuss the control problem beyond the quadratic Hamiltonian on which we have focused. For our case study, one can introduce perturbations of the trap, e.g., a time-independent anharmonicity involving an operator $x^4$, which is no longer quadratic. Although its exact decomposition into quantum circuits does not exist, one can still approximate the evolution block with arbitrary precision by the Solovay-Kitaev algorithm \cite{kitaev1997quantum}, placing it in the block of $\mathcal{V}_{dt/2}$ before or after the evolution of the quadratic Hamiltonian since it commutes with the harmonic potential operator.
\section{Conclusion}
To sum up, we have proposed GD-based circuit learning to solve a time-optimal control problem, the driving of a quantum particle trapped in a time-varying harmonic potential, and determined its quantum speed limit in relation to the control phase transition.
First, we have constructed the digitized quantum circuit of a time-dependent harmonic oscillator using a finite $n$-qubit register. Second, we have demonstrated that the learning rate of circuit optimization can be accelerated by considering various physical quantities, such as the infidelity, Bures angle, and fidelity susceptibility, as cost functions, thus reducing training iteration.
Third, we have established the relation between control phase transition and quantum speed limit.
Finally, we have established the error tolerance of our method by considering the presence of measurement errors in a quantum computer.
The absence of a barren plateau is further justified in our ansatz, enabling the application of VQAs for a class of tasks that is not affected by the fundamental limitations of NISQ devices.
As a heuristic example, we have demonstrated that quantum control can be efficiently simulated and optimized using a NISQ device by combining digital quantum simulation and hybrid circuit learning. Numerical experiments prove that barren plateaus are avoided in the framework.
\section*{ACKNOWLEDGMENTS}
This work was financially supported by NSFC (12075145), STCSM (Grants No. 2019SHZDZX01-ZX04), EU FET Open Grant EPIQUS (899368), the Basque Government through Grant No. IT1470-22, the project grant PID2021-126273NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by ``ERDF A way of making Europe" and ``ERDF Invest in
your Future", QUANTEK project (KK-2021/00070), and the BRTAQ project (expediente no. KK-2022/00041). HTY acknowledges a CSC fellowship (202006890071). X.C. acknowledges ayudas para contratos Ramón y Cajal--2015-2020 (RYC-2017-22482).
\appendix
\section{The parameter-shift rule}\label{App1}
\begin{figure}
\caption{Schematic illustration of a quantum circuit with Trotter step $N_t$. Each dashed block is detailed in Fig. \ref{figure1}.}
\end{figure}
One recipe to find the partial derivative of an objective function $J(\Theta)$ in parametric quantum circuits (PQCs) is known as the parameter-shift rule \cite{shift2018pra,analyticgrad2019pra}.
In general, the expectation value of an observable $\hat{B}$ as a function of a single parameter $\theta_k$ in a circuit is expressed as $J(\theta_k) =\langle \hat{B}(\theta_k) \rangle$.
We assume a sequence of unitary operations represented as $U(\theta_k) = U_LU_k(\theta_k)U_R$, for which we have
\begin{eqnarray}\label{J_appA}
J(\theta_k) &=& \langle0| U_R^{\dagger} U_k^{\dagger}(\theta_k)U_L^{\dagger} \hat{B} U_LU_k(\theta_k)U_R | 0\rangle \nonumber \\
&=&\langle z| \mathcal{M}(\hat{B},\theta_k) |z\rangle,
\end{eqnarray}
where $\mathcal{M}(\hat{B},\theta_k) =U_k^{\dagger}(\theta_k)U_L^{\dagger} \hat{B} U_L U_k(\theta_k) $ and the basis reads $|z\rangle = U_R | 0\rangle$.
Consider a unitary operator $U_k(\theta_k)$ generated by a Pauli matrix $\sigma_k $ as $U_k(\theta_k) = \exp(-i\theta_k\sigma_k/2)$.
The gradient of the objective function is defined as
\begin{eqnarray}\label{define_gad}
\partial_{\theta_k} J(z;\theta_k) &=& \langle z| \partial_{\theta_k} \mathcal{M}(\hat{B},\theta_k) |z\rangle \nonumber \\
&=& c
\left[\langle z| \mathcal{M}(\theta_k+s) |z\rangle -\langle z| \mathcal{M}(\theta_k-s) |z\rangle\right], ~~~
\end{eqnarray}
where the coefficient $c$ and the shift $s$ are independent of $\theta_k$.
The gradient $\partial_{\theta_k} U_k(\theta_k) = -\frac{i}{2}U_k(\theta_k)\sigma_k$, and inserting it into (\ref{J_appA}), we have \cite{McClean2018nc}
$ \partial_{\theta_k} J = \frac{i}{2}\langle z|U_k^{\dagger}(\theta_k)[\sigma_k,\hat{B}]U_k(\theta_k)|z\rangle$.
The commutation relation,
$
[\sigma_k,\hat{B}] = i \left(U_k^\dagger(-\frac{\pi}{2})\hat{B}U_k(-\frac{\pi}{2})-U_k^\dagger(\frac{\pi}{2})\hat{B}U_k(\frac{\pi}{2})\right)$,
enables us to derive the analytical gradient \cite{shift2018pra,analyticgrad2019pra}:
\begin{eqnarray}\label{shift_rule}
\partial_{\theta_k}J&=&
\frac{1}{2}
\langle z|\left[U_k^\dagger(\theta_k^+)\hat{B}U_k(\theta_k^+)-U_k^\dagger(\theta_k^-)\hat{B}U_k(\theta_k^-)\right]|z\rangle \nonumber \\
&=& \frac{1}{2}
\left[\langle z| \mathcal{M}(\theta_k^+) |z\rangle -\langle z| \mathcal{M}(\theta_k^-) |z\rangle\right],
\end{eqnarray}
with $\theta_k^{\pm} = \theta_k \pm \pi/2$.
The above expression provides an analytical evaluation of the gradient of an objective function involving Pauli operators.
In Fig. \ref{figure_app1}, we schematically illustrate the deep circuit of our method, which is composed of $N_t$ dashed blocks (corresponding to the $N_t$ Trotter steps). In each unitary operator $U^{k}(f_{k})$, all rotation parameters $(\theta_1,\theta_2,\theta_3)$ are $f_{k}$-correlated, as detailed in Fig. \ref{figure1}.
Let us now select an arbitrary single-qubit gate $R(\theta_k)$, with a gradient obeying the parameter shift rule in Eq. (\ref{shift_rule}).
According to the algorithm in Fig. \ref{figure1}, the rotation $\theta_k$ is correlated with $f_{k}$ in a linear form: $\theta_k = c_1 \cdot f_{k}$, where $c_1$ is a real number. Starting with Eq. (\ref{define_gad}), we have
$ \partial _{f_{k}} J = c [J(f_{k}+s)-J(f_{k}-s)]
= c [J(\theta'_k+s)-J(\theta'_k-s)]$,
where the gradient is independent of the initial angle $\theta'_k = \theta_k/c_1$.
Consequently, we find $\partial _{f_{k}} J = 2c\cdot\partial _{\theta_{k}} J$, where the shift in $f_{k}$ is $s = \pi/2$.
Furthermore, we introduce the average absolute ratio over $N_r$ random initializations,
\begin{equation}
\overline{|c|} =\frac{1}{2N_r}\sum_{i=1}^{N_r} \left| \frac{\partial _{f_k}J }{\partial _{\theta_{k}} J}\right|.
\end{equation}
In Fig. \ref{figure_app2}(a), we calculate the average value $\overline{|c|}$ with $N_r = 50$, which is independent of $n$.
\section{Comparisons of optimizers}\label{App2}
In this appendix, we compare the performance of several classical optimizers for the same optimization task with statistical errors from a finite number of measurements.
We choose widely used optimizers, namely, {\tt SLSQP}, {\tt COBYLA}, {\tt SPSA}, and {\tt BFGS}, based on the {\tt qiskit} library.
In Fig. \ref{figure_app2}(b), we present the infidelity as a function of the training iteration for various optimizers. {\tt SPSA} stands out for its performance in an optimization task
in the presence of bit-flip noise.
\begin{figure}
\caption{(a) The coefficient $\overline{|c|}$ for various qubit numbers $n$. (b) The infidelity as a function of the training iteration for various optimizers.}
\end{figure}
\end{document}
\begin{document}
\title{Nonlinear damped Timoshenko systems with second sound --- global existence and exponential stability}
\author{Salim A. Messaoudi\thanks{
Mathematical Sciences Department, KFUPM, Dhahran 31261, Saudi Arabia\newline
E-mail: [email protected]}, \thinspace Michael Pokojovy\thanks{
Fachbereich Mathematik und Statistik, Universit\"{a}t Konstanz, 78457 Konstanz,
Germany\newline
E-mail: [email protected]}, \thinspace Belkacem Said-Houari
\thanks{
Universit\'{e} Badji Mokhtar, Laboratoire de Math\'{e}matiques Appliqu\'{e}es, B.P. 12
Annaba 23000, Algerie\newline
E-mail: [email protected]}}
\date{March 2008}
\maketitle
\begin{abstract}
In this paper, we consider nonlinear thermoelastic systems of Timoshenko
type in a one-dimensional bounded domain. The system has two dissipative
mechanisms being present in the equation for transverse displacement and
rotation angle --- a frictional damping and a dissipation through hyperbolic
heat conduction modelled by Cattaneo's law, respectively. The global
existence of small, smooth solutions and the exponential stability in linear
and nonlinear cases are established.
\end{abstract}
\noindent AMS-Classification: 35B37, 35L55, 74D05, 93D15, 93D20 \newline
Keywords: Timoshenko systems, thermoelasticity, second sound, exponential
decay, nonlinearity, global existence
\section{Introduction}
In \cite{Ti1921}, a simple model describing the transverse vibration of a
beam was developed. This is given by a system of two coupled hyperbolic
equations of the form
\begin{eqnarray}
\rho u_{tt} &=& (K(u_x - \varphi))_x \quad \text{ in } (0, \infty) \times (0, L), \label{TIMOSHENKO_LIN_1} \\
I_\rho \varphi_{tt} &=& (EI \varphi_x)_x + K(u_x - \varphi) \quad \text{ in } (0, \infty) \times (0, L), \nonumber
\end{eqnarray}
where $t$ denotes the time variable and $x$ the space variable along a beam
of length $L$ in its equilibrium configuration. The unknown functions $u$
and $\varphi$ depending on $(t, x) \in (0, \infty) \times (0, L)$ model the
transverse displacement of the beam and the rotation angle of its filament,
respectively. The coefficients $\rho$, $I_\rho$, $E$, $I$ and $K$ represent
the density (i.e. the mass per unit length), the polar momentum of inertia
of a cross section, Young's modulus of elasticity, the momentum of inertia
of a cross section, and the shear modulus, respectively.
Kim and Renardy considered (\ref{TIMOSHENKO_LIN_1}) in \cite{KiRe1987} together with two boundary controls of the form
\begin{eqnarray}
K\varphi(t, L) - Ku_{x}(t,L) &=& \alpha u_{t}(t, L) \quad \text{ in }(0,\infty), \nonumber \\
EI\varphi_{x}(t, L) &=& -\beta \varphi_{t}(t, L) \quad \text{ in }(0,\infty) \nonumber
\end{eqnarray}
and used the multiplier techniques to establish an exponential decay result
for the natural energy of (\ref{TIMOSHENKO_LIN_1}). They also provided some
numerical estimates to the eigenvalues of the operator associated with the
system (\ref{TIMOSHENKO_LIN_1}). An analogous result was also established by
Feng \textit{et al.} in \cite{FeShiZha1998}, where a stabilization of
vibrations in a Timoshenko system was studied. Raposo \textit{et al.} studied
in \cite{RaFeSaCa2005} the following system
\begin{align}
& \rho_{1} u_{tt} - K(u_{x} - \varphi)_{x} + u_{t} = 0 \quad \text{ in } (0,\infty) \times (0, L), \nonumber \\
& \rho_{2} \varphi_{tt} - b\varphi_{xx} + K(u_{x} - \varphi) + \varphi_{t} = 0 \quad \text{ in } (0,\infty) \times (0,L), \label{TIMOSHENKO_DAMPED_TWICE} \\
& u(t, 0) = u(t, L) = \varphi(t, 0) = \varphi(t, L) = 0 \quad \text{ in }(0, \infty) \nonumber
\end{align}
and proved that the energy associated with (\ref{TIMOSHENKO_DAMPED_TWICE})
decays exponentially. This result is similar to an earlier one by Taylor \cite{Ta2000},
but, as the authors point out, the originality of their work lies in the
method based on the semigroup theory developed by Liu and Zheng \cite{LiuZhe1999}.
Soufyane and Wehbe considered in \cite{SouWeh2003} the system
\begin{align}
&\rho u_{tt} = (K(u_x - \varphi))_x \quad \text{ in } (0, \infty) \times (0,L), \nonumber \\
&I_\rho \varphi_{tt} = (EI\varphi_x)_x + K(u_x - \varphi) - b\varphi_t \quad
\text{ in } (0, \infty) \times (0, L), \label{TIMOSHENKO_DAMPLED_ONCE} \\
&u(t, 0) = u(t, L) = \varphi(t, 0) = \varphi(t, L) = 0 \quad \text{ in } (0, \infty), \nonumber
\end{align}
where $b$ is a positive continuous function satisfying
\begin{equation}
b(x) \geq b_0 > 0 \quad \text{ in } [a_0, a_1] \subset [0, L]. \nonumber
\end{equation}
In fact, they proved that the uniform stability of (\ref
{TIMOSHENKO_DAMPLED_ONCE}) holds if and only if the wave speeds are equal, i.e.
\begin{equation}
\frac{K}{\rho} = \frac{EI}{I_\rho}, \nonumber
\end{equation}
otherwise, only asymptotic stability holds. This result
improves earlier ones by Soufyane \cite{Sou1999} and by Shi and Feng \cite{ShiFe2002},
who proved an exponential decay of the solution of (\ref{TIMOSHENKO_LIN_1})
together with two locally distributed feedbacks.
Recently, Rivera and Racke \cite{MuRa2007} obtained a similar result in a
work where the damping function $b = b(x)$ is allowed to change its sign.
Also, Rivera and Racke \cite{MuRa2003} treated a nonlinear Timoshenko-type
system of the form
\begin{align}
&\rho_1 \varphi_{tt} - \sigma_1(\varphi_x, \psi)_x = 0, \nonumber \\
&\rho_2 \psi_{tt} - \chi(\psi_x)_x + \sigma_2(\varphi_x, \psi) + d\psi_t = 0 \nonumber
\end{align}
in a one-dimensional bounded domain. The dissipation is produced here
through a frictional damping which is only present in the equation for the
rotation angle. The authors gave an alternative proof for a necessary and
sufficient condition for exponential stability in the linear case and then
proved a polynomial stability in general. Moreover, they investigated the
global existence of small smooth solutions and exponential stability in the
nonlinear case.
Xu and Yung \cite{XuYu2003} studied a system of Timoshenko beams with
pointwise feedback controls, obtained information about the
eigenvalues and eigenfunctions of the system, and used this information to
examine its stability.
Ammar-Khodja \textit{et al.} \cite{AmBeRiRa2003} considered a linear
Timoshenko-type system with a memory term of the form
\begin{align}
& \rho_{1} \varphi_{tt} - K(\varphi_{x} + \psi)_{x} = 0, \label{TIMOSHENKO_MEMORY_TERM} \\
& \rho_{2} \psi_{tt} - b \psi_{xx} + \int_{0}^{t} g(t - s) \psi_{xx}(s)ds + K(\varphi_{x} + \psi) = 0 \nonumber
\end{align}
in $(0,\infty) \times (0,L)$, together with homogeneous boundary conditions.
They applied the multiplier techniques and proved that the system is
uniformly stable if and only if the wave speeds are equal,
i.e. $\frac{K}{\rho_{1}} = \frac{b}{\rho_{2}}$, and $g$ decays uniformly.
More precisely, they proved an exponential decay if $g$ decays exponentially and a polynomial decay
if $g$ decays polynomially. They also required some technical conditions on
both $g^{\prime}$ and $g^{\prime \prime}$ to obtain their result. The
feedback of memory type has also been studied by Santos \cite{Sa2002}. He
considered a Timoshenko system and showed that the presence of two feedbacks
of memory type at a part of the boundary stabilizes the system uniformly.
He also obtained the energy decay rate which is exactly the decay rate of
the relaxation functions.
Shi and Feng \cite{ShiFe2001} investigated a nonuniform Timoshenko beam and
showed that the vibration of the beam decays exponentially under some
locally distributed controls. To achieve their goal, the authors used the
frequency multiplier method.
For Timoshenko systems of classical thermoelasticity, Rivera and Racke
\cite{MuRa2007} considered, in $(0,\infty )\times (0,L)$, the following system
\begin{align}
& \rho_{1} \varphi_{tt} - \sigma (\varphi_{x}, \psi)_{x} = 0, \nonumber \\
& \rho_{2} \psi_{tt} - b \psi_{xx} + k(\varphi_{x} + \psi) + \gamma \theta_{x} = 0, \label{TIMOSHENKO_NONLINEAR_FOURIER} \\
& \rho_{3} \theta_{t} - \kappa \theta_{xx} + \gamma \psi_{tx} = 0, \nonumber
\end{align}
where the functions $\varphi$, $\psi$, and $\theta$ depend on $(t, x)$ and
model the transverse displacement of the beam, the rotation angle of the
filament, and the temperature difference, respectively. Under appropriate
conditions on $\sigma$, $\rho_{i}$, $b$, $k$, $\gamma$, they proved
several exponential decay results for the linearized system and a
non-exponential stability result for the case of different wave speeds.
In the above system, the heat flux is given by Fourier's law. This leads to the
physical discrepancy of infinite heat propagation speed:
any thermal disturbance at a single point has an instantaneous
effect everywhere in the medium. Experiments showed that heat conduction in
some dielectric crystals at low temperatures is free of this paradox;
the disturbances, being almost entirely thermal, propagate at a
finite speed. This phenomenon in dielectric crystals is called second sound.
To overcome this physical paradox, many theories have been developed, one of
which suggests replacing Fourier's law
\begin{equation}
q + \kappa \theta_x = 0 \nonumber
\end{equation}
by the so-called Cattaneo's law
\begin{equation}
\tau q_t + q + \kappa \theta_x = 0. \nonumber
\end{equation}
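To make the contrast concrete, one may combine Cattaneo's law (with normalized constants, and neglecting, purely for illustration, any coupling to the elastic variables) with the energy balance $\theta_{t} + q_{x} = 0$. Differentiating Cattaneo's law in $x$ and eliminating $q$ yields the telegraph equation
\begin{equation}
\tau \theta_{tt} + \theta_{t} = \kappa \theta_{xx}, \nonumber
\end{equation}
a damped wave equation that propagates thermal disturbances at the finite speed $\sqrt{\kappa/\tau}$. For $\tau = 0$ one recovers the parabolic heat equation $\theta_{t} = \kappa \theta_{xx}$ and, with it, the infinite propagation speed.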
Over the past two decades, a number of results concerning existence, blow-up, and
asymptotic behavior of smooth as well as weak solutions in thermoelasticity
with second sound have been established. Tarabek \cite{Ta1992} treated
problems related to the following one-dimensional system
\begin{eqnarray}
u_{tt} - a(u_{x}, \theta, q) u_{xx} + b(u_{x}, \theta, q) \theta_{x} &=& \alpha_{1}(u_{x}, \theta) qq_{x}, \nonumber \\
\theta_{t} + g(u_{x}, \theta, q) q_{x} + d(u_{x}, \theta, q) u_{tx} &=& \alpha_{2}(u_{x}, \theta) qq_{t},
\label{THERMOELASTICITY_CATTANEO_NONLINEAR_1D} \\
\tau(u_{x}, \theta) q_{t} + q +k(u_{x}, \theta) \theta_{x} &=& 0 \nonumber
\end{eqnarray}
in both bounded and unbounded situations and established global existence
results for small initial data. He also showed that these ``classical''
solutions tend to equilibrium as $t$ tends to infinity; however, no decay
rate was given. Later, Racke \cite{Ra2002} studied (\ref{THERMOELASTICITY_CATTANEO_NONLINEAR_1D})
and established exponential decay results for several linear and nonlinear initial boundary value problems. In
particular, he studied the system (\ref{THERMOELASTICITY_CATTANEO_NONLINEAR_1D}) for a rigidly clamped medium with
the temperature held constant on the boundary, i.e.
\begin{equation}
u(t, 0) = u(t, 1) = \theta(t, 0) = \theta(t, 1) = \bar{\theta} \quad \text{ in } (0, \infty), \nonumber
\end{equation}
and showed for sufficiently small initial data and $\alpha_{1} = \alpha_{2} = 0$
that the classical solution decays exponentially to an equilibrium state.
Messaoudi and Said-Houari \cite{MeSa2005} extended the decay result of \cite{Ra2002}
to the case where $\alpha_{1}$ and $\alpha_{2}$ are not necessarily zero.
Concerning the multi-dimensional case ($n = 2,3$), Racke \cite{Ra2003}
established an existence result for the following $n$-dimensional problem
\begin{align}
& u_{tt} - \mu \Delta u - (\mu + \lambda) \nabla \textrm{\thinspace div \thinspace} u
+ \beta \nabla \theta = 0, \quad (t,x) \in (0, \infty) \times \Omega, \nonumber \\
& \theta_{t} + \gamma \textrm{\thinspace div \thinspace} q
+ \delta \textrm{\thinspace div\thinspace} u_{t} = 0, \quad (t, x) \in (0, \infty) \times \Omega, \nonumber \\
& \tau q_{t} + q + \kappa \nabla \theta = 0, \quad (t, x) \in (0, \infty) \times \Omega, \label{THERMOELASTICITY_CATTANEO_3D} \\
& u(0, x) = u_{0}(x), \, u_{t}(0, x) = u_{1}(x), \, \theta(0, x) = \theta_{0}(x), \,q(0,x) = q_{0}(x), \quad x \in \Omega \nonumber \\
& u(t, x) = \theta(t, x) = 0, \quad (t, x) \in (0, \infty) \times \partial \Omega, \nonumber
\end{align}
where $\Omega$ is a bounded domain of $\mathbb{R}^{n}$ with a smooth boundary $\partial \Omega $.
$u = u(t, x) \in \mathbb{R}^{n}$ is the displacement vector, $\theta = \theta(t, x)$ is the temperature difference,
$q = q(t, x) \in \mathbb{R}^{n}$ is the heat flux, and $\mu$, $\lambda$, $\beta$, $\gamma$, $\delta$, $\tau$, $\kappa$ are positive constants,
where $\mu$, $\lambda$ are the Lam\'{e} moduli and $\tau$ is the relaxation time, a small parameter compared to the others.
In particular, if $\tau = 0$, the system (\ref{THERMOELASTICITY_CATTANEO_3D}) reduces to the system of thermoelasticity,
in which the heat flux is given by Fourier's law instead of Cattaneo's law. He also proved, under condition
$\nabla \times u = \nabla \times q = 0$, an exponential decay result for (\ref{THERMOELASTICITY_CATTANEO_3D}).
This result extends readily to radially symmetric solutions, since they satisfy the above condition.
Messaoudi \cite{Me2002} investigated the following problem
\begin{align}
& u_{tt} - \mu \Delta u - (\mu + \lambda) \nabla \textrm{\thinspace div\thinspace} u
+ \beta \nabla \theta = |u|^{p-2} u, \quad (t, x) \in (0, \infty) \times \Omega, \nonumber \\
& \theta_{t} + \gamma \textrm{\thinspace div\thinspace } q + \delta \textrm{\thinspace div\thinspace } u_{t}
= 0, \quad (t, x) \in (0, \infty) \times \Omega, \nonumber \\
& \tau q_{t} + q + \kappa \nabla \theta = 0, \quad (t, x) \in (0, \infty) \times \Omega , \label{THERMOELATICITY_CATTANEO_POLYNOMIAL_3D} \\
& u(0, x) = u_{0}(x), \, u_{t}(0, x) = u_{1}(x), \, \theta (0, x) = \theta_{0}(x), \, q(0, x) = q_{0}(x), \quad x \in \Omega \nonumber \\
& u(t, x) = \theta(t, x) = 0, \quad (t, x) \in (0, \infty) \times \partial \Omega \nonumber
\end{align}
for $p > 2$, in which a nonlinear source term competes with the damping
caused by the heat conduction, and established a local existence result. He
also showed that solutions with negative initial energy blow up in finite
time. The blow-up result was then improved by Messaoudi and Said-Houari \cite{MeSa2004}
to accommodate certain solutions with positive initial energy.
In the present work, we are concerned with
\begin{align}
& \rho_{1} \varphi_{tt} - \sigma (\varphi_{x}, \psi)_{x} + \mu \varphi_{t} = 0, \quad (t, x) \in (0, \infty) \times (0, L), \nonumber \\
& \rho_{2} \psi_{tt} - b \psi_{xx} + k(\varphi_{x} + \psi) + \beta \theta_{x} = 0, \quad (t, x) \in (0, \infty) \times (0, L), \nonumber \\
& \rho_{3} \theta_{t} + \gamma q_{x} + \delta \psi_{tx} = 0, \quad (t, x) \in (0, \infty ) \times (0, L),
\label{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED} \\
& \tau_{0} q_{t} + q + \kappa \theta_{x} = 0, \quad (t, x) \in (0, \infty) \times (0,L), \nonumber
\end{align}
where $\varphi = \varphi(t, x)$ is the displacement vector, $\psi = \psi(t,x)$
is the rotation angle of the filament, $\theta = \theta(t, x)$ is the
temperature difference, $q = q(t, x)$ is the heat flux vector, $\rho_{1}$,
$\rho_{2}$, $\rho_{3}$, $b$, $k$, $\gamma$, $\delta$, $\kappa$, $\mu$,
$\tau_{0}$ are positive constants. The nonlinear function $\sigma$ is
assumed to be sufficiently smooth and satisfy
\begin{equation}
\sigma_{\varphi_{x}}(0, 0) = \sigma_{\psi}(0, 0) = k \nonumber
\end{equation}
and
\begin{equation}
\sigma_{\varphi_{x} \varphi_{x}}(0, 0) = \sigma_{\varphi_{x}\psi}(0, 0) = \sigma_{\psi \psi}(0, 0) = 0. \nonumber
\end{equation}
This system models the transverse vibration of a beam subject to heat
conduction given by Cattaneo's law instead of the usual Fourier law. We
should note here that the dissipative effects of heat conduction induced by
Cattaneo's law are usually weaker than those induced by Fourier's law
(though an opposite effect was observed in \cite{Ir2006}). This justifies the
presence of the extra damping term in the first equation of (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED}).
In fact, if $\mu = 0$, Fern\'{a}ndez Sare and Racke \cite{FeRa2007} have recently proved
that (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED}) is no longer exponentially stable,
even in the case of equal propagation speeds ($\rho_{1}/\rho_{2} = k/b$).
Moreover, they showed that this ``unexpected'' phenomenon (the loss of
exponential stability) takes place even in the presence of a viscoelastic
damping in the second equation of (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED}).
If $\mu > 0$ but $\beta = 0$, one can also prove with the aid of semigroup theory (cf. \cite{MuRa2002}, Section 4)
that the system is not exponentially stable, regardless of the relation between the coefficients.
Our aim is to show that the presence of the frictional damping $\mu \varphi_{t}$ in the first
equation of (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED}) drives the
system to stability at an exponential rate, independently of the wave speeds, in both the linear and the nonlinear case.
The structure of the paper is as follows. In section \ref{SECTION_LINEAR_1}, we discuss the well-posedness
and exponential stability of the linearized problem for $\varphi = \psi = q = 0$ on the boundary.
In section \ref{SECTION_LINEAR_2}, we establish the same result for $\varphi_{x} = \psi = q = 0$ on the boundary.
In section \ref{SECTION_NONLINEAR_1}, we study the nonlinear system subject to the boundary conditions $\varphi_{x} = \psi = q = 0$
and show global unique solvability and exponential stability for small initial data.
\section{Linear exponential stability --- $\varphi = \psi = q = 0$} \label{SECTION_LINEAR_1}
For the sake of technical convenience, by scaling the system (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED}),
we transform it to an equivalent form
\begin{align}
&\rho_1 \varphi_{tt} - \sigma(\varphi_x, \psi)_x + \mu \varphi_t = 0, \quad (t, x) \in (0, \infty) \times (0, L), \nonumber \\
&\rho_2 \psi_{tt} - b \psi_{xx} + k(\varphi_x + \psi) + \gamma \theta_x = 0, \quad (t, x) \in (0, \infty) \times (0, L), \nonumber \\
&\rho_3 \theta_t + \kappa q_x + \gamma \psi_{tx} = 0, \quad (t, x) \in (0,
\infty) \times (0, L), \label{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV} \\
&\tau_0 q_t + \delta q + \kappa \theta_x = 0, \quad (t, x) \in (0, \infty) \times (0, L), \nonumber
\end{align}
with new constants and with the nonlinear function $\sigma$ still satisfying (possibly for a new $k$)
\begin{equation}
\sigma_{\varphi_x}(0, 0) = \sigma_{\psi}(0, 0) = k \label{SIGMA_ASSUMPTION_1}
\end{equation}
and
\begin{equation}
\sigma_{\varphi_x \varphi_x}(0, 0) = \sigma_{\varphi_x \psi}(0, 0) = \sigma_{\psi \psi}(0, 0) = 0. \label{SIGMA_ASSUMPTION_2}
\end{equation}
In this section, we consider the linearization of (\ref {TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV}) given by
\begin{align}
&\rho_1 \varphi_{tt} - k(\varphi_x + \psi)_x + \mu \varphi_t = 0, \quad (t, x) \in (0, \infty) \times (0, L), \nonumber \\
&\rho_2 \psi_{tt} - b \psi_{xx} + k(\varphi_x + \psi) + \gamma \theta_x = 0, \quad (t, x) \in (0, \infty) \times (0, L), \nonumber \\
&\rho_3 \theta_t + \kappa q_x + \gamma \psi_{tx} = 0, \quad (t, x) \in (0, \infty) \times (0, L), \label{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV} \\
&\tau_0 q_t + \delta q + \kappa \theta_x = 0, \quad (t, x) \in (0, \infty) \times (0, L), \nonumber
\end{align}
completed by the following boundary and initial conditions
\begin{align}
\varphi(t, 0) &= \varphi(t, L) = \psi(t, 0) = \psi(t, L) = q(t, 0) = q(t, L) = 0 \text{ in } (0, \infty),
\label{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_BC} \\
\varphi(0, \cdot) &= \varphi_0, \varphi_t(0, \cdot) = \varphi_1, \psi(0, \cdot) = \psi_0, \psi_t(0, \cdot) = \psi_1, \nonumber \\
\theta(0, \cdot) &= \theta_0, q(0, \cdot) = q_0. \label{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_IC}
\end{align}
We present a brief discussion of the well-posedness and the semigroup formulation of
(\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV})--(\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_IC}).
For this purpose, we set $V:=(\varphi ,\varphi_{t},\psi ,\psi_{t},\theta ,q)^{t}$ and observe that $V$ satisfies
\begin{equation}
\left\{
\begin{array}{c}
V_{t} = AV \\
V(0) = V_{0}
\end{array}
\right., \label{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_SEMIGROUP_FORMULATION}
\end{equation}
where $V_{0} := (\varphi_{0}, \varphi_{1}, \psi_{0}, \psi_{1},\theta_{0},q_{0})^{t}$ and $A$ is the differential operator
\begin{equation}
A = \left(
\begin{array}{cccccc}
0 & 1 & 0 & 0 & 0 & 0 \\
\frac{k}{\rho_{1}}\partial_{x}^{2} & -\frac{\mu }{\rho_{1}} & \frac{k}{\rho_{1}}\partial_{x} & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
-\frac{k}{\rho_{2}}\partial_{x} & 0 & \frac{b}{\rho_{2}} \partial_{x}^{2} -
\frac{k}{\rho_{2}} & 0 & -\frac{\gamma}{\rho_{2}}\partial_{x} & 0 \\
0 & 0 & 0 & -\frac{\gamma}{\rho_{3}}\partial_{x} & 0 & -\frac{\kappa}{\rho_{3}}\partial_{x} \\
0 & 0 & 0 & 0 & -\frac{\kappa}{\tau_{0}}\partial_{x} & -\frac{\delta}{\tau_{0}}
\end{array}
\right). \nonumber
\end{equation}
The energy space
\begin{equation}
\mathcal{H} := H^1_0((0, L)) \times L^2((0, L)) \times H^1_0((0, L)) \times L^2((0, L)) \times L^2((0, L)) \times L^2((0, L)) \nonumber
\end{equation}
is a Hilbert space with respect to the inner product
\begin{align}
\langle V, W \rangle_\mathcal{H} &=
\phantom{+} \rho_1 \langle V^2, W^2\rangle_{L^2((0, L))} + \rho_2 \langle V^4, W^4\rangle_{L^2((0, L))} \nonumber \\
&\phantom{=} +b \langle V^3_x, W^3_x\rangle_{L^2((0, L))} + k \langle V^1_x + V^3, W^1_x + W^3\rangle_{L^2((0, L))} \nonumber \\
&\phantom{=} +\rho_3 \langle V^5, W^5\rangle_{L^2((0, L))} + \tau_0 \langle V^6, W^6\rangle_{L^2((0, L))} \nonumber
\end{align}
for all $V, W \in \mathcal{H}$. The domain of $A$ is then
\begin{align}
D(A) = \{V \in \mathcal{H} \,|\, &V^1, V^3 \in H^2((0, L)) \cap H^1_0((0, L)), V^2, V^4 \in H^1_0((0, L)), \nonumber \\
&V^5 \in H^1((0, L)), V^6 \in H^1_0((0, L))\}. \nonumber
\end{align}
Following \cite{Ra2002}, one easily verifies the following lemma.
\begin{Lemma}
The operator $A$ has the following properties:
\begin{enumerate}
\item $\overline{D(A)}=\mathcal{H}$ and $A$ is closed;
\item $A$ is dissipative;
\item $D(A)=D(A^{\ast})$.
\end{enumerate}
\end{Lemma}
Now, by virtue of the Hille-Yosida theorem, we have the following result.
\begin{Theorem}
$A$ generates a $C_{0}$-semigroup of contractions $\{e^{At}\}_{t\geq 0}$.
If $V_{0} \in D(A)$, the unique solution $V \in C^{1}([0, \infty), \mathcal{H}) \cap C^{0}([0, \infty), D(A))$
to (\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_SEMIGROUP_FORMULATION}) is given by
$V(t) = e^{At} V_{0}$. If $V_{0}\in D(A^{n})$ for $n \in \mathbb{N}$, then $V \in C^{0}([0, \infty), D(A^{n}))$.
\end{Theorem}
Our next aim is to obtain an exponential stability result for the energy
functional $E(t) = E(t; \varphi, \psi, \theta, q)$ given by
\begin{align}
E(t; \varphi, \psi, \theta, q) = \frac{1}{2} \int^L_0 (\rho_1 \varphi_t^2 +
\rho_2 \psi_t^2 + b \psi_x^2 + k(\varphi_x + \psi)^2 + \rho_3 \theta^2 + \tau_0 q^2) \mathrm{d}x. \nonumber
\end{align}
We formulate and prove the following theorem.
\begin{Theorem}
Let $(\varphi, \psi, \theta, q)$ be the unique solution to
(\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV})--(\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_IC}).
Then, there exist two positive constants $C$ and $\alpha$, independent of $t$ and the initial data, such that
\begin{equation}
E(t; \varphi, \psi, \bar{\theta}, q) \leq CE(0; \varphi, \psi, \bar{\theta}, q) e^{-2\alpha t} \text{ for all } t \geq 0, \nonumber
\end{equation}
where $\bar{\theta}(t, x) = \theta (t,x) - \frac{1}{L} \int_{0}^{L} \theta_0(s) \mathrm{d}s$.
\end{Theorem}
\begin{Proof}
To show the exponential stability of the energy functional, we use
Lyapunov's method, i.e., we construct a Lyapunov functional $\mathcal{L}$ satisfying
\[
\beta_{1}E(t) \leq \mathcal{L}(t) \leq \beta_{2}E(t), \quad t \geq 0
\]
for positive constants $\beta_{1}$, $\beta_{2}$ and
\[
\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{L}(t)\leq -2\alpha \mathcal{L} (t),\quad t\geq 0
\]
for some $\alpha > 0$. This will be achieved by a careful choice of multipliers.
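Indeed, once such a functional $\mathcal{L}$ is constructed, Gronwall's inequality gives $\mathcal{L}(t) \leq \mathcal{L}(0) e^{-2\alpha t}$, and hence
\begin{equation}
E(t) \leq \frac{1}{\beta_{1}} \mathcal{L}(t) \leq \frac{1}{\beta_{1}} \mathcal{L}(0) e^{-2\alpha t}
\leq \frac{\beta_{2}}{\beta_{1}} E(0) e^{-2\alpha t}, \quad t \geq 0, \nonumber
\end{equation}
which is the claimed exponential decay with $C = \beta_{2}/\beta_{1}$.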
Multiplying in $L^{2}((0,L))$ the first equation in (\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV})
by $\varphi_{t}$, the second by $\psi_{t}$, the third by $\theta$ and the fourth by $q$, and integrating by parts, we obtain
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t}E(t) = -\mu \int_{0}^{L} \varphi_{t}^{2} \mathrm{d}x - \delta \int_{0}^{L}q^{2} \mathrm{d}x.
\end{equation}
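For the reader's convenience, we note how the coupling terms cancel in this computation: the boundary conditions (\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_BC}) make all boundary terms vanish, so that
\begin{align}
k\int_{0}^{L} (\varphi_{x}+\psi)_{x}\varphi_{t}\,\mathrm{d}x
&= -k\int_{0}^{L} (\varphi_{x}+\psi)\varphi_{tx}\,\mathrm{d}x, \nonumber\\
\gamma \int_{0}^{L} \theta_{x}\psi_{t}\,\mathrm{d}x + \gamma \int_{0}^{L}\psi_{tx}\theta\,\mathrm{d}x &= 0, \qquad
\kappa \int_{0}^{L} q_{x}\theta\,\mathrm{d}x + \kappa \int_{0}^{L}\theta_{x} q\,\mathrm{d}x = 0, \nonumber
\end{align}
and, after summation, only the frictional and Cattaneo damping terms $-\mu\int_{0}^{L}\varphi_{t}^{2}\,\mathrm{d}x$ and $-\delta\int_{0}^{L}q^{2}\,\mathrm{d}x$ survive.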
As in \cite{MuRa2002}, let $w$ be a solution to
\[
-w_{xx}=\psi_{x},\quad w(0)=w(L)=0
\]
and let
\[
I_{1} := \int_{0}^{L} \left(\rho_{2} \psi_{t} \psi + \rho_{1} \varphi_{t} w
- \frac{\gamma \tau_{0}}{\kappa} \psi q\right) \mathrm{d}x.
\]
Then, taking into account the second equation in (\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV}), we obtain
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}& \int_{0}^{L}\rho_{2}\psi_{t}\psi \mathrm{d}
x=\rho_{2}\int_{0}^{L}\left( \psi_{t}^{2}+\psi_{tt}\psi \right) \mathrm{d}
x \nonumber \\
& =\rho_{2}\int_{0}^{L}\psi_{t}^{2}\mathrm{d}x+b\int_{0}^{L}\psi_{xx}\psi
\mathrm{d}x-k\int_{0}^{L}(\varphi_{x}+\psi )\psi \mathrm{d}x-\gamma
\int_{0}^{L}\theta_{x}\psi \mathrm{d}x. \nonumber
\end{align}
Further, using the first and the fourth equations in (\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV}), we get
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t} &\int_{0}^{L} \rho_{1} \varphi_{t} w \mathrm{d}x =
\rho_{1}\int_{0}^{L} \left(\varphi_{tt} w + \varphi_{t} w_{t}\right) \mathrm{d}x \nonumber \\
& = - k \int_{0}^{L} \varphi \psi_{x} \mathrm{d} x + k \int_{0}^{L} w_{x}^{2} \mathrm{d}x
- \mu \int_{0}^{L} \varphi_{t} w \mathrm{d}x + \rho_{1} \int_{0}^{L} \varphi_{t}w_{t} \mathrm{d}x, \nonumber \\
\frac{\mathrm{d}}{\mathrm{d}t} & \int_{0}^{L} - \frac{\gamma \tau_{0}}{\kappa} \psi q\mathrm{d}x
= - \frac{\gamma \tau_{0}}{\kappa } \int_{0}^{L} \psi_{t}q \mathrm{d}x
+ \frac{\gamma}{\kappa} \int_{0}^{L} \psi (\delta q + \kappa \theta_{x})\mathrm{d}x \nonumber \\
& = -\frac{\gamma \tau_{0}}{\kappa} \int_{0}^{L} \psi_{t} q \mathrm{d}x
+ \frac{\gamma \delta}{\kappa} \int_{0}^{L}\psi q\mathrm{d}x + \gamma \int_{0}^{L} \theta_{x} \psi \mathrm{d}x. \nonumber
\end{align}
Combining the above identities, we find
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t} I_{1}& = \rho_{2} \int_{0}^{L} \psi_{t}^{2} \mathrm{d}x - b \int_{0}^{L} \psi_{x}^{2} \mathrm{d}x
- k \int_{0}^{L} \psi^{2} \mathrm{d}x+k\int_{0}^{L}w_{x}^{2}\mathrm{d}x \nonumber \\
& \phantom{=} - \mu \int_{0}^{L} \varphi_{t} w \mathrm{d}x + \rho_{1} \int_{0}^{L} \varphi_{t} w_{t} \mathrm{d}x
- \frac{\gamma \tau_{0}}{\kappa} \int_{0}^{L} \psi_{t} q \mathrm{d}x
+ \frac{\gamma \delta}{\kappa} \int_{0}^{L}\psi q\mathrm{d}x. \nonumber
\end{align}
Observing
\begin{equation}
\int_{0}^{L} w_{x}^{2} \mathrm{d}x \leq \int_{0}^{L} \psi^{2} \mathrm{d}x \leq c \int_{0}^{L} \psi_{x}^{2} \mathrm{d}x,
\end{equation}
with the Poincar\'{e} constant $c = \frac{L^{2}}{\pi^{2}} > 0$, we conclude using Young's inequality
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t} I_{1}
\leq & \phantom{+} \rho_{2} \int_{0}^{L}\psi_{t}^{2} \mathrm{d}x - b \int_{0}^{L} \psi_{x}^{2} \mathrm{d}x
-k\int_{0}^{L}\psi^{2} \mathrm{d}x + k\int_{0}^{L} \psi^{2}\mathrm{d}x \nonumber \\
& +\frac{\mu}{2} \int_{0}^{L} \left(\varepsilon_{1}w^{2} + \frac{1}{\varepsilon_{1}}\varphi_{t}^{2}\right) \mathrm{d}x
+ \frac{\rho_{1}}{2} \int_{0}^{L} \left( \varepsilon_{1} w_{t}^{2}
+ \frac{1}{\varepsilon_{1}} \varphi_{t}^{2} \right) \mathrm{d}x \nonumber \\
& +\frac{\gamma \tau_{0}}{2\kappa} \int_{0}^{L} \left(\varepsilon_{1} \psi_{t}^{2}+\frac{1}{\varepsilon_{1}} q^{2}\right)\mathrm{d}x
+\frac{\gamma \delta}{2\kappa} \int_{0}^{L} \left(\varepsilon_{1} \psi^{2} + \frac{1}{\varepsilon_{1}}q^{2}\right) \mathrm{d}x \nonumber \\
\leq &-\left[b - \frac{\varepsilon_{1}}{2} \left(\mu c^{2} + \frac{\delta \gamma c}{\kappa }\right) \right]
\int_{0}^{L}\psi_{x}^{2} \mathrm{d} x + \left[\rho_{2} + \frac{\varepsilon_{1}}{2}
\left(\rho_{1}c + \frac{\gamma \tau_{0}}{\kappa }\right) \right] \int_{0}^{L}\psi_{t}^{2}\mathrm{d}x \nonumber \\
& +\frac{1}{2\varepsilon_{1}}\left( \mu +\rho_{1}\right) \int_{0}^{L}\varphi_{t}^{2}\mathrm{d}x
+ \frac{1}{2\varepsilon_{1}}\left(\frac{\gamma \tau_{0}}{\kappa } + \frac{\delta \gamma}{\kappa}\right)
\int_{0}^{L}q^{2}\mathrm{d}x. \label{I1_ESTIMATE}
\end{align}
for any $\varepsilon_{1} > 0$.
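The first inequality in the observation preceding (\ref{I1_ESTIMATE}) follows directly from the definition of $w$: integrating by parts twice (all boundary terms vanish since $w(0) = w(L) = \psi(t,0) = \psi(t,L) = 0$) and applying the Cauchy-Schwarz inequality,
\begin{equation}
\int_{0}^{L} w_{x}^{2}\,\mathrm{d}x = -\int_{0}^{L} w_{xx} w\,\mathrm{d}x
= \int_{0}^{L} \psi_{x} w\,\mathrm{d}x = -\int_{0}^{L} \psi w_{x}\,\mathrm{d}x
\leq \left(\int_{0}^{L}\psi^{2}\,\mathrm{d}x\right)^{1/2} \left(\int_{0}^{L} w_{x}^{2}\,\mathrm{d}x\right)^{1/2}, \nonumber
\end{equation}
whence $\int_{0}^{L} w_{x}^{2}\,\mathrm{d}x \leq \int_{0}^{L} \psi^{2}\,\mathrm{d}x$; the second inequality is the Poincar\'{e} inequality for $\psi \in H^1_0((0,L))$.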
Next, we consider the functional $I_{2}$ given by
\[
I_{2}:=\rho_{1}\int_{0}^{L}\varphi_{t}\varphi \mathrm{d}x.
\]
It easily follows that
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t} I_{2} &= \rho_{1} \int_{0}^{L} \varphi_{tt} \varphi \mathrm{d}x
+\rho_{1} \int_{0}^{L} \varphi_{t}^{2} \mathrm{d}x \nonumber \\
& =\int_{0}^{L} k(\varphi_{x} + \psi)_{x} \varphi \mathrm{d}x
- \mu \int_{0}^{L}\varphi_{t}\varphi \mathrm{d}x + \rho_{1}\int_{0}^{L}\varphi_{t}^{2}\mathrm{d}x \nonumber \\
& =-k\int_{0}^{L}\varphi_{x}^{2} \mathrm{d}x + k\int_{0}^{L} \psi_{x} \varphi \mathrm{d}x
- \mu \int_{0}^{L}\varphi_{t}\varphi \mathrm{d}x + \rho_{1}\int_{0}^{L}\varphi_{t}^{2}\mathrm{d}x, \nonumber
\end{align}
which can be estimated by
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}I_{2}& \leq -k \int_{0}^{L} \varphi_{x}^{2} \mathrm{d}x
+\frac{k}{2} \int_{0}^{L} \left(\varepsilon_{2} \varphi^{2} + \frac{1}{\varepsilon_{2}}\psi_{x}^{2}\right) \mathrm{d}x \nonumber \\
& \phantom{=} + \frac{\mu}{2} \int_{0}^{L} \left(\varepsilon_{2}\varphi^{2}
+ \frac{1}{\varepsilon_{2}}\varphi_{t}^{2} \right) \mathrm{d}x
+\rho_{1}\int_{0}^{L}\varphi_{t}^{2}\mathrm{d}x \nonumber
\end{align}
\begin{align}
& \leq - \left(k - \frac{\varepsilon_{2}c}{2}(k + \mu)\right)
\int_{0}^{L} \varphi_{x}^{2} \mathrm{d}x + \frac{k}{2\varepsilon_{2}} \int_{0}^{L}\psi_{x}^{2}\mathrm{d}x \nonumber \\
& \phantom{=} +\left(\frac{\mu}{2\varepsilon_{2}} + \rho_{1}\right) \int_{0}^{L} \varphi_{t}^{2} \mathrm{d}x \label{I2_ESTIMATE}
\end{align}
for any $\varepsilon_{2} > 0$.
Next we consider a functional $I_{3}$ defined by
\begin{equation}
I_{3} := N_{1} I_{1} + I_{2} \nonumber
\end{equation}
for some $N_{1} > 0$ and, combining (\ref{I1_ESTIMATE}) and (\ref{I2_ESTIMATE}), arrive at
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}I_{3} \leq& -\left[N_{1} \left(b - \frac{\varepsilon_{1}}{2}
\left(\mu c^{2} +\frac{\delta \gamma c}{\kappa}\right)\right)
- \frac{k}{2\varepsilon_{2}}\right] \int_{0}^{L} \psi_{x}^{2} \mathrm{d}x \nonumber \\
&-\left(k - \frac{\varepsilon_{2}c}{2}(k + \mu)\right)
\int_{0}^{L} \varphi_{x}^{2} \mathrm{d}x + N_{1} \left[\rho_{2} + \frac{\varepsilon_{1}}{2}
\left(\rho_{1}c+\frac{\gamma \tau_{0}}{\kappa }\right) \right] \int_{0}^{L}\psi_{t}^{2}\mathrm{d}x \nonumber \\
& +\left[ N_{1}\frac{1}{2 \varepsilon_{1}}\left(\mu + \rho_{1}\right) + \left(\frac{\mu}{2\varepsilon_{2}} + \rho_{1}\right) \right]
\int_{0}^{L}\varphi_{t}^{2} \mathrm{d}x \nonumber \\
& +N_{1}\frac{1}{2\varepsilon_{1}} \left(\frac{\gamma \tau_{0}}{\kappa}
+ \frac{\delta \gamma}{\kappa}\right) \int_{0}^{L}q^{2} \mathrm{d}x. \label{I3_ESTIMATE}
\end{align}
At this point, we introduce
\begin{equation}
\bar{\theta}(t, x) = \theta(t, x) - \frac{1}{L} \int_{0}^{L} \theta_{0}(x) \mathrm{d}x. \nonumber
\end{equation}
One can easily verify that $(\varphi, \psi, \bar{\theta}, q)$ satisfies system (\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV}).
Moreover, one can apply the Poincar\'{e} inequality to $\bar{\theta}$
\begin{equation}
\int_{0}^{L}\bar{\theta}^{2}(t, x) \mathrm{d}x \leq c \int_{0}^{L}\bar{\theta}_{x}^{2}(t,x)\mathrm{d}x, \nonumber
\end{equation}
since $\int_{0}^{L} \bar{\theta}(t, x) \mathrm{d}x = 0$ for all $t \geq 0$.
Until the end of this section, we shall work with $\bar{\theta}$ but denote it by $\theta$.
In order to obtain a negative term involving $\int_{0}^{L} \psi_{t}^{2} \mathrm{d}x$, we introduce, as in \cite{MuRa2002},
the following functional
\[
I_{4}(t) := \rho_{2} \rho_{3} \int_{0}^{L} \left(\int_{0}^{x} \theta(t, y) \mathrm{d}y\right) \psi_{t}(t, x)\mathrm{d}x,
\]
and find
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t} I_{4}
=& \int_{0}^{L} \left(\int_{0}^{x} \rho_{3}\theta_{t}\mathrm{d}y\right) \rho_{2}\psi_{t} \mathrm{d}x
+\int_{0}^{L} \left(\int_{0}^{x} \rho_{3} \theta \mathrm{d}y\right) \rho_{2} \psi_{tt}\mathrm{d}x \nonumber \\
=& -\int_{0}^{L} \left(\int_{0}^{x}\kappa q_{x} + \gamma \psi_{tx} \mathrm{d}y\right) \rho_{2}\psi_{t}\mathrm{d}x \nonumber \\
& +\int_{0}^{L} \left(\int_{0}^{x} \rho_{3} \theta \mathrm{d}y\right)
(b\psi_{xx} - k(\varphi_{x} + \psi) - \gamma \theta_{x})\mathrm{d}x \nonumber
\end{align}
\begin{align}
=& -\gamma \rho_{2} \int_{0}^{L} \psi_{t}^{2} \mathrm{d}x - \rho_{2} \kappa \int_{0}^{L} q\psi_{t} \mathrm{d}x
- b\rho_{3}\int_{0}^{L} \theta \psi_{x} \mathrm{d}x \nonumber \\
& + k\rho_{3} \int_{0}^{L} \theta \varphi \mathrm{d}x
- k\rho_{3}\int_{0}^{L} \left(\int_{0}^{x}\theta \mathrm{d}y\right) \psi \mathrm{d} x
+ \gamma \rho_{3} \int_{0}^{L} \theta ^{2} \mathrm{d}x. \nonumber
\end{align}
This can be estimated as follows
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t} I_{4} \leq & -\gamma \rho_{2} \int_{0}^{L} \psi_{t}^{2} \mathrm{d}x
+ \frac{\rho_{2}\kappa}{2} \int_{0}^{L} \left(\varepsilon_{4} \psi_{t}^{2} + \frac{1}{\varepsilon_{4}}q^{2}\right) \mathrm{d}x
+\frac{b\rho_{3}}{2} \int_{0}^{L} \left(\varepsilon_{4}^{\prime} \psi_{x}^{2}
+\frac{1}{\varepsilon_{4}^{\prime}} \theta^{2}\right) \mathrm{d}x \nonumber \\
& + \frac{k\rho_{3}}{2} \int_{0}^{L} \left(\varepsilon_{4}^{\prime} \varphi^{2}
+\frac{1}{\varepsilon_{4}^{\prime}} \theta^{2} \right) \mathrm{d}x
+\frac{k\rho_{3}}{2} \int_{0}^{L} \left(\varepsilon_{4}^{\prime} \psi^{2}
+\frac{1}{\varepsilon_{4}^{\prime}} \left(\int_{0}^{x} \theta \mathrm{d}y\right)^{2}\right) \mathrm{d}x
+ \gamma \rho_{3} \int_{0}^{L} \theta^{2} \mathrm{d}x \nonumber \\
\leq& \left[-\gamma \rho_{2} + \frac{\varepsilon_{4} \rho_{2}\kappa}{2}\right]
\int_{0}^{L} \psi_{t}^{2} \mathrm{d}x + \left(\frac{\varepsilon_{4}^{\prime} \rho_{3}}{2}(b + kc)\right) \int_{0}^{L}\psi_{x}^{2}\mathrm{d}x \nonumber \\
& +\frac{\varepsilon_{4}^{\prime} k\rho_{3}c}{2} \int_{0}^{L}\varphi_{x}^{2}\mathrm{d}x
+ \left(\gamma \rho_{3} + \frac{\rho_{3}}{2\varepsilon_{4}^{\prime}}(b+k+kc)\right) \int_{0}^{L}\theta ^{2}\mathrm{d}x \nonumber\\
& +\frac{\rho_{2}\kappa}{2\varepsilon_{4}} \int_{0}^{L} q^{2} \mathrm{d}x \label{I4_ESTIMATE}
\end{align}
for arbitrary positive $\varepsilon_{4}$ and $\varepsilon_{4}^{\prime}$.
Finally, we set
\[
I_{5}(t) := -\tau_{0} \rho_{3} \int_{0}^{L} q(t, x) \left(\int_{0}^{x} \theta(t, y) \mathrm{d}y \right) \mathrm{d}x
\]
and observe
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}I_{5}(t) &=
-\rho_{3} \int_{0}^{L} \tau_{0} q_{t}\left(\int_{0}^{x} \theta \mathrm{d}y\right) \mathrm{d}x
-\tau_{0} \int_{0}^{L} q\left(\int_{0}^{x} \rho_{3} \theta_{t}\mathrm{d}y\right) \mathrm{d}x \nonumber \\
&= -\rho_{3} \int_{0}^{L}(-\delta q - \kappa \theta_{x})
\left(\int_{0}^{x}\theta \mathrm{d}y\right) \mathrm{d}x \nonumber \\
&\phantom{=} -\tau_{0}\int_{0}^{L} q\left(\int_{0}^{x} - \kappa q_{x} - \gamma \psi_{tx}\mathrm{d}y\right) \mathrm{d}x \nonumber \\
&= \rho_{3} \delta \int_{0}^{L} q \left(\int_{0}^{x} \theta \mathrm{d}y\right)
\mathrm{d}x + \rho_{3} \kappa \int_{0}^{L} \theta_{x} \left(\int_{0}^{x}\theta \mathrm{d}y\right) \mathrm{d}x \nonumber \\
&\phantom{=} +\tau_{0} \kappa \int_{0}^{L} q \left(\int_{0}^{x} q_{x} \mathrm{d}y\right) \mathrm{d}x
+\tau_{0} \gamma \int_{0}^{L} q \left(\int_{0}^{x} \psi_{tx} \mathrm{d}y\right) \mathrm{d}x \nonumber \\
&\leq \frac{\rho_{3}\delta}{2} \int_{0}^{L} \left(\varepsilon_{5}\left(\int_{0}^{x} \theta \mathrm{d}y\right)^{2}
+ \frac{1}{\varepsilon_{5}} q^{2}\right) \mathrm{d}x - \rho_{3}\kappa \int_{0}^{L} \theta^{2} \mathrm{d}x \nonumber \\
&\phantom{=}+ \tau_{0}\kappa \int_{0}^{L} q^{2} \mathrm{d}x
+ \frac{\tau_{0}\gamma}{2} \int_{0}^{L}\left(\varepsilon_{5}^{\prime} \psi_{t}^{2}
+\frac{1}{\varepsilon_{5}^{\prime}} q^{2}\right)\mathrm{d}x \nonumber
\end{align}
\begin{align}
& \leq \left(-\rho_{3} \kappa + \frac{\varepsilon_{5}\rho_{3}\delta c}{2}\right) \int_{0}^{L} \theta^{2}\mathrm{d}x
+ \frac{\varepsilon_{5}^{\prime} \tau_{0} \gamma}{2} \int_{0}^{L} \psi_{t}^{2} \mathrm{d}x \nonumber \\
&\phantom{=}+ \left(\tau_{0}\kappa + \frac{\rho_{3}\delta}{2\varepsilon_{5}}
+ \frac{\tau_{0}\gamma}{2\varepsilon_{5}^{\prime}}\right) \int_{0}^{L}q^{2} \mathrm{d}x \label{I5_ESTIMATE}
\end{align}
for positive $\varepsilon_{5}$ and $\varepsilon_{5}^{\prime}$.
For $N, N_{4}, N_{5} > 0$, we define an auxiliary functional $\mathcal{F}(t)$ by
\[
\mathcal{F}(t) := NE + I_{3} + N_{4} I_{4} + N_{5} I_{5}.
\]
From (\ref{I3_ESTIMATE}), (\ref{I4_ESTIMATE}) and (\ref{I5_ESTIMATE}), we then have
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{F}(t) \leq& -C_{\psi_{x}} \int_{0}^{L} \psi_{x}^{2} \mathrm{d}x
- C_{\varphi_{x}} \int_{0}^{L} \varphi_{x}^{2} \mathrm{d}x - C_{\psi_{t}} \int_{0}^{L} \psi _{t}^{2}\mathrm{d}x \nonumber \\
&- C_{\theta} \int_{0}^{L} \theta^{2} \mathrm{d}x - C_{\varphi_{t}} \int_{0}^{L} \varphi_{t}^{2} \mathrm{d}x
-C_{q}\int_{0}^{L} q^{2} \mathrm{d}x, \label{LYAPUNOV_ESTIMATE_1}
\end{align}
where
\begin{align}
C_{\psi_{x}} &= \left[N_{1} \left(b - \frac{\varepsilon_{1}}{2}
\left(\mu c^{2} + \frac{\delta \gamma c}{\kappa}\right) \right)
- \frac{k}{2\varepsilon_{2}} - N_{4} \frac{\varepsilon_{4}^{\prime}}{2} \rho_{3}(b + kc)\right], \nonumber \\
C_{\varphi_{x}} &= \left[\left(k - \frac{\varepsilon_{2}}{2}c(k + \mu)\right)
-N_{4} \frac{\varepsilon_{4}^{\prime}}{2} k\rho_{3} c\right], \nonumber \\
C_{\psi_{t}} &= \left[N_{4} \left(\gamma \rho_{2} -\frac{\varepsilon_{4} \rho_{2}\kappa}{2}\right)
-N_{1} \left(\rho_{2} + \frac{\varepsilon_{1}}{2} \left(\rho_{1} c + \frac{\gamma \tau_{0}}{\kappa}\right)\right)
-N_{5} \frac{\varepsilon_{5}^{\prime} \tau_{0} \gamma}{2}\right], \nonumber \\
C_{\theta} &= \left[N_{5} \left(\rho_{3} \kappa - \frac{\varepsilon_{5}
\rho_{3} \delta c}{2}\right) - N_{4} \left(\gamma \rho_{3}
+ \frac{\rho_{3}}{2\varepsilon_{4}^{\prime}} (b + k + kc)\right)\right], \nonumber \\
C_{\varphi_{t}} &= \left[N \mu - N_{1} \frac{1}{2 \varepsilon_{1}}
(\mu + \rho_{1}) - \left(\frac{\mu}{2\varepsilon_{2}} + \rho_{1}\right) \right], \nonumber \\
C_{q} &= \left[N - N_{1} \frac{1}{2\varepsilon_{1}}
\left(\frac{\gamma \tau_{0}}{\kappa} + \frac{\delta \gamma}{\kappa}\right) - N_{4} \frac{\rho_{2} \kappa}{2\varepsilon_{4}}
- N_{5} \left(\tau_{0} \kappa + \frac{\rho_{3}\delta}{2\varepsilon_{5}}
+ \frac{\tau_{0}\gamma}{2\varepsilon_{5}^{\prime}}\right)\right] . \nonumber
\end{align}
Choosing $\varepsilon_{1}$, $\varepsilon_{2}$, $\varepsilon_{4}$, $\varepsilon_{5}$ sufficiently small,
then $N_{1}$ and $N_{4}$ sufficiently large, $\varepsilon_{4}^{\prime}$ sufficiently small,
$N_{5}$ sufficiently large, $\varepsilon_{5}^{\prime}$ sufficiently small and, finally, $N$ sufficiently large, we can ensure that
\begin{align*}
\varepsilon_{1} &< \frac{2b\kappa}{\mu \kappa c^{2} + \delta \gamma c}, \quad
\varepsilon_{2} < \frac{2k}{c(k + \mu)}, \quad
\varepsilon_{4} < \frac{2\gamma}{\kappa}, \quad
\varepsilon_{5} < \frac{2\kappa}{\delta c}, \\
N_{1} &> \frac{k}{2\varepsilon_{2} \left(b - \frac{\varepsilon_{1}}{2} \left(\mu c^{2}
+ \frac{\delta \gamma c}{\kappa }\right) \right)}, \\
N_{4} &> \frac{N_{1} \left(\rho_{2} + \frac{\varepsilon_{1}}{2} \left(\rho_{1} c
+ \frac{\gamma \tau_{0}}{\kappa}\right) \right)}{\gamma \rho_{2}- \frac{\varepsilon_{4}\rho_{2}\kappa }{2}}, \\
\varepsilon_{4}^{\prime}& < \min\left\{\frac{2N_{1} \left(b - \frac{\varepsilon_{1}}{2} \left(\mu c^{2}
+\frac{\delta \gamma c}{\kappa }\right)\right)}{N_{4}\rho_{3}(b + kc)},
\frac{2\left(k - \frac{\varepsilon_{2}}{2} c(k + \mu)\right)}{N_{4} k \rho_{3}c}\right\},
\end{align*}
\begin{align*}
N_{5} &> \frac{N_{4} \left(\gamma \rho_{3} + \frac{\rho_{3}}{2\varepsilon_{4}^{\prime }}
\left(b + k + kc\right)\right)}{\rho_{3}\kappa -\frac{\varepsilon_{5} \rho_{3} \delta c}{2}}, \\
\varepsilon_{5}^{\prime} &< \frac{2\left(N_{4} \left(\gamma \rho_{2} - \frac{\varepsilon_{4}\rho_{2}\kappa}{2}\right)
- N_{1}\left( \rho_{2} + \frac{\varepsilon_{1}}{2} \left(\rho_{1} c
+ \frac{\gamma \tau_{0}}{\kappa} \right)\right)\right)}{N_{5} \tau_{0}\gamma}, \\
N &> \max\Bigg\{\frac{N_{1} \frac{1}{2 \varepsilon_{1}}(\mu + \rho_{1}) + \left(\frac{\mu}{2\varepsilon_{2}}
+ \rho_{1}\right)}{\mu}, \\
&\phantom{> \max\{}N_{1} \frac{1}{2\varepsilon_{1}} \left(\frac{\gamma \tau_{0}}{\kappa} + \frac{\delta \gamma}{\kappa}\right)
+ N_{4}\frac{\rho_{2}\kappa}{2\varepsilon_{4}} + N_{5} \left(\tau_{0}\kappa + \frac{\rho_{3}\delta}{2\varepsilon_{5}}
+\frac{\tau_{0}\gamma}{2\varepsilon_{5}^{\prime}}\right) \Bigg \}.
\end{align*}
Having fixed the constants as above, we find that all the terms on the right-hand side of (\ref{LYAPUNOV_ESTIMATE_1}) are negative.
Next, we estimate $\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}(t)$ against $-d_{2}E(t)$ for some $d_{2} > 0$.
Letting $C := \frac{1}{2}\min\{C_{\psi_{x}},C_{\varphi_{x}}\}$, we conclude from (\ref{LYAPUNOV_ESTIMATE_1}) that
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{F}(t) &\leq
\underbrace{-C\int_{0}^{L}\psi_{x}^{2}\mathrm{d}x}_{\leq - \frac{C}{c}\int_{0}^{L}\psi^{2}\mathrm{d}x}
- C\int_{0}^{L}\varphi_{x}^{2}\mathrm{d}x -(C_{\psi_{x}}-C) \int_{0}^{L}\psi_{x}^{2}\mathrm{d}x \nonumber \\
&-C_{\psi_{t}} \int_{0}^{L} \psi_{t}^{2} \mathrm{d}x
- C_{\theta} \int_{0}^{L}\theta ^{2}\mathrm{d}x-C_{\varphi_{t}}\int_{0}^{L}\varphi _{t}^{2}\mathrm{d}x
-C_{q}\int_{0}^{L}q^{2}\mathrm{d}x \nonumber \\
&\leq - \min\left\{C, \frac{C}{c}\right\} \int_{0}^{L} \underbrace{\left(\varphi_{x}^{2}
+ \psi ^{2}\right)}_{\geq \frac{1}{2}(\varphi_{x}+\psi)^{2}}\mathrm{d}x
- (C_{\psi_{x}}-C)\int_{0}^{L}\psi_{x}^{2}\mathrm{d}x \nonumber \\
& -C_{\psi_{t}} \int_{0}^{L} \psi_{t}^{2} \mathrm{d}x - C_{\theta} \int_{0}^{L} \theta^{2} \mathrm{d}x
- C_{\varphi_{t}} \int_{0}^{L}\varphi_{t}^{2} \mathrm{d}x - C_{q} \int_{0}^{L} q^{2} \mathrm{d}x \nonumber \\
& \leq -C_{\varphi_{t}}\int_{0}^{L}\varphi_{t}^{2}\mathrm{d}x-C_{\psi_{t}}
\int_{0}^{L}\psi_{t}^{2}\mathrm{d}x - (C_{\psi_{x}} - C) \int_{0}^{L} \psi_{x}^{2}\mathrm{d}x \nonumber \\
& -\frac{\min \left\{C, \frac{C}{c}\right\}}{2} \int_{0}^{L}(\varphi_{x}
+\psi )^{2}\mathrm{d}x - C_{\theta} \int_{0}^{L} \theta^{2} \mathrm{d} x - C_{q} \int_{0}^{L} q^{2}\mathrm{d}x \nonumber \\
& \leq -d_{1} \int_{0}^{L}(\varphi_{t}^{2} + \psi_{t}^{2} + \psi_{x}^{2} + (\varphi_{x}+\psi )^{2}
+\theta^{2} + q^{2}) \mathrm{d}x \label{LYAPUNOV_ESTIMATE_2}
\end{align}
with
\begin{equation}
d_{1} := \min \left\{ C_{\varphi_{t}},C_{\psi_{t}},(C_{\psi_{x}}-C),
\frac{\min \left\{C, \frac{C}{c}\right\}}{2},C_{\theta}, C_{q}\right\} .
\end{equation}
For $d_{2} := \frac{2d_{1}}{\max\{\rho_{1}, \rho_{2}, b, k, \rho_{3}, \tau_{0}\}}$, we can therefore estimate
\[
\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}(t) \leq -d_{2}E(t).
\]
Finally, we consider the functional $H(t) := I_{3} + N_{4} I_{4} + N_{5} I_{5}$ and show that
\[
|H(t)| \leq CE(t) \quad \text{for some constant } C > 0.
\]
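We first record a sketch of the Poincar\'{e}-type bound used below, assuming, consistently with the boundary conditions, that both $\psi$ and $\varphi$ admit a Poincar\'{e} inequality with the constant $c = \frac{L^{2}}{\pi^{2}}$:
\begin{align*}
\int_{0}^{L} \varphi^{2} \,\mathrm{d}x &\leq c \int_{0}^{L} \varphi_{x}^{2} \,\mathrm{d}x
= c \int_{0}^{L} \left((\varphi_{x} + \psi) - \psi\right)^{2} \mathrm{d}x \\
&\leq 2c \int_{0}^{L} (\varphi_{x} + \psi)^{2} \,\mathrm{d}x + 2c \int_{0}^{L} \psi^{2} \,\mathrm{d}x
\leq 2c \int_{0}^{L} (\varphi_{x} + \psi)^{2} \,\mathrm{d}x + 2c^{2} \int_{0}^{L} \psi_{x}^{2} \,\mathrm{d}x.
\end{align*}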
By using the trivial relation
\[
\int_{0}^{L} \varphi^{2} \mathrm{d}x \leq 2c \int_{0}^{L}(\varphi_{x}
+ \psi)^{2}\mathrm{d}x+2c^{2}\int_{0}^{L} \psi_{x}^{2} \mathrm{d}x
\]
with the Poincar\'{e} constant $c=\frac{L^{2}}{\pi ^{2}}$, we arrive at
\begin{align}
|H(t)| &= \left|N_{1} I_{1} + I_{2} + N_{4} I_{4} + N_{5} I_{5}\right|
\leq N_{1} |I_{1}| + |I_{2}| + N_{4} |I_{4}| + N_{5} |I_{5}| \nonumber \\
&= N_{1} \left|\int_{0}^{L} \left(\rho_{2} \psi_{t} \psi
+ \rho_{1}\varphi_{t}w - \frac{\gamma \tau_{0}}{\kappa}\psi q\right) \mathrm{d}x\right|
+ \rho_{1}\left| \int_{0}^{L} \varphi_{t} \varphi \mathrm{d}x\right| \nonumber \\
&+ N_{4} \rho_{2} \rho_{3} \left|\int_{0}^{L} \left(\int_{0}^{x}\theta (t,y) \mathrm{d}y\right) \psi_{t}(t,x)\mathrm{d}x\right|
+ N_{5}\tau_{0}\rho_{3} \left|\int_{0}^{L} q\left(\int_{0}^{x} \theta \mathrm{d}y\right) \mathrm{d}x\right| \nonumber \\
&\leq N_{1} \Bigg(\frac{\rho_{2}}{2} \int_{0}^{L} \psi_{t}^{2} \mathrm{d}x+
\frac{\rho_{2}c}{2}\int_{0}^{L} \psi_{x}^{2} \mathrm{d}x
+\frac{\rho_{1}}{2} \int_{0}^{L}\varphi_{t}^{2} \mathrm{d}x
+\frac{\rho_{1}c^{2}}{2} \int_{0}^{L}\psi_{x}^{2} \mathrm{d}x \nonumber \\
&+ \frac{\gamma \tau_{0}c}{2\kappa} \int_{0}^{L} \psi_{x}^{2} \mathrm{d}x
+ \frac{\gamma \tau_{0}}{2\kappa} \int_{0}^{L} q^{2} \mathrm{d}x\Bigg)
+\frac{\rho_{1}}{2} \left(\int_{0}^{L}\varphi_{t}^{2} \mathrm{d} x + \int_{0}^{L}\varphi^{2} \mathrm{d}x\right) \nonumber \\
&+ \frac{\rho_{2} \rho_{3} N_{4}}{2} \left(c \int_{0}^{L}\theta^{2} \mathrm{d}x
+ \int_{0}^{L} \psi_{t}^{2} \mathrm{d}x \right) + \frac{\tau_{0} \rho_{3} N_{5}}{2}
\left( \int_{0}^{L}q^{2}\mathrm{d}x+c\int_{0}^{L}\theta ^{2}\mathrm{d} x\right) \nonumber \\
&\leq \hat{C}_{\varphi_{t}} \int_{0}^{L} \varphi_{t}^{2} \mathrm{d}x + \hat{C}_{\psi_{t}} \int_{0}^{L}\psi_{t}^{2} \mathrm{d}x
+ \hat{C}_{\psi_{x}} \int_{0}^{L}\psi_{x}^{2} \mathrm{d}x \nonumber \\
&+ \hat{C}_{\varphi_{x} + \psi} \int_{0}^{L}(\varphi_{x} + \psi)^{2} \mathrm{d}x
+ \hat{C}_{\theta} \int_{0}^{L}\theta^{2} \mathrm{d}x + \hat{C}_{q} \int_{0}^{L} q^{2}\mathrm{d}x, \label{H_ESTIMATE}
\end{align}
where the constants are determined as follows:
\begin{align}
\hat{C}_{\varphi_{t}} &:= \frac{1}{2}\left(N_{1} \rho_{1} + \rho_{1}\right), \quad
\hat{C}_{\psi_{t}} := \frac{1}{2} \left(N_{1} \rho_{2} + \rho_{2} \rho_{3}N_{4}\right) , \nonumber \\
\hat{C}_{\psi_{x}} &:= \frac{1}{2} \left(N_{1} \rho_{2} c + N_{1} \rho_{1} c^{2}
+ \frac{N_{1}\gamma\tau_{0}c}{\kappa }+2\rho_{1}c^{2}\right) , \nonumber \\
\hat{C}_{\varphi_{x} + \psi} &:= \rho_{1} c, \,
\hat{C}_{\theta} := \frac{1}{2} \left(N_{4} \rho_{2} \rho_{3} c +N_{5} \rho_{3} \tau_{0} c\right), \,
\hat{C}_{q} := \frac{1}{2} \left(\frac{N_{1} \gamma \tau_{0}}{\kappa}
+ N_{5} \rho_{3} \tau_{0}\right). \nonumber
\end{align}
According to (\ref{H_ESTIMATE}) we have $|H(t)|\leq \hat{C}E(t)$ for
\[
\hat{C} := \frac{\max\left\{\hat{C}_{\varphi_{t}}, \hat{C}_{\psi_{t}},
\hat{C}_{\psi_{x}}, \hat{C}_{\varphi_{x} + \psi}, \hat{C}_{\theta}, \hat{C}_{q}\right\}}
{\min\left\{\rho_{1}, \rho_{2}, b, k, \rho_{3}, \tau_{0}\right\}}.
\]
Finally, taking $\hat{N} > \max\{N, \hat{C}\}$ and defining the Lyapunov functional
\begin{equation}
\mathcal{L}(t) := \hat{N} E + H(t) = \hat{N} E + I_{3} + N_{4} I_{4} + N_{5} I_{5}, \label{LYAPUNOV_FUNCTIONAL_FINAL}
\end{equation}
we obtain, on the one hand,
\begin{equation}
\beta_{1} E(t)\leq \mathcal{L}(t) \leq \beta_{2}E(t) \label{LYAPUNOV_FUNCTIONAL_EQUIVALENCE}
\end{equation}
for $\beta_{1} := \hat{N} - \hat{C} > 0$ and $\beta_{2} := \hat{N} + \hat{C} > 0$;
on the other hand, we know that
\[
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{L}(t) \leq -d_{2}E(t) \leq - \frac{d_{2}}{\beta_{2}} \mathcal{L}(t).
\]
By Gronwall's lemma, we conclude for $\alpha := \frac{d_{2}}{2\beta_{2}}$ that
\[
\mathcal{L}(t)\leq e^{-2\alpha t}\mathcal{L}(0).
\]
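In detail, the last step uses the integrating factor $e^{2\alpha t}$: since $2\alpha = \frac{d_{2}}{\beta_{2}}$,
\[
\frac{\mathrm{d}}{\mathrm{d}t}\left(e^{2\alpha t} \mathcal{L}(t)\right)
= e^{2\alpha t}\left(\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{L}(t) + 2\alpha \mathcal{L}(t)\right) \leq 0,
\]
and integrating over $[0, t]$ gives the asserted decay.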
Finally, (\ref{LYAPUNOV_FUNCTIONAL_EQUIVALENCE}) yields
\[
E(t)\leq Ce^{-2\alpha t}E(0)
\]
with $C:=\frac{\beta_{2}}{\beta_{1}}$.
\end{Proof}
\section{Linear exponential stability --- $\varphi_x = \psi = q = 0$} \label{SECTION_LINEAR_2}
The second set of boundary conditions we are going to study in this paper is
\begin{equation}
\varphi_x(t, 0) = \varphi_x(t, L) = \psi(t, 0) = \psi(t, L) = q(t, 0) = q(t, L) = 0 \text{ in } (0, \infty).
\label{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_BC_1}
\end{equation}
Here, we consider the initial boundary value problem (\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV}),
(\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_IC}),
(\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_BC_1}).
We will present a semigroup formulation of this problem, show the exponential stability of the
associated semigroup and derive estimates for higher-order energies. This will enable
us to prove global existence and exponential stability in the nonlinear
setting as well.
Let
\begin{align}
L^2_\ast((0, L)) &= \big\{u \in L^2((0, L)) \,\big|\, \int^L_0 u(x) \mathrm{d}x = 0\big\}, \nonumber \\
H^1_\ast((0, L)) &= \big\{u \in H^1((0, L)) \,\big|\, \int^L_0 u(x) \mathrm{d}x = 0\big\}. \nonumber
\end{align}
We introduce a Hilbert space
\begin{equation}
\mathcal{H} := H_{\ast}^{1}((0,L)) \times L_{\ast}^{2}((0,L))\times
H_{0}^{1}((0,L)) \times L^{2}((0,L)) \times L_{\ast}^{2}((0,L)) \times L^{2}((0,L)) \nonumber
\end{equation}
equipped with the inner product
\begin{align}
\langle V,W\rangle_{\mathcal{H}} &= \phantom{+} \rho_{1} \langle V^{1}, W^{1}\rangle_{L^{2}((0,L))}
+ \rho_{2} \langle V^{4}, W^{4} \rangle_{L^{2}((0,L))} \nonumber \\
&\phantom{=} + b\langle V_{x}^{3}, W_{x}^{3} \rangle_{L^{2}((0, L))}
+ k\langle V_{x}^{1} + V^{3}, W_{x}^{1} + W^{3}\rangle_{L^{2}((0, L))} \nonumber \\
&\phantom{=} + \rho_{3} \langle V^{5}, W^{5}\rangle_{L^{2}((0, L))} + \tau_{0}\langle V^{6},W^{6}\rangle_{L^{2}((0,L))}. \nonumber
\end{align}
Let the operator $A$ be formally defined as in section \ref{SECTION_LINEAR_1}
with the domain
\begin{align}
D(A) = \{V \in \mathcal{H} \,|\, &V^1 \in H^2((0, L)), V^1_x \in H^1_0((0, L)), V^2 \in H^1_\ast((0, L)), \nonumber \\
&V^3 \in H^2((0, L)), V^4 \in H^1_0((0, L)), \nonumber \\
&V^5 \in H^1_\ast((0, L)), V^6 \in H^1_0((0, L))\}. \nonumber
\end{align}
Setting $V := (\varphi, \varphi_{t}, \psi, \psi_{t}, \theta ,q)^{t}$, we
observe that $V$ satisfies
\begin{equation}
\left\{
\begin{array}{c}
V_{t}=AV \\
V(0)=V_{0}
\end{array}
\right. ,
\label{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_SEMIGROUP_FORMULATION_1}
\end{equation}
where $V_{0} := (\varphi_{0}, \varphi_{1}, \psi_{0}, \psi_{1}, \theta_{0}, q_{0})^{t}$.
By verifying that $A$ satisfies the conditions of the Hille-Yosida theorem, we readily obtain
\begin{Theorem}
$A$ generates a $C_{0}$-semigroup of contractions $\{e^{At}\}_{t \geq 0}$.
If $V_{0}\in D(A)$, then the unique solution $V\in C^{1}([0, \infty),\mathcal{H}) \cap C^{0}([0, \infty), D(A))$
to (\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_SEMIGROUP_FORMULATION_1}) is given by $V(t) = e^{At}V_{0}$.
If $V_{0}\in D(A^{n})$ for $n \in \mathbb{N}$, then $V \in C^{0}([0, \infty), D(A^{n}))$.
\end{Theorem}
Moreover, we can show that the Lyapunov functional (\ref{LYAPUNOV_FUNCTIONAL_FINAL}) constructed in
section \ref{SECTION_LINEAR_1} is also a Lyapunov functional for (\ref{TIMOSHENKO_CATTANEO_LINEAR_DAMPED_EQUIV_SEMIGROUP_FORMULATION_1}).
Observing that the energy $E(t)$ of the unique solution $(\varphi, \psi, \theta, q)$ satisfies
\begin{equation}
E(t) = \frac{1}{2} \|V(t) \|_{\mathcal{H}}^{2} \nonumber
\end{equation}
for all $t \geq 0$, we obtain the exponential stability of the associated semigroup $\{e^{At}\}_{t\geq 0}$.
\begin{Theorem} \label{THEOREM_EXPONENTIAL_STABILITY_LINEAR_2}
The semigroup $\{e^{At}\}_{t \geq 0}$ associated with $A$ is exponentially stable, i.e.,
\begin{equation}
\exists c_{1} > 0 \quad \forall t \geq 0 \quad \forall V_{0} \in \mathcal{H} :
\quad \|e^{At} V_{0} \|_{\mathcal{H}}
\leq c_{1} e^{-\alpha t} \|V_{0}\|_{\mathcal{H}}. \label{EXPONENTIAL_STABILITY_SEMIGROUP}
\end{equation}
\end{Theorem}
Similar to \cite{MuRa2002}, we observe that if $V_{0} \in D(A)$, then $AV(t)$ can be estimated in the same way as $V(t)$
in (\ref{EXPONENTIAL_STABILITY_SEMIGROUP}); using the structure of $A$, this in turn implies
that $(V_{x}^{1},V_{x}^{2},V_{x}^{3},V_{x}^{4},V_{x}^{5},V_{x}^{6})$ can be estimated in the norm of $\mathcal{H}$.
Hence, one can estimate $((\varphi_{x})_{x}, (\varphi_{t})_{x}, (\psi_{x})_{x}, (\psi_{t})_{x}, \theta_{x}, q_{x})^{t}$ in $L^{2}((0,L))^{6}$.
We define for $s \in \mathbb{N}$ the Hilbert space
\begin{equation}
\mathcal{H}_s := (H^{s} \times H^{s-1} \times H^{s} \times H^{s-1} \times H^{s-1} \times H^{s-1})((0, L)) \nonumber
\end{equation}
with the natural Sobolev norm on each component. Using the considerations above, we can therefore estimate
\begin{equation}
\|V(t)\|_{\mathcal{H}_s} \leq c_s \|V_0\|_{\mathcal{H}_s} e^{-\alpha t}. \label{ESTIMATE_HIGHER_ENERGIES}
\end{equation}
Here, $c_s$ denotes a positive constant independent of $V_0$ and $t$.
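Schematically, for $s = 2$ this follows from the fact that $A$ commutes with its semigroup, together with the equivalence, indicated above, of the graph norm of $A$ and the $\mathcal{H}_{2}$-norm on $D(A)$:
\[
\|AV(t)\|_{\mathcal{H}} = \|e^{At} A V_{0}\|_{\mathcal{H}} \leq c_{1} e^{-\alpha t} \|AV_{0}\|_{\mathcal{H}},
\qquad
\|V(t)\|_{\mathcal{H}_{2}} \leq C\left(\|V(t)\|_{\mathcal{H}} + \|AV(t)\|_{\mathcal{H}}\right)
\leq c_{2} e^{-\alpha t} \|V_{0}\|_{\mathcal{H}_{2}};
\]
the case of general $s$ follows by iterating this argument on $D(A^{n})$.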
\section{Nonlinear exponential stability} \label{SECTION_NONLINEAR_1}
In this section, we study the nonlinear system
\begin{align}
&\rho_1 \varphi_{tt} - \sigma(\varphi_x, \psi)_x + \mu \varphi_t = 0, \quad (t, x) \in (0, \infty) \times (0, L), \nonumber \\
&\rho_2 \psi_{tt} - b \psi_{xx} + k(\varphi_x + \psi) + \gamma \theta_x = 0, \quad (t, x) \in (0, \infty) \times (0, L), \nonumber \\
&\rho_3 \theta_t + \kappa q_x + \gamma \psi_{tx} = 0, \quad (t, x) \in (0, \infty) \times (0, L), \label{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_1} \\
&\tau_0 q_t + \delta q + \kappa \theta_x = 0, \quad (t, x) \in (0, \infty) \times (0, L), \nonumber
\end{align}
completed by the boundary conditions
\begin{align}
\varphi_x(t, 0) &= \varphi_x(t, L) = \psi(t, 0) = \psi(t, L) = q(t, 0) = q(t, L) = 0 \text{ in } (0, \infty),
\label{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_BC_1}
\end{align}
and the initial conditions
\begin{align}
\varphi(0, \cdot) &= \varphi_0, \quad \varphi_t(0, \cdot) = \varphi_1, \quad
\psi(0, \cdot) = \psi_0, \quad \psi_t(0, \cdot) = \psi_1, \nonumber \\
\theta(0, \cdot) &= \theta_0, \quad q(0, \cdot) = q_0.
\label{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_IC_1}
\end{align}
As before, the constants $\rho_1$, $\rho_2$, $\rho_3$, $b$, $k$, $\gamma$, $\delta$, $\kappa$, $\mu$, $\tau_0$ are assumed to be positive.
The nonlinear function $\sigma$ is assumed to be sufficiently smooth and to satisfy
\begin{equation}
\sigma_{\varphi_x}(0, 0) = \sigma_{\psi}(0, 0) = k \label{SIGMA_ASSUMPTION_1_1}
\end{equation}
and
\begin{equation}
\sigma_{\varphi_x \varphi_x}(0, 0) = \sigma_{\varphi_x \psi}(0, 0) = \sigma_{\psi \psi}(0, 0) = 0. \label{SIGMA_ASSUMPTION_2_1}
\end{equation}
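For orientation, one admissible nonlinearity (chosen here purely for illustration) is
\[
\sigma(\varphi_{x}, \psi) = k(\varphi_{x} + \psi) + \varepsilon\left(\sin \varphi_{x} - \varphi_{x}\right),
\qquad 0 < \varepsilon < \frac{k}{2},
\]
for which $\sigma_{\varphi_{x}} = k + \varepsilon(\cos \varphi_{x} - 1) \in [k - 2\varepsilon, k]$ and $\sigma_{\psi} = k$, so that (\ref{SIGMA_ASSUMPTION_1_1}) and (\ref{SIGMA_ASSUMPTION_2_1}) hold, as do the uniform bounds on $\sigma_{\varphi_{x}}$ and $\sigma_{\psi}$ required in the existence theorems below.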
To obtain a local well-posedness result, we first consider a
corresponding non-homogeneous linear system
\begin{align}
&\rho_1 \varphi_{tt} - \hat{\sigma}(t, x) \varphi_{xx} - \check{\sigma}(t, x) \psi_x + \mu \varphi_t = 0 \quad \text{ in } (0, \infty) \times (0, L), \nonumber \\
&\rho_2 \psi_{tt} - b\psi_{xx} + k(\varphi_x + \psi) + \gamma \theta_x = 0 \quad \text{ in } (0, \infty) \times (0, L), \nonumber \\
&\rho_3 \theta_t + \kappa q_x + \gamma \psi_{tx} = 0 \quad \text{ in } (0, \infty) \times (0, L), \label{TIMOSHENKO_CATTANEO_LINEAT_NONHOMEGENEOUS} \\
&\tau_0 q_t + \delta q + \kappa \theta_{x} = 0 \quad \text{ in } (0, \infty) \times (0, L) \nonumber
\end{align}
together with the boundary conditions (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_BC_1})
and initial conditions (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_IC_1}).
The solvability of this system is established in the following theorem.
\begin{Theorem} \label{LOCAL_LINEAR_EXISTENCE_THEOREM}
We assume for some $T>0$ that
\begin{align}
&\hat{\sigma},\check{\sigma}\in C^{1}([0,T]\times [0,L]), \nonumber \\
&\hat{\sigma}_{tt},\hat{\sigma}_{tx},\hat{\sigma}_{xx},\check{\sigma}_{tt},
\check{\sigma}_{tx},\check{\sigma}_{xx}\in L^{\infty}([0,T], L^{2}((0,L))). \nonumber
\end{align}
Let $\hat{\sigma}\geq s > 0$ for some constant $s$. The initial data are assumed to satisfy
\begin{align}
\varphi_{0,x} &\in H^{2}((0, L)) \cap H_{0}^{1}((0, L)), \quad \varphi_{1, x} \in H_{0}^{1}((0, L)), \nonumber \\
\psi_{0} &\in H^{3}((0, L))\cap H_{0}^{1}((0, L)), \quad \psi_{1}\in H^{2}((0,L))\cap H_{0}^{1}((0,L)), \nonumber \\
\theta_{0} &\in H^{2}((0,L)),\quad q_{0}\in H^{2}((0,L))\cap H_{0}^{1}((0,L)). \nonumber
\end{align}
Under the above conditions, the initial boundary value problem (\ref{TIMOSHENKO_CATTANEO_LINEAT_NONHOMEGENEOUS}),
(\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_BC_1}), (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_IC_1}) possesses a unique
classical solution $(\varphi, \psi, \theta, q)$ such that
\begin{align}
\varphi, \psi &\in C^{2}([0, T] \times [0, L]), \quad \theta, q \in C^{1}([0,T] \times [0, L]), \nonumber \\
\partial^{\alpha} \varphi, \partial^{\alpha} \psi &\in L^{\infty}([0,T], L^{2}((0, L))), \quad 1 \leq |\alpha| \leq 3, \nonumber \\
\partial^{\alpha} \theta, \partial^{\alpha}q &\in L^{\infty}([0, T], L^{2}((0, L))), \quad 0 \leq |\alpha| \leq 2 \nonumber
\end{align}
with $\partial^{\alpha} = \partial_{t}^{\alpha_{1}}\partial_{x}^{\alpha_{2}}$
for $\alpha = (\alpha_{1}, \alpha_{2}) \in \mathbb{N}_{0}^{2}$.
\end{Theorem}
\begin{Proof}
The proof is similar to that of Slemrod in \cite{Sl1981}.
Using the Faedo-Galerkin method, we construct a sequence that converges to a
solution of (\ref{TIMOSHENKO_CATTANEO_LINEAT_NONHOMEGENEOUS}),
(\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_BC_1}), (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_IC_1}).
A special a priori estimate then yields the corresponding regularity of the solution.
Letting $\lambda_{i} := i\pi/L$, $c_{i}(x) := \sqrt{2/L} \cos\lambda_{i}x$,
$s_{i}(x) := \sqrt{2/L} \sin \lambda_{i}x$, $i \in \mathbb{N}_{0}$,
we define $(\varphi_{m}(t), \psi_{m}(t), \theta_{m}(t), q_{m}(t))$ by
\begin{align}
\varphi_{m}(t) &:= \sum_{i = 0}^{m} \Phi_{im}(t) c_{i}(x), \quad
\psi_{m}(t) := \sum_{i = 0}^{m} \Psi_{im}(t) s_{i}(x), \nonumber \\
\theta_{m}(t) &:= \sum_{i = 0}^{m} \Theta_{im}(t) c_{i}(x),\quad
q_{m}(t) := \sum_{i = 0}^{m} Q_{im}(t) s_{i}(x), \nonumber
\end{align}
where
\begin{align}
\Phi_{im}(0) &= \int_{0}^{L} \varphi_{0}(x) c_{i}(x) \mathrm{d}x, \quad
\dot{\Phi}_{im}(0) = \int_{0}^{L}\varphi_{1}(x) c_{i}(x) \mathrm{d}x, \nonumber \\
\Psi_{im}(0) &= \int_{0}^{L} \psi_{0}(x) s_{i}(x) \mathrm{d}x,
\quad \dot{\Psi}_{im}(0) = \int_{0}^{L} \psi_{1}(x) s_{i}(x) \mathrm{d}x, \nonumber \\
\Theta_{im}(0) &= \int_{0}^{L} \theta_{0}(x) c_{i}(x) \mathrm{d}x, \quad
Q_{im}(0) = \int_{0}^{L} q_{0}(x) s_{i}(x) \mathrm{d}x. \nonumber
\end{align}
Multiplying the equations in (\ref{TIMOSHENKO_CATTANEO_LINEAT_NONHOMEGENEOUS})
in $L^{2}((0,L))$ by $c_{i}$, $s_{i}$, $c_{i}$ and $s_{i}$, respectively,
we observe that the functions $\Phi_{im}$, $\Psi_{im}$, $\Theta_{im}$, $Q_{im}$
satisfy a system of ordinary differential equations
\begin{align}
\rho_{1} \ddot{\Phi}_{jm}(t) =& -\sum_{i=0}^{m} \Phi_{im}(t) \lambda_{i}^{2}\langle \hat{\sigma}(t,x) c_{i}(x), c_{j}(x)\rangle \nonumber \\
\phantom{=}& + \sum_{i=0}^{m} \lambda_{i} \Psi_{im} \langle \check{\sigma}(t, x)c_{i} (x), c_{j}(x)\rangle
-\mu \dot{\Phi}_{jm}(t), \nonumber \\
\rho_{2} \ddot{\Psi}_{jm}(t) =& -b\Psi_{jm}(t) \lambda_{j}^{2} + k(\Phi_{jm}(t)\lambda_{j}
- \Psi_{jm}(t)) + \gamma \Theta_{jm}(t) \lambda_{j}, \label{GALERKIN_EQUATIONS} \\
\rho_{3} \dot{\Theta}_{jm}(t) =& -\kappa Q_{jm}(t) \lambda_{j} - \gamma \dot{\Psi}_{jm}(t)\lambda_{j}, \nonumber \\
\tau_{0} \dot{Q}_{jm}(t) =& -\delta Q_{jm}(t) + \kappa \Theta_{jm}(t)\lambda_{j} \nonumber
\end{align}
for $0\leq j\leq m$, where
\[
\langle f,g\rangle =\langle f,g\rangle_{L^{2}((0,L))}=\int_{0}^{L}f(x)g(x)\mathrm{d}x.
\]
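The decoupled form of the last three equations in (\ref{GALERKIN_EQUATIONS}) rests on the orthonormality and differentiation rules of the trigonometric bases: for $i, j \geq 1$,
\[
\langle c_{i}, c_{j} \rangle = \langle s_{i}, s_{j} \rangle = \delta_{ij}, \qquad
\partial_{x} c_{i} = -\lambda_{i} s_{i}, \qquad \partial_{x} s_{i} = \lambda_{i} c_{i},
\]
whereas the variable coefficients $\hat{\sigma}$, $\check{\sigma}$ prevent a similar decoupling in the first equation.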
Being a linear system of ordinary differential equations with continuous coefficients, this system possesses a unique solution
\begin{equation}
(\Phi_{jm}, \Psi_{jm}, \Theta_{jm}, Q_{jm}) \nonumber
\end{equation}
with $\Phi_{jm}, \Psi_{jm} \in C^{2}([0, T])$ and $\Theta_{jm}, Q_{jm} \in C^{1}([0, T])$.
We define a total energy $\mathcal{E}$ by
\begin{align}
\mathcal{E}(t) =& \phantom{+} E(t; \varphi, \psi, \theta, q)
+ E(t; \varphi_{t}, \psi_{t}, \theta_{t}, q_{t}) + E(t; \varphi_{tt}, \psi_{tt}, \theta_{tt}, q_{tt}) \nonumber \\
&+ E(t; \varphi_{x}, \psi_{x}, \theta_{x}, q_{x}) + E(t; \varphi_{tx}, \psi_{tx}, \theta_{tx}, q_{tx}), \nonumber
\end{align}
where
\begin{equation}
E(t; \varphi, \psi, \theta, q) = \frac{1}{2} \int_{0}^{L}(\rho_{1} \varphi_{t}^{2}
+ \hat{\sigma} \varphi_{x}^{2} + \rho_{2}\psi_{t}^{2} + b\psi_{x}^{2} + \rho_{3}\theta^{2} + \tau_{0}q^{2})(t, x)\mathrm{d}x.
\nonumber
\end{equation}
By multiplying in $L^{2}((0, L))$ the equations in (\ref{GALERKIN_EQUATIONS})
by $\dot{\Phi}_{jm}$, $\dot{\Psi}_{jm}$, $\Theta_{jm}$, $Q_{jm}$, then
differentiating them once and twice with respect to $t$, multiplying them
with $\ddot{\Phi}_{jm}$, $\ddot{\Psi}_{jm}$, $\dot{\Theta}_{jm}$, $\dot{Q}_{jm}$
and $\dddot{\Phi}_{jm}$, $\dddot{\Psi}_{jm}$, $\ddot{\Theta}_{jm}$, $\ddot{Q}_{jm}$,
respectively, and summing up over $j = 0, \dots, m$, we obtain an energy equality of the form
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{E}(t; \varphi_{m},\psi_{m}, \theta_{m}, q_{m})
= F_{m}(\partial^{\alpha_{1}}\varphi_{m}, \partial^{\alpha_{2}} \psi_{m}, \partial^{\beta_{1}} \theta_{m},
\partial^{\beta_{2}}q_{m}) \nonumber
\end{equation}
for $0\leq |\alpha_{1,2}|\leq 3$, $0\leq |\beta_{1,2}|\leq 2$.
Following the approach of Slemrod and obtaining higher-order $x$-derivatives
from the differential equations, we can integrate the above equality with
respect to $t$ and estimate
\begin{equation}
\int_{0}^{t} F_{m}(\tau) \mathrm{d} \tau
\leq C \int_{0}^{t} \mathcal{E}(\tau; \varphi_{m}, \psi_{m}, \theta_{m}, q_{m}) \mathrm{d}\tau . \nonumber
\end{equation}
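Integrating the energy equality with respect to $t$ and inserting this bound gives, schematically,
\[
\mathcal{E}(t) \leq \mathcal{E}(0) + \int_{0}^{t} F_{m}(\tau) \,\mathrm{d}\tau
\leq \mathcal{E}(0) + C \int_{0}^{t} \mathcal{E}(\tau) \,\mathrm{d}\tau.
\]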
Gronwall's inequality then yields $\mathcal{E}(t) \leq C\mathcal{E}(0) e^{Ct} \leq C$ for a generic constant $C > 0$.
It follows that the sequence $\{(\varphi_{m}, \psi_{m}, \theta_{m}, q_{m})\}_m$
has a convergent subsequence. By virtue of the usual Sobolev embedding
theorems, we obtain the necessary regularity of the solution.
The solution is unique since our a priori estimate also holds for $(\varphi, \psi, \theta, q)$,
ensuring the continuous dependence of the solution on the initial data.
By the usual continuation arguments, the solution can be smoothly continued to a maximal open interval $[0, T)$.
\end{Proof}
Having proved the local linear existence theorem, we can obtain a local existence also in the nonlinear situation.
\begin{Theorem} \label{LOCAL_NONLINEAR_EXISTENCE_THEOREM}
Consider the initial boundary value problem
(\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_1})--(\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_IC_1}).
Let $\sigma = \sigma(r, s)\in C^{3}(\mathbb{R} \times \mathbb{R})$ satisfy
\begin{align}
&0 < r_{0}\leq \sigma_{r} \leq r_{1} < \infty \quad (r_{0}, r_{1} > 0), \label{SIGMA_NL_COND_1} \\
&0 \leq |\sigma_{s}| \leq s_{0} < \infty \quad (s_{0} > 0). \label{SIGMA_NL_COND_2}
\end{align}
Let the initial data comply with
\begin{align}
\varphi_{0, x} &\in H^{2}((0, L)) \cap H_{0}^{1}((0, L)), \quad \varphi_{1, x}\in H_{0}^{1}((0, L)), \nonumber \\
\psi_{0} &\in H^{3}((0, L)) \cap H_{0}^{1}((0, L)), \quad
\psi_{1} \in H^{2}((0, L)) \cap H_{0}^{1}((0, L)), \nonumber \\
\theta_{0} &\in H^{2}((0, L)), \quad q_{0} \in H^{2}((0, L)) \cap H_{0}^{1}((0, L)). \nonumber
\end{align}
The problem (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_1})--(\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_IC_1})
has then a unique classical solution $(\varphi, \psi, \theta, q)$ with
\begin{align}
\varphi,\psi & \in C^{2}([0, T)\times [0, L]), \nonumber \\
\theta, q &\in C^{1}([0, T)\times [0, L]), \nonumber
\end{align}
defined on a maximal existence interval $[0, T)$, $T \leq \infty$, such that for all $t_{0} \in [0, T)$
\begin{align}
\partial^{\alpha} \varphi, \partial^{\alpha} \psi &\in L^{\infty}([0, t_{0}],L^{2}((0, L))),
\quad 1 \leq |\alpha| \leq 3, \nonumber \\
\partial^{\alpha} \theta, \partial^{\alpha} q &\in L^{\infty}([0, t_{0}], L^{2}((0, L))),
\quad 0 \leq |\alpha| \leq 2 \nonumber
\end{align}
holds.
\end{Theorem}
\begin{Proof}
The proof of the local existence is by now standard. For positive $M$, $T$, we
define the space $X(M, T)$ to be the set of all functions $(\varphi, \psi, \theta, q)$ that satisfy
\begin{align}
\varphi(0, \cdot) &= \varphi_{0}, \quad \psi(0, \cdot) = \psi_{0}, \quad
\theta(0, \cdot) = \theta_{0}, \quad q(0, \cdot) = q_{0}, \nonumber \\
\varphi_{t}(0, \cdot ) &= \varphi_{1}, \quad \psi_{t}(0, \cdot) = \psi_{1} \quad \text{in }(0, L), \label{ABNLA_X} \\
\varphi_{x}(t, 0) &= \varphi_{x}(t, L) = \psi(t, 0) = \psi(t, L) = q(t, 0) = q(t, L) = 0
\quad \text{ in }(0, \infty ) \label{RBNL_X}
\end{align}
and their generalized derivatives fulfil
\begin{align}
\partial^{\alpha} \varphi, \partial^{\alpha} \psi &\in L^{\infty}([0, T], L^{2}((0, L))),
\quad 1 \leq |\alpha | \leq 3, \nonumber \\
\partial^{\alpha} \theta, \partial^{\alpha} q &\in L^{\infty}([0, T], L^{2}((0, L))),
\quad 0 \leq |\alpha| \leq 2 \nonumber
\end{align}
and
\[
\sup_{0 \leq t \leq T} \int_{0}^{L} \left(\sum_{|\alpha|=1}^{3}
\left[(\partial^{\alpha} \varphi)^{2} + (\partial^{\alpha} \psi)^{2}\right]
+ \sum_{|\alpha| = 0}^{2}\left[(\partial^{\alpha}\theta)^{2}
+(\partial^{\alpha} q)^{2}\right]\right) \mathrm{d}x\leq M^{2}.
\]
Let $(\bar{\varphi},\bar{\psi},\bar{\theta},\bar{q})\in X(M,T)$. Consider
the linear initial boundary value problem
\begin{align}
&\rho_{1} \varphi_{tt} - \sigma_{r}(\bar{\varphi}_{x}, \bar{\psi})\varphi_{xx}
- \sigma_{s}(\bar{\varphi}_{x}, \bar{\psi})\psi_{x} + \mu \varphi_{t} = 0,\nonumber \\
&\rho_{2} \psi_{tt} - b\psi_{xx} + k(\varphi_{x} + \psi) + \gamma \theta_{x} = 0, \nonumber \\
&\rho_{3} \theta_{t} + \kappa q_{x} + \gamma \psi_{tx} = 0, \label{TIMOSHENKO_S_MAPPING_SYSTEM} \\
&\tau_{0} q_{t} + \delta q + \kappa \theta_{x} = 0 \nonumber
\end{align}
together with initial conditions (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_IC_1})
and boundary conditions (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_BC_1}).
We set
\begin{align}
\hat{\sigma}(t,x) &= \sigma_{r}(\bar{\varphi}_{x}, \bar{\psi}), \nonumber \\
\check{\sigma}(t,x) &= \sigma_{s}(\bar{\varphi}_{x}, \bar{\psi}), \nonumber
\end{align}
and observe that $\hat{\sigma}$, $\check{\sigma}$ and the initial data
satisfy the assumptions of the local existence and uniqueness Theorem \ref{LOCAL_LINEAR_EXISTENCE_THEOREM}.
Therefore, this linear problem possesses a unique solution.
We define an operator $S$ mapping $(\bar{\varphi}, \bar{\psi}, \bar{\theta}, \bar{q})\in X(M,T)$
to the solution of (\ref{TIMOSHENKO_S_MAPPING_SYSTEM}),
i.e. $S(\bar{\varphi},\bar{\psi},\bar{\theta}, \bar{q}) = (\varphi, \psi, \theta, q)$.
With standard techniques, we can show that $S$ maps the space $X(M, T)$ into
itself if $M$ is sufficiently large and $T$ sufficiently small.
Following the approach of Slemrod, we show that $S$ is a contraction for sufficiently small $T$.
As $X(M,T)$ is a closed subset of the metric space
\begin{equation}
Y = \{(\varphi ,\psi ,\theta ,q) \,|\,
\varphi_{t}, \varphi_{x}, \psi_{t}, \psi_{x}, \theta, q \in L^{\infty }([0, T],L^{2}((0, L)))\} \nonumber
\end{equation}
equipped with a distance function
\begin{align}
\rho\left((\varphi, \psi, \theta, q), (\bar{\varphi}, \bar{\psi}, \bar{\theta}, \bar{q})\right)
:= \sup_{0 \leq t \leq T} &\int_{0}^{L} \Big[(\varphi_{t} - \bar{\varphi}_{t})^{2}
+(\varphi_{x} - \bar{\varphi}_{x})^{2} + (\psi_{t} - \bar{\psi}_{t})^{2} \nonumber \\
+& (\psi_{x} - \bar{\psi}_{x})^{2} + (\theta - \bar{\theta})^{2} + (q - \bar{q})^{2}\Big] \mathrm{d}x, \nonumber
\end{align}
the Banach fixed-point theorem is applicable to $S$ and yields a unique solution in $X(M, T)$ with the asserted regularity.
\end{Proof}
To be able to handle the nonlinear problem globally, we need a local
existence theorem with higher regularity. It can be proved in the same
way as Theorem \ref{LOCAL_NONLINEAR_EXISTENCE_THEOREM}.
\begin{Theorem}
\label{LOCAL_NONLINEAR_EXISTENCE_THEOREM_HIGHER_REGULARITY} Consider the initial boundary value problem
(\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_1})--(\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_IC_1}).
Let $\sigma = \sigma(r, s) \in C^{4}(\mathbb{R} \times \mathbb{R})$ satisfy
\begin{align}
&0 < r_{0} \leq \sigma_{r} \leq r_{1} < \infty \quad (r_{0}, r_{1} > 0), \nonumber \\
&0 \leq |\sigma_{s} |\leq s_{0} < \infty \quad (s_{0} > 0). \nonumber
\end{align}
Let the assumptions of Theorem \ref{LOCAL_NONLINEAR_EXISTENCE_THEOREM} be satisfied. Moreover, let us assume
\[
\varphi_{0,xxxx}, \psi_{0,xxxx}, \varphi_{1,xxx}, \psi_{1, xxx}, \theta_{0, xxx}, q_{0, xxx} \in L^{2}((0, L))
\]
and
\begin{align}
\partial_{t}^{2} \varphi(0, \cdot), \partial_{t}^{2} \psi(0, \cdot ) &\in H^{2}((0, L)), \quad
\partial_{t}^{2} \varphi_{x}(0, \cdot), \partial_{t}^{2} \psi(0, \cdot) \in H_{0}^{1}((0, L)) \nonumber \\
\partial_{t} \theta(0, \cdot), \partial_{t} q(0, \cdot) &\in H^{2}((0, L)), \quad
\partial_{t} q(0, \cdot) \in H_{0}^{1}((0, L)). \nonumber
\end{align}
Then, (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_1})--(\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_IC_1})
possesses a unique classical solution $(\varphi ,\psi ,\theta ,q)$ satisfying
\begin{align}
\varphi,\psi &\in C^{3}([0, T) \times [0, L]), \nonumber \\
\theta, q &\in C^{2}([0, T) \times [0, L]), \nonumber
\end{align}
being defined in a maximal existence interval $[0, T)$, $T \leq \infty$ such that for all $t_{0} \in [0,T)$
\begin{align}
\partial^{\alpha} \varphi, \partial^{\alpha} \psi &\in L^{\infty}([0, t_{0}],L^{2}((0, L))),
\quad 1\leq |\alpha| \leq 4, \nonumber \\
\partial^{\alpha} \theta, \partial^{\alpha} q &\in L^{\infty}([0, t_{0}], L^{2}((0,L))),
\quad 0 \leq |\alpha| \leq 3 \nonumber
\end{align}
holds. Moreover, this interval coincides with the one from Theorem \ref{LOCAL_NONLINEAR_EXISTENCE_THEOREM}.
\end{Theorem}
\begin{Remark}
We conjecture that, in analogy with thermoelastic equations, one can prove a
more general existence theorem yielding higher regularity of the solution
under the same regularity assumptions on the initial data as in Theorem \ref{LOCAL_NONLINEAR_EXISTENCE_THEOREM} (cf. \cite{JiRa2000}).
This technique goes back to Kato and is based on the general notion of a CD-system from semigroup theory.
\end{Remark}
For the proof of global solvability and exponential stability, we rewrite
the problem (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_1})--(\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_IC_1})
as a nonlinear evolution problem.
Letting $V = (\varphi, \varphi_{t}, \psi, \psi_{t}, \theta, q)^{t}$ and defining a linear differential operator
$A : D(A) \subset \mathcal {H} \to \mathcal{H}$ in the same manner as in section \ref{SECTION_LINEAR_2}, we obtain
\begin{equation}
\left\{
\begin{array}{c}
V_{t} = AV + F(V, V_{x}) \\
V(0) = V_{0}
\end{array}
\right.
\label{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_SEMIGROUP_FORMULATION}
\end{equation}
with a nonlinear mapping $F$ being defined by
\begin{align}
F(V,V_{x}) &= (0, \sigma_{\varphi_{x}}(\varphi_{x}, \psi) \varphi_{xx}
- k \varphi_{xx} + \sigma_{\psi}(\varphi_{x}, \psi) \psi_{x} - k \psi_{x}, 0, 0, 0, 0)^{t} \nonumber \\
&= (0, \sigma_{\varphi_{x}}(V_{x}^{1}, V^{3}) V_{xx}^{1} - kV_{xx}^{1}
+ \sigma_{\psi}(V_{x}^{1}, V^{3}) V_{x}^{3} - kV_{x}^{3}, 0, 0, 0, 0)^{t}. \nonumber
\end{align}
Taking into account that $F(V, V_x)(t, \cdot) \in D(A)$ for $V \in \mathcal{H}_3$,
it follows from Duhamel's principle that
\begin{align}
V(t) = e^{tA} V_0 + \int^t_0 e^{(t-\tau)A} F(V, V_x)(\tau) \mathrm{d}\tau. \label{DUHAMMEL}
\end{align}
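As a sanity check, Duhamel's formula can be verified numerically on a scalar model problem. The sketch below (with illustrative constants and forcing, not taken from the system above) compares a direct time-stepping solution of $v' = av + f$ with the Duhamel representation.

```python
import numpy as np

# Numerical sanity check of Duhamel's formula on the scalar model problem
# v'(t) = a*v(t) + f(t), v(0) = v0, whose mild solution is
# v(t) = e^{a t} v0 + int_0^t e^{a (t - tau)} f(tau) dtau.
# The constants a, v0, T and the forcing f are illustrative only.
a, v0, T = -1.0, 1.0, 2.0
f = lambda t: np.sin(t)

# Reference solution by fine explicit Euler time stepping.
n = 100000
dt = T / n
v = v0
for k in range(n):
    v = v + dt * (a * v + f(k * dt))

# Duhamel representation evaluated with the trapezoidal rule.
tau = np.linspace(0.0, T, 2001)
g = np.exp(a * (T - tau)) * f(tau)
duhamel = np.exp(a * T) * v0 + np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(tau))

print(abs(v - duhamel))  # small discretization error
```

Both evaluations agree up to the discretization errors of the two schemes.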
The existence of a global solution as well as its exponential decay can be
proved as in \cite{MuRa2002}, using a technique similar to that for nonlinear Cauchy problems in \cite{Ra1992}.
We assume that the initial data are small in the $\mathcal{H}_2$-norm, i.e.
\begin{align}
\|V_0\|_{\mathcal{H}_2} < \delta. \nonumber
\end{align}
Moreover, let us assume the boundedness of $V_0$ in the $\mathcal{H}_3$-norm, i.e. let
\begin{align}
\|V_0\|_{\mathcal{H}_3} < \nu \nonumber
\end{align}
hold for a $\nu > 1$.
Due to the smoothness of the solution, there exist two intervals $[0, T^0]$
and $[0, T^1]$ such that
\begin{align}
\|V(t)\|_{\mathcal{H}_2} &\leq \delta, \quad \forall t \in [0, T^0], \nonumber \\
\|V(t)\|_{\mathcal{H}_3} &\leq \nu, \quad \forall t \in [0, T^1]. \nonumber
\end{align}
Let $d > 1$ be a constant to be fixed later on. We define two positive numbers $T^1_M$ and $T^0_M$
as the largest interval lengths such that the local solution satisfies
\begin{align}
\|V(t)\|_{\mathcal{H}_2} \leq 2 c_1 \delta, \quad \forall t \in [0, T^0_M] \nonumber
\end{align}
and
\begin{align}
\|V(t)\|_{\mathcal{H}_3} \leq d\nu, \quad \forall t \in [0, T^1_M], \nonumber
\end{align}
respectively. Here, $c_1 > 0$ is the constant from (\ref{ESTIMATE_HIGHER_ENERGIES}), for which
\begin{align}
\left\|e^{tA} V_0\right\|_{\mathcal{H}_2} \leq c_1 \|V_0\|_{\mathcal{H}_2} \nonumber
\end{align}
holds.
Under these conditions, we obtain the following estimate for the higher-order energy.
\begin{Lemma} \label{LEMMA_ENERGY_ESTIMATE}
There exist positive constants $c_{2},c_{3}$ independent of $V_{0}$ and $T$ such that the local solution
from Theorem \ref{LOCAL_NONLINEAR_EXISTENCE_THEOREM_HIGHER_REGULARITY} satisfies for $t\in[0, T_{M}^{1}]$ the inequality
\[
\|V(t)\|_{\mathcal{H}_{3}}^{2}
\leq c_{2}\|V_{0}\|_{\mathcal{H}_{3}}^{2}e^{c_{3}\sqrt{d\nu}\int_{0}^{t}\|V(\tau)\|_{\mathcal{H}_{2}}^{1/2}\mathrm{d}\tau}.
\]
\end{Lemma}
\begin{Proof}
As our nonlinearity coincides with that considered for nonlinear Timoshenko
systems with classical heat conduction and the estimates for the linear
terms produced by our two new dissipations can be done in the same manner,
we can repeat the proof from \cite{MuRa2002} literally.
\end{Proof}
Since
\begin{align}
F(V, V_x)(\tau) \in D(A) \subset \mathcal{H}_2, \quad \tau \geq 0, \nonumber
\end{align}
we can use equality (\ref{DUHAMMEL}) to estimate $\|V(t)\|_{\mathcal{H}_2}$ as
\begin{align}
\|V(t)\|_{\mathcal{H}_2} &\leq \left\|e^{tA} V_0\right\|_{\mathcal{H}_2}
+ \int^t_0 \left\|e^{(t-\tau)A} F(V, V_x)(\tau)\right\|_{\mathcal{H}_2} \mathrm{d}\tau \nonumber \\
&\leq c_1 e^{-\alpha t} \|V_0\|_{\mathcal{H}_2} + c_1 \int^t_0 e^{-\alpha(t - \tau)} \left\|F(V, V_x)\right\|_{\mathcal{H}_2} \mathrm{d}\tau
\label{LOCAL_SOLUTION_ESTIMATE}
\end{align}
by estimating the nonlinearity $F$ as in the lemma below.
\begin{Lemma} \label{LEMMA_NONLINEARITY_ESTIMATE}
There exists a positive constant $c$
such that the inequality
\[
\|F(W,W_{x})\|_{\mathcal{H}_{2}}\leq c\|W\|_{\mathcal{H}_{2}}^{2}\|W\|_{\mathcal{H}_{3}}
\]
holds for all $W \in \mathcal{H}_{3}$ with $\|W\|_{\mathcal{H}_{2}} < C < \infty$.
\end{Lemma}
Further, we can show the following weighted a priori estimate.
\begin{Lemma} \label{LEMMA_A_PRIORI_ESTIMATE}
Let
\[
M_{2}(t) := \sup_{0 \leq \tau \leq t}\left(e^{\alpha \tau}\|V(\tau)\|_{\mathcal{H}_{2}}\right)
\]
be defined for $t \in [0, T_{M}^{1}]$.
There exist then $M_{0} > 0$ and $\delta > 0$ such that
\[
M_{2}(t) \leq M_{0} < \infty
\]
holds if $\|V_{0}\|_{\mathcal{H}_{3}} < \nu$ and $\|V_{0}\|_{\mathcal{H}_{2}} < \delta$.
Moreover, $M_{0}$ does not depend on $T_{M}^{1}$ and $V_{0}$.
\end{Lemma}
\begin{Proof}
We assume that $\|V\|_{\mathcal{H}_{2}}$ is bounded. Using Lemma \ref{LEMMA_NONLINEARITY_ESTIMATE}
and the estimate (\ref{LOCAL_SOLUTION_ESTIMATE}), we have
\[
\|V(t)\|_{\mathcal{H}_{2}} \leq c_{1} e^{-\alpha t}\|V_{0}\|_{\mathcal{H}_{2}}
+ c\int_{0}^{t} e^{-\alpha(t - \tau)}\|V(\tau)\|_{\mathcal{H}_{2}}^{2}\|V(\tau)\|_{\mathcal{H}_{3}} \mathrm{d}\tau.
\]
With the aid of Lemma \ref{LEMMA_ENERGY_ESTIMATE}, there results
\begin{align}
\|V(t)\|_{\mathcal{H}_{2}} \leq& c_{1}\|V_{0}\|_{\mathcal{H}_{2}}e^{-\alpha t} \nonumber \\
&+ c\int_{0}^{t} e^{-\alpha(t - \tau )}\|V(\tau )\|_{\mathcal{H}_{2}}^{2}\|V_{0}\|_{\mathcal{H}_{3}}
e^{c\sqrt{d\nu }\int_{0}^{\tau} \|V(r)\|_{\mathcal{H}_{2}}^{1/2} \mathrm{d}r} \mathrm{d}\tau. \label{V_H2_ESTIMATE}
\end{align}
Under the assumption $\|V_{0}\|_{\mathcal{H}_{2}}\leq \delta$ for some $\delta >0$ to be determined later on,
we obtain for $t \in [0, \min\{T_{M}^{0}, T_{M}^{1}\}]$
\begin{align}
\|V(t)\|_{\mathcal{H}_{2}} &\leq c_{1} \delta e^{-\alpha t}
+ c\delta^{1/2} \nu e^{c\sqrt{d \nu} \int_{0}^{t}\|V(\tau )\|_{\mathcal{H}_{2}}^{1/2} \mathrm{d}\tau}
\int_{0}^{t}e^{-\alpha(t - \tau)}\|V(\tau)\|_{\mathcal{H}_{2}}^{3/2} \mathrm{d}\tau \nonumber \\
&\leq c_{1} \delta e^{-\alpha t} + c\delta^{1/2} \nu e^{c\sqrt{d\nu}
\int_{0}^{t} e^{-\alpha \tau /2} \left(e^{\alpha \tau} \|V(\tau)\|_{\mathcal{H}_{2}}\right)^{1/2} \mathrm{d} \tau} \times \nonumber \\
&{\phantom{=}} \times \int_{0}^{t}e^{-\alpha(t - \tau)}\left(e^{-\alpha \tau}
e^{\alpha \tau }\|V(\tau )\|_{\mathcal{H}_{2}}\right)^{3/2} \mathrm{d}\tau \nonumber \\
&\leq c_{1} \delta e^{-\alpha t} + c\delta ^{1/2} \nu
e^{c\sqrt{d\nu}\sqrt{M_{2}(t)}\int_{0}^{t} e^{-\alpha\tau/2}\mathrm{d}\tau}M_{2}(t)^{3/2}\int_{0}^{t} e^{-\alpha(t - \tau)}
e^{-3\alpha\tau/2} \mathrm{d}\tau \nonumber \\
&\leq c_{1} \delta e^{-\alpha t} + c \delta^{1/2} \nu e^{c\sqrt{d\nu} \sqrt{M_{2}(t)}}M_{2}(t)^{3/2}\int_{0}^{t}
e^{-\alpha(t - \tau)}e^{-3\alpha \tau/2}\mathrm{d}\tau, \nonumber
\end{align}
whence one can easily deduce
\[
M_{2}(t)\leq c_{1}\delta + c\delta ^{1/2} \nu e^{c\sqrt{d\nu }\sqrt{M_{2}(t)}}M_{2}(t)^{3/2} \sup_{0\leq t < \infty}
e^{\alpha t} \int_{0}^{t} e^{-\alpha(t - \tau)}e^{-3 \alpha \tau/2} \mathrm{d}\tau
\]
after multiplication with $e^{\alpha t}$.
From
\[
\sup_{0\leq t < \infty} e^{\alpha t} \int_{0}^{t} e^{-\alpha (t-\tau)} e^{-3\alpha \tau/2}
\mathrm{d} \tau = \sup_{0 \leq t < \infty} \int_{0}^{t} e^{-\alpha \tau/2} \mathrm{d}\tau
= \sup_{0 \leq t < \infty} \frac{2}{\alpha}\left(1 - e^{-\alpha t/2}\right)
\leq \frac{2}{\alpha} < \infty
\]
it follows that
\[
M_{2}(t) \leq c_{1} \delta + c \delta^{1/2} \nu M_{2}(t)^{3/2} e^{c\sqrt{d\nu}\sqrt{M_{2}(t)}}.
\]
We define a function
\[
f(x) := c_{1} \delta + c \delta^{1/2} \nu x^{3/2} e^{c\sqrt{d\nu} \sqrt{x}} - x.
\]
We compute $f(0) = c_{1} \delta$ and $f^{\prime}(0) = -1$. According to the fundamental theorem of calculus, we know
\[
f(x) = f(0) + \int_{0}^{x} f^{\prime}(\xi) \mathrm{d}\xi = c_{1} \delta + \int_{0}^{x} f^{\prime}(\xi)\mathrm{d}\xi.
\]
For sufficiently small $x$, we have $f^{\prime}(\xi) \leq - \frac{1}{2}$ for all $\xi \in [0, x]$.
This means that
\[
f(x) \leq c_{1} \delta - \frac{1}{2}x.
\]
If we now choose $\delta < \delta_{1} := \frac{x}{2c_{1}}$, we obtain $f(x) < 0$.
Since $f$ is continuous with $f(0) > 0$ and $f(x) < 0$, $f$ must possess a zero in the interval $[0, x]$.
Let $M_{0}$ be the smallest zero of $f$ in $[0, x]$; it exists since $f^{-1}(\{0\}) \cap [0, x]$ is nonempty and compact.
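To make the zero-finding argument concrete, the smallest zero $M_0$ of $f$ can be located by bisection. The constants below are purely illustrative placeholders, not values derived in the proof; for them, $f$ changes sign on the chosen bracket and the resulting zero indeed lies below $2c_1\delta$.

```python
import math

# Locating the smallest zero M0 of
# f(x) = c1*delta + c*delta^{1/2}*nu*x^{3/2}*exp(c*sqrt(d*nu)*sqrt(x)) - x
# by bisection. All constants are illustrative placeholders.
c1, c, nu, d, delta = 1.0, 1.0, 2.0, 3.0, 1e-3

def f(x):
    return c1 * delta + c * math.sqrt(delta) * nu * x ** 1.5 \
        * math.exp(c * math.sqrt(d * nu) * math.sqrt(x)) - x

lo, hi = 0.0, 1e-2           # f(lo) > 0 and f(hi) < 0 for these constants
for _ in range(60):          # bisection halves the bracket each step
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
M0 = hi
print(M0)
```

For these placeholder constants the zero sits just above $c_1\delta$, consistent with the bound $M_0 < 2c_1\delta$ used below.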
We fix a $\delta_{2} < \delta_{1}$ to be so small that for $M_{2}(0) = \|V_{0}\|_{\mathcal{H}_{2}} < \delta_{2}$
\[
M_{2}(t) \leq M_{0}
\]
is fulfilled. This is possible due to the continuity of $M_{2}(t)$.
Thus, $M_{2}(t)$ is bounded by $M_{0}$ for all $t \in [0, \min\{T_{M}^{0}, T_{M}^{1}\}]$.
If $T_{M}^{0} \geq T_{M}^{1}$, the claim of the theorem holds for $\delta < \delta_{2}$ and $M_{0}(\delta_{1}) < \infty$.
Otherwise, we have $T_{M}^{0} < T_{M}^{1}$. We observe that for sufficiently small $\delta_{3} > 0$
\[
f(2c_{1}\delta ) = c\nu c_{1}^{3/2} e^{c\sqrt{d\nu} \sqrt{c_{1}\delta}}\delta^{2} - c_{1}\delta < 0
\]
is valid for $\delta < \delta_{3}$.
Hence,
\[
\|V(t)\|_{\mathcal{H}_{2}} \leq M_{2}(t) \leq M_{0} < 2c_{1} \delta
\]
for $\delta < \min\{\delta_{2}, \delta_{3}\}$.
This contradicts the maximality of $T_{M}^{0}$. Hence, $T_{M}^{0} \geq T_{M}^{1}$ must be valid,
i.e. the claim holds for $\delta <\min\{\delta_{2}, \delta_{3}\}$ and $M_{0}(\delta_{1}) < \infty$.
\end{Proof}
This enables us finally to formulate and prove the theorem on global existence and exponential stability.
\begin{Theorem}
Let the assumptions of Theorem \ref{LOCAL_NONLINEAR_EXISTENCE_THEOREM_HIGHER_REGULARITY} be fulfilled. Moreover, let
\[
\int_{0}^{L}\varphi_{0}(x)\mathrm{d}x = \int_{0}^{L}\varphi_{1}(x) \mathrm{d}x = \int_{0}^{L}\theta_{0}(x) \mathrm{d}x = 0.
\]
Let $\nu > 1$ be arbitrary but fixed. We can then find an appropriate $\delta > 0$ such that if $\|V_{0}\|_{\mathcal{H}_{2}} < \delta$
and $\|V_{0}\|_{\mathcal{H}_{3}} < \nu$ hold, there exists a unique global solution $(\varphi, \psi, \theta, q)$
to (\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_1})--(\ref{TIMOSHENKO_CATTANEO_NONLINEAR_DAMPED_EQUIV_IC_1}) satisfying
\begin{align}
\varphi, \psi &\in C^{3}([0, \infty )\times [0,L]), \nonumber \\
\theta, q &\in C^{2}([0, \infty )\times [0,L]). \nonumber
\end{align}
Moreover, there exists a constant $C_{0}(V_{0}) > 0$ such that for all $t \geq 0$
\[
\|V(t)\|_{\mathcal{H}_{2}} \leq C_{0}e^{-\alpha t}
\]
holds, with $\alpha > 0$ from Theorem \ref{THEOREM_EXPONENTIAL_STABILITY_LINEAR_2}.
\end{Theorem}
\begin{Proof}
Theorem \ref{LOCAL_NONLINEAR_EXISTENCE_THEOREM_HIGHER_REGULARITY} guarantees
the existence of a local solution with the regularity
\begin{align}
\varphi,\psi &\in C^{3}([0, T] \times [0, L]), \nonumber \\
\theta,q &\in C^{2}([0, T] \times [0, L]). \nonumber
\end{align}
Lemmata \ref{LEMMA_ENERGY_ESTIMATE} and \ref{LEMMA_A_PRIORI_ESTIMATE} imply that
\begin{align}
\|V(t)\|_{\mathcal{H}_{3}} &\leq c\|V_{0}\|_{\mathcal{H}_{3}}e^{\tilde{c}\sqrt{d\nu}
\int_{0}^{t}\|V(\tau )\|_{\mathcal{H}_{2}}^{1/2}\mathrm{d} \tau} \nonumber \\
&\leq c\|V_{0}\|_{\mathcal{H}_{3}}e^{\tilde{c} \sqrt{d \nu M_{0}}}
\leq ce\|V_{0}\|_{\mathcal{H}_{3}}, \quad t \leq T_{M}^{1} \leq T, \nonumber
\end{align}
where $\tilde{c} > 0$ and $\delta$ are chosen sufficiently small so that $\tilde{c} \sqrt{d \nu M_{0}} < 1$ is fulfilled.
We set $d := ce$ and find
\[
\|V(t)\|_{\mathcal{H}_{3}} \leq d\|V_{0}\|_{\mathcal{H}_{3}} < d \nu, \quad t \leq T_{M}^{1} \leq T.
\]
For $T_{M}^{1} < T$, we would obtain a contradiction to the maximality of $T_{M}^{1}$. Thus, $T_{M}^{1} = T$ must hold.
If $0 \leq t \leq T$, there results from (\ref{V_H2_ESTIMATE}) that
\begin{align}
\|V(t)\|_{\mathcal{H}_{2}} &\leq c_{1}\|V_{0}\|_{\mathcal{H}_{2}}e^{-\alpha t}
+ c\int_{0}^{t}e^{-\alpha (t - \tau)}\|V(\tau )\|_{\mathcal{H}_{2}}^{2}\|V_{0}\|_{\mathcal{H}_{3}} e^{c\sqrt{d\nu }
\int_{0}^{\tau}\|V(r)\|_{\mathcal{H}_{2}}^{1/2}\mathrm{d}r}\mathrm{d}\tau \nonumber \\
&\leq c\|V_{0}\|_{\mathcal{H}_{2}} + c\int_{0}^{t} \|V(\tau)\|_{\mathcal{H}_{2}}^{2}
e^{c\sqrt{M_{2}(t)}} \mathrm{d}\tau \nonumber \\
&\leq c \|V_{0}\|_{\mathcal{H}_{2}} + ce^{c\sqrt{M_{2}(t)}}M_{2}(t)
\int_{0}^{t} \|V(\tau)\|_{\mathcal{H}_{2}} \mathrm{d}\tau, \nonumber
\end{align}
whence we conclude
\begin{equation}
\|V(t)\|_{\mathcal{H}_{2}} \leq K \|V_{0}\|_{\mathcal{H}_{2}} \label{V_ESTIMATE_IN_H2}
\end{equation}
using Gronwall's lemma for
\[
K := cM_{0} e^{c (\sqrt{M_{0}} + M_{0})}.
\]
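Gronwall's lemma, as applied above, can be illustrated numerically: taking equality in the hypothesis $u(t) \leq a + b\int_{0}^{t} u(s)\,\mathrm{d}s$ (the extremal case) on a grid, the resulting $u$ stays below the exponential bound $a e^{bt}$. The constants below are arbitrary.

```python
import numpy as np

# Discrete illustration of Gronwall's lemma: if u(t) <= a + b * int_0^t u,
# then u(t) <= a * exp(b t). We build u with equality in the hypothesis
# (the extremal case) on a grid and compare with the exponential bound.
a, b, T = 1.0, 0.7, 3.0
t = np.linspace(0.0, T, 3001)
dt = t[1] - t[0]

u = np.empty_like(t)
u[0] = a
integral = 0.0
for k in range(1, len(t)):
    integral += u[k - 1] * dt          # left-endpoint quadrature of int_0^t u
    u[k] = a + b * integral            # equality in Gronwall's hypothesis

bound = a * np.exp(b * t)
print(np.max(u - bound))  # never positive: u stays below the bound
```

The discrete recursion gives $u_k = a(1 + b\,\mathrm{d}t)^k \leq a e^{b k\,\mathrm{d}t}$, so the bound is respected on every grid point.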
We choose $\delta^{\prime}$ such that $0 < \delta^{\prime} < \frac{\delta}{K}$ and obtain
\[
\|V(T)\|_{\mathcal{H}_{2}} \leq K \|V_{0}\|_{\mathcal{H}_{2}} \leq K\delta^{\prime} \leq \delta.
\]
Therefore, there exists a continuation of $V$ onto $[T,T + T_{1}(\delta_{1})]$.
With (\ref{V_ESTIMATE_IN_H2}) there follows
\[
\|V(T + T_{1}(\delta))\|_{\mathcal{H}_{2}} \leq K \|V_{0}\|_{\mathcal{H}_{2}} \leq \delta,
\]
i.e. we can smoothly continue the solution onto $[T+T_{1}(\delta_{1}), T + 2T_{1}(\delta_{1})]$.
Here, we applied (\ref{V_ESTIMATE_IN_H2}) to the solution of the initial boundary value problem
with the initial value $W_{0} := V(T)$. This is allowed since $\|W_{0}\|_{\mathcal{H}_{2}} < \delta$
and $\|W_{0}\|_{\mathcal{H}_{3}} \leq c < \infty$ hold according to Lemma \ref{LEMMA_ENERGY_ESTIMATE}.
Hence, we can successively obtain a global solution $V = (\varphi, \varphi_{t}, \psi, \psi_{t}, \theta, q)^{t}$ with
\begin{align}
\varphi, \psi &\in C^{3}([0, \infty) \times [0, L]), \nonumber \\
\theta, q &\in C^{2}([0, \infty) \times [0, L]). \nonumber
\end{align}
In particular, we can conclude
\[
M_{2}(t) \leq M_{0} < \infty ,
\]
for all $t\in [0,\infty )$, since
\[
\|V(t)\|_{\mathcal{H}_{2}} \leq K\delta^{\prime} \leq \delta .
\]
Finally, it follows that
\[
\|V(t)\|_{\mathcal{H}_{2}}\leq M_{0} e^{-\alpha t}.
\]
\end{Proof}
\noindent \textbf{Acknowledgement.} The author Salim A. Messaoudi was funded during the work on this paper by KFUPM
under Project \# SB070002.
\end{document}
\begin{document}
\title{Why (and How) Avoid Orthogonal Procrustes in Regularized Multivariate Analysis}
\begin{abstract}
Multivariate Analysis (MVA) comprises a family of well-known methods for feature extraction that exploit correlations among the input variables of the data representation. One important property enjoyed by most such methods is uncorrelation among the extracted features. Recently, regularized versions of MVA methods have appeared in the literature, mainly with the goal of gaining interpretability of the solution. In these cases, the solutions can no longer be obtained in closed form, and it is common to resort to the iteration of two steps, one of them being an orthogonal Procrustes problem. This letter shows that the Procrustes solution is not optimal from the perspective of the overall MVA method, and proposes an alternative approach based on the solution of an eigenvalue problem. Our method ensures the preservation of several properties of the original methods, most notably the uncorrelation of the extracted features, as demonstrated theoretically and through a collection of selected experiments.
\end{abstract}
\section{Introduction}
MultiVariate Analysis (MVA) techniques have been widely used during the last century since Principal Component Analysis (PCA) \cite{pearson1901pca} was proposed as a simple and efficient way to reduce data dimension by projecting the data over the maximum variance directions. Since then, many variants have emerged trying to include supervised information to this data dimension reduction process; this is the case of algorithms such as Canonical Correlation Analysis (CCA) \cite{hotelling1936cca}, Partial Least Squares (PLS) approaches \cite{wold1966nipals2,wold1966nipals1}, or Orthonormalized PLS (OPLS) \cite{worsley1998mvlm}. In fact, we can find many real applications where these methods have been successfully applied: in biomedical engineering \cite{Gerven12,Hansen07}, remote sensing \cite{Arenas08,Arenasbook}, or chemometrics \cite{Barker03}, among many others.
Some recent significant contributions in the field have focused on trying to gain interpretability by means of including $\ell_1$ and $\ell_{2,1}$ norms, or even group lasso penalties, in the MVA formulations. This is the case of extensions such as sparse PCA \cite{Zou06}, sparse OPLS \cite{Gerven12}, group-lasso penalized OPLS (or SRRR) \cite{Chen12}, and $\ell_{2,1}$-regularized CCA (or L21SDA) \cite{Shi14}. All these approaches are based on an iterative process which combines two optimization problems. The first step consists of a regularized least-squares problem to obtain the vectors for the extraction of input features; the second step involves a minimization problem which is typically solved as an orthogonal Procrustes problem.
Although these regularized approaches have been recurrently applied in feature extraction and dimensionality reduction scenarios \cite{Gerven12}, \cite{Shi14}, all of them ignore one intrinsic and important property of most MVA approaches: \textit{uncorrelation of the extracted features in the new subspace}. When this property holds, the feature extraction process provides additional advantages: (1) Subsequent learning tasks (working over this new space) are eased; for instance, least-squares problems (Ridge Regression, LASSO, ...) can work independently over each dimension, and the effects of variations of the input data are isolated in the different directions. (2) The selection of optimal feature subsets becomes straightforward: once a set of features is computed, obtaining an optimal reduced subset consists of selecting those features with the highest associated eigenvalues. Consequently, the adjustment of the optimum number of extracted features is simplified.
In this paper, we analyze in detail the above mentioned MVA formulations, showing from a theoretical and experimental point of view some drawbacks overlooked until now in the literature. Concretely, we will demonstrate that these MVA approaches (1) do not obtain uncorrelated features in general; (2) do not converge to their associated non-regularized MVA solutions; (3) suffer from issues related to algorithm initialization, e.g., for certain initializations the methods can fail to progress at all.
As a solution to these problems, this paper proposes an alternative to orthogonal Procrustes. To this end, we rely on a common framework that allows us to deal simultaneously with the most common MVA methods (PCA, CCA and OPLS), and extend it to favor interpretable solutions by including a regularization term. Similarly to existing methods, we propose a solution to this generalized formulation which is based on an iterative process but does not suffer from the above problems.
The paper is organized as follows: Section 2 introduces the generalized MVA framework. Section 3 presents the iterative process required to solve its regularized extension and describes both the Procrustes solution and our proposal based on a standard eigenvalue problem. Section 4 explains the limitations of the Procrustes solution in greater detail, and provides theoretical proofs of the most important problems of this approach. Section 5 compares the suitability of the newly proposed solution with that of Procrustes on some real problems that support the theoretical findings well. Finally, Section 6 concludes the paper.
\section{Framework for MVA with uncorrelated features}
This section reviews some well-known MVA methods under a unifying framework, so that subsequent sections can deal with these methods in a unified manner. Before that, notation used throughout the paper is presented.
Let us assume a supervised learning scenario, where the goal is to learn relevant features from an input data set of $N$ training data $\{{\boldsymbol{x}}_i,{\boldsymbol{y}}_i\}_{i = 1}^N$, where ${\boldsymbol{x}}_i \in \Re^n$ and ${\boldsymbol{y}}_i \in \Re^m$ are considered as the input and output vectors, respectively. Therefore, $n$ and $m$ denote the dimensions of the input and output spaces. For notational convenience, we define the input and output data matrices: ${\mathbf X} = \left[{\boldsymbol{x}}_1,\dots,{\boldsymbol{x}}_N \right]$ and ${\mathbf Y} = \left[{\boldsymbol{y}}_1,\dots,{\boldsymbol{y}}_N \right]$, so sample estimations of the input and output data covariance matrices, as well as of their cross-covariance matrix, can be calculated as ${\mathbf C}_{{\mathbf X}{\mathbf X}} = {\mathbf X}{\mathbf X}^\top$, ${\mathbf C}_{{\mathbf Y}{\mathbf Y}} = {\mathbf Y}{\mathbf Y}^\top$ and ${\mathbf C}_{{\mathbf X}{\mathbf Y}} = {\mathbf X}{\mathbf Y}^\top$, where we have neglected the scaling factor $\frac{1}{N}$, and superscript $^\top$ denotes vector or matrix transposition. The goal of linear MVA methods is to find relevant features by combining the original variables, i.e., ${\mathbf X}' = {\mathbf U}^\top {\mathbf X}$, where the $k$th column of ${\mathbf U} = [{\boldsymbol{u}}_1,\dots,{\boldsymbol{u}}_{n_f}]$ is a vector containing the coefficients associated to the $k$th extracted feature.
The results in this paper apply, at least, to PCA, CCA, and OPLS, all these methods having in common that the extracted features are uncorrelated, i.e., ${\mathbf U}^\top {\mathbf C}_{{\mathbf X}{\mathbf X}} {\mathbf U} = {\boldsymbol{\Lambda}}$, with ${\boldsymbol{\Lambda}}$ a diagonal matrix. MVA methods that do not enforce this uncorrelation, most notably PLS, are therefore left outside the scope of this paper.
A common framework for many regularized MVA methods can be found in \cite{reinsel98}. According to it, these methods pursue the minimization of the following objective function:
\begin{align}
\label{GOPLS_cost}
{\cal L}({\mathbf W},{\mathbf U}) &= \|{\boldsymbol{\Omega}} ^{\frac{1}{2}} \left({\mathbf Y} - {\mathbf W} {\mathbf U}^\top {\mathbf X}\right) \|_F^2 + \gamma R\left({\mathbf U}\right) \nonumber\\
&= \Trc\{{\mathbf Y}^\top{\boldsymbol{\Omega}} {\mathbf Y}\} - 2 \Trc\{{\mathbf U}^\top {\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}} {\mathbf W}\} + \Trc\{{\mathbf U}^\top {\mathbf C}_{{\mathbf X}{\mathbf X}} {\mathbf U} {\mathbf W}^\top {\boldsymbol{\Omega}} {\mathbf W}\} + \gamma R\left({\mathbf U}\right),
\end{align}
where $R\left({\mathbf U}\right)$ is a regularization term, such as the ridge regularization ($||{\mathbf U}||^2$), the $\ell_1$ norm ($|{\mathbf U}|_1$), or the $\ell_{2,1}$ penalty for variable selection ($||{\mathbf U}||_{2,1}$). Parameter $\gamma$ trades off the importance of the regularization term in \eqref{GOPLS_cost}, ${\mathbf W}$ can be considered a matrix for the extraction of output features, and different particularizations of matrix ${\boldsymbol{\Omega}}$ give rise to the considered MVA methods, in particular ${\boldsymbol{\Omega}}={\mathbf C}_{{\mathbf Y}{\mathbf Y}}^{-1}$ for CCA, ${\boldsymbol{\Omega}}={\mathbf I}$ for OPLS, and ${\boldsymbol{\Omega}}={\mathbf I}$ with ${\mathbf Y}={\mathbf X}$ for PCA \cite{reinsel98,Sergio15}.
In order to extract uncorrelated features, the loss function \eqref{GOPLS_cost} is formally minimized subject to ${\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf U}={\mathbf I}$. However, it is proved in \cite{Sergio15} that the same solution is obtained by constraining the minimization to ${\mathbf W}^\top{\boldsymbol{\Omega}} {\mathbf W}={\mathbf I}$. For the case in which $R({\mathbf U})$ is differentiable, it is possible to obtain a closed-form solution for ${\mathbf U}$ as a function of ${\mathbf W}$. Introducing this solution back into \eqref{GOPLS_cost}, the problem can be rewritten in terms of ${\mathbf W}$ only. For instance, when $R\left({\mathbf U}\right)=||{\mathbf U}||^2$ the solution for ${\mathbf U}$ can be found by taking derivatives of \eqref{GOPLS_cost} with respect to ${\mathbf U}$. Setting the result equal to zero, we obtain
\begin{equation}
\label{eq:U_solution}
{\mathbf U} =\left({\mathbf C}_{{\mathbf X}{\mathbf X}}+\gamma {\mathbf I}\right)^{-1}{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}} {\mathbf W} = \widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}} {\mathbf W},
\end{equation}
where $\widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}={\mathbf C}_{{\mathbf X}{\mathbf X}}+\gamma {\mathbf I}$. Now, replacing \eqref{eq:U_solution} into \eqref{GOPLS_cost}, the loss function can be written as a function of ${\mathbf W}$ only,
$${\cal L}({\mathbf W})=\Trc\{{\boldsymbol{\Omega}} {\mathbf C}_{{\mathbf Y}{\mathbf Y}}\} - \Trc\{{\mathbf W}^\top {\boldsymbol{\Omega}} {\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top\widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}} {\mathbf W}\}.$$
Minimizing this functional with respect to ${\mathbf W}$, subject to ${\mathbf W}^\top{\boldsymbol{\Omega}} {\mathbf W}={\mathbf I}$, the solution is given in terms of the following generalized eigenvalue problem,
$${\boldsymbol{\Omega}}{\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top \widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1} {\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}} {\mathbf W} = {\boldsymbol{\Omega}} {\mathbf W}{\boldsymbol{\Lambda}},$$
which can be rewritten as a standard eigenvalue problem:
\begin{equation}
\label{eq:V}
{\boldsymbol{\Omega}}^\frac{1}{2}{\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top\widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}} ^\frac{1}{2}{\mathbf V} = {\mathbf V}{\boldsymbol{\Lambda}},
\end{equation}
where we have defined ${\mathbf W}={\boldsymbol{\Omega}} ^{-\frac{1}{2}}{\mathbf V}$. Thus, ${\mathbf U}$ can also be obtained as (see \eqref{eq:U_solution})
\begin{equation}
\label{eq:U}
{\mathbf U}=\widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}}^\frac{1}{2}{\mathbf V}.
\end{equation}
Table \ref{Tab:summaryMVA} provides the above expression particularized for the CCA, OPLS and PCA methods. For each method, we show the corresponding eigenvalue problem that defines the solution for ${\mathbf V}$, the associated ${\mathbf W}$, and the solution for ${\mathbf U}$ provided by \eqref{eq:U_solution}.
\begin{table}[h]
\caption{Summary of the most popular MVA methods: CCA, OPLS, and PCA.}
\label{Tab:summaryMVA}
\centering
\begin{tabular}{@{}llll@{}}
\toprule
& ${\mathbf V}$ (eig. problem) & ${\mathbf W}$ & ${\mathbf U}$ \\
\midrule
CCA & ${\mathbf C}_{{\mathbf Y}{\mathbf Y}}^{-\frac{1}{2}}{\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top\widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\mathbf C}_{{\mathbf Y}{\mathbf Y}}^{-\frac{1}{2}}{\mathbf V} = {\mathbf V}{\boldsymbol{\Lambda}}$ & ${\mathbf W}={\mathbf C}_{{\mathbf Y}{\mathbf Y}}^{\frac{1}{2}}{\mathbf V}$ & $\widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\mathbf C}_{{\mathbf Y}{\mathbf Y}}^{-\frac{1}{2}}{\mathbf V}$ \\
OPLS & ${\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top\widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\mathbf V} = {\mathbf V}{\boldsymbol{\Lambda}}$ & ${\mathbf W}={\mathbf V}$ & $\widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\mathbf V}$\\
PCA & ${\mathbf C}_{{\mathbf X}{\mathbf X}}^\top\widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf V} = {\mathbf V}{\boldsymbol{\Lambda}}$ & ${\mathbf W}={\mathbf V}={\mathbf U}$ & $\widetilde{{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf V}$ \\
\bottomrule
\end{tabular}
\end{table}
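For the unregularized case ($\gamma = 0$, so $\widetilde{\mathbf C}_{{\mathbf X}{\mathbf X}} = {\mathbf C}_{{\mathbf X}{\mathbf X}}$), the OPLS row of the table can be checked numerically: solving the eigenvalue problem for ${\mathbf V}$ and forming ${\mathbf U}$ as above yields a diagonal feature covariance ${\mathbf U}^\top {\mathbf C}_{{\mathbf X}{\mathbf X}} {\mathbf U} = {\boldsymbol{\Lambda}}$. The data and sizes in this sketch are random and purely illustrative.

```python
import numpy as np

# Sketch of the OPLS row of the table with gamma = 0:
# solve Cxy^T Cxx^{-1} Cxy V = V Lambda and form U = Cxx^{-1} Cxy V; then
# U^T Cxx U should equal the diagonal matrix Lambda, i.e. the extracted
# features are uncorrelated. Data and sizes are random and illustrative.
rng = np.random.default_rng(0)
n, m, N = 6, 4, 500
X = rng.standard_normal((n, N))
Y = rng.standard_normal((m, N)) + 0.5 * X[:m, :]   # correlate Y with X

Cxx = X @ X.T
Cxy = X @ Y.T
M = Cxy.T @ np.linalg.solve(Cxx, Cxy)              # symmetric m x m matrix

lam, V = np.linalg.eigh(M)                         # eigenvalue problem for V
U = np.linalg.solve(Cxx, Cxy) @ V                  # U = Cxx^{-1} Cxy V

G = U.T @ Cxx @ U                                  # equals diag(lam) in theory
off = G - np.diag(np.diag(G))
print(np.max(np.abs(off)))                         # ~ 0 up to round-off
```

The off-diagonal entries of ${\mathbf U}^\top {\mathbf C}_{{\mathbf X}{\mathbf X}} {\mathbf U}$ vanish up to round-off, and its diagonal reproduces the eigenvalues.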
\subsection{Uncorrelation of the extracted features}
It is important to remark that, in the absence of regularization, the above approach still produces uncorrelated features, in spite of not enforcing it explicitly. To prove this, we set $\gamma = 0$ and multiply both sides of \eqref{eq:U} from the left by ${\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf X}}$, arriving at:
\begin{equation}
\label{eq:UCxxU_UCxyW}
{\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf U}={\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}}^\frac{1}{2}{\mathbf V}.
\end{equation}
Next, substituting \eqref{eq:U} in \eqref{eq:V}, and premultiplying both sides by ${\mathbf V}^\top$, we obtain
\begin{equation}
\label{eq:UCxyW_Lambda}
{\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}}^\frac{1}{2}{\mathbf V}={\boldsymbol{\Lambda}}.
\end{equation}
Therefore, by jointly considering \eqref{eq:UCxxU_UCxyW} and \eqref{eq:UCxyW_Lambda} we have
\begin{equation}
\label{eq:UCxyW_Lambda2}
{\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf U}={\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}}^\frac{1}{2}{\mathbf V}={\boldsymbol{\Lambda}},
\end{equation}
which proves the uncorrelation of the extracted features, since ${\boldsymbol{\Lambda}}$ is diagonal.
\section{Iterative solutions for regularized MVA methods}
In the case of non-differentiable regularizations, the minimization of \eqref{GOPLS_cost} s.t. ${\mathbf W}^\top{\boldsymbol{\Omega}} {\mathbf W}={\mathbf I}$ does not admit a closed-form solution. This problem is found, for instance, when using LASSO regularization, or the very useful $\ell_{2,1}$-norm, which performs variable selection. In order to solve these regularized MVA methods, many authors have resorted in the literature to the following iterative coupled procedure:
\begin{enumerate}
\item Step-${\mathbf U}$. For fixed ${\mathbf W}$ (satisfying ${\mathbf W}^\top{\boldsymbol{\Omega}}{\mathbf W}={\mathbf I}$), find the matrix ${\mathbf U}$ that minimizes the following regularized least-squares problem,
\begin{equation}
\label{eq:reg_U}
\|{\mathbf Y}' - {\mathbf U}^\top {\mathbf X}\|_F^2 + \gamma R\left({\mathbf U}\right),
\end{equation}
where ${\mathbf Y}'={\mathbf W}^\top{\boldsymbol{\Omega}}{\mathbf Y}$ is the transformed output data. Note that this step can take advantage of the great variety of existing efficient solutions for regularized least-squares problems \cite{Nie10,Grant08,Kim2008}.
\item Step-${\mathbf W}$. For fixed ${\mathbf U}$, find the matrix ${\mathbf W}$ that minimizes \eqref{GOPLS_cost} subject to ${\mathbf W}^\top{\boldsymbol{\Omega}}{\mathbf W} = {\mathbf I}$ or, rewriting this step in terms of ${\mathbf V}={\boldsymbol{\Omega}} ^{\frac{1}{2}}{\mathbf W}$, solve ${\mathbf V}$ by minimizing \begin{equation}\label{eq:procrustes}\|\bar{\mathbf Y} - {\mathbf V} {\mathbf X}'\|_F^2, \;\;\; \st {\mathbf V}^\top{\mathbf V}={\mathbf I},\end{equation}
where we have defined $\bar{\mathbf Y}={\boldsymbol{\Omega}}^{\frac{1}{2}}{\mathbf Y}$.
\end{enumerate}
Step-${\mathbf W}$ above is typically solved in the literature by using the orthogonal Procrustes approach. As we will see later, this solution neglects the uncorrelation among the extracted features; despite this, since it was initially proposed by \cite{Zou06} for the sparse PCA algorithm, it has been wrongly extended to supervised approaches such as sparse OPLS \cite{Gerven12}, group-lasso penalized OPLS (or SRRR) \cite{Chen12}, and $\ell_{2,1}$-regularized CCA (or L21SDA) \cite{Shi14}. Note that this Procrustes approach can still be considered mainstream, as can be checked in the very recent works \cite{Lai16,Hu16}. Other Procrustes-based solutions can be found not only in theoretical proposals \cite{Qiao08,Qiao09,Dou10,Guo10,Han10,Liu14}, but also in real-world applications such as medical imaging \cite{Sjostrand06}, optical emission spectroscopy \cite{Ma08}, or decoding intracranial data \cite{Gerven12}.
Therefore, the main purpose of this paper is two-fold: (1) to alert the machine learning community to the limitations of Procrustes when used as part of the above iterative method, which the next section analyzes theoretically; and (2) to propose an alternative method for the ${\mathbf W}$-step that pursues feature uncorrelation, presented in Section 3.2.
\subsection{Generalized solution: ${\mathbf W}$-step with Orthogonal Procrustes}
Problem \eqref{eq:procrustes} is known as the Orthogonal Procrustes problem; given the singular value decomposition ${\mathbf C}_{\bar{\mathbf Y}{\mathbf X}'}={\mathbf Q}\boldsymbol{\Sigma}{\mathbf P}^\top$, its optimal solution is ${\mathbf V}_\text{P} = {\mathbf Q}{\mathbf P}^\top$ \cite{Schonemann1966}.
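A minimal sketch of this closed-form solution (the function name and data layout are ours; we compute ${\mathbf C}_{\bar{\mathbf Y}{\mathbf X}'}$ as ${\boldsymbol{\Omega}}^{\frac12}{\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top{\mathbf U}$, the identity used later in Section 3.3):

```python
import numpy as np

def procrustes_w_step(Cxy, U, Omega_half):
    """Orthogonal Procrustes W-step: V_P = Q P^T, where
    Omega^{1/2} Cxy^T U = Q Sigma P^T is a thin SVD."""
    Q, s, Pt = np.linalg.svd(Omega_half @ Cxy.T @ U, full_matrices=False)
    return Q @ Pt
```

By construction the result has orthonormal columns and maximizes $\Trc\{{\mathbf V}^\top{\mathbf C}_{\bar{\mathbf Y}{\mathbf X}'}\}$, the attained value being the sum of the singular values.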
\subsection{Proposed solution: ${\mathbf W}$-step as an eigenvalue problem}
\label{subsec:nuestro}
Using Lagrange multipliers, we reformulate \eqref{eq:procrustes} as the following maximization problem
$${\cal L}_{\boldsymbol{\Xi}}({\mathbf V})=\Trc\{{\mathbf U}^\top {\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}}^{\frac{1}{2}} {\mathbf V}\} - \Trc\{\left({\mathbf V}^\top{\mathbf V} - {\mathbf I}\right){\boldsymbol{\Xi}}\},$$
where ${\boldsymbol{\Xi}}$ is a matrix containing the Lagrange multipliers. Taking derivatives of ${\cal L}_{\boldsymbol{\Xi}}$ with respect to ${\mathbf V}$, and setting this result to zero, we arrive at the following expression
\begin{equation}
\label{eq:UCxyW_Xi}
{\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}}^\frac{1}{2}{\mathbf V}={\boldsymbol{\Xi}}.
\end{equation}
Now, since \eqref{eq:UCxyW_Lambda} needs to hold to guarantee uncorrelation of the extracted features, this implies that matrix ${\boldsymbol{\Xi}}$ should also be diagonal, which is not necessarily satisfied by the solution of \eqref{eq:procrustes}. In other words, when using the iterative procedure described above, it is not sufficient to impose ${\mathbf V}^\top {\mathbf V} = {\mathbf I}$ during the ${\mathbf W}$-step, but we need to additionally impose \eqref{eq:UCxyW_Lambda} to get uncorrelated features.
Assuming that ${\boldsymbol{\Xi}}$ is a diagonal matrix, we can now premultiply both terms of \eqref{eq:UCxyW_Xi} by their transposes. Multiplying further by ${\mathbf V}$ from the left, and using the fact that ${\mathbf V}^\top {\mathbf V} = {\mathbf I}$, we arrive at the following eigenvalue problem that is the basis of our method:
\begin{equation}
\label{eq:reg_V}
{\boldsymbol{\Omega}}^{\frac{1}{2}}{\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top{\mathbf U}{\mathbf U}^\top {\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}}^{\frac{1}{2}}{\mathbf V}={\mathbf V}{\boldsymbol{\Xi}}^2 = {\mathbf V}{\boldsymbol{\Lambda}}.
\end{equation}
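A minimal sketch of this ${\mathbf W}$-step (our own illustration; names and shapes are assumptions): the columns of ${\mathbf V}$ are taken as the leading eigenvectors of the symmetric matrix on the left-hand side of \eqref{eq:reg_V}.

```python
import numpy as np

def eig_w_step(Cxy, U, Omega_half, k):
    """W-step as an eigenvalue problem: columns of V are the top-k
    eigenvectors of Omega^{1/2} Cxy^T U U^T Cxy Omega^{1/2}."""
    A = Omega_half @ Cxy.T @ U   # dy x k
    M = A @ A.T                  # symmetric PSD, dy x dy
    w, Q = np.linalg.eigh(M)     # eigh returns ascending eigenvalues
    return Q[:, ::-1][:, :k]     # keep the k leading eigenvectors
```

Since ${\mathbf V}$ is built from eigenvectors of a symmetric matrix, it automatically satisfies both ${\mathbf V}^\top{\mathbf V}={\mathbf I}$ and the diagonality of ${\boldsymbol{\Lambda}}$.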
Table \ref{Tab:summary_regMVA} summarizes the ${\mathbf U}$- and ${\mathbf W}$-steps for the particular cases of regularized CCA, OPLS, and PCA. Recall that ${\mathbf W}$ can be straightforwardly computed from ${\mathbf V}$ using the relations indicated in the last column of Table \ref{Tab:summaryMVA}.
\begin{table}[!t]
\caption{Proposed solution for the two coupled steps of most popular regularized MVA methods.}
\label{Tab:summary_regMVA}
\centering
\resizebox{\textwidth}{!}{
\begin{footnotesize}
\begin{tabular}{@{}lll@{}}
\toprule
& ${\mathbf U}$-step (reg. LS) & ${\mathbf W}$-step (eigenvalue problem)\\
\midrule
reg. CCA & $\displaystyle\argmin_{\mathbf U} \|{\mathbf Y}' - {\mathbf U}^\top {\mathbf X}\|_F^2 + \gamma R\left({\mathbf U}\right)$ & ${\mathbf C}_{{\mathbf Y}{\mathbf Y}}^{-\frac{1}{2}}{\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top{\mathbf U}{\mathbf U}^\top {\mathbf C}_{{\mathbf X}{\mathbf Y}}{\mathbf C}_{{\mathbf Y}{\mathbf Y}}^{-\frac{1}{2}}{\mathbf V} = {\mathbf V}{\boldsymbol{\Lambda}}$\\
reg. OPLS & $\displaystyle\argmin_{\mathbf U} \|{\mathbf Y}' - {\mathbf U}^\top {\mathbf X}\|_F^2 + \gamma R\left({\mathbf U}\right)$ & ${\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top{\mathbf U}{\mathbf U}^\top {\mathbf C}_{{\mathbf X}{\mathbf Y}}{\mathbf V} = {\mathbf V}{\boldsymbol{\Lambda}}$\\
reg. PCA & $\displaystyle\argmin_{\mathbf U} \|{\mathbf X}' - {\mathbf U}^\top {\mathbf X}\|_F^2 + \gamma R\left({\mathbf U}\right)$ & ${\mathbf C}_{{\mathbf X}{\mathbf X}}^\top{\mathbf U}{\mathbf U}^\top {\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf V} = {\mathbf V}{\boldsymbol{\Lambda}}$\\
\bottomrule
\end{tabular}\end{footnotesize}}
\end{table}
\subsection{Relationship between both solutions}
In this section we demonstrate that, in the absence of regularization, the solution to the eigenvalue problem \eqref{eq:reg_V} is given by ${\mathbf V}_\text{EIG} = {\mathbf Q}$, where the columns of ${\mathbf Q}$ are the left singular vectors of the matrix ${\mathbf C}_{\bar{\mathbf Y}{\mathbf X}'} = {\mathbf Q} \boldsymbol{\Sigma} {\mathbf P}^\top$. This implies that the solution of our method is simply a rotation of the solution obtained with Procrustes, ${\mathbf V}_\text{P} = {\mathbf Q} {\mathbf P}^\top$. This rotation plays a crucial role in decorrelating the extracted features. Indeed, the experiments section will show not only that more uncorrelated features can be obtained, but also that the extracted features are more effective at minimizing the overall objective function \eqref{GOPLS_cost}.
We start by rewriting the singular value decomposition of ${\mathbf C}_{\bar{\mathbf Y}{\mathbf X}'}$ as
\begin{equation}
\label{eq:SVD}
{\boldsymbol{\Omega}}^{\frac{1}{2}}{\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top{\mathbf U} = {\mathbf Q}\boldsymbol{\Sigma}{\mathbf P}^\top.
\end{equation}
Multiplying both sides of \eqref{eq:SVD} by their transposes from the right, we have
\begin{equation}
\label{eq:EVD}
{\boldsymbol{\Omega}}^{\frac{1}{2}}{\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top{\mathbf U}{\mathbf U}^\top {\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}}^{\frac{1}{2}}={\mathbf Q}\boldsymbol{\Sigma}^2{\mathbf Q}^\top.
\end{equation}
Further multiplying both terms by ${\mathbf Q}$ from the right, and comparing the result with \eqref{eq:reg_V}, we can see that the solution to the eigenvalue problem \eqref{eq:reg_V} is precisely ${\mathbf V}_\text{EIG} = {\mathbf Q}$ and ${\boldsymbol{\Lambda}} = \boldsymbol{\Sigma}^2$.
\section{Undesired properties of orthogonal Procrustes in regularized MVA}
In this section, we provide theoretical arguments against using orthogonal Procrustes as the solution to the ${\mathbf W}$-step, showing that the resulting solution lacks some desired properties of MVA methods. To do so, we build on a generalization of the property stated in \cite{Zou06}, namely that a good regularized MVA method should reduce to the original (unregularized) MVA solution when the regularization term is suppressed. We will show that this is not the case for the Procrustes-based solution. In particular, we study the following two issues that arise when setting $\gamma=0$:
\begin{itemize}
\item The extracted features are not uncorrelated in general. This issue alone undermines the correctness of all MVA methods based on the Procrustes solution.
\item Initialization of the iterative process becomes critical; in some cases the algorithm may not progress at all.
\end{itemize}
We prove these statements next and discuss their implications.
\subsection{Uncorrelation of the extracted features using Procrustes}
Denoting the solution of the ${\mathbf W}$-step as ${\mathbf V}_\text{P}$ and the solution after the next ${\mathbf U}$-step as ${\mathbf U}_\text{P}$, we can use \eqref{eq:U_solution} to write
\begin{equation}
{\mathbf U}_\text{P} = {\mathbf C}_{{\mathbf X}{\mathbf X}}^{-1} {\mathbf C}_{{\mathbf X}{\mathbf Y}} {\boldsymbol{\Omega}}^{\frac{1}{2}} {\mathbf V}_\text{P},
\end{equation}
since this is the closed-form optimal solution of the ${\mathbf U}$-step when regularization is removed. Now, it is easy to see that the autocorrelation matrix of the extracted features can be rewritten as
\begin{equation}
\label{ec:corr_procrustes}
{\mathbf C}_{{\mathbf X}'{\mathbf X}'} = {\mathbf U}_\text{P}^\top{\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf U}_\text{P} = {\mathbf V}_\text{P}^\top {\boldsymbol{\Omega}}^{\frac{1}{2}}{\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top {\mathbf U}_\text{P}.
\end{equation}
Recalling that ${\mathbf V}_\text{P} = {\mathbf Q} {\mathbf P}^\top$, and that ${\mathbf C}_{\bar{\mathbf Y}{\mathbf X}'} = {\mathbf Q} \boldsymbol{\Sigma} {\mathbf P}^\top$, \eqref{ec:corr_procrustes} can be finally expressed as:
\begin{equation}
{\mathbf C}_{{\mathbf X}'{\mathbf X}'} = {\mathbf P} {\mathbf Q}^\top {\mathbf Q} \boldsymbol{\Sigma} {\mathbf P}^\top = {\mathbf P} \boldsymbol{\Sigma} {\mathbf P}^\top,
\end{equation}
which is not diagonal in general; thus, there is no guarantee that the extracted features are uncorrelated. In fact, since $\boldsymbol{\Sigma}$ is a diagonal matrix and ${\mathbf P}$ is an orthogonal matrix (${\mathbf P}^\top={\mathbf P}^{-1}$), only permutation matrices ${\mathbf P}$ result in uncorrelated features (diagonal ${\mathbf C}_{{\mathbf X}'{\mathbf X}'}$); in that case, the solutions ${\mathbf V}_\text{P}={\mathbf Q}{\mathbf P}^\top$ and ${\mathbf V}_\text{EIG}={\mathbf Q}$ extract the same features, though not necessarily in the same order.
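A small synthetic check of this effect (our own illustration, with ${\boldsymbol{\Omega}}={\mathbf I}$, i.e.\ the OPLS case, $\gamma=0$, and $k$ equal to the output dimension): the eigenvalue-based iteration reaches a diagonal ${\mathbf C}_{{\mathbf X}'{\mathbf X}'}$, while the Procrustes fixed point ${\mathbf P}\boldsymbol{\Sigma}{\mathbf P}^\top$ does not.

```python
import numpy as np

rng = np.random.default_rng(7)
N, dx, dy = 500, 6, 3
X = rng.standard_normal((dx, N))
Y = 0.5 * X[:dy] + rng.standard_normal((dy, N))  # targets correlated with X
Cxx = X @ X.T / N
Cxy = X @ Y.T / N

def offdiag(C):
    """Largest off-diagonal magnitude of a square matrix."""
    return np.abs(C - np.diag(np.diag(C))).max()

results = {}
for name in ("procrustes", "eigen"):
    V = rng.standard_normal((dy, dy))        # random init, k = dy
    for _ in range(5):
        U = np.linalg.solve(Cxx, Cxy @ V)    # unregularized U-step
        A = Cxy.T @ U                        # Omega^{1/2} Cxy^T U, Omega = I
        if name == "procrustes":
            Q, s, Pt = np.linalg.svd(A)
            V = Q @ Pt                       # V_P = Q P^T
        else:
            w, Q = np.linalg.eigh(A @ A.T)
            V = Q[:, ::-1]                   # V_EIG = Q, descending order
    U = np.linalg.solve(Cxx, Cxy @ V)
    results[name] = offdiag(U.T @ Cxx @ U)   # feature autocorrelation
```

In this sketch the eigen variant drives the off-diagonal entries of ${\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf U}$ to machine precision, whereas the Procrustes variant leaves them far from zero.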
The experimental section will show that Procrustes-based methods do not enjoy the desired uncorrelation property: even when the regularization is removed ($\gamma=0$), the correlation among the features implies that part of the variance of the original data described by one feature also leaks into other features. Furthermore, the experiments will show that, since this method does not explicitly pursue uncorrelation, its solution for $\gamma > 0$ yields higher correlation among the features than the proposed method.
\subsection{Initialization dependence of the orthogonal Procrustes approach}
In the experiments section, we will illustrate that the solution achieved with Procrustes shows significant variance when the initialization conditions change, even when the regularization term is removed. In this subsection, we focus on a particular issue associated with the initialization of the algorithm: we show that when ${\mathbf V}$ is initialized with an orthogonal matrix (a common choice in the literature) the algorithm does not progress at all.
Let ${\mathbf V}^{(0)}$ denote an orthogonal matrix which is used for the algorithm initialization. Subsequently, ${\mathbf U}^{(1)}$ and ${\mathbf V}^{(1)}$ will denote the solutions to the ${\mathbf U}$- and ${\mathbf W}$-step, that can be obtained from ${\mathbf V}^{(0)}$ as
\begin{enumerate}
\item ${\mathbf U}^{(1)}={\mathbf C}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}}^{\frac{1}{2}}{\mathbf V}^{(0)}$
\item ${\boldsymbol{\Omega}}^{\frac{1}{2}}{\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top{\mathbf U}^{(1)} = {\mathbf Q}\boldsymbol{\Sigma}{\mathbf P}^\top$
\item ${\mathbf V}^{(1)}={\mathbf Q}{\mathbf P}^\top$
\end{enumerate}
In order to express ${\mathbf V}^{(1)}$ in terms of ${\mathbf V}^{(0)}$, we combine the expressions in steps 1 and 2 to arrive at
\begin{equation}
\label{eq:CV_SVD}
{\mathbf M}{\mathbf V}^{(0)} = {\mathbf Q}\boldsymbol{\Sigma}{\mathbf P}^\top,
\end{equation}
where we have defined ${\mathbf M} ={\boldsymbol{\Omega}}^{\frac{1}{2}}{\mathbf C}_{{\mathbf X}{\mathbf Y}}^\top {{\mathbf C}}_{{\mathbf X}{\mathbf X}}^{-1}{\mathbf C}_{{\mathbf X}{\mathbf Y}}{\boldsymbol{\Omega}}^{\frac{1}{2}}$ for compactness.
Now, multiplying both sides of \eqref{eq:CV_SVD} by their transposes from the right and from the left, respectively, we obtain the following expressions (note that ${\mathbf M}$ is symmetric):
\begin{align}
{\mathbf Q}\boldsymbol{\Sigma}^2{\mathbf Q}^\top &= {\mathbf M}{\mathbf V}^{(0)}{\mathbf V}^{^\top\hspace{-0.05cm}(0)}{\mathbf M} \\
{\mathbf P}\boldsymbol{\Sigma}^2{\mathbf P}^\top &= {\mathbf V}^{^\top\hspace{-0.05cm}(0)}{\mathbf M}{\mathbf M}{\mathbf V}^{(0)}.
\end{align}
From these, we obtain the following equalities, which will be helpful in the demonstration:
\begin{eqnarray}
\label{eq:Q}
{\mathbf Q}&={\mathbf M}{\mathbf V}^{(0)}{\mathbf V}^{^\top\hspace{-0.05cm}(0)}{\mathbf M}{\mathbf Q}\boldsymbol{\Sigma}^{-2},\\
\label{eq:P}
{\mathbf P}&={\mathbf V}^{^\top\hspace{-0.05cm}(0)}{\mathbf M}{\mathbf M}{\mathbf V}^{(0)}{\mathbf P}\boldsymbol{\Sigma}^{-2}.
\end{eqnarray}
Finally, multiplying \eqref{eq:Q} by the transpose of \eqref{eq:P}, we can express ${\mathbf V}^{(1)}$ as a function of ${\mathbf V}^{(0)}$, and simplify the resulting expression as follows:
\begin{eqnarray*}
{\mathbf V}^{(1)}&=& {\mathbf M}{\mathbf V}^{(0)}{\mathbf V}^{^\top\hspace{-0.05cm}(0)}{\mathbf M}({\mathbf Q}\boldsymbol{\Sigma}^{-4}{\mathbf P}^\top){\mathbf V}^{^\top\hspace{-0.05cm}(0)}{\mathbf M}{\mathbf M}{\mathbf V}^{(0)}\\
&=& {\mathbf M}{\mathbf V}^{(0)}{\mathbf V}^{^\top\hspace{-0.05cm}(0)}{\mathbf M}\left({\mathbf M}^{-4}{\mathbf V}^{(0)}\right){\mathbf V}^{^\top\hspace{-0.05cm}(0)}{\mathbf M}{\mathbf M}{\mathbf V}^{(0)}\\
&=& {\mathbf M}{\mathbf M}{\mathbf M}^{-4}{\mathbf M}{\mathbf M}{\mathbf V}^{(0)} = {\mathbf V}^{(0)},
\end{eqnarray*}
where we made use of \eqref{eq:CV_SVD} (which, together with the symmetry of ${\mathbf M}$, gives ${\mathbf Q}\boldsymbol{\Sigma}^{-4}{\mathbf P}^\top={\mathbf M}^{-4}{\mathbf V}^{(0)}$) and the fact that ${\mathbf V}^{(0)}$ is orthogonal (i.e., ${\mathbf V}^{^\top\hspace{-0.05cm}(0)} = {\mathbf V}^{^{-1}\hspace{-0.05cm}(0)}$).
Therefore, we have proved that the Procrustes-based iterative MVA process stalls when the regularization term is removed and ${\mathbf V}$ is initialized with an orthogonal matrix. This is the case for the method proposed in \cite{Gerven12}, where the algorithm is initialized with the eigenvectors of ${\mathbf C}_{{\mathbf Y}{\mathbf Y}}$. Note also that, since ${\mathbf V}^\top{\mathbf V}={\mathbf I}$ (or ${\mathbf W}^\top{\boldsymbol{\Omega}}{\mathbf W}={\mathbf I}$) is imposed, an orthogonal matrix is a natural choice for initialization, the identity matrix being a classic choice in these cases.
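This stalling is easy to reproduce numerically. The following sketch (our own, with ${\boldsymbol{\Omega}}={\mathbf I}$ and $\gamma=0$) performs one full iteration from a random orthogonal ${\mathbf V}^{(0)}$ and recovers ${\mathbf V}^{(1)}={\mathbf V}^{(0)}$:

```python
import numpy as np

rng = np.random.default_rng(11)
N, dx, dy = 400, 5, 3
X = rng.standard_normal((dx, N))
Y = 0.3 * X[:dy] + rng.standard_normal((dy, N))
Cxx = X @ X.T / N
Cxy = X @ Y.T / N

# Orthogonal initialization via QR of a random matrix (Omega = I).
V0, _ = np.linalg.qr(rng.standard_normal((dy, dy)))

U1 = np.linalg.solve(Cxx, Cxy @ V0)   # U-step with gamma = 0
Q, s, Pt = np.linalg.svd(Cxy.T @ U1)  # W-step: SVD of Omega^{1/2} Cxy^T U^(1)
V1 = Q @ Pt                           # Procrustes update

# V1 coincides with V0: the iteration is stalled from the start.
```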
\section{Experiments}
The previous section demonstrated theoretically the problems of Procrustes-based MVA methods, as well as the validity of our proposal. In this section, we show empirically the differences between both approaches on a real problem. To that end, we compare three implementations: the iterative MVA solution using the Procrustes approach (referred to as ``Procrustes''), the proposed solution (denoted ``Proposal'') and, whenever possible, the original algorithm implementations (``Original''). For all implementations, we consider well-known MVA methods derived from the generalized framework: PCA, CCA and OPLS.
Throughout this section we use the \textit{segment} problem \cite{Blake98}. This dataset consists of 18 input variables, 7 output dimensions and 2390 samples. To analyze the initialization dependence of the iterative approaches, all results are averaged over 50 random initializations.
To start this evaluation, we consider the case where no regularization is applied ($\gamma = 0$) and analyze the following aspects of algorithm behavior as the number of extracted features varies:
\begin{enumerate}
\item {\bf Convergence to the minimum of the objective function}: by evaluating the achieved value of the cost function \eqref{GOPLS_cost}\footnote{In CCA, we consider its formulation as the maximization of a trace.}, we can study whether the compared solutions achieve the same performance as the original MVA solutions (see Figure \ref{fig:lossfunction}).
\item {\bf Information of the extracted variables}: we can measure whether the extracted features are correlated or redundant by means of the {\it Total Explained Variance} (TEV) \cite{Zou06}, whose value decreases when there are relationships among the features; thus, higher values of this measure indicate that the extracted features are more informative (see Figure \ref{fig:variance}).
The explained variance of a single variable is obtained by computing the QR decomposition ${\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf U}={\mathbf Q}{\mathbf R}$ and taking the absolute values of the diagonal elements of ${\mathbf R}$. Thus,
$${\rm TEV}(k)= \sum_{j=1}^k{|{\mathbf R}_{jj}|}.$$
\end{enumerate}
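The TEV computation described above can be sketched as follows (our own helper; the function name is illustrative):

```python
import numpy as np

def tev(U, Cxx, k=None):
    """Total Explained Variance: QR-decompose U^T Cxx U = Q R and sum
    the absolute diagonal entries of R over the first k features."""
    _, R = np.linalg.qr(U.T @ Cxx @ U)
    d = np.abs(np.diag(R))
    k = d.size if k is None else k
    return d[:k].sum()
```

When the extracted features are already uncorrelated, ${\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf U}$ is diagonal and the TEV reduces to the plain sum of feature variances.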
\begin{figure}
\caption{Evolution of the global objective function with the number of extracted features ($k$) for the studied methods ($\gamma = 0$).}
\label{fig:lossfunction}
\end{figure}
\begin{figure}
\caption{TEV evolution with the number of features ($k$) for the methods under study ($\gamma = 0$).}
\label{fig:variance}
\end{figure}
In light of these results, we confirm the problems of the Procrustes-based approaches and verify that the proposed MVA implementation overcomes them. In particular:
\begin{itemize}
\item From Figure \ref{fig:lossfunction}, we conclude that, when $k<n_f$ extracted features are considered, the Procrustes-based approaches are not able to converge to the standard MVA solution. The proposed versions, however, converge to exactly the same solution as the original methods.
\item The proposed MVA approach extracts more informative features, as demonstrated by TEV values larger (for any $k$) than those achieved with the Procrustes method. This is a direct consequence of the uncorrelation among the extracted features.
\item Last but not least, the standard deviation of all Procrustes-based solutions reveals a serious initialization dependence. The proposed solutions, like the original MVA methods, converge to the same solution for all initializations.
\end{itemize}
For completeness, we also analyze the uncorrelation among the extracted features when an $\ell_1$ penalty is used. For this purpose, we directly measure the Correlation of the Extracted Features (CEF) by computing the Frobenius norm
$$ {\rm CEF}= {\Vert {\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf U} - {\rm diag} \left( {\mathbf U}^\top{\mathbf C}_{{\mathbf X}{\mathbf X}}{\mathbf U} \right) \Vert }_F; $$
thus, CEF values different from zero reveal correlations among the extracted components. In particular, Figure \ref{fig:CEF} shows the CEF values for different Sparsity Rates (SR), from zero to 80\%\footnote{Exploring higher sparsity rates would lead all approaches to set ${\mathbf U} = {\bf 0}$, and the CEF would no longer make sense.}. For this study, three different initializations of the Procrustes-based methods are considered: (1) `Proc-random', which, like our proposed implementations, uses values drawn uniformly at random from the range $[0,1]$; (2) `Proc-orthog', which uses an orthogonal matrix given by the eigenvectors of ${\mathbf C}_{{\mathbf Y}{\mathbf Y}}$ (or ${\mathbf C}_{{\mathbf X}{\mathbf X}}$ in SPCA\footnote{Note that in this case the SPCA initialization equals the standard PCA solution.}), as proposed in \cite{Zou06} and \cite{Gerven12}; and (3) `Proc-ideal', which starts the iterative process directly from the ideal solution obtained without sparsity regularization.
When the regularization term is included, no approach (including the proposed ones) achieves perfect uncorrelation among the extracted features; even so, the CEF values reveal that our proposal attains higher uncorrelation than the Procrustes approaches, independently of their initialization. Moreover, when the regularization parameter is close to zero (SR = 0\%), the Procrustes versions obtain uncorrelated features only if they are initialized with the ideal solution (Proc-ideal).
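The CEF measure defined above can be sketched as follows (our own helper; the function name is illustrative):

```python
import numpy as np

def cef(U, Cxx):
    """Correlation of the Extracted Features: Frobenius norm of the
    off-diagonal part of the feature autocorrelation matrix."""
    C = U.T @ Cxx @ U
    return np.linalg.norm(C - np.diag(np.diag(C)), "fro")
```

A projection onto the eigenvectors of ${\mathbf C}_{{\mathbf X}{\mathbf X}}$ yields perfectly uncorrelated features (CEF $= 0$), whereas any rotation away from them produces a strictly positive CEF.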
\begin{figure}
\caption{CEF vs SR for the proposed MVA approaches and Procrustes based ones. Three different initializations are considered for all methods, as described in the text.}
\label{fig:CEF}
\end{figure}
\section{Conclusions}
Solutions for regularized MVA are based on an iterative approach consisting of two coupled steps. Whereas the first step eases the inclusion of regularization terms, the second results in a constrained minimization problem that is generally solved as an orthogonal Procrustes problem. Despite the widespread use of this scheme, it fails to produce a subspace of uncorrelated features, which is a desired property of MVA solutions.
In this paper we have analyzed the drawbacks of these schemes and proposed an alternative algorithm that enforces the uncorrelation property. The advantages of the proposed technique over the Procrustes-based methods have been discussed theoretically and further confirmed via simulations.
\subsubsection*{Acknowledgments}
This work has been partly supported by MINECO project TEC2014-52289-R.
\end{document}
\begin{document}
\newtheorem{theorem}{Theorem}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\theoremstyle{remark}
\newtheorem*{rem}{Remark}
\dedicatory{Dedicated to Boris Mikhailovich Makarov, with great respect}
\title[Comparison theorems]{COMPARISON THEOREMS \\ FOR THE SMALL BALL PROBABILITIES \\ OF GAUSSIAN PROCESSES IN WEIGHTED $L_2$-NORMS}
\author{Alexander I. Nazarov}
\address{St. Petersburg Departament of Steklov Institute RAS, Fontanka 27, 191023, St. Petersburg, RUSSIA;
St.Petersburg State University, Universitetskii pr. 28, 198504, St.Petersburg, RUSSIA}
\email{[email protected]}
\author{Ruslan S. Pusev}
\address{St.Petersburg State University, Universitetskii pr. 28, 198504, St.Petersburg, RUSSIA}
\email{[email protected]}
\subjclass[2000]{60G15}
\keywords{Small ball probabilities, Gaussian processes, comparison theorems, spectral asymptotics}
\begin{abstract}
We prove comparison theorems for small ball probabilities of the Green Gaussian processes in weighted $L_2$-norms. We find the sharp small ball asymptotics
for many classical processes under quite general assumptions on the weight.
\end{abstract}
\thanks{Authors are supported by RFBR grant 10-01-00154. The first author is also supported by St. Petersburg State University grant 6.38.64.2012.
The second author is also supported by the Chebyshev Laboratory of St. Petersburg State University with the Russian Government grant 11.G34.31.0026,
and by the Program of supporting for Leading Scientific Schools (NSh-1216.2012.1).}
\date{}
\maketitle
\section{Introduction}
The small ball problem for a random process $X$ in the norm $\|\cdot\|$ is to describe the asymptotics, as $\varepsilon\to0$, of the probability
${\sf P}\{\|X\|\leq\varepsilon\}$. The theory of small ball behavior for Gaussian processes in various norms
has been intensively developed in recent decades; see the surveys \cite{Lifs:1999}, \cite{Li:Shao:2001} and the site \cite{Lifs:2010}.
Suppose we have a Gaussian process $X(t)$, $0 \leqslant t \leqslant 1$, with zero mean and covariance
function $G_X(t,s)={\sf E}X(t)X(s)$, $t,s\in [0,1]$. Let $\psi$ be a non-negative weight function on $[0,1]$. We set
$$
\|X\|_\psi=\left(
\int_0^1 X^2(t)\psi(t)dt
\right)^{\frac12}
$$
(we drop the subscript $\psi$ if $\psi\equiv1$).
By the classical Karhunen--Lo\`eve expansion, one has the equality in distribution
$$
\|X\|_\psi^2 = \sum_{j=1}^\infty\lambda_j\xi_j^2.
$$
Here $\xi_j$, $j\in\mathbb N$, are independent standard
Gaussian random variables while $\lambda_j>0$, $j\in\mathbb N$,
are the eigenvalues of the integral equation
$$
\lambda f(t)=\int_0^1 G_X(t,s)\sqrt{\psi(t)\psi(s)}f(s)\,ds,\quad t\in[0,1].
$$
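For orientation, consider the classical example of this spectral problem (a standard fact, not taken from the present text): the standard Wiener process $W$ on $[0,1]$ with $\psi\equiv1$, whose eigenvalues are known in closed form.

```latex
% Classical example: X = W, the standard Wiener process, \psi \equiv 1.
% Then G_W(t,s) = \min(t,s), and the integral equation is equivalent to
% the boundary value problem -\lambda f'' = f, f(0) = f'(1) = 0, so that
\lambda_j = \frac{1}{\pi^2\left(j-\frac12\right)^2},
\qquad
f_j(t) = \sqrt{2}\,\sin\left(\left(j-\tfrac12\right)\pi t\right),
\qquad j\in\mathbb{N},
% and hence \|W\|^2 \stackrel{d}{=}
% \sum_{j=1}^\infty \xi_j^2 \big/ \bigl(\pi^2\left(j-\frac12\right)^2\bigr).
```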
In the papers \cite{Naza:2009,Naza:Niki:2004} the concept of a {\it Green} process was introduced, i.e.\ a Gaussian process whose covariance is
the Green function of a self-adjoint differential operator. The approach developed in these papers allows one to obtain the sharp (up to a constant)
asymptotics of small deviations in the $L_2$-norm for this class of processes. In the papers \cite{Naza:2003,Naza:Puse:2009}, using this approach, we
calculated the sharp asymptotics of small ball probabilities for a large class of particular processes with various weights.
In this paper we prove a comparison theorem for the small ball probabilities of the Green Gaussian processes in the weighted $L_2$-norms.
This theorem gives us the opportunity to obtain the sharp small ball asymptotics for many classical processes under quite general assumptions on the weight.
For the Wiener process and some other processes this result was obtained in \cite{Niki:Puse:2012}.
Let us recall some notation. A function $G(t,s)$ is called the Green function
of a boundary value problem for differential operator $L$ if it satisfies the equation
$LG=\delta(t-s)$ in the sense of distributions and satisfies the boundary conditions.
The space $W_p^m(0,1)$ is the Banach space of functions $y$ with
continuous derivatives up to order $m-1$ such that $y^{(m-1)}$ is
absolutely continuous on $[0,1]$ and $y^{(m)}\in L_p(0,1)$.
$\mathcal V(\dots)$ stands for the Vandermonde determinant.
\section{The calculation of the perturbation determinant}
Let $L$ be a self-adjoint differential operator of order $2n$, generated by the differential expression
\begin{equation}
\label{diffExpression}
Lv\equiv
(-1)^n v^{(2n)}+\left(p_{n-1}v^{(n-1)}\right)^{(n-1)}+\ldots+p_0v;
\end{equation}
and boundary conditions
\begin{equation}
\label{boundaryConditions}
U_\nu(v)\equiv U_{\nu 0}(v)+U_{\nu 1}(v)=0,
\quad
\nu=1,\ldots,2n.
\end{equation}
Here
$$
\begin{aligned}
U_{\nu 0}(v) = \alpha_\nu v^{(k_\nu)}(0)+\sum_{j=0}^{k_\nu-1}\alpha_{\nu j}v^{(j)}(0),\\
U_{\nu 1}(v) = \gamma_\nu v^{(k_\nu)}(1)+\sum_{j=0}^{k_\nu-1}\gamma_{\nu j}v^{(j)}(1),
\end{aligned}
$$
and for any $\nu$ at least one of the coefficients $\alpha_\nu$ and $\gamma_\nu$ is nonzero.
We assume that the system of boundary conditions (\ref{boundaryConditions}) is normalized. This means that the sum of orders of all boundary conditions
$\varkappa=\sum_{\nu} k_{\nu}$ is minimal. See \cite[\S4]{Naim:1969}; see also \cite{Shka:1982} where a more general class of boundary value problems is considered.
We introduce the notation
$$
\widetilde\alpha_\nu=\alpha_\nu(\psi(0))^{\frac{k_\nu}{2n}-\frac{2n-1}{4n}},
\quad
\widetilde\gamma_\nu=\gamma_\nu(\psi(1))^{\frac{k_\nu}{2n}-\frac{2n-1}{4n}},
\quad
\omega_k=\exp(ik\pi/n),
$$
$$
\theta_1(\psi)=
\det
\mbox{\tiny$
\begin{pmatrix}
\widetilde\gamma_1 & \widetilde\alpha_1\omega_1^{k_1} & \ldots & \widetilde\alpha_1\omega_{n-1}^{k_1} & \widetilde\alpha_1\omega_n^{k_1} & \widetilde\gamma_1\omega_{n+1}^{k_1} & \ldots & \widetilde\gamma_1\omega_{2n-1}^{k_1}\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\
\widetilde\gamma_{2n} & \widetilde\alpha_{2n}\omega_1^{k_{2n}} & \ldots & \widetilde\alpha_{2n}\omega_{n-1}^{k_{2n}} & \widetilde\alpha_{2n}\omega_n^{k_{2n}} & \widetilde\gamma_{2n}\omega_{n+1}^{k_{2n}} & \ldots & \widetilde\gamma_{2n}\omega_{2n-1}^{k_{2n}}\\
\end{pmatrix}
$},
$$
$$
\theta_{-1}(\psi)=
\det
\mbox{\tiny$
\begin{pmatrix}
\widetilde\alpha_1 & \widetilde\alpha_1\omega_1^{k_1} & \ldots & \widetilde\alpha_1\omega_{n-1}^{k_1} & \widetilde\gamma_1\omega_n^{k_1} & \widetilde\gamma_1\omega_{n+1}^{k_1} & \ldots & \widetilde\gamma_1\omega_{2n-1}^{k_1}\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\
\widetilde\alpha_{2n} & \widetilde\alpha_{2n}\omega_1^{k_{2n}} & \ldots & \widetilde\alpha_{2n}\omega_{n-1}^{k_{2n}} & \widetilde\gamma_{2n}\omega_n^{k_{2n}} & \widetilde\gamma_{2n}\omega_{n+1}^{k_{2n}} & \ldots & \widetilde\gamma_{2n}\omega_{2n-1}^{k_{2n}}\\
\end{pmatrix}
$}.
$$
\begin{theorem}
\label{spectral}
Let $L$ be a self-adjoint differential operator of order $2n$, generated by the differential expression~(\ref{diffExpression})
and boundary conditions~(\ref{boundaryConditions}). Let also $p_m\in W_\infty^m(0,1)$, $m=0,\ldots,n-1$.
Consider two eigenvalue problems
\begin{equation}
\label{BVP}
Ly=\mu\psi_{1,2}y;\qquad
U_\nu(y)=0,\quad \nu=1,\ldots,2n,
\end{equation}
where $\psi_1$, $\psi_2\in W_\infty^n(0,1)$. Suppose that the weight functions $\psi_1$, $\psi_2$ are bounded away from zero, and
\begin{equation}
\label{J1=J2}
\int_0^1\psi_1^{\frac1{2n}}(x)dx=\int_0^1\psi_2^{\frac1{2n}}(x)dx=\vartheta.
\end{equation}
Denote by $\mu_k^{(j)}$, $j=1,2$, $k\in\mathbb N$, the eigenvalues of the problems (\ref{BVP}), enumerated in ascending order and counted with multiplicity.
Then
$$
\prod_{k=1}^\infty\frac{\mu_k^{(1)}}{\mu_k^{(2)}}=
\left|\frac{\theta_{-1}(\psi_2)}{\theta_{-1}(\psi_1)}\right|.
$$
\end{theorem}
\begin{proof}
Consider the first problem in (\ref{BVP}). Denote by $\varphi_j(t,\zeta)$, $j=0,\ldots,2n-1$, the solutions of the equation $Ly=\zeta^{2n}\psi_{1}y$
specified by the initial conditions $\varphi_j^{(k)}(0,\zeta)=\delta_{jk}$.
Substituting the general solution $y(t)=c_0\varphi_0(t,\zeta)+\ldots+c_{2n-1}\varphi_{2n-1}(t,\zeta)$ into the boundary conditions, we
obtain $\mu_k^{(1)}=x_k^{2n}$, where $x_1\leq x_2\leq\ldots$ are the positive roots of the function
$$
F_1(\zeta)=\det
\begin{pmatrix}
U_1(\varphi_0) & \ldots & U_1(\varphi_{2n-1})\\
\vdots & \ddots & \vdots\\
U_{2n}(\varphi_0) & \ldots & U_{2n}(\varphi_{2n-1})\\
\end{pmatrix}.
$$
It is easy to see (\cite[\S 2]{Naim:1969}) that $F_1(\zeta)$ is an entire function.
It is well known (see~\cite{Fedo:1993}, \cite[\S 4]{Naim:1969}) that there exist solutions $\widetilde\varphi_j(t,\zeta)$, $j=0,\ldots,2n-1$, of the
equation $Ly=\zeta^{2n}\psi_{1}y$ such that for large $|\zeta|$, $|\arg(\zeta)|\leqslant\frac\pi{2n}$, the following asymptotic relation holds:
\begin{equation}
\label{tildephi}
\widetilde\varphi_j(t,\zeta)=(\psi_1(t))^{-\frac{2n-1}{4n}}\exp\left(i\omega_j\zeta\int_0^t\psi_1^{\frac1{2n}}(u)du\right)\left(1+O(|\zeta|^{-1})\right),
\quad
j=0,\ldots,2n-1.
\end{equation}
The relation (\ref{tildephi}) is uniform in $t\in[0,1]$, and one can differentiate it.
It is easy to see that for $|\arg(\zeta)|\leqslant\frac\pi{2n}$, $|\zeta|\to\infty$
$$
U_\nu(\widetilde\varphi_j)=
\left(\alpha_\nu\widetilde\varphi_j^{(k_\nu)}(0,\zeta)+\gamma_\nu\widetilde\varphi_j^{(k_\nu)}(1,\zeta)\right)
\left(1+O(|\zeta|^{-1})\right).
$$
For large $|\zeta|$, the functions $\widetilde\varphi_j(t,\zeta)$ are linearly independent. Therefore there exists a matrix
$C(\zeta)=(c_{jk})_{0\leq j,k\leq 2n-1}$ depending on $\zeta$ such that
$$
(\varphi_0(t,\zeta),\ldots,\varphi_{2n-1}(t,\zeta))^\top=C(\zeta)(\widetilde\varphi_0(t,\zeta),\ldots,\widetilde\varphi_{2n-1}(t,\zeta))^\top.
$$
Thus,
\begin{equation}
\label{F1}
F_1(\zeta)=\det(C(\zeta))\cdot\det
\begin{pmatrix}
U_1(\widetilde\varphi_0) & \ldots & U_{2n}(\widetilde\varphi_0)\\
\vdots & \ddots & \vdots\\
U_1(\widetilde\varphi_{2n-1}) & \ldots & U_{2n}(\widetilde\varphi_{2n-1})\\
\end{pmatrix}.
\end{equation}
By the initial conditions we have
$$
I_{2n}=C(\zeta)
\begin{pmatrix}
\widetilde\varphi_0(0,\zeta) & \ldots & \widetilde\varphi_0^{(2n-1)}(0,\zeta)\\
\vdots & \ddots & \vdots\\
\widetilde\varphi_{2n-1}(0,\zeta) & \ldots & \widetilde\varphi_{2n-1}^{(2n-1)}(0,\zeta)\\
\end{pmatrix}.
$$
By the relations (\ref{tildephi}), we obtain for $|\arg(\zeta)|\leqslant\frac\pi{2n}$, $|\zeta|\to\infty$
\begin{multline*}
\det
\begin{pmatrix}
\widetilde\varphi_0(0,\zeta) & \ldots & \widetilde\varphi_0^{(2n-1)}(0,\zeta)\\
\vdots & \ddots & \vdots\\
\widetilde\varphi_{2n-1}(0,\zeta) & \ldots & \widetilde\varphi_{2n-1}^{(2n-1)}(0,\zeta)\\
\end{pmatrix}
=(\psi_1(0))^{2n\left(-\frac12+\frac1{4n}\right)}
\times\\\times
\det
\begin{pmatrix}
1 & \left(i\zeta(\psi_1(0))^{\frac1{2n}}\right)^1 & \ldots & \left(i\zeta(\psi_1(0))^{\frac1{2n}}\right)^{2n-1}\\
\vdots & \vdots & \ddots & \vdots\\
1 & \left(i\omega_{2n-1}\zeta(\psi_1(0))^{\frac1{2n}}\right)^1 & \ldots & \left(i\omega_{2n-1}\zeta(\psi_1(0))^{\frac1{2n}}\right)^{2n-1}\\
\end{pmatrix}
(1+O(|\zeta|^{-1}))
=\\=
(\psi_1(0))^{\frac{1-2n}2}
\left(i\zeta(\psi_1(0))^{\frac1{2n}}\right)^{n(2n-1)}
\mathcal V(1,\omega_1,\ldots,\omega_{2n-1})
(1+O(|\zeta|^{-1}))
=\\=
\left(i\zeta\right)^{2n^2-n}
\mathcal V(1,\omega_1,\ldots,\omega_{2n-1})
(1+O(|\zeta|^{-1})).
\end{multline*}
Whence, for $|\arg(\zeta)|\leqslant\frac\pi{2n}$, $|\zeta|\to\infty$, we have
$$
\det(C(\zeta)) = \frac{\left(i\zeta\right)^{n-2n^2}}{\mathcal V(1,\omega_1,\ldots,\omega_{2n-1})}\cdot(1+O(|\zeta|^{-1})).
$$
Next, following \cite[\S 4]{Naim:1969}, we obtain for $|\arg(\zeta)|\leqslant\frac\pi{2n}$, $|\zeta|\to\infty$
\begin{multline*}
\det
\begin{pmatrix}
U_1(\widetilde\varphi_0) & \ldots & U_1(\widetilde\varphi_{2n-1})\\
\vdots & \ddots & \vdots\\
U_{2n}(\widetilde\varphi_0) & \ldots & U_{2n}(\widetilde\varphi_{2n-1})\\
\end{pmatrix}
=\\=
(i\zeta)^{\varkappa}
\exp\left(-i\omega_1\vartheta\zeta-i\omega_2\vartheta\zeta-\ldots-i\omega_{n-1}\vartheta\zeta\right)
\times\\\times
\left(\theta_1(\psi_1)\exp(i\vartheta\zeta)+\theta_0(\psi_1)+\theta_{-1}(\psi_1)\exp(-i\vartheta\zeta)\right)
(1+O(|\zeta|^{-1}))
\end{multline*}
(we recall that $\varkappa=k_1+\ldots+k_{2n}$), where $\theta_0(\psi_1)$ is some unimportant constant.
It is easy to see (\cite[Theorem 1.1]{Naza:2009}) that $\theta_1(\psi_1)=-\omega_1^\varkappa\theta_{-1}(\psi_1)$.
Substituting these formulas into (\ref{F1}), we obtain for $|\arg(\zeta)|\leqslant\frac\pi{2n}$, $|\zeta|\to\infty$
\begin{multline*}
F_1(\zeta)=
\frac
{\left(i\zeta\right)^{n-2n^2+\varkappa}\exp\left(-i\omega_1\vartheta\zeta-i\omega_2\vartheta\zeta-\ldots-i\omega_{n-1}\vartheta\zeta\right)}
{\mathcal V(1,\omega_1,\ldots,\omega_{2n-1})}
\times\\\times
\left(\theta_{-1}(\psi_1)(\exp(-i\vartheta\zeta)-\omega_1^\varkappa\exp(i\vartheta\zeta))+\theta_0(\psi_1)\right)
(1+O(|\zeta|^{-1})).
\end{multline*}
Now we consider the second problem in (\ref{BVP}) and define the function $F_2(\zeta)$ similarly to $F_1(\zeta)$ with $\psi_2$ instead of $\psi_1$.
Then the following relation holds:
\begin{multline*}
F_2(\zeta)=
\frac
{\left(i\zeta\right)^{n-2n^2+\varkappa}\exp\left(-i\omega_1\vartheta\zeta-i\omega_2\vartheta\zeta-\ldots-i\omega_{n-1}\vartheta\zeta\right)}
{\mathcal V(1,\omega_1,\ldots,\omega_{2n-1})}
\times\\\times
\left(\theta_{-1}(\psi_2)(\exp(-i\vartheta\zeta)-\omega_1^\varkappa\exp(i\vartheta\zeta))+\theta_0(\psi_2)\right)
(1+O(|\zeta|^{-1})).
\end{multline*}
Whence, for $|\zeta|\to\infty$, $\arg(\zeta)\neq\frac{\pi j}{n}$, $j\in\mathbb Z$, we obtain
$$
\left|\frac{F_2(\zeta)}{F_1(\zeta)}\right|\rightarrow
\left|\frac{\theta_{-1}(\psi_2)}{\theta_{-1}(\psi_1)}\right|.
$$
Moreover, the quotient $\left|{F_2(\zeta)}/{F_1(\zeta)}\right|$ is uniformly bounded on the circles $|\zeta|=r_k$ for a suitable sequence $r_k\to\infty$.
Further, by the continuity of solutions of a differential equation with respect to parameters, we have $F_1(\zeta)/F_2(\zeta)\rightrightarrows1$ as $\zeta\to0$.
Applying Jensen's theorem to $F_1(\zeta)$ and $F_2(\zeta)$, we obtain
$$
\prod_{k=1}^\infty\frac{\mu_k^{(1)}}{\mu_k^{(2)}}=
\exp\left(\lim_{\rho\to\infty}\frac1{2\pi}\int_0^{2\pi}\ln\frac{|F_2(\rho e^{i\theta})|}{|F_1(\rho e^{i\theta})|}d\theta\right)=
\left|\frac{\theta_{-1}(\psi_2)}{\theta_{-1}(\psi_1)}\right|.
$$
\end{proof}
\begin{corollary}
\label{main}
Let the covariance of a centered Gaussian process~$X(t)$,
$0\leqslant t\leqslant1$, be the Green function of a self-adjoint operator~$L$ generated by the differential expression~(\ref{diffExpression})
and boundary conditions~(\ref{boundaryConditions}).
Let the coefficients $p_m$, $m=0,\ldots,n-1$, and the weight functions $\psi_1$, $\psi_2$ satisfy the assumptions of Theorem~\ref{spectral}. Then
$$
\lim_{\varepsilon\to0}
\frac{{\sf P}(\|X\|_{\psi_1}\leq\varepsilon)}{{\sf P}(\|X\|_{\psi_2}\leq\varepsilon)}=
\left|\frac{\theta_{-1}(\psi_2)}{\theta_{-1}(\psi_1)}\right|^{1/2}.
$$
\end{corollary}
\begin{proof}
Denote by $\mu_k^{(1,2)}$ the eigenvalues of the problems (\ref{BVP}).
Using the Li comparison theorem (see \cite{Li:1992,Gao:Hann:Torc:2003a}) and Theorem~\ref{spectral}, we obtain
$$
\lim_{\varepsilon\to0}\frac{{\sf P}(\|X\|_{\psi_1}\leq\varepsilon)}{{\sf P}(\|X\|_{\psi_2}\leq\varepsilon)}=
\left(\prod_{k=1}^\infty\frac{\mu_k^{(1)}}{\mu_k^{(2)}}\right)^{\frac12}=
\left|\frac{\theta_{-1}(\psi_2)}{\theta_{-1}(\psi_1)}\right|^{\frac12}.
$$
\end{proof}
\begin{rem}
If the assumption (\ref{J1=J2}) does not hold, then the probabilities ${\sf P}(\|X\|_{\psi_{1,2}}\leq\varepsilon)$ have different logarithmic asymptotics (see
\cite[Theorem 7.3]{Naza:Niki:2004}).
\end{rem}
\section{Separated boundary conditions}
Now we consider an important particular case.
\begin{theorem}
\label{sep}
Let the assumptions of Corollary~\ref{main} be satisfied. Suppose also that the boundary conditions~(\ref{boundaryConditions}) are separated in the main terms,
i.e., they have the form
$$
\left.
\begin{aligned}
v^{(k_\nu)}(0)+\sum_{j=0}^{k_\nu-1}\left(\alpha_{\nu j}v^{(j)}(0)+\gamma_{\nu j}v^{(j)}(1)\right)=0,\\
v^{(k'_\nu)}(1)+\sum_{j=0}^{k'_\nu-1}\left(\alpha'_{\nu j}v^{(j)}(0)+\gamma'_{\nu j}v^{(j)}(1)\right)=0,
\end{aligned}
\right\}
\quad
\nu=1,\ldots,n.
$$
Denote by $\varkappa_0$ and $\varkappa_1$ the sums of the orders of the boundary conditions at zero and one, respectively:
$\varkappa_0=k_1+\ldots+k_n$, $\varkappa_1=k'_1+\ldots+k'_n$.
Then
\begin{equation}
\label{separated}
\lim_{\varepsilon\to0}
\frac{{\sf P}(\|X\|_{\psi_1}\leq\varepsilon)}{{\sf P}(\|X\|_{\psi_2}\leq\varepsilon)}=
\left(
\frac{\psi_2(0)}{\psi_1(0)}
\right)
^{-\frac{n}4+\frac18+\frac{\varkappa_0}{4n}}
\left(
\frac{\psi_2(1)}{\psi_1(1)}
\right)
^{-\frac{n}4+\frac18+\frac{\varkappa_1}{4n}}.
\end{equation}
\end{theorem}
\begin{proof}
Under the assumptions of the theorem, the matrix determining $\theta_{-1}(\psi)$ is block diagonal, and we obtain
$$
\theta_{-1}(\psi)=
(-1)^{\varkappa_1}
(\psi(0))^{{\frac{\varkappa_0}{2n}-\frac{2n-1}{4}}}
(\psi(1))^{{\frac{\varkappa_1}{2n}-\frac{2n-1}{4}}}
\cdot
\mathcal V(\omega_1^{k_1},\ldots,\omega_1^{k_n})
\cdot
\mathcal V(\omega_1^{k'_1},\ldots,\omega_1^{k'_n}).
$$
Therefore,
$$
\frac{\theta_{-1}(\psi_2)}{\theta_{-1}(\psi_1)}
=
\left(\frac{\psi_2(0)}{\psi_1(0)}\right)^{\frac{\varkappa_0}{2n}-\frac{2n-1}{4}}
\left(\frac{\psi_2(1)}{\psi_1(1)}\right)^{\frac{\varkappa_1}{2n}-\frac{2n-1}{4}}.
$$
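Taking square roots, as prescribed by Corollary~\ref{main}, turns these exponents into those of~(\ref{separated}); for instance, at the point zero,
$$
\frac12\left(\frac{\varkappa_0}{2n}-\frac{2n-1}{4}\right)
=\frac{\varkappa_0}{4n}-\frac{2n-1}{8}
=-\frac{n}{4}+\frac18+\frac{\varkappa_0}{4n},
$$
and likewise at the point one with $\varkappa_1$ in place of $\varkappa_0$.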
\end{proof}
Many classical Gaussian processes satisfy the assumptions of Theorem \ref{sep}. We give several examples.
For a random process $X(t)$, $0\leq t\leq 1$, denote by $X_m^{[\beta_1,\ldots,\beta_m]}(t)$, $0\leq t\leq 1$, the $m$-times integrated process:
$$
X_m^{[\beta_1,\ldots,\beta_m]}(t)=(-1)^{\beta_1+\ldots+\beta_m}\int_{\beta_m}^t\dots\int_{\beta_1}^{t_1}X(s)\,ds\,dt_1\ldots dt_{m-1}
$$
(each index $\beta_\nu$ equals 0 or 1).
Following \cite{Naza:2003}, we introduce the notation
$$
z_n=\exp(i\pi/n),
\quad
\varepsilon_n=\left(\varepsilon\sqrt{2n\sin\frac\pi{2n}}\right)^{\frac1{2n-1}},
\quad
\mathcal D_n=\frac{2n-1}{2n\sin\frac\pi{2n}}.
$$
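For concreteness, these quantities are straightforward to evaluate numerically; a small sketch follows (the function names are ours, not from the cited works):

```python
import math

def z(n):
    # z_n = exp(i*pi/n)
    return complex(math.cos(math.pi / n), math.sin(math.pi / n))

def eps_n(n, eps):
    # epsilon_n = (eps * sqrt(2n sin(pi/(2n))))^(1/(2n-1))
    return (eps * math.sqrt(2 * n * math.sin(math.pi / (2 * n)))) ** (1.0 / (2 * n - 1))

def D(n):
    # D_n = (2n-1) / (2n sin(pi/(2n)))
    return (2 * n - 1) / (2 * n * math.sin(math.pi / (2 * n)))
```

For $n=1$ one gets $z_1=-1$... actually $z_2=i$, $\mathcal D_1=\tfrac12$, and $\varepsilon_1=\varepsilon\sqrt2$, which serve as sanity checks.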
\begin{proposition}
\label{wiener}
Suppose that the function $\psi\in W_\infty^{m+1}(0,1)$ is bounded away from zero and satisfies the relation $\int_0^1\psi^{\frac1{2(m+1)}}(x)dx=1$.
Then for the integrated Brownian motion the following relation holds:
\begin{multline*}
{\sf P}(\|W_m^{[\beta_1,\ldots,\beta_m]}\|_{\psi}\leq\varepsilon)
\sim
\left(
\frac{\psi(1)}{\psi(0)}
\right)
^{-\frac{m+1}8+\frac{\mathcal K}{4(m+1)}}
\times\\\times
\frac{(2m+2)^{\frac{m}2+1}}{|\mathcal V(1,z_{m+1}^{1-3\beta_1},z_{m+1}^{2-5\beta_2},\ldots,z_{m+1}^{m+\beta_m})|}
\frac{\varepsilon_{m+1}}{\sqrt{\pi\mathcal D_{m+1}}}
\exp\left(-\frac{\mathcal D_{m+1}}{2\varepsilon_{m+1}^2}\right),
\end{multline*}
where $\mathcal K=\mathcal K(\beta_1,\ldots,\beta_m)=\sum_{\nu=1}^m(2\nu+1)\beta_\nu$.
\end{proposition}
\begin{proof}
The boundary value problem corresponding to $W_m$ was derived in \cite{Gao:Hann:Torc:2003}, see also \cite{Naza:Niki:2004}.
Namely, in Theorem \ref{sep} one should set $n=m+1$,
$$
k_\nu=
\begin{cases}
m-\nu & \text{for } \beta_\nu=0,\\
m+1+\nu & \text{for } \beta_\nu=1,
\end{cases}
\quad
\nu=1,\ldots,m;
\qquad
k_{m+1}=m,
$$
$$
k'_\nu=2m+1-k_\nu,\quad \nu=1,\ldots,m+1.
$$
This implies $\varkappa_0=\mathcal K+\frac{m(m+1)}{2}$, $\varkappa_1=\frac{(m+1)(3m+2)}{2}-\mathcal K$.
We substitute these quantities into (\ref{separated}) and obtain
$$
{\sf P}(\|W_m^{[\beta_1,\ldots,\beta_m]}\|_{\psi}\leq\varepsilon)\sim
\left(
\frac{\psi(1)}{\psi(0)}
\right)
^{-\frac{m+1}8+\frac{\mathcal K}{4(m+1)}}
\cdot
{\sf P}(\|W_m^{[\beta_1,\ldots,\beta_m]}\|\leq\varepsilon),
\quad
\varepsilon\to0.
$$
The asymptotics of the probability ${\sf P}(\|W_m^{[\beta_1,\ldots,\beta_m]}\|\leq\varepsilon)$ was obtained in \cite[Proposition 1.5]{Naza:2003}.
\end{proof}
In a similar way, using \cite[Propositions 1.6 and 1.8]{Naza:2003}, we obtain the following relations.
\begin{proposition}
Let $B(t)$ be the Brownian bridge. Then, under the assumptions of Proposition \ref{wiener}, the following relation holds:
\begin{multline*}
{\sf P}(\|B_m^{[\beta_1,\ldots,\beta_m]}\|_{\psi}\leq\varepsilon)
\sim
\left(
\psi(0)
\right)
^{\frac{m+1}8-\frac{\mathcal K}{4(m+1)}}
\left(
\psi(1)
\right)
^{\frac{\mathcal K+1}{4(m+1)}-\frac{m+1}8}
\times\\\times
\frac{(2m+2)^{\frac{m+1}2}\sqrt{2\sin\frac\pi{2m+2}}}{|\mathcal V(z_{m+1}^{1-3\beta_1},z_{m+1}^{2-5\beta_2},\ldots,z_{m+1}^{m+\beta_m})|}
\frac1{\sqrt{\pi\mathcal D_{m+1}}}
\exp\left(-\frac{\mathcal D_{m+1}}{2\varepsilon_{m+1}^2}\right).
\end{multline*}
\end{proposition}
\begin{proposition}
Let $\mathbb B_m(t)=\left(W_m(t)|W_j(1)=0,0\leq j\leq m\right)$ be the conditional integrated Wiener process (see \cite{Lach:2002}).
Then, under the assumptions of Proposition \ref{wiener}, the following relation holds:
\begin{multline*}
{\sf P}(\|\mathbb B_m\|_\psi\leq\varepsilon)\sim
(\psi(0)\psi(1))^{\frac18}
\times\\\times
\frac{(2m+2)^{\frac{m}2+1}\cdot\left(\prod_{j=0}^m\frac{j!}{(m+1+j)!}\right)^{\frac12}}{|\mathcal V(1,z_{m+1},\ldots,z_{m+1}^m)|\sqrt{\pi\mathcal D_{m+1}}\cdot\varepsilon_{m+1}^{m(m+2)}}
\exp\left(-\frac{\mathcal D_{m+1}}{2\varepsilon_{m+1}^2}\right).
\end{multline*}
\end{proposition}
Let us introduce the notation
$$
\widetilde\varepsilon_n=\left(\varepsilon\sqrt{n\sin\frac\pi{2n}}\right)^{\frac1{2n-1}},
\quad
\widehat\varepsilon_n=\left(\varepsilon\sqrt{\frac{2n}{c_n}\sin\frac\pi{2n}}\right)^{\frac1{2n-1}},
\quad
c_n=\frac{2\sqrt\pi\Gamma(n)}{\Gamma(n-\frac12)}.
$$
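These quantities can also be checked numerically; for instance $c_1=2$ and $\widehat\varepsilon_1=\widetilde\varepsilon_1=\varepsilon$. A small sketch (function names ours):

```python
import math

def c(n):
    # c_n = 2 sqrt(pi) Gamma(n) / Gamma(n - 1/2)
    return 2.0 * math.sqrt(math.pi) * math.gamma(n) / math.gamma(n - 0.5)

def eps_tilde(n, eps):
    # tilde-epsilon_n = (eps * sqrt(n sin(pi/(2n))))^(1/(2n-1))
    return (eps * math.sqrt(n * math.sin(math.pi / (2 * n)))) ** (1.0 / (2 * n - 1))

def eps_hat(n, eps):
    # hat-epsilon_n = (eps * sqrt(2n sin(pi/(2n)) / c_n))^(1/(2n-1))
    return (eps * math.sqrt(2 * n * math.sin(math.pi / (2 * n)) / c(n))) ** (1.0 / (2 * n - 1))
```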
The following relations can be obtained using \cite[Theorem 2.2]{Naza:2003}, \cite[Theorem 2.2]{Naza:2009} and \cite[Theorem 3.1]{Puse:2010}.
\begin{proposition}
\label{OU}
Let $U(t)$ be the Ornstein--Uhlenbeck process, i.e. the centered Gaussian process with the covariance function $\mathsf EU(t)U(s)=e^{-|t-s|}$.
Then, under the assumptions of Proposition \ref{wiener}, the following relation holds:
\begin{multline*}
{\sf P}(\|U_m^{[\beta_1,\ldots,\beta_m]}\|_{\psi}\leq\varepsilon)
\sim
\left(
\psi(0)
\right)
^{\frac{m+1}8-\frac{\mathcal K+1}{4(m+1)}}
\left(
\psi(1)
\right)
^{\frac{\mathcal K}{4(m+1)}-\frac{m+1}8}
\times\\\times
\frac{(2m+2)^{\frac{m+1}2}2\sqrt{e}\sqrt{\sin\frac\pi{2m+2}}}{|\mathcal V(z_{m+1}^{1-3\beta_1},z_{m+1}^{2-5\beta_2},\ldots,z_{m+1}^{m+\beta_m})|}
\frac{\widetilde\varepsilon_{m+1}^2}{\sqrt{\pi\mathcal D_{m+1}}}
\exp\left(-\frac{\mathcal D_{m+1}}{2\widetilde\varepsilon_{m+1}^2}\right).
\end{multline*}
\end{proposition}
\begin{proposition}
Let $S(t)=W(t+1)-W(t)$ be the Slepian process (see \cite{Slep:1961}). Then, under the assumptions of Proposition \ref{wiener}, the following relation holds:
$$
{\sf P}(\|S_m^{[\beta_1,\ldots,\beta_m]}\|_{\psi}\leq\varepsilon)
\sim
\sqrt{\frac2e}
{\sf P}(\|U_m^{[\beta_1,\ldots,\beta_m]}\|_{\psi}\leq\varepsilon).
$$
\end{proposition}
\begin{proposition}
\label{Matern}
Let $M^{(n)}(t)$ be the Mat\'ern process (see \cite{Mate:1986}), i.e.\ the centered Gaussian process with the covariance function
$$
\mathsf EM^{(n)}(t)M^{(n)}(s)=\frac{(n-1)!}{(2n-2)!}\exp(-|t-s|)\sum_{k=0}^{n-1}\frac{(n+k-1)!}{k!(n-k-1)!}(2|t-s|)^{n-k-1}.
$$
Then, under the assumptions of Proposition \ref{wiener}, the following relation holds:
$$
{\sf P}(\|M^{(n)}\|_{\psi}\leq\varepsilon)
\sim
\left(
\psi(0)\psi(1)
\right)
^{-\frac{n}8}
\frac{\sqrt{2^{n^2+n+1}n^{n+1}e^n}}{|\mathcal V(1,z_n,\ldots,z_n^{n-1})|}
\frac{\widehat\varepsilon_n{}^{n^2+1}}{\sqrt{\pi\mathcal D_n}}
\exp\left(-\frac{\mathcal D_n}{2\widehat\varepsilon_n{}^2}\right).
$$
\end{proposition}
\begin{rem}
It is well known that $\{M^{(1)}(t),0\leq t\leq1\} \stackrel{law}{=} \{U(t),0\leq t\leq1\}$. It is easy to see that the formula from Proposition
\ref{OU} with $m=0$ coincides with the formula from Proposition \ref{Matern} with $n=1$.
\end{rem}
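The coincidence noted in the remark can be checked numerically from the two displayed covariance formulas; a small sketch (the function name is ours):

```python
import math

def matern_cov(n, t, s):
    # E M^(n)(t) M^(n)(s) as in the displayed formula above.
    r = abs(t - s)
    pref = math.factorial(n - 1) / math.factorial(2 * n - 2)
    total = sum(
        math.factorial(n + k - 1)
        / (math.factorial(k) * math.factorial(n - k - 1))
        * (2.0 * r) ** (n - k - 1)
        for k in range(n)
    )
    return pref * math.exp(-r) * total
```

For $n=1$ the prefactor and the sum both equal one, so the covariance reduces to $e^{-|t-s|}$, i.e.\ the Ornstein--Uhlenbeck covariance.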
\section{Non-separated boundary conditions}
If some boundary conditions are not separated in the main terms, they can be split into pairs of the following form (see \cite[\S 18]{Naim:1969}):
\begin{equation}
\label{non-sep}
\begin{aligned}
&a v^{(\ell)}(0)+b v^{(\ell)}(1)+\sum_{j=0}^{\ell-1}\left(\alpha_{\nu j}v^{(j)}(0)+\gamma_{\nu j}v^{(j)}(1)\right)=0,\\
&b v^{(2n-\ell-1)}(0)+a v^{(2n-\ell-1)}(1)+\sum_{j=0}^{2n-\ell-2}\left(\alpha'_{\nu j}v^{(j)}(0)+\gamma'_{\nu j}v^{(j)}(1)\right)=0.
\end{aligned}
\end{equation}
We consider the case of a single such pair.
\begin{theorem}
\label{non-separated}
Let the assumptions of Corollary~\ref{main} be satisfied. Suppose also that one pair of boundary conditions has the form (\ref{non-sep}) while other ones
are separated in the main terms\footnote{Note that the normalization condition implies that the numbers $k_\nu$ and $k'_\nu$, $\nu=1,\ldots,n-1$, differ
from $\ell$ and $2n-\ell-1$.}:
$$
\left.
\begin{aligned}
v^{(k_\nu)}(0)+\sum_{j=0}^{k_\nu-1}\left(\alpha_{\nu j}v^{(j)}(0)+\gamma_{\nu j}v^{(j)}(1)\right)=0,\\
v^{(k'_\nu)}(1)+\sum_{j=0}^{k'_\nu-1}\left(\alpha_{\nu j}v^{(j)}(0)+\gamma_{\nu j}v^{(j)}(1)\right)=0,
\end{aligned}
\right\}
\quad
\nu=1,\ldots,n-1.
$$
Denote by $\varkappa_0$ and $\varkappa_1$ the sums of the orders of the separated boundary conditions at zero and one, respectively:
$\varkappa_0=k_1+\ldots+k_{n-1}$, $\varkappa_1=k'_1+\ldots+k'_{n-1}$.
Then
\begin{multline}
\label{th-non-sep}
\lim_{\varepsilon\to0}
\frac{{\sf P}(\|X\|_{\psi_1}\leq\varepsilon)}{{\sf P}(\|X\|_{\psi_2}\leq\varepsilon)}=
\left(
\frac{\psi_2(0)}{\psi_1(0)}
\right)
^{\frac{\varkappa_0}{4n}-\frac{(n-1)(2n-1)}{8n}}
\left(
\frac{\psi_2(1)}{\psi_1(1)}
\right)
^{\frac{\varkappa_1}{4n}-\frac{(n-1)(2n-1)}{8n}}
\times\\\times
\left|
\frac
{\mathcal M_1 a^2\left(\frac{\psi_2(1)}{\psi_2(0)}\right)^{\frac{2n-2\ell-1}{4n}}+
\mathcal M_2 b^2\left(\frac{\psi_2(0)}{\psi_2(1)}\right)^{\frac{2n-2\ell-1}{4n}}}
{\mathcal M_1 a^2\left(\frac{\psi_1(1)}{\psi_1(0)}\right)^{\frac{2n-2\ell-1}{4n}}+
\mathcal M_2 b^2\left(\frac{\psi_1(0)}{\psi_1(1)}\right)^{\frac{2n-2\ell-1}{4n}}}
\right|^{\frac12},
\end{multline}
where
$$
\mathcal M_1=\mathcal V(\omega_1^{k_1},\ldots,\omega_1^{k_{n-1}},\omega_1^{\ell})\cdot
\mathcal V(\omega_1^{2n-\ell-1},\omega_1^{k'_1},\ldots,\omega_1^{k'_{n-1}}),
$$
$$
\mathcal M_2=\mathcal V(\omega_1^{k_1},\ldots,\omega_1^{k_{n-1}},\omega_1^{2n-\ell-1})\cdot
\mathcal V(\omega_1^{\ell},\omega_1^{k'_1},\ldots,\omega_1^{k'_{n-1}}).
$$
\end{theorem}
\begin{proof}
We have
\begin{multline*}
\theta_{-1}(\psi)=
(\psi(0))^{\frac{\varkappa_0}{2n}-\frac{(n-1)(2n-1)}{4n}}
(\psi(1))^{\frac{\varkappa_1}{2n}-\frac{(n-1)(2n-1)}{4n}}\times
\\
\times
\det
\mbox{\tiny$
\begin{pmatrix}
1 & \omega_1^{k_1} & \ldots & \omega_{n-1}^{k_1} & 0 & 0 & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\
1 & \omega_1^{k_{n-1}} & \ldots & \omega_{n-1}^{k_{n-1}} & 0 & 0 & \ldots & 0\\
\widetilde\alpha_n & \widetilde\alpha_n\omega_1^{\ell} & \ldots & \widetilde\alpha_n\omega_{n-1}^{\ell} & \widetilde\gamma_n\omega_n^{\ell} & \widetilde\gamma_n\omega_{n+1}^{\ell} & \ldots & \widetilde\gamma_n\omega_{2n-1}^{\ell}\\
\widetilde\alpha_{n+1} & \widetilde\alpha_{n+1}\omega_1^{2n-\ell-1} & \ldots & \widetilde\alpha_{n+1}\omega_{n-1}^{2n-\ell-1} & \widetilde\gamma_{n+1}\omega_n^{2n-\ell-1} & \widetilde\gamma_{n+1}\omega_{n+1}^{2n-\ell-1} & \ldots & \widetilde\gamma_{n+1}\omega_{2n-1}^{2n-\ell-1}\\
0 & 0 & \ldots & 0 & \omega_n^{k'_1} & \omega_{n+1}^{k'_1} & \ldots & \omega_{2n-1}^{k'_1}\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & 0 & \omega_n^{k'_{n-1}} & \omega_{n+1}^{k'_{n-1}} & \ldots & \omega_{2n-1}^{k'_{n-1}}\\
\end{pmatrix}
$}
=\\=
(\psi(0))^{\frac{\varkappa_0}{2n}-\frac{(n-1)(2n-1)}{4n}}
(\psi(1))^{\frac{\varkappa_1}{2n}-\frac{(n-1)(2n-1)}{4n}}
\cdot
(-1)^{\varkappa_1+2n-\ell-1}
\times\\\times
\left[
\widetilde\alpha_n\widetilde\gamma_{n+1}
\mathcal V(\omega_1^{k_1},\ldots,\omega_1^{k_{n-1}},\omega_1^{\ell})\cdot
\mathcal V(\omega_1^{2n-\ell-1},\omega_1^{k'_1},\ldots,\omega_1^{k'_{n-1}})
\right.
+\\+
\left.
\widetilde\alpha_{n+1}\widetilde\gamma_{n}
\mathcal V(\omega_1^{k_1},\ldots,\omega_1^{k_{n-1}},\omega_1^{2n-\ell-1})\cdot
\mathcal V(\omega_1^{\ell},\omega_1^{k'_1},\ldots,\omega_1^{k'_{n-1}})
\right]
.
\end{multline*}
Since
$$
\widetilde\alpha_n=a(\psi(0))^{\frac{2\ell+1-2n}{4n}}, \quad \widetilde\alpha_{n+1}=b(\psi(0))^{\frac{2n-2\ell-1}{4n}},
$$
$$
\widetilde\gamma_n=b(\psi(1))^{\frac{2\ell+1-2n}{4n}}, \quad \widetilde\gamma_{n+1}=a(\psi(1))^{\frac{2n-2\ell-1}{4n}},
$$
we have
\begin{multline*}
|\theta_{-1}(\psi)|=
(\psi(0))^{\frac{\varkappa_0}{2n}-\frac{(n-1)(2n-1)}{4n}}
(\psi(1))^{\frac{\varkappa_1}{2n}-\frac{(n-1)(2n-1)}{4n}}
\times\\\times
\left|
\mathcal M_1 a^2\left(\frac{\psi(1)}{\psi(0)}\right)^{\frac{2n-2\ell-1}{4n}}
+
\mathcal M_2 b^2\left(\frac{\psi(0)}{\psi(1)}\right)^{\frac{2n-2\ell-1}{4n}}
\right|
.
\end{multline*}
Now Corollary \ref{main} implies (\ref{th-non-sep}).
\end{proof}
The following relations can be obtained from Theorem~\ref{non-separated} using \cite[Theorem~3]{Puse:2010a} and \cite[Theorem~3.4]{Naza:2009}.
\begin{proposition}
Let $Y(t)$ be the Bogolyubov process (see \cite{Sank:1999,Sank:2005}). Then, under the assumptions of Proposition \ref{wiener}, the following relation holds:
\begin{multline*}
{\sf P}
\{\|Y_m^{[\beta_1,\ldots,\beta_m]}\|_\psi\leq\varepsilon\}\sim
\left(\frac{\psi(0)}{\psi(1)}\right)^{\frac{m(m+2)}{8(m+1)}-\frac{\mathcal K}{4(m+1)}}
\times\\\times
\left|
\prod_{\nu=1}^m\big|1+z_{m+1}^{k_\nu}\big|^2 \left(\frac{\psi(0)}{\psi(1)}\right)^{\frac{1}{4(m+1)}}
+\prod_{\nu=1}^m\big|1+z_{m+1}^{2m+1-k_\nu}\big|^2 \left(\frac{\psi(1)}{\psi(0)}\right)^{\frac{1}{4(m+1)}}
\right|^{-\frac12}
\times\\\times
\frac{2^{m+2}(m+1)^{m+1}\sinh(\omega/2)}{|\mathcal V(z_{m+1}^{k_1},\ldots,z_{m+1}^{k_{m}})|}
\frac{\varepsilon_{m+1}}{\sqrt{\pi\mathcal D_{m+1}}}
\exp\left(-\frac{\mathcal D_{m+1}}{2\varepsilon_{m+1}^2}\right),
\end{multline*}
where $k_\nu=\nu-(2\nu+1)\beta_\nu$, $\nu=1,\dots,m$.
\end{proposition}
Consider the multiply centered-integrated Brownian bridge:
$$
B_{\{0\}}(t)=B(t),\quad B_{\{l\}}(t)=\int_0^t\overline{B_{\{l-1\}}}(s)ds,\quad l\in\mathbb N.
$$
\begin{proposition}
Suppose that the function $\psi\in W_\infty^{m+2}(0,1)$ is bounded away from zero and satisfies the relation $\int_0^1\psi^{\frac1{2(m+2)}}(x)dx=1$.
Then the following relation holds:
\begin{multline*}
{\sf P}\{\|(B_{\{1\}})_m^{[\beta_1,\ldots,\beta_m]}\|_\psi\leq\varepsilon\}\sim
(\psi(0))^{\frac{m^2-3}{8(m+2)}-\frac{\widetilde{\mathcal K}}{4(m+2)}}
(\psi(1))^{\frac{\widetilde{\mathcal K}}{4(m+2)}-\frac{m^2+8m+3}{8(m+2)}}
\times
\\
\times
\left|
\prod_{\nu=1}^m\big|1+z_{m+2}^{k_\nu}\big|^2 \left(\frac{\psi(0)}{\psi(1)}\right)^{\frac{1}{4(m+2)}}
+
\prod_{\nu=1}^m\big|1+z_{m+2}^{2m+3-k_\nu}\big|^2 \left(\frac{\psi(1)}{\psi(0)}\right)^{\frac{1}{4(m+2)}}
\right|^{-\frac12}
\times\\\times
\frac{(2m+4)^{\frac{m+2}{2}}\sqrt{2\sin\frac{3\pi}{2m+4}}}{|\mathcal V(z_{m+2}^{k_1},\ldots,z_{m+2}^{k_{m}})|}
\frac{\varepsilon_{m+2}^{-2}}{\sqrt{\pi\mathcal D_{m+2}}}
\exp\left(-\frac{\mathcal D_{m+2}}{2\varepsilon_{m+2}^2}\right),
\end{multline*}
where $\widetilde{\mathcal K}=\widetilde{\mathcal K}(\beta_1,\ldots,\beta_m)=\sum_{\nu=1}^m(2\nu+3)\beta_\nu$ and
$k_\nu=\nu-(2\nu+3)\beta_\nu$, $\nu=1,\dots,m$.
\end{proposition}
Now we consider the case of boundary conditions periodic in the main terms. The following theorem can be easily derived from Corollary~\ref{main}.
\begin{theorem}
\label{periodic}
Let the assumptions of Corollary~\ref{main} be satisfied. Suppose also that the boundary conditions~(\ref{boundaryConditions}) have the form
$$
v^{(\nu)}(0)-v^{(\nu)}(1)+\sum_{j=0}^{\nu-1}\left(\alpha_{\nu j}v^{(j)}(0)+\gamma_{\nu j}v^{(j)}(1)\right)=0,
\quad
\nu=0,\ldots,2n-1.
$$
Then
\begin{multline*}
\lim_{\varepsilon\to0}
\frac{{\sf P}(\|X\|_{\psi_1}\leq\varepsilon)}{{\sf P}(\|X\|_{\psi_2}\leq\varepsilon)}=
\left(\frac{\psi_1(0)\psi_1(1)}{\psi_2(0)\psi_2(1)}\right)^{\frac{2n-1}8}
\times\\\times
\left|
\frac{\mathcal V((\psi_2(0))^{\frac{1}{2n}},(\psi_2(0))^{\frac{1}{2n}}\omega_1,\ldots,(\psi_2(0))^{\frac{1}{2n}}\omega_{n-1},
(\psi_2(1))^{\frac{1}{2n}}\omega_n,
\ldots,(\psi_2(1))^{\frac{1}{2n}}\omega_{2n-1})}
{\mathcal V((\psi_1(0))^{\frac{1}{2n}},(\psi_1(0))^{\frac{1}{2n}}\omega_1,\ldots,(\psi_1(0))^{\frac{1}{2n}}\omega_{n-1},
(\psi_1(1))^{\frac{1}{2n}}\omega_n,
\ldots,(\psi_1(1))^{\frac{1}{2n}}\omega_{2n-1})}
\right|^{\frac12}.
\end{multline*}
\end{theorem}
The following relation can be obtained from Theorem~\ref{periodic} using \cite[Theorem~3.2]{Naza:2009}.
\begin{proposition}
Under the assumptions of Proposition \ref{wiener}, the following relation holds:
\begin{multline*}
{\sf P}(\|\overline{B_{\{m\}}}\|_{\psi}\leq\varepsilon)
\sim\left(\psi(0)\psi(1)\right)^{\frac{2m+1}8}
\times\\\times
\left|
\mbox{\tiny$
{\mathcal V((\psi(0))^{\frac{1}{2(m+1)}},(\psi(0))^{\frac{1}{2(m+1)}}z_{m+1},\ldots,(\psi(0))^{\frac{1}{2(m+1)}}z_{m+1}^{m},
(\psi(1))^{\frac{1}{2(m+1)}}z_{m+1}^{m+1},
\ldots,(\psi(1))^{\frac{1}{2(m+1)}}z_{m+1}^{2m+1})}
$}
\right|^{-\frac12}
\times\\\times
(2m+2)^{\frac{m+2}2}
\frac{\varepsilon_{m+1}^{-(2m+1)}}{\sqrt{\pi\mathcal D_{m+1}}}
\exp\left(-\frac{\mathcal D_{m+1}}{2\varepsilon_{m+1}^2}\right).
\end{multline*}
\end{proposition}
\end{document}
\begin{document}
\title{Multigrid methods combined with low-rank approximation for tensor structured Markov chains}
\newcommand{\FF}{F\hspace{-0.075cm}F}
\footnotetext[1]{Institut f\"ur Mathematik, Universit\"at Kassel, Heinrich-Plett-Str.\ 40, 34132 Kassel, Germany, \texttt{[email protected]}}
\footnotetext[2]{Fakult\"at f\"ur Mathematik und Naturwissenschaften, Bergische Universit\"at Wuppertal, 42097 Wuppertal, Germany, \texttt{\{kkahl,sokolovic\}@math.uni-wuppertal.de}}
\footnotetext[3]{EPF Lausanne, SB-MATHICSE-ANCHP, Station 8, CH-1015 Lausanne,
Switzerland, \texttt{\{daniel.kressner,francisco.macedo\}@epfl.ch}}
\footnotetext[4]{IST, Alameda Campus, Av. Rovisco Pais, 1, 1049-001 Lisbon, Portugal}
\begin{abstract}
Markov chains that describe interacting subsystems suffer, on the one hand, from state space explosion but lead, on the other hand, to highly structured matrices. In this work, we propose a novel tensor-based algorithm to address such tensor structured Markov chains. Our algorithm combines a tensorized multigrid method with AMEn, an optimization-based low-rank tensor solver, for addressing coarse grid problems. Numerical experiments demonstrate that this combination overcomes the limitations incurred when using each of the two methods individually. As a consequence, Markov chain models of unprecedented size from a variety of applications can be addressed.
\end{abstract}
\begin{keywords}
Multigrid method, SVD, Tensor Train format, Markov chains, singular linear system, alternating optimization
\end{keywords}
\begin{AMS}
65F10, 65F50, 60J22, 65N55
\end{AMS}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{M. BOLTEN, K. KAHL, D. KRESSNER, F. MACEDO AND S. SOKOLOVI\'{C}}{LOW-RANK TENSOR MULTIGRID FOR MARKOV CHAINS}
\section{Introduction}
This paper is concerned with the numerical computation of stationary distributions for large-scale continuous-time Markov chains. Mathematically, this task consists of solving the linear system
\begin{equation}\label{eq:Ax0}
Ax=0 \quad \text{with}\quad \mathbf{1}^T x = 1,
\end{equation}
where $A$ is the transposed generator matrix of the Markov chain and $\mathbf{1}$ denotes the vector of all ones. The matrix $A$ is square, nonsymmetric, and satisfies $\mathbf{1}^TA=0$. It is well known~\cite{BermanPlemmons1994} that the irreducibility of $A$ implies existence and uniqueness of the solution of~\eqref{eq:Ax0}.
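As a toy illustration of~\eqref{eq:Ax0} (not one of the models considered later; the generator below is a hypothetical 3-state example), the stationary distribution of a small chain can be computed directly from the null space of $A$:

```python
import numpy as np

# A hypothetical 3-state generator Q (row sums zero); A = Q^T then satisfies 1^T A = 0.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 3.0, -4.0,  1.0],
              [ 1.0,  1.0, -2.0]])
A = Q.T

# The stationary distribution is the null vector of A, normalized to sum to one.
_, _, Vt = np.linalg.svd(A)
x = Vt[-1]
x = x / x.sum()
```

Since $Q$ is irreducible, the null space is one-dimensional and the normalized null vector is entrywise positive; of course, this dense approach is exactly what becomes infeasible for the structured problems below.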
We specifically consider Markov chains that describe $d$ interacting subsystems. Assuming that the $k$th subsystem has $n_k$ states, the generator matrix usually takes the form
\begin{equation}\label{eq:A}
A = \sum\limits_{t = 1}^T E_1^t \otimes E_2^t \otimes \cdots \otimes E_d^t,
\end{equation}
where $\otimes$ denotes the Kronecker product and $E_k^t \in \mathbb{R}^{n_k \times n_k}$ for $k = 1,\ldots, d$.
Consequently, $A$ has size $n = n_1 n_2 \cdots n_d$, which reflects the fact that
the states of the Markov chain correspond to all possible combinations of subsystem states. The exponential growth of $n$ with respect to $d$ is
usually called state space explosion~\cite{BuchholzDayar2007}. Applications of models described by~\eqref{eq:A} include queuing theory~\cite{Chan1987,Chan1988,Kaufman1983}, stochastic automata networks~\cite{LangvilleStewart2004, PlateauStewart1997}, analysis of chemical reaction networks~\cite{AndersonCraciunKurtz2010,LevineHwa2007} and telecommunication~\cite{Antunes2005,PhilippeSaadStewart1996}.
The tensor structure of~\eqref{eq:A} can be exploited to yield efficient matrix-vector multiplications in iterative methods for solving~\eqref{eq:Ax0}; see, e.g.,~\cite{LangvilleStewart2004}. However, none of the standard iterative solvers is computationally feasible for larger $d$ because of their need to store vectors of length $n$. To a certain extent, this can be avoided by reducing each $n_k$ with the tensorized multigrid method recently proposed in~\cite{BoltenKahlSokolovic2015}. Still, the need for solving coarse subproblems of size $2^d$ or $3^d$ limits such an approach to modest values of $d$.
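To illustrate the structure exploitation alluded to above, the following sketch applies a matrix of the form~\eqref{eq:A} mode by mode, without ever assembling the full Kronecker-product matrix. The factors $E_k^t$ are random stand-ins, and the row-major vectorization is chosen to match \texttt{numpy.kron} (it differs from the column-major map~\eqref{eq:tensorindex} used below):

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 3, 2
n = [2, 3, 2]
# Hypothetical small factors E_k^t standing in for the model matrices.
E = [[rng.standard_normal((n[k], n[k])) for k in range(d)] for _ in range(T)]

def structured_matvec(E, x, n):
    # Apply A = sum_t E_1^t (x) ... (x) E_d^t to x mode by mode,
    # never forming the (n_1...n_d) x (n_1...n_d) matrix A.
    X = x.reshape(n)                      # row-major, matching np.kron's ordering
    Y = np.zeros_like(X)
    for Et in E:
        Z = X
        for k, M in enumerate(Et):
            # Contract M with mode k of Z, then move the new axis back to slot k.
            Z = np.moveaxis(np.tensordot(M, Z, axes=(1, k)), 0, k)
        Y += Z
    return Y.reshape(-1)
```

The cost is $\mathcal{O}(T\,d\,\hat n\, n)$ instead of $\mathcal{O}(n^2)$, which is the standard trick behind structured matrix-vector products for~\eqref{eq:A}.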
Low-rank tensor methods as proposed in~\cite{Buchholz2008b,KressnerMacedo2014} can potentially deal with large values of $d$. The main idea is to view the solution $x$ of~\eqref{eq:Ax0} as an $n_1\times n_2 \times \cdots \times n_d$ tensor and aim at an approximation in a highly compressed, low-rank tensor format. The choice of the format is crucial for the success and practicality of such an approach. In~\cite{Buchholz2008b}, the so-called canonical decomposition was used, constituting a natural extension of the concept of product form solutions. Since this format aims at separating all subsystems at the same time, it cannot benefit from an underlying topology and thus often results in relatively large ranks. In contrast, low-rank formats based on tensor networks can be aligned with the topology of interactions between subsystems. In particular, it was demonstrated in~\cite{KressnerMacedo2014} that the so-called tensor train format~\cite{Oseledets2011} appears to be well suited. Alternating optimization techniques are frequently used to obtain approximate solutions within a low-rank tensor format. Specifically,~\cite{KressnerMacedo2014} proposes a variant of the Alternating Minimal Energy method (AMEn)~\cite{Dolgov2014,White2005}. In each step of alternating optimization, a subproblem of the form~\eqref{eq:Ax0} needs to be solved. This turns out to be challenging: although these subproblems are much smaller than the original problem, they are often too large to allow for the solution by a direct method and too ill-conditioned to allow for the solution by an iterative method. It is not known how to design effective preconditioners for such problems.
In this paper, we combine the advantages of two methods. The tensorized multigrid method from~\cite{BoltenKahlSokolovic2015} is used to reduce the mode sizes $n_k$ and the condition number. This, in turn, benefits the use of the low-rank tensor method from~\cite{KressnerMacedo2014} by reducing the size and the condition number of the subproblems.
The rest of this paper is organized as follows. In Section~\ref{sec:tensor} we briefly describe the tensor train format and explain the basic ideas of alternating least squares methods, including AMEn. The tensorized multigrid method is described in Section~\ref{sec:multigrid}. Section~\ref{sec:combination} describes our proposed combination of the tensorized multigrid method with AMEn. In Section~\ref{sec:tests}, we demonstrate the advantages of this combination by a series of numerical experiments involving models from different applications.
\section{Low-rank tensor methods} \label{sec:tensor}
A vector $x \in \mathbb{R}^{n_1 \cdots n_d}$ is turned into a tensor $\mathcal{X} \in \mathbb{R}^{n_1\times \cdots \times n_d}$ by setting
\begin{equation} \label{eq:tensorindex}
\mathcal{X}(i_1, \dots, i_d)=x\big(i_1+(i_2-1)n_1+(i_3-1)n_1n_2+\cdots +(i_d-1)n_1n_2 \cdots n_{d-1}\big)
\end{equation}
with $1 \leq i_{k} \leq n_{k}$ for $k=1,\ldots,d$. In \textsc{Matlab}, this corresponds to the command
$\texttt{X=reshape(x,n)}$ with $\texttt{n=[n\_1,n\_2,{}\ldots{},n\_d]}$.
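In Python/NumPy the same identification requires column-major (Fortran) order, since NumPy reshapes row-major by default; a minimal sketch:

```python
import numpy as np

n = [2, 3, 4]                                  # mode sizes n_1, n_2, n_3
x = np.arange(np.prod(n), dtype=float)

# Column-major order reproduces the index map above: with 0-based
# indices, X[i1, i2, i3] = x[i1 + i2*n1 + i3*n1*n2].
X = x.reshape(n, order='F')
```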
\subsection{Tensor train format}\label{sec:tt}
The \emph{tensor train (TT) format} is a multilinear low-dimensional representation of a tensor. Specifically, a tensor $\mathcal{X}$ is said to be represented in TT format if each entry of the tensor is given by
\begin{equation}\label{eq:tt}
\mathcal{X}(i_1,\dots, i_d) = G_1(i_1)\cdot G_2(i_2)\cdots G_d(i_d).
\end{equation}
The parameter-dependent matrices $G_k(i_k) \in \mathbb{R}^{r_{k-1} \times r_k}$ for $k=1, \ldots, d$
are usually collected in $r_{k-1} \times n_k \times r_k$ tensors, which are called the \emph{TT cores}.
The integers $r_0,r_1,\dots,r_{d-1}, r_d$, with $r_0 = r_d = 1$, determining the sizes of these matrices are called the \emph{TT ranks}.
The complexity of storing $\mathcal{X}$ in the format~\eqref{eq:tt} is bounded by $(d-2)\widehat{n}\widehat{r}^2+2\widehat{n}\widehat{r}$ if each $n_k \leq \widehat{n}$ and $r_k\leq \widehat{r}$.
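The representation~\eqref{eq:tt} can be evaluated entry by entry at this cost; the following sketch (with arbitrary small mode sizes and ranks of our choosing) checks the matrix-product formula against a full contraction of the cores:

```python
import numpy as np

rng = np.random.default_rng(0)
n = [2, 3, 2]                       # mode sizes n_1, n_2, n_3
r = [1, 2, 3, 1]                    # TT ranks r_0, ..., r_3
cores = [rng.standard_normal((r[k], n[k], r[k + 1])) for k in range(3)]

def tt_entry(cores, idx):
    # X(i_1,...,i_d) = G_1(i_1) G_2(i_2) ... G_d(i_d), a product of small matrices.
    M = cores[0][:, idx[0], :]
    for G, i in zip(cores[1:], idx[1:]):
        M = M @ G[:, i, :]
    return M[0, 0]

# Full tensor by contracting all cores at once (the size-1 edge ranks are summed out).
full = np.einsum('aib,bjc,ckd->ijk', cores[0], cores[1], cores[2])
```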
For a matrix $A \in \mathbb{R}^{n_1 \cdots n_d \times n_1 \cdots n_d}$, one can define a corresponding \emph{operator TT format} by mapping the row and column indices of $A$ to tensor indices analogous to~\eqref{eq:tensorindex} and letting each entry of $A$ satisfy
\begin{equation}\label{eq:tt_matrix}
A(i_1,\dots, i_d; j_1,\dots, j_d) = M_1(i_1,j_1)\cdot M_2(i_2,j_2)\cdots M_d(i_d, j_d),
\end{equation}
with parameter-dependent matrices $M_k(i_k,j_k) \in \mathbb{R}^{r_{k-1} \times r_k}$ for $k = 1,\dots, d$. The difference to~\eqref{eq:tt} is that the cores now depend on two parameters instead of one. A matrix given as a sum of $T$ Kronecker products as in~\eqref{eq:A} can be easily converted into an operator TT format~\eqref{eq:tt_matrix} using, e.g., the techniques described in~\cite{Oseledets2010}. It holds that $r_k \le T$ but often much smaller operator TT ranks can be achieved.
Assuming constant TT ranks, the TT format allows one to perform certain elementary operations with a complexity linear (instead of exponential) in $d$. Table~\ref{tab:Costs} summarizes the complexity for operations of interest, which shows that the cost can be expected to be dominated by the TT ranks. For a detailed description of the TT format and its operations, we refer to~\cite{Oseledets2010,Oseledets2011,OseledetsDolgov2012}.
\begin{table}
\caption{Complexity of operations in TT format for tensors $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{n_1 \times \dots \times n_d}$ with TT ranks bounded by $\hat r_\mathcal{X}$ and $\hat r_\mathcal{Y}$, respectively, and matrix $A \in \mathbb{R}^{(n_1 \times \dots \times n_d) \times (n_1 \times \dots \times n_d)}$ with operator TT ranks bounded by $\hat r_A$. All sizes $n_k$ are bounded by $\hat{n}$.\label{tab:Costs}}
\begin{center}
\begin{tabular}{|c|c|c|}\hline
Operation&Cost& Resulting TT ranks\\ \hline
Addition of two tensors $\mathcal{X}+\mathcal{Y}$ & --- & $\hat r_\mathcal{X} + \hat r_\mathcal{Y}$\\
Scalar multiplication $\alpha\mathcal{X}$& $\mathcal{O}(1)$ & $\hat r_\mathcal{X}$ \\
Scalar product $\left\langle\mathcal{X},\mathcal{Y}\right\rangle$&$\mathcal{O}(d\hat{n}\max\{\hat r_\mathcal{X},\hat r_\mathcal{Y}\}^3)$ &---\\
Matrix-vector product $A\mathcal{X}$ & $\mathcal{O}(d\hat{n}^2\hat r^2_A\hat r^2_\mathcal{X})$& $\hat r_A\hat r_\mathcal{X}$\\
Truncation of $\mathcal{X}$ &$\mathcal{O}(d\hat{n}\hat r_\mathcal{X}^3)$& prescribed\\ \hline
\end{tabular}
\end{center}
\end{table}
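To make the cost model in Table~\ref{tab:Costs} concrete, the following sketch evaluates a single entry of a TT tensor by multiplying the chain of parameter-dependent matrices in~\eqref{eq:tt}. The shapes and names are hypothetical illustrations, not part of the paper's software.

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate one entry X(i_1,...,i_d) of a tensor in TT format.

    cores[k] has shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1, so each
    slice cores[k][:, i_k, :] is the matrix G_k(i_k) in the TT format.
    """
    v = np.ones((1, 1))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]  # multiply the chain of r_{k-1} x r_k matrices
    return v[0, 0]

# small example: d = 3, mode sizes (2, 3, 2), TT ranks (1, 2, 2, 1)
rng = np.random.default_rng(0)
cores = [rng.standard_normal((1, 2, 2)),
         rng.standard_normal((2, 3, 2)),
         rng.standard_normal((2, 2, 1))]
# reference: contract the cores into the full 2 x 3 x 2 tensor
full = np.einsum('aib,bjc,ckd->ijk', *cores)
```

For an operator in TT matrix format~\eqref{eq:tt_matrix}, the same chain multiplication applies with slices indexed by the pair $(i_k, j_k)$.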
\subsection{Alternating least squares} \label{sec:als}
In this section, we describe the method of alternating least squares (ALS) from~\cite{KressnerMacedo2014}.
To incorporate the TT format, we first replace~\eqref{eq:Ax0} by the equivalent optimization problem
\begin{equation}\label{eq:min_problem}
\min \|Ax \| \text{ subject to } \mathbf{1}^Tx=1,
\end{equation}
where $\|\cdot\|$ denotes the Euclidean norm.
We can equivalently view $A$ as a linear operator on $\mathbb{R}^{n_1\times \cdots \times n_d}$ and constrain~\eqref{eq:min_problem} to tensors in TT format:
\begin{equation}\label{eq:const_min_problem}
\min \|A \mathcal{X} \| \text{ subject to } \langle \mathcal{X}, \mathbf{1} \rangle =1,\ \text{$\mathcal{X}$ is in TT format~\eqref{eq:tt}},
\end{equation}
where $\mathbf{1}$ now refers to the $n_1\times \cdots \times n_d$ tensor of all ones.
Note that the TT format is linear in each of the TT cores. This motivates the use of an alternating least squares (ALS) approach that optimizes the $k$th TT core while keeping all other TT cores fixed. To formulate the subproblem that needs to be solved in each step of ALS, we define the interface matrices
\begin{eqnarray*}
G_{\leq k-1} &=& \big[G(i_1)\cdots G(i_{k-1})\big] \in \mathbb{R}^{(n_1\cdots n_{k-1}) \times r_{k-1}}, \\
G_{\geq k+1}&=& \big[G(i_{k +1})\cdots G(i_{d})\big]^T \in \mathbb{R}^{(n_{k+1} \cdots n_d) \times r_{k}}.
\end{eqnarray*}
Without loss of generality, we may assume that the TT format is chosen such that the columns of $G_{\leq k-1}$
and $G_{\geq k+1}$ are orthonormal; see, e.g.,~\cite{KressnerSteinlechnerUschmajew2013}.
By letting $g_k \in \mathbb{R}^{r_{k-1}n_k r_k}$ contain the vectorization of the $k$th core and setting
\[
G_{\neq k} = G_{\leq k-1} \otimes I_{n_{k}} \otimes G_{\geq k+1},
\]
it follows that
\begin{equation*}
\text{vec}(\mathcal{X}) = G_{\neq k} g_k.
\end{equation*}
Inserting this relation into~\eqref{eq:const_min_problem} yields
\[
\min \|A G_{\neq k} g_k \| \text{ subject to } \langle G_{\neq k} g_k, \mathbf{1} \rangle =1,
\]
which is equivalent to the linear system
\begin{equation}\label{eq:linear_system}
\left[\begin{array}{cc} G_{\neq k}^TA^T\!AG_{\neq k} & \tilde{\mathbf{e}} \\ \tilde{\mathbf{e}}^T & 0 \end{array}\right] \left[\begin{array}{c} g_k \\ \lambda \end{array}\right] = \left[\begin{array}{c} 0 \\ 1 \end{array}\right].
\end{equation}
The vector $\tilde{\mathbf{e}} = G_{\neq k}^T \mathbf e$, where $\mathbf e = \operatorname{vec}(\mathbf{1})$, can be cheaply computed by tensor contractions. After~\eqref{eq:linear_system} has been solved, the TT format of the tensor $\mathcal{X}$ is updated by reshaping
$g_k$ into its $k$th TT core.
One full sweep of ALS consists of applying the described procedure first in a forward sweep over the TT cores $1,2,\ldots,d$ followed
by a backward sweep over the TT cores $d,d-1,\ldots, 1$.
After each update of a core, an orthogonalization procedure~\cite{Oseledets2011} is applied to ensure the orthonormality of the interface matrices in the subsequent optimization step.
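The core update via the bordered system~\eqref{eq:linear_system} can be sketched as follows. The function name is ours; for illustration we build a symmetric positive definite $B$ (a shifted Gram matrix) so that the bordered system is nonsingular, whereas in the method $B = G_{\neq k}^TA^T\!AG_{\neq k}$ is only positive semidefinite in general.

```python
import numpy as np

def als_core_update(B, e_tilde):
    """Solve the bordered system [[B, e~], [e~^T, 0]] [g; lam] = [0; 1].

    B plays the role of G_{!=k}^T A^T A G_{!=k} and e_tilde the role of
    G_{!=k}^T e.  Returns the updated vectorized core g_k and the
    Lagrange multiplier lam.
    """
    m = B.shape[0]
    K = np.zeros((m + 1, m + 1))
    K[:m, :m] = B
    K[:m, m] = e_tilde
    K[m, :m] = e_tilde
    rhs = np.zeros(m + 1)
    rhs[m] = 1.0  # enforces the linear constraint e~^T g = 1
    sol = np.linalg.solve(K, rhs)
    return sol[:m], sol[m]

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 5))
B = M.T @ M + np.eye(5)       # SPD stand-in for the reduced matrix
e_tilde = rng.standard_normal(5)
g, lam = als_core_update(B, e_tilde)
```

The solution satisfies the stationarity condition $Bg + \lambda\tilde{\mathbf e} = 0$ together with the constraint $\tilde{\mathbf e}^Tg = 1$.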
\subsection{AMEn} \label{sec:amen}
The alternating minimal energy (AMEn) method proposed in~\cite{Dolgov2014} for linear systems enriches the TT cores locally by
gradient information, which potentially yields faster convergence than ALS and allows for rank adaptivity. It is sufficient to consider $d = 2$ for illustrating the extension of this procedure to~\eqref{eq:const_min_problem}. The general case $d>2$ then follows analogously to~\cite{Dolgov2014,KressnerSteinlechnerUschmajew2013}
by applying the case $d = 2$ to neighbouring cores.
For $d = 2$, the TT format corresponds to a low-rank factorization $\mathcal{X} = G_1 G_2^T$ with $G_1 \in \mathbb{R}^{n_1\times r_1}$, $G_2 \in \mathbb{R}^{n_2\times r_2}$. Suppose that the first step of ALS has been performed and $G_1$ has been optimized. We then consider a low-rank approximation of the negative gradient of $\|A \mathcal{X}\|^2$:
\[
\mathcal{R} = -A \mathcal{X} \approx R_1 R_2^T.
\]
In practice, a rank-2 or rank-3 approximation of $\mathcal{R}$ is used. Then the method of steepest descent applied to minimizing $\|A \mathcal{X}\|^2$ would compute
\[
\mathcal{X} + \alpha \mathcal{R} \approx \begin{pmatrix} G_1 & R_1 \end{pmatrix} \begin{pmatrix} G_2 & \alpha R_2 \end{pmatrix}^T
\]
for some suitably chosen scalar $\alpha$. We now fix (and orthonormalize) the first augmented core $\begin{pmatrix} G_1 & R_1 \end{pmatrix}$. However, instead of using $\begin{pmatrix} G_2 & \alpha R_2 \end{pmatrix}$, we apply the next step of ALS to obtain an optimized second core via the solution of a linear system of the form~\eqref{eq:linear_system}. As a result we obtain an approximation $\mathcal{X}$ that is at least as good as the one obtained from one forward sweep of ALS without augmentation and, when ignoring the truncation error in $\mathcal{R}$, at least as good as one step of steepest descent. The described procedure is then repeated in the opposite direction, augmenting the second core and optimizing the first core, and so on. In each step, the rank of $\mathcal{X}$ is adjusted by performing low-rank truncation. This rank adaptivity is one of the major advantages of AMEn.
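The enrichment step for $d = 2$ can be sketched as follows: truncate the residual to low rank, append its left factor to the current core, and re-orthonormalize. Shapes and the function name are hypothetical illustrations.

```python
import numpy as np

def augment_core(G1, R, rank=2):
    """AMEn-style enrichment (d = 2 sketch): augment the first core G1
    with a rank-`rank` factor of the residual R, then re-orthonormalize
    so the interface matrix has orthonormal columns."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    R1 = U[:, :rank]                            # low-rank residual factor
    Q, _ = np.linalg.qr(np.hstack([G1, R1]))    # orthonormal augmented core
    return Q

rng = np.random.default_rng(2)
G1 = np.linalg.qr(rng.standard_normal((10, 3)))[0]  # current (orthonormal) core
R = rng.standard_normal((10, 4))                    # stand-in residual
Q = augment_core(G1, R)
```

Since the column span of $Q$ contains that of $G_1$, the subsequent ALS step over the second core can only improve on plain ALS, mirroring the argument in the text.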
\section{Multigrid}\label{sec:multigrid}
In this section, we recall the multigrid method from~\cite{BoltenKahlSokolovic2015} for solving~\eqref{eq:Ax0} with a matrix $A$ having the tensor structure~\eqref{eq:A}. Special care has to be taken in order to preserve the tensor structure within the multigrid hierarchy. We first introduce the generic components of a multigrid method before explaining the tensor specific construction.
A multigrid approach has the following ingredients: the smoothing scheme, the set of coarse variables, transfer operators (the interpolation operator and the restriction operator) and the coarse grid operator.
Algorithm~\ref{alg:vcycle} is a prototype of a $V$-cycle and includes the mentioned ingredients. For a detailed description we refer the reader to~\cite{RugeStueben1986,TrottenbergOsterleeSchueller2001}.
\begin{algorithm}
\DontPrintSemicolon
$ v_\ell = \textnormal{MG}(b_\ell,v_\ell) $\;
\uIf{ coarsest grid is reached}{solve coarse grid equation $A_\ell v_\ell=b_\ell$.}
\Else{Perform $\nu_1$ smoothing steps for $A_\ell v_\ell = b_\ell$ with initial guess $ v_\ell $\;
Compute coarse right-hand side $b_{\ell+1}=Q_\ell(b_\ell-A_\ell v_\ell)$\;
$ e_{\ell+1}=\textnormal{MG}(b_{\ell+1}, 0)$\;
$v_\ell=v_\ell+P_\ell e_{\ell+1}$ \;
Perform $\nu_2$ smoothing steps for $A_\ell v_\ell = b_\ell$ with initial guess $ v_\ell $\;
}
\caption{Multigrid $V$-cycle\label{alg:vcycle}}
\end{algorithm}
In particular, for a two-grid approach, i.e., $\ell=1,2$, the method can be described as follows. First, a certain number $\nu_1$ of smoothing steps is performed, using an iterative solver such as weighted Jacobi, Gauss--Seidel, or a Krylov subspace method like GMRES~\cite{Saad1996,SaadSchultz1986}. The residual of the current iterate is then computed and restricted by a matrix-vector multiplication with the restriction matrix $Q \in \mathbb{R}^{n_c \times n}$. The operator $A_1=A$ is restricted via a Petrov--Galerkin construction to obtain the coarse-grid operator $A_2=QA_1P \in \mathbb{R}^{n_c\times n_c}$, where $P \in \mathbb{R}^{n \times n_c}$ is the interpolation operator. A recursive call then solves the coarse grid equation, which is the residual equation; finally, the error is interpolated back to the fine grid and $\nu_2$ further smoothing iterations are applied.
This $V$-cycle can be performed repeatedly until a certain accuracy of the residual is reached or a maximum number of $V$-cycles has been applied. If the matrix on the second grid is still too large, one can again solve the residual equation via a two-grid approach instead of stopping there. By this recursive construction one obtains a multi-level approach, see Fig.~\ref{fig:vcycle}.
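As a plain (non-tensorized) illustration of Algorithm~\ref{alg:vcycle}, the following sketch runs a recursive $V$-cycle on a small 1D Dirichlet Laplacian, with weighted Jacobi smoothing, linear interpolation, and full-weighting restriction. All concrete choices (mode size, $\omega = 2/3$, coarsest size $3$) are illustrative assumptions, not the construction of Section~\ref{subsec:tensormultigrid}.

```python
import numpy as np

def vcycle(A, b, v, nu1=3, nu2=3, omega=2/3):
    """Recursive V-cycle for a 1D Laplacian (illustration only)."""
    n = A.shape[0]
    if n <= 3:                            # coarsest grid: direct solve
        return np.linalg.solve(A, b)
    D = np.diag(A)
    for _ in range(nu1):                  # presmoothing (weighted Jacobi)
        v = v + omega * (b - A @ v) / D
    nc = (n - 1) // 2                     # coarse points at fine indices 2j+1
    P = np.zeros((n, nc))                 # linear interpolation
    for j in range(nc):
        P[2 * j, j] = 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] = 0.5
    Q = 0.5 * P.T                         # full-weighting restriction
    Ac = Q @ A @ P                        # Petrov-Galerkin coarse operator
    e = vcycle(Ac, Q @ (b - A @ v), np.zeros(nc), nu1, nu2, omega)
    v = v + P @ e                         # coarse-grid correction
    for _ in range(nu2):                  # postsmoothing
        v = v + omega * (b - A @ v) / D
    return v

n = 31
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Dirichlet Laplacian
rng = np.random.default_rng(3)
b = rng.standard_normal(n)
v = np.zeros(n)
for _ in range(20):
    v = vcycle(A, b, v)
```

With grid sizes $31 \to 15 \to 7 \to 3$, a handful of cycles reduces the residual by many orders of magnitude, which is the behaviour the multigrid hierarchy is designed to deliver.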
\begin{figure}
\caption{Multigrid V-cycle: on each level, a presmoothing iteration is performed before the problem is restricted to the next coarser grid. On the smallest grid, the problem is typically solved exactly by a direct solver. When interpolating back to the finer grids, postsmoothing iterations are applied on each level.}
\label{fig:vcycle}
\end{figure}
No detail has yet been provided on how to choose $n_c$ and how to obtain the weights for the interpolation and restriction operators $P$ and $Q$. The value $n_c$ is obtained by specifying coarse variables. Geometric coarsening~\cite{TrottenbergOsterleeSchueller2001} and compatible relaxation~\cite{Brandt2000, BrannickFalgout2010} are methods which split the given $n$ variables into fine variables $\mathcal{F}$ and coarse variables $\mathcal{C}$, so that $n=|\mathcal{C}|+|\mathcal{F}|$. If such a splitting is given, $n_c=|\mathcal{C}|$ and the operators are defined as
\begin{equation*}
Q: \mathbb{R}^{|\mathcal{C}\cup\mathcal{F}|} \rightarrow \mathbb{R}^{|\mathcal{C}|},\quad P: \mathbb{R}^{|\mathcal{C}|}\rightarrow\mathbb{R}^{|\mathcal{C}\cup\mathcal{F}|}.
\end{equation*}
To obtain the entries for these operators, one can use methods like linear interpolation~\cite{TrottenbergOsterleeSchueller2001} or direct interpolation~\cite{RugeStueben1986,TrottenbergOsterleeSchueller2001}, among others.
Another approach for choosing a coarse grid is aggregation~\cite{BrezinaManteuffelMcCormichRugeSanders2010}, where one defines a partition of the set of variables and each subset of this partition is associated with one coarse variable.
In this work we focus on the $V$-cycle strategy. Other strategies, for example $W$- or $F$-cycles~\cite{TrottenbergOsterleeSchueller2001}, can be applied in a straightforward fashion.
\subsection{Tensorized Multigrid}\label{subsec:tensormultigrid}
In order to make Algorithm~\ref{alg:vcycle} applicable to a tensor-structured problem, one has to ensure that the tensor structure is preserved along the multigrid hierarchy. Here we follow the approach taken in~\cite{BoltenKahlSokolovic2015} and define interpolation and restriction in the following way.
\begin{proposition}\label{pro:QAP}
Let $A$ of the form \eqref{eq:A} be given, with $E_k^t \in \mathbb{R}^{n_k \times n_k}$. Let $P = \bigotimes_{k = 1}^d P_k$ and $Q = \bigotimes_{k = 1}^d Q_k$ with $P_k \in \mathbb{R}^{n_k \times n_k^c}$ and $Q_k \in \mathbb{R}^{n_k^c \times n_k}$ where $n_k^c < n_k$. Then the corresponding Petrov-Galerkin operator satisfies
\[
QAP = \sum\limits_{t = 1}^T\bigotimes_{k = 1}^d Q_k E_k^t P_k.
\]
\end{proposition}
Thus, the task of constructing interpolation and restriction operators becomes a ``local'' one: each factor $P_k$ of the interpolation $P = \bigotimes_{k = 1}^d P_k$ coarsens the $k$th subsystem, with $n_k^c < n_k$, and the entries of $P_k$ depend largely on the local part $E_k^t$ of the tensorized operator.
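Proposition~\ref{pro:QAP} is an immediate consequence of the mixed-product property of the Kronecker product; a quick numerical check for $d = 2$ and $T = 3$ with random matrices of hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
T, n, nc = 3, 6, 3
# d = 2 Kronecker factors E_k^t of the operator A
E = [[rng.standard_normal((n, n)) for _ in range(2)] for _ in range(T)]
P_k = [rng.standard_normal((n, nc)) for _ in range(2)]   # interpolation factors
Q_k = [rng.standard_normal((nc, n)) for _ in range(2)]   # restriction factors

A = sum(np.kron(E[t][0], E[t][1]) for t in range(T))
P = np.kron(P_k[0], P_k[1])
Q = np.kron(Q_k[0], Q_k[1])

# Petrov-Galerkin coarse operator, assembled globally and factor-wise
lhs = Q @ A @ P
rhs = sum(np.kron(Q_k[0] @ E[t][0] @ P_k[0],
                  Q_k[1] @ E[t][1] @ P_k[1]) for t in range(T))
```

The two assemblies agree entrywise, confirming that coarsening never increases the number $T$ of Kronecker terms.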
Another important ingredient of the multigrid method is the smoothing scheme. In our setting, it should fulfill two main requirements; it should:
\begin{itemize}
\item[(i)] be applicable to non-symmetric, singular systems;
\item[(ii)] admit an efficient implementation in the TT format.
\end{itemize}
Requirement (ii) basically means that only the operations listed in Table~\ref{tab:Costs} should be used by the smoother, as most other operations are far more expensive. In this context, one natural choice is GMRES~\cite{Saad1996,SaadSchultz1986}, which fulfills requirement (i) and consists only of matrix-vector products and orthogonalization steps (i.e., inner products and vector additions). See~\cite{BoltenKahlSokolovic2015} for a discussion of other possible choices for smoothing schemes and their limitations.
\begin{paragraph}{Parameters of the SVD truncation}
We apply the TT-SVD algorithm from~\cite{Oseledets2011} to keep the TT ranks of the iterates in the tensorized multigrid method under control. Except for the application of restriction and interpolation, which both have operator TT rank one by construction, all operations of Algorithm~\ref{alg:vcycle} lead to an increase of the rank of the current iterate.
In particular, truncation has to be performed after line 6 and line 8 of Algorithm~\ref{alg:vcycle}. Concerning the truncation of the restricted residual in line 6, we have observed that we do not need a very strict accuracy to obtain convergence of the global scheme and thus set the value to $10^{-1}$. As for the truncation of the updated iterates $v_\ell$ after line 8, we note that they have highly different norms on the different levels, so that the accuracy for their truncation should depend on the level. Additionally, a dependency on the cycle is also included, following the idea in \cite{KressnerMacedo2014}, where such an adaptive scheme is applied to the sweeps of AMEn. Precisely, the accuracy depends on the residual norm after the previous cycle. This is motivated by the fact that truncations should be more accurate as we get closer to the desired approximation, whereas such accuracy is not needed while we are still far away from it. Summarizing, the accuracy of the truncation of the different $v_{\ell}$ is thus taken as the norm of $v_{\ell}$ divided by the norm of $v_1$ (dependency on the level), times the residual norm after the previous cycle (dependency on the quality of the current approximate solution), times a default value of 10. This ``double'' adaptivity is also used within the GMRES smoother to truncate the occurring vectors.
We also impose a restriction on the maximum TT rank allowed after each truncation. This maximum rank is initially set to $15$ and grows by a factor of $\sqrt{2}$ after each cycle for which the reduction of the residual norm is observed to be smaller than a factor of $\frac{9}{10}$, signalling stagnation.
\end{paragraph}
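The adaptive tolerance and the rank-cap update just described can be summarized in two small helper functions. This is a sketch: the default constant $10$, the growth factor $\sqrt{2}$, and the stagnation threshold $9/10$ are taken from the text, while the function names are ours.

```python
import math

def truncation_tol(norm_vl, norm_v1, res_prev, default=10.0):
    """Adaptive SVD truncation accuracy for the level-l iterate v_l:
    (|v_l| / |v_1|) * (residual norm after the previous cycle) * default."""
    return default * (norm_vl / norm_v1) * res_prev

def update_rank_cap(rmax, res_new, res_old):
    """Grow the maximum TT rank by sqrt(2) when the residual reduction
    per cycle is worse than a factor of 9/10 (stagnation)."""
    if res_new > 0.9 * res_old:
        return rmax * math.sqrt(2)
    return rmax
```

For example, with iterate norms $1$ and $2$ and a previous residual of $10^{-3}$, the tolerance evaluates to $5\cdot 10^{-3}$.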
\section{Multigrid-AMEn}\label{sec:combination}
In Sections~\ref{sec:tensor} and~\ref{sec:multigrid} we have discussed two independent methods for solving~\eqref{eq:Ax0}.
In this section we first discuss the limitations of these two methods and then describe a novel combination that potentially overcomes these limitations.
\subsection{Limitations of AMEn} \label{sec:limitamen}
Together with orthogonalization and low-rank truncation, one of the computationally most expensive parts of AMEn is the solution of the linear system~\eqref{eq:linear_system}, which has size $r_{k-1} r_k n_k + 1$. A direct solver applied to this linear system has complexity $\mathcal{O}(\hat r^6 \hat n^3)$
and can thus only be used in the presence of small ranks and mode sizes.
Instead of a direct solver, an iterative solver such as MINRES~\cite{Greenbaum1997,Saad1996} can be applied to~\eqref{eq:linear_system}. The Kronecker structure of $G_{\neq k}^TA^T\!AG_{\neq k}$ inherited from the low operator TT rank of $A$ allows for efficient matrix-vector multiplications despite the fact that this matrix is not sparse. Unfortunately, we have observed for all the examples considered in Section~\ref{sec:tests} that the condition number of the reduced problem~\eqref{eq:linear_system} grows rapidly as the mode sizes $n_k$ increase. In turn, the convergence of MINRES is severely impaired, often leading to stagnation. It is by no means clear whether it is possible to turn a preconditioner for the original problem into an effective preconditioner for the reduced problem. So far, this has only been achieved via a very particular construction for Laplace-like operators~\cite{KressnerSteinlechnerUschmajew2013}, which is not relevant for the problems under consideration.
\subsection{Limitations of tensorized multigrid}
The described tensorized multigrid method is limited to modest values of $d$, simply because of the need for solving the problem on the coarsest grid. The size of this problem grows exponentially in $d$. Figure~\ref{fig:coarsening} illustrates the coarsening process if one applies full coarsening to each $E_j^t$ in an overflow queueing problem with mode sizes $9$, as described, e.g., in~\cite[Section 5.1]{BoltenKahlSokolovic2015}; see also Section~\ref{subsec:models} of this paper. In the case of three levels, a problem of size $3^d$ would need to be addressed by a direct solver on the coarsest grid. Due to the nature of the problem it is not possible to coarsen the problem to a single variable in each dimension.
\begin{figure}
\caption{Illustration of the coarsening process for the overflow queueing problem with mode sizes $9$.}
\label{fig:coarsening}
\end{figure}
\subsection{Combination of the two methods}
Instead of using a direct method for solving the coarsest-grid system in the tensorized multigrid method, we propose to use AMEn. Due to the fact that the mode sizes on the coarsest grid are small, we expect that it becomes much simpler to solve the reduced problems~\eqref{eq:linear_system} within AMEn.
Note that the problem to be solved on the coarsest grid constitutes a correction equation and thus differs from the original problem~\eqref{eq:Ax0} in having a nonzero right-hand side and incorporating a different linear constraint. To address this problem, we apply AMEn~\cite{Dolgov2014} to the normal equations and ignore the linear constraint. The linear constraint is enforced only at the end of the cycle by explicitly normalizing the obtained approximation, as in \cite{BoltenKahlSokolovic2015}.
\begin{paragraph}{Parameters of AMEn for the coarsest grid problem} AMEn targets an accuracy that is at the level of the residual from the previous multigrid cycle and we stop AMEn once this accuracy is reached or, at the latest, after $5$ sweeps. A rank-3 approximation of the negative gradient, obtained by ALS as suggested in \cite{Dolgov2014}, is used to augment the cores within AMEn. Reduced problems~\eqref{eq:linear_system} are addressed by a direct solver for size up to $1000$; otherwise MINRES (without a preconditioner) is used.
\end{paragraph}
\begin{paragraph}{Initial approximation of the solution}
All algorithms are initialized with the tensor that results from solving the coarsest grid problem, using the variant of AMEn described in Section~\ref{sec:amen}, and then bringing it up to the finest level using interpolation, as in \cite{BoltenKahlSokolovic2015}.
\end{paragraph}
\section{Numerical experiments}\label{sec:tests}
In this section, we illustrate the efficiency of our newly proposed algorithm from Section~\ref{sec:combination}. All tests have been performed in \textsc{Matlab} version 2013b, using functions from the \textit{TT-Toolbox}~\cite{Toolbox}.
The execution times have been obtained on a 12-core Intel Xeon CPU X5675, 3.07GHz with 192 GB RAM running 64-Bit Linux version 2.6.32.
\subsection{Model problems}\label{subsec:models}
All benchmark problems used in this paper are taken from the benchmark collection \cite{Macedo2015}, which not only provides a detailed description of the involved matrices but also {\sc Matlab} code.
In total, we consider six different models, which can be grouped into three categories.
\paragraph{Overflow queuing models}
\begin{figure}
\caption{Structure of the model $\mathsf{overflow}$.}
\label{fig:overflow_tikz_new}
\end{figure}
The first class of benchmark models consists of the well-known overflow queuing model and two variations thereof. The structure of the model is depicted in Figure~\ref{fig:overflow_tikz_new}. The arrival rates are chosen as $\lambda_k = 1.2 - (k-1)\cdot 0.1$ and the service rates as $\mu_k = 1$ for $k = 1,\dots,d$, as suggested in~\cite{Buchholz2008b}. The variations of the model differ in the interaction between the queues:
\begin{itemize}
\item $\mathsf{overflow}$: Customers which arrive at a full queue try to enter subsequent queues until they find one that is not full. After trying the last queue, they leave the system.
\item $\mathsf{overflowsim}$: As $\mathsf{overflow}$, but customers arriving at a full queue try only one subsequent queue before leaving the system.
\item $\mathsf{overflowpersim}$: As $\mathsf{overflowsim}$, but when the last queue is full, a customer arriving there tries to enter the first queue instead of immediately leaving.
\end{itemize}
For these models, as suggested in \cite{BoltenKahlSokolovic2015}, we choose the interpolation operator $P_k$ as direct interpolation based on the matrices describing the local subsystems, and the restriction operator as its transpose.
\paragraph{Simple tandem queuing network ($\mathsf{kanbanalt2}$)}
\begin{figure}
\caption{Structure of the model $\mathsf{kanbanalt2}$.}
\label{fig:kanbanalt_tikz}
\end{figure}
Customers have to pass through a number $d$ of queues one after the other. Each queue $k$ has its own service rate, denoted by $dep(k)$, and its own capacity, denoted by $cap(k)$. For our tests we choose $dep(k)=1$ for all $k = 1,\dots,d$. The service in queue $k$ can only be finished if queue $k+1$ is not full, so that the served customer can immediately enter the next queue. Customers arrive only at the first queue, with an arrival rate of $1.2$. Figure~\ref{fig:kanbanalt_tikz} illustrates this model.
As only the subsystems corresponding to the first and last dimensions have a non-trivial ``local part'' and the one for the last dimension is associated with a subdiagonal matrix, we construct only $P_1$ via direct interpolation (as in the overflow models) and use linear interpolation for $P_2,\dots,P_d$.
\paragraph{Metabolic pathways}
\begin{figure}
\caption{Structure of the models $\mathsf{directedmetab}$ (a) and $\mathsf{divergingmetab}$ (b).}
\label{fig:metab_tikz}
\end{figure}
The next model problems we consider come from the field of chemistry, describing stochastic fluctuations in metabolic pathways. In Fig.~\ref{fig:metab_tikz}(a) each node of the given graph describes a metabolite. A flux of substrates can move along the nodes being converted by means of several chemical reactions (an edge between node $k$ and $\ell$ in the graph means that the product of reaction $k$ can be converted further by reaction $\ell$). The rate at which the $k$th reaction happens is given by
$$\frac{v_k m_k}{m_k + K_k - 1},$$
where $m_k$ is the number of particles of the $k$th substrate and $v_k, K_k$ are constants which we choose as $v_k = 0.1$ and $K_k = 1000$ for all $k = 1,\dots,d$. Note that every substrate $k$ has a maximum capacity of $cap(k)$. This model will be called $\mathsf{directedmetab}$.
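For illustration, the rate above with the stated parameter choices $v_k = 0.1$ and $K_k = 1000$ can be written as a one-line helper (the function name is ours):

```python
def reaction_rate(m, v=0.1, K=1000.0):
    """Rate v*m/(m + K - 1) of a reaction with m particles of its substrate,
    using the paper's parameter choices v = 0.1, K = 1000 as defaults."""
    return v * m / (m + K - 1)
```

The rate vanishes for an empty substrate and saturates towards $v_k$ as $m_k$ grows large.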
$\mathsf{divergingmetab}$ is a variation of this model. Now, one of the metabolites in the reaction network can be converted into two different metabolites, meaning that the reaction path splits into two paths which are independent of each other, as shown in Fig.~\ref{fig:metab_tikz}(b).
The interpolation and restriction operators for these models are chosen in the same way as for $\mathsf{kanbanalt2}$.
\subsection{Numerical results}\label{subsec:experiments}
In this section, we report the results of the experiments we performed on the models from Section \ref{subsec:models}, in order to compare our proposed method, called ``MultigridAMEn'', to the existing approaches ``AMEn'' and ``Multigrid''.
Throughout all experiments, we stop an iteration when the residual norm $\|Ax\|$ is two orders of magnitude smaller than the residual norm of the tensor of all ones (scaled so that the sum of its entries is one). This happens to be our initial guess for AMEn, but it does not correspond to the initial guesses of Multigrid and MultigridAMEn.
For both multigrid methods, three pre- and postsmoothing steps are applied on each grid.
The number of levels is chosen such that the coarsest grid problem has mode size $3$.
\paragraph{Scaling with respect to the number of subsystems}
In order to illustrate the scaling behaviour of the three methods,
we first choose in all models a capacity of $16$ in each subsystem (i.e., mode sizes 17) and vary $d$, the number of subsystems. Figure~\ref{fig:fixedn_new} displays the obtained execution times.
\begin{figure}
\caption{Execution time (in seconds) needed to compute an approximation of the steady state distribution for the benchmark models from Section~\ref{subsec:models}, with mode size $17$ and varying number of subsystems $d$.}
\label{fig:fixedn_new}
\end{figure}
To provide more insight into the results depicted in Figure~\ref{fig:fixedn_new}, we also give the number of iterations and the maximum rank of the computed approximation for the $\mathsf{overflow}$ model in Table~\ref{tab:fixedn}. For the other models, the observed behaviour is similar and we therefore refrain from providing more detailed data.
\begin{table}
\caption{Execution time (in seconds), number of iterations, and maximum rank of the computed approximations for $\mathsf{overflow}$ with mode size 17 and varying dimension $d$. The symbol --- indicates that the desired accuracy could not be reached within $3\,600$ seconds.\label{tab:fixedn}}
\footnotesize
\begin{tabular}[c]{c|ccc|ccc|ccc}
&&AMEn&&&Multigrid&&&MultigridAMEn\\
d&time&iter&rank&time&iter&rank&time&iter&rank\\
\hline
4 & 4.5 & 7 & 16 & 4.6& 13 & 13 & 4.2 & 13 & 13 \\
5 & 36.3 & 9 & 23 & 6.4 & 11 & 20 & 7.0 & 11 & 20 \\
6 & 239.4 & 12 & 28 & 24.7 & 17 & 29 & 20.4 & 17 & 29 \\
7 & 1758.4 & 14 & 36 & 252.4 & 24 & 29 & 38.3 & 24 & 29 \\
8 & --- & --- & --- & --- & --- & --- & 98.4 & 28 & 41 \\
9 & --- & --- & --- & --- & --- & --- & 214.8 & 36 & 57 \\
10 & --- & --- & --- & --- & --- & --- & 718.8 & 40 & 80 \\
11 & --- & --- & --- & --- & --- & --- & 2212.2 & 45 & 113 \\
\end{tabular}
\end{table}
In Figure~\ref{fig:fixedn_new}, we observe that Multigrid and MultigridAMEn behave about the same up to $d = 6$ subsystems. For larger $d$, the cost of solving the coarsest grid problem of size $3^d$ by a direct method becomes prohibitively large within Multigrid. MultigridAMEn is almost always faster than AMEn, even for $d = 4$ or $d = 5$. How much faster MultigridAMEn is depends on the growth of the TT ranks of the solution with respect to $d$, as these have the largest influence on the performance of AMEn.
Note that the choice of levels in MultigridAMEn is not optimized; it is always chosen such that the coarsest grid mode sizes are three.
We sometimes observed that choosing a larger mode size leads to better performance, but we have not attempted to optimize this choice.
The TT format is a degenerate tree tensor network and thus perfectly matches the topology of interactions in the models {\sf overflowsim}, $\mathsf{kanbanalt2}$, and $\mathsf{directedmetab}$. Compared to $\mathsf{overflowsim}$, the performance is slightly worse for $\mathsf{kanbanalt2}$ and {\sf directedmetab}, possibly because they contain synchronized interactions, that is, interactions associated with a simultaneous change of state in more than one subsystem. In contrast, $\mathsf{overflowsim}$, as well as $\mathsf{overflow}$ and $\mathsf{overflowpersim}$, only have functional interactions, that is, the state of some subsystems determines the rates associated with other subsystems. This seems to be an important factor as the second best performance is observed for $\mathsf{overflowpersim}$, which contains a cycle in the topology of the network and thus does not match the TT format. This robustness with respect to the topology is also reflected by the results for $\mathsf{divergingmetab}$; recall Figure~\ref{fig:metab_tikz}(b).
The maximum problem size that is considered is $17^{13}\approx 9.9 \times 10^{15}$. MultigridAMEn easily deals with larger $d$, but this is the largest configuration for which an execution time below $3\,600$ seconds is obtained.
\paragraph{Scaling with respect to the mode sizes}
To also illustrate how the methods scale with respect to increasing mode sizes, we next perform experiments where we fix all models to $d = 6$ subsystems and vary their capacity. The execution times for all models are presented in Figure~\ref{fig:fixedd_new}, while more detailed information for the {\sf overflow} model is given in Table~\ref{tab:fixedd}.
\begin{figure}
\caption{Execution time (in seconds) needed to compute an approximation of the steady state distribution for the benchmark models from Section~\ref{subsec:models}, with $d = 6$ subsystems and varying mode sizes.}
\label{fig:fixedd_new}
\end{figure}
\begin{table}
\caption{Execution time (in seconds), number of iterations and maximum rank of the computed approximations for $\mathsf{overflow}$ with $d = 6$ and varying mode sizes. The symbol --- indicates that the desired accuracy could not be reached within $3\,600$ seconds.\label{tab:fixedd}}
\small
\begin{tabular}[c]{c|ccc|ccc|ccc}
&&AMEn&&&Multigrid&&&MultigridAMEn\\
n&time&iter&rank&time&iter&rank&time&iter&rank\\
\hline
5 & 0.7 & 4 & 13 & 5.9 & 8 & 15 & 6.2 & 8 & 15 \\
9 & 3.8 & 6 & 19 & 6.1 & 8 & 15 & 3.9 & 8 & 15 \\
17 & 239.4& 12 & 28 & 24.8 & 17 & 29 & 19.5 & 17 &29 \\
33 & --- & --- & --- & 102.9 & 17 & 41 & 104.6 & 17 & 41\\
65 & --- & --- & --- & 882.1 & 20 & 57 & 904.1 & 20 & 57\\
\end{tabular}
\end{table}
Figure~\ref{fig:fixedd_new} shows that AMEn outperforms the two multigrid methods (except for $\mathsf{kanbanalt2}$) for small mode sizes. Depending on the model, the multigrid algorithms start to be faster for mode sizes $9$ or $17$, as the subproblems to be solved in AMEn become too expensive at this point.
The bad performance of AMEn for $\mathsf{kanbanalt2}$ can be explained by the fact that the steady state distribution of this model has rather high TT ranks already for small mode sizes.
Concerning the comparison between the two multigrid methods, no significant difference is visible in Figure~\ref{fig:fixedd_new};
we have already seen in Figure~\ref{fig:fixedn_new} that $d = 6$ is not enough to let the coarsest grid problem solver dominate the computational time in Multigrid.
In fact, Figure~\ref{fig:fixedd_new} nicely confirms that using AMEn for solving the coarsest grid problem does not have an adverse effect on the convergence of multigrid.
The maximum problem size addressed in Figure~\ref{fig:fixedd_new} is $129^6 \approx 4.6 \times 10^{12}$.
\section{Conclusion}\label{sec:conclusion}
We have proposed a novel combination of two methods, AMEn and Multigrid, for computing the stationary distribution of large-scale tensor structured Markov chains. Our numerical experiments confirm that this combination truly combines the advantages of both methods. As a result, we can address a much wider range of problems in terms of number of subsystems and subsystem states. Also, our experiments demonstrate that the TT format is capable of dealing with a larger variety of applications and topologies compared to what has been previously reported in the literature.
{}
\end{document}
\begin{document}
\spacing{1.2}
\noindent{\Large\bf Equivariant incidence algebras and equivariant Kazhdan--Lusztig--Stanley theory}\\
\noindent{\bf Nicholas Proudfoot}\\
Department of Mathematics, University of Oregon,
Eugene, OR 97403\\
[email protected]\\
{\small
\begin{quote}
\noindent {\em Abstract.}
We establish a formalism for working with incidence algebras of posets with symmetries, and we develop
equivariant Kazhdan--Lusztig--Stanley theory within this formalism. This gives a new way of thinking about
the equivariant Kazhdan--Lusztig polynomial and equivariant $Z$-polynomial of a matroid.
\end{quote} }
\section{Introduction}
The incidence algebra of a locally finite poset was first introduced by Rota, and has proved to be a natural
formalism for studying such notions as M\"obius inversion \cite{Rota-incidence}, generating functions \cite{incidence-generating},
and Kazhdan--Lusztig--Stanley polynomials \cite[Section 6]{Stanley-h}.
A special class of Kazhdan--Lusztig--Stanley polynomials that have received a lot of attention recently is that of Kazhdan--Lusztig polynomials
of matroids, where the relevant poset is the lattice of flats \cite{EPW,KLS}. If a finite group $W$ acts on a matroid $M$ (and therefore on the lattice of flats),
one can define the $W$-equivariant Kazhdan--Lusztig polynomial of $M$ \cite{GPY}.
This is a polynomial whose coefficients are virtual representations of $W$, and has the property that taking dimensions recovers
the ordinary Kazhdan--Lusztig polynomial of $M$. In the case of the uniform matroid of rank $d$ on $n$ elements, it is actually
much easier to describe the $S_n$-equivariant Kazhdan--Lusztig polynomial, which admits a nice description in terms of partitions of $n$,
than it is to describe the non-equivariant Kazhdan--Lusztig polynomial \cite[Theorem 3.1]{GPY}.
While the definition of Kazhdan--Lusztig--Stanley polynomials is greatly clarified by the language of incidence algebras,
the definition of the equivariant Kazhdan--Lusztig polynomial of a matroid is completely {\em ad hoc} and not nearly as elegant.
The purpose of this note is to define the equivariant incidence algebra of a poset with a finite group of symmetries, and to show
that the basic constructions of Kazhdan--Lusztig--Stanley theory make sense in this more general setting.
In the case of a matroid, we show that this approach recovers the same equivariant Kazhdan--Lusztig polynomials that were defined
in \cite{GPY}.\\
\noindent
{\em Acknowledgments:}
We thank Tom Braden for his feedback on a preliminary draft of this work.
\section{The equivariant incidence algebra}
Fix once and for all a field $k$.
Let $P$ be a locally finite poset equipped with the action of a finite group $W$.
We consider the category $\mathcal{C}^W\!(P)$ whose objects consist of
\begin{itemize}
\item a $k$-vector space $V$
\item a direct product decomposition $V = \prod_{x\leq y\in P} V_{xy}$, with each $V_{xy}$ finite dimensional
\item an action of $W$ on $V$ compatible with the decomposition.
\end{itemize}
More concretely, for any $\sigma\in W$ and any $x\leq y\in P$, we have a linear map
$\varphi^{\sigma}_{xy}:V_{xy}\to V_{\sigma(x)\sigma(y)}$,
and we require that $\varphi^{e}_{xy} = \operatorname{id}_{V_{xy}}$ and that
$\varphi^{\sigma'}_{\sigma(x)\sigma(y)}\circ\varphi^{\sigma}_{xy} = \varphi^{\sigma'\sigma}_{xy}$.
Morphisms in $\mathcal{C}^W\!(P)$ are defined to be linear maps that are compatible with both the decomposition and the action.
This category admits a monoidal structure, with tensor product given by
$$(U\otimes V)_{xz} := \bigoplus_{x\leq y\leq z}U_{xy}\otimes V_{yz}.$$
Let $I^W\!(P)$ be the Grothendieck ring of $\mathcal{C}^W\!(P)$; we call $I^W\!(P)$ the {\bf equivariant incidence algebra} of $P$
with respect to the action of $W$.
\begin{example}
If $W$ is the trivial group, then $I^W\!(P)$ is isomorphic to the usual incidence algebra of $P$ with coefficients in $\mathbb{Z}$.
That is, it is isomorphic as an abelian group to a direct product of copies of $\mathbb{Z}$, one for each interval in $P$, and multiplication
is given by convolution.
\end{example}
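When $W$ is trivial, the convolution product above is easy to experiment with directly. The following sketch is our own illustration, not part of the paper; the poset, the names, and the choice of the divisor lattice of $12$ are all ours.

```python
# Divisors of 12, ordered by divisibility: a (finite) locally finite poset.
P = [1, 2, 3, 4, 6, 12]
leq = lambda x, y: y % x == 0
intervals = [(x, y) for x in P for y in P if leq(x, y)]

# An element of the ordinary incidence algebra is a Z-valued function on intervals.
delta = {(x, y): int(x == y) for (x, y) in intervals}  # the unit
zeta = {(x, y): 1 for (x, y) in intervals}             # the zeta function

def convolve(f, g):
    """(fg)_{xz} = sum over x <= y <= z of f_{xy} * g_{yz}."""
    return {(x, z): sum(f[(x, y)] * g[(y, z)]
                        for y in P if leq(x, y) and leq(y, z))
            for (x, z) in intervals}

# delta really is a two-sided unit.
assert convolve(delta, zeta) == zeta == convolve(zeta, delta)

# (zeta * zeta)_{xz} counts the elements of the interval [x, z].
assert convolve(zeta, zeta)[(1, 12)] == len(P)
```

The same dictionary-of-intervals representation extends verbatim to coefficients in any commutative ring.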
\begin{remark}\label{base change}
If $W$ acts on $P$ and $\psi:W'\to W$ is a group homomorphism, then $\psi$ induces a functor $F_\psi:\mathcal{C}^W\!(P)\to\mathcal{C}^{W'}\!(P)$
and a ring homomorphism $R_\psi:I^W\!(P)\to I^{W'}\!(P)$.
\end{remark}
We now give a second, more down-to-earth description of $I^W\!(P)$.
Let $\operatorname{VRep}(W)$ denote the ring of finite dimensional virtual representations of $W$ over the field $k$.
A group homomorphism $\psi:W'\to W$ induces a ring homomorphism $\Lambda_\psi:\operatorname{VRep}(W)\to\operatorname{VRep}(W')$.
For any $x\in P$, let $W_x\subset W$ be the stabilizer of $x$. We also define
$W_{xy} := W_x\cap W_y$ and $W_{xyz} := W_x \cap W_y\cap W_z$.
Note that, for any $x,y\in P$ and $\sigma\in W$, conjugation by $\sigma$ gives a group isomorphism
$$\psi_{xy}^\sigma:W_{xy}\to W_{\sigma(x)\sigma(y)},$$ which induces a ring isomorphism
$$\Lambda_{\psi_{xy}^\sigma}:\operatorname{VRep}(W_{\sigma(x)\sigma(y)})\to \operatorname{VRep}(W_{xy}).$$
An element $f\in I^W\!(P)$ is uniquely determined by a collection $$\{f_{xy}\mid x\leq y\in P\},$$ where $f_{xy}\in\operatorname{VRep}(W_{xy})$
and for any $\sigma\in W$ and $x\leq y\in P$, $f_{xy} = \Lambda_{\psi_{xy}^\sigma}\left(f_{\sigma(x)\sigma(y)}\right)$.
The unit $\delta\in I^W\!(P)$ is characterized by the property that $\delta_{xx}$ is the 1-dimensional trivial representation of $W_x$ for all $x\in P$
and $\delta_{xy} = 0$ for all $x<y\in P$.
The following proposition describes the product structure on $I^W\!(P)$ in this representation.
\begin{proposition}\label{multiplication}
For any $f,g\in I^W\!(P)$,
$$(fg)_{xz} = \sum_{x\leq y\leq z}\frac{|W_{xyz}|}{|W_{xz}|}\operatorname{Ind}_{W_{xyz}}^{W_{xz}} \left(\left(\operatorname{Res}^{W_{xy}}_{W_{xyz}} f_{xy}\right)\otimes\left( \operatorname{Res}^{W_{yz}}_{W_{xyz}} g_{yz}\right)\right).$$
\end{proposition}
\begin{remark}\label{fractions}
It may be surprising to see the fraction $\frac{|W_{xyz}|}{|W_{xz}|}$ in the statement of Proposition \ref{multiplication},
since $\operatorname{VRep}(W_{xy})$ is not a vector space over the rational numbers.
We could in fact replace the sum over $[x,z]$ with a sum over one representative of each $W_{xz}$-orbit in $[x,z]$ and then eliminate the factor of
$\frac{|W_{xyz}|}{|W_{xz}|}$. Including the fraction in the equation allows us to avoid choosing such representatives.
\end{remark}
\begin{remark}
Proposition \ref{multiplication} could be taken as the definition of $I^W\!(P)$.
It is not so easy to prove associativity directly from this definition,
though it can be done with the help of Mackey's restriction formula (see for example \cite[Corollary 32.2]{Bump}).
\end{remark}
\begin{remark}\label{dimension}
Suppose that $\psi:W'\to W$ is a group homomorphism, and for any $x,y\in P$, consider the induced group homomorphism $\psi_{xy}:W'_{xy}\to W_{xy}$.
For any $f\in I^W\!(P)$, we have $R_\psi(f)_{xy} = \Lambda_{\psi_{xy}}\left(f_{xy}\right)$.
In particular, if $W'$ is the trivial group, then $R_\psi(f)_{xy}$ is equal to the dimension of the virtual representation $f_{xy}\in\operatorname{VRep}(W_{xy})$.
\end{remark}
Before proving Proposition \ref{multiplication}, we state the following standard lemma in representation theory.
\begin{lemma}\label{induced reps}
Suppose that $E = \bigoplus_{s\in S} E_s$ is a vector space that decomposes as a direct sum of pieces indexed by a finite set $S$.
Suppose that $G$ acts linearly on $E$ and acts by permutations on $S$ such that, for all $s\in S$ and $\gamma\in G$,
$\gamma\cdot E_s = E_{\gamma\cdot s}$. For each $s\in S$, let $G_s\subset G$ denote the stabilizer of $s$.
Then there exists an isomorphism
$$E \cong \bigoplus_{s\in S}\frac{|G_s|}{|G|}\operatorname{Ind}_{G_s}^{G} \big(E_s\big)$$
of representations of $G$.\footnote{As in Remark \ref{fractions}, we may eliminate the fraction at the cost of choosing one
representative of each $G$-orbit in $S$.}
\end{lemma}
\begin{proof}[Proof of Proposition \ref{multiplication}.]
By linearity, it is sufficient to prove the proposition in the case where we have objects $U$ and $V$ of $\mathcal{C}^W\!(P)$
with $f = [U]$ and $g = [V]$.
This means that, for all $x\leq y\leq z\in P$, $f_{xy} = [U_{xy}]\in\operatorname{VRep}(W_{xy})$,
$g_{yz} = [V_{yz}]\in\operatorname{VRep}(W_{yz})$,
and $$(fg)_{xz} = \big[(U\otimes V)_{xz}\big] = \left[\bigoplus_{x\leq y\leq z}U_{xy}\otimes V_{yz}\right]\in \operatorname{VRep}(W_{xz}).$$
The proposition then follows from Lemma \ref{induced reps} by taking $E = (U\otimes V)_{xz}$, $S = [x,z]$, and $G = W_{xz}$.
\end{proof}
Let $R$ be a commutative ring. Given an element $f\in I^W\!(P)\otimes R$ and a pair of elements $x\leq y\in P$, we will write $f_{xy}$ to denote the
corresponding element of $\operatorname{VRep}(W_{xy})\otimes R$.
\begin{proposition}\label{inverses}
An element $f\in I^W\!(P)\otimes R$ is (left or right) invertible if and only if $f_{xx}\in\operatorname{VRep}(W_{x})\otimes R$ is invertible for all $x\in P$.
In this case, the left and right inverses are unique and they coincide.
\end{proposition}
\begin{proof}
By Proposition \ref{multiplication}, an element $g$ is a right inverse to $f$ if and only if $g_{xx} = f_{xx}^{-1}$ for all $x\in P$ and
$$\sum_{x\leq y\leq z}\frac{|W_{xyz}|}{|W_{xz}|}\operatorname{Ind}_{W_{xyz}}^{W_{xz}} \left(\left(\operatorname{Res}^{W_{xy}}_{W_{xyz}} f_{xy}\right)\otimes\left( \operatorname{Res}^{W_{yz}}_{W_{xyz}} g_{yz}\right)\right) = 0$$ for all $x<z\in P$.\footnote{If the ring $R$ has integer torsion, then we rewrite this equation without
the fractions as described in Remark \ref{fractions}.}
The second condition can be rewritten as
$$\left(\operatorname{Res}^{W_x}_{W_{xz}} f_{xx}\right)\otimes g_{xz} = - \sum_{x<y\leq z}\frac{|W_{xyz}|}{|W_{xz}|}\operatorname{Ind}_{W_{xyz}}^{W_{xz}} \left(\left(\operatorname{Res}^{W_{xy}}_{W_{xyz}} f_{xy}\right)\otimes\left( \operatorname{Res}^{W_{yz}}_{W_{xyz}} g_{yz}\right)\right).$$
If $f_{xx}$ is invertible in $\operatorname{VRep}(W_{x})\otimes R$, then $\operatorname{Res}^{W_x}_{W_{xz}} f_{xx}$ is invertible in $\operatorname{VRep}(W_{xz})\otimes R$, and this equation has a unique solution
for $g$. Thus $f$ has a right inverse if and only if $f_{xx}\in\operatorname{VRep}(W_{x})\otimes R$ is invertible for all $x\in P$. The argument for left inverses is identical,
so it remains only to show that left and right inverses coincide.
Let $g$ be a right inverse to $f$. Since $g_{xx} = f_{xx}^{-1}$ is invertible for all $x\in P$, $g$ itself admits a right inverse, which we will denote by $h$; that is, $g$ is a left inverse to $h$. We then have
$$f = f\delta = f(gh) = (fg)h = \delta h = h,$$
so $gf = gh = \delta$, and $g$ is a left inverse to $f$, as well.
\end{proof}
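In the non-equivariant case, the triangular recursion used in this proof is the classical computation of the M\"obius function as the inverse of the zeta function. The sketch below is our own illustration (the Boolean lattice on three elements, and we take $f_{xx}=1$ for simplicity), not code from the paper.

```python
from itertools import combinations

# Boolean lattice: subsets of {0, 1, 2} ordered by inclusion.
ground = (0, 1, 2)
P = [frozenset(c) for k in range(4) for c in combinations(ground, k)]
leq = lambda x, y: x <= y
intervals = [(x, y) for x in P for y in P if leq(x, y)]

zeta = {(x, y): 1 for (x, y) in intervals}

def right_inverse(f):
    """Solve f * g = delta interval by interval (shortest intervals first) via
    g_{xz} = -f_{xx}^{-1} * sum_{x < y <= z} f_{xy} g_{yz}; we assume f_{xx} = 1."""
    g = {}
    for (x, z) in sorted(intervals, key=lambda i: len(i[1] - i[0])):
        if x == z:
            g[(x, z)] = 1
        else:
            g[(x, z)] = -sum(f[(x, y)] * g[(y, z)]
                             for y in P if x < y and leq(y, z))
    return g

mu = right_inverse(zeta)
# The inverse of zeta is the Moebius function: mu(x, y) = (-1)^{|y \ x|}.
assert all(mu[(x, y)] == (-1) ** len(y - x) for (x, y) in intervals)
```

Note that each step only uses values $g_{yz}$ on strictly shorter intervals, which is exactly why processing intervals in order of length terminates.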
\section{Equivariant Kazhdan--Lusztig--Stanley theory}
In this section we take $R$ to be the ring $\mathbb{Z}[t]$, and for each $f\in I^W\!(P)\otimes \mathbb{Z}[t]$ and $x\leq y\in P$, we write $f_{xy}(t)$ for the corresponding component of $f$.
One can regard $f_{xy}(t)$ as a polynomial whose coefficients are virtual representations of $W_{xy}$, or equivalently as a graded virtual representation
of $W_{xy}$. We assume that $P$ is equipped with a $W$-invariant {\bf weak rank function} in the sense of \cite[Section 2]{Brenti-twisted}.
This is a collection of natural numbers $\{r_{xy}\in\mathbb{N}\mid x\leq y\in P\}$ with the following properties:
\begin{itemize}
\item $r_{xy} > 0$ if $x<y$
\item $r_{xy}+r_{yz}=r_{xz}$ if $x\leq y\leq z$
\item $r_{xy} = r_{\sigma(x)\sigma(y)}$ if $x\leq y$ and $\sigma\in W$.
\end{itemize}
Following the notation of \cite[Section 2.1]{KLS}, we define
$$\mathcal{I}^W\!(P) := \left\{f\in I^W\!(P)\otimes \mathbb{Z}[t]\;\Big|\; \text{$\deg f_{xy}(t)\leq r_{xy}$ for all $x\leq y$}\right\}$$
along with $$\mathcal{I}^W_{1/2}(P) := \left\{f\in I^W\!(P)\otimes \mathbb{Z}[t]\;\Big|\; \text{$\deg f_{xy}(t)< r_{xy}/2$ for all $x< y$ and $f_{xx}(t) = \delta_{xx}(t)$ for all $x$}\right\}.$$
Note that $\mathcal{I}^W\!(P)$ is a subalgebra of $I^W\!(P)\otimes \mathbb{Z}[t]$,
and we define an involution $f\mapsto \bar f$ of $\mathcal{I}^W\!(P)$ by putting $\bar f_{xy}(t) := t^{r_{xy}} f_{xy}(t^{-1})$.
An element $\kappa\in\mathcal{I}^W\!(P)$ is called a {\bf \boldmath{$P$}-kernel} if $\kappa_{xx}(t) = \delta_{xx}(t)$ for all $x\in P$ and $\bar\kappa = \kappa^{-1}$.
\begin{theorem}\label{thm:KL}
If $\kappa\in\mathcal{I}^W\!(P)$ is a $P$-kernel, there exists a unique pair of functions $f,g\in\mathcal{I}^W_{1/2}(P)$ such that
$\bar f = \kappa f$ and $\bar g = g\kappa$.
\end{theorem}
\begin{proof}
We follow the proof in \cite[Theorem 2.2]{KLS}.
We will prove existence and uniqueness of $f$; the proof for $g$ is identical.
Fix elements $x<w\in P$. Suppose that $f_{yw}(t)$ has been defined for all $x<y\leq w$ and that the equation $\bar f = \kappa f$
holds where defined.
Let $$Q_{xw}(t) := \sum_{x<y\leq w} \frac{|W_{xyw}|}{|W_{xw}|}
\operatorname{Ind}_{W_{xyw}}^{W_{xw}}\left(\left(\operatorname{Res}^{W_{xy}}_{W_{xyw}}\kappa_{xy}(t)\right)\otimes\left( \operatorname{Res}^{W_{yw}}_{W_{xyw}}f_{yw}(t)\right)\right)
\in \operatorname{VRep}(W_{xw})\otimes \mathbb{Z}[t].$$
The equation $\bar f = \kappa f$ for the interval $[x,w]$ translates to $$\bar f_{xw}(t) - f_{xw}(t) = Q_{xw}(t).$$
It is clear that there is at most one polynomial $f_{xw}(t)$ of degree strictly less than $r_{xw}/2$ satisfying this equation.
The existence of such a polynomial is equivalent to the statement $$t^{r_{xw}}Q_{xw}(t^{-1}) = -Q_{xw}(t).$$
To prove this, we observe that
\begin{eqnarray*}
t^{r_{xw}}Q_{xw}(t^{-1}) &=& t^{r_{xw}}\sum_{x<y\leq w} \frac{|W_{xyw}|}{|W_{xw}|}\operatorname{Ind}_{W_{xyw}}^{W_{xw}}\left(\left(\operatorname{Res}^{W_{xy}}_{W_{xyw}}\kappa_{xy}(t^{-1})\right)\otimes\left( \operatorname{Res}^{W_{yw}}_{W_{xyw}}f_{yw}(t^{-1})\right)\right)\\
&=& \sum_{x<y\leq w} \frac{|W_{xyw}|}{|W_{xw}|}\operatorname{Ind}_{W_{xyw}}^{W_{xw}}\left(\left(\operatorname{Res}^{W_{xy}}_{W_{xyw}}t^{r_{xy}}\kappa_{xy}(t^{-1})\right)\otimes\left( \operatorname{Res}^{W_{yw}}_{W_{xyw}}t^{r_{yw}}f_{yw}(t^{-1})\right)\right)\\
&=& \sum_{x<y\leq w} \frac{|W_{xyw}|}{|W_{xw}|}\operatorname{Ind}_{W_{xyw}}^{W_{xw}}\left(\left(\operatorname{Res}^{W_{xy}}_{W_{xyw}}\bar\kappa_{xy}(t)\right)\otimes\left( \operatorname{Res}^{W_{yw}}_{W_{xyw}}\bar f_{yw}(t)\right)\right)\\
&=& \sum_{x<y\leq w} \frac{|W_{xyw}|}{|W_{xw}|}\operatorname{Ind}_{W_{xyw}}^{W_{xw}}\left(\left(\operatorname{Res}^{W_{xy}}_{W_{xyw}}\bar\kappa_{xy}(t)\right)\otimes\left( \operatorname{Res}^{W_{yw}}_{W_{xyw}}(\kappa f)_{yw}(t)\right)\right).
\end{eqnarray*}
This is formally equal to the expression for $(\bar\kappa(\kappa f))_{xw} - (\kappa f)_{xw}$, which by associativity
is equal to the expression for $$((\bar\kappa \kappa)f)_{xw} - (\kappa f)_{xw} = f_{xw} - (\kappa f)_{xw}.$$
Thus we have
\begin{eqnarray*}
t^{r_{xw}}Q_{xw}(t^{-1}) &=&
- \sum_{x<y\leq w} \frac{|W_{xyw}|}{|W_{xw}|}\operatorname{Ind}_{W_{xyw}}^{W_{xw}}\left(\left(\operatorname{Res}^{W_{xy}}_{W_{xyw}}\kappa_{xy}(t)\right)\otimes\left( \operatorname{Res}^{W_{yw}}_{W_{xyw}}f_{yw}(t)\right)\right)\\
&=& - Q_{xw}(t).
\end{eqnarray*}
Thus there is a unique choice of polynomial $f_{xw}(t)$ consistent with the equation $\bar f = \kappa f$ on the interval $[x,w]$.
\end{proof}
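Non-equivariantly, the recursion in this proof can be run by machine. The sketch below is our own illustration (polynomials stored as coefficient lists): we take $P$ to be the Boolean lattice on three elements and $\kappa$ its characteristic function, $\kappa_{xy}(t)=(t-1)^{r_{xy}}$, and recover the known fact that every Kazhdan--Lusztig polynomial of a Boolean matroid equals $1$.

```python
from itertools import combinations

def pmul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    r = [0] * max(len(p), len(q))
    for i, a in enumerate(p):
        r[i] += a
    for j, b in enumerate(q):
        r[j] += b
    return r

ground = (0, 1, 2)
P = [frozenset(c) for k in range(4) for c in combinations(ground, k)]
leq = lambda x, y: x <= y
rank = lambda x, y: len(y - x)

def kappa(x, y):
    """Kernel: the characteristic polynomial (t - 1)^{r_{xy}} of a Boolean interval."""
    p = [1]
    for _ in range(rank(x, y)):
        p = pmul(p, [-1, 1])
    return p

# Solve bar(f) = kappa * f by induction on interval length, as in the proof.
f = {}
for (x, w) in sorted(((x, w) for x in P for w in P if leq(x, w)),
                     key=lambda i: rank(*i)):
    if x == w:
        f[(x, w)] = [1]
        continue
    r = rank(x, w)
    # Q_{xw}(t) = sum over x < y <= w of kappa_{xy}(t) * f_{yw}(t)
    Q = [0]
    for y in P:
        if x < y and leq(y, w):
            Q = padd(Q, pmul(kappa(x, y), f[(y, w)]))
    # bar(f)_{xw} - f_{xw} = Q_{xw} forces f_{xw} = -(part of Q of degree < r/2)
    f[(x, w)] = [-c for c in Q[: (r + 1) // 2]]

# Every Kazhdan--Lusztig polynomial of a Boolean matroid equals 1.
assert all(f[i][0] == 1 and not any(f[i][1:]) for i in f)
```

The truncation step works precisely because of the antisymmetry $t^{r_{xw}}Q_{xw}(t^{-1})=-Q_{xw}(t)$ established in the proof: the coefficients of $Q$ below degree $r_{xw}/2$ determine those above it.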
We will refer to the element $f\in \mathcal{I}^W_{1/2}(P)$ from Theorem \ref{thm:KL} as the {\bf right equivariant KLS-function} associated with $\kappa$,
and to $g$ as the {\bf left equivariant KLS-function} associated with $\kappa$.
For any $x\leq y$, we will refer to the graded virtual representations $f_{xy}(t)$ and $g_{xy}(t)$
as (right or left) {\bf equivariant KLS-polynomials}.
When $W$ is the trivial group, these definitions specialize to the ones in \cite[Section 2]{KLS}.
\begin{example}\label{char}
Let $\zeta\in\mathcal{I}^W\!(P)$ be the element defined by letting $\zeta_{xy}(t)$ be the trivial representation of $W_{xy}$
in degree zero for all $x\leq y$, and let $\chi := \zeta^{-1}\bar\zeta$. The function $\chi$ is called the {\bf equivariant characteristic
function} of $P$ with respect to the action of $W$. We have $\chi^{-1} = \bar\zeta^{-1}\zeta = \bar\chi$, so $\chi$ is a $P$-kernel.
Since $\bar\zeta = \zeta\chi$, $\zeta$ is equal to the left KLS-function associated with $\chi$. However, the right KLS-function $f$
associated with $\chi$ is much more interesting! See Propositions \ref{char-OS} and \ref{matroid main} for a special case of this construction.
\end{example}
We next introduce the equivariant analogue of the material in \cite[Section 2.3]{KLS}.
If $\kappa$ is a $P$-kernel with right and left KLS-functions $f$ and $g$,
we define $Z := g\kappa f\in\mathcal{I}^W\!(P)$, which we call the {\bf equivariant \boldmath{$Z$}-function} associated with $\kappa$.
For any $x\leq y$, we will refer to the graded virtual representation $Z_{xy}(t)$ as an {\bf equivariant \boldmath{$Z$}-polynomial}.
\begin{proposition}\label{palindromic}
We have $\bar Z = Z$.
\end{proposition}
\begin{proof}
Since $\bar g = g\kappa$, we have $Z = g\kappa f = \bar g f$.
Since $\bar f = \kappa f$, we have $Z = g\kappa f = g\bar f$.
Thus $\bar Z = \overline{\bar g f} = \bar{\bar g}\bar f = g\bar f = Z$.
\end{proof}
\begin{remark}\label{restrictions}
Suppose that $\kappa\in\mathcal{I}^W\!(P)$ is a $P$-kernel and $f,g,Z\in\mathcal{I}^W\!(P)$ are the associated equivariant KLS-functions and equivariant $Z$-function.
It is immediate from the definitions that, if $\psi:W'\to W$ is a group homomorphism, then $R_\psi(f),R_\psi(g),R_\psi(Z)\in I^{W'}\!(P)$
are the equivariant KLS-functions and equivariant $Z$-function associated with the $P$-kernel $R_\psi(\kappa)\in I^{W'}\!(P)$.
In particular, if we take $W'$ to be the trivial group, then Remark \ref{dimension} tells us that the ordinary KLS-polynomials and $Z$-polynomials
are recovered from the equivariant KLS-polynomials and $Z$-polynomials by sending virtual representations to their dimensions.
\end{remark}
\section{Matroids}\label{sec:matroid}
Let $M$ be a matroid, let $L$ be the lattice of flats of $M$
equipped with the usual weak rank function, and let $W$ be a finite group acting on $L$.
Let $OS^W_M(t)$ be the Orlik--Solomon algebra of $M$, regarded as a graded representation of $W$.
Following \cite[Section 2]{GPY}, we define
$$H^W_M(t) := t^{\operatorname{rk} M}OS_M^W(-t^{-1})\in\operatorname{VRep}(W)\otimes\mathbb{Z}[t].$$
If $W$ is trivial, then $H^W_M(t)\in\mathbb{Z}[t]$ is equal to the characteristic polynomial of $M$.
For any $F\leq G\in L$, let $M_{FG}$ be the minor of $M$ with lattice of flats $[F,G]$
obtained by deleting the complement of $G$ and contracting $F$; this matroid inherits an action of the stabilizer group $W_{FG}\subset W$.
Define $H\in\mathcal{I}^W\!(L)$ by putting $H_{FG}(t) = H_{M_{FG}}^{W_{FG}}(t)$ for all $F\leq G$.
\begin{proposition}\label{char-OS}
The function $H$ is equal to the equivariant characteristic function of $L$.
\end{proposition}
\begin{proof}
It is proved in \cite[Lemma 2.5]{GPY} that $\zeta H = \bar \zeta$. Multiplying on the left by $\zeta^{-1}$, we have $H = \zeta^{-1}\bar\zeta$,
which is the definition of the equivariant characteristic function of $L$.
\end{proof}
\begin{remark}
The proof of \cite[Lemma 2.5]{GPY} is surprisingly difficult.\footnote{The difficult
part appears in the proof of Lemma 2.4, which is then used to prove Lemma 2.5.}
Consequently, Proposition \ref{char-OS} is a deep fact about Orlik--Solomon algebras, not just a formal consequence of the definitions.
\end{remark}
The {\bf equivariant Kazhdan--Lusztig polynomial} $P^W_M(t) \in \operatorname{VRep}(W)\otimes\mathbb{Z}[t]$ was introduced
in \cite[Section 2.2]{GPY}.
Define $P\in\mathcal{I}^W_{1/2}(L)$ by putting $P_{FG}(t) = P_{M_{FG}}^{W_{FG}}(t)$ for all $F\leq G$.
The defining recursion for $P^W_M(t)$ in \cite[Theorem 2.8]{GPY} translates to the formula $\bar P = H P$, which immediately implies the following proposition.
\begin{proposition}\label{matroid main}
The function $P$ is the right equivariant KLS-function associated with $H$.
\end{proposition}
The {\bf equivariant \boldmath{$Z$}-polynomial} $Z^W_M(t) \in \operatorname{VRep}(W)\otimes\mathbb{Z}[t]$ was introduced
in \cite[Section 6]{PXY}. Define $Z\in\mathcal{I}^W\!(L)$ by putting $Z_{FG}(t) = Z_{M_{FG}}^{W_{FG}}(t)$ for all $F\leq G$.
The defining recursion for $Z^W_M(t)$ in \cite[Section 6]{PXY} translates to the formula $Z = \bar\zeta P$.
\begin{proposition}\label{matroid Z}
The function $Z$ is the $Z$-function associated with $H$.
\end{proposition}
\begin{proof}
Example \ref{char} tells us that the left KLS-function associated with $H$ is $\zeta$ and Proposition \ref{matroid main} tells us that the
right KLS-function associated with $H$ is $P$, thus the $Z$-function is equal to
$\zeta H P = \bar\zeta P = Z.$
\end{proof}
The following corollary was asserted without proof in \cite[Section 6]{PXY}, and follows immediately from Propositions \ref{palindromic} and \ref{matroid Z}.
\begin{corollary}
The polynomial $Z^W_M(t)$ is palindromic. That is, $t^{\operatorname{rk} M}Z^W_M(t^{-1}) = Z^W_M(t)$.
\end{corollary}
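As a sanity check on this corollary in the simplest case (ours, not from the paper): for a Boolean matroid of rank $r$ every Kazhdan--Lusztig polynomial is $1$, so $Z = \bar\zeta P$ gives $Z_M(t)=\sum_F t^{\operatorname{rk} F}=(1+t)^r$, which is visibly palindromic.

```python
from itertools import combinations
from math import comb

ground = (0, 1, 2, 3)
r = len(ground)
# Flats of the Boolean matroid on `ground` are all subsets.
flats = [frozenset(c) for k in range(r + 1) for c in combinations(ground, k)]

# Z = bar(zeta) * P with every KL polynomial equal to 1 gives
# Z(t) = sum over flats F of t^{rank F}.
Z = [0] * (r + 1)
for F in flats:
    Z[len(F)] += 1

assert Z == [comb(r, k) for k in range(r + 1)]  # Z(t) = (1 + t)^r
assert Z == Z[::-1]                             # palindromic, as the corollary asserts
```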
When $W$ is the trivial group, Gao and Xie define polynomials $Q_M(t)$ and $\hat Q_M(t) = (-1)^{\operatorname{rk} M}Q_M(t)$ with the property that
$\left(P^{-1}\right)_{FG}(t) = \hat Q_{M_{FG}}(t)$ \cite{GX}. If $\hat 0$ and $\hat 1$ are the minimal and maximal flats of $M$,
this is equivalent to the statement that $Q_M(t) = (-1)^{\operatorname{rk} M}\left(P^{-1}\right)_{\hat 0 \hat 1}(t)$.
The polynomial $Q_M(t)$ is called the {\bf inverse Kazhdan--Lusztig polynomial of $M$}.\footnote{The reason for bestowing this name on $Q_M(t)$ rather than $\hat Q_M(t)$ is that $Q_M(t)$ has non-negative coefficients.}
Using the machinery of this paper, we may extend their definition to the equivariant setting by defining
the {\bf equivariant inverse Kazhdan--Lusztig polynomial}
$$Q_M^W(t) := (-1)^{\operatorname{rk} M}\left(P^{-1}\right)_{\hat 0 \hat 1}(t).$$ If we then define $\hat Q\in\mathcal{I}^W_{1/2}(L)$ by putting $\hat Q_{FG}(t) = (-1)^{r_{FG}}Q_{M_{FG}}^{W_{FG}}(t)$ for all $F\leq G$,
we immediately obtain the following proposition.
\begin{proposition}
The functions $P$ and $\hat Q$ are mutual inverses in $\mathcal{I}^W\!(L)$.
\end{proposition}
\bibliography{./symplectic}
\bibliographystyle{amsalpha}
\end{document}
\begin{document}
\title{Cremona linearizations of some classical varieties}
\begin{abstract}
In this paper we present an effective method for {linearizing}
rational varieties of codimension at least two under Cremona
transformations, starting from a given parametrization. Using these
linearizing Cremonas, we simplify the equations of secant and
tangential varieties of some classical examples, including Veronese,
Segre and Grassmann varieties. We end the paper by treating the
special case of the Segre embedding of the $n$--fold product of
projective spaces, where cumulant Cremonas, arising from algebraic
statistics, appear as specific cases of our general construction.
\end{abstract}
\section*{Introduction}\label{sec:intro}
Computations in Algebraic Geometry may be very sensitive to the choice of
coordinates. Often, by picking the appropriate coordinate
system, calculations or expressions can be greatly simplified.
Changes of variables by rational functions are classically known as
\emph{Cremona transformations}, and they give huge flexibility when
dealing with systems of polynomial equations. In this paper, we
focus on varieties and maps defined over $\mathbb{C}$.
Cremona transformations, one of the most venerable topics in algebraic geometry,
were widely studied in the second half of the 19th century and the first half of the 20th.
They became fashionable again in recent times, after the spectacular developments of birational
geometry due to Mori, Kawamata, Koll\'ar, et al., and even more recently to Birkar, Cascini, Hacon, and McKernan (see \cite {dr} and references therein). However, despite this great progress, Cremona transformations still hold a great deal of surprises. The aim of the present, mainly expository, paper is to show how useful they can be in studying some
classical geometric invariants of complex projective varieties, linking previous independent work of the last three authors (see \cite {GHPRS, MP, pwz-2010-cumulants}). For a recent survey on the properties of the group of Cremona transformations, we refer the reader to~\cite{SegreCantat}.
The work of Mella and Polastri \cite{MP} shows that any rational variety $X$ of
codimension at least two in $\mathbb{P}^ r$ can be \emph{linearized} by a Cremona
transformation: a \emph{linearizing Cremona map} is a transformation that maps $X$
birationally to a linear subspace of $\mathbb{P}^ r$. In
Section~\ref{sec:crem-equiv} we provide a proof of the
aforementioned result, close to the original one in spirit, but best suited for effective computations.
If $X$ in $\mathbb{P}^ r$ admits a birational linear projection to a linear
subspace, a Cremona linearizing map can be directly constructed as a \emph{triangular Cremona
transformation} (see \S~\ref{sec:trianguar}).
In fact, in Section~\ref{sec:constr-crem-transf} we present a
systematic approach to Cremona transformations linearizing a
rational variety. They turn out to be building blocks for
the \emph{cumulant Cremona transformations}, which we present in
Section~\ref{sec:cum}. In Section \ref {sec:crem-repr} we discuss the effect of linearizing Cremona
transformations on tangential and secant varieties. We devote Section~\ref{sec:examples} to the study of a number of classical examples, including Veronese, Segre and Grassmann varieties, and in these cases we observe an interesting feature of linearizing Cremonas: they tend to simplify also tangential and secant varieties.
The final Section~\ref{sec:cum} is, as we said, devoted to {cumulant Cremona transformations} which appear already, in a simple case, in Section~\ref{sec:examples}. Cumulants arise from algebraic statistics (see \cite {pwz-2010-cumulants}) and can be viewed as the choice of preferable coordinates in which varieties coming from algebraic
statistics simplify. For instance, Sturmfels and Zwiernik~\cite{pwz-2011-bincum} used
cumulants to simplify the equations of the Segre embedding of the
$n$-fold product of $\mathbb{P}^1$ and of its tangent variety. More recent results in the same direction are contained in \cite{MM,MOZ}. Cumulants are
particular instances of linearizing Cremona transformations.
We conclude by indicating how to
generalize some combinatorial formulas in \cite{MOZ, pwz-2011-bincum,pwz-2010-cumulants}.
\section{Construction of some Cremona transformations}
\label{sec:constr-crem-transf}
In this section we present some recipes for constructing
Cremona transformations. We focus on two specific closely-related types:
\emph{monoidal extensions} and \emph{triangular Cremona transformations}. All constructions in this paper may be
seen as iterated applications of monoidal extensions
as in
Section~\ref{sec:mono-extens-rati} (see also \cite {CS}).
\subsection{Basics on Cremona transformations} A \emph{Cremona transformation} is a birational map
\begin{equation}
\label{eq:1}
\varphi\colon \mathbb{P}^ r\dasharrow \mathbb{P}^r, \qquad
[x_0,\ldots,x_r]\mapsto [F_0(x_0,\ldots,x_r),\ldots,
F_r(x_0,\ldots,x_r)],
\end{equation}
where $F_i(x_0,\ldots,x_r)$ are coprime homogeneous
polynomials of the same degree $\delta>0$, for $0\leqslant i\leqslant r$. The
inverse map is also a Cremona transformation, and it is defined by
coprime homogeneous polynomials $G_i(x_0,\ldots,x_r)$ of degree $\delta'>0$, for $0\leqslant i\leqslant r$. In this case, we say that
$\varphi$ is a \emph{$(\delta, \delta')$--Cremona transformation}. The subscheme
${\rm Ind}(\varphi):=\{F_i(x_0,\ldots,x_r)=0\}_{0\leqslant i\leqslant r}$ is the {\em indeterminacy locus} of $\varphi$. Since the
composition of $\varphi$ and its inverse is the identity, we have
\[G_i(F_0(x_0,\ldots,x_r), \ldots, F_r(x_0,\ldots,x_r))=\Phi\cdot x_i,
\,\,\, \text {for}\,\,\, 0\leqslant i\leqslant r\] where $\Phi$ is a
homogeneous polynomial of degree $\delta\cdot \delta'-1$. The
hypersurface ${\rm Fund}(\varphi):=\{\Phi=0\}$ is the
\emph{fundamental locus} of $\varphi$ and its support is the
\emph{reduced fundamental locus} ${\rm Fund}_{\rm red}(\varphi)$. In
general one cannot reconstruct ${\rm Fund}(\varphi)$ from ${\rm
Fund}_{\rm red}(\varphi)$, except when ${\rm Fund}_{\rm
red}(\varphi)$ is irreducible. Indeed, in this case ${\rm Fund}(\varphi)$ is
${\rm Fund}_{\rm red}(\varphi)$ counted with multiplicity
$(\delta\cdot \delta'-1)/e$, where $e$ is the degree of ${\rm Fund}_{\rm red}(\varphi)$.
By construction, ${\rm Ind}(\varphi)\subset {\rm Fund}(\varphi)$, and $\varphi$ is one-to-one on the complement of ${\rm Fund}_{\rm red}(\varphi)$.
Often in this paper, the loci ${\rm Fund}_{\rm red}(\varphi)$
and ${\rm Fund}_{\rm red}(\varphi^ {-1})$ induced by the Cremona
transformation $\varphi$ are the same
hyperplane.
In those cases, we can see $\varphi$ as a polynomial automorphism
$\varphi_a\colon \mathbb{A}^ r\to \mathbb{A}^ r$ (often denoted by $\varphi$)
whose extension to $ \mathbb{P}^ r$ contains the hyperplane at infinity as its
fundamental locus.
\subsection{Monoids} Let $X\subset \mathbb{P}^ r$ be a hypersurface of degree
$d$ where $r\geqslant 2$. We say that $X$
is a \emph{monoid} with \emph{vertex} $p\in \mathbb{P}^ r$ if $X$ is irreducible and $p$ is a point in $X$
of multiplicity exactly $d-1$. Note that a monoid can have more than one
vertex. If we choose projective coordinates in such a way that
$p=[0,\ldots,0,1]$, then the defining equation of $X$ is
\begin{equation*}\label{eq:monoid}
f(x_0,\ldots,x_{r})=f_{d-1}(x_0,\ldots,x_{r-1})\,x_{r}+
f_{d}(x_0,\ldots,x_{r-1})=0,\end{equation*}
where $f_{d-1}$ and $f_d$ are homogeneous polynomials of degree $d-1$ and $d$ respectively and $f_{d-1}$ is nonzero. The hypersurface $X$ is irreducible if and only if $f_{d-1}$ and $f_d$ are coprime.
A monoid $X$ is rational. Indeed, the
projection of $X$ from its vertex $p$ onto a hyperplane $H$ not
passing through $p$ is a birational map $\pi\colon X\dasharrow H\cong \mathbb{P}^
{r-1}$. If $H$ has equation $x_r=0$, then the inverse map
$\pi^ {-1}\colon \mathbb{P}^ {r-1} \dasharrow X $ is given by
\[
[x_0,\ldots, x_{r-1}]
\mapsto [{f_{d-1}(x_0,\ldots,x_{r-1})}x_0,\ldots, {f_{d-1}(x_0,\ldots,x_{r-1})}x_{r-1}, -f_{d}(x_0,\ldots,x_{r-1})].
\]
The map $\pi$ is called the \emph{stereographic projection} of $X$ from
$p$. Its indeterminacy locus is $p$. Each line through $p$ contained in $X$ gets contracted to a point under $\pi$. The set of all such lines is defined by the
equations $\{f_d=f_{d-1}=0\}$. This is the indeterminacy locus
of $\pi^ {-1}$, whereas the hypersurface of $H$ with equation
$\{x_r=f_{d-1}=0\}$ is contracted to $p$ by the map $\pi^ {-1}$.
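A quick numeric sanity check of the formulas for $\pi$ and $\pi^{-1}$ (our own example, not from the paper): take the quadric monoid $X:\, x_0x_3 = x_1^2+x_2^2$ in $\mathbb{P}^3$ with vertex $p=[0,0,0,1]$, so that $f_{d-1}=x_0$ and $f_d=-(x_1^2+x_2^2)$.

```python
# Monoid data: f = f1 * x3 + f2 with f1 = x0 and f2 = -(x1^2 + x2^2),
# i.e. X : x0*x3 = x1^2 + x2^2 in P^3, with vertex p = [0, 0, 0, 1].
def f1(x0, x1, x2):
    return x0

def f2(x0, x1, x2):
    return -(x1 ** 2 + x2 ** 2)

def pi(pt):
    """Stereographic projection from p to the hyperplane x3 = 0."""
    x0, x1, x2, _ = pt
    return (x0, x1, x2)

def pi_inv(pt):
    """The inverse map given in the text."""
    return (f1(*pt) * pt[0], f1(*pt) * pt[1], f1(*pt) * pt[2], -f2(*pt))

def proportional(a, b):
    """Equality in projective space: all 2x2 minors vanish."""
    return all(a[i] * b[j] == a[j] * b[i]
               for i in range(len(a)) for j in range(len(a)))

x = (1, 1, 2, 5)                        # lies on X, since 1 * 5 = 1^2 + 2^2
assert f1(*pi(x)) * x[3] + f2(*pi(x)) == 0  # x satisfies the monoid equation
assert proportional(pi_inv(pi(x)), x)       # pi_inv inverts pi on X
```

On $X$ one has $f_{d-1}x_r = -f_d$, so $\pi^{-1}(\pi(x))$ is the original point rescaled by $f_{d-1}$, which is what the proportionality check verifies.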
\subsection{Monoidal extensions of rational maps}
\label{sec:mono-extens-rati}
Let $\omega\colon \mathbb{P}^ r\dasharrow \mathbb{P}^ r$ be a dominant rational map
defined, in homogeneous coordinates, as in~\eqref{eq:1}.
The homogeneous polynomials $F_0, \ldots, F_r$ have the same degree
$\delta>0$ and are coprime. We construct a \emph{monoidal extension} $\Omega$ of $\omega$ as
follows. First, we embed $\mathbb{P}^ r$ in $\mathbb{P}^ {r+1}$ as the hyperplane
$H=\{x_{r+1}=0\}$ and we consider the point $p=[0,\ldots, 0, 1] \in \mathbb{P}^
{r+1}$.
Fix an integer $d\ge \delta$, a nonzero polynomial
$h(x_0,\ldots,x_r)$ of degree $d-\delta$ and an irreducible {monoid} of degree
$d$ with {vertex} at $p$ defined by
\[
f(x_0,\ldots,x_{r+1})=f_{d-1}(x_0,\ldots,x_{r})\,x_{r+1}+
f_{d}(x_0,\ldots,x_{r})=0.\]
Then we let $\Omega\colon \mathbb{P}^
{r+1}\dasharrow \mathbb{P}^ {r+1}$ be defined by
\begin{equation*}
[x_0,\ldots, x_{r+1}] \mapsto [h(x_0,\ldots,x_r) F_0(x_0,\ldots,x_r),\ldots, h (x_0,\ldots,x_r) F_r(x_0,\ldots,x_r), f(x_0,\ldots,x_{r+1})].\label{eq:3}
\end{equation*}
Note that $p$ is an indeterminacy point of $\Omega$. If $\pi: \mathbb{P}^
{r+1}\dasharrow \mathbb{P}^ r$ is the projection from $p$ to $H$, we have
\begin{equation}\label{eq:proj}\pi\circ \Omega =\omega\circ \pi. \end{equation}
\begin{lemma}\label{lem:tranf} The map $\Omega\colon \mathbb{P}^
{r+1}\dasharrow \mathbb{P}^ {r+1}$ is dominant and has the same degree as
$\omega$. Hence $\Omega$ is a Cremona transformation if and
only if $\omega$ is.
\end{lemma}
\begin{proof}
By definition, the degree of the map $\Omega$ coincides with the
degree of the induced field extension. Let
$y=[y_0,\ldots,y_{r+1}]\in \mathbb{P}^ {r+1}$ be a general point and let
$y'=[y_0,\ldots,y_{r}]=\pi(y)$. The rational map $\Omega$ may be
written as $\Omega(y)=[F_0(y'),\ldots, F_r(y'),
(f_{d-1}(y')y_{r+1}+f_d(y'))/h(y')]$. In particular, $\Omega(\mathbb{C}(y_0,\ldots, y_{r}))=\omega(\mathbb{C}(y_0,\ldots, y_r))\subset \mathbb{C}(y_0,\ldots, y_r)$, while $\Omega(y)_{r+1}$ is linear in $y_{r+1}$ over $\Omega(\mathbb{C}(y_0,\ldots, y_r))$. The field extension has degree $[\mathbb{C}(y_0,\ldots, y_{r+1}):\Omega(\mathbb{C}(y_0,\ldots, y_{r}))]=[\mathbb{C}(y_0,\ldots, y_r):\omega(\mathbb{C}(y_0, \ldots, y_{r}))]$ and the lemma follows.
\end{proof}
The indeterminacy locus of $\Omega$ (as a scheme) is the union of the cone over the locus of
indeterminacy of $\omega$ with vertex $p$ intersected with the monoid $\{f=0\}$ and the
codimension two subvariety $\{h=f=0\}$.
The reduced fundamental locus of $\Omega$ is the union of the hypersurface
$\{h=0\}$ and the cone over the fundamental locus of $\omega$ with
vertex $p$.
\subsection{Triangular Cremona transformations}\label{sec:trianguar}
\label{sec:triang-crem-transf} \emph{Triangular Cremona transformations} are obtained as
iterated applications of monoidal extensions, as we now explain.
Consider a rational map $\tau: \mathbb P^ r\dasharrow \mathbb P^ r$
defined, in affine coordinates over $\{x_0\neq 0\}$, by formulas of the type
\[ (x_1,\ldots,x_r)\mapsto (f_1(x_1),\dots, f_i(x_1,\ldots, x_i), \ldots,
f_r(x_1,\ldots,x_r))\] where each $f_i(x_1,\ldots, x_i)\in
K(x_1,\ldots,x_{i-1})[x_i]$ is nonconstant and linear in $x_i$, for
$1\leqslant i\leqslant r$. If $f_i(x_1,\ldots, x_i)\in
K[x_1,\ldots, x_i]$ for all $1\leqslant i\leqslant r$, the
indeterminacy locus of $\tau$ is contained in the \emph{hyperplane at
infinity} $\{x_0=0\}$. Any such map $\tau$ is birational, with
inverse of the same type. To find $\tau^ {-1}$, we have
to solve the system
\[y_i= f_i(x_1,\ldots, x_i), \quad 1\leqslant i\leqslant r\]
in the indeterminates $x_1,\ldots, x_r$. This can be done stepwise as
follows. From the linear expression $y_1=f_1(x_1)$ we find a
linear polynomial $g_1$ such that $x_1=g_1(y_1)$. Given $i>1$, assume
we know that
\begin{equation}\label{eq:iteration}
x_h=g_h(y_1,\ldots,y_h), \quad {\rm for}\quad
1\leqslant h< i\leqslant r
\end{equation}
where $g_h(y_1,\ldots,y_h)\in K(y_1,\ldots,y_{h-1})[y_h]$ is linear in
$y_h$. From $y_i= f_i(x_1,\ldots, x_i)$ we obtain the expression
$x_i=\xi(x_1,\ldots,x_{i-1},y_i)$, where
$\xi(x_1,\ldots,x_{i-1},y_i)\in K(x_1,\ldots,x_{i-1})[y_i]$ is linear
in $y_i$. Substituting $x_h$ from \eqref
{eq:iteration}, we conclude that $x_i=g_i(y_1,\ldots,y_i)$ with
$g_i(y_1,\ldots,y_i)\in K(y_1,\ldots,y_{i-1})[y_i]$ of degree 1 in
$y_i$.
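This back-substitution is easy to mechanize. The following SymPy sketch (a computational aside of ours; the triangular map $f$ below is an ad hoc sample, not taken from the text) inverts a triangular map on affine $3$-space exactly as described:

```python
import sympy as sp

x1, x2, x3, y1, y2, y3 = sp.symbols('x1 x2 x3 y1 y2 y3')
xs, ys = [x1, x2, x3], [y1, y2, y3]

# An ad hoc triangular map on affine 3-space: each f_i is linear in x_i,
# with coefficients rational in the earlier variables.
f = [x1 + 1, x1*x2 + x1**2, (x2 + 1)*x3 + x1]

# Invert stepwise: solve y_i = f_i for x_i, back-substituting the
# expressions already found for x_1, ..., x_{i-1}.
subs, g = {}, []
for i in range(3):
    sol = sp.solve(sp.Eq(ys[i], f[i].subs(subs)), xs[i])[0]
    subs[xs[i]] = sp.simplify(sol)
    g.append(subs[xs[i]])

# g inverts f: substituting x_j = g_j(y) into f_i gives back y_i.
back = [sp.simplify(fi.subs({xs[j]: g[j] for j in range(3)})) for fi in f]
assert back == ys
```

Each solve step is linear in the current variable, which is what guarantees that the inverse is again of triangular type.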
\begin{example} Fix an integer $n\ge 2$ and use the same notation as
above. The following Cremona quadratic transformation $\omega_n$ of $\mathbb{P}^
{{n+1}\choose 2}$ is triangular
\[[x_0, \ldots , x_i,\ldots, x_{ij},\ldots]
\to [x_0^ 2,\ldots , x_0 x_i,\ldots, x_0
x_{ij}-x_ix_j,\ldots], \,\,\, \text{where}\,\,\, 1\le i<j\le n.\]
The inverse is
\[[y_0, \ldots , y_i,\ldots, y_{ij},\ldots]
\to [y_0^ 2,\ldots , y_0 y_i,\ldots, y_0
y_{ij}+y_iy_j,\ldots].\]
From a geometric viewpoint, $\omega_n$ is defined by a linear system
$\mathcal L$ of quadrics as follows. Consider the coordinate
hyperplane $\Pi=\{ x_0=0\}$. Let $S$
be the linear subspace of $\Pi$ with equations
$\{x_0=x_1=\ldots=x_n=0\}$ and let $S'$ be the
complementary subspace in $\Pi$ with equations
$\{x_0=x_{12}=\ldots=x_{ij}=\ldots=x_{(n-1)n}=0\}$, where $1\leqslant
i<j\leqslant n$. Then $\mathcal L$ cuts out the complete linear system of
quadric cones on $\Pi$ that are singular along $S$ and that pass through the $n$
independent points $p_i\in S'$, where $p_i$ is the torus-fixed point with all coordinates $0$ but $x_i$, with $1\leqslant i\leqslant n$. After splitting off $\Pi$ from $\mathcal L$, the residual system
consists of all hyperplanes containing $S$.
\end{example}
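As a sanity check (ours, not part of the original text), for $n=2$ one can verify symbolically that the displayed inverse really inverts $\omega_2$, up to the common factor $x_0^3$:

```python
import sympy as sp

x0, x1, x2, x12 = sp.symbols('x0 x1 x2 x12')
pt = [x0, x1, x2, x12]

# omega_2 on P^3 (coordinates x0, x1, x2, x12) and its claimed inverse.
omega = [x0**2, x0*x1, x0*x2, x0*x12 - x1*x2]
def omega_inv(y):
    return [y[0]**2, y[0]*y[1], y[0]*y[2], y[0]*y[3] + y[1]*y[2]]

# The composite is x0^3 times the identity, hence the identity of P^3.
comp = [sp.expand(c) for c in omega_inv(omega)]
assert comp == [sp.expand(x0**3 * v) for v in pt]
```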
\section{Cremona equivalence}
\label{sec:crem-equiv}
An irreducible variety $X\subset \mathbb{P}^ r$ is \emph{Cremona linearizable} (CL) if there is a \emph{linearizing Cremona transformation} of $\mathbb{P}^r$ which maps $X$ birationally onto a linear subspace. It is a consequence of
Theorem \ref {thm:mp} (presented in~\cite {MP}) that, if $X$ is rational of dimension $n\leqslant r-2$ in $\mathbb{P}^ r$, then $X$ is CL.
In this section we recall this theorem and present a
slightly different proof.
\subsection{Monoids and Cremona transformations}\label{ssec:monoids}
Let $X\subset \mathbb{P}^ r$ be a monoid of degree $d$.
Let $p_1, p_2\in X$ be two vertices, let
$H_1,H_2$ be hyperplanes with $p_i\not \in H_i$, and { consider the
stereographic projections of $X$ from $p_i$, each being the restriction to $X$ of the projection
$\pi_i\colon \mathbb{P} ^r\dasharrow H_i$ from $p_i$, with
$i=1,2$.} The map
\[
\pi_{X,p_1,p_2}:=\pi_2\circ \pi_1^ {-1}\colon H_1\dasharrow H_2
\] is a Cremona transformation. If $p_1=p_2=p$, then
$\pi_{X,p}:=\pi_{X,p,p}$ does not depend on $X$, being the (linear)
\emph{perspective} of $H_1$ to $H_2$ with center $p$. From now on, we
restrict to the case when $p_1\neq p_2$. In this setting, the map
$\pi_{X,p_1,p_2}$ depends on $X$ and is in general nonlinear. In the
following we assume that $H_1$ and $H_2$ have equations $x_r=0$ and
$x_{r-1}=0$ respectively, and $p_1=[0,\ldots,0,1], p_2=[0,\ldots,
0,1,0]$. The defining equation of $X$ has the form
\begin{equation*}\label{eq:bimon}
f_d(x_0,\ldots, x_{r-2})+x_{r-1}g_{d-1}(x_0,\ldots,
x_{r-2})+x_rh_{d-1}(x_0,\ldots, x_{r-2})+ x_{r}x_{r-1}f_{d-2}(x_0,
\ldots, x_{r-2})=0.
\end{equation*}
Then
\[
\pi_{X,p_1,p_2}( [x_0,\ldots, x_{r-1}] )= [(f_{d-2}x_{r-1}+h_{d-1})x_0,\ldots,
(f_{d-2}x_{r-1}+h_{d-1})x_{r-2},-f_d-x_{r-1}g_{d-1}].
\]
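This formula can be checked by composing the two stereographic projections directly. The SymPy sketch below (our illustration, with $r=3$, $d=2$ and ad hoc choices of $f_d$, $g_{d-1}$, $h_{d-1}$, $f_{d-2}$) lifts a point of $H_1$ to $X$ along the line through $p_1$ and then projects from $p_2$:

```python
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')

# An ad hoc monoid in P^3 (r = 3, d = 2) with vertices p1 = [0,0,0,1]
# and p2 = [0,0,1,0]:  f2 + x2*g1 + x3*h1 + x2*x3*f0 = 0.
f2 = x0**2 + x0*x1        # plays the role of f_d
g1 = x0 - 2*x1            # g_{d-1}
h1 = 3*x0 + x1            # h_{d-1}
f0 = sp.Integer(5)        # f_{d-2}

# pi_1^{-1}: lift [x0, x1, x2, 0] in H1 = {x_r = 0} to the monoid along
# the line through p1, by solving the (linear) equation for x3.
x3 = -(f2 + x2*g1) / (h1 + x2*f0)
# pi_2: project from p2 to H2 = {x_{r-1} = 0}, i.e. drop the x2 entry.
image = [x0, x1, x3]

# Clearing the denominator recovers the displayed formula.
formula = [(f0*x2 + h1)*x0, (f0*x2 + h1)*x1, -f2 - x2*g1]
den = f0*x2 + h1
assert all(sp.simplify(image[i]*den - formula[i]) == 0 for i in range(3))
```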
{
\begin{lemma} \label{lem:mp16_1} Let $Z\subset \mathbb{P}^ r$, with $r\geqslant 3$, be an irreducible variety of dimension $r-2$ and let $p\in \mathbb{P}^ r$ be such that the projection of $Z$ from $p$ is birational to its image. For $d\gg 0$ there is a monoid of degree $d$ with vertex $p$, containing $Z$ but not containing the cone $C_p(Z)$
over $Z$ with vertex $p$.
\end{lemma}
\begin{proof} Let $V\to \mathbb{P}^ r$ be the blow-up of $\mathbb{P}^ r$ at $p$. We denote by $E$ the exceptional divisor, by $H$ the proper transform of a general hyperplane of $\mathbb{P}^ r$, and by $Z'$ the proper transform of $Z$.
Consider $M_d =\vert dH-(d-1)E\vert= \vert (d-1)(H-E)+H\vert$,
i.e. the proper transform on $V$ of the linear system of monoids we
are interested in. We have
\begin{equation}\label{eq:mon}
\dim (M_d)=\frac{2d^{r-1}}{(r-1)!}+\frac{(r-1)d^{r-2}}{(r-2)!}+O(d^{r-3}).
\end{equation}
Since the projection of $Z$ from $p$ is birational, the line bundle $\mathcal O_{Z'}(H-E)$ is big and nef. Then
by \cite [Theorem~1.4.40, Vol.\ I, p.\ 69]{laz} it follows that
\begin{equation*}\label{eq:av}
h^ 0(Z',\mathcal O_{Z'}(d(H-E)))= \frac {\delta} {(r-2)!} d^ {r-2}+ O(d^ {r-3}), \,\,\, \text {for}\,\,\, d\gg 0
\end{equation*}
where $\delta=(H-E)^ {r-2}\cdot Z'>0$ is the degree of the variety obtained under the projection of $Z$ from $p$, i.e. the degree of the cone $C_p(Z)$. Thus, if $M'_d$ is the sublinear system of $M_d$ of the divisors containing $Z'$, then
$$\dim (M'_d)\geqslant \dim(M_d)- h^ 0(Z',\mathcal O_{Z'}(d(H-E)))=
\frac{2d^{r-1}}{(r-1)!}+\frac{(r-1-\delta)d^{r-2}}{(r-2)!}+O(d^{r-3}). $$
We let $M''_d$ be the sublinear system of $M'_d$ of the divisors
containing the proper transform $Y$ of the cone $C_p(Z)$, which is a
hypersurface of degree $\delta$, i.e. $Y\in \vert \delta
(H-E)\vert$. Hence, $M''_d\subseteq \vert (d-\delta-1)(H-E)+H\vert$ so
by \eqref {eq:mon} we have
$$\dim (M''_d)\leq \frac{2(d-\delta)^{r-1}}{(r-1)!}+\frac{(r-1)(d-\delta)^{r-2}}{(r-2)!}+O(d^{r-3})=
\frac{2d^{r-1}}{(r-1)!}+\frac{(r-1-2\delta)d^{r-2}}{(r-2)!}+O(d^{r-3}).
$$
Hence
$$\dim(M'_d)-\dim(M''_d)=\frac{\delta d^{r-2}}{(r-2)!}+O(d^{r-3})>0, \quad \text{for}\quad d\gg 0,$$
as we wanted to show.\end{proof}
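The count \eqref{eq:mon} can also be verified symbolically: a monoid of degree $d$ with a fixed vertex is cut out by $f_{d-1}x_r+f_d$ with $f_{d-1},f_d$ forms in $r$ variables, so $\dim(M_d)=\binom{d+r-1}{r-1}+\binom{d+r-2}{r-1}-1$. The following SymPy check (ours) confirms the two leading coefficients for small $r$:

```python
import sympy as sp

d = sp.symbols('d')

for r in [3, 4, 5]:
    # f_{d-1} and f_d are forms of degrees d-1 and d in r variables:
    dim_Md = (sp.expand_func(sp.binomial(d + r - 1, r - 1))
              + sp.expand_func(sp.binomial(d + r - 2, r - 1)) - 1)
    poly = sp.Poly(sp.expand(dim_Md), d)
    # Leading terms claimed in the text: 2/(r-1)! and (r-1)/(r-2)!.
    assert poly.coeff_monomial(d**(r - 1)) == sp.Integer(2)/sp.factorial(r - 1)
    assert poly.coeff_monomial(d**(r - 2)) == sp.Integer(r - 1)/sp.factorial(r - 2)
```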
\begin{lemma} \label{lem:mp16} Let $Z\subset \mathbb{P}^ r$ be an irreducible
variety of positive dimension $n\leqslant r-3$ and let $p_1,p_2\in
\mathbb{P}^ r$ be distinct points such that the projection of $Z$ from the
line $\ell$ joining $p_1$ and $p_2$ is birational to its image. For
$d\gg 0$ there is a monoid of degree $d$ with vertices $p_1$ and
$p_2$, containing $Z$ but not containing any of the cones $C_{i}(Z)$
over $Z$ with vertices $p_i$, for $i=1,2$.
\end{lemma}
\noindent\emph{Proof.} We start with the following
\begin{claim} It suffices to prove the assertion for $n=r-3$.
\end{claim}
\begin{proof} [Proof of the Claim] Consider the projection of $\mathbb{P}^ r$
to $\mathbb{P}^ {n+3}$ from a general linear subspace $\Pi$ of dimension
$r-n-4$
and call $Z', p'_1,p'_2, \ell'$ the projections of $Z,p_1,p_2, \ell$
respectively. Then $Z'$ is birational to $Z$ and it is still true that
the projection of $Z'$ from $\ell'$ is birational to its image. The
dimension of $Z'$ is $n=(n+3)-3$, so $Z'$ falls in the case $n=r-3$.
Assume the assertion holds for $Z', p'_1,p'_2$ and let $F'\subset \mathbb{P}^ {n+3}$ be a monoid of degree $d\gg 0$ with vertices
$p'_1,p'_2$ containing $Z'$ but not $C_{i}(Z')$, for $i=1,2$. Let
$F\subset \mathbb{P}^ r$ be the cone over $F'$ with vertex $\Pi$. Then $F$ is
a monoid with vertices $p_1,p_2$ containing $Z$. It does not contain
either one of $C_{i}(Z)$, for $i=1,2$, otherwise $F'$ would contain
one of the cones $C_{i}(Z')$, for $i=1,2$, contradicting our
hypothesis on $F'$.\end{proof}
We can thus assume from now on that $n=r-3$. Fix two hyperplanes $H_1$
and $H_2$, where $p_1\notin H_1$ and $p_2\notin H_2$. Let $Z_1$ and
$Z_2$ be the projections from $p_1$ and $p_2$ to $H_1$ and $H_2$,
respectively.
We set $p'_{3-i}:=\pi_i(p_{3-i})$, for $i=1,2$. Our result follows by
Lemma~\ref {lem:mp16_1} and the following claim:
\begin{claim} It suffices to prove that for $d\gg 0$, there is a monoid of degree $d$ in $H_i$ with
vertex $p'_i$ containing $Z_i$ but not containing the cone $C(Z_i)$
over $Z_i$ with vertex $p'_i$, for $i=1,2$.
\end{claim}
\begin{proof} [Proof of the Claim] Let $F'_i\subset H_i$ be such a
monoid, and let $F_i$ be the cone over $F'_i$ with vertex $p_{3-i}$,
for $i=1,2$. Then $F_i$ is a monoid with vertex $p_{i}$ (by
construction, we can take any point in the line joining $p_i$ and
$p_{3-i}$ as its vertex).
In addition, $F_i$ contains
$Z$ but does not contain $C_i(Z)$ (same argument as in the proof of
the previous Claim). Then the assertion of Lemma~\ref{lem:mp16_1}
holds for a general linear combination $F$ of $F_1$ and
$F_2$.\end{proof}
}
Let $Z, p_1, p_2$ be as in Lemma \ref {lem:mp16}. Fix a general monoid $X\supset Z$ with vertices in $p_1$ and $p_2$; by Lemma \ref {lem:mp16} $X$ does not contain the cones $C_i(Z)$. Let $H_1$ and $H_2$
be hyperplanes such that $p_i\not \in H_i$, for $i=1,2$. Let $Z_i$ be
the projection of $Z$ from $p_i$ to $H_i$, for $i=1,2$. Then the map $\varphi:=\pi_{X,p_1,p_2}:H_1\dasharrow H_2$ is birational, $Z_1$ is not contained in the indeterminacy locus of $\varphi$, and $Z_2$ is not contained in that of $\varphi^{-1}$.
Thus $\varphi$ induces a birational transformation $\phi\colon
Z_1\dasharrow Z_2$. For future reference we summarize this construction in the following Proposition.
\begin{proposition}\label{prop:proj} In the above setting, the
double projection $\varphi\colon H_1\dasharrow H_2$,
(resp.\ $\varphi^ {-1}$) is defined at the general
point of $Z_1$ (resp.\ of $Z_2$), hence it defines a birational map
$\phi\colon Z_1\dasharrow Z_2$ (resp.\ $\phi^{-1}\colon
Z_2\dasharrow Z_1$).
\end{proposition}
\subsection {Cremona equivalence} \label{ssec:cremeq}
Let $X,
Y\subset \mathbb{P}^ r$ be irreducible, projective varieties. We say that $X$
and $Y$ are \emph{Cremona equivalent} (CE) if there is a Cremona
transformation $\omega\colon \mathbb{P}^ r\dasharrow \mathbb{P}^ r$ such that $\omega$ [resp.\ $\omega^ {-1}$] is defined
at the general point of $X$ [resp.\ of $Y$] and such that $\omega$ maps $X$ to $Y$ [resp.\ $\omega^ {-1}$ maps $Y$ to $X$]. This is an equivalence relation
among all irreducible subvarieties of $\mathbb{P}^r$.
The following result is due to Mella and Polastri~\cite[Theorem
1]{MP}. We present an alternative proof, close to the original ideas, but more computational in spirit.
\begin{remark}
We take the opportunity of correcting a mistake in the proof
of~\cite[Theorem 1]{MP}. In the notation of \cite[Theorem 1]{MP},
let $T_i=\varphi_{\mathcal{H}_{i}}(X)$, $Y_i=T_i\cap (x_{n+1}=0)$ and $Z$ the cone
over $T_i$ with vertex $q_1$. Let $S$ be the monoid containing $Y_i$
and $W_i$ the projection of $T_i$ onto $S$. Then
$X_{i+1}=\pi_{q_2}(W_i)$ and not the projection of a general
hyperplane section of $Z$ as written in the published paper.
\end{remark}
\begin{theorem}\label{thm:mp} Let $X, Y\subset \mathbb{P}^r$, with $r\geqslant 3$, be two irreducible varieties of positive dimension $n<r-1$. Then
$X,Y$ are CE if and only if they are birationally
equivalent. \end{theorem}
\begin{proof} We prove the nontrivial implication.
{
\begin{claim} We may assume that the
projection of $Y$ from any coordinate subspace of dimension $m$ is
birational to its image if $r> n+m+1$ and dominant to $\mathbb{P}^{r-m-1}$
if $r\le n+m+1$.
\end{claim}
\begin{proof}[Proof of the Claim] If we choose the $r+1$ torus-fixed
points of $\mathbb{P}^ r$ to be generic (which we can do after a generic
change of coordinates), then each one of the coordinate
subspaces of a given dimension $m$ (spanned by $m+1$ coordinate
points) is generic in the corresponding Grassmannian, hence the
assertion follows.\end{proof} }
Let $Z$ be a smooth variety and
let $\phi\colon Z\dasharrow X$ and $\psi\colon Z\dasharrow Y$ be birational
maps. Passing to affine coordinates, we may assume that $\phi$ and $\psi$
are given by equations
\[
x_j=\phi_j(t), \text{ and }\; y_j=\psi_j(t),\,\,\, \text {for} \,\,\, 1\leqslant j\leqslant r,
\]
where $\phi_j, \psi_j$ are rational functions on
$Z$ and $t$ varies in a suitable dense open subset of $Z$.
We prove the theorem by constructing a sequence of birational maps $\varphi_i: Z\dasharrow X_i\subset \mathbb{P}^ r$,
with $X_i$ projective varieties, for $0\leqslant i\leqslant r$, such that:\par
\begin{inparaenum}
\item [(a)] $\varphi_0=\phi$ and $\varphi_r=\psi$, thus $X_0=X$ and $X_r=Y$;\par
\item [(b)] for $0\leqslant i\leqslant r-1$, there is a Cremona
transformation $\omega_i\colon\mathbb{P}^ r\dasharrow \mathbb{P}^ r$, such that
$\omega_i$ (resp.\ $\omega_i^ {-1}$) is defined at the general point
of $X_i$ (resp.\ of $X_{i+1}$), and it satisfies $\omega_i(X_i)=X_{i+1}$
(accordingly, $\omega_i^ {-1}(X_{i+1})=X_i$) and
$\varphi_{i+1}=\omega_i\circ \varphi_i$.
\end{inparaenum}
The construction is done recursively. We assume $\varphi_i$ is of the form
\[
\varphi_i(t)= (\tilde \phi_{i,1}(t),\ldots, \tilde \phi_{i,r-i}(t),\psi_{r-i+1}(t), \ldots, \psi_r(t)), \,\,\, \text{for}\,\,\, t\in Z \,\,\, \text{and}\,\,\, 0\leqslant i\leqslant r-1,
\]
where the $\tilde \phi_{i,j}$'s are suitable rational functions on
$Z$. For $i=0$, the starting case, we fix $\tilde \phi_{0,i}= \phi_i$
for all $1\leqslant i\leqslant r$. Thus, requirement (a) is
satisfied.
Assume $0\leqslant i\leqslant r-1$. In order to perform the step from
$i$ to $i+1$, we consider the map
\[
g_i\colon Z\dasharrow \mathbb A^ {r+1},
\qquad g_i(t)= (\tilde \phi_{i,1}(t),\ldots, \tilde
\phi_{i,r-i}(t),\psi_{r-i}(t),\psi_{r-i+1}(t),\ldots, \psi_r(t)).
\]
Let $Z_i$ be the closure of the image of $g_i$, and
$\pi_{j}\colon \mathbb{A}^{r+1}\to \mathbb{A}^{r}$ the projection to
the coordinate hyperplane $\{x_{j}=0\}$ from the point at infinity of
the axis $x_{j}$, for all $1\leqslant j\leqslant r+1$.
We have $\varphi_i=\pi_{r-i+1}\circ g_i$, and since $\varphi_i$ is
birational onto its image, the same holds for $g_i$.
{
\begin{claim}\label{sub:claim1} The projection of $Z_i$ from a general
point of the space at infinity of the affine linear space $\Pi_i:= \{x_{r-i+1}=\ldots =x_{r+1}=0\}$ is birational to its
image. \end{claim}
\begin{proof} [Proof of the Claim] The variety $Z_i$ is not a
hypersurface, then by \cite [Theorem 1]{CC}, the locus of points
from which the projection is not birational has dimension strictly
bounded by the dimension of $Z_i$. We may therefore assume that
$\dim(\mathbb{P}i_i)=r-i-1< n$. On the other hand the map $\psi$ is
birational, therefore we may as well assume that $i<r-1$ . So it
remains to prove the result in the range $0<r-i-1<n$.
The projection of $Z_i$ from $\mathbb{P}i_i$ is the closure of the image of the
map
\[
h_i\colon Z\dasharrow \mathbb A^ {i+1}, \qquad h_i(t)= (\psi_{r-i}(t),\psi_{r-i+1}(t),\ldots, \psi_r(t)).
\]
By the previous claim applied to $Y$,
either $h_i$ is birational to its image (if
$i\geqslant n$)
or $h_i$ is dominant. In the former case the projection from a general
point of $\Pi_i$ is also birational, so the assertion follows. In the
latter case the cone over $Z_i$ with vertex $\Pi_i$ is the whole
$\mathbb{P}^r$, and the assertion follows from \cite [Theorem
1]{CC}.\end{proof}}
By the Claim, we can make a general change of the first $r-i$
coordinates of $g_i$ so that $\varphi_{i+1}:= \pi_{r-i}\circ g_i$
is
birational to its image $X_{i+1}$. Finally, iterated applications of
Proposition \ref{prop:proj} show that also requirement (b) is
satisfied, thus ending the proof.\end{proof}
\subsection{Linearizing Cremona}\label{ssec:lincrem}
Theorem \ref {thm:mp} ensures that any variety $X$ of dimension $n\leq
r-2$ in $\mathbb{P}^ r$ is CE to a hypersurface in a $\mathbb{P}^
{n+1}\subset\mathbb{P}^r$. If, in addition, $X$ is rational, then it is CL,
i.e.\ it is CE to the subspace $\{x_{n+1}=\ldots =x_r= 0\}$. In
particular, suppose that there is a linear subspace $\Pi$ of dimension
$r-n-1$ of $\mathbb{P}^r$ such that the projection from $\Pi$ induces a
birational map $\pi\colon X\dasharrow \mathbb{P}^ n$. Equivalently, $X$
admits an affine parametrization of the form
\begin{equation}\label{eq:parameter}
x_i=t_i,\quad {\rm for}\quad 1\leqslant i\leqslant n,\qquad
x_j=f_j(t_1,\ldots,t_n), \quad {\rm for}\quad n+1\leqslant j\leqslant r,\end{equation}
where the $f_i$'s are rational functions of $t_1,\ldots,t_n$. For instance,
smooth toric varieties and Grassmannians enjoy this property (see Section~\ref {sec:examples}).
Using~\eqref{eq:parameter}, we define a Cremona
map $\phi\colon \mathbb P^ r\dasharrow \mathbb P^ r$ in affine
coordinates as
\begin{equation}
\phi(x_1,\ldots,x_r) = (x_1,\ldots, x_n, x_{n+1}-f_{n+1}(x_1,\ldots, x_n),\ldots, x_{r}-f_{r}(x_1,\ldots, x_n)).\label{eq:basiccremona}
\end{equation}
The map $\phi$ gives a birational equivalence between $X$ and the
subspace $\{x_{n+1}=\ldots =x_r= 0\}$, hence $\phi$ linearizes $X$.
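As a concrete instance (a check of ours, not in the original), take the affine twisted cubic $x_1=t$, $x_2=t^2$, $x_3=t^3$ (so $n=1$, $r=3$); the map of the displayed shape sends it into the line $\{x_2=x_3=0\}$ and is invertible of the same triangular type:

```python
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3')

# Affine twisted cubic (n = 1, r = 3): x1 = t, x2 = t^2, x3 = t^3.
f2, f3 = x1**2, x1**3   # the f_j of the parametrization, in the variable x1

def phi(p):
    # the linearizing map: subtract the parametric value of each x_j
    return (p[0], p[1] - f2.subs(x1, p[0]), p[2] - f3.subs(x1, p[0]))

def phi_inv(p):
    return (p[0], p[1] + f2.subs(x1, p[0]), p[2] + f3.subs(x1, p[0]))

# phi sends the curve into the line {x2 = x3 = 0} ...
assert phi((t, t**2, t**3)) == (t, 0, 0)
# ... and is birational: phi_inv really is its inverse.
assert phi_inv(phi((x1, x2, x3))) == (x1, x2, x3)
```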
The above construction can be slightly modified to make it more general. Fix a collection of
rational functions $g_i(x_1,\ldots,x_{i-1})$ and $h_i(x_1,\ldots,x_{i-1} )$, with $n+1\leqslant i\leqslant
r$, where all the $h_i(x_1,\ldots,x_{i-1} )$'s are nonzero.
We replace the $i$--th coordinate of $ \phi$
with the expression
\[ \phi_i(x_1,\ldots,x_r)= h_i(x_1,\ldots,x_{i-1} )\big (
x_i-f_i(x_1,\ldots,x_n)\big ) +g_i(x_1,\ldots,x_{i-1}), \qquad i=n+1,
\ldots, r .\]
The following is clear:
\begin{lemma}
The image $\phi(X)$ is the linear subspace $\{x_{n+1}=\ldots =x_r=
0\}$ if and only if the functions $g_i$ vanish on $X$ for
$n+1 \leqslant i\leqslant r$.
\end{lemma}
\section{Secant and tangential varieties}
\label{sec:crem-repr}
In this section, we focus on Cremona transformations of secant and
tangential varieties. Similar techniques can be applied to
{osculating varieties}, although we will not do this here.
\begin{definition}
Let $X\subset \mathbb{P}^r$ be a variety of dimension $n$. The
\emph{$k$--secant variety} $\Sec_k(X)$ of $X$ (simply $\Sec(X)$ if $k=1$) is the Zariski
closure of the union of all $(k+1)$--secant linear spaces of dimension
$k$ to $X$, i.e. those containing $k+1$ linearly independent points of $X$. The
\emph{$k$--defect} of $X$ is $\min\{r, n(k+1)+k\}-\dim(\Sec_k(X))$ (which is nonnegative), and $X$
is \emph{$k$--defective} if the {$k$--defect} is positive.
\end{definition}
The $k$--secant variety of $X$ has expected dimension $nk+n+k$. It is parametrized as
\begin{equation}
\label{eq:30}
\psi\colon {\rm Sym}^ {k+1}(X)\times \mathbb{P}^k \dashrightarrow \Sec_k(X)\subset \mathbb{P}^r, \quad ([p^{(0)},
\ldots, p^{(k)}], [s_0,\ldots,
s_k])\mapsto \sum_{j=0}^k s_j p^{(j)}.
\end{equation}
Assume that there is a codimension $n+1$ linear subspace $\Pi$ such
that the projection from $\Pi$ induces a birational map $\pi\colon
X\dashrightarrow \mathbb{P}^n$. From Section \ref {sec:crem-equiv}, we know
that $X$ can be parametrized as in~\eqref {eq:parameter}. Then, we can
combine the maps $\psi$ and $\pi$ to simplify the parametrization of
$\Sec_k(X)$, as we now show.
Pick affine variables $s_1,\ldots, s_k$ and set $s_0:=1-\sum_{j=1}^k
s_j$ in~\eqref{eq:30}. Consider $k+1$ vectors of unknowns
\[{\bf t}_i=(t_{i1},\ldots, t_{in}) \quad \text{ for}\quad 0\leqslant
i\leqslant k.\]
Then, $\Sec_k(X)$ is parametrized as follows
\begin{equation*}
\label{eq:param2}
x_i=\begin{cases}
s_0t_{0i}+s_1t_{1i}+\ldots +s_kt_{ki}& \quad \text{ for}\quad 1\leqslant i\leqslant n,\\
s_0f_i({\bf t}_0) +s_1f_i({\bf t}_1)+\ldots+s_kf_i({\bf t}_k)& \quad
\text{ for}\quad n+1\leqslant i \leqslant r.
\end{cases}
\end{equation*}
We let $\phi$ be the Cremona transformation from
\eqref{eq:basiccremona}, that linearizes $X$. Applying $\phi$ to
$\Sec_k(X)$ gives
\begin{equation}
\label{eq:param1}
x_i=\begin{cases}
s_0t_{0i}+s_1t_{1i}+\ldots +s_kt_{ki}& \; \text{ for }\;
1\leqslant i\leqslant n,\\
s_0f_i({\bf t}_0) +s_1f_i({\bf t}_1)+\ldots+s_kf_i({\bf t}_k)-f_i(s_0{\bf
t}_0+s_1{\bf t}_1+\ldots+s_k{\bf t}_k)& \; \text{ for } \;
n+1\leqslant i \leqslant r.
\end{cases}
\end{equation}
This change of coordinates can be useful for computing geometric
invariants of $X$, such as its $k$--defect. The next example
illustrates this situation.
\begin{example}\label{ex:sec}
Suppose that the $f_i$ in~\eqref {eq:parameter} are quadratic
polynomials. In this case, $X$ is a projection of the Veronese variety, hence it is 1--defective. We write the homogeneous decomposition of $f_i$
\[ f_i=f_{i0}+f_{i1}+f_{i2}, \,\,\, \qquad\text{for}\,\,\, n+1\leqslant i\leqslant r,\] where $f_{ij}$
is the homogeneous component of $f_i$ of degree $j$. Let $\Phi_i$ be
the bilinear form associated to $f_{i2}$. Then, the parametrization
\eqref{eq:param1} yields
the expression
\begin{equation*}
\label{eq:param3}
x_i =\begin{cases}
s_0t_{0i}+s_1t_{1i}+\ldots +s_kt_{ki},& \quad \text{ for}\quad 1\leqslant i\leqslant n,\\
\sum\limits_{j=0}^ k s_j(1-s_j) f_{i2} ({\bf t}_j)-
2\sum\limits_{0\leqslant u<v\leqslant k }s_us_v\Phi_i({\bf t}_u, {\bf t}_v),& \quad
\text{ for}\quad n+1\leqslant i \leqslant r.
\end{cases}
\end{equation*}
Suppose that $k=1$. Then
$$
x_i=s_0(1-s_0)f_{i2} ({\bf t}_0)+s_1(1-s_1)f_{i2} ({\bf t}_1)-2s_0s_1\Phi_i({\bf t}_0, {\bf t}_1),\,\,\, \text{for}\,\,\, n+1\leqslant i\leqslant r.
$$
Since $s_0=1-s_1$, we have $s_0(1-s_0)=s_1(1-s_1)=s_0s_1$ and we obtain
\[
x_i= s_0 s_1 f_{i2} ({\bf t}_1-{\bf t}_0)\qquad\mbox{for}\quad n+1\leqslant i \leqslant r.
\]
Replacing ${\bf t}_1-{\bf t}_0$ with ${\bf u}:=(u_{1},\ldots,u_{n})$
and setting ${\bf t}_0=:{\bf t}=(t_1,\ldots, t_n)$ and $s_1=:s$ yields
\begin{equation*}
\label{eq:param2b}
x_i =\begin{cases}
t_{i}+su_{i},& \quad \text{ for}\quad 1\leqslant i\leqslant n,\\
s(1-s)f_{i2} ({\bf u}),& \quad
\text{ for}\quad n+1\leqslant i \leqslant r.
\end{cases}
\end{equation*}
The image of a general secant line is a conic meeting the linear image of the variety $X$ in two points.
The dimension of the secant variety can be deduced from the rank of the Jacobian of this parametrization. \end{example}
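The collapse to $s_0s_1f_{i2}({\bf t}_1-{\bf t}_0)$ can be checked symbolically. In the SymPy sketch below (our illustration, with a single ad hoc quadratic $f$ in two variables standing for one coordinate function), the left-hand side is the corresponding coordinate of \eqref{eq:param1} for $k=1$:

```python
import sympy as sp

u, v, s1 = sp.symbols('u v s1')
t01, t02, t11, t12 = sp.symbols('t01 t02 t11 t12')
s0 = 1 - s1

# An ad hoc quadratic f = f0 + f1 + f2 in two variables.
f = 3 + u - v + u**2 + 2*u*v
f2 = u**2 + 2*u*v                       # its quadratic part
at = lambda g, a, b: g.subs({u: a, v: b}, simultaneous=True)

# Coordinate of the transformed secant parametrization for k = 1:
lhs = (s0*at(f, t01, t02) + s1*at(f, t11, t12)
       - at(f, s0*t01 + s1*t11, s0*t02 + s1*t12))
# Claimed collapse: s0*s1*f2(t_1 - t_0).
rhs = s0*s1*at(f2, t11 - t01, t12 - t02)
assert sp.simplify(lhs - rhs) == 0
```

The constant and linear parts of $f$ cancel because $s_0+s_1=1$, which is why only the quadratic part survives.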
Next we discuss the interplay between tangential varieties and Cremona
transformations.
\begin{definition}
Let $X\subset \mathbb{P}^r$ be a variety. The
\emph{tangential variety} $T(X)$ of $X$ is the Zariski closure of the
union of all tangent spaces to $X$ at smooth points of $X$.
\end{definition}
Assume that $X$ has dimension $n$. The tangential variety has expected
dimension $2n$. If $X$ is (locally) parametrized by a map
\[ {\bf t}=(t_1,\ldots, t_n)\in U\mapsto [x_0({\bf t}), \ldots,
x_r({\bf t})]\in X,\]
where $U\subset \mathbb{C}^ n$ is a suitable nonempty open subset, then $T(X)$ is represented by
\begin{equation*}
\label{eq:4}
\tau\colon U\times \mathbb{C}^{n} \dashrightarrow T(X), \quad
({\bf t},{\bf s})=(t_1,\ldots, t_n, s_1,\ldots, s_n) \mapsto
[x_0({\bf t})+\sum\limits_{j=1}^n s_j \frac {\partial x_0}{\partial
t_j}({\bf t}), \ldots, x_r({\bf t})+\sum\limits_{j=1}^n s_j \frac {\partial x_r}{\partial
t_j}({\bf t})].
\end{equation*}
Assume again that $X$ is described as in \eqref
{eq:parameter}. Then, the parametric equations of $T(X)$ have a
simplified expression
\begin{equation*}
\label{eq:param3a}
x_i =\begin{cases}
t_{i}+s_i,& \quad \text{ for } \quad 1\leqslant i\leqslant n,\\
f_i({\bf t})+\sum\limits_{j=1}^ n s_j\frac {\partial f_i}{\partial
t_j} ({\bf t}),& \quad \text{ for }\quad n+1\leqslant i\leqslant r.
\end{cases}
\end{equation*}
Under the linearizing Cremona transformation $\phi$ from Section~\ref{sec:crem-equiv}, the variety $T(X)$
has image
\begin{equation}
\label{eq:param4}
x_i =\begin{cases}
t_{i}+s_i,& \;\text{ for }\; 1\leqslant i\leqslant n,\\
f_i({\bf t})-f_i({\bf t}+{\bf s})
+\sum\limits_{j=1}^ n s_j\frac {\partial f_i}{\partial t_j} ({\bf t}),& \; \text{ for }\; n+1\leqslant i\leqslant r.
\end{cases}
\end{equation}
\begin{example}\label{ex:tan}
Assume the $f_i$'s in~\eqref
{eq:parameter} are homogeneous quadratic polynomials. Then \eqref{eq:param4} becomes
\begin{equation}
\label{eq:param5}
x_i =\begin{cases}
t_{i}+s_i,& \quad \text{ for}\quad 1\leqslant i\leqslant n,\\
-f_i({\bf s}),& \quad \text{ for}\quad n+1\leqslant i\leqslant r.
\end{cases}
\end{equation}
Formula~\eqref{eq:param5} describes a cone with vertex the space at
infinity of the $n$--dimensional linear space $\{x_{n+1}=\ldots =
x_r=0\}$, over the variety parametrically represented by the last
$r-n$ coordinates of~\eqref{eq:param5}
\begin{equation*}
\label{eq:param6}
x_i=-f_i({\bf s}), \qquad \text{ for}
\quad n+1\leqslant i\leqslant r.
\end{equation*}
\end{example}
In Section \ref {sec:examples} we will see how the equations of secant
and tangential varieties simplify in classical defective cases, as
predicted by the above example. If the parametrization involves forms
of degree higher than $2$, the tangential variety is in general no longer
transformed to a cone. In Section \ref{sec:cum} we will see
alternative linearizing Cremonas that work better for certain
varieties. For instance, for Segre varieties cumulant Cremonas enable
us to write the tangential variety in the form (\ref{eq:param5}) even
though the parametrizing polynomials are not quadratic.
\section{Cremona linearization of some classical varieties}
\label{sec:examples}
Segre, Veronese and Grassmannian varieties and their secants
play a key role in the study of determinantal
varieties.
Here we describe some triangular Cremona transformations that
linearize these varieties, and we will compute the image of their
secant varieties under these transformations. Similar considerations
can be applied to \emph{Spinor varieties} (see \cite {an} for a
parametrization of these varieties), and to \emph{Lagrangian Grassmannians} $LG(n,2n)$, etc., on which we do not dwell here.
\subsection{Segre varieties} \label{sec:segre-varieties} The
\emph{Segre variety} $\Segre (r_1,\ldots, r_k)$ is the image of
$\mathbb{P}^ {r_1}\times \ldots\times \mathbb{P}^ {r_k}$ under the Segre embedding in $\mathbb{P}^ r$, with $r+1=\prod_{i=1}^ k (r_i+1)$ (we may assume $r_1\geqslant r_2\geqslant \ldots\geqslant r_k\geqslant 1$). Sometimes we may use the exponential notation $\Segre (m^ {h_1}_1,\ldots, m^ {h_k}_k)$ if $m_i$ is repeated $h_i$ times, for $1\leqslant i\leqslant k$.
In this section, we find Cremona linearizations for $\Segre(m,n)$ and
we show how they simplify the equations for their secant varieties. In
Section~\ref{sec:cum} we will extend this to higher Segre varieties.
We interpret $ \mathbb{P}^{mn+m+n}$ as the space of nonzero $(m+1)\times
(n+1)$ matrices modulo multiplication by a nonzero scalar, so we have
coordinates $[x_{ij}]_{0\leqslant i\leqslant m, 0\leqslant j\leqslant
n}$ in $ \mathbb{P}^{mn+m+n}$. Then, $\Segre(m,n)$ is defined by the rank condition
\[\rk(x_{ij})_{0\leqslant i\leqslant m, 0\leqslant j\leqslant n}=1.\]
This condition amounts to equating to zero all $2\times 2$ minors of the
matrix ${\bf x}=(x_{ij})_{0\leqslant i\leqslant m, 0\leqslant
j\leqslant n}$. We pass to affine coordinates by setting $x_{00}=1$,
and we let
\[
x=\begin{pmatrix} 1&x_{01}&x_{02}&\cdots & x_{0n}\\
x_{10}&x_{11}&x_{12}&\cdots & x_{1n}\\
\vdots&&& & \vdots\\
x_{m0}&x_{m1}&x_{m2}&\cdots & x_{mn}
\end{pmatrix}
\]
be the corresponding matrix. Then the affine equations of
$\Segre(m,n)$ are $\{ x_{ij}-x_{i0}x_{0j}=0\}_{1\leqslant i\leqslant
m, 1\leqslant j\leqslant n}$. This shows that $\Segre(m,n)$ has
parametric equations of type \eqref {eq:parameter} with parameters
$x_{i0}, x_{0j}$, for $1\leqslant i\leqslant m, 1\leqslant j\leqslant
n$.
As in Section~\ref {ssec:lincrem} a linearizing
affine Cremona has equations (in vector form)
\begin{equation}
( y_{ij})_{0\leq i\leq m, 0\leq j\leq n, (i,j)\neq (0,0)}=
(x_{i0}, x_{0j}, x_{ij}-x_{i0}x_{0j})_{1\leqslant i\leqslant m, 1\leqslant j\leqslant n},
\label{eq:SegreNN}
\end{equation}
which is of type $(2,2)$ and in homogeneous coordinates reads
\[
[{\bf y}]=[y_{ij}]_{0\leq i\leq m, 0\leq j\leq n}=
[x_{00}^ 2, x_{00}x_{i0},x_{00}x_{0j}, x_{00}x_{ij}-x_{i0}x_{0j}]_{1\leqslant i\leqslant m, 1\leqslant j\leqslant n}.
\]
The indeterminacy locus has equations $\{x_{00}=x_{i0}x_{0j}=0\}_{1\leqslant i\leqslant m, 1\leqslant j\leqslant n}$ and the reduced fundamental locus is $\{x_{00}=0\}$.
To see the image of the secant varieties, we
perform column operations on $x$ and use~\eqref{eq:SegreNN} to see that
\[
\operatorname{rank}(x)=\operatorname{rank}\,\begin{pmatrix} 1&0&0&\cdots & 0\\
y_{10}&y_{11}&y_{12}&\cdots & y_{1n}\\
\vdots&&& & \vdots\\
y_{m0}&y_{m1}&y_{m2}&\cdots & y_{mn}
\end{pmatrix}
=1+\operatorname{rank} \begin{pmatrix}
y_{11}&y_{12}&\cdots & y_{1n}\\
\vdots&& & \vdots\\
y_{m1}&y_{m2}&\cdots & y_{mn}
\end{pmatrix}.
\]
Therefore the $k$--secant variety to $\Segre(m,n)$ is mapped to the cone over the $(k-1)$--secant variety of $\Segre(m-1, n-1)$ with vertex along the linear image of
$\Segre(m,n)$.
\begin{example}\label{ex:seg} The (first) secant variety of $\Segre(2,2)$, which coincides with its tangential variety, is the cubic
hypersurface defined by the $3\times 3$-determinant
\[
\det ({\bf x})= x_{00}(x_{11}x_{22}-x_{12}x_{21})-x_{01}(x_{10}x_{22}-x_{20}x_{12})+x_{02}(x_{10}x_{21}-x_{20}x_{11})=0.
\]
In the new coordinates this hypersurface has the simpler binomial equation
$y_{11}y_{22}- y_{12}y_{21}=0$.\end{example}
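This simplification is easy to verify by machine (a check of ours): with $x_{00}=1$, row-reducing ${\bf x}$ shows that $\det({\bf x})$ equals $y_{11}y_{22}-y_{12}y_{21}$ in the new coordinates:

```python
import sympy as sp

# Affine chart x00 = 1 of P^8; generic 3x3 matrix.
x = sp.Matrix(3, 3, lambda i, j: sp.Integer(1) if (i, j) == (0, 0)
              else sp.Symbol(f'x{i}{j}'))

# New coordinates y_ij = x_ij - x_i0*x_0j, for 1 <= i, j <= 2.
y = {(i, j): x[i, j] - x[i, 0]*x[0, j] for i in (1, 2) for j in (1, 2)}

# det(x) becomes the binomial y11*y22 - y12*y21.
binom = y[(1, 1)]*y[(2, 2)] - y[(1, 2)]*y[(2, 1)]
assert sp.expand(x.det() - binom) == 0
```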
\subsection{Projectivized tangent bundles} The projectivized tangent bundle $TP^n$ over $\mathbb{P}^n$ is
embedded in $\Segre(n,n)$ as the traceless nonzero $(n+1)\times(n+1)$--matrices modulo
multiplication by a nonzero scalar, i.e. as the hyperplane section ${\rm tr} ({\bf x})=0$
of $\Segre(n,n)$ in $ \mathbb{P}^{n^2+2n}$. On the affine chart $x_{00}\neq 0$,
we view $TP^n$ as the set of rank $1$ matrices of the form
\[
x=\begin{pmatrix} 1&x_{01}&x_{02}&\cdots & x_{0n}\\
x_{10}&x_{11}&x_{12}&\cdots & x_{1n}\\
\vdots&&& & \vdots\\
x_{n0}&x_{n1}&x_{n2}&\cdots & -x_{11}-\ldots -x_{n-1,n-1}-1
\end{pmatrix}.
\] We can
parametrize $TP^n$ with the $2n-1$ coordinates $x_{0i}\neq 0$, with $1\leqslant i\leqslant n$, and $x_{ii}$, with
$1\leqslant i\leqslant n-1$. The parametric equations for the remaining coordinates are
\[\left\{
\begin{array}{lr}
x_{i0}=\frac {x_{ii}}{x_{0i}} & \text{ for } 1\leqslant i \leqslant n-1,\\
x_{n0}=-\frac {1+x_{11}+\ldots +x_{n-1,n-1}}{x_{0n}},\\
x_{ij}=\frac {x_{ii} x_{0j}} {x_{0i}}=x_{i0}x_{0j}&\text{ for }1\leqslant i<j \leqslant n. \\
\end{array}\right.
\]
According to Section \ref {ssec:lincrem} we have a linearizing Cremona map
$\phi \colon\mathbb{P}^{n^2+2n-1}\dasharrow \mathbb{P}^{n^2+2n-1}$
given in affine coordinates by
\begin{equation}\label{eq:mat}
\begin{array}{lr}
y_{0i}=x_{0i} & \text{ if } \,\,\,0\leqslant i \leqslant n \\
y_{ii}=x_{ii} & \text{ if }\,\,\,1\leqslant i \leqslant n-1\\
y_{i0}=x_{ii}- x_{i0}x_{0i} & \text{ for } 1\leqslant i \leqslant n-1\\
y_{n0}=-({1+x_{11}+\ldots +x_{n-1,n-1}}+x_{n0}x_{0n})\\
y_{ij}=x_{ij}-x_{i0}{x_{0j}}&\text{ for }1\leqslant i<j \leqslant n. \\
\end{array}
\end{equation}
Performing row operations on $x$ and using \eqref {eq:mat}, we see that $x$ has rank $k$ if and only if
\[
y'=\begin{pmatrix} y_{10}&y_{12}&y_{13}&\cdots & y_{1n}\\
y_{21}& y_{20}&y_{23}&\cdots &y_{2n}\\
\vdots&&& & \vdots\\
y_{n1}&y_{n2}&y_{n3}&\cdots & y_{n0}
\end{pmatrix}
\]
has rank $k-1$. This shows that
the $k$--th secant variety of $TP^n$ is mapped to a cone over the $(k-1)$-st secant variety of $\Segre(n-1,n-1)$ with vertex along the linear image of $TP^n$.
\begin{example} \label{ex:tp} The first secant variety of $TP^2$
coincides with the tangent variety and it is the cubic
hypersurface defined in $\mathbb{P} ^7$, with coordinates $[x_{ij}]_{0\leqslant i\leqslant j\leqslant 2, (i,j)\neq (2,2)}$, by the equation
\[
\det (x)= -x_{00}^2 x_{11}-x_{00}(x_{11}^2+x_{12}x_{21}-x_{01}x_{10})+x_{01}(x_{10}x_{11}+x_{20}x_{12})+x_{02}(x_{10}x_{21}-x_{20}x_{11})=0.
\]
In the new coordinates it has the simpler equation
$y_{12}y_{21}=y_{10}y_{20}$, which defines the cone over $\Segre(1,1)$ with vertex along the subspace $\{y_{12}=y_{21}=y_{10}=y_{20}=0\}$, the linear image of $TP^2$.\end{example}
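The same check works for Example~\ref{ex:tp}; a SymPy sketch (again with our own variable names), using the coordinates of \eqref{eq:mat} for $n=2$:

```python
import sympy as sp

x01, x02, x10, x11, x12, x20, x21 = sp.symbols('x01 x02 x10 x11 x12 x20 x21')
x22 = -1 - x11                       # trace(x) = 0 on the chart x00 = 1
M = sp.Matrix([[1, x01, x02], [x10, x11, x12], [x20, x21, x22]])

# the y-coordinates of (eq:mat) for n = 2
y10, y20 = x11 - x10*x01, -(1 + x11 + x20*x02)
y12, y21 = x12 - x10*x02, x21 - x20*x01

# the cubic defining T(TP^2) collapses to y10 y20 - y12 y21
assert sp.expand(M.det() - (y10*y20 - y12*y21)) == 0
```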
\subsection{Veronese varieties}
\label{sec:veronese-varieties}
Consider the \emph{$2$--Veronese variety} $V_{2,n}$ of quadrics in $\mathbb{P}^ n$
embedded in $\mathbb{P}^ {\frac {n(n+3)}2}$ with coordinates $[x_{ij}]_ {0\leqslant
i\leqslant j\leqslant n}$. The following map $\phi$ is a linearizing affine $(2,2)$ Cremona
transformation for $V_{2,n}$, defined on $\{x_{00}\not=0\}$ by
\[
(x_{ij})_{0\leq i\le j\leq n}\mapsto (y_{ij})_{0\leq i\le j\leq n}=
(x_{01},\dots, x_{0n}, x_{ij}-x_{0i}x_{0j})_{1\leq i\le j\leq n}.
\]
Its reduced fundamental locus is $\{x_{00}=0\}$ and the indeterminacy locus is $\{x_{00}=\ldots=x_{0n}=0\}$.
We interpret $V_{2,n}$ as the set of rank $1$ symmetric matrices
$x=(x_{ij})_ {0\leqslant
i, j\leqslant n}$ with $x_{ji}=x_{ij}$ if $j<i$.
The $(k-1)$--secant variety to $V_{2,n}$ is
defined by the $k\times k$--minors of $x$. On $\{x_{00}\not=0\}$ the rank of $x$
coincides with the rank of
\[
\begin{pmatrix} 1&0&0&\ldots &0\\
x_{01}&x_{11}-x_{01}^2&x_{12}-x_{01}x_{02}&\ldots &x_{1n}-x_{01}x_{0n}\\
\vdots& \vdots& \vdots& \vdots&\vdots\\
x_{0n}&x_{1n}-x_{01}x_{0n}&x_{2n}-x_{02}x_{0n}&\ldots &x_{nn}-x_{0n}^2
\end{pmatrix}=
\begin{pmatrix} 1&0&0&\ldots &0\\
x_{01}&y_{11}&y_{12}&\ldots &y_{1n}\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
x_{0n}&y_{1n}&y_{2n}&\ldots &y_{nn}
\end{pmatrix}.
\]
So the $(k-1)$--secant variety to $V_{2,n}$ is mapped by $\phi$
to the cone over the $(k-2)$--secant variety of $V_{2,n-1}$ with
vertex along the linear image of $V_{2,n}$ in $\mathbb{P}^ {\frac {n(n+3)}2}$.
\begin{example}\label{ex:ver} The secant variety $\Sec(V_{2,2})$ of the Veronese surface in $\mathbb{P}^5$ is mapped to the cone over the conic $V_{2,1}=\{y_{11}y_{22}-y_{12}^2=0\} \subset\mathbb{P}^2=\{y_{00}=y_{10}=y_{20}=0\}$ with vertex $\mathbb{P}^2=\{y_{11}=y_{12}=y_{22}=0\}$.\end{example}
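Example~\ref{ex:ver} can be checked in the same way; a SymPy sketch with our own names:

```python
import sympy as sp

# a symmetric matrix on the chart x00 = 1; V_{2,2} is its rank-1 locus
x01, x02, x11, x12, x22 = sp.symbols('x01 x02 x11 x12 x22')
M = sp.Matrix([[1, x01, x02], [x01, x11, x12], [x02, x12, x22]])

y11, y12, y22 = x11 - x01**2, x12 - x01*x02, x22 - x02**2

# det(x), cutting out Sec(V_{2,2}), becomes the equation of the conic
assert sp.expand(M.det() - (y11*y22 - y12**2)) == 0
```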
In general, the \emph{$d$--Veronese variety} $V_{d,n}$ of $\mathbb{P}^ n$ is
embedded in $\mathbb{P}^ {\binom{n+d}n-1}$ with coordinates $[x_{i_0\ldots
i_n}]_ {i_0+\ldots+ i_n=d}$, with $i_j\geqslant 0$ for $ 0\leqslant
j\leqslant n$. Its projection from the linear space $\{x_{i_0\ldots
i_n}=0\}_{i_0\geqslant d-1}$ to the $n$--space $\{x_{i_0\ldots
i_n}=0\}_{i_0<d-1}$ is birational. Accordingly, we can find a
Cremona linearizing map. We will treat the curve case in Section~\ref
{sec:rnc} but we will not dwell on the higher dimensional and higher
degree cases.
\subsection{Grassmannians of lines}
\label{sec:grassm-vari}
In this section we present Cremona linearizations of Grassmannians of
lines. Analogous linearizations exist for higher Grassmannians, an
example of which we treat in Section~\ref {ssec:grass}.
Let $V$ be a complex vector space of dimension $n$.
We can identify $V$ with $\mathbb{C}^n$, once we fix a basis $({\bf e}_0,\ldots, {\bf e}_{n-1})$ of $V$.
The \emph{Pl\"ucker embedding} maps the Grassmannian $G(2,n)$ of $2$--dimensional vector subspaces (i.e. \emph{$2$--planes}) of $V$ into $\mathbb{P}^{\varphirac{n(n-1)}{2}-1}=\mathbb{P}(\wedge^ 2V)$, which we identify with the projective space associated to the vector space of antisymmetric matrices of order $n$, thus
the coordinates are $[x_{ij}]_{0\leq i<j\leq n-1}$.
Two vectors
\[{\bf \xi}_0=(\xi_{00},\ldots, \xi_{0,n-1}),\,\,\, {\bf
\xi}_1=(\xi_{10},\ldots, \xi_{1,n-1})\] in $V$ that span a
$2$--plane $W$ form the rows of a $2\times n$--matrix $x$, whose
minors are independent of the chosen spanning vectors, up to a nonzero common
factor. The \emph{Pl\"ucker point} associated to $W$ is
$[x_{ij}]_{0\leq i<j\leq n-1}$, where $x_{ij}$ denotes the minor of
$x$ obtained by choosing the $i$-th and $j$-th columns.
The \emph{Pl\"ucker ideal} $I_{2,n}$ is the homogeneous ideal of
$G(2,n)$ in its Pl\"ucker embedding. This ideal is prime and it is
generated by quadrics. More precisely, $I_{2,n}$ is generated by the
$\binom{n}{4}$ \emph{three-term Pl\"ucker relations}
\begin{equation}
x_{ij}x_{kl}-x_{ik}x_{jl}+x_{il}x_{jk} \qquad \text{ for } 0\leqslant
i<j<k<l\leqslant n-1.\label{eq:PluckerRel}
\end{equation}
Using~\eqref{eq:PluckerRel}, in the open affine $\{x_{01}\neq 0\}$, we
have parametric equations for $G(2,n)$: the parameters are the $2n-4$
coordinates $x_{ij}$ with $i=0,1$, and the equations for the remaining
coordinates are
\[
x_{ij}=x_{0i}x_{1j}-x_{0j}x_{1i},\,\,\, \text {for}\,\,\, 2\leqslant i<j\leqslant n-1.
\]
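One can confirm symbolically that these parametric equations satisfy all three-term Pl\"ucker relations; a SymPy sketch for $n=6$ (the helper names are ours):

```python
import sympy as sp
from itertools import combinations

n = 6
# parameters: x_{0i}, x_{1i} for 2 <= i <= n-1, on the chart x_{01} = 1
x0 = {i: sp.Symbol(f'x0{i}') for i in range(2, n)}
x1 = {i: sp.Symbol(f'x1{i}') for i in range(2, n)}

x = {(0, 1): sp.Integer(1)}
x.update({(0, i): x0[i] for i in range(2, n)})
x.update({(1, i): x1[i] for i in range(2, n)})
# the parametric equations x_ij = x_0i x_1j - x_0j x_1i, 2 <= i < j <= n-1
x.update({(i, j): x0[i]*x1[j] - x0[j]*x1[i]
          for i in range(2, n) for j in range(i + 1, n)})

# every three-term Pluecker relation vanishes identically
rels = [x[(i, j)]*x[(k, l)] - x[(i, k)]*x[(j, l)] + x[(i, l)]*x[(j, k)]
        for (i, j, k, l) in combinations(range(n), 4)]
assert all(sp.expand(r) == 0 for r in rels)
```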
Hence $G(2,n)$ is rational, and a birational map $G(2,n)\dasharrow \mathbb{P}^
{2n-2}$ is given by projecting $G(2,n)$ from the linear span $\mathbb{P}^
{\frac {n(n-5)}2+2}$ of $G(2,n-2)$ viewed inside $G(2,n)$
as the
Grassmannian of 2--planes in $V'=\langle {\bf
e}_2, \ldots, {\bf e}_{n-1}\rangle\subset V$.
According to Section \ref {ssec:lincrem}, we have a triangular $(2,2)$--Cremona linearization
$\varphi\colon
\mathbb{P}^{\frac{n(n-1)}{2}-1}\dashrightarrow \mathbb{P}^{\frac{n(n-1)}{2}-1}$ of $G(2,n)$, given in affine coordinates by
\begin{equation*}
y_{ij}=
\begin{cases}
x_{ij} &\text{ if } i=0,1,\; 2\leq j\leq n-1,\\
x_{ij}-x_{0i}x_{1j}+x_{0j}x_{1i} &\text{ if } \,\,\, 2\leqslant i<j\leqslant n-1.
\end{cases}
\end{equation*}
The reduced fundamental locus is $\{x_{01}=0\}$, and the indeterminacy
locus is the union of the two linear spaces
$\{x_{01}=x_{02}=\ldots=x_{0(n-1)}=0\}$ and $\{x_{01}=x_{12}=\ldots =x_{1(n-1)}=0\}$.
On the complement of $\{x_{01}= 0\}$, the Grassmannian $G(2,n)$ is the
set of rank $2$ matrices of the form
\[
x=\begin{pmatrix} 0&1&x_{02}&x_{03}&\ldots&x_{0,n-1}\\ -1
&0&x_{12}&x_{13}&\ldots &x_{1,n-1}\\ -x_{02}
&-x_{12}&0&\ddots &\vdots&\vdots \\ -x_{03}
&-x_{13}&\ddots&\ddots&\ddots&\vdots\\ \vdots
&\vdots&\vdots&\ddots&\ddots &x_{n-2,n-1}
\\-x_{0,n-1}&-x_{1,n-1}&-x_{2,n-1}&\ldots&-x_{n-2,n-1}&0\end{pmatrix}.
\]
Performing suitable column operations on
$x$ and using the $y$--coordinates, we see that the rank of $x$ is 2 plus the rank of the matrix
\[\begin{pmatrix}
0&y_{23}& \ldots&\ldots&y_{2,n-1}\\
\vdots &\vdots&\vdots&\vdots&\vdots\\
\vdots &\vdots&\vdots&0&y_{n-2,n-1}\\
-y_{2,n-1}&\ldots&\ldots&-y_{n-2,n-1}&0\end{pmatrix}.
\]
Since $\Sec_k(G(2,n))$ is the set of antisymmetric matrices of rank at most $2k+2$, we see that $\Sec_k(G(2,n))$
is mapped by $\varphi$ to the cone over $\Sec_{k-1}(G(2,n-2))$ with vertex along the linear image of
$G(2,n)$.
\begin{example}\label{ex:grass} The first secant and tangent variety of $G(2,6)$ coincide and are defined by the Pfaffian cubic polynomial
\begin{align*}
& x_{01}(x_{23}x_{45}- x_{24}x_{35}+ x_{25}x_{34})-x_{02}(x_{13}x_{45}
-x_{14}x_{35}+x_{15}x_{34})+x_{03}(x_{12}x_{45}- x_{14}x_{25}+
x_{15}x_{24})
\\
& -x_{04}(x_{12}x_{35}-x_{13}x_{25}+x_{15}x_{23})+x_{05}(x_{14}x_{23}-x_{13}x_{24}+x_{12}x_{34})=0.
\end{align*}
In the $y$--coordinates, this hypersurface has a much
simpler equation, namely the Pl\"ucker equation of $G(2,4)$
\[ y_{23}y_{45}- y_{24}y_{35}+y_{25}y_{34}=0.
\]
\end{example}
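The Pfaffian identity behind Example~\ref{ex:grass} can also be verified directly; the following SymPy sketch expands the $6\times 6$ Pfaffian along its first row, matching the displayed cubic (the helper names are ours):

```python
import sympy as sp

idx = [(i, j) for i in range(6) for j in range(i + 1, 6)]
xs = {(i, j): sp.Symbol(f'x{i}{j}') for (i, j) in idx}
xs[(0, 1)] = sp.Integer(1)          # affine chart x01 = 1

def pf(v):  # 6x6 Pfaffian, expanded along the first row
    x = lambda i, j: v[(i, j)]
    def pf4(a, b, c, d):            # 4x4 Pfaffian on rows/columns a<b<c<d
        return x(a, b)*x(c, d) - x(a, c)*x(b, d) + x(a, d)*x(b, c)
    return (x(0, 1)*pf4(2, 3, 4, 5) - x(0, 2)*pf4(1, 3, 4, 5)
            + x(0, 3)*pf4(1, 2, 4, 5) - x(0, 4)*pf4(1, 2, 3, 5)
            + x(0, 5)*pf4(1, 2, 3, 4))

# y_ij = x_ij - x_0i x_1j + x_0j x_1i for 2 <= i < j <= 5
y = {(i, j): xs[(i, j)] - xs[(0, i)]*xs[(1, j)] + xs[(0, j)]*xs[(1, i)]
     for (i, j) in idx if i >= 2}

lhs = pf(xs)
rhs = y[(2, 3)]*y[(4, 5)] - y[(2, 4)]*y[(3, 5)] + y[(2, 5)]*y[(3, 4)]
assert sp.expand(lhs - rhs) == 0    # the cubic becomes the Pluecker quadric
```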
\begin{example} A different Cremona transformation that linearizes $G(2,n)$ was considered in \cite{GHPRS}, namely
\[
y_{0i}= \frac{1}{x_{0i}},\quad i=1,\ldots,n-1, \qquad y_{ij}= \frac{x_{ij}}{x_{0i}x_{0j}},\quad 1\leqslant i<j\leqslant n-1. \]
It maps $G(2,n)$ to the linear space defined by
\[ y_{ij}-y_{ik}+y_{jk}=0 \qquad 1\leqslant i<j<k\leqslant n-1.
\]
This transformation is studied in \cite{GHPRS} to compare various
notions of convexity for lines.
\end{example}
\subsection{Severi varieties}
\label{sec:severi-varieties}
The Veronese surface $V_{2,2}$, the Segre variety $\Segre(2,2)$ and
the Grassmannian $G(2,6)$ mentioned in Examples \ref {ex:ver}, \ref
{ex:seg} and \ref {ex:grass} are \emph{Severi varieties},
i.e. smooth 1--defective varieties of dimension $n$ in $\mathbb{P}^ {\frac
32n+2}$ (see \cite {Zak1}). There is one more Severi variety: the
so--called \emph{Cartan variety} of dimension 16 embedded in $\mathbb{P}^
{26}$.
Let $X$ be a Severi variety. It is known that $X$ is swept out by an
$n$--dimensional family $\mathcal Q$ of $\frac n2$--dimensional
smooth quadrics, such that, given two distinct points $x,y\in X$,
there is a unique quadric of $\mathcal Q$ containing $x,y$. If
$Q\in \mathcal Q$, the projection of $X$ from the linear space
$\langle Q\rangle$ of dimension $\frac n2+1$ to $\mathbb{P}^ n$ is
birational and, as usual by now, we get a Cremona linearization $\phi$ of $X$.
Since $X$ is defective, its tangent and first secant varieties coincide.
By Example~\ref{ex:tan} we see
that $\phi$ maps $T(X)={\rm Sec}(X)$ to the cone over $Q$ with vertex the $n$--dimensional
linear image of $X$. This agrees with the
contents of the previous examples and applies to the Cartan
variety as well.
\subsection{Rational normal curves}
\label{sec:rnc}
Let $V_n:=V_{1,n}$ be the rational normal curve of degree $n$ in $\mathbb{P} ^n$
\[
V_n=\{[t^n, st^{n-1}, \ldots, s^{n-1}t, s^n] \,:\, [s,t]\in \mathbb{P}^1\} \subset \mathbb{P}^n.
\]
Let $[x_0, \ldots, x_n]$ be the coordinates of $\mathbb{P}^n$. Then
$V_n$ is the determinantal variety
\[
\rk \begin{pmatrix} x_0&x_1&\ldots &x_{n-1}\\
x_1&x_2&\ldots&x_n\\
\end{pmatrix}=1.
\]
Assume $n=2k$ is even (similar considerations apply in the odd case). Then we can linearize $V_n$ via the
following affine triangular $(2,2)$--Cremona map $\phi$ on $\{x_0\neq 0\}$
\[
y_{i}=
\begin{cases}
x_1 & \text{ if } i=1,\\
x_i-x_{i-1}x_1 &\text{ if }i>1 \text { and } i \text{ is odd},\\
x_i-x_{i/2}^2 &\text{ otherwise.}
\end{cases}
\]
The $\phi$--image of $V_n$ is the linear space $\{y_{2}=y_3=\ldots=y_n=0\}$.
The reduced fundamental locus is $\{x_0=0\}$ and the indeterminacy locus is $\{x_0=x_1=\ldots=x_{k}=0\}$.
The secant variety $\Sec(V_n)$ is defined by the $3\times 3$--minors of the $3\times(n-1)$ \emph{catalecticant} matrix
\[
\begin{pmatrix} x_0&x_1&\ldots &x_{n-2}\\
x_1&x_2&\ldots&x_{n-1}\\
x_2&x_3&\ldots&x_{n}\\
\end{pmatrix},
\]
(see \cite {dol}), where as usual we set $x_0=1$. Using column operations, this matrix can be transformed into the following one, expressed in terms of the $y$--coordinates
\[
\begin{pmatrix} 1&0&\ldots &0\\
y_1&y_2&\ldots&y_{n-1}\\
y_2+y_1^ 2&y_3&\ldots&y_{n}\\
\end{pmatrix}.
\]
This shows that $\Sec(V_n)$ is mapped by $\phi$ to the cone over a
$V_{n-2}$ with vertex the line $\{y_2=\ldots=y_n=0\}$ which is the
$\phi$--image of $V_n$. A similar situation occurs for all higher
secant varieties of $V_n$. For instance, $\Sec_{k-1}(V_n)$ is the
hypersurface defined by the catalecticant determinantal equation
of degree $k+1=\frac n2+1$
\[
\det\begin{pmatrix} x_0&x_1&\ldots &x_{k}\\
x_1&x_2&\ldots&x_{k+1}\\
\ldots& \ldots& \ldots& \ldots\\
x_k&x_{k+1}&\ldots&x_{n}\\
\end{pmatrix}=0.
\]
Hence, $\phi$ maps $\Sec_{k-1}(V_n)$ to the cone over
$\Sec_{k-2}(V_{n-2})$ with vertex the $\mathbb{P}^1$ obtained as the
$\phi$--image of $V_n$.
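A quick symbolic check that the map above indeed linearizes $V_n$ (a SymPy sketch for $n=6$; the names are ours):

```python
import sympy as sp

t = sp.symbols('t')
n = 6                               # n = 2k even, as in the text
x = [t**i for i in range(n + 1)]    # affine V_n: x_i = t^i, with x_0 = 1

def y(i):
    if i == 1:
        return x[1]
    if i % 2 == 1:                  # i > 1 odd
        return x[i] - x[i - 1]*x[1]
    return x[i] - x[i // 2]**2      # i even

# on V_n all coordinates y_2, ..., y_n vanish, so V_n maps to a line
assert all(sp.expand(y(i)) == 0 for i in range(2, n + 1))
```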
\subsection{The Grassmannian $G(3,6)$}\label {ssec:grass}
Let $X=(x_{ij})_{1\leqslant i,j\leqslant 3}$ and $Y=(y_{ij})_{1\leqslant i,j\leqslant 3}$ be $3\times 3$-matrices, and let
\[
[ x_0,X,Y,y_0]
\]
be coordinates in $\mathbb{P}^{19}$.
Let $A=(a_{ij})_{1\leqslant i,j\leqslant 3}$ be a $3\times 3$ matrix. We denote by $A_{ij}$ the minor of $A$ obtained by deleting row $i$ and column $j$, so that
\[ \wedge ^ 2A= (A_{ij})_{1\leqslant i,j\leqslant 3},\,\,\, \text {and}\,\,\, \wedge^ 3A=\det (A).\]
We parametrize $G(3,6)$ as follows:
\[
(I_3|A)\in \mathbb{C}^ 9\mapsto (1, A, \wedge^2 A, \wedge^3
A)\in \{x_{0}\not=0\}\subset\mathbb{P}^{19}.
\]
This parametrization is precisely the inverse of the birational projection of $G(3,6)$ from its tangent space at the point $[0,0,0,1]$.
By our discussion in Section~\ref {ssec:lincrem}, this gives rise to a
family of triangular Cremona transformations linearizing $G(3,6)$
\[\phi: [ x_0,X,Y,y_0]\in \mathbb{P}^{19}\dasharrow [z_0, Z,W,w_0]\in \mathbb{P}^{19}\]
where $Z=(z_{ij})_{1\leqslant i,j\leqslant 3}$ and $W=(w_{ij})_{1\leqslant i,j\leqslant 3}$.
We can view the determinant of $A$ in two ways: as a cubic polynomial
in the entries of $A$ and as a bilinear quadric form in the variables
$(a_{ij}, A_{ij})$. This yields different Cremona transformations, one
defined by quadrics whose inverse transformation is defined by cubics
(a \emph{quadro--cubic} transformation), the other defined by cubics
with the inverse also defined by cubics (a \emph{{cubo--}cubic}
transformation).
Let us start with the quadro-cubic transformation $\phi$. On
$\{x_0\not=0\}$ it is defined by
\begin{equation*}
\label{eq:quadroG36}
z_{ij}=x_{ij},\quad w_{ij}=y_{ij}-X_{ij}, \quad
w_{0}=y_{0} - \sum_{i=1}^3 (-1)^{i+1} x_{1i}y_{1i}.
\end{equation*}
The reduced fundamental locus is $\{x_0=0\}$ and the indeterminacy locus is $\{x_0=x_{ij}=0\}$.
The inverse of $\phi$, on $\{z_0\not=0\}$, is given by
\begin{equation}\label{eq:inv}
x_{ij}=z_{ij}, \quad y_{ij}=w_{ij}+Z_{ij}, \quad y_0=w_0+\sum_{i=1}^3 (-1)^{i+1} z_{1i}(w_{1i}+Z_{1i}).
\end{equation}
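That \eqref{eq:inv} really inverts the quadro-cubic map can be confirmed symbolically; a SymPy sketch (the names are ours), using \texttt{Matrix.minor} for the minors $A_{ij}$:

```python
import sympy as sp

X = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'x{i+1}{j+1}'))
Y = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'y{i+1}{j+1}'))
y0 = sp.Symbol('y0')
minor = lambda A, i, j: A.minor(i, j)   # A_{ij}: delete row i, column j

# forward map on {x0 = 1}: Z = X, w_ij = y_ij - X_ij, w0 as in the text
Z = X
W = sp.Matrix(3, 3, lambda i, j: Y[i, j] - minor(X, i, j))
w0 = y0 - sum((-1)**i * X[0, i]*Y[0, i] for i in range(3))

# inverse map (eq:inv): y_ij = w_ij + Z_ij, plus the formula for y0
Y_back = sp.Matrix(3, 3, lambda i, j: W[i, j] + minor(Z, i, j))
y0_back = w0 + sum((-1)**i * Z[0, i]*(W[0, i] + minor(Z, 0, i))
                   for i in range(3))

assert all(sp.expand(e) == 0 for e in (Y_back - Y))
assert sp.expand(y0_back - y0) == 0
```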
The {cubo--}cubic Cremona transformation $\psi$ is given on the affine set $\{x_0\not=0\}$ by the following expressions
\begin{equation*}
\label{eq:cuboG36}
z_{ij}=x_{ij},\quad w_{ij}=y_{ij}-X_{ij}, \quad
w_{0}=y_{0} - \det(X).
\end{equation*}
Its reduced fundamental locus is $\{x_0=0\}$, and its indeterminacy locus is $\{x_0=x_{ij}=0\}$.
The inverse Cremona transformation restricted to $\{z_0\not=0\}$ is defined by
\begin{equation*}
\label{eq:invcuboG36}
x_{ij}=z_{ij},\quad y_{ij}=w_{ij}+Z_{ij}, \quad
y_{0}=w_{0} + \det(Z).
\end{equation*}
The image of $G(3,6)$ under both $\phi$ and $\psi$ is the linear space
defined by $\{W=0, w_0=0\}$.
It is known that $\Sec(G(3,6))=\mathbb{P}^{19}$ (see \cite {donagi77}), while $T(G(3,6))$ is the quartic hypersurface
defined by
\[
P=(x_{0}y_{0}-{\rm tr} (XY))^2+4x_{0}\det(Y)+4y_{0}\det(X)-4\sum_{i,j}\det(X_{ij})\det(Y_{ji})=0,
\]
(see \cite[p. 83]{SK}). We find the equation of $\phi(T(G(3,6)))$ by
plugging \eqref {eq:inv} in $P$ (where $x_0=1$). We obtain a degree
6 equation
\[
z_{13}^4z_{22}^2-2z_{12}z_{13}^3z_{22}z_{23}+\cdots=0 \quad (\text{approximately 600 terms}).
\]
Analogously, for $\psi(T(G(3,6)))$ we obtain
\[
z_{13}^4z_{22}^2-2z_{12}z_{13}^3z_{22}z_{23}+\cdots=0 \quad (\text{approximately 600 terms}).
\]
Since $T(G(3,6))$ is singular along $G(3,6)$, the same happens for the
above two sextics along $\{W=0, w_0=0\}$. In any event, neither of these
two linearizing Cremonas simplifies the equation of $T(G(3,6))$, which
actually becomes more complicated.
Similar considerations apply to the Grassmannians $G(n,2n)$ with
$n\geq 4$.
\section{Cumulant Cremonas}\label{sec:cum}
As we saw in Section \ref{sec:examples}, there are several examples
in which a Cremona linearization of a rational variety simplifies the
equations of its secant varieties. Here is another instance of this
behavior.
\begin{example}\label{ex:3segre}
Consider the Segre embedding $\Sigma_n$ of $(\mathbb{P}^1)^ n$ in $\mathbb{P}^{2^
n-1}$. In particular take the case $n=3$. Then, $\Sigma_3$ is
parametrically given by the equations
\[x_1=t_1,\quad x_2=t_2,\quad x_3=t_3,\quad x_4=t_1t_2, \quad
x_5=t_1t_3, \quad x_6=t_2t_3, \quad x_7=t_1t_2t_3.\] We have
$\Sec(\Sigma_3)=\mathbb{P}^ 7$, whereas $T(\Sigma_3)$ is a hypersurface of
degree four in $\mathbb{P}^7$. Its defining equation is the so-called
\emph{hyperdeterminant} (see \cite {GKZ}).
The linearizing Cremona transformation $\phi$ defined in
(\ref{eq:basiccremona}) maps $\Sigma_3$ to the linear space
$x_4=\ldots=x_7=0$. Following (\ref{eq:param4}), the variety
$T(\Sigma_3)$ is mapped under $\phi$ to a (symmetric) degree four
hypersurface with defining equation
\[x_3^ 2x_4^ 2+ x_2^ 2x_5^ 2+x_1^ 2x_6^ 2+ 2(x_1x_2x_5x_6+
x_1x_3x_4x_6+ x_2x_3x_4x_5)+ 4x_4x_5x_6
-2x_7(x_1x_6+x_3x_4+x_2x_5)+x_7^ 2=0.\]
\end{example}
In this case the
linearization process simplifies the equation of $T(\Sigma_3)$, but the degree remains the same. The question is:
can we find a linearizing Cremona for $\Sigma_3$ which lowers the degree of $T(\Sigma_3)$? An affirmative answer to this question is given by \emph{cumulant Cremonas} arising from algebraic
statistics. Indeed,
this family of Cremonas gives the following very simple equation for the image of $T(\Sigma_3)$ (see \cite[(2.1)]{pwz-2011-bincum})
\[x_7^ 2+4x_4x_5x_6=0.\]
\subsection{Binary cumulants}
We recall the setting of binary cumulants from~\cite{pwz-2011-bincum}. Let
$\Pi(I)$ denote the set of all nonempty set partitions of
$I\subseteq[n]:=\{1,\ldots,n\}$. We write $\pi=B_{1}|\cdots|B_{r}$
for a typical element of $\Pi(I)$, where all $\emptyset\neq B_{i}\subset I$ are
the unordered disjoint \emph{blocks} of $\pi$, with $I=\cup_{i=1}^ rB_i$. For example, if $n=3$, then
$$
\Pi([3])=\{123,\;1|23,\; 2|13,\; 3|12,\; 1|2|3\}.
$$
We denote by $|\pi|$ the number of blocks of $\pi \in \Pi(I)$.
Consider two copies of
$\mathbb{P}^{2^{n}-1}=\mathbb{P}(\mathbb{C}^{2}\otimes\cdots\otimes\mathbb{C}^{2})$ with coordinates $[x_{I}]_{I\subseteq [n]}$
and $[y_{I}]_{I\subseteq [n]}$. Following
\cite{pwz-2011-bincum}, we define the \emph{(binary) cumulant Cremona transformation} or the \emph{(binary) cumulant change of coordinates}
\[\psi\colon [x_{I}]_{I\subseteq [n]}\in \mathbb{P}^{2^{n}-1}\dasharrow [y_{I}]_{I\subseteq [n]}\in
\mathbb{P}^{2^{n}-1}\]
via the formula
\begin{equation}\label{eq:cumul}
y_{\emptyset}=x_{\emptyset}^{n},
\quad \,\,\, \text{and}\,\,\,\quad
y_{I}=\sum_{\pi\in \Pi(I)}(-1)^{|\pi|-1}(|\pi|-1)!\;x_{\emptyset}^{n-|\pi|}\prod_{B\in\pi} x_{B} \qquad \text{for }\emptyset \neq I\subseteq [n],
\end{equation}
where the product
in~\eqref{eq:cumul} is taken over all blocks $B$ of $\pi$ (we will call $[y_{I}]_{I\subseteq [n]}$ the \emph{cumulant coordinates}).
Note that $I$ is the
maximal element in the poset $\Pi(I)$, hence $\psi$ is a triangular
Cremona transformation. It linearizes $\Sigma_n$, which is mapped to the linear space
$\{y_I=0\}_{|I|\geqslant 2}$ (see \cite[Remark
3.4]{pwz-2011-bincum}), and $T(\Sigma_n)$ is
toric in the cumulant coordinates (see \cite[Theorem 4.1]{pwz-2011-bincum}).
The inverse map of $\psi$ is given by the standard M\"{o}bius
inversion formula for the partition lattice $\Pi([n])$
(see \cite[Proposition 3.7.1]{stanley2006enumerative})
\begin{equation*}\label{eq:inversecumul}
x_{\emptyset}=y_{\emptyset}^{n},\quad \,\,\, \text{and}\,\,\,\quad
x_{I}=\sum_{\pi\in \Pi(I)}y_{\emptyset}^{n-|\pi|}\prod_{B\in\pi} y_{B}\qquad\text{for }I\subseteq [n].
\end{equation*}
Both maps are morphisms on the open affine subsets
$\{x_{\emptyset}\neq 0\}$ and $\{y_{\emptyset}\neq 0\}$, respectively.
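Formula \eqref{eq:cumul}, restricted to the chart $x_{\emptyset}=1$, is straightforward to implement and test; a Python/SymPy sketch with our own helper names, checking that $\Sigma_3$ is linearized:

```python
import math
import sympy as sp
from functools import reduce
from operator import mul
from itertools import combinations
from sympy.utilities.iterables import multiset_partitions

n = 3
subsets = [c for r in range(1, n + 1)
           for c in combinations(range(1, n + 1), r)]
x = {s: sp.Symbol('x' + ''.join(map(str, s))) for s in subsets}

def cumulant(I):
    """y_I from (eq:cumul), on the affine chart x_emptyset = 1."""
    total = sp.Integer(0)
    for pi in multiset_partitions(list(I)):
        coeff = (-1)**(len(pi) - 1) * math.factorial(len(pi) - 1)
        total += coeff * reduce(mul, (x[tuple(B)] for B in pi))
    return sp.expand(total)

# on Sigma_3, i.e. x_I = prod_{i in I} t_i, every y_I with |I| >= 2 vanishes
t = sp.symbols('t1:4')
on_segre = {x[s]: reduce(mul, (t[i - 1] for i in s)) for s in subsets}
assert all(cumulant(I).subs(on_segre).expand() == 0
           for I in subsets if len(I) >= 2)
```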
\begin{example}
Fix $n=2$. Then
\[y_{\emptyset}=x_{\emptyset}^{2}, \,\,\,
y_{1}=x_{\emptyset}x_{1}, \,\,\, y_{2}=x_{\emptyset}x_{2}, \,\,\, y_{12}=x_{\emptyset}x_{12}-x_{1}x_{2},\]
which coincides with \eqref{eq:SegreNN} in this case.
The inverse is given by
\[x_{\emptyset}=y_{\emptyset}^{2}, \,\,\, x_{1}=y_{\emptyset}y_{1},
x_{2}=y_{\emptyset}y_{2},\,\,\, x_{12}=y_{\emptyset}y_{12}+y_{1}y_{2}.\]
The {fundamental locus} is $\{x_{\emptyset}^{3}=0\}$.
Let $n=3$. Then
\[
\begin{array}{lr}
y_{\emptyset}=x_{\emptyset}^{3},\,\,\,
y_{i}=x_{\emptyset}^{2}x_{i}, \,\,\, \text {for}\,\,\, 1\leqslant i\leqslant 3, \,\,\,
y_{ij}=x_{\emptyset}^{2}x_{ij}-x_{\emptyset}x_{i}x_{j}, \,\,\, \text {for}\,\,\, 1\leq
i<j\leq 3\\
y_{123}=x_{\emptyset}^{2}x_{123}-x_{\emptyset}x_{1}x_{23}-x_{\emptyset}x_{2}x_{13}-x_{\emptyset}x_{3}x_{12}+2x_{1}x_{2}x_{3}.\\
\end{array}
\]
The inverse is
\[
\begin{array}{lr}
x_{\emptyset}=y_{\emptyset}^3,\,\,\,
x_{i}=y_{\emptyset}^2y_i, \,\,\, \text {for}\,\,\, 1\leqslant i\leqslant 3, \,\,\,
x_{ij} = y_{\emptyset}(y_{\emptyset}y_{ij}+y_iy_j), \,\,\, \text {for}\,\,\, 1\leq
i<j\leq 3\\
x_{123}=y_{\emptyset}^2y_{123}+ y_{\emptyset}
(y_{12}y_3+y_{13}y_2+y_{23}y_1)+ y_1y_2y_3.\\
\end{array}
\]
The {fundamental locus} is now
$\{x_{\emptyset}^{8}=0\}$.
\end{example}
\subsection{Linearizing higher Segre varieties}
The above construction can be generalized to
$\Segre(r_1,\ldots,r_k)\subset \mathbb{P}^ r$ with $r+1=\prod_{i=1}^
k(r_i+1)$, for any $k\geq 2$ and $r_1\geqslant \ldots \geqslant
r_k\geqslant 1$. The case $k=2$ has been treated in Section
\ref{sec:segre-varieties}. If $k=3$, let $[x_{ijk}]_{0\leq i\leq
r_1,0\leq j\leq r_2,0\leq k\leq r_3}$ be the coordinates on $\mathbb{P}^r$. Define a Cremona transformation by
\begin{eqnarray*}
& y_{000}=x_{000}^3,\quad y_{i00}=x_{000}^2x_{i00},\quad y_{0j0}=x_{000}^2x_{0j0},\quad y_{00k}=x_{000}^2x_{00k},&\\
&y_{ij0}=x_{000}(x_{000}x_{ij0}-x_{i00}x_{0j0}),\quad y_{i0k}=x_{000}(x_{000}x_{i0k}-x_{i00}x_{00k}),\quad y_{0jk}=x_{000}(x_{000}x_{0jk}-x_{0j0}x_{00k}),\quad &\\
& y_{ijk}=x_{000}^2x_{ijk}-x_{000}x_{i00}x_{0jk}-x_{000}x_{0j0}x_{i0k}-x_{000}x_{00k}x_{ij0}+2x_{i00}x_{0j0}x_{00k}, &
\end{eqnarray*}
where $i,j,k\geq 1$. This linearizes $\Segre(r_1,r_2,r_3)$ by mapping it to the linear space $\{y_{ijk}=0\}$ for all triples $(i,j,k)\in \prod _{i=1}^ 3\{0,\ldots,r_i\}$ with at least two nonzero coordinates.
This generalizes to any $k$ as follows (see \cite[Sections 7 and
8]{MOZ}). Let $S(\mathbf{i})\subseteq [k]$ be the support of
$\mathbf{i}=(i_1,\ldots,i_k)\in \prod_{i=1}^k \{0,\ldots,r_i\}$, i.e.\
the set of coordinates of nonzero entries in $\mathbf{i}$. For every
$B\subseteq[k]$, we define the $k$--tuple $\mathbf{i}(B)$ that agrees
with $\mathbf{i}$ on those indices in $B$ and is zero otherwise. We
define the Cremona transformation $\psi\colon \mathbb{P}^{r}\dashrightarrow
\mathbb{P}^{r}$ by the formulas
$$
y_\mathbf{i}=\sum_{\pi\in\Pi(S(\mathbf{i}))}(-1)^{|\pi|-1}(|\pi|-1)!\,\, x_{0\cdots 0}^{k-|\pi|}\prod_{B\in \pi} x_{\mathbf{i}(B)},\,\,\, \text{for all} \,\,\, \mathbf{i}\in \prod_{i=1}^k
\{0,\ldots,r_i\}.
$$
The image of $\Segre(r_1,\ldots,r_k)$
lies in the subspace $\{y_\mathbf{i}=0\}_{|S({\bf i})|\geqslant 2}$. This can be shown by mimicking the proof of Theorem \ref{thm:lincrem} below, so we leave the proof to the reader.
\subsection{$\mathcal L$-cumulant Cremonas}
One of the advantages of working with cumulants is that the change of
coordinates is conveniently encoded by the cumulant generating
function \cite{pwz-2011-bincum}. However, to fully exploit the
combinatorics involved, we will generalize cumulants to situations
in which such a generating function is not known. As we will see,
$\mathcal L$-cumulants, introduced in \cite {pwz-2010-cumulants}, enjoy this property.
First we show how the homogeneous binary cumulant change of
coordinates generalizes. We replace the partition lattice $\Pi(I)$ by
a \emph{partially ordered set (poset)} $(P, <_P)$ (or simply $(P,<)$ if
there is no danger of confusion) with its associated \emph{M\"obius
function} $\mu_P$. The function $\mu_P\colon P\times P\rightarrow
\mathbb{Z}$ (or simply $\mu$) is recursively defined by
$\mu(\pi,\pi)=1$ for all $\pi \in P$, $\mu(\pi, \nu)=0$ if $\pi \not<
\nu$, and
$$
\mu(\pi,\nu)=-\sum_{\pi\leq\delta<\nu}\mu(\pi,\delta),\quad\text{for
all }\pi< \nu \text{ in } P.
$$
The two main features of this function that we will use in the rest of
this section are the \emph{M\"obius inversion formula} and the
\emph{product theorem}, which we now recall. Even though they hold in
a more general setting, we state them for finite posets, since this
will suffice for our purposes.
\begin{proposition}[\textbf{M\"obius inversion formula
}~{\cite[Proposition
3.7.1]{stanley2006enumerative}}] \label{pr:MobiusInversion}
Let $(P, <)$ be a finite poset and $f,g\colon P\to \mathbb{C}$. Then
\[
g(x)=\sum_{y\leq x} f(y)\quad \text{ for all } x\in P \qquad \text{
if and only if } \qquad f(x)=\sum_{y\leq x} g(y)\mu(y,x) \quad
\text{ for all } x\in P.
\]
\end{proposition}
\begin{theorem}[\textbf{Product theorem}~{\cite[Proposition
3.8.2]{stanley2006enumerative}}]
\label{thm:ProductTheorem}
Let $(P, <_P)$ and $(Q, <_Q)$ be finite posets and let
$(P\times Q, <)$ be their product, with order given coordinatewise,
i.e.\ $(p,q)\leq (p', q')$ if and only if $p\leq_P p'$ and
$q\leq_Q q'$. If $(p,q)\leq (p', q')$ in $P\times Q$, then
\[
\mu_{P\times Q}((p,q), (p',q'))=\mu_{P}(p,p')\,\mu_Q(q,q').
\]
\end{theorem}
\noindent
For further basic results concerning M\"{o}bius functions we refer the reader
to \cite[Chapter 3]{stanley2006enumerative}. Some of them will be
recalled later on in this section.
The set partitions of a given nonempty set form a poset, where the order $<$
corresponds to refinement, that is, $\pi<
\nu$ if $\pi$ refines $\nu$. To generalize cumulant Cremonas, we replace
$\Pi([n])$ by a subposet $\mathcal{L}$ containing the maximal
and minimal elements of $\Pi([n])$, i.e., the partitions
\[
\hat{0}=1\vert \ldots\vert n\,\,\, \text {and}\,\,\,
\hat{1}=[n].
\]
These two elements coincide if and only if $n=1$.
Let us fix such an $\mathcal{L}$. For each $I\subseteq [n]$, we
construct a subposet $\mathcal{L}(I)$ of $\Pi(I)$ by restricting each
partition in $\mathcal{L}$ to $I$. In particular,
$\mathcal{L}([n])=\mathcal{L}$. Each poset $\mathcal{L}(I)$ has an
associated M\"obius function. To simplify notation, we denote all of
them by $\mu$. Similarly, we denote the maximal and the minimal
element of each poset by $\hat{0}$ and $\hat{1}$, so $\hat{1}=I$ in
$\mathcal{L}(I)$. The identification will be clear from the context.
Given $\mathcal{L}$, we define a map $\psi_{\mathcal{L}}\colon
\mathbb{P}^{2^{n}-1}\dasharrow \mathbb{P}^{2^{n}-1}$ as
\begin{equation}\label{eq:cremonagen}
y_{I}= \begin{cases}
\sum_{\pi\in
\mathcal{L}(I)}\mu(\pi,\hat{1})\;x_{\emptyset}^{n-|\pi|}\prod_{B\in\pi}
x_{B}& \quad \text{if }I\neq \emptyset,\\
x_{\emptyset}^{n} & \quad \text{otherwise.}\\
\end{cases}
\end{equation}
Here, $B\in \pi$ means that $B$ is a block of the partition $\pi$. Note that
\[
y_i=x_\emptyset ^ {n-1} x_i, \,\,\, \text {for all}\,\,\, i\in [n],\,\,\, \text {and},\,\,\, y_{ij} = x_\emptyset ^ {n-2}(x_\emptyset x_{ij}-x_ix_j),
\]
do not depend on $\mathcal{L}$. Since $\hat{1}\in \mathcal{L}$, we
know that $I$ is the maximal element of $\mathcal{L}(I)$ for every
$I\subseteq [n]$. This implies that $\psi_{\mathcal{L}}$ is a triangular
Cremona transformation. It is defined over the open set
$\{x_{\emptyset}\neq 0\}$. We call such a map an \emph{$\mathcal
L$-cumulant Cremona}. Its fundamental locus is
$\{x_\emptyset^{n^2-1}=0\}$.
\begin{example} If
$\mathcal{L}=\Pi([n])$, the M\"obius function satisfies
$\mu(\pi,\hat{1})=(-1)^{|\pi|-1}(|\pi|-1)!$, so we recover the
cumulant change of coordinates in~\eqref{eq:cumul}.
At the other extreme, if $n>1$ and $\mathcal{L}=\{\hat{0},\hat{1}\}$, then \eqref {eq:cremonagen} becomes
\begin{equation*}\label{eq:cremonagenspec}
y_{I}= \begin{cases}
x_\emptyset^ {n-1} x_I-x_\emptyset^ {n-|I|} \prod_{i\in I} x_i
& \quad \text{if }|I|\geq 2,\\
x_\emptyset^ {n-1} x_I & \quad \text{if }|I|=1,\\
x_{\emptyset}^{n} & \quad \text{otherwise},\\
\end{cases}
\end{equation*}
which is the linearizing Cremona of $\Sigma_n$ arising, as in \eqref {eq:basiccremona}, from the affine parametrization of $\Sigma_n$ given by
\[
x_I= \prod_{i\in I} t_i, \,\,\, \text {for}\,\,\, \emptyset \neq I\subseteq [n]\,\,\, \text {and} \,\,\, (t_1,\ldots,t_n)\in \mathbb{C}^ n.
\]
\end{example}
\begin{example}[\textbf{Interval partitions of
{$[n]$}}]\label{ex:interval} Fix a positive integer $n$
and let $\mathcal{L}$ be the set of \emph{interval partitions} of
$[n]$, ordered by refinement. An interval partition of $[n]$ is
obtained by cutting the sequence $1,2,\ldots,n$ into
subsequences. For example, there are four interval partitions of
$[3]$, namely $123$, $1|23$, $12|3$ and
$1|2|3$.
The interval partitions form a poset
isomorphic to the Boolean lattice of a set of $n-1$ elements. In
particular, its M\"{o}bius function satisfies
$\mu(\pi,\hat{1})=(-1)^{|\pi|-1}$ (cf.~\cite[Example
3.8.3]{stanley2006enumerative}). If $n=3$, this gives the following
formulas for the map $\psi_{\mathcal{L}}$ from~\eqref{eq:cremonagen}
\[
\begin{array}{c}
y_{\emptyset}=x_{\emptyset}^3, \quad
y_i = x_{\emptyset}^2x_i \; (i=1,2,3), \quad
y_{12}=x_{\emptyset}^{2}x_{12}-x_{\emptyset}x_{1}x_{2}, \quad y_{13}=x_{\emptyset}^{2}x_{13}-x_{\emptyset}x_{1}x_{3}, \quad y_{23}=x_{\emptyset}^{2}x_{23}-x_{\emptyset}x_{2}x_{3},\\
y_{123}=x_{\emptyset}^{2}x_{123}-x_{\emptyset}x_{1}x_{23}-x_{\emptyset}x_{3}x_{12}+x_{1}x_{2}x_{3}.
\end{array}
\]
\end{example}
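The $n=3$ formulas of Example~\ref{ex:interval} can be tested on $\Sigma_3$ directly (a SymPy sketch; the names are ours):

```python
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
# Sigma_3 on the affine chart x_emptyset = 1: x_I = prod_{i in I} t_i
x1, x2, x3 = t1, t2, t3
x12, x13, x23, x123 = t1*t2, t1*t3, t2*t3, t1*t2*t3

# interval-partition L-cumulants for n = 3, from the displayed formulas
y12 = x12 - x1*x2
y13 = x13 - x1*x3
y23 = x23 - x2*x3
y123 = x123 - x1*x23 - x3*x12 + x1*x2*x3

# all cumulant coordinates with |I| >= 2 vanish on Sigma_3
assert all(sp.expand(v) == 0 for v in (y12, y13, y23, y123))
```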
The formula for the inverse of $\psi_{\mathcal{L}}$ over
$\{y_{\emptyset}\neq 0\}$ follows by the standard M\"{o}bius inversion
formula on each poset $\mathcal{L}(I)$. Let us show how to do this.
For every $\pi\in \Pi(I)$ we set
\begin{equation}\label{eq:x}
x_{\pi}:=\prod_{B\in \pi}x_B.\end{equation}
Given
$I\subseteq [n]$ and $\nu\in \mathcal{L}(I)$, we define
\begin{equation}\label{eq:ynu}
y_\nu:=\sum_{\substack{\pi\leq \nu\\ \pi\in \mathcal{L}(I)}}\mu(\pi,\nu)\, x_\pi.
\end{equation}
By
the M\"{o}bius inversion formula on $\mathcal{L}(I)$, we conclude that
\begin{equation}\label{eq:partialInverse}
x_\nu=\sum_{\substack{\pi\leq \nu\\\pi\in \mathcal{L}(I)}}y_\pi\qquad
\text{ for all } I\subseteq [n]\,\,\, \text {and}\,\,\, \nu\in \mathcal{L}(I).
\end{equation}
In particular
\begin{equation}\label{eq:invLcum}
x_I=\sum_{\pi\in \mathcal{L}(I)}y_\pi, \,\,\,\text{ for all } I\subseteq [n].
\end{equation}
The following lemma ensures that, for each $\nu \in \mathcal{L}(I)$,
$y_{\nu}$ is a polynomial in the variables $y_J$ with $J\subseteq
I$. It also shows that~\eqref{eq:invLcum} yields an explicit
formula for $\psi_{\mathcal{L}}^{-1}$.
\begin{lemma}\label{lm:recursiony_pi}
For each $I\subseteq [n]$ and each $\nu\in \mathcal{L}(I)$, the
variable $y_{\nu}$ is a polynomial in the $y_{J}$, where $J$ runs
over all subsets of the blocks of $\nu$.
\end{lemma}
\begin{proof}
We prove the result by induction on the subsets of $[n]$. If
$I=\emptyset$, there is nothing to prove since
$y_{\emptyset}=1$. Suppose that $I\supsetneq \emptyset$ and that the
result holds for all $J\subsetneq I$.
If $\nu$ is a one block partition, there is nothing to
prove. Assume that $\nu$ contains more than one
block. By~\eqref{eq:x},~\eqref{eq:ynu} and~\eqref{eq:partialInverse} we
obtain
\[
y_{\nu}=\sum_{\pi\leq \nu}\mu(\pi, \nu)\prod_{B\in \pi}(\sum_{\tau\in \mathcal{L}(B)}y_{\tau}).
\]
Since all $B$'s on the right-hand side are strictly
included in $I$, the result follows by induction.
\end{proof}
As it happens with the homogeneous cumulant change of
coordinates~\eqref{eq:cumul}, the map
$\psi_{\mathcal{L}}$ linearizes $\Sigma_n$:
\begin{theorem}\label{thm:lincrem}
For any choice of $\mathcal{L}$, the map $\psi_{\mathcal{L}}$
from~\eqref{eq:cremonagen} linearizes $\Sigma_n$. Its image is the
linear space $\Pi:=\{y_{I}=0\}_{I\subseteq [n], |I|\geq 2}$.
\end{theorem}
\begin{proof}
Denote by
$a_{{i}}=[a_{i0},a_{i1}]$ the coordinates of the $i$--th copy of
$\mathbb{P}^{1}$ in $(\mathbb{P}^{1})^{n}$. The Segre embedding $\sigma_n\colon
(\mathbb{P}^1)^n\to \mathbb{P}^{2^n-1}$
maps $a=(a_{1}, \ldots, a_{n})$ to the point in $\mathbb{P}^{2^n-1}$
whose $I$--th coordinate is
\[a_I=\prod_{i\in
I}a_{i1}\prod_{i\notin I}a_{i0},\,\,\,\text{for every}\,\,\, I\subseteq [n].\]
We compute $\psi_{\mathcal{L}} \circ
\sigma_n$ using~\eqref{eq:cremonagen}. For every $I \subseteq [n]$ and every partition $\pi \in
\mathcal{L}(I)$ we have
$$ a_{\emptyset}^{n-|\pi|}\prod_{B\in \pi}a_{B}=\prod_{i\in I} (a_{i0}^{n-1}a_{i1}) \prod_{i\notin I} a_{i0}^{n}$$
which does not depend on $\pi$. Therefore, the $I$--th coordinate of
$\psi_{\mathcal{L}}(\sigma_n(a))$ is
\begin{equation}\label{eq:lin}
b_{I}=\big(\prod_{i\notin I} a_{i0}^{n}\prod_{i\in I} (a_{i0}^{n-1}a_{i1})\big) \sum_{\pi\in \mathcal{L}(I)} \mu(\pi,\hat{1}).
\end{equation}
If $|\mathcal{L}(I)|\geq 2$, Lemma~\ref{lm:PosetIdentity} below,
applied to $P=\mathcal{L}(I)$, yields $\sum_{\pi\in \mathcal{L}(I)}
\mu(\pi,\hat{1})=0$. Combining this fact with~\eqref{eq:lin}, we
conclude that the image of $\Sigma_n$ via $\psi_{\mathcal{L}}$ is
contained in the linear space $\{y_I=0\}_{|\mathcal{L}(I)|\geq
2}$. Note that since $\hat{1}$ and $\hat{0}$ lie in $\mathcal{L}$,
the condition $| \mathcal{L}(I)|\geq 2$ is equivalent to $|I|\geq
2$. So this linear space is $\Pi$, and it has dimension
$n$. Moreover, $\Sigma_n$ is not contained in the fundamental locus
of $\psi_{\mathcal{L}}$, so the induced map $\psi_{\mathcal{L}}\vert_{\Sigma_n}\colon
\Sigma_n\dasharrow \Pi$ is birational.
\end{proof}
\begin{lemma}\label{lm:PosetIdentity} Let $(P, \leq)$ be a finite
poset of size at least two with unique maximal and minimal elements
$\hat{1}, \hat{0}$. Let $\mu$ be its M\"obius function. Then,
\[
\sum_{x\in P}
\mu(x,\hat{1})=0.
\]
\end{lemma}
\begin{proof}
Consider the dual poset $(P^*, {\leq}^*)$ obtained by reversing the order
in $(P, \leq)$. In particular, the roles of the minimal and maximal
elements are exchanged, namely $\hat{0}^*=\hat{1}$ and
$\hat{1}^*=\hat{0}$. The M\"obius function $\mu^*$ of $P^ *$ satisfies
$\mu^*(x,y)=\mu(y,x)$ for all $(x,y)\in P\times P$ (see \cite[page
120]{stanley2006enumerative}). Therefore
\[
\sum_{x\in P} \mu(x,\hat{1})=\mu(\hat{0}, \hat{1}) +\sum_{\hat{0}<x\leq \hat{1}}\mu(x, \hat{1})=
\mu^*(\hat{0}^*, \hat{1}^*)+ \sum_{\hat{0}^*{\leq^*}\, x{<^*}\,\hat{1}^*}
\mu^*(\hat{0}^*, x) =0,
\]
where the last equality follows from the recursive definition of
$\mu^*$.
\end{proof}
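The recursive definitions above can be checked on a small instance. The following sketch (a verification aid of ours, not part of the paper; it takes $\mathcal{L}(I)$ to be the full partition lattice of $I=\{1,2,3\}$) computes the M\"obius function from its recursive definition and confirms both the identity of Lemma~\ref{lm:PosetIdentity} and the inversion~\eqref{eq:partialInverse}.

```python
# Verification sketch (ours): the partition lattice of {1,2,3}, its
# Moebius function, the identity sum_x mu(x, 1-hat) = 0, and Moebius
# inversion between x_pi and y_nu for random sample values x_B.
import random

def set_partitions(elems):
    """All set partitions of a list, as lists of frozensets."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block | {first}] + smaller[i + 1:]
        yield [frozenset({first})] + smaller

parts = [frozenset(p) for p in set_partitions([1, 2, 3])]
bottom = frozenset({frozenset({1}), frozenset({2}), frozenset({3})})
top = frozenset({frozenset({1, 2, 3})})

def leq(p, q):
    """Refinement order: p <= q iff every block of p lies in a block of q."""
    return all(any(b <= c for c in q) for b in p)

mu_cache = {}
def mu(p, q):
    """Moebius function: mu(p, p) = 1, mu(p, q) = -sum_{p <= z < q} mu(p, z)."""
    if p == q:
        return 1
    if (p, q) not in mu_cache:
        mu_cache[(p, q)] = -sum(mu(p, z) for z in parts
                                if leq(p, z) and leq(z, q) and z != q)
    return mu_cache[(p, q)]

assert len(parts) == 5                        # Bell(3)
assert mu(bottom, top) == 2                   # (-1)^{n-1} (n-1)! for n = 3
assert sum(mu(p, top) for p in parts) == 0    # Lemma on the poset identity

# Moebius inversion: with random x_B, set x_pi = prod_{B in pi} x_B and
# y_nu = sum_{pi <= nu} mu(pi, nu) x_pi; then x_nu = sum_{pi <= nu} y_pi.
random.seed(0)
xB = {B: random.uniform(0.5, 1.5) for p in parts for B in p}
xpi = {p: 1.0 for p in parts}
for p in parts:
    for B in p:
        xpi[p] *= xB[B]
y = {nu: sum(mu(p, nu) * xpi[p] for p in parts if leq(p, nu)) for nu in parts}
for nu in parts:
    assert abs(xpi[nu] - sum(y[p] for p in parts if leq(p, nu))) < 1e-9
```

The same recursion computes $\mu$ on any finite poset, so the choice of the full partition lattice here is only for concreteness.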
\subsection{Secant cumulants}
As we mentioned earlier, one of the useful features of binary
cumulants is that the tangential variety of $\Sigma_n$, expressed in cumulants,
becomes toric. This is not the case, in general, for $\mathcal{L}$--cumulant
Cremonas.
However, with a careful choice of the defining poset $\mathcal{L}$
one may obtain other desired properties. For example, if
${\mathcal{L}}$ is the poset of interval partitions of $[n]$ defined
in Example~\ref{ex:interval}, the Cremona transformation
$\psi_{\mathcal{L}}$ is an involution.
The next example is
related to ${\rm Sec}(\Sigma_n)$ (see \cite{MOZ}, \cite[Section 3.3]{pwz-2010-cumulants}).
\begin{example}\label{ex:secant}
In what follows, we parametrize the secant variety
${\rm Sec}(\Sigma_n)$ inside $\mathbb{P}^{2^n-1}$ starting from the
parametrization of $\Sigma_n$: $$p^{(0)}_I=\prod_{i\in [n]\setminus
I}a_{i0}\prod_{i\in I}a_{i1},\quad p^{(1)}_I=\prod_{i\in
[n]\setminus I}b_{i0}\prod_{i\in I}b_{i1}\qquad\mbox{for all
}I\subseteq[n].$$
Denote by $\mathbf{A}$ the affine subspace given by $x_\emptyset=1$. The affine variety ${\rm Sec}(\Sigma_n)\cap \mathbf{A}$ is parametrized by
\begin{equation*}\label{eq:paramnclaw}
x_I\,\,=\,\,(1-s_1)\prod_{i\in I} a_{i1}+s_1\prod_{i\in I} b_{i1}.
\end{equation*}
Consider a sequence of two $\mathcal L$-cumulant transformations. The first one corresponds to the
lattice $\mathcal{L}_1$ of all \emph{one-cluster partitions} of $[n]$, i.e.\
partitions with at most one block of size greater than one. The second
one comes from the lattice $\mathcal{L}_2$ of interval partitions of
$[n]$. The first map $\psi_1\colon \mathbf{A}\rightarrow \mathbf{A}$ is
defined by
\begin{equation*}\label{eq:cremonaxtoy}
y_I=\sum_{A\subseteq I}(-1)^{|I\setminus A|}x_A\prod_{i\in I\setminus A}x_i, \quad\mbox{for }I\subseteq [n].
\end{equation*}
The second map $\psi_2\colon\mathbf{A}\rightarrow \mathbf{A}$ is given
by~\eqref{eq:cremonagen}, i.e.
\begin{equation*}\label{eq:cremonaytoz}
z_I=\sum_{\pi\in \mathcal{I}(I)}(-1)^{|\pi|-1}\prod_{B\in \pi}y_B,\quad\mbox{for }I\subseteq [n].
\end{equation*}
To see how this sequence of maps can be written as a single $\mathcal
L$-cumulant transformation we refer to \cite{pwz-2010-cumulants}. By \cite[Lemma 3.1]{MOZ}, for every $I\subseteq [n]$ such that $|I|\geq 2$, the result of $\psi_2\circ \psi_1$ applied to ${\rm Sec}(\Sigma_n)$ is
\begin{equation}
z_I\,\,\,=\,\,\,s_1(1-s_1)(1-2s_1)^{|I|-2}\prod_{i\in I} (b_{i1}-a_{i1}).\label{eq:6}
\end{equation}
Taking $d_i=(1-2s_1)(b_{i1}-a_{i1})$ for $i\in [n]$, and
$t=s_1(1-s_1)(1-2s_1)^{-2}$ in~\eqref{eq:6}, we conclude
that the secant variety, when expressed in cumulants, becomes locally
toric, with $z_I=t\prod_{i\in I}d_i$ for $|I|\geq 2$. \end{example}
\noindent
This simple local description of ${\rm Sec}(\Sigma_n)$ can be generalized to the secant variety of the Segre product of projective spaces of arbitrary dimensions. This gives the following result:
\begin{theorem}[see \cite{MOZ}] The secant variety ${\rm Sec}(\Segre
(r_1,\ldots,r_k))$ is covered by normal affine toric varieties. In
particular, it has rational singularities. \end{theorem} It turns
out that similar techniques can be applied to study the tangential
variety $T(\Segre (r_1,\ldots,r_k))$ (see \cite{MOZ} for details).
\noindent {\bf Acknowledgments:}{ We thank Mateusz Micha\l{}ek for his helpful comments on an early version of the paper. M. A. Cueto was partially supported
by an AXA Mittag-Leffler postdoctoral fellowship (Sweden), by an
Alexander von Humboldt Postdoctoral Research Fellowship and by an
NSF postdoctoral fellowship DMS-1103857 (USA). C. Ciliberto and
M. Mella have been partially supported by the Progetto PRIN
``Geometria sulle variet\`a algebriche'' MIUR. P. Zwiernik was
partially supported by an AXA Mittag-Leffler postdoctoral fellowship
(Sweden), by Jan Draisma's Vidi grant from the Netherlands
Organisation for Scientific Research (NWO) and by the European Union
Seventh Framework Programme (FP7/2007-2013) under grant agreement
PIOF-GA-2011-300975. This project started at the Institut
Mittag-Leffler during the Spring 2011 program on ``Algebraic
Geometry with a View Towards Applications.'' We thank IML for its
wonderful hospitality. }
\end{document}
\begin{document}
\centerline{Rev. Mex. F\'{\i}s. {\bf 51}(3) 316-319 (June 2005)}
\centerline{
\Large Riccati Nonhermiticity with Application to the Morse
Potential }
\centerline{
Octavio \textsc{Cornejo-P\'erez}, Rom\'an \textsc{L\'opez-Sandoval},
Haret C. \textsc{Rosu\footnote{E-mail: [email protected] $\qquad$
$\qquad$ quant-ph/0502074
4/2005}}
}
\begin{center}
Potosinian Institute of Science and Technology,\\
Apdo Postal 3-74 Tangamanga, 78231 San Luis Potos\'{\i}, Mexico\\
\end{center}
{\small
\noindent
A supersymmetric one-dimensional matrix procedure, similar to the known supersymmetric relationship between the Dirac and Schr\"odinger equations in particle
physics, is described at the general level. By this means we are able to introduce a nonhermitic Hamiltonian whose imaginary part is proportional to the solution of a Riccati
equation of the Witten type. The procedure is applied to the exactly solvable Morse potential, introducing in this way the corresponding nonhermitic Morse problem.
A possible application is to molecular diffraction in evanescent waves over nanostructured surfaces.
\noindent {\em Keywords}: Nonhermiticity; supersymmetry; Morse
potential.
\noindent
A supersymmetric one-dimensional matrix procedure, similar to relationships of the same type between the Dirac and Schr\"odinger equations that we recently used for
the classical harmonic oscillator, is presented concisely in general terms. The new aspect is the use of constant parameters by means of which the Hamiltonian
becomes nonhermitic, with an imaginary part proportional to the
solution of a Riccati equation of the Witten type, which is characteristic of the supersymmetric method. The proportionality factor contains the aforementioned parameters.
We apply this algebraic technique to the Morse anharmonic quantum oscillator, obtaining a nonhermitic form of it.
A possible application is to molecular diffraction in evanescent waves over nanostructured surfaces.
\noindent {\em Keywords}: Nonhermiticity; supersymmetry;
Morse potential
\noindent
PACS: 11.30.Pb
\markboth{
Cornejo-P\'erez, L\'opez-Sandoval, Rosu
}{
Morse potential with susy nonhermiticity
}
\section{Introduction}
\noindent
We have recently elaborated on an interesting way of introducing imaginary parts (nonhermiticities) in second order differential equations starting from
a Dirac-like matrix equation \cite{A1,A2}. The procedure is a complex extension of the known supersymmetric connection between the Dirac matrix equation and the Schr\"odinger equation \cite{cooper}.
A detailed discussion of the Dirac equation in the supersymmetric approach was provided by Cooper {\em et al.} in 1988,
who showed that the Dirac equation with a Lorentz scalar potential is associated with a susy pair of Schr\"odinger Hamiltonians.
In the supersymmetric approach one uses the fact that the Dirac potential, which we denote by $R$, is the solution of a Riccati equation whose free term is related to the potential function $U$ in the second order linear differential equations of the Schr\"odinger type.
Indeed, the one-dimensional Dirac equation can be written in the form
\begin{equation}\label{Dparticulas}
[\alpha p +\beta m +\beta R(x)] \psi (x) = E \psi (x)
\end{equation}
where $c=\hbar =1$, $p=-id/dx$, $m$ ($>0$) is the fermion mass, and $R(x)$ is a Lorentz scalar function representing the potential in which the relativistic particle moves. The wavefunction $\psi$ is a two-component spinor $\big(\begin{array}{c} \psi _1\\ \psi _2 \end{array}\big)$
and the $\alpha$ and $\beta$ matrices are the following Pauli matrices
$$
\sigma _y=
\left( \begin{array}{cc}
0 & - i \\
i & 0\end{array} \right ) \qquad
{\rm and} \qquad
\sigma _x=\left( \begin{array}{cc}
0 & 1\\
1 & 0 \end{array} \right )~,
$$
respectively.
Writing the matrix Dirac equation in coupled system form leads to
\begin{equation}\label{cs1}
\big[D_x+m+R\big]\psi _1=E\psi _2
\end{equation}
\begin{equation}\label{cs2}
\big[-D_x+m+R\big]\psi _2=E\psi _1~.
\end{equation}
By decoupling, one gets two Schr\"odinger equations for each spinor component, respectively
\begin{equation}\label{cs3}
H_i\psi _i\equiv\big[-D_{x}^{2}+U_i\big]\psi _i =\epsilon \psi _i~, \qquad \epsilon =E^2-m^2~,
\end{equation}
where the subindex $i=1, 2$, and
$$
U_i(x)=\left((m+R)^2-m^2\mp dR/dx\right)~.
$$
One can also write factorizing operators for Eqs.~(\ref{cs3})
\begin{equation}\label{cs4}
A^{\pm} =\pm D_x +m+R
\end{equation}
such that
\begin{equation}\label{cs5}
H_1=A^{-}A^{+}-m^2~, \quad H_2=A^{+}A^{-}-m^2~.
\end{equation}
However, we have employed a so-called complex extension of the method by which we mean that we considered the Dirac potential $R$ as a purely imaginary quantity implying that the Schr\"odinger potentials
$U_i$ are complex, and, as such, we deal with nonhermitic problems. We considered previously the cases of the classical harmonic oscillator and Friedmann-Robertson-Walker barotropic cosmologies, which correspond to the very specific situation in which the Dirac mass parameter that we denoted by ${\rm K}$ was treated as a free parameter {\em equal} to the Dirac eigenvalue parameter $E$. This is equivalent to Schr\"odinger equations at zero energy, $\epsilon =0$. On the other hand, it is interesting to see how the method works for negative energies, i.e., for a bound spectrum in quantum mechanics. In this paper, we first briefly describe the method and next apply it to the case of Morse potential obtaining a nonhermitic version of this exactly-solvable quantum problem.
\section{Complex extension with a single {\rm K} parameter}
We consider the slightly different Dirac-like equation with respect to Eq.~(\ref{Dparticulas})
\begin{equation} \label{HDM}
\hat{\cal D}_{{\rm K}}W\equiv [\sigma _y D_{x}+\sigma _x (iR +{\rm K})]W={\rm K}W~,
\end{equation}
where K is a (not necessarily positive) real constant. On the left-hand side of the equation, $\rm K$ stands as a mass parameter of the Dirac spinor, whereas
on the right-hand side it corresponds to the energy parameter. $R$ is an arbitrary solution of the Riccati equation of the Witten type \cite{w81}
\begin{equation}\label{ricric}
R'\pm R^2=u~,
\end{equation}
where $u$ is the real part of the nonhermitic potential in the Schr\"odinger equations we get.
Thus, we have an equation equivalent to a Dirac equation for a spinor $W=\left( \begin{array}{cc}
\phi _1\\
\phi _0\end{array} \right )\equiv\left( {\rm \begin{array}{cc}
{\rm w_f}\\
{\rm w_b}\end{array}} \right )
$
of mass $\rm K$ at the fixed energy $E={\rm K}$ but in a purely imaginary potential (optical lattices). This equation
can be written as the following system of coupled equations
\begin{equation}\label{D1}
iD_{x}\phi _1+(iR+{\rm K})\phi _1={\rm K}\phi _0
\end{equation}
\begin{equation}\label{D2}
-iD_{x}\phi _0+(iR+{\rm K})\phi _0={\rm K}\phi _1~.
\end{equation}
The decoupling of these two equations can be achieved by applying the operator in Eq.~(\ref{D2}) to Eq.~(\ref{D1}). For the fermionic spinor
component one gets
\begin{equation} \label{comp1}
D^{2}_{x}\phi_1-\Big[R^2-D_x R-i\,2{\rm K}R\Big] \phi_1=0
\end{equation}
whereas the bosonic component fulfills
\begin{equation} \label{comp2}
D^{2}_{x}\phi_0-\Big[R^2+D_x R-i\,2{\rm K}R\Big] \phi _0=0 ~.
\end{equation}
This is a very simple mathematical scheme for introducing a special type of nonhermiticity directly proportional to the Riccati solution.
The factorization operators can be written in this case in the form
\begin{equation}\label{comp3}
A^{\pm} =\pm i D_x +{\rm K}+iR
\end{equation}
which allows us to write the fermionic equation (\ref{comp1}) as $H_1\phi _1\equiv(A^{-}A^{+}-K^2)\phi _1=0$ and the bosonic equation (\ref{comp2}) as $H_2\phi _0\equiv(A^{+}A^{-}-K^2)\phi _0=0$.
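This factorization can be checked symbolically for an arbitrary Riccati solution $R(x)$. The following sketch (ours, not from the paper; it assumes SymPy) expands $A^{\mp}A^{\pm}-{\rm K}^{2}$ and compares the result with Eqs.~(\ref{comp1}) and~(\ref{comp2}).

```python
# Symbolic verification (ours) of the factorization with A^{+-} = +-i D_x + K + i R.
import sympy as sp

x, K = sp.symbols('x K')
R = sp.Function('R')(x)
phi = sp.Function('phi')(x)

def A(sign, f):
    """A^{+-} f = +- i f' + (K + i R) f."""
    return sign * sp.I * sp.diff(f, x) + (K + sp.I * R) * f

# H_1 phi = (A^- A^+ - K^2) phi should equal phi'' - (R^2 - R' - 2 i K R) phi
H1 = sp.expand(A(-1, A(+1, phi)) - K**2 * phi)
target1 = sp.diff(phi, x, 2) - (R**2 - sp.diff(R, x) - 2*sp.I*K*R) * phi
assert sp.simplify(H1 - sp.expand(target1)) == 0

# H_2 phi = (A^+ A^- - K^2) phi should equal phi'' - (R^2 + R' - 2 i K R) phi
H2 = sp.expand(A(+1, A(-1, phi)) - K**2 * phi)
target2 = sp.diff(phi, x, 2) - (R**2 + sp.diff(R, x) - 2*sp.I*K*R) * phi
assert sp.simplify(H2 - sp.expand(target2)) == 0
```

Both identities hold for generic $R$, which is why the nonhermitic term $2i{\rm K}R$ appears in the bracket of each decoupled equation with the same sign.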
\section{Complex extension with parameters {\rm K} and {\rm K'}.}
\noindent
A more general case in this scheme is to consider the following matrix Dirac-like equation
$$
\Bigg[\left( \begin{array}{cc}
0 & - i \\
i & 0\end{array} \right )D_{\rm x}+\left( \begin{array}{cc}
0 & 1\\
1 & 0 \end{array} \right )\left( \begin{array}{cc}
iR +{\rm K}& 0\\
0 & iR+{\rm K}\end{array} \right )\Bigg]\left( \begin{array}{cc}
{\rm w}_1\\
{\rm w}_2 \end{array} \right )=
$$
\begin{equation} \label{Dg}
\left( \begin{array}{cc}
{\rm K^{'}}& 0\\
0 &{\rm K^{'}}\end{array} \right )\left( \begin{array}{cc}
{\rm w_1}\\
{\rm w_2} \end{array} \right )~.
\end{equation}
The system of coupled first-order differential equations is now
\begin{eqnarray}
\Big[- i D_{\rm x}+ iR+{\rm K}\Big]{\rm w_2}={\rm K^{'}}{\rm w_1}\\
\Big[ i D_{\rm x}+ iR+{\rm K}\Big]{\rm w_1}={\rm K^{'}}{\rm w_2}
\end{eqnarray}
and the equivalent second-order differential equations
\begin{equation} \label{Schrgb}
{ D_{\rm x}}^{2}{\rm w} _{i}
+\Big[\pm D_{\rm x}R+2 i {\rm K} R+({\rm K^2-K^{'2}})
- R^2\Big]{\rm w} _{i}=0~,
\end{equation}
where the subindex $i=1,2$ refers to the fermionic and bosonic components, respectively.
Again, introducing the same factorization operators as for the single $\rm K$ case, i.e., $A^{\pm}=\pm iD_x+{\rm K} +iR$, one can write Eq.~(\ref{Schrgb}) in the Schr\"odinger-like form
\begin{equation}\label{schf1}
H_1{\rm w} _1\equiv(A^{-}A^{+}-K^2){\rm w} _1=({\rm K^{'2}-K^2}){\rm w}_1
\end{equation}
and
\begin{equation}\label{schf2}
H_2{\rm w} _2\equiv(A^{+}A^{-}-K^2){\rm w} _2=({\rm K^{'2}-K^2}){\rm w}_2~,
\end{equation}
for the fermionic and bosonic components, respectively. These forms are useful for quantum mechanical applications; see the next section for one of them.
\section{Application to the Morse potential}
This potential is frequently used in molecular physics in connection with the dissociation and vibrational spectra of diatomic molecules.
In this case, the Riccati solution is of the type
\begin{equation}\label{RicMorse}
R(x)=A-Be^{-ax}~.
\end{equation}
Therefore, the second-order fermionic differential equation will be
\begin{eqnarray}
D^{2}_{x}{\rm w}_{1}&+&
\left[-\left(\bar{B}\textrm{e}^{-2ax}-\bar{C}_{1}\textrm{e}^{-ax}\right)\right.
+ (K^{2}-K^{\prime 2}) - A^{2} \nonumber\\
&+& \left. 2iK(A-B\textrm{e}^{-ax}) \right]{\rm w}_{1}=0
\end{eqnarray}
where $\bar{B}=B^{2}$, and $\bar{C}_{1}=B(2A+a)$.
The solution is expressed as a superposition of Whittaker functions
\begin{equation}
{\rm w_{1}}=\alpha_{1}\textrm{e}^{ax/2}
M_{\kappa _1, \mu}
\left(\frac{2B}{a}\,\textrm{e}^{-ax}\right)
+\beta_{1}\textrm{e}^{ax/2}
W_{\kappa _1, \mu}
\left(\frac{2B}{a}\,\textrm{e}^{-ax}\right)
\end{equation}
where $\kappa _1= \frac{A}{2a}\left(2+\frac{a}{A}-i\frac{2K}{A}\right)$ and $\mu = \frac{A}{a}\left(\frac{K^{'2}-K^2}{A^2}-i\frac{2K}{A}\right)^{1/2}$.
The bosonic equation reads
\begin{eqnarray}
D^{2}_{x}{\rm w}_{2}&+&
\left[-\left(\bar{B}\textrm{e}^{-2ax}-\bar{C}_{2}\textrm{e}^{-ax}\right)\right.
+ (K^{2}-K^{\prime 2}) - A^{2} \nonumber\\
&+& \left. 2iK(A-B\textrm{e}^{-ax}) \right]{\rm w}_{2}=0
\end{eqnarray}
where $\bar{B}=B^{2}$, and $\bar{C}_{2}=B(2A-a)$.
The solution is a superposition of the following Whittaker functions
\begin{equation} \label{superposition}
{\rm w_{2}}=\alpha_{2}\textrm{e}^{ax/2}
M_{\kappa _2, \mu}
\left(\frac{2B}{a}\,\textrm{e}^{-ax}\right)
+\beta_{2}\textrm{e}^{ax/2}
W_{\kappa _2 , \mu}
\left(\frac{2B}{a}\,\textrm{e}^{-ax}\right)
\end{equation}
where $\kappa _2=\frac{A}{2a}\left(2-\frac{a}{A}-i\frac{2K}{A}\right)$ and the $\mu$ subindex is unchanged.
If we now place ourselves within the quantum mechanical (hermitic) Morse problem we should take $\beta _2=0$ and $K=0$ in order to achieve the exact correspondence with the bound spectrum problem and
eliminate the nonhermiticity.
Moreover, the following well-known connection with the associated Laguerre polynomials
\begin{equation} \label{Laguerre1}
M_{\frac{p}{2}+n+\frac{1}{2}, \frac{p}{2}}(y)= y^{\frac{p+1}{2}}e^{-y/2}L_{n}^{p}(y)~, \quad y=\frac{2B}{a}e^{-ax}
\end{equation}
can be used in our case with the following identifications
$$
\frac{p}{2}=\frac{K'}{a}~, \qquad
K'=(A-an)
$$
i.e.,
$$
p=2\left(\frac{A}{a}-n\right)~.
$$
Then we can write the solution of the hermitic bosonic problem in the well-known form
\begin{equation}\label{wn}
{\rm w} _{2,n}(y) =\alpha _2\left(\frac{2B}{a}\right)^{\frac{1}{2}}y^{\frac{A}{a}-n}e^{-y/2}L_{n}^{2(\frac{A}{a}-n)}(y)~.
\end{equation}
If we want to approach the nonhermitic problem we define by analogy with Eq.~(\ref{Laguerre1})
\begin{equation}\label{Laguerre2}
M_{\kappa _2,\mu}(y)= y^{\mu +\frac{1}{2}}e^{-y/2}L_{\kappa _2-\mu -\frac{1}{2}}^{2\mu}(y)~,
\end{equation}
where $\kappa _2$ and $\mu$ are the complex parameters mentioned before, and the symbol that previously denoted the associated Laguerre polynomial now represents a Laguerre-like function introduced
by definition through Eq.~(\ref{Laguerre2}).
The wavefunction of the nonhermitic problem can be written as follows
\begin{equation}\label{wnonherm}
{\rm w} _{2,nonherm}(y) =\alpha _2\left(\frac{2B}{a}\right)^{\frac{1}{2}}y^{\mu}e^{-y/2}L_{\kappa _2-\mu -\frac{1}{2}}^{2\mu}(y)~.
\end{equation}
In the case of the nonhermitic fermionic problem, the formulas are similar with the replacement of $\kappa _2$ by $\kappa _1$. Thus:
\begin{equation}\label{wnonherm1}
{\rm w} _{1,nonherm}(y) =\alpha _1\left(\frac{2B}{a}\right)^{\frac{1}{2}}y^{\mu}e^{-y/2}L_{\kappa _1-\mu -\frac{1}{2}}^{2\mu}(y)~.
\end{equation}
In conclusion, the supersymmetric connection between Dirac-like equations and Schr\"odinger equations, in a simple complex extension form, has been applied here in the quantum context
of the Morse potential. However, in the pure quantum case, the results of this note seem to be only of mathematical interest. A natural question is how different types of nonhermiticities can be engineered.
As we noticed in our previous research \cite{A1}, physical optics is closer to real applications. In particular, one can think of the diffraction of diatomic molecules in evanescent fields, because such fields have imaginary wavenumbers and we know that Schr\"odinger equations are similar to the
Helmholtz equation in the paraxial approximation.
A specific experimental setup could be very similar to that discussed recently
by L\'ev\^eque and collaborators \cite{lev02} in their study of diffractive scattering of cold atoms from an evanescent field, spatially modulated by an array of nanometric objects with high
index of refraction and subwavelength periodicity deposited on a glass surface. The evanescent wavefield is created by a totally internally reflected laser beam and is strongly modulated by the configuration
of the nanostructure. The calculations are not easy as one should tackle Helmholtz equations in complicated geometries. The task is to obtain the configuration of the nanostructure that is able to produce
the evanescent field corresponding to the nonhermitic part of our Morse problem. For the time being, this remains only a challenging experimental possibility.
\noindent
The four figures next have not been included in the RMF published
version.
\begin{figure}
\caption{The real part of the bosonic mode ${\rm w}_2$.}
\label{figmorse1}
\end{figure}
\begin{figure}
\caption{The imaginary part of the bosonic mode ${\rm w}_2$.}
\label{figmorse2}
\end{figure}
\begin{figure}
\caption{The real part of the fermionic mode ${\rm w}_1$.}
\label{figmorse3}
\end{figure}
\begin{figure}
\caption{The imaginary part of the fermionic mode ${\rm w}_1$.}
\label{figmorse4}
\end{figure}
\end{document}
\begin{document}
\title{ Riesz projection and essential $S$-spectrum in quaternionic setting}
\begin{abstract}
This paper is devoted to the investigation of the Weyl and the essential $S-$spectra of a bounded right quaternionic linear operator in a right quaternionic Hilbert space. Using the quaternionic Riesz projection, the $S-$eigenvalue of finite type is both introduced and studied. In particular, we show that the Weyl and the essential $S-$spectra do not contain eigenvalues of finite type. We also describe the boundary of the Weyl $S-$spectrum and the particular case of the spectral theorem for the essential $S-$spectrum.
\end{abstract}
\tableofcontents
\vskip 0.2 cm
\section{Introduction}
\noindent Over recent years, the spectral theory for quaternionic operators has attracted the attention of many researchers, see for instance \cite{SL,KRE,NCFCBO,FJDP,FJGanter,2021,FIF,FIDC,BK} and references therein. Research on this topic is motivated by applications in various fields, including quantum mechanics, fractional evolution problems \cite{FJGanter}, and quaternionic Schur analysis \cite{DFIS}. The concept of spectrum is one of the main objects of study in the theory of quaternionic operators acting on quaternionic Hilbert spaces. The obscurity that long surrounded the precise definition of the quaternionic spectrum of a linear operator was a stumbling block. However, in 2006, F. Colombo and I. Sabadini succeeded in devising a new notion conducive to the complete development of quaternionic operator theory, namely the $S$-spectrum. We refer to \cite[Subsection 1.2.1]{FJDP} for a precise history. Several years later, in 2016, D. Alpay, F. Colombo, and D.P. Kimsey, see \cite{DFDP}, established the spectral theorem based on the $S$-spectrum for both bounded and unbounded operators. In the book \cite{FJDP}, see also \cite{FJGanter}, the authors explain the concept of the $S$-spectrum and give the systematic basis of quaternionic spectral theory. We refer to the book \cite{DFIS} and the references therein for the spectral theory on the $S$-spectrum for Clifford operators, and to \cite{PF} for some results on operator perturbations.
\vskip 0.2 cm
Motivated by the new concept of $S-$spectrum in the quaternionic setting, Muraleetharan and Thirulogasanthar, in \cite{BK,BK2}, introduced the Weyl and essential $S$-spectra and gave a characterization using Fredholm operators. We refer to \cite{B1} for the study of the general framework of Fredholm elements with respect to a quaternionic Banach algebra homomorphism. In general, the set of all operators acting on a right Banach space is not a quaternionic Banach algebra with respect to composition of operators. By \cite[Theorem 7.1 and Theorem 7.3]{RVA}, if $V_{\mathbb{H}}^{R}$ is a separable quaternionic Hilbert space, then $\mathcal{B}(V_{\mathbb{H}}^{R})$ (the set of all right bounded operators) is a quaternionic two-sided $C^{*}-$algebra, and the set of all compact operators $\mathcal{K}(V_{\mathbb{H}}^{R})$ is a closed two-sided ideal of $\mathcal{B}(V_{\mathbb{H}}^{R})$ which is closed under adjunction. In this regard, in \cite{BK}, the authors defined the essential $S-$spectrum as the $S-$spectrum of the image of a bounded right linear operator under the quotient map onto the Calkin algebra $\mathcal{B}(V_{\mathbb{H}}^{R})/\mathcal{K}(V_{\mathbb{H}}^{R})$.
\vskip 0.2 cm
In order to explain the objective of this work, we start by recalling a few results concerning the discrete spectrum and the Riesz projection in the complex setting. Let $T$ be a linear operator acting on a complex Banach space $V_{\mathbb{C}}$. We denote the spectrum of $T$ by $\sigma(T)$. Let $\sigma$ be an isolated part of $\sigma(T)$. The Riesz projection of $T$ corresponding to $\sigma$ is the operator
\begin{align*}P_{\sigma}=\displaystyle\frac{1}{2\pi i}\int_{C_{\sigma}}(z-T)^{-1}dz\end{align*}
where $C_{\sigma}$ is a smooth closed curve in the resolvent set $\mathbb{C}\backslash\sigma(T)$ which surrounds $\sigma$ and separates $\sigma$ from $\sigma(T)\backslash\sigma$. The discrete spectrum of $T$, denoted $\sigma_{d}(T)$, is the set of isolated points $\lambda\in\mathbb{C}$ of $\sigma(T)$ whose corresponding Riesz projections $P_{\{\lambda\}}$ are finite-dimensional, see \cite{Gohberg,Lutgen}. Note that in general we have $\sigma_{e}(T)\subset\sigma(T)\backslash\sigma_{d}(T)$, where $\sigma_{e}(T)$ denotes the essential spectrum of $T$. We refer to \cite{BJ,J1,Wolf} for more properties of $\sigma_{e}(T)$. Note that, if $T$ is a self-adjoint operator on a Hilbert space, then $\sigma_{e}(T)=\sigma(T)\backslash\sigma_{d}(T)$. In particular, the essential spectrum is empty if and only if $\sigma(T)=\sigma_{d}(T)$. We point out that this point, namely the absence of the essential spectrum, has been studied in many works, e.g., \cite{Gol,Keller}.
\vskip 0.2 cm
In the quaternionic setting, if $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$ and $q\in\sigma_{S}(T)\backslash\mathbb{R}$ (where $\sigma_{S}(T)$ denote the $S-$spectrum of $T$), then $q$ is not an isolated point of $\sigma_{S}(T)$. Indeed, $[q]:=\{hqh^{-1}:\ h\in\mathbb{H}^{*}\}\subset \sigma_{S}(T)$, see \cite{FJDP}. By the compactness of $[q]$, $\{q\}$ is not an isolated part of $\sigma_{S}(T)$. However, if we set
\begin{align*}\Omega:=\sigma_{S}(T)/\sim\end{align*}
where $p\sim q$ if and only if $p\in [q]$, and denote by $E_{T}$ a set of representatives, then whenever $[q]$ is an isolated part of $\sigma_{S}(T)$, $q$ is an isolated point of $E_{T}$.
\vskip 0.2 cm
The first aim of this work is to study the isolated parts of the $S-$spectrum of a bounded right quaternionic operator and their relation with the essential $S$-spectrum. To begin with, we consider the Riesz projector associated with a given quaternionic operator $T$, which was introduced in \cite{FIDC1}. We refer to \cite{DFJI,KRE,PF,FJDP} for more details on this concept. We treat the decomposition of the essential $S-$spectrum of $T$ as a function of the Riesz projector, and more generally as a function of a projector which commutes with $T$, see Theorem \ref{t:r}. The technique of the proof is inspired by \cite{KRE}. We also discuss the Riesz decomposition theorem \cite[Theorem 6]{PSK} in the quaternionic setting. More precisely, we prove that this decomposition is unique. Motivated by this, we study the quaternionic version of the discrete $S-$spectrum. Following the complex formalism given in \cite{Gohberg, Lutgen}, we show that the essential $S-$spectrum of a given right operator acting on a right Hilbert space does not contain discrete elements of the $S-$spectrum. The second aim of this work is to give new results concerning the Weyl and essential $S$-spectra in the quaternionic setting. First, Theorem \ref{t:4} gives a description of the boundary of the Weyl $S-$spectrum. The proof is based on the study of the minimal modulus of right quaternionic operators. We also deal with the particular case of the spectral theorem for essential $S-$spectra. The technique of the proof is inspired by \cite{FIDC1}.
\vskip 0.1 cm
The article is organised as follows: In Section \ref{sec:1}, we present general definitions from operator theory in the quaternionic setting. In Section \ref{sec:2}, we discuss the question of the decomposition of the essential $S-$spectrum. Finally, in Section \ref{sec:3}, we provide new results on the Weyl $S-$spectrum.
\section{Mathematical preliminaries}\label{sec:1}
In this section, we review some basic notions about quaternions, right quaternionic Hilbert spaces, right linear operators (even unbounded ones) and their $S-$spectrum, and the slice functional calculus. For details, we refer the reader to \cite{SL,FIDC1,FJDP,RVA,WR}.
\subsection{Quaternions}
Let $\mathbb{H}$ be the Hamiltonian skew field of quaternions. Each quaternion can be written as:
\begin{align*}
q=q_{0}+q_{1}i+q_{2}j+q_{3}k,
\end{align*}
where $q_{l}\in \mathbb{R}$ for $l=0,1,2,3$ and $i$, $j$, $k$ are the three quaternionic imaginary units satisfying
\begin{align*}
i^{2}=j^{2}=k^{2}=ijk=-1.
\end{align*}
The real and the imaginary parts of $q$ are defined as ${\rm Re}(q)=q_{0}$ and ${\rm Im}(q)=q_{1}i+q_{2}j+q_{3}k$, respectively. Then, the conjugate and the usual norm of the quaternion $q$ are given respectively by
\begin{align*}
\overline{q}=q_{0}-q_{1}i-q_{2}j-q_{3}k \mbox{ and }|q|=\sqrt{q\overline{q}}.
\end{align*}
The set of all imaginary unit quaternions in $\mathbb{H}$ is denoted by $\mathbb{S}$ and defined as
\begin{align*}
\mathbb{S}=\Big\{q_{1}i+q_{2}j+q_{3}k:\ q_{1},\ q_{2},\ q_{3}\in\mathbb{R}, q_{1}^{2}+q_{2}^{2}+q_{3}^{2}=1 \Big\}.
\end{align*}
The name imaginary unit is due to the fact that, for any $I\in\mathbb{S}$, we have
\begin{align*}I^{2}=-\overline{I}I=-|I|^{2}=-1.\end{align*}
\noindent With every $q\in\mathbb{H}\backslash\mathbb{R}$, we associate the unique element
\begin{align*}I_{q}:=\frac{{\rm Im}(q)}{|{\rm Im}(q)|}\in\mathbb{S}\end{align*}
such that
\begin{align*}q={\rm Re}(q)+I_{q}|{\rm Im}(q)|.\end{align*}
This implies that
\begin{align*}\mathbb{H}=\bigcup_{I\in\mathbb{S}}\mathbb{C}_{I},\end{align*}
where
\begin{align*}\mathbb{C}_{I}:=\mathbb{R}+I\mathbb{R}.\end{align*}
With each $q\in\mathbb{H}$ we can associate the $2$-dimensional sphere
\begin{align*}[q]:=\{{\rm Re}(q)+I\vert {\rm Im}(q)\vert:\ I\in\mathbb{S}\}.\end{align*}
This sphere is centered at the real point ${\rm Re}(q)$ and has radius $\vert {\rm Im}(q)\vert$.
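As a concrete instance (our own illustration), take $q=1+i+j+k$. Then ${\rm Re}(q)=1$, ${\rm Im}(q)=i+j+k$ and $|{\rm Im}(q)|=\sqrt{3}$, so

```latex
\begin{align*}
I_{q}=\frac{i+j+k}{\sqrt{3}}\in\mathbb{S},\qquad
q=1+\sqrt{3}\,I_{q},\qquad
|q|=\sqrt{1+3}=2,
\end{align*}
```

and the associated sphere is $[q]=\{1+\sqrt{3}\,I:\ I\in\mathbb{S}\}$, centered at $1$ with radius $\sqrt{3}$.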
\subsection{Right quaternionic Hilbert space and operator}
In this subsection, we recall the notions of right quaternionic Hilbert space and right linear operator (see \cite{SL,NCFCBO,RVA}).
\begin{definition}\cite{SL}
{\rm Let $V_{\mathbb{H}}^{R}$ be a right vector space. The map
\begin{align*}\langle .,.\rangle :V_{\mathbb{H}}^{R}\times V_{\mathbb{H}}^{R}\longrightarrow \mathbb{H}\end{align*}
is called an inner product if it satisfies the following properties:\\
\noindent $(i)$ $\langle f,gq+h\rangle=\langle f,g\rangle q+\langle f,h\rangle$, for all $f,g,h\in V_{\mathbb{H}}^{R}$ and $q\in\mathbb{H}$.\\
$(ii)$ $\langle f,g\rangle=\overline{\langle g,f\rangle}$, for all $f,g\in V_{\mathbb{H}}^{R}$.\\
$(iii)$ If $f\in V_{\mathbb{H}}^{R}$, then $\langle f,f\rangle\geq 0$ and $f=0$ if $\langle f,f\rangle=0$.\\
The pair $(V_{\mathbb{H}}^{R},\langle .,.\rangle)$ is called a right quaternionic pre-Hilbert space. Moreover, $V_{\mathbb{H}}^{R}$ is said to be a right quaternionic Hilbert space if
\begin{align*}\|f\|=\sqrt{\langle f,f\rangle}\end{align*}
defines a norm for which $V_{\mathbb{H}}^{R}$ is complete.}
\end{definition}
\vskip 0.2 cm
In the sequel, we assume that $V_{\mathbb{H}}^{R}$ is complete and separable.
We now recall the concept of Hilbert basis in the quaternionic case. First, we review the following proposition, the proof of which is similar to its complex version, see \cite{RVA,KV}.
\begin{proposition}\label{p:2.2}
Let $V_{\mathbb{H}}^{R}$ be a right quaternionic Hilbert space and let $\mathcal{F}=\{f_{k}:\ k\in\mathbb{N}\}$ be an orthonormal subset of $V_{\mathbb{H}}^{R}$. The following properties are equivalent:
\begin{enumerate}
\item For every $f,g\in V_{\mathbb{H}}^{R}$, the series $\sum_{k\in\mathbb{N}}\langle f,f_{k}\rangle\langle f_{k},g\rangle$ converges absolutely and
\begin{align*} \langle f,g\rangle=\sum_{k\in\mathbb{N}}\langle f,f_{k}\rangle\langle f_{k},g\rangle.\end{align*}
\item For every $f\in V_{\mathbb{H}}^{R}$, we have
\begin{align*}\|f\|^{2}=\sum_{k\in\mathbb{N}}\vert \langle f_{k},f\rangle\vert^{2}.\end{align*}
\item $\mathcal{F}^{\bot}:=\Big\{f\in V_{\mathbb{H}}^{R}:\ \langle f,g \rangle=0\mbox{ for all }g\in\mathcal{F}\Big\}=\{0\}$.
\item $\langle \mathcal{F}\rangle:=\Big\{\displaystyle\sum_{l=1}^{m}f_{l}q_{l}:\ f_{l}\in\mathcal{F},\ q_{l}\in\mathbb{H},\ m\in\mathbb{N}\Big\}$ is dense in $V_{\mathbb{H}}^{R}$.
\end{enumerate}
\end{proposition}
\begin{definition}
{\rm Let $\mathcal{F}$ be an orthonormal subset of $V_{\mathbb{H}}^{R}$. Then $\mathcal{F}$ is said to be a Hilbert basis of $V_{\mathbb{H}}^{R}$ if it satisfies one of the equivalent conditions of Proposition \ref{p:2.2}.}
\end{definition}
\vskip 0.2 cm
The proof of the following proposition is the same as its complex version, see \cite{RVA,KV}.
\begin{proposition}
Let $V_{\mathbb{H}}^{R}$ be a right quaternionic Hilbert space. Then,
\begin{enumerate}
\item $V_{\mathbb{H}}^{R}$ admits a Hilbert basis.
\item Any two Hilbert bases of $V_{\mathbb{H}}^{R}$ have the same cardinality.
\item If $\mathcal{F}$ is a Hilbert basis of $V_{\mathbb{H}}^{R}$, then every $f\in V_{\mathbb{H}}^{R}$ can be uniquely decomposed as follows:
\begin{align*}f=\sum_{k\in\mathbb{N}}f_{k}\langle f_{k},f\rangle,\end{align*}
where the series $\sum_{k\in\mathbb{N}}f_{k}\langle f_{k},f\rangle$ converges absolutely in $V_{\mathbb{H}}^{R}$.
\end{enumerate}
\end{proposition}
\vskip 0.2 cm
Quaternionic multiplication is not commutative. Nevertheless, if $V_{\mathbb{H}}^{R}$ is a right separable quaternionic Hilbert space, one can define a left scalar multiplication on $V_{\mathbb{H}}^{R}$ using an arbitrary Hilbert basis of $V_{\mathbb{H}}^{R}$; we refer to \cite{RVA} for an explanation of this construction. Let $\mathcal{F}=\Big\{f_{k}:\ k\in\mathbb{N}\Big\}$ be a Hilbert basis of $V_{\mathbb{H}}^{R}$. The left scalar multiplication on $V_{\mathbb{H}}^{R}$ induced by $\mathcal{F}$ is defined as the map
\begin{align*}
&\mathbb{H}\times V_{\mathbb{H}}^{R}\longrightarrow V_{\mathbb{H}}^{R}
\\
&\ (q,f)\longmapsto\ qf=\displaystyle\sum_{k\in\mathbb{N}}f_{k}q\langle f_{k},f\rangle.
\end{align*}
\vskip 0.2 cm
The properties of the left scalar multiplication are described in the following proposition.
\begin{proposition}\cite[Proposition 3.1]{RVA} Let $f,g\in V_{\mathbb{H}}^{R}$ and $p,q\in\mathbb{H}$. Then,
\begin{enumerate}
\item $q(f+g)=qf+qg$ and $q(fp)=(qf)p$.
\item $\|qf\|=|q|\|f\|$.
\item $q(pf)=(qp)f$.
\item $\langle \overline{q}f,g\rangle=\langle f,qg\rangle$.
\item $rf=fr$, for all $r\in\mathbb{R}$.
\item $qf_{k}=f_{k}q$, for all $k\in\mathbb{N}$.
\end{enumerate}
\end{proposition}
It is easy to see that $(p+q)f=pf+qf$, for all $p,q\in\mathbb{H}$ and $f\in V_{\mathbb{H}}^{R}$. In the sequel, we consider $V_{\mathbb{H}}^{R}$ as a right quaternionic Hilbert space equipped with the left scalar multiplication.
\begin{definition}{\rm Let $V_{\mathbb{H}}^{R}$ be a right quaternionic Hilbert space. A mapping $T:\mathcal{D}(T)\subset V_{\mathbb{H}}^{R}\longrightarrow V_{\mathbb{H}}^{R}$, where $\mathcal{D}(T)$ denotes the domain of $T$, is called \emph{quaternionic right linear} if
\begin{align*}T(f+gq)=T(f)+T(g)q,\mbox{ for all }f,\ g\in\mathcal{D}(T) \mbox{ and }q\in\mathbb{H}.\end{align*}
The operator $T$ is called closed if its graph $\mathcal{G}(T):=\{(f,Tf):\ f\in\mathcal{D}(T)\}$ is a closed right linear subspace of $V_{\mathbb{H}}^{R}\times V_{\mathbb{H}}^{R}$.}
\end{definition}
\noindent We call a quaternionic right operator $T$ bounded if
\begin{align*}\|T\|:=\sup\Big\{\|Tf\|:\ f\in V_{\mathbb{H}}^{R},\ \|f\|=1\Big\}<+\infty.\end{align*}
\vskip 0.2 cm
The set of all bounded right operators on $V_{\mathbb{H}}^{R}$ is denoted by $\mathcal{B}(V_{\mathbb{H}}^{R})$, and the identity operator on $V_{\mathbb{H}}^{R}$ is denoted by $\mathbb{I}_{V_{\mathbb{H}}^{R}}$. For $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$, we denote the null space of $T$ by $N(T)$ and its range by $R(T)$. A closed subspace $M$ of $V_{\mathbb{H}}^{R}$ is said to be a $T$-invariant subspace if $T(M)\subset M$. Note that the function $f\longmapsto Tf-fq$ is not right linear; we refer to \cite{NCFCBO, FJDP} for this point of view. The fundamental idea of \cite{NCFCBO} is to define the spectrum using the Cauchy kernel series. We recall these concepts from the book \cite{FJDP}.
\begin{definition}{\rm Let $T:\mathcal{D}(T)\subset V_{\mathbb{H}}^{R}\longrightarrow V_{\mathbb{H}}^{R}$ be a right linear operator. We define the operator $Q_{q}(T):\mathcal{D}(T^{2})\longrightarrow V_{\mathbb{H}}^{R}$ by
\begin{align*}Q_{q}(T):=T^{2}-2{\rm Re}(q)T+|q|^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}}.\end{align*}
\noindent $1)$ The \emph{$S-$resolvent} set of $T$ is defined as follows:
\begin{align*}\rho_{S}(T):=\Big\{q\in\mathbb{H}:\ N(Q_{q}(T))=\{0\},\ \overline{R(Q_{q}(T))}=V_{\mathbb{H}}^{R}\mbox{ and }Q_{q}(T)^{-1}\in\mathcal{B}(V_{\mathbb{H}}^{R})\Big\}.\end{align*}
\noindent $2)$ The \emph{$S-$spectrum} of $T$ is defined as:
\begin{align*}\sigma_{S}(T)=\mathbb{H}\backslash\rho_{S}(T).\end{align*}
\noindent $3)$ The \emph{point $S-$spectrum} of $T$ is given by
\begin{align*}\sigma_{pS}(T):=\Big\{q\in\mathbb{H}:\ N(Q_{q}(T))\neq\{0\}\Big\}.\end{align*}}
\end{definition}
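A minimal illustration of these definitions (a standard computation, included here for the reader's convenience): for a real multiple of the identity, $T=\lambda\mathbb{I}_{V_{\mathbb{H}}^{R}}$ with $\lambda\in\mathbb{R}$, one has, for every $q\in\mathbb{H}$,

```latex
\begin{align*}
Q_{q}(T)=\big(\lambda^{2}-2{\rm Re}(q)\lambda+|q|^{2}\big)\mathbb{I}_{V_{\mathbb{H}}^{R}}
=|q-\lambda|^{2}\,\mathbb{I}_{V_{\mathbb{H}}^{R}},
\end{align*}
```

since $|q-\lambda|^{2}=|q|^{2}-2\lambda{\rm Re}(q)+\lambda^{2}$ for real $\lambda$. Hence $Q_{q}(T)$ is boundedly invertible exactly when $q\neq\lambda$, so that $\sigma_{S}(\lambda\mathbb{I}_{V_{\mathbb{H}}^{R}})=\{\lambda\}$, a sphere of radius $0$.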
\vskip 0.1 cm
For $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$, the $S$-spectrum $\sigma_{S}(T)$ is a non-empty compact set, see \cite{FJDP}. We recall that if $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$ and $q\in\sigma_{S}(T)$, then all the elements of the sphere $[q]$ belong to $\sigma_{S}(T)$, see \cite[Theorem 7.2.8]{DFIS}.
A vector $v\in V_{\mathbb{H}}^{R}\backslash\{0\}$ is a right eigenvector of $T$ if $T(v)$ is a right quaternionic multiple of $v$, that is,
\begin{align*}T(v)=vq\end{align*}
for some $q\in\mathbb{H}$, called a right eigenvalue of $T$. The set of right eigenvalues coincides with the point $S$-spectrum, see \cite[Proposition 4.5]{RVA}.
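The sphere-invariance of the $S$-spectrum is already visible at the level of right eigenvalues; the following one-line computation (included for illustration) uses only the right linearity of $T$: if $T(v)=vq$ and $h\in\mathbb{H}\backslash\{0\}$, then

```latex
\begin{align*}
T(vh)=T(v)h=vqh=(vh)\big(h^{-1}qh\big),
\end{align*}
```

so $vh$ is a right eigenvector associated with $h^{-1}qh$. Since $\{h^{-1}qh:\ h\in\mathbb{H}\backslash\{0\}\}=[q]$, the set of right eigenvalues is a union of $2$-spheres of the form $[q]$.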
\vskip 0.1 cm
Now, we recall the definition of the essential $S$-spectrum; we refer to \cite{BK,BK2} for more details. Let $V_{\mathbb{H}}^{R}$ be a separable right quaternionic Hilbert space equipped with a left scalar multiplication. By \cite{RVA}, $\mathcal{B}(V_{\mathbb{H}}^{R})$ is a quaternionic two-sided Banach $C^{*}$-algebra with unity, and the set of all compact operators $\mathcal{K}(V_{\mathbb{H}}^{R})$ is a closed two-sided ideal of $\mathcal{B}(V_{\mathbb{H}}^{R})$. We consider the natural quotient map:
\begin{align*}
&\pi: \mathcal{B}(V_{\mathbb{H}}^{R}) \longrightarrow \mathcal{C}(V_{\mathbb{H}}^{R}):=\mathcal{B}(V_{\mathbb{H}}^{R})/\mathcal{K}(V_{\mathbb{H}}^{R})
\\
&\ \ \quad \ \quad T \longmapsto\ \ [T]=T+\mathcal{K}(V_{\mathbb{H}}^{R}).
\end{align*}
Note that $\pi$ is a unital homomorphism, see \cite{BK}. The norm on $\mathcal{C}(V_{\mathbb{H}}^{R})$ is given by
\begin{align*}\|[T]\|=\inf_{K\in\mathcal{K}(V_{\mathbb{H}}^{R})}\|T+K\|.\end{align*}
\begin{definition}\cite{BK}
{\rm The essential $S$-spectrum of $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$ is the $S$-spectrum of $\pi(T)$ in the Calkin algebra $\mathcal{C}(V_{\mathbb{H}}^{R})$. That is,
\begin{align*}\sigma_{e}^{S}(T):=\sigma_{S}(\pi(T)).\end{align*}}
\end{definition}
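As a simple consequence of the definition (our own remark), every compact operator on an infinite-dimensional space has essential $S$-spectrum $\{0\}$: if $K\in\mathcal{K}(V_{\mathbb{H}}^{R})$, then $\pi(K)=0$ in the Calkin algebra, and

```latex
\begin{align*}
Q_{q}(\pi(K))=\pi(K)^{2}-2{\rm Re}(q)\pi(K)+|q|^{2}[\mathbb{I}_{V_{\mathbb{H}}^{R}}]
=|q|^{2}[\mathbb{I}_{V_{\mathbb{H}}^{R}}],
\end{align*}
```

which is invertible in $\mathcal{C}(V_{\mathbb{H}}^{R})$ if and only if $q\neq 0$. Hence $\sigma_{e}^{S}(K)=\{0\}$.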
\subsection{The quaternionic functional calculus}
The quaternionic functional calculus is defined on the class of slice regular functions $f:U\longrightarrow\mathbb{H}$ for suitable sets $U\subset \mathbb{H}$. We recall this concept and refer to \cite{PF,FIDC1,FJDP} and the references therein.
\begin{definition}
{\rm A set $U\subset\mathbb{H}$ is called\\
\noindent $(i)$ axially symmetric if $[x]\subset U$ for any $x\in U$ and\\
\noindent $(ii)$ a slice domain if $U$ is open, $U\cap\mathbb{R}\neq\emptyset$ and $U\cap\mathbb{C}_{I}$ is a domain in $\mathbb{C}_{I}$, for any $I\in\mathbb{S}$.}
\end{definition}
\begin{definition}{\rm Let $U\subset\mathbb{H}$ be an open set.
A real differentiable function $f:U\longrightarrow \mathbb{H}$ is said to be left $s$-regular $($resp. right $s$-regular$)$ if, for every $I\in\mathbb{S}$,
\begin{align*}\frac{1}{2}\Big[\frac{\partial}{\partial x}f(x+Iy)+I\frac{\partial}{\partial y}f(x+Iy)\Big]=0\quad
\Big(\text{resp. }\frac{1}{2}\Big[\frac{\partial}{\partial x}f(x+Iy)+\frac{\partial}{\partial y}f(x+Iy)I\Big]=0\Big).\end{align*}}
\end{definition}
\vskip 0.2 cm
We denote the class of left $s$-regular $($resp. right $s$-regular$)$ functions on $U$ by $\mathcal{R}^{L}(U)$ $($resp. $\mathcal{R}^{R}(U))$. We recall that $\mathcal{R}^{L}(U)$ is a right $\mathbb{H}$-module and $\mathcal{R}^{R}(U)$ is a left $\mathbb{H}$-module. Let $V_{\mathbb{H}}^{R}$ be a separable right quaternionic Hilbert space equipped with a Hilbert basis $\mathcal{N}$ and with a left scalar multiplication. We recall that $\mathcal{B}(V_{\mathbb{H}}^{R})$ is a two-sided quaternionic Banach algebra with respect to the multiplications given by
\begin{align*}(q.T)f=\displaystyle \sum_{g\in\mathcal{N}}gq\langle g,Tf\rangle\mbox{ and }(Tq)f=\displaystyle\sum_{g\in\mathcal{N}}T(g)q\langle g,f\rangle.\end{align*}
\vskip 0.2 cm
\begin{definition}{\rm Let $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$ and $q\in\rho_{S}(T)$. The \emph{left $S-$resolvent operator} is given by
\begin{align*}S_{L}^{-1}(q,T):=-Q_{q}(T)^{-1}(T-\overline{q}\mathbb{I}_{V_{\mathbb{H}}^{R}}),\end{align*}
and the \emph{right $S-$resolvent operator} is defined by
\begin{align*}S_{R}^{-1}(q,T):=-(T-\overline{q}\mathbb{I}_{V_{\mathbb{H}}^{R}})Q_{q}(T)^{-1}.\end{align*}}
\end{definition}
\begin{definition}\cite[Definition 3.4]{KRE}
{\rm Let $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$ and let $U\subset\mathbb{H}$ be an axially symmetric $s$-domain that contains the $S$-spectrum $\sigma_{S}(T)$ and such that $\partial(U\cap\mathbb{C}_{I})$ is the union of a finite number of continuously differentiable Jordan curves for every $I\in\mathbb{S}$. We then say that $U$ is a $T$-admissible open set.}
\end{definition}
\begin{definition}
{\rm Let $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$ and let $W\subset \mathbb{H}$ be an open set. A function $f\in \mathcal{R}^{L}(W)$ $($resp. $\mathcal{R}^{R}(W))$ is said to be a locally left regular $($resp. right regular$)$ function on $\sigma_{S}(T)$ if there is a $T$-admissible domain $U\subset\mathbb{H}$ such that $\overline{U}\subset W$.}
\end{definition}
\vskip 0.2 cm
We denote by $\mathcal{R}^{L}_{\sigma_{S}(T)}$ $($resp. $\mathcal{R}^{R}_{\sigma_{S}(T)})$ the set of locally left $($resp. right$)$ regular functions on $\sigma_{S}(T)$. We now recall the two versions of the quaternionic functional calculus.
\begin{definition}\cite[Definition 4.10.4]{FIDC1}
{\rm Let $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$ and $U\subset\mathbb{H}$ be a $T-$admissible domain. Then,
\begin{equation}\label{e:4}f(T)=\displaystyle\frac{1}{2\pi}\int_{\partial(U\cap\mathbb{C}_{I})}S_{L}^{-1}(q,T)dq_{I}f(q)\ \forall f\in \mathcal{R}^{L}_{\sigma_{S}(T)} \end{equation}
and
\begin{equation}\label{e:5}f(T)=\displaystyle\frac{1}{2\pi}\int_{\partial(U\cap\mathbb{C}_{I})}f(q)dq_{I}S_{R}^{-1}(q,T)\ \forall f\in \mathcal{R}^{R}_{\sigma_{S}(T)} \end{equation}
where $dq_{I}=-dqI$.}
\end{definition}
The two integrals that appear in $(\ref{e:4})$ and $(\ref{e:5})$ are independent of the choice of the imaginary unit $I\in\mathbb{S}$ and of the $T$-admissible domain, see \cite[Theorem 4.10.3]{FIDC1}.
\vskip 0.1 cm
A set $\sigma$ is called an isolated part of $\sigma_{S}(T)$ if both $\sigma$ and $\sigma_{S}(T)\setminus\sigma$ are closed subsets of $\sigma_{S}(T)$.
\begin{definition}\cite{A:C,PF}
{\rm Let $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$. Denote by $U_{\sigma}$ an axially symmetric $s-$domain that contains the axially symmetric isolated part $\sigma\subset\sigma_{S}(T)$ but not any other point of $\sigma_{S}(T)$. Suppose that the Jordan curves $\partial(U_{\sigma}\cap\mathbb{C}_{I})$ belong to the $S-$resolvent set $\rho_{S}(T)$, for any $I\in\mathbb{S}$. We define the quaternionic Riesz projection by
\begin{align*}P_{\sigma}=\displaystyle\frac{1}{2\pi}\int_{\partial(U_{\sigma}\cap\mathbb{C}_{I})}S_{L}^{-1}(q,T)dq_{I}.\end{align*}}
\end{definition}
\begin{remark}\cite{A:C,PF}
{\rm The projection $P_{\sigma}$ can equivalently be defined using the right $S$-resolvent operator $S_{R}^{-1}(q,T)$, that is,
\begin{align*}P_{\sigma}=\displaystyle\frac{1}{2\pi}\int_{\partial(U_{\sigma}\cap\mathbb{C}_{I})}dq_{I}S_{R}^{-1}(q,T).\end{align*}}
\end{remark}
\vskip 0.1 cm Note that $P_{\sigma}$ is a projection that commutes with $T$, see \cite[Theorem 2.8]{A:C}.
\vskip 0.1 cm
In the sequel, we assume that $V_{\mathbb{H}}^{R}$ is an infinite-dimensional separable right quaternionic Hilbert space.
\section{Riesz projection and essential spectrum}\label{sec:2}
We recall that in \cite{BK,BK2}, the study of the essential $S$-spectrum is carried out using the theory of Fredholm operators. The aim of this section is to show that the essential $S$-spectrum contains no discrete element of the $S$-spectrum. In this regard, let $V_{\mathbb{H}}^{R}$ be a separable right quaternionic Hilbert space. We note that if $K\in\mathcal{K}(V_{\mathbb{H}}^{R})$, then $\sigma_{e}^{S}(A+K)=\sigma_{e}^{S}(A)$ for all $A\in\mathcal{B}(V_{\mathbb{H}}^{R})$. We start by showing that, in general, the $S$-spectrum does not enjoy this stability property. We define $V_{\mathbb{H}}^{'}=\mathcal{B}(V_{\mathbb{H}}^{R},\mathbb{H})$ and call $V_{\mathbb{H}}^{'}$ the right dual space of $V_{\mathbb{H}}^{R}$.
\begin{theorem}\label{t:r}
Let $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$. Then, $\sigma_{S}(T+A)\subset\sigma_{S}(A)$ for all $A\in \mathcal{B}(V_{\mathbb{H}}^{R})$ if and only if $T\equiv0$.
\end{theorem}
\proof Assume that $T$ is a non-zero operator and that $\sigma_{S}(T+A)\subset\sigma_{S}(A)$ for all $A\in \mathcal{B}(V_{\mathbb{H}}^{R})$. Let $0_{V_{\mathbb{H}}^{R}}\neq x\in V_{\mathbb{H}}^{R}$ be such that
\begin{align*}Tx=y\neq 0_{V_{\mathbb{H}}^{R}}.\end{align*}
{\bf First step:} There exists $f\in V_{\mathbb{H}}^{'}$ such that
\begin{align*}f(x)=1\mbox{ and }f(y) \neq 0.\end{align*}
Indeed, if $x=yq$ for some $q\in\mathbb{H}$, the result follows from the Hahn--Banach theorem. Now assume that $x$ and $y$ are right linearly independent. Then, there exists a right basis $Z$ of $V_{\mathbb{H}}^{R}$ such that $\{x,y\}\subset Z$. We consider the following map
\begin{align*}f:u=\displaystyle\sum_{z\in Z}zq_{z}\longmapsto q_{x}+q_{y}.\end{align*}
It is clear that $f$ is right linear and $f(x)=1$ and $f(y)\neq 0$.
\vskip 0.2 cm
\noindent {\bf Second step:} Take
\begin{align*}A:=(x-y)\otimes f\end{align*}
the rank one operator on $V_{\mathbb{H}}^{R}$ given by
\begin{align*}u\longmapsto (x-y)f(u).\end{align*}
Since $Ax=(x-y)f(x)=x-y$, we have $(T+A)x=Tx+Ax=y+x-y=x$, and hence
\begin{align*}(T+A)^{2}x-2(T+A)x
&=(T+A)x-2x=-x.\end{align*}
So, $Q_{1}(T+A)x=0$ and $1\in\sigma_{S}(A+T)$. Since $A$ has rank one and $\dim V_{\mathbb{H}}^{R}>1$, we get $0\in\sigma_{S}(A)$. In the sequel, we assume that $x\neq y$. Let $q\in\sigma_{S}(A)$. By \cite[Lemma 4.2.3]{DFIS}, we have
\begin{align*}(f(x-y))^{2}-2{\rm Re}(q)f(x-y)+|q|^{2}=0\end{align*}
if and only if
\begin{align*}q\in\Big\{hf(x-y)h^{-1}:\ h\in \mathbb{H}\backslash\{0\}\Big\}.\end{align*}
This implies that $1\not\in\sigma_{S}(A)$. Indeed, if $1$ were an $S$-eigenvalue of $A$, there would exist $h\in\mathbb{H}\backslash\{0\}$ such that
\begin{align*}hf(x-y) h^{-1}=1,\end{align*}
\noindent and so $f(y)=0$, a contradiction. Hence $1\in\sigma_{S}(A+T)\backslash\sigma_{S}(A)$, contradicting our assumption.\qed
\begin{definition}
{\rm A bounded operator $S\in \mathcal{B}(V_{\mathbb{H}}^{R})$ is called a \emph{quasi-inverse} of the operator $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$ if there exists $K_{1},K_{2}\in\mathcal{K}(V_{\mathbb{H}}^{R})$ such that
\begin{align*}ST=\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{1}\mbox{ and }TS=\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{2}.\end{align*}}
\end{definition}
\begin{lemma}\label{l:1}
Let $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$ and $q\in \mathbb{H}\backslash\sigma_{e}^{S}(T)$. Let $R_{q}(T)$ be a quasi-inverse of $Q_{q}(T)$ and let $A$ be an operator that commutes with $T$. Then, there exists $K\in \mathcal{K}(V_{\mathbb{H}}^{R})$ such that
\begin{align*}AR_{q}(T)=R_{q}(T)A+K.\end{align*}
\end{lemma}
\proof Since $AT=TA$, we have $Q_{q}(T)A=AQ_{q}(T)$. Let $K_{1},\ K_{2}\in \mathcal{K}(V_{\mathbb{H}}^{R})$ be such that
\begin{align*}R_{q}(T)Q_{q}(T)=\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{1}\mbox{ and }Q_{q}(T)R_{q}(T)=\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{2}.\end{align*}
Then,
\begin{align*}AR_{q}(T)&=(R_{q}(T)Q_{q}(T)+K_{1})AR_{q}(T)\\
&=R_{q}(T)AQ_{q}(T)R_{q}(T)+K_{1}AR_{q}(T)\\
&=R_{q}(T)A+K,\end{align*}
where $K=K_{1}AR_{q}(T)-R_{q}(T)AK_{2}\in \mathcal{K}(V_{\mathbb{H}}^{R})$.\qed
\begin{theorem}\label{t:0}
Let $V_{\mathbb{H}}^{R}$ be a quaternionic Hilbert space and $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$. Let $P_{1}$ be a projector in $\mathcal{B}(V_{\mathbb{H}}^{R})$ commuting with $T$ and let $P_{2}=\mathbb{I}_{V_{\mathbb{H}}^{R}}-P_{1}$. Take $T_{j}:=TP_{j}=P_{j}T,\ j=1,2$. Then,
\begin{align*}\sigma_{e}^{S}(T)=\sigma_{e}^{S}(T_{1}|_{R(P_{1})})\cup\sigma_{e}^{S}(T_{2}|_{R(P_{2})}),\end{align*}
where $T_{i}|_{M}$ denotes the restriction of $T_{i}$ to $M$.
\end{theorem}
\proof Let $q\not\in \sigma_{e}^{S}(T)$. Then, there exist $A_{q}(T)\in \mathcal{B}(V_{\mathbb{H}}^{R})$ and $K_{1},K_{2}\in \mathcal{K}(V_{\mathbb{H}}^{R})$ such that
\begin{align*}A_{q}(T)Q_{q}(T)=\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{1}\mbox{ and } Q_{q}(T)A_{q}(T)=\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{2}.\end{align*}
Using Lemma \ref{l:1}, we infer that there exists $K_{3}\in \mathcal{K}(V_{\mathbb{H}}^{R})$ such that
\begin{align*}A_{q}(T)P_{1}=P_{1}A_{q}(T)+K_{3}\mbox{ and }A_{q}(T)P_{2}=P_{2}A_{q}(T)-K_{3}.\end{align*}
Therefore,
\begin{equation} \label{e:1}A_{q}(T)=P_{1}A_{q}(T)P_{1}+ P_{2}A_{q}(T)P_{2}+K_{4},\end{equation}
where $K_{4}=(\mathbb{I}_{V_{\mathbb{H}}^{R}}-2P_{1})K_{3}\in \mathcal{K}(V_{\mathbb{H}}^{R})$. Now, we consider the following identity
\begin{equation}\label{e:2}Q_{q}(T)=(T_{1}^{2}-2{\rm Re}(q)T_{1}+\vert q\vert^{2} P_{1})+(T_{2}^{2}-2{\rm Re}(q)T_{2}+\vert q\vert^{2} P_{2}).\end{equation}
Multiplying the identity $(\ref{e:2})$ by $A_{q}(T)$ on the left and on the right, we obtain
\begin{align*}\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{1}
&=(P_{1}A_{q}(T)P_{1}+ P_{2}A_{q}(T)P_{2}+K_{4})(T_{1}^{2}-2{\rm Re}(q)T_{1}+\vert q\vert^{2} P_{1})\\
&+(P_{1}A_{q}(T)P_{1}+ P_{2}A_{q}(T)P_{2}+K_{4})(T_{2}^{2}-2{\rm Re}(q)T_{2}+\vert q\vert^{2} P_{2})\end{align*}
and
\begin{align*}\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{2}
&=(T_{1}^{2}-2{\rm Re}(q)T_{1}+\vert q\vert^{2} P_{1})(P_{1}A_{q}(T)P_{1}+P_{2}A_{q}(T)P_{2}+K_{4})\\
&+(T_{2}^{2}-2{\rm Re}(q)T_{2}+\vert q\vert^{2}P_{2})(P_{1}A_{q}(T)P_{1}+P_{2}A_{q}(T)P_{2}+K_{4}).\end{align*}
This leads us to conclude that
\begin{align*}\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{1}
&=P_{1}A_{q}(T)P_{1}(T_{1}^{2}-2{\rm Re}(q)T_{1}+\vert q\vert^{2} P_{1})\\
&+P_{2}A_{q}(T)P_{2}(T_{2}^{2}-2{\rm Re}(q)T_{2}+\vert q\vert^{2} P_{2})+K_{5}\end{align*}
and
\begin{align*}\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{2}
&=(T_{1}^{2}-2{\rm Re}(q)T_{1}+\vert q\vert^{2} P_{1})P_{1}A_{q}(T)P_{1}\\
&+(T_{2}^{2}-2{\rm Re}(q)T_{2}+\vert q\vert^{2} P_{2})P_{2}A_{q}(T)P_{2}+K_{6},\end{align*}
where
\begin{align*}K_{5}=K_{4}Q_{q}(T)\in \mathcal{K}(V_{\mathbb{H}}^{R})\mbox{ and }K_{6}=Q_{q}(T)K_{4}\in\mathcal{K}(V_{\mathbb{H}}^{R}).\end{align*}
\noindent Take
\begin{align*}A_{q,j}(T)=P_{j}A_{q}(T)P_{j},\mbox{ for }j=1,2.\end{align*}
\noindent Then,
\begin{align*}
P_{1}(\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{1}-K_{5})P_{1}=A_{q,1}(T)(T_{1}^{2}-2{\rm Re}(q)T_{1}+\vert q\vert^{2} P_{1})
\end{align*}
and
\begin{align*}
P_{1}(\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{2}-K_{6})P_{1}=(T_{1}^{2}-2{\rm Re}(q)T_{1}+\vert q\vert^{2} P_{1})A_{q,1}(T).
\end{align*}
As a consequence, $A_{q,1}(T)|_{R(P_{1})}$ is a quasi-inverse of $(T_{1}^{2}-2{\rm Re}(q)T_{1}+\vert q\vert^{2} P_{1})|_{R(P_{1})}$. Similarly, $A_{q,2}(T)|_{R(P_{2})}$ is a quasi-inverse of $(T_{2}^{2}-2{\rm Re}(q)T_{2}+\vert q\vert^{2} P_{2})|_{R(P_{2})}$. Therefore, we conclude that \begin{align*}q\in\mathbb{H}\backslash(\sigma^{S}_{e}(T_{1}\mid_{R(P_{1})})\cup\sigma^{S}_{e}(T_{2}\mid_{R(P_{2})})).\end{align*}
\vskip 0.1 cm
Conversely, let $q\not\in \sigma_{e}^{S}(T_{1}|_{R(P_{1})})\cup\sigma_{e}^{S}(T_{2}|_{R(P_{2})})$. Then, there exist $A_{q,i}(T)\in\mathcal{B}(R(P_{i}))$ and $K_{j,i}\in\mathcal{K}(R(P_{i}))$, for $i,j=1,2$, such that
\begin{align*}A_{q,i}(T)Q_{q}(T_{i}\mid_{R(P_{i})})=\mathbb{I}_{R(P_{i})}-K_{1,i}\mbox{ and }Q_{q}(T_{i}\mid_{R(P_{i})})A_{q,i}(T)=\mathbb{I}_{R(P_{i})}-K_{2,i}.\end{align*}
Let us define the operator
\begin{align*}B_{q}(T):=P_{1}A_{q,1}(T)P_{1}+P_{2}A_{q,2}(T)P_{2}.\end{align*}
\noindent We get:
\begin{align*}B_{q}(T)Q_{q}(T)
&=P_{1}A_{q,1}(T)Q_{q}(T)P_{1}+P_{2}A_{q,2}(T)Q_{q}(T)P_{2}\\
&=P_{1}(\mathbb{I}_{R(P_{1})}-K_{1,1})P_{1}+P_{2}(\mathbb{I}_{R(P_{2})}-K_{1,2})P_{2}\\
&=P_{1}+P_{2}-P_{1}K_{1,1}P_{1}-P_{2}K_{1,2}P_{2}\\
&=\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{8},\end{align*}
where $K_{8}=P_{1}K_{1,1}P_{1}+P_{2}K_{1,2}P_{2}\in\mathcal{K}(V_{\mathbb{H}}^{R})$. By a similar argument, we obtain
\begin{align*}Q_{q}(T)B_{q}(T)=\mathbb{I}_{V_{\mathbb{H}}^{R}}-K_{9},\end{align*}
where $K_{9}=P_{1}K_{2,1}P_{1}+P_{2}K_{2,2}P_{2}\in\mathcal{K}(V_{\mathbb{H}}^{R})$. Therefore, $B_{q}(T)$ is a quasi-inverse of $Q_{q}(T)$. We deduce that $q\not\in\sigma_{e}^{S}(T)$.\qed
The result of \cite[Theorem 6]{PSK} can be reformulated as follows:
\begin{theorem}\label{t:1}
Let $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$ and let $\sigma$ be an isolated part of $\sigma_{S}(T)$. Put $V_{1,\mathbb{H}}^{R}=R(P_{\sigma})$ and $V_{2,\mathbb{H}}^{R}=N(P_{\sigma})$. Then,
$V_{\mathbb{H}}^{R}=V_{1,\mathbb{H}}^{R}\oplus V_{2,\mathbb{H}}^{R}$, the spaces $V_{1,\mathbb{H}}^{R}$ and $V_{2,\mathbb{H}}^{R}$ are $T-$invariant subspaces and
\begin{equation}\label{e:3}\sigma_{S}(T|_{V_{1,\mathbb{H}}^{R}})=\sigma\mbox{ and }\sigma_{S}(T|_{V_{2,\mathbb{H}}^{R}})=\sigma_{S}(T)\backslash \sigma.\end{equation}
\end{theorem}
\vskip 0.2 cm
In the next result, we discuss the uniqueness of the decomposition $(\ref{e:3})$.
\begin{theorem}\label{p:1}Let $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$ and $P$ be a projection in $\mathcal{B}(V_{\mathbb{H}}^{R})$ such that $TP=PT$. Set
\begin{align*}T_{1}:=TP\vert_{R(P)}\mbox{ and } T_{2}:=T(\mathbb{I}_{V_{\mathbb{H}}^{R}}-P)\vert_{N(P)}.\end{align*}
\noindent If ${\rm dist}(\sigma_{S}(T_{1}),\sigma_{S}(T_{2}))>0$, then
\begin{align*}R(P)=R(P_{\sigma_{S}(T_{1})})\mbox{ and }N(P)=N(P_{\sigma_{S}(T_{1})}).\end{align*}
\end{theorem}
\proof First and foremost, by \cite[Theorem 4.4]{KRE} we have
\begin{align*}\sigma_{S}(T)=\sigma_{S}(T_{1})\cup\sigma_{S}(T_{2}).\end{align*}
\noindent Take $\sigma_{1}=\sigma_{S}(T_{1})$ and $\sigma_{2}=\sigma_{S}(T_{2})$, and assume that ${\rm dist}(\sigma_{1},\sigma_{2})>0$. According to the proof of \cite[Theorem 6]{PSK}, there exists a pair of non-empty disjoint axially symmetric domains $(U_{\sigma_{1}},U_{\sigma_{2}})$ such that
\begin{align*}\sigma_{j}\subset U_{\sigma_{j}},\ \overline{U}_{\sigma_{1}}\cap\overline{U}_{\sigma_{2}}=\emptyset\end{align*}
and the boundary $\partial(U_{\sigma_{j}}\cap\mathbb{C}_{I})$ is the union of a finite number of continuously differentiable Jordan curves for $j=1,2$ and for all $I\in\mathbb{S}$. In this way, we see that $U_{\sigma_{j}}$ is a $T_{j}$-admissible open set for $j=1,2$. So, by the quaternionic functional calculus, we deduce that
\begin{align*}\displaystyle\frac{1}{2\pi}\int_{\partial(U_{\sigma_{j}}\cap\mathbb{C}_{I})}S_{L}^{-1}(s,T_{j})ds_{I}=\mathbb{I}_{V_{j,\mathbb{H}}^{R}} \mbox{ for }j=1,2\end{align*}
and
\begin{align*}\displaystyle\frac{1}{2\pi}\int_{\partial(U_{\sigma_{i}}\cap\mathbb{C}_{I})}S_{L}^{-1}(s,T_{j})ds_{I}=0 \mbox{ for }i\neq j\end{align*}
where
\begin{align*}V_{1,\mathbb{H}}^{R}:=R(P)\mbox{ and }V_{2,\mathbb{H}}^{R}:=N(P).\end{align*}
Define the operators:
\begin{align*}R_{1}=\left[
\begin{array}{cc}
\mathbb{I}_{R(P)}& 0 \\
0 & 0 \\
\end{array}
\right] \mbox{ and } R_{2}=\left[
\begin{array}{cc}
0& 0 \\
0 & \mathbb{I}_{N(P)} \\
\end{array}
\right].\end{align*}
We have
\begin{align*}P_{\sigma_{1}}
&=\displaystyle\frac{1}{2\pi}\int_{\partial(U_{\sigma_{1}}\cap\mathbb{C}_{I})}S_{L}^{-1}(s,T)ds_{I}\\
&=\displaystyle\frac{1}{2\pi}\int_{\partial(U_{\sigma_{1}}\cap\mathbb{C}_{I})}ds_{I}S_{R}^{-1}(s,T)\\
&=\displaystyle\frac{1}{2\pi}\int_{\partial(U_{\sigma_{1}}\cap\mathbb{C}_{I})}ds_{I}S_{R}^{-1}(s,T)P+
\displaystyle\frac{1}{2\pi}\int_{\partial(U_{\sigma_{1}}\cap\mathbb{C}_{I})}ds_{I}S_{R}^{-1}(s,T)(\mathbb{I}_{V_{\mathbb{H}}^{R}}-P)\\
&=\displaystyle\frac{1}{2\pi}\int_{\partial(U_{\sigma_{1}}\cap\mathbb{C}_{I})}ds_{I}S_{R}^{-1}(s,T_{1})P+
\displaystyle\frac{1}{2\pi}\int_{\partial(U_{\sigma_{1}}\cap\mathbb{C}_{I})}ds_{I}S_{R}^{-1}(s,T_{2})(\mathbb{I}_{V_{\mathbb{H}}^{R}}-P)\\
&=\displaystyle\frac{1}{2\pi}\int_{\partial(U_{\sigma_{1}}\cap\mathbb{C}_{I})}ds_{I}S_{R}^{-1}(s,T_{1})P\\
&=P_{\sigma_{1}}P\\
&=(\mathbb{I}_{V_{\mathbb{H}}^{R}}-P_{\sigma_{2}})P\\
&=R_{1}.\end{align*}
We conclude that
\begin{align*}R(P)=R(P_{\sigma_{1}}).\end{align*}
\noindent By similar arguments, we obtain
\begin{align*}P_{\sigma_{2}}=R_{2}\end{align*}
and so, $N(P)=R(P_{\sigma_{2}})=N(P_{\sigma_{1}})$.\qed
\vskip 0.3 cm
We now analyze the Riesz projection associated with isolated spheres. To start, we give the following definition:
\begin{definition}
{\rm Let $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$. A point $q\in\sigma_{S}(T)$ is called an eigenvalue of finite type if $V_{\mathbb{H}}^{R}$ is a direct sum of $T$-invariant subspaces $V_{1,\mathbb{H}}^{R}$
and $V_{2,\mathbb{H}}^{R}$ such that
\vskip 0.1 cm
$(H1)$ $\dim(V_{1,\mathbb{H}}^{R})<\infty$,
\vskip 0.1 cm
$(H2)$ $\sigma_{S}(T\vert_{V_{1,\mathbb{H}}^{R}})\cap \sigma_{S}(T\vert_{V_{2,\mathbb{H}}^{R}})=\emptyset$,
\vskip 0.1 cm
$(H3)$ $\sigma_{S}(T\vert_{V_{1,\mathbb{H}}^{R}})=[q]$.}
\end{definition}
\begin{remark}~~\\
\begin{enumerate}
{\rm \item In complex spectral theory, for a continuous linear operator $T$ on a complex Hilbert space $V_{\mathbb{C}}$, the condition $(H3)$ is replaced by $\sigma(T\vert_{V_{1,\mathbb{C}}})=\{q\}$ $($where $V_{\mathbb{C}}$ is a direct sum of $T$-invariant subspaces $V_{1,\mathbb{C}}$ and $V_{2,\mathbb{C}})$, see \cite{Gohberg}. In the quaternionic setting, we must take the whole $2$-sphere $[q]$, because if $q\in \sigma_{S}(T\vert_{V_{1,\mathbb{H}}^{R}})$, then $[q]\subset \sigma_{S}(T\vert_{V_{1,\mathbb{H}}^{R}})$.
\item Let $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$. If $q\in\sigma_{S}(T)\backslash\mathbb{R}$, then $q$ is not an isolated point of $\sigma_{S}(T)$. Take
\begin{align*}\Omega:=\sigma_{S}(T)/\sim\end{align*}
where $p\sim q$ if and only if $p\in [q]$. Let $E_{T}$ denote the set of representatives of $\Omega$.
\item By \cite[Proposition 4.44 and Theorem 4.47]{KM}, if $\dim(V_{\mathbb{H}}^{R})<\infty$, then the $S$-spectrum of $T$ consists of right eigenvalues only and $\# E_{T}<\infty$. In particular, if $T$ satisfies the assumptions $(H1)$ and $(H3)$, then $q\in\sigma_{pS}(T)$.}
\end{enumerate}
\end{remark}
\begin{lemma}
Let $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$. Let $E_{T}$ denote the set of representatives as above. Then, $q$ is an isolated point of $E_{T}$ if and only if $[q]$ is an isolated part of $\sigma_{S}(T)$.
\end{lemma}
\proof Assume that $[q]$ is an isolated part of $\sigma_{S}(T)$ and let $U_{q}\subset\mathbb{H}$ be an open set such that
\begin{align*}[q]=\sigma_{S}(T)\cap U_{q}.\end{align*}
If $p\in E_{T}\cap U_{q}$ and $p\neq q$, then $p\not\in [q]$, while $p\in\sigma_{S}(T)\cap U_{q}=[q]$, a contradiction. This implies that
\begin{align*}E_{T}\cap U_{q}=\{q\}.\end{align*}
Conversely, if $q$ is an isolated point of $E_{T}$, then $[q]$ is an open subset of $\sigma_{S}(T)$. Since $[q]$ is also compact, it is an isolated part of $\sigma_{S}(T)$.\qed
\vskip 0.3 cm
Let $V_{\mathbb{C}}$ be a complex Hilbert space and let $T$ be a continuous linear operator acting on $V_{\mathbb{C}}$. It follows from \cite[Theorem 1.1]{Gohberg} that if $q\in\sigma(T)$ $($where $\sigma(T)$ denotes the complex spectrum of $T)$, then $q$ is an eigenvalue of finite type if and only if $\dim R(P_{\{q\}})<\infty$, where $P_{\{q\}}$ is the Riesz projection corresponding to the isolated point $q$. In the next theorem, we show that the same is true for right eigenvalues of finite type in the quaternionic setting.
\begin{theorem}\label{t:2}
Let $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$ and $q\in\sigma_{pS}(T)$. Then, $q$ is a right eigenvalue of finite type if and only if $q$ is an isolated point of $E_{T}$ and $\dim R(P_{[q]})<\infty$.
\end{theorem}
\proof
Assume that $q$ is a right eigenvalue of finite type and consider the direct sum
\begin{align*}V_{\mathbb{H}}^{R}=V_{1,\mathbb{H}}^{R}\oplus V_{2,\mathbb{H}}^{R}\end{align*}
with the properties $(H1)-(H3)$. By Theorem \ref{p:1}, this decomposition is unique and so
\begin{align*}V_{1,\mathbb{H}}^{R}= R(P_{[q]}).\end{align*}
The converse follows from Theorem \ref{t:1}.\qed
\begin{corollary}
Let $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$. Then,
\begin{align*}\sigma_{e}^{S}(T)\subset \sigma_{S}(T)\backslash\sigma_{d}^{S}(T)\end{align*}
where $\sigma_{d}^{S}(T)$ denotes the set of all right eigenvalues of finite type.
\end{corollary}
\proof Assume that $q$ is a right eigenvalue of finite type. Then, $[q]$ is an isolated part of $\sigma_{S}(T)$ and $\dim R(P_{[q]})<\infty$. Therefore,
\begin{align*}\dim R(TP_{[q]})<\infty\mbox{ and so }\sigma_{e}^{S}(T\vert_{R(P_{[q]})})=\emptyset.\end{align*}
Using Theorem \ref{t:0}, we infer that
\begin{align*}\sigma_{e}^{S}(T)=\sigma_{e}^{S}(T\vert_{R(\mathbb{I}_{V_{\mathbb{H}}^{R}}-P_{[q]})}).\end{align*}
On the other hand,
\begin{align*}\sigma_{e}^{S}(T\vert_{R(\mathbb{I}_{V_{\mathbb{H}}^{R}}-P_{[q]})})\subset \sigma_{S}(T\vert_{R(\mathbb{I}_{V_{\mathbb{H}}^{R}}-P_{[q]})})=\sigma_{S}(T)\backslash [q].\end{align*}
This leads us to conclude that $q\not\in\sigma_{e}^{S}(T)$.\qed
\vskip 0.3 cm
\begin{example}
{\rm We consider the right quaternionic Hilbert space:
\begin{align*}\ell^{2}_{\mathbb{H}}(\mathbb{Z}):=\Big\{x:\mathbb{Z}\longrightarrow\mathbb{H}\mbox{ such that } \|x\|^{2}:=\sum_{i\in\mathbb{Z}}|x_{i}|^{2}<\infty\Big\}\end{align*}
with the right scalar multiplication
\begin{align*}xa=(x_{i}a)_{i\in\mathbb{Z}}\end{align*}
for $x=(x_{i})_{i\in\mathbb{Z}}$ and $a\in\mathbb{H}$. The associated scalar product is given by
\begin{align*}\langle x,y\rangle:=\langle x,y\rangle_{\ell^{2}_{\mathbb{H}}(\mathbb{Z})}:=\sum_{i\in\mathbb{Z}}\overline{x_{i}}y_{i}.\end{align*}
The right shift is the map:
\begin{align*}
T:\ &\ell_{\mathbb{H}}^{2}(\mathbb{Z})\longrightarrow \ell_{\mathbb{H}}^{2}(\mathbb{Z})
\\
& x\longmapsto\ y=(y_{i})_{i\in\mathbb{Z}}
\end{align*}
where $y_{i}=x_{i+1}$ if $i\neq -1$ and $0$ if $i=-1$. We have
\begin{align*}\|T(x)\|^{2}=\sum_{i\neq -1}|x_{i}|^{2}\leq \|x\|^{2}.\end{align*}
The $S-$spectrum of $T$ was studied in \cite{B1,BK4}. In particular, we have
\begin{align*}\sigma_{S}(T)=\sigma_{pS}(T)=\nabla_{\mathbb{H}}(0,1),\end{align*}
where $\nabla_{\mathbb{H}}(0,1)$ is the closed quaternionic unit ball. By Theorem \ref{t:2}, none of these $S$-eigenvalues is of finite type, since $\sigma_{S}(T)$ has no isolated parts.}
\end{example}
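Since $Q_{q}(T)=T^{2}-2{\rm Re}(q)T+|q|^{2}\mathbb{I}$ depends on $q$ only through ${\rm Re}(q)$ and $|q|$, it is constant on the similarity class $[q]$; this axial symmetry can be checked numerically in the standard $2\times 2$ complex matrix model of $\mathbb{H}$ (an illustrative sketch, not part of the proofs):

```python
import numpy as np

def quat(a, b, c, d):
    """a + b i + c j + d k as a 2 x 2 complex matrix."""
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

def re_part(Qm):     # Re(q) = half the real trace
    return Qm.trace().real / 2

def modulus2(Qm):    # |q|^2 = determinant
    return np.linalg.det(Qm).real

q = quat(1.0, 2.0, -0.5, 3.0)
h = quat(0.3, -1.0, 2.0, 0.7)      # any nonzero quaternion
sim = h @ q @ np.linalg.inv(h)     # a point of the similarity class [q]

assert np.isclose(re_part(sim), re_part(q))
assert np.isclose(modulus2(sim), modulus2(q))
```

Both invariants agree on the whole class $[q]$, which is why $Q_{q}(T)$, and hence the $S$-spectrum, is axially symmetric.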
We recall:
\begin{lemma}\label{l:2}\cite[Corollary 2.22]{GN0}
Let $V_{\mathbb{H}}^{R}$ be a quaternionic right vector space and $E$ be a right linearly independent subset of $V_{\mathbb{H}}^{R}$. Then, there exists a right basis $B$ of $V_{\mathbb{H}}^{R}$ such that $E\subset B$. In particular, every quaternionic right vector space has a right basis.
\end{lemma}
\begin{proposition}\label{p:2}\cite[Proposition 4.44]{KM}
Let $V_{\mathbb{H}}^{R}$ be a quaternionic Hilbert space and $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$. If $q_{1},\ldots,q_{n}\in\mathbb{H}$ are right eigenvalues of $T$ such that $[q_{i}]\neq[q_{j}]$, $\forall 1\leq i<j\leq n\ (n\geq 2)$, and $Q_{q_{j}}(T)x_{j}=0,\ 0\neq x_{j}\in V_{\mathbb{H}}^{R},\forall 1\leq j \leq n$, then $x_{1},x_{2},\ldots,x_{n}$ are right-linearly independent in $V_{\mathbb{H}}^{R}$.
\end{proposition}
\vskip 0.1 cm
We have the following lemma:
\begin{lemma}\label{l:3}
Assume that $\dim(V_{\mathbb{H}}^{R})<\infty$ and let $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$. Then, $\# E_{T}<\infty$. In this case, writing \begin{align*}E_{T}=\Big\{q_{1},q_{2},\ldots,q_{n}\Big\},\end{align*}
the space $V_{\mathbb{H}}^{R}$ is a direct sum of $T$-invariant right subspaces $V_{1,\mathbb{H}}^{R},\ V_{2,\mathbb{H}}^{R},\ \ldots,\ V_{n,\mathbb{H}}^{R}$. Moreover, if $T_{i}:=T\vert_{V_{i,\mathbb{H}}^{R}}:\ V_{i,\mathbb{H}}^{R}\longrightarrow V_{i,\mathbb{H}}^{R}$, for $i=1,\ldots,n$, then
\begin{align*}\sigma_{S}(T_{i})=\Big\{hq_{i}h^{-1}:\ h\in\mathbb{H}^{*}\Big\}.\end{align*}
\end{lemma}
\proof Combine Lemma \ref{l:2}, Proposition \ref{p:2} and Theorem \ref{t:2}.\qed
\vskip 0.1 cm
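The finite-dimensional picture of Lemma \ref{l:3} can be illustrated numerically: for the hypothetical toy operator $T={\rm diag}(2i,j)$, realized as a complex $4\times 4$ matrix via the matrix model of $\mathbb{H}$, the operator $Q_{q}(T)$ is singular exactly when $[q]\in\{[2i],[j]\}$ (an illustration only, not part of the proofs):

```python
import numpy as np

def quat(a, b, c, d):
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

def Q_of(T, re_q, mod2_q):
    """Q_q(T) = T^2 - 2 Re(q) T + |q|^2 I in the matrix model."""
    return T @ T - 2 * re_q * T + mod2_q * np.eye(T.shape[0])

# T = diag(2i, j): two T-invariant summands with E_T = {[2i], [j]}
Z = np.zeros((2, 2))
T = np.block([[quat(0, 2, 0, 0), Z], [Z, quat(0, 0, 1, 0)]])

def is_singular(M):
    return np.isclose(abs(np.linalg.det(M)), 0)

assert is_singular(Q_of(T, 0.0, 4.0))       # q = 2i: Q_q(T) singular
assert is_singular(Q_of(T, 0.0, 1.0))       # q = j : Q_q(T) singular
assert not is_singular(Q_of(T, 1.0, 1.0))   # q = 1 : Q_q(T) invertible
```

The two singular classes correspond to the two $T$-invariant summands in the lemma.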
Let $V_{\mathbb{H}}^{R}$ be a quaternionic right Hilbert space, $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$ and $q$ be a right eigenvalue of $T$ of finite type. In this case, we have the decomposition
\begin{align*}V_{\mathbb{H}}^{R}=R(P_{[q]})\oplus R(P_{\sigma_{S}(T)\backslash [q]}).\end{align*}
\begin{definition}{\rm The algebraic multiplicity of the right eigenvalue $q$ is, by definition, the dimension of the space $R(P_{[q]})$.}\end{definition}
In what follows, we write:
\begin{align*}m_{T}(q):=\dim R(P_{[q]}).\end{align*}
We refer to \cite{Gohberg,J1} for the definition in the complex setting. Now, inspired by the description of the Riesz projection in the complex setting, see \cite{Gohberg}, we give a quaternionic version of a result characterizing the isolated parts consisting of right eigenvalues of finite type.
\begin{theorem}Let $V_{\mathbb{H}}^{R}$ be a quaternionic Hilbert space, $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$ and $\sigma$
be an axially symmetric isolated part of $\sigma_{S}(T)$. Let $P_{\sigma}$ be the Riesz projection corresponding to $\sigma$ and take
\begin{align*}E_{T}^{\sigma}=\sigma/\cong\end{align*}
where $q\cong p$ if and only if $p\in [q]$. Then, $\dim R(P_{\sigma})<\infty$ if and only if $\# E_{T}^{\sigma}<\infty$ and $q$ is a right eigenvalue of finite type for all $q\in E_{T}^{\sigma}$. Besides, if so, then
\begin{align*}\dim R(P_{\sigma})=\displaystyle\sum_{q\in E_{T}^{\sigma}}\dim R(P_{[q]}).\end{align*}
\end{theorem}
\proof
If $\dim R(P_{\sigma})<\infty$, then $P_{\sigma}T$ is a finite rank operator. In this way, we see that
\begin{align*}\# E_{T|_{R(P_{\sigma})}}^{\sigma}=n<\infty.\end{align*}
By Lemma \ref{l:3},
\begin{align*}R(P_{\sigma})=V_{1,\mathbb{H}}^{R}\oplus V_{2,\mathbb{H}}^{R}\oplus ...\oplus V_{n,\mathbb{H}}^{R}\end{align*}
where $V_{j,\mathbb{H}}^{R}$ is $T-$invariant with the properties
\begin{align*}\sigma_{S}(T|_{V_{j,\mathbb{H}}^{R}})=[q_{j}],\ j=1,...,n.\end{align*}
Let $i\in\{1,\ldots,n\}$. Since $P_{\sigma}$ is a projection, we have
\begin{align*}V_{\mathbb{H}}^{R}=N( P_{\sigma})\oplus R(P_{\sigma}).\end{align*}
In this fashion, we have
\begin{align*}V_{\mathbb{H}}^{R}=V_{i,\mathbb{H}}^{R}\oplus W_{i,\mathbb{H}}^{R}\end{align*}
where
\begin{align*}W_{i,\mathbb{H}}^{R}=V_{1,\mathbb{H}}^{R}\oplus ...\oplus V_{i-1,\mathbb{H}}^{R}\oplus V_{i+1,\mathbb{H}}^{R}\oplus...\oplus V_{n, \mathbb{H}}^{R}\oplus N(P_{\sigma}).\end{align*}
As a consequence, $V_{i,\mathbb{H}}^{R}$ and $W_{i,\mathbb{H}}^{R}$ are $T-$invariant and $\sigma_{S}(T|_{V_{i,\mathbb{H}}^{R}})=[q_{i}]$. So, $q_{i}$ is a right eigenvalue of finite type.
\vskip 0.1 cm
Conversely, set $E_{T}^{\sigma}=\{q_{1},q_{2},\ldots,q_{n}\}$, where $q_{i}$ is a right eigenvalue of $T$ of finite type for all $i\in\{1,2,\ldots,n\}$.
Applying \cite[Theorem 5.6]{NCFCBO}, we have
\begin{align*}\mathbb{I}_{R(P_{\sigma})}=\displaystyle \sum_{i=1}^{n}P_{[q_{i}]}|_{R(P_{\sigma})}.\end{align*}
Since $P_{[q_{i}]}P_{[q_{j}]}=0$ for all $i\neq j$, then
\begin{align*}R(P_{\sigma})=R(P_{[q_{1}]})\oplus R(P_{[q_{2}]})\oplus ...\oplus R(P_{[q_{n}]}).\end{align*}
This implies that
\begin{align*}\dim (R(P_{\sigma}))=\displaystyle\sum_{i=1}^{n}\dim (R(P_{[q_{i}]})).\end{align*}
In particular, $P_{\sigma}$ is a finite rank operator.\qed
\begin{remark}
{\rm In the complex spectral theory, much attention has been paid to eigenvalues of finite type, see \cite{charfi,Gohberg,J1,J2,Lutgen}. They are useful for the study of the essential spectrum of certain operator matrices. We refer to \cite{charfi} for this point on the two-group transport operators. More precisely, let $V_{\mathbb{C}}$ be a complex Banach space and let $T$ be a closed operator in $V_{\mathbb{C}}$. The Browder resolvent set of $T$ is given by
\begin{align*}\rho_{B}(T):=\rho(T)\cup\sigma_{d}(T),\end{align*}
where we use the notation $\rho(\cdot)$ for the resolvent set and $\sigma_{d}(\cdot)$ for the set of eigenvalues of finite type. In fact, the usual resolvent
\begin{align*}R_{\lambda}(T):=(T-\lambda)^{-1}\end{align*}
can be extended to $\rho_{B}(T)$, see e.g. \cite{Lutgen}. Motivated by this, \cite{charfi} gives a version of the Frobenius-Schur factorization using the Browder resolvent. This makes it possible to study the essential spectrum of several types of operator matrices. In this paper, we have described the discrete $S-$spectrum in the quaternionic setting. In this regard, as in the complex case, we can define the spherical Browder resolvent. Although we do not study it in this paper, we will cover it in a future article.}
\end{remark}
\section{Some results on the Weyl $S$-spectrum}\label{sec:3}
In this section, we develop a deeper understanding of the Weyl $S$-spectrum of a bounded right linear operator. More precisely, we describe the boundary of the $S$-spectrum. We also prove a spectral mapping theorem for the essential $S$-spectrum. To begin with, we recall:
\begin{definition}\cite{BK2}{\rm Let $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$. The Weyl $S-$spectrum is the set
\begin{align*}\sigma_{W}^{S}(T)=\displaystyle \bigcap_{K\in\mathcal{K}(V_{\mathbb{H}}^{R})}\sigma_{S}(T+K).\end{align*}}
\end{definition}
\vskip 0.1 cm
The study of the essential and the Weyl $S-$spectra is based on Fredholm theory, see \cite{BK,BK2}. We refer to \cite{B1} for the investigation of Fredholm and Weyl elements with respect to a quaternionic Banach algebra homomorphism.
\begin{definition}
{\rm A Fredholm operator is an operator $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$ such that $N(T)$ and $V_{\mathbb{H}}^{R}/R(T)$ are finite dimensional. We will denote by $\Phi(V_{\mathbb{H}}^{R})$ the set of all Fredholm operators.}
\end{definition}
\vskip 0.1 cm
From \cite{BK,BK2}, we have
\begin{align*}\Phi(V_{\mathbb{H}}^{R})=\Phi_{l}(V_{\mathbb{H}}^{R})\cap\Phi_{r}(V_{\mathbb{H}}^{R})\end{align*}
where
\begin{align*}\Phi_{l}(V_{\mathbb{H}}^{R})=\Big\{T\in \mathcal{B}(V_{\mathbb{H}}^{R}):\mbox{ $R(T)$ is closed and }\dim (N(T))<\infty\Big\}\end{align*}
and
\begin{align*}\Phi_{r}(V_{\mathbb{H}}^{R})=\Big\{T\in \mathcal{B}(V_{\mathbb{H}}^{R}):\mbox{ $R(T)$ is closed and }\dim (N(T^{\dag}))<\infty\Big\}.\end{align*}
Let $T\in \Phi_{l}(V_{\mathbb{H}}^{R})\cup \Phi_{r}(V_{\mathbb{H}}^{R})$. Then, the index of $T$ is given by
\begin{align*}i(T):=\dim N(T)-\dim(V_{\mathbb{H}}^{R}/R(T)).\end{align*}
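In finite dimensions every operator is Fredholm and, by rank-nullity, the index of a map $\mathbb{R}^{n}\to\mathbb{R}^{m}$ equals $n-m$; a short numerical check (finite-dimensional illustration only):

```python
import numpy as np

def index(A, tol=1e-10):
    """i(T) = dim N(T) - dim(V/R(T)) for the map x -> A x, A an m x n matrix."""
    m, n = A.shape
    r = np.linalg.matrix_rank(A, tol)
    return (n - r) - (m - r)   # = n - m, whatever the rank is

A = np.random.default_rng(0).normal(size=(3, 5))
assert index(A) == 2           # a 3 x 5 matrix always has index 5 - 3 = 2
```

The interesting (possibly nonzero-index) Fredholm theory therefore only appears in infinite dimensions, as in the theorems below.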
\begin{theorem}\cite{BK,BK2} \label{t:5}
Let $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$. Then,
\begin{align*}\sigma_{e}^{S}(T)=\mathbb{H}\backslash \Phi_{T}\mbox{ and }\sigma_{W}^{S}(T)=\mathbb{H}\backslash W_{T}\end{align*}
where
\begin{align*}\Phi_{T}:=\Big\{q\in\mathbb{H}:\ Q_{q}(T)\in\Phi(V_{\mathbb{H}}^{R})\Big\}\end{align*}
\mbox{ and }
\begin{align*}W_{T}:=\Big\{q\in\mathbb{H}:\ Q_{q}(T)\in\Phi(V_{\mathbb{H}}^{R})\mbox{ and }i(Q_{q}(T))=0\Big\}.\end{align*}
\end{theorem}
\begin{remark} {\rm Let $V_{\mathbb{H}}^{R}$ be a quaternionic space and $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$.
\begin{enumerate}
\item Note that, in general, we have
\begin{align*}\sigma_{e}^{S}(T)\subset\sigma_{W}^{S}(T)=\sigma_{1,W}^{S}(T)\cup\sigma_{2,W}^{S}(T)\subset \sigma_{S}(T)\backslash\sigma_{d}^{S}(T),\end{align*}
where
\begin{align*}\sigma_{1,W}^{S}(T):=\mathbb{H}\backslash\Big\{q\in\mathbb{H}:\ Q_{q}(T)\in \Phi_{l}(V_{\mathbb{H}}^{R})\mbox{ and }i(Q_{q}(T))\leq 0\Big\}\end{align*}
and
\begin{align*}\sigma_{2,W}^{S}(T):=\mathbb{H}\backslash\Big\{q\in\mathbb{H}:\ Q_{q}(T)\in \Phi_{r}(V_{\mathbb{H}}^{R})\mbox{ and }i(Q_{q}(T))\geq 0\Big\}.\end{align*}
\noindent In particular, $\sigma_{W}^{S}(T)$ does not contain right eigenvalues of finite type.
\item In \cite{B1}, one proves that $q\longmapsto i(Q_{q}(T))$ is constant on any connected component of $\Phi_{T}$. In this way, we see that if $\Phi_{T}$ is connected, then
\begin{align*}\sigma_{e}^{S}(T)=\sigma_{W}^{S}(T).\end{align*}
\end{enumerate}}
\end{remark}
\vskip 0.3 cm
\noindent The first result in this section is the next theorem.
\begin{theorem}\label{t:4}
Let $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$. Then,
\begin{align*}\partial\sigma_{W}^{S}(T)\subset\sigma_{1,W}^{S}(T).\end{align*}
In particular, if $\Phi_{T}$ is connected, then
\begin{align*}\partial\sigma_{e}^{S}(T)=\partial\sigma_{W}^{S}(T)\subset\Big\{q\in\mathbb{H}:\ Q_{q}(T)\not\in \Phi_{l}(V_{\mathbb{H}}^{R})\Big\}.\end{align*}
\end{theorem}
To prove Theorem \ref{t:4}, we first study the concept of the minimum modulus. Let $V_{\mathbb{H}}^{R}$ be a separable right Hilbert space and $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$. The minimum modulus of $T$ is given by
\begin{align*}\mu(T):=\displaystyle\inf_{\|x\|=1}\|Tx\|.\end{align*}
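For a matrix, $\mu(T)$ coincides with the smallest singular value; the following sketch (a finite-dimensional illustration only) compares the SVD value with a crude sampled infimum over random unit vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(6, 6))

# mu(T) = inf over unit vectors x of ||T x|| is the smallest singular value
mu_svd = np.linalg.svd(T, compute_uv=False).min()

# a sampled infimum over random unit vectors can only overshoot the true value
xs = rng.normal(size=(6, 5000))
xs /= np.linalg.norm(xs, axis=0)
mu_sample = np.linalg.norm(T @ xs, axis=0).min()

assert mu_sample >= mu_svd - 1e-12
```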
\vskip 0.3 cm
\noindent To begin with, we give the following lemma.
\begin{lemma}\label{l1}
Let $T$ and $S$ be two bounded right linear operators on a right quaternionic Hilbert space. Then,
\begin{enumerate}
\item If $\|T-S\|<\mu(T)$, then $\mu(S)>0$ and $\overline{R(S)}$ is not a proper subset of $\overline{R(T)}$.\\
\item If $\|T-S\|<\frac{\mu(T)}{2}$, then $\overline{R(S)}$ is not a proper subset of $\overline{R(T)}$ and $\overline{R(T)}$
is not a proper subset of $\overline{R(S)}$.
\end{enumerate}
\end{lemma}
\proof The proof is the same as in the complex Banach space setting; see Lemmas 2.3 and 2.4 in \cite{HAE}.\qed
\vskip 0.3 cm
For $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$, $q\in\mathbb{H}$ and $\varepsilon>0$ we set:
\begin{align*}\mathcal{O}(T,q,\varepsilon):=\Big\{q'\in\mathbb{H}:\ 2\vert {\rm Re}(q)-{\rm Re}(q')\vert\|T\|+\vert |q'|^{2}-|q|^{2}\vert<\varepsilon\Big\}.\end{align*}
It is clear that $\mathcal{O}(T,q,\varepsilon)$ is an open set in $\mathbb{H}$.
\begin{corollary}Let $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$ and $q_{0}\in \rho_{S}(T)$. Then, $q\in \rho_{S}(T)$ for each $q\in \mathcal{O}(T,q_{0},\mu(Q_{q_{0}}(T)))$.
\end{corollary}
\proof Let $q\in \mathcal{O}(T,q_{0},\mu(Q_{q_{0}}(T)))$. Then,
\begin{align*}\|Q_{q}(T)-Q_{q_{0}}(T)\|
&=\|2({\rm Re}(q_{0})-{\rm Re}(q))T+(|q|^{2}-|q_{0}|^{2})\mathbb{I}_{V_{\mathbb{H}}^{R}}\|\\
&\leq 2\vert {\rm Re}(q_{0})-{\rm Re}(q)\vert\|T\|+\vert|q|^{2}-|q_{0}|^{2} \vert\\
&<\mu(Q_{q_{0}}(T)).\end{align*}
We can apply Lemma \ref{l1} to conclude that
\begin{align*}\mu(Q_{q}(T))>0\mbox{ and }\overline{R(Q_{q}(T))}=\overline{R(Q_{q_{0}}(T))}=V_{\mathbb{H}}^{R}.\end{align*}
By \cite[Proposition 3.5]{BK}, $R(Q_{q}(T))$ is closed. Hence, $q\in \rho_{S}(T)$.\qed
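The perturbation bound $\|Q_{q}(T)-Q_{q_{0}}(T)\|\leq 2\vert{\rm Re}(q_{0})-{\rm Re}(q)\vert\|T\|+\vert|q|^{2}-|q_{0}|^{2}\vert$ used above only involves ${\rm Re}(q)$ and $|q|^{2}$, so it can be tested on random real matrices (an illustration only; the sampled values of ${\rm Re}(q)$ and $|q|^{2}$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=(5, 5))
I = np.eye(5)

def Q(re_q, mod2_q):
    """Q_q(T) written in terms of Re(q) and |q|^2 only."""
    return T @ T - 2 * re_q * T + mod2_q * I

def opnorm(M):
    return np.linalg.norm(M, 2)

# check ||Q_q(T) - Q_{q0}(T)|| <= 2 |Re(q0) - Re(q)| ||T|| + | |q|^2 - |q0|^2 |
for _ in range(100):
    r0, m0, r1, m1 = rng.uniform(0.0, 2.0, size=4)
    lhs = opnorm(Q(r1, m1) - Q(r0, m0))
    rhs = 2 * abs(r0 - r1) * opnorm(T) + abs(m1 - m0)
    assert lhs <= rhs + 1e-9
```

The bound is just the triangle inequality for the difference $2({\rm Re}(q_{0})-{\rm Re}(q))T+(|q|^{2}-|q_{0}|^{2})\mathbb{I}$, so it holds with no condition on $T$.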
\noindent We recall:
\begin{lemma}\cite[Lemma 7.3.9]{DFIS} \label{l2}Let $n\in\mathbb{N}$ and $q,s\in\mathbb{H}$. Set
\begin{align*}P_{2n}(q)=q^{2n}-2{\rm Re}(s^{n})q^{n}+|s^{n}|^{2}.\end{align*}
Then,
\begin{align*}P_{2n}(q)
&=\mathcal{Q}_{2n-2}(q)(q^{2}-2{\rm Re}(s)q+|s|^{2})\\
&=(q^{2}-2{\rm Re}(s)q+|s|^{2})\mathcal{Q}_{2n-2}(q),\end{align*}
where $\mathcal{Q}_{2n-2}(q)$ is a polynomial of degree $2n-2$ in $q$.
\end{lemma}
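Lemma \ref{l2} can be verified numerically in a fixed slice $\mathbb{C}_{I_{s}}$ by polynomial division (the values of $s$ and $n$ below are arbitrary samples; an illustration, not a proof):

```python
import numpy as np

s, n = 1 + 2j, 2                            # sample slice point and exponent
sn = s ** n
# coefficients of P_{2n}(q) = q^{2n} - 2 Re(s^n) q^n + |s^n|^2, highest degree first
P2n = [1.0] + [0.0] * (n - 1) + [-2 * sn.real] + [0.0] * (n - 1) + [abs(sn) ** 2]
divisor = [1.0, -2 * s.real, abs(s) ** 2]   # q^2 - 2 Re(s) q + |s|^2

quotient, remainder = np.polydiv(P2n, divisor)
assert np.allclose(remainder, 0)            # the quadratic factor divides P_{2n}
assert len(quotient) == 2 * n - 1           # so Q_{2n-2} has degree 2n - 2
```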
\vskip 0.3 cm
\noindent \textbf{Proof of Theorem \ref{t:4}.}
\noindent Set:
\begin{align*}f(T):=\displaystyle\sup_{K\in \mathcal{K}(V_{\mathbb{H}}^{R})}\mu(T+K).\end{align*}
Arguing as in the complex case, we have $f(T)>0$ if and only if
\begin{align*}T\in\Phi_{l}(V_{\mathbb{H}}^{R})\mbox{ and }\dim N(T)\leq\dim(V_{\mathbb{H}}^{R}/R(T)).\end{align*}
Now, since $\sigma_{e}^{S}(T)$ is not empty (e.g., \cite[Proposition 7.14]{BK}) and $\sigma_{e}^{S}(T)\subset\sigma_{W}^{S}(T)$, the boundary $\partial\sigma_{W}^{S}(T)$ is not empty. Let us then take an element $p$ in $\partial\sigma_{W}^{S}(T)$. Assume that $p\not \in\sigma_{1,W}^{S}(T)$. Then,
\begin{align*} f(Q_{p}(T))>0.\end{align*}
So, there exists $K_{0}\in\mathcal{K}(V_{\mathbb{H}}^{R})$ such that
\begin{align*}\mu(Q_{p}(T)+K_{0})>0.\end{align*}
\noindent Since $\mathcal{O}(T,p,\frac{\mu(Q_{p}(T)+K_{0})}{2})$ is an open neighborhood of $p$ and
\begin{align*}p\in\overline{\Big\{q\in\mathbb{H}:\ Q_{q}(T)\in\Phi(V_{\mathbb{H}}^{R})\mbox{ and }i(Q_{q}(T))=0\Big\}},\end{align*}
then there exists $p_{0}\in \mathcal{O}(T,p,\frac{\mu(Q_{p}(T)+K_{0})}{2})$ such that
\begin{align*}p_{0}\in W_{T}.\end{align*}
\noindent On the other hand,
\begin{align*}\|Q_{p_{0}}(T)+K_{0}-Q_{p}(T)-K_{0}\|
&\leq \vert |p_{0}|^{2}-|p|^{2}\vert+2\vert {\rm Re}(p)-{\rm Re}(p_{0})\vert\|T\|\\
&<\displaystyle \frac{\mu(Q_{p}(T)+K_{0})}{2}.\end{align*}
Applying Lemma \ref{l1}, we obtain
\begin{align*}R(Q_{p}(T)+K_{0})=V_{\mathbb{H}}^{R} .\end{align*}
Indeed, since $\mu(Q_{p_{0}}(T)+K_{0})>0$ and $p_{0}\in W_{T}$, we have
\begin{align*}\dim(V_{\mathbb{H}}^{R}/R(Q_{p_{0}}(T)+K_{0}))=0.\end{align*}
In this way, we see that
\begin{align*}Q_{p}(T)+K_{0}\in\Phi(V_{\mathbb{H}}^{R})\mbox{ and }i(Q_{p}(T)+K_{0})=0.\end{align*}
This implies that $p\not\in \sigma_{W}^{S}(T)$, contradicting $p\in\partial\sigma_{W}^{S}(T)\subset\sigma_{W}^{S}(T)$.
\vskip 0.1 cm
\noindent The rest of the proof follows immediately from \cite[Theorem 5.13]{B1}.\qed
\vskip 0.3 cm
We will now prove a spectral mapping theorem for the essential $S$-spectrum.
\begin{theorem}
Let $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$. Then,
\begin{align*}\sigma_{e}^{S}(T^{n})=\Big\{q^{n}\in\mathbb{H}:\ q\in\sigma_{e}^{S}(T)\Big\}=(\sigma_{e}^{S}(T))^{n}.\end{align*}
\end{theorem}
\proof
According to \cite[Lemma 3.10]{DFIS} and the proof of \cite[Theorem 7.3.11]{DFIS} we have
\begin{align*}T^{2n}-2{\rm Re}(q)T^{n}+|q|^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}}=\displaystyle\prod_{j=0}^{n-1}
(T^{2}-2{\rm Re}(q_{j})T+|q_{j}|^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}}),\end{align*}
where $q_{j}$, $j=0,\ldots,n-1$, are the solutions of $p^{n}=q$ in the complex plane $\mathbb{C}_{I_{q}}$. Let $q\in\sigma_{e}^{S}(T^{n})$. Then, $Q_{q}(T^{n})\not\in\Phi(V_{\mathbb{H}}^{R})$.
Applying \cite[Theorem 6.13]{BK}, we infer that there exists $i\in\{0,1,\ldots,n-1\}$ such that
\begin{align*}Q_{q_{i}}(T)\not\in \Phi(V_{\mathbb{H}}^{R}).\end{align*}
Therefore, $q_{i}\in\sigma_{e}^{S}(T)$. In this way, we see that $q=q_{i}^{n}\in(\sigma_{e}^{S}(T))^{n}$. To prove the reverse inclusion, we consider $p=q^{n}$, where $q\in\sigma_{e}^{S}(T)$.
By Lemma \ref{l2} and \cite[Theorem 7.3.7]{DFIS}, we get
\begin{align*}T^{2n}-2{\rm Re}(q^{n})T^{n}+\vert q^{n}\vert^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}}
&=\mathcal{Q}_{2n-2}(T)(T^{2}-2{\rm Re}(q)T+|q|^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}})\\
&=(T^{2}-2{\rm Re}(q)T+|q|^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}})\mathcal{Q}_{2n-2}(T).\end{align*}
\noindent Since $Q_{q}(T)\not\in \Phi(V_{\mathbb{H}}^{R})$, applying \cite[Corollary 6.14]{BK} we deduce that
\begin{align*}T^{2n}-2{\rm Re}(q^{n})T^{n}+\vert q^{n}\vert^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}}\not\in \Phi(V_{\mathbb{H}}^{R}).\end{align*}
So, $p\in\sigma_{e}^{S}(T^{n})$. \qed
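The factorization of $T^{2n}-2{\rm Re}(q)T^{n}+|q|^{2}\mathbb{I}$ used at the start of the proof is a polynomial identity, and can be sanity-checked on a random matrix (the slice roots $q_{j}$ below are computed in $\mathbb{C}_{I_{q}}$; an illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.normal(size=(4, 4))
I = np.eye(4)
q, n = 0.8 + 0.6j, 3                        # sample slice point and power

# the n solutions q_j of p^n = q in the slice C_{I_q}
roots = [abs(q) ** (1 / n) * np.exp(1j * (np.angle(q) + 2 * np.pi * j) / n)
         for j in range(n)]

lhs = (np.linalg.matrix_power(T, 2 * n)
       - 2 * q.real * np.linalg.matrix_power(T, n)
       + abs(q) ** 2 * I)
rhs = I.copy()
for p in roots:
    rhs = rhs @ (T @ T - 2 * p.real * T + abs(p) ** 2 * I)

assert np.allclose(lhs, rhs)
```

Since each factor $T^{2}-2{\rm Re}(q_{j})T+|q_{j}|^{2}\mathbb{I}$ has real coefficients, the factors commute and the identity is independent of their ordering.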
\vskip 0.1 cm
\end{document}
\begin{document}
\title{On pair correlation for generic diagonal forms}
\date{\today}
\author{J.~Bourgain}
\address{J.~Bourgain, Institute for Advanced Study, Princeton, NJ 08540}
\email{[email protected]}
\thanks{The author was partially supported by NSF grants DMS-1301619}
\keywords {Pair correlation, homogenous diagonal forms}
\begin{abstract}
We establish new pair correlation results for certain generic homogenous diagonal forms evaluated on the integers.
Methods are analytic leading to explicit quantitative statements.
\end{abstract}
\maketitle
\section {Introduction and statements}
In \cite {S}, an analytic and quantitative approach to the pair correlation problem for (measure) generic binary
quadratic forms $\alpha m^2+mn+ \beta n^2$ ($\alpha, \beta>0$) is given, establishing Poisson behavior.
The method used in \cite {S} has been developed further by several authors, in particular in \cite {V}, to which we will refer later.
On the other hand, Sarnak's argument as it stands does not seem to apply to diagonal forms $m^2+\beta n^2$, $\beta>0$, corresponding to the Laplace eigenvalues of a
rectangular billiard, due to lack of parameters.
Pair correlation results for binary quadratic forms have been obtained in \cite{EMM}, based on ergodic methods \big(in this case the pair correlation statistics amounts to the
distribution of quadratic forms of (2, 2)-signature\big).
A considerable advantage of the ergodic method is to provide deterministic results, with the quadratic forms being subject to a diophantine assumption.
Those results are qualitative and only weak quantitative statements seem extractable from this technology.
The purpose of this Note is to provide an alternative for Sarnak's approach which applies in the diagonal case and also in situations where no dynamical treatment is
known.
The technique is closely related to arguments in \cite{BBRR} and \cite{B}.
A first model, suggested to the author by Z.~Rudnick, is that of a generic positive definite diagonal ternary quadratic form
$$
Q(\bar x) =Q (x_1, x_2, x_3)= x_1^2 +\alpha x_2^2+\beta x_3^2, \quad \alpha, \beta >0
$$
Restricting the variables to $\mathbb Z\cap [0, N]$, typical spacings are expected to be $O(\frac 1N)$ so that the natural interval size in the pair correlation problem
for (1.1) is $O(\frac 1N)$.
For $T$ large and $a<b$, define
$$
\begin{aligned}
R(a, b; T)= & \frac 1{T^{3/2}}|\{(\bar m, \bar n)\in \mathbb Z_+^3\times \mathbb Z_+^3; \bar m\not= \bar n, Q(\bar m)\leq T, Q(\bar n)\leq T \text { and } \\[8pt]
&Q(\bar m)-Q(\bar n)\in [a, b]\}|
\end{aligned}
\eqno{(1.1)}
$$
and
$$
c=\lim_{\varepsilon\to 0}\frac 1\varepsilon \Big|\Big\{(x, y)\in\mathbb R^3_+\times\mathbb R_+^3; Q(x)\leq 1, |Q(x)-Q(y)|<\frac\varepsilon 2\Big\}\Big|.\eqno{(1.2)}
$$
\begin{theorem}
Let $Q=Q_{\alpha, \beta}$ be defined by (1.1) with $\alpha,\beta \in [\frac 12, 1]$ parameters.
Fix $\frac 12<\rho<1$.
Then almost surely in $\alpha, \beta$ the following statement holds.
Let $T\to\infty$ and $a<b$, $|a|, |b|<O(1)$ and $T^{-\rho}< b-a<1$.
Then
$$
R(a, b; T)\sim c T^{\frac 12}(b-a).\eqno{(1.3)}
$$
\end{theorem}
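For illustration only (far below the asymptotic regime of Theorem 1), the statistic $R(a,b;T)$ can be computed by brute force; the sample parameters $\alpha,\beta,T,[a,b]$ below are chosen for speed, not accuracy, and the sketch makes no claim about Poisson behavior:

```python
import numpy as np

def pair_count(alpha, beta, T, a, b):
    """Brute-force R(a,b;T) = T^{-3/2} #{(m,n): m != n, Q <= T, Q(m)-Q(n) in [a,b]}."""
    M = int(np.sqrt(2.0 * T)) + 1          # alpha, beta >= 1/2, so coordinates <= sqrt(2T)
    m = np.arange(1, M + 1, dtype=float)
    vals = (m[:, None, None] ** 2
            + alpha * m[None, :, None] ** 2
            + beta * m[None, None, :] ** 2).ravel()
    vals = np.sort(vals[vals <= T])
    # for each value, count values landing in the window [val + a, val + b]
    lo = np.searchsorted(vals, vals + a, side='left')
    hi = np.searchsorted(vals, vals + b, side='right')
    count = int((hi - lo).sum())
    if a <= 0 <= b:
        count -= len(vals)                  # remove the diagonal pairs mbar = nbar
    return count / T ** 1.5

R = pair_count(alpha=0.71, beta=0.83, T=400.0, a=-0.5, b=0.5)
assert R > 0
```

The sorted-values/two-pointer counting keeps the cost near $O(N\log N)$ in the number $N$ of lattice values, rather than $O(N^{2})$ over all pairs.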
The second result relates to Vanderkam's work \cite {V} on the pair correlation for homogenous degree $k$ forms in $k$ variables (and which is an extension of \cite {S}).
While in \cite {V} a measure generic result for the full space of such forms is established, we restrict ourselves to diagonal forms, proving a similar result.
Define for given $k\geq 3$
$$
F(x_1, \ldots, x_k)= x_1^k+\alpha_2x_2^k+\cdots+ \alpha_kx_k^k, \text { where} \ \alpha_2, \ldots, \alpha_k\in \Big[\frac 12, 1\Big]\eqno{(1.4)}
$$
$$
c=|\{x\in\mathbb R_+^k; F(x)\leq 1\}|\eqno{(1.5)}
$$
$$
R(a, b, T)=\frac 1T|\{(\bar m, \bar n)\in \mathbb Z_+^k\times \mathbb Z^k_+; \bar m\not=\bar n, F(\bar m)\leq T, F(\bar n)\leq T \text { and }
F(\bar m)-F(\bar n)\in [a, b]\}|.
$$
\begin{theorem}
Let $F$ be defined by (1.4) with $\alpha_2, \ldots, \alpha_k\in [\frac 12, 1]$ parameters.
There is some $\rho>0$ such that almost surely in $\alpha_2, \ldots, \alpha_k$ the following statement holds.
Let $T\to\infty$ and $0<a< b<O(1)$ and $b-a> T^{-\rho}$. Then
$$
R(a, b, T)\sim c^2(b-a)\eqno{(1.6)}
$$
\end{theorem}
More precise statements will follow from the arguments below.
Some steps were not made quantitatively explicit in order not to over-complicate the exposition.
It turns out that in the diagonal case, the most direct harmonic analysis attack introducing Gauss sums does not seem to succeed.
Instead, we follow a slightly different approach based on distributional properties of certain Dirichlet sums (cf. \cite{BBRR} and \cite {B}).
\section
{Preliminaries}
We make a few comments/reductions related to Theorem 1 (a similar discussion holds for Theorem 2).
Clearly, Theorem 1 is equivalent to the statement (by rescaling)
$$
\frac 1{T^{\frac 12}} R(a, b; T)\sim\frac {b-a}{2\,T^{8/3}} |\{(\bar x, \bar y)\in\mathbb R^3_+\times \mathbb R^3_+; Q(\bar x)\leq T, |Q(\bar x)-Q(\bar y)|< T^{2/3}\}|.
\eqno{(2.1)}
$$
We make a localization of the variables $x_i \in I_i =[u_i -\Delta u_i, u_i+\Delta u_i]$ and $y_i \in J_i =[v_i-\Delta v_i, v_i +\Delta v_i]$
(where $I_i, J_i\subset \mathbb R_+$, hence $I_i, J_i\subset [0, cT^{\frac 12}])$ and such that $u_i> T^{\frac 12-\varepsilon}$, $\Delta u_i=T^{\frac 12-3\varepsilon}$ and similarly
for $v_i$ and $\Delta v_i$.
Then (2.1) will follow from
$$
\begin{aligned}
&|\{ (\bar m, \bar n)\in\mathbb Z^3\times\mathbb Z^3; \bar m\in I_1\times I_2\times I_3, \bar n \in J_1\times J_2\times J_3, \bar m \not= \bar n \text { and } Q(\bar m)- Q(\bar n)\in [a,
b]\}|\sim\\[10pt]
& \frac {b-a}{2T^{2/3}} |\{ (\bar x, \bar y) \in \mathbb R^3\times\mathbb R^3; \bar x\in I_1\times I_2\times I_3, \bar y \in J_1\times J_2\times J_3 \text { and } |Q(\bar x)-
Q(\bar y)|< T^{2/3}\}|
\end{aligned}
\eqno{(2.2)}
$$
We may further ensure that (as we explain next)
$$
{\rm dist}(I_i, J_i) \sim |u_i-v_i| >T^{\frac 12 -\varepsilon}.\eqno{(2.3)}
$$
Since $Q (\bar m) -Q(\bar n) = m^2_1 -n_1^2+\alpha(m^2_2 -n_2^2)+
\beta (m_3^2 -n_3^2)$ and $b-a<1$, the contribution of say $|m_2-n_2|<T ^{\frac 12-\varepsilon}$ to $R(a, b; T)$ is at most (denoting $\xi =\frac
{a+b}2$)
$$
T^{-3/2} \Bigg| \Bigg\{
\begin{aligned}
(m_2, n_2, m_3, n_3)\in \mathbb Z^4_+ ; m_i, n_i\lesssim & T^{\frac 12}, m_3\not= n_3, |m_2-n_2|<T^{\frac 12-\varepsilon} \text { and }\\[10pt]
& \Vert \alpha (m_2^2 -n_2^2) +\beta (m_3^2 -n_3^2)- \xi\Vert < b-a\end{aligned}\Bigg\}
\Bigg|\eqno{(2.4)}
$$
$$
+T^{-3/2} \Big|\Big\{ (\bar m, \bar n)\in \mathbb Z_+^6; m_i, n_i \lesssim T^{\frac 12}, m_3 =n_3, \bar m \not =\bar n \text { and } |m^2_1-n_1^2 +\alpha (m_2^2-n_2^2)
-\xi|< b-a\Big\}\Big|.
\eqno{(2.5)}
$$
Since $\alpha, \beta$ are measure generic, hence diophantine, (2.4) can be estimated by
$$
T^{-3/2} T^{1-\varepsilon} \big(T^{1+\frac \varepsilon 2} (b-a) +1\big) <T^{\frac 12-\frac \varepsilon 2} (b-a) + T^{-\frac 12-\varepsilon} <T^{\frac 12-\frac \varepsilon2}(b-a).
$$
For (2.5), since $|\xi|<O(1)$ and $\bar m \not= \bar n$, either $m_2\not= n_2$ or $m_2 =n_2, m_3= n_3, m_1, n_1< O(1)$.
Again the $m_2\not = n_2$ contribution is at most
$$
T^{-\frac 32} T^{\frac 12} T^{1+\frac\varepsilon 2} (b-a) +1\big)< T^{\frac \varepsilon 2}(b-a) + T^{-1} < T^{\frac \varepsilon 2} (b-a).
$$
In order to evaluate the contribution of $|m_1-n_1|< T^{\frac 12-\varepsilon}$, simply replace $Q(x)$ by $\frac 1\beta x_1^2+\frac \alpha \beta x_2^2 +x_3^2$ and exploit that
$\frac 1\beta$, $\frac \alpha\beta$ are diophantine.
This justifies (2.3).
Next, observe that in order to establish Theorem 1, we need to consider the $T^{\rho+\varepsilon}$ cases for the interval $[a, b]$.
This means that the measure of the exceptional parameter set in $(\alpha, \beta)$ for a fixed interval $[a, b]$ has to be multiplied by $T^{\rho+\varepsilon}$.
Similar considerations apply to Theorem 2, replacing $T^{\frac 12}$ by $T^{\frac 1k}$ and variables range $0<x_i, y_i \lesssim T^{1/k}$.
Condition (2.3) is replaced by
$$
{\rm dist} (I_i, J_i)\sim |u_i-v_i|> T^{\frac 1k-\varepsilon}\eqno{(2.6)}
$$
which we justify, as the argument differs.
Assume $|m_1-n_1|<T^{\frac 1k-\varepsilon}$.
Note first that for a fixed interval $[a, b]$, we have for fixed $\bar m\not=\bar n$
$$
\begin{aligned}
&\Big|\Big\{ (\alpha_2, \ldots, \alpha_k) \in \Big[\frac 12, 1\Big]^{k-1}; |m_1^k- n_1^k +\alpha_2 (m_2^k-n_2^k)+\cdots+ \alpha_k (m_k^k -n_k^k)-\xi|
< b-a\Big\}\Big|\\[10pt]
&\lesssim \Big(\sum^k_{i=1} |m_i^k- n_i^k|\Big)^{-1}.
\end{aligned}
\eqno{(2.7)}
$$
Summing (2.7) over $\bar m, \bar n$ and using the geometric/arithmetic mean inequality, the contribution of $m_1\not= n_1$ to $R (a, b; T)$ is at most
$$
\begin{aligned}
&\frac C T \sum_{\substack{ 1\leq |m_1-n_1|< T^{\frac 1k-\varepsilon}\\ m_i, n_i\leq T^{\frac 1k} (2\leq i\leq k)}} \ \frac 1{\sum^k_{i=1} |m_i^k -n_i^k|}\leq\\[10pt]
&\frac CT \Big(\iint_{\substack{1\leq u\leq T^{\frac 1k-\varepsilon}\\ v\leq T^{\frac 1k}}} \ \frac {dudv}{u^{\frac 1k}\, v^{\frac {k-1}k}}\Big)
\Big( T^{\frac 1k}+\iint_{1\leq u\leq v\leq T^{\frac 1k}} \ \frac {dudv}{u^{\frac 1k} v^{\frac{k-1} k}}\Big)^{k-1} \leq\\[10pt]
&\frac CT T^{(\frac 1k-\varepsilon)\frac {k-1}k} T^{\frac {1}{k^2}} T^{\frac{k-1}k} \lesssim T^{-\varepsilon\frac {k-1}k}.
\end{aligned}
\eqno{(2.8)}
$$
If $m_1=n_1$, then say $m_2\not= n_2$ contributing
$$
\begin{aligned}
&C\frac {T^{\frac 1k}}T \sum_{\substack{m_i, n_i \leq T^{\frac 1k} (2\leq i\leq k)\\ m_2\not= n_2}} \ \frac 1{\sum^k_{i=2} |m_i^k -n_i^k|}\leq\\[10pt]
&C T^{\frac 1k-1} \Big( \iint_{1\leq u\leq v\leq T^{\frac 1k}} \ \frac {dudv}{u^{\frac 1{k-1}} v}\Big)
\Big( T^{\frac 1k}+\iint_{1\leq u\leq v\leq T^{\frac 1k}}
\ \frac {dudv}{u^{\frac 1{k-1}} v} \Big)^{k-2}\leq\\[10pt]
& CT^{\frac 1k-1} (\log T) T^{\frac {k-2}{k(k-1)}} T^{\frac {k-2}k} \lesssim T^{-\frac 1{k(k-1)}} \log T.\end{aligned}
\eqno{(2.9)}
$$
Multiplying (2.8), (2.9) with the number of intervals $[a, b]$ under consideration, i.e. $O(T^\rho)$, the argument is conclusive letting $\rho =\min \big(\frac \varepsilon 4, \frac
1{2k(k-1)}\big) =\frac\varepsilon 4$.
This justifies (2.6).
\section
{Proof of Theorem 1}
We establish (2.2) with the intervals $I_i, J_i(1\leq i\leq 3)$ introduced as above and in particular the separation condition (2.3).
Let $\xi =\frac {a+b}2 , \delta =\frac {b-a}2$.
Let $I_3=[u-\Delta u, u+\Delta u]$, $J_3 =[v-\Delta v, v+\Delta v]$, $k_0= u^2-v^2$ and note that by (2.3)
$$
|k_0|\geq |u-v|^2 > T^{1-2\varepsilon}\eqno{(3.1)}
$$
while for $x\in I_3, y\in J_3$
$$
|x^2 -y^2-k_0|\lesssim T^{\frac 12} (\Delta u +\Delta v) < T^{1-3\varepsilon}.\eqno{(3.2)}
$$
Rewrite $Q(\bar m) -Q(\bar n)\in [a, b]$ as
$$
|m_1^2-n_1^2 +\alpha (m_2^2- n_2^2)+\beta (m_3^2 -n_3^2)-\xi |<\delta.\eqno{(3.3)}
$$
Taking into account (3.1), (3.2) and exploiting a usual upper/lower bounding argument, (3.3) may be replaced by
$$
\Big|\frac {m_1^2 - n_1^2 +\alpha (m_2^2 -n_2^2)-\xi}{m_3^2- n_3^2} +\beta\Big| < \frac \delta{|k_0|}\eqno{(3.4)}
$$
or, assuming $k_0 >0$, i.e. $u>v$ and taking into account the variable localization
$$
|\log (n_1^2 -m^2_1+ \alpha (n_2^2 - m^2_2)+\xi)-\log (m_3^2-n_3^2)-\log \beta|<\frac \delta{\beta k_0}.\eqno{(3.5)}
$$
At this point, we use Fourier analysis.
Denote
$$
S_1 (t)= \sum_{\substack{m_i\in I_i, n_i\in J_i\\ i=1, 2}} \big(n_1^2 -m_1^2 +\alpha(n_2^2-m_2^2)+\xi\big)^{it}\eqno{(3.6)}
$$
$$
S_2(t)=\sum_{m_3 \in I_3, n_3\in J_3} (m_3^2 -n_3^2)^{it}.\eqno{(3.7)}
$$
Using (3.5), the l.h.s. of (2.2) may be expressed as
$$
\int S_1(t)\, \overline{S_2(t)} \ e^{-it\log \beta} \, \widehat {1_{[-\frac 1B, \frac 1B]}} (t) dt \text { with } B=\frac{\beta k_0}\delta.\eqno{(3.8)}
$$
Define $B_0=\frac {\beta k_0} {T^{2/3}} > T^{\frac 16}$ and split (3.8) as
$$
\begin{aligned}
&\frac {B_0}B\int S_1(t) \, \overline {S_2(t)} e^{-it\log\beta} \, \widehat{ 1_{[-\frac 1{B_0}, \frac 1{B_0}]}} (t) dt +\int S_1 (t) \overline {S_2(t)} e^{-it \log \beta} \big[
\widehat { 1_{[-\frac 1 B, \frac 1B]}} (t) -\frac {B_0}B \widehat{ 1_{[-\frac 1{B_0}, \frac 1{B_0}]}} (t)\big] dt\\[8pt]
&= (3.9)+(3.10).
\end{aligned}
$$
Clearly (3.9) amounts to
$$
\begin{aligned}
&\frac {B_0}{B} \Big|\Big\{ (\bar m, \bar n)\in \mathbb Z^3 \times\mathbb Z^3; m_i\in I_i, n_i \in J_i \text { and } \Big|\log
\frac {n_1^2 -m_1^2 +\alpha (n_2^2 -m_2^2)+\xi}{\beta(m_3^2-n_3^2)}\Big| < \frac 1{B_0}\Big\}\Big|\\
&\sim \delta T^{-\frac 23} \Big|\Big\{ (\bar m , \bar n) \in \mathbb Z^3\times\mathbb Z^3; m_i\in I_i, n_i\in J_i \text { and }
|m_1^2-n^2_1+\alpha (m_2^2 -n_2^2)+\beta (m_3^2 -n_3^2)-\xi |< T^{\frac 23}\Big\}\Big|
\end{aligned}
$$
and since $\xi =O(1)$
$$
\sim \delta T^{-\frac 23} \Big|\Big\{ (\bar x, \bar y)\in \mathbb R^3\times\mathbb R^3; x_i\in I_i, y_i \in J_i \text { and } |Q(x) -Q(y)|<T^{\frac 23}\Big\}\Big|\eqno{(3.11)}
$$
which is (2.2).
Note that so far, no further restrictions on $\alpha, \begin{equation}ta$ were made.
Since clearly
$$
(3.11) >\delta T^{2-15\varepsilon}\eqno{(3.12)}
$$
it remains to ensure that
$$
|(3.10) |< \delta T^{2-16 \varepsilon}\eqno{(3.13)}
$$
which will involve an additional parameter restriction.
Analyzing the expression $\widehat{1_{[-\frac 1B, \frac 1B]}} (t) -\frac {B_0}{B} \widehat {1_{[-\frac 1{B_0}, \frac 1{B_0}]}} (t) $ and relying on $L^2$-theory, one verifies that
$$
\begin{aligned}
&\Vert(3.10)\Vert^2_{L^2_\beta}\lesssim\\[8pt]
&\Big(\frac \delta{k_0}\Big)^2 \Big\{\int_{[|t|<T^{-2/3} k_0]} (|t|T^{2/3}k_0^{-1})^4 |S_1(t)|^2 |S_2(t)|^2+
\int_{[|t|> T^{-2/3}k_0]} \min \Big(1, \frac {k_0}{|t|\delta}\Big)^2 |S_1(t)|^2 |S_2(t)|^2\Big\} \\[8pt]
\end{aligned}\eqno{(3.14)}
$$
$$
\lesssim \Big(\frac \delta{k_0}\Big)^2\Big\{ T^{-\frac 43+C\varepsilon} \int_{[|t|< T^\varepsilon]} t^4 |S_1|^2 |S_2|^2 +\int_{[|t|>T^\varepsilon]}
\min \Big( 1, \frac T{|t|\delta}\Big)^2 |S_1|^2 |S_2|^2\Big\}.
\eqno{(3.15)}
$$
Recall that $S_1$ also depends on $\alpha$, so that the measure of the $(\alpha, \beta)$-set where (3.13) fails is bounded by
$$
T^{-6+C\varepsilon} \Big\{ T^{-\frac 43} \int_{[|t|<T^\varepsilon]} t^4 (Av_\alpha |S_1 (t)|^2) |S_2(t)|^2 +\int_{[|t|>T^\varepsilon]} \min \Big( 1,
\frac T{|t|\delta}\Big)^2 (Av_\alpha |S_1(t)|^2) |S_2(t)|^2\Big\}.
\eqno{(3.16)}
$$
It remains to bound (3.16). Write
$$
|S_1(t)|^2 =\sum_{\substack{m_i, r_i \in I_i; n_i, s_i \in J_i\\ (i=1, 2)}}
e^{it[\log\big(n_1^2- m^2_1 +\alpha (n^2_2-m^2_2)+\xi\big)-\log \big(s_1^2-r_1^2+\alpha (s_2^2-r^2_2)+\xi)]}\eqno{(3.17)}
$$
and average in $\alpha$. The $\alpha$-derivative of the phase function equals
$$
\frac {n_2^2 -m_2^2}{ n^2_1 -m_1^2+\alpha (n_2^2 -m_2^2)+\xi} -\frac {s_2^2 -r_2^2}{s_1^2 -r^2_1+\alpha (s_2^2-r_2^2)+\xi}
\geq \frac {(n_2^2 -m_2^2)(s_1^2-r^2_1+\xi)-(s_2^2 -r^2_2)(n_1^2-m_1^2+\xi)}{T^2}.
$$
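Here we will use the first derivative test in its standard form: if $\psi$ is $C^1$ on $[a, b]$ with $\psi'$ monotone and $|\psi'|\geq \lambda >0$, then
$$
\Big|\int_a^b e^{it\psi (\alpha)}\, d\alpha\Big|\lesssim \frac 1{|t|\lambda}.
$$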
This allows us to bound the $\alpha$-average of (3.17) by
$$
\begin{aligned}
&\frac {T^2}{|t|} \ \sum_{\substack{ m_i, r_i \in I_i; n_i, s_i\in J_i\\ i=1, 2}}
[1+|(n^2_2-m_2^2)(s_1^2-r_1^2+\xi) -(s_2^2-r_2^2)(n_1^2-m_1^2+\xi)|]^{-1} \leq\\[10pt]
&\frac {T^2}{|t|} \ \sum_{\substack{z_i, w_i\in\mathbb Z\\ T^{1-2\varepsilon} <|z_i|, |w_i|<T}} [1+|z_2 (w_1+\xi) - w_2(z_1+\xi)|]^{-1}\leq\\[10pt]
&\frac {T^2}{|t|} \log T \max_{k\in\mathbb Z} \big|\{ (z_1, z_2, w_1, w_2) \in\mathbb Z^4; T^{1-2\varepsilon}< |z_i|, |w_i| <T \text { and }
z_2 (w_1+\xi)-w_2 (z_1+\xi)= k+O(1)\big\}\big|\\[10pt]
&\lesssim \log T\cdot\frac {T^2}{|t|} \sum_{T^{1-2\varepsilon} < z_2, w_2<T} \Big(1+\frac T{z_2} (z_2, w_2)\Big) < \frac {T^{4+\varepsilon}}{|t|}.
\end{aligned}\eqno{(3.18)}
$$
By (3.18),
$$
(3.16) <T^{-4/3+C\varepsilon} +T^{-2+C\varepsilon} \max_{2^k>T^\varepsilon} \Big\{ 2^{-k} \min \Big(1, 2^{-k}\frac T\delta\Big) \int_{[|t|\sim 2^k]}|S_2(t)|^2\Big\}\eqno{(3.19)}
$$
where for the first term we used the trivial bound on $S_2$.
The main point is the argument in the treatment of $S_2$.
Recall the definition (3.7).
Instead of restricting $m_3\in I_3, n_3\in J_3$, it will be convenient to use a smoother localization, replacing $1_{[u-\Delta u, u+\Delta u]}$ by $\varphi \big(\frac {x-u}{\Delta u}\big)$,
with $\varphi$ a bump function satisfying $0\leq \varphi\leq 1$, $\supp \varphi\subset [-1, 1]$, $\varphi=1$ on $[-1+T^{-\varepsilon}, 1-T^{-\varepsilon}]$.
Then, writing $m_3^2-n_3^2= (m_3-n_3)(m_3+n_3)=mn$
$$
S_2(t) =\sum_{m, n} \varphi\Big(\frac {m+n-2u}{2\Delta u}\Big)\varphi \Big( \frac {m-n-2v}{2\Delta v}\Big) m^{it} n^{it}
$$
and the latter may be bounded (up to some factor $T^{C\varepsilon}$) by expressions of the form
$$
\Big(\sum_{m\in M} m^{it}\Big)\Big(\sum_{n\in N} n^{it}\Big)
$$
with $M, N$ subintervals of $[1, T^{\frac 12}]$ (recall that $m_3-n_3 > T^{\frac 12-\varepsilon}$).
A well-known argument involving the Mellin transform (see for instance \cite{BBRR}) then permits one essentially to replace $\frac 1{\sqrt K}\Big|\sum_{1\leq k\leq K} k^{it}\Big|$ by the Riemann
zeta function $\zeta \big(\frac 12+it'\big)$, with $t'$ a translate of $t$ by $O\big((\log T)^2\big)$.
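We only indicate the first step of this standard reduction (see \cite{BBRR} for the complete argument). By the truncated Perron formula, with $c= 1+\frac 1{\log K}$ and a suitable truncation height $U$,
$$
\sum_{1\leq k\leq K} k^{it} =\frac 1{2\pi i} \int_{c-iU}^{c+iU} \zeta (s-it)\, \frac {K^s}s \, ds +O(1),
$$
after which one shifts the line of integration to $\Re s=\frac 12$, collecting the contribution $\frac {K^{1+it}}{1+it}$ of the pole of $\zeta (s-it)$ at $s=1+it$.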
Consequently
$$
\begin{aligned}
2^{-\ell}\int_{2^\ell <t< 2^{\ell+1}} |S_2 (t)|^2 dt & < T^{C\varepsilon} 2^{-\ell} \max_{K<T^{\frac 12}}\int_{2^\ell <t<2^{\ell+1}} \Big|\sum _{1\leq k\leq K}
k^{it}\Big|^4 dt\\[8pt]
&<T^{C\varepsilon} 2^{-\ell} T\int_{[1<t<2^\ell + (\log T)^2]}\Big|\zeta \Big(\frac 12+it\Big)\Big|^4 < T^{1+C\varepsilon} 2^{\varepsilon\ell}
\end{aligned}\eqno{(3.20)}
$$
using the classical bound
$$
\int_1^{t_*}\Big|\zeta \Big(\frac 12+ it\Big)\Big|^4 dt\ll t^{1+\varepsilon}_* \ \text { for all } \ t_* >1, \varepsilon>0.\eqno{(3.21)}
$$
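The bound (3.21) is classical; in fact, by Ingham's fourth moment asymptotics,
$$
\int_0^{t_*}\Big|\zeta \Big(\frac 12+it\Big)\Big|^4 dt \sim \frac 1{2\pi^2}\, t_* (\log t_*)^4 \qquad (t_*\to\infty),
$$
so that (3.21) even holds with $t_*(\log t_*)^4$ in place of $t_*^{1+\varepsilon}$.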
Finally, substituting (3.20) in (3.19) gives a measure estimate in the $(\alpha, \beta)$ parameter set of the form $T^{-1+C\varepsilon}$ (where $\varepsilon>0$ may be taken arbitrarily small).
As pointed out earlier, this measure needs to be multiplied with the number $T^{\rho+\varepsilon}$ of intervals $[a, b]$ under consideration and Theorem 1 follows.
\section{Proof of Theorem 2}
We follow a procedure similar to the one used for Theorem 1, though we need less optimal estimates, due to the fact that the number of intervals under consideration is only $O(1)$.
Recall the definition of the intervals $I_i, J_i$ $(1\leq i\leq k)$ and the separation condition (2.6).
It will suffice to show that for some $0<\kappa<1$ (depending on $k$)
$$
\begin{aligned}
&\big|\big\{ (\bar m, \bar n) \in\mathbb Z^k\times\mathbb Z^k; m_i\in I_i, n_i\in J_i \ \text { and } \ F(\bar m)- F(\bar n)\in [a, b]\big\}\big|\sim\\[8pt]
& \frac {b-a}2 \, T^{-1+\kappa}\big|\big\{(\bar x, \bar y)\in \mathbb R^k \times\mathbb R^k; x_i\in I_i, y_i \in J_i \text { and } |F(x) -F(y)|< T^{1-\kappa}\big\}\big|.
\end{aligned}
\eqno{(4.1)}
$$
Define
$$
S_1(t) =\sum_{\substack{ m_i\in I_i, n_i\in J_i\\ i=1, \ldots, k-1}} \big(n^k_1-m_1^k +\alpha_2(n_2^k -m_2^k)+\cdots+ \alpha_{k-1}
(n^k_{k-1} - m^k_{k-1}) \big)^{it}\eqno{(4.2)}
$$
$$
S_2(t)=\sum_{m\in I_k, n\in J_k} (m^k -n^k)^{it}\eqno{(4.3)}
$$
and evaluate
$$
\int S_1(t) \overline {S_2(t)}\, e^{-it \log \alpha_k} \widehat {1_{[-\frac 1B, \frac 1B]}} (t) dt\eqno{(4.4)}
$$
where $B= \frac {\alpha_k k_0}\delta$ and $k_0\sim m_k^k -n_k^k$.
Let $B_0 =\frac {\alpha_k k_0}{T^{1-\kappa}} > T^{\kappa/2}$ and decompose (4.4) as
$$
\begin{aligned}
&\frac {B_0}{B} \int S_1(t)\, \overline{S_2(t)} e^{-it\log \alpha_k} \widehat {1_{[-\frac 1{B_0}, \frac 1{B_0}]}} (t) dt+\int S_1(t) \, \overline {S_2(t)}\,
e^{-it\log \alpha_k} \big[\widehat {1_{[-\frac 1B, \frac 1B]}} (t) -\frac {B_0}{B} \widehat {1_{[-\frac 1{B_0}, \frac 1{B_0}]}} (t)\big] dt\\[8pt]
& = (4.5)+(4.6)
\end{aligned}
$$
Then, taking $\kappa<\frac 1k$, (4.5) amounts again to
$$
\delta T^{-1+\kappa}\big|\big\{ (\bar x, \bar y)\in \mathbb R^k \times \mathbb R^k; x_i\in I_i, y_i\in J_i \ \text { and } \ |F(\bar x) -F(\bar y)|<
T^{1-\kappa}\big\}\big|
$$
which is (4.1) and at least $T^{1-C\varepsilon}$ (recall that $\delta =O(1)$ in this case).
Thus we need to ensure that
$$
(4.6)< T^{1-C'\varepsilon}\eqno{(4.7)}
$$
which will be achieved by an additional parameter restriction.
Using the $L^2_{\alpha_k}$-norm as before,
$$
\begin{aligned}
\Vert(4.6)\Vert^2_{L^2_{\alpha_k}} &\lesssim \Big(\frac \delta{k_0}\Big)^2 \Big\{\int_{[|t|<\frac {k_0}{T^{1-\kappa}}]}
\Big(\frac {|t|T^{1-\kappa}}{k_0}\Big)^4 |S_1|^2 \, |S_2|^2 +\int_{[|t|>\frac {k_0}{T^{1-\kappa}}]} \min \Big(1, \frac {k_0}{\delta t}\Big)^2
|S_1|^2 |S_2|^2\Big\}\\[8pt]
&\lesssim T^{-2+C\varepsilon}\Big\{ T^{-3\kappa+4}+\int_{[|t|>T^{\kappa/2}]} \min \Big(1, \frac Tt\Big)^2 |S_1|^2 |S_2|^2\Big\}
\end{aligned}
\eqno{(4.8)}
$$
where for the first term we used trivial bounds on $S_1$ and $S_2$.
It follows that the $\alpha$-parameter set where (4.7) fails is of measure at most
$$
T^{-3\kappa+C\varepsilon}+ T^{-4+C\varepsilon} \int_{[|t|>T^{\kappa/2}]} \min \Big(1, \frac T{|t|}\Big)^2 \big[ Av_{\alpha_1, \ldots, \alpha_{k-1}}|S_1(t)|^2\big] |S_2(t)|^2
dt\eqno{(4.9)}
$$
and it remains to bound the second term.
Estimate for $|t|>T^{\kappa/2}$,
$$
|S_2(t)|< T^{-\gamma} |I_k| \, |J_k|< T^{\frac 2k-\gamma}\eqno{(4.10)}
$$
for some $\gamma>0$ (a more precise bound will be unnecessary).
An estimate of the form (4.10) is obtained from standard exponential sum theory, exploiting only one of the variables $m$ or $n$.
Substituting this into (4.9), it remains to prove that
$$
\int_{[|t|>T^{\kappa/2}]} \min \Big(1, \frac T{|t|}\Big)^2 \big[Av_{\alpha_1, \ldots, \alpha_{k-1}} |S_1(t)|^2\big] dt\lesssim T^{4-\frac 4k}\eqno{(4.11)}
$$
and (4.11) will clearly follow from
$$
\frac 1T\int\big[Av_{\alpha_1, \ldots, \alpha_{k-1}} |S_1(t)|^2\big] \varphi \Big(\frac tT\Big) dt\lesssim T^{3-\frac 4k+\varepsilon}\eqno{(4.12)}
$$
($0\leq \varphi\leq 1$ a symmetric smooth bump function).
The l.h.s. of (4.12) is bounded by
$$
\begin{aligned}
&\int_{[\frac 12, 1]^{k-1}}d\alpha_1\ldots d\alpha_{k-1}\\[10pt]
&\Bigg|\Bigg\{\begin{aligned} &(\bar m, \bar n, \bar {m'}, \bar {n'})\in (\mathbb Z^{k-1})^4; m_i, m_i' \in I_i, n_i, n_i' \in J_i \text { and}\\
&|n_1^k-m_1^k -(n_1')^k+(m_1')^k+\alpha_2 (n_2^k-m_2^k -(n_2')^k +(m_2')^k) +\cdots+ \alpha_{k-1} ( \ ) |< O(1)
\end{aligned} \Bigg\}\Bigg|.
\end{aligned}\eqno{(4.13)}
$$
Set $N=T^{\frac 1k}$ and
$$
N^\rho =\max(|n^k_1-m_1^k-(n_1')^k + (m_1')^k|,\ldots, | n^k_{k-1} - m^k_{k-1}- (n_{k-1}')^k + (m_{k-1}')^k|)\eqno{(4.14)}
$$
$(0<\rho\leq k)$.
Then the contribution of (4.14) to (4.13) is bounded by
$$
N^{-\rho}\big|\big\{ (x_1, x_2, x_3, x_4)\in \mathbb Z_+^4; x_i<N \text { and } |x_1^k- x_2^k+x_3^k- x_4^k|< N^\rho\big\}\big|^{k-1}.\eqno{(4.15)}
$$
To evaluate (4.15), we consider several cases for $\rho$.
If $\rho\geq k-1$, then, since $k\geq 2$,
$$
(4.15) \leq N^{-\rho} \Big(N^3 \frac{N^\rho}{N^{k-1}}\Big)^{k-1} \leq N^{3k-4}.
$$
Next, let $1\leq \rho< k-1$ and $M\sim \max(x_1, x_2, x_3, x_4)=x_1$. Then
$$
\begin{aligned}
&\big|\big\{ (x_1, x_2, x_3, x_4)\in\mathbb Z_+^4; x_1\sim M, x_i<M \text { and } |x_1^k-x_2^k+x_3^k -x_4^k|< N^\rho\big\}\big|\leq\\
&M^3 \Big(1+\min\Big(M, \frac{N^\rho}{M^{k-1}}\Big)\Big) \leq N^3+N^{\frac {4\rho}k}+\frac {N^\rho}{M^{k-4}} 1_{[M^k>N^\rho]}
\end{aligned}
$$
and (4.15) contributes at most
$$
N^{3(k-1)-\rho}+N^{\frac {4\rho}k(k-1)-\rho} + N^{3k-4} \lesssim N^{3k-4}.
$$
It remains to treat the case $\rho<1$.
If $k\geq 3$, estimate
$$
\begin{aligned}
&\big|\big\{ (x_1, \ldots, x_4)\in \mathbb Z^4_+; x_1\leq N \ \text { and } \ |x_1^k-x_2^k+x_3^k-x_4^k|<N^\rho\big\}\big|\leq\\[10pt]
&\int_{-1}^1 \Big|\sum_{x<N} e^{iux^k}\Big|^4 \ \Big|\sum_{n<N^\rho} e^{iun}\Big|du \ll N^{\frac 52+\frac \rho 2+\varepsilon}
\end{aligned}
$$
using Cauchy--Schwarz and Hua's inequalities.
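More precisely, by Cauchy--Schwarz,
$$
\int_{-1}^1 \Big|\sum_{x<N} e^{iux^k}\Big|^4\, \Big|\sum_{n<N^\rho} e^{iun}\Big| du\leq
\Big(\int_{-1}^1 \Big|\sum_{x<N} e^{iux^k}\Big|^8 du\Big)^{\frac 12}
\Big(\int_{-1}^1\Big|\sum_{n<N^\rho} e^{iun}\Big|^2 du\Big)^{\frac 12},
$$
where (after the usual rescaling to the unit torus) the first factor is controlled by Hua's inequality
$$
\int_0^1 \Big|\sum_{x\leq N} e(\alpha x^k)\Big|^8 d\alpha\ll N^{5+\varepsilon} \qquad (k\geq 3),
$$
and the second factor is $\ll N^{\frac \rho 2+\varepsilon}$ by Parseval.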
This gives the contribution
$$
N^{-\rho} N^{\frac {5+\rho}2(k-1)+\varepsilon} \ll N^{3k-4+\varepsilon}.
$$
For $k=2$, we obtain the contribution
$$
N^{-\rho} N^{2+\rho+\varepsilon} \ll N^{2+\varepsilon}= N^{3k-4+\varepsilon}.
$$
This proves Theorem 2.
\begin{thebibliography}{XXXX}
\bibitem [B]{B} J.~Bourgain, \emph {A quantitative Oppenheim theorem for generic diagonal quadratic forms}, preprint 2016.
\bibitem [BBRR]{BBRR} V.~Blomer, J.~Bourgain, M.~Radziwill, Z.~Rudnick, \emph{ Small gaps in the spectrum of the rectangular billiard}, preprint 2016.
\bibitem [EMM]{EMM} A.~Eskin, G.~Margulis, S.~Mozes, \emph {Quadratic forms of signature (2,2) and eigenvalue spacings on rectangular 2-tori}, Ann. of Math. (2) 161
(2005), no. 2, 679--725.
\bibitem [S]{S} P.~Sarnak, \emph {Values at integers of binary quadratic forms}, Harmonic Analysis and Number Theory, CMS Conf. Proc. 21, AMS, 1997.
\bibitem [V]{V} J.~Vanderkam, \emph {Values at integers of homogeneous polynomials}, Duke Math. J., 97 (1999), no. 2, 379--412.
\end{thebibliography}
\end{document}
\begin{document}
\title{Rigid gems in dimension n.
\footnote{Work performed under the auspices of the G.N.S.A.G.A. of
the I.N.D.A.M. (Istituto Nazionale di Alta Matematica ``F. Severi'' of
Italy) and financially supported by MiUR of Italy (project
``Propriet\`a geometriche delle variet\`a reali e complesse'') and
by the University of Modena and Reggio Emilia (project ``Modelli
discreti di strutture algebriche e geometriche''). }}
\author{Paola Bandieri \ and \ Carlo Gagliardi}
\maketitle
\begin{abstract}
We extend to dimension $n \geq 3$ the concept of $\rho$-pair in a
coloured graph and we prove the existence theorem for minimal rigid
crystallizations of handle-free, closed $n$-manifolds.
\\
\\{\it 2000 Mathematics Subject Classification:} Primary 57Q15;
Secondary 57M15.\\{\it Keywords:} rigid gems, coloured graphs,
$\rho$-pairs.
\end{abstract}
\section{Introduction}
The concept of \textit{$\rho$-pair} in a $4$-coloured graph was
introduced for the first time by Sostenes Lins in \cite{L}. Roughly
speaking, it consists of two equally coloured edges, which belong to
two or three bicoloured cycles. A graph with no $\rho$-pairs was
then called \textit{rigid} in the same paper, where the following
basic result was proved:
\textit{Every handle-free, closed $3$-manifold admits a rigid
crystallization of minimal order.}
The proof is based on the definition of a particular local move,
called \textit{switching} of a $\rho$-pair. Starting from any gem
$\Gamma$ of a closed, irreducible $3$-manifold $M$, a finite sequence of
such moves, together with the cancelling of a suitable number of
$1$-dipoles, produces a rigid crystallization $\Gamma'$ of the same
manifold $M$, whose order is strictly less than the order of $\Gamma$.
The above existence theorem plays a fundamental r\^ole in the
problem of generating automatically essential catalogues of
$3$-manifolds, with ``small'' Heegaard genus and/or graph order (see,
e.g., \cite{L}, \cite{BCrG_1}, \cite{CC}, \cite{BCrG_2}, \cite{M}).
In the present paper, we extend the concepts of $\rho$-pair,
switching and rigidity to $(n+1)$-coloured graphs, for $n > 3$.
Our main result is the proof of the existence of a rigid
crystallization of minimal order, for every handle-free
$n$-dimensional, closed manifold. It will be used in a subsequent paper to generate the catalogue of
all $4$-dimensional, closed manifolds, represented by (rigid)
crystallizations of ``small'' order.
\section{Notations}
In the following all manifolds will be piecewise linear (PL), closed
and, when not otherwise stated, connected. For the basic notions of
PL topology, we refer to \cite{RS} and to \cite{Gl}; ``$\cong$'' will
mean ``\textit{PL-homeomorphic}''. For graph theory, see \cite{GT} and
\cite{W}.
We will use the term ``graph'' instead of ``multigraph''. Hence
multiple edges are allowed, but loops are forbidden. As usual,
$V(\Gamma)$ and $E(\Gamma)$ will denote the vertex-set and the edge-set of
the graph $\Gamma$.
If $\Gamma$ is an oriented graph, then each edge ${\bf e}$ is directed
from its first endpoint ${\bf e}(0)$ (also called \textit{tail}) to
its second endpoint ${\bf e}(1)$ (called \textit{head}).
An \textit{$(n+1)$-coloured graph} is a pair $(\Gamma,\gamma)$, where
$\Gamma$ is a graph, regular of degree $n+1$, and $\gamma :
E(\Gamma)\to\Delta_n=\{0,\ldots,n\}$ is a map with the property that, if
${\bf e}$ and ${\bf f}$ are adjacent edges of $E(\Gamma)$, then $\gamma
({\bf e}) \neq \gamma ({\bf f})$. We shall often write $\Gamma$ instead of
$(\Gamma,\gamma)$.
Let $B$ be a subset of $\Delta_n$. Then, the connected components of
the graph $\Gamma_B=(V(\Gamma),\gamma^{-1}(B))$ are called
$B$\textit{-residues} of $(\Gamma,\gamma)$. Moreover, for each
$c\in\Delta_n$, we set $\hat{c}=\Delta_n\setminus\{c\}$. If $B$ is a
subset of $\Delta_n$, we define $g_B$ to be the number of
$B$-residues of $\Gamma$; in particular, given any colour $c\in
\Delta_n$, $g_{\hat c}$ denotes the number of components of the
graph $\Gamma_{\hat c}$, obtained by deleting all edges coloured $c$
from $\Gamma$. If $i,j \in \Delta_n, i \neq j$, then $g_{ij}$ denotes
the number of cycles of $\Gamma$, alternatively coloured $i$ and $j$,
i.e. $g_{ij}= g_{\{i,j\}}$.
An isomorphism $\phi : \Gamma\to\Gamma'$ is called a \textit{coloured
isomorphism} between the $(n+1)$-coloured graphs $(\Gamma,\gamma)$ and
$(\Gamma',\gamma')$ if there exists a permutation $\varphi$ of
$\Delta_n$ such that $\varphi\circ\gamma=\gamma'\circ\phi$.
A pseudocomplex $K$ of dimension $n$ \cite{HW}, with a labelling of
its vertices by $\Delta_n=\{0,\ldots,n\}$ which is injective on the
vertex-set of each simplex of $K$, is called a \textit{coloured
$n$-complex}.
It is easy to associate a coloured $n$-complex $K(\Gamma)$ to each
$(n+1)$-coloured graph $\Gamma$, as follows:
\begin{itemize}
\item [-] for each vertex ${\bf v}$ of $\Gamma$, take an $n$-simplex $\sigma ({\bf v})$ and label
its vertices by $\Delta_n$;
\item [-] if ${\bf v}$ and ${\bf w}$ are vertices of $\Gamma$ joined by a $c$-coloured
edge ($c\in\Delta_n$), then identify the $(n-1)$-faces of $\sigma
({\bf v})$ and $\sigma ({\bf w})$ opposite to the vertices labelled
$c$.
\end{itemize}
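For instance (a standard example, recorded here only as an
illustration of the construction): if $\Gamma$ consists of exactly two
vertices ${\bf v}, {\bf w}$, joined by $n+1$ edges, one for each
colour of $\Delta_n$, then $K(\Gamma)$ is obtained by glueing the two
$n$-simplices $\sigma ({\bf v})$ and $\sigma ({\bf w})$ along their
whole boundaries; hence $|K(\Gamma)| \cong \mathbb S^n$, and $\Gamma$
is the (unique) two-vertex crystallization of $\mathbb S^n$.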
If $M$ is a manifold of dimension $n$ and $\Gamma$ an $(n+1)$-coloured
graph such that $|K(\Gamma)|\cong M$, then, following Lins \cite{L}, we
say that $\Gamma$ is a \textit{gem}
(\textit{graph-encoded-manifold}) \textit{representing} $M.$
Note that \textit{$\Gamma$ is a gem of an $n$-manifold $M$ iff, for
every colour $c \in \Delta_n$, each $\hat c$-residue represents
$\mathbb S^{n-1}.$} Moreover, \textit{$M$ is orientable iff $\Gamma$ is
bipartite.}
If, for each $c\in\Delta_n$, $\Gamma_{\hat{c}}$ is connected, then
the corresponding coloured complex $K(\Gamma)$ has exactly $(n+1)$
vertices (one for each colour $c\in\Delta_n$); in this case both
$\Gamma$ and $K(\Gamma)$ are called \textit{contracted}. A contracted
gem $\Gamma$, representing an $n$-manifold $M$, is called a
\textit{crystallization} of $M$.
The \textit{existence theorem} of crystallizations for every
$n$-manifold $M$ was proved by Pezzana \cite{P$_1$}, \cite{P$_2$}.
Surveys on crystallization theory can be found in \cite{FGG},
\cite{BCaG}.
Let ${\bf x},{\bf y}$ be two vertices of an $(n+1)$-coloured graph
$\Gamma$ joined by $k$ edges $\{{\bf e}_1,\ldots, {\bf e}_k\}$ with
$\gamma({\bf e}_h)=i_h$, for $h = 1, \ldots, k$. We call $\Theta=\{{\bf
x},{\bf y}\}$ a \textit{dipole of type k}, \textit{involving
colours} $i_1,\ldots, i_k$, iff ${\bf x}$ and ${\bf y}$ belong to
different $(\Delta_n \setminus \{ i_1,\ldots, i_k\})$-residues of
$\Gamma$.
In this case a new $(n+1)$-coloured graph $\Gamma'$ can be obtained by
deleting ${\bf x},{\bf y}$ and all their incident edges from $\Gamma$
and then joining, for each $i\in\Delta_n \setminus \{i_1,\ldots
,i_k\}$, the vertex $i$-adjacent to ${\bf x}$ to the vertex
$i$-adjacent to ${\bf y}$. $\Gamma'$ is said to be obtained from $\Gamma$ by
\textit{cancelling} (or \textit{deleting}) \textit{the $k$-dipole}
$\Theta$. Conversely $\Gamma$ is said to be obtained from $\Gamma'$ by
\textit{adding the $k$-dipole} $\Theta$.
By a \textit{dipole move}, we mean either the \textit{adding} or the
\textit{cancelling} of a dipole from a gem $\Gamma$.
As proved in \cite{FG}, \textit{two gems $\Gamma$ and $\Gamma'$ represent
PL-homeomorphic manifolds iff they can be obtained from each other
by a finite sequence of dipole moves}.
An $n$-dipole $\Theta=\{{\bf x}, {\bf y}\}$ is often called a
\textit{blob} (see \cite{LM}, where a different calculus for gems is
introduced). If $c$ is the (only) colour not involved in the blob
$\Theta$, and ${\bf x}', {\bf y}'$ are the vertices $c$-adjacent to ${\bf
x}$ and ${\bf y}$ respectively, then the cancelling of $\Theta$ from
$\Gamma$ produces (in $\Gamma'$) a new $c$-coloured edge ${\bf e}'$, joining
${\bf x}'$ with ${\bf y}'$. Following Lins, we call the inverse
procedure the \textit{adding of a blob on the edge ${\bf e}'$}.
Two vertices ${\bf x}, {\bf y}$ of an $(n+1)$-coloured graph $\Gamma$
are called \textit{completely separated} if, for each colour $c \in
\Delta_n$, ${\bf x}$ and ${\bf y}$ belong to two different $\hat
c$-residues. The \textit{fusion graph} $\Gamma \text{fus}({\bf x},{\bf
y})$ is obtained simply by deleting ${\bf x}$ and ${\bf y}$ from
$\Gamma$ and then by gluing together the ``hanging edges'' with the same
colours.
The following result was first proved, for case (a), in \cite{L}
and, for case (b), in \cite{L} ($n=3$) and in \cite{GV}.
\begin{lemma}\label{1} Let ${\bf x}, {\bf y}$ be two completely separated vertices of
a (possibly disconnected) graph $\Gamma$.
\begin{itemize}
\item[(a)] If ${\bf x}$ and ${\bf y}$ belong to the (only) two different
components $\Gamma^\prime$ and $\Gamma^{\prime\prime}$ of $\Gamma$, representing
two $n$-dimensional manifolds $M^\prime$ and $M^{\prime\prime}$
respectively, then $\Gamma \text{fus}({\bf x},{\bf y})$ is a gem of a
connected sum $M^\prime\# M^{\prime\prime}$.
\item[(b)] If $\Gamma$ is a gem of a (connected) $n$-manifold $M$,
then $\Gamma \text{fus}({\bf x},{\bf y})$ is a gem of $M \# \mathbb H$,
where $\mathbb H$ is either $\mathbb S^{n-1} \times \mathbb S^1$ or
$\mathbb S^{n-1} \tilde \times \mathbb S^1$ (i.e. the orientable or
non-orientable $(n-1)$-sphere bundle over $\mathbb S^1$).
\end{itemize}
\end{lemma}
Note that such a manifold $\mathbb H$ is often called a
\textit{handle} (of dimension $n$). A manifold $M$ is called
\textit{handle-free} if it admits no handles as connected summands
(i.e. if $M$ is not homeomorphic to $M^\prime\# \mathbb H, M^\prime$
being any $n$-manifold).
\section{Switching of $\rho$-pairs.}
Let $(\Gamma,\gamma)$ be an $(n+1)$-coloured graph. Let further $({\bf e},{\bf f})$ be any pair of edges,
both coloured $c$, of
$\Gamma.$
If we delete ${\bf e},{\bf f}$ from $\Gamma$, we obtain an edge-coloured
graph $\overline \Gamma$, with exactly four vertices of degree $n$
(namely, the endpoints ${\bf u},{\bf v}$ of ${\bf e}$ and the
endpoints ${\bf w},{\bf z}$ of ${\bf f}$).
Now, there are exactly two $(n+1)$-coloured graphs $(\Gamma_1,
\gamma_1), (\Gamma_2, \gamma_2)$ (different from $(\Gamma, \gamma)$) which
can be obtained by adding two new edges (both coloured $c$) to
$\overline \Gamma$: such edges are either ${\bf e}_1, {\bf f}_1$,
joining ${\bf u}$ with ${\bf w}$ and ${\bf v}$ with ${\bf z}$
respectively, or ${\bf e}_2, {\bf f}_2$, joining ${\bf u}$ with
${\bf z}$ and ${\bf v}$ with ${\bf w}$ respectively. (See Figure 1,
Figure 1a and Figure 1b, where, w.l.o.g., we consider $c=0$.)
\bigskip
\centerline{\scalebox{0.5}{\includegraphics{figure_1.eps}}}
\bigskip
\centerline{\bf Figure 1}
\bigskip
We will say that $(\Gamma_1,\gamma_1)$ and $(\Gamma_2,\gamma_2)$ are
obtained from $(\Gamma,\gamma)$ by a {\it switching} on the pair $({\bf
e},{\bf f}).$
\bigskip
\centerline{\scalebox{0.8}{\includegraphics{figure_1ab.eps}}}
\bigskip
\, \, \, \, \, {\bf Figure 1a \, \, \, \, \, \, \, \, \, \, \, \, \,
\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, Figure 1b}
\bigskip
Actually, we are interested in particular pairs of equally coloured
edges of $\Gamma$. More precisely:
\begin{definition} A pair $R= ({\bf e},{\bf f})$ of edges of $\Gamma$, with $\gamma({\bf e}) = \gamma({\bf f}) =
c,$ will be called:
\begin{itemize}
\item[(a)] a $\rho_n$-pair involving colour $c$ if, for each
colour $i \in \Delta_n \setminus \{c\}$, we have $\Gamma_{\{c,i\}}({\bf
e}) = \Gamma_{\{c,i\}}({\bf f})$;
\item[(b)] a $\rho_{n-1}$-pair, involving colour $c$, if there
exists a colour $d \neq c$, such that:
\begin{itemize}
\item[($b_1$)] $\Gamma_{\{c,d\}}({\bf e}) \neq \Gamma_{\{c,d\}}({\bf f}),$ and
\item[($b_2$)] for each colour $j \in \Delta_n \setminus \{c,d\}, \Gamma_{\{c,j\}}({\bf e}) = \Gamma_{\{c,j\}}({\bf f}).$
\end{itemize}
\end{itemize}
\end{definition}
The colour $d$ above will be said to be {\it not involved} in the
$\rho_{n-1}$-pair $R$.
By a {\it $\rho$-pair}, we will mean for short either a
$\rho_n$-pair or a $\rho_{n-1}$-pair.
\begin{theorem} \label{n.c.c.} Let $(\Gamma,\gamma)$ be an $(n+1)$-coloured graph, $R =
({\bf e},{\bf f})$ be a $\rho$-pair of $\Gamma$ and let
$(\Gamma_1,\gamma_1)$ be obtained from $(\Gamma,\gamma)$ by any switching of
$R.$ Then:
\begin{itemize}
\item[(a)] if $R$ is a $\rho_{n-1}$-pair, then $\Gamma$ and $\Gamma_1$ have
the same number of components;
\item[(b)] if $R$ is a $\rho_{n}$-pair, then $\Gamma_1$ has at most one more component than $\Gamma$.
\end{itemize}
\end{theorem}
\begin{proof} As before, let us call ${\bf u},{\bf v}$ the endpoints of ${\bf e}$ and
${\bf w},{\bf z}$ the endpoints of ${\bf f}$. Let further
$\overline\Gamma$ be the graph obtained by deleting ${\bf e}$ and ${\bf f}$
from $\Gamma$.
As is easy to check, ${\bf u},{\bf v},{\bf w}$ and ${\bf z}$ lie in
the same component of $\Gamma$.
(a) If $R$ is a $\rho_{n-1}$-pair, then ${\bf u},{\bf v},{\bf w}$ and
${\bf z}$ also lie in the same component of $\overline\Gamma$ (and
therefore of $\Gamma_1$).
For, let $d$ be the colour not involved in $R$. By definition of
$\rho_{n-1}$-pair, $\Gamma_{\{c,d\}}({\bf e})$ and $\Gamma_{\{c,d\}}({\bf f})$
are two different bicoloured cycles of $\Gamma$, the first one
containing ${\bf e}$ and the second one containing ${\bf f}$.
Hence there are two paths of $\overline\Gamma$, which join ${\bf u}$ with
${\bf v}$ and ${\bf w}$ with ${\bf z}$, respectively.
On the other hand, if $j$ is any colour, $j\neq c,d$, then
$\Gamma_{\{c,j\}}({\bf e}) = \Gamma_{\{c,j\}}({\bf f})$ is a single
bicoloured cycle, containing both ${\bf e}$ and ${\bf f}$.
This proves that there is a path of $\overline\Gamma$, which joins ${\bf
u}$ with either ${\bf w}$ or ${\bf z}$.
This completes the proof of (a).
(b) If $i\in \Delta_n \setminus \{c\}$, then by definition of
$\rho_{n}$-pair, $\Gamma_{\{c,i\}}({\bf e}) = \Gamma_{\{c,i\}}({\bf f})$.
This proves that there are two paths of $\overline\Gamma$, the first one
joining ${\bf u}$ with either endpoint of ${\bf f}$, ${\bf w}$ say, and
the second one joining ${\bf z}$ with ${\bf v}$.
This shows that $\overline\Gamma$ (hence also $\Gamma_1$) has at most one
more component than $\Gamma$.
\end{proof}
\bigskip
In the following, we will show that, in
some particular but geometrically relevant cases, it is possible
to choose a ``preferred'' way to switch a pair of equally coloured
edges of $(\Gamma,\gamma)$.
CASE (A):\, \textit{$\Gamma$ bipartite.}
If $R = ({\bf e},{\bf f})$ is any pair of edges, both coloured $c$ (in
particular, if $R$ is a $\rho$-pair) of a bipartite $(n+1)$-coloured
graph $(\Gamma,\gamma)$, then it is easy to see that only one of the two
possible switchings of $R$ is again bipartite.
If, further, $V_0, V_1$ are the two bipartition classes of $\Gamma$ and
we orient ${\bf e},{\bf f}$ from $V_0$ to $V_1$, so that their tails
${\bf x}_0 = {\bf e}(0), {\bf y}_0 = {\bf f}(0)$ belong to $V_0$, and their
heads ${\bf x}_1 = {\bf e}(1), {\bf y}_1 = {\bf f}(1)$ belong to $V_1$, the
(bipartite) switching $(\Gamma',\gamma')$ of $R$ is obtained as follows:
\begin{itemize}
\item[(I)] delete ${\bf e}$ and ${\bf f}$ from $\Gamma$ (thus obtaining
$\overline\Gamma$);
\item[(II)] join ${\bf x}_0$ with ${\bf y}_1$ and ${\bf x}_1$ with ${\bf y}_0$ by two new
edges ${\bf e}', {\bf f}'$, both coloured $c$.
\end{itemize}
CASE (B): \textit{$\Gamma$ non bipartite, with bipartite residues.}
If $\Gamma$ is a non bipartite graph but, for each colour $i$,
$\Gamma_{\hat{\imath}}$ has bipartite components (residues), then we
shall consider two subcases.
Subcase (B$_1$): \textit{$R = ({\bf e},{\bf f})$ is a $\rho_{n-1}$-pair
of $\Gamma$, involving colour $c$ and not involving colour $d$.}
Let $\Xi$ be the residue of $\Gamma_{\hat{d}}$ containing both ${\bf e}$
and ${\bf f}$ (note that ${\bf e}$ and ${\bf f}$ belong to the same
$\hat{\imath}$-residue, because for every colour $i \neq c,d$,
$\Gamma_{\{c,i\}}({\bf e}) = \Gamma_{\{c,i\}}({\bf f})$).
Let $V_0, V_1$ be the two bipartition classes of $\Xi$ (recall that
$\Xi$ is bipartite). As in Case (A), let us orient ${\bf e}$ from
$V_0$ to $V_1$. Now, the switching of $R = ({\bf e},{\bf f})$ is the
$(n+1)$-coloured graph $(\Gamma',\gamma')$, obtained as before (Case
(A)):
\begin{itemize}
\item[(I)] delete ${\bf e}$ and ${\bf f}$ from $\Gamma$;
\item[(II)] join ${\bf x}_0= {\bf e}(0)$ with ${\bf y}_1 = {\bf f}(1)$ and ${\bf x}_1 = {\bf e}(1)$ with ${\bf y}_0 = {\bf f}(0)$
by two
new edges ${\bf e}', {\bf f}'$, both coloured $c$.
\end{itemize}
Subcase (B$_2$): \textit{$R = ({\bf e},{\bf f})$ is a $\rho_{n}$-pair
(involving colour $c$) of $\Gamma$ and $n\geq 3$.}
Let us orient the edge ${\bf e}$ arbitrarily and, as before, let us call
${\bf x}_0 = {\bf e}(0)$ and ${\bf x}_1 = {\bf e}(1)$. Let now $i$ be any
colour different from $c$. The orientation on ${\bf e}$ induces a
coherent orientation on all edges of the cycle $\Gamma_{\{c,i\}}({\bf e})$
and, in particular, on the edge ${\bf f}$ (with the induced
orientation).
Now, we shall prove that the orientation on ${\bf f}$ (and hence its
tail and its head) is independent of the choice of the colour $i$ ($i
\neq c$).
For, let $h$ be any colour of $\Delta_n, h \neq c$, and let ${\bf
y}_0^h, {\bf y}_1^h$ be the tail and the head of the edge ${\bf f}$, with
the orientations induced by the cycle $\Gamma_{\{c,h\}}({\bf e})$ (${\bf e}$
being oriented as before).
Let now $j \in \Delta_n, j \neq i,c$. In order to prove that ${\bf
y}_0^i = {\bf y}_0^j$ (and, as a consequence, ${\bf y}_1^i = {\bf y}_1^j$),
let us consider a further colour $k$, with $k \neq i,j,c$.
Note that such a colour $k$ must exist, since $n \geq 3$ and
therefore $\Delta_n$ contains at least four colours.
Let now $\Xi$ be the $\hat{k}$-residue of $\Gamma$, which contains ${\bf
e}$. $\Xi$ is bipartite and contains both the cycles
$\Gamma_{\{c,i\}}({\bf e})$ and $\Gamma_{\{c,j\}}({\bf e})$. As a consequence,
${\bf y}_0^i = {\bf y}_0^j$. In fact, supposing on the contrary that ${\bf
y}_0^i = {\bf y}_1^j$, we could construct an odd cycle of $\Xi$.
The construction of the switching $(\Gamma',\gamma')$ of the
$\rho_{n}$-pair $R = ({\bf e},{\bf f})$ can now be performed as in the
above cases:
\begin{itemize}
\item[(I)] delete ${\bf e}$ and ${\bf f}$ from $(\Gamma,\gamma)$;
\item[(II)] join ${\bf x}_0$ with ${\bf y}_1^i$ and ${\bf x}_1$ with ${\bf y}_0^i$ by two new
edges ${\bf e}', {\bf f}'$, both coloured $c$.
\end{itemize}
\begin{remark} The above cases include all $\rho$-pairs of gems
representing orientable $n$-manifolds (Case (A)), all
$\rho_{n-1}$-pairs of gems representing non orientable $n$-manifolds
(Case (B$_1$)) and all $\rho_n$-pairs of gems representing non
orientable $n$-manifolds, with $n \geq 3$ (Case (B$_2$)).
\end{remark}
The only remaining case is that of a $\rho_2$-pair of a gem $\Gamma$
representing a non orientable surface, for which the choice of a
standard switching is not always possible.
In fact, for $n = 2$, the procedure described in Case (B$_2$)
doesn't work, as it depends on the choice of the colour $i$.
\footnote{The case $n=2$ is completely analyzed in \cite{B}, also for graphs representing surfaces with non-empty boundary.}
\section{Main results}
The present section is devoted to proving the following Theorems
\ref{switchrn} and \ref{switchrn-1}, which concern the geometrical
meaning of switching $\rho$-pairs in gems of $n$-dimensional
manifolds.
\bigskip
As in Section 2, let $\mathbb H$ be a handle, i.e. either $(\mathbb
S^{n-1}\times \mathbb S^1)$ or $(\mathbb S^{n-1}\tilde \times
\mathbb S^1).$
\begin{theorem}\label{switchrn} Let $(\Gamma,\gamma)$ be a gem of a
(connected) $n$-manifold $M$, $n\geqslant 3$, $R=({\bf e}, {\bf f})$ be
a $\rho_n$-pair in $\Gamma$ and let $(\Gamma', \gamma')$ be the $(n+1)$-coloured
graph, obtained by switching $R$. Then:
\begin{itemize}
\item[(a)] if $(\Gamma', \gamma')$ splits into two connected components,
$(\Gamma'_1, \gamma'_1)$ and $(\Gamma'_2, \gamma'_2)$ say, then they are gems of two
$n$-manifolds $M'_1$ and $M'_2$ respectively, and $M \cong M'_1 \#
M'_2;$
\item[(b)] if $(\Gamma', \gamma')$ is connected, then it is a gem of an
$n$-manifold $M'$ such that $M \cong M' \# \mathbb H.$
\end{itemize}
Moreover, if $(\Gamma, \gamma)$ is a crystallization of $M$, then $(\Gamma',
\gamma')$ must be connected, and only case (b) may occur.
\end{theorem}
In order to prove Theorem \ref{switchrn}, we need some further
constructions and a double sequence of Lemmas, which will be proved
by induction on $n$.
\begin{lemma}\label{1n} -- {\bf step n} \, \, Let $(\Sigma, \sigma)$ be a gem of the $n$-sphere
$\mathbb S^n$, $n\geqslant 2$, $R=({\bf e}, {\bf f})$ be a $\rho_n$-pair
of $\Sigma$ and let $(\Sigma', \sigma')$ be obtained by switching
$R$. Then $\Sigma'$ splits into two connected components, both
representing $\mathbb S^n.$
\end{lemma}
Let now $\Gamma$, $R=({\bf e}, {\bf f})$, $\Gamma'$ be as in the statement of
Theorem \ref{switchrn}. Recall that ($n$ being $\geq 3$) any
orientation of ${\bf e}$ induces a coherent orientation on ${\bf f}$. As
in Section 2, let ${\bf e(0)}, {\bf f(0)}, {\bf e(1)}$ and ${\bf
f(1)}$ be the ends of ${\bf e}$ and ${\bf f}$, so that ${\bf e}$ is
directed from ${\bf e(0)}$ to ${\bf e(1)}$ and ${\bf f}$ is directed from
${\bf f(0)}$ to ${\bf f(1)}.$ Furthermore, after the switching, the new
edges ${\bf e'}, {\bf f'}$ of $\Gamma'$ join ${\bf e(0)}$ with ${\bf f(1)}$ and
${\bf e(1)}$ with ${\bf f(0)}$ respectively. Denote by $\tilde \Gamma$ the
$(n+1)$-coloured graph obtained by adding a blob (i.e. an
$n$-dipole), with vertices ${\bf A}$ and ${\bf B}$, on the edge ${\bf f'}$
of $\Gamma'$ (see Figure 2).
\begin{lemma}\label{2n} -- {\bf step n} \, \, With the above notations, if
$\Gamma$ is a gem of a (connected) $n$-manifold $M$, $n\geq 3$, then:
\begin{itemize}
\item[(i)] $\Gamma'$ (hence also $\tilde \Gamma$) is a gem of a (possibly disconnected)
$n$-manifold $M'$;
\item[(ii)] ${\bf e(0)}$ and ${\bf B}$ are two completely separated vertices of $\tilde \Gamma$;
moreover $\Gamma$ coincides with $\tilde \Gamma \text{fus}({\bf e(0)},{\bf B}).$
\end{itemize}
\end{lemma}
\bigskip
\centerline{\scalebox{0.6}{\includegraphics{figure_2.eps}}}
\bigskip
\centerline{\bf Figure 2}
\bigskip
\begin{proof} First of all, we repeat here the proof of Lemma \ref{1n}, step 2, which is exactly Corollary 13 of \cite{B}.
Let $(\Sigma, \sigma)$ be a $3$-coloured, bipartite graph representing
$\mathbb S^2$. Let $R$ be a $\rho_2$-pair in $\Sigma$ involving
colour $c \in \Delta_2$. Then, by switching $R$ in the only possible
way, we obtain a new graph $(\Sigma', \sigma')$, either connected or
with two connected components. Moreover, if we denote by $d, k$ the
further two colours of $\Delta_2$, then $\Sigma'$ has the same
number of $(d,k)$-coloured cycles ($\hat c$-residues) and one more
$(c,h)$-coloured cycle ($\hat h$-residue), for $h = d,k.$
Hence $\chi(\Sigma') = \chi(\Sigma) + 2 = 4$. This implies that $\Sigma'$ must have two connected components, both representing $\mathbb S^2$.
Now, assuming Lemma \ref{1n}, step $n-1$, we prove Lemma
\ref{2n}, step $n$.
To this end, let us suppose $\Gamma$ to be a gem of the $n$-manifold $M$. As a
consequence, for each colour $i \in \Delta_n$, all $\hat{\imath}$-residues are
gems of $\mathbb S^{n-1}$. Now, suppose $R=({\bf e}, {\bf f})$ to be a
$\rho_n$-pair of $\Gamma$, involving colour $c$, whose switching produces
the graph $\Gamma'$.
Of course, the switching of $R$ has no effect on the $\hat
c$-residues of $\Gamma$. Hence, each $\hat c$-residue of $\Gamma'$ is
colour-isomorphic to the corresponding one of $\Gamma$, and therefore
represents $\mathbb S^{n-1}$. Let now $i$ be any colour different
from $c$ and let $\Xi$ be the $\hat{\imath}$-residue containing $R$. Of
course, $R$ is a $\rho_{n-1}$-pair of $\Xi$ (where $\Xi$ is a gem of
$\mathbb S^{n-1}$). Hence, by Lemma \ref{1n}, step $n-1$, the
switching of $R$ splits $\Xi$ into two new $\hat{\imath}$-residues of $\Gamma'$,
both representing $\mathbb S^{n-1}$.
Since all $\hat{\imath}$-residues of $\Gamma$, different from $\Xi$, are left
unaltered by the switching of $R$, $\Gamma'$ is again a gem of an
$n$-manifold $M'$ (with either one or two connected components). Let
now $\tilde \Gamma$ be obtained from $\Gamma'$ by adding a blob (i.e. an
$n$-dipole $\Theta =({\bf A}, {\bf B})$) on the edge ${\bf f'}$, with
endpoints ${\bf e(0)}, {\bf f(1)}$. Of course, $\tilde \Gamma$ is again a
gem of $M'$ and, as is easy to check, $\tilde \Gamma \text{fus}({\bf
e(0)}, {\bf B})$ is colour-isomorphic to $\Gamma$ (where the vertex ${\bf
A}$ plays the role of ${\bf e(0)}$).
Now, assuming Lemma \ref{2n}, step $n$, we prove Lemma \ref{1n},
step $n$. Let $\Sigma$, $R=({\bf e},{\bf f})$, $\Sigma'$ be as in the
statement of Lemma \ref{1n}. Let further $\tilde \Sigma$ be obtained
by adding a blob $\Theta =({\bf A}, {\bf B})$ on the edge ${\bf f'}$ of
$\Sigma'$. Hence, by Lemma \ref{2n}, step $n$, $\Sigma'$ and $\tilde
\Sigma$ are both gems of an $n$-manifold $M'$; moreover ${\bf e}(0)$
and ${\bf B}$ are completely separated vertices of $\tilde \Sigma$,
and $\Sigma$ is isomorphic to $\tilde \Sigma \text{fus}({\bf e(0)},
{\bf B})$. If $\Sigma'$ (hence also $\tilde \Sigma$) is connected,
then, by Lemma \ref{1}, the manifold represented by $\Sigma$ must
have a handle $\mathbb H$ as a direct summand; but this is
impossible, since $\Sigma$ represents $\mathbb S^n$ by hypothesis.
Hence $\Sigma'$ (and $\tilde \Sigma$) must split into two components,
say $\Sigma'_1$ and $\Sigma'_2$, representing two connected
$n$-manifolds $M'_1, M'_2$ respectively, so that $\mathbb S^n \cong
M'_1 \# M'_2$. But this implies that $\Sigma'_1$ and $\Sigma'_2$ are
both gems of $\mathbb S^n$, too.
This concludes the proof of Lemmas \ref{1n} and \ref{2n}.
\end{proof}
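The Euler-characteristic count in the surface case at the beginning of the proof ($\chi(\Sigma') = \chi(\Sigma) + 2$, forcing the split) can be checked mechanically on examples. The following sketch is our own illustration, not part of the paper's apparatus: it encodes a $3$-coloured graph as one fixed-point-free involution per colour, counts its bicoloured cycles, and evaluates $\chi = F - E + V$.

```python
from itertools import combinations

def bicoloured_cycles(inv, c, j):
    """Number of {c, j}-coloured cycles in the graph whose colour-i
    edges are given by the fixed-point-free involution inv[i]."""
    todo, count = set(inv[c]), 0
    while todo:
        start = todo.pop()
        x, col = start, c
        while True:
            x = inv[col][x]
            col = j if col == c else c
            if x == start and col == c:
                break
            todo.discard(x)
        count += 1
    return count

def euler_characteristic(inv):
    """chi = F - E + V for a 3-coloured graph: V vertices, E = 3V/2
    edges, F = total number of bicoloured cycles."""
    V = len(inv[0])
    E = 3 * V // 2
    F = sum(bicoloured_cycles(inv, c, j)
            for c, j in combinations(sorted(inv), 2))
    return F - E + V

# the order-two crystallization of S^2: three digons, chi = 3 - 3 + 2
inv = {c: {0: 1, 1: 0} for c in range(3)}
assert euler_characteristic(inv) == 2
```

Running the same function before and after switching a $\rho_2$-pair exhibits the jump $\chi \mapsto \chi + 2$ used in the proof.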
\begin{proof} (of Theorem \ref{switchrn})
The proof of Theorem \ref{switchrn}, (a) and (b), is now a direct
consequence of Lemma \ref{2n}, step $n$, and Lemma \ref{1}.
If, further, $\Gamma_{\hat c}$ is connected, $c$ being the colour
involved in $R$ (in particular, if $\Gamma$ is a crystallization of
$M$), then $\Gamma'$ must be connected, too, and therefore $M \cong M'\#
\mathbb H$.
\end{proof}
\bigskip
\centerline{\scalebox{0.8}{\includegraphics{figure_2ab.eps}}}
\bigskip
\, \, \, \, \, {\bf Figure 2a \, \, \, \, \, \, \, \, \, \, \, \, \,
\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, Figure 2b}
\bigskip
As a consequence of Theorem \ref{switchrn} and of Corollary 13 of
\cite{B}, we have the following
\begin{corollary}\label{Sn} If $(\Sigma, \sigma)$ is a
crystallization of the $n$-sphere $\mathbb S^n$, $n \geq 2$, then it cannot
contain any $\rho_n$-pair.
\end{corollary}
\begin{theorem}\label{switchrn-1} Let $(\Gamma, \gamma)$ be a gem of a (connected) $n$-manifold $M$,
let $R=({\bf e}, {\bf f})$ be a $\rho_{n-1}$-pair of $\Gamma$ and let $(\Gamma',
\gamma')$ be obtained by switching $R$. Then $\Gamma'$ is a gem of the same
manifold $M$.
\end{theorem}
\begin{proof} W.l.o.g., let us suppose that $c=0$ is the colour involved in $R$ and that $d=n$ is the one not involved.
By Theorem 2, $\Gamma'$ has the same number of connected components as
$\Gamma$ and, after the switching, it is bipartite (resp. non-bipartite)
iff $\Gamma$ is.
The switching of $R$ can be thought of as the replacement of the
neighborhood of $R=({\bf e}, {\bf f})$ by the neighborhood of $R'=({\bf
e'}, {\bf f'})$ (see Figure 1a). Consider now the graph $\tilde \Gamma$,
obtained by replacing the neighborhood of $R$ in $\Gamma$ (Figure 3a)
by the graph of Figure 3b, where $\Theta_1$ (resp. $\Theta_2$) is formed by two vertices
${\bf A'}, {\bf e(0)}$ (resp. ${\bf B'}, {\bf f(1)}$) joined by $n-1$ edges
coloured $1,\ldots,n-1.$
\bigskip
\centerline{\scalebox{0.5}{\includegraphics{figure_3a.eps}}}
\bigskip
\centerline{\bf Figure 3a}
\bigskip
\centerline{\scalebox{0.5}{\includegraphics{figure_3b.eps}}}
\bigskip
\centerline{\bf Figure 3b}
\bigskip
We will describe two sequences of dipole moves, joining $\tilde \Gamma$
with $\Gamma$ and $\Gamma'$ respectively, thus proving that $\Gamma$ and $\Gamma'$ are
gems of PL-homeomorphic manifolds.
The first sequence starts by considering $\delta_1=({\bf A}, {\bf A'})$,
which is a $1$-dipole. In fact, $\tilde \Gamma_{\hat n}({\bf A'}) =
\delta_1$, whose further end is ${\bf e(1)}$; hence the ${\hat
n}$-residue $\tilde \Gamma_{\hat n}({\bf A})$ is different from
$\delta_1.$ Deleting the $1$-dipole $\delta_1$ from $\tilde
\Gamma$ yields a $2$-dipole $\delta_2$ with ends ${\bf B}, {\bf B'}$; in
fact, $\tilde \Gamma_{\hat n}({\bf B'})$ is a multiple edge whose further
end is ${\bf f(1)}$ and which differs from the ${\hat n}$-residue
$\tilde \Gamma_{\hat n}({\bf B})$. By cancelling $\delta_2$, too, we
obtain $\Gamma$ (Figures 3c and 3d).
For the second sequence, note that $\Theta_1$ and $\Theta_2$ are $(n-1)$-dipoles, since the
$(0,n)$-residue containing ${\bf A'}, {\bf B'}$ is a quadrilateral cycle
whose vertices are ${\bf A}, {\bf B}, {\bf A'}, {\bf B'}$ only. By deleting
them from $\tilde \Gamma$ (Figures 3e and 3f), we obtain $\Gamma'$.
\end{proof}
\bigskip
\bigskip
\centerline{\scalebox{0.8}{\includegraphics{figure_3cd1.eps}}}
\bigskip
\, \, \, \, \, \, {\bf Figure 3c \, \, \, \, \, \, \, \, \, \, \, \,
\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, Figure 3d}
\bigskip
\bigskip
\centerline{\scalebox{0.8}{\includegraphics{figure_3cd2.eps}}}
\bigskip
\, \, \, \, \, \, {\bf Figure 3e \, \, \, \, \, \, \, \, \, \, \, \,
\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, Figure 3f}
\bigskip
\section{Rigid gems}
\bigskip
\begin{definition} An $(n+1)$-coloured graph $(\Gamma,\gamma)$, $n \geq 3$, is
called \textit{rigid} iff it has no $\rho$-pairs.
\end{definition}
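Rigidity can be tested algorithmically. The sketch below is a hypothetical illustration of ours (the encoding by colour-indexed involutions and the name \texttt{rho\_value} are not from the paper): for two $c$-coloured edges $e$ and $f$ it counts the number $h$ of colours $j \neq c$ such that $e$ and $f$ lie on a common $\{c,j\}$-coloured cycle, so that $(e,f)$ is a $\rho_h$-pair; the graph is then rigid iff no pair of equally coloured edges realizes $h \in \{n-1, n\}$.

```python
def rho_value(inv, c, e, f, colours):
    """Count the colours j != c for which the two c-coloured edges
    e and f lie on the same {c, j}-bicoloured cycle.  The graph is
    given by a fixed-point-free involution inv[i] for each colour i."""
    count = 0
    for j in colours:
        if j == c:
            continue
        # walk the {c, j}-cycle through e, collecting its c-coloured edges
        u = min(e)
        cycle_edges = set()
        x, col = u, c
        while True:
            y = inv[col][x]
            if col == c:
                cycle_edges.add(frozenset((x, y)))
            x, col = y, (j if col == c else c)
            if x == u and col == c:
                break
        if frozenset(f) in cycle_edges:
            count += 1
    return count

# quadrilateral 3-coloured graph (n = 2): the two 0-coloured edges
# lie on a common {0,1}-cycle and a common {0,2}-cycle
inv = {0: {0: 1, 1: 0, 2: 3, 3: 2},
       1: {0: 3, 3: 0, 1: 2, 2: 1},
       2: {0: 2, 2: 0, 1: 3, 3: 1}}
assert rho_value(inv, 0, (0, 1), (2, 3), colours=[0, 1, 2]) == 2
```

Checking rigidity amounts to running \texttt{rho\_value} over all pairs of equally coloured edges.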
\begin{theorem}\label{rigid} The $(n+1)$-coloured graph $(\Gamma,\gamma)$, $n \geq 3$, is
rigid iff for each $i\in \Delta_n$, the graph $\Gamma_{\hat{\imath}}$
has no $\rho_{n-1}$-pairs.\footnote{Note that, for $n=2$, the
concept of rigidity has no interest at all. In fact, if $\Gamma$ is
a $3$-coloured graph representing a closed surface, then it contains
$\rho$-pairs: $\rho_2$-pairs, if $\Gamma$ is a crystallization,
and either $\rho_1$-pairs or $\rho_2$-pairs, otherwise. Hence, given any
closed surface $M^2$, there cannot exist any rigid crystallization of
$M^2$.}
\end{theorem}
\begin{proof} Suppose that $(\Gamma,\gamma)$ is rigid and that there is
a colour $i \in \Delta_n$ such that $\Gamma_{\hat{\imath}}$ has a
$\rho_{n-1}$-pair $R=({\bf e},{\bf f})$ of colour $c \in \Delta_n -
\{i\}$. Then $R$ is a $\rho$-pair in $\Gamma$ too, and $\Gamma$ cannot be
rigid.
Conversely, if for each $i \in \Delta_n$, $\Gamma_{\hat{\imath}}$
contains no $\rho_{n-1}$-pairs, but $(\Gamma,\gamma)$ is not rigid, then $\Gamma$
contains at least one $\rho$-pair $R=({\bf e},{\bf f})$, of colour $c$ say.
If $R$ is a $\rho_{n}$-pair, then $R$ is a $\rho_{n-1}$-pair in
$\Gamma_{\hat{\imath}}$, for each $i \in \Delta_n \setminus \{c\}$.
If $R$ is a $\rho_{n-1}$-pair and $d$ is the non-involved colour,
then $R$ is a $\rho_{n-1}$-pair in $\Gamma_{\hat{d}}$.
In both cases the hypothesis is contradicted.
\end{proof}
\begin{theorem}\label{rigcrist} Every closed, connected, handle-free
$n$-manifold $M^n$, $n \geq 3$, admits a rigid crystallization.
Moreover, if $(\Gamma,\gamma)$ is a crystallization of a closed, connected,
handle-free $n$-manifold $M^n$ of order $p$, then there exists a rigid
crystallization of $M^n$ of order $\le p$.
\end{theorem}
\begin{proof} Starting from any gem of $M^n$, by cancelling a suitable number of $1$-dipoles
we can always obtain a crystallization of $M^n$ (see \cite
{FGG}). Suppose now that $\Gamma$ is a crystallization of $M^n$; if $\Gamma$
is rigid, then it is the required crystallization.
If $\Gamma$ has some $\rho_{n-1}$-pair $R=({\bf e}, {\bf f})$ of colour $c
\in \Delta_n$, not involving the colour $d \in \Delta_n \setminus
\{c\}$, then consider the connected component $\Xi$ of $\Gamma_{\hat d}$
containing both ${\bf e}$ and ${\bf f}$. Since $M^n$ is a manifold,
$\Xi$ represents $\mathbb S^{n-1}$ and $R$ is a $\rho_{n-1}$-pair
in $\Gamma_{\hat d}$, again. By Lemma \ref{1n}, switching $R$ in
$\Gamma_{\hat d}$ yields two connected components, both representing
$\mathbb S^{n-1};$ since $\Gamma_{\hat d}$ is connected (Theorem
\ref{n.c.c.}), there is at least one $1$-dipole in $\Gamma_{\hat d}$,
whose cancellation reduces the vertex-number.
If $\Gamma$ has some $\rho_{n}$-pair $R=({\bf e}, {\bf f})$ of colour $c
\in \Delta_n$, then, for each colour $i \in \Delta_n \setminus
\{c\}$, the connected component of $\Gamma_{\hat{\imath}}$ containing ${\bf e}$ and
${\bf f}$ represents $\mathbb S^{n-1}$ and $R$ is a $\rho_{n-1}$-pair
in $\Gamma_{\hat{\imath}}$. As before, switching $R$ in $\Gamma_{\hat{\imath}}$ yields
two connected components, both representing $\mathbb S^{n-1};$ since
$\Gamma_{\hat{\imath}}$ is connected (Theorem \ref{n.c.c.}), there is at
least one $1$-dipole in $\Gamma_{\hat{\imath}}$, whose cancellation reduces the
vertex-number, for each $i \in \Delta_n \setminus \{c\}$.
\end{proof}
Note that the minimal crystallizations of $\mathbb S^{n-1} \times \mathbb S^1$ and
$\mathbb S^{n-1} \tilde \times \mathbb S^1$ are not rigid (see, e.g., \cite{GV}).
Hence the second statement of Theorem \ref{rigcrist} fails for handles.
In dimension $3$, there exist rigid crystallizations for $\mathbb
S^{2} \times \mathbb S^1$ and $\mathbb S^{2} \tilde \times \mathbb
S^1$. The minimal ones have order $20$ for $\mathbb S^{2} \times
\mathbb S^1$ and order $14$ for $\mathbb S^{2} \tilde \times \mathbb
S^1$.
For $n>3$, it is easy to construct a rigid crystallization of
$\mathbb S^{n-1} \times \mathbb S^1$, if $n$ is even, and of
$\mathbb S^{n-1} \tilde \times \mathbb S^1$, if $n$ is odd, both of order $2(2^n-1).$
The remaining cases are still open.
\begin{thebibliography}{0}
\bibitem{B}
P. Bandieri, {\it $\rho$-pairs in graphs representing surfaces} (to
appear).
\bibitem{BCaG}
P. Bandieri-M.R. Casali-C. Gagliardi, {\it Representing manifolds by
crystallization theory: foundations, improvements and related
results}, Atti Sem. Mat. Fis. Univ. Modena, suppl. {\bf 49} (2001),
283--337.
\bibitem{BCrG_1} P. Bandieri-P. Cristofori-C. Gagliardi, {\it Nonorientable
3-manifolds admitting coloured triangulations with at most 30
tetrahedra}, J. Knot Theory Ramifications {\bf 18}(3) (2009),
381--395.
\bibitem{BCrG_2} P. Bandieri-P. Cristofori-C. Gagliardi, {\it A census of genus-two 3-manifolds up to
42 coloured tetrahedra}, Discrete Mathematics {\bf 310} (2010),
2469--2481.
\bibitem{CC} M.R. Casali-P. Cristofori, {\it A catalogue of orientable 3-manifolds
triangulated by $30$ coloured tetrahedra}, J. Knot Theory
Ramifications {\bf 17}(5) (2008), 1--23.
\bibitem{FG} M. Ferri-C. Gagliardi, {\it Crystallization
moves}, Pacific J. Math. {\bf 100} (1982), 85--103.
\bibitem{FGG}
M. Ferri-C. Gagliardi-L. Grasselli, {\it A graph-theoretical
representation of PL-manifolds -- A survey on crystallizations}, Aeq.
Math. {\bf 31} (1986), 121--141.
\bibitem{Gl}
L. C. Glaser, {\it Geometrical combinatorial topology}, Van Nostrand
Reinhold Math. Studies, New York, 1987.
\bibitem{GT}
J.L. Gross-T.W. Tucker, {\it Topological graph theory}, John Wiley
\& Sons, New York, 1987.
\bibitem{GV}
C. Gagliardi-G. Volzone, {\it Handles in graphs and sphere bundles
over $\mathbb S^1$}, Europ. J. Combinatorics {\bf 8} (1987),
151--158.
\bibitem{HW}
P. J. Hilton-S. Wylie, {\it An introduction to algebraic
topology -- homology theory}, Cambridge Univ. Press, Cambridge, 1960.
\bibitem{L}
S. Lins, {\it Gems, computers and attractors for $3$-manifolds},
Knots and Everything {\bf 5}, 1995.
\bibitem{LM}
S. Lins-M. Mulazzani, {\it Blobs and flips on gems}, J. Knot
Theory Ramifications {\bf 15} (2006), no. 8, 1001--1035.
\bibitem{M}
S.V. Matveev, {\it Algorithmic Topology and Classification of
3-Manifolds}, Springer-Verlag, Berlin and Heidelberg, 2010.
\bibitem{P$_1$}
M. Pezzana, {\it Sulla struttura topologica delle variet\`a
compatte}, Atti Sem. Mat. Fis. Univ. Modena {\bf 23} (1974),
269--277.
\bibitem{P$_2$}
M. Pezzana, {\it Diagrammi di Heegaard e triangolazione contratta},
Boll. Un. Mat. Ital. {\bf 12 (4)} (1975), 98--105.
\bibitem{RS}
C. Rourke-B. Sanderson, {\it Introduction to piecewise-linear
topology}, Springer-Verlag, New York-Heidelberg, 1972.
\bibitem{W}
A. T. White, {\it Graphs, groups and surfaces}, North Holland,
revised edition, 1984.
\end{thebibliography}
\bigskip
\bigskip
\par \noindent Dipartimento di Matematica Pura ed Applicata,
\par \noindent Universit\`a di Modena e Reggio Emilia,
\par \noindent Via Campi 213 B,
\par \noindent I-41100 MODENA (Italy)
\par \noindent {\it e-mail: \/} [email protected] \, \, \, \, [email protected]
\end{document}
\begin{document}
\title{On the interplay between effective notions of randomness and genericity}
\begin{abstract}
In this paper, we study the power and limitations of computing effectively generic sequences using effectively random oracles. Previously, it was known that every 2-random sequence computes a 1-generic sequence (as shown by Kautz) and every 2-random sequence forms a minimal pair in the Turing degrees with every 2-generic sequence (as shown by Nies, Stephan, and Terwijn). We strengthen these results by showing that every Demuth random sequence computes a 1-generic sequence (which answers an open question posed by Barmpalias, Day, and Lewis) and that every Demuth random sequence forms a minimal pair with every pb-generic sequence (where pb-genericity is an effective notion of genericity that is strictly between 1-genericity and 2-genericity). Moreover, we prove that for every comeager $\mathcal{G}\subseteq 2^\omega$, there is some weakly 2-random sequence $X$ that computes some $Y\in\mathcal{G}$, a result that allows us to provide a fairly complete classification as to how various notions of effective randomness interact in the Turing degrees with various notions of effective genericity.
\end{abstract}
\section{Introduction}
Randomness and genericity play an important role in computability theory in that they both, in their own way, define what it means for an infinite binary sequence~$X$ to be typical among all infinite binary sequences. For any reasonable way of defining randomness and genericity, these two notions are orthogonal, i.e., a random sequence cannot be generic and a generic sequence cannot be random. Moreover, for sufficient levels of randomness and genericity, this orthogonality goes even further: Nies, Stephan and Terwijn~\cite{NiesST2005} showed that a 2-random sequence and a 2-generic sequence always form a minimal pair in the Turing degrees.
At lower levels of randomness and genericity, however, the situation is more nuanced. For example, by the Ku\v cera-G\'acs theorem, any sequence is computed by some $1$-random sequence, thus, in particular, any $n$-generic sequence (for any~$n$) is computed by some $1$-random sequence. Another striking result, due to Kautz~\cite{Kautz1991} (building on the work of Kurtz~\cite{Kurtz1981}), is that \emph{every} 2-random sequence \emph{must} compute some $1$-generic sequence. It therefore makes sense to ask how random sequences and generic sequences interact for levels of randomness between $1$-randomness and $2$-randomness, and for levels of genericity between $1$-genericity and $2$-genericity. This is precisely the purpose of this paper. As we will see, the notion of randomness that has the most interesting interactions with genericity turns out to be Demuth randomness. In particular, we answer positively a question of Barmpalias, Day, and Lewis~\cite{BarmpaliasDL2013}, who asked whether every Demuth random sequence computes a $1$-generic sequence.
\section{Notation and background}
While we assume the reader is familiar with computability theory, let us briefly recall some basic definitions. We work in the Cantor space $2^\omega$, that is, the space of infinite binary sequences endowed with the product topology, i.e., the topology generated by the cylinders~$[\sigma]$, where $\sigma$ is a finite binary sequence (also referred to as a \emph{string}) and $[\sigma]$ is the subset of $2^\omega$ consisting of the $X$'s that have $\sigma$ as a prefix. We denote by $2^{<\omega}$ the set of all binary strings and by $\Lambda$ the empty string. For a string $\sigma$, $|\sigma|$ is the length of $\sigma$ and, if $n \leq |\sigma|$, $\sigma {\upharpoonright} n$ is the prefix of $\sigma$ of length~$n$. We also use $X {\upharpoonright} n$ when $X \in 2^\omega$ to denote the prefix of $X$ of length~$n$. The prefix relation is denoted by $\preceq$. An open subset of $2^\omega$ is a set of the form $\bigcup_{\sigma \in S} [\sigma]$ for some countable set~$S$ of strings; when $S$ is computably enumerable, we say that the open set is \emph{effectively open}.
Let us briefly recall the definitions of genericity and of weak genericity. For $n \geq 1$, we say that $X$ is \emph{weakly $n$-generic} if $X$ belongs to every dense open set $\mathcal{U}$ that is effectively open relative to $\emptyset^{(n-1)}$. We say that $X$ is \emph{$n$-generic} if for every open set $\mathcal{U}$ that is effectively open relative to $\emptyset^{(n-1)}$, $X$ belongs to $\mathcal{U} \cup \overline{\mathcal{U}}^c$ (where $\overline{\mathcal{U}}^c$ is the complement of the closure of $\mathcal{U}$). Equivalently, $X$ is weakly $n$-generic if for every dense set $S$ of strings\footnote{Here ``dense'' should be understood relative to the prefix order: a set $S$ of strings is dense if every string $\sigma\in2^{<\omega}$ has an extension in~$S$.} that is c.e.\ relative to $\emptyset^{(n-1)}$, some prefix of $X$ is in~$S$, and $X$ is $n$-generic if for every set $S$ of strings that is c.e.\ relative to $\emptyset^{(n-1)}$, either some prefix of $X$ is in~$S$ or some prefix of~$X$ has no extension in~$S$. For every~$n$, the following relations hold:
\begin{center}
weak $(n+1)$-genericity $\Rightarrow$ $n$-genericity $\Rightarrow$ weak $n$-genericity
\end{center}
The Lebesgue, or uniform, measure $\mu$ on $2^\omega$ is the measure corresponding to the random process where each bit has value $0$ with probability $1/2$ independently of all other bits. Equivalently, $\mu$ is the unique measure on~$2^\omega$ such that $\mu([\sigma])= 2^{-|\sigma|}$ for all~$\sigma \in 2^{<\omega}$. For any measurable subset $\mathcal{A}$ of $2^\omega$ and $\sigma \in 2^{<\omega}$, we write $\mu(\mathcal{A} \mid \sigma)$ for $\mu(\mathcal{A} \cap [\sigma])/\mu([\sigma])$. When we talk about `randomness' or `random' objects, we implicitly mean `with respect to Lebesgue measure'.
A \emph{Martin-L\"of test} is a sequence $(\mathcal{U}_n)_{n \in \omega}$ of uniformly effectively open sets such that $\mu(\mathcal{U}_n) \leq 2^{-n}$ for all~$n$. An $X \in 2^\omega$ is \emph{Martin-L\"of random} if for every Martin-L\"of test, $X \notin \bigcap_n \mathcal{U}_n$.
The other notions of randomness we will encounter in this paper will be recalled as we proceed.
\section{Demuth randomness vs. effective genericity}
In this section, we shall see how Demuth randomness interacts with various notions of effective genericity. Our main result is that every Demuth random sequence computes a $1$-generic sequence. The proof has two components. First, we shall review the classical proof that a sufficiently random sequence computes a $1$-generic sequence, and show that the failure set (the set of $X$'s that fail to compute a $1$-generic) can be covered by a specific type of randomness test. We will then show that tests of that type in fact characterize Demuth randomness and obtain the desired result.
\subsection{Fireworks arguments}
Kautz's proof that almost every~$X\in2^\omega$ computes a $1$-generic sequence is framed in a way that is difficult to precisely analyze in terms of algorithmic randomness (i.e., to determine how random~$X$ needs to be for the argument to work). A more intuitive proof can be given using a \emph{fireworks argument} (which takes its name from the presentation by Rumyantsev and Shen~\cite{RumyantsevS2014} with an analogy about purchasing fireworks from a purportedly corrupt fireworks salesman), an approach that is more suitable for our purposes. The mechanics of fireworks arguments are already thoroughly explained in~\cite{RumyantsevS2014} and \cite{BienvenuP2017}, but we shall review them for the sake of completeness.
For now, we will set aside notions of algorithmic randomness and state the result we want to prove as follows: ``For every $k$, we can, uniformly in~$k$, design a probabilistic algorithm that produces a $1$-generic sequence with probability $\geq 1-2^{-k}$."
Let us thus fix~$k\in\omega$. Let $(W_e)_{e\in\omega}$ be an effective enumeration of all c.e.\ sets of strings. We must satisfy for each $e\in\omega$ the following requirement:
\begin{itemize}
\item[] $\mathcal{R}_e$: There is some $\sigma\mathsf{Pre}c \mathcal{P}hi_k^X$ such that either $\sigma\in W_e$ or for all $\tau\succeq\sigma,\ \tau\notin W_e$.
\end{itemize}
The strategy to satisfy requirement $\mathcal{R}_e$ is as follows. When the strategy receives attention for the first time, the first thing it does is pick an integer $n(e,k)$ at random between $1$ and $N(e,k)$ (all integers in this interval being assigned the same probability), where $N(\cdot,\cdot)$ is a fixed computable function to be specified later. The strategy then makes a \emph{passive guess} that there is no extension of the current prefix $\sigma$ of~$X$ in $W_e$, and moves on to building~$X$ by taking care of strategies for other requirements. If this guess is correct, the requirement $\mathcal{R}_e$ is satisfied and the strategy will have succeeded without having to do anything (this is why we call our guess ``passive", as it requires no action from the strategy). Of course this passive guess may also be incorrect; that is, there may be some extension of $\sigma$ in~$W_e$. If that is the case, this will become apparent at some point (because $W_e$ is c.e.), but by the time it does, a longer prefix $\sigma'$ of~$X$ may have been built that does not have an extension in~$W_e$. The strategy then makes a second passive guess that $\sigma'$ has no extension in~$W_e$, and again moves on to other strategies until this second guess is proven wrong. Again it will make a new passive guess, and so on.
Now one needs to avoid the undesirable case where the strategy makes infinitely many wrong passive guesses during the construction, for otherwise it might fail to satisfy~$\mathcal{R}_e$. To avoid this situation, we use $n(e,k)$ as a cap on the number of passive guesses that the strategy is allowed to make. Once the strategy has made $n(e,k)$ passive guesses and each time has realized that the guess was wrong, it will then make an \emph{active guess}; that is, it will guess that there \emph{is} an extension~$\tau$ of the current prefix $\sigma$ of~$X$ in $W_e$ and will wait for such an extension to be enumerated into $W_e$. While waiting, the actions of all the other strategies are put on hold. If indeed an extension $\tau$ of $\sigma$ is enumerated into $W_e$, take~$\tau$ as the new prefix of~$X$, declare $\mathcal{R}_e$ to be satisfied and terminate (allowing the other active strategies to resume).
There are three possible outcomes for our strategy:
\begin{itemize}
\item[(i)] After some wrong passive guesses, the strategy eventually makes a correct passive guess.
\item[(ii)] After making $n(e,k)$ wrong passive guesses, the strategy makes a correct active guess.
\item[(iii)] After making $n(e,k)$ wrong passive guesses, the strategy makes a wrong active guess.
\end{itemize}
As we have seen, the first two outcomes ensure the satisfaction of the requirement~$\mathcal{R}_e$. The third outcome is the bad case: if the strategy makes a wrong active guess, it will wait in vain for an extension of the current prefix of~$X$ to appear in $W_e$, and all the actions of other strategies are stopped during this waiting period, so the algorithm fails to produce an infinite binary sequence~$X$. We claim that the probability that the strategy for $\mathcal{R}_e$ has outcome (iii) is at most $\frac{1}{N(e,k)}$. Indeed, assuming all values $n(e',k)$ for $e' \not= e$ have been chosen, there is at most one value $n(e,k)$ can take that would cause the strategy to have outcome (iii). To see why this is the case, we make the simple observation that when a strategy makes a wrong passive guess, if it had instead made an active guess, this active guess would have been correct, and vice versa. Thus if the strategy ends up in case (iii) after making $m$ wrong passive guesses and one last active guess, then any cap $m'<m$ on the number of passive guesses would have given outcome (ii) instead, and any cap $m'>m$ would have given outcome (i), as the $(m+1)$-st passive guess would have been correct.
This shows that, conditional on \emph{any fixed choice} of the $n(e',k)$ for $e' \not= e$, the probability for the $\mathcal{R}_e$-strategy to have outcome (iii) is at most $\frac{1}{N(e,k)}$, since the value of $n(e,k)$ is chosen randomly between $1$ and $N(e,k)$, independently of the $n(e',k)$ for $e' \not= e$. Thus the unconditional probability for the $\mathcal{R}_e$-strategy to have outcome (iii) is at most $\frac{1}{N(e,k)}$. (Technically, we cannot really talk about conditional probability since any fixed choice of $n(e',k)$ for $e' \not= e$ is a probability-$0$ event; what we are actually appealing to is Fubini's theorem, which says that for $f(\cdot,\cdot)$ a measurable function defined on the product of two probability spaces, $\int_x \int_y f(x,y) \, dy \, dx = \int_y \int_x f(x,y) \, dx \, dy$. In particular, if for every fixed $y$ we have $\int_x f(x,y) \, dx \leq \varepsilon$, then $\int_{(x,y)} f(x,y) \, dx \, dy \leq \varepsilon$; in our case $x$ is $n(e,k)$, $y$ is the sequence of $n(e',k)$ for $e' \not= e$, and $f$ is the characteristic function of the failure set of the algorithm.)
Over all strategies, the probability of failure of our algorithm is therefore bounded by $\sum_e \frac{1}{N(e,k)}$. So if we take, for example, $N(e,k) = 2^{e+k+1}$, the probability of failure is at most $2^{-k}$, as desired.
\subsection{How much randomness is needed for fireworks arguments?}
The probabilistic algorithm with parameter~$k$ presented above can be interpreted as a Turing functional $\Psi_k$ which has access to an oracle~$X\in2^\omega$ chosen at random (with respect to the uniform measure). Since the only probabilistic part of the above algorithm is the choice of the numbers $n(e,k)$, we can assume that $\Psi_k$ splits its oracle~$X$ into blocks of bits of length $l(e,k)$, $e \in \omega$, where $2^{l(e,k)}=N(e,k)$ (for this we need $N(e,k)$ to be a power of two, which we can assume without loss of generality), and interprets the block of bits of~$X$ of size $l(e,k)$ as the integer $n(e,k,X)$.
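The block-splitting step can be sketched as follows (our own Python rendering; the block lengths $l(e,k) = e+k+1$ correspond to the sample choice $N(e,k) = 2^{e+k+1}$ made earlier, and the integer read off a block ranges over $0,\dots,N(e,k)-1$, so one adds $1$ to land in the interval from $1$ to $N(e,k)$):

```python
def block_values(bits, k, num_blocks):
    """Split an initial segment of the oracle X into consecutive
    blocks of length l(e, k) = e + k + 1 (so that 2**l(e, k) equals
    N(e, k) = 2**(e+k+1)) and read block e as the integer n(e, k, X)."""
    values, pos = [], 0
    for e in range(num_blocks):
        length = e + k + 1
        block = bits[pos:pos + length]
        values.append(int("".join(map(str, block)), 2))
        pos += length
    return values

# with k = 1: blocks "10" and "110" give n(0,1,X) = 2, n(1,1,X) = 6
assert block_values([1, 0, 1, 1, 0], 1, 2) == [2, 6]
```

This makes the dependence of each $n(e,k,X)$ on a disjoint block of oracle bits explicit, which is what the measure-theoretic analysis of the failure sets relies on.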
Let $\mathcal{F}_{e,k}$ be the set of $X$'s which cause $\Psi_k$ to fail because of the $\mathcal{R}_e$-strategy having outcome (iii). We already know that $\mathcal{F}_{e,k}$ has measure at most $\frac{1}{N(e,k)}$. Analyzing the above algorithm, we see that $\mathcal{F}_{e,k}$ is the difference of two effectively open sets: the effectively open set $\mathcal{U}_{e,k}$ of $X$'s that cause the $\mathcal{R}_e$-strategy of $\Psi_k$ to make an active guess, which is a $\Sigma_1$-event, minus the effectively open set $\mathcal{V}_{e,k}$ of $X$'s that cause the $\mathcal{R}_e$-strategy of $\Psi_k$ to make an active guess \emph{and} whose active guess turns out to be correct, which is also a $\Sigma_1$-event.
If an~$X$ belongs to only finitely many of the sets $\mathcal{F}_{e,k}=\mathcal{U}_{e,k} \setminus \mathcal{V}_{e,k}$, then for $k$ large enough it does not belong to $\mathcal{F}_{e,k}$ for any~$e$, meaning that for sufficiently large $k$, $\Psi_k^X$ succeeds in producing a $1$-generic. Recall that $\mathcal{F}_{e,k}$ has measure at most $\frac{1}{N(e,k)}$, and the function~$N(\cdot,\cdot)$ can be chosen as large as we need it to be. We can take $N(e,k)=2^{\tuple{e,k}}$ and combine the $\mathcal{F}_{e,k}$ into a single sequence of sets by taking $\mathcal{F}'_i = \mathcal{F}_{e,k}$ when $i = \tuple{e,k}$, which ensures that $\mathcal{F}'_i$ has measure at most $2^{-i}$.
To sum up, we need~$X$ to only belong to finitely many sets in a sequence $(\mathcal{F}'_i)_{i \in \omega}$ where each $\mathcal{F}'_i$ is the difference of two effectively open sets (uniformly in~$i$) and $\mathcal{F}'_i$ has measure at most $2^{-i}$ (by the Borel-Cantelli lemma, almost every~$X$ has this property). This is very similar to the notion of difference tests, introduced by Franklin and Ng~\cite{FranklinN2011}. A difference test is precisely a sequence $(\mathcal{D}_i)_{i\in\omega}$ where each $\mathcal{D}_i$ is a difference of two effectively open sets (uniformly in~$i$) and has measure at most $2^{-i}$, just like in our case. However, the passing condition for difference randomness is weaker than what we need: $X\in2^\omega$ passes a difference test if it does not belong to all of the $\mathcal{D}_i$'s, and we say that~$X$ is difference random if it passes all difference tests (which, as proven by Franklin and Ng, is equivalent to $X$ being Martin-L\"of random and not computing $\emptyset'$). In our case we need $X$ to not belong to infinitely many $\mathcal{F}'_i$, the so-called \emph{Solovay passing condition}.
This led the authors, in early presentations of this work, to propose the notion of ``strong difference randomness", where $X$ would be said to be strongly difference random if for any difference test $(\mathcal{D}_i)_{i\in\omega}$, $X$ belongs to at most finitely many $\mathcal{D}_i$'s. However, as observed by Hoyrup (private communication), this is not a robust randomness notion as it depends on the bound we put on the measure of $\mathcal{D}_i$. That is, if we defined instead a difference test by requiring that each $\mathcal{D}_i$ has measure at most $1/(i+1)^2$ instead of $2^{-i}$, the Borel-Cantelli lemma would still tell us that almost every~$X$ passes the test, but it is not clear that a test with a $1/(i+1)^2$ bound can be covered by one or several tests with a $2^{-i}$ bound. Indeed, what we would normally like to do to convert such a test $(\mathcal{D}_i)_{i\in\omega}$ with $\mu(\mathcal{D}_i) \leq 1/(i+1)^2$ for each~$i$ into a test $(\mathcal{D}'_i)_{i\in\omega}$ with $\mu(\mathcal{D}'_i) \leq 2^{-i}$ such that an $X$ failing the first also fails the second is to apply the following technique: Construct a computable sequence of integers $j_1 < j_2 < \ldots$ such that for every $i$, $\sum_{k \geq j_i} 1/(k+1)^2 \leq 2^{-i}$, which can be done since $\sum_k 1/(k+1)^2$ is a computable sum. Then for each~$i$, we set
\[
\mathcal{D}'_i = \mathcal{D}_{j_i} \cup \mathcal{D}_{j_i+1} \cup \ldots \cup \mathcal{D}_{j_{i+1}-1}.
\]
By definition of the $j_i$'s, we have $\mu(\mathcal{D}'_i) \leq 2^{-i}$, and if an~$X$ belongs to infinitely many sets $\mathcal{D}_i$, it also belongs to infinitely many sets $\mathcal{D}'_i$. This for example would work if the $\mathcal{D}_i$'s were effectively open sets, but the problem here is that they are differences of two effectively open sets, which may no longer be the case for the $\mathcal{D}'_i$ we constructed: we only know that they are finite unions of differences of effectively open sets (equivalently, Boolean combinations of effectively open sets, as every Boolean combination can be written in this form).
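The regrouping step itself is unproblematic as far as measure bounds are concerned, as the following sketch illustrates (hypothetical names; test levels are modelled as finite sets). It computes cut points $j_i$ for the bound $1/(k+1)^2$ using the elementary tail estimate $\sum_{k \geq j} 1/(k+1)^2 \leq \sum_{k \geq j} 1/(k(k+1)) = 1/j$, and regroups the levels accordingly; what it cannot do, and this is the point made above, is turn the resulting finite unions back into plain differences of effectively open sets.

```python
def cut_points(levels):
    """j_i = 2**i guarantees sum_{k >= j_i} 1/(k+1)^2 <= 1/j_i <= 2**-i,
    by the tail estimate sum_{k >= j} 1/(k+1)^2 <= 1/j (valid for j >= 1)."""
    return [2 ** i for i in range(1, levels + 2)]

def regroup(D, cuts):
    """D'_i = D_{j_i} u D_{j_i + 1} u ... u D_{j_{i+1} - 1}.

    D maps an index to a (finite model of a) test level; each grouped
    level is a finite union, hence in general no longer a single
    difference of two effectively open sets.
    """
    return [set().union(*(D(k) for k in range(a, b)))
            for a, b in zip(cuts, cuts[1:])]
```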
To get a more robust randomness notion that corresponds to what we need for fireworks arguments to work, we have two main options:
\begin{itemize}
\item Either we keep the bound $2^{-i}$, but allow each level $\mathcal{D}_i$ of the test to be a finite union of differences of two effectively open sets (presented as a finite list of indices for these sets uniformly in~$i$) with the Solovay passing condition. (By the above argument, in that case, the bound $2^{-i}$ can be replaced by any $a_i$ such that $\sum_i a_i$ is finite and is a computable real number without changing the power of the family of tests).
\item Or we allow the measure bound to vary, i.e., we allow all tests of type $(\mathcal{D}_i)_{i\in\omega}$ such that each $\mathcal{D}_i$ is a difference of two effectively open sets uniformly in~$i$, and there is a computable sequence $(a_i)_{i\in\omega}$ of rationals such that the sum $\sum_i a_i$ is finite (and possibly computable), such that $\mu(\mathcal{D}_i) < a_i$ (and again use the Solovay passing condition for these tests).
\end{itemize}
The second approach seems to give a new randomness notion, which probably deserves to be studied. But one would first have to decide whether it is more natural to require the sum $\sum_i a_i$ to be computable or simply finite, prove that it does differ from existing randomness notions, etc. This would take us beyond the scope of this paper.
The first approach is just as natural, and has the non-negligible advantage of taking us back to an existing randomness notion: Demuth randomness. This is one of the central randomness notions between $1$-randomness and $2$-randomness, which has received a lot of attention recently; see, for example,~\cite{BienvenuDGNT2014,GreenbergT2014,KuceraN2011}. Demuth randomness is defined as follows. We fix an effective enumeration $(\mathcal{W}_e)_{e \in \omega}$ of all effectively open sets. A Demuth test is a sequence $(\mathcal{W}_{g(n)})_{n \in \omega}$ where $g$ is an $\omega$-c.a.\ function (that is, $g \leq_{wtt} \emptyset'$, or equivalently, $g$ is a $\Delta^0_2$ function which has a computable approximation $g(n,s)$ such that for every~$n$ the number of $s$ such that $g(n,s) \not= g(n,s+1)$ is bounded by $h(n)$ for some fixed computable function $h$) and $\mu(\mathcal{W}_{g(n)}) \leq 2^{-n}$ for all~$n$. $X\in2^\omega$ is Demuth random if it only belongs to at most finitely many levels of any given Demuth test $(\mathcal{W}_{g(n)})_{n \in \omega}$ (as shown by Ku\v cera and Nies~\cite{KuceraN2011}, the notion is independent of the bound, in that one can take any other sequence $(a_n)_{n\in\omega}$ in place of $2^{-n}$, as long as $\sum_n a_n$ is finite and computable). One can also define weak Demuth randomness by changing the passing condition: $X$ is weakly Demuth random if for any Demuth test $(\mathcal{W}_{g(n)})_{n \in \omega}$, $X \notin \bigcap_n \mathcal{W}_{g(n)}$. This is a strictly weaker notion: indeed, weak Demuth randomness is implied by weak 2-randomness (where $X\in2^\omega$ is weakly 2-random if it does not belong to any $\Pi^0_2$ nullset), while Demuth randomness is incomparable with it.
Our next theorem shows that the randomness notion yielded by the first approach above is indeed Demuth randomness.
\begin{theorem}\label{thm:demuth-list-diff}
$X \in 2^\omega$ is Demuth random if and only if for every test $(\mathcal{D}_n)_{n\in\omega}$ where $\mathcal{D}_n$ is a finite union of differences of two effectively open sets (presented as a finite list of indices for these sets), and such that $\mu(\mathcal{D}_n) \leq 2^{-n}$,~$X$ only belongs to finitely many sets $\mathcal{D}_n$.
\end{theorem}
This is the analogue of a result of Franklin and Ng~\cite{FranklinN2014} who proved the same equivalence between this new type of tests and Demuth tests when the passing condition is ``$X$ does not belong to all levels", thus obtaining a new characterization of weak Demuth randomness.
\begin{proof}
We first see how to turn a Demuth test $(\mathcal{W}_{g(n)})_{n \in \omega}$ into a test $(\mathcal{D}_n)_{n \in \omega}$ as in the theorem. Let~$h$ be a computable bound on the number of changes of~$g$. For each~$n$, effectively create a list of $h(n)$ difference sets $\mathcal{U}_k \setminus\mathcal{V}_k$, where $\mathcal{U}_k$ is the $k$-th version of $\mathcal{W}_{g(n)}$ (that is, $\mathcal{U}_k$ is equal to $\mathcal{W}_{g(n,s+1)}$, where $s+1$ is the $k$-th stage at which $g(n,s+1)\not=g(n,s)$; if there is no such~$s$, $\mathcal{U}_k$ remains empty), and $\mathcal{V}_k$ is equal to $\mathcal{U}_k$ if a $(k+1)$-th version of $\mathcal{W}_{g(n)}$ ever appears, and empty otherwise. It is easy to see that all $\mathcal{U}_k \setminus\mathcal{V}_k$ are empty, except the one where $k$ corresponds to the final version of $\mathcal{W}_{g(n)}$, for which we have $\mathcal{U}_k \setminus \mathcal{V}_k=\mathcal{W}_{g(n)}$. Calling $\mathcal{D}_n$ the finite union of the $\mathcal{U}_k \setminus \mathcal{V}_k$, we have $\mathcal{D}_n = \mathcal{W}_{g(n)}$. Thus~$X$ is in infinitely many $\mathcal{W}_{g(n)}$ if and only if it is in infinitely many $\mathcal{D}_n$.
Conversely, let us see how to convert a test $(\mathcal{D}_n)_{n \in \omega}$ as above into a Demuth test. For every $n$, the $n$-th level $\mathcal{W}_{g(n)}$ of the Demuth test is built as follows: consider the list of $\mathcal{U}_k \setminus \mathcal{V}_k$ composing $\mathcal{D}_{n+1}$. There are $h(n)$ such difference sets, where $h$ is a computable function. The first version of $\mathcal{W}_{g(n)}$ is $\bigcup_{k} \mathcal{U}_k$. Meanwhile, enumerate all $\mathcal{V}_k$ in parallel. When we see at some stage~$s$ that the measure of one of the $\mathcal{V}_k$'s becomes greater than some new multiple of $2^{-n-1}/h(n)$, we change the version of $\mathcal{W}_{g(n)}$: The new version is now $\bigcup_{k} (\mathcal{U}_k \setminus \mathcal{V}_k[s])$.
The number of versions of $\mathcal{W}_{g(n)}$ is bounded by $h(n)^2 \cdot 2^{n+1}$. Indeed, each $\mathcal{V}_k$ can reach a new multiple of $2^{-n-1}/h(n)$ only $h(n) \cdot 2^{n+1}$ times (causing a new version of $\mathcal{W}_{g(n)}$), and there are $h(n)$ sets $\mathcal{V}_k$'s. By definition, every version of $\mathcal{W}_{g(n)}$ contains $\mathcal{D}_{n+1}$, so any sequence contained in infinitely many $\mathcal{D}_{n}$'s is contained in infinitely many $\mathcal{W}_{g(n)}$. It remains to evaluate the measure of $\mathcal{W}_{g(n)}$. Let $s$ be the stage at which the final version of $\mathcal{W}_{g(n)}$ has appeared. Since this is the final version, this means that no $\mathcal{V}_k$ will increase by more than $2^{-n-1}/h(n)$ in measure after stage~$s$, thus for each $k$,
\[
\Big|\mu(\mathcal{U}_k \setminus \mathcal{V}_k[s]) - \mu(\mathcal{U}_k \setminus \mathcal{V}_k)\Big| \leq 2^{-n-1}/h(n).
\]
Since there are $h(n)$ terms $\mathcal{U}_k \setminus \mathcal{V}_k$ in $\mathcal{D}_{n+1}$, we have
\[
\Big|\mu\bigl(\bigcup_k (\mathcal{U}_k \setminus \mathcal{V}_k[s])\bigr) - \mu\bigl(\bigcup_k (\mathcal{U}_k \setminus \mathcal{V}_k)\bigr)\Big| \leq 2^{-n-1},
\]
i.e.,
\[
\Big|\mu(\mathcal{W}_{g(n)}) - \mu(\mathcal{D}_{n+1})\Big| \leq 2^{-n-1}.
\]
As $\mu(\mathcal{D}_{n+1}) \leq 2^{-n-1}$, we get $\mu(\mathcal{W}_{g(n)}) \leq 2^{-n-1}+2^{-n-1}=2^{-n}$ as desired.
\end{proof}
From this theorem and our analysis of the tests induced by fireworks arguments, we immediately get that Demuth randomness is sufficient to compute a $1$-generic.
\begin{theorem}\label{thm:demuth-1-generic}
Every Demuth random computes a $1$-generic.
\end{theorem}
Theorem~\ref{thm:demuth-list-diff} more generally tells us that if $\mathcal{S} \subseteq 2^\omega$ is such that a member of $\mathcal{S}$ can be obtained with positive probability via a fireworks argument, one can conclude that every Demuth random~$X$ computes some element of~$\mathcal{S}$. For example, more intricate fireworks arguments were used in~\cite{BienvenuP2017} for the set $\mathcal{S}$ consisting of the diagonally non-computable (DNC) functions which compute no Martin-L\"of random. Together with the present paper, we have established that every Demuth random~$X$ computes a DNC function which computes no Martin-L\"of random.
This theorem cannot be generalized to currently available definitions of randomness (other than those that imply Demuth randomness): no notion implied by weak 2-randomness, which includes weak Demuth randomness, difference randomness, Martin-L\"of randomness, Oberwolfach randomness, balanced randomness, etc., guarantees the computation of a 1-generic. Indeed, every $1$-generic is hyperimmune while there are sequences that are both weakly 2-random and of hyperimmune-free Turing degree~\cite[Proposition 3.6.4]{Nies2009}.
One of the main results of~\cite{BarmpaliasDL2013} is that every non-computable $X$ which is merely \emph{Turing below} a $2$-random~$Y$ computes a $1$-generic (Kurtz~\cite{Kurtz1981} had proven this result for almost all~$Y$, but the exact level of randomness needed was unknown). One cannot replace $2$-randomness by Demuth randomness in this statement. Indeed, take a~$Y$ which is Demuth random but not weakly 2-random. By a result of Hirschfeldt and Miller, $Y$ computes a non-computable c.e.\ set~$A$ (see~\cite[Corollary 7.2.12]{DowneyH2010}). Applying a second result, due to Yates, that every non-computable c.e.\ set computes some~$X$ of minimal degree, it follows that some non-computable sequence~$X$ Turing below a Demuth random computes no $1$-generic, as no $1$-generic has minimal degree.
\subsection{Demuth randomness vs stronger genericity notions}
So far we have seen that every Demuth random computes a $1$-generic sequence, and that this was in some sense optimal among the randomness notions that have been considered in the literature. One may wonder whether the same is true on the genericity side; that is, can we improve Theorem~\ref{thm:demuth-1-generic} by replacing 1-genericity with a stronger genericity notion? Again, we give a negative answer for the genericity notions we are aware of between $1$-genericity and $2$-genericity. We have seen in the introduction that a good candidate for a possible strengthening of Theorem~\ref{thm:demuth-1-generic} would be weak 2-genericity. We show that at this level of genericity, the situation changes significantly: a Demuth random sequence and a weakly 2-generic sequence always form a minimal pair. In fact, we will show this even for a weaker notion, known as pb-genericity, introduced by Downey, Jockusch and Stob~\cite{DowneyJS1996}.
\begin{definition}
$G \in 2^\omega$ is pb-generic if for every function~$f: 2^{<\omega} \rightarrow 2^{<\omega}$ such that
\begin{itemize}
\item[(i)] $f$ is computable in~$\emptyset'$ with a primitive recursive bound on the use, and
\item[(ii)] $\sigma \preceq f(\sigma)$ for all~$\sigma$,
\end{itemize}
there are infinitely many~$n$ such that $f(G {\upharpoonright} n) \preceq G$.
\end{definition}
It is easy to see that weak 2-genericity implies pb-genericity. Indeed, for each $n$, consider the set of strings $S_n=\{f(\sigma) \mid |\sigma| \geq n\}$. By definition of~$f$, $S_n$ is both dense and $\emptyset'$-c.e., thus a weakly 2-generic sequence~$G$ must have a prefix in every $S_n$, which is exactly what it means for $G$ to be pb-generic.
It is already known that a Demuth random cannot compute a pb-generic: indeed, Downey et al.~\cite{DowneyJS1996} proved that $X \in 2^\omega$ computes a pb-generic if and only if it has array non-computable degree, meaning that $X$ can compute a total function $f: \mathbb{N} \rightarrow \mathbb{N}$ which is dominated by no $\omega$-c.a.\ function $g$ (where $g$ is said to dominate $f$ if $f(n) \leq g(n)$ for almost all~$n$), and Downey and Ng~\cite{DowneyN2009} showed that all Demuth randoms have array computable degree. The next theorem improves this.
\begin{theorem}
If $X$ is Demuth random and~$G$ is pb-generic, then $X$ and $G$ form a minimal pair in the Turing degrees.
\end{theorem}
This also strengthens the theorem of Nies et al.\ mentioned in the introduction, which asserts that any pair $(X,G)$ consisting of a $2$-random and a $2$-generic forms a minimal pair. As we shall see later, among the available effective notions of randomness and genericity in the literature (that we are aware of), this theorem is the best we can get.
\begin{proof}
For this proof, we fix a primitive recursive bijection $\mathsf{Nat}$ from strings to integers. Let us consider a pair $(\Phi, \Psi)$ of Turing functionals. We want to show that if $\Phi^G$ and $\Psi^X$ are both defined and equal, then they are computable. For each such pair of functionals, we exploit the pb-genericity of~$G$ by building a specific function~$f$, and exploit the Demuth randomness of~$X$ by building a Demuth test $(\mathcal{W}_{g(n)})_{n \in \mathbb{N}}$. The function~$f$ is defined as follows. For a given~$\sigma\in2^{<\omega}$, look for a family of pairs of strings $(\sigma_1,\tau_1), \ldots, (\sigma_{2^N},\tau_{2^N})$, where $N=\mathsf{Nat}(\sigma)$, such that:
\begin{itemize}
\item all $\sigma_i$ strictly extend $\sigma$,
\item for all~$i$, $\Phi^{\sigma_i} \succeq \tau_i$,
\item and the $\tau_i$ are pairwise incomparable.
\end{itemize}
Note that these conditions are c.e.\ and therefore if there is such a family, it can be effectively found. If such a family is eventually found, let $f(\sigma)=\sigma_j$ where $j$ is the smallest index such that the measure of $\Psi^{-1}([\tau_j])$ (that is, the effectively open set $\{X \in 2^\omega \mid \Psi^X \succeq \tau_j\}$) is smaller than or equal to $2^{-N}$ (note that since the $\tau_i$ are pairwise incomparable, the open sets $\Psi^{-1}([\tau_i])$ are disjoint, hence at least one of them must have measure at most $2^{-N}$) and define $\mathcal{W}_{g(N)}$ to be equal to $\Psi^{-1}([\tau_j])$. If there is no such family of pairs of strings, set $f(\sigma)=\sigma$ and $\mathcal{W}_{g(N)}$ to be the empty set. We show that this construction works via a series of claims. \\
\begin{claim}
The function~$f$ is computable in $\emptyset'$ with a primitive recursive bound on the use.
\end{claim}
\begin{proof}
Indeed, $f$ has a computable approximation with a primitive recursive bound on the number of mind changes. For a given $\sigma$ and stage~$s$, let $f(\sigma)[s]$ be equal to $\sigma$ if no family of pairs as in the construction has been found by stage~$s$; in case such a family was found before stage~$s$, set $f(\sigma)[s]$ to be equal to $\sigma_k$ where $k$ is minimal such that the measure of $\Psi^{-1}([\tau_k])[s]$ is smaller than or equal to $2^{-N}$ (where, again, $N=\mathsf{Nat}(\sigma)$). Note that this is indeed a computable approximation, and $f(\sigma)[s]$ can change at most $2^{N}$ times. Indeed, it changes once when (and if) the family of pairs is found, and changes every time the current candidate $\tau_k$ is discovered to be such that the measure of $\Psi^{-1}([\tau_k])[s]$ is bigger than $2^{-N}$, which can happen to at most $2^{N}-1$ strings. Moreover, $\mathsf{Nat}$ is a primitive recursive function, thus so is $\sigma \mapsto 2^{\mathsf{Nat}(\sigma)}$.
\end{proof}
\begin{claim}
The sequence $(\mathcal{W}_{g(N)})_{N \in \mathbb{N}}$ is a Demuth test.
\end{claim}
\begin{proof}
For a given~$N$, let $\sigma$ be the string such that $N=\mathsf{Nat}(\sigma)$. By definition of $\mathcal{W}_{g(N)}$, we have $\mu(\mathcal{W}_{g(N)}) \leq 2^{-N}$. Moreover, $g$ has a computable approximation with at most $2^N$-many changes, the proof of this being almost identical to that of the previous claim. Initially, and at any stage~$s$ before the desired family of strings is found, $\mathcal{W}_{g(N)[s]}$ is empty, and at any stage~$s$ posterior to finding such a family, one can take $\mathcal{W}_{g(N)[s]}=\Psi^{-1}([\tau_k])[s]$, where~$k$ is minimal such that $\mu(\Psi^{-1}([\tau_k])[s])\leq2^{-N}$. It follows that $g(N)$ can change at most $2^{N}$ many times.
\end{proof}
\begin{claim}
If there are infinitely many~$n$ such that $f(G {\upharpoonright} n) \preceq G$ and if $X$ passes the Demuth test $(\mathcal{W}_{g(N)})_{N \in \mathbb{N}}$, then either $\Phi^G$ is partial, or $\Phi^G$ is computable, or $\Phi^G \not= \Psi^X$.
\end{claim}
\begin{proof}
By definition of `passing a Demuth test', we know that $X$ only belongs to finitely many~$\mathcal{W}_{g(n)}$. Therefore let us take $n$ such that $f(G {\upharpoonright} n) \preceq G$ and large enough so that $X \notin \mathcal{W}_{g(N)}$, where $N=\mathsf{Nat}(G {\upharpoonright} n)$. Let $\sigma=G {\upharpoonright} n$. We distinguish two cases.\\
\noindent Case 1: $f(\sigma)=\sigma$. By definition of~$f$, this means that no family of pairs was ever found for that~$\sigma$, meaning that the set of strings
\[
T= \{\tau \mid (\exists \sigma' \succeq \sigma)\, \Phi^{\sigma'} \succeq \tau \}
\]
contains at most $2^{N}-1$ pairwise incomparable strings. Observe that~$T$ is a c.e.\ tree (the computable enumerability is obvious by definition, and it is clearly closed under the prefix relation). Therefore, all infinite paths of~$T$ are strongly isolated, in the sense that for every infinite path~$X$, there is an $n_0$ such that for all~$n \geq n_0$, $X {\upharpoonright} (n+1)$ is the only extension of~$X {\upharpoonright} n$ in~$T$. Indeed, if this were not the case for some path~$X$, we would have infinitely many $n$ such that $\tau_n = (X {\upharpoonright} n) ^\frown (1-X(n))$ is in the tree. Since $\tau_n$ and $\tau_m$ are incomparable for any $n \neq m$, this would contradict our assumption that there is no family of $2^N$ incomparable strings in the tree. Of course, any strongly isolated path in a c.e.\ tree is computable, since for almost all~$n$, $X(n)$ can be effectively found from $X {\upharpoonright} n$. Now observe that since $\sigma$ is a prefix of~$G$, $\Phi^G$, if it is defined, is a path of~$T$. Thus $\Phi^G$ is either undefined or is a computable sequence. \\
\noindent Case 2: $f(\sigma)$ strictly extends $\sigma$. By construction, this means that there is a string~$\tau$ such that $\Phi^G \succeq \Phi^{f(\sigma)} \succeq \tau$ and $\mathcal{W}_{g(N)} = \Psi^{-1}([\tau])$. Since by assumption $X \notin \mathcal{W}_{g(N)}$, this means that $\Psi^X \nsucceq \tau$, and thus $\Phi^G \not= \Psi^X$.
\end{proof}
This last claim completes the proof.
\end{proof}
\section{Weak 2-randomness vs genericity}
The other main randomness notion below $2$-randomness, namely weak 2-randomness, behaves quite differently from Demuth randomness in terms of the ``escaping power" of the functions $f:\omega\rightarrow\omega$ that such random elements can compute. Following the terminology of~\cite{AndrewsGM2014}, given a countable family $\mathscr{F}$ of functions, we say that a function~$g$ is $\mathscr{F}$-escaping if it is not dominated by any function $f \in \mathscr{F}$. We will also say that $X \in 2^\omega$ has $\mathscr{F}$-escaping degree if it computes an $\mathscr{F}$-escaping function. For example, $X$ has $\Delta^0_1$-escaping degree iff it has hyperimmune degree, and $X$ has $(\omega$-c.a.)-escaping degree iff it has array non-computable degree.
For Demuth random sequences we have an upper and a lower bound on escaping power: every Demuth random sequence has $\Delta^0_1$-escaping degree (see~\cite{Nies2009}; this also follows from the fact that every Demuth random computes a $1$-generic), but no Demuth random is $(\omega$-c.a.)-escaping as mentioned in the previous section (a fortiori, no Demuth random is $\Delta^0_2$-escaping).
By contrast, weak 2-randomness is completely orthogonal to this measure of computational strength. On the one hand, some weakly 2-random sequences have hyperimmune-free degree (see for example~\cite{Nies2009}). On the other hand, a striking result by Barmpalias, Downey and Ng~\cite{BarmpaliasDN2011} is that for \emph{any} countable family $\mathscr{F}$ of functions, there is a weakly 2-random~$X$ that has $\mathscr{F}$-escaping degree.
There are close connections between escaping degrees and the ability to compute generics:
\begin{itemize}
\item $X$ computes a weak-1-generic iff $X$ has $\Delta^0_1$-escaping degree~\cite{Kurtz1981},
\item $X$ computes a pb-generic iff it has $(\omega$-c.a.)-escaping degree~\cite{DowneyJS1996},
\item $X$ computes a weak-2-generic iff it has $\Delta^0_2$-escaping degree~\cite{AndrewsGM2014},
\item If $X$ has $\Delta^0_3$-escaping degree, it computes a $2$-generic~\cite{AndrewsGM2014}.
\end{itemize}
Together with the theorem of Barmpalias, Downey and Ng, the last item shows that there exists a weak-2-random which computes a $2$-generic. However, a very interesting result from~\cite{AndrewsGM2014} is that we cannot extend this correspondence much further in the genericity hierarchy: indeed, for \emph{any} countable family $\mathscr{F}$ of functions, there exists an~$\mathscr{F}$-escaping function~$g$ which computes no weakly 3-generic sequence. Thus the theorem of Barmpalias, Downey and Ng does not say how weak 2-randomness interacts with weak 3-genericity or higher genericity notions. Our next theorem strengthens their result to show that in fact, there always exists a weakly 2-random sequence that computes a generic sequence, no matter how strong the genericity notion is.
\begin{theorem}\label{thm:w2r-may-compute-generic}
Let $\mathcal{G}$ be a comeager subset of $2^\omega$. There exists a weakly 2-random~$X$ that computes some $Y \in \mathcal{G}$.
\end{theorem}
The rest of this section is dedicated to the proof of Theorem~\ref{thm:w2r-may-compute-generic}. The main ideas are the same as the ones used by Barmpalias et al.\ in~\cite{BarmpaliasDN2011}, and there is little doubt that they would have been able to refine their proof to get Theorem~\ref{thm:w2r-may-compute-generic} had they been considering the problem of coding generics into randoms. Nonetheless, some adaptations are needed, and this is what we provide below. Also, as a matter of expository choice, our proof differs from Barmpalias et al.'s in the characterization we use of weak 2-randomness: while they used the fact that an $X \in 2^\omega$ is weakly 2-random iff it is Martin-L\"of random and forms a minimal pair with $\emptyset'$ (see~\cite{DowneyH2010}), we directly use the definition of weak 2-randomness, that is, $X$ is weakly 2-random iff it does not belong to any $\Pi^0_2$ nullset.
The main tool we need for our proof is the so-called Ku\v cera-G\'acs coding, which allows one to encode any information into a Martin-L\"of random real. Let us review the basic mechanisms of this technique.
Ku\v cera-G\'acs coding begins by fixing a $\Pi^0_1$ class $\mathcal{R}$ containing only Martin-L\"of random sequences (in particular, $\mathcal{R}$ has positive measure). Ku\v cera proved that this class has the following property: There exists a computable function~$h$ such that for any $\Pi^0_1$ class~$\mathcal{P}$ contained in $\mathcal{R}$, and any $\sigma \in 2^{<\omega}$,
\[
[\sigma] \cap \mathcal{P} \not= \emptyset\ \Leftrightarrow \ \mu([\sigma] \cap \mathcal{P}) > 2^{-h(\sigma,\mathcal{P})}
\]
(in the above equivalence and in what follows, a $\Pi^0_1$ class as an argument should be read as an index for this $\Pi^0_1$ class). In particular this means that when $[\sigma] \cap \mathcal{P} \not= \emptyset$, $\sigma$ has at least two extensions~$\tau$ of length $h(\sigma,\mathcal{P})$ such that $[\tau] \cap \mathcal{P} \not= \emptyset$. The Ku\v cera-G\'acs coding, in its simplest form, consists in coding $0$ by $\tau_{\mathit{left}}$, the leftmost such $\tau$, and $1$ by $\tau_{\mathit{right}}$, the rightmost one. Indeed, knowing $\sigma$ and $\mathcal{P}$, and given a $\tau$ which is either the leftmost or the rightmost string of length $h(\sigma,\mathcal{P})$, we can figure out which is which, because the strings $\tau$ such that $[\tau] \cap \mathcal{P} \not= \emptyset$ form a co-c.e.\ set. One can then encode a second bit by an extension of $\tau$ of length $h(\tau, \mathcal{P})$, and iterate the process above $\tau$. If we were to continue this process indefinitely, since the coding is monotonic (each time we encode one more bit, the new codeword is an extension of the previous one), we can take the union of all the codewords to get a sequence~$X \in 2^\omega$ from which we can computably recover all the bits we encoded during the construction. Since at each step of the process we ensure that the new codeword $\tau$ satisfies $[\tau] \cap \mathcal{P} \not= \emptyset$, this means that $X$ has arbitrarily long prefixes $X {\upharpoonright} n$ such that $[X {\upharpoonright} n] \cap \mathcal{P} \not= \emptyset$, and thus $X \in \mathcal{P}$, as $\mathcal{P}$ is a closed set.
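The mechanics of this leftmost/rightmost coding can be illustrated on a toy example (a sketch with hypothetical names, not the actual construction: the class used below is decidable, whereas in the real argument the survivor sets are only co-c.e., and it is the measure bound via~$h$ that guarantees the existence of two survivors). Take for $\mathcal{P}$ the class of sequences with no two consecutive $1$s, and code each bit by a block of two fresh bits:

```python
from itertools import product

def extendible(s):
    """Toy stand-in for '[s] meets P': s avoids '11' iff it has an
    infinite extension with no two consecutive 1s (pad with 0s)."""
    return "11" not in s

def survivors(sigma, block=2):
    """Length-2 extensions tau of sigma with [sigma tau] still meeting P."""
    return ["".join(t) for t in product("01", repeat=block)
            if extendible(sigma + "".join(t))]

def encode(bits):
    """Code 0 by the leftmost surviving block, 1 by the rightmost one."""
    sigma = ""
    for b in bits:
        s = survivors(sigma)
        sigma += s[0] if b == 0 else s[-1]
    return sigma

def decode(codeword):
    """Recover the bits by replaying the survivor computation."""
    sigma, bits = "", []
    while len(sigma) < len(codeword):
        s = survivors(sigma)
        blk = codeword[len(sigma):len(sigma) + 2]
        bits.append(0 if blk == s[0] else 1)
        sigma += blk
    return bits
```

Every codeword remains extendible inside the class, so the union of the codewords along an infinite run stays in $\mathcal{P}$, mirroring the closedness argument above; for instance `decode(encode([1, 0, 1]))` returns `[1, 0, 1]`.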
Coming back to finite encoding, the Ku\v cera-G\'acs technique gives us a (non-computable) function $KG: 2^{<\omega} \times 2^{<\omega} \times \omega \rightarrow 2^{<\omega}$ such that $KG(\xi \mid \sigma, \mathcal{P})$ is the encoding of the string~$\xi$ above $\sigma$ within the $\Pi^0_1$ class $\mathcal{P}$ following the above technique, and thus enjoying the following properties for all $\xi$, $\sigma$, and $\mathcal{P}$ a $\Pi^0_1$ subset of $\mathcal{R}$:
\begin{itemize}
\item $KG(\xi \mid \sigma,\mathcal{P}) \succeq \sigma$,
\item if $[\sigma] \cap \mathcal{P} \not= \emptyset$, then $[KG(\xi \mid \sigma, \mathcal{P})] \cap \mathcal{P} \not= \emptyset$,
\item $KG(\;\cdot \mid \sigma, \mathcal{P})$ is one-to-one for every fixed $\sigma, \mathcal{P}$; furthermore, up to composing with a prefix-free encoding of $2^{<\omega}$, we can assume that for a fixed $(\sigma, \mathcal{P})$, the range of $KG(\;\cdot \mid \sigma, \mathcal{P})$ is prefix-free.
\item There exists an effective `decoding' procedure, which we denote by $KG^{-1}$, which is a partial computable function such that (a) $KG^{-1}(\tau \mid \sigma,\mathcal{P})=\xi$ when $\tau=KG(\xi \mid \sigma,\mathcal{P})$ and (b) for a fixed $(\sigma, \mathcal{P})$, the domain of $KG^{-1}(\;\cdot \mid \sigma, \mathcal{P})$ is prefix-free.
\end{itemize}
Now, we want to encode information into a weakly 2-random sequence. Of course, since a weakly 2-random sequence cannot compute any non-computable $\Delta^0_2$ set, we cannot hope for an encoding which can be perfectly decoded, and thus the decoding procedure will be allowed to make errors.
The idea is to sequentially use Ku\v cera-G\'acs codings where the $\Pi^0_1$ class shrinks at each step in order to make the union of the codewords a weakly 2-random sequence. To do this, let $(\mathcal{U}^e_k)_{e,k\in\omega}$ be an effective enumeration of $\Sigma^0_1$ subsets of $2^\omega$ such that every $\Pi^0_2$ set is equal to $\bigcap_k \mathcal{U}^e_k$ for some $e$. Without loss of generality, we can ensure that $\mathcal{U}^e_{k+1} \subseteq \mathcal{U}^e_k$ for all~$e,k$. We let $\mathcal{C}^e_k$ be the complement of $\mathcal{U}^e_k$. We also let $e^*_1< e^*_2 < \ldots$ be the sequence of indices~$e$ such that $\bigcap_k \mathcal{U}^e_k$ is a nullset. This is, of course, not a computable sequence, but the idea is to make this sequence part of the encoded information.
Let $g: \omega \times 2^{<\omega} \times \omega \rightarrow \omega \cup \{\infty\}$ be the function defined by
\[
g(e, \sigma, \mathcal{P}) = \inf \bigl\{k \mid [\sigma] \cap \mathcal{C}^e_k \cap \mathcal{P} \not= \emptyset\bigr\}.
\]
Observe that $g$ is lower semi-computable.
Let us fix a computable, one-to-one pairing function $\tuple{\cdot,\cdot}:\omega\times2^{<\omega}\rightarrow2^{<\omega}$. Given a sequence of strings $\xi_1, \ldots , \xi_k$, its \emph{W2R-encoding}, denoted by $E(\xi_1, \ldots, \xi_k)$, is the string $\tau_1\tau_2\cdots \tau_k$ where
\[
\left\{
\begin{array}{l}
\mathcal{P}_0= \mathcal{R}\\
\tau_1 = KG(\tuple{e^*_1, \xi_1} \mid \Lambda, \mathcal{P}_0)
\end{array}
\right.
\]
and for $1 \leq n < k$,
\[
\left\{
\begin{array}{l}
\mathcal{P}_{n} = \mathcal{P}_{n-1} \cap \mathcal{C}^{e^*_n}_{g(e^*_n,\tau_n,\mathcal{P}_{n-1})}\\
\tau_{n+1} = KG(\tuple{e^*_{n+1}, \xi_{n+1}} \mid \tau_n, \mathcal{P}_{n}).
\end{array}
\right.
\]
By construction, $E(\xi_1, \ldots, \xi_{k})$ is a prefix of $E(\xi_1, \ldots, \xi_{k+1})$ for all~$k$, so we can extend the definition of $E$ to infinite sequences of strings $(\xi_i)_{i \in \omega}$ by setting
\[
E\bigl((\xi_i)_{i \in \omega}\bigr) = \bigcup_k E(\xi_1, \ldots, \xi_k).
\]
Moreover, the construction ensures that $E\bigl((\xi_i)_{i \in \omega}\bigr)$ belongs to all $\mathcal{P}_n$, and $\mathcal{P}_{n}$ is chosen to be disjoint from $\bigcap_k \mathcal{U}^{e^*_n}_k$, the $n$-th $\Pi^0_2$ nullset. Thus, $E\bigl((\xi_i)_{i \in \omega}\bigr)$ is weakly 2-random for any sequence $(\xi_i)_{i \in \omega}$.
Let us now define a `decoding' functional $\Gamma$. This functional will make `errors' in the decoding process, i.e., we will not have $\Gamma^{E((\xi_i)_{i \in \omega})}=\xi_1\xi_2\cdots$. However, we will ensure the following property: if $\xi_1, \ldots, \xi_k$ are fixed, there is an $r$ such that for any extension $\xi_{k+1}\xi_{k+2}\cdots$ of the sequence, the prefix of $\Gamma^{E((\xi_i)_{i \in \omega})}$ of length $|\xi_1\xi_2\cdots\xi_{k+1}|$ differs from $\xi_1\xi_2\cdots\xi_{k+1}$ on at most~$r$ bits.
The procedure $\Gamma$ is defined as follows. On input~$X$, for all~$t$ in parallel, $\Gamma$ runs a sub-procedure (which we call a \emph{$t$-sub-procedure}) that tries to find a prefix $\tau_1 \cdots \tau_k$ of~$X$ and a sequence of triples $\tuple{a_n,\zeta_n,\mathcal{Q}_n}_{1 \leq n \leq k}$ such that
\[
\left\{
\begin{array}{l}
\mathcal{Q}_0= \mathcal{R}\\
\tuple{a_1, \zeta_1} = KG^{-1}(\tau_1 | \Lambda, \mathcal{Q}_0)
\end{array}
\right.
\]
and for $1 \leq n < k$,
\[
\left\{
\begin{array}{l}
\mathcal{Q}_{n} = \mathcal{Q}_{n-1} \cap \mathcal{C}^{a_n}_{g(a_n,\tau_n,\mathcal{Q}_{n-1})[t]}\\
\tuple{a_{n+1}, \zeta_{n+1}} = KG^{-1}(\tau_{n+1} | \tau_n, \mathcal{Q}_{n}).
\end{array}
\right.
\]
Note that there is at most one such sequence because $KG$ is prefix-free and one-to-one for each fixed pair of conditions $(\sigma,\mathcal{P})$. If such a sequence is found, then setting $\zeta=\zeta_1\cdots\zeta_k$, $\Gamma^X(i)$ is defined to be $\zeta(i)$ for any~$i \leq \min(t,|\zeta|-1)$ on which $\Gamma^X(i)$ has not yet been defined by other sub-procedures with parameter $t'< t$.
We now prove two claims which are going to allow us to conclude the proof.
\begin{claim}
For any sequence of strings $\xi_1, \ldots, \xi_k$, there exists an $N\in\omega$ such that for any $\xi_{k+1}$, if $\xi=\xi_1 \cdots \xi_{k+1}$ has length at least~$N$, then $\Gamma^{X}(i)=\xi(i)$ for any $X$ extending $E(\xi_1,\dots,\xi_{k+1})$ and any $i$ with $N \leq i < |\xi|$.
\end{claim}
\begin{proof}
Fix $\xi_1, \ldots, \xi_k$, let $\xi_{k+1}$ be any string, and let $X$ be an infinite sequence extending $E(\xi_1, \ldots, \xi_{k+1})$.
Let $\tau_1, \ldots, \tau_{k+1}$ and $\mathcal{P}_1, \ldots, \mathcal{P}_{k+1}$ be the strings and $\Pi^0_1$ classes inductively built in the definition of $E(\xi_1, \ldots, \xi_{k+1})$. Recall that the function~$g$ is lower semi-computable and $g(e^*_n,\tau_n,\mathcal{P}_{n-1})$ is always finite, thus there is an~$N$ such that
\[
g(e^*_n,\tau_n,\mathcal{P}_{n-1})[t]=g(e^*_n,\tau_n,\mathcal{P}_{n-1})
\]
for all $n \leq k+1$ and all $t \geq N$.
This means that for $t \geq N$, the $t$-sub-procedure of $\Gamma$ will eventually find the desired sequence $\tuple{a_n,\zeta_n,\mathcal{Q}_n}_{1 \leq n \leq k+1}$, because $\tuple{e^*_n,\xi_n,\mathcal{P}_n}_{1 \leq n \leq k+1}$ satisfies the required property. By uniqueness, we must have $\zeta_n=\xi_n$, $a_n=e^*_n$ and $\mathcal{Q}_n=\mathcal{P}_n$ for $n \leq k+1$. By definition of $\Gamma$, the $t$-sub-procedures for $t < N$ can only define $\Gamma^X(i)$ for $i < N$. Thus if $i \geq N$ one must have $\Gamma^X(i)=\xi(i)$ where $\xi=\xi_1\cdots\xi_{k+1}$. This proves our claim.
\end{proof}
\begin{claim}
Let $\xi_1, \ldots, \xi_k$ be fixed and let $\mathcal{U}$ be a dense open set. There exists $\xi_{k+1}$ such that $\Gamma^{E(\xi_1, \ldots, \xi_{k+1})} \succeq \sigma$ for some $\sigma$ such that $[\sigma] \subseteq \mathcal{U}$.
\end{claim}
\begin{proof}
By the previous claim, there exists an~$N\in\omega$ such that for any $\xi_{k+1}$ and any $X$ extending $E(\xi_1,\ldots,\xi_{k+1})$, we have $\Gamma^X(i)=\xi(i)$ for all $N \leq i < |\xi|$, where $\xi=\xi_1\cdots\xi_{k+1}$. We can assume that $N \geq |\xi_1\cdots\xi_{k}|$.
Now, given a string $\eta\in2^{<\omega}$, we denote by $\mathcal{U}_{\eta}$ the set $\{Z \mid \eta Z \in \mathcal{U}\}$. Since $\mathcal{U}$ is dense, it is in particular dense above $\eta$, so $\mathcal{U}_\eta$ is dense. Consider the open set $\mathcal{V} = \bigcap_{|\eta|=N} \mathcal{U}_\eta$. A finite intersection of dense open sets is dense and, in particular, non-empty, so there must be a $\zeta$ such that $[\zeta] \subseteq \mathcal{V}$, which is equivalent to saying that $[\eta\zeta] \subseteq \mathcal{U}$ for all $\eta$ of length~$N$. Thus, it suffices to choose $\xi_{k+1}$ so that the bits of $\xi=\xi_1\cdots\xi_{k+1}$ after position~$N$ are an extension of~$\zeta$ to get the desired result.
\end{proof}
This last claim is just what we need to complete the proof of Theorem~\ref{thm:w2r-may-compute-generic}. Let $\mathcal{G}$ be comeager and $(\mathcal{U}_k)_{k \in \mathbb{N}}$ a family of dense open sets such that $\bigcap_k \mathcal{U}_k \subseteq \mathcal{G}$. The previous claim allows us to construct by induction a sequence $(\xi_k)_{k \in \mathbb{N}}$ of strings such that for all~$k$, $\Gamma^{X}$ is guaranteed to be in~$\mathcal{U}_k$ whenever $X$ extends $E(\xi_1,\ldots,\xi_k)$ and $\Gamma^X$ is total. Thus, taking $X=E((\xi_k)_{k \in \mathbb{N}})$, we have that $\Gamma^X$ is total, belongs to all $\mathcal{U}_k$, and, as explained earlier, $X$ must be weakly 2-random. Our theorem is proven.
\section{Conclusion}
The following table recaps the various interactions between randomness and genericity discussed in the paper: \\
\begin{tabular}{|c|c|c|c|c|}
\hline
& $n$-gen. ($n \geq 2$) & weakly 2-gen.\ & pb-gen.\ & 1-gen.\ \\
\hline
$n$-random ($n \geq 2$) & min.\ pair & min.\ pair & min.\ pair & computes\\
\hline
weakly 2-random & may compute & may compute & may compute & may compute\\
\hline
Demuth random & min.\ pair & min.\ pair & min.\ pair & computes\\
\hline
$1$-random & may compute & may compute & may compute & may compute\\
\hline
\end{tabular}
\medskip
\noindent For a given pair consisting of a randomness notion and a genericity notion:
\begin{itemize}
\item `min.\ pair' means that for any random $X$ and any generic $G$, $(X,G)$ forms a minimal pair in the Turing degrees;
\item `may compute' means that there is a random~$X$ and a generic~$G$ such that $X$ computes~$G$; and
\item `computes' means that any random $X$ computes some generic~$G$.
\end{itemize}
We note that these three cases do not form an exhaustive list of possibilities. It would, for example, be interesting to find a natural pair consisting of a randomness notion and a genericity notion such that a random never computes a generic, but a random and a generic do not necessarily form a minimal pair.
\end{document}
\begin{document}
\begin{frontmatter}
\title{Predicting the traffic flux in the city of Valencia with Deep Learning}
\author[UPV]{Miguel G. Folgado}
\ead{[email protected]}
\author[UV,Sussex]{Ver\'onica Sanz}
\ead{[email protected]}
\author[UI1,AI]{Johannes Hirn}
\ead{[email protected]}
\author[UPV]{Edgar G. Lorenzo}
\ead{[email protected]}
\author[UPV]{Javier F. Urchueguía}
\ead{[email protected]}
\affiliation[UPV]{organization={ICTvsCC research group, Instituto Universitario de Tecnologías de la Información y Comunicaciones (ITACA), Universidad Politécnica de Valencia},
addressline={Edificio 8G Ciudad Politécnica de la Innovación, Camí de Vera, s/n},
city={Valencia},
postcode={46022},
country={Spain}}
\affiliation[UV]{organization={Instituto de Física Corpuscular (IFIC), Universidad de Valencia-CSIC},
addressline={Carrer del Catedrátic José Beltrán Martinez, 2},
city={Paterna},
postcode={46980},
country={Spain}}
\affiliation[Sussex]{organization={School of Mathematical and Physical Sciences, University of Sussex},
addressline={Falmer Campus},
city={Brighton},
postcode={BN1 9RH},
country={United Kingdom}}
\affiliation[UI1]{organization={Facultad de Ciencias y Tecnología, Universidad Isabel I de Castilla},
addressline={Calle Fernán González, 76},
city={Burgos},
postcode={09003},
country={Spain}}
\affiliation[AI]{organization={AI Superior GmbH},addressline={Robert-Bosch-Str. 7,
64293 Darmstadt}, country={Germany}}
\begin{abstract}
Traffic congestion is a major urban issue due to its adverse effects on health and the environment, so much so that reducing it has become a priority for urban decision-makers. In this work, we investigate whether a large amount of data on traffic flow throughout a city, together with knowledge of the city road network, allows an Artificial Intelligence to predict the traffic flux far enough in advance to enable emission reduction measures such as those linked to Low Emission Zone policies. To build a predictive model, we use the traffic sensor system of the city of Valencia, one of the densest in the world, with nearly 3500 sensors distributed throughout the city. We train and characterize an LSTM (Long Short-Term Memory) Neural Network to predict temporal patterns of traffic in the city using historical data from the years 2016 and 2017. We show that the LSTM is capable of predicting the future evolution of the traffic flux in real time by extracting patterns from the measured data.
\end{abstract}
\begin{keyword}
Traffic congestion \sep Traffic forecasting \sep Neural Network \sep Time Series \sep LSTM.
\end{keyword}
\end{frontmatter}
\section{Introduction}
Road traffic causes one-fifth of the total greenhouse gas (GHG) emissions in the European Union~\cite{EC,EEA,EEA2}. Specifically, for the city of Valencia, GHG emissions from road transport represent $60\%$ of its total GHG emissions~\cite{GHG}. Additionally, vehicle traffic is one of the main contributors to air pollution in the urban environment~\cite{art5}. Road vehicles emit pollutants in various ways ---exhaust, abrasion and resuspension--- the combination of which has an important impact on air quality~\cite{art5}.
Exposure to elevated concentrations of nitrogen dioxide (NO$_2$) and particulate matter (PM) is among the main risk factors for adverse health effects and premature deaths worldwide~\cite{art1,WHO}. For example, exposure to PM (especially the fine fraction) is correlated with outbreaks of allergy aggravation and with respiratory, cardiovascular and even cerebrovascular diseases~\cite{art2,art3,art4}. Additionally, exposure to elevated concentrations of NO$_2$ in the air is linked to a range of respiratory problems such as bronchoconstriction, increased bronchial reactivity, airway inflammation and decreased immune defenses leading to increased susceptibility to respiratory tract infections~\cite{UK}. This makes traffic congestion a major urban problem~\cite{10.7864/j.ctt1vjqprt,art6}, and reducing it has become a priority for urban decision-makers.
Some authors~\cite{RePEc:mtp:titles:0262012197} demonstrate the necessity of applying a microscopic approach to identify adequate mitigating actions for traffic congestion. In this view, high spatial and temporal resolutions are used to study traffic flows and patterns in the different traffic hotspots of the city.
The main objective of our work is to develop a Neural Network algorithm to identify spatial and temporal patterns (the visible ones and the not so obvious) and to predict traffic with high spatial and temporal resolution.
\section{Related work}
Several authors have developed models to identify the microscopic problems of traffic flux: examples can be found in Refs.~\cite{art7,PhysRevE.51.1035,RePEc:inm:oropre:v:9:y:1961:i:4:p:545-567}. These models are based on mathematical descriptions of various aspects of mobility, accounting for velocities and density of vehicles for instance. In this work, we present a different type of analysis, based on using a large amount of traffic flux data and the knowledge of the road city network. We then train an Artificial Intelligence to predict the traffic flux sometime far ahead in the future, potentially allowing policy-makers to take emission reduction measures.
The spectrum of traffic features that one can forecast is broad, for example, traffic speed~\cite{a8}, travel time~\cite{a36}, as well as traffic flux~\cite{a37}. In order to reduce emissions, we are primarily interested in the predictions of traffic flux: this variable will be the target of the study.
Several authors have studied how Neural Networks can predict different features of a traffic network. Some examples are based on long short-term memory networks (LSTMs)~\cite{2018arXiv181104745M,Song2016DeepTransportPA}, simpler gated recurrent units (GRU)~\cite{2014arXiv1406.1078C}, deep belief networks (DBN)~\cite{Huang2014DeepAF,a12}, stacked autoencoders~\cite{a2} and generative adversarial networks~\cite{2018arXiv180103818L}. More recent studies have used graph neural networks (GraphNN) to tackle this problem~\cite{2016arXiv160902907K,2013arXiv1312.6203B,2015arXiv151102136A,2016arXiv160601166V}. Also, some authors have studied the potential of combining different kinds of Neural Networks; see for example Ref.~\cite{2018arXiv180207007C}.
\section{Descriptive study}
\subsection{Data-set description}
\label{sect:dataframe}
This study is based on data from induction loops used to monitor traffic in the city of Valencia, composed of 3500 sensors distributed over 1500 road segments. These induction loops provide data on total traffic intensity (number of vehicles) and velocity at a given induction loop within a prescribed time window. The data obtained from the induction loops is registered in the Smart City València infrastructure~\cite{smartcity} and automatically transferred to the Universitat Politècnica de València.
Historical data from 2016 and 2017 for each of these road segments has been used in this work. To illustrate the practical use of the data in a future real-time pipeline, we used a time resolution of one hour in the predictive analysis, although the sensors can provide data with a 5-minute frequency.
The data is first processed to remove low-quality measurements, and statistical calculations are performed to obtain average values and deviations. In particular, before averaging over each hour, outliers are removed from the 5-minute frequency data, keeping only values in the range
\begin{equation}
m-3\sigma \leq v \leq m+3\sigma \ ,
\end{equation}
where $m$ is the road segment flux average over that hour, $\sigma$ is the standard deviation and $v$ the value of the evaluated data.
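A minimal sketch of this cleaning step (assuming \texttt{values} holds the twelve 5-minute readings of one segment within one hour; the function name is ours):

```python
# Sketch of the cleaning step: drop readings outside m ± 3σ, then average.
# Assumption: `values` holds the 5-minute flux readings of one road segment
# within one hour; the names are ours, not the Valencia pipeline's.
import numpy as np

def hourly_average(values, n_sigma=3.0):
    """Remove outliers beyond n_sigma standard deviations, average the rest."""
    values = np.asarray(values, dtype=float)
    m, s = values.mean(), values.std()
    kept = values[(values >= m - n_sigma * s) & (values <= m + n_sigma * s)]
    return kept.mean()
```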
\subsection{Geographical distribution of the sensorized road segments}
\label{sec:sample1}
In our analysis, we define the sensorized road sections as the sections where the traffic flux is completely determined by our sensors. In this sense, each sensor measures the traffic between two intersections, and its exact location along the road segment linking those two intersections is irrelevant in our analysis, except for vehicles possibly parking along that segment.
\begin{figure}
\caption{Map of the distribution of sensors in the road network of Valencia city. In grey we show the different road segments of Valencia's traffic network. The blue lines represent the road segments for which we have flux data, i.e., the sensorized road segments.}
\label{fig:sensores}
\end{figure}
In Fig.~\ref{fig:sensores} we show the different sensorized road sections (blue segments) for which we have obtained the traffic flux. The initial granularity of measurements was five minutes. However, such time resolution is unnecessary for a predictive analysis: realistic decision making and implementation will not be that fast, so we focus on predicting the flux one to two hours ahead.
As can be seen in Fig.~\ref{fig:sensores}, these sections represent only a small part of the complete road network of Valencia city. To be specific, the area being studied consists of 8595 road sections, of which less than $20\%$ are sensorized.
\subsection{Traffic flux}
As mentioned in Sec.~\ref{sect:dataframe}, we have data on flux and velocities from the sensorized roads of Valencia city. In our analysis, we are going to focus on the number of vehicles going through the monitored segments. We will define a measurement of traffic flux as the number of vehicles per unit of time. In our data, the time resolution will be one hour.
With this definition of flux, we move to conduct a descriptive analysis of the evolution of the flux with time. We will focus on {\it 1.)} the total traffic flux in all segments, {\it 2.)} the daily patterns for specific sections in the city (main access roads), {\it 3.)} the differences between incoming and outgoing flux, and {\it 4.)} rush hour traffic.
\subsubsection{Analysis of the total traffic flux}
\label{sec:weekly_flux}
We start analysing the data by looking at a global quantity, the total flux measured in all the sensorized segments. One would expect to see a weekly pattern, with quieter activity in the weekends and an increased flux during working days.
This is indeed the pattern found in the data, as shown in Fig.~\ref{fig:weekly_flux}. In this figure we plot the number of vehicles per hour relative to its mean over the week, $\sigma_h=N_h/\bar N$. Values of $\sigma_h$ below (above) 1 represent periods that are quieter (busier) than average.
\begin{figure}
\caption{The orange-shaded band represents the deviation $\sigma_h = N_h / \bar{N}$ of the hourly traffic flux from its weekly mean, for a typical week (September 2016, Monday to Sunday).}
\label{fig:weekly_flux}
\end{figure}
On the x-axis we chose a typical week, corresponding to September 2016, from Monday to Sunday.
As we can see in the plot, all days have a similar overall behavior, with quiet times during the night. But one can see marked differences across the week.
Each working day exhibits three peaks: morning and evening peaks when people travel to and from work or school, and a third peak around midday when people travel for lunch, typically back home.
The pattern of Friday is noticeably different from the rest of the working days, with an increased lunch-hour peak and a reduced afternoon peak, as workers may leave the office at lunchtime and not come back on Friday afternoon.
The pattern changes again during the weekend days, Saturday and Sunday. These two days we only observe two clear peaks, possibly related to midday shopping and evening social outings. We will discuss these effects in more detail in the next two subsections.
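The normalized flux $\sigma_h$ used above amounts to a single division; a minimal sketch (assuming \texttt{counts} is the series of hourly vehicle counts over the week):

```python
# Sketch of the normalized flux σ_h = N_h / N̄ used in the weekly plot.
# Assumption: `counts` is the hourly vehicle-count series over one week;
# the names are ours.
import numpy as np

def normalized_flux(counts):
    """Hourly counts divided by their mean; values < 1 are quieter than average."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.mean()
```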
\subsubsection{Analysis of the daily patterns in the main roads}
The analysis of Sec.~\ref{sec:weekly_flux} clearly indicates different patterns in the flux during the days in a standard week. Now we are going to move closer and analyze the evolution during the day, already visible in Fig.~\ref{fig:weekly_flux}.
\begin{figure}
\caption{Analysis of the traffic flux in the most traffic-heavy sensorized road sections. In the top-left panel we show the traffic flux evolution of 19/09/16 (Monday); in the top-right panel, that of 23/09/16 (Friday); in the bottom-left panel, that of 25/08/16 (Sunday); finally, in the bottom-right panel we show the analysed road segments.}
\label{fig:patrones}
\end{figure}
In the previous section we showed that the weekly patterns were clustered around three types of behaviour: Monday through Thursday, weekends and Friday. To analyse this pattern in more detail, we will focus on the main exit and entry roads in the city of Valencia: {\color{blue}Av. del Cid} to the West, {\color{teal} Av. de les Corts Valencianes} to the North and {\color{orange} Av. Ausias March} to the South\footnote{Valencia is bordered by the sea on the East side.}. We have determined in our analysis that these three segments represent the busiest roads in the city. The segments are located at the main access points to the city. The location of these avenues is shown in the lower right panel of Fig.~\ref{fig:patrones}.
The blue, green and orange lines in the graphs represent, respectively, segments located in {\color{blue} Av. del Cid}, {\color{teal} Av. de les Corts Valencianes} and {\color{orange} Av. Ausias March}. The top-left panel corresponds to the flux on a weekday (Monday 19/09/16), with a pattern that is replicated during the rest of the week until Friday.
We observe three characteristic peaks in the three segments, located around 8 AM, 2 PM and 7 PM.
If we focus on the early morning rush hour, we can distinguish different times for the start of the working day. There is a sharp increase of traffic from 6 AM, while the maximum is reached around 8 AM, related to the usual office working hours which start at 9 AM.
The next peak, located around 1 PM and likely related to lunchtime, is less acute in {\color{teal} Av. de les Corts Valencianes} than in the other two sections. This could be related to different lunch patterns, where workers coming from the North of the city tend to eat near their workplace, while workers from the South of the city keep the traditional custom of lunching at home. Another factor at play in this midday peak could be the work modality called {\it jornada continua}, where workers start early (8 AM) and leave the workplace at around 3 PM, with no lunch break. This modality is often found in public offices.
Both effects (lunch and {\it jornada continua}) seem to be at play, as we will discuss in the next sub-section. There we will plot the incoming and outgoing traffic, Fig.~\ref{fig:entrada_salida}. In this plot we see how there is indeed an incoming flux after lunch-time but also a strong overall outgoing peak around 3 PM.
The third traffic daily peak is found in the region 6 to 8 PM, when many workers will finish their duties and go back home.
Let us now focus on Fridays. The typical pattern can be seen in the top-right panel of Fig.~\ref{fig:patrones}, and one can again observe some differences between the West and South accesses to Valencia and the North access through Av. de les Corts Valencianes. On Fridays, the traffic flux of vehicles going through the North has a sharp peak around 3 PM, whereas in the other two main roads the flux pattern is similar to the other working days. This may indicate that many workers in the North of the city have more working-hours flexibility and/or are public workers with {\it jornada continua}.
Finally, we can analyse a typical weekend day in the bottom-left panel of Fig.~\ref{fig:patrones}. On these days, mobility is concentrated around midday and 7 PM, indicating that these movements may be related to entertainment, social outings and shopping.
\subsubsection{Incoming and outgoing traffic}
In this section we will analyse the daily patterns, but now differentiating between incoming and outgoing flux. In Fig.~\ref{fig:entrada_salida} we show the same Monday as in Fig.~\ref{fig:patrones}, but now plotting the flux coming to the city (incoming) and the flux going out of the city (outgoing).
\begin{figure}
\caption{Traffic flux in the three main accesses to the city for Monday, September 19, 2016. The left and right panels represent, respectively, the incoming and outgoing traffic.}
\label{fig:entrada_salida}
\end{figure}
The three roads exhibit different behavior during the day. For example, the {\color{blue} Av. del Cid} is the main road for incoming traffic in the morning, however most morning exits occur through {\color{orange} Av. Ausias March}. This could indicate that city dwellers who work outside the city travel to the industrial areas in the South of the city.
It is also possible that many drivers use GPS apps to find the quickest path to their workplace, which could imply that the best incoming route is not the best outgoing one.
It could also be the case that drivers choose smaller roads on one leg of the return trip. Nevertheless, the net vehicle balance over the three main roads of the city of Valencia is close to zero, as shown in Fig.~\ref{fig:totales}.
\begin{figure}
\caption{Sum over the incoming and outgoing traffic on the three main roads for Monday, September 19, 2016.}
\label{fig:totales}
\end{figure}
\subsubsection{Rush hour}
To finish this descriptive analysis, we will analyze the traffic flux during the two main rush hours in a weekday, specifically at 8 AM and 7 PM.
\begin{figure}
\caption{Traffic flux per sensorized road in the rush hours of Monday, September 19, 2016. In the left and right panels we show the traffic flux at 8 AM and 7 PM, respectively. The color scale is clipped at 2000 to improve visibility.}
\label{fig:rush_hour}
\end{figure}
For direct comparison with the previous plots, we show results from Monday, September 19, 2016, although these results are representative of the rush hours on all weekdays. As one can see in Fig.~\ref{fig:rush_hour}, the busiest roads are the main avenues and access points to the city.
\section{Predictive Analysis of the traffic}
\subsection{Analysis Description}
In the previous sections, we studied the different patterns of the traffic flux in Valencia city. As we have seen, there are strong regularities across the days of the week and the hours of the day. This indicates that modelling the traffic flux should be possible, as long as one can develop a model with enough complexity to account for these patterns.
The main idea of this work is to use a very expressive modelling tool based on Neural Networks, which could encapsulate the patterns we have identified, and others which may not be visible in a simple descriptive analysis. The problem we would like to focus on, the evolution of the traffic in a city, has both a spatial component (the traffic distribution on the road map) and a temporal one (the evolution with time).
Theoretically, one might expect that to achieve the best predictive power we should employ Graph Neural Networks (GraphNN), as they are ideal for this kind of problem. Unfortunately, the current data does not allow for a successful GraphNN analysis. When we developed a predictive analysis using GraphNNs, the first problem we encountered was the connectivity of the sensorized road segments. For a good graph description, it would be necessary to sensorize practically all the city road segments, and although we have explored different ways to overcome this issue, we find that the overall GraphNN predictivity is worse than that of the method we propose here.
In addition to this problem, there is an issue of generalizability of the results. When developing a GraphNN analysis, and using some interpolations in segments which are not covered, the overall results ended up being very sensitive to the geometry of the city. In this situation, it would not be easy to translate the GraphNN predictions to other cities, as it would be necessary to create a new graph for the new city and the interpolations we had made in the city of Valencia may not work for a different topology.
Some of these shortcomings were already pointed out in previous studies~\cite{2018arXiv180207007C}, where it was shown that including the map of the city in the Neural Network did not significantly increase the precision of the prediction. For these reasons, we chose to present our analysis with an LSTM (Long Short-Term Memory)~\cite{10.1162/neco.1997.9.8.1735} Neural Network instead of a GraphNN.
In the LSTM analysis, the inputs are time-ordered flux values corresponding to measured segments labelled by an ID number. The LSTM will learn to predict the future evolution of the flux in these segments, given a past behaviour.
Note that the spatial patterns of the city will be present in the flux data, but their locality will not be explicit. For example, the input to the LSTM will not carry any information about whether two ID segments are close to each other. Nevertheless, with enough data, the LSTM may extract some of these patterns from the measured flux data.
First, we must define the data that we will use in the training of the Neural Network. In all of the results we will present, the LSTM is trained using the measured flux during the 366 days of the year 2016. Once trained on this data, we test it on unseen data from 2017. In particular, we predict the traffic flux in seven days of February 2017 (more concretely, from Thursday 2 to Wednesday 8). Also, to test the precision of the LSTM in specific locations, we have studied the real and the predicted flux over the congested road segment located in {\color{blue} Av. del Cid}, one of the main arteries of Valencia city.
\begin{figure}
\caption{Structure of the LSTM used in the prediction of the traffic flux.}
\label{fig:arquitectura}
\end{figure}
In Fig.~\ref{fig:arquitectura} we describe the architecture of the Neural Network used for the analysis, the LSTM. Such Neural Networks are very useful in forecasting problems, often surpassing traditional forecasting methods~\cite{siami2018comparison}. They are particularly powerful when dealing with complex datasets, with many points in a multidimensional space. This is indeed our case, as each time slot corresponds to many entries (location IDs) and their corresponding traffic flux measurement.
The LSTM produces a set of predictions at step $t$, which we label $\Vec{y}_t$; see Fig.~\ref{fig:arquitectura}. These predictions are affected by the cell state (the long-term memory that contains information kept by the Neural Network from all the previous steps, $\Vec{C}_{t-1}$), the short-term memory (the information of the label in the nearest previous steps, $\Vec{y}_{t-1}$) and all the features of the current step, $\Vec{x}_t$. When the LSTM calculates the prediction for $\Vec{y}_t$, it also decides which information should be retained and which forgotten, thus updating the cell state $\Vec{C}_{t}$.
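To make the roles of $\Vec{C}_{t-1}$, $\Vec{y}_{t-1}$ and $\Vec{x}_t$ concrete, here is a minimal numpy sketch of one step of a standard LSTM cell (the weight shapes are illustrative, not the paper's exact architecture):

```python
# One step of a standard LSTM cell in plain numpy, illustrating how the
# gates combine y_{t-1}, x_t and the cell state C_{t-1}. The weight
# dictionary layout is our own illustrative choice.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, y_prev, C_prev, W, b):
    """One step: gates read [y_{t-1}, x_t], update C_t, emit y_t."""
    z = np.concatenate([y_prev, x_t])
    f = sigmoid(W["f"] @ z + b["f"])   # forget gate: what to drop from C
    i = sigmoid(W["i"] @ z + b["i"])   # input gate: what to write to C
    g = np.tanh(W["g"] @ z + b["g"])   # candidate cell content
    o = sigmoid(W["o"] @ z + b["o"])   # output gate
    C_t = f * C_prev + i * g           # long-term memory update
    y_t = o * np.tanh(C_t)             # short-term output
    return y_t, C_t
```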
\begin{figure}
\caption{Root mean square error of the neural network for the validation (orange) and training (blue) data sets per training epoch. The example chosen in the figure corresponds to the case where the Neural Network predicts 1h using the past 24h.}
\label{fig:loss}
\end{figure}
In all cases under study, we trained the LSTM for 11000 epochs. This number of epochs is justified by analysing Fig.~\ref{fig:loss}. In this figure, we plot the RMSE (Root-Mean-Squared Error) loss of the neural network per epoch: the loss decreases until about 10000 epochs and then starts to plateau, even on the logarithmic scale of the x-axis. The computational cost of training beyond this number of epochs is thus not justified: for instance, between 10000 and 11000 epochs the relative improvement in RMSE is negligible, $|\mathrm{RMSE}(10\mathrm{k}) - \mathrm{RMSE}(11\mathrm{k})| / \mathrm{RMSE}(10\mathrm{k}) < 10^{-5}$.
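The stopping check described above can be sketched as follows (assuming \texttt{history} is the list of validation RMSE values per epoch; the window and threshold mirror the figures quoted in the text):

```python
# Sketch of the plateau check: stop once the relative RMSE improvement over
# the last `window` epochs falls below `tol`. Assumption: `history` holds
# one validation RMSE per epoch; names and defaults are ours.
def improvement_negligible(history, window=1000, tol=1e-5):
    """True once the relative RMSE gain over the last `window` epochs is tiny."""
    if len(history) <= window:
        return False
    old, new = history[-window - 1], history[-1]
    return abs(old - new) / old < tol
```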
\subsection{Results}
The input to the learning algorithm is information on flux from all ID segments in 1h time steps. The LSTM is trained to predict the flux 1h or 2h ahead by looking back several hours.
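The look-back windows just described can be sketched as follows (assuming \texttt{series} is the hourly flux of one segment; the function name is ours):

```python
# Sketch of the supervised windows fed to the LSTM: look back `lookback`
# hours, predict `horizon` hours ahead. Assumption: `series` is the hourly
# flux of one road segment; names are ours.
import numpy as np

def make_windows(series, lookback=24, horizon=1):
    series = np.asarray(series, dtype=float)
    X, y = [], []
    for t in range(lookback, len(series) - horizon + 1):
        X.append(series[t - lookback:t])   # the past `lookback` hours
        y.append(series[t + horizon - 1])  # the flux 1h (or 2h) ahead
    return np.stack(X), np.array(y)
```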
Before we move on to show predictions, let us discuss whether the prediction target should be local (one segment) or global (all segments). We find that focusing on the prediction of a particularly busy road, or asking for predictions for the whole network, leads to very similar precision. Indeed, we chose the busy access road {\color{blue} Av. del Cid} discussed in the previous sections and predicted its traffic flux by analysing the 24 h before and predicting 1 h later. We then ran two different algorithms: one LSTM which predicted only the {\color{blue} Av. del Cid} flux, and another LSTM predicting all road segments, both using all segments as input. The RMSE obtained in both cases was $\mathrm{RMSE} = 320$ with a mean flux of $2200$ vehicles per hour. Given this result, in this paper we only show predictions for this busy road segment, as it is computationally cheaper. However, the analysis can be extrapolated to the case where we predict all road segments of the city.
\subsubsection{Traffic prediction and uncertainty}
Once we have chosen a road segment of interest, we can proceed to study the stability and forecasting errors of our LSTM approach. For the rest of the analysis, we forecast the traffic flux of {\color{blue} Av. del Cid} during the week between 02/02/2017 and 08/02/2017.
In Fig.~\ref{fig:bandas_error} we show the mean prediction (orange line) for {\color{blue} Av. del Cid} after training the LSTM ten different times on the 2016 data. The orange shaded band is the standard deviation of the forecasts. In the plot, we show the prediction 1h ahead, calculated using the previous 24h.
\begin{figure}
\caption{Real flux (blue dashed line) and the mean prediction (orange line) for {\color{blue} Av. del Cid}, with the shaded band showing the standard deviation over ten independent trainings.}
\label{fig:bandas_error}
\end{figure}
In this figure, the error band is barely wide enough to be seen for most predicted points. More concretely, the error band is always between $0.5\%$ and $10\%$ of the mean-predicted value, and the upper end of this range only occurs at times when the flux is near its (night-time) minimum.
As we have asserted that the uncertainty in the prediction is small, in the next figures we will show comparisons using only the mean predictions.
Another feature of this plot is that the LSTM tends to underpredict the flux during sharp increases. This is expected, and is typical behaviour for a time-series predictor: the LSTM takes information about events in the previous hours, so one should expect lower performance when predicting the flux peak in isolated hours. However, despite the expected error in the forecasts for those hours, the Neural Network captures the right tendency, predicting the time of high flux and an approximate value for it.
In forecasting problems, it is very common to find that the predicted curve is shifted to the right compared to the real data. This occurs when the algorithm has not actually learnt real patterns in the data and, as a consequence, the forecast for step $y_t$ is essentially the real value at the previous step $y_{t-1}$. However, in this study we do not observe any such displacement in the prediction curve, which shows that our LSTM has learned patterns from the data.
\subsubsection{The effect of local information in the prediction}
We now move on to discuss the impact of the training data on the prediction. As explained before, we have information on fluxes from thousands of sensorized segments in the city. Since we are looking to predict traffic for a single point in the city, the question arises of whether this coverage is really useful or whether one could get away with using less training data, for instance by restricting the inputs to local road segments.
It may be that a large coverage is needed to produce reliable predictions, if previous events in distant parts of the city impact the prediction on the target segment. To explore this question we study the prediction for the traffic flux of {\color{blue} Av. del Cid} in three different cases, changing the coverage in road segments: using as input either {\it 1.)} only the predicted segment (yellow line), {\it 2.)} the predicted segment and the nearby road segments (green line) or {\it 3.)} all the sensorized road segments (pink line).
The results are shown in Fig.~\ref{fig:tres_paradigmas}. In all three cases, we predict 1h into the future, using the past 24h of data as an input (training and inference).
First, note that all three input cases produce predictions with high accuracy. The predictions using only the target road segment and using the nearby road segments exhibit very similar accuracy, i.e. the yellow and green lines are practically indistinguishable. However, one can observe in these two lines a small shift to the right with respect to the dashed blue line (real data), an effect discussed in the previous section which indicates that the LSTM is not fully learning patterns of behaviour.
\begin{figure}
\caption{Comparison between the three different training paradigms under study for the input: with all sensorized roads (pink line), roads near the segment being studied (green line) and only the output road (yellow line).}
\label{fig:tres_paradigmas}
\end{figure}
This small shift is not present in the pink line, indicating that knowledge of a wider set of road segments adds valuable information. Indeed, the case where we use all sensorized road segments as input is different: as we can see in the Figure, the prediction is an excellent match to the real data except at the highest peaks, and the shift observed in the other two cases has completely disappeared. Therefore, we conclude that input/training data with a wide coverage of the traffic map is key to improving the predictions.
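The three input paradigms amount to selecting different columns of the same flux matrix before windowing; the sketch below illustrates this restriction (the segment index and the list of "nearby" segments are hypothetical, chosen only for the example).

```python
import numpy as np

# Sketch of the three input paradigms: the same hours x segments matrix,
# restricted to (1) the target segment only, (2) the target plus nearby
# segments, or (3) all segments.

flux = np.random.default_rng(1).random((100, 50))  # hours x segments
target = 7                        # index of the target segment (hypothetical)
nearby = [5, 6, 7, 8, 9]          # target + neighbours (hypothetical)

inputs_single = flux[:, [target]]  # paradigm 1: shape (100, 1)
inputs_local = flux[:, nearby]     # paradigm 2: shape (100, 5)
inputs_global = flux               # paradigm 3: shape (100, 50)
print(inputs_single.shape, inputs_local.shape, inputs_global.shape)
```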
\subsubsection{The effect of changing the length of future and past windows}
Finally, we study how many hours of past data are needed for good predictions, and how far ahead we are able to predict well. In Fig.~\ref{fig:comparacion_tiempos} we show the results of using different time windows for training and predicting: using information from the past 24h to predict the next 1h (pink line) or 2h (yellow line), and using the past 5h to predict 1h (light blue line) or 2h (dark blue line) into the future.
\begin{figure}
\caption{Comparison of the forecasts using different time windows in the past and future. The first panel shows the prediction 1h into the future, using either 5h (green) or 24h (red) of past data ---the two curves are nearly indistinguishable. In the second panel, we compare predictions 2h ahead, using either 5h (black) or 24h (yellow) of past data ---a 5h back window yields better predictions. Finally, in the third panel, we represent the predictions using 24h of past data, predicting either 1h ahead (red) or 2h ahead (yellow) ---a 2h forward window does worse. In the three panels, the blue dashed line represents the real flux.}
\label{fig:comparacion_tiempos}
\end{figure}
To show all the options and compare them, we divided the plot into three different panels in Fig.~\ref{fig:comparacion_tiempos}.
Additionally, for completeness in Fig.~\ref{fig:RMSE_comparacion}, we plot the RMSE of the different cases studied in this paper.
In the first panel of Fig.~\ref{fig:comparacion_tiempos}, we compare the actual flux (blue dashed line) with the prediction for the next 1h, using 24h (pink) and 5h (light blue) of past data. As one can see, including 24h of past data does very little to improve the predictions. In fact, the error turns out to be slightly higher when we use 24h, see Fig.~\ref{fig:RMSE_comparacion}. In that Figure, we see that the distance between the RMSE using 24h or 5h is practically constant, and that the prediction using only the previous 5h is better. This observation makes sense if we consider that the prediction of the traffic flux should only really be affected by events that occur a few hours before. The use of 24h time series in this case is not only computationally expensive, it also makes the Neural Network worse at extracting temporal patterns.
In the second panel of Fig.~\ref{fig:comparacion_tiempos} we plot the same as before, but predicting 2h instead of 1h ahead. The yellow and black lines represent, respectively, the predictions using 24h and 5h of past data. As in the first panel, the best prediction is obtained using 5h of past data. However, unlike the 1h prediction cases, where the improvement was small, here the 2h prediction is clearly better using 5h of past data instead of 24h. As in the first case, in the bottom panel of Fig.~\ref{fig:RMSE_comparacion} it is clear that the RMSE of the 24h-back, 2h-forward case (yellow curve) is the highest.
Finally, in the third panel of Fig.~\ref{fig:comparacion_tiempos}, we compare the prediction for the next 1h (pink) or 2h (yellow), using 24h of past data. As expected, the results are better when we attempt to predict fewer hours ahead.
\begin{figure}
\caption{Root mean square error (RMSE) per day in the studied week with the three studied input configurations for {\color{blue} Av. del Cid}.}
\label{fig:RMSE_comparacion}
\end{figure}
As mentioned previously, the last figure we show, Fig.~\ref{fig:RMSE_comparacion}, represents the RMSEs for all cases under study.
In the first panel (dashed lines), we can see the RMSEs for predictions using past data from the segment under consideration only, with the 4 combinations of duration for the past input data and for the forecast window represented as different colors. In that case, there is no difference between using the past 24h or 5h of data to train, and the curves are indistinguishable. On the other hand, we see that we are better at predicting 1h ahead rather than 2h ahead, as expected. Also, the lowest absolute errors are achieved on the weekend (4th and 5th of February), for which the mean traffic is also lower (see bottom panel in the Figure).
The second panel (in dash-dotted lines) shows the RMSEs for all cases, this time taking as input all road segments within 1km. This obviously increases the computing costs, yet the RMSE is very similar in all cases, except for one: predicting 1h into the future with 5h of past data. In that case, we get improved predictions by including nearby road segments rather than a single one: the LSTM is beginning to uncover the correlations between neighboring road segments and using this information to improve predictions. This only holds for the case with the short windows of time (5h into the past, 1h into the future).
In the third panel, we increase the computational cost again, using past data from all available sensorized road segments in order to predict our single segment of interest. In that case the LSTM is able to extract even more correlations between road segments, and we get a noticeable reduction in errors for all time windows (including when using 24h in past and predicting 2h into the future) compared to the top two panels. Yet, once again, we see that the best case is the one with the shortest time windows (5h backward, 1h forward): more past data seems to drown the relevant signal, making it harder for the LSTM to extract it, and predictions further into the future are always harder.
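The per-day RMSE curves discussed in the three panels can be sketched as a simple grouping of hourly errors into 24h blocks; the error series below is synthetic and only the one-RMSE-per-day structure reflects the figure.

```python
import numpy as np

# Sketch of a per-day RMSE computation over one week of hourly data:
# squared errors are reshaped into 7 blocks of 24 hours and one RMSE
# is reported per day.

def rmse_per_day(y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
    """Inputs of length 7*24; returns one RMSE per day, shape (7,)."""
    err2 = ((y_true - y_pred) ** 2).reshape(7, 24)
    return np.sqrt(err2.mean(axis=1))

rng = np.random.default_rng(2)
y_true = rng.random(7 * 24) * 2000                    # synthetic weekly flux
y_pred = y_true + rng.normal(0, 300, size=7 * 24)     # synthetic forecast
print(rmse_per_day(y_true, y_pred).shape)  # (7,)
```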
\section{Discussion and Outlook}
In this paper we have analyzed traffic data from the city of Valencia during the years 2016 and 2017.
We first identified several temporal and spatial patterns. We observed three traffic peaks on the working days Monday to Thursday corresponding to work/study travel and lunchtime. This workday pattern changed on Friday, signaling a variation in return times from work. We also found that the weekend days only display two peaks of traffic: midday and early evening.
Looking at the spatial distribution of this flux, we found interesting variations in incoming and outgoing traffic behaviour, especially in the most congested roads. All these findings could be used to design better intervention plans.
We then moved to an analysis based on predicting traffic flux one to two hours ahead of time. For this task, we trained several Neural Networks, explored their performance in various situations, and studied the impact of network coverage and of the past and future time windows used for predictions.
Overall we found an excellent capacity of predicting the flux, even at rush hours. For example, our analysis of the busiest access road in Valencia produced predictions with errors ranging from 15\% during weekdays to 20\% during weekends, see Fig.~\ref{fig:RMSE_comparacion}. In other words, in the busiest roads at the busiest moments during the day, the accuracy of the algorithm's prediction was close to 85\%.
We also noticed that, even though our model was not aware of the geographical relations between road segments, and even though sensorized segments cover only a fraction of the network (fewer than 20\% of all road segments), accuracy improved considerably by including data from road segments farther than 1km away. This means that the LSTM does learn correlations between road segments, even though the connections between them are not made explicit in the data and most road segments are missing. On the other hand, we noted that there was no gain to be had from including past data beyond 5h in order to predict the next 1h. This might change if we included data from more than 24h, perhaps using the attention mechanism.
This study could motivate the development of a real-time tool to predict the traffic in the city of Valencia. Also, the Neural Network could be used to test different intervention techniques, e.g. by simulating the effect of road closures or traffic light controls, and allowing the algorithm we trained with real data to produce predictions under these interventions.
\section*{Funding Statement}
The work of MGF is supported by the {\it Margarita Salas} postdoctoral fellowship from the Ministerio de Educaci\'on UP2021-044. The research of VS is supported by the Generalitat
Valenciana PROMETEO/2021/083 and the Ministerio de Ciencia e
Innovaci\'on PID2020-113644GB-I00. The work of JFU and EGL is supported by the Agència Valenciana de la Innovació (AVI) INNEST/2021/263.
\end{document}
\begin{document}
\title{Models \poprawka{of} Positive Truth}
\author{Mateusz Łełyk, Bartosz Wcisło}
\maketitle
\begin{abstract}
This paper is a follow-up to \cite{CWL}. We give a strengthening of the main result on the semantical non-conservativity of the theory $\textnormal{PT}^-$ with internal induction for total formulae ($\textnormal{PT}^- + \textnormal{INT}(\textnormal{tot})$). We show that if the axiom of internal induction for all arithmetical formulae is added to $\textnormal{PT}^-$ ($\textnormal{PT}^- + \textnormal{INT}$), then the resulting theory is semantically stronger than $\textnormal{PT}^- + \textnormal{INT}(\textnormal{tot})$. In particular, the latter is not relatively truth definable (in the sense of \cite{fujimoto}) in the former. Last but not least, we provide an axiomatic theory of truth which meets the requirements put forward by Fischer and Horsten in \cite{fischerhorsten}.
\end{abstract}
\section{Introduction}
\subsection{Axiomatic Theories of Truth}
\emph{Axiomatic theories of truth} is a branch of mathematical logic and philosophy which studies the properties of formal theories generated in the following way:
\begin{enumerate}
\item We take a \emph{base theory} $B$ which we demand to be sufficiently strong to (strongly) represent basic syntactical operations.
\item We extend the language of $B$ by adding one new unary predicate $T$ and some axioms for it, so that the resulting theory $\textnormal{Th}$ proves all sentences of the form
\[T(\qcr{\phi})\equiv \phi\]
for $\phi$ \emph{in the language of our base theory $B$}.
\end{enumerate}
For a brief introduction to the subject see \cite{stanford}; for a more complete one, see \cite{halbach}. The big question that this paper answers in a small part is the following: how do various axioms for the truth predicate influence its \emph{strength}? For the purpose of investigating this question we focus on truth theories with \emph{Peano Arithmetic} as the base theory. The notion of \textit{strength} admits many different explications. The simplest is given by inclusion of sets of consequences: we might say that $\textnormal{Th}_1$ is not weaker than $\textnormal{Th}_2$ if and only if $\textnormal{Th}_1$ proves all the axioms of $\textnormal{Th}_2$. For many applications this is too fine-grained: many theories, intuitively differing in strength, become incomparable for unimportant reasons (obviously this is not a formal notion). A better-adjusted notion, a special kind of interpretability, was introduced by Kentaro Fujimoto in \cite{fujimoto}. We recall the definition:
\begin{definicja}
\begin{enumerate}
\item Let $\textnormal{Th}_1$ and $\textnormal{Th}_2$ be axiomatic truth theories and let $T_{\textnormal{Th}_2}$ be the truth predicate of $\textnormal{Th}_2$. For any sentence $\theta$ of $\mathcal{L}_{\textnormal{Th}_2}$ and a formula $\phi(x)\in\mathcal{L}_{\textnormal{Th}_1}$ with precisely one free variable let
\[\theta^{\phi(x)}\]
denote the $\mathcal{L}_{\textnormal{Th}_1}$ sentence which results from $\theta$ by substituting $\phi(x)$ for every occurrence of $T_{\textnormal{Th}_2}(x)$ (and renaming variables, if necessary).
\item We say that $\textnormal{Th}_1$ \emph{relatively truth defines} $\textnormal{Th}_2$ if and only if there exists a formula $\phi(x)\in\mathcal{L}_{\textnormal{Th}_1}$ such that for any axiom $\theta$ of $\textnormal{Th}_2$
\[\textnormal{Th}_1\vdash \theta^{\phi(x)}\]
If $\textnormal{Th}_2$ relatively truth defines $\textnormal{Th}_1$, we will denote it by $\textnormal{Th}_1\leq_F\textnormal{Th}_2$\footnote{\poprawka{``$F$'' stands for ``Fujimoto''.}}.
\end{enumerate}
\end{definicja}
In terms of interpretations, relative truth definability is an $\mathcal{L}_{\textnormal{PA}}$-conservative interpretation between truth theories (for the terminology related to interpretations see e.g. \cite{visser}). It was argued in \cite{fujimoto} that relative truth definability provides a good explication of \textit{epistemological reduction} between truth theories. We may treat it as an explication of a notion of strength: $\textnormal{Th}_1$ is \emph{Fujimoto-stronger} than $\textnormal{Th}_2$ if and only if $\textnormal{Th}_1$ relatively truth defines $\textnormal{Th}_2$ but not \textit{vice versa}. This relation will be denoted by $\lneq_F$.
\subsection{Strength relative to $\textnormal{PA}$}
In some philosophical debates, especially the ones related to deflationism, the need for a differently oriented formal explication of strength seems to emerge. It has been claimed (most importantly in \cite{shapiro}, \cite{ketland} and \cite{horsten}) that the deflationary thesis that truth is a "simple" (aka "light", "metaphysically thin") notion implies that a deflationary theory of truth should be \emph{conservative} over $\textnormal{PA}$.\footnote{This thesis, however, has recently been criticised at length in \cite{ciesinn}.} Let us recall that a theory of truth can be conservative over $\textnormal{PA}$ in two senses:
\begin{definicja}
Let $\textnormal{Th}$ be a theory of truth.
\begin{enumerate}
\item We say that $\textnormal{Th}$ is \emph{proof-theoretically conservative} over $\textnormal{PA}$ if and only if for every $\phi\in\mathcal{L}_{\textnormal{PA}}$, if $\textnormal{Th}\vdash \phi$, then $\textnormal{PA}\vdash \phi$.\footnote{This property is also called \textit{syntactical conservativity}}.
\item We say that $\textnormal{Th}$ is \textit{model-theoretically conservative} over $\textnormal{PA}$ if and only if every model $\mathcal{M}$ of $\textnormal{PA}$ admits an \emph{expansion} to a model of $\textnormal{Th}$.\footnote{This relation is also called \textit{semantical conservativity}}.
\end{enumerate}
\end{definicja}
\begin{uwaga}
Note that, in the definition of model-theoretical conservativity, we do not merely demand every model to have an \emph{extension} to a model of $\textnormal{Th}$, in which case both notion of conservativity would be the same. We say that $\mathcal{M}'$ is an \emph{expansion} of $\mathcal{M}$ if \poprawka{$\mathcal{M}$ and $\mathcal{M} '$ are the same model, except that $\mathcal{M} '$ carries interpretation of additional function, relation and constant symbols.}
\end{uwaga}
The two notions lead in the natural way to the following generalisations:
\begin{definicja}
Let $\textnormal{Th}_1$ and $\textnormal{Th}_2$ be two truth theories.
\begin{enumerate}
\item We say that $\textnormal{Th}_1$ is \textit{proof-theoretically not stronger} than $\textnormal{Th}_2$ if every $\mathcal{L}_{\textnormal{PA}}$ sentence provable in $\textnormal{Th}_1$ is provable in $\textnormal{Th}_2$. If $\textnormal{Th}_1$ is proof-theoretically not stronger than $\textnormal{Th}_2$, we will denote it with $\textnormal{Th}_1\leq_{P}\textnormal{Th}_2$.\footnote{$P$ is meant to abbreviate "Proof".}
\item We say that $\textnormal{Th}_1$ is \textit{model-theoretically not stronger} than $\textnormal{Th}_2$ if every model which can be expanded to a model of $\textnormal{Th}_2$ can be expanded to a model of $\textnormal{Th}_1$. If $\textnormal{Th}_1$ is model-theoretically not stronger than $\textnormal{Th}_2$, we will denote it with $\textnormal{Th}_1\leq_{M}\textnormal{Th}_2$.\footnote{$M$ is meant to abbreviate "Model".}
\end{enumerate}
\end{definicja}
Obviously we say that $\textnormal{Th}_2$ is proof-theoretically (model-theoretically) stronger than $\textnormal{Th}_1$ if $\textnormal{Th}_1\leq_P \textnormal{Th}_2$ but $\textnormal{Th}_2\nleq_P\textnormal{Th}_1$ (respectively, $\textnormal{Th}_1~\leq_M~\textnormal{Th}_2$ but $\textnormal{Th}_2~\nleq_M~\textnormal{Th}_1$). This relation will be denoted $\lneq_P$ ($\lneq_M$ respectively).
Let us observe that the three notions of strength introduced above can be ordered with respect to their "granularity". Indeed, for any theories $\textnormal{Th}_1$ and $\textnormal{Th}_2$ we have:
\begin{equation}\nonumber\label{strength1}\tag{$FMP$}
\textnormal{Th}_1\leq_F \textnormal{Th}_2 \implies \textnormal{Th}_1\leq_M\textnormal{Th}_2 \implies \textnormal{Th}_1\leq_P\textnormal{Th}_2.
\end{equation}
Hence also
\begin{equation}\nonumber\label{strength2}\tag{$\neg PMF$}
\textnormal{Th}_2\nleq_P \textnormal{Th}_1 \implies \textnormal{Th}_2\nleq_M\textnormal{Th}_1 \implies \textnormal{Th}_2\nleq_F\textnormal{Th}_1.
\end{equation}
Having three different notions of strength makes it possible to decide not only whether one theory of truth is stronger than another one, but also \emph{how much} stronger it is.
\begin{comment}
In Section \ref{sect_instrum} we consider also another type of question: how strong must be truth theories provide they
\begin{enumerate}
\item are not interpretable in $\textnormal{PA}$ and
\item have super-exponential speed-up over $\textnormal{PA}$.
\end{enumerate}
This is related to the programme put forward in \cite{ficherhorsten}. We will go back to explaining its underlying motivations in Section \ref{sect_instrum}.
\end{comment}
\subsection{Compositional Positive Truth and its Extensions}
Before continuing let us introduce some handy notational conventions:
\begin{konwencja}\text{}
\begin{enumerate}
\item By using variables $\phi, \psi$ we implicitly restrict quantification to (G\"odel codes of) arithmetical sentences. For example, by $\forall \phi \ \ \Psi( \phi)$ we mean $\forall x \ \ \bigl(\textnormal{Sent}(x) \rightarrow \Psi(x)\bigr)$ and by $\exists \phi \ \ \Psi(\phi)$ we mean $\exists x \ \ \bigl(\textnormal{Sent}(x) \wedge \Psi(x)\bigr).$ For brevity, we will sometimes also use variables $\phi, \psi$ to run over arithmetical formulae, whenever it is clear from the context which one we mean; similarly
\begin{enumerate}
\item $\phi(v), \psi(v)$ run over arithmetical formulae with at most one indicated free variable (i.e. $\phi(v)$ is either a formula with exactly one free variable or a sentence); $\phi(\bar{x})$, $\psi(\bar{x})\ldots$ run over arbitrary arithmetical formulae.
\item $s,t$ run over codes of closed arithmetical terms;
\item $v,v_1,v_2, \ldots, w, w_1, w_2, \ldots$ run over codes of variables;
\end{enumerate}
\item $\textnormal{Form}_{\mathcal{L}_{\textnormal{PA}}}(x)$, $\textnormal{Form}^{\leq 1}_{\mathcal{L}_{\textnormal{PA}}}(x)$, $\textnormal{Sent}_{\mathcal{L}_{\textnormal{PA}}}(x)$ are natural arithmetical formulae strongly representing in $\textnormal{PA}$ the sets of (G\"odel codes of) formulae of $\mathcal{L}_{\textnormal{PA}}$, formulae of $\mathcal{L}_{\textnormal{PA}}$ with at most one free variable, sentences of $\mathcal{L}_{\textnormal{PA}}$, respectively.
\item if $\phi$ is a $\mathcal{L}_{\textnormal{PA}}$ formula, then $\qcr{\phi}$ denotes either its G\"odel code or the numeral denoting the G\"odel code of $\phi$ (context-dependently). This is the unique way of using $\qcr{\cdot}$ in this paper.
\item to enhance readability we suppress the formulae representing the syntactic operations. For example we write $\Phi(\psi \wedge \eta)$ instead of $\Phi(x) \wedge$ "$x$ is the conjunction of $\psi$ and $\eta$", similarly, we write $\Phi(\psi(t))$ instead of $\Phi(x) \wedge x = \textnormal{Subst}(\psi,t)$;
\item $\num{x}$ denotes the (G\"odel code of) standard numeral for $x$, i.e. $\qcr{\underbrace{S\ldots S(0)}_{x \textnormal{ times }S}}$
\item $\val{y}$ is the standard arithmetically definable function representing the value of term (coded by) $y$.
\end{enumerate}
\end{konwencja}
The main objective of this study is to measure the strength of theories that are compositional, but do not enjoy the global axiom for commutativity with the negation, i.e.
\begin{equation}\tag{NEG}\label{equat_neg}
\forall \phi \ \ \bigl(T(\neg\phi)\equiv \neg T(\phi)\bigr)
\end{equation}
Let us formulate the theories which will be of the main interest.
\begin{definicja}
$\textnormal{PT}^-$ is the axiomatic truth theory with the following axioms for the truth predicate:
\begin{enumerate}
\item
\begin{enumerate}
\item $\forall s,t \ \ \bigl(T(s=t) \equiv (\val{s}=\val{t})\bigr)$
\item $\forall s,t \ \ \bigl(T(\neg s=t) \equiv (\val{s}\neq\val{t})\bigr)$
\end{enumerate}
\item \begin{enumerate}
\item $\forall \phi,\psi \ \ \bigl(T(\phi\vee \psi)\equiv T(\phi)\vee T(\psi)\bigr)$
\item $\forall \phi,\psi \ \ \bigl(T(\neg (\phi\vee \psi))\equiv T(\neg\phi)\wedge T(\neg\psi)\bigr)$
\end{enumerate}
\item \begin{enumerate}
\item $\forall v \forall \phi(v)\ \ \bigl(T(\exists v\phi)\equiv \exists x\ \ T(\phi(\num{x}))\bigr)$
\item $\forall v\forall \phi(v)\ \ \bigl(T(\neg\exists v\phi)\equiv \forall x\ \ T(\neg\phi(\num{x}))\bigr)$
\end{enumerate}
\item $\forall \phi\ \ \bigl(T(\neg\neg\phi)\equiv T(\phi)\bigr)$
\item $\forall \phi\forall s,t\Bigl(\val{s}=\val{t}\rightarrow \Big(T(\phi(s))\equiv T(\phi(t))\Big)\Bigr)$
\end{enumerate}
\end{definicja}
In the arithmetized language, we treat $\wedge$ and $\forall$ as symbols defined contextually, i.e. $\phi\wedge \psi = \neg(\neg\phi\vee\neg\psi)$ and $\forall v\phi = \neg\exists v\neg\phi$. Then it is easy to check that the following sentences are provable in $\textnormal{PT}^-$:
\begin{enumerate}
\item $\forall \phi,\psi \left(T(\phi\wedge\psi)\equiv \left(T(\phi)\wedge T(\psi)\right)\right)$.
\item $\forall \phi,\psi \left(T\left(\neg\left(\phi\wedge\psi\right)\right)\equiv \left(T\left(\neg\phi\right)\vee T\left(\neg\psi\right)\right)\right)$.
\item $\forall v\forall \phi(v) \left(T(\forall v\phi)\equiv \left(\forall x\ \ T(\phi(\num{x}))\right)\right)$
\item $\forall v\forall \phi(v) \left(T(\neg\forall v\phi)\equiv \left(\exists x\ \ T(\neg\phi(\num{x}))\right)\right)$
\end{enumerate}
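For instance, the first of these is immediate from the contextual definition of $\wedge$ together with the negated-disjunction and double-negation axioms; a sketch of the chain of equivalences, reasoning inside $\textnormal{PT}^-$:
\begin{align*}
T(\phi\wedge\psi) &\equiv T(\neg(\neg\phi\vee\neg\psi)) && \text{definition of } \wedge\\
&\equiv T(\neg\neg\phi)\wedge T(\neg\neg\psi) && \text{axiom for } T(\neg(\phi\vee\psi))\\
&\equiv T(\phi)\wedge T(\psi) && \text{double negation axiom, twice.}
\end{align*}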
In $\textnormal{PT}^-$ the internal logic (i.e. the logic of all \emph{true} sentences) is modelled after the Strong Kleene Scheme. Let us observe that axioms of $\textnormal{PT}^-$ make it possible to accept a disjunction $\phi\vee \psi$ as true simply on the basis of the truth of one of $\phi$ and $\psi$ and regardless of whether the second one has its truth value determined. The second theory we will study is more cautious in this respect. Let us define
\[\textnormal{tot}(\phi(v)) := \textnormal{Form}^{\leq 1}(\phi(v))\wedge \forall x \bigl( T(\phi(\num{x}))\vee T(\neg\phi(\num{x}))\bigr)\]
In particular if $\psi$ is a sentence, then
$$\textnormal{PAT}^-\vdash\textnormal{tot}(\psi) \equiv \biggl(T(\psi)\vee T(\neg\psi)\biggr),$$
where $\textnormal{PAT}^-$ is the extension of $\textnormal{PA}$ in $\mathcal{L}_{\textnormal{PA}}\cup \was{T}$, with no non-logical axioms for $T$.
\begin{definicja}
$\textnormal{WPT}^-$ is the axiomatic truth theory with the following axioms for the truth predicate:
\begin{enumerate}
\item
\begin{enumerate}
\item $\forall s,t \ \ \bigl(T(s=t) \equiv (\val{s}=\val{t})\bigr)$
\item $\forall s,t \ \ \bigl(T(\neg s=t) \equiv (\val{s}\neq\val{t})\bigr)$
\end{enumerate}
\item \begin{enumerate}
\item $\forall \phi,\psi \ \ \bigl(T(\phi\vee \psi)\equiv \bigl(\textnormal{tot}(\phi)\wedge \textnormal{tot}(\psi) \wedge \bigl(T(\phi)\vee T(\psi)\bigr)\bigr)\bigr)$
\item $\forall \phi,\psi \ \ \bigl(T(\neg (\phi\vee \psi))\equiv T(\neg\phi)\wedge T(\neg\psi)\bigr)$
\end{enumerate}
\item \begin{enumerate}
\item $\forall v \forall \phi(v)\ \ \bigl(T(\exists v\phi)\equiv \textnormal{tot}(\phi(v))\wedge \exists x T(\phi(\num{x}))\bigr)$
\item $\forall v\forall \phi(v)\ \ \bigl(T(\neg\exists v\phi)\equiv \forall x\ \ T(\neg\phi(\num{x}))\bigr)$
\end{enumerate}
\item $\forall \phi\ \ \bigl(T(\neg\neg\phi)\equiv T(\phi)\bigr)$
\item $\forall \phi\forall s,t\bigl(\val{s}=\val{t}\rightarrow T(\phi(s))\equiv T(\phi(t))\bigr)$
\end{enumerate}
\end{definicja}
Using the above mentioned conventions regarding $\wedge$ and $\forall$, it is an easy exercise to show that the following sentences are provable in $\textnormal{WPT}^-$:
\begin{enumerate}
\item $\forall \phi,\psi \left(T(\phi\wedge\psi)\equiv \left(T(\phi)\wedge T(\psi)\right)\right)$.
\item $\forall \phi,\psi \left(T\left(\neg\left(\phi\wedge\psi\right)\right)\equiv \left(\textnormal{tot}(\phi)\wedge\textnormal{tot}(\psi) \wedge \left(T\left(\neg\phi\right)\vee T\left(\neg\psi\right)\right)\right)\right)$.
\item $\forall v\forall \phi(v) \left(T(\forall v\phi)\equiv \left(\forall x\ \ T(\phi(\num{x}))\right)\right)$
\item $\forall v\forall \phi(v) \left(T(\neg\forall v\phi)\equiv \left(\textnormal{tot}(\phi(v))\wedge\exists x\ \ T(\neg\phi(\num{x}))\right)\right)$
\end{enumerate}
In $\textnormal{WPT}^-$ the internal logic is modelled after the \emph{Weak Kleene Scheme}. $(\textnormal{W})\textnormal{PT}^-$ can be seen as a natural stratified counterpart of $(\textnormal{W})\textnormal{KF}^-$.\footnote{For the definition of all mentioned theories \poprawka{not defined in this paper,} consult \cite{halbach} or \cite{fujimoto} (for $\textnormal{WKF}$).} Since $(\textnormal{W})\textnormal{PT}^-$ is in particular a subtheory of $(\textnormal{W})\textnormal{KF}^-$ and the latter are well known to be model-theoretically conservative over $\textnormal{PA}$ (see \cite{cant}; we will outline a direct proof of the model-theoretical conservativity of $\textnormal{PT}^-$ in Section \ref{sect_int}), we have
\begin{stwierdzenie}
$\textnormal{PT}^-$ and $\textnormal{WPT}^-$ are model-theoretically conservative.
\end{stwierdzenie}
In particular we see that the axiom \eqref{equat_neg} may contribute to the strength of truth theories: it is easy to see that $(W)\textnormal{PT}^- + \eqref{equat_neg}$ is deductively equivalent to the theory $\textnormal{CT}^-$, hence in particular by the well-known theorem of Lachlan (see \cite{halbach},\cite{kaye}) it is not semantically conservative.
For the sake of convenience let us isolate one easily noticeable feature of $\textnormal{PT}^-$ and $\textnormal{WPT}^-$:
\begin{definicja}[$\textnormal{UTB}$]
Let $\phi(x_0,\ldots,x_n)$ be any arithmetical formula. $\textnormal{UTB}^-(\phi)$ is the following $\mathcal{L}_T$ sentence
\begin{equation}\label{UTB}\tag{$\textnormal{UTB}^-(\phi)$}
\forall t_0\ldots t_n\ \ \bigl(T(\qcr{\phi(t_0,\ldots, t_n)})\equiv \phi(\val{t_0},\ldots,\val{t_n})\bigr)
\end{equation}
Define
\[\textnormal{UTB}^- := \set{\textnormal{UTB}^-(\phi(x_0,\ldots,x_n))}{\phi(x_0,\ldots,x_n)\in\mathcal{L}_{\textnormal{PA}}}\]
and $\textnormal{UTB}$ to be the extension of $\textnormal{UTB}^-$ with all instantiations of the induction scheme with $\mathcal{L}_T$ formulae.
\end{definicja}
\begin{fakt}\label{utb_w_pt_wpt}
Both $\textnormal{PT}^-$ and $\textnormal{WPT}^-$ prove $\textnormal{UTB}^-$.
\end{fakt}
In \cite{fischer2}, \cite{fischer3} and \cite{fischerhorsten} (this philosophical motivation was also summarized in \cite{CWL}), the authors motivated the need for a weak theory of truth which would be able to prove, in a single sentence, that every arithmetical formula satisfies the induction scheme. Such a fact can be naturally expressed by the $\mathcal{L}_T$ sentence
\begin{equation}\label{equat_ind}\tag{\textnormal{INT}}
\forall \phi(x)\ \ \biggl[\bigl(\forall x \bigl(T(\phi(\num{x}))\rightarrow T(\phi(\num{x+1}))\bigr)\bigr)\longrightarrow \bigl(T(\phi(0))\rightarrow \forall x T(\phi(\num{x}))\bigr)\biggr]
\end{equation}
For further usage let us abbreviate the formula
\[\bigl(\forall x \bigl(T(\phi(\num{x}))\rightarrow T(\phi(\num{x+1}))\bigr)\bigr)\longrightarrow \bigl(T(\phi(0))\rightarrow \forall x T(\phi(\num{x}))\bigr)\]
by $\textnormal{INT}(\phi(x))$. Using Fact \ref{utb_w_pt_wpt}, we see that both $\textnormal{PT}^- + \textnormal{INT}$ and $\textnormal{WPT}^- + \textnormal{INT}$ can prove any arithmetical instance of the induction scheme in a uniform way, using the same finitely many axioms for each formula\footnote{The proof is straightforward: we fix $\phi(x)$ (with parameters), prove the instance of the \ref{UTB} scheme for $\phi$ and substitute $\phi(x)$ for $T(\phi(\num{x}))$ in \ref{equat_ind}.}. In particular, $\textnormal{PT}^- + \textnormal{INT}$ can be finitely axiomatised by taking $I\Sigma_1$ together with the axioms for the truth predicate from $\textnormal{PT}^-$ and \eqref{equat_ind}. To achieve this goal, however, none of the discussed theories uses the full strength of \eqref{equat_ind}.
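To make the uniform derivation fully explicit (this merely spells out the argument of the footnote, using nothing beyond \ref{UTB} and \eqref{equat_ind}): fix an arithmetical formula $\phi(x)$. The instance \ref{UTB} yields
\begin{displaymath}
T(\qcr{\phi(\num{x})})\equiv \phi(x)
\end{displaymath}
for every $x$, so replacing each occurrence of $T(\phi(\num{x}))$ in $\textnormal{INT}(\phi(x))$ by $\phi(x)$ gives the usual induction axiom
\begin{displaymath}
\bigl(\forall x\, \bigl(\phi(x)\rightarrow\phi(x+1)\bigr)\bigr)\longrightarrow\bigl(\phi(0)\rightarrow\forall x\,\phi(x)\bigr).
\end{displaymath}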
\begin{comment}
Working in $\textnormal{PT}^-$ let us define an important class of arithmetical formulae: formula $\phi(x)$ is \emph{total}, $\textnormal{tot}(\phi(x))$ for short, if
\begin{equation}\label{equat_tot}\tag{tot}
\forall x \ \ \bigl(T(\phi(\num{x}))\vee T(\neg\phi(\num{x}))\bigr)
\end{equation}
Hence a formula $\phi(x)$ is total if for every $x$ its truth on $x$ is decided. It can be shown by a routine proof that $\textnormal{PT}^-$ does not prove that all formulae are total, but by \eqref{UTB} every \emph{standard} formula is.
\end{comment}
By \ref{UTB}, every standard formula is \emph{total}, provably in $\textnormal{WPT}^-$, where a formula $\phi(x)$ is called \emph{total} ($\textnormal{tot}(\phi(x))$ for short) if $\forall x \ \bigl(T(\phi(\num{x}))\vee T(\neg\phi(\num{x}))\bigr)$ holds. Hence it makes good sense to consider a version of \eqref{equat_ind} restricted to total formulae, i.e.
\begin{equation}\label{equat_int_tot}\tag{$\textnormal{INT}\upharpoonright_{\tot}$}
\forall \phi(v)\ \ \biggl[\textnormal{tot}(\phi(v))\longrightarrow \textnormal{INT}(\phi(v))\biggr]
\end{equation}
The theory $\textnormal{PT}^- + \textnormal{INT}\upharpoonright_{\tot}$ was claimed to be model-theoretically conservative in \cite{fischer} (and then used as such in \cite{fischer2},\cite{fischer3} and \cite{fischerhorsten}). However, as shown in \cite{CWL}, the proof of its conservativity contained an essential gap, and in fact no prime model of $\textnormal{PA}$\footnote{For the definition of all notions from the model theory of $\textnormal{PA}$ see \cite{kaye}.} admits an expansion to a model of $\textnormal{PT}tot$. Moreover, it was shown there that every recursively saturated model of $\textnormal{PA}$ can be expanded to a model of this theory. In particular, $\textnormal{PT}^-+\textnormal{INT}\upharpoonright_{\tot}$ is model-theoretically stronger than $\textnormal{PT}^-$ and weaker than $\textnormal{UTB}$ and $\textnormal{CT}^-$.
In the current study, we further approximate the class of models expandable to a model of $\textnormal{PT}^-+\textnormal{INT}\upharpoonright_{\tot}$ and compare the strength of $\textnormal{UTB}$ with that of $\textnormal{PT}^- + \textnormal{INT}$. Moreover, we show that $\textnormal{WPT}^- + \textnormal{INT}$ is model-theoretically conservative and meets the requirements posed in \cite{fischerhorsten}. Our results, together with some well-known facts from the literature, give the following picture of the interdependencies between proof-theoretically conservative theories of truth:
\begin{center}
\begin{tikzcd}
& \textnormal{CT}^- + \textnormal{INT} &\\
\textnormal{CT}^-\ar[ur, Rightarrow]& \overset{?}{\Longleftrightarrow} & \textnormal{PT}^-+\textnormal{INT}\ar[ul, Rightarrow]\\
& \textnormal{UTB}\ar[ur, Rightarrow]\ar[ul, Rightarrow] &\\
& \textnormal{PT}tot\ar[u] &\\
& \textnormal{TB}\ar[u]&\\
\textnormal{PT}^-\ar[ur]& \Longleftrightarrow & \textnormal{WPT}^- + \textnormal{INT}\ar[ul]
\end{tikzcd}
\end{center}
where $\longrightarrow$ stands for $\lneq_M$ and $\Longrightarrow$ for $\leq_M$. The question whether any of the $\Longrightarrow$ arrows is in fact a $\longrightarrow$ arrow is open. Similarly, the relation between the classes of models of $\textnormal{CT}^-$ and $\textnormal{PT}^- + \textnormal{INT}$ is unknown.
\section{Models of $\textnormal{PT}^- + \textnormal{INT}\upharpoonright_{\tot}$}\label{sect_tot}
In the paper \cite{CWL}, it has been shown that $\textnormal{PT}tot$ is not semantically conservative over $\textnormal{PA}$ and, moreover, any (not necessarily countable) recursively saturated model of $\textnormal{PA}$ admits an expansion to a model of $\textnormal{PT}tot$. The nonconservativity result has been obtained by demonstrating that no prime model of $\textnormal{PA}$ can be expanded in such a way. Now, we will show a strengthening of that result. Let us first recall one definition.
\begin{definicja}
Let $M$ be a model of $\textnormal{PA}$. We say that $M$ is \df{short recursively saturated} if any recursive type (with finitely many parameters from $M$) of the form $p(x) = \set{x<a \wedge \phi_i(x)}{i \in \mathbb{N}}$, where $a$ is some fixed element of $M$, is realised in $M$.
\end{definicja}
In other words, a model is short recursively saturated if it realises all types which are finitely realised \emph{below} some fixed element. This notion is strictly weaker than full recursive saturation. For example, the standard model $\mathbb{N}$ is short recursively saturated (below a fixed $a \in \mathbb{N}$ there are only finitely many elements, and since the sets defined by the conjunctions $\phi_0 \wedge \ldots \wedge \phi_k$ decrease with $k$, an element realising cofinally many of them realises the whole type) but not recursively saturated. More generally, a countable model is short recursively saturated if and only if it has a recursively saturated elementary end extension, see \cite{smorynski}, Theorem 2.8.
\begin{tw}\label{tw_srecsat_to_petetot}
Let $M \models \textnormal{PA}$ and suppose that $M$ has an expansion $(M,T) \models \textnormal{PT}tot$. Then $M$ is short recursively saturated.
\end{tw}
The proof of our theorem will closely parallel the proof of Theorem 4 from \cite{CWL}. In particular, we will again use a propositional construction invented by Smith.
\begin{definicja} \label{df_alternatives_stopping}
Let $(\alpha_i)_{i\leq c}, (\beta_i)_{i \leq c}$ be any sequences of sentences. We define the \df{alternative with stopping condition} $(\alpha_i)$
\begin{displaymath}
\bigvee_{i=i_0}^{c,\alpha} \beta_i
\end{displaymath}
by backwards induction on $i_0$ as follows:
\begin{enumerate}
\item $\bigvee_{i=c}^{c,\alpha} \beta_i = \alpha_c \wedge \beta_c.$
\item $\bigvee_{i=k}^{c,\alpha} \beta_i = \neg (\alpha_k \wedge \neg \beta_k) \wedge \Bigl( (\alpha_k \wedge \beta_k) \vee \big(\neg \alpha_k \wedge \bigvee_{i=k+1}^{c,\alpha} \beta_i \big) \Bigr).$
\end{enumerate}
\end{definicja}
We may think of this as a formalisation in propositional logic of the following instruction: for $i$ from $i_0$ up to $c$, search for the first number $j$ such that $\alpha_j$ holds and then check whether $\beta_j$ holds as well. \emph{Then stop your search.} The whole formula is true if this $\beta_j$ is true, and is false if either $\beta_j$ is false or there is no $j$ such that $\alpha_j$ holds. It turns out that this intuition may be partially recovered in theories of truth, even if one does not assume that the truth predicate satisfies induction axioms.
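To see how the definition works in the smallest nontrivial case, take $i_0=0$ and $c=1$; unfolding Definition \ref{df_alternatives_stopping} gives
\begin{displaymath}
\bigvee_{i=0}^{1,\alpha}\beta_i = \neg(\alpha_0\wedge\neg\beta_0)\wedge\Bigl((\alpha_0\wedge\beta_0)\vee\bigl(\neg\alpha_0\wedge(\alpha_1\wedge\beta_1)\bigr)\Bigr),
\end{displaymath}
which holds exactly when the first true $\alpha_j$ is accompanied by a true $\beta_j$, as in the intuitive reading above.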
\begin{lemat} \label{lem_total_stopping_disjunction}
Fix $(M,T) \models \textnormal{PT}^-.$ Suppose that $(\alpha_i)_{i \leq c}, (\beta_i)_{i \leq c}$ are sequences of arithmetical sentences coded in $M$. Suppose that the least $j$ such that $T(\alpha_j)$ holds is standard, say $j=j_0$, and that for any $k \leq j_0$ either $T\left( \beta_k\right)$ or $T \left(\neg \beta_k\right)$ holds. Then
\begin{enumerate}
\item $(M,T) \models T \Bigl(\bigvee_{i=0}^{c,\alpha} \beta_i \Bigr) \equiv T \left(\beta_{j_0}\right).$
\item $(M,T) \models T \Bigl(\neg \bigvee_{i=0}^{c,\alpha} \beta_i \Bigr) \equiv T \left( \neg \beta_{j_0}\right).$
\end{enumerate}
\end{lemat}
For a proof, see \cite{CWL}, Lemma 2.3.
Now we are ready to prove that any model of $\textnormal{PA}$ expandable to a model of $\textnormal{PT}tot$ is short recursively saturated.
\begin{proof}
Fix any recursive type $p(x) = (\phi_i(x) \wedge x<a)_{i \in \omega}$ (with a parameter $a$) and suppose that for any finite set $\phi_0, \ldots, \phi_k$ there is some $b_k<a$ such that $M \models \phi_0(b_k) \wedge \ldots \wedge \phi_k(b_k).$ Let
\begin{eqnarray*}
\alpha_0(x) & = & \neg (x < \num{a}) \vee \neg \phi_0(x) \\
\alpha_{j+1}(x) & = & x<\num{a} \wedge \phi_0(x) \wedge \ldots \wedge \phi_{j}(x) \wedge \neg \phi_{j+1}(x).
\end{eqnarray*}
In a sense, the formulae $\alpha_j(x)$ measure how much of the type $p$ is realised by $x$. Now, if the type $p$ is omitted in the model $M$, then for any $x$ there exists a standard $j$ such that $(M,T) \models T\alpha_j(\num{x}).$
Let $\beta_j(y)$ be defined as
\begin{displaymath}
\beta_j(y) = y< \num{a} \wedge \phi_0(y) \wedge \ldots \wedge \phi_{j+1}(y).
\end{displaymath}
Now, fix any nonstandard $c$ and consider the (nonstandard) formula
\begin{displaymath}
\phi(x,y) = \bigvee_{i=0}^{c,\alpha(x)} \beta_i(y).
\end{displaymath}
By Lemma \ref{lem_total_stopping_disjunction} and our assumption that the type $p$ is omitted in $M$, the sentence $\phi(\num{x},\num{y})$ is either true or false for any fixed $x,y \in M.$ But this means that the formula $\phi(x,y)$ is total. One can check that then the formula
\begin{displaymath}
\psi(z) = \exists y \forall x<z \phi(x,y)
\end{displaymath}
is also total. Note that this formula intuitively says that there is a $y$ which satisfies more of the type $p$ than any element of $M$ below $z.$ Now, we will show that $\psi(z)$ is progressive, i.e.,
\begin{displaymath}
(M,T) \models \forall z \Big(T \psi(\num{z}) \rightarrow T \psi(\num{z+1})\Big).
\end{displaymath}
Fix any $z$ and suppose that $T\psi(\num{z})$ holds. Then there exists a $y$ such that $T (\qcr{\forall x<z \ \phi(x,\num{y})}).$ Now, let $j$ be the least number such that $T \alpha_j(\num{z}).$ Since $j$ is a standard number and $p$ is a type, there exists $y'$ such that $\phi_0(y') \wedge \ldots \wedge \phi_{j+1}(y')$ holds in $M$, i.e. $(M,T) \models T \beta_j(\num{y'})$. Let $y'' = y$ if also
\begin{displaymath}
T \Big(\phi_0(\num{y}) \wedge \ldots \wedge \phi_{j+1}(\num{y}) \Big)
\end{displaymath}
and $y''=y'$ otherwise. In other words, we take either $y$ or $y'$, whichever satisfies ``more'' of the formulae $\phi_i.$ One readily checks that then
\begin{displaymath}
(M,T) \models \forall x < z+1 \ \phi(\num{x},\num{y''}).
\end{displaymath}
We have shown that the formula $\psi(z)$ is total and progressive. By the internal induction for total formulae this means that
\begin{displaymath}
(M,T) \models \forall z \ T \psi(\num{z}).
\end{displaymath}
In particular, we have $T \psi(\num{a}),$ where $a$ is the parameter used as a bound in the type $p$. But then for some $d$, we have
\begin{displaymath}
(M,T) \models \forall x<a \ T\phi(\num{x}, \num{d}).
\end{displaymath}
Now, since $p$ is a type, for an arbitrary $k \in \omega$ there exists some $x<a$ such that $\phi_0(x) \wedge \ldots \wedge \phi_k(x)$. Since $(M,T) \models T \phi(\num{x},\num{d}),$ it follows that $d<a \wedge \phi_0(d) \wedge \ldots \wedge \phi_{k+1}(d).$ As $k$ was arbitrary, we see that $d$ actually realises the type $p$. We conclude that $M$ is short recursively saturated.
\end{proof}
Let us summarize our findings from \cite{CWL} and this paper:
\begin{itemize}
\item Any recursively saturated model of $\textnormal{PA}$ (possibly uncountable) admits an expansion to a model of $\textnormal{PT}tot.$
\item If a model $M$ expands to a model of $\textnormal{PT}tot$, then it is short recursively saturated.
\end{itemize}
Unfortunately, we do not know whether any of the implications reverses.
\poprawka{Cieśliński and Engstr\"om have (independently) found the following characterisation of the class of models of $\textnormal{PA}$ which admit an expansion to a model of $\textnormal{TB}$, i.e., the truth theory axiomatised with the induction scheme for the whole language and the following scheme of Tarski's biconditionals:
\begin{displaymath}
T (\qcr{\phi}) \equiv \phi,
\end{displaymath}
where $\phi$ is an arithmetical sentence.
\begin{tw}[Cieśliński, Engstr\"om]\footnote{See \cite{cies_disquotational}, Theorem 7.}
Let $M$ be a nonstandard model of $\textnormal{PA}$. Then the following are equivalent:
\begin{enumerate}
\item $M$ admits an expansion to a model $(M,T) \models \textnormal{TB}$.
\item There exists an element $c \in M$ such that for all (standard) arithmetical sentences $\phi$, $M \models \qcr{\phi} \in c$ iff $M \models \phi$, i.e., $M$ codes its own theory.
\end{enumerate}
\end{tw}
It can be easily shown that every nonstandard short recursively saturated model $M \models \textnormal{PA}$ satisfies the second item of the above characterisation. Hence, every short recursively saturated model of $\textnormal{PA}$ admits an expansion to a model of $\textnormal{TB}.$ Thus we obtain the following corollary:
\begin{wniosek} \label{cor_modele_pttot_sa_modelami_tb}
$\textnormal{TB} \leq_M \textnormal{PT}tot$, i.e., every model $M \models \textnormal{PA}$ which admits an expansion to a model of $\textnormal{PT}tot$, also admits an expansion to a model of $\textnormal{TB}$.
\end{wniosek}}
\subsection{A non-result}
We are going to show that the method used to prove that every recursively saturated model of $\textnormal{PA}$ admits an expansion to a model of $\textnormal{PT}^-+\textnormal{INT}\upharpoonright_{\tot}$ cannot be used to obtain a sharper upper bound on the class of models expandable to this theory (if such a sharper upper bound exists). Recall that in \cite{CWL} it was shown that every recursively saturated model of $\textnormal{PA}$ can be expanded to a model of \poprawka{$\textnormal{PT}^- + \textnormal{INT}\upharpoonright_{\tot}$}. Let us recall the standard definition of a function which generates possible extensions for the $\textnormal{PT}^-$ truth predicate.
\begin{konwencja}
If $\mathcal{M}\models \textnormal{PA}$, then $\textnormal{Sent}_{\mathcal{M}}$, $\textnormal{Form}^{\leq 1}_{\mathcal{M}}$, $\textnormal{Term}_{\mathcal{M}}$ denote the set of sentences of $\mathcal{L}_{\textnormal{PA}}$, the set of formulae with at most one free variable and the set of terms, respectively, \emph{in the sense of $\mathcal{M}$.}
\end{konwencja}
\begin{definicja}\label{defin_oper_theta}
Let $\mathcal{M}\models \textnormal{PA}$ and $A\subseteq \mathcal{M}$. Define:
\begin{eqnarray}
\Theta_{\mathcal{M}}(\phi,A)&:= &\ \ \mathcal{M}\models \exists s,t\ \ [\phi = (s=t)\wedge \val{s} = \val{t}]\nonumber\\
&\vee&\ \ \mathcal{M}\models\exists s,t\ \ [\phi = \neg(s=t) \wedge \val{s}\neq \val{t}]\nonumber\\
&\vee&\ \ \exists \psi\in \textnormal{Sent}_{\mathcal{M}} [\mathcal{M}\models \phi = \neg\neg\psi]\wedge \psi\in A\nonumber\\
&\vee&\ \ \exists \psi_1,\psi_2 \in \textnormal{Sent}_{\mathcal{M}}[\mathcal{M}\models\phi = (\psi_1\vee\psi_2)] \wedge \bigl((\psi_1\in A)\vee (\psi_2\in A)\bigr)\nonumber\\
&\vee&\ \ \exists \psi_1,\psi_2 \in \textnormal{Sent}_{\mathcal{M}}[\mathcal{M}\models\phi = \neg(\psi_1\vee\psi_2)] \wedge \bigl( (\neg\psi_1\in A)\wedge (\neg\psi_2\in A)\bigr)\nonumber\\
&\vee&\ \ \exists \psi(x)\in {\textnormal{Form}}^{\leq 1}_{\mathcal{M}}[\mathcal{M}\models\phi = \exists x\psi] \wedge \exists x\in \mathcal{M}\ \ (\psi(\num{x})\in A)\nonumber\\
&\vee&\ \ \exists \psi(x)\in {\textnormal{Form}}^{\leq 1}_{\mathcal{M}}[\mathcal{M}\models\phi = \neg\exists x\psi] \wedge \forall x\in \mathcal{M}\ \ (\neg\psi(\num{x})\in A)\nonumber
\end{eqnarray}
Let $\Gamma^{\mathcal{M}}: \mathcal{P}(M)\rightarrow\mathcal{P}(M)$ be the function defined:
\begin{equation}\label{Gamma}\tag{$\Gamma$}
\Gamma^{\mathcal{M}}(A) = \set{\phi\in\mathcal{M}}{\Theta_{\mathcal{M}}(\phi,A)}.
\end{equation}
Let us now define:
\begin{eqnarray}
\Gamma^{\mathcal{M}}_0 &=& \Gamma^{\mathcal{M}}(\emptyset)\nonumber\\
\Gamma^{\mathcal{M}}_{\alpha+1} &=& \Gamma^{\mathcal{M}}(\Gamma^{\mathcal{M}}_{\alpha})\nonumber\\
\Gamma^{\mathcal{M}}_{\beta} &=& \bigcup_{\alpha<\beta}\Gamma^{\mathcal{M}}_{\alpha},\textnormal{ for $\beta$ a limit ordinal.}\nonumber
\end{eqnarray}
It can be checked that for some ordinal $\alpha$ we must get $\Gamma^{\mathcal{M}}_{\alpha+1} = \Gamma^{\mathcal{M}}_{\alpha}$, i.e., $\Gamma^{\mathcal{M}}_{\alpha}$ is a fixpoint of $\Gamma^{\mathcal{M}}$. In general, if $A$ is any fixpoint of $\Gamma^{\mathcal{M}}$, then
\[(\mathcal{M},A)\models\textnormal{PT}^-.\]
Let $\alpha_{\mathcal{M}}$ denote the least ordinal $\alpha$ such that $\Gamma_{\alpha}^{\mathcal{M}}$ is a fixpoint of $\Gamma^{\mathcal{M}}$.
\end{definicja}
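To illustrate the first steps of this hierarchy (a direct unwinding of Definition \ref{defin_oper_theta}): only the first two clauses can fire on the empty set, so
\begin{displaymath}
\Gamma^{\mathcal{M}}_0 = \set{\phi\in\mathcal{M}}{\mathcal{M}\models \exists s,t\ [\phi=(s=t)\wedge \val{s}=\val{t}]\vee \exists s,t\ [\phi=\neg(s=t)\wedge\val{s}\neq\val{t}]},
\end{displaymath}
i.e., $\Gamma^{\mathcal{M}}_0$ is the set of true (in the sense of $\mathcal{M}$) literals, and each further application of $\Gamma^{\mathcal{M}}$ adds the sentences obtainable from the previous stage by one more application of the clauses above.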
In \cite{CWL}, the following lemmata were proved:
\begin{lemat}\label{lemat_recsat_to_omega}
If $\mathcal{M}\models \textnormal{PA}$ is recursively saturated, then $\alpha_{\mathcal{M}} = \omega$.
\end{lemat}
\begin{lemat}
If $\mathcal{M}\models \textnormal{PA}$ and $\alpha_{\mathcal{M}}=\omega$, then $(\mathcal{M},\Gamma^{\mathcal{M}}_{\omega})\models \textnormal{PT}^- + \textnormal{INT}\upharpoonright_{\tot}$.
\end{lemat}
Now we shall show that the converse of Lemma \ref{lemat_recsat_to_omega} holds. In particular, our method of finding expansions to models of $\textnormal{PT}^- + \textnormal{INT}\upharpoonright_{\tot}$ works only for recursively saturated models.
\begin{lemat}
Let $\mathcal{M}\models \textnormal{PA}$ be non-standard. If $\alpha_{\mathcal{M}} = \omega$, then $\mathcal{M}$ is recursively saturated.
\end{lemat}
\begin{proof}
We prove the contrapositive: suppose that a non-standard model $\mathcal{M}$ is not recursively saturated.
\begin{comment}
Observe that without loss of generality we may assume that $\mathcal{M}$ is short recursively saturated, for otherwise, by Theorem \ref{tw_srecsat_to_petetot} $\mathcal{M}$ cannot be expanded to a model of $\textnormal{PT}^- + \textnormal{INT}\upharpoonright_{\tot}$ and \textit{a fortiori} $\alpha_{\mathcal{M}}\neq \omega$.
\end{comment}
Let $p(x)$ be a recursive type using parameters from $\bar{a}$ which is omitted in $\mathcal{M}$. Let $(\phi_i(x,\bar{y}))_i$ be an arithmetically representable enumeration of formulae in $p(x)$. Without loss of generality, assume that $\phi_0(x,\bar{y}) = (x=x)$. Let
\[\psi_i(x,\bar{y}) = \bigwedge_{j< i}\phi_j(x,\bar{y})\wedge\neg \phi_i(x,\bar{y}).\]
Then every $b\in M$ satisfies exactly one of the formulae $\psi_i(x,\bar{a})$ (since $p(x)$ is omitted).
\begin{comment}
and $\psi_i(x,\bar{y}) = \phi_{i}(x,\bar{y})\wedge \forall z<x\neg\phi_i(x,\bar{y})$. $\psi_i(x,\bar{a})$ is the canonical definition (with parameters) of the first element satisfying first $i$ formulae from $p(x)$. Let $a_i$ denote the unique element $c\in M$ such that
\[\mathcal{M}\models \psi_i(c,\bar{a})\]
Since $p(x)$ is omitted and $\mathcal{M}$ is short recursively saturated we have that $(a_i)_{i\in\omega}$ is unbounded in $\mathcal{M}$. Moreover, by definition, for every $i$
$$a_i\leq a_{i+1}.$$
For the sake of readability, let $a_i\leq x < a_{i+1}$ abbreviate the formula
$$\exists z_1 \exists z_2 (\phi_i(z_1,\bar{y})\wedge \phi_{i+1}(z_2,\bar{y})\wedge z_1\leq x<z_2).$$
\end{comment}
Now, for every $n\in\omega$ we shall define formulae $\theta_n(x,\bar{y})$ as follows:
\begin{eqnarray}
\theta_n^0(x,\bar{y}) &=& (x\neq x)\nonumber\\
\theta_n^{k+1}(x,\bar{y}) &=&\psi_{n-(k+1)}(x,\bar{y})\vee \theta_n^{k}(x,\bar{y})\nonumber\\
\theta_n(x,\bar{y}) &=& \theta^{n}_n(x,\bar{y})\nonumber
\end{eqnarray}
Let us observe that the above construction can be arithmetized and therefore for some $b\in M\setminus\mathbb{N}$ there exists a (code of a) formula $\theta_b(x,\bar{y})$, which is of the following form:
\[\psi_0(x,\bar{y}) \vee \bigl(\psi_1(x,\bar{y})\vee \bigl(\psi_2(x,\bar{y})\vee \bigl(\ldots\bigl(\psi_{b-1}(x,\bar{y}) \vee x\neq x\bigr)\ldots\bigr)\bigr)\bigr)\]
Then for each $c\in M$ there exists $n\in\omega$ such that $\theta_b(\num{c},\bar{\num{a}})\in \Gamma^{\mathcal{M}}_n$, since each $c$ satisfies some $\psi_i(x,\bar{a})$. But also for every $i\in\omega$ there exists $c\in M$ such that the least $n$ for which $\psi_n(c,\bar{a})$ holds is greater than $i$.
Consequently, there is no $k\in\omega$ for which
\[\theta_b(c,\bar{\num{a}})\in\Gamma_k^{\mathcal{M}}\]
for every $c\in M$. In particular, $\forall v\,\theta_b(v,\bar{\num{a}})\notin \Gamma_{\omega}^{\mathcal{M}}$ and consequently the $\textnormal{PT}^-$ axiom
$$\forall v \forall \phi(v)\ \ \bigl(T(\forall v\phi)\equiv \forall x T(\phi(\num{x}))\bigr)$$
is not satisfied in $(\mathcal{M},\Gamma^{\mathcal{M}}_{\omega})$. Hence $\alpha_{\mathcal{M}}\neq \omega$.
\end{proof}
\section{Models of $\textnormal{PT}^- + \textnormal{INT}$}\label{sect_int}
Since we have shown that $\textnormal{PT}tot$ is not a model-theoretically weak theory, as was originally hoped, one could start wondering whether it differs in some significant respect from $\textnormal{PT}int.$ In this section we will show that this is indeed the case: it turns out that $\textnormal{PT}int$ is model-theoretically strictly stronger than $\textnormal{PT}tot.$ As we shall see, any model of $\textnormal{PA}$ expandable to a model of $\textnormal{PT}int$ is also expandable to a model of $\textnormal{UTB}.$ We know that any model of $\textnormal{PA}$ expandable to a model of $\textnormal{UTB}$ is recursively saturated and that this containment is strict, i.e., not every recursively saturated model of $\textnormal{PA}$ admits an expansion to a model of $\textnormal{UTB}.$\footnote{\poprawka{We know that there exist rather classless recursively saturated models of $\textnormal{PA}$, i.e., recursively saturated models $M \models \textnormal{PA}$ with the following property: for every $X \subseteq M$ such that every initial segment of $X$ is coded in $M$, the set $X$ is definable in $M$ with an arithmetical formula (with parameters). Since no subset of $M$ definable with an arithmetical formula can satisfy $\textnormal{UTB}^-$, we see that no such model $M$ can admit an expansion to a model of $\textnormal{UTB}$. The existence of recursively saturated, rather classless models was shown by Kaufmann in \cite{kaufmann} under the additional set-theoretic assumption $\diamond$. The assumption was later eliminated by Shelah, \cite{shelah}, Application C, p. 74.}}
On the other hand, it has been shown in \cite{CWL}, Theorem 3.3 that any recursively saturated model of $\textnormal{PA}$ admits an expansion to a model of $\textnormal{PT}tot.$
\begin{tw} \label{tw_ptint_a_utb}
Suppose that $(M,T)$ is a model of $\textnormal{PT}int$. Then there exists a $T'$ such that $(M,T') \models \textnormal{UTB}.$
\end{tw}
\begin{proof}
Let $(M,T) \models \textnormal{PT}int$. We will find $T'$ such that $(M,T') \models \textnormal{UTB}.$ Without loss of generality we may assume that $M$ is nonstandard. As in the previous section, we will use Lemma \ref{lem_total_stopping_disjunction}. Let us fix any primitive recursive enumeration $(\phi_i)_{i =0}^{\infty}$ of arithmetical formulae. Then let
\begin{displaymath}
\alpha'_i(\phi,t)
\end{displaymath}
be defined as the (formalised version of the) formula ``$t$ is a (finite) sequence of terms $(t_1, \ldots, t_n)$ and $\phi = \phi_i(t_1, \ldots, t_n)$'' and let
\begin{displaymath}
\alpha_i(\phi,t,b) = \alpha_i'(\phi,t) \vee \num{i}>b.
\end{displaymath}
Let
\begin{displaymath}
\beta'_i(t)
\end{displaymath}
be defined as ``$t$ is a (finite) sequence of terms $t_1, \ldots, t_n$ and $\phi_i(t_1, \ldots,t_n)$.'' Let
\begin{displaymath}
\beta_i(t,b)
\end{displaymath}
be $\beta'_i(t) \wedge \num{i} \leq b.$ Note that $\phi$ is not a free variable of the formula $\beta_i.$
Let us fix any nonstandard $c \in M$ and let
\begin{displaymath}
\tau(\phi,t,b) = \bigvee_{i=0}^{c,\alpha(\phi,t,b)} \beta_i(t,b).
\end{displaymath}
Note that if $c$ were standard, say $c=n$, the predicate $\tau$ would be equivalent to the very simple arithmetical truth predicate:
\begin{displaymath}
\tau_n(\phi,t,b) = \bigvee_{i=0}^{n} \bigl(\phi= \phi_i(t) \wedge \phi_i(t) \wedge \num{i}<b\bigr).
\end{displaymath}
At this point one may wonder what the role of the variable $b$ is. It is purely technical: we artificially truncate our truth predicates so that they work only for the first $b$ formulae. \poprawka{This is to some extent controlled by the parameter $c$ in the definition of $\tau$, since whenever $c$ is standard, the formula $\tau$ works like a truth predicate only for the first $c$ sentences. However, $c$ is not a variable in the formula $\tau$, but rather a parameter describing the syntactic shape of $\tau$, whereas we need this truncation to be expressed with a variable, for reasons which will shortly become clear.}
It turns out that for some parameter $b$ the formula given by
\begin{displaymath}
T'(\phi) =\exists t \ T (\tau(\num{\phi},\num{t},\num{b}))
\end{displaymath}
satisfies $\textnormal{UTB}$. We will prove this claim in a series of lemmata, which will conclude the proof.
\end{proof}
\begin{lemat} \label{lem_utb-}
Let $\tau'(\phi,t) = \tau(\phi,t,b)$ for some fixed nonstandard $b$. Then for an arbitrary standard arithmetical formula $\phi(v_1,\ldots,v_n)$ and an arbitrary sequence of terms $t=(t_1,\ldots, t_n)$, possibly nonstandard (but with the length of the sequence standard), we have
\begin{displaymath}
(M,T) \models T\tau'(\num{\phi(t_1, \ldots, t_n)},\num{t}) \equiv \phi(t_1, \ldots, t_n).
\end{displaymath}
\end{lemat}
\begin{proof}
If $\phi$ is standard, then $\phi = \phi_i$ for some standard $i.$ So by Lemma \ref{lem_total_stopping_disjunction}
\begin{displaymath}
(M,T) \models T\tau'(\num{\phi(t_1, \ldots, t_n)},\num{t}) \equiv T\beta_i(\num{t},\num{b}) \equiv \phi_i(t_1, \ldots, t_n),
\end{displaymath}
which is exactly the claim of the lemma.
\end{proof}
Note that the above lemma is true in pure $\textnormal{PT}^-$. We have used no induction at all. Now we only need to check that for some parameter $b$ the predicate $T'(\phi, t)$ defined as $T \tau(\num{\phi}, \num{t},\num{b})$ is fully inductive.
\begin{lemat} \label{lem_tot_cons}
Let $T'$ be defined as in the above proof. Then for some $b$, the formula $\tau'(\phi,t) = \tau(\phi,t,\num{b})$
is total and consistent, i.e., for all $\phi$ and $t$, exactly one of $T\tau'(\num{\phi},\num{t})$ and $T \neg \tau'(\num{\phi}, \num{t})$ holds.
\end{lemat}
\begin{proof}
Note first that for any standard $b$, the formula $\tau(\phi,t,\num{b})$ is total and consistent. Namely, since $\alpha_i(\phi,t,n)$ is true for any $i > n$,
we see that for any $\phi,t$ the least $i$ such that $\alpha_i(\phi,t,n)$ holds is standard (it is at most $n+1$) and then the assumptions of Lemma \ref{lem_total_stopping_disjunction} are satisfied. This implies that for any fixed $\xi$ the formula $T\tau(\num{\xi},\num{t},\num{n})$ is equivalent to some $\phi_i(\val{t}) \wedge \num{i} \leq\num{n}$,
which is a standard formula. This implies that for any $t$, exactly one of $T\tau(\num{\xi},\num{t},\num{n}), T\neg \tau(\num{\xi},\num{t},\num{n})$ holds.
Now, consider the formula
\begin{displaymath}
\psi(b) = \forall \phi,t \ \Big(\tau(\phi,t,b) \vee \neg \tau(\phi,t,b)\Big).
\end{displaymath}
We have just shown that for an arbitrary standard $n$ we have $T\forall b<\num{n} \ \psi(b).$
So by internal induction we have for some nonstandard $d_1$
\begin{displaymath}
T \bigg(\forall b\leq \num{d_1} \forall \phi,t \ \Big(\tau(\phi,t,b) \vee \neg \tau(\phi,t,b)\Big)\bigg),
\end{displaymath}
which gives
\begin{displaymath}
\forall b\leq d_1 \forall \phi,t \ \Big(T\tau(\num{\phi},\num{t},\num{b}) \vee T\neg \tau(\num{\phi},\num{t},\num{b})\Big).
\end{displaymath}
Similarly, let
\begin{displaymath}
\xi(b) = \exists \phi,t \ \Big(\tau(\phi,t,b) \wedge \neg \tau(\phi,t,b)\Big).
\end{displaymath}
Suppose that $T (\exists d<\num{b} \ \xi(d))$ held for every nonstandard $b<d_1$. Then by underspill we would have $T\xi(\num{n})$ for some $n \in \omega,$ but we have just shown that this is impossible. So there exists some nonstandard $b<d_1$ such that for any $d \leq b$ and any $\phi,t$ at most one of $T\tau(\num{\phi},\num{t},\num{d}), T \neg \tau(\num{\phi},\num{t},\num{d})$ holds. At the same time, we know that at least one of these formulae holds. So $\tau'(\phi,t) = \tau(\phi,t,b)$ is total and consistent.
\end{proof}
We are very close to showing that we have defined a predicate satisfying full induction. Before we proceed, we have to introduce some new notation. Let $\eta$ be any formula containing a unary predicate $P$ not in the language of $\textnormal{PT}^-$ and let $\xi(v)$ be an arbitrary formula with one free variable. Then by $\eta[\xi/P]$ (or simply $\eta[\xi]$) we mean the formula resulting from substituting $\xi(v_i)$ for every instance of $P(v_i)$ in $\eta$. We assume that all the variables in $\eta$ have been renamed so as to avoid clashes.
Let us give an example. Let $\eta(x,y) = P(x+y) \wedge \exists z \ (z=y \wedge P(z))$. Let $\xi(v) = (v>0)$. Then
\begin{displaymath}
\eta[\xi] = x+y>0 \wedge \exists z \ (z=y \wedge z>0).
\end{displaymath}
Now, basically, we would like to finish the proof in the following way. Let $\tau'$ be a total formula defined as in the above lemmata and let $\eta$ be an arbitrary standard formula from the arithmetical language enlarged with a fresh unary predicate $P$. Then, applying compositional axioms a couple of times, we see that
\begin{displaymath}
T(\eta[\tau]) \equiv \eta[T \tau].
\end{displaymath}
Let us call this principle the \df{generalised commutativity}. If this were true, then we could conclude our proof. Namely, by the internal induction principle, we know that
\begin{displaymath}
\bigl(\forall x \bigl(T(\eta[\tau](\num{x}))\rightarrow T(\eta[\tau](\num{x+1}))\bigr)\bigr)\longrightarrow \bigl(T(\eta[\tau](\num{0}))\rightarrow \forall x T(\eta[\tau](\num{x}))\bigr)
\end{displaymath}
which, by generalised commutativity, would allow us to conclude that
\begin{displaymath}
\bigl(\forall x \bigl(\eta[T\tau](x)\rightarrow \eta[T\tau](x+1)\bigr)\bigr)\longrightarrow \bigl(\eta[T\tau](0)\rightarrow \forall x\, \eta[T\tau](x)\bigr).
\end{displaymath}
Since the choice of $\eta$ was arbitrary, this precisely means that $\tau$ satisfies the full induction scheme.
\poprawka{The generalised commutativity principle in the form stated above does not even quite make sense, since we would have to apply the truth predicate to a formula containing free variables. Therefore, we have to restate it in a somewhat more careful manner.}
\begin{definicja}
Fix a unary predicate $P$. Let $\eta$ be an arbitrary formula from the language containing that predicate. We say that $\eta$ is in \df{semirelational} form if the predicate $P$ is applied only to variables rather than to arbitrary terms.
\end{definicja}
We may always assume that formulae we \emph{use} are semirelational, \poprawka{since we may eliminate any occurrence of $P(t)$ for complex terms $t$, by replacing it with $\exists x \ (x = t \wedge P(x))$. This is expressed in the following lemma:}
\begin{lemat} \label{lem_semirelational}
Any formula is equivalent in first-order-logic to a formula in semirelational form.
\end{lemat}
Now we are ready to state generalised commutativity lemma in a proper manner.
\begin{lemat} \label{lem_gen_commutativity}
Let $(M,T) \models \textnormal{PT}^-.$ Let $T * \xi(x) = T(\xi(\num{x}))$ for every $x.$ Suppose that $\xi$ is total and consistent. Let $\eta$ be an arbitrary standard formula from the arithmetical language extended with a fresh unary predicate $P$. Then the formula $\eta[\xi]$ is total and consistent, and
\begin{displaymath}
(M,T) \models \forall x_1, \ldots, x_n \big( T(\eta[\xi](\num{x_1},\ldots, \num{x_n})) \equiv \eta[T * \xi](x_1, \ldots, x_n)\big).
\end{displaymath}
\end{lemat}
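Before giving the proof, it may help to record the simplest nontrivial instance. Taking $\eta(x) = \neg P(x)$, the lemma asserts that for a total and consistent $\xi$,
\begin{displaymath}
(M,T) \models \forall x\ \bigl(T(\neg\xi(\num{x}))\equiv \neg T(\xi(\num{x}))\bigr),
\end{displaymath}
and this is precisely where totality and consistency are needed: in $\textnormal{PT}^-$ alone the truth of $\neg\xi(\num{x})$ need not be determined by the failure of $T(\xi(\num{x}))$.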
The lemma generalises to the case where the predicate $P$ is not unary (i.e., $\xi$ may have more than one free variable); the proof may be easily adapted to cover this case. We will actually use the lemma in the case where $P$ is binary.
\begin{proof}
We prove both claims simultaneously by induction on the complexity of $\eta.$ Suppose that $\eta$ is an atomic formula. Then it is either of the shape $s=t$ for some standard arithmetical terms $s$, $t$, or of the form $P(x)$.
In the first case, $\eta[\xi] = \eta$, and the following equivalences hold:
\begin{eqnarray*}
T(s(\num{x_1}, \ldots, \num{x_n})=t(\num{x_1}, \ldots, \num{x_n})) & \equiv & s(\num{x_1}, \ldots, \num{x_n})^{\circ}=t(\num{x_1}, \ldots, \num{x_n})^{\circ} \\
& \equiv & s(x_1, \ldots, x_n) = t(x_1, \ldots, x_n) \\
& = & \eta[T*\xi](x_1, \ldots, x_n).
\end{eqnarray*}
If $\eta = P(x)$, then $\eta[\xi] = \xi$ and
\begin{displaymath}
T(\eta[\xi](\num{x})) = T(\xi(\num{x})) = \eta[T * \xi](x).
\end{displaymath}
Let us now prove the induction step. If $\eta$ is a conjunction or disjunction, then the proof is straightforward (the fact that a conjunction or disjunction of sentences which are either true or false is itself either true or false is an easy application of the compositional axioms of $\textnormal{PT}^-$). If $\eta = \neg \rho$, then by the induction hypothesis $\rho[\xi]$ is total and consistent. By the compositional axiom for double negation for the truth predicate, the formula $\neg \rho[\xi]$ is then also total and consistent, and the following equivalences hold:
\begin{eqnarray*}
T(\neg \rho[\xi](\num{x_1}, \ldots, \num{x_n})) & \equiv & \neg T(\rho[\xi](\num{x_1}, \ldots, \num{x_n})) \\
& \equiv & \neg \rho[T * \xi](x_1, \ldots, x_n).
\end{eqnarray*}
The induction step for the quantifiers is also simple. Let us prove it for the existential quantifier. Suppose that $\eta = \exists x \ \rho(x,x_1, \ldots, x_n).$ Then
\begin{eqnarray*}
T(\exists x \ \rho[\xi](x, \num{x_1}, \ldots, \num{x_n})) & \equiv & \exists x \ T(\rho[\xi](\num{x}, \num{x_1}, \ldots, \num{x_n})) \\
& \equiv & \exists x \ \rho[T * \xi](x, x_1, \ldots,x_n) \\
& = & \eta[T * \xi](x_1, \ldots, x_n).
\end{eqnarray*}
The second equivalence follows by the induction hypothesis and the last equality by definition. So let us check that $\eta[\xi]$ is total and consistent. Suppose that $T(\exists x \rho[\xi](x, \num{x_1}, \ldots, \num{x_n}))$ does not hold. Then by compositional axioms for the truth predicate, there is no $x$ such that
\begin{displaymath}
T(\rho[\xi](\num{x}, \num{x_1}, \ldots, \num{x_n})).
\end{displaymath}
By the induction hypothesis, $\rho[\xi]$ is total and consistent, so for all $x$ we must have
\begin{displaymath}
T (\neg \rho[\xi](\num{x}, \num{x_1}, \ldots, \num{x_n})).
\end{displaymath}
This entails, again by the compositional clauses,
\begin{displaymath}
T (\neg \exists x \ \rho[\xi](x, \num{x_1}, \ldots, \num{x_n})).
\end{displaymath}
This proves totality; consistency is established analogously, using the consistency of $\rho[\xi]$.
\end{proof}
Now we are ready to conclude the proof of our theorem.
\begin{lemat} \label{lem_indukcja_tauprim}
Let $(M,T)$ be any nonstandard model of $\textnormal{PT}int.$ Suppose that $\tau'(\phi,t)$ satisfies the claim of Lemma \ref{lem_tot_cons}.
Then the predicate $T'(\phi,t)$ defined as $T * \tau'(\phi,t)$ satisfies the full induction scheme.
\end{lemat}
\begin{proof}
By the internal induction principle, the following holds for an arbitrary standard
$\eta$ from the arithmetical language extended with one fresh unary predicate $P(v)$:
\begin{displaymath}
\Bigl(\forall x \bigl(T(\eta[\tau'](\num{x}))\rightarrow T(\eta[\tau'](\num{x+1}))\bigr)\Bigr)\longrightarrow \Bigl(T(\eta[\tau'](\num{0}))\rightarrow \forall x T(\eta[\tau'](\num{x}))\Bigr).
\end{displaymath}
Since $\tau'$ is total, if we additionally assume that $\eta$ is semirelational,
we can reach the following conclusion by Lemma \ref{lem_gen_commutativity}:
\begin{displaymath}
\Bigl(\forall x \bigl(\eta[T * \tau'](x)\rightarrow \eta[T * \tau'](x+1)\bigr)\Bigr)\longrightarrow \Bigl(\eta[T * \tau'](0)\rightarrow \forall x\, \eta[T * \tau'](x)\Bigr).
\end{displaymath}
Since $\eta$ was an arbitrary semirelational formula and any formula is equivalent to a semirelational one, this shows that $T'$ satisfies the full induction scheme.
\end{proof}
\begin{proof}[The conclusion of the proof of Theorem \ref{tw_ptint_a_utb}]
We have defined a formula $T * \tau'(\phi,t)$ which satisfies the full induction scheme and is such that for an arbitrary standard $\phi(v_1,\ldots,v_n)$ and an arbitrary sequence of terms $(t_1,\ldots, t_n)$ the following holds:
\begin{displaymath}
(M,T) \models T * \tau'(\phi(t_1,\ldots, t_n),t) \equiv \phi(t_1, \ldots, t_n).
\end{displaymath}
Then the formula $T'(\phi)$ defined as
\begin{displaymath}
\exists t \ T*\tau'(\phi,t)
\end{displaymath}
satisfies the uniform disquotation axioms of $\textnormal{UTB}$ as well as the full induction scheme. So it defines a predicate satisfying $\textnormal{UTB}$ in $(M,T).$
\end{proof}
This model-theoretic result allows us to draw some conclusions concerning relative definability of the theories introduced above.
\begin{wniosek}
The theory $\textnormal{PT}tot$ does not relatively truth-define $\textnormal{PT}int.$
\end{wniosek}
\begin{proof}
We have just checked that every model $(M,T) \models \textnormal{PT}int$ may be expanded to a model of $\textnormal{UTB}$. By Theorem TODO, there exist recursively saturated rather classless models which cannot be expanded to any model of $\textnormal{UTB}.$ On the other hand, in \cite{CWL}, Theorem 3.3, it has been shown that any recursively saturated model can be expanded to a model of $\textnormal{PT}tot.$ Thus, there exist models of $\textnormal{PT}tot$ which cannot be expanded to a model of $\textnormal{PT}int$. This contradicts relative definability.
\end{proof}
\section{Weak and Expressive Theories of Truth}\label{sect_instrum}
In \cite{fischerhorsten}, the authors searched for a theory of truth that would simultaneously satisfy two requirements:
\begin{enumerate}
\item It could model the use of truth in model theory;
\item It would witness the expressive function of the notion of truth.
\end{enumerate}
The way to satisfy the former is to be model-theoretically conservative over $\textnormal{PA}$. Being such, the theory would not discriminate among possible interpretations of our base theory. The way to satisfy the latter is to allow for expressing ``thoughts'' which are not expressible in the base theory. There are many ways in which a theory of truth can witness the expressive role of the notion of truth. To mention just two (for the rest of the examples the reader should consult \cite{fischerhorsten}): if a theory of truth is finitely axiomatisable, then it is more expressive (than $\textnormal{PA}$),
and if a theory of truth has non-elementary speed-up over $\textnormal{PA}$, then it is more expressive (than $\textnormal{PA}$).
There is a canonical construction which produces a theory of truth satisfying both the finite-axiomatisability and the speed-up desiderata:
the theory has to be (at least partially) classically compositional, and it has to prove
that all standard instantiations of the induction scheme are true. Without aspiring to any sort of completeness, let us offer the following explication of both properties. We start with a useful definition:
\begin{definicja}
Let $\textnormal{CC}(x)$ denote the disjunction of the following formulae
\begin{itemize}
\item $\exists s,t\ \ \bigl(x = (s=t)\wedge (T(x)\equiv \val{s}=\val{t})\bigr)$
\item $\exists \phi,\psi\ \ \bigl(x = \phi\vee\psi \wedge (T(x)\equiv T(\phi)\vee T(\psi))\bigr)$
\item $\exists \phi \ \ \bigl( x=\neg\phi \wedge (T(x)\equiv \neg T(\phi))\bigr)$
\item $\exists \phi\exists v\ \ \bigl(x = \exists v\phi \wedge (T(x)\equiv \exists y T(\phi(\num{y})))\bigr)$
\item $\textnormal{Form}^{\leq 1}(x) \wedge \forall s,t\Bigl( \val{s} = \val{t}\rightarrow \bigl(T(x(s))\equiv T(x(t))\bigr)\Bigr)$
\end{itemize}
\end{definicja}
Informally, $\textnormal{CC}(x)$ says that $x$ is a formula on which $T$ behaves compositionally in the sense of classical first-order logic.
\begin{definicja}
A truth theory $\textnormal{Th}$ is \textit{partially classically compositional} if there exists a formula $D(y)$ such that $\textnormal{Th}$ proves the following sentences:
\begin{enumerate}
\item $\forall y (D(y)\rightarrow \forall x\leq y D(x))$;
\item $D(0)\wedge \forall y (D(y)\rightarrow D(y+1))$;
\item $\forall y \bigl(D(y)\rightarrow \forall \phi (\textnormal{dp}(\phi)\leq y \rightarrow \textnormal{CC}(\phi))\bigr)$;
\end{enumerate}
where $\textnormal{dp}(\phi)\leq x$ denotes an arithmetical formula representing the (primitive recursive) relation ``the depth of the syntactic tree of $\phi$ is at most $x$''.
\end{definicja}
If a formula satisfies the first requirement, we say that it is \emph{downward closed}. If a formula satisfies the second one, we say that it is \emph{progressive}. If a formula $D(y)$ is both downward closed and progressive, we will say that it \textit{defines an initial segment}. This is justified, since if
$D(y)$ satisfies $1$ and $2$, then in each model $\mathcal{M}\models\textnormal{Th}$ the set $\set{a\in M}{\mathcal{M}\models D(a)}$ is an initial segment of the model. In fact, being \emph{downward closed} is not a very restrictive condition: if $D(y)$ is progressive, then the formula
\[D'(x):= \forall y\leq x D(y)\]
defines an initial segment (this corresponds to the model-theoretic fact that each \emph{cut} can be \emph{shortened} to an initial segment). The third condition says that if $\phi$ is not too complicated (i.e., its complexity belongs to the initial segment defined by $D$), then $T$ behaves classically on $\phi$.
\begin{definicja}
Let $\textnormal{ind}(\phi(x))$ denote the instantiation of the induction scheme with $\phi(x)$, i.e., the universal closure of the following formula:
\[\forall x (\phi(x)\rightarrow\phi(x+1))\longrightarrow \bigl(\phi(0)\rightarrow \forall x\phi(x)\bigr)\]
Following our conventions, we will use $\textnormal{ind}(\cdot)$ to denote an arithmetical formula representing the function which, given a G\"odel code of a formula with at most one free variable, returns the G\"odel code of the corresponding
induction axiom.
\end{definicja}
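For instance, for the formula $\phi(x) := (x+0=x)$, the corresponding induction axiom $\textnormal{ind}(\phi(x))$ reads:
\begin{displaymath}
\forall x \bigl(x+0=x\rightarrow (x+1)+0=x+1\bigr)\longrightarrow \bigl(0+0=0\rightarrow \forall x\ x+0=x\bigr).
\end{displaymath}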
\begin{definicja}
A truth theory $\textnormal{Th}$ \textit{proves the truth of induction} if there exists a formula $D(y)$ such that $\textnormal{Th}$ proves that $D(y)$ defines an initial segment and
\begin{equation}\label{truth_ind}\tag{T(IND)}
\forall \phi(v)\ \ \biggl(D(\phi(v))\rightarrow T\bigl(\textnormal{ind}(\phi(v))\bigr)\biggr).
\end{equation}
\end{definicja}
We shall say that $\textnormal{Th}$ is finitely axiomatisable modulo $\textnormal{PA}$ if there is a sentence $\phi$ such that the logical consequences of $\textnormal{Th}$ are precisely the logical consequences of $\textnormal{PA}\cup\was{\phi}$. For example, $\textnormal{CT}^-$, $\textnormal{PT}^-$ and $\textnormal{WPT}^-$ are finitely axiomatisable modulo $\textnormal{PA}$.
Now we have the following theorem, whose sole novelty rests in isolating the features that are usually used to prove the thesis for concrete theories of truth.
\begin{tw}
Assume that
\begin{enumerate}
\item $\textnormal{Th}$ is partially classically compositional and proves the truth of induction and
\item $\textnormal{Th}$ is finitely axiomatizable modulo $\textnormal{PA}$,
\end{enumerate}
then $\textnormal{Th}$ is finitely axiomatizable and it has super-exponential speed-up over $\textnormal{PA}$.
\end{tw}
\begin{proof}[Sketch of the proof]
Let $D_1(y)$ define an initial segment on which $T$ is classically compositional. Let $D_2(y)$ define an initial segment on which $\textnormal{Th}$ proves the truth of induction. Then $D(y) := D_1(y)\wedge D_2(y)$ defines an initial segment on which $T$ is classically compositional and on which $\textnormal{Th}$ proves the truth of induction. Obviously, for every standard natural number $n$ we have
\[\textnormal{Th} \vdash D(\num{n})\]
In particular, $\textnormal{Th}\vdash \textnormal{UTB}^-$: each instance for a concrete formula of standard complexity can be proved by external induction on the complexity of its subformulae. Now, for every standard formula $\phi(x_0,\ldots,x_n)$, we can prove $\textnormal{ind}(\phi(\bar{x}))$ in $\textnormal{Th}$ in the following way:
\begin{enumerate}
\item prove that $D$ defines an initial segment on which $T$ is classically compositional;
\item prove that $D(\qcr{\textnormal{ind}(\phi(\bar{x}))})$ and conclude $D(\qcr{\phi(\bar{x})})$;
\item prove \ref{truth_ind};
\item using $2.$ and $3.$ conclude $T\left(\qcr{\textnormal{ind}(\phi(\bar{x}))}\right)$;
\item prove $\textnormal{UTB}^-\left(\textnormal{ind}(\phi(\bar{x}))\right)$;
\item conclude $\textnormal{ind}\left(\phi(\bar{x})\right)$.
\end{enumerate}
Observe that, given $1.$, the proof of $D(\qcr{\textnormal{ind}(\phi(\bar{x}))})$ can be constructed in pure first-order logic. Similarly, given $1.$, all we need to use in proving $\textnormal{UTB}^-\left(\textnormal{ind}(\phi(\bar{x}))\right)$ are some basic syntactical facts provable in $I\Sigma_1$. Let $\phi$ be a sentence such that $\textnormal{PA}\cup\was{\phi}$ is a finite-modulo-$\textnormal{PA}$ axiomatisation of $\textnormal{Th}$. It follows that for some $n$, a proof of every instance $\textnormal{ind}(\phi(\bar{x}))$ can be given in $I\Sigma_n + \phi$, hence
the theory
\[I\Sigma_n\cup\was{\phi}\]
is a finite axiomatization of $\textnormal{Th}$. To prove that $\textnormal{Th}$ has super-exponential speed-up over $\textnormal{PA}$, we show that there is a formula $D'(y)$ which provably in $\textnormal{Th}$ defines an initial segment and that
\[\textnormal{Th}\vdash \forall y\bigl( D'(y)\rightarrow \textnormal{Con}_{\PA}(y)\bigr)\]
where $\textnormal{Con}_{\PA}(y)$ is a finitary statement of the consistency of $\textnormal{PA}$, saying that there is no $\textnormal{PA}$-proof of $0=1$ which can be coded using fewer than $y$ bits. For the details, see \cite{fischer2}, Theorem 9.
\end{proof}
In \cite{fischer2}, it was shown that $\textnormal{PT}^- + \textnormal{INT}\upharpoonright_{\tot}$ satisfies the assumptions of the above theorem. However, as was shown in \cite{CWL}, this theory is not model-theoretically conservative over $\textnormal{PA}$. We shall now show that the right theory to use is $\textnormal{WPT}^-+\textnormal{INT}$.
\begin{stwierdzenie}\label{prop_wpt_spid}
$\textnormal{WPT}^-$ is partially classically compositional and $\textnormal{WPT}^- + \textnormal{INT}$ proves the truth of induction.
\end{stwierdzenie}
\begin{proof}
Let us define
\[D'(y):= \forall x\leq y\forall \phi \biggl(\textnormal{dp}(\phi)\leq x\longrightarrow \bigl(T(\neg\phi)\equiv \neg T(\phi)\bigr)\biggr).\]
Then it can be easily shown that $D'(y)$ provably in $\textnormal{WPT}^-$ defines an initial segment on which $T$ is classically compositional (that $D'(y)$ is progressive is assured by the compositional axioms of $\textnormal{WPT}^-$). For convenience, let us define:\footnote{$\textnormal{GC}$ stands for ``generalised commutativity.'' $\textnormal{GC}(x)$ expresses that the truth predicate commutes with the whole block of universal quantifiers in the universal closure of $x$.}
\[\textnormal{GC}(x):= \textnormal{Form}(x) \wedge \bigl(T(\textnormal{ucl}(x))\equiv \forall \sigma (\textnormal{Asn}(x,\sigma)\rightarrow T(x[\sigma]))\bigr)\]
where
\begin{enumerate}
\item $\textnormal{ucl}(\phi(\bar{x}))$ denotes the universal closure of $\phi(\bar{x})$;
\item $\textnormal{Asn}(\phi,\sigma)$ represents the relation ``$\sigma$ is an assignment for $\phi$'', i.e., $\sigma$ is a function defined exactly on the free variables of $\phi$;
\item $x[\sigma]$ denotes the result of simultaneous substitution of numerals naming numbers assigned by $\sigma$ to the free variables of $x$.
\end{enumerate}
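To illustrate, for the (hypothetical) choice $\phi := (v_0 \leq v_1)$, whose universal closure is $\forall v_0 \forall v_1\ v_0\leq v_1$, the condition $\textnormal{GC}(\phi)$ amounts to
\begin{displaymath}
T(\forall v_0 \forall v_1\ v_0\leq v_1) \;\equiv\; \forall x, y \ T(\num{x}\leq \num{y}),
\end{displaymath}
since an assignment for $\phi$ is exactly a choice of values $x$, $y$ for $v_0$, $v_1$.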
Further define
\[D(y):= D'(y)\wedge \forall \phi(\bar{x})\biggl(|\textnormal{FV}(\phi(\bar{x}))|\leq y\longrightarrow \textnormal{GC}(\phi(\bar{x}))\biggr)\]
where $|\textnormal{FV}(\phi(\bar{x}))|\leq y$ represents the relation ``$\phi(\bar{x})$ contains at most $y$ free variables''.
For the sake of definiteness, we assume that $\textnormal{ucl}(\phi)$ starts with a quantifier binding the variable with the least index among the free variables of $\phi$. It can be easily seen that $D(y)$ is downward closed. Let us now show that $D(y)$ is progressive. We work in $\textnormal{WPT}^-$. Fix an arbitrary $a$ and suppose that $D(a)$ holds. Then $D'(a)$ and, as $D'(y)$ is progressive, we also have $D'(a+1)$. Let us fix an arbitrary formula $\phi$ with at most $a+1$ free variables and let $v$ be its free variable with the least index. Then the following are equivalent:
\begin{enumerate}
\item $T(\textnormal{ucl}(\phi))$
\item $\forall x T(\textnormal{ucl}(\phi(\num{x}/v)))$
\item $\forall x\forall \sigma \bigl(\textnormal{Asn}(\phi(\num{x}/v),\sigma)\rightarrow T(\phi(\num{x}/v)[\sigma])\bigr)$
\item $\forall \sigma \bigl(\textnormal{Asn}(\phi,\sigma)\rightarrow T(\phi[\sigma])\bigr)$.
\end{enumerate}
The equivalence between $1.$ and $2.$ holds by the axiom for the universal quantifier in $\textnormal{WPT}^-$. The equivalence between $2.$ and $3.$ holds because $\phi(\num{x}/v)$ has at most $a$ free variables. The last equivalence holds because each assignment for $\phi$ consists of an assignment to $v$ and an assignment to the free variables of $\phi(\num{x}/v)$.
We show that $\textnormal{WPT}^-+\textnormal{INT}$ proves the truth of induction on $D(y)$. We work in $\textnormal{WPT}^-+\textnormal{INT}$. Let us observe that for each formula $\phi$, we have $\textnormal{dp}(\phi)\leq \phi$ and $|\textnormal{FV}(\phi)|\leq \phi$.\footnote{Strictly speaking, this is a property of our coding, but most natural codings have it.}
Hence if $D(\phi)$, then $D'(\textnormal{dp}(\phi))$ and $D(|\textnormal{FV}(\phi)|)$. In particular, if $D(\phi)$, then $T$ is classically compositional on the subformulae of $\phi$ and $T$ commutes with the block of universal quantifiers in the universal closure of $\phi$.
Let us fix an arbitrary formula $\phi(v,\bar{w})$ such that $D(\phi(v,\bar{w}))$. We have to show $T(\textnormal{ind}(\phi(v,\bar{w})))$, i.e.
\begin{equation}\label{equat}
T\biggl(\textnormal{ucl}\Bigl(\forall v (\phi(v)\rightarrow\phi(v+1))\longrightarrow \bigl(\phi(0)\rightarrow \forall v\phi(v)\bigr)\Bigr)\biggr)
\end{equation}
(we suppress the reference to $\bar{w}$ and assume that these variables are bound by the universal quantifiers from $\textnormal{ucl}$). Since the formula
$$\bigl(\forall v (\phi(v)\rightarrow\phi(v+1))\longrightarrow \bigl(\phi(0)\rightarrow \forall v\phi(v)\bigr)\bigr)$$
contains fewer free variables than $\phi(v)$, we know that \eqref{equat} is equivalent to
\begin{equation}\label{equat2}
\forall \sigma T\Bigl(\forall v (\phi(v)[\sigma]\rightarrow\phi(v+1)[\sigma])\longrightarrow \bigl(\phi(0)[\sigma]\rightarrow \forall v\phi(v)[\sigma]\bigr)\Bigr).
\end{equation}
Let us fix an arbitrary $\sigma$. Then $\phi(v)[\sigma]$ is a formula with at most one free variable, $v$. Let us abbreviate it as $\psi(v)$. Hence it is enough to show:
\begin{equation}\label{equat3}
T\Bigl(\forall v (\psi(v)\rightarrow\psi(v+1))\longrightarrow \bigl(\psi(0)\rightarrow \forall v\psi(v)\bigr)\Bigr).
\end{equation}
Since $\textnormal{dp}(\psi(v)) = \textnormal{dp}(\phi(v))$ and the depth of \eqref{equat3} is equal to $\textnormal{dp}(\psi(v)) + 3$, $T$ is classically compositional on \eqref{equat3}. Hence \eqref{equat3} is equivalent to
\[\forall x \bigl(T(\psi(\num{x}))\rightarrow T(\psi(\num{x+1}))\bigr)\longrightarrow \bigl(T(\psi(\num{0}))\rightarrow \forall x\, T(\psi(\num{x}))\bigr)\]
which follows by $\textnormal{INT}$. Hence $\textnormal{WPT}^-+\textnormal{INT}$ proves the truth of induction on $D$.
\end{proof}
Hence $\textnormal{WPT}^-+ \textnormal{INT}$ exemplifies the expressive role of truth. Let us observe that, as it places no restriction on the arithmetical formulae admissible in the axiom of internal induction, it is more natural than $\textnormal{PT}^-+\textnormal{INT}\upharpoonright_{\tot}$.\footnote{In \cite{fischerhorsten} the authors discuss this restriction in the context of $\textnormal{PT}^-$ and admit it as a possible objection to their theory.} $\textnormal{WPT}^-+ \textnormal{INT}$ proves that all arithmetical formulae, not only the total ones, satisfy induction, which is clearly the idea behind $\textnormal{PA}$. Let us show that, despite having such an expressive axiom, it is a model-theoretically conservative theory of truth.
\begin{tw}\label{tw_wpt_cons}
$\textnormal{WPT}^- + \textnormal{INT}$ is model-theoretically conservative over $\textnormal{PA}$.
\end{tw}
\begin{proof}
Let $\mathcal{M}\models \textnormal{PA}$. For $b\in M$ let $b\in Tr'$ if and only if for some $t_0,\ldots, t_n$ such that
\[ \mathcal{M}\models\bigwedge_{i\leq n}\textnormal{Term}(t_i) \]
and some (standard!) $\phi(x_0,\ldots, x_n)\in\mathcal{L}_{\textnormal{PA}}$
\[\mathcal{M}\models \bigl(b = \qcr{\phi(t_0,\ldots, t_n)}\bigr) \wedge \phi(\val{t_0},\ldots,\val{t_n})\]
Let us observe that with such a definition, we have
\[(\mathcal{M}, Tr')\models \textnormal{UTB}^-\]
To become the appropriate interpretation of the $\textnormal{WPT}^-$ truth predicate, $Tr'$ requires only one small correction. Let $\sim_{\alpha}$ denote the arithmetical formula representing in $\textnormal{PA}$ the relation of two \emph{sentences} being the same modulo renaming variables ($\alpha$-conversion).
Let us define
\[b \in Tr\]
if and only if for some $\psi\in Tr'$, $\mathcal{M}\models b\sim_{\alpha}\psi$. Now it can be easily shown that
\[(\mathcal{M},Tr)\models \textnormal{WPT}^- + \textnormal{INT}.\]
Indeed, compositional axioms are satisfied, since for every $x\in M$ such that $\mathcal{M}\models \textnormal{Form}^{\leq 1}(x)$
\begin{equation}\label{cond}\tag{$*$}
(\mathcal{M}, Tr)\models \textnormal{tot}(x) \textnormal{ if and only if for some } n \in\omega\text{, } \mathcal{M}\models \textnormal{dp}(x)\leq n
\end{equation}
and moreover $(\mathcal{M}, Tr)\models \textnormal{UTB}^-$. Hence in verifying the compositional axioms we may use the fact that $\models$ is compositional. Let us check the axiom for $\vee$. Suppose $\phi = \psi\vee\theta$ and $(\mathcal{M}, Tr)\models T(\phi)$. Then there exists $\phi'\sim_{\alpha} \phi$ such that $\phi' \in Tr'$ and $\phi' = \qcr{\phi''(t_0,\ldots,t_n)}$ for some standard $\mathcal{L}_{\textnormal{PA}}$-formula $\phi''(x_0,\ldots,x_n)$ and terms $t_0,\ldots,t_n$ in the sense of $\mathcal{M}$. If so, then $\phi' = \psi'\vee \theta'$ with $\psi\sim_{\alpha}\psi'$ and $\theta\sim_{\alpha}\theta'$. Moreover, $\psi'$ and $\theta'$ are of the form $\psi''(t_0,\ldots,t_n)$ and $\theta''(t_0,\ldots, t_n)$, respectively. By $\textnormal{UTB}^-$, we have
\[\mathcal{M}\models \psi''(\val{t_0},\ldots,\val{t_n})\vee \theta''(\val{t_0},\ldots,\val{t_n})\]
Without loss of generality, assume that $\mathcal{M}\models \psi''(\val{t_0},\ldots, \val{t_n})$. This means that $(\mathcal{M},Tr)\models T(\psi)$ and consequently $(\mathcal{M}, Tr)\models T(\psi)\vee T(\theta)$. By \eqref{cond}, we have
\[(\mathcal{M}, Tr)\models \textnormal{tot}(\psi) \wedge \textnormal{tot}(\theta) \wedge\bigl(T(\psi)\vee T(\theta)\bigr).\]
which completes the proof of one implication. Let us now assume that the above holds. Since we have $\textnormal{tot}(\psi)$ and $\textnormal{tot}(\theta)$, it follows that for some $n$, $k$,
\[\mathcal{M}\models \textnormal{dp}(\psi)\leq n \wedge \textnormal{dp}(\theta)\leq k.\]
In particular, $\textnormal{dp}(\phi) \leq \max\was{n,k}+1$. Let us assume that $(\mathcal{M},Tr)\models T(\psi)$. Let $\psi\sim_{\alpha} \psi'$ and $\theta\sim_{\alpha}\theta'$ be such that $(\mathcal{M},Tr')\models T(\psi')$.
Reasoning as previously, we conclude that $(\mathcal{M},Tr')\models T(\psi'\vee \theta')$
and hence
\[(\mathcal{M}, Tr)\models T(\psi\vee\theta)\]
which completes the proof of the compositional axiom for $\vee$.
Let us now verify that $(\mathcal{M}, Tr)\models \textnormal{INT}$. Fix an arbitrary formula $\phi(x)$ in the sense of $\mathcal{M}$ and assume that
\[(\mathcal{M}, Tr)\models T(\phi(0))\wedge \forall x \bigl(T(\phi(\num{x}))\rightarrow T(\phi(\num{x+1}))\bigr)\]
It follows that $\mathcal{M}\models \textnormal{dp}(\phi(x))\leq n$ for some $n\in\omega$ and for some standard
\[\phi_0'(x,y_0,\ldots,y_{k}), \phi_1'(x,y_0,\ldots,y_{k}), \phi_2'(x,y_0,\ldots,y_{k}),\]
we have
\[\mathcal{M}\models \phi(x)\sim_{\alpha} \qcr{\phi_i'(x,t_0,\ldots,t_k)}\]
for some terms $t_0,\ldots, t_{k}$ in the sense of $\mathcal{M}$ and for $i\leq 2$. In particular, by $\textnormal{UTB}^-$ we have
\[\mathcal{M}\models \phi'_0(0,\val{t_0},\ldots, \val{t_k}) \wedge \forall x \bigl(\phi'_1(x,\val{t_0},\ldots,\val{t_k})\rightarrow \phi'_2(x+1,\val{t_0},\ldots,\val{t_k})\bigr).\]
But as satisfiability in a model is closed under $\alpha$-conversion and the formulae $\phi'_0$, $\phi_1'$, $\phi_2'$ are pairwise $\alpha$-equivalent, we get that
\[\mathcal{M}\models \phi'_0(0,\val{t_0},\ldots, \val{t_k}) \wedge \forall x \bigl(\phi'_0(x,\val{t_0},\ldots,\val{t_k})\rightarrow \phi'_0(x+1,\val{t_0},\ldots,\val{t_k})\bigr)\]
Hence, by induction in $\mathcal{M}$ we get
\[\mathcal{M}\models\forall x\text{ } \phi'_0(x,\val{t_0},\ldots,\val{t_k})\]
which by $\textnormal{UTB}^-$ again gives us $(\mathcal{M}, Tr')\models \forall x T(\phi'_0(\num{x},t_0,\ldots,t_k))$.
Hence also
\[(\mathcal{M}, Tr)\models \forall x\text{ } T(\phi(\num{x})),\]
which ends the proof.
\end{proof}
In order to find a theory satisfying the Fischer-Horsten criterion, we decided to switch the inner logic of the truth theory. This allowed us to formulate a very natural theory of truth modelled after the Weak Kleene scheme. Is it possible to realise the Fischer-Horsten desiderata using a compositional theory of truth extending $\textnormal{PT}^-$? With the meaning we gave to the term ``axiomatic theory of truth'', we are not allowed to add more symbols to the language.\footnote{Without this restriction the answer is trivial: simply take $\textnormal{PT}^-$ together with $\textnormal{WPT}^- + \textnormal{INT}$ formulated with a different truth predicate symbol.} For the moment, we leave it as an open problem.
\begin{comment}
We have a partial answer to this question: we can formulate an extension of $\textnormal{PT}^-$ which is model-theoretically conservative over $\textnormal{PA}$, is finitely axiomatizable and has speed-up over $\textnormal{PA}$. However we must admit that this theory is not very natural. We decided to enclose it for the completeness of our exposition and for those that are ready to accept "whatever goes". To use Fischer-Horsten words:
\begin{quotation}
So from a standpoint of metamathematical pragmatism, one can take the truth theoretic part as being purely instrumental over the
arithmetical part. If one is prepared to take this standpoint, then the principles of the ideal part \footnote{I.e., truth principles.} require no further motivation: success suffices.\footnote{ Compare \cite{fischerhorsten}, p. 356 }
\end{quotation}
Let us define:
\begin{equation}\label{equat_safe}\nonumber
\textnormal{Safe}(x) := \forall y\leq x \bigl(\textnormal{Form}^{\leq 1}(y)\rightarrow \textnormal{tot}(y)\bigr)
\end{equation}
In each model of $\textnormal{PT}^-$ the formula $\textnormal{Safe}(x)$ defines the largest initial segment of a model in which every formula is total. The intuition behind calling this property "safe" is that, if $x$ satisfies $\textnormal{Safe}(x)$, then below $x$ no formula whose truth value is defined only partially can be found---in a sense, below $x$ one can be sure that $T$ behaves classically.
Our extension of $\textnormal{PT}^-$ will contain the axiom of internal induction restricted to safe formulae.
\begin{equation}\label{equat_int_safe}\tag{$\textnormal{INT}\upharpoonright_{\textnormal{Safe}}$}
\forall \phi(x) \bigl[\textnormal{Safe}(\phi(x))\rightarrow \textnormal{INT}(\phi(x))\bigr]
\end{equation}
The following proposition can be proved along the same lines as Proposition \ref{prop_wpt_spid} (in fact its first part was proved in \cite{fischerhorsten}):
\begin{stwierdzenie}
$\textnormal{PT}^-$ is partially classically compositional and $\textnormal{PT}^- +$ \ref{equat_int_safe} proves the truth of induction.
\end{stwierdzenie}
Now we prove that $\textnormal{PT}^- + $\ref{equat_int_safe} is model-theoretically weaker than $\textnormal{PT}^- + $\ref{equat_int_tot}:
\begin{tw}
$\textnormal{PT}^- + $\ref{equat_int_safe} is model-theoretically conservative over $\textnormal{PA}$.
\end{tw}
\begin{proof}
Let $\mathcal{M}\models \textnormal{PA}$ and let $Tr=\Gamma^{\mathcal{M}}_{\alpha}$ be the least fixpoint of $\Gamma$ as defined in Definition \ref{defin_oper_theta}. Then $(\mathcal{M},Tr)\models \textnormal{PT}^-$. We shall show that it makes also \ref{equat_int_safe} true.
We show that for every $x$ such that $\mathcal{M}\models \textnormal{Form}^{\leq 1}(x)$ it holds that
\[(\mathcal{M},\Gamma_{\alpha})\models \textnormal{Safe}(x)\]
if and only if $x$ is a standard natural number. In other words, we shall show that in each least fixpoint model of $\textnormal{PT}^-$ the formula $\textnormal{Safe}(x)$ defines the set of standard natural numbers. This will end the proof by repeating the argument used in the proof of Theorem \ref{tw_wpt_cons}. Let us fix an arbitrary $\phi(x)$. Right-to-left implication is obvious, since in such situation $x$ is a code of a standard sentence of $\mathcal{L}_{\textnormal{PA}}$ and such sentences are provably total in $\textnormal{PT}^-$. We prove left-to-right implication by contraposition: suppose $x$ is a code of a formula and $x>\mathbb{N}$. By an easy overspill argument, there exist $\phi$ and $b>\mathbb{N}$ such that
\[\mathcal{M}\models \textnormal{Sent}(\phi) \wedge \phi <x \wedge "\phi \textnormal{ starts with $b$ universal quantifiers.}"\]
We shall show that
\[\mathcal{M}\models \neg \textnormal{tot}(\phi).\]
Assume the contrary and suppose that $\mathcal{M}\models T(\neg\phi)$ (the other case being easier). Let $\phi^n$ denote the formula resulting from $\phi$ by deleting the first $n$ universal quantifiers. Since $\neg\phi\in Tr = \Gamma_{\alpha}$, for every $n\in\omega$ there exists $\gamma<\alpha$ such that
\[\neg\phi^n(\bar{\num{a}})\in \Gamma^{\mathcal{M}}_{\gamma}\]
for some $\bar{a}$. Let $\gamma_n$ denotes the least such $\gamma$. Then for every $k<n$, we have
\[\gamma_n<\gamma_k.\]
Hence for some $l\in\omega$, $\gamma_l = 0$. But $\phi^l$ starts with an existential quantifier. Hence for no $\bar{a}$, $\neg\phi^l(\bar{a})\in \Gamma_0$, which ends the proof.
\end{proof}
\end{comment}
\end{document}
\begin{document}
\begin{flushleft}
{\Large\bf A quantitative formula for the imaginary part of a Weyl coefficient\\[5mm]
}
\textsc{
Jakob Reiffenstein
\hspace*{-14pt}
\renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}}
\setcounter{footnote}{2}
\footnote{
Department of Mathematics, University of Vienna \\
Oskar-Morgenstern-Platz 1, 1090 Wien, AUSTRIA \\
email: [email protected]}
} \\[1ex]
\end{flushleft}
{\small
\textbf{Abstract.}
We investigate two-dimensional canonical systems $y'=zJHy$ on an interval, with positive semi-definite Hamiltonian $H$. Let $q_H$ be the Weyl coefficient of the system. We prove a formula that determines the imaginary part of $q_H$ along the imaginary axis up to multiplicative constants, which are independent of $H$. We also provide versions of this result for Sturm-Liouville operators and Krein strings. \\
Using classical Abelian-Tauberian theorems, we deduce characterizations of spectral properties such as integrability of a given comparison function w.r.t. the spectral measure $\mu_H$, and boundedness of the distribution function of $\mu_H$ relative to a given comparison function. \\
We study in depth Hamiltonians for which $\arg q_H(ir)$ approaches $0$ or $\pi$ (at least on a subsequence). It turns out that this behavior of $q_H(ir)$ imposes a substantial restriction on the growth of $|q_H(ir)|$. Our results in this context are interesting also from a function theoretic point of view.
\\[3mm]
\textbf{AMS MSC 2020:} 30E99, 34B20, 34L05, 34L40
\\
\textbf{Keywords:} Canonical system, Weyl coefficient, growth estimates, high-energy behaviour
}
\pagenumbering{arabic}
\setcounter{page}{1}
\setcounter{footnote}{0}
\section[{Introduction}]{Introduction}
\noindent We study two-dimensional \textit{canonical systems}
\begin{align}
\label{A33}
y'(t)=zJH(t)y(t), \quad \quad t \in [a,b) \, \text{ a.e.},
\end{align}
where $-\infty < a <b \leq \infty$, $z \in \bb C$ is a spectral parameter and $J:=\smmatrix 0{-1}10$. The \textit{Hamiltonian} $H$ is assumed to be a locally integrable, $\bb R^{2 \times 2}$-valued function on $[a,b)$ that further satisfies
\begin{itemize}
\item[$\rhd$] $H(t) \geq 0$ and $H(t) \neq 0$, \quad \quad $t \in [a,b)$ a.e.;
\item[$\rhd$] $H$ is definite, i.e., if $v \in \mathbb{C}^2$ is such that $H(t)v \equiv 0$ on $[a,b)$, then $v=0$;
\item[$\rhd$] $\int_a^b \tr H(t) \mkern4mu\mathrm{d} t=\infty$ (limit point case at $b$).
\end{itemize}
Together with a boundary condition at $a$, the equation (\ref{A33}) becomes the eigenvalue equation of a self-adjoint (possibly multi-valued) operator $A_H$ in a Hilbert space $L^2(H)$ associated with $H$. Throughout this paper, we fix the boundary condition $(1,0)y(a)=0$, which is no loss of generality. \\
Many classical second-order differential operators such as Schr\"odinger and Sturm-Liouville operators, Krein strings, and Jacobi operators can be transformed to the form (\ref{A33}), see, e.g., \cite{remling:2018,teschl:2009,behrndt.hassi.snoo:2020,kaltenbaeck.winkler.woracek:2007,kac:1999}. Canonical systems thus form a unifying framework. \\
All of the above operators have in common that their spectral theory is centered around the Weyl coefficient $q$ of the operator (also referred to as the Titchmarsh-Weyl $m$-function). This function is constructed by Weyl's nested disk method and is a Herglotz function, i.e., it is holomorphic on $\bb C \setminus \bb R$ and satisfies there $\frac{\IM q(z)}{\IM z} \geq 0$ as well as $q(\overline{z})=\overline{q(z)}$. It can thus be represented as
\begin{align}
\label{A17}
q(z)=\alpha + \beta z + \int_{\bb R} \bigg(\frac{1}{t-z}-\frac{t}{1+t^2} \bigg) \mkern4mu\mathrm{d} \mu(t), \quad \quad z \in \bb C \setminus \bb R
\end{align}
with $\alpha \in \bb R$, $\beta \geq 0$, and $\mu$ a positive Borel measure on $\bb R$ satisfying $\int_{\bb R} \frac{d\mu(t)}{1+t^2} <\infty$. The measure $\mu$ in the integral representation (\ref{A17}) of the Weyl coefficient is a spectral measure of the underlying operator model if $\beta =0$ (if $\beta > 0$, a one-dimensional component has to be added). The importance of canonical systems in this context lies in the Inverse Spectral Theorem of L. de Branges, stating that each Herglotz function $q$ is the Weyl coefficient of a unique (suitably normalized) canonical system. \\
\noindent Given a Hamiltonian $H$, we are ultimately interested in the description of properties of its spectral measure $\mu_H$ in terms of $H$. The correspondence between $H$ and $\mu_H$ can be best understood using the Weyl coefficient $q_H$, whose imaginary part $\IM q_H$ determines $\mu_H$ via the Stieltjes inversion formula. \\
In their recent paper \cite{langer.pruckner.woracek:heniest}, Langer, Pruckner, and Woracek gave a two-sided estimate for $\IM q_H(ir)$ in terms of the coefficients of $H$:
\begin{align}
\label{A43}
L(r) \lesssim \IM q_H(ir) \lesssim A(r), \quad \quad r>0,
\end{align}
where $L,A$ are explicit in terms of $H$, and the notation $f(r) \lesssim g(r)$ means that $f(r) \leq Cg(r)$ with a constant $C>0$ independent of $r$. Moreover, in (\ref{A43}) the constants implicit in $\lesssim$ are independent of $H$. The exact formulation of this result will be recalled in \Cref{Y98}. \\
It may happen that $L(r)={\rm o} (A(r))$, in which case $\IM q_H(ir)$ is not determined by (\ref{A43}). A toy example for this is the Hamiltonian
\begin{align*}
H(t)=t \left(\begin{matrix}
|\log t|^{\phantom{1}} & |\log t|^2 \\
|\log t|^2 & |\log t|^3 \\
\end{matrix} \right), \quad \quad t \in [0,\infty).
\end{align*}
For $r \to \infty$, a calculation shows that
\begin{align*}
L(r) &\asymp (\log r)^{-3}, \quad \quad A(r) \asymp (\log r)^{-1},
\end{align*}
where $f(r) \asymp g(r)$ means that both $f(r) \lesssim g(r)$ and $g(r) \lesssim f(r)$.
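\par\noindent For the toy example this can be traced explicitly (a sketch, using the notation $\omega_j$, $\Omega$, $\mathring t$, $L$, $A$ of \Cref{T1} and \Cref{Y98} below; we write $\Lambda:=\log\frac1t$ and consider $t \in (0,1)$, $t \to 0$). Integration by parts yields
\begin{align*}
\omega_1(t)=\frac{t^2}{2}\Big(\Lambda+\frac12\Big), \quad
\omega_3(t)=\frac{t^2}{2}\Big(\Lambda^2+\Lambda+\frac12\Big), \quad
\omega_2(t)=\frac{t^2}{2}\Big(\Lambda^3+\frac32\Lambda^2+\frac32\Lambda+\frac34\Big),
\end{align*}
so that $(\omega_1\omega_2)(t)=\frac{t^4}{4}\big(\Lambda^4+2\Lambda^3+{\rm O}(\Lambda^2)\big)$, while the leading terms cancel in
\[
\det \Omega (t)=(\omega_1\omega_2)(t)-\omega_3(t)^2=\frac{t^4}{4}\Big(\frac{\Lambda^2}{4}+\frac{\Lambda}{2}+\frac18\Big).
\]
Since $\log \frac{1}{\mathring t(r)} \asymp \log r$ for $r \to \infty$, this gives
\[
\frac{L(r)}{A(r)}=\frac{\det\Omega(\mathring t(r))}{(\omega_1\omega_2)(\mathring t(r))} \asymp (\log r)^{-2},
\]
in accordance with $L(r)\asymp (\log r)^{-3}$ and $A(r)\asymp (\log r)^{-1}$.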
\newline
\noindent The following theorem, which is our main result, improves the estimate (\ref{A43}) by giving a formula for $\IM q_H(ir)$ up to universal multiplicative constants.
\begin{theorem}
\label{T1}
Let $H$ be a Hamiltonian on $[a,b)$, and denote\footnote{When there is no risk of ambiguity, we write $\Omega$ and $\omega_j$ instead of $\Omega_H$ and $\omega_j^{(H)}$ for short.}
\begin{equation}\label{Y08}
H(t) = \begin{pmatrix} h_1(t) & h_3(t) \\ h_3(t) & h_2(t) \end{pmatrix},\quad
\Omega_H(t) = \begin{pmatrix} \omega_1^{(H)}(t) & \omega_3^{(H)}(t) \\ \omega_3^{(H)}(t) & \omega_2^{(H)}(t) \end{pmatrix}
\mathrel{\mathop:}=\int_a^t H(s)\mkern4mu\mathrm{d} s.
\end{equation}
Let $\hat t : (0,\infty) \to (a,b)$ be a function satisfying\footnote{We will see later that the equation $\det \Omega_H(t)=\frac{1}{r^2}$ has a unique solution for every $r>0$. A possible choice of $\hat t$ is thus the function that maps $r>0$ to this solution. }
\begin{align}
\label{A49}
\det \Omega_H(\hat t(r)) \asymp \frac{1}{r^2}, \quad \quad r \in (0,\infty).
\end{align}
Then
\begin{align}
\label{A2}
\IM q_H(ir) &\asymp \bigg|q_H(ir)-\frac{\omega_3^{(H)}(\hat t(r))}{\omega_2^{(H)}(\hat t(r))} \bigg| \asymp \frac{1}{r\omega_2^{(H)}(\hat t(r))}, \\[1.7ex]
\label{A3}
\frac{\IM q_H(ir)}{|q_H(ir)|^2} &\asymp \frac{1}{r\omega_1^{(H)}(\hat t(r))},
\end{align}
for $r \in (0,\infty)$. The constants implicit in $\asymp$ in (\ref{A2}) and (\ref{A3}) depend on the constants hidden in $\asymp$ in (\ref{A49}), but not on $H$. \\
If, in addition, $\IM q_H(ir)={\rm o} (|q_H(ir)|)$ for $r \to \infty$ (or $r \to 0$), then\footnote{With $f(r) \sim g(r)$ meaning $\lim \frac{f(r)}{g(r)}=1.$}
\begin{align}
\label{A11}
q_H(ir) \sim \frac{\omega_3^{(H)}(\hat t(r))}{\omega_2^{(H)}(\hat t(r))}, \quad \quad r \to \infty \quad ( r \to 0).
\end{align}
\end{theorem}
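As a quick sanity check (an illustration, not part of the paper's argument), consider a constant diagonal Hamiltonian $H=\operatorname{diag}(h_1,h_2)$ on $[0,\infty)$, whose Weyl coefficient is $q_H(z)=i\sqrt{h_1/h_2}$ (a standard fact, assumed here). For this $H$, both (\ref{A2}) and (\ref{A3}) hold with constant exactly $1$:

```python
import numpy as np

# Sanity check of (A2)/(A3) for the constant Hamiltonian H = diag(h1, h2)
# on [0, infinity). Its Weyl coefficient is q_H(z) = i*sqrt(h1/h2) (a standard
# fact, assumed here). Then Omega(t) = t*H, det Omega(t) = h1*h2*t^2, and
# hat_t(r) = 1/(r*sqrt(h1*h2)) solves det Omega(hat_t(r)) = 1/r^2.

h1, h2 = 2.0, 0.5
q = 1j * np.sqrt(h1 / h2)                   # q_H(ir), independent of r

rs = np.logspace(-3, 3, 25)
hat_t = 1.0 / (rs * np.sqrt(h1 * h2))
rhs_A2 = 1.0 / (rs * h2 * hat_t)            # right-hand side of (A2)
rhs_A3 = 1.0 / (rs * h1 * hat_t)            # right-hand side of (A3)

# Im q_H(ir) = sqrt(h1/h2) = 2 and Im q_H / |q_H|^2 = sqrt(h2/h1) = 0.5,
# matching the right-hand sides exactly (constant 1 in place of asymp).
```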
\noindent The two-sided estimate (\ref{A2}) has some useful features: its pointwise nature, its applicability for $r \to \infty$ and $r \to 0$, and the universality of the constants hidden in $\asymp$. However, it is rather different from an asymptotic formula: it does not capture small oscillations of $\IM q_H(ir)$ around $\frac{1}{r\omega_2^{(H)}(\hat t(r))}$. \\
Note also that the first relation in (\ref{A2}) can be seen as a statement about the real part of $q_H(ir)$. In fact, subtracting $\RE q_H(ir)$ from $q_H(ir)$ and taking absolute values also yields $\IM q_H(ir)$. It is an open question whether $\RE q_H(ir)$ can be described more directly in terms of $H$.
\newline
\noindent One of the most important classes of operators is that of Sturm-Liouville (in particular, Schr\"odinger) operators. Let us provide a reformulation of \Cref{T1} for these operators right away.
\subsection*{Sturm-Liouville operators}
We provide a version of \Cref{T1} for Sturm-Liouville equations
\begin{align}
\label{A44}
-(py')'+qy=zwy
\end{align}
on $(a,b)$, where $1/p, q,w \in L^1_{loc}(a,b)$, $w>0$ and $p,q$ are real-valued. Suppose that (\ref{A44}) is in the limit circle case at $a$ and in the limit point case at $b$. Impose a Dirichlet boundary condition at $a$, i.e., $y(a)=0$. The Weyl coefficient for this problem is the unique number $m(z)$ such that
\[
c(z,\cdot)+m(z)s(z,\cdot) \in L^2((a,b),w(x)\mkern4mu\mathrm{d} x)
\]
where $c(z,\cdot)$ and $s(z,\cdot)$ are solutions of (\ref{A44}) with initial values
\[
\binom{p(a)c'(z,a)}{c(z,a)}=\binom{0}{1}, \quad \binom{p(a)s'(z,a)}{s(z,a)}=\binom{1}{0}.
\]
\begin{theorem}
\label{T9}
For each $t \in (a,b)$, let $(\cdot,\cdot)_t$ and $\|\cdot\|_t$ denote the scalar product and norm on $L^2((a,t),w(x)\mkern4mu\mathrm{d} x)$, i.e.,
\[
(f,g)_t=\int_a^t f(x)\overline{g(x)} w(x) \mkern4mu\mathrm{d} x.
\]
For $\xi \in \mathbb{R}$, let $\hat t_\xi : (0,\infty) \to (a,b)$ be a function satisfying
\begin{align}
\label{A51}
\|c(\xi,\cdot)\|_{\hat t_\xi(r)}^2 \|s(\xi,\cdot)\|_{\hat t_\xi(r)}^2 - (c(\xi,\cdot),s(\xi,\cdot))_{\hat t_\xi(r)}^2 \asymp \frac{1}{r^2}, \quad \,\, r \in (0,\infty).
\end{align}
Then
\begin{align}
\label{A45}
\IM m(\xi+ir) &\asymp \frac{1}{r \|s(\xi,\cdot)\|_{\hat t_\xi(r)}^2}, \\
\label{A46}
\frac{\IM m(\xi+ir)}{|m(\xi+ir)|^2} &\asymp \frac{1}{r \|c(\xi,\cdot)\|_{\hat t_\xi(r)}^2},
\end{align}
for $r \in (0,\infty)$. The constants implicit in $\asymp$ are independent of $p,q,w$ as well as $\xi$, but do depend on the constants pertaining to $\asymp$ in (\ref{A51}).
\end{theorem}
\noindent In fact, \Cref{T9} is a direct consequence of \Cref{T1} upon employing a transformation (cf. \cite{remling:2018} for $p=w=1$ and $\xi=0$) that maps solutions of (\ref{A44}) to solutions of the canonical system $y'=(z-\xi)JH_\xi y$, where
\[
H_\xi (t) = w(t) \cdot \begin{pmatrix}
c(\xi,t)^2 & -s(\xi,t)c(\xi,t) \\
-s(\xi,t)c(\xi,t) & s(\xi,t)^2
\end{pmatrix}, \quad \quad t \in [a,b).
\]
The Weyl coefficients then satisfy $m(z)=q_{H_\xi}(z-\xi)$.
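To illustrate \Cref{T9} (an illustration only, not part of the argument), take the free Schr\"odinger operator $-y''=zy$ on $(0,\infty)$ with $p=w=1$, $q=0$, $\xi=0$, so that $c(0,x)=1$, $s(0,x)=x$ and $m(z)=i\sqrt z$ (a classical formula, assumed here). Both sides of (\ref{A45}) and (\ref{A46}) are then computable in closed form, and their ratios are constant in $r$:

```python
import numpy as np

# Free Schroedinger operator -y'' = z*y on (0, inf), Dirichlet at 0:
# c(0,x) = 1, s(0,x) = x, Weyl coefficient m(z) = i*sqrt(z) (classical
# formula, assumed here). The Gram determinant ||c||^2 ||s||^2 - (c,s)^2
# equals t^4/12, so hat_t_0(r) = (12/r^2)^(1/4) satisfies (A51) with equality.

rs = np.logspace(-4, 4, 41)
m = 1j * np.sqrt(1j * rs)                    # m(ir)

hat_t = (12.0 / rs**2) ** 0.25
ns2 = hat_t**3 / 3.0                         # ||s(0,.)||^2 at hat_t(r)
nc2 = hat_t                                  # ||c(0,.)||^2 at hat_t(r)

ratio_A45 = m.imag / (1.0 / (rs * ns2))      # constant in r, ~1.52
ratio_A46 = (m.imag / np.abs(m)**2) / (1.0 / (rs * nc2))   # constant, ~1.32
```

The constancy of both ratios over eight decades of $r$ is exactly the $r$-independence of the constants asserted in \Cref{T9}.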
\subsection*{Historical remarks}
\noindent The origins of the Weyl coefficient in the theory of the Sturm-Liouville differential equation are well summarized in Everitt's paper \cite{everitt:2004}. We give a short account specifically of the history of estimates for the growth of the Weyl coefficient, which date back at least to the 1950s. Particular attention was often given to the deduction of asymptotic formulae for the Weyl coefficient \cite{marchenko:1952,kac:1973a,everitt:1972,kasahara:1975,atkinson:1981,bennewitz:1989}. However, asymptotic results usually depend on rather strong assumptions on the data. When weakening these assumptions, one can still ask for explicit estimates for $q(z)$ as $z \to \infty$ nontangentially in the upper half-plane. There are a number of rather early results that determine $|q(z)|$ up to $\asymp$, e.g., \cite{hille:1963,atkinson:1988,bennewitz:1989}, although these still depend on data subject to additional restrictions. Fundamental progress was made by Jitomirskaya and Last \cite{jitomirskaya.last:1999}, who considered Schr\"odinger operators with arbitrary (real-valued and locally integrable) potentials. They found a formula up to $\asymp$ for $|q(z)|$, which also covers the case $z \to 0$. An analog of this formula for canonical systems was given in \cite{hassi.remling.snoo:2000}. \\
When it comes to $\IM q(z)$, however, no such formula was available. Only the very recent estimate (\ref{A43}) from \cite[Theorem 1.1]{langer.pruckner.woracek:heniest} made it possible to obtain our main result that determines $\IM q(z)$ up to $\asymp$.
\subsection*{Structure of the paper}
\noindent The proof of \Cref{T1}, together with some immediate corollaries, makes up \Cref{S2}. In \Cref{S5}, we continue with a first application, a criterion for integrability of a given comparison function with respect to $\mu_H$. We also characterize boundedness of the distribution function of $\mu_H$ relative to a given comparison function.
\newline
\noindent \Cref{S3} is dedicated to the boundary behavior of Herglotz functions. Cauchy integrals and the relative behavior of their imaginary and real parts have been intensively studied. For example, for a Herglotz function $q$ it is known \cite{poltoratski:2003} that the set of $\xi \in \mathbb{R}$ for which
\begin{align}
\label{A47}
\lim_{r \to 0} \frac{\IM q(\xi+ir)}{|q(\xi+ir)|}=0
\end{align}
is a zero set w.r.t. $\mu$. In contrast to measure theoretic results like this, we use the de Branges correspondence $H \leftrightarrow q_H$ to investigate this behavior pointwise w.r.t. $\xi$. In \Cref{T7} we show that if $\xi$ is such that (\ref{A47}) holds, then $|q(\xi+ir)|$ is slowly varying (cf. \Cref{A48}). \Cref{T8} is a partial converse of this statement.
\newline
\noindent In \Cref{S4} we turn to a finer study of $\IM q_H(ir)$ in the context of the geometric origins of (\ref{A43}) and (\ref{A2}). Namely, the functions $L$ and $A$ describe the imaginary parts of the bottom and the top of certain Weyl disks containing $q_H(ir)$. We show that there are restrictions on the possible location of $q_H(ir)$ within the disks, and construct a Hamiltonian $H$ for which $q_H(ir)$ oscillates back and forth between the bottoms and tops of the disks. This construction allows us to answer several open problems that were posed in \cite{langer.pruckner.woracek:heniest}.
\newline
\noindent We conclude our work with a reformulation of \Cref{T1} for the principal Titchmarsh-Weyl coefficient $q_S$ of a Krein string. This reformulation is the content of \Cref{S6}.
\subsection*{Notation associated to Hamiltonians}
\noindent Let $H$ be a Hamiltonian on $[a,b)$.
\newline
\noindent An interval $(c,d) \subseteq [a,b)$ is called $H$-\textit{indivisible} if $H(t)$ takes the form $h(t)\binom{\cos \varphi}{\sin \varphi}\binom{\cos \varphi}{\sin \varphi}^*$ a.e. on $(c,d)$, with scalar-valued $h$ and fixed $\varphi \in [0,\pi)$. The angle $\varphi$ is then called the \textit{type} of the interval.
\begin{definition}
Let \begin{align}
\mathring a (H) &:=\inf \Big\{t > a \,\Big| \, (a,t) \text{ is not }H\text{-indivisible of type } 0 \text{ or } \frac{\pi}{2} \Big\}, \\
\hat a (H) &:=\inf \Big\{t > a \,\Big|\, (a,t) \text{ is not }H\text{-indivisible} \Big\}.
\end{align}
Usually, we write $\mathring a$ and $\hat a$ for short. Since $H$ is assumed to be definite, both of these numbers are smaller than $b$.
\end{definition}
\noindent Note that $(\omega_1 \omega_2)(t)>0$ if and only if $(a,t)$ is not $H$-indivisible of type $0$ or $\frac{\pi}{2}$, i.e., $t>\mathring a$. Using the assumption $\int_a^b \tr H(t) \mkern4mu\mathrm{d} t=\infty$, we infer that $\omega_1 \omega_2$ is an increasing bijection from $(\mathring a,b)$ to $(0,\infty)$. \\
Similarly, $\det \Omega (t)>0$ is equivalent to $t>\hat a$. We have
\[
\frac{d}{dt} \Big(\frac{\det \Omega (t)}{\omega_1(t)} \Big)=\omega_1(t)^{-2} \binom{-\omega_3(t)}{\omega_1(t)}^* H(t)\binom{-\omega_3(t)}{\omega_1(t)} \geq 0
\]
and (by symmetry) $\frac{d}{dt} \big(\frac{\det \Omega}{\omega_2}\big) \geq 0$. Since at least one of $\omega_1$ and $\omega_2$ is unbounded, $\det \Omega$ is an increasing bijection from $(\hat a,b)$ to $(0,\infty)$.
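The monotonicity just derived can be observed numerically. The following sketch (an illustration with an ad-hoc sample Hamiltonian, not taken from the paper) checks that $\frac{\det\Omega}{\omega_1}$ and $\frac{\det\Omega}{\omega_2}$ are nondecreasing:

```python
import numpy as np

# Numerical illustration: for a sample positive definite Hamiltonian,
# det(Omega)/omega_1 and det(Omega)/omega_2 are nondecreasing, as the
# derivative formula above predicts.

t = np.linspace(1e-3, 10.0, 20000)
dt = t[1] - t[0]
# H(t) = [[1+t, sin t], [sin t, 2]] satisfies H(t) > 0, since
# det H(t) = 2(1+t) - sin(t)^2 > 0 and tr H(t) > 0.
h1 = 1.0 + t
h2 = 2.0 * np.ones_like(t)
h3 = np.sin(t)

# Omega(t) = int_0^t H(s) ds, approximated by cumulative Riemann sums
w1, w2, w3 = np.cumsum(h1) * dt, np.cumsum(h2) * dt, np.cumsum(h3) * dt
det_omega = w1 * w2 - w3**2

ratio1 = det_omega / w1
ratio2 = det_omega / w2
```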
\begin{definition}
\label{A35}
For a Hamiltonian $H$ and a number $\eta >0$, set \\
\begin{minipage}{.5\linewidth}
\begin{equation*}
\mathring r_{\eta,H} : \left\{\begin{array}{ccc}
(\mathring a,b) &\to &(0,\infty) \\[0.5ex]
t &\mapsto & \frac{\eta}{ 2\sqrt{(\omega_1 \omega_2)(t)}},
\end{array}\right.
\end{equation*}
\end{minipage}
\begin{minipage}{.5\linewidth}
\begin{equation*}
\hat r_{\eta,H} : \left\{\begin{array}{ccc}
(\hat a,b) &\to &(0,\infty) \\[0.5ex]
t &\mapsto & \frac{\eta}{ 2\sqrt{\det \Omega (t)}}.
\end{array}\right.
\end{equation*}
\end{minipage}
\noindent Both of these functions are decreasing and bijective. We define their inverse functions,
\begin{align}
\mathring t_{\eta,H}:=\mathring r_{\eta,H}^{-1} \,:\, (0,\infty) \to (\mathring a,b), \quad \quad \hat t_{\eta,H}:=\hat r_{\eta,H}^{-1} \,:\, (0,\infty) \to (\hat a,b).
\end{align}
Note that the functions $\hat t_{\eta,H}$, for any $\eta>0$, satisfy (\ref{A49}). Functions of this form will be the default choice of $\hat t$ for the sake of \Cref{T1}. We will often fix $\eta$ and $H$ and write $\mathring r$, $\mathring t$, $\hat r$, $\hat t$ for short. If $\eta$ is fixed but the Hamiltonian is ambiguous, we may write $\mathring r_H$, $\mathring t_H$, $\hat r_H$, $\hat t_H$ to indicate dependence on $H$.
\end{definition}
\section{On the imaginary part of the Weyl coefficient}
\label{S2}
\noindent We start by providing the details of the estimate (\ref{A43}), which is the central result in \cite{langer.pruckner.woracek:heniest}.
\begin{theorem}[{\cite[Theorem 1.1]{langer.pruckner.woracek:heniest}}]
\label{Y98}
Let $H$ be a Hamiltonian on $[a,b)$, and let $\eta \in (0,1-\frac{1}{\sqrt 2})$ be fixed. For $r>0$, let $\mathring t(r)$ be the unique number satisfying
\begin{align}
\label{A50}
(\omega_1^{(H)} \omega_2^{(H)})(\mathring t(r))=\frac{\eta^2}{4r^2},
\end{align}
cf. \Cref{A35}. Set\footnote{If $\eta$ and $H$ are clear from the context, we may write $A$ and $L$ for short.}
\[
A_{\eta,H}(r):=\frac{\eta}{2r\omega_2^{(H)}(\mathring t(r))}, \quad \quad L_{\eta,H}(r):= \frac{\det \Omega_H (\mathring t(r))}{(\omega_1^{(H)} \omega_2^{(H)})(\mathring t(r))} \cdot A_{\eta,H}(r).
\]
Then the Weyl coefficient $q_H$ associated with the Hamiltonian $H$ satisfies
\begin{align}
|q_H(ir)| &\asymp A_{\eta,H}(r),
\label{Y35} \\[1.5ex]
L_{\eta,H}(r) \lesssim \IM q_H(ir) &\lesssim A_{\eta,H}(r)
\label{Y96}
\end{align}
for $r \in (0,\infty)$. The constants implicit in these relations are independent of $H$. Their dependence on $\eta$ is continuous.
\end{theorem}
\noindent In the following proof of \Cref{T1}, we will also show that \Cref{Y98} still holds if $\mathring t: (0,\infty) \to (a,b)$ is a function satisfying $(\omega_1 \omega_2)(\mathring t(r)) \asymp \frac{1}{r^2}$, and
\[
A(r):=\frac{1}{r\omega_2^{(H)}(\mathring t(r))}, \quad \quad L(r):= \frac{\det \Omega_H (\mathring t(r))}{(\omega_1^{(H)} \omega_2^{(H)})(\mathring t(r))} \cdot A(r).
\]
In particular, we can choose any $\eta>0$ in (\ref{A50}).
\begin{proof}[Proof of \Cref{T1}]
Let $\hat t_{\eta,H}$ be defined as in \Cref{A35}. We show that for any $\eta>0$, \Cref{T1} holds for $\hat t_{\eta,H}$ in place of $\hat t$, and that the dependence on $\eta$ of the constants hidden in $\asymp$ in (\ref{A2}) and (\ref{A3}) is continuous. This then implies that \Cref{T1} holds for any function $\hat t$ satisfying (\ref{A49}).
\newline
\noindent The proof is divided into steps. \\
\medskip\noindent\textbf{Step 1.}
We introduce a family of transformations of $H$ that leave the imaginary part of the Weyl coefficient unchanged. If $p \in \bb R$ and
\[
H_p(t):=\smmatrix 1p01 H(t) \smmatrix 10p1 =
\begin{pmatrix}
h_1(t)+2p h_3(t)+p^2 h_2(t) & h_3(t)+ph_2(t) \\
h_3(t)+ph_2(t) & h_2(t)
\end{pmatrix},
\]
an easy calculation shows that the Weyl coefficient $q_p$ of $H_p$ is given by $q_p(z)=q_0(z)+p=q_H(z)+p$. \\
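The displayed factorization can be verified mechanically; the following snippet (a check of the displayed identity only) confirms the entries of $H_p$ for random symmetric matrices:

```python
import numpy as np

# Check the identity (1 p; 0 1) H (1 0; p 1) = H_p, with the entries
# displayed above, for random symmetric H and random p.

rng = np.random.default_rng(0)
max_err = 0.0
for _ in range(100):
    h1, h2, h3 = rng.uniform(-5, 5, size=3)
    p = rng.uniform(-5, 5)
    H = np.array([[h1, h3], [h3, h2]])
    Hp = np.array([[1.0, p], [0.0, 1.0]]) @ H @ np.array([[1.0, 0.0], [p, 1.0]])
    expected = np.array([[h1 + 2*p*h3 + p**2*h2, h3 + p*h2],
                         [h3 + p*h2, h2]])
    max_err = max(max_err, np.max(np.abs(Hp - expected)))
```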
\medskip\noindent\textbf{Step 2.}
We prove (\ref{A2})-(\ref{A11}) for fixed $\eta \in (0,1-\frac{1}{\sqrt{2}})$. The following abbreviations are used only in Step 2: \\
\begin{tabular}{|r|l||r|l||r|l|@{}m{0pt}@{}}
\hline
short form & meaning & short form & meaning &short form & meaning & \\[10pt]
\hline \hline
\rule{0pt}{3ex}$\mathring t$ & \rule{0pt}{3ex} $\mathring t_{\eta,H}$ & \rule{0pt}{3ex} $\mathring t_p$ & \rule{0pt}{3ex} $\mathring t_{\eta,H_p}$ & \rule{0pt}{3ex} $\Omega_p$ & \rule{0pt}{3ex} $\Omega_{H_p}$& \\[3pt]
\hline
\rule{0pt}{3ex}$\hat t$ & \rule{0pt}{3ex}$\hat t_{\eta,H}$ & \rule{0pt}{3ex}$\hat t_p$ & \rule{0pt}{3ex}$\hat t_{\eta,H_p}$ & \rule{0pt}{3ex}$\omega_j^{(p)}$ & \rule{0pt}{3ex}$\omega_j^{(H_p)}$ & \\[3pt]
\hline
\rule{0pt}{3ex}$L_p$& \rule{0pt}{3ex}$L_{\eta,H_p}$ & \rule{0pt}{3ex}$A_p$& \rule{0pt}{3ex}$A_{\eta,H_p}$ & \rule{0pt}{3ex}$\Omega$ & \rule{0pt}{3ex}$\Omega_H$
& \\[3pt]
\hline
\end{tabular}
\\[4pt]
\noindent Let $r>0$ be fixed (this is important). Our first observation is that $\hat t_p(r)=\hat t(r)$ for any $p$ since $\det \Omega_p(t)=\det \Omega (t)$ does not depend on $p$. If we can find $p$ such that $\mathring t_p(r)=\hat t_p(r)=\hat t(r)$, then clearly
\[
\frac{L_p(r)}{A_p(r)}=\frac{\det \Omega_p(\mathring t_p(r))}{(\omega_1^{(p)}\omega_2^{(p)})(\mathring t_p(r))}=\frac{\det \Omega_p(\hat t_p(r))}{(\omega_1^{(p)}\omega_2^{(p)})(\mathring t_p(r))}=1.
\]
We apply \Cref{Y98} with $\eta$ and $H_p$. The estimate (\ref{Y96}) then takes the form
\begin{equation}
\label{P1}
A_p(r) = L_p(r) \lesssim \IM q_H(ir) \lesssim A_p(r)
\end{equation}
while (\ref{Y35}) turns into
\begin{equation}
\label{P2}
|q_H(ir)+p| \asymp A_p(r),
\end{equation}
where
\[
A_p(r)= \frac{\eta}{2r\omega_2^{(p)}(\mathring t_p(r))} = \frac{\eta}{2r\omega_2(\hat t(r))}.
\]
The right choice of $p$ is
\[
p=- \frac{\omega_3(\hat t(r))}{\omega_2(\hat t(r))},
\]
leading to $\omega_3^{(p)}(\hat t(r))=0$ and thus
\[
(\omega_1^{(p)}\omega_2^{(p)})(\hat t(r))=\det \Omega_p (\hat t(r))=\det \Omega (\hat t(r))=\frac{\eta^2}{4r^2}.
\]
Consequently, $\mathring t_p(r)=\hat t(r)$. Observe that the implicit constants in (\ref{Y35}) and (\ref{Y96}) are independent of $H$ and $r$ and depend continuously on $\eta$. This shows that (\ref{A2}) holds, with constants depending continuously on $\eta$. \\
\medskip\noindent\textbf{Step 3.} To obtain (\ref{A3}), we apply (\ref{A2}) to $\tilde H:=J^{\top}HJ=\smmatrix {h_2}{-h_3}{-h_3}{h_1}$ and note that $\hat t_{\eta,\tilde H}=\hat t_{\eta,H}$. Thus
\[
\frac{\IM q_H(ir)}{|q_H(ir)|^2}=\IM \Big(- \frac{1}{q_H(ir)} \Big)= \IM q_{\tilde H}(ir) \asymp \frac{1}{r \omega_2^{(\tilde H)}(\hat t_{\eta,\tilde H}(r))}=\frac{1}{r \omega_1^{(H)}(\hat t_{\eta,H}(r))}.
\]
Formula (\ref{A11}) follows if we divide (\ref{A2}) by $|q_H(ir)|$. This proves the assertion for $\eta \in (0,1-\frac{1}{\sqrt{2}})$.
\newline
\noindent In the remaining steps we treat the missing case $\eta \geq 1-\frac{1}{\sqrt{2}}$.
\medskip\noindent\textbf{Step 4.} Let $k>0$. For use in Step 5, we show that
\begin{align}
\label{A31}
\IM q_H(ir) \asymp \IM q_H (ikr), \quad \quad |q_H(ir)| \asymp |q_H (ikr )|
\end{align}
for $r \in (0,\infty)$, where the constants in $\asymp$ depend continuously on $k$ and are independent of $H$. \\
For the imaginary part, the statement is easy to see from the integral representation (\ref{A17}). For the absolute value, we use the Hamiltonian $\tilde H$ from Step 3 to obtain
\[
\frac{\IM q_H(ir)}{|q_H(ir)|^2}=\IM q_{\tilde H}(ir) \asymp \IM q_{\tilde H}(ikr)=\frac{\IM q_H(ikr)}{|q_H(ikr)|^2}.
\]
This shows that $|q_H(ir)| \asymp |q_H (ikr )|$ as well.
\medskip\noindent\textbf{Step 5.}
\noindent Fix a Hamiltonian $H$, and let $\eta_0 \geq 1-\frac{1}{\sqrt{2}}$. Then
\[
\mathring t_{\eta_0,H}(r)=\mathring t_{\frac 14,\frac{1}{4\eta_0}H}(r), \quad \quad \hat t_{\eta_0,H}(r)=\hat t_{\frac 14,\frac{1}{4\eta_0}H}(r)
\]
and
\[
A_{\eta_0,H}(r)= A_{\frac 14,\frac{1}{4\eta_0}H}(r), \quad \quad L_{\eta_0,H}(r)= L_{\frac 14,\frac{1}{4\eta_0}H}(r).
\]
Since $\frac 14$ is less than $1-\frac{1}{\sqrt{2}}$, we can use \Cref{Y98} with $\eta:=\frac 14$ to obtain
\begin{align}
\label{A30}
L_{\eta_0,H}(r) = L_{\frac 14,\frac{1}{4\eta_0}H}(r) \lesssim &\IM q_{\frac{1}{4\eta_0}H} (ir) \\
\leq &|q_{\frac{1}{4\eta_0}H}(ir)| \asymp A_{\frac 14,\frac{1}{4\eta_0}H}(r) = A_{\eta_0,H}(r) \nonumber
\end{align}
for $r \in (0,\infty)$. Since $q_{\frac{1}{4\eta_0}H}(z)=q_H \big(\frac{z}{4\eta_0} \big)$, combining this with Step 4 shows that \Cref{Y98} holds for arbitrary $\eta >0$. It is easy to check that the continuous dependence of the constants on $\eta$ is retained. Repeating Steps 1--3 now shows that \Cref{T1} also holds for $\hat t_{\eta,H}$ for any $\eta >0$. Moreover, it is not hard to see that everything still works if $\hat t$ is any function satisfying (\ref{A49}).
\end{proof}
\begin{remark}
\label{A36}
\Cref{Y98} and \Cref{T1}, in the form we stated them, give information about $q_H(z)$ for $z=ir$. However, if $\vartheta \in (0,\pi )$ is fixed, these theorems also hold
\begin{itemize}
\item [$\rhd$] for $z=re^{i\vartheta}$ uniformly for $r \in (0,\infty )$ and
\item [$\rhd$] for $z=re^{i\varphi}$ uniformly for $r \in (0,\infty)$ and $|\frac{\pi}{2}-\varphi | \leq |\frac{\pi}{2}-\vartheta |$.
\end{itemize}
We restate the explicit constants coming from \cite{langer.pruckner.woracek:heniest}. Fix $\eta \in (0,1-\frac{1}{\sqrt{2}})$ and set $\sigma :=(1-\eta )^{-2}-1 \in (0,1)$. With
\begin{align*}
c_-(\eta ,\vartheta)=\frac{\eta \sin \vartheta}{2(1+|\cos \vartheta |)} \cdot \frac{1-\sigma}{1+\sigma}, \quad \quad c_+(\eta ,\vartheta)=\frac{\sigma+\frac{2}{\eta \sin \vartheta}}{1-\sigma},
\end{align*}
we have\footnote{Since $c_-$ and $c_+$ are clearly monotonic in $\vartheta $, (\ref{A37}) and (\ref{A38}) still hold when $q_H(re^{i\vartheta})$ is replaced by $q_H(re^{i\varphi})$, where $|\frac{\pi}{2}-\varphi | \leq |\frac{\pi}{2}-\vartheta |$.}
\begin{align}
\label{A37}
c_-(\eta ,\vartheta) \cdot \frac{\eta}{2} \cdot \frac{1}{r\omega_2(\hat t_{\eta ,H}(r))} &\leq \IM q_H(re^{i\vartheta}) \leq c_+(\eta ,\vartheta) \cdot \frac{\eta}{2} \cdot \frac{1}{r\omega_2(\hat t_{\eta ,H}(r))}, \\[1ex]
\label{A38}
c_-(\eta ,\vartheta) \cdot \frac{\eta}{2} \cdot \frac{1}{r\omega_1(\hat t_{\eta ,H}(r))} &\leq \frac{\IM q_H(re^{i\vartheta})}{|q_H(re^{i\vartheta})|^2} \leq c_+(\eta ,\vartheta) \cdot \frac{\eta}{2} \cdot \frac{1}{r\omega_1(\hat t_{\eta ,H}(r))}.
\end{align}
In order to show (\ref{A37}), we need to slightly adapt the proof of \Cref{T1} by replacing $ir$ with $re^{i\vartheta}$ in (\ref{P1}) and taking into account the constants provided in
\cite[Theorem 1.1]{langer.pruckner.woracek:heniest}. Then (\ref{A38}) follows as in Step 3 of the proof. \\
For $\vartheta =\frac{\pi}{2}$, the optimal choice of $\eta$ is around $0.13833$, which gives the overall constants
\[
\frac{\eta}{2} \, c_+\Big(0.13833, \frac{\pi}{2}\Big) \approx 1.568, \quad \frac{\eta}{2} \, c_-\Big(0.13833, \frac{\pi}{2}\Big) \approx 0.002, \quad \frac{c_+(0.13833, \frac{\pi}{2})}{c_-(0.13833, \frac{\pi}{2})} \approx 675.772 .
\]
While it is possible to derive explicit constants also for $\eta \geq 1-\frac{1}{\sqrt{2}}$, doing so does not result in an improvement of the quotient $c_+ / c_-$.
\end{remark}
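The numerics in the remark can be reproduced directly (a verification sketch; note that the quoted values $1.568$ and $0.002$ arise as the overall constants $\frac{\eta}{2}c_\pm$ in (\ref{A37}), while the quotient $c_+/c_-$ does not involve the factor $\frac{\eta}{2}$):

```python
import math

# Evaluate the explicit constants c_-(eta, theta), c_+(eta, theta) from the
# remark above at eta = 0.13833, theta = pi/2. The values 1.568 and 0.002
# arise as the overall constants (eta/2)*c_+ and (eta/2)*c_- in (A37);
# the quotient c_+/c_- is unaffected by the factor eta/2.

def constants(eta, theta):
    sigma = (1.0 - eta) ** (-2) - 1.0
    c_minus = (eta * math.sin(theta) / (2.0 * (1.0 + abs(math.cos(theta))))) \
        * (1.0 - sigma) / (1.0 + sigma)
    c_plus = (sigma + 2.0 / (eta * math.sin(theta))) / (1.0 - sigma)
    return c_minus, c_plus

eta = 0.13833
cm, cp = constants(eta, math.pi / 2)
lower = eta / 2.0 * cm      # ~ 0.0023
upper = eta / 2.0 * cp      # ~ 1.568
quotient = cp / cm          # ~ 675.8
```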
\subsection*{Immediate consequences of \Cref{T1}}
\noindent \textit{In order to simplify calculations, unless specified otherwise, we will always assume that $\mathring t(r)$ and $\hat t(r)$ are defined implicitly by}
\begin{align}
\label{A12}
(\omega_1 \omega_2)(\mathring t(r))=\frac{1}{r^2}, \quad \det \Omega (\hat t(r))=\frac{1}{r^2},
\end{align}
and similarly for $\mathring r$ and $\hat r$ (cf. \Cref{A35} with $\eta=2$).
\par\noindent We revisit the example from the introduction in more generality. The following example was communicated by Matthias Langer; the calculations can be found in the appendix.
\begin{example}
\label{A24}
Let $\alpha > 0$ and $\beta_1, \beta_2 \in \bb R$ where $\beta_1 \neq \beta_2$. Set $\beta_3 := \frac{\beta_1 + \beta_2}{2}$ and define, for $t \in (0,\infty)$,
\begin{align*}
H(t)=
t^{\alpha -1}\left(\begin{matrix}
|\log t|^{\beta_1} & |\log t|^{\beta_3} \\
|\log t|^{\beta_3} & |\log t|^{\beta_2} \\
\end{matrix} \right).
\end{align*}
Then for $r \to \infty$, we have
\begin{itemize}
\item[] $L(r) \asymp (\log r)^{\frac{\beta_1-\beta_2}{2}-2}$ and
\item[] $A(r) \asymp |q_H(ir)| \asymp (\log r)^{\frac{\beta_1-\beta_2}{2}}$,
\end{itemize}
i.e., $L(r) = {\rm o} ( A(r))$. Using \Cref{T1}, we can now continue the calculations, leading to
\[
\IM q_H(ir) \asymp (\log r)^{\frac{\beta_1-\beta_2}{2}-1} \asymp \sqrt{L(r)A(r)}.
\]
\end{example}
\noindent It is an immediate consequence of \Cref{T1} that $\IM q_H$ depends monotonically on the off-diagonal of $H$.
\begin{corollary}
\label{T1+}
Let $H=\smmatrix {h_1}{h_3}{h_3}{h_2}$ and $\tilde{H}=\smmatrix {h_1}{\tilde h_3}{\tilde h_3}{h_2}$ be two Hamiltonians on $[a,b)$. If $t>\hat a (H)$ is such that
\[
\Big|\int_a^t h_3(s) \mkern4mu\mathrm{d} s \Big| \geq \Big|\int_a^t \tilde h_3(s) \mkern4mu\mathrm{d} s \Big|,
\]
then
\[
\IM q_H(i \hat r_H(t)) \lesssim \IM q_{\tilde H}(i \hat r_H(t))
\]
with a constant independent of $t$, $H$, and $\tilde H$.
\end{corollary}
\begin{proof}
Our condition states that $|\omega_3(t)| \geq |\tilde \omega_3(t)|$. Taking into account that $t>\hat a(H)$, this means that $0<\det \Omega (t) \leq \det \tilde \Omega (t)$. Hence $\hat r_H(t) \geq \hat r_{\tilde H}(t)$, and further $\hat t_{\tilde H}(\hat r_H(t)) \leq t$. Now, by (\ref{A2}),
\[
\IM q_H(i\hat r_H(t)) \asymp \frac{1}{\hat r_H(t)\omega_2(t)} \leq \frac{1}{\hat r_H(t)\omega_2(\hat t_{\tilde H}(\hat r_H(t)))} \asymp \IM q_{\tilde H}(i\hat r_H(t)).
\]
\end{proof}
\noindent The following result elaborates on the relative behavior of $\IM q_H$ and $|q_H|$. We obtain a quantitative and pointwise relation between $\frac{\IM q_H}{|q_H|}$ and $\frac{\det \Omega}{\omega_1 \omega_2}$, leading to the equivalence
\begin{align}
\label{A14}
\lim_{r \to \infty} \frac{\IM q_H(ir)}{|q_H(ir)|}=0 \,\, \Longleftrightarrow \,\, \lim_{t \to \hat a} \frac{\det \Omega (t)}{(\omega_1 \omega_2)(t)}=0.
\end{align}
The relation between $\frac{\det \Omega}{\omega_1 \omega_2}$ and $\frac{\IM q_H(ir)}{|q_H(ir)|}$ has been investigated also in \cite{langer.pruckner.woracek:gapsatz-arXiv}. Their proof of (\ref{A14})\footnote{In \cite{langer.pruckner.woracek:gapsatz-arXiv}, $\lim_{t \to a}$ was considered instead of $\lim_{t \to \hat a}$.} is based on compactness arguments. \\
Note that our result shows that (\ref{A14}) holds true for $r \to 0$ and $t \to b$ as well.
\begin{proposition}
\label{A4}
Let $H$ be a Hamiltonian on $[a,b)$. Then\footnote{$\mathring r(\hat t(r))$ is well-defined because of $\hat t(r) \in (\hat a,b) \subseteq (\mathring a,b)$.}
\begin{align}
\label{A9}
\frac{\IM q_H(ir)}{|q_H(ir)|} \asymp \frac{\mathring r(\hat t(r))}{r} = \sqrt{\frac{\det \Omega (\hat t(r))}{(\omega_1 \omega_2)(\hat t(r))}}
\end{align}
for $r \in (0,\infty)$. Moreover,
\begin{align}
\label{A10}
\big|q_H \big( i \mathring r(\hat t(r)) \big) \big| \asymp |q_H(ir)|, \quad \quad r \in (0,\infty).
\end{align}
All constants implicit in $\asymp$ do not depend on $H$.
\end{proposition}
\begin{proof}
By definition of $\mathring r$ and using (\ref{A2}) and (\ref{A3}),
\[
\mathring r(\hat t(r)) = \frac{1}{\sqrt{(\omega_1 \omega_2)(\hat t(r))}} \asymp r \frac{\IM q_H(ir)}{|q_H(ir)|}.
\]
We also have
\[
\sqrt{\frac{\det \Omega (\hat t(r))}{(\omega_1 \omega_2)(\hat t(r))}} = \frac{1}{\sqrt{r^2(\omega_1 \omega_2)(\hat t(r))}} = \frac{\mathring r(\hat t(r))}{r},
\]
and (\ref{A9}) follows. \\
For the proof of (\ref{A10}), we need the formula
\[
\omega_1(\mathring t(r)) \asymp \frac{|q_H(ir)|}{r}
\]
which we get from \Cref{Y98} applied to $J^{\top}HJ$. Combine this with (\ref{Y35}) to get
\[
|q_H(ir)|^2 \asymp \frac{\omega_1(\mathring t(r))}{\omega_2(\mathring t(r))}.
\]
On the other hand, (\ref{A2}) and (\ref{A3}) give
\[
|q_H(ir)|^2 \asymp \frac{\omega_1(\hat t(r))}{\omega_2(\hat t(r))}=\frac{\omega_1 \Big(\mathring t \big( \mathring r(\hat t(r))\big)\Big)}
{\omega_2 \Big(\mathring t \big( \mathring r(\hat t(r))\big)\Big)} \asymp \big|q_H \big( i \mathring r(\hat t(r)) \big) \big|^2.
\]
\end{proof}
\noindent The freedom in the choice of $\eta$ leads to the following formula that we will refer to later on.
\begin{corollary}
\label{T5}
Let $H$ be a Hamiltonian on $[a,b)$. Then, for any $k>0$,
\begin{align}
\label{A21}
\IM q_H(ikr) \asymp \bigg|q_H(ikr)-\frac{\omega_3(\hat t(r))}{\omega_2(\hat t(r))} \bigg| &\asymp \bigg|q_H(ir)-\frac{\omega_3(\hat t(r))}{\omega_2(\hat t(r))} \bigg|
\end{align}
with constants depending on $k$, but not on $H$. \\
If $\IM q_H(ir)={\rm o} (|q_H(ir)|)$ for $r \to \infty$ \emph{[}$r \to 0$\emph{]}, then
\begin{align}
\label{A20}
q_H(ikr) \sim \frac{\omega_3(\hat t(r))}{\omega_2(\hat t(r))}, \quad \quad r \to \infty \quad [r \to 0].
\end{align}
\end{corollary}
\begin{proof}
Apply \Cref{T1} to $H$ using $\hat t_{1,H}$, and to $kH$ using $\hat t_{k,kH}$. Then $\hat t_{1,H}(r)=\hat t_{k,kH}(r)$, and we write $\hat t(r)$ for short. Keeping in mind that $q_{kH}(z)=q_H(kz)$, this leads to
\[
\IM q_H(ir) \asymp \bigg|q_H(ir)-\frac{\omega_3^{(H)}(\hat t(r))}{\omega_2^{(H)}(\hat t(r))} \bigg| \asymp \frac{1}{r\omega_2^{(H)}(\hat t(r))}
\]
as well as
\[
\IM q_H(ikr) \asymp \bigg|q_H(ikr)-\frac{k\omega_3^{(H)}(\hat t(r))}{k\omega_2^{(H)}(\hat t(r))} \bigg| \asymp \frac{1}{kr \cdot \omega_2^{(H)}(\hat t(r))}.
\]
Relation (\ref{A21}) follows. Now (\ref{A20}) is obtained by dividing (\ref{A21}) by $|q_H(ikr)|$.
\end{proof}
\section{Behavior of tails of the spectral measure}
\label{S5}
\noindent \Cref{T1}, which determines the imaginary part of $q_H(ir)$ up to universal multiplicative constants,
allows us to describe the growth of the spectral measure $\mu_H$ relative
to suitable comparison functions. Let us introduce the measure $\tilde\mu_H$ on $[0,\infty)$ by
\begin{equation}\label{Y140}
\tilde\mu_H([0,r)) := \tilde\mu_H(r) := \mu_H((-r,r)), \quad \quad r>0.
\end{equation}
In \Cref{Y190}, equivalent conditions are given for when the function $r \mapsto \tilde \mu_H(r)$ is integrable w.r.t. a given weight function, and also for when the measure $\tilde \mu_H$ is finite w.r.t. a rescaling function. \\
On the other hand, we can view $\tilde\mu_H$ as a function of the positive real parameter $r$, and compare this to a given function $\ms g$. This is what we do in \Cref{Y02}. \\
We note that the content of this section is analogous to \cite[Section 4]{langer.pruckner.woracek:heniest}. The availability of formula (\ref{A2}) leads to improved results in the present article; however, we provide less detail than was given in \cite{langer.pruckner.woracek:heniest}.
\newline
\noindent The proofs in this section are based on standard theorems of Abelian-Tauberian type, relating $\mu_H$ to its Poisson integral
\begin{align}
\mc P [\mu_H](z):= \int_{\bb R} \IM \Big( \frac{1}{t-z} \Big) \mkern4mu\mathrm{d} \mu_H(t).
\end{align}
By (\ref{A17}), we have $\mc P [\mu_H](z) = \IM q_H(z) - \beta \IM z $. If $\beta =0$, we can proceed with the application of Abelian-Tauberian theorems without problems. The case $\beta >0$ is equivalent to $a$ being the left endpoint of an $H$-indivisible interval of type $\frac{\pi}{2}$, i.e., $\mathring a(H) >a$ and $h_2$ vanishes a.e. on $[a,\mathring a(H))$. The restricted Hamiltonian $H_-:=H\big|_{[\mathring a(H),b)}$ then has the Weyl coefficient $q_{H_-}(z)=q_H(z)-\beta z$ and thus $\IM q_{H_-}(z) = \mc P [\mu_H](z)$. Hence, we can investigate $\mu_H$ by applying the theorems from this section to $H_-$.
\subsection[{Finiteness of the spectral measure w.r.t. given weight functions}]{Finiteness of the spectral measure w.r.t. given weight functions}
\label{Y190}
\begin{theorem}
\label{AT0}
Let $H$ be a Hamiltonian defined on $[a,b)$, and assume that $h_2$ does not vanish identically in a neighborhood of $a$. Let $\ms f$ be a continuous, non-decreasing function,
and denote by $\mu_H$ the spectral measure of $H$.
\noindent Then the following statements are equivalent:
\begin{Enumerate}
\item
\begin{equation}
\label{Y161}
\int_1^{\infty} \tilde\mu_H(r)\frac{\ms f(r)}{r^3}\mkern4mu\mathrm{d} r<\infty;
\end{equation}
\item
There is $ a' \in (\hat a,b)$ such that
\[
\int_{\hat a}^{a'}
\frac{1}{\omega_2(t)^2}\binom{\omega_2(t)}{-\omega_3(t)}^*H(t)\binom{\omega_2(t)}{-\omega_3(t)}
\cdot\ms f\bigl(\det \Omega (t)^{-\frac12}\bigr)\mkern4mu\mathrm{d} t<\infty.
\]
\end{Enumerate}
\noindent If, in addition, $\ms f$ is differentiable,
then the above conditions hold if and only if there is $ a'\in(\hat a,b)$ such that
\begin{align*}
\int_{\hat a}^{a'}
\frac{(\det \Omega)'(t)}{\omega_2(t)\det \Omega (t)^{\frac 12}} \ms f'\bigl(\det \Omega (t)^{-\frac12}\bigr)\mkern4mu\mathrm{d} t < \infty.
\end{align*}
\end{theorem}
\begin{proof}
First note that finiteness of the integrals in the theorem clearly
does not depend on the choice of $a'\in(\hat a,b)$.
\noindent Let $\xi$ be the measure on $[1,\infty)$ such that $\ms f(r)=\xi([1,r))$, $r\ge1$.
It follows from \cite[Lemma~4]{kac:1982} that
\[
\int_{[1,\infty)}\frac{\Poi{\mu_H}(ir)}{r}\mkern4mu\mathrm{d}\xi(r) < \infty
\quad\Longleftrightarrow\quad
\int_1^\infty \frac{\tilde\mu_H(r)\ms f(r)}{r^3}\mkern4mu\mathrm{d} r < \infty.
\]
Since $h_2$ does not vanish identically in a neighborhood of $a$, we have $\Poi{\mu_H}=\IM q_H$. By \Cref{T1}, we have
\[
\frac{\Poi{\mu_H}(ir)}{r}
\asymp \frac{1}{r^2 \omega_2(\hat t(r))}
\asymp \frac{\det \Omega (\hat t(r))}{\omega_2(\hat t(r))}.
\]
Hence
\begin{equation}\label{Y162}
\int_1^{\infty} \tilde\mu_H(r)\frac{\ms f(r)}{r^3}\mkern4mu\mathrm{d} r<\infty
\;\;\Longleftrightarrow\;\;
\int_{[1,\infty)}\frac{\det \Omega\bigl(\hat t(r)\bigr)}{\omega_2\bigl(\hat t(r)\bigr)}\mkern4mu\mathrm{d}\xi(r)
< \infty.
\end{equation}
We define a measure $\nu$ on $(0,\infty)$ via $\nu((r,\infty))=\frac{\det \Omega (\hat t(r))}{\omega_2(\hat t(r))}$, $r>0$. Let $\hat\nu$ be the measure on $(\hat a,b)$ satisfying $\hat\nu((\hat a,t))=\nu((\hat r(t),\infty))=\frac{\det \Omega (t)}{\omega_2(t)}$, $t>\hat a$.
Integrating by parts (see, e.g., \cite[Lemma~2]{kac:1965}), we can rewrite the second integral in (\ref{Y162}) as follows:
\begin{align*}
& \int_{[1,\infty)}\frac{\det \Omega (\hat t(r))}{\omega_2(\hat t(r))} \mkern4mu\mathrm{d}\xi(r)
= \int_{[1,\infty)}\nu\bigl((r,\infty)\bigr)\mkern4mu\mathrm{d}\xi(r)
\\
&= \int_{[1,\infty)}\!\ms f(r)\mkern4mu\mathrm{d}\nu(r) = \int_{(\hat a,\hat t(1)]}\!\ms f(\hat r(t))\mkern4mu\mathrm{d} \hat\nu(t)
= \int_{(\hat a,\hat t(1)]}\ms f(\hat r(t))\mkern4mu\mathrm{d} \bigg(\frac{\det \Omega}{\omega_2} \bigg)(t)
\\
&= \int_{\hat a}^{\hat t(1)}\ms f\bigl(\hat r(t)\bigr)\cdot \frac{1}{\omega_2(t)^2}\binom{\omega_2(t)}{-\omega_3(t)}^*H(t)\binom{\omega_2(t)}{-\omega_3(t)}\mkern4mu\mathrm{d} t.
\end{align*}
To prove the additional statement, let us assume that $\ms f$ is differentiable. Using a substitution we can rewrite
the second integral in \eqref{Y162} differently:
\begin{align*}
&\int_{[1,\infty)}\frac{\det \Omega\bigl(\hat t(r)\bigr)}{\omega_2\bigl(\hat t(r)\bigr)}\mkern4mu\mathrm{d}\xi(r)
= \int_1^\infty\frac{\det \Omega\bigl(\hat t(r)\bigr)}{\omega_2\bigl(\hat t(r)\bigr)} \ms f'(r)\mkern4mu\mathrm{d} r
\\
&= \int_{\hat t(1)}^{\hat a} \!\frac{\det \Omega (t)}{\omega_2(t)}\ms f'(\hat r(t))\hat r'(t)\mkern4mu\mathrm{d} t
= \frac 12 \int_{\hat a}^{\hat t(1)}\frac{\det \Omega (t)}{\omega_2(t)}\ms f'(\hat r(t))
\frac{(\det \Omega)'(t)}{\det \Omega (t) ^{\frac32}}\mkern4mu\mathrm{d} t.
\end{align*}
\end{proof}
\noindent
The following result provides, in particular, information on when the measure $\tilde \mu_H$ is finite w.r.t. a regularly varying rescaling function $\ms g$.
\begin{corollary}
Let $H$ be a Hamiltonian on $[a,b)$, and assume that $h_2$ does not vanish identically in a neighborhood of $a$.
Let $\ms g$ be a continuous function that is regularly varying with index $\alpha \in [0,2]$, and denote by $\mu_H$ the spectral measure of $H$ as in (\ref{A17}).
Then, for $\alpha \in (0,2)$ and every $a'\in (\hat a,b)$, the following statements are equivalent:
\begin{alignat*}{2}
&\rm (i)\, && \int_{[1,\infty)} \frac{\mkern4mu\mathrm{d}\tilde\mu_H(r)}{\ms g(r)} < \infty; \\[1.5ex]
&\rm (ii)\, &&\int_{\hat a}^{a'}\mkern-10mu
\frac{1}{\omega_2(t)^2}\binom{\omega_2(t)}{-\omega_3(t)}^* H(t)\binom{\omega_2(t)}{-\omega_3(t)}
\frac{\mkern4mu\mathrm{d} t}{\det \Omega (t)\ms g\bigl(\det \Omega (t)^{-\frac 12}\bigr)} < \infty; \\[1.5ex]
&\rm (iii)\, &&\int_{\hat a}^{a'}\frac{(\det \Omega)'(t)}{\omega_2(t)\det \Omega (t)\ms g\bigl(\det \Omega (t)^{-\frac 12}\bigr)}\mkern4mu\mathrm{d} t
< \infty;
\end{alignat*}
If $\alpha =0$, then $(iii) \Rightarrow (i)$ and $(iii) \Leftrightarrow (ii)$, while for $\alpha = 2$ we have $(iii) \Rightarrow (i)$ and $(iii) \Rightarrow (ii)$.
\end{corollary}
\begin{proof}
The increasing function $\ms f(r):=\int_1^r \frac{t}{\ms g(t)} \mkern4mu\mathrm{d} t$ is regularly varying by Karamata's Theorem (\cite[Propositions 1.5.8 and 1.5.9a]{bingham.goldie.teugels:1989}). Moreover,
\begin{equation}
\label{Y137}
\ms f(r)\;
\begin{cases}
\; \asymp\frac{r^2}{\ms g(r)}, & 0 \leq \alpha<2,
\\[2ex]
\; \gg\frac{r^2}{\ms g(r)}, & \alpha=2.
\end{cases}
\end{equation}
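As an illustration (not needed in the sequel), relation (\ref{Y137}) can be checked directly for the pure power $\ms g(r)=r^{\alpha}$:
\[
\ms f(r)=\int_1^r t^{1-\alpha}\mkern4mu\mathrm{d} t=
\begin{cases}
\frac{r^{2-\alpha}-1}{2-\alpha}\asymp \frac{r^2}{\ms g(r)}, & 0 \leq \alpha<2,
\\[1ex]
\log r \gg \frac{r^2}{\ms g(r)}, & \alpha=2.
\end{cases}
\]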
Clearly $(iii)$ is equivalent to
\[
\int_{\hat a}^{a'}
\frac{(\det \Omega)'(t)}{\omega_2(t)\det \Omega (t)^{\frac 12}} \ms f'\bigl(\det \Omega (t)^{-\frac12}\bigr)\mkern4mu\mathrm{d} t < \infty
\]
which is the condition appearing in the additional statement of \Cref{AT0}.
Applying \Cref{AT0} and using (\ref{Y137}), this is equivalent to (for $\alpha \in [0,2)$) or implies (for $\alpha =2$) both $(ii)$ and
\[
\int_1^{\infty} \tilde\mu_H(r)\frac{\mkern4mu\mathrm{d} r}{r\ms g(r)}<\infty.
\]
By \cite[Proposition 4.5]{langer.pruckner.woracek:heniest}, this is further equivalent to (for $\alpha \in (0,2]$) or implies (for $\alpha =0$) statement $(i)$.
\end{proof}
\subsection{Comparative growth of the distribution function}
\label{Y02}
In this section we investigate $\limsup$-conditions for the
quotient $\frac{\tilde\mu_H(r)}{\ms g(r)}$ instead of integrability conditions.
Let us introduce the corresponding classes of measures.
\begin{definition}\label{Y77}
Let $\ms g(r)$ be a regularly varying function with index $\alpha \in [0,2]$
and $\lim_{r\to\infty}\ms g(r)=\infty$.
Then we set
\[
\mc F_{\ms g} \mathrel{\mathop:}= \big\{\mu\mid\mkern3mu \tilde\mu(r) \lesssim \ms g(r), \, r \to \infty \big\},\qquad
\mc F_{\ms g}^0 \mathrel{\mathop:}= \big\{\mu\mid\mkern3mu \tilde\mu(r) = {\rm o} ( \ms g(r)), \, r \to \infty\big\},
\]
where again $\tilde\mu(r)\mathrel{\mathop:}=\mu((-r,r))$.
\end{definition}
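\noindent For instance, if $\mu$ is the Lebesgue measure on $\bb R$, then $\tilde\mu(r)=2r$. Hence, for $\ms g(r)=r^{\alpha}$ with $\alpha \in (1,2]$, we have $\mu \in \mc F_{\ms g}^0$, while for $\alpha=1$ we have $\mu \in \mc F_{\ms g} \setminus \mc F_{\ms g}^0$.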
\noindent It should be mentioned that, for non-decreasing $\ms g$, if
\[
\int_{[1,\infty)} \frac{\mkern4mu\mathrm{d}\tilde\mu(r)}{\ms g(r)} < \infty ,
\]
then $\mu \in \mc F_{\ms g}^0 \subseteq \mc F_{\ms g}$. For further discussion of this relation, the reader is referred to \cite{langer.pruckner.woracek:heniest}.
\begin{theorem}\label{Y74}
Let $H$ be a Hamiltonian on $[a,b)$, and assume that $h_2$ does not vanish identically in a neighborhood of $a$.
Let $\ms g(r)$ be a regularly varying function with index $\alpha \in [0,2]$ and $\lim_{r\to\infty}\ms g(r)=\infty$. Denote by $\mu_H$ the spectral measure of $H$. For $\alpha < 2$, the following statements hold:
\begin{alignat*}{4}
&\rm (i)\quad &&\mu_H \in \mathcal{F}_{\ms g} \quad &&
\Leftrightarrow
&& \quad \limsup_{t \to \hat a} \frac{1}{\omega_2(t)\ms g \big( \det \Omega (t)^{-\frac 12} \big)}<\infty; \\[1ex]
&\rm (ii)\quad &&\mu_H \in \mathcal{F}_{\ms g}^0 \quad &&
\Leftrightarrow
&&\quad \lim_{t \to \hat a} \frac{1}{\omega_2(t)\ms g \big( \det \Omega (t)^{-\frac 12} \big)}=0.
\end{alignat*}
If $\alpha =2$, then the right-hand sides of $(i)$ and $(ii)$ still imply the respective left-hand sides.
\end{theorem}
\begin{proof}
We use \cite[Lemma 4.16]{langer.pruckner.woracek:heniest} which, adapted to our situation, reads as
\[
c_{\alpha}\limsup_{r\to\infty}\biggl(\frac{r}{\ms g(r)}\Poi{\mu_H}(ir)\biggr) \leq \limsup_{r\to\infty}\frac{\tilde\mu_H(r)}{\ms g(r)}
\leq c_{\alpha}' \limsup_{r\to\infty}\biggl(\frac{r}{\ms g(r)}\Poi{\mu_H}(ir)\biggr),
\]
and the second inequality holds even for $\alpha =2$. Since $h_2$ does not vanish identically in a neighborhood of $a$, we have $\Poi{\mu_H}=\IM q_H$. Therefore, the assertion follows from \Cref{T1} and a substitution $r=\hat r(t)$.
\end{proof}
\section{Weyl coefficients with tangential behavior}
\label{S3}
In this section, we investigate the scenario
\begin{align}
\label{A16}
\lim_{r \to \infty} \frac{\IM q_H(ir)}{|q_H(ir)|}=0 \quad \quad \text{or} \quad \quad \liminf_{r \to \infty} \frac{\IM q_H(ir)}{|q_H(ir)|}=0.
\end{align}
This is equivalent to tangential behavior of $q_H(ir)$, i.e.,
\[
\lim_{r \to \infty} \arg q_H(ir) \in \{0,\pi\} \quad \text{or} \quad \liminf_{r \to \infty} \min \big\{\arg q_H(ir),\, \pi-\arg q_H(ir) \big\}=0.
\]
From \Cref{A4} we get that
\begin{align}
\label{A25}
\lim_{n \to \infty} \frac{\IM q_H(i r_n)}{|q_H(i r_n)|}=0 \,\, \Longleftrightarrow \,\, \lim_{n \to \infty} \frac{\det \Omega(\hat t(r_n))}{(\omega_1 \omega_2)(\hat t(r_n))}=0
\end{align}
for every sequence $r_n \to \infty$.
All results in this section can be seen from the canonical systems perspective as well as from the Herglotz functions perspective. \\
\noindent To start with, we observe that the second assertion in (\ref{A16}) implies the first unless the limit inferior is attained only along very sparse sequences. We formulate this fact in the language of Herglotz functions and prove it within the canonical systems setting. However, we do not know a purely function-theoretic proof (which may very well exist in the literature).
\begin{lemma}
Let $q$ be a Herglotz function. Suppose there is a sequence $(r_n)_{n \in \bb N}$ with $r_n \to \infty$, $\sup_{n \in \bb N} \frac{r_{n+1}}{r_n} < \infty$, and
\[
\lim_{n \to \infty} \frac{\IM q(ir_n)}{|q(ir_n)|}=0.
\]
Then $\lim_{r \to \infty} \frac{\IM q(ir)}{|q(ir)|}=0$.
\end{lemma}
\begin{proof}
Let $H$ be a Hamiltonian (on $[0,\infty)$), such that $q=q_H$. \\
Let $d(t):=\frac{\det \Omega(t)}{(\omega_1 \omega_2)(t)}$. Set $t_n := \hat t(r_n)$, then by (\ref{A9}),
\[
d(t_n) \asymp \bigg( \frac{\IM q(ir_n)}{|q(ir_n)|} \bigg)^2 \xrightarrow{n \to \infty} 0.
\]
Suppose that the assertion were not true, i.e., there is a sequence $\xi_1 > \xi_2 > \dots$ converging to $0$ such that $d(\xi_k) \geq C>0$ for all $k$. For $k \in \bb N$, set $n(k):=\max \{n \in \bb N \mid t_n > \xi_k \}$. We obtain
\begin{align*}
&\Big(\frac{r_{n(k)+1}}{r_{n(k)}} \Big)^2 = \frac{\det \Omega(t_{n(k)})}{\det \Omega (t_{n(k)+1})} \geq \frac{\det \Omega(\xi_k)}{\det \Omega (t_{n(k)+1})} \\
&=\frac{d(\xi_k)}{d(t_{n(k)+1})} \cdot \frac{(\omega_1 \omega_2)(\xi_k)}{(\omega_1 \omega_2)(t_{n(k)+1})} \geq \frac{C}{d(t_{n(k)+1})} \xrightarrow{k \to \infty} \infty
\end{align*}
which contradicts our assumption.
\end{proof}
\noindent Recall formulae (\ref{A10}) and (\ref{A9}). On an intuitive level, they tell us that in the case that $\IM q_H(ir) \not\asymp |q_H(ir)|$, the growth of $|q_H(ir)|$ is restricted since $\mathring r(\hat t(r))$ is then far away from $r$. If read in the other direction, this means that if $|q_H(ir)|$ grows quickly and without oscillating too much, then $\mathring r(\hat t(r))$ and $r$ should be close to each other, and hence the quotient $\frac{\IM q_H(ir)}{|q_H(ir)|}$ should not decay.\\
The following definition introduces the notions needed in Theorems \ref{T7} and \ref{T8}, which confirm this intuition.
\begin{definition}
\label{A48}
\begin{itemize}
\item[$\rhd$] A measurable function $f: (0,\infty) \to (0,\infty)$ is called \textit{regularly varying (at infinity) with index $\alpha \in \bb R$} if, for any $\lambda >0$,
\begin{align}
\lim_{r \to \infty}\frac{f(\lambda r)}{f(r)} = \lambda^{\alpha}.
\end{align}
If $\alpha =0$, then $f$ is also called \textit{slowly varying (at infinity)}.
\item[$\rhd$] A measurable function $f: (0,\infty) \to (0,\infty)$ is \textit{positively increasing (at infinity)} if there is $\lambda \in (0,1)$ such that
\begin{align}
\limsup_{r \to \infty} \frac{f(\lambda r)}{f(r)} <1.
\end{align}
Let us say explicitly that we do not require $f$ to be monotone.
\end{itemize}
\end{definition}
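\noindent As an illustration of these notions: $f(r)=r^{\alpha}$ with $\alpha>0$ is positively increasing, since $\frac{f(\lambda r)}{f(r)}=\lambda^{\alpha}<1$ for every $\lambda \in (0,1)$, whereas $f(r)=\log r$ is slowly varying but not positively increasing, since $\frac{f(\lambda r)}{f(r)}=\frac{\log r+\log \lambda}{\log r} \to 1$ for every $\lambda \in (0,1)$.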
\stepcounter{lemma}
\begin{subtheorem}
\label{T7}
Let $q \neq 0$ be a Herglotz function. If $|q(ir)|$ or $\frac{1}{|q(ir)|}$ is positively increasing at infinity (in particular, if $|q(ir)|$ is regularly varying with index $\alpha \neq 0$), then $\IM q(ir) \asymp |q(ir)|$ as $r \to \infty$.
\end{subtheorem}
\begin{subtheorem}
\label{T8}
Let $q \neq 0$ be a Herglotz function. If $\IM q(ir) = {\rm o} (|q(ir)|)$ as $r \to \infty$, then, for every $\delta \in [0,1)$,
\begin{align}
\label{A19}
\lim_{r \to \infty} \frac{ q \Big(ir \Big[\frac{\IM q(ir)}{|q(ir)|}\Big]^{\delta} \Big)}{q(ir)} =1.
\end{align}
For $k>0$, we also have $\lim_{r \to \infty} \frac{q(ikr)}{q(ir)}=1$, in particular, $|q(ir)|$ is slowly varying at infinity.
\end{subtheorem}
\begin{remark}
In \Cref{T7}, the requirement that $|q(ir)|$ be positively increasing is essential. It is not enough that $|q(ir)|$ grows sufficiently fast, say, $|q(ir)| \gtrsim r^{\delta}$ for $r \to \infty$ and some $\delta > 0$. \\
In fact, for any given $\delta \in (0,1)$, we construct in \Cref{T6} a Hamiltonian\footnote{Choose suitable parameters $p,l \in (0,1)$, such that $\delta = \frac{\log l}{\log (pl)}$, i.e., $p=l^{\delta ^{-1}-1}$.} $H$ whose Weyl coefficient $q_H$ satisfies (see \Cref{R2}) $|q_H(ir)| \gtrsim r^{\delta}$ as $r \to \infty$, but
\[
\liminf_{r \to \infty} \frac{\IM q_H(ir)}{|q_H(ir)|} = 0.
\]
In other words, $\IM q_H(ir) \not\asymp |q_H(ir)|$. \\
Note also that for the above-mentioned $H$, certainly $|q_H(ir)|$ is not slowly varying \cite[Proposition 1.3.6]{bingham.goldie.teugels:1989}. Hence, in \Cref{T8} it is not enough to require $\IM q(ir_n) = {\rm o} (|q(ir_n)|)$ on some sequence $r_n \to \infty$.
\end{remark}
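\noindent Let us verify the parameter choice made in the footnote: with $p=l^{\delta^{-1}-1}$, we indeed obtain
\[
\frac{\log l}{\log (pl)}=\frac{\log l}{\log p+\log l}=\frac{\log l}{(\delta^{-1}-1)\log l+\log l}=\frac{\log l}{\delta^{-1}\log l}=\delta.
\]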
\begin{example}
Let $q(z)=\log z$. Then $|q(ir)| = \big[(\log r)^2+\frac{\pi^2}{4}\big]^{1/2}$ is increasing. However, $\IM q(ir)=\frac{\pi}{2}$ is constant, and hence $\IM q(ir) = {\rm o} (|q(ir)|)$ as $r \to \infty$. The conclusion of \Cref{T7} fails here because $|q(ir)|$ is not positively increasing.
\end{example}
\begin{proof}[Proof of \Cref{T7}]
Assume first that $|q(ir)|$ is positively increasing. Then there are $\lambda, \sigma \in (0,1)$ and $R > 0$ such that
\begin{align}
\label{A18}
\frac{|q(i\lambda r)|}{|q(ir)|} \leq \sigma, \quad \quad r \geq R.
\end{align}
Let $H$ be a Hamiltonian with Weyl coefficient $q_H=q$, allowing us to use (\ref{A10}). \\
Suppose that the assertion were not true. Then there is a (w.l.o.g.\ monotone) sequence $r_n \to \infty$ with $\lim_{n \to \infty} \frac{\IM q(ir_n)}{|q(ir_n)|}=0$. Let $m(n)$ be such that
\[
\lambda^{m(n)+1} \leq \frac{\mathring r(\hat t(r_n))}{r_n} < \lambda^{m(n)}.
\]
Note that $m(n) \to \infty$ because of (\ref{A9}). \\
Furthermore, (\ref{A10}) ensures that there is $\beta >0$ with
\[
\beta \leq \frac{|q(i \mathring r(\hat t(r)))|}{|q(ir)|}, \quad r \in (0,\infty).
\]
We will also need that for $0<r<r'$,
\[
\frac{|q(ir)|}{|q(ir')|} \asymp \frac{r' \omega_2(\mathring t(r'))}{r \omega_2(\mathring t(r))} \leq \frac{r'}{r}
\]
because $\omega_2$ is nondecreasing. \\
Choosing $n$ so big that $\mathring r(\hat t(r_n)) \geq R$, we get the contradiction
\begin{align*}
\beta &\leq \frac{\big|q \big(i \mathring r(\hat t(r_n)) \big) \big|}{\big|q \big(ir_n \big)\big|} = \frac{\big|q \big(i \mathring r(\hat t(r_n)) \big) \big|}{\big|q \big(i\lambda^{m(n)}r_n \big)\big|} \cdot \prod_{j=0}^{m(n)-1}\frac{\big|q \big(i\lambda^{j+1} r_n \big)\big|}{\big|q \big(i\lambda^j r_n \big)\big|} \\
&\lesssim \frac{\lambda^{m(n)}r_n}{\mathring r(\hat t(r_n))} \sigma^{m(n)}
\leq \frac{\sigma^{m(n)}}{\lambda} \xrightarrow{n \to \infty} 0.
\end{align*}
This proves the theorem in the case that $|q(ir)|$ is positively increasing. \\
If, on the other hand, $\frac{1}{|q(ir)|}$ is positively increasing, we may set $\tilde q :=-\frac 1q$, for which $|\tilde q(ir)|$ is positively increasing. We obtain
\[
\frac{\IM q(ir)}{|q(ir)|} = \frac{\IM \tilde q(ir)}{|\tilde q(ir)|} \asymp 1.
\]
Finally, we note that if $|q(ir)|$ is regularly varying with index $\alpha >0$, then it is also positively increasing. If $|q(ir)|$ is regularly varying with index $\alpha < 0$, then $\frac{1}{|q(ir)|}$ is regularly varying with index $-\alpha > 0$ and thus positively increasing.
\end{proof}
\noindent Our proof of \Cref{T8} is elementary: only folklore facts that follow from the Herglotz integral representation (\ref{A17}) are needed. We would be interested in an elementary proof of \Cref{T7} as well, which so far we have not found. \\
\noindent One fact needed in the proof is the following: for any Herglotz function $q$ and any $z \in \bb C_+$, we have
\begin{equation}
\label{A5}
|q'(z)| \leq \frac{\IM q(z)}{\IM z}.
\end{equation}
This can be seen using the representation (\ref{A17}): We write
\[
q'(z)=b+\int_{\bb R} \frac{d\sigma (t)}{(t-z)^2}
\]
and obtain
\[
|q'(z)| \leq b+\int_{\bb R} \frac{d\sigma (t)}{|t-z|^2}= \frac{\IM q(z)}{\IM z}.
\]
\begin{proof}[Proof of \Cref{T8}]
Let $k \in (0,1)$. Then
\begin{align}
\label{A15}
&|\log q \big(ikr \big)-\log q(ir)|=\Big|\int_{kr}^r i(\log q)'(is) \mkern4mu\mathrm{d} s \Big| \leq \int_{kr}^r \big|(\log q)'(is)\big| \mkern4mu\mathrm{d} s.
\end{align}
Apply (\ref{A5}) to $\log q$ and to $i\pi-\log q$ to obtain
\begin{align*}
&|(\log q)'(is)| \leq \frac 1s \min \big\{\IM [\log q(is)],\pi - \IM [\log q(is)] \big\} \\
&= \frac 1s \min \big\{\arg q(is), \pi-\arg q(is) \big\} \asymp \frac{\IM q(is)}{s|q(is)|}
\end{align*}
for all $s>0$. We will also need monotonicity in $s$ of $s\frac{\IM q(is)}{|q(is)|}$. In fact, it is easy to see from (\ref{A17}) that $s \IM q(is)$ is nondecreasing in $s$. Now we can write
\[
s\frac{\IM q(is)}{|q(is)|} = \sqrt{s \IM q(is) \cdot s \IM \Big(-\frac{1}{q(is)} \Big)}
\]
and hence $s\frac{\IM q(is)}{|q(is)|}$ is nondecreasing in $s$.
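The claim that $s \IM q(is)$ is nondecreasing can be seen as follows: writing the representation (\ref{A17}) with linear coefficient $b$ and measure $\sigma$ as in the computation preceding this proof, we get
\[
s \IM q(is)=bs^2+\int_{\bb R} \frac{s^2}{t^2+s^2}\mkern4mu\mathrm{d}\sigma(t),
\]
and each term on the right-hand side is nondecreasing in $s$.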
Putting things together and continuing the estimate in (\ref{A15}), we obtain
\begin{align}
&|\log q \big(ikr \big)-\log q(ir)| \lesssim \int_{kr}^r \frac{\IM q(is)}{s|q(is)|} \mkern4mu\mathrm{d} s \leq r \frac{\IM q(ir)}{|q(ir)|} \cdot \int_{kr}^r \frac{ds}{s^2} \nonumber \\
&= r \frac{\IM q(ir)}{|q(ir)|}\Big(\frac{1}{kr}-\frac 1r \Big) \asymp \frac{\IM q(ir)}{|q(ir)|} \xrightarrow{r \to \infty} 0. \label{A28}
\end{align}
This shows $\lim_{r \to \infty} \frac{q(ikr)}{q(ir)}=1$. To prove (\ref{A19}), set $k(r):= \frac{\IM q(ir)}{|q(ir)|}$ and repeat the calculations up to the second to last term in (\ref{A28}), but with $k$ replaced by $k(r)^{\delta}$, where $\delta \in [0,1)$. Since
\[
r k(r) \Big(\frac{1}{rk(r)^{\delta}}-\frac 1r \Big) \asymp k(r)^{1-\delta} \xrightarrow{r \to \infty} 0,
\]
we arrive at (\ref{A19}).
\end{proof}
\noindent Note that $\lim_{r \to \infty} \frac{q(ikr)}{q(ir)}=1$ is also a consequence of (\ref{A20}). The preceding proof, in addition to being elementary, is needed to show (\ref{A19}) which, upon taking absolute values, can be seen as slow variation with a rate.
\section{Maximal oscillation within Weyl disks}
\label{S4}
\noindent In order to explain the aim of this section, let us first recall the notion of Weyl disks. Let $W(t,z) \in \bb C^{2 \times 2}$ be the fundamental solution of
\begin{equation}
\frac{d}{dt} W(t,z)J=zW(t,z)H(t),
\end{equation}
with initial condition $W(a,z)=I$, solving the transpose of equation (\ref{A33}). We define the \textit{Weyl disks}
\begin{equation}
\label{A34}
D_{t,z}:=\Big\{ \frac{w_{11}(t,z)\tau + w_{12}(t,z)}{w_{21}(t,z)\tau + w_{22}(t,z)} \;\Big|\; \tau \in \overline{\bb C_+} \Big\} \subseteq \overline{\bb C_+},
\end{equation}
where $\bb C_+=\{z \in \bb C \mid \IM z>0\}$, and the closure is taken in the Riemann sphere $\overline{\bb C}=\bb C \cup \{ \infty \}$. For fixed $z \in \bb C_+$ and $t_1 \leq t_2$, we have $D_{t_1,z} \supseteq D_{t_2,z}$, and the disks shrink down to a single point, which is $q_H(z)$:
\[
\bigcap_{t \in [a,b)} D_{t,z} = \{ q_H(z) \}.
\]
\noindent Now we review the estimate (\ref{A43}) which has a geometric interpretation. Namely, the functions $L(r)$ and $A(r)$ give, up to $\asymp$, the imaginary part of the bottom and top point of $D_{\mathring t(r),ir}$, respectively. The size of $\IM q_H(ir)$ relative to $L(r)$ and $A(r)$ thus corresponds to the vertical position of $q_H(ir)$ within the disk $D_{\mathring t(r),ir}$. \\
In this section we give answers to several questions from \cite{langer.pruckner.woracek:heniest}. For instance, the question was raised whether there is a Hamiltonian $H$ for which $L(r) \asymp \IM q_H(ir) \not\asymp A(r)$ for $r \to \infty$. The answer to this particular question is no, cf. \Cref{T10}. However\footnote{In this section, we use the more transparent notation $f(r) \ll g(r)$ instead of $f(r) = {\rm o} (g(r))$.}, $L(r_n) \asymp \IM q_H(ir_n) \ll A(r_n)$ on a subsequence $r_n \to \infty$ is possible, and we provide examples for this in \Cref{T6} and in \Cref{R3}. The Weyl coefficient of the Hamiltonian constructed in \Cref{T6} exhibits ``maximal'' oscillatory behaviour in the sense that it goes back and forth between the bottoms and tops of the disks $D_{\mathring t(r),ir}$.
\begin{proposition}
\label{T10}
Let $H$ be a Hamiltonian on $(a,b)$. The following statements hold:
\begin{itemize}
\item[$(i)$] Suppose that $L(r) \not\asymp A(r)$ as $r \to \infty$. Then there exists a sequence $(r_n)_{n \in \bb N}$ such that $r_n \to \infty$, $L(r_n) \ll A(r_n)$, and
\[
\IM q_H(ir_n) \gtrsim \sqrt{L(r_n)A(r_n)}.
\]
\item[$(ii)$] Suppose that $L(r) \not\asymp A(r)$, but not $L(r) \ll A(r)$ as $r \to \infty$. Then there is also $(r_n')_{n \in \bb N}$ with $r_n' \to \infty$, $L(r_n') \ll A(r_n')$, and
\begin{equation}
\label{A32}
\IM q_H(ir_n') \asymp \sqrt{L(r_n')A(r_n')}.
\end{equation}
\end{itemize}
\end{proposition}
\begin{proof}
We shorten notation by setting $d(t):=\frac{\det \Omega(t)}{(\omega_1 \omega_2)(t)}$. By assumption, $\liminf_{t \to \hat a} d(t)=0$. Let $c \in (\hat a,b)$ be fixed and set $t_n := \max\{t \leq c \mid d(t) \leq \frac 1n \}$. With $t_n^+:= \hat t(\mathring r(t_n)) \geq t_n$, we have $d(t_n^+) \geq \frac 1n = d(t_n)$ if $n$ is large enough for $t_n^+ \leq c$ to hold. Using (\ref{A9}), we obtain
\[
\bigg(\frac{\IM q_H(i \mathring r(t_n))}{A(\mathring r(t_n))} \bigg)^2 \asymp d(t_n^+) \geq d(t_n)=\frac{L(\mathring r(t_n))}{A(\mathring r(t_n))}.
\]
Note that $L(\mathring r(t_n)) \ll A(\mathring r(t_n))$ because of $d(t_n) \to 0$. \\
Suppose now that $s:=\limsup_{r \to \infty} \frac{L(r)}{A(r)}>0$. Set $\xi_n := \max \{t \leq t_n \mid d(t)=\frac s2\}$ and find $\tau_n$ between $\xi_n$ and $t_n$ such that $d(\tau_n) = \min \{d(t) \mid t \in [\xi_n, t_n]\}$. Certainly, $d(\tau_n) \leq d(t_n)=\frac 1n$ and $d(\tau_n) \leq d(t)$ for all $t \in [\xi_n,c]$. Also note that, by the same arguments as above,
\begin{align}
\label{A26}
\IM q_H(i \mathring r(\tau_n)) \gtrsim \sqrt{L(\mathring r(\tau_n))A(\mathring r(\tau_n))}.
\end{align}
\noindent We prove next that $\hat r(\tau_n) \ll \mathring r(\xi_n)$. Note that by passing to a subsequence and possibly switching signs of $\omega_3$ by looking at $J^{\top}HJ$ instead of $H$, we can assume that
\[
\lim_{n \to \infty} \frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}=1.
\]
A calculation shows that $\sqrt{(\omega_1 \omega_2)(t)}-\omega_3(t)$ is increasing. Hence
\begin{align}
\label{A27}
&\bigg(\frac{\hat r(\tau_n)}{\mathring r(\xi_n)} \bigg)^2= \frac{(\omega_1 \omega_2)(\xi_n)}{\det \Omega(\tau_n)} \\
&=\frac{(\omega_1 \omega_2)(\xi_n)\Big(1-\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)}{\Big(\sqrt{(\omega_1 \omega_2)(\tau_n)}-\omega_3(\tau_n)\Big)^2 \Big(1+\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)} \nonumber\\
&\leq \frac{(\omega_1 \omega_2)(\xi_n)\Big(1-\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)}{\Big(\sqrt{(\omega_1 \omega_2)(\xi_n)}-\omega_3(\xi_n)\Big)^2 \Big(1+\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)} \nonumber\\
&= \frac{\Big(1-\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)}{\Big(1-\frac{\omega_3(\xi_n)}{\sqrt{(\omega_1 \omega_2)(\xi_n)}}\Big)^2 \Big(1+\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}}\Big)} \lesssim 1-\frac{\omega_3(\tau_n)}{\sqrt{(\omega_1 \omega_2)(\tau_n)}} \to 0. \nonumber
\end{align}
Let $\tau_n^- := \mathring t(\hat r(\tau_n))$. By the calculation above, $\mathring r(\tau_n^-)=\hat r(\tau_n) < \mathring r(\xi_n)$ for large enough $n$, implying $\tau_n^- > \xi_n$ and hence $d(\tau_n^-) \geq d(\tau_n)$. Consequently,
\[
\frac{L(\hat r(\tau_n))}{A(\hat r(\tau_n))} =d(\tau_n^-) \geq d(\tau_n) \asymp \bigg(\frac{\IM q_H(i\hat r(\tau_n))}{A(\hat r(\tau_n))} \bigg)^2.
\]
This means that
\[
\IM q_H(i \hat r(\tau_n)) \leq C \sqrt{L(\hat r(\tau_n)) A(\hat r(\tau_n))}
\]
for some $C>0$ and all large $n$. Recall (\ref{A26}) and choose $C'>0$, w.l.o.g. $C'<C$, such that
\[
\IM q_H(i \mathring r(\tau_n)) \geq C' \sqrt{L(\mathring r(\tau_n)) A(\mathring r(\tau_n))}
\]
for large $n$. By continuity, we find, for each large $n$, an $r_n' \in [\mathring r(\tau_n),\hat r(\tau_n)]$ with
\[
\frac{\IM q_H(ir_n')}{\sqrt{L(r_n')A(r_n')}} \in [C',C],
\]
such that $(r_n')_{n \in \bb N}$ satisfies (\ref{A32}). The only thing left to prove is that $L(r_n') \ll A(r_n')$. \\
Suppose not; then on a subsequence we would have $L(r_n') \asymp A(r_n')$. Consider $\xi_n':=\mathring t(r_n') \leq \tau_n$, which would then satisfy $d(\xi_n') \gtrsim 1$ and hence
\[
1-\frac{\omega_3(\xi_n')}{\sqrt{(\omega_1 \omega_2)(\xi_n')}} \gtrsim 1.
\]
Now look at (\ref{A27}), but with $\xi_n$ replaced by $\xi_n'$. It follows that, for large $n$, $\hat r(\tau_n) < r_n'$, contradicting the choice of $r_n'$.
\end{proof}
\noindent In the following definition, we construct a Hamiltonian by prescribing $f:=\frac{\omega_3}{\sqrt{\omega_1 \omega_2}}$, choosing $f$ to be a highly oscillating function. It should be mentioned that this method of prescription works in full generality: any locally absolutely continuous function with values in $(-1,1)$ occurs as $\frac{\omega_3}{\sqrt{\omega_1 \omega_2}}$ for some Hamiltonian. Details can be found in the appendix.
\begin{definition}
\label{T6}
Let $(t_n)_{n \in \bb N}, (\xi_n)_{n \in \bb N}$ be sequences of positive numbers converging to zero, where $\xi_{n+1}<t_n<\xi_n$ for all $n \in \bb N$. Choose $p,l \in (0,1)$ and set
\[
f(t_n)=1-p^n, \quad \quad f(\xi_n)=l^n
\]
and interpolate between those points using monotone and absolutely continuous functions (e.g., linear interpolation). Set
\[
\alpha_1(t):= \begin{cases}
\frac{f'(t)}{1-f(t)}, & t \in (\xi_{n+1},t_n), \\
0, & t \in (t_n,\xi_n),
\end{cases}
\]
and
\[
\alpha_2(t):= \begin{cases}
\frac{f'(t)}{1-f(t)}, & t \in (\xi_{n+1},t_n), \\
-2\frac{f'(t)}{f(t)}, & t \in (t_n,\xi_n).
\end{cases}
\]
For $t \in [0,t_1]$, let $\omega_i(t):=\exp \Big(-\int_t^{t_1} \alpha_i(s) \mkern4mu\mathrm{d} s \Big)$, $i=1,2$, and $\omega_3(t):=\sqrt{(\omega_1 \omega_2)(t)} \cdot f(t)$. Set $h_i(t)=\omega_i'(t)$, $i=1,2,3$, $t \in [0,t_1]$. For $t \in (t_1,\infty)$, let $h_1(t):=1$ and $h_2(t):=h_3(t):=0$. Finally, define
\[
H_{p,l} :=\begin{pmatrix}
h_1 & h_3 \\
h_3 & h_2
\end{pmatrix}.
\]
\end{definition}
\begin{lemma}
$H_{p,l}$ is a Hamiltonian on $[0,\infty)$, and $\omega_i(t)=\int_0^t h_i(s) \mkern4mu\mathrm{d} s$ for $i=1,2,3$ and $t \in [0,t_1]$. Moreover, $0$ is not the left endpoint of an $H_{p,l}$-indivisible interval.
\end{lemma}
\begin{proof}
We write $H$ instead of $H_{p,l}$ for short. First we show that $H(t) \geq 0$ for all $t \in [0,t_1]$. Start by noting that, for $i=1,2$,
\[
\frac{h_i(t)}{\omega_i(t)}=(\log \omega_i)'(t)=\alpha_i(t),
\]
and calculate
\begin{align*}
&\frac{h_3(t)^2}{(\omega_1 \omega_2)(t)}=\frac{\big[(\sqrt{\omega_1 \omega_2}f )'(t)\big]^2}{(\omega_1 \omega_2)(t)} = \Big(f'(t)+\frac 12 \Big[\frac{h_1(t)}{\omega_1(t)}+\frac{h_2(t)}{\omega_2(t)} \Big]f(t) \Big)^2 \\
&=\Big(f'(t)+\frac{\alpha_1(t)+\alpha_2(t)}{2} f(t) \Big)^2.
\end{align*}
If $t \in (t_n,\xi_n)$, then this expression equals $0$, as does
\[
\frac{(h_1h_2)(t)}{(\omega_1 \omega_2)(t)}=\alpha_1(t)\alpha_2(t)=0.
\]
For $t \in (\xi_{n+1},t_n)$,
\[
\Big(f'(t)+\frac{\alpha_1(t)+\alpha_2(t)}{2} f(t) \Big)^2=\Big(\frac{f'(t)}{1-f(t)}\Big)^2=\alpha_1(t)\alpha_2(t)=\frac{(h_1h_2)(t)}{(\omega_1 \omega_2)(t)}.
\]
In both cases, $\det H(t)=0$. For $i=1,2$, as $\alpha_i(t) \geq 0$, $t \in [0,t_1]$, certainly $\omega_i(t)$ is increasing and thus $h_i(t) \geq 0$. This suffices to show that $H(t) \geq 0$. \\
$H$ is in limit point case since, for $t>t_1$, the trace of $H(t)$ equals $1$. To show that $\omega_i(t)=\int_0^t h_i(s) \mkern4mu\mathrm{d} s$, $i=1,2,3$, $t \in [0,t_1]$, we need to check that $\lim_{t \to 0} \omega_i(t)=0$. For $i=1$, this follows from
\begin{equation}
\int_0^{t_1} \alpha_1(s) \mkern4mu\mathrm{d} s = \sum_{n=1}^{\infty} \int_{\xi_{n+1}}^{t_n} \frac{f'(s)}{1-f(s)} \mkern4mu\mathrm{d} s=\sum_{n=1}^{\infty} \big[\log (1-l^{n+1})-\log(p^n) \big]=\infty.
\end{equation}
For $i=2$, it follows from the fact that $\alpha_2(t) \geq \alpha_1(t)$ for all $t \in [0,t_1]$, and for $i=3$ it follows from the definition of $\omega_3$ and the fact that $f(t) <1$, $t \in [0,t_1]$. \\
Finally, $0$ is not the left endpoint of an $H$-indivisible interval because
\[
\det \Omega(t)=(\omega_1 \omega_2)(t) \big(1-f(t)^2 \big) >0
\]
for all $t \in (0,t_1]$.
\end{proof}
\noindent We investigate the behaviour of $\IM q_{H_{p,l}}(ir)$ as well as of $L(r)$ and $A(r)$ for $r \to \infty$. A rough description of the situation is:
\begin{center}
\label{tikz1}
\begin{tikzpicture}
\draw[scale=0.5, loosely dashed, domain=1:3.65, smooth, variable=\x] plot ({(\x+2)*(\x+2)}, {4*(\x+2)});
\draw[out=30, in=240] (4.5,6.3) to (6,8.5);
\draw[out=60, in=190] (6,8.5) to (6.7,9);
\draw[out=10, in=175] (6.7,9) to (7.5,8.9);
\draw[out=-5, in=190] (7.5,8.9) to (8,9);
\draw[out=10, in=205] (8,9) to (9.5,9);
\draw[out=25, in=240] (9.5,9) to (11.5,11.5);
\draw[out=60, in=185] (11.5,11.5) to (12.2,12.1);
\draw[out=5, in=205] (12.2,12.1) to (15,10.8);
\draw[out=25, in=245] (15,10.8) to (15.95,11.95);
\draw[out=30, in=190] (4.5,5.9) to (5.8,6.5);
\draw[out=10, in=200] (5.8,6.5) to (8,6.5);
\draw[out=20, in=220] (8,6.5) to (10.2,9.2);
\draw[out=40, in=170] (10.2,9.2) to (12,9.6);
\draw[out=-10, in=185] (12,9.6) to (13.4,9.4);
\draw[out=5, in=225] (13.4,9.4) to (14.8,10.3);
\draw[out=45, in=190] (14.8,10.3) to (15.95,10.95);
\draw[out=30, in=245] (4.5,6.05) to (5.7,7.5);
\draw[out=65, in=150] (5.7,7.5) to (7.5,8);
\draw[out=-30, in=240] (7.5,8) to (9.35,8.2);
\draw[out=60, in=221] (9.35,8.2) to (10.03,9.13);
\draw[out=41, in=237] (10.03,9.13) to (11.04,10.4);
\draw[out=57, in=180] (11.04,10.4) to (11.7,10.9);
\draw[out=0, in=140] (11.7,10.9) to (12.3,10.7);
\draw[out=-40, in=172] (12.3,10.7) to (13.52,9.75);
\draw[out=-8, in=218] (13.52,9.75) to (15,10.59);
\draw[out=38, in=241] (15,10.59) to (15.95,11.67);
\node (A) at (4.5,5) {$\mathring r(\xi_n)$};
\node (B) at (5.1,4.55) {$\hat r(\xi_n)$};
\node (C) at (5.85,5) {$\mathring r(t_n)$};
\node (D) at (9.15,4.55) {$\hat r(t_n)$};
\node (E) at (9.9,5) {$\mathring r(\xi_{n+1})$};
\node (F) at (10.5,4.55) {$\hat r(\xi_{n+1})$};
\node (G) at (11.25,5) {$\mathring r(t_{n+1})$};
\node (H) at (13.7,4.55) {$\hat r(t_{n+1})$};
\node (I) at (14.8,5) {$\mathring r(\xi_{n+2})$};
\node (J) at (15.4,4.55) {$\hat r(\xi_{n+2})$};
\draw[loosely dotted] (4.5,5.3) to (4.5,12.2);
\draw[loosely dotted] (5.1,4.85) to (5.1,12.2);
\draw[loosely dotted] (5.85,5.3) to (5.85,12.2);
\draw[loosely dotted] (9.15,4.85) to (9.15,12.2);
\draw[loosely dotted] (9.9,5.3) to (9.9,12.2);
\draw[loosely dotted] (10.5,4.85) to (10.5,12.2);
\draw[loosely dotted] (11.25,5.3) to (11.25,12.2);
\draw[loosely dotted] (13.7,4.85) to (13.7,12.2);
\draw[loosely dotted] (14.8,5.3) to (14.8,12.2);
\draw[loosely dotted] (15.55,4.85) to (15.55,12.2);
\node[scale=0.8] (J) at (7,6.28) {$L(r)$};
\node[scale=1.15] (K) at (7.1,7.15) {$r^{\frac{\log l}{\log (pl)}}$};
\node[scale=0.8] (L) at (7,8.48) {$\IM q_H(ir)$};
\node[scale=0.8] (M) at (7,9.18) {$A(r)$};
\end{tikzpicture}
A sketch of the behaviour of $q_{H_{p,l}}$
\end{center}
\noindent Formal details are given in the following theorem as well as in \Cref{R2}.
\begin{theorem}
\label{T2}
Let $p,l \in (0,1)$. For the Hamiltonian $H=H_{p,l}$ from \Cref{T6} and for all sufficiently large $n \in \bb N$, we have
\begin{equation}
\label{A8}
\mathring r(\xi_n) < \hat r(\xi_n) <\mathring r(t_n) < \hat r(t_n) < \mathring r(\xi_{n+1}).
\end{equation}
On the intervals delimited by the terms in (\ref{A8}), the functions $L(r)$, $\IM q_H(ir)$, and $A(r)$ behave in the following way:
\begin{itemize}
\item[$(i)$] $\IM q_H(ir) \asymp A(r)$ uniformly for $r \in [\mathring r(\xi_n),\hat r(\xi_n)]$, $n \in \bb N$.
\item[$(ii)$] $\IM q_H(ir) \asymp A(r)$ uniformly for $r \in [\hat r(\xi_n),\mathring r(t_n)]$, $n \in \bb N$. \\[1ex]
Moreover, $L(\mathring r(t_n)) \ll A(\mathring r(t_n))$.
\item[$(iii)$] $L(r) \ll A(r)$ uniformly for $r \in [\mathring r(t_n), \hat r(t_n)]$, $n \in \bb N$. \\[1ex]
In addition, $L(\mathring r(t_n)) \ll \IM q_H(i \mathring r(t_n)) \asymp A(\mathring r(t_n))$ as well as $L(\hat r(t_n)) \asymp \IM q_H(i \hat r(t_n)) \ll A(\hat r(t_n))$.
\item[$(iv)$] $L(r) \asymp \IM q_H(ir) \ll A(r)$ uniformly for $r \in [\hat r(t_n),\mathring r(\xi_{n+1})]$, $n \in \bb N$.
\end{itemize}
\end{theorem}
\noindent The proof of this theorem involves some (partly tedious) computations, which are in part contained in the following lemma. \\
Throughout, the symbol $\approx$ denotes equality up to an additive term that is bounded in $n$ and $t$.
\begin{lemma}
\label{R4}
For the Hamiltonian $H_{p,l}$, the following formulae hold.
\newline
\begin{tabular}{|lcl|}
\hline$\log \mathring r(t_n)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}$ \begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\[1ex]
\hline
$\log \mathring r(\xi_n)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}+n \frac{\log (pl)}{2}$ \begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
\end{tabular}
$\mkern-12mu$
\begin{tabular}{|lcl|}
\hline
$\log \hat r(t_n)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log (pl)}{2}-n \frac{\log l}{2}$
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
$\log \hat r(\xi_n)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}+n \frac{\log (pl)}{2}$
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
\end{tabular}
\begin{tabular}{|lcll|}
\hline
$\log \mathring r(t)$ & $\mkern-10mu \approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(t)$, & $\mkern-10mu t \in [t_n,\xi_n]$.
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
$\log \mathring r(t)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\log (1-f(t))$, & $\mkern-10mu t \in [\xi_{n+1},t_n]$.
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
$\log \hat r(t)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(t)-\frac {\log (1-f(t))}{2}$, & $\mkern-10mu t \in [t_n,\xi_n]$.
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
$\log \hat r(t)$ & $\mkern-10mu\approx$ & $\mkern-10mu-n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\frac {\log (1-f(t))}{2}$, & $\mkern-10mu t \in [\xi_{n+1},t_n]$.
\begin{minipage}[c][9mm][t]{0.1mm}
\end{minipage}\\
\hline
\end{tabular}
\newline
\end{lemma}
\begin{proof}
First we calculate
\begin{align}
&\log(\mathring r(t_n))=-\frac 12 \log [(\omega_1 \omega_2)(t_n)] =\frac 12 \int_{t_n}^{t_1} (\alpha_1(s)+\alpha_2(s)) \mkern4mu\mathrm{d} s \nonumber\\
&= \sum_{k=1}^{n-1} \Big( \int_{t_{k+1}}^{\xi_{k+1}} \frac{-f'(s)}{f(s)} \mkern4mu\mathrm{d} s + \int_{\xi_{k+1}}^{t_k} \frac{f'(s)}{1-f(s)} \mkern4mu\mathrm{d} s \Big) \nonumber\\
&= \sum_{k=1}^{n-1} \Big(\log(1-p^{k+1})-(k+1)\log l + \log(1-l^{k+1}) -k\log p \Big) \nonumber\\
\label{A23}
&\approx -n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}.
\end{align}
This also leads to
\begin{align*}
\log \hat r(t_n) &=-\frac 12 \log (1-f(t_n)^2)+\log \mathring r(t_n) \approx -\frac 12 \log (1-f(t_n))+\log \mathring r(t_n) \\
&\approx -n^2 \frac{\log (pl)}{2}-n \frac{\log l}{2}.
\end{align*}
If $t \in [t_n,\xi_n]$, then
\begin{align*}
\log \mathring r(t)=\log \mathring r(t_n)-\int_{t_n}^t \frac{-f'(s)}{f(s)} \mkern4mu\mathrm{d} s \approx -n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(t).
\end{align*}
If $t \in [\xi_{n+1},t_n]$, then
\begin{align*}
&\log \mathring r(t)=\log \mathring r(t_n)+\int_t^{t_n} \frac{f'(s)}{1-f(s)} \mkern4mu\mathrm{d} s \\
&\approx -n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\log (1-f(t)).
\end{align*}
By adding $-\frac 12 \log (1-f(t)^2) \approx -\frac 12 \log (1-f(t))$, the analogous formula for $\hat r(t)$ follows. Lastly,
\begin{align*}
\log \mathring r(\xi_n) &\approx -n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(\xi_n) \\
&\approx -n^2 \frac{\log(pl)}{2}+n \frac{\log (pl)}{2}
\end{align*}
and
\begin{align*}
\log \hat r(\xi_n) &=-\frac 12 \log (1-f(\xi_n)^2)+\log \mathring r(\xi_n) \approx \log \mathring r(\xi_n).
\end{align*}
\end{proof}
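The quadratic asymptotics for $\log \mathring r(t_n)$ can be checked numerically: the exact sum from the computation (\ref{A23}) differs from $-n^2 \frac{\log(pl)}{2}-n \frac{\log (\frac lp)}{2}$ by a term that stays bounded in $n$ (sample values $p=0.3$, $l=0.6$):

```python
import math

def log_r_tn_exact(n, p, l):
    # sum_{k=1}^{n-1} [log(1-p^{k+1}) - (k+1) log l + log(1-l^{k+1}) - k log p]
    return sum(math.log(1 - p**(k + 1)) - (k + 1) * math.log(l)
               + math.log(1 - l**(k + 1)) - k * math.log(p)
               for k in range(1, n))

def log_r_tn_approx(n, p, l):
    # -n^2 log(pl)/2 - n log(l/p)/2
    return -n**2 * math.log(p * l) / 2 - n * math.log(l / p) / 2

p, l = 0.3, 0.6  # sample values in (0, 1)
diffs = [log_r_tn_exact(n, p, l) - log_r_tn_approx(n, p, l) for n in (10, 50, 200)]
print(diffs)  # bounded in n, and in fact convergent as n grows
```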
\begin{proof}[Proof of \Cref{T2}]
It follows from \Cref{R4} that $\hat r(\xi_n)<\mathring r(t_n)$ and $\hat r(t_n) < \mathring r(\xi_{n+1})$ for large enough $n$. The remaining two inequalities in (\ref{A8}) follow from the basic fact that $\mathring r(t) < \hat r(t)$ for all $t \in (0,\infty)$. \\
We will now prove $(i)$--$(iv)$ in reverse order.
\\[1.7ex]
\underline{$(iv)$:} In this case $\xi_{n+1} \leq \mathring t(r) \leq t_n$ and $\xi_{n+1} \leq \hat t(r) \leq t_n$. By \Cref{R4},
\begin{align*}
&-n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\frac 12 \log \big(1-f(\hat t(r))\big) \approx \log \hat r(\hat t(r))=\log r \\
&=\log \mathring r(\mathring t(r)) \approx -n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\log \big(1-f(\mathring t(r))\big).
\end{align*}
Hence,
\begin{align*}
\frac{\IM q_H(ir)}{A(r)} \asymp \sqrt{1-f \big(\hat t(r)\big)^2} \asymp 1-f \big(\mathring t(r)\big)^2 =\frac{L(r)}{A(r)}.
\end{align*}
In addition,
\[
\frac{L \big( \mathring r(\xi_{n+1}) \big)}{A \big( \mathring r(\xi_{n+1}) \big)} \asymp 1-f(\xi_{n+1})^2 \asymp 1,
\]
while
\[
\frac{L \big(\hat r(t_n) \big)}{A \big(\hat r(t_n) \big)} \asymp 1-f \big(\mathring t(\hat r(t_n)) \big)^2 \asymp \sqrt{1-f(t_n)^2} \asymp p^{\frac n2} \ll 1.
\]
\\[1.7ex]
\underline{$(iii)$:} $\xi_{n+1} \leq \mathring t(r) \leq t_n$ and $t_n \leq \hat t(r) \leq \xi_n$. Thus
\begin{align*}
&-n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(\hat t(r))-\frac 12 \log \big(1-f(\hat t(r))\big) \approx \log \hat r(\hat t(r)) \\
&=\log \mathring r(\mathring t(r)) \approx -n^2 \frac{\log(pl)}{2}-n \frac{\log (pl)}{2}+\log \big(1-f(\mathring t(r))\big).
\end{align*}
Consequently,
\[
\frac 12 \log \big(1-f(\hat t(r))\big) \approx n \log p + \log f(\hat t(r))-\log \big(1-f(\mathring t(r))\big),
\]
which implies
\[
\sqrt{1-f(\hat t(r))} \asymp p^n \frac{f(\hat t(r))}{1-f(\mathring t(r))}.
\]
Let us check that the term $f(\hat t(r))$ can be neglected. Using that $f(\mathring t(r)) \leq 1-p^n$, we get
\[
\sqrt{1-f(\hat t(r))} \lesssim f(\hat t(r))
\]
which is only possible if $f(\hat t(r))$ stays away from $0$. As $f(\hat t(r)) <1$, this means that $f(\hat t(r)) \asymp 1$, leading to
\[
\frac{\IM q_H(ir)}{A(r)} \asymp \sqrt{1-f(\hat t(r))} \asymp \frac{p^n}{1-f(\mathring t(r))}.
\]
Hence, $\IM q_H(i\mathring r(t_n)) \asymp A(\mathring r(t_n))$. Looking back at case $(iv)$, we know that $\IM q_H(i\hat r(t_n)) \asymp L(\hat r(t_n)) \ll A(\hat r(t_n))$. In particular, since $\frac{L(r)}{A(r)}=1-f(\mathring t(r))^2$ is increasing for $r$ in $[\mathring r(t_n),\hat r(t_n)]$, we have $L(r) \ll A(r)$ uniformly on this interval.\\[1.7ex]
\underline{$(ii)$:} $t_n \leq \mathring t(r) \leq \xi_n$ and $t_n \leq \hat t(r) \leq \xi_n$, leading to
\begin{align*}
&-n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(\hat t(r))-\frac 12 \log \big(1-f(\hat t(r))\big) \approx \log \hat r(\hat t(r)) \\
&=\log \mathring r(\mathring t(r)) \approx -n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(\mathring t(r)).
\end{align*}
Hence
\begin{align*}
\sqrt{1-f(\hat t(r))} \asymp \frac{f(\hat t(r))}{f(\mathring t(r))} > f(\hat t(r)).
\end{align*}
In particular, $1-f(\hat t(r))$ stays away from $0$, which means that
\[
\frac{\IM q_H(ir)}{A(r)} \asymp \sqrt{1-f(\hat t(r))} \asymp 1.
\]
In other words, $\IM q_H(ir) \asymp A(r)$ uniformly for $r \in [\hat r(\xi_n), \mathring r(t_n)]$. As we already know, $L(\mathring r(t_n)) \ll \IM q_H(i\mathring r(t_n)) \asymp A(\mathring r(t_n))$.\\[1.7ex]
\underline{$(i)$:} $t_n \leq \mathring t(r) \leq \xi_n$ and $\xi_n \leq \hat t(r) \leq t_{n-1}$. In this case
\begin{align*}
&-n^2 \frac{\log(pl)}{2}+n \frac{\log (pl)}{2}+\frac 12 \log \big(1-f(\hat t(r))\big) \approx \log \hat r(\hat t(r))=\log \mathring r(\mathring t(r)) \\
&\approx -n^2 \frac{\log(pl)}{2}-n \frac{\log \big(\frac lp \big)}{2}+\log f(\mathring t(r)).
\end{align*}
Taking into account that $f(\mathring t(r)) \geq l^n$ by definition, it follows that
\[
\frac{\IM q_H(ir)}{A(r)} \asymp \sqrt{1-f(\hat t(r))} \asymp \frac{f(\mathring t(r))}{l^n} \asymp 1.
\]
Therefore, $\IM q_H(ir) \asymp A(r)$ uniformly for $r \in [\mathring r(\xi_n),\hat r(\xi_n)]$. At the left end of this interval, we even have $L(\mathring r(\xi_n)) \asymp A(\mathring r(\xi_n))$ by case $(iv)$.
\end{proof}
\noindent Before we state our next result, we note that by definition of $H_{p,l}$,
\begin{equation}
\label{A13}
\liminf_{t \to 0} \frac{\det \Omega(t)}{(\omega_1 \omega_2)(t)}=\liminf_{t \to 0} \big(1-f(t)^2 \big)=0.
\end{equation}
In view of (\ref{A9}), we have $\liminf_{r \to \infty} \frac{\IM q_{H_{p,l}}(ir)}{|q_{H_{p,l}}(ir)|}=0$ and hence $\IM q_{H_{p,l}}(ir) \not\asymp |q_{H_{p,l}}(ir)|$. \\
Nevertheless, the following lemma shows that $|q_{H_{p,l}}(ir)|$ grows faster than a power of $r$. Recalling \Cref{T7}, this means that $|q(ir)| \gtrsim r^{\delta}$ as $r \to \infty$ is not a sufficient condition for $\IM q(ir) \asymp |q(ir)|$ as $r \to \infty$. Instead, we see that $|q(ir)|$ being positively increasing really means not only that $|q(ir)|$ grows sufficiently fast, but also that it does so without oscillating too much.
\begin{lemma}
\label{R2}
Let $\delta := \frac{\log l}{\log (pl)} \in (0,1)$. Then
\begin{itemize}
\item[$\rhd$] $|q_{H_{p,l}}(ir)| \gtrsim r^{\delta}$, $r \to \infty$,
\item[$\rhd$] $|q_{H_{p,l}}(i\mathring r(\xi_n))| \asymp \mathring r(\xi_n)^{\delta}$.
\end{itemize}
\end{lemma}
\begin{proof}
We start the proof with calculating, for $t \in [t_n,\xi_n]$,
\begin{align*}
&\log \sqrt{ \frac{\omega_1(t)}{\omega_2(t)} }=\frac 12 \log \Big(\frac{\omega_1(t)}{\omega_2(t)} \Big)=\sum_{k=1}^{n-1} \int_{t_{k+1}}^{\xi_{k+1}} \frac{-f'(s)}{f(s)} \mkern4mu\mathrm{d} s + \int_{t_n}^t \frac{f'(s)}{f(s)} \mkern4mu\mathrm{d} s\\
&=\sum_{k=1}^{n-1} \Big(\log (1-p^{k+1})-(k+1)\log l \Big)+\log f(t)-\log (1-p^n) \\
&\approx -(n^2+n)\frac{\log l}{2}+\log f(t).
\end{align*}
Now we use our formula for $\log \mathring r(t)$:
\begin{align}
&\log \sqrt{\frac{\omega_1(t)}{\omega_2(t)} } \nonumber \\
&\approx \frac{\log l}{\log(pl)}\log \mathring r(t)+\frac 12 \bigg(\frac{\log(l)\log(\frac lp)}{\log(pl)} -\log l \bigg)n + \bigg(1-\frac{\log l}{\log(pl)} \bigg)\log f(t)\nonumber\\
\label{A22}
&=\frac{\log l}{\log(pl)}\log \mathring r(t)+\frac{\log p}{\log(pl)}\big(\log f(t)-n \log l\big), \quad t \in [t_n,\xi_n].
\end{align}
Since $f$ was assumed to be monotone decreasing on $[t_n,\xi_n]$, and $\log f(\xi_n)=n \log l$,
\[
\log \sqrt{\frac{\omega_1(t)}{\omega_2(t)} } \gtrapprox \frac{\log l}{\log(pl)}\log \mathring r(t)=\delta \log \mathring r(t),
\]
where $\gtrapprox$ indicates that the inequality holds up to an additive term that is bounded in $n$ and $t$. Therefore
\[
|q_{H_{p,l}}(i \mathring r(t))| \asymp \sqrt{\frac{\omega_1(t)}{\omega_2(t)}} \gtrsim \mathring r(t)^{\delta}, \quad t \in [t_n,\xi_n].
\]
Observing that $\frac{\omega_1}{\omega_2}$ is constant on $[\xi_{n+1},t_n]$ (since $\alpha_1-\alpha_2=0$ there), we obtain this estimate also for $t \in [\xi_{n+1},t_n]$:
\[
|q_{H_{p,l}}(i\mathring r(t))| \asymp \sqrt{\frac{\omega_1(t)}{\omega_2(t)}}=\sqrt{\frac{\omega_1(\xi_{n+1})}{\omega_2(\xi_{n+1})}} \gtrsim \mathring r(\xi_{n+1})^{\delta} \geq \mathring r(t)^{\delta}.
\]
Finally, setting $t=\xi_n$ in (\ref{A22}) yields $|q_{H_{p,l}}(i\mathring r(\xi_n))| \asymp \mathring r(\xi_n)^{\delta}$.
\end{proof}
\begin{example}
\label{R3}
Let $H$ be as in \Cref{T6}, but $f(\xi_n)=1-l^{n-1}$ instead, where $l > \sqrt{p}$. Similarly to \Cref{T2}, one can show that
\[
L(\hat r(t_n)) \asymp \IM q_H(i \hat r(t_n)) \ll A(\hat r(t_n)).
\]
However, for our new Hamiltonian,
\[
\lim_{t \to 0} \frac{\det \Omega(t)}{(\omega_1 \omega_2)(t)} = \limsup_{t \to 0} \frac{\det \Omega(t)}{(\omega_1 \omega_2)(t)} = \limsup_{t \to 0} \big( 1-f(t)^2 \big) = 0
\]
as opposed to (\ref{A13}).
\end{example}
\section{Reformulation for Krein strings}
\label{S6}
Recall that a \textit{Krein string} is a pair $S[L,\mathfrak{m}]$ consisting of a number $L \in (0,\infty ]$ and a nonnegative Borel measure $\mathfrak{m}$ on $[0,L]$, such that $\mathfrak{m}([0,t])$ is finite for every $t \in [0,L)$, and $\mathfrak{m}(\{L\})=0$. To this pair we associate the equation
\begin{equation}
y_+'(x)+z\int_{[0,x]} y(t)\mkern4mu\mathrm{d} \mathfrak{m}(t)=0, \quad \quad x \in [0,L),
\end{equation}
where $y_+'$ denotes the right-hand derivative of $y$, and $z$ is a complex spectral parameter. \\
For each string, we can construct a function $q_S$ called the \textit{principal Titchmarsh-Weyl coefficient} of the string (\cite{langer.winkler:1998} following \cite{kac.krein:1968}). This function belongs to the Stieltjes class, i.e., it is analytic on $\bb C \setminus [0,\infty)$, its imaginary part is nonnegative on $\bb C_+$, and its values on $(-\infty ,0)$ are positive. The correspondence between Krein strings and functions of Stieltjes class is bijective, as was shown by M.~G.~Krein. \newline
\noindent \Cref{A41} below is the reformulation of \Cref{T1} for the Krein string case.
\begin{theorem}
\label{A41}
Let $S[L,\mathfrak{m}]$ be a Krein string and set
\begin{equation}
\delta (t) := \bigg(\int_{[0,t)} \xi^2 \mkern4mu\mathrm{d} \mathfrak{m}(\xi) \bigg)\cdot \bigg(\int_{[0,t)} \mkern4mu\mathrm{d} \mathfrak{m}(\xi) \bigg)-\bigg(\int_{[0,t)} \xi \mkern4mu\mathrm{d} \mathfrak{m}(\xi) \bigg)^2
\end{equation}
for $t \in [0,L)$. Let
\[
\hat \tau (r):=\inf \big\{t>0 \, \big| \, \frac{1}{r^2} \leq \delta (t) \big\}, \quad \quad r \in (0,\infty).
\]
We set
\begin{align}
f(r) :=\mathfrak{m}([0,\hat \tau (r)))+ \mathfrak{m}(\{\hat \tau (r)\}) \frac{\frac{1}{r^2}-\delta (\hat \tau (r))}{\delta (\hat \tau (r)+)-\delta (\hat \tau (r))}
\end{align}
if $\delta$ is discontinuous at $\hat \tau(r)$, and $f(r):=\mathfrak{m}([0,\hat \tau (r)))$ otherwise. Then
\begin{align}
\IM q_S(ir) \asymp \frac{1}{rf(r)}, \quad \quad r \in (0,\infty ),
\end{align}
with constants independent of the string.
\end{theorem}
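For a string consisting of finitely many point masses, the quantities $\delta(t)$ and $\hat \tau(r)$ of \Cref{A41} are straightforward to evaluate. The following sketch (the helper names are ours) does this for masses $m_j$ placed at points $x_j$, listed in increasing order:

```python
def delta(t, xs, ms):
    """delta(t) for point masses ms[j] at xs[j]; only mass in [0, t) counts."""
    pts = [(x, w) for x, w in zip(xs, ms) if x < t]
    s0 = sum(w for _, w in pts)          # int dm
    s1 = sum(x * w for x, w in pts)      # int xi dm
    s2 = sum(x * x * w for x, w in pts)  # int xi^2 dm
    return s2 * s0 - s1 * s1             # nonnegative by Cauchy-Schwarz

def tau_hat(r, xs, ms):
    """tau_hat(r) = inf{t > 0 : 1/r^2 <= delta(t)}; delta jumps only at the x_j."""
    s0 = s1 = s2 = 0.0
    for x, w in zip(xs, ms):
        s0, s1, s2 = s0 + w, s1 + x * w, s2 + x * x * w
        if 1 / r**2 <= s2 * s0 - s1 * s1:  # value of delta just to the right of x
            return x
    return float('inf')

xs, ms = [1.0, 2.0], [1.0, 1.0]  # two unit masses
print(delta(3.0, xs, ms))         # 5*2 - 3**2 = 1.0
print(tau_hat(2.0, xs, ms))       # 2.0: delta first reaches 1/4 just past t = 2
```

Note that $\delta$ vanishes as long as at most one mass has been passed, reflecting that it only becomes positive once the measure restricted to $[0,t)$ is not concentrated in a single point.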
\noindent Before proving \Cref{A41}, we need to introduce the concept of dual strings as well as a Hamiltonian associated to a string. Writing
\[
m(t) := \mathfrak{m}([0,t)), \quad \quad t \in [0,L)
\]
we can define the dual string $S[\hat L , \hat{\mathfrak{m}}]$ of $S[L,\mathfrak{m}]$ by setting
\[
\hat L :=\left\{ \begin{array}{ll}
m(L) & \text{if } L+m(L)=\infty ,\\
\infty & \text{else}
\end{array} \right.
\]
and
\[
\hat m (\xi):=\inf \{t >0 \, | \, \xi \leq m (t)\}.
\]
The function $\hat m$ is increasing and left-continuous and thus gives rise to a nonnegative Borel measure $\hat{\mathfrak{m}}$. \newline
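Concretely, when $\mathfrak{m}$ consists of point masses, $m$ is a left-continuous step function and the generalized inverse $\hat m$ can be computed directly (the helper names below are ours):

```python
def m_of(t, xs, ms):
    # m(t) = mathfrak{m}([0, t)) for point masses ms[j] at the points xs[j]
    return sum(w for x, w in zip(xs, ms) if x < t)

def m_hat(xi, xs, ms):
    # hat m(xi) = inf{t > 0 | xi <= m(t)}; assumes xs sorted increasingly
    cum = 0.0
    for x, w in zip(xs, ms):
        cum += w
        if xi <= cum:
            return x  # m(t) = cum for t just to the right of x, so the inf is x
    return float('inf')

xs, ms = [0.5, 1.5], [2.0, 3.0]
print(m_of(1.0, xs, ms))                                   # 2.0
print([m_hat(xi, xs, ms) for xi in (1.0, 2.0, 2.5, 5.0)])  # [0.5, 0.5, 1.5, 1.5]
```

Note that $\hat m(2.0)=0.5$ while $\hat m(2.5)=1.5$, exhibiting the left-continuity of $\hat m$ at the jump.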
\noindent The Hamiltonian defined by
\begin{equation}
\label{A39}
H(t) := \left\{ \begin{array}{ll} \begin{pmatrix}
\hat m(t)^2 & \hat m(t) \\
\hat m(t) & 1
\end{pmatrix} & \text{if } t \in [0,\hat L], \\[3ex]
\begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix} & \text{if } \hat L + \int_0^{\hat L} \hat m(t)^2 \mkern4mu\mathrm{d} t <\infty ,\,\, \hat L<t<\infty
\end{array} \right.
\end{equation}
then satisfies $q_S=q_H$, see e.g. \cite{kaltenbaeck.winkler.woracek:2007}.
\begin{proof}[Proof of \Cref{A41}]
In view of \Cref{T1} and the fact that $q_S=q_H$ for the Hamiltonian $H$ defined in (\ref{A39}), our task is to express $\hat t_H(r)$ in terms of the string. If $\delta (\hat \tau (r)) = \frac{1}{r^2}$, this is easy because of \cite[Corollary 3.4]{kaltenbaeck.winkler.woracek:2007} giving
\begin{align*}
\det \Omega_H(m (\hat \tau (r))) = \delta (\hat \tau (r))=\frac{1}{r^2}
\end{align*}
and hence $\hat t_H(r)=m (\hat \tau (r))$. \\
Otherwise, we have $\delta (\hat \tau (r))<\frac{1}{r^2}$ and $\delta (\hat \tau (r)+) \geq \frac{1}{r^2}$. Using again \cite[Corollary 3.4]{kaltenbaeck.winkler.woracek:2007}, we have
\begin{equation}
\label{A40}
\det \Omega_H(m (\hat \tau (r)))= \delta (\hat \tau (r)) < \frac{1}{r^2}, \quad \quad \det \Omega_H(m (\hat \tau (r)+))= \delta (\hat \tau (r)+) \geq \frac{1}{r^2}
\end{equation}
which tells us that $\hat t_H(r) \in \big(m (\hat \tau (r)),m (\hat \tau (r)+) \big]$. By \cite[Lemma 3.1]{kaltenbaeck.winkler.woracek:2007}, $\hat m$ is constant on this interval. Therefore, for $t \in \big(m (\hat \tau (r)),m (\hat \tau (r)+) \big]$,
\begin{align*}
\det \Omega_H(t) &=\bigg(\int_0^{m(\hat \tau (r))} \hat m (x)^2 \mkern4mu\mathrm{d} x +\big(t-m(\hat \tau (r))\big)\hat m (t)^2 \bigg)\cdot t \\
&-\bigg(\int_0^{m(\hat \tau (r))} \hat m (x) \mkern4mu\mathrm{d} x +\big(t-m(\hat \tau (r))\big)\hat m (t) \bigg)^2 = c_1(r)t+c_2(r)
\end{align*}
for some constants $c_1(r),c_2(r)$. Using (\ref{A40}), this leads to
\[
\det \Omega_H(t)=\delta (\hat \tau (r)) + \frac{t-m(\hat \tau (r))}{m(\hat \tau (r)+)-m(\hat \tau (r))} \big(\delta (\hat \tau (r)+)-\delta (\hat \tau (r))\big).
\]
If we equate this to $\frac{1}{r^2}$, we find that
\[
\hat t_H(r)=m(\hat \tau (r))+ \big(m(\hat \tau (r)+)-m(\hat \tau (r)) \big) \frac{\frac{1}{r^2}-\delta (\hat \tau (r))}{\delta (\hat \tau (r)+)-\delta (\hat \tau (r))}=f(r).
\]
Since $\omega_{H;2}(t)=\int_0^t h_2(s) \mkern4mu\mathrm{d} s =t$, \Cref{T1} shows
\[
\IM q_S(ir)=\IM q_H(ir) \asymp \frac{1}{r\hat t_H(r)}=\frac{1}{rf(r)}.
\]
\end{proof}
\appendix
\appendixpage
\section{A construction method for Hamiltonians with prescribed angle of $\boldsymbol{q_H}$}
\label{APP}
\setcounter{lemma}{0}
Let $H$ be a Hamiltonian on $(a,b)$. Assume for simplicity that $\hat a=a$, i.e., $a$ is not the left endpoint of an $H$-indivisible interval. As discussed at the beginning of \Cref{S3}, the behaviour of
\[
\frac{\det \Omega (t)}{(\omega_1\omega_2)(t)}=1 - \frac{\omega_3(t)^2}{(\omega_1\omega_2)(t)} >0
\]
towards the left endpoint $a$ corresponds to the angle of $q_H(ir)$ for $r \to \infty$. It is thus desirable to be able to construct examples of Hamiltonians with prescribed $\frac{\omega_3(t)}{\sqrt{(\omega_1\omega_2)(t)}}$, which is what we did in \Cref{T6}. We give now a general version of this idea. \\
The following result is formulated for Hamiltonians in limit circle case, making the statement cleaner. When we made use of this construction method in \Cref{T6}, we obtained a Hamiltonian in limit point case by simply appending an infinitely long indivisible interval.
\begin{proposition}
\label{T3}
Let $f$ be locally absolutely continuous on $(a,b]$ and such that $f(t) \in (-1,1)$ for all $t \in (a,b]$. Then there is a Hamiltonian, in limit circle case at $b$, with the properties
\begin{itemize}
\item[(i)] $a=\hat a$ is not the left endpoint of an $H$-indivisible interval, and
\item[(ii)] $f(t)=\frac{\omega_3(t)}{\sqrt{(\omega_1\omega_2)(t)}}$ for all $t \in (a,b]$.
\end{itemize}
In addition, let
\[
\Delta(f):=\frac{2|f'|}{1-\sgn (f')f}
\]
which is in $L_{loc}^1((a,b])$.
Then all possible choices for $(\omega_1\omega_2)(t)$ are given by functions of the form
\begin{align*}
\exp \Big(c-\int_t^b g(s) \mkern4mu\mathrm{d} s \Big)
\end{align*}
where $c \in \bb R$ and $g \in L_{loc}^1((a,b]) \setminus L^1((a,b])$ with $g(t) \geq \Delta(f)(t)$ and $g(t)>0$ for almost every $t \in (a,b]$.
\end{proposition}
\begin{proof}
If $H$ is given and such that $(i),(ii)$ hold, then clearly $f$ is locally absolutely continuous and takes values in $(-1,1)$. \\
Let $f$ be as in the statement. Then clearly $f' \in L_{loc}^1((a,b])$. Also, the denominator of $\Delta (f)$ is locally bounded below by a positive number, and hence $\Delta(f) \in L_{loc}^1((a,b])$. We check the conditions that $\omega_1(t),\omega_2(t)$ must satisfy in order that they, together with $\omega_3(t):=\sqrt{(\omega_1\omega_2)(t)}f(t)$, give rise to a Hamiltonian through $h_i(t):=\omega_i'(t)$, $i=1,2,3$. Clearly, $\omega_1,\omega_2$ have to be increasing, absolutely continuous on $[a,b]$ and satisfy $\omega_1(a)=\omega_2(a)=0$. Moreover, we want
\[
(h_1h_2)(t) \geq h_3(t)^2=\Big(\sqrt{(\omega_1\omega_2)(t)}f'(t)+\frac{h_1(t)\omega_2(t)+\omega_1(t)h_2(t)}{2 \sqrt{(\omega_1\omega_2)(t)}}f(t) \Big)^2.
\]
This is equivalent to
\[
\frac{(h_1h_2)(t)}{(\omega_1\omega_2)(t)} \geq \bigg(f'(t)+\frac 12 \Big(\frac{h_1(t)}{\omega_1(t)}+\frac{h_2(t)}{\omega_2(t)} \Big) f(t)\bigg)^2.
\]
Setting
\[
\alpha_i(t):=\frac{h_i(t)}{\omega_i(t)}, \quad i=1,2, \quad \quad g:=\alpha_1+\alpha_2 \in L_{loc}^1((a,b]),
\]
the inequality takes the form
\[
\alpha_i (g-\alpha_i) \geq \Big(f'+\frac 12 gf \Big)^2
\]
which is equivalent to
\begin{align}
\label{A7}
\alpha_i \in \Bigg[\frac g2-\sqrt{\frac{g^2}{4}-\Big(f'+\frac 12 gf \Big)^2},\frac g2+\sqrt{\frac{g^2}{4}-\Big(f'+\frac 12 gf \Big)^2}\Bigg].
\end{align}
In particular,
\begin{align*}
&\frac{g^2}{4}-\Big(f'+\frac 12gf \Big)^2 \geq 0 \, \Longleftrightarrow \, \frac g2 \geq \Big|f'+\frac 12gf \Big| \\
&\, \Longleftrightarrow g \geq \frac{2f'}{1-f} \text{ and } g \geq \frac{-2f'}{1+f} \\
&\, \Longleftrightarrow g \geq \frac{2|f'|}{1-\sgn(f')f} =\Delta(f).
\end{align*}
Since $\Delta(f) \in L_{loc}^1((a,b])$, we can find $g \in L_{loc}^1((a,b])$, $g(t) \geq \Delta (f)(t)$ a.e. on $(a,b]$, and additionally, $g \not\in L^1((a,b])$.
Choose measurable functions $\alpha_1,\alpha_2$ such that
\begin{itemize}
\item[$\rhd$] $\alpha_1+\alpha_2=g$,
\item[$\rhd$] (\ref{A7}) holds for $\alpha_1$ at almost all $t \in (a,b)$ (and hence for $\alpha_2$), and
\item[$\rhd$] $\alpha_1,\alpha_2 \not\in L^1((a,b])$.
\end{itemize}
Note that $\alpha_1$ and $\alpha_2$ belong to $L_{loc}^1((a,b])$ since $g$ does, and that such a choice is possible because one can always take $\alpha_1=\alpha_2=\frac g2$. \\
From the construction it is clear that for a Hamiltonian $H$ with $\frac{d}{dt} [\log \omega_i(t)]=\alpha_i(t)$, $i=1,2$, there is $c \in \bb R$ such that
\[
\omega_i(t)=\exp \Big(c-\int_t^b \alpha_i(s) \mkern4mu\mathrm{d} s \Big), \quad \quad i=1,2.
\]
\end{proof}
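To illustrate the construction in the proof numerically: take, say, $f(t)=t/2$ on $(a,b]=(0,1]$, choose $g=\Delta(f)+1 \geq \Delta(f)$, and put $\alpha_1=\alpha_2=g/2$. The resulting entries then satisfy $h_1h_2 \geq h_3^2$, as the following sketch (our own, with a midpoint rule for the integrals) confirms at sample points:

```python
import math

f  = lambda t: t / 2               # sample choice with |f| < 1 on (0, 1]
fp = lambda t: 0.5                 # f'; positive, so Delta(f) = 2 f' / (1 - f)
Delta = lambda t: 2 * fp(t) / (1 - f(t))
g     = lambda t: Delta(t) + 1.0   # any g >= Delta(f) works
alpha = lambda t: g(t) / 2         # alpha_1 = alpha_2 = g/2

def omega(t, n=10000):
    """omega_1(t) = omega_2(t) = exp(-int_t^1 alpha(s) ds), midpoint rule."""
    h = (1 - t) / n
    return math.exp(-h * sum(alpha(t + (k + 0.5) * h) for k in range(n)))

for t in (0.1, 0.5, 0.9):
    w = omega(t)
    h1 = h2 = alpha(t) * w              # h_i = omega_i' = alpha_i * omega_i
    h3 = w * (alpha(t) * f(t) + fp(t))  # h_3 = omega_3' with omega_3 = omega_1 f
    print(t, h1 * h2 - h3**2 >= 0)      # True: H(t) is positive semidefinite
```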
\section{Calculations for \Cref{A24}}
\label{APPB}
Let $H$ be the Hamiltonian from \Cref{A24},
\begin{align*}
H(t)=
t^{\alpha -1}\left(\begin{matrix}
|\log t|^{\beta_1} & |\log t|^{\beta_3} \\
|\log t|^{\beta_3} & |\log t|^{\beta_2} \\
\end{matrix} \right), \quad \quad t \in (0,\infty),
\end{align*}
where $\alpha > 0$, $\beta_1, \beta_2 \in \bb R$ such that $\beta_1 \neq \beta_2$, and $\beta_3 := \frac{\beta_1 + \beta_2}{2}$. We carry out the calculations to justify the claimed asymptotics from the example. They were communicated by Matthias Langer. \\
In order to calculate $\mathring t(r)$ and $\hat t(r)$, two lemmas are needed.
\begin{lemma}
\label{approx_inv}
Let $f: (0,\varepsilon) \to (0,\infty)$ be increasing and $f(t) \sim ct^a |\log t|^b$ as $t \to 0$, for $a>0$, $c>0$, $b \in \bb R$. Then
\[
f^{-1}(s) \sim \Big(c^{-1}a^{b}s|\log s|^{-b} \Big)^{\frac 1a}, \quad s \to 0.
\]
\end{lemma}
\begin{proof}
We have
\begin{align}
\label{sim}
\lim_{t \to 0} \,\frac{f(t)}{t^a|\log t|^b} = c.
\end{align}
Therefore,
\[
\lim_{t \to 0} \Big[\log f(t)-a\log t-b \log |\log t| \Big]= \log c
\]
and further
\[
\lim_{t \to 0} \Big[\frac{\log f(t)}{\log t} - a \Big]= \lim_{t \to 0} \Big[\frac{\log f(t)}{\log t} - a - b \frac{\log |\log t|}{\log t} \Big]= 0.
\]
In other words, $|\log t| \sim \frac 1a |\log f(t)|$. At the same time, by (\ref{sim}),
\[
t \sim \Big(c^{-1} f(t) |\log t|^{-b} \Big)^{\frac 1a} \sim \Big(c^{-1} f(t) [\frac 1a |\log f(t)|]^{-b} \Big)^{\frac 1a}
\]
which implies the assertion.
\end{proof}
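A quick numerical illustration of \Cref{approx_inv} (with the sample values $c=2$, $a=2$, $b=1$): plugging $s=f(t)$ into the asymptotic expression for $f^{-1}$ should approximately return $t$, with the ratio tending to $1$ as $t \to 0$.

```python
import math

c, a, b = 2.0, 2.0, 1.0
f = lambda t: c * t**a * abs(math.log(t))**b                       # model function
finv = lambda s: (a**b / c * s * abs(math.log(s))**(-b))**(1 / a)  # asymptotic inverse

ratios = [finv(f(t)) / t for t in (1e-6, 1e-12, 1e-24)]
print(ratios)  # tends to 1, slowly (error of order log|log t| / |log t|)
```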
\begin{lemma}
\label{int_asy}
Let $a>-1$ and $b \in \bb R$. Then
\begin{align*}
\int_0^t &s^a (-\log s)^b \mkern4mu\mathrm{d} s = \frac{1}{a+1}t^{a+1} (-\log t)^b \\
&\cdot \Big[1+\frac{b}{a+1}(-\log t)^{-1} + \frac{b(b-1)}{(a+1)^2}(-\log t)^{-2}+{\rm O} \big((-\log t)^{-3} \big) \Big].
\end{align*}
\end{lemma}
\begin{proof}
\begin{align}
&\int_0^t s^a (-\log s)^b \mkern4mu\mathrm{d} s=\frac{1}{a+1}s^{a+1} (-\log s)^b \Big|_0^t + \frac{1}{a+1} \int_0^t s^a b (-\log s)^{b-1} \mkern4mu\mathrm{d} s \nonumber \\
\label{part_int_sa_logs_b}
&= \frac{1}{a+1}t^{a+1} (-\log t)^b + \frac{b}{a+1} \int_0^t s^a (-\log s)^{b-1} \mkern4mu\mathrm{d} s.
\end{align}
Using (\ref{part_int_sa_logs_b}) two more times:
\begin{align*}
&= \frac{1}{a+1}t^{a+1} (-\log t)^b \\
&\quad+ \frac{b}{a+1} \Big[\frac{1}{a+1}t^{a+1} (-\log t)^{b-1} + \frac{b-1}{a+1}\int_0^t s^a (-\log s)^{b-2} \mkern4mu\mathrm{d} s \Big] \\
&=\frac{1}{a+1}t^{a+1} (-\log t)^b \Big[1+\frac{b}{a+1}(-\log t)^{-1}+\frac{b(b-1)}{(a+1)^2}(-\log t)^{-2}\Big] \\
&\quad+c(a,b)\int_0^t s^a (-\log s)^{b-3} \mkern4mu\mathrm{d} s.
\end{align*}
The assertion follows using Karamata's Theorem \cite[Prop. 1.5.8 and 1.5.9a]{bingham.goldie.teugels:1989}.
\end{proof}
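The expansion can be cross-checked numerically. For integer $b=2$ the remainder carries a factor $b(b-1)(b-2)=0$, so the three displayed terms already give the exact value, and a quadrature of the integral should reproduce it to high accuracy (sample values $a=1$, $b=2$, $t=0.01$):

```python
import math

a, b, t = 1, 2, 0.01
L = -math.log(t)

# Composite Simpson rule for int_0^t s^a (-log s)^b ds (integrand -> 0 at s = 0)
n = 20000
h = t / n
f = lambda s: s**a * (-math.log(s))**b if s > 0 else 0.0
quad = (h / 3) * (f(0) + f(t)
                  + 4 * sum(f((2 * k - 1) * h) for k in range(1, n // 2 + 1))
                  + 2 * sum(f(2 * k * h) for k in range(1, n // 2)))

# Three-term expansion; exact for b = 2 since the remainder vanishes
expansion = (t**(a + 1) / (a + 1)) * L**b * (
    1 + b / (a + 1) / L + b * (b - 1) / (a + 1)**2 / L**2)

print(quad, expansion)  # agree to high accuracy
```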
We are now in position to determine $L(r)$, $\IM q_H(ir)$, and $A(r)$.
\newline
\noindent \underline{Calculation of $\mathring t(r)$:}
By Karamata's Theorem we have
\[
\omega_i(t) \sim \frac{1}{\alpha}t^{\alpha} (-\log t)^{\beta_i}, \quad \quad i=1,2,3.
\]
Hence
\begin{align}
\label{m1m2}
(\omega_1\omega_2)(t) \sim \frac{1}{\alpha ^2}t^{2\alpha} (-\log t)^{\beta_1+\beta_2}.
\end{align}
Applying \Cref{approx_inv} yields
\[
(\omega_1\omega_2)^{-1}(s) \sim c \cdot s^{\frac{1}{2\alpha}} (-\log s)^{-\frac{\beta_3}{\alpha}}.
\]
We arrive at
\begin{align}
\label{tring}
\mathring t(r)=(\omega_1\omega_2)^{-1}(r^{-2}) \sim c' \cdot r^{-\frac{1}{\alpha}} (\log r)^{-\frac{\beta_3}{\alpha}}.
\end{align}
\noindent \underline{Calculation of $A(r)$:}
\begin{align*}
A(r)&= \sqrt{\frac{\omega_1(\mathring t(r))}{\omega_2(\mathring t(r))}} \sim (-\log \mathring t(r))^{\frac{\beta_1-\beta_2}{2}} \sim \Big[-\log \Big(r^{-\frac{1}{\alpha}} (\log r)^{-\frac{\beta_3}{\alpha}} \Big) \Big]^{\frac{\beta_1-\beta_2}{2}} \\
&\sim (\alpha \log r)^{\frac{\beta_1-\beta_2}{2}}.
\end{align*}
\noindent \underline{Calculation of $\hat t(r)$:} We use \Cref{int_asy} to calculate $\omega_i(t)$ with more precision:
\begin{align*}
\omega_i(t)=&\int_0^t s^{\alpha-1} (-\log s)^{\beta_i} \mkern4mu\mathrm{d} s = \frac{1}{\alpha}t^{\alpha} (-\log t)^{\beta_i} \\
&\cdot \Big[1+\frac{\beta_i}{\alpha}(-\log t)^{-1}+\frac{\beta_i (\beta_i -1)}{\alpha ^2}(-\log t)^{-2}+{\rm O} \big((-\log t)^{-3} \big) \Big].
\end{align*}
We get
\begin{align}
&\det \Omega (t)=\frac{1}{\alpha^2}t^{2\alpha} (-\log t)^{\beta_1+\beta_2} \nonumber\\
&\cdot \Big[1+\frac{\beta_1+\beta_2}{\alpha}(-\log t)^{-1}+\frac{\beta_1 (\beta_1-1)+\beta_2 (\beta_2-1) + \beta_1 \beta_2}{\alpha^2}(-\log t)^{-2} \nonumber\\
&-\Big(1+\frac{2\beta_3}{\alpha}(-\log t)^{-1}+\frac{2\beta_3 (\beta_3-1) + \beta_3^2}{\alpha^2}(-\log t)^{-2}+{\rm O} \big((-\log t)^{-3} \big) \Big)\Big] \nonumber\\
&=\frac{1}{\alpha^2}t^{2\alpha} (-\log t)^{\beta_1+\beta_2} \Big[\Big(\frac{\beta_1-\beta_2}{2\alpha} \Big)^2 (-\log t)^{-2} + {\rm O} \big((-\log t)^{-3} \big)\Big] \nonumber\\
\label{detm}
&\sim c \cdot t^{2\alpha} (-\log t)^{2(\beta_3-1)}.
\end{align}
By \Cref{approx_inv},
\[
\big(\det \Omega \big)^{-1}(s) \sim c' \cdot s^{\frac{1}{2\alpha}} (-\log s)^{-\frac{\beta_3-1}{\alpha}}
\]
and further
\[
\hat t(r) = \big(\det \Omega \big)^{-1}(r^{-2}) \sim c'' \cdot r^{-\frac{1}{\alpha}} (\log r)^{-\frac{\beta_3-1}{\alpha}}.
\]
\noindent \underline{Calculation of $\IM q_H(ir)$:}
\begin{align*}
\IM q_H(ir)&\asymp \frac{1}{r\omega_2(\hat t(r))} \sim \frac{\alpha}{r} \big(\hat t(r) \big)^{-\alpha} \big(-\log \hat t(r) \big)^{-\beta_2} \\
&\sim c''' \cdot \frac{\alpha}{r} r (\log r)^{\beta_3-1} (\log r)^{-\beta_2} = c''' \cdot (\log r)^{\frac{\beta_1-\beta_2}{2}-1}.
\end{align*}
\noindent \underline{Calculation of $L(r)$:} Using (\ref{detm}), (\ref{m1m2}) and (\ref{tring}), we have
\[
\frac{\det \Omega (\mathring t(r))}{(\omega_1\omega_2)(\mathring t(r))} \sim \Big(\frac{\beta_1-\beta_2}{2\alpha} \Big)^2 (\alpha \log r)^{-2}.
\]
Multiplying by $A(r)$, we obtain
\[
L(r) \sim c (\log r)^{\frac{\beta_1-\beta_2}{2}-2}.
\]
\subsection*{Acknowledgements.}
\noindent This work was supported by the Austrian Science Fund and the Russian Foundation for Basic Research (grant number
I-4600). Furthermore, I would like to thank my supervisor Harald Woracek for his support and the expertise he provided.
\printbibliography
\end{document}
\begin{document}
\title{A Unified Hard-Constraint Framework for\\ Solving Geometrically Complex PDEs}
\begin{abstract}
We present a unified hard-constraint framework for solving geometrically complex PDEs with neural networks, where the most commonly used Dirichlet, Neumann, and Robin boundary conditions (BCs) are considered. Specifically, we first introduce the ``extra fields'' from the mixed finite element method to reformulate the PDEs so as to equivalently transform the three types of BCs into linear equations. Based on the reformulation, we derive the general solutions of the BCs analytically, which are employed to construct an ansatz that automatically satisfies the BCs. With such a framework, we can train the neural networks without adding extra loss terms and thus efficiently handle geometrically \modify{complex} PDEs, alleviating the unbalanced competition between the loss terms corresponding to the BCs and PDEs. We theoretically demonstrate that the ``extra fields'' can stabilize the training process. Experimental results on real-world geometrically complex PDEs showcase the \modify{effectiveness} of our method compared with state-of-the-art baselines.
\end{abstract}
\section{Introduction}
Many fundamental problems in science and engineering (e.g., \cite{batchelor2000introduction,majda2003introduction,schiesser2014computational}) are characterized by partial differential equations (PDEs) with the solution constrained by boundary conditions (BCs) that are derived from the physical system of the problem. Among all types of BCs, Dirichlet, Neumann, and Robin are the most commonly used \citep{strauss2007partial}. Figure~\ref{fig:three_type_bc} gives an illustrative example of these three types of BCs. Furthermore, in practical problems, physical systems can be very geometrically complex (\modify{where the geometry of the definition domain is irregular or has complex structures,} e.g., a lithium-ion battery \citep{jeon2011thermal}, a heat sink \citep{wu2017design}, etc), leading to a large number of BCs. How to solve such PDEs has become a challenging problem shared by both scientific and industrial communities.
\modify{The field of solving PDEs with neural networks has a history of more than 20 years \citep{dissanayake1994neural, bar2019unsupervised, sun2019solving, esmaeilzadeh2020meshfreeflownet, van2021optimally}}. Such methods are intrinsically mesh-free and can therefore handle high-dimensional as well as geometrically complex problems more efficiently than traditional mesh-based methods such as the finite element method (FEM). The physics-informed neural network (PINN) \citep{raissi2019physics} is one of the most influential works in this direction, where the neural network is trained by taking the residuals of both the PDEs and the BCs as multiple terms of the loss function. \modify{Despite many notable improvements such as DPM \citep{kim2020dpm}, PINNs still face serious challenges, as discussed in} \citep{krishnapriyan2021characterizing}. Some theoretical works \citep{wang2021understanding,wang2022and} point out that there exists an unbalanced competition between the PDE and BC terms, limiting the application of PINNs to geometrically complex problems. To address this issue, some researchers \citep{berg2018unified,sun2020surrogate,lu2021physics} have tried to embed the BCs into the ansatz. Some of them \citep{leake2020deep, schiassi2021extreme} follow the pipeline of the Theory of Connections \citep{mortari2017theory}, while others \citep{yu2018deep,kharazmi2019variational,liao2019deep} solve an equivalent variational form of the PDEs. In this way, the neural networks automatically satisfy the BCs and no longer require corresponding loss terms. Nevertheless, these methods are only applicable to specific BCs (e.g., Dirichlet BCs, homogeneous BCs, etc.) or geometrically simple PDEs. The key challenge is that the equations defining Neumann and Robin BCs have no analytical solutions in general and are thus difficult to embed into the ansatz.
\begin{figure}
\caption{\textbf{An illustration of the three types of BCs.}}
\label{fig:three_type_bc}
\end{figure}
In this paper, we propose a unified hard-constraint framework for the three most commonly used BCs (i.e., Dirichlet, Neumann, and Robin BCs). With this framework (see Figure~\ref{fig:pipeline} for an illustration), we are able to construct an ansatz that automatically satisfies all three types of BCs. Therefore, we can train the model without the losses of these BCs, which alleviates the unbalanced competition and significantly improves the performance of solving geometrically complex PDEs. Specifically, we first introduce the \textit{extra fields} from the mixed finite element method \citep{malkus1978mixed,brink1996some}. This technique substitutes the gradient of a physical quantity with new variables, allowing the BCs to be reformulated as linear equations. Based on this reformulation, we derive a general continuous solution of the BCs in a simple form, overcoming the obstacle that the original BCs cannot be solved analytically. Using the general solutions obtained, we summarize a paradigm for constructing the hard-constraint ansatz in time-dependent, multi-boundary, and high-dimensional cases. In addition, in Section~\ref{sec:theo}, we demonstrate that the technique of \textit{extra fields} can improve the stability of the training process.
We empirically demonstrate the effectiveness of our method through three parts of experiments. First, we show the capability of our method to solve geometrically complex PDEs through two numerical experiments from real-world physical systems: a battery pack and an airfoil. Our framework achieves superior performance compared with advanced baselines, including learning rate annealing \modify{methods} \citep{wang2021understanding}, \modify{domain decomposition-based methods} \citep{jagtap2020extended, moseley2021finite}, and existing hard-constraint methods \citep{sheng2021pfnn, sheng2022pfnn}. Second, we select a high-dimensional problem to demonstrate that our framework can be well applied to \modify{high-dimensional} cases. Finally, we study the impact of the \textit{extra fields} as well as some hyper-parameters, and verify our theoretical results of Section~\ref{sec:theo}.
To sum up, we make the following contributions: \begin{itemize}
\item We introduce the \textit{extra fields} to reformulate the PDEs, and theoretically demonstrate that our reformulation can effectively reduce the instability of the training process.
\item We propose a unified hard-constraint framework for Dirichlet, Neumann, and Robin BCs, \modify{alleviating} the unbalanced competition between losses in physics-informed learning.
\item Our method has superior performance over state-of-the-art baselines on solving geometrically complex PDEs, as validated by numerical experiments in real-world physical problems.
\end{itemize}
\begin{figure}
\caption{\textbf{The pipeline of the proposed method.}}
\label{fig:pipeline}
\end{figure}
\section{Background}
\subsection{Physics-Informed Neural Networks (PINNs)}
\label{sec:pinn}
We consider the following Laplace's equation as a motivating example:
\begin{subequations}
\label{eq_toy_example}
\begin{align}
\Delta u(x_1,x_2) &= 0, &&\modify{x_1\in(0,1],x_2\in[0,1]}, \label{eq_pde_toy_example}\\
\modify{u(x_1,x_2)} &= g(x_2), &&\modify{x_1=0,x_2\in[0,1]}, \label{eq_bc_toy_example}
\end{align}
\end{subequations}
where $g\colon \mathbb{R}\rightarrow \mathbb{R}$ is a known fixed function, Eq.~\eqref{eq_pde_toy_example} gives the form of the PDE, and Eq.~\eqref{eq_bc_toy_example} is a Dirichlet boundary condition (BC). A solution to the above problem is a solution to Eq.~\eqref{eq_pde_toy_example} which also satisfies Eq.~\eqref{eq_bc_toy_example}.
Physics-informed neural networks (PINNs)~\citep{raissi2019physics} employ a neural network $\mathrm{NN}(x_1,x_2;\bm{\theta})$ to approximate the solution, i.e., $\hat{u}(x_1,x_2;\bm{\theta})=\mathrm{NN}(x_1,x_2;\bm{\theta})\approx u(x_1,x_2)$, where $\bm{\theta}$ denotes the trainable parameters of the network. We learn the parameters $\bm{\theta}$ by minimizing the following loss function:
\begin{equation}
\mathcal{L}(\bm{\theta})=\mathcal{L}_{\mathcal{F}}(\bm{\theta}) + \mathcal{L}_{\mathcal{B}}(\bm{\theta})\triangleq \frac{1}{N_f}\sum_{i=1}^{N_f} \left| \Delta \hat{u}(x^{(i)}_{f,1},x^{(i)}_{f,2};\bm{\theta}) \right|^2 + \frac{1}{N_b}\sum_{i=1}^{N_b} \left| \hat{u}(0,x^{(i)}_{b,2};\bm{\theta}) - g(x^{(i)}_{b,2}) \right|^2,
\label{eq:toy_example_loss}
\end{equation}
where $\mathcal{L}_{\mathcal{F}}$ is the term restricting $\hat{u}$ to satisfy the PDE (Eq.~\eqref{eq_pde_toy_example}) while $\mathcal{L}_{\mathcal{B}}$ is the one for the BC (Eq.~\eqref{eq_bc_toy_example}), $\{\bm{x}^{(i)}_f=(x^{(i)}_{f,1},x^{(i)}_{f,2})\}_{i=1}^{N_f}$ is a set of $N_f$ collocation points sampled in $[0,1]^2$, and $\{\bm{x}^{(i)}_b=(0,x^{(i)}_{b,2})\}_{i=1}^{N_b}$ is a set of $N_b$ boundary points sampled in $x_1=0\land x_2\in[0,1]$.
PINNs have a wide range of applications, including heat transfer \citep{cai2021physics}, fluid flow \citep{mao2020physics}, and atmospheric modeling \citep{zhang2021spatiotemporal}. However, PINNs still struggle with performance issues \citep{krishnapriyan2021characterizing}. Previous analyses~\citep{wang2021understanding,wang2022and} have demonstrated that the convergence of $\mathcal{L}_{\mathcal{F}}$ can be significantly faster than that of $\mathcal{L}_{\mathcal{B}}$. This pathology may lead to nonphysical solutions which do not satisfy the BCs or initial conditions (ICs). Moreover, for geometrically complex PDEs where the number of BCs is large, this problem is exacerbated and can seriously affect accuracy, as supported by our experimental results in Table~\ref{tab:pack}.
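As a concrete illustration of the loss in Eq.~\eqref{eq:toy_example_loss}, the following minimal NumPy sketch evaluates both terms for the toy problem, with a hand-written polynomial surrogate and finite differences standing in for a neural network and automatic differentiation; the helper names and the choice $g(x_2)=x_2$ are illustrative assumptions, not part of an actual PINN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def u_hat(x1, x2, theta):
    # Polynomial surrogate standing in for NN(x1, x2; theta); each term
    # here happens to be harmonic, so the PDE residual is ~0.
    a, b, c = theta
    return a * x1 * x2 + b * (x1**2 - x2**2) + c

def laplacian_fd(f, x1, x2, h=1e-3):
    # Central finite differences stand in for automatic differentiation.
    return (f(x1 + h, x2) + f(x1 - h, x2)
            + f(x1, x2 + h) + f(x1, x2 - h) - 4 * f(x1, x2)) / h**2

def pinn_loss(theta, g, n_f=64, n_b=64):
    f = lambda x1, x2: u_hat(x1, x2, theta)
    xf = rng.uniform(0.0, 1.0, size=(n_f, 2))   # collocation points in [0,1]^2
    x2b = rng.uniform(0.0, 1.0, size=n_b)       # boundary samples on x1 = 0
    loss_pde = np.mean(laplacian_fd(f, xf[:, 0], xf[:, 1]) ** 2)
    loss_bc = np.mean((f(np.zeros_like(x2b), x2b) - g(x2b)) ** 2)
    return loss_pde + loss_bc

g = lambda x2: x2                # illustrative Dirichlet data g(x2)
loss = pinn_loss(np.array([1.0, 0.0, 0.0]), g)
```

With these parameters the PDE term is essentially zero (the surrogate is harmonic) while the BC term is not, which is exactly the imbalance between $\mathcal{L}_{\mathcal{F}}$ and $\mathcal{L}_{\mathcal{B}}$ discussed above.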
\subsection{Hard-Constraint Methods}
One potential approach to overcoming this pathology is to embed the BCs into the ansatz so that any instance of the ansatz automatically satisfies the BCs, as in previous works \citep{berg2018unified, pang2019fpinns,sun2020surrogate, wang2021cenn, sheng2021pfnn, lu2021physics, sheng2022pfnn}. The loss terms corresponding to the BCs are then no longer needed, and the above pathology is alleviated. These methods are called \textit{hard-constraint methods}, and they share a similar form of the ansatz:
\begin{equation}
\label{eq_general_formula_hc}
\hat u(\bm{x};\bm{\theta})=u^{\partial \Omega}(\bm{x})+ l^{\partial \Omega}(\bm{x}) \mathrm{NN}(\bm{x};\bm{\theta}) ,
\end{equation}
where $\bm{x}$ is the coordinate, $\Omega$ is the domain of interest, $u^{\partial \Omega}(\bm{x})$ is the general solution at the boundary $\partial \Omega$, and $l^{\partial \Omega}(\bm{x})$ is an extended distance function which satisfies:
\begin{equation}\label{eq_ext_dist}
l^{\partial \Omega}(\bm{x})=
\begin{cases}
0& \mathrm{if}\ \bm{x}\in\partial\Omega,\\
>0& \mathrm{otherwise}.
\end{cases}
\end{equation}
In the case of Eq.~\eqref{eq_toy_example} (where $\bm{x}=(x_1,x_2)$, $\Omega=[0,1]^2$), the general solution is exactly $g(x_2)$, and we can use the following ansatz (which automatically satisfies the BC in Eq.~\eqref{eq_bc_toy_example}):
\begin{equation}
\hat u(x_1,x_2;\bm{\theta})= g(x_2) + x_1 \mathrm{NN}(x_1,x_2;\bm{\theta}).
\end{equation}
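The mechanism of this ansatz can be checked in a few lines of NumPy: a random-feature model stands in for the neural network, and the BC holds for any parameter values; the helper names and the choice of $g$ are hypothetical, not from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn(x1, x2, theta):
    # Hypothetical stand-in for NN(x1, x2; theta): a random-feature model.
    w, b, c = theta
    return np.tanh(np.outer(x1, w[0]) + np.outer(x2, w[1]) + b) @ c

theta = (rng.normal(size=(2, 16)), rng.normal(size=16), rng.normal(size=16))
g = lambda x2: np.sin(np.pi * x2)    # illustrative boundary data

def u_hard(x1, x2, theta):
    # Hard-constraint ansatz g(x2) + l(x) * NN with the extended distance
    # function l(x) = x1, which vanishes exactly on the boundary x1 = 0.
    return g(x2) + x1 * nn(x1, x2, theta)

x2 = rng.uniform(0.0, 1.0, size=100)
# The BC u(0, x2) = g(x2) holds for *any* parameters; no BC loss is needed.
bc_gap = np.max(np.abs(u_hard(np.zeros_like(x2), x2, theta) - g(x2)))
```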
However, it is hard to directly extend this method to the more general case of Robin BCs (see Eq.~\eqref{eq:general_bcs}), since we cannot obtain the general solution $u^{\partial \Omega}(\bm{x})$ analytically. Existing attempts are either mesh-dependent \citep{gao2021phygeonet, zhao2021physics}, which is time-consuming for high-dimensional and geometrically complex PDEs, or ad hoc methods for specific (geometrically simple) physical systems \citep{rao2021physics}. A unified hard-constraint framework for both geometrically complex PDEs and the most commonly used Dirichlet, Neumann, and Robin BCs is still lacking.
\section{Methodology}
We first introduce the problem setup of geometrically complex PDEs considered in this paper and then reformulate the PDEs via the \textit{extra fields}, followed by presenting our unified hard-constraint framework for embedding Dirichlet, Neumann, and Robin BCs into the ansatz.
\subsection{Problem Setup}
We consider a physical system governed by the following PDEs defined on a geometrically complex domain $\Omega\subset\mathbb{R}^d$:
\begin{equation}
\mathcal{F}[\bm{u}(\bm{x})]=\bm{0},\qquad\bm{x}=(x_1,\dots,x_d)\in\Omega,
\label{eq:genearl_PDE}
\end{equation}
where $\mathcal{F}=(\mathcal{F}_1,\dots,\mathcal{F}_N)$ includes $N$ PDE operators which map $\bm{u}$ to a function of $\bm{x}$, $\bm{u}$, and the derivatives of $\bm{u}$. Here, $\bm{u}(\bm{x})=(u_1(\bm{x}),\dots,u_n(\bm{x}))$ is the unknown solution, which represents the physical quantities of interest. For each $u_j, j=1,\dots,n$, we impose suitable boundary conditions (BCs) as:
\begin{equation}
a_{j,i}(\bm{x})u_j+b_{j,i}(\bm{x})\big( \bm{n}_{j,i}(\bm{x})\cdot\nabla u_j \big) = g_{j,i}(\bm{x}),\quad\bm{x}\in\gamma_{j,i},\quad \forall i=1,\dots,m_j,
\label{eq:general_bcs}
\end{equation}
where $\{\gamma_{j,i}\}_{i=1}^{m_j}$ are subsets of the boundary $\partial\Omega$ whose closures are disjoint, $a_{j,i}, b_{j,i}\colon \gamma_{j,i} \rightarrow \mathbb{R}$ satisfy $a_{j,i}^2(\bm{x})+b^2_{j,i}(\bm{x})\neq 0, \forall \bm{x}\in\gamma_{j,i}$, $\bm{n}_{j,i}\colon \gamma_{j,i} \rightarrow \mathbb{R}^d$ is the (outward-facing) unit normal of $\gamma_{j,i}$ at each location, and $g_{j,i}\colon \gamma_{j,i} \rightarrow \mathbb{R}$. Note that Eq.~\eqref{eq:general_bcs} represents a Dirichlet BC if $a_{j, i}\equiv 1,b_{j, i}\equiv0$, a Neumann BC if $a_{j, i}\equiv 0,b_{j, i}\equiv 1$, and a Robin BC otherwise. In the following, we drop some of the subscripts in Eq.~\eqref{eq:general_bcs} for clarity.\footnote{We simplify $\gamma_{j, i}, a_{j, i}(\bm{x}), b_{j, i}(\bm{x}), \bm{n}_{j, i}(\bm{x}), g_{j, i}(\bm{x})$ to $\gamma_{i}, a_{i}(\bm{x}), b_{i}(\bm{x}), \bm{n}(\bm{x}), g_{i}(\bm{x})$.}
For such geometrically complex PDEs, if we directly resort to PINNs (see Section~\ref{sec:pinn}), we face a difficult multi-task learning problem with at least $(\sum_{j=1}^n m_j + N)$ terms in the loss function. As discussed in previous analyses \citep{wang2021understanding, wang2022and}, this severely affects training convergence due to the unbalanced competition between the loss terms. Hence, in this paper, we discuss how to embed the BCs into the ansatz, so that every instance automatically satisfies the BCs. However, it is infeasible to directly follow the pipeline of \textit{hard-constraint methods} (see Eq.~\eqref{eq_general_formula_hc}), since Eq.~\eqref{eq:general_bcs} does not have a general solution of analytical form. Therefore, a new approach is needed to address this intractable problem.
\subsection{Reformulating PDEs via Extra Fields}
\label{sec:extra_fields}
In this subsection, we present the general solutions of the BCs, which will be used to construct the hard-constraint ansatz subsequently. We first introduce the \textit{extra fields} from the mixed finite element method \citep{malkus1978mixed,brink1996some} to equivalently reformulate the PDEs. Let $\bm{p}_j(\bm{x})=(p_{j1}(\bm{x}),\dots,p_{jd}(\bm{x}))=\nabla u_j,j=1,\dots,n$. We substitute them into Eq.~\eqref{eq:genearl_PDE} and Eq.~\eqref{eq:general_bcs} to obtain the equivalent PDEs:
\begin{subequations}
\label{eq:pde_after_extra_field}
\begin{align}
\tilde{\mathcal{F}}[\bm{u}(\bm{x}),\bm{p}_1(\bm{x}),\dots,\bm{p}_n(\bm{x})]&=\bm{0},&&\bm{x}\in\Omega,\\
\bm{p}_j(\bm{x})&=\nabla u_j,&&\bm{x}\in\Omega\cup\partial\Omega,&&\forall j=1,\dots,n,
\label{eq:balance_eq_after_extra_field}
\end{align}
\end{subequations}
where $(\bm{u}(\bm{x}),\bm{p}_1(\bm{x}),\dots,\bm{p}_n(\bm{x}))$ is the solution to the new PDEs, and $\tilde{\mathcal{F}}=(\tilde{\mathcal{F}}_1,\dots,\tilde{\mathcal{F}}_N)$ are the PDE operators after the reformulation. For $j=1,\dots,n$, we have the corresponding BCs:
\begin{equation}
a_i(\bm{x})u_j+b_i(\bm{x})\big( \bm{n}(\bm{x})\cdot\bm{p}_j(\bm{x}) \big) = g_i(\bm{x}),\qquad\bm{x}\in\gamma_i,\qquad \forall i=1,\dots,m_j.
\label{eq:bc_after_extra_field}
\end{equation}
Now, we can see that Eq.~\eqref{eq:general_bcs} has been transformed into linear equations with respect to $(u_j, \bm{p}_j)$, for which general solutions are much easier to derive. Hereinafter, we denote $(u_j,\bm{p}_j)$ by $\tilde{\bm{p}}_j$, and the general solution to the BC at $\gamma_i$ by $\tilde{\bm{p}}_j^{\gamma_i}=(u_j^{\gamma_i}, \bm{p}_j^{\gamma_i})$. Next, we discuss how to obtain $\tilde{\bm{p}}_j^{\gamma_i}$.
To obtain the general solution of Eq.~\eqref{eq:bc_after_extra_field}, the first step is to find a basis $\bm{B}(\bm{x})$ of the null space, which has dimension $d$ and should therefore include $d$ vectors. However, we must emphasize that $\bm{B}(\bm{x})$ should be carefully chosen. Since Eq.~\eqref{eq:bc_after_extra_field} is parameterized by $\bm{x}$, $\bm{B}(\bm{x})$ should be a basis of the null space for every $\bm{x}\in\gamma_i$; that is, its columns cannot degenerate into linearly dependent vectors (otherwise it will not be able to represent all possible solutions). An example of an inadmissible $\bm{B}(\bm{x})$ is given in Appendix~\ref{sec_a1}.
Generally, for any dimension $d\in\mathbb{N}^+$, \modify{we believe it is non-trivial to find a simple expression for the basis}. Instead, we prefer to find $(d+1)$ vectors in the null space, $d$ of which are linearly independent (that way, $\bm{B}(\bm{x})\in\mathbb{R}^{(d+1) \times (d+1)}$). \modify{We now directly present our construction of the general solution while leaving a detailed derivation in Appendix~\ref{sec_a3}.}
\begin{equation} \label{eq_hc_robin}
\tilde{\bm{p}}_j^{\gamma_i}\modify{(\bm{x};\bm{\theta}_j^{\gamma_i})}=\bm{B}(\bm{x})\modify{\mathrm{NN}_j^{\gamma_i}(\bm{x};\bm{\theta}_j^{\gamma_i})} + \tilde{\bm{n}}(\bm{x})\tilde{g}_i(\bm{x}),
\end{equation}
where $\tilde{\bm{n}}=(a_i,b_i\bm{n})\big/\sqrt{a_i^2+b_i^2}$, $\tilde{g}_i=g_i\big/\sqrt{a_i^2+b_i^2}$, $\modify{\mathrm{NN}_j^{\gamma_i}}:\mathbb{R}^{d}\to\mathbb{R}^{d+1}$ is a neural network\modify{ with trainable parameters $\bm{\theta}_j^{\gamma_i}$}, and $\bm{B}(\bm{x})=\bm{I}_{d+1}-\tilde{\bm{n}}(\bm{x}){\tilde{\bm{n}}(\bm{x})}^\top$ is the basis we have found (precisely, it is a set of vectors that always contain a basis of the null space for any $\bm{x}\in\gamma_i$). Incidentally, in the case of $d=1$ or $d=2$, we can find a simpler expression for $\bm{B}$ \modify{(see Appendix~\ref{sec_a2})}. We note that there is no restriction for the architecture of neural networks used and MLPs are chosen as default.
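A minimal NumPy sketch of Eq.~\eqref{eq_hc_robin} at a single boundary point, assuming illustrative values for $a_i$, $b_i$, $\bm{n}$, and $g_i$: the projector $\bm{B}=\bm{I}_{d+1}-\tilde{\bm{n}}\tilde{\bm{n}}^\top$ maps any network output into the null space of $\tilde{\bm{n}}^\top$, so the resulting $(u_j,\bm{p}_j)$ satisfies the linear BC exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2

# Assumed Robin BC data at one boundary point: a*u + b*(n . p) = g.
a, b, g = 1.0, 0.5, 0.3
n = np.array([0.6, 0.8])                         # unit normal, |n| = 1

scale = np.sqrt(a**2 + b**2)
n_tilde = np.concatenate(([a], b * n)) / scale   # unit vector in R^{d+1}
g_tilde = g / scale

# B projects onto the null space of n_tilde^T: its d+1 columns are
# linearly dependent, but they always span the d-dimensional null space.
B = np.eye(d + 1) - np.outer(n_tilde, n_tilde)

# Any network output z yields a point satisfying the linear BC exactly:
z = rng.normal(size=d + 1)                       # stand-in for the NN output
p_tilde = B @ z + n_tilde * g_tilde              # (u, p_1, p_2)

u, p = p_tilde[0], p_tilde[1:]
```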
\subsection{A Unified Hard-Constraint Framework}\label{sec:unified_hc}
With the parameterization of a neural network, Eq.~\eqref{eq_hc_robin} can represent any function defined on $\gamma_i$, as long as the function satisfies the BC (see Eq.~\eqref{eq:bc_after_extra_field}). Since our problem domain contains multiple boundaries, we need to combine the general solutions corresponding to each boundary $\gamma_i$ to achieve an overall approximation. Hence, we construct our ansatz (for each $1\le j \le n$ \emph{separately}) as follows:
\begin{equation} \label{eq_unified_hc}
(\hat{u}_j,\hat{\bm{p}}_j) = l^{\partial \Omega}(\bm{x}) \modify{\mathrm{NN}^{\mathrm{main}}_j(\bm{x};\bm{\theta}^{\mathrm{main}}_j)} + \sum_{i=1}^{m_j} \exp{\big[-\alpha_i l^{\gamma_i}(\bm{x})\big]}\tilde{\bm{p}}_j^{\gamma_i}(\bm{x}\modify{;\bm{\theta}_j^{\gamma_i}}),\quad \forall j=1,\dots,n,
\end{equation}
where $\modify{\mathrm{NN}^{\mathrm{main}}_j}:\mathbb{R}^{d}\to\mathbb{R}^{d+1}$ \modify{is the main neural network with trainable parameters $\bm{\theta}^{\mathrm{main}}_j$}, $l^{\partial \Omega}, l^{\gamma_i},i=1,\dots,m_j$ are \modify{continuous} extended distance functions (similar to Eq.~\eqref{eq_ext_dist}), and $\alpha_i~(i=1,\dots,m_j)$ are determined by:
\begin{equation}\label{eq_determine_alpha}
\alpha_i = \frac{\beta_s}{\min_{\bm{x} \in \partial\Omega \setminus \gamma_i} l^{\gamma_i}(\bm{x}) } ,
\end{equation}
where $\beta_s\in\mathbb{R}$ is a hyper-parameter controlling the ``hardness'' in the spatial domain. \modify{In Eq.~\eqref{eq_unified_hc}, we utilize extended distance functions to ``divide'' the problem domain into several parts, where $\{\tilde{\bm{p}}_j^{\gamma_i}\}_{i=1}^{m_j}$ are responsible for the approximation on the boundaries while $\mathrm{NN}^{\mathrm{main}}_j$ is responsible for the interior. Furthermore, Eq.~\eqref{eq_determine_alpha} ensures that the importance of $\tilde{\bm{p}}_j^{\gamma_i}$ decays to $e^{-\beta_s}$ on the boundary nearest to $\gamma_i$, so that $\tilde{\bm{p}}_j^{\gamma_i}$ does not interfere at other boundaries. We provide a theoretical guarantee for the correctness and approximation ability of Eq.~\eqref{eq_unified_hc} in Appendix~\ref{sec_a4}.} Besides, if $a_i$, $b_i$, $\bm{n}$ or $g_i$ are only defined on $\gamma_i$, we can extend their definitions to $\Omega\cup\partial\Omega$ using interpolation techniques or neural networks (see Appendix~\ref{sec_a5}). An extension of our framework to the temporal domain is discussed in Appendix~\ref{sec_a6}.
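The role of $\alpha_i$ in Eq.~\eqref{eq_determine_alpha} can be checked numerically on a toy 1D domain with two boundary points; the value of $\beta_s$ and the distance functions below are arbitrary illustrations.

```python
import numpy as np

beta_s = 5.0                       # "hardness" hyper-parameter (assumed value)

# Toy domain [0, 1] with two boundary components gamma_0 = {0} and
# gamma_1 = {1}; l(x, i) = |x - gamma_i| is the distance to gamma_i.
gammas = np.array([0.0, 1.0])
l = lambda x, i: np.abs(x - gammas[i])

def alpha(i):
    # Eq. for alpha_i: beta_s divided by the minimum of l^{gamma_i}
    # over the other boundary components.
    others = np.delete(gammas, i)
    return beta_s / np.min(np.abs(others - gammas[i]))

# Blending weight of the boundary solution attached to gamma_i:
w = lambda x, i: np.exp(-alpha(i) * l(x, i))
```

The weight equals $1$ on its own boundary and decays to exactly $e^{-\beta_s}$ at the nearest other boundary, which is what keeps each $\tilde{\bm{p}}_j^{\gamma_i}$ local.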
Finally, we can train our model with the following loss function:
\begin{equation}
\begin{aligned}
\mathcal{L}&= \frac{1}{N_f}\sum_{k=1}^{N_f}\sum_{j=1}^N \big| \tilde{\mathcal{F}}_j[\hat{\bm{u}}(\bm{x}^{(k)}),\hat{\bm{p}}_1(\bm{x}^{(k)}),\dots,\hat{\bm{p}}_n(\bm{x}^{(k)})] \big|^2 \\
&+ \frac{1}{N_f}\sum_{k=1}^{N_f}\sum_{j=1}^n \big\Vert \hat{\bm{p}}_j(\bm{x}^{(k)}) - \nabla\hat{u}_j(\bm{x}^{(k)}) \big\Vert_2^2 ,
\label{eq:hc_loss}
\end{aligned}
\end{equation}
where $\hat{\bm{u}}=(\hat{u}_1,\dots,\hat{u}_n)$, $\{\hat{u}_j,\hat{\bm{p}}_j\}_{j=1}^n$ are defined in Eq.~\eqref{eq_unified_hc}, and $\{\bm{x}^{(k)}\}_{k=1}^{N_f}$ is a set of collocation points sampled in $\Omega$. \modify{For neatness, we omit the trainable parameters of the neural networks here. We note that} Eq.~\eqref{eq:hc_loss} measures the discrepancy of both the PDEs (i.e., $\tilde{\mathcal{F}}_1,\dots,\tilde{\mathcal{F}}_N$) and the equilibrium equations introduced by the \textit{extra fields} (i.e., Eq.~\eqref{eq:balance_eq_after_extra_field}) at the $N_f$ collocation points.
With Eq.~\eqref{eq:hc_loss}, we have now successfully embedded the BCs into the ansatz and no longer need to take the residuals of the BCs as extra terms in the loss function. That is, our ansatz strictly conforms to the BCs throughout the training process, greatly reducing the possibility of generating nonphysical solutions. This comes at the cost of introducing $(nd)$ additional equilibrium equations (see Eq.~\eqref{eq:balance_eq_after_extra_field}). However, in many physical systems, especially those with complex geometries, the number of BCs ($\mathrm{cnt}(\mathrm{BCs})$) is far larger than $(nd)$ (e.g., $n=3$, $d=2$, $\mathrm{cnt}(\mathrm{BCs})=1260$ for a classical physical system, a heat exchanger \citep{dubovsky2011analytical}). We may therefore reduce the number of loss terms by orders of magnitude ($\Delta\mathrm{cnt}(\mathrm{losses}) = n d - \mathrm{cnt}(\mathrm{BCs}) \ll 0$), alleviating the unbalanced competition between loss terms.
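The equilibrium part of Eq.~\eqref{eq:hc_loss} can be sketched in 1D, with finite differences standing in for automatic differentiation; the residual is small only when the extra field is consistent with the gradient. All functions below are illustrative, not from the paper's experiments.

```python
import numpy as np

# 1D sketch of the equilibrium residual || p_hat - du_hat/dx ||^2.
x = np.linspace(0.0, 1.0, 201)
u_hat = np.sin(2 * x)
p_consistent = 2 * np.cos(2 * x)     # extra field matching p = du/dx
p_inconsistent = np.cos(2 * x)       # extra field violating p = du/dx

grad_u = np.gradient(u_hat, x)       # finite differences stand in for autodiff
res_ok = np.mean((p_consistent - grad_u) ** 2)
res_bad = np.mean((p_inconsistent - grad_u) ** 2)
```

The equilibrium term thus penalizes any mismatch between $\hat{\bm{p}}_j$ and $\nabla\hat{u}_j$, which is what ties the extra fields back to the original PDEs.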
\section{Theoretical Analysis} \label{sec:theo}
In Section~\ref{sec:extra_fields}, we introduced the \textit{extra fields} and reformulated the PDEs (from Eq.~\eqref{eq:genearl_PDE} and Eq.~\eqref{eq:general_bcs} to Eq.~\eqref{eq:pde_after_extra_field} and Eq.~\eqref{eq:bc_after_extra_field}). To further analyze the impact of this reformulation, we consider the following abstraction of 1D PDEs (Eq.~\eqref{eq_theo}) and its reformulated counterpart (Eq.~\eqref{eq_theo_extra_fields}):
\begin{subequations}
\begin{align}
L\big[ u, \frac{\diff{u}}{\diff{x}}, \frac{\Diff{2}{u}}{\diff{x}^2},\cdots,\frac{\Diff{n}{u}}{\diff{x}^n} \big] = f(x),\qquad x \in \Omega,\label{eq_theo}\\
L\big[ u, p_1, p_2, \cdots, p_{n-1} \big]
= f(x),\qquad x \in \Omega,\label{eq_theo_extra_fields}
\end{align}
\end{subequations}
where $L$ is an operator, $f$ is a source function, and $p_1 = \diff{u}/\diff{x}$, $p_{i+1}= \diff{p_i}/\diff{x}$ are the introduced extra fields. From the above equations, we can see that the order of the derivatives appearing in the PDEs is reduced. Intuitively, as the PDEs are included in the loss function (see Eq.~\eqref{eq:toy_example_loss}), lower derivatives result in less numerical sensitivity and thus stabilize the training process.
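A quick numerical sanity check of this order reduction, using the manufactured solution $u=\sin(ax)$ (an illustrative choice, not from the paper): the second-order equation $u''=f$ holds exactly when the first-order system $p=\diff{u}/\diff{x}$, $\diff{p}/\diff{x}=f$ holds.

```python
import numpy as np

a = 3.0
x = np.linspace(0.0, 1.0, 401)
u = np.sin(a * x)                 # manufactured solution
p = a * np.cos(a * x)             # extra field p = du/dx
f = -a**2 * np.sin(a * x)         # source term of u'' = f

# Verify both first-order equations with finite differences; only
# first derivatives are ever needed after the reformulation.
ok_first = np.allclose(np.gradient(u, x), p, atol=1e-2)
ok_second = np.allclose(np.gradient(p, x), f, atol=1e-1)
```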
\begin{figure}
\caption{An illustration of the geometries of the problems in Section~\ref{sec:exp_heat} and Section~\ref{sec:exp_ns}.}
\label{fig:geom_heat}
\label{fig:geom_airfoil}
\end{figure}
To formally explain this mechanism, without loss of generality, we consider a simple case (with proper Dirichlet BCs) where $L[\cdots]=\Delta u$ and $f(x)=-a^2\sin{ax}$. Our ansatz is a single-layer neural network of \modify{width} $K$, i.e., $\hat{u}=\bm{c}^\top \modify{\sigma}(\bm{w} x+\bm{b})$, $\hat{p}=\bm{c}_{p}^\top \modify{\sigma}(\bm{w} x+\bm{b})$, where $\bm{c},\bm{w},\bm{b},\bm{c}_{p}\in\mathbb{R}^K$ and $\modify{\sigma}$ is an element-wise activation function (for simplicity, we take $\modify{\sigma}$ to be $\tanh$). More details on the PDEs are given in Appendix~\ref{sec_a7_1}. Next, we focus on the gradients of the loss terms of the PDEs $\mathcal{F}$, since the gradient of the BC term stays the same under the reformulation. We state the following theorem.
\begin{theorem}[Bounds for the Gradients of the PDE Loss Terms] \label{theo:universal}
Let $\bm{\theta}=(\bm{c},\bm{w},\bm{b})$, $\tilde{\bm{\theta}}=(\bm{c},\bm{w},\bm{b},\bm{c}_p)$, and $\mathcal{L}_{\mathcal{F}}$ as well as $\tilde{\mathcal{L}}_{\mathcal{F}}$ be the loss terms corresponding to the original and transformed PDEs, respectively. We have the following bounds:
\begin{subequations}\label{eq:theo_grad}
\begin{align}
\Big|\big(\nabla_{\bm{\theta}}\mathcal{L}_{\mathcal{F}}\big)^\top\Big| &= \mathcal{O}\big(|\bm{c}|^\top\bm{w}^2 + a^2\big)\cdot\big(\bm{w}^2, |\bm{c}| \circ |\bm{w}|\circ( |\bm{w}| + \bm{1} ), |\bm{c}| \circ \bm{w}^2\big), \label{eq_grad}\\
\Big|\big({\nabla_{\tilde{\bm{\theta}}}\tilde{\mathcal{L}}_{\mathcal{F}}}\big)^\top\Big| &= \mathcal{O}\big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big)\cdot \big(
|\bm{w}|, \max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|, \bm{1}),\nonumber\\
&\qquad\qquad\qquad\qquad\qquad\max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|,\bm{1}), \max(|\bm{w}|,\bm{1})
\big), \label{eq_grad_transformed}
\end{align}
\end{subequations}
where $\circ$ is the element-wise multiplication, $\bm{1}$ is an all-ones vector, $\max(\cdot)$ is the element-wise maximum of vectors, and operations on vectors ($\lvert\cdot\rvert$, $(\cdot)^2$, etc) are element-wise operations.
\end{theorem}
The proof is deferred to Appendix~\ref{sec_a7_2}. We can see that $(\nabla_{\bm{\theta}}\mathcal{L}_{\mathcal{F}})^\top$ has a high-order relationship with the \modify{trainable} parameters $\bm{\theta}$. This often makes training unstable, as the gradients $(\nabla_{\bm{\theta}}\mathcal{L}_{\mathcal{F}})^\top$ may rapidly explode or vanish when $\bm{\theta}$ is large or small. After the reformulation, $({\nabla_{\tilde{\bm{\theta}}}\tilde{\mathcal{L}}_{\mathcal{F}}})^\top$ is associated with the parameters in a lower order, and some terms are bounded by $1$, alleviating this instability. We note that the improvement in stability comes from the lower derivatives in the PDEs, and this mechanism does not depend on the specific form of the PDEs. In Section~\ref{sec:exp_abla_extra}, we empirically show that this mechanism is general, even for nonlinear PDEs.
\section{Experiments}\label{sec:exp}
In this section, we numerically showcase the effectiveness of the proposed framework on geometrically complex PDEs, which is also the main focus of this paper. To this end, we test our framework on two real-world problems (see Section~\ref{sec:exp_heat} and Section~\ref{sec:exp_ns}) from a 2D battery pack and an airfoil, respectively. Besides, to validate that our framework is applicable to high-dimensional cases, an \modify{additional} high-dimensional problem is considered in Section~\ref{sec:exp_high}, followed by an ablation study in Section~\ref{sec:exp_abla}. We refer to Appendix~\ref{sec_a8} for experimental details.
\subsection{Experiment Setup}\label{sec:exp_setup}
\paragraph{Evaluation}
In our experiments, we utilize the mean absolute error (MAE) and the mean absolute percentage error (MAPE) to measure the discrepancy between the prediction and the ground truth. In Section~\ref{sec:exp_ns}, we replace MAPE with the weighted mean absolute percentage error (WMAPE) to avoid division by zero, since a considerable part of the ground-truth values are very close to zero:
\begin{equation}
\mathrm{WMAPE} = \frac{\sum_{i=1}^n|\hat{y}_i-y_i|}{\sum_{i=1}^n|y_i|},
\end{equation}
where $n$ is the number of testing samples, $\hat{y}_i$ is the prediction and $y_i$ is the ground truth.
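A minimal implementation of the WMAPE, contrasted with the plain MAPE on targets containing an exact zero; the sample values below are illustrative.

```python
import numpy as np

def wmape(y_hat, y):
    # Weighted MAPE: aggregate absolute errors before dividing, so
    # near-zero targets do not blow up individual percentage errors.
    return np.sum(np.abs(y_hat - y)) / np.sum(np.abs(y))

y = np.array([0.0, 1.0, 2.0, -1.0])
y_hat = np.array([0.1, 0.9, 2.2, -1.2])
# Plain MAPE would divide by the zero entry; WMAPE stays finite.
score = wmape(y_hat, y)
```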
\paragraph{Baselines}
We consider the following baselines in subsequent experiments:
\begin{itemize}
\item \textbf{PINN}: original implementation of the PINN \citep{raissi2019physics}.
\item \textbf{PINN-LA \& PINN-LA-2}: PINN with learning rate annealing algorithm to address the unbalanced competition \citep{wang2021understanding} and a variant we modified for numerical stability.
\item \textbf{xPINN \& FBPINN}: two representative works for solving geometrically complex PDEs via non-overlapping \citep{jagtap2020extended} and overlapping \citep{moseley2021finite} domain decomposition.
\item \textbf{PFNN \& PFNN-2}: a representative hard-constraint method based on the variational formulation of PDEs \citep{sheng2021pfnn} and its latest variant incorporating domain decomposition \citep{sheng2022pfnn}.
\end{itemize}
\begin{table}[!b]
\caption{Experimental results of the simulation of a 2D battery pack}
\centering
\begin{small}
\begin{tabular}{lllllllll}
\toprule
& \multicolumn{4}{c}{MAE of $T$} & \multicolumn{4}{c}{MAPE of $T$}\\
\cmidrule(r){2-5}
\cmidrule(r){6-9}
& $t=0$ & $t=0.5$ & $t=1$ & average & $t=0$ & $t=0.5$ & $t=1$ & average \\
\midrule
PINN & $0.1283$ & $0.0457$ & $0.0287$ & $0.0539$ & $128.21\%$ & $11.65\%$ & $4.47\%$ & $24.82\%$\\
PINN-LA & $0.0918$ & $0.0652$ & $0.0621$ & $0.0661$ & $91.72\%$ & $19.13\%$ & $11.96\%$ & $27.06\%$\\
PINN-LA-2 & $0.1062$ & $0.0321$ & $\textbf{0.0211}$ & $0.0402$ & $106.05\%$ & $8.94\%$ & $4.09\%$ & $19.76\%$\\
FBPINN & $0.0704$ & $0.0293$& $0.0249$ & $0.0343$ & $70.33\%$ & $8.13\%$ & $5.87\%$ & $14.74\%$\\
xPINN & $0.2230$ & $0.1295$ & $0.1515$ & $0.1454$ & $222.83\%$ & $30.28\%$ & $20.25\%$ & $54.70\%$\\
PFNN & $\textbf{0.0000}$ & $0.3036$ & $0.4308$ & $0.2758$ & $\textbf{0.02\%}$ & $79.64\%$ & $84.60\%$ & $68.29\%$\\
PFNN-2 & $\textbf{0.0000}$ & $0.3462$ & $0.5474$ & $0.3215$ & $\textbf{0.02\%}$ & $66.06\%$ & $90.21\%$ & $59.62\%$\\
\midrule
HC & $\textbf{0.0000}$ & $\textbf{0.0246}$ & $0.0225$ & $\textbf{0.0221}$ & $\textbf{0.02\%}$ & $\textbf{5.38\%}$ & $\textbf{3.77\%}$ & $\textbf{5.10\%}$\\
\bottomrule
\end{tabular}
\end{small}
\label{tab:pack}
\end{table}
\subsection{Simulation of a 2D battery pack (Heat Equation)} \label{sec:exp_heat}
To emphasize the capability of the proposed hard-constraint framework (HC) to handle geometrically complex cases, we first consider modeling thermal dynamics of a 2D battery pack \citep{smyshlyaev2011pde}, which is governed by the heat equation (see Appendix~\ref{sec_a8_1}), where $\bm{x}\in\Omega, t\in[0,1]$ are the spatial and temporal coordinates, respectively, $T(\bm{x},t)$ is the temperature over time. The geometry of the problem $\Omega$ is shown in Figure~\ref{fig:geom_heat}, where $\gamma_{\mathrm{cell},i}$ is the cell and $\gamma_{\mathrm{pipe},i}$ is the cooling pipe.
The baselines are trained with $N_f=\num{8192}$ collocation points, $N_b=512$ boundary points, and $N_i=512$ initial points (an additional $N_s=512$ points on the interfaces between subdomains are required for xPINN), while HC is trained with $N_f=\num{8192}$ collocation points alone. All the models are trained for $\num{5000}$ Adam~\citep{kingma2015adam} iterations (with a learning rate scheduler) followed by an L-BFGS optimization until convergence, and tested with $N_t=\num{146487}$ points evaluated by the FEM. The testing results are given in Table~\ref{tab:pack} (where ``average'' means averaging over $t\in[0,1]$; the same below). We find that the accuracy of HC is significantly higher than that of the baselines almost all the time, especially at $t=0$. This is because the ansatz of HC always satisfies the BCs and ICs during the training process, preventing the approximation from violating physical constraints at the boundaries. Besides, we also provide an empirical analysis of convergence in Appendix~\ref{sec_a9} and results of 5-run parallel experiments (where the problems in all three cases are revisited) in Appendix~\ref{sec_a10}.
\subsection{Simulation of an Airfoil (Navier-Stokes Equations)} \label{sec:exp_ns}
In the next experiment, we consider a challenging benchmark in computational fluid dynamics, the 2D stationary incompressible Navier-Stokes equations, in the context of simulating the airflow around a real-world airfoil (\texttt{w1015.dat}) from the UIUC airfoil coordinates database (an open airfoil database) \citep{uiucairfoil}. Specifically, the airfoil is represented by $\num{240}$ anchor points, and the governing equations can be found in Appendix~\ref{sec_a8_2}, where $\bm{u}(\bm{x}) = (u_1(\bm{x}), u_2(\bm{x}))$ and $p(\bm{x})$ are the velocity and pressure of the fluid, respectively, and the geometry of the problem is shown in Figure~\ref{fig:geom_airfoil}.
The baselines (PFNN and PFNN-2 are not applicable to the Navier-Stokes equations) are trained with $N_f=\num{10000}$ collocation points and $N_b=2048$ boundary points (an additional $N_s=2048$ points for xPINN), while HC is trained with $N_f=\num{10000}$ collocation points alone. All the models are trained for $\num{5000}$ Adam iterations followed by an L-BFGS optimization until convergence, and tested with $N_t=\num{13651}$ points evaluated by the FEM. According to the results in Table~\ref{tab:airfoil}, HC shows more significant advantages than in the previous experiment. This is because complex nonlinear PDEs are more sensitive to the BCs, and failure at the BCs can cause the approximation to deviate significantly from the true solution. In addition, the WMAPE of $u_2$ is relatively large for all methods, because the true values of $u_2$ are close to zero, amplifying the effect of noise.
\begin{table}[!t]
\caption{Experimental results of the simulation of an airfoil}
\centering
\begin{small}
\begin{tabular}{lllllll}
\toprule
& \multicolumn{3}{c}{MAE} & \multicolumn{3}{c}{WMAPE}\\
\cmidrule(r){2-4}
\cmidrule(r){5-7}
& $u_1$ & $u_2$ & $p$ & $u_1$ & $u_2$ & $p$ \\
\midrule
PINN & $0.4682$ & $0.0697$ & $0.3883$ & $0.5924$ & $1.1979$ & $0.3539$\\
PINN-LA & $0.4018$ & $0.0595$& $0.2652$ & $0.5084$ & $1.0225$ & $0.2418$\\
PINN-LA-2 & $0.5047$ & $0.0659$ & $0.2765$ & $0.6385$ & $1.1325$ & $0.2521$\\
FBPINN & $0.4058$ & $0.0563$& $0.2665$ & $0.5134$ & $0.9676$ & $0.2429$\\
xPINN & $0.7188$ & $0.0583$& $1.1708$ & $0.9095$ & $1.0029$ & $1.0672$\\
\midrule
HC & $\textbf{0.2689}$ & $\textbf{0.0435}$ & $\textbf{0.2032}$ & $\textbf{0.3402}$ & $\textbf{0.7474}$ & $\textbf{0.1852}$\\
\bottomrule
\end{tabular}
\end{small}
\label{tab:airfoil}
\vspace{-0.1in}
\end{table}
\begin{table}[!b]
\vspace{-0.1in}
\caption{Experimental results of the high-dimensional heat equation}
\centering
\begin{small}
\begin{tabular}{lllllllll}
\toprule
& \multicolumn{4}{c}{MAE of $u$} & \multicolumn{4}{c}{MAPE of $u$}\\
\cmidrule(r){2-5}
\cmidrule(r){6-9}
& $t=0$ & $t=0.5$ & $t=1$ & average & $t=0$ & $t=0.5$ & $t=1$ & average \\
\midrule
PINN & $0.0219$ & $0.0428$ & $0.1687$ & $0.0582$ & $1.43\%$ & $1.70\%$ & $4.04\%$ & $1.99\%$\\
PINN-LA & $0.0085$ & $0.0149$ & $0.0727$ & $0.0235$ & $0.55\%$ & $0.59\%$ & $1.73\%$ & $0.78\%$\\
PINN-LA-2 & $0.0122$ & $0.0274$ & $0.1495$ & $0.0466$ & $0.79\%$ & $1.08\%$ & $3.57\%$ & $1.49\%$\\
PFNN & $\textbf{0.0000}$ & $0.1253$ & $0.3367$ & $0.1425$ & $\textbf{0.00\%}$ & $5.02\%$ & $8.19\%$ & $4.64\%$\\
\midrule
HC & $\textbf{0.0000}$ & $\textbf{0.0029}$ & $\textbf{0.0043}$ & $\textbf{0.0026}$ & $\textbf{0.00\%}$ & $\textbf{0.12\%}$ & $\textbf{0.11\%}$ & $\textbf{0.10\%}$\\
\bottomrule
\end{tabular}
\end{small}
\label{tab:high-d}
\end{table}
\subsection{High-Dimensional Heat Equation} \label{sec:exp_high}
To demonstrate that our framework can generalize to high-dimensional cases, we consider a problem of time-dependent high-dimensional PDEs defined in $\Omega\times[0,1]$ (see Appendix~\ref{sec_a8_3}), where $\Omega$ is a unit ball in $\mathbb{R}^{10}$, and $u$ is the quantity of interest.
Here, we exclude the xPINN, FBPINN, and PFNN-2 baselines, since domain decomposition is not well suited to high-dimensional cases. The remaining baselines are trained with $N_f=1000$ collocation points, $N_b=100$ boundary points, and $N_i=100$ initial points, while the HC is trained with $N_f=1000$ collocation points only. All the models are trained for $\num{5000}$ Adam iterations followed by an L-BFGS optimization until convergence and tested with $N_t=\num{10000}$ points evaluated by the analytical solution to the PDEs. We refer to Table~\ref{tab:high-d} for the testing results. The HC achieves impressive performance in the high-dimensional case. This illustrates the effectiveness of Eq.~\eqref{eq_hc_robin} and the fact that our model can be directly applied to high-dimensional PDEs, not just geometrically complex ones.
\subsection{Ablation Study} \label{sec:exp_abla}
In this subsection, we study the impact of the ``extra fields'' and of the ``hardness'' hyper-parameters (i.e., $\beta_s$ and $\beta_t$, where $\beta_t$ is the ``hardness'' in the temporal domain; see Appendix~\ref{sec_a6}).
\subsubsection{Extra fields} \label{sec:exp_abla_extra}
We test the vanilla PINN on Poisson's equation and the nonlinear Schrödinger equation, with the use of the ``extra fields'' as the control variable; the hard-constraint method is not involved here (see Appendix~\ref{sec_a8_4}). We vary the network architecture and report the ratio of the moving variance (MovVar) of $\overline{|\nabla_{\bm{\theta}}\mathcal{L}_{\mathcal{F}}|}$
to that of $\overline{|{\nabla_{\tilde{\bm{\theta}}}\tilde{\mathcal{L}}_{\mathcal{F}}}|}$ (the one with the ``extra fields'') at each iteration (see Figure~\ref{fig:abla_ext}). For both PDEs, we find that the models with the ``extra fields'' exhibit smaller gradient oscillations during training, which is consistent with our theoretical results in Section~\ref{sec:theo}. This is because the ``extra fields'' reduce the orders of the derivatives, which in turn avoids vanishing or exploding gradients.
\begin{figure}
\caption{The iteration history of the $\mathrm{MovVar}$ ratio.}
\label{fig:abla_poission}
\label{fig:abla_ns}
\label{fig:abla_ext}
\end{figure}
\begin{table}[!b]
\caption{The MAE / MAPE of $T$ on different $\beta_s$ and $\beta_t$}
\centering
\begin{small}
\begin{tabular}{lllll}
\toprule
& $\beta_t=1$ & $\beta_t=2$ & $\beta_t=5$ & $\beta_t=10$ \\
\midrule
$\beta_s=1$ & $0.3492$ / $48.21\%$ & $0.3539$ / $48.56\%$ & $0.3226$ / $44.64\%$ & $0.2889$ / $40.69\%$ \\
$\beta_s=2$ & $0.2800$ / $40.30\%$ & $0.1670$ / $26.16\%$ & $0.2140$ / $31.72\%$ & $0.1619$ / $25.20\%$ \\
$\beta_s=5$ & $0.1878$ / $28.68\%$ & $0.1195$ / $19.68\%$ & $0.0542$ / $10.35\%$ & $\textbf{0.0221}$ / $\textbf{5.10\%}$ \\
$\beta_s=10$ & $0.1896$ / $29.15\%$ & $0.1104$ / $18.70\%$ & $0.0517$ / $10.56\%$ & $0.0329$ / $8.15\%$ \\
\bottomrule
\end{tabular}
\end{small}
\label{tab:abla}
\end{table}
\subsubsection{Hyper-parameters of Hardness}
We repeat the experiment in Section~\ref{sec:exp_heat} with different combinations of the values of $\beta_s$ and $\beta_t$, while all other settings stay the same. Table~\ref{tab:abla} gives the testing results (where the MAE and MAPE are averaged over $t\in[0,1]$). In general, the accuracy increases as $\beta_s$ and $\beta_t$ become larger. The reason is that ``harder'' $\beta_s$ and $\beta_t$ cause the BCs to be better satisfied, which in turn makes the approximations more compliant with the physical constraints. However, excessively large $\beta_s$ and $\beta_t$ (we emphasize that they both act on the exponential, see Eq.~\eqref{eq_unified_hc}, \eqref{eq_determine_alpha}, and \eqref{eq:hc_time_2}) can lead to numerical instability, which may explain the performance degradation at the largest settings.
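To illustrate why increasing the ``hardness'' sharpens the boundary behavior but eventually risks numerical instability, consider a minimal sketch of an exponential boundary weight $\exp(-\alpha\, l(\bm{x}))$. Purely for illustration we take $\alpha = \exp(\beta_s)$; the exact relation between $\alpha$ and $\beta_s$ used in the paper is the one in Eq.~\eqref{eq_determine_alpha}:

```python
import numpy as np

l = np.linspace(0.0, 1.0, 101)          # distance to a boundary piece

for beta_s in [1, 2, 5, 10]:
    # Illustrative assumption: alpha grows exponentially in beta_s.
    w = np.exp(-np.exp(beta_s) * l)     # boundary-term weight
    # Fraction of the domain where the boundary term still dominates.
    print(beta_s, np.mean(w > 0.5))     # shrinks as beta_s grows

# Very large beta_s pushes the weight toward a hard 0/1 step and the
# intermediate values toward float underflow -- the instability noted above.
print(np.exp(-np.exp(50) * l[1]))       # underflows to 0.0
```

A larger $\beta_s$ thus confines the boundary term to a thinner layer around $\gamma_i$ (enforcing the BC more strictly), at the cost of near-degenerate, step-like weights in floating point.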
\section{Conclusion}\label{sec:conclusion}
In this paper, we develop a unified hard-constraint framework for solving geometrically complex PDEs. With the help of the ``extra fields'', we reformulate the PDEs and find the general solutions of the Dirichlet, Neumann, and Robin BCs. Based on this derivation, we propose a general formula for the hard-constraint ansatz which is applicable to time-dependent, multi-boundary, and high-dimensional cases. Our theoretical analysis further demonstrates that the reformulation benefits training stability. Extensive experiments show that our method achieves state-of-the-art performance on real-world geometrically complex as well as high-dimensional PDEs, and that our theoretical results hold for general PDEs. One limitation is that we only consider the three most commonly used BCs; future work may extend this framework to more general BCs.
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{See Section~\ref{sec:conclusion}.}
\item Did you discuss any potential negative societal impacts of your work?
\answerYes{See Appendix~\ref{sec_a11}.}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerYes{See Section~\ref{sec:theo}, \modify{Appendix~\ref{sec_a3}, \ref{sec_a4}, and \ref{sec_a7}.}}
\item Did you include complete proofs of all theoretical results?
\answerYes{See \modify{Appendix~\ref{sec_a3}, \ref{sec_a4}, and \ref{sec_a7}}.}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{\modify{See the supplementary material.}}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{See Appendix~\ref{sec_a8}.}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{\modify{See Appendix~\ref{sec_a10}.}}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{See Appendix~\ref{sec_a8}.}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{See Section~\ref{sec:exp_ns}.}
\item Did you mention the license of the assets?
\answerNA{}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerNA{}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerYes{See Section~\ref{sec:exp_ns}.}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{No such information in the data.}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\appendix
\renewcommand{\theequation}{A\arabic{equation}}
\renewcommand{\thefigure}{A\arabic{figure}}
\section{Appendix}
\subsection{A Counter Example of a Basis of the Null Space}\label{sec_a1}
We consider a special case of Eq.~(9) with $a_i(\bm{x})\equiv 0$, $b_i(\bm{x})\equiv 1$, $g_i(\bm{x})\equiv 0$, and dimension $d=3$, so that $\bm{n}(\bm{x}) = (n_1(\bm{x}), n_2(\bm{x}), n_3(\bm{x}))$ and the BC becomes
\begin{equation}
n_1(\bm{x})p_{j1}(\bm{x}) + n_2(\bm{x})p_{j2}(\bm{x}) + n_3(\bm{x})p_{j3}(\bm{x}) = 0,\qquad\bm{x}\in\gamma_i,\qquad \forall i=1,\dots,m_j.
\end{equation}
A counterexample of $\bm{B}(\bm{x})$ is given by
\begin{equation}
\label{eq:couter_example}
\begin{aligned}
\bm{B}(\bm{x}) &= [\bm{\beta}_1(\bm{x}), \bm{\beta}_2(\bm{x})],\\
\bm{\beta}_1(\bm{x}) &= [n_2(\bm{x}), -n_1(\bm{x}), 0]^\top,\\
\bm{\beta}_2(\bm{x}) &= [n_3(\bm{x}), 0, -n_1(\bm{x})]^\top.
\end{aligned}
\end{equation}
One can verify that the above $\bm{B}(\bm{x})$ is a basis of the null space if $n_1(\bm{x})\neq 0$ for all $\bm{x} \in \gamma_i$. For the special case where $\gamma_i$ is a plane parallel to the $x$-axis, however, we have $n_1(\bm{x})\equiv 0$ on $\gamma_i$. In this case, $\bm{\beta}_1(\bm{x})$ and $\bm{\beta}_2(\bm{x})$ are no longer linearly independent and cannot represent all possible solutions for $(p_{j1}(\bm{x}),p_{j2}(\bm{x}),p_{j3}(\bm{x}))$. Therefore, Eq.~\eqref{eq:couter_example} is not an admissible choice for $\bm{B}(\bm{x})$.
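This failure mode is easy to check numerically (a minimal sketch; the unit normal below, with $n_1=0$, is a hypothetical instance of a plane parallel to the $x$-axis):

```python
import numpy as np

# Normal of a plane parallel to the x-axis: first component vanishes.
n1, n2, n3 = 0.0, 0.6, 0.8              # hypothetical unit normal, n1 = 0

beta1 = np.array([n2, -n1, 0.0])        # the two columns of the proposed B(x)
beta2 = np.array([n3, 0.0, -n1])
B = np.stack([beta1, beta2], axis=1)

# Both columns do lie in Null(n^T) ...
assert abs(np.array([n1, n2, n3]) @ beta1) < 1e-12
assert abs(np.array([n1, n2, n3]) @ beta2) < 1e-12
# ... but the null space is 2-dimensional while B has rank 1 here,
# so the columns cannot span it.
print(np.linalg.matrix_rank(B))         # 1
```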
\subsection{A Basis of the Null Space in Low Dimensions}\label{sec_a2}
Let $\tilde{\bm{n}}=(a_i,b_i\bm{n})\big/\sqrt{a_i^2+b_i^2}$, $\tilde{g}_i=g_i\big/\sqrt{a_i^2+b_i^2}$, and $\tilde{\bm{p}}_j=(u_j,\bm{p}_j)$. Eq.~(9) is equivalent to
\begin{equation}\label{eq:general_bc_equiv}
\tilde{\bm{n}}(\bm{x})\cdot\tilde{\bm{p}}_j(\bm{x}) = \tilde{g}_i(\bm{x}),\qquad\bm{x}\in\gamma_i,\qquad \forall i=1,\dots,m_j.
\end{equation}
For $d=1$, we can rewrite Eq.~\eqref{eq:general_bc_equiv} as (the dimension of $\tilde{\bm{p}}_j$ is $d+1$)
\begin{equation}
\tilde{n}_1(\bm{x})\tilde{p}_{j1}(\bm{x}) + \tilde{n}_2(\bm{x})\tilde{p}_{j2}(\bm{x}) = \tilde{g}_i(\bm{x}),\qquad\bm{x}\in\gamma_i,\qquad \forall i=1,\dots,m_j.
\end{equation}
The following basis is acceptable:
\begin{equation}\label{eq:accept_example_2}
\bm{B}(\bm{x})=[\tilde{n}_2(\bm{x}),-\tilde{n}_1(\bm{x})]^\top,
\end{equation}
since $\bm{B}(\bm{x})=0\Leftrightarrow \tilde{\bm{n}}(\bm{x})=0$, and the latter contradicts the fact that $\tilde{\bm{n}}\cdot\tilde{\bm{n}}=1$. Then, we can use $\bm{B}$ to construct the general solution $\tilde{\bm{p}}_j^{\gamma_i}$ under $d=1$.
For $d=2$, Eq.~\eqref{eq:general_bc_equiv} becomes
\begin{equation}
\tilde{n}_1(\bm{x})\tilde{p}_{j1}(\bm{x}) + \tilde{n}_2(\bm{x})\tilde{p}_{j2}(\bm{x}) + \tilde{n}_3(\bm{x})\tilde{p}_{j3}(\bm{x})= \tilde{g}_i(\bm{x}),\qquad\bm{x}\in\gamma_i,\qquad \forall i=1,\dots,m_j.
\end{equation}
An acceptable $\bm{B}(\bm{x})$ is given by
\begin{equation}
\label{eq:accept_example}
\begin{aligned}
\bm{B}(\bm{x}) &= [\bm{\beta}_1(\bm{x}), \bm{\beta}_2(\bm{x}), \bm{\beta}_3(\bm{x})],\\
\bm{\beta}_1(\bm{x}) &= [0, \tilde{n}_3(\bm{x}), -\tilde{n}_2(\bm{x})]^\top,\\
\bm{\beta}_2(\bm{x}) &= [-\tilde{n}_3(\bm{x}), 0, \tilde{n}_1(\bm{x})]^\top,\\
\bm{\beta}_3(\bm{x}) &= [\tilde{n}_2(\bm{x}), -\tilde{n}_1(\bm{x}), 0]^\top.
\end{aligned}
\end{equation}
We note that $\bm{\beta}_1(\bm{x}), \bm{\beta}_2(\bm{x}), \bm{\beta}_3(\bm{x})$ all lie in the null space and $\mathrm{rank}(\bm{B}(\bm{x}))=2$. Hence $\bm{B}(\bm{x})$ contains a basis of the null space, which can be used to construct the general solution $\tilde{\bm{p}}_j^{\gamma_i}$ for $d=2$.
\subsection{Explanation for the General Solution}\label{sec_a3}
We first show how to find an admissible expression of $\bm{B}(\bm{x})$ in arbitrary dimensions with respect to Eq.~\eqref{eq:general_bc_equiv}, which is equivalent to the original formulation of the BC (see Eq.~(9)). We orthogonalize each vector of the standard basis against $\tilde{\bm{n}}$ (whose dimension is $d+1$) via Gram–Schmidt to get
\begin{equation}\label{eq_basis}
\bm{\beta}_k(\bm{x}) = \bm{e}_k - \frac{\bm{e}_k\cdot\tilde{\bm{n}}(\bm{x})}{\tilde{\bm{n}}(\bm{x})\cdot\tilde{\bm{n}}(\bm{x})}\tilde{\bm{n}}(\bm{x})=\bm{e}_k-\big(\bm{e}_k\cdot\tilde{\bm{n}}(\bm{x})\big)\tilde{\bm{n}}(\bm{x}),\qquad k=1,\dots,d+1,
\end{equation}
where $[\bm{e}_1,\dots,\bm{e}_{d+1}]=\bm{I}_{d+1}$; clearly all $\bm{\beta}_k(\bm{x}),k=1,\dots,d+1,$ lie in $\mathrm{Null}(\tilde{\bm{n}}^\top)$. We set $\bm{B}(\bm{x})=[\bm{\beta}_1(\bm{x}),\dots,\bm{\beta}_{d+1}(\bm{x})]=\bm{I}_{d+1}-\tilde{\bm{n}}(\bm{x})\tilde{\bm{n}}(\bm{x})^\top$. Furthermore, we can prove that $\mathrm{rank}(\bm{B}(\bm{x}))=d$ for all $\bm{x} \in\gamma_i$ (see Lemma~\ref{lemma_a_1}). Therefore, for every $\bm{x}\in\gamma_i$, $\bm{B}(\bm{x})$ contains a basis of $\mathrm{Null}(\tilde{\bm{n}}^\top)$, and we consider such a $\bm{B}(\bm{x})$ to be an ideal choice for the general solution $\tilde{\bm{p}}_j^{\gamma_i}$.
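This construction, together with the rank claim and the Householder argument of Lemma~\ref{lemma_a_1}, can be checked numerically (a minimal sketch for $d=3$ with a random unit vector playing the role of $\tilde{\bm{n}}$):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
n_tilde = rng.normal(size=d + 1)
n_tilde /= np.linalg.norm(n_tilde)       # unit normal, as in the text

# B = I - n n^T: Gram-Schmidt of each standard basis vector against n.
B = np.eye(d + 1) - np.outer(n_tilde, n_tilde)

assert np.allclose(B @ n_tilde, 0)                  # columns lie in Null(n^T)
assert np.linalg.matrix_rank(B) == d                # rank(B) = d (Lemma A.1)

H = np.eye(d + 1) - 2 * np.outer(n_tilde, n_tilde)  # Householder matrix
assert np.allclose(H.T @ H, np.eye(d + 1))          # H is orthogonal
print("rank(B) =", np.linalg.matrix_rank(B))
```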
\begin{lemma}\label{lemma_a_1}
$\mathrm{rank}(\bm{B}(\bm{x}))=d$ holds for all $\bm{x} \in\gamma_i$, where $\bm{B}(\bm{x})=\bm{I}_{d+1}-\tilde{\bm{n}}(\bm{x})\tilde{\bm{n}}(\bm{x})^\top$.
\end{lemma}
\begin{proof}
For all $\bm{x}\in\gamma_i$, we have known that $\bm{B} = \bm{I}_{d+1}-\tilde{\bm{n}}\tilde{\bm{n}}^\top$, where $\tilde{\bm{n}}\cdot\tilde{\bm{n}}=\tilde{\bm{n}}^\top\tilde{\bm{n}}=1$, and
\begin{equation}
\bm{B}\tilde{\bm{n}}=(\bm{I}_{d+1}-\tilde{\bm{n}}\tilde{\bm{n}}^\top)\tilde{\bm{n}}=\tilde{\bm{n}}-\tilde{\bm{n}}\tilde{\bm{n}}^\top\tilde{\bm{n}}=\tilde{\bm{n}}-\tilde{\bm{n}}=\bm{0}.
\end{equation}
Hence, $\mathrm{rank}(\bm{B})\leq d$. Besides, we notice that $\bm{H}=\bm{I}_{d+1}-2\tilde{\bm{n}}\tilde{\bm{n}}^\top$ is a Householder matrix, which is an invertible matrix, since
\begin{equation}
\bm{H}^\top \bm{H}=(\bm{I}_{d+1}-2\tilde{\bm{n}}\tilde{\bm{n}}^\top)^2=\bm{I}_{d+1}-4\tilde{\bm{n}}\tilde{\bm{n}}^\top+4\tilde{\bm{n}}\tilde{\bm{n}}^\top\tilde{\bm{n}}\tilde{\bm{n}}^\top=\bm{I}_{d+1}-4\tilde{\bm{n}}\tilde{\bm{n}}^\top+4\tilde{\bm{n}}\tilde{\bm{n}}^\top=\bm{I}_{d+1}.
\end{equation}
So $\mathrm{rank}(\bm{H})=d+1$, and we have
\begin{equation}
d+1=\mathrm{rank}(\bm{I}_{d+1}-2\tilde{\bm{n}}\tilde{\bm{n}}^\top)\leq \mathrm{rank}(\bm{I}_{d+1}-\tilde{\bm{n}}\tilde{\bm{n}}^\top) + \mathrm{rank}(\tilde{\bm{n}}\tilde{\bm{n}}^\top)=\mathrm{rank}(\bm{B})+1,
\end{equation}
which gives $d\leq\mathrm{rank}(\bm{B})$. Therefore, $\mathrm{rank}(\bm{B})=d$.
\end{proof}
Finally, we show that the general solution in Eq.~(10) satisfies the BC in Eq.~\eqref{eq:general_bc_equiv}.
\begin{equation}
\begin{aligned}
\tilde{\bm{n}}(\bm{x}) \cdot \tilde{\bm{p}}_j^{\gamma_i}(\bm{x}) &= \tilde{\bm{n}}(\bm{x}) \cdot \bm{B}(\bm{x}) \mathrm{NN}_j^{\gamma_i}(\bm{x}) + \tilde{\bm{n}}(\bm{x}) \cdot \tilde{\bm{n}}(\bm{x}) \tilde{g}_i(\bm{x})\\
&= \tilde{\bm{n}}(\bm{x}) \cdot \left( \bm{I}_{d+1}-\tilde{\bm{n}}(\bm{x})\tilde{\bm{n}}(\bm{x})^\top \right) \mathrm{NN}_j^{\gamma_i}(\bm{x}) + \tilde{g}_i(\bm{x})\\
&= \tilde{g}_i(\bm{x}),
\end{aligned}
\end{equation}
where we omit the trainable parameters for simplicity. The discussion of $\bm{B}(\bm{x})$ in the low-dimensional cases (i.e., $d=1$ and $d=2$, see Appendix~\ref{sec_a2}) is similar and is left to the reader.
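The computation above holds for arbitrary network outputs, which can be verified pointwise (a minimal sketch with a random unit $\tilde{\bm{n}}$, a hypothetical boundary value $\tilde{g}_i$, and random vectors standing in for $\mathrm{NN}_j^{\gamma_i}(\bm{x})$):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
n_tilde = rng.normal(size=d + 1)
n_tilde /= np.linalg.norm(n_tilde)
g_tilde = 2.5                                    # hypothetical boundary value

B = np.eye(d + 1) - np.outer(n_tilde, n_tilde)

# Whatever the network outputs, the general solution satisfies the BC.
for _ in range(5):
    nn_out = rng.normal(size=d + 1)              # stands in for NN_j^{gamma_i}(x)
    p_tilde = B @ nn_out + n_tilde * g_tilde
    assert np.isclose(n_tilde @ p_tilde, g_tilde)
print("BC satisfied for arbitrary network outputs")
```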
\subsection{Theoretical Guarantee of the Constructed Ansatz}\label{sec_a4}
In Appendix~\ref{sec_a3}, we have demonstrated that $\bm{B}(\bm{x})$ contains a basis of the null space of the BC for $\forall\bm{x} \in \gamma_i$ and the general solution in Eq.~(10) satisfies the corresponding BC. In this subsection, we will show that our constructed ansatz in Eq.~(11) is theoretically correct. We first prove that the ansatz in Eq.~(11) satisfies all the BCs under the following assumptions.
\begin{assumption}\label{ass_a_0}
The problem domain $\Omega$ is bounded.
\end{assumption}
\begin{assumption}\label{ass_a_1}
The shortest distance between $\gamma_1,\dots,\gamma_{m_j}$ is greater than zero for $j=1,\dots,n$.
\end{assumption}
\begin{assumption}\label{ass_a_2}
All the extended distance functions $l^{\partial\Omega},l^{\gamma_i},i=1,\dots,m_j$ are continuous and satisfy $\min_{\bm{x} \in \partial\Omega \setminus \gamma_i} l^{\gamma_i}(\bm{x}) \geq C_i$ for $i=1,\dots,m_j$ and $j=1,\dots,n$, where each $C_i$ is a positive constant.
\end{assumption}
\begin{theorem}
$\forall \epsilon > 0$, there exists $\beta^0_s\in \mathbb{R}$, such that
\begin{equation}
\left| \tilde{\bm{n}} (\bm{x}) \cdot (\hat{u}_j, \hat{\bm{p}}_j) - \tilde{g}_i(\bm{x}) \right| < \epsilon,
\end{equation}
holds for all $\beta_s > \beta^0_s$, $\bm{x}\in \gamma_i$, $i=1,\dots,m_j$, $j=1,\dots,n$, where $\tilde{\bm{n}}=(a_i,b_i\bm{n})\big/\sqrt{a_i^2+b_i^2}$, $\tilde{g}_i=g_i\big/\sqrt{a_i^2+b_i^2}$, and $\bm{B}(\bm{x})=\bm{I}_{d+1}-\tilde{\bm{n}}(\bm{x})\tilde{\bm{n}}(\bm{x})^\top$.
\end{theorem}
\begin{proof}
For any $\bm{x}\in \gamma_i$, we have $l^{\partial \Omega}(\bm{x}) = 0$ according to the definition of the extended distance functions. Thus, $(\hat{u}_j, \hat{\bm{p}}_j)$ is now equal to
\begin{equation}
(\hat{u}_j,\hat{\bm{p}}_j) = \sum_{k=1}^{m_j} \exp{\big[-\alpha_k l^{\gamma_k}(\bm{x})\big]}\tilde{\bm{p}}_j^{\gamma_k}(\bm{x}),
\end{equation}
where we omit the trainable parameters. Then, according to Assumptions~\ref{ass_a_0} $\sim$ \ref{ass_a_2}, we can choose a sufficiently large $\beta^i_s$ (see Eq.~(12) for the relationship between $\alpha_i$ and $\beta_s$), such that
\begin{equation}
\begin{aligned}
\left| \tilde{\bm{n}} (\bm{x}) \cdot (\hat{u}_j, \hat{\bm{p}}_j) - \tilde{g}_i(\bm{x}) \right| &= \left| \tilde{\bm{n}} (\bm{x}) \cdot \left( \sum_{k=1}^{m_j} \exp{\big[-\alpha_k l^{\gamma_k}(\bm{x})\big]}\tilde{\bm{p}}_j^{\gamma_k}(\bm{x}) \right) - \tilde{g}_i(\bm{x}) \right|\\
&\leq \left| \tilde{\bm{n}} (\bm{x}) \cdot \exp{\big[-\alpha_i l^{\gamma_i}(\bm{x})\big]}\tilde{\bm{p}}_j^{\gamma_i}(\bm{x}) - \tilde{g}_i(\bm{x}) \right| \\
&\qquad+ \left| \tilde{\bm{n}} (\bm{x}) \cdot \sum_{k\neq i} \exp{\big[-\alpha_k l^{\gamma_k}(\bm{x})\big]}\tilde{\bm{p}}_j^{\gamma_k}(\bm{x}) \right|\\
&\leq \left| \tilde{\bm{n}} (\bm{x}) \cdot \tilde{\bm{p}}_j^{\gamma_i}(\bm{x}) - \tilde{g}_i(\bm{x}) \right| + \left| \sum_{k\neq i} \exp{\big[-\alpha_k l^{\gamma_k}(\bm{x})\big]}\tilde{\bm{p}}_j^{\gamma_k}(\bm{x}) \right|\\
&= \left| \sum_{k\neq i} \exp{\big[-\alpha_k l^{\gamma_k}(\bm{x})\big]}\tilde{\bm{p}}_j^{\gamma_k}(\bm{x}) \right|\\
&< \epsilon,
\end{aligned}
\end{equation}
where we note that $l^{\gamma_i}(\bm{x})=0$ and $l^{\gamma_k}(\bm{x})>0,\forall k\neq i$ for all $\bm{x}\in\gamma_i$. Let $\beta^0_s = \max \{\beta^i_s \mid i=1,\dots,m_j\}$, then according to the arbitrariness of $j$, we conclude that the theorem holds.
\end{proof}
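The mechanism of this proof can be seen in a toy computation: at a point on $\gamma_1$ (where $l^{\gamma_1}=0$ and the ansatz reproduces $\tilde{\bm{p}}_j^{\gamma_1}$ exactly), the only BC error comes from the other pieces' terms $\exp[-\alpha_k l^{\gamma_k}(\bm{x})]\,\tilde{\bm{p}}_j^{\gamma_k}(\bm{x})$, which Assumption~\ref{ass_a_2} bounds away from the boundary and which vanish as the hardness grows. The numbers below are hypothetical stand-ins:

```python
import numpy as np

# At x in gamma_1: l1 = 0, and gamma_2 is at least C = 0.3 away (Assumption A.2).
l2 = 0.3                       # hypothetical distance from x to gamma_2
p2_norm = 5.0                  # hypothetical bound on the gamma_2 term at x

for alpha in [1.0, 10.0, 100.0]:
    err = np.exp(-alpha * l2) * p2_norm   # the only surviving BC error term
    print(alpha, err)                     # decays exponentially in alpha

assert np.exp(-100.0 * l2) * p2_norm < 1e-12
```

Choosing $\beta_s$ (and hence every $\alpha_k$) large enough makes this residual smaller than any prescribed $\epsilon$, which is exactly the statement of the theorem.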
Next, we will prove that our ansatz can approximate the solution to the PDEs with arbitrarily low errors under the following assumptions in addition to Assumptions~\ref{ass_a_0} $\sim$ \ref{ass_a_2}.
\begin{assumption}\label{ass_a_3}
The solution to the PDEs $\bm{u}(\bm{x})$ is unique, bounded, and at least first-order continuous by element.
\end{assumption}
\begin{assumption}\label{ass_a_4}
$a_i(\bm{x})$, $b_i(\bm{x})$, and $g_i(\bm{x})$ are continuous (hence $\tilde{g}_i(\bm{x})$ is continuous, too) in $\gamma_i$ for $i=1,\dots,m_j$, $j=1,\dots,n$.
\end{assumption}
\begin{assumption}\label{ass_a_5}
Since $\bm{B}(\bm{x})=\bm{I}_{d+1}-\tilde{\bm{n}}(\bm{x})\tilde{\bm{n}}(\bm{x})^\top$ is a real symmetric matrix, we can perform an orthogonal diagonalization $\bm{B}(\bm{x}) = \bm{P}(\bm{x})^\top \Lambda(\bm{x}) \bm{P}(\bm{x})$, where $\Lambda(\bm{x})=\mathrm{diag}(\lambda_1(\bm{x}),\dots,\lambda_d(\bm{x}), 0)$, $\lambda_1(\bm{x})\geq\cdots\geq\lambda_d(\bm{x})>0$. We assume that $\tilde{\bm{n}}(\bm{x})$, $\bm{P}(\bm{x})$, and $\Lambda(\bm{x})$ are piece-wise continuous by element in $\gamma_i$ for $i=1,\dots,m_j$, $j=1,\dots,n$.
\end{assumption}
We first prove the following lemma.
\begin{lemma}\label{lemma_a_6}
$\forall \epsilon > 0$, there exists $\bm{\theta}^{\gamma_i}_j\in {\Theta}^{\gamma_i}_j$, such that
\begin{equation}
\left \| \tilde{\bm{p}}^{\gamma_i}_j(\bm{x}; \bm{\theta}^{\gamma_i}_j)-\bm{q}(\bm{x}) \right\|_1 < \epsilon,
\end{equation}
holds for all $\bm{x}\in \gamma_i$ if $\bm{q}(\bm{x})$ is continuous in $\gamma_i$ and satisfies the BC (i.e., $\tilde{\bm{n}}(\bm{x})\cdot \bm{q}(\bm{x})=\tilde{g}_i(\bm{x}),\forall \bm{x} \in \gamma_i$), where ${\Theta}^{\gamma_i}_j$ is the parameter space of the neural network ${\mathrm{NN}}^{\gamma_i}_j$, $\| \cdot \|_1$ is the 1-norm of matrices (operator norm), and $\tilde{\bm{p}}^{\gamma_i}_j$ as well as $\bm{q}$ are both of dimension $d+1$. The above conclusion holds for all $i=1,\dots,m_j$, $j=1,\dots,n$.
\end{lemma}
\begin{proof}
From Eq.~\eqref{eq_basis} and Lemma~\ref{lemma_a_1}, we know that $\bm{B}(\bm{x})$ contains a basis of $\mathrm{Null}(\tilde{\bm{n}}^\top)$. Since $\bm{q}(\bm{x})$ satisfies the BC, we can represent it as
\begin{equation}\label{eq_represent_q}
\bm{q}(\bm{x}) = \bm{B}(\bm{x}) \bm{r}(\bm{x}) + \tilde{\bm{n}}(\bm{x}) \tilde{g}_i(\bm{x}).
\end{equation}
Then we will show that there exists a piece-wise continuous choice of $\bm{r}(\bm{x})$. We rewrite Eq.~\eqref{eq_represent_q} as
\begin{equation}\label{eq_represent_q_2}
\bm{q}(\bm{x}) = \bm{P}(\bm{x})^\top \Lambda(\bm{x}) \bm{P}(\bm{x}) \bm{r}(\bm{x}) + \tilde{\bm{n}}(\bm{x}) \tilde{g}_i(\bm{x}).
\end{equation}
Since $\bm{B}(\bm{x})$ has $d+1$ column vectors, which is greater than the dimension of $\mathrm{Null}(\tilde{\bm{n}}^\top)$ (i.e., $d$), the choice of $\bm{r}(\bm{x})$ is not unique. We can choose a particular $\bm{r}(\bm{x})$ such that the last element of $\bm{P}(\bm{x})\bm{r}(\bm{x})$ is zero (i.e., $\bm{P}(\bm{x})\bm{r}(\bm{x})=[\dots,0]^\top$). Next, we continue with the equivalent transformation of Eq.~\eqref{eq_represent_q_2}.
\begin{equation}
\begin{aligned}
&& \bm{P}(\bm{x})^\top \Lambda(\bm{x}) \bm{P}(\bm{x}) \bm{r}(\bm{x}) + \tilde{\bm{n}}(\bm{x}) \tilde{g}_i(\bm{x})&=\bm{q}(\bm{x}) ,\\
\Longleftrightarrow&& \quad \bm{P}(\bm{x})^\top \Lambda(\bm{x}) \bm{P}(\bm{x}) \bm{r}(\bm{x}) &= \bm{q}(\bm{x})- \tilde{\bm{n}}(\bm{x}) \tilde{g}_i(\bm{x}),\\
\Longleftrightarrow&& \quad \Lambda(\bm{x}) \bm{P}(\bm{x}) \bm{r}(\bm{x}) &= \bm{P}(\bm{x})\left(\bm{q}(\bm{x})- \tilde{\bm{n}}(\bm{x}) \tilde{g}_i(\bm{x})\right) ,\\
\Longleftrightarrow&& \quad \mathrm{diag}(1,\dots,1,0) \bm{P}(\bm{x}) \bm{r}(\bm{x}) &= \Lambda'(\bm{x}) \bm{P}(\bm{x})\left(\bm{q}(\bm{x})- \tilde{\bm{n}}(\bm{x}) \tilde{g}_i(\bm{x})\right),\\
\Longleftrightarrow&& \quad \bm{P}(\bm{x}) \bm{r}(\bm{x}) &= \Lambda'(\bm{x}) \bm{P}(\bm{x})\left(\bm{q}(\bm{x})- \tilde{\bm{n}}(\bm{x}) \tilde{g}_i(\bm{x})\right),
\end{aligned}
\end{equation}
where $\Lambda'(\bm{x})=\mathrm{diag}(1/\lambda_1(\bm{x}),\dots,1/\lambda_d(\bm{x}), 0)$. The last equivalence holds because the last element of $\bm{P}(\bm{x}) \bm{r}(\bm{x})$ is always zero. Combining the above formula with Assumptions~\ref{ass_a_4} and \ref{ass_a_5}, we have that $\bm{P}(\bm{x}) \bm{r}(\bm{x})$ is piece-wise continuous by element. Noticing that $\bm{r}(\bm{x}) = \bm{P}(\bm{x})^\top \bm{P}(\bm{x}) \bm{r}(\bm{x})$, we conclude that the $\bm{r}(\bm{x})$ we have chosen is also piece-wise continuous by element.
We notice that
\begin{equation}
\begin{aligned}
\left \| \bm{B}(\bm{x}) \right\|_1 &= \left \| \bm{I}_{d+1}-\tilde{\bm{n}}(\bm{x})\tilde{\bm{n}}(\bm{x})^\top \right\|_1\\
&\le \left \| \bm{I}_{d+1} \right\|_1 + \left \| \tilde{\bm{n}}(\bm{x})\tilde{\bm{n}}(\bm{x})^\top \right\|_1\\
&\le 1 + d + 1\\
&= d + 2.
\end{aligned}
\end{equation}
According to the Universal Approximation of neural networks \cite{llanas2008constructive}, $\forall\epsilon>0$, there exists $\bm{\theta}^{\gamma_i}_j\in {\Theta}^{\gamma_i}_j$, such that
\begin{equation}
\left \| \mathrm{NN}^{\gamma_i}_j(\bm{x};\bm{\theta}^{\gamma_i}_j) - \bm{r}(\bm{x}) \right\|_1 < \frac{\epsilon}{d+2},
\end{equation}
holds for all $\bm{x}\in \gamma_i$. Therefore,
\begin{equation}
\begin{aligned}
\left \| \tilde{\bm{p}}^{\gamma_i}_j(\bm{x}; \bm{\theta}^{\gamma_i}_j)-\bm{q}(\bm{x}) \right\|_1 &= \left \| \bm{B}(\bm{x})\left(\mathrm{NN}^{\gamma_i}_j(\bm{x};\bm{\theta}^{\gamma_i}_j) - \bm{r}(\bm{x}) \right)\right\|_1\\
&\le \left \| \bm{B}(\bm{x}) \right\|_1 \left \| \mathrm{NN}^{\gamma_i}_j(\bm{x};\bm{\theta}^{\gamma_i}_j) - \bm{r}(\bm{x}) \right\|_1\\
&< \epsilon.
\end{aligned}
\end{equation}
According to the arbitrariness of $i$ and $j$, we conclude that the lemma holds.
\end{proof}
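The particular $\bm{r}(\bm{x})$ constructed in the proof, $\bm{r} = \bm{P}^\top \Lambda' \bm{P}(\bm{q}- \tilde{\bm{n}} \tilde{g}_i)$, is exactly the Moore–Penrose pseudoinverse of $\bm{B}$ applied to $\bm{q}- \tilde{\bm{n}} \tilde{g}_i$. A pointwise numerical check (random unit $\tilde{\bm{n}}$, hypothetical $\tilde{g}_i$, and a $\bm{q}$ built to satisfy the BC):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
n = rng.normal(size=d + 1)
n /= np.linalg.norm(n)
g = 1.7                                   # hypothetical boundary value

B = np.eye(d + 1) - np.outer(n, n)

# Build a q that satisfies the BC: n . q = g.
q = rng.normal(size=d + 1)
q = q - n * (n @ q) + n * g
assert np.isclose(n @ q, g)

# The r chosen in the proof equals pinv(B) applied to (q - n g).
r = np.linalg.pinv(B) @ (q - n * g)
assert np.allclose(B @ r + n * g, q)      # the representation of q is recovered
print("q recovered from B r + n g")
```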
Finally, we state the following theorem.
\begin{theorem}
$\forall \epsilon > 0$, there exists $\beta_s\in\mathbb{R}$, $\bm{\theta}^{\mathrm{main}}_j\in\Theta^{\mathrm{main}}_j$, $\bm{\theta}^{\gamma_i}_j\in {\Theta}^{\gamma_i}_j,i=1,\dots,m_j$, such that
\begin{equation}\label{eq_final_theo}
\left \| (\hat{u}_j,\hat{\bm{p}}_j)- (u_j, \nabla u_j) \right\|_1 < \epsilon,
\end{equation}
holds for all $\bm{x}\in \Omega\cup\partial\Omega$, $j=1,\dots,n$, where ${\Theta}_{*}$ is the parameter space of the corresponding neural network, $\| \cdot \|_1$ is the 1-norm. The ground truth solution to the PDEs is $\bm{u}(\bm{x})=(u_1(\bm{x}),\dots,u_n(\bm{x}))$.
\end{theorem}
\begin{proof}
For $\bm{x}\in\gamma_i$, $i=1,\dots,m_j$, $(u_j, \nabla u_j)$ is continuous (according to Assumption~\ref{ass_a_3}) and satisfies the BC (which the solution needs to meet). From Lemma~\ref{lemma_a_6}, the definition of $(\hat{u}_j,\hat{\bm{p}}_j)$ in Eq.~(11), and Assumptions~\ref{ass_a_0} $\sim$ \ref{ass_a_2}, we can find $\bm{\theta}^{\gamma_i}_j\in {\Theta}^{\gamma_i}_j$ and a large enough $\beta_s^i$ such that Eq.~\eqref{eq_final_theo} holds for all $\bm{x}\in\gamma_i$.
Then we fix $\beta_s = \max \{\beta^i_s \mid i=1,\dots,m_j\}$ and the $\bm{\theta}^{\gamma_i}_j\in {\Theta}^{\gamma_i}_j, i=1,\dots,m_j$ determined above for the boundary. From Assumption~\ref{ass_a_0}, we have $|l^{\partial \Omega}(\bm{x})| < C$ for all $\bm{x}\in\Omega$, where $C$ is a positive constant. For $\bm{x}\in\Omega$, according to the Universal Approximation Theorem of neural networks \cite{chen1995universal}, there exists $\bm{\theta}^{\mathrm{main}}_j \in \Theta^{\mathrm{main}}_j$ such that
\begin{equation}
\left \| \frac{(u_j, \nabla u_j) - \sum_{i=1}^{m_j} \exp{\big[-\alpha_i l^{\gamma_i}(\bm{x})\big]}\tilde{\bm{p}}_j^{\gamma_i}(\bm{x};\bm{\theta}_j^{\gamma_i})}{l^{\partial\Omega}(\bm{x})} - \mathrm{NN}^{\mathrm{main}}_j(\bm{x};\bm{\theta}^{\mathrm{main}}_j)\right\|_1 < \frac{\epsilon}{C}.
\end{equation}
Therefore,
\begin{equation}
\begin{aligned}
&\left \| (\hat{u}_j,\hat{\bm{p}}_j)- (u_j, \nabla u_j) \right\|_1\\
=& \bigg \| (u_j, \nabla u_j) - \sum_{i=1}^{m_j} \exp{\big[-\alpha_i l^{\gamma_i}(\bm{x})\big]}\tilde{\bm{p}}_j^{\gamma_i}(\bm{x};\bm{\theta}_j^{\gamma_i})
- l^{\partial\Omega}(\bm{x})\mathrm{NN}^{\mathrm{main}}_j(\bm{x};\bm{\theta}^{\mathrm{main}}_j) \bigg\|_1\\
=& \bigg \| \frac{(u_j, \nabla u_j) - \sum_{i=1}^{m_j} \exp{\big[-\alpha_i l^{\gamma_i}(\bm{x})\big]}\tilde{\bm{p}}_j^{\gamma_i}(\bm{x};\bm{\theta}_j^{\gamma_i})}{l^{\partial\Omega}(\bm{x})}
- \mathrm{NN}^{\mathrm{main}}_j(\bm{x};\bm{\theta}^{\mathrm{main}}_j)\bigg\|_1 \left| l^{\partial \Omega}(\bm{x}) \right|\\
<& \frac{\epsilon}{C}\cdot C = \epsilon.
\end{aligned}
\end{equation}
According to the arbitrariness of $j$, we have proven this theorem.
\end{proof}
Besides, we note that the above theorems extend readily to time-dependent cases (the ansatz is given in Appendix~\ref{sec_a6}); we do not discuss this separately here.
\subsection{Extension of the Parameter Functions in the BCs}\label{sec_a5}
In Eq.~(9), it is noted that $a_i$, $b_i$, $\bm{n}$, or $g_i$ may be defined only on $\gamma_i$. However, they appear in our ansatz (see Eq.~(10) and Eq.~(11)), which is defined on $\Omega\cup\partial\Omega$. We therefore need to extend their definitions smoothly to $\Omega\cup\partial\Omega$, using interpolation or approximation via neural networks. We consider the airfoil boundary (i.e., $\gamma_{\mathrm{airfoil}}$) in Section~5.3 as a motivating example.
\begin{figure}
\caption{Illustration of the extension from $\gamma_{\mathrm{airfoil}}$ to $\Omega\cup\partial\Omega$.}
\label{fig:airfoil_extension}
\end{figure}
Suppose $f(\bm{x})$ is defined only on $\gamma_{\mathrm{airfoil}}$; our task is to extend its definition to $\Omega\cup\partial\Omega$. As shown in Figure~\ref{fig:airfoil_extension}, we first place two reference points (i.e., $\bm{x}_0$ and $\bm{x}_1$) on the front and rear halves of the airfoil. Any $\bm{x}\in\Omega\cup\partial\Omega$ can then be expressed in polar coordinates with respect to $\bm{x}_0$ and $\bm{x}_1$, respectively, and we concatenate the two polar coordinates to form a new space. We perform interpolation and approximation in this new space because it characterizes the shape of the airfoil better than the original coordinates. Of course, many other coordinate transformations are possible; the one used here is only an example.
For the interpolation, we can sample several points on $\gamma_{\mathrm{airfoil}}$ to obtain the dataset $\{ ((\theta_0^{(i)}, \theta_1^{(i)}), f^{(i)}) \}_{i=1}^N$. For any $\bm{x}\in\Omega\cup\partial\Omega$, we generate the corresponding extended $f(\bm{x})$ by interpolating in this dataset. The interpolation method used here depends on the smoothness requirements of the ansatz. The number of reference points can also be changed; in our experiments, we found that a single reference point is enough.
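A concrete sketch of this interpolation with a single reference point (all coordinates and boundary data below are hypothetical stand-ins, and the periodic 1-D interpolation could be replaced by any smoother scheme):

```python
import numpy as np

# Hypothetical reference point x0 placed inside the airfoil: a function f
# known only at boundary samples is extended to any x by interpolating
# over the polar angle about x0, with periodic wrap-around.
x0 = np.array([0.5, 0.0])

theta_samples = np.linspace(-np.pi, np.pi, 240, endpoint=False)
f_samples = np.sin(theta_samples)              # stand-in boundary data

def f_extended(x):
    th = np.arctan2(x[1] - x0[1], x[0] - x0[0])
    return np.interp(th, theta_samples, f_samples,
                     period=2 * np.pi)         # periodic 1-D interpolation

print(f_extended(np.array([0.5, 1.0])))        # angle pi/2, approx sin(pi/2) = 1.0
```

With two reference points as in the text, the lookup key becomes the pair of angles $(\theta_0, \theta_1)$ and a 2-D interpolation scheme is used instead.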
Approximation via neural networks is a general method that does not require manual design. In this case, we sample several points on $\gamma_{\mathrm{airfoil}}$ to construct the dataset $\{ ((\phi_0(\bm{x}^{(i)}), \phi_1(\bm{x}^{(i)})), f^{(i)}) \}_{i=1}^N$, and then train a neural network on it, i.e., $\mathrm{NN}(\phi_0(\bm{x}^{(i)}), \phi_1(\bm{x}^{(i)}))\approx f^{(i)}$. For any $\bm{x}\in\Omega\cup\partial\Omega$, we take $\mathrm{NN}(\phi_0(\bm{x}), \phi_1(\bm{x}))$ as the corresponding extended $f(\bm{x})$. We could also train the neural network in the original space; however, experimental results show that training in the new space achieves better results, presumably because the complex geometry becomes smoother and easier to learn there.
It is worth noting that, in addition to the cases mentioned above, the extended distance functions $l(\bm{x})$ (here we omit the superscript; see Eq.~(4) for the definition) may also need to be handled similarly, since for complex geometries the distance function itself can be expensive to evaluate and we may want to replace it with a cheap surrogate model. The methods are analogous: approximating the distance function with a neural network, or constructing spline functions \citep{sheng2021pfnn}.
\subsection{The Hard-Constraint Framework for Time-dependent PDEs}\label{sec_a6}
In this section, we consider the following time-dependent PDEs
\begin{equation}
\mathcal{F}[\bm{u}(\bm{x}, t)]=\bm{0},\qquad\bm{x}=(x_1,\dots,x_d)\in\Omega,t\in(0,T],
\end{equation}
where $t$ is the temporal coordinate, and the other notations are the same as those in Section~3.1. For each $u_j, j=1,\dots,n$, we impose suitable boundary conditions (BCs)
\begin{equation}
a_i(\bm{x}, t)u_j+b_i(\bm{x},t)\big( \bm{n}(\bm{x})\cdot\nabla u_j \big) = g_i(\bm{x},t),\quad\bm{x}\in\gamma_i,t\in(0,T],\quad \forall i=1,\dots,m_j,
\end{equation}
and an initial condition (IC)
\begin{equation}
u_j(\bm{x}, 0) = f_j(\bm{x}),\quad\bm{x}\in\Omega.
\end{equation}
Following the pipeline described in Section~3.2, we can find the general solution $\tilde{\bm{p}}_j^{\gamma_i}(\bm{x},t)$ as
\begin{equation}
\tilde{\bm{p}}_j^{\gamma_i}=\bm{B}(\bm{x}, t)\mathrm{NN}_j^{\gamma_i}(\bm{x}, t) + \tilde{\bm{n}}(\bm{x},t)\tilde{g}_i(\bm{x},t),
\end{equation}
where $\tilde{\bm{n}}=(a_i,b_i\bm{n})\big/\sqrt{a_i^2+b_i^2}$, $\tilde{g}_i=g_i\big/\sqrt{a_i^2+b_i^2}$, $\mathrm{NN}_j^{\gamma_i}:\mathbb{R}^{d+1}\to\mathbb{R}^{d+1}$ is a neural network, and $\bm{B}(\bm{x}, t)=\bm{I}_{d+1}-\tilde{\bm{n}}(\bm{x}, t){\tilde{\bm{n}}(\bm{x},t)}^\top$. We omit the trainable parameters of the neural networks for brevity.
Finally, we can construct our ansatz $(\hat{u}_j,\hat{\bm{p}}_j)$ as
\begin{subequations}\label{eq_time_ansatz}
\begin{align}
({u}_j^\dagger,\hat{\bm{p}}_j) &= l^{\partial \Omega}(\bm{x}) \mathrm{NN}^{\mathrm{main}}_j(\bm{x},t) + \sum_{i=1}^{m_j} \exp{\big[-\alpha_i l^{\gamma_i}(\bm{x})\big]}\tilde{\bm{p}}_j^{\gamma_i}(\bm{x},t),&& \forall j=1,\dots,n\label{eq:hc_time},\\
\hat{u}_j &= {u}_j^\dagger(\bm{x}, t) \big(1-\exp{[-\beta_t t]}\big) + f_j(\bm{x}) \exp{[-\beta_t t]},&& \forall j=1,\dots,n,\label{eq:hc_time_2}
\end{align}
\end{subequations}
where ${u}_j^\dagger$ is an intermediate variable that incorporates hard constraints in the spatial dimensions, $\mathrm{NN}^{\mathrm{main}}_j:\mathbb{R}^{d+1}\to\mathbb{R}^{d+1}$ is the main neural network, $l^{\partial \Omega}, l^{\gamma_i},i=1,\dots,m_j$ are extended distance functions (see Eq.~(4)), $\alpha_i~(i=1,\dots,m_j)$ is determined in Eq.~(12), and $\beta_t\in\mathbb{R}$ is a hyper-parameter controlling the ``hardness'' of the constraint in the temporal domain.
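The temporal blending in Eq.~\eqref{eq:hc_time_2} can be sketched as follows (a minimal scalar version with hypothetical values; in practice $u_j^\dagger$ and $f_j$ are evaluated by the network and the IC function at each $(\bm{x},t)$):

```python
import numpy as np

def temporal_hard_constraint(u_dagger, f0, t, beta_t=5.0):
    """Blend the spatially hard-constrained field u^dagger with the initial
    condition f0 so that the ansatz equals f0 exactly at t = 0."""
    w = np.exp(-beta_t * t)
    return u_dagger * (1.0 - w) + f0 * w

# scalar toy check: exact IC at t = 0, approaching u^dagger for large t
u_d, f0 = 3.0, 1.0
at_t0 = temporal_hard_constraint(u_d, f0, t=0.0)
late  = temporal_hard_constraint(u_d, f0, t=10.0)
```

At $t=0$ the weight $e^{-\beta_t t}$ equals one, so the IC holds exactly; larger $\beta_t$ makes the transition to $u^\dagger$ faster.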
\subsection{Supplements to the Theoretical Analysis}\label{sec_a7}
In this section, we first give some supplements to the problem setting in Section~4. Then we present the proof of Theorem~4.1. Finally, we will characterize the mechanism described in Section~4 with another tool, the condition number.
\subsubsection{Supplements to the Problem Setting}\label{sec_a7_1}
As mentioned in Section~4, we consider the following 1D Poisson's equation
\begin{subequations}
\label{eq:theo_possion}
\begin{align}
\Delta u(x)&=-a^2\sin{ax},&&x\in(0,2\pi),\label{eq:theo_possion_1}\\
u(x)&=0,&&x=0\lor x=2\pi,
\end{align}
\end{subequations}
where $a\in\mathbb{R}$ and $u$ is the physical quantity of interest.
Here we use a single-layer neural network of width $K$ as our ansatz, i.e., $\hat{u}=\bm{c}^\top \sigma(\bm{w} x+\bm{b})$, where $\bm{c},\bm{w},\bm{b}\in\mathbb{R}^K$, $\sigma$ is an element-wise activation function (for simplicity, we take $\sigma$ as $\tanh$). To study the impact of the \textit{extra fields} alone, we train $\hat{u}$ in a soft-constrained manner. For ease of discussion, we consider the loss function in continuous form
\begin{equation}
\mathcal{L}(\bm{\theta})=\mathcal{L}_{\mathcal{F}}(\bm{\theta}) + \mathcal{L}_{\mathcal{B}}(\bm{\theta})\approx\frac{1}{2\pi}\int_0^{2\pi}\big(\Delta \hat{u}(x) + a^2\sin(ax)\big)^2 \diff{x} + \big(\hat{u}(0)\big)^2 + \big(\hat{u}(2\pi)\big)^2,
\end{equation}
where $\bm{\theta}=(\bm{c},\bm{w},\bm{b})$ is a set of trainable parameters.
Let $p=\nabla u = \diff{u}/\diff{x}$. We reformulate Eq.~\eqref{eq:theo_possion} via the \textit{extra fields} to obtain
\begin{subequations}
\label{eq:theo_possion_after_transformation}
\begin{align}
\nabla p(x)&=-a^2\sin{ax},&&x\in(0,2\pi),\\
p(x)&=\nabla u(x),&&x\in(0,2\pi),\\
u(x)&=0,&&x=0\lor x=2\pi.
\end{align}
\end{subequations}
Our ansatz becomes $\hat{u}=\bm{c}^\top \sigma(\bm{w} x+\bm{b})$ and $\hat{p}=\bm{c}_{p}^\top \sigma(\bm{w} x+\bm{b})$, where $\bm{c}_{p}\in\mathbb{R}^K$ is a weight vector with respect to the output $\hat{p}$. We can see that the loss term of the BC does not change while that of the PDE becomes
\begin{equation}
\tilde{\mathcal{L}}_{\mathcal{F}}(\tilde{\bm{\theta}})\approx\frac{1}{2\pi}\int_0^{2\pi}\Big[\big(\nabla \hat{p}(x) + a^2\sin(ax)\big)^2 + \big(\hat{p}(x)-\nabla \hat{u}(x)\big)^2\Big] \diff{x},
\end{equation}
where $\tilde{\bm{\theta}}=(\bm{c},\bm{w},\bm{b},\bm{c}_{p})$ is a set of trainable parameters.
\subsubsection{Proof of Theorem~4.1}\label{sec_a7_2}
In this part, we provide a detailed proof of Theorem 4.1.
We first derive the derivatives of the ansatz for the original PDEs (we recall that $\sigma$ is $\tanh$ and we have $\sigma' = 1 - \sigma^2$)
\begin{subequations}
\begin{align}
\frac{\diff{\hat{u}}}{\diff{x}} &= \bm{c}^\top \Big[\big(\bm{1}-\sigma^2(\bm{w} x+\bm{b})\big)\circ\bm{w}\Big],\\
\frac{\Diff2{\hat{u}}}{\diff{x}^2} &= -2\bm{c}^\top \Big[\sigma(\bm{w} x+\bm{b})\circ\big(\bm{1}-\sigma^2(\bm{w} x+\bm{b})\big)\circ\bm{w}^2\Big].
\end{align}
\end{subequations}
We will abbreviate $\sigma(\bm{w} x+\bm{b})$ as $\bm{\sigma}$, and then we have
\begin{subequations}
\begin{align}
\frac{\diff{\hat{u}}}{\diff{x}} &= \bm{c}^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big],\\
\frac{\Diff2{\hat{u}}}{\diff{x}^2} &= -2\bm{c}^\top \big[\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big].
\end{align}
\end{subequations}
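These closed-form derivatives can be checked against central finite differences; a small numpy sketch with random (hypothetical) parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 8
c, w, b = rng.normal(size=(3, K))

# single-layer tanh ansatz and its closed-form first/second derivatives
u   = lambda x: c @ np.tanh(w * x + b)
du  = lambda x: c @ ((1 - np.tanh(w * x + b) ** 2) * w)
d2u = lambda x: -2 * c @ (np.tanh(w * x + b)
                          * (1 - np.tanh(w * x + b) ** 2) * w ** 2)

# central finite differences at a test point
x0, h = 0.7, 1e-5
fd1 = (u(x0 + h) - u(x0 - h)) / (2 * h)
fd2 = (u(x0 + h) - 2 * u(x0) + u(x0 - h)) / h ** 2
```

The analytic and finite-difference values agree up to discretization error, confirming the formulas used in the proof.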
Now, we can provide a bound for $(\partial \mathcal{L}_{\mathcal{F}} / \partial \bm{c})^\top$ as
\begin{equation}
\begin{aligned}
\bigg\lvert \Big(\frac{\partial \mathcal{L}_{\mathcal{F}}}{\partial \bm{c}}\Big)^\top \bigg\rvert &= \frac{1}{2\pi}\Bigg\lvert \int_0^{2\pi}2\Big(\frac{\Diff2{\hat{u}}}{\diff{x}^2} + a^2\sin(ax)\Big) \cdot \bigg(\partial \Big( \frac{\Diff2{\hat{u}}}{\diff{x}^2} \Big) \Big/ \partial \bm{c} \bigg) \diff{x}\Bigg\rvert\\
&= \frac{2}{\pi}\Bigg\lvert\int_0^{2\pi}\Big(-2\bm{c}^\top \big[\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big] + a^2\sin(ax)\Big)\\
&\qquad\cdot \big[\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big] \diff{x}\Bigg\rvert\\
&\le \frac{2}{\pi} \Bigg(\int_0^{2\pi}\Big(-2\bm{c}^\top \big[\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big] + a^2\sin(ax)\Big)^2\diff{x} \Bigg)^{\frac{1}{2}}\\
&\qquad\cdot \Bigg(\int_0^{2\pi}\big[\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big]^2\diff{x} \Bigg)^{\frac{1}{2}}\qquad \text{(Cauchy--Schwarz)}\\
&\le \frac{2}{\pi} \Bigg(\int_0^{2\pi}\big(2|\bm{c}|^\top\bm{w}^2 + a^2\big)^2\diff{x} \Bigg)^{\frac{1}{2}}\cdot \Bigg(\int_0^{2\pi}\bm{w}^4\diff{x} \Bigg)^{\frac{1}{2}}\\
&= 4 \big(2|\bm{c}|^\top\bm{w}^2 + a^2\big) \bm{w}^2\\
&\le 8 \big(|\bm{c}|^\top\bm{w}^2 + a^2\big) \bm{w}^2,
\end{aligned}
\end{equation}
where $\le$ between two vectors is an element-wise comparison. Thus, $(\partial \mathcal{L}_{\mathcal{F}} / \partial \bm{c})^\top$ can be bounded by
\begin{equation} \label{eq:bound_c}
\bigg\lvert \Big(\frac{\partial \mathcal{L}_{\mathcal{F}}}{\partial \bm{c}}\Big)^\top \bigg\rvert = \mathcal{O}\big(|\bm{c}|^\top\bm{w}^2+a^2\big)\cdot\bm{w}^2.
\end{equation}
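This bound can be verified numerically; the sketch below evaluates $(\partial \mathcal{L}_{\mathcal{F}} / \partial \bm{c})^\top$ by quadrature on a uniform grid and compares it elementwise with the bound (parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
K, a = 6, 2.0
c, w, b = rng.normal(size=(3, K))

xs = np.linspace(0.0, 2 * np.pi, 4001)
S = np.tanh(np.outer(xs, w) + b)                 # sigma(w x + b), shape (N, K)
dd2u_dc = -2 * S * (1 - S ** 2) * w ** 2         # d(u'')/dc at each node
resid = dd2u_dc @ c + a ** 2 * np.sin(a * xs)    # u'' + a^2 sin(ax)
# dL_F/dc = (1/2pi) int 2 * resid * d(u'')/dc dx, approximated by a grid mean
grad_c = np.mean(2 * resid[:, None] * dd2u_dc, axis=0)
bound = 8 * (np.abs(c) @ w ** 2 + a ** 2) * w ** 2
```

The bound holds with a comfortable margin, since the Cauchy--Schwarz step and the estimate $|\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)|\le\bm{1}$ are both loose.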
Similarly, for $(\partial \mathcal{L}_{\mathcal{F}} / \partial \bm{w})^\top$, we have
\begin{equation}
\begin{aligned}
\bigg\lvert \Big(\frac{\partial \mathcal{L}_{\mathcal{F}}}{\partial \bm{w}}\Big)^\top \bigg\rvert &= \frac{1}{2\pi}\Bigg\lvert \int_0^{2\pi}2\Big(\frac{\Diff2{\hat{u}}}{\diff{x}^2} + a^2\sin(ax)\Big) \cdot \bigg(\partial \Big( \frac{\Diff2{\hat{u}}}{\diff{x}^2} \Big) \Big/ \partial \bm{w} \bigg) \diff{x}\Bigg\rvert\\
&= \frac{2}{\pi}\Bigg\lvert\int_0^{2\pi}\Big(-2\bm{c}^\top \big[\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big] + a^2\sin(ax)\Big)\\
&\qquad\cdot \bm{c}\circ\big[2\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w} + x(\bm{1}-3\bm{\sigma}^2)\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big] \diff{x}\Bigg\rvert\\
&\le \frac{2}{\pi} \Bigg(\int_0^{2\pi}\Big(-2\bm{c}^\top \big[\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big] + a^2\sin(ax)\Big)^2\diff{x} \Bigg)^{\frac{1}{2}}\\
&\qquad\cdot \Bigg(\int_0^{2\pi}\bm{c}^2\circ\big[2\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w} \\
&\qquad\qquad+ x(\bm{1}-3\bm{\sigma}^2)\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big]^2\diff{x} \Bigg)^{\frac{1}{2}}\\
&\le \frac{2}{\pi} \Bigg(\int_0^{2\pi}\big(2|\bm{c}|^\top\bm{w}^2 + a^2\big)^2\diff{x} \Bigg)^{\frac{1}{2}}\cdot \Bigg(\int_0^{2\pi}\bm{c}^2\circ(\bm{w}+2\pi\bm{w}^2)^2\diff{x} \Bigg)^{\frac{1}{2}}\\
&= 4 \big(2|\bm{c}|^\top\bm{w}^2 + a^2\big) \cdot |\bm{c}| \circ (|\bm{w}|+2\pi\bm{w}^2)\\
&\le 16\pi \big(|\bm{c}|^\top\bm{w}^2 + a^2\big) \cdot |\bm{c}| \circ (|\bm{w}|+\bm{w}^2).
\end{aligned}
\end{equation}
Thus, the bound for $(\partial \mathcal{L}_{\mathcal{F}} / \partial \bm{w})^\top$ is given by
\begin{equation} \label{eq:bound_w}
\bigg\lvert \Big(\frac{\partial \mathcal{L}_{\mathcal{F}}}{\partial \bm{w}}\Big)^\top \bigg\rvert = \mathcal{O}\big(|\bm{c}|^\top\bm{w}^2 + a^2\big)\cdot\big(|\bm{c}| \circ |\bm{w}|\circ( |\bm{w}| + \bm{1} )\big).
\end{equation}
Finally, for $(\partial \mathcal{L}_{\mathcal{F}} / \partial \bm{b})^\top$, we have
\begin{equation}
\begin{aligned}
\bigg\lvert \Big(\frac{\partial \mathcal{L}_{\mathcal{F}}}{\partial \bm{b}}\Big)^\top \bigg\rvert &= \frac{1}{2\pi}\Bigg\lvert \int_0^{2\pi}2\Big(\frac{\Diff2{\hat{u}}}{\diff{x}^2} + a^2\sin(ax)\Big) \cdot \bigg(\partial \Big( \frac{\Diff2{\hat{u}}}{\diff{x}^2} \Big) \Big/ \partial \bm{b} \bigg) \diff{x}\Bigg\rvert\\
&= \frac{2}{\pi}\Bigg\lvert\int_0^{2\pi}\Big(-2\bm{c}^\top \big[\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big] + a^2\sin(ax)\Big)\\
&\qquad\cdot \big[\bm{c}\circ (\bm{1}-3\bm{\sigma}^2)\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big] \diff{x}\Bigg\rvert\\
&\le \frac{2}{\pi} \Bigg(\int_0^{2\pi}\Big(-2\bm{c}^\top \big[\bm{\sigma}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big] + a^2\sin(ax)\Big)^2\diff{x} \Bigg)^{\frac{1}{2}}\\
&\qquad\cdot \Bigg(\int_0^{2\pi}\big[\bm{c}\circ (\bm{1}-3\bm{\sigma}^2)\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{w}^2\big]^2\diff{x} \Bigg)^{\frac{1}{2}}\\
&\le \frac{2}{\pi} \Bigg(\int_0^{2\pi}\big(2|\bm{c}|^\top\bm{w}^2 + a^2\big)^2\diff{x} \Bigg)^{\frac{1}{2}}\cdot \Bigg(\int_0^{2\pi}\bm{c}^2\circ\bm{w}^4\diff{x} \Bigg)^{\frac{1}{2}}\\
&= 4 \big(2|\bm{c}|^\top\bm{w}^2 + a^2\big) \cdot |\bm{c}| \circ \bm{w}^2\\
&\le 8 \big(|\bm{c}|^\top\bm{w}^2 + a^2\big) \cdot |\bm{c}| \circ \bm{w}^2.
\end{aligned}
\end{equation}
Thus, the bound for $(\partial \mathcal{L}_{\mathcal{F}} / \partial \bm{b})^\top$ is given by
\begin{equation} \label{eq:bound_b}
\bigg\lvert \Big(\frac{\partial \mathcal{L}_{\mathcal{F}}}{\partial \bm{b}}\Big)^\top \bigg\rvert = \mathcal{O}\big(|\bm{c}|^\top\bm{w}^2 + a^2\big)\cdot\big(|\bm{c}| \circ \bm{w}^2\big).
\end{equation}
Recalling that $\bm{\theta} = (\bm{c}, \bm{w}, \bm{b})$, from Eq.~\eqref{eq:bound_c}, Eq.~\eqref{eq:bound_w}, and Eq.~\eqref{eq:bound_b}, we have
\begin{equation} \label{eq:bound_origin}
\Big|\big(\nabla_{\bm{\theta}}\mathcal{L}_{\mathcal{F}}\big)^\top\Big|=\bigg\lvert \Big(\frac{\partial \mathcal{L}_{\mathcal{F}}}{\partial \bm{\theta}}\Big)^\top \bigg\rvert = \mathcal{O}\big(|\bm{c}|^\top\bm{w}^2 + a^2\big)\cdot\big(\bm{w}^2, |\bm{c}| \circ |\bm{w}|\circ( |\bm{w}| + \bm{1} ), |\bm{c}| \circ \bm{w}^2\big).
\end{equation}
In contrast, for the transformed PDE, we first derive the derivatives of the ansatz (i.e., $\hat{u}=\bm{c}^\top \sigma(\bm{w} x+\bm{b}), \hat{p}=\bm{c}_p^\top \sigma(\bm{w} x+\bm{b})$)
\begin{subequations}
\begin{align}
\frac{\diff{\hat{u}}}{\diff{x}} &= \bm{c}^\top \big[(\bm{1}-\sigma^2(\bm{w} x+\bm{b}))\circ\bm{w}\big],\\
\frac{\diff{\hat{p}}}{\diff{x}} &= \bm{c}_p^\top \big[(\bm{1}-\sigma^2(\bm{w} x+\bm{b}))\circ\bm{w}\big].
\end{align}
\end{subequations}
We again abbreviate $\sigma(\bm{w} x+\bm{b})$ as $\bm{\sigma}$ to obtain
\begin{subequations}
\begin{align}
\frac{\diff{\hat{u}}}{\diff{x}} &= \bm{c}^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big],\\
\frac{\diff{\hat{p}}}{\diff{x}} &= \bm{c}_p^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big].
\end{align}
\end{subequations}
We now compute a bound for $(\partial \tilde{\mathcal{L}}_{\mathcal{F}} / \partial \bm{c})^\top$
\begin{equation}
\begin{aligned}
\bigg\lvert \Big(\frac{\partial \tilde{\mathcal{L}}_{\mathcal{F}}}{\partial \bm{c}}\Big)^\top \bigg\rvert &= \frac{1}{2\pi}\Bigg\lvert \int_0^{2\pi}2\Big(\hat{p} - \frac{\diff{\hat{u}}}{\diff{x}}\Big) \cdot \bigg(\partial \Big( \frac{\diff{\hat{u}}}{\diff{x}} \Big) \Big/ \partial \bm{c} \bigg) \diff{x}\Bigg\rvert\\
&= \frac{1}{\pi}\Bigg\lvert\int_0^{2\pi}\Big(\bm{c}_p^\top \bm{\sigma} - \bm{c}^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big] \Big) \cdot \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big] \diff{x}\Bigg\rvert\\
&\le \frac{1}{\pi} \Bigg(\int_0^{2\pi}\Big(\bm{c}_p^\top \bm{\sigma} - \bm{c}^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big]\Big)^2\diff{x} \Bigg)^{\frac{1}{2}}\cdot \Bigg(\int_0^{2\pi}\big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big]^2\diff{x} \Bigg)^{\frac{1}{2}}\\
&\le \frac{1}{\pi} \Bigg(\int_0^{2\pi}\big(\lVert \bm{c}_p \rVert_1 + |\bm{c}|^\top |\bm{w}|\big)^2\diff{x} \Bigg)^{\frac{1}{2}}\cdot \Bigg(\int_0^{2\pi}\bm{w}^2\diff{x} \Bigg)^{\frac{1}{2}}\\
&= 2 \big(\lVert \bm{c}_p \rVert_1 + |\bm{c}|^\top |\bm{w}|\big) |\bm{w}|.
\end{aligned}
\end{equation}
Thus, the bound for $(\partial \tilde{\mathcal{L}}_{\mathcal{F}} / \partial \bm{c})^\top$ is given by
\begin{equation} \label{eq:bound_c_transformed}
\bigg\lvert \Big(\frac{\partial \tilde{\mathcal{L}}_{\mathcal{F}}}{\partial \bm{c}}\Big)^\top \bigg\rvert = \mathcal{O}\big(\lVert \bm{c}_p \rVert_1 + |\bm{c}|^\top |\bm{w}|\big)\cdot|\bm{w}|.
\end{equation}
As for $(\partial \tilde{\mathcal{L}}_{\mathcal{F}} / \partial \bm{w})^\top$, we have
\begin{equation}
\begin{aligned}
&\bigg\lvert \Big(\frac{\partial \tilde{\mathcal{L}}_{\mathcal{F}}}{\partial \bm{w}}\Big)^\top \bigg\rvert\\
=& \frac{1}{2\pi}\Bigg\lvert \int_0^{2\pi}2\Big(\frac{\diff{\hat{p}}}{\diff{x}} + a^2\sin(ax)\Big) \cdot \bigg(\partial \Big( \frac{\diff{\hat{p}}}{\diff{x}} \Big) \Big/ \partial \bm{w} \bigg) \diff{x} + \int_0^{2\pi}2\Big(\hat{p} - \frac{\diff{\hat{u}}}{\diff{x}}\Big) \cdot \bigg(\frac{\partial \hat{p}}{\partial \bm{w}} - \partial\Big( \frac{\diff{\hat{u}}}{\diff{x}} \Big) \Big/ \partial \bm{w} \bigg) \diff{x}\Bigg\rvert\\
=& \frac{1}{\pi}\Bigg\lvert \int_0^{2\pi}\Big(\bm{c}_p^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big] + a^2\sin(ax)\Big)\cdot \big[ \bm{c}_p\circ(\bm{1}-\bm{\sigma}^2)\circ(\bm{1}-2x\bm{\sigma}\circ\bm{w}) \big]\diff{x}\\
& + \int_0^{2\pi}\Big(\bm{c}_p^\top \bm{\sigma} - \bm{c}^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big]\Big) \cdot \big[x\bm{c}_p\circ(\bm{1}-\bm{\sigma}^2) - \bm{c}\circ(\bm{1}-\bm{\sigma}^2)\circ(\bm{1}-2x\bm{\sigma}\circ\bm{w}) \big] \diff{x}\Bigg\rvert\\
\le& \frac{1}{\pi} \Bigg(\bigg(\int_0^{2\pi}\Big(\bm{c}_p^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big] + a^2\sin(ax)\Big)^2\diff{x} \bigg)^{\frac{1}{2}}\\
&\qquad\qquad\cdot \bigg(\int_0^{2\pi}\big[ \bm{c}_p\circ(\bm{1}-\bm{\sigma}^2)\circ(\bm{1}-2x\bm{\sigma}\circ\bm{w})\big]^2\diff{x} \bigg)^{\frac{1}{2}}\\
&\qquad+\bigg(\int_0^{2\pi}\Big(\bm{c}_p^\top \bm{\sigma} - \bm{c}^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big]\Big)^2\diff{x} \bigg)^{\frac{1}{2}}\\
&\qquad\qquad\cdot \bigg(\int_0^{2\pi}\big[x\bm{c}_p\circ(\bm{1}-\bm{\sigma}^2) - \bm{c}\circ(\bm{1}-\bm{\sigma}^2)\circ(\bm{1}-2x\bm{\sigma}\circ\bm{w})\big]^2\diff{x} \bigg)^{\frac{1}{2}}\Bigg)\\
&\le 4 \Bigg( \bigg(\int_0^{2\pi}\big(|\bm{c}_p|^\top|\bm{w}| + a^2\big)^2\diff{x} \bigg)^{\frac{1}{2}}\cdot \bigg(\int_0^{2\pi}\bm{c}_p^2\circ(\bm{1}+|\bm{w}|)^2\diff{x} \bigg)^{\frac{1}{2}}\\
&\qquad+\bigg(\int_0^{2\pi}\big(\lVert\bm{c}_p\rVert_1+|\bm{c}|^\top|\bm{w}|\big)^2\diff{x} \bigg)^{\frac{1}{2}}\cdot \bigg(\int_0^{2\pi}\big[ |\bm{c}_p| + |\bm{c}|\circ(\bm{1}+|\bm{w}|) \big]^2\diff{x} \bigg)^{\frac{1}{2}} \Bigg)\\
&= 8\pi \Big(\big(|\bm{c}_p|^\top|\bm{w}| + a^2\big) \cdot \big[|\bm{c}_p| \circ (\bm{1}+|\bm{w}|)\big] + \big(\lVert\bm{c}_p\rVert_1+|\bm{c}|^\top|\bm{w}|\big) \cdot \big[|\bm{c}_p| + |\bm{c}|\circ(\bm{1}+|\bm{w}|)\big]\Big)\\
&\le 40\pi \big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big) \cdot \big[ \max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|,\bm{1}) \big].
\end{aligned}
\end{equation}
Thus, we can bound $(\partial \tilde{\mathcal{L}}_{\mathcal{F}} / \partial \bm{w})^\top$ by
\begin{equation}
\begin{split}
\label{eq:bound_w_transformed}
\bigg\lvert \Big(\frac{\partial \tilde{\mathcal{L}}_{\mathcal{F}}}{\partial \bm{w}}\Big)^\top \bigg\rvert
&\le 40\pi \big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big) \cdot \big[ \max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|,\bm{1}) \big]\\
&= \mathcal{O}\big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big)\cdot\big[ \max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|,\bm{1}) \big].
\end{split}
\end{equation}
Similarly, for $(\partial \tilde{\mathcal{L}}_{\mathcal{F}} / \partial \bm{b})^\top$, we have
\begin{equation}
\begin{aligned}
& \bigg\lvert \Big(\frac{\partial \tilde{\mathcal{L}}_{\mathcal{F}}}{\partial \bm{b}}\Big)^\top \bigg\rvert\\
=& \frac{1}{2\pi}\Bigg\lvert \int_0^{2\pi}2\Big(\frac{\diff{\hat{p}}}{\diff{x}} + a^2\sin(ax)\Big) \cdot \bigg(\partial \Big( \frac{\diff{\hat{p}}}{\diff{x}} \Big) \Big/ \partial \bm{b} \bigg) \diff{x}\\
&\qquad + \int_0^{2\pi}2\Big(\hat{p} - \frac{\diff{\hat{u}}}{\diff{x}}\Big) \cdot \bigg(\frac{\partial \hat{p}}{\partial \bm{b}} - \partial\Big( \frac{\diff{\hat{u}}}{\diff{x}} \Big) \Big/ \partial \bm{b} \bigg) \diff{x}\Bigg\rvert\\
=& \frac{1}{\pi}\Bigg\lvert \int_0^{2\pi}\Big(\bm{c}_p^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big] + a^2\sin(ax)\Big)\cdot \big[ -2\bm{c}_p\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{\sigma}\circ\bm{w} \big]\diff{x}\\
&+ \int_0^{2\pi}\Big(\bm{c}_p^\top \bm{\sigma} - \bm{c}^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big]\Big)\cdot \big[\bm{c}_p\circ(\bm{1}-\bm{\sigma}^2) + 2\bm{c}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{\sigma}\circ\bm{w} \big] \diff{x}\Bigg\rvert\\
\le& \frac{1}{\pi} \Bigg(\bigg(\int_0^{2\pi}\Big(\bm{c}_p^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big] + a^2\sin(ax)\Big)^2\diff{x} \bigg)^{\frac{1}{2}}\\
&\qquad\qquad\cdot \bigg(\int_0^{2\pi} \big[ -2\bm{c}_p\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{\sigma}\circ\bm{w} \big]^2\diff{x} \bigg)^{\frac{1}{2}}\\
&+\bigg(\int_0^{2\pi}\Big(\bm{c}_p^\top \bm{\sigma} - \bm{c}^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big]\Big)^2\diff{x} \bigg)^{\frac{1}{2}}\\
&\qquad\qquad\cdot \bigg(\int_0^{2\pi}\big[\bm{c}_p\circ(\bm{1}-\bm{\sigma}^2) + 2\bm{c}\circ(\bm{1}-\bm{\sigma}^2)\circ\bm{\sigma}\circ\bm{w}\big]^2\diff{x} \bigg)^{\frac{1}{2}}\Bigg)\\
\le& \frac{2}{\pi} \Bigg( \bigg(\int_0^{2\pi}\big(|\bm{c}_p|^\top|\bm{w}| + a^2\big)^2\diff{x} \bigg)^{\frac{1}{2}}\cdot \bigg(\int_0^{2\pi}\bm{c}_p^2\circ\bm{w}^2\diff{x} \bigg)^{\frac{1}{2}}\\
&+\bigg(\int_0^{2\pi}\big(\lVert\bm{c}_p\rVert_1+|\bm{c}|^\top|\bm{w}|\big)^2\diff{x} \bigg)^{\frac{1}{2}}\cdot \bigg(\int_0^{2\pi}\big[ |\bm{c}_p| + |\bm{c}|\circ|\bm{w}| \big]^2\diff{x} \bigg)^{\frac{1}{2}} \Bigg)\\
=& 4 \Big(\big(|\bm{c}_p|^\top|\bm{w}| + a^2\big) \cdot \big[|\bm{c}_p| \circ |\bm{w}|\big]+ \big(\lVert\bm{c}_p\rVert_1+|\bm{c}|^\top|\bm{w}|\big) \cdot \big[|\bm{c}_p| + |\bm{c}|\circ|\bm{w}|\big]\Big)\\
\le& 12 \big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big) \cdot \big[ \max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|,\bm{1}) \big].
\end{aligned}
\end{equation}
Thus, we can bound $(\partial \tilde{\mathcal{L}}_{\mathcal{F}} / \partial \bm{b})^\top$ by
\begin{equation}
\begin{split}
\label{eq:bound_b_transformed}
\bigg\lvert \Big(\frac{\partial \tilde{\mathcal{L}}_{\mathcal{F}}}{\partial \bm{b}}\Big)^\top \bigg\rvert
\le& 12 \big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big) \cdot \big[ \max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|,\bm{1}) \big]\\
= & \mathcal{O}\big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big) \cdot \big[ \max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|,\bm{1}) \big].
\end{split}
\end{equation}
For $(\partial \tilde{\mathcal{L}}_{\mathcal{F}} / \partial \bm{c}_p)^\top$, we have
\begin{equation}
\begin{aligned}
\bigg\lvert &\Big(\frac{\partial \tilde{\mathcal{L}}_{\mathcal{F}}}{\partial \bm{c}_p}\Big)^\top \bigg\rvert \\
=& \frac{1}{2\pi}\Bigg\lvert \int_0^{2\pi}2\Big(\frac{\diff{\hat{p}}}{\diff{x}} + a^2\sin(ax)\Big) \cdot \bigg(\partial \Big( \frac{\diff{\hat{p}}}{\diff{x}} \Big) \Big/ \partial \bm{c}_p \bigg) \diff{x}
+ \int_0^{2\pi}2\Big(\hat{p} - \frac{\diff{\hat{u}}}{\diff{x}}\Big) \cdot \bigg(\frac{\partial \hat{p}}{\partial \bm{c}_p} \bigg) \diff{x}\Bigg\rvert\\
=& \frac{1}{\pi}\Bigg\lvert \int_0^{2\pi}\Big(\bm{c}_p^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big] + a^2\sin(ax)\Big)\cdot \big[ (\bm{1}-\bm{\sigma}^2)\circ\bm{w} \big]\diff{x}\\
&\qquad + \int_0^{2\pi}\Big(\bm{c}_p^\top \bm{\sigma} - \bm{c}^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big]\Big)\cdot\bm{\sigma} \diff{x}\Bigg\rvert\\
\le& \frac{1}{\pi} \Bigg(\bigg(\int_0^{2\pi}\Big(\bm{c}_p^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big] + a^2\sin(ax)\Big)^2\diff{x} \bigg)^{\frac{1}{2}}
\cdot \bigg(\int_0^{2\pi}\big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big]^2\diff{x} \bigg)^{\frac{1}{2}}\\
&\qquad+\bigg(\int_0^{2\pi}\Big(\bm{c}_p^\top \bm{\sigma} - \bm{c}^\top \big[(\bm{1}-\bm{\sigma}^2)\circ\bm{w}\big]\Big)^2\diff{x} \bigg)^{\frac{1}{2}}\cdot \bigg(\int_0^{2\pi}\bm{\sigma}^2\diff{x} \bigg)^{\frac{1}{2}}\Bigg)\\
\le& \frac{1}{\pi} \Bigg( \bigg(\int_0^{2\pi}\big(|\bm{c}_p|^\top|\bm{w}| + a^2\big)^2\diff{x} \bigg)^{\frac{1}{2}}\cdot \bigg(\int_0^{2\pi}\bm{w}^2\diff{x} \bigg)^{\frac{1}{2}}\\
&\qquad+\bigg(\int_0^{2\pi}\big(\lVert\bm{c}_p\rVert_1+|\bm{c}|^\top|\bm{w}|\big)^2\diff{x} \bigg)^{\frac{1}{2}}\cdot \bigg(\int_0^{2\pi}\bm{1}\diff{x} \bigg)^{\frac{1}{2}} \Bigg)\\
=& 2 \Big(\big(|\bm{c}_p|^\top|\bm{w}| + a^2\big) \cdot |\bm{w}| + \big(\lVert\bm{c}_p\rVert_1+|\bm{c}|^\top|\bm{w}|\big) \cdot \bm{1}\Big)\\
\le& 4 \big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big) \cdot \max(|\bm{w}|,\bm{1}).
\end{aligned}
\end{equation}
Thus, we can bound $(\partial \tilde{\mathcal{L}}_{\mathcal{F}} / \partial \bm{c}_p)^\top$ by
\begin{equation}
\begin{split}
\label{eq:bound_c_p_transformed}
\left\lvert \Big(\frac{\partial \tilde{\mathcal{L}}_{\mathcal{F}}}{\partial \bm{c}_p}\Big)^\top \right\rvert
\le& 4 \left(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\right) \cdot \max(|\bm{w}|,\bm{1})\\
= & \mathcal{O}\big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big) \cdot \max(|\bm{w}|,\bm{1}).
\end{split}
\end{equation}
Recalling that $\tilde{\bm{\theta}} = (\bm{c}, \bm{w}, \bm{b}, \bm{c}_p)$, from Eq.~\eqref{eq:bound_c_transformed}, Eq.~\eqref{eq:bound_w_transformed}, Eq.~\eqref{eq:bound_b_transformed}, and Eq.~\eqref{eq:bound_c_p_transformed}, noting that $\lVert\bm{c}_p\rVert_1 + |\bm{c}|^\top|\bm{w}|\le \lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2$, we have
\begin{equation}\label{eq:bound_transformed}
\begin{aligned}
\Big|\big({\nabla_{\tilde{\bm{\theta}}}\tilde{\mathcal{L}}_{\mathcal{F}}}\big)^\top\Big|&=\bigg\lvert \Big(\frac{\partial \tilde{\mathcal{L}}_{\mathcal{F}}}{\partial \tilde{\bm{\theta}}}\Big)^\top \bigg\rvert = \mathcal{O}\big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big)\cdot \big(
|\bm{w}|, \\
&\max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|, \bm{1}),\max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|,\bm{1}), \max(|\bm{w}|,\bm{1})
\big).
\end{aligned}
\end{equation}
\subsubsection{Analysis via the Condition Number}\label{sec_a7_3}
In addition to providing bounds for the gradients of $\mathcal{L}_{\mathcal{F}}$ and $\tilde{\mathcal{L}}_{\mathcal{F}}$, we can also use the condition number to characterize the sensitivity of their gradients with respect to the parameters of the neural network. Let $\bm{\theta}^{(t)}=(\bm{c}^{(t)},\bm{w}^{(t)},\bm{b}^{(t)})$ and $\tilde{\bm{\theta}}^{(t)}=(\bm{c}^{(t)},\bm{w}^{(t)},\bm{b}^{(t)},\bm{c}_p^{(t)})$ be the parameters of the neural network at the $t$-th step (before and after the reformulation, respectively). For simplicity, we introduce the following notations
\begin{subequations}
\begin{align}
\Delta \bm{\theta} &= \bm{\theta}^{(t+1)} - \bm{\theta}^{(t)},\\
\Delta \tilde{\bm{\theta}} &= \tilde{\bm{\theta}}^{(t+1)} - \tilde{\bm{\theta}}^{(t)},\\
\Delta \mathcal{L}_{\mathcal{F}} &= \mathcal{L}_{\mathcal{F}}(\bm{\theta}^{(t+1)}) - \mathcal{L}_{\mathcal{F}}(\bm{\theta}^{(t)}),\\
\Delta \tilde{\mathcal{L}}_{\mathcal{F}} &= \tilde{\mathcal{L}}_{\mathcal{F}}(\tilde{\bm{\theta}}^{(t+1)}) - \tilde{\mathcal{L}}_{\mathcal{F}}(\tilde{\bm{\theta}}^{(t)}).
\end{align}
\end{subequations}
The condition numbers of $\mathcal{L}_{\mathcal{F}}$ and $\tilde{\mathcal{L}}_{\mathcal{F}}$ are defined as
\begin{equation}
\mathrm{cond} = \frac{|\Delta \mathcal{L}_{\mathcal{F}}|}{ \lVert\Delta \bm{\theta}\rVert_2},\quad
\tilde{\mathrm{cond}} = \frac{|\Delta \tilde{\mathcal{L}}_{\mathcal{F}}|}{ \lVert\Delta \tilde{\bm{\theta}}\rVert_2}.
\end{equation}
Next we derive the bounds for $\mathrm{cond}$ and $\tilde{\mathrm{cond}}$, respectively. We first consider $\mathrm{cond}$
\begin{equation}
\begin{aligned}
\mathrm{cond} &= \frac{|\Delta \mathcal{L}_{\mathcal{F}}|}{ \lVert\Delta \bm{\theta}\rVert_2}
\approx\bigg\lvert\Big(\frac{\partial \mathcal{L}_{\mathcal{F}}}{\partial \bm{\theta}}\Big)^\top \cdot \Delta\bm{\theta} \bigg\rvert \bigg / \lVert\Delta \bm{\theta}\rVert_2
\le \bigg\lVert\Big(\frac{\partial \mathcal{L}_{\mathcal{F}}}{\partial \bm{\theta}}\Big)^\top \bigg\rVert_2\\
&= \mathcal{O}\Big(\big(|\bm{c}|^\top\bm{w}^2 + a^2\big)\cdot\big\lVert\bm{w}^2, |\bm{c}| \circ |\bm{w}|\circ( |\bm{w}| + \bm{1} ), |\bm{c}| \circ \bm{w}^2\big\rVert_2\Big)\quad\text{(Eq.~\eqref{eq:bound_origin})}\\
&=\mathcal{O}\Big(\big(|\bm{c}|^\top\bm{w}^2 + a^2\big)\cdot\big\lVert\bm{w}^2, |\bm{c}| \circ |\bm{w}|\circ( |\bm{w}| + \bm{1} ), |\bm{c}| \circ \bm{w}^2\big\rVert_1\Big)\quad\text{(Equivalence of norms)}\\
&= \mathcal{O}\Big(\big(|\bm{c}|^\top\bm{w}^2 + a^2\big)\cdot\big( \lVert\bm{w}^2\rVert_1 + \lVert|\bm{c}| \circ |\bm{w}|\circ( |\bm{w}| + \bm{1} )\rVert_1 + \lVert|\bm{c}| \circ \bm{w}^2\rVert_1\big) \Big)\\
&= \mathcal{O}\Big(\big(|\bm{c}|^\top\bm{w}^2 + a^2\big)\cdot\big\lVert \max(|\bm{c}|,\bm{1})\circ|\bm{w}|\circ\max(|\bm{w}|,\bm{1}) \big\rVert_1 \Big).
\end{aligned}
\label{eq:cond_before}
\end{equation}
Similarly, for $\tilde{\mathrm{cond}}$, we have
\begin{equation}
\begin{aligned}
\tilde{\mathrm{cond}} &= \frac{|\Delta \tilde{\mathcal{L}}_{\mathcal{F}}|}{ \lVert\Delta \tilde{\bm{\theta}}\rVert_2}
\approx\bigg\lvert\Big(\frac{\partial \tilde{\mathcal{L}}_{\mathcal{F}}}{\partial \tilde{\bm{\theta}}}\Big)^\top \cdot \Delta\tilde{\bm{\theta}} \bigg\rvert \bigg / \lVert\Delta \tilde{\bm{\theta}}\rVert_2
\le \bigg\lVert\Big(\frac{\partial \tilde{\mathcal{L}}_{\mathcal{F}}}{\partial \tilde{\bm{\theta}}}\Big)^\top \bigg\rVert_2\\
&= \mathcal{O}\Big(\big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big)\cdot \big\lVert
|\bm{w}|, \max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|, \bm{1}),\\
&\qquad\max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|,\bm{1}), \max(|\bm{w}|,\bm{1})
\big\rVert_2\Big)\qquad\text{(Eq.~\eqref{eq:bound_transformed})}\\
&=\mathcal{O}\Big(\big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big)\cdot \big\lVert
|\bm{w}|, \max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|, \bm{1}),\\
&\qquad\max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|,\bm{1}), \max(|\bm{w}|,\bm{1})
\big\rVert_1\Big)\qquad\text{(Equivalence of norms)}\\
&=\mathcal{O}\Big(\big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big)\cdot \big(
\lVert\bm{w}\rVert_1\\
&\qquad+ \lVert\max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|, \bm{1})\rVert_1+\lVert\max(|\bm{c}|,|\bm{c}_p|)\circ\max(|\bm{w}|,\bm{1})\rVert_1\\
&\qquad+\lVert\max(|\bm{w}|,\bm{1})
\rVert_1\big)\Big)\\
&= \mathcal{O}\Big(\big(\lVert\bm{c}_p\rVert_1 + \max(|\bm{c}|,|\bm{c}_p|)^\top|\bm{w}| + a^2\big)\cdot\big\lVert \max(|\bm{c}|,|\bm{c}_p|,\bm{1})\circ\max(|\bm{w}|,\bm{1}) \big\rVert_1 \Big).
\end{aligned}
\label{eq:cond_after}
\end{equation}
From Eq.~\eqref{eq:cond_before} and Eq.~\eqref{eq:cond_after}, we find that for the original PDEs the condition number grows at a higher order in $\bm{\theta}$. If $\bm{\theta}$ is large, the condition number can be very large, leading to oscillations in training; if $\bm{\theta}$ is small, the condition number will also be very small, resulting in smaller changes of $\bm{\theta}$ between adjacent iterations and therefore slower convergence. In contrast, after the reformulation the condition number grows at a lower order in $\tilde{\bm{\theta}}$, which keeps it more stable during training and alleviates both problems.
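The empirical quantity $\mathrm{cond}=|\Delta \mathcal{L}_{\mathcal{F}}|/\lVert\Delta \bm{\theta}\rVert_2$ can also be estimated directly during training. The sketch below takes one gradient-descent step on the original loss (numerical gradients, hypothetical parameters); for a small step, $\mathrm{cond}$ is close to $\lVert\nabla_{\bm{\theta}}\mathcal{L}_{\mathcal{F}}\rVert_2$, consistent with the derivation above:

```python
import numpy as np

rng = np.random.default_rng(3)
K, a = 6, 2.0
theta0 = 0.5 * rng.normal(size=3 * K)
xs = np.linspace(0.0, 2 * np.pi, 2001)

def loss(theta):
    """Discretized L_F + L_B for the 1D Poisson problem."""
    c, w, b = theta.reshape(3, K)
    S = np.tanh(np.outer(xs, w) + b)
    d2u = (-2 * S * (1 - S ** 2) * w ** 2) @ c
    bc = (c @ np.tanh(b)) ** 2 + (c @ np.tanh(2 * np.pi * w + b)) ** 2
    return np.mean((d2u + a ** 2 * np.sin(a * xs)) ** 2) + bc

# one gradient-descent step (central-difference gradient), then cond
eps, lr = 1e-6, 1e-4
g = np.array([(loss(theta0 + eps * e) - loss(theta0 - eps * e)) / (2 * eps)
              for e in np.eye(3 * K)])
theta1 = theta0 - lr * g
cond = abs(loss(theta1) - loss(theta0)) / np.linalg.norm(theta1 - theta0)
```

Tracking this quantity over iterations gives an inexpensive diagnostic of training stability for both formulations.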
\subsection{Experimental Details}\label{sec_a8}
In the following, we briefly introduce some essential details of our experiments, including the experimental environment, hyper-parameters, construction of the ansatz, and the governing PDEs of each experiment. We first introduce the experimental environment; the other details are deferred to the subsections corresponding to each experiment.
\paragraph{Experimental environment}
We use PyTorch \citep{pytorch2019} as our deep learning library, and our code for physics-informed learning is based on DeepXDE \citep{lu2021deepxde}. We train all models except the domain-decomposition-based baselines (i.e., xPINN, FBPINN, and PFNN-2) on one NVIDIA TITAN Xp 12GB GPU; the latter three are trained on eight NVIDIA GeForce RTX 3090 24GB GPUs, since domain-decomposition-based models consist of several sub-networks and require more memory. The operating system is Ubuntu 18.04.5 LTS. When the analytical solution is unavailable, the ground-truth solutions to the PDEs (i.e., the testing data) are generated by COMSOL Multiphysics, a commercial FEM software package. The generated testing data is included in the zip file.
\subsubsection{Simulation of a 2D battery pack (Heat Equation)}\label{sec_a8_1}
\paragraph{Governing PDEs}
The governing PDEs (along with boundary/initial conditions) are given by
\begin{subequations}
\label{eq:pde_battery}
\begin{align}
\frac{\partial T}{\partial t}&=k\Delta T(\bm{x},t),&&\boldsymbol x\in \Omega,t\in(0,1], \\
k\big(\bm{n}(\bm{x}) \cdot \nabla T(\bm{x},t)\big) &=h\big(T_a - T(\bm{x},t)\big),&&\boldsymbol x\in \gamma_{\mathrm{outer}},t\in(0,1], \\
k\big(\bm{n}(\bm{x}) \cdot \nabla T(\bm{x},t)\big) &=h\big(T_c - T(\bm{x},t)\big),&&\boldsymbol x\in \gamma_{\mathrm{cell},i},t\in(0,1],&i=1,\dots,n_c, \\
k\big(\bm{n}(\bm{x}) \cdot \nabla T(\bm{x},t)\big) &=h\big(T_w - T(\bm{x},t)\big),&&\boldsymbol x\in \gamma_{\mathrm{pipe},i},t\in(0,1],&i=1,\dots,n_w, \\
T(\boldsymbol x,0)&=T_0,&&\boldsymbol x\in \Omega,
\end{align}
\end{subequations}
where $\bm{x}=(x_1,x_2)$ and $t$ are the spatial and temporal coordinates, respectively, $T(\bm{x},t)$ is the temperature over time, $k=1$ is the thermal conductivity, $\Delta T=\partial^2 T / \partial x_1^2 + \partial^2 T / \partial x_2^2$, $h=1$ is the heat transfer coefficient, $\nabla T=(\partial T / \partial x_1, \partial T / \partial x_2)$, $T_a=0.1$, $T_c=5$, and $T_w=1$ are, respectively, the temperatures of the air, the cells ($n_c=11$ cells of radius $r_c=1$), and the cooling pipes ($n_w=6$ pipes of radius $r_w=0.4$), $T_0=0.1$ is the initial temperature, and the geometry (i.e., $\Omega$, $\gamma_{\mathrm{outer}}$, etc.) is shown in Figure~3(a).
The reformulated PDEs, used by the proposed model (HC), are
\begin{subequations}
\begin{align}
\frac{\partial T}{\partial t}&=k\big(\nabla \cdot \bm{p} (\bm{x}, t)\big),&&\boldsymbol x\in \Omega,t\in(0,1], \\
\bm{p}(\bm{x}, t) &= \nabla T,&&\boldsymbol x\in \Omega\cup\partial\Omega,t\in(0,1],\\
k\big(\bm{n}(\bm{x})\cdot\bm{p}(\bm{x}, t)\big) &=h\big(T_a - T(\bm{x},t)\big),&&\boldsymbol x\in \gamma_{\mathrm{outer}},t\in(0,1],\label{eq:robin_1}\\
k\big(\bm{n}(\bm{x})\cdot\bm{p}(\bm{x},t)\big) &=h\big(T_c - T(\bm{x},t)\big),&&\boldsymbol x\in \gamma_{\mathrm{cell},i},t\in(0,1],&i=1,\dots,n_c, \label{eq:robin_2}\\
k\big(\bm{n}(\bm{x})\cdot\bm{p}(\bm{x},t)\big) &=h\big(T_w - T(\bm{x},t)\big),&&\boldsymbol x\in \gamma_{\mathrm{pipe},i},t\in(0,1],&i=1,\dots,n_w, \label{eq:robin_3}\\
T(\boldsymbol x,0)&=T_0,&&\boldsymbol x\in\Omega,
\end{align}
\end{subequations}
where $\bm{p}(\bm{x},t)$ is the introduced extra field.
\paragraph{Construction of the Ansatz}
Since the solution is a scalar function (i.e., $T(\bm{x},t)$), we directly denote the solution by $u(\bm{x}, t)=T(\bm{x},t)$. Let $\tilde{\bm{p}}$ denote $(u, \bm{p})$. We first derive the general solutions at $\gamma_{\mathrm{outer}}, \gamma_{\mathrm{cell},i}, \gamma_{\mathrm{pipe},i}$, respectively.
For $\bm{x}\in \gamma_{\mathrm{outer}}$, we have $a(\bm{x}) = h$, $b(\bm{x})=k$, $g(\bm{x})=hT_a$. According to Eq.~(10), the general solution $\tilde{\bm{p}}^{\gamma_{\mathrm{outer}}}$ is given by
\begin{equation}
\tilde{\bm{p}}^{\gamma_{\mathrm{outer}}}=\bm{B}(\bm{x})\mathrm{NN}^{\gamma_{\mathrm{outer}}}(\bm{x}, t) + \frac{(h,k\bm{n})}{\sqrt{h^2 + k^2}}\frac{hT_a}{\sqrt{h^2 + k^2}},
\end{equation}
where $\bm{B}(\bm{x})$ is computed in Eq.~\eqref{eq:accept_example} (with $\tilde{\bm{n}} = (\tilde{n}_1,\tilde{n}_2,\tilde{n}_3)=(h,k\bm{n})/{\sqrt{h^2 + k^2}}$). For $\bm{x} \in \gamma_{\mathrm{cell},i}$ and $\gamma_{\mathrm{pipe},i}$, the derivation is similar; we only need to replace $T_a$ with $T_c$ and $T_w$, respectively.
Then, we gather all the general solutions computed to form our ansatz $(\hat{u}, \hat{\bm{p}})$ according to Eq.~\eqref{eq_time_ansatz}, where $\{ \gamma_{\mathrm{outer}}, \gamma_{\mathrm{cell},1},\dots,\gamma_{\mathrm{cell},n_c},\gamma_{\mathrm{pipe},1},\dots,\gamma_{\mathrm{pipe},n_w} \}$ are reordered as $\{ \gamma_{i} \}_{i=1}^{1+n_c+n_w}$ and $f(\bm{x}) = T_0$.
\paragraph{Choices of Extended Distance Functions}
For $\gamma_{\mathrm{cell},i}$ and $\gamma_{\mathrm{pipe},i}$, since they are 2D circles, we can directly choose the extended distance functions $l^{\gamma_{\mathrm{cell},i}}(\bm{x})$ and $l^{\gamma_{\mathrm{pipe},i}}(\bm{x})$ as the distance between $\bm{x}$ and the center minus the radius. For the rectangular $\gamma_{\mathrm{outer}}$, supposing that it is given by $[a_1, a_2]\times[b_1,b_2]$, we construct the extended distance function $l^{\gamma_{\mathrm{outer}}}$ as follows
\begin{equation}\label{eq_ext_dist_rec}
l^{\gamma_{\mathrm{outer}}}(\bm{x}) = \mathrm{SoftMin} (x_1 - a_1, a_2 - x_1, x_2 - b_1, b_2 - x_2),
\end{equation}
where $\bm{x}=(x_1,x_2)$ and $\mathrm{SoftMin}$ is a differentiable version of the min function, implemented via \texttt{LogSumExp} in PyTorch, i.e., $\mathrm{SoftMin}(\bm{y}) = \mathrm{LogSumExp}(-\beta \bm{y}) / (-\beta)$ with $\beta=4$. The extended distance function $l^{\partial \Omega}(\bm{x})$ is computed by taking the $\mathrm{SoftMin}$ of the distances to all the boundaries
\begin{equation}
l^{\partial \Omega}(\bm{x}) = \mathrm{SoftMin}(l^{\gamma_{\mathrm{outer}}}(\bm{x}), l^{\gamma_{\mathrm{cell},1}}(\bm{x}),\dots,l^{\gamma_{\mathrm{cell},n_c}}(\bm{x}),l^{\gamma_{\mathrm{pipe},1}}(\bm{x}),\dots,l^{\gamma_{\mathrm{pipe},n_w}}(\bm{x})).
\end{equation}
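As a concrete sketch, the SoftMin and the two kinds of extended distance functions above might be implemented as follows in PyTorch (helper names are ours, not from the released code; $\beta=4$ as in Eq.~\eqref{eq_ext_dist_rec}):

```python
import torch

def softmin(*vals, beta=4.0):
    # Differentiable min via LogSumExp: SoftMin(y) = LogSumExp(-beta * y) / (-beta)
    y = torch.stack(torch.broadcast_tensors(*vals), dim=0)
    return torch.logsumexp(-beta * y, dim=0) / (-beta)

def l_circle(x, center, radius):
    # Extended distance to a circular boundary: |x - c| - r (positive outside).
    return torch.linalg.norm(x - center, dim=-1) - radius

def l_rectangle(x, a1, a2, b1, b2, beta=4.0):
    # Extended distance to the rectangle [a1, a2] x [b1, b2], as in the equation above.
    x1, x2 = x[..., 0], x[..., 1]
    return softmin(x1 - a1, a2 - x1, x2 - b1, b2 - x2, beta=beta)
```

Note that the SoftMin is a strict lower bound of the true minimum, which is harmless here since only the zero level set on the boundary matters.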
\paragraph{Implementation}
All the models are trained for 5000 Adam iterations (with the \texttt{ReduceLROnPlateau} learning rate scheduler from PyTorch and an initial learning rate of 0.01), followed by an L-BFGS optimization until convergence. Unless otherwise specified, the mean squared error (MSE) is used as the loss function and $\tanh$ as the activation function. The hyper-parameters of each model are listed as follows:
\begin{itemize}
\item \textbf{HC}: The main neural network is a multilayer perceptron (MLP) of size $[3] + 4\times[50] + [3]$ (which means 3 inputs, 4 hidden layers of width 50, and 3 outputs). The sub-networks (corresponding to Eq.~\eqref{eq:robin_1}, Eq.~\eqref{eq:robin_2}, and Eq.~\eqref{eq:robin_3}) are all MLPs of size $[3] + 3\times[20] + [3]$. The hyper-parameters of ``hardness'' are $\beta_s=5$ and $\beta_t=10$.
\item \textbf{PINN}: The ansatz is an MLP of size $[3] + 4\times[50] + [1]$.
\item \textbf{PINN-LA}: The weights of the loss terms corresponding to the BCs are approximated by $\hat{\lambda}_i = \max_{\theta_n}\{ |\nabla_\theta \mathcal{L}_r(\theta_n)| \} \Big/ \overline{|\nabla_\theta \lambda_i \mathcal{L}_i(\theta_n)|}$. The parameter of the moving average is $\alpha=0.1$, as recommended in \citep{wang2021understanding}. The remaining parameters are the same as for PINN above.
\item \textbf{PINN-LA-2}: In our modified version, we approximate the weights of the loss terms as $\hat{\lambda}_i = \overline{ |\nabla_\theta \mathcal{L}_r(\theta_n)| } \Big/ \overline{|\nabla_\theta \lambda_i \mathcal{L}_i(\theta_n)|}$, again with moving-average parameter $\alpha=0.1$. Here we replace the maximum with the mean to make the weights of the loss terms more stable during training.
\item \textbf{FBPINN}: The domain of the problem is divided into $4\times 6=24$ subdomains by a regular grid. The sub-network corresponding to each subdomain is an MLP of size $[3] + 3\times[30] + [1]$. The scale factor is $\sigma=0.4$, chosen so that the window function is close to zero outside the overlapping region of the subdomains.
\item \textbf{xPINN}: The domain of the problem is divided into $4\times 6=24$ subdomains by a regular grid. The sub-network corresponding to each subdomain is of size $[3] + 3\times[30] + [1]$. The loss terms of the interface condition include the average-solution and residual continuity conditions.
\item \textbf{PFNN}: The PFNN considers the variational formulation of Eq.~\eqref{eq:pde_battery} (i.e., the Ritz formulation) and embeds the initial condition (IC) into its ansatz (similar to Eq.~(5)). The neural network is an MLP of size $[3] + 4\times[50] + [1]$.
\item \textbf{PFNN-2}: PFNN-2 replaces the single neural network of PFNN with a domain-decomposition-based network. In the original literature \citep{sheng2022pfnn}, the domain is decomposed in a hard way (as in xPINN). However, in our experiments (see Table~1), we find that the performance of hard decomposition is relatively poor: new loss terms are needed to maintain the continuity of the ansatz at the interfaces between the sub-domains, which further aggravates the unbalanced competition. To overcome this, we instead employ a soft domain decomposition, as in FBPINN. See the entries for PFNN and FBPINN for the values of the hyper-parameters.
\end{itemize}
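The modified weight update of PINN-LA-2 can be sketched as follows. This is a minimal illustration with our own helper names (\texttt{grad\_abs\_mean}, \texttt{update\_weights}); for brevity the moving average over iterations is folded directly into the weight update with $\alpha=0.1$:

```python
import torch

def grad_abs_mean(loss, params):
    # Mean absolute gradient of `loss` over all entries of all parameters.
    grads = torch.autograd.grad(loss, params, retain_graph=True,
                                allow_unused=True)
    flat = torch.cat([g.reshape(-1).abs() for g in grads if g is not None])
    return flat.mean()

def update_weights(loss_r, bc_losses, weights, params, alpha=0.1):
    # PINN-LA-2 rule: mean |grad L_r| over mean |grad lambda_i L_i|,
    # smoothed by an exponential moving average with parameter alpha.
    ref = grad_abs_mean(loss_r, params)
    new = []
    for lam, loss_i in zip(weights, bc_losses):
        lam_hat = ref / grad_abs_mean(lam * loss_i, params)
        new.append((1 - alpha) * lam + alpha * lam_hat.item())
    return new
```

Using the mean rather than the max of $|\nabla_\theta \mathcal{L}_r|$ makes the reference scale, and hence the weights, far less sensitive to outlier gradient entries.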
\subsubsection{Simulation of an Airfoil (Navier-Stokes Equations)}\label{sec_a8_2}
\paragraph{Governing PDEs}
The governing PDEs are given by
\begin{subequations}
\begin{align}
\boldsymbol u(\bm{x}) \cdot \nabla \boldsymbol u(\bm{x}) &= -\nabla p(\bm{x}) + v\nabla^2\boldsymbol u(\bm{x}) ,&&\boldsymbol x\in \Omega,\\
\nabla \cdot \boldsymbol u(\bm{x}) &= 0,&&\boldsymbol x\in \Omega,\\
\boldsymbol u(\bm{x})&=\boldsymbol{u}_0(\bm{x}),&&\boldsymbol x\in \gamma_{\mathrm{inlet}}\cup\gamma_{\mathrm{top}}\cup\gamma_{\mathrm{bottom}},\\
p(\bm{x})&=1,&&\boldsymbol x\in \gamma_{\mathrm{outlet}},\\
\bm{n}(\bm{x}) \cdot \bm{u}(\bm{x}) &=0,&&\boldsymbol x\in\gamma_{\mathrm{airfoil}},
\end{align}
\end{subequations}
where $\bm{u}(\bm{x}) = (u_1(\bm{x}), u_2(\bm{x}))$, $p(\bm{x})$, $v=1/50$ are the velocity, pressure, and viscosity of the fluid, respectively, $\bm{u}_0(\bm{x}) = (1, 0)$, and the geometry of the problem (i.e., $\Omega$, $\gamma_{\mathrm{inlet}}$, etc) is shown in Figure~3(b).
The reformulated PDEs, used by the proposed model (HC), are
\begin{subequations}
\begin{align}
\big(\boldsymbol u(\bm{x}) - v\nabla\big) \cdot \big(\bm{p}_1(\bm{x}), \bm{p}_2(\bm{x})\big) &= -\nabla p(\bm{x}),&&\boldsymbol x\in \Omega,\\
\nabla \cdot \boldsymbol u(\bm{x}) &= 0,&&\boldsymbol x\in \Omega,\\
\bm{p}_1(\bm{x}) &= \nabla u_1,&&\boldsymbol x\in \Omega\cup\partial\Omega,\\
\bm{p}_2(\bm{x}) &= \nabla u_2,&&\boldsymbol x\in \Omega\cup\partial\Omega,\\
\boldsymbol u(\bm{x})&=\boldsymbol{u}_0(\bm{x}),&&\boldsymbol x\in \gamma_{\mathrm{inlet}}\cup\gamma_{\mathrm{top}}\cup\gamma_{\mathrm{bottom}},\\
p(\bm{x})&=1,&&\boldsymbol x\in \gamma_{\mathrm{outlet}},\\
\bm{n}(\bm{x}) \cdot \bm{u}(\bm{x}) &=0,&&\boldsymbol x\in\gamma_{\mathrm{airfoil}}, \label{eq:ns_bc}
\end{align}
\end{subequations}
where $\bm{p}_1(\bm{x})$ and $\bm{p}_2(\bm{x})$ are the introduced extra fields.
\paragraph{Construction of the Ansatz}
Here, the solution is $(\bm{u}(\bm{x}), p(\bm{x}))$. For $p(\bm{x})$, the general solution on $\gamma_{\mathrm{outlet}}$ is exactly $p^{\gamma_{\mathrm{outlet}}}(\bm{x})=1$, and the ansatz for $p$ is given by
\begin{equation}
\hat{p} = p^{\gamma_{\mathrm{outlet}}}(\bm{x}) + l^{\gamma_{\mathrm{outlet}}}(\bm{x}) \mathrm{NN}^{\mathrm{main}}_j(\bm{x})[3],
\end{equation}
where $[3]$ means taking the third element of the output of $\mathrm{NN}^{\mathrm{main}}_j(\bm{x})$.
For $\bm{u}(\bm{x})$, the general solution in $\gamma_{\mathrm{inlet}}\cup\gamma_{\mathrm{top}}\cup \gamma_{\mathrm{bottom}}$ is exactly $\bm{u}^{\gamma_{*}}(\bm{x})=\bm{u}_0(\bm{x})$, where we define an alias $\gamma_{*}$ for $\gamma_{\mathrm{inlet}}\cup\gamma_{\mathrm{top}}\cup \gamma_{\mathrm{bottom}}$. In $\gamma_{\mathrm{airfoil}}$, the general solution is given by
\begin{equation}
\bm{u}^{\gamma_{\mathrm{airfoil}}} = \bm{B}(\bm{x}) \mathrm{NN}^{\gamma_{*}}(\bm{x}),
\end{equation}
where $\bm{B}(\bm{x}) = [n_2(\bm{x}), -n_1(\bm{x})]^\top$ according to Eq.~\eqref{eq:accept_example_2} and the output of $\mathrm{NN}^{\gamma_{*}}(\bm{x})$ is a scalar. Gathering $\bm{u}^{\gamma_{*}}$ and $\bm{u}^{\gamma_{\mathrm{airfoil}}}$, we then follow Eq.~(11) to obtain the ansatz for $\hat{\bm{u}}$
\begin{equation}
\hat{\bm{u}} = l^{\partial \Omega}(\bm{x}) \mathrm{NN}^{\mathrm{main}}_j(\bm{x})[1:2] + \exp\big[-\alpha_{\gamma_{*}} l^{\gamma_{*}}(\bm{x})\big]\bm{u}^{\gamma_{*}}(\bm{x}) + \exp\big[-\alpha_{\gamma_{\mathrm{airfoil}}} l^{\gamma_{\mathrm{airfoil}}}(\bm{x})\big]\bm{u}^{\gamma_{\mathrm{airfoil}}}(\bm{x}),
\end{equation}
where $[1:2]$ means taking the first two elements of the output of $\mathrm{NN}^{\mathrm{main}}_j(\bm{x})$, and $\alpha_{\gamma_{*}}$ as well as $\alpha_{\gamma_{\mathrm{airfoil}}}$ are defined similarly to Eq.~(12).
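The no-penetration general solution on the airfoil can be sketched as follows: the velocity is constrained to the tangent direction $\bm{B}(\bm{x}) = (n_2, -n_1)^\top$ times a scalar network output, so $\bm{n}\cdot\bm{u}=0$ holds by construction (\texttt{nn\_scalar} is a placeholder for the boundary sub-network):

```python
import torch

def u_airfoil(x, normals, nn_scalar):
    # B(x) = (n2, -n1)^T spans the tangent space of the airfoil boundary,
    # so n . u = n1*n2*s - n2*n1*s = 0 identically.
    n1, n2 = normals[..., 0:1], normals[..., 1:2]
    s = nn_scalar(x)  # one scalar output per boundary point
    return torch.cat([n2 * s, -n1 * s], dim=-1)
```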
\paragraph{Choices of Extended Distance Functions}
For $l^{\gamma_{\mathrm{airfoil}}}(\bm{x})$, a direct way is to calculate the distance between $\bm{x}$ and the airfoil $\gamma_{\mathrm{airfoil}}$. However, this may be very time-consuming since $\gamma_{\mathrm{airfoil}}$ is highly complicated, so we instead approximate the true distance with an MLP with 3 hidden layers of width 30. Before training our main model, we train this network on $1024 \times 6$ points sampled in $\Omega$ ($5/6$ of them in the bounding box of the airfoil, and the rest elsewhere in $\Omega$) along with their true distances (computed with the formula for the distance to a polygon) for $10{,}000$ Adam epochs (with the \texttt{ReduceLROnPlateau} learning rate scheduler from PyTorch and an initial learning rate of 0.001). The loss function is an $\ell_1$ loss. A polar coordinate transformation trick is utilized as in Appendix~\ref{sec_a5}.
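A hedged sketch of this distance-network pretraining is given below. The exact point-to-polygon distance routine and the sampling scheme are omitted, and \texttt{DistanceNet} and \texttt{pretrain} are our names:

```python
import torch
import torch.nn as nn

class DistanceNet(nn.Module):
    # Small MLP (3 hidden layers of width 30) approximating the distance field.
    def __init__(self, hidden=30, depth=3):
        super().__init__()
        layers, width = [], 2
        for _ in range(depth):
            layers += [nn.Linear(width, hidden), nn.Tanh()]
            width = hidden
        layers += [nn.Linear(width, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).squeeze(-1)

def pretrain(model, xs, dists, epochs=100, lr=1e-3):
    # Fit the network to precomputed true distances with an l1 loss.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(xs), dists)
        loss.backward()
        opt.step()
        sched.step(loss.detach())
    return model
```

The pretrained network is then frozen and used as $l^{\gamma_{\mathrm{airfoil}}}$ inside the ansatz.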
For $\gamma_{\mathrm{outlet}}$, since it is a vertical line, say $x_1=a$, we can compute the extended distance function as $l^{\gamma_{\mathrm{outlet}}}(\bm{x}) = a - x_1$, where $\bm{x}=(x_1, x_2)$. Since $\gamma_*$ is an open rectangle, we can compute $l^{\gamma_*}(\bm{x})$ as in the rectangular case (see Eq.~\eqref{eq_ext_dist_rec}) while ignoring the right side. Finally, $l^{\partial \Omega}(\bm{x})$ is again computed by taking the $\mathrm{SoftMin}$ of the distances to all the boundaries.
\paragraph{Implementation}
All the models are trained for 5000 Adam iterations (with the \texttt{ReduceLROnPlateau} learning rate scheduler from PyTorch and an initial learning rate of 0.001), followed by an L-BFGS optimization until convergence. The hyper-parameters of each model are listed as follows:
\begin{itemize}
\item \textbf{HC}: The main neural network is an MLP of size $[2] + 6\times[50] + [7]$. The sub-network (corresponding to Eq.~\eqref{eq:ns_bc}) is an MLP of size $[2] + 4\times[40] + [1]$. The hyper-parameter of ``hardness'' is $\beta_s=5$.
\item \textbf{PINN}: The ansatz is an MLP of size $[2] + 6\times[50] + [3]$.
\item \textbf{PINN-LA}: The parameter of the moving average is $\alpha=0.1$.
\item \textbf{PINN-LA-2}: The parameter of the moving average is $\alpha=0.1$.
\item \textbf{FBPINN}: The domain of the problem is divided into $3\times 6=18$ subdomains by a regular grid. The sub-network corresponding to each subdomain is an MLP of size $[2] + 4\times[30] + [3]$. The scale factor is $\sigma=0.2$, chosen so that the window function is close to zero outside the overlapping region of the subdomains.
\item \textbf{xPINN}: The domain of the problem is divided into $3\times 6=18$ subdomains by a regular grid. The sub-network corresponding to each subdomain is of size $[2] + 4\times[30] + [3]$. The loss terms of the interface condition include the average-solution and residual continuity conditions.
\end{itemize}
\subsubsection{High-dimensional Heat Equation}\label{sec_a8_3}
\paragraph{Governing PDEs}
The governing PDEs are given by
\begin{subequations}
\begin{align}
\frac{\partial u}{\partial t}&=k\Delta u(\bm{x},t)+f(\bm{x}, t),&&\boldsymbol x\in \Omega\subset \mathbb{R}^d,t\in(0,1], \\
\bm{n}(\bm{x}) \cdot \nabla u(\bm{x},t) &=g(\bm{x},t),&&\boldsymbol x\in\partial\Omega,t\in(0,1],\\
u(\bm{x},0)&=g(\bm{x}, 0),&&\boldsymbol x\in \Omega,
\end{align}
\end{subequations}
where $u$ is the quantity of interest, $k=1/d$, $f(\bm{x}, t)=-k|\bm{x}|^2\exp{(0.5|\bm{x}|^2+t)}$, $d=10$, $\Omega$ is the unit ball (i.e., $\Omega=\{|\bm{x}|\le 1\}$), and $g(\bm{x}, t)=\exp{(0.5|\bm{x}|^2+t)}$, which is also the analytical solution to the above PDEs.
The reformulated PDEs, used by the proposed model (HC), are
\begin{subequations}
\begin{align}
\frac{\partial u}{\partial t}&=k\big(\nabla\cdot \bm{p}(\bm{x},t)\big)+f(\bm{x}, t),&&\boldsymbol x\in \Omega\subset \mathbb{R}^d,t\in(0,1], \\
\bm{p}(\bm{x}, t)&=\nabla u,&&\bm{x}\in \Omega,t\in(0,1], \\
\bm{n}(\bm{x}) \cdot \bm{p}(\bm{x},t) &=g(\bm{x},t),&&\boldsymbol x\in\partial\Omega,t\in(0,1], \label{eq:high_bc} \\
u(\bm{x},0)&=g(\bm{x}, 0),&&\boldsymbol x\in \Omega,
\end{align}
\end{subequations}
where $\bm{p}(\bm{x},t)$ is the introduced extra field.
\paragraph{Construction of the Ansatz}
The solution to the PDEs is a scalar function $u$ and there is only one boundary $\partial\Omega=\{ |\bm{x}|=1 \}$. We now derive the general solution $\bm{p}^{\partial\Omega}$ with respect to Eq.~\eqref{eq:high_bc}
\begin{equation}
\bm{p}^{\partial\Omega} = \bm{B}(\bm{x}) \mathrm{NN}^{\partial \Omega} + \bm{n}(\bm{x})g(\bm{x},t),
\end{equation}
where $\bm{B}(\bm{x}) = \bm{I}_d -\bm{n}(\bm{x})\bm{n}(\bm{x})^\top$. The ansatz $(\hat{u},\hat{\bm{p}})$ is given by
\begin{subequations}
\begin{align}
\hat{\bm{p}} &= l^{\partial \Omega}(\bm{x}) \mathrm{NN}^{\mathrm{main}}_j(\bm{x},t)[1:d] + \bm{p}^{\partial\Omega}(\bm{x},t),\\
\hat{u} &= \mathrm{NN}^{\mathrm{main}}_j(\bm{x},t)[d+1] \big(1-\exp{[-\beta_t t]}\big) + g(\bm{x}, 0) \exp{[-\beta_t t]},
\end{align}
\end{subequations}
where $[1:d]$ means taking the first $d$ elements of the output of $\mathrm{NN}^{\mathrm{main}}_j(\bm{x},t)$ while $[d+1]$ means the last element.
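The whole construction, including the tangential projection $\bm{B}(\bm{x})\bm{v} = \bm{v} - \bm{n}(\bm{n}\cdot\bm{v})$ and the temporal blending with hardness $\beta_t$, can be sketched as follows. We assume a main network \texttt{nn\_main} with $d+1$ outputs and a boundary network \texttt{nn\_bdry} with $d$ outputs (these names are ours):

```python
import torch

def ansatz(x, t, nn_main, nn_bdry, g, beta_t=10.0):
    d = x.shape[-1]
    r = torch.linalg.norm(x, dim=-1, keepdim=True)
    n = x / r.clamp_min(1e-8)          # outward unit normal of the unit sphere
    l = 1.0 - r                        # extended distance (positive inside)
    z = torch.cat([x, t], dim=-1)
    out = nn_main(z)
    # General solution on the boundary: tangential projection plus n * g.
    v = nn_bdry(z)
    p_bdry = v - n * (n * v).sum(-1, keepdim=True) + n * g(x, t)
    p_hat = l * out[..., :d] + p_bdry
    # Blend the IC g(x, 0) in time with hardness beta_t.
    w = torch.exp(-beta_t * t)
    u_hat = out[..., d:d + 1] * (1.0 - w) + g(x, torch.zeros_like(t)) * w
    return u_hat, p_hat
```

By construction, $\hat{u}(\bm{x},0)=g(\bm{x},0)$ exactly, and on $|\bm{x}|=1$ the normal component of $\hat{\bm{p}}$ equals $g$ regardless of the network outputs.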
\paragraph{Choices of Extended Distance Functions}
Since $\partial \Omega$ is a sphere in $\mathbb{R}^d$, we can compute $l^{\partial \Omega}(\bm{x})$ by subtracting the distance between $\bm{x}$ and the center from the radius (the sign convention is opposite to that of the previous 2D circles, since $\partial \Omega$ is the outer boundary).
\paragraph{Implementation}
All the models are trained for 5000 Adam iterations (with the \texttt{ReduceLROnPlateau} learning rate scheduler from PyTorch and an initial learning rate of 0.01), followed by an L-BFGS optimization until convergence. The hyper-parameters of each model are listed as follows:
\begin{itemize}
\item \textbf{HC}: The main neural network is an MLP of size $[11] + 4\times[50] + [11]$. The sub-network (corresponding to Eq.~\eqref{eq:high_bc}) is an MLP of size $[11] + 3\times[20] + [10]$. The hyper-parameter of ``hardness'' is $\beta_t=10$ (here we only have one boundary, so we can construct our ansatz (in the spatial domain) as in Eq.~(3) instead of Eq.~\eqref{eq:hc_time}, and $\beta_s$ is no longer needed).
\item \textbf{PINN}: The ansatz is an MLP of size $[11] + 4 \times [50] + [1]$.
\item \textbf{PINN-LA}: The parameter of the moving average is $\alpha=0.1$.
\item \textbf{PINN-LA-2}: The parameter of the moving average is $\alpha=0.1$.
\item \textbf{PFNN}: The size of the neural network (an MLP) is $[11] + 4\times[50] + [1]$.
\end{itemize}
\subsubsection{Ablation Study: Extra fields}\label{sec_a8_4}
Here, our experiment is divided into two parts, considering the Poisson equation and the nonlinear Schrödinger equation, respectively. The relevant details are as follows.
\paragraph{Poisson's Equation}
The governing PDEs are described as
\begin{subequations}
\begin{align}
\Delta u(x)&=-a^2\sin{ax},&&x\in(0,2\pi),\\
u(x)&=0,&&x=0\lor x=2\pi,
\end{align}
\end{subequations}
where $a=2$ and $u(x)$ is the physical quantity of interest.
The reformulated PDEs (with the \textit{extra fields}) are
\begin{subequations}
\begin{align}
\nabla p(x)&=-a^2\sin{ax},&&x\in(0,2\pi),\\
p(x)&=\nabla u(x),&&x\in(0,2\pi),\\
u(x)&=0,&&x=0\lor x=2\pi,
\end{align}
\end{subequations}
where $p(x)$ is the introduced extra field.
The two models (PINNs with and without the \textit{extra fields}) are trained with $N_f = 128$ collocation points and $N_b = 2$ boundary points for 10,000 Adam iterations (with a learning rate of 0.001). We have tested different network architectures, including $[1] + 3\times[50] + [\cdot]$, $[1] + 3\times[100] + [\cdot]$, $[1] + 5\times[50] + [\cdot]$, and $[1] + 5\times[100] + [\cdot]$, where the number of outputs is 1 for the PINN without the \textit{extra fields} and 2 for the PINN with them.
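For concreteness, the two residual formulations compared in this ablation can be sketched as follows (\texttt{net} is any network returning $(u)$ or $(u, p)$; the function names are ours):

```python
import torch

a = 2.0  # coefficient in -a^2 sin(ax)

def residual_standard(net, x):
    # Second-order residual: u'' + a^2 sin(ax).
    x = x.requires_grad_(True)
    u = net(x)[..., 0:1]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_xx + a ** 2 * torch.sin(a * x)

def residual_extra_fields(net, x):
    # First-order system with the extra field: p' + a^2 sin(ax) and p - u'.
    x = x.requires_grad_(True)
    out = net(x)
    u, p = out[..., 0:1], out[..., 1:2]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    p_x = torch.autograd.grad(p.sum(), x, create_graph=True)[0]
    return p_x + a ** 2 * torch.sin(a * x), p - u_x
```

The extra-field version trades one residual involving second derivatives for two residuals involving only first derivatives.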
\begin{figure}
\caption{The CV of all the values of $\overline{|\nabla_{\bm{\theta}}\mathcal{L}_{\mathcal{F}}|}$ and $\overline{|\nabla_{\tilde{\bm{\theta}}}\tilde{\mathcal{L}}_{\mathcal{F}}|}$ for the Poisson equation and the Schrödinger equation.}
\label{fig:cv_poission}
\label{fig:cv_sh}
\label{fig:abla_cv}
\end{figure}
\paragraph{Schrödinger Equation}
The governing PDEs are described as
\begin{subequations}
\begin{align}
i\frac{\partial h}{\partial t}+\frac{1}{2}\frac{\partial^2 h}{\partial x^2}+|h(x,t)|^2h(x,t)&=0,&&x\in(-5,5),t\in(0,\pi/2],\\
h(-5,t)&=h(5,t),&&t\in(0,\pi/2],\\
\frac{\partial h}{\partial x}(-5,t)&=\frac{\partial h}{\partial x}(5,t),&&t\in(0,\pi/2],\\
h(x,0)&=2\sech (x),&&x\in(-5,5),
\end{align}
\end{subequations}
where $h(x,t)$ is the physical quantity of interest.
The reformulated PDEs (with the \textit{extra fields}) are
\begin{subequations}
\begin{align}
i\frac{\partial h}{\partial t}+\frac{1}{2}\frac{\partial p}{\partial x}+|h(x,t)|^2h(x,t)&=0,&&x\in(-5,5),t\in(0,\pi/2],\\
p(x,t)&=\frac{\partial h}{\partial x},&&x\in[-5,5],t\in(0,\pi/2],\\
h(-5,t)&=h(5,t),&&t\in(0,\pi/2],\\
\frac{\partial h}{\partial x}(-5,t)&=\frac{\partial h}{\partial x}(5,t),&&t\in(0,\pi/2],\\
h(x,0)&=2\sech (x),&&x\in(-5,5),
\end{align}
\end{subequations}
where $p(x,t)$ is the introduced extra field.
The two models (PINNs with and without the \textit{extra fields}) are trained with $N_f = 1000$ collocation points, $N_b = 20$ boundary points, and $N_i = 200$ initial points for 10,000 Adam iterations (with a learning rate of 0.001). We have tested different network architectures, including $[1] + 3\times[50] + [\cdot]$, $[1] + 3\times[100] + [\cdot]$, $[1] + 5\times[50] + [\cdot]$, and $[1] + 5\times[100] + [\cdot]$, where the number of outputs is 2 for the PINN without the \textit{extra fields} and 4 for the PINN with them.
\paragraph{Experimental Results}
We report the ratio of the moving variance (MovVar) of $\overline{|\nabla_{\bm{\theta}}\mathcal{L}_{\mathcal{F}}|}$ to that of $\overline{|{\nabla_{\tilde{\bm{\theta}}}\tilde{\mathcal{L}}_{\mathcal{F}}}|}$ at each iteration during training, where the window size of the MovVar is 500 and a moving-average filter with a window size of 500 is applied afterwards. The results are shown in Figure~4. Besides, we calculate the coefficient of variation (CV) of all the values of $\overline{|\nabla_{\bm{\theta}}\mathcal{L}_{\mathcal{F}}|}$ and $\overline{|{\nabla_{\tilde{\bm{\theta}}}\tilde{\mathcal{L}}_{\mathcal{F}}}|}$, respectively, and give the results in Figure~\ref{fig:abla_cv}. Using the CV as a criterion, we again find that the \textit{extra fields} significantly reduce the gradient oscillations during training, especially for complex nonlinear PDEs.
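The statistics used in this ablation can be sketched as follows (a naive $O(nw)$ implementation with our own function names):

```python
import torch

def moving(x, window, stat):
    # Apply `stat` over a trailing window at each index (shorter at the start).
    return torch.stack([stat(x[max(0, i - window + 1): i + 1])
                        for i in range(len(x))])

def mov_var(x, window=500):
    # Moving variance with the given window size.
    return moving(x, window, lambda w: w.var(unbiased=False))

def mov_avg(x, window=500):
    # Moving-average filter applied after the moving variance.
    return moving(x, window, torch.mean)

def cv(x):
    # Coefficient of variation: std / mean.
    return x.std(unbiased=False) / x.mean()
```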
\subsection{Empirical Analysis of Convergence}\label{sec_a9}
In this subsection, we empirically analyze the convergence of our method as well as some representative baselines in the context of the simulation of a 2D battery pack (see Section~5.2). We report the training history with respect to iterations in Figure~\ref{fig:convergence}. The left axis shows the loss of the PINNs (PINN, PINN-LA, and PINN-LA-2), while the right axis shows the loss of our method, HC. The PINN loss is computed by summing all the loss terms (including the losses of the PDEs and BCs), where the loss weights are ignored for PINN-LA and PINN-LA-2. The first 5000 iterations are trained with Adam (separated by the gray dotted line), and the last 15000 with L-BFGS.
\begin{figure}
\caption{The convergence history of the simulation of a 2D battery pack.}
\label{fig:convergence}
\end{figure}
\begin{table}[b!]
\vspace{-0.1in}
\caption{Parallel experimental results of the simulation of a 2D battery pack (MAE of $T$)}
\vspace{0.1in}
\centering
\begin{small}
\begin{tabular}{lllll}
\toprule
& $t=0$ & $t=0.5$ & $t=1$ & average\\
\midrule
PINN & $0.1232\pm 0.0219$ & $0.0417 \pm 0.0141$ & $0.0263 \pm 0.0078$ & $0.0499 \pm 0.0135$\\
PINN-LA & $0.1083\pm 0.0266$ & $0.0927 \pm 0.0372$ & $0.1168\pm 0.0739$ & $0.0969\pm 0.0385$\\
PINN-LA-2 & $0.1065\pm 0.0059$ & $0.0322\pm 0.0031$ & $\textbf{0.0200}$ $\pm \ 0.0020$ & $0.0400\pm 0.0031$\\
FBPINN & $0.0763\pm 0.0071$ & $0.0258\pm 0.0037$& $0.0205\pm 0.0041$ & $0.0318\pm 0.0027$\\
xPINN & $0.2085\pm 0.0252$ & $0.1144\pm 0.0194$ & $0.1352\pm 0.0241$ & $0.1310\pm 0.0194$\\
PFNN & $\textbf{0.0000}$ $\pm \ 0.0000$ & $0.3769\pm 0.0974$ & $0.6012\pm 0.2274$ & $0.3522\pm 0.1019$\\
PFNN-2 & $\textbf{0.0000}$ $\pm \ 0.0000$ & $0.3814\pm 0.0381$ & $0.5247\pm 0.0394$ & $0.3365\pm 0.0236$\\
\midrule
HC & $\textbf{0.0000}$ $\pm \ 0.0000$ & $\textbf{0.0244}$ $\pm \ 0.0010$ & $0.0226\pm 0.0012$ & $\textbf{0.0219}$ $\pm \ 0.0007$\\
\bottomrule
\end{tabular}
\end{small}
\label{parallel_tab_pack_1}
\end{table}
From the results in Figure~\ref{fig:convergence}, we can see that the loss functions of all models drop significantly after switching to L-BFGS. This shows that L-BFGS can further promote convergence by utilizing second-derivative information of the loss function. However, we should not start with L-BFGS, because doing so can easily lead to divergence; we consider Adam+L-BFGS a practical choice. Furthermore, we find that the convergence of PINNs is negatively affected by the learning rate annealing trick, especially PINN-LA without our modification. HC has the fastest convergence rate among all models, which suggests that the hard-constraint method or the extra fields may help accelerate convergence.
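The Adam-then-L-BFGS schedule used throughout can be sketched as follows (a minimal version; \texttt{loss\_fn} is a closure over the model parameters, and the iteration counts are arguments rather than fixed to the paper's values):

```python
import torch

def train(params, loss_fn, adam_iters=5000, lr=1e-2, lbfgs_iters=15000):
    # Phase 1: Adam with ReduceLROnPlateau on the training loss.
    opt = torch.optim.Adam(params, lr=lr)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt)
    for _ in range(adam_iters):
        opt.zero_grad()
        loss = loss_fn()
        loss.backward()
        opt.step()
        sched.step(loss.detach())
    # Phase 2: L-BFGS until (approximate) convergence.
    lbfgs = torch.optim.LBFGS(params, max_iter=lbfgs_iters,
                              line_search_fn="strong_wolfe")
    def closure():
        lbfgs.zero_grad()
        loss = loss_fn()
        loss.backward()
        return loss
    lbfgs.step(closure)
    return loss_fn().item()
```

Starting with Adam moves the parameters to a benign region before the quasi-Newton steps, which is why the L-BFGS phase rarely diverges in this schedule.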
\subsection{Parallel Experiments}\label{sec_a10}
In this subsection, we revisit the three experiments of Sections~5.2--5.4 and perform parallel tests over 5 runs to assess the significance of the results. We report the testing results (along with $95\%$ confidence intervals) in Tables~\ref{parallel_tab_pack_1}--\ref{parallel_high_2}. From the results, we can see that our method, HC, still outperforms all the other baselines. Besides, HC has the least variation, which shows that the hard-constraint methods can improve the stability of training.
\begin{table}[t]
\vspace{-0.1in}
\caption{Parallel experimental results of the simulation of a 2D battery pack (MAPE of $T$)}
\vspace{0.1in}
\centering
\begin{small}
\begin{tabular}{lllll}
\toprule
& $t=0$ & $t=0.5$ & $t=1$ & average\\
\midrule
PINN & $123.16 \pm 21.91\%$ & $10.97 \pm 3.57\%$ & $4.52\pm 0.98\%$ & $23.58 \pm 5.61\%$\\
PINN-LA & $108.14 \pm 26.57\%$ & $24.15 \pm 8.07\%$ & $17.98\pm 8.97\%$ & $33.64 \pm 8.89\%$\\
PINN-LA-2 & $106.39 \pm 5.86\%$ & $8.70 \pm 0.57\%$ & $\textbf{3.86}$ $\pm \ 0.36\%$ & $19.56 \pm 1.00\%$\\
FBPINN & $76.25 \pm 7.06\%$ & $7.69 \pm 0.68\%$ & $5.26\pm 0.71\%$ & $15.07 \pm 0.57\%$\\
xPINN & $208.36 \pm 25.21\%$ & $26.25 \pm 4.99\%$ & $18.15\pm 3.05\%$ & $49.60 \pm 7.37\%$\\
PFNN & $\textbf{0.02}$ $ \pm \ 0.00\%$ & $94.63 \pm 17.68\%$ & $105.38\pm 27.16\%$ & $80.92 \pm 15.08\%$\\
PFNN-2 & $\textbf{0.02}$ $ \pm \ 0.00\%$ & $71.39 \pm 6.98\%$ & $82.04\pm 8.90\%$ & $61.65 \pm 4.64\%$\\
\midrule
HC & $\textbf{0.02}$ $ \pm \ 0.00\%$ & $\textbf{5.29}$ $ \pm \ 0.16\%$ & $3.87\pm 0.14\%$ & $\textbf{5.22}$ $ \pm \ 0.17\%$\\
\bottomrule
\end{tabular}
\end{small}
\label{parallel_tab_pack_2}
\end{table}
\begin{table}[t]
\caption{Parallel experimental results of the simulation of an airfoil (MAE)}
\centering
\begin{small}
\begin{tabular}{llll}
\toprule
& $u_1$ & $u_2$ & $p$ \\
\midrule
PINN & $0.4234\pm 0.0809$ & $0.0681\pm 0.0162$ & $0.3204\pm 0.1404$\\
PINN-LA & $0.4467\pm 0.0450$ & $0.0630\pm 0.0061$& $0.3028\pm 0.0480$ \\
PINN-LA-2 & $0.4542\pm 0.0875$ & $0.0679\pm 0.0111$ & $0.3230\pm 0.1115$ \\
FBPINN & $0.3975\pm 0.0221$ & $0.0544\pm 0.0030$& $0.2650\pm 0.0059$\\
xPINN & $0.6942\pm 0.0432$ & $0.0581\pm 0.0013$& $1.1587\pm 0.1251$ \\
\midrule
HC & $\textbf{0.2824}$ $\pm\ 0.0215$ & $\textbf{0.0435}$ $\pm\ 0.0024$ & $\textbf{0.2144}$ $\pm\ 0.0114$\\
\bottomrule
\end{tabular}
\end{small}
\label{tab_airfoil_parallel}
\vspace{-0.1in}
\end{table}
\begin{table}[b]
\vspace{0.1in}
\caption{Parallel experimental results of the simulation of an airfoil (WMAPE)}
\centering
\begin{small}
\begin{tabular}{llll}
\toprule
& $u_1$ & $u_2$ & $p$ \\
\midrule
PINN & $0.5358\pm 0.1024$ & $1.1709\pm 0.2778$ & $0.2921\pm 0.1279$\\
PINN-LA & $0.5653\pm 0.0570$ & $1.0819\pm 0.1048$ & $0.2760\pm 0.0437$\\
PINN-LA-2 & $0.5747\pm 0.1106$ & $1.1670\pm 0.1920$ & $0.2944\pm 0.1016$\\
FBPINN & $0.5030\pm 0.0279$ & $0.9347\pm 0.0517$ & $0.2416\pm 0.0054$\\
xPINN & $0.8784\pm 0.0546$ & $0.9986\pm 0.0225$ & $1.0562\pm 0.1140$\\
\midrule
HC & $\textbf{0.3573}$ $\pm\ 0.0272$ & $\textbf{0.7472}$ $\pm\ 0.0418$& $\textbf{0.1954}$ $\pm\ 0.0104$\\
\bottomrule
\end{tabular}
\end{small}
\label{tab_airfoil_parallel_2}
\vspace{-0.1in}
\end{table}
\begin{table}[htbp]
\vspace{-0.1in}
\caption{Parallel experimental results of the high-dimensional heat equation (MAE of $u$)}
\vspace{0.1in}
\centering
\begin{small}
\begin{tabular}{lllll}
\toprule
& $t=0$ & $t=0.5$ & $t=1$ & average\\
\midrule
PINN & $0.0204\pm 0.0148$ & $0.0357 \pm 0.0104$ & $0.1600 \pm 0.0600$ & $0.0525 \pm 0.0173$\\
PINN-LA & $0.0430\pm 0.0751$ & $0.3039 \pm 0.6691$ & $0.8011\pm 1.7228$ & $0.3464\pm 0.7531$\\
PINN-LA-2 & $0.0287\pm 0.0670$ & $0.2071\pm 0.6524$ & $0.5933\pm 1.7225$ & $0.2455\pm 0.7433$\\
PFNN & $\textbf{0.0000}$ $\pm \ 0.0000$ & $0.0895\pm 0.0727$ & $0.2130\pm 0.1790$ & $0.0963\pm 0.0788$\\
\midrule
HC & $\textbf{0.0000}$ $\pm \ 0.0000$ & $\textbf{0.0028}$ $\pm \ 0.0006$ & $\textbf{0.0046}$ $\pm \ 0.0008$ & $\textbf{0.0027}$ $\pm \ 0.0006$\\
\bottomrule
\end{tabular}
\end{small}
\label{parallel_high_1}
\end{table}
\begin{table}[htbp]
\caption{Parallel experimental results of the high-dimensional heat equation (MAPE of $u$)}
\vspace{0.1in}
\centering
\begin{small}
\begin{tabular}{lllll}
\toprule
& $t=0$ & $t=0.5$ & $t=1$ & average\\
\midrule
PINN & $1.15 \pm 0.51\%$ & $1.41 \pm 0.41\%$ & $3.83\pm 1.45\%$ & $1.75 \pm 0.58\%$\\
PINN-LA & $2.86 \pm 5.07\%$ & $12.06 \pm 26.54\%$ & $19.35\pm 41.65\%$ & $11.46 \pm 24.75\%$\\
PINN-LA-2 & $1.92 \pm 4.55\%$ & $8.19 \pm 25.80\%$ & $14.29\pm 41.54\%$ & $8.00 \pm 24.25\%$\\
PFNN & $\textbf{0.00}$ $ \pm \ 0.00\%$ & $3.59 \pm 2.93\%$ & $5.19\pm 4.36\%$ & $3.20 \pm 2.60\%$\\
\midrule
HC & $\textbf{0.00}$ $ \pm \ 0.00\%$ & $\textbf{0.11}$ $ \pm \ 0.03\%$ & $\textbf{0.11}$ $\pm \ 0.02\%$ & $\textbf{0.10}$ $ \pm \ 0.02\%$\\
\bottomrule
\end{tabular}
\end{small}
\label{parallel_high_2}
\end{table}
\subsection{Ethics Statement}\label{sec_a11}
PDEs have important applications in many fields, including applied physics, automobile manufacturing, economics, and the aerospace industry. Solving PDEs via neural networks has attracted much attention in recent years, and such methods may be applied to these fields in the future.
Our method belongs to this family. However, neural-network-based PDE solvers currently lack theoretical explanation and safety guarantees. Applying such methods to safety-sensitive domains may lead to unexpected incidents whose causes are hard to diagnose. Possible solutions include developing alternatives with theoretical interpretability or using safeguards.
\end{document}
\begin{document}
\title [The differentiation of hypoelliptic diffusion semigroups] {The
differentiation of hypoelliptic diffusion semigroups}
\author[M. Arnaudon]{Marc Arnaudon} \address{ Laboratoire de Math\'ematiques
et Applications, CNRS: UMR6086
\break\indent Universit\'e de Poitiers,
T\'el\'eport 2 -- BP 30179
\break\indent F--86962 Futuroscope
Chasseneuil, France} \email{[email protected]}
\author[A. Thalmaier]{Anton Thalmaier} \address{ Unit\'e de Recherche en
Math\'ematiques, FSTC
\break\indent Universit\'e du
Luxembourg
\break\indent 6, rue Richard
Coudenhove-Kalergi
\break\indent L--1359 Luxembourg, Grand-Duchy of
Luxembourg} \email{[email protected]} \date{\today\ \emph{ File:
}\jobname.tex}
\begin{abstract}\noindent
Basic derivative formulas are presented for hypoelliptic heat semigroups and
harmonic functions extending earlier work in the elliptic case. Following
the approach of~\cite{TH 97}, emphasis is placed on developing integration
by parts formulas at the level of local martingales. Combined with the
optional sampling theorem, this turns out to be an efficient way of dealing
with boundary conditions, as well as with finite lifetime of the underlying
diffusion. Our formulas require hypoellipticity of the diffusion in the
sense of Malliavin calculus (integrability of the inverse Malliavin
covariance) and are formulated in terms of the derivative flow, the
Malliavin covariance and its inverse.
Finally some extensions to the nonlinear setting of harmonic mappings are
discussed.
\end{abstract}
\maketitle
\tableofcontents
\noindent\keywords{{\em Keywords}: Diffusion semigroup, hypoelliptic operator,
integration by parts,
\break Malliavin calculus, Malliavin covariance}
\noindent \subjclass{AMS 1991 Subject classification: Primary 58G32,
60H30; Secondary 60H10}
\section{Introduction}\label{Sect1}\noindent
Let $M$ be a smooth $n$-dimensional manifold. On $M$ consider a globally
defined Stratonovich SDE of the type
\begin{equation}
\label{eq:StratonovichSDE}
\delta X=A(X)\,\delta Z+A_0(X)\,dt
\end{equation}
with $A_0\in\Gamma(TM)$, $A\in\Gamma(\R^r\otimes TM)$ for some $r$, and $Z$ an
$\R^r$-valued Brownian motion on some filtered probability space satisfying
the usual completeness conditions. Here $\Gamma(TM)$, resp.\
$\Gamma(\R^r\otimes TM)$, denote the smooth sections over $M$ of the tangent
bundle $TM$, resp.\ the vector bundle $\R^r\otimes TM$.
Solutions to \eqref{eq:StratonovichSDE} are diffusions with generator given in
H\"ormander form as
\begin{equation}
\label{eq:Generator}
L=A_0+{1\over 2}\sum\limits_{i=1}^r A_i^2
\end{equation}
where $A_i=A(\,{\raise1.5pt\hbox{\bf .}}\,)e_i\in\Gamma(TM)$ and $e_i$ denotes the $i$th
standard unit vector in~$\R^r$.
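For readers who wish to experiment numerically, the action of the H\"ormander-form generator \eqref{eq:Generator} can be evaluated by nested directional finite differences. The following Python sketch is purely illustrative (the vector fields and test function are our choices, not taken from the text); it checks $Lf=A_0f+\tfrac12\sum_iA_i(A_if)$ against a hand computation for the Grushin-type fields $A_1=(1,0)$, $A_2=(0,x^1)$ on $\R^2$, which reappear in Section~\ref{Sect6}.

```python
def directional(g, x, v, h=1e-4):
    """Central-difference directional derivative (v . grad g)(x)."""
    xp = [xi + h * vi for xi, vi in zip(x, v)]
    xm = [xi - h * vi for xi, vi in zip(x, v)]
    return (g(xp) - g(xm)) / (2 * h)

def L(f, A0, As, x):
    """Hormander-form generator: Lf = A0 f + (1/2) sum_i A_i(A_i f)."""
    first = directional(f, x, A0(x))
    def Aif(i):
        # g(y) = (A_i f)(y) = A_i(y) . grad f(y)
        return lambda y: directional(f, y, As[i](y))
    second = sum(directional(Aif(i), x, As[i](x)) for i in range(len(As)))
    return first + 0.5 * second

# Illustrative Grushin-type fields: A0 = 0, A1 = (1,0), A2 = (0, x^1).
A0 = lambda x: (0.0, 0.0)
A1 = lambda x: (1.0, 0.0)
A2 = lambda x: (0.0, x[0])
f  = lambda x: x[0]**2 * x[1] + x[1]**2

x = (1.5, -0.7)
# By hand: Lf = (1/2)(f_{11} + (x^1)^2 f_{22}) = x^2 + (x^1)^2.
print(L(f, A0, (A1, A2), x))   # ≈ -0.7 + 2.25 = 1.55
```

For this polynomial $f$ the central differences are exact up to rounding, so the finite-difference value matches $x^2+(x^1)^2$ to machine precision.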
There is a partial flow $X_t(\,{\raise1.5pt\hbox{\bf .}}\,)$, $\zeta(\,{\raise1.5pt\hbox{\bf .}}\,)$ to
\eqref{eq:StratonovichSDE} such that for each $x\in M$ the process $X_t(x)$,
$0\le t<\zeta(x)$, is the maximal strong solution to
\eqref{eq:StratonovichSDE} with starting point $X_0(x)=x$ and explosion
time~$\zeta(x)$. Adopting the notation $X_t(x,\omega)=X_t(x)(\omega)$, resp.\
$\zeta(x,\omega)=\zeta(x)(\omega)$ and
\begin{equation*}
M_t(\omega)=\{x\in M\colon\ t<\zeta(x,\omega)\},
\end{equation*}
this means in addition that there exists a set $\Omega_0\subset\Omega$ of full
measure such that for all $\omega\in\Omega_0$ the following conditions hold:
\begin{itemize}
\item[(i)] $M_t(\omega)$ is open in $M$ for $t\geq0$, i.e.\
$\zeta(\,{\raise1.5pt\hbox{\bf .}}\,,\omega)$ is lower semicontinuous on $M$.
\item[(ii)] $\map{X_t(\,{\raise1.5pt\hbox{\bf .}}\,,\omega)}{M_t(\omega)}M$ is a diffeomorphism
onto an open subset $R_t(\omega)$ of $M$.
\item[(iii)] For $t>0$ the map $s\mapsto X_s(\,{\raise1.5pt\hbox{\bf .}}\,,\omega)$ is continuous
from $[0,t]$ to $C^\infty\bigl(M_t(\omega),M\bigr)$ when the latter is
equipped with the $C^\infty$-topology.
\end{itemize}
\noindent Thus, the differential $\map{T_xX_t}{T_xM}{T_{X_t}M}$ of the map
$\map{X_t}{M_t}{M}$ is well-defined at each point $x\in M_t$, for all
$\omega\in\Omega_0$. We also write $X_{t\ast}$ for $TX_t$.
Let
\begin{equation}
\label{eq:SemiGroupMin}
(P_tf)(x)=\E\bigl[\bigl(f\circ X_t(x)\bigr)\,1_{\{t<\zeta(x)\}}\bigr]
\end{equation}
be the minimal semigroup associated to \eqref{eq:StratonovichSDE}, acting on
bounded measurable functions $\map fM\R$.
Let $\hbox{Lie}\bigl(A_0,A_1,\dots,A_r\bigr)$ denote the Lie algebra generated
by $A_0,\dots,A_r$, i.e., the smallest $\R$-vector space of vector fields on
$M$ containing $A_0,\dots,A_r$ and being closed under Lie brackets. We
suppose that \eqref{eq:Generator} is non-degenerate in the sense that the
ideal generated by $(A_1,\dots,A_r)$ in
$\hbox{Lie}\bigl(A_0,A_1,\dots,A_r\bigr)$ is the full tangent space at each
point $x\in M$:
$$
\hbox{Lie}\bigl(A_i,\,[A_0,A_i]\colon\,i=1,\dots,r\bigr)(x)=T_xM
\quad\hbox{for all }x\in M.\leqno\hbox{(H1)}
$$
Note that (H1) is equivalent to the following H\"ormander condition for
${\partial\over\partial t}+L$ on $\R\times M$:
\begin{equation*}
\dim\hbox{Lie}\Bigl(\textstyle{\partial\over\partial t}
+A_0, A_1,\dots,A_r\Bigr)(t,x)=n+1\quad\hbox{for all }(t,x)\in\R\times M.
\end{equation*}
By H\"ormander's theorem, under (H1) the semigroup \eqref{eq:SemiGroupMin} is
strongly Feller (mapping bounded measurable functions on $M$ to bounded
continuous functions on $M$) and has a smooth density $p\in
C^\infty({]0,\infty[}\times M\times M)$ such that
\begin{equation*}
P\bigl\{X_t(x)\in dy,\ t<\zeta(x)\bigr\}=p(t,x,y)\mathpal{vol}(dy),\quad t>0,\ x\in M,
\end{equation*}
see \cite{BI 81} for a probabilistic discussion.
In this paper we are concerned with the problem of finding stochastic
representations, under hypothesis (H1), for the derivative $d(P_tf)$ of
\eqref{eq:SemiGroupMin} which do not involve derivatives of $f$. Analogously,
in the situation of $L$-harmonic functions $\map uD\R$, given on some domain
$D$ in $M$ by its boundary values $u\vert\partial D$ via
\begin{equation}
\label{eq:ReprHarmFunct}
u(x)=\E\,[u\circ X_{\tau(x)}(x)],
\end{equation}
formulas are developed for $du$ not involving derivatives of the boundary
function; here $\tau(x)$ is the first exit time of $X(x)$ from $D$.
The paper is organized as follows. In Section~\ref{Sect2} we collect some background on
Malliavin calculus related to hypoelliptic diffusions. In Section~\ref{Sect3} we
explain our approach to integration by parts in the hypoelliptic case which
leads to differentiation formulas for hypoelliptic semigroups. Section~\ref{Sect4} is
devoted to integration by parts formulas at the level of local martingales.
In Section~\ref{Sect5} control-theoretic aspects related to differentiation formulas are
discussed. It is shown that the solvability of a certain control problem
leads to simple formulas in particular cases; the method, however, turns out not
to cover the full hypoelliptic situation. We deal with the general situation
in Section~\ref{Sect7}, where we refine the arguments of Sections~\ref{Sect4} and \ref{Sect5} to give
probabilistic representations for the derivative of semigroups and
$L$-harmonic functions in the hypoelliptic case. A crucial step in this
approach is the use of the optional sampling theorem to obtain
local formulas by appropriate stopping times, as in the elliptic case \cite{TH 97},
\cite{T-W 98}. Our formulas are in terms of the derivative flow and
Malliavin's covariance; hence they are neither unique nor intrinsic: the
appearing terms depend on the specific SDE and not just on the generator.
Finally, in Section~\ref{Sect8}, we deal with possible extensions to nonlinear
situations, like the case of harmonic maps and nonlinear heat equations for
maps taking values in curved targets.
The formulas presented here do not require the full H\"ormander Lie algebra
condition~(H1), but only invertibility and integrability of the inverse
Malliavin covariance, which is known to be slightly weaker, yet still
sufficient to imply hypoellipticity of $\frac\partial{\partial t}+L$. In
particular, (H1) is allowed to fail on a collection of hypersurfaces. The
reader is referred to \cite{B-M 95} for precise statements in this direction.
\section{Hypoellipticity and the Malliavin Covariance}\label{Sect2}
\setcounter{equation}0\noindent Let $B\in\Gamma(TM)$ be a vector field on $M$.
We consider the push-forward $X_{t\ast}B$ (resp.\ pull-back $X_{t\ast}^{-1}B$)
of $B$ under the partial flow $X_t(\,{\raise1.5pt\hbox{\bf .}}\,)$ to the
system~\eqref{eq:StratonovichSDE}, more precisely,
\begin{equation}
\label{eq:PushedVF}
\begin{split}
(X_{t\ast}^\mstrut B)_x=\bigl(T_{X_t^{-1}(x)}X_t\bigr)\,B_{X_t^{-1}(x)}\,
&,\quad x\in R_t,\\
(X_{t\ast}^{-1}B)_x=\bigl(T_{X_t(x)}^\mstrut X_t\bigr)^{-1}\,B_{
X_t(x)}^\mstrut\, &,\quad x\in M_t.
\end{split}
\end{equation}
Note that $X_{t\ast}B$, resp.\ $X_{t\ast}^{-1}B$, are smooth vector fields on
$R_t$, resp.\ $M_t$, well-defined for all $\omega\in\Omega_0$. By definition,
\begin{equation}
\label{eq:PushedVFRewritten}
\begin{split}
(X_{t\ast}^\mstrut B)_x\,f&=B_{X_t^{-1}(x)}\,(f\circ X_t)\,,
\quad x\in R_t,\\
(X_{t\ast}^{-1}B)_x\,f&=B_{X_t(x)}^\mstrut\,(f\circ X_t^{-1})\,, \quad
x\in M_t,
\end{split}
\end{equation}
for germs $f$ of smooth functions at $x$.
\begin{thm}
\label{SDETransp}
The pushed vector fields $X_{t\ast}^\mstrut B$ and $X_{t\ast}^{-1}B$ as
defined by {\rm\eqref{eq:PushedVF}} satisfy the following SDEs:
\begin{align}
\label{eq:SDEPushedVf}
\delta(X_{t\ast}^\mstrut B) &=\sum_{i=1}^r\bigl[X_{t\ast}^\mstrut
B,A_i\bigr]\,\delta Z^i_t
+\bigl[X_{t\ast}^\mstrut B,A_0\bigr]\,dt,\\
\label{eq:SDEPushedVfInv}
\delta(X_{t\ast}^{-1}B)
&=\sum_{i=1}^r\bigl(X_{t\ast}^{-1}[A_i,B]\bigr)\,\delta Z^i_t
+\bigl(X_{t\ast}^{-1}[A_0,B]\bigr)\,dt.
\end{align}
\end{thm}
\begin{proof}
See Kunita~{\cite{KU 81}}, Section 5.
\end{proof}
We have the famous ``invertibility of the Malliavin covariance matrix'' under
the H\"ormander condition (H1), e.g., see~Bismut~\cite{BI 81}, Prop.~4.1.
\begin{thm}
\label{MallInv}
Suppose {\rm(H1)} holds. Let $\sigma$ be a predictable stopping time, $x\in
M$. Then, a.s., for any predictable stopping time $\tau<\zeta(x)$, on
$\{\sigma<\tau\}$
\begin{equation*}
\sum_{i=1}^r\int_\sigma^\tau
(X_{s\ast}^{-1}A_i)_x^\mstrut\otimes(X_{s\ast}^{-1}A_i)_x^\mstrut\,ds
\in T_xM\otimes T_xM
\end{equation*}
is a positive definite quadratic form on $T_x^\ast M$. In particular, a.s.,
for each $t>0$,
\begin{equation}
\label{eq:MallCov}
C_t(x)
=\sum_{i=1}^r\int_0^t
(X_{s\ast}^{-1}A_i)_x^\mstrut\otimes(X_{s\ast}^{-1}A_i)_x^\mstrut\,ds
\end{equation}
defines a positive symmetric bilinear form on $T_x^\ast M$ for $x\in M_t$.
\end{thm}
Thus, a.s., $C_t$ provides a smooth section of the bundle $TM\otimes TM$ over
$M_t$ with the property that all $C_t(x)$ are symmetric and positive definite.
We may choose a non-degenerate inner product $\langle\,\cdot,\cdot\,\rangle$
on $T_xM$ and read $C_t(x)\in T_xM\otimes T_xM$ as a positive definite
bilinear form on $T_xM$:
\begin{equation*}
\bigl\langle C_t(x)u,v\bigr\rangle
=\sum_{i=1}^r\int_0^t\bigl\langle(X_{s\ast}^{-1}A_i)_x,u\bigr\rangle\,
\bigl\langle(X_{s\ast}^{-1}A_i)_x,v\bigr\rangle\,ds,
\quad u,v\in T_xM.
\end{equation*}
Under (H1) the ``random matrix'' $C_t(x)$ is invertible for $t>0$ and $x\in
M_t$. The following property is a key point in the stochastic calculus of
variation, e.g., \cite{NO 86}, \cite{KU 90}, \cite{NU 95}.
\begin{remark}
\label{AllLp}
Under hypothesis (H1) and certain boundedness conditions on the vector
fields $A_0,A_1,\dots,A_r$ (which are satisfied for instance if $M$ is
compact) we have $(\det C_t(x))^{-1}\in L^p$ for all $1\leq p<\infty$. In
the same way,
\begin{equation}
\label{eq:InvMallCovLp}
(\det C_\sigma(x))^{-1}\in L^p\quad\hbox{for $1\leq p<\infty$}
\end{equation}
if $\sigma=\tau_D^\mstrut(x)$ or $\sigma=\tau_D^\mstrut(x)\wedge t$ for some
$t>0$ where $\tau_D^\mstrut(x)$ is the first exit time of $X_{\raise1.0pt\hbox{\bf .}}(x)$ from
some relatively compact open neighbourhood $D\not=M$ of $x$. Also note that
$\tau_D^\mstrut(x)\in L^p$ for all $1\leq p<\infty$, e.g.~\cite{B-K-S 84},
Lemma~(1.21).
\end{remark}
In the subsequent sections we adopt the following notation. By definition,
$C_t(x)\in T_xM\otimes T_xM$, thus $\map{C_t(x)}{T^\ast_xM}{T_xM}$ and
$\map{C_t(x)^{-1}}{T_xM}{T^\ast_xM}$. On the other hand,
\begin{equation}
\label{eq:XinverseA}
\map{(X_{s\ast}^{-1}A)_x^\mstrut}{\R^r}{T_xM},\quad
z\mapsto\sum_{i=1}^r(X_{s\ast}^{-1}A_i)_x^\mstrut\,z^i.
\end{equation}
Let $\map{(X_{s\ast}^{-1}A)^\ast_x}{T^\ast_xM}{(\R^r)^\ast\equiv\R^r}$ be the
adjoint (dual) map to \eqref{eq:XinverseA}, then we may write
\begin{equation}
\label{eq:MallCovRewritten}
C_t(x)=\int_0^t (X_{s\ast}^{-1}A)_x^\mstrut\,(X_{s\ast}^{-1}A)^\ast_x\,ds
\end{equation}
for the Malliavin covariance. In the sequel we usually identify $(\R^r)^\ast$
and $\R^r$.
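A minimal numerical sketch of \eqref{eq:MallCov} (our illustration, not part of the text): simulate $X$ together with its derivative flow $J_s=T_xX_s$ via an It\^o--Euler discretization of the variational equation, and accumulate $C_t(x)=\sum_i\int_0^t\bigl(J_s^{-1}A_i(X_s)\bigr)\otimes\bigl(J_s^{-1}A_i(X_s)\bigr)\,ds$. As a sanity check, for the constant fields $A=\mathrm{id}_{\R^2}$, $A_0=0$ (so $X$ is a Brownian motion and $J\equiv\mathrm{id}$) the Riemann sum returns $C_t(x)=t\,\mathrm{id}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def malliavin_cov(x0, A, dA, A0, dA0, T=1.0, steps=1000):
    """Ito-Euler scheme for X, the derivative flow J = T_x X, and C_T(x0).

    A(x): (d, r) matrix of diffusion fields; dA(x): (d, d, r) with
    dA[j, k, i] = d_k A_i^j; similarly A0, dA0 for the drift.
    """
    dt = T / steps
    d = len(x0)
    r = A(x0).shape[1]
    X = np.array(x0, float)
    J = np.eye(d)                 # derivative flow T_x X_s
    C = np.zeros((d, d))          # Malliavin covariance C_s(x0)
    for _ in range(steps):
        Jinv_A = np.linalg.solve(J, A(X))   # columns (X_{s*}^{-1} A_i)_x
        C += Jinv_A @ Jinv_A.T * dt         # sum_i (J^-1 A_i)(J^-1 A_i)^T ds
        dZ = rng.normal(scale=np.sqrt(dt), size=r)
        dX = A(X) @ dZ + A0(X) * dt
        dJ = np.einsum('jki,i->jk', dA(X), dZ) @ J + dA0(X) @ J * dt
        X, J = X + dX, J + dJ
    return C

# Sanity check: dX = dZ on R^2 (A = id, A0 = 0), so J = id and C_t = t*id.
A   = lambda x: np.eye(2)
dA  = lambda x: np.zeros((2, 2, 2))
A0  = lambda x: np.zeros(2)
dA0 = lambda x: np.zeros((2, 2))
C = malliavin_cov(np.zeros(2), A, dA, A0, dA0, T=1.0, steps=1000)
print(np.round(C, 6))   # ≈ [[1, 0], [0, 1]]
```

For genuinely hypoelliptic fields the same routine produces a random but (by Theorem~\ref{MallInv}) a.s.\ positive definite matrix.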
\section{A Basic Integration by Parts Argument}\label{Sect3}
\setcounter{equation}0\noindent In this section we explain an elementary
strategy for integration by parts formulas which will serve us as a guideline
in the sequel. The argument is inspired by Bismut's original approach to
Malliavin calculus~\cite{BI 81}.
Consider again the SDE \eqref{eq:StratonovichSDE} and assume (H1) to be
satisfied. For simplicity, we suppose that $M$ is compact. Let $a$ be a
predictable process taking values in $T_xM\otimes(\R^r)^\ast\equiv
T_xM\otimes\R^r$ and $\lambda\in T^\ast_xM$ such that for each $t>0$,
\begin{equation}
\label{eq:CondGirsanov}
\E\left[\exp\left({1\over2}\int_0^t\vert a_s\lambda\vert^2\,ds
\right)\right]<\infty,\quad
\hbox{ $\lambda$ locally about $0$.}
\end{equation}
Let $dZ^\lambda=dZ+a\lambda\,dt$ and consider the Girsanov exponential
$G^\lambda_{\raise1.0pt\hbox{\bf .}}$ defined by
\begin{equation}
G^\lambda_t=\exp\left(-\int_0^t\,\langle a_s\lambda,dZ_s\rangle
-{1\over2}\,\int_0^t\vert a_s\lambda\vert^2\,ds\right).
\end{equation}
Write $X^\lambda$ for the flow to our SDE driven by the perturbed BM
$Z^\lambda$, analogously $C_{\raise1.0pt\hbox{\bf .}}^\lambda(x)$ etc. By definition,
$C_{\raise1.0pt\hbox{\bf .}}^\lambda(x)\in T_xM\otimes T_xM$ is a linear map from $T^\ast_xM$ to
$T_xM$ and $\map{C_{\raise1.0pt\hbox{\bf .}}^\lambda(x)^{-1}}{T_xM}{T^\ast_xM}$.
\begin{lemma}
\label{DerMallInv}
For any vector field $B\in\Gamma(TM)$ we have
\begin{equation}
\label{eq:FormulaLieBracket}
{\partial\over\partial\lambda_k}
\biggl\vert_{\lambda=0}(X_{t\ast}^\lambda)^{-1}(B)
=\sum_{i=1}^r\left[\int_0^tX_{s\ast}^{-1}(A_i)\,a_s^{ik}\,ds\,,\,
X_{t\ast}^{-1}(B)\right]
\end{equation}
in terms of the Lie bracket\/ $[\,,\,]$.
\end{lemma}
\begin{proof}
Note that $X_t^\lambda(x)=X_t\circ\varrho_t^\lambda(x)$ where
$\varrho^\lambda(x)$ solves
\begin{equation*}
\left\lbrace\begin{aligned}
d\varrho_t^\lambda&=X_{t\ast}^{-1}(A)(\varrho_t^\lambda)\,a_t\lambda\,dt\\
\varrho_0^\lambda&=x.\end{aligned}\right.
\end{equation*}
In particular, we have
\begin{equation*} {\partial\over\partial\lambda_k}
\biggl\vert_{\lambda=0}\varrho_t^\lambda=
\sum_{i=1}^r\int_0^t X_{s\ast}^{-1}(A_i)\,a_s^{ik}\,ds.
\end{equation*}
Moreover, from $X_{t\ast}^\lambda(x)
=\bigl(T_{\varrho_t^\lambda(x)}X_t\bigr)(T_x\varrho_t^\lambda)$ we conclude
that
$$
\bigl((X_{t\ast}^\lambda)^{-1}B\bigr){}_x=(T_x\varrho_t^\lambda)^{-1}
\bigl(T_{\varrho_t^\lambda(x)}X_t\bigr)^{-1}\,
B\bigl(X_t\circ\varrho_t^\lambda(x)\bigr)
\equiv(T_x\varrho_t^\lambda)^{-1}(X_{t\ast}^{-1}B)_{\varrho_t^\lambda(x)}.
$$
This gives the claim by definition of the bracket.
\end{proof}
\begin{thm}
\label{ResultCrude}
Let $M$ be compact and $f\in C^1(M)$. Assume that {\rm(H1)} is satisfied.
Then, for each $v\in T_xM$,
\begin{equation}
\label{eq:FormulaCrude}
d(P_tf)_xv=\E\left[\bigl(f\circ X_t(x)\bigr)\,\Phi_t\,v\right]
\end{equation}
where $\Phi$ is an adapted process with values in $T_x^\ast M$ such that
each $\Phi_t$ is $L^p$ for any $1\leq p<\infty$.
\end{thm}
\begin{proof} We fix $x$ and identify $T_xM$ with $\R^n$. By Girsanov's
theorem, for $v\in T_xM$, the expression
\begin{equation*}
H_k(\lambda)=\sum_\ell\E\left[\bigl(f\circ X_t^\lambda(x)\bigr)\cdot
G_t^\lambda\cdot \bigl(C_t^\lambda(x)^{-1}\bigr)_{k\ell}\,v^\ell\right]
\end{equation*}
is independent of $\lambda$ for any $C^1$-function $f$ on $M$. Thus
\begin{equation*}
\displaystyle\sum_k{\partial\over\partial\lambda_k}
\biggl\vert_{\lambda=0}H_k(\lambda)=0
\end{equation*}
which gives
\begin{align*}
&\sum_{i,k,\ell}\E\left[\bigl(D_if\bigr)\bigl(X_t(x)\bigr)\biggl(X_{t\ast}
\int_0^t(X_{s\ast}^{-1}A)_x^\mstrut\,a_s\,ds\biggr)_{ik}
\bigl(C_t(x)^{-1}\bigr)_{k\ell}\,v^\ell\right]\\
=-&\sum_{k,\ell}\E\left[f\bigl(X_t(x)\bigr)\,
{\partial\over\partial\lambda_k}\biggl\vert_{\lambda=0}
\bigl(G_t^\lambda\,\bigl(C_t^\lambda(x)^{-1}\bigr)_{k\ell}\,v^\ell\bigr)\right]\\
=-&\sum_{k,\ell}\E\left[f\bigl(X_t(x)\bigr)
\left(\left({\partial\over\partial\lambda_k}\biggl\vert_{\lambda=0}\!
G_t^\lambda\right)\bigl(C_t(x)^{-1}\bigr)_{k\ell}
+{\partial\over\partial\lambda_k}\biggl\vert_{\lambda=0}
\bigl(C_t^\lambda(x)^{-1}\bigr)_{k\ell}\right)\,v^\ell \right].
\end{align*}
Note that
\begin{equation*} {\partial\over\partial\lambda_k}\biggl\vert_{\lambda=0}
G_t^\lambda=-\left(\int_0^t a_s^\ast\,dZ_s\right)_k
\end{equation*}
where $a^\ast$ taking values in $T_xM\otimes(\R^r)^\ast$ is defined as the
adjoint to $a$. Furthermore,
\begin{equation*}
{\partial\over\partial\lambda_k}\biggl\vert_{\lambda=0}C_t^\lambda(x)^{-1}
=-C_t(x)^{-1}\,\left({\partial\over\partial\lambda_k}\biggl\vert_{\lambda=0}
C_t^\lambda(x)\right)\,C_t(x)^{-1}.
\end{equation*}
Recall that $(X_{s\ast}^{-1}A)_x\in(\R^r)^\ast\otimes T_xM$. We set
\begin{equation*}
a_s=a^n_s=(X_{s\ast}^{-1}A)_x^\ast\,1_{\{s\leq\tau_n\}}\in T_xM\otimes(\R^r)^\ast
\end{equation*}
where $(\tau_n)$ is an increasing sequence of stopping times such that
$\tau_n\nearrow t$ and such that each $a^n_\bull$ satisfies condition
\eqref{eq:CondGirsanov}. This gives a formula of the type
\begin{equation}
\label{eq:ApproxFormula}
\E\bigl[(df)_{X_t(x)}\,X_{t\ast}\,C_{\tau_n}(x)\,C_t(x)^{-1}v\bigr]
=\E\left[\bigl(f\circ X_t(x)\bigr)\cdot\Phi_t^n\,v\right].
\end{equation}
Finally, taking the limit as $n\to\infty$, we get
\begin{equation}
\label{eq:FirstFormula}
d(P_tf)_xv=\E\bigl[(df)_{X_t(x)}\,X_{t\ast}v\bigr]
=\E\left[\bigl(f\circ X_t(x)\bigr)\cdot\Phi_t\,v\right]
\end{equation}
where
\begin{align*}
\Phi_t\,v
=&\left(\int_0^t(X^{-1}_{s\ast}A)_x^\mstrut\,dZ_s\right)C_t^{-1}(x)\,v\\
&+\sum_{k,\ell}\left(C_t(x)^{-1}\left({\partial\over\partial\lambda_k}
\biggl\vert_{\lambda=0}
C_t^\lambda(x)\right)\,C_t(x)^{-1}\right)_{k\ell}\,v^\ell
\end{align*}
which can be further evaluated by means of \eqref{eq:FormulaLieBracket}.
Eq.\ \eqref{eq:FormulaLieBracket} also allows us to conclude that
$\Phi_t\in\cap_{p\geq1}L^p$.
\end{proof}
\section{Integration by Parts at the Level of Local Martingales}\label{Sect4}
\setcounter{equation}0\noindent Let $F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))$, $x\in M$ be a
family of local martingales where $F$ is differentiable in the second variable
with a derivative jointly continuous in both variables. We are mainly
interested in the following two cases:
\begin{align*}
F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))
&=u\circ X_{\raise1.0pt\hbox{\bf .}}(x)\quad\hbox{for some $L$-harmonic function $u$ on $M$, and}\\
F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))&=(P_{t-{\raise1.0pt\hbox{\bf .}}}f)\bigl(X_{\raise1.0pt\hbox{\bf .}}(x)\bigr)
\hskip.3cm\hbox{for some bounded measurable $f$ on $M$, $t>0$.}
\end{align*}
Let $dF$ denote the differential of $F$ with respect to the second variable.
\begin{thm}
\label{LocMart}
Let $F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))$, $x\in M$ be a family of local martingales
as described above. Then, for any predictable $\R^r$-valued process $k$ in
$L^2_{\hbox{\sevenrm loc}}(Z)$,
\begin{equation}
\label{eq:QuasiDer}
dF(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))\,(T_xX_{\raise1.0pt\hbox{\bf .}})
\int_0^\bull (X_{s\ast}^{-1}A)_x^\mstrut k_s\,ds
-F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))\int_0^\bull\langle k,dZ\rangle,\quad x\in M,
\end{equation}
is a family of local martingales.
\end{thm}
\begin{proof}[Proof\/ {\rm(}by means of Girsanov\/{\rm)}]
For $\varepsilon$ varying locally about $0$, consider the SDE
\begin{equation}
\label{eq:SDEPert}
\delta X^\varepsilon=A(X^\varepsilon)\,\delta Z^\varepsilon
+A_0(X^\varepsilon)\,dt
\end{equation}
with the perturbed driving process
$dZ^\varepsilon=dZ+\varepsilon\,k\,dt$. Then, for each $\varepsilon$,
\begin{equation}
\label{eq:PertMart}
F\bigl(\,{\raise1.5pt\hbox{\bf .}}\,,X^\varepsilon_{\raise1.0pt\hbox{\bf .}}(x)\bigr)\,G^\varepsilon_{\raise1.0pt\hbox{\bf .}}
\end{equation}
is again a local martingale when the Girsanov exponential
$G^\varepsilon_{\raise1.0pt\hbox{\bf .}}$ is defined by
\begin{equation*}
G^\varepsilon_r=\exp\Bigl(-\int_0^r\varepsilon\,\langle k,dZ\rangle
-{1\over2}\,\varepsilon^2\!\int_0^r\vert k\vert^2\,ds\Bigr).
\end{equation*}
Moreover, the local martingale \eqref{eq:PertMart} depends in a $C^1$ way on the
parameter $\varepsilon$ (in the topology of compact convergence in
probability), thus
\begin{equation*}
{\partial\over\partial\varepsilon}\Bigl\vert_{\varepsilon=0}
F\bigl(\,{\raise1.5pt\hbox{\bf .}}\,,X^\varepsilon_{\raise1.0pt\hbox{\bf .}}(x)\bigr)\,G^\varepsilon_{\raise1.0pt\hbox{\bf .}}
={\partial\over\partial\varepsilon}\Bigl\vert_{\varepsilon=0}
F\bigl(\,{\raise1.5pt\hbox{\bf .}}\,,X^\varepsilon_{\raise1.0pt\hbox{\bf .}}(x)\bigr)
+F\bigl(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x)\bigr)\,
{\partial\over\partial\varepsilon}\Bigl\vert_{\varepsilon=0}G^\varepsilon_{\raise1.0pt\hbox{\bf .}}
\end{equation*}
is also a local martingale. Taking into account that
\begin{equation*}
{\partial\over\partial\varepsilon}\Bigl\vert_{\varepsilon=0}X^\varepsilon_r(x)
=X_{r\ast}\int_0^r X_{s\ast}^{-1}A\bigl(X_s(x)\bigr)k_s\,ds
\end{equation*}
and
\begin{equation*}
{\partial\over\partial\varepsilon}\Bigl\vert_{\varepsilon=0}G^\varepsilon_r=
-\int_0^r\langle k,dZ\rangle,
\end{equation*}
we get the claim.
\end{proof}
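The simplest instance of Theorem~\ref{LocMart} can be checked by simulation: on $M=\R$ with $A\equiv1$, $A_0\equiv0$ (so $X_t(x)=x+Z_t$ and $X_{t\ast}=1$), with $F(s,y)=y$ and $k\equiv1$, the process \eqref{eq:QuasiDer} becomes $t-(x+Z_t)Z_t$, a true martingale started at $0$. A Monte Carlo estimate of its expectation (an illustrative sketch, parameters ours):

```python
import math, random

random.seed(2)
x, t, N = 0.3, 1.0, 200_000

# With A = 1, A0 = 0, F(s, y) = y and k = 1, the local martingale of
# Theorem LocMart is  N_t = t - (x + Z_t) * Z_t,  so E[N_t] = 0.
mean_N = 0.0
for _ in range(N):
    Z = random.gauss(0.0, math.sqrt(t))
    mean_N += t - (x + Z) * Z
mean_N /= N
print(abs(mean_N) < 0.05)   # True: the expectation vanishes
```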
\begin{proof}[Alternative proof\/ {\rm(of Theorem \ref{LocMart}\rm)}]
First note that $m_s:=dF(s,\,{\raise1.5pt\hbox{\bf .}}\,)_{X_s(x)}\,X_{s\ast}$, as the
derivative of a family of local martingales, is a local martingale in
$T^\ast_xM$, see~\cite{A-T 96}. Thus also
\begin{equation*}
n_s:=m_sh_s-\int_0^sm_rdh_r
\end{equation*}
is a local martingale for any $T_xM$-valued adapted process $h$ locally of
bounded variation. Choosing
\begin{equation*}
h=\int_0^\bull (X_{s\ast}^{-1}A)_x^\mstrut k_s\,ds
\end{equation*}
and taking into account that
\begin{equation*}
F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))
=\int_0^\bull dF(s,\,{\raise1.5pt\hbox{\bf .}}\,)_{X_s(x)}\,A\bigl(X_s(x)\bigr)\,dZ,
\end{equation*}
the claim follows by noting that
\begin{equation*}
\int_0^\bull dF(s,\,{\raise1.5pt\hbox{\bf .}}\,)_{X_s(x)}\,X_{s\ast}\,dh_s\mathrel{\mathpalette\@mvereq{\hbox{\sevenrm m}}}
F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))\int_0^\bull\langle k,dZ\rangle
\end{equation*}
where $\mathrel{\mathpalette\@mvereq{\hbox{\sevenrm m}}}$ denotes equality modulo local martingales.
\end{proof}
Let $a$ be a predictable process taking values in $T_xM\otimes(\R^r)^\ast$ as
in the last section. The calculation above shows that
\begin{equation*}
n_s:=dF(s,{\raise1.5pt\hbox{\bf .}}\,)_{X_s(x)}\,X_{s\ast}
\left(\int_0^s (X_{r\ast}^{-1}A)_x^\mstrut\,a_r\,dr\right)
-F\bigl(s,X_s(x)\bigr)\int_0^sa_r^\ast\,dZ_r
\end{equation*}
is a local martingale in $T_xM$ which implies that
\begin{equation*}
N_s:=n_sh_s-\int_0^sn_r\,dh_r
\end{equation*}
is also a local martingale for any $T^\ast_xM$-valued adapted process $h$
locally of bounded variation. In particular, choosing again
$a_s=(X_{s\ast}^{-1}A)_x^\ast$, we get
\begin{align*}
N_s&=dF(s,{\raise1.5pt\hbox{\bf .}}\,)_{X_s(x)}\,X_{s\ast}\,C_s(x)\,h_s-
F\bigl(s,X_s(x)\bigr)\left(\int_0^s(X_{r\ast}^{-1}A)_x\,dZ_r\right)h_s\\
&-\int_0^s dF(r,{\raise1.5pt\hbox{\bf .}}\,)_{X_r(x)}\,X_{r\ast}\,C_r(x)\,dh_r +\int_0^s
F\bigl(r,X_r(x)\bigr)
\left(\int_0^r(X_{\rho\ast}^{-1}A)_x\,dZ_\rho\right)dh_r.
\end{align*}
For the last term it is trivial to observe that
\begin{align*}
\int_0^s F\bigl(r,X_r(x)\bigr)
&\left(\int_0^r(X_{\rho\ast}^{-1}A)_x\,dZ_\rho\right)dh_r\\
&\hskip1cm\mathrel{\mathpalette\@mvereq{\hbox{\sevenrm m}}} F\bigl(s,X_s(x)\bigr)\int_0^s
\left(\int_0^r(X_{\rho\ast}^{-1}A)_x\,dZ_\rho\right)dh_r.
\end{align*}
Now the idea is to take $h$ of the special form $h_s=C_s(x)^{-1}k_s$ for some
adapted $T_xM$-valued process $k$ locally pathwise of bounded variation such
that in addition $k_\tau=v$ and $k_s=0$ for $s$ close to~$0$. Then the
remaining problem is to replace
\begin{equation}
\label{eq:LousyTerm}
\int_0^sdF(r,{\raise1.5pt\hbox{\bf .}}\,)_{X_r(x)}\,X_{r\ast}\,C_r(x)\,dh_r
\end{equation}
modulo local martingales by expressions not involving derivatives of $F$.
This, however, seems difficult in general; in Section~\ref{Sect7} we show that
at least the expectation of \eqref{eq:LousyTerm} can be rewritten in terms
not involving derivatives of $F$.
\section{Hypoelliptic Diffusions and Control Theory}\label{Sect5}
\setcounter{equation}0\noindent The following two corollaries are immediate
consequences of Theorem \ref{LocMart}.
\begin{cor}
\label{LocMartPt}
Let $\map fM\R$ be bounded measurable. Fix $x\in M$ and $v\in T_xM$. Then,
for any predictable $\R^r$-valued process $k$ in $L^2_{\hbox{\sevenrm
loc}}(Z)$,
\begin{equation*}
(dP_{t-{\raise1.0pt\hbox{\bf .}}}f)_{X_\bull(x)}\,(T_xX_{\raise1.0pt\hbox{\bf .}})
\Bigl[v+\!\int_0^\bull (X_{s\ast}^{-1}A)_x k_s\,ds\Bigr]
-(P_{t-{\raise1.0pt\hbox{\bf .}}}f)\bigl(X_{\raise1.0pt\hbox{\bf .}}(x)\bigr)\!\int_0^\bull\langle k,dZ\rangle
\end{equation*}
is a local martingale on the interval $[0,t\wedge\zeta(x)[$.
\end{cor}
Note that $(dP_{t-{\raise1.0pt\hbox{\bf .}}}f)_{X_\bull(x)}\,(T_xX_{\raise1.0pt\hbox{\bf .}})\,v$ is a local
martingale as the derivative of the local martingale
$(P_{t-{\raise1.0pt\hbox{\bf .}}}f)\bigl(X_{\raise1.0pt\hbox{\bf .}}(x)\bigr)$ at $x$ in the direction $v$,
see~\cite{A-T 96}.
\begin{cor}
\label{LocMartHarmonic}
Assume that $M$ is compact with nonempty smooth boundary $\partial M$. Let
$u\in C(M)$ be $L$-harmonic on $M{\setminus}\partial M$. Fix $x\in
M{\setminus}\partial M$ and $v\in T_xM$. Then, for any predictable
$\R^r$-valued process $k$ in $L^2_{\hbox{\sevenrm loc}}(Z)$,
\begin{equation*}
(du)_{X_\bull(x)}\,(T_xX_{\raise1.0pt\hbox{\bf .}})
\Bigl[v+\!\int_0^\bull (X_{s\ast}^{-1}A)_xk_s\,ds\Bigr]
-u\bigl(X_{\raise1.0pt\hbox{\bf .}}(x)\bigr)\!\int_0^\bull\langle k,dZ\rangle
\end{equation*}
is a local martingale on the interval $[0,\tau(x)[$ where $\tau(x)$ is the
first hitting time of $X_{\raise1.0pt\hbox{\bf .}}(x)$ at $\partial M$.
\end{cor}
\begin{problem}[Control Problem]
\label{MainProblem}
Let $x\in M$ and $v\in T_xM$. Consider the random dynamical system
\begin{equation}
\label{eq:ContrPr}
\left\lbrace\begin{aligned}
\dot h_s&=(X_{s\ast}^{-1}A)_x\,k_s\\
h_0&=v.
\end{aligned}\right.
\end{equation}
Let $\sigma=\tau_D^\mstrut(x)$, resp., $\sigma=\tau_D^\mstrut(x)\wedge t$
for some $t>0$, where $\tau_D^\mstrut(x)$ is the first exit time of
$X_{\raise1.0pt\hbox{\bf .}}(x)$ from some relatively compact open neighbourhood $D$ of $x$.
We are concerned with the problem of finding predictable processes $k$
taking values in $\R^r$ such that $h_\sigma=0$,~a.s.
\end{problem}
\begin{example}
\label{EllCase}
Assume $L$ to be elliptic, i.e., $\map{A(x)}{\R^r}{T_xM}$ is surjective for
each $x\in M$. Then
\begin{equation*}
k_s=A^\ast\bigl(X_s(x)\bigr)\,T_xX_s\,\dot h_s
\end{equation*}
solves Problem \ref{MainProblem} if the terms are defined as follows:
$A^\ast(\,{\raise1.5pt\hbox{\bf .}}\,)\in\Gamma(T^\ast M\otimes\R^r)$ is a smooth section and
(pointwise) right-inverse to $A(\,{\raise1.5pt\hbox{\bf .}}\,)$,
i.e.~$A(x)A^\ast(x)=\mathpal{id}_{T_xM}$ for $x\in M$, the process $h$ may be any
adapted process with values in $T_xM$ and with absolutely continuous sample
paths (e.g., paths in the Cameron-Martin space ${\mathbb H}(\R_+,T_xM)$) such that
$h_0=v$ and $h_\sigma=0$,~a.s. Thus, for elliptic $L$, there are
``controls'' $k$ transferring system \eqref{eq:ContrPr} from $v$ to $0$ in
time $\sigma$; moreover, it is even possible to follow prescribed
trajectories $s\mapsto h_s$ from $v$ to $0$. In the hypoelliptic case, this
cannot be achieved in general, since the right-hand side in
\begin{equation*}
(T_xX_s)\,\dot h_s=A\bigl(X_s(x)\bigr)\,k_s
\end{equation*}
is allowed to be degenerate.
\end{example}
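One concrete choice of right inverse in Example~\ref{EllCase} is the Moore--Penrose pseudoinverse $A^\ast(x)=A(x)^{\mathsf T}\bigl(A(x)A(x)^{\mathsf T}\bigr)^{-1}$, which is smooth in $x$ whenever $A(x)$ is surjective. A small sketch with an illustrative elliptic $A$ on $\R^2$ (this specific field is our choice, not the text's):

```python
import numpy as np

def right_inverse(Ax):
    """Moore-Penrose right inverse: A A* = id on T_xM for surjective A(x)."""
    return Ax.T @ np.linalg.inv(Ax @ Ax.T)

# Illustrative elliptic field on M = R^2 driven by r = 3 noises:
# A(x) has full rank 2 for every x, since A A^T = id + (x^2, x^1)(x^2, x^1)^T.
def A(x):
    return np.array([[1.0, 0.0, x[1]],
                     [0.0, 1.0, x[0]]])

x = np.array([0.4, -1.2])
Astar = right_inverse(A(x))
print(np.allclose(A(x) @ Astar, np.eye(2)))   # True
```

With this $A^\ast$, the control $k_s=A^\ast\bigl(X_s(x)\bigr)\,T_xX_s\,\dot h_s$ steers the system along any prescribed absolutely continuous path $h$ from $v$ to $0$.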
Under the assumption that Problem \ref{MainProblem} is solvable, we obtain
differentiation formulas in a straightforward way.
\begin{thm}
\label{ThmOne}
Let $\map fM\R$ be bounded measurable, $x\in M$, $v\in T_xM$, $t>0$. Let
$D$ be a relatively compact open neighbourhood of $x$ and
$\sigma=\tau_D^\mstrut(x)\wedge t$ where $\tau_D^\mstrut(x)$ is the first
exit time of $X_{\raise1.0pt\hbox{\bf .}}(x)$ from $D$. Suppose there exists an $\R^r$-valued
predictable process $k$ such that
\begin{equation*}
\int_0^\sigma (X_{s\ast}^{-1}A)_x\,k_s\,ds\equiv v,\quad\hbox{a.s.,}
\end{equation*}
and $\bigl(\int_0^\sigma\vert k_s\vert^2\,ds\bigr)^{1/2}\in
L^{1+\varepsilon}$ for some $\varepsilon>0$. Then
\begin{equation}
\label{eq:dSemigroup}
d(P_tf)_xv
=\E\biggl[f\bigl(X_t(x)\bigr)\,1_{\{t<\zeta(x)\}}
\int_0^\sigma\langle k,dZ\rangle\biggr]
\end{equation}
where $P_tf$ is the minimal semigroup defined
by~{\rm\eqref{eq:SemiGroupMin}}.
\end{thm}
\begin{proof}
It is enough to check that the local martingale defined in Theorem
\ref{LocMart} is actually a uniformly integrable martingale on the interval
$[0,\sigma]$. The claim then follows by taking expectations, noting that
$(P_{t-\sigma}f)(X_\sigma(x))
=\E^{\SF_\sigma}\bigl[f\bigl(X_t(x)\bigr)\,1_{\{t<\zeta(x)\}}\bigr]$. See
Theorem 2.4 in \cite{TH 97} for technical details.
\end{proof}
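Ignoring the localization (i.e., taking $\sigma=t$, which is harmless for Brownian motion on $\R$ since it does not explode), the constant control $k_s=v/t$ in Theorem~\ref{ThmOne} recovers the classical Bismut--Elworthy--Li formula $d(P_tf)_xv=\E\bigl[f(x+Z_t)\,vZ_t/t\bigr]$. A Monte Carlo check with $f(y)=y^2$, for which $P_tf(x)=x^2+t$ and hence $d(P_tf)_x=2x$ exactly (an illustrative sketch, parameters ours):

```python
import math, random

random.seed(3)
x, t, N = 0.3, 1.0, 400_000

# Bismut-Elworthy-Li weight Z_t/t from the constant control k_s = 1/t:
#   d/dx P_t f(x) = E[ f(x + Z_t) * Z_t / t ],  f(y) = y^2,  exact value 2x.
est = 0.0
for _ in range(N):
    Z = random.gauss(0.0, math.sqrt(t))
    est += (x + Z)**2 * Z / t
est /= N
print(round(est, 1))   # ≈ 0.6 = 2x
```

Note that the weight $Z_t/t$ involves no derivative of $f$, which is the whole point of formulas of type \eqref{eq:dSemigroup}.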
Along the same lines, now exploiting Corollary \ref{LocMartHarmonic}, the
following result can be derived.
\begin{thm}
\label{ThmTwo}
Let $M$ be compact with smooth boundary $\partial M\not=\emptyset$ and let
$u\in C(M)$ be $L$-harmonic on $M{\setminus}\partial M$. Let $x\in
M{\setminus}\partial M$ and $v\in T_xM$. Denote $\tau(x)$ the first hitting
time of $X_{\raise1.0pt\hbox{\bf .}}(x)$ at $\partial M$. Suppose there exists an
$\R^r$-valued predictable process $k$ such that
\begin{equation*}
\int_0^{\tau(x)}(X_{s\ast}^{-1}A)_x^\mstrut\,k_s\,ds\equiv v,\quad\hbox{a.s.,}
\end{equation*}
and $\bigl(\int_0^{\tau(x)}\vert k_s\vert^2\,ds\bigr)^{1/2}\in
L^{1+\varepsilon}$ for some $\varepsilon>0$. Then the following formula
holds:
\begin{equation}
\label{eq:dHarmonic}
(du)_xv
=\E\biggl[u\bigl(X_{\tau(x)}(x)\bigr)
\int_0^{\tau(x)}\langle k,dZ\rangle\,\biggr].
\end{equation}
\end{thm}
\noindent
In the elliptic case, formulas of type \eqref{eq:dSemigroup} and
\eqref{eq:dHarmonic} have been used in~\cite{T-W 98} to establish gradient
estimates for $P_tf$ and for harmonic functions $u$; see also \cite{Driver-Thalm:2001}
for extensions from functions to sections.
Nonlinear generalizations of the elliptic case, e.g., to harmonic maps and solutions of
nonlinear heat equations, are treated in~\cite{A-T 97}.
As explained, differentiation formulas may be obtained from the local
martingales \eqref{eq:QuasiDer} by taking expectations if there is a
``control'' $(k_s)$ transferring the system \eqref{eq:ContrPr} from $h_0=v$ to
$h_\sigma=0$. Solvability of the ``control problem'' is more or less
necessary for this approach, as is explained in the following remark.
\begin{remark}
\label{QuasiDerivatives}
Consider the general problem of finding semimartingales $h$, $\Phi$ with
$h_0=v$ and $\Phi_0=0$ where $h$ is $T_xM$-valued and $\Phi$ real-valued
such that
\begin{equation}
\label{eq:LocMartGeneral}
n_s=(dF_s)_{X_s(x)}\,X_{s\ast}h_s+F_s(X_s(x))\,\Phi_s,\quad s\geq0
\end{equation}
is a local martingale for any space-time transformation $F$ of the diffusion
$X(x)$ such that $F_s(X_s(x))\equiv F(s,X_s(x))$ is a local martingale. In
the notion of quasiderivatives, as used by Krylov~\cite{KR 93, Krylov:2004}, this means
that $\xi:=(T_xX)\,h$ is an $F$-quasiderivative for $X$ at $x$
and $\Phi$ its $F$-accompanying process. Suppose that $h$ takes paths in
the Cameron-Martin space ${\mathbb H}(\R_+,T_xM)$. Then, by choosing $F\equiv1$, we
see that $\Phi$ itself should already be a local martingale, say
$\Phi_s=\int_0^s\langle k_r,dZ_r\rangle$. Thus
\begin{equation*}
n\mathrel{\mathpalette\@mvereq{\hbox{\sevenrm m}}}\int_0^\bull(dF_r)_{X_r(x)}\,X_{r\ast}\,dh_r+
\int_0^\bull(dF_r)_{X_r(x)}\,A(X_r(x))k_r\,dr
\end{equation*}
which implies
\begin{equation*}\int_0^\bull(dF_r)_{X_r(x)}\,X_{r\ast}\,dh_r+
\int_0^\bull(dF_r)_{X_r(x)}\,A(X_r(x))k_r\,dr\equiv0,
\end{equation*}
i.e., $(dF_s)_{X_s(x)}\,X_{s\ast}\,\dot h_s+
(dF_s)_{X_s(x)}\,A(X_s(x))k_s\equiv0$ for all $F$ of the above type. Hence,
assuming local richness of transformations $F$ of this type, we get for
$s\geq0$,
\begin{equation*}
X_{s\ast}\,\dot h_s+A(X_s(x))k_s\equiv0\end{equation*}
or
\begin{equation*}\dot h_s+(X_{s\ast}^{-1}A)_x\,k_s=0,
\end{equation*}
which means that $k$ solves the ``control problem''.
\end{remark}
Coming back to Problem \ref{MainProblem} we note that since the problem is
unaffected by changing $M$ outside of $D$, we may assume that $M$ is already
compact. It is also enough to deal with the case $\sigma=\tau_D^\mstrut(x)$
where $D$ has smooth boundary.
\begin{problem}[Modified Control Problem]
\label{MainProblem1}Let
\begin{equation*}
c_s(x)={d\over ds}C_s(x)
=\sum_{i=1}^r(X_{s\ast}^{-1}A_i)_x^\mstrut\otimes(X_{s\ast}^{-1}A_i)_x^\mstrut.
\end{equation*}
Restricting attention to $\R^r$-valued processes $k$ of the special form
\begin{equation}
\label{eq:SpecialControl}
k_s=\sum_{i=1}^r\bigl\langle(X_{s\ast}^{-1}A_i)_x^\mstrut,u_s\bigr\rangle\,e_i
\end{equation}
for some adapted $T_xM$-valued process $u$, we observe that Problem~\ref{MainProblem}
reduces to finding predictable $T_xM$-valued processes $u$ such that
\begin{equation}
\label{eq:SpecialControl1}
\left\lbrace\begin{aligned}
\dot h_s&=c_s(x)\,u_s\\
h_0&=v\quad\hbox{and}\quad h_\sigma=0.
\end{aligned}\right.
\end{equation}
\end{problem}
Problem \ref{MainProblem1}, as well as Problem \ref{MainProblem}, has an affirmative solution
in many cases. In the general situation, however, neither problem is
solvable under hypothesis (H1), as will be shown in the next section.
\section{Solvability of the control problem: Examples and counterexamples}\label{Sect6}
\setcounter{equation}0\noindent
We start by discussing an example where the control problem is solvable in a
non-elliptic situation.
\begin{example}\rm
\label{ExampleR2}
Let $M=\R^2$ and $A_0\equiv 0$, $A_1(x)=(1,0)$, $A_2(x)=(0,x^1)$. Then
$[A_1,A_2](x)=(0,1)$. The solution to
\begin{equation*}
\delta X=A(X)\,\delta Z
\end{equation*}
starting from $x=(x^1,x^2)$ is given by
\begin{equation*}
X_t(x) = \left(x^1+Z_t^1,x^2+x^1Z_t^2+\int_0^tZ_s^1\,dZ_s^2\right).
\end{equation*}
Consequently
\begin{equation*}
\bigl(X_{s\ast}^{-1}A\bigr)(x)=\left(
\begin{matrix}
1&0\\-Z_s^2&X_s^1
\end{matrix}
\right),
\end{equation*}
and the control problem at $x=0$ comes down to finding $k$ such that
\begin{equation*}
\dot h_s=\left(
\begin{matrix}
1&0\\-Z_s^2&Z_s^1
\end{matrix}\right)k_s,\quad h_0=v,\ h_\sigma=0,
\end{equation*}
and $\Bigl(\int_0^\sigma\vert k_s\vert^2\,ds\Bigr)^{1/2}\in
L^{1+\varepsilon}$. We may assume that $\vert v\vert=1$, and will further
assume that $\sigma=\tau_D$ or $\sigma=\tau_D\wedge t$ where $D$ is some
relatively compact neighbourhood of the origin in $\R^2$. (After possibly
shrinking $D$, we may also assume that $D$ is open with smooth boundary.)
Note that
\begin{equation*}
c_s(0)=\bigl(X_{s\ast}^{-1}A\bigr)_0\,\bigl(X_{s\ast}^{-1}A\bigr)_0^\ast=
\left(
\begin{matrix}
1&\!\!\!\!-Z_s^2\\-Z_s^2&\vert Z_s\vert^2
\end{matrix}
\right).
\end{equation*}
Thus if $\lambda_{\text{\rm min}}(s)$ denotes the smallest eigenvalue of
$c_s(0)$, then
\begin{equation}
\label{eq:LambdaMinBound}
\lambda_{\text{\rm min}}(s)\geq
\frac{(Z_s^1)^2}{1+\vert Z_s\vert^2}.
\end{equation}
(Indeed,
let $a:=Z_s^1$, $b:=Z_s^2$, and $x:=1+|Z_s|^2=1+a^2+b^2$; then
$$\lambda_{\text{min}}(s)=\frac{x-\sqrt{x^2-4a^2}}2=\frac x2\left[1-\sqrt{1-\frac{4a^2}{x^2}}\right]\geq
\frac{a^2}x,
$$
where we used $1-\sqrt{1-y}\geq y/2$ for $0\leq y\leq1$.)
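The eigenvalue bound \eqref{eq:LambdaMinBound} can also be checked numerically. The following sketch (our illustration, not part of the argument; the helper name `lambda_min` is ours) evaluates the smallest eigenvalue of the $2\times2$ matrix $c_s(0)$ in closed form, using that its trace is $x=1+a^2+b^2$ and its determinant is $a^2$, and compares it with the claimed lower bound for random $a=Z_s^1$, $b=Z_s^2$.

```python
import math
import random

def lambda_min(a, b):
    """Smallest eigenvalue of [[1, -b], [-b, a*a + b*b]]:
    trace x = 1 + a^2 + b^2, determinant a^2."""
    x = 1.0 + a * a + b * b
    return (x - math.sqrt(x * x - 4.0 * a * a)) / 2.0

random.seed(1)
for _ in range(10_000):
    a = random.uniform(-5, 5)
    b = random.uniform(-5, 5)
    x = 1.0 + a * a + b * b
    # the claimed lower bound a^2 / (1 + |Z_s|^2)
    assert lambda_min(a, b) >= a * a / x - 1e-12
```

Note that the discriminant $x^2-4a^2=\bigl((1-a)^2+b^2\bigr)\bigl((1+a)^2+b^2\bigr)$ is always nonnegative, so the square root is well defined.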
We construct $h$ by solving the equation
\begin{equation}
\label{eq:hsDefinition}
\dot h_s=-\varphi^{-2}(X_s,Z_s)\,c_s(0)\,{h_s\over\vert h_s\vert},\quad h_0=v,
\end{equation}
where $X_s=X_s(0)$ and $\varphi$ is chosen in such a way that
\begin{equation*}
\sigma':=\inf\{s\geq0: h_s=0\}\leq\sigma.
\end{equation*}
More precisely, take $\varphi_1\in C^2(\bar D)$ with
$\varphi_1\vert{\partial D}=0$ and $\varphi_1>0$ in $D$. Similarly, for
some large ball $B$ in $\R^2$ about $0$ (containing $D$), let $\varphi_2\in
C^2(\bar B)$ with $\varphi_2\vert{\partial B}=0$ and $\varphi_2>0$ in $B$.
Let $\varphi(x,z):=\varphi_1(x)\varphi_2(z)$. We deal only with the case
$\sigma=\tau_D$; the case $\sigma=\tau_D\wedge t$ is handled by an obvious
modification of \eqref{eq:hsDefinition}. Now, arguing as in the elliptic
case, one shows
\begin{equation*}
\int_0^\sigma\varphi^{-2}(X_s,Z_s)\,ds=\infty,\quad\text{a.s.}
\end{equation*}
Consequently, since $Z^1_\sigma\not=0$ with probability $1$, we may conclude
that also
\begin{equation*}
\int_0^\sigma\varphi^{-2}(X_s,Z_s)\,\frac{(Z_s^1)^2}{1+\vert Z_s\vert^2}\,ds=\infty,
\quad\text{a.s.}
\end{equation*}
Note that
\begin{equation*}
\frac{d}{ds}\vert h_s\vert=\frac{\langle\dot h_s,h_s\rangle}{\vert h_s\vert}
=\frac{-\varphi^{-2}(X_s,Z_s)\,\langle c_s(0)h_s,h_s\rangle}{\vert h_s\vert^2},
\end{equation*}
and hence by means of \eqref{eq:LambdaMinBound},
\begin{equation*}
1-\vert h_t\vert\geq \int_0^t\varphi^{-2}(X_s,Z_s)\,\lambda_{\text{\rm min}}(s)\,ds
\geq \int_0^t\varphi^{-2}(X_s,Z_s)\,\frac{(Z_s^1)^2}{1+\vert Z_s\vert^2}\,ds
\end{equation*}
which shows in particular that
\begin{equation*}
\sigma'\leq \inf\left\{t\geq0:
\int_0^t\varphi^{-2}(X_s,Z_s)\,\frac{(Z_s^1)^2}{1+\vert Z_s\vert^2}\,ds=1\right\}.
\end{equation*}
It remains to verify the integrability condition, i.e.,
$\Bigl(\int_0^{\sigma'}\vert k_s\vert^2\,ds\Bigr)^{1/2}\in
L^{1+\varepsilon}$ where
\begin{equation*}
k_s=-\varphi^{-2}(X_s,Z_s)\,\bigl(X_{s\ast}^{-1}A\bigr)_0^\ast\,
{h_s\over\vert h_s\vert}.
\end{equation*}
But on the interval $[0,\sigma]$ the Brownian motion $Z$ stays in the
compact ball $B$, so that
\begin{equation*}
\left\vert\bigl(X_{s\ast}^{-1}A\bigr)_0^\ast\,{h_s\over\vert h_s\vert}\right\vert
\leq C
\end{equation*}
for some constant $C$; we are thus left to check
\begin{equation*}
\Bigl(\int_0^{\sigma'}\varphi^{-4}(X_s,Z_s)\,ds\Bigr)^{1/2}\in L^{1+\varepsilon}
\end{equation*}
which is done as in the elliptic case.
\end{example}
In contrast to Example \ref{ExampleR2}, the next example gives a negative
result showing that Problem \ref{MainProblem} is not always solvable.
\begin{example}\rm
\label{Picard}
(J.~Picard) Let $M=\R^3$ and take
\begin{equation*}
A_0(x)=(0,0,0),\ A_1(x)=(1,0,0),\ A_2(x)=(0,1,x^1)
\end{equation*}
which obviously satisfy (H1). Then SDE \eqref{eq:StratonovichSDE} reads as
\begin{equation*}
X_t(x)=x+\left(Z_t^1,\, Z_t^2,\,x^1Z_t^2+\int_0^tZ_s^1\,dZ_s^2\right).
\end{equation*}
In particular,
\begin{equation*}
(X_{t\ast}^{-1}A_1)(0)=\bigl(1,0,-Z_t^2\bigr),\quad
(X_{t\ast}^{-1}A_2)(0)=\bigl(0,1,Z_t^1\bigr).
\end{equation*}
Thus \eqref{eq:ContrPr} is given by
\begin{equation*}
\dot h_s=\bigl(k_s^1,\,k_s^2,\,Z_s^1k_s^2-Z_s^2k_s^1\bigr)
\end{equation*}
where the problem is to find $h$ such that $h_0=v=(v^1,v^2,v^3)$ and
$h_\sigma=0$. By extracting the third coordinate, we get $\int_0^\sigma
Z_s^1k_s^2\,ds-\int_0^\sigma Z_s^2k_s^1\,ds=-v^3$. On the other hand, an
integration by parts yields
\begin{equation*}
\int_0^\sigma Z_s^2k_s^1\,ds-\int_0^\sigma Z_s^1k_s^2\,ds
=-\int_0^\sigma h_s^1\,dZ_s^2+\int_0^\sigma h_s^2\,dZ_s^1
\end{equation*}
where the condition on the integrability of $k$ implies that $-\int_0^\sigma
h_s^1\,dZ_s^2+\int_0^\sigma h_s^2\,dZ_s^1$ is~$L^1$ with expectation equal
to $0$. Combining both facts, we conclude that there is no solution
satisfying the integrability condition if $v^3\not=0$.
\end{example}
Note that if $\sigma$ is not in $L^1$, then the condition on the integrability
of $k$ no longer implies that $-\int_0^\sigma h_s^1\,dZ_s^2+\int_0^\sigma
h_s^2\,dZ_s^1$ is in $L^1$.
\begin{remark}\rm
\label{LambdaMin}
In Example \ref{Picard} Malliavin's covariance is explicitly given by
\begin{align*}
\bigl\langle C_t(0)u,u\bigr\rangle
&=\sum_{i=1}^2\int_0^t\bigl\langle(X_{r\ast}^{-1}A_i)(0),u\bigr\rangle^2\,dr\\
&=\int_0^t\bigl[\bigl(u^1-u^3Z_r^2\bigr)^2+\bigl(u^2+u^3Z_r^1\bigr)^2\bigr]\,dr.
\end{align*}
Of course, $C_t(0)-C_s(0)=\int_s^tc_r(0)\,dr$ is non-degenerate for all
$s<t$; nevertheless $\lambda_{\text{\rm min}}\bigl(c_s(0)\bigr)=0$ for each fixed $s$. Indeed,
\begin{equation*}
\bigl\langle c_s(0)u,u\bigr\rangle
=\bigl(u^1-u^3Z_s^2\bigr)^2+\bigl(u^2+u^3Z_s^1\bigr)^2,\quad u\in T_0M.
\end{equation*}
\end{remark}
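The degeneracy asserted in the remark is immediate from the quadratic form: $u=(Z_s^2,\,-Z_s^1,\,1)$ is a null vector of $c_s(0)$. A quick numerical confirmation (our own sketch; `qform` is a hypothetical helper name):

```python
import random

def qform(u, a, b):
    """Quadratic form <c_s(0)u, u> = (u1 - u3*b)^2 + (u2 + u3*a)^2
    with a = Z_s^1, b = Z_s^2 (Picard example)."""
    u1, u2, u3 = u
    return (u1 - u3 * b) ** 2 + (u2 + u3 * a) ** 2

random.seed(2)
for _ in range(1000):
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    # (b, -a, 1) annihilates c_s(0), so lambda_min(c_s(0)) = 0
    assert qform((b, -a, 1.0), a, b) < 1e-20
    # while the form is positive semidefinite on all of T_0M
    u = tuple(random.gauss(0, 1) for _ in range(3))
    assert qform(u, a, b) >= 0.0
```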
The negative result of Example \ref{Picard} depends very much on the fact that
$\sigma=\tau_D$ is the first exit time of the diffusion from a relatively
compact neighbourhood of its starting point. The situation changes completely
if we allow arbitrarily large stopping times $\sigma$ (not necessarily exit
times from compact sets).
In the remainder of this section we give sufficient conditions for solvability
of the control problem. We assume that diffusions with generator $L$ have
infinite lifetime, but we no longer assume that the stopping time $\sigma$
is of a given type.
The question of whether in this situation, given solvability of the control problem,
the local martingales defined in Theorem \ref{LocMart} are still uniformly
integrable martingales needs to be checked case by case.
We consider the following two conditions:
\begin{condition1}
There exists a positive constant $\alpha$ such that for any continuous (not
necessarily adapted) process $u_t$, taking values in $\{w\in T_xM,\ \|w\|=1\}$
and converging almost surely to some limit $u$,
\begin{equation}
\label{eq:Cond}
\int_0^\infty\langle c_s(x)u_s,u_s\rangle\1_{\{\cos(
c_s(x)u_s,u_s)>\alpha\}}\,ds=\infty\quad \hbox{a.s.}
\end{equation}
\end{condition1}
\begin{condition2}
There exists a positive constant $\alpha$ such that for any $u_0\in \{w\in
T_xM,\ \|w\|=1\}$, there exists a neighbourhood $V_{u_0}$ of $u_0$ in
$\{w\in T_xM,\ \|w\|=1\}$, such that
\begin{equation}
\label{eq:Strongcond}
\int_0^\infty\inf_{u\in V_{u_0}}\left(\langle c_s(x)u,u\rangle\1_{\{\cos(
c_s(x)u,u)>\alpha\}}\right)\,ds=\infty\quad \hbox{a.s.}
\end{equation}
\end{condition2}
The following result is immediate:
\begin{prop}
\label{S1}
Condition \textup{(C2)} implies Condition \textup{(C1)}.
\end{prop}
Now we prove that the control problem is solvable under condition (C1).
\begin{prop}
\label{S2}
Under Condition \textup{(C1)}, the control problem is solvable. More precisely,
considering the random dynamical system
\begin{equation}
\label{eq:ContrPr2}
\left\lbrace\begin{aligned}
\dot h_s&=(X_{s\ast}^{-1}A)_x\,k_s\\
h_0&=v,
\end{aligned}\right.
\end{equation}
there exists a (not necessarily finite) stopping time $\sigma$ and a
predictable $\R^r$-valued process $k\in L^2(Z)$ such that the process $h$
given by~\eqref{eq:ContrPr2} satisfies $h_\sigma=0$, a.s.
\end{prop}
\begin{proof}
We look for a solution of the control problem satisfying an equation of the
type
\begin{equation}
\label{eq:Ctrlvarphi}
\dot h_s=-\varphi_s\frac{1}{\|h_s\|} c_s(x)h_s
\end{equation}
with $c_s(x)u=\sum_{i=1}^r
(X_{s\ast}^{-1}A_i)_x\langle(X_{s\ast}^{-1}A_i)_x,u\rangle$, and where
$\varphi_s$ takes its values in $\{0,1\}$.
Assuming that (C1) is satisfied, we construct a sequence of
stopping times $(T_n)_{n\ge0}$ and a continuous
process $h$ inductively as follows:
\begin{itemize}
\item[(i)] $T_0=0$;
\item[(ii)] for $n\ge 0$, if $h_{T_{2n}}=0$, then
$T_{2n+2}=T_{2n+1}=T_{2n}$.
\item[(iii)] for $n\ge 0$, if $h_{T_{2n}}\not=0$, $h_t$ is constant on
$[T_{2n},T_{2n+1}]$ where
$$T_{2n+1}=\inf\{t>T_{2n},\ \cos( c_t(x)h_{T_{2n}},h_{T_{2n}})>\alpha\},$$
and $h_t$ solves
\begin{equation*}
\dot h_s=-\frac{1}{\|h_s\|}c_s(x)h_s\quad\text{on $[T_{2n+1},T_{2n+2}]$}
\end{equation*}
where $T_{2n+2}=\inf\{t>T_{2n+1},\ \cos(
c_t(x)h_t,h_t)<\alpha/2\hbox{ or }h_t=0\}$.
\end{itemize}
Let
\begin{equation*}
\sigma=\inf\{t>0,\ h_t=0\}\quad (=\infty\hbox{ if this set is empty}),
\end{equation*}
and for $s<\sigma$,
\begin{equation*}
\varphi_s=\1_{\cup_n[T_{2n+1},T_{2n+2}[}(s),
\end{equation*}
\begin{equation}
\label{eq:Defk}
k_s=-\varphi_s\frac{1}{\|h_s\|}\sum_{i=1}^r\langle(X_{s\ast}^{-1}A_i)_x,
h_s\rangle e_i,
\end{equation}
where $(e_1,\ldots, e_r)$ denotes the canonical basis of $\R^r$. Then $h_t$ solves
Eq.~\eqref{eq:Ctrlvarphi}, $\dot h_s=(X_{s\ast}^{-1}A)_xk_s$, and since
$$
\|k_s\|^2=-\varphi_s\left\langle \dot
h_s,\frac{h_s}{\|h_s\|}\right\rangle=-\frac{d}{ds}\|h_s\|,
$$
we have
\begin{equation}
\label{eq:Bdintksqr}
\int_0^\sigma\|k_s\|^2\,ds\le \|h_0\|.
\end{equation}
To conclude, it suffices to prove that the solution $h_t$
satisfies $\lim_{s\to\sigma}h_s=0$.
First we remark that $h_t$ converges almost surely as $t$ tends to
$\sigma$. This is due to the fact that
$$
\|dh\|=\frac{d\|h\|}{\cos(h,dh)}=-\frac{d\|h\|}{\cos(h,c_s(x)h)}\le
-\frac{2}{\alpha}\,d\|h\|
$$
(recall $d\|h\|\le 0$); hence $h$ has a total variation bounded by
${2\|h_0\|}/{\alpha}$.
We define $u_t={h_0}/{\|h_0\|}$ on the set where $h_t$ converges to $0$ as
$t$ tends to $\sigma$, and $u_t={h_t}/{\|h_t\|}$ on the set where $h_t$
does not converge to $0$. This provides a process which converges as $t$ tends to
$\sigma$, but which is not adapted. On the set where $h_t$ does not converge to
$0$, we have
$$
\|h_0\|\ge-\int_0^\sigma d\|h\|\ge\int_0^\infty \langle
c_s(x)u_s,u_s\rangle\1_{\{\cos( c_s(x)u_s,u_s)>\alpha\}}\,ds,
$$
which implies, by Condition (C1), that this set has probability $0$.
\end{proof}
\begin{example}
Consider again Example \ref{Picard}, with $M=\R^3$,
\begin{equation*}
A_0(x)=(0,0,0),\ A_1(x)=(1,0,0),\ A_2(x)=(0,1,x^1).
\end{equation*}
For $u\in T_0M$, $\Vert u\Vert=1$,
we have
\begin{equation*}
\langle c_s(0)u,u\rangle=(u^1-u^3Z_s^2)^2+(u^2+u^3Z_s^1)^2
\end{equation*}
and
\begin{equation*}
\cos(c_s(0)u,u)=\frac{(u^1-u^3Z_s^2)^2+(u^2+u^3Z_s^1)^2}{\left((u^1-u^3Z_s^2)^2+(u^2+u^3Z_s^1)^2
+(-Z_s^2u^1+Z_s^1u^2+\|Z_s\|^2u^3)^2\right)^{1/2}}.
\end{equation*}
From there it is straightforward to verify that condition (C2) is realized in this case.
With Proposition~\ref{S1} we
obtain condition (C1), and with Proposition~\ref{S2} we get
solvability of the control problem. We stress again that now we allow $\sigma$
to be arbitrarily large.
Then, contrary to the negative result of Example \ref{Picard},
we are able to find $h$ such that $h_0=v$, $h_\sigma=0$, $\dot
h_s=\bigl(k_s^1,k_s^2,Z_s^1k_s^2-Z_s^2k_s^1\bigr)$, and
$\int_0^\sigma\vert k_s\vert^2\,ds\in L^{1}$.
\end{example}
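The displayed expression for $\cos(c_s(0)u,u)$ can be sanity-checked against the direct definition $\langle c_s(0)u,u\rangle/\bigl(\Vert c_s(0)u\Vert\,\Vert u\Vert\bigr)$. The sketch below (ours, for illustration only; `c_apply` is a hypothetical helper) does so for random unit vectors $u$ and random values $a=Z_s^1$, $b=Z_s^2$, using $(X_{s\ast}^{-1}A_1)(0)=(1,0,-b)$ and $(X_{s\ast}^{-1}A_2)(0)=(0,1,a)$.

```python
import math
import random

def c_apply(u, a, b):
    """c_s(0)u for the Picard example: c = sum_i v_i <v_i, u>
    with v_1 = (1, 0, -b), v_2 = (0, 1, a)."""
    v1, v2 = (1.0, 0.0, -b), (0.0, 1.0, a)
    dot = lambda p, q: sum(x * y for x, y in zip(p, q))
    s1, s2 = dot(v1, u), dot(v2, u)
    return tuple(s1 * x + s2 * y for x, y in zip(v1, v2))

random.seed(3)
for _ in range(1000):
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    u = [random.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(x * x for x in u))
    u = [x / n for x in u]                      # normalize: ||u|| = 1
    cu = c_apply(u, a, b)
    # numerator and denominator of the displayed formula
    num = (u[0] - u[2] * b) ** 2 + (u[1] + u[2] * a) ** 2
    den = math.sqrt(num + (-b * u[0] + a * u[1] + (a * a + b * b) * u[2]) ** 2)
    if den > 1e-12:
        cos_direct = sum(x * y for x, y in zip(cu, u)) / math.sqrt(
            sum(x * x for x in cu))
        assert abs(cos_direct - num / den) < 1e-9
```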
\section{Derivative Formulas in the Hypoelliptic Case}\label{Sect7}
\setcounter{equation}0\noindent In this section the results of Sections~\ref{Sect3}
and \ref{Sect4} are extended to derive general differentiation formulas for heat
semigroups and $L$-harmonic functions in the hypo\-elliptic case.
Let again $F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))$, $x\in M$ be a family of local
martingales where the transformation $F$ is differentiable in the second
variable with a derivative jointly continuous in both variables. We fix $x\in
M$ and $v\in T_xM$. Let $\sigma$ be a stopping time which is dominated by the
first exit time of $X_{\raise1.0pt\hbox{\bf .}}(x)$ from some relatively compact neighbourhood
of~$x$. We first note that
\begin{equation}
\label{eq:Equality}
dF(0,{\raise1.5pt\hbox{\bf .}}\,)_xv\equiv
\E\left[dF(\sigma,{\raise1.5pt\hbox{\bf .}}\,)_{X_\sigma(x)}\,X_{\sigma\ast}\,v\right]
\end{equation}
where $X_{\sigma\ast}$ is the derivative process at the random time $\sigma$.
Eq.~\eqref{eq:Equality} follows from the fact that the local martingale
$F(\,{\raise1.5pt\hbox{\bf .}}\,\,,X_{\raise1.0pt\hbox{\bf .}}(x))$, differentiated in the direction $v$ at~$x$, is
again a local martingale, and under the given assumptions a uniformly
integrable martingale when stopped at $\sigma$. Our aim is to replace the
right-hand side of~\eqref{eq:Equality} by expressions not involving
derivatives of $F$. To this end the local martingales of Section~\ref{Sect4} are
exploited.
We start with an elementary construction. Let $D\subset M$ be a nonempty
relatively compact domain and $\varphi\in C^2(\ovl D)$ such that
$\varphi|\partial D=0$ and $\varphi>0$ on $D$. For $x\in D$ let
\begin{equation}
\label{eq:DefTr}
T(s)=\int_0^s\varphi^{-2}\bigl(X_r(x)\bigr)\,dr\,,\quad s\le\tau_D(x),
\end{equation}
and
\begin{equation}
\label{eq:Defsr}
\sigma(r)=\inf\bigl\{s\geq0: T(s)\ge r\bigr\}\le\tau_D(x).
\end{equation}
Note that $T(s)\to \infty$ as $s\nearrow\tau_D(x)$, almost surely,
see~\cite{T-W 98}. Fix $t_0>0$ and consider
\begin{equation}
\label{eq:CMprocess}
\ell_s=\frac1{t_0}\,
\rho\left(\int_0^s\varphi^{-2}\bigl(X_r(x)\bigr)\,dr\right)v
\end{equation}
for some $\rho\in C^1(\R_+,\R)$ such that $\rho(s)=0$ for $s$ close to $0$ and
$\rho(s)=t_0$ for $s\geq t_0$. Then $\ell_0=0$ and $\ell_s=v$ for
$s\ge\sigma(t_0)$.
Now for perturbations $X^\lambda$ of $X$, as in Section~\ref{Sect3}, let
\begin{equation*}
\ell^\lambda_s=\frac1{t_0}\,
\rho\left(\int_0^s\varphi^{-2}\bigl(X^\lambda_r(x)\bigr)\,dr\right)v
\end{equation*}
and $\sigma^\lambda(r)=\inf\bigl\{s\geq0: T^\lambda(s)\ge r\bigr\}$. We
introduce the abbreviation $\partial_\lambda^\mstrut
=\bigl(\frac\partial{\partial\lambda_1},\dots,
\frac\partial{\partial\lambda_n}\bigr)$. Then
$\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}\ell^\lambda_s$ exists and lies
in $\cap_{p>1}L^p$, see~\cite{T-W 98}, Section~\ref{Sect4} (the arguments there before
Theorem~4.1 extend easily to general exponents $p$). In a similar way, using
$T^\lambda\circ\sigma^\lambda=\mathpal{id}$, we see that
\begin{equation*}
\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}\sigma^\lambda
=-\frac1{T'\circ\sigma}\,
\Bigl(\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}T^\lambda\Bigr)\circ\sigma.
\end{equation*}
For our applications, it is occasionally useful to modify the above
construction such that already $\ell_s=v$ for $s\ge\sigma(t_0)\wedge t$ where
$t>0$ is fixed. This can easily be achieved by adding a term of the type
$\tan(\pi s/2t)$ to the right-hand side of \eqref{eq:DefTr} and by changing
the definition of $\ell_s$ in an obvious way.
Now let again $F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))$ be a local martingale,
as in Section~\ref{Sect4}, and consider the variation
\begin{equation}
\label{eq:PerturbedMart}
F\bigl(\,{\raise1.5pt\hbox{\bf .}}\,,X^\lambda_{\raise1.0pt\hbox{\bf .}}(x)\bigr)\,G^\lambda_{\raise1.0pt\hbox{\bf .}}
\end{equation}
of local martingales where
\begin{equation}
G^\lambda_t=\exp\left(-\int_0^t\,\langle a_s\lambda,dZ_s\rangle
-{1\over2}\,\int_0^t\vert a_s\lambda\vert^2\,ds\right).
\end{equation}
Then
\begin{equation*}
n_s=dF(s,{\raise1.5pt\hbox{\bf .}}\,)_{X_s(x)}\,X_{s\ast}
\left(\int_0^s X_{r\ast}^{-1}A\bigl(X_r(x)\bigr)\,a_r\,dr\right)
-F\bigl(s,X_s(x)\bigr)\int_0^sa_r^\ast\,dZ_r
\end{equation*}
is a local martingale in $T_xM$. Observe that $n$ is the derivative of
\eqref{eq:PerturbedMart} at $0$ with respect to $\lambda$, i.e.,
$n_s=\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}
F\bigl(s,X^\lambda_s(x)\bigr)\,G^\lambda_s$. In particular, taking
\begin{equation}
\label{eq:Choicefora}
a_s=(X_{s\ast}^{-1}A)_x^\ast,
\end{equation}
then
\begin{equation*}
n_s=dF(s,{\raise1.5pt\hbox{\bf .}}\,)_{X_s(x)}\,X_{s\ast}\,C_s(x)
-F\bigl(s,X_s(x)\bigr)\int_0^s(X_{r\ast}^{-1}A)_x\,dZ_r.
\end{equation*}
This implies that also
\begin{equation*}
N_s:=n_sh_s-\int_0^sn_r\,dh_r
\end{equation*}
is a local martingale for any $T_x^\ast M$-valued adapted process $h$ locally
of bounded variation. We choose $h_s=C_s(x)^{-1}\ell_s$ where $\ell$ is given
by \eqref{eq:CMprocess}. Taking expectations gives
\begin{align}
\label{eq:PrelimFormula}
dF(0,{\raise1.5pt\hbox{\bf .}}\,)_xv&=
\E\left[dF(\sigma,{\raise1.5pt\hbox{\bf .}}\,)_{X_\sigma(x)}\,X_{\sigma\ast}\,v\right]\\
\notag &=\E\left[F\bigl(\sigma,X_\sigma(x)\bigr)
\left(\int_0^\sigma(X_{s\ast}^{-1}A)_x\,dZ_s \right)C_\sigma^{-1}(x)\,v
+\int_0^\sigma n_s\,dh_s\right],
\end{align}
where $\sigma:=\sigma(t_0)$. We deal separately with the term
\begin{equation}
\label{eq:Term}
\E\left[\int_0^\sigma n_s\,dh_s\right]
=\E\left[\int_0^\sigma \partial_\lambda^\mstrut\bigl\vert_{\lambda=0}
\bigl[F\bigl(s,X^\lambda_s(x)\bigr)\,G^\lambda_s\bigr]\,
d\bigl(C_s(x)^{-1}\ell_s\bigr)\right].
\end{equation}
To avoid integrability problems, it may be necessary, as in the proof of Theorem
\ref{ResultCrude}, to go through the calculation first with
\eqref{eq:Choicefora} replaced by
\begin{equation*}
a_s^k=(X_{s\ast}^{-1}A)_x^\ast\,1_{\{s\leq\tau_k\}},
\end{equation*}
where $(\tau_k)$ is an appropriate increasing sequence of stopping times such
that $\tau_k\nearrow\sigma$, and to take the limit as $k\to\infty$ in the
final formula. Note that, without loss of generality, $\sigma$ may be assumed
to be bounded. We shall omit this technical modification here.
We return to the term \eqref{eq:Term}. Observe that
\begin{align*}
\E&\left[\int_0^{\sigma^\lambda}
F\bigl(s,X^\lambda_s(x)\bigr)\,G^\lambda_s\,
d\bigl(C^\lambda_s(x)^{-1}\ell^\lambda_s\bigr)\right]\\
&\quad\equiv \int_0^\infty\E\left[
1_{\{s\leq\sigma^\lambda\}}\,F\bigl(s,X^\lambda_s(x)\bigr)\,G^\lambda_s\,
\frac d{ds}\bigl(C^\lambda_s(x)^{-1}\ell^\lambda_s\bigr)\right]ds
\end{align*}
is independent of $\lambda$. Thus differentiating with respect to $\lambda$
at $\lambda=0$ gives
\begin{align*}
&\E\left[\int_0^\sigma n_s\,dh_s\right]\\
&\quad= -\E\left[\int_0^\sigma F_s\,
d\Bigl[\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}
\bigl(C^\lambda_s(x)^{-1}\ell^\lambda_s\bigr)\Bigr] +
\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}\int_0^{\sigma^\lambda}
F_s\,d\bigl(C_s(x)^{-1}\ell_s\bigr)\right]\\
&\quad= -\E\left[ F_\sigma\,
\Bigl[\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}
\bigl(C^\lambda_s(x)^{-1}\ell^\lambda_s\bigr)\Bigr]_{s=\sigma} +
F_\sigma\,\left(\frac d{ds}\Bigl\vert_{s=\sigma}C_s(x)^{-1}\ell_s\right)
\left(\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}{\sigma^\lambda}\right)\right]
\end{align*}
where $F_s\equiv F\bigl(s,X_s(x)\bigr)$. Note that all terms in the last line
are nicely integrable. Substituting this back into Eq.\
\eqref{eq:PrelimFormula}, we find a formula of the desired type:
\begin{equation}
\label{eq:WantedFormula}
dF(0,{\raise1.5pt\hbox{\bf .}}\,)_xv
=\E\left[F\bigl(\sigma,X_\sigma(x)\bigr)\,\Phi_\sigma v\right]
\end{equation}
where $\Phi_\sigma$ takes values in $T^\ast_xM$ and is $L^p$-integrable for
any $1\leq p<\infty$. Summarizing the above discussion, we conclude with the
following two theorems.
\begin{thm}
\label{ThmOneGeneral}
Let $M$ be a smooth manifold and $\map fM\R$ a bounded measurable
function. Assume that {\rm(H1)} holds. Let $x\in M$, $v\in T_xM$,
$t>0$. Then
\begin{equation}
\label{eq:dSemigroupGeneral}
d(P_tf)_xv
=\E\Bigl[f\bigl(X_t(x)\bigr)\,1_{\{t<\zeta(x)\}}\,\Phi_tv\Bigr]
\end{equation}
for the minimal semigroup $P_tf$ defined by~{\rm\eqref{eq:SemiGroupMin}}
where $\Phi_t$ is a $T^\ast_xM$-valued random variable which is
$L^p$-integrable for any $1\leq p<\infty$ and local in the following sense:
For any relatively compact neighbourhood $D$ of $x$ in $M$ there is a choice
for $\Phi_t$ which is $\SF_\sigma$-measurable where
$\sigma=t\wedge\tau_D^\mstrut(x)$ and $\tau_D^\mstrut(x)$ is the first exit
time of $X$ from $D$ when starting at $x$.
\end{thm}
\begin{proof}
Let $F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))=(P_{t-{\raise1.0pt\hbox{\bf .}}}f)\bigl(X_{\raise1.0pt\hbox{\bf .}}(x)\bigr)$.
Then Eq.~\eqref{eq:WantedFormula} gives
\begin{equation*}
d(P_tf)_xv
=\E\left[F\bigl(\sigma,X_\sigma(x)\bigr)\,\Phi_\sigma v\right].
\end{equation*}
Again by taking into account that $(P_{t-\sigma}f)(X_\sigma(x))
=\E^{\SF_\sigma}\bigl[f\bigl(X_t(x)\bigr)\,1_{\{t<\zeta(x)\}}\bigr]$, we get
the claimed formula.
\end{proof}
\begin{thm}
\label{ThmTwoGeneral}
Let $M$ be compact with smooth boundary $\partial M\not=\emptyset$ and $u\in
C(M)$ be $L$-harmonic on $M{\setminus}\partial M$. Assume that {\rm(H1)}
holds. Let $x\in M{\setminus}\partial M$ and $v\in T_xM$. Denote by $\tau(x)$
the first hitting time of $X_{\raise1.0pt\hbox{\bf .}}(x)$ at $\partial M$. Then the following
formula holds:
\begin{equation}
\label{eq:dHarmonicGeneral}
(du)_xv
=\E\bigl[u\bigl(X_{\tau(x)}(x)\bigr)\,\Phi_{\tau(x)}v\bigr]
\end{equation}
where $\Phi_{\tau(x)}$ is a $T^\ast_xM$-valued random variable which is in
$L^p$ for any $1\leq p<\infty$ and local in the following sense: For any
relatively compact neighbourhood $D$ of $x$ in $M$ there is a choice for
$\Phi_{\tau(x)}$ which is already $\SF_\sigma$-measurable where
$\sigma=\tau_D^\mstrut(x)$ is the first exit time of $X$ from $D$ when
starting at $x$.
\end{thm}
\begin{proof}
The proof is completely analogous to the proof of
Theorem~\ref{ThmOneGeneral}.
\end{proof}
\begin{example}[Greek Deltas for Asian Options]\label{as_opt}
Consider the following SDE on the real line:
\begin{equation}
\label{Eq:Asian}
dS_t=\sigma(S_t)\,dW_t+\mu(S_t)\,dt\,,
\end{equation}
where $W_t$ is a real Brownian motion.
In mathematical finance one wants to calculate
so-called \textit{Greek Deltas for Asian options\/},
which are expressions of the form
$$\Delta_0=\frac\partial{\partial S_0}\E[f(S_T,A_T)],\quad T>0,$$
where $S_t$ is given as solution to \eqref{Eq:Asian} and
\begin{equation}
\label{Eq:Asian1}
A_t=\int_0^t S_r\,dr.
\end{equation}
We may convert \eqref{Eq:Asian} to Stratonovich form
$$dS_t=\sigma(S_t)\,\delta W_t+m(S_t)\,dt,\quad
m:=\mu-\tfrac12\sigma'\sigma,$$
and consider $X_t:=(S_t,A_t)$ as a diffusion on $\R^2$.
Then
$$d\begin{pmatrix}X^1_t\\ X^2_t\end{pmatrix}
=\begin{pmatrix}\sigma(X^1_t)\\0\end{pmatrix}\circ dW_t
+\begin{pmatrix}m(X_t^1)\\ X_t^1\end{pmatrix}\,dt
$$
with the vector fields
$$A_0=\begin{pmatrix}m(x_1)\\ x_1\end{pmatrix},\quad
A_1=\begin{pmatrix}\sigma(x_1)\\0\end{pmatrix}.$$
Observe that
$$[A_1,A_0]=\begin{pmatrix}\sigma(x_1) m'(x_1)-\sigma'(x_1) m(x_1)\\ \sigma(x_1)\end{pmatrix}.$$
Thus if $\sigma>0$, then $X_t=(S_t,A_t)$ defines a hypoelliptic diffusion on $\R^2$.
\end{example}
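The bracket $[A_1,A_0]$ computed above can be verified by finite differences. In the sketch below (our illustration; the concrete choices $\sigma(s)=1+s^2$ and $m(s)=s$ are ours, purely to have explicit fields) the Lie bracket $[V,W]=DW\,V-DV\,W$ is approximated by central differences and compared with the closed form $\bigl(\sigma m'-\sigma'm,\ \sigma\bigr)$.

```python
def A0(x, m=lambda s: s):                 # drift field (m(x1), x1); m(s) = s is our stand-in
    return (m(x[0]), x[0])

def A1(x, sigma=lambda s: 1.0 + s * s):   # diffusion field (sigma(x1), 0); sigma(s) = 1 + s^2
    return (sigma(x[0]), 0.0)

def bracket(V, W, x, h=1e-6):
    """Finite-difference Lie bracket [V, W] = DW.V - DV.W at x."""
    def jac_apply(F, x, v):               # directional derivative DF(x)v, central differences
        xp = [x[i] + h * v[i] for i in range(2)]
        xm = [x[i] - h * v[i] for i in range(2)]
        Fp, Fm = F(xp), F(xm)
        return tuple((Fp[i] - Fm[i]) / (2 * h) for i in range(2))
    DWv = jac_apply(W, x, V(x))
    DVw = jac_apply(V, x, W(x))
    return tuple(DWv[i] - DVw[i] for i in range(2))

# with sigma(s) = 1 + s^2 and m(s) = s:
# [A1, A0] = (sigma*m' - sigma'*m, sigma) = (1 - x1^2, 1 + x1^2)
x = (0.7, -0.3)
br = bracket(A1, A0, x)
assert abs(br[0] - (1 - x[0] ** 2)) < 1e-5
assert abs(br[1] - (1 + x[0] ** 2)) < 1e-5
```

Since $\sigma>0$, the fields $A_1$ and $[A_1,A_0]$ span $\R^2$ at every point, which is the Hörmander condition behind the hypoellipticity claim.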
\begin{example}[Trivial example]
In the special case $\sigma>0$ constant and $\mu=0$, i.e.,
$$\left\lbrace
\begin{aligned}
dS_t&=\sigma\,dW_t\\
dA_t&=S_t\,dt,
\end{aligned}\right.
$$
one easily checks
$$
X_{t\ast}=
\begin{pmatrix}1&0\\ t&1\end{pmatrix}\quad\text{and}\quad
X_{t\ast}^{-1}(A_1)\otimes X_{t\ast}^{-1}(A_1)=
\sigma^2\begin{pmatrix}1&-t\\ -t&t^2\end{pmatrix},
$$
and hence
$$C_T(x)=\sigma^2\begin{pmatrix}T&-T^2/2\\ -T^2/2&T^3/3\end{pmatrix}.$$
Consequently, the integration by parts argument of Sect.~\ref{Sect3} immediately gives
$$\frac\partial{\partial S_0}\E[f(S_T,A_T)]=
\frac6{\sigma T}\,\E\left[f(S_T,A_T)\,\left(\frac1T\int_0^TW_t\,dt-\frac13W_T\right)\right].$$
\end{example}
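The weight in the last display can be tested by Monte Carlo. The sketch below (our illustration, not part of the text) uses the fact that the pair $(W_T,\int_0^T W_t\,dt)$ is a centered Gaussian vector with covariances $\operatorname{Var}(W_T)=T$, $\operatorname{Var}\bigl(\int_0^T W_t\,dt\bigr)=T^3/3$, $\operatorname{Cov}=T^2/2$, so no time discretization is needed; with the payoff $f(s,a)=a$, $S_0=1$, $\sigma=1$, $T=1$, the true Delta is $\partial_{S_0}\,\E[A_T]=T=1$.

```python
import math
import random

def delta_estimate(n_paths, s0=1.0, sigma=1.0, T=1.0, seed=0):
    """Monte Carlo estimate of (6/(sigma*T)) E[ A_T ((1/T) I - W_T/3) ],
    where A_T = s0*T + sigma*I and (W_T, I), I = int_0^T W_t dt, is
    sampled exactly as a bivariate Gaussian (no time discretization)."""
    rng = random.Random(seed)
    # Var(W_T) = T, Var(I) = T^3/3, Cov(W_T, I) = T^2/2  ->  Cholesky
    c = (T ** 2 / 2) / T                     # regression coefficient of I on W_T
    resid_sd = math.sqrt(T ** 3 / 3 - c ** 2 * T)
    total = 0.0
    for _ in range(n_paths):
        w_T = rng.gauss(0.0, math.sqrt(T))
        integ = c * w_T + rng.gauss(0.0, resid_sd)
        a_T = s0 * T + sigma * integ         # payoff f(S_T, A_T) = A_T
        total += a_T * (integ / T - w_T / 3.0)
    return 6.0 / (sigma * T) * total / n_paths

est = delta_estimate(400_000)
# true value: d/dS0 E[A_T] = T = 1; tolerance is several standard errors wide
assert abs(est - 1.0) < 0.05
```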
\begin{remark} In the more general situation of Example \ref{as_opt}, i.e.,
\begin{equation*}
dS_t=\sigma(S_t)\,dW_t+\mu(S_t)\,dt\quad\text{and}\quad
A_t=\int_0^t S_r\,dr,
\end{equation*}
Theorem \ref{ThmOneGeneral} may be applied to give a formula of the type
$$\Delta_0=\frac\partial{\partial S_0}\E[f(S_T,A_T)]
=\E[f(S_T,A_T)\,\pi_T^\mstrut],$$
where the weight $\pi_T^\mstrut$ is explicitly given and may be implemented numerically
in Monte Carlo simulations. See \cite{Friz_Cass:2010} for extensions to jump diffusions,
and \cite{FLT:2008} for weights $\pi_T^\mstrut$ in terms of anticipating integrals.
\end{remark}
\section{The Case of Non-Euclidean Targets}\label{Sect8}
\setcounter{equation}0\noindent The aim of this section is to adapt our
method, to some extent, to the nonlinear case of harmonic maps between
manifolds. In addition to the manifold $M$, carrying a hypoelliptic
$L$-diffusion, we fix another manifold $N$, endowed with a torsionfree
connection $\nabla$. In stochastic terms, a smooth map $\map uMN$ is harmonic
(with respect to~$L$) if it takes $L$-diffusions on $M$ to
$\nabla$-martingales on $N$. Likewise, a smooth map $\map u{[0,t]\times
M}{N}$ is said to solve the nonlinear heat equation, if
$u\bigl(t-{\raise1.0pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x)\bigr)$ is a $\nabla$-martingale on $N$ for any
$L$-diffusion $X_{\raise1.0pt\hbox{\bf .}}(x)$ on $M$.
Henceforth, we fix a family $F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))$, $x\in M$ of
$\nabla$-martingales on $N$ where $F$ is differentiable in the second variable
with a derivative jointly continuous in both variables. In particular, such
transformations $F$ map hypoelliptic $L$-diffusions on $M$ into
$\nabla$-martingales on $N$ and include the following two cases:
\begin{align*}
F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))
&=u\circ X_{\raise1.0pt\hbox{\bf .}}(x)\text{ for some harmonic map $\map uMN$, and}\\
F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))&=u\bigl(t-{\raise1.0pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x)\bigr) \text{ where
$u$ solves the heat equation for maps $M\to N$.}
\end{align*}
Theorem \ref{LocMart} is easily extended to this situation. Recall that, if
$Y$ is a continuous semimartingale taking values in a manifold $N$ endowed
with a torsionfree connection~$\nabla$, then the geodesic (damped or deformed)
transport $\map{\Theta_{0,t}^\mstrut}{T^\mstrut_{\!Y_0}N}{T^\mstrut_{\!Y_t}N}$
on $N$ along $Y$ is defined by the following covariant equation along~$Y$:
\begin{equation}
\label{eq:GeodTrans}
\left\lbrace\begin{aligned}
d\,(\itr0\bull\Theta_{0,\bull}^\mstrut)&=
-\textstyle{1\over2}\,\itr0\bull
R(\Theta_{0,\bull}^\mstrut,dY)dY\\
\Theta_{0,0}^\mstrut&=\mathpal{id}
\end{aligned}\right.
\end{equation}
where $\map{\tr0t}{T^\mstrut_{\!Y_0}N}{T^\mstrut_{\!Y_t}N}$ is parallel
translation on $N$ along $Y$ and $R$ the curvature tensor to $\nabla$,
see~\cite{A-T 97}. Finally, recall the notion of anti-development of $Y$,
resp.\ ``deformed anti-development'' of $Y$,
\begin{equation}
\label{eq:AntiDev}
\SA(Y)=\int_0^{\bull}\itr0s\,\delta Y_s, \quad
\SA_{\hbox{\sevenrm def}}(Y)=\int_0^{\bull}\Theta_{0,s}^{-1}\,\delta Y_s
\end{equation}
which by definition both take values in $T^\mstrut_{\!Y_0}N$. Note that an
$N$-valued semimartingale $Y$ is a $\nabla$-martingale if and only if $\SA(Y)$, or
equivalently $\SA_{\hbox{\sevenrm def}}(Y)$, is a local martingale.
\begin{thm}
\label{LocMartCurvedTarget}
Let $F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))$, $x\in M$ be a family of
$\nabla$-martingales on $N$, as described above. Then, for any predictable
$\R^r$-valued process $k$ in $L^2_{\hbox{\sevenrm loc}}(Z)$,
\begin{equation}
\label{eq:QuasiDerCurvedTarget}
\Theta_{0,\bull}^{-1}\,dF(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))\,(T_xX_{\raise1.0pt\hbox{\bf .}})
\int_0^\bull (X_{s\ast}^{-1}A)_x^\mstrut k_s\,ds
-\SA_{\hbox{\sevenrm def}}\bigl(F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))\bigr)
\int_0^\bull\langle k,dZ\rangle
\end{equation}
is a local martingale in $T_{F(0,x)}N$. Here $\Theta_{0,\bull}$ denotes the
geodesic transport on $N$ along the martingale $F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))$.
\end{thm}
\begin{proof} Observe that by \cite{A-T 97},
\begin{equation*}
m_s:=\Theta_{0,s}^{-1}\,dF(s,\,{\raise1.5pt\hbox{\bf .}}\,)_{X_s(x)}\,X_{s\ast}
\end{equation*}
is a local martingale taking values in $T_xM\otimes T_{F(0,x)}N$, and that by
definition,
\begin{equation*}
\SA_{\hbox{\sevenrm def}}\bigl(F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))\bigr)=
\int_0^\bull\Theta_{0,s}^{-1}\,dF(s,\,{\raise1.5pt\hbox{\bf .}}\,)_{X_s(x)}
\,A\bigl(X_s(x)\bigr)\,dZ_s.
\end{equation*}
The rest of the (alternative) proof of Theorem \ref{LocMart} carries over
with straightforward modifications.
\end{proof}
It is straightforward to extend Theorem \ref{ThmOne} and Theorem \ref{ThmTwo}
to the nonlinear setting by means of the local martingale
\eqref{eq:QuasiDerCurvedTarget}.
\begin{thm}
\label{ThmOneVar}
Let $\map u{[0,t]\times M}{N}$ be a solution of the nonlinear heat equation,
$x\in M$, $v\in T_xM$. Let $D$ be a relatively compact open neighbourhood
of $x$ and $\sigma=\tau_D^\mstrut(x)\wedge t$ where $\tau_D^\mstrut(x)$ is
the first exit time of $X_{\raise1.0pt\hbox{\bf .}}(x)$ from $D$. Suppose there exists an
$\R^r$-valued predictable process $k$ such that
\begin{equation*}
\int_0^\sigma (X_{s\ast}^{-1}A)_x\,k_s\,ds\equiv v,\quad\hbox{a.s.}
\end{equation*}
and $\bigl(\int_0^\sigma\vert k_s\vert^2\,ds\bigr)^{1/2}\in
L^{1+\varepsilon}$ for some $\varepsilon>0$. Then the following formula
holds:
\begin{equation}
\label{eq:dHeatEq}
du(t,\,{\raise1.5pt\hbox{\bf .}}\,)_xv=\E\biggl[
\SA_{\hbox{\sevenrm def}}\bigl(u(t-{\raise1.5pt\hbox{\bf .}}\,,X_\bull(x))\bigr)_{\sigma}
\int_0^\sigma\langle k,dZ\rangle\biggr].
\end{equation}
\end{thm}
\begin{thm}
\label{ThmTwoVar}
Let $M$ be compact with smooth boundary $\partial M\not=\emptyset$. For
$x\in M{\setminus}\partial M$ let $\tau(x)$ be the first hitting time of
$\partial M$ with respect to the process $X_{\raise1.0pt\hbox{\bf .}}(x)$. Given $v\in T_xM$,
we suppose that there exists an $\R^r$-valued predictable process $k$ such
that
\begin{equation*}
\int_0^{\tau(x)}(X_{s\ast}^{-1}A)_x^\mstrut\,k_s\,ds\equiv v,\quad\hbox{a.s.}
\end{equation*}
and $\bigl(\int_0^{\tau(x)}\vert k_s\vert^2\,ds\bigr)^{1/2}\in
L^{1+\varepsilon}$ for some $\varepsilon>0$. Then, for any $u\in
C^\infty(M,N)$ which is harmonic on $M{\setminus}\partial M$, the following
formula holds:
\begin{equation}
\label{eq:dHarmNonLin}
(du)_xv
=\E\biggl[\SA_{\hbox{\sevenrm def}}\bigl(u(X_{\raise1.0pt\hbox{\bf .}}(x))\bigr)_{\tau(x)}
\int_0^{\tau(x)}\langle k,dZ\rangle\,\biggr].
\end{equation}
\end{thm}
Note that if $a$ is a predictable process taking values in
$T_xM\otimes(\R^r)^\ast$, as in Section~\ref{Sect4}, then
\begin{equation}
\label{eq:QuasiDerCurvedTarget1}
\Theta_{0,\bull}^{-1}\,dF(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))\,(T_xX_{\raise1.0pt\hbox{\bf .}})
\int_0^\bull (X_{s\ast}^{-1}A)_x^\mstrut a_s\,ds
-\SA_{\hbox{\sevenrm def}}\bigl(F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))\bigr)
\int_0^\bull a_r^\ast \,dZ_r
\end{equation}
gives a local martingale in $T_xM\otimes T_{F(0,x)}N$. In particular, setting
\begin{equation}
\label{eq:StandChoice}
a_s=(X_{s\ast}^{-1}A)_x^\ast\,1_{\{s\leq\tau\}},
\end{equation}
where $\tau$ may be any predictable stopping time, we see that
\begin{equation}
\label{eq:LocMart1}
n_s=\Theta_{0,s}^{-1}\,dF(s,{\raise1.5pt\hbox{\bf .}}\,)_{X_s(x)}\,X_{s\ast}\,C_{s\wedge\tau}(x)
-\SA_{\hbox{\sevenrm def}}\bigl(F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))\bigr)_s
\int_0^{s\wedge\tau}(X_{r\ast}^{-1}A)_x\,dZ_r
\end{equation}
is a local martingale. Let
\begin{equation}
\label{eq:YandDistY}
Y=\SA_{\hbox{\sevenrm def}}\bigl(F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))\bigr)\quad
\text{and}\quad
Y^\lambda=\SA_{\hbox{\sevenrm def}}\bigl(F(\,{\raise1.5pt\hbox{\bf .}}\,,X^\lambda_{\raise1.0pt\hbox{\bf .}}(x))\bigr)
\end{equation}
for variations $X^\lambda(x)$ of $X(x)$, as in Section~\ref{Sect3}, and recall that,
again with the choice~\eqref{eq:StandChoice},
\begin{equation}
\label{eq:Jacobi}
J_s=\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}
F\bigl(s,X^\lambda_s(x)\bigr)=
dF(s,{\raise1.5pt\hbox{\bf .}}\,)_{X_s(x)}^\mstrut\,X_{s\ast}\,C_{s\wedge\tau}(x).
\end{equation}
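Spelling out the intermediate step (a brief check; we take $C_t(x)=\int_0^t(X_{s\ast}^{-1}A)_x^\mstrut\,(X_{s\ast}^{-1}A)_x^\ast\,ds$, as in Section~\ref{Sect4}), the choice~\eqref{eq:StandChoice} gives
\begin{equation*}
\int_0^s (X_{r\ast}^{-1}A)_x^\mstrut\,a_r\,dr
=\int_0^{s\wedge\tau}(X_{r\ast}^{-1}A)_x^\mstrut\,(X_{r\ast}^{-1}A)_x^\ast\,dr
=C_{s\wedge\tau}(x),
\end{equation*}
which explains the appearance of $C_{s\wedge\tau}(x)$ both in \eqref{eq:LocMart1} and in \eqref{eq:Jacobi}.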
By definition, $J_\bull w$ is a vector field on $N$ along the martingale
$F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))$ for each $w\in T_x^\ast M$. Imitating the
strategy of Section~\ref{Sect7}, the idea is to differentiate
$Y^\lambda_{\raise1.0pt\hbox{\bf .}}\,G^\lambda_{\raise1.0pt\hbox{\bf .}}$ with respect to $\lambda$.
\begin{lemma}
\label{VertParVariation}
Keeping the notations as above, we have
\begin{equation}
\label{eq:DiffAntiDev}
\hbox{\rm vert}\left[\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}Y^\lambda\right]
=\Theta_{0,\bull}^{-1}J-J_0
+\int_0^\bull\,\Theta_{0,s}^{-1}\,(\nabla\Theta_{0,s}^\mstrut)\,dY_s
\end{equation}
where $\nabla\Theta_{0,\bull}^\mstrut : T_{F(0,x)}N\to
T_{F(\,{\raise1.5pt\hbox{\bf .}}\,,X_\bull(x))}N$ is defined by
\begin{equation}
(\nabla\Theta_{0,\bull}^\mstrut) u
=v_J^{-1}\bigl(\bigl(\Theta_{0,\bull}^{c\mstrut}\,h_{J_0}(u)\bigr){}^{\rm vert}\bigr).
\end{equation}
In particular,
$\hbox{\rm vert}\left[\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}Y^\lambda\right]$
and
$\Theta_{0,\bull}^{-1}J-J_0$
differ only by a local martingale.
Here~$\Theta_{0,\bull}^{c\mstrut}$ denotes the geodesic transport on $TN$
along $J$ with respect to the complete lift $\nabla^c$ of the connection $\nabla$.
\end{lemma}
We are not going to prove Lemma~\ref{VertParVariation} here. We just remark
that, again with the choice \eqref{eq:StandChoice} for the process $a$, we end
up with the following local martingale:
\begin{equation}
\label{eq:VertMart}
\begin{split}
m&:=\hbox{vert}\left[\partial_\lambda^\mstrut\bigl\vert_{\lambda=0}
(Y^\lambda G^\lambda)\right]\\
&\phantom{:}=\Theta_{0,\bull}^{-1}J-J_0
+\int_0^\bull\,\Theta_{0,s}^{-1}\nabla\Theta_{0,s}^\mstrut\,dY_s
-Y\int_0^{\bull\wedge\tau}(X_{s\ast}^{-1}A)_x\,dZ_s.
\end{split}
\end{equation}
Then a procedure along the lines of Section~\ref{Sect7} leads to a formula for
$dF(0,\,{\raise1.5pt\hbox{\bf .}}\,)_xv$ which is analogous to the linear case, but with an
additional term of the type
\begin{equation}
\label{eq:Difference}
\E\left[\Bigl(\int_0^\sigma
\Theta_{0,s}^{-1}\nab{J_s}\Theta_{0,s}^\mstrut\,dY_s\Bigr)\,
C^{-1}_\sigma(x)\,v\right]
\end{equation}
for some stopping time $\sigma$. At the moment, it seems unclear whether it
is possible to avoid this extra term.
\section{Concluding Remarks}\label{Sect9}\setcounter{equation}0\noindent
1. The presented differentiation formulas are not intrinsic: they involve the
derivative flow which depends on the particular SDE and not just on the
generator. It is possible to make the formulas more intrinsic by using the
framework of Elworthy, Le Jan, Li \cite{E-L-L 97}, \cite{E-LJ-L 98} on
geometry of SDEs (e.g., filtering out redundant noise and working with
connections induced by the SDE).
2. In this paper we exploited perturbations of the driving Brownian motion and
a change of measure as methods for deriving variational formulas. There
are of course other ways of performing perturbations that lead to local
martingales and hence to integration by parts formulas. For instance,
one observes that the local martingale property of $F(\,{\raise1.5pt\hbox{\bf .}}\,,X_{\raise1.0pt\hbox{\bf .}}(x))$
is preserved under
\begin{itemize}
\item[(i)] a change of measure via Girsanov's theorem,
\item[(ii)] a change of time,
\item[(iii)] rotations of the Brownian motion $Z$.
\end{itemize}
In particular, (iii) seems to be promising in the hypoelliptic context since
it leads to contributions in the direction of the bracket $[A_i,A_j]$. So far
however, it is unclear to us how to relate such variations to regularity
results under hypoellipticity conditions.
\end{document}
\begin{document}
\title{Effects of time reversal symmetry in dynamical decoupling}
\author{Alexandre M. Souza, Gonzalo A. \'{A}lvarez and Dieter Suter}
\address{Fakult\"at Physik, Technische Universit\"at Dortmund, D-44221,
Dortmund, Germany}
\pacs{03.67.Pp, 03.65.Yz, 76.60.Lz}
\begin{abstract}
Dynamical decoupling (DD) is a technique for preserving the coherence of quantum mechanical states
in the presence of a noisy environment.
It uses sequences of inversion pulses to suppress the environmental perturbations by
periodically refocusing them.
It has been shown that different sequences of inversion pulses show vastly different
performance, in particular also concerning the correction of experimental pulse imperfections.
Here, we investigate specifically the role of time-reversal symmetry in the building-blocks
of the pulse sequence.
We show that using time symmetric building blocks often improves the performance
of the sequence compared to sequences formed by time asymmetric building blocks.
Using quantum state tomography of the echoes generated by the sequences,
we analyze the mechanisms that lead to loss of fidelity and show how they can be
compensated by suitable concatenation of symmetry-related blocks of decoupling pulses.
\end{abstract}
\maketitle
\section{Introduction \label{sec1} }
Dynamical decoupling (DD) \cite{viola,yang} is becoming an established technique for protecting quantum states from
decoherence, with possible applications in quantum
information \cite{ddgate1,ddgate2,ddgate3,ddgate4,dqc1dd} and
magnetic resonance \cite{noise1,noise2,noise3,magnetometer1,meriles,magnetometer2,magnetometer3}. The
technique was devised to increase decoherence times by refocusing the system-environment
interactions with a sequence of control pulses periodically
applied to the system.
Recent experiments have
successfully implemented DD methods and demonstrated
the resulting increase of coherence times in different
systems \cite{exp4,exp3,warren,pra,exp1,exp2,ashok,prl,jpb}.
These works also showed that the performance of
DD sequences can be limited or even counterproductive if the accumulated effect
of pulse imperfections becomes too strong \cite{pra,exp2,prl,jpb}.
One approach to compensate the effect of these errors is to combine
one basic decoupling cycle with a symmetry-related copy into a longer cycle.
The resulting cycle can be more robust, i.e. less susceptible to pulse imperfections
than its building blocks, provided the basic blocks are well chosen and combined in
a suitable way.
In the field of nuclear magnetic resonance (NMR), symmetry-related arguments
have often been used for constructing supercycles \cite{Haeberlen1,mansfield,rhim1,rhim2}.
Using the symmetry properties of specific interactions, it is possible to remove them selectively
while retaining or restoring others \cite{levitt1,levitt2}. Symmetrization is widely used to eliminate
unwanted odd-order average Hamiltonian terms \cite{burum}.
This approach has been instrumental in the design of high-performance decoupling and
recoupling sequences \cite{levitt1,levitt2}.
Besides sequence development, symmetry arguments have also
been used extensively in the design of individual pulses with reduced sensitivity to
experimental imperfections \cite{pines1,morris}.
The main goal of this paper is to investigate differences between otherwise identical DD cycles,
in which the timing of the pulses is either symmetric with respect to time reversal, or not.
As the basic block we consider the XY-4 sequence \cite{maudsley}.
We compare the performance of the basic sequences as well as compensated higher-order sequences
and analyze their imperfections theoretically by average Hamiltonian theory \cite{Haeberlen1,ath} and experimentally
by applying quantum state tomography \cite{chuang,oliveira} to
the system after the end of each decoupling cycle.
This paper is organized
as follows.
In Section \ref{sec2} we introduce the basic idea of dynamical decoupling and demonstrate the relevance
of time-reversal symmetry in this context.
In the subsequent sections \ref{sec3} and \ref{sec4} we compare different sequences
based on symmetric or asymmetric building blocks.
In the last section we draw some conclusions.
\section{Symmetrization in DD \label{sec2}}
Dynamical decoupling is a technique in which the coherence of qubits is dynamically
protected by refocusing the qubit-environment interaction \cite{viola,yang}. Within this
technique, a sequence
of $\pi$ rotations
is periodically applied to the system.
For a purely dephasing environment, i.e. one that couples only to the $z$-component of the system qubit,
this can be achieved simply by a train of identical $\pi$-pulses, the so-called
Carr-Purcell (CP)- \cite{cp} or Carr-Purcell-Meiboom-Gill (CPMG)-sequence \cite{mg}.
The shortest DD sequence that
cancels the zero-order average Hamiltonian for a general
system-environment interaction \cite{viola,cdd} is the XY-4 sequence \cite{maudsley} (see figures
\ref{fig1}a and \ref{fig1}b). This sequence also has the advantage of being much
less sensitive to pulse imperfections than the CP-sequence \cite{maudsley,xy}.
\begin{figure}
\caption{\label{fig1} (a) Time-symmetric and (b) asymmetric versions of the
XY-4 cycle.}
\end{figure}
\begin{figure}
\caption{\label{cpmg1} Experimental echo trains of the $^{13}$C spin polarization
under CPMG with a time-symmetric and a non-time-symmetric cycle.}
\end{figure}
In the spectroscopy and quantum computing communities, two versions of the XY-4
sequence are used that differ by a seemingly minor detail.
As shown in Fig. \ref{fig1}a, the basic cycle originally
introduced in NMR \cite{maudsley,xy} starts with a delay
of duration $\tau/2$ and ends with another delay of the same duration.
It therefore shows reflection symmetry in the time domain with respect to the
center of the cycle.
In contrast to that, the sequence used in the quantum information
community \cite{viola,cdd,Khodjasteh,sdd,preskill}
starts with a delay of duration $\tau$ and ends after the fourth pulse (see Fig. \ref{fig1}b).
Clearly, this cycle is not symmetric in time.
One consequence of this small difference is that in the case shown in Fig. \ref{fig1}a,
the echoes are formed in the center of the windows between any two pulses,
while in Fig. \ref{fig1}b, the echoes coincide with every second pulse.
The separation in time between the echoes is therefore twice as long in this case.
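The different echo positions can be reproduced with a minimal numerical sketch (our own illustration, not part of the experiment): for a static dephasing environment, each $\pi$ pulse inverts the sign of the accumulated dephasing, and an echo forms wherever the integrated phase returns to zero.

```python
import numpy as np

def echo_times(pulse_steps, n_steps, dt):
    """Echo positions for a train of ideal pi pulses and a static dephasing bath.

    Time is discretized in steps of dt; the toggling sign s starts at +1 and is
    inverted by every pi pulse.  An echo forms wherever the accumulated phase
    phi(t) = int_0^t s(t') dt' returns to zero."""
    s = np.ones(n_steps)
    for p in pulse_steps:
        s[p:] *= -1.0              # pi pulse at time p * dt inverts the dephasing
    phi = np.cumsum(s)             # phase in units of dt (exact +/-1 arithmetic)
    return (np.where(phi == 0)[0] + 1) * dt

dt = 1e-3                          # time step; tau = 1000 steps = 1.0 (arb. units)
sym = [500, 1500, 2500, 3500]      # tau/2 - X - tau - Y - tau - X - tau - Y - tau/2
asym = [1000, 2000, 3000, 4000]    # tau - X - tau - Y - tau - X - tau - Y

print(echo_times(sym, 4200, dt))   # ~[1, 2, 3, 4]: echoes at the window centers
print(echo_times(asym, 4200, dt))  # ~[2, 4]: echoes coincide with every 2nd pulse
```

In the symmetric case the echoes are spaced by $\tau$, in the asymmetric case by $2\tau$.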
Figure \ref{cpmg1} illustrates this difference with an experimental example.
Here, we measured the time evolution of the $^{13}$C nuclear spin polarization during
a CPMG sequence, using in one case a time-symmetric and in the other case
a non time-symmetric cycle.
The sample used for this experiment was polycrystalline adamantane.
The dephasing of the nuclear spin signal originates from the interaction with an environment
consisting of $^1$H nuclear spins.
To make this environment appear static and generate a long train of echoes,
we applied a homonuclear decoupling sequence to the protons \cite{ashok}.
As expected, in the symmetric case, the echoes appear with half the separation
of the asymmetric case.
\begin{figure}
\caption{\label{cpmg} Decay of the echo amplitude after a single CPMG cycle for
the symmetric and the asymmetric case, as a function of the cycle time $2\tau$.}
\end{figure}
The larger separation of the echoes can also lead to a faster decay of the echo amplitude
if the environment is not static.
This is illustrated in Fig. \ref{cpmg}, which shows the decay of the echo amplitude as a function of time.
In this case, we did not apply homonuclear decoupling and the system-environment interaction
is therefore modulated by the homonuclear magnetic dipole-dipole interaction between the
environmental protons \cite{ashok}.
The plotted signal shows the echo amplitude measured after a single CPMG cycle for the symmetric and asymmetric case,
as a function of the cycle time $2 \tau$.
Clearly, the symmetric cycle is more efficient in preserving the state of the system,
in agreement with findings from multiple-pulse NMR \cite{levitt1,levitt2}.
Since any multiple pulse cycle suffers from imperfections and non-ideal properties,
it is often desirable to construct longer cycles that have better properties than simply
repeating the basic cycle.
Examples of DD sequences that can be constructed from the XY-4 cycle
include the XY-8 and XY-16 \cite{xy} sequences shown in Table \ref{tab1}.
Here, the XY-8 sequence concatenates an XY-4 cycle with its time-reversed version
\cite{xy,Khodjasteh,sdd,preskill},
thus generating a new cycle, which is inherently time-symmetric, independent of which
version of the XY-4 sequence was used for the building blocks.
In the following sections \ref{sec3} and \ref{sec4} we show that two symmetric
sequences, constructed according to the same rules from a basic XY-4 block, can have
different behavior depending on whether the basic block is symmetric or not.
\begin{table}
\begin{centering}
{\footnotesize }\begin{tabular}{ccc}
& \textbf{\footnotesize Asymmetric}{\footnotesize{} } & \textbf{\footnotesize Symmetric}{\footnotesize{} }\tabularnewline
\hline
& & \tabularnewline
{\footnotesize \textbf{XY-4} } & {\footnotesize $[\tau-X-\tau-Y]^{2}$ } & {\footnotesize $[\tau/2-X-\tau-Y-\tau/2]^{2}$ }\tabularnewline
& & \tabularnewline
\hline
& & \tabularnewline
{\footnotesize \textbf{XY-8} } & \multicolumn{2}{c}{ {\footnotesize (XY-4)(XY-4)$^{T}$ } } \tabularnewline
& & \tabularnewline
{\footnotesize \textbf{XY-16} } & \multicolumn{2}{c}{ {\footnotesize (XY-8)$\,\overline{\mbox{(XY-8)}}$ } }\tabularnewline
& & \tabularnewline
\hline
\end{tabular}
\par\end{centering}{\footnotesize \par}
\caption{{\footnotesize \label{tab1} Dynamical decoupling sequences.
The top line shows the time symmetric and asymmetric versions of XY-4, which can
be used as building blocks for other sequences.
$X$ and $Y$ represent $\pi$-pulses around the $x$
and $y$ axes respectively.
$U^T$ is the time-reversed sequence and $\overline{U}$ stands
for the sequence with inverted phases. }}
\end{table}
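The construction rules of Table \ref{tab1} can be spelled out as a short sketch (pulse phases in degrees; delays left implicit; variable names are ours):

```python
xy4 = [0, 90, 0, 90]                          # X, Y, X, Y

xy8 = xy4 + xy4[::-1]                         # (XY-4)(XY-4)^T: append time-reversed copy
xy16 = xy8 + [(p + 180) % 360 for p in xy8]   # append the phase-inverted copy

print(xy8)        # [0, 90, 0, 90, 90, 0, 90, 0]
print(xy16[8:])   # [180, 270, 180, 270, 270, 180, 270, 180]
```

Whichever XY-4 version is used for the timing, the XY-8 pulse-phase pattern is the same, and its second half is the mirror image of the first.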
\section{Average Hamiltonian Theory \label{sec3}}
In this section we compare DD sequences
constructed from time symmetric and asymmetric building blocks in the
framework of average Hamiltonian theory \cite{Haeberlen1,ath}.
While the zero-order average Hamiltonian of the asymmetric sequence is the same as that of the symmetric sequence,
this is no longer the case for the higher order terms.
In particular, if a cycle is symmetric, that is
if $\tilde{H}(t) = \tilde{H}(\tau_c -t)$ for $0 \leq t \leq \tau_c$, where $\tilde{H}(t)$ represents the
Hamiltonian in the toggling frame \cite{Haeberlen1,ath}, it can
be shown that all odd order terms
of the average Hamiltonian vanish \cite{burum}.
Clearly, this condition can only be fulfilled if the timing of the sequence is symmetric,
as in the example of Fig. \ref{fig1}a.
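The vanishing of the odd orders can be seen directly from the first-order Magnus (average Hamiltonian) term (a standard argument along the lines of \cite{burum}):
\begin{equation*}
\overline{H_1}=\frac{-i}{2\tau_c}\int_0^{\tau_c}\!dt_2\int_0^{t_2}\!dt_1\,
\bigl[\tilde{H}(t_2),\tilde{H}(t_1)\bigr].
\end{equation*}
Substituting $t_i\to\tau_c-t_i$ and using $\tilde{H}(t)=\tilde{H}(\tau_c-t)$ maps the domain $t_1<t_2$ onto $t_2<t_1$; relabeling the variables reproduces the same integral with the commutator reversed, so $\overline{H_1}=-\overline{H_1}=0$, and the analogous cancellation occurs in every odd order.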
We first consider the
sequence XY-4, which is our basic building block.
Our system consists of a single qubit, which we describe as a spin 1/2,
and an environment, which consists of a spin-bath.
The Hamiltonian describing the system plus environment is then
\begin{eqnarray}
H = H_S + H_{SE} + H_E,
\label{HH}
\end{eqnarray}
where $H_S = \omega_S S_z $ is the system Hamiltonian,
$\mathbf{S} = (S_x,S_y,S_z)$ is the spin vector operator of the system qubit
and $\omega_S$ is the Zeeman frequency of the system.
$H_E$ is the environment
Hamiltonian, which does not commute with $H_{SE}$ in general but is not specified further.
$H_{SE}$ is the system-environment interaction:
\begin{eqnarray}
H_{SE} &=& \sum_k b_k S_z I_z^k ,
\end{eqnarray}
where $\mathbf{I^k} = (I^k_x,I^k_y,I^k_z)$ is
the spin vector operator
of the $k^{th}$ environment spin and
$b_k$ is the coupling constant between the system and the $k^{th}$ spin of the environment.
In the following, we work in a resonant rotating frame, where $\omega_S = 0$
and therefore $H_S = 0$.
In our case, the dominant source of experimental imperfections is the flip-angle error.
The actual pulse propagator for a nominal $\pi$ rotation around an axis defined by $\phi$ is then
\begin{eqnarray}
R(\phi) &=& e^{- i (1 + \epsilon) \pi S_{\phi}}
\label{rot}
\end{eqnarray}
where $\epsilon$ is the relative flip angle error,
$S_{\phi} = \cos\phi \, S_x + \sin\phi \, S_y$, and $\phi$ is the phase of the pulse.
We can write the zeroth ($\overline{H_0}$) and first ($\overline{H_1}$) order terms of the average
Hamiltonian for the time-symmetric XY-4 sequence as
\begin{eqnarray}
\overline{H_0^{S}} &=& H_E \label{H0S} \\
\overline{H_1^{S}} &=& \frac{5 \epsilon^2 \pi^2}{16 \tau} S_z - \sum_k b_k { \frac{\epsilon \pi}{32} (S_x + S_y) I_z^k}.
\label{H1S}
\end{eqnarray}
Details of the calculation are given in the appendix.
The zeroth order average Hamiltonian matches exactly the target Hamiltonian,
and for perfect pulses ($\epsilon=0$), the first order term vanishes, $\overline{H_1^{S}} = 0$, as expected for any symmetric sequence.
For finite pulse errors, the first-order term contains a rotation of the spin qubit around the $z$ axis
by an angle $ 5 \epsilon^2 \pi^2/4$.
This term results from the accumulated flip angle errors and is independent of the environment.
Since this term is proportional to the square of the flip angle error $\epsilon$,
it generates a rotation in the same direction for all spins, independent of the actual
field that they experience.
The second term in eq. (\ref{H1S}), in contrast, is linear in $\epsilon$.
For an optimal setting of the pulse, $\epsilon$ is distributed symmetrically around zero
and the resulting evolution due to this term does not lead to an overall precession,
but to a loss of amplitude.
This term combines pulse errors and the system-bath interaction.
It arises from the fact that pulses that do not
implement a $\pi$ rotation cannot properly refocus the system-environment interaction.
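The quadratic, sign-independent character of this rotation can be checked numerically with a minimal sketch (our own illustration; delays and environment are omitted here, so the numerical prefactor need not reproduce eq. (\ref{H1S}) exactly):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2

def pulse(phi, eps):
    """Propagator of a nominal pi pulse about the axis phi, flip angle (1+eps)*pi."""
    S = np.cos(phi) * sx + np.sin(phi) * sy
    lam, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(-1j * (1 + eps) * np.pi * lam)) @ V.conj().T

def xy4_z_angle(eps):
    """Net z-rotation angle of one XY-4 cycle of imperfect pulses (no delays, no bath)."""
    X, Y = pulse(0.0, eps), pulse(np.pi / 2, eps)
    U = Y @ X @ Y @ X                      # pulses applied in time order X, Y, X, Y
    U = U / np.sqrt(np.linalg.det(U))      # fix the global phase (det -> 1)
    ang = -2 * np.angle(U[0, 0])           # U ~ exp(-i * ang * S_z) up to O(eps^3)
    return (ang + np.pi) % (2 * np.pi) - np.pi

for eps in (0.05, -0.05, 0.1):
    print(eps, xy4_z_angle(eps))  # quadratic in eps and independent of its sign
```

The extracted angle scales as $\epsilon^2$ and keeps its sign when $\epsilon$ is inverted, in line with the statement above that all spins precess in the same direction.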
Now we compare
these results with the average Hamiltonian of the time asymmetric form of XY-4:
\begin{eqnarray}
\overline{H_0^{A}} &=& H_E \label{H0A} \\
\overline{H_1^{A}} &=& \frac{5 \epsilon^2 \pi^2}{16 \tau} S_z - \sum_k b_k \frac{ \epsilon \pi }{16} S_x I_z^k \nonumber \\
& & + i \tau S_z \sum_k b_k [I_z^k, H_E].
\label{H1A}
\end{eqnarray}
The most striking difference from the symmetric case is the appearance of a new term
which is a commutator between the internal Hamiltonian of the environment $H_E$ and the system-environment
interaction Hamiltonian $H_{SE}$.
Under ideal conditions, the first-order average Hamiltonian vanishes for the symmetric building block,
but not for the asymmetric case.
For the asymmetric case, the third term, which is proportional to $[H_{SE},H_E]$, remains.
The commutator describes the time dependence of the system-environment interaction
due to the environmental Hamiltonian $H_E$.
This difference from the symmetric case may be understood in terms of the different positions
of the echoes shown in Fig. \ref{fig1} and Fig. \ref{cpmg1}:
In the asymmetric sequence, the time between echoes is twice as long as in the symmetric sequence,
which means that a time-dependent environment has a bigger effect.
In the symmetric sequence, the effect of the time-dependent environment appears only in the next-higher
order term.
Rules for improving the performance of multiple pulse sequences were discussed, e.g.,
in the context of broadband heteronuclear decoupling \cite{levitt3}
or for the compensated Carr-Purcell sequences \cite{xy}.
If we combine a XY-4 cycle with its time-reversed image to an XY-8 cycle,
we obtain a time-symmetric cycle, even if we start from the non-time symmetric block.
Nevertheless, we expect different results for the two cases.
An explicit calculation of the average Hamiltonian for the
combined cycle shows that
$\overline{H_0^{S}} = \overline{H_0^{A}} = H_E$
and $\overline{H_1^{S}} = \overline{H_1^{A}} = 0$,
i.e. all deviations from the ideal Hamiltonian vanish to first order.
This remains true for finite pulse errors: the symmetry of the sequence cancels the effect
of pulse errors in all odd-order average Hamiltonian terms.
We therefore proceed to calculate the second order terms.
For simplicity, we do not calculate the general expression, but consider two limiting cases.
First, we assume that the environmental Hamiltonian vanishes, $H_E = 0$.
The second order term then becomes
\begin{eqnarray}
\overline{H_2^{S}} &=& \frac{13 \epsilon^3 \pi^3 }{1536 \tau}(S_x + S_y) + \sum_k \frac{\epsilon^2 \pi^2 b_k}{384} S_z I_z^k \label{H2Sa}\\
\overline{H_2^{A}} &=& \overline{H_2^{S}} + \sum_k \frac{\epsilon b_k^2 \tau}{368} S_y \label{H2Aa}.
\end{eqnarray}
Again, the average Hamiltonian for the sequence built from asymmetric blocks
contains an additional error term, which depends on the pulse error and the square
of the system-environment interaction.
As the second limiting case, we assume ideal pulses but non-vanishing environmental Hamiltonian, $H_E \neq 0$.
The second order terms then become
\begin{eqnarray}
\overline{H_2^{S}} &=& \frac{\tau^2}{8} [[H_E,H_{SE}],H_E - \frac{1}{3} H_{SE}] \label{H2Sb}\\
\overline{H_2^{A}} &=& \overline{H_2^{S}} + \frac{\tau^2}{8} [[H_E,H_{SE}],7 H_E -H_{SE}]\label{H2Ab}.
\end{eqnarray}
As for the XY-4 sequence, the time dependence of the environment, represented by the commutator
$[H_E,H_{SE}]$, has the bigger effect if the sequence uses an asymmetric building block and therefore
generates echoes with larger time delays between them.
\section{Experimental Results \label{sec4}}
\subsection{Setup and system}
For the experimental tests we used natural abundance $^{13}$C nuclear spins in the CH$_2$ groups
of a polycrystalline adamantane sample as the system qubit.
The carbon spins are coupled to nearby $^1$H nuclear spins by heteronuclear magnetic dipole interaction
corresponding to $H_{SE}$.
The protons are coupled to each other by the homonuclear dipolar interaction,
which corresponds to $H_E$ and does not commute with $H_{SE}$.
The system-environment interaction is therefore not static, and the carbon spins
experience a fluctuating environment \cite{pra}.
Under our conditions, the
interaction between the carbon nuclei can be neglected and the
decoherence mechanism is a pure dephasing process \cite{pra}; the evolution
of the system and environment is thus described by the Hamiltonian (\ref{HH}).
The experiments were
performed on a home-built 300 MHz
solid-state NMR spectrometer. The basic experimental scheme
consisted of a state preparation
period, during which we prepared the carbon spins in a superposition state
oriented along the $y$ direction, a variable evolution
period, where the DD sequences were applied, and a final read-out period, where we
determined the final state by quantum state tomography \cite{chuang,oliveira}.
\begin{figure}
\caption{\label{mag} Decay of the signal under the symmetric XY-4 sequence for
different delays $\tau$ between the refocusing pulses.}
\end{figure}
Figure \ref{mag} shows the signal decay for the symmetric version
of XY-4 and different delays $\tau$ between the refocusing pulses.
For the shortest cycle times, we observe shorter decays and oscillations.
As discussed in Ref. \cite{pra}, this is an indication that in this regime pulse
imperfections play the dominant role.
From the decay curves, we extract decay
times as the times where the magnetization
has decayed to $1/e$ of the initial value.
\subsection{Measured decay times}
Figure \ref{t2x} shows the decay times of the $M_y$-magnetization
as a function of the delay $\tau$ between the pulses.
For delays between 200 $\mu$s and 50 $\mu$s, the decoupling
performance improves as the delays between the pulses are reduced. However, as the delay between
the pulses becomes shorter than 50 $\mu$s,
the decay time decreases again, in agreement with
what was observed in \cite{pra,prl}: in this region, pulse errors become
more important than the coupling to the environment.
This occurs equally for both the symmetric
and the asymmetric XY-4 sequence.
If we concatenate the XY-4 sequence with its time-reversed version to the XY-8 sequence,
we obtain qualitatively different behavior for the two different versions of XY-4:
If we start from the symmetric form of XY-4, the resulting XY-8(S) sequence shows improved
decoupling performance for increasing pulse rate, without saturating.
This is a clear indication that in this case, the concatenation eliminates the effect of pulse
imperfections and generates a robust, well-compensated sequence.
In strong contrast to this, concatenation of the asymmetric version of XY-4 to XY-8(A)
does not lead to a significant improvement: the decay times for XY-8(A) are identical to those
of the two XY-4 sequences, within experimental uncertainty.
A further concatenation to XY-16 does not change this behavior.
The qualitatively different behavior of the sequences using symmetric versus asymmetric building blocks
clearly shows that for the asymmetric versions, the pulse errors dominate, while the symmetric ones
compensate for the effect of pulse errors.
\begin{figure}
\caption{\label{t2x} Decay times of the $M_y$-magnetization as a function of the
delay $\tau$ between the pulses for the different DD sequences.}
\end{figure}
\subsection{Tomographic analysis}
\begin{figure}
\caption{\label{ppxy4} Quantum state tomography of the qubit evolution under the
two versions of the XY-4 sequence.}
\end{figure}
For a more detailed picture of the process that reduces the signal for high pulse rates,
we applied state tomography to the evolving qubit, measuring all three components
along the $x$-, $y$- and $z$-directions.
Figure \ref{ppxy4} shows the observed data for both versions of the XY-4 sequence.
The oscillation of the $x$- and $y$ components and the constantly small value of the
$z$-component are a clear indication of a precession around the $z$-axis,
in addition to the loss of signal amplitude.
This combination of precession and reduction of amplitude is also shown in the lower part of
Fig. \ref{ppxy4}, where the arrows show the xy-components of the magnetization for different
times during the sequence.
According to eqs (\ref{H1S}) and (\ref{H1A}), the precession around the $z$-axis
originates from the pulse error term $5 \epsilon^2 \pi^2/(16 \tau) S_z$, which is
proportional to $\epsilon^2$ and is the same
for the symmetric and the asymmetric sequence, in excellent agreement with the observed behavior.
\begin{figure}
\caption{\label{ppxy8} Quantum state tomography of the qubit evolution under the
two XY-16 sequences.}
\end{figure}
Figure \ref{ppxy8} shows the corresponding data for the two XY-16 sequences.
Here, as well as in the case of XY-8 (data not shown), we also observe a precession for the XY-16(A) sequence, but for the
sequence with symmetric building blocks, the oscillation is not observed.
Again, these results indicate that the sequences built from symmetric XY-4 blocks
have smaller average Hamiltonians and therefore show better performance than those built from asymmetric blocks.
If we change the spacing between the pulses, the behavior remains the same.
In Fig. \ref{pxy}, we show the measured precession angle around the $z$-axis divided by the number of pulses.
The precession is indistinguishable from zero for the compensated XY-8(S) and XY-16(S) sequences.
For the other sequences, it is significant and independent
of the delay between the pulses.
The precession of the magnetization around the $z$-axis that we observe for some of the sequences
causes a deviation of the system from the desired evolution and therefore reduces the fidelity of the process.
However, compared to a dephasing process, it is easier to correct and can in principle be compensated if it is known.
We therefore compared not only the reduction of the magnetization amplitude along the initial direction,
but also the total magnetization left in the system, which eliminates the effect of the precession.
Figure \ref{t2n} shows the decay times of the total magnetization for different XY-$n$ sequences.
For short delays between the pulses, the difference between sequences built from
symmetric and asymmetric building blocks is small,
indicating that the main difference is related to the precession originating from the
pulse errors, which is better
compensated by concatenating symmetric building blocks.
For pulse delays longer than $\tau \approx 15\,\mu$s, we again see that the
symmetric sequences XY-8(S) and XY-16(S) are superior to the asymmetric versions.
At this point, the time dependence of the environment plays a bigger role and reduces
the efficiency of the refocusing \cite{pra,prl}.
In agreement with eqs (\ref{H2Aa}) and (\ref{H2Ab}), these effects are bigger for those sequences
that use asymmetric building blocks.
\begin{figure}
\caption{\label{pxy} Measured precession angle around the $z$-axis per pulse for
the different sequences, as a function of the delay between the pulses.}
\end{figure}
\begin{figure}
\caption{\label{t2n} Decay times of the total magnetization for the different
XY-$n$ sequences.}
\end{figure}
\section{Discussion and Conclusions \label{sec5}}
Dynamical decoupling is becoming a standard technique for extending the lifetime of
quantum mechanical coherence.
Many different sequences have been put forward for reducing the effect of the environmental noise
on the system.
Since the number of possible sequences is infinite, a relatively straightforward approach for
designing improved sequences consists in concatenating different building blocks in such a way
that the resulting cycle has a smaller overall average Hamiltonian than that of its component blocks.
In this paper we consider the XY-4 sequence as a basic building block.
Since different versions of the XY-4 sequence were proposed in the literature,
some of them symmetric in time, others asymmetric, we compare these two
versions and in particular the different sequences that result when they are
concatenated with time-inverted and phase-shifted copies.
Since time-symmetric sequences generate average Hamiltonians in which all odd-order
terms vanish for ideal pulses,
it is expected that they perform better than non-symmetric but otherwise
identical sequences.
Experimentally, we could not verify this for the XY-4 sequence, since pulse errors dominate
the behavior under our experimental conditions.
However, in the case of the CPMG sequence, where pulse errors are insignificant,
we could clearly verify this expectation.
The symmetry of the basic building blocks is also important when they are concatenated
to higher order sequences, such as the XY-8 and XY-16 sequences \cite{xy}.
In this case, the odd order terms vanish in the average Hamiltonians of both sequences,
but the second order terms of the sequences that are built from asymmetric blocks
contain additional unwanted terms.
The experimental data are in agreement with this observation:
sequences consisting of time symmetric building blocks perform significantly better
than the corresponding sequences formed by time asymmetric blocks.
In order to understand the decay processes during the DD sequences, we performed quantum state
tomography as a function of time.
The results from these measurements show two different contributions to the overall fidelity loss:
a precession around the $z$-axis, which we could attribute to the accumulated effect of flip-angle errors,
and an overall reduction of the amplitude, which results from the system-environment interaction.
For short delays between the pulses and correspondingly large number of pulses,
the pulse error term is the dominating effect.
Again, the symmetric and asymmetric version of the XY-4 sequence show similar performance.
However, as we use them as building blocks of the higher-order XY-8 and XY-16 sequences,
we find that the effect of the pulse errors is almost perfectly compensated if we use the symmetric
building blocks, while a significant effect remains when asymmetric blocks are concatenated.
While we have analyzed the effect of symmetry mostly for the XY-$n$ sequences,
this can clearly be generalized.
As we showed in Fig. \ref{cpmg}, the symmetric version of the CPMG sequence
shows significantly better decoupling performance than the asymmetric version.
The same concept can also be applied to the CDD sequences, which are generated by
inserting XY-4 sequences inside the delays of a lower-order CDD sequence \cite{cdd}.
The conventional concatenation scheme \cite{cdd} uses asymmetric building blocks.
Here, we used the symmetric XY-4 sequence as the building-blocks,
and we modified the concatenation scheme in such a way that the symmetry is preserved
and the delays between the pulses are identical at all levels of concatenation.
The conventional (asymmetric) version CDD$_n$(A) is iterated as $[CDD_{n-1}-X-CDD_{n-1}-Y]^2$.
In contrast to that, we construct the symmetrized version CDD$_n$(S) as
$[\sqrt{CDD_{n-1}}-X-CDD_{n-1}-Y-\sqrt{CDD_{n-1}}]^2$ \cite{prl}.
For $n=1$, we have $CDD_1$ = XY-4.
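The asymmetric concatenation rule quoted above can be sketched as a short recursion over pulse labels (delays omitted); the function name and the string representation are ours, and the symmetric rule is not shown since its half-blocks $\sqrt{CDD_{n-1}}$ do not reduce to a plain pulse list.

```python
def cdd_asym(n):
    """Asymmetric concatenation: CDD_n(A) = [CDD_{n-1} - X - CDD_{n-1} - Y]^2."""
    if n == 1:
        return ["X", "Y", "X", "Y"]          # CDD_1 = XY-4
    inner = cdd_asym(n - 1)
    return (inner + ["X"] + inner + ["Y"]) * 2

# The pulse count grows as p(n) = 2*(2*p(n-1) + 2)
print(len(cdd_asym(1)), len(cdd_asym(2)), len(cdd_asym(3)))  # 4 20 84
```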
In Fig. \ref{fcdd} we compare the process fidelities \cite{pfidel,nielsen} for the
two versions of the CDD-2 sequence.
Clearly, the symmetrized version CDD$_2$(S) shows a significantly
improved performance, compared to the standard,
asymmetric version CDD$_2$(A).
\begin{figure}
\caption{\label{fcdd}
Process fidelities of the symmetric, CDD$_2$(S), and asymmetric, CDD$_2$(A), versions of the CDD-2 sequence.}
\end{figure}
The results presented in this paper show
clearly that time-reversal symmetry
is a useful tool for improving dynamical decoupling sequences.
The symmetric sequences often perform better, and never worse, than the non-symmetric sequences,
at no additional cost.
\ack
We acknowledge useful discussions with Daniel Lidar.
This work is supported by the DFG through Su 192/24-1.
\section*{References}
\appendix
\section{Average Hamiltonian Calculation}
In this Appendix we show how the average Hamiltonian
was calculated. The essence of average-Hamiltonian theory is
that a cyclic evolution, $U(t)$, can be described
by an effective evolution governed by a time-independent
Hamiltonian $\overline{H}$. When $U$ is composed of a sequence of
unitary operations, i.e. $U = e^{O_n} \cdots e^{O_2}e^{O_1}$, the average
Hamiltonian can be approximately computed by recursive applications
of the Baker-Campbell-Hausdorff formula
\begin{eqnarray}
\log(e^{A}e^{B}) &\approx& A + B + \frac{1}{2}[A,B] + \nonumber \\
&& \frac{1}{12} \left( [A,[A,B]] + [[A,B],B] \right)
\label{bch}
\end{eqnarray}
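As a quick numerical sanity check (our sketch, not from the paper), one can compare the exact matrix logarithm with the truncated series above for two small random anti-Hermitian generators; the residual should be fourth order in the small parameter, since the first omitted commutator term is.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
eps = 1e-2  # small parameter playing the role of ||H|| * tau

def rand_antiherm(n=2):
    """Random anti-Hermitian matrix of norm ~ eps, as for -i*H*tau."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return -1j * eps * (M + M.conj().T) / 2

def comm(X, Y):
    return X @ Y - Y @ X

A, B = rand_antiherm(), rand_antiherm()
exact = logm(expm(A) @ expm(B))
bch = A + B + comm(A, B) / 2 + (comm(A, comm(A, B)) + comm(comm(A, B), B)) / 12
# The first omitted term of the series is fourth order in eps
print(np.linalg.norm(exact - bch))
```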
To approximate the average Hamiltonian of the XY-4 sequence, consider the following sequence:
\begin{eqnarray}
[\tau_i - R_1 - \tau - R_2 - \tau - R_3 - \tau - R_4 -\tau_f],
\label{seq}
\end{eqnarray}
where $R_1 = R_3 = R(X)$, $R_2 = R_4 = R(Y)$ and $R({\phi})$ is the pulse propagator
defined in Eq. (\ref{rot}).
In the asymmetric form
of XY-4, $\tau_i = \tau$ and $\tau_f = 0$. For $\tau_i = \tau_f = \tau/2$, we obtain the symmetric form of XY-4.
The total sequence propagator is
\begin{eqnarray}
U &=& e^{-i H \tau_f} \left(\prod_{k=2}^{4} R_k e^{-i H \tau} \right) R_1 e^{-i H \tau_i} ,
\label{propag1}
\end{eqnarray}
where $H$ is the Hamiltonian of Eq. (\ref{HH}).
To account for flip angle errors, we decompose the propagator into a product of the ideal pulse propagator sandwiched
between two additional evolutions:
\begin{eqnarray}
R_{\phi} &=& e^{-i (1 + \epsilon) \pi S_{\phi}} \nonumber \\
&=& e^{-i H_{\phi} \frac{t_p}{2}} e^{-i \pi S_{\phi}} e^{-i H_{\phi} \frac{t_p}{2}}
\label{rotdec}
\end{eqnarray}
where $ H_{\phi} = \frac{\epsilon \pi}{t_p} S_{\phi}$ and $t_p$ is the pulse length. Substituting
equation (\ref{rotdec}) in (\ref{propag1}) and using the following approximation:
\begin{eqnarray}
e^{-i H_{\phi} \frac{t_p}{2}} e^{-i H \tau} &\approx& e^{-i (H \tau + H_{\phi} \frac{t_p}{2})} \nonumber \\
&\approx& e^{-i (H + \frac{1}{2 \tau} \epsilon \pi S_{\phi}) \tau} \nonumber \\
&\approx& e^{-i H^{'} \tau },
\end{eqnarray}
the new sequence propagator is then rewritten as
\begin{eqnarray}
U &=& e^{-i H_5^{'} \tau_f} \left(\prod_{k=2}^{4} R_k e^{-i H_k^{'} \tau} \right) R_1 e^{-i H_1^{'} \tau_i} ,
\label{propag1b}
\end{eqnarray}
where
\begin{eqnarray}
\tau_i H_1^{'} &=&\tau_i H + \frac{t_p}{2} H_{X} \\
\tau H_{k=2,3,4}^{'} &=&\tau H + \frac{t_p}{2} \left( H_{X} + H_{Y} \right) \\
\tau_f H_5^{'} &=&\tau_f H + \frac{t_p}{2} H_{Y}.
\end{eqnarray}
The calculation can be simplified by transforming the Hamiltonians to a new frame
after each pulse, the so-called toggling frame. The Hamiltonians $\widetilde{H_k}$ in this new frame
are given by:
\begin{eqnarray}
\tau_i \widetilde{H_1} &=& \tau_i(H_E + H_{SE}) + \frac{t_p}{2} H_X \label{tgh1}\\
\tau \widetilde{H}_{k=2,3,4} &=& \tau [H_E + (-1)^{k+1} H_{SE}] \nonumber \\
&& - t_p [(2 \delta_{k,2} + 1) H_X + (2 \delta_{k,4} +1) H_Y] \label{tgh2}\\
\tau_f \widetilde{H_5} &=& \tau_f(H_E + H_{SE}) + \frac{t_p}{2} H_Y \label{tgh3}
\end{eqnarray}
and the sequence propagator is:
\begin{eqnarray}
U \approx \prod_{k=1}^{5} e^{-i \widetilde{H_k} \tau_k}
\label{propag1c}
\end{eqnarray}
The final step consists of the recursive applications of eq. (\ref{bch}) to
eq. (\ref{propag1c}). An explicit calculation of the zeroth and first
order terms leads to the
equations (\ref{H0S}) and (\ref{H1S}) for the symmetric case and (\ref{H0A}) and (\ref{H1A}) for
the asymmetric case. The calculation for XY-8 follows the same procedure
as described for XY-4. Here the sequence is
composed of eight pulses and nine delays (see table \ref{tab1}); this leads to a total
propagator analogous to (\ref{propag1b}):
\begin{eqnarray}
U &=& e^{-i H_9^{'} \tau_f} \left(\prod_{k=2}^{8} R_k e^{-i H_k^{'} \tau} \right) R_1 e^{-i H_1^{'} \tau_i} ,
\label{propag2b}
\end{eqnarray}
Transforming the Hamiltonians $H_k^{'}$ to the toggling frame, one can
show that $\widetilde{H}_k = \widetilde{H}_{10-k}$, that the Hamiltonians
$\widetilde{H}_{k=1,2,3,4}$ are the same as in Eqs. (\ref{tgh1})--(\ref{tgh2}), and that
$\tau \widetilde{H_5} = \tau (H_E + H_{SE}) + t_p H_{Y}$.
\end{document}
\begin{document}
\title{Quantum state transfer in a $XX$ chain with impurities}
\author{Analia Zwick, Omar Osenda}
\address{Facultad de Matem\'atica, Astronom\'ia y F\'isica, Universidad Nacional de C\'ordoba, and IFEG-CONICET,
Ciudad Universitaria, X5016LAE C\'ordoba, Argentina}
\ead{[email protected], [email protected]}
\begin{abstract}
One-spin-excitation states are involved in the transmission of quantum
states and entanglement through a quantum spin chain; the localization
properties of these states are crucial for achieving the transfer of
information from one end of the chain to the other. We investigate
the bipartite entanglement and localization of the one excitation
states in a quantum $XX$ chain with one impurity. The bipartite entanglement
is obtained using the Concurrence and the localization is analyzed
using the inverse participation ratio. Changing the strength of the
exchange coupling of the impurity allows us to control the number
of localized or extended states. The analysis of the inverse participation
ratio allows us to identify scenarios where the transmission of quantum
states or entanglement can be achieved with a high degree of fidelity.
In particular, we identify
a regime where the transmission of quantum states between the ends of the
chain is achieved in a short transmission time $\sim N/2$, where $N$ is the number
of spins in the chain, and with a large fidelity.
\end{abstract}
\pacs{75.10.Pq; 03.67.Hk; 03.67.Mn; 05.50.+q }
\section{Introduction }
Since the first works dealing with the entanglement shared by pairs
of spins on a quantum chain, the translational invariance of the chain
(and its states) has been exploited to facilitate the analysis of
the problem \cite{wootters2000,wootters2001,nachtergaele2006}. However,
there are a number of problems which do not possess the property of
being translationally invariant: semi-infinite chains, chains with
impurities \cite{osenda2003} or, in a more abstract sense, random
quantum states \cite{zyczkowski2004}. These problems have localized
quantum states whose properties strongly differ from those of translationally
invariant quantum states.
Localized quantum states can be used to store quantum information
\cite{apollaro2007} and play an important role in the propagation of entanglement through a quantum spin chain \cite{apollaro2006}.
Such states also appear in some models of quantum computers
in the presence of static disorder \cite{Georgeot2000}.
Since the localization of a quantum state is a global property, it
seems natural to study its properties using a global entanglement
measure such as the one proposed by Meyer and Wallach \cite{mewa2002}. Giraud {\em et al.}~\cite{giraud2007} derived exact expressions
for the mean value of the Meyer-Wallach entanglement for localized
random vectors and studied the dependence of this measure with the
localization length of the states. Viola and Brown \cite{viola2007}
studied the relationship between generalized entanglement and the
delocalization of pure quantum states. Of course there are other
possibilities to study the relationship between localization of quantum
states and entanglement. The bipartite entanglement and localization
of one-particle states in the Harper model has been addressed by Li
{\em et al.}~\cite{li2004}, the entanglement entropy at localization
transitions is studied in \cite{jia2008} and the localized
entanglement in one-dimensional Anderson model in~\cite{li2004b}.
In many proposals of quantum computers the qubit energies can often be
individually controlled; this corresponds to controllable disorder
of a spin system. Besides, in these models, the effective spin-spin
interaction is usually strongly anisotropic; it varies from the Ising
coupling in nuclear magnetic resonance and other systems \cite{chuang1998}
to the $XY$-type or the $XXZ$-type coupling in some Josephson-junction-based
systems \cite{makhlin2001}. The localization properties of one and
two excitation states in the $XXZ$ spin chain with a defect was studied
with some detail by Santos and Dykman \cite{santos2003}, but they
did not study the entanglement of the one and two excitation states.
In this paper we are interested in the behaviour of the localization
and the bipartite entanglement of the pure eigenstates of a quantum
chain with one impurity located at one end. It is well known that
the presence of one impurity results in the presence of a localized
state. If the strength of the impurity is large enough the energy
of the localized state lies outside the band of magnons, also known
as one spin excitation states \cite{santos2003}. The one spin magnons
in a homogeneous chain are extended states \cite{santos2003}.
As we will show, if the localization of a given state is measured
with the inverse participation ratio there are two kinds of localized
states, a) exponentially localized states that lie outside the band
of magnons, and b) localized states that lie inside the band, whose
number depends on the length of the chain and the strength of the
impurity. This second kind plays a fundamental role in the transmission
of quantum states through the chain. In most quantum state transfer
protocols the state to be transferred is localized at one end of the
quantum chain and the transmission is successful when the time evolution
of the system produces an equally localized state at the other end
of the chain. So it seems natural to investigate the time evolution
of a localization measure to gain some insight about the problem of
quantum state transfer.
So, the analysis of the time evolution
of the inverse participation ratio,
when the initial state consists of a single excitation located at
the impurity site, allows the identification of scenarios where the transmission
of quantum states can be achieved for (comparatively) short times
and with a very good fidelity. In this sense we extend some results
obtained by W\'ojcik {\em et al.}~\cite{wojcik2005}.
The paper is organized as follows: in Section II we present the $XX$
model describing the quantum spin chain with an impurity. In Section
III we analyze in some detail the spectrum of the one spin excitations
and the eigenstates. In Section IV we present the results obtained
for the inverse participation ratio for each one spin excitation eigenstate
while the bipartite entanglement of the eigenstates is analyzed in
Section V. Finally, in Section VI, we discuss the relationship between
localization and transmission of quantum states.
\section{Model}
\label{secmod}
We consider a linear chain of $N$-qubits with $XX$ interaction. The coupling
strengths are homogeneous
except at one site, the impurity, where the coupling strength is different.
The system is described by the Hamiltonian
\begin{equation}
H(\alpha)=\alpha J(\sigma_{1}^{x}\sigma_{2}^{x}+\sigma_{1}^{y}\sigma_{2}^{y})+\sum_{i>1}J(\sigma_{i}^{x}\sigma_{i+1}^{x}+\sigma_{i}^{y}\sigma_{i+1}^{y}),
\label{hamiltonian}
\end{equation}
where $\sigma^{\gamma}$ are the Pauli matrices, $J<0$ is the exchange coupling
coefficient and $\alpha J$
is the impurity exchange strength, $\alpha=1$ corresponds to the
homogeneous case.
Since the Hamiltonian commutes with $S_{z}=\sum_{i}\sigma_{i}^{z}$,
the Hamiltonian $H(\alpha)$ has a block structure, where each block
is characterized by the number of excited spins in the chain. Because
we are interested in the transmission of a state with one excited
spin from one end of the chain to the other, we focus on the eigenvectors
of the one excitation subspace where the complete dynamics take place.
To describe the eigenstates, we choose the basis of computational
states of this subspace, $|n\rangle=(\uparrow\uparrow...\uparrow\downarrow_{n}\uparrow...\uparrow)$,
where $n=1,\ldots,N$, giving a basis whose size equals the number
of spins of the chain.
In this basis, the Hamiltonian $H$ is represented by an $N\times N$
matrix
\begin{equation}
{\mathcal{H}}=\left(\begin{array}{ccccc}
0 & \alpha J & 0 & \dots & 0\\
\alpha J & 0& J & \dots & 0\\
0 & J & 0 & \dots & 0\\
\vdots & \vdots & \vdots & \ddots & J\\
0 & 0 & 0 & J & 0\end{array}\right).\label{hamiltonian matriz}
\end{equation}
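The matrix (\ref{hamiltonian matriz}) is easy to diagonalize numerically. The sketch below (our construction, with $J=-1$) checks two properties used in the next Section: the symmetry of the one-excitation spectrum about zero for even $N$, and the detachment of the extremal levels from the band for $\alpha>\sqrt{2}$.

```python
import numpy as np

def xx_matrix(N, alpha, J=-1.0):
    """One-excitation block of H(alpha): tridiagonal, with alpha*J on the first bond."""
    off = np.full(N - 1, J)
    off[0] = alpha * J
    return np.diag(off, 1) + np.diag(off, -1)

# Even N: the spectrum is symmetric about zero and E = 0 is not an eigenvalue
E = np.linalg.eigvalsh(xx_matrix(10, alpha=1.0))
print(np.allclose(E, -E[::-1]), np.min(np.abs(E)) > 1e-8)   # True True

# For alpha > sqrt(2) the extremal levels detach from the band [-2|J|, 2|J|]
E2 = np.linalg.eigvalsh(xx_matrix(50, alpha=1.6))
print(E2[0] < -2.0, E2[-1] > 2.0)                           # True True
```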
Implementations of this model could be realized, for example, with cold atoms
confined in optical lattices
\cite{hartmann2007,dorner2003,lewenstein2007,duan2003} or with nuclear spin
systems in NMR \cite{madi1997,alvarez2010}. While in the first case an initial
pure state in the one excitation subspace can be realized, in the spin ensemble
situations of NMR an effective one excitation subspace is achieved by creating
pseudo pure states where an excess of magnetization is localized on a given
spin.
\section{Energy spectrum and eigenstates}
In this Section we briefly recall some known results about the spectrum
and the eigenstates of the model emphasizing those features that
are of interest in the following Sections.
The one excitation spectrum consists of $N$ eigenenergies denoted
by $\left\{ E_{1}\leq E_{2}\leq...\leq E_{N}\right\} $. Choosing
the total number of spins to be even, the spectrum is symmetric with respect to
zero ($E=0$
is not an eigenvalue) for any value of $\alpha$. Then $\left\{
E_{1},...,E_{M}\right\} $
are negative values whereas $\{E_{M+1},...,E_{N}\}$ are positive,
where $M=N/2$. In the homogeneous case ($\alpha=\alpha_{J}\equiv1$),
the energy spectrum lies between the values $\pm2|J|$; this interval is usually
called the {\em band} of eigenvalues. The size
of the chain only changes the number of eigenvalues between those
extreme values, the spectrum becoming continuous when $N\rightarrow\infty$.
\begin{figure}
\caption{\label{energias}
One-excitation energy spectrum as a function of the impurity strength $\alpha$.}
\end{figure}
The inhomogeneous case shows a different behaviour. For large enough
$\alpha$ the minimal and the maximal eigenenergy become isolated
from the band. There is a critical value $\alpha_{c}$ which separates
the region of the spectrum where the energies make a band ($0<\alpha<\alpha_{c}$)
from the region where the energies make a band with two isolated energies
($\alpha>\alpha_{c}$). { The critical point $\alpha_{c}$ can be obtained
analytically, and for large values of $N$,
$\alpha_{c}\simeq\sqrt{2}$. We will further analyze this point later on.}
For $\alpha\gg\alpha_{c}$ the minimal and maximal energies move
apart from the band proportionally to $-\alpha$ and $+\alpha$ respectively.
This behaviour is depicted in Figure \ref{energias}.
Figure \ref{energias} shows that most of the eigenenergies seem to
be fairly independent of $\alpha$, except for the minimal and maximal
energies. But a more detailed study of the derivative
of the eigenenergies with respect to $\alpha$ (see section V), shows
two regions where the changes in the spectrum are more noticeable:
(i) for $\alpha\sim0$ two eigenenergies become degenerate because
the system changes from a chain with $N$ coupled spins to a chain
with $N-1$ coupled spins and an uncoupled spin; (ii) for
$\alpha\lesssim\alpha_{c}$ there is
a number of avoided crossings between successive eigenenergies, because
of the ``collision'' among the minimal (or maximal) eigenenergy and
the band.
The eigenstates in the one excitation subspace $|\Psi_{E}(\alpha)\rangle$,
whose eigenvalue equation is
\begin{equation}\label{ecua-autovalores}
H(\alpha)|\Psi_{E}(\alpha)\rangle=E|\Psi_{E}(\alpha)\rangle,
\end{equation}
can be written as a superposition of the one excitation states
\begin{equation}
|\Psi_{E_{j}}(\alpha)\rangle=\sum_{n=1}^{N}\Psi_{n}^{(j)}\left|n\right\rangle
,\label{one-excitation-sta}
\end{equation}
where due to the symmetries of the spectrum
\begin{equation}\label{psi-simetricos-1}
{
\Psi_{n}^{(j)}=(-1)^{n}\Psi_{n}^{(N-j+1)}.
}
\end{equation}
These coefficients $\Psi_{n}^{(j)}$ contain information about the localization
and entanglement properties of the eigenstates and can be written
as \cite{santos2003}
\begin{equation}\label{coe-analiticos}
\Psi_{n}^{(j)}=de^{i\theta n}+d^{\prime}e^{-i\theta n}.
\end{equation}
In a homogeneous chain, the eigenstates are wave-like superpositions
of the one excitation states where the coefficients of the superpositions
are given by (\ref{coe-analiticos}) with $\theta$ real. Otherwise,
for $\alpha\ne1$, the eigenstates within the band are very
similar to the states of the homogeneous case (Figure \ref{two-states}
shows $\Psi_{n}^{(M)}$ for $\alpha=0.1$), but they differ in their
coefficient on the impurity site. For $\alpha>\alpha_{c}$ the minimal
eigenenergy state $|\Psi_{E_{1}}\rangle$ is quite different (similarly
for $|\Psi_{E_{N}}\rangle$), its coefficients $\Psi_{n}^{(1)}$ decay
exponentially (Figure \ref{two-states} shows $\Psi_{n}^{(1)}$ for
$\alpha=1.6$).
\begin{figure}
\caption{\label{two-states}
Coefficients of two representative eigenstates: the extended state $\Psi_{n}^{(M)}$ for $\alpha=0.1$ and the exponentially localized state $\Psi_{n}^{(1)}$ for $\alpha=1.6$.}
\end{figure}
It is rather simple to show the existence of a localized state when $\alpha\geq
\sqrt{2}$. Using the ansatz $\Psi_1=u_1$ and $\Psi_n=(-1)^{n+1} e^{-n\kappa}$,
for $n\geq 2$, to construct a state $\left|\Psi\right\rangle$, and replacing
this state in Equation~(\ref{ecua-autovalores}), after some algebra we obtain
that
\begin{equation}
e^{2\kappa} = \alpha^2 -1,
\end{equation}
so, to obtain a localized state, the condition $ e^{2\kappa}\geq 1$ implies
that $\alpha\geq \sqrt{2}$. This has been discussed previously; see, for
example, the work of Stolze and Vogel \cite{stolze}. In \cite{stolze} the
authors exploit the mapping between the $XX$ model with one excitation and a
non-interacting fermion model with one particle.
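The relation $e^{2\kappa}=\alpha^{2}-1$ can be checked numerically by extracting the decay rate of the extremal eigenvector (our sketch, rebuilding the tridiagonal matrix (\ref{hamiltonian matriz}) with $J=-1$):

```python
import numpy as np

def xx_matrix(N, alpha, J=-1.0):
    off = np.full(N - 1, J)
    off[0] = alpha * J
    return np.diag(off, 1) + np.diag(off, -1)

alpha, N = 1.6, 200
E, V = np.linalg.eigh(xx_matrix(N, alpha))
psi = np.abs(V[:, 0])                    # extremal state, localized at the impurity
kappa = -np.log(psi[20] / psi[19])       # asymptotic decay rate |psi_{n+1}/psi_n| = e^{-kappa}
print(np.isclose(np.exp(2 * kappa), alpha**2 - 1, rtol=1e-3))   # True
```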
The density matrix for each eigenstate is given by \begin{equation}
\hat{\rho}_{E}(\alpha)=|\Psi_{E}(\alpha)\rangle\langle\Psi_{E}(\alpha)|,\label{rho_{E}}\end{equation}
\noindent which is a $N\times N$ matrix in the one excitation subspace.
\section{Localization of the eigenstates}
As stated above, the eigenenergies and eigenstates change according
to the strength of the impurity considered in the system. To quantify
and study their changes, we calculate the eigenstate localization
as a function of the impurity strength. For that purpose we use the
inverse participation ratio (IPR) \cite{giraud2007},
\begin{equation}
L_{IPR}(\left|\Psi\right\rangle )=\frac{\sum_{n=1}^{N}\Psi_{n}^{2}}{\sum_{n=1}^{N}\Psi_{n}^{4}},
\end{equation}
where $\Psi_{n}$ are the coefficients of the superposition
(\ref{one-excitation-sta})
of the state. When the state is highly localized ({\em i.e.} $\Psi_{n}$
is nonzero for only one particular value of $n$), $L_{IPR}(\left|\Psi\right\rangle )$
attains its minimum value, 1, and when the state is uniformly distributed
({\em i.e.} $\Psi_{n}=1/\sqrt{N}$ for all $n$) the IPR attains its maximum value, $N$.
We call a state $|\Psi\rangle$ {\em extended} if
$L_{IPR}(|\Psi\rangle)\sim{\mathcal{O}}(N)$,
{\em i.e.} the IPR is of the same order
of magnitude as the length of the chain.
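The two limiting values quoted above are immediate to verify numerically (our sketch; the formula assumes a normalized state):

```python
import numpy as np

def ipr(psi):
    """L_IPR = (sum psi_n^2) / (sum psi_n^4), for a normalized coefficient vector."""
    p = np.abs(psi) ** 2
    return p.sum() / (p * p).sum()

N = 16
e_loc = np.zeros(N); e_loc[0] = 1.0      # fully localized state -> IPR = 1
e_ext = np.full(N, 1 / np.sqrt(N))       # uniformly extended state -> IPR = N
print(ipr(e_loc), ipr(e_ext))            # 1.0 16.0
```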
From (\ref{psi-simetricos-1}), two states whose eigenenergies
are symmetric with respect to zero, say $|\Psi_{E_{j}}\rangle$ and
$|\Psi_{E_{N-j+1}}\rangle$ where $j\leq N/2$, have the same IPR,
{\em i.e.} $L_{IPR}(|\Psi_{E_{j}}\rangle)=L_{IPR}(|\Psi_{E_{N+1-j}}\rangle)$. As
a consequence, each curve in Figure \ref{IPR1} is double and we consider
the IPR only for the states $\left\{ |\Psi_{E_{1}}\rangle,...,|\Psi_{E_{M}}\rangle\right\} $.
\begin{figure}
\caption{\label{IPR1}
Inverse participation ratio $L_{IPR}$ of the eigenstates as a function of the impurity coupling $\alpha$.}
\end{figure}
Figure \ref{IPR1} shows the inverse participation ratio $L_{IPR}$
of several eigenstates $\{|\Psi_{E_{1}}\rangle,....,|\Psi_{E_{N}}\rangle\}$
as a function of the impurity coupling $\alpha$. We can identify
three regions where the behaviour of the $L_{IPR}$ is qualitatively
different. These regions are separated by $\alpha_{J}$ and $\alpha_{c}$,
where at those values all eigenstates are equally localized.
The first region $0<\alpha<\alpha_{J}$ shows several localized eigenstates
corresponding to energies close to zero, i.e. the center of the band.
Calling $\alpha_{E_{j}}^{m}$ the value of $\alpha$ at which $L_{IPR}(E_{j},\alpha)=L_{IPR}(\left|\Psi_{E_{j}}(\alpha)\right\rangle )$
attains its minimum, the numerical results show that $L_{IPR}(\alpha_{E_{M}}^{m})<L_{IPR}(\alpha_{E_{M-1}}^{m})<...$
where $\alpha_{E_{M}}^{m}<\alpha_{E_{M-1}}^{m}<...$, {\em i.e.}
an eigenstate is more localized the closer it is to $E=0$. Besides,
the number of localized states increases with $N$.
In the second region, $\alpha_{J}<\alpha<\alpha_{c}$, the eigenstates
with energies close to the border of the band become more extended,
acquiring an IPR maximum near $\alpha_{c}$. These peaks become
sharper as $N$ grows. At $\alpha_{c}$ these eigenstates are
again equally localized, but for values of $\alpha$ larger than,
though very close to, $\alpha_{c}$ the eigenstates become
more localized. The size of the interval around $\alpha_{c}$ in
which this critical behaviour can be observed depends on the size of
the chain. These localization changes seem to be related to the avoided
crossings in the spectrum described previously.
In the last region, $\alpha>\alpha_{c}$, there are only two highly
localized eigenstates, corresponding to the minimal and maximal eigenenergies,
$E_{1}$ and $E_{N}$. The other states are extended over the $N-1$
remaining sites of the chain.
We want to stress that the IPR gives a coarse description of the eigenstates;
for example, the states in Figure \ref{two-states}, despite their
very different behaviour, are equally localized if the measure of localization
is the IPR: indeed, $L_{IPR}(\Psi_{E_{1}})=L_{IPR}(\Psi_{E_{M}})\simeq5.6$
for both states. This indicates that the IPR cannot distinguish
the exponentially localized state from a wave-like
superposition extended over the chain if the latter has a large enough
coefficient $\Psi_{1}$.
This shows that the IPR is a good tool to quantify changes in the
system due to the introduction of an impurity spin; however, it does
not give information about where the eigenstate is localized. Moreover,
it does not distinguish between quite different states, such as those
shown in Figure \ref{two-states}. Studying the coefficients of
the eigenstates, we can observe where they are localized; in the
present case they are mainly localized on the impurity site (see
Figure \ref{two-states}). However, since we are interested in the transmission of
initially localized quantum states, and a successful transmission results
in another localized state, the IPR could provide an easy way to identify when
the transmission has taken place.
Since the IPR does not distinguish between the exponentially localized
states that lie outside the band of magnons and the localized states inside the
band, it is necessary to study both kinds of states using a local quantity. In
the next Section we study the entanglement between the impurity site and its
first neighbour; this will allow us to classify the different eigenstates
according to their entanglement content.
\section{Entanglement of the eigenstates}
The bipartite entanglement between two qubits can be calculated using
the Concurrence \cite{woottersPRL98}. The Concurrence of two qubits
in an arbitrary state characterized by the density matrix $\rho$
is given by
\begin{equation}
C(\rho)=\max\{0,\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}\},\label{concu}\end{equation}
where the $\lambda_{i}$ are the square roots of the eigenvalues,
in decreasing order, of the non-Hermitian matrix $\rho\widetilde{\rho}$.
The spin-flipped state $\widetilde{\rho}$ is defined as
\begin{equation}
\widetilde{\rho}=(\sigma^{y}\otimes\sigma^{y})\rho^{*}(\sigma^{y}\otimes\sigma^{y}),
\end{equation}
where $\rho^{*}$ is the complex conjugate of $\rho$, taken
in the computational basis $\{|\uparrow\uparrow\rangle,
|\uparrow\downarrow\rangle, |\downarrow\uparrow\rangle,
|\downarrow\downarrow\rangle\}$.
The concurrence takes values between 0 and 1, where 0 means that the
state is disentangled whereas 1 means a maximally entangled state.
\noindent When considering a subsystem of two qubits on the chain,
the concurrence is calculated with the reduced density matrix. The
reduced density matrix for the spin pair $(i,j)$, $\rho_{E}^{(i,j)}(\alpha)$,
corresponding to the eigenstate $|\Psi_{E}(\alpha)\rangle$ is given
by
\begin{equation}
\rho_{E}^{(i,j)}(\alpha)=\mathrm{Tr}\left|\Psi_{E}(\alpha)\right\rangle \left\langle \Psi_{E}(\alpha)\right|=\mathrm{Tr}\,\hat{\rho}_{E}(\alpha),
\end{equation}
\noindent where the trace is taken over the remaining $N-2$ spins leading
to a $4\times4$ matrix.
The structure of the reduced density matrix follows from the symmetry
properties of the Hamiltonian. Thus, in our case the concurrence
$C(\rho_{E_{k}}^{(i,j)})$ depends on $i$ and $j$, {\em i.e.}
the indexes of the sites where the spin pair lies. Note that in the
translationally invariant case $C(\rho_{E_{k}}^{(i,j)})$ depends
only on $|i-j|$. In what follows
$C_{i,j}=C_{i,j}(\rho_{E_{k}})=C(\rho_{E_{k}}^{(i,j)})$.
Using the definition $\langle\hat{A}\rangle=Tr(\hat{\rho}\hat{A})$,
we can express all the matrix elements of the density matrix $\rho^{(i,j)}$
in terms of different spin-spin correlation functions. In particular,
for nearest neighbors spins and the eigenstate $|\Psi_{E_{j}}\rangle$,
we get
\begin{equation}
\rho_{E_{j}}^{(i,i+1)}=\left(\begin{array}{cccc}
a_{j} & 0 & 0 & 0\\
0 & b_{j} & \langle\sigma_{i}^{+}\sigma_{i+1}^{-}\rangle_{E_{j}} & 0\\
0 & \langle\sigma_{i}^{+}\sigma_{i+1}^{-}\rangle_{E_{j}}^{*} & d_{j} & 0\\
0 & 0 & 0 & 0\end{array}\right),\label{mat-red}\end{equation}
where \begin{equation}
a_{j}=\frac{1}{4}\langle(\sigma^{z}+I)_{i}(\sigma^{z}+I)_{i+1}\rangle_{E_{j}},\end{equation}
\begin{equation}
b_{j}=\frac{1}{4}\langle(\sigma^{z}+I)_{i}(I-\sigma^{z})_{i+1}\rangle_{E_{j}},\end{equation}
\begin{equation}
d_{j}=\frac{1}{4}\langle(I-\sigma^{z})_{i}(\sigma^{z}+I)_{i+1}\rangle_{E_{j}},\end{equation}
$I$ is the $2\times2$ identity matrix, $\sigma_{i}^{\pm}=(\sigma_{i}^{x}\pm i\sigma_{i}^{y})/2$,
and
\begin{equation}
\langle\ldots\rangle_{E_{j}}=\left\langle \Psi_{E_{j}}\right|\ldots\left|\Psi_{E_{j}}\right\rangle.
\end{equation}
Thus, the concurrence turns out to be \begin{equation}
C_{i,i+1}(\rho_{E_{j}})=\max\{0,2\mid\langle\sigma_{i}^{+}\sigma_{i+1}^{-}\rangle_{E_{j}}\mid,2\sqrt{|b_{j}d_{j}|}\}.\label{formula_concurrence}
\end{equation}
For the set of eigenstates that we are considering, the expression
for the concurrence can be further simplified. After some algebra we get
\begin{equation}
b_{j}=(\Psi_{i+1}^{(j)})^{2},\quad d_{j}=(\Psi_{i}^{(j)})^{2},
\end{equation}
and that
\begin{equation}
\langle\sigma_{i}^{+}\sigma_{i+1}^{-}\rangle_{E_{j}}=\Psi_{i+1}^{(j)}\Psi_{i}^{(j)}.
\end{equation}
So, we get that
\begin{equation}
C_{i,i+1}(\rho_{E_{j}})=2\left|\Psi_{i+1}^{(j)}\Psi_{i}^{(j)}\right|.
\end{equation}
Using the Hellmann-Feynman theorem, and the symmetry properties of
the Hamiltonian, we find that
\begin{equation}
\frac{\partial E_{j}}{\partial\alpha}=2J\left\langle \Psi_{E_{j}}\right|\sigma_{1}^{+}\sigma_{2}^{-}\left|\Psi_{E_{j}}\right\rangle.\label{no-diagonal}
\end{equation}
From the expression for the reduced density matrix $\rho^{(i,i+1)}$,
(\ref{mat-red}), it is clear that when $\langle\sigma_{i}^{+}\sigma_{i+1}^{-}\rangle=0$
the reduced density matrix is diagonal and the bipartite entanglement
is zero. Moreover, from (\ref{no-diagonal}), when $\frac{\partial E_{j}}{\partial\alpha}=0$
we have that $C_{12}(\rho_{E_{j}})=0$.
So, the concurrence for the first two spins in the eigenstate $|\Psi_{E_{j}}\rangle$
is given by
\begin{equation}
C_{12}=\left|\frac{1}{J}\frac{\partial E_{j}}{\partial\alpha}\right|.\label{c12-energia-coe}
\end{equation}
We are interested in the relationship between localization and entanglement
for the whole one spin excitation spectrum. In particular, we want to show that
the bipartite entanglement of a given eigenstate, which is a local quantity,
between the impurity site and its first neighbor detects the type of
localization that the eigenstate possess.
First, we proceed to analyze the concurrence of the minimal eigenenergy state,
$C_{1,2}(\rho_{E_{1}})$
as a function of $\alpha$, the behaviour of this quantity is shown in
Figure \ref{concurrence01}. At first sight, it is clear that
$C_{1,2}(\rho_{E_{1}})$ is different from zero where
$L_{IPR}(\left|\Psi_{E_{1}}\right\rangle )$ (see Figure \ref{IPR1}) is
noticeable, and that $C_{1,2}(\rho_{E_{1}})\rightarrow 0$ when the eigenvalue
enters into the band and, consequently, the eigenstate becomes extended.
So, when the minimal eigenenergy state is
extended for $\alpha<\alpha_{c}$, the two first spins are disentangled
and $C_{1,2}(\rho_{E_{1}})=0$ consistently with $\frac{\partial E_{1}}{\partial\alpha}=0$
from (\ref{c12-energia-coe}). At the critical point $\alpha_{c}$,
the state starts to become localized increasing its degree of localization
when $\alpha\gg\alpha_{c}$; in the same way, the pair of spins starts
to became entangled and almost disentangled from the rest of the chain,
i.e. $C_{1,2}(\rho_{E_{1}})\sim1$.
Actually, the data shown in Figure \ref{concurrence01} also correspond to
$C_{1,2}(\rho_{E_{N}}(\alpha))$; this can be seen
by the following argument.
\begin{figure}
\caption{\label{concurrence01}
Concurrence $C_{1,2}(\rho_{E_{1}})$ between the first two spins for the minimal eigenenergy state, as a function of $\alpha$.}
\end{figure}
As in the case of the IPR, the concurrence $C_{12}$ is the same for
eigenstates whose eigenenergies are symmetric with respect to zero
($E_{j}$ and $E_{N-j+1}$). From Eqs.~(\ref{psi-simetricos-1}) and
(\ref{c12-energia-coe}),
it is straightforward to demonstrate this:
\begin{equation}
C_{12}(\rho_{E_{j}})=C_{12}(\rho_{E_{N-j+1}}),\quad j=1,\ldots,M,
\end{equation}
since
\begin{equation}
\frac{\partial E_{j}}{\partial\alpha}=-\frac{\partial E_{N-j+1}}{\partial\alpha}.\end{equation}
Continuing with the analysis of the entanglement between the first
two spins in the chain, we calculate the concurrence of the states
with energies inside the bands. Figure~\ref{concurrence03} shows
$C_{12}(\rho_{E_{j}})$ as a function of $\alpha$ for $j=2,\ldots,M$.
Note that the same scenario is observed for $C_{12}(\rho_{E_{j}})$
with $j=N-1,\ldots,M+1$.
\begin{figure}
\caption{\label{concurrence03} Concurrence $C_{12}(\rho_{E_{j}})$ as a function
of $\alpha$ for the eigenstates with energies inside the bands, $j=2,\ldots,M$.}
\end{figure}
From Figure~\ref{concurrence03}, calling $\alpha_{i}^{m}$
the abscissa where $C_{12}(\rho_{E_{i}}(\alpha))$ has its maximum, we observe
that
$\alpha_{M}^{m}<\cdots<\alpha_{2}^{m}$ and
$C_{12}(\rho_{E_{M}}(\alpha_{M}^{m}))>\cdots>C_{12}(\rho_{E_{2}}(\alpha_{2}^{m}))$.
This observation suggests that the ordering of the maxima of the concurrence
$C_{12}$
for the different eigenstates closely follows the ordering dictated by the
amount of localization of these eigenstates, {\em i.e.} only the most localized
states around the impurity site have a noticeable entanglement. We will use this
observation as a guide to formulate a transmission protocol in the next Section.
As we have shown, the concurrence and the derivative of the energy are related
in a simple way, see (\ref{c12-energia-coe}). On the other hand, it is well
known that the eigenvalues $E_i(\alpha)$ inside the band are rather insensitive
to changes in $\alpha$; indeed $\partial E_i(\alpha)/\partial \alpha \simeq 0$
almost everywhere, {\em except} near an avoided crossing with another eigenvalue.
In this sense, the behaviour shown by the concurrence in
Figure~\ref{concurrence03} reflects the presence of successive avoided
crossings between $E_1(\alpha)$ and $E_2(\alpha)$, between $E_2(\alpha)$ and
$E_3(\alpha)$, and so on. The abscissa of the peak in the concurrence of a
given eigenstate roughly corresponds to the point where the eigenvalue becomes
almost degenerate.
As a matter of fact, the scenario depicted in Figure~\ref{concurrence03} is not
only a manifestation of the avoided crossings in the spectrum; it can also be
considered a precursor of the resonance state that appears in the system
when $N\rightarrow \infty$. Recently, Ferr\'on {\em et al.}~\cite{ferron2009}
have shown how the behaviour of an entanglement measure can be used to
detect a resonance state. In a chain a resonance state appears in the limit
$N\rightarrow\infty$; however, the peaks in the concurrence obtained for
$N$ large, but finite, can be used to obtain
approximately the energy of the resonance state~\cite{ferron2009,pont2010}.
\section{Transmission of states and entanglement}
The effect of the localized states in the one magnon band is best
appreciated looking at the dynamical behaviour of the inverse participation
ratio. Figure~\ref{ipr-vs-t} shows the behaviour of $L_{IPR}(|\psi(t)\rangle)$,
where $|\psi(t)\rangle$ satisfies
\begin{equation}
i\frac{d|\psi(t)\rangle}{dt}=H|\psi(t)\rangle,\quad|\psi(t=0)\rangle=|1\rangle,
\end{equation}
for different values of $\alpha$. There are at least three well-defined
dynamical behaviours, each one associated with the number of
localized states in the system, see Figure~\ref{IPR1}. Figure~\ref{ipr-vs-t}
a) ($\alpha=0.1$) shows the behaviour of $L_{IPR}$ when there is
only one localized state at the center of the band; Figure~\ref{ipr-vs-t}
b) ($\alpha=0.4$) shows the dynamical behaviour of $L_{IPR}$ when
there are several localized states; panels c), d) and e) show
the dynamical behaviour near the transition zone and, finally, f)
shows the dynamical behaviour when the system has exponentially localized
states.
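The dynamical IPR just described can be reproduced with a short
exact-diagonalization sketch. This is our own illustration: the Hamiltonian is
an assumed toy one-excitation block with a single modified bond, and the
evolution is solved by spectral decomposition of the equation above.

```python
import numpy as np

def xx_chain(N, alpha, J=1.0):
    # toy one-excitation Hamiltonian: hopping J/2, first bond alpha*J/2
    # (an assumption for illustration, not the paper's exact model)
    H = np.zeros((N, N))
    for i in range(N - 1):
        H[i, i + 1] = H[i + 1, i] = (alpha if i == 0 else 1.0) * J / 2.0
    return H

def ipr_at(H, t):
    # Solve i d|psi>/dt = H|psi> with |psi(0)> = |1> by spectral
    # decomposition and return L_IPR(|psi(t)>).
    E, V = np.linalg.eigh(H)
    psi0 = np.zeros(H.shape[0])
    psi0[0] = 1.0
    psi_t = V @ (np.exp(-1j * E * t) * (V.T @ psi0))
    return 1.0 / np.sum(np.abs(psi_t) ** 4)

H = xx_chain(60, alpha=1.0)
```

At $t=0$ the excitation sits on a single site, so $L_{IPR}=1$; as it diffuses
over the chain the IPR grows, roughly counting the number of excited sites.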
\begin{figure}
\caption{\label{ipr-vs-t} Dynamical behaviour of $L_{IPR}(|\psi(t)\rangle)$
for different values of $\alpha$.}
\end{figure}
We do not want to analyze completely the rich dynamical behaviour
of $L_{IPR}$; however, from the point of view of the transmission
of quantum states, it is clear that the regime shown in panel b) seems
particularly useful. Panel b) shows that when the system
has several localized eigenstates, $|\psi(t)\rangle$ consists of
a superposition of a reduced number of elements of the one-excitation
states, {\em i.e.} the number of significant coefficients $\Psi_{i}$
is small compared with $N$. Besides, the refocusing of the state
when the {}``signal'' reaches the end of the chain (near $t\simeq100$)
leads to a smaller $L_{IPR}$ for $\alpha=0.4$ than for the other
values of $\alpha$; compare panel b) with a), c), d) and e). The
case shown in f) is rather different: here the overlap
between the initial state $|1\rangle$ and the localized state is
rather large, so $|\psi(t)\rangle$ remains localized even for very
long times. This dynamical regime has been proposed to store quantum
states \cite{apollaro2006} and, more generally, this kind of state
with isolated eigenvalues has been proposed as a possible scenario
for the practical implementation of a stable qubit \cite{michoel2009}.
We want to remark on some points: 1) for very small $\alpha$ there is a
{}``refocusing'' such that $L_{IPR}\sim1$ for $t\sim{\mathcal{O}}(10^{4})$
when $N=200$. 2) The initial excitation that is localized in the
impurities diffuses over the chain \cite{Dente2008}, so, for a given time $t$, the
number of sites on the chain that are excited is given, approximately,
by $L_{IPR}(t)$. The presence of localized states reduces this
number and the speed of propagation. For $0.3\lesssim\alpha<\alpha_{c}$
the refocusing of the signal appears at $t\sim N/2$; this time is
roughly independent of $\alpha$. For $\alpha\lesssim0.3$ the time
behaviour is more complicated, but the refocusing time scales approximately
as $1/\alpha$ for fixed $N$; we will return to this last point
later.
We will use regime b) identified in Figure~\ref{ipr-vs-t} to
implement the simplest transmission protocol, as suggested by Bose
\cite{bose2008,bose2003}, and the transmission of an entangled state.
But, as our results suggest, we will place a second impurity at the
end of the chain where the transmission should be detected. Locating
an impurity at the end of the chain introduces a set of localized
states around this site. The overall properties of the spectrum do
not change; however, the presence of localized states at the end of
the chain facilitates the transmission of states (or entanglement)
from one end of the chain to the other.
\begin{figure}
\caption{\label{con-and-fidel} Fidelity of the simplest transmission protocol
and concurrence $C_{A,N}(t)$ between the auxiliary qubit and the last spin of
the chain, as functions of time.}
\end{figure}
In the simplest transmission protocol (as described in \cite{bose2003})
the initial state $|1\rangle$ evolves following the Hamiltonian dynamics,
and the quality of the transmission is measured with the fidelity
\begin{equation}
{F}=\langle1|\rho_{out}(t)|1\rangle,
\end{equation}
where $\rho_{out}(t)$ is the state at the end of the chain where
the transmission is received, and $t$ is the {}``arrival'' time.
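Within the one-excitation sector this fidelity reduces, under the conventions
of \cite{bose2003}, to the transition probability
$|\langle N|e^{-iHt}|1\rangle|^{2}$. A minimal sketch of its computation (our
own illustration; the two-site Hamiltonian below is just a sanity check, not a
system from this paper):

```python
import numpy as np

def fidelity(H, t):
    # F(t) = |<N| exp(-i H t) |1>|^2: probability that the excitation
    # injected at site 1 is found at site N at the arrival time t
    # (assumed one-excitation reading of the protocol).
    E, V = np.linalg.eigh(H)
    f = np.sum(V[-1, :] * V[0, :] * np.exp(-1j * E * t))
    return abs(f) ** 2

# sanity check: two coupled spins with hopping 1/2 give full transfer
# of the excitation at t = pi
H2 = np.array([[0.0, 0.5], [0.5, 0.0]])
```

The spectral decomposition avoids computing the full propagator, so scanning
many times $t$ only costs one diagonalization.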
For the transmission of an entangled state the protocol is slightly
different; again we follow the protocol described in \cite{bose2003}.
Using an auxiliary qubit $A$ and the first spin of the chain, the
state
\begin{equation}
|\psi^{+}\rangle=\frac{1}{\sqrt{2}}(|\uparrow_{A}\downarrow_{1}\rangle+|\downarrow_{A}\uparrow_{1}\rangle)
\end{equation}
is prepared. After the preparation of the initial state the system
evolves according to its Hamiltonian, and the concurrence between
$A$ and the spin at the receiving end of the chain, $C_{A,N}(t)$,
is evaluated.
\begin{figure}
\caption{\label{landscape} Fidelity of transmission versus the strength of the
impurities and time.}
\end{figure}
Figure~\ref{con-and-fidel} shows the fidelity for the simplest transmission
protocol and the concurrence between the auxiliary qubit and the last
spin of the chain, both as functions of time. The strength of
the interaction between the first and second spins is the same
as that between the last spin and its neighbor, $\alpha J$, with $\alpha=0.4$,
and the chain has $N=200$ spins. The maximum values of the fidelity
and the concurrence are remarkably high. For our chain $C_{max}\simeq0.9$,
while for an unmodulated chain (with 200 spins) $C_{max}^{un}\simeq0.23$
\cite{bose2008,bose2003}. It is worth remarking that this large
value of the fidelity is not necessarily the largest possible when tuning
the value of $\alpha$.
As a matter of fact, that a chain with two symmetrical impurities
outperforms a homogeneous one as a transmission device has already been
reported in \cite{wojcik2005}. In that work, W\'ojcik {\em
et al.} analyzed the transmission of quantum states modulating the
coupling between the source and destination qubits. They showed that
using small values of the coupling it is possible to obtain a fidelity
of transmission arbitrarily close to one, with the transfer time scaling
linearly with the length $N$. Regrettably, the resulting transfer
time obtained in their work is quite large. Here we extend their
results, showing that the transfer of quantum states is feasible for
shorter transfer times with a very good fidelity {( $\gtrsim$ 0.9)}
while keeping the linear scaling between the transfer time and the
length of the chain. To achieve this transfer scenario we exploit
the information provided by the IPR: for large enough values of $\alpha$
there is a time of order $N/2$ such that $L_{IPR}\sim1$.
\begin{figure}
\caption{\label{transfer-all} Upper panel: maximum transmission fidelity
achievable for a chain of length $N$ and the corresponding optimum value of
$\alpha$. Lower panel: transmission time $t_{tr}$ {\em vs} $N$.}
\end{figure}
The identification of regimes where the transmission of quantum states
can be achieved with large fidelity and in (relatively) short times
is of great importance. The different dynamical regimes of the fidelity
in a chain with two impurities are rather difficult to analyze except
when $\alpha\rightarrow0$, see \cite{wojcik2005}. Figure~\ref{landscape}
shows the complex landscape of the fidelity of transmission versus
the strength of the impurities and time. Some of the features shown
by the fidelity in Figure~\ref{landscape} are best understood using
the IPR. In particular, for fixed $\alpha$, the first maximum of
the fidelity as a function of time coincides with a minimum of
$L_{IPR}$. This observation, once systematized, provides the dynamical
regime where the transmission can be achieved with large fidelity
and {\em always} for times $\sim N/2$.
Our results about the time behaviour of the IPR show that for
$t_{IPR}\sim{{\mathcal{O}}}(N/2)$
there is always a local minimum of the IPR (see Figure~\ref{ipr-vs-t}
b)). Since the state that is being transferred is well localized, it
is rather clear that we should look for times when the IPR attains
local minima to identify where a good transmission is possible.
The time $t_{IPR}$ is rather independent of $\alpha$. So, optimizing
the value of $\alpha$ in order to minimize the value of the minimum
of the IPR at times $\sim t_{IPR}$ allows us to find the best fidelity
achievable for times $t_{tr}\sim t_{IPR}$. We call $\alpha_{opt}(N)$
the value such that the fidelity $F(t_{tr})$ attains its maximum
for a given $N$ and for $t_{tr}\sim N/2$.
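The search for $\alpha_{opt}(N)$ can be sketched as a direct scan over
$\alpha$. The toy version below is our own illustration under assumed
conventions: both boundary bonds set to $\alpha J$, the fidelity taken as the
one-excitation transition probability, and the optimum searched on a coarse
grid of couplings and times.

```python
import numpy as np

def chain(N, alpha, J=1.0):
    # toy one-excitation block with both boundary bonds set to alpha*J
    # (assumed reading of the two-impurity model)
    H = np.zeros((N, N))
    for i in range(N - 1):
        Jij = alpha * J if i in (0, N - 2) else J
        H[i, i + 1] = H[i + 1, i] = Jij / 2.0
    return H

def max_fidelity(N, alpha, t_grid):
    # best transition probability |<N|exp(-iHt)|1>|^2 over a time window
    E, V = np.linalg.eigh(chain(N, alpha))
    amps = V[-1, :] * V[0, :]
    F = np.array([abs(np.sum(amps * np.exp(-1j * E * t))) ** 2
                  for t in t_grid])
    k = int(np.argmax(F))
    return F[k], t_grid[k]

N = 20
t_grid = np.linspace(0.0, 4.0 * N, 2000)
alphas = np.linspace(0.3, 1.0, 15)
best_F, alpha_opt = max((max_fidelity(N, a, t_grid)[0], a) for a in alphas)
F_hom = max_fidelity(N, 1.0, t_grid)[0]
```

Since the unmodulated value $\alpha=1$ belongs to the scanned grid, the
optimized boundary coupling does at least as well as the homogeneous chain
over the same time window.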
As Figure~\ref{con-and-fidel} shows, when the transfer of a given
state takes place, the fidelity presents a well-defined maximum at
time $t_{tr}\sim t_{IPR}\sim{\mathcal{O}}(N/2)$. The height of the
maximum, $F_{max}$, is a smooth function of $\alpha$ for $\alpha>0.3$,
and the same is valid for the transfer time $t_{tr}$.
Figure~\ref{transfer-all} summarizes our findings about the fidelity
of transmission following the recipe outlined in the two paragraphs
above. The upper panel shows the maximum transmission fidelity achievable
for a chain of length $N$ and the corresponding optimum value of
$\alpha$. As can be appreciated, $F\gtrsim0.8$ even for $N=400$.
The maximum value of the fidelity is also well above that predicted
for an unmodulated chain and above $2/3$, which is the highest fidelity
for classical transmission of a quantum state. The lower panel shows
the transmission time $t_{tr}$ {\em vs} $N$. The linear scaling
of $t_{tr}$ with $N$ is rather clear.
It is clear that for an isolated chain the availability of a regime
where $F\sim1$, regardless of the time required to achieve the transfer,
is interesting. However, in the presence of dynamical disorder or
an {}``environment'', achieving a moderate fidelity for the transfer
at shorter times seems a better option.
\section{Discussion}
\label{discu}
There is enough evidence to affirm that the entanglement of quantum
states whose eigenenergies present avoided crossings will show steep
changes near them (\cite{ferron2009}, \cite{sha-inf-avoi}, this
work). In our case a number of avoided crossings appear
between successive levels when $E_{1}$ comes into the band as $\alpha$
decreases from values larger than the critical one. The avoided crossing
between $E_{1}$ and $E_{2}$ is nearer to $\alpha_{c}$ than the
avoided crossing between $E_{2}$ and $E_{3}$, and so on. This is
the behaviour depicted in Figure~\ref{concurrence03}. The width
of the peak in $C_{12}$ of a given state (see Figure~\ref{concurrence03})
is related to the magnitude of the derivative of the eigenenergy
of the state: the peak is sharper for $C_{12}(\rho_{E_{2}})$ and
the successive peaks are more and more rounded.
As we have shown, locating impurities at both ends of the chain
allows the transfer of more entanglement than an unmodulated chain, {\em
if both impurities produce a number of localized eigenstates at each
end of the chain}. If an initially localized state is transmitted through the
chain, at a later time the state is composed of a superposition of many
propagating modes. The optimization of the couplings at the ends of the chain
allows
the coherent superposition of many of those modes at some time
$t_{tr}$, resulting in a large fidelity of transmission. The arrival time
$t_{tr}$ is {\em always} $\geq N/2$. It could be interesting to compare the
results
presented in this work with the findings of Plastina and Apollaro
\cite{plastina2007} in the case of two {\em diagonal} impurities.
While the IPR is an appealing quantity, since it is very easy to calculate,
the examples analyzed show that it is not possible to guess from it
how much entanglement a given state has. Nevertheless, it remains an
appealing quantity, since it can be useful
to identify dynamical regimes where the transmission of quantum states
can be achieved. The example presented above, in which the tuning
of the interaction between only a couple of spins improves the transmission,
is encouraging. Of course the protocols for perfect transmission
perform this task better, but at the cost of modulating all the interactions
between the spins.
There is not, to our knowledge, a simple quantity that allows one to relate,
in a direct way, localization and entanglement. This subject will
be the object of further investigation.
\ack We would like to acknowledge SECYT-UNC 05/B337,
CONICET PIP 112-200801-01741, and FONCyT Grant N$^{o}$ PICT-2005
33305 (Argentina) for partial financial support of this work. We thank
B. Franzoni and G. A. Alvarez for helpful discussions and a critical reading of the
manuscript. We thank Dr.~T.~Apollaro, who very recently brought to our attention
reference \cite{Banchi2010}, which deals with the subject addressed in this work.
\section*{References}
\end{document}
\begin{document}
\title{{\bf Mean-Payoff Automaton Expressions}}
\begin{abstract}
Quantitative languages are an extension of boolean languages that assign to
each word a real number.
Mean-payoff automata are finite automata with numerical weights on transitions
that assign to each infinite path
the long-run average of the transition weights.
When the mode of branching of the automaton is deterministic, nondeterministic,
or alternating, the corresponding class of quantitative languages is not
\emph{robust} as it is not closed under the pointwise operations of
max, min, sum, and numerical complement.
Nondeterministic and alternating mean-payoff automata are not \emph{decidable}
either, since the quantitative generalizations of
the universality and language-inclusion problems are undecidable.
We introduce a new class of quantitative languages, defined by
\emph{mean-payoff automaton expressions}, which is robust and decidable:
it is closed under the four pointwise operations, and we show that all
decision problems are decidable for this class.
Mean-payoff automaton expressions subsume deterministic mean-payoff automata,
and we show that they have expressive power incomparable to
nondeterministic and alternating mean-payoff automata.
We also present, for the first time, an algorithm to compute the distance
between two quantitative languages, which in our case are given
as mean-payoff automaton expressions.
\end{abstract}
\begin{comment}
\section{Introduction}
\noindent{\bf Some points.}
\begin{enumerate}
\item Deterministic mean-payoff not robust (not closed
under max, min etc.) and also not as expressive as nondeterministic
mean-payoff automata.
The quantitative decision problems are decidable for deterministic mean-payoff
automata.
\item Non-deterministic automata are more expressive than deterministic ones,
but they are not robust either (not closed under min and sum), and the quantitative
decision problems (such as universality, inclusion, equivalence) are
undecidable.
\item We introduce a new class of mean-payoff automata that is the min-max-sum
closure of deterministic automata: so the class is robust by definition.
We show they have incomparable expressive power as compared to nondeterministic
automata.
We show the decision questions and distance questions are all decidable.
\end{enumerate}
Technical contribution: exact characterization of the value set as a
pointwise min of a convex hull of simple cycles. Algorithmic characterization
of the computation of the pointwise min of a convex hull of a finite set
(need to say that this was not known in comp. geometry and is also of
independent interest).
Using the algorithmic characterization present the solution of decision problems
(quantitative emptiness, universality, inclusion and equivalence, and also
the distance problem).
\end{comment}
\section{Introduction}\label{sec:intro}
Quantitative languages $L$ are a natural generalization of boolean languages
that assign to every word $w$ a real number $L(w) \in {\mathbb R}$ instead of a
boolean value.
For instance, the value of a word (or behavior) can be interpreted as the
amount of some resource (e.g., memory consumption, or power consumption) needed
to produce it, or as a bound on the long-run average use of the resource.
Thus quantitative languages can specify properties related to
resource-constrained programs, and an implementation $L_A$ satisfies
(or refines) a specification $L_B$ if $L_{A}(w)\leq L_{B}(w)$ for
all words~$w$.
This notion of refinement is a \emph{quantitative generalization of language
inclusion}, and it can be used to check for example if for each behavior,
the long-run average response time
of the system lies below the specified average response requirement.
Hence it is crucial to identify some relevant class of
quantitative languages for which this question is decidable.
The other classical decision questions, such as emptiness, universality, and
language equivalence, also have natural quantitative extensions.
For example, the \emph{quantitative emptiness problem} asks,
given a quantitative language~$L$ and a threshold $\nu \in {\mathbb Q}$, whether there
exists some word $w$ such that $L(w) \geq \nu$, and the
\emph{quantitative universality problem} asks whether $L(w) \geq \nu$ for all
words $w$.
Note that universality is a special case of language inclusion
(where $L_A(w) = \nu$ is constant).
Weighted \emph{mean-payoff automata} present a nice framework to express
such quantitative properties~\cite{CDH08}.
A weighted mean-payoff automaton is a finite automaton with numerical
weights on transitions. The value of a word $w$ is the maximal value of all
runs over~$w$ (if the automaton is nondeterministic, then there may be many
runs over~$w$), and the value of a run $r$ is the long-run average of the
weights that appear along~$r$. A mean-payoff extension to alternating automata
has been studied in~\cite{CDH-FCT09}.
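For a single {\em deterministic}, total mean-payoff automaton, the supremum of
the values over all words equals the maximum mean weight of a cycle reachable
from the initial state, which Karp's classical algorithm computes in time
$O(|Q|\cdot|\delta|)$. The sketch below is our own illustration (the edge-list
encoding of the transition graph is an assumption of ours, not notation from
this paper):

```python
import math

def max_cycle_mean(n, edges, src=0):
    # Karp's algorithm: maximum mean weight of a cycle reachable from
    # src in a weighted graph given as (u, v, w) triples; returns -inf
    # if no cycle is reachable.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append(v)
    reach, stack = {src}, [src]
    while stack:
        for v in adj[stack.pop()]:
            if v not in reach:
                reach.add(v)
                stack.append(v)
    nodes = sorted(reach)
    idx = {v: i for i, v in enumerate(nodes)}
    m = len(nodes)
    NEG = -math.inf
    # D[k][v] = maximum weight of a k-edge walk from src to v
    D = [[NEG] * m for _ in range(m + 1)]
    D[0][idx[src]] = 0.0
    for k in range(1, m + 1):
        for u, v, w in edges:
            if u in reach and v in reach and D[k - 1][idx[u]] > NEG:
                D[k][idx[v]] = max(D[k][idx[v]], D[k - 1][idx[u]] + w)
    best = NEG
    for v in range(m):
        if D[m][v] > NEG:
            best = max(best,
                       min((D[m][v] - D[k][v]) / (m - k)
                           for k in range(m) if D[k][v] > NEG))
    return best
```

For instance, a self-loop of weight 1 together with a two-edge cycle of weights
0 and 4 gives maximum cycle mean 2; a cycle sitting on a state unreachable from
the initial one is ignored.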
Deterministic, nondeterministic and alternating mean-payoff automata are three
classes of mean-payoff automata with increasing expressive power.
However, none of these classes is closed under the four pointwise operations
of max, min (which generalize union and intersection, respectively), numerical
complement\footnote{The numerical complement of a quantitative language
$L$ is $-L$.},
and sum (see Table~\ref{tab:properties}).
Deterministic mean-payoff automata are not closed under max, min, and
sum~\cite{CDH09b}; nondeterministic mean-payoff automata are not closed under
min, sum and complement~\cite{CDH09b}; and alternating mean-payoff automata are
not closed under sum~\cite{CDH-FCT09}.
Hence none of the above classes is \emph{robust} with respect to
closure properties.
Moreover, while deterministic mean-payoff automata enjoy decidability of all
quantitative decision problems~\cite{CDH08},
the quantitative language-inclusion problem is undecidable
for nondeterministic and alternating mean-payoff automata~\cite{DDGRT10},
and thus also all decision problems are undecidable for alternating mean-payoff automata.
Hence although mean-payoff automata provide a nice framework to express
quantitative properties, there is no known class which is both robust and
decidable (see Table~\ref{tab:properties}).
In this paper, we introduce a new class of quantitative languages that are
defined by \emph{mean-payoff automaton expressions}.
An expression
is either a deterministic mean-payoff automaton, or it is the max, min, or sum
of two mean-payoff automaton expressions.
Since deterministic mean-payoff automata are closed under complement,
mean-payoff automaton expressions form a robust class that is closed under
max, min, sum and complement.
We show that
(a) all decision problems (quantitative emptiness, universality, inclusion, and
equivalence) are decidable for mean-payoff automaton expressions;
(b) mean-payoff automaton expressions are incomparable in
expressive power with both the nondeterministic and alternating mean-payoff
automata (i.e., there are quantitative languages expressible by mean-payoff
automaton expressions that are not expressible by alternating mean-payoff
automata, and there are quantitative languages expressible by nondeterministic
mean-payoff automata that are not expressible by mean-payoff automaton
expressions); and (c) the properties of cut-point languages (i.e., the sets of
words with value above a certain threshold) for deterministic automata
carry over to mean-payoff automaton expressions; in particular, the cut-point language
is $\omega$-regular when the threshold is isolated (i.e., some neighborhood
around the threshold contains the value of no word).
Moreover, mean-payoff automaton expressions can express all examples in the
literature of quantitative properties using mean-payoff
measure~\cite{AlurDMW09,CDH09b,CGHIKPS08}.
Along with the quantitative generalization of the classical decision problems,
we also consider the notion of \emph{distance} between two quantitative languages
$L_A$ and $L_B$, defined as $\sup_w \abs{L_A(w) - L_B(w)}$.
When quantitative language inclusion does not hold between an implementation $L_A$
and a specification $L_B$, the distance provides relevant information
to evaluate how close they are, as we may accept implementations
that overspend the resource but prefer the least expensive ones.
We present the first algorithm to compute the distance between two
quantitative languages: we show that the distance can be computed for
mean-payoff automaton expressions.
\begin{table}[!tb]
\begin{center}
\begin{tabular}{|l|*{8}{c|}}
\hline
& \multicolumn{4}{c}{Closure properties} & \multicolumn{4}{|c|}{Decision problems} \\
\cline{2-9}
& \,$\max$\, & \,$\min$\, & \,$\mathrm{sum}$\, & \,comp.\, & \,empt.\, & \,univ.\, & \,incl.\, & \,equiv.\, \\
\hline
\,Deterministic & $\times$ & $\times$ & $\times$ & \quad \checkmark\footnotemark\addtocounter{footnote}{-1} & \checkmark & \checkmark & \checkmark & \checkmark \\
\hline
\,Nondeterministic\, & \checkmark & $\times$ & $\times$ & \quad $\times$ & \checkmark & $\times$ & $\times$ & $\times$ \\
\hline
\,Alternating & \checkmark & \checkmark & $\times$ & \quad \checkmark\footnotemark & $\times$ & $\times$ & $\times$ & $\times$ \\
\hline
\,Expressions & \checkmark & \checkmark & \checkmark & \quad \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
\hline
\end{tabular}
\end{center}
\caption{Closure properties and decidability of the various classes of mean-payoff automata.
Mean-payoff automaton expressions enjoy fully positive closure and decidability properties.
\label{tab:properties}}
\end{table}
\footnotetext{Closure under complementation holds
because $\mathsf{LimInf}Avg$-automata and $\mathsf{LimSup}Avg$-automata are dual. It would not hold
if only $\mathsf{LimInf}Avg$-automata (or only $\mathsf{LimSup}Avg$-automata) were allowed.}
Our approach to show decidability of mean-payoff automaton expressions
relies on the characterization and algorithmic computation of the
value set $\set{L_E(w) \mid w \in \Sigma^\omega}$ of
an expression $E$, i.e., the set of all values of words according to $E$.
The value set can be viewed as an abstract representation of the
quantitative language $L_E$, and we show that all decision problems, cut-point
language computation, and distance computation can be solved efficiently once we
have this set.
First, we present a precise characterization of the value set for
quantitative languages defined by mean-payoff automaton expressions.
In particular, we show that it is not sufficient to construct the convex
hull $\mathsf{conv}(S_E)$ of the set of
the values of simple cycles in the mean-payoff automata occurring in $E$;
we essentially need to apply an operator $F_{\min}(\cdot)$ which, given a
set $Z \subseteq {\mathbb R}^n$, computes the set of points $y \in {\mathbb R}^n$ that can
be obtained by taking the pointwise minimum of each coordinate over the points
of some subset $X \subseteq Z$.
We show that we need to compute the set $V_E = F_{\min}(\mathsf{conv}(S_E))$ to
obtain the value set,
and that while this set is always convex, it is not always the case that
$F_{\min}(\mathsf{conv}(S_E)) = \mathsf{conv}(F_{\min}(S_E))$ (which would immediately give
an algorithm to compute $V_E$). This may appear counter-intuitive because
the equality holds in ${\mathbb R}^2$, but we show that it fails
in ${\mathbb R}^3$ (Example~\ref{examp2}).
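For a finite set $S$, the operator itself is easy to enumerate: the pointwise
minimum over any $X \subseteq S$ is already attained by a subset with at most
$n$ points (one witness per coordinate), so subsets of size $\leq n$ suffice.
A small sketch of this enumeration (our own illustration, not an algorithm
from this paper):

```python
import itertools
import numpy as np

def f_min(points):
    # F_min(S) for a finite S in R^n: the pointwise minima of all
    # nonempty subsets of S; subsets of size <= n suffice, since each
    # coordinate of the minimum is realized by one point of the subset.
    pts = [tuple(p) for p in points]
    n = len(pts[0]) if pts else 0
    out = set()
    for r in range(1, min(len(pts), n) + 1):
        for X in itertools.combinations(pts, r):
            out.add(tuple(np.min(np.array(X), axis=0)))
    return out
```

For $S=\{(0,1),(1,0)\}\subseteq{\mathbb R}^2$ this yields the two points
themselves together with their pointwise minimum $(0,0)$.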
Second, we provide algorithmic solutions to compute
$F_{\min}(\mathsf{conv}(S))$, for a finite set $S$.
We first present a constructive procedure that given $S$ constructs a finite
set of points $S'$ such that $\mathsf{conv}(S')=F_{\min}(\mathsf{conv}(S))$.
The explicit construction presents interesting properties about the set
$F_{\min}(\mathsf{conv}(S))$, however the procedure itself is computationally
expensive.
We then present an elegant and geometric construction of $F_{\min}(\mathsf{conv}(S))$
as a set of linear constraints.
The computation of $F_{\min}(\mathsf{conv}(S))$ is a new problem in computational
geometry and the solutions we present could be of independent interest.
Using the algorithm to compute $F_{\min}(\mathsf{conv}(S))$, we show that all
decision problems for mean-payoff automaton expressions are decidable.
\begin{comment}
Our technical contributions are as follows.
Given a quantitative language $L$ over alphabet $\Sigma$ given by a
mean-payoff automaton expressions, the value set
$V=\set{L(w) \mid w \in \Sigma^\omega}$ is the set of all possible values of
$L$.
The value set $V$ is a representative of the quantitative language $L$, and
given the value set $V$ for a quantitative language, all the decision problems
can be efficiently solved.
We show that all decision problems are decidable for mean-payoff automata
expressions based on an original approach that computes the value set $V$.
First, we present a precise characterization of the value set $V$ for
quantitative languages defined by mean-payoff automaton expressions, and
then present an effective computation of the value set.
The characterization is obtained in terms of a geometric object defined
as follows.
Given a set $S \subseteq {\mathbb R}^n$, let $F_{\min}(S)$ be the set of points
obtained by taking a finite subset $X$ of $S$ such that $|X| \leq n$, and
taking pointwise min of each coordinate of points in $X$.
We show that the set $V$ of values can be characterized as
$F_{\min}(\mathsf{conv}(S))$, where $S$ is a finite set of points (and
depends on cycles of the deterministic automata of the mean-payoff
automaton expression), and $\mathsf{conv}(S)$ is the convex hull of $S$.
Our second technical contribution consists of algorithmic solutions to compute
$F_{\min}(\mathsf{conv}(S))$, for a finite set $S$.
In two dimension ($S \subseteq {\mathbb R}^2$)
we show that $F_{\min}(\mathsf{conv}(S))=\mathsf{conv}(F_{\min}(S))$, and this immediately
gives an algorithm to compute $F_{\min}(\mathsf{conv}(S))$ in two dimension:
the set $F_{\min}(S)$ is finite and computable and then the convex hull
can be computed.
If this equality would hold in general (i.e., if in general $F_{\min}$ and
$\mathsf{conv}$ operations would be commutative), we would have an algorithm to
compute $F_{\min}(\mathsf{conv}(S))$.
However, we show (Example~\ref{examp2}) that the equality does not hold in
three dimension.
We first present a constructive procedure that given $S$ constructs a finite
set of points $S'$ such that $\mathsf{conv}(S')=F_{\min}(\mathsf{conv}(S))$.
The explicit construction presents interesting properties about the set
$F_{\min}(\mathsf{conv}(S))$, however, the procedure itself is computationally
expensive.
We then present an efficient and geometric construction of $F_{\min}(\mathsf{conv}(S))$.
The computation of $F_{\min}(\mathsf{conv}(S))$ is a new problem in computational
geometry and the solutions we present could be of independent interest.
Using the algorithm for computation of $F_{\min}(\mathsf{conv}(S))$, we show how all
the decision problems for mean-payoff automaton expressions are decidable.
The effective computation of the value set allows us not only to solve the
decision problems, but also enables us to answer other interesting questions
related to quantitative languages.
For example, consider the \emph{distance} problem between two quantitative
languages: given two languages $L_1$ and $L_2$, the distance between the
languages is the maximal difference $\abs{L_1(w) - L_2(w)}$ over all words~$w$, and the distance
is a measure how close are the languages (for example, how close is an
implementation as compared to a specification is given by the distance
between the implementation and the specification).
We show that from the value set computation we can compute the distance
of two mean-payoff automaton expressions.
\end{comment}
Due to lack of space, most proofs are given in the appendix.
\noindent{\em Related works.}
Quantitative languages were first studied over finite words in the
context of probabilistic automata~\cite{Rabin63}
and weighted automata~\cite{Wautomata}.
Several works have generalized
the theory of weighted automata to infinite words (see~\cite{DrosteK03,DrosteGastin07,LatticeAutomata07,Bojanczyk10}
and~\cite{HandbookWA} for a survey), but none of them has considered mean-payoff conditions.
Examples where the mean-payoff measure has been used to specify long-run behaviours of systems
can be found in game theory~\cite{EM79,ZwickP96} and in Markov decision processes~\cite{Alfaro98}.
Mean-payoff automata as a specification language were first investigated in~\cite{CDH08,CDH09b,CDH-FCT09},
and extended in~\cite{AlurDMW09} to construct a new class of (non-quantitative) languages of infinite words
(the multi-threshold mean-payoff languages), obtained by applying a query to a
mean-payoff language, and for which emptiness is decidable.
It turns out that a richer language of queries can be expressed
using mean-payoff automaton expressions (together with decidability of the
emptiness problem). A detailed comparison with the results of~\cite{AlurDMW09}
is given in Section~\ref{sec:decidable}. Moreover, we provide algorithmic solutions
to the quantitative language inclusion and equivalence problems and to distance computation which have no
counterpart for non-quantitative languages. Related notions of metrics have been
addressed in stochastic games~\cite{AlfaroMRS07} and probabilistic processes~\cite{DesharnaisGJP99,VidalTHCC05}.
\section{Mean-Payoff Automaton Expressions}\label{sec:mpa-expressions}
\noindent{\bf Quantitative languages.}
A \emph{quantitative language} $L$ over a finite alphabet $\Sigma$ is a function
$L: \Sigma^{\omega} \to {\mathbb R}$.
Given two quantitative languages $L_1$ and $L_2$ over $\Sigma$, we denote by $\max(L_1,L_2)$
(resp., $\min(L_1,L_2)$, $\mathrm{sum}(L_1,L_2)$ and $-L_1$)
the quantitative language that assigns $\max(L_1(w),L_2(w))$
(resp., $\min(L_1(w), L_2(w))$, $L_1(w) + L_2(w)$, and $-L_1(w)$) to each word
$w \in \Sigma^{\omega}$.
The quantitative language $-L$ is called the \emph{complement} of $L$.
The $\max$ and $\min$ operators for quantitative languages
correspond respectively to the least upper bound and
greatest lower bound for the pointwise order $\preceq$
such that $L_1 \preceq L_2$ if $L_1(w) \leq L_2(w)$ for all $w \in \Sigma^{\omega}$.
Thus, they generalize respectively the union and intersection operators
for classical boolean languages.
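The pointwise operations are direct to state in code. The following is a small illustrative sketch (the names \texttt{q\_max} etc.\ are ours, not from the paper), treating a quantitative language abstractly as a function from words to reals:

```python
# Pointwise operations on quantitative languages, viewed abstractly as
# functions from words to reals (function and variable names are ours).
def q_max(L1, L2): return lambda w: max(L1(w), L2(w))   # join / union
def q_min(L1, L2): return lambda w: min(L1(w), L2(w))   # meet / intersection
def q_sum(L1, L2): return lambda w: L1(w) + L2(w)
def q_neg(L):      return lambda w: -L(w)               # complement
```

For boolean languages embedded as $\{0,1\}$-valued functions, \texttt{q\_max} and \texttt{q\_min} specialize to union and intersection, matching the remark above.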
\noindent{\bf Weighted automata.}
A \emph{${\mathbb Q}$-weighted automaton} is a tuple $A=\tuple{Q,q_I,\Sigma,\delta,\mathsf{wt}}$, where
\begin{itemize}
\item $Q$ is a finite set of states, $q_I \in Q$ is the initial state, and $\Sigma$ is a finite alphabet;
\item $\delta \subseteq Q \times \Sigma \times Q$ is a finite set of labelled transitions.
We assume that $\delta$ is \emph{total}, i.e., for all
$q \in Q$ and $\sigma \in \Sigma$, there exists $q'$ such that
$(q,\sigma,q') \in \delta$;
\item $\mathsf{wt}: \delta \to {\mathbb Q}$ is a \emph{weight} function, where ${\mathbb Q}$ is the
set of rational numbers. We assume that rational numbers are encoded as pairs of
integers in binary.
\end{itemize}
We say that $A$ is \emph{deterministic} if for all
$q \in Q$ and $\sigma \in \Sigma$, there exists $(q,\sigma,q') \in \delta$
for exactly one $q' \in Q$. We sometimes call automata \emph{nondeterministic}
to emphasize that they are not necessarily deterministic.
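The tuple definition translates directly into code. Below is an illustrative Python sketch (class and method names are ours, not from the paper), storing weights as exact rationals as the definition requires:

```python
from fractions import Fraction

# Sketch of the tuple definition A = (Q, q_I, Sigma, delta, wt).
# Transitions are stored as a dict mapping (state, letter) to a list of
# (target, weight) pairs, with weights as exact rationals.
class WeightedAutomaton:
    def __init__(self, states, initial, alphabet, transitions):
        # transitions: iterable of (q, sigma, q', weight) quadruples
        self.states, self.initial, self.alphabet = states, initial, alphabet
        self.delta = {}
        for q, a, q2, w in transitions:
            self.delta.setdefault((q, a), []).append((q2, Fraction(w)))

    def is_total(self):
        # delta must offer at least one move for every (state, letter)
        return all((q, a) in self.delta
                   for q in self.states for a in self.alphabet)

    def is_deterministic(self):
        # exactly one successor for every (state, letter)
        return all(len(self.delta.get((q, a), [])) == 1
                   for q in self.states for a in self.alphabet)
```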
\noindent{\bf Words and runs.} A \emph{word} $w \in \Sigma^\omega$ is an
infinite sequence of letters from $\Sigma$.
A \emph{lasso-word} $w$ in $\Sigma^{\omega}$ is an ultimately periodic word of the form
$w_1 \cdot w_2^{\omega}$ where $w_1 \in \Sigma^*$ is a finite prefix, and $w_2 \in \Sigma^+$ is nonempty.
A \emph{run} of $A$ over an infinite word $w=\sigma_1 \sigma_2 \dots$
is an infinite sequence $r = q_0 \sigma_1 q_1 \sigma_2 \dots $
of states and letters such that
\begin{compressEnum}
\itCompress $q_0 = q_I$, and
\itCompress $(q_i,\sigma_{i+1},q_{i+1}) \in \delta$ for all $i \geq 0$.
\end{compressEnum}
We denote by $\mathsf{wt}(r) = v_0 v_1 \dots$ the sequence of weights that occur in~$r$
where $v_i = \mathsf{wt}(q_i,\sigma_{i+1},q_{i+1})$ for all $i \geq 0$.
\noindent{\bf Quantitative language of mean-payoff automata.}
The \emph{mean-payoff value} (or limit-average) of a sequence $\bar{v} = v_0 v_1 \dots$ of real numbers
is either
$$\mathsf{LimInf}Avg(\bar{v}) = \liminf\limits_{n \to \infty} \frac{1}{n} \cdot\sum_{i=0}^{n-1} v_i,
\text{ or}\quad \mathsf{LimSup}Avg(\bar{v}) = \limsup\limits_{n \to \infty} \frac{1}{n}\cdot \sum_{i=0}^{n-1} v_i.$$
Note that if we delete or insert finitely many values in an infinite sequence of numbers,
its limit-average does not change, and if the sequence is ultimately periodic, then
the $\mathsf{LimInf}Avg$ and $\mathsf{LimSup}Avg$ values coincide (and correspond to the mean of the weights
on the periodic part of the sequence).
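The observation about ultimately periodic sequences can be sketched as follows (names are ours); the finite prefix is discarded, and the limit average is just the mean of the repeated block:

```python
from fractions import Fraction

def lim_avg_lasso(prefix_weights, cycle_weights):
    """Mean-payoff of the ultimately periodic weight sequence
    prefix . cycle^omega: the finite prefix is irrelevant, and
    LimInfAvg = LimSupAvg = mean of the repeated block."""
    return sum(map(Fraction, cycle_weights), Fraction(0)) / len(cycle_weights)

def avg_prefix(weights, n):
    """Finite average (1/n) * sum_{i<n} v_i, approximating the limit."""
    return sum(map(Fraction, weights[:n]), Fraction(0)) / n
```

For instance, the partial averages of $5 \cdot (1,0,0)^{\omega}$ approach $1/3$ regardless of the initial weight $5$.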
For $\mathsf{Val} \in \{\mathsf{LimInf}Avg, \mathsf{LimSup}Avg\}$, the quantitative language $L_A$ of $A$ is defined by
$L_A(w) = \sup \{\mathsf{Val}(\mathsf{wt}(r)) \mid r \text{ is a run of } A \text{ over } w\}$
for all $w \in \Sigma^{\omega}$.
Accordingly, the automaton $A$ and its quantitative language $L_A$
are called $\mathsf{LimInf}Avg$ or $\mathsf{LimSup}Avg$.
Note that for deterministic automata, we have $L_A(w) = \mathsf{Val}(\mathsf{wt}(r))$ where
$r$ is the unique run of $A$ over $w$.
We omit the weight function $\mathsf{wt}$ when it is
clear from the context, and we write $\mathsf{LimAvg}$ when the value according to $\mathsf{LimInf}Avg$ and $\mathsf{LimSup}Avg$
coincide (e.g., for runs with a lasso shape).
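For a deterministic automaton, the unique run over a lasso word is itself eventually periodic, so $L_A(w)$ can be computed exactly. A sketch (the encoding and names are ours): run through the prefix, then cycle on $w_2$ until a (state, cycle-position) pair repeats, and average the weights along that loop.

```python
from fractions import Fraction

def lasso_value(delta, q0, w1, w2):
    """Value of a deterministic mean-payoff automaton on w1 . w2^omega.
    delta maps (state, letter) to a single (state, weight) pair.  On a
    lasso word LimInfAvg and LimSupAvg coincide, so we detect the first
    repeated (state, cycle-position) pair and average that loop."""
    q = q0
    for a in w1:                      # consume the finite prefix
        q, _ = delta[(q, a)]
    seen, trace = {}, []              # trace: weights seen while cycling on w2
    pos = 0
    while (q, pos) not in seen:
        seen[(q, pos)] = len(trace)
        q2, w = delta[(q, w2[pos])]
        trace.append(Fraction(w))
        q, pos = q2, (pos + 1) % len(w2)
    loop = trace[seen[(q, pos)]:]     # these weights repeat forever
    return sum(loop) / len(loop)
```

On a one-state automaton that weighs `a` with $1$ and `b` with $0$, the lasso word $(ab)^{\omega}$ gets value $1/2$, as expected.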
\noindent{\bf Decision problems and distance.}
We consider the following classical decision problems for quantitative languages,
assuming an effective presentation of quantitative languages (such as mean-payoff automata,
or automaton expressions defined later).
Given a quantitative language $L$ and a threshold $\nu \in {\mathbb Q}$,
the \emph{quantitative emptiness problem}
asks whether there exists a word $w \in \Sigma^{\omega}$
such that $L(w) \geq \nu$, and the \emph{quantitative universality problem}
asks whether $L(w) \geq \nu$ for all words $w \in \Sigma^{\omega}$.
Given two quantitative languages $L_1$ and $L_2$,
the \emph{quantitative language-inclusion problem} asks whether $L_{1}(w) \leq L_{2}(w)$
for all words $w \in \Sigma^{\omega}$,
and the \emph{quantitative language-equivalence problem} asks whether $L_{1}(w) = L_{2}(w)$
for all words $w \in \Sigma^{\omega}$. Note that universality is a special
case of language inclusion where $L_1$ is constant. Finally, the \emph{distance}
between $L_1$ and $L_2$ is $D_{\sup}(L_1,L_2) = \sup_{w \in \Sigma^{\omega}} \abs{L_1(w) - L_2(w)}$.
It measures how close an implementation $L_1$ is to a specification $L_2$.
It is known that quantitative emptiness is decidable for nondeterministic mean-payoff
automata~\cite{CDH08}, while decidability was open for alternating mean-payoff
automata, as well as for the quantitative language-inclusion problem of
nondeterministic mean-payoff automata. Recent undecidability results on
games with imperfect information and mean-payoff objective~\cite{DDGRT10} entail that
these problems are undecidable (see Theorem~\ref{thrm_undecidable}).
\noindent{\bf Robust quantitative languages.}
A class ${\cal Q}$ of quantitative languages is \emph{robust} if the class is
closed under $\max,\min,\mathrm{sum}$ and complementation operations. The closure
properties allow quantitative languages from a robust class to be described
compositionally.
While nondeterministic $\mathsf{LimInf}Avg$- and $\mathsf{LimSup}Avg$-automata are closed
under the $\max$ operation, they are not closed under $\min$ and complement~\cite{CDH09b}.
Alternating $\mathsf{LimInf}Avg$- and $\mathsf{LimSup}Avg$-automata\footnote{See~\cite{CDH-FCT09} for
the definition of alternating $\mathsf{LimInf}Avg$- and $\mathsf{LimSup}Avg$-automata that generalize
nondeterministic automata.}
are closed under $\max$ and $\min$, but are not
closed under complementation and $\mathrm{sum}$~\cite{CDH-FCT09}.
We define a \emph{robust} class of quantitative languages for mean-payoff automata which is closed
under $\max$, $\min$, $\mathrm{sum}$, and complement,
and which can express all natural examples of quantitative languages defined using the mean-payoff
measure~\cite{AlurDMW09,CDH09b,CGHIKPS08}.
\noindent{\bf Mean-payoff automaton expressions.}
A \emph{mean-payoff automaton expression} $E$ is obtained by the following grammar rule:
$$ E ::= A \mid \max(E,E) \mid \min(E,E) \mid \mathrm{sum}(E, E)$$
where $A$ is a \emph{deterministic} $\mathsf{LimInf}Avg$- or $\mathsf{LimSup}Avg$-automaton.
The quantitative language $L_E$ of a mean-payoff automaton expression $E$
is~$L_E = L_A$ if $E = A$ is a deterministic automaton,
and $L_E = {\rm op}( L_{E_1}, L_{E_2} )$ if $E = {\rm op}(E_1,E_2)$
for ${\rm op} \in \{\max,\min,\mathrm{sum}\}$.
By definition, the class of mean-payoff automaton expressions is
closed under $\max$, $\min$ and $\mathrm{sum}$.
Closure under complement follows from the fact that the complement
of $\max(E_1,E_2)$ is $\min(-E_1,-E_2)$, the complement of
$\min(E_1,E_2)$ is $\max(-E_1,-E_2)$, the complement of $\mathrm{sum}(E_1,E_2)$
is $\mathrm{sum}(-E_1,-E_2)$, and the complement of a deterministic
$\mathsf{LimInf}Avg$-automaton can be defined by the same automaton with
opposite weights and interpreted as a $\mathsf{LimSup}Avg$-automaton, and
vice versa,
since $- \limsup(v_0, v_1, \dots) = \liminf(-v_0, \penalty-1000 -v_1, \dots)$.
Note that arbitrary linear combinations of deterministic mean-payoff automaton expressions
(expressions such as $c_1 E_1 + c_2 E_2$ where $c_1,c_2 \in {\mathbb Q}$
are rational constants) can be obtained for free since scaling the weights of a
mean-payoff automaton by a positive factor $\abs{c}$ results in a
quantitative language scaled by the same factor.
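The complement transformation just described (push negation to the leaves, dualize $\max$/$\min$, negate weights, and swap the $\mathsf{LimInf}Avg$/$\mathsf{LimSup}Avg$ convention) can be sketched on expression trees as follows; the tuple encoding, the `'inf'`/`'sup'` tags, and all names are ours:

```python
# Illustrative expression trees for the grammar
#   E ::= A | max(E,E) | min(E,E) | sum(E,E)
# A leaf carries a deterministic automaton (here just its transition
# table) together with its value convention: 'inf' for LimInfAvg,
# 'sup' for LimSupAvg.
def complement(e):
    """Complement -E: min(-E1,-E2) for max, max(-E1,-E2) for min,
    sum(-E1,-E2) for sum; at a leaf, negate the weights and swap the
    LimInf/LimSup convention (since -limsup v = liminf -v)."""
    op = e[0]
    if op == 'leaf':
        _, wt, kind = e   # wt: dict (state, letter) -> (state, weight)
        neg = {k: (q, -w) for k, (q, w) in wt.items()}
        return ('leaf', neg, 'sup' if kind == 'inf' else 'inf')
    dual = {'max': 'min', 'min': 'max', 'sum': 'sum'}[op]
    return (dual, complement(e[1]), complement(e[2]))
```

Applying `complement` twice returns the original expression, as the involutive description above predicts.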
\section{The Vector Set of Mean-Payoff Automaton Expressions}\label{sec:vector-set}
Given a mean-payoff automaton expression $E$, let $A_1, \dots, A_n$ be the deterministic weighted
automata occurring in $E$.
The \emph{vector set} of $E$ is the set
$V_E = \{\tuple{L_{A_1}(w),\ldots,L_{A_n}(w)} \in {\mathbb R}^n \mid w \in \Sigma^{\omega}\}$
of tuples of values of words according to each automaton $A_i$.
In this section, we characterize the vector set of mean-payoff automaton expressions,
and in Section~\ref{sec:alg-cons} we give an algorithmic procedure to compute this set.
This will be useful to establish the decidability of all decision problems, and
to compute the distance between mean-payoff automaton expressions.
Given a vector $v \in {\mathbb R}^n$, we denote by $\norm{v} = \max_i \,\abs{v_i}$ the \emph{$\infty$-norm} of $v$.
The \emph{synchronized product} of $A_1, \dots, A_n$ such that $A_i = \tuple{Q_i,q^i_I,\Sigma,\delta_i,\mathsf{wt}_i}$
is the ${\mathbb Q}^n$-weighted automaton $A_E = A_1 \times \dots \times A_n = \tuple{Q_1 \times \dots \times Q_n,
(q^1_I, \dots, q^n_I), \Sigma,\delta,\mathsf{wt}}$ such that
$t = ((q_1, \dots, q_n), \sigma, (q'_1, \dots, q'_n)) \in \delta$ if $t_i:=(q_i, \sigma, q'_i) \in \delta_i$
for all \mbox{$1 \leq i \leq n$}, and $\mathsf{wt}(t) = (\mathsf{wt}_1(t_1), \dots, \mathsf{wt}_n(t_n))$.
In the sequel, we assume that all $A_i$'s are deterministic $\mathsf{LimInf}Avg$-automata (hence, $A_E$ is deterministic)
and that the underlying graph of the automaton $A_E$
has only one strongly connected component (scc).
We show later how to obtain the vector set without these restrictions.
For each (simple) cycle $\rho$ in $A_E$, let the \emph{vector value} of $\rho$ be the mean of the
tuples labelling the edges of $\rho$, denoted $\mathsf{Avg}(\rho)$. To each simple cycle $\rho$ in $A_E$ corresponds a (not necessarily simple)
cycle in each $A_i$, and the vector value $(v_1,\dots, v_n)$ of $\rho$ contains the mean value $v_i$ of $\rho$
in each $A_i$. We denote by $S_E$ the (finite) set of vector values of simple cycles in $A_E$.
Let $\mathsf{conv}(S_E)$ be the convex hull of $S_E$.
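For small products, $S_E$ can be computed by brute-force enumeration of simple cycles. A sketch (the edge-list encoding and names are ours): a DFS walks every simple cycle of $A_E$ and averages its weight vectors.

```python
from fractions import Fraction

def simple_cycle_values(edges):
    """edges: list of (u, v, vec) where vec is the weight tuple of the
    synchronized product A_1 x ... x A_n on that transition.  Returns
    the finite set S_E of vector values Avg(rho) over all simple
    cycles rho (brute-force DFS; fine for small products)."""
    values = set()
    def dfs(start, u, path_nodes, path_vecs):
        for (a, b, vec) in edges:
            if a != u:
                continue
            if b == start:            # closed a simple cycle: average it
                cyc = path_vecs + [vec]
                values.add(tuple(sum(Fraction(c[i]) for c in cyc) / len(cyc)
                                 for i in range(len(vec))))
            elif b not in path_nodes:
                dfs(start, b, path_nodes | {b}, path_vecs + [vec])
    nodes = {u for (u, _, _) in edges} | {v for (_, v, _) in edges}
    for s in nodes:
        dfs(s, s, {s}, [])
    return values
```

On the product of Example~\ref{ex1} (two self-loops with vector weights $(1,0)$ and $(0,1)$), this recovers $S_E = \{(1,0),(0,1)\}$.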
\begin{lemma}\label{lem:convex-hull-lasso-words}
Let $E$ be a mean-payoff automaton expression. The set
$\mathsf{conv}(S_E)$ is the closure of the set $\{L_E(w) \mid w \text{ is a lasso-word} \}$.
\end{lemma}
The vector set of $E$ contains more values than the convex hull $\mathsf{conv}(S_E)$,
as shown by the following example.
\begin{figure}
\caption{The vector set of $E = \max(A_1,A_2)$ is $F_{\min}(\mathsf{conv}(S_E))$.}
\label{fig:value-set}
\end{figure}
\begin{example}\label{ex1}
Consider the expression $E = \max(A_1,A_2)$ where $A_1$ and $A_2$ are
deterministic $\mathsf{LimInf}Avg$-automata (see \figurename~\ref{fig:value-set}).
The product $A_E = A_1 \times A_2$ has two simple cycles with respective vector values
$(1,0)$ (on letter `$a$') and $(0,1)$ (on letter `$b$'). The set $H=\mathsf{conv}(S_E)$
is the solid segment on \figurename~\ref{fig:value-set} and contains the vector
values of all lasso-words. However, other vector values can be obtained: consider
the word $w = a^{n_1} b^{n_2} a^{n_3} b^{n_4} \dots$ where $n_1 = 1$ and $n_{i+1} = (n_1 + \cdots + n_i)^2$
for all $i \geq 1$. It is easy to see that the value of $w$ according to $A_1$ is $0$
because the average number of $a$'s in the prefixes $a^{n_1} b^{n_2} \dots a^{n_i} b^{n_{i+1}}$
for $i$ odd is smaller than $\frac{n_1 + \cdots + n_i}{n_1 + \cdots + n_i + n_{i+1}} = \frac{1}{1+ n_1 + \cdots + n_i}$
which tends to $0$ when $i \to \infty$. Since $A_1$ is a $\mathsf{LimInf}Avg$-automaton,
the value of $w$ is $0$ in $A_1$, and by a symmetric argument the value of $w$ is also
$0$ in $A_2$. Therefore the vector $(0,0)$ is in the vector set of $E$.
Note that $z=(0,0)$ is the pointwise minimum of $x=(1,0)$ and $y=(0,1)$, i.e. $(0,0) = f_{\min}((1,0), (0,1))$
where $z = f_{\min}(x,y)$ if $z_1 = \min(x_1,y_1)$ and $z_2 = \min(x_2,y_2)$.
In fact, the vector set is the whole triangular region in \figurename~\ref{fig:value-set},
i.e. $V_E = \{ f_{\min}(x,y) \mid x,y \in \mathsf{conv}(S_E) \}$.
\end{example}
We generalize $f_{\min}$ to finite sets of points $P \subseteq {\mathbb R}^n$ in $n$ dimensions as follows: $f_{\min}(P) \in {\mathbb R}^n$
is the point $p = (p_1, p_2, \ldots, p_n)$ such that $p_i$ is the minimum $i^{\text{th}}$ coordinate of the points in $P$,
for $1 \leq i \leq n$. For arbitrary $S \subseteq {\mathbb R}^n$, define $F_{\min}(S) = \{f_{\min}(P) \mid P \text{ is a finite subset of } S\}$.
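Written out directly (a minimal sketch, with names ours; by Lemma~\ref{lemm1} below subsets of size at most $n$ suffice, but for a small finite $S$ full enumeration is simplest):

```python
from itertools import chain, combinations

def f_min(P):
    """Pointwise minimum of a non-empty finite collection of points in R^n."""
    return tuple(map(min, zip(*P)))

def F_min(S):
    """F_min(S) for a finite S: pointwise minima over all non-empty
    subsets of S (full enumeration; fine for small S)."""
    S = list(S)
    subsets = chain.from_iterable(
        combinations(S, k) for k in range(1, len(S) + 1))
    return {f_min(P) for P in subsets}
```

On the two cycle values of Example~\ref{ex1}, $F_{\min}(\{(1,0),(0,1)\})$ adds exactly the point $(0,0)$.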
As illustrated in Example~\ref{ex1}, the next lemma shows that the vector set $V_E$ is equal to $F_{\min}(\mathsf{conv}(S_E))$.
\begin{lemma}\label{lem:value-set}
Let $E$ be a mean-payoff automaton expression built from deterministic $\mathsf{LimInf}Avg$-automata, and such that
$A_E$ has only one strongly connected component.
Then, the vector set of $E$ is $V_E = F_{\min}(\mathsf{conv}(S_E))$.
\end{lemma}
For a general mean-payoff automaton expression~$E$ (with both deterministic $\mathsf{LimInf}Avg$- and $\mathsf{LimSup}Avg$ automata, and
with multi-scc underlying graph), we can use the result of Lemma~\ref{lem:value-set} as follows.
We replace each $\mathsf{LimSup}Avg$ automaton $A_i$ occurring in $E$ by the $\mathsf{LimInf}Avg$ automaton $A'_i$ obtained
from $A_i$ by replacing every weight $\mathsf{wt}$ by $-\mathsf{wt}$. The duality of $\liminf$ and $\limsup$ yields
$L_{A'_i} = - L_{A_i}$. In each strongly connected component $\mathcal{C}$ of the underlying graph of $A_E$, we compute
$V_{\mathcal{C}} = F_{\min}(\mathsf{conv}(S_{\mathcal{C}}))$ (where $S_{\mathcal{C}}$ is the set of vector values of the simple cycles in $\mathcal{C}$) and apply
the transformation $x_i \to -x_i$ on every coordinate~$i$ where the automaton
$A_i$ was originally a $\mathsf{LimSup}Avg$ automaton. The union of the sets $\bigcup_{\mathcal{C}} V_{\mathcal{C}}$ where $\mathcal{C}$ ranges over
the strongly connected components of $A_E$ gives the vector set of $E$.
\begin{theorem}\label{theo:value-set-general}
Let $E$ be a mean-payoff automaton expression built from deterministic $\mathsf{LimInf}Avg$-automata, and let
$\mathcal{Z}$ be the set of strongly connected components in $A_E$.
For a strongly connected component $\mathcal{C}$ let $S_\mathcal{C}$ denote the set of vector values of the simple cycles
in $\mathcal{C}$.
The vector set of $E$ is $V_E = \bigcup_{\mathcal{C} \in \mathcal{Z}} F_{\min}(\mathsf{conv}(S_\mathcal{C}))$.
\end{theorem}
\section{Computation of $F_{\min}(\mathsf{conv}(S))$ for a Finite Set $S$}\label{sec:alg-cons}
It follows from Theorem~\ref{theo:value-set-general} that the vector set $V_E$ of a mean-payoff automaton expression~$E$ can
be obtained as a union of sets $F_{\min}(\mathsf{conv}(S))$, where $S \subseteq {\mathbb R}^n$ is a finite set.
However, the set $\mathsf{conv}(S)$ being in general infinite, it is not immediate that
$F_{\min}(\mathsf{conv}(S))$ is computable.
In this section we consider the problem of computing $F_{\min}(\mathsf{conv}(S))$ for a finite set $S$.
In subsection~\ref{subsec:explicit} we present an explicit construction and
in subsection~\ref{subsec:geometric} we give a geometric construction of
the set as a set of linear constraints.
We first present some properties of the set $F_{\min}(\mathsf{conv}(S))$.
\begin{lemma}\label{lem:value-set-convex}
If $X$ is a convex set, then $F_{\min}(X)$ is convex.
\end{lemma}
By Lemma~\ref{lem:value-set-convex}, the set $F_{\min}(\mathsf{conv}(S))$ is convex, and since
$F_{\min}$ is a monotone operator and $S \subseteq \mathsf{conv}(S)$, we have $F_{\min}(S) \subseteq F_{\min}(\mathsf{conv}(S))$
and thus $\mathsf{conv}(F_{\min}(S)) \subseteq F_{\min}(\mathsf{conv}(S))$.
The following proposition states that in two dimensions the above sets coincide.
\begin{proposition}\label{prop:conv-min-2D}
Let $S \subseteq {\mathbb R}^2$ be a finite set.
Then, $\mathsf{conv}(F_{\min}(S)) = F_{\min}(\mathsf{conv}(S))$.
\end{proposition}
We show in the following example that in three dimensions the above proposition does not hold, i.e., we show
that $F_{\min}(\mathsf{conv}(S)) \neq \mathsf{conv}(F_{\min}(S))$ for some finite $S \subseteq {\mathbb R}^3$.
\begin{example}\label{examp2}
We show that in three dimensions there is a finite set $S$ such that
$F_{\min}(\mathsf{conv}(S)) \not\subseteq \mathsf{conv}(F_{\min}(S))$.
Let $S = \{q,r,s\}$ with $q=(0,1,0)$, $r=(-1,-1,1)$, and $s=(1,1,1)$.
Then $f_{\min}(r,s) = r$, $f_{\min}(q,r,s) = f_{\min}(q,r) = t = (-1,-1,0)$, and $f_{\min}(q,s) = q$.
Therefore $F_{\min}(S) = \{q,r,s,t\}$. Consider $p = (r+s)/2 = (0,0,1)$. We have $p \in \mathsf{conv}(S)$
and $f_{\min}(p,q) = (0,0,0)$. Hence $(0,0,0) \in F_{\min}(\mathsf{conv}(S))$.
We now show that $(0,0,0)$ does not belong to $\mathsf{conv}(F_{\min}(S))$.
Consider $u= \alpha_q \cdot q + \alpha_r \cdot r + \alpha_s \cdot s + \alpha_t \cdot t$
with $\alpha_q,\alpha_r,\alpha_s,\alpha_t \geq 0$ and $\alpha_q+\alpha_r+\alpha_s+\alpha_t=1$, so that $u \in \mathsf{conv}(F_{\min}(S))$.
Since the third coordinate is non-negative for $q,r,s$, and $t$, it follows that if $\alpha_r>0$
or $\alpha_s>0$, then the third coordinate of $u$ is positive.
If $\alpha_s=0$ and $\alpha_r=0$, then we have two cases:
(a) if $\alpha_t>0$, then the first coordinate of $u$ is negative; and
(b) if $\alpha_t=0$, then the second coordinate of $u$ is~1.
It follows that $(0,0,0)$ is not in $\mathsf{conv}(F_{\min}(S))$.
\qed
\end{example}
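The arithmetic in Example~\ref{examp2} can be checked mechanically. A small self-contained script (the definitions of $f_{\min}$ and $F_{\min}$ are restated inline; all names are ours):

```python
from fractions import Fraction
from itertools import chain, combinations

def f_min(points):
    # pointwise minimum of a finite collection of points
    return tuple(map(min, zip(*points)))

def F_min(S):
    # minima over all non-empty subsets of the finite set S
    S = list(S)
    return {f_min(P) for P in
            chain.from_iterable(combinations(S, k)
                                for k in range(1, len(S) + 1))}

q, r, s = (0, 1, 0), (-1, -1, 1), (1, 1, 1)
assert F_min({q, r, s}) == {q, r, s, (-1, -1, 0)}      # t = f_min(q, r)
p = tuple(Fraction(r[i] + s[i], 2) for i in range(3))  # p = (r+s)/2 = (0,0,1)
assert f_min([p, q]) == (0, 0, 0)   # hence (0,0,0) is in F_min(conv(S))
```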
\subsection{Explicit construction}\label{subsec:explicit}
Example~\ref{examp2} shows that in general $F_{\min}(\mathsf{conv}(S)) \not\subseteq \mathsf{conv}(F_{\min}(S))$.
In this section we present an explicit construction that, given a finite set $S$, constructs a
finite set $S'$ such that (a)~$S \subseteq S' \subseteq \mathsf{conv}(S)$ and (b)~$F_{\min}(\mathsf{conv}(S)) \subseteq \mathsf{conv}(F_{\min}(S'))$.
It would follow that $F_{\min}(\mathsf{conv}(S))=\mathsf{conv}(F_{\min}(S'))$.
Since the convex hull of a finite set is computable and $F_{\min}(S')$ is finite,
this would give us an algorithm to compute $F_{\min}(\mathsf{conv}(S))$.
For simplicity, for the rest of the section we write $F$ for $F_{\min}$ and
$f$ for $f_{\min}$ (i.e., we drop the $\min$ from subscript).
Recall that $F(S)=\set{f(P) \mid P \text{ finite subset of } S}$
and let $F_i(S)=\set{f(P) \mid P \text{ finite subset of $S$ and } |P| \leq i }$.
We consider $S \subseteq {\mathbb R}^n$.
\begin{lemma}\label{lemm1}
Let $S \subseteq {\mathbb R}^n$. Then, $F(S)=F_n(S)$ and $F_n(S) \subseteq F_2^{n-1}(S)$.
\end{lemma}
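The second inclusion rests on the observation that the pointwise minimum of $k$ points can be folded out of binary minima, e.g. $f(p_1,p_2,p_3) = f(f(p_1,p_2),p_3)$. A quick sanity check (names ours):

```python
from functools import reduce
from random import seed, randint

def f(points):
    """Pointwise minimum in R^n of a finite collection of points."""
    return tuple(map(min, zip(*points)))

def f2(x, y):
    return f([x, y])

# f over k points equals k-1 nested applications of the binary f2;
# this folding is exactly what yields F_n(S) subseteq F_2^{n-1}(S).
seed(0)
pts = [tuple(randint(-5, 5) for _ in range(3)) for _ in range(4)]
assert f(pts) == reduce(f2, pts)
```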
\begin{comment}
\begin{proof}
By definition, we have $F_n(S) \subseteq F(S)$.
For a point $x=f(P)$ for a finite subset $P \subseteq S$, choose one point each that
contributes to a coordinate and obtain a finite set $P' \subseteq P$ of at most $n$
points such that $x=f(P')$. This shows that $F(S) \subseteq F_n(S)$.
For the second part, let $P=\set{p_1, p_2,\ldots,p_k}$ with $k \leq n$, and let $x=f(P)$.
Let $x_1=f(p_1,p_2)$, and for $i > 1$ we define $x_i=f(x_{i-1},p_{i+1})$.
We have $x=x_{k-1}$ (e.g., $f(p_1,p_2,p_3)= f (f(p_1,p_2), p_3)$).
Thus we have obtained $x$ by applying $f$ on two points at most $n-1$ times, and
it follows that $F_n(S) \subseteq F_2^{n-1}(S)$.
\qed
\end{proof}
\end{comment}
\noindent{\bf Iteration of a construction $\gamma$.}
We will present a construction $\gamma$ that takes as input a finite set
$Y$ of points and produces an output $\gamma(Y)$ satisfying the
following two conditions:
\begin{enumerate}
\item {\bf (Condition C1).} $\gamma(Y)$ is finite and a subset of $\mathsf{conv}(Y)$.
\item {\bf (Condition C2).} $F_2(\mathsf{conv}(Y)) \subseteq \mathsf{conv}(F(\gamma(Y)))$.
\end{enumerate}
Before presenting the construction $\gamma$ we first show how to iterate it
to obtain the following result: given a finite set of points $X$
we construct a finite set of points $X'$ such that
$F(\mathsf{conv}(X)) = \mathsf{conv}(F(X'))$.
\noindent{\em Iterating $\gamma$.}
Consider a finite set of points $X$,
and let $X_0=X$ and $X_1=\gamma(X_0)$. Then we have
\[
\mathsf{conv}(X_1) \subseteq \mathsf{conv}(\mathsf{conv}(X_0)) \quad
\text{(since by Condition {\bf C1} we have }
X_1 \subseteq \mathsf{conv}(X_0))
\]
and hence $\mathsf{conv}(X_1) \subseteq \mathsf{conv}(X_0)$; and
\[
F_2(\mathsf{conv}(X_0)) \subseteq \mathsf{conv}(F(X_1))
\qquad
\text{(by Condition {\bf C2})}
\]
Iterating, let $X_i=\gamma(X_{i-1})$ for $i \geq 2$; as above
we have
\[
(1)\ \mathsf{conv}(X_i) \subseteq \mathsf{conv}(X_0) \qquad
(2)\ F_2^i(\mathsf{conv}(X_0)) \subseteq \mathsf{conv}(F(X_i))
\]
Thus for $X_{n-1}$ we have
\[
(1)\ \mathsf{conv}(X_{n-1}) \subseteq \mathsf{conv}(X_0) \qquad
(2)\ F_2^{n-1}(\mathsf{conv}(X_0)) \subseteq \mathsf{conv}(F(X_{n-1}))
\]
By (2) above and Lemma~\ref{lemm1}, we obtain
\[
(A)\ F(\mathsf{conv}(X_0)) = F_n(\mathsf{conv}(X_0)) \subseteq F_2^{n-1}(\mathsf{conv}(X_0))
\subseteq \mathsf{conv}(F(X_{n-1}))
\]
By (1) above we have $\mathsf{conv}(X_{n-1}) \subseteq \mathsf{conv}(X_0)$ and hence
$F(\mathsf{conv}(X_{n-1})) \subseteq F(\mathsf{conv}(X_0))$.
Thus we have
\[
\mathsf{conv}(F(\mathsf{conv}(X_{n-1}))) \subseteq \mathsf{conv}(F(\mathsf{conv}(X_0))) = F(\mathsf{conv}(X_0))
\]
where the last equality follows since by Lemma~\ref{lem:value-set-convex}
the set $F(\mathsf{conv}(X_0))$ is convex.
Since $X_{n-1} \subseteq \mathsf{conv}(X_{n-1})$ we have
\[
(B)\
\mathsf{conv}(F(X_{n-1}))
\subseteq \mathsf{conv}(F(\mathsf{conv}(X_{n-1}))) \subseteq F(\mathsf{conv}(X_0))
\]
Thus by (A) and (B) above we have
$F(\mathsf{conv}(X_0)) = \mathsf{conv}(F(X_{n-1}))$.
Thus given the finite set $X$, we have the finite set $X_{n-1}$ such
that (a)~$X \subseteq X_{n-1} \subseteq \mathsf{conv}(X)$ and (b)~$F(\mathsf{conv}(X)) = \mathsf{conv}(F(X_{n-1}))$.
We now present the construction $\gamma$ to complete the result.
We now present the construction $\gamma$ to complete the result.
\noindent{\bf The construction $\gamma$.}
Given a finite set $Y$ of points, the set $Y'=\gamma(Y)$ is obtained by adding points
to $Y$ in the following way:
\begin{itemize}
\item For all $1 \leq k \leq n$, we consider all $k$-dimensional coordinate
planes $\Pi$ supported by a point in $Y$;
\item Intersect each coordinate plane $\Pi$ with $\mathsf{conv}(Y)$ and the result is
a convex polytope $Y_\Pi$;
\item We add the corners (or extreme points) of each polytope $Y_\Pi$ to $Y$.
\end{itemize}
The proof that the above construction satisfies conditions {\bf C1} and {\bf C2} is
given in the appendix, and thus we have the following result.
\begin{theorem}\label{theo:explicit-construction}
Given a finite set $S \subseteq {\mathbb R}^n$ such that $|S|=m$,
the following assertion holds:
a finite set $S'$ with $|S'| \leq m^{2^n} \cdot 2^{n^2+n}$ can be
computed in $m^{O(n \cdot 2^n)} \cdot 2^{O(n^3)}$ time such that
(a)~$S \subseteq S' \subseteq \mathsf{conv}(S)$ and
(b)~$F_{\min}(\mathsf{conv}(S))= \mathsf{conv}(F_{\min}(S'))$.
\end{theorem}
\subsection{Linear constraint construction}\label{subsec:geometric}
In the previous section we presented an explicit construction of a finite
set of points whose convex hull gives us $F_{\min}(\mathsf{conv}(S))$.
The explicit construction illuminates properties of the set $F_{\min}(\mathsf{conv}(S))$;
however, it is computationally inefficient.
In this subsection we present an efficient geometric construction for the
computation of $F_{\min}(\mathsf{conv}(S))$ for a finite set $S$.
Instead of constructing a finite set $S'\subseteq \mathsf{conv}(S)$ such that
$\mathsf{conv}(S') =F_{\min}(\mathsf{conv}(S))$, we represent $F_{\min}(\mathsf{conv}(S))$ as a finite set of
linear constraints.
Consider the \emph{positive orthant} anchored at the origin
in ${\mathbb R}^n$, that is, the set of points with non-negative
coordinates:
${\mathbb R}_+^n = \{ (z_1, z_2, \ldots, z_n) \mid z_i \geq 0, \forall i \}$.
Similarly, the \emph{negative orthant} is the set of points with
non-positive coordinates, denoted as ${\mathbb R}_-^n = - {\mathbb R}_+^n$.
Using vector addition, we write $y + {\mathbb R}_+^n$ for the positive
orthant anchored at $y$.
Similarly, we write $x + {\mathbb R}_-^n = x - {\mathbb R}_+^n$
for the negative orthant anchored at $x$.
The positive and negative orthants satisfy the following simple
\emph{duality relation}:
$x \in y + {\mathbb R}_+^n$ iff $y \in x - {\mathbb R}_+^n$.
Note that ${\mathbb R}_+^n$ is an $n$-dimensional convex polyhedron.
For each $1 \leq j \leq n$, we consider the $(n-1)$-dimensional face $Q_j$ spanned
by the coordinate axes except the $j^{\text{th}}$ one, that is,
$Q_j = \{ (z_1, z_2, \ldots, z_n) \in {\mathbb R}_+^n \mid z_j = 0 \}$.
We say that $y + {\mathbb R}_+^n$ is \emph{supported} by $X$
if $(y + Q_j) {\; \cap \;} X \neq \emptyset$ for every $1 \leq j \leq n$.
Assuming $y + {\mathbb R}_+^n$ is supported by $X$, we can construct a
set $Y \subseteq X$ by collecting one point per
$(n-1)$-dimensional face of the orthant and get $y = f(Y)$.
It is also allowed that two faces contribute the same point to $Y$.
Similarly, if $y = f(Y)$ for a subset $Y \subseteq X$,
then the positive orthant anchored at $y$ is supported by $X$.
Hence, we get the following lemma.
\begin{lemma}[Orthant Lemma]
$y \in F_{\min}(X)$ iff $y + {\mathbb R}_+^n$ is supported by $X$.
\end{lemma}
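The Orthant Lemma is easy to test computationally. The sketch below (names and encoding are ours) approximates $\mathsf{conv}(S)$ by rational convex combinations with a fixed common denominator; an exact test would instead solve a small linear program per face.

```python
from fractions import Fraction

def conv_samples(S, N):
    """Rational convex combinations of the points of S with common
    denominator N: a finite approximation of conv(S)."""
    def combos(k, left):
        if k == len(S) - 1:
            yield (Fraction(left, N),)
            return
        for c in range(left + 1):
            for rest in combos(k + 1, left - c):
                yield (Fraction(c, N),) + rest
    for lam in combos(0, N):
        yield tuple(sum(l * Fraction(p[i]) for l, p in zip(lam, S))
                    for i in range(len(S[0])))

def supported(y, X):
    """Orthant Lemma test over a finite X: y + R_+^n is supported by X
    iff for every face j some x in X has x_j = y_j and x_i >= y_i for
    all i."""
    n = len(y)
    return all(any(x[j] == y[j] and all(x[i] >= y[i] for i in range(n))
                   for x in X)
               for j in range(n))
```

On the sampled segment $\mathsf{conv}\{(1,0),(0,1)\}$ of Example~\ref{ex1}, `supported((0, 0), ...)` holds, recovering the membership $(0,0) \in F_{\min}(\mathsf{conv}(S_E))$ observed earlier.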
\paragraph{Construction.}
We use the Orthant Lemma to construct $F_{\min}(X)$.
We begin by describing the set of points $y$ for which the $j^{\text{th}}$
face of the positive orthant anchored at $y$
has a non-empty intersection with $X$.
Define $F_j = X - Q_j$, the set of points of the form $x - z$,
where $x \in X$ and $z \in Q_j$.
\begin{lemma}[Face Lemma]
$(y + Q_j) {\; \cap \;} X \neq \emptyset$ iff $y \in F_j$.
\end{lemma}
\begin{proof}
Let $x \in X$ be a point in the intersection, that is,
$x \in y + Q_j$.
Using the duality relation for the $(n-1)$-dimensional orthant,
we get $y \in x - Q_j$.
By definition, $x - Q_j$ is a subset of $X - Q_j$,
and hence $y \in F_j$.
\qed
\end{proof}
It is now easy to describe the set defined in our problem statement.
\begin{lemma}[Characterization]
$F_{\min}(X) = \bigcap_{j=1}^n F_j$.
\end{lemma}
\begin{proof}
By the Orthant Lemma, $y \in F_{\min}(X)$ iff $y + {\mathbb R}_+^n$ is supported by $X$.
Equivalently, $(y + Q_j) {\; \cap \;} X \neq \emptyset$
for all $1 \leq j \leq n$.
By the Face Lemma, this is equivalent to $y$ belonging to the common
intersection of the sets
$F_j = X - Q_j$.
\qed
\end{proof}
\begin{comment}
\paragraph{Convex hull case.}
We are particularly interested in the case when $X$ is
a \emph{convex polytope}, that is, the convex hull of a finite set of points.
Recall that a \emph{convex polyhedron} is a slightly more general concept,
namely the common intersection of a finite set of closed half-spaces.
It is a convex polytope iff it is bounded.
\begin{lemma}[Convex Polytope Corollary]
If $X$ is a convex polytope then so is $F_{\min}(X)$.
\end{lemma}
\begin{proof}
Let $S \subseteq {\mathbb R}^n$ be finite such that $X = \mathsf{conv}(S)$.
Since both $X$ and $Q_j$ are convex polyhedra,
their Minkowski difference, $F_j = X - Q_j$,
is also a convex polyhedron.
The common intersection of a finite number of convex polyhedra
is again a convex polyhedron.
This implies that $F_{\min}(X) = \bigcap_{j=1}^n F_j$
is a convex polyhedron.
It is bounded because no point $y$ sufficiently far from the origin
can define a positive orthant supported by $X$.
This implies that $F_{\min}(X)$ is a convex polytope.
\qed
\end{proof}
\end{comment}
\noindent{\bf Algorithm for computation of $F_{\min}(\mathsf{conv}(S))$.}
Following the construction, we get an algorithm that computes
$F_{\min}(\mathsf{conv}(S))$ for a finite set $S$ of points in ${\mathbb R}^n$.
Let $|S|=m$.
We first represent $X=\mathsf{conv}(S)$ as an intersection of half-spaces:
we require at most $m^n$ half-spaces (linear constraints).
It follows that $F_j=X-Q_j$ can be expressed as $m^n$
linear constraints, and hence $F_{\min}(X)=\bigcap_{j=1}^n F_j$ can be expressed
as $n\cdot m^n$ linear constraints.
This gives us the following result.
\begin{theorem}
Given a finite set $S$ of $m$ points in ${\mathbb R}^n$, we can construct in
$O(n \cdot m^n)$ time $n \cdot m^n$ linear constraints that represent
$F_{\min}(\mathsf{conv}(S))$.
\end{theorem}
\section{Mean-Payoff Automaton Expressions are Decidable}\label{sec:decidable}
Several problems on quantitative languages can be solved for the class
of mean-payoff automaton expressions using the vector set.
The decision problems of quantitative emptiness and universality, and of quantitative language inclusion and equivalence,
are all decidable, as are questions related to cut-point languages and the computation of the
distance between mean-payoff languages.
\paragraph{Decision problems and distance.}
From the vector set $V_E = \{\tuple{L_{A_1}(w),\ldots,L_{A_n}(w)} \in {\mathbb R}^n \mid w \in \Sigma^{\omega}\}$,
we can compute the \emph{value set} $L_E(\Sigma^{\omega}) = \{L_E(w) \mid w \in \Sigma^{\omega}\}$ of values
of words according to the quantitative language of $E$ as follows. The set $L_E(\Sigma^{\omega})$ is obtained by successive application
of \emph{min-}, \emph{max-} and \emph{sum-projections} $p^{\min}_{ij}, p^{\max}_{ij}, p^{{\mathrm{sum}}}_{ij}: {\mathbb R}^k \to {\mathbb R}^{k-1}$ where $i < j \leq k$,
defined by
$$\begin{array}{ll}
p^{\min}_{ij}((x_1,\dots, x_k)) = & (x_1, \dots, x_{i-1}, \min(x_i,x_j), x_{i+1}, \dots, x_{j-1}, x_{j+1}, \dots x_k), \\[+3pt]
p^{{\mathrm{sum}}}_{ij}((x_1,\dots, x_k)) = & (x_1, \dots, x_{i-1}, \quad\! x_i + x_j \quad\!, x_{i+1}, \dots, x_{j-1}, x_{j+1}, \dots x_k), \\
\end{array}
$$
and analogously for $p^{\max}_{ij}$. For example, $p^{\max}_{12}(p^{\min}_{23}(V_E))$ gives the set $L_E(\Sigma^{\omega})$ of word values of the mean-payoff automaton expression $E = \max(A_1,\min(A_2,A_3))$.
Assuming a representation of the polytopes of $V_E$ as a boolean combination $\varphi_E$ of linear constraints,
the projection $p^{\min}_{ij}(V_E)$ is represented by the formula
$$\psi = (\exists x_j: \varphi_E \land x_i \leq x_j) \lor (\exists x_i: \varphi_E \land x_j \leq x_i)[x_j \gets x_i]$$
where $[x \gets e]$ is a substitution that replaces every occurrence of $x$ by the expression $e$.
Since linear constraints over the reals admit effective elimination of existential quantification, the formula $\psi$ can be transformed
into an equivalent boolean combination of linear constraints without existential quantification.
The same applies to max- and sum-projections.
Successive applications of min-, max- and sum-projections (following the structure of the
mean-payoff automaton expression $E$)
give the value set $L_E(\Sigma^{\omega}) \subseteq {\mathbb R}$ as a boolean combination
of linear constraints; hence it is a union of intervals. From this set, it is easy to decide the
quantitative emptiness problem and the quantitative universality problem: there exists a word $w \in \Sigma^{\omega}$ such that $L_E(w) \geq \nu$
if and only if $L_E(\Sigma^{\omega}) \cap \, [\nu, +\infty [ \,\neq \emptyset$, and $L_E(w) \geq \nu$ for all words $w \in \Sigma^{\omega}$
if and only if $L_E(\Sigma^{\omega}) \,\cap \, ] -\infty, \nu [ \,= \emptyset$.
In the same way, we can decide the quantitative language inclusion problem ``is $L_E(w) \leq L_F(w)$ for all words $w \in \Sigma^{\omega}$?''
by a reduction to the universality problem for the expression $F-E$ and threshold $0$
since mean-payoff automaton expressions are closed under sum and complement.
The quantitative language equivalence problem is then obviously also decidable.
Finally, the distance between the quantitative languages of $E$ and $F$
can be computed as the largest number (in absolute value) in the value set of $F-E$.
As a corollary, this distance is always a rational number.
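The projection pipeline can be illustrated on a finite set of vectors. In the actual construction the projections are applied to a boolean combination of linear constraints via quantifier elimination; the vectors below are hypothetical values chosen only for illustration.

```python
def project(vectors, i, j, op):
    """Replace coordinates i and j (0-based, i < j) by op(x_i, x_j), dropping x_j."""
    out = set()
    for v in vectors:
        w = list(v)
        w[i] = op(v[i], v[j])
        del w[j]
        out.add(tuple(w))
    return out

def p_min(V, i, j): return project(V, i, j, min)
def p_max(V, i, j): return project(V, i, j, max)
def p_sum(V, i, j): return project(V, i, j, lambda a, b: a + b)

# Value set of E = max(A_1, min(A_2, A_3)) from a hypothetical vector set V_E:
V_E = {(1.0, 3.0, 2.0), (4.0, 0.0, 5.0)}
values = p_max(p_min(V_E, 1, 2), 0, 1)
assert values == {(2.0,), (4.0,)}

# Quantitative emptiness for threshold nu: is some value >= nu?
assert any(x[0] >= 3.0 for x in values)
```

On polytopes the same three projections are realized by quantifier elimination over linear constraints, so the resulting value set is a finite union of intervals rather than a finite set of points.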
\paragraph{Comparison with~\cite{AlurDMW09}.}
The work in~\cite{AlurDMW09} considers deterministic mean-payoff automata with multiple payoffs.
The weight function in such an automaton is of the form $\mathsf{wt}: \delta \to {\mathbb Q}^d$. The value of a finite
sequence $(v_i)_{1 \leq i \leq n}$ (where $v_i \in {\mathbb Q}^d$) is the mean of the tuples $v_i$, that is, the $d$-dimensional vector $\mathsf{Avg}_n = \frac{1}{n} \cdot \sum_{i=1}^{n} v_i$.
The ``value'' associated to an infinite run (and thus also to the corresponding word, since the automaton is deterministic) is the
set $Acc \subseteq {\mathbb R}^d$ of accumulation points of the sequence $(\mathsf{Avg}_n)_{n \geq 1}$.
In~\cite{AlurDMW09}, a query language on the set of accumulation points is used to define \emph{multi-threshold mean-payoff languages}.
For $1 \leq i \leq d$, let $p_i: {\mathbb R}^d \to {\mathbb R}$ be the usual projection onto the $i^{\text{th}}$ coordinate.
A query is a boolean combination of atomic threshold conditions of the form $\min(p_i(Acc)) \sim \nu$ or $\max(p_i(Acc)) \sim \nu$ where $\sim \in \{<,\leq,\geq,>\}$
and $\nu \in {\mathbb Q}$. A word is accepted if the set of accumulation points of its (unique) run satisfies the query.
Emptiness is decidable for such multi-threshold mean-payoff languages, by an argument based on the computation
of the convex hull of the vector values of the simple cycles in the automaton~\cite{AlurDMW09} (see also Lemma~\ref{lem:convex-hull-lasso-words}).
We have shown that this convex hull $\mathsf{conv}(S_E)$ is not sufficient to analyze quantitative languages of mean-payoff automaton expressions.
It turns out that a richer query language can also be defined using our construction of $F_{\min}(\mathsf{conv}(S_E))$.
In our setting, we can view a $d$-dimensional mean-payoff automaton $A$ as a product $P_A$ of $2d$ copies $A^i_{t}$ of $A$ (where $1 \leq i \leq d$ and $t \in \{\mathsf{LimInf}Avg, \mathsf{LimSup}Avg\}$),
where $A^i_{t}$ assigns to each transition the $i^{\text{th}}$ coordinate of the payoff vector in $A$, and the automaton is interpreted as a $t$-automaton.
Intuitively, the set $Acc$ of accumulation points of a word $w$ satisfies $\min(p_i(Acc)) \sim \nu$ (resp.\ $\max(p_i(Acc)) \sim \nu$) if and only if the value of $w$ according to the
automaton $A^i_{t}$ for $t = \mathsf{LimInf}Avg$ (resp.\ $t = \mathsf{LimSup}Avg$) is $\sim \nu$. Therefore, atomic threshold conditions can be encoded
as threshold conditions on single variables of the vector set for $P_A$.
Hence, the vector set computed in Section~\ref{sec:alg-cons} allows us to decide
the emptiness problem for multi-threshold mean-payoff languages, by checking emptiness of the intersection
of the vector set with the constraint corresponding to the query.
Furthermore, we can solve more expressive queries in our framework, namely where atomic conditions are
linear constraints on $\mathsf{LimInf}Avg$- and $\mathsf{LimSup}Avg$-values.
For example, the constraint $\mathsf{LimInf}Avg(\mathsf{wt}_1) + \mathsf{LimSup}Avg(\mathsf{wt}_2) \sim \nu$
is simply encoded as $x_k + x_l \sim \nu$ where $k,l$ are the indices corresponding to $A^1_{\mathsf{LimInf}Avg}$ and $A^2_{\mathsf{LimSup}Avg}$
respectively.
Note that the trick of extending the dimension of the $d$-payoff vector with, say, $\mathsf{wt}_{d+1} = \mathsf{wt}_1 + \mathsf{wt}_2$,
is not equivalent because
${\sf Lim}\raisebox{2.0pt}{\scalebox{0.45}{\Big\{\begin{tabular}{c}{\sf Sup}\\[-3pt]{\sf Inf}\end{tabular}\Big\}}}{\sf Avg}(\mathsf{wt}_1) \pm
{\sf Lim}\raisebox{2.0pt}{\scalebox{0.45}{\Big\{\begin{tabular}{c}{\sf Sup}\\[-3pt]{\sf Inf}\end{tabular}\Big\}}}{\sf Avg}(\mathsf{wt}_2)$
is not equal to ${\sf Lim}\raisebox{2.0pt}{\scalebox{0.45}{\Big\{\begin{tabular}{c}{\sf Sup}\\[-3pt]{\sf Inf}\end{tabular}\Big\}}}{\sf Avg}(\mathsf{wt}_1 \pm \mathsf{wt}_2)$
in general (no matter the choice of $\raisebox{2.0pt}{\scalebox{0.45}{\Big\{\begin{tabular}{c}{\sf Sup}\\[-3pt]{\sf Inf}\end{tabular}\Big\}}}$ and $\pm$).
Hence, in the context of non-quantitative languages our results also provide a richer query language for the deterministic mean-payoff automata with multiple payoffs.
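As a small sketch of this encoding (the coordinate layout and the vectors below are hypothetical), an atomic linear constraint such as $\mathsf{LimInf}Avg(\mathsf{wt}_1) + \mathsf{LimSup}Avg(\mathsf{wt}_2) \geq \nu$ becomes a constraint on two coordinates of the vector set of the product $P_A$:

```python
# Coordinate of each copy A^i_t in the product P_A (hypothetical layout, d = 2).
index = {(1, "LimInfAvg"): 0, (1, "LimSupAvg"): 1,
         (2, "LimInfAvg"): 2, (2, "LimSupAvg"): 3}

def satisfies(v, nu):
    """LimInfAvg(wt_1) + LimSupAvg(wt_2) >= nu, read off two coordinates of v."""
    return v[index[(1, "LimInfAvg")]] + v[index[(2, "LimSupAvg")]] >= nu

# Emptiness of the query: does the (hypothetical) vector set meet the constraint?
vector_set = [(0.0, 1.0, -1.0, 2.0), (3.0, 3.0, 0.0, 0.0)]
assert any(satisfies(v, nu=2.0) for v in vector_set)
assert not any(satisfies(v, nu=4.0) for v in vector_set)
```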
\paragraph{Complexity.}
All problems studied in this section can be solved easily (in polynomial time) once
the value set is constructed, which can be done in quadruple exponential time.
The quadruple exponential blow-up is caused by $(a)$ the synchronized product
construction for $E$, $(b)$ the computation of the vector values of all simple cycles in $A_E$,
$(c)$ the construction of the vector set $F_{\min}(\mathsf{conv}(S_E))$, and $(d)$
the successive projections of the vector set to obtain the value set.
Therefore, all the above problems can be solved in 4EXPTIME.
\begin{theorem}\label{thrm_decidable}
For the class of mean-payoff automaton expressions, the quantitative emptiness,
universality, language inclusion, and equivalence problems, as well as distance
computation can be solved in 4EXPTIME.
\end{theorem}
Theorem~\ref{thrm_decidable} is in sharp contrast with the nondeterministic
and alternating mean-payoff automata for which language inclusion is
undecidable (see also Table~\ref{tab:properties}).
The following theorem presents the undecidability result that is derived from
the results of~\cite{DDGRT10}.
\begin{theorem}\label{thrm_undecidable}
The quantitative universality,
language inclusion, and
language equivalence problems are undecidable
for nondeterministic mean-payoff automata; and
the quantitative emptiness,
universality,
language inclusion, and
language equivalence problems are undecidable for
alternating mean-payoff automata.
\end{theorem}
\section{Expressive Power and Cut-point Languages}\label{sec:expressive-power}
We study the expressive power of mean-payoff automaton expressions $(i)$ according
to the class of quantitative languages that they define, and $(ii)$ according
to their cut-point languages.
\paragraph{Expressive power comparison.}
We compare the expressive power of
mean-payoff automaton expressions with nondeterministic and alternating mean-payoff automata.
The results of~\cite{CDH09b} show that there exist deterministic mean-payoff automata
$A_1$ and $A_2$ such that $\min(A_1,A_2)$ cannot be expressed by nondeterministic mean-payoff
automata.
The results of~\cite{CDH-FCT09} show that there exist deterministic mean-payoff
automata $A_1$ and $A_2$ such that $\mathrm{sum}(A_1,A_2)$ cannot be expressed by alternating mean-payoff
automata.
It follows that there exist languages expressible by mean-payoff automaton expressions that
cannot be expressed by nondeterministic and alternating mean-payoff automata.
In Theorem~\ref{thrm_expressive_power} we show the converse, that is,
we show that there exist languages expressible by nondeterministic
mean-payoff automata that cannot be expressed by mean-payoff automaton expressions.
It may be noted that the subclass of mean-payoff automaton expressions that
only uses min and max operators (and no sum operator) is a strict subclass of
alternating mean-payoff automata, and when only the max operator is used we get
a strict subclass of the nondeterministic mean-payoff automata.
\begin{theorem}\label{thrm_expressive_power}
Mean-payoff automaton expressions are incomparable in expressive power with
nondeterministic and alternating mean-payoff automata: (a)~there exists a quantitative
language that is expressible by mean-payoff automaton expressions, but cannot be expressed by
alternating mean-payoff automata; and
(b)~there exists a quantitative language that is expressible by a nondeterministic
mean-payoff automaton, but cannot be expressed by a mean-payoff automaton expression.
\end{theorem}
\paragraph{Cut-point languages.}
Let $L$ be a quantitative language over $\Sigma$.
Given a threshold $\eta \in {\mathbb R}$, the \emph{cut-point language} defined by $(L,\eta)$
is the language (i.e., the set of words) $L^{\geq \eta} = \{w \in \Sigma^{\omega} \mid L(w) \geq \eta \}$.
It is known for deterministic mean-payoff automata that the cut-point language
may not be $\omega$-regular, while it is $\omega$-regular if the threshold $\eta$
is \emph{isolated}, i.e. if there exists $\epsilon > 0$ such that $\abs{L(w) - \eta} > \epsilon$ for
all words $w \in \Sigma^{\omega}$~\cite{CDH09b}.
We present the following results
about cut-point languages of mean-payoff automaton expressions.
First, we note that it is decidable whether a rational threshold $\eta$ is
an isolated cut-point of a mean-payoff automaton expression, using the value set
(it suffices to check that $\eta$ is not in the value set since this set is closed).
Second, isolated cut-point languages of mean-payoff automaton expressions are \emph{robust}
as they remain unchanged under sufficiently small perturbations of the transition
weights. This result follows from a more general robustness property of weighted
automata~\cite{CDH09b} that extends to mean-payoff automaton expressions: if the
weights in the automata occurring in $E$ are changed by at most $\epsilon$,
then the value of every word changes by at most $\max(k,1) \cdot \epsilon$ where
$k$ is the number of occurrences of the $\mathrm{sum}$ operator in $E$.
Therefore $D_{\sup}(L_E,L_{E^\epsilon}) \to 0$ when $\epsilon \to 0$, where $E^\epsilon$ is any mean-payoff automaton expression
obtained from $E$ by changing the weights by at most $\epsilon$.
As a consequence, isolated cut-point languages of mean-payoff automaton expressions are robust.
Third, the isolated cut-point language of mean-payoff automaton expressions is
$\omega$-regular. To see this, note that every strongly connected component
of the product automaton $A_E$ contributes a closed convex set to the value
set of $E$. Since the $\max$-, $\min$- and $\mathrm{sum}$-projections are continuous
functions, they preserve connectedness of sets, and therefore each scc $C$ contributes
an interval $[m_C,M_C]$ to the value set of $E$. An isolated cut-point $\eta$ cannot belong to any of these
intervals, and therefore we obtain a B\"uchi automaton for the cut-point language
by declaring accepting the states of the product automaton $A_E$ that belong
to an scc $C$ such that $m_C > \eta$. Hence, we get the following result.
\begin{theorem}\label{theo:cut-point}
Let $L$ be the quantitative language of a mean-payoff automaton expression.
If $\eta$ is an isolated cut-point of $L$, then the cut-point language $L^{\geq \eta}$
is $\omega$-regular.
\end{theorem}
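A minimal sketch of the resulting decision procedure, assuming the interval $[m_C,M_C]$ of each scc has already been computed (the intervals below are hypothetical):

```python
def is_isolated(eta, intervals):
    """eta is isolated iff it lies in no interval [m_C, M_C]:
    the value set is the (closed) union of these intervals."""
    return all(not (m <= eta <= M) for (m, M) in intervals)

def accepting_sccs(eta, intervals):
    """Sccs whose interval lies entirely above eta; declaring their states
    accepting yields a Buechi automaton for the cut-point language."""
    return [c for c, (m, M) in enumerate(intervals) if m > eta]

intervals = [(-1.0, 0.5), (2.0, 3.0)]   # one interval per scc of A_E
assert is_isolated(1.0, intervals)
assert not is_isolated(2.5, intervals)
assert accepting_sccs(1.0, intervals) == [1]
```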
\section{Conclusion and Future Works}
We have presented a new class of quantitative languages, the \emph{mean-payoff automaton expressions}
which are both robust and decidable (see Table~\ref{tab:properties}), and for which
the distance between quantitative languages can be computed.
The decidability results come with a high worst-case complexity, and it is
a natural direction for future work to either improve the algorithmic solution
or present a matching lower bound.
Another question of interest is to find a robust and decidable class of
quantitative languages based on the discounted sum measure~\cite{CDH08}.
\appendix
\section{Proofs of Section~\ref{sec:vector-set}}
\begin{proof}[of Lemma~\ref{lem:convex-hull-lasso-words}]
Let $A_1, \dots, A_n$ be the deterministic weighted automata occurring in $E$.
First, let $x \in \mathsf{conv}(S_E)$. Then, $x = \sum_{i=1}^p \lambda_i v_i$ where $v_1,v_2,\ldots,v_p$ are the
vector values of simple cycles $\rho_1, \rho_2,\ldots, \rho_p$ in $A_E = A_1 \times \dots \times A_n$,
and $\sum_{i=1}^p \lambda_i = 1$ with $\lambda_i \geq 0$ for all $1 \leq i \leq p$.
For each of the above cycles $\rho_i$, let $q_i$ be a state occurring in $\rho_i$, and let $\rho_{i \to j}$
be a simple path in $A_E$ connecting $q_i$ and $q_j$ (such paths exist for each $1 \leq i,j \leq p$
because $A_E$ has a unique strongly connected component). Let $\rho_{0 \to i}$ be a simple path in $A_E$
from the initial state $q_I$ to $q_i$. Note that the length of $\rho_i$ and $\rho_{i \to j}$ is at most
$m = \abs{A_E}$, the number of states in $A_E$.
We consider the following sequence of ultimately periodic paths,
parameterized by $N \in \mathbb N$:
$$ \hat{\rho}_N = \rho_{0 \to 1} \cdot (\rho_1^{k_1^N} \cdot \rho_{1 \to 2} \cdot \ldots \cdot \rho_p^{k_p^N} \cdot \rho_{p \to 1})^{\omega},$$
where $k_i^N = \left\lfloor \frac{N \cdot \lambda_i}{\abs{\rho_i}}\right\rfloor$ for all $1 \leq i \leq p$.
Note that $\hat{\rho}_N$ is the run of a lasso-word $w_N$ in $A_E$, and that
$N \cdot \lambda_i -\abs{\rho_i} \leq \abs{\rho_i} \cdot k_i^N \leq N \cdot \lambda_i$.
Because $\hat{\rho}_N$ is ultimately periodic, the vector value of $\hat{\rho}_N$ gives the value of $w_N$ in each $A_i$. It can
be computed as
$$\mathsf{LimAvg}(\hat{\rho}_N) = \mathsf{Avg}(\rho_1^{k_1^N} \cdot \rho_{1 \to 2} \cdot \ldots \cdot \rho_p^{k_p^N} \cdot \rho_{p \to 1})$$
and it can be bounded along each coordinate $j = 1, \dots, n$ as follows (we denote by $W$ the largest weight in $A_E$ in
absolute value):
$$
\begin{array}{rcl}
\mathsf{LimAvg}_j(\hat{\rho}_N) & \leq & \frac{\displaystyle\sum_{i=1}^p k_i^N \cdot \abs{\rho_i} \cdot \mathsf{Avg}_j(\rho_i) +
\displaystyle\sum_{i=1}^p \abs{A_E} \cdot W}
{\displaystyle\sum_{i=1}^p k_i^N \cdot \abs{\rho_i} + \displaystyle\sum_{i=1}^p \abs{A_E}} \\[+6pt]
& \leq & \frac{\displaystyle\sum_{i=1}^p N \cdot \lambda_i \cdot \mathsf{Avg}_j(\rho_i) +
p \cdot m \cdot W}
{\displaystyle\sum_{i=1}^p N \cdot \lambda_i - \abs{\rho_i} + \abs{A_E}} \\[+3pt]
& \leq & \frac{N \cdot x_j + p \cdot m \cdot W {\large \strut} }
{N} = x_j + \frac{p \cdot m \cdot W {\large \strut} }
{N} \\[+3pt]
\end{array}
$$
Analogously, we have
$$
\begin{array}{rcl}
\mathsf{LimAvg}_j(\hat{\rho}_N) & \geq & \frac{\displaystyle\sum_{i=1}^p k_i^N \cdot \abs{\rho_i} \cdot \mathsf{Avg}_j(\rho_i) -
\displaystyle\sum_{i=1}^p \abs{A_E} \cdot W}
{\displaystyle\sum_{i=1}^p k_i^N \cdot \abs{\rho_i} + \displaystyle\sum_{i=1}^p \abs{A_E}} \\[+6pt]
& \geq & \frac{\displaystyle\sum_{i=1}^p (N \cdot \lambda_i - \abs{\rho_i}) \cdot \mathsf{Avg}_j(\rho_i) -
p \cdot m \cdot W}
{\displaystyle\sum_{i=1}^p N \cdot \lambda_i + \abs{A_E} } \\[+3pt]
& \geq & \frac{N \cdot x_j - 2 p \cdot m \cdot W {\large \strut} }
{N + p \cdot m} = x_j - \frac{p \cdot m \cdot (2W - x_j) {\large \strut} }
{N + p \cdot m} \\[+3pt]
\end{array}
$$
Therefore $\mathsf{LimAvg}_j(\hat{\rho}_N) \to x_j$ when $N \to \infty$. This shows that $x$ is in the closure
of the vector set of lasso-words.
Second, we show that the values of a lasso-word according to each automaton $A_i$ form a vector
which belongs to $\mathsf{conv}(S_E)$ (which is equal to its closure). Let $w=w_1(w_2)^{\omega}$ be a
lasso-word.
It is easy to see that there exist $p_1,p_2$ such that $p = p_1 + p_2 \leq m = \abs{A_E}$ and the run of $A_E$ on $w_1 w_2^p$
has the shape of a lasso (i.e., the automaton $A_E$ is in the same state after reading $w_1 w_2^{p_1}$ and after
reading $w_1 w_2^p$), and thus the cyclic part of the lasso can be decomposed into simple cycles in $A_E$.
The vector value of $w$ in each $A_i$ is the mean of the vector values of the simple cycles in the decomposition,
and therefore it belongs to the convex hull $\mathsf{conv}(S_E)$.
\qed
\end{proof}
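The convergence argument in the first half of the proof can be checked numerically. The sketch below (with hypothetical cycle lengths, averages, and connecting paths) computes the average of one period of $\hat{\rho}_N$ along a single coordinate and verifies that it approaches the convex combination $x_j = \sum_i \lambda_i \cdot \mathsf{Avg}_j(\rho_i)$ as $N$ grows:

```python
def periodic_avg(cycles, conns, lams, N):
    """Average weight of rho_1^{k_1} conn_1 ... rho_p^{k_p} conn_p along one
    coordinate, with k_i = floor(N * lam_i / len_i) as in the proof."""
    total, steps = 0.0, 0
    for (length, avg), (c_len, c_avg), lam in zip(cycles, conns, lams):
        k = int(N * lam) // length
        total += k * length * avg + c_len * c_avg
        steps += k * length + c_len
    return total / steps

cycles = [(2, 1.0), (3, 4.0)]    # simple cycles: (length, average weight)
conns  = [(1, 10.0), (2, -5.0)]  # connecting paths: (length, average weight)
lams   = [0.25, 0.75]
target = 0.25 * 1.0 + 0.75 * 4.0   # the convex combination x_j = 3.25
assert abs(periodic_avg(cycles, conns, lams, N=10**6) - target) < 1e-3
assert abs(periodic_avg(cycles, conns, lams, N=10**3) - target) < 1e-1
```

The error term decays like $p \cdot m \cdot W / N$, matching the bounds derived in the proof.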
\begin{proof}[of Lemma~\ref{lem:value-set}]
First, we show that $V_E \subseteq F_{\min}(\mathsf{conv}(S_E))$.
Let $x \in V_E$ be a tuple of values of some word $w$ according to each automaton $A_i$ occurring in $E$
(i.e., $x_i = L_{A_i}(w)$ for all $1 \leq i \leq n$). For $\epsilon > 0$ and $1 \leq k \leq n$,
we construct a lasso-word $w^k_{\epsilon}$ such that $\abs{L_{A_k}(w^k_{\epsilon}) - x_k} \leq \epsilon$ and
$L_{A_i}(w^k_{\epsilon}) \geq x_i - \epsilon$ for all $1 \leq i \leq n$ with $i \neq k$. If we denote
by $y^k_{\epsilon}$ the vector value of $w^k_{\epsilon}$, then the value $y = f_{\min}(\{y^k_{\epsilon} \mid 1 \leq k \leq n \})$
is such that $\abs{y_i - x_i} \leq \epsilon$ for all $1 \leq i \leq n$. By Lemma~\ref{lem:convex-hull-lasso-words},
the limit of the vector value $y^k_{\epsilon}$ when $\epsilon \to 0$ is in $\mathsf{conv}(S_E)$, and thus
$x \in F_{\min}(\mathsf{conv}(S_E))$.
We give the construction of $w^k_{\epsilon}$ for $k=1$. The construction is similar for $k \geq 2$.
Consider the word $w$ and let $\rho$ be the suffix of the (unique) run of $A_E$ on $w$ which visits
only states in the strongly connected component of $A_E$. The value of $\rho$ and the value of $w$
coincide (according to each $A_i$) since the mean-payoff value is prefix-independent.
Since $L_{A_i}(w) = x_i$ for all $1 \leq i \leq n$,
there exists a position $p \in \mathbb N$ such that the mean value of all prefixes of $\rho$ of length greater
than $p$ is at least $x_i - \epsilon$ according to each $A_i$ (since each $A_i$ is a $\mathsf{LimInf}Avg$-automaton).
Since $L_{A_1}(w) = x_1$, there exist infinitely many prefixes $\rho'$ of $\rho$ with mean value according to $A_1$
close to $x_1$, more precisely such that $\abs{\mathsf{Avg}_1(\rho') - x_1} \leq \epsilon$. Pick such a prefix $\rho'$ of length
at least $\max(p, \frac{1}{\epsilon})$. Since $\rho'$ is in the strongly connected component of $A_E$,
we can extend $\rho'$ to loop back to its first state. This requires at most $m$ additional
steps and gives $\rho''$. Note also that $\rho''$ can be reached from the initial state of $A_E$ since it was the case of $\rho$,
and thus it defines a lasso-shaped run whose value can be bounded along the first coordinate as follows:
$$
\begin{array}{rcl}
\abs{\mathsf{Avg}_1(\rho'') - x_1} & \leq & \frac{ {\Large \strut} \bigabs{ \abs{\rho'} \cdot \mathsf{Avg}_1(\rho') - \abs{\rho''} \cdot x_1} + m \cdot W}{\abs{\rho''}} \\[+3pt]
& \leq & \frac{ {\Large \strut} \abs{\rho'} \cdot \abs{\mathsf{Avg}_1(\rho') - x_1} + (\abs{\rho''} - \abs{\rho'}) \cdot x_1 + m \cdot W}{\abs{\rho''}} \\[+3pt]
& \leq & \epsilon + \frac{ {\Large \strut} m \cdot x_1 + m \cdot W}{\abs{\rho''}} \leq \epsilon + \frac{ {\Large \strut} 2m \cdot W}{\abs{\rho''}} \\[+5pt]
& \leq & \epsilon \cdot (1 + 2m \cdot W) \\[+1pt]
\end{array}
$$
Hence, the value along the first coordinate of the word $w^1_{\epsilon}$ corresponding to the run $\rho''$ tends to $x_1$ when $\epsilon \to 0$.
We show similarly that the value of $w^1_{\epsilon}$ along the other coordinates $i \geq 2$ is bounded from below by $x_i - \epsilon \cdot (1 + 2m \cdot W)$.
The result follows.
Now, we show that $F_{\min}(\mathsf{conv}(S_E)) \subseteq V_E$.
In this proof, we use the notation $\odot$ for \emph{iterated concatenation} defined as follows.
Given nonempty words $w_1,w_2 \in \Sigma^{+}$, the finite word $w_1 \odot w_2$ is $w_1 \cdot (w_2)^k$
where $k = \abs{w_1}^2$. We assume that $\odot$ (iterated concatenation) and $\cdot$ (usual concatenation) have the same
precedence and that they are left-associative. For example, the expression $ab \odot a \cdot b$
is parsed as $(ab \odot a) \cdot b$ and denotes the word $abaaaab$, while the expression
$ab \cdot a \odot b$ is parsed as $(ab \cdot a) \odot b$ and denotes the word $abab^9$.
We use this notation for the purpose of simplifying the proof presentation, and some care needs to be taken.
For example, explicit use of concatenation (i.e., $a \cdot b$ vs. $ab$) makes a difference
since $ab \odot ab = (ab)^5$ while $ab \odot a \cdot b = aba^4b$. Finally, we use notations
such as $(w_1 \cdot w_2 \,\odot)^{\omega}$ to denote the infinite word $w_1 \cdot w_2 \odot w_1 \cdot w_2 \odot \dots$.
Usually we use the notation $w_1 \odot w_2$ when the run of $A_E$ on $w_1 \cdot w_2$ can be decomposed as
$\rho_1 \cdot \rho_2$ where $\rho_i$ corresponds to $w_i$ ($i=1,2$) and $\rho_2$ is a cycle
in the automaton. Then, the mean value of the run on $w_1 \odot w_2$ is
\begin{xalignat*}{1}
& \frac{ \abs{\rho_1} \cdot \mathsf{Avg}(\rho_1) + \abs{\rho_1}^2 \cdot \abs{\rho_2} \cdot \mathsf{Avg}(\rho_2)}{ \abs{\rho_1} + \abs{\rho_1}^2 \cdot \abs{\rho_2}} \\[+6pt]
= \ & \frac{ \mathsf{Avg}(\rho_1) + \abs{\rho_1} \cdot \abs{\rho_2} \cdot \mathsf{Avg}(\rho_2)}{ 1 + \abs{\rho_1} \cdot \abs{\rho_2}} \\[+6pt]
= \ & \mathsf{Avg}(\rho_2) + \frac{ \mathsf{Avg}(\rho_1) - \mathsf{Avg}(\rho_2)}{ 1 + \abs{\rho_1} \cdot \abs{\rho_2}}
\end{xalignat*}
Therefore, since $\abs{ \mathsf{Avg}(\rho_1) - \mathsf{Avg}(\rho_2)} \leq 2W$ independently of $w_1$ and $w_2$,
a key property of $\odot$ is that the mean value of $w_1 \odot w_2$ can be made
arbitrarily close to $\mathsf{Avg}(\rho_2)$ by taking $w_1$ sufficiently long (since $\abs{w_1} = \abs{\rho_1}$).
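A brief executable sketch of $\odot$ and its key property (the weight assignment used for the mean is hypothetical):

```python
def odot(w1, w2):
    """Iterated concatenation: w1 . (w2)^k with k = |w1|^2."""
    return w1 + w2 * (len(w1) ** 2)

# The parsing examples from the text (both operators left-associative):
assert odot("ab", "ab") == "ab" * 5                    # ab (.) ab = (ab)^5
assert odot("ab", "a") + "b" == "ab" + "a" * 4 + "b"   # ab (.) a . b = ab a^4 b
assert odot("ab" + "a", "b") == "aba" + "b" * 9        # ab . a (.) b = aba b^9

# Key property: the mean value of w1 (.) w2 approaches the mean of w2
# as w1 gets longer (here with hypothetical weights a -> 0, b -> 1).
weight = {"a": 0.0, "b": 1.0}
mean = lambda w: sum(weight[c] for c in w) / len(w)
assert abs(mean(odot("a" * 100, "b")) - 1.0) < 0.02
```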
We proceed with the proof of the lemma.
Let $x \in F_{\min}(\mathsf{conv}(S_E))$ and let $y_1, \dots, y_n$ be $n$ points in $\mathsf{conv}(S_E)$ such that
the $i^{\text{th}}$ coordinate of $x$ and $y_i$ coincide for all $1 \leq i \leq n$, and
the $j^{\text{th}}$ coordinate of $x$ is smaller than the $j^{\text{th}}$ coordinate of $y_i$ for all $j \neq i$.
Such $y_i$'s exist by definition of $F_{\min}$ though they may not be distinct.
By Lemma~\ref{lem:convex-hull-lasso-words}, for all $\epsilon > 0$ there exist lasso-words
$w_1, \dots, w_n$ such that $\norm{v_k - y_k} \leq \epsilon$ where $v_k = \tuple{L_{A_1}(w_k),\ldots,L_{A_n}(w_k)}$
for each $1 \leq k \leq n$. For each $1 \leq i \leq n$, let $\rho_i$ be the cyclic part of the (lasso-shaped) run
of $A_E$ on $w_i$, and let $q_i$ be the first state in $\rho_i$. For each $1 \leq i,j \leq n$, define $\rho_{i \to j}$ the
shortest path in $A_E$ from $q_i$ to $q_j$, and let $\rho_{0 \to j}$ be a simple path in $A_E$ from the initial
state $q_I$ to $q_j$ (such paths exist because $A_E$ is strongly connected).
Note that $\mathsf{Avg}_j(\rho_i) = L_{A_j}(w_i)$.
We construct the following infinite run in $A_E$:
$$\hat{\rho} = \rho_{0 \to 1} \odot (\rho_1 \cdot \rho_{1 \to 2} \odot \rho_2 \cdot \rho_{2 \to 3} \odot \dots \rho_n \cdot \rho_{n \to 1} \odot)^{\omega} $$
It is routine to show that $\hat{\rho}$ is a run of $A_E$, and
we have $\mathsf{LimAvg}_j(\hat{\rho}) = v_{jj}$ because $(i)$ the cycles $\rho_1, \dots, \rho_n$ are asymptotically
prevailing over the cycle $\rho_{1 \to 2} \cdot \rho_{2 \to 3} \cdots \rho_{n \to 1}$, $(ii)$
by the key property of $\odot$, there exist infinitely many prefixes in $\hat{\rho}$
such that the average of the weight along the $j^{\text{th}}$ coordinate converges to $v_{jj}$,
and $(iii)$ all cycles $\rho_i$ have average value greater than $v_{jj}$ along the $j^{\text{th}}$
coordinate. Therefore, the liminf of the averages along the $j^{\text{th}}$ coordinate (i.e., $\mathsf{LimAvg}_j(\hat{\rho})$)
is $v_{jj}$, and the vector of values of $\hat{\rho}$ is thus at distance $\epsilon$ of $x$, that is
$\norm{\mathsf{LimAvg}(\hat{\rho}) - x} \leq \epsilon$. The construction of $\hat{\rho}$ can be adapted to
obtain $\mathsf{LimAvg}(\hat{\rho}) = x$ by changing the $k^{\text{th}}$ occurrence of $\rho_i$ in $\hat{\rho}$ by a
cycle corresponding to a lasso-word $w_i$ obtained as above for $\epsilon < \frac{1}{k}$.
\qed
\end{proof}
\section{Proofs of Section~\ref{sec:alg-cons}}
\begin{proof}[of Lemma~\ref{lem:value-set-convex}]
Let $x = f_{\min}(u^1, u^2, \dots, u^n)$ and $y = f_{\min}(v^1, v^2, \dots, v^n)$
where $u^1, \dots, u^n, v^1, \dots, v^n \in X$. Let $z = \lambda x + (1-\lambda) y$ where $0 \leq \lambda \leq 1$
and we prove that $z \in F_{\min}(X)$.
Without loss of generality, assume that $x_i = u^i_i$ and $y_i = v^i_i$ for all $1 \leq i \leq n$.
Then $z_i = \lambda u^i_i + (1-\lambda) v^i_i$ for all $1 \leq i \leq n$.
To show that $z \in F_{\min}(X)$, we give for each $1 \leq j \leq n$ a point $p \in X$ such that
$p_j = z_j$ and $p_k \geq z_k$ for all $k \neq j$. Take $p = \lambda u^j + (1-\lambda) v^j$.
Clearly $p \in X$ since $u^j, v^j \in X$ and $X$ is convex, and $(i)$ $p_j = \lambda u^j_j + (1-\lambda) v^j_j = z_j$,
and $(ii)$ for all $k \neq j$, we have $p_k = \lambda u^j_k + (1-\lambda) v^j_k \geq \lambda u^k_k + (1-\lambda) v^k_k = z_k$
(since $u^k$ has the minimal value on the $k^{\text{th}}$ coordinate among $u^1, \dots, u^n$, and similarly for $v^k$).
\qed
\end{proof}
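The witness used in this proof can be checked on concrete points. Taking $f_{\min}$ to be the componentwise minimum (as it is used above) and hypothetical points $u^j, v^j$ ordered so that $u^i$ attains the minimum in coordinate $i$ (the WLOG assumption of the proof):

```python
def f_min(points):
    """Componentwise minimum of a finite list of points."""
    return tuple(min(p[i] for p in points) for i in range(len(points[0])))

def combo(lam, p, q):
    return tuple(lam * a + (1 - lam) * b for a, b in zip(p, q))

u = [(0.0, 5.0), (4.0, 1.0)]   # x = f_min(u) = (0, 1); u^i minimal in coord i
v = [(2.0, 9.0), (6.0, 3.0)]   # y = f_min(v) = (2, 3); same ordering for v
lam = 0.5
z = combo(lam, f_min(u), f_min(v))           # z = lam*x + (1-lam)*y = (1, 2)
witnesses = [combo(lam, u[j], v[j]) for j in range(2)]
assert f_min(witnesses) == z                 # z is realized inside F_min(X)
```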
\begin{proof}[of Proposition~\ref{prop:conv-min-2D}]
By Lemma~\ref{lem:value-set-convex}, we already know that $\mathsf{conv}(F_{\min}(S)) \subseteq F_{\min}(\mathsf{conv}(S))$
(the set $F_{\min}(\mathsf{conv}(S))$ is convex, and since
$F_{\min}$ is a monotone operator and $S \subseteq \mathsf{conv}(S)$, we have $F_{\min}(S) \subseteq F_{\min}(\mathsf{conv}(S))$
and thus $\mathsf{conv}(F_{\min}(S)) \subseteq F_{\min}(\mathsf{conv}(S))$).
We prove that $F_{\min}(\mathsf{conv}(S)) \subseteq \mathsf{conv}(F_{\min}(S))$ if $S \subseteq {\mathbb R}^2$.
Let $x \in F_{\min}(\mathsf{conv}(S))$ and show that $x \in \mathsf{conv}(F_{\min}(S))$.
Since $x \in F_{\min}(\mathsf{conv}(S))$, there exist $p,q \in \mathsf{conv}(S)$ such that
$x = f_{\min}(p,q)$, and assume that $p_1 < q_1$ and $p_2 > q_2$ (other cases are
symmetrical, or imply that $x=p$ or $x=q$ for which the result is trivial as then $x \in \mathsf{conv}(S)$).
We show that $x = (p_1,q_2)$ is in the convex hull of $\{p,q,r\}$ where $r = f_{\min}(u,v)$
and $u \in S$ is the point in $S$ with smallest first coordinate, and $v \in S$ is the point
in $S$ with smallest second coordinate, so that $r_1 = u_1 \leq p_1$ and $r_2 = v_2 \leq q_2$.
Simple computations show that the equation $x = \lambda p + \mu q + (1-\lambda-\mu) r$ has
a solution with $0 \leq \lambda, \mu \leq 1$ and the result follows.
\qed
\end{proof}
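The ``simple computations'' at the end of this proof amount to solving a $2 \times 2$ linear system for $\lambda, \mu$. A sketch on hypothetical points satisfying the case assumptions ($p_1 < q_1$, $p_2 > q_2$, $r_1 \leq p_1$, $r_2 \leq q_2$):

```python
def barycentric(p, q, r, x):
    """Solve x = lam*p + mu*q + (1 - lam - mu)*r by Cramer's rule."""
    a, b = p[0] - r[0], q[0] - r[0]
    c, d = p[1] - r[1], q[1] - r[1]
    det = a * d - b * c
    lam = ((x[0] - r[0]) * d - b * (x[1] - r[1])) / det
    mu = (a * (x[1] - r[1]) - (x[0] - r[0]) * c) / det
    return lam, mu

p, q, r = (1.0, 4.0), (5.0, 0.0), (0.0, -1.0)
x = (p[0], q[1])                 # x = f_min(p, q) = (1, 0)
lam, mu = barycentric(p, q, r, x)
assert 0 <= lam <= 1 and 0 <= mu <= 1 and lam + mu <= 1   # x in conv{p, q, r}
```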
\begin{proof}[of Lemma~\ref{lemm1}]
By definition, we have $F_n(S) \subseteq F(S)$.
For a point $x=f(P)$ with $P$ a finite subset of $S$, choose for each coordinate one point of $P$ that
contributes to it, and obtain a finite set $P' \subseteq P$ of at most $n$
points such that $x=f(P')$. This shows that $F(S) \subseteq F_n(S)$.
For the second part, let $P=\set{p_1, p_2,\ldots,p_k}$ with $k \leq n$, and let $x=f(P)$.
Let $x_1=f(p_1,p_2)$, and for $i > 1$ define $x_i=f(x_{i-1},p_{i+1})$.
We have $x=x_{k-1}$ (e.g., $f(p_1,p_2,p_3)= f(f(p_1,p_2), p_3)$).
Thus we have obtained $x$ by applying $f$ to two points at most $n-1$ times, and
it follows that $F_n(S) \subseteq F_2^{n-1}(S)$.
\qed
\end{proof}
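For concreteness (with $f$ instantiated as the componentwise minimum, one possible choice), the identity used in the proof says that applying $f$ pairwise, left to right, reproduces the $n$-ary application:

```python
from functools import reduce

def f(*points):
    """n-ary componentwise minimum (one possible instantiation of f)."""
    return tuple(min(c) for c in zip(*points))

pts = [(3, 1, 2), (0, 4, 4), (5, 5, 0)]
# Pairwise left-to-right application agrees with one n-ary application:
assert reduce(f, pts) == f(*pts) == (0, 1, 0)
```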
\begin{proof}[of Theorem~\ref{theo:explicit-construction}]
We show that the construction $\gamma$ satisfies conditions {\bf C1} and {\bf C2}.
Let $Y' = \gamma(Y)$.
Clearly the set $Y'$ is a finite subset of $\mathsf{conv}(Y)$, thus Condition {\bf C1}
holds, and we now show that Condition {\bf C2} is satisfied.
Since $F_2(\mathsf{conv}(Y))$ is convex (by Lemma~\ref{lem:value-set-convex}), it suffices
to show that all corners of $F_2(\mathsf{conv}(Y))$ belong to $\mathsf{conv}(F(Y'))$.
Consider a point $x=f(p,q)$ where $p,q \in \mathsf{conv}(Y)$.
We will show that either $p,q \in Y'$ or $x$
cannot be a corner of $F_2(\mathsf{conv}(Y))$. It will follow that
$F_2(\mathsf{conv}(Y)) \subseteq \mathsf{conv}(F(Y'))$.
Our proof is by induction on the number of coordinates on which there is a
\emph{tie} (a tie is a coordinate where the values of $p$ and $q$ coincide).
If there are $n$ ties, then the points $p$ and $q$ are equal and we have $x=p=q$;
this case is trivial since $Y \subseteq Y'$, so the base case is done.
By the inductive hypothesis, we assume that the result holds when there are $k+1$ ties,
and we consider the case of $k$ ties.
Without loss of generality we consider the following case:
\[
p_1=q_1; p_2=q_2; \cdots; p_k=q_k;
\]
\[
p_{k+1} < q_{k+1}; p_{k+2} < q_{k+2}; \cdots; p_{\ell}< q_{\ell};
\]
\[
p_{\ell+1} > q_{\ell+1}; p_{\ell+2} > q_{\ell+2}; \cdots; p_{n}> q_{n};
\]
i.e., the first $k$ coordinates are ties, then $p$ is the sole contributor
to coordinates $k+1$ to $\ell$, and for the remaining coordinates $q$ is
the sole contributor.
Below we will use the expression \emph{infinitesimal change} to mean change smaller than
$\eta =\min_{k <i \leq n} \abs{p_i -q_i}$ (note $\eta>0$).
Consider the plane $\Pi$ with first $k$ coordinates constant (given by
$x_1=p_1=q_1; x_2=p_2=q_2; \cdots; x_k=p_k=q_k$).
We intersect the plane~$\Pi$ with $\mathsf{conv}(Y)$ and we obtain a polytope.
First we consider the case when $p$ and $q$ are not a corner of the polytope
and then we consider when $p$ and $q$ are corners of the polytope.
\begin{enumerate}
\item \emph{Case 1: $p$ is not a corner of the polytope $\Pi \cap \mathsf{conv}(Y)$.}
We draw a line in $\Pi$ with $p$ as midpoint such that the line is contained
in $\Pi \cap \mathsf{conv}(Y)$. This ensures that the coordinates $1$ to $k$ remain
fixed along the line.
\begin{enumerate}
\item If any one of coordinates from $k+1$ to $\ell$ changes along the line, then by
infinitesimal change of $p$ along the line, we ensure that $x$ moves along a line.
\item Otherwise coordinates $k+1$ to $\ell$ remain constant; and we move~$p$
along the line in a direction such that at least one of the remaining coordinates (say $j$)
decreases, and decreasing $j$ we have one of the following three cases:
\begin{enumerate}
\item we go down to $q_j$ and then we have one more tie and we are fine by
inductive hypothesis;
\item we hit a face of the polytope $\Pi \cap \mathsf{conv}(Y)$ and then we change the direction of the line (while staying in the hit face) and continue;
\item we hit a corner of the polytope $\Pi \cap \mathsf{conv}(Y)$ and then $p$ becomes a corner,
which is handled in Case 3.
\end{enumerate}
\end{enumerate}
\item \emph{Case 2: $q$ is not a corner of the polytope $\Pi \cap \mathsf{conv}(Y)$.}
By an analysis symmetric to Case~1, either we are done or $q$ becomes a corner
of the polytope $\Pi \cap \mathsf{conv}(Y)$.
\item \emph{Case 3: $p$ and $q$ are corners of the polytope $\Pi \cap \mathsf{conv}(Y)$.}
If $\Pi$ is supported by $Y$, then both $p,q \in Y'$ and we are done.
Otherwise $\Pi$ is not supported by $Y$, and we now move along lines with
$p$ and $q$ as midpoints and slide the plane $\Pi$.
In other words, we move $p$ and $q$ along lines in such a way that the ties
remain the same.
We also ensure infinitesimal changes along the lines so that the contributor
of each coordinate is the same as originally.
Let
\[
p(\lambda) =p + \lambda \cdot \vec{v};
\quad
q(\mu) =q + \mu \cdot \vec{w};
\]
be the lines where $\vec{v}$ and $\vec{w}$ are directions.
By ties for $1 \leq i \leq k$ we have $\lambda \cdot v_i =\mu \cdot w_i$.
Then for infinitesimal change the point $x$ moves as follows:
\[
\begin{array}{l}
x(\lambda,\mu) = f(p(\lambda),q(\mu)) \\[1ex]
=
(p_1 + \lambda \cdot v_1,
p_2 + \lambda \cdot v_2, \cdots
p_\ell + \lambda \cdot v_\ell,
q_{\ell +1} + \mu \cdot w_{\ell+1}, \cdots,
q_{n} + \mu \cdot w_{n})
\\[1ex]
= (p_1 + \lambda \cdot v_1,
p_2 + \lambda \cdot v_2, \cdots
p_\ell + \lambda \cdot v_\ell,
q_{\ell +1} + \lambda \cdot\frac{v_1}{w_1} \cdot w_{\ell+1}, \cdots,
q_{n} + \lambda \cdot\frac{v_1}{w_1} \cdot w_{n})
\end{array}
\]
It follows that $x$ moves along the line
$x + \lambda \cdot \vec{z}$ where for
$1 \leq i \leq \ell$ we have $z_i=v_i$ and for $\ell < i \leq n$ we have
$z_i= \frac{v_1}{w_1}\cdot w_i$; note that $w_1 \neq 0$ since the plane slides.
Since $x$ moves along a line it cannot be an extreme point.
\end{enumerate}
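The Case 3 computation can be sanity-checked numerically. The following sketch is illustrative only and not part of the proof; the values of $n$, $\ell$, $p$, $q$, $\vec{v}$, and $\vec{w}$ are hypothetical, chosen so that the ratios $v_i/w_i$ are constant (so the ties hold). It verifies that with $\mu = \lambda \cdot v_1/w_1$ the point $x(\lambda)$ moves along the straight line $x + \lambda \cdot \vec{z}$:

```python
# Illustrative check of the Case 3 computation: with the tie mu = lam * v1/w1,
# the combined point x(lam) moves along a line with direction z, where
# z_i = v_i for i <= ell and z_i = (v1/w1) * w_i for i > ell.
n, ell = 4, 2
p = [1.0, 2.0, 3.0, 4.0]
q = [5.0, 6.0, 7.0, 8.0]
v = [1.0, 2.0, 0.5, 0.25]
w = [2.0, 4.0, 1.0, 0.5]   # v_i / w_i = 1/2 for every i, so all ties hold

def x_of(lam):
    mu = lam * v[0] / w[0]  # forced by the tie lam * v_1 = mu * w_1
    return [p[i] + lam * v[i] for i in range(ell)] + \
           [q[i] + mu * w[i] for i in range(ell, n)]

z = [v[i] if i < ell else (v[0] / w[0]) * w[i] for i in range(n)]
x0 = x_of(0.0)
for lam in [0.1, 0.5, 2.0]:
    xs = x_of(lam)
    # x(lam) = x(0) + lam * z, i.e. x moves along a straight line
    assert all(abs(xs[i] - (x0[i] + lam * z[i])) < 1e-9 for i in range(n))
```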
This completes the proof. Note that in the special case when there is
no tie at all, we do not need to consider Case 3, since then $\Pi = {\mathbb R}^n$,
and thus $p$ and $q$ are corners of $\mathsf{conv}k(Y)$ and hence in $Y'$.

\smallskip
\noindent{\bf Analysis.} Given a set of $m$ points,
the construction $\gamma$ yields at most $m^2 \cdot 2^n$ new points.
The argument is as follows: consider a point $p$, and then consider
all $k$-dimensional coordinate planes through $p$.
There are ${n \choose k}$ possible $k$-dimensional coordinate planes through $p$,
and summing over all $k$ we get that there are at most $2^n$ coordinate planes
to consider through $p$.
The intersection of a coordinate plane through $p$ with the convex hull of the
$m$ points gives at most $m$ new corner points, and this claim is proved as
follows: the new corner points arise as corners of the shadow of the convex
hull on the plane, and since the convex hull has at most $m$ corner points the
claim follows.
Thus it follows that the construction yields at most $m^2 \cdot 2^n$ new points,
and hence we have at most $m+ m^2 \cdot 2^n \leq 2 \cdot m^2 \cdot 2^n$ points in total.
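The single-step count bound above can be checked mechanically; the following is a small sanity check (illustrative only, not part of the proof):

```python
# One application of the construction takes m points to at most
# m + m^2 * 2^n points, which is at most 2 * m^2 * 2^n whenever m >= 1
# (since m <= m^2 * 2^n for m >= 1 and n >= 0).
def one_step(m, n):
    return m + m * m * 2 ** n

for m in range(1, 50):
    for n in range(0, 10):
        assert one_step(m, n) <= 2 * m * m * 2 ** n
```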
If the set $S$ has $m$ points, then applying the construction iteratively $n$
times we obtain the desired set $S'$ that has at most
$m^{2^n} \cdot 2^{n^2+n}$ points.
Since the convex hull of a set of $\ell$ points in $n$ dimensions can be
constructed in $\ell^{O(n)}$ time, it follows that the set $S'$ can
be constructed in $m^{O(n\cdot 2^n)} \cdot 2^{O(n^3)}$ time.
\qed
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thrm_undecidable} (sketch)]
We will show the undecidability for the quantitative universality problem
for nondeterministic mean-payoff automata.
It will follow that the quantitative language inclusion and quantitative
language equivalence problem are undecidable for both nondeterministic and
alternating automata.
The quantitative universality problem for nondeterministic automata can be reduced
both to the quantitative emptiness problem and to the quantitative universality
problem for alternating mean-payoff automata. Hence, to complete the proof, we derive
the undecidability of quantitative universality for nondeterministic
mean-payoff automata from the recent results of~\cite{DDGRT10}.
The results of~\cite{DDGRT10} show that in two-player \emph{blind}
imperfect-information mean-payoff games it is undecidable whether there is a
player~1 blind-strategy $\sigma$ such that against all player~2 strategies
$\tau$ the mean-payoff value $P(\sigma,\tau)$ of the play given $\sigma$ and
$\tau$ is greater than $\nu$.
The result is a reduction from the halting problem of two-counter machines,
and we observe that the reduction has the following property:
for threshold value $\nu=0$, if the two-counter machine halts then
player~1 has a blind-strategy to ensure payoff greater than $\nu$, and otherwise
against every blind-strategy for player~1, player~2 can ensure that the payoff for
player~1 is at most $\nu=0$.
Thus from the above observation about the reduction of~\cite{DDGRT10} it follows
that in two-player blind imperfect-information mean-payoff games, given a
threshold $\nu$, the decision problem whether
\[
\exists \sigma.\ \inf_{\tau} P(\sigma,\tau) > \nu
\]
where $\sigma$ ranges over player~1 blind-strategies, and $\tau$ over player~2
strategies, is undecidable; dually, the decision problem whether
\[
\forall \sigma.\ \sup_{\tau} P(\sigma,\tau) \geq \nu
\]
is also undecidable.
The universality problem for nondeterministic mean-payoff automata is equivalent
to two-player blind imperfect-information mean-payoff games, where the choice of
words represents the blind-strategies of player~1 and resolving the
nondeterminism corresponds to the strategies of player~2.
It follows that for a nondeterministic mean-payoff automaton $A$,
given a threshold $\nu$, the decision problem whether
\[
\text{for all words } w. \ L_A(w) \geq \nu
\]
is undecidable.
\qed
\end{proof}
\section{Proofs of Section~\ref{sec:expressive-power}}
\begin{comment}
\begin{proof}[Proof of Lemma~\ref{lem:lavg-det-in-the-limit}]
Consider the language $L_F$ of finitely many $a$'s, i.e., for an infinite word $w$
we have $L_F(w)=1$ if $w$ contains finitely many $a$'s and~0 otherwise.
It is easy to see that the nondeterministic mean-payoff automata (shown in
\figurename~\ref{figure:aut3}) defines $L_F$.
\begin{figure}
\caption{A nondeterministic mean-payoff automaton.\label{figure:aut3}}
\end{figure}
We now show that $L_F$ is not expressible by min-max-sum mean-payoff automata.
Consider a set ${\cal A}=\set{A_1,A_2, \ldots, A_\ell}$ of deterministic automata such that each
$A_i$ is either a $\mathsf{LimInf}Avg$- or $\mathsf{LimSup}Avg$-automata.
We will consider min-max-sum mean-payoff automata constructed with the set ${\cal A}$ of
deterministic automata.
For simplicity of the proof we assume that for all $A_i \in {\cal A}$, the weights that
appear in $A_i$ are non-negative ($\geq 0$).
Let $\beta$ be the maximum value of the weights that appear
in any $A_i$, and let $|Q_M|$ be the maximum number of states in any
automata.
Given $k>0$, let $j= \ell(k)=\lceil 3 \cdot k \cdot |Q_M|\cdot \beta \rceil$
and consider the word $w=(b^j \cdot a)$.
We will show that any min-max-sum mean-payoff
automaton $A$ constructed from ${\cal A}$ with at most $z$-sum operators and
any number of max and min operators satisfies the following property:
\begin{itemize}
\item {\em Property.} There exists $i_0 \in \mathbb N$ such that for all $i \geq i_0$ we have
$A(w^i \cdot b^\omega) \leq |Q_M| \cdot A(w^\omega)+ \frac{2^z}{k}$.
\end{itemize}
We prove the above property by structural induction.
\begin{enumerate}
\item \emph{Base case.} The base case corresponds to the case of deterministic
automata $A_i \in {\cal A}$.
Consider $A_i \in {\cal A}$ and the run of $A_i$ on $w^\omega$.
Let $C$ be the set of states that appear infinitely often in the unique
run of $A_i$ on $w^\omega$.
Let $U=\set{c_1, c_2,\ldots, c_u} \subseteq C$ be the set of states in
$C$ that appear after input $w^i$, for some $i \in \mathbb N$.
We order the states of $U$ in a cycle as follows: if the current state is $c_i$ in $U$, and
the input word is $w$, then the state after reading $w$ from $c_i$ is
$c_{(i\mod u +1)}$.
Let $\alpha_1,\alpha_2,\ldots, \alpha_u$ be the mean-payoff values of the $b$-cycle
from $c_1,c_2,\ldots,c_u$, respectively (where a $b$-cycle is the cycle executed on the
inputs of only $b$).
Let $\alpha=\max \set{\alpha_1,\alpha_2,\ldots,\alpha_u}$, and let $\alpha= \alpha_t$,
where $t \in \set{1,2,\ldots,u}$.
Once the cycle $C$ is reached on input $w^{i_0}$, for some $i_0 \geq 0$,
then a state in $U$ is reached on input $w^i$ for all $i \geq i_0$.
It follows that there exists $i_0 \in \mathbb N$ such that for all $i \geq i_0$ we have
$A_i(w^i \cdot b^\omega) \leq \alpha$.
We now prove a lower bound on the average of the weights in the unique run of $A_i$
over $(b^j \cdot a)^{|U|}$ is as follows: in this segment, after some $(b^j \cdot a)^k$,
the state $c_t$ is reached, and after state $c_t$ is reached, on the word $b^j$,
the $b$-cycle from $c_t$ is executed at least $(j-2\cdot|Q_i|)$ times (with a prefix of size at most
$|Q_i|$ in the beginning to reach the cycle and suffix of length $|Q_i|$ in the end
that does not complete the cycle).
Since the $b$-cycle from $c_t$ is executed at least $(j-2\cdot|Q_i|)$ times,
the average reward of the $b$-cycle is $\alpha$, and all rewards are
non-negative, the average is at least
\[
\begin{array}{rcl}
\displaystyle
\frac{(j -2\cdot|Q_i|)\cdot\alpha}{(j+1) \cdot |U|}
& \geq &
\displaystyle
\frac{(j -2\cdot|Q_i|)\cdot\alpha}{(j+1) \cdot |Q_i|}
\geq \frac{\alpha}{|Q_i|} - \frac{(2\cdot|Q_i|+1)\cdot\alpha}{(j+1) \cdot |Q_i|}
\\[2ex]
& \geq &
\displaystyle
\frac{\alpha}{|Q_i|} - \frac{(3\cdot|Q_i|)\cdot\beta}{j \cdot |Q_i|}
\geq \frac{\alpha}{|Q_i|} - \frac{3\cdot\beta}{j}
\end{array}
\]
we used above that $|U| \leq |Q_i|$, $\alpha \leq \beta$, and $|Q_i| \geq 1$.
By the choice of $j$ we have that the $A_i((b^j \cdot a)^\omega) \geq \frac{\alpha}{|Q_M|} - \frac{1}{|Q_M|\cdot k}$.
The result follows.
\item \emph{Max case.}
Consider an automaton $A=\max(A_1,A_2)$ with $z$-sum operators. By inductive hypothesis the
property holds for $A_1$ and $A_2$ and we show the property holds for $A$.
WLOG, let $A(w^\omega)= A_1(w^\omega) \geq A_2(w^\omega)$.
By hypothesis,
there exists $i_0^1$ such that for all $i \geq i_0^1$ we have
$A_1(w^i \cdot b^\omega) \leq |Q_M|\cdot A_1(w^\omega) +\frac{2^z}{k}$ and
there exists $i_0^2$ such that for all $i \geq i_0^2$ we have
$A_2(w^i \cdot b^\omega) \leq |Q_M| \cdot A_2(w^\omega) +\frac{2^z}{k}
\leq |Q_M|\cdot A_1(w^\omega) +\frac{2^z}{k}$.
With the choice of $i_0=\max\set{i_0^1,i_0^2}$, for all $i \geq i_0$
we have $A(w^i \cdot b^\omega)
=\max(A_1(w^i \cdot b^\omega), A_2(w^i \cdot b^\omega))
\leq |Q_M|\cdot A_1(w^\omega) + \frac{2^z}{k}$.
Thus the property is proved for $A$.
\item \emph{Min case.}
Consider an automaton $A=\min(A_1,A_2)$ with $z$-sum operators.
By inductive hypothesis the property holds for $A_1$ and $A_2$ and
we show the property holds for $A$.
we show the property holds for $A$.
WLOG, let $A(w^\omega)= A_1(w^\omega) \leq A_2(w^\omega)$.
By hypothesis,
there exists $i_0^1$ such that for all $i \geq i_0^1$ we have
$A_1(w^i \cdot b^\omega) \leq |Q_M|\cdot A_1(w^\omega) +\frac{2^z}{k}$.
It follows that for all $i \geq i_0$ we have
$A(w^i \cdot b^\omega) \leq A_1(w^i \cdot b^\omega) \leq |Q_M|\cdot A(w^\omega) +\frac{2^z}{k}$.
Thus the property is proved for $A$.
\item \emph{Sum case.}
Consider an automaton $A=\mathrm{sum}(A_1,A_2)$ with $z$-sum operators.
By inductive hypothesis the property holds for $A_1$ and $A_2$ and we show the property holds for $A$.
By hypothesis,
there exists $i_0^1$ such that for all $i \geq i_0^1$ we have
$A_1(w^i \cdot b^\omega) \leq |Q_M|\cdot A_1(w^\omega) +\frac{2^{z-1}}{k}$ and
there exists $i_0^2$ such that for all $i \geq i_0^2$ we have
$A_2(w^i \cdot b^\omega) \leq |Q_M|\cdot A_2(w^\omega) +\frac{2^{z-1}}{k}$.
Let $i_0=\max\set{i_0^1,i_0^2}$, then for all $i \geq i_0$ we have
$A_1(w^i \cdot b^\omega) + A_2(w^i \cdot b^\omega) \leq |Q_M|\cdot A_1(w^\omega) +
\frac{2^{z-1}}{k} + |Q_M|\cdot A_2(w^\omega) + \frac{2^{z-1}}{k} =
|Q_M| \cdot A(w^\omega) + \frac{2^z}{k}$.
\end{enumerate}
It follows that the property holds for min-max-sum mean-payoff automata $A$
constructed from ${\cal A}$.
Given a min-max-sum mean-payoff automaton $A$ with $z$-sum operators we choose
$k > 2^{z+1}$, let $j=\ell(k)$, and consider the word $w=(b^j \cdot a)$.
Hence we have that there exists $i_0$ such that
for all $i \geq i_0$ we have $A(w^i \cdot b^\omega) \leq |Q_M|\cdot A(w^\omega) + \frac{1}{2}$.
Since $L_F(w^i \cdot b^\omega)=1$ and $L_F(w^\omega)=0$, it follows that $A$
does not express $L_F$.
The result follows.
\qed
\begin{comment}
We show that $L_F$ cannot be defined by finite $\max$ of \dla's to prove the
desired claim.
Consider $\ell$ deterministic \dla\/ $A_1,A_2, \ldots, A_\ell$, and
let $A=\max\set{A_1,A_2,\ldots,A_\ell}$.
Towards contradiction we assume that $A$ defines the language $L_F$.
We assume without loss of generality that every state $q_i \in Q_i$ is
reachable from the starting $q_i^0$ for $A_i$ by a finite word~$w_q^i$.
For every finite word $w_f$, we have $A(w_f\cdot b^\omega)=1$, and hence
it follows that for every finite word $w_f$, there must exist a component
automata $A_i$ such from the state $q_i$ reachable from $q_i^0$ by $w_f$,
the $b$-cycle $C_i$ reachable from $q_i$ (the $b$-cycle is the cycle that
can be executed with $b$'s only) satisfy the condition that sum of the
weights of the cycle is at least $|C_i|$.
Let $\beta$ be the maximum absolute value of the weights that appear
in any $A_i$, and let $|Q_M|$ be the maximum number of states in any
automata.
Let $j= \lceil 2 \cdot( 2\cdot |Q_M| + 2\cdot |Q_M| \cdot \beta + \beta +1) \rceil$
and consider the word $w=(b^j \cdot a)^\omega$.
Since for every finite word $w_f$, there exist an automaton $A_i$
such that the $b$-cycle reachable $C_i$ from the state after the word
$w_f$ has average weight at least~1, it follows that there exist
an automaton $A_i$ such that the $b$-cycle $C_i$ executed infinitely
often on the word $(b^j \cdot a)^\omega$ has average weight at least~1.
A lower bound on the average of the weights in the unique run of $A_i$
over $(b^j \cdot a)$ is as follows: consider the set of states that appear
infinitely often in the run, then it can have a prefix of length at most
$Q_i$ whose sum of weights is at least $-|Q_i|\cdot \beta$, then it goes through
$b$-cycle $C_i$ for at least
$j-2\cdot|Q_i|$ steps with sum of weights at least
$(j-2\cdot|Q_i|)$ (since the $b$-cycle $C_i$ have average weight at
least~1), then again a prefix of length at most $|Q_i|$ without completing
the cycle (with sum of weights at least $-|Q_i| \cdot \beta$),
and then weight for $a$ is at least $-\beta$. Hence the average is at least
\[
\frac{(j -2\cdot|Q_i|) - 2\cdot|Q_i| \cdot \beta -\beta }{j+1}
\geq 1 - \frac{2\cdot|Q_i| + 2\cdot|Q_i| \cdot \beta +\beta +1 }{j}
\geq 1 - \frac{1}{2} =\frac{1}{2};
\]
we used above by choice of $j$ we have
$\frac{2\cdot|Q_i| + 2\cdot|Q_i| \cdot \beta +\beta +1}{j} \leq \frac{1}{2}$ since
$|Q_i| \leq |Q_M|$.
Hence we have $A((b^j \cdot a)^\omega) \geq \frac{1}{2}$
contradicting that $A$ defines the language $L_F$.
\end{proof}
\end{comment}
\begin{proof}[Proof of Theorem~\ref{thrm_expressive_power}]
We prove the two assertions.
\begin{enumerate}
\item The results of~\cite{CDH-FCT09} show that there exist deterministic mean-payoff
automata $A_1$ and $A_2$ such that $\mathrm{sum}(A_1,A_2)$ cannot be expressed by alternating mean-payoff
automata.
Hence the result follows.
\item We now show that there exist quantitative languages expressible by
nondeterministic mean-payoff automata that cannot be expressed by mean-payoff
automaton expressions.
Consider the language $L_F$ of finitely many $a$'s, i.e., for an infinite word $w$
we have $L_F(w)=1$ if $w$ contains finitely many $a$'s, and $L_F(w)=0$ otherwise.
It is easy to see that the nondeterministic mean-payoff automaton (shown in
\figurename~\ref{figure:aut3}) defines $L_F$.
\begin{figure}
\caption{A nondeterministic limit-average automaton.\label{figure:aut3}}
\end{figure}
We now show that $L_F$ is not expressible by a mean-payoff automaton expression.
Towards contradiction, assume that the expression $E$ defines the language $L_F$,
and let $A_E$ be the synchronized product of the deterministic automata occurring
in $E$ (assume $A_E$ has $n$ states).
Consider a reachable bottom strongly connected component $V$ of the underlying graph
of $A_E$, and let $C$ be a $b$-cycle in $V$. We construct an infinite word $w$
with infinitely many $a$'s as follows: $(i)$ start with a prefix $w_1$ of length at most $n$ to reach $C$, $(ii)$
loop $k$ times through the $b$-cycle $C$ (initially $k=1$), $(iii)$ read an `$a$' and then a finite word
of length at most $n$ to reach $C$ again (this is possible since $C$ is in a bottom s.c.c.),
and proceed to step $(ii)$ with increased value of $k$.
The cycle $C$ corresponds to a cycle in each automaton of $E$, and
since the value of $k$ is increasing unboundedly, the value of $w$ in
each automaton of $E$ is given by the average of the weights along
their $b$-cycle after reading $w_1$. Therefore, the value of $w$ and the
value of $w_1 b^\omega$ coincide in each deterministic automaton of $E$.
As a consequence, their values coincide in $E$ itself. This is a contradiction
since $L_F(w)=0$ while $L_F(w_1 b^\omega) = 1$.
\end{enumerate}
\qed
\end{proof}
\end{document}
\begin{document}
\title{Examples for the Quantum Kippenhahn Theorem}
\author{Ben Lawrence}
\address{Department of Mathematics \\ University of Auckland \\
Auckland \\
New Zealand}
\email{[email protected]}
\date{31.08.2018}
\keywords{Free analysis; non-commutative algebra; linear pencil; double eigenvalues; Kippenhahn conjecture}
\subjclass{Primary 15A22, 15A42; Secondary 46L52, 90C22}
\thanks{This work is part of the author's PhD thesis written under the supervision of Igor Klep. Supported by the Marsden Fund Council of the Royal Society of New Zealand.}
\begin{abstract}
Semidefinite programming optimises a linear objective function over a spectrahedron, and is one of the major advances of mathematical optimisation. Spectrahedra are described by linear pencils, which are linear matrix polynomials with hermitian matrix coefficients. Our focus will be on dimension-free linear pencils where the variables are permitted to be hermitian matrices. A major question on linear pencils, and matrix theory in general, is Kippenhahn's Conjecture. The conjecture states that given a linear pencil $xH + yK$, if the hermitian matrices $H$ and $K$ generate the full matrix algebra, then the pencil must have at least one simple eigenvalue for some $x$ and $y$. The conjecture is known to be false, via a single counterexample due to Laffey. A dimension-free version of the conjecture, known as the Quantum Kippenhahn Theorem, has recently been proven true non-constructively. We present a novel family of counterexamples to Kippenhahn's Conjecture, and use this family to construct concrete examples of the Quantum Kippenhahn Theorem.
\end{abstract}
\maketitle
\section{Introduction}\label{intro}
Semidefinite programming is one of the major advances in mathematical optimisation of the last thirty years. It is widely used in quantitative science, in such fields as control theory, computational finance, signal processing, and fluid dynamics, among many other applications \cite{sdp, sdp_aspects}. It involves the optimisation of a linear objective function of several variables with respect to constraints represented by linear matrix inequalities on those variables. In many cases, after translation in the space of variables, a clean way of representing such constraints is with a \emph{linear pencil} and an associated \emph{linear matrix inequality}: \begin{defn}[Linear pencil and linear matrix inequality]\label{defn_pencil_LMI}
A \emph{linear pencil} is an expression of the form \begin{equation}\label{eqn_pencil_def}
L(x_{1},\ldots,x_{n}) = \sum_{i = 1}^{n} A_{i}x_{i}\end{equation} where the $A_{i}$ are hermitian matrices and the $x_{i} \in \mathbb{R}$ are variables. Linear pencils whose coefficients are hermitian matrices (not necessarily diagonal) give rise to linear matrix inequalities (LMIs). A \emph{linear matrix inequality} (LMI) is an expression of the form \begin{equation}\label{eqn_lmi_def}
I + L(x_{1},\ldots,x_{n}) \succcurlyeq 0,
\end{equation} where $L$ is a linear pencil and the relation $\succcurlyeq$ stands for `positive semi-definite', meaning non-negative eigenvalues.
\end{defn}
Linear pencils are
ubiquitous in matrix theory and numerical analysis
(e.g. the generalized eigenvalue problem),
and they frequently appear in (real) algebraic geometry (cf. \cite{vinnikov,NPT}). The use of hermitian matrices (as opposed to arbitrary matrices) in the definition ensures convexity, and a full set of real eigenvalues simplify consideration of such inequalities. Inequality \eqref{eqn_lmi_def} defines a feasible region called a \emph{spectrahedron} \cite{free_convex}. Spectrahedra are always convex and semialgebraic.
The boundary of a spectrahedron associated to a linear pencil $L$ satisfies $\mbox{Det}(I + L) = 0$. This boundary will be generically smooth, but may contain singularities such as sharp corners. The determinant of $L$ vanishes at each of the singularities. A linear function on the space of variables in which the spectrahedron lies will have a global direction of greatest increase and will have flat level sets. This means that the optimum of a linear function over the spectrahedron will often be found at one of the singularities. To see this, consider a generically smooth convex set, with some sharp extreme points, approaching a flat plane. In many orientations, it will collide first with one of its extreme points. It is therefore of interest to be able to identify singularities algebraically. However, simply looking for vanishing points of the gradient is not always sufficient, as the following example shows.
\begin{example}[Double circle]\label{ex_double_circle}
$$ I + L = \left( \begin{array}{cccc}
1-x & y & \, & \, \\
y & 1+x & \, & \, \\
\, & \, & 1 -y & x \\
\, & \, & x & 1+y
\end{array} \right) \succcurlyeq 0. $$
\end{example}This spectrahedron is a disc. Evaluating the determinant of the LMI will give us $\mbox{Det}( I + L) = (1 - x^{2} - y^{2})^{2}$, and setting this to zero gives us the boundary. However the presence of the square in the determinant means that the gradient vanishes at every point on the boundary, falsely giving the impression of singularities where there are none. However, $L$ is composed of two block diagonal submatrices, each of which determine the same feasible region. By recognizing this and selecting only one of these sub-matrices to define the LMI, we can define the same feasible region and correctly determine that there are no singularities on the boundary. In this instance, we were saved from purely algebraic singularities by $L$ being block diagonal, but will this always be true?
Our work was motivated by the 1951 conjecture of Rudolf Kippenhahn \cite{kip}:
\begin{conj}[Kippenhahn \cite{kip}] \label{kip}
Let $H, K$ be hermitian $2n \times 2n$ matrices, and let $f = \det(xH + yK + I) \in \mathbb{R}[x,y]$. Let $\mathcal{A}$ be the algebra generated by $H$ and $K$. If there exist $k \in \mathbb{N}, k \geq 2 $ and $ g \in \mathbb{C}[x,y]$ such that $ f = g^{k}$, then there is some unitary matrix $U$ such that $U^{*}(xH + yK)U$ is block diagonal, and thus $ \mathcal{A} \neq M_{2n}(\mathbb{C})$.
\end{conj}
Kippenhahn's conjecture can also be illustrated geometrically. Given a linear pencil $L=I+xH+yK$ as in Conjecture \ref{kip}, its determinant $f=\det L$ gives rise to the affine scheme $\mbox{Spec} \, \mathbb{C}[x,y]/(f)$. If $f = g^{k} $ with $k \geq 2 $, then this scheme is obviously nonreduced; see, for example, Chapter 5, Section 3.4 of \cite{shaf}. This conjecture is now known to be false. Our objective is to extend the understanding of the counterexamples to this conjecture. Kippenhahn originally gave a more general form of this conjecture where $f$ is permitted to be a product of more than one polynomial, and the matrices need not be of even order. Kippenhahn's conjecture linked the multiplicity of eigenvalues of a certain type of matrix polynomial to the algebra generated by the coefficients of that polynomial. In his paper \cite{kip}, Kippenhahn proved that his conjecture holds for $n \leq 2$. Shapiro extended the validity range of the conjecture in a series of 1982 papers. In the first of these papers \cite{shapiro1} she demonstrated that if $f$ has a linear factor of multiplicity greater than $n/3$, then the conjecture holds. This proves the more general form of the conjecture for the case of $3 \times 3$ matrices. The second paper \cite{shapiro2} shows that if $f = g^{n/2} $ where $g$ is quadratic, then the conjecture holds. This, combined with \cite{shapiro1}, proves the conjecture for matrices of order $4$ and $5$. The final paper \cite{shapiro3} showed that the conjecture holds if $f$ is a power of a cubic factor. This is sufficient to prove the conjecture for order $6$ matrices, that is, for $n = 3$ in the form we are interested in, where $f$ is a power of an irreducible polynomial, but not Kippenhahn's original more general form.
In his 1983 paper \cite{laffey}, Laffey disproved the Conjecture for $ n = 4$ with a single counterexample. In 1998 Li, Spitkovsky, and Shukla disproved Kippenhahn's more general form of the conjecture for $n = 3$ by constructing a family of counterexamples (the LSS counterexample) of the form $f = \mbox{det}(I + xH + yK) = g^{2}h $, where both $g$ and $h$ are quadratics \cite{li}.
In \cite{burnside_graphs} we introduced a theorem which allows for the construction of a class of one-parameter families of counterexamples of Kippenhahn's conjecture, for $n \geq 4 $. Here we give a simplified version of that theorem:
\begin{theorem}[Simplified constructibility theorem \cite{burnside_graphs}]\label{thm_construct_simple}
Let $H$ and $K$ be $2n \times 2n$ hermitian matrices over $\mathbb{C}$. Then if, in a basis in which $K$ is diagonal, with any equal diagonal entries being consecutive, \begin{enumerate}
\item $K$ has eigenvalues of multiplicity at most $2$,
\item the $2 \times 2$ blocks of $H$ lying across the top row are all invertible,
\item there exist distinct $2 \times 2$ blocks $H_{1i}$ and $H_{1j}$ lying in the top row of $H$ so that $H_{1i}H_{1i}^{*}$ and $H_{1j}H_{1j}^{*}$ do not commute,
\end{enumerate} then the algebra generated by $H$ and $K$ is $M_{2n}(\mathbb{C})$.
\end{theorem}
\subsection{Main results}
By giving criteria for a pair of matrices $H$ and $K$ to generate the full matrix algebra, this theorem provides an avenue for constructing counterexamples to Kippenhahn's Conjecture. The criteria are quite broad, and allow for various families of counterexamples. We will use Theorem \ref{thm_construct_simple} in Section \ref{sec_counter} to introduce a novel counterexample family with different properties from the family introduced in \cite{burnside_graphs}. Make the following definitions: $$\alpha = \left( \begin{array}{cc}
1 & 0 \\
0 & -1
\end{array} \right), \qquad \beta = \left( \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array} \right), \qquad U = \left( \begin{array}{cc}
0 & -1 \\
1 & 0
\end{array} \right).$$ The key properties of these $2 \times 2$ matrices are as follows: \begin{align*}
\alpha^{2} = \beta^{2} &= -U^{2} = I_{2}, \\
\alpha U + U \alpha = &0 = \beta U + U \beta,\\
\alpha \beta - \beta \alpha &= \left( \begin{array}{cc}
0 & 2 \\
-2 & 0
\end{array} \right).
\end{align*} Now let $n \geq 4$ be an integer, and define $q = \frac{n^{2}}{2} - 1 $. From the above matrices, build the following $2n \times 2n$ skew-symmetric matrices, where unspecified entries are zero: \begin{equation}\label{eqn_counter_A}
A_{(b)} = {\small \left( \begin{array}{ccccccc}
q U & \alpha & \alpha & \alpha & \alpha & \hdots & \alpha\\
-\alpha & (q+1)U & b \alpha + \beta & b \alpha \\
-\alpha & -b\alpha - \beta & (q+2)U & 0 \\
-\alpha & -b \alpha & 0 & (q+3)U \\
-\alpha & \, & \, & \, & (q+4)U \\
\vdots & \, & \, & \, & \, & \ddots \\
-\alpha & \, & \, & \, & \, & \, & (q+n-1)U\\
\end{array}\right) },
\end{equation}
\begin{equation}\label{eqn_counter_B}
B = {\small \left( \begin{array}{cccc}
U \\
\, & U \\
\, & \, & \ddots \\
\, & \, & \, & U
\end{array} \right) },
\end{equation} where $b \in \mathbb{R}, b \neq 0$. We then define \begin{equation}
H_{(b)} = A_{(b)}^{2} , \qquad K = A_{(b)}B + BA_{(b)},\end{equation} giving rise to the linear pencil \begin{equation}\label{ex_family_1}
L_{(b)}(x,y) = x H_{(b)} + y K,
\end{equation} where $x$ and $y$ are real and we have indicated dependence on the parameter $b$ using a subscript. Note that $B^{2} = -I$, and consider the expression $$xA_{(b)} + yB.$$ Since this is an even-order skew-symmetric matrix for all values of $x$ and $y$, its eigenvalues will come in imaginary conjugate pairs. Squaring the expression we have \begin{align*}
(xA_{(b)} + yB)^{2} &= x^{2}(A_{(b)})^{2} + xy(A_{(b)}B + BA_{(b)}) + y^{2}B^{2} \\
&= x^{2}H_{(b)} + xy K - y^{2}I,
\end{align*} which will have eigenvalues that are the squares of the eigenvalues of $xA_{(b)} + y B$. Since $xA_{(b)} + y B$ has imaginary conjugate pairs of eigenvalues, $x^{2}H_{(b)} + xy K - y^{2}I$ must have real negative eigenvalues of even multiplicity. Since adding a multiple of the identity only shifts eigenvalues by a constant, $x^{2}H_{(b)} + xy K$ must also have eigenvalues of even multiplicity, and with a shift of variables we can say the same for $xH_{(b)} + y K$. This claim is rigorously proven as Lemma \ref{lemma_eigenpairs} in Section \ref{sec_counter}. Care needs to be taken for the case where $y = 0$. Evaluating $K$, we have \begin{equation}\label{eqn_K}
K = {\small \left( \begin{array}{cccc}
-2qI_{2} & \\
\, & -2(q+1)I_{2} \\
\, & \, & \ddots \\
\, & \, & \, & -2(q+n-1)I_{2}
\end{array} \right) }.\end{equation} This structure emerges from the properties of $\alpha$ and $\beta$ and the way in which they cancel out in the anti-commutator of $A_{(b)}$ and $B$. Notice how the non-zero $2 \times 2$ blocks of $K$ are simply the identity times the coefficients of the diagonal blocks of $A_{(b)}$, with a factor of $-2$. This ties in with the first condition of Theorem \ref{thm_construct_simple}. In Section \ref{sec_counter}, we will show that $H_{(b)}$ and $K$ as specified satisfy the requirements of Theorem \ref{thm_construct_simple}, and therefore span the full matrix algebra and violate Kippenhahn's conjecture. We summarise these results into the following theorem:
\begin{theorem}[New Counterexample Family]\label{thm_counter}
Let $b \neq 0$ be a real number, and let $n \geq 4$ be an integer. Then let $2n \times 2n$ skew-symmetric matrices $A_{(b)}$ and $B$ be as defined in Equations \eqref{eqn_counter_A} and \eqref{eqn_counter_B}. Let $H_{(b)} = (A_{(b)})^{2}$ and $K = A_{(b)}B + BA_{(b)}$. Then the linear pencil $$L_{(b)} = x H_{(b)} + y K, $$ where $x, y \in \mathbb{R}$, has the following properties: \begin{enumerate}
\item All of the eigenvalues of $L_{(b)}$ have even multiplicity,
\item $H_{(b)}$ and $K$ generate the full matrix algebra $M_{2n}(\mathbb{C})$,
\end{enumerate} for all $b \neq 0$ and for all $x, y \in \mathbb{R}$. Therefore $H_{(b)}$ and $K$ violate Kippenhahn's Conjecture.
\end{theorem}
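The algebraic identities underlying Theorem \ref{thm_counter} can be checked numerically. The following sketch (illustration only, for the sample values $n=4$ and $b=1$, so $q=7$) builds $A_{(b)}$ and $B$ from their block descriptions and verifies that $A_{(b)}$ is skew-symmetric, that $B^{2}=-I$, that $K$ has the block-diagonal form \eqref{eqn_K}, and that $(xA_{(b)}+yB)^{2}=x^{2}H_{(b)}+xyK-y^{2}I$:

```python
# Sanity check of the construction for n = 4, b = 1 (so q = 7).
n, b = 4, 1
q = n * n // 2 - 1

alpha = [[1, 0], [0, -1]]
beta = [[0, 1], [1, 0]]
U = [[0, -1], [1, 0]]

def scale(c, M):
    return [[c * M[r][t] for t in range(2)] for r in range(2)]

def add2(M, P):
    return [[M[r][t] + P[r][t] for t in range(2)] for r in range(2)]

def block(i, j):
    # 2x2 block (i, j) of A_(b), read off the displayed matrix.
    if i == j:
        return scale(q + i, U)
    if i == 0:
        return alpha
    if j == 0:
        return scale(-1, alpha)
    if (i, j) == (1, 2):
        return add2(scale(b, alpha), beta)
    if (i, j) == (2, 1):
        return scale(-1, add2(scale(b, alpha), beta))
    if (i, j) == (1, 3):
        return scale(b, alpha)
    if (i, j) == (3, 1):
        return scale(-b, alpha)
    return [[0, 0], [0, 0]]

N = 2 * n
A = [[block(i // 2, j // 2)[i % 2][j % 2] for j in range(N)] for i in range(N)]
B = [[U[i % 2][j % 2] if i // 2 == j // 2 else 0 for j in range(N)]
     for i in range(N)]
I = [[1 if i == j else 0 for j in range(N)] for i in range(N)]

def mul(M, P):
    return [[sum(M[i][k] * P[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def lin(*terms):  # integer linear combination of N x N matrices
    return [[sum(c * M[i][j] for c, M in terms) for j in range(N)]
            for i in range(N)]

H = mul(A, A)
K = lin((1, mul(A, B)), (1, mul(B, A)))

assert all(A[i][j] == -A[j][i] for i in range(N) for j in range(N))  # A skew
assert mul(B, B) == lin((-1, I))                                     # B^2 = -I
# K is block diagonal with blocks -2(q+i) I_2, as displayed in the text.
assert K == [[-2 * (q + i // 2) * (1 if i == j else 0) if i // 2 == j // 2
              else 0 for j in range(N)] for i in range(N)]
x, y = 3, 2
S = lin((x, A), (y, B))
assert mul(S, S) == lin((x * x, H), (x * y, K), (-y * y, I))  # squaring identity
```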
The first key property to note is that the diagonal entries of $A_{(b)}$ are all different multiples of $U$. This ensures that $K$ has eigenvalues of multiplicity no more than two. The coefficients based on the constant $q$ have a double purpose: the first is to establish the proper structure of $K$, and the second is to control the eigenvalues of $H_{(b)}$ and $K$ in a specific way. The precise meaning of this statement will be expanded upon in the latter part of this introduction, where we discuss quantisation.
The second key property of this family of counterexamples is the presence of non-commuting blocks. The matrix $A_{(b)}$, from which $H_{(b)}$ is built, is filled almost entirely off the diagonal with $\alpha$, except for a $\beta$ which is placed at a specific location. To get an impression of what is happening here, consider the upper-left $8 \times 8$ block of $A_{(b)}$: $$ {\small \left( \begin{array}{cccc}
q U & \alpha & \alpha & \alpha \\
-\alpha & (q+1)U & b \alpha + \beta & b \alpha \\
-\alpha & -b\alpha - \beta & (q+2)U & 0 \\
-\alpha & -b \alpha & 0 & (q+3)U
\end{array}\right) }.$$ When $A_{(b)}$ is squared to give $H_{(b)}$, observe what happens when the second row of blocks of $A_{(b)}$ is multiplied by the third and fourth columns of blocks, respectively. In the first instance, the $b \alpha$ term in the row lines up with a zero and vanishes, and in the second instance it is the $b \alpha + \beta$ term which vanishes. Simplifying using $\alpha^{2} = I_{2}$, we are left with non-commuting blocks in the second row of $H_{(b)}$. After symmetric permutation of rows and columns, all of the conditions for Theorem \ref{thm_construct_simple} are met. Intuitively, this $ 8 \times 8$ block contains a minimal amount of structure necessary to create the conditions for Theorem \ref{thm_construct_simple} to operate. Other such minimal counterexamples can be constructed \cite{burnside_graphs}. The single $b \alpha + \beta$ block is there to inject enough non-commutativity to enable $M_{2}(\mathbb{C})$ to be generated at a certain block-position, which is then dispersed all around the matrix. A little inspection will show that trying to perform this same construction with a $6 \times 6$ block will not work; there will not be enough room.
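The non-commuting blocks described here can be exhibited concretely. The following sketch (illustration only, with the sample values $n=4$ and $b=1$) squares $A_{(b)}$ and checks that two $2 \times 2$ blocks in the second block row of $H_{(b)}$ fail to commute:

```python
# Squaring A_(1) for n = 4 (q = 7) and extracting two 2x2 blocks from the
# second block row of H = A^2; the two blocks fail to commute, as claimed.
n, b, q = 4, 1, 7  # q = n^2/2 - 1

alpha, beta, U = [[1, 0], [0, -1]], [[0, 1], [1, 0]], [[0, -1], [1, 0]]

def scale(c, M):
    return [[c * M[r][t] for t in range(2)] for r in range(2)]

def block(i, j):
    # 2x2 block (i, j) of A_(b), read off the displayed matrix.
    ab = [[b * alpha[r][t] + beta[r][t] for t in range(2)] for r in range(2)]
    if i == j:
        return scale(q + i, U)
    if i == 0:
        return alpha
    if j == 0:
        return scale(-1, alpha)
    special = {(1, 2): ab, (2, 1): scale(-1, ab),
               (1, 3): scale(b, alpha), (3, 1): scale(-b, alpha)}
    return special.get((i, j), [[0, 0], [0, 0]])

N = 2 * n
A = [[block(i // 2, j // 2)[i % 2][j % 2] for j in range(N)] for i in range(N)]
H = [[sum(A[i][k] * A[k][j] for k in range(N)) for j in range(N)]
     for i in range(N)]

def hblock(i, j):  # 2x2 block (i, j) of H
    return [[H[2 * i + r][2 * j + t] for t in range(2)] for r in range(2)]

def mul2(M, P):
    return [[sum(M[r][k] * P[k][t] for k in range(2)) for t in range(2)]
            for r in range(2)]

P, Q = hblock(1, 2), hblock(1, 3)
assert mul2(P, Q) != mul2(Q, P)  # the two blocks do not commute
```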
\subsection{Quantisation}
Many physical situations can be modelled by an LMI with the real variables replaced with non-commutative variables. Most often the variables of interest are hermitian matrices, due to their mathematical richness and physical relevance. The term \emph{free analysis} is used for the study of such LMIs; it refers to `dimension-free' since the dimension of the variables as hermitian matrices is usually left unspecified. Free analysis is widely used in operator algebra, mathematical physics, and quantum information theory \cite{operator_algebras}. See \cite[Chapter~8]{free_convex} for a survey of this material. Further information about free LMIs may be found in \cite{matricial_relaxation}. The process of replacing commutative variables with non-commutative variables is referred to as \emph{quantisation} \cite{matricial_relaxation}.
In Section \ref{sec_quantise} we investigate the application of quantisation to the Kippenhahn conjecture. Applied to a linear pencil of the form $L = xH + yK$, as in the Kippenhahn conjecture, quantisation gives rise to $$L(X,Y) = X \otimes H + Y \otimes K, $$ where $X$ and $Y$ are hermitian matrices. In this dimension-free setting Kippenhahn's conjecture has been proven to hold true as Corollary 5.6 of \cite{kv}:
\begin{theorem}[Quantum Kippenhahn Theorem]\label{thm_quant_kipp_intro}
If $A_{1},...,A_{g}$ generate $M_{d}(\mathbb{C}) $ as a $\mathbb{C}$-algebra, then there exist $n \in \mathbb{N}$ and $X_{1},...,X_{g} \in M_{n}(\mathbb{C}) $ such that $$\mbox{\rm Dim Ker} \left( I_{nd} + \sum_{i=1}^{g} X_{i} \otimes A_{i} \right) = 1. $$
\end{theorem}
The implication of this result is that given any counterexample $H$ and $K$ to Kippenhahn's conjecture in the commutative setting, some hermitian matrices $X$ and $Y$ exist in the free setting for which $H$ and $K$ are no longer a counterexample, that is, $X \otimes H + Y \otimes K $ does not have a square characteristic polynomial. The proof of the theorem is not constructive: it shows that such $X$ and $Y$ must exist, but does not identify them. We have been able to improve on this situation. For all of the known counterexamples to Kippenhahn's Conjecture, including our own, we have explicitly identified $2 \times 2 $ hermitian matrices $X$ and $Y$ which successfully quantise the counterexamples and restore Kippenhahn's conjecture in a dimension-free setting.
\begin{theorem}[Counterexample Quantisation]\label{thm_quant}
Let $H_{(b)}$ and $K$ be as in Theorem \ref{thm_counter}, of any order $n$, and let $$X = \left( \begin{array}{cc}
1 & 0 \\
0 & 0
\end{array} \right), \qquad Y = \left( \begin{array}{cc}
0 & 1/q^{2} \\
1/q^{2} & 1
\end{array} \right), $$ where $q = \frac{(n+2)(n-2)}{2} + 1$. Then $X \otimes H_{(b)} + Y \otimes K $ has at least six simple eigenvalues at almost all parameter values $b$, and thus tightens the Quantum Kippenhahn Theorem with respect to a particular family of counterexamples.
\end{theorem}
Theorem \ref{thm_quant} is the main result of Section \ref{sec_quantise}, and will be proven there. We finish this paper in Sections \ref{sec_LSS} and \ref{sec_laffey} with a short discussion of the quantisation of Laffey's counterexample and Li, Spitkovsky, and Shukla's counterexample to the more general form of the conjecture. With these results, the theorems of Klep and Vol\v ci\v c in \cite{kv} are given concrete expression.
\section{A novel counterexample to Kippenhahn's Conjecture}\label{sec_counter}
In this section we will construct a novel family of counterexamples to Kippenhahn's Conjecture as described in Theorem \ref{thm_counter}. This family will be similar in structure to that constructed in \cite{burnside_graphs}, but will have different values along the diagonal, and a slightly different off-diagonal structure. Theorem \ref{thm_construct_simple} allows a fair amount of flexibility in the structure of the counterexamples it generates, and we have chosen the following structure so as to produce nice eigenvalue behaviour. The specifics of this will become clear in Section \ref{sec_quantise}. First we will state and prove the double eigenvalue property referred to in the Introduction:
\begin{lemma}[Double eigenvalue lemma]\label{lemma_eigenpairs}
Let $A$ and $B$ be real skew-symmetric matrices of order $2n$, and let $B^{2} = -I$. Define $H = A^{2}$ and $K = AB + BA$. Then, for all $x, y \in \mathbb{R}$, $xH + yK$ has eigenvalues only of even multiplicity.
\end{lemma}
\begin{proof}
Consider first $Ax + By$. Take some $x_0$ and $y_0$ in $\mathbb{R}$, where $x_0 \neq 0$. The eigenvalues of the real skew-symmetric matrix $Ax_0 + By_0$ will be purely imaginary and will exist in conjugate pairs. Note that a given pair may appear more than once. Denote such pairs by $\pm i \lambda_{k}$, where $\lambda_{k}$ is real and $k$ ranges from 1 to $n$.
Then the eigenvalues of $(Ax_0 + By_0)^{2}$ will be $-\lambda^{2}_{k}$, obviously coming in pairs. Since the same pair of eigenvalues of $Ax_0 + By_0$ may occur several times, we cannot say for sure that each $-\lambda^{2}_{k}$ has multiplicity 2, but we can be sure that it has even multiplicity. Let $v_{k}$ be an eigenvector of $Ax_0 + By_0$ with eigenvalue $i \lambda_{k}$, and $w_{k}$ be an eigenvector of $Ax_0 + By_0$ with eigenvalue $-i \lambda_{k}$. Since $v_{k}$ and $w_{k}$ belong to different eigenspaces of $Ax_0 + By_0$, the subspace $\mbox{span} \lbrace v_{k}, w_{k} \rbrace$ which they generate is two-dimensional.
Then $(Ax_0 + By_0)^{2} v_{k} = -\lambda^{2}_{k} v_{k} $ and $(Ax_0 + By_0)^{2} w_{k} = -\lambda^{2}_{k} w_{k} $. But $A^{2} = H$, $AB + BA = K$, and $B^{2} = -I_{2n}$, so we have $$ (Ax_0 + By_{0} )^{2} = Hx_{0}^{2} + Kx_{0}y_{0} - I_{2n}y_{0}^{2}, $$ so $$(Hx_{0}^{2} + Kx_{0}y_{0}) v_{k} =(y_{0}^{2} -\lambda^{2}_{k}) v_{k}. $$ Dividing through by $x_{0} \neq 0$, we have that $$(Hx_{0} + Ky_{0}) v_{k} =\frac{1}{x_{0}}(y_{0}^{2} -\lambda^{2}_{k}) v_{k}. $$ Therefore, $v_{k}$ is an eigenvector of $Hx_{0} + Ky_{0}$ with eigenvalue $\frac{1}{x_{0}}(y_{0}^{2} -\lambda^{2}_{k})$. Repeating this process for $w_{k}$, we see that
$w_{k}$ is also an eigenvector of $Hx_{0} + Ky_{0}$ with eigenvalue $\frac{1}{x_{0}}(y_{0}^{2} -\lambda^{2}_{k})$. Therefore $v_k$ and $w_k$ span a two-dimensional eigenspace of $Hx_{0} + Ky_{0}$.
For the case where $x_{0} = 0$, we simply have $K y_{0}$, which has paired eigenvalues due to the continuity of eigenvalues with respect to $x_{0}$ and $y_{0}$ \cite{matrix_analysis}.
Therefore, for every $x_{0}$ and $y_{0}$ in $\mathbb{R}$, $Hx_0 + Ky_0$ has eigenvalues all of even multiplicity.
\end{proof}
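The identity at the heart of this proof can be spot-checked numerically. The sketch below (hypothetical, pure Python, not part of the paper) draws a random integer skew-symmetric $A$, takes $B = I_{n} \otimes U$ so that $B^{2} = -I$, and verifies $(Ax + By)^{2} = Hx^{2} + Kxy - Iy^{2}$ at sample values of $x$ and $y$.

```python
# Hypothetical spot-check of the identity (Ax + By)^2 = Hx^2 + Kxy - Iy^2
# for skew-symmetric A, B with B^2 = -I, where H = A^2 and K = AB + BA.
import random

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lin(c1, X, c2, Y):  # entrywise c1*X + c2*Y
    return [[c1 * x + c2 * y for x, y in zip(r1, r2)] for r1, r2 in zip(X, Y)]

random.seed(0)
n = 3
N = 2 * n
M = [[random.randint(-3, 3) for _ in range(N)] for _ in range(N)]
A = [[M[i][j] - M[j][i] for j in range(N)] for i in range(N)]  # skew-symmetric
U = [[0, -1], [1, 0]]
B = [[U[i % 2][j % 2] if i // 2 == j // 2 else 0 for j in range(N)]
     for i in range(N)]  # B = I_n tensor U
I = [[int(i == j) for j in range(N)] for i in range(N)]

assert matmul(B, B) == lin(-1, I, 0, I)  # B^2 = -I

H = matmul(A, A)
K = lin(1, matmul(A, B), 1, matmul(B, A))
x, y = 2, 5
lhs = matmul(lin(x, A, y, B), lin(x, A, y, B))
rhs = lin(1, lin(x * x, H, x * y, K), -y * y, I)
print(lhs == rhs)  # the identity holds
```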
\subsection{Definition of the counterexample family}
Theorem \ref{thm_construct_simple} gives fairly wide freedom in constructing counterexamples to Kippenhahn's Conjecture. Our objective will be to construct a counterexample with well separated pairs of eigenvalues, and then show how these eigenvalue pairs are `split' by a suitable choice of quantisation. We will ensure the separation of pairs of eigenvalues by making the counterexample diagonally dominant in a specific way.
Take an integer $n \geq 4$ and a real number $b \neq 0$, and let $q = \frac{n^{2}}{2} - 1$. As before, set $$\alpha = \left( \begin{array}{cc}
1 & 0 \\
0 & -1
\end{array} \right) \mbox{, } \beta = \left( \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array} \right) \mbox{ and } U = \left( \begin{array}{cc}
0 & -1 \\
1 & 0
\end{array} \right).$$ Now define $2n \times 2n$ matrices $A_{(b)}$ as in Equation \eqref{eqn_counter_A} and $ B = I_{n} \otimes U$.
The reason for the specific choice of $q = \frac{n^{2}}{2} - 1$ will become clear when we set bounds on the eigenvalues of the system in Section \ref{sec_quantise}, as will the reason for isolating $q$ into a variable of its own rather than simply expressing the diagonal entries in terms of $n$. Define symmetric matrices $$H_{(b)} = (A_{(b)})^{2}, \quad K = A_{(b)}B + BA_{(b)}. $$ Lemma \ref{lemma_eigenpairs} allows us to conclude that $xH_{(b)} + yK$ has eigenvalues all of even multiplicity, so its characteristic polynomial is of the form $f = g^{2}$. We have thus met the first condition for a counterexample to Kippenhahn's conjecture.
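The full matrix from Equation \eqref{eqn_counter_A} is not reproduced in this excerpt, so the constructor in the sketch below is a reconstruction inferred from the displayed $8 \times 8$ corner and the $b = 0$ form shown later in Section \ref{sec_quantise}; it is an assumption of this note, not the authors' definition. It checks two facts used in the text: $A_{(b)}$ is skew-symmetric, and $K = A_{(b)}B + BA_{(b)}$ is independent of $b$.

```python
# Hypothetical reconstruction of A_(b) (an assumption of this sketch, inferred
# from the displayed blocks), checking skew-symmetry and the b-independence of K.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(X, Y)]

ALPHA = [[1, 0], [0, -1]]
BETA = [[0, 1], [1, 0]]
U = [[0, -1], [1, 0]]

def build_A(n, b):
    q = n * n // 2 - 1  # q = n^2/2 - 1; we take n even so q is an integer
    Z = [[0, 0], [0, 0]]
    blk = [[Z for _ in range(n)] for _ in range(n)]
    for i in range(n):
        blk[i][i] = [[(q + i) * u for u in row] for row in U]  # (q+i)U
    for i in range(1, n):
        blk[0][i] = ALPHA
        blk[i][0] = [[-a for a in row] for row in ALPHA]
    ba = [[b * a for a in row] for row in ALPHA]
    blk[1][2] = add(ba, BETA)  # b*alpha + beta
    blk[1][3] = ba             # b*alpha
    blk[2][1] = [[-x for x in row] for row in blk[1][2]]
    blk[3][1] = [[-x for x in row] for row in blk[1][3]]
    return [[blk[i // 2][j // 2][i % 2][j % 2] for j in range(2 * n)]
            for i in range(2 * n)]

n = 6
q = n * n // 2 - 1
B = [[U[i % 2][j % 2] if i // 2 == j // 2 else 0 for j in range(2 * n)]
     for i in range(2 * n)]  # B = I_n tensor U

Ks = []
for b in (0, 3):
    A = build_A(n, b)
    N = 2 * n
    assert all(A[i][j] == -A[j][i] for i in range(N) for j in range(N))
    Ks.append(add(matmul(A, B), matmul(B, A)))

H0 = matmul(build_A(n, 0), build_A(n, 0))
assert H0[0][0] == -q * q - n + 1  # matches the diagonal entry -q^2 - n + 1
print(Ks[0] == Ks[1])  # K has no dependence on b
```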
\subsection{The matrix algebra generated}
We will show that each of the three conditions of Theorem \ref{thm_construct_simple} applies to $H_{(b)}$ and $K$. \begin{enumerate}
\item Recall $K$ from Equation \eqref{eqn_K}. Clearly the first condition for Theorem \ref{thm_construct_simple} is satisfied.
\item Let us evaluate the $2 \times 2$ blocks of the top row and the second row of $H_{(b)}$. First the top row: $$\left( \begin{array}{cc}
-q^2-(n-1) & 0\\
0 & -q^2-(n-1)
\end{array} \right), \left( \begin{array}{cc}
-2 b & -2\\
0 & -2 b
\end{array} \right), \left( \begin{array}{cc}
b & -1\\
-3 & b
\end{array} \right), $$ $$ \left( \begin{array}{cc}
b & -3 \\
-3 & b
\end{array} \right), \left( \begin{array}{cc}
0 & -4 \\
-4 & 0
\end{array} \right),...,\left( \begin{array}{cc}
0 & -n+1 \\
-n+1 & 0
\end{array} \right). $$ Each of these blocks is invertible for all permitted values of $b$, that is $b \neq 0$, except for $b = \pm 3, \pm \sqrt{3}$. Now consider the second row:
$$\left( \begin{array}{cc}
-2 b & 0\\
-2 & -2 b
\end{array} \right), \left( \begin{array}{cc}
-2 b^2-q^2-2 q-3 & 0\\
0 & -2 b^2-q^2-2 q-3
\end{array} \right), \left( \begin{array}{cc}
0 & -b\\
-b & -2
\end{array} \right), $$ $$ \left( \begin{array}{cc}
-1 & -2 b \\
-2 b & -1
\end{array} \right), \left( \begin{array}{cc}
-1 & 0 \\
0 & -1
\end{array} \right),...,\left( \begin{array}{cc}
-1 & 0 \\
0 & -1
\end{array} \right). $$ Each of these blocks is invertible for all permitted values of $b$, except for $b = \pm \frac{1}{2}$. Therefore for any allowed value of $b$, at least one of these rows has all invertible blocks. By simultaneous symmetric permutation of $H_{(b)}$ and $K$, we can place the invertible row at the top. Such simultaneous symmetric permutation will not affect the algebra generated by $H_{(b)}$ and $K$, nor will it affect Condition 1 on $K$. Therefore $H_{(b)}$ satisfies Condition 2 in general.
\item Take the third and fourth blocks of the first row of $H_{(b)}$: $$H_{13} = \left( \begin{array}{cc}
b & -1\\
-3 & b
\end{array} \right), H_{14} = \left( \begin{array}{cc}
b & -3 \\
-3 & b
\end{array} \right). $$ Then the commutator of $H_{13}H_{13}^{T}$ and $H_{14}H_{14}^{T}$ is $\left(
\begin{array}{cc}
0 & 48 b \\
-48 b & 0 \\
\end{array}
\right)$, which is non-zero for every permitted value of $b$. The commutator of the products of the third and fourth blocks of the second row with their own transposes likewise evaluates to $\left(
\begin{array}{cc}
0 & -16 b \\
16 b & 0 \\
\end{array}
\right)$, which is likewise non-zero for $b \neq 0$. So no matter which row we have placed at the top in step 2, Condition 3 is satisfied.
\end{enumerate}
We can therefore conclude that $H_{(b)}$ and $K$ violate the Kippenhahn Conjecture.
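The determinant and commutator computations behind Conditions 2 and 3 are easy to reproduce by machine. The sketch below (hypothetical, pure Python, at the sample value $b = 1$) recomputes them.

```python
# Hypothetical spot-check of the 2x2 computations in Conditions 2 and 3, at b = 1.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def comm(X, Y):  # commutator XY - YX
    P, Q = matmul(X, Y), matmul(Y, X)
    return [[p - q for p, q in zip(r1, r2)] for r1, r2 in zip(P, Q)]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

b = 1
# Condition 2: the exceptional values of b come from 2x2 determinants
H13 = [[b, -1], [-3, b]]
H14 = [[b, -3], [-3, b]]
assert det(H13) == b * b - 3      # singular only at b = +/- sqrt(3)
assert det(H14) == b * b - 9      # singular only at b = +/- 3
H23 = [[0, -b], [-b, -2]]
H24 = [[-1, -2 * b], [-2 * b, -1]]
assert det(H24) == 1 - 4 * b * b  # singular only at b = +/- 1/2

# Condition 3: the two commutators computed in the text,
# [[0, 48b], [-48b, 0]] and [[0, -16b], [16b, 0]] at b = 1
c1 = comm(matmul(H13, transpose(H13)), matmul(H14, transpose(H14)))
c2 = comm(matmul(H23, transpose(H23)), matmul(H24, transpose(H24)))
print(c1, c2)
```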
\section{Quantisation of the new counterexample}\label{sec_quantise}
As mentioned in the Introduction, quantisation \cite{matricial_relaxation} of a linear pencil is the process of replacing the commutative variables $x_{i}$ with non-commutative variables $X_{i}$, typically hermitian matrices, to give an expression of the form $$L(X) = \sum_{i} X_{i} \otimes A_{i}.$$ Such quantised pencils arise particularly in coupled linear systems.
Quantising a linear pencil such as in the Kippenhahn conjecture, we have $$L(X,Y) = X \otimes H + Y \otimes K, $$ where $X$ and $Y$ are hermitian matrices. In this setting Kippenhahn's conjecture has been proven to hold true in an arbitrary number of variables as Corollary 5.6 of \cite{kv}, stated as Theorem \ref{thm_quant_kipp_intro} in the Introduction. This Quantum Kippenhahn Theorem holds true for some hermitian $X$ and $Y$, but the proof does not identify such $X$ and $Y$. We have been able to improve on this situation. For the counterexample introduced in Section \ref{sec_counter}, we have explicitly identified $2 \times 2 $ hermitian matrices $X$ and $Y$ which successfully quantise the counterexample and restore Kippenhahn's conjecture in a dimension-free setting. Our approach will be to note that the entries of $L_{(b)}$ vary smoothly with $b$, and set $b$ to zero. We will then use Gershgorin's Disc Theorem and the elementary symmetric polynomials to show the desired result for $b = 0$. Finally, we will use results from perturbation theory \cite{kato} to extend this result to almost all values of $b$.
To begin with, let us reconsider $A_{(b)}$ as stated in Equation \eqref{eqn_counter_A}. Notice that each entry of $A_{(b)}$ is at most linear in $b$, and recall that $q$ is fixed depending on $n$. Now set $b = 0$. It will be recalled that $b = 0$ is specifically excluded from the valid range of the counterexample, but we may still consider it as a way to inspect the behaviour of the eigenvalues of $L_{(b)} = X \otimes H_{(b)} + Y \otimes K$. Recall that $K$ does not have any dependence on $b$.
With $b$ set to zero, we have $$A_{(0)} = {\footnotesize \left( \begin{array}{ccccccc}
q U & \alpha & \alpha & \alpha & \alpha & \hdots & \alpha\\
-\alpha & (q+1)U & \beta & 0 \\
-\alpha & - \beta & (q+2)U & 0 \\
-\alpha & 0 & 0 & (q+3)U \\
-\alpha & \, & \, & \, & (q+4)U \\
\vdots & \, & \, & \, & \, & \ddots \\
-\alpha & \, & \, & \, & \, & \, & (q+n-1)U\\
\end{array}\right) }.$$ Notice how each $ 2 \times 2$ block is either purely diagonal (the $\alpha$ blocks) or purely off-diagonal (the $U$ and $\beta$ blocks). Therefore, we can perform a symmetric permutation of the rows and columns of $A_{(0)}$, and likewise of $B$, rearranging the $2n \times 2n$ matrix $A_{(0)}$ into four $n \times n$ blocks. This permutation splits the entries of the $\alpha$ blocks across the two diagonal blocks and the entries of $U$ and $\beta$ across the two off-diagonal blocks. This is simply a change of basis order, and by an abuse of notation we may continue to refer to the altered matrix as $A_{(0)}$: $$ {\footnotesize \left( \begin{array}{ccccc;{6pt/4pt}ccccc}
0 & 1 & 1 & \hdots & 1 & -q & 0 & 0 & \hdots & 0 \\
-1 & \, & \, & \, & \, & 0 & -(q+1) & 1 & \, & \, \\
-1 & \, & \, & \, & \, & 0 & -1 & -(q+2) & \, & \, \\
\vdots & \, & \, & \, & \, & \vdots & \, & \, & \ddots & \, \\
-1 &\, & \, & \, & \, & 0 & \, & \, & \, & -(q+n-1) \\ \hdashline[6pt/4pt]
q & 0 & 0 & \hdots & 0 & 0 & -1 & -1 & \hdots & -1\\
0 & (q+1) & 1 & \, & \, & 1 & \, & \, & \, & \,\\
0 & -1 & (q+2) & \, & \, & 1 & \, & \, & \, & \,\\
\vdots & \, & \, & \ddots & \, & \vdots & \, & \, & \, & \,\\
0 & \, & \, & \, & (q+n-1) & 1 &\, & \, & \, & \,
\end{array} \right). }$$ Now we can symmetrically swap the $1^{st}$ and $(n+1)^{th}$ row, and likewise the $1^{st}$ and $(n+1)^{th}$ column, to get $$ {\footnotesize \left( \begin{array}{ccccc;{6pt/4pt}ccccc}
\, & \, & \, & \, & \, & q & -1 & -1 & \hdots & -1 \\
\, & \, & \, & \, & \, & -1 & -(q+1) & 1 & \, & \, \\
\, & \, & \, & \, & \, & -1 & -1 & -(q+2) & \, & \, \\
\, & \, & \, & \, & \, & \vdots & \, & \, & \ddots & \, \\
\, &\, & \, & \, & \, & -1 & \, & \, & \, & -(q+n-1) \\ \hdashline[6pt/4pt]
-q & 1 & 1 & \hdots & 1 & \, & \, & \, & \, & \,\\
1 & (q+1) & 1 & \, & \, & \, & \, & \, & \, & \,\\
1 & -1 & (q+2) & \, & \, & \, & \, & \, & \, & \,\\
\vdots & \, & \, & \ddots & \, & \, & \, & \, & \, & \,\\
1 & \, & \, & \, & (q+n-1) & \, &\, & \, & \, & \,
\end{array} \right). }$$
It is important to note that, with these symmetric permutations, the eigenvalues of $H_{(0)}$ and $K$ and any linear pencils thereof, as well as the algebra generated by $H_{(0)}$ and $K$, are not affected. Therefore we can freely adjust $H_{(0)}$ and $K$ to suit our purposes, so long as the permutations are symmetric and applied equally to both $H_{(0)}$ and $K$, or $A_{(0)}$ and $B$.
We can look at $A_{(0)}$ as being composed of four blocks $$A_{(0)} = \left( \begin{array}{cc}
0 & A_{1} \\
-A_{1}^{T} & 0
\end{array} \right), $$ and therefore we will have $$H_{(0)} = \left( \begin{array}{cc}
-A_{1}A_{1}^{T} & 0 \\
0 & -A_{1}^{T}A_{1}
\end{array} \right) = \left( \begin{array}{cc}
H_{1} & 0 \\
0 & H_{2}
\end{array} \right).$$ Likewise, $$K = \left( \begin{array}{cc}
K_{1} & 0 \\
0 & K_{1}
\end{array} \right), $$ $$K_{1} = \left( \begin{array}{cccc}
-2q & 0 & 0 & 0 \\
0 & -2(q + 1) & 0 & 0 \\
0 & 0 & \ddots & 0 \\
0 & 0 & 0 & -2(q + n - 1)
\end{array} \right). $$ This is the critical difference between the counterexample family introduced here and that of \cite{burnside_graphs}: the present family becomes block diagonal when the parameter $b$ is set to the forbidden value of zero, whereas the previous family did not. Notice that $A_{1}$ is almost symmetric; it differs from its transpose only in the $(2,3)$ and $(3,2)$ entries. Let us see what effect this has on $H_{1}$ and $H_{2}$. Let $$A_{1} = \left( \begin{array}{cc}
A_{2} & P \\
P^{T} & Q
\end{array} \right), $$ where $Q$ is diagonal, $$A_{2} = \left( \begin{array}{ccc}
q & -1 & -1 \\
-1 & -(q + 1) & 1\\
-1 & -1 & -(q + 2)
\end{array} \right), $$ $$P = \left( \begin{array}{ccc}
-1 & \, & -1 \\
0 & \hdots & 0 \\
0 & \, & 0 \\
\end{array} \right). $$ Let us now evaluate the two diagonal blocks of $H_{(0)}$: $$H_{1} = -\left( \begin{array}{cc}
A_{2}A_{2}^{T} + PP^{T} & A_{2}P + PQ \\
P^{T}A_{2}^{T} + QP^{T} & P^{T}P + Q^{2}
\end{array} \right), $$ $$ H_{2} = -\left( \begin{array}{cc}
A_{2}^{T}A_{2} + PP^{T} & A_{2}^{T}P + PQ \\
P^{T}A_{2} + QP^{T} & P^{T}P + Q^{2}
\end{array} \right). $$ Observe that $$A_{2}P = A_{2}^{T}P = \left( \begin{array}{ccc}
-q & \, & -q \\
1 & \hdots & 1 \\
1 & \, & 1 \\
\end{array} \right). $$ Therefore $H_{1}$ and $H_{2}$ differ only in the top left $3 \times 3$ block. We can evaluate them explicitly as $${\footnotesize H_{1} = \left(
\begin{array}{cccccc}
-q^2-n+1 & 0 & -3 & -3 & \hdots & -n+1 \\
0 & -q^2-2 q-3 & 0 & -1 & \, & -1 \\
-3 & 0 & -q^2-4 q-6 & -1 & \, & -1 \\
-3 & -1 & -1 & -q^2-6 q-10 & \, & -1 \\
\vdots & \, & \, & \, & \ddots & \vdots \\
-n+1 & -1 & -1 & -1 & \hdots & -(q+n-1)^2-1 \\
\end{array}
\right),} $$ $${\footnotesize H_{2} = \left(
\begin{array}{cccccc}
-q^2-n+1 & -2 & -1 & -3 & \hdots & -n+1 \\
-2 & -q^2-2 q-3 & -2 & -1 & \, & -1 \\
-1 & -2 & -q^2-4 q-6 & -1 & \, & -1 \\
-3 & -1 & -1 & -q^2-6 q-10 & \, & -1 \\
\vdots & \, & \, & \, & \ddots & \vdots \\
-n+1 & -1 & -1 & -1 & \hdots & -(q+n-1)^2-1 \\
\end{array}
\right). }$$ Notice how the $q$ terms cancel out everywhere except along the diagonal, and how consecutive diagonal entries in both $H_{1}$ and $H_{2}$ are separated by roughly $2q$. We are going to make this observation precise, and use the diagonal entries to get the eigenvalues under control, using the following famous theorem of Gershgorin.
\begin{theorem}[Gershgorin's Disc Theorem \cite{matrix_analysis}] \label{gershgorin}
Let $M = (m_{ij}) \in M_{n}(\mathbb{C})$, and let $$ R_{i}(M) = \sum_{j \neq i }^{n} |m_{ij}|,$$ where $1 \leq i \leq n$, denote the \emph{deleted absolute row sums} of $M$. Then all the eigenvalues of $M$ are located in the union of $n$ discs $$\bigcup_{i=1}^{n} \, \lbrace z \in \mathbb{C}: |z - m_{ii}| \leq R_{i}(M) \rbrace. $$ Furthermore, if a union of $k$ of these $n$ discs forms a connected region that is disjoint from all the remaining $n-k$ discs, then there are precisely $k$ eigenvalues of $M$ in this region. If all of the Gershgorin discs of $M$ are disjoint, then $M$ has only simple eigenvalues.
\end{theorem}
Since every matrix is similar to its transpose, the theorem may also be stated in terms of the \emph{deleted absolute column sums}. For a non-hermitian matrix this set of column discs may provide a more accurate estimate. With hermitian matrices the two sets of sums are equal.
\begin{lemma}\label{H_similiar}
$H_{1}$ and $H_{2}$ have only simple eigenvalues and are similar for all $n \geq 4$.
\end{lemma}
\begin{proof}
Let us examine the spacing of the diagonal entries of $H_{1}$ and $H_{2}$. Denote these diagonal entries by $D(t)$. The first three of these are $$D(1) = -q^2-n+1, \quad D(2) = -q^2-2 q-3, \quad D(3) = -q^2-4 q-6, $$ and the remainder are of the form $$D(t) = -(q + t - 1)^{2} - 1,$$ for $4 \leq t \leq n.$ Recall the definition of $q$: \begin{align*}
q &= \frac{n^{2}}{2} - 1 \\
&= 1 + \frac{(n-2)(n + 2)}{2}.
\end{align*} We can evaluate \begin{align*}
D(1) - D(2) &= -n + 2q + 4 = n^{2} - n + 2 > 0.
\end{align*} Therefore we have $D(1) > D(2)$. Now compare the remaining diagonal entries: \begin{align*}
D(2) - D(3) &= 2q + 3 = n^{2} + 1 > 0, \\
D(3) - D(4) &= 2q + 4 = n^{2} + 2 > 0, \\
D(t) - D(t+1) &= -(q + t - 1)^{2} + (q + t)^{2} \\
&= 2(q + t) - 1 \\
&= n^2 - 2 + 2t -1 = n^{2} - 3 + 2t > 0, \qquad \mbox{where } 4 \leq t \leq n-1.
\end{align*} So the diagonal entries of $H_{1}$ and $H_{2}$ form a descending sequence $$D(1) > D(2) > \hdots > D(n), $$ and the difference between entries always increases. The smallest difference is $D(1) - D(2)$, regardless of $n$.
Now let us examine the row sums of $H_{1}$ and $H_{2}$, as in Gershgorin's theorem. It is seen that $$R_{1}(H_{1}) = R_{1}(H_{2}) = \sum_{i = 1}^{n-1} i = \frac{1}{2}n(n-1).$$ Likewise, $$R_{2}(H_{1}) = n-3, \quad R_{2}(H_{2})= n+1,$$ $$R_{3}(H_{1}) = R_{3}(H_{2})= n, \quad R_{t}(H_{1}) = R_{t}(H_{2}) = n+t-3, \quad 4\leq t \leq n.$$ Remember that the row sums are absolute values. The largest of the row sums $ R(2),\hdots , R(n) $ is the last one, $R(n) = 2n-3$. Let us compare this to the first row sum $R(1)$: \begin{equation*}
R(1) - R(n) = \frac{1}{2}n^{2} - \frac{1}{2}n - 2n +3 = \frac{1}{2}(n-4)(n-1) + 1 > 0 \quad \forall \, n \geq 4.
\end{equation*} Therefore the radius of the first Gershgorin disc of $H_{1}$ and $H_{2}$, centered on $D(1)$, is the largest of all the Gershgorin discs. Furthermore, $$D(1) - D(2) < D(t) - D(t + 1) \quad \forall \, 2 \leq t \leq n-1. $$ This can be easily verified for $n \geq 4$, the allowed range of $n$, by referring to the separations of the $\lbrace D(t) \rbrace$ shown above. We can see that \begin{equation*}
\frac{1}{2}(D(1) - D(2)) = \frac{1}{2}(n^{2} - n + 2) > \frac{1}{2}n(n-1) = R(1).
\end{equation*} Therefore the largest of the Gershgorin discs of both $H_{1}$ and $H_{2}$ has a radius less than half of the smallest distance between two consecutive diagonal entries, when treated as points on the real line. Therefore all Gershgorin discs of $H_{1}$ and $H_{2}$, considered as separate matrices, are disjoint. Theorem \ref{gershgorin} tells us that $H_{1}$ and $H_{2}$ both therefore have only simple eigenvalues. However, $H_{(0)} = \left( \begin{array}{cc}
H_{1} & 0 \\
0 & H_{2}
\end{array} \right)$ is the square of the skew-symmetric matrix $A_{(0)}$. Since skew-symmetric matrices have purely imaginary eigenvalues in complex-conjugate pairs, $H_{(0)}$ must be negative definite with eigenvalues appearing with even multiplicity. But because $H_{(0)}$ is block diagonal, its eigenvalues are simply the union of the eigenvalues of $H_{1}$ and $H_{2}$, and we have just shown that each of these has only simple eigenvalues (of multiplicity 1). Therefore if $H_{1}$ had an eigenvalue $\lambda$ that were not also an eigenvalue of $H_{2}$, then $\lambda$ would be a simple eigenvalue of $H_{(0)}$. Since this is impossible, we must conclude that $H_{1}$ and $H_{2}$ have exactly the same eigenvalues, all simple. Thus $H_{1}$ and $H_{2}$ are similar.
\end{proof}
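The inequalities used in this proof can be spot-checked numerically. The sketch below (hypothetical; restricted to even $n$ for convenience, so that $q$ is an integer) verifies the gap and row-sum comparisons for a range of $n$.

```python
# Hypothetical spot-check of the diagonal gaps and Gershgorin radii of H_1, H_2.

def check(n):
    q = n * n // 2 - 1  # q = n^2/2 - 1; n even so q is an integer
    D = [-q * q - n + 1, -q * q - 2 * q - 3, -q * q - 4 * q - 6] + \
        [-(q + t - 1) ** 2 - 1 for t in range(4, n + 1)]
    gaps = [D[i] - D[i + 1] for i in range(n - 1)]
    # row sums of H_1 (R_2(H_2) = n + 1 is also below max(R), so H_2 is covered)
    R = [n * (n - 1) // 2, n - 3, n] + [n + t - 3 for t in range(4, n + 1)]
    assert gaps[0] == n * n - n + 2  # D(1) - D(2)
    assert all(g > 0 for g in gaps)  # diagonal entries strictly descending
    assert min(gaps) == gaps[0]      # the first gap is the smallest
    assert max(R) == R[0]            # R(1) is the largest radius
    assert 2 * max(R) < min(gaps)    # hence all Gershgorin discs are disjoint

for n in range(4, 21, 2):
    check(n)
print("ok")
```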
\begin{prop}
$H_{1}$ and $H_{2}$ are not trivially similar. That is, a similarity matrix $S$ relating $H_{1}$ and $H_{2}$ cannot be of the form $$S = \left( \begin{array}{cc}
\widetilde{S} & \, \\
\, & I_{n-3}
\end{array} \right), $$ where $\widetilde{S}$ is a similarity matrix relating the first principal $3\times 3$ blocks of $H_{1}$ and $H_{2}$.
\end{prop}
\begin{proof}
If two matrices are similar, they must have the same determinant. Compare the determinants of the upper-left $3\times 3$ blocks of $H_{1}$ and $H_{2}$: $$\mbox{Det}{\small \left( \begin{array}{ccc}
-q^2-n + 1 & 0 & -3 \\
0 & -q^2-2 q-3 & 0 \\
-3 & 0 & -q^2-4 q-6 \\
\end{array} \right) }$$ $$ = -n q^4-6 n q^3-17 n q^2-24 n q-18 n-q^6-6 q^5-16 q^4-18 q^3+8 q^2+42 q+45, $$
$$\mbox{Det} {\small \left( \begin{array}{ccc}
-q^2-n + 1 & -2 & -1 \\
-2 & -q^2-2 q-3 & -2 \\
-1 & -2 & -q^2-4 q-6 \\
\end{array} \right)}$$ $$ = -n q^4-6 n q^3-17 n q^2-24 n q-14 n-q^6-6 q^5-16 q^4-18 q^3+8 q^2+42 q+33. $$ Taking the difference of these two determinants, we obtain $-4 (n-3)$, which is non-zero for all $n \geq 4$. Therefore the first principal $3\times 3$ blocks of $H_{1}$ and $H_{2}$ are not similar, and so no trivial similarity matrix $S = \mbox{diag}( \widetilde{S}, I_{n-3} )$ can be constructed for $H_{1}$ and $H_{2}$.
\end{proof}
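The determinant difference can be confirmed by direct computation. The sketch below (hypothetical, pure Python) checks the value $-4(n-3)$ for a range of even $n$.

```python
# Hypothetical check of the determinant difference between the upper-left
# 3x3 blocks of H_1 and H_2.

def det3(M):  # determinant of a 3x3 matrix by cofactor expansion
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

for n in range(4, 21, 2):
    q = n * n // 2 - 1
    d1, d2, d3 = -q * q - n + 1, -q * q - 2 * q - 3, -q * q - 4 * q - 6
    M1 = [[d1, 0, -3], [0, d2, 0], [-3, 0, d3]]
    M2 = [[d1, -2, -1], [-2, d2, -2], [-1, -2, d3]]
    assert det3(M1) - det3(M2) == -4 * (n - 3)
print("ok")
```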
Before proceeding we introduce some notation and give a few preliminary lemmas.
\begin{defn}[Submatrix notation]
Given a square matrix $M$ of order $n$ and an indexing set $I \subseteq \lbrace 1, 2, ..., n \rbrace $, we may denote the principal submatrix consisting of the rows and columns indexed by $I$ as $M\lbrace I \rbrace$.
\end{defn}
\begin{defn}[Anchor point]\label{defn_anchor_point}
Given a submatrix $M\lbrace I \rbrace$ we may refer to the elements of $I$ as the \emph{anchor points} of $M\lbrace I \rbrace$.
\end{defn}
\begin{example}
Let $$M = \left( \begin{array}{cccc}
1 & 2 & 3 & 4 \\
2 & 5 & 6 & 7 \\
3 & 6 & 8 & 9 \\
4 & 7 & 9 & 0
\end{array} \right), $$ then $$M\lbrace 1,3 \rbrace = \left( \begin{array}{cc}
1 & 3 \\
3 & 8
\end{array} \right),$$ and has anchor points $1$ and $3$.
\end{example}
The elementary symmetric polynomials in $n$ variables are a useful and familiar tool. We define the following notation:
\begin{defn}[Elementary symmetric polynomial on eigenvalues]\label{defn_elementary_symmetric_polys}
Let $M$ be an $n \times n$ hermitian matrix, and let $m_{1},...,m_{n}$ be its eigenvalues. Let $k \leq n$ be a natural number, and define $K$ as the set of all indexing sets of the form $\lbrace i_{1},i_{2} , ..., i_{k} \rbrace $ where $1 \leq i_{1}<i_{2} < ...< i_{k} \leq n$. Then denote $$e_{k}(M) = \sum_{I \in K} \prod_{i \in I} m_{i}. $$ This is the $k^{th}$ elementary symmetric polynomial evaluated on the eigenvalues of $M$.
\end{defn}
\begin{example}
Suppose $M$ has eigenvalues $m_{1}, m_{2}, m_{3}$. Then $$e_{2}(M) = m_{1}m_{2} + m_{1}m_{3} + m_{2}m_{3}. $$
\end{example}
\begin{lemma}\label{lemma_elementary_symmetric_polys}
Take a matrix $M$ with eigenvalues $m_{1},...,m_{n}$. Let $k \leq n$ be a natural number, and define $K$ as the set of all indexing sets of the form $\lbrace i_{1},i_{2} , ..., i_{k} \rbrace $ where $1 \leq i_{1}<i_{2} < ...< i_{k} \leq n$. Then $$e_{k}(M) = \sum_{I \in K} \mbox{Det } M\lbrace I \rbrace . $$
\end{lemma}
\begin{proof}
See Theorem 1.2.12 in \cite{matrix_analysis}, page 42.
\end{proof}
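A small worked instance may be helpful. The sketch below (hypothetical; the companion-matrix example is our illustrative choice, not the paper's) checks the lemma for a matrix with known eigenvalues $1, 2, 3, 4$.

```python
# Hypothetical worked example of the lemma: the companion matrix of
# (x-1)(x-2)(x-3)(x-4) = x^4 - 10x^3 + 35x^2 - 50x + 24 has eigenvalues
# 1, 2, 3, 4, and the sum of its k x k principal minors equals e_k(1,2,3,4).
from itertools import combinations
from math import prod

def det(M):  # Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

C = [[0, 0, 0, -24],
     [1, 0, 0, 50],
     [0, 1, 0, -35],
     [0, 0, 1, 10]]
eigs = [1, 2, 3, 4]

for k in range(1, 5):
    minor_sum = sum(det([[C[i][j] for j in I] for i in I])
                    for I in combinations(range(4), k))
    e_k = sum(prod(eigs[i] for i in I) for I in combinations(range(4), k))
    print(k, minor_sum, e_k)
    assert minor_sum == e_k
```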
\begin{lemma}\label{off_diag_lemma}
The difference in determinant between matrices $\left( \begin{array}{cc}
a & b \\
c & d
\end{array} \right) \mbox{ and } \left( \begin{array}{cc}
a & e \\
f & d
\end{array} \right) $ is ${ef - bc}$. Likewise, the difference in determinant between matrices $\left( \begin{array}{cc}
a & b \\
c & d
\end{array} \right) \mbox{ and } \left( \begin{array}{cc}
e & b \\
c & f
\end{array} \right) $ is $ad - ef$.
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
Now, let us define $$X = \left( \begin{array}{cc}
1 & 0 \\
0 & 0
\end{array} \right), \quad Y = \left( \begin{array}{cc}
0 & q^{-2} \\
q^{-2} & 1
\end{array} \right). $$
We will now set $b = 0$ and examine the eigenvalues of $$L_{(0)} = X \otimes H_{(0)} + Y \otimes K = \left( \begin{array}{cc}
H_{(0)} & q^{-2} K \\
q^{-2} K & K
\end{array} \right).$$ We can split $L_{(0)}$ into two $2n \times 2n$ diagonal blocks using symmetric permutation: $$L_{(0)} = \left( \begin{array}{cc;{6pt/4pt}cc}
H_{1} & q^{-2}K_{1} & 0 & 0 \\
q^{-2}K_{1} & K_{1} & 0 & 0 \\ \hdashline[6pt/4pt]
0 & 0 & H_{2} & q^{-2}K_{1} \\
0 & 0 & q^{-2}K_{1} & K_{1}
\end{array} \right) = \left( \begin{array}{cc}
L_{1} & 0 \\
0 & L_{2}
\end{array} \right). $$ Here we are denoting the diagonal blocks of $L_{(0)}$ by $L_{1}$ and $L_{2}$.
\begin{lemma}\label{L_simple}
$L_{1}$ and $L_{2}$ each have only simple eigenvalues.
\end{lemma}
\begin{proof}
We will deal with $L_{1}$ and $L_{2}$ one at a time.
The case of $L_{1}$: We will use Gershgorin's Disc Theorem. First examine the discs due to $H_{1}$. Here are the diagonal entries again:
\begin{align*}
D(1) &= - q^{2} - n + 1, & D(2) &= - q^{2} -2q - 3, \\
D(3) &= - q^{2} - 4 q - 6, & D(t) &= -(q + t - 1)^{2} - 1 \qquad 4\leq t \leq n.
\end{align*}
As in the proof of Lemma \ref{H_similiar}, the diagonal entries are in descending order and the smallest gap is between $D(1)$ and $D(2)$: $$D(1) - D(2) = n^{2} - n + 2.$$ The row sums are much as in the proof of Lemma \ref{H_similiar}, but they each pick up a single term from the presence of $q^{-2}K$:\begin{align*}
R_{1}(L_{1}) &= \frac{1}{2}n(n-1) + \frac{2}{q}, & R_{2}(L_{1}) &= n-3 + \frac{2(q + 1)}{q^{2}}, \\
R_{3}(L_{1}) &= n + \frac{2(q + 2)}{q^{2}}, & R_{t}(L_{1}) &= n+t-3 + \frac{2(q + t - 1)}{q^{2}}, \qquad 4\leq t \leq n.
\end{align*} In the proof of Lemma \ref{H_similiar} we established that of the row sums $\lbrace R_{2}(H_{1}),\ldots,R_{n}(H_{1}) \rbrace $ it is $R_{n}(H_{1})$ that is largest. This is still true, since each of the extra terms $$\lbrace 2/q,\, 2(q+1)/q^2,\, \ldots,\, 2(q+t-1)/q^2,\, \ldots \rbrace $$ is larger than the previous one. Therefore let us compare $R_{1}(L_{1})$ and $R_{n}(L_{1})$: \begin{align*}
R_{1}(L_{1}) - R_{n}(L_{1}) &= \frac{1}{2}n(n-1) + \frac{2}{q} - 2n + 3 - \frac{2(q + n - 1)}{q^{2}} \\
&= (n-1)\left( \frac{n-4}{2} - \frac{2}{q^{2}} \right) + 1.
\end{align*} This expression increases monotonically with $n$. At $n = 4$, $q = 7$, and the expression evaluates to \begin{equation*}
R_{1}(L_{1}) - R_{n}(L_{1}) = 3\left( - \frac{2}{7^{2}} \right) + 1 = \frac{43}{49} > 0.
\end{equation*} This difference can only increase as $n$ grows larger, so we can say that for all $n \geq 4$, $R_{1}(L_{1})$ is the largest row sum. Now let us see how the Gershgorin disc associated with $R_{1}(L_{1})$ fits into the space between $D(1)$ and $D(2)$: $$
R_{1}(L_{1}) = \frac{1}{2}n(n-1) + \frac{2}{q}, \qquad
\frac{1}{2}(D(1) - D(2)) = \frac{1}{2} n(n-1) + 1.
$$ But since $q > 2$ for all $n \geq 4$, we can see that $R_{1}(L_{1}) < \frac{1}{2}(D(1) - D(2))$ for all $n \geq 4$. We have already shown that every subsequent row sum is smaller than $R_{1}(L_{1})$ and that every pair of diagonal entries is farther apart. This means that none of the Gershgorin discs can make it halfway across the space between each pair of diagonal entries, and so all of the Gershgorin discs due to $H_{1}$ and $q^{-2}K_{1}$ are disjoint.
Now consider the lower right block of $L_{1}$ equal to $K_{1}$. The diagonal entries are now: \begin{align*}
D(1+n) &= -2q & D(2+n) &= -2(q+1) \\
D(3+n) &= -2(q+2) & D(t+n) &= -2(q+t-1) \qquad 4\leq t \leq n.
\end{align*} The space between any two diagonal entries is exactly 2. The row sums come from the off-diagonal $q^{-2} K$ block: \begin{align*}
R_{n+1}(L_{1}) &= \frac{2}{q} & R_{n+2}(L_{1}) &= \frac{2(q+1)}{q^{2}} \\
R_{n+3}(L_{1}) &= \frac{2(q+2)}{q^{2}} & R_{n+t}(L_{1}) &= \frac{2(q+t-1)}{q^{2}} \qquad 4\leq t \leq n.
\end{align*} The largest of these row sums is $R_{2n}(L_{1}):$
\begin{align*}
R_{2n}(L_{1}) = \frac{2(q+n-1)}{q^{2}} &< \frac{4((n-2)(n+2) + 2n)}{(n-2)^{2}(n+2)^{2}} \\
&= \frac{4n(n + 2)}{(n-2)^{2}(n+2)^{2}} - \frac{16}{(n-2)^{2}(n+2)^{2}} \\
&< \frac{4n(n + 2)}{(n-2)^{2}(n+2)^{2}} = \frac{4n}{(n-2)^{2}(n+2)}.
\end{align*} Notice that $n < n + 2$ and $4 \leq (n-2)^{2}$ for $n \geq 4$, and therefore $\frac{4n}{(n-2)^{2}(n+2)} \leq 1$ for $n \geq 4$. Therefore $R_{2n}(L_{1}) < 1$ for all $n \geq 4$, and so none of the Gershgorin discs centred on the diagonal entries $D(1+n),...,D(2n)$ can intersect with any of the other Gershgorin discs centred on those diagonal entries.
We have separated the Gershgorin discs into two groups, centred around $D(1),...,D(n)$ and $D(1+n),...,D(2n)$, and shown that the discs within each group are pairwise disjoint. It remains to show that the Gershgorin discs centred around $D(2n)$ and $D(1)$ do not overlap, since these are the elements of each group which approach each other the closest. The separation of these is \begin{equation*}
D(2n) - D(1) = -2(q+n-1) + q^{2} + n - 1 = q^{2} - 2q -(n - 1),
\end{equation*} and the sum of their respective row sums is \begin{equation*}
R_{1}(L_{1}) + R_{2n}(L_{1}) = \frac{1}{2}n(n-1) + \frac{2}{q} + \frac{2(q+n-1)}{q^{2}} = \frac{q^{2}n(n-1) + 8q + 4n - 4}{2q^{2}}.
\end{equation*} The difference $D(2n) - D(1) - (R_{1}(L_{1}) + R_{2n}(L_{1}))$ is easily shown to be positive for all $n \geq 4$. Thus we see that the combined radii of the Gershgorin discs centred around $D(2n)$ and $D(1)$ is not large enough to cover the distance between them, for all $n \geq 4$.
Therefore all of the Gershgorin discs of $L_{1}$ are disjoint, and so $L_{1}$ has only simple eigenvalues.
The case for $L_{2}$: this proceeds in much the same way as before, except that the first three row sums $R_{1}(L_{2}),...,R_{3}(L_{2})$ differ, owing to the difference between $H_{1}$ and $H_{2}$. This difference does not affect the outcome, and since the calculations are almost exactly the same we omit them here.
\end{proof}
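The three separation inequalities driving this proof can also be checked numerically. The following sketch (in Python, using exact rational arithmetic; the helper name \verb|disc_separations| is ours, not from the text) verifies, for a range of $n$, that $R_{1}(L_{1})$ stays below half the gap $D(1)-D(2)$, that $R_{2n}(L_{1}) < 1$, and that the discs around $D(1)$ and $D(2n)$ remain separated.

```python
from fractions import Fraction

def disc_separations(n):
    """Exact-arithmetic check of the three disc-separation inequalities."""
    q = Fraction((n - 2) * (n + 2), 2) + 1        # q = (1/2)(n-2)(n+2) + 1
    R1 = Fraction(n * (n - 1), 2) + 2 / q         # largest row sum, H-block rows
    R2n = 2 * (q + n - 1) / q**2                  # largest row sum, K-block rows
    half_gap = Fraction(n * (n - 1), 2) + 1       # (1/2)(D(1) - D(2))
    sep = -2 * (q + n - 1) - (-q**2 - n + 1)      # D(2n) - D(1), from the definitions
    return R1 < half_gap and R2n < 1 and sep - (R1 + R2n) > 0

# all three inequalities hold for every tested n >= 4
assert all(disc_separations(n) for n in range(4, 200))
```

Because the arithmetic is carried out over $\mathbb{Q}$, no floating-point tolerance is involved; the check fails exactly when one of the stated inequalities fails.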
We have shown that $L_{1}$ and $L_{2}$, the two diagonal components of $L_{(0)}$, have only simple eigenvalues. The eigenvalues of $L$ are the union of the eigenvalues of $L_{1}$ and $L_{2}$. Therefore, any repeated eigenvalues of $L$ must be eigenvalues which are shared between $L_{1}$ and $L_{2}$. The following theorem shows that there must be at least some eigenvalues which are not shared between $L_{1}$ and $L_{2}$, and which are therefore simple eigenvalues of $L_{(0)}$ as a whole.
\begin{theorem}\label{L_simple_sym}
$L_{(0)}$ has at least six simple eigenvalues.
\end{theorem}
\begin{proof}
Recall that $L_{(0)} = \mbox{diag}(L_{1},L_{2})$. Denote the eigenvalues of $L_{1}$ by $$\lambda_{1}, \hdots , \lambda_{2n}, $$ and the eigenvalues of $L_{2}$ by $$\rho_{1}, \hdots , \rho_{2n}. $$ If the eigenvalues were exactly the same, then all symmetric polynomials involving the eigenvalues would also be the same. We will compare the elementary symmetric polynomials of $L_{1}$ and $L_{2}$.
First symmetric polynomial of eigenvalues:
this is simply the trace. $L_{1}$ and $L_{2}$ share the same trace, since $H_{1}$ differs from $H_{2}$ only in off-diagonal elements, and so we can conclude that $$e_{1}(L_{1}) = \sum_{i = 1}^{2n} \lambda_{i} = \sum_{i = 1}^{2n} \rho_{i} = e_{1}(L_{2}).$$
Second symmetric polynomial of eigenvalues:
the second symmetric polynomial of the eigenvalues of a matrix $M$ is equal to the sum of the symmetric $2 \times 2$ minors of $M$: $$e_{2}(M) = \sum_{i < j} m(i,j), $$ where $m(i,j)$ is the determinant of the $2 \times 2$ submatrix formed by deleting all rows and columns except numbers $i$ and $j$. Let us therefore compare such minors of $L_{1}$ and $L_{2}$. Keep in mind that $L_{1}$ and $L_{2}$ are the same everywhere except for the $3 \times 3$ block in the upper left: $$L_{1} = {\small \left( \begin{array}{cccc}
-q^{2} - n + 1 & 0 & -3 & \hdots\\
0 & -q^{2} - 2q - 3 & 0 \\
-3 & 0 & -q^{2} - 4q - 6 \\
\vdots & & & \ddots
\end{array} \right)},$$
$$L_{2} = {\small \left( \begin{array}{cccc}
-q^{2} - n + 1 & -2 & -1 & \hdots\\
-2 & -q^{2} - 2q - 3 & -2 \\
-1 & -2 & -q^{2} - 4q - 6 \\
\vdots & & & \ddots
\end{array} \right)}.$$ Let us now examine the possible values of $l_{1}(i,j)$ and $l_{2}(i,j)$, the $2 \times 2$ minors of $L_{1}$ and $L_{2}$. Since $L_{1}$ and $L_{2}$ differ only in the first three rows and columns, $$l_{1}(i,j) = l_{2}(i,j) \qquad \mbox{ if } i,j > 3. $$ Therefore when considering the difference between $e_{2}(L_{1})$ and $e_{2}(L_{2})$ we need to consider only those minors for which $i \leq 3$.
Consider now $i \leq 3 < j$. There is now one anchor point in the first three diagonal entries of $L_{1}$, and one anchor point elsewhere. Deleting all other rows and columns for the construction of the minor will invariably delete the off-diagonal entries of the first $3 \times 3$ block. Therefore $l_{1}(i,j) = l_{2}(i,j) $ for $ i \leq 3 < j. $
We need therefore only consider those minors which are obtained from the top-left $3 \times 3$ block. We now have {\small \begin{align*}
e_{2}(L_{1}) = \; &\mbox{Det}\left( \begin{array}{cc}
-q^{2} - n + 1 & 0\\
0 & -q^{2} - 2q -3
\end{array} \right) + \mbox{Det}\left( \begin{array}{cc}
-q^{2} - n + 1 & -3\\
-3 & -q^{2} - 4q -6
\end{array} \right) \\
&+ \mbox{Det}\left( \begin{array}{cc}
-q^{2} -4q -6 & 0\\
0 & -q^{2} - 2q -3
\end{array} \right) + (\mbox{Shared terms}),
\end{align*}
\begin{align*}
e_{2}(L_{2}) = \; &\mbox{Det}\left( \begin{array}{cc}
-q^{2} - n + 1 & -2\\
-2 & -q^{2} - 2q -3
\end{array} \right) + \mbox{Det}\left( \begin{array}{cc}
-q^{2} - n + 1 & -1\\
-1 & -q^{2} - 4q -6
\end{array} \right) \\
&+ \mbox{Det}\left( \begin{array}{cc}
-q^{2} -4q -6 & -2\\
-2 & -q^{2} - 2q -3
\end{array} \right) + (\mbox{Shared terms}),
\end{align*}}where we have grouped the terms arising from minors anchored at diagonal entries outside the first three into a `shared terms' group. Upon evaluating these three determinants, we will get terms in $q$, and three constant terms. In both cases these constant terms sum to $-9$, and so we have $$e_{2}(L_{1}) = -9 + (\mbox{Shared terms}) = e_{2}(L_{2}). $$ Therefore the second symmetric polynomial of eigenvalues is equal for both $L_{1}$ and $L_{2}$.
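As a sanity check, one can confirm symbolically that the non-shared $2\times 2$ minors contribute equally to $e_{2}(L_{1})$ and $e_{2}(L_{2})$. A short sketch using \verb|sympy| (the function name \verb|e2_block| is ours):

```python
import sympy as sp

q, n = sp.symbols('q n')
# diagonal entries of the top-left 3x3 block, shared by L1 and L2
P1, P2, P3 = -q**2 - n + 1, -q**2 - 2*q - 3, -q**2 - 4*q - 6

def e2_block(a12, a13, a23):
    """Sum of the three 2x2 principal minors of the top-left 3x3 block,
    with off-diagonal entries a12, a13, a23."""
    return (sp.Matrix([[P1, a12], [a12, P2]]).det()
          + sp.Matrix([[P1, a13], [a13, P3]]).det()
          + sp.Matrix([[P2, a23], [a23, P3]]).det())

# L1 off-diagonals: 0, -3, 0;  L2 off-diagonals: -2, -1, -2
diff = sp.simplify(e2_block(0, -3, 0) - e2_block(-2, -1, -2))
assert diff == 0    # both constant contributions equal -9, as in the text
```

The diagonal products $P_{i}P_{j}$ are shared, so the whole difference reduces to the off-diagonal constants, which cancel exactly as claimed.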
Third symmetric polynomial of eigenvalues:
we do not require this one for our purposes, so for brevity we skip it. Using similar reasoning to the case of the second symmetric polynomial, it can be shown that the third symmetric polynomial of eigenvalues is indeed the same for both $L_{1}$ and $L_{2}$.
Fourth symmetric polynomial of eigenvalues:
now we are choosing a sequence of indices $i < j < k < m$ and using these to construct the symmetric $4 \times 4$ minors of $L_{1}$ and $L_{2}$. Many of these minors will be shared between the two matrices, and we can avoid these when considering the difference in the overall sums. The possible choices of $i,j,k,m$ fall into categories, which we will list and consider in turn. First, split $L_{1}$ and $L_{2}$ into sectors, as follows: $$L_{t} = \left( \begin{array}{ccc}
\multicolumn{1}{c|} P & \multicolumn{1}{c|} \, & \, \\ \cline{1-1}
\, & \multicolumn{1}{c|} Q & \, \\ \cline{1-2}
\, & \, & R \\
\end{array} \right). $$ Here $P$ represents the first $3 \times 3$ block of $H_{1}$ and $H_{2}$, which is the only point of difference between $L_{1}$ and $L_{2}$. To save space, use the following notation for the diagonal entries of $P$:\begin{equation*}
P_{1} = -q^{2} - n + 1, \quad P_{2} = -q^{2} - 2q - 3, \quad P_{3} = -q^{2} - 4q - 6.
\end{equation*} $Q$ represents the rest of $H_{1}$ and $H_{2}$. $R$ represents the rest of $L_{1}$ and $L_{2}$, consisting of $K$ in the bottom right corner, and the two off-diagonal copies of $q^{-2} K$. We will now look through the possibilities for constructing $4 \times 4$ minors, and identify those which have the potential to be different between $L_{1}$ and $L_{2}$.
Case 1: $i, j , k, m$ are all drawn from $Q$ and $R$.
Since $L_{1}$ and $L_{2}$ are identical in these sectors, minors which result from the choices of $i, j , k, m$ from these two sectors alone can be disregarded as `shared terms' when comparing $e_{4}(L_{1})$ and $e_{4}(L_{2})$. The conclusion here is that at least one of the anchor points must be drawn from $P$, a fact we will refer to again.
Case 2: $i, j , k, m$ are all drawn from $P$ and $Q$.
Since $P$ and $Q$ together make up $H_{1}$ or $H_{2}$, choosing $i, j , k, m$ from these sectors gives a sum of minors that is equal to $e_{4}(H_{1})$ or $e_{4}(H_{2})$. But we already know that $H_{1}$ and $H_{2}$ have the same eigenvalues, and so $e_{4}(H_{1}) = e_{4}(H_{2})$. Therefore minors which result from the choices of $i, j , k, m$ from these two sectors alone can be disregarded as shared terms. Therefore at least one of the anchor points must be drawn from $R$.
Case 3: $i$ drawn from $P$, and $j, k, m$ are drawn from $Q$ and $R$.
In this case none of the off-diagonal entries in $P$ are drawn into the sub-matrix. Since these are the only points of difference between $L_{1}$ and $L_{2}$, the resulting minors are shared between $e_{4}(L_{1})$ and $e_{4}(L_{2})$. Therefore at least two anchor points must come from $P$.
Case 4: $i, j$ from $P$, and $k, m$ from $R$.
Each choice of $k$ and $m$, giving anchor points $K_{k}$ and $K_{m}$ from $R$ will result in $2 \times 2$ minors drawn from the following submatrix: $${\small \left( \begin{array}{ccc|cc}
P_{1} & (0 \mbox{ or } -2) & (-3 \mbox{ or } -1) & \, \, \\
(0 \mbox{ or } -2) & P_{2} & (0 \mbox{ or } -2) & r_{k} & r_{m} \\
(-3 \mbox{ or } -1) & (0 \mbox{ or } -2) & P_{3} & \, & \, \\ \hline
\, & r_{k}^{T} & \, & K_{k} & \, \\
\, & r_{m}^{T} & \, & \, & K_{m}
\end{array} \right)}, $$ where $r_{k}$ and $r_{m}$ represent the first three rows of the columns corresponding to $k$ and $m$. There is limited freedom in the entries of $r_{k}$ and $r_{m}$. There can be at most one non-zero entry in each one, equal to $q^{-2} K_{k}$ or $q^{-2}K_{m}$ respectively, if $k$ or $m \leq 3 + n$, and any nonzero entry in $r_{m}$ must be lower in the column than a corresponding non-zero entry in $r_{k}$. This means that it is possible, depending on the choice of $k$ and $m$, for there to be a non-zero entry in $r_{k}$ and only zero entries in $r_{m}$, but not vice-versa. All of this follows because the $R$ sector is made up entirely of multiples of the matrix $K_{1}$. Examine each of the cases in turn. Firstly, the case for all zeros: $${\small \left( \begin{array}{ccc|cc}
P_{1} & (0 \mbox{ or } -2) & (-3 \mbox{ or } -1) & 0 & 0 \\
(0 \mbox{ or } -2) & P_{2} & (0 \mbox{ or } -2) & 0 & 0 \\
(-3 \mbox{ or } -1) & (0 \mbox{ or } -2) & P_{3} & 0 & 0 \\ \hline
0 & 0 & 0 & K_{k} & \, \\
0 & 0 & 0 & \, & K_{m}
\end{array} \right)}. $$ Since this matrix is block diagonal, extracting the $4 \times 4$ minors as required will lead to $$K_{k}K_{m}\,(\mbox{sum of } 2 \times 2 \mbox{ minors obtained from } P ). $$ But we already know from the analysis of $e_{2}$ that these minors will be the same for $L_{1}$ and $L_{2}$, and so the $4 \times 4$ minors are the same between $L_{1}$ and $L_{2}$ in this case.
Now the case for non-zero entries in both columns. Consider first the following arrangement: $${\small \left( \begin{array}{ccc|cc}
P_{1} & (0 \mbox{ or } -2) & (-3 \mbox{ or } -1) & q^{-2}K_{k} & 0 \\
(0 \mbox{ or } -2) & P_{2} & (0 \mbox{ or } -2) & 0 & q^{-2}K_{m} \\
(-3 \mbox{ or } -1) & (0 \mbox{ or } -2) & P_{3} & 0 & 0 \\ \hline
q^{-2}K_{k} & 0 & 0 & K_{k} & \, \\
0 & q^{-2}K_{m} & 0 & \, & K_{m}
\end{array} \right)}. $$ This arrangement provides three choices of $4 \times 4$ minors in keeping with our requirements, and their sum is: {\footnotesize \begin{align*}
&\left| \begin{array}{cc;{6pt/4pt}cc}
P_{1} & (0 \mbox{ or } -2) & q^{-2}K_{k} & 0 \\
(0 \mbox{ or } -2) & P_{2} & 0 & q^{-2}K_{m} \\ \hdashline[6pt/4pt]
q^{-2}K_{k} & 0 & K_{k} & \, \\
0 & q^{-2}K_{m} & \, & K_{m}
\end{array} \right| + \left| \begin{array}{cc;{6pt/4pt}cc}
P_{2} & (0 \mbox{ or } -2) & 0 & q^{-2}K_{m} \\
(0 \mbox{ or } -2) & P_{3} & 0 & 0 \\ \hdashline[6pt/4pt]
0 & 0 & K_{k} & \, \\
q^{-2}K_{m} & 0 & \, & K_{m}
\end{array} \right| \\
+ &\left| \begin{array}{cc;{6pt/4pt}cc}
P_{1} & (-3 \mbox{ or } -1) & q^{-2}K_{k} & 0 \\
(-3 \mbox{ or } -1) & P_{3} & 0 & 0 \\ \hdashline[6pt/4pt]
q^{-2}K_{k} & 0 & K_{k} & \, \\
0 & 0 & \, & K_{m}
\end{array} \right|.
\end{align*} } Apply the following symmetric permutation of submatrices to the second minor:
$$ \left| \begin{array}{cc;{6pt/4pt}cc}
P_{2} & (0 \mbox{ or } -2) & 0 & q^{-2}K_{m} \\
(0 \mbox{ or } -2) & P_{3} & 0 & 0 \\ \hdashline[6pt/4pt]
0 & 0 & K_{k} & \, \\
q^{-2}K_{m} & 0 & \, & K_{m}
\end{array} \right| \rightarrow \left| \begin{array}{cc;{6pt/4pt}cc}
P_{2} & (0 \mbox{ or } -2) & q^{-2}K_{m} & 0 \\
(0 \mbox{ or } -2) & P_{3} & 0 & 0 \\ \hdashline[6pt/4pt]
q^{-2}K_{m} & 0 & K_{m} & \, \\
0 & 0 & \, & K_{k}
\end{array} \right|. $$ Because symmetrically permuting entries of a matrix does not change its determinant, we have not changed the value of the minor with this manipulation. Now, we will have a situation where the off-diagonal blocks of each minor commute with the bottom right block of that minor, and so we can use the following identity of block matrices, given as Theorem 3 of \cite{block_matrix_dets}:
\begin{lemma}\label{lemma_block_det}
Let $M, N, E,$ and $F$ be square matrices of equal size, where $E$ and $F$ commute. Then
$$\left| \begin{array}{cc}
M & N \\
E & F
\end{array} \right| = \left|MF - NE \right|.$$
\end{lemma}
The product of all of the off-diagonal blocks will be a diagonal matrix, and so we will end up with an expression of the form {\small \begin{align*}
&\left| \begin{array}{cc}
\mbox{terms} & K_{k}(0 \mbox{ or } -2) \\
K_{m}(0 \mbox{ or } -2) & \mbox{terms}
\end{array} \right| + \left| \begin{array}{cc}
\mbox{terms} & K_{m}(0 \mbox{ or } -2) \\
K_{k}(0 \mbox{ or } -2) & \mbox{terms}
\end{array} \right| \\
+ &\left| \begin{array}{cc}
\mbox{terms} & K_{k}(-3 \mbox{ or } -1) \\
K_{m}(-3 \mbox{ or } -1) & \mbox{terms}
\end{array} \right|.
\end{align*} }The only room for a difference to emerge in this expression between $L_{1}$ and $L_{2}$ is in the off-diagonal terms. For $L_{1}$ these become $$-9K_{k}K_{m}$$ and for $L_{2}$ these become $$K_{k}K_{m}(-4-4-1) = -9K_{k}K_{m},$$ and so the terms are the same for both $L_{1}$ and $L_{2}$. This is a common feature of the present argument: many expressions reduce to the same set of $2 \times 2$ minors coming from $P$, with the same useful equality property emerging from the off-diagonals. The remaining two possibilities for non-zero entries in both columns of $R$, shown here,
$${\small \left( \begin{array}{ccc|cc}
P_{1} & (0 \mbox{ or } -2) & (-3 \mbox{ or } -1) & q^{-2}K_{k} & 0 \\
(0 \mbox{ or } -2) & P_{2} & (0 \mbox{ or } -2) & 0 & 0 \\
(-3 \mbox{ or } -1) & (0 \mbox{ or } -2) & P_{3} & 0 & q^{-2}K_{m} \\ \hline
q^{-2}K_{k} & 0 & 0 & K_{k} & \, \\
0 & 0 & q^{-2}K_{m} & \, & K_{m}
\end{array} \right)}, $$ $$ {\small \left( \begin{array}{ccc|cc}
P_{1} & (0 \mbox{ or } -2) & (-3 \mbox{ or } -1) & 0 & 0 \\
(0 \mbox{ or } -2) & P_{2} & (0 \mbox{ or } -2) & q^{-2}K_{k} & 0 \\
(-3 \mbox{ or } -1) & (0 \mbox{ or } -2) & P_{3} & 0 & q^{-2}K_{m} \\ \hline
0 & q^{-2}K_{k} & 0 & K_{k} & \, \\
0 & 0 & q^{-2}K_{m} & \, & K_{m}
\end{array} \right)}, $$
work out in the same way, and will not be repeated.
We are left with the possibility of a non-zero entry in the first column of $R$ and zeros in the second. Consider first the following arrangement: $${\small \left( \begin{array}{ccc|cc}
P_{1} & (0 \mbox{ or } -2) & (-3 \mbox{ or } -1) & q^{-2}K_{k} & 0 \\
(0 \mbox{ or } -2) & P_{2} & (0 \mbox{ or } -2) & 0 & 0 \\
(-3 \mbox{ or } -1) & (0 \mbox{ or } -2) & P_{3} & 0 & 0 \\ \hline
q^{-2}K_{k} & 0 & 0 & K_{k} & \, \\
0 & 0 & 0 & \, & K_{m}
\end{array} \right)}. $$ We follow the same procedure as in the previous step: {\footnotesize \begin{align*}
&\left| \begin{array}{cc;{6pt/4pt}cc}
P_{1} & (0 \mbox{ or } -2) & q^{-2}K_{k} & 0 \\
(0 \mbox{ or } -2) & P_{2} & 0 & 0 \\ \hdashline[6pt/4pt]
q^{-2}K_{k} & 0 & K_{k} & \, \\
0 & 0 & \, & K_{m}
\end{array} \right| + \left| \begin{array}{cc;{6pt/4pt}cc}
P_{2} & (0 \mbox{ or } -2) & 0 & 0 \\
(0 \mbox{ or } -2) & P_{3} & 0 & 0 \\ \hdashline[6pt/4pt]
0 & 0 & K_{k} & \, \\
0 & 0 & \, & K_{m}
\end{array} \right| \\
+ &\left| \begin{array}{cc;{6pt/4pt}cc}
P_{1} & (-3 \mbox{ or } -1) & q^{-2}K_{k} & 0 \\
(-3 \mbox{ or } -1) & P_{3} & 0 & 0 \\ \hdashline[6pt/4pt]
q^{-2}K_{k} & 0 & K_{k} & \, \\
0 & 0 & \, & K_{m}
\end{array} \right|,
\end{align*} }again leading to an expression of the form \begin{align*}
&\left| \begin{array}{cc}
\mbox{terms} & K_{k}(0 \mbox{ or } -2) \\
K_{m}(0 \mbox{ or } -2) & \mbox{terms}
\end{array} \right| + \left| \begin{array}{cc}
\mbox{terms} & K_{m}(0 \mbox{ or } -2) \\
K_{k}(0 \mbox{ or } -2) & \mbox{terms}
\end{array} \right| \\
+ &\left| \begin{array}{cc}
\mbox{terms} & K_{k}(-3 \mbox{ or } -1) \\
K_{m}(-3 \mbox{ or } -1) & \mbox{terms}
\end{array} \right|.
\end{align*} The remaining two possibilities for one non-zero column of $R$, $${\small \left( \begin{array}{ccc|cc}
P_{1} & (0 \mbox{ or } -2) & (-3 \mbox{ or } -1) & 0 & 0 \\
(0 \mbox{ or } -2) & P_{2} & (0 \mbox{ or } -2) & q^{-2}K_{k} & 0 \\
(-3 \mbox{ or } -1) & (0 \mbox{ or } -2) & P_{3} & 0 & 0 \\ \hline
0 & q^{-2}K_{k} & 0 & K_{k} & \, \\
0 & 0 & 0 & \, & K_{m}
\end{array} \right)}, $$ $$ {\small \left( \begin{array}{ccc|cc}
P_{1} & (0 \mbox{ or } -2) & (-3 \mbox{ or } -1) & 0 & 0 \\
(0 \mbox{ or } -2) & P_{2} & (0 \mbox{ or } -2) & 0 & 0 \\
(-3 \mbox{ or } -1) & (0 \mbox{ or } -2) & P_{3} & q^{-2}K_{k} & 0 \\ \hline
0 & 0 & q^{-2}K_{k} & K_{k} & \, \\
0 & 0 & 0 & \, & K_{m}
\end{array} \right)} $$ can be dealt with in the same way. The same conclusion follows as for the previous arrangements, and we see that there can be no difference between the minors of $L_{1}$ and $L_{2}$ constructed in this fashion. Therefore we can conclude that either the anchor points are of the form $i,j \in P$, $k \in Q$, $m \in R$, or of the form $i,j,k \in P$, $m \in R$.
Case 5: $i,j$ from $P$, $k$ from $P$ or $Q$, and $m$ from $R$.
Since $m$ is now definitely from $R$, we must have that $m > n$. To simplify the remaining steps, we will split the indexing of anchor points into indices $1,...,n$ for $i,j,k$ in $P$ and $Q$, and separate indices $1,...,n$ for $m$ in $R$. This amounts to redefining $m$ by $m \leftarrow m - n $. We will proceed by fixing $m$ and evaluating the possible minors from all other valid choices of $i,j$ and $k$. We will go through every $m$ from $1$ to $n$ (using the new indexing system) and collect the sum of the differences. First consider the case where $m > 3$. Consider the minor formed by $i,j,k \in P$: $${\small \left| \begin{array}{ccc|c}
-q^{2} - n + 1 & (0 \mbox{ or } -2) & (-3 \mbox{ or } -1) & 0 \\
(0 \mbox{ or } -2) & -q^{2} - 2q - 3 & (0 \mbox{ or } -2) & 0 \\
(-3 \mbox{ or } -1) & (0 \mbox{ or } -2) & -q^{2} -4q - 6 & 0 \\ \hline
0 & 0 & 0 & -2(q + m -1)
\end{array} \right|}. $$ Since $m > 3$, there are only zeros in the final row and column. Remembering that $q = (1/2)(n-2)(n+2) + 1$, we can directly evaluate this term, and we find that the difference due to this term is \begin{equation*}
\Delta(1,2,3,m) = -4(n-3)(-2(q + m -1)) = 4(n-3)(n+2)(n-2) + 8m(n-3).
\end{equation*} Now suppose that we are taking $k \in Q$. We shall fix $k$, and evaluate each of $\Delta(1,2,k,m)$, $\Delta(1,3,k,m)$, and $\Delta(2,3,k,m)$, and sum over $k$. Each minor will be of the form $${\small \left| \begin{array}{cc|c|c}
2 \times 2 & \mbox{submatrices} & (-k+1 \mbox{ or } -1) & 0 \\
\mbox{of} & P & -1& 0 \\ \hline
(-k+1 \mbox{ or } -1) & -1 & -(q+k-1)^{2}-1 & c \\ \hline
0 & 0 & c & -2(q + m -1)
\end{array} \right|}, $$ where the top left block represents the three $2 \times 2$ submatrices of $P$, and $c$ can either be $0$ if $m-k \neq 0$, or $-2q^{-2}(q + m -1)$ if $m-k = 0$. This non-zero entry occurs when the row (column) originating from $k$ lines up with a non-zero entry of the top right block of $L_{1}$ (or $L_{2}$). Since $m > 3$, such lining up cannot occur from any of the rows (columns) due to $i$ or $j$, and so the remaining two entries in the fourth row and column must be zero. For clarity, let us express the differences as follows: {\small \begin{align*}
\Delta(1,2,k,m) &= \left| \begin{array}{cc|c|c}
p_{1} & 0 & 1-k & 0 \\
0 & p_{2} & -1 & 0 \\ \hline
1-k & -1 & t & c \\ \hline
0 & 0 & c & s
\end{array} \right| - \left| \begin{array}{cc|c|c}
p_{1} & -2 & 1-k & 0 \\
-2 & p_{2} & -1 & 0 \\ \hline
1-k & -1 & t & c \\ \hline
0 & 0 & c & s
\end{array} \right|, \\
\Delta(1,3,k,m) &= \left| \begin{array}{cc|c|c}
p_{1} & -3 & 1-k & 0 \\
-3 & p_{3} & -1 & 0 \\ \hline
1-k & -1 & t & c \\ \hline
0 & 0 & c & s
\end{array} \right| - \left| \begin{array}{cc|c|c}
p_{1} & -1 & 1-k & 0 \\
-1 & p_{3} & -1 & 0 \\ \hline
1-k & -1 & t & c \\ \hline
0 & 0 & c & s
\end{array} \right|, \\
\Delta(2,3,k,m) &= \left| \begin{array}{cc|c|c}
p_{2} & 0 & -1 & 0 \\
0 & p_{3} & -1 & 0 \\ \hline
-1 & -1 & t & c \\ \hline
0 & 0 & c & s
\end{array} \right| - \left| \begin{array}{cc|c|c}
p_{2} & -2 & -1 & 0 \\
-2 & p_{3} & -1 & 0 \\ \hline
-1 & -1 & t & c \\ \hline
0 & 0 & c & s
\end{array} \right|,
\end{align*}} where $s = -2(q + m -1)$, $t = -(q+k-1)^{2}-1$, and $p_{1}, p_{2}, p_{3}$ are the diagonal entries of $P$. Consider the Laplace expansion of these minors down the rightmost column, making use of the zero entries. In this expansion of each pair of minors, only those terms which involve both the entries at the (1,2) and (2,1) positions will differ between these two expansions. This is due to the presence of zeroes in the right column and bottom row, forcing many of the Laplace expansion terms to equal zero. Therefore by using Lemma \ref{off_diag_lemma} to assist with the right-column Laplace expansions we can evaluate: \pagebreak \begin{align*}
\Delta(1,2,k,m) &= -4 \left(c^2-s (k+t-1)\right) \\
\Delta(1,3,k,m) &= 4 \left(2 c^2-s (k+2 t-1)\right) \\
\Delta(2,3,k,m) &= 4 \left(-c^2+s t+s\right),
\end{align*} which sums to $4s = -8(q + m -1)$. The contribution to $\Delta(i,j,k,m)$ for $i,j \in P$, $k \in Q$, and $m \in R, m > 3$ for a fixed $m$ and fixed $k$ is $-8(q + m -1)$. Summing this over $k \in Q$, we simply multiply by $(n-3)$ to get $$\sum_{\substack{i,j = 1 \\ i \neq j}}^{3}\sum_{k > 3} \Delta(i,j,k,m)= -8(n-3)(q + m -1) = -4(n-3)(n+2)(n-2) - 8m(n-3).$$ But this is equal to the negative of $\Delta(1,2,3,m)$ which we obtained earlier, cancelling out. Keep in mind that this sum is for a fixed $m$. So we have $$\sum_{m > 3} \sum_{k = 1}^{n}\Delta(i,j,k,m) = 0. $$
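The three determinant differences above, and the fact that they sum to $4s$, can be verified symbolically, treating $p_{1}, p_{2}, p_{3}, k, t, c, s$ as free symbols. A sketch with \verb|sympy| (the helper name \verb|delta| is ours):

```python
import sympy as sp

p1, p2, p3, k, t, c, s = sp.symbols('p1 p2 p3 k t c s')

def delta(pa, pb, ba, bb, a1, a2):
    """Difference det(minor with off-diagonal a1) - det(minor with a2).

    pa, pb: the two retained diagonal entries of P; ba, bb: their border
    entries in the column coming from k; a1, a2: the symmetric (1,2) entry
    of the L1 and L2 versions of the minor, respectively."""
    def det(a):
        return sp.Matrix([[pa, a,  ba, 0],
                          [a,  pb, bb, 0],
                          [ba, bb, t,  c],
                          [0,  0,  c,  s]]).det()
    return sp.expand(det(a1) - det(a2))

d12 = delta(p1, p2, 1 - k, -1, 0, -2)    # Delta(1,2,k,m)
d13 = delta(p1, p3, 1 - k, -1, -3, -1)   # Delta(1,3,k,m)
d23 = delta(p2, p3, -1,    -1, 0, -2)    # Delta(2,3,k,m)

assert sp.simplify(d12 + 4*(c**2 - s*(k + t - 1))) == 0
assert sp.simplify(d13 - 4*(2*c**2 - s*(k + 2*t - 1))) == 0
assert sp.simplify(d23 - 4*(-c**2 + s*t + s)) == 0
assert sp.simplify(d12 + d13 + d23 - 4*s) == 0   # the total is exactly 4s
```

The check is purely formal in the symbols, so it covers every admissible $k$, $m$, and $q$ at once.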
Now we shall turn our attention to the remaining case, which is $m \leq 3$. Again fix $k$, and consider $m = 1$. First, consider $\Delta(1,2,3,1) $: {\small \begin{align*}
\Delta(1,2,3,1) &= \left| \begin{array}{ccc|c}
-q^{2} - n + 1 & 0 & -3 & -2q^{-1} \\
0 & -q^{2} - 2q - 3 & 0 & 0 \\
-3 & 0 & -q^{2} -4q - 6 & 0 \\ \hline
-2q^{-1} & 0 & 0 & -2q
\end{array} \right| \\
&- \left| \begin{array}{ccc|c}
-q^{2} - n + 1 & -2 & -1 & -2q^{-1} \\
-2 & -q^{2} - 2q - 3 & -2 & 0 \\
-1 & -2 & -q^{2} -4q - 6 & 0 \\ \hline
-2q^{-1} & 0 & 0 & -2q
\end{array} \right|,
\end{align*} }which, repeatedly using Lemma \ref{off_diag_lemma} and expanding only those terms which do not cancel out, gives us \begin{equation}\label{eq_diff_1}
\Delta(1,2,3,1) =\frac{8 \left((n-3) q^3-2\right)}{q^2}.
\end{equation} Likewise, by direct calculation, \begin{align}\label{eq_diff_2}
\Delta(1,2,3,2) &= \frac{8 (q+1) \left((n-3) q^4+4 q+4\right)}{q^4}, \\
\Delta(1,2,3,3) &= \frac{8 (q+2) \left((n-3) q^4-2 q-4\right)}{q^4}.\label{eq_diff_3}
\end{align} Summing these, we have that \begin{align}\label{eq_diff_sum}
\sum_{m=1}^{3}\Delta(1,2,3,m) &= 8 \left(3 n (q+1)-\frac{4}{q^4}-9 q-9\right).
\end{align} Now let us examine the case for a fixed $k > 3$ and a fixed $m <3$. Start with $m = 1,$ and use Lemma \ref{off_diag_lemma} to evaluate the differences: {\small \begin{align*}
\Delta(1,2,k,1) &= \left| \begin{array}{cc|c|c}
p_{1} & 0 & 1-k & c \\
0 & p_{2} & -1 & 0 \\ \hline
1-k & -1 & t & 0 \\ \hline
c & 0 & 0 & s
\end{array} \right| - \left| \begin{array}{cc|c|c}
p_{1} & -2 & 1-k & c \\
-2 & p_{2} & -1 & 0 \\ \hline
1-k & -1 & t & 0 \\ \hline
c & 0 & 0 & s
\end{array} \right| = 4 s (-1 + k + t), \\
\Delta(1,3,k,1) &= \left| \begin{array}{cc|c|c}
p_{1} & -3 & 1-k & c \\
-3 & p_{3} & -1 & 0 \\ \hline
1-k & -1 & t & 0 \\ \hline
c & 0 & 0 & s
\end{array} \right| - \left| \begin{array}{cc|c|c}
p_{1} & -1 & 1-k & c \\
-1 & p_{3} & -1 & 0 \\ \hline
1-k & -1 & t & 0 \\ \hline
c & 0 & 0 & s
\end{array} \right| = -4 s (-1 + k + 2 t),\\
\Delta(2,3,k,1) &= \left| \begin{array}{cc|c|c}
p_{2} & 0 & -1 & 0 \\
0 & p_{3} & -1 & 0 \\ \hline
-1 & -1 & t & 0 \\ \hline
0 & 0 & 0 & s
\end{array} \right| - \left| \begin{array}{cc|c|c}
p_{2} & -2 & -1 & 0 \\
-2 & p_{3} & -1 & 0 \\ \hline
-1 & -1 & t & 0 \\ \hline
0 & 0 & 0 & s
\end{array} \right| = 4 s (1 + t),
\end{align*} }where $s = -2q$, $c = -2/q$, $t = -(q + k - 1)^2$, and $p_{1},p_{2},p_{3}$ are the diagonal entries of $P$. The sum of these differences is \begin{equation}
\Delta(i,j,k,1) = 4s = -8q
\end{equation} for fixed $k > 3$, and hence\begin{equation}
\sum_{k>3} \Delta(i,j,k,1) = -(n-3)8q.
\end{equation} Similar calculations give us
\begin{equation}
\sum_{k>3} \Delta(i,j,k,2) = -(n-3)8(q+1),
\end{equation}
\begin{equation}
\sum_{k>3} \Delta(i,j,k,3) = -(n-3)8(q+2).
\end{equation} for $k > 3$. Now, adding each of these to Equations \eqref{eq_diff_1}, \eqref{eq_diff_2}, \eqref{eq_diff_3}, we find that \begin{align*}
\sum \Delta(i,j,k,1) &= -(n-3)8q +\frac{8 \left((n-3) q^3-2\right)}{q^2} = -\frac{16}{q^2}, \\
\sum \Delta(i,j,k,2) &= -(n-3)8(q+1) + \frac{8 (q+1) \left((n-3) q^4+4 q+4\right)}{q^4} = \frac{32 \left( q + 1\right)^{2}}{q^4},\\
\sum \Delta(i,j,k,3) &= -(n-3)8(q+2) +\frac{8 (q+2) \left((n-3) q^4-2 q-4\right)}{q^4}= -\frac{16 \left( q + 2 \right)^{2}}{q^4}.
\end{align*} Notice how all of the explicit $n$-terms cancel out. These are the only differences we have uncovered which are non-zero. Let us add them up to find the final value of the difference between the fourth symmetric polynomial of eigenvalues of $L_{1}$ and $L_{2}$: \begin{equation*}
\sum\Delta(i,j,k,m) = \frac{32 (q+1)^2}{q^4}-\frac{16 (q+2)^2}{q^4}-\frac{16}{q^2} = -\frac{32}{q^{4}} \neq 0.
\end{equation*}
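The final bookkeeping, combining the per-$m$ sums over $k>3$ with Equations \eqref{eq_diff_1}--\eqref{eq_diff_3} and totalling $-32/q^{4}$, is routine algebra; a \verb|sympy| check:

```python
import sympy as sp

q, n = sp.symbols('q n')

# per-m totals: (sum over k > 3) + Delta(1,2,3,m), for m = 1, 2, 3
total1 = -(n - 3)*8*q       + 8*((n - 3)*q**3 - 2)/q**2
total2 = -(n - 3)*8*(q + 1) + 8*(q + 1)*((n - 3)*q**4 + 4*q + 4)/q**4
total3 = -(n - 3)*8*(q + 2) + 8*(q + 2)*((n - 3)*q**4 - 2*q - 4)/q**4

# the explicit n-terms cancel in each total...
assert sp.simplify(total1 + 16/q**2) == 0
assert sp.simplify(total2 - 32*(q + 1)**2/q**4) == 0
assert sp.simplify(total3 + 16*(q + 2)**2/q**4) == 0
# ...and the grand total is -32/q^4, which is never zero
assert sp.simplify(total1 + total2 + total3 + 32/q**4) == 0
```

Since $-32/q^{4} \neq 0$ for every admissible $q$, the fourth symmetric polynomials of $L_{1}$ and $L_{2}$ genuinely differ.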
We have shown that the difference of the fourth symmetric polynomial of $L_{1}$ and $L_{2}$ is non-zero, and so therefore $L_{1}$ and $L_{2}$ must have different eigenvalues. Therefore, $L_{(0)}$ has at least some simple eigenvalues. We will now place lower bounds on how many eigenvalues must differ between $L_{1}$ and $L_{2}$, and therefore a lower bound on how many simple eigenvalues $L_{(0)}$ must have.
Suppose that the eigenvalues of $L_{1}$ and $L_{2}$ are the same except for one eigenvalue. Denote the eigenvalues of $L_{1}$ by $$\lbrace \lambda_{1}, \lambda_{2}, \hdots , \lambda_{2n} \rbrace, $$ and the eigenvalues of $L_{2}$ by $$\lbrace \lambda_{1} + t, \lambda_{2}, \hdots , \lambda_{2n} \rbrace, $$ where $t \neq 0$. But then, the trace of $L_{1}$ cannot equal the trace of $L_{2}$, which we know it must since $e_{1}(L_{1}) = e_{1}(L_{2})$. Therefore $L_{1}$ and $L_{2}$ must differ in at least two eigenvalues, such that the sum of eigenvalues is the same for both: $$L_{1}: \quad \lbrace \lambda_{1}, \lambda_{2}, \lambda_{3}, \hdots \lambda_{2n} \rbrace,$$ $$L_{2}: \quad \lbrace \lambda_{1} + t, \lambda_{2} - t, \lambda_{3}, \hdots \lambda_{2n} \rbrace,$$ where $t \neq 0$. Now let us evaluate $e_{2}(L_{1})$ and $e_{2}(L_{2})$: \begin{align*}e_{2}(L_{1}) &= \lambda_{1}\lambda_{2} + (\lambda_{1} + \lambda_{2})\sum_{i=3}^{2n} \lambda_{i} + \mbox{ shared terms }, \\
e_{2}(L_{2}) &= (\lambda_{1} + t)(\lambda_{2} - t) + (\lambda_{1} + t) \sum_{i=3}^{2n} \lambda_{i} + (\lambda_{2} - t) \sum_{i=3}^{2n} \lambda_{i} + \mbox{ shared terms } \\
&= (\lambda_{1} + t)(\lambda_{2} - t) + (\lambda_{1} + \lambda_{2})\sum_{i=3}^{2n} \lambda_{i} + \mbox{ shared terms }.
\end{align*} We know that $e_{2}(L_{1}) = e_{2}(L_{2})$, and so \begin{equation*}\lambda_{1}\lambda_{2} = (\lambda_{1} + t)(\lambda_{2} - t) = \lambda_{1}\lambda_{2} + t(\lambda_{2} - \lambda_{1}) - t^{2},
\end{equation*} leading to $$ t(\lambda_{2} - \lambda_{1} - t) = 0. $$ Since $t \neq 0$, we have that $t = \lambda_{2} - \lambda_{1}$. But for this value of $t$, we have that $\lambda_{1} + t = \lambda_{2}$, and $\lambda_{2} - t = \lambda_{1}$, which would mean that the eigenvalues of $L_{2}$ are the same as the eigenvalues of $L_{1}$, which is a contradiction. Therefore $L_{1}$ and $L_{2}$ must differ in at least three eigenvalues. There may be more eigenvalues not in common, but three is a lower bound. Therefore, since the eigenvalues of $L_{(0)}$ are the union of the eigenvalues of $L_{1}$ and $L_{2}$, we can finally conclude that $L_{(0)}$ has at least six simple eigenvalues.
\end{proof}
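The counting step at the end of this proof reduces to elementary algebra on the two perturbed eigenvalues; a short \verb|sympy| sketch (the symbol $S$ stands for the sum of the shared eigenvalues and is our notation, not the paper's):

```python
import sympy as sp

l1, l2, t, S = sp.symbols('lambda1 lambda2 t S')

# e2 contributions of the two perturbed eigenvalues; all other terms are
# shared between L1 and L2 and cancel in the difference
e2_L1 = l1*l2 + (l1 + l2)*S
e2_L2 = (l1 + t)*(l2 - t) + (l1 + l2)*S   # (l1+t) + (l2-t) = l1 + l2

diff = sp.expand(e2_L1 - e2_L2)
# e2 equality forces t*(l1 - l2 + t) = 0, so t = 0 or t = l2 - l1;
# the latter merely swaps the two eigenvalues, giving the contradiction
assert diff == sp.expand(t*(l1 - l2 + t))
```

This confirms that a two-eigenvalue perturbation preserving both $e_{1}$ and $e_{2}$ is necessarily a relabelling, so at least three eigenvalues must differ.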
Having shown the desired condition of some simple eigenvalues at a specific value of the parameter $b$ for the Kippenhahn counterexample $L_{(b)}$, we now wish to extend this condition to cover as many values of $b$ as possible. We will use results from perturbation theory found in Chapter 2 of \cite{kato} and Chapter 5 of \cite{knopp}.
\begin{theorem}
$L_{(b)}$ has some simple eigenvalues for almost all $b$.
\end{theorem}
\begin{proof}
Consider the characteristic polynomial of $L_{(b)}$: \begin{equation}\label{eqn_kato}
\mbox{Det}(L_{(b)} - \lambda I) = 0.
\end{equation} As stated on page 63 of \cite{kato}, this is an algebraic equation in $\lambda$ of degree $2n$, the order of $L_{(b)}$, with coefficients which are holomorphic in $b$. It follows (see page 119 of \cite{knopp}, or page 64 of \cite{kato}) that the roots of \eqref{eqn_kato} are branches of analytic functions of $b$ with only algebraic singularities. This implies that the number of distinct eigenvalues of $L_{(b)}$ is a constant independent of $b$ except at some \emph{exceptional points} where the analytic functions representing each root of \eqref{eqn_kato} meet. Since these eigenvalue functions are analytic and admit power series expansions, there can only be a finite number of such crossing points in any compact interval of $\mathbb{R}$, and only countably many overall. At an exceptional point, the number of distinct eigenvalues can only decrease, never increase.
We already know that $L_{(0)}$ has at least six simple eigenvalues, and according to the above reasoning these eigenvalues can only collide at a countable number of values of $b$. Therefore $L_{(b)}$ has some simple eigenvalues at almost every $ b \in \mathbb{R}$, that is at all except a measure zero set of values of $b$.
\end{proof} This result establishes Theorem \ref{thm_quant} from the Introduction. We have shown that the counterexample presented in this paper can be quantised at all orders for almost all values of the parameter $b$. The tradeoff is that we have not shown that every eigenvalue is simple for almost all $b$, but only that a subset of the eigenvalues are. Regardless, this constitutes a quantisation of the Kippenhahn conjecture at all orders.
\section{Quantisation of Li, Spitkovsky and Shukla's counterexample}\label{sec_LSS}
In this section we will give a general description of the process of quantising the counterexample of Li, Spitkovsky, and Shukla (the LSS counterexample) to the more general form of Kippenhahn's Conjecture. First we will establish a preliminary lemma.
\begin{lemma}\label{lemma_aux}
Let $M = \left( \begin{array}{cc}
P & Q \\
Q & 0
\end{array}\right)$ be a matrix over a field $\mathbb{F}$, where $P$ and $Q$ are symmetric and invertible. Then for any eigenvalue $\lambda$ of $M$ of multiplicity $m$, the \emph{auxiliary matrix} $\widetilde{M}(\lambda)$ of $M$, defined as \begin{equation}\label{eqn_aux_matrix}
\widetilde{M}(\lambda) = P + \frac{1}{\lambda} Q^{2}
\end{equation} has $\lambda$ as an eigenvalue.
If $m > 1$, then $\lambda$ must also have multiplicity greater than 1 with respect to $\widetilde{M}(\lambda)$.
\end{lemma}
\begin{proof}
Let $\left( \begin{array}{c}
u \\
v
\end{array} \right) $ be an eigenvector of $M$, with eigenvalue $\lambda$: $$M \left( \begin{array}{c}
u \\
v
\end{array} \right) = \left( \begin{array}{c}
P u + Q v \\
Q u
\end{array} \right)=\left( \begin{array}{c}
\lambda u \\
\lambda v
\end{array} \right).$$ Since $\mbox{Det}(M) = \pm\mbox{Det}(Q)^{2} \neq 0 $, we know that $M$ is invertible and $\lambda \neq 0$. Comparing components, we see that $$v = \frac{1}{\lambda} Q u .$$ Note that $u \neq 0$: if $u = 0$, then $v = \frac{1}{\lambda} Q u = 0$ as well, contradicting the fact that an eigenvector is non-zero. We then have $$\widetilde{M}(\lambda)u = P u + \frac{1}{\lambda}Q^{2} u = \lambda u.$$ So $u$ is an eigenvector of the auxiliary matrix $\widetilde{M}(\lambda)$ with eigenvalue $\lambda$. Suppose now that $\lambda$ appeared with multiplicity greater than one as an eigenvalue of $M$. Then there must be some $\lambda$-eigenvector $\left( \begin{array}{c}
u_{2} \\
v_{2}
\end{array} \right) \neq k \left( \begin{array}{c}
u \\
v
\end{array} \right) $ for any $k \in \mathbb{F}$. By the above reasoning, both $u$ and $u_{2}$ are eigenvectors of $\widetilde{M}(\lambda)$ with eigenvalue $\lambda$. Suppose that $u_{2} = k u$. Then $v_{2} = (k/\lambda) Q u $, as above, and so we would have $\left( \begin{array}{c}
u_{2} \\
v_{2}
\end{array} \right) = k \left( \begin{array}{c}
u \\
v
\end{array} \right), $ which is not possible. Therefore we must conclude that $u_{2}$ is not a multiple of $u$. Therefore $\lambda$ must be an eigenvalue of $\widetilde{M}(\lambda)$ of multiplicity at least two, thus proving the result.
\end{proof}
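Lemma \ref{lemma_aux} is easy to probe numerically over $\mathbb{F} = \mathbb{R}$. The following sketch (Python with NumPy; the randomly generated symmetric $P$ and $Q$ are an illustrative assumption of ours, not data from the paper) confirms that every eigenvalue of $M$ reappears as an eigenvalue of the auxiliary matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random symmetric P and Q over the reals; a generic Q is invertible.
P = rng.standard_normal((n, n)); P = P + P.T
Q = rng.standard_normal((n, n)); Q = Q + Q.T
assert abs(np.linalg.det(Q)) > 1e-8

# M = [[P, Q], [Q, 0]] as in the lemma; Det(M) = Det(-Q^2) != 0,
# so every eigenvalue lam of M is non-zero.
M = np.block([[P, Q], [Q, np.zeros((n, n))]])

# Each eigenvalue lam of M is an eigenvalue of M~(lam) = P + Q^2/lam.
for lam in np.linalg.eigvalsh(M):
    aux = P + (Q @ Q) / lam
    gap = np.min(np.abs(np.linalg.eigvalsh(aux) - lam))
    assert gap < 1e-6 * (1 + abs(lam))
```

The multiplicity statement can be probed in the same way by choosing $P$ and $Q$ for which $M$ has a repeated eigenvalue.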
The auxiliary matrix \eqref{eqn_aux_matrix} will be of use in this section, and in Section \ref{sec_laffey} where we quantise Laffey's seminal counterexample \cite{laffey}.
In their paper \cite{li}, Li, Spitkovsky, and Shukla described the following $6 \times 6$ counterexample to the weak form of Kippenhahn's conjecture, where the characteristic polynomial need not be square. Take a matrix $A$ defined by $$A = {\small \left(
\begin{array}{cccccc}
0 & x & 0 & c y & 0 & 0 \\
0 & 0 & y & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -c x & 0 & \sqrt{1-c^2} \xi & 0 \\
0 & 0 & 0 & 0 & 0 & \eta \\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{array}
\right), } $$ where $x, y, \xi, \eta, c > 0$, $x^2 + y^2 = \xi^2 + \eta^2 = 1$, and $c < 1/2$. Then define $H = A + A^T $ and $K$ via $2A = H + i K$: $$H = {\footnotesize \left(
\begin{array}{cccccc}
0 & x & 0 & c y & 0 & 0 \\
x & 0 & y & 0 & 0 & 0 \\
0 & y & 0 & -c x & 0 & 0 \\
c y & 0 & -c x & 0 & \sqrt{1-c^2} \xi & 0 \\
0 & 0 & 0 & \sqrt{1-c^2} \xi & 0 & \eta \\
0 & 0 & 0 & 0 & \eta & 0 \\
\end{array}
\right), } $$ $${\footnotesize K = \left(
\begin{array}{cccccc}
0 & -i x & 0 & -i c y & 0 & 0 \\
i x & 0 & -i y & 0 & 0 & 0 \\
0 & i y & 0 & -i c x & 0 & 0 \\
i c y & 0 & i c x & 0 & -i \sqrt{1-c^2} \xi & 0 \\
0 & 0 & 0 & i \sqrt{1-c^2} \xi & 0 & -i \eta \\
0 & 0 & 0 & 0 & i \eta & 0 \\
\end{array}
\right). } $$ We will now quantise this counterexample. Define the hermitian matrices $X = \left( \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array} \right)$ and $Y = \left( \begin{array}{cc}
0 & i \\
-i & 0
\end{array} \right),$ and consider $L(s) = s X \otimes H + Y \otimes K $, where $s \in \mathbb{R}, s > 1. $ We find that $L(s) = \left( \begin{array}{cc}
0 &M(s) \\
M(s)^{T} & 0
\end{array} \right), $ where $M(s) =$ $$ {\footnotesize \left(
\begin{array}{cccccc}
0 & (s+1) x & 0 & c (s+1) y & 0 & 0 \\
(s-1) x & 0 & (s+1) y & 0 & 0 & 0 \\
0 & (s-1) y & 0 & -c (s-1) x & 0 & 0 \\
c (s-1) y & 0 & -c (s+1) x & 0 & \sqrt{1-c^2} \xi (s+1) & 0 \\
0 & 0 & 0 & \sqrt{1-c^2} \xi (s-1) & 0 & \eta (s+1) \\
0 & 0 & 0 & 0 & \eta (s-1) & 0 \\
\end{array}
\right).}$$ The eigenvalues of $L(s)$ are $\pm$ the singular values of $M(s)$, that is, $\pm$ the square roots of the eigenvalues of $M(s)M(s)^{T}$. So by showing that values of $s$ exist for which $M(s)M(s)^{T}$ has simple eigenvalues, we show the same for $L(s)$. Firstly, $\mbox{Det}(M(s)) = -c^2 \eta ^2 (s-1)^3 (s+1)^3 \left(x^2+y^2\right)^2$, which is non-zero for the allowed values of $c, \eta,$ and $s$.
Our approach is to evaluate the discriminant of the characteristic polynomial of $M(s)M(s)^{T}$ for $s = 2, 3, 4$, and to show that, for any allowed values of the parameters of the counterexample, at least one of these values of $s$ gives a non-zero discriminant. Therefore suitable matrices $sX$ and $Y$ always exist for which $M(s)M(s)^{T}$, and thus $L(s)$, has a non-zero discriminant and therefore only simple eigenvalues. The process involves lengthy calculations of characteristic polynomials and their discriminants, done using Mathematica, and a Gr{\"o}bner basis calculation, done using Macaulay2. The code for these calculations may be found at \texttt{github.com/blaw381/Quantum-Kippenhahn-Examples}.
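The construction can also be checked numerically for concreteness. The sketch below (Python with NumPy; the parameter values $x=0.6$, $y=0.8$, $\xi=0.8$, $\eta=0.6$, $c=0.3$, $s=2$ are an illustrative choice of ours, not taken from \cite{li}) assembles $A$, $H$, $K$, and $L(s)$, and verifies both the block form, with $M(s)=(s+1)A+(s-1)A^{T}$, and the determinant formula; the rigorous, parameter-independent verification remains the Mathematica and Macaulay2 computation referenced above:

```python
import numpy as np

# Illustrative parameters satisfying x^2+y^2 = xi^2+eta^2 = 1, 0 < c < 1/2.
x, y = 0.6, 0.8
xi, eta = 0.8, 0.6
c, s = 0.3, 2.0
r = np.sqrt(1 - c**2)

A = np.zeros((6, 6))
A[0, 1], A[0, 3] = x, c * y
A[1, 2] = y
A[3, 2], A[3, 4] = -c * x, r * xi
A[4, 5] = eta

H = A + A.T              # 2A = H + iK
K = -1j * (A - A.T)      # hermitian

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, 1j], [-1j, 0]])
L = s * np.kron(X, H) + np.kron(Y, K)

# L(s) has the block form [[0, M], [M^T, 0]] with M = (s+1)A + (s-1)A^T.
M = (s + 1) * A + (s - 1) * A.T
assert np.allclose(L[:6, 6:], M) and np.allclose(L[6:, :6], M.T)
assert np.allclose(L[:6, :6], 0) and np.allclose(L[6:, 6:], 0)

# Det(M(s)) = -c^2 eta^2 (s-1)^3 (s+1)^3 (x^2+y^2)^2 != 0.
det_formula = -c**2 * eta**2 * (s - 1)**3 * (s + 1)**3 * (x**2 + y**2)**2
assert np.isclose(np.linalg.det(M), det_formula)

# Eigenvalues of L(s) are +/- the singular values of M(s); simplicity of
# the spectrum of M M^T therefore transfers to L(s).
ev = np.sort(np.linalg.eigvalsh(M @ M.T))
print("min spectral gap of M M^T:", np.min(np.diff(ev)))
```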
\section{Quantisation of Laffey's counterexample}\label{sec_laffey}
Here is the counterexample given by Laffey in \cite{laffey}: \begin{equation*}
\begin{split} H &= {\small \left( \begin{array}{cccccccc}
-122 & 0 & 12 & 18 & -30 & 18 & 26 & 10 \\
0 & -122 & -6 & -12 & -16 & -28 & 20 & -16 \\
12 & -6 & -218 & 0 & 44 & 8 & 24 & 12 \\
18 & -12 & 0 & -218 & -2 & -34 & -10 & 22 \\
-30 & -16 & 44 & -2 & -216 & 0 & -12 & -8 \\
18 & -28 & 8 & -34 & 0 & -216 & -8 & 36 \\
26 & 20 & 24 & -10 & -12 & -8 & -120 & 0 \\
10 & -16 & 12 & 22 & -8 & 36 & 0 & -120
\end{array} \right)} , \\ &\qquad \qquad \; K ={\small \left( \begin{array}{cccccccc}
-4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -4 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 4 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -8 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -8 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 8
\end{array} \right)}. \end{split} \end{equation*} Quantise with $X = \left( \begin{array}{cc}
1 & 0 \\
0 & 0
\end{array} \right)$ and $Y = \left( \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array} \right) $ to get $L = X \otimes H + Y \otimes K. $
\begin{prop}
$L$ has only simple eigenvalues.
\end{prop}
\begin{proof}
Observe that $L = \left( \begin{array}{cc}
H & K \\
K & 0
\end{array} \right) $, and notice that the lower blocks trivially commute. We can therefore use Lemma \ref{lemma_block_det} and conclude that $\mbox{Det}(L) = \mbox{Det}(-K^{2}) \neq 0.$ Therefore all of the eigenvalues of $L$ are non-zero, and we can construct the auxiliary matrix $$\widetilde{L}(\lambda) = H + \frac{1}{\lambda}K^{2}, $$ corresponding to some eigenvalue $\lambda$ of $L$. Note that for the purpose of this auxiliary matrix, $\lambda$ is a fixed constant. By Lemma \ref{lemma_aux}, we know that $\lambda$ is an eigenvalue of $\widetilde{L}(\lambda)$, and our aim is to determine the multiplicity of $\lambda$ with respect to $\widetilde{L}(\lambda)$. Recall Lax's theorem \cite{lax}: a real symmetric matrix has a repeated eigenvalue if and only if it commutes with a non-zero skew-symmetric matrix. We take the commutator of $\lambda \widetilde{L}(\lambda)$ with a skew-symmetric matrix $S = \left\lbrace s_{ij} \right\rbrace$, where we have multiplied by $\lambda$ to improve the clarity of the resulting commutator. This commutator has 36 distinct elements, a sample of which is listed here:
{\small \begin{align*}
&(1) \qquad -4 \lambda (6 s_{1,3}+9 s_{1,4}-15 s_{1,5}+9 s_{1,6}+13 s_{1,7}+5 s_{1,8}), \\
&(2) \qquad 2 (8 \lambda s_{1,2}-22 \lambda s_{1,3}+\lambda s_{1,4}+47 \lambda s_{1,5}+6 \lambda s_{1,7}+4 \lambda s_{1,8} \\
&\qquad \qquad +6 \lambda s_{3,5} +9 \lambda s_{4,5}-9 \lambda s_{5,6}-13 \lambda s_{5,7}-5 \lambda s_{5,8}-24 s_{1,5}), \\
&(3) \qquad 2 \lambda (9 s_{1,2}+48 s_{2,4}+s_{2,5}+17 s_{2,6}+5 s_{2,7}-11 s_{2,8}-3 s_{3,4}+8 s_{4,5}+14 s_{4,6}-10 s_{4,7}+8 s_{4,8}), \\
& (4) \qquad 2 (9 \lambda s_{1,2}-4 \lambda s_{2,3}+17 \lambda s_{2,4}+47 \lambda s_{2,6}+4 \lambda s_{2,7}-18 \lambda s_{2,8} \\
& \qquad \qquad -3 \lambda s_{3,6}-6 \lambda s_{4,6}-8 \lambda s_{5,6}-10 \lambda s_{6,7}+8 \lambda s_{6,8}-24 s_{2,6}), \\
& (5) \qquad 4 \lambda (3 s_{1,2}+24 s_{2,3}-11 s_{2,5}-2 s_{2,6}-6 s_{2,7}-3 s_{2,8}+3 s_{3,4}+4 s_{3,5}+7 s_{3,6}-5 s_{3,7}+4 s_{3,8}), \\
& (6) \qquad 2 (5 \lambda s_{1,4}+9 \lambda s_{1,8}-8 \lambda s_{2,4}-6 \lambda s_{2,8}+6 \lambda s_{3,4}+4 \lambda s_{4,5} \\
&\qquad \qquad -18 \lambda s_{4,6}-49 \lambda s_{4,8}-\lambda s_{5,8}-17 \lambda s_{6,8}-5 \lambda s_{7,8}-24 s_{4,8}).
\end{align*} }
See \texttt{github.com/blaw381/Quantum-Kippenhahn-Examples} for the relevant code. We suppose that each of these elements vanishes, giving us 36 equations, and solve for the possible values of the $s_{i,j}$. Since $\lambda$ is a non-zero constant in the context of the auxiliary matrix, and each of these polynomials is linear in the $s_{i,j}$, the system can be solved by linear algebra. It is easy to confirm using any computer algebra system that this system of equations has full rank with respect to the $s_{i,j}$, and so we must conclude that $ s_{i,j} = 0$ for all $i,j$. Therefore $\lambda \widetilde{L}(\lambda)$, and hence also $\widetilde{L}(\lambda)$, has only simple eigenvalues. In particular, $\lambda$ must be a simple eigenvalue of $\widetilde{L}(\lambda)$, and hence, by Lemma \ref{lemma_aux}, also a simple eigenvalue of $L$. Since we have made no special assumptions about $\lambda$, we conclude that $L$ has only simple eigenvalues.
\end{proof}
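The proposition can be cross-checked in floating point (a numerical sketch in Python with NumPy, complementary to the exact symbolic argument above): assemble $L$, confirm $\mbox{Det}(L)=\mbox{Det}(-K^{2})=2^{40}$, and confirm the conclusion of Lemma \ref{lemma_aux} for every eigenvalue:

```python
import numpy as np

H = np.array([
    [-122,    0,   12,   18,  -30,   18,   26,   10],
    [   0, -122,   -6,  -12,  -16,  -28,   20,  -16],
    [  12,   -6, -218,    0,   44,    8,   24,   12],
    [  18,  -12,    0, -218,   -2,  -34,  -10,   22],
    [ -30,  -16,   44,   -2, -216,    0,  -12,   -8],
    [  18,  -28,    8,  -34,    0, -216,   -8,   36],
    [  26,   20,   24,  -10,  -12,   -8, -120,    0],
    [  10,  -16,   12,   22,   -8,   36,    0, -120]], dtype=float)
K = np.diag([-4., -4., 4., 4., -8., -8., 8., 8.])

X = np.array([[1., 0.], [0., 0.]])
Y = np.array([[0., 1.], [1., 0.]])
L = np.kron(X, H) + np.kron(Y, K)   # = [[H, K], [K, 0]]

# The lower blocks commute, so Det(L) = Det(-K^2) = 16^4 * 64^4 = 2^40 > 0.
sign, logdet = np.linalg.slogdet(L)
assert sign > 0 and np.isclose(logdet, 40 * np.log(2))

# Lemma check: every eigenvalue lam of L is an eigenvalue of H + K^2/lam.
lams = np.linalg.eigvalsh(L)
for lam in lams:
    aux = H + (K @ K) / lam
    assert np.min(np.abs(np.linalg.eigvalsh(aux) - lam)) < 1e-6 * (1 + abs(lam))

print("min gap between consecutive eigenvalues:", np.min(np.diff(np.sort(lams))))
```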
\section{Final remarks on quantisation}
While Theorem \ref{thm_quant_kipp_intro} fixes the disproven Kippenhahn conjecture, the obtained quantities, and thus the bounds on the size $n$ of $X$ and $Y$ needed, are exponential in the size $d$ of $A$ and $B$. These bounds are likely far from optimal, a view suggested by the small size of the $X$ and $Y$ which we have found for the Kippenhahn conjecture counterexamples. There is a natural matrix convex set called a free spectrahedron associated to the matrices $A$ and $B$ in the counterexample family, consisting of a formal union of symmetric matrices $X$ and $Y$ of every size $n \in \mathbb{N}$ for which $L(X,Y)$ is positive semidefinite. The geometry of the boundary of this region is determined at the $n = d$ level; in fact, every boundary point can be naturally compressed to a $d \times d$ boundary point (cf.\ also \cite[Section~3]{matricial_relaxation} or \cite{spectrahedral_containment}). We are thus led to consider that $n = d$ should suffice as a bound for the Quantum Kippenhahn Theorem. On the other hand, a recent result of Huber and Netzer \cite{polytopes} in a similar situation suggests that $n = 2$ may be sufficient in all cases. It would be interesting to know how much the existing bounds can be tightened, and whether the desired property emerges at the first level of non-commutativity for $X$ and $Y$, that is, for $n = 2$. Alternatively, it would also be interesting to find matrices $A$ and $B$ which generate $M_{d}(\mathbb{C})$, such that for generic $2 \times 2 $ hermitian $X$ and $Y$, $\mbox{Det}(I + A \otimes X + B \otimes Y)$ is a square. Preliminary numerical investigation suggests that almost any $2 \times 2$ hermitian $X$ and $Y$ without common eigenspaces breaks the square determinant property when applied to an $8 \times 8$ Kippenhahn counterexample of the type generated by Theorem \ref{thm_construct_simple}. We therefore conjecture that the smallest such $A$ and $B$ have to be of size at least 10.
\end{document}
\begin{document}
\title{Optical Modulation of Electron Beams in Free Space}
\author{F.~Javier~Garc\'{\i}a~de~Abajo}
\email[Corresponding author: ]{\\ [email protected]}
\affiliation{ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain}
\affiliation{ICREA-Instituci\'o Catalana de Recerca i Estudis Avan\c{c}ats, Passeig Llu\'{\i}s Companys 23, 08010 Barcelona, Spain}
\author{Andrea~Kone\v{c}n\'{a}}
\affiliation{ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain}
\begin{abstract}
We exploit free-space interactions between electron beams and tailored light fields to imprint on-demand phase profiles on the electron wave functions. Through rigorous semiclassical theory involving a quantum description of the electrons, we show that monochromatic optical fields focused in vacuum can be used to correct electron beam aberrations and produce selected focal shapes. Stimulated elastic Compton scattering is exploited to imprint the phase, which is proportional to the integrated optical field intensity along the electron path and depends on the transverse beam position. The required light intensities are attainable in currently available ultrafast electron microscope setups, thus opening the field of free-space optical manipulation of electron beams.
\end{abstract}
\date{\today}
\maketitle
\section{Introduction}
Electron microscopy has experienced impressive advances over the last decades thanks to the design of sophisticated magnetostatic and electrostatic lenses that reduce electron optics aberrations \cite{HRU98,BDK02,HS19} and are capable of focusing electron beams (e-beams) with sub{\aa}ngstrom accuracy \cite{NCD04,MKM08}. In addition, the availability of more selective monochromators \cite{KLD14} enables the exploration of sample excitations down to the mid-infrared regime \cite{LTH17,HNY18,HKR19,HHP19}. Such precise control over e-beam shape and energy is crucial for atomic-scale imaging and spectroscopy \cite{HRU98,BDK02,HS19,KLD14,NCD04,MKM08,LTH17,HNY18,HKR19,HHP19}.
The focused e-beam profile ultimately depends on the phase acquired by the electron along its passage through the microscope column. Besides electron optics lenses, several physical elements have been demonstrated to control transverse e-beam shaping. In particular, biprisms based on biased wires provide a dramatic example of laterally-varying phase imprinting that is commonly used for e-beam splitting in electron holography \cite{MD1956}, along with applications such as symmetry-selected plasmon excitation in metallic nanowires \cite{GBL17}. In a related context, vortex e-beams have been realized using a magnetic pseudo-monopole \cite{BVV14}. Recently, plates with individually-biased perforations have been developed to enable position-selective control over the electric Aharonov-Bohm phase stamped on the electron wave function \cite{VBM18}, while passive carved plates have been employed as amplitude filters to produce highly-chiral electron vortices \cite{VTS10,MAA11,SLL14} and aberration correctors \cite{GTY17,SRP18}.
The electron phase can also be modified by the ponderomotive force associated with the interaction between e-beams and optical fields. In particular, periodic light standing waves were predicted to produce electron diffraction \cite{KD1933}, eventually observed in a challenging experiment \cite{FAB01,FB02,B07} that had to circumvent the weak free-space electron-photon coupling associated with energy-momentum mismatch \cite{paper149}. Such mismatch forbids single photon emission or absorption processes by free electrons, consequently limiting electron-light coupling to second-order interactions that concatenate an even number of virtual photon events. This type of interaction has been recently exploited to produce attosecond free-electron pulses \cite{KSH18,K19_2}. Interestingly, the presence of material structures introduces scattered optical fields that can supply momentum and break the mismatch, thus enabling real photon processes \cite{paper149}, used for example in laser-driven electron accelerators \cite{BNM19,SMY19}. Because the strength of scattered fields reflects the nanoscale optical response of the materials involved, this was speculated to enable electron energy-gain spectroscopy as a way to dramatically improve spectral resolution in electron microscopes \cite{H99,paper114,H09}, as later corroborated in experiment \cite{paper306}. By synchronizing the arrival of electron and light pulses at the sample, photon-induced near-field electron microscopy (PINEM) was demonstrated to exert temporal control over the electron wave function along the beam direction \cite{BFZ09,paper151,PLZ10,PZ12,KGK14,PLQ15,FES15,paper282,EFS16,RB16,VFZ16,KML17,FBR17,PRY17,paper311,MB18,MB18_2,paper325,paper332,K19,paper339,T20,KLS20,WDS20}. 
Additionally, modulation of the transverse wave function can be achieved in PINEM by laterally shaping the employed light \cite{paper272}, which results in the transfer of linear \cite{paper311} and angular \cite{paper332,paper312} momentum between photons and electrons.
Recently, we have proposed to use PINEM to imprint on-demand transverse e-beam phase profiles \cite{paper351}, thus relying on ultrafast e-beam shaping as an alternative approach to aberration correction. This method enables fast active control over the modulated e-beam at the expense of retaining only $\sim1/3$ of monochromatic electrons and potentially introducing decoherence through inelastic interactions with the light scatterer. An approach to phase moulding in which no materials are involved and the electron energy is preserved would then be desirable.
In this Letter, we propose an optical free-space electron modulator (OFEM) in which a phase profile is imprinted on the transverse electron wave function by means of stimulated elastic Compton scattering associated with the $A^2$ term in the light-electron coupling Hamiltonian. The absence of material structures prevents electron decoherence and enables the use of high light intensities, as required to activate ponderomotive forces. We present a simple, yet rigorous semiclassical theory that supports applications in aberration correction and transverse e-beam shaping. While optical e-beam phase stamping has been demonstrated using a continuous-wave laser in a tour-de-force experiment \cite{SAC19}, we envision pulsed illumination as a more feasible route to implement an OFEM, exploiting recent advances in ultrafast electron microscopy, particularly in systems that incorporate light injection with high numerical aperture \cite{paper325} for diffraction-limited patterning of the optical field.
\section{Free-space optical phase imprinting}
We study the free-space interaction between an e-beam and a light field represented by its vector potential $\Ab({\bf r},t)$. Taking the e-beam propagation direction along $z$, it is convenient to write the electron wave function as $\psi_z(\Rb,t)=\ee^{{\rm i} q_0z-{\rm i} E_0t/\hbar}\phi({\bf r},t)$, where $\Rb=(x,y)$ denotes transverse coordinates and we separate the slowly-varying envelope $\phi({\bf r},t)$ from the fast oscillations imposed by the central wave vector $q_0$ and energy $E_0$. Under typical conditions met in electron microscopes, and assuming that interaction with light only produces small variations in the electron energy compared with $E_0$, we can adopt the nonrecoil approximation to reduce the Dirac equation in the minimal coupling scheme to an effective Schr\"odinger equation \cite{ValerioCompton},
\begin{align}
\left(\partial_t+\vb\cdot\nabla\right)\phi({\bf r},t)=\frac{-{\rm i}}{\hbar}\mathcal{H}'({\bf r},t)\phi({\bf r},t),
\nonumber
\end{align}
where
\begin{align}
\mathcal{H}'=\frac{e}{c}\vb\cdot\Ab+\frac{e^2}{2m_{\rm e} c^2\gamma}\left(A_x^2+A_y^2+\frac{1}{\gamma^2}A_z^2\right)
\nonumber
\end{align}
is the interaction Hamiltonian, $\vb=v\zz$ is the electron velocity, and $\gamma=1/\sqrt{1-v^2/c^2}$ introduces relativistic corrections to the $A^2$ term. This equation admits the analytical solution
\begin{align}
\phi({\bf r},t)=\phi_0({\bf r}-\vb t)\exp\left[\frac{-{\rm i}}{\hbar}\int_{-\infty}^t \!\!\! dt' \;\mathcal{H}'({\bf r}-\vb t+\vb t',t')\right],
\nonumber
\end{align}
where $\phi_0({\bf r}-\vb t)$ is the incident electron wave function.
We consider that the light field acts on the electron over a sufficiently short path length as to neglect any transverse variation in its wave function during interaction. We also note that the $\vb\cdot\Ab$ term in $\mathcal{H}'$ does not contribute to the final electron state because it represents real photon absorption or emission events, which are kinematically forbidden. Likewise, following a similar argument, under monochromatic illumination with light of frequency $\omega$, the time-varying components in $A^2$ ($\propto\ee^{\pm2{\rm i}\omega t}$), which describe two-photon emission or absorption, also produce vanishing contributions. The remaining terms $\propto\ee^{\pm{\rm i}\omega t\mp{\rm i}\omega t}$ represent stimulated elastic Compton scattering, a second-order process that combines virtual absorption and emission of photons, amplified by the large population of their initial and final states. An alternative description of this effect is provided by the ponderomotive force acting on a classical point-charge electron and giving rise to diffraction in the resulting effective potential \cite{B07}. As we are interested in imprinting a phase on the electron wave function without altering its energy, we consider spectrally narrow external illumination that can be effectively regarded as monochromatic, such as that produced by laser pulses of much longer duration than the electron pulse. Writing the external field as ${\bf E}({\bf r},t)=2{\rm Re}\{{\bf E}({\bf r})\ee^{-{\rm i}\omega t}\}$, the transmitted wave function becomes
\begin{align}
\psi_z(\Rb,t)=\psi_0({\bf r}-\vb t)\;\ee^{{\rm i}\varphi(\Rb)},
\nonumber
\end{align}
where
\begin{align}
\varphi(\Rb)\!=&\frac{-1}{\mathcal{M}\omega^2} \!\!\!\int_{-\infty}^\infty \!\!\!\!\!dz \bigg[|E_x({\bf r})|^2\!+\!|E_y({\bf r})|^2\!+\!\frac{1}{\gamma^2}|E_z({\bf r})|^2\bigg]
\label{phase}
\end{align}
is an imprinted phase that depends on transverse coordinates $\Rb$, we define the scaled mass $\mathcal{M}=m_{\rm e}\gamma v/c\alpha$, and $\alpha\approx1/137$ is the fine structure constant.
\begin{figure}
\caption{{\bf Optical free-space electron modulator (OFEM).}
\label{Fig1}
\end{figure}
\section{Description of an OFEM}
We envision an OFEM placed right before the objective lens of an electron microscope [Fig.\ \ref{Fig1}(a)], in a region where the e-beam spans a large diameter ($\gtrsim100$ times the light wavelength). The OFEM could consist of a combination of planar and parabolic mirrors with drilled holes that allow the e-beam to pass through the optical focal region [Fig.\ \ref{Fig1}(b)]. The electron is then exposed to intense fields that can be shaped with diffraction-limited lateral resolution through a spatial light modulator and a high numerical aperture of the parabolic mirror. This results in a controlled position-dependent phase, as prescribed by Eq.\ (\ref{phase}) [Fig.\ \ref{Fig1}(c)]. Considering a monochromatic e-beam and omitting for simplicity an overall $\ee^{-{\rm i} E_0t/\hbar}$ time-dependent factor, free propagation of the electron wave function between planes $z$ and $z_f$ is described by
\begin{align}
\psi_{z_f}(\Rb_f)&=\iint\frac{d^2{\bf Q}\,d^2\Rb}{(2\pi)^2}\,\ee^{{\rm i}{\bf Q}\cdot(\Rb_f-\Rb)+{\rm i} q_z(z_f-z)}\psi_{z}(\Rb) \nonumber\\
&\propto\int d^2\Rb\;\ee^{{\rm i} q_0|\Rb_f-\Rb|^2/2(z_f-z)}\psi_{z}(\Rb),
\label{psi1}
\end{align}
where the second line is obtained by performing the ${\bf Q}=(q_x,q_y)$ integral in the paraxial approximation (i.e., $q_z=\sqrt{q_0^2-Q^2}\approx q_0-Q^2/2q_0$) and we are interested in exploring positions ${\bf r}_f=(\Rb_f,z)$ near an electron focal point. In a simplified microscope model, we take $z_L$ at the entrance of the objective lens where the OFEM is also placed, and the incident electron is a spherical wave $\psi_{z_L}(\Rb)\propto\ee^{{\rm i} q_0R^2/2(z_L-z_{\rm xo})}$ emanating from the crossover point ${\bf r}=(0,0,z_{\rm xo})$ following the condenser lens. Introducing in Eq.\ (\ref{psi1}) the phase $\ee^{-{\rm i} q_0 R^2/2f}$ produced by an objective lens with focal distance $f$ and aperture radius $R_{\rm max}$, we have (see Appendix\ \ref{secS1})
\begin{align}
\psi_{z_f}(\Rb_f)&\propto\int\!d^2\vec{\theta}\,\ee^{-{\rm i} q_0\vec{\theta}\cdot\Rb_f}\ee^{{\rm i}\chi(\vec{\theta})+{\rm i}\varphi(\Rb)}\ee^{{\rm i} q_0R^2\Delta/2},
\label{psifinal}
\end{align}
where $\vec{\theta}=\Rb/(z_f-z_L)$, we define $\Delta=1/(z_f-z_L)+1/(z_L-z_{\rm xo})-1/f$, the phases $\chi$ and $\varphi$ are produced by aberrations and the OFEM [Eq.\ (\ref{phase})], respectively, and the integral is restricted to $\theta<R_{\rm max}/(z_f-z_L)$. In what follows, we study the electron wave function profile $\psi_{z_f}(\Rb_f)$ as given by Eq.\ (\ref{psifinal}) at the focal plane $z_f$, defined by the condition $\Delta=0$.
{\it Required light intensity.---}From Eq.\ (\ref{phase}), the imprinted phase shift scales with the light intensity $I_0=c\,|E|^2/2\pi$ roughly as $\varphi/I_0\sim-2\pi L/\mathcal{M}c\omega^2$, where $L$ is the effective length of light-electron interaction, which depends on the focusing conditions of the external illumination. For example, for an electron moving along the axis of an optical paraxial vortex beam of azimuthal angular momentum number $m=1$ and wavelength $\lambda_0=2\pi c/\omega$, we have $L\approx2\lambda_0/\theta_L^2$, where $\theta_L$ is the light beam divergence half-angle (see Appendix\ \ref{secS3}). Under these conditions, a phase $\varphi=2\pi$ is achieved with a light power $\mathcal{P}=2\mathcal{M}c^2\omega\sim40\,$kW for 60\,keV electrons and $\lambda_0=500\,$nm; this result is independent of $\theta_L$, emphasizing the important role of phase accumulation along a long interaction region in a loosely focused light beam. This power level can be reached using nanosecond laser pulses synchronized with the electron passage through the optical field \cite{paper325}. We note that the phase scaling $\varphi\sim I_0/(v\gamma\omega^2)$ [see Eq.\ (\ref{phase})] leaves some room for improvement by placing the OFEM in low-energy regions of the e-beam to reduce the optical power demand.
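The $\sim40\,$kW estimate follows directly from $\mathcal{P}=2\mathcal{M}c^{2}\omega$; a short numerical sketch (Python, with SI constants rounded to four digits) reproduces it:

```python
import numpy as np

# Physical constants (SI)
c = 2.998e8          # speed of light, m/s
me = 9.109e-31       # electron mass, kg
e_keV = 1.602e-16    # J per keV
alpha = 1 / 137.036  # fine structure constant

# 60 keV electrons and lambda_0 = 500 nm light, as in the text
Ekin = 60 * e_keV
mc2 = me * c**2
gamma = 1 + Ekin / mc2
v = c * np.sqrt(1 - 1 / gamma**2)
omega = 2 * np.pi * c / 500e-9

# Scaled mass M = m_e * gamma * v / (c * alpha) and the power
# P = 2 M c^2 omega needed for a 2*pi phase on the axis of an
# m = 1 paraxial vortex beam.
Mcal = me * gamma * v / (c * alpha)
P = 2 * Mcal * c**2 * omega
print(f"P = {P/1e3:.0f} kW")   # ~40 kW, as quoted in the text
```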
\section{Aberration correction}
As an application of lateral phase moulding, we explore the conditions needed to compensate spherical aberration, corresponding to \cite{AOP01,PPB18} $\chi(\theta)=C_3q_0\theta^4/4$ in Eq.\ (\ref{psifinal}), where $C_3$ is a length coefficient. Upon examination of the phase profile imprinted by paraxial light vortex beams (see Appendix\ \ref{secS2}), we find that an azimuthal number $m=3$ produces the required radial dependence $\varphi(R)=-(\pi^2\,\mathcal{P}/96\,\mathcal{M}c^2\omega)\,(\theta_LR/\lambda_0)^4$ under the condition $R\ll\lambda_0/2\pi\theta_L$. For typical microscope parameters $C_3=f=1\,$mm, 60\,keV electrons, $R_{\rm max}=30\,\upmu$m, and $\lambda_0=500\,$nm, the above condition is satisfied with $\theta_L\ll0.15^\circ$. Then, compensation of spherical aberration (i.e., $\varphi=-\chi$) is realized with a beam power $\mathcal{P}=(6\hbar c^2/\pi^4\alpha)\,C_3q_0^2\lambda_0^3/\theta_L^4(z_f-z_L)^4\sim3\times10^8\,$W, which is attainable using femtosecond laser pulses in ultrafast electron microscopes \cite{FES15,paper311,KLS20,WDS20}.
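The quoted power can likewise be reproduced from the formula above (a numerical sketch; taking $\theta_L=0.15^{\circ}$, the upper end of the stated paraxial window, as an illustrative reference value):

```python
import numpy as np

# SI constants
hbar, c, alpha = 1.055e-34, 2.998e8, 1 / 137.036
me = 9.109e-31

# Parameters from the text: C3 = f = z_f - z_L = 1 mm, 60 keV electrons,
# lambda_0 = 500 nm; theta_L = 0.15 deg is our illustrative reference.
C3 = zfL = 1e-3
lam0 = 5e-7
theta_L = np.radians(0.15)

# Electron wave number q0 = gamma m_e v / hbar = pc / (hbar c) at 60 keV
mc2 = me * c**2
E = mc2 + 60 * 1.602e-16
pc = np.sqrt(E**2 - mc2**2)
q0 = pc / (hbar * c)

P = (6 * hbar * c**2 / (np.pi**4 * alpha)) * C3 * q0**2 * lam0**3 \
    / (theta_L**4 * zfL**4)
print(f"P = {P:.1e} W")   # of order 3e8 W, as quoted
```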
\section{Transverse e-beam shaping}
The production of on-demand electron spot profiles involves a two-step process comprising the determination of the necessary phase pattern $\varphi(\Rb)$, and from here the required optical beam parameters that generate that phase. While this is a complex task in general, we can find an approximate analytical solution for one-dimensional systems, assuming translational invariance along a direction $y$ perpendicular to the electron velocity. We therefore consider an optical beam characterized by an electric field ${\bf E}({\bf r})=E(x,z)\yy$ and explore the generation of focal electron shapes defined by a wave function $\psi_z(x)$ independent of $y$. For light propagating along the positive $z$ direction, we can write without loss of generality $E(x,z)=\int_{-k_0}^{k_0}(dk_x/2\pi)\,\ee^{{\rm i}(k_xx+k_zz)}\,\beta_{k_x}$ with $k_z=\sqrt{k_0^2-k_x^2}$ and $k_0=2\pi/\lambda_0$ in terms of the expansion coefficients $\beta_{k_x}$. Inserting this expression into Eq.\ (\ref{phase}), we find (see Appendix\ \ref{secS4}) $\varphi(x)=\varphi_0-(1/2\pi\mathcal{M}\omega^2)\int_{-k_0}^{k_0}dk_x\,\ee^{2{\rm i} k_xx}(k_z/|k_x|)\,\beta_{k_x}\beta_{-k_x}^*$, where $\varphi_0$ is a global phase. Given a target profile $\varphi^{\rm target}(x)$, we can then use the approximation $\beta_{k_x}\beta_{-k_x}^*\approx-2\mathcal{M}\omega^2(|k_x|/k_z)\int dx\,\ee^{-2{\rm i} k_xx}\,\varphi^{\rm target}(x)$ to generate the needed light beam coefficients. A particular solution is obtained by imposing $\beta_{k_x}=\beta_{-k_x}^*$, which renders $\beta_{k_x}$ as the square root of the right-hand side in the above expression. For any solution, combining these two integral expressions and dismissing $\varphi_0$, we find
\begin{align}
\varphi(x)=\frac{1}{\pi}\int_{-R_{\rm max}}^{R_{\rm max}}\!\!dx'\,\frac{\sin\left[4\pi(x-x')/\lambda_0\right]}{x-x'}\varphi^{\rm target}(x'),
\label{phifinal}
\end{align}
which yields a diffraction-limited phase profile.
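Equation (\ref{phifinal}) is straightforward to evaluate numerically. The sketch below (Python with NumPy; the linear target phase and the ratio $R_{\rm max}/\lambda_0=12.5$ of Fig.\ \ref{Fig2} are illustrative choices) applies the low-pass kernel on a discrete grid and confirms that the target is reproduced away from the aperture edges:

```python
import numpy as np

# Units of R_max; R_max/lambda_0 = 12.5 as in Fig. 2 (illustrative values).
Rmax, lam0 = 1.0, 1.0 / 12.5
x = np.linspace(-Rmax, Rmax, 2001)
dx = x[1] - x[0]

A, B = 6.0, 0.0                  # linear target phi = A*x/Rmax + B
phi_t = A * x / Rmax + B

# Kernel sin(4*pi*u/lambda_0)/(pi*u), with its sinc limit 4/lambda_0 at u = 0.
u = x[:, None] - x[None, :]
kernel = np.where(np.abs(u) < 1e-12,
                  4 / lam0,
                  np.sin(4 * np.pi * u / lam0) / (np.pi * u))
phi = kernel @ phi_t * dx        # discrete version of Eq. (phifinal)

# The low-pass kernel reproduces the target away from the aperture edges,
# where ringing from the truncation at +/- R_max dominates.
interior = np.abs(x) < 0.8 * Rmax
err = np.max(np.abs(phi - phi_t)[interior]) / A
print(f"max interior deviation: {err:.3f} (fraction of A)")
```

The residual deviation comes from the truncation of the target at $\pm R_{\rm max}$ and shrinks as $R_{\rm max}/\lambda_0$ grows, consistent with the sharpening trend noted in the text.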
\begin{figure}
\caption{{\bf 1D electron focus shaping.}
\label{Fig2}
\end{figure}
We explore this strategy in Fig.\ \ref{Fig2}, where the left panels present the OFEM phase, whereas the right ones show the corresponding color-matched wave functions obtained by inserting that phase into Eq.\ (\ref{psifinal}) without aberrations ($\chi=0$) and with the integral over $\theta_y$ yielding a trivial overall constant factor. Broken curves in Fig.\ \ref{Fig2}(a,b) and red curves in Fig.\ \ref{Fig2}(c-f) stand for target profiles, whereas the rest of the curves are obtained by accounting for optical diffraction [i.e., by transforming the target phase as prescribed by Eq.\ (\ref{phifinal})]. In-plane OFEM and focal coordinates $x$ and $x_f$ are normalized as explained in the caption, thus defining universal curves for a specific choice of the ratio between the objective aperture radius and the light wavelength, $R_{\rm max}/\lambda_0=12.5$. Linear phase profiles [Fig.\ \ref{Fig2}(a)], which are well reproduced by diffraction-limited illumination, give rise to peaked electron wave functions [Fig.\ \ref{Fig2}(b)] centered at positions $x_f=(A/2\pi)\lambda_{e\perp}$ that depend on the slope of the phase $\varphi=Ax/R_{\rm max}+B$, with the offset value $B$ determining the focal peak phase. The situation is more complicated when aiming to produce two electron peaks, which can be achieved with an intermittent phase profile that combines two different slopes, either without [Fig.\ \ref{Fig2}(c,d)] or with [Fig.\ \ref{Fig2}(e,f)] offset to generate symmetric or antisymmetric wave functions, respectively. Light diffraction reduces the contrast of the obtained focal shapes, but still preserves well-defined intensity peaks [Fig.\ \ref{Fig2}(b,d,f), light curves], which become sharper when $R_{\rm max}/\lambda_0$ is increased (see Fig.\ \ref{FigS1}).
\begin{figure}
\caption{{\bf 2D electron focus shaping.}
\label{Fig3}
\end{figure}
For actual two-dimensional beams, using consolidated results of image compression theory \cite{HLO1980,AT10}, we can find approximate contour spot profiles by setting the OFEM phase to the argument of the Fourier transform of solid shapes filling those contours. This is illustrated in Fig.\ \ref{Fig3}, where panel (b) represents the phase of the object in (a), while panel (c) is the actual diffraction-limited phase obtained by convolving (b) with a point spread function $\propto J_1(2\pi R/\lambda_0)/R$ (see Appendix\ \ref{secS5}), which produces a blurred, but still discernible electron focal image.
\section{Conclusion}
In brief, shaped optical fields can modulate the electron wave function in free space to produce on-demand e-beam focal profiles. The required light intensities are reachable using pulsed illumination, currently available in ultrafast electron microscope setups. We have illustrated this idea with simple examples of target and optical profiles, but a higher degree of control over the transverse electron wave function should benefit from machine learning techniques \cite{SOJ20} for the well-defined problem of finding the optimum light beam angular profile that best fits the desired e-beam spot shape. In combination with spatiotemporal light shaping, the proposed OFEM element should enable the exploration of nanoscale nonlocal correlations in sample dynamics.
\section*{Acknowledgments}
We thank Mathieu Kociak and Ido Kaminer for helpful and enjoyable discussions. This work has been supported in part by ERC (Advanced Grant 789104-eNANO), the Spanish MINECO (MAT2017-88492-R and SEV2015-0522), the Catalan CERCA Program, and Fundaci\'{o} Privada Cellex.
\begin{widetext}
\appendix
\section{Derivation of equation\ (\ref{psifinal})}
\label{secS1}
We start from Eq.\ (\ref{psi1}),
\begin{align}
\psi_{z_f}(\Rb_f)\propto\int d^2\Rb\;\ee^{{\rm i} q_0|\Rb_f-\Rb|^2/2(z_f-z_L)}\mathcal{F}(\Rb)\psi_{z_L}(\Rb),
\label{eq2}
\end{align}
where we have inserted a transmission factor $\mathcal{F}(\Rb)=\Theta(R_{\rm max}-R)\ee^{-{\rm i} q_0 R^2/2f}$ associated with a focusing lens of focal distance $f$ and aperture radius $R_{\rm max}$ placed at the $z=z_L$ plane. Specifying the incident electron as a spherical wave $\psi_{z_L}(\Rb)\propto\ee^{{\rm i} q_0R^2/2(z_L-z_{\rm xo})}$ emanating from the crossover point at $z=z_{\rm xo}$ (see Fig.\ \ref{Fig1}), Eq.\ (\ref{eq2}) reduces to
\begin{align}
\psi_{z_f}(\Rb_f)\propto\int_{R<R_{\rm max}} d^2\Rb\;
\ee^{-{\rm i} q_0\Rb\cdot\Rb_f/(z_f-z_L)}
\ee^{{\rm i} q_0R^2\Delta/2},
\label{eq22}
\end{align}
where $\Delta=1/(z_f-z_L)+1/(z_L-z_{\rm xo})-1/f$. In Eq.\ (\ref{eq22}), we dismiss an overall factor $\ee^{{\rm i} q_0R_f^2/2(z_f-z_L)}$, which introduces a negligible phase modulation across the focal spot [e.g., for 60\,keV electrons ($q_0\approx1291/$nm) and $z_f-z_L\sim f=1\,$mm, a change in $R_f$ by 1\,nm produces a phase shift $q_0R_f^2/2(z_f-z_L)\sim0.6\,$mrad]. This expression directly becomes Eq.\ (\ref{psifinal}) by defining the 2D angular coordinate $\vec{\theta}=\Rb/(z_f-z_L)$ and inserting in the integrand two phase correction factors due to aberrations and the OFEM, represented through $\ee^{{\rm i}\chi(\vec{\theta})}$ and $\ee^{{\rm i}\varphi(\Rb)}$, respectively.
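The size of the dismissed phase factor can be checked directly with the numbers quoted above (a minimal numerical sketch):

```python
# Check of the phase factor dismissed after Eq. (eq22): for 60 keV electrons
# (q0 ~ 1291/nm) and z_f - z_L ~ 1 mm, a 1 nm change in R_f across the focal
# spot shifts the phase q0*R_f^2/2(z_f - z_L) by well under a milliradian.
q0 = 1291.0          # electron wave vector in 1/nm (60 keV)
Rf = 1.0             # focal-plane coordinate change in nm
dz = 1e6             # z_f - z_L = 1 mm expressed in nm
phase_mrad = 1e3 * q0 * Rf**2 / (2 * dz)
print(phase_mrad)    # ~ 0.6 mrad, negligible across the focal spot
```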
\section{Phase imprinted by paraxial optical vortex beams}
\label{secS2}
The electric field associated with an external light beam propagating along positive $z$ directions admits the general expression
\begin{align}
{\bf E}({\bf r})=\sum_{\sigma={\rm s,p}}\int\frac{d^2{\bf k}_\perp}{(2\pi)^2}
\,\beta_{{\bf k}_\perp\sigma}\,\hat{\bf e}_{{\bf k}_\perp\sigma}\,\ee^{{\rm i}{\bf k}_\perp\cdot\Rb+{\rm i} k_zz},
\label{Egeneral}
\end{align}
where we sum over polarizations $\sigma\!=$s,\,p and integrate over wave vectors ${\bf k}_\perp=(k_x,k_y)$ within the light cone $k_\perp<k_0=\omega/c$, $k_z=\sqrt{k_0^2-k_\perp^2}$ is the light wave vector component along $z$, $\hat{\bf e}_{{\bf k}_\perp{\rm s}}=(1/k_\perp)(-k_y\hat{\bf x}+k_x\hat{\bf y})$ and $\hat{\bf e}_{{\bf k}_\perp{\rm p}}=(1/k_0k_\perp)(k_z{\bf k}_\perp-k_\perp^2\hat{\bf z})$ are unit polarization vectors, and $\beta_{{\bf k}_\perp\sigma}$ are expansion coefficients. We now consider beams with a well-defined azimuthal angular momentum number $m$ by setting $\beta_{{\bf k}_\perp\sigma}=\beta_{k_\perp\sigma}\ee^{{\rm i} m\varphi_{{\bf k}_\perp}}$. Inserting these coefficients in Eq.\ (\ref{Egeneral}) and carrying out the $\varphi_{{\bf k}_\perp}$ integral, we find
\begin{align}
{\bf E}({\bf r})=\frac{{\rm i}^m}{2\pi}\int_0^{k_0}k_\perp\,dk_\perp\,\left[
{\rm i} \beta_{k_\perp{\rm s}}\,{\bf E}_{k_zm{\rm s}}({\bf r})
-\beta_{k_\perp{\rm p}}\,{\bf E}_{k_zm{\rm p}}({\bf r})
\right],
\label{Ebessel}
\end{align}
where
\begin{subequations}
\begin{align}
{\bf E}_{k_zm{\rm s}}({\bf r})&=\left[\frac{{\rm i} m}{k_\perp R} J_m(k_\perp R)\,\hat{\bf R}
- J_m'(k_\perp R)\,\hat{\bf \varphi}\right] \ee^{{\rm i} m \varphi} \ee^{{\rm i} k_z z}, \nonumber\\
{\bf E}_{k_zm{\rm p}}({\bf r})&=\frac{k_z}{k_0}\left[{\rm i} J_m'(k_\perp R)\,\hat{\bf R}
- \frac{m}{k_\perp R} J_m(k_\perp R)\,\hat{\bf \varphi}
+\frac{k_\perp}{k_z} J_m(k_\perp R)\,\hat{\bf z}\right] \ee^{{\rm i} m \varphi} \ee^{{\rm i} k_z z} \nonumber
\end{align}
\end{subequations}
are Bessel beams. We now evaluate Eq.\ (\ref{phase}) using the field of Eq.\ (\ref{Ebessel}), which leads to
\begin{align}
\varphi(\Rb)&=\frac{-\alpha c}{4\pi m_{\rm e} v\gamma\omega^2}
\int_0^{k_0}k_\perp k_z\,dk_\perp \label{phibessel}\\
&\times\left[
\left|\beta_{k_\perp{\rm s}}+({\rm i} k_z/k_0)\beta_{k_\perp{\rm p}}\right|^2J_{m-1}^2(k_\perp R)
+\left|\beta_{k_\perp{\rm s}}-({\rm i} k_z/k_0)\beta_{k_\perp{\rm p}}\right|^2J_{m+1}^2(k_\perp R)
+(k_\perp^2/\gamma^2k_0^2)|\beta_{k_\perp{\rm p}}|^2J_m^2(k_\perp R)
\right].
\nonumber
\end{align}
For optical paraxial beams constructed from a limited range of transverse wave vectors $k_\perp\lesssim k_0\theta_L$, where $\theta_L$ is the divergence half-angle, with $\beta_{k_\perp\sigma}\equiv\beta_\sigma$ taken to be constant within that range, we use the approximation $J_m(\theta)\approx\theta^m/2^mm!$ for $|\theta|\ll1$ to write the lowest-order contribution to Eq.\ (\ref{phibessel}) as
\begin{align}
\varphi(\Rb)\approx\frac{-m\alpha(k_0\theta_L)^{2m}}{2\pi 4^m m!^2 m_{\rm e} v\gamma\omega}\,
\left|\beta_{\rm s}+{\rm i}\beta_{\rm p}\right|^2\,R^{2(m-1)}.
\nonumber
\end{align}
This approximation is valid for small arguments of the Bessel functions, that is, $R\ll1/k_0\theta_L=\lambda_0/2\pi\theta_L$. For a light wavelength $\lambda_0=500\,$nm and a typical objective lens radius $R_{\rm max}\sim30\,\upmu$m, this imposes an upper limit on the divergence angle of the optical beam, $\theta_L\ll0.15^\circ$.
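Both the quoted divergence limit and the small-argument Bessel expansion behind it can be verified with a short numerical sketch (the wavelength and aperture radius are the values quoted in the text):

```python
import math

# Validity estimate for the paraxial OFEM phase: the expansion
# J_m(t) ~ t^m / (2^m m!) requires R << lambda0/(2*pi*theta_L), which for
# R up to R_max bounds the light-beam divergence half-angle theta_L.
lam0 = 500e-9                 # light wavelength (m)
Rmax = 30e-6                  # objective aperture radius (m)
theta_deg = math.degrees(lam0 / (2 * math.pi * Rmax))
print(theta_deg)              # ~ 0.15 degrees, as quoted in the text

# Sanity check of the small-argument expansion for m = 1 at t = 0.05,
# comparing the truncated power series of J_1(t) with t/2.
t = 0.05
j1_series = sum((-1)**k * (t / 2)**(2 * k + 1)
                / (math.factorial(k) * math.factorial(k + 1))
                for k in range(10))
rel_err = abs(j1_series - t / 2) / j1_series
print(rel_err)                # relative error << 1
```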
The power carried by the light beam can be obtained by integrating the Poynting vector associated with the field of Eq.\ (\ref{Egeneral}). We find
\begin{align}
\mathcal{P}=\frac{c^2}{8\pi^3\omega}\sum_{\sigma={\rm s,p}}\int d^2{\bf k}_\perp
\,k_z\,\left|\beta_{{\bf k}_\perp\sigma}\right|^2\approx\frac{\theta_L^2\omega^2}{8\pi^2c}\left(|\beta_{\rm s}|^2+|\beta_{\rm p}|^2\right),
\label{P}
\end{align}
where the rightmost approximation corresponds to the paraxial beam considered above. When the external light is made of only p or s polarization components, we can use Eq.\ (\ref{P}) to recast the phase as
\begin{align}
\varphi(\Rb)\approx\frac{-\pi\alpha m\,\mathcal{P}}{m!^2 m_{\rm e} cv\gamma\omega}\,\left(k_0\theta_LR/2\right)^{2(m-1)},
\label{phaseR}
\end{align}
which is proportional to the beam power $\mathcal{P}$. Interestingly, setting $m=3$ yields $\varphi(\Rb)\propto R^4$, which has the same radial scaling as the phase associated with spherical aberration in the electron beam.
\section{Effective length of interaction for an $m=1$ paraxial light beam}
\label{secS3}
For $m=1$, Eq.\ (\ref{phaseR}) is independent of $R$ and $\theta_L$ within the paraxial approximation. This allows us to estimate the power needed to obtain a phase shift of $2\pi$ in the electron as $\mathcal{P}=2m_{\rm e} cv\gamma\omega/\alpha$. Now, from Eq.\ (\ref{phase}), we can roughly estimate the phase shift as
\begin{align}
\varphi\sim-2\pi\alpha I_0L/m_{\rm e} v\gamma\omega^2,
\label{varphiL}
\end{align}
where the focal field intensity is absorbed into the light intensity $I_0=(c/2\pi)|{\bf E}(0)|^2$ and $L$ is an effective light-electron interaction length that depends on the light focusing conditions. In particular, we can find $L$ for the paraxial beams considered above by first calculating the field intensity from Eq.\ (\ref{Ebessel}) for ${\bf r}=0$ and $m=1$; we find $|{\bf E}(0)|^2\approx(k_0^4\theta_L^4/32\pi^2)\left|\beta_{\rm s}+{\rm i}\beta_{\rm p}\right|^2$, which permits writing the beam power in Eq.\ (\ref{P}) (for either s or p polarization) as $\mathcal{P}=8\pi c I_0/\omega^3\theta_L^2$, and from here and Eq.\ (\ref{phaseR}), we have $\varphi=-8\pi^2\alpha c I_0/m_{\rm e} v\gamma\omega^3\theta_L^2$. Comparing this expression with Eq.\ (\ref{varphiL}), we obtain $L=2\lambda_0/\theta_L^2$.
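The two estimates of this appendix can be put into numbers with a back-of-the-envelope sketch (the 60\,keV electron energy, 500\,nm wavelength, and $0.1^\circ$ divergence half-angle are assumed illustrative values, not parameters quoted here):

```python
import math

# For m = 1, the power needed for a 2*pi electron phase shift is
# P = 2*m_e*c*v*gamma*omega/alpha, and the effective interaction length of a
# paraxial beam is L = 2*lambda0/theta_L**2 (both from Appendix S3).
me = 9.109e-31                # electron mass (kg)
c = 2.998e8                   # speed of light (m/s)
alpha = 1 / 137.036           # fine-structure constant
lam0 = 500e-9                 # assumed light wavelength (m)
omega = 2 * math.pi * c / lam0

gamma = 1 + 60.0 / 511.0      # assumed 60 keV kinetic energy
v = c * math.sqrt(1 - 1 / gamma**2)

P = 2 * me * c * v * gamma * omega / alpha
print(P)                      # tens of kW: peak powers reachable with pulses

theta_L = math.radians(0.1)   # assumed divergence half-angle
L = 2 * lam0 / theta_L**2
print(L)                      # effective interaction length (m)
```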
\section{Derivation of equation\ (\ref{phifinal})}
\label{secS4}
The electric field ${\bf E}({\bf r})=E(x,z)\hat{\bf y}$ of a light beam propagating along positive $z$ directions and having translational invariance along $y$ can be regarded as a combination of plane waves of wave vectors $(k_x,k_z=\sqrt{k_0^2-k_x^2})$. More precisely, we can write
\begin{align}
E(x,z)=\int_{-k_0}^{k_0}\frac{dk_x}{2\pi}\,\ee^{{\rm i}(k_xx+k_zz)}\,\beta_{k_x}
\label{Ex}
\end{align}
in terms of expansion coefficients $\beta_{k_x}$. Inserting this expression into Eq.\ (\ref{phase}) and performing the $z$ integral, we find
\begin{align}
\varphi(x)=\frac{-1}{2\pi\mathcal{M}\omega^2}\int_{-k_0}^{k_0}dk_x\int_{-k_0}^{k_0}dk'_x\,\ee^{{\rm i}(k_x-k'_x)x}\,\beta_{k_x}\beta^*_{k'_x}\,\delta(k_z-k'_z).
\label{phixbeta}
\end{align}
Now, the $\delta$ function, which contributes only at $k_x=\pm k'_x$, can be recast as
\begin{align}
\delta(k_z-k'_z)=\frac{k_z}{|k_x|}\left[\delta(k_x-k'_x)+\delta(k_x+k'_x)\right],
\label{delta}
\end{align}
thus permitting us to carry out the $k'_x$ integral to find
\begin{align}
\varphi(x)=\varphi_0-\frac{1}{2\pi\mathcal{M}\omega^2}\int_{-k_0}^{k_0}dk_x\,\frac{k_z}{|k_x|}\,\ee^{2{\rm i} k_xx}\,\beta_{k_x}\beta^*_{-k_x},
\label{phix}
\end{align}
where $\varphi_0=(-1/2\pi\mathcal{M}\omega^2)\int_{-k_0}^{k_0}dk_x\,(k_z/|k_x|)\,|\beta_{k_x}|^2$ is an overall phase that we dismiss because it is independent of $x$ and does not affect the electron intensity profile. Given a target OFEM phase $\varphi^{\rm target}(x)$, although the range of $k_x$ integration is not infinite, we can approximate the light beam coefficients as the inverse Fourier transform
\begin{align}
\beta_{k_x}\beta_{-k_x}^*\approx-2\mathcal{M}\omega^2\frac{|k_x|}{k_z}\int dx\,\ee^{-2{\rm i} k_xx}\,\varphi^{\rm target}(x).
\label{betabeta}
\end{align}
A specific solution of this equation can be found by setting $\beta_{k_x}=\beta^*_{-k_x}$, which renders $\beta_{k_x}$ as the square root of the right-hand side of Eq.\ (\ref{betabeta}). For any solution, by inserting Eq.\ (\ref{betabeta}) into Eq.\ (\ref{phix}), we find
\begin{align}
\varphi(x)=\frac{1}{\pi}\int_{-R_{\rm max}}^{R_{\rm max}}\!\!dx'\,\frac{\sin\left[2k_0(x-x')\right]}{x-x'}\varphi^{\rm target}(x'),
\nonumber
\end{align}
which is Eq.\ (\ref{phifinal}). The actual phase $\varphi(x)$ that can be imprinted with a light beam in the OFEM element [i.e., taking into account the finite range of wave vectors $|k_x|\le k_0$ contributing to Eq.\ (\ref{Ex})] is the target phase convoluted with the 1D point spread function $\sin(2k_0x)/x$.
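This convolution is straightforward to evaluate numerically. The sketch below applies it to a step-like target phase over the aperture (the grid, the particular target, and the ratio $R_{\rm max}/\lambda_0$ are illustrative choices, not values used in the figures):

```python
import numpy as np

# Diffraction-limited OFEM phase from Eq. (phifinal): the target phase
# convolved with the 1D point spread function sin(2*k0*x)/x over |x'| < R_max.
lam0 = 1.0                        # light wavelength (arbitrary units)
Rmax = 12.5 * lam0                # aperture radius, R_max/lambda0 = 12.5
k0 = 2 * np.pi / lam0

x = np.linspace(-Rmax, Rmax, 1501)
target = np.sign(x) * np.pi / 2   # step-like target phase (two offsets)

dx = x[1] - x[0]
xx = x[:, None] - x[None, :]      # x - x'
xx_safe = np.where(np.abs(xx) < 1e-12, 1.0, xx)
psf = np.where(np.abs(xx) < 1e-12, 2 * k0,
               np.sin(2 * k0 * xx_safe) / xx_safe)
phi = (dx / np.pi) * psf @ target # band-limited version of the target phase
print(abs(phi[1125] - target[1125]))   # small deviation away from the step
```

Away from the phase discontinuity and the aperture edges, the band-limited phase tracks the target closely, with Gibbs-type ripples concentrated near the step, consistent with the reduced contrast discussed in the main text.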
\section{Synthesis of 2D phase profiles}
\label{secS5}
In the analysis of Fig.\ \ref{Fig3}, we start with a black-and-white target intensity profile [e.g., $I(\Rb_f)=1$ and 0 in the black and white areas of Fig.\ \ref{Fig3}(a)] and perform the fast Fourier transform (FFT) \cite{PTV92} to compute $I_{{\bf Q}}=\int(d^2\Rb_f/4\pi^2)\,I(\Rb_f)\,\ee^{{\rm i}{\bf Q}\cdot\Rb_f}$, where ${\bf Q}=q_0\Rb/(z_f-z_L)$ [see definitions of different variables in Eq.\ (\ref{psifinal})]. We then retain only the phase of $I_{{\bf Q}}$, which is known to capture the contour of the designated intensity profile \cite{HLO1980,AT10}, so we regard it as the target phase $\varphi^{\rm target}(\Rb)={\rm arg}\{I_{{\bf Q}}\}$ to be delivered by the OFEM [Fig.\ \ref{Fig3}(b)]. By analogy to the 1D profile analysis in Sec.\ \ref{secS4}, we introduce the effects of light diffraction in a phenomenological way by first Fourier transforming $\varphi_{{\bf k}_\parallel}=\int_{R<R_{\rm max}}(d^2\Rb/4\pi^2)\,\varphi^{\rm target}(\Rb)\,\ee^{-{\rm i}{\bf k}_\parallel\cdot\Rb}$, where ${\bf k}_\parallel=(k_x,k_y)$ is the light wave vector component parallel to the OFEM plane and the integral extends over the objective lens aperture defined by $R<R_{\rm max}$; in a second step, we transform back to $\varphi(\Rb)=\int_{k_\parallel<k_0} d^2{\bf k}_\parallel\,\varphi_{{\bf k}_\parallel}\,\ee^{{\rm i}{\bf k}_\parallel\cdot\Rb}$, only retaining diffraction-limited components $k_\parallel<k_0$ [Fig.\ \ref{Fig3}(c)]; this procedure is equivalent to calculating $\varphi(\Rb)$ as the convolution of $\varphi^{\rm target}(\Rb)$ with the 2D point spread function $k_0 J_1(k_0R)/2\pi R$. We finally obtain the actual spot profile [Fig.\ \ref{Fig3}(d)] by evaluating Eq.\ (\ref{psifinal}) without aberrations ($\chi=0$) at the focal plane ($\Delta=0$).
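The three steps of this pipeline can be sketched with standard FFTs. The following minimal example uses a synthetic filled square in place of the shape of Fig.\ \ref{Fig3}(a); the grid size and the band-limit cutoff are assumed illustrative parameters:

```python
import numpy as np

# Appendix-S5 pipeline on a synthetic binary target (a filled square).
N = 256
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
target_I = ((np.abs(x) < 0.3) & (np.abs(y) < 0.3)).astype(float)

# Step 1: the phase of the FFT of the solid shape is the target OFEM phase.
phi_target = np.angle(np.fft.fftshift(np.fft.fft2(target_I)))

# Step 2: mimic light diffraction by discarding Fourier components of the
# phase above an (assumed) cutoff, equivalent to convolving with the 2D
# point spread function k0*J1(k0*R)/(2*pi*R).
F = np.fft.fft2(phi_target)
ky = np.fft.fftfreq(N)[:, None]
kx = np.fft.fftfreq(N)[None, :]
cutoff = 0.12                                  # assumed k0 in grid units
F[np.hypot(kx, ky) > cutoff] = 0.0
phi = np.real(np.fft.ifft2(F))                 # diffraction-limited phase

# Step 3: focal intensity from the phase-only aperture (chi = 0, Delta = 0).
psi = np.fft.fftshift(np.fft.fft2(np.exp(1j * phi)))
I_focus = np.abs(psi)**2
print(I_focus.shape)
```

As in Fig.\ \ref{Fig3}(d), the band-limited phase yields a blurred but recognizable contour of the target shape in the focal intensity.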
Incidentally, apart from an overall normalization factor, we find that the light wavelength $\lambda_0$, the electron wave vector $q_0$ (or equivalently, the electron energy $E_0$), and the in-plane focal and OFEM coordinates $\Rb_f$ and $\Rb$ enter the analysis presented in this work only through the combinations $q_0\Rb\cdot\Rb_f/(z_f-z_L)=2\pi\,(\Rb/R_{\rm max})\cdot(\Rb_f/\lambda_{e\perp})$ [e.g., see Eq.\ (\ref{psifinal})] and $\Rb\cdot{\bf k}_\parallel=2\pi\,({\bf k}_\parallel/k_0)\cdot(\Rb/R_{\rm max})\,(R_{\rm max}/\lambda_0)$, where ${\bf k}_\parallel/k_0$ is the in-plane projection of the unit vector indicating the incident light direction, whereas $\lambda_{e\perp}=\lambda_e/{\rm NA}$ is the projected focal-plane electron wavelength defined as the ratio between the de Broglie wavelength $\lambda_e=2\pi/q_0$ and the numerical aperture of the objective lens ${\rm NA}=R_{\rm max}/(z_f-z_L)$. By normalizing $\Rb_f$ and $\Rb$ to $\lambda_{e\perp}$ and $R_{\rm max}$, respectively, we obtain universal curves that only depend on the ratio $R_{\rm max}/\lambda_0$ between the aperture radius of the objective lens and the light wavelength. We use this normalization in Figs.\ \ref{Fig2} and \ref{Fig3}, as well as in the additional Fig.\ \ref{FigS1}.
\section{Evaluation of the OFEM phase from the incident light field amplitude}
\label{secS6}
For completeness, we present a generalization of Eqs.\ (\ref{Ex}) and (\ref{phixbeta}) to 2D beams, for which the incident field can be expressed in general as indicated in Eq.\ (\ref{Egeneral}). Inserting the latter into Eq.\ (\ref{phase}), performing the integral over $z$, and using a relation similar to Eq.\ (\ref{delta}), the phase reduces to
\begin{align}
\varphi(\Rb)=\frac{-1}{(2\pi)^3\mathcal{M}\omega^2}\sum_{\sigma\sigma'}\int d^2{\bf k}_\perp\int d^2{\bf k}_\perp'\,\ee^{{\rm i}({\bf k}_\perp-{\bf k}_\perp')\cdot\Rb}\,\beta_{{\bf k}_\perp\sigma}\beta^*_{{\bf k}_\perp'\sigma'}\;S_{\sigma,\sigma'}(k_\perp,\phi,\phi')\;\frac{k_z}{k_\perp}\,\delta(k_\perp-k_\perp')\,\Theta(k_0-k_\perp),
\label{newfi}
\end{align}
where $\phi$ and $\phi'$ are the azimuthal angles of ${\bf k}_\perp$ and ${\bf k}_\perp'$, respectively, and we define
\begin{align}
S_{\sigma,\sigma'}(k_\perp,\phi,\phi')=\hat{\bf e}_{{\bf k}_\perp\sigma}\cdot\hat{\bf e}_{{\bf k}_\perp'\sigma'}=\left\{
\begin{matrix}
\cos(\phi-\phi')& \quad\quad\quad\text{for }\sigma={\rm s},\;\sigma'={\rm s},\\
-\dfrac{k_z}{k_0}\sin(\phi-\phi')& \quad\quad\quad\text{for }\sigma={\rm s},\;\sigma'={\rm p},\\
\dfrac{k_z}{k_0}\sin(\phi-\phi')& \quad\quad\quad\text{for }\sigma={\rm p},\;\sigma'={\rm s},\\
\dfrac{k_z^2}{k_0^2}\cos(\phi-\phi')+\dfrac{k_\perp^2}{k_0^2}& \quad\quad\quad\text{for }\sigma={\rm p},\;\sigma'={\rm p},
\end{matrix}
\right.
\nonumber
\end{align}
and $k_z=\sqrt{k_0^2-k_\perp^2}$. Equation\ (\ref{newfi}) involves a 3D integral that needs to be evaluated over the coordinates $\Rb$ of the 2D lens aperture, thus demanding an unaffordable numerical effort consisting of $\sim N^5$ operations for $N\sim10^2-10^3$ points per dimension. A more practical way of evaluating this expression can be found by first computing its Fourier transform
\begin{align}
\varphi_{{\bf k}_\perp}&=\int d^2\Rb\,\varphi(\Rb)\,\ee^{-{\rm i}{\bf k}_\perp\cdot\Rb} \nonumber\\
&=\frac{-1}{2\pi\mathcal{M}\omega^2}\sum_{\sigma\sigma'}\int d^2{\bf k}_\perp'\,\beta_{{\bf k}_\perp'\sigma}\beta^*_{{\bf k}_\perp'-{\bf k}_\perp,\sigma'}\;S_{\sigma,\sigma'}(k_\perp',\phi',\phi'')\;\frac{k'_z}{k_\perp'}\,\delta\left(k_\perp'-|{\bf k}_\perp'-{\bf k}_\perp|\right)\,\Theta(k_0-k_\perp'),
\nonumber
\end{align}
where $\phi''$ is the azimuthal angle of ${\bf k}_\perp'-{\bf k}_\perp$. Here, the $\delta$ function imposes $k_\perp'=k_\perp/[2\cos(\phi-\phi')]$ and introduces a denominator given by the derivative of its argument. After some simple algebra, we find
\begin{align}
\varphi_{{\bf k}_\perp}=\frac{-1}{4\pi\mathcal{M}\omega^2}\sum_{\sigma\sigma'}\int_{-\phi_0}^{\phi_0} \frac{d\phi'}{\cos^2(\phi-\phi')}\;k'_z\;\beta_{{\bf k}_\perp'\sigma}\beta^*_{{\bf k}_\perp'-{\bf k}_\perp,\sigma'}\;S_{\sigma,\sigma'}(k_\perp',\phi',\phi''),
\nonumber
\end{align}
where $\phi_0=\cos^{-1}\left(k_\perp/2k_0\right)$. Together with the inverse Fourier transform $\varphi(\Rb)=(2\pi)^{-2}\int d^2{\bf k}_\perp\,\varphi_{{\bf k}_\perp}\,\ee^{{\rm i}{\bf k}_\perp\cdot\Rb}$, the evaluation now takes an affordable number of operations $\sim N^3\log^2N$ (i.e., a factor of $N$ for the 1D integral over $\phi'$ and the remaining factors needed for the 2D FFTs), which can be beneficial for carrying out extensive phase calculations, as needed to train machine learning algorithms for fast determination of the light coefficients $\beta_{{\bf k}_\perp\sigma}$ in Eq.\ (\ref{Egeneral}).
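The gain from this Fourier-space evaluation can be quantified with a trivial operation count (constant prefactors are ignored):

```python
import math

# Speed-up of the N^3*log^2(N) Fourier-space scheme over the ~N^5 direct
# evaluation of Eq. (newfi), for grid sizes in the range quoted in the text.
ratios = []
for N in (100, 1000):
    direct = N**5                        # direct real-space evaluation
    fourier = N**3 * math.log2(N)**2     # 1D phi' integral + 2D FFTs
    ratios.append(direct / fourier)
    print(N, ratios[-1])                 # speed-up grows rapidly with N
```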
\begin{figure}
\caption{Same as Fig.\ \ref{Fig2}.
\label{FigS1}}
\end{figure}
\end{widetext}
\begin{thebibliography}{67}
\expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi
\expandafter\ifx\csname bibnamefont\endcsname\relax
\def\bibnamefont#1{#1}\fi
\expandafter\ifx\csname bibfnamefont\endcsname\relax
\def\bibfnamefont#1{#1}\fi
\expandafter\ifx\csname citenamefont\endcsname\relax
\def\citenamefont#1{#1}\fi
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem[{\citenamefont{Haider et~al.}(1998)\citenamefont{Haider, Rose,
Uhlemann, Schwan, Kabius, and Urban}}]{HRU98}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Haider}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Rose}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Uhlemann}},
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Schwan}},
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Kabius}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Urban}},
\bibinfo{journal}{Ultramicroscopy} \textbf{\bibinfo{volume}{75}},
\bibinfo{pages}{53} (\bibinfo{year}{1998}).
\bibitem[{\citenamefont{Batson et~al.}(2002)\citenamefont{Batson, Dellby, and
Krivanek}}]{BDK02}
\bibinfo{author}{\bibfnamefont{P.~E.} \bibnamefont{Batson}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Dellby}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{O.~L.} \bibnamefont{Krivanek}},
\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{418}},
\bibinfo{pages}{617} (\bibinfo{year}{2002}).
\bibitem[{\citenamefont{Hawkes and Spence}(2019)}]{HS19}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Hawkes}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Spence}},
\emph{\bibinfo{title}{Springer Handbook of Microscopy}}
(\bibinfo{publisher}{Springer Nature Switzerland AG}, \bibinfo{year}{2019}).
\bibitem[{\citenamefont{Nellist et~al.}(2004)\citenamefont{Nellist, Chisholm,
Dellby, Krivanek, Murfitt, Szilagyi, Lupini, Borisevich, {Sides Jr.}, and
Pennycook}}]{NCD04}
\bibinfo{author}{\bibfnamefont{P.~D.} \bibnamefont{Nellist}},
\bibinfo{author}{\bibfnamefont{M.~F.} \bibnamefont{Chisholm}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Dellby}},
\bibinfo{author}{\bibfnamefont{O.~L.} \bibnamefont{Krivanek}},
\bibinfo{author}{\bibfnamefont{M.~F.} \bibnamefont{Murfitt}},
\bibinfo{author}{\bibfnamefont{Z.~S.} \bibnamefont{Szilagyi}},
\bibinfo{author}{\bibfnamefont{A.~R.} \bibnamefont{Lupini}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Borisevich}},
\bibinfo{author}{\bibfnamefont{W.~H.} \bibnamefont{{Sides Jr.}}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.~J.}
\bibnamefont{Pennycook}}, \bibinfo{journal}{Science}
\textbf{\bibinfo{volume}{305}}, \bibinfo{pages}{1741} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Muller et~al.}(2008)\citenamefont{Muller, {Fitting
Kourkoutis}, Murfitt, Song, Hwang, Silcox, Dellby, and Krivanek}}]{MKM08}
\bibinfo{author}{\bibfnamefont{D.~A.} \bibnamefont{Muller}},
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{{Fitting Kourkoutis}}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Murfitt}},
\bibinfo{author}{\bibfnamefont{J.~H.} \bibnamefont{Song}},
\bibinfo{author}{\bibfnamefont{H.~Y.} \bibnamefont{Hwang}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Silcox}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Dellby}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{O.~L.} \bibnamefont{Krivanek}},
\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{319}},
\bibinfo{pages}{1073} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Krivanek et~al.}(2014)\citenamefont{Krivanek, Lovejoy,
Dellby, Aoki, Carpenter, Rez, Soignard, Zhu, Batson, Lagos et~al.}}]{KLD14}
\bibinfo{author}{\bibfnamefont{O.~L.} \bibnamefont{Krivanek}},
\bibinfo{author}{\bibfnamefont{T.~C.} \bibnamefont{Lovejoy}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Dellby}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Aoki}},
\bibinfo{author}{\bibfnamefont{R.~W.} \bibnamefont{Carpenter}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Rez}},
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Soignard}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Zhu}},
\bibinfo{author}{\bibfnamefont{P.~E.} \bibnamefont{Batson}},
\bibinfo{author}{\bibfnamefont{M.~J.} \bibnamefont{Lagos}},
\bibnamefont{et~al.}, \bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{514}}, \bibinfo{pages}{209} (\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Lagos et~al.}(2017)\citenamefont{Lagos, Tr\"ugler,
Hohenester, and Batson}}]{LTH17}
\bibinfo{author}{\bibfnamefont{M.~J.} \bibnamefont{Lagos}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Tr\"ugler}},
\bibinfo{author}{\bibfnamefont{U.}~\bibnamefont{Hohenester}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.~E.}
\bibnamefont{Batson}}, \bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{543}}, \bibinfo{pages}{529} (\bibinfo{year}{2017}).
\bibitem[{\citenamefont{Hage et~al.}(2018)\citenamefont{Hage, Nicholls, Yates,
McCulloch, Lovejoy, Dellby, Krivanek, Refson, and Ramasse}}]{HNY18}
\bibinfo{author}{\bibfnamefont{F.~S.} \bibnamefont{Hage}},
\bibinfo{author}{\bibfnamefont{R.~J.} \bibnamefont{Nicholls}},
\bibinfo{author}{\bibfnamefont{J.~R.} \bibnamefont{Yates}},
\bibinfo{author}{\bibfnamefont{D.~G.} \bibnamefont{McCulloch}},
\bibinfo{author}{\bibfnamefont{T.~C.} \bibnamefont{Lovejoy}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Dellby}},
\bibinfo{author}{\bibfnamefont{O.~L.} \bibnamefont{Krivanek}},
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Refson}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{Q.~M.} \bibnamefont{Ramasse}},
\bibinfo{journal}{Sci.\ Adv.} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{eaar7495} (\bibinfo{year}{2018}).
\bibitem[{\citenamefont{Hage et~al.}(2019)\citenamefont{Hage, Kepaptsoglou,
Ramasse, and Allen}}]{HKR19}
\bibinfo{author}{\bibfnamefont{F.~S.} \bibnamefont{Hage}},
\bibinfo{author}{\bibfnamefont{D.~M.} \bibnamefont{Kepaptsoglou}},
\bibinfo{author}{\bibfnamefont{Q.~M.} \bibnamefont{Ramasse}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.~J.} \bibnamefont{Allen}},
\bibinfo{journal}{Phys.\ Rev.\ Lett.} \textbf{\bibinfo{volume}{122}},
\bibinfo{pages}{016103} (\bibinfo{year}{2019}).
\bibitem[{\citenamefont{Hachtel et~al.}(2019)\citenamefont{Hachtel, Huang,
Popovs, Jansone-Popova, Keum, Jakowski, Lovejoy, Dellby, Krivanek, and
Idrobo}}]{HHP19}
\bibinfo{author}{\bibfnamefont{J.~A.} \bibnamefont{Hachtel}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Huang}},
\bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Popovs}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Jansone-Popova}},
\bibinfo{author}{\bibfnamefont{J.~K.} \bibnamefont{Keum}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Jakowski}},
\bibinfo{author}{\bibfnamefont{T.~C.} \bibnamefont{Lovejoy}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Dellby}},
\bibinfo{author}{\bibfnamefont{O.~L.} \bibnamefont{Krivanek}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~C.}
\bibnamefont{Idrobo}}, \bibinfo{journal}{Science}
\textbf{\bibinfo{volume}{363}}, \bibinfo{pages}{525} (\bibinfo{year}{2019}).
\bibitem[{\citenamefont{M{\"o}llenstedt and D{\"u}ker}(1956)}]{MD1956}
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{M{\"o}llenstedt}}
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{D{\"u}ker}},
\bibinfo{journal}{Zeitschrift f{\"u}r Physik} \textbf{\bibinfo{volume}{145}},
\bibinfo{pages}{377} (\bibinfo{year}{1956}).
\bibitem[{\citenamefont{Guzzinati et~al.}(2017)\citenamefont{Guzzinati, Beche,
Lourenco-Martins, Martin, Kociak, and Verbeeck}}]{GBL17}
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Guzzinati}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Beche}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Lourenco-Martins}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Martin}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Kociak}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Verbeeck}},
\bibinfo{journal}{Nat.\ Commun.} \textbf{\bibinfo{volume}{8}},
\bibinfo{pages}{14999} (\bibinfo{year}{2017}).
\bibitem[{\citenamefont{B\'ech\'e et~al.}(2014)\citenamefont{B\'ech\'e,
Van~Boxem, Van~Tendeloo, and Verbeeck}}]{BVV14}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{B\'ech\'e}},
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
We prove some general theorems for preserving Dependent Choice when taking symmetric extensions, some of which are unwritten folklore results. We apply these to various constructions to obtain various simple consistency proofs.
\end{abstract}
\title{Preserving Dependent Choice}
\section{Introduction}
Dependent Choice is one of the best known weak versions of the axiom of choice, and perhaps its most natural weakening. Indeed, Dependent Choice---or $\axiom{DC}$---is sometimes mistaken for countable choice, and while it is strong enough to provide us with the basis of analysis (the Baire Category Theorem, a well-behaved theory of Borel sets and measure, etc.), it is also consistent with assumptions such as ``all sets of reals are regular'' for many versions of regularity (e.g., Lebesgue measurability).
Therefore, in many constructions of models without the axiom of choice, it is desirable to preserve $\axiom{DC}$. Sometimes we have to work quite hard for this, and sometimes it comes easily. The purpose of this note is to provide some straightforward conditions which allow for the preservation of $\axiom{DC}$, as well as its stronger versions $\axiom{DC}_{<\kappa}$ for an infinite cardinal $\kappa$.
The arguments shown here should be considered folklore, even if no explicit formulation or proof appeared in print at this level of generality until today.\footnote{With the exception of the author putting into print the folklore \autoref{lemma:closure} in \cite{Karagila:2014}.} They were used by different people over the years, even if applied to specific constructions each time.
\section{Preliminaries}
Our notation is mostly standard, and follows Jech for the most part. We use $\axiom{DC}_\lambda$ to denote the statement ``every $\lambda$-closed tree has a chain of length $\lambda$ or a maximal element''; for the case $\lambda=\omega$ we simply write $\axiom{DC}$, and if $\lambda$ is a limit cardinal, $\axiom{DC}_{<\lambda}$ abbreviates $\forall\kappa<\lambda,\axiom{DC}_\kappa$. Similarly, $\axiom{AC}_\lambda$ denotes the statement ``every family of $\lambda$ non-empty sets admits a choice function'' and $\axiom{AC}_{<\lambda}$ abbreviates $\forall\kappa<\lambda,\axiom{AC}_\kappa$. (We will not use $\axiom{AC}$ to mean $\axiom{AC}_\omega$, though, as $\axiom{AC}$ denotes the axiom of choice.)
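For example, $\axiom{DC}=\axiom{DC}_\omega$ is equivalent to the more familiar formulation: if $R$ is a binary relation on a non-empty set $X$ such that every $x\in X$ has some $y\in X$ with $x\mathrel{R}y$, then there is a sequence $\tup{x_n\mid n<\omega}$ with $x_n\mathrel{R}x_{n+1}$ for all $n$. Indeed, the tree of finite $R$-increasing sequences, ordered by end-extension, is vacuously $\omega$-closed and has no maximal elements, so a chain of length $\omega$ through it produces the desired sequence.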
Much has been written on $\axiom{DC}$; for example, if $\lambda$ is singular, then $\axiom{DC}_{<\lambda}$ implies $\axiom{DC}_\lambda$. See \cite[Chapter~8]{Jech:AC1973} for details and more.
\begin{theorem}[Folklore]
For a regular cardinal $\lambda$, $\axiom{DC}_\lambda$ holds if and only if every $\lambda^+$-closed forcing is $\lambda^+$-distributive.
\end{theorem}
\begin{proof}[Sketch of Proof]
Assuming $\axiom{DC}_\lambda$, the standard $\axiom{ZFC}$ proof translates immediately. In the other direction, if $\axiom{DC}_\lambda$ fails, there is a $\lambda$-closed tree with no maximal elements and no chains of length $\lambda$; in particular, the tree is vacuously $\lambda^+$-closed. However, forcing with the tree adds a new $\lambda$-sequence, since the generic chain must have length $\lambda$: it cannot terminate at a maximal element, and $\lambda$-closure lets us extend it past any length below $\lambda$. So the tree witnesses the failure of $\lambda^+$-distributivity.
\end{proof}
We will say that a class $A$ is \textit{$\kappa$-closed} if $A^{<\kappa}\subseteq A$. Of course this depends on the universe, and when that is not clear from context we will be sure to explicitly state what is the universe to which the closure is relative.
\subsection{Some Forcing Shorthands}
We will want to define and manipulate names in a fairly explicit manner. This means that we cannot make the usual simplifying assumptions that let us choose an arbitrary name with whatever properties are convenient. To that end, we define a few shorthand notations.
We say that a name $\dot y$ \textit{appears} in a name $\dot x$, if there is an ordered pair $\tup{p,\dot y}\in\dot x$. We similarly say a condition $p$ appears in $\dot x$ if there is an ordered pair $\tup{p,\dot y}\in\dot x$.
If $\{\dot y_i\mid i\in I\}$ is a collection of names, we define $\{\dot y_i\mid i\in I\}^\bullet$ to be the obvious way of turning it into a name, namely, $\{\tup{1,\dot y_i}\mid i\in I\}$. This extends to other very canonical definitions, e.g.\ $\tup{\dot x,\dot y}^\bullet$ is the simplest way of creating the name of an ordered pair with $\dot x$ and $\dot y$. Using this notation, by the way, $\check x=\{\check y\mid y\in x\}^\bullet$.
Additionally, if $\dot x$ is a name, and $p$ is a condition, we write $\dot x\mathbin\upharpoonright p$ for the name $\{\tup{q,\dot y}\mid q\leq p, \dot y\text{ appears in }\dot x, q\mathrel{\Vdash}\dot y\in\dot x\}$. It is easy to verify that $p\mathrel{\Vdash}\dot x=\dot x\mathbin\upharpoonright p$, and if $q\mathrel{\bot} p$, then $q\mathrel{\Vdash}\dot x\mathbin\upharpoonright p=\varnothing$.
\subsection{Symmetric Extensions}
If $\PP$ is a forcing, and $\pi$ is an automorphism of $\PP$, then $\pi$ extends to $\PP$ names recursively:\[\pi\dot x=\{\tup{\pi p,\pi\dot y}\mid\tup{p,\dot y}\in\dot x\}.\]
This action also respects the forcing relation, as shown in the following lemma.
\begin{lemma*}[The Symmetry Lemma]
Suppose that $p\in\PP$ is a condition, $\dot x$ is a $\PP$-name, $\varphi(\dot x)$ is a formula in the language of forcing, and $\pi$ is an automorphism of $\PP$, then\[p\mathrel{\Vdash}\varphi(\dot x)\iff\pi p\mathrel{\Vdash}\varphi(\pi\dot x).\qed\]
\end{lemma*}
Fix an automorphism group $\mathscr G\leq\aut(\PP)$. We say that $\mathscr F$ is a \textit{normal filter of subgroups} of $\mathscr G$ if it is a filter on the lattice of subgroups which is closed under conjugations. Namely, $\mathscr F$ is closed under supergroups (with respect to $\mathscr G$) and intersections, and if $\pi\in\mathscr G$ and $H\in\mathscr F$, then $\pi H\pi^{-1}\in\mathscr F$ as well. If $\PP$ is a forcing, $\mathscr G$ is an automorphism group of $\PP$, and $\mathscr F$ is a normal filter of subgroups of $\mathscr G$ we say that $\tup{\PP,\mathscr G,\mathscr F}$ is a \textit{symmetric system}.\footnote{In many cases, it is enough to consider a normal filter base of subgroups, rather than a filter of subgroups, and we will not bother to make the distinction.}
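A typical example, underlying Cohen's first model, takes $\PP$ to be the forcing adding $\omega$ Cohen reals, whose conditions are finite partial functions from $\omega\times\omega$ to $2$; $\mathscr G$ the group of permutations of $\omega$, where $\pi$ acts on conditions by $\pi p(\pi n,m)=p(n,m)$; and $\mathscr F$ the normal filter of subgroups generated by $\{\operatorname{fix}(E)\mid E\in[\omega]^{<\omega}\}$, where $\operatorname{fix}(E)=\{\pi\in\mathscr G\mid\pi\mathbin\upharpoonright E=\mathrm{id}\}$. Normality holds here since $\pi\operatorname{fix}(E)\pi^{-1}=\operatorname{fix}(\pi``E)$.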
We say that a $\PP$-name $\dot x$ is \textit{$\mathscr F$-symmetric} if $\sym_\mathscr G(\dot x)=\{\pi\in\mathscr G\mid\pi\dot x=\dot x\}\in\mathscr F$, and if this property holds hereditarily for the names which appear in $\dot x$, we say that $\dot x$ is \textit{hereditarily $\mathscr F$-symmetric}. The class of hereditarily $\mathscr F$-symmetric names is denoted by $\axiom{HS}_\mathscr F$. We will omit the subscripts when the symmetric system is clear from context, which will be most of the time.
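The canonical example to keep in mind is the symmetric system of Cohen's first model: $\PP=\Add(\omega,\omega)$, whose conditions are finite partial functions $p\colon\omega\times\omega\to2$; $\mathscr G$ is the group of all permutations of $\omega$, acting by $\pi p(\pi n,m)=p(n,m)$; and $\mathscr F$ is generated by $\{\fix(E)\mid E\in[\omega]^{<\omega}\}$, where $\fix(E)=\{\pi\in\mathscr G\mid\pi\mathbin\upharpoonright E=\id\}$. Writing $\dot x_n=\{\tup{p,\check m}\mid p(n,m)=1\}$, one checks that $\pi\dot x_n=\dot x_{\pi n}$, so $\fix(\{n\})\subseteq\sym(\dot x_n)$, and the names $\dot x_n$, as well as $\{\dot x_n\mid n\in\omega\}^\bullet$, are all in $\axiom{HS}$.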
\begin{theorem*}
Suppose that $\tup{\PP,\mathscr G,\mathscr F}$ is a symmetric system and let $G$ be a $V$-generic filter. Then $M=\axiom{HS}^G=\{\dot x^G\mid\dot x\in\axiom{HS}\}$ is a transitive class of $V[G]$ such that $V\subseteq M\subseteq V[G]$, and $M$ is a class model of $\axiom{ZF}$.
\end{theorem*}
The model $M$ in the above theorem is called a \textit{symmetric extension} of $V$. Finally, we have a symmetric forcing relation, namely $\mathrel{\Vdash}^\axiom{HS}$, which is the relativization of the forcing to the symmetric extension, which satisfies the same basic properties as $\mathrel{\Vdash}$.
\begin{definition}
If $\tup{\PP,\mathscr G,\mathscr F}$ is a symmetric system, we say that a condition $p\in\PP$ is \textit{tenacious} if $\{\pi\in\mathscr G\mid\pi p=p\}\in\mathscr F$. We say that $\PP$ is tenacious if there is a dense subset of tenacious conditions.
\end{definition}
It turns out that this concept is, in a sense, redundant: if $\tup{\PP,\mathscr G,\mathscr F}$ is a symmetric system, then we can define a forcing $\PP^*\subseteq\PP$ such that $\tup{\PP^*,\mathscr G,\mathscr F}$ is equivalent to $\tup{\PP,\mathscr G,\mathscr F}$ and is not only tenacious, but in fact every condition in it is tenacious, as was shown in \cite[\S12]{Karagila:2016}. So when it is useful, we may assume $\PP$ is tenacious without loss of generality.
\section{Dependent Choice in Symmetric Extensions}
In \cite{Karagila:2014} we proved the following folklore lemma.
\begin{lemma}[Lemma~2.1 in \cite{Karagila:2014}]\label{lemma:closure}
Assume $\axiom{ZFC}$ holds. If $\tup{\PP,\mathscr G,\mathscr F}$ is a symmetric system such that $\PP$ is $\lambda$-closed and $\mathscr F$ is $\lambda$-complete, then $\mathrel{\Vdash}^\axiom{HS}\axiom{DC}_{<\lambda}$.
\end{lemma}
The proof is simple enough to merit repeating. But to simplify generalizations and variations, we will first extract the following lemma from the standard proof.
\begin{lemma}\label{lemma:model-closure}
Suppose that $M$ is a $\lambda$-closed inner model of $N$. If $N\models\axiom{DC}_{<\lambda}$, then $M\models\axiom{DC}_{<\lambda}$.
\end{lemma}
\begin{proof}
Suppose that $N$ satisfies $\axiom{DC}_{<\lambda}$, and let $T\in M$ be a $\kappa$-closed tree without maximal elements, for some $\kappa<\lambda$. By $\lambda$-closure of $M$, $T$ is also $\kappa$-closed in $N$, and therefore has a branch there; this branch is a function from $\kappa$ into $T$, so by $\lambda$-closure of $M$ again it lies in $M$, as wanted.
\end{proof}
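(Here we use the tree formulation of $\axiom{DC}_\kappa$: it is equivalent to the assertion that every $\kappa$-closed tree, i.e.\ one in which every chain of length ${<}\kappa$ has an upper bound, without maximal elements admits a chain of order type $\kappa$.)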
\begin{proof}[Proof of \autoref{lemma:closure}]
Let $G$ be a $V$-generic filter, and let $M$ be $\axiom{HS}^G$. It is enough, by the previous lemma, to prove that $M^\kappa\subseteq M$ for all $\kappa<\lambda$. And indeed, if $f\colon\kappa\to M$ for some $\kappa<\lambda$, let $\dot f$ be a name for $f$ such that all the names appearing in $\dot f$ are of the form $\tup{\check\alpha,\dot y}^\bullet$ where $\dot y\in\axiom{HS}$.
Let $p$ be any condition such that $p\mathrel{\Vdash}``\dot f$ is a function'', set $p_0=p$, and recursively extend $p_\alpha$ to $p_{\alpha+1}$ such that $p_{\alpha+1}$ decides the value of $\dot f(\check\alpha)$, going through limit steps using the fact that $\PP$ is $\lambda$-closed. Finally, for all $\alpha<\kappa$ there is a name $\dot y_\alpha\in\axiom{HS}$ such that $p_\kappa\mathrel{\Vdash}\dot f(\check\alpha)=\dot y_\alpha$, define $\dot g=\{\tup{\check\alpha,\dot y_\alpha}^\bullet\mid\alpha<\kappa\}^\bullet$.
Let $H=\bigcap_{\alpha<\kappa}\sym(\dot y_\alpha)$. By $\lambda$-closure of $\mathscr F$, $H\in\mathscr F$. It is easy to verify that $H$ is a subgroup of $\sym(\dot g)$, so $\dot g\in\axiom{HS}$ and $p_\kappa\mathrel{\Vdash}\dot g=\dot f$. This means that there is a dense open set of conditions $q\leq p$ such that for some $\dot g\in\axiom{HS}$, $q\mathrel{\Vdash}\dot g=\dot f$, so by genericity, $\dot f^G=f\in M$ as wanted. Now by the previous lemma, $M\models\axiom{DC}_\kappa$ for all $\kappa<\lambda$.
\end{proof}
\begin{lemma}\label{lemma:chain-condition}
We can replace ``$\PP$ is $\lambda$-closed'' by ``$\PP$ has the $\lambda$-c.c.'' in \autoref{lemma:closure}.\footnote{Amitayu Banerjee let us know that he independently made a similar observation.}
\end{lemma}
\begin{proof}[Sketch of Proof]
We again appeal to the argument that $M$ is $\lambda$-closed in $V[G]$. Suppose that $\dot f$ is a $\PP$-name for a function $f\colon\kappa\to M$.
For every $\alpha<\kappa$, let $D_\alpha$ be a maximal antichain of conditions $p$ such that for some $\dot y_p\in\axiom{HS}$, $p\mathrel{\Vdash}\dot f(\check\alpha)=\dot y_p$. We can now define $\dot y_\alpha$ to be the name $\bigcup_{p\in D_\alpha}\dot y_p\mathbin\upharpoonright p$. Without loss of generality each condition is tenacious, so by intersecting $\sym(\dot y_p)$ with the stabilizer of $p$, we can assume that $\pi\in\sym(\dot y_p)$ implies $\pi p=p$. In particular, since $|D_\alpha|<\lambda$ by the chain condition, $\lambda$-completeness of $\mathscr F$ gives $\bigcap_{p\in D_\alpha}\sym(\dot y_p)\in\mathscr F$, and it is easy to see that this is a subgroup of $\sym(\dot y_\alpha)$. Therefore $\dot y_\alpha\in\axiom{HS}$.
It follows that $H=\bigcap_{\alpha<\kappa}\sym(\dot y_\alpha)$ is in $\mathscr F$ and therefore $\dot g=\{\tup{\check\alpha,\dot y_\alpha}^\bullet\mid\alpha<\kappa\}^\bullet$ is such that $\dot g\in\axiom{HS}$. Since $\mathrel{\Vdash}\dot f=\dot g$, it follows that $f\in M$.
\end{proof}
This shows that if $\mathscr F$ is $\sigma$-complete, both c.c.c.\ and $\sigma$-closed forcings preserve $\axiom{DC}$. Philipp Schlicht raised a natural question: will properness suffice?
\begin{lemma}\label{lemma:proper}
If $\PP$ is proper and $\mathscr F$ is $\sigma$-complete, then $\axiom{DC}$ is preserved.
\end{lemma}
\begin{proof}[Sketch of Proof]
Let $G$ be a $V$-generic filter, and $M$ the symmetric extension given by $\axiom{HS}^G$. Suppose that $f\colon\omega\to M$ is a function, and let $\dot f$ be a name for it, and some $p$ which forces that $\dot f$ is a function into $\axiom{HS}$.
Let $N$ be a countable elementary submodel of $H(\theta)$ for a sufficiently large $\theta$, with $\tup{\PP,\mathscr G,\mathscr F},\dot f,p\in N$. By elementarity, $N$ contains ``enough'' names from $\axiom{HS}$ to compute all the possible values of $\dot f$; since there are only countably many of those, we can intersect the relevant groups, remaining in $\mathscr F$ by $\sigma$-completeness, to fix all the necessary names. Next, find an $N$-generic condition extending $p$, and use it to define a name for a function in $\axiom{HS}$ which the $N$-generic condition forces to be equal to $\dot f$. By density, this must have happened in $V[G]$, so $f\in M$.
\end{proof}
The keen-eyed reader might have noticed at this point that all these proofs have the same flavor: the symmetric extension is closed under ${<}\kappa$-sequences in the full extension. Does that provide us with a full characterization of the symmetric extensions which satisfy $\axiom{DC}_{<\kappa}$?
The answer is negative, as is to be expected. Let $\tup{\PP,\mathscr G,\mathscr F}$ be any symmetric system which preserves $\axiom{DC}_{<\kappa}$, and consider the product of $\tup{\PP,\mathscr G,\mathscr F}$ with the symmetric system $\tup{\Add(\omega,1),\aut(\Add(\omega,1)),\{\aut(\Add(\omega,1))\}}$. Namely, we take the product of $\PP$ with the forcing adding a single Cohen real, equipped with its full automorphism group and the trivial filter of subgroups. It is not hard to see that only $\PP$-names can be symmetric in this extension, so the symmetric extension is the same as the one given by $\tup{\PP,\mathscr G,\mathscr F}$ alone; but the full generic extension contains a Cohen real, so $\sigma$-closure is violated.
But is this the only trivial obstruction? The following theorem shows that, morally, the answer is yes. We will need the axiom $\axiom{SVC}$, ``Small Violations of Choice'', formulated by Andreas Blass in \cite{Blass:1979}. The axiom can be stated as ``the axiom of choice can be forced with a set-forcing''. In particular, symmetric extensions satisfy $\axiom{SVC}$, at least under the assumption that the ground model satisfied the axiom of choice.
\begin{theorem}
Suppose that $M\models\axiom{DC}_{<\kappa}+\axiom{SVC}$. Then $M$ is $\kappa$-closed in a model of $\axiom{ZFC}$.
\end{theorem}
\begin{proof}
Without loss of generality we can assume that $\kappa$ is the least such that $\axiom{DC}_\kappa$ fails. From \cite[Theorem~8.1]{Jech:AC1973} it follows that $\kappa$ is regular. Recall that $\axiom{SVC}$ can be restated as ``there exists a set $X$ such that forcing a well-ordering of $X$ forces the axiom of choice''. Since $\axiom{DC}_{<\kappa}$ holds, we can force a well-ordering of $X$ of type $\kappa$ by initial segments. By $\axiom{DC}_{<\kappa}$ this forcing is $\kappa$-closed and does not add ${<}\kappa$-sequences. Therefore $M$ is a $\kappa$-closed inner model of a model of $\axiom{ZFC}$.
\end{proof}
$\axiom{SVC}$ should not be necessary, but it is \textit{somewhat} necessary. On the one hand, it is easy to construct a class-symmetric extension which is $\kappa$-closed but does not satisfy $\axiom{SVC}$ (e.g.\ the class extensions given in \cite{Karagila:2014}). On the other hand, if it is consistent (modulo large cardinal hypotheses) with $\axiom{ZF}+\axiom{DC}$ that all successor cardinals have cofinality $\omega_1$, or that there is a generalized Morris-style model satisfying $\axiom{DC}$ (see \cite{Karagila:2018} for details),\footnote{Neither statement is known to be consistent with $\axiom{ZF}+\axiom{DC}$. We conjecture the latter is consistent.} then such a model cannot be extended to a model of $\axiom{ZFC}$ without adding ordinals. In particular, such a model is not $\aleph_1$-closed in a model of $\axiom{ZFC}$.
Finally, we remark that it is quite easy to verify that a $\sigma$-closed forcing must preserve $\axiom{DC}$. In a more general way, we can prove that a proper forcing cannot violate $\axiom{DC}$. For a complete discussion on the topic of properness in $\axiom{ZF}$, see the author's work with David Asper\'o in \cite{AsperoKaragila:2018}.
\section{Some Applications}
\subsection{Failures of \texorpdfstring{$\axiom{GCH}$}{GCH} at limit cardinals below a supercompact cardinal}\label{app:apter}
Arthur Apter proved in \cite{Apter:2012} the following theorem:
\begin{theorem*}[Apter, Theorem~3]
Assume $V\models\axiom{ZFC}+\axiom{GCH}+\kappa$ is supercompact. Then there is a symmetric extension in which $\axiom{AC}_\omega$ fails, $\kappa$ is a regular limit cardinal and supercompact, and $\axiom{GCH}$ holds at a limit cardinal $\delta$ if and only if $\delta>\kappa$.
\end{theorem*}
Of course, there are some concessions to be made. Supercompactness here is meant in the sense of ultrafilters, which is weaker than the sense of embeddings (e.g.\ $\omega_1$ can be supercompact in the sense used by Apter, but it cannot be the critical point of an elementary embedding). In addition, $\axiom{GCH}$ is weakened to mean that there is no injection from $\delta^{++}$ into $\mathcal P(\delta)$; this is because of the classical theorem that $\axiom{GCH}$ (in its standard formulations) implies the axiom of choice.
At the end of the paper, Apter asks whether this result can be improved by having some weak form of the axiom of choice hold. Amitayu Banerjee pointed out that \autoref{lemma:chain-condition} gives a simple answer based on Apter's original construction.
\begin{theorem}
Assume $V\models\axiom{ZFC}+\axiom{GCH}+\kappa$ is supercompact. Then there is a symmetric extension in which $\axiom{DC}_{<\kappa}$ holds, $\kappa$ is a regular limit cardinal and supercompact, and $\axiom{GCH}$ holds for a limit cardinal $\delta$ if and only if $\delta>\kappa$.
\end{theorem}
\begin{proof}
Apter's proof begins by preparing $V$ so that $\kappa$ is indestructibly supercompact and there is a club $C\subseteq\kappa$ with $\min C=\omega$, whose successor points are inaccessible, and such that for all $\delta\in C$, $2^\delta=2^{\delta^+}=\delta^{++}$.
Let $\tup{\kappa_i\mid i<\kappa}$ be a continuous enumeration of $C$, then $\PP$ is the Easton support product of $\Col(\kappa_i^{++},{<}\kappa_{i+1})$. We take $\mathscr G$ to be the Easton support product of the automorphism groups of each collapse, and $\mathscr F$ is the filter generated by the groups of the form $\fix(\alpha)=\{\pi\in\prod_{i\in C}\aut(\Col(\kappa_i^{++},{<}\kappa_{i+1}))\mid\pi\mathbin\upharpoonright\alpha=\id\}$. Namely, the filter is generated by groups which concentrate on only applying permutations above some fixed initial segment. Let $G$ be a $V$-generic filter for $\PP$ and let $M$ denote the symmetric extension.
Easily, $\mathscr F$ is $\kappa$-complete, and the Easton product is $\kappa$-c.c., so by \autoref{lemma:chain-condition} $\axiom{DC}_{<\kappa}$ holds, and by Lemma~3.3 in \cite{Hayut-Karagila:2018} $\kappa$ remains supercompact. Since $\axiom{GCH}$ held in $V$ above $\kappa$, and $\PP\subseteq V_\kappa$, it follows that for any limit cardinal $\delta>\kappa$ there is no injection from $\delta^{++}$ into $\mathcal P(\delta)$, since there is no such injection in $V[G]$, which agrees with $V$ on cardinals above $\kappa$.
It remains to show that if $\delta\leq\kappa$ is a limit cardinal, then $\delta^{++}$ can be injected into $\mathcal P(\delta)$. For this note that $V_\kappa^M=V_\kappa^{V[G]}$, so it is enough to prove this in $V[G]$.
First, note that if $\delta<\kappa$ is a limit cardinal in $M$, then there is some limit ordinal $i<\kappa$ such that $\delta=\kappa_i$. Next, note that the Easton product above $i$ is $\delta^{++}$-closed, so it neither adds subsets to $\delta$ nor collapses $\delta^{++}$; and the product up to $i$ is $\delta^{+}$-c.c., so it does not collapse $\delta^{++}$ either.
Finally, the same holds for $\kappa$ itself, although in $M$ there is no well-ordering of $\mathcal P(\kappa)$, so we have to settle for the fact that $\kappa^{++}$ injects into $\mathcal P(\kappa)$ by the same arguments as above.
\end{proof}
\subsection{Sets of reals and Dependent Choice} In recent times there has been renewed interest in many ``irregularity properties'' of sets of reals consistent with the failure of the axiom of choice already at that level: the existence of Luzin sets, Hamel bases, etc., in models where $\mathbb{R}$ cannot be well-ordered. The natural question after each such result is whether or not $\axiom{DC}$ can be added. Perhaps unsurprisingly, the answer is almost always positive. Much of this work was done in \cite{BCSWY:2018} by Brendle, Castiblanco, Schindler, Wu, and Yu. We will prove a simpler result of the same flavor, using simplified arguments. For simplicity, all the results in this part assume $V=L$.
Recall that a \textit{Luzin set} is an uncountable set of reals whose intersection with every nowhere dense set is countable. It is a classic theorem that the Continuum Hypothesis implies the existence of a Luzin set, and that forcing with $\Add(\omega,\omega_1)$ adds one.\footnote{Or more generally, $\Add(\omega,2^{\aleph_0})$.}
Take $\PP=\Add(\omega,\omega_1)$, with the permutation group of $\omega_1$ acting on $\PP$ by $\pi p(\pi\alpha,n)=p(\alpha,n)$, and let $\mathscr F$ be generated by $\fix(\alpha)=\{\pi\mid\pi\mathbin\upharpoonright\alpha=\id\}$ for $\alpha<\omega_1$. This symmetric system satisfies the hypotheses of \autoref{lemma:chain-condition}, and therefore $\axiom{DC}$ holds in the extension.
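To spell out why \autoref{lemma:chain-condition} applies here: $\Add(\omega,\omega_1)$ is c.c.c., and $\mathscr F$ is $\sigma$-complete, since for any sequence of ordinals $\alpha_n<\omega_1$ we have
\[\bigcap_{n<\omega}\fix(\alpha_n)\supseteq\fix\Bigl(\sup_{n<\omega}\alpha_n\Bigr)\in\mathscr F.\]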
Moreover, by a standard argument, the set of Cohen generics $A$ is in the model, but its enumeration is not. In particular, $\mathbb{R}$ cannot be well-ordered there. Finally, $A$ is of course uncountable. And given any nowhere dense set, $F$, let $x$ be a code for $F$,\footnote{Since $\axiom{DC}$ holds every Borel set has a code.} by c.c.c.\ there is a countable part of $\PP$ where $x$ was added, but then any $a\in A$ outside that part is Cohen generic over $L[x]$, and is therefore not in $F$. So $A\cap F$ is countable.
Replacing the Cohen reals by Sacks reals, and the finite support product by a countable support product, we lose the c.c.c.\ property, but the forcing is still proper, as shown by Baumgartner in \cite{Baumgartner:1985}; by \autoref{lemma:proper} this is enough to obtain $\axiom{DC}$.\footnote{One can also note that the product of $\aleph_2$ copies of Sacks forcing, over a model of $\axiom{CH}$, satisfies the $\aleph_2$-c.c., so with a suitable construction \autoref{lemma:chain-condition} alone provides us with $\axiom{DC}_{\omega_1}$.} In this model we also have that every real was added by a countable part of the product, although in this case this is due to homogeneity rather than to a chain condition. In \cite{BCSWY:2018} the construction goes on to force a Burstein set, which is a Hamel basis with the additional property of being a Bernstein set. This second forcing is $\sigma$-closed, so it preserves $\axiom{DC}$.
This last part raises an interesting question. In \cite{BSWY:2018} the authors show that in Cohen's model there is a Hamel basis for $\mathbb{R}$ over $\mathbb{Q}$. Cohen's model is famous for having a Dedekind-finite set of reals, and therefore $\axiom{DC}$ fails quite badly there. However, it is also very different from Feferman's construction of a model satisfying $V=L(\mathbb{R})$ in which the Boolean Prime Ideal theorem fails: the set of Cohen reals is in Cohen's model but not in Feferman's model. This is important because the proof in \cite{BSWY:2018} relies on this very fact. In \cite{BCSWY:2018} the construction goes through $L(\mathbb{R})$, where the set of Sacks reals is not present.
\begin{question}
Let $M$ be the symmetric extension obtained by forcing with a countable support product of Sacks reals of length $\omega_1$ as described above. Is there a Hamel basis for $\mathbb{R}$ over $\mathbb{Q}$ in $M$?
\end{question}
\subsection{Generic structures}
Wilfrid Hodges' influential paper \cite{Hodges:1974} presents six constructions of rings that have seemingly impossible properties, proving once more the necessity of the axiom of choice in the study of algebraic structures. His constructions rely on Lemma~3 of that paper, called ``Removal of subsets'', which allows the transfer of a countable structure with certain properties to a model of $\axiom{ZF}$ where the structure has only ``a few subsets''. The lemma was then proved in \cite[Lemma~3.7]{Hodges:1976}. The proof goes through a more general construction and then focuses on the case $\kappa=\omega$; however, by replacing $\omega$ by $\kappa$ (and finite by ${<}\kappa$) in the definitions relevant to the Removal of subsets, one immediately gets the consistency of $\axiom{DC}_{<\kappa}$ with the modified lemma.
We extend this type of lemma to allow for $\axiom{DC}_{<\kappa}$ to hold, if one assumes a little bit more. For the remainder of this section, $\mathcal L$ is a fixed first-order language, and $\kappa$ is a fixed regular cardinal.
For an $\mathcal L$-structure $M$, we say that $X\subseteq M^n$ is \textit{$\kappa$-supported} if there exists $Y\subseteq M$ with $|Y|<\kappa$ such that whenever $\pi$ is an automorphism of $M$ which fixes $Y$ pointwise, $X=\{\pi\vec x\mid\vec x\in X\}$. Similarly, a sequence of relations is $\kappa$-supported if it is uniformly $\kappa$-supported.
Finally, we say that $M$ is \textit{$\kappa$-homogeneous} if whenever $A,B\subseteq M$ with $|A|<\kappa$ and $f\colon A\to B$ is an isomorphism of $\mathcal L$-substructures of $M$, then $f$ can be extended to an automorphism of $M$. It is well known that if $M$ is $\kappa$-homogeneous and $A\equiv_N B$ for some $N\in[M]^{<\kappa}$, then there is an automorphism mapping $A$ to $B$ which fixes $N$ pointwise.
\begin{theorem}
Suppose that $M$ is a $\kappa$-homogeneous $\mathcal L$-structure. There exists a symmetric extension $W\subseteq V[G]$ in which there is an $\mathcal L$-structure $A$ such that $A\cong M$ in $V[G]$, but in $W$ the only subsets of $A$ are those which are $\kappa$-supported.
\end{theorem}
\begin{proof}
Let $\PP=\Add(\kappa,M\times\kappa)$, we define $\mathscr G$ to be $\aut(M)\wr S_\kappa$, namely the wreath product of the automorphism group of $M$ with the permutation group of $\kappa$, which is itself a permutation group of $M\times\kappa$. A permutation $\pi\in\mathscr G$ is made from an automorphism $\pi^*\in\aut(M)$, and for each $m\in M$ a permutation of $\kappa$, denoted by $\pi_m$, and $\pi(m,\alpha)=(\pi^*(m),\pi_m(\alpha))$. We define the action of $\mathscr G$ on $\PP$ in the standard way, \[\pi p(\pi^*(m),\pi_m(\alpha),\beta)=p(m,\alpha,\beta).\]
Finally, for $N\subseteq M$ and $E\subseteq\kappa$ we define \[\fix(N,E)=\{\pi\in\mathscr G\mid \pi^*\mathbin\upharpoonright N=\id\land\forall n\in N:\pi_n\mathbin\upharpoonright E=\id\},\] and $\mathscr F$ is the filter generated by $\{\fix(N,E)\mid N\in[M]^{<\kappa}, E\in[\kappa]^{<\kappa}\}$.
Indeed, it is not hard to see that the conditions of \autoref{lemma:closure} are satisfied, and therefore $\axiom{DC}_{<\kappa}$ must hold in the symmetric extension given by this symmetric system. Let $G$ be a $V$-generic filter for $\PP$, and let $W$ denote the symmetric extension.
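To spell out the verification: $\Add(\kappa,M\times\kappa)$ is $\kappa$-closed, and $\mathscr F$ is $\kappa$-complete, since for $\gamma<\kappa$,
\[\bigcap_{i<\gamma}\fix(N_i,E_i)\supseteq\fix\Bigl(\bigcup_{i<\gamma}N_i,\;\bigcup_{i<\gamma}E_i\Bigr)\in\mathscr F,\]
using the regularity of $\kappa$ to see that the unions still have size ${<}\kappa$.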
For $m\in M$ and $\alpha<\kappa$ let $\dot x_{m,\alpha}$ be the name $\{\tup{p,\check\beta}\mid p(m,\alpha,\beta)=1\}$, $\dot a_m$ is the name $\{\dot x_{m,\alpha}\mid \alpha<\kappa\}^\bullet$ and $\dot A=\{\dot a_m\mid m\in M\}^\bullet$. Standard arguments show that for all $\pi\in\mathscr G$, $\pi\dot x_{m,\alpha}=\dot x_{\pi^*(m),\pi_m(\alpha)}$ and $\pi\dot a_m=\dot a_{\pi^*(m)}$, and so $\pi\dot A=\dot A$. Therefore all these names are symmetric.
Moreover, since the $M$-part of $\pi\in\mathscr G$ is an automorphism, if $R$ is a symbol in $\mathcal L$, then $\{\dot a_{\vec m}\mid\vec m\in R^M\}^\bullet$ is symmetric, where $\dot a_{\vec m}=\tup{\dot a_{m_i}\mid \vec m=\tup{m_i\mid i<\alpha}}^\bullet$. In particular in $W$ there is a natural way of interpreting $A$ as an $\mathcal L$-structure, and clearly in $V[G]$ it holds that $A\cong M$ by $m\mapsto a_m$.
It remains to show that if $B\subseteq A$ is in $W$, then $B$ is $\kappa$-supported. Let $\dot B$ be a name for $B$ in $\axiom{HS}$ with $\fix(N,E)\subseteq\sym(\dot B)$. If $p\mathrel{\Vdash}``\dot B\text{ is not }\kappa\text{-supported}"$, then in particular $N$ itself is not a support for $B$, so there is an automorphism $\sigma^*$ which fixes $N$ pointwise and moves an element of $B$ outside of $B$. The problem is that this automorphism might be generic.
However, let $a_m\in B$ and $a_{m'}\notin B$ be such that $\sigma^*(a_m)=a_{m'}$. In particular, $m$ and $m'$ have the same type over $N$. Since this statement is absolute to $V$, we can assume without loss of generality that $\sigma^*\in V$, and therefore that there is a suitable $\pi\in\mathscr G$ for which $\pi^*=\sigma^*$.
Moreover, we can assume that $\pi_m$ and $\pi_{m'}$ are such that $\pi p$ is compatible with $p$, simply by ensuring that the domains of the $m$ and $m'$ coordinates of $p$ become disjoint. Therefore $\pi p\mathrel{\Vdash}\dot a_{m'}\in\dot B$, but since $p$ and $\pi p$ are compatible, this is impossible.
\end{proof}
We draw some easy corollaries. The first is that $\kappa$-amorphous sets are consistent with $\axiom{DC}_{<\kappa}$, where a set is \textit{$\kappa$-amorphous} if it cannot be written as a disjoint union of two subsets, neither of which is of size ${<}\kappa$.
\begin{corollary}
It is consistent with $\axiom{DC}_{<\kappa}$ that there exists a set whose cardinality is not ${<}\kappa$, but every subset is either of size ${<}\kappa$ or its complement is of size ${<}\kappa$.
\end{corollary}
The next corollary was proved by the author in \cite{Karagila:2012}.
\begin{corollary}
It is consistent with $\axiom{DC}_{<\kappa}$ that there is a vector space over any fixed field which is not generated by ${<}\kappa$ vectors, but any proper subspace has dimension ${<}\kappa$.
\end{corollary}
Taking a countable field and $\kappa=\omega_1$ we obtain the following corollary.
\begin{corollary}
It is consistent with $\axiom{DC}$ that there is an uncountable Abelian group such that all of its proper subgroups are countable.
\end{corollary}
\begin{remark}The reason we used $\Add(\kappa,M\times\kappa)$ and not $\Add(\kappa,M)$ is that we needed to create a better set-theoretic indiscernibility between the $a_m$'s. If one repeats the proof using only $\Add(\kappa,M)$, then one discovers that sets such as $\{a_m\mid 0\in a_m\}$ enter the model, and they have nothing to do with being supported. However, doing that does offer one advantage: obtaining the failures as subsets of the reals. So, for example, one could use $\Add(\omega,M)$ or a countable support product of Sacks forcing, and obtain the generic structure as a structure on a set of reals.
\end{remark}
\end{document}
\begin{document}
\title{ A Hilbert $C^*$-Module for Gabor systems }
\author{Michael Coco and M.C. Lammers}
\email{Michael Coco, University of South Carolina, [email protected]}
\email{M.C. Lammers, University of South Carolina,
[email protected]}
\maketitle
\begin{abstract} We construct Hilbert $C^*$-modules useful for
studying Gabor systems and show that they are Banach algebras under
pointwise multiplication. For rational $ab<1$ we prove that the set of
functions $g \in { L^2 (\mathbb R)}$ so that $(g,a,b)$ is a Bessel system is an ideal
for the Hilbert $C^*$-module given this pointwise algebraic structure.
This allows us to give a multiplicative perturbation theorem for frames.
Finally we show that a system $(g,a,b)$ yields a frame for ${ L^2 (\mathbb R)}$ iff
it is a modular frame for the given Hilbert $C^*$-module.
\end{abstract}
\section{Introduction} A {\bf Gabor system} for a function $g \in { L^2 (\mathbb R)}$
is obtained by applying modulations and translations ($g(x-ka)e^{2\pi
i m b x}$) to this function. In the study of these systems many
results have been obtained by using the function-valued inner product
$$\left\langle f,g\right\rangle_1(x)=\sum_{k \in{ \mathbb Z}} f(x-k)\overline{g(x-k)}.$$ Many of the
results from Hilbert space theory can be reproduced in this setting.
However, one of the most basic properties of a Hilbert space is lost.
Namely, for $f,g,h\in { L^2 (\mathbb R)}$ the function $\left\langle f,g\right\rangle_1(x) h(x)$ need
not be in ${ L^2 (\mathbb R)}$. This brings us to the study of {\bf Hilbert
$C^*$-Modules (HCM)}. A Hilbert $C^*$-Module is the generalization
of the Hilbert space in which the inner product maps into a
$C^*$-algebra. By restricting to a subspace of ${ L^2 (\mathbb R)}$ we produce a
Hilbert $C^*$-Module for the inner product above containing the
functions $g$ which are of most interest in the study of Gabor
systems.\\
Our study of this HCM and its relation to Gabor systems is organized
in the following manner. In section~2 we provide some of the known
results involving Gabor systems and Gabor frames along with some of
the basics regarding HCM's. In section~3 we define our HCM's and
develop a list of basic properties which relate to Gabor systems. In
section 4 we show that our HCM is a Banach algebra under pointwise
multiplication and show that the set of functions with a finite upper
frame bound form an ideal for this algebra. In addition we give a
Balian-Low type theorem in the positive direction and show that one
can apply an additive perturbation result of Christensen and Heil to
obtain a multiplicative one in our Banach algebra. In section~5 we
make the connection between {\bf modular frames} for our HCM (defined
by Frank and Larson) and Gabor frames for ${ L^2 (\mathbb R)}$. We show that the
system $(g,a,b)$ is a frame for ${ L^2 (\mathbb R)}$ iff the translates of $g$,
$\{T_{ka}g\}_{k\in { \mathbb Z}}$, form a modular frame for our HCM. \\
The authors would like to thank Michael Frank for his many helpful
suggestions and references regarding Hilbert $C^*$-modules.
Throughout, all sums are considered to be over ${ \mathbb Z}$ or ${ \mathbb Z}^2$
unless otherwise stated.
\section{Preliminaries}
In 1952 Duffin and Schaeffer \cite{DS} defined frames:
\begin{defn} A sequence $(f_{n})_{n\in \Bbb Z}$ of elements
of a Hilbert space $H$ is called a {\bf frame} if there are constants
$A,B > 0$ such that
\begin{equation}
A\|f\|^{2}\le \sum_{n\in \Bbb Z}|\left\langle f,f_{n}\right\rangle |^{2}\le B\|f\|^{2},\ \
\text{for all}\ \ f\in H. \nonumber
\end{equation}
\end{defn}
For a function $f \in L^2(\mathbb R)$ and $a,b\in \mathbb R$ define the operators
\[
\text{Translation: } T_{a}f(x) = f(x-a), \qquad
\text{Modulation: } E_{b}f(x) = e^{2\pi ibx}f(x).
\]
\begin{defn}
If $a,b\in \mathbb R$ and $g\in L^2(\mathbb R)$ we let $E_{mb}T_{na}g=g_{m,n}$ and call
$\left \{E_{mb}T_{na}g\right \}_{m,n\in \mathbb Z}$ a {\bf Gabor system}
(also called a Weyl-Heisenberg system) and denote it by $(g,a,b)$. If
this system is a frame then we call it a {\bf Gabor frame}. We denote
by $(g,a)$ the family $(T_{na}g)_{n\in \mathbb Z}$.
\end{defn}
Now we wish to define some important operators associated with a Gabor
system $(g,a,b)$. Let $(e_{m,n})$ be an orthonormal basis for
$L^2(\mathbb R)$. We call the operator $\mathcal T_g:L^2(\mathbb R)\rightarrow L^2(\mathbb R)$ given by
$\mathcal T_ge_{m,n}=E_{mb}T_{na}g$ the {\bf preframe operator}
associated with $(g,a,b)$. The adjoint, $\mathcal T_g^*$, is called
the {\bf frame transform}, and for $f\in L^2(\mathbb R)$ and $m,n\in \mathbb Z$ we
have $\langle
\mathcal T_g^{*}f,e_{m,n}\rangle = \langle f,\mathcal T_g(e_{m,n})\rangle =\langle
f,E_{mb}T_{na}g\rangle $. Thus
\[
\mathcal T_g^{*}f = \sum_{m,n}\langle f,E_{mb}T_{na}g\rangle e_{m,n} \,\text {
and } \, \|\mathcal T_g^{*}f\|^{2} = \sum_{m,n}|\langle f,E_{mb}T_{na}g\rangle
|^{2} \text{ for all } f\in L^2(\mathbb R). \] It follows that the preframe
operator is bounded if and only if $(g_{m,n})$ has a finite upper
frame bound $B$. Finally we define the {\bf frame operator associated
with $(g,a,b)$}:
\[ S(f)=\mathcal T_g\mathcal T_g^*(f)=\sum_{m,n\in \mathbb Z}\langle
f,E_{mb}T_{na}g\rangle E_{mb}T_{na}g.\] It is known that if $(g,a,b)$ is
a frame then $S$ is a positive, invertible operator. We will be
interested in when there is a finite upper frame bound for a Gabor
system.
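Although this paper works in $L^2(\mathbb R)$, the relation between the frame operator and the frame bounds is easy to see numerically in finite dimensions, where $S$ is a positive matrix and the optimal bounds $A,B$ are its extreme eigenvalues. The following sketch is our own illustration (not a construction from the paper): it checks that three equally spaced unit vectors in $\mathbb R^2$ form a tight frame with $A=B=3/2$.

```python
import numpy as np

# Rows of F are the frame vectors f_n: three unit vectors 120 degrees apart.
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
F = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Frame operator S = sum_n f_n f_n^T, so that <S f, f> = sum_n |<f, f_n>|^2.
S = F.T @ F
A, B = np.linalg.eigvalsh(S)[[0, -1]]   # optimal frame bounds
print(A, B)                              # both approximately 1.5: a tight frame
```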
\begin{defn} Let $\{\psi_n\}_{n \in \mathbb Z}$ be an orthonormal basis for
a Hilbert space $H$. A {\bf Bessel sequence} is any sequence
$\{\phi_n\}_{n \in \mathbb Z}$ (i.e.\ not necessarily one generated by the
operators $E_{mb}$ and $T_{na}$) such that $T(\sigma)=\sum_{n \in
\mathbb Z}\langle
\sigma,\psi_n\rangle_H \phi_n$ defines a bounded operator from $H$ to
$H$. For fixed $a$ and $b$ we denote by $X_Z$ the set of functions $g$ such that
$\{E_{mb}T_{na}g\}$ is a Bessel sequence.
\end{defn}
We will use a bracket product notation compatible with Gabor systems.
We define the {\bf $a$-inner product} for $f,g \in L^2(\mathbb R)$ and $a\in
\mathbb R^+$:
\[\langle f,g\rangle _a=\langle f,g\rangle _a(x)=\sum_k f(x-ka)\overline{g(x-ka)}\text{ and }
\|f\|_a(x)=\sqrt{\langle f,f\rangle _a(x)}. \]
For a detailed development of this $a$-inner product, which includes two
forms of a Riesz representation theorem, we refer the reader to \cite{CL};
here we give a brief list of some of the properties which will be
useful later.
\begin{prop}\label{pip} For $f,g \in L^2(\mathbb R)$: \\
(a) $\langle f,f\rangle _a\ge 0$, with equality iff $f=0$ \\ (b) $\langle f,g\rangle _a=
\overline{\langle g,f\rangle _a} $ \\ (c) $\langle \phi f,g\rangle _a= \phi \langle f,g\rangle
_a$ for all $a$-periodic functions $\phi$ such that $f,\phi f \in L^2(\mathbb R)$\\ (d) $\langle
f,h+g\rangle _a = \langle f,h\rangle_a +\langle f,g\rangle_a$\\ (e) $|\langle f,g\rangle_a| \le
\|f\|_a \|g\|_a $\\ (f) $\langle f,g\rangle _a \in L^1[0,a]$ \\ (g) $\langle
f,g\rangle _a$ is $a$-periodic on $\mathbb R$\\ (h) $\|f\|^2_{L^2(\mathbb R)} =\int_0^a \langle
f,f\rangle_a \, dx$\\ (i) $\|f+g\|_a \le \|f\|_a +\|g\|_a$
\end{prop}
While we see that the $a$-inner product has many of the same
properties as the usual inner product, there is one major difference,
as we mentioned in the introduction. Consider $f,g,h \in L^2(\mathbb R)$ where \[
f(x)=g(x)=
\frac{1}{x^{1/3}}{\bf 1}_{[0,1]},\,\, h(x)={\bf 1}_{[0,1]}.\] Then
$\langle f,g \rangle_1 (x)h(x)= \frac{1}{x^{2/3}} {\bf 1}_{[0,1]}$,
and this function is clearly not in $L^2(\mathbb R)$.\\
We say that an operator $L:L^2(\mathbb R) \to L^2(\mathbb R)$ is {\bf $a$-factorable} if
$L(\phi f)=\phi L(f)$ for all bounded $a$-periodic functions $\phi$. The operators associated with Gabor systems are
$(1/b)$-factorable and hence have the following representations, which we
refer to as {\bf compressions} since we compress the modulation into
the $(1/b)$-inner product.
\begin{prop}\cite{CL,rs2} If $\|g\|_{1/b}\le B$ a.e., then the frame
operator and the preframe operator have the following compressions
\[\mathcal T_g(f)= \sqrt{1/b} \sum_k\langle f,e_k\rangle _{1/b}T_{ka}g, \quad
\mathcal T_g^*(f)= \sqrt{1/b} \sum_k\langle f,T_{ka} g\rangle _{1/b}e_k \text { and
}\]
\[S_g(f)=\frac{1}{b} \sum_k\langle f,T_{ka}g\rangle_{1/b} T_{ka}g,\]
where $e_k =T_{k/b}{\bf 1}_{[0,1/b)}$.
\end{prop}
\begin{defn}For $\lambda > 0$ the {\bf Zak transform} of a function
$f\in L^2(\mathbb R)$ is \[Z_{\lambda}(f)(t,v)= \lambda^{1/2}\sum_{k \in \mathbb Z} f(
\lambda(t-k))e^{2 \pi i k v}, \quad t,v \in \mathbb R.\] We may interpret this
as a unitary operator from $L^2(\mathbb R)$ to $L^2[0,1]^2$. Since we are
frequently interested in the case where $\lambda=1$, we will also use
the notation $Z_1=Z$.
\end{defn}
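The sum defining $Z$ is easy to evaluate numerically for compactly supported $f$. As a sketch (our own illustration, with $\lambda=1$): for $f={\bf 1}_{[0,1)}$ only the $k=\lfloor t\rfloor$ term of the sum survives, so $|Z(f)|\equiv 1$.

```python
import numpy as np

def zak(f, t, v, K=100):
    """Truncated Zak transform: sum over |k| <= K of f(t - k) e^{2 pi i k v}."""
    k = np.arange(-K, K + 1)
    return np.sum(f(t - k) * np.exp(2j * np.pi * k * v))

f = lambda x: ((0 <= x) & (x < 1)).astype(float)   # f = 1_[0,1)
vals = [abs(zak(f, t, v))
        for t in np.linspace(-3, 3, 13) for v in np.linspace(0, 0.9, 10)]
print(np.allclose(vals, 1.0))  # True
```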
Before we present some of the basics regarding Hilbert $C^*$-modules
we give two very important results for $(g,1,1)$ systems. The first is
due to Daubechies and the second was proved independently by Balian and
Low. A nice treatment of both can be found in \cite{FS}.
\begin{thm}\label{DJ}\cite{D} The system $(g,1,1)$ yields a frame with
frame bounds $A$ and $B$ iff
\[A \le |Z(g)(t,v)|^2 \le B \text{ for a.e. } (t,v)\in [0,1]^2.\]
\end{thm}
\begin{thm}\label{BL}{\bf (Balian--Low)} If the system $(g,1,1)$
yields a frame, then
\[ \|tg(t)\|_{L^2(\mathbb R)} \, \|\gamma\hat g(\gamma)\|_{L^2(\mathbb R)}=+\infty. \]
\end{thm}
Now we give some of the basics regarding Hilbert $C^*$-modules. We
refer the reader to \cite{rw} for a thorough treatment of this
subject. Essentially, a Hilbert $C^*$-module is the generalization of
a Hilbert space in which the inner product takes values in a
$C^*$-algebra instead of the complex numbers.
\begin{defn} Let $A$ be a $C^*$-algebra. An {\bf inner product
$A$-module} is an $A$-module $\mathcal M$ with a mapping $\langle
\cdot,\cdot
\rangle_A:\mathcal M\times \mathcal M\to A$ so that for all $x,y,z \in
\mathcal M$ and $a\in A$ \\
a) $\langle x,x \rangle_A\ge 0 $ (as an element of $A$);\\ b) $\langle x,x \rangle_A
=0$ iff $x=0$;\\ c) $\langle x,y \rangle_A = \langle y,x \rangle_A^*$; \\ d) $\langle ax,y
\rangle_A= a \langle x,y \rangle_A$; \\ e) $\langle x+y,z \rangle_A =\langle x,z \rangle_A +\langle
y,z \rangle_A.$\\
If the space $\mathcal M$ is complete with respect to
the norm $\|x\|^2_A=\|
\langle x,x \rangle_A\|_A $, then we say $\mathcal M$ is a {\bf Hilbert $C^*$-module
with respect to $A$} or more simply a {\bf Hilbert $A$-module}. When
referring to a Hilbert $C^*$-module we will also use the abbreviation
{\bf HCM}.
\end{defn}
We provide a very simple example.
\begin{exam} Let $A$ be a $C^*$-algebra. The $A$-valued inner product
defined by $\langle a,b \rangle_A=ab^*$ makes $A$ a Hilbert $A$-module.
\end{exam}
\section{The HCM}
Given the way we have defined the $a$-inner product and the Hilbert
$C^*$-module, it is natural to try to use the $a$-inner product to turn
$L^2(\mathbb R)$ into a HCM. Unfortunately, the $a$-inner product maps $L^2(\mathbb R)$
into $L^1[0,a]$, which is not a $C^*$-algebra. For this reason we
consider the subspace of $L^2(\mathbb R)$ consisting of those functions which are
mapped into $L^\infty [0,a]$. This yields the following Lebesgue-Bochner space.\\
For $b>0$ let $L^{\infty}_{1/b}(\ell_2)$ be defined to be the set of measurable functions
$f:\mathbb R \to \mathbb C$ for which the norm
\[\|f\|^2_{L^{\infty}_{1/b}(\ell_2)} =\mathop{\mathrm{ess\,sup}}_{[0,1/b]} \sum_k|f(x-k/b)|^2 \]is finite.
Proposition
\ref{pip} gives us that $L^{\infty}_{1/b}(\ell_2)$ is a Hilbert $C^*$-module ({\bf HCM})
over the $C^*$-algebra $L^\infty [0,1/b)$ with $C^*$-valued inner
product and $C^*$-valued norm
\[\langle f,g\rangle_{1/b}(x)= \sum_k f(x-k/b)\overline{g(x-k/b)}
\,\text{ and } \, \|f\|_{1/b}(x)=\Big(\sum_k|f(x-k/b)|^2\Big)^{1/2} .\]
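On sampled data this norm can be approximated by folding the grid into one period and maximizing the periodized square sum. A minimal sketch (our own illustration, with $b=1$, so the period is $[0,1)$): for an indicator function of one period the periodization is identically $1$.

```python
import numpy as np

def periodized_square_sum(f_vals, dx, period=1.0):
    """Approximate <f,f>_period on [0, period) by folding |f|^2 over the grid.

    Assumes the grid covers an integer number of periods."""
    n = int(round(period / dx))
    return (np.abs(f_vals) ** 2).reshape(-1, n).sum(axis=0)

dx = 1e-3
x = -5.0 + dx * np.arange(10 * 1000)        # ten periods of length 1
f = ((0 <= x) & (x < 1)).astype(float)      # f = 1_[0,1)
per = periodized_square_sum(f, dx)
norm = np.sqrt(per.max())                    # discrete stand-in for the ess sup
print(norm)  # 1.0
```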
We begin with a few basic remarks:\\
{\bf 1}. Weak-* convergence. Because we are constructing our HCM
with Gabor systems in mind we will only require the $C^*$-valued inner
product to converge weak-* in $L^\infty[0,1]$. We motivate this by
considering the case $a=b=1$. In this case we have the classification
Theorem \ref{DJ}. With this theorem it is easy to see that the system
$(g,1,1)$ with $g=\sum_{k=0}^\infty {\bf 1}_{[k+\frac{1}{2^{k+1}},
k+\frac{1}{2^k})}$ produces a Gabor frame because $|Z(g)(t,v)|=1$
everywhere. However, the series $\langle g,g
\rangle_1$ does not converge in norm in $L^\infty[0,1]$. If we only
require that the series converge in the weak-* topology we avoid this
problem. This assumption results in frames being what Frank and
Larson \cite{FL} have termed nonstandard modular frames. We will
address this more in section~5.\\
{\bf 2}. Proposition~\ref{pip}~(h) yields the following inequality,
which shows that $L^{\infty}_{1/b}(\ell_2)$ embeds continuously in $L^2(\mathbb R)$:
\[\|f\|^2 =\int _0^{1/b} \langle f,f\rangle_{1/b}\, dx
\le \frac{1}{b}\mathop{\mathrm{ess\,sup}}_{[0,1/b]} \langle f,f\rangle_{1/b} =\frac{1}{b}\|f\|_{L^{\infty}_{1/b}(\ell_2)}^2.\]
Finally, since $L^{\infty}_{1/b}(\ell_2)$ contains the continuous functions of compact support,
it is norm dense in $L^2(\mathbb R)$.\\
{\bf 3}. $L^{\infty}_{1/b}(\ell_2)\ne L^2(\mathbb R)\cap L^\infty(\mathbb R)$. We illustrate this for
$L^{\infty}_{1}(\ell_2)$ and note that it is easily generalized. Let $f_k= {\bf
1}_{[k,k+\frac{1}{k^2})}$ and $f=\sum_{k=1}^\infty f_k$. Then $f \in
L^2(\mathbb R)\cap L^\infty(\mathbb R)$ but $f\not \in L^{\infty}_{1}(\ell_2)$, for $\langle f,f\rangle_1 (x)$ is
unbounded near zero. \\
{\bf 4}. We interpret a basic result about Gabor frames in this space.
A necessary condition for $\{g_{m,n} \}_{m,n\in \mathbb Z}$ to be a Gabor frame
in $L^2(\mathbb R)$ with upper frame bound $B$ is that $\mathop{\mathrm{ess\,sup}}_{[0,1/b]}
\sum_k |g(x-k/b)|^2 \le B$. In other words, if $(g_{m,n})$ is a frame
for $L^2(\mathbb R)$ with upper frame bound $B$ then $\|g\|^2_{L^{\infty}_{1/b}(\ell_2)}\le B$.\\
{\bf 5}. The Wiener amalgam spaces $W(L,G)$ are defined using the
norms of two spaces, where the local behavior is determined by $L$ and
the global behavior by $G$. For example,
\[W(L^p,\ell^q)=\left\{ f : \|f\|_{W(L^p,\ell^q)}
=\left ( \sum_k\|f\cdot e_k\|^q_p \right )^\frac{1}{q} <\infty \right
\}.\]
We will refer to ${\bf W}(\mathbb R)=W(L^\infty,\ell^1)$ as the Wiener
algebra. The Segal algebra may be viewed as a specific Wiener amalgam
space:
\[{\bf S}_0(\mathbb R)=W(C_0,\ell^1)=\left\{ f \in W(L^\infty,\ell^1)
: f \text{ is continuous } \right \}.\]
These algebras have been used effectively in the study of Gabor
systems by Benedetto, Heil, Walnut, Feichtinger, Zimmermann and
Strohmer (see chapters 2, 3 and 8 of \cite{FS}). From the inequality
\[ \|g\|_{L^{\infty}_{1}(\ell_2)} \le \|g\|_{W(L^\infty, \ell_2)} \le \|g\|_{W(L^\infty,
\ell_1)}\]
we conclude that the Segal algebra and the Wiener algebra embed in
$L^{\infty}_{1}(\ell_2)$. That is, for any $g$ in ${\bf S}_0(\mathbb R)$ we have
\[ \|g \|_{L^{\infty}_{1}(\ell_2)} \le \|g\|_{{\bf W}(\mathbb R)}\le \|g\|_{{\bf S}_0(\mathbb R)}.\]
{\bf 6.} One may view the Zak transform here as the analogue of the
Fourier transform when $a=b=1$. We give the usual notation for this
but add the representation with respect to the inner product for our
HCM. We should point out that although these transforms agree for
$(t,v) \in [0,1]^2$, the latter is periodic in $t$ and $v$ while the
former yields only a quasi-periodicity.
\[Z(f)(t,v)= \sum_k f(t+k)e^{-2 \pi i k v}
= \sum_k \langle f,e_k \rangle_1(t)e^{-2 \pi i k v},\] where $e_k=T_k{\bf
1}_{[0,1)}$.
\[\hat f(v)=\int_{\mathbb R} f(t)e^{-2 \pi i t v}dt =\int_0^1 Z(f)(t,v)e^{-2
\pi i t v}dt.\]
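The last identity, recovering $\hat f$ from $Z(f)$, can be checked numerically. The sketch below (our own illustration) uses the Gaussian $e^{-\pi t^2}$, which equals its own Fourier transform; since the integrand is $1$-periodic in $t$, an equally weighted Riemann sum over one period is extremely accurate.

```python
import numpy as np

dt = 1.0 / 2000
t = np.arange(2000) * dt                    # one period [0,1), endpoint excluded
k = np.arange(-20, 21)                      # truncation of the Zak sum
f = lambda x: np.exp(-np.pi * x ** 2)       # Gaussian: its own Fourier transform

def fhat_via_zak(v):
    # Z(f)(t,v) = sum_k f(t+k) e^{-2 pi i k v}, then integrate against e^{-2 pi i t v}.
    Z = np.sum(f(t[:, None] + k) * np.exp(-2j * np.pi * k * v), axis=1)
    return dt * np.sum(Z * np.exp(-2j * np.pi * t * v))

errs = [abs(fhat_via_zak(v) - np.exp(-np.pi * v ** 2)) for v in (0.0, 0.3, 1.0)]
print(max(errs) < 1e-8)  # True
```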
{\bf 7.} The Fourier transform is not a bounded linear operator on
$L^{\infty}_{1}(\ell_2)$. Indeed, if we let $f(x)={\bf 1}_{[0,1]}
\frac{1}{x^{1/3}}$, it can be shown that the absolute
value of its Fourier transform is bounded by the function
\[ g(x)= \begin{cases} 2, &x\in [-1,1],\\
\frac{2}{x^{2/3}}, & \text{otherwise}. \end{cases} \]
The function $g(x)$ is clearly in $L^{\infty}_{1}(\ell_2)$, which implies $\hat f$ is
also. However, since the original function $f$ is not even bounded,
it is clearly not in $L^{\infty}_{1}(\ell_2)$. Thus by the properties of the
Fourier transform we see that $\hat{\hat f}$ is not in $L^{\infty}_{1}(\ell_2)$.
\section{Bessel Systems}
Before we examine the role Bessel systems play in our HCM's, we show
that these HCM's are Banach algebras under pointwise multiplication.
\begin{prop} The space $L^{\infty}_{1/b}(\ell_2)$ is a Banach algebra under pointwise
multiplication.
\end{prop}
\begin{proof}
The justification can be made with one inequality. Note that the
inequality below is not derived from Cauchy-Schwarz, but from the fact
that all the terms are positive and the left-hand side is just the
diagonal of the product of the sums on the right:
\[\sum_k|f g(x-k/b)|^2 \le \sum_k|f(x-k/b)|^2 \sum_k|g(x-k/b)|^2 .\]
\end{proof}
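This "diagonal of the product" inequality is easy to sanity-check numerically; the following randomized check is our own sketch, not part of the paper's argument.

```python
import numpy as np

rng = np.random.default_rng(0)
ok = True
for _ in range(200):
    f = rng.normal(size=64)
    g = rng.normal(size=64)
    lhs = np.sum((f * g) ** 2)               # sum_k |f_k g_k|^2 (the diagonal)
    rhs = np.sum(f ** 2) * np.sum(g ** 2)    # (sum_k |f_k|^2)(sum_k |g_k|^2)
    ok = ok and lhs <= rhs
print(ok)  # True
```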
Unfortunately $L^{\infty}_{1/b}(\ell_2)$ is not a $C^*$-algebra. It is also clear that
this Banach algebra has no identity, since $f(x)={\bf 1}_{\mathbb R} $ is
clearly not in $L^{\infty}_{1/b}(\ell_2)$. \\
Now we turn our attention to the functions that yield Bessel systems.
We show that the functions which produce a finite upper frame bound
for $a=b=1$ form a Banach space which may be embedded in
$L^\infty_1(\ell_2)$. For now let us refer to this as the Zak space,
denoted $X_Z$. More formally we define the space
\[X_Z=\left\{f \in L^2(\mathbb R) : \|f\|_{X_Z}= \mathop{\mathrm{ess\,sup}}_{(t,v) \in [0,1]^2
}|Z(f)(t,v)|<\infty\right \}.\]
Since $Z$ is a unitary operator from $L^2(\mathbb R)$ to $L^2[0,1]^2$, it is easy
to see that $\|\cdot\|_{X_Z}$ is actually a norm. The essential
inequality in the following proposition is well known and is usually
stated in the form $G_0=\sum_k|g(x-k)|^2 <B$. We prove
it here for completeness and for the development of the Zak transform
on $L^{\infty}_{1}(\ell_2)$.
\begin{prop} \label{Zembed}The Zak space embeds continuously
in $L^{\infty}_{1}(\ell_2)$; that is,
\[\|g\|_{L^{\infty}_{1}(\ell_2)} \le \|g\|_{X_Z} \text{ for all } g\in X_Z. \] \end{prop}
\begin{proof} To see that $X_Z$ embeds in $L^{\infty}_{1}(\ell_2)$ we look at what the Zak
transform does on $L^{\infty}_{1}(\ell_2)$. Let us define
\[L^\infty
L^2[0,1]^2=\left\{F(t,v) : \sup_{t\in[0,1]}\left(\int_0^1|F(t,v)|^2dv\right)
^\frac{1}{2} <\infty\right \}.\] Now let $f\in L^{\infty}_{1}(\ell_2)$, so
\begin{eqnarray*} \|Z(f)(t,v)\|_{L^\infty L^2[0,1]^2}&=&\sup_{t\in[0,1]}\left(\int_0^1
\Big|\sum_k\langle f,e_k\rangle_1(t) e^{-2 \pi i k v}\Big|^2dv\right)^\frac{1}{2}\\
&=&\sup_{t\in[0,1]}
\left(\sum_k|\langle f,e_k\rangle_1(t)|^2\right)^\frac{1}{2}=\|f\|_{L^{\infty}_{1}(\ell_2)}.
\end{eqnarray*} It is easy to see from here that the Zak transform is an
isometry between these two spaces. Further, since the $L^2[0,1]$ norm
is bounded by the $L^\infty[0,1]$ norm, we see that $\|g\|_{L^{\infty}_{1}(\ell_2)} \le
\|g\|_{X_Z}$ for $g\in X_Z$.
\end{proof}
Our next example shows that $X_Z \ne L^{\infty}_{1}(\ell_2)$.
\begin{exam}There exists $g\in L^{\infty}_{1}(\ell_2) $ with $g\not \in X_Z$. \end{exam}
\begin{proof}
Consider the function $g= \sum_{n=1}^\infty \frac{e_n}{n}$. Clearly
$g\in L^{\infty}_{1}(\ell_2) $. Now Corollary 3.7 from \cite{CCJ} states that a positive
real-valued function is in $X_Z$ iff $\sum_k|\langle g, T_k g \rangle_1| <
\infty$. However, computation shows that $\langle g, T_k g
\rangle_1=\frac{1}{k}\sum_{n=1}^{k}\frac{1}{n}$ for the above $g$. These terms are
square summable but not summable. \end{proof}
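The computation quoted in this proof can be confirmed numerically: on $[0,1)$ the bracket reduces, for this piecewise constant $g$, to $\sum_{n>k}\frac{1}{n(n-k)}$, which should equal $\frac{1}{k}\sum_{n=1}^k\frac 1n$. This is our own sketch; the infinite series is truncated, hence the loose tolerance.

```python
from math import fsum

def bracket(k, N=200000):
    # <g, T_k g>_1 on [0,1) for g = sum_{n>=1} e_n / n: sum_{n=k+1}^{N} 1/(n(n-k))
    return fsum(1.0 / (n * (n - k)) for n in range(k + 1, N))

def harmonic(k):
    return fsum(1.0 / n for n in range(1, k + 1))

errs = [abs(bracket(k) - harmonic(k) / k) for k in (1, 2, 5, 10)]
print(max(errs) < 1e-3)  # True
```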
As we mentioned, a detailed study of the Balian--Low theorem
(Theorem \ref{BL}) can be found in \cite{FS}. In this study an
amalgam version of the BLT for the Segal algebra ${\bf S}_0$
due to Heil is given.
\begin{thm}\cite{H} Let $g\in L^2(\mathbb R)$. If $(g,1,1)$ forms a frame
for $L^2(\mathbb R)$, then
\[g\not \in {\bf S_0} \text{ and } \hat g \not \in {\bf S_0}.\]
\end{thm}
A direct corollary of the continuous embedding yields the following positive
Balian-Low type result, which in light of remark {\bf 7} of the
previous section is non-trivial.
\begin{cor} If the system $(g,1,1)$ is a frame then $g\in L^{\infty}_{1}(\ell_2)$ and
$\hat g \in L^{\infty}_{1}(\ell_2)$.
\end{cor}
The proof follows directly from the theorem above and the fact that
$(g,1,1)$ forms a frame if and only if $(\hat g,1,1)$ does. This,
however, does not characterize the space $X_Z$. We showed in the
proof of Proposition
\ref{Zembed} that $Z$ is an isometry from $L^{\infty}_{1}(\ell_2) $ to $L^\infty
L^2[0,1]^2$. Recall that $Z(\hat g)(t,v)=e^{2
\pi i t v}Z(g)(v,-t)$. It suffices to show that there exists
a function $F(t,v)\in L^\infty L^2[0,1]^2$ such that $F(v,-t) \in
L^\infty L^2[0,1]^2 $ yet $F(t,v) \not \in L^\infty[0,1]^2$. The
function $F(t,v)= v^{-\frac{t}{3}}$ is such a function. Given
that the Segal algebra (see remark {\bf 5} of section 3) embeds in $L^{\infty}_{1}(\ell_2)$,
we may interpret these results as follows: if $(g,1,1)$ forms a frame
then $g,\hat g\in L^{\infty}_{1}(\ell_2)\setminus{\bf S_0}$. \\
Now we show that the functions with finite upper frame bound have an algebraic connection to
$L_1^\infty(\ell_2)$. We recall an algebraic term. If $M$ is a ring without
identity, the {\bf square ideal} $M^2$ is the ideal of the ring
formed by taking finite sums of products from $M$. That is,
\[M^2=\left\{\sum_{i=1}^n x_iy_i: x_i,y_i \in M\right\}.\]
The theorem below not only shows that the Zak space is an ideal in
$L^{\infty}_{1}(\ell_2)$, but that it contains the square ideal.
\begin{thm} For any $f,g\in L_1^\infty(\ell_2)$ their product $fg$
has a finite upper frame bound. In particular, the Zak space is an
ideal for $L_1^\infty(\ell_2)$.
\end{thm}
\begin{proof} Let $f,g$ be in $L_1^\infty(\ell_2)$ and consider $Z(fg)$:
\begin{eqnarray*}| Z(fg)(t,v)|^2&=&\left |\sum_k \langle fg,e_k\rangle_1(t)e^{-2 \pi i k
v}\right|^2 \\ &\le & \sum_k|f(t+k)|^2\sum_k|g(t+k)|^2 \le
\|f\|^2_{L^{\infty}_{1}(\ell_2)}
\|g\|^2_{L^{\infty}_{1}(\ell_2)}. \end{eqnarray*}
\end{proof}
As usual the argument may be mimicked in cases where $ab$ is rational.
To get the most general rational case one probably needs to use the
Zak matrices of Zibulski-Zeevi, which may be found in
\cite{FS}, section 1.5. We present the case where $a=\frac{1}{2},b=1 $
and leave the rest to the reader.
\begin{thm} If $f,g$ are in $L_1^\infty(\ell_2)$, then $(fg,
\frac{1}{2},1)$
is a Bessel system.
\end{thm}
\begin{proof} To see this we apply a standard technique when using
the Zak transform. Applying the Zak transform to the
frame operator of the system $(g,\frac{1}{2},1)$ we get:
\begin{eqnarray*} Z(S_g(f))&=&Z\Big(\sum_k \langle f,T_{\frac{k}{2}} g \rangle_1 T_{\frac{k}{2}}g\Big) \\
&=& Z\Big(\sum_k \langle f,T_{{k}} g \rangle_1 T_{k}g\Big) +
Z\Big(\sum_k \langle f,T_{k} T_{\frac{1}{2}}g \rangle_1 T_{k}(T_{\frac{1}{2}}g)\Big)\\
&=& Z(f)\left(|Z(g)|^2+|Z(T_{\frac{1}{2}}g)|^2\right).
\end{eqnarray*}
This implies that the system $(g,\frac{1}{2},1)$
has a finite upper frame bound iff
$|Z(g)|^2+|Z(T_{\frac{1}{2}}g)|^2 <B$. In view of this and the
proposition above, all we need to show is that $|Z(T_{\frac{1}{2}}(fg))|$
is bounded. This follows easily from the fact that
$\|T_{\frac{1}{2}}f\|_{L^{\infty}_{1}(\ell_2)} =\|f\|_{L^{\infty}_{1}(\ell_2)}$.
\end{proof}
One may interpret this theorem another way. We state it as a corollary
in the case $a=b=1$.
\begin{cor} For any $g\in L_1^\infty(\ell_2)$ and $x\in \mathbb R$
the operator
\[\mathcal V^1_g(f)(t,v,x) =Z(f T_xg)(t,v) e^{-2\pi itv}\]
is a bounded operator from $L^{\infty}_{1}(\ell_2)$ to $L^\infty (\mathbb R^3)$.
\end{cor}
\begin{proof} First we point out that, because we are considering the periodic
extension of the Zak transform in $(t,v)\in[0,1]^2$, we really only need
$[0,1]^2\times \mathbb R$ in place of $\mathbb R^3$. Given $x\in
\mathbb R$ we have $\|T_xg\|_{L^{\infty}_{1}(\ell_2)}=\|g\|_{L^{\infty}_{1}(\ell_2)}$. The result follows from the above inequality.
\end{proof}
Let us point out that in many ways $\mathcal{V}^1_g(f)(t,v,x)$ resembles the
short-time Fourier transform
\[\mathcal V_g(f)(v,x)=\int_{\mathbb R} f(t)\overline {g(t-x)}e^{-2 \pi i tv}dt.\]
For this reason we term this operator the {\bf windowed Zak
transform}.
We have a simple corollary for producing frames in $L^{\infty}_{1}(\ell_2)$. Again we
state it here for the case $a=b=1$, but it is easily generalized to
many rational cases.
\begin{cor} If $f\in L^{\infty}_{1}(\ell_2)$ and $\inf |Z(f^2)| >A$ then $(f^2,1,1)$ is a
frame. \end{cor}
Finally we end the section by applying the following perturbation
theorem of Christensen and Heil to produce frames.
\begin{thm}\cite{CH} Let $H$ be a Hilbert space, let $\{f_i\}_{i=1}^\infty $ be
a frame with bounds $A,B$ and let $\{g_i\}_{i=1}^\infty \subset H$. If
there exists $R<A$ so that
\[ \sum_{i=1}^\infty|\langle h,f_i-g_i\rangle_{H}|^2 \le
R \, \|h\|^2 \text{ for all } h \in H,\] then
$\{g_i\}_{i=1}^\infty $ is a frame with bounds
$A(1-\sqrt{\tfrac{R}{A}})^2$ and $B(1+\sqrt{\tfrac{R}{B}})^2$.
\end{thm}
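In finite dimensions both the hypothesis and the conclusion of this theorem can be tested directly: for row-vector frames, the smallest admissible $R$ is the top eigenvalue of the frame operator of the difference sequence. The sketch below is our own illustration with a randomly perturbed random frame; the two final inequalities also follow directly from the triangle inequality.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(size=(8, 3))                  # rows form a frame for R^3
A, B = np.linalg.eigvalsh(F.T @ F)[[0, -1]]  # optimal frame bounds

G = F + 0.01 * rng.normal(size=F.shape)      # small perturbation of the frame
D = F - G
# Smallest R with sum_i |<h, f_i - g_i>|^2 <= R ||h||^2 for all h:
R = np.linalg.eigvalsh(D.T @ D)[-1]
assert R < A                                  # hypothesis of the theorem

A2, B2 = np.linalg.eigvalsh(G.T @ G)[[0, -1]]
print(A2 >= A * (1 - np.sqrt(R / A)) ** 2)    # True
print(B2 <= B * (1 + np.sqrt(R / B)) ** 2)    # True
```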
In the Gabor case this means that if $(g,a,b)$ is a frame then so is
$(f,a,b)$, provided the system $(f-g,a,b)$ has an upper frame bound less than
the lower frame bound of $(g,a,b)$. We use this theorem and one of the inequalities
above to produce a multiplicative perturbation result.
\begin{prop} Let $ab<1$ with $ab$ rational. If $(g,a,b)$ is a
frame with bounds $A,B$ and if $(f-1)\in L^{\infty}_{1}(\ell_2) $ with $\|f-1\|^2_{L^{\infty}_{1}(\ell_2)} \le
\frac{R}{B}$ for some $R<A$, then $(fg,a,b)$ is a frame.
\end{prop}
\begin{proof} As usual, to highlight the essential components
of the argument we prove the case $a=b=1$. Let us point out that
$f-1$ need not be in $L^{\infty}_{1}(\ell_2)$ even if $f$ is, since $1\notin L^{\infty}_{1}(\ell_2)$.
By the theorem above it
is enough to show that $fg-g$ has an upper frame bound less
than $A$. In view of Theorem \ref{DJ} it is enough to show
$|Z(fg-g)|^2<R$:
\begin{eqnarray*}| Z(fg-g)(t,v)|^2&=&\left |\sum_k\langle (f-1)g,e_k \rangle_1(t)e^{-2 \pi i k
v}\right|^2 \\ &\le & \sum_k|(f-1)(t+k)|^2\sum_k|g(t+k)|^2 \\
&\le& \|f-1\|^2_{L^{\infty}_{1}(\ell_2)}
\|g\|^2_{L^{\infty}_{1}(\ell_2)} \le \frac{R}{B} \cdot B = R, \end{eqnarray*}
where the fact that $\|g\|^2_{L^{\infty}_{1}(\ell_2)} \le B$ follows from Proposition
\ref{Zembed}. In this case we get frame bounds
$A(1-\sqrt{\tfrac{R}{A}})^2$ and $B(1+\sqrt{\tfrac{R}{B}})^2$.
\end{proof}
\section{$a$-frames and modular frames}
An immediate concern is whether the operators $\mathcal T_g $ and
$\mathcal T^*_g $ are bounded operators on $L^{\infty}_{1/b}(\ell_2)$ if they are bounded
operators on $L^2(\mathbb R)$. The answer is yes and follows from Proposition
5.9 of \cite{CL}, which we state below.
\begin{prop}If $L:L^2(\mathbb R) \to L^2(\mathbb R) $ is an $a$-factorable operator, then $L$
is bounded iff for all $f \in L^2(\mathbb R) $
\[\|L(f)\|_a(t) \le \|L\|\,\|f\|_a(t),\] where
$\|L\|=\sup_{\|f\|_{L^2(\mathbb R)}=1}\|L(f)\|_{L^2(\mathbb R)} $.
\end{prop}
Given the compressed representations of $\mathcal T_g $ and $\mathcal
T^*_g $, it is easy to see that they are $(1/b)$-factorable on $L^2(\mathbb R)$.
Since the inequality in the proposition holds for all $f\in L^2(\mathbb R)$, it
certainly holds for $f\in L^{\infty}_{1/b}(\ell_2)$, and we have our result by taking
suprema on both sides.\\
We now introduce two definitions, both of which are abstractions of
frames. The first is that of an $a$-frame for $L^2(\mathbb R)$ and the second is
the analogue of a frame in a Hilbert $C^*$-module.\\
\begin{defn}\cite{CL} A sequence $\{f_n\} \subset L^2(\mathbb R)$ is an {\bf $a$-frame for
$L^2(\mathbb R)$} if there exist $A,B>0$ such that for all $f\in L^2(\mathbb R)$
\begin{equation} \label{a1} A \|f\|^2_a(x) \le \sum_n |\langle f,f_n \rangle_a(x)|^2 \le
B\|f\|_a^2(x) \quad \text{a.e.}\end{equation}
\end{defn}
Since we are dealing with frames for different spaces we will add the
term ``modular'' to Frank and Larson's definition of a frame for a
HCM.
\begin{defn} \cite{FL} Let $\mathcal M$ be a HCM over the unital
$C^*$-algebra $A$. A sequence $\{x_n\} \subset \mathcal M$ is said to be
a {\bf modular frame} for $\mathcal M$ if there are real constants
$C,D>0$ such that
\[C\langle x,x\rangle_A \le \sum_n \langle x,x_n\rangle_A\langle x_n,x\rangle_A \le D\langle
x,x\rangle_A\] for all $x\in \mathcal M$. If the sum in the middle of the
inequality always converges in norm, this is referred to as a {\bf
standard modular frame}; if the middle sum converges only weakly for
some $x\in
\mathcal M$ we will call this a {\bf non-standard modular frame}.
\end{defn}
We point out here that our results are in a different direction than
the ``standard'' results given by Frank and Larson. Since we only
require the sum $$\sum_k f(x-k/b)\overline {g(x-k/b)}$$ to converge weakly
in $L^\infty[0,1/b]$, we are dealing with the nonstandard case. Most
of their examination dealt with standard modular frames in HCM's which
were at worst countably generated. Neither of these conditions is met
in our case. Now we give the result of Casazza and Lammers regarding
the connection between $a$-frames and Gabor frames.
\begin{thm}\cite{CL} Let
$g_n(x)=g(x-na)$. Then $\{g_{m,n} \}_{m,n\in \mathbb Z}$ is a Gabor frame for
$L^2(\mathbb R)$ with frame bounds $A,B$ iff $\{g_n\}$ is a $(1/b)$-frame for
$L^2(\mathbb R)$. \end{thm}
This is somewhat surprising, because it says that the frame
inequality holds pointwise for the bracket product. We use this
result to show the connection between Gabor frames and modular
frames.
\begin{thm}For $g\in L^2(\mathbb R)$, $\{g_{m,n}\}_{m,n \in \mathbb Z}$ is a Gabor frame
for $L^2(\mathbb R)$ iff $\{g_n\}_n$ is a non-standard modular frame for $L^{\infty}_{1/b}(\ell_2)$.
\end{thm}
\begin{proof} $\Rightarrow$ This follows directly from the theorem
above since $L^{\infty}_{1/b}(\ell_2)$ is a subspace of $L^2(\mathbb R)$.\\
$\Leftarrow$ For any $f\in L^2(\mathbb R)$ consider $f_0 =
\frac{f}{\|f\|_{1/b}}$. Then $\langle f_0,f_0\rangle_{1/b} \le 1$ and
hence $f_0 \in L^{\infty}_{1/b}(\ell_2)$. Since $\{g_n\}$ is a modular frame for $L^{\infty}_{1/b}(\ell_2)$,
\begin{eqnarray*} A \|f_0\|^2_{1/b}(x) \le \sum_n |\langle f_0,g_n \rangle_{1/b}(x)|^2 \le
B\| f_0\|_{1/b}^2(x) \text{ a.e.,} \\ A
\frac{\|f\|^2_{1/b}(x)}{\|f\|^2_{1/b}(x)} \le \frac{1}{\|f\|^2_{1/b}(x)}
\sum_n |\langle f,g_n \rangle_{1/b}(x)|^2 \le B
\frac{\|f\|^2_{1/b}(x)}{\|f\|^2_{1/b}(x)} \text{ a.e.,} \\ A
\|f\|^2_{1/b}(x) \le \sum_n |\langle f,g_n \rangle_{1/b}(x)|^2 \le B\|
f\|_{1/b}^2(x) \text{ a.e.} \end{eqnarray*}
So we are done by the previous theorem.
\end{proof}
Finally we mention that there is a convolution for our HCM in the case
$a=b=1$. This is developed in much greater detail in
\cite{ML}. Our convolution is simply the preframe operator
$\mathcal T_g(f)$. That is, we define $g*_1f=\mathcal T_g(f)$ because
\[ Z(\mathcal T_g(f))=\sum_k \langle f, e_k\rangle_1 Z(T_kg)=\sum_k \langle f, e_k\rangle_1
e^{-2 \pi i k v}Z(g)=Z(g)Z(f).\] The reason we call this a convolution
is that the Zak transform turns it into multiplication. Clearly
this does not necessarily map to an element of $L^2[0,1]^2$, but from H\"older's
inequality we see that $Z(\mathcal T_g(f))\in L^1[0,1]^2$.
\begin{thebibliography}{WW}
\bibitem{CCJ} P.G. Casazza, O. Christensen and A.J.E.M. Janssen,
{\em Weyl-Heisenberg frames, translation-invariant systems, and the
Walnut representation}, preprint.
\bibitem{CL} P.G. Casazza and M.C. Lammers,
{\em Bracket products for Weyl-Heisenberg frames}, preprint.
\bibitem{CH} O. Christensen and C. Heil, {\em Perturbations of Banach
frames and atomic decompositions}, Math. Nachr. 185 (1997), 33--47.
\bibitem{D} I. Daubechies, {\em The wavelet transform, time-frequency
localization and signal analysis}, IEEE Transactions on Information
Theory 36 (1990), no. 5, 961--1005.
\bibitem{DS} R.J. Duffin and A.C. Schaeffer, {\em A class of
non-harmonic Fourier series}, Trans. Amer. Math. Soc. 72 (1952), 341--366.
\bibitem{FS} H.G. Feichtinger and T. Strohmer, eds., {\em Gabor
Analysis and Algorithms: Theory and Applications}, Birkh\"auser, Boston,
1998.
\bibitem{FL} M. Frank and D. Larson, {\em Frames in Hilbert
$C^*$-modules and $C^*$-algebras}, preprint.
\bibitem{G} D. Gabor, {\em Theory of communication}, Jour. Inst.
Elec. Eng. (London) 93 (1946), 429--457.
\bibitem{H} C. Heil, {\em Wiener amalgam spaces in generalized harmonic
analysis and wavelet theory}, Ph.D. thesis, Univ. of Maryland, College
Park, 1990.
\bibitem{ML} M.C. Lammers, {\em Oversampled Zak transforms and the Zak
algebras}, preprint.
\bibitem{rw} I. Raeburn and D.P. Williams, {\em Morita Equivalence and
Continuous-Trace $C^*$-Algebras}, American Mathematical Society, 1998.
\bibitem{rs2} A. Ron and Z. Shen, {\em Weyl-Heisenberg frames and
Riesz bases in $L^{2}(\mathbb R^{d})$}, Duke Math. J. 89 (1997), 237--282.
\end{thebibliography}
\end{document}
|
\begin{document}
\title{Double-Well\xspace Quantum Tunneling Visualized via Wigner's Function}
\author{Dimitris Kakofengitis}
\email{[email protected]}
\author{Ole Steuernagel}
\email{[email protected]}
\affiliation{School of Physics, Astronomy and Mathematics,
University of Hertfordshire, Hatfield, AL10 9AB, UK}
\date{\today}
\begin{abstract}
We investigate quantum tunneling in smooth symmetric and asymmetric
double-well\xspace potentials. Exact solutions for the ground and first excited\xspace
states are used to study the dynamics. We introduce Wigner's
quasi-probability distribution\xspace function to highlight and visualize the
non-classical nature of spatial correlations arising in tunneling.
\end{abstract}
\maketitle
\section{Introduction}
Quantum tunneling was first discussed by Friedrich Hund in 1927 when
he considered the ground state\xspace of a double-well\xspace
potential~\cite{Nimtz_08,Merzbacher_02}. Quantum tunneling is a
microscopic phenomenon where a particle can penetrate into and in
some cases pass through a potential barrier, although the barrier is
energetically higher than the kinetic energy of the particle. This
motion amounts to particles penetrating areas where they have
``negative kinetic energy'' and is not allowed by the laws of
classical dynamics~\cite{Razavy_03}. Double-well\xspace potentials can be used
for the study of tunneling, as the central hump separating the two
wells constitutes a tunneling barrier, for eigenstates lower in
energy than the maximum height of the barrier. They are also used for
the study of molecular structures, for example in the ammonia
molecule~\cite{Ka_03}. The concept of quantum tunneling is central
to the operation of scanning tunneling
microscopes~\cite{Bai_00,Carminati_95}.
In this paper we investigate tunneling in partially exactly solvable
double-well\xspace potentials~\cite{Caticha_95} through the use of the Wigner
quasi-probability distribution\xspace function. We model tunneling based on the dynamics of
the lowest two wave function\xspace{}s, the quantum mechanical ground state\xspace and the
first excited\xspace state~\cite{Caticha_95}. We also present the probability distribution\xspace{}s
with respect to momentum and position, and illustrate the presence of
quantum coherences, which give rise to interference fringes with
negative values of Wigner's quasi-probability distribution\xspace.
In the next section, we provide a self-consistent discussion of the
partially exactly solvable symmetric and asymmetric double-well\xspace
potentials, first established in~\cite{Caticha_95}. In
section~\ref{sec:WignerFunction} we introduce Wigner's
quasi-probability distribution\xspace function and illustrate, for each case of the
double-well\xspace potential, time-evolution of the associated Wigner
quasi-probability distribution\xspace, and probability distribution\xspace{}s with respect to position and
momentum. Subsection~\ref{sec:QuantumCoherence} considers negative
regions of Wigner's quasi-probability distribution\xspace function and allows us to
visualize non-classical spatial coherences in tunneling; finally, we
conclude.
\section{Partially Exactly Solvable Double-Well\xspace Potentials}
\subsection{The Schr\"odinger Equation}\label{sec:Schrodinger}
A family of partially exactly solvable double-well\xspace potentials was
introduced in 1995 by Caticha~\cite{Caticha_95}; for these the
ground and first excited\xspace states (and in some cases states energetically
above those) can be computed analytically in simple closed forms.
Note that despite considerable efforts, only a few fully solvable
smooth potentials are known~\cite{Gendenshtein_83,Cooper_95}; no
fully solvable smooth double-well\xspace potential has yet been found. Such
partially solvable models may well prove to be very useful as
benchmarks for computer code and numerical tests.
The Schr\"odinger equation (for the two lowest states, $n = 0$ and
$n = 1$) is
\begin{equation}
\label{eq:SrhodingerEquation}
\frac{\hbar^2}{2m}{\psi_n}^{\prime\prime}(x) + \left[{E_n} - V(x)
\right]
{\psi_n}(x) = 0\; .
\end{equation}
Throughout this paper we choose $\hbar^2/2m = 1$. Similarly to
super-symmetric quantum mechanics~\cite{Gendenshtein_83,Cooper_95},
where a superpotential is defined in order to compute the ground state\xspace and
the corresponding potential, in~\cite{Caticha_95} a multiplier-function\xspace $\phi$
is defined, which relates the ground and first excited\xspace states by
\begin{equation}
\label{eq:FunctionPhi}
\psi_1 = \phi\psi_0\; .
\end{equation}
Substituting Eq.~(\ref{eq:FunctionPhi}) into
Eq.~(\ref{eq:SrhodingerEquation}) for $n = 1$ yields
\begin{equation}
\label{eq:SEnequals1}
\left(\phi\psi_0\right)^{\prime\prime} + \left[E_1-V
\right]\phi\psi_0 = 0\; .
\end{equation}
Eq.~(\ref{eq:SrhodingerEquation}) for $n = 0$ multiplied by $\phi$ is
\begin{equation}
\label{eq:SEnequals0}
\phi\psi_0^{\prime\prime} + \left[E_0 - V\right]\phi\psi_0 = 0\; .
\end{equation}
Upon subtraction of Eq.~(\ref{eq:SEnequals0}) from
Eq.~(\ref{eq:SEnequals1}), one obtains a function $\chi$
\begin{equation}
\label{eq:Superpotential}
\chi(x) = \frac{\phi^{\prime\prime} + \Delta E\phi}{2\phi^\prime} =
-\frac{\psi_0^\prime}{\psi_0} = -\frac{d\ln\psi_0}{dx}\; ,
\end{equation}
where $\Delta E = E_1 - E_0$ is the energy splitting between ground
and first excited\xspace states. Eq.~(\ref{eq:Superpotential}) resembles a
superpotential in the context of super-symmetric quantum
systems~\cite{Gendenshtein_83,Cooper_95}. The corresponding ground state\xspace
$\psi_0$ therefore is
\begin{equation}
\label{eq:Groundstate}
\psi_0(x) = \mathcal{N}\xspace\exp\left(-\int_0^x\chi(x^\prime)dx^\prime
\right)\; ,
\end{equation}
where $\mathcal{N}\xspace$ is a normalization constant. Rearranging
Eq.~(\ref{eq:SEnequals0}) determines the double-well\xspace potential $V$, up
to an additive constant
\begin{equation}
\label{eq:DoubleWellPotential}
V(x) = \frac{\psi_0^{\prime\prime}}{\psi_0} + E_0 = \chi^2 -
\chi^\prime + E_0\; .
\end{equation}
This is illustrated in
Fig.~\ref{fig:SuperpotentialPotentialPhiFunctions}.
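The second equality in Eq.~(\ref{eq:DoubleWellPotential}) follows directly from Eq.~(\ref{eq:Superpotential}): writing $\psi_0^\prime = -\chi\psi_0$,
\begin{displaymath}
\psi_0^{\prime\prime} = \left(-\chi\psi_0\right)^\prime
  = -\chi^\prime\psi_0 - \chi\psi_0^\prime
  = \left(\chi^2 - \chi^\prime\right)\psi_0 ,
\qquad\text{so}\qquad
\frac{\psi_0^{\prime\prime}}{\psi_0} = \chi^2 - \chi^\prime .
\end{displaymath}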
\begin{figure}
\caption{The double-well\xspace potential $V$ (solid line), the function $\chi$
(dashed line) and the multiplier-function\xspace $\phi$ (dotted line).
For the symmetric case ({\bf a}) and the asymmetric case ({\bf b}).}
\label{fig:SymmetricSuperpotentialPotentialPhi}
\label{fig:AsymmetricSuperpotentialPotentialPhi}
\label{fig:SuperpotentialPotentialPhiFunctions}
\end{figure}
\subsection{Symmetric Double-Well\xspace Potential}
For the case of a symmetric double-well\xspace potential, as displayed in
Figs.~\ref{fig:SuperpotentialPotentialPhiFunctions}~{\bf a}
and~\ref{fig:SymmetricDoubleWellPotential}, we use the multiplier-function\xspace
\begin{equation}
\label{eq:SymmetricCasePhiFunction}
\phi(x) = \frac{\sinh (ax)}{\cosh (bx)}\; ,
\end{equation}
where (with $E_0<E_1<0$)
\begin{equation}
\label{eq:SymmetricDoubleWellVariables} a = \sqrt{-E_0} \quad \mbox{
and } \quad b = \sqrt{-E_1}\; .
\end{equation}
The function $\chi$ then takes the form
\begin{multline}
\label{eq:SymmetricSuperpotential}
\chi(x) =\frac{\sinh (ax)\left(2b^2 - 2a^2\cosh^2 (bx)\right)}{b\sinh
(ax)\sinh (2bx)
- 2a\cosh (ax)\cosh^2 (bx)}\\
+ \frac{ab\cosh (ax)\sinh (2bx)}{b\sinh (ax)\sinh (2bx) - 2a\cosh
(ax)\cosh^2 (bx)}
\end{multline}
and the corresponding symmetric potential $V$ is
\begin{equation}
\label{eq:SymmetricPotential}
V(x) = \frac{2\left(b^2 - a^2\right)\left(a^2\cosh^2 (bx) +
b^2\sinh^2 (ax)\right)}{(a\cosh (ax)\cosh (bx) - b\sinh (ax)\sinh
(bx))^2}\; .
\end{equation}
The ground state\xspace $\psi_0$ (note the typographical error in the
corresponding Eq.~(19) of ref.~\cite{Caticha_95}) is
\begin{equation}
\label{eq:SymmetricGroundState} \psi_0(x) = \frac{\psi_0(0)(e^{2bx}
+ 1)(a - b)e^{ax}}{(e^{2(b + a)x} + 1)(a - b) + (a + b)(e^{2ax} +
e^{2bx})}
\end{equation}
and the first excited\xspace state $\psi_1$ is
\begin{equation}
\label{eq:SymmetricFirstExcited}
\psi_1(x) = \frac{\psi_1(0)(e^{2ax} - 1)(a - b)e^{bx}}{(e^{2(b +
a)x} + 1)(a - b) + (a + b)(e^{2ax} + e^{2bx})}\; .
\end{equation}
The two wave function\xspace{}s have odd and even parity, see
Fig.~\ref{fig:SymmetricDoubleWellPotential}.
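The parity claim is easy to verify numerically. The following sketch (not from the paper's numerics) evaluates Eqs.~(\ref{eq:SymmetricGroundState}) and (\ref{eq:SymmetricFirstExcited}) for $E_0=-1$, $E_1=-0.9$; the prefactors $\psi_n(0)$ are set to $1$, since normalization does not affect parity:

```python
import math

# Sketch (not the paper's code): check the parity of the symmetric-well
# states for E0 = -1, E1 = -0.9, i.e. a = sqrt(-E0) = 1, b = sqrt(-E1).
# The prefactors psi_n(0) are set to 1.
a = 1.0
b = math.sqrt(0.9)

def denom(x):
    return ((math.exp(2*(b + a)*x) + 1)*(a - b)
            + (a + b)*(math.exp(2*a*x) + math.exp(2*b*x)))

def psi0(x):  # ground state, expected even
    return (math.exp(2*b*x) + 1)*(a - b)*math.exp(a*x)/denom(x)

def psi1(x):  # first excited state, expected odd
    return (math.exp(2*a*x) - 1)*(a - b)*math.exp(b*x)/denom(x)

for x in (0.3, 0.7, 1.5):
    assert abs(psi0(x) - psi0(-x)) < 1e-12  # even parity
    assert abs(psi1(x) + psi1(-x)) < 1e-12  # odd parity
```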
\begin{figure}
\caption{(Color Online) The symmetric double-well\xspace potential $V$
(solid blue line), the corresponding ground state\xspace $\psi_0$
(dashed red line) and the first excited\xspace state $\psi_1$
(dashed green line), for $E_0 = -1$ and $E_1 = -0.9$.}
\label{fig:SymmetricDoubleWellPotential}
\end{figure}
\subsection{Asymmetric Double-Well\xspace Potential}
With the multiplier-function\xspace $\phi$
\begin{equation}
\label{eq:AsymmetricCasePhiFunction}
\phi(x) = \alpha + \tanh(\beta x)
\end{equation}
the function $\chi$ takes the form
\begin{equation}
\label{eq:AsymmetricSuperpotential}
\chi(x) = \frac{\Delta E}{4\beta}\left[\sinh(2\beta x) +
2\alpha\cosh^2(\beta x)\right] - \beta\tanh(\beta x)
\end{equation}
and the corresponding asymmetric potential $V$ is
\begin{multline}
\label{eq:AsymmetricPotential}
V(x) = \beta^2 - \Delta E\alpha\sinh (2\beta x)\\
+ \cosh^2 (\beta x)\left(\frac{\Delta E^2}{4\beta^2}\alpha\sinh
(2\beta x) - \frac{\Delta E^2}{4\beta^2} - 2\Delta E\right) \\
+ \frac{\Delta E^2}{4\beta^2}\left( \alpha^2 + 1\right)\cosh^4
(\beta x) + \frac{3}{2}\Delta E + E_0\; .
\end{multline}
The corresponding lowest two wave function\xspace{}s are
\begin{multline}
\label{eq:AsymmetricGroundState}
\psi_0(x) = \psi_0(0) \cosh (\beta x)\\
\times \exp\left[ -\frac{\Delta E}{4\beta^2}\left(\cosh^2 (\beta x)
+ \alpha\beta x + \frac{\alpha}{2}\sinh (2\beta x)\right)\right]
\end{multline}
and
\begin{multline}
\label{eq:AsymmetricFirstExcited}
\psi_1(x) = \psi_1(0) (\alpha\cosh (\beta x) + \sinh (\beta x))\\
\times \exp\left[ -\frac{\Delta E}{4\beta^2}\left(\cosh^2 (\beta x)
+ \alpha\beta x + \frac{\alpha}{2}\sinh (2\beta x)\right)\right]\; .
\end{multline}
Fig.~\ref{fig:AsymmetricDoubleWellPotential} illustrates that the
potential's asymmetry strongly modifies the shape of the lowest two
wave function\xspace{}s as compared to the symmetric case, with $\alpha = 0$,
in~Fig.~\ref{fig:SymmetricDoubleWellPotential}.
\begin{figure}
\caption{(Color Online) The asymmetric double-well\xspace potential $V$ (solid
blue line), the corresponding ground state\xspace $\psi_0$ (dashed red line) and
the first excited\xspace state~$\psi_1$ (dashed green line), for $\alpha = 0.9$,
$\beta = 1$, $E_0 = 0$ and $\Delta E = 1$.}
\label{fig:AsymmetricDoubleWellPotential}
\end{figure}
To investigate the tunneling dynamics we use the normalized
superposition state
\begin{multline}
\label{eq:Superposition} \Psi(x,t) =
\sin(\theta)\exp\left(\frac{-iE_0t}{\hbar}\right)\psi_0(x) \\
+ \cos(\theta)\exp\left(\frac{-iE_1t}{\hbar}\right)\psi_1(x)\; ,
\end{multline}
with the weighting angle $\theta \in [0,\pi/2]$. The energy
splitting $\Delta E$ gives rise to the beat period (or reciprocal
barrier-tunneling rate~\cite{Merzbacher_02,Razavy_03}) $T =
2\pi\hbar/\Delta E$. In other words, $T$ is the time needed for a
quantum particle initially localized in, say, the left well, to
perform a full oscillation: left-right-left. The larger the energy
splitting, the shorter the beat period.
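The left-right oscillation can be checked using parity alone: after half a beat period the relative phase is $e^{-i\Delta E T/(2\hbar)} = -1$, so for an even $\psi_0$ and odd $\psi_1$ the density satisfies $|\Psi(x,T/2)|^2 = |\Psi(-x,0)|^2$. A sketch with generic even/odd stand-in states (Gaussians, not the exact double-well eigenfunctions):

```python
import cmath, math

hbar = 1.0
E0, E1 = -1.0, -0.9
dE = E1 - E0
T = 2*math.pi*hbar/dE  # beat period

# Stand-in states with the correct parity (illustration only; NOT the
# exact double-well eigenfunctions): psi0 even, psi1 odd.
def psi0(x): return math.exp(-x*x/2)
def psi1(x): return x*math.exp(-x*x/2)

theta = math.pi/4  # equal weighting of the two states

def Psi(x, t):
    return (math.sin(theta)*cmath.exp(-1j*E0*t/hbar)*psi0(x)
            + math.cos(theta)*cmath.exp(-1j*E1*t/hbar)*psi1(x))

# After half a beat period the relative phase equals -1, so the
# probability density is the mirror image of the initial one.
for x in (-1.2, 0.4, 2.0):
    assert abs(abs(Psi(x, T/2))**2 - abs(Psi(-x, 0))**2) < 1e-12
```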
\section{Wigner's Function}\label{sec:WignerFunction}
Eugene Wigner introduced his quasi-probability distribution\xspace function $W(x,p;t)$ in
1932 for the study of quantum corrections to classical statistical
mechanics~\cite{Wigner_32}. It is a generating function for all
spatial auto-correlation functions of a given quantum mechanical
wave function\xspace $\psi$ and is defined as~\cite{Razavy_03,Wigner_32,Belloni_04}
\begin{equation}
\label{eq:WignerDistributionFunction} W(x,p;t) =
\frac{1}{\pi\hbar}\int_{-\infty}^\infty{\Psi^*(x+y,t)\Psi(x-y,t)
e^{\frac{2ipy}{\hbar}}dy}\; ,
\end{equation}
where $x$ and $y$ are position variables and $p$ is the momentum. The
Wigner quasi-probability distribution\xspace function is a real-valued phase-space
distribution function; it can assume negative values, which is why it
is referred to as a \emph{quasi}-probability distribution\xspace.
Its marginals are the quantum-mechanical probability distribution\xspace{}s of position
\begin{equation}
\label{eq:PositionProbabilityDistribution}
P(x,t) = \left|\Psi(x,t)\right|^2 = \int_{-\infty}^\infty
{W(x,p;t)\; dp}
\end{equation}
and momentum
\begin{equation}
\label{eq:MomentumProbabilityDistribution} \tilde P(p,t) =
\left|\Phi(p,t)\right|^2 = \int_{-\infty}^\infty {W(x,p;t)\; dx}\; .
\end{equation}
It is normalized,
$\int_{-\infty}^\infty{\int_{-\infty}^\infty{W(x,p;t)\; dp}\; dx} = 1$,
and the overlap of the Wigner functions $W_\psi(x,p;t)$ and
$W_\chi(x,p;t)$ of two distinct quantum states, $\psi(x,t)$ and
$\chi(x,t)$, yields the magnitude of their wave function\xspace overlap
squared~\cite{Belloni_04}
\begin{equation}
\label{eq:WignerNonNegativeRelation}
\int_{-\infty}^\infty{\int_{-\infty}^\infty{W_\psi(x,p;t)
W_\chi(x,p;t)\; dp}\; dx} =
\frac{1}{2\pi\hbar}\left|\left\langle\psi|\chi\right\rangle\right|^2.
\end{equation}
Fig.~\ref{fig:DoubleWellWignerDistribution} shows plots of the time
evolution of the Wigner functions for symmetric and asymmetric double-well\xspace
potentials and the associated marginals $P(x,t)$ and $\tilde P(p,t)$;
all Wigner functions and marginals~$\tilde P(p,t)$ had to be
determined through numerical integrations.
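Such quadratures are straightforward to validate: Eq.~(\ref{eq:WignerDistributionFunction}) can be evaluated by a simple Riemann sum, and for a Gaussian test state $\psi(x)=\pi^{-1/4}e^{-x^2/2}$ (with $\hbar=1$; an illustration, not the double-well state) it reproduces the known result $W(x,p)=e^{-x^2-p^2}/\pi$:

```python
import cmath, math

hbar = 1.0

def psi(x):  # Gaussian test state (illustration only)
    return math.pi**-0.25 * math.exp(-x*x/2)

def wigner(x, p, ylim=6.0, n=2000):
    # Midpoint-rule version of the Wigner integral
    # W(x,p) = (1/(pi*hbar)) * int psi*(x+y) psi(x-y) exp(2ipy/hbar) dy
    dy = 2*ylim/n
    s = 0.0 + 0.0j
    for k in range(n):
        y = -ylim + (k + 0.5)*dy
        s += psi(x + y)*psi(x - y)*cmath.exp(2j*p*y/hbar)*dy
    return s.real/(math.pi*hbar)

# Compare against the exact Wigner function exp(-x^2 - p^2)/pi.
for (x, p) in ((0.0, 0.0), (0.5, -0.3), (1.0, 1.0)):
    exact = math.exp(-x*x - p*p)/math.pi
    assert abs(wigner(x, p) - exact) < 1e-6
```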
\begin{figure}
\caption{(Color Online) Time
evolution of the Wigner quasi-probability distribution\xspace, for the symmetric double-well\xspace
potential $V$ of~Eq.~(\ref{eq:SymmetricPotential}) and the asymmetric
double-well\xspace potential of~Eq.~(\ref{eq:AsymmetricPotential}), together with
the marginal probability distribution\xspace{}s $P(x,t)$ and $\tilde P(p,t)$.}
\label{fig:SymmetricDoubleWellWignerDistribution1}
\label{fig:AsymmetricDoubleWellWignerDistribution1}
\label{fig:SymmetricDoubleWellWignerDistribution2}
\label{fig:AsymmetricDoubleWellWignerDistribution2}
\label{fig:SymmetricDoubleWellWignerDistribution3}
\label{fig:AsymmetricDoubleWellWignerDistribution3}
\label{fig:DoubleWellWignerDistribution}
\end{figure}
\subsection{Negative values of Wigner function}\label{sec:QuantumCoherence}
As can be seen from the projections of the Wigner quasi-probability distribution\xspace{}s
in Fig.~\ref{fig:DoubleWellWignerDistribution}, a particle that
exists in two places at the same time shows interference fringes in
its momentum probability distribution\xspace $\tilde P(p,t)$. These are incompatible with
a single humped position distribution~$P(x,t)$ in conjunction with a
positive semi-definite phase space probability distribution\xspace. A phase
space distribution simultaneously yielding such marginals has to
contain regions with negative values. These negative regions are
indicators of the non-classical character of the spatial coherences
of the wave function\xspace{}s~\cite{Schleich_01} and are frequently studied in
experiments~\cite{Grangier_11}. They arise, e.g., whenever the
wave function\xspace spreads out over both wells of the double-well\xspace potential.
To avoid confusion, we would like to emphasize that negative regions
of the Wigner function, as a generating function for spatial
auto-correlation functions, represent non-classical behavior of
spatial correlations but not of the non-classical behavior of
tunneling associated with ``negative kinetic energy''. The interference
fringes of the Wigner function, appearing roughly in the area where
tunneling occurs, represent the non-classical spatial coherence of the
wave function\xspace{}s located in both wells simultaneously.
\begin{figure}
\caption{(Color Online; color scheme identical to
Fig.~\ref{fig:DoubleWellWignerDistribution}.)}
\label{fig:DoubleWellWignerDistributionFringes}
\end{figure}
Fig.~\ref{fig:DoubleWellWignerDistributionFringes} illustrates the
changes Wigner's functions undergo when the potential changes from
two separate wells to merged single troughs (displayed in
Fig.~\ref{fig:DoubleWellPotentialRangeDeltaE}): since the coupling
between the wells increases, so does $\Delta E$. Concurrently, the
peaks of the spatial wave functions move together, which increases
the fringe spacing in the associated momentum representation,
visible as a reduction of the spatial frequency of the interference
patterns of Wigner's functions.
\begin{figure}
\caption{(Color Online) The symmetric double-well\xspace potential $V$ of
Eq.~(\ref{eq:SymmetricPotential}) for a range of values of $E_1$, and the
asymmetric potential of Eq.~(\ref{eq:AsymmetricPotential}) for a range of
values of $\Delta E$.}
\label{fig:SymmetricDoubleWellPotentialRangeE1}
\label{fig:AsymmetricDoubleWellPotentialRangeDeltaE}
\label{fig:DoubleWellPotentialRangeDeltaE}
\end{figure}
\section{Conclusion}
We have considered the effects of tunneling in smooth partially
exactly solvable double-well\xspace potentials~\cite{Caticha_95}. We illustrated
the behavior of the associated wave function\xspace{}s and Wigner's
quasi-probability phase-space distributions. Wigner's functions can
assume negative values representing non-classical spatial coherences
of the wave function\xspace{}s; these were shown to arise in the case of tunneling,
and we have analyzed and interpreted them.
\pagebreak
\input{bibliography.bbl}
\end{document}
|
\begin{document}
\begin{abstract}
Colding and Minicozzi have shown that an embedded minimal disk $0\in\Sigma\subset B_R$ in $\mathbb R^3$ with large curvature at $0$ looks like a helicoid on the scale of $R$. Near $0$, this can be sharpened: on the scale of $|A|^{-1}(0)$, $\Sigma$ is close, in a Lipschitz sense, to a piece of a helicoid. We use surfaces constructed by Colding and Minicozzi to see this description cannot hold on the scale $R$.
\end{abstract}
\title{Distortions of the Helicoid}
In \cite{CM1,CM2,CM3,CM4}, Colding and Minicozzi give a complete
description of the structure of embedded minimal disks in a ball in
$\mathbb R^3$. Roughly speaking, they show that any such surface is
either modeled on a plane (i.e. is nearly graphical) or is modeled
on a helicoid (i.e. is two multi-valued graphs glued together
along an axis). In the latter case, the distortion may be quite
large. For instance, in \cite{MW}, Meeks and Weber ``bend'' the
helicoid; that is, they construct minimal surfaces where the axis
is an arbitrary $C^{1,1}$ curve (see Figure \ref{MWexample}). A more serious example of
distortion is given by Colding and Minicozzi in \cite{CMPVNP}.
There they construct a sequence of minimal disks modeled on the
helicoid, but where the ratio between the scales (a measure of the
tightness of the spiraling of the multi-graphs) at different
points of the axis becomes arbitrarily large (see Figure \ref{CMexample}). Note, locally, near
points of large curvature, the surface is close to a helicoid, and
so the distortions are necessarily global in nature.
\begin{figure}
\caption{A cross section of one of Colding and Minicozzi's examples. Here $R=1$ and $(0,s)$ is a blow-up pair.}
\label{CMexample}
\end{figure}
\begin{figure}
\caption{A cross section of one of Meeks and Weber's examples, with axis the circle. Here $R$ is the outer scale of a disk and $(y,s)$ is a blow-up pair.}
\label{MWexample}
\end{figure}
Following \cite{CM2} we make the meaning of large curvature
precise by saying a pair $(y,s)\in\Sigma\times \mathbb R^+$ is a $(C)$
\emph{blow-up pair} if $\sup_{B_s(y)\cap \Sigma} |A|^2\leq
4C^2s^{-2}=4|A|^2(y)$ (here $C$ is large and fixed and $\Sigma
\subset \mathbb R^3$ minimal). For $\Sigma$ minimal with $\partial
\Sigma\subset \partial B_R$ where $(0,s)$ is a blow-up pair, there
are two important scales; $R$ the outer scale and $s$ the blow-up
scale. The work of Colding and Minicozzi gives a value
$0<\Omega<1 $ so that the component of $\Sigma \cap B_{\Omega R }$
containing $0$ consists of two multi-valued graphs glued together
(see for instance Lemma 2.5 of \cite{CY} for a self-contained
explanation). On the other hand, Theorem 1.5 of \cite{BB} shows
that on the scale of $s$ (provided $R/s$ is large), $\Sigma$ is
bi-Lipschitz to a piece of a helicoid with Lipschitz constant near
1. Using the surfaces constructed in \cite{CMPVNP} we show that
such a result cannot hold on the outer scale and indeed fails to hold on certain smaller scales:
\begin{thm} \label{FirstThm}
Given $1>\Omega,\varepsilon>0$ and $1/2 > \gamma \geq 0$ there exists an embedded
minimal disk $0\in\Sigma$ with $\partial \Sigma\subset
\partial B_{R}$ and $(0,s)$ a blow-up pair so: the component of
$B_{\Omega R^{1-\gamma} s^{\gamma}}\cap \Sigma$ containing $0$ is not
bi-Lipschitz to a piece of a helicoid with Lipschitz constant in
$((1+\varepsilon)^{-1},1+\varepsilon)$.
\end{thm}
First, we recall the surfaces constructed in \cite{CMPVNP}:
\begin{thm} \label{SurfExst}(Theorem 1 of \cite{CMPVNP}) There is a sequence of compact embedded minimal disks $0\in \Sigma_i\subset B_1\subset \mathbb R^3$ with $\partial \Sigma_i\subset \partial B_1$ containing the vertical segment $\set{(0,0,t) : |t|\leq 1}\subset \Sigma_i$ such that the following conditions are satisfied:
\begin{enumerate}
\item $|A_{\Sigma_i}|^2(0)\to \infty$ as $i\to \infty$
\item \label{SecItema}
$\sup_{\Sigma_i}|A_{\Sigma_i}|^2 \leq 4|A_{\Sigma_i}|^2(0) = 8
a_i^{-4}$ for a sequence $a_i \to 0$
\item
\label{SecItem} $\sup_i \sup_{\Sigma_i\backslash B_\delta}
|A_{\Sigma_i}|^2 <K\delta^{-4}$ for all $1>\delta>0$ and $K$ a
universal constant.
\item \label{ThrItem}
$\Sigma_i\backslash\set{x_3-axis} =\Sigma_{1,i}\cup \Sigma_{2,i}$
for multi-valued graphs $\Sigma_{1,i}$ and $\Sigma_{2,i}.$
\end{enumerate}
\end{thm}
\begin{rem}
\eqref{SecItema} and \eqref{SecItem} are slightly sharper than what is stated in Theorem
1 of \cite{CMPVNP}, but follow easily. \eqref{SecItema} follows from the Weierstrass data (see Equation (2.3) of \cite{CMPVNP}). This also gives \eqref{SecItem} near the axis, whereas away from the
axis one uses \eqref{ThrItem} and Heinz's curvature estimates.
\end{rem}
Next we introduce some notation. For a surface $\Sigma$ (with a smooth metric) we
denote intrinsic balls by $\mathcal{B}_s^\Sigma$ and define the
(\emph{intrinsic}) \emph{density ratio} at a point $p$ as: $
\theta_s (p,\Sigma)=(\pi s^2)^{-1}\mathcal{A}rea(\mathcal{B}_s^\Sigma (p)
)$. When $\Sigma$ is immersed in $\mathbb R^3$ and has the induced metric,
$\theta_s(p,\Sigma)\leq \Theta_s (p,\Sigma)=({\pi
s^2})^{-1}{\mathcal{A}rea(B_s(p)\cap \Sigma)}$, the usual (extrinsic)
density ratio. Importantly, the intrinsic density ratio is well-behaved under bi-Lipschitz maps. Indeed, if $f: \Sigma \to
\Sigma'$ is injective with $\alpha^{-1}<\Lip f<\alpha$, then:
\begin{equation} \label{DensBnds}
\alpha^{-4}\theta_{\alpha^{-1} s} (p,\Sigma) \leq \theta_s (f(p),{\Sigma'}) \leq \alpha^4\theta_{\alpha s} (p,\Sigma).
\end{equation}
This follows from the inclusion
$\mathcal{B}^\Sigma_{\alpha^{-1} s} (f^{-1}(p))\subset f^{-1}
(\mathcal{B}^{\Sigma'}_s (p))$ and the behavior of area under
Lipschitz maps, $\mathcal{A}rea( f^{-1} (\mathcal{B}^{\Sigma'}_s (p)))\leq
(\Lip f^{-1})^2 \mathcal{A}rea (\mathcal{B}^{\Sigma'}_s (p))$.
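Explicitly, for the right-hand inequality in \eqref{DensBnds}: if $q\in\mathcal{B}^{\Sigma'}_s(f(p))$ then $f^{-1}(q)\in\mathcal{B}^{\Sigma}_{\alpha s}(p)$, so $\mathcal{B}^{\Sigma'}_s(f(p))\subset f(\mathcal{B}^{\Sigma}_{\alpha s}(p))$ and
\begin{displaymath}
\theta_s(f(p),\Sigma')
  = \frac{\mathcal{A}rea\big(\mathcal{B}^{\Sigma'}_s(f(p))\big)}{\pi s^2}
  \le \frac{(\Lip f)^2\,\mathcal{A}rea\big(\mathcal{B}^{\Sigma}_{\alpha s}(p)\big)}{\pi s^2}
  \le \alpha^4\,\frac{\mathcal{A}rea\big(\mathcal{B}^{\Sigma}_{\alpha s}(p)\big)}{\pi (\alpha s)^2}
  = \alpha^4\,\theta_{\alpha s}(p,\Sigma) ;
\end{displaymath}
the left-hand inequality follows symmetrically, applying the same argument to $f^{-1}$.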
Note that by standard area estimates for minimal graphs, if $\Sigma\cap B_s(p)$ is a minimal graph then
$\theta_s(p,\Sigma)\leq 2$. In contrast, for a point near the
axis of a helicoid, for large $s$ the density ratio is large.
Thus, in a helicoid the density ratio for a fixed, large $s$
measures, in a rough sense, the distance to the axis. More generally, this holds near blow-up pairs of embedded minimal disks:
\begin{lem} \label{LowerBndDensLem}
Given $D>0$ there exists $R>1$ so: If $0\in\Sigma\subset B_{2Rs}$
is an embedded minimal disk with $\partial \Sigma\subset\partial
B_{2 Rs}$ and $(0,s)$ a blow-up pair then
$\theta_{Rs}(0,\Sigma)\geq D$.
\end{lem}
\begin{proof}
We proceed by contradiction; that is, suppose there were a $D>0$
and embedded minimal disks $0\in\Sigma_i$ with
$\partial \Sigma_i \subset \partial
B_{2 R_i s}$ with $R_i\to \infty$ and $(0,s)$ a blow-up pair so
that $\theta_{R_i s}(0,\Sigma_i)\leq D$. The chord-arc
bounds of \cite{CY} imply there is a $1>\gamma>0$ so $\mathcal{B}_{R_i
s}^{\Sigma_i} (0)\supset \Sigma_i \cap B_{\gamma R_i s}$. Hence,
the
intrinsic density ratio bounds the extrinsic density ratio, i.e. $D\geq \theta_{R_i s}(0,{\Sigma_i})\geq
\gamma^2 \Theta_{\gamma R_i s}(0, \Sigma_i)$. Then, by a result of
Schoen and Simon \cite{ScSi} there is a constant
$K=K(D\gamma^{-2})$, so $|A_{\Sigma_i}|^2(0)\leq K(\gamma R_i
s)^{-2}$. But for $R_i$ very large this contradicts that $(0,s)$
is a blow-up pair for all $\Sigma_i$.
\end{proof}
\begin{rem}
Note that the above does not depend on the strength of chord-arc
bounds. In fact, it is also an immediate consequence of the fact
that intrinsic area bounds on a disk give total curvature bounds.
In turn, the total curvature bounds again yield uniform curvature
bounds. See Section 1 of \cite{CM2} for more detail.
\end{rem}
To produce our counterexample, we exploit the fact that two points
on a helicoid that are equally far from the axis must have the
same density ratio. Assuming the existence of a Lipschitz map
between our surface $\Sigma$ and a helicoid, we get a
contradiction by comparing the densities for two appropriately
chosen points that map to points equally far from the axis of the
helicoid.
\begin{proof}(of Theorem \ref{FirstThm})
Fix $1>\Omega,\varepsilon>0$ and $1/2 > \gamma\geq 0$ and set
$\alpha=1+\varepsilon$. Let $\Sigma_i$ be the surfaces of Theorem
\ref{SurfExst}; we claim for $i$ large, $\Sigma_i$ will be the
desired example. Suppose this was not the case. Setting $s_i =
Ca_i^2/\sqrt{2}$, where $a_i$ is as in \eqref{SecItema} and $C$ is the blow-up constant, one has
$(0,s_i)$ is a blow-up pair in $\Sigma_i$, since $\sup_{\Sigma_i
\cap B_{s_i}} |A_{\Sigma_i}|^2\leq 8a_i^{-4} = 4C^2s_i^{-2} =
4|A_{\Sigma_i}|^2(0)$; moreover, $s_i\to 0$. Hence, with $R_i=
\Omega s_i^{\gamma}<1$, the component of $B_{R_i}\cap {\Sigma}_i$
containing $0$, ${\Sigma}_i'$, is bi-Lipschitz to a piece of a
helicoid with Lipschitz constant in $(\alpha^{-1},\alpha)$. That
is, there are subsets $\Gamma_i$ of helicoids and diffeomorphisms
$f_i:{\Sigma}_i' \to \Gamma_i$ with $\Lip f_i\in
(\alpha^{-1},\alpha)$.
We now begin the density comparison. First, Lemma
\ref{LowerBndDensLem} implies there is a constant $r>0$ so for $i$
large $\theta_{r s_i} (0,{\Sigma_i'})\geq 4 \alpha^8$ and thus by
\eqref{DensBnds} $\theta_{\alpha r s_i} (f_i(0),{\Gamma_i})\geq 4
\alpha^{4} $. We proceed to find a point with small density on
$\Sigma_i$ that maps to a point on $\Gamma_i$ equally far from the
axis as $f_i(0)$ (which has large density).
Let $U_i$ be the interior of the component of $B_{1/2 R_i} \cap
{\Sigma}_i$ containing $0$. Note for $i$ large enough, as $s_i/R_i\to 0$, the distance between $\partial U_i$ and $\partial {\Sigma_i}'$
is greater than $4\alpha^2 rs_i$. Similarly, for $p\in\partial U_i$ for $i$ large, $p'\in
\mathcal{B}^{\Sigma_i'}_{4\alpha^2 r s_i}(p)$ implies $|p'|\geq
\frac{1}{4} R_i$. Hence, property \eqref{SecItem} gives that
$|A_{\Sigma_i'}|^2(p')\leq K' s_i^{-4\gamma}$. Thus, for $i$
sufficiently large $\mathcal{B}_{\alpha^2 r s_i} (p)$ is a graph
and so $\theta_{\alpha^2 r s_i} (p,{{\Sigma}_i'})\leq 2$. Pick
$u_i\in
\partial f_i(U_i)$ at the same distance to the axis as $f_i(0)$, so that the density ratio is the same at both points (see Figure \ref{ProofImg}).
As $f_i(U_i)$ is an open
subset of $\Gamma_i$ containing $f_i(0)$, ${p}_i=f_i^{-1}(u_i)\in
\partial U_i$. Notice that $\theta_{\alpha r s_i}
(u_i,{\Gamma_i})=\theta_{\alpha r s_i} (f_i(0),{\Gamma_i})\geq 4
\alpha^{4}$, so by \eqref{DensBnds} $2 \alpha^4 \geq \alpha^4 \theta_{\alpha^2 r
s_i}(p_i,{{\Sigma}_i'})\geq \theta_{\alpha r s_i}(u_i,{\Gamma_i})\geq 4 \alpha^{4}$, a contradiction.
\begin{figure}
\caption{Finding $u_i$ }
\label{ProofImg}
\end{figure}
\end{proof}
\end{document}
|
\begin{document}
\title
{\bf
Parametrizing the moduli of non-special curves of genus five and its application to finding curves with many rational points
}
\author
{Momonari Kudo\thanks{Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo}
\ and Shushi Harashita\thanks{Graduate School of Environment and Information Sciences, Yokohama National University.}}
\maketitle
\markboth{Parametrizing the moduli of non-special curves of genus five and its application to finding curves with many rational points}{Parametrizing the moduli of non-special curves of genus five and its application}
\begin{abstract}
In algebraic geometry, it is important to give good parametrizations of spaces of curves, theoretically and also practically.
In particular, the case of non-hyperelliptic curves is the central issue.
In this paper, we give a very effective parametrization of the moduli of curves of genus $5$
which are neither hyperelliptic nor trigonal.
After that, we construct an algorithm for a complete enumeration of non-special curves of genus $5$ having more rational points than a prescribed number, where ``non-special'' here means non-hyperelliptic and non-trigonal with mild singularities of the associated sextic model which we propose.
As an application, we implement the algorithm on the computer algebra system Magma and execute it for curves over the prime field of characteristic $3$.
\end{abstract}
\footnote[0]{\keywords Algebraic curves, Rational points, Curves of genus five, Canonical curves\\
MSC: 14G05, 14G15,
14H10, 14H45, 14H50, 14Q05, 68W30}
\section{Introduction}\label{sec:Intro}
Let $K$ be a field and let $\mathbb{P}^n$ denote the projective $n$-space
over $K$.
Parametrizing the space of curves over $K$ of a given genus is an important task in algebraic geometry, number theory and arithmetic geometry.
In particular, it is meaningful to construct families of curves, with explicit defining equations, that contain all curves of genus $g$.
For the hyperelliptic case, it is well-known that any hyperelliptic curve over $K$ of genus $g$ is the normalization of
$y^2 = f(x)$ for a square-free polynomial $f(x) \in K[x]$ of degree $2 g+1$ or $2 g+2$ if the characteristic of $K$ is odd.
In this case, the defining equation $y^2=f(x)$ can be taken to have at most $2g-1$ (resp.\ $2g$) unknown coefficients if $K$ is algebraically closed (resp.\ finite).
However, for a family of non-hyperelliptic curves, it is in general not so easy to find defining equations with as few parameters (unknown coefficients) as possible by choosing a suitable model of the family.
Note that for $g\ge 2$, a curve is non-hyperelliptic if and only if the canonical sheaf is very ample
(cf.\ \cite[Chap.\,IV, Prop.\,5.2]{Har}).
For a non-hyperelliptic curve, its image of the embedding by the canonical linear system is called a {\it canonical curve}.
For genus $3$, we find a nice parametrization
in the paper by Lercier, Ritzenthaler, Rovetta and Sijsling \cite{LRRS}.
For genus $4$, a canonically embedded curve is the complete intersection of a quadric
and a cubic in ${\mathbb P}^3$.
In \cite{KH16} and \cite{KH17}, we discussed reductions of the space of
pairs of quadratic forms and cubic forms by
the projective general linear group $\operatorname{PGL}_4(K)$ of degree $4$.
The next target is the case of $g=5$.
In this case, it is known that there are two types of non-hyperelliptic curves: trigonal or not.
For the trigonal case, we already studied a quintic model in $\mathbb{P}^2$ of a trigonal curve of genus $5$ in order to enumerate superspecial ones over small finite fields~\cite{trigonal}.
Under some assumptions, we presented reductions of quintic forms defining trigonal curves of genus $5$, see \cite[Section 3]{trigonal} for details.
The remaining case is that the curve is non-hyperelliptic and non-trigonal, and it is known that the curve is realized as the complete intersection of three quadratic hypersurfaces in $\mathbb{P}^4$ (cf.\ \cite[Chap.~IV, Example 5.5.3 and Exercises 5.5]{Har}).
This case is considerably more difficult than the hyperelliptic and trigonal cases,
since many parameters are needed to specify three quadratic forms in $5$ variables.
Although it is natural to reduce the parameters by using the natural action by $\operatorname{PGL}_5(K)$,
it would be hopeless to give an efficient reduction in this way,
since the group $\operatorname{PGL}_5(K)$ is huge and complicated.
In this paper, we give a new effective parametrization of the space of canonical and non-trigonal curves of genus $5$.
Specifically, we prove that any canonical and non-trigonal curve $C$ of genus $5$ is birational to a sextic $C'=V(F)$ in $\mathbb{P}^2$.
Note that, in most cases, $C'$ has five double points, or one triple point and two double points (in such a case we say that $C$ is {\it non-special}).
We also show that the dimension of the space of non-special curves with fixed singularities is at most $12$, which is exactly the dimension of the moduli space of curves of genus $5$.
More precisely, the coefficients of $F$ are linear expressions in $12$ parameters, and these expressions can all be computed.
As an application of this parametrization, we present an algorithm to enumerate non-special curves $C$ of genus $5$ over a finite field $K$ with a prescribed number of rational points.
Executing the algorithm for $K = \mathbb{F}_3$ in Magma, we obtain the following theorem.
Note that we determined all the possible positions of the singular points before the execution.
\begin{theo}\label{thm:main}
The maximal number of $\mathbb{F}_9$-rational points on a non-special curve $C$ of genus $5$ defined over $\mathbb{F}_3$ (and not over $\mathbb{F}_9$) is $32$. Moreover, there are exactly four $\mathbb{F}_{9}$-isogeny classes of Jacobian varieties of non-special curves $C$ of genus $5$ over $\mathbb{F}_3$ with 32 $\mathbb{F}_{9}$-rational points, whose Weil polynomials are
\begin{enumerate}
\item[\rm (1)] $(t^2 + 2 t + 9)(t^2 + 5 t + 9)^4$,
\item[\rm (2)] $(t + 3)^2(t^4 + 8 t^3 + 32 t^2 + 72 t + 81)^2$,
\item[\rm (3)] $(t + 3)^4(t^2 + 2 t + 9)(t^2 + 4 t + 9)^2$,
\item[\rm (4)] $(t + 3)^6(t^2 + 2 t + 9)^2$.
\end{enumerate}
In Section \ref{sec:new5}, examples of non-special curves $C$ over $\mathbb{F}_3$ with $\# C (\mathbb{F}_{9}) = 32$ will be given.
\end{theo}
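As a quick consistency check on the theorem, recall that for a curve of genus $5$ over $\mathbb{F}_9$ with Weil polynomial $\prod_{i=1}^{10}(t-\alpha_i)$ one has $\#C(\mathbb{F}_9) = 9+1-\sum_i\alpha_i$, i.e., $10$ plus the coefficient of $t^9$. The following Python snippet (an illustration, independent of our Magma computation) confirms that each of the four Weil polynomials above yields $32$ points:

```python
def polymul(a, b):
    """Multiply coefficient lists (constant term first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def expand(factors):
    """Expand a product of (coefficient list, exponent) pairs."""
    p = [1]
    for f, e in factors:
        for _ in range(e):
            p = polymul(p, f)
    return p

# the four Weil polynomials of the theorem, written constant-term first
weil = [
    [([9, 2, 1], 1), ([9, 5, 1], 4)],               # (t^2+2t+9)(t^2+5t+9)^4
    [([3, 1], 2), ([81, 72, 32, 8, 1], 2)],         # (t+3)^2 (t^4+8t^3+32t^2+72t+81)^2
    [([3, 1], 4), ([9, 2, 1], 1), ([9, 4, 1], 2)],  # (t+3)^4 (t^2+2t+9)(t^2+4t+9)^2
    [([3, 1], 6), ([9, 2, 1], 2)],                  # (t+3)^6 (t^2+2t+9)^2
]
for w in weil:
    P = expand(w)
    # monic of degree 2g = 10 with constant term q^g = 9^5
    assert len(P) == 11 and P[10] == 1 and P[0] == 9**5
    print(10 + P[9])  # #C(F_9) = 10 + (coefficient of t^9); prints 32 each time
```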
According to the website manypoints.org \cite{ManyPoints}, the maximal value of $\#C(\mathbb{F}_9)$
for curves $C$ of genus $5$ over $\mathbb{F}_9$ is unknown,
but is known to lie between $32$ and $35$
(the upper bound is due to Lauter \cite{Lauter}).
On the website, three examples of $C$ with 32 $\mathbb{F}_9$-rational points are listed. The above theorem gives at least one new example.
More concretely, the Weil polynomial of Fischer's example
$(x^4 + 1)y^4 + 2x^3y^3 + y^2 + 2xy + x^4 + x^2 = 0$
is $(t^2 + 2 t + 9)(t^2 + 5 t + 9)^4$.
In fact, this curve appears in our computation,
since dividing the equation by $x^4y^4$ yields a sextic form in $1/x$ and $1/y$ having five distinct singular points.
The example of Ramos-Ramos \cite{Ramos-Ramos} (submitted by Ritzenthaler to the site)
$y^8 = a^2 x^2(x^2+a^7)$ with $a^2+a+2=0$
has Weil polynomial $(t + 3)^6(t^2 + 2 t + 9)^2$.
This curve is defined over $\mathbb{F}_{9}$, but the above theorem found a curve over the prime field $\mathbb{F}_3$ with the same Weil polynomial.
Consequently, this theorem shows that if one wants to find curves of genus $5$ over $\mathbb{F}_9$ with $\#C(\mathbb{F}_{9})>32$, one needs to search among curves not defined over $\mathbb{F}_3$ or curves whose sextic models have more complicated singularities.
Our explicit parametrization and the algorithm presented in this paper may lead to fruitful applications in both theory and computation, such as classifying
canonical and non-trigonal curves of genus $5$ by invariants (Hasse-Witt rank, Ekedahl-Oort type, and so on).
Some open problems will be summarized in Section \ref{sec:new6}.
One of our future works is to enumerate superspecial non-special curves of genus $5$ over finite fields.
\subsection*{Acknowledgments}
The contents of this paper were presented at Effective Methods in Algebraic
Geometry (MEGA 2021).
The authors thank the anonymous referees assigned by the organization of this conference for their comments and suggestions,
all of which were taken into account to improve the presentation of the paper.
The authors are also very grateful to Christophe Ritzenthaler for helpful comments.
This work was supported by JSPS Grant-in-Aid for Scientific
Research (C) 17K05196 and 21K03159, and JSPS Grant-in-Aid for Young Scientists 20K14301.
\section{Non-hyperelliptic and non-trigonal curves of genus $5$}\label{sec:2}
In this section, we study curves of genus $5$
which are neither hyperelliptic nor trigonal.
In order to parametrize these curves,
we propose using their sextic models in $\mathbb P^2$. This method is much more effective than using the realization as the complete intersection
of three quadrics in ${\mathbb P}^4$.
Here we give an explicit construction of the sextic model for our purpose.
Although a more conceptual construction of such a sextic model for a general curve can be found in \cite[Chap.~VI, Exercises, F-24 on p.~275]{ACGH},
it requires some assumptions and does not cover all of this section.
\subsection{Sextic models}\label{subsec:sextic}
The canonical model of a non-hyperelliptic and non-trigonal curve
of genus $5$ is the intersection of three quadrics in ${\mathbb P}^4$.
Let $C$ be such a curve, say
$V(\varphi_1, \varphi_2, \varphi_3)$ in ${\mathbb P}^4$
for three quadratic forms $\varphi_1$, $\varphi_2$ and $\varphi_3$ in $x_0,x_1,x_2,x_3,x_4$.
A sextic model associated to $C$ is obtained by
additional data: two points $P$ and $Q$ on $C$.
A short explanation of this construction can be found in \cite[Chap.~IV, Example 5.5.3]{Har}, but it appears outside the context of studying the space of curves.
Here we give an explicit construction toward parameterizing the space of these curves.
By a linear transformation,
we may assume that
\[
P=(1:0:0:0:0) \quad \text{and} \quad Q=(0:0:0:0:1).
\]
Since $\varphi_i$ vanishes at $P$ and $Q$, the quadratic forms $\varphi_i$ ($i=1,2,3$) must be of the form
\begin{equation}\label{ThreeQuadForms}
\varphi_i =a_i \cdot x_0x_4 + f_i\cdot x_0 + g_i\cdot x_4 + h_i,
\end{equation}
where $a_i \in K$ and $f_i$ and $g_i$ are linear forms in $x_1,x_2,x_3$
and $h_i$ is a quadratic form in $x_1,x_2,x_3$.
Put
\[
(v_1, v_2, v_3) := -(h_1,h_2,h_3)\cdot \Delta_A
\]
where $\Delta_A$ is the adjugate matrix of
\[
A := \begin{pmatrix}
a_1 & a_2 & a_3\\
f_1 & f_2 & f_3\\
g_1 & g_2 & g_3
\end{pmatrix}.
\]
Since $(v_1,v_2,v_3)=\det(A) (x_0x_4,x_0,x_4)$ on $C$,
the sextic
\[
C':\quad \det(A)\cdot v_1-v_2v_3=0
\]
in ${\mathbb P}^2=\operatorname{Proj} K[x_1,x_2,x_3]$ is birational to $C$.
Here we need to check that $\det A$ is not identically zero. Indeed,
if $\det A$ were identically zero, then from \eqref{ThreeQuadForms} we would obtain
$(3-\operatorname{rank} A)$ independent combinations of the $\varphi_i$ that are quadratic forms only in $x_1,x_2,x_3$.
If $\operatorname{rank} A<2$, this contradicts the irreducibility of $C$.
If $\operatorname{rank} A = 2$, let $\psi$ be the resulting quadratic form in $x_1,x_2,x_3$;
then there is a dominant morphism $C \to V(\psi) \subset {\mathbb P}^2$,
which turns out to be of degree $2$. This contradicts the assumption
that $C$ is not hyperelliptic.
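The identity behind this construction, namely $v_i = \det(A)\,(x_0x_4,x_0,x_4)_i - \sum_k \varphi_k (\Delta_A)_{ki}$, which gives $(v_1,v_2,v_3)=\det(A)(x_0x_4,x_0,x_4)$ on $C$, can be spot-checked numerically. The following Python snippet (an illustration with random integer data, independent of our Magma code) verifies it:

```python
import random

random.seed(0)
rnd = lambda: random.randint(-4, 4)

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def adj3(M):
    # adjugate = transpose of the cofactor matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return [[e*i - f*h, c*h - b*i, b*f - c*e],
            [f*g - d*i, a*i - c*g, c*d - a*f],
            [d*h - e*g, b*g - a*h, a*e - b*d]]

# evaluate everything at one random point (x0,...,x4); the relation below is a
# polynomial identity, so checking it at random points is a meaningful spot-check
x0, x1, x2, x3, x4 = (rnd() for _ in range(5))
lin = lambda: rnd()*x1 + rnd()*x2 + rnd()*x3
quad = lambda: (rnd()*x1*x1 + rnd()*x2*x2 + rnd()*x3*x3
                + rnd()*x1*x2 + rnd()*x1*x3 + rnd()*x2*x3)

a = [rnd() for _ in range(3)]; f = [lin() for _ in range(3)]
g = [lin() for _ in range(3)]; h = [quad() for _ in range(3)]
phi = [a[i]*x0*x4 + f[i]*x0 + g[i]*x4 + h[i] for i in range(3)]

A = [a, f, g]
adjA, D = adj3(A), det3(A)
v = [-sum(h[k]*adjA[k][i] for k in range(3)) for i in range(3)]
# since (x0*x4, x0, x4).A = (phi_1,phi_2,phi_3) - (h_1,h_2,h_3), multiplying by
# adj(A) gives v_i = D*(x0*x4,x0,x4)_i - (phi . adj A)_i, whence the claim on C
w = [x0*x4, x0, x4]
assert all(v[i] == D*w[i] - sum(phi[k]*adjA[k][i] for k in range(3))
           for i in range(3))
print("v = det(A)*(x0*x4, x0, x4) modulo (phi_1, phi_2, phi_3): checked")
```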
This construction of the sextic $C'$ from $C$ with $P$ and $Q$
is canonical in the following sense:
\begin{prop}\label{ConstructionIsCanonical}
\begin{enumerate}
\item[{\rm (1)}]
Suppose $\langle\varphi_1,\varphi_2,\varphi_3 \rangle = \langle\phi_1,\phi_2,\phi_3\rangle$ as linear spaces over a field over which all $\varphi_i$ and $\phi_i$ ($i=1,2,3$) are defined.
Then $C'$ obtained from $\varphi_1,\varphi_2,\varphi_3$
is the same as that obtained from $\phi_1,\phi_2,\phi_3$.
\item[{\rm (2)}] The coordinate change
\[(x_0,x_1,x_2,x_3,x_4) \mapsto (x_4, x_1,x_2,x_3,x_0) \]
does not change the sextic.
\item[{\rm (3)}] The coordinate change
\[(x_0,x_1,x_2,x_3,x_4) \mapsto (x_0 + \phi, x_1,x_2,x_3,x_4+\psi) \]
with any linear $\phi,\psi$ in $x_1,x_2,x_3$ does not change the sextic.
\end{enumerate}
\end{prop}
\begin{proof}
(1) Let $B$ be the square matrix of size $3$ such that
$(\varphi_1,\varphi_2,\varphi_3)B=(\phi_1,\phi_2,\phi_3)$.
Let $v_i'$ be $v_i$ associated to $\phi_1,\phi_2,\phi_3$. Then
\[
(v'_1, v'_2, v'_3) = -(h_1,h_2,h_3)B\cdot \det(AB) (AB)^{-1} = \det(B) (v_1,v_2,v_3).
\]
Hence we have
$\det(AB) v'_1 - v'_2v'_3 = \det(B)^2 (\det(A)v_1-v_2v_3)$.
(2) The matrix $A$ for the new coordinate is
\[
A' := \begin{pmatrix}
a_1 & a_2 & a_3\\
g_1 & g_2 & g_3\\
f_1 & f_2 & f_3
\end{pmatrix}.
\]
Let $v_i'$ be $v_i$ associated to the new coordinate.
It is straightforward to see $(v'_1,v'_2,v'_3) = (-v_1,v_3,v_2)$.
Thus the sextic does not change.
(3) Let $A'$, $h'_i$ and $v'_i$ be the matrix $A$, the quadratic form $h_i$ and $v_i$ for the new coordinate. Then we have
\[
A'=\begin{pmatrix}
1 & 0 & 0\\
\psi & 1 & 0\\
\phi & 0 & 1
\end{pmatrix}
\begin{pmatrix}
a_1 & a_2 & a_3\\
f_1 & f_2 & f_3\\
g_1 & g_2 & g_3
\end{pmatrix}
\]
and $h'_i = h_i+a_i\phi\psi + f_i\phi + g_i \psi$.
The equation $\det(A')v'_1-v'_2v'_3 = \det(A)v_1-v_2v_3$
follows by a straightforward computation, which is tedious but exhibits some pleasant cancellations.
\end{proof}
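The tedious computation in part (3) can be spot-checked numerically as well. The following Python snippet (illustrative only, with random integer data) evaluates both sides of $\det(A')v'_1-v'_2v'_3 = \det(A)v_1-v_2v_3$ at a random point and confirms that they agree:

```python
import random

random.seed(2)
rnd = lambda: random.randint(-3, 3)

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def adj3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return [[e*i - f*h, c*h - b*i, b*f - c*e],
            [f*g - d*i, a*i - c*g, c*d - a*f],
            [d*h - e*g, b*g - a*h, a*e - b*d]]

def sextic(A, h):
    """det(A)*v1 - v2*v3 for v = -(h1,h2,h3).adj(A)."""
    adjA = adj3(A)
    v = [-sum(h[k]*adjA[k][i] for k in range(3)) for i in range(3)]
    return det3(A)*v[0] - v[1]*v[2]

# evaluate at one random point (x1,x2,x3); equality of the two sextics is a
# polynomial identity, so a check at random points is a meaningful spot-check
x1, x2, x3 = rnd(), rnd(), rnd()
lin = lambda: rnd()*x1 + rnd()*x2 + rnd()*x3
quad = lambda: (rnd()*x1*x1 + rnd()*x2*x2 + rnd()*x3*x3
                + rnd()*x1*x2 + rnd()*x1*x3 + rnd()*x2*x3)

a = [rnd() for _ in range(3)]; f = [lin() for _ in range(3)]
g = [lin() for _ in range(3)]; h = [quad() for _ in range(3)]
phi_, psi_ = lin(), lin()   # the linear forms phi, psi of part (3)

A = [a, f, g]
# data after the coordinate change x0 -> x0 + phi, x4 -> x4 + psi
A2 = [a,
      [f[i] + a[i]*psi_ for i in range(3)],
      [g[i] + a[i]*phi_ for i in range(3)]]
h2 = [h[i] + a[i]*phi_*psi_ + f[i]*phi_ + g[i]*psi_ for i in range(3)]

assert sextic(A, h) == sextic(A2, h2)
print("det(A')v'_1 - v'_2 v'_3 = det(A)v_1 - v_2 v_3: checked")
```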
From Proposition \ref{ConstructionIsCanonical} (1) and (2), we have
\begin{cor}
Let $K$ be a field.
If $C$ and the divisor $P+Q$ on $C$, for two distinct points $P$ and $Q$, are defined over $K$, then
the associated sextic $C'$ is defined over $K$.
\end{cor}
This corollary suggests using the sextic realization
to find and enumerate curves over $\mathbb{F}_q$ of genus $5$
with many $\mathbb{F}_{q^2}$-rational points,
since the assumption that $P+Q$ is defined over $\mathbb{F}_q$
is satisfied as soon as $C$ has at least a few $\mathbb{F}_{q^2}$-rational points.
The case we mainly treat in this paper is the case of $q=3$.
Indeed, the maximal value of $\#C(\mathbb{F}_9)$
for curves of genus $5$
over $\mathbb{F}_9$ is unknown, as mentioned in Section \ref{sec:Intro}.
We give an implementation of our algorithm,
restricting ourselves to the case where $C$ is defined over $\mathbb{F}_3$.
In general, the curves defined by these sextic forms may have nasty singularities, but in most cases
$C'$ has five singular points of multiplicity $2$.
The number of monomials of degree $6$ in three variables is $28$.
Each singular point imposes three linear equations
ensuring that the point is singular.
Taking into account scalar multiplication of the whole sextic,
the number of free parameters is $28 - 5\times 3 -1 = 12$; see Propositions \ref{dim=12} and \ref{Reducibility} below for the linear independence of the $5\times 3$ linear equations.
This is exactly the dimension of the moduli space of curves of genus $5$.
Note that the dimension of the space of pairs of points on $C$
and that of five points in ${\mathbb P}^2$ modulo the action of $\operatorname{PGL}_3$
are both equal to two.
This says that the parametrization by sextic models
is very efficient.
\begin{prop}\label{dim=12}
Let $\{P_1,\ldots,P_5\}$ be five distinct points of ${\mathbb P}^2(K)$.
Assume that no four of the points $P_1,\ldots,P_5$ lie on a line.
Then the space of sextics with double points at $P_1,\ldots, P_5$
up to scalar multiplications
is of dimension $12$.
\end{prop}
\begin{proof}
By a linear transformation by an element of $\operatorname{PGL}_3(K)$,
we may assume that $P_1=(1:0:0)$, $P_2=(0:1:0)$ and $P_3=(0:0:1)$.
Let $K[x,y,z]$ be the coordinate ring of ${\mathbb P}^2$.
Considering the permutation of $\{x,y,z\}$
by the symmetric group of degree $3$,
we may assume that $P_4 = (b:c:1)$ and $P_5=(d:e:1)$,
since $P_4$ and $P_5$ are distinct from the coordinate points $P_1$, $P_2$, $P_3$.
Then, any sextic having singularities at $P_1$, $P_2$ and $P_3$, dehomogenized by setting $z=1$, is
of the form
\begin{eqnarray*}
F&=&a_1 x^4 y^2 + a_2 x^4 y+ a_3 x^4 + a_4 x^3 y^3 + a_5 x^3 y^2
+ a_6 x^3 y + a_7 x^3 + a_8 x^2 y^4 + a_9 x^2 y^3+ a_{10} x^2 y^2\\
&&+ a_{11} x^2 y + a_{12} x^2 + a_{13} x y^4 + a_{14} x y^3
+ a_{15} x y^2 + a_{16} x y+ a_{17} y^4 + a_{18}y^3 + a_{19}y^2.
\end{eqnarray*}
The condition that this sextic is singular at $P_4$ (resp.\ $P_5$)
is described by three linear equations in $a_1,\ldots, a_{19}$,
obtained from the condition that
$F$, $\frac{\partial F}{\partial x}$ and $\frac{\partial F}{\partial y}$
vanish at $P_4$ (resp.\ $P_5$).
The coefficients of these six linear equations form a $6\times 19$ matrix $M$.
It suffices to show that $M$ is of rank $6$.
By a direct computation, the determinant of the $6\times 6$ minor
corresponding to the coefficients of $a_7,a_{11},a_{12},a_{15},a_{18},a_{19}$
is $(be-cd)^5(be+cd)$;
that of $a_{6}, a_{7}, a_{9}, a_{10}, a_{11}, a_{12}$
is $b^6d^6(c-e)^5$; and
that of $a_{5}, a_{10}, a_{14}, a_{15}, a_{18}, a_{19}$
is $c^6e^6(b-d)^5$. If the rank of $M$ were less than $6$,
then $(be-cd)^5(be+cd)=0$, $b^6d^6(c-e)^5=0$ and $c^6e^6(b-d)^5=0$ would all have to hold.
But this contradicts the assumption that no four of the points lie on a line.
Indeed, if $b=0$ then $c=0$ or $d=0$ by the first equation;
in the former case $P_3=P_4$, and in the latter case $\{P_2,P_3,P_4,P_5\}$ is contained in the line $x=0$. Thus $b\ne 0$, and similarly $c$, $d$ and $e$ are nonzero.
Then the second and third equations give $c=e$ and $b=d$, which says $P_4=P_5$.
This is absurd. Hence $M$ must be of rank $6$.
\end{proof}
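Proposition \ref{dim=12} can also be checked by direct linear algebra over a finite field. The following Python sketch (an illustration, independent of our Magma implementation) imposes the singularity conditions at the five $\mathbb{F}_3$-rational points $(1:0:0)$, $(0:1:0)$, $(0:0:1)$, $(1:1:0)$, $(0:1:1)$ (no four of which lie on a line) and verifies that the null-space of the resulting system has dimension $13$, i.e., dimension $12$ up to scalar:

```python
# Null-space dimension of the singularity conditions over F_3;
# expect 13 = 12 (Proposition) + 1 (scalar).
p = 3
monos = [(i, j, 6 - i - j) for i in range(7) for j in range(7 - i)]
assert len(monos) == 28   # monomials of degree 6 in x, y, z

def ev(exps, pt):
    """Value of the monomial with exponents exps at pt, mod p."""
    r = 1
    for e, c in zip(exps, pt):
        r = r * pow(c, e, p) % p
    return r

def rows_for_point(pt):
    """Linear conditions on the 28 coefficients forcing multiplicity >= 2 at pt:
    F(pt) = 0 and all three first partial derivatives vanish at pt."""
    rows = [[ev(m, pt) for m in monos]]
    for axis in range(3):
        row = []
        for m in monos:
            e = list(m)
            coef = e[axis] % p
            if coef == 0:
                row.append(0)
            else:
                e[axis] -= 1
                row.append(coef * ev(tuple(e), pt) % p)
        rows.append(row)
    return rows

def rank_mod_p(M, p):
    """Rank of an integer matrix over F_p by Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][col] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col] % p:
                fac = M[r][col]
                M[r] = [(x - fac * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1)]
system = [row for pt in pts for row in rows_for_point(pt)]
nullity = 28 - rank_mod_p(system, p)
print(nullity)  # prints 13
```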
\subsection{Non-special curves of genus 5}\label{subsec:generic}
As mentioned in the last paragraph of the previous subsection,
sextics $C'$ may have various kinds of singularities.
It is natural to start with the case where the singularities are mildest.
This paper studies the case that
\begin{equation}\label{GenericGenusFormula}
g(C) = g(C') - \sum_{P'\in C'} \frac{m_{P'}(m_{P'}-1)}{2}
\end{equation}
holds with $g(C')=(6-1)(6-2)/2=10$, where
$g(D)$ is the arithmetic genus of a curve $D$
and $m_{P'}$ is the multiplicity of $C'$ at $P'$.
Note that in general the left hand side of \eqref{GenericGenusFormula}
is less than or equal to the right hand side.
See Remark \ref{rem:one-time-blow-up} below for a geometric meaning of \eqref{GenericGenusFormula}.
In order for \eqref{GenericGenusFormula} to hold in our case (i.e., $g(C)=5$), the multiset of multiplicities $m_{P'}$ with $m_{P'} \ge 2$
must be either $\{2,2,2,2,2\}$ or $\{3,2,2\}$.
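This can be recovered by a short search: an ordinary $m$-fold point drops the genus by $\delta(m)=m(m-1)/2$, and $\delta(4)=6>5$, so only multiplicities $2$ and $3$ can occur. A Python sketch (illustrative only):

```python
# delta(m) = m(m-1)/2 is the genus drop of an ordinary m-fold point;
# delta(4) = 6 > 5, so only multiplicities 2 and 3 can occur here.
delta = lambda m: m * (m - 1) // 2

def multiplicity_multisets(total):
    """Non-increasing lists of multiplicities m_i >= 2 with sum delta(m_i) = total."""
    out = []
    def rec(remaining, max_m, acc):
        if remaining == 0:
            out.append(tuple(acc))
            return
        for m in range(max_m, 1, -1):
            if delta(m) <= remaining:
                rec(remaining - delta(m), m, acc + [m])
    rec(total, 3, [])
    return out

# arithmetic genus of a plane sextic is 10; we need geometric genus 5
sols = multiplicity_multisets(10 - 5)
print(sorted(sols))  # prints [(2, 2, 2, 2, 2), (3, 2, 2)]
```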
\begin{defin}\label{def:generic}
We say that $C$ is {\it non-special} if a sextic model $C'$ associated to $(C,P+Q)$
satisfies \eqref{GenericGenusFormula}.
More concretely, there are two cases for non-special curves:
\begin{enumerate}
\item[(I)] $C'$ has five singular points $P_1,\ldots, P_5$ with multiplicity two.
\item[(II)] $C'$ has one triple point $P_1$ and two double points $P_2$ and $P_3$.
\end{enumerate}
\end{defin}
If $(C,P+Q)$ is defined over $K$, then
in case (I) the divisor $P_1 + \cdots + P_5$ is defined over $K$,
and in case (II) the triple point $P_1$ is defined over $K$
and the divisor $P_2 + P_3$ is defined over $K$.
For example, in case (I)
the absolute Galois group $G_K$ of $K$ permutes $\{P_1,\ldots, P_5\}$.
It is straightforward to see that
the pattern of the $G_K$-orbits in $\{P_1,\ldots, P_5\}$
is one of $(1,1,1,1,1)$, $(1,1,1,2)$, $(1,2,2)$, $(1,1,3)$, $(2,3)$, $(1,4)$ and $(5)$, where for example $(1,2,2)$ means that
$\{P_1,\ldots, P_5\}$ consists of three $G_K$-orbits
of cardinality $1$, $2$ and $2$ respectively.
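These orbit patterns are exactly the partitions of $5$. The following Python sketch (illustrative only) enumerates them and confirms that there are seven:

```python
def partitions(n, max_part=None):
    """All non-increasing lists of positive integers summing to n."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [[]]
    out = []
    for k in range(min(n, max_part), 0, -1):
        out += [[k] + rest for rest in partitions(n - k, k)]
    return out

pats = sorted(tuple(sorted(p)) for p in partitions(5))
print(pats)
assert len(pats) == 7   # matches the seven orbit patterns listed above
```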
In Section \ref{sec:4}, in the case of $K=\mathbb{F}_3$
we shall give an explicit classification of positions of
$\{P_1,\ldots, P_5\}$ in $\mathbb P^2$ up to $\operatorname{PGL}_3(\mathbb{F}_3)$.
Here is another constraint for the position of the singular points.
\begin{prop}\label{Reducibility}
In case {\rm (I)}, if four distinct elements of $\{P_1,P_2,P_3,P_4,P_5\}$ lie on a line, then $C'$ is geometrically reducible (i.e., $C'_{\overline K}:=C' \times_{\mathrm{Spec}(K)} \mathrm{Spec}(\overline{K})$ is reducible, where $\overline K$ is the algebraic closure of $K$).
\end{prop}
\begin{proof}
Suppose that $P_1,\ldots, P_4$ are contained in a line.
By a linear transformation over $\overline K$, we may assume
\[
P_1 = (0:0:1),\quad P_2=(1:0:1),\quad P_3=(c:0:1),\quad P_4=(d:0:1),
\]
where $0,1,c$ and $d$ are mutually distinct.
Let $f$ be the sextic (in $x$ and $y$) obtained from the defining equation of $C'$ by substituting $1$ for $z$.
Since $P_1$ is a singular point, the lowest degree of the nonzero terms of $f$ is two.
Let $a_i$ be the coefficient of $x^iy^0$ in $f$ for $i=2,3,4,5,6$.
Then the constant term and the coefficient of $x-b$ of
the Taylor expansion of $f$ at $(x,y)=(b,0)$
are $\sum_{i=2}^6 b^i a_i$ and $\sum_{i=2}^6 ib^{i-1} a_i$
for $b \in \{1,c,d\}$.
These vanish, as $C'$ is singular at $P_2$, $P_3$ and $P_4$.
Since
\[
\det\begin{pmatrix}
1 & 1 & 1 & 1 & 1\\
c^2 & c^3 & c^4 & c^5 & c^6\\
2c & 3c^2 & 4c^3 & 5 c^4 & 6 c^5\\
d^2 & d^3 & d^4 & d^5 & d^6\\
2d & 3d^2 & 4d^3 & 5 d^4 & 6d^5
\end{pmatrix} = c^4d^4(c-1)^2(d-1)^2(c-d)^4,
\]
we conclude that $a_i=0$ for all $i=2, \ldots, 6$.
This says that $f$ is divisible by $y$;
in particular, $C'$ is geometrically reducible.
\end{proof}
We have a similar result also in case (II):
\begin{prop}
In case {\rm (II)},
if $P_1$, $P_2$ and $P_3$ are contained in a line, then $C'$ is geometrically reducible.
\end{prop}
\begin{proof}
Suppose that $P_1$, $P_2$ and $P_3$ are contained in a line.
By a linear transformation over $\overline K$, we may assume
\[
P_1 = (0:0:1),\quad P_2=(1:0:1),\quad P_3=(c:0:1)
\]
where $0$, $1$ and $c$ are mutually distinct.
Let $f$ be the sextic (in $x$ and $y$) obtained from the defining equation of $C'$ by substituting $1$ for $z$.
Since $P_1$ is a triple point, the lowest degree of the nonzero terms of $f$ is three.
Let $a_i$ be the coefficient of $x^iy^0$ in $f$ for $i=3,4,5,6$.
Then the constant term and the coefficient of $x-b$ of
the Taylor expansion of $f$ at $(x,y)=(b,0)$
are $\sum_{i=3}^6 b^i a_i$ and $\sum_{i=3}^6 ib^{i-1} a_i$
for $b \in \{1,c\}$.
These vanish, as $C'$ is singular at $P_2$ and $P_3$.
Since
\[
\det\begin{pmatrix}
1 & 1 & 1 & 1 \\
3 & 4 & 5 & 6 \\
c^2 & c^3 & c^4 & c^5\\
2c & 3c^2 & 4c^3 & 5 c^4
\end{pmatrix} = c^4(c-1)^4,
\]
we conclude that $a_i=0$ for all $i=3, \ldots, 6$.
This says that $f$ is divisible by $y$; in particular, $C'$ is geometrically reducible.
\end{proof}
The next remark explains what \eqref{GenericGenusFormula} means.
\begin{rem}\label{rem:one-time-blow-up}
We claim that if the desingularization $C \to C'$ satisfies \eqref{GenericGenusFormula},
then, at each singular point $P\in C'$,
the desingularization of $C'$ at $P$ is obtained by the one-time blow-up centered at $P$. Indeed,
by a linear coordinate change, an affine neighborhood of $P$ in $C'$ is described by the coordinate ring
$\mathbb{F}_q[X,Y]/(G)$ with $G\in \mathbb{F}_q[X,Y]$, where $P$ is the origin. Write
$G = G_m + G_{m+1} + \cdots$, where $m=m_P$ is the multiplicity at $P$ and $G_i$ is the part of degree $i$.
By a linear coordinate change again, we may assume that
the $Y^m$-coefficient of $G_m$ is not zero
if we assume $q-1 \ge m$ (which is satisfied in our case: $q=9$ and $m\le 3$).
Then, the morphism from the strict transform of the blow-up to $C'$ is locally described by the ring homomorphism
\[ \mathbb{F}_q[X,Y]/(G) \to \mathbb{F}_q[X,Z]/(\tilde G) \]
with ${\tilde G}(X,Z) = G(X,ZX)X^{-m}$, sending $X$ to $X$ and $Y$ to $XZ$.
Its cokernel is the $\frac{m(m-1)}{2}$-dimensional $\mathbb{F}_q$-vector space
generated by $Z^iX^j$ for $1\le i \le m-1$ and $0\le j \le i-1$.
Thus, we have the claim.
\end{rem}
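As a toy illustration of the remark (with hypothetical data, not taken from our computations), the following Python snippet computes the strict transform $\tilde G(X,Z)=G(X,ZX)X^{-m}$ for the nodal cubic $G=Y^2-X^2-X^3$ and the roots of $G_m(1,Z)$, which index the points lying over the node:

```python
# A toy instance (hypothetical data): the nodal cubic G = Y^2 - X^2 - X^3,
# with multiplicity m = 2 at the origin P.
G = {(0, 2): 1, (2, 0): -1, (3, 0): -1}   # {(deg_X, deg_Y): coefficient}
m = 2

# strict transform: substitute Y = Z*X, then divide by X^m
Gt = {}
for (i, j), c in G.items():
    key = (i + j - m, j)
    Gt[key] = Gt.get(key, 0) + c
assert all(i >= 0 for (i, j) in Gt)       # G(X, ZX) is divisible by X^m

# points lying over P correspond to roots of G_m(1, Z), where G_m is the
# degree-m homogeneous part of G; here G_m(1, Z) = Z^2 - 1
Gm = {j: c for (i, j), c in G.items() if i + j == m}
roots = [z for z in range(-5, 6) if sum(c * z**j for j, c in Gm.items()) == 0]
print(sorted(roots))   # prints [-1, 1]: two branches through the node
```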
Finally, we give a remark about the number of rational points on a non-special curve $C$ over a finite field $\mathbb{F}_q$.
Each point of $C$ lying over a singular point $P$ via this blow-up corresponds to a root of $G_m(1,Z)=0$, where $G_m(X,ZX) = X^m G_m(1,Z)$ by homogeneity, in the notation of Remark \ref{rem:one-time-blow-up}.
Hence, we have the following formula
\begin{equation}\label{formula:NumberOfRationalPoints}
\#C(\mathbb{F}_q) = \#C'(\mathbb{F}_q) + \sum_{P\in C'(\mathbb{F}_q)} \left(\#V(h_P)(\mathbb{F}_q) - 1\right),
\end{equation}
where $h_P$ is the form $G_m$ above for $P$, i.e., the homogeneous part of least degree (namely $m_P$) of
the Taylor expansion at $P$ of the defining polynomial of an affine model of $C'$ containing $P$,
and $V(h_P)$ is the closed subscheme of ${\mathbb P}^1$ defined by the ideal $(h_P)$.
If $h_P$ is quadratic, then $\#V(h_P)(\mathbb{F}_q) - 1$
equals $1$ if the discriminant $\Delta(h_P)$ of $h_P$ is a nonzero square,
$-1$ if $\Delta(h_P)$ is a non-square,
and $0$ if $\Delta(h_P) = 0$.
From a computational point of view, it may be slightly advantageous to use the fact that if $q=p^2$ and $\Delta(h_P)$ belongs to $\mathbb{F}_p$, then
$\Delta(h_P) \ne 0$ is equivalent to $\Delta(h_P)$ being a nonzero square in $\mathbb{F}_{p^2}$.
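The discriminant rule above is the standard count of zeros of a binary quadratic form in ${\mathbb P}^1(\mathbb{F}_q)$. The following Python snippet verifies it by brute force over a prime field (we take $p=7$ for illustration, since our actual case $q=9$ would require $\mathbb{F}_9$ arithmetic), assuming the $X^2$-coefficient of $h_P$ is nonzero, which can always be arranged by a linear change of coordinates:

```python
p = 7   # an odd prime for illustration; the paper's case q = 9 needs F_9 arithmetic

def proj_zeros(a, b, c):
    """#V(aX^2 + bXY + cY^2)(F_p) in P^1(F_p), by brute force."""
    pts = [(x, 1) for x in range(p)] + [(1, 0)]
    return sum((a*x*x + b*x*y + c*y*y) % p == 0 for x, y in pts)

def by_discriminant(a, b, c):
    d = (b*b - 4*a*c) % p
    if d == 0:
        return 1                                     # a double root
    return 2 if pow(d, (p - 1)//2, p) == 1 else 0    # Euler's criterion

# assume a != 0 (the X^2-coefficient), which excludes the root (1:0)
for a in range(1, p):
    for b in range(p):
        for c in range(p):
            assert proj_zeros(a, b, c) == by_discriminant(a, b, c)
print("discriminant rule verified over F_%d" % p)
```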
\begin{rem}\label{rem:non-mild}
The case of {\it non-mild} singularities (i.e., $C'$ has singularities other than mild singularities explained above) is one of our future works, see Section \ref{sec:new6}.
As in the mild case, one could also construct an algorithm to find a reduced equation defining $C'$ even in the non-mild case, once the type of the singularities is known.
However, the number of singular points of $C'$ can be any of $1$, $2$, $3$, $4$, $5$ in general.
This means that there are various kinds of singularities in the non-mild case, and thus it would require much effort to classify their possible types.
\end{rem}
\section{Concrete Algorithm}\label{sec:3}
Let $K$ be a finite field with characteristic $p$, and $K'$ a finite extension field of $K$.
As it was shown in Subsection \ref{subsec:sextic}, any canonical non-trigonal curve of genus $5$ over $K$ is realized as the normalization of a sextic in $\mathbb{P}^2$.
Recall from Definition \ref{def:generic} in Subsection \ref{subsec:generic} that such a curve $C$ is said to be non-special if it satisfies either of the conditions (I) or (II).
In this section, we first present a concrete algorithm (Algorithm \ref{alg:I} below) to enumerate all non-special curves $C$ of genus $5$ over $K$ satisfying (I) and $\# C (K') \geq N$, where $N$ is a given positive integer.
Similarly to case (I), one can construct an algorithm for case (II), but we omit it in this paper.
After presenting the algorithm for (I), we also give some remarks for implementation.
\subsection{Algorithm for the case (I)}\label{subsec:alg}
Let $\{ P_1, P_2, P_3, P_4, P_5 \}$ be a set of five distinct points in $\mathbb{P}^2(L)$, where $L$ is a finite extension field of $K$.
Assume that $\mathrm{Gal}(L/K)$ stabilizes the set $\{ P_1, P_2, P_3, P_4, P_5 \}$.
Given $\{ P_1, P_2, P_3, P_4, P_5 \}$ and an integer $N \geq 1$, Algorithm \ref{alg:I} below enumerates all sextic forms $F$ in $K [x,y,z]$ such that $C' : F = 0$ is a singular (irreducible) curve of geometric genus $5$ in $\mathbb{P}^2$ with $\mathrm{Sing}(C')=\{ P_1, P_2, P_3, P_4, P_5 \}$ and $\#C (K') \geq N$, where $C$ is the normalization of $C'$.
\begin{al}\label{alg:I}
{\it Input}. $\{ P_1, P_2, P_3, P_4, P_5 \}$ and $N \geq 1$.
{\it Output}. A set $\mathcal{F}$ of sextic forms in $K [x,y,z]$.
\begin{enumerate}
\item Set $\mathcal{F} := \emptyset$.
\item Construct $F = \sum_{i=1}^{28} a_i x^{\alpha_1^{(i)}} y^{\alpha_2^{(i)}} z^{\alpha_3^{(i)}} \in K [a_1 , \ldots , a_{28}] [x,y,z]$ with
\[
\{ (\alpha_1^{(1)}, \alpha_2^{(1)}, \alpha_3^{(1)}), \ldots , (\alpha_1^{(28)}, \alpha_2^{(28)}, \alpha_3^{(28)}) \} = \{ (\alpha_1, \alpha_2, \alpha_3) \in (\mathbb{Z}_{\geq 0})^{\oplus 3} : \alpha_1 + \alpha_2 + \alpha_3 = 6\},
\]
where $a_1, \ldots, a_{28}$ are indeterminates.
\item For each $\ell \in \{ 1, \ldots , 5 \}$:
\begin{enumerate}
\item Write $P_\ell=(p_1 : p_2 : p_3)$ for $p_1, p_2, p_3 \in L$.
\item Let $k$ be the minimal element in $\{1,2,3 \}$ such that $p_k \neq 0$, and put $z_{k} = p_k$, $z_{i} = X$ and $z_{j}=Y$ for $i,j\in \{1,2,3\} \smallsetminus \{k\}$ with $i < j$.
\item Compute $F_{\ell}:= F(z_1,z_2,z_3) \in L[a_1, \ldots , a_{28}][X,Y]$.
Let $f_{3 \ell - 2}$ and $f_{3 \ell - 1}$ be the coefficients of $X$ and $Y$ in $F_{\ell}$ respectively, and $f_{3 \ell}$ the constant term of $F_{\ell}$ as a polynomial in $X,Y$.
\end{enumerate}
\item Compute a basis $\{ \mathbf{b}_1, \ldots , \mathbf{b}_{d} \} \subset K^{\oplus 28}$ of the null-space of the linear system over $L$ defined by $f_{t}(a_1, \ldots , a_{28}) = 0$ for $1 \leq t \leq 15$, where $d$ is the dimension of the null-space. By Proposition \ref{dim=12} we have $d=13$.
\item For each $(v_1, \ldots , v_d) \in K^{\oplus d} \smallsetminus \{ (0, \ldots , 0) \}$ with $v_1 \in \{0, 1 \}$:
\begin{enumerate}
\item Compute $\mathbf{c}: = \sum_{j=1}^{d} v_j \mathbf{b}_j \in K^{\oplus 28} \smallsetminus \{ (0, \ldots , 0) \}$.
For each $1 \leq i \leq 28$, we denote by $c_i$ the $i$-th entry of $\mathbf{c}$.
\item If $\sum_{i=1}^{28} c_i x^{\alpha_1^{(i)}} y^{\alpha_2^{(i)}} z^{\alpha_3^{(i)}} \in K[x,y,z]$ is irreducible, and if the plane curve defined by $\sum_{i=1}^{28} c_i x^{\alpha_1^{(i)}} y^{\alpha_2^{(i)}} z^{\alpha_3^{(i)}}=0$ in $\mathbb{P}^2$ has geometric genus $5$:
\begin{enumerate}
\item[(i)] Set $F_{\mathbf{c}} := \sum_{i=1}^{28} c_i x^{\alpha_1^{(i)}} y^{\alpha_2^{(i)}} z^{\alpha_3^{(i)}}$, and let $C'$ be the plane curve in $\mathbb{P}^2$ defined by $F_{\mathbf{c}} = 0$.
\item[(ii)] Compute $\# C (K')$ by the formula \eqref{formula:NumberOfRationalPoints} given in Subsection \ref{subsec:generic}, where $C$ is the normalization of $C'$.
\item[(iii)] If $\#C (K') \geq N$, replace $\mathcal{F}$ by $\mathcal{F} \cup \{ F_{\mathbf{c}} \}$.
\end{enumerate}
\end{enumerate}
\item Output $\mathcal{F}$.
\end{enumerate}
\end{al}
\begin{rem}\label{rem:alg}
\begin{enumerate}
\item We can take a basis $\{ \mathbf{b}_1, \ldots , \mathbf{b}_{d} \}$ in Step (4) so that $\mathbf{b}_i \in K^{\oplus 28}$ for all $1 \leq i \leq d$, by the following general fact:
Let $E/K$ be a separable extension of fields, and $L$ the Galois closure of $E$ over $K$.
If a linear system over $E$ is $\mathrm{Gal}(L/K)$-stable, then each entry of the echelon form of the coefficient matrix of the system belongs to $K$.
\item In Step (5)(b)(ii), we need to compute the discriminant $\Delta_{\ell}$ of the degree-$2$ part of the Taylor expansion of $F_{\mathbf{c}}$ at each $P_{\ell}$.
For this, we can use $F_{\ell}$ computed in Step (3) as follows:
Let $D_{\ell} \in L[a_1, \ldots , a_{28}]$ denote the discriminant of the degree-$2$ part of $F_{\ell}$ as a polynomial in $X$ and $Y$.
Then clearly we have $\Delta_{\ell} = D_{\ell}(\mathbf{c})$.
\end{enumerate}
\end{rem}
\subsection{Correctness of our algorithm and remarks for implementation}
The correctness of Algorithm \ref{alg:I} follows mainly from the definition of the multiplicity of a projective plane curve at a singular point:
Indeed, each polynomial $F_{\ell}$ computed in Step (3) is equal to the Taylor expansion of $F$ at $P_{\ell}$.
Since any vector $\mathbf{c} $ defining $F_{\mathbf{c}}$ in Step (5)(b)(i) is a root of the system $f_{3 \ell -2} = f_{3 \ell -1} = f_{3 \ell}=0$ for all $1 \leq \ell \leq 5$, we have that $P_{\ell}$ is a singular point of $C' : F_{\mathbf{c}} = 0$ with multiplicity $2$ for each $1 \leq \ell \leq 5$.
Conversely, we suppose that $\mathbf{c}$ is a vector in $K^{\oplus 28} \smallsetminus \{ (0, \ldots , 0) \}$ such that $F_{\mathbf{c}} = 0$ is an irreducible plane curve of geometric genus $5$ with multiplicity $2$ at the points $P_1, \ldots , P_5$.
By the definition of the multiplicity of a singularity, the vector $\mathbf{c}$ is a root of the system $f_{3\ell-2} = f_{3\ell-1} = f_{3\ell}=0$ for all $1 \leq \ell \leq 5$.
As it was described in Remark \ref{rem:alg} (1), each entry of the echelon form of the coefficient matrix of the system belongs to $K$, and thus $\mathbf{c}$ is a root of a linear system over $K$ whose null-space over $L$ is the same as that of $f_{3\ell-2} = f_{3\ell-1} = f_{3\ell}=0$ for all $1 \leq \ell \leq 5$.
Hence $\mathbf{c}$ is written as $\mathbf{c} = \sum_{j=1}^{d} v_j \mathbf{b}_j$ for some $(v_1, \ldots , v_d) \in K^{\oplus d} \smallsetminus \{ (0, \ldots , 0) \}$. Note that it suffices to consider only $v_1 \in \{0,1\}$, because
scalar multiplication does not change the isomorphism class of the sextic.
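The normalization $v_1\in\{0,1\}$ in Step (5) is the usual choice of representatives of scalar classes. The following Python sketch checks, on a small hypothetical instance ($d=3$ instead of $d=13$), that the scalar multiples of these representatives cover all nonzero coefficient vectors, so that no sextic is missed:

```python
from itertools import product

p, d = 3, 3   # a small hypothetical instance; the actual algorithm has d = 13

# representatives enumerated in Step (5): nonzero vectors with v_1 in {0, 1}
reps = [v for v in product(range(p), repeat=d) if any(v) and v[0] in (0, 1)]
assert len(reps) == 2 * p**(d - 1) - 1

# their scalar multiples cover every nonzero vector, so no sextic is missed
covered = {tuple(lam * x % p for x in v) for lam in range(1, p) for v in reps}
assert len(covered) == p**d - 1
print(len(reps), len(covered))   # prints 17 26
```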
We implemented Algorithm \ref{alg:I} and its variant for the case (II) in MAGMA V2.25-8~\cite{Magma}, \cite{MagmaHP} (64-bit version).
Details of our computational environment will be described in Section \ref{sec:new5}.
For Step (5)(b), our implementation utilizes MAGMA's built-in functions \texttt{IsIrreducible}, \texttt{GeometricGenus} and \texttt{Variety}.
In particular, we use \texttt{Variety} to compute $\# C' (K') = \# V (F_{\mathbf{c}})$.
\section{Position analysis for $\mathbb{F}_{3}$}\label{sec:4}
In this section,
we classify positions of five (resp.\ three) points in ${\mathbb P}^2$
which can be singular points of a non-special curve $C'$ over $\mathbb{F}_3$ in case (I)
(resp. in case (II)), up to the action by $\operatorname{PGL}_3(\mathbb{F}_3)$.
In each case, this amounts to enumerating the orbits of an action of a finite group on a finite set. As this is done by a computer calculation (with MAGMA),
we state only the results.
\subsection{Case (I): $C'$ has five double points}
The Frobenius map $\sigma$ over $\mathbb{F}_3$ (raising each coordinate to its third power)
permutes $\{P_1,\ldots, P_5\}$,
i.e.,
\[
\{P_1,\ldots, P_5\} = \{\sigma(P_1),\sigma(P_2),\sigma(P_3),\sigma(P_4),\sigma(P_5)\}.
\]
The pattern of the Frobenius orbits in $\{P_1,\ldots, P_5\}$
is one of $(1,1,1,1,1)$, $(1,1,1,2)$, $(1,2,2)$, $(1,1,3)$, $(2,3)$, $(1,4)$ and $(5)$, where for example $(1,2,2)$ means that
$\{P_1,\ldots, P_5\}$ consists of three Frobenius orbits
of cardinality $1$, $2$ and $2$ respectively.
\noindent{\bf Case (1,1,1,1,1):}
This is the case where every singular point is $\mathbb{F}_3$-rational.
By a linear transformation by an element of $\operatorname{PGL}_3(\mathbb{F}_p)$,
we may assume
\[
P_1=(1:0:0),\quad P_2=(0:1:0),\quad P_3=(0:0:1).
\]
A computation shows that every position of $\{P_1,\ldots, P_5\}$
such that no four points among them lie
on a line is equivalent, under a linear transformation
by a diagonal matrix and a permutation of $\{x,y,z\}$, to either of the following two cases:
\begin{enumerate}
\item $P_4 = (1:1:0)$ and $P_5 = (0:1:1)$,
\item $P_4 = (1:1:0)$ and $P_5 = (1:2:1)$.
\end{enumerate}
\begin{ex}
In characteristic $3$,
the defining equation of any sextic having singular points
$P_1=(0:0:1)$, $P_2=(0:1:0)$, $P_3=(1:0:0)$, $P_4=(1:1:0)$, $P_5=(0:1:1)$ is
\begin{eqnarray*}
&&
a_1 x^4y^2
+a_2 x^4yz
+a_3 x^4 z^2
+a_4 x^3 y^3
+ a_5 x^3 y^2 z
+ a_6 x^3 y z^2
+ a_7 x^3 z^3\\
&&
+ a_8 x^2 y^4
+ a_9 x^2 y^3 z
+ a_{10} x^2 y^2 z^2
+ a_{11} x^2 y z^3
+ a_{12} x^2 z^4
+ a_{13} x y^4 z\\
&&
+ a_{14} x y^3 z^2
+ a_{15} x y^2 z^3
+ a_{16} x y z^4
+ a_{17} y^4 z^2
+ a_{18} y^3 z^3
+ a_{19} y^2 z^4
\end{eqnarray*}
with
\[
\begin{cases}
a_1 + a_4 + a_8 = 0,\\
2a_1+a_8 = 0,\\
a_2+a_5+a_9+a_{13} = 0,
\end{cases}
\quad
\begin{cases}
a_{17}+a_{18}+a_{19} = 0,\\
a_{13} + a_{14} + a_{15} + a_{16} =0,\\
a_{17}+2a_{19}=0.
\end{cases}
\]
The quadratic form associated to
the singular point $P_i$ (of multiplicity $2$) is as follows:
\begin{eqnarray*}
Q_1 &:=& a_{12} x^2 + a_{16} xy + a_{19} y^2,\\
Q_2 &:=& a_8 x^2 + a_{13} xz + a_{17} z^2,\\
Q_3 &:=& a_1 y^2 + a_2 yz + a_3 z^2,\\
Q_4 &:=& a_1(y-x)^2 + (a_2 + 2a_5 + a_{13})(y-x)z
+(a_3 + a_6 + a_{10} + a_{14} + a_{17})z^2,\\
Q_5 &:=& (a_8 + a_9 + a_{10} + a_{11} + a_{12})x^2
+ (a_{13} + 2 a_{14} + a_{16})x(y-z)
+ a_{17} (y-z)^2
\end{eqnarray*}
respectively.
\end{ex}
\noindent{\bf Case (1,1,1,2) with linearly independent $P_1$, $P_2$, $P_3$:}\quad
We consider the case where $\{P_1, \ldots, P_5\}$ contains three $\mathbb{F}_p$-rational points, say $P_1,P_2,P_3$, where $P_1$, $P_2$, $P_3$ are linearly independent,
and the other points are defined over $\mathbb{F}_{p^2}$ (not over $\mathbb{F}_p$)
and are conjugate to each other, i.e., $(P_4,P_5)=(\sigma(P_5),\sigma(P_4))$.
By a transformation by an element of $\operatorname{PGL}_3(\mathbb{F}_p)$, we may assume
\[
P_1=(1 : 0 : 0),\quad P_2=(0 : 1 : 0),\quad P_3=(0 : 0 : 1).
\]
Let $\zeta$ be a primitive element of $\mathbb{F}_{p^2}$.
A computation shows that every configuration of $\{P_1,\ldots, P_5\}$
in which no four of the points lie on a hyperplane
is equivalent, under a linear transformation
by a diagonal matrix and a permutation of $\{x,y,z\}$,
to one of the following three cases:
\begin{enumerate}
\item $P_4 = (1 : \zeta^5 : \zeta^7)$,
\item $P_4 = (1 : \zeta^7 : 1)$,
\item $P_4 = (1 : \zeta^2 : \zeta^2)$.
\end{enumerate}
with $P_5 = \sigma(P_4)$.
\noindent {\bf Case (1,1,1,2) with linearly dependent $P_1$, $P_2$, $P_3$:}\quad
We consider the case where $\{P_1,P_2,P_3,P_4,P_5\}$ contains three $\mathbb{F}_p$-rational points, say $P_1,P_2,P_3$, where $P_1$, $P_2$, $P_3$ are linearly dependent,
and the other points are defined over $\mathbb{F}_{p^2}$
and are conjugate to each other.
By a transformation by an element of $\operatorname{PGL}_3(\mathbb{F}_p)$, we may assume
\[
P_1=(1 : 0 : 0),\quad P_2=(0 : 1 : 0),\quad P_3=(1:1:0).
\]
Let $\zeta$ be a primitive element of $\mathbb{F}_{p^2}$. We have three equivalence classes:
\begin{enumerate}
\item $P_4 = (1:\zeta^5:\zeta^7)$,
\item $P_4 = (1:1:\zeta^7)$,
\item $P_4 = (1:\zeta^2:\zeta^2)$
\end{enumerate}
with $P_5 = \sigma(P_4)$.
\noindent {\bf Case (1,2,2):}\quad
We consider the case where $\{P_1,P_2,P_3,P_4,P_5\}$ contains one $\mathbb{F}_p$-rational point $P_1$
and two pairs $(P_2,P_3)$ and $(P_4,P_5)$ of $\mathbb{F}_{p^2}$-rational points,
where $P_3=\sigma(P_2)$ and $P_5 = \sigma(P_4)$.
By a transformation by an element of $\operatorname{PGL}_3(\mathbb{F}_p)$, we may assume
\[
P_1=(1:0:0).
\]
Let $\zeta$ be a primitive element of $\mathbb{F}_{p^2}$. We have five equivalence classes:
\begin{enumerate}
\item $(P_2,P_4)= ((1:1:\zeta^2),(0:1:\zeta^6))$,
\item $(P_2,P_4)= ((1:2:\zeta^5),(1:\zeta^2:\zeta^7))$,
\item $(P_2,P_4)= ((1: \zeta : 1),(1:\zeta^7:\zeta^7))$,
\item $(P_2,P_4)= ((1:\zeta^2:\zeta^6),(1:0:\zeta^5))$,
\item $(P_2,P_4)= ((1:0:\zeta^2),(1:1:\zeta^5))$.
\end{enumerate}
\noindent {\bf Case (1,1,3):}\quad
We consider the case where $\{P_1,P_2,P_3,P_4,P_5\}$ contains two $\mathbb{F}_p$-rational points $P_1, P_2$
and a triple $(P_3,P_4,P_5)$ of conjugate $\mathbb{F}_{p^3}$-rational points,
where
\[
(P_3,P_4,P_5)=(P_3,\sigma(P_3), \sigma^2(P_3)).
\]
By a transformation by an element of $\operatorname{PGL}_3(\mathbb{F}_p)$, we may assume
\[
P_1=(1:0:0),\quad P_2=(0:1:0).
\]
Let $\zeta$ be a primitive element of $\mathbb{F}_{p^3}$. We have four cases:
\begin{enumerate}
\item $P_3=(1:2:\zeta^5)$,
\item $P_3=(1:\zeta^6:\zeta^{25})$,
\item $P_3=(1:\zeta^{17}:\zeta^2)$,
\item $P_3=(1:2:\zeta^{10})$.
\end{enumerate}
\noindent {\bf Case (2,3):}\quad
We consider the case where $\{P_1,P_2,P_3,P_4,P_5\}$ contains a pair
$(P_1,P_2)$ of conjugate $\mathbb{F}_{p^2}$-rational points with $P_2=\sigma(P_1)$ and
a triple $(P_3,P_4,P_5)$ of conjugate $\mathbb{F}_{p^3}$-rational points,
where $(P_3,P_4,P_5)=(P_3,\sigma(P_3), \sigma^2(P_3))$.
By a transformation by an element of $\operatorname{PGL}_3(\mathbb{F}_p)$, we may assume
\[
P_1=(1: \xi : 0)
\]
where $\xi$ is a primitive element of $\mathbb{F}_{p^2}$.
Let $\zeta$ be a primitive element of $\mathbb{F}_{p^3}$. We have three cases:
\begin{enumerate}
\item $P_3 = (1:2:\zeta^5)$,
\item $P_3 = (1:\zeta^{22}:2)$,
\item $P_3 = (1:\zeta^{17}:\zeta^2)$.
\end{enumerate}
\noindent {\bf Case (1,4):}\quad
We consider the case where $\{P_1,P_2,P_3,P_4,P_5\}$ contains one $\mathbb{F}_3$-rational point $P_1$,
and the other four points are defined over $\mathbb{F}_{3^4}$ and are conjugate to each other,
say
\[ (P_2,P_3,P_4,P_5)=(P_2,\sigma(P_2), \sigma^2(P_2),\sigma^3(P_2)).
\]
By a transformation by an element of $\operatorname{PGL}_3(\mathbb{F}_3)$, we may assume
$P_1=(1:0:0)$.
Let $\zeta$ be a primitive element of $\mathbb{F}_{3^4}$. We have five equivalent classes:
\begin{enumerate}
\item $P_2 = (1:\zeta^{75}:\zeta^{49})$,
\item $P_2 = (1:\zeta^8:\zeta^{70})$,
\item $P_2 = (1:\zeta^{59}:\zeta^{53})$,
\item $P_2 = (1:\zeta^{72}:\zeta^{29})$,
\item $P_2 = (1:\zeta^5:\zeta^{75})$.
\end{enumerate}
\noindent {\bf Case (5):}\quad This is the case where singular points on $C'$ are defined over $\mathbb{F}_{3^5}$ (but not over $\mathbb{F}_3$).
Then $\{P_1, \ldots, P_5\}$ consists of a single Frobenius orbit, namely
\[
\{P_1, P_2, P_3, P_4, P_5\}=\{P_1, \sigma(P_1), \sigma^2(P_1), \sigma^3(P_1),\sigma^4(P_1)\}.
\]
In this case, the five points are determined by $P_1$ alone.
A computation shows that there are two cases:
\begin{enumerate}
\item $P_1 = (1: \zeta^{127}: \zeta^{143})$,
\item $P_1 = (1: \zeta^{218}: \zeta^{72})$
\end{enumerate}
for a primitive element $\zeta$ of $\mathbb{F}_{3^5}$.
\subsection{Case (II): $C'$ has one triple point with two double points}
We consider sextics $C'$ with singular points consisting of
one triple point and two double points.
Since it is unique, the triple point must be defined over $\mathbb{F}_3$.
Let $P_1$ be the triple point.
By a transformation by an element of $\operatorname{PGL}_3(\mathbb{F}_3)$, we may assume
\[
P_1=(0:0:1).
\]
The patterns of the Frobenius orbits of the two remaining double points $\{P_2, P_3\}$ are $(1,1)$ and $(2)$.
\noindent {\bf Case (1,1):}\quad
This is the case where $C'$ has one triple point with two $\mathbb{F}_{3}$-rational double points. Up to actions by elements of $\operatorname{PGL}_3(\mathbb{F}_3)$ stabilizing $P_1$,
we have a unique case:
\[
P_2 = (1:0:0),\qquad P_3 = (0:1:0).
\]
Up to actions by elements of $\operatorname{PGL}_3(\mathbb{F}_3)$ stabilizing $P_1$ and $\{P_2,P_3\}$, we may assume that $F$ is of the form
\[
F = b_1 x^3 z^3 + b_2 x^2yz^3 + b_3 x^3y^3 + \text{other terms}
\]
with $b_1,b_2,b_3 \in \{0,1\}$ and $(b_1,b_2) \ne (0,0)$.
\noindent {\bf Case (2):}\quad
This is the case where $C'$ has one triple point with two $\mathbb{F}_{9}$-rational double points. Up to actions by elements of $\operatorname{PGL}_3(\mathbb{F}_3)$ stabilizing $P_1$,
we have a unique case:
\[
P_2 = (1:\zeta:0),\qquad P_3 = (1:\zeta^3:0),
\]
where $\zeta$ is a primitive element of $\mathbb{F}_9$
satisfying $\zeta^2-\zeta-1=0$.
Up to actions by elements of $\operatorname{PGL}_3(\mathbb{F}_3)$ stabilizing $P_1$ and $\{P_2,P_3\}$, we may assume that $F$ is of the form
\[
F = b_1 x^3 z^3 + b_2 x^2yz^3 + b_3 x^3y^3 + \text{other terms}
\]
with $b_1=1$ and $b_2,b_3 \in \{0,1\}$.
\section{Computational results with explicit equations in characteristic $3$}\label{sec:new5}
For $K=\mathbb{F}_p$ and $K'=\mathbb{F}_{p^2}$ with $p=3$, we executed Algorithm \ref{alg:I} for (I) and its variant for (II) over MAGMA V2.25-8 in each case given in Section \ref{sec:4}, in order to prove Theorem \ref{thm:main}.
We chose $N=32$ as the input, since
$32$ is the largest known number of $\mathbb{F}_9$-rational points on a curve of genus five over $\mathbb{F}_{9}$.
Our MAGMA implementations were run on a PC with Ubuntu 18.04.5 LTS and a 3.50 GHz quad-core CPU (Intel Xeon E-2224G) with 64 GB RAM.
It took about 58 hours in total to execute the algorithms needed to obtain Theorem \ref{thm:main}.
For the listed curves $C$ with $\# C (\mathbb{F}_{p^2}) \ge 32$, we also computed their Weil polynomials over MAGMA.
We have also obtained explicit equations of non-special curves of genus five over $\mathbb{F}_3$ with $32$ $\mathbb{F}_{9}$-rational points.
Here, we show some examples:
\begin{enumerate}
\item Case $(1,1,1,1,1)$ with $P_5=(1:2:1)$. The sextic
\[
F=x^4y^2 + x^3y^3 + x^2y^4 + x^4yz + x^3y^2z + 2x^2y^3z + 2xy^4z +
2x^2yz^3 + xy^2z^3 + x^2z^4 + 2xyz^4 + y^2z^4
\]
has 32 $\mathbb{F}_9$-rational points with Weil polynomial
$(t + 3)^6(t^2 + 2t + 9)^2$.
\item Case $(1,1,1,2)$ with linearly independent $P_1,P_2,P_3$ where $P_4=(1:\zeta^2:\zeta^2)$. The sextic
\[F=x^2y^4 + x^4yz + 2y^4z^2 + x^2z^4 + 2y^2z^4 \]
has 32 $\mathbb{F}_9$-rational points with Weil polynomial
$(t^2 + 2t + 9)(t^2 + 5t + 9)^4$.
\item Case $(1,1,1,2)$ with linearly dependent $P_1,P_2,P_3$ where $P_4=(1:\zeta^5:\zeta^7)$.
The sextic
\begin{eqnarray*}
F&=&x^4y^2 + x^3y^3 + x^2y^4 + 2x^3y^2z + xy^4z + x^2y^2z^2 + 2xy^3z^2 + 2x^3z^3\\
&& + 2y^3z^3 + x^2z^4 + 2xyz^4 + 2y^2z^4 + z^6
\end{eqnarray*}
has 32 $\mathbb{F}_9$-rational points with Weil polynomial
$(t+3)^4(t^2 + 2t + 9)(t^2+4t+9)^2$.
\item Case $(1,2,2)$ with $P_2=(1:2:\zeta^5)$ and $P_4=(1:\zeta^2:\zeta^7)$.
The sextic
\begin{eqnarray*}
F&=&x^4y^2 + 2x^3y^3 + 2xy^5 + 2y^6 + x^2y^3z + 2y^5z + 2x^4z^2 + x^3yz^2 + xy^3z^2 \\
&&+ 2x^3z^3 + x^2yz^3 +
xyz^4 + y^2z^4 + 2xz^5 + 2yz^5 + z^6
\end{eqnarray*}
has 32 $\mathbb{F}_9$-rational points with Weil polynomial
$(t + 3)^2 (t^4 + 8t^3 + 32t^2 + 72t + 81)^2$.
\item Case $(1,1,3)$ with $P_3=(1:2:\zeta^5)$. The sextic
\begin{eqnarray*}
F&=&x^4y^2 + x^2y^4 + x^4yz + 2xy^4z + 2x^4z^2 + 2x^3yz^2 + x^2y^2z^2
+ 2xy^3z^2 + 2y^4z^2 + 2x^3z^3\\
&& + 2x^2yz^3 + xy^2z^3 + y^3z^3 +
2x^2z^4 + xyz^4 + 2y^2z^4 + xz^5 + 2yz^5 + 2z^6
\end{eqnarray*}
has 32 $\mathbb{F}_9$-rational points with Weil polynomial
$(t^2 + 2t + 9)(t^2 + 5t + 9)^4$.
\item Case $(2,3)$ with $P_1=(1:\xi:0)$ and $P_3=(1:2:\zeta^5)$.
The sextic
\begin{eqnarray*}
F &=& x^5y + x^4y^2 + 2x^2y^4 + 2y^6 + x^5z + x^4yz + 2x^3y^2z +
2x^3yz^2 + x^2y^2z^2 + x^3z^3 \\
& & + 2x^2yz^3 + xy^2z^3+ 2x^2z^4 + xyz^4 + 2y^2z^4 + xz^5 + z^6
\end{eqnarray*}
has 32 $\mathbb{F}_9$-rational points with Weil polynomial
$(t^2 + 2t + 9)(t^2 + 5t + 9)^4$.
\end{enumerate}
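As an independent sanity check (our Python sketch, not the authors' MAGMA code), one can verify over $\mathbb{F}_3$ that the sextic of example (1) and all of its first partial derivatives vanish at the five prescribed double points $(1:0:0)$, $(0:1:0)$, $(0:0:1)$, $(1:1:0)$, $(1:2:1)$; vanishing of $F$ and its gradient is the multiplicity-$\geq 2$ condition.

```python
# Sextic of example (1), stored as {(i, j, k): coeff} for x^i y^j z^k over F_3.
F = {
    (4, 2, 0): 1, (3, 3, 0): 1, (2, 4, 0): 1, (4, 1, 1): 1,
    (3, 2, 1): 1, (2, 3, 1): 2, (1, 4, 1): 2, (2, 1, 3): 2,
    (1, 2, 3): 1, (2, 0, 4): 1, (1, 1, 4): 2, (0, 2, 4): 1,
}

def evaluate(poly, pt, p=3):
    """Evaluate the polynomial at a point of F_p^3."""
    x, y, z = pt
    return sum(c * x**i * y**j * z**k for (i, j, k), c in poly.items()) % p

def partial(poly, var, p=3):
    """Formal partial derivative with respect to coordinate index var."""
    out = {}
    for exps, c in poly.items():
        if exps[var] == 0:
            continue
        new = list(exps)
        new[var] -= 1
        key = tuple(new)
        out[key] = (out.get(key, 0) + c * exps[var]) % p
    return out

double_points = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 2, 1)]
```

A full check that these are ordinary double points would additionally test that the quadratic part at each point is nondegenerate; we only check the vanishing conditions here.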
\section{Concluding remarks with some open problems}\label{sec:new6}
We provided a new effective parameterization of the space of canonical and non-trigonal curves of genus $5$.
We realized such a curve as the normalization of a sextic in $\mathbb{P}^2$.
It was also proved that if the associated sextic model has five double points, then the dimension of the space of such curves is $12$, which is exactly the dimension of the moduli space of curves of genus $5$.
As an application, we also presented an algorithm to enumerate generic curves of genus $5$ over finite fields with many rational points.
By executing the algorithm over MAGMA, we showed that the maximal number of $\mathbb{F}_9$-rational points of generic curves of genus $5$ over $\mathbb{F}_3$ is $32$.
Our explicit parametrization and the algorithm presented in this paper may lead to fruitful applications in both theory and computation, such as classifying
possible invariants (Hasse-Witt rank and so on) of canonical and non-trigonal curves of genus $5$.
Finally, we list some open problems worth considering:
\begin{enumerate}
\item[(a)] Extend the parameterization to the case where curves have more complex singularities.
Concretely, present a parameterization of non-hyperelliptic and non-trigonal curves $C$ of genus $5$ such that the equality \eqref{GenericGenusFormula} in Subsection \ref{subsec:generic} does not hold (cf.\ Remark \ref{rem:non-mild}).
A more general problem is to give an explicit model in $\mathbb{P}^2$ of non-hyperelliptic curves of other genera $\geq 4$.
Cf.\ \cite{LRRS}, where a parameterization of generic curves of genus $3$ by an equation with $7$ parameters is presented (the moduli dimension is $6$).
\item[(b)] Present methods to compute invariants of generic curves of genus $5$ such as Cartier-Manin and Hasse-Witt matrices.
If we can compute Cartier-Manin and Hasse-Witt matrices,
we can determine whether given curves are superspecial or not, as in \cite{KH16}, \cite{KH18}, \cite{trigonal} and \cite{KH17}.
For computing Cartier-Manin matrices, we may apply a method in \cite{SV} for a plane curve.
\item[(c)] Construct an algorithm to test whether given two generic curves of genus $5$ are isomorphic or not.
Computing the automorphism group of such a curve is also an interesting problem.
Cf.\ for the case of genus-$4$ non-hyperelliptic curves, the authors (resp.\ the authors and Senda) already presented an algorithm for the isomorphism test (resp.\ for computing automorphism groups), see \cite{KH17} (resp.\ \cite{KHS17}).
\item[(d)] Improve the efficiency of Algorithm \ref{alg:I} and its variant for the case (II), and then enumerate generic curves of genus $5$ with many rational points for $p>3$.
For this, it is important to reduce the number of curves $C$ that we need to search.
\end{enumerate}
\footnote[0]{
E-mail address of the first author: \texttt{[email protected]}\\
E-mail address of the second author: \texttt{[email protected]}
}
\end{document}
|
\begin{document}
\begin{center}
\begingroup
\scalefont{1.7}
\textbf{On Bayesian Nonparametric Continuous Time Series Models}\endgroup
\textbf{George Karabatsos
\footnote{
\noindent Corresponding author. \noindent Professor, University of
Illinois-Chicago, U.S.A., Program in Measurement and Statistics. 1040 W.
Harrison St. (MC\ 147), Chicago, IL 60607. E-mail: [email protected].
Phone:\ 312-413-1816.}}\\[0pt]
\textit{University of Illinois-Chicago}
\noindent and
\textbf{Stephen G. Walker
\footnote{
\noindent Professor, University of Kent, United Kingdom, School of
Mathematics, Statistics \& Actuarial Science. Currently, Visiting Professor
at The University of Texas at Austin, Division of Statistics and Scientific
Computation.}}
\textit{University of Kent, United Kingdom}\\[0pt]
March 2, 2013
\end{center}
\noindent \textbf{Abstract:} This paper is a note on the use of Bayesian
nonparametric mixture models for continuous time series. We identify a key
requirement for such models, and then establish that there is a single type
of model which meets this requirement. As it turns out, the model is well
known in multiple change-point problems.
\noindent \textbf{Keywords:} Change point; Mixture model.
\section{Introduction{\textbf{\label{Introduction Section}}}}
A lot of recent research has focused on the development of Bayesian
nonparametric, countably-infinite mixture models for time series data. This
work has aimed to relax the normality assumptions of the general class of
dynamic linear models (West \&\ Harrison, 1997\nocite{WestHarrison97}),
which already encompasses traditional normal (time-static)\ linear
regression, autoregressions, autoregressive moving average (ARMA) models,
and nonstationary polynomial trend and time-series models.
These Bayesian nonparametric (infinite-mixture)\ time series models have the
general form:
\begin{equation*}
f(y_{t}|t)=\dint f(y_{t}|\mathbf{x}_{t},\boldsymbol{\gamma },\boldsymbol{
\theta })\mathrm{d}G_{t}(\boldsymbol{\theta })=\tsum\limits_{j=1}^{\infty
}f(y_{t}|\mathbf{x}_{t},\boldsymbol{\gamma },\boldsymbol{\theta }
_{tj})\omega _{j}(t),
\end{equation*}
given time $t\in \mathcal{T};$ kernel (component) densities $\{f(\cdot |
\mathbf{x}_{t},\boldsymbol{\gamma },\boldsymbol{\theta }_{tj})\}_{j=1}^{
\infty }$ which are often specified by normal densities of a dynamic linear
model; mixture distribution
\begin{equation*}
G_{t}(\cdot )=\tsum\nolimits_{j=1}^{\infty }\omega _{j}(t)\delta _{
\boldsymbol{\theta }_{j}(t)}(\cdot ),
\end{equation*}
which is formed by an infinite-mixture of point-mass distributions $\delta _{
\boldsymbol{\theta }_{j}(t)}(\cdot )$ with mixture weights $\omega _{j}(t)$;
and prior distributions $\boldsymbol{\gamma }\sim \pi (\boldsymbol{\gamma })$
,$\ \boldsymbol{\theta }_{tj}\sim G_{0t}$, $\{\omega _{j}(t)\}_{j=1}^{\infty
}\sim \Pi $. \ All of the earlier models focus on discrete time, and
specify $G_{t}$ to be some variant of the Dependent Dirichlet process (DDP)\
(MacEachern, 1999\nocite{MacEachern99}, 2000\nocite{MacEachern00}, 2001
\nocite{MacEachern01}), so that the mixture weights have a stick-breaking
form, with
\begin{equation*}
\omega _{j}(t)=\upsilon _{j}(t)\tprod\nolimits_{l=1}^{j-1}(1-\upsilon
_{l}(t)),\text{ and }\upsilon _{j}(t):\mathcal{T}\rightarrow \lbrack 0,1].
\end{equation*}
(Sethuraman, 1994\nocite{Sethuraman94}). Such DDP-based time-series models
either assume time-dependent stick-breaking weights (Griffin \&\ Steel, 2006
\nocite{GriffinSteel06}, 2011; Rodriguez \&\ Dunson, 2011\nocite
{RodriguezDunson11}), or assume non-time-dependent stick-breaking weights
and a time-dependent prior (baseline) distribution $G_{0t}$ (Rodriguez \&\
ter Horst, 2008\nocite{RodriguezTerHorst08}), or assume a fully
non-time-dependent Dirichlet process (DP)\ $G_{t}=G$ with only
time-dependence in the kernel densities (Hatjispyros, et al., 2009\nocite
{Hatjispyros_Nicoleris_Walker09}; Tang \& Ghosal, 2007\nocite{TangGhosal07};
Lau \&\ So, 2008\nocite{LauSo08}; Caron et al., 2008\nocite
{CaronDavyDoucetDuflos08}; Giardina et al., 2011\nocite{Giardina_etal11}; Di
Lucca et al., 2012\nocite{DiLucca_etal12}). Other related approaches
construct a time-dependent DDP$\ G_{t}$ either by generalizing the P\'{o}lya
urn scheme of the DP (e.g., Zhu et al., 2005\nocite{Zhu_etal05}; Caron et
al., 2007\nocite{CaronDavyDoucet07}); by a convex combination of
hierarchical Dirichlet processes (HDP)\ or DPs (Ren et al., 2008\nocite
{Ren_etal08}; Dunson, 2006\nocite{Dunson06}); by a HDP-based hidden Markov
model that has infinitely-many states (Fox et al., 2008\nocite{Fox_etal08},
2011\nocite{Fox_etal11}); or by a Markov-switching model having
finitely-many states (Taddy \& Kottas, 2009\nocite{TaddyKottas09}).
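The stick-breaking construction above is straightforward to simulate. The following Python sketch is our illustration ($\upsilon_j\sim\mathrm{Beta}(1,\alpha)$ with $\alpha=2$ is an arbitrary choice, corresponding to the Dirichlet-process special case); it also checks the telescoping identity $\sum_{j\leq N}\omega_j = 1-\prod_{l\leq N}(1-\upsilon_l)$.

```python
import math
import random

random.seed(0)

def stick_breaking(vs):
    """Map stick-breaking fractions v_1, v_2, ... to mixture weights
    w_j = v_j * prod_{l<j} (1 - v_l)."""
    weights, remaining = [], 1.0
    for v in vs:
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights

alpha = 2.0  # arbitrary illustrative concentration parameter
vs = [random.betavariate(1.0, alpha) for _ in range(30)]
ws = stick_breaking(vs)
```

Any finite truncation leaves a leftover mass $\prod_{l\leq N}(1-\upsilon_l)$, which is why the weights below sum to strictly less than one.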
The more recent work on Bayesian nonparametric time series modeling has
focused on continuous time, and on developing a time-dependent mixture
distribution that has the general form,
\begin{equation}
G_{t}=\dsum\limits_{j=1}^{\infty }\omega _{j}(t)\delta _{\boldsymbol{\theta }
_{j}}(\cdot ), \label{GenCont}
\end{equation}
based on a process other than the DDP. Above, a baseline prior $\boldsymbol{
\theta }_{j}\sim _{iid}$ $G_{0}$ is assumed, which is a standard assumption.
In Section 2 we describe these continuous time series models, namely, the
geometric model (Mena, Ruggiero, \&\ Walker, 2011\nocite
{MenaRuggieroWalker11}), and a normalized random measure model (NRM)\
(Griffin, 2011\nocite{Griffin11}). In Section 3, we highlight a key property
that such models should possess, and we identify the type of model that
necessarily has this property. We also prove in that section that the existing
continuous time series models do not have the required property.
\section{Continuous time models{\textbf{\label{CTM Section}}}}
The geometric model constructs a dependent process $G_{t}$ using
time-dependent geometric mixture weights
\begin{equation}
\omega _{j}(t)=\lambda _{t}(1-\lambda _{t})^{j-1}, \label{GEOmix}
\end{equation}
with $\lambda _{t}$ specified as a two-type Wright-Fisher diffusion (Mena,
Ruggiero, \&\ Walker, 2011\nocite{MenaRuggieroWalker11}).
The $(\lambda _{t})$ follow a stochastic process whose stationary density
is a beta$(a,b)$. The transition mechanism is given, for $t>s$, by
\begin{equation*}
p(\lambda _{t}|\lambda _{s})=\sum_{m=0}^{\infty }p_{h}(m)\,p(\lambda
_{t}|m,\lambda _{s})
\end{equation*}
where $h=t-s$ and
\begin{equation*}
p(\lambda _{t}|m,\lambda _{s})=\sum_{k=0}^{m}\mathrm{beta}(\lambda
_{t}|a+k,b+m-k)\,\mathrm{bin}(k|m,\lambda _{s})
\end{equation*}
and
\begin{equation*}
p_{h}(m)=\frac{(a+b)_{m}\,\exp (-mch)}{m!}\,\left( 1-e^{-ch}\right) ^{a+b}
\end{equation*}
for some $c>0$.
Hence $G_{t}$ is a continuous time process and the properties are studied in
Mena, Ruggiero, and\ Walker (2011\nocite{MenaRuggieroWalker11}).
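Since $\sum_{m\geq 0}\frac{(a+b)_m}{m!}q^m=(1-q)^{-(a+b)}$ for $q=e^{-ch}\in(0,1)$, the mixing distribution $p_h$ is a negative-binomial-type pmf and sums to one. A small numerical check (our Python sketch, with arbitrary illustrative values of $a$, $b$, $c$, $h$):

```python
import math

def p_h(m, a, b, c, h):
    """p_h(m) = (a+b)_m exp(-m c h) (1 - exp(-c h))^(a+b) / m!,
    evaluated in log space via lgamma for numerical stability."""
    q = math.exp(-c * h)
    logp = (math.lgamma(a + b + m) - math.lgamma(a + b)  # log rising factorial
            + m * math.log(q)
            + (a + b) * math.log(1.0 - q)
            - math.lgamma(m + 1))
    return math.exp(logp)

a, b, c, h = 1.0, 2.0, 1.0, 0.5  # arbitrary illustrative values
probs = [p_h(m, a, b, c, h) for m in range(600)]
total = sum(probs)
```

This also suggests a direct sampler for the transition: draw $m\sim p_h$, then $k\sim\mathrm{bin}(m,\lambda_s)$, then $\lambda_t\sim\mathrm{beta}(a+k,b+m-k)$.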
The normalized random measures (NRM)\ model constructs a time-dependent
process $G_{t}$ using time-dependent mixture weights that are formed by
normalizing a stochastic process derived from non-Gaussian
Ornstein-Uhlenbeck processes (Griffin, 2011\nocite{Griffin11}).
Specifically, these weights are constructed by
\begin{equation}
\omega _{j}(t)=\dfrac{\mathbf{1}(\tau _{j}\leq t)\exp (-\lambda (t-\tau
_{j}))J_{j}}{\tsum\nolimits_{l=1}^{\infty }\mathbf{1}(\tau _{l}\leq t)\exp
(-\lambda (t-\tau _{l}))J_{l}}, \label{NRMmix}
\end{equation}
where $(\tau ,J)$ follows a Poisson process with intensity $\lambda \,w(J)$,
where $w$ is a L\'{e}vy density.
Details and examples of obtaining the $(\tau _{j},J_{j})$ are provided by
Griffin (2011\nocite{Griffin11}). Aside from the specific examples
considered there, we also note that any sequence of $(\tau
_{j},J_{j})$ is permissible provided
\begin{equation*}
{\tsum\nolimits_{l=1}^{\infty }\mathbf{1}(\tau _{l}\leq t)\exp (-\lambda
(t-\tau _{l}))J_{l}}<\infty
\end{equation*}
for all $t$.
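The indicator $\mathbf{1}(\tau_j\leq t)$ in (\ref{NRMmix}) means only atoms born by time $t$ enter the normalization. A direct transcription (our sketch; the hand-picked atoms stand in for a draw of the Poisson process):

```python
import math

def nrm_weights(atoms, t, lam):
    """Normalized weights w_j(t) = 1(tau_j <= t) exp(-lam (t - tau_j)) J_j,
    divided by their sum over the atoms (tau_j, J_j) born by time t."""
    raw = [math.exp(-lam * (t - tau)) * J if tau <= t else 0.0
           for tau, J in atoms]
    total = sum(raw)
    return [r / total for r in raw]

# hand-picked (tau_j, J_j) pairs purely for illustration; a real NRM draws
# them from a Poisson process with intensity lam * w(J)
atoms = [(0.2, 1.5), (0.9, 0.4), (1.3, 2.2), (2.5, 0.7)]
w = nrm_weights(atoms, t=2.0, lam=1.0)
```

Atoms born after $t$ (here the one at $\tau=2.5$) receive weight zero, and the remaining weights sum to one.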
\section{A key property{\textbf{\label{AKP Section}}}}
Using the mixture model
\begin{equation*}
f(y|t)=\int K(y|\theta)\,G_t(d\theta)=\sum_{j=1}^\infty
w_j(t)\,K(y|\theta_j),
\end{equation*}
we insist on the natural requirement that, for all suitably small $h$,
$y_t$ and $y_{t+h}$ arise from the same component. This
requirement is clearly not met by simply insisting that $G_{t+h}\rightarrow
G_t$ as $h\rightarrow 0$.
So, in this paper, we introduce the argument that a Bayesian nonparametric
continuous time series model should have a certain property. Specifically,
based on the above discussion, we need the property that
\begin{equation*}
P(\theta _{t}=\theta _{t+h})\rightarrow 1\quad \text{as}\quad h\rightarrow 0,
\end{equation*}
where $\theta _{t}$ denotes a sample from
\begin{equation*}
G_{t}=\sum_{j=1}^{\infty }\omega _{j}(t)\,\delta _{\theta _{j}},
\end{equation*}
i.e. that $\theta_t|G_t\sim G_t$, which means that $P(\theta_t=
\theta_j)=w_j(t)$.
Now it can be shown that
\begin{equation*}
P(\theta _{t}=\theta _{t+h})=\sum_{j=1}^{\infty }P(\theta _{t}=\theta
_{t+h}=\theta _{j}),
\end{equation*}
and hence we are asking for
\begin{equation*}
\text{\textrm{E}}\left\{ \sum_{j=1}^{\infty }\omega _{j}(t)\,\omega
_{j}(t+h)\right\} \rightarrow 1\quad \text{as}\quad h\rightarrow 0.
\end{equation*}
For this, it is necessary that
\begin{equation*}
D(h)=\sum_{j=1}^{\infty }\omega _{j}(t)\,\omega _{j}(t+h)\rightarrow 1\quad
\text{in probability as}\quad h\rightarrow 0.
\end{equation*}
Now assume that
\begin{equation*}
\sup_{j}|\omega _{j}(t+h)-\omega _{j}(t)|\rightarrow 0\quad \mbox{a.s. as}
\quad h\rightarrow 0
\end{equation*}
which is an extremely mild condition.
Hence, for any $\epsilon >0$,
\begin{equation*}
\sup_{j}|\omega _{j}(t+h)-\omega _{j}(t)|<\epsilon
\end{equation*}
for all small enough $h$. Therefore, for all small $h$, since $\sum_{j}\omega _{j}(t)=1$, we have
\begin{equation*}
D(h)\leq \sum_{j=1}^{\infty }\omega _{j}(t)\left( \omega _{j}(t)+\epsilon \right)
=\sum_{j=1}^{\infty }\omega _{j}^{2}(t)+\epsilon \quad \mbox{a.s.}
\end{equation*}
Since $\sum_{j}\omega _{j}^{2}(t)\leq \sum_{j}\omega _{j}(t)=1$, with equality
if and only if a single weight equals one, the only way we can recover the
convergence to 1 in probability is that
\begin{equation*}
\omega _{j}(t)=1\quad\mbox{a.s.}
\end{equation*}
for a particular $j$, which will depend on $t$.
Hence, we believe that a Bayesian nonparametric continuous time series model
should specify a time-dependent mixture distribution $G_{t}$ of the type
given in (\ref{GenCont}), where
\begin{equation*}
\omega _{j}(t)=\mathbf{1}(t\in A_{j}),
\end{equation*}
and the $(A_{j})_{j}$ form a random partition of $(0,\infty )$. In other
words, we recommend Bayesian nonparametric change-point models for time series
analysis. Specifically, let $\mathcal{D}=\{(y_{t_{i}})\}_{i=1}^{n}$ denote a
sample of data consisting of $n$ dependent responses $y_{t_{i}}$ observed at
time points $t_{i}$. Then, such a model may be specified as:
\begin{subequations}
\label{NewModelChange}
\begin{eqnarray}
y_{t_{i}} &\sim &f(y_{t_{i}}|\boldsymbol{\theta }_{z(t_{i})}),\text{ }
i=1,\ldots ,n, \\
z(t_{i}) &=&j\iff \tau _{j-1}<t_{i}\leq \tau _{j}=(\tau _{j-1}+\epsilon _{j})
\\
\epsilon _{j} &\sim &\text{ }\mathrm{Ex}(\lambda ),\text{ \ }j=1,2,\ldots
\text{,} \\
\boldsymbol{\theta }_{j} &\sim &G_{0},\text{ \ }j=1,2,\ldots \text{,}
\end{eqnarray}
where $z(t_{i})$ denotes the random component index, and the gaps $
\epsilon _{j}=\tau _{j}-\tau _{j-1}$ are i.i.d. from an exponential $\mathrm{
Ex}(\lambda )$ prior distribution, with $\tau _{0}:=0$. The exponential
distribution for creating the intervals is not essential but there seems
little reason to make it more complicated.
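The change-point specification above is easy to simulate. In the following Python sketch (ours; the normal kernel for $f$ and the standard-normal baseline $G_0$ are arbitrary stand-ins), we draw the exponential gaps, read off the component indices $z(t_i)$, and generate the responses.

```python
import bisect
import random

random.seed(1)

lam = 0.5  # rate of the Ex(lambda) gap distribution
n = 20
times = sorted(random.uniform(0.0, 10.0) for _ in range(n))

# change points tau_j = tau_{j-1} + eps_j with eps_j ~ Ex(lam) and tau_0 = 0
taus = [0.0]
while taus[-1] <= times[-1]:
    taus.append(taus[-1] + random.expovariate(lam))

def z(t):
    """Component index j such that tau_{j-1} < t <= tau_j (1-based)."""
    return bisect.bisect_left(taus, t, lo=1)

# theta_j ~ G_0 (standard normals here, purely for illustration),
# and y_{t_i} ~ f(. | theta_{z(t_i)}) taken to be N(theta, 1)
thetas = [random.gauss(0.0, 1.0) for _ in range(len(taus))]
ys = [random.gauss(thetas[z(t)], 1.0) for t in times]
```

By construction the component index is a nondecreasing step function of time, which is exactly the change-point behaviour the model encodes.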
Interestingly, neither the geometric model nor the NRM model specifies a
mixing distribution $G_{t}$ with weights that satisfy the key property
previously described. Figure \ref{GeometricPlot} illustrates this fact.
Specifically, for the geometric model, the figure shows samples of the
random component index $z(t)\sim \Pr (z(t)=j)\propto \omega _{j}(t)$, over a
convergent sequence of times $t=t_{l-1}+1/l^{2},$ for $l=1,2,\ldots ,1000,$
with $t_{0}=0$. These samples are presented for different choices of prior
parameters in this model, namely $b=1,10,30,50,$ along with $a=c=1$. As the
figure shows, as $t$ converges to time $1.6439,$ the random variable $z(t)$
does not converge to a single value. Instead, the random variable displays a
degree of uncertainty about the component (kernel)\ density at that time.
Now, we formally show that our time series model satisfies the property,
whereas the geometric model and the NRM model do not. Our model is based on
the weights
\end{subequations}
\begin{equation*}
w_{j}(t)=\mathbf{1}(t\in A_{j})
\end{equation*}
and
\begin{equation*}
A_{j}=(\tau _{j-1},\tau _{j})
\end{equation*}
with
\begin{equation*}
\tau _{j}=\tau _{j-1}+\epsilon _{j}
\end{equation*}
where the $(\epsilon _{j})$ are independent and identically distributed
exponential random variables with parameter $\lambda $, it is
straightforward to show that
\begin{equation*}
\sum_{j=1}^{\infty }w_{j}(t)\,w_{j}(t+h)=\left\{
\begin{array}{ccc}
1 & \text{with probability} & e^{-\lambda h} \\
0 & \text{with probability} & 1-e^{-\lambda h}.
\end{array}
\right.
This follows since we need $t,t+h\in A_{j}$, and by the memorylessness of the
exponential the probability that no change point falls in $(t,t+h]$ is
$e^{-\lambda h}$. Hence, it is seen that
\begin{equation*}
\mathrm{E}\left\{ \sum_{j=1}^{\infty }w_{j}(t)\,w_{j}(t+h)\right\}
=e^{-\lambda h}\rightarrow 1\quad \text{as}\quad h\rightarrow 0.
\end{equation*}
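By memorylessness of the exponential gaps, the probability that no change point falls in $(t,t+h]$ is $e^{-\lambda h}$; a quick Monte Carlo check of this (our sketch, with arbitrarily chosen $\lambda$, $t$, and $h$):

```python
import math
import random

random.seed(42)

def same_interval(t, h, lam):
    """Simulate tau_j = tau_{j-1} + Ex(lam) gaps and report whether no
    change point falls in (t, t + h], i.e. t and t + h share a component."""
    tau = 0.0
    while tau <= t:
        tau += random.expovariate(lam)
    return tau > t + h  # first change point after t also clears t + h

lam, t, h, n = 2.0, 1.0, 0.05, 200_000
est = sum(same_interval(t, h, lam) for _ in range(n)) / n
```

The empirical frequency should agree with $e^{-\lambda h}$ up to Monte Carlo error.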
For the geometric model, we have
\begin{equation*}
\mathrm{E}\left\{ \sum_{j=1}^{\infty }w_{j}(t)\,w_{j}(t+h)\right\}
\end{equation*}
given by
\begin{equation*}
\mathrm{E}\left\{ \sum_{j=1}^{\infty }\lambda _{t}(1-\lambda
_{t})^{j-1}\,\lambda _{t+h}(1-\lambda _{t+h})^{j-1}\right\}
\end{equation*}
which is
\begin{equation*}
\mathrm{E}\left\{ \frac{\lambda _{t}\lambda _{t+h}}{\lambda _{t}+\lambda
_{t+h}-\lambda _{t}\lambda _{t+h}}\right\} .
\end{equation*}
This is strictly less than one, since $\lambda _{t},\lambda _{t+h}<1$ implies
$2\lambda _{t}\lambda _{t+h}<\lambda _{t}+\lambda _{t+h}$.
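The closed form comes from summing the geometric series with ratio $(1-\lambda_t)(1-\lambda_{t+h})$; a deterministic numerical spot check (our sketch, with arbitrary fixed values standing in for $\lambda_t$ and $\lambda_{t+h}$):

```python
def overlap(lam_t, lam_th, terms=500):
    """Partial sum of sum_{j>=1} lam_t (1-lam_t)^{j-1} lam_th (1-lam_th)^{j-1}."""
    r = (1.0 - lam_t) * (1.0 - lam_th)
    return sum(lam_t * lam_th * r ** (j - 1) for j in range(1, terms + 1))

def closed_form(lam_t, lam_th):
    """lam_t lam_th / (lam_t + lam_th - lam_t lam_th), the geometric-series sum."""
    return lam_t * lam_th / (lam_t + lam_th - lam_t * lam_th)

lt, lth = 0.3, 0.55  # arbitrary values in (0, 1)
```

The truncated series matches the closed form to machine precision, and the value is strictly below one, as claimed.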
Finally, the NRM model also has
\begin{equation*}
\mathrm{E}\left\{ \sum_{j=1}^{\infty }w_{j}(t)\,w_{j}(t+h)\right\} <1,
\end{equation*}
and this result follows from the proof of Theorem 2 of Griffin (2011\nocite{Griffin11}),
which appears in the appendix of that paper.
\section{Discussion{\textbf{\label{Discussion Section}}}}
In summary, we advocate a specific property for mixture models for
continuous time series: as $h\rightarrow 0$, the model should
identify a single component index $z(t)$, and hence a single component
density $f(y_{t}|\boldsymbol{\theta }_{z(t)})$ of the dependent response
$Y_{t}$. In other words, there is no strong reason to specify a time-series
model that allows the component density to change drastically as time goes
through incrementally smaller changes. In essence we are not asking for
$G_{t}$ to be close to $G_{t+h}$ (this is given, but is a rather weak
condition); rather we are asking that $\theta _{t}=\theta _{t+h}$ with
probability approaching 1 as $h\rightarrow 0$.
Interestingly, we have shown that two Bayesian nonparametric
(infinite-mixture) models fail this sensible property. In contrast, we have
shown that for a mixture model to satisfy the property, it must be of the
form given in equation (\ref{NewModelChange}). This implies that the mixture
model must be a Bayesian multiple change-point model (e.g., Barry \&
Hartigan, 1993\nocite{BarryHartigan93}; Chib, 1998\nocite{Chib98}), having
infinitely-many change-point parameters $\tau _{j-1}<\tau _{j},$ $
j=1,2,\ldots $. Then, these results may encourage future developments in
Bayesian nonparametric models for continuous time series, more in terms of
multiple change point modeling.
\section{Acknowledgements}
This research is supported by National Science Foundation Grant SES-1156372,
from the Program in Methodology, Measurement, and Statistics.
\begin{figure}
\centering
\includegraphics[width=6.5965in]{timemodels.eps}
\caption{For the geometric time series model, the log of samples of the component
index $z(t)\sim \Pr (z(t)=j)\propto \omega _{j}(t)$, over a convergent sequence of times
$t=t_{l-1}+1/l^{2},$ for $l=1,2,\ldots ,1000,$ with $t_{0}=0$. The component
index samples are shown for a range of choices of prior parameters, $
b=1,10,30,50$, along with $a=c=1$.}
\label{GeometricPlot}
\end{figure}
\end{document}
\begin{document}
\maketitle
\begin{abstract} In this research announcement we present a new
$q$-analog of a classical formula for the exponential generating
function of the Eulerian polynomials. The Eulerian polynomials
enumerate permutations according to their number of descents or
their number of excedances. Our $q$-Eulerian polynomials are the
enumerators for the joint distribution of the excedance statistic
and the major index. There is a vast literature on $q$-Eulerian
polynomials which involve other combinations of Mahonian and
Eulerian permutation statistics, but the combination of major
index and excedance number seems to have been completely
overlooked until now. We use symmetric function theory to prove
our formula. In particular, we prove a symmetric function version
of our formula, which involves an intriguing new class of
symmetric functions. We also present connections with
representations of the symmetric group on the homology of a poset
recently introduced by Bj\"orner and Welker and on the cohomology
of the toric variety associated with the Coxeter complex of the
symmetric group, studied by Procesi, Stanley, Stembridge,
Dolgachev and Lunts. \end{abstract}
\section{Introduction} The subject of permutation statistics
originated in the early 20th century work of Major Percy
MacMahon \cite{mac1,mac2} and has developed into an active and
important area of enumerative combinatorics over the last four
decades. It deals with the enumeration of permutations
according to natural statistics. A permutation statistic is
simply a function from the symmetric group $\mathfrak S_n$ to the set of
nonnegative integers. MacMahon studied four fundamental
permutation statistics: the inversion index, the major index, the
descent number and the excedance number, which we define below.
Let $[n]$ denote the set $\{1,2,\dots,n\}$. For each $\sigma \in
\mathfrak S_n$, the descent set of $\sigma$ is defined to be
$$\Des(\sigma):= \{i \in [n-1] : \sigma(i) > \sigma(i+1) \},$$ and
the excedance set is defined to be $$\Exc(\sigma):= \{i \in [n-1]
: \sigma(i) > i\}.$$ The descent number and excedance number
are defined respectively by $$ \des(\sigma) := |\Des(\sigma)|
\qquad \mbox{ and }\qquad \exc(\sigma) := |\Exc(\sigma)|.$$ For
example, if $\sigma = 32541$, written in one-line notation, then
$$\Des(\sigma) = \{1,3,4\} \qquad \mbox{ and }\qquad \Exc(\sigma) = \{1,3\};$$
hence $\des(\sigma) = 3 $ and $\exc(\sigma) = 2$. If $i \in \Des(\sigma)$ we say
that $\sigma$ has a descent at $i$. If $i \in \Exc(\sigma)$ we say that
$\sigma(i)$ is an excedance of $\sigma$.
MacMahon \cite{mac1,mac2} observed that the descent number and
excedance number are equidistributed, that is, the number of
permutations in $\mathfrak S_n$ with $j$ descents equals the number of
permutations with $j$ excedances for all $j$. (There is a well-known
combinatorial proof of this fact due to Foata \cite{foa1,foa4}.) These
numbers were first studied by Euler and have come to be known as
the Eulerian numbers. They are the coefficients of the {\em
Eulerian polynomials} $$A_n(t) := \sum_{\sigma \in \mathfrak S_n}
t^{\des(\sigma)} = \sum_{\sigma \in \mathfrak S_n} t^{\exc(\sigma)}. $$
Any permutation statistic that is equidistributed with $\des$ and
$\exc$ is said to be an {\em Eulerian statistic}.
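This equidistribution is easy to confirm by brute force for small $n$. The following Python sketch (our own illustration, not part of the announcement) tabulates the coefficients of $A_n(t)$ using either statistic:

```python
from itertools import permutations

def des(p):
    # number of descents of the permutation p (one-line notation)
    return sum(1 for i in range(1, len(p)) if p[i-1] > p[i])

def exc(p):
    # number of excedances: positions i with p(i) > i
    return sum(1 for i, v in enumerate(p, 1) if v > i)

def eulerian(n, stat):
    # coefficient list [a_0, ..., a_{n-1}] of A_n(t), computed with `stat`
    coeffs = [0] * n
    for p in permutations(range(1, n + 1)):
        coeffs[stat(p)] += 1
    return coeffs
```

For instance, `eulerian(4, des)` and `eulerian(4, exc)` both return the Eulerian numbers `[1, 11, 11, 1]`.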
The Eulerian numbers and the Eulerian polynomials have been
extensively studied in many different contexts in the mathematics
and computer science literature. For excellent treatments of this subject, see the classic lecture notes of Foata and Sch\"utzenberger \cite{fs},
the recent lecture notes of Foata and Han \cite{fh}, and Section~5.1 of Knuth's classic book series ``The Art of Computer Programming'' \cite{kn}.
The exponential generating function formula,
\begin{equation} \label{expgen}\sum_{n\ge 0} A_n(t) {z^n \over n!}
= {1-t \over e^{z(t-1)} -t}, \end{equation}
where $A_0(t) =1$, is attributed to Euler in \cite{kn}.
The major index of a permutation is defined by
$$\maj(\sigma) := \sum_{i\in \Des(\sigma)} i .$$
MacMahon \cite{mac2} proved
that the major index is equidistributed with the inversion statistic
$$ \inv(\sigma) := |\{(i,j) : 1 \le i<j\le n \,\,\&
\,\, \sigma(i) > \sigma(j) \}|$$ and Rodrigues \cite{rod} proved
the second equality in \begin{equation} \label{mah} \sum_{\sigma
\in \mathfrak S_n} q^{\maj(\sigma)} = \sum_{\sigma \in \mathfrak S_n}
q^{\inv(\sigma)} = [n]_q!,\end{equation} where $$[n]_q := 1+q
+\dots +q^{n-1}$$ and $$[n]_q! := [n]_q [n-1]_q \cdots [1]_q.$$
(An elegant combinatorial proof of the first equality in
(\ref{mah}) was obtained by Foata \cite{foa2,foa4}.)
Any permutation statistic that is equidistributed with the major
index and inversion index is said to be a {\em Mahonian
statistic}.
Note that by setting $q=1$ in (\ref{mah}), one gets the formula
$n!$ for the number of permutations. Equation (\ref{mah}) is a
beautiful ``$q$-analog'' of this formula and is the fundamental example
of the subject of permutation statistics and
$q$-analogs, in which one seeks to obtain nice $q$-analogs of
enumeration formulas.
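Equation (\ref{mah}) can likewise be verified directly for small $n$. In the Python sketch below (again our own illustration), polynomials in $q$ are represented as coefficient lists:

```python
from itertools import permutations

def maj(p):
    # major index: sum of the descent positions of p
    return sum(i for i in range(1, len(p)) if p[i-1] > p[i])

def inv(p):
    # inversion number of p
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def dist(n, stat):
    # coefficient list of sum_{p in S_n} q^{stat(p)}; top degree is n(n-1)/2
    coeffs = [0] * (n * (n - 1) // 2 + 1)
    for p in permutations(range(1, n + 1)):
        coeffs[stat(p)] += 1
    return coeffs

def qfact(n):
    # coefficient list of [n]_q! = [1]_q [2]_q ... [n]_q
    coeffs = [1]
    for k in range(2, n + 1):
        # multiply by [k]_q = 1 + q + ... + q^{k-1}
        coeffs = [sum(coeffs[j - i] for i in range(k) if 0 <= j - i < len(coeffs))
                  for j in range(len(coeffs) + k - 1)]
    return coeffs
```

For example, `dist(4, maj)`, `dist(4, inv)`, and `qfact(4)` all give `[1, 3, 5, 6, 5, 3, 1]`.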
One can look for nice $q$-analogs of the Eulerian polynomials by
considering the joint distributions of the Mahonian and Eulerian
statistics given above. Consider the four possibilities,
$$A_n^{\inv,\des}(q,t) := \sum_{\sigma \in \mathfrak S_n} q^{\inv(\sigma)} t^{\des(\sigma)} $$
$$A_n^{\maj,\des}(q,t) := \sum_{\sigma \in \mathfrak S_n} q^{\maj(\sigma)} t^{\des(\sigma)} $$
$$A_n^{\inv,\exc}(q,t) := \sum_{\sigma \in \mathfrak S_n}q^{\inv(\sigma)} t^{\exc(\sigma)} $$
$$A_n^{\maj,\exc}(q,t) := \sum_{\sigma \in \mathfrak S_n} q^{\maj(\sigma)} t^{\exc(\sigma)} .$$
There are many interesting results on the first three
$q$-Eulerian polynomials and on multivariate distributions of all
sorts of combinations of Eulerian and Mahonian statistics (for a sample see
\cite{br,csz, foa3,fs2,fz, gg, rrw, ra, sk,st1,wa2}). These include Stanley's \cite{st1} $q$-analog
of (\ref{expgen}) given by
$$\sum_{n\ge 0} A_n^{{\inv,\des}}(q,t) {z^n \over [n]_q!} = {(1-t) \over \exp_q(z(t-1)) -t},$$
where $$\exp_q(z) := \sum_{n\ge 0} {z^n \over [n]_q!}.$$
Surprisingly, we have found no mention of the fourth $q$-Eulerian
polynomial $A_n^{\maj,\exc}(q,t)$ anywhere in the literature. Here
we announce the following remarkable $q$-analog of
(\ref{expgen}).
\begin{thm}\label{expgenth} The $q$-exponential
generating function for
$A_n^{\maj,\exc}(q,t)$ is given by
\begin{equation} \label{expgeneq} \sum_{n\ge 0} A_n^{\maj,\exc}(q,t) {z^n \over [n]_q!} = {(1-tq)
\exp_q(z) \over \exp_q(ztq) - tq \exp_q(z)}\,\, , \end{equation} where
$A_0^{\maj,\exc}(q,t) = 1$.
\end{thm}
When $q=1$, the formula (\ref{expgeneq}) reduces to
(\ref{expgen}) since $${(1-tq) \exp_q(z) \over \exp_q(ztq) -tq
\exp_q(z)} = {(1-t) e^z \over e^{zt} -te^z} = {(1-t) \over
e^{z(t-1)} -t}.$$ Though not quite as easily, one can show that
when $t=1$, the formula (\ref{expgeneq}) reduces to (\ref{mah}).
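Clearing the denominator in (\ref{expgeneq}), extracting the coefficient of $z^n$, and multiplying through by $[n]_q!$ turns the theorem into the finite identity $\sum_{k=0}^{n} A_k^{\maj,\exc}(q,t)\left[{n\atop k}\right]_q\bigl((tq)^{n-k}-tq\bigr) = 1-tq$ for each $n\ge 0$, which can be checked by machine for small $n$. In the Python sketch below (our reformulation, not from the paper), a polynomial in $q,t$ is a dict keyed by exponent pairs:

```python
from itertools import permutations

def padd(a, b):
    # add two polynomials {(deg_q, deg_t): coeff}
    r = dict(a)
    for k, v in b.items():
        r[k] = r.get(k, 0) + v
        if r[k] == 0:
            del r[k]
    return r

def pmul(a, b):
    # multiply two polynomials
    r = {}
    for (i1, j1), c1 in a.items():
        for (i2, j2), c2 in b.items():
            key = (i1 + i2, j1 + j2)
            r[key] = r.get(key, 0) + c1 * c2
            if r[key] == 0:
                del r[key]
    return r

def maj(p):
    return sum(i for i in range(1, len(p)) if p[i-1] > p[i])

def exc(p):
    return sum(1 for i, v in enumerate(p, 1) if v > i)

def A(n):
    # A_n^{maj,exc}(q,t) as a polynomial
    out = {}
    for p in permutations(range(1, n + 1)):
        out = padd(out, {(maj(p), exc(p)): 1})
    return out

def qbinom(n, k):
    # q-binomial coefficient via the q-Pascal rule [n,k] = [n-1,k-1] + q^k [n-1,k]
    if k < 0 or k > n:
        return {}
    if k in (0, n):
        return {(0, 0): 1}
    return padd(qbinom(n - 1, k - 1), pmul({(k, 0): 1}, qbinom(n - 1, k)))

def check(n):
    # sum_k A_k [n,k]_q ((tq)^{n-k} - tq) should equal 1 - tq
    total = {}
    for k in range(n + 1):
        factor = padd({(n - k, n - k): 1}, {(1, 1): -1})  # (tq)^{n-k} - tq
        total = padd(total, pmul(pmul(A(k), qbinom(n, k)), factor))
    return total == {(0, 0): 1, (1, 1): -1}
```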
In the process of proving Theorem~\ref{expgenth}, we obtained the
following result.
\begin{thm} \label{stem} Let ${\rm fix}(\sigma)$ denote the number
of fixed points of $\sigma\in \mathfrak S_n$, i.e., the number of $i \in [n]$
such that $\sigma(i) = i$. Then
$$\sum_{\sigma
\in
\mathfrak S_n} q^{\maj(\sigma)}t^{\exc(\sigma)}r^{{\rm fix}(\sigma)} = $$
$$ \sum_{m = 0}^{\lfloor {n \over 2} \rfloor} (tq)^m \!\!\!\!\sum_{\scriptsize
\begin{array}{c} k_0\ge 0 \\ k_1,\dots, k_m \ge 2 \\ \sum k_i = n
\end{array}} \left[\begin{array}{c} n \\k_0,\dots,k_m\end{array}\right]_q\,\,
r^{k_0}
\prod_{i=1}^m [k_i-1]_{tq},$$
where $$\left[\begin{array}{c} n \\k_0,\dots,k_m\end{array}\right]_q = {[n]_q!
\over [k_0]_q![k_1]_q!\cdots [k_m]_q!}.$$
\end{thm}
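Theorem~\ref{stem} can also be checked by machine for small $n$. The Python sketch below (our own verification code) compares both sides as polynomials in $q,t,r$, enumerating the compositions $(k_0,k_1,\dots,k_m)$ on the right-hand side directly and building the $q$-multinomial as a telescoping product of $q$-binomials:

```python
from itertools import permutations

def padd(a, b):
    # add two polynomials; monomial keys are (deg q, deg t, deg r)
    r = dict(a)
    for k, v in b.items():
        r[k] = r.get(k, 0) + v
        if r[k] == 0:
            del r[k]
    return r

def pmul(a, b):
    # multiply two polynomials
    r = {}
    for k1, c1 in a.items():
        for k2, c2 in b.items():
            key = tuple(x + y for x, y in zip(k1, k2))
            r[key] = r.get(key, 0) + c1 * c2
            if r[key] == 0:
                del r[key]
    return r

ONE = {(0, 0, 0): 1}

def lhs(n):
    # sum over S_n of q^maj t^exc r^fix
    out = {}
    for p in permutations(range(1, n + 1)):
        maj = sum(i for i in range(1, n) if p[i-1] > p[i])
        exc = sum(1 for i, v in enumerate(p, 1) if v > i)
        fix = sum(1 for i, v in enumerate(p, 1) if v == i)
        out = padd(out, {(maj, exc, fix): 1})
    return out

def qbinom(n, k):
    # q-binomial via the q-Pascal rule, as a polynomial in q
    if k < 0 or k > n:
        return {}
    if k in (0, n):
        return ONE
    return padd(qbinom(n - 1, k - 1), pmul({(k, 0, 0): 1}, qbinom(n - 1, k)))

def comps(total, parts):
    # ordered tuples (k_1, ..., k_parts), each >= 2, summing to total
    if parts == 0:
        return [[]] if total == 0 else []
    return [[k] + rest for k in range(2, total - 2 * (parts - 1) + 1)
            for rest in comps(total - k, parts - 1)]

def rhs(n):
    out = {}
    for m in range(n // 2 + 1):
        for k0 in range(n + 1):
            for ks in comps(n - k0, m):
                term = pmul({(m, m, k0): 1}, qbinom(n, k0))  # (tq)^m r^{k0} [n,k0]_q
                rem = n - k0
                for ki in ks:
                    term = pmul(term, qbinom(rem, ki))
                    rem -= ki
                    # [k_i - 1]_{tq} = 1 + tq + ... + (tq)^{k_i - 2}
                    term = pmul(term, {(i, i, 0): 1 for i in range(ki - 1)})
                out = padd(out, term)
    return out
```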
In the next section we describe the techniques that we used to
prove these theorems. They involve an interesting class of
symmetric functions and a symmetric function identity which
generalizes Theorem~\ref{expgenth}. We prove this identity by
devising an interesting analog of a necklace construction of
Gessel and Reutenauer \cite{gr} and by generalizing a bijection
of Stembridge \cite{stem1}.
In Section~\ref{rep} we discuss a connection with two graded
representations of the symmetric group, which turn out to be
isomorphic. We show that a specialization of the Frobenius
characteristic of these representations yields
$A_n^{\maj,\exc}(q,t)$. One of the representations is the
representation of the symmetric group on the cohomology of the
toric variety associated with the Coxeter complex of the symmetric
group. This representation was studied by Procesi \cite{pr},
Stanley \cite{st2}, Stembridge \cite{stem1}, \cite{stem2}, and
Dolgachev and Lunts \cite{dl}. The other representation is the
representation of the symmetric group on the homology of maximal
intervals of a certain intriguing poset introduced by Bj\"orner
and Welker \cite{bw} in their study of connections between poset
topology and commutative algebra. In fact, our study of the
latter representation is what led us to discover formula
(\ref{expgenth}) and its symmetric function generalization, in the
first place.
Various authors have studied Mahonian (resp. Eulerian) partners to
Eulerian (resp. Mahonian) statistics whose joint distribution is
equal to a known Euler-Mahonian distribution. We mention, for
example, Foata \cite{foa3}, Foata and Zeilberger \cite{fz},
Skandera \cite{sk}, and Clarke, Steingr\'imsson and Zeng
\cite{csz}. In Section~\ref{new} we define a new Mahonian
statistic to serve as a partner for $\des$ in the $(\maj, \exc)$
distribution. We do not have a simple proof of the
equidistribution. We have a highly nontrivial proof which uses
tools from poset topology and the symmetric function results
announced in Sections~\ref{symsec} and \ref{rep}.
Details of the proofs discussed in this announcement, as well as further consequences and open problems, will appear in a forthcoming paper.
\section{Symmetric function generalization} \label{symsec}
In this section we present a symmetric function generalization of
Theorem~\ref{expgenth}.
Let
$$H(z) = H(\mathbf x, z) := \sum_{n\ge 0} h_n(\x) z^n,$$
where $h_n(\mathbf x)$ denotes the complete homogeneous symmetric
function in the indeterminates $\mathbf x = (x_1,x_2,\dots)$, that
is $$h_n(\mathbf x) := \sum_{1 \le i_1 \le i_2 \le \dots \le i_n}
x_{i_1}x_{i_2} \dots x_{i_n}$$ for $n \ge 1$, and $h_0 =1$. By
setting $x_i := q^{i-1}$, for all $i$, and $z := z(1-q)$ in
$H(\x,z)$, one obtains $\exp_q(z)$; see \cite{st3}. It follows
that \begin{equation}\label{Heq} \left .{(1-t) H(\x,z) \over
H(\x,zt) -tH(\x,z)} \right |_{\scriptsize \begin{array}{l}
x_i:=q^{i-1} \\ z:= z(1-q) \end{array}} = {(1-t) \exp_q(z) \over
\exp_q(zt) -t\exp_q(z)} .\end{equation}
We will construct for each $n,j\ge 0$, a quasisymmetric function $Q_{n,j}(\x)$ whose
generating function $\sum_{n,j \ge 0} Q_{n,j}(\x) t^j z^n$
specializes to
$$\sum_{n\ge 0} \sum_{\sigma \in \mathfrak S_n} q^{\maj(\sigma) - \exc(\sigma)} t^{\exc(\sigma)} {z^n \over [n]_q!}$$
when we set $x_i := q^{i-1}$ and $z := z(1-q)$. Thus by taking
specializations of both sides of (\ref{symgen}) below and setting
$t:=tq$, we obtain (\ref{expgeneq}).
For $\sigma \in \mathfrak S_n$, let $\bar \sigma$ be the barred word obtained from
$\sigma$ by placing a bar above each {excedance}. For example, if $\sigma
= 531462$ then {$\bar \sigma = \bar 5 \bar 3 14 \bar 6 2$.} View $\bar
\sigma$ as a word over the ordered alphabet $$\{ \bar 1 < \bar 2 < \dots <
\bar n < 1 < 2 < \dots < n\}.$$ We extend the definition of
descent set from permutations to words $w $ of length $n$ over
an ordered alphabet by letting
$$\Des(w) := \{i \in [n-1] : w_i > w_{i+1}\},$$
where $w_i$ is the $i$th letter of $w$. Now define the
excedance-descent set of a permutation $\sigma \in \mathfrak S_n$ to be
$$\Exd(\sigma) := \Des(\bar \sigma).$$
For example,
$\Exd({531462}) = \Des({\bar 5} \bar 3 14{\bar 6 2}) = \{1,4\}$.
The interesting thing about $\Exd$ is that for all $\sigma \in \mathfrak S_n$,
\begin{equation}\label{exd} \sum_{i \in \Exd(\sigma)} i = \maj(\sigma) - \exc(\sigma).\end{equation}
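Identity (\ref{exd}) is easy to verify by machine. In the Python sketch below (our illustration), a barred letter $\bar v$ is encoded as the pair $(0,v)$ and an unbarred $v$ as $(1,v)$, so that tuple comparison realizes the order $\bar 1 < \dots < \bar n < 1 < \dots < n$:

```python
from itertools import permutations

def maj(p):
    return sum(i for i in range(1, len(p)) if p[i-1] > p[i])

def exc(p):
    return sum(1 for i, v in enumerate(p, 1) if v > i)

def Exd(p):
    # bar each excedance: (0, v) = barred v, (1, v) = unbarred v
    w = [(0, v) if v > i else (1, v) for i, v in enumerate(p, 1)]
    # descent set of the barred word under bar-1 < ... < bar-n < 1 < ... < n
    return {i for i in range(1, len(w)) if w[i-1] > w[i]}
```

For $\sigma = 531462$ this gives $\Exd(\sigma)=\{1,4\}$, and $1+4 = 8-3 = \maj(\sigma)-\exc(\sigma)$.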
For $S \subseteq [n-1]$ and $n \ge 1$, define the quasisymmetric
function $$F_{S,n}(x_1,x_2,\dots):= \sum_{\scriptsize
\begin{array}{c} i_1 \ge \dots \ge i_n \\ j \in S \Rightarrow i_j
> i_{j+1} \end{array}} x_{i_1} \dots x_{i_n},$$ and let
$F_{\emptyset, 0} = 1$.
A basic result in Gessel's theory of quasisymmetric functions
(see, e.g., \cite{st3}) is that
$$F_{S,n}(1,q,q^2,\dots) = {q^{\sum_{s \in S} s} \over (1-q) (1-q^2) \dots (1-q^n)}.$$
Hence it follows from (\ref{exd}) that for all $\sigma \in \mathfrak S_n$,
$$F_{\Exd(\sigma),n} (1,q,q^2,\dots) = {q^{\maj(\sigma) -\exc(\sigma) } \over (1-q) (1-q^2) \dots (1-q^n)}.$$
For any $n,j \ge 0$, let $$ Q_{n,j} = Q_{n,j}(\x):=
\sum_{\scriptsize \begin{array}{c} \sigma \in \mathfrak S_n \\ \exc(\sigma) = j
\end{array}} F_{\Exd(\sigma),n}(\x).$$ By taking the specialization
of the generating function we get,
\begin{equation}
\label{Qeq}\left . \sum_{n,j \ge 0} Q_{n,j}(\x) t^j z^n \right
|_{\scriptsize \begin{array}{l} x_i:=q^{i-1} \\ z:= z(1-q)
\end{array}} = \sum_{n\ge 0} \sum_{\sigma \in \mathfrak S_n} q^{\maj(\sigma) -
\exc(\sigma)} t^{\exc(\sigma)} {z^n \over [n]_q!} .\end{equation}
It follows from (\ref{Heq}) and (\ref{Qeq}) that by setting
$x_i:=q^{i-1} , z:= z(1-q)$ and $t:=tq$ in the following result we
obtain Theorem~\ref{expgenth}.
\begin{thm} \label{symgenth}
\begin{equation} \label{symgen}
\sum_{n,j \ge 0} Q_{n,j} t^j z^n = {(1-t) H(z) \over H(zt) -tH(z)}.
\end{equation}
\end{thm}
The proof of this theorem requires an alternative characterization
of $Q_{n,j}$ which involves an interesting analog of a
construction of Gessel and Reutenauer \cite{gr}. Gessel and
Reutenauer deal with circular words over the alphabet of positive
integers. We consider circular words over the alphabet of
barred and unbarred positive integers. For each such circular
word and any starting position, one gets an infinite word by
reading the circular word in a clockwise direction. If one gets a
distinct infinite word for each starting position, then the
circular word is said to be {\em primitive}. For example $(\bar
1,1,1)$ is primitive while $(\bar 1, 2, \bar 1, 2)$ is not. The
{\em absolute value} of a letter is the letter obtained by erasing
the bar if there is one. We will say that a primitive circular
word is a {\em necklace} if each letter that is followed
(clockwise) by a letter greater in absolute value is barred and
each letter that is followed by a letter smaller in absolute value
is unbarred. Letters that are followed by letters equal in
absolute value have the option of being barred or not. A circular
word consisting of one barred letter is not a necklace. For example the following
circular words are necklaces: $$(\bar 1 , 3, 1, \bar 1, 2, 2),
(\bar 1 , 3, \bar 1, \bar 1, 2, 2), (\bar 1 , 3, 1, \bar 1, \bar
2, 2), (\bar 1 , 3, \bar 1, \bar 1, \bar 2, 2), (3),$$ while
$(\bar 1 , \bar 3, 1, 1, 2, \bar 2)$ and $(\bar 3)$ are not.
Again we will need to order the barred letters, but this time by
$$\bar 1 <1 < \bar 2 < 2 < \dots .$$
We order the necklaces by lexicographic order of the
lexicographically smallest infinite word obtained by reading the
necklace in a clockwise direction at some starting position. An
{\em ornament} is a weakly decreasing finite sequence of
necklaces. The type $\lambda(R)$ of an ornament $R$ is the
partition whose parts are the sizes of the necklaces in $R$. The
weight $w(R)$ of an ornament $R$ is the product of the weights of
the letters of $R$, where the weight of the letter $a$ is the
indeterminate $x_{|a|}$, where $|a|$ denotes the absolute value of
$a$. For example
$$\lambda( (\bar 1, 2, 2),(\bar 1, \bar 2, 3,3,2)) = (5,3)$$ and
$$w( (\bar 1, 2, 2),(\bar 1, \bar 2, 3,3,2)) = x_1^2 x_2^4 x_3^2.$$
For each partition $\lambda$ and nonnegative integer $j$, let
$\mathfrak R_{\lambda,j}$ be the set of ornaments of type
$\lambda$ with $j$ bars.
\begin{thm} \label{ornth}
For all $\lambda\vdash n$ and $j= 0,1,\dots,n-1$, let
$$Q_{\lambda,j} = \sum_{\sigma} F_{\Exd(\sigma),n}$$ summed over all
permutations of cycle type $\lambda$ with $j$ excedances. Then
$$Q_{\lambda,j} = \sum_{R \in \mathfrak R_{\lambda,j}} w(R).$$
\end{thm}
This theorem is proved via a bijection between ornaments of type
$\lambda$ and permutations of cycle type $\lambda$ paired with
``compatible'' weakly increasing sequences of positive integers.
The theorem has several interesting consequences. For one thing,
it can be used to prove that the quasisymmetric functions
$Q_{\lambda,j}$ and $Q_{n,j}$ are actually symmetric. It also
has the following useful consequence.
\begin{cor}
\label{dercor} Let $$\tilde Q_{n,j} =
\sum_{\scriptsize\begin{array}{c} \sigma \in \mathcal D_n\\ \exc(\sigma) =
j \end{array}} F_{\Exd(\sigma),n}$$ where $\mathcal D_n$ is the set
of derangements in $\mathfrak S_n$. Then $$ Q_{n,j} = \sum_{k = 0}^n h_k
\tilde Q_{n-k,j}.$$
\end{cor}
It follows from Corollary~\ref{dercor} that Theorem~\ref{symgenth}
is equivalent to
\begin{equation} \label{symgen2}
\sum_{n,j \ge 0} \tilde Q_{n,j} t^j z^n = {1-t \over H(zt) -tH(z)},
\end{equation}
which in turn, is equivalent to the recurrence relation
\begin{equation} \label{rr}
\tilde Q_{n,j} = \sum_{\scriptsize \begin{array}{c}0 \le m \le n-2
\\ j+m-n < i < j \end{array}} \tilde Q_{m,i} h_{n-m}.
\end{equation}
We establish this recurrence relation by introducing another type
of configuration, closely related to ornaments.
Define a {\em banner} $B$ to be a word over the alphabet of
barred and unbarred positive integers, where $B(i) $ is barred if
$|B(i)| < |B(i+1)|$ and $B(i)$ is unbarred if $|B(i)| > |B(i+1)|$
or $i = {\rm length}(B)$. All other letters have the option of being
barred. The weight of a banner is the product of the weights of
its letters.
A {\em Lyndon word} over an ordered alphabet is a word that is
strictly lexicographically smaller than all its circular rearrangements. A
{\em Lyndon factorization} of a word over an ordered alphabet is a
factorization into a weakly lexicographically decreasing sequence
of Lyndon words. It is a result of Lyndon \cite{l} that every
word has a unique Lyndon factorization. The Lyndon type of a word
is the partition whose parts are the lengths of the words in its
Lyndon factorization. For each partition $\lambda$ and positive
integer $j$, let $\mathfrak B_{\lambda,j}$ be the set of banners
with $j$ bars whose Lyndon type is $\lambda$.
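The Lyndon factorization can be computed in linear time by Duval's algorithm (the Chen--Fox--Lyndon factorization). A standard Python sketch of that algorithm, included here only for illustration:

```python
def lyndon_factorization(s):
    # Duval's algorithm: factor s into a weakly decreasing
    # sequence of Lyndon words, in O(len(s)) time
    factors, i, n = [], 0, len(s)
    while i < n:
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors
```

For example, `lyndon_factorization("banana")` returns `['b', 'an', 'an', 'a']`, a weakly decreasing sequence of Lyndon words, so the Lyndon type of `"banana"` is the partition $(2,2,1,1)$.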
By turning the Lyndon words in the Lyndon factorization of a
banner into circular words, we obtain an ornament. This map from
banners to ornaments is the bijection whose existence is
asserted in the following proposition.
\begin{prop} For any partition $\lambda$ and nonnegative integer $j$, there is a weight-preserving bijection from
$\mathfrak B_{\lambda,j}$ to $\mathfrak R_{\lambda,j}$. \end{prop}
\begin{cor} \label{ban} Let $\tilde {\mathfrak B}_{n,j}$ be the
set of banners of length $n$ with $j$ bars whose Lyndon type has
no parts of size $1$. Then
$$\tilde Q_{n,j} = \sum_{B \in \tilde {\mathfrak B}_{n,j} }w(B). $$
\end{cor}
Define a {\em marked sequence} $(\alpha,j)$ to be a weakly
increasing finite sequence $\alpha$ of positive integers together
with an integer $j$ such that $1 \le j \le \mbox
{length}(\alpha)-1$. Let $\mathfrak M_n$ be the set of marked
sequences of length $n$ and let $\tilde{\mathfrak B}_n$ be the set
of banners of length $n$ whose Lyndon type has no parts of size
$1$.
\begin{thm} \label{bij} For all $n > 0$, there is a bijection
$$\gamma: \tilde {\mathfrak B}_{n} \to \bigcup_{m>0}
\tilde{\mathfrak B}_{m} \times \mathfrak M_{n-m},$$ such that if
$\gamma(B) = (B^\prime,(\alpha,j)) $ then $$w(B) =
w(B^\prime)w(\alpha)$$ and $$\bbar(B) = \bbar(B^\prime) + j,$$
where $\bbar(B)$ denotes the number of bars of $B$. \end{thm}
We will not describe the bijection here except to say that, when
restricted to banners with distinct letters, it reduces to a
bijection from permutations to marked words that Stembridge
\cite{stem1} constructed to study the representation of the
symmetric group on the cohomology of the toric variety associated
with the type A Coxeter complex. (We discuss this representation
in Section~\ref{rep}.) Banners in $ \tilde {\mathfrak B}_{n}$
admit a certain kind of decomposition, called a decreasing
decomposition in \cite{dw}. The decreasing decomposition plays the
role in our bijection that the cycle decomposition of permutations
plays in Stembridge's bijection. Corollary~\ref{ban} and
Theorem~\ref{bij} are all that is needed to establish the
recurrence relation (\ref{rr}), which yields our main result,
Theorem~\ref{symgenth}.
\section{Some Representation Theoretic Consequences} \label{rep}
The Frobenius characteristic $\ch$ is a fundamental homomorphism
from the ring of representations of symmetric groups to the ring
of symmetric functions. In this section we present two
representations whose Frobenius characteristic is $Q_{n,j}$.
The first representation involves the toric variety associated
with the Coxeter complex of a Weyl group.
Let $X_n$ be the
toric variety associated with the Coxeter complex of $\mathfrak S_n$. The
action of $\mathfrak S_n$ on $X_n$ induces a representation of $\mathfrak S_n$ on
the cohomology $H^{2j}(X_n)$ for each $j = 0,\dots,n-1$. (Cohomology in odd degree vanishes.)
Stanley \cite{st2}, using a formula of Procesi \cite{pr}, proves
that
$$\sum_{n\ge 0} \sum_{j=0}^{n-1} \ch H^{2j}(X_n)\,t^{j} z^n
= {(1-t) H(z) \over H(zt) -tH(z)}. $$ Combining this with
Theorem~\ref{symgenth} yields the following conclusion.
\begin{thm} For all $j = 0,1, \dots, n-1$,
$$\ch H^{2j}(X_n)=Q_{n,j}.$$
\end{thm}
The second representation involves poset topology, a subject in
which topological properties of a simplicial complex associated
with a poset are studied; see \cite{w1}. The faces of the
simplicial complex, called the order complex of the poset, are the
chains of the poset. Here we consider the homology of the order
complex of the Rees product of two simple posets. The Rees product
is a poset construction recently introduced by Bj\"orner and
Welker \cite{bw} in their study of relations between poset
topology and commutative algebra.
\begin{defn} Let $P$ and $Q$ be pure (ranked) posets with respective rank
functions $r_P$ and $r_Q$. The Rees product $P*Q$ of $P$ and $Q$ is defined as
follows:
$$P*Q :=\{(p,q) \in P \times Q : r_P(p) \ge r_Q(q)\}$$
with order relation given by $(p_1,q_1) \le (p_2,q_2)$ if the
following hold: \bit \item $ p_1 \le_P p_2 $ \item $q_1 \le_Q q_2$
\item $r_P(p_2) -r_P(p_1) \ge r_Q(q_2) -r_Q(q_1)$ \eit
\end{defn}
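As a concrete illustration (ours, not from the paper), the following Python sketch builds $(B_3 \setminus \{\emptyset\}) * C_3$, taking $r_P(S)=|S|-1$ on nonempty subsets and $r_Q(j)=j-1$ on the chain, and finds the maximal elements:

```python
from itertools import combinations

n = 3
# nonempty subsets of [n]: the elements of B_n \ {emptyset}
subsets = [frozenset(c) for r in range(1, n + 1)
           for c in combinations(range(1, n + 1), r)]

# Rees product condition r_P >= r_Q, with r_P(S) = |S| - 1, r_Q(j) = j - 1
elems = [(S, j) for S in subsets for j in range(1, n + 1) if len(S) - 1 >= j - 1]

def leq(a, b):
    # the Rees product order relation from the definition above
    (S1, j1), (S2, j2) = a, b
    return (S1 <= S2 and j1 <= j2
            and (len(S2) - len(S1)) >= (j2 - j1))

maximal = [e for e in elems if not any(e != f and leq(e, f) for f in elems)]
```

The computation confirms that the maximal elements are exactly the pairs $([3], j)$ for $j = 1, 2, 3$, as asserted for general $n$ in the text.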
Let $B_n$ be the Boolean algebra (i.e., the lattice of subsets of
$[n]$ ordered by inclusion) and let $C_n$ be the chain $1<2< \dots
< n$. The maximal elements of $(B_n \setminus \{\emptyset\}) *
C_n $ are of the form $([n],j)$, where $j=1,\dots, n$. Let
$I_{n,j}$ be the set of elements of $(B_n \setminus
\{\emptyset\}) * C_n $ that are smaller than $([n],j)$ and let
$\tilde H_{i}(I_{n,j})$ be the reduced simplicial
homology of the order complex of $I_{n,j}$. It follows from
results of Bj\"orner and Welker that homology vanishes below the
top dimension $n-2$. The symmetric group $\mathfrak S_n$ acts on $I_{n,j}$ in an obvious way and this induces a representation on $\tilde H_{n-2}( I_{n,j})$. We prove the following result using
techniques from poset topology.
\begin{thm} \label{rees}
$$1+ \sum_{n\ge 1} \sum_{j=1}^n \ch (\tilde H_{n-2}(I_{n,j})
\otimes {\rm sgn}) \,t^{j-1} z^n = {(1-t) H(z) \over H(zt) -tH(z)},$$
where ${\rm sgn}$ denotes the sign representation. Consequently for
all $n,j$, $$\ch (\tilde H_{n-2}( I_{n,j})\otimes {\rm sgn}) =
Q_{n,j-1}$$ and as $\mathfrak S_n$-modules $$\tilde H_{n-2}(
I_{n,j})\otimes {\rm sgn} \cong H^{2(j-1)}(X_n).$$
\end{thm}
We conjecture that for all $\lambda$ and $j$, the symmetric
function $Q_{\lambda,j}$ is also the Frobenius characteristic of
some representation. One consequence of Theorem~\ref{ornth} is
that $Q_{\lambda,j}$ can be described as a product of plethysms
of symmetric functions of the form $Q_{(n),i}$, where $(n)$
denotes a partition with a single part. Hence if the conjecture
holds for all $Q_{(n),i}$ then it holds in general. We use
ornaments and banners to show that if the conjecture does hold
then the restriction to $\mathfrak S_{n-1}$ of the representation whose
Frobenius characteristic is $Q_{(n),i}$ has Frobenius
characteristic $Q_{n-1,i-1}$.
\section{A new Mahonian statistic} \label{new}
In this section we describe a new Mahonian statistic whose joint
distribution with $\des$ is the same as the joint distribution of
$\maj$ and $\exc$.
An {\em admissible inversion} of $\sigma \in \mathfrak S_n$ is a pair
$(\sigma(i),\sigma(j))$ such that the following conditions hold:
\bit
\item $i <j$
\item $\sigma(i) > \sigma(j)$
\item either
\bit
\item[$\circ$] $\sigma(j) < \sigma(j+1)$ or
\item[$\circ$] $\exists k $ such that $i<k<j$ and $\sigma(k) < \sigma(j)$.
\eit
\eit
Let $\ai(\sigma)$ denote the number of admissible inversions of $\sigma$.
Define the statistic $${\aid}(\sigma) := \ai(\sigma) + \des(\sigma).$$
For example, the
admissible inversions of $24153$ are $(2,1), (4,1)$ and $(4,3);$
hence $\aid(24153)= 3+2 = 5$.
\begin{thm} \label{aid} For all $n \ge 1$,
$$\sum_{\sigma \in \mathfrak S_n} q^{\aid(\sigma)} t^{\des(\sigma)}=
\sum_{\sigma \in \mathfrak S_n} q^{\maj(\sigma)} t^{\exc(\sigma)}.$$
\end{thm}
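For small $n$ the theorem can at least be confirmed by brute force. A Python sketch (our illustration) comparing the two joint distributions as multisets:

```python
from itertools import permutations
from collections import Counter

def des_set(p):
    return {i for i in range(1, len(p)) if p[i-1] > p[i]}

def exc(p):
    return sum(1 for i, v in enumerate(p, 1) if v > i)

def ai(p):
    # admissible inversions: i < j, p(i) > p(j), and either p(j) < p(j+1)
    # or some k with i < k < j has p(k) < p(j)
    n, count = len(p), 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            if p[i-1] > p[j-1] and ((j < n and p[j-1] < p[j])
                                    or any(p[k-1] < p[j-1] for k in range(i + 1, j))):
                count += 1
    return count

def aid(p):
    return ai(p) + len(des_set(p))

def joint(n, stat1, stat2):
    # multiset of (stat1, stat2) pairs over S_n
    return Counter((stat1(p), stat2(p)) for p in permutations(range(1, n + 1)))
```

Here `ai((2, 4, 1, 5, 3))` returns `3`, matching the example above.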
We do not have a direct proof of this simple identity except when $t$ or $q$ is 1. Our proof
relies on Theorem~\ref{expgenth}, a $q$-analog of
Theorem~\ref{rees}, and techniques from poset topology. We
consider the Rees product $(B_n(q) \setminus \{(0)\}) * C_n$,
where $B_n(q)$ is the lattice of subspaces of the vector space
$\F_q^n$. Let $I_{n,j}(q)$ be the set of elements in $(B_n(q)
\setminus \{(0)\}) * C_n $ that are less than the maximal element
$(\F_q^n,j)$. We first use a well-known tool from poset topology,
called lexicographic shellability, to prove that
\begin{equation}\label{lex} \dim \tilde H_{n-2}(I_{n,j}(q))
={\sum_{\scriptsize{\begin{array}{c} \sigma \in \mathfrak S_n
\\
\des(\sigma) = j-1\end{array}}} \hspace{-.2in} q^{\ai(\sigma)}}.
\end{equation}
We then use other tools from poset topology to prove a theorem
analogous to Theorem~\ref{rees} which states that
\begin{equation}
1+ \sum_{n\ge 1} \sum_{j=1}^n\dim \tilde H_{n-2}(I_{n,j}(q))
t^{j-1}{z^n \over [n]_q!} = {(1-t) \exp_q(z) \over \exp_q(zt) - t
\exp_q(z)} .\end{equation} Theorem~\ref{aid} now follows from
Theorem~\ref{expgenth} and equation (\ref{lex}).
\section{Acknowledgements}
The research presented here began while both authors were visiting the Mittag-Leffler Institute as participants in a combinatorics program organized by Anders Bj\"orner and Richard Stanley. We thank the Institute for its hospitality and support. We are also grateful to Ira Gessel for some very useful
discussions.
\begin{thebibliography}{xxxx}
\bibitem{br} D. Beck and J.B. Remmel, {\it Permutation enumeration of the symmetric group and the combinatorics of symmetric functions}, J. Combin. Theory Ser. A 72 (1995), 1--49.
\bibitem{bw} A. Bj\"orner and V. Welker,
{\it Segre and Rees products of posets, with ring-theoretic applications},
J. Pure Appl. Algebra {\bf 198} (2005), 43--55.
\bibitem{csz} R.J. Clarke, E. Steingr\'imsson, and J. Zeng, {\it New
Euler-Mahonian statistics on permutations and words}, Adv. in Appl.
Math. {\bf 18} (1997), 237--270.
\bibitem{dw} J. D\'esarm\'enien and M.L. Wachs,
{\it Descent classes of permutations with a given number of fixed points},
J. Combin. Theory Ser. A {\bf 64} (1993), no. 2, 311--328.
\bibitem{dl} I. Dolgachev and V. Lunts,
{\it A character formula for the representation of a Weyl group in the
cohomology of the associated toric variety}, J. Algebra {\bf 168}
(1994), 741--772.
\bibitem{foa1} D. Foata,
{\it Sur un \'enonc\'e de MacMahon},
C. R. Acad. Sci. Paris {\bf 258} (1964), 1672--1675.
\bibitem{foa2} D. Foata,
{\it On the Netto inversion number of a sequence},
Proc. Amer. Math. Soc. {\bf 19} (1968), 236--240.
\bibitem{foa3} D. Foata, {\it Distributions eul\'eriennes et mahoniennes
sur le groupe des permutations}, NATO Adv. Study Inst. Ser., Ser.
C: Math. Phys. Sci., 31, Higher combinatorics (Proc. NATO Advanced
Study Inst., Berlin, 1976), pp. 27--49, Reidel, Dordrecht-Boston,
Mass., 1977.
\bibitem{foa4} D. Foata, {\it Rearrangements of words}, in M. Lothaire, Combinatorics on Words, Encyclopedia of Math. and its Appl., Vol. 17, Addison-Wesley, Reading, MA, 1983.
\bibitem{fh} D. Foata and G.-N. Han,
{\it q-Series in Combinatorics; Permutation Statistics}, Lecture Notes, to appear.
\bibitem{fs} D. Foata and M.-P. Sch\"utzenberger, {\it Th\'eorie g\'eom\'etrique des polyn\^omes eul\'eriens}, Lecture Notes in Mathematics, Vol. 138, Springer-Verlag, Berlin-New York, 1970.
\bibitem{fs2} D. Foata and M.-P. Sch\"utzenberger,
{\it Major index and inversion number of permutations},
Math. Nachr. {\bf 83} (1978), 143--159.
\bibitem{fz} D. Foata and D. Zeilberger, {\it Denert's permutation
statistic is indeed Euler-Mahonian}, Stud. Appl. Math. {\bf 83}
(1990), 31--59.
\bibitem{gg} A.M. Garsia and I. Gessel,
{\it Permutation statistics and partitions},
Adv. in Math. {\bf 31} (1979), 288--305.
\bibitem{gr} I.M. Gessel and C. Reutenauer,
{\it Counting permutations with given cycle structure and descent set},
J. Combin. Theory Ser. A {\bf 64} (1993), 189--215.
\bibitem{kn} D. Knuth, The Art of Computer Programming, Vol. 3 Sorting and Searching, Second Edition, Reading, Massachusetts: Addison-Wesley, 1998.
\bibitem{l} M. Lothaire, Combinatorics on Words, in Encyclopedia of Math. and its Appl., Vol. 17, Addison-Wesley, Reading, MA, 1983.
\bibitem{mac1} P.A. MacMahon, Combinatory Analysis, 2 volumes,
Cambridge University Press, London, 1915-1916. Reprinted by
Chelsea, New York, 1960.
\bibitem{mac2} P.A. MacMahon, {\it The indices of permutations and the
derivation therefrom of functions of a single variable associated
with the permutations of any assemblage of objects}, Amer. J. Math.
{\bf 35} (1913), no. 3, 281--322.
\bibitem{pr} C. Procesi,
{\it The toric variety associated to Weyl chambers},
Mots, 153--161,
Lang. Raison. Calc.,
Herm\`es, Paris, 1990.
\bibitem{rrw} A. Ram, J. Remmel, and T. Whitehead, {\it Combinatorics of the $q$-basis of symmetric functions}, J. Combin. Theory Ser. A {\bf 76} (1996), 231--271.
\bibitem{ra} D. Rawlings,
{\it Enumeration of permutations by descents, idescents, imajor index, and basic components},
J. Combin. Theory Ser. A {\bf 36} (1984), 1--14.
\bibitem{rod} O. Rodrigues, {\it Note sur les inversions, ou derangements
produits dans les permutations}, Journal de Mathematiques {\bf 4} (1839),
236--240.
\bibitem{sk} M. Skandera,
{\it An Eulerian partner for inversions},
S\'em. Lothar. Combin. {\bf 46} (2001/02), Art. B46d, 19 pp. (electronic).
\bibitem{st1} R.P. Stanley,
{\it Binomial posets, M\"obius inversion, and permutation enumeration},
J. Combinatorial Theory Ser. A {\bf 20} (1976), 336--356.
\bibitem{st2} R.P. Stanley, {\it Log-concave and unimodal sequences in
algebra, combinatorics, and geometry}, Graph theory and its
applications: East and West (Jinan, 1986), 500--535, Ann. New
York Acad. Sci., 576, New York Acad. Sci., New York, 1989.
\bibitem{st3} R.P. Stanley, Enumerative combinatorics. Vol. 2.
Cambridge Studies in Advanced Mathematics, 62.
Cambridge University Press, Cambridge, 1999.
\bibitem{stem1} J.R. Stembridge, {\it Eulerian numbers, tableaux, and the
Betti numbers of a toric variety}, Discrete Math. {\bf 99} (1992),
307--320.
\bibitem{stem2} J.R. Stembridge, {\it Some permutation representations of
Weyl groups associated with the cohomology of toric varieties},
Adv. Math. {\bf 106} (1994), 244--301.
\bibitem{w1} M.L. Wachs, {\it Poset topology: tools and applications}, to
appear as chapter of Geometric Combinatorics volume of PCMI
lecture notes serries. ArXiv math.CO/0602226.
\bibitem{wa2} M.L. Wachs,
{\it An involution for signed Eulerian numbers},
Discrete Math. {\bf 99} (1992), 59--62.
\end{thebibliography}
\end{document}
\begin{document}
\def\ket#1{|#1\rangle}
\def\bra#1{\langle#1|}
\def\av#1{\langle#1\rangle}
\title{Two-level atom excitation probability for single- and $N$-photon wavepackets}
\author{Hemlin Swaran Rag and Julio Gea-Banacloche}
\affiliation{Department of Physics, University of Arkansas, Fayetteville, AR 72701}
\email[]{[email protected]}
\date{\today}
\begin{abstract}
We study how the transient excitation probability of a two-level atom by a quantized field depends on the temporal profile of the incident pulse, in the presence of external losses, for both coherent and Fock states, and in two complementary limits: when the pulse contains only one photon (on average), and when the number of photons $N$ is large. For the latter case we derive analytical expressions for the scaling of the excitation probability with $N$ that can be easily evaluated for any pulse shape.
\end{abstract}
\maketitle
\section{Introduction and motivation}
Unlike its steady-state counterpart, the transient excitation probability of a single two-level atom interacting with a quantized field can, in principle, approach unity, for times smaller than the excited-state lifetime. Such perfect or near-perfect excitation could be useful in, for instance, quantum information processing, as a way to implement a single-atom switch, or a logical gate. The case in which the field consists of a single photon, in particular, has generated a fair amount of interest over the years. Schemes involving one-dimensional waveguides \cite{shenfan1,scarani1,chen} as well as free-space interaction \cite{leuchs1,leuchs2,leuchs3} have been considered recently.
An important result from these studies is the realization that the excitation probability depends critically, not just on the photon pulse's transverse-mode profile (in the case of free-space excitation), but also on its temporal profile. In particular, it has been known for a long time \cite{leuchs2} that the only way to achieve unit excitation probability is to use a wavepacket that is the time-reversed version of the one emitted by the atom when it decays spontaneously, which is to say, in free space, a ``rising exponential'' pulse (the notion of using a time-reversed pulse was first introduced, to our knowledge, in the context of cavity QED \cite{cirac1}). A number of schemes to generate such pulses have been proposed and partly demonstrated in recent years \cite{du1,leuchs4,leuchs5,leuchs6,martinis,du2,kurtsiefer1,kurtsiefer2}.
Our goal in this paper is twofold. In the first part (Section II), we use the model developed in \cite{scarani1,scarani2} to study theoretically the excitation probability for a two-level atom by a single-photon pulse, as a function of the temporal profile of the pulse, in the presence of external losses. These ``losses'' can be used to model what happens when the coupling to the spatial profile of the incident pulse is not perfect, that is to say, the atom is coupled to, and can decay into, other spatial field modes. Our results are therefore quite general and can apply both to the waveguide and free-space configurations. Besides the inclusion of losses, we also consider some novel temporal profiles, and generally carry our analytical calculations a bit farther than most previous studies, although in the end the final optimization of the pulse bandwidth typically needs to be done numerically.
In the second part (Section III), we turn our attention to the problem of excitation by multiphoton pulses, again in the presence of external losses and for various temporal profiles, and explore how the maximum excitation probability scales asymptotically with the number of photons in the pulse. The motivation for this comes initially from a consideration of the minimum energy requirements for quantum logic \cite{geabanaPRL}. It also complements the research reported in the first part, inasmuch as some schemes to generate single-photon rising exponential pulses may involve filtering, or otherwise throwing away a potentially large number of photons, and may only approximately succeed at generating the required shape. At that point, it makes sense to explore the asymptotic behavior of the excitation probability to ascertain whether one might not more efficiently resort to direct excitation of the atom by a more conventional, multi-photon pulse. (Of course, one does not have that luxury when the single photon is itself a qubit, or carrier of quantum information, but this does not always need to be the case.)
Throughout the paper, we consider both multimode Fock states and coherent states, for completeness, although in practice Fock states make more sense in the context of single-photon pulses, and coherent states in the context of multiphoton pulses. For the latter, we will present an analytical treatment that makes it straightforward to calculate the asymptotic (large $\bar n$), optimized excitation probability, for an arbitrary pulse shape.
\section{Single-photon results}
\subsection{Fock states}
\subsubsection{General equations, and rising exponential pulse}
For a single-photon pulse in a state of arbitrary temporal profile $f(t)$ (assuming $\int_{-\infty}^\infty |f(t)|^2\, dt = 1$), the on-resonance equations for the excitation of a two-level atom are:
\begin{align}
\dot P_e &= -(\Gamma_P+ \Gamma_B) P_e - \sqrt{\Gamma_P} f(t) \Sigma \cr
\dot \Sigma &= -\frac 1 2(\Gamma_P+ \Gamma_B) \Sigma - 2\sqrt{\Gamma_P} f(t)
\label{e1}
\end{align}
Here we denote by $\Gamma_P$ the coupling to the spatial modes that make up the incoming pulse, and by $\Gamma_B$ the coupling to other, ``bath'' modes (which results in loss of a photon from the system). $P_e$ is the excitation probability, and $\Sigma$ is the matrix element of the atomic dipole moment in between the states with 1 and 0 photons. For a derivation of these equations, see \cite{scarani1,domokos} (note that our $\Sigma$ is the sum of the $\sigma_+$ and $\sigma_-$ variables in \cite{scarani1}; also, we are assuming $f(t)$ is real, which is a natural assumption on resonance).
The system (\ref{e1}) is simple enough to allow for a general, formal solution for arbitrary $f(t)$: the equation for $\Sigma$ can be immediately integrated, and then we have for $P_e$ (assuming the atom starts in the ground state)
\begin{align}
P_e(t) = 2\Gamma_P &\int_{-\infty}^t e^{-(\Gamma_P+ \Gamma_B)(t-t^\prime)}f(t^\prime)\, dt^\prime \cr
&\times\int_{-\infty}^{t^\prime} e^{-(\Gamma_P+ \Gamma_B)(t^\prime-t^{\prime\prime})/2}f(t^{\prime\prime})\, dt^{\prime\prime}
\label{e2a}
\end{align}
Inspection (or an integration by parts) shows that this can be rewritten in the alternative form
\begin{equation}
P_e(t) = \Gamma_P e^{-(\Gamma_P+\Gamma_B)t} \left(\int_{-\infty}^t e^{(\Gamma_P+\Gamma_B)t^\prime/2} f(t^\prime)\, dt^\prime\right)^2
\label{e2}
\end{equation}
which makes the calculation much simpler, for arbitrary-shaped wavepackets. Equation (\ref{e2}) also allows for a very simple proof that the only wavepacket that can achieve full excitation at any time is a rising exponential (see \cite{leuchs2,shenfan1} for alternative approaches), in the absence of external losses. Consider $P_e$ at an arbitrary time $t=t_0$. We can rewrite Eq.~(\ref{e2}) as
\begin{equation}
P_e(t_0) = \frac{\Gamma_P}{\Gamma_P+\Gamma_B} \left(\int_{-\infty}^{t_0} u(t) f(t) \, dt \right)^2
\label{e4a}
\end{equation}
where the function $u(t)$, defined as
\begin{equation}
u(t) = {\sqrt{\Gamma_P+\Gamma_B}}\, e^{(\Gamma_P+\Gamma_B)(t-t_0)/2}
\end{equation}
is normalized to unity in the interval $(-\infty,t_0]$. From the Cauchy-Schwarz inequality, it then follows immediately that
\begin{equation}
P_e(t_0) \le \frac{\Gamma_P}{\Gamma_P+\Gamma_B} \int_{-\infty}^{t_0} f(t)^2 \, dt
\label{e6a}
\end{equation}
However, since $f(t)$ is normalized to unity in $(-\infty,\infty)$, it follows that the right-hand side of (\ref{e6a}) can never be equal to 1 unless, first, $\Gamma_B=0$ (no external losses) and, secondly, all of the norm of $f$ is contained in $(-\infty,t_0]$ (otherwise put, the pulse must be over by the time $t_0$). But then, the integral in (\ref{e4a}) is just the inner product of two functions normalized to unity over the interval $(-\infty,t_0]$, and so it can only be equal to 1 (its maximum possible value) if the functions are identical except for an overall sign.
We conclude, then, that $P_e(t_0)$ can only reach a maximum value of
\begin{equation}
P_{e,max} = \frac{\Gamma_P}{\Gamma_P + \Gamma_B} \qquad \text{(rising exponential)}
\label{opt}
\end{equation}
at some time $t_0$, if the excitation pulse has the form
\begin{equation}
f(t) = {\sqrt{\Gamma_P+\Gamma_B}}\, e^{(\Gamma_P+\Gamma_B)(t-t_0)/2}, \qquad \text{$t\le t_0$, 0 if $t>t_0$}
\label{n8}
\end{equation}
When $\Gamma_B =0$, we have $P_{e,max}=1.$
Note that, in general, in the presence of external losses, the optimal duration (bandwidth) of the pulse needs to be adjusted to include the $\Gamma_B$ term (as in Eq.~(\ref{n8})). When this is done, it is evident from Eq.~(\ref{e4a}) that the maximum excitation probability will be found to be equal to the lossless result times the factor $\Gamma_P/(\Gamma_P+\Gamma_B)$.
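Equation (\ref{e2}) and the bound (\ref{opt}) are easy to check numerically. The following Python sketch (illustrative only, not part of the analysis; the parameter values are arbitrary) evaluates the integral in Eq.~(\ref{e2}) for the optimal pulse (\ref{n8}) by the trapezoidal rule:

```python
import math

def pe_eq_e2(t0, f, gp, gb, lo, n=200_000):
    """Trapezoidal evaluation of Eq. (e2):
    P_e(t0) = gp * exp(-G*t0) * (int_{lo}^{t0} exp(G*t/2) f(t) dt)^2, G = gp + gb;
    lo stands in for -infinity and must lie far out in the pulse's tail."""
    G = gp + gb
    h = (t0 - lo) / n
    acc = 0.0
    for i in range(n + 1):
        t = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        acc += w * math.exp(G * t / 2) * f(t)
    return gp * math.exp(-G * t0) * (h * acc) ** 2

gp, gb, t0 = 1.0, 0.5, 0.0      # arbitrary illustrative values
G = gp + gb
f_opt = lambda t: math.sqrt(G) * math.exp(G * (t - t0) / 2)   # Eq. (n8), t <= t0
p_max = pe_eq_e2(t0, f_opt, gp, gb, lo=-40.0 / G)
```

With these values the result is $P_e(t_0)\simeq 2/3 = \Gamma_P/(\Gamma_P+\Gamma_B)$, as Eq.~(\ref{opt}) predicts.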
It is a relatively straightforward matter to use Eq.~(\ref{e2}), or equivalently (\ref{e4a}), to derive the excitation probability for other pulse shapes, to see how close they may get to the optimal result (\ref{opt}). We present several of these results explicitly below. (Some of these, in the lossless case, were previously presented in \cite{scarani1}, where they appear to have been obtained by numerical integration of the equations (\ref{e1}). This had, in particular, the curious consequence that the maximum excitation probability reported for the optimal rising exponential pulse was $0.995$ instead of 1.)
\subsubsection{Square pulse}
For a square pulse of duration $T$: $f(t) = 1/\sqrt T$, $0<t<T$, the first of Eqs.~(\ref{e1}) shows that $P_e$ starts to decay as soon as the pulse is over, so to find the maximum we may confine ourselves to the region $0<t<T$, in which case we get from Eq.~(\ref{e2}),
\begin{equation}
P_e(t) = \frac{4 \Gamma_P}{(\Gamma_B+\Gamma_P)^2 T} \left(1-e^{-(\Gamma_B+\Gamma_P)t/2}\right)^2
\end{equation}
This is maximum for $t=T$, and then we can optimize for $T$. Numerically we find $T_\text{opt} = 2.513/(\Gamma_B+\Gamma_P)$, and so
\begin{equation}
P_{e,max} = 0.815\frac{\Gamma_P}{\Gamma_P + \Gamma_B} \qquad \text{(square pulse)}
\end{equation}
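These numbers are easily reproduced; a minimal numerical sketch (not from the paper), in units where $\Gamma_P+\Gamma_B=1$ and with the overall factor $\Gamma_P/(\Gamma_P+\Gamma_B)$ omitted:

```python
import math

def eff_square(T):
    """Square-pulse efficiency at t = T, in units Gamma_P + Gamma_B = 1;
    the overall factor Gamma_P/(Gamma_P + Gamma_B) is omitted."""
    return 4.0 / T * (1.0 - math.exp(-T / 2)) ** 2

# Brute-force scan over the pulse duration T (grid range is an arbitrary choice).
T_opt, eff = max(((T, eff_square(T)) for T in
                  (i * 1e-4 for i in range(5_000, 50_001))), key=lambda r: r[1])
```

The scan returns $T_\text{opt}\simeq 2.513$ and an efficiency of $\simeq 0.815$, in agreement with the text.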
\subsubsection{Gaussian pulse}
If we consider a Gaussian pulse instead, of the form
$f(t) = e^{-t^2/T^2}/\sqrt{T\sqrt{\pi/2}}$, substitution in (\ref{e2}) yields the exact expression
\begin{align}
P_e(t) = &\frac{\sqrt{2\pi} \Gamma_P T}{4} e^{-(\Gamma_P+\Gamma_B)t + (\Gamma_P+\Gamma_B)^2 T^2/8}\cr
&\times\left[1+\text{erf}\left(\frac t T - \frac{(\Gamma_P+\Gamma_B) T}{4}\right)\right]^2
\end{align}
Numerical maximization of this expression with respect to $t$ and $T$ yields $t_\text{opt} \simeq 0.731 T$, $T_\text{opt} = 1.368/(\Gamma_P+\Gamma_B)$, and
\begin{equation}
P_{e,max} = 0.801 \frac{\Gamma_P}{\Gamma_P + \Gamma_B} \qquad \text{(Gaussian)}
\end{equation}
It is interesting that the performance of the Gaussian pulse is extremely close to that of the square pulse. In the next section we will see that this is the case in the multiphoton, asymptotic limit as well.
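The Gaussian optimum can likewise be checked by a brute-force grid search (illustrative sketch; grid ranges are arbitrary, lossless case $\Gamma_B=0$, $\Gamma_P=1$):

```python
import math

def pe_gaussian(t, T):
    """Exact P_e for the Gaussian pulse in the lossless case
    (Gamma_P = Gamma_P + Gamma_B = 1)."""
    return (math.sqrt(2 * math.pi) * T / 4 * math.exp(-t + T * T / 8)
            * (1 + math.erf(t / T - T / 4)) ** 2)

# Joint grid search over the observation time t and the width parameter T.
t_opt, T_opt, p_max = max(((t, T, pe_gaussian(t, T))
                           for T in (0.5 + 0.005 * i for i in range(500))
                           for t in (0.005 * j for j in range(600))),
                          key=lambda r: r[2])
```

This reproduces $T_\text{opt}\simeq 1.368$, $t_\text{opt}\simeq 0.731\,T_\text{opt}$, and $P_{e,max}\simeq 0.801$.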
\subsubsection{Pulses obtained by atomic decay}
We next look at a couple of pulses that might be easier to produce experimentally than the ones considered above. One of these is a simple exponentially-decaying pulse, $f(t) = e^{-t/T}\sqrt{2/T}$ for $t\ge 0$ (and zero for $t<0$). Equation (\ref{e2}) now yields
\begin{equation}
P_e(t) = \frac{8\Gamma_P T}{(\Gamma_P T + \Gamma_B T-2)^2}\left(e^{-t/T}-e^{-(\Gamma_B+\Gamma_P)t/2}\right)^2
\label{e7}
\end{equation}
As a function of $t$, this expression peaks at
\begin{equation}
t_{max} = \frac{2T}{\Gamma_P T + \Gamma_B T-2}\,\ln\left[(\Gamma_P + \Gamma_B )T/2\right]
\label{e8}
\end{equation}
Substitution of this back into Eq.~(\ref{e7}) leads to a complicated expression which, however, can be shown to have a maximum, as a function of $T$, when $T = 2/(\Gamma_P + \Gamma_B)$ (in which limit the expression (\ref{e8}) becomes $t_{max} = T$). This maximum value equals
\begin{align}
P_{e,max} &= \frac{4}{e^2}\, \frac{\Gamma_P}{\Gamma_P + \Gamma_B} \quad \; \text{(decaying exponential)}\cr
&\simeq 0.541 \frac{\Gamma_P}{\Gamma_P + \Gamma_B}
\end{align}
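The limiting value $4/e^2$ can be confirmed by scanning Eq.~(\ref{e7}), evaluated at the peak time (\ref{e8}), over $T$ (illustrative sketch; units $\Gamma_P+\Gamma_B=\Gamma_P=1$; the point $T=2$ is skipped because the formula there is a $0/0$ limit):

```python
import math

def pe_peak_decaying(T):
    """Peak over t of Eq. (e7), using t_max from Eq. (e8);
    units Gamma_P + Gamma_B = 1 and Gamma_P = 1 (no external losses)."""
    t = 2 * T / (T - 2) * math.log(T / 2)
    return 8 * T / (T - 2) ** 2 * (math.exp(-t / T) - math.exp(-t / 2)) ** 2

# Scan T on a grid, excluding the removable singularity at T = 2.
T_best, p_max = max(((T, pe_peak_decaying(T))
                     for T in (1.0 + 0.001 * i for i in range(2001))
                     if abs(T - 2.0) > 1.5e-3), key=lambda r: r[1])
```

The maximum sits at $T\simeq 2$ with $P_{e,max}\simeq 4/e^2 \simeq 0.541$.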
A somewhat more complex, but still, experimentally, relatively straightforward, kind of pulse would be the one produced by an atom decaying inside a single-sided cavity. If the atom is assumed to be fully excited at the time $t=0$, the pulse for $t\ge 0$ is given by
$f(t)=-\frac{g\sqrt{2\kappa}}{\sqrt{\kappa^2-4g^2}}\left(e^{-\left(\kappa+\sqrt{\kappa^2-4g^2}\right)t/2}-e^{-\left(\kappa-\sqrt{\kappa^2-4g^2}\right)t/2}\right)$
(see \cite{gea1}, Eq.(54)), where $g$ is the coupling rate of the atom to the cavity, and $\kappa$ the cavity decay rate. The excitation probability with such a pulse is
\begin{align}
P_e(t) = &\dfrac{8g^2\kappa\Gamma_{p}\,e^{-(\Gamma_{p}+\Gamma_{B})t}}{\kappa^2-4g^2}\Biggl(\dfrac{e^{\left(\Gamma_{p}+\Gamma_{B}-\kappa+\sqrt{\kappa^2-4g^2}\right)t/2}-1}{\Gamma_{p}+\Gamma_{B}-\kappa+\sqrt{\kappa^2-4g^2}}\cr
&-\dfrac{e^{\left(\Gamma_{p}+\Gamma_{B}-\kappa-\sqrt{\kappa^2-4g^2}\right)t/2}-1}{\Gamma_{p}+\Gamma_{B}-\kappa-\sqrt{\kappa^2-4g^2}}\Biggr)^2
\label{ne16}
\end{align}
We have not been able to find the maximum of this expression (with respect to all three parameters, $\kappa$, $g$ and $t$) analytically. Numerically, however, we have found that the optimum occurs in the good-cavity limit, $\kappa < 2g$, so the square roots in (\ref{ne16}) are purely imaginary, and the time dependence includes Rabi oscillations as well as exponential decay. We have also found numerically that, in this region, the optimal value of $\kappa$ is given by $\kappa=\Gamma_{p}+\Gamma_B$, just as for the simple decaying exponential. This observation allows us to solve for the optimal time, with the result
\begin{equation}
t_{max}=
\frac{4}{\sqrt{4g^2-\kappa^2}}\,\tan^{-1}\frac{\sqrt{4g^2-\kappa^2}}{\kappa}
\end{equation}
The final maximization with respect to $g$ has to be done again numerically, with the result $g_{opt} = 0.9076 \kappa = 0.9076(\Gamma_P+\Gamma_B)$ (which means $t_{max} = 2.607/(\Gamma_P+\Gamma_B))$, and
\begin{equation}
P_{e,max} = 0.716 \frac{\Gamma_P}{\Gamma_P + \Gamma_B} \qquad \text{(Atom-cavity decay pulse)}
\end{equation}
Thus, in spite of all the extra parameters, this family of pulses still cannot do better than the Gaussian or the square, although it is certainly better than the plain decaying exponential.
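The quoted optimum for this family is straightforward to verify numerically; a sketch (not from the paper), taking $\Gamma_P=1$, $\Gamma_B=0$ and the quoted $g_{opt}$:

```python
import cmath
import math

def pe_cavity(t, g, kappa, gp=1.0, gb=0.0):
    """Eq. (ne16); each term's exponent carries the same sign of the square
    root as its denominator.  Complex arithmetic covers the good-cavity
    case kappa < 2g, where sqrt(kappa^2 - 4g^2) is imaginary."""
    G = gp + gb
    r = cmath.sqrt(kappa**2 - 4 * g**2)
    pref = 8 * g**2 * kappa * gp / (kappa**2 - 4 * g**2)
    t1 = (cmath.exp((G - kappa + r) * t / 2) - 1) / (G - kappa + r)
    t2 = (cmath.exp((G - kappa - r) * t / 2) - 1) / (G - kappa - r)
    return (pref * math.exp(-G * t) * (t1 - t2) ** 2).real

kappa = 1.0                 # optimal kappa = Gamma_P + Gamma_B, here with Gp = 1, Gb = 0
g = 0.9076 * kappa          # optimal coupling quoted in the text
t_pk = (4 / math.sqrt(4 * g**2 - kappa**2)
        * math.atan(math.sqrt(4 * g**2 - kappa**2) / kappa))
p_max = pe_cavity(t_pk, g, kappa)
```

Evaluating gives $t_{max}\simeq 2.607$ and $P_{e,max}\simeq 0.716$, matching the values above.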
The atom-cavity system, however, could in principle be used to generate a much greater variety of pulses, depending on how it is itself driven. Thus, for instance, one could think of sending a single-photon pulse (with a simple shape, such as a Gaussian or a decaying exponential) into the cavity, through the coupling mirror, and then using the output pulse to excite the target atom in the waveguide. The output pulse profile is, for any input pulse, easily derived from the results in \cite{gea1}. Our calculations show that the efficiency of an initial Gaussian pulse can be boosted in this way to $0.85$, for example. (See the second line from the top in Fig. 1, which summarizes all the above results graphically.)
\begin{figure}\label{fig:fig1}
\end{figure}
Another possibility would be to drive the atom in the cavity directly (through the sides of the cavity, say), and near-deterministically, with a sufficiently strong external field. By controlling the time dependence of the atomic excitation in this way, one could in principle control the shape of the outgoing pulse (which would still be a single photon pulse, as long as care is taken not to cycle the atomic excitation up and down more than once) \cite{zoller}. Note, however, that at this point we are talking about using many photons (in the control field) just to generate a single-photon wavepacket with the ideal profile to perfectly excite a single atom. In some contexts, as when one means to use the single photon as a qubit, this may make sense; but if all we want is to excite a single atom with a minimal expenditure of energy, it seems reasonable to try a different tack and ask instead just how many photons, impinging directly on the target atom, it would take to bring its excitation probability arbitrarily close to one, assuming either that one starts with a pulse with the ``wrong'' shape (i.e., not a rising exponential), or that the coupling losses to the outside world represented by $\Gamma_B$ are not negligible.
This is the question that we will address in the second part of this paper, after we briefly consider the atomic excitation by ``single-photon'' coherent state wavepackets of various shapes in the next subsection.
\subsection{Coherent states}
As shown in \cite{scarani1,scarani2}, if the incident pulse is in a coherent state instead of a number state, the quantized-field treatment yields a result formally identical to the semiclassical ``optical Bloch equations.'' If the field is on resonance, the atom initially in the ground state, and the average number of incident photons is $\bar n$, then the atomic dipole moment and excitation probability are given by
\begin{align}
\begin{split}
& \dot{\Sigma} =-\dfrac{\Gamma_{B}+\Gamma_{p}}{2}\Sigma +4\sqrt{\bar{n}\Gamma_{p} }f(t)P_{e}-2\sqrt{\bar{n}\Gamma_{p} }f(t)
\cr
& \dot{P_{e}} =-(\Gamma_{B}+\Gamma_{p}) P_{e} -\Sigma f(t)\sqrt{\bar{n}\Gamma_{p} }
\end{split}
\label{ne19}
\end{align}
where, as before, $f(t)$ is the pulse profile. In the absence of damping ($\Gamma_P+\Gamma_B=0$), these equations are easy to solve, and lead to the familiar result that full inversion is achieved by using a $\pi$ pulse, that is to say, one for which $f$ satisfies
\begin{equation}
2 \sqrt{\bar n \Gamma_P} \int f(t) \, dt = \pi
\label{ne20}
\end{equation}
Of course, because of the presence of $\Gamma_P$ in (\ref{ne20}), this condition is, strictly speaking, incompatible with the setting of $\Gamma_B+\Gamma_P =0$, but this might still be approximately valid for a sufficiently intense ($\bar n \gg 1$) and short ($(\Gamma_B+\Gamma_P)T \ll 1$) pulse. This will be discussed further in Section III.
For this section, we only want to consider the case of a ``single-photon'' coherent-state pulse. By this we mean a pulse with $\bar n =1$. When expressed in terms of Fock states, such a pulse has a probability $p(n) = e^{-1}/n!$ to contain $n$ photons, that is to say, a probability $1/e = 0.368\ldots$ of having 0 photons (in which case no excitation will happen), an identical probability of having 1 photon, and a probability $1-2/e = 0.264\ldots$ of having more than 1 photon. Therefore, in terms of the single-photon Fock state excitation probability discussed in the previous subsection, which we will call $P_{e,N=1}$ below, we can bound the coherent-state excitation probability $P_{e,\bar n =1}$ by
\begin{equation}
0.368 P_{e,N=1} < P_{e,\bar n =1} < 0.632
\label{ne21}
\end{equation}
for any pulse shape. (The upper limit is just $1-p(0)$.)
Equation (\ref{ne21}) is enough to see that ``single-photon'' coherent state pulses can never achieve very large excitation probabilities, regardless of their shape. Numerical results for these pulses have been presented in \cite{scarani1}. Here we will only consider the one analytically solvable case, the square pulse, because we can do it for arbitrary $\bar n$, and the large $\bar n$ limit will be useful in the next section. Letting, then, $f(t) = 1/\sqrt T$, $0<t<T$, and defining $\Gamma = \Gamma_P+\Gamma_B$ for simplicity, $\Omega_0=2\sqrt{\bar{n}\Gamma_{p}/T}$, and $\Omega=\sqrt{\Omega_0^2-\Gamma^2/16}$, we get
\begin{equation}
P_{e}=\dfrac{\Omega_0^2}{\Gamma^2+2\Omega_0^2}\left(1-e^{-3\Gamma t/4 }\left[\cos(\Omega t)+\dfrac{3\Gamma}{4 \Omega}\sin(\Omega t)\right] \right)
\label{ne22}
\end{equation}
Maximizing Eq.~(\ref{ne22}) with respect to $t$ is not difficult; it can be readily shown that $t_{max}=\pi/\Omega$, a sort of ``$\pi$-pulse'' condition that will be consistent with $t<T$ if $T$ is chosen appropriately, specifically, provided that
\begin{equation}
-\sqrt{ \frac{64 \bar n^2 \Gamma_P^2}{\Gamma^2}-\pi^2} \le \frac{\Gamma T}{4} - \frac{8 \bar n \Gamma_P}{\Gamma} \le \sqrt{ \frac{64 \bar n^2 \Gamma_P^2}{\Gamma^2}-\pi^2}
\label{ne23}
\end{equation}
Substituting $t=\pi/\Omega$ in (\ref{ne22}), we get the function
\begin{equation}
P_e = \frac{4\bar n \Gamma_P/T}{\Gamma^2+8\bar n \Gamma_P/T}\left(1+\exp\left[-\frac{3\pi\Gamma}{\sqrt{64 \bar n \Gamma_P/T-\Gamma^2}}\right]\right)
\label{ne24}
\end{equation}
This is a monotonically decreasing function of $T$, which will therefore be maximized by choosing the smallest value of $T$ that is compatible with the condition (\ref{ne23}).
The result is a complicated function of $\bar n \Gamma_P/\Gamma$. In the $\bar n = 1$ case, and for no external losses ($\Gamma_P=\Gamma$), it has the value $0.433$.
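The value $0.433$ follows directly from Eqs.~(\ref{ne23})--(\ref{ne24}); a quick numerical check (illustrative only):

```python
import math

nbar, gp, gamma = 1.0, 1.0, 1.0   # nbar = 1, no external losses (Gamma = Gamma_P)
# Smallest T allowed by Eq. (ne23):
T = (4 / gamma) * (8 * nbar * gp / gamma
                   - math.sqrt(64 * nbar**2 * gp**2 / gamma**2 - math.pi**2))
# Eq. (ne24) at that T:
p = (4 * nbar * gp / T / (gamma**2 + 8 * nbar * gp / T)
     * (1 + math.exp(-3 * math.pi * gamma
                     / math.sqrt(64 * nbar * gp / T - gamma**2))))
```

This gives $T\simeq 2.571$ and $P_e\simeq 0.433$.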
It turns out, however, that it is possible to do better than this, at least in the $\bar n =1$ case, by using a pulse that is \emph{shorter} than $\pi/\Omega$. For such a pulse, the excitation probability grows monotonically and is maximum at the end of the pulse ($t=T$), with the result
\begin{align}
P_e(T) = &\frac{4 \Gamma_P}{\Gamma^2 T + 8 \Gamma_P}\biggl(1 - e^{-3\Gamma T/4} \cr
&\quad\quad\times\left[\cos(\Omega T) + \frac{3\Gamma}{4 \Omega} \sin(\Omega T)\right]\biggr)
\label{ne25}
\end{align}
with $\Omega T = \frac 1 4 \sqrt{64 \Gamma_P T - \Gamma^2 T^2}$. Equation (\ref{ne25}) depends fundamentally on two variables, which we can choose to be $\Gamma_P/\Gamma$ and $\Gamma T$. For each value of the first one, we can numerically find the value of the second one that maximizes $P_e$. For $\Gamma_P = \Gamma$ (no external losses), the optimal $T$ is found to be $T = 1.487/\Gamma$, and the maximum $P_e$ is
\begin{equation}
P_{e,max} = 0.482 \qquad \text{(square pulse)}
\end{equation}
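This optimization of Eq.~(\ref{ne25}) is easy to reproduce; a numerical sketch (illustrative only, lossless case $\Gamma_P=\Gamma=1$):

```python
import math

def pe_end_of_pulse(T, gp=1.0, gamma=1.0):
    """Eq. (ne25) with nbar = 1; note Omega*T = sqrt(64*gp*T - (gamma*T)**2)/4."""
    ot = math.sqrt(64 * gp * T - (gamma * T) ** 2) / 4
    return (4 * gp / (gamma**2 * T + 8 * gp)
            * (1 - math.exp(-3 * gamma * T / 4)
               * (math.cos(ot) + 3 * gamma * T / (4 * ot) * math.sin(ot))))

# Scan the pulse duration T on a grid (range is an arbitrary choice).
T_opt, p_max = max(((T, pe_end_of_pulse(T)) for T in
                    (0.001 * i for i in range(1, 8001))), key=lambda r: r[1])
```

The scan recovers $T_\text{opt}\simeq 1.487$ and $P_{e,max}\simeq 0.482$.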
For other values of the external losses, we get the results shown in Figure 2, which also includes the results of numerical calculations for other pulse shapes. Note that the dependence on the external losses no longer follows the simple $\Gamma_P/(\Gamma_P+\Gamma_B)$ scaling that we obtained for single-photon Fock states in Section II.A above.
\begin{figure}\label{fig:fig2}
\end{figure}
\section{Multi-photon wavepackets, and asymptotic results}
\subsection{Coherent states}
\subsubsection{General results; square pulse}
When considering multiphoton excitation, especially in the large $\bar n$ limit, it makes more sense to think in terms of coherent states than Fock states, since multiphoton Fock states are notoriously difficult to produce. Accordingly, we will consider coherent states first, in which case the basic equations to solve are just Eqs.~(\ref{ne19}), from Section II.
As indicated earlier, Eqs.~(\ref{ne19}) cannot be solved exactly, to the best of our knowledge, except for a square pulse. Approximate solutions, however, are possible in two opposite limits. If the pulse is very long compared to the overall decay time, $(\Gamma_P+\Gamma_B)^{-1}$, one can derive in a straightforward way an ``adiabatic solution,'' by formally setting the left-hand sides of Eqs.~(\ref{ne19}) equal to zero:
\begin{equation}
P_e(t) = \frac{4\bar n\Gamma_P}{(\Gamma_B+\Gamma_P)^2 + 8\bar n \Gamma_P f(t)^2}\,f(t)^2
\end{equation}
This more or less tracks the pulse, but it is always smaller than $1/2$, which is the value it approaches asymptotically as $\bar n\to \infty$. This is the well-known phenomenon of ``bleaching'': a sufficiently long and intense classical pulse will drive the population inversion of a two-level medium to zero, so the medium becomes transparent.
We are interested here in a different regime, where we expect the excitation probability can be made to approach the instantaneous value of 1 for a sufficiently intense and short pulse. Although, in general, this near-perfect excitation may be achieved for only a short time, one should note that, as long as the atomic levels are allowed to decay, any excitation we may produce will necessarily be transient. Whether it is useful or not depends on the timescales involved.
As in the previous section, we are particularly interested in exploring the differences between pulse shapes, only this time we want to see how the pulse shape affects the rate at which $P_e$ approaches 1 as $\bar n$ increases. We may conveniently start with the square pulse solution we derived above, Eq.~(\ref{ne22}), which has a local maximum at $t = \pi/\Omega = \pi T/\sqrt{4\bar n\Gamma_P T-(\Gamma T/4)^2}$, provided the condition (\ref{ne23}) is satisfied. For a sufficiently large $\bar n$, it is easy to see that this condition becomes
\begin{equation}
\frac{\pi^2 \Gamma }{4\bar n\Gamma_P} + O\left(\frac{1}{\bar n}\right)^3 \le \Gamma T \le \frac{64 \bar n\Gamma_P}{\Gamma} -\frac{\pi^2\Gamma }{4\bar n\Gamma_P} + O\left(\frac{1}{\bar n}\right)^3
\label{ne28}
\end{equation}
Also in the large $\bar n$ limit, Eq.~(\ref{ne24}) becomes
\begin{equation}
P_e \simeq 1 - \frac{3\pi\Gamma}{16}\sqrt{\frac{T}{\bar n \Gamma_P}} + \left(\frac{9\pi^2}{64}-\frac 1 2 \right) \frac{\Gamma^2 T}{4 \bar n\Gamma_P} + O\left(\frac{1}{\bar n}\right)^{3/2}
\label{ne29}
\end{equation}
Note that, if we do not optimize the pulse duration $T$, Eq.~(\ref{ne29}) only approaches 1 as $1/\sqrt{\bar n}$. On the other hand, if we substitute for $T$ the smallest value allowed by Eq.~(\ref{ne28}), namely, $\pi^2/4\bar n\Gamma_P$, we get a much more favorable scaling:
\begin{equation}
P_e \simeq 1 - \frac{3\pi^2\Gamma}{32 \bar n \Gamma_P} + \left(\frac{9\pi^2}{64}-\frac 1 2 \right) \left(\frac{\pi\Gamma}{4 \bar n\Gamma_P}\right)^2 + O\left(\frac{1}{\bar n}\right)^3
\label{ne30}
\end{equation}
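The leading term of this scaling can be checked against the exact square-pulse result (\ref{ne24}); a numerical sketch (illustrative only, units $\Gamma_P=\Gamma=1$):

```python
import math

def pe_pi_pulse(nbar, T, gp=1.0, gamma=1.0):
    """Eq. (ne24): P_e at t = pi/Omega for a square coherent pulse."""
    return (4 * nbar * gp / T / (gamma**2 + 8 * nbar * gp / T)
            * (1 + math.exp(-3 * math.pi * gamma
                            / math.sqrt(64 * nbar * gp / T - gamma**2))))

nbar = 1000.0
T = math.pi**2 / (4 * nbar)             # shortest T allowed by Eq. (ne28)
deficit = 1.0 - pe_pi_pulse(nbar, T)
leading = 3 * math.pi**2 / (32 * nbar)  # leading term of Eq. (ne30)
```

At $\bar n = 1000$ the exact deficit $1-P_e$ agrees with $3\pi^2/(32\bar n)$ to better than one part in a thousand.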
We should also verify that this is better than (or, as it turns out, equivalent to) the alternative we found for the $\bar n=1$ case in the previous section, namely, letting the maximum happen at the end of the pulse ($t=T$), in which case we need to optimize
\begin{align}
P_e(T) = &\frac{4 \bar n \Gamma_P}{\Gamma^2 T + 8 \bar n \Gamma_P}\Biggl(1- e^{-3\Gamma T/4} \cr
&\quad\quad\times\left[\cos(\Omega T) + \frac{3\Gamma}{4 \Omega} \sin(\Omega T)\right]\Biggr)
\label{ne31}
\end{align}
with respect to $T$. However, for large $\bar n$ it is clear that the prefactor approaches $1/2$, and the only way the term in parentheses can approach 2 is if $\cos(\Omega T) \simeq -1$. This requires $T \simeq \pi/\Omega$, so at this point this approach reduces to the previous one, since there we started by imposing $t=\pi/\Omega$ and later choosing the lowest value of $T$ compatible with this condition, namely, $T=\pi/\Omega$.
Finally, note that, in contrast to the single-photon case, in the large-$\bar n$ limit the optimal pulse duration (here $T=\pi^2/4\bar n\Gamma_P$) is, to lowest order in $1/\bar n$, independent of the external loss rate $\Gamma_B$. We will find this to be the case for every other pulse shape, as well.
\subsubsection{Perturbation theory in the large $\bar n$ limit}
The above exactly-solvable case shows that, in order to get the first-order correction in $1/\bar n$ (the deviation from unity) to $P_e$, it is enough to keep terms linear in $\Gamma$ (note that $\Gamma$ and $\Gamma_P$ are treated as completely independent variables here; $\Gamma_P$ characterizes the atom-field coupling, whereas $\Gamma$ quantifies the losses, or spontaneous decay rate). It also suggests that, to the same order of accuracy, we may simply replace $\Omega = \sqrt{\Omega_0^2 - \Gamma^2/16}$ by $\Omega_0 = 2\sqrt{\bar n \Gamma_P/T}$, and assume that the maximum of $P_e$ happens at the time $t=\pi/\Omega_0$ where the $\pi$ pulse condition is satisfied in the absence of losses.
This suggests a simple strategy to obtain the first-order correction for an arbitrary pulse shape, namely, to use perturbation theory. Let $T$ be some parameter with the dimensions of time that characterizes the duration of the pulse, and let $g(t) = \sqrt T \, f(t)$ (so $g$ has the same shape as $f$ but is dimensionless). Defining $\Omega_0 = 2\sqrt{\bar n \Gamma_P/T}$ as above and the dimensionless time $\tau = \Omega_0 t$, we can rewrite the system (\ref{ne19}) as
\begin{align}
\begin{split}
& \frac{d x}{d\tau} = -\frac{\epsilon}{2}\,x + g(\tau) y - g(\tau)
\cr
& \frac{d y}{d\tau} = -\epsilon\, y -g(\tau) x
\end{split}
\label{ne32}
\end{align}
where $x \equiv {\Sigma}$, $y = 2 P_{e}$, and $\epsilon = \Gamma/\Omega_0$. We can then expand $x(t) = x^{(0)}(t) + \epsilon x^{(1)}(t)+\ldots$, $y(t) = y^{(0)}(t) + \epsilon y^{(1)}(t)+\ldots$, and substitute in (\ref{ne32}). The lowest-order equation
\begin{align}
\begin{split}
& \frac{d x^{(0)}}{d\tau} = g(\tau)\, y^{(0)} - g(\tau)
\cr
& \frac{d y^{(0)}}{d\tau} = -g(\tau)\, x^{(0)}
\end{split}
\label{ne33}
\end{align}
is immediately solved by
\begin{align}
x^{(0)}(\tau) &= -\sin[\theta(\tau)] \cr
y^{(0)}(\tau) &= 1-\cos[\theta(\tau)]
\label{ne34}
\end{align}
with
\begin{equation}
\theta(\tau) = \int_{-\infty}^\tau g(\tau^\prime)\,d\tau^\prime
\label{ne35}
\end{equation}
and, as expected, this gives unit excitation probability when $\theta = \pi$. The next-order correction must satisfy
\begin{align}
\begin{split}
& \frac{d x^{(1)}}{d\tau} = g(\tau)\, y^{(1)} -\frac{1}{2}\,x^{(0)}(\tau)
\cr
& \frac{d y^{(1)}}{d\tau} = -g(\tau)\, x^{(1)} - y^{(0)}(\tau)
\end{split}
\label{ne36}
\end{align}
Again, changing to the variable $\theta$ turns this into a simple driven harmonic oscillator problem, with the formal solution for $y^{(1)}(\theta)$:
\begin{widetext}
\begin{equation}
y^{(1)}(\theta) = -\int_0^\theta \frac{d\tau}{d\theta^\prime}\left(y^{(0)}(\theta^\prime)\cos(\theta-\theta^\prime) - \frac 1 2 x^{(0)}(\theta^\prime)\sin(\theta-\theta^\prime) \right) d\theta^\prime
\label{ne37}
\end{equation}
At this point, the only remaining difficulty may be to express $d\tau/d\theta = 1/g(\tau(\theta))$ as a function of $\theta$, since the inversion of Eq.~(\ref{ne35}) may be a nontrivial problem. Alternatively, note that the whole expression (\ref{ne37}) can be rewritten explicitly as an integral over $\tau$. For the moment, though, we will continue to use $\theta$ because it allows us to express the correction to $P_e$ at the expected maximum, $\theta = \pi$, in the following very compact form (using Eqs.~(\ref{ne34}), (\ref{ne35})):
\begin{align}
1-P_e \Bigr|_{\theta=\pi} &= \frac{\epsilon}{2}\int_0^\pi \frac{1}{g(\tau(\theta^\prime))}\left(-\cos(\theta^\prime)\left[1-\cos(\theta^\prime)\right] + \frac 1 2 \sin^2(\theta^\prime) \right) d\theta^\prime \cr
&= \epsilon \int_0^\pi \frac{\sin^4(\theta/2)}{g(\tau(\theta))} d\theta
\label{ne38}
\end{align}
\end{widetext}
Examples of the use of Eq.~(\ref{ne38}) follow.
\subsubsection{Decaying exponential pulse}
Consider a decaying exponential pulse, $f(t) = e^{-t/T}\sqrt{2/T}$ for $t\ge 0$, and zero for $t<0$. Then $g(\tau) = \sqrt 2\, e^{-\tau/\Omega_0 T}$, $\theta = \sqrt 2\, \Omega_0 T (1-e^{-\tau/\Omega_0 T})$, and $g(\tau(\theta)) = \sqrt 2[1-\theta/(\sqrt 2\Omega_0 T)]$. The result is then
\begin{equation}
1-P_e \Bigr|_{\theta=\pi} = \Gamma T \int_0^\pi \frac{\sin^4(\theta/2)}{2\sqrt{2\bar n\Gamma_P T}-\theta}\, d\theta
\label{ne39}
\end{equation}
It is now an easy matter to minimize this, numerically, with respect to $T$. The minimum is obtained when $T=3.347/\bar n \Gamma_P$, and the result is then
\begin{equation}
P_e \simeq 1 - 1.47895 \frac{\Gamma}{\bar n \Gamma_P}
\label{ne40}
\end{equation}
This is clearly less favorable than the square-pulse scaling, Eq.~(\ref{ne30}), since the corresponding prefactor there is only $3\pi^2/32 \simeq 0.9253$, compared with $1.47895$ here. Again, note that if one were simply to increase $\bar n$ in Eq.~(\ref{ne39}), leaving $T$ constant, the excitation probability would only approach 1 as $1/\sqrt{\bar n}$, which is to say, much more slowly.
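As a sanity check (ours, not part of the original analysis), the optimization leading to Eq.~(\ref{ne40}) can be reproduced with standard quadrature and a one-dimensional minimizer. The sketch below works with the dimensionless variable $u = \bar n\Gamma_P T$, so that $1-P_e = (\Gamma/\bar n\Gamma_P)\,F(u)$ with $F(u) = u\int_0^\pi \sin^4(\theta/2)/(2\sqrt{2u}-\theta)\,d\theta$ from Eq.~(\ref{ne39}); variable names are ours.

```python
# Numerical check of the decaying-exponential result, Eqs. (ne39)-(ne40).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def one_minus_pe(u):
    # u = nbar * Gamma_P * T; returns the coefficient of Gamma/(nbar Gamma_P)
    integrand = lambda th: np.sin(th / 2) ** 4 / (2 * np.sqrt(2 * u) - th)
    val, _ = quad(integrand, 0, np.pi)
    return u * val

res = minimize_scalar(one_minus_pe, bounds=(1.5, 10.0), method="bounded")
print(res.x, res.fun)  # expect about 3.347 and 1.47895
```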
\subsubsection{Rising exponential pulse}
Let now $f(t) = e^{t/T}\sqrt{2/T}$ for $t\le 0$, and zero for $t>0$. Then $g(\tau) = \sqrt 2\, e^{\tau/\Omega_0 T}$ (for $\tau\le 0$), and $\theta = \sqrt 2\, \Omega_0 T e^{\tau/\Omega_0 T}$, so $g(\tau(\theta)) = \theta/\Omega_0 T$, and we find
\begin{equation}
1-P_e \Bigr|_{\theta=\pi} = \Gamma T \int_0^\pi \frac{\sin^4(\theta/2)}{\theta}\, d\theta = 0.519432\, \Gamma T
\label{ne41}
\end{equation}
This is, at first sight, a somewhat surprising result, in that it looks like it can be made arbitrarily small simply by reducing $T$, but recall that in order for Eq.~(\ref{ne38}) to be applicable, it must be possible for $\theta$ to reach the value of $\pi$, so we need to have $\sqrt 2\, \Omega_0 T \ge \pi$. Since $\Omega_0 = 2\sqrt{\bar n \Gamma_P/T}$, this leads to the condition
\begin{equation}
2\sqrt{2\bar n \Gamma_P T} \ge \pi
\label{ne42}
\end{equation}
Taking the smallest $T$ compatible with Eq.~(\ref{ne42}), namely, $T_{opt} = \pi^2/(8\bar n\Gamma_P)$, and substituting in (\ref{ne41}), we obtain
\begin{equation}
P_e = 1 - 0.519432\, \frac{\Gamma \pi^2}{8 \bar n \Gamma_P} = 1 - 0.640824 \frac{\Gamma}{\bar n \Gamma_P}
\label{ne43}
\end{equation}
This is better than the square pulse, and much better than the decaying exponential, requiring less than half the photons to reach the same value of $P_e$.
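The numerical constant in Eq.~(\ref{ne41}) and the final coefficient $0.640824$ are easy to verify; a minimal check (ours, using standard quadrature):

```python
# Check of the rising-exponential integral in Eq. (ne41), and of the coefficient
# obtained after substituting T_opt = pi^2/(8 nbar Gamma_P).
import numpy as np
from scipy.integrate import quad

I, _ = quad(lambda th: np.sin(th / 2) ** 4 / th, 0, np.pi)
beta = I * np.pi ** 2 / 8  # coefficient of Gamma/(nbar Gamma_P) in 1 - P_e
print(I, beta)  # about 0.519432 and 0.640824
```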
\subsubsection{Gaussian pulse}
Finally, consider a Gaussian pulse of the form $f(t) = e^{-t^2/T^2}/\sqrt{T\sqrt{\pi/2}}$. We now have
\begin{equation}
g(\tau) = \left(\frac{2}{\pi}\right)^{1/4} e^{-\tau^2/(\Omega_0 T)^2}
\end{equation}
\begin{equation}
\theta(\tau) = \int_{-\infty}^\tau g(\tau^\prime)\, d\tau^\prime = \frac 1 2 \Omega_0 T (2\pi)^{1/4} \left[1+\text{erf}\left(\frac{\tau}{\Omega_0 T}\right)\right]
\end{equation}
\begin{align}
1-P_e \Bigr|_{\theta=\pi} = &\left(\frac{\pi}{2}\right)^{1/4}\frac{\Gamma}{\Omega_0} \int_0^\pi\exp\left[\text{InverseErf}\,^2\left(\frac{2\theta}{\Omega_0 T}\,\frac{1}{(2\pi)^{1/4}} - 1\right)\right]\cr
&\times\sin^4\left(\frac\theta 2\right)\,d\theta
\label{ne46}
\end{align}
where ``InverseErf'' is the inverse of the error function, available in packages such as \emph{Mathematica}. One now has to (numerically) minimize (\ref{ne46}) with respect to $T$, keeping in mind that $\Omega_0$ itself depends on $T$. In practice, it is easiest to introduce a parameter $a= \Omega_0 T$ in terms of which $T= a^2/4\bar n \Gamma_P$, and the prefactor $\Gamma/\Omega_0 = \Gamma T/a = a \Gamma/4\bar n \Gamma_P$, and minimize the resulting expression with respect to $a$. One then finds that
\begin{equation}
T_{opt} = \frac{1.45009}{\bar n \Gamma_P}
\end{equation}
and
\begin{equation}
P_e = 1 - 0.91597\,\frac{\Gamma}{\bar n \Gamma_P}
\label{ne48}
\end{equation}
very close to the square pulse result, Eq.~(\ref{ne30}), which is $\simeq 1 - 0.9253 \,{\Gamma/\bar n \Gamma_P}$ to lowest order. All of the above results are summarized graphically (for the lossless case) in Fig.~3.
\begin{figure}
\caption{Summary of the above coherent-state results (maximum excitation probability for the various pulse shapes), in the lossless case.}
\label{fig:fig3}
\end{figure}
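Since inverting Eq.~(\ref{ne35}) for the Gaussian pulse involves the inverse error function, it is simplest to check the quoted optimum numerically using the equivalent form of Eq.~(\ref{ne38}) as an integral over $\tau$, as noted earlier. The sketch below (ours) uses $u = \tau/\Omega_0 T$ and $a = \Omega_0 T$, so that $T = a^2/4\bar n\Gamma_P$ and $1-P_e = (\Gamma/\bar n\Gamma_P)(a^2/4)\int_{-\infty}^{u_\pi}\sin^4[\theta(u)/2]\,du$:

```python
# Numerical check of the Gaussian-pulse optimum, Eq. (ne48).
# theta(u) = (a/2)(2 pi)^{1/4} [1 + erf(u)]; u_pi solves theta(u_pi) = pi.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.special import erf, erfinv

def one_minus_pe(a):
    theta = lambda u: 0.5 * a * (2 * np.pi) ** 0.25 * (1 + erf(u))
    u_pi = erfinv(2 * np.pi / (a * (2 * np.pi) ** 0.25) - 1)
    val, _ = quad(lambda u: np.sin(theta(u) / 2) ** 4, -np.inf, u_pi)
    return 0.25 * a ** 2 * val  # coefficient of Gamma/(nbar Gamma_P)

res = minimize_scalar(one_minus_pe, bounds=(2.1, 3.5), method="bounded")
print(res.x ** 2 / 4, res.fun)  # expect about 1.45009 and 0.91597
```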
\subsection{Fock states}
Although multiphoton Fock states are very difficult to prepare, we wish to cover this case here for completeness, since it also turns out to be analytically tractable in the large $N$ limit.
Generalizing the model in \cite{scarani2} to include external losses, we find that the excitation probability for $N$ photons can be obtained by integrating the following system of $2N$ coupled differential equations:
\begin{align}
\dot{P}_{e,n} &=-\Gamma P_{e,n}-\sqrt{\Gamma_{P} n}f(t)\;\Sigma_{n-1}
\cr
\dot{\Sigma}_{n-1} &=-\dfrac{\Gamma}{2}\;\Sigma_{n-1}+4\sqrt{\Gamma_{P} n}f(t)\;P_{e,n-1}-2\sqrt{\Gamma_{P} n}f(t) \cr
\label{e9}
\end{align}
where the index $n$ runs from $1$ to $N$. (For an alternative formalism to deal with this problem, see the work of Baragiola et al. \cite{combes}.) Let $f(t) = 1/\sqrt T$, $0<t<T$. In this case, and for a small number of photons, one could easily integrate Eq.~(\ref{e9}) recursively, by hand, and obtain the excitation probability. However, this becomes impractical when the number of photons is large. We therefore resort to the same kind of perturbation theory we used above for the coherent-state pulse.
Letting $g(t)=\sqrt T\,f(t)$, $\Omega_0 = 2\sqrt{N\Gamma_P T}$, $\tau = \Omega_0 t$, we find the system (\ref{e9}) can be written as
\begin{align}
\frac{d}{d\tau}\,y_n &= -\epsilon y_n - g(\tau)\sqrt{\frac n N}\, x_{n-1} \cr
\frac{d}{d\tau}\,x_{n-1} &= -\frac \epsilon 2\, x_{n-1} + g(\tau)\sqrt{\frac{n}{N}}\, y_{n-1} - g(\tau) \sqrt{\frac{n}{N}}\cr
\label{e50}
\end{align}
where $x_n = \Sigma_n$, $y_n = 2 P_n$, and $\epsilon = \Gamma/\Omega_0$.
Introducing, as before, a perturbative solution of the form $x_n(t) = x_n^{(0)} + x_n^{(1)} +\ldots$, $y_n(t) = y_n^{(0)} + y_n^{(1)} +\ldots$, with the $k$-th order terms of order $\epsilon^k$, one can show recursively that the zero-th order solution has the form
\begin{align}
y_n^{(0)}(\theta) &=1 - \mathbf{ _{1}F_{1}} \left(-n,\frac 1 2, \frac{\theta^2}{4 N}\right) \cr
x_{n-1}^{(0)}(\theta) &= -\theta \sqrt\frac{n}{N}\, \mathbf{ _{1}F_{1}} \left(-n+1,\frac 3 2, \frac{\theta^2}{4 N}\right)
\label{e51}
\end{align}
in terms of the variable $\theta$ introduced as in Eq.~(\ref{ne35}), and the confluent hypergeometric function $\mathbf{ _{1}F_{1}}$. This is to be compared directly to the result (\ref{ne34}) for the coherent state case, with $n=N$. Indeed, in the large $N$ limit we find \cite{abr1}
\begin{align}
\mathbf{ _{1}F_{1}} \left(-N,\frac 1 2, \frac{\theta^2}{4 N}\right) &\simeq e^{\theta^2/8N} \cos\theta \cr
\mathbf{ _{1}F_{1}} \left(-N,\frac 3 2, \frac{\theta^2}{4 N}\right) &\simeq \frac 1 \theta\,e^{\theta^2/8N} \sin\theta
\label{ne52}
\end{align}
which shows that, as in the coherent-state case, the excitation probability, $y_N/2$, will be maximum around $\theta = \pi$. Note, however, that since we ultimately want an expression for $P_e$ that is correct to order $1/N$, we cannot neglect the exponential term in (\ref{ne52}) completely. Rather, we have to say that, to lowest order in $\epsilon$, the excitation probability, $P_e^{(0)}$ is given by
\begin{align}
P_e^{(0)} &= \frac 1 2(1- e^{\theta^2/8N} \cos\theta) \cr
P_{e,max}^{(0)} &\simeq 1 + \frac{\pi^2}{16 N}
\label{ne53}
\end{align}
Of course, an excitation probability greater than 1 is unphysical, but this is ultimately due to the fact that the zero-th order in $\epsilon$ is also unphysical: since $\epsilon = (\Gamma_P+\Gamma_B)/\Omega_0$ includes the coupling to the atom $\Gamma_P$, it can never be strictly zero. As we shall see below, the terms coming from the first order correction will ensure that $P_e$ is always less than 1, to first order in $1/N$.
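As a quick numerical spot check (ours, using \texttt{scipy.special.hyp1f1}) of the asymptotic forms (\ref{ne52}) at the relevant point $\theta = \pi$:

```python
# Spot check of the large-N asymptotics (ne52) at theta = pi, for N = 200.
import numpy as np
from scipy.special import hyp1f1

N, theta = 200, np.pi
x = theta ** 2 / (4 * N)
lhs1 = hyp1f1(-N, 0.5, x)
rhs1 = np.exp(theta ** 2 / (8 * N)) * np.cos(theta)
lhs2 = hyp1f1(-N, 1.5, x)
rhs2 = np.exp(theta ** 2 / (8 * N)) * np.sin(theta) / theta
print(lhs1 - rhs1, lhs2 - rhs2)  # both small; the second only to O(1/N)
```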
This first-order correction is formally given by
\begin{widetext}
\begin{align}
y_N^{(1)}(\theta) = -\epsilon\int_0^\theta \frac{d\tau}{d\theta^\prime}&\Biggl\{
\sum_{n=0}^{N-1} \frac{(-1)^n N!}{(2n)! N^n (N-n)!}\,(\theta-\theta^\prime)^{2n}\left[1 - \mathbf{ _{1}F_{1}} \left(-N+n,\frac 1 2, \frac{{\theta^\prime}^2}{4 N}\right)\right] \cr
&-\frac 1 2 \sum_{n=1}^{N} \frac{(-1)^{n-1} (N-1)! }{(2n-1)! N^{n-1} (N-n)!}\,{(\theta-\theta^\prime)^{2n-1}}{\theta^\prime} \mathbf{ _{1}F_{1}} \left(-N+n,\frac 3 2, \frac{{\theta^\prime}^2}{4 N}\right) \Biggr\} d\theta^\prime
\label{ne54}
\end{align}
\end{widetext}
where, again, we wish to emphasize the similarity with the corresponding coherent-state result (\ref{ne37}). In fact, it is straightforward to show numerically that, for finite $\theta$ (in particular, $\theta\simeq \pi$), the term in curly braces in Eq.~(\ref{ne54}) approaches $\cos(\theta-\theta^\prime)\left(1-\cos\theta^\prime\right) +\frac 12 \sin(\theta-\theta^\prime)\sin\theta^\prime$, as $N\to\infty$. Qualitatively, this may be understood from the fact that, for large $N$ and small $n$, the difference between $N$ and $N-n$ in the first argument of the hypergeometric functions can be neglected, whereas for large $n$ the prefactors go to zero very fast, so the terms where the difference between $N$ and $N-n$ is substantial are strongly suppressed. Setting, then, $N-n \simeq N$ in the argument of the hypergeometric functions, the sums can be carried out, with the result that the first one equals $\mathbf{ _{1}F_{1}} \left(-N,\frac 1 2, (\theta-\theta^\prime)^2/{4 N}\right)$ (up to terms that are negligible for large $N$), and the second one equals $(\theta-\theta^\prime)\mathbf{ _{1}F_{1}} \left(-N,\frac 3 2,(\theta-\theta^\prime)^2/4 N\right)$. One can then use the results (\ref{ne52}) to show the (asymptotic) identity between (\ref{ne54}) and the coherent-state result (\ref{ne34}), (\ref{ne37}). In particular, for $\theta = \pi$, we have then (making use of (\ref{ne53}))
\begin{equation}
1-P_e\Bigr|_{\theta=\pi} = -\frac{\pi^2}{16 N} + \epsilon \int_0^\pi \frac{\sin^4(\theta/2)}{g(\tau(\theta))} d\theta
\label{ne55}
\end{equation}
which means that the optimal pulse bandwidth (to minimize the second term on the right-hand side of (\ref{ne55})) will be exactly the same as for the coherent-state case (only with $\bar n$ replaced by $N$), and the maximum excitation probability will also be the same, plus the $\pi^2/16N$ term. Explicitly, we get
\begin{align}
P_e &= 1 -\frac{\pi^2}{32 N} -\frac{3\pi^2 \Gamma_B}{32 N \Gamma_P} \qquad\text{(square pulse)}\cr
P_e &= 1 -\frac{0.8621}{N} -\frac{1.47895\Gamma_B}{N \Gamma_P} \qquad\text{(decaying exponential)}\cr
P_e &= 1 -\frac{0.02397}{N} -\frac{0.64082\Gamma_B}{N \Gamma_P} \qquad\text{(rising exponential)}\cr
P_e &= 1 -\frac{0.29912}{N} -\frac{0.91597\Gamma_B}{N \Gamma_P} \qquad\text{(Gaussian)}
\label{ne56}
\end{align}
where we have separated the contribution of the external losses $\Gamma_B$ explicitly, to show that $P_e$ is indeed in all the cases lower than 1. (Note that this $\Gamma_B$ contribution is the same for Fock states as for coherent states.) We find that the rising exponential pulse in the $\Gamma_B = 0$ case is now more than an order of magnitude better than all the other pulses. These results are plotted (for the $\Gamma_B=0$ case) in Figure 4.
\begin{figure}
\caption{The Fock-state results of Eq.~(\ref{ne56}) for the various pulse shapes, plotted for the $\Gamma_B=0$ case.}
\label{fig:fig4}
\end{figure}
The last of the equations (\ref{ne56}) should be compared (in the $\Gamma_B=0$ case) to the result obtained for Fock states by Baragiola et al., $P_e^N = 1-0.269 N^{-0.973}$ (see caption to Figure 3 of \cite{combes}). The two expressions yield very similar results for $N=40$, which was the largest value of $N$ considered in \cite{combes}, and our numerical calculations show that ours is a better fit for large $N$, as is to be expected.
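For concreteness, the two expressions can be compared directly at $N=40$ (this check is ours):

```python
# Comparing the Gaussian-pulse entry of Eq. (ne56) (with Gamma_B = 0) with the
# fit of Baragiola et al. at N = 40.
ours = lambda N: 1 - 0.29912 / N
fit = lambda N: 1 - 0.269 * N ** (-0.973)
print(ours(40), fit(40))  # both about 0.9925
```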
\section{Conclusions}
We have considered theoretically the maximum excitation probability for a two-level system interacting with a quantum field, in the presence of external losses (or non-perfect coupling), in two complementary limits: when the incident field contains a single photon (on average), and asymptotically, when the number of photons is very large. In both cases we find that $P_e$ depends strongly on the temporal profile of the pulse, and that for each pulse shape it is essential to optimize the pulse duration (or bandwidth) in order to make $P_e$ as large as possible. In particular, for the single-photon (Fock state) case, we find that, if the external losses are characterized by the decay rate $\Gamma_B$, the optimum bandwidth depends on $\Gamma_B$, and with this optimization the maximum $P_e$ is just equal to the lossless result multiplied by the ratio $\Gamma_P/(\Gamma_P+\Gamma_B)$. For a coherent state with $\bar n = 1$, we have presented numerical results showing that this simple scaling does not apply.
For the multiphoton case, when the field is in a coherent state, we have derived an expression that allows one to evaluate the leading term in the expansion of $P_e$ in powers of $1/\bar n$ for a pulse of arbitrary temporal profile. We find in this case that the optimum pulse duration does not depend on the external losses, and what is typically required is for $T$ to scale as $\alpha/\bar n \Gamma_P$, where the constant $\alpha$ depends on the pulse shape. When this is done, one finds that $P_e$ approaches 1 as $P_e \simeq 1 - (\beta/\bar n)(1+\Gamma_B/\Gamma_P)$, where again the constant $\beta$ depends on the pulse shape. When the pulse duration is not optimized, one typically finds a much less favorable scaling with $\bar n$. We also find that the constant $\beta$ can vary by a factor of 2 or more across different pulses, and in terms of their effectiveness the various shapes that we have studied rank in roughly the same order in the large $\bar n$ as in the single-photon regime, with the rising exponential profile being the best, the decaying exponential the worst, and the square and Gaussian profiles being in the middle and very close to each other.
For multiphoton Fock states, we have shown that the optimal pulse bandwidth and the time when the excitation peaks are (in the asymptotic, large $N$ limit) the same as for the corresponding coherent state with $\bar n = N$, and the optimized excitation probability is the same plus $\pi^2/16 N$, regardless of the pulse shape or the level of external losses.
\end{document}
\begin{document}
\title{Higher order recurrences and row sequences of Hermite-Pad\'e approximation}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[1]{Departamento de
Matem\'{a}ticas, Universidad Carlos III de Madrid, Avda. Universidad
30, 28911 Legan\'{e}s, Madrid, Spain. email: \{lago, yzaldiva\}\symbol{'100}math.uc3m.es.
Both authors received support from research grant MTM 2015-65888-C4-2-P of Ministerio de Econom\'{\i}a, Industria y Competitividad, Spain.}
\begin{abstract}
We obtain extensions of the Poincar\'e and Perron theorems for higher order recurrence relations and apply them to obtain an inverse type theorem for row sequences of (type II) Hermite-Pad\'e approximation of a vector of formal power series.
\end{abstract}
\textbf{Keywords:} Higher order recurrences, Hermite-Pad\'e approximation, inverse type results.
\textbf{AMS classification:} Primary 30E10, 41A21; Secondary 65Q30.
\section{Introduction}
\subsection{Background.}
Let $(f_n)_{n\geq 0}$ be a solution of the recurrence relation
\begin{equation}
\label{eq:1}
f_n + \alpha_{n,1}f_{n-1} + \cdots + \alpha_{n,m} f_{n-m} = 0, \qquad n \geq m,\end{equation}
with given initial conditions $f_0,\ldots,f_{m-1}$. If for all $n$ the coefficients $\alpha_{n,1},\ldots,\alpha_{n,m}$ are given, it is well known that the solution space of the recurrence relation is a vector space of dimension $m$.
Such recurrence relations appear and play a central role in many fields of mathematics: number theory, difference equations, continued fractions, and approximation theory, to name a few. In the general theory, two results due to H. Poincar\'e \cite{Poin} and O. Perron \cite{Per1,Per2} stand out (see also \cite{Gel}).
Assume that
\begin{equation}
\label{limit}
\lim_{n\to \infty} \alpha_{n,j} = \alpha_j, \qquad j=1,\ldots,m, \qquad \alpha_m \neq 0.
\end{equation}
Define the so called characteristic polynomial of \eqref{eq:1}
\begin{equation}
\label{char}
p(z) = z^m + \alpha_1 z^{m-1} + \cdots + \alpha_m = \prod_{j=1}^m (z - \lambda_j)
\end{equation}
\noindent
{\bf Poincar\'e's Theorem.} Suppose that $0 < |\lambda_1| < \cdots < |\lambda_m|$. Then, any solution $(f_n)_{n\geq 0}$ of \eqref{eq:1} verifies that either $f_n = 0, n\geq n_0,$ or
\begin{equation}
\label{ratio}
\lim_{n\to \infty} \frac{f_{n+1}}{f_n} = \lambda_k,
\end{equation}
where $\lambda_k$ is one of the roots of the characteristic polynomial.
\noindent
{\bf Perron's Theorem.} Suppose that $0 < |\lambda_1| < \cdots < |\lambda_m|$ and $\alpha_{n,m} \neq 0, n \geq m$. Then, there exists a fundamental system of solutions $(f^{(k)}_{n})_{n\geq 0}, k=1,\ldots,m$ of \eqref{eq:1} such that
\begin{equation}
\label{ratio2}
\lim_{n\to \infty} \frac{f^{(k)}_{n+1}}{f^{(k)}_{n}} = \lambda_k, \qquad k=1,\ldots,m.
\end{equation}
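To illustrate both statements in the simplest, constant-coefficient setting (this example is ours), take $p(z) = (z-1)(z-2)(z-3) = z^3 - 6z^2 + 11z - 6$, i.e. the recurrence $f_n = 6f_{n-1} - 11f_{n-2} + 6f_{n-3}$:

```python
# For generic initial data the ratio f_{n+1}/f_n tends to the root of largest
# modulus (here 3); for special initial data it may tend to another root.
def ratio(f0, f1, f2, steps=60):
    f = [f0, f1, f2]
    for _ in range(steps):
        f.append(6 * f[-1] - 11 * f[-2] + 6 * f[-3])
    return f[-1] / f[-2]

print(ratio(0, 1, 2))  # -> 3.0 (the component along 3^n is nonzero)
print(ratio(1, 1, 1))  # -> 1.0 (here f_n = 1 for all n)
```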
Each solution $(f_n)_{n\geq 0}$ of \eqref{eq:1} can be associated with a Taylor series. Namely,
\begin{equation}
\label{Taylor}
f(z) = \sum_{n\geq 0} f_n z^n.
\end{equation}
In the sequel we will frequently identify a solution of \eqref{eq:1} and its associated Taylor series. The analytic properties of an analytic element are encoded in its Taylor coefficients. For example, under the assumptions of Perron's Theorem, if $f^{(k)}(z)$ denotes the Taylor series associated with $(f^{(k)}_{n})_{n\geq 0}$ from \eqref{ratio2}, it immediately follows that the radius of convergence $R_0(f^{(k)})$ of $f^{(k)}$ equals $1/|\lambda_k|$. (Do not confuse $f^{(k)}$ with a derivative.) Thanks to a very deep result of E. Fabry \cite{fabry}, more can be said.
\noindent {\bf Fabry's Theorem.} Given a Taylor series $f$ whose coefficients verify \eqref{ratio}, we have that $R_0(f) = |\lambda_k|^{-1}$ and $\lambda_k^{-1}$ is a singular point of $f$.
The object of this paper is to extend the results of Poincar\'e and Perron to more general recurrence relations for which the zeros of the characteristic polynomial do not necessarily have different absolute value, and describe some of the analytic properties of the functions associated with a system of fundamental solutions of the recurrence relation.
Set
\begin{equation}
\label{alphan}
\alpha_n(z) := 1 + \alpha_{n,1}z + \cdots + \alpha_{n,m}z^m.
\end{equation}
Unless otherwise stated, in the sequel we will assume that $\alpha_{n,m} \neq 0, n \geq m$.
If $[f]_n$ denotes the $n$-th Taylor coefficient of a formal power series $f$, then \eqref{eq:1} adopts the form
\begin{equation}
\label{eq:2}
[f\alpha_n]_n = 0, \qquad n \geq m.
\end{equation}
Let
\[ {\mathcal{P}}_{n} = \{\zeta_{n,1},\ldots,\zeta_{n,m}\},
\quad n\ge m,
\]
denote the collection of zeros of $\alpha_{n}$ repeated according to
their multiplicity. Set
$$
S=\sup_{N\ge m}\inf_{n \geq N}\left\{|\zeta_{n,k}|: \zeta_{n,k} \in {\mathcal{P}}_{n} \right\}
$$
and
$$
G=\inf_{N\ge m}\sup_{n \geq N}\left\{|\zeta_{n,k}|: \zeta_{n,k} \in {\mathcal{P}}_{n}\right\}.
$$
The following result is a consequence of \cite[Theorem 2.2]{cacoq2}.
\begin{Theorem} Assume that $S > 0$ and $G < +\infty$. Then, any nontrivial solution $f$ of \eqref{eq:1} verifies $0 < c \leq R_0(f) \leq C < +\infty$, where $c$ and $C$ only depend on $S$ and $G$.
\label{teo:1}
\end{Theorem}
This means that under such general conditions, every solution of \eqref{eq:1} has a singularity in the annulus $\{z: c \leq |z| \leq C\}$. A more precise result may be given when the sequence of polynomials $(\alpha_n)_{n \geq m}$ has a limit.
In the following, we assume that \eqref{limit} takes place and
\[\lim_{n \to \infty} \alpha_n(z) := \alpha(z) = \prod_{k=1}^m \left(1 - \frac{z}{\zeta_k}\right) ,\qquad \deg (\alpha) = m.\]
According to \eqref{char}, we have $\alpha(z) = z^{m}p(1/z)$; therefore, the zeros of $\alpha$ and those of the characteristic polynomial $p$ are reciprocals of one another.
Putting together Theorems 1 and 2 of V.~I. Buslaev in \cite{Bus1}, we can formulate the following:
\noindent
{\bf Buslaev's Theorem.} Assume that \eqref{limit} takes place and $f$ is a non-trivial solution of \eqref{eq:1}. Then $R_0(f)$ is equal to the absolute value of one of the zeros of $\alpha$, the coefficients of $f$ satisfy a reduced recurrence relation of the form
\begin{equation}
\label{cociente}
f_n + \beta_{n,1}f_{n-1} + \cdots + \beta_{n,\ell} f_{n-\ell} = 0,\qquad \lim_n \beta_{n,k} = \beta_k, \qquad k=1,\ldots,\ell,
\end{equation}
where $\ell (\leq m)$ is equal to the number of zeros of $\alpha$ on the circle $\{z: |z| = R_0(f)\}$, all the zeros of $\beta(z) = 1+\beta_1z + \cdots + \beta_\ell z^{\ell}$ lie on that circle, $\beta$ divides $\alpha$, and at least one of its zeros is a singular point of $f$.
When all the zeros of $\alpha$ have distinct absolute value, it is easy to see that Buslaev's theorem reduces to Poincar\'e's result. A natural question arises: is there a fundamental system of solutions of \eqref{eq:1} such that every zero of $\alpha$ is a singular point of at least one function in the system?
The answer to this question is positive if it is known in advance that \eqref{eq:1} has a fundamental system of solutions of the form $\{f,zf,\ldots, z^{m-1}f\}$ for some (formal) Taylor expansion $f$ about the origin. This is a consequence of a result of S.P. Suetin \cite[Theorem 1]{suetin1985} where he solves a conjecture posed by A. A. Gonchar in \cite{gon2} in the context of inverse type results for row sequences of Pad\'e approximants. In the last section, we will see the connection between the study of row sequences of Pad\'e (and Hermite-Pad\'e) approximation and the study of the general theory of recurrence relations such as \eqref{eq:1}. In \cite[Theorem 1]{gon2}, again within the study of row sequences of Pad\'e approximation, Gonchar studied the case when
\begin{equation}
\label{rate}
\limsup_{n\to \infty} \|\alpha_n - \alpha\|^{1/n} = \theta < 1;
\end{equation}
that is, when the zeros of the polynomials $\alpha_n$ converge with geometric rate to the zeros of $\alpha$. In \eqref{rate}, $\|\cdot\|$ represents (for example) the coefficient norm in the finite dimensional space of polynomials of degree $\leq m$. In this situation, it may be proved that $f$ has in the disk of radius $\max\{|\zeta_k|: k=1,\ldots, m\}/\theta$ exactly $m$ poles which coincide, taking account of their order, with the zeros of $\alpha$ taking account of their multiplicity. Additionally, $\max\{|\zeta_k|: k=1,\ldots, m\}/\theta$ is the radius of the largest disk centered at the origin to which $f$ has a meromorphic extension with no more than $m$ poles. Gonchar's theorem was extended to the context of row sequences of (type II) Hermite-Pad\'e approximation in \cite[Theorem 1.4]{cacoq2}. As we shall see in the final section, the study of row sequences of Hermite-Pad\'e approximation is equivalent to the study of general solutions of recurrence relations of type \eqref{eq:1}. Having this in mind, we will present an extension of the Poincar\'e and Perron theorems to general recurrence relations \eqref{eq:1} verifying \eqref{limit} for which some of the zeros of the polynomials $\alpha_n$ need not converge geometrically.
\subsection{Statement of the main result.} Without loss of generality, let the zeros of $\alpha$ be enumerated in such a way that
\begin{equation}
\label{enumeration}
0 < |\zeta_1| \leq |\zeta_2| \leq \cdots \leq |\zeta_m|.\end{equation}
If several $\zeta_k$ coincide, they are enumerated consecutively. We shall also assume that the zeros in the collections of points $\mathcal{P}_n$ are indexed so that
\begin{equation}
\label{limit2}
\lim_{n\to \infty} \zeta_{n,k} = \zeta_k, \qquad k=1,\ldots,m.
\end{equation}
Several circles centered at the origin may contain more than one zero of $\alpha$ (or a zero of $\alpha$ of multiplicity greater than $1$). Let $C$ be one such circle (if any). Let $\zeta_j,\zeta_{j+1},\ldots,\zeta_{j+N-1}$ be the zeros of $\alpha$ lying on $C$. In this case, we assume that
\begin{equation}
\label{several}
\limsup_{n\to \infty} |\zeta_{n,k} - \zeta_k|^{1/n} < 1, \qquad k=j,\ldots,j+N-1.
\end{equation}
The existence of such a circle $C$ is not required. If such a circle does not exist, all the zeros of $\alpha$ have distinct absolute value and we are in the situation of the Poincar\'e and Perron theorems.
\begin{Theorem}
\label{teo:2}
Assume that \eqref{limit2} and \eqref{several} take place. Then there is a fundamental system of solutions $\{f^{(1)},\ldots,f^{(m)}\}$ of \eqref{eq:1} such that $R_0(f^{(k)}) = |\zeta_k|, k=1,\ldots,m,$ and $\zeta_k$ is a singular point of $f^{(k)}$. Each $\zeta_k$ verifying \eqref{several} is a pole of $f^{(k)}$. Moreover, if $\zeta_k$ is a zero of multiplicity $\tau$ and $\zeta_k = \zeta_{k+1} = \cdots = \zeta_{k+\tau-1}$ then for each $s= 1,\ldots,\tau$, $f^{(k + s-1)}$ is analytic in a disk of radius larger than $|\zeta_k|$ except for a pole of exact order $s$ at $\zeta_k$.
\end{Theorem}
If the zeros of $\alpha$ have distinct absolute value, the statement of Theorem \ref{teo:2} is deduced directly from the Perron and Fabry theorems. When \eqref{several} takes place for $k=1,\ldots,m$, the thesis of Theorem \ref{teo:2} is a consequence of $(b) \Rightarrow (a)$ in \cite[Theorem 1.4]{cacoq2}.
\section{Proof of Theorems \ref{teo:1} and \ref{teo:2}.}
\subsection{Proof of Theorem \ref{teo:1}.} This result is a particular case of \cite[Theorem 2.2]{cacoq2}. To see this, for the benefit of the reader, we establish a connection between the notation employed here and in that paper.
Let $f$ be any non-trivial solution of \eqref{eq:1} and $t_n = t_n(f)$ the polynomial part of degree $\leq n-1$ of $\alpha_n f$. Due to \eqref{eq:1} and \eqref{eq:2} it follows that
\[(\alpha_n f - t_n)(z) = \mathcal{O}\left({z^{n+1}}\right).\]
This means that the rational function $t_n/\alpha_n$ is what in \cite{cacoq2} is called an $(n,m,m^*)$ incomplete Pad\'e approximation of $f$ with $m^* = 1$.
In \cite{cacoq2}, $\alpha_n$ is denoted $q_{n,m}$, $t_n$ is denoted $p_{n,m}$, $\lambda_n$ is the degree of the common zero which $p_{n,m}$ and $q_{n,m}$ may have at $z=0$, $m_n = \deg(q_{n,m})$, and $\tau_n = \min\{n-m^*-\lambda_n - \deg(p_{n,m}),m-\lambda_n - m_n\}$. In our case, since $\alpha_n(0) = 1$ and $\deg(q_{n,m}) = m$, we have that $\lambda_n = 0, m_n = m, \tau_n = 0,$ and $m^* = 1$. Thus, the assumptions in (i) and (ii) of \cite[Theorem 1.2]{cacoq2} are fulfilled and, therefore, $0 < R_0(f) < \infty$ as claimed. The proof gives lower and upper estimates of $R_0(f)$ depending on $S$ and $G$ which imply the last assertion of Theorem \ref{teo:1}.
$\Box$
\subsection{Comments on Buslaev's Theorem.} In \cite[Theorem 1]{Bus1} it is required that the solution considered is not a polynomial. We have stipulated that $\alpha_{n,m} \neq 0, n \geq m$. This restriction implies that any non-trivial solution cannot be a polynomial. In fact, the contrary would imply that $f_n = 0$ for all $n \geq n_0$ and, solving the recurrence backwards (taking advantage of the fact that $\alpha_{n,m} \neq 0, n \geq m$), we conclude that $f_n = 0, n\geq 0,$ and the solution would be the trivial one.
In \cite[Theorem 2]{Bus1} the author imposes that $0 < R_0(f) < \infty$. Our assumptions imply this, as Theorem \ref{teo:1} shows. The polynomials that we have denoted $\alpha_n$ play the role of the functions $\alpha_n$ in \cite[Theorem 2]{Bus1}.
\subsection{Some auxiliary results.}
The proof of Theorem \ref{teo:2} is somewhat constructive. We start out from a fundamental system of solutions of \eqref{eq:1} and through analytic continuation, carried out in successive steps, we find another fundamental system of solutions which fulfills the desired properties. As we carry out these steps, we find collections of solutions which according to Buslaev's theorem have radius of convergence equal to the absolute value of a zero of $\alpha$. On any such circle, there may fall one or several zeros of $\alpha$. The proof distinguishes two cases. The first is when all the zeros on the circle satisfy \eqref{several}. In this case, the analytic continuation is based on \cite[Corollary 2]{Bus1}. If the circle contains only one zero of $\alpha$ of multiplicity 1 and we do not have \eqref{several}, we adapt a proof of Perron's Theorem given by M.A. Evgrafov in \cite{Evg} to continue the process.
Here, we state \cite[Corollary 2]{Bus1} in the form of a lemma for the reader's convenience. We wish to mention that \cite[Theorem 2.6]{cacoq2} plays the same role when all the zeros satisfy \eqref{several}.
\begin{Lemma}
\label{lem:1}
Suppose that the assumptions of Buslaev's theorem hold and $f$ is a non trivial solution of \eqref{eq:1}. Let $\zeta_j,\ldots,\zeta_{j+N-1}, N \geq 1,$ be the zeros of $\alpha$ on the circle $\{z: |z| = R_0(f)\}$ and suppose \eqref{several} takes place. Then $R_0(g) > R_0(f)$, where $g(z) = \prod_{k = j}^{j+N-1}(z - \zeta_k) f(z)$.
\end{Lemma}
Evgrafov's proof of Perron's theorem is based on several lemmas. His paper has not been translated, so for completeness we include the proofs of those lemmas that we will use. The next two lemmas are identical to his statements. Lemma \ref{lem:4} has been adapted to cover our more general situation.
\begin{Lemma}
\label{lem:2}
Let $(f_n)_{n\geq 0}$ be a solution of \eqref{eq:1} and let $(\gamma_n)_{n \geq 0}$ be a sequence such that $\gamma_n \neq 0, n\geq 0$. Then $(F_n = f_n/\gamma_n)_{n\geq 0}$ is a solution of the recurrence relation
\begin{equation}
\label{eq:3}
F_n + \alpha_{n,1}'F_{n-1} + \cdots + \alpha_{n,m}'F_{n-m} = 0, \qquad n \geq m,\end{equation}
where
\[\alpha_{n,j}' = \alpha_{n,j} \frac{\gamma_{n-j}}{\gamma_{n}}, \qquad j=1,\ldots,m.\]
Moreover, if $\,\lim_{n\to \infty} \gamma_{n+1}/\gamma_n = 1$ and the $\alpha_{n,j}, j=1,\ldots,m,$ verify \eqref{limit} then so do the $\alpha_{n,j}', j=1,\ldots,m $ and the recurrences \eqref{eq:1} and \eqref{eq:3} have the same characteristic polynomial.
\end{Lemma}
\noindent {\bf Proof.} Indeed, if we divide \eqref{eq:1} by $\gamma_n$, we obtain that for all $n\geq m$
\[0= \frac{f_n}{\gamma_n} + \alpha_{n,1}\frac{\gamma_{n-1}}{\gamma_n}\frac{f_{n-1}}{\gamma_{n-1}} + \cdots + \alpha_{n,m} \frac{\gamma_{n-m}}{\gamma_n}\frac{f_{n-m}}{\gamma_{n-m}},\]
which is \eqref{eq:3}.
Now, $\lim_{n\to \infty} \gamma_{n+1}/\gamma_n = 1$ implies that $\lim_{n\to \infty} \gamma_{n-j}/\gamma_n = 1, j=1,\ldots,m$, so from \eqref{limit} it follows that
\[\lim_{n\to \infty} \alpha_{n,j}' = \lim_{n\to \infty} \alpha_{n,j} = \alpha_j,\qquad j=1,\ldots,m.\]
Therefore, in that case both equations have the same characteristic polynomial.
$\Box$
It is easy to check that $(f_n^{(j)})_{n\geq 0}$, $j=1,\dots,N, 1\leq N \leq m,$ are linearly independent solutions of \eqref{eq:1} if and only if
$(f_n^{(j)}/\gamma_n)_{n\geq 0}$, $j=1,\dots, N,$ (where $\gamma_n\neq 0$, $n\geq 0$) are linearly independent solutions of \eqref{eq:3}.
\begin{Lemma}
\label{lem:3}
If equation \eqref{eq:1} has the solution $(\lambda^n)_{n\geq 0}, \lambda \neq 0$, then it may be rewritten in the form
\begin{equation}
\label{eq:4}
F_{n-1} + \beta_{n,1} F_{n-2} + \cdots + \beta_{n,m-1}F_{n-m} = 0
\end{equation}
where
\[F_n = f_{n+1} - \lambda f_n.\]
Additionally, if the $\alpha_{n,j}$ verify \eqref{limit} then
\begin{equation}
\label{limit*}
\lim_{n\to \infty} \beta_{n,j} = \beta_j, \qquad j = 1,\ldots,m-1, \qquad \beta_{m-1} \neq 0
\end{equation}
and the characteristic polynomials of \eqref{eq:1} and \eqref{eq:4} are connected by the relation
\begin{equation}
\label{conexion}
(z - \lambda)(z^{m-1} + \beta_1 z^{m-2} + \cdots + \beta_{m-1}) = (z^{m} + \alpha_1 z^{m-1} + \cdots + \alpha_{m})
\end{equation}
\end{Lemma}
\noindent {\bf Proof.} Consider the polynomial
\[p_n(z) = z^m + \alpha_{n,1}z^{m-1}+ \cdots + \alpha_{n,m}.\]
Substituting the solution $(\lambda^n)_{n\geq 0}$ in \eqref{eq:1} and factoring out $\lambda^{n-m}$, we get
\[p_n(\lambda) = \lambda^m + \alpha_{n,1}\lambda^{m-1}+ \cdots + \alpha_{n,m} = 0;\]
therefore,
\[ \frac{p_n(z)}{z -\lambda} = z^{m-1} + \beta_{n,1} z^{m-2} + \cdots + \beta_{n,m-1}. \]
That is
\begin{equation}
\label{conexion*} p_n(z) = (z-\lambda)(z^{m-1} + \beta_{n,1} z^{m-2} + \cdots + \beta_{n,m-1}).
\end{equation}
Equating coefficients of equal powers of $z$, we obtain
\begin{equation}
\label{relacion}
\beta_{n,j}-\lambda \beta_{n,j-1} = \alpha_{n,j}, \qquad j=1,\ldots,m, \qquad \beta_{n,0} = 1, \qquad \beta_{n,m} = 0.
\end{equation}
In particular,
\[\beta_{n,m-1} = - \alpha_{n,m}/\lambda \neq 0.\]
From \eqref{relacion} the existence of $\lim_{n\to \infty} \alpha_{n,j}$ and $\lim_{n\to \infty} \beta_{n,j-1}$ imply the existence of
$\lim_{n\to \infty} \beta_{n,j}$. Since $\beta_{n,0} = 1$, it follows that \eqref{limit} implies \eqref{limit*}, and then \eqref{conexion} follows by taking the limit as $n\to\infty$ in \eqref{conexion*}.
It remains to verify \eqref{eq:4}. Put $F_n = f_{n+1} - \lambda f_n$ in the left-hand side of \eqref{eq:4}. Using \eqref{relacion} and \eqref{eq:1}, we get
\[F_{n-1} + \beta_{n,1} F_{n-2} + \cdots + \beta_{n,m-1}F_{n-m} = \]
\[f_{n} - \lambda f_{n-1} + \beta_{n,1}(f_{n-1} - \lambda f_{n-2}) + \cdots + \beta_{n,m-1}(f_{n-m+1} - \lambda f_{n-m}) = \]
\[ f_n + (\beta_{n,1} - \lambda \beta_{n,0}) f_{n-1} + \cdots + (\beta_{n,m} -\lambda \beta_{n,m-1})f_{n-m} = 0\]
as we needed to prove.
$\Box$
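The recursion \eqref{relacion} for the coefficients $\beta_{n,j}$ is precisely synthetic division of $p_n$ by $(z-\lambda)$. The following minimal sketch (the function name is ours, for illustration only) computes the deflated coefficients from $\beta_j = \alpha_j + \lambda\beta_{j-1}$, $\beta_0 = 1$:

```python
def deflate(alpha, lam):
    """Coefficients (beta_1,...,beta_{m-1}) of p(z)/(z - lam), where
    p(z) = z^m + alpha[0]*z^(m-1) + ... + alpha[m-1] and p(lam) = 0.
    Implements the relation beta_j = alpha_j + lam*beta_{j-1}, beta_0 = 1."""
    beta, prev = [], 1.0            # beta_{n,0} = 1
    for a_j in alpha[:-1]:          # j = 1,...,m-1
        prev = a_j + lam * prev
        beta.append(prev)
    # consistency: beta_m = alpha_m + lam*beta_{m-1} must vanish since p(lam) = 0
    assert abs(alpha[-1] + lam * prev) < 1e-9
    return beta
```

For instance, deflating $p(z)=z^3-6z^2+11z-6=(z-1)(z-2)(z-3)$ at $\lambda=1$ returns the coefficients of $z^2-5z+6$.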
Assuming that $(\lambda^n)_{n\geq 0}, \lambda \neq 0,$ is a solution of \eqref{eq:1}, it is easy to verify that $(f_n^{(j)})_{n\geq 0}$, $j=1,\ldots, N (\leq m-1)$ and $(\lambda^n)_{n\geq 0}$ constitute a system of linearly independent solutions of \eqref{eq:1} if and only if $(F_n^{(j)})_{n\geq 0}$, $j=1,\ldots,N,$ is a system of linearly independent solutions of \eqref{eq:4}, where $F_n^{(j)}=f_{n+1}^{(j)} -\lambda f_n^{(j)}, j=1,\ldots,N$.
\begin{Lemma}
\label{lem:4}
Suppose that $(F_n)_{n\geq 0}$ is such that $\limsup_{n\to \infty} |F_n|^{1/n} = \mu$, $\mu \neq |\lambda|$. Then, there exists a solution $(f_n)_{n\geq 0}$ of the equations $F_n = f_{n+1} - \lambda f_n$ such that
$\limsup_{n\to \infty} |f_n|^{1/n} = \mu$.
\end{Lemma}
\noindent {\bf Proof.} We give two different expressions for the solution $(f_n)_{n\geq 0}$ depending on whether $\mu < |\lambda|$ or $\mu > |\lambda|$.
In the first case we set
\begin{equation}
\label{sol1}
f_n = - \frac{F_n}{\lambda} - \frac{F_{n+1}}{\lambda^2} - \frac{F_{n+2}}{\lambda^3} - \cdots ,
\end{equation}
and in the second
\begin{equation}
\label{sol2}
f_n = F_{n-1} + \lambda F_{n-2} + \cdots + \lambda^{n-1} F_0.
\end{equation}
We will see shortly that the series in \eqref{sol1} converges for each $n$. Granting this, it is easy to verify that the sequence $(f_n)_{n\geq 0}$ so defined satisfies the required equations.
Let us verify that the numbers $f_n$ in \eqref{sol1} are finite and $\limsup_{n\to \infty} |f_n|^{1/n} = \mu$. Indeed, take $\varepsilon > 0$ such that $\mu + \varepsilon < |\lambda|$. From the assumption on the $F_n$ we get that for some constant $C \geq 1$
\[|F_n| \leq C(\mu + \varepsilon)^n, \qquad n \geq 0.\]
Consequently,
\begin{equation}
\label{acota}
|f_n| \leq C\left(\frac{(\mu + \varepsilon)^n}{|\lambda|} + \frac{(\mu + \varepsilon)^{n+1}}{|\lambda|^2} + \frac{(\mu + \varepsilon)^{n+2}}{|\lambda|^3} + \cdots \right) = \frac{C (\mu + \varepsilon)^n }{|\lambda| - (\mu + \varepsilon)} < \infty.
\end{equation}
Moreover, \eqref{acota} implies that
\[\limsup_{n\to \infty }|f_n|^{1/n} \leq \mu + \varepsilon.\]
Letting $\varepsilon$ tend to zero we get that $\limsup_{n\to \infty }|f_n|^{1/n} \leq \mu $. On the other hand, $|F_n| \leq |f_{n+1}| + |\lambda f_n|$. This in turn implies that $\mu = \limsup_{n\to \infty} |F_n|^{1/n} \leq \limsup_{n\to \infty} |f_n|^{1/n}$, so indeed we have equality.
When $\mu > |\lambda|$, we proceed analogously. Using \eqref{sol2}, we have
\begin{equation}
\label{acota2}
|f_n| \leq C (\mu + \varepsilon)^{n-1} \left(1 + \frac{|\lambda|}{\mu + \varepsilon} + \cdots + \frac{|\lambda|^{n-1}}{(\mu + \varepsilon)^{n-1}} \right) \leq \frac{C (\mu + \varepsilon)^{n}}{\mu + \varepsilon - |\lambda|}.
\end{equation}
From \eqref{acota2} we get that $\limsup_{n\to \infty} |f_n|^{1/n} \leq \mu$, and equality is derived just as before.
$\Box$
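Formula \eqref{sol2} can be evaluated with the one-step recursion $f_0 = 0$, $f_n = F_{n-1} + \lambda f_{n-1}$, which also makes the functional equation $F_n = f_{n+1}-\lambda f_n$ immediate. A short sketch (the function name is ours, for illustration):

```python
def particular_solution(F, lam):
    """Formula (sol2): f_0 = 0 and
    f_n = F_{n-1} + lam*F_{n-2} + ... + lam^(n-1)*F_0,
    computed via the equivalent recursion f_n = F_{n-1} + lam*f_{n-1}."""
    f = [0.0]
    for n in range(1, len(F) + 1):
        f.append(F[n - 1] + lam * f[n - 1])
    return f
```

For $F_n = \mu^n$ with $\mu > |\lambda|$ the output grows like $\mu^n$, in agreement with the lemma.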
\section{Proof of Theorem \ref{teo:2}.}
\noindent
Let ${\bf f}^1 = ( {f}^{1,1}, {f}^{1,2}, \ldots, {f}^{1,m})$ be a fundamental system of solutions of \eqref{eq:1} and $\zeta_1,\ldots,\zeta_m$ the collection of zeros of $\alpha$ enumerated as in \eqref{enumeration}. According to Buslaev's theorem,
\[0 < |\zeta_1| \leq R_0( {f}^{1,j}) \leq |\zeta_m| < \infty, \qquad j=1,\ldots,m.\]
Let $D_0({\bf f}^1)$ denote the intersection of all the disks centered at the origin with radii $R_0( {f}^{1,j}), j= 1,\ldots,m,$ and denote its boundary by $\mathcal{C}_1$. By Buslaev's theorem, the circle $\mathcal{C}_1$ of radius $R_1:= R_0({\bf f}^1)$ contains at least one zero of $\alpha$. Moreover, some (at least one) of the functions in ${\bf f}^1$ have radius of convergence equal to $R_1 = R_0({\bf f}^1)$. Let $\zeta_j,\ldots,\zeta_{j+N-1}$ be the collection of all the zeros of $\alpha$ lying on $\mathcal{C}_1$. We distinguish the cases when $N\geq 2$ and when $N = 1$. We remark that so far we cannot assert that $|\zeta_j| = |\zeta_1|$; in principle, it may be larger.
Suppose that $N \geq 2$. In this case, according to our assumptions, \eqref{several} takes place. Due to Lemma \ref{lem:1}, $R_0(\prod_{k=j}^{j+N-1}(z - \zeta_k) {f}^{1,\ell}) > R_0({\bf f}^1)= R_1, \ell=1,\ldots,m$; that is, either $ {f}^{1,\ell}$ has radius of convergence larger than $R_1$ to start with or $ {f}^{1,\ell}$ has at most poles on $\mathcal{C}_1$, located at zeros of $\alpha$ and of order less than or equal to the multiplicity of the corresponding zero of $\alpha$.
Let us find coefficients
$c_1,\ldots,c_{ m}$ such that
\begin{equation} \label{lasg} \sum_{\ell=1}^{ m} c_\ell {f}^{1,\ell}
\end{equation}
is analytic in a neighborhood of $\overline{D_0(\mathbf{f}^1)}$.
Finding the coefficients $c_\ell$ reduces to solving a homogeneous linear
system of $N$ equations on $m$ unknowns. In
fact, if $\zeta$ is one of the zeros of $\alpha$ on $\mathcal{C}_1$ and it has multiplicity $\tau$ we obtain $\tau$ equations choosing the
coefficients $c_\ell$ so that
\begin{equation} \label{ecuacion} \int_{|\omega - \zeta| =
\delta} (\omega - \zeta)^\nu \left(\sum_{\ell=1}^{m} c_\ell
{f}^{1,\ell}(\omega)\right) d\omega = 0, \qquad \nu=0,\ldots,\tau -1,
\end{equation}
where $\delta$ is sufficiently small. We do the same with each
distinct zero of $\alpha$ on $\mathcal{C}_1$. The homogeneous linear system of $N$ equations so
obtained has $m - N_1, N_1 \leq N,$ linearly independent solutions, where $N_1$ equals the rank of the linear system of equations.
Denote the solutions of the linear system by ${\bf c}^{1,j},$ $ j=N_1 + 1,\ldots,m $.
Set
\[ {\bf c}^{1,j} = (c_{1}^{1,j},\ldots,c_{m}^{1,j}),
\qquad j=N_1 +1,\dots,m ,
\]
and
\[ {f}^{2,j} = \sum_{\nu=1}^m c_{\nu}^{1,j} {f}^{1,\nu}, \qquad j=N_1+1,\ldots,m.\]
We wish to emphasize several points:
\begin{enumerate}
\label{properties}
\item The collection of functions ${\bf f}^2 = ( {f}^{2,N_1 +1},\ldots, {f}^{2,{m }})$ is made up of nontrivial linearly independent solutions of \eqref{eq:1}.
\item Because of \eqref{ecuacion}, $R_0( {f}^{2,j}) > R_1, j=N_1+1,\ldots,m.$
\item If $N_1 = N$; that is, the system of equations has full rank, then the system is solvable if for some specific value of $\nu =0,\ldots,\tau-1$ in \eqref{ecuacion}, instead of equating the left-hand side to zero, we equate it to $1$. Doing this for each zero $\zeta$ of $\alpha$ on $\mathcal{C}_1$ and for each $\nu=0,\ldots,\tau-1$, we obtain $N_1$ linearly independent solutions of \eqref{eq:1} which are meromorphic on a neighborhood of $\overline{D_0({\bf f}^1)}$ except for a pole of exact order $\nu+1$ at $\zeta$. On this circle, this would settle the last statement of the theorem. (We will show that on each circle containing more than one zero of $\alpha$ the corresponding system of equations has full rank. This conclusion will be drawn at the very end of the proof of the theorem.)
\end{enumerate}
Now let us suppose that $\mathcal{C}_1$ contains only one zero $\zeta_j$ of $\alpha$ of multiplicity $1$; that is, $N=1$. If, nevertheless,
\eqref{several} takes place we could proceed as before, so we will not use this restriction in the arguments that follow. We have,
\[ R_0( {f}^{1,\nu}) \geq R_1 = R_0({\bf f}^1) = |\zeta_j|, \qquad \nu = 1,\ldots,m,\]
with equality for some $\nu$. Without loss of generality we can assume
\[R_0( {f}^{1,1}) = \cdots = R_0( {f}^{1,M}) = R_1, \qquad 1\leq M \leq m,\]
and $R_0(f^{1,\nu}) > R_1, \nu =M+1,\ldots, m$ (if any).
According to \eqref{cociente} (with $\ell = 1$) and the fact that $\zeta_j$ is the unique zero of $\alpha$ on $\mathcal{C}_1$, we have
\begin{equation}
\label{coc2}
\lim_{n\to \infty} \frac{ {f}^{1,\nu}_{n-1}}{ {f}^{1,\nu}_n} = \zeta_j, \qquad \nu = 1,\ldots,M,
\end{equation}
where $( {f}^{1,\nu}_n)_{n\geq 0}$ denotes the collection of Taylor coefficients of $ {f}^{1,\nu}$, and $\zeta_j$ is a singular point of each $ {f}^{1,\nu}, \nu =1,\ldots,M$. Should $M=1$, we have obtained one solution of \eqref{eq:1}, namely $ {f}^{1,1}$, with radius of convergence $R_1$, and the remaining solutions in ${\bf f}^1$ have radius of convergence larger than $R_1$. We aim to show that if $M > 1$ we can also find one solution of \eqref{eq:1} with radius $R_1$ and an additional $m-1$ linearly independent solutions of \eqref{eq:1} (not necessarily $ {f}^{1,2},\ldots, {f}^{1,m}$) with radius larger than $R_1$.
Without loss of generality, we can assume that $ {f}^{1,1}_n \neq 0, n \geq 0$. Indeed, \eqref{coc2} entails that $ {f}^{1,1}_n \neq 0, n \geq n_0$. Let $T_n( {f})$ be the Taylor polynomial of $ {f}$ of degree $n-1$. Consider the collection of functions $\hat{ {f}}^1,\ldots,\hat{ {f}}^M$, where
\[\hat{ {f}}^\nu(z) = ({ {f}}^{1,\nu}(z) - T_{n_0}({ {f}}^{1,\nu})(z))/z^{n_0}, \qquad \nu =1,\ldots, M.\]
It is easy to verify that these functions are linearly independent, satisfy the recurrence \eqref{eq:1} with the indices $n$ shifted by $n_0$, have radii of convergence equal to $R_1$, have the same singularities as the corresponding $ {f}^{1,\nu}$ on $\mathcal{C}_1$, and $\hat{ {f}}^1_{n} \neq 0, n\geq 0$. Should it be necessary, we derive the desired properties of the functions ${ {f}}^{1,1},\ldots,{ {f}}^{1,M}$ from $\hat{ {f}}^1 ,\ldots,\hat{ {f}}^M$.
Let $\lambda = \zeta_j^{-1}$. This point is a root of the characteristic polynomial $p(z)= z^{m}\alpha(1/z)$ of \eqref{eq:1}. Set
\[\gamma_n = {f}^{1,1}_n/\lambda^n,\qquad n \geq 0. \]
According to Lemma \ref{lem:2}, $(\lambda^n)_{n \geq 0}, \lambda^n = { {f}^{1,1}_{n}}/{\gamma_n},$ is a solution of \eqref{eq:3}. From \eqref{coc2} we have
\begin{equation}
\label{gamma}
\lim_{n\to \infty} \frac{\gamma_{n+1}}{\gamma_n} = \frac{1}{\lambda}\lim_{n\to \infty} \frac{ {f}^{1,1}_{n+1}}{ {f}^{1,1}_{n}} = 1.
\end{equation}
Consequently, the recurrences \eqref{eq:1} and \eqref{eq:3} have the same characteristic polynomial. Aside from the solution $(\lambda^n)_{n \geq 0}$, \eqref{eq:3} also has the solutions $( {f}^{1,\nu}_{n}/\gamma_n)_{n\geq 0}, \nu = 2,\ldots,M$. Using Lemma \ref{lem:3}, with these solutions of \eqref{eq:3} we can construct $M-1$ linearly independent solutions $(F^{\nu}_{ n})_{n \geq 0}, \nu=2,\ldots,M,$ of \eqref{eq:4} where
\begin{equation}
\label{radii}
F^{\nu}_{n} = \frac{ {f}^{1,\nu}_{n+1}}{\gamma_{n+1}} - \lambda \frac{ {f}^{1,\nu}_{n}}{\gamma_{n}}, \qquad n \geq 0, \qquad \nu=2,\ldots,M.
\end{equation}
The radii of convergence of the functions $ {f}^{1,\nu}, \nu =2,\ldots,M,$ equal $R_1 = 1/|\lambda|$. This together with \eqref{gamma} and \eqref{radii} implies that
\[\limsup_{n\to \infty} |F^{\nu}_{n}|^{1/n} \leq 1/R_1, \qquad \nu =2,\ldots,M.\]
According to Buslaev's theorem, for each $\nu = 2, \ldots, M$ the radius of convergence of $F^{\nu}(z) = \sum_{n=0}^{\infty} F^{\nu}_{n} z^n$ is equal to the reciprocal of the absolute value of one of the zeros of the characteristic polynomial $\hat{p}(z)=z^{m-1} + \beta_1 z^{m-2} + \cdots + \beta_{m-1}$ associated with \eqref{eq:4}. According to \eqref{conexion} and our assumptions, $\lambda$ is not a zero of $\hat{p}$. Therefore,
\[\limsup_{n\to \infty} |F^{\nu}_{n}|^{1/n} := \mu_{\nu} < 1/R_1, \qquad \nu = 2, \ldots, M. \]
Using Lemma \ref{lem:4} and Lemma \ref{lem:3}, with formula \eqref{sol1} we can find $M-1$ linearly independent solutions $\hat{ {f}}^{\nu},\nu=2,\ldots,M,$ of the recurrence \eqref{eq:4} whose radii of convergence are greater than $R_1$. Now, using again Lemma \ref{lem:2} with $\gamma_n = \lambda^n/ {f}^{1,1}_n, n\geq 0,$ we find a system ${\bf f}^2 = (f^{2,2}, \ldots , f^{2,m})$ of $m-1$ linearly independent solutions of \eqref{eq:1} each of which has radius of convergence
greater than $R_1 = |\zeta_j|$. Here, $(f^{2,2},\ldots,f^{2,M})$ come from the last application of Lemma \ref{lem:2} whereas $f^{2,\nu} = f^{1,\nu}, \nu = M+1,\ldots,m$. Summarizing, we have found that $R_0(f^{1,1}) = |\zeta_j|$ with $\zeta_j$ a singular point of $f^{1,1}$, and all the functions in ${\bf f}^2$ are linearly independent solutions of \eqref{eq:1} with radius of convergence larger than $|\zeta_j|$.
Now we proceed with ${\bf f}^2$ exactly the same way as we did with ${\bf f}^1$ and construct a system ${\bf f}^3$ with $m - N_1 -N_2$ linearly independent solutions of \eqref{eq:1}, where $N_2$ denotes either the rank of the corresponding system of homogeneous linear equations (when the circle $\mathcal{C}_2$ has more than one zero of $\alpha$) or $N_2 = 1$ if $\mathcal{C}_2$ has exactly one zero of $\alpha$. In a finite number of steps, say $r$, we either run out of linearly independent solutions of \eqref{eq:1} because $N_1+\cdots+N_r = m$ or we find at least one nontrivial solution of \eqref{eq:1} with radius of convergence $R > |\zeta_m|$.
The second possibility cannot occur because, according to Buslaev's theorem, $R$ has to be equal to the absolute value of one of the zeros of $\alpha$. On the other hand, since $N_k, k=1,\ldots,r,$ is less than or equal to the number of zeros, say $m_k$, of $\alpha$ on $\mathcal{C}_k$, if $N_1+\cdots+N_r = m$, it follows that $N_k = m_k, k=1,\ldots,r$. This happens only when $\cup_{k=1}^r \mathcal{C}_k$ contains all the zeros of $\alpha$ (we skip no circle at all as we carry out the process). In turn this means that either any given circle contains exactly one zero of $\alpha$ or the rank of the corresponding homogeneous linear system of equations is equal to the number of zeros of $\alpha$ on the circle. As we saw in the proof of the first step, this means that associated with each circle $\mathcal{C}_k$ we have $N_k$ linearly independent solutions of \eqref{eq:1} with the properties announced in the statement of the theorem. Since the circles are contained one inside the other, the collection of these linearly independent solutions forms a fundamental system of solutions of \eqref{eq:1}.
$\Box$
\section{Recurrence relations and Hermite-Pad\'e approximation}
\subsection{Preliminaries.}
Let $\mathbf{f}=\left( f_1,f_2,\ldots,f_d\right)$ be a system of $d$ formal or convergent Taylor expansions about the origin; that is, for each $k=1,\ldots,d$, we have
\begin{equation}\label{serie hp}
f_k(z)=\sum\limits_{n=0}^\infty \phi_{k,n}z^n,\ \ \ \ \phi_{k,n}\in\mathbb{C}.
\end{equation}
In the sequel, ${\bf m} = (m_1,\ldots,m_d) \in \mathbb{Z}^d\setminus {\bf 0}$, where ${\bf 0}\in \mathbb{Z}^d$ denotes the null vector. Set $|{\bf m}| = m_1+\cdots + m_d$.
\begin{Definition}
\label{HP}
Let $({\bf f, m)}$ and $n \geq \max \{m_k: k=1,\ldots,d\}$ be given. Then, there exist polynomials $q$, $p_k$, $k=1,\ldots,d$, such that
\begin{enumerate}
\item[a.1)] $\deg p_k\leq n-m_k$, $k=1,\ldots,d$, $\deg q\leq |\mathbf{m}|$, $q\not\equiv 0$,
\item[a.2)] $q(z)f_k(z)-p_k(z)=A_kz^{n+1}+\cdots$, $k=1,\ldots,d$.
\end{enumerate}
The vector rational function $\mathbf{R}_{n,\mathbf{m}}=\left( p_1/q,\cdots,p_d/q\right)$ is called an $(n,\mathbf{m})$ (type II) Hermite-Pad\'e approximation of $\mathbf{f}$.
\end{Definition}
The existence of such polynomials reduces to solving a homogeneous linear system of $|{\bf m}|$ equations on the $|{\bf m}|+1$ unknown coefficients of $q$; thus a nontrivial solution exists. Once $q$ is found, the polynomial $p_k$, $k=1,\ldots,d$, is the Taylor polynomial of degree $n-m_k$ of $qf_k$. Unlike Pad\'e approximants, Hermite-Pad\'e approximants are in general not uniquely determined. For each $n$ we choose one candidate. Without loss of generality, we take $q$ to have its nonzero coefficient of lowest degree equal to $1$. With this normalization, we write $q_{n,|\mathbf{m}|}$ and $p_{n,|\mathbf{m}|,k}$ instead of $q$ and $p_k$, respectively.
Set
\begin{equation}\label{denom}
q_{n,|\mathbf{m}|}(z)=b_{n,|\mathbf{m}|}z^{|\mathbf{m}|}+b_{n,|\mathbf{m}|-1}z^{|\mathbf{m}|-1}+\cdots+b_{n,0}.
\end{equation}
where $b_{n,0} = 1$ if $q(0) \neq 0$ and equals $0$ otherwise.
With this notation, the conditions in Definition \ref{HP} reduce to the following
\[[z^\nu q_{n,|{\bf m}|}f_k]_n = 0, \qquad k=1,\ldots,d,\qquad \nu = 0,\ldots,m_k -1,\]
where $[\,\cdot\,]_n$ denotes the $n$-th Taylor coefficient of the indicated function. In terms of recurrence relations, this means that the system of functions
\begin{equation}
\label{fs}
(f_1,\ldots,z^{m_1-1}f_1,f_2,\ldots,z^{m_2-1}f_2,\ldots,f_d,\ldots,z^{m_d-1}f_d)
\end{equation}
is made up of solutions of the recurrence relations
\begin{equation}
\label{fse}[ q_{n,|{\bf m}|} f]_n = 0, \qquad n \geq |{\bf m}|,
\end{equation}
of type \eqref{eq:2} (or, what is the same, \eqref{eq:1}) with the $\alpha_{n,j}$ given by the coefficients of $q_{n,|{\bf m}|}$. We say ``of type'' because we cannot guarantee that $b_{n,|\mathbf{m}|} \neq 0$ or $b_{n,0} = 1$. Aside from this, which will not cause any problem for the application we have in mind (see \eqref{convpol} below), Theorem \ref{teo:1} will allow us to study the analytic properties and detect singularities of the system of functions $\bf f$, under appropriate assumptions on the asymptotic behavior of the sequence of polynomials $q_{n,|{\bf m}|}$.
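The linear algebra behind Definition \ref{HP} can be made concrete: the conditions $[z^\nu q_{n,|{\bf m}|}f_k]_n = 0$, $\nu=0,\ldots,m_k-1$, form a homogeneous linear system in the $|{\bf m}|+1$ coefficients of $q$, and any nontrivial null vector serves. A pure-Python sketch (the function name and the elimination details are ours, for illustration only):

```python
def hermite_pade_denominator(coeffs, m, n):
    """Type II Hermite-Pade denominator for Taylor coefficients coeffs[k][j]
    of f_k, multi-index m = (m_1,...,m_d), at order n; returns
    (q_0,...,q_{|m|}) normalized so its lowest nonzero coefficient is 1."""
    M = sum(m)
    rows = []
    for k, mk in enumerate(m):
        for nu in range(mk):            # condition [q f_k]_{n-nu} = 0
            rows.append([coeffs[k][n - nu - i]
                         if 0 <= n - nu - i < len(coeffs[k]) else 0.0
                         for i in range(M + 1)])
    # Gauss-Jordan elimination to extract a nontrivial null vector
    pivots, r = [], 0
    for c in range(M + 1):
        if r == len(rows):
            break
        piv = max(range(r, len(rows)), key=lambda i: abs(rows[i][c]))
        if abs(rows[piv][c]) < 1e-12:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r:
                fct = rows[i][c] / rows[r][c]
                rows[i] = [x - fct * y for x, y in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(M + 1) if c not in pivots)
    q = [0.0] * (M + 1)
    q[free] = 1.0
    for row, c in zip(rows, pivots):
        q[c] = -row[free] / row[c]
    lead = next(x for x in q if abs(x) > 1e-12)
    return [x / lead for x in q]
```

For the single geometric series $f(z)=\sum_n 2^n z^n = 1/(1-2z)$ with ${\bf m}=(1)$, the computed denominator is $q(z)=1-2z$, whose zero $1/2$ is the singularity of $f$.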
The following notions were introduced in \cite{cacoq2}.
\begin{Definition}
Given $\mathbf{f}=(f_1,\ldots,f_d)$ and $\mathbf{m}=(m_1,\ldots,m_d)\in \mathbb{Z}_+^d\setminus \lbrace \mathbf{0}\rbrace$ we say that $\xi\in\mathbb{C}\setminus\lbrace 0\rbrace$ is a system pole of order $\tau$ of $(\mathbf{f}, \mathbf{m})$ if $\tau$ is the largest positive integer such that for each $s=1,\ldots,\tau$ there exists at least one polynomial combination of the form
\begin{equation}\label{system pole}
\sum\limits_{k=1}^dp_kf_k,\qquad \deg p_k<m_k,\qquad k=1,\ldots,d,
\end{equation}
which is analytic on a neighborhood of $\overline{D}_{|\xi|} = \{z: |z| \leq |\xi|\}$ except for a pole at $z=\xi$ of exact order $s$. If some component $m_k$ equals zero the corresponding polynomial $p_k$ is taken identically equal to zero.
\end{Definition}
\begin{Definition}\label{system sing}
Given $\mathbf{f}=(f_1,\ldots,f_d)$ and $\mathbf{m}=(m_1,\ldots,m_d)\in \mathbb{Z}_+^d\setminus \lbrace \mathbf{0}\rbrace$ we say that $\xi\in\mathbb{C}\setminus\lbrace 0\rbrace$ is a system singularity of $(\mathbf{f}, \mathbf{m})$ if there exists at least one polynomial combination of the form \eqref{system pole} analytic in $D_{|\xi|} = \{z:|z| < |\xi|\}$ and $\xi$ is a singular point of \eqref{system pole}.
\end{Definition}
In this context, the concepts of singular point and pole depend not only on the system of functions $\bf f$ but also on the multi index $\bf m$. For example, poles of the individual functions $f_k$ need not be system poles of $\bf f$ and system poles need not be poles of any of the functions $f_k$ (see interesting examples in \cite{cacoq2}).
\begin{Definition}\label{poly indep}
A vector $\mathbf{f}=(f_1,\ldots,f_d)$ of $d$ formal Taylor expansions is said to be polynomially independent with respect to $\mathbf{m}=(m_1,\ldots,m_d)\in\mathbb{N}^d$ if there do not exist polynomials $p_1,\ldots,p_d$, at least one of which is non-null, such that
\begin{enumerate}
\item[b.1)] $\deg p_k < m_k$, $k=1,\ldots,d$,
\item[b.2)] $\sum\limits_{k=1}^dp_kf_k$ is a polynomial.
\end{enumerate}
\end{Definition}
In particular, polynomial independence implies that for $k=1,\ldots,d$, $f_k$ is not a rational function with at most $m_k-1$ poles and the system of functions \eqref{fs} is linearly independent. Moreover, \eqref{fs} constitutes a fundamental system of solutions of \eqref{fse}.
\subsection{Application of the main result to Hermite-Pad\'{e} approximation.}
In the sequel, we will assume that the sequence \eqref{denom}, $n\geq\max\lbrace m_1,\ldots,m_d\rbrace$ verifies
\begin{equation}
\label{convpol}
\lim\limits_{n\to\infty}q_{n,|\mathbf{m}|}(z)=q_{|\mathbf{m}|}(z) = \prod_{k=1}^{|\mathbf{m}|}(1- z\zeta_k^{-1}),\qquad \deg q_{|\mathbf{m}|} = |\mathbf{m}|,\qquad q_{|\mathbf{m}|}(0) = 1.
\end{equation}
\begin{Theorem} \label{teo:3}
Suppose that $\mathbf{f}=(f_1,\ldots,f_d)$ is a vector of formal power expansions as in \eqref{serie hp} which is polynomially independent with respect to $\mathbf{m}=(m_1,\ldots,m_d)\in\mathbb{N}^d$. Assume that \eqref{enumeration}, \eqref{limit2} and \eqref{several} take place. Then, each $\zeta_k, k=1,\ldots,|{\bf m}|,$ is a system singularity of $(\mathbf{f}, \mathbf{m})$. Moreover, if $\zeta_k$ verifying \eqref{several} is a zero of multiplicity $\tau_k$ of $q_{|{\bf m}|}$ then it is a system pole of $(\mathbf{f}, \mathbf{m})$ of order $\tau_k$.
\end{Theorem}
\noindent {\bf Proof.}
Since $\lim_{n\rightarrow\infty}q_{n,|\mathbf{m}|}=q_{|\mathbf{m}|}$, where $\deg q_{|\mathbf{m}|} = |\mathbf{m}|$ and $q_{|{\bf m}|}(0) = 1$, there exists $n_0$ such that $b_{n,|\mathbf{m}|}\neq 0$ and $b_{n,0} = 1$ for all $n\geq n_0$.
Given $f(z) = \sum_{\nu =0}^{\infty} \phi_\nu z^\nu$ define
\[ \hat{f}(z) := (f(z) - T_{n_0}(f)(z))/z^{n_0 } = \sum_{\nu =0}^{\infty} \phi_{n_0 + \nu} z^{\nu} = \sum_{\nu =0}^{\infty} \hat{\phi}_{\nu} z^{\nu} , \]
where $T_{n_0}(f)$ is the Taylor polynomial of $f$ of degree $n_0-1$. Notice that the coefficients of $\hat{f}$ are shifted by $n_0$ in relation with the coefficients of $f$; that is
\[\hat{\phi}_\nu = \phi_{n_0 + \nu}, \qquad \nu \geq 0.\]
Consequently,
\[[q_{n_0 + n,|{\bf m}|}f ]_{n_0 +n} = \phi_{n+n_0}+b_{n+n_0,1}\phi_{n+n_0-1}+\cdots+b_{n+n_0,|\mathbf{m}|}\phi_{n+n_0-|\mathbf{m}|}=
\]
\[\hat{\phi}_{n}+b_{n+n_0,1}\hat{\phi}_{n-1}+\cdots+b_{n+n_0,|\mathbf{m}|}\hat{\phi}_{n-|\mathbf{m}|}\]
with $b_{n+n_0,|\mathbf{m}|} \neq 0$, $n \geq 0$. If we set $\hat{q}_{n,|\mathbf{m}|}=q_{n+n_0,|\mathbf{m}|}$, we have
\[[q_{n_0 + n,|{\bf m}|}f ]_{n_0 +n} = 0 \qquad \mbox{if and only if} \qquad [\hat{q}_{n,|{\bf m}|}\hat{f} ]_{n} = 0.\]
Set $\hat{\bf f} = (\hat{f}_1,\ldots,\hat{f}_d)$. Notice that the system of functions
\begin{equation}
\label{sysaux}
(\hat{f}_1,\ldots,z^{m_1-1}\hat{f}_1, \hat{f}_2,\ldots,z^{m_d-1}\hat{f}_d)
\end{equation}
is linearly independent for otherwise the system $\bf f$ would not be polynomially independent. Consequently, \eqref{sysaux} constitutes a fundamental system of solutions of the recurrence relations
\begin{equation}
\label{ecaux}
[\hat{q}_{n,|{\bf m}|}\hat{f} ]_{n} = 0, \qquad n \geq |{\bf m}|,
\end{equation}
which verifies $\deg(\hat{q}_{n,|{\bf m}|}) = |{\bf m}|, \hat{q}_{n,|{\bf m}|}(0) = 1, n \geq 0,$ and
\[\lim_{n\to \infty} \hat{q}_{n,|{\bf m}|} = q_{|\bf m|}.\]
Therefore, the recurrence relation \eqref{ecaux} is exactly like \eqref{eq:1}, including the assumption that the coefficient which plays the role of $\alpha_{n,m}$ (which is the coefficient of $q_{n,|{\bf m}|}$ accompanying $z^{|{\bf m}|}$) is different from zero, and fulfills the assumptions of Theorem \ref{teo:2}.
Applying Theorem \ref{teo:2} to $(\hat{\bf f},{\bf m})$ we obtain that this pair fulfills the conclusion of Theorem \ref{teo:3}. However, it is easy to verify that $({\bf f},{\bf m})$ and $(\hat{\bf f},{\bf m})$ have the same system singularities and the same system poles, including their order.
$\Box$
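The index shift used in this proof is elementary but worth making concrete: dropping the first $n_0$ Taylor coefficients turns the relation $[q_{n_0+n,|{\bf m}|}f]_{n_0+n}=0$ into $[\hat q_{n,|{\bf m}|}\hat f]_n=0$ with identical residuals. A tiny sketch (function names are ours, for illustration):

```python
def bracket(q, phi, n):
    """[q f]_n = sum_i q_i * phi_{n-i}, with phi_j = 0 for j < 0."""
    return sum(qi * (phi[n - i] if 0 <= n - i < len(phi) else 0.0)
               for i, qi in enumerate(q))

def shifted(phi, n0):
    """Coefficients of (f - T_{n0}(f))/z^{n0}: drop the first n0 terms."""
    return phi[n0:]
```

Whatever the polynomial coefficients $q$, evaluating the bracket of the shifted series at index $n$ reproduces the bracket of the original series at index $n_0+n$.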
Two immediate consequences of Theorem \ref{teo:3} which are worth singling out are the following.
\begin{Corollary} \label{cor:1}
Suppose that $\mathbf{f}=(f_1,\ldots,f_d)$ is a vector of formal power expansions as in \eqref{serie hp} which is polynomially independent with respect to $\mathbf{m}=(m_1,\ldots,m_d)\in\mathbb{N}^d$. Assume that all the zeros of $q_{|{\bf m|}}$ verify \eqref{several}. Then, $({\bf f,m})$ has exactly $|{\bf m}|$ system poles which coincide with the zeros of $q_{|{\bf m|}}$, taking account of their order.
\end{Corollary}
Corollary \ref{cor:1} contains the inverse statement, (b) implies (a), of \cite[Theorem 1.4]{cacoq2}.
\begin{Corollary} \label{cor:2}
Suppose that $\mathbf{f}=(f_1,\ldots,f_d)$ is a vector of formal power expansions as in \eqref{serie hp} which is polynomially independent with respect to $\mathbf{m}=(m_1,\ldots,m_d)\in\mathbb{N}^d$. Assume that all the zeros of $q_{|{\bf m|}}$ have distinct absolute value. Then, all the zeros of $q_{|{\bf m}|}$ are system singularities of $({\bf f,m})$.
\end{Corollary}
This corollary was conjectured in the sentence right after \cite[Corollary 5.2]{GY}. In \cite{GY} the reader may find other results regarding Hermite-Pad\'e approximation which complement this section.
\end{document}
\begin{document}
\title{\bf Cohomology and derivations of BiHom-Lie conformal algebras}
\author{{\bf Shuangjian Guo$^{1}$, Xiaohui Zhang$^{2}$, Shengxiang Wang$^{3}$\footnote
{ Corresponding author (Shengxiang Wang): [email protected]} }\\
{\small 1. School of Mathematics and Statistics, Guizhou University of Finance and Economics} \\
{\small Guiyang 550025, P. R. of China} \\
{\small 2. School of Mathematical Sciences, Qufu Normal University}\\
{\small Qufu 273165, P. R. of China}\\
{\small 3. School of Mathematics and Finance, Chuzhou University}\\
{\small Chuzhou 239000, P. R. of China}}
\begin{center}
\begin{minipage}{13.cm}
{\bf \begin{center} ABSTRACT \end{center}}
In this paper, we introduce the notion of a BiHom-Lie conformal algebra and develop its cohomology theory.
Also, we discuss some applications to the study of deformations of regular BiHom-Lie conformal algebras.
Finally, we introduce derivations of multiplicative
BiHom-Lie conformal algebras and study their properties.

{\bf Key words}: BiHom-Lie conformal algebra, cohomology, deformation, Nijenhuis operators, derivation.

{\bf 2010 Mathematics Subject Classification:} 17A30, 17B45, 17D25, 17B81
\end{minipage}
\end{center}
\normalsize\vskip1cm
\section*{INTRODUCTION}
\def\theequation{0.\arabic{equation}}
\setcounter{equation}{0}
Lie conformal algebras were introduced by Kac in \cite{Kac98},
who gave an axiomatic description of the singular part of the operator product expansion of chiral fields in conformal field theory.
They are a useful tool for studying vertex algebras and have many applications in the theory of Lie algebras. Moreover,
Lie conformal algebras have close connections to the Hamiltonian formalism in the theory of nonlinear evolution equations.
Lie conformal algebras have been widely studied in the following respects: the structure theory \cite{D'Andrea1998},
representation theory \cite{Cheng1997,Cheng1998} and cohomology
theory \cite{Bakalov1999} of finite Lie conformal algebras have been developed. Moreover, Liberati
in \cite{L08} introduced a conformal analog of Lie bialgebras, including the conformal classical Yang-Baxter equation, conformal Manin triples and the conformal Drinfeld double.
The notion of Hom-Lie algebras was first introduced by Hartwig,
Larsson and Silvestrov to describe the structure of certain $q$-deformations
of the Witt and the Virasoro algebras, see \cite{Aizawa, Hartwig, Hu},
motivated by
applications to physics and to deformations of Lie algebras, especially Lie algebras
of vector fields.
Recently, the Hom-Lie conformal algebra was introduced
and studied in \cite{Yuan14}, where it was proved that a Hom-Lie conformal algebra is equivalent to a Hom-Gel'fand-Dorfman bialgebra.
Later, Zhao, Yuan and Chen developed the cohomology theory of Hom-Lie conformal algebras and discussed
some applications to the study of deformations of regular Hom-Lie conformal
algebras. They also introduced $\alpha^k$-derivations of multiplicative Hom-Lie conformal
algebras and studied their properties in \cite{Zhao2016}.
A BiHom-algebra is an algebra whose defining identities
are twisted by two homomorphisms $\alpha,\beta$. This class of algebras was introduced from a
categorical approach in \cite{Graziani} as an extension of the class of Hom-algebras. When the two
linear maps coincide, BiHom-algebras reduce to Hom-algebras.
These algebraic structures include BiHom-associative algebras, BiHom-Lie algebras and
BiHom-bialgebras.
Motivated by these results, the present paper is organized as follows.
In Section 2, we introduce the notion of a BiHom-Lie conformal algebra and develop the cohomology theory of BiHom-Lie conformal algebras.
In Section 3, we discuss some applications to the study of deformations of regular BiHom-Lie conformal algebras.
In Section 4, we study derivations of multiplicative BiHom-Lie conformal algebras and prove that the direct sum of the space of derivations is a BiHom-Lie
conformal algebra. In particular, any derivation gives rise to a derivation extension of a multiplicative BiHom-Lie conformal algebra.
In Section 5, we introduce generalized derivations of multiplicative BiHom-Lie conformal algebras and study their properties.
\section{Preliminaries}
\def\theequation{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
Throughout the paper, all algebraic systems are supposed to be over a field $\mathbb{C}$.
We denote by $\mathbb{Z}$ the set of all integers and $\mathbb{Z}_{+}$ the set of all nonnegative integers.
In this section we recall some basic definitions and results related to our paper from \cite{Graziani} and \cite{Yuan14}.
\noindent{\bf 1.1. Hom-Lie conformal algebra.} A Hom-Lie conformal algebra is a $\mathbb{C}[\partial]$-module $R$ equipped with a linear endomorphism
$\alpha$ such that $\alpha\partial=\partial\alpha$, and a $\lambda$-bracket $[\cdot_{\lambda}\cdot]$ which defines a $\mathbb{C}$-bilinear map from $R\otimes R$ to $R[\lambda]=\mathbb{C}[\lambda]\otimes R$ such that the following axioms hold:
\begin{eqnarray*}
&&[\partial a_{\lambda}b] =-\lambda[a_\lambda b],\qquad [ a_{\lambda}\partial b] =(\partial+\lambda)[a_\lambda b],\\
&&[a_{\lambda}b]=-[b_{-\lambda-\partial}a],\\
&&[\alpha(a)_\lambda[b_\mu c]]=[[a_{\lambda}b]_{\lambda+\mu}\alpha(c)]+[\alpha(b)_{\mu}[a_{\lambda}c]],
\end{eqnarray*}
for $a,b,c\in R$.
\noindent{\bf 1.2. BiHom-Lie algebra.} A BiHom-Lie algebra
is a 4-tuple $(L,[\cdot,\cdot],\alpha,\beta)$,
where $L$ is a ${k}$-linear space, $\alpha,\beta: L\rightarrow L$ and
$[\cdot,\cdot]: L\otimes L\rightarrow L$ are linear maps,
satisfying the following conditions:
\begin{eqnarray*}
&&\alpha\circ\beta=\beta\circ\alpha,\\
&&\alpha[a,a']=[\alpha(a),\alpha(a')],\quad \beta[a,a']=[\beta(a),\beta(a')],\\
&&[\beta(a),\alpha(a')]=-[\beta(a'),\alpha(a)],\\
&&[\beta^{2}(a),[\beta(a'),\alpha(a'')]]+[\beta^{2}(a'),[\beta(a''),\alpha(a)]]+[\beta^{2}(a''),[\beta(a),\alpha(a')]]=0,
\end{eqnarray*}
for all $a,a',a''\in L$.
A BiHom-Lie algebra $(L,[\cdot,\cdot],\alpha,\beta)$ is called multiplicative if $\alpha,\beta$ are algebra endomorphisms,
i.e., $\alpha([a,b])=[\alpha(a),\alpha(b)]$ and $\beta([a,b])=[\beta(a),\beta(b)]$ for any $a,b\in L$. In particular, if $\alpha,\beta$ are algebra
isomorphisms, then $(L,[\cdot,\cdot],\alpha,\beta)$ is called regular.
Obviously, a Hom-Lie algebra $(L,[\cdot,\cdot],\alpha)$ is a particular case of a BiHom-Lie
algebra, namely $(L,[\cdot,\cdot],\alpha,\alpha)$.
Conversely, a BiHom-Lie algebra $(L,[\cdot,\cdot],\alpha,\alpha)$ with bijective $\alpha$
is the Hom-Lie algebra $(L,[\cdot,\cdot],\alpha)$.
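For instance, taking $\alpha=\beta=\mathrm{id}_{L}$, the last two conditions reduce to
\begin{eqnarray*}
[a,a']=-[a',a],~~[a,[a',a'']]+[a',[a'',a]]+[a'',[a,a']]=0,
\end{eqnarray*}
so every Lie algebra gives a regular BiHom-Lie algebra $(L,[\cdot,\cdot],\mathrm{id},\mathrm{id})$.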
\section{Cohomology of BiHom-Lie conformal algebras}
\def\theequation{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
In this section, we introduce the notion of a BiHom-Lie conformal algebra and develop the corresponding cohomology theory.
\noindent{\bf Definition 2.1.} A BiHom-Lie conformal algebra is a $\mathbb{C}[\partial]$-module $R$ equipped with two commuting linear maps
$\alpha,\beta$ such that $\alpha\partial=\partial\alpha$, $\beta\partial=\partial\beta$, $\alpha([a_\lambda b])=[\alpha(a)_\lambda \alpha(b)]$, $\beta([a_\lambda b])=[\beta(a)_\lambda \beta(b)]$, and a $\lambda$-bracket $[\cdot_{\lambda}\cdot]$ which defines a $\mathbb{C}$-bilinear map from $R\otimes R$ to $R[\lambda]=\mathbb{C}[\lambda]\otimes R$ such that the following axioms hold for $a,b,c\in R$:
\begin{eqnarray}
&&[\partial a_{\lambda}b]=-\lambda[a_\lambda b],\quad [a_{\lambda}\partial b]=(\partial+\lambda)[a_\lambda b],\\
&&[\beta(a)_{\lambda}\alpha(b)]=-[\beta(b)_{-\lambda-\partial}\alpha(a)],\\
&&[\alpha\beta(a)_\lambda[b_\mu c]]=[[\beta(a)_{\lambda}b]_{\lambda+\mu}\beta(c)]+[\beta(b)_{\mu}[\alpha(a)_{\lambda}c]].
\end{eqnarray}
A BiHom-Lie conformal algebra $(R,\alpha,\beta)$ is called multiplicative if $\alpha,\beta$ are algebra endomorphisms,
i.e., $\alpha([a_\lambdaambda b]) = [\alpha(a)_\lambdaambda \alpha(b)], \beta([a_\lambdaambda b]) = [\beta(a)_\lambdaambda \beta(b)]$, for any $a,b\in R$.
In particular, if $\alpha,\beta$ are algebra
isomorphisms, then $(R,\alpha,\beta)$ is called regular.
A BiHom-Lie conformal algebra $R$ is called finite if $R$ is a finitely generated $\mathbb{C}[\partial]$-module. The
rank of $R$ is its rank as a $\mathbb{C}[\partial]$-module.
\noindent{\bf Example 2.2.} Let $R$ be a Lie conformal algebra and $\alpha,\beta: R\rightarrow R$ two commuting linear maps such that
$$\alpha\partial=\partial\alpha,\quad \beta\partial=\partial\beta,\quad \alpha([a_\lambda b])=[\alpha(a)_\lambda \alpha(b)],\quad \beta([a_\lambda b])=[\beta(a)_\lambda \beta(b)].$$
Define a $\mathbb{C}$-bilinear map from $R\otimes R$ to $R[\lambda]=\mathbb{C}[\lambda]\otimes R$ by $[a_{\lambda}b]'=[\alpha(a)_{\lambda}\beta(b)]$.
Then $(R,\alpha,\beta)$, with the bracket $[\cdot_{\lambda}\cdot]'$, is a BiHom-Lie conformal algebra.
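As a direct check of the skew-symmetry axiom (2.2) for the new bracket, using $\alpha\beta=\beta\alpha$ and the skew-symmetry of the original bracket, we have
\begin{eqnarray*}
[\beta(a)_{\lambda}\alpha(b)]'=[\alpha\beta(a)_{\lambda}\beta\alpha(b)]
=-[\beta\alpha(b)_{-\lambda-\partial}\alpha\beta(a)]
=-[\alpha\beta(b)_{-\lambda-\partial}\beta\alpha(a)]
=-[\beta(b)_{-\lambda-\partial}\alpha(a)]'.
\end{eqnarray*}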
\noindent{\bf Example 2.3.} Let $(L,[\cdot,\cdot],\alpha,\beta)$ be a regular BiHom-Lie algebra.
Denote by $\hat{L}=L\otimes \mathbb{C}[t,t^{-1}]$ the affinization of $L$ with
\begin{eqnarray*}
[u\otimes t^m,v\otimes t^n]=[u,v]\otimes t^{m+n},
\end{eqnarray*}
for any $u,v\in L$, $m,n\in \mathbb{Z}$.
Extend $\alpha,\beta$ to $\hat{L}$ by $\alpha(u\otimes t^{m})=\alpha(u)\otimes t^{m}$ and $\beta(u\otimes t^{m})=\beta(u)\otimes t^{m}$.
Then $(\hat{L},[\cdot,\cdot],\alpha,\beta)$ is a BiHom-Lie algebra. By a simple verification, we get a BiHom-Lie conformal algebra $R=\mathbb{C}[\partial]L$ with
\begin{eqnarray*}
[u_{\lambda}v]=[u,v],~~
\alpha(f(\partial)u)=f(\partial)\alpha(u),\quad \beta(f(\partial)u)=f(\partial)\beta(u),
\end{eqnarray*}
for any $u,v\in L$.
\noindent{\bf Definition 2.4.} A module $(M,\alpha_M,\beta_M)$ over a BiHom-Lie conformal algebra $(R,\alpha,\beta)$ is a $\mathbb{C}[\partial]$-module endowed with $\mathbb{C}$-linear maps $\alpha_M,\beta_M$ and a $\mathbb{C}$-bilinear map $R\otimes M\rightarrow M[\lambda]$, $a\otimes v\mapsto a_\lambda v$,
such that for $a,b\in R$, $v\in M$:
\begin{eqnarray*}
&&\alpha_M\circ \beta_M=\beta_M\circ \alpha_M,\\
&&\alpha_M(a\cdot m)=\alpha(a)\cdot \alpha_M(m),\\
&&\beta_M(a\cdot m)=\beta(a)\cdot \beta_M(m),\\
&&\alpha\beta(a)_\lambda(b_\mu v)-\beta(b)_{\mu}(\alpha(a)_\lambda v)=[\beta(a)_\lambda b]_{\lambda+\mu}\beta_{M}(v),\\
&&(\partial a)_{\lambda}v=-\lambda(a_\lambda v),\quad a_\lambda(\partial v)=(\partial+\lambda)a_{\lambda}v,\\
&&\beta_M\circ \partial=\partial\circ \beta_{M},\quad \alpha_M\circ \partial=\partial\circ \alpha_{M},\\
&&\alpha_M(a_\lambda v)=\alpha(a)_\lambda(\alpha_M(v)),\quad \beta_M(a_\lambda v)=\beta(a)_\lambda(\beta_M(v)).
\end{eqnarray*}
\noindent{\bf Example 2.5.} Let $(R,\alpha,\beta)$ be a BiHom-Lie conformal algebra. Then $(R,\alpha,\beta)$ is an $R$-module
under the adjoint diagonal action, namely, $a_\lambdaambda b := [a_\lambdaambda b],$ for any $a,b\in R$.
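Indeed, with $\alpha_M=\alpha$ and $\beta_M=\beta$, the module axiom
\begin{eqnarray*}
\alpha\beta(a)_\lambda(b_\mu c)-\beta(b)_{\mu}(\alpha(a)_\lambda c)=[\beta(a)_\lambda b]_{\lambda+\mu}\beta(c)
\end{eqnarray*}
is precisely the BiHom-Jacobi identity (2.3), and the remaining module axioms follow from (2.1) and the compatibility of $\alpha,\beta$ with $\partial$ and the bracket.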
\noindent{\bf Proposition 2.6.} Let $(R,\alpha,\beta)$ be a regular BiHom-Lie conformal algebra and $(M,\alpha_M,\beta_M)$
an $R$-module. Assume that $\alpha$ and $\beta_M$ are bijective. Define a $\lambda$-bracket $[\cdot_{\lambda}\cdot]_M$ on $R\oplus M$ by
\begin{eqnarray*}
[(a+u)_\lambda (b+v)]_{M}=[a_\lambda b]+a_\lambda v-\alpha^{-1}\beta(b)_{-\partial-\lambda}\alpha_M\beta_M^{-1}(u),
\end{eqnarray*}
for any $a,b\in R$ and $u,v\in M$.
Define $\alpha+\alpha_M,\beta+\beta_M: R\oplus M\rightarrow R\oplus M$ by
\begin{eqnarray*}
(\alpha+\alpha_M)(a+u)=\alpha(a)+\alpha_M(u),~~(\beta+\beta_M)(a+u)=\beta(a)+\beta_M(u).
\end{eqnarray*}
Then $(R\oplus M,\alpha+\alpha_M,\beta+\beta_M)$ is a regular BiHom-Lie conformal algebra.
{\bf Proof.} Note that $R\oplus M$ is equipped with a $\mathbb{C}[\partial]$-module structure via
\begin{eqnarray*}
\partial(a+u)=\partial(a)+\partial(u),~~\forall a\in R, u\in M.
\end{eqnarray*}
With this, it is easy to see that $(\alpha+\alpha_M)\circ \partial=\partial\circ(\alpha+\alpha_M)$, $(\beta+\beta_M)\circ \partial=\partial\circ(\beta+\beta_M)$, and that $(\alpha+\alpha_M)([(a+u)_\lambda(b+v)]_M)=[((\alpha+\alpha_M)(a+u))_\lambda(\alpha+\alpha_M)(b+v)]_M$, $(\beta+\beta_M)([(a+u)_\lambda(b+v)]_M)=[((\beta+\beta_M)(a+u))_\lambda(\beta+\beta_M)(b+v)]_M$,
for any $a,b\in R$ and $u,v\in M$.
For (2.1), we have
\begin{eqnarray*}
&&[\partial(a+u)_\lambda (b+v)]_{M}
=[(\partial a+\partial u)_\lambda (b+v)]_{M}\\
&=& [\partial a_{\lambda}b]+(\partial a)_\lambda v-\alpha^{-1}\beta(b)_{-\partial-\lambda}\partial \alpha_M\beta_M^{-1}(u)\\
&=& -\lambda [a_{\lambda}b]-\lambda a_\lambda v-(\partial-\lambda-\partial)\alpha^{-1}\beta(b)_{-\partial-\lambda} \alpha_M\beta_M^{-1}(u)\\
&=& -\lambda([a_{\lambda}b]+ a_\lambda v-\alpha^{-1}\beta(b)_{-\partial-\lambda} \alpha_M\beta_M^{-1}(u))\\
&=& -\lambda[(a+u)_\lambda (b+v)]_{M},\\
&&[(a+u)_\lambda \partial(b+v)]_{M}
= [(a+u)_\lambda (\partial b+\partial v)]_{M}\\
&=& [a_\lambda \partial b]+a_\lambda \partial v-\partial \alpha^{-1}\beta(b)_{-\partial-\lambda} \alpha_M\beta_M^{-1}(u)\\
&=& (\partial+\lambda)[a_{\lambda}b]+(\partial+\lambda)a_\lambda v-(\partial+\lambda)\alpha^{-1}\beta(b)_{-\partial-\lambda} \alpha_M\beta_M^{-1}(u)\\
&=&(\partial+\lambda)([a_{\lambda}b]+ a_\lambda v-\alpha^{-1}\beta(b)_{-\partial-\lambda} \alpha_M\beta_M^{-1}(u))\\
&=&(\partial+\lambda)[(a+u)_\lambda (b+v)]_{M},
\end{eqnarray*}
as desired. For (2.2), we calculate
\begin{eqnarray*}
&&[(\beta(b)+\beta_{M}(v))_{-\partial-\lambda}(\alpha(a)+\alpha_M(u))]_M\\
&=&[\beta(b)_{-\partial-\lambda}\alpha(a)]+\beta(b)_{-\partial-\lambda}\alpha_M(u)-\beta(a)_\lambda \alpha_M(v)\\
&=& -[\beta(a)_\lambda\alpha(b)]-\beta(a)_\lambda \alpha_M(v)+\beta(b)_{-\partial-\lambda}\alpha_M(u)\\
&=& -[(\beta(a)+\beta_M(u))_\lambda(\alpha(b)+\alpha_M(v))]_M.
\end{eqnarray*}
For (2.3), we have
\begin{eqnarray*}
&&[(\alpha+\alpha_M)(\beta+\beta_M)(a+u)_{\lambda}[(b+v)_{\mu}(c+w)]_M]_M\\
&=& [(\alpha\beta(a)+\alpha_M\beta_M(u))_{\lambda}[(b+v)_{\mu}(c+w)]_M]_M\\
&=& [(\alpha\beta(a)+\alpha_M\beta_M(u))_{\lambda}([b_\mu c]+b_\mu w-\alpha^{-1}\beta(c)_{-\partial-\mu}\alpha_M\beta^{-1}_M(v))]_M\\
&=&[\alpha\beta(a)_\lambda [b_\mu c]]+\alpha\beta(a)_\lambda(b_\mu w)-\alpha\beta(a)_\lambda(\alpha^{-1}\beta(c)_{-\partial-\mu}\alpha_M\beta^{-1}_M(v))\\
&&-\alpha^{-1}\beta([b_\mu c])_{-\partial-\lambda}\alpha^2_M(u),\\
&&[(\beta+\beta_M)(b+v)_\mu[(\alpha+\alpha_M)(a+u)_{\lambda}(c+w)]_{M}]_M\\
&=&[\beta(b)_\mu [\alpha(a)_\lambda c]]+\beta(b)_\mu(\alpha(a)_\lambda w)-\beta(b)_\mu(\alpha^{-1}\beta(c)_{-\partial-\mu}\alpha_M^2\beta^{-1}_M(u))\\
&&-\alpha^{-1}\beta([\alpha(a)_\lambda c])_{-\partial-\mu}\alpha_M(v),\\
&&[[(\beta+\beta_M)(a+u)_\lambda (b+v)]_{M\,\lambda+\mu}(\beta+\beta_M)(c+w)]_M\\
&=&[[\beta(a)_\lambda b]_{\lambda+\mu} \beta(c)]+[\beta(a)_\lambda b]_{\lambda+\mu}\beta_M(w)-\alpha^{-1}\beta^2(c)_{-\partial-\lambda-\mu}\alpha_M\beta_M^{-1}(\beta(a)_\lambda v)\\
&&-\alpha^{-1}\beta^2(c)_{-\partial-\lambda-\mu}\alpha_M\beta^{-1}_M(\alpha^{-1}\beta(b)_{-\partial-\lambda}\alpha_{M}(u)).
\end{eqnarray*}
Since $(M,\alpha_M,\beta_M)$ is an $R$-module, we have
\begin{eqnarray*}
\alpha\beta(a)_\lambda(b_\mu v)-\beta(b)_{\mu}(\alpha(a)_\lambda v)=[\beta(a)_\lambda b]_{\lambda+\mu}\beta_{M}(v).
\end{eqnarray*}
This implies
\begin{eqnarray*}
&&[(\alpha+\alpha_M)(\beta+\beta_M)(a+u)_{\lambda}[(b+v)_{\mu}(c+w)]_M]_M\\
&=& [(\beta+\beta_M)(b+v)_\mu[(\alpha+\alpha_M)(a+u)_{\lambda}(c+w)]_{M}]_M\\
&&+[[(\beta+\beta_M)(a+u)_\lambda (b+v)]_{M\,\lambda+\mu}(\beta+\beta_M)(c+w)]_M.
\end{eqnarray*}
Therefore, $(R\oplus M,\alpha+\alpha_M,\beta+\beta_M)$ is a regular BiHom-Lie conformal algebra.
$\square$
In the following we aim to develop the cohomology theory of regular BiHom-Lie conformal algebras.
To do this, we need the following concept.
\noindent{\bf Definition 2.7.}
An $n$-cochain ($n\in \mathbb{Z}_{\geq 0}$) of a regular BiHom-Lie conformal algebra $R$ with coefficients
in a module $(M,\alpha_{M},\beta_{M})$ is a $\mathbb{C}$-linear map
\begin{eqnarray*}
\gamma:R^n\rightarrow M[\lambda_1,...,\lambda_n],~~
(a_1,...,a_n)\mapsto \gamma_{\lambda_1,...,\lambda_n}(a_1,...,a_n),
\end{eqnarray*}
where $M[\lambda_1,...,\lambda_n]$ denotes the space of polynomials with coefficients in $M$, satisfying the following conditions:
(1) Conformal antilinearity:
\begin{eqnarray*}
\gamma_{\lambda_1,...,\lambda_n}(a_1,...,\partial a_i,...,a_n)=-\lambda_i\gamma_{\lambda_1,...,\lambda_n}(a_1,...,a_i,...,a_n).
\end{eqnarray*}
(2) Skew-symmetry:
\begin{eqnarray*}
\gamma(a_{1},...,\beta(a_{i+1}),\alpha(a_{i}),...,a_{n})=-\gamma(a_{1},...,\beta(a_{i}),\alpha(a_{i+1}),...,a_{n}).
\end{eqnarray*}
(3) Commutativity:
\begin{eqnarray*}
\gamma\circ \alpha=\alpha_M\circ \gamma,\quad \gamma\circ \beta=\beta_M\circ \gamma.
\end{eqnarray*}
Let $R^{\otimes 0}=\mathbb{C}$ as usual, so that a $0$-cochain
is an element of $M$. Define a differential $d$ of a cochain $\gamma$ by
\begin{eqnarray*}
&&(d\gamma)_{\lambda_1,...,\lambda_{n+1}}(a_{1},...,a_{n+1})\\
&=&\sum_{i=1}^{n+1}(-1)^{i+1}\alpha\beta^{n-1}(a_{i})_{\lambda_i}\gamma_{\lambda_1,...,\hat{\lambda}_{i},...,\lambda_{n+1}}(a_{1},...,\hat{a}_{i},...,a_{n+1})\\
&&+\sum_{1\leq i<j\leq n+1}(-1)^{i+j}\gamma_{\lambda_i+\lambda_j,\lambda_1,...,\hat{\lambda}_i,...,\hat{\lambda}_j,...,\lambda_{n+1}}([\alpha^{-1}\beta(a_{i})_{\lambda_i}a_{j}],\beta(a_{1}),...,\hat{a}_{i},...,\hat{a}_{j},...,\beta(a_{n+1})),
\end{eqnarray*}
where $\gamma$ is extended linearly over the polynomials in
$\lambda_i$. In particular, if $\gamma$ is a $0$-cochain, then $(d\gamma)_\lambda a=a_\lambda \gamma$.
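For example, for a $1$-cochain $\gamma$ the formula specializes (with $\beta^{n-1}=\mathrm{id}$) to
\begin{eqnarray*}
(d\gamma)_{\lambda_1,\lambda_2}(a_1,a_2)=\alpha(a_1)_{\lambda_1}\gamma_{\lambda_2}(a_2)-\alpha(a_2)_{\lambda_2}\gamma_{\lambda_1}(a_1)-\gamma_{\lambda_1+\lambda_2}([\alpha^{-1}\beta(a_1)_{\lambda_1}a_2]).
\end{eqnarray*}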
\noindent{\bf Proposition 2.8.} $d\gamma$ is a cochain and $d^2=0$.
{\bf Proof.} Let $\gamma$ be an $n$-cochain. It is easy to check that $d\gamma$ satisfies conformal antilinearity, skew-symmetry and commutativity.
That is, $d\gamma$ is an $(n+1)$-cochain.
Next we will check that $d^2=0$. In fact, we have
\begin{eqnarray}
&& (d^2\gamma)_{\lambda_1,...,\lambda_{n+2}}(a_{1},...,a_{n+2})\nonumber\\
&=& \sum_{i=1}^{n+2}(-1)^{i+1}\alpha\beta^{n}(a_{i})_{\lambda_i}(d\gamma)_{\lambda_1,...,\hat{\lambda}_{i},...,\lambda_{n+2}}(a_{1},...,\hat{a}_{i},...,a_{n+2})\nonumber\\
&&+\sum_{1\leq i<j\leq n+2}(-1)^{i+j}(d\gamma)_{\lambda_i+\lambda_j,\lambda_1,...,\hat{\lambda}_i,...,\hat{\lambda}_j,...,\lambda_{n+2}}([\alpha^{-1}\beta(a_{i})_{\lambda_i}a_{j}],\beta(a_{1}),...,\hat{a}_{i},...,\hat{a}_{j},...,\beta(a_{n+2}))\nonumber\\
&=&\sum_{i=1}^{n+2}\sum_{j=1}^{i-1}(-1)^{i+j}\alpha\beta^{n}(a_{i})_{\lambda_i}(\alpha\beta^{n-1}(a_{j})_{\lambda_j}\gamma_{\lambda_1,...,\hat{\lambda}_{j,i},...,\lambda_{n+2}}(a_{1},...,\hat{a}_{j,i},...,a_{n+2}))\\
&&+\sum_{i=1}^{n+2}\sum_{j=i+1}^{n+2}(-1)^{i+j+1}\alpha\beta^{n}(a_{i})_{\lambda_i}(\alpha\beta^{n-1}(a_{j})_{\lambda_j}\gamma_{\lambda_1,...,\hat{\lambda}_{i,j},...,\lambda_{n+2}}(a_{1},...,\hat{a}_{i,j},...,a_{n+2}))\\
&&+\sum_{i=1}^{n+2}\sum_{1\leq j<k<i}(-1)^{i+j+k+1}\alpha\beta^{n}(a_{i})_{\lambda_i}\gamma_{\lambda_j+\lambda_k,\lambda_1,...,\hat{\lambda}_{j,k,i},...,\lambda_{n+2}}\nonumber\\
&&\quad([\alpha^{-1}\beta(a_{j})_{\lambda_j}a_{k}],\beta(a_{1}),...,\hat{a}_{j,k,i},...,\beta(a_{n+2}))\\
&&+\sum_{i=1}^{n+2}\sum_{1\leq j<i<k\leq n+2}(-1)^{i+j+k}\alpha\beta^{n}(a_{i})_{\lambda_i}\gamma_{\lambda_j+\lambda_k,\lambda_1,...,\hat{\lambda}_{j,i,k},...,\lambda_{n+2}}\nonumber\\
&&\quad([\alpha^{-1}\beta(a_{j})_{\lambda_j}a_{k}],\beta(a_{1}),...,\hat{a}_{j,i,k},...,\beta(a_{n+2}))\\
&&+\sum_{i=1}^{n+2}\sum_{1\leq i<j<k\leq n+2}(-1)^{i+j+k+1}\alpha\beta^{n}(a_{i})_{\lambda_i}\gamma_{\lambda_j+\lambda_k,\lambda_1,...,\hat{\lambda}_{i,j,k},...,\lambda_{n+2}}\nonumber\\
&&\quad([\alpha^{-1}\beta(a_{j})_{\lambda_j}a_{k}],\beta(a_{1}),...,\hat{a}_{i,j,k},...,\beta(a_{n+2}))
\end{eqnarray}
\begin{eqnarray}
&&+\sum_{1\leq i<j\leq n+2}\sum_{k=1}^{i-1}(-1)^{i+j+k}\alpha\beta^{n}(a_{k})_{\lambda_k}\gamma_{\lambda_i+\lambda_j,\lambda_1,...,\hat{\lambda}_{k,i,j},...,\lambda_{n+2}}\nonumber\\
&&\quad([\alpha^{-1}\beta(a_{i})_{\lambda_i}a_{j}],\beta(a_{1}),...,\hat{a}_{k,i,j},...,\beta(a_{n+2}))\\
&&+\sum_{1\leq i<j\leq n+2}\sum_{k=i+1}^{j-1}(-1)^{i+j+k+1}\alpha\beta^{n}(a_{k})_{\lambda_k}\gamma_{\lambda_i+\lambda_j,\lambda_1,...,\hat{\lambda}_{i,k,j},...,\lambda_{n+2}}\nonumber\\
&&\quad([\alpha^{-1}\beta(a_{i})_{\lambda_i}a_{j}],\beta(a_{1}),...,\hat{a}_{i,k,j},...,\beta(a_{n+2}))\\
&&+\sum_{1\leq i<j\leq n+2}\sum_{k=j+1}^{n+2}(-1)^{i+j+k}\alpha\beta^{n}(a_{k})_{\lambda_k}\gamma_{\lambda_i+\lambda_j,\lambda_1,...,\hat{\lambda}_{i,j,k},...,\lambda_{n+2}}\nonumber\\
&&\quad([\alpha^{-1}\beta(a_{i})_{\lambda_i}a_{j}],\beta(a_{1}),...,\hat{a}_{i,j,k},...,\beta(a_{n+2}))\\
&&+\sum_{1\leq i<j\leq n+2}(-1)^{i+j}\alpha\beta^{n-1}([\alpha^{-1}\beta(a_{i})_{\lambda_i}a_{j}])_{\lambda_i+\lambda_j}\gamma_{\lambda_1,...,\hat{\lambda}_{i},...,\hat{\lambda}_{j},...,\lambda_{n+2}}(\beta(a_{1}),...,\hat{a}_{i},...,\hat{a}_{j},...,\beta(a_{n+2}))~~~~~~~~~\\
&&+\sum^{n+2}_{\substack{\text{distinct }i,j,k,l\\ i<j,\,k<l}}(-1)^{i+j+k+l}\,\mathrm{sign}\{i,j,k,l\}\nonumber\\
&&\quad\gamma_{\lambda_k+\lambda_l,\lambda_i+\lambda_j,\lambda_1,...,\hat{\lambda}_{i,j,k,l},...,\lambda_{n+2}}(\beta([\alpha^{-1}\beta(a_{k})_{\lambda_k}a_{l}]),
\beta([\alpha^{-1}\beta(a_{i})_{\lambda_i}a_{j}]),...,\hat{a}_{i,j,k,l},...,\beta^2(a_{n+2}))~~~~~~~~~\\
&&+\sum^{n+2}_{\substack{i,j,k=1,\,i<j\\ k\neq i,j}}(-1)^{i+j+k}\,\mathrm{sign}\{i,j,k\}\nonumber\\
&&\quad\gamma_{\lambda_i+\lambda_j+\lambda_k,\lambda_1,...,\hat{\lambda}_{i,j,k},...,\lambda_{n+2}}([[\alpha^{-1}\beta(a_i)_{\lambda_i}a_{j}]_{\lambda_i+\lambda_j}\beta(a_k)],
\beta^2(a_1),...,\hat{a}_{i,j,k},...,\beta^2(a_{n+2})),~~~~~~~~~
\end{eqnarray}
where $\mathrm{sign}\{i_1,...,i_p\}$ is the sign of the permutation putting the indices in increasing
order, and $\hat{a}_{i,j}$ means that $a_i,a_j,...$ are omitted.
It is obvious that the summations (2.6) and (2.11) cancel each other. The same is true
for (2.7) and (2.10), and for (2.8) and (2.9). The BiHom-Jacobi identity implies that (2.14) vanishes, and
the skew-symmetry implies that (2.13) vanishes. Because $(M,\alpha_M,\beta_M)$ is an $R$-module, it follows that
\begin{eqnarray*}
\alpha\beta(a)_\lambda(b_\mu v)-\beta(b)_{\mu}(\alpha(a)_\lambda v)=[\beta(a)_\lambda b]_{\lambda+\mu}\beta_{M}(v).
\end{eqnarray*}
Since $\gamma\circ \alpha=\alpha_M\circ \gamma$ and $\gamma\circ \beta=\beta_M\circ \gamma$, the summations (2.4), (2.5) and (2.12) cancel.
So $d^2\gamma=0$ and the proof is completed.
$\square$
Thus the cochains of a BiHom-Lie conformal algebra $R$ with coefficients in a module $M$ form a
complex, which is denoted by
\begin{eqnarray*}
\widetilde{C}^{\bullet}_{\alpha,\beta}=\widetilde{C}^{\bullet}_{\alpha,\beta}(R,M)=\bigoplus_{n\in \mathbb{Z}_{+}}\widetilde{C}^{n}_{\alpha,\beta}(R,M).
\end{eqnarray*}
This complex is called the basic complex for the $R$-module $(M,\alpha_M,\beta_M)$.
\section{Deformations and Nijenhuis operators of BiHom-Lie\\ conformal algebras}
\def\theequation{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
In this section, we give the definition of a deformation and show that the deformation generated
by a Nijenhuis operator is trivial.
Let $R$ be a regular BiHom-Lie conformal algebra. For $s\in\mathbb{Z}$, let $R_s$ denote the $R$-module structure on $R$ given by
$a_\lambda b=[\alpha^{s}(a)_\lambda b]$, for any $a,b\in R$.
Let $\gamma\in \widetilde{C}^n_{\alpha,\beta}(R,R_s)$. Define an operator $d_s: \widetilde{C}^{n}_{\alpha,\beta}(R,R_s)\rightarrow \widetilde{C}^{n+1}_{\alpha,\beta}(R,R_s)$ by
\begin{eqnarray*}
&&(d_s\gamma)_{\lambda_1,...,\lambda_{n+1}}(a_{1},...,a_{n+1})\\
&=&\sum_{i=1}^{n+1}(-1)^{i+1}[\alpha^{s+1}\beta^{n-1}(a_{i})_{\lambda_i}\gamma_{\lambda_1,...,\hat{\lambda}_{i},...,\lambda_{n+1}}(a_{1},...,\hat{a}_{i},...,a_{n+1})]\\
&&+\sum_{1\leq i<j\leq n+1}(-1)^{i+j}\gamma_{\lambda_i+\lambda_j,\lambda_1,...,\hat{\lambda}_i,...,\hat{\lambda}_j,...,\lambda_{n+1}}([\alpha^{-1}\beta(a_{i})_{\lambda_i}a_{j}],\beta(a_{1}),...,\hat{a}_{i},...,\hat{a}_{j},...,\beta(a_{n+1})).
\end{eqnarray*}
Obviously, the operator $d_s$ is induced from the differential $d$. Thus $d_s$ preserves the space of cochains
and satisfies $d_s^2=0$. In the following, the complex $\widetilde{C}^{\bullet}_{\alpha,\beta}(R,R_s)$ is assumed to be equipped with the
differential $d_s$.
Taking $s=-1$, let $\psi\in C^{2}(R,R)_{\overline{0}}$ be a bilinear operator commuting with $\alpha$ and $\beta$. We consider a $t$-parameterized family of bilinear operations on $R$,
\begin{eqnarray}
[a_\lambda b]_t=[a_\lambda b]+t\psi_{\lambda,-\partial-\lambda}(a,b),~~~\forall a,b\in R.
\end{eqnarray}
If $[\cdot_{\lambda}\cdot]_t$ endows $(R,[\cdot_{\lambda}\cdot]_t,\alpha,\beta)$ with a BiHom-Lie conformal algebra structure, we say that $\psi$ generates
a deformation of the BiHom-Lie conformal algebra $R$. It is easy to see that $[\cdot_{\lambda}\cdot]_t$ satisfies (2.1).
The skew-symmetry of $[\cdot_{\lambda}\cdot]_t$ means that
\begin{eqnarray*}
&&[\beta(a)_\lambda \alpha(b)]_t=[\beta(a)_\lambda \alpha(b)]+t\psi_{\lambda,-\partial-\lambda}(\beta(a),\alpha(b)),\\
&&[\beta(b)_{-\partial-\lambda} \alpha(a)]_t=[\beta(b)_{-\partial-\lambda}\alpha(a)]+t\psi_{-\partial-\lambda,\lambda}(\beta(b),\alpha(a)).
\end{eqnarray*}
Then $[\beta(a)_\lambda \alpha(b)]_t=-[\beta(b)_{-\partial-\lambda} \alpha(a)]_t$ if and only if
\begin{eqnarray}
\psi_{\lambda,-\partial-\lambda}(\beta(a),\alpha(b))=-\psi_{-\partial-\lambda,\lambda}(\beta(b),\alpha(a)).
\end{eqnarray}
As for (2.3), expanding the BiHom-Jacobi identity for $[\cdot_{\lambda}\cdot]_t$, we have
\begin{eqnarray*}
&&[\alpha\beta(a)_\lambda[b_{\mu}c]]+t([\alpha\beta(a)_\lambda(\psi_{\mu,-\partial-\mu}(b,c))]+\psi_{\lambda,-\partial-\lambda}(\alpha\beta(a),[b_{\mu}c]))\\
&&+t^2\psi_{\lambda,-\partial-\lambda}(\alpha\beta(a),\psi_{\mu,-\partial-\mu}(b,c))\\
&=& [\beta(b)_\mu[\alpha(a)_{\lambda}c]]+t([\beta(b)_\mu(\psi_{\lambda,-\partial-\lambda}(\alpha(a),c))]+\psi_{\mu,-\partial-\mu}(\beta(b),[\alpha(a)_{\lambda}c]))\\
&&+t^2\psi_{\mu,-\partial-\mu}(\beta(b),\psi_{\lambda,-\partial-\lambda}(\alpha(a),c))+[[\beta(a)_{\lambda}b]_{\lambda+\mu}\beta(c)]\\
&&+t([(\psi_{\lambda,-\partial-\lambda}(\beta(a),b))_{\lambda+\mu}\beta(c)]+\psi_{\lambda+\mu,-\partial-\lambda-\mu}([\beta(a)_\lambda b],\beta(c)))\\
&&+t^2\psi_{\lambda+\mu,-\partial-\lambda-\mu}(\psi_{\lambda,-\partial-\lambda}(\beta(a),b),\beta(c)).
\end{eqnarray*}
This is equivalent to the following conditions:
\begin{eqnarray}
&&\psi_{\lambda,-\partial-\lambda}(\alpha\beta(a),\psi_{\mu,-\partial-\mu}(b,c))\nonumber\\
&=&\psi_{\mu,-\partial-\mu}(\beta(b),\psi_{\lambda,-\partial-\lambda}(\alpha(a),c))
+\psi_{\lambda+\mu,-\partial-\lambda-\mu}(\psi_{\lambda,-\partial-\lambda}(\beta(a),b),\beta(c)),\\
&&[\alpha\beta(a)_\lambda(\psi_{\mu,-\partial-\mu}(b,c))]+\psi_{\lambda,-\partial-\lambda}(\alpha\beta(a),[b_{\mu}c])\nonumber\\
&=& [\beta(b)_\mu(\psi_{\lambda,-\partial-\lambda}(\alpha(a),c))]+\psi_{\mu,-\partial-\mu}(\beta(b),[\alpha(a)_{\lambda}c])\nonumber\\
&&+[(\psi_{\lambda,-\partial-\lambda}(\beta(a),b))_{\lambda+\mu}\beta(c)]+\psi_{\lambda+\mu,-\partial-\lambda-\mu}([\beta(a)_\lambda b],\beta(c)).
\end{eqnarray}
Obviously, (3.3) and (3.4) mean that $\psi$ defines a BiHom-Lie conformal algebra structure on $R$, as required.
$\square$
A deformation is said to be trivial if there is a linear operator $f\in \widetilde{C}_{\alpha,\beta}^{1}(R,R_{-1})$ such that
\begin{eqnarray}
T_t([a_\lambda b]_t)=[T_t(a)_\lambda T_t(b)],
\end{eqnarray}
for any $a,b\in R$, where $T_t=id+tf$.
\noindent{\bf Definition 3.1.} A linear operator $f\in \widetilde{C}_{\alpha,\beta}^{1}(R,R_{-1})$ is a Nijenhuis operator
if
\begin{eqnarray}
[f(a)_\lambda f(b)]=f([a_\lambda b]_{N}),
\end{eqnarray}
for any $a,b\in R$, where the bracket $[\cdot_{\lambda}\cdot]_{N}$ is defined by
\begin{eqnarray}
[a_\lambda b]_{N}=[f(a)_\lambda b]+[a_\lambda f(b)]-f([a_\lambda b]).
\end{eqnarray}
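For example, $f=\mathrm{id}$ is a Nijenhuis operator: in this case $[a_\lambda b]_{N}=[a_\lambda b]+[a_\lambda b]-[a_\lambda b]=[a_\lambda b]$ and hence $[f(a)_\lambda f(b)]=[a_\lambda b]=f([a_\lambda b]_{N})$. Likewise $f=0$ is a Nijenhuis operator, since then both sides of (3.6) vanish.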
\noindent{\bf Theorem 3.2.} Let $(R,\alpha,\beta)$ be a regular BiHom-Lie conformal algebra and $f\in \widetilde{C}_{\alpha,\beta}^{1}(R,R_{-1})$
a Nijenhuis operator. Then a deformation of $(R,\alpha,\beta)$ can be obtained by putting
\begin{eqnarray}
\psi_{\lambda,-\partial-\lambda}(a,b)=[a_\lambda b]_{N},
\end{eqnarray}
for any $a,b\in R$. Furthermore, this deformation is trivial.
{\bf Proof.} To show that $\psi$ generates a deformation, we need to verify (3.2), (3.3)
and (3.4). We first prove that $\psi_{\lambda,-\partial-\lambda}(\beta(a),\alpha(b))=-\psi_{-\partial-\lambda,\lambda}(\beta(b),\alpha(a))$. In fact, we have
\begin{eqnarray*}
\psi_{\lambda,-\partial-\lambda}(\beta(a),\alpha(b))
&=&[\beta(a)_\lambda \alpha(b)]_N\\
&=&[f(\beta(a))_\lambda \alpha(b)]+[\beta(a)_\lambda f(\alpha(b))]-f([\beta(a)_\lambda\alpha(b)])\\
&=& -([f(\beta(b))_{-\partial-\lambda} \alpha(a)]+[\beta(b)_{-\partial-\lambda} f(\alpha(a))]-f([\beta(b)_{-\partial-\lambda}\alpha(a)]))\\
&=& -[\beta(b)_{-\partial-\lambda}\alpha(a)]_N\\
&=&-\psi_{-\partial-\lambda,\lambda}(\beta(b),\alpha(a)),
\end{eqnarray*}
as desired. By (3.6), (3.7) and (3.8), we have
\begin{eqnarray*}
&&\psi_{\lambda,-\partial-\lambda}(\alpha\beta(a),\psi_{\mu,-\partial-\mu}(b,c))+\psi_{\mu,-\partial-\mu}(\beta(b),\psi_{-\partial-\lambda,\lambda}(\alpha^{-1}\beta(c),\alpha^2\beta^{-1}(a)))\\
&&+\psi_{-\partial-\lambda-\mu,\lambda+\mu}(\alpha^{-1}\beta^2(c),\psi_{\lambda,-\partial-\lambda}(\alpha(a),\alpha\beta^{-1}(b)))\\
&=&[f(\alpha\beta(a))_{\lambda}[f(b)_{\mu}c]]+[f(\alpha\beta(a))_{\lambda}[b_\mu f(c)]]-[f(\alpha\beta(a))_{\lambda}f([b_{\mu}c])]\\
&&+[\alpha\beta(a)_{\lambda}[f(b)_\mu f(c)]]-f([\alpha\beta(a)_{\lambda}[f(b)_\mu c]])-f([\alpha\beta(a)_{\lambda}[b_\mu f(c)]])\\
&&+f([\alpha\beta(a)_{\lambda}f([b_\mu c])])+[f(\beta(b))_{\mu}[f(\alpha^{-1}\beta(c))_{-\partial-\lambda}\alpha^2\beta^{-1}(a)]]\\
&&+[f(\beta(b))_{\mu}[\alpha^{-1}\beta(c)_{-\partial-\lambda} f(\alpha^2\beta^{-1}(a))]]-[f(\beta(b))_{\mu}f([\alpha^{-1}\beta(c)_{-\partial-\lambda}\alpha^2\beta^{-1}(a)])]\\
&&+[\beta(b)_{\mu}[f(\alpha^{-1}\beta(c))_{-\partial-\lambda} f(\alpha^2\beta^{-1}(a))]]-f([\beta(b)_{\mu}[f(\alpha^{-1}\beta(c))_{-\partial-\lambda} \alpha^2\beta^{-1}(a)]])\\
&&-f([\beta(b)_{\mu}[\alpha^{-1}\beta(c)_{-\partial-\lambda} f(\alpha^2\beta^{-1}(a))]])+f([\beta(b)_{\mu}f([\alpha^{-1}\beta(c)_{-\partial-\lambda} \alpha^2\beta^{-1}(a)])])\\
&&+[f(\alpha^{-1}\beta^2(c))_{-\partial-\lambda-\mu}[f(\alpha(a))_{\lambda}\alpha\beta^{-1}(b)]]+[f(\alpha^{-1}\beta^2(c))_{-\partial-\lambda-\mu}[\alpha(a)_\lambda f(\alpha\beta^{-1}(b))]]\\
&&-[f(\alpha^{-1}\beta^2(c))_{-\partial-\lambda-\mu}f([\alpha(a)_{\lambda}\alpha\beta^{-1}(b)])]+[\alpha^{-1}\beta^2(c)_{-\partial-\lambda-\mu}[f(\alpha(a))_{\lambda}f(\alpha\beta^{-1}(b))]]\\
&&-f([\alpha^{-1}\beta^2(c)_{-\partial-\lambda-\mu}[f(\alpha(a))_{\lambda}\alpha\beta^{-1}(b)]])-f([\alpha^{-1}\beta^2(c)_{-\partial-\lambda-\mu}[\alpha(a)_{\lambda}f(\alpha\beta^{-1}(b))]])\\
&&+f([\alpha^{-1}\beta^2(c)_{-\partial-\lambda-\mu}f([\alpha(a)_{\lambda}\alpha\beta^{-1}(b)])]).
\end{eqnarray*}
Since $f$ is a Nijenhuis operator, it follows that
\begin{eqnarray*}
&&-[f(\alpha\beta(a))_{\lambda}f([b_{\mu}c])]+f([\alpha\beta(a)_{\lambda}f([b_\mu c])])\\
&=&-f([f(\alpha\beta(a))_{\lambda}[b_{\mu}c]])+f^2([\alpha\beta(a)_{\lambda}[b_\mu c]]),\\
&&-[f(\beta(b))_{\mu}f([\alpha^{-1}\beta(c)_{-\partial-\lambda}\alpha^2\beta^{-1}(a)])]+f([\beta(b)_{\mu}f([\alpha^{-1}\beta(c)_{-\partial-\lambda} \alpha^2\beta^{-1}(a)])])\\
&=&-f([f(\beta(b))_{\mu}[\alpha^{-1}\beta(c)_{-\partial-\lambda}\alpha^2\beta^{-1}(a)]])+f^2([\beta(b)_{\mu}[\alpha^{-1}\beta(c)_{-\partial-\lambda} \alpha^2\beta^{-1}(a)]]),\\
&&-[f(\alpha^{-1}\beta^2(c))_{-\partial-\lambda-\mu}f([\alpha(a)_{\lambda}\alpha\beta^{-1}(b)])]+f([\alpha^{-1}\beta^2(c)_{-\partial-\lambda-\mu}f([\alpha(a)_{\lambda}\alpha\beta^{-1}(b)])])\\
&=&-f([f(\alpha^{-1}\beta^2(c))_{-\partial-\lambda-\mu}[\alpha(a)_{\lambda}\alpha\beta^{-1}(b)]])+f^2([\alpha^{-1}\beta^2(c)_{-\partial-\lambda-\mu}[\alpha(a)_{\lambda}\alpha\beta^{-1}(b)]]).
\end{eqnarray*}
Note that
\begin{eqnarray*}
[\alpha\beta(a)_\lambda[b_\mu c]]+[\alpha^{-1}\beta^2(c)_{-\partial-\lambda-\mu}[\alpha(a)_{\lambda}\alpha\beta^{-1}(b)]]+[\beta(b)_{\mu}[\alpha^{-1}\beta(c)_{-\partial-\lambda}\alpha^2\beta^{-1}(a)]]=0.
\end{eqnarray*}
Thus
\begin{eqnarray*}
&&\psi_{\lambda,-\partial-\lambda}(\alpha\beta(a),\psi_{\mu,-\partial-\mu}(b,c))+\psi_{\mu,-\partial-\mu}(\beta(b),\psi_{-\partial-\lambda,\lambda}(\alpha^{-1}\beta(c),\alpha^2\beta^{-1}(a)))\\
&&+\psi_{-\partial-\lambda-\mu,\lambda+\mu}(\alpha^{-1}\beta^2(c),\psi_{\lambda,-\partial-\lambda}(\alpha(a),\alpha\beta^{-1}(b)))=0.
\end{eqnarray*}
In a similar way, one can check (3.4). This proves that $\psi$ generates a deformation of the regular BiHom-Lie conformal algebra $(R,\alpha,\beta)$.
Let $T_t=id+tf$. By (3.8), we have
\begin{eqnarray}
T_t([a_\lambda b]_t)&=&(id+tf)([a_\lambda b]+t\psi_{\lambda,-\partial-\lambda}(a,b))\nonumber\\
&=& (id+tf)([a_\lambda b]+t[a_\lambda b]_N)\nonumber\\
&=& [a_\lambda b]+t([a_\lambda b]_N+f([a_\lambda b]))+t^2f([a_\lambda b]_N).
\end{eqnarray}
On the other hand, we have
\begin{eqnarray}
[T_t(a)_\lambda T_t(b)]&=&[(a+tf(a))_\lambda(b+tf(b))]\nonumber\\
&=& [a_\lambda b]+t([f(a)_\lambda b]+[a_\lambda f(b)])+t^2[f(a)_\lambda f(b)].
\end{eqnarray}
Combining (3.9) with (3.10), it follows that $T_t([a_\lambda b]_t)=[T_t(a)_\lambda T_t(b)]$. Therefore the deformation
is trivial.
$\square$
\section{Derivations of multiplicative BiHom-Lie conformal algebras}
\def\theequation{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
For convenience, we denote by $\mathcal{A}$ the ring $\mathbb{C}[\partial]$ of polynomials in the indeterminate $\partial$.
\noindent{\bf Definition 4.1.}$^{\cite{Kac98}}$ A conformal linear map between $\mathcal{A}$-modules $V$ and $W$ is a linear map
$\phi: V\rightarrow \mathcal{A}[\lambda]\otimes_{\mathcal{A}}W$ such that
\begin{eqnarray}
\phi_{\lambda}(\partial v)=(\partial+\lambda)\phi_{\lambda}(v).
\end{eqnarray}
We will often abuse the notation by writing $\phihi: V \rightharpoonup oightarrow W$, it is clear from the
context that $\phihi$ is a conformal linear map. We will also write $\phihi_{\lambdaambda}$ instead of $\phihi$ to emphasize
the dependence of $\phihi$ on $\lambdaambda$.
The set of all conformal linear maps from $V$ to $W$ is denoted by $Chom(V,W)$ and it is
made into an $\mathcal{A}$-module via
\betaegin{eqnarray}
(\phiartial\phihi)_{\lambdaambda} v=-\lambdaambda
\etand{eqnarray}
We will write $Cend(V)$ for $Chom(V, V)$.
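To illustrate Definition 4.1 in the simplest case (cf. \cite{Kac98}): if $V=\mathcal{A}v$ is free of rank one, then by (4.1) a conformal linear map $\phi\in Cend(V)$ is determined by its value on the generator,
\begin{eqnarray*}
\phi_{\lambda}(v)=p(\partial,\lambda)v \quad \mbox{for some polynomial } p\in\mathbb{C}[\partial,\lambda],
\end{eqnarray*}
so $Cend(V)$ may be identified with $\mathbb{C}[\partial,\lambda]$.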
\noindent{\bf Definition 4.2.} Let $(R,\alpha,\beta)$ be a multiplicative BiHom-Lie conformal algebra. Then a
conformal linear map $D_\lambda: R\rightarrow R$ is called an $\alpha^k\beta^l$-derivation of $(R,\alpha,\beta)$ if
\begin{eqnarray}
D_\lambda \circ \alpha=\alpha\circ D_\lambda, ~~~D_\lambda \circ \beta=\beta\circ D_\lambda,\nonumber\\
D_\lambda([a_\mu b])=[D_\lambda(a)_{\lambda+\mu}\alpha^{k}\beta^{l}(b)]+[\alpha^{k}\beta^{l}(a)_{\mu}D_\lambda(b)].
\end{eqnarray}
Denote by $Der_{\alpha^{k}\beta^{l}}(R)$ the set of $\alpha^{k}\beta^{l}$-derivations of the multiplicative BiHom-Lie conformal
algebra $(R,\alpha,\beta)$. For any $a\in R$ satisfying $\alpha(a)=a, \beta(a)=a$, define $D_{k,l}(a):R\rightarrow R$ by
\begin{eqnarray*}
D_{k,l}(a)_{\lambda}(b)=[a_\lambda\alpha^{k+1}\beta^{l-1}(b)],~~~\forall b\in R.
\end{eqnarray*}
Then $D_{k,l}(a)$ is an $\alpha^{k+1}\beta^{l}$-derivation, which is called an inner $\alpha^{k+1}\beta^{l}$-derivation. In fact,
\begin{eqnarray*}
D_{k,l}(a)_{\lambda}(\partial b)&=&[a_\lambda\alpha^{k+1}\beta^{l-1}(\partial b)]
= [a_\lambda\partial\alpha^{k+1}\beta^{l-1}( b)]
= (\partial+\lambda)D_{k,l}(a)_{\lambda}(b),
\end{eqnarray*}
\begin{eqnarray*}
&&D_{k,l}(a)_{\lambda}(\alpha(b))=[a_\lambda\alpha^{k+2}\beta^{l-1}(b)]
= \alpha[a_\lambda\alpha^{k+1}\beta^{l-1}( b)]
= \alpha\circ D_{k,l}(a)_{\lambda}(b),\\
&&D_{k,l}(a)_{\lambda}(\beta(b))=[a_\lambda\alpha^{k+1}\beta^{l}(b)]
= \beta[a_\lambda\alpha^{k+1}\beta^{l-1}( b)]
= \beta\circ D_{k,l}(a)_{\lambda}(b),\\
&&D_{k,l}(a)_{\lambda}([b_{\mu}c])= [a_\lambda \alpha^{k+1}\beta^{l-1}([b_{\mu}c])]
= [\alpha\beta(a)_\lambda [\alpha^{k+1}\beta^{l-1}(b)_{\mu}\alpha^{k+1}\beta^{l-1}(c)]]\\
&&~~~~~~~~~~~~~~~~~~~= [[\beta(a)_\lambda\alpha^{k+1}\beta^{l-1}(b)]_{\lambda+\mu}\alpha^{k+1}\beta^{l}(c)]+[\alpha^{k+1}\beta^{l}(b)_{\mu}[\alpha(a)_{\lambda}\alpha^{k+1}\beta^{l-1}(c)]]\\
&&~~~~~~~~~~~~~~~~~~~=[[a_\lambda\alpha^{k+1}\beta^{l-1}(b)]_{\lambda+\mu}\alpha^{k+1}\beta^{l}(c)]+[\alpha^{k+1}\beta^{l}(b)_{\mu}[a_{\lambda}\alpha^{k+1}\beta^{l-1}(c)]]\\
&&~~~~~~~~~~~~~~~~~~~=[(D_{k,l}(a)_{\lambda}(b))_{\lambda+\mu}\alpha^{k+1}\beta^{l}(c)]+[\alpha^{k+1}\beta^{l}(b)_{\mu}(D_{k,l}(a)_{\lambda}(c))].
\end{eqnarray*}
Denote by $Inn_{\alpha^k\beta^l} (R)$ the set of inner $\alpha^k\beta^l$-derivations.
For any $D_{\lambda}\in Der_{\alpha^k\beta^l}(R)$ and $D'_{\mu-\lambda}\in Der_{\alpha^s\beta^t}(R)$, define their commutator $[D_\lambda D']_{\mu}$ by
\begin{eqnarray}
[D_\lambda D']_{\mu}(a)=D_\lambda(D'_{\mu-\lambda}a)-D'_{\mu-\lambda}(D_\lambda a),~~~\forall a\in R.
\end{eqnarray}
\noindent{\bf Lemma 4.3.} For any $D_{\lambda}\in Der_{\alpha^k\beta^l}(R)$ and $D'_{\mu-\lambda}\in Der_{\alpha^s\beta^t}(R)$, we have
\begin{eqnarray*}
[D_\lambda D'] \in Der_{\alpha^{k+s}\beta^{l+t}}(R)[\lambda].
\end{eqnarray*}
{\bf Proof.} For any $a,b\in R$, we have
\begin{eqnarray*}
&&[D_\lambda D']_{\mu}(\partial a)\\
&=&D_{\lambda}(D'_{\mu-\lambda}\partial a)-D'_{\mu-\lambda}(D_\lambda \partial a)\\
&=& D_\lambda((\partial+\mu-\lambda)D'_{\mu-\lambda}a)-D'_{\mu-\lambda}((\partial+\lambda)D_\lambda a)\\
&=& (\partial+\mu)D_{\lambda}(D'_{\mu-\lambda}a)-(\partial+\mu)D'_{\mu-\lambda}(D_{\lambda}a)\\
&=& (\partial+\mu)[D_\lambda D']_{\mu}(a).
\end{eqnarray*}
Similarly, we have
\begin{eqnarray*}
&&[D_\lambda D']_{\mu}([a_{\gamma}b])\\
&=& D_\lambda(D'_{\mu-\lambda}[a_{\gamma}b])-D'_{\mu-\lambda}(D_\lambda[a_{\gamma}b])\\
&=&D_\lambda([D'_{\mu-\lambda}(a)_{\mu-\lambda+\gamma}\alpha^s\beta^{t}(b)]+[\alpha^s\beta^{t}(a)_{\gamma}D'_{\mu-\lambda}(b)])\\
&& -D'_{\mu-\lambda}([D_\lambda(a)_{\lambda+\gamma}\alpha^k\beta^{l}(b)]+[\alpha^k\beta^{l}(a)_{\gamma}D_{\lambda}(b)])\\
&=& [D_\lambda(D'_{\mu-\lambda}(a))_{\mu+\gamma}\alpha^{k+s}\beta^{l+t}(b)]+[\alpha^{k}\beta^{l}(D'_{\mu-\lambda}(a))_{\mu-\lambda+\gamma}D_{\lambda}(\alpha^{s}\beta^{t}(b))]\\
&&+[D_\lambda(\alpha^s\beta^t(a))_{\lambda+\gamma}\alpha^k\beta^l(D'_{\mu-\lambda}(b))]+[\alpha^{k+s}\beta^{l+t}(a)_\gamma(D_\lambda(D'_{\mu-\lambda}(b)))]\\
&&-[(D'_{\mu-\lambda}(D_\lambda(a)))_{\mu+\gamma}\alpha^{k+s}\beta^{l+t}(b)]-[\alpha^{s}\beta^{t}(D_\lambda(a))_{\lambda+\gamma}(D'_{\mu-\lambda}(\alpha^k\beta^l(b)))]\\
&&-[(D'_{\mu-\lambda}(\alpha^{k}\beta^{l}(a)))_{\mu-\lambda+\gamma}\alpha^{s}\beta^{t}(D_{\lambda}(b))]-[\alpha^{k+s}\beta^{l+t}(a)_{\gamma}(D'_{\mu-\lambda}(D_\lambda(b)))]\\
&=&[([D_\lambda D']_\mu a)_{\mu+\gamma}\alpha^{k+s}\beta^{l+t}(b)]+[\alpha^{k+s}\beta^{l+t}(a)_{\gamma}([D_\lambda D']_{\mu}b)].
\end{eqnarray*}
Therefore, $[D_\lambda D'] \in Der_{\alpha^{k+s}\beta^{l+t}}(R)[\lambda]$.
$\square$
Define
\begin{eqnarray}
Der(R)=\bigoplus_{k\geq0, l\geq 0}Der_{\alpha^k\beta^l}(R).
\end{eqnarray}
\noindent{\bf Proposition 4.4.} With the notations above, $(Der(R), \alpha',\beta')$ is a BiHom-Lie conformal algebra with respect to (4.4),
where $\alpha'(D)=D\circ \alpha$ and $\beta'(D)=D\circ \beta$.
{\bf Proof.} By (4.2), $Der(R)$ is a $\mathbb{C}[\partial]$-module. By (4.1), (4.2) and (4.4), it is easy to check that (2.1) and (2.2)
are satisfied. To check the BiHom-Jacobi identity, we compute
\begin{eqnarray*}
&&[\alpha'\beta'(D)_{\lambda}[D'_{\mu}D'']]_{\theta}(a)\\
&=&(D\circ \alpha\beta)_{\lambda}([D'_{\mu}D'']_{\theta-\lambda}a)-[D'_{\mu}D'']_{\theta-\lambda}((D\circ \alpha\beta)_\lambda a)\\
&=& D_{\lambda}([D'_\mu D'']_{\theta-\lambda}\alpha\beta(a))-[D'_\mu D'']_{\theta-\lambda}(D_\lambda\alpha\beta(a))\\
&=& D_\lambda(D'_\mu(D''_{\theta-\lambda-\mu}\alpha\beta(a)))-D_\lambda(D''_{\theta-\lambda-\mu}(D'_{\mu}\alpha\beta(a)))\\
&& -D_{\mu}'(D''_{\theta-\lambda-\mu}(D_\lambda\alpha\beta(a)))+D''_{\theta-\lambda-\mu}(D'_\mu(D_\lambda\alpha\beta(a))),\\
&&[\beta'(D')_{\mu}[\alpha'(D)_\lambda D'']]_{\theta}(a)\\
&=& D'_{\mu}(D_\lambda(D''_{\theta-\lambda-\mu}\alpha\beta(a)))-D'_\mu(D''_{\theta-\lambda-\mu}(D_\lambda(\alpha\beta(a))))\\
&& -D_\lambda(D''_{\theta-\lambda-\mu}(D'_\mu\alpha\beta(a)))+D''_{\theta-\lambda-\mu}(D_\lambda(D'_\mu\alpha\beta(a))),\\
&&[[\beta'(D)_\lambda D']_{\lambda+\mu}\beta'(D'')]_{\theta}(a)\\
&=&D_\lambda(D'_\mu(D''_{\theta-\lambda-\mu}\alpha\beta(a)))-D'_{\mu}(D_\lambda(D''_{\theta-\lambda-\mu}\alpha\beta(a)))\\
&&-D''_{\theta-\lambda-\mu}(D_\lambda(D'_\mu\alpha\beta(a)))+D''_{\theta-\lambda-\mu}(D'_\mu(D_\lambda\alpha\beta(a))).
\end{eqnarray*}
Thus, $[\alpha'\beta'(D)_{\lambda}[D'_{\mu}D'']]_{\theta}(a)=[\beta'(D')_{\mu}[\alpha'(D)_\lambda D'']]_{\theta}(a)+[[\beta'(D)_\lambda D']_{\lambda+\mu}\beta'(D'')]_{\theta}(a)$. This proves that $(Der(R), \alpha',\beta')$ is a BiHom-Lie conformal algebra.
$\square$
At the end of this section, we give an application of the $\alpha^0\beta^1$-derivations of a regular
BiHom-Lie conformal algebra $(R, \alpha,\beta)$.
For any $D_\lambda\in Cend(R)$, define a bilinear
operation $[\cdot_\lambda \cdot]_D$ on the vector space $R\oplus \mathbb{R}D$:
\begin{eqnarray*}
[(a+mD)_\lambda(b+nD)]_D=[a_\lambda b]+mD_\lambda(b)-nD_{-\lambda-\partial}(\alpha\beta^{-1}(a)), \forall a,b\in R, m,n\in \mathbb{R},
\end{eqnarray*}
and linear maps $\alpha',\beta': R\oplus \mathbb{R}D\rightarrow R\oplus \mathbb{R}D$ by $\alpha'(a+D)=\alpha(a)+D$ and $\beta'(a+D)=\beta(a)+D$.
\noindent{\bf Proposition 4.5.} $(R\oplus \mathbb{R}D, \alpha',\beta')$ is a regular BiHom-Lie conformal algebra if and only
if $D_\lambda$ is an $\alpha^0\beta^1$-derivation of $(R, \alpha,\beta)$.
{\bf Proof.} Suppose that $(R\oplus \mathbb{R}D, \alpha',\beta')$ is a regular BiHom-Lie conformal algebra. For any $a,b\in R$, $m,n\in \mathbb{R}$, we have
\begin{eqnarray*}
&&\alpha'\circ \beta'(a+mD)=\alpha'(\beta(a)+mD)=\alpha\circ \beta(a)+ mD,\\
&&\beta'\circ \alpha'(a+mD)=\beta'(\alpha(a)+mD)=\beta\circ \alpha(a)+ mD.
\end{eqnarray*}
Hence, we have
\begin{eqnarray*}
\alpha\circ \beta=\beta\circ \alpha \Leftrightarrow \alpha'\circ \beta'=\beta'\circ \alpha'.
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
&&\alpha'[(a+mD)_\lambda(b+nD)]_D\\
&=&\alpha'([a_\lambda b]+mD_\lambda(b)-nD_{-\lambda-\partial}(\alpha\beta^{-1}(a)))\\
&=& \alpha[a_\lambda b]+m\alpha (D_\lambda(b))-n\alpha(D_{-\lambda-\partial}(\alpha\beta^{-1}(a))),\\
&&[\alpha'(a+mD)_{\lambda}\alpha'(b+nD)]_D\\
&=&[(\alpha(a)+mD)_{\lambda}(\alpha(b)+nD)]_D\\
&=&[\alpha(a)_\lambda \alpha(b)]+mD_\lambda (\alpha(b))-nD_{-\lambda-\partial}(\alpha\beta^{-1}(\alpha(a))).
\end{eqnarray*}
It follows that
$
\alpha\circ D_\lambda=D_\lambda\circ \alpha.
$
Similarly, we have
$
\beta\circ D_\lambda=D_\lambda\circ \beta.
$
Next, the BiHom-Jacobi identity implies
\begin{eqnarray*}
[\alpha'\beta'(D)_{\mu}[a_{\lambda}b]_D]_D=[\beta'(a)_{\lambda}[\alpha'(D)_\mu b]_D]_{D}+[[\beta'(D)_\mu a]_{\lambda+\mu}\beta'(b)]_{D},
\end{eqnarray*}
which is exactly $D_{\mu}([a_\lambda b])=[(D_\mu a)_{\lambda+\mu}\alpha^0\beta^1(b)]+[\alpha^0\beta^1(a)_{\lambda}(D_\mu b)]$ by (4.6). Therefore, $D_\lambda$ is an $\alpha^0\beta^1$-derivation of $(R, \alpha,\beta)$.
Conversely, suppose that $D_\lambda$ is an $\alpha^0\beta^1$-derivation of $(R, \alpha,\beta)$. For any $a,b\in R$, $m,n\in \mathbb{R}$, we have
\begin{eqnarray*}
&&[\beta'(b+nD)_{-\partial-\lambda}\alpha'(a+mD)]_D\\
&=& [(\beta(b)+nD)_{-\partial-\lambda}(\alpha(a)+mD)]_D\\
&=& [\beta(b)_{-\partial-\lambda}\alpha(a)]+nD_{-\partial-\lambda}(\alpha(a))-mD_\lambda(\alpha(b))\\
&=&-[\beta(a)_\lambda \alpha(b)]+nD_{-\partial-\lambda}(\alpha(a))-mD_\lambda(\alpha(b))\\
&=&-([\beta(a)_\lambda \alpha(b)]-nD_{-\partial-\lambda}(\alpha(a))+mD_\lambda(\alpha(b)))\\
&=& -[(\beta(a)+mD)_{\lambda}(\alpha(b)+nD)]_D.
\end{eqnarray*}
So (2.2) holds. For (2.1), we have
\begin{eqnarray*}
&&[\partial D_\lambda a]_D=-\lambda[D_\lambda a]_D,\\
&& [\partial a_\lambda D]_D=-D_{-\partial-\lambda}(\alpha\beta^{-1}(\partial a))=-\lambda[a_\lambda D]_D,\\
&& [D_\lambda \partial a]_D=D_\lambda(\partial a)=(\partial+\lambda)D_\lambda(a)=(\partial+\lambda)[D_\lambda a]_D,\\
&&[a_\lambda \partial D]_D=-(\partial D)_{-\lambda-\partial}(\alpha\beta^{-1}(a))=(\partial+\lambda)[a_\lambda D]_D,\\
&& \alpha'\circ \partial =\partial\circ \alpha', ~~\beta'\circ \partial =\partial\circ \beta',
\end{eqnarray*}
as desired. The BiHom-Jacobi identity is easy to check.
$\square$
\section{Generalized derivations of multiplicative BiHom-Lie conformal algebras}
\def\theequation{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
Let $(R,\alpha,\beta)$ be a multiplicative BiHom-Lie conformal algebra. Define
\begin{eqnarray*}
\Omega=\{D_\lambda\in Cend(R)\,|\,D_\lambda \circ \alpha=\alpha\circ D_\lambda, D_\lambda \circ \beta=\beta\circ D_\lambda \}.
\end{eqnarray*}
Then $\Omega$ is a BiHom-Lie conformal algebra with respect to (4.4) and $Der(R)$ is a subalgebra of $\Omega$.
\noindent{\bf Definition 5.1.} An element $D_{\mu}$ in $\Omega$ is called\\
(a) a generalized $\alpha^k\beta^l$-derivation of $R$, if there exist $D'_{\mu}, D''_{\mu}\in \Omega$ such that
\begin{eqnarray}
[(D_\mu(a))_{\lambda+\mu}\alpha^{k}\beta^{l}(b)]+[\alpha^{k}\beta^{l}(a)_{\lambda}(D'_{\mu}(b))]=D''_{\mu}([a_\lambda b]),~~\forall a,b\in R.
\end{eqnarray}
(b) an $\alpha^k\beta^l$-quasiderivation of $R$, if there is $D'_{\mu}\in \Omega$ such that
\begin{eqnarray}
[(D_\mu(a))_{\lambda+\mu}\alpha^{k}\beta^{l}(b)]+[\alpha^{k}\beta^{l}(a)_{\lambda}(D_{\mu}(b))]=D'_{\mu}([a_\lambda b]),~~\forall a,b\in R.
\end{eqnarray}
(c) an $\alpha^k\beta^l$-centroid of $R$, if it satisfies
\begin{eqnarray}
[(D_\mu(a))_{\lambda+\mu}\alpha^{k}\beta^{l}(b)]=[\alpha^{k}\beta^{l}(a)_{\lambda}(D_{\mu}(b))]=D_{\mu}([a_\lambda b]),~~\forall a,b\in R.
\end{eqnarray}
(d) an $\alpha^k\beta^l$-quasicentroid of $R$, if it satisfies
\begin{eqnarray}
[(D_\mu(a))_{\lambda+\mu}\alpha^{k}\beta^{l}(b)]=[\alpha^{k}\beta^{l}(a)_{\lambda}(D_{\mu}(b))], ~~\forall a,b\in R.
\end{eqnarray}
(e) an $\alpha^k\beta^l$-central derivation of $R$, if it satisfies
\begin{eqnarray}
[(D_\mu(a))_{\lambda+\mu}\alpha^{k}\beta^{l}(b)]=D_{\mu}([a_\lambda b])=0, ~~\forall a,b\in R.
\end{eqnarray}
Denote by $GDer_{\alpha^k\beta^l}(R), QDer_{\alpha^k\beta^l}(R), C_{\alpha^k\beta^l}(R), QC_{\alpha^k\beta^l}(R)$ and $ZDer_{\alpha^k\beta^l}(R)$ the sets of
all generalized $\alpha^k\beta^l$-derivations, $\alpha^k\beta^l$-quasiderivations, $\alpha^k\beta^l$-centroids, $\alpha^k\beta^l$-quasicentroids and
$\alpha^k\beta^l$-central derivations of $R$, respectively. Set
\begin{eqnarray*}
&&GDer(R):=\bigoplus_{k\geq0, l\geq 0}GDer_{\alpha^k\beta^l}(R),~~QDer(R):=\bigoplus_{k\geq0, l\geq 0}QDer_{\alpha^k\beta^l}(R),\\
&& C(R):= \bigoplus_{k\geq0, l\geq 0} C_{\alpha^k\beta^l}(R),~~~QC(R):= \bigoplus_{k\geq0, l\geq 0} QC_{\alpha^k\beta^l}(R),\\
&&ZDer(R):=\bigoplus_{k\geq0, l\geq 0}ZDer_{\alpha^k\beta^l}(R).
\end{eqnarray*}
It is easy to see that
\begin{eqnarray}
ZDer(R)\subseteq Der(R)\subseteq QDer(R)\subseteq GDer(R)\subseteq Cend(R),~~ C(R)\subseteq QC(R)\subseteq GDer(R).~~~~~
\end{eqnarray}
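As a simple illustration of these notions, suppose that the bracket of $R$ is trivial, i.e. $[a_\lambda b]=0$ for all $a,b\in R$. Then for every $D_\mu\in \Omega$ and all $k,l\geq 0$, the conditions (5.1)--(5.5) hold trivially (taking $D'_\mu=D''_\mu=D_\mu$), since every bracket appearing in them vanishes:
\begin{eqnarray*}
[(D_\mu(a))_{\lambda+\mu}\alpha^{k}\beta^{l}(b)]=[\alpha^{k}\beta^{l}(a)_{\lambda}(D_{\mu}(b))]=D_{\mu}([a_\lambda b])=0,~~\forall a,b\in R.
\end{eqnarray*}
Hence in this degenerate case each of the sets $ZDer_{\alpha^k\beta^l}(R)$, $Der_{\alpha^k\beta^l}(R)$, $QDer_{\alpha^k\beta^l}(R)$, $GDer_{\alpha^k\beta^l}(R)$, $C_{\alpha^k\beta^l}(R)$ and $QC_{\alpha^k\beta^l}(R)$ coincides with $\Omega$, so all the inclusions in (5.6) between these spaces become equalities.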
\noindent{\bf Proposition 5.2.} Let $(R,\alpha,\beta)$ be a multiplicative BiHom-Lie conformal algebra. Then
(1) $GDer(R), QDer(R)$ and $C(R)$ are subalgebras of $\Omega$,
(2) $ZDer(R)$ is an ideal of $Der(R)$.
{\bf Proof.} (1) We only prove that $GDer(R)$ is a subalgebra of $\Omega$. The proof for the other two cases is similar.
For any $D_{\mu}\in GDer_{\alpha^k\beta^{l}}(R)$, $H_{\mu}\in GDer_{\alpha^s\beta^t}(R)$ and $a,b\in R$, there exist $D'_{\mu},D''_{\mu}\in \Omega$ (resp. $H'_{\mu},H''_{\mu}\in \Omega$)
such that (5.1) holds for $D_{\mu}$ (resp. $H_{\mu}$). Recalling that $\alpha'(D_{\mu})=D_{\mu}\circ\alpha$ and $\beta'(D_{\mu})=D_{\mu}\circ\beta$, we have
\begin{eqnarray*}
&&[(\alpha'(D_{\mu})(a))_{\lambda+\mu}\alpha^{k+1}\beta^{l}(b)]\\
&=&[(D_{\mu}(\alpha(a)))_{\lambda+\mu}\alpha^{k+1}\beta^{l}(b)]=\alpha([(D_{\mu}(a))_{\lambda+\mu}\alpha^{k}\beta^{l}(b)])\\
&=&\alpha(D''_{\mu}([a_{\lambda}b])-[\alpha^{k}\beta^{l}(a)_{\lambda}D'_{\mu}(b)])\\
&=&\alpha'(D''_{\mu})([a_{\lambda}b])-[\alpha^{k+1}\beta^l(a)_{\lambda}(\alpha'(D'_{\mu})(b))].
\end{eqnarray*}
So $\alpha'(D_{\mu})\in GDer_{\alpha^{k+1}\beta^{l}}(R).$ Similarly, $\beta'(D_{\mu})\in GDer_{\alpha^{k}\beta^{l+1}}(R).$ Furthermore, we need to show
\begin{eqnarray}
[D''_{\mu}H'']_{\theta}([a_{\lambda}b])=[([D_{\mu}H]_{\theta}(a))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]+[\alpha^{k+s}\beta^{l+t}(a)_{\lambda}([D'_{\mu}H']_{\theta}(b))].
\end{eqnarray}
By (4.4), we have
\begin{eqnarray}
[([D_{\mu}H]_{\theta}(a))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]=[(D_{\mu}(H_{\theta-\mu}(a)))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]-[(H_{\theta-\mu}(D_{\mu}(a)))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)].
\end{eqnarray}
By (5.1), we obtain
\begin{eqnarray}
&&[(D_{\mu}(H_{\theta-\mu}(a)))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]\nonumber\\
&=&D''_{\mu}([(H_{\theta-\mu}(a))_{\lambda+\theta-\mu}\alpha^{s}\beta^{t}(b)])-[\alpha^{k}\beta^{l}(H_{\theta-\mu}(a))_{\lambda+\theta-\mu}(D'_{\mu}(\alpha^{s}\beta^{t}(b)))]\nonumber\\
&=&D''_{\mu}(H''_{\theta-\mu}([a_{\lambda}b]))-D''_{\mu}([\alpha^{s}\beta^{t}(a)_{\lambda}(H'_{\theta-\mu}(b))])\\
&&-H''_{\theta-\mu}([\alpha^{k}\beta^{l}(a)_{\lambda}(D'_{\mu}(b))])+[\alpha^{k+s}\beta^{l+t}(a)_{\lambda}(H'_{\theta-\mu}(D'_{\mu}(b)))],\nonumber\\
&&[(H_{\theta-\mu}(D_{\mu}(a)))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]\nonumber\\
&=&H''_{\theta-\mu}([(D_{\mu}(a))_{\lambda+\mu}\alpha^{k}\beta^{l}(b)])-[\alpha^{s}\beta^{t}(D_{\mu}(a))_{\lambda+\mu}(H'_{\theta-\mu}(\alpha^{k}\beta^{l}(b)))]\nonumber\\
&=&H''_{\theta-\mu}(D''_{\mu}([a_{\lambda}b]))-H''_{\theta-\mu}([\alpha^{k}\beta^{l}(a)_{\lambda}(D'_{\mu}(b))])\nonumber\\
&&-D''_{\mu}([\alpha^{s}\beta^{t}(a)_{\lambda}(H'_{\theta-\mu}(b))])+[\alpha^{k+s}\beta^{l+t}(a)_{\lambda}(D'_{\mu}(H'_{\theta-\mu}(b)))].
\end{eqnarray}
Substituting (5.9) and (5.10) into (5.8) gives (5.7). Hence $[D_{\mu}H]\in GDer_{\alpha^{k+s}\beta^{l+t}}(R)[\mu]$, and $GDer(R)$ is a subalgebra of
$\Omega$.
(2) For $D_{1\mu}\in ZDer_{\alpha^k\beta^{l}}(R)$, $D_{2\mu}\in Der_{\alpha^s\beta^{t}}(R)$ and $a,b\in R$, we have
\begin{eqnarray*}
[(\alpha'(D_1)_{\mu}(a))_{\lambda+\mu}\alpha^{k+1}\beta^{l}(b)]=\alpha([(D_{1\mu}(a))_{\lambda+\mu}\alpha^{k}\beta^{l}(b)])=\alpha'(D_1)_{\mu}([a_{\lambda}b])=0,
\end{eqnarray*}
which proves $\alpha'(D_1)\in ZDer_{\alpha^{k+1}\beta^{l}}(R).$ Similarly, $\beta'(D_1)\in ZDer_{\alpha^{k}\beta^{l+1}}(R).$ By (5.5),
\begin{eqnarray*}
&&[D_{1\mu}D_{2}]_{\theta}([a_{\lambda}b])\\
&=&D_{1\mu}(D_{2\theta-\mu}([a_{\lambda}b]))-D_{2\theta-\mu}(D_{1\mu}([a_{\lambda}b]))\\
&=&D_{1\mu}(D_{2\theta-\mu}([a_{\lambda}b]))\\
&=&D_{1\mu}([(D_{2\theta-\mu}(a))_{\lambda+\theta-\mu}\alpha^{s}\beta^{t}(b)]+[\alpha^{s}\beta^{t}(a)_{\lambda}D_{2\theta-\mu}(b)])=0,
\end{eqnarray*}
\begin{eqnarray*}
&&[([D_{1\mu}D_{2}]_{\theta}(a))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]\\
&=&[(D_{1\mu}(D_{2\theta-\mu}(a))-D_{2\theta-\mu}(D_{1\mu}(a)))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]\\
&=&-[(D_{2\theta-\mu}(D_{1\mu}(a)))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]\\
&=&-D_{2\theta-\mu}([(D_{1\mu}(a))_{\lambda+\mu}\alpha^{k}\beta^{l}(b)])+[\alpha^{s}\beta^{t}(D_{1\mu}(a))_{\lambda+\mu}D_{2\theta-\mu}(\alpha^{k}\beta^{l}(b))]\\
&=&0.
\end{eqnarray*}
This shows that $[D_{1\mu}D_{2}]\in ZDer_{\alpha^{k+s}\beta^{l+t}}(R)[\mu].$ Thus $ZDer(R)$ is an ideal of $Der(R)$.
$\square$
\noindent{\bf Lemma 5.3.} Let $(R,\alpha,\beta)$ be a multiplicative BiHom-Lie conformal algebra. Then \\
(1) $[Der(R)_\lambda C(R)]\subseteq C(R)[\lambda]$,\\
(2) $[QDer(R)_\lambda QC(R)]\subseteq QC(R)[\lambda]$,\\
(3) $[QC(R)_\lambda QC(R)]\subseteq QDer(R)[\lambda]$.
{\bf Proof.} Straightforward.
$\square$
\noindent{\bf Theorem 5.4.} Let $(R,\alpha,\beta)$ be a multiplicative BiHom-Lie conformal algebra. Then
\begin{eqnarray*}
GDer(R)=QDer(R)+QC(R).
\end{eqnarray*}
{\bf Proof.} For any $D_{\mu}\in GDer_{\alpha^k\beta^l}(R)$, there exist $D'_{\mu},D''_{\mu}\in \Omega$ such that
\begin{eqnarray}
[(D_{\mu}(a))_{\lambda+\mu}\alpha^{k}\beta^{l}(b)]+[\alpha^{k}\beta^{l}(a)_{\lambda}D'_{\mu}(b)]=D''_{\mu}([a_{\lambda}b]), \forall a,b\in R.
\end{eqnarray}
By (2.2) and (5.11), we get
\begin{eqnarray}
[\alpha^{k}\beta^{l}(b)_{-\partial-\lambda-\mu}D_{\mu}(a)]+[D'_{\mu}(b)_{-\partial-\lambda}\alpha^{k}\beta^{l}(a)]=D''_{\mu}([b_{-\partial-\lambda}a]).
\end{eqnarray}
By (2.1) and setting $\lambda'=-\partial-\lambda-\mu$ in (5.12), we obtain
\begin{eqnarray}
[\alpha^{k}\beta^{l}(b)_{\lambda'}D_{\mu}(a)]+[D'_{\mu}(b)_{\mu+\lambda'}\alpha^{k}\beta^{l}(a)]=D''_{\mu}([b_{\lambda'}a]).
\end{eqnarray}
Interchanging $a$ and $b$ and replacing $\lambda'$ by $\lambda$ in (5.13), we have
\begin{eqnarray}
[\alpha^{k}\beta^{l}(a)_\lambda D_{\mu}(b)]+[D'_{\mu}(a)_{\lambda+\mu}\alpha^{k}\beta^{l}(b)]=D''_{\mu}([a_{\lambda}b]).
\end{eqnarray}
By (5.11) and (5.14), we have
\begin{eqnarray*}
&&[\frac{D_{\mu}+D'_\mu}{2}(a)_{\lambda+\mu}\alpha^{k}\beta^{l}(b)]+[\alpha^{k}\beta^{l}(a)_\lambda\frac{D_{\mu}+D'_\mu}{2}(b)]=D''_{\mu}([a_{\lambda}b]),\\
&&[\frac{D_{\mu}-D'_\mu}{2}(a)_{\lambda+\mu}\alpha^{k}\beta^{l}(b)]-[\alpha^{k}\beta^{l}(a)_\lambda\frac{D_{\mu}-D'_\mu}{2}(b)]=0.
\end{eqnarray*}
It follows that $\frac{D_{\mu}+D'_\mu}{2}\in QDer_{\alpha^{k}\beta^l}(R)$ and $\frac{D_{\mu}-D'_\mu}{2}\in QC_{\alpha^{k}\beta^l}(R)$. Thus
\begin{eqnarray*}
D_\mu=\frac{D_{\mu}+D'_\mu}{2}+\frac{D_{\mu}-D'_\mu}{2}\in QDer_{\alpha^{k}\beta^l}(R)+QC_{\alpha^{k}\beta^l}(R).
\end{eqnarray*}
Therefore, $GDer(R)\subseteq QDer(R)+QC(R)$. The reverse inclusion follows from
(5.6) and Lemma 5.3.
$\square$
\noindent{\bf Theorem 5.5.} Let $(R,\alpha,\beta)$ be a multiplicative BiHom-Lie conformal algebra with $\alpha,\beta$ surjective,
and let $Z(R)$ be the center of $R$. Then $[C(R)_{\lambda}QC(R)]\subseteq Chom(R,Z(R))[\lambda]$. Moreover, if $Z(R)=0$, then
$[C(R)_{\lambda}QC(R)]=0$.
{\bf Proof.} Since $\alpha,\beta$ are surjective, for any $b'\in R$ there exists $b\in R$ such that $b'=\alpha^{k+s}\beta^{l+t}(b)$.
For any $D_{1\mu}\in C_{\alpha^{k}\beta^{l}}(R), D_{2\mu}\in QC_{\alpha^{s}\beta^{t}}(R), a\in R$, by (5.3) and (5.4), we have
\begin{eqnarray*}
&&[([D_{1\mu}D_2]_{\theta}(a))_{\lambda+\theta}b']\\
&=&[([D_{1\mu}D_2]_{\theta}(a))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]\\
&=& [(D_{1\mu}(D_{2\theta-\mu}(a)))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]-[(D_{2\theta-\mu}(D_{1\mu}(a)))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]\\
&=& D_{1\mu}([D_{2\theta-\mu}(a)_{\lambda+\theta-\mu}\alpha^{s}\beta^{t}(b)])-[\alpha^{s}\beta^{t}(D_{1\mu}(a))_{\lambda+\mu}D_{2\theta-\mu}(\alpha^{k}\beta^{l}(b))]\\
&=& D_{1\mu}([D_{2\theta-\mu}(a)_{\lambda+\theta-\mu}\alpha^{s}\beta^{t}(b)])-D_{1\mu}([\alpha^{s}\beta^t(a)_{\lambda}D_{2\theta-\mu}(b)])\\
&=&D_{1\mu}([D_{2\theta-\mu}(a)_{\lambda+\theta-\mu}\alpha^{s}\beta^{t}(b)]-[\alpha^{s}\beta^t(a)_{\lambda}D_{2\theta-\mu}(b)])\\
&=&0.
\end{eqnarray*}
Hence $[D_{1\mu}D_2](a)\in Z(R)[\mu]$ and $[D_{1\mu}D_2] \in Chom(R,Z(R))[\mu]$. If $Z(R)=0$, then $[D_{1\mu}D_2](a)=0$. Thus $[C(R)_{\lambda}QC(R)]=0$.
$\square$
\noindent{\bf Proposition 5.6.} Let $(R,\alpha,\beta)$ be a multiplicative BiHom-Lie conformal algebra with $\alpha,\beta$ surjective. If $Z(R)=0$, then $QC(R)$ is a BiHom-Lie conformal algebra if and only if $[QC(R)_{\lambda}QC(R)]=0$.
{\bf Proof. } $\Rightarrow$ Assume that $QC(R)$ is a BiHom-Lie conformal algebra. Since $\alpha,\beta$ are surjective, for any $b'\in R$ there exists $b\in R$ such that $b'=\alpha^{k+s}\beta^{l+t}(b)$.
For any $D_{1\mu}\in QC_{\alpha^{k}\beta^{l}}(R), D_{2\mu}\in QC_{\alpha^{s}\beta^{t}}(R), a\in R$, by (5.4), we have
\begin{eqnarray}
&&[([D_{1\mu}D_2]_{\theta}(a))_{\lambda+\theta}b']\nonumber\\
&=&[([D_{1\mu}D_2]_{\theta}(a))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]=[\alpha^{k+s}\beta^{l+t}(a)_{\lambda}([D_{1\mu}D_2]_{\theta}(b))].
\end{eqnarray}
By (4.4) and (5.4), we obtain
\begin{eqnarray*}
&&[([D_{1\mu}D_2]_{\theta}(a))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]\nonumber\\
&=& [(D_{1\mu}(D_{2\theta-\mu}(a)))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]-[(D_{2\theta-\mu}(D_{1\mu}(a)))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]\nonumber\\
&=&[\alpha^{k}\beta^{l}(D_{2\theta-\mu}(a))_{\lambda+\theta-\mu}(D_{1\mu}(\alpha^s\beta^t(b)))]-[\alpha^{s}\beta^{t}(D_{1\mu}(a))_{\lambda+\mu}(D_{2\theta-\mu}(\alpha^{k}\beta^{l}(b)))]\nonumber\\
&=&[\alpha^{k+s}\beta^{l+t}(a)_{\lambda}(D_{2\theta-\mu}(D_{1\mu}(b)))]-[\alpha^{k+s}\beta^{l+t}(a)_{\lambda}(D_{1\mu}(D_{2\theta-\mu}(b)))]\nonumber\\
&=& -[\alpha^{k+s}\beta^{l+t}(a)_{\lambda}([D_{1\mu}D_2]_{\theta}(b))].
\end{eqnarray*}
By (5.15), we have
\begin{eqnarray*}
[([D_{1\mu}D_2]_{\theta}(a))_{\lambda+\theta}b']=[([D_{1\mu}D_2]_{\theta}(a))_{\lambda+\theta}\alpha^{k+s}\beta^{l+t}(b)]=0.
\end{eqnarray*}
Thus $[D_{1\mu}D_2]_{\theta}(a)\in Z(R)[\mu]=0$ since $Z(R)=0$. Therefore, $[QC(R)_{\lambda}QC(R)]=0$.
$\Leftarrow$ Straightforward.
$\square$
\begin{center}
{\bf ACKNOWLEDGEMENT}
\end{center}
The work of S. J. Guo is supported by the NSF of China (No. 11761017) and
the Fund of Science and Technology Department of Guizhou Province (No. [2019]1021).
The work of X. H. Zhang is supported by the Project Funded by China Postdoctoral Science Foundation (No. 2018M630768) and
the NSF of Shandong Province (No. ZR2016AQ03).
The work of S. X. Wang is supported by the Outstanding Top-notch Talent Cultivation Project of Anhui Province (No. gxfx2017123)
and the NSF of Anhui Province (No. 1808085MA14).
\renewcommand{\refname}{REFERENCES}
\begin{thebibliography}{99}
\bibitem{Aizawa} N. Aizawa and H. Sato, $q$-deformation of the Virasoro algebra with central extension,
Phys. Lett. B, 256 (1991), 185-190.
\bibitem{Bakalov1999} B. Bakalov, V. Kac and A. Voronov, Cohomology of conformal algebras, Comm.
Math. Phys., 200 (1999), 561-598.
\bibitem{Cheng1997} S. Cheng and V. Kac, Conformal modules, Asian J. Math., 1(1) (1997), 181-193.
\bibitem{Cheng1998} S. Cheng, V. Kac and M. Wakimoto, Extensions of conformal modules, in Topological
Field Theory, Primitive Forms and Related Topics, Progress in Mathematics, Vol. 160
(Birkh\"{a}user, Boston, 1998), pp. 33-57.
\bibitem{D'Andrea1998} A. D'Andrea and V. Kac, Structure theory of finite conformal algebras, Selecta Math.
(N.S.), 4 (1998), 377-418.
\bibitem{Graziani} G. Graziani, A. Makhlouf, C. Menini and F. Panaite, BiHom-associative algebras, BiHom-Lie algebras and BiHom-bialgebras,
Symmetry Integrability and Geometry: Methods and Applications, 11 (2015), 1-34.
\bibitem{Hartwig} J. T. Hartwig, D. Larsson and S. D. Silvestrov, Deformations of Lie algebras using $\sigma$-derivations,
J. Algebra, 295 (2006), 314-361.
\bibitem{Hu} N. Hu, $q$-Witt algebras, $q$-Lie algebras, $q$-holomorph structure and representations,
Algebra Colloq., 6 (1999), 51-70.
\bibitem{Kac98} V. G. Kac, Vertex Algebras for Beginners, Univ. Lect. Ser., Vol. 10 (Amer. Math. Soc., Providence, RI, 1996), second edition 1998.
\bibitem{L08} J. I. Liberati, On conformal bialgebras, J. Algebra, 319
(2008), 2295-2318.
\bibitem{Yuan14} L. Yuan, Hom Gel'fand-Dorfman bialgebras and Hom-Lie conformal algebras, J. Math. Phys., 55
(2014), 043507.
\bibitem{Zhao2016} J. Zhao, L. Yuan and L. Chen, Deformations and generalized derivations of
Hom-Lie conformal algebras, Science China Mathematics, 61 (2018), 797-812.
\end{thebibliography}
\end{document}
\begin{document}
\title{Time-Space Trade-off Algorithms for Triangulating a Simple Polygon\footnote{Work on this paper by B.~A.~was supported in part by NSF Grants CCF-11-17336, CCF-12-18791, and CCF-15-40656, and by grant 2014/170 from the US-Israel Binational Science Foundation.
M.~K.~was partially supported by MEXT KAKENHI grant Nos.~12H00855 and 17K12635.
S.~P.~was supported in part by the Ontario Graduate Scholarship and The Natural Sciences and Engineering Research Council of Canada. A.~v.~R. and M.~R. were supported by JST ERATO Grant Number JPMJER1305, Japan.
An earlier version of this work appeared in the \emph{Proceedings of the 15th Scandinavian Symposium and Workshops on Algorithm Theory}.}}
\begin{abstract}
An $s$-workspace algorithm is an algorithm that has read-only access to the values of the input, write-only access to the output, and only uses $O(s)$ additional words of space. We present a randomized $s$-workspace algorithm for triangulating a simple polygon~$P$ of $n$~vertices that runs in $O(n^2/s+n \log n \log^{5} (n/s))$ expected time using $O(s)$ variables, for any $s \leq n$. In particular, when
$s \leq \frac{n}{\log n\log^{5}\log n}$
the algorithm runs in $O(n^2/s)$ expected time.
\end{abstract}
\section{Introduction}
Triangulation of a simple polygon, often used as a preprocessing step in computer graphics, is performed in a wide range of settings including on embedded systems like the Raspberry Pi or mobile phones. Such systems often run read-only file systems for security reasons and have very limited working memory. An ideal triangulation algorithm for such an environment would allow for a trade-off in performance in time versus working space.
Computer science, and the field of algorithms in particular, generally has two optimization goals: running time and memory usage. In the 1970s there was a strong focus on algorithms that required little memory, as memory was expensive. As memory became cheaper and more widely available, this focus shifted towards optimizing algorithms for their running time, with memory mainly as a secondary constraint.
Nowadays, even though memory is cheap, there are other constraints that limit memory usage. First, there is a vast number of embedded devices that operate on batteries and have to remain small, which means they simply cannot contain a large memory. Second, some data may be read-only, due to hardware constraints (e.g., read-only or write-once DVDs/CDs) or concurrency issues (i.e., to allow many processes to access the database at once).
These memory constraints can all be described in a simple way by the so-called \emph{constrained-workspace} model (see Section~\ref{sec:prelim} for details). Our input is read-only and potentially much larger than our working space, and the output we produce must be written to write-only memory. More precisely, we assume we have a read-only data set of size $n$ and a working space of size $O(s)$, for some user-specified parameter $s$. In this model, the aim is to design an algorithm whose running time decreases as $s$ grows. Such algorithms are called \emph{time-space trade-off} algorithms~\cite{s-mcepc-08}.
\subsection*{Previous Work}
Several models of computation that consider space constraints have been studied in the past (we refer the interested reader to~\cite{k-mca-15} for an overview). In the following we discuss the results related to triangulations. The concept of memory-constrained algorithms attracted renewed attention within the computational geometry community by the work of Asano~\emph{et~al.}\xspace~\cite{amrw-cwagp-10}. One of the algorithms presented in~\cite{amrw-cwagp-10} was for triangulating a set of $n$ points in the plane in $O(n^2)$ time using $O(1)$ variables. More recently, Korman~\emph{et~al.}\xspace~\cite{kmrrss-tstotvd-15} introduced two different time-space trade-off algorithms for triangulating a point set: the first one computes an arbitrary triangulation in $O(n^2/s + n\log^2 n)$ time using $O(s)$ variables. The second is a randomized algorithm that computes the Delaunay triangulation of the given point set in expected $O((n^2/s)\log s + n\log n\log^*n)$ time within the same space bounds.
The above results address triangulating discrete point sets in the plane. The first algorithm in this model for triangulating simple polygons was due to Asano~\emph{et~al.}\xspace~\cite{abbkmrs-mcasp-11} (in fact, the algorithm works for slightly more general inputs: plane straight-line graphs). It runs in $O(n^2)$ time using $O(1)$ variables. The first time-space trade-off for triangulating polygons was provided by Barba~\emph{et~al.}\xspace~\cite{bklss-sttosba-14}. In their work, they describe a general time-space trade-off algorithm that in particular could be used to triangulate monotone polygons. An even faster algorithm (still for monotone polygons) was afterwards found by Asano and Kirkpatrick~\cite{ak-tstanlnp-13}: $O(n\log_sn)$ time using $O(s)$ variables. Despite extensive research on the problem, there was no known time-space trade-off algorithm for general simple polygons. It is worth noting that no lower bounds on the time-space trade-off are known for this problem either.
If we forego space constraints, we can triangulate a simple polygon of $n$ vertices in linear time (using linear space)~\cite{c-tsplt-91}. However, this algorithm is considered difficult to implement and very slow in practice (see, e.g.,~\cite[p.~57]{o_rourke_c}). Alternatively, Hertel and Mehlhorn~\cite{hertel_mehlhorn} provided an algorithm that triangulates a simple polygon of $n$ vertices, $r$ of which are reflex, in~$O(n \log r)$ time. Since our work is of a theoretical nature, we will use Chazelle's triangulation algorithm. However, as the running time of our algorithms is dominated by other terms, we could instead use the algorithm of Hertel and Mehlhorn without affecting the asymptotic performance.
\subsection*{Results}
This paper is structured as follows. In Section~\ref{sec:prelim} we define our model, as well as the problems we study. Our main result on triangulating a simple polygon $P$ with $n$ vertices using only a limited amount of memory can be found in Section~\ref{sec:main}. Our algorithm achieves an expected running time of $O(n^2/s+n \log s \log^{5} (n/s))$ using $O(s)$ variables, for any $s \leq n$. Note that for most values of $s$ (i.e., when $s \leq \frac{n}{\log n \log^{5}\log n}$) the algorithm runs in $O(n^2/s)$ expected time.
Our approach uses a recent result by Har-Peled~\cite{Har-Peled15} as a tool for subdividing $P$ into smaller pieces that are then handled recursively. Once the pieces are small enough to fit into memory, the subproblem can be handed over to the usual algorithm without memory constraints. This divide-and-conquer approach has often been used in the memory-constrained literature, but each time the partition was constructed \emph{ad~hoc}, based on the properties of the problem being solved. We believe that the tool we introduce in this paper is very general and can be used for several problems. As an example, in Section~\ref{sec:extensions} we show how the same approach can be used to compute the \emph{shortest-path tree} from any point $p\in P$, or simply to split $P$ by $\Theta(s)$ pairwise non-crossing diagonals into smaller subpolygons, each with $\Theta(n/s)$ vertices.
\section{Preliminaries}\label{sec:prelim}
In this paper, we utilize the $s$-\emph{workspace} model of computation that is frequently used in the literature (see, for example, \cite{abbkmrs-mcasp-11,bklss-sttosba-14,bkls-cvpufv-13,Har-Peled15}). In this model the input data is given in a read-only array or some similar structure. In our case, the input is a simple polygon $P$; let~$v_1, v_2, \ldots, v_n$ be the vertices of $P$ in clockwise order along its boundary. We assume that, given an index~$i$, in constant time we can access the coordinates of the vertex $v_i$. We also assume that the usual word RAM operations (say, given $i$, $j$, $k$, finding the intersection point of the line passing through vertices $v_i$ and $v_j$ and the horizontal line passing through $v_k$) can be performed in constant time.
In addition to the read-only data, an $s$-workspace algorithm can use $O(s)$ variables during its execution, for some parameter $s$ determined by the user. Implicit memory consumption (such as the stack space needed in recursive algorithms) must be taken into account when determining the size of a workspace. We assume that each variable or pointer is stored in a data word of $\Theta(\log n)$ bits. Thus, equivalently, we can say that an $s$-workspace algorithm uses $O(s\log n)$ bits of storage.
In this model we study the problem of computing a \emph{triangulation} of a simple polygon~$P$, which is a maximal crossing-free straight-line graph whose vertices are the vertices of~$P$ and whose edges lie inside~$P$. Unless $s$ is very large, the triangulation cannot be stored explicitly. Thus, the goal is to report a triangulation of $P$ in a write-only data structure. Once an output value is reported, it cannot be accessed or modified afterwards.
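To make the write-only output discipline concrete, here is a minimal Python sketch (ours, not the paper's): the algorithm may only append to the output, never read it back or modify it, so every triangle must be reported exactly once. The fan triangulation used here is valid only for convex polygons and serves purely as a stand-in for the general algorithm.

```python
class WriteOnlyOutput:
    """Models the write-only output stream of the s-workspace model:
    values can be appended, but the algorithm never reads them back."""
    def __init__(self):
        self._sink = []            # conceptually outside the workspace
    def report(self, item):
        self._sink.append(item)
    # intentionally: no read or modify access for the algorithm

def fan_triangulate(n, out):
    """Report a fan triangulation of a convex polygon with vertices
    0..n-1 (a toy stand-in; the paper handles general simple polygons)."""
    for i in range(1, n - 1):
        out.report((0, i, i + 1))

out = WriteOnlyOutput()
fan_triangulate(5, out)            # a convex pentagon has 3 triangles
```

Note that the algorithm itself keeps only the loop counter in memory; the reported triangles live entirely in the write-only sink.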
In other memory-constrained triangulation algorithms~\cite{abbkmrs-mcasp-11,ak-tstanlnp-13} the output is reported as a list of edges in no particular order, with no information on neighboring edges or faces. Moreover, it is not clear how to modify these algorithms to obtain such information. Our approach has the advantage that, in addition to the list of edges, we can also report the triangles generated, together with the adjacency relationship between the edges and the triangles; see Section~\ref{sec:output} for details.
A vertex of a polygon is \emph{reflex} if its interior angle is larger than $180^\circ$.
Given two points $p,q\in P$, the \emph{geodesic} (or \emph{shortest path}) between them is the path of minimum length that connects $p$ and $q$ and that stays within $P$ (viewing $P$ as a closed set). The length of that path is the \emph{geodesic distance} from $p$ to $q$. It is well known that, for any two points of~$P$, their geodesic $\pi$ always exists and is unique. Such a path is a polygonal chain whose vertices (other than $p$ and $q$) are reflex vertices of $P$. Thus, we often identify $\pi$ with the ordered sequence of reflex vertices traversed by the path from $p$ to $q$. When that sequence is empty (i.e., the geodesic consists of the straight segment $pq$) we say that $p$ \emph{sees} $q$ and vice versa.
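The visibility relation just defined can be tested naively in $O(n)$ time with $O(1)$ extra space. The following Python sketch is our own illustration (all names are ours): $p$ sees $q$ when no polygon edge properly crosses the segment $pq$ and the midpoint of $pq$ lies inside $P$; degenerate contacts through vertices are ignored for simplicity.

```python
def orient(a, b, c):
    """Signed area of the triangle abc (positive if counterclockwise)."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def properly_cross(p, q, a, b):
    """True if segments pq and ab cross in their relative interiors."""
    d1, d2 = orient(p, q, a), orient(p, q, b)
    d3, d4 = orient(a, b, p), orient(a, b, q)
    return d1*d2 < 0 and d3*d4 < 0

def inside(pt, poly):
    """Even-odd ray-casting point-in-polygon test."""
    x, y, n, res = pt[0], pt[1], len(poly), False
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i+1) % n]
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                res = not res
    return res

def sees(p, q, poly):
    """p sees q if pq crosses no edge properly and stays inside poly."""
    n = len(poly)
    if any(properly_cross(p, q, poly[i], poly[(i+1) % n]) for i in range(n)):
        return False
    mid = ((p[0]+q[0]) / 2, (p[1]+q[1]) / 2)
    return inside(mid, poly)

# L-shaped polygon with a single reflex vertex at (1, 1)
L = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
```

In this L-shaped example, two points in the same arm see each other, while points in opposite arms are blocked by the reflex corner, exactly as the geodesic characterization above predicts.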
Our algorithm relies on a recent procedure by Har-Peled~\cite{Har-Peled15} for computing geodesics under memory constraints, which constructs the geodesic between any two points in a simple polygon of $n$ vertices in expected $O(n^2/s+n\log s \log^4 (n/s))$ time using $O(s)$ words of space. Note that this path might not fit in memory, so the edges of the geodesic are reported one by one, in order.
\section{Algorithm}\label{sec:main}
Let $\pi$ be the
geodesic connecting $v_1$ and $\ensuremath{v_{\lfloor n/2 \rfloor}}$. From a high-level perspective,
the algorithm uses the approach of Har-Peled~\cite{Har-Peled15} to
compute $\pi$. We will use the computed edges to subdivide $P$ into smaller problems that can be solved recursively.
We start by introducing some definitions that will help in recording
the portion of the polygon already triangulated.
Vertices $v_1$ and
$\ensuremath{v_{\lfloor n/2 \rfloor}}$ split the boundary of $P$ into two chains. We say $v_i$ is a
\emph{top} vertex if $1 < i < \ensuremath{\lfloor n/2 \rfloor}$ and a \emph{bottom} vertex if
$\ensuremath{\lfloor n/2 \rfloor} < i \leq n$. Top/bottom is the \emph{type} of a vertex and all vertices (except for $v_1$ and $\ensuremath{v_{\lfloor n/2 \rfloor}}$) have exactly one type. A diagonal $c$ is \emph{alternating} if it
connects a top and a bottom vertex or if one of its endpoints is either $v_1$ or $\ensuremath{v_{\lfloor n/2 \rfloor}}$, and \emph{non-alternating} otherwise.
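These definitions translate directly into code. The following Python sketch (our own, using 1-indexed vertices and $h=\lfloor n/2\rfloor$) classifies vertex types and tests whether a diagonal is alternating:

```python
def vertex_type(i, n):
    """Type of vertex v_i (1-indexed): "top" if 1 < i < h,
    "bottom" if h < i <= n, and None for v_1 and v_h."""
    h = n // 2
    if i == 1 or i == h:
        return None
    return "top" if i < h else "bottom"

def is_alternating(i, j, n):
    """A diagonal v_i v_j is alternating if one endpoint is v_1 or
    v_{floor(n/2)}, or if its endpoints have different types."""
    h = n // 2
    if i in (1, h) or j in (1, h):
        return True
    return vertex_type(i, n) != vertex_type(j, n)
```

For instance, with $n=10$ (so $h=5$), the diagonal $v_3v_7$ is alternating while $v_2v_4$ is not.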
We will use diagonals to partition $P$ into two parts. For simplicity of the exposition, given a diagonal $d$, we regard both components of $P\setminus d$ as closed (i.e., the diagonal belongs to both of them). Since any two consecutive vertices of $P$ can see each other, the partition produced by an edge of $P$ is trivial, in the sense that one subpolygon is $P$ and the other one is a line segment.
\begin{observation}\label{obs_split}
Let $c$ be a diagonal of $P$ not incident to $v_1$ or
$\ensuremath{v_{\lfloor n/2 \rfloor}}$. Vertices $v_1$ and $\ensuremath{v_{\lfloor n/2 \rfloor}}$ belong to different components of
$P \setminus c$ if and only if $c$ is an alternating diagonal.
\end{observation}
\begin{corollary}\label{cor_nonalter}
Let $c$ be a non-alternating diagonal of $P$. The component of
$P \setminus c$ that contains neither $v_1$ nor $\ensuremath{v_{\lfloor n/2 \rfloor}}$ has at most
$\ensuremath{\lceil n/2 \rceil}$ vertices.
\end{corollary}
We will use alternating diagonals as a way to remember what part of the polygon has already been triangulated. More specifically, the algorithm will at all times store an alternating diagonal $a_c$. An invariant of our algorithm is that the connected component of $P\setminus a_c$ not containing $\ensuremath{v_{\lfloor n/2 \rfloor}}$ has already been triangulated.
Ideally, $a_c$ would be an edge of $\pi$, the geodesic connecting $v_1$ and $\ensuremath{v_{\lfloor n/2 \rfloor}}$, but this is not always possible. Instead, we guarantee that at least one of the endpoints of $a_c$ is a vertex of $\pi$ that has already been computed in the execution of the shortest-path algorithm.
With these definitions in place, we can give an intuitive description of our algorithm. We start by setting $a_c$ as the degenerate diagonal from $v_1$ to $v_1$. We then use the shortest-path computation procedure of Har-Peled. Our aim is to walk along $\pi$ until we find a new alternating diagonal $a_{\textrm{new}}$. At that moment we pause the execution of the shortest-path algorithm, triangulate the subpolygons of $P$ that have been created (and contain neither $v_1$ nor $\ensuremath{v_{\lfloor n/2 \rfloor}}$) recursively, set $a_c$ to $a_{\textrm{new}}$, and resume the execution of the shortest-path algorithm.
Although our approach is intuitively simple, there are several technical difficulties that must be carefully considered. Ideally, the number of vertices we walk along $\pi$ before finding an alternating diagonal is small and thus they can be stored explicitly. But if we do not find an alternating diagonal on $\pi$ in just a few steps (indeed, $\pi$ may contain no alternating diagonal), we need to use other diagonals. We also need to make sure that the complexity of each recursive subproblem is reduced by a constant fraction, that we never exceed space bounds, and that no part of the triangulation is reported more than once.
Let $v_c$ denote the endpoint of $a_c$ that is on $\pi$ and that is closest to $\ensuremath{v_{\lfloor n/2 \rfloor}}$. Recall that
the subpolygon defined by $a_c$ containing $v_1$ has already been triangulated. Let $w_0, \ldots , w_k$ be the portion of $\pi$ up to the next alternating diagonal. That is, path $\pi$ is of the form $\pi=(v_1, \ldots, v_c=w_0, w_1, \ldots, w_{k}, \ldots, \ensuremath{v_{\lfloor n/2 \rfloor}})$ where $w_1, \ldots, w_{k-1}$ are of the same type as $v_c$, and $w_k$ is of a different type (or $w_k=\ensuremath{v_{\lfloor n/2 \rfloor}}$ if all vertices between $v_c$ and $\ensuremath{v_{\lfloor n/2 \rfloor}}$ are of the same type).
Consider the partition of $P$ induced by $a_c$ and this portion of $\pi$; see Figure~\ref{fig:ChainBetweenAlternationDiagonals2}. Let $P_1$ be the subpolygon induced by $a_c$ that does not contain $\ensuremath{v_{\lfloor n/2 \rfloor}}$. Similarly, let $P_{\ensuremath{\lfloor n/2 \rfloor}}$ be the subpolygon that is induced by the alternating diagonal $w_{k-1}w_k$ and does not contain $v_1$.\footnote{For simplicity of the exposition, the definition of $P_1$ assumes that $\ensuremath{v_{\lfloor n/2 \rfloor}}$ is not an endpoint of $a_c$ (similarly, $v_1$ not an endpoint of $w_{k-1}w_k$ in the definition of $P_{\ensuremath{\lfloor n/2 \rfloor}}$). Each of these conditions is not satisfied once (i.e., at the first and last diagonals of $\pi$), and in those cases the polygons $P_1$ and $P_{\ensuremath{\lfloor n/2 \rfloor}}$ are not properly defined. Whenever this happens we have $k=1$ and a single diagonal that splits $P$ in two. Thus, if $\ensuremath{v_{\lfloor n/2 \rfloor}}\in a_c$ (and thus $P_1$ is undefined), we simply define $P_1$ as the complement $P_{\ensuremath{\lfloor n/2 \rfloor}}$ (similarly, if $v_1\in w_{k-1}w_k$, we define $P_{\ensuremath{\lfloor n/2 \rfloor}}$ as complement of $P_1$). If both subpolygons are undefined simultaneously we assign them arbitrarily.} For any $i<k-1$, we define $Q_i$ as the subpolygon induced by the non-alternating diagonal~$w_iw_{i+1}$ that contains neither $v_1$ nor $\ensuremath{v_{\lfloor n/2 \rfloor}}$. Finally, let $R$ be the remaining component of $P$. Some of these subpolygons may be degenerate and consist only of a line segment (for example, when $w_iw_{i+1}$ is an edge of $P$).
\begin{figure}
\caption{Partitioning $P$ into subpolygons $P_1$, $P_{\ensuremath{\lfloor n/2 \rfloor}}$, $Q_i$, and $R$ by the diagonal $a_c$ and the portion $w_0,\ldots,w_k$ of the geodesic $\pi$.}
\label{fig:ChainBetweenAlternationDiagonals2}
\end{figure}
\begin{lemma}
\label{lem:closeAlternation}
Each of the subpolygons $R$, $Q_1$, $Q_2$, $\ldots$, $Q_{k-2}$ has at most $\ensuremath{\lceil n/2 \rceil}+k$ vertices. Moreover, if $w_k=\ensuremath{v_{\lfloor n/2 \rfloor}}$, then the subpolygon $P_{\ensuremath{\lfloor n/2 \rfloor}}$ has at most $\ensuremath{\lceil n/2 \rceil}$ vertices.
\end{lemma}
\begin{proof}
Subpolygons $Q_i$ are induced by non-alternating diagonals and cannot have more than $\ensuremath{\lceil n/2 \rceil}$ vertices, by Corollary~\ref{cor_nonalter}. The bound for $R$ follows from its definition: the boundary of $R$ comprises the path $w_0 \dots w_k$ and a contiguous portion of $P$ consisting of only top vertices or only bottom vertices. Recall that there are at most $\ensuremath{\lceil n/2 \rceil}$ vertices of each type. Similarly, if $w_k=\ensuremath{v_{\lfloor n/2 \rfloor}}$, subpolygon $P_{\ensuremath{\lfloor n/2 \rfloor}}$ can only have vertices of one type (either only top or only bottom vertices), and thus the bound holds.
\end{proof}
This result allows us to treat the easy case of our algorithm. When $k$ is small (say, a constant), we can pause the shortest-path computation, explicitly store all vertices $w_i$, recursively triangulate $R$ as well as the subpolygons $Q_i$ (for all $i\leq k-2$), update $a_c$ to the edge $w_{k-1}w_k$, and resume the shortest-path algorithm.
Handling the case of large $k$ is more involved. Note that we do not know the value of~$k$ until we find the next alternating diagonal, but we need not compute it directly. Given a parameter $\tau$ related to the workspace allowed for our algorithm, we say that the path is \emph{long} when $k>\tau$. Initially we set $\tau=s$ but the value of this parameter will change as we descend the recursion tree. We say that the distance between two alternating diagonals is \emph{long} whenever we have computed $\tau$ vertices of $\pi$ beyond $v_c$ and they are all of the same type as $v_c$. That is, path $\pi$ is of the form $\pi=(v_1, \ldots, v_c=w_0, w_1, \ldots, w_{\tau}, \ldots \ensuremath{v_{\lfloor n/2 \rfloor}})$ and vertices $w_0, w_1, \ldots w_{\tau}$ have the same type and, in particular, form a convex chain (see Figure~\ref{fig:ChainBetweenAlternationDiagonals2}). Rather than continue walking along $\pi$, we look for a vertex~$u$ of~$P$ that together with $w_{\tau}$ forms an alternating diagonal. Once we have found this diagonal, we have at most $\tau+2$ diagonals ($a_c, w_0w_1, w_1w_2, \ldots, w_{\tau-1}w_{\tau}$, and $uw_{\tau}$) partitioning $P$ into at most $\tau+3$ subpolygons once again: $P_1$ is the part induced by $a_c$ which does not contain~$\ensuremath{v_{\lfloor n/2 \rfloor}}$, $P_{\ensuremath{\lfloor n/2 \rfloor}}$ is the part induced by $uw_{\tau}$ which does not contain~$v_1$, $Q_i$ is the part induced by $w_iw_{i+1}$, which contains neither $v_1$ nor $\ensuremath{v_{\lfloor n/2 \rfloor}}$, and $R$ is the remaining component.
\begin{lemma}
\label{lem:farAlternation}
We can find a vertex $u$ such that $uw_{\tau}$ is an alternating diagonal, in $O(n)$ time using $O(1)$ space. Moreover, each of the subpolygons $R$, $Q_1$, $Q_2$, $\ldots$, $Q_{\tau-2}$ has at most $\ensuremath{\lceil n/2 \rceil}+\tau$ vertices.
\end{lemma}
\begin{figure}
\caption{(left)~After we have walked $\tau$ steps of $\pi$, we can find an alternating diagonal by shooting a ray from $w_{\tau}$. (right)~New cut vertices created at deeper levels of recursion appear immediately before or after the existing cut vertices.}
\label{fig:rayshooting}
\end{figure}
\begin{proof}
Proofs for the size of the subpolygons are identical to those of Lemma~\ref{lem:closeAlternation}. Thus, we focus on how to compute $u$ efficiently.
Without loss of generality, we may assume that the edge $w_{\tau-1}w_{\tau}$ is horizontal. Recall that the chain $w_0, \ldots, w_{\tau}$ is in convex position, thus all of these vertices must lie on one side of the line $\ell$ through $w_{\tau-1}$ and $w_{\tau}$, say below it.
Let $u'$ be the endpoint of $a_c$ other than $v_c$. If $u'$ also lies below $\ell$, we shoot a ray from $w_{\tau}$ towards~$w_{\tau-1}$. Otherwise, we shoot a ray from $w_{\tau}$ towards $u'$. Let $e$ be the first edge that is properly intersected by the ray and let $p_N$ be the endpoint of $e$ of highest $y$-coordinate. Observe that $p_N$ must be on or above $\ell$; see Figure~\ref{fig:rayshooting}~(left).
Ideally, we would like to report $p_N$ as the vertex $u$. However, point $p_N$ need not be visible even though some portion of $e$ is. Whenever this happens, we can use the visibility properties of simple polygons: since $e$ is partially visible, the portion of $P$ that obstructs visibility between $w_{\tau}$ and $p_N$ must cross the segment from $w_{\tau}$ to $p_N$. In particular, there must be one or more reflex vertices in the triangle formed by $w_{\tau}$, $p_N$, and the visible point of $e$ (shaded region of Figure~\ref{fig:rayshooting}~(left)). Among those vertices, the vertex $r$ that maximizes the angle $\angle p_Nw_{\tau}r$ must be visible from $w_\tau$ (see Lemma~1 of \cite{bkls-cvpufv-13}).
We claim that $r$ must be a top vertex: otherwise $\pi$ would need to pass through $r$ to reach $\ensuremath{v_{\lfloor n/2 \rfloor}}$, but since $r$ is above $\ell$, the shortest path from $w_{\tau-1}$ to $r$ does not go through~$w_\tau$. This means that $\pi$ could be made shorter by taking the shortest path from $w_{\tau-1}$ to $r$ instead of going through $w_\tau$.
This contradicts $\pi$ being the shortest path between $v_1$ and $\ensuremath{v_{\lfloor n/2 \rfloor}}$, and thus we conclude that $r$ is a top vertex, as claimed.
As described in Lemma~1 of \cite{bkls-cvpufv-13}, in order to find such a reflex vertex we need to scan~$P$ at most three times, each time storing a constant amount of information: once for finding the edge $e$ and point $p_N$, once more to determine if $p_N$ is visible, and a third time to find $r$ if $p_N$ is not visible.
\end{proof}
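The ray-shooting scan used in the proof can be sketched as a single $O(n)$ pass that keeps only a constant number of variables. The Python sketch below is our own simplification (all names are ours): it returns the closest edge properly crossed by the ray and, for brevity, ignores degenerate hits through vertices.

```python
def ray_shoot(origin, direction, poly):
    """Return (t, edge_index) for the closest proper crossing of the ray
    origin + t*direction (t > 0) with an edge of poly, or None.
    One O(n) scan, O(1) extra variables."""
    ox, oy = origin
    dx, dy = direction
    best = None
    n = len(poly)
    for i in range(n):
        (ax, ay), (bx, by) = poly[i], poly[(i + 1) % n]
        ex, ey = bx - ax, by - ay
        denom = dx * ey - dy * ex          # cross(d, e)
        if abs(denom) < 1e-12:             # ray parallel to edge
            continue
        # solve origin + t*d = a + u*e for (t, u)
        t = ((ax - ox) * ey - (ay - oy) * ex) / denom
        u = ((ax - ox) * dy - (ay - oy) * dx) / denom
        if t > 1e-9 and 1e-9 < u < 1 - 1e-9:   # proper crossing only
            if best is None or t < best[0]:
                best = (t, i)
    return best

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
t, edge = ray_shoot((2, 2), (1, 0), square)   # hits the edge (4,0)-(4,4)
```

The other two scans from the proof (checking whether $p_N$ is visible, and selecting the reflex vertex $r$ maximizing the angle) follow the same single-pass, constant-variable pattern.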
At a high level, our algorithm walks from $v_1$ to $\ensuremath{v_{\lfloor n/2 \rfloor}}$, stopping after walking $\tau$ steps or after finding an alternating diagonal, whichever comes first. This generates several subproblems of smaller complexity that are solved recursively. Once the recursion is done we update $a_c$ (to keep track of the portion of $P$ that has been triangulated), and continue walking along $\pi$. The walking process ends when it reaches $\ensuremath{v_{\lfloor n/2 \rfloor}}$. In this case, in addition to triangulating $R$ and the subpolygons $Q_i$ as usual, we must also triangulate $P_{\ensuremath{\lfloor n/2 \rfloor}}$.
The algorithm at the deeper levels of recursion is almost identical. There are only three minor changes that need to be introduced. We need some base cases to end the recursion. Recall that $\tau$ denotes the amount of space available to the current instance of the problem. Thus, if $\tau$ is comparable to $n$ (say, $10\tau \geq n$), then the whole polygon fits into memory and can be triangulated in linear time~\cite{c-tsplt-91}. Similarly, if $\tau$ is small (say, $\tau \leq 10$), we have run out of space and thus we triangulate $P$ using a constant-workspace algorithm~\cite{abbkmrs-mcasp-11}. In all other cases we continue with the recursive algorithm as usual.
For ease in handling the subproblems, at each step we also designate the vertex that fulfills the role of $v_1$ (i.e., one of the vertices from which the geodesic must be computed). Recall that we have random access to the vertices of the input. Thus, once we know which vertex plays the role of $v_1$, we can find the vertex that plays the role of $\ensuremath{v_{\lfloor n/2 \rfloor}}$ in constant time as well.
\begin{algorithm}
\renewcommand{\algorithmiccomment}[1]{(* #1 *)}
\begin{algorithmic}[1]
\IF[The polygon fits into memory.]{$10\tau \geq n$}
\STATE Triangulate $P$ using Chazelle's algorithm~\cite{c-tsplt-91}
\ELSIF[We ran out of recursion space.]{$\tau \leq 10$}
\STATE Triangulate $P$ using the constant workspace algorithm~\cite{abbkmrs-mcasp-11}
\ELSE[$P$ is large, we will use recursion.]
\STATE $a_c \leftarrow v_1v_1$
\STATE $v_c \leftarrow v_1$
\STATE walked $\leftarrow v_1$ \COMMENT{Variable to keep track of how far we have walked on $\pi$.}
\WHILE{walked $\neq \ensuremath{v_{\lfloor n/2 \rfloor}}$}
\STATE $i \leftarrow 0$ \COMMENT{$i$ counts the number of steps before finding an alternating diagonal.}
\REPEAT
\STATE $i \leftarrow i+1$
\STATE $w_i \leftarrow $ next vertex of $\pi$
\UNTIL{$i=\tau$ \OR $w_{i-1}w_i$ is an alternating diagonal}
\IF{$w_{i-1}w_i$ is an alternating diagonal}
\STATE $u' \leftarrow w_{i-1}$
\STATE $a_{\textrm{new}} \leftarrow w_iw_{i-1}$
\ELSE[We walked too much. Use Lemma~\ref{lem:farAlternation} to partition the problem.]
\STATE $u' \leftarrow$ \textsc{FindAlternatingDiagonal}$(P, a_c, v_c, w_1, \ldots, w_{\tau})$
\STATE $a_{\textrm{new}} \leftarrow u'w_i$
\ENDIF
\STATE \COMMENT{Now we triangulate the subpolygons.}
\STATE Triangulate$(R,u',\tau \cdot \kappa)$
\FOR{$j=0$ \TO $i-2$}
\STATE Triangulate$(Q_j,w_j,\tau \cdot \kappa)$
\ENDFOR
\STATE $a_c \leftarrow a_{\textrm{new}}$
\STATE $v_c \leftarrow w_{i}$
\STATE walked $\leftarrow w_{i}$
\ENDWHILE
\STATE \COMMENT{We reached $\ensuremath{v_{\lfloor n/2 \rfloor}}$. All parts except $P_{\ensuremath{\lfloor n/2 \rfloor}}$ have been triangulated.}
\STATE Triangulate$(P_{\ensuremath{\lfloor n/2 \rfloor}},w_i,\tau\cdot \kappa)$
\ENDIF
\end{algorithmic}
\caption{Pseudocode for Triangulate$(P,v_1,\tau)$ that, given a simple polygon~$P$ with $n$~vertices, a vertex $v_1$ of $P$, and workspace capacity $\tau$, computes a triangulation of $P$ in $O(n^2/\tau+n \log \tau \log^{5} (n/\tau))$ expected time using $O(\tau)$ variables.}
\label{algo_main}
\end{algorithm}
In order to avoid exceeding the space bounds, at each level of the recursion we decrease the value of $\tau$ by a factor of $\kappa<1$. The exact value of the constant $\kappa$ will be determined below. Pseudocode of the recursive algorithm can be found in Algorithm~\ref{algo_main}. Although not explicitly defined in the pseudocode, the procedure \textsc{FindAlternatingDiagonal} computes an alternating diagonal as described in Lemma~\ref{lem:farAlternation}.
\begin{theorem}\label{main_theo}
Let $P$ be a simple polygon of $n$ vertices. For any $s \leq n$ we can compute a
triangulation of~$P$ in $O(n^2/s+n \log s \log^{5} (n/s))$ expected time using $O(s)$
variables. In~particular, when $s \leq\frac{n}{\log n\log^{5}\log n}$ the algorithm runs in $O(n^2/s)$ expected time.
\end{theorem}
In the remainder of the section we prove correctness of our algorithm and analyze its time and space requirements.
\subsection{Correctness}
We maintain the invariant that the current diagonal $a_c$ records the
already triangulated portion of the polygon. Every edge we output is a proper diagonal of $P$ and we recurse on subpolygons created by partitioning by such edges. Thus, we never report
an edge of the triangulation more than once. Hence, in order to show
correctness of the algorithm, it suffices to prove that the recursion
eventually terminates.
During the execution of the algorithm, we recurse on the polygons $Q_i$, $R$, and~$P_{\ensuremath{\lfloor n/2 \rfloor}}$ (the latter only when we have reached $\ensuremath{v_{\lfloor n/2 \rfloor}}$). By Lemma~\ref{lem:closeAlternation} all of these polygons have at most $n/2+\tau$ vertices. Since we only enter this level of recursion when $\tau\leq n/10$ (see lines 1--2 of Algorithm~\ref{algo_main}), each subproblem has at most $6n/10$ vertices,
thereby guaranteeing that the recursion depth is bounded by $O(\log n)$. Note that there are several conditions for stopping the recursion, but only one of them is needed to guarantee $O(\log n)$ depth.
At each level of recursion we use the shortest-path algorithm of Har-Peled. This algorithm needs random access in constant time to the vertices of the polygon. Thus, we must make sure that this property is preserved at all levels of recursion. A simple way to do so would be to explicitly store the polygon in memory at every recursive call, but this may exceed the space bounds of the algorithm.
Instead, we make sure that the subpolygon is described by $O(\tau)$ words. By construction, each subpolygon consists of a single chain of contiguous input vertices of $P$ and at most~$\tau$ additional \emph{cut} vertices (vertices from the geodesics at higher levels). We can represent the portion of $P$ by the indices of the first and last vertex of the chain and explicitly store the indices of all cut vertices. By an appropriate renaming of the indices within the subpolygon, we can make the vertices of the chain appear first, followed by the cut vertices. Thus, when we need to access the $i$th vertex of the subpolygon, we can check if $i$ corresponds to a vertex of the chain or one of the cut vertices and identify the desired vertex in constant time, in either case.
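The indexing scheme just described might look as follows in Python (the class and all names are ours): the subpolygon is represented by two indices into the read-only input plus an explicit list of at most $\tau$ cut vertices, and after the renaming every vertex is retrieved in constant time.

```python
class SubpolygonView:
    """O(tau)-word representation of a subpolygon: a contiguous chain of
    input vertices v_first..v_last plus explicitly stored cut vertices."""
    def __init__(self, P, first, last, cuts):
        self.P = P                       # read-only input array
        self.first = first
        self.cuts = cuts                 # at most tau cut vertices
        self.chain_len = last - first + 1
    def __len__(self):
        return self.chain_len + len(self.cuts)
    def vertex(self, i):
        """Constant-time access: after the renaming, chain vertices come
        first (indices 0..chain_len-1), then the cut vertices."""
        if i < self.chain_len:
            return self.P[self.first + i]
        return self.cuts[i - self.chain_len]

P = [(k, k * k) for k in range(10)]      # stand-in read-only input
view = SubpolygonView(P, first=2, last=6, cuts=[(9.0, 9.0), (8.0, 8.0)])
```

Only the two chain indices and the cut list are stored, so the representation fits in $O(\tau)$ words while still supporting constant-time random access.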
Now, we must show that each recursive call satisfies this property. Clearly this holds for the top level of recursion, where the input polygon is simply $P$ and no cut vertices are needed. At the next level of recursion each subproblem has up to $\tau$ cut vertices and a chain of contiguous input vertices. We ensure that this property is satisfied at lower levels of recursion by an appropriate choice of $v_1$ (the vertex from which we start the path): at each level of recursion we build the next geodesic starting from either the first or last cut vertex. This might create additional cut vertices, but their position is immediately after or before the already existing cut vertices (see Figure~\ref{fig:rayshooting}~(right)).
In this way we guarantee constant-time random access to the vertices of the current instance at all levels of recursion.
\subsection{Time Bounds}\label{sec:time}
We use a two-parameter function $T(\eta,\tau)$ to bound the expected running time of the algorithm at all levels of recursion. The first parameter $\eta$ represents the size of the problem. Specifically, for a polygon of $n$ vertices we set $\eta=n-2$, namely, the number of triangles to be reported. The second parameter $\tau$ gives the space bound for the algorithm. Initially, we have $\tau=s$, but this value decreases by a factor of $\kappa$ at each level of recursion. Recall that $\tau$ is also the workspace limit for the shortest-path algorithm of Har-Peled that we invoke as part of our algorithm, and that $\tau$ is used as well as the limit on the length of the geodesic we explore when looking for an alternating diagonal. The memory usage of both our algorithm and the algorithm of Har-Peled is $O(\tau)$, that is, there are hidden constants. Since we cannot hide constants in big-$O$ notation when solving the recurrence, for readability we assume that all hidden constants are~$1$ and simply write $\tau$.
When $\tau$ becomes really small (say, $\tau \leq 10$) we have run out of allotted space. Thus, we triangulate the polygon using the constant workspace method of Asano \emph{et~al.}\xspace~\cite{abbkmrs-mcasp-11} that runs in $O(\eta^2)$ time. Similarly, if the space is large when compared to the instance size (say, $10 \tau\geq \eta$) the polygon fits in the allowed workspace, hence we use Chazelle's algorithm~\cite{c-tsplt-91} for triangulating it. In both cases we have $T(\eta,\tau) \leq c_{\Delta}(\eta^2/\tau+\eta)$ for some constant $c_{\Delta}>0$.
Otherwise, we partition the problem and solve it recursively. First we bound the time needed to compute the partition. The main tool we use is computing the geodesic between $v_1$ and $\ensuremath{v_{\lfloor n/2 \rfloor}}$. This is done by the algorithm of Har-Peled~\cite{Har-Peled15} which takes $O(\eta^2/\tau + \eta \log \tau \log^4 (\eta/\tau))$ expected time and uses $O(\tau)$ space. Recall that we may pause and resume it often during the execution of our algorithm, but overall we only execute it once, not counting recursive calls.
Another operation that we execute is \textsc{FindAlternatingDiagonal} (i.e., Lemma~\ref{lem:farAlternation}) which takes $O(\eta)$ time and $O(1)$ space. In the worst case, this operation is invoked once for every $\tau$ vertices of $\pi$. Since $\pi$ cannot have more than $\eta$ vertices, the overall time spent in this operation is bounded by $O(\eta^2/\tau+\eta)$. Thus, ignoring the time spent in recursion, the expected running time of the algorithm is $c_\textsc{HP}(\eta^2/\tau+ \eta \log \tau \log^4 (\eta/\tau))$ for some constant $c_\textsc{HP}$, which without loss of generality we assume to be at least $c_{\Delta}$.
We thus obtain a recurrence of the form
\[ T(\eta,\tau) \leq c_\textsc{HP}\left(\frac{\eta^2}{\tau}+ \eta \log \tau \log^4 \frac{\eta}{\tau}\right) + \sum_j T(\eta_j, \tau\kappa).
\]
Recall that the values $\eta_j$ cannot be very large compared to $\eta$. Indeed, each subproblem can have at most a constant fraction $c$ of the vertices of the original one (with the thresholds set in lines 1--4 of Algorithm~\ref{algo_main}, we have $c=6/10$). Thus, each $\eta_j$ satisfies $\eta_j \leq c(\eta+2)-2 \leq c\eta$. Since the subproblems partition the current polygon, we also have $\sum_j \eta_j = \eta$.
We claim that there exists a constant $c_R$, so that, for any $\tau,\eta>0$, $T(\eta,\tau) \leq c_R(\eta^2/\tau+ \eta\log \tau \log^{5} (\eta/\tau))$.
Indeed, when $\tau$ is small or the problem size fits into memory (for our choice of constants, this corresponds to $\tau \leq 10$ or $10\tau \geq \eta$) we have $T(\eta,\tau) \leq c_{\Delta}(\eta^2/\tau+\eta) \leq c_R (\eta^2/\tau+\eta)$ for any value of $c_R$ such that $c_R \geq c_{\Delta}$. Otherwise, we use induction and obtain
\begin{align*}
T(\eta,\tau) &\leq c_{\textrm{HP}}\left(\frac{\eta^2}{\tau}+ \eta\log \tau \log^4 \frac{\eta}{\tau}\right) + \sum_j T(\eta_j,\tau\kappa) \\
&\leq c_{\textrm{HP}}\left(\frac{\eta^2}{\tau}+ \eta\log \tau \log^4 \frac{\eta}{\tau}\right) + \frac{c_R}{\tau\kappa}\sum_j \eta_j^2+ c_R \sum_j \eta_j(\log \tau\kappa)\log^{5}\frac{\eta_j}{\tau\kappa} \\
&\leq \left(c_{\textrm{HP}}\frac{\eta^2}{\tau} + \frac{c_R}{\tau\kappa}\sum_j \eta_j^2\right) + c_{\textrm{HP}}\eta\log \tau \log^4\frac{\eta}{\tau} + c_R\sum_j \eta_j\log \tau\log^{5}\frac{\eta_j}{\tau\kappa}\\
&\leq \left(c_{\textrm{HP}}\frac{\eta^2}{\tau} + \frac{c_R}{\tau\kappa}\sum_j \eta_j^2\right) + c_{\textrm{HP}}\eta\log \tau \log^4 \frac{\eta}{\tau} + c_R\sum_j \eta_j\log \tau\log^{5}\frac{c\eta}{\tau\kappa}\\
&\leq \left(c_{\textrm{HP}}\frac{\eta^2}{\tau} + \frac{c_R}{\tau\kappa}\sum_j \eta_j^2\right) + c_{\textrm{HP}}\eta\log \tau \log^4 \frac{\eta}{\tau} +c_R\eta\log \tau\log^{5}\frac{c\eta}{\tau\kappa}.
\end{align*}
The sum $\sum_j \eta_j^2$ is at most $c\eta \sum_j \eta_j = c\eta^2$, since $\eta_j \leq c\eta$ and $\sum_j \eta_j = \eta$, yielding
\begin{align*}
T(\eta,\tau) &\leq \left(c_{\textrm{HP}}\frac{\eta^2}{\tau} + \frac{c_Rc}{\kappa}\frac{\eta^2}{\tau}\right) + c_{\textrm{HP}}\eta\log \tau \log^4\frac{\eta}{\tau} +c_R\eta\log \tau\log^{5}\frac{c\eta}{\tau\kappa}\\
&\leq \frac{c_R\eta^2}{\tau} + c_{\textrm{HP}}\eta\log \tau \log^4 \frac{\eta}{\tau} +c_R\eta\log \tau\log^{5}\frac{c\eta}{\tau\kappa},
\end{align*}
where the inequality $c_{\textrm{HP}}+\frac{c}{\kappa}c_R \leq c_R$ holds for sufficiently large values of $c_R$ and a value of $\kappa<1$ that is larger than $c$ and sufficiently close to $1$ (say, $c_R=10c_{\textrm{HP}}$ and $\kappa = 9/10$). Now we focus on the last two terms of the inequality. We upper bound $\log^{5}(c\eta/\tau\kappa)$ by $\log^4(\eta/\tau)\log(c\eta/\tau\kappa) =(\log^4(\eta/\tau))(\log(\eta/\tau) - \log(\kappa/c))$ and substitute to obtain
\begin{align*}
T(\eta,\tau) &\leq \frac{c_R\eta^2}{\tau} + c_{\textrm{HP}}\eta\log \tau \log^4 \frac{\eta}{\tau} + \left(c_R\eta\log \tau\log^4\frac{\eta}{\tau}\right)\left(\log\frac{\eta}{\tau} - \log\frac{\kappa}{c}\right)\\
&\leq \frac{c_R\eta^2}{\tau} + \left(\eta\log\tau\log^4\frac{\eta}{\tau}\right)\left(c_{\textrm{HP}} +c_R\log\frac{\eta}{\tau} - c_R\log\frac{\kappa}{c}\right)\\
&\leq \frac{c_R\eta^2}{\tau} +c_R \left(\eta\log \tau\log^{5}\frac{\eta}{\tau}\right)\\
&= c_R \left(\eta^2/\tau +\eta\log \tau\log^{5}\frac{\eta}{\tau}\right),
\end{align*}
as claimed.
Again, the inequality $c_{\textrm{HP}} -c_R\log(\kappa/c)\leq 0$ holds for sufficiently large values of $c_R$, depending on $c_{\textrm{HP}}$, $\kappa$, and $c$.
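As a numerical sanity check of this induction (not part of the proof), the recurrence can be simulated directly. The sketch below uses illustrative constants ($c_{\textrm{HP}}=1$, $c_R=10$, $c=6/10$, $\kappa=9/10$) and an adversarial split of each instance into pieces of sizes $c\eta$ and $(1-c)\eta$, both legal since $1-c\leq c$; the base-case cost $\eta^2/\tau+\eta$ is likewise an assumption made for illustration.

```python
import math

C_HP  = 1.0      # assumed constant for the per-level (Har-Peled) work
C_R   = 10.0     # candidate constant from the proof (c_R = 10 c_HP)
C     = 0.6      # max fraction of vertices in a subproblem
KAPPA = 0.9      # factor by which the space parameter shrinks

def T(eta, tau):
    """Adversarial instance of the recurrence: split into two pieces
    of sizes C*eta and (1-C)*eta at each level."""
    if tau <= 10 or 10 * tau >= eta:                  # base case
        return eta * eta / tau + eta
    work = C_HP * (eta**2 / tau
                   + eta * math.log(tau) * math.log(eta / tau)**4)
    return work + T(C * eta, tau * KAPPA) + T((1 - C) * eta, tau * KAPPA)

def bound(eta, tau):
    """Claimed closed-form upper bound."""
    return C_R * (eta**2 / tau
                  + eta * math.log(tau) * math.log(eta / tau)**5)

assert T(10**4, 100) <= bound(10**4, 100)
assert T(10**5, 100) <= bound(10**5, 100)
```

The assertions pass with a comfortable margin, consistent with the slack in the choice $c_R=10c_{\textrm{HP}}$.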
\subsection{Space Bounds}
\label{sec:space}
We now show that the space bound holds. Recall that we picked a parameter $\tau$ to bound the amount of space we use. Our algorithm may use more than $\tau$ space, but it never exceeds $L \cdot \tau$ (for some large absolute constant $L>0$).
First we count the amount of space needed in recursion. Our algorithm will stop the recursion whenever the problem instance fits into memory or $\tau$ becomes small (in the example we chose, when $\tau \leq 10$). Since the value of $\tau$ decreases by a constant factor at each level of recursion, we will never recurse for more than $\log_{1/\kappa} s=O(\log s)$ levels. Thus, the implicit memory consumption used in recursion does not exceed the space bounds.
Now we bound the size of the workspace needed by the algorithm at level $i$ of the recursion (with the main algorithm invocation being level~$0$) by $O(s\cdot \kappa^i)$. Indeed, this is the threshold of space we receive as input (recall that initially we set $\tau=s$ and that at each level we reduce this value by a factor of $\kappa$). This threshold value is the amount of space for the shortest-path computation algorithm invoked at the current level, as well as the limit on the number of vertices of $\pi$ that are stored explicitly before invoking procedure \textsc{FindAlternatingDiagonal}. Once we have found the new alternating diagonal, the vertices of $\pi$ that were stored explicitly are used to generate the subproblems for the recursive calls.
The space used for storing the intermediate points can be reused after the recursive executions are finished, so overall we conclude that at the $i$th level of recursion the algorithm never uses more than $O(s\cdot \kappa^i)$ space. Since we never have two simultaneously executing recursive calls at the same level, and $\kappa<1$ is a constant, the total amount of space used in the execution of the algorithm is bounded by
\[
O(s) + O(s\cdot \kappa) + O(s\cdot \kappa^2 ) + \ldots = O(s).
\]
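The geometric series above can be checked in one line; the following sketch uses $\kappa=9/10$ as chosen earlier and an arbitrary illustrative value of $s$.

```python
def total_space(s, kappa, levels):
    """Sum of the per-level workspace s * kappa^i over all recursion levels."""
    return sum(s * kappa**i for i in range(levels))

s, kappa = 1000, 0.9
# Even over arbitrarily many levels, the total stays below s/(1-kappa) = O(s).
assert total_space(s, kappa, levels=10**4) <= s / (1 - kappa)
```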
\subsection{Considerations on the output}\label{sec:output}
For simplicity of the explanation, we assumed above that only edges of the triangulation needed to be reported. As mentioned in the introduction, our algorithm can be modified so that it reports the resulting faces (triangles) of the decomposition together with their adjacency relationship.
For example, we could list all the triangles (say, as triples of vertex indices) and for each one we can give its adjacent triangles. Similarly, for each edge (identified by a pair of indices) we can also report the clockwise and counterclockwise neighbor at each endpoint, and so on. Recall that in our computation model the output cannot be modified, so all information about a
triangle should be output at the same time. For example, when we report the first triangle, we need to know the identities of its adjacent triangles, which we have not yet computed.
In order to accomplish this, we require that the space allowance $s$ be at least $\log n$. Recall that at each level of recursion the size of the problem decreases by a constant factor. In particular, if $s \geq \log n$, the algorithm does not run out of recursion space, and line~4 of Algorithm~\ref{algo_main} is never executed.
That is, our algorithm partitions~$P$ into subpolygons~$P'$ until they fit into memory, and each such subpolygon is then triangulated using Chazelle's algorithm~\cite{c-tsplt-91}. Since the resulting triangulation of~$P'$ fits into memory, we can afford to report extra information.
This extra information is explicitly available at the bottom level of each recursion (i.e., within a subpolygon $P'$), so we can report it together with the diagonals of the triangulation. The only information that we may not have available is for the diagonals that separate $P'$ from the rest of the polygon and for the triangles that use these edges. This information will appear in two subpolygons, and the neighboring information has to be coordinated between the two instances.
For this purpose, we slightly alter the triangulation invariant associated with~$a_c$: subpolygon $P_1$ has been triangulated and all information has been reported \emph{except} for the diagonals (and the triangles that use those edges) between the two alternating diagonals. We explicitly store the pertinent adjacency information that has already been computed, and we report it only when at a later time the missing information becomes available.
This modification does not affect the running time or correctness of the algorithm. Thus, it suffices to show that space constraints are not exceeded either.
At any given moment of the operation of the algorithm, we store a constant amount of additional data associated with each diagonal that delimits currently existing subproblems and that we have already recorded. As shown in Section~\ref{sec:space}, the diagonals themselves are stored explicitly and fit into $O(s)$ space. In particular, the additional information does not exceed this constraint either.
\section{Other applications}\label{sec:extensions}
Algorithm~\ref{algo_main} introduces a general approach to solving problems recursively by partitioning~$P$ into subpolygons, each of which has $O(s)$ vertices. We focused on triangulating~$P$, so at the bottom of the recursion we used Chazelle's algorithm~\cite{c-tsplt-91} or Asano \emph{et~al.}\xspace's algorithm~\cite{abbkmrs-mcasp-11}, depending on the available space. However, the same approach can be used for other structures: it suffices to replace the base cases of the recursion (lines 2 and 4 of Algorithm~\ref{algo_main}) with the appropriate algorithms.
As an illustration of other possible applications, we describe the modifications needed for computing the shortest-path tree of a point inside a simple polygon and for partitioning a polygon into $\Theta(s)$ subpolygons, each with $\Theta(n/s)$ vertices. We believe other applications can be obtained using the same strategy.
\subsection{Shortest-Path Tree}
\label{sec:spt}
Given a simple polygon $P$ and a point $p\in P$ (which need not be a vertex of $P$), the \emph{shortest-path tree} of $p$ (denoted by $\SPT(p)=\SPT(p,P)$, see Figure~\ref{fig:ShortestPathtree}, left) is the tree formed by the union of all geodesics from $p$ to vertices of $P$. ElGindy~\cite{e-hdpa-85} and later Guibas \emph{et~al.}\xspace~\cite{ghlst-ltavsppitsp-87} showed how to compute the shortest-path tree in linear time using $O(n)$ space.
In order to use the framework of Algorithm~\ref{algo_main}, we also need an algorithm that computes $\SPT(p)$ using a constant number of variables.
\begin{figure}
\caption{(left) The shortest-path tree for a point $p$, depicted with red dashed segments. (right) In the framework of Algorithm~\ref{algo_main}, the last edge $w_{\tau-1}w_{\tau}$ of the geodesic is extended until it meets the boundary at a virtual vertex $v$, and the segment $w_{\tau}v$ is used to split the polygon.}
\label{fig:ShortestPathtree}
\end{figure}
\begin{lemma}\label{lem_o1wspace}
Let $P$ be a simple polygon with $n$ vertices and let $p$ be any point of $P$ (vertex, boundary, or interior). We can compute $\SPT(p)$ in $O(n^2\log n)$ expected time using $O(1)$ variables.
\end{lemma}
\begin{proof}
We first show a randomized procedure that, given a simple polygon, a source~$q$, and a target $t$ computes the first link in the shortest path~$\sigma$ from~$q$ to~$t$ in expected $O(n\log n)$ time using $O(1)$~space. Our algorithm executes this procedure $n$~times, setting $q$ to be each vertex of $P$ in turn, and reporting the first edge towards~$p$. The union of these segments is $\SPT(p)$.
Thus, it suffices to show how to compute one edge of $\sigma$ efficiently. The constant-workspace shortest-path algorithm of Asano~\emph{et~al.}\xspace~\cite{amrw-cwagp-10} computes the entire shortest path $\sigma$ from $q$ to $t$ in $O(n^2)$ time, but computing a single segment of $\sigma$ may need $\Omega(n^2)$ time. Below we slightly modify their approach to ensure that we do not spend too much time in one step.
Note that we will only need this procedure when $q$ is a vertex of $P$; thus, for simplicity of presentation, we will assume so in the remainder of the proof. The more general case can be handled with only minor modifications. We begin by assuming that the first link of $\sigma$ lies in the interior of a given cone~$C$ with apex~$q$ (see Figure~\ref{fig:step}); initially $C$ is delimited by the directions of the edges incident to~$q$. Let $R_C$ be the set of all reflex vertices of $P$ lying in the interior of~$C$ (we include $t$ in $R_C$ as well if it lies in the interior of~$C$).
\begin{figure}
\caption{Given the cone $C$ (dashed green), we shoot a ray towards a reflex vertex~$r$ in $C$. If $r$ is not visible (left image), the cone $C$ is split into two by the ray (solid and dashed regions). The region that does not contain $t$ can be discarded (the resulting cone will have the ray as one of the new boundary edges). If $r$ is visible (right image) the ray splits $P$ into three components, one of which is hidden (i.e., the only visible portion of the hidden component from $q$ is in the ray itself). If $t$ lies in the hidden region (solid in the figure) we can report $r$ as the first vertex visited on the path from $q$ to $t$. Otherwise, we can shrink $C$ in a similar way as if $r$ were not visible. }
\label{fig:step}
\end{figure}
By our assumption, the first link of $\sigma$ must go towards a point of $R_C$, even if $\sigma$ leaves the cone later. So, if $R_C$ contains no reflex vertex, then $\sigma=qt$, the algorithm returns~$t$, and we are done. Otherwise, we pick a reflex vertex $r\in R_C$ uniformly at random and shoot a ray $\vec{qr}$ from~$q$ towards~$r$. Let $q'$ be its first proper intersection with the boundary of~$P$.
The cone $C$ is split into two cones by the ray~$\vec{qr}$, and the segment $qq'$ splits $P$ into two or three components, depending on whether or not $q$~sees~$r$. We compute the component $P'$ that contains~$t$. The following cases may occur (refer to Figure~\ref{fig:step}).
\begin{description}
\item[$q$ does not see $r$:] Then, $\sigma$ cannot go directly from $q$ to $r$. Moreover, $\sigma$ cannot enter $P\setminus P'$, so its first link must emanate from $q$ into $P'$. This determines the side of $\vec{qr}$ it must lie on.
Thus, we may shrink $C$ to a smaller cone and continue.
\item[$q$ can see $r$:] In this case $qq'$ splits $P$ into three components, two of which contain $q$ on their boundary, and the third one (the \emph{hidden} component) that does not. If $t$ does not lie in the hidden component, then we can shrink $C$ to a smaller cone with the same analysis as above and continue. If $t$ lies in the hidden component,
$qr \subset \sigma$, the algorithm returns~$r$, and we are done.
\end{description}
See Asano \emph{et~al.}\xspace~\cite{amrw-cwagp-10} for a proof of correctness and for the handling of degenerate cases. Overall, in one iteration we either find the desired vertex and terminate, or reduce the size of $R_C$. Each step can be done in linear time (we perform one ray-shooting operation and one point-location operation in constant workspace; each can be done in $O(n)$ time by brute force).
Our algorithm executes a procedure analogous to a randomized binary search on the set of directions to the vertices in $R_C$, bisecting it randomly and recursing on one of the ``halves,'' and terminating (at the latest) when this set is a singleton. Therefore the expected number of iterations is logarithmic and the total expected work required to find the first link of $\sigma$ is $O(n \log n)$.
\end{proof}
Note that we can make the above algorithm deterministic by using selection instead of picking a vertex of $R_C$ at random. This comes at a slight increase in the running time as a function of $s$ (see the detailed trade-off description and analysis in~\cite{bkls-cvpufv-13}).
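The expected-logarithmic number of iterations of this randomized halving is easy to simulate. The sketch below makes the adversarial assumption that the side containing~$t$ is always the larger one, which can only slow convergence; the candidate set size still shrinks by an expected constant factor per step.

```python
import random, math

def iterations_to_singleton(n, rng):
    """Pick a uniformly random pivot and keep the larger side (adversarial),
    counting steps until one candidate direction remains."""
    count = 0
    while n > 1:
        pivot = rng.randrange(n)           # random reflex vertex in R_C
        n = max(pivot, n - 1 - pivot)      # worst case: keep the bigger side
        count += 1
    return count

rng = random.Random(42)
n = 10**5
trials = 200
avg = sum(iterations_to_singleton(n, rng) for _ in range(trials)) / trials
assert avg <= 10 * math.log(n)             # expected O(log n) iterations
```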
Since we now have algorithms for $\Theta(1)$ and for $\Theta(n)$ words of working memory, we can use our general strategy to obtain a trade-off for the entire range of the space parameter~$s$.
\begin{theorem}
Let $P$ be a simple polygon with $n$ vertices and let $p$ be any point of $P$ (vertex, boundary, or interior). For any $s \leq n$ we can compute the shortest-path tree of $p$, $\SPT(p)$, in $O(n^2\log n/s+n\log s\log^{5} (n/s))$ expected time using $O(s)$ variables.
\end{theorem}
\begin{proof}
In order to use the framework of Algorithm~\ref{algo_main}, we first ensure that $p$~is a vertex of the polygon. If $p$ is already a vertex of $P$, we rename the vertices so that $p = v_1$. If $p$ lies in the interior of an edge of $P$, we look for a vertex~$q$ visible from $p$. The segment $pq$ splits $P$ into two subpolygons, and we run Algorithm~\ref{algo_main} on each subpolygon separately, renaming the vertices so that $p=v_1$. Although $pq$ appears in both shortest-path trees, we make sure it is only reported in one of the two subproblems. Finding a visible vertex $q$ can be done in linear time using a constant number of variables as explained in the proof of Lemma~\ref{lem:farAlternation}. Finally, if $p$ lies in the interior of~$P$, we find two visible vertices $q,q'$, again using the approach of Lemma~\ref{lem:farAlternation}. The segments $pq$ and $pq'$ split $P$ into two subpolygons both of which have $p$ as a vertex. As in the boundary case, treat the two subpolygons independently to obtain the overall tree, with $pq$ and $pq'$ reported once. In all cases, we introduce a constant number of modifications to the polygon, so they can be stored explicitly.
Now that $p=v_1$ is a vertex of $P$, we use the overall approach of Algorithm~\ref{algo_main}: compute the shortest path from $v_1$ to $\ensuremath{v_{\lfloor n/2 \rfloor}}$. Alternating diagonals found along the path are again used to generate subproblems, which are solved recursively until we run out of space. At the bottom of the recursion we use a linear-time algorithm for computing $\SPT(p)$ (such as those of ElGindy~\cite{e-hdpa-85} or Guibas \emph{et~al.}\xspace~\cite{ghlst-ltavsppitsp-87}) or Lemma~\ref{lem_o1wspace}, depending on whether or not the remaining polygon fits in memory.
We must slightly modify the way the algorithm finds an alternating diagonal when it has performed $\tau$ steps: we need a diagonal that ensures that both subproblems are independent (in contrast to the triangulation problem, where any diagonal suffices). Instead, we simply extend the edge $w_{\tau-1}w_{\tau}$ until it meets the boundary of~$P$. We declare this intersection point a virtual vertex $v$ (if it is not already a vertex) and use the segment $w_{\tau} v$ to split the polygon (see Figure~\ref{fig:ShortestPathtree}, right). Since $w_{\tau-1} w_{\tau}$ is an edge of the geodesic path, $v_1$ and $\ensuremath{v_{\lfloor n/2 \rfloor}}$ are on opposite sides of $w_{\tau} v$, thus $w_{\tau} v$ is an alternating diagonal.\footnote{Note that it is not properly a diagonal since there might not be a vertex at $v$, but by adding the virtual vertex we can treat it as one. We should remember to ignore it when outputting shortest-path tree edges.}
In each subpolygon we want to compute the shortest-path tree to $v_1$, which may lie outside the current subpolygon. Instead, we will show that in each subpolygon $P'$ there exists a vertex $w$ such that $\SPT(v_1,P)\cap P'=\SPT(w,P')$.
Indeed, the boundary of $P'$ consists of a contiguous portion of the boundary of $P$ and up to $s$ diagonals. Recall that in all cases these diagonals belong to the shortest path from $v_1$ to a boundary point of~$P$. These diagonals form a contiguous portion $\pi'$ of a shortest path to~$v_1$. Let $w$ be the vertex of $\pi'$ closest to $v_1$, and let $q$ be any point in $P'$. Since two shortest paths to $v_1$ cannot cross, the shortest path from $q$ to $v_1$ cannot properly intersect~$\pi'$; thus, once it meets~$\pi'$, it must follow it towards~$v_1$. In particular, it must also pass through $w$, which implies $\SPT(v_1,P)\cap P'=\SPT(w,P')$, as claimed.
That is, when processing a small subpolygon, we can forget about $v_1$ and compute the shortest-path tree to $w$, giving the same structure as in the original problem.
By doing so, we ensure that the recursively split polygons have the same structure as in Algorithm~\ref{algo_main}: a chain of contiguous input vertices and a (small) number of cut vertices stored in memory.
The analysis of space use is identical to that of Algorithm~\ref{algo_main}. We now turn to the running time bound. We claim that, for a suitably chosen constant $c$, it obeys the recurrence
\[
T(\eta,\tau) \leq c \left(\frac{\eta^2}{\tau}\log \frac{\eta}{\tau} + \eta \log \tau \log^4 \frac{\eta}{\tau} + \sum_j T(\eta_j,\tau \kappa)\right),
\]
which differs from the recurrence in Section~\ref{sec:time} in that the constant-space running time algorithm of Lemma~\ref{lem_o1wspace} is slower than its counterpart in Algorithm~\ref{algo_main} by a $\log(\eta/\tau)$ factor. By an entirely analogous analysis, the recurrence solves to
\[
T(\eta,\tau) = O\!\left(\frac{\eta^2 \log \eta}{\tau}+ \eta\log \tau \log^{5} \frac{\eta}{\tau}\right),
\]
concluding the proof of the theorem.
\end{proof}
\subsection{Partitioning $P$ into subpolygons of the same size}
\label{sec:subpolygons}
Asano \emph{et~al.}\xspace~\cite{abbkmrs-mcasp-11} observed that one can use a triangulation algorithm to partition a polygon into pieces of any desired size. Specifically, they showed that in any simple polygon there always exist $\Theta(s)$ non-crossing diagonals that split it into subpolygons with $\Theta(n/s)$ vertices each.
The existence was proven for any value of $s$ and the proof is constructive. However, since no time-space trade-off for triangulating polygons was known at that time, their algorithm would always run in quadratic time regardless of the size of the workspace (see Theorem~5.2 of~\cite{abbkmrs-mcasp-11}). We can now extend this result to obtain a proper time-space trade-off.
\begin{theorem}\label{theo_partiti}
Let $P$ be a simple polygon with $n$ vertices. For any $s\leq n$, we can partition~$P$ with $\Theta(s)$ non-crossing diagonals, so that each resulting subpolygon contains $\Theta(n/s)$ vertices. This partition can be computed in $O(n^2/s+n\log s\log^{6} (n/s))$ expected time using $O(s)$ variables.
\end{theorem}
\begin{proof}
Just as for the shortest-path tree computation, one can modify Algorithm~\ref{algo_main} to partition a polygon into pieces for the entire range of available working space memory values. Alternatively, we can also do it by combining Theorem 5.2 of~\cite{abbkmrs-mcasp-11} with our triangulation algorithm (Theorem~\ref{main_theo}). Below we sketch a proof of the latter approach for completeness, omitting some of the bookkeeping details; refer to \cite{abbkmrs-mcasp-11} for the specifics.
The algorithm makes several scans of the input. At each step we keep a partition of~$P$ into subpolygons $\mathcal{P}=\{P_1, \ldots, P_k\}$; initially $k=1$ and $P_1=P$. Let $t := \lceil n/s \rceil$; our aim is to iteratively cut the polygons of $\mathcal{P}$ into smaller pieces until each has between $t/6$ and $t$ vertices.
In each round, we scan each polygon~$P_i$. The ones with more than $t$ vertices are triangulated. For each edge of the triangulation, we check if it would create a balanced cut (a diagonal of a polygon with $\nu$ vertices makes a \emph{balanced cut} if neither component has fewer than $\nu/6$ vertices).
It is known that such a cut always exists in any triangulation. Once found, we use it to split the current polygon into two. After the $i$th round we have split~$P$ into subpolygons such that each either has the desired size or has at most $(5/6)^i n$ vertices. In each round we triangulate each subpolygon at most once. Moreover, each subpolygon is triangulated independently, so we can bound the running time of the $i$th round by
\begin{align*}
\sum _j \left(\frac{n_j^2}{s}+n_j\log n\log^{5}
\frac{n_j}{s}
\right)
& \leq
\sum _j \frac{n_j^2}{s} + \log n\log^{5}
\frac{n}{s}
\sum_{j} n_j\\
&\leq
\left(\frac{5}{6}\right)^{i-1} n \sum _j\frac{n_j}{s}+ 3n \log n\log^{5}
\frac{n}{s} \\
&\leq
\left(\frac{5}{6}\right)^{i-1} \cdot 3 (n^2/s) + 3n\log n\log^{5} \frac{n}{s},
\end{align*}
where we have used the fact that in each round the subpolygon sizes add up to at most~$3n$.
Summing over all passes and observing that the number of passes is at worst logarithmic in $n/s$, we conclude that the running time of this algorithm is
$O(n^2/s +n\log n\log^{6} (n/s)).$
Regarding space, each triangulation algorithm we invoke uses $O(s)$ space. Since each execution is independent we can reuse the space each time. In addition to that we need to explicitly store the list $\mathcal{P}$. Since all polygons of $\mathcal{P}$ have at least $t/6 \in \Theta(n/s)$ vertices, we never maintain more than $O(s)$ such subpolygons. Thus, the space bounds are also preserved.
\end{proof}
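The round structure of this proof can be sketched as follows. The split into $\lfloor 5p/6\rfloor$ and the remainder models the worst legal balanced cut, and for simplicity the sketch ignores the vertex sharing along diagonals (which is what makes the true per-round total $3n$ rather than $n$).

```python
import math

def partition_rounds(n, s):
    """Round-based splitting: every piece with more than t = ceil(n/s)
    vertices is cut so that the smaller side keeps at least 1/6 of it."""
    t = math.ceil(n / s)
    pieces, rounds = [n], 0
    while any(p > t for p in pieces):
        nxt = []
        for p in pieces:
            if p > t:
                big = 5 * p // 6               # worst legal balanced cut
                nxt += [big, p - big]
            else:
                nxt.append(p)
        pieces, rounds = nxt, rounds + 1
    return pieces, rounds

pieces, rounds = partition_rounds(10**6, 1000)
t = math.ceil(10**6 / 1000)
assert max(pieces) <= t and min(pieces) >= t / 6   # Theta(n/s)-sized pieces
assert rounds <= 6 * math.log(10**6 / t)           # logarithmically many rounds
```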
We note that the shortest-path algorithm of Har-Peled~\cite{Har-Peled15} also partitions $P$ into $O(s)$ pieces (of size $O(n/s)$ each) as part of his preprocessing. However, this is done by introducing Steiner points. Our approach can report the partition implicitly (by giving the indices of the diagonals) and avoids the need for Steiner points.
\section{Acknowledgments}
The authors would like to thank Jean-Fran\c{c}ois Baffier, Man-Kwun Chiu, and Takeshi Tokuyama for valuable discussions that preceded the creation of this paper.
Moreover, we would like to thank Wolfgang Mulzer for pointing out a critical flaw in a preliminary version of the paper, as well as for his help in correcting it.
\end{document}
\begin{document}
\title{Adaptive non-uniform B-spline dictionaries on a compact interval}
\begin{abstract}
Non-uniform B-spline dictionaries on a compact
interval are discussed. For each given partition,
dictionaries of B-spline functions
for the corresponding spline space are constructed.
It is asserted that, by dividing the given partition into subpartitions and
joining together the bases for the concomitant
subspaces, slightly redundant dictionaries of B-spline functions
are obtained. Such dictionaries are proved to
span the spline space
associated to the given partition. The
proposed construction is shown to
be potentially useful for the purpose of sparse signal representation.
With that goal in mind, spline spaces specially adapted to produce
a sparse representation of a given signal are considered.
\end{abstract}
\section{Introduction}
A representation in the form of a linear superposition of elements
of a vector space is said to be {\em sparse} if the number of
elements in the superposition is small, in comparison to the
dimension of the corresponding space. Interest in sparse
representations has increased enormously in recent years, in
large part due to their convenience for signal processing techniques
and the results produced by the theory of Compressed Sensing with
regard to the reconstruction of sparse signals from non-adaptive
measurements \cite{cs1,cs2,cs3,cs4,cs5}. Furthermore, the classical
problem of expressing a signal as a linear superposition of elements
taken from an orthogonal basis has been extended to consider the
problem of expressing a signal as a linear superposition of
elements, called {\em{atoms}}, taken from a redundant set, called
{\em {dictionary}} \cite{Mal98}. The corresponding signal
approximation in terms of highly correlated atoms is said to be {\em
{highly nonlinear}} and has been proved relevant to signal
processing applications. Moreover, a formal mathematical setting for
highly nonlinear approximations is being developed. As a small
sample of relevant literature let us mention
\cite{Dev98,Tem99,GN04}.
In regard to sparse approximations
there are two main problems to address: one concerns
the design of suitable algorithms for finding the sparse approximation,
and the other the construction of the dictionaries endowing
the approximation with the property of sparsity. In this
communication we consider the sparse representation problem for the
large class of signals which are amenable to satisfactory
approximation in spline spaces \cite{Uns99,chuib}. Given a signal, we
have the double goal of a) finding a spline space for approximating
the signal and b) constructing those dictionaries for the space
which are capable of providing a sparse representation of such a
signal. In order to
achieve both aims we first discuss the construction of
dictionaries of B-spline functions for non-uniform partitions, because
the usual choice, the B-spline basis for the space, is not expected to
yield sparse representations.
In a previous publication \cite{Bsplinedic}
a prescription for constructing B-spline
dictionaries on the compact interval was put forward, restricting
considerations to uniform partitions (cardinal spline spaces).
Since our aim entails relaxing this restriction, we are forced to look at the
problem from a different perspective. Here we divide
the partition into subpartitions and construct the dictionary
by joining together the bases for the subspaces associated to each
subpartition. The
resulting dictionary is proved to span the spline space for the
given non-uniform partition. Consequently, the uniform case considered in
\cite{Bsplinedic} arises as a particular case of this general
construction.
The capability of the proposed nonuniform dictionaries
to produce sparse representations is illustrated by a number of
examples.
The letter is organized as follows: Section 2 introduces spline
spaces and gives the necessary definitions. The property of
spline spaces which provides us with the foundations for the
construction of the proposed dictionaries is proven in this
section (cf.\ Theorem 2). For a fixed partition, the actual
construction of non-uniform B-spline dictionaries is discussed in
Section 3. Section 4 addresses the problem of finding the
appropriate partition giving rise to the spline space suitable for
approximating a given signal. In the same section a number of
examples are presented, which illustrate an important feature of
dictionaries for the adapted spaces. Namely, they may render a
very significant gain in the sparseness of the representation of
those signals which are well approximated in the corresponding
space. The conclusions are drawn in Section 5.
\section{Background and notations}
We refer to the fundamental books \cite{schum,chui,boor} for a complete
treatment of splines.
Here we simply introduce the adopted notation and
the basic definitions which are needed for presenting our
results.
\begin{definition}
Given a finite closed interval $[c,d]$ we define
a {\em partition} of $[c,d]$ as the finite set of points
\begin{equation}\label{Delta}
\Delta:=\{x_i\}_{i=0}^{N+1},\ N\in\mathbb{N},\,\,\text{such that} \,\,
c=x_0<x_1<\cdots<x_{N}<x_{N+1}=d.
\end{equation}
We further define $N+1$ subintervals $I_i$, $i=0,\dots,N$, as
$I_i=[x_i,x_{i+1})$, $i=0,\dots,N-1$, and $I_N=[x_N,x_{N+1}]$.
\end{definition}
\begin{definition}\label{splinespace}
Let $\Pi_{m}$ be the
space of polynomials of
degree less than or equal to $m\in\mathbb{N}_0=\mathbb{N}\cup\{0\}$.
Let $m$ be a positive integer and define
\begin{equation}
S_m(\Delta)=\{f\in C^{m-2}[c,d]\ : \ f|_{I_i}\in\Pi_{m-1},\
i=0,\dots,N\},
\end{equation}
where $f|_{I_i}$ indicates the
restriction of the function $f$ to the
interval ${I_i}$.
\end{definition}
The standard result established by the next theorem
is essential for our purpose.
\begin{theorem}\rm(\cite{schum}, p.~111) \label{th:bas}
Let
\begin{equation}
\Delta:=\{x_i\}_{i=0}^{N+1},\,\,
N\in\mathbb{N},\,\,\text{such that} \,\, c=x_0<x_1<\cdots<x_{N}<x_{N+1}=d.
\end{equation}
Then
$$
S_m(\Delta)\,\,=\,\, {\rm span}\{1,x,\ldots,x^{m-1},
(x-x_i)^{m-1}_+,\ i=1,\ldots,N\},
$$
where $(x-x_i)^{m-1}_+=(x-x_i)^{m-1}$ for $x-x_i>0$ and $0$
otherwise.
\end{theorem}
We are now ready to prove the theorem from which our proposal will
naturally arise.
\begin{theorem}
Suppose that $\Delta_1$ and $\Delta_2$ are two partitions of
$[c,d]$. Then
$$S_m(\Delta_1) +S_m(\Delta_2) = S_m(\Delta_1 \cup \Delta_2).$$
\end{theorem}
\begin{proof}
It stems from Theorem 1 and the basic result of linear algebra
establishing that for $A_1$ and $A_2$ two sets such that
$S_1= \Span\{A_1\}$ and $S_2= \Span\{A_2\}$, one has $S_1 +S_2=
\Span\{A_1\cup A_2\}$.
Certainly, from Theorem 1 and for
$$A_1:=\{1,x,\ldots,x^{m-1},
(x-x_i)^{m-1}_+,\ x_i \in \Delta_1\}\quad {\text{and}} \quad
A_2:=\{1,x,\ldots,x^{m-1},\ (x-x_i)^{m-1}_+,\ x_i \in \Delta_2\}$$
we have $S_m(\Delta_1)=\Span\{A_1\}$ and
$S_m(\Delta_2)=\Span\{A_2\}$. Hence
\begin{eqnarray*}
S_m(\Delta_1) + S_m(\Delta_2)&= &\Span\{A_1 \cup A_2\}\\
&= &\Span \{1,x,\ldots,x^{m-1},\ (x-x_i)^{m-1}_+,\ x_i \in \Delta_1
\cup \Delta_2\}
\end{eqnarray*}
so that, using Theorem 1 on the right-hand side, the
proof is concluded.
\end{proof}
The next corollary is a direct consequence of the above theorem.
\begin{corollary}\label{cr:1}
Suppose that $\Delta_j$, $j=1,\ldots,n$, are partitions of $[c,d]$.
Then
$$
S_m(\Delta_1) +\cdots + S_m(\Delta_n)\,\,=\,\,S_m(\cup_{j=1}^n\Delta_j).
$$
\end{corollary}
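As a quick numerical illustration of the corollary (not part of the argument), one can sample the truncated-power generators of Theorem 1 for two partitions of an interval and check that the joint system has rank $m+\#\Delta$, the dimension of $S_m(\Delta_1\cup\Delta_2)$. The knot values below are arbitrary choices made for the sketch.

```python
import numpy as np

m = 4                              # cubic splines (order m)
c, d = 0.0, 4.0
knots1 = [0.5, 1.5, 2.5, 3.5]      # interior knots of Delta_1
knots2 = [1.0, 2.0, 3.0]           # interior knots of Delta_2

def truncated_power_basis(knots, x):
    """Columns: 1, x, ..., x^{m-1}, (x - x_i)_+^{m-1} (Theorem 1)."""
    cols = [x**k for k in range(m)]
    cols += [np.where(x > xi, (x - xi)**(m - 1), 0.0) for xi in knots]
    return np.column_stack(cols)

x = np.linspace(c, d, 400)
A = np.hstack([truncated_power_basis(knots1, x),
               truncated_power_basis(knots2, x)])
union = sorted(set(knots1) | set(knots2))
# Rank of the joint system equals dim S_m(union) = m + #interior knots,
# even though the two generator sets share the m polynomial columns.
assert np.linalg.matrix_rank(A) == m + len(union)
```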
\section{Building B-spline dictionaries}
Let us start by recalling that an
{\em extended partition}
with single inner knots associated with $S_m(\Delta)$ is a
set $\tilde{\Delta}=\{y_i\}_{i=1}^{2m+N}$ such that
$$y_{m+i}=x_i,\,\,i=1,\ldots,N,\,\, x_1<\cdots<x_N$$
and the first and last $m$ points $y_1\leq \cdots \leq y_{m} \leq
c,\quad d \leq y_{m+N+1}\leq \cdots \leq y_{2m+N}$ can be
arbitrarily chosen.
With each fixed extended partition $\tilde{\Delta}$ there is associated a
unique B-spline basis for $S_m(\Delta)$, which we denote as
$\{B_{m,j}\}_{j=1}^{m+N}$. The B-spline $B_{m,j}$ can be defined
by the recursive formulae \cite{schum}:
\begin{eqnarray*}
B_{1,j}(x)&=& \begin{cases}
1, & y_j\leq x<y_{j+1},\\
0, & {\rm otherwise,}
\end{cases} \\
B_{m,j}(x) &=&
\frac{x-y_j}{y_{j+m-1}-y_j}B_{m-1,j}(x)+\frac{y_{j+m}-x}{y_{j+m}-y_{j+1}}B_{m-1,j+1}(x).
\end{eqnarray*}
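For concreteness, the recursion above can be implemented directly and checked against the partition-of-unity property on $[c,d)$. The clamped extended partition used below (endpoints repeated $m$ times) is one admissible choice of the free boundary knots; terms with coincident knots are taken to contribute zero.

```python
def bspline(k, j, y, x):
    """Evaluate B_{k,j}(x) for knot vector y (0-indexed) by the recursion
    above; zero-width knot intervals contribute nothing."""
    if k == 1:
        return 1.0 if y[j] <= x < y[j + 1] else 0.0
    left = right = 0.0
    if y[j + k - 1] > y[j]:
        left = (x - y[j]) / (y[j + k - 1] - y[j]) * bspline(k - 1, j, y, x)
    if y[j + k] > y[j + 1]:
        right = (y[j + k] - x) / (y[j + k] - y[j + 1]) * bspline(k - 1, j + 1, y, x)
    return left + right

# Extended partition for m = 3 on [0, 4] with interior knots 1, 2, 3:
# 2m + N = 9 knots, and m + N = 6 basis functions B_{m,0}, ..., B_{m,5}.
m = 3
y = [0.0] * m + [1.0, 2.0, 3.0] + [4.0] * m
for x in (0.0, 0.7, 1.5, 2.2, 3.9):
    total = sum(bspline(m, j, y, x) for j in range(m + 3))
    assert abs(total - 1.0) < 1e-12          # partition of unity on [c, d)
```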
The following theorem paves the way for the construction of
dictionaries for $S_m(\Delta)$. We use the symbol $\#$ to indicate
the cardinality of a set.
\begin{theorem}\label{th:dic}
Let $\Delta_j$, $j=1,\ldots,n$, be partitions of $[c,d]$ and
$\Delta=\cup_{j=1}^n \Delta_j$. We denote the B-spline basis for
$S_m(\Delta_j)$ as $\{B_{m,k}^{(j)}:k=1,\ldots,m+\#\Delta_j\}$.
Accordingly, a dictionary, ${\mathcal
D}_m(\Delta: \cup_{j=1}^n\Delta_j)$, for $S_m(\Delta)$ can be
constructed as
$$
{\mathcal D}_m(\Delta: \cup_{j=1}^n\Delta_j)\,\, :=\,\,
\cup_{j=1}^n \{B_{m,k}^{(j)}:k=1,\ldots, m+\#\Delta_j \},
$$
so as to satisfy
$$
{\rm span}\{{\mathcal D}_m(\Delta: \cup_{j=1}^n\Delta_j)
\}\,\,=\,\,S_m(\Delta).
$$
When $n=1$, ${\mathcal D}_m(\Delta:\Delta_1)$ reduces to the
B-spline basis of $S_m(\Delta)$.
\end{theorem}
\begin{proof} It follows immediately from Corollary \ref{cr:1}. Indeed,
\begin{eqnarray*}
{\rm span}\{{\mathcal D}_m(\Delta: \cup_{j=1}^n\Delta_j)\}&
=&{\rm span}\{\cup_{j=1}^n
\{B_{m,k}^{(j)}:k=1,\ldots, m+\#\Delta_j \} \}\\
&=&S_m(\Delta_1)+ \cdots+S_m(\Delta_n)= S_m(\cup_{j=1}^n
\Delta_j)=S_m(\Delta).
\end{eqnarray*}
\end{proof}
\begin{remark}
Note that the number of functions in the above defined dictionary is
equal to $n\cdot m+\sum_{j=1}^n\#\Delta_j$, which is larger than
$\dim S_m(\Delta)=m+\#\Delta$. Hence, excluding the trivial case
$n=1$, the dictionary constitutes a redundant dictionary for
$S_m(\Delta)$.
\end{remark}
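A dictionary as in Theorem \ref{th:dic} can be assembled by simply concatenating the per-subpartition B-spline bases. The Python sketch below is our own illustration: the function names are ours, and repeating each endpoint $m$ times is just one admissible choice of the free boundary knots. Each atom is recorded by its extended knot sequence and basis index, so the cardinality count of the remark can be checked directly:

```python
def extended_knots(m, partition):
    """Extended knot sequence for S_m(partition): repeat each endpoint of
    [c, d] m times (one admissible choice of the free boundary knots)."""
    c, d = partition[0], partition[-1]
    return [c] * m + partition[1:-1] + [d] * m

def merged_dictionary(m, subpartitions):
    """Merge the B-spline bases of all subpartitions: each atom is
    recorded as a (knot sequence, basis index) pair."""
    atoms = []
    for part in subpartitions:
        y = extended_knots(m, part)
        for k in range(m + len(part) - 2):   # m + #inner-knots functions
            atoms.append((y, k))
    return atoms
```

For $n=2$ subpartitions with 2 and 1 inner knots and $m=2$, this yields $n\cdot m+\sum_j\#\Delta_j=2\cdot 2+3=7$ atoms, in agreement with the remark.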
According to Theorem \ref{th:dic}, to build a dictionary for
$S_m(\Delta)$ we need to choose $n$ subpartitions $\Delta_j\subset
\Delta$ such that $\cup_{j=1}^n\Delta_j=\Delta$. This gives a great deal of
freedom for the actual construction of a non-uniform B-spline
dictionary. Fig.~1 shows some examples which are produced by
generating a random partition of $[0,4]$ with 6 interior knots.
From an arbitrary partition
$$
\Delta\,\,:=\,\, \{0=x_0<x_1<\cdots <x_{6}<x_{7}=4\},
$$
we generate two subpartitions as
$$
\Delta_1\,\,:=\,\, \{0=x_0<x_1<x_3<x_{5}<x_{7}=4\}, \quad
\Delta_2\,\,:=\,\, \{0=x_0<x_2<x_4<x_{6}<x_{7}=4\}$$ and join
together the B-spline bases for $\Delta_1$ (light lines in the right
graphs of Fig.~1) and $\Delta_2$ (dark lines in the same graphs).
\begin{figure}[!ht]
\begin{center}
\epsfxsize=7cm\epsfbox{2-1.eps}\,\,\,\,\,\,\,\,\, \epsfxsize=7cm\epsfbox{re1-3.eps}\\
\epsfxsize=7cm\epsfbox{1.eps}\,\,\,\,\,\,\,\,\,
\epsfxsize=7cm\epsfbox{re2-3.eps}
\end{center}
\caption{\small{Examples of bases (graphs on the left) and the
corresponding dictionaries (right graphs) for a random partition
$\Delta$ consisting of 6 interior knots. The top graphs correspond to
linear B-splines ($m=2$); the bottom graphs involve cubic B-splines
($m=4$). The top left graph depicts the B-spline basis for
$S_2(\Delta)$ whereas the right one depicts the B-spline dictionary
${\mathcal D}_2(\Delta:\Delta_1 \cup \Delta_2)$ arising by merging the
B-spline bases for $S_2(\Delta_1)$ and for $S_2(\Delta_2)$ (light and
dark lines, respectively). The bottom graphs have the same
description as the top graphs, but correspond to $S_4(\Delta)$ and
${\mathcal D}_4(\Delta:\Delta_1\cup\Delta_2)$.}}
\end{figure}
\begin{remark} As should be expected,
the cardinal B-spline dictionaries introduced in \cite{Bsplinedic}
arise here as particular cases of the proposed construction. In
order to show this we use $\Delta_b$ to denote an equidistant
partition of $[c,d]$ with distance $b$ between adjacent points such
that $(d-c)/b\in \mathbb{Z}$, i.e., $\Delta_b$ consists of the inner knots
$x_j=c+jb,\,j=1,\ldots, (d-c)/b-1$. We also consider the partition
$\Delta_{j_0,b'}$ with knots $x_{j}=c+j_0b+jb'$, where $b'/b\in \mathbb{Z}$
and $j_0$ is a fixed integer in $[0,b'/b-1]$. Since
$\cup_{j_0=0}^{{b'/b-1}} \Delta_{j_0,b'}=\Delta_b$, we can build a
dictionary as
$$
{\mathcal D}_m(\Delta_b: \cup_{j_0=0}^{b'/b-1} \Delta_{j_0,b'})\,\, =\,\,
\cup_{j_0=0}^{{b'/b-1}}\{B_{m,k}^{(j_0)}:k=1,\ldots,
m+\#\Delta_{j_0,b'} \},
$$
where $B_{m,k}^{(j_0)},\,k=1,\ldots,m+\#\Delta_{j_0,b'}$, is the
cardinal B-spline basis for the subspace
determined by the partition $\Delta_{j_0,b'}$. This
yields a cardinal B-spline dictionary as proposed in
\cite{Bsplinedic}.
\end{remark}
\section{Application to sparse signal representation}
Given a signal, $f$ say, we address now the issue of determining a
partition $\Delta$, and subpartitions $\Delta_j,\,j=1,\ldots, n$,
such that: a) $\cup_{j=1}^n\Delta_j=\Delta$ and b) the partitions are
suitable for generating a sparse representation of the
signal at hand. As a first step we propose to tailor the partition
to the signal $f$ by choosing $\Delta$ taking into account
the critical points of the curvature function
of the signal, i.e.,
$$
T\,\,:=\,\,\{t: \left(\frac{f''}{(1+f'^2)^{3/2}}\right)'(t)=0\}.
$$
Usually the entries in $T$ are chosen as the initial knots of
$\Delta$. In order to obtain more knots we apply subdivision
between consecutive knots in $T$, thereby obtaining a partition
$\Delta$ with the desired number of knots.
Because most signals are processed with digital computers, one
normally has to deal with a numerical representation of a signal
in the form of sampling points. Thus, another
problem to be addressed is how to compute the entries in
$T$ from the sequence of points
$f(kh),k=1,\ldots,n$, where $h$ is the step length of the
discretization. The algorithm below outlines a procedure for
accomplishing the task.
\begin{algorithm}
\begin{algorithmic}
\STATE {\bf Input:} the interval $[c,d]$, the discrete signal
$f(c+kh),k=0,\ldots,n$, the subdivision level $l$.
\STATE {\bf Output:} the partition $\Delta$.
\STATE $\Delta=\{ \}$
\FOR { $k=2:n-3$} \FOR {$t=0:2$}
\STATE $df=(f((k+t+1)h)-f((k+t)h))/h$
\STATE $ddf=(f((k+t+1)h)+f((k+t-1)h)-2f((k+t)h))/h^2$
\STATE $c_t=ddf/(1+df^2)^{3/2}$ \ENDFOR
\IF{$|c_0|<|c_1|$ {\rm and} $|c_1|>|c_2|$}
\STATE $\Delta=\Delta \cup \{c+(k+1)h\}$
\ENDIF
\ENDFOR
\STATE{\rm Set} $N\,\,=\,\,\#\Delta$
\STATE {\rm Set} $\Delta\,\,=\,\,\Delta\cup \{c,d \}$
\STATE {\rm Sort the entries in $\Delta$:}
$$
c= x_0<x_1<\cdots <x_{N+1}=d
$$
\FOR{ $k=0:N$}
\STATE $\Delta=\Delta\cup \{x_k+t(x_{k+1}-x_k)/l: t=1,\ldots,l-1\}$
\ENDFOR
\end{algorithmic}
\caption{\small{Computing the partition $\Delta$: $T(f,l)$.}}
\end{algorithm}
According to Theorem \ref{th:dic}, in order to build a dictionary
for $S_m(\Delta)$ we need to choose $n$ subpartitions
$\Delta_j\subset \Delta$ such that $\cup_{j=1}^n\Delta_j=\Delta$. As an
example we suggest a simple method for producing $n$ subpartitions
$\Delta_j\subset \Delta$, which is used in the numerical simulations of
the next section. Considering the partition
$\Delta=\{x_0,x_1,\ldots,x_{N+1}\}$ such that $c=x_0<x_1<\cdots
<x_{N+1}=d$, for each integer $j$ in $[1,n]$ we set
$$
\Delta_j\,\,:=\,\, \{c,d\}\cup \{x_{k} : k\in [1,N] \mbox{ and }
k\!\!\mod n=j-1\};
$$
e.g.\ if $N=10$ and $n=3$, we have $\Delta_1\,\,=\,\, \{c, x_3,
x_6, x_9, d\}$, $\Delta_2\,\,=\,\, \{c, x_1, x_4, x_7, x_{10},
d\}$ and $\Delta_3\,\,=\,\, \{c, x_2, x_5, x_8, d\}$.
It is easy to see that the above defined partitions satisfy
$\cup_{j=1}^n\Delta_j=\Delta$.
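The modular splitting just described is a one-liner in code. A Python sketch (ours; the function name is ours) reproducing the $N=10$, $n=3$ example:

```python
def subpartitions(delta, n):
    """Split the partition delta = [c, x_1, ..., x_N, d] into n
    subpartitions by the residue of the inner-knot index mod n."""
    c, d = delta[0], delta[-1]
    inner = delta[1:-1]
    return [[c] + [x for k, x in enumerate(inner, start=1)
                   if k % n == j - 1] + [d]
            for j in range(1, n + 1)]
```

Taking $c=0$, $x_k=k$ for $k=1,\ldots,10$ and $d=11$ recovers exactly the three subpartitions listed above, and their union is the original partition.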
\subsection{Numerical examples} We produce here three examples
illustrating the potentiality of the proposed dictionaries for
achieving sparse representations by nonlinear techniques. The
signals we consider are the following:
\begin{itemize}
\item A chirp signal, $f_1=\cos(2\pi x^2)$, $ x\in [0, 8],$
plotted in the top left of Fig.~2.
\item A seismic signal $f_2$ plotted in the
top right graph of Fig.~2. This signal was taken
from the WaveLab802 Toolbox. It is
acknowledged there that the signal is distributed throughout
the seismic industry as a test dataset.
\item A cosine function of random phase, $f_3= \cos(8\pi x + \phi(x)),$
where $\phi(x)$ is the piecewise constant function depicted in the
bottom right of Fig.~2; the bottom left graph corresponds to
the signal.
\end{itemize}
\begin{figure}[!ht]
\label{f2}
\begin{center}
\epsfxsize=7.5cm\epsfbox{ff_1.eps}
\epsfxsize=7.5cm\epsfbox{ff_2.eps}\\
\epsfxsize=7.5cm\epsfbox{ff_3.eps}
\epsfxsize=7.5cm\epsfbox{phi3.eps}
\end{center}
\caption{\small{Chirp signal $f_1=\cos(2\pi x^2)$ (top left). Seismic
signal $f_2$ (top right). Cosine signal $f_3=\cos(8\pi x +
\phi(x))$ (bottom left)
for the randomly generated phase $\phi(x)$ plotted in the
bottom right graph. The broken lines, indistinguishable from the
continuous lines representing the test signals, are the resulting
approximations up to the error norm ${\rm{tol}}_i=0.01||f_i||,\,i=1,2,3$.}}
\end{figure}
The three signals are to be approximated up to a
tolerance ${\rm{tol}}_i=0.01||f_i||,\,i=1,2,3$, for the norm of the
approximation error.
We deal with the chirp signal $f_1$ on the interval $[0,8]$ by
discretizing it into $L=2049$ samples and applying Algorithm 1 to
produce the partition $\Delta= T(f_1,9)$. The resulting number of
knots is $1162$, which is enough to approximate
the signal, by a cubic B-spline basis for the space, within the above
specified precision ${\rm{tol}_1}$. A dictionary ${\mathcal
D}_4(\Delta: \cup_{j=1}^{10}\Delta_j)$ for the same space is
constructed by considering 10 subpartitions, which
yield $N_1=1200$ functions.
The signal $f_2$ is a piece of $L=513$ data points. A partition of
cardinality $511$ is obtained as $\Delta= T(f_2,8)$, and the dictionary
of cubic splines we have used arises by considering $3$
subpartitions, which yields a dictionary ${\mathcal
D}_4(\Delta: \cup_{j=1}^{3}\Delta_j)$ of cardinality $N_2= 521$.
The signal $f_3$ is discretized into $L=2049$ samples. The partition
$\Delta= T(f_3,43)$ produces $732$ knots. Using $26$ subpartitions we
build a dictionary ${\mathcal D}_2(\Delta: \cup_{j=1}^{26}\Delta_j)$ of
$N_3=782$ linear B-spline functions.
Denoting by $\alpha^i_n, \,n=1,\ldots,N_i$, the atoms of the
$i$th dictionary, we look now for the subsets of indices $\Gamma_i,
\,i=1,2,3$, of cardinality $K_i, \,i=1,2,3$, providing us with
a sparse representation of the signals. In other words,
we are interested in the approximations
\begin{eqnarray*}
f_i^{K_i}& =& \sum_{n\in\Gamma_i} c_n^i \alpha^i_n,\quad i=1,2,3,
\end{eqnarray*}
such that $|| f_i^{K_i} - f_i|| \le {\rm{tol}}_i,\ i=1,2,3,$ and the values
$K_i, \,i=1,2,3$, are satisfactorily small for the approximation
to be considered sparse. Since the problem of finding
the sparsest solution is intractable, for all the signals we
look for a satisfactory sparse
representation using the same greedy strategy,
which evolves by selecting atoms through stepwise
minimization of the residual error as follows.

i) The atoms are selected one by one according to the Optimized
Orthogonal Matching Pursuit (OOMP) method \cite{oomp} until the
above defined tolerance for the norm of the residual error is
reached.

ii) The previous approximation is improved, without greatly increasing
the computational cost, by a `swapping refinement' which at
each step interchanges one atom of the atomic decomposition with a
dictionary atom, provided that the operation decreases the norm of
the residual error \cite{swapping}.

iii) A Backward-Optimized Orthogonal Matching Pursuit (BOOMP)
method \cite{boomp} is applied to disregard some coefficients of
the atomic decomposition, in order to produce an approximation up
to the error of stage i). The last two steps are repeated until no
further swapping is possible.

Let us stress that, if steps ii) and iii) can be
executed at least once, the above strategy guarantees an improvement
upon the results of OOMP. The gain is
with respect to the number of
atoms involved in the approximation for the given error norm.
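To illustrate the flavour of step i) only, a greedy pursuit loop can be sketched in a few lines of Python. This is plain Matching Pursuit, not the OOMP/swapping/BOOMP machinery described above; the function name is ours, and atoms are assumed to be unit-norm sample vectors:

```python
def matching_pursuit(f, atoms, tol):
    """Greedy sketch of a pursuit loop: repeatedly pick the atom most
    correlated with the residual and subtract its projection."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    residual = list(f)
    chosen, coeffs = [], []
    while dot(residual, residual) ** 0.5 > tol:
        # atom most correlated with the current residual
        n = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[n])
        residual = [r - c * a for r, a in zip(residual, atoms[n])]
        chosen.append(n)
        coeffs.append(c)
    return chosen, coeffs
```

For an orthonormal dictionary this terminates after selecting exactly the atoms with nonzero coefficients; for a redundant dictionary the orthogonalized (OOMP) variant is preferable.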
The described technique is applied to all the non-orthogonal
dictionaries we have considered for comparison with the proposed
approach. The results are shown in Table 1. In the first column we
list the dictionaries to be compared. These are: 1) the spline
basis for the space adapted to the corresponding signal, as proposed
in Sec.~4 (as already mentioned, for signals $f_1$ and $f_2$ we use
cubic B-splines and for signal $f_3$ linear ones); 2) the
dictionary for the same spaces, consisting of functions of
larger support; 3) the orthogonal cosine basis used by the discrete
cosine transform (dct); 4) the semi-orthogonal cardinal Chui-Wang
spline wavelet basis \cite{CW92}; and 5) the Chui-Wang cardinal
spline dictionary for the same space \cite{ARN08}. For signals $f_1$
and $f_2$ we use cubic spline wavelets and for signal $f_3$ linear
spline wavelets.
\begin{table}[!t]
\caption{\small{Comparison of sparsity performance achieved by selecting
atoms from the non-uniform bases and dictionaries for the adapted
spline space (1st and 2nd rows), dct (3rd row), and cardinal wavelet
bases and dictionaries (4th and 5th rows).} \label{tab1}}
\centering\vspace*{5mm}
\begin{tabular}{|l|c|c|c|}\hline
Dictionaries & $K^1$ (signal $f_1$)& $K^2$ (signal $f_2$) &$K^3$ (signal $f_3$)\\
\hline \hline
Non-uniform spline basis &1097 & 322 & 529 \\
Non-uniform spline dictionary &173& 129 & 80 \\
\hline
Discrete cosine transform & 263 & 208 & 669\\
\hline
Cardinal Chui-Wang wavelet basis & 246 & 201 & 97 \\
Cardinal Chui-Wang wavelet dictionary & 174 & 112 & 92 \\
\hline
\end{tabular}
\end{table}
The last three columns of Table 1 display the number of atoms
involved in the atomic decomposition for each test signal and for
each dictionary. These numbers clearly show a remarkable performance
of the approximation produced by the proposed non-uniform B-spline
dictionaries. Notice that, whilst the non-uniform spline space is
adapted to the corresponding signal, only the dictionary for the
space achieves the sparse representation. Moreover,
the performance is superior to that of the Chui-Wang
spline wavelet basis \cite{CW92} even for signal $f_3$, which was specially
selected because, due to the abrupt edges delimiting the smooth
lines, it is very appropriate to be approximated by wavelets.
It is also worth stressing that for signals
$f_1$ and $f_2$ the performance is similar to that of the
cardinal Chui-Wang dictionary, which is known to render a very good
representation for these signals \cite{ARN08}. However, whilst
the Chui-Wang cardinal spline wavelet dictionaries introduced in
\cite{ARN08} are significantly redundant with respect to the corresponding
basis (about twice as large), the non-uniform B-spline dictionaries
introduced here contain only a few more functions than the
basis. Nevertheless, as the examples of this section indicate, the
improvement in the sparseness of the approximation a dictionary
may yield with respect to the B-spline basis is enormous. Still,
there is the issue of establishing how to decide on the number of
subpartitions to be considered. For these numerical examples
the number of subpartitions was fixed as the one producing the best
result when allowing the number of subpartitions to vary within
some range. It was observed that only for signal $f_2$ did the optimum
number of subpartitions produce results significantly better
than all other values. Conversely, for signals $f_1$ and $f_3$
some variations from the optimum number of subpartitions still
produce comparable results.
\section{Conclusions}
Non-uniform B-spline dictionaries for adapted spline spaces have
been introduced. The proposed dictionaries are built by dividing a
given partition into subpartitions and merging the bases for the
concomitant subspaces. The dictionary functions are characterized by
having broader support than the basis functions for the same
space. The uniform B-spline dictionaries proposed in
\cite{Bsplinedic} readily arise here as a particular case.
The capability of the non-uniform B-spline dictionaries to produce
sparse signal representations has been illustrated by recourse to
numerical simulations. Thus, we feel confident that a number of
applications could benefit from this construction; e.g., we
believe it could also be useful in Computer-Aided Geometric Design
(CAGD), for reducing control points and for finding sparse knot
sets of B-spline curves \cite{cagd,cagd1,cagd2}.
\begin{thebibliography}{99}
\bibitem{cs1} D.~Donoho, Compressed sensing,
IEEE Trans. on Information Theory 52 (2006) 1289--1306.
\bibitem{cs2}
E.~Cand\`{e}s, J.~Romberg, Quantitative robust uncertainty principles and
optimally sparse decompositions, Foundations of Comput. Math. 6 (2006)
227--254.
\bibitem{cs3}
E.~Cand\`{e}s, T.~Tao, Near optimal signal recovery from random projections:
Universal encoding strategies?, IEEE Trans. on Information Theory 52 (2006)
5406--5425.
\bibitem{cs4}
R.~Baraniuk, A lecture on compressive sensing, IEEE Signal
Processing Magazine 24(2007), 118-121.
\bibitem{cs5}
Compressive sensing resources, http://www.dsp.ece.rice.edu/cs/. See
the references listed there.
\bibitem{Mal98}
S.~Mallat, A wavelet tour of signal processing, Academic Press, London, 1998.
\bibitem{Dev98}R. A. DeVore, Nonlinear approximation,
{\em{Acta Numer.}}, 51--150 (1998).
\bibitem{Tem99}V. N. Temlyakov, Greedy algorithms
and $M$-term approximation with regard to redundant dictionaries,
Journal of Approximation Theory 98 (1999) 117--145.
\bibitem{GN04}
R.~Gribonval, M.~Nielsen, Nonlinear approximation with dictionaries. {I}.
{D}irect estimates, Journal of Fourier Analysis and Applications 10 (2004)
55--71.
\bibitem{Uns99}M. Unser, Splines. A perfect
fit for signal and image processing,
{\em IEEE Signal Processing Magazine},
22--38 (1999).
\bibitem{chuib}C. K. Chui,
{\it Wavelets: A Mathematical Tool for Signal
Processing}, SIAM, Philadelphia, 1997.
\bibitem{Bsplinedic} M. Andrle and
L. Rebollo-Neira, Cardinal B-spline dictionaries on a compact
interval, Appl. Comput. Harmon. Anal. 18 (2005) 336--346.
\bibitem{schum}
L. L. Schumaker, Spline Functions: Basic Theory,
Wiley, New-York, 1981.
\bibitem{chui}C. K. Chui, Multivariate splines, SIAM,
Philadelphia, 1988.
\bibitem{boor}Carl De Boor, A Practical Guide to Splines, Springer,
New York, 2001.
\bibitem{oomp}
L.~Rebollo-Neira, D.~Lowe, Optimized orthogonal matching pursuit approach, IEEE
Signal Processing Letters 9 (2002) 137--140.
\bibitem{swapping}
M.~Andrle, L.~Rebollo-Neira, A swapping-based refinement of
orthogonal matching pursuit strategies, Signal Processing 86
(2006) 480--495.
\bibitem{boomp}
M.~Andrle, L.~Rebollo-Neira, E.~Sagianos, Backward-optimized orthogonal
matching pursuit approach, IEEE Signal Proc. Let. 11 (2004) 705--708.
\bibitem{CW92}
C.~Chui, J.~Wang, On compactly supported spline wavelets and a duality
principle, Trans. Amer. Math. Soc. 330 (1992) 903--915.
\bibitem{ARN08} M.~Andrle, L.~Rebollo-Neira, From cardinal spline wavelet bases
to highly coherent dictionaries, Journal of Physics A 41 (2008) 172001.
\bibitem{cagd}
Sang-Mook Lee, A. Lynn Abbott, N. C. Clark and P. A. Araman,
Spline curve matching with sparse knot sets: Applications to
deformable shape detection and recognition, the 29th Annual
Conference of the IEEE Industrial Electronics Society, 2003.
\bibitem{cagd1}
H. Yang, W. Wang and J. Sun,
Control point adjustment for
B-spline curve approximation, Computer-Aided Design,
36 (2004) 639--652.
\bibitem{cagd2}
Feng Lu and E. E. Milios, Optimal spline fitting to planar
shape, Signal Processing, 37 (1994) 129--140.
\end{thebibliography}
\end{document}
\begin{document}
UDC 512.5 \thispagestyle{empty}
\begin{center}
\textbf{Double-Layer Potentials for a Generalized Bi-Axially
Symmetric Helmholtz Equation II}
\end{center}
\begin{center}
\textit{Abdumauvlen Berdyshev$^{{1}}$, Anvar Hasanov$^{{2}}$, and
Tuhtasin Ergashev$^{{3}}$ }
\end{center}
$^{{1}}$ Abai Kazakh National Pedagogical University, Almaty,
Kazakhstan, \\
$^{{2,3}}$ Institute of Mathematics, Uzbek Academy of
Sciences, Tashkent, Uzbekistan
\textbf{Abstract:} The double-layer potential plays an important
role in solving boundary value problems for elliptic equations.
All the fundamental solutions of the generalized bi-axially
symmetric Helmholtz equation are known (\textit{Complex Variables
and Elliptic Equations}, \textbf{52}(8), 2007, 673--683), and
the theory of potential was constructed only for the first one
(\textit{Sohag Journal of Mathematics} 2, No. 1, 1--10, 2015).
Here, in this paper, we aim at constructing the theory of double-layer
potentials corresponding to the next fundamental solution. By
using some properties of one of Appell's hypergeometric functions
in two variables, we prove limiting theorems and derive integral
equations concerning the density of double-layer potentials.
\textbf{Keywords:} Singular partial differential equations;
Appell's hypergeometric functions in two variables; Generalized
bi-axially symmetric Helmholtz equation; Degenerate elliptic
equations; Generalized axially symmetric potentials; Double-layer
potentials.
\section{Introduction}
Potential theory has played a paramount role in both analysis and
computation for boundary value problems for elliptic partial
differential equations. Numerous applications can be found in
fracture mechanics, fluid mechanics, elastodynamics,
electromagnetics, and acoustics. Results from potential theory
allow us to represent boundary value problems in integral equation
form. For problems with known Green's functions, an integral
equation formulation leads to powerful numerical approximation
schemes.
The double-layer potential plays an important role in solving
boundary value problems of elliptic equations. The representation
of the solution of the (first) boundary value problem is sought as
a double-layer potential with unknown density, and an application
of a certain property leads to a Fredholm equation of the second
kind for determining that density (see [18] and [29]).
By applying a method of complex analysis (based upon analytic
functions), Gilbert [16] constructed an integral representation of
solutions of the following generalized bi-axially symmetric
Helmholtz equation:
\[
H_{\alpha ,\beta} ^{\lambda} \left( {u} \right) \equiv u_{xx} +
u_{yy} + {\frac{{2\alpha}} {{x}}}u_{x} + {\frac{{2\beta}}
{{y}}}u_{y} - \lambda ^{2}u = 0, \quad \left( {0 < \alpha <
{\frac{{1}}{{2}}};0 < \beta < {\frac{{1}}{{2}}}} \right), \quad
\left( {H_{\alpha ,\beta} ^{\lambda}} \right)
\]
Fundamental solutions of the equation $\left( {H_{\alpha ,\beta}
^{\lambda} } \right)$ were constructed recently (see [19]). In
fact, the fundamental solutions of the equation $\left( {H_{\alpha
,\beta} ^{\lambda}} \right)$ when $\lambda = 0$ can be expressed
in terms of Appell's hypergeometric function in two variables of
the second kind, that is, the Appell function
\[
F_{2} \left( {a;b_{1} ,b_{2} ;c_{1} ,c_{2} ;x,y} \right)
\]
\noindent defined by (see [8, p.~224, Eq.~5.7.1(7)]; see also [4,
p.~14, Eq.~(12)] and [34, p.~23, Eq.~1.3(3)])
\begin{equation}
\label{eq1} F_{2} \left( {a;b_{1} ,b_{2} ;c_{1} ,c_{2} ;x,y}
\right) = {\sum\limits_{m,n = 0}^{\infty} {}} {\frac{{\left( {a}
\right)_{m + n} \left( {b_{1}} \right)_{m} \left( {b_{2}}
\right)_{n}}} {{\left( {c_{1}} \right)_{m} \left( {c_{2}}
\right)_{n} m!n!}}}x^{m}y^{n},
\end{equation}
\noindent where $\left( {a} \right)_{\nu} $ denotes the general
Pochhammer symbol which is defined (for $a,\,\nu \in C\backslash
\{0\})$, in terms of the familiar Gamma function, by
\[
\left( {a} \right)_{\nu} : = {\frac{{\Gamma \left( {a + \nu}
\right)}}{{\Gamma \left( {a} \right)}}} = {\left\{
{{\begin{array}{*{20}c}
{1}
& {(\nu = 0;\,\,\,a \in C\backslash \{0\})}
\\
{a(a + 1)...(a + \nu - 1)}
& {(\nu = n \in N;\,\,\,a \in C\backslash
\{0\}),}
\\
\end{array}}} \right.}
\]
\noindent it being understood conventionally that $(0)_{0} : = 1$
and assumed tacitly that the $\Gamma$-quotient exists.
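For numerical checks, the Pochhammer symbol and the double series (\ref{eq1}) can be evaluated directly by truncation. The following Python sketch is ours (function names included); it is valid only in the convergence region $|x|+|y|<1$ and for parameters for which the $\Gamma$-quotients exist:

```python
from math import factorial

def poch(a, n):
    """Pochhammer symbol (a)_n = a(a+1)...(a+n-1), with (a)_0 = 1."""
    p = 1.0
    for i in range(n):
        p *= a + i
    return p

def appell_f2(a, b1, b2, c1, c2, x, y, terms=40):
    """Truncated double series for Appell's F2, following (1)."""
    s = 0.0
    for m in range(terms):
        for n in range(terms):
            s += (poch(a, m + n) * poch(b1, m) * poch(b2, n) * x**m * y**n
                  / (poch(c1, m) * poch(c2, n) * factorial(m) * factorial(n)))
    return s
```

Setting $y=0$ collapses $F_2$ to the Gauss function ${}_2F_1(a,b_1;c_1;x)$, which provides a convenient sanity check: with $a=b_1=c_1=1$ one recovers the geometric series $1/(1-x)$.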
In the case $\lambda = 0$, the fundamental solutions take the form
\begin{equation}
\label{eq2} q_{1} \left( {x,y;x_{0} ,y_{0}} \right) = k_{1}
\left( {r^{2}} \right)^{ - \alpha - \beta} F_{2} \left( {\alpha +
\beta ;\alpha ,\beta ;2\alpha ,2\beta ;\xi ,\eta} \right),
\end{equation}
\begin{equation}
\label{eq3} q_{2} \left( {x,y;x_{0} ,y_{0}} \right) = k_{2}
\left( {r^{2}} \right)^{\alpha - \beta - 1}x^{1 - 2\alpha}
x_{0}^{1 - 2\alpha} F_{2} \left( {1 - \alpha + \beta ;1 - \alpha
,\beta ;2 - 2\alpha ,2\beta ;\xi ,\eta} \right),
\end{equation}
\begin{equation}
\label{eq4} q_{3} \left( {x,y;x_{0} ,y_{0}} \right) = k_{3}
\left( {r^{2}} \right)^{ - \alpha + \beta - 1}y^{1 - 2\beta}
y_{0}^{1 - 2\beta} F_{2} \left( {1 + \alpha - \beta ;\alpha ,1 -
\beta ;2\alpha ,2 - 2\beta ;\xi ,\eta} \right)
\end{equation}
\noindent and
\begin{equation}
\label{eq5} q_{4} \left( {x,y;x_{0} ,y_{0}} \right) = k_{4}
\left( {r^{2}} \right)^{\alpha + \beta - 2}x^{1 - 2\alpha} y^{1 -
2\beta} x_{0}^{1 - 2\alpha} y_{0}^{1 - 2\beta} F_{2} \left( {2 -
\alpha - \beta ;1 - \alpha ,1 - \beta ;2 - 2\alpha ,2 - 2\beta
;\xi ,\eta} \right),
\end{equation}
where
\begin{equation}
\label{eq6} k_{1} = {\frac{{2^{2\alpha + 2\beta}} }{{4\pi}}
}{\frac{{\Gamma \left( {\alpha} \right)\Gamma \left( {\beta}
\right)\Gamma \left( {\alpha + \beta } \right)}}{{\Gamma \left(
{2\alpha} \right)\Gamma \left( {2\beta} \right)}}},
\end{equation}
\begin{equation}
\label{eq7} k_{2} = {\frac{{2^{2 - 2\alpha + 2\beta}} }{{4\pi}}
}{\frac{{\Gamma \left( {1 - \alpha} \right)\Gamma \left( {\beta}
\right)\Gamma \left( {1 - \alpha + \beta} \right)}}{{\Gamma
\left( {2 - 2\alpha} \right)\Gamma \left( {2\beta} \right)}}},
\end{equation}
\begin{equation}
\label{eq8} k_{3} = {\frac{{2^{2 + 2\alpha - 2\beta}} }{{4\pi}}
}{\frac{{\Gamma \left( {\alpha} \right)\Gamma \left( {1 - \beta}
\right)\Gamma \left( {1 + \alpha - \beta} \right)}}{{\Gamma
\left( {2\alpha} \right)\Gamma \left( {2 - 2\beta} \right)}}},
\end{equation}
\begin{equation}
\label{eq9} k_{4} = {\frac{{2^{4 - 2\alpha - 2\beta}} }{{4\pi}}
}{\frac{{\Gamma \left( {1 - \alpha} \right)\Gamma \left( {1 -
\beta} \right)\Gamma \left( {2 - \alpha - \beta}
\right)}}{{\Gamma \left( {2 - 2\alpha} \right)\Gamma \left( {2 -
2\beta} \right)}}},
\end{equation}
\begin{equation}
\label{eq10} {\left. {{\begin{array}{*{20}c}
{r^{2}}
\\
{r_{1}^{2}}
\\
{r_{2}^{2}}
\\
\end{array}}} \right\}} = \left( {x{\begin{array}{*{20}c}
{ -}
\\
{ +}
\\
{ -}
\\
\end{array}} x_{0}} \right)^{2} + \left( {y{\begin{array}{*{20}c}
{ -}
\\
{ -}
\\
{ +}
\\
\end{array}} y_{0} \,} \right)^{2},
\xi = {\frac{{r^{2} - r_{1}^{2}}} {{r^{2}}}}, \quad \eta = {\frac{{r^{2}
- r_{2}^{2}}} {{r^{2}}}}.
\end{equation}
The fundamental solutions (\ref{eq2})--(\ref{eq5}) possess the
following properties:
\begin{equation}
\label{eq11} {\left. {x^{2\alpha} {\frac{{\partial q_{1} \left(
{x,y;x_{0} ,y_{0}} \right)}}{{\partial x}}}} \right|}_{x = 0} = 0,
\quad {\left. {y^{2\beta} {\frac{{\partial q_{1} \left( {x,y;x_{0}
,y_{0}} \right)}}{{\partial y}}}} \right|}_{y = 0} = 0,
\end{equation}
\begin{equation}
\label{eq12} {\left. {q_{2} \left( {x,y;x_{0} ,y_{0}} \right)}
\right|}_{x = 0} = 0, \quad {\left. {y^{2\beta} {\frac{{\partial
q_{2} \left( {x,y;x_{0} ,y_{0}} \right)}}{{\partial y}}}}
\right|}_{y = 0} = 0,
\end{equation}
\begin{equation}
\label{eq13} {\left. {x^{2\alpha} {\frac{{\partial q_{3} \left(
{x,y;x_{0} ,y_{0}} \right)}}{{\partial x}}}} \right|}_{x = 0} = 0,
\quad {\left. {q_{3} \left( {x,y;x_{0} ,y_{0}} \right)}
\right|}_{y = 0} = 0,
\end{equation}
\begin{equation}
{\left. {q_{4} \left( {x,y;x_{0} ,y_{0}} \right)} \right|}_{x = 0} = 0
\quad {\rm and} \quad
{\left. {q_{4} \left( {x,y;x_{0} ,y_{0}} \right)} \right|}_{y = 0} = 0.
\end{equation}
In the paper [33], using the fundamental solution $q_{1} \left(
{x,y;x_{0} ,y_{0} } \right)$ in the domain defined by
\begin{equation}
\label{eq14} \Omega \subset R_{ +} ^{2} = {\left\{ {\left( {x,y}
\right):\,x > 0,\,y > 0} \right\}},
\end{equation}
\noindent the double-layer potential theory for the equation
$\left( {H_{\alpha ,\beta }^{0}} \right)$ was investigated.
Here, in this publication, we aim at constructing the theory of
double-layer potentials corresponding to the next fundamental
solution $q_{2} \left( {x,y;x_{0} ,y_{0}} \right)$. By using some
properties of one of Appell's hypergeometric functions in two
variables, we prove limiting theorems and derive integral
equations concerning the density of double-layer potentials.
\section{Green's formula}
We begin by considering the following identity:
\begin{equation}
\label{eq15} x^{2\alpha} y^{2\beta} {\left[ {uH_{\alpha ,\beta}
^{0} \left( {v} \right) - vH_{\alpha ,\beta} ^{0} \left( {u}
\right)} \right]} = {\frac{{\partial }}{{\partial x}}}{\left[
{x^{2\alpha} y^{2\beta} \left( {v_{x} u - vu_{x}} \right)}
\right]} + {\frac{{\partial}} {{\partial y}}}{\left[ {x^{2\alpha
}y^{2\beta} \left( {v_{y} u - vu_{y}} \right)} \right]}.
\end{equation}
Integrating both parts of identity (\ref{eq15}) on a domain
$\Omega $ in (\ref{eq14}), and using Green's formula, we find that
\begin{equation}
\label{eq16} {\int\!\!\!\int\limits_{\Omega} {}} x^{2\alpha}
y^{2\beta} {\left[ {uH_{\alpha ,\beta} ^{0} \left( {v} \right) -
vH_{\alpha ,\beta} ^{0} \left( {u} \right)} \right]}dxdy =
{\int\limits_{S} {}} x^{2\alpha} y^{2\beta }u\left( {v_{x} dy -
v_{y} dx} \right) - x^{2\alpha} y^{2\beta} v\left( {u_{x} dy -
u_{y} dx} \right),
\end{equation}
\noindent where $S = \partial \Omega $ is a boundary of the domain
$\Omega $.
If $u\left( {x,y} \right)$ and $v\left( {x,y} \right)$ are
solutions of the equation $\left( {H_{\alpha ,\beta} ^{0}}
\right)$, we find from (\ref{eq16}) that
\begin{equation}
\label{eq17} {\int\limits_{S} {}} x^{2\alpha} y^{2\beta} \left(
{u{\frac{{\partial v}}{{\partial n}}} - v{\frac{{\partial
u}}{{\partial n}}}} \right)ds = 0,
\end{equation}
\noindent where
\begin{equation}
\label{eq18} {\frac{{\partial}} {{\partial n}}} =
{\frac{{dy}}{{ds}}}{\frac{{\partial }}{{\partial x}}} -
{\frac{{dx}}{{ds}}}{\frac{{\partial}} {{\partial y}}}, \quad
{\frac{{dy}}{{ds}}} = \cos \left( {n,x} \right),
{\frac{{dx}}{{ds}}} = - \cos \left( {n,y} \right),
\end{equation}
$n$ being the exterior normal to the curve $S$. We also obtain the following
identity:
\begin{equation}
\label{eq19} {\int\!\!\!\int\limits_{\Omega} {}} x^{2\alpha}
y^{2\beta} {\left[ {u_{x}^{2} + u_{y}^{2}} \right]}dxdy =
{\int\limits_{S} {}} x^{2\alpha} y^{2\beta}
u{\frac{{\partial u}}{{\partial n}}}ds,
\end{equation}
\noindent where $u\left( {x,y} \right)$ is a solution of the
equation $\left( {H_{\alpha ,\beta} ^{0}} \right)$. The special
case of (\ref{eq17}) when $v = 1$ reduces to the following form:
\begin{equation}
\label{eq20} {\int\limits_{S} {}} x^{2\alpha} y^{2\beta}
{\frac{{\partial u}}{{\partial n}}}ds = 0.
\end{equation}
We note from (\ref{eq20}) that the integral of the normal
derivative of a solution of the equation $\left( {H_{\alpha
,\beta} ^{0}} \right)$ with a weight $x^{2\alpha} y^{2\beta} $
along the boundary $S$ of the domain $\Omega $ in (\ref{eq14}) is
equal to zero.
\section{A double layer potential $w^{( {2} )}\left(
{x_{0} ,y_{0}} \right)$}
Let $\Omega $ in (\ref{eq14}) be a domain bounded by intervals
$\left( {0,a} \right)$ and $\left( {0,b} \right)$ of the $x - $
and $y - $ axes, respectively, and a curve $\Gamma $ with the
extremities at points $A( a,0)$ and $B(0,b)$. The parametrical
equations of the curve $\Gamma $ are given by
$x = x(s)$ and $y = y(s)$ ($s \in [0,l])$,
where $l$ denotes the length of $\Gamma $. We assume the following
properties of the curve $\Gamma $:
(i) The functions $x = x( {s} )$ and $y = y( {s})$ have continuous
derivatives $x'( {s} )$ and $y'( {s} )$ on the segment ${\left[
{0,l} \right]}$, which do not vanish simultaneously;

(ii) The second derivatives $x''\left( {s} \right)$ and $y''\left(
{s} \right)$ satisfy a H\"{o}lder condition on ${\left[ {0,l}
\right]}$;
(iii) In some neighborhoods of points $A\left( {a,0} \right)$ and
$B\left( {0,b} \right)$, the following conditions are satisfied:
\begin{equation} \label{eq001}
{\left| {{\frac{{dx}}{{ds}}}} \right|} \le cy^{1 + \varepsilon} \left( {s}
\right), \quad {\left| {{\frac{{dy}}{{ds}}}} \right|} \le cx^{1 +
\varepsilon }\left( {s} \right), \quad 0 < \varepsilon < 1, \quad c =
{\rm const},
\end{equation}
$\left( {x,y} \right)$ being the coordinates of a variable point on the curve
$\Gamma $.
Consider the following integral
\begin{equation}
\label{eq21} w^{\left( {2} \right)}\left( {x_{0} ,y_{0}} \right)
= {\int\limits_{0}^{l} {x^{2\alpha} y^{2\beta}} } \mu _{2} \left(
{s} \right){\frac{{\partial q_{2} \left( {x,y;x_{0} ,y_{0}}
\right)}}{{\partial n}}}ds
\end{equation}
\noindent where the density $\mu _{2} \left( {s} \right) \in
C{\left[ {0,\,l} \right]}$ and $q_{2} $ is given in (\ref{eq3}).
We call the integral (\ref{eq21}) \textit{a double-layer}
\textit{potential} with density $\mu _{2} \left( {s} \right)$.
We now investigate some properties of the double-layer potential
$w^{\left( {2} \right)}\left( {x_{0} ,y_{0}} \right)$ with
density $\mu _{2} \left( {s} \right)$.
\textbf{Lemma 1.} \textit{The following formula holds true:}
\begin{equation}
\label{eq22} w_{1}^{\left( {2} \right)} \left( {x_{0} ,y_{0}}
\right) = {\left\{ {{\begin{array}{*{20}c}
{i\left( {x_{0} ,y_{0}} \right) - 1}
& {\left( {\left( {x_{0}
,y_{0}} \right) \in \Omega} \right)}
\\
{i\left( {x_{0} ,y_{0}} \right) - {\frac{{1}}{{2}}}}
& {\left(
{\left( {x_{0} ,y_{0}} \right) \in \Gamma} \right)}
\\
{i\left( {x_{0} ,y_{0}} \right)}
& {\left( {\left( {x_{0} ,y_{0}}
\right) \notin \bar {\Omega} \,} \right),}
\\
\end{array}}} \right.}
\end{equation}
\textit{where} $w_{1}^{\left( {2} \right)}$ \textit{denotes the
potential} (\ref{eq21}) \textit{with density} $\mu _{2} \left( {s}
\right) \equiv 1$\textit{, the domain} $\Omega $\textit{ and the
curve} $\Gamma $\textit{ are as described in this section, and}
$\bar {\Omega} : = \Omega \cup \Gamma $;
\[
i\left( {x_{0} ,y_{0}} \right) = k_{2} \left( {1 - 2\alpha}
\right)x_{0}^{1 - 2\alpha} {\int\limits_{0}^{b} {}} y^{2\beta}
\left( {x_{0}^{2} + (y - y_{0} )^{2}} \right)^{\alpha - \beta -
1}F\left( {1 - \alpha + \beta ,\beta ;2\beta ;{\frac{{ - 4yy_{0}}}
{{x_{0}^{2} + (y - y_{0} )^{2}}}}} \right)dy.
\]
\textit{Proof.}
\textbf{Case 1.} When $\left( {x_{0} ,y_{0}} \right) \in \Omega
$, we cut a disk bounded by a circle $C_{\rho} $ of small radius
$\rho $ centered at $\left( {x_{0} ,y_{0}} \right)$ out of the
domain $\Omega $, and denote the remaining part by $\Omega ^{\rho
}$. The function $q_{2} \left(
{x,y;x_{0} ,y_{0}} \right)$ in (\ref{eq3}) is a regular solution
of the equation $\left( {H_{\alpha ,\beta} ^{0}} \right)$ in the
domain $\Omega ^{\rho} $. Using the following derivative formula
for Appell's hypergeometric function $F_{2}$ ([32], p. 19, (20)):
\begin{equation}
\label{eq23} {\frac{{\partial ^{m + n}F_{2} \left( {a;b_{1} ,b_{2}
;c_{1} ,c_{2} ;x,y} \right)}}{{\partial x^{m}\partial y^{n}}}} =
{\frac{{\left( {a} \right)_{m + n} \left( {b_{1}} \right)_{m}
\left( {b_{2}} \right)_{n}}} {{\left( {c_{1} } \right)_{m} \left(
{c_{2}} \right)_{n}}} }F_{2} \left( {a + m + n;b_{1} + m,b_{2} +
n;c_{1} + m,c_{2} + n;x,y} \right)
\end{equation}
we have
\[
\begin{array}{l}
{\frac{{\partial q_{2} \left( {x,y;x_{0} ,y_{0}} \right)}}{{\partial x}}}
= (1 - 2\alpha )k_{2} \left( {r^{2}} \right)^{\alpha - \beta -
1}x^{ - 2\alpha} x_{0}^{1 - 2\alpha} F_{2} \left( {1 - \alpha +
\beta ;1 - \alpha
,\beta ;2 - 2\alpha ,2\beta ;\xi ,\eta} \right) \\
+ 2(\alpha - \beta - 1)k_{2} \left( {r^{2}} \right)^{\alpha - \beta - 2}(x
- x_{0} )x^{1 - 2\alpha} x_{0}^{1 - 2\alpha} F_{2} \left( {1 -
\alpha +
\beta ;1 - \alpha ,\beta ;2 - 2\alpha ,2\beta ;\xi ,\eta} \right) \\
\end{array}
\]
\begin{equation}
\label{eq24}
\begin{array}{l}
- k_{2} \left( {r^{2}} \right)^{\alpha - \beta - 1}x^{1 - 2\alpha} x_{0}^{1
- 2\alpha} {\frac{{(1 - \alpha + \beta )(1 - \alpha )}}{{2 -
2\alpha }}}{\frac{{4x_{0}}} {{r^{2}}}}F_{2} \left( {2 - \alpha +
\beta ;2 - \alpha
,\beta ;3 - 2\alpha ,2\beta ;\xi ,\eta} \right) \\
- 2k_{2} \left( {r^{2}} \right)^{\alpha - \beta - 2}x^{1 - 2\alpha
}x_{0}^{1 - 2\alpha} (x - x_{0} ){\left[ {{\frac{{(1 - \alpha +
\beta )(1 - \alpha )}}{{2 - 2\alpha}} }\xi F_{2} \left( {2 -
\alpha + \beta ;2 - \alpha
,\beta ;3 - 2\alpha ,2\beta ;\xi ,\eta} \right)} \right.} \\
+ {\left. {{\frac{{(1 - \alpha + \beta )\beta}} {{2\beta}} }\eta F_{2}
\left( {2 - \alpha + \beta ;1 - \alpha ,1 + \beta ;2 - 2\alpha ,1
+ 2\beta
;\xi ,\eta} \right)} \right]} \\
\end{array}
\end{equation}
By applying the following known contiguous relation (see [32], p.
21):
\begin{equation}
\label{eq25}
\begin{array}{l}
{\frac{{b_{1}}} {{c_{1}}} }xF_{2} \left( {a + 1;b_{1} + 1,b_{2} ;c_{1} +
1,c_{2} ;x,y} \right) + {\frac{{b_{2}}} {{c_{2}}} }yF_{2} \left(
{a +
1;b_{1} ,b_{2} + 1;c_{1} ,c_{2} + 1;x,y} \right) \\
= F_{2} \left( {a + 1;b_{1} ,b_{2} ;c_{1} ,c_{2} ;x,y} \right) - F_{2}
\left( {a;b_{1} ,b_{2} ;c_{1} ,c_{2} ;x,y} \right), \\
\end{array}
\end{equation}
\noindent to (\ref{eq24}), we obtain
\begin{equation}
\label{eq26}
\begin{array}{l}
{\frac{{\partial q_{2} \left( {x,y;x_{0} ,y_{0}} \right)}}{{\partial x}}}
= (1 - 2\alpha )k_{2} \left( {r^{2}} \right)^{\alpha - \beta -
1}x^{ - 2\alpha} x_{0}^{1 - 2\alpha} F_{2} \left( {1 - \alpha +
\beta ;1 - \alpha
,\beta ;2 - 2\alpha ,2\beta ;\xi ,\eta} \right) \\
- 2(1 - \alpha + \beta )k_{2} \left( {r^{2}} \right)^{\alpha - \beta -
2}x^{1 - 2\alpha} x_{0}^{2 - 2\alpha} F_{2} \left( {2 - \alpha +
\beta ;2 -
\alpha ,\beta ;3 - 2\alpha ,2\beta ;\xi ,\eta} \right) \\
- 2(1 - \alpha + \beta )k_{2} \left( {r^{2}} \right)^{\alpha - \beta -
2}x^{1 - 2\alpha} x_{0}^{1 - 2\alpha} (x - x_{0} )F_{2} \left( {2
- \alpha
+ \beta ;1 - \alpha ,\beta ;2 - 2\alpha ,2\beta ;\xi ,\eta} \right) \\
\end{array}
\end{equation}
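As a numerical sanity check, the contiguous relation (\ref{eq25}) used in this step can be verified by truncating the double series that defines Appell's $F_{2}$ (convergent for $|x| + |y| < 1$); the sketch below uses arbitrary test parameters, not values from the text.

```python
# Numerical sanity check of the contiguous relation for Appell's F2.
# F2 is implemented as a truncated double series, valid for |x| + |y| < 1.
# All parameter values below are arbitrary test values.
from math import gamma

def poch(q, k):
    """Pochhammer symbol (q)_k for q > 0."""
    return gamma(q + k) / gamma(q)

def F2(a, b1, b2, c1, c2, x, y, N=80):
    """Appell F2 via its double series, truncated at total order N."""
    s = 0.0
    for m in range(N):
        for n in range(N - m):
            s += (poch(a, m + n) * poch(b1, m) * poch(b2, n)
                  / (poch(c1, m) * poch(c2, n) * gamma(m + 1) * gamma(n + 1))
                  * x**m * y**n)
    return s

a, b1, b2, c1, c2, x, y = 0.5, 0.4, 0.7, 1.3, 1.1, 0.2, 0.25

# Left- and right-hand sides of the contiguous relation.
lhs = (b1 / c1 * x * F2(a + 1, b1 + 1, b2, c1 + 1, c2, x, y)
       + b2 / c2 * y * F2(a + 1, b1, b2 + 1, c1, c2 + 1, x, y))
rhs = F2(a + 1, b1, b2, c1, c2, x, y) - F2(a, b1, b2, c1, c2, x, y)
print(abs(lhs - rhs) < 1e-10)
```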
Similarly, we find that
$$
{\frac{{\partial q_{2} \left( {x,y;x_{0} ,y_{0}}
\right)}}{{\partial y}}} = - 2(1 - \alpha + \beta )k_{2} \left(
{r^{2}} \right)^{\alpha - \beta - 2}x^{1 - 2\alpha} x_{0}^{1 -
2\alpha} y_{0} F_{2} \left( {2 - \alpha + \beta ;1 - \alpha ,1 +
\beta ;2 - 2\alpha ,1 + 2\beta ;\xi ,\eta} \right)
$$
\begin{equation}
\label{eq27}
- 2(1 - \alpha + \beta )k_{2} \left( {r^{2}} \right)^{\alpha - \beta -
2}x^{1 - 2\alpha} x_{0}^{1 - 2\alpha} (y - y_{0} )F_{2} \left( {2
- \alpha + \beta ;1 - \alpha ,\beta ;2 - 2\alpha ,2\beta ;\xi
,\eta} \right)
\end{equation}
Thus, with the help of (\ref{eq26}) and (\ref{eq27}), it follows
from (\ref{eq3}) and (\ref{eq18}) that
\begin{equation}
\label{eq28}
\begin{array}{l}
{\frac{{\partial q_{2} \left( {x,y;x_{0} ,y_{0}} \right)}}{{\partial n}}}
= \\
= - (1 - \alpha + \beta )k_{2} \left( {r^{2}} \right)^{\alpha - \beta -
1}x^{1 - 2\alpha} x_{0}^{1 - 2\alpha} F_{2} \left( {2 - \alpha +
\beta ;1 - \alpha ,\beta ;2 - 2\alpha ,2\beta ;\xi ,\eta}
\right){\frac{{\partial
}}{{\partial n}}}{\left[ {\ln r^{2}} \right]} \\
- 2k_{2} (1 - \alpha + \beta )\left( {r^{2}} \right)^{\alpha - \beta -
2}x^{1 - 2\alpha} x_{0}^{2 - 2\alpha} F_{2} \left( {2 - \alpha +
\beta ;2 - \alpha ,\beta ;3 - 2\alpha ,2\beta ;\xi ,\eta}
\right){\frac{{dy}}{{ds}}}
\\
+ 2k_{2} (1 - \alpha + \beta )\left( {r^{2}} \right)^{\alpha - \beta -
2}x^{1 - 2\alpha} x_{0}^{1 - 2\alpha} y_{0} F_{2} \left( {2 -
\alpha + \beta ;1 - \alpha ,1 + \beta ;2 - 2\alpha ,1 + 2\beta
;\xi ,\eta}
\right){\frac{{dx}}{{ds}}} \\
+ (1 - 2\alpha )k_{2} \left( {r^{2}} \right)^{\alpha - \beta - 1}x^{ -
2\alpha} x_{0}^{1 - 2\alpha} F_{2} \left( {1 - \alpha + \beta ;1
- \alpha
,\beta ;2 - 2\alpha ,2\beta ;\xi ,\eta} \right){\frac{{dy}}{{ds}}} \\
\end{array}
\end{equation}
Applying (\ref{eq20}) and considering identity (\ref{eq12}), we
get the following formula:
\begin{equation}
\label{eq29} w_{1}^{\left( {2} \right)} \left( {x_{0} ,y_{0}}
\right) = {\mathop {\lim }\limits_{\rho \to 0}}
{\int\limits_{C_{\rho}} {}} x^{2\alpha} y^{2\beta
}{\frac{{\partial q_{2} \left( {x,y;x_{0} ,y_{0}}
\right)}}{{\partial n}}}ds + {\int\limits_{0}^{b} {y^{2\beta}
{\left. {{\left[ {x^{2\alpha }{\frac{{\partial q_{2} \left(
{x,y;x_{0} ,y_{0}} \right)}}{{\partial n}}}} \right]}}
\right|}_{x = 0} ds}} .
\end{equation}
Substituting from (\ref{eq28}) into (\ref{eq29}), we find that
\begin{equation}
\label{eq30}
w_1^{(2)}\left(x_0,y_0\right)=k_2x_0^{1-2\alpha}{\mathop {\lim
}\limits_{\rho \to
0}}\left\{(1-\alpha+\beta)\left[-J_1-2x_0J_2+2y_0J_3\right]+(1-2\alpha)J_4\right\}+J_5,
\end{equation}
\noindent where
$$
J_{1} (x_{0} ,y_{0} ) = {\int\limits_{C_{\rho}} {}} xy^{2\beta}
\left( {r^{2}} \right)^{\alpha - \beta - 1}F_{2} \left( {2 -
\alpha + \beta ;1 - \alpha ,\beta ;2 - 2\alpha ,2\beta ;\xi ,\eta}
\right){\frac{{\partial }}{{\partial n}}}{\left[ {\ln r^{2}}
\right]}ds,
$$
$$
J_{2} (x_{0} ,y_{0} ) = {\int\limits_{C_{\rho}} {}} xy^{2\beta}
\left( {r^{2}} \right)^{\alpha - \beta - 2}F_{2} \left( {2 -
\alpha + \beta ;2 - \alpha ,\beta ;3 - 2\alpha ,2\beta ;\xi ,\eta}
\right){\frac{{dy}}{{ds}}}ds,
$$
$$J_{3} (x_{0} ,y_{0} ) = {\int\limits_{C_{\rho}} {}} xy^{2\beta}
\left( {r^{2}} \right)^{\alpha - \beta - 2}F_{2} \left( {2 -
\alpha + \beta ;1 - \alpha ,1 + \beta ;2 - 2\alpha ,1 + 2\beta
;\xi ,\eta} \right){\frac{{dx}}{{ds}}}ds,
$$
$$J_{4} (x_{0} ,y_{0} ) = \int\limits_{C_{\rho}} y^{2\beta} \left(
{r^{2}} \right)^{\alpha - \beta - 1}F_{2} \left( {1 - \alpha +
\beta ;1 - \alpha ,\beta ;2 - 2\alpha ,2\beta ;\xi ,\eta}
\right){\frac{{dy}}{{ds}}}ds,
$$
$$
J_{5} (x_{0} ,y_{0} ) = {\int\limits_{0}^{b} {y^{2\beta} {\left.
{{\left[ {x^{2\alpha} {\frac{{\partial q_{2} \left( {x,y;x_{0}
,y_{0}} \right)}}{{\partial n}}}} \right]}} \right|}_{x = 0} ds}}.
$$
Now, by introducing the polar coordinates: $x = x_{0} + \rho\cos
\phi $ and $y = y_{0} + \rho \sin \phi $, we get
\begin{equation}
\label{eq31}
\begin{array}{l}
J_{1} \left( {x_{0} ,y_{0}} \right) = {\int\limits_{0}^{2\pi} {(x_{0} +
\rho \cos \phi )(y_{0} + \rho \sin \phi )^{2\beta} \left( {\rho
^{2}}
\right)^{\alpha - \beta - 1}}} \\
F_{2} \left( {2 - \alpha + \beta ;1 - \alpha ,\beta ;2 - 2\alpha ,2\beta
;\xi ,\eta} \right)d\phi \\
\end{array}
\end{equation}
By using the following known formulas (see [30], p. 253, (26),
[31], p. 113, (4)):
\begin{equation}
\label{eq32} F_{2} \left( {a;b_{1} ,b_{2} ;c_{1} ,c_{2} ;x,y}
\right) = {\sum\limits_{i = 0}^{\infty} {}} {\frac{{\left( {a}
\right)_{i} \left( {b_{1}} \right)_{i} \left( {b_{2}}
\right)_{i}}} {{\left( {c_{1}} \right)_{i} \left( {c_{2}}
\right)_{i} i!}}}x^{i}y^{i}F\left( {a + i,b_{1} + i;c_{1} + i;x}
\right)F\left( {a + i,b_{2} + i;c_{2} + i;y} \right),
\end{equation}
and
\begin{equation}
\label{eq33} F\left( {a,b;c;x} \right) = \left( {1 - x} \right)^{
- b}F\left( {c - a,b;c;{\frac{{x}}{{x - 1}}}} \right),
\end{equation}
we obtain
\begin{equation}
\label{eq34}
\begin{array}{l}
F_{2} \left( {a;b_{1} ,b_{2} ;c_{1} ,c_{2} ;x,y} \right) = \left( {1 - x}
\right)^{ - b_{1}} \left( {1 - y} \right)^{ - b_{2}}
{\sum\limits_{i = 0}^{\infty} {}} {\frac{{\left( {a} \right)_{i}
\left( {b_{1}} \right)_{i} \left( {b_{2}} \right)_{i}}} {{\left(
{c_{1}} \right)_{i} \left( {c_{2}} \right)_{i} i!}}}\left(
{{\frac{{x}}{{1 - x}}}} \right)^{i}\left(
{{\frac{{y}}{{1 - y}}}} \right)^{i} \cdot \\
\cdot F\left( {c_{1} - a,b_{1} + i;c_{1} + i;{\frac{{x}}{{x - 1}}}}
\right)F\left( {c_{2} - a,b_{2} + i;c_{2} + i;{\frac{{y}}{{y -
1}}}}
\right), \\
\end{array}
\end{equation}
where $F\left( {a,b;c;x} \right)$ is the Gauss hypergeometric
function ([31], p. 69, (2)). Hence we have
\begin{equation}
\label{eq35}
\begin{array}{l}
F_{2} \left( {2 - \alpha + \beta ;1 - \alpha ,\beta ;2 - 2\alpha ,2\beta
;\xi ,\eta} \right) \\
= \left( {\rho ^{2}} \right)^{1 - \alpha + \beta} \left( {\rho ^{2} +
4x_{0}^{2} + 4x_{0} \rho \cos \,\phi} \right)^{\alpha - 1}\left(
{\rho ^{2}
+ 4y_{0}^{2} + 4y_{0} \rho \sin \,\phi} \right)^{ - \beta} P_{11} , \\
\end{array}
\end{equation}
where
\[
\begin{array}{l}
P_{11} = {\sum\limits_{i = 0}^{\infty} {}} {\frac{{\left( {2 - \alpha +
\beta} \right)_{i} \left( {1 - \alpha} \right)_{i} \left(
{\beta} \right)_{i}}} {{\left( {2 - 2\alpha} \right)_{i} \left(
{2\beta} \right)_{i} i!}}}\left( {{\frac{{4x_{0}^{2} + 4x_{0} \rho
\cos \,\phi }}{{\rho ^{2} + 4x_{0}^{2} + 4x_{0} \rho \cos \,\phi}}
}} \right)^{i}\left( {{\frac{{4y_{0}^{2} + 4y_{0} \rho \sin
\,\phi}} {{\rho ^{2} + 4y_{0}^{2} +
4y_{0} \rho \sin \,\phi}} }} \right)^{i} \cdot \\
\cdot F\left( { - \alpha - \beta ,1 - \alpha + i;2 - 2\alpha +
i;{\frac{{4x_{0}^{2} + 4x_{0} \rho \cos \,\phi}} {{\rho ^{2} +
4x_{0}^{2} + 4x_{0} \rho \cos \,\phi}} }} \right)F\left( {\alpha +
\beta - 2,\beta + i;2\beta + i;{\frac{{4y_{0}^{2} + 4y_{0} \rho
\sin \,\phi}} {{\rho ^{2} +
4y_{0}^{2} + 4y_{0} \rho \sin \,\phi}} }} \right). \\
\end{array}
\]
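The expansion (\ref{eq34}), obtained by combining (\ref{eq32}) with the transformation (\ref{eq33}), can be spot-checked numerically against the double series defining $F_{2}$; the parameter values below are arbitrary test values.

```python
# Spot-check of the expansion of Appell's F2 into a series of products of
# Gauss 2F1 factors with transformed arguments x/(x-1), y/(y-1).
# Parameter values are arbitrary test values with |x| + |y| < 1.
from math import gamma
from scipy.special import hyp2f1

def poch(q, k):
    return gamma(q + k) / gamma(q)

def F2_series(a, b1, b2, c1, c2, x, y, N=80):
    """Appell F2 via its defining double series, truncated at order N."""
    return sum(poch(a, m + n) * poch(b1, m) * poch(b2, n)
               / (poch(c1, m) * poch(c2, n) * gamma(m + 1) * gamma(n + 1))
               * x**m * y**n
               for m in range(N) for n in range(N - m))

def F2_transformed(a, b1, b2, c1, c2, x, y, N=40):
    """F2 via the transformed single-series expansion."""
    u, v = x / (x - 1.0), y / (y - 1.0)
    s = sum(poch(a, i) * poch(b1, i) * poch(b2, i)
            / (poch(c1, i) * poch(c2, i) * gamma(i + 1))
            * (x / (1 - x))**i * (y / (1 - y))**i
            * hyp2f1(c1 - a, b1 + i, c1 + i, u)
            * hyp2f1(c2 - a, b2 + i, c2 + i, v)
            for i in range(N))
    return (1 - x)**(-b1) * (1 - y)**(-b2) * s

a, b1, b2, c1, c2, x, y = 0.5, 0.4, 0.7, 1.3, 1.1, 0.2, 0.25
print(abs(F2_series(a, b1, b2, c1, c2, x, y)
          - F2_transformed(a, b1, b2, c1, c2, x, y)) < 1e-8)
```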
Using the well-known Gauss summation formula for $F\left(
{a,b;c;1} \right)$ ([31], p. 112, (46))
\[
F\left( {a,b;c;1} \right) = {\frac{{\Gamma \left( {c}
\right)\Gamma \left( {c - a - b} \right)}}{{\Gamma \left( {c - a}
\right)\Gamma \left( {c - b} \right)}}}, \quad c \ne 0, - 1, -
2,\ldots, \quad {\rm Re}\left( {c - a - b} \right) > 0,
\]
we obtain
\begin{equation}
\label{eq36} {\mathop {\lim} \limits_{\rho \to 0}} P_{11} =
{\frac{{\Gamma (2 - 2\alpha )\Gamma (2\beta )}}{{\Gamma (2 -
\alpha + \beta )\Gamma (\beta )\Gamma (1 - \alpha )}}}.
\end{equation}
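The Gauss summation formula invoked here is easy to confirm numerically; the parameters below are arbitrary test values with $c - a - b > 0$.

```python
# Numerical check of the Gauss summation formula
#   F(a, b; c; 1) = Gamma(c) Gamma(c-a-b) / (Gamma(c-a) Gamma(c-b)),
# valid for Re(c - a - b) > 0. Parameter values are arbitrary test values.
from math import gamma
from scipy.special import hyp2f1

a, b, c = 0.3, 0.4, 1.5  # c - a - b = 0.8 > 0
lhs = hyp2f1(a, b, c, 1.0)
rhs = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))
print(abs(lhs - rhs) < 1e-8)
```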
Thus, by virtue of the identities (\ref{eq31}), (\ref{eq35}), and
(\ref{eq36}), we get
\begin{equation}
\label{eq37}
- (1 - \alpha + \beta )k_{2} x_{0}^{1 - 2\alpha} {\mathop {\lim
}\limits_{\rho \to 0}} J_{1} \left( {x_{0} ,y_{0}} \right) = - 1.
\end{equation}
Similarly, by considering the corresponding identities and the
fact that
\begin{equation}
\label{eq38} {\mathop {\lim} \limits_{\rho \to 0}} \rho \ln \rho =
0,
\end{equation}
we find that
\begin{equation}
\label{eq39} {\mathop {\lim} \limits_{\rho \to 0}} J_{2} \left(
{x_{0} ,y_{0}} \right) = {\mathop {\lim} \limits_{\rho \to 0}}
J_{3} \left( {x_{0} ,y_{0}} \right) = {\mathop {\lim}
\limits_{\rho \to 0}} J_{4} \left( {x_{0} ,y_{0}} \right) = 0.
\end{equation}
Now we consider the integral $J_{5} \left( {x_{0} ,y_{0}}
\right)$, which, taking into account formula (\ref{eq28}), takes
the form
\begin{equation}
\label{eq40} J_{5} \left( {x_{0} ,y_{0}} \right) = i(x_{0} ,y_{0}
)
\end{equation}
Hence, by virtue of (\ref{eq37})--(\ref{eq40}), it follows from
(\ref{eq30}) for $\left( {x_{0} ,y_{0}} \right) \in \Omega $ that
\begin{equation}
\label{eq41} w_{1}^{\left( {2} \right)} \left( {x_{0} ,y_{0}}
\right) = i(x_{0} ,y_{0} ) - 1.
\end{equation}
\textbf{Case 2.} When $\left( {x_{0} ,y_{0}} \right) \in \Gamma
$, we cut a disk bounded by a circle $C_{\rho} $ of small radius
$\rho $ centered at $\left( {x_{0} ,y_{0}} \right)$ out of the
domain $\Omega $, and denote the remaining part of the curve by
$\Gamma - \Gamma _{\rho} $. Let $C_{\rho} ^{'} $ denote the part
of the circle $C_{\rho} $ lying inside the domain $\Omega $. We
consider the domain $\Omega _{\rho} $, which is bounded by the
curve $\Gamma - \Gamma _{\rho} $, by $C_{\rho} ^{'} $, and by the
segments ${\left[ {0,a} \right]}$ and ${\left[ {0,b} \right]}$
along the $x$- and $y$-axes, respectively. Then we have
\begin{equation}
\label{eq42}
\begin{array}{l}
w_{1}^{\left( {2} \right)} \left( {x_{0} ,y_{0}} \right) \equiv
{\int\limits_{0}^{l} {x^{2\alpha} y^{2\beta}} } {\frac{{\partial
q_{2} \left( {x,y;x_{0} ,y_{0}} \right)}}{{\partial n}}}ds
= {\mathop {\lim} \limits_{\rho \to 0}} {\int\limits_{\Gamma - \Gamma
_{\rho}} {x^{2\alpha} y^{2\beta} {\frac{{\partial q_{2} \left(
{x,y;x_{0}
,y_{0}} \right)}}{{\partial n}}}ds}} \\
\end{array}
\end{equation}
Since the point $\left( {x_{0} ,y_{0}} \right)$ lies outside the
domain $\Omega _{\rho} $, the function $q_{2} \left( {x,y;x_{0}
,y_{0}} \right)$ is a regular solution of the equation $\left(
{H_{\alpha ,\beta} ^{0}} \right)$ in this domain. Therefore, by
virtue of (\ref{eq20}), we have
\begin{equation}
\label{eq43}
\begin{array}{l}
{\int\limits_{\Gamma - \Gamma _{\rho}} {x^{2\alpha} y^{2\beta
}{\frac{{\partial q_{2} \left( {x,y;x_{0} ,y_{0}}
\right)}}{{\partial
n}}}ds}} = \\
= {\int\limits_{0}^{b} {y^{2\beta}} } {\left. {{\left[ {x^{2\alpha
}{\frac{{\partial q_{2} \left( {x,y;x_{0} ,y_{0}}
\right)}}{{\partial x}}}} \right]}} \right|}_{x = 0} dy +
{\int\limits_{C_{\rho} ^{'}} {}} x^{2\alpha }y^{2\beta}
{\frac{{\partial}} {{\partial n}}}{\left\{ {q_{2} \left(
{x,y;x_{0} ,y_{0}} \right)} \right\}}ds. \\
\end{array}
\end{equation}
Substituting from (\ref{eq43}) into (\ref{eq42}), we get
\begin{equation}
\label{eq44} w_{1}^{\left( {2} \right)} \left( {x_{0} ,y_{0}}
\right) = i(x_{0} ,y_{0} ) + {\mathop {\lim} \limits_{\rho \to 0}}
{\int\limits_{C_{\rho} ^{'}} {} }x^{2\alpha} y^{2\beta}
{\frac{{\partial q_{2} \left( {x,y;x_{0} ,y_{0}}
\right)}}{{\partial n}}}ds.
\end{equation}
Now, again introducing polar coordinates in the second summand and
passing to the limit as $\rho \to 0$, we obtain
\[
w_{1}^{\left( {2} \right)} \left( {x_{0} ,y_{0}} \right) =
i(x_{0} ,y_{0} ) - {\frac{{1}}{{2}}}.
\]
\textbf{Case 3.} When $\left( {x_{0} ,y_{0}} \right) \notin \bar
{\Omega }$, it is noted that the function $q_{2} \left( {x,y;x_{0}
,y_{0}} \right)$ is a regular solution of the equation $\left(
{H_{\alpha ,\beta} ^{0}} \right)$. Hence, in view of the formula
(\ref{eq20}), we have
\[
\begin{array}{l}
w_{1}^{\left( {2} \right)} \left( {x_{0} ,y_{0}} \right) \equiv
{\int\limits_{0}^{l} {x^{2\alpha} y^{2\beta}} } {\frac{{\partial
}}{{\partial n}}}{\left\{ {q_{2} \left( {x,y;x_{0} ,y_{0}}
\right)} \right\}}ds
= {\int\limits_{0}^{b} {y^{2\beta}} } {\left. {{\left[ {x^{2\alpha
}{\frac{{\partial q_{2} \left( {x,y;x_{0} ,y_{0}}
\right)}}{{\partial x}}}}
\right]}} \right|}_{x = 0} dy = i(x_{0} ,y_{0} ). \\
\end{array}
\]
The proof of Lemma 1 is thus completed.
\textbf{Lemma 2.} \textit{The following formula holds true:}
\begin{equation}
\label{eq45} w_{1}^{\left( {2} \right)} \left( {x_{0} ,0} \right)
= {\left\{ {{\begin{array}{*{20}c}
{i\left( {x_{0} ,0} \right) - 1}
& {\left( {x_{0} \in \left( {0,a}
\right)} \right)}
\\
{i\left( {x_{0} ,0} \right) - {\frac{{1}}{{2}}}}
& {\left( {x_{0} = 0\,\,{\rm or}\,\,x_{0} = a} \right)}
\\
{i\left( {x_{0} ,0} \right)}
& {\left( {a < x_{0}} \right)}
\\
\end{array}}} \right.}
\end{equation}
where
$$
i\left( {x_{0} ,0} \right) = {\frac{{1 - 2\alpha}} {{1 + 2\beta}}
}k_{2} b^{2\beta + 1}x_{0}^{1 - 2\alpha} \left( {x_{0}^{2} +
b^{2}} \right)^{ - 1 + \alpha - \beta} F\left( {1,\beta +
{\frac{{1}}{{2}}};\beta +
{\frac{{3}}{{2}}};{\frac{{b^{2}}}{{x_{0}^{2} + b^{2}}}}} \right).
$$
\textit{Proof.} We first consider the case when $x_{0} \in \left(
{0,a} \right)$. We introduce a straight line $y = h$ for a
sufficiently small positive real number $h$ and consider the
domain $\Omega _{h} $, which is the part of the domain $\Omega $
lying above the straight line $y = h$. Applying the formula
(\ref{eq20}), we obtain
\begin{equation}
\label{eq46} w_{1}^{\left( {2} \right)} \left( {x_{0} ,0} \right)
= {\int\limits_{0}^{b} {}} {\left. {x^{2\alpha} y^{2\beta}
{\frac{{\partial q_{2} \left( {x,y;x_{0} ,0} \right)}}{{\partial
x}}}} \right|}_{x = 0} dy + {\mathop {\lim }\limits_{h \to 0}}
{\int\limits_{0}^{x_{1}} {}} {\left. {x^{2\alpha }y^{2\beta}
{\frac{{\partial q_{2} \left( {x,y;x_{0} ,0} \right)}}{{\partial
y}}}} \right|}_{y = h} dx,
\end{equation}
where $x_{1} = x_{1} \left( {h} \right)$ is the abscissa of the
point at which the straight line $y = h$ intersects the curve
$\Gamma $. It follows from (\ref{eq40}), (\ref{eq27}) and
(\ref{eq46}) that
\begin{equation}
\label{eq47}
\begin{array}{l}
w_{1}^{\left( {2} \right)} \left( {x_{0} ,0} \right) = i(x_{0} ,0) - \\
- 2\left( {1 - \alpha + \beta} \right)k_{2} x_{0}^{1 - 2\alpha} {\mathop
{\lim} \limits_{h \to 0}} h^{1 + 2\beta} {\int\limits_{0}^{x_{1}}
{} }x{\frac{{F\left( {2 - \alpha + \beta ,1 - \alpha ;2 - 2\alpha
,{\frac{{ - 4xx_{0}}} {{\left( {x - x_{0}} \right)^{2} +
h^{2}}}}} \right)}}{{{\left[ {\left( {x - x_{0}} \right)^{2} +
h^{2}} \right]}^{2 - \alpha + \beta
}}}}dx. \\
\end{array}
\end{equation}
Now, by using the hypergeometric transformation formula
(\ref{eq33}) inside the integrand (\ref{eq47}), we have
\begin{equation}
\label{eq48}
\begin{array}{l}
w_{1}^{\left( {2} \right)} \left( {x_{0} ,0} \right) = i(x_{0} ,0) - \\
2\left( {1 - \alpha + \beta} \right)k_{2} x_{0}^{1 - 2\alpha} {\mathop
{\lim} \limits_{h \to 0}} h^{1 + 2\beta} {\int\limits_{0}^{x_{1}}
{} }x{\frac{{F\left( { - \alpha - \beta ,1 - \alpha ;2 - 2\alpha
,{\frac{{4xx_{0}}} {{\left( {x + x_{0}} \right)^{2} + h^{2}}}}}
\right)}}{{{\left[ {\left( {x - x_{0}} \right)^{2} + h^{2}}
\right]}^{1 + \beta} {\left[ {\left( {x + x_{0}} \right)^{2} +
h^{2}} \right]}^{1 -
\alpha}} }}dx, \\
\end{array}
\end{equation}
which, upon setting $x = x_{0} + ht$ inside the integrand, yields
\begin{equation}
\label{eq49}
\begin{array}{l}
w_{1}^{\left( {2} \right)} \left( {x_{0} ,0} \right) = i(x_{0} ,0) - \\
- 2\left( {1 - \alpha + \beta} \right)k_{2} x_{0}^{1 - 2\alpha} {\mathop
{\lim} \limits_{h \to 0}} {\int\limits_{l_{1}} ^{l_{2}} {}}
\left( {x_{0} + ht} \right){\frac{{F\left( { - \alpha - \beta ,1 -
\alpha ;2 - 2\alpha ,{\frac{{4x_{0} \left( {x_{0} + ht}
\right)}}{{\left( {2x_{0} + ht} \right)^{2} + h^{2}}}}}
\right)}}{{\left( {1 + t^{2}} \right)^{\beta + 1}{\left[ {\left(
{2x_{0} + ht} \right)^{2} + h^{2}} \right]}^{1 - \alpha
}}}}dt, \\
\end{array}
\end{equation}
where
$$
l_{1} = - {\frac{{x_{0}}} {{h}}}, \quad l_{2} = {\frac{{x_{1} -
x_{0}}} {{h}}}.
$$
Considering
$$
{\mathop {\lim} \limits_{h \to 0}} F\left( { - \alpha - \beta ,1 -
\alpha ;2 - 2\alpha ,{\frac{{4x_{0} \left( {x_{0} + ht}
\right)}}{{\left( {2x_{0} + ht} \right)^{2} + h^{2}}}}} \right) =
$$
$$
F\left( { - \alpha - \beta ,1 - \alpha ;2 - 2\alpha ,1} \right) =
{\frac{{\Gamma \left( {2 - 2\alpha} \right)\Gamma \left( {1 +
\beta} \right)}}{{\Gamma \left( {2 - \alpha + \beta}
\right)\Gamma \left( {1 - \alpha} \right)}}},
$$
and
$$
{\int\limits_{ - \infty} ^{ + \infty} {}} {\frac{{dt}}{{\left( {1
+ t^{2}} \right)^{\beta + 1}}}} = {\frac{{\pi \Gamma \left(
{2\beta} \right)}}{{2^{2\beta - 1}\beta \Gamma ^{2}\left( {\beta}
\right)}}},
$$
we find from (\ref{eq49}) that
\begin{equation}
\label{eq50} w_{1}^{\left( {2} \right)} \left( {x_{0} ,0} \right)
= i(x_{0} ,0) - 1.
\end{equation}
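The closed-form value of the last displayed integral, $\int_{-\infty}^{+\infty} (1 + t^{2})^{-\beta - 1}\,dt = \pi \Gamma(2\beta) / \bigl(2^{2\beta - 1} \beta \Gamma^{2}(\beta)\bigr)$, can be confirmed numerically; $\beta$ below is an arbitrary test value in $(0, 1/2)$.

```python
# Numerical check of the integral used in the limit computation above:
#   int_{-inf}^{inf} dt / (1 + t^2)^(beta + 1)
#     = pi * Gamma(2*beta) / (2^(2*beta - 1) * beta * Gamma(beta)^2).
# beta is an arbitrary test value in (0, 1/2).
from math import gamma, pi, inf
from scipy.integrate import quad

beta = 0.3
val, _ = quad(lambda t: (1.0 + t * t) ** (-(beta + 1.0)), -inf, inf)
closed = pi * gamma(2 * beta) / (2 ** (2 * beta - 1) * beta * gamma(beta) ** 2)
print(abs(val - closed) < 1e-6)
```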
The other three cases, when $x_{0} = 0$, $x_{0} = a$ and $x_{0} > a$,
can be proved by using arguments similar to those detailed above
in the first case.
This evidently completes our proof of Lemma 2.
\textbf{Lemma 3.} \textit{The following formula holds true:}
\begin{equation}
\label{eq51} w_{1}^{\left( {2} \right)} \left( {0,y_{0}} \right)
= {\left\{ {{\begin{array}{*{20}c}
{ - 1}
& {\left( {y_{0} \in \left( {0,b} \right)} \right)}
\\
{ - {\frac{{1}}{{2}}}}
& {\left( {y_{0} = 0\,\,{\rm or}\,\,y_{0} = b} \right)}
\\
{0}
& {\left( {b < y_{0}} \right)}
\\
\end{array}}} \right.}
\end{equation}
\textit{Proof.} The proof of Lemma 3 would run parallel to that of
Lemma 2.
\textbf{Theorem 1.} \textit{For any points} $\left( {x,y}
\right)$\textit{ and} $\left( {x_{0} ,y_{0}} \right) \in R_{ +}
^{2} $\textit{ with} $x \ne x_{0} $\textit{ and} $y \ne y_{0}
,$\textit{ the following inequality holds true:}
\begin{equation}
\label{eq52} {\left| {q_{2} \left( {x,y;x_{0} ,y_{0}} \right)}
\right|} \le {\frac{{\Gamma \left( {1 - \alpha} \right)\Gamma
\left( {\beta} \right)}}{{\pi \Gamma (1 - \alpha + \beta
)}}}{\frac{{4^{\beta - \alpha }x^{1 - 2\alpha} x_{0}^{1 -
2\alpha}} } {{\left( {r_{1}^{2}} \right)^{1 - \alpha} \left(
{r_{2}^{2}} \right)^{\beta}} }}F{\left[ {1 - \alpha ,\beta ;1 -
\alpha + \beta ;\left( {1 - {\frac{{r^{2}}}{{r_{1}^{2}}} }}
\right)\left( {1 - {\frac{{r^{2}}}{{r_{2}^{2}}} }} \right)}
\right]},
\end{equation}
where $\alpha $ \textit{and} $\beta $\textit{ are real parameters
with} $0 < \alpha ,\beta < {\frac{{1}}{{2}}}$\textit{ as in the
equation} $\left( {H_{\alpha ,\beta} ^{\lambda}} \right)$
(\textit{with} $\lambda = 0$)\textit{, and} $r$, $r_{1} $\textit{
and} $r_{2} $\textit{ are as in} (\ref{eq10}).
\textit{Proof.} It follows from (\ref{eq34}) that
\begin{equation}
\label{eq53}
\begin{array}{l}
q_{2} \left( {x,y;x_{0} ,y_{0}} \right) = k_{2} x^{1 - 2\alpha} x_{0}^{1 -
2\alpha} \left( {r_{1}^{2}} \right)^{\alpha - 1}\left(
{r_{2}^{2}} \right)^{ - \beta} {\sum\limits_{i = 0}^{\infty} {}}
{\frac{{\left( {1 - \alpha + \beta} \right)_{i} \left( {1 -
\alpha} \right)_{i} \left( {\beta } \right)_{i}}} {{\left( {2 -
2\alpha} \right)_{i} \left( {2\beta} \right)_{i} i!}}}\left( {1 -
{\frac{{r^{2}}}{{r_{1}^{2}}} }}
\right)^{i}\left( {1 - {\frac{{r^{2}}}{{r_{2}^{2}}} }} \right)^{i}\times \\
\times F\left( {1 - \alpha - \beta ,1 - \alpha + i;2 - 2\alpha + i;1 -
{\frac{{r^{2}}}{{r_{1}^{2}}} }} \right)F\left( {\alpha + \beta -
1,\beta +
i;2\beta + i;1 - {\frac{{r^{2}}}{{r_{2}^{2}}} }} \right), \\
\end{array}
\end{equation}
Now, in view of the following inequalities:
$$
F\left( {1 - \alpha - \beta ,1 - \alpha + i;2 - 2\alpha + i;1 -
{\frac{{r^{2}}}{{r_{1}^{2}}} }} \right) \le {\frac{{(2 - 2\alpha
)_{i} \Gamma (2 - 2\alpha )\Gamma (\beta )}}{{(1 - \alpha + \beta
)_{i} \Gamma (1 - \alpha + \beta )\Gamma (1 - \alpha )}}}
$$
and
$$ F\left( {\alpha + \beta - 1,\beta + i;2\beta + i;1 -
{\frac{{r^{2}}}{{r_{2}^{2}}} }} \right) \le {\frac{{(2\beta )_{i}
\Gamma (2\beta )\Gamma (1 - \alpha )}}{{(1 - \alpha + \beta )_{i}
\Gamma (1 - \alpha + \beta )\Gamma (\beta )}}},
$$
we find from (\ref{eq53}) that the inequality (\ref{eq52}) holds
true. Hence Theorem 1 is proved.
By virtue of the following known formula ([31], p. 117, (12)):
$$
F\left( {a,b;a + b;z} \right) = - {\frac{{\Gamma \left( {a + b}
\right)}}{{\Gamma \left( {a} \right)\Gamma \left( {b}
\right)}}}F\left( {a,b;1;1 - z} \right)\ln \left( {1 - z} \right)
+
$$
$$
+ {\frac{{\Gamma \left( {a + b} \right)}}{{\Gamma ^{2}\left( {a}
\right)\Gamma ^{2}\left( {b} \right)}}}{\sum\limits_{j =
0}^{\infty} {} }{\frac{{\Gamma \left( {a + j} \right)\Gamma
\left( {b + j} \right)}}{{\left( {j!} \right)^{2}}}}{\left[ {2\psi
\left( {1 + j} \right) - \psi \left( {a + j} \right) - \psi \left(
{b + j} \right)} \right]}\left( {1 - z} \right)^{j},
$$
$$
\left( { - \pi < \arg \,\left( {1 - z} \right) < \pi ,\,\,a,b \ne
0, - 1, - 2,...} \right),
$$
we observe from (\ref{eq52}) that $q_{2} \left( {x,y;x_{0}
,y_{0}} \right)$ has a logarithmic singularity at $r = 0$.
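The logarithmic expansion of $F(a,b;a+b;z)$ quoted above can likewise be spot-checked numerically by truncating the $j$-series ($\psi$ is the digamma function); the parameters below are arbitrary test values.

```python
# Numerical check of the logarithmic expansion of F(a, b; a+b; z) near z = 1
# quoted above; psi is the digamma function. Parameters are arbitrary
# test values with z close to (but away from) 1.
from math import gamma, log
from scipy.special import hyp2f1, psi

a, b, z = 0.3, 0.4, 0.9

# Truncated j-series of the expansion; (1 - z)^j decays fast for z = 0.9.
s = sum(gamma(a + j) * gamma(b + j) / gamma(j + 1) ** 2
        * (2 * psi(1 + j) - psi(a + j) - psi(b + j)) * (1 - z) ** j
        for j in range(40))
rhs = (-gamma(a + b) / (gamma(a) * gamma(b))
       * hyp2f1(a, b, 1.0, 1 - z) * log(1 - z)
       + gamma(a + b) / (gamma(a) ** 2 * gamma(b) ** 2) * s)
lhs = hyp2f1(a, b, a + b, z)
print(abs(lhs - rhs) < 1e-8)
```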
\textbf{Theorem 2.} \textit{If the curve} $\Gamma $\textit{
satisfies the conditions (i)--(iii) of this section, then the
following inequality holds true:}
$$
{\int\limits_{\Gamma} {}} x^{2\alpha} y^{2\beta} {\left|
{{\frac{{\partial q_{2} \left( {x,y;x_{0} ,y_{0}}
\right)}}{{\partial n}}}} \right|}ds \le C_{1} ,
$$
where $C_{1} $\textit{ is a constant.}
\textit{Proof.} Theorem 2 follows by suitably applying Lemmas 1 to
3.
\textbf{Theorem 3.} \textit{The following limiting formulas hold
true for the double-layer potential (\ref{eq21}):}
\begin{equation}
\label{eq54} w_{i}^{\left( {2} \right)} \left( {t} \right) = -
{\frac{{1}}{{2}}}\mu _{2} \left( {t} \right) +
{\int\limits_{0}^{l} {}} \mu _{2} \left( {s} \right)K_{2} \left(
{s,t} \right)ds
\end{equation}
and
\begin{equation} \label{eq55} w_{e}^{\left( {2} \right)} \left(
{t} \right) = {\frac{{1}}{{2}}}\mu _{2} \left( {t} \right) +
{\int\limits_{0}^{l} {}} \mu _{2} \left( {s} \right)K_{2} \left(
{s,t} \right)ds,
\end{equation}
where, as usual, $\mu _{2} \left( {t} \right) \in C{\left[
{0,\,\,l} \right]},$
$$
K_{2} \left( {s,t} \right) = [x(s)]^{2\alpha} [y(s)]^{2\beta
}{\frac{{\partial}} {{\partial n}}}{\left\{ {q_{2} {\left[
{x\left( {s} \right),y\left( {s} \right);x_{0} \left( {t}
\right),y_{0} \left( {t} \right)} \right]}} \right\}}
$$
$$
\left( {\left( {x\left( {s} \right),y\left( {s} \right)} \right)
\in \Gamma ;\left( {x_{0} \left( {t} \right),y_{0} \left( {t}
\right)} \right) \in \Gamma} \right),
$$
$w_{i}^{\left( {2} \right)} \left( {t} \right)$ and $w_{e}^{\left(
{2} \right)} \left( {t} \right)$ are limiting values of the
double-layer potential (\ref{eq21}) at
$$
\left( {x_{0} \left( {t} \right),y_{0} \left( {t} \right)} \right)
\to \Gamma
$$
from the inside and the outside, respectively.
\textit{Proof.} We find from Lemma 1, in conjunction with Theorems
1 and 2, that each of the limiting formulas asserted by Theorem 3
holds true.
\textbf{References}
[1] A. Altin, Solutions of type $r^{m}$ for a class of singular
equations. \textit{Internat. J. Math. Sci.,} \textbf{5}(1982),
613--619.
[2] A. Altin, Some expansion formulas for a class of singular
partial differential equations. \textit{Proc. Amer. Math. Soc.},
\textbf{85}(1982), 42--46.
[3] A. Altin and E.C.Young, Some properties of solutions of a
class of singular partial differential equations. \textit{Bull.
Ins. Math. Acad. Sinica}, \textbf{11}(1983), 81--87.
[4] P. Appell and J. Kampe de Feriet, \textit{Fonctions
Hypergeometriques et Hyperspheriques; Polynomes d'Hermite,}
Gauthier-Villars, Paris, 1926.
[5] J.L.Burchnall and T.W.Chaundy, Expansions of Appell's double
hypergeometric functions. \textit{Quart. J. Math. Oxford Ser.}
11(1940), 249--270.
[6] A.Erdelyi, Singularities of generalized axially symmetric
potentials. \textit{Comm. Pure Appl. Math.}, 2(1956), 403-414.
[7] A.Erdelyi, An application of fractional integrals. \textit{J.
Analyse. Math}., 14(1965), 113-126.
[8] A.Erdelyi, W.Magnus, F.Oberhettinger and F.G.Tricomi,
\textit{Higher Transcendental Functions,} Vol. I, McGraw-Hill Book
Company, New York, Toronto and London, 1953; Russian edition,
Izdat. Nauka, Moscow, 1973.
[9] A.J.Fryant, Growth and complete sequences of generalized
bi-axially symmetric potentials. \textit{J. Differential
Equations}, \textbf{31}(1979), 155--164.
[10] R.P.Gilbert, On the singularities of generalized axially
symmetric potentials. \textit{Arch. Rational Mech. Anal.}, 6
(1960), 171-176.
[11] R.P.Gilbert, Some properties of generalized axially symmetric
potentials. \textit{Amer. J. Math.}, 84(1962), 475-484.
[12] R.P.Gilbert, ``Bergman's'' integral operator method in
generalized axially symmetric potential theory. \textit{J. Math.
Phys}., 5(1964), 983-987.
[13] R.P.Gilbert, On the location of singularities of a class of
elliptic partial differential equations in four variables.
\textit{Canad. J. Math}., 17(1965), 676-686.
[14] R.P.Gilbert, On the analytic properties of solutions to a
generalized axially symmetric Schr\"{o}dinger equation. \textit{J.
Differential Equations}, 3(1967), 59--77.
[15] R.P.Gilbert, An investigation of the analytic properties of
solutions to the generalized axially symmetric, reduced wave
equation in $n + 1$ variables, with an application to the theory
of potential scattering. \textit{SIAM J. Appl. Math.} 16 (1968),
30-50.
[16] R.P.Gilbert,\textit{ Function Theoretic Methods in Partial
Differential Equations.} New York, London: Academic Press.
[17] R.P.Gilbert and H.Howard, On solutions of the generalized
axially symmetric wave equation represented by Bergman operators,
\textit{Proc. London Math. Soc}., \textbf{15} (1965), 346-360.
[18] N.M.Gunter, \textit{Potential Theory and Its Applications to
Basic Problems of Mathematical Physics} (Translated from the
Russian edition by J.R.Schulenberger), Frederick Ungar Publishing
Company, New York, 1967.
[19] A.Hasanov, Fundamental solutions of generalized bi-axially
symmetric Helmholtz equation. \textit{Complex Variables and
Elliptic Equations}, \textbf{52}(2007), 673--683.
[20] P.Henrici, Zur Funktionentheorie der Wellengleichung,
\textit{Comment. Math. Helv}., 27(1953), 235-293.
[21] P.Henrici, On the domain of regularity of generalized axially
symmetric potentials. \textit{Proc. Amer. Math. Soc}., 8(1957),
29-31.
[22] P.Henrici, Complete systems of solutions for a class of
singular elliptic Partial Differential Equations. \textit{Boundary
Value Problems in differential equations, pp.19-34,University of
Wisconsin Press, Madison}, 1960.
[23] A.Huber, On the uniqueness of generalized axisymmetric
potentials. \textit{Ann. Math}., 60(1954), 351-358.
[24] D.Kumar, Approximation of growth numbers generalized
bi-axially symmetric potentials. \textit{Fasciculi Mathematics},
35(2005), 51--60.
[25] C.Y.Lo, Boundary value problems of generalized axially
symmetric Helmholtz equations. \textit{Portugaliae Mathematica}.
36(1977), 279-289.
[26] O.I.Marichev, An integral representation of solutions of the
generalized biaxially symmetric Helmholtz equation and formulas
for its inversion (Russian). \textit{Differencial'nye Uravnenija},
Minsk, \textbf{14}(1978), 1824--1831.
[27] P.A.McCoy, Polynomial approximation and growth of generalized
axisymmetric potentials. \textit{Canadian Journal of Mathematics},
\textbf{31}(1979), 49--59.
[28] P.A.McCoy, Best $L^{p}$-approximation of Generalized
bi-axisymmetric Potentials. \textit{Proceedings of the American
Mathematical Society}, \textbf{79}(1980), 435-440.
[29] C.Miranda, \textit{Partial Differential Equations of Elliptic
Type. Second Revised Edition} (Translated from the Italian edition
by Z.C.Motteler), Ergebnisse der Mathematik und ihrer
Grenzgebiete, Band 2, Springer-Verlag, Berlin, Heidelberg and New
York, 1970.
[30] P.P.Niu and X.B.Lo, Some notes on solvability of LPDO.
\textit{Journal of Mathematical Research and Expositions},
\textbf{3}(1983), 127--129.
[31] K.B.Ranger, On the construction of some integral operators
for generalized axially symmetric harmonic and stream
functions.\textit{ J. Math. Mech}., 14(1965), 383-401.
[32] J.M.Rassias and A.Hasanov, Fundamental Solutions of Two
Degenerated Elliptic Equations and Solutions of Boundary Value
Problems in Infinite Area. \textit{International Journal of
Applied Mathematics \& Statistics}, \textbf{8}(2007), 87-95.
[33] H.M.Srivastava, A.Hasanov and J.Choi, 2015. Double-Layer
Potentials for a Generalized Bi-Axially Symmetric Helmholtz
Equation. \textit{Sohag J.Math}. 2, No.1(2015),1-10.
[34] H.M.Srivastava and Karlsson, \textit{Multipl. Gaussian
Hypergeometric Series}, Halsted Press (Ellis Horwood Limited,
Chicherster), John Wiley and Sons, New York,Chichester,Brisbane
and Toronto,1985.
[35] R.J.Weinacht, Some properties of generalized axially
symmetric Helmholtz potentials. \textit{SIAM J. Math. Anal}.
5(1974), 147-152.
[36] A.Weinstein, Discontinuous integrals and generalized
potential theory. \textit{Trans. Amer. Math. Soc}., 63(1948),
342-354.
[37] A.Weinstein, Generalized axially symmetric potentials theory.
\textit{Bull. Amer. Math. Soc}., 59(1953), 20-38.
\end{document}
\begin{document}
\author{V.A.~Vassiliev}
\address{Steklov Mathematical Institute of Russian Academy of Sciences \ \ and \ \ National Research University Higher School of Economics}
\email{[email protected]}
\thanks{}
\title[Homology of spaces of equivariant maps]{Twisted homology of configuration spaces, homology of spaces of equivariant maps, and stable homology of spaces of non-resultant systems of real homogeneous polynomials}
\begin{abstract}
A spectral sequence calculating the homology groups of some spaces of maps equivariant under compact group actions is described. For the main example, we calculate the rational homology groups of spaces of even and odd maps $S^m \to S^M$, $m<M$, or, which is the same, the stable homology groups of spaces of non-resultant homogeneous polynomial maps ${\mathbb R}^{m+1} \to {\mathbb R}^{M+1}$ of growing degrees. Also, we find the homology groups of spaces of ${\mathbb Z}_r$-equivariant maps of odd-dimensional spheres for any $r$.
In an intermediate calculation, we find the homology groups of configuration spaces of projective and lens spaces with coefficients in several local systems.
\end{abstract}
\keywords{Equivariant maps, twisted homology, resultant, configuration space, order complex, orientation sheaf, simplicial resolution}
\subjclass[2010]{14P25, 55T99}
\maketitle
\section{Main results}
\subsection{Homology of spaces of ${\mathbb Z}_r$-equivariant maps of spheres}
Denote by $\mbox{EM}_0(S^m,S^M)$ (respectively, $\mbox{EM}_1(S^m,S^M)$) the space of even (respectively, odd) continuous maps $S^m \to S^M$, that is, of maps sending opposite points to the same point (respectively, to opposite points) in $S^M$. (The space $\mbox{EM}_0(S^m,S^M)$ can be identified with the space of continuous maps ${\mathbb R}P^m \to S^M$). Also, denote by $\mbox{EM}^*_i(S^m,S^M)$, $i \in \{0, 1\},$ the pointed versions of these spaces, i.e. the spaces of even or odd maps sending the fixed point of $S^m$ into the fixed point of $S^M$.
\begin{theorem}
\label{cor1} For any natural $m<M$ and $i =0$ $($respectively, $i=1)$, the Poincar\'e series of the
group $H^*(\mbox{EM}_i(S^m,S^M), {\mathbb Q})$ is indicated in the intersection of the corresponding row and the third column of Table
\ref{evv} $($respectively, of Table \ref{oddd}$)$. The Poincar\'e series of the similar group $H^*(\mbox{EM}^*_i(S^m,S^M), {\mathbb Q})$ is indicated in the intersection of the corresponding row and the second column of the same table.
\end{theorem}
There are obvious fiber bundles
\begin{equation}
\label{mfb}
\mbox{EM}_i (S^m, S^M) \to S^M
\end{equation}
sending any map to the image of the fixed point of $S^m$ under this map; their fibers are equal to $\mbox{EM}^*_i(S^m,S^M)$. The last columns of Tables \ref{evv} and \ref{oddd} complete the description of the rational cohomology spectral sequences of these fiber bundles. Namely, it follows from Theorem \ref{cor1} that all the cohomology groups $H^q(\mbox{EM}^*_i(S^m,S^M), {\mathbb Q})$ of their fibers are
at most one-dimensional. Therefore, to find the shape of all terms ${\mathcal E}_r$, $r\geq 1$, of such a spectral sequence, it is enough to indicate all the pairs of its groups ${\mathcal E}_M^{0,q}$ and ${\mathcal E}_M^{M, q-M+1}$, which are connected by a non-trivial action of the differential $d^M$. These pairs are listed in the last columns of our tables.
In Table \ref{genn} we similarly describe the spectral sequences of the fiber bundles
\begin{equation}\label{fb}\mbox{Map}(S^m,S^M) \stackrel{\Omega^m S^M}{\longrightarrow} S^M
\end{equation}
for the spaces of all continuous maps $S^m \to S^M$, $m<M$.
The answers given in the second column of this table (and probably in this entire table) are not new since \cite{snaith}, \cite{Cohen76} and related works. We give them here for comparison with the data of the two previous tables; also, some homology calculations arising in this approach to these answers may be of independent interest.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Parities & $\mbox{EM}^*_0(S^m, S^M)$ & $\mbox{EM}_0(S^m,S^M)$ & $d^M$ \\
\hline
$\mbox{$\begin{array}{c} $m$ \ odd, \\$M$ \ odd \end{array}$}$ & {\large $\frac{1}{1-t^{M-m}}$} & {\large $\frac{1+t^M}{1-t^{M-m}}$} & $\emptyset$ \\
\hline
$\mbox{$\begin{array}{c} $m$ \ even, \\$M$ \ even \end{array}$}$ & {\large 1} & {\large $1+ t^M $} &
$\emptyset$ \\
\hline
$\mbox{$\begin{array}{c} $m$ \ even, \\$M$ \ odd \end{array}$}$ & {\large 1} & {\large $1+t^M$ } & $\emptyset$ \\
\hline
$\mbox{$\begin{array}{c} $m$ \ odd, \\$M$ \ even \end{array}$}$
& {\large $\frac{1+t^{M-m}}{1-t^{2M-m-1}}$} & {\large $1+ t^{M-m} \frac{1+t^m}{1-t^{2M-m-1}}$} & ${\mathcal E}_M^{0, (s+1)(2M-m-1)} \to {\mathcal E}_M^{M, s(2M-m-1)+(M-m)}$, $s \geq 0$ \\
\hline
\end{tabular}
\caption{Poincar\'e series of spaces of even maps $S^m \to S^M$}
\label{evv}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Parities & $\mbox{EM}^*_1(S^m, S^M)$ & $\mbox{EM}_1(S^m,S^M)$ & $d^M$ \\
\hline
$\mbox{$\begin{array}{c} $m$ \ odd, \\$M$ \ odd \end{array}$}$ & {\large $\frac{1}{1-t^{M-m}}$} & {\large $\frac{1+t^M}{1-t^{M-m}}$} & $\emptyset$ \\
\hline
$\mbox{$\begin{array}{c} $m$ \ even, \\$M$ \ even \end{array}$}$ & {\large $\frac{1+t^{M-1}}{1-t^{M-m}}$} & {\large $\frac{1+t^{2M-1}}{1-t^{M-m}}$} & ${\mathcal E}_M^{0,s(M-m)+M-1} \to {\mathcal E}_M^{M,s(M-m)}$, $s \geq 0$
\\
\hline
$\mbox{$\begin{array}{c} $m$ \ even, \\$M$ \ odd \end{array}$}$ & {\large 1} & {\large $1+t^M$} & $\emptyset$ \\
\hline
$\mbox{$\begin{array}{c} $m$ \ odd, \\$M$ \ even \end{array}$}$
& {\large $\frac{1+t^{M-1}}{1-t^{2M-m-1}}$} & {\large $ \frac{1+t^{2M-1}}{1-t^{2M-m-1}}$} & ${\mathcal E}_M^{0,s(2M-m-1)+M-1} \to {\mathcal E}_M^{M,s(2M-m-1)}$, $s \geq 0$ \\
\hline
\end{tabular}
\caption{Poincar\'e series of spaces of odd maps $S^m \to S^M$}
\label{oddd}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Parities & $\Omega^m S^M$ & $\mbox{Map}(S^m,S^M)$ & $d^M$ \\
\hline
$\mbox{$\begin{array}{c} $m$ \ odd, \\$M$ \ odd \end{array}$}$ & {\large $\frac{1}{1-t^{M-m}}$} & {\large $\frac{1+t^M}{1-t^{M-m}}$} & $\emptyset$ \\
\hline
$\mbox{$\begin{array}{c} $m$ \ even, \\$M$ \ even \end{array}$}$ & {\large $\frac{1+t^{2M-m-1}}{1-t^{M-m}}$} & {\large $t^M + \frac{1+t^{3M-m-1}}{1-t^{M-m}}$} &
${\mathcal E}_M^{0,(s-1)(M-m)+(2M-m-1)} \to {\mathcal E}_M^{M, s(M-m)} ,$ $s \geq 1$ \\
\hline
$\mbox{$\begin{array}{c} $m$ \ even, \\$M$ \ odd \end{array}$}$ & $1+t^{M-m}$ & $(1+t^M)(1+t^{M-m})$ & $\emptyset$ \\
\hline
$\mbox{$\begin{array}{c} $m$ \ odd, \\$M$ \ even \end{array}$}$
& {\large $\frac{1+t^{M-m}}{1-t^{2M-m-1}}$} & {\large $1+ t^{M-m} \frac{1+t^m}{1-t^{2M-m-1}}$} & ${\mathcal E}_M^{0, (s+1)(2M-m-1)} \to {\mathcal E}_M^{M, s(2M-m-1)+(M-m)}$, $s \geq 0$ \\
\hline
\end{tabular}
\caption{Poincar\'e series of spaces of general maps $S^m \to S^M$}
\label{genn}
\end{center}
\end{table}
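The differential patterns in the last columns of these tables can be checked against the listed Poincar\'e series mechanically: on the $E_1$ page of the spectral sequence of the bundle (\ref{mfb}), the series is $(1+t^M)$ times the series of the fiber, and each differential $d^M: {\mathcal E}_M^{0,q} \to {\mathcal E}_M^{M,q-M+1}$ cancels a pair of classes contributing $t^q + t^{q+1}$. The following sketch (our consistency check, not part of the paper's proofs; sympy is assumed) verifies this bookkeeping for two sample rows.

```python
# Consistency check (ours, not from the paper): on the E_1 page of the
# spectral sequence of the bundle EM_i(S^m, S^M) -> S^M, the Poincare
# series is (1 + t^M) * F(t), where F is the series of the fiber
# EM_i^*(S^m, S^M).  Each differential d^M : E_M^{0,q} -> E_M^{M,q-M+1}
# listed in the tables cancels the pair t^q + t^(q+1).
import sympy as sp

t = sp.symbols('t')

def check(F, total, sources):
    """sources(t): sum of t^q over all sources E_M^{0,q} of the d^M's."""
    assert sp.simplify((1 + t**M) * F - sources * (1 + t) - total) == 0

# Table 1, row {m odd, M even}: sources q = (s+1)(2M-m-1), s >= 0.
m, M = 3, 4
u = t**(2*M - m - 1)
check(F=(1 + t**(M - m)) / (1 - u),
      total=1 + t**(M - m) * (1 + t**m) / (1 - u),
      sources=u / (1 - u))

# Table 2, row {m even, M even}: sources q = s(M-m) + M-1, s >= 0.
m, M = 2, 4
v = t**(M - m)
check(F=(1 + t**(M - 1)) / (1 - v),
      total=(1 + t**(2*M - 1)) / (1 - v),
      sources=t**(M - 1) / (1 - v))

print("differential bookkeeping of Tables 1 and 2 is consistent")
```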
Any odd-dimensional sphere can be realized as the unit sphere in a complex vector space; hence the cyclic group ${\mathbb Z}_r$ acts on it by multiplication by powers of $e^{2\pi i/r}$.
\begin{theorem} \label{m4}
Let $m<M$ be two odd numbers and $0 < s \leq r$. Then
1$)$ the Poincar\'e series of the rational cohomology group of the space of continuous maps $f: S^m \to S^M$
such that
\begin{equation}
\label{chara}
f(e^{2\pi i/r} x) = e^{2\pi i s/r} f(x)
\end{equation}
for any $x \in S^m$ is equal to $(1+t^{M})/(1-t^{M-m})$;
2$)$ the Poincar\'e series of the corresponding space of equivariant maps sending the fixed point of $S^m$ to the fixed point of $S^M$ is equal to $1/(1-t^{M-m})$.
\end{theorem}
See \cite{circle} for the first application of our spectral sequence to the case of infinite group actions.
The data of Tables \ref{evv}, \ref{oddd} and \ref{genn} for $m$ and $M$ odd are special cases of Theorem \ref{m4} corresponding to $(r,s)$ equal to $(2,2),$ $(2,1)$, and $(1,1)$ respectively.
\subsection{Application to the spaces of non-resultant maps}
Denote by ${\mathfrak P}_{d,n}$ the space of homogeneous polynomials ${\mathbb R}^n \to {\mathbb R}$ of degree $d$. Its $k$-th Cartesian power
${\mathfrak P}_{d,n}^k$ is the space of systems of $k$ such polynomials. The {\em resultant} subvariety
$\Sigma \subset {\mathfrak P}_{d,n}^k$ consists of all systems having non-trivial common roots in ${\mathbb R}^n$.
The study of the topology of non-resultant spaces was essentially started in \cite{CCMM}; for some other related works see \cite{Book}, \cite{Kozl}, \cite{KY}, \cite{dan18}.
The embedding ${\mathfrak P}_{d,n}^k \to {\mathfrak P}_{d+2,n}^k$ defined by the multiplication of all polynomials by
$x_1^2 + \dots + x_n^2$ induces a homomorphism
\begin{equation} H^*({\mathfrak P}^k_{d+2,n} \setminus \Sigma) \to
H^*({\mathfrak P}^k_{d,n} \setminus \Sigma) \ .
\label{stab} \end{equation}
The structure of the {\em stable} rational cohomology groups of spaces ${\mathfrak P}^k_{d,n}\setminus \Sigma$ with
fixed $k>n$ and growing $d$, that is, of the inverse limits of these groups with respect to the maps (\ref{stab}), follows from Theorem \ref{cor1} by means of a result of \cite{KY}, which also indicates a realistic estimate for the instant of stabilization.
The answers are different for the stabilizations over even and
odd $d$, and are as follows.
The restrictions of the systems of the class ${\mathfrak P}^k_{d,n}$
to the unit sphere in ${\mathbb R}^n$ define injective maps
\begin{equation} \label{embev}{\mathfrak P}^k_{d,n} \setminus \Sigma \to
\mbox{EM}_i(S^{n-1} , ({\mathbb R}^k\setminus 0)) \sim \mbox{EM}_i(S^{n-1}, S^{k-1}),
\end{equation}
$i \equiv d(\mbox{mod } 2)$;
here $\sim$ means homotopy equivalence.
\begin{proposition}[see \cite{KY}] \label{trivi}
If $k>n$ then the maps $($\ref{embev}$)$ induce isomorphisms of cohomology groups in all dimensions strictly below $(k-n)(d+1)$.
\end{proposition}
\noindent
{\bf Remark.} It follows easily from the Weierstrass approximation theorem that
for any natural $k$ and $n$, the inverse limit of groups $H^*({\mathfrak P}^k_{d,n} \setminus \Sigma)$ over the maps $($\ref{stab}$)$ with even or odd $d$ is naturally isomorphic to $H^*(\mbox{\rm EM}_i(S^{n-1}, S^{k-1}))$, $i \equiv d(\mbox{mod } 2)$, see e.g. Proposition 1 in \cite{dan18}. The previous proposition gives a realistic estimate for the instant of stabilization in the case $k>n$.
\begin{example} \rm The space ${\mathfrak P}_{1,3}^k \setminus \Sigma$ is homotopy equivalent to the Stiefel manifold $V_3({\mathbb R}^k)$. For odd $k>3$ the Poincar\'e polynomial of its rational cohomology group is equal to $(1+t^{k-3})(1+t^{2k-3})$. All elements of this group are stable, i.e. they are induced from certain elements of the group $H^*(\mbox{EM}_1(S^2,S^{k-1}),{\mathbb Q})$ under the map (\ref{embev}) with $n=3$. Indeed, the number 3 in the first bracket should be considered as $n\equiv m+1$, and in the second one just as 3, see the row \{$m$ even, $M$ even\} of Table \ref{oddd} for $m=2$, $M=k-1$. In the case of even $k>3$ the Poincar\'e polynomial of $H^*(V_3({\mathbb R}^k))$ is equal to $(1+t^{k-1})(1+t^{2k-5})$; only the generator corresponding to $t^{k-1}$ is stable, see the row \{$m$ even, $M$ odd\} of Table \ref{oddd}.
The groups $H^*({\mathfrak P}_{2,3}^k \setminus \Sigma,{\mathbb Q})$ are found in \cite{i16};
all of them contain a stable subgroup isomorphic to ${\mathbb Q}$ in dimension $k-1$, see the rows \{$m$ even\} of Table \ref{evv}.
\end{example}
\subsection{Notation} The calculations summarized in Theorems \ref{cor1} and \ref{m4} will be performed in the following terms.
Given a topological space $X$ and local system $L$ of groups on it,
{\bf $\bar H_*(X, L)$} is the (Borel--Moore) homology group of the complex of locally finite singular chains of $X$ with coefficients in $L$.
For any natural $N$, {\bf $I(X,N)$} and {\bf $B(X,N)$} denote respectively the ordered and unordered $N$-point configuration spaces of $X$. In particular, $I(X,N)$ is an open subset in $X^N$, and $B(X,N)=I(X,N)/S(N)$.
{\bf $\pm {\mathbb Z}$} is the ``sign'' local system of groups on $B(X,N)$ which is locally isomorphic to ${\mathbb Z}$, but the loops in $B(X,N)$ act on its fibers as multiplication by $\pm 1$ depending on the parity of the permutations of $N$ points defined by these loops. Also,
we denote by $\pm {\mathbb Q}$ the local system $\pm {\mathbb Z} \otimes {\mathbb Q}$.
If $X$ is a connected manifold, then by $\mbox{Or}$ we denote the local system on $B(X,N)$ which is locally equivalent to ${\mathbb Z}$,
and moves any element of the fiber to its opposite after the transportation over a loop in $B(X,N)$ if and only if the union of traces of all $N$ points during the movement along this loop defines an element of $H_1(X,{\mathbb Z}_2)$ destroying the orientation of $X$.
\begin{lemma} If $X$ is a connected $m$-dimensional manifold, then
the orientation sheaf of the space $B(X,N)$ is equal to \rm $(\pm {\mathbb Z})^{\otimes m} \otimes \mbox{Or}.$
$\Box$
\label{orient}
\end{lemma}
Consider the unique non-trivial local system of groups with fiber ${\mathbb Z}$ on ${\mathbb R}P^m$: the non-zero element of the group $\pi_1({\mathbb R}P^m)$ acts on its fibers as multiplication by $-1$. If $m$ is odd then the homology group of ${\mathbb R}P^m$ with coefficients in this system is finite in all dimensions. Denote by $\Theta$ the local system on the space $({\mathbb R}P^m)^{N}$ (i.e. on the $N$th Cartesian power of ${\mathbb R}P^m$) which is the tensor product of the $N$ systems lifted from the factors by the standard projections.
Denote by $\tilde \Theta$ the local system on $B({\mathbb R}P^m,N)$ with fiber ${\mathbb Z}$,
such that a loop in $B({\mathbb R}P^m,N)$ acts as multiplication by $-1$ on the fiber if and only if
the union of traces of all $N$ points of the configuration during the movement along this loop defines the non-zero element of the group $H_1({\mathbb R}P^m,{\mathbb Z}_2)$.
\begin{remark} \label{protriv}
$\tilde \Theta =\mbox{\rm Or}$ if $m$ is even.
The orientation sheaf of $B({\mathbb R}P^m,N)$ is equal to $\tilde \Theta$ if $m$ is even, and to $\pm {\mathbb Z}$ if $m$ is odd.
The restriction of the local system $\Theta$ to the subspace $I({\mathbb R}P^m,N) \subset ({\mathbb R}P^m)^N$ is isomorphic to the system lifted from $\tilde \Theta$ by the standard covering $I({\mathbb R}P^m,N) \to B({\mathbb R}P^m,N)$.
\end{remark}
Denote by $L_r^m$ the lens space $S^m /({\mathbb Z}/r{\mathbb Z})$ ($m$ odd).
${\mathbb R}P_\star^m$ (respectively, $L^m_{r,\star}$) is the notation for the space ${\mathbb R}P^m$ (respectively, $L^m_{r})$ with one point removed.
\subsection{The main spectral sequence}
\label{agen}
In this subsection and the next one we describe a spectral sequence calculating the cohomology groups of some spaces of equivariant maps, including the ones considered in Theorems \ref{cor1} and \ref{m4}. Its consequences needed for the proof of these two theorems are collected in Corollary \ref{mainss} on page \pageref{mainss}. In \S \ref{which} we discuss which spaces of equivariant maps are homotopy equivalent to ones described here.
Let $X \subset {\mathbb R}^a$ be a compact semi-algebraic set, and let $G$ be a compact Lie group acting freely and algebraically on a neighbourhood of $X$ in ${\mathbb R}^a$. Let $\rho:G \to \mbox{O}({\mathbb R}^W)$ be a representation of $G$, and $\Lambda$ a closed semi-algebraic subset in the unit sphere $S^{W-1} \subset {\mathbb R}^W$, invariant under the corresponding action of $G$ on $S^{W-1}$. Denote by $C\Lambda$ the union of the origin in ${\mathbb R}^W$ and of all rays that start from the origin and intersect the sphere $S^{W-1}$ at the points of $\Lambda$.
Denote by $\mbox{EM}_G(X,{\mathbb R}^W)$ the space of all continuous maps $X \to {\mathbb R}^W$ equivariant under our actions of $G$ (i.e., the maps $f$ such that $f(g(x))= \rho(g)(f(x))$ for any $x \in X$, $g \in G$).
Denote by $\mbox{EM}_G(X,({\mathbb R}^W \setminus C\Lambda))$ its subspace consisting of maps whose images do not meet $C\Lambda$.
If the sum of the dimensions of $X/G$ and $C\Lambda$ is at most $W-2$, then the cohomology group $\tilde H^*(\mbox{EM}_G(X,({\mathbb R}^W \setminus C\Lambda)))$ of this space (reduced modulo a point) can be calculated by a spectral sequence, all of whose groups $E^{p,q}_1$ are finitely generated and can be explicitly expressed in terms of our data. All these non-trivial groups $E^{p,q}_1$ lie in the wedge \begin{equation}
\label{wedge}
\{(p,q)| p < 0, q+p(W-\dim X + \dim G - \dim C\Lambda ) \geq 0\} .
\end{equation}
In particular, for any $c$ there are only finitely many such non-trivial groups in the line $\{p+q=c\}$. Let us describe these groups.
For any natural $N$ consider the space $\tilde I(X,N)$ of ordered collections of $N$ points of $X$ such that the $G$-orbits of points are pairwise distinct, and consider the trivial fiber bundle over this space with fiber $(C\Lambda)^N$. It is convenient to consider the fiber of this bundle over a sequence $(x_1, \dots, x_N) \in X^N$ as the space of the ways to associate a point $\lambda_j \in C\Lambda$ with any point $x_j$ of this sequence. Denote by ${\mathfrak G}_N $ the semidirect product of the group $G^N$ and the permutation group $S(N)$ with the multiplication $$\left((g_1, \dots, g_N);P\right)\left((h_1, \dots, h_N);Q\right) = \left((g_1h_{P^{-1}(1)}, \dots, g_Nh_{P^{-1}(N)});PQ\right)$$
for any $g_j, h_j \in G$ and $P, Q \in S(N)$. This group acts on the base and the fibers of our fiber bundle: its element
\begin{equation}
\label{semi}
((g_1, \dots, g_N); P)
\end{equation} with $g_j \in G$, $P \in S(N)$, takes the points $(x_1, \dots, x_N)$ and $(\lambda_1, \dots, \lambda_N)$ to
\begin{equation} \label{semact}
\left(g_1(x_{P^{-1}(1)}), \dots, g_N(x_{P^{-1}(N)})\right) \qquad \mbox{and} \qquad \left(\rho(g_1)(\lambda_{P^{-1}(1)}), \dots, \rho(g_N)(\lambda_{P^{-1}(N)})\right)
\end{equation} respectively. Therefore this group acts also on the space of this fiber bundle. Denote by $\Pi_{N,G}(X)$ the quotient space of the space of our fiber bundle by this action; it can be considered as the space of a fiber bundle with the same fiber $(C\Lambda)^N$ over the configuration space $B(X/G,N)$.
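The multiplication rule above is that of the wreath product $G \wr S(N)$. The following minimal computational model (our illustration only, with $G = {\mathbb Z}_r$ written additively; all names are ours) confirms that this rule is indeed associative.

```python
# A minimal model (illustrative, not from the paper) of the group
# {\frak G}_N = G^N x| S(N) with G = Z_r written additively, using the
# multiplication rule displayed above:
#   ((g_1,...,g_N); P)((h_1,...,h_N); Q) = ((g_1 + h_{P^{-1}(1)}, ...); PQ).
import itertools, random

r, N = 3, 4

def compose(P, Q):
    """(PQ)(i) = P(Q(i)); permutations are 0-indexed tuples with P[i] = P(i)."""
    return tuple(P[Q[i]] for i in range(N))

def invert(P):
    inv = [0] * N
    for i, j in enumerate(P):
        inv[j] = i
    return tuple(inv)

def mult(a, b):
    """Product of two elements a = ((g_1,...,g_N); P), b = ((h_1,...,h_N); Q)."""
    (g, P), (h, Q) = a, b
    Pinv = invert(P)
    return (tuple((g[i] + h[Pinv[i]]) % r for i in range(N)), compose(P, Q))

random.seed(0)
perms = list(itertools.permutations(range(N)))
rand = lambda: (tuple(random.randrange(r) for _ in range(N)), random.choice(perms))

# check associativity on random triples of elements
for _ in range(100):
    a, b, c = rand(), rand(), rand()
    assert mult(mult(a, b), c) == mult(a, mult(b, c))
print("semidirect-product multiplication is associative")
```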
Also, define the {\em sign} of the element
(\ref{semi}) (and of its entire connected component in ${\mathfrak G}_N$) as the product of the determinants of all $N$ operators $\rho(g_j) \in \mbox{O}({\mathbb R}^W)$ and the $(W+1)$th power of the sign of the permutation $P$. Consider the homomorphism $\pi_1(B(X/G,N)) \to \pi_0({\mathfrak G}_N)$ from the exact sequence of the principal ${\mathfrak G}_N$-bundle $\tilde I(X,N) \to B(X/G,N)$. Composing it with the sign homomorphism $\pi_0({\mathfrak G}_N) \to \{\pm 1\}$ we obtain a homomorphism $\pi_1(B(X/G,N)) \to \{\pm 1\}$. We denote the resulting local system with fiber ${\mathbb Z}$ by ${\mathcal J}$, and we use the same notation ${\mathcal J}$ for the pullback of this local system to $\Pi_{N,G}(X)$.
\begin{theorem}
\label{absv}
If the sum of the dimensions of the spaces $X/G$ and $C\Lambda$ does not exceed $W-2$, then there is a spectral sequence $E^{p,q}_r$ converging to the reduced cohomology group $\tilde H^*( \mbox{\rm EM}_G(X,({\mathbb R}^W \setminus C\Lambda)))$; all its groups $E^{p,q}_1$ with $p\geq 0$ are trivial, and
\begin{equation}
\label{absvf}
E^{p,q}_1 \simeq \bar H_{-pW-q}(\Pi_{-p,G}(X), {\mathcal J})
\end{equation}
for $p<0$.
\end{theorem}
In particular, for any $p<0$ the groups $E^{p,q}_1$ with this $p$ are trivial if $q$ does not belong to the segment $[-p(W-\dim X +\dim G - \dim C\Lambda), -pW]$. Indeed, only for such $q$ does the lower index $-pW-q$ in (\ref{absvf}) belong to the segment $[0, \dim \Pi_{-p,G}(X)]$.
\subsection{The relative version}
Suppose that, in the conditions of Theorem \ref{absv}, additionally $A$ is a $G$-invariant subcomplex in $X$, and $\varphi: A \to {\mathbb R}^W \setminus C\Lambda$ is a $G$-equivariant map. Denote by $\mbox{EM}_G(X,A;{\mathbb R}^W)$ the space of all $G$-equivariant maps $X \to {\mathbb R}^W$ coinciding with $\varphi$ on $A$. Denote by $\Pi_{N,G}(X,A)$ the part of $\Pi_{N,G}(X)$ consisting of all fibers
of our fiber bundle $\Pi_{N,G}(X) \to B(X/G,N)$ over the points of the subspace $B((X\setminus A)/G,N) \subset B(X/G,N)$.
\begin{theorem}
\label{relav}
If the sum of the dimensions of the spaces $X/G$ and $C\Lambda$ does not exceed $W-2$, then there is a spectral sequence $\hat E^{p,q}_r$ converging to the reduced cohomology group $\tilde H^*( \mbox{\rm EM}_G(X,A;({\mathbb R}^W \setminus C\Lambda)))$; all its groups $\hat E^{p,q}_1$ with $p\geq 0$ are trivial, and
\begin{equation}
\label{relaf}
\hat E^{p,q}_1 \simeq \bar H_{-pW-q}(\Pi_{-p,G}(X,A), {\mathcal J})
\end{equation}
for $p<0$.
\end{theorem}
\begin{remark}\rm
In the case of the trivial group $G$, the spectral sequences (\ref{absvf}) and (\ref{relaf}) were
defined in \S III.6 of \cite{Book}. The sequence (\ref{absvf}) is in this case conjecturally equipotent to the one of Anderson \cite{anderson}: they are formally non-isomorphic, since the first invariant term of Anderson's spectral sequence is $E_2$ while that of ours is $E_1$, but probably they handle effectively nearly the same class of examples.
The construction of the spectral sequences (\ref{absvf}), (\ref{relaf}) is very similar to the one given in \cite{Book}; we review it with appropriate modifications in \S \ref{sseven}.
\end{remark}
\begin{corollary}
\label{mainss}
For any $m<M$, there are three spectral sequences $\{E^{p,q}_r\}$ calculating the integer cohomology groups of the spaces $\mbox{Map}(S^m,S^M)$, $\mbox{EM}_0(S^m,S^M)$, and $\mbox{EM}_1(S^m,S^M)$, and three more spectral sequences $\{\hat E^{p,q}_r\}$ for the cohomology of spaces $\Omega^m S^M$, $\mbox{EM}^*_0(S^m,S^M)$, and $\mbox{EM}^*_1(S^m,S^M)$;
the supports of all six spectral sequences lie in the wedge $\{(p,q): p \leq 0$, $q+p(M-m+1) \geq 0\}$; the only non-trivial groups $E^{p,q}_1$ and $\hat E^{p,q}_1$ with $p=0$ are $E^{0,0}_1 \simeq \hat E^{0,0}_1 \simeq {\mathbb Z}$; the group $E_1^{p,q}$ with $p <0$ is equal to
\begin{eqnarray}
\bar H_{-p(M+1)-q}\left(B(S^m, -p), (\pm {\mathbb Z})^{\otimes M}\right) \qquad \qquad & \mbox{for } &\mbox{Map}(S^m,S^M) \ ,
\label{msgen} \\
\label{ssss}
\bar H_{-p(M+1)-q}\left(B({\mathbb R}P^{m},-p),(\pm {\mathbb Z})^{\otimes M}\right) \qquad \qquad & \mbox{for } & \mbox{EM}_0(S^m,S^M), \\
\label{thomm2}
\bar H_{-p(M+1)-q}\left(B({\mathbb R}P^{m},-p),(\pm {\mathbb Z})^{\otimes M} \otimes \tilde \Theta^{\otimes (M+1)}\right) & \mbox{for } & \mbox{EM}_1(S^m,S^M) \ ;
\end{eqnarray}
the analogous groups $\hat E^{p,q}_1$ with $p<0$ are equal to
\begin{eqnarray}
\label{msgenpt}
\bar H_{-p(M+1)-q}\left(B({\mathbb R}^m, -p), (\pm {\mathbb Z})^{\otimes M}\right) \qquad \qquad & \mbox{for } & \Omega^m S^M, \\
\label{sssspt}
\bar H_{-p(M+1)-q}\left(B({\mathbb R}P_\star^{m},-p),(\pm {\mathbb Z})^{\otimes M}\right) \qquad \qquad & \mbox{for } & \mbox{EM}^*_0(S^m,S^M) \ , \\
\label{thomm2pt}
\bar H_{-p(M+1)-q}\left(B({\mathbb R}P_\star^{m},-p),(\pm {\mathbb Z})^{\otimes M} \otimes \tilde \Theta^{\otimes (M+1)}\right) & \mbox{for } & \mbox{EM}^*_1(S^m,S^M) .
\end{eqnarray}
For any odd numbers $m<M$, there is a spectral sequence $\{E^{p,q}_r\}$ $($respectively, $\{\hat E^{p,q}_r\})$ calculating the integer cohomology group of the space of maps considered in the first $($respectively, the second$)$ statement of Theorem \ref{m4}; their groups $E_1^{p,q}$ and $\hat E_1^{p,q}$ for any $p <0$ are equal respectively to
\begin{equation}
\label{lenspm}
\bar H_{-p(M+1)-q}\left(B(L_r^{m},-p),\pm {\mathbb Z}\right) \qquad \mbox{and}
\end{equation}
\begin{equation}
\label{lenspmpt}
\bar H_{-p(M+1)-q}\left(B(L_{r,\star}^{m}\ ,-p),\pm {\mathbb Z}\right) \ .
\end{equation}
\end{corollary}
\noindent
{\it Proof.} The formulas (\ref{msgen}--\ref{thomm2}) and (\ref{lenspm}) follow immediately from Theorem \ref{absv}, and formulas (\ref{msgenpt}--\ref{thomm2pt}) and (\ref{lenspmpt}) from Theorem \ref{relav}. In all these cases $W=M+1$ and $\Lambda=\emptyset$ (so that $C\Lambda$ is the origin in ${\mathbb R}^W$). In (\ref{ssss}) and (\ref{sssspt}) $X={\mathbb R}P^m$
(since the even maps $S^m \to S^M$ are in the obvious one-to-one correspondence with usual maps ${\mathbb R}P^m \to S^M$), and the group $G$ is trivial. Analogously, the maps $S^m \to S^M$ commuting with the actions of the group ${\mathbb Z}_r$ according to the rule (\ref{chara}) are in one-to-one correspondence with the maps $L_\tau^m \to S^M$ (where $\tau$ is the greatest common divisor of $r$ and $s$) commuting with some free actions of the group $G={\mathbb Z}_\frac{r}{\tau}$: this action on $L^m_\tau$ is the factorization of the action on $S^m$. Therefore in (\ref{lenspm}) and (\ref{lenspmpt}) we have $X = L^m_\tau$. In all other cases $X=S^m$: the group $G$ equals ${\mathbb Z}_2$ in (\ref{thomm2}) and (\ref{thomm2pt}), and is trivial in (\ref{msgen}) and (\ref{msgenpt}).
In these examples it is convenient for us to consider the non-reduced cohomology groups; therefore we add the standard cell $E^{0,0}_1 \simeq {\mathbb Z}$ in comparison with the spectral sequences of Theorems \ref{absv} and \ref{relav}.
$\Box$
\begin{remark} \rm
The spectral sequence (\ref{msgenpt}) stabilizes at the term $E_1$ by the Snaith splitting \cite{snaith}. The calculations given below prove a similar stabilization result for all the spectral sequences (\ref{msgen}--\ref{lenspmpt}), at least with rational coefficients.
The formulas (\ref{msgen}--\ref{ssss}) and (\ref{msgenpt}--\ref{sssspt}) related to trivial group actions are special cases of the main formula of \S III.6.2 in \cite{Book}. Formula (\ref{thomm2}) is announced in \cite{dan18}, together with the ``non-pointed'' part of Theorem \ref{cor1} concerning the third columns of Tables \ref{evv} and \ref{oddd}.
\end{remark}
\subsection{Twisted homology of configuration spaces}
In order to deduce Theorems \ref{cor1} and \ref{m4} from these spectral sequences, we need to calculate the homology of several configuration spaces, in particular the following ones.
\begin{theorem}
\label{lem29}
For any $r \geq 1$ and any odd $m$,
$($a$)$ $H^*(B(L_r^{m},N),{\mathbb Q}) \simeq H^*(S^{m},{\mathbb Q})$ for all $N \geq 1$,
$($b$)$ the group $H^j(B(L_r^{m},N),\pm {\mathbb Q})$ is
isomorphic to ${\mathbb Q}$ if $N$ is odd and $j$ is equal to either $(m-1)\frac{N-1}{2}$ or $(m-1)\frac{N-1}{2}+m,$
and is trivial for all other combinations of $N$ and $j$.
\end{theorem}
\begin{theorem} \label{main2}
Let $m$ be odd. Then
A$)$ The group $H_j(B({\mathbb R}P_\star^m,N), \tilde \Theta \otimes \pm {\mathbb Q})$ with arbitrary $N \geq 1$ is equal to ${\mathbb Q}$ if $j= \left]\frac{N}{2}\right[ \times (m-1)$ $($where $]X[$ denotes the smallest integer not smaller than $X)$ and is trivial for all other $j$.
B$)$ The $($Poincar\'e dual to one another$)$ groups $H_*(B({\mathbb R}P^m,N), \tilde \Theta \otimes \pm {\mathbb Q})$ and $\bar H_*(B({\mathbb R}P^m,N), \tilde \Theta \otimes {\mathbb Q})$ are trivial if $N$ is odd, and have the Poincar\'e polynomials equal respectively to $t^{\frac{N}{2}(m-1)}(1+t^m)$ and $ t^{\frac{N}{2}(m+1)} (1+t^{-m})$ if $N$ is even.
\end{theorem}
\begin{theorem} \label{main1}
If $m$ and $N$ are odd, then the group $H_*(I({\mathbb R}P^m,N), \Theta \otimes {\mathbb Q})$ is trivial.
\end{theorem}
The next theorem is not needed for our calculations; nevertheless I prove it below for completeness.
\begin{theorem} \label{lem1} If $N$ is even and $m$ is odd, then
the Poincar\'e polynomial of the group $H_*(I({\mathbb R}P_\star^m,N-1), \Theta \otimes {\mathbb Q})$ is equal to
\begin{equation} \label{fibfor}
(N-1)!! \ t^{(m-1)N/2}
\prod_{r=1}^{N/2-1} \left(1+(2r-1)t^{m-1}\right) .
\end{equation}
\end{theorem}
\section{Proof of Theorem \ref{lem29}}
\begin{lemma}
\label{lem11}
For any $N \geq 1$, an arbitrary embedding ${\mathbb R}^m \to L_{r,\star}^m$ induces the isomorphisms
\begin{eqnarray}
\label{a}
H^*(I(L_{r,\star}^m,N),{\mathbb Q}) & \simeq & H^*(I({\mathbb R}^{m},N),{\mathbb Q}); \\
\label{b}
H^*(B(L_{r,\star}^m, N), {\mathbb Q}) & \simeq & H^*(B({\mathbb R}^{m}, N), {\mathbb Q}); \\
\label{c}
H^*(B(L_{r,\star}^m, N), \pm {\mathbb Q}) & \simeq & H^*(B({\mathbb R}^{m}, N), \pm {\mathbb Q}).
\end{eqnarray}
\end{lemma}
\noindent
{\it Proof.} The isomorphism (\ref{a}) follows by induction over the standard fiber bundles $I(X,l+1) \to I(X,l)$ defined by forgetting the last points of configurations, with $X$ equal to either $L^m_{r,\star}$ or ${\mathbb R}^m$. Indeed,
we have natural isomorphisms $ H^*({\mathbb R}^m \setminus \{l \ \mbox{ points}\},{\mathbb Q}) \simeq
H^*( L_{r,\star}^m \setminus \{l \ \mbox{ points}\},{\mathbb Q})$ for their fibers. These bundles are homologically trivial under the action of $\pi_1(I(X,l))$ since the cohomology groups of fibers are generated by the linking number classes with the removed ordered points. Therefore we also have natural isomorphisms of the spectral sequences of these fiber bundles.
These spectral sequences for $I({\mathbb R}^m,N)$ stabilize at the term $E_2$, so that the Poincar\'e polynomials of the groups (\ref{a}) are equal to \begin{equation}
\label{fks}
\prod_{a=1}^{N-1}(1+ a t^{m-1}) \ ,
\end{equation}
cf. \cite{A69}, \cite{Cohen76}.
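As a quick sanity check (ours, not part of the argument; sympy is assumed), the product (\ref{fks}) has value $N!$ at $t=1$ and only even-degree terms when $m$ is odd, which is exactly what the Euler characteristic argument below uses.

```python
# Sanity check (ours): the product formula for H^*(I(R^m, N), Q) describes
# a space of total dimension N! concentrated in even degrees when m is odd.
import math
import sympy as sp

t = sp.symbols('t')
m = 3                                   # any odd m would do

for N in range(1, 8):
    poly = sp.Integer(1)
    for a in range(1, N):               # product over a = 1, ..., N-1
        poly = sp.expand(poly * (1 + a * t**(m - 1)))
    assert poly.subs(t, 1) == math.factorial(N)
    assert all(deg % 2 == 0 for (deg,) in sp.Poly(poly, t).monoms())
```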
By Proposition 2 from \cite{Serre}, any group considered in (\ref{b}) or (\ref{c}) is equal to the quotient of the group considered in the same side of (\ref{a}) by the subgroup generated by all elements of the form $g(\alpha)-\alpha$, where $g$ is an element of the permutation group $S(N)$ that acts as follows on the cohomology of the spaces $I(L_{r,\star}^m,N)$ and $I({\mathbb R}^m,N)$. In the case of coefficients in ${\mathbb Q}$ considered in (\ref{b}) this action of $S(N)$ is the obvious one induced by the action of the permutations on the fibers of a principal $S(N)$-covering; in the case of $\pm {\mathbb Q}$-coefficients any element of $S(N)$ additionally multiplies the classes by $\pm 1$ depending on the parity of the corresponding permutation.
These $S(N)$-actions also commute with the cohomology maps induced by the embedding $I({\mathbb R}^m,N) \to I(L_{r,\star}^m,N)$, hence the results of this factorization for $L_{r,\star}^m$ are the same as for ${\mathbb R}^m$.
$\Box$
\begin{proposition}[see \cite{Cohen76}]
\label{chn76}
For any odd $m$, both groups $ H^*(B({\mathbb R}^{m}, N), {\mathbb Q})$ and \\ $H^*(B({\mathbb R}^{m}, N), \pm {\mathbb Q})$ are one-dimensional over ${\mathbb Q}$. Namely,
$ H^0(B({\mathbb R}^{m}, N), {\mathbb Q})\simeq {\mathbb Q}$ and $H^{(m-1)[N/2]}(B({\mathbb R}^{m}, N), \pm {\mathbb Q}) \simeq {\mathbb Q}$.
\end{proposition}
The first assertion of this proposition holds for Euler characteristic reasons: by (\ref{fks}), the cohomology group $H^*(I({\mathbb R}^{m},N), {\mathbb Q})$ of the $N!$-fold covering space is exactly $N!$-dimensional over ${\mathbb Q}$ and is concentrated in even dimensions only (since $m-1$ is even).
The groups (\ref{b}) also lie in even dimensions only, their total dimension is equal to their Euler characteristic $N!/N!=1$, and they are obviously isomorphic to ${\mathbb Q}$ in dimension $0$; hence they are trivial in all other dimensions.
Let us describe a cycle generating the homology group dual to (\ref{c}).
Let $N$ be even. Fix $N/2$ distinct points $a_s \in {\mathbb R}^m$, $s=1, \dots, N/2$, and a small number $\varepsilon>0$. Consider the submanifold $V(N) \subset B({\mathbb R}^m,N)$ consisting of all configurations of $N$ points split into $N/2$ pairs, the points of the $s$th pair being a pair of opposite points of the $\varepsilon$-sphere around $a_s$ in ${\mathbb R}^m$. If $N$ is odd, then a similar $((m-1)[N/2])$-dimensional manifold $V(N)$ is obtained from the manifold $V(N-1)$ just defined by adding one fixed point $X_N$, distant from all the points $a_s$, to each of the $(N-1)$-configurations forming $V(N-1)$.
\begin{lemma} \label{lemW} If $m$ is odd then the manifold $V(N)$ is $\pm {\mathbb Z}$-orientable for any $N$, and
the group $H_*(B({\mathbb R}^m,N),\pm {\mathbb Q})$ is generated by its fundamental $\pm {\mathbb Q}$-cycle.
\end{lemma}
{\it Proof} (cf. \cite{fuchs}). The assertion on the $\pm {\mathbb Z}$-orientability is immediate. Let us fix a direction in ${\mathbb R}^m$. If $N$ is even, denote by $\Xi$ the subset in $ B({\mathbb R}^m,N)$ consisting of all configurations whose $N$ points can be split into pairs such that the line through any pair has the chosen direction. For odd $N$, let $\Xi$ consist of all such $(N-1)$-configurations augmented arbitrarily by an $N$th point. $\Xi$ is an $(mN - (m-1)[N/2])$-dimensional semi-algebraic variety with a standard orientation at its regular points, and its fundamental cycle defines an element of the group
$\bar H_{mN - (m-1)[N/2]}(B({\mathbb R}^m,N), {\mathbb Q})$. In particular, its intersection index with the cycle $V(N) \subset B({\mathbb R}^m,N)$ is well defined. If the points $a_s$ (and $X_N$ if $N$ is odd) defining $V(N)$ are in general position with respect to the chosen direction, and $\varepsilon$ is small, then these two cycles have exactly one transversal intersection point; therefore the homology classes of both of them are non-zero.
$\Box$
Consider now the exact sequence of the pair $(B(L^m_{r},N), B(L^m_{r,\star},N))$ with coefficients in the system $\pm {\mathbb Q}$.
Its term $H_i(B(L^m_{r},N), B(L^m_{r,\star},N); \pm {\mathbb Q})$ is isomorphic to
$H_{i-m}(B(L^m_{r,\star},N-1), \pm {\mathbb Q}) $ by the Thom isomorphism of the (trivial) normal bundle of the space $B(L^m_{r},N) \setminus B(L^m_{r,\star},N) \simeq B(L^m_{r,\star},N-1) $ in $B(L^m_{r},N).$ By formula (\ref{c}) and Proposition \ref{chn76} this exact sequence contains only two non-zero (and equal to ${\mathbb Q}$) terms not of the form $H_i(B(L^m_{r},N), \pm {\mathbb Q})$.
In the case of odd $N$ these two terms are distant from one another in this sequence and imply Theorem \ref{lem29} for such $N$. If $N$ is even then these two terms form the fragment
\begin{equation}H_{i+1}(B(L^m_{r},N), B(L^m_{r,\star},N); \pm {\mathbb Q}) \to H_{i}(B(L^m_{r,\star},N), \pm {\mathbb Q})\label{ii}
\end{equation}
with $i=(m-1)N/2.$
The image of the basic cycle of the left-hand group under this map can be realized by the $(m-1)\frac{N}{2}$-dimensional manifold in $B(L^m_{r,\star},N)$ swept out by all $N$-configurations in which some $N-1$ points form an arbitrary configuration from the manifold $V(N-1)$ (in particular, one of these points is fixed), and one more point belongs to a small $(m-1)$-dimensional sphere around the point $L^m_{r}\setminus L^m_{r,\star}$. This cycle is homologous to a similar one, in whose definition the last sphere is replaced by a large sphere in ${\mathbb R}^m \subset L^m_{r,\star} $ encircling all other points of all configurations of our cycle. The intersection index of the obtained cycle with the cycle $\Xi$ from the proof of Lemma \ref{lemW} is equal to $2$ or $-2$ depending on the choice of orientations; therefore the map (\ref{ii}) is an isomorphism, and all groups $H_i(B(L^m_{r},N), \pm {\mathbb Q})$ are trivial.
$\Box$
\begin{remark} \rm Part A) of Theorem \ref{lem29} can also be deduced by applying the methods of the paper \cite{BCL} which contains a general algorithm for calculating the Betti numbers of configuration spaces of odd-dimensional manifolds.
\end{remark}
\section{Proof of Theorems \ref{main2}, \ref{main1}, and \ref{lem1}}
\subsection{Proof and realization of Theorem \ref{main2}(A)}
\begin{lemma}
\label{lempodd}
If $m$ is odd, then the group $H_*(I({\mathbb R}P^m_\star,N), \Theta \otimes {\mathbb Q})$ with any $N$ is $N!$-dimensional over ${\mathbb Q}$, and the group $H_*(B({\mathbb R}P^m_\star,N), \tilde \Theta \otimes \pm {\mathbb Q})$ is one-dimensional. All groups $H_j(I({\mathbb R}P^m_\star,N), \Theta \otimes {\mathbb Q})$ and $H_j(B({\mathbb R}P^m_\star,N), \tilde \Theta \otimes \pm {\mathbb Q})$ with odd $j$ are trivial.
\end{lemma}
\noindent
{\it Proof.} There is a principal $({\mathbb Z}_2)^N$-covering over the space $I({\mathbb R}P^m_\star,N)$: its total space is the space of all sequences of $N$ distinct points in $S^m$ such that no element of the sequence coincides with either of the two preimages of the distinguished point of ${\mathbb R}P^m$ under the standard covering map $S^m \to {\mathbb R}P^m$, or with the point of $S^m$ opposite to another element of the sequence.
The space $X_N$ of this covering has the structure of a tower of fiber bundles $X_N \to X_{N-1} \to \dots \to X_2 \to X_1$ where $X_1$ is the sphere $S^m$ with $2$ points removed, and the fiber of any map $X_{k} \to X_{k-1}$ is $S^m$ with $2k$ points removed. It follows easily from the spectral sequences of all these bundles that the Poincar\'e polynomial of the rational homology group of the total space of this tower is equal to $\prod_{j=1}^{N} (1+(2j-1)t^{m-1})$. In particular, all its homology groups lie in even dimensions only, and its Euler characteristic is equal to $2^N N!$.
The group $({\mathbb Z}_2)^N$ of this covering acts on the complex of ${\mathbb Q}$-singular chains of the space of the covering: an element $g \in ({\mathbb Z}_2)^N$ transforms any singular simplex geometrically via the usual action of this group by central symmetries on the factors $S^m$ of the space $(S^m)^N$ containing our covering space, and additionally multiplies it by $-1$ if the number of ones in $g$ is odd. The group $H_*(I({\mathbb R}P^m_\star,N), \Theta \otimes {\mathbb Q})$ can be considered as the homology group of the complex of coinvariants for this action, hence this group also has non-trivial elements in even dimensions only. In particular, the total dimension of this group is equal to its Euler characteristic, which is equal to the Euler characteristic $2^N N!$ of the space of the covering divided by the degree $2^N$ of this covering.
In a similar way, $H_*(B({\mathbb R}P^m_\star,N), \tilde \Theta \otimes \pm {\mathbb Q})$ is the homology group of the complex of coinvariants for an action of a group of order $2^N N!$ on rational singular chains, hence it also has no non-trivial odd-dimensional elements, and its total dimension is equal to its Euler characteristic and hence to 1.
$\Box$
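The Euler characteristic bookkeeping in the proof above can be checked directly: the value at $t=1$ of the Poincar\'e polynomial $\prod_{j=1}^{N} (1+(2j-1)t^{m-1})$ is $2^N N!$, and dividing by the degree $2^N$ of the covering gives $N!$. A short Python sketch of this check:

```python
from math import factorial, prod

for N in range(1, 10):
    # Euler characteristic of the total space X_N of the (Z_2)^N-covering:
    # all homology is in even degrees (m-1 is even), so evaluate at t = 1.
    chi = prod(1 + (2 * j - 1) for j in range(1, N + 1))
    assert chi == 2**N * factorial(N)
    # dividing by the degree of the covering gives the dimension N!
    # claimed for H_*(I(RP^m_star, N), Theta tensor Q)
    assert chi // 2**N == factorial(N)
```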
For even $N$, let $V(N) \subset B({\mathbb R}^m,N) \subset B({\mathbb R}P^m_\star,N)$ be the manifold from the proof of Lemma \ref{lemW} (we assume that all points $a_s$ participating in its construction are distant from the removed point ${\mathbb R}P^m \setminus {\mathbb R}P^m_\star$). For odd $N$, consider the $\left(\, \left]\frac{N}{2}\right[\times (m-1)\right)$-dimensional manifold $U(N)$ in $B({\mathbb R}P^m_\star,N)$ whose elements are unions of an $(N-1)$-configuration from $V(N-1) \subset B({\mathbb R}P^m_\star,N-1)$ and a one-point subset of the $\varepsilon$-sphere about the point ${\mathbb R}P^m \setminus {\mathbb R}P^m_\star$.
\begin{lemma}
\label{realiz}
The basic cycle of the manifold $V(N)$ $($in the case of even $N)$ or $U(N)$ $($if $N$ is odd$)$ defines a non-trivial element of the group $H_*(B({\mathbb R}P^m_\star,N), \tilde \Theta \otimes \pm {\mathbb Q})$.
\end{lemma}
\noindent
{\it Proof.} Let us describe a Poincar\'e dual element of the group $\bar H_*(B({\mathbb R}P^m_\star,N), \tilde \Theta \otimes {\mathbb Q})$.
For even $N$, consider the $\frac{N}{2}(m+1)$-dimensional subvariety in $B({\mathbb R}P_\star^m,N)$ consisting of all $N$-configurations which can be split into pairs so that the elements of each pair belong to the same fiber of the Hopf fibration ${\mathbb R}P^m \to {\mathbb C}P^{\frac{m-1}{2}}$. Its regular locus is $\tilde \Theta$-orientable, and it defines an element of the group $\bar H_{\frac{N}{2}(m+1)}(B({\mathbb R}P_\star^m,N),\tilde \Theta)$.
If $N$ is odd, then we consider the space of all $N$-configurations which are unions of an $(N-1)$-element subset as in the previous paragraph and a one-point subset of the fiber of the Hopf fibration through the point ${\mathbb R}P^m \setminus {\mathbb R}P^m_\star$.
The intersection indices of cycles $V(N)$ or $U(N)$ (depending on the parity of $N$) with these cycles are equal respectively to $\pm 1$ and $\pm 2$, in particular the classes of all these cycles in the corresponding homology groups are not equal to 0.
$\Box$
Theorem \ref{main2}(A) is thus also proved.
$\Box$
\subsection{Simplicial resolution of diagonal varieties \rm (see \cite{noneven})}
\label{sire}
Denote by $\nabla$ the set $({\mathbb R}P^m)^N \setminus I({\mathbb R}P^m,N)$. It is a stratified variety, whose strata correspond to all partitions of the set $\{1, \dots, N\}$ into $\leq N-1$ non-empty parts. Namely, with any such partition $A$ into some sets $A_1, \dots, A_k$ the closed stratum $L(A) \subset \nabla$ is associated, which consists of all collections $(x_1, \dots, x_N) \in ({\mathbb R}P^m)^N$ such that $x_\alpha =x_\beta$ whenever $\alpha$ and $\beta$ belong to the same part of this partition. Obviously, $L(A) \simeq ({\mathbb R}P^m)^k$.
Such partitions form a partially ordered set: any partition dominates its subpartitions. Denote by $\Delta(N)$ the order complex of this poset.
Let $m$ be odd. Then, by the Poincar\'e--Lefschetz duality in the $(\Theta \otimes {\mathbb Q})$-acyclic space $({\mathbb R}P^m)^N$, we have
\begin{equation} \label{lefsch}
H^i(I({\mathbb R}P^m,N), \Theta \otimes {\mathbb Q}) \simeq H_{mN-i-1}(\nabla, \Theta \otimes {\mathbb Q})
\end{equation}
for any $i$,
so we will consider the right-hand groups in (\ref{lefsch}). We use the standard simplicial resolution ${\mathfrak S}$ of $\nabla$, which is a subset in $\Delta(N) \times \nabla$ defined as follows. For any vertex of the complex $\Delta(N)$ (i.e., some non-complete partition $A$) denote by $\Delta_A \subset \Delta(N)$ its subordinate subcomplex, i.e. the order complex of subpartitions of $A$ (including $A$ itself). Let $\partial \Delta_A$ be the link of $\Delta_A$, that is, the union of its simplices not containing the maximal vertex $\{A\}$. The space ${\mathfrak S} \subset \Delta(N) \times \nabla$ is the union of the sets $\Delta_A \times L(A)$ over all non-complete partitions $A$ of $\{1, \dots, N\}$. Let $\Theta^!$ be the local system on ${\mathfrak S}$ lifted from $\Theta$ along the canonical projection ${\mathfrak S} \to \nabla$. We have the standard isomorphism
\begin{equation}
\label{simresi}
H_* ({\mathfrak S}, \Theta^!) \simeq H_*(\nabla, \Theta).
\end{equation}
The space ${\mathfrak S}$ has a natural filtration ${\mathfrak S}_0 \subset \dots \subset {\mathfrak S}_{N-2} = {\mathfrak S}$: its term ${\mathfrak S}_p$ is the union of spaces $\Delta_A \times L(A)$ over all partitions of $\{1, \dots,N\}$ into $\geq N-1-p$ sets. Consider the spectral sequence calculating the group $H_*({\mathfrak S}, \Theta^! \otimes {\mathbb Q})$ and induced by this filtration.
Any set ${\mathfrak S}_p \setminus {\mathfrak S}_{p-1}$ splits into the blocks corresponding to all partitions $A$ into exactly $N-1-p$ parts and equal to
\begin{equation}
\label{block}
(\Delta_A \setminus \partial \Delta_A) \times L(A).
\end{equation}
Correspondingly, the term $E^1_{p,q}$ of this spectral sequence splits into the sum of groups $
\bar H_{p+q}((\Delta_A \setminus \partial \Delta_A) \times L(A), \Theta^!)$
over such partitions $A$.
\begin{lemma}
The group $\bar H_*((\Delta_A \setminus \partial \Delta_A) \times L(A),\Theta^! \otimes {\mathbb Q})$
is trivial if some part of $A$ consists of an odd number of elements. If all parts of $A$ have even cardinality, then this group is equal to $\bar H_*(\Delta_A \setminus \partial \Delta_A, {\mathbb Q}) \otimes H_*(({\mathbb R}P^m)^{k(A)}, {\mathbb Q})$, where $k(A)$ is the number of parts of $A$.
\end{lemma}
This lemma follows immediately from the K\"unneth formula, see \cite{noneven}.
$\Box$
Theorem \ref{main1} follows immediately from this lemma and the isomorphisms (\ref{lefsch}) and (\ref{simresi}).
$\Box$
\subsection{Proof of Theorem \ref{lem1}.}
The projection of $({\mathbb R}P^m)^N$ onto its first factor defines a fiber bundle $\nabla \to {\mathbb R}P^m$. Let $\tilde \nabla$ be its fiber over a fixed point $x_1$. The space $I({\mathbb R}P_\star^m,N-1)$ is equal to its complement $({\mathbb R}P^m)^{N-1} \setminus \tilde \nabla$ in the product of remaining $N-1$ factors.
The restriction of our simplicial resolution to the fiber $\tilde \nabla$ defines a simplicial resolution $\tilde {\mathfrak S}$ of $\tilde \nabla$. Again, we have the Lefschetz isomorphism
\begin{equation}
\label{lef2}
H^i(I({\mathbb R}P_\star^m,N-1), \Theta \otimes {\mathbb Q}) \simeq H_{m(N-1)-i-1}(\tilde \nabla, \Theta \otimes {\mathbb Q}) \simeq H_{m(N-1)-i-1}(\tilde {\mathfrak S}, \Theta^! \otimes {\mathbb Q})
\end{equation}
for any $i$; let us calculate the right-hand groups in this equality using the restriction to $\tilde {\mathfrak S}$ of our filtration on ${\mathfrak S}$.
The term $\tilde E^1$ of the corresponding spectral sequence $\tilde E^r_{p,q}$ calculating $H_*(\tilde {\mathfrak S}, \Theta^! \otimes {\mathbb Q})$ was considered in \cite{noneven}; in particular, it was shown there that this term has non-trivial groups $\tilde E^1_{p,q}$ only on $N/2$ horizontal segments corresponding to $q=0, m, 2m, \dots, (N/2-1)m$. Namely, for any $s=0,1, \dots, N/2-1$, only the groups $\tilde E^1_{p,sm}$ with integer $p \in [N/2-1, N-2-s]$ are non-trivial. The differential $d_1$ of the spectral sequence supplies these horizontal segments with the structure of chain complexes.
\begin{lemma}
All these horizontal complexes are acyclic in all their terms except for the top-dimensional ones $($that is, the groups $\tilde E^2_{p,q}$ can be non-trivial only for $(p,q) = (N-2,0), (N-3,m), (N-4, 2m), \dots, (N/2-1,(N/2-1)m)\ )$.
\end{lemma}
\noindent
{\it Proof.} For $m> N/2-1$ this is Lemma 1 in \cite{noneven}; but by the construction these $N/2$ complexes do not depend on $m$ (provided $m$ is odd).
$\Box$
\begin{corollary}
Our spectral sequence degenerates at the term $\tilde E^2 \equiv \tilde E^\infty.$ The dimensions of its non-trivial groups $\tilde E^2_{p,sm}$ are equal to $\pm$ the Euler characteristics of the corresponding horizontal complexes.
$\Box$
\end{corollary}
This corollary reduces Theorem \ref{lem1} to the following lemma.
\begin{lemma} \label{gen}
The Euler characteristic of the horizontal complex corresponding to $q =s m$, $s \in \{0, \dots, N/2-1\},$ is equal to $(-1)^s (N-1)!!$ times the coefficient of $\tau^{N/2-1-s}$ in the polynomial $(1+\tau)(1+3\tau) \cdots (1+(N-3)\tau)$.
\end{lemma}
\noindent
{\it Proof.} By the description of the term $\tilde E^1$ given in \cite{noneven}, the dimension of the group $\tilde E^1_{p, sm}$, $p \in [N/2-1, N-2-s]$, can be calculated as follows. Consider all unordered partitions of the set $\{1, \dots, N\}$ into $N-p-1$ parts of even cardinalities, then count any such partition into parts of cardinalities $N_1, \dots, N_{N-p-1}$ with the weight
\begin{equation} \label{weight}
(N_1-1)!(N_2-1)! \cdots (N_{N-p-1} -1)! \end{equation}
and multiply the obtained sum by $\binom{N-p-2}{s}.$
Given such a partition, let us order its parts lexicographically: the first part contains the element $1 \in \{1, \dots, N\}$, the second part contains the smallest element not contained in the first part, etc. Then the coefficient (\ref{weight}) is equal to the number of permutations $(a_1, \dots, a_N)$ of $\{1, \dots, N\}$ starting with 1, then running through all other elements of the first part in arbitrary order, then taking the smallest element of the second part, then running through the remaining elements of this part in arbitrary order, etc. The common factor $\binom{N-p-2}{s}$ can be understood as the number of possible selections of $s$ parts of our partition other than the first one (or, equivalently, of the smallest elements of these parts).
So, the dimension of the group $\tilde E^1_{p,sm}$ is equal to the number of permutations $(a_1, \dots, a_N)$ of $\{1, \dots, N\}$ endowed with some additional decorations and satisfying some conditions. The decorations consist of the choice of two subsets $R \supset S$
of cardinalities $N-p-2$ and $s$ respectively in the set of odd numbers $\{3, 5, \dots, N-1\}$; the conditions require that $a_1=1$ and that each number $a_i$ with $i \in R$ be smaller than all subsequent elements $a_j$, $j >i$.
Consider now an arbitrary permutation of $\{1, \dots, N\}$ with the first element $a_1=1$ and $s$ other selected elements $a_i$ on some odd places
\begin{equation} \label{places}
i= N_1+1, N_1+N_2+1, \dots, N_1+\dots + N_{s}+1,
\end{equation}
and count its occurrences from the above construction for all $p \in [N/2-1, N-2-s]$.
Any of these $s$ selected elements $a_i$ should be smaller than all the forthcoming ones, otherwise our permutation does not occur at all. Further, let $r$ be the number of non-selected and not equal to 1 elements $a_j$ of this permutation with odd $j$, which are smaller than all the forthcoming elements. If $r=0$ then our permutation occurs only once from the consideration of the group $\tilde E^1_{N-2-s,sm}$. If however $r>0$ then it occurs once in $\tilde E^1_{N-2-s, sm}$, $r$ times in $\tilde E^1_{N-3-s,sm}$, $\binom{r}{2}$ times in $\tilde E^1_{N-4-s,sm},$ etc. These $2^r$ occurrences are counted with the alternating signs in the calculation of the Euler characteristic of our horizontal complex, and annihilate one another for $r>0$. Therefore only the permutations with $r=0$ contribute to this Euler characteristic. The number of such permutations (that is, permutations starting with 1, having on the selected $s$ odd places (\ref{places}) $s$ elements which are smaller than all the forthcoming ones, and {\em not} having elements with this property on other odd places) is obviously equal to
\begin{equation} \label{combi}
(N-1)!!\frac{(N-3)!!}{(N-N_1-1)(N-(N_1+N_2)-1) \cdots (N-(N_1+\dots + N_s)-1)}.
\end{equation}
But the fraction in (\ref{combi}) is a summand obtained by opening the brackets in the product $(1+\tau)(1+3\tau) \cdots (1+(N-3)\tau)$: namely, the summand containing exactly $s$ factors equal to $1$ (coming from the brackets with numbers $\frac{N-N_1}{2}, \frac{N-(N_1+ N_2)}{2},$ \dots, $\frac{N-(N_1+ \dots +N_s)}{2}$).
$\Box$
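As a consistency check of the last lemma (a numerical sketch; the helper names are ours): summing the predicted dimensions $(N-1)!!$ times the coefficient of $\tau^{N/2-1-s}$ over all $s$ amounts to evaluating the product $(1+\tau)(1+3\tau)\cdots(1+(N-3)\tau)$ at $\tau=1$, which gives $(N-1)!!\,(N-2)!!=(N-1)!$, in agreement with the total dimension provided by Lemma \ref{lempodd}:

```python
from math import factorial

def double_factorial(n):
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def poly_coeffs(N):
    """Coefficients of (1+tau)(1+3*tau)...(1+(N-3)*tau); index = power of tau."""
    coeffs = [1]
    for odd in range(1, N - 2, 2):
        new = coeffs + [0]
        for i, c in enumerate(coeffs):
            new[i + 1] += odd * c
        coeffs = new
    return coeffs

for N in range(4, 13, 2):  # N even
    dims = [double_factorial(N - 1) * c for c in poly_coeffs(N)]
    # the total dimension reproduces the (N-1)!-dimensional answer
    assert sum(dims) == factorial(N - 1)
```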
\smallskip
\noindent
{\bf Problem.} Realize a basis in $H_*(I({\mathbb R}P_\star^m,N-1), \Theta \otimes {\mathbb Q})$ by explicitly defined cycles.
\subsection{Proof of Theorem \ref{main2}(B)}
The permutation group $S(N)$ acts in an obvious way on the space $I({\mathbb R}P^m,N)$ and on the local system $\Theta$ on it. Consider the resulting action of $S(N)$
on the complex of singular $\Theta$-chains of $I({\mathbb R}P^m,N)$, moving any simplex geometrically by the previous action and additionally multiplying it by $\pm 1$ depending on the parities of the permutations. The group $H_*(B({\mathbb R}P^m,N), \tilde \Theta \otimes \pm {\mathbb Q})$ is the homology group of the complex of coinvariants for this action. Again by Proposition 2 from \cite{Serre}, it is also the quotient group of the group
$H_*(I({\mathbb R}P^m,N),\Theta \otimes {\mathbb Q})$ by the subgroup generated by all elements of the form $g(\alpha)-\alpha$ where
$\alpha \in H_*(I({\mathbb R}P^m,N),\Theta \otimes {\mathbb Q}),$ $g \in S(N)$, and $\alpha \mapsto g(\alpha)$ is the induced action in homology. Therefore the statement of Theorem \ref{main2}(B) concerning the case of odd $N$ follows from Theorem \ref{main1}.
The statement concerning even $N$ follows immediately from the exact homological sequence of the pair $(B({\mathbb R}P^m,N), B({\mathbb R}P^m_\star,N))$ with coefficients in the local system $ \tilde \Theta \otimes \pm {\mathbb Q}$, Theorem \ref{main2}A, and the
Thom isomorphism $H_i(B({\mathbb R}P^m,N), B({\mathbb R}P^m_\star,N); \tilde \Theta \otimes \pm {\mathbb Q}) \simeq H_{i-m}(B({\mathbb R}P^m_\star, N-1), \tilde \Theta \otimes \pm {\mathbb Q})$ for the normal bundle of the submanifold $B({\mathbb R}P^m,N) \setminus B({\mathbb R}P^m_\star,N) \simeq B({\mathbb R}P^m_\star, N-1)$.
$\Box$
\begin{lemma}
\label{lll}
If $N$ is even and $m$ is odd, then the manifold $V(N) \subset B({\mathbb R}^m,N) \subset B({\mathbb R}P^m,N)$ participating in Lemmas \ref{lemW} and \ref{realiz} defines a non-trivial element of the group $H_{\frac{N}{2}(m-1)}(B({\mathbb R}P^m,N), \tilde \Theta \otimes \pm {\mathbb Q})$.
\end{lemma}
\noindent
{\it Proof} is very similar to that of Lemma \ref{realiz}: our cycle has non-zero intersection index with the oriented subvariety in $B({\mathbb R}P^m,N)$ that consists of configurations which can be split into the pairs of points lying in the same fibers of the Hopf fibration.
$\Box$
\section{On configuration spaces of ${\mathbb R}P^m$ and ${\mathbb R}P^m_\star$ with even $m$}
In addition to Theorems \ref{main2} and \ref{lem29}, we will use the following facts on the homology of unordered configuration spaces.
\begin{proposition}
\label{prepe}
If $m$ is even, then the groups $\bar H_i(I({\mathbb R}P^m,N),{\mathbb Q})$ and $\bar H_i(I({\mathbb R}P_\star^m,N-1),{\mathbb Q})$ are trivial for all $N \geq 2$ and $i\geq 0$.
\end{proposition}
\noindent
{\it Proof.} As in \S \ref{sire}, consider the set $\nabla \equiv ({\mathbb R}P^m)^N \setminus I({\mathbb R}P^m,N)$ and its simplicial resolution ${\mathfrak S} \subset \Delta(N) \times \nabla$, so that $\bar H_i(I({\mathbb R}P^m,N),{\mathbb Q}) \simeq H_i(({\mathbb R}P^m)^N, \nabla; {\mathbb Q})$ and $H_*(\nabla,{\mathbb Q}) \simeq H_*({\mathfrak S},{\mathbb Q})$. Let us calculate the latter group by the spectral sequence built of the Borel--Moore homology groups of blocks (\ref{block}) with constant ${\mathbb Q}$-coefficients.
All spaces $L(A)$ have rational homology of a point, therefore this spectral sequence coincides with the one calculating the rational homology of the order complex $\Delta(N)$ of all non-complete partitions of $\{1, \dots, N\}$. This order complex has a maximal element (corresponding to the
``partition'' into a single set), hence is contractible.
Therefore the space $\nabla$ also has the rational homology of a point, and our statement concerning the group $\bar H_*(I({\mathbb R}P^m,N),{\mathbb Q})$ follows from the exact sequence of the pair $\left(({\mathbb R}P^m)^N, \nabla \right)$.
Similarly, for $N>1$ we have
\begin{equation}
\label{exsec}
\bar H_i(I({\mathbb R}P_\star^m,N-1),{\mathbb Q}) \simeq \tilde H_{i-1}(\tilde \nabla, {\mathbb Q}) \simeq \tilde H_{i-1}(\tilde {\mathfrak S}, {\mathbb Q} ) ,
\end{equation}
where $\tilde \nabla = \nabla \cap ({\mathbb R}P^m)^{N-1}$, see the previous section. The set $ \tilde {\mathfrak S} $ also has the rational homology of a point by a similar spectral sequence, so the groups (\ref{exsec}) are trivial.
$\Box$
\begin{corollary}
\label{refe}
If $m$ is even then the groups $\bar H_i(B({\mathbb R}P^{m},N), {\mathbb Q}),$ $\bar H_i(B({\mathbb R}P^{m},N), \pm {\mathbb Q}),$
$\bar H_i(B({\mathbb R}P_\star^{m},N-1), {\mathbb Q}),$ and $\bar H_i(B({\mathbb R}P_\star^{m},N-1), \pm {\mathbb Q})$
are trivial for any $N \geq 2$ and $i \geq 0$.
\end{corollary}
\noindent
{\it Proof}. These groups are the quotients of the groups considered in Proposition \ref{prepe} by some actions of permutation groups of $N$ or $N-1$ points.
$\Box$
\begin{lemma}
\label{lem9}
If $m$ is even then the group $\bar H_i(B({\mathbb R}P^{m},N), \tilde \Theta \otimes {\mathbb Q})$, $N\geq 2$, is equal to ${\mathbb Q}$ if $i=N m$ or $i=N m-(2m-1)$, and is trivial otherwise.
\end{lemma}
\noindent
{\it Proof}. By Remark \ref{protriv}, the orientation sheaf of $B({\mathbb R}P^{m},N)$ is in this case equal to $\tilde \Theta$. Therefore our statement is Poincar\'e dual to the next one.
\begin{lemma}[see \cite{knu}]
\label{propl}
For any $N \geq 2$, the group $H_i\left(B({\mathbb R}P^{m},N),{\mathbb Q}\right)$, $m$ even, is equal to ${\mathbb Q}$ if $i=0$ or $i=2m-1$, and is trivial for all other $i$.
$\Box$
\end{lemma}
Let us describe the bases for the ${\mathbb Q}$-vector spaces considered in the last two lemmas. We consider ${\mathbb R}P^m$ as a quotient space of the $m$-sphere with the induced metric.
\begin{lemma}
\label{reali}
The vector space $H_{2m-1}\left(B({\mathbb R}P^m,N),{\mathbb Q}\right)$ $($where $N\geq 2$ and $m$ is even$)$ is generated by the class of the submanifold in $B({\mathbb R}P^m,N)$ consisting of configurations all of whose points lie in one and the same projective line in ${\mathbb R}P^m$ and separate this line into $N$ equal segments. The group $\bar H_{Nm-(2m-1)}(B({\mathbb R}P^m,N), \tilde \Theta \otimes {\mathbb Q})$ is generated by the class of the variety consisting of all configurations containing the fixed point $X_0 \in {\mathbb R}P^m$ and some other point lying in a fixed line passing through $X_0$.
\end{lemma}
\noindent
{\it Proof.} It is easy to see that these cycles are orientable in the first case and $\tilde \Theta$-orientable in the second one.
They have only one common point $C$: the first cycle is smooth at it, and the second one coincides in its neighbourhood with the union of $N-1$ smooth manifolds. Let us calculate the intersection index of these cycles at this point.
Choose the chart ${\mathbb R}^m$ with coordinates $x_1, \dots, x_m$ in ${\mathbb R}P^m$ in such a way that the distinguished point $X_0$ is the origin in ${\mathbb R}^m$, the line containing the configuration $C$ is the $x_1$-axis, and the remaining $N-1$ points of $C$ lie in this axis in the domain where $x_1>0$. Any line close to this axis can be defined by the system of equations $x_k=p_k+q_kx_1$, $k=2, \dots, m$. As local parameters of the first cycle in a neighborhood of the point $C$ we can take the coefficients $p_k$ and $q_k$ of these equations of the line containing a configuration, together with the number $\varkappa$ defined as the coordinate $x_1$ of the point of this configuration closest to the origin.
A regular point of the second cycle is an $N$-configuration containing the origin in ${\mathbb R}^m$ and one other point of the $x_1$-axis. The set of such regular configurations for which the coordinate $x_1$ of this second point is positive is path-connected. A coorientation of this cycle at such a point in $B({\mathbb R}^m,N)$ is defined by the differential form \begin{equation}\label{coor}dx_1(u) \wedge dx_2(u) \wedge \dots \wedge dx_m(u) \wedge dx_2(v) \wedge \dots \wedge dx_m(v)\end{equation} where $u$ is the point of a neighboring configuration which is close to the origin, and $v$ is the other point of this configuration which is close to the $x_1$-axis. Moreover, this coorientation can be continued to the self-intersection points of the second cycle, corresponding to configurations with many points on the $x_1$-axis, as a collection of coorientations of all smooth local components of this cycle.
Let $0 < a_2 \leq \dots \leq a_N$ be the $x_1$-coordinates of the points of the configuration $C$. Consider the square matrix of order $2m-1$ whose cell $(i,j)$ contains the value which the $i$-th factor of the coorientation form (\ref{coor}) of the local component of the second cycle related to the point $a_k$ takes on the $j$-th basic tangent vector of the first cycle at $C$, defined by increasing the $j$-th local parameter and keeping all other parameters fixed. This matrix is equal to
$$
\left(
\begin{array}{ccc}
0 & 0 & 1 \\
{\bf E} & {\bf 0} & 0 \\
{\bf E} & a_k{\bf E} & 0
\end{array}
\right)
$$
where ${\bf E}$ is the unit $(m-1)\times (m-1)$ matrix, ${\bf 0}$ is the zero matrix, and the zeros in the top row and right-hand column denote rows or columns of $m-1$ zeros. In particular, its determinant is equal to $a_k^{m-1}$. All the numbers $a_k$ are positive, therefore all these determinants are positive too, and the sum of their contributions to the intersection index of our two cycles is equal to $N-1 \neq 0$. In particular, the homology classes of these cycles are non-trivial.
$\Box$
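The determinant computation in the proof above can be verified numerically for small even $m$; in the following Python sketch, \texttt{block\_matrix} is our own encoding of the block matrix from the proof, and \texttt{det} is a naive Laplace expansion in exact integer arithmetic:

```python
def det(M):
    """Naive Laplace expansion along the first row (exact integers)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        if M[0][j] == 0:
            continue
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def block_matrix(m, a):
    """The (2m-1)x(2m-1) block matrix ((0,0,1),(E,0,0),(E,aE,0))."""
    k = m - 1
    n = 2 * m - 1
    M = [[0] * n for _ in range(n)]
    M[0][n - 1] = 1                 # top row (0 ... 0 1)
    for i in range(k):
        M[1 + i][i] = 1             # E in the second block row
        M[1 + k + i][i] = 1         # E in the third block row
        M[1 + k + i][k + i] = a     # a_k * E in the third block row
    return M

# the determinant equals a_k^{m-1}, positive whenever a_k > 0
for m in (2, 4):
    for a in (1, 2, 5):
        assert det(block_matrix(m, a)) == a ** (m - 1)
```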
Consider now the groups $H^*(B({\mathbb R}P^m_\star,N), {\mathbb Q})$ with even $m$. The space ${\mathbb R}P^m_\star$ is obviously fibered over the manifold ${\mathbb R}P^{m-1}$ with fiber ${\mathbb R}^1$. Let us fix an orientation of ${\mathbb R}P^{m-1}$ (which is orientable since $m$ is even) and define for any $N$ the class $\mbox{Ind} \in H^{m-1}(B({\mathbb R}P^m_\star,N))$ whose value on a generic piecewise-smooth $(m-1)$-dimensional cycle in $B({\mathbb R}P^m_\star,N)$ is equal to the number, counted with signs, of $N$-configurations in this cycle whose images in ${\mathbb R}P^{m-1}$ contain the distinguished point of ${\mathbb R}P^{m-1}$. It is easy to check that this cohomology class is well defined.
\begin{lemma}
\label{porp}
The group $H^j(B({\mathbb R}P^m_\star,N),{\mathbb Q}),$ $m$ even, is equal to ${\mathbb Q}$ for $j=0$ and $j=m-1$ and is trivial for all other $j$. The group $H^{m-1}(B({\mathbb R}P^m_\star,N),{\mathbb Q})$ is generated by the cocycle $\mbox{\rm Ind}$.
\end{lemma}
\noindent
{\it Proof} is by induction on $N$. For $N=1$ the statement is obvious, since ${\mathbb R}P_\star^m$ is homotopy equivalent to ${\mathbb R}P^{m-1}$. The set $B({\mathbb R}P^m,N) \setminus B({\mathbb R}P^m_\star,N)$ can obviously be identified with the manifold $B({\mathbb R}P^m_\star,N-1)$. All fibers of the normal bundle of this submanifold in $B({\mathbb R}P^m,N)$ are canonically isomorphic to the tangent space of ${\mathbb R}P^m$ at the distinguished point; in particular, this normal bundle is trivial. Therefore we have an isomorphism
\begin{equation} \label{ghys}
H_j(B({\mathbb R}P^m,N), B({\mathbb R}P^m_\star,N);{\mathbb Q}) \simeq H_{j-m}(B({\mathbb R}P^m_\star,N-1),{\mathbb Q}). \end{equation}
Suppose now that our lemma is proved for the space $B({\mathbb R}P_\star^m,N-1)$. By the exact sequence of the pair $(B({\mathbb R}P^m,N), B({\mathbb R}P^m_\star,N)),$ the groups $H_j(B({\mathbb R}P^m_\star,N),{\mathbb Q})$ can be non-trivial only for $j$ close to the dimensions of non-trivial elements of the groups $H_*(B({\mathbb R}P^m,N),{\mathbb Q})$ or $H_*(B({\mathbb R}P^m,N),B({\mathbb R}P^m_\star,N);{\mathbb Q})$.
By Lemma \ref{propl} and the induction hypothesis, there are only two fragments of this exact sequence, containing such non-trivial elements. One of them is
$$
0 \to H_{2m-1}(B({\mathbb R}P^m_\star,N),{\mathbb Q}) \to H_{2m-1}(B({\mathbb R}P^m,N),{\mathbb Q}) \stackrel{\rho}{\longrightarrow} $$
$$ \stackrel{\rho}{\longrightarrow} H_{2m-1}(B({\mathbb R}P^m,N), B({\mathbb R}P^m_\star,N);{\mathbb Q}) \to H_{2m-2}(B({\mathbb R}P_\star^m,N),{\mathbb Q}) \to 0 \ . $$
The two groups in the middle of this fragment, which are connected by the map $\rho$, are isomorphic to ${\mathbb Q}$ by Lemma \ref{propl}, the isomorphism (\ref{ghys}), and the induction hypothesis. This map $\rho$ sends the class of the basic cycle described in the first statement of Lemma \ref{reali} to a class which corresponds via (\ref{ghys}) to the element of the group $H_{m-1}(B({\mathbb R}P_\star^m,N-1),{\mathbb Q})$ on which the cocycle $\mbox{Ind}$ takes the value $\pm (N-1) \neq 0$. So the map $\rho$ is an isomorphism, and all neighboring terms of our exact sequence vanish.
Another suspicious fragment of our sequence is $$0 \to H_m(B({\mathbb R}P^m,N), B({\mathbb R}P^m_\star,N);{\mathbb Q}) \to H_{m-1}(B({\mathbb R}P^m_\star,N), {\mathbb Q}) \to 0 \ . $$ Its group $H_m(B({\mathbb R}P^m,N), B({\mathbb R}P^m_\star,N);{\mathbb Q})$ is isomorphic by (\ref{ghys}) to the group $H_0(B({\mathbb R}P_\star^m,N-1),{\mathbb Q}) \simeq {\mathbb Q}$. Hence the group $ H_{m-1}(B({\mathbb R}P^m_\star,N), {\mathbb Q})$ is also isomorphic to ${\mathbb Q}$.
$\Box$
\section{Proof of Theorems \ref{cor1} and \ref{m4}}
We now know all the groups in the formulas (\ref{ssss}), (\ref{thomm2}) and (\ref{sssspt}--\ref{lenspmpt}) tensored with ${\mathbb Q}$ for all combinations of parities of $m$ and $M$. Let us apply this information.
\subsection{Cases of even $m$ and even $i(M+1)$}
If $M$ is odd then the coefficient sheaves in both (\ref{sssspt}) and (\ref{thomm2pt}) are equal to $\pm {\mathbb Z}$. If $M$ is even then
the coefficient sheaf in (\ref{sssspt}) is equal to ${\mathbb Z}$.
By Corollary \ref{refe} (see page \pageref{refe}), in all these cases $\hat E_1^{p,q} \otimes {\mathbb Q} \equiv 0$ for all $p < 0$.
This implies that all spaces $\mbox{EM}^*_0(S^m,S^M)$ with even $m$, and also all spaces $\mbox{EM}^*_1(S^m,S^M)$ with even $m$ and odd $M$, have the rational homology of a point. By the spectral sequence of the fiber bundle (\ref{mfb}), the Poincar\'e polynomials of the corresponding spaces $\mbox{EM}_0(S^m,S^M)$ and $\mbox{EM}_1(S^m,S^M)$ are then equal to $1+ t^M$.
\subsection{The case of even $m$, even $M$, and $i=1$}
\label{ee1}
The coefficient sheaf in (\ref{thomm2}) is in this case equal to $\tilde \Theta$.
By Lemma \ref{lem9}, there are then only the following non-trivial groups $E^{p,q}_1 \otimes {\mathbb Q}$ with $p<0$ (all of which are isomorphic to ${\mathbb Q}$): they correspond to $(p,q) = (-1,M-m+1)$, or $p\leq -2$ and either $q=-p(M-m+1)$ or $q=-p(M-m+1)+(2m-1)$.
Let us prove that $E_\infty \equiv E_1$ over ${\mathbb Q}$.
The group $E^{-1,M-m+1} \simeq {\mathbb Q}$ obviously survives until $E_\infty$ and provides a generator $[\Sigma]$ of the group $H^{M-m}(\mbox{EM}_1(S^m,({\mathbb R}^{M+1} \setminus \{0\})),{\mathbb Q} )\simeq {\mathbb Q}$, namely the linking number class of the entire {\em discriminant variety} in the space of all odd maps $S^m \to {\mathbb R}^{M+1}$. (This variety consists of maps whose images contain the point $0 \in {\mathbb R}^{M+1}$; its codimension is equal to $M-m+1$). It is enough to prove that all powers $[\Sigma]^{\smile N}$ of this basic class $[\Sigma]$ are non-trivial elements of the groups $H^{N(M-m)}(\mbox{EM}_1(S^m,({\mathbb R}^{M+1} \setminus \{0\})) ,{\mathbb Q} )$. Consider a generic point of the $N$-fold self-intersection of the discriminant variety, that is, some odd $C^\infty$-map $F:S^{m} \to {\mathbb R}^{M+1}$ such that the set $F^{-1} (0)$ consists of exactly $N$ pairs of opposite points, and $dF$ is injective at all these points. Let ${\mathcal L} \ni F$ be a generic affine $(N (M+1))$-dimensional subspace in the space of $C^\infty$-smooth odd maps $S^{m}\to {\mathbb R}^{M+1}$. Close to $F$, the intersection of ${\mathcal L}$ with the discriminant variety consists of $N$ locally irreducible smooth components meeting one another generically. The complement of this intersection in a small neighborhood of the point $F$ in ${\mathcal L}$ is homotopy equivalent to the product of $N$ spheres of dimension $M-m$. Any factor of this product has linking number 1 with only one local component of the discriminant variety $\Sigma \cap {\mathcal L}.$ So the restriction of the globally defined class $[\Sigma] \in H^{M-m}(\mbox{EM}_1(S^{m},({\mathbb R}^{M+1} \setminus 0)) ,{\mathbb Q} )$ to any of these factors takes value 1 on the (suitably oriented) fundamental class of this factor. Since the number $M-m$ is even, the $N$-th power of the class $[\Sigma]$ takes the value $N!$ on the fundamental $(N(M-m))$-dimensional cycle of this product; in particular, this power is a non-trivial cohomology class. Therefore the additional cells $E^{p,q}$ with $p<-1$, $q=-p(M-m+1)+(2m-1)$, also survive to $E_\infty$.
The desired Poincar\'e series is thus equal to
\begin{equation}
1+ t^{M-m} + \sum_{N \geq 2} t^{N(M-m)} (1+t^{2m-1}) \equiv \frac{1+ t^{2M-1}}{1-t^{M-m}} \ .
\label{3evev}
\end{equation}
In a similar way, Lemma \ref{porp} and formula (\ref{thomm2pt}) imply that all non-trivial groups $\hat E^{p,q}_1$ of our spectral sequence calculating the group $H^*(\mbox{EM}^*_1(S^m,S^M),{\mathbb Q})$ are equal to ${\mathbb Q}$ and correspond to the pairs $(p,q)$ equal to $(0,0)$, $(-N,N(M-m+1))$ and $(-N,N(M-m+1)+m-1)$ for arbitrary $N \geq 1$. The stabilization $\hat E_\infty \equiv \hat E_1$ of this spectral sequence can be proved exactly as in the previous paragraph. The Poincar\'e polynomial of the resulting homology group is thus equal to
\begin{equation}
1+ \sum_{N \geq 1} t^{N(M-m)} (1+t^{m-1}) \equiv \frac{1+ t^{M-1}}{1-t^{M-m}} \ . \label{2evev}
\end{equation}
The formulas (\ref{3evev}) and (\ref{2evev}) form the third and the second cells of the corresponding row of Table \ref{oddd}. The structure of the differential $d^M$ indicated in the last cell of this row is the unique one compatible with these formulas.
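As a sanity check, the identities of formal power series (\ref{3evev}) and (\ref{2evev}) can be verified coefficientwise. The following sketch (not part of the argument) compares truncated Taylor coefficients for the hypothetical sample values $m=2$, $M=6$; any even pair with $M>m$ can be substituted.

```python
# Coefficientwise check of the identities (3evev) and (2evev), truncated at t^K.
# The sample values m = 2, M = 6 are hypothetical; this check is not a proof.
K = 60

def S(exps):
    """Truncated polynomial: sum of t^e over the listed exponents."""
    c = [0] * K
    for e in exps:
        if e < K:
            c[e] += 1
    return c

def mul(a, b):
    """Product of two truncated power series."""
    c = [0] * K
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if x and y and i + j < K:
                c[i + j] += x * y
    return c

m, M = 2, 6
geom = S(range(0, K, M - m))  # truncation of 1/(1 - t^{M-m})

# (3evev): 1 + t^{M-m} + sum_{N>=2} t^{N(M-m)} (1 + t^{2m-1})
lhs3 = S([0, M - m] + [N * (M - m) + d for N in range(2, K) for d in (0, 2 * m - 1)])
assert lhs3 == mul(S([0, 2 * M - 1]), geom)   # (1 + t^{2M-1}) / (1 - t^{M-m})

# (2evev): 1 + sum_{N>=1} t^{N(M-m)} (1 + t^{m-1})
lhs2 = S([0] + [N * (M - m) + d for N in range(1, K) for d in (0, m - 1)])
assert lhs2 == mul(S([0, M - 1]), geom)       # (1 + t^{M-1}) / (1 - t^{M-m})
```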
\subsection{The case of odd $m$, odd $M$ and arbitrary $i$}
By Lemma \ref{orient} the orientation sheaf of $B(L_r^{m},N)$ is equal to $\pm {\mathbb Z}$. This is exactly the coefficient sheaf in (\ref{lenspm}). Hence the group (\ref{lenspm}) is Poincar\'e isomorphic to the group
$ H^{q+p(M-m+1)}(B(L_r^{m},-p), {\mathbb Z}).$
By Theorem \ref{lem29}(a) all non-trivial groups $E_1^{p,q} \otimes {\mathbb Q},$ $p<0$, are in this case the ones with arbitrary $p \leq -1$ and either $q= -p(M-m+1)$ or $q=-p(M-m+1) +m$; all of them are equal to ${\mathbb Q}$. All these groups survive until $E_\infty$ for the same reasons as in the previous case.
So, the Poincar\'e series of the group $H_*(\mbox{EM}_i(S^m,S^M),{\mathbb Q})$ with arbitrary $i$ is equal to
$$1+ \sum_{N \geq 1} t^{N(M-m)} (1+t^{m}) \equiv \frac{1+ t^{M}}{1-t^{M-m}} \ , $$
see the top cells of the third columns in Tables \ref{evv} and \ref{oddd}, and the first statement of Theorem \ref{m4}.
Also, the group (\ref{lenspmpt}) is Poincar\'e isomorphic to $H^{q+p(M-m+1)}(B(L^m_{r,\star},-p),{\mathbb Z})$.
By statement (\ref{b}) of Lemma \ref{lem11} and Proposition \ref{chn76}, the latter group tensored with ${\mathbb Q}$ is equal to ${\mathbb Q}$ if $q+p(M-m+1)=0$ and is trivial in all other cases. Therefore we have $\hat E^{p,q}_1 \otimes {\mathbb Q} \simeq {\mathbb Q}$ for $(p,q) = (-N, N(M-m+1))$, $N \geq 0$; all other groups $\hat E^{p,q}_1 \otimes {\mathbb Q}$ with $p<0$ are trivial. This gives us the Poincar\'e series $$\sum_{N=0}^\infty t^{N(M-m)} \equiv \frac{1}{1-t^{M-m}} \ ,$$ see the top cells of the second columns in Tables \ref{evv} and \ref{oddd}, and also the second statement of Theorem \ref{m4}.
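As a coefficientwise sanity check of the two series of this case (the second being the plain geometric series $1/(1-t^{M-m})$), the following sketch uses the hypothetical sample odd values $m=3$, $M=7$; it is not part of the proof.

```python
# Truncated coefficientwise check for hypothetical sample odd values m = 3, M = 7.
K = 60

def S(exps):
    """Truncated polynomial: sum of t^e over the listed exponents."""
    c = [0] * K
    for e in exps:
        if e < K:
            c[e] += 1
    return c

def mul(a, b):
    """Product of two truncated power series."""
    c = [0] * K
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if x and y and i + j < K:
                c[i + j] += x * y
    return c

m, M = 3, 7
geom = S(range(0, K, M - m))  # 1/(1 - t^{M-m}), which is already the second series

# 1 + sum_{N>=1} t^{N(M-m)} (1 + t^m)  ==  (1 + t^M) / (1 - t^{M-m})
lhs = S([0] + [N * (M - m) + d for N in range(1, K) for d in (0, m)])
assert lhs == mul(S([0, M]), geom)
```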
\subsection{The case of odd $m$, even $M$, and $i=0$}
\label{3eoe}
The corresponding group $E^{p,q}_1$ given by (\ref{ssss}) is Poincar\'e isomorphic to
$ H^{q+p(M-m+1)}(B({\mathbb R}P^{m},-p), \pm {\mathbb Z}).$
By Theorem \ref{lem29}(B), the non-trivial groups $E^{p,q}_1 \otimes {\mathbb Q}$, $p<0$, lie on two parallel lines in the $(p,q)$-plane. Namely,
$E^{p,q}_1 \otimes {\mathbb Q} \simeq {\mathbb Q}$ if $p$ is odd and the number $q+p(M-m+1)$ is equal to either $(-p-1)(m-1)/2$ or to $(-p-1)(m-1)/2+m$; in all other cases $E^{p,q}_1 \otimes {\mathbb Q} =0$. Since $M>m$, no differential in this sequence can connect two non-trivial groups, hence $E_\infty \equiv E_1$ over ${\mathbb Q}$.
By counting the total dimensions $p+q$ of these groups for all values $p=-(2j+1),$ $j \geq 0$, we get the Poincar\'e series
$$ 1+ \sum_{j=0}^\infty (1+t^m)t^{M-m +j(2M-m-1)} \equiv 1+ t^{M-m}\frac{1+t^m}{1-t^{2M-m-1}} \ , $$
see the third cell of the last row in Table \ref{evv}.
By (\ref{sssspt}) and Poincar\'e duality, $\hat E^{p,q}_1 \otimes {\mathbb Q} \simeq H^{q+p(M-m+1)}(B({\mathbb R}P_\star^{m},-p), \pm {\mathbb Q}).$
By statement (\ref{c}) of Lemma \ref{lem11} and Proposition \ref{chn76}, this group is equal to ${\mathbb Q}$ if $q+p(M-m+1) = (m-1)[\frac{N}{2}]$ and is trivial in all other cases. So for any $N>0$ there is exactly one non-trivial cell $\hat E^{-N,q}_1$, namely $q$ should be equal to $N(M-m+1)+(m-1)[N/2]$. This expression depends monotonically on $N$, therefore our spectral sequence has no non-trivial higher differentials, and $\hat E_\infty = \hat E_1$. The Poincar\'e series of the resulting homology group is thus equal to
$$1 + t^{M-m} + \sum_{k=1}^\infty \left(t^{2k(M-m)+k(m-1)} + t^{(2k+1)(M-m)+k(m-1)}\right) \equiv \frac{1+t^{M-m}}{1-t^{2M-m-1}} \ ,$$
see the second cell of the last row in Table \ref{evv}.
The structure of differentials $d^M$ indicated in the last cell of this row is the unique one which is compatible with the previous two cells.
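As a sanity check, the two Poincar\'e series obtained in this subsection can be verified coefficientwise; the following sketch uses the hypothetical sample values $m=3$ (odd), $M=6$ (even) and is not part of the argument.

```python
# Truncated coefficientwise check for hypothetical sample values m = 3, M = 6.
K = 60

def S(exps):
    """Truncated polynomial: sum of t^e over the listed exponents."""
    c = [0] * K
    for e in exps:
        if e < K:
            c[e] += 1
    return c

def mul(a, b):
    """Product of two truncated power series."""
    c = [0] * K
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if x and y and i + j < K:
                c[i + j] += x * y
    return c

def add(a, b):
    return [x + y for x, y in zip(a, b)]

m, M = 3, 6
geom = S(range(0, K, 2 * M - m - 1))  # 1/(1 - t^{2M-m-1})

# 1 + sum_{j>=0} (1 + t^m) t^{M-m+j(2M-m-1)}
#   ==  1 + t^{M-m}(1 + t^m)/(1 - t^{2M-m-1})
lhs_a = S([0] + [M - m + d + j * (2 * M - m - 1) for j in range(K) for d in (0, m)])
assert lhs_a == add(S([0]), mul(S([M - m, M]), geom))

# 1 + t^{M-m} + sum_{k>=1} (t^{2k(M-m)+k(m-1)} + t^{(2k+1)(M-m)+k(m-1)})
#   ==  (1 + t^{M-m}) / (1 - t^{2M-m-1}),  using 2k(M-m)+k(m-1) = k(2M-m-1)
lhs_b = S([0, M - m] + [k * (2 * M - m - 1) + d for k in range(1, K) for d in (0, M - m)])
assert lhs_b == mul(S([0, M - m]), geom)
```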
\subsection{The case of odd $m$, even $M$, and $i=1$}
\label{3eoo}
Substituting Theorem \ref{main2}(A) into (\ref{thomm2}), we obtain that the group $E_1^{p,q} \otimes {\mathbb Q}$ is trivial if $p$ is odd; if $p=-N \neq 0$ is even, then $E_1^{p,q} \otimes {\mathbb Q}$ is equal to ${\mathbb Q}$ for $q=N(M+1)- \frac{N}{2}(m+1)$ or $q=N(M+1)- \frac{N}{2}(m+1)+m$, and is trivial for all other values of $q$. Again, our rational spectral sequence stabilizes at the term $E_1$ for dimensional reasons. Counting the total degrees $p+q\equiv -N+q$ of its groups over all even natural values of $N$ gives us the Poincar\'e series equal to $$1 + (1+t^m)\sum_{j=1}^\infty t^{j(2M-m-1)} \equiv \frac{1+t^{2M-1}}{1-t^{2M-m-1}}.$$
Substituting Theorem \ref{main2}(B) into (\ref{thomm2pt}), we obtain that the group $\hat E_1^{p,q} \otimes {\mathbb Q}$ is non-trivial (and then equal to ${\mathbb Q}$) only for $(p,q)=\left(-N,N(M-m+1)+\left]\frac{N}{2}\right[\times (m-1)\right)$, $N \geq 0$. The stabilization $\hat E_\infty \equiv \hat E_1$ follows immediately from the position of these groups. Counting the corresponding total degrees $p+q$ separately for even and odd $N$, we get the Poincar\'e series $$ \sum_{k=0}^\infty t^{k(2M-m-1)} + \sum_{k=0}^\infty t^{k(2M-m-1)+M-1} \equiv \frac{1+t^{M-1}}{1-t^{2M-m-1}} .$$ So we have found the second and the third cells of the last row of Table \ref{oddd}. The structure of the differential $d^M$ indicated in the last cell of this row is the only one compatible with the previous two cells.
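The two series of this subsection can likewise be checked coefficientwise; the sketch below uses the hypothetical sample values $m=3$ (odd), $M=6$ (even) and is not part of the argument.

```python
# Truncated coefficientwise check for hypothetical sample values m = 3, M = 6.
K = 60

def S(exps):
    """Truncated polynomial: sum of t^e over the listed exponents."""
    c = [0] * K
    for e in exps:
        if e < K:
            c[e] += 1
    return c

def mul(a, b):
    """Product of two truncated power series."""
    c = [0] * K
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if x and y and i + j < K:
                c[i + j] += x * y
    return c

m, M = 3, 6
geom = S(range(0, K, 2 * M - m - 1))  # 1/(1 - t^{2M-m-1})

# 1 + (1 + t^m) sum_{j>=1} t^{j(2M-m-1)}  ==  (1 + t^{2M-1}) / (1 - t^{2M-m-1})
lhs1 = S([0] + [j * (2 * M - m - 1) + d for j in range(1, K) for d in (0, m)])
assert lhs1 == mul(S([0, 2 * M - 1]), geom)

# sum_k t^{k(2M-m-1)} + sum_k t^{k(2M-m-1)+M-1}  ==  (1 + t^{M-1}) / (1 - t^{2M-m-1})
lhs2 = S([k * (2 * M - m - 1) + d for k in range(K) for d in (0, M - 1)])
assert lhs2 == mul(S([0, M - 1]), geom)
```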
\section{The justification of Table \ref{genn}}
By Poincar\'e duality and Lemma \ref{orient}, the group (\ref{msgen}) is equal to
\begin{equation}
\label{msgen1}
H^{q+p(M-m+1)}\left(B(S^m,-p),(\pm {\mathbb Z})^{\otimes(M-m)}\right) ,
\end{equation}
and the group (\ref{msgenpt}) to
\begin{equation}
\label{msgen1pt}
H^{q+p(M-m+1)}\left(B({\mathbb R}^m,-p),(\pm {\mathbb Z})^{\otimes(M-m)}\right) .
\end{equation}
\subsection{$m$ and $M$ are odd}
See Theorem \ref{m4}, the case of $r=s=1$.
\subsection{$m$ and $M$ are even}
The group $H^*(B({\mathbb R}^m,N),{\mathbb Q})$ is in this case equal to ${\mathbb Q}[0]$ for $N=1$ and to ${\mathbb Q}[0] \oplus {\mathbb Q}[m-1]$ for $N \geq 2$ (see \cite{Cohen76}, Section 4). Consequently, all non-trivial cells $\hat E^{p,q}_1 \otimes {\mathbb Q}$, $p <0,$ of the spectral sequence $\hat E_r^{p,q}$ correspond to the pairs $(p,q)$ equal either to $(-1, M-m+1)$ or to $(-N,N(M-m+1))$ and $(-N, N(M-m+1)+m-1)$ with arbitrary $N \geq 2$; all these non-trivial groups are isomorphic to ${\mathbb Q}$. All of them survive until $\hat E_\infty$ for the same reason as in \S \ref{ee1}. So the Poincar\'e series of the resulting homology group is equal to $$1+ t^{M-m} + \sum_{N=2}^\infty \left(t^{N(M-m)} + t^{N(M-m)+m-1}\right) \equiv \frac{1+t^{2M-m-1}}{1-t^{M-m}} \ ,$$ see the second column of Table \ref{genn}.
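As before, the closed form of the series just displayed can be checked coefficientwise; the sketch below uses the hypothetical sample even values $m=2$, $M=6$ and is not part of the argument.

```python
# Truncated coefficientwise check for hypothetical sample even values m = 2, M = 6.
K = 60

def S(exps):
    """Truncated polynomial: sum of t^e over the listed exponents."""
    c = [0] * K
    for e in exps:
        if e < K:
            c[e] += 1
    return c

def mul(a, b):
    """Product of two truncated power series."""
    c = [0] * K
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if x and y and i + j < K:
                c[i + j] += x * y
    return c

m, M = 2, 6
geom = S(range(0, K, M - m))  # 1/(1 - t^{M-m})

# 1 + t^{M-m} + sum_{N>=2} (t^{N(M-m)} + t^{N(M-m)+m-1})
#   ==  (1 + t^{2M-m-1}) / (1 - t^{M-m})
lhs = S([0, M - m] + [N * (M - m) + d for N in range(2, K) for d in (0, m - 1)])
assert lhs == mul(S([0, 2 * M - m - 1]), geom)
```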
The group $H^*(B(S^m,N),{\mathbb Q})$ is equal to ${\mathbb Q}[0] \oplus {\mathbb Q}[2m-1]$ for $N \geq 3$
by the exact homological sequence of the pair $(B(S^m,N), B({\mathbb R}^m,N))$ (where ${\mathbb R}^m$ is $S^m$ less one point).
Indeed, the difference $B(S^m,N) \setminus B({\mathbb R}^m,N)$
is obviously diffeomorphic to $B({\mathbb R}^m,N-1)$, and the normal bundle of this difference in $B(S^m,N)$ is trivial, with fiber equal to the tangent space of $S^m$ at the removed point. Therefore by the Thom isomorphism and the well-known structure of $H^*( B({\mathbb R}^m,N-1),{\mathbb Q})$ the group $H_i(B(S^m,N), B({\mathbb R}^m,N);{\mathbb Q})$ is trivial in all dimensions except for $m$ and $2m-1$ and is isomorphic to ${\mathbb Q}$ in these two dimensions. The boundary map $H_i(B(S^m,N), B({\mathbb R}^m,N),{\mathbb Q}) \to H_{i-1}(B({\mathbb R}^m,N),{\mathbb Q})$ is thus trivial for all $i \neq m$. If $i=m$, then by the construction of the Thom isomorphism the image of this map is generated by the $(m-1)$-dimensional submanifold in $B({\mathbb R}^m,N)$ swept out by configurations, some $N-1$ points of which are constant, and the $N$-th one runs along a huge sphere containing these $N-1$ points inside. This cycle is non-trivial in $H_{m-1}(B({\mathbb R}^m,N),{\mathbb Q})$, so our boundary map is also non-trivial for $i=m$.
Finally, we get the following structure of the term $E_1$ of the main spectral sequence over ${\mathbb Q}$. All groups $E_1^{p,q}$ with the following pairs $(p,q)$ are equal to ${\mathbb Q}$: $(0,0)$ (the standard term), $(-1,M-m+1)$ and $(-1,M+1)$ (because $B(S^m,1) = S^m$), $(-2,2(M-m+1))$ (because $B(S^m,2)$ is homotopy equivalent to ${\mathbb R}P^m$), and $(-N,N(M-m+1))$ and $(-N,N(M-m+1)+2m-1)$ for any $N \geq 3$.
All other groups $E_1^{p,q} \otimes {\mathbb Q}$ are trivial. The spectral sequence stabilizes at this term, that is, $E_\infty \equiv E_1$, for the same reasons as in \S \ref{ee1}. Therefore the Poincar\'e series of the group $H_*(\mbox{Map}(S^m,S^M),{\mathbb Q})$ is equal to
$$1 + t^{M-m} + t^M + t^{2(M-m)} + \sum_{N=3}^\infty \left(t^{N(M-m)} + t^{N(M-m)+2m-1}\right),$$ which is equal to the expression in the third column of Table \ref{genn}. The form of the higher differentials indicated in the fourth column for the case of even $m$ and $M$ is the unique one compatible with the data of the second and third cells.
\subsection{$m$ is even, $M$ is odd}
If $m$ is even then the group $H^*(B({\mathbb R}^m,N),\pm {\mathbb Q})$ is trivial for any $N \geq 2$, as well as its Poincar\'e dual group $\bar H_*(B({\mathbb R}^m,N),\pm {\mathbb Q})$; see e.g. Corollary 2 in \S I.4 of \cite{Book}. Therefore the spectral sequence (\ref{msgenpt}) for such $m$ and $M$ has only one non-trivial group $\hat E^{p,q}_1$ with $p<0$, namely $\hat E^{-1,M-m+1}_1$. This gives us the Poincar\'e series $1+t^{M-m}$ for the group $H_*(\Omega^m S^M,{\mathbb Q})$. The spectral sequence of the fiber bundle (\ref{fb}) has therefore only four non-trivial cells: ${\mathcal E}_2^{p,q} \simeq {\mathbb Q}$ for $p \in \{0,M\}$ and $q \in \{0,M-m\}$. It obviously stabilizes at this term ${\mathcal E}_2$ and gives us the corresponding cell of the third column of Table \ref{genn}.
\subsection{$m$ is odd, $M$ is even}
By Proposition \ref{chn76} the group $H^j(B({\mathbb R}^m,N), \pm {\mathbb Q}) \equiv \bar H_{mN-j}(B({\mathbb R}^m,N), {\mathbb Q})$ is in this case non-trivial (and then equal to ${\mathbb Q}$) only for $j=(m-1)[N/2].$ So, for any $N>0$ there is exactly one non-trivial cell $\hat E^{-N,q}_1$, namely the one with $q=N(M-m+1)+(m-1)[N/2]$. The number $q$ in this expression depends strictly monotonically on $N$, therefore all higher differentials are trivial, and $\hat E_\infty = \hat E_1$. By counting the total degrees $p+q$ of non-trivial groups $\hat E_\infty^{p,q}$ separately for even and odd $N$ we see that the Poincar\'e series of the group $H_*(\Omega^m S^M,{\mathbb Q})$ is equal to $$\sum_{k=0}^\infty \left(t^{2k(M-m)+k(m-1)} + t^{(2k+1)(M-m)+k(m-1)}\right) \equiv \frac{1+t^{M-m}}{1-t^{2M-m-1}},$$
see the second cell of the last row in Table \ref{genn}.
Further, by Theorem \ref{lem29}(B) (with $r=1$) all groups $H^j(B(S^m,N), \pm {\mathbb Q})$ are trivial if $N$ is even; if $N$ is odd then they are non-trivial (and then equal to ${\mathbb Q}$) only for $j$ equal to $(m-1)\frac{N-1}{2}$ or $(m-1)\frac{N-1}{2}+m$. Therefore the calculation of the main spectral sequence $E_r^{p,q}$ in this case coincides with the one given in the first paragraph of \S \ref{3eoe}.
\section{The construction of the spectral sequences of Theorems \ref{absv} and \ref{relav}}
\label{sseven}
This construction follows very closely the strategy described in Chapter 3 of \cite{Book}. Namely, we consider the {\em discriminant} set $\Sigma \equiv \mbox{EM}_G(X,{\mathbb R}^W) \setminus \mbox{EM}_G(X,({\mathbb R}^W \setminus C\Lambda))$, i.e. the set of $G$-equivariant maps $X \to {\mathbb R}^W$ whose images meet $C\Lambda$. For any finite-dimensional affine subspace ${\mathcal F} \subset \mbox{EM}_G(X,{\mathbb R}^W)$ we study the group $\bar H_*( \Sigma \cap {\mathcal F})$, which is Alexander dual to the group $H^*({\mathcal F} \setminus \Sigma)$, cf. \cite{A70}. The groups $H^*({\mathcal F}_i \setminus \Sigma)$ of an increasing sequence of generic subspaces ${\mathcal F}_i$ stabilize to the homology group of the space $\mbox{EM}_G(X,({\mathbb R}^W \setminus C\Lambda))$ by a variant of the Weierstrass approximation theorem. On the other hand, the groups $\bar H_*({\mathcal F}_i \cap \Sigma)$ can be calculated using spectral sequences which stabilize (after a change of indices reflecting the Alexander duality) to the spectral sequence of Theorem \ref{absv}.
Everywhere in this section we use the notation and assumptions of \S \ref{agen}. In particular, $X$ is a compact semialgebraic subset of ${\mathbb R}^a$, $G$ is a compact Lie group acting freely and algebraically on $X$ and on some neighborhood of $X$ in ${\mathbb R}^a$, $\rho: G \to O({\mathbb R}^W)$ is a representation of $G$, and $C\Lambda$ is a $\rho(G)$-invariant cone in ${\mathbb R}^W$.
\subsection{Finite-dimensional approximations}
\label{fda}
\begin{proposition}
\label{appro}
For any homology class $\alpha \in H_*( \mbox{\rm EM}_G(X,({\mathbb R}^W \setminus C\Lambda))),$ there is a finite-dimensional affine subspace $\mathcal{F} \subset \mbox{\rm EM}_G(X,{\mathbb R}^W)$ such that the class $\alpha$ can be realized by a cycle contained in the subspace ${\mathcal F} \setminus \Sigma \equiv {\mathcal F} \cap \mbox{\rm EM}_G(X,({\mathbb R}^W \setminus C\Lambda)). $
If two cycles in such a subspace ${\mathcal F} \setminus \Sigma$ represent one and the same element of the group $H_*( \mbox{\rm EM}_G(X,({\mathbb R}^W \setminus C\Lambda))),$ then there is a finite-dimensional subspace ${\mathcal F}' \supset {\mathcal F}$ in $\mbox{\rm EM}_G(X,{\mathbb R}^W)$
such that these two cycles are homologous to one another in ${\mathcal F}' \setminus \Sigma$.
The relative versions of these statements for the spaces \ $\mbox{\rm EM}_G(X,A;({\mathbb R}^W \setminus C\Lambda))$ are also true.
\end{proposition}
\noindent
{\it Proof.} Realize the class $\alpha$ by a continuous map $\varkappa: \Xi \to \mbox{EM}_G(X,({\mathbb R}^W \setminus C\Lambda))$ of a compact polyhedron $\Xi$ (i.e. by a continuous map $\tilde \varkappa: X \times \Xi \to {\mathbb R}^W \setminus C\Lambda$ that is equivariant with respect to the $G$-actions on $X$ and ${\mathbb R}^W$ and such that $\tilde \varkappa(x , \xi) \equiv \varkappa(\xi)(x)$ for $x \in X, \xi \in \Xi$). Let $r$ be the minimal distance in ${\mathbb R}^W$ between the sets $f(X)$ and $C\Lambda$ over all points $f \in \varkappa(\Xi)$. Let us embed the complex $\Xi$ regularly into some Euclidean space ${\mathbb R}^b$ and $\frac{r}{2}$-approximate the map $\tilde \varkappa$ by the restriction of a polynomial map ${\mathbb R}^a \times {\mathbb R}^b \to {\mathbb R}^W$. This approximation can be considered as an $\frac{r}{2}$-approximation (in the $C^0$-topology) of the cycle $\varkappa(\Xi)$ by a cycle in the finite-dimensional vector space of restrictions to $X$ of polynomial maps $F: {\mathbb R}^a \to {\mathbb R}^W$ of a certain finite degree. Then replace any such map $F|_X$ by its $G$-symmetrization (that is, by the map $X \to {\mathbb R}^W$ sending any point $x \in X$ to $$ \int_G\rho(g^{-1})(F(g(x))) d \mu \ , $$
integration over the Haar measure). This $G$-symmetrization is a linear operator from the finite-dimensional space of polynomial maps of a certain degree to a subspace of the space of $G$-equivariant maps. The latter subspace is thus also finite-dimensional. The image in it of the cycle $\varkappa(\Xi)$ after all perturbations $\frac{r}{2}$-approximates the initial cycle, in particular is homologous to it in $\mbox{EM}_G(X,({\mathbb R}^W \setminus C\Lambda))$.
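For a finite group the Haar integral in this $G$-symmetrization reduces to a finite average. The following minimal illustrative sketch treats the toy case $G = {\mathbb Z}/2$ acting antipodally with $\rho(-1) = -\mbox{Id}$, so that equivariant means odd; the map $F$ below is a hypothetical example, and the paper of course works with arbitrary compact Lie groups.

```python
# Toy finite-group instance of the G-symmetrization above: G = Z/2 acts on
# R^2 by the antipodal map and on the target R^2 by rho(-1) = -Id, so the
# equivariant maps are exactly the odd ones.  The Haar integral becomes the
# average over the two group elements.  (Illustrative sketch only.)

def symmetrize(F):
    """Return x -> (1/2) sum_{g in {1,-1}} rho(g^{-1}) F(g(x)), an odd map."""
    def F_sym(x):
        fx = F(x)
        f_minus = F(tuple(-c for c in x))
        # rho(1)^{-1} F(x) averaged with rho(-1)^{-1} F(-x) = -F(-x)
        return tuple((a - b) / 2 for a, b in zip(fx, f_minus))
    return F_sym

# A non-equivariant (hypothetical) polynomial map R^2 -> R^2; symmetrization
# kills its even part and keeps the odd part, here x -> (x0**3, x1).
F = lambda x: (x[0]**3 + x[0]*x[1] + 1.0, x[1] + x[0]**2)
F_odd = symmetrize(F)
```

Applied to a homology class realized by polynomial maps, this linear averaging operator lands in a finite-dimensional space of equivariant maps, which is the point of the construction.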
The second statement of Proposition \ref{appro} can be proved in a similar way by approximating homologies between our cycles in the space of equivariant maps.
In the relative case of Theorem \ref{relav} we use the following modification of this construction. Lift our inclusion $ X \hookrightarrow {\mathbb R}^a$ to the embedding $X \to {\mathbb R}^{a} \oplus {\mathbb R}^1$ sending any point $x \in X$ to $(x, \mbox{dist}(x,A))$.
Let $\Phi:X \to {\mathbb R}^W$ be an arbitrary $G$-equivariant extension of the given map $\varphi: A \to {\mathbb R}^W$ to all of $X$.
Again realize the homology class $\alpha$ by a map $\varkappa:\Xi \to \mbox{EM}_G(X,({\mathbb R}^W \setminus C\Lambda)),$ then consider the map $X \times \Xi \to {\mathbb R}^W$ that takes any pair $(x,\xi)$ to $\varkappa(\xi)(x) - \Phi(x),$ and approximate this map by a polynomial map ${\mathbb R}^a \oplus {\mathbb R}^1 \oplus {\mathbb R}^b \to {\mathbb R}^W$ given by a set of $W$ polynomials vanishing on the hyperplane ${\mathbb R}^a \times \{0\} \times {\mathbb R}^b$. We obtain an approximation of the cycle $\alpha$ by a cycle in the affine space of maps of the form $P + \Phi$, where $P$ are polynomials of uniformly bounded degree. Finally, $G$-symmetrize these maps $P+\Phi$ as previously.
$\Box$
\subsection{$k$-sufficient finite-dimensional approximations}
\begin{definition} \rm
\label{kdef}
A finite-dimensional affine subspace ${\mathcal F} \subset \mbox{EM} _G(X,A; {\mathbb R}^W)$ is {\it $k$-sufficient} if
1) for any natural $N \leq k$, any points $x_1, \dots, x_N$ of $X\setminus A$ from pairwise different orbits of $G$, and any points $\lambda_1, \dots, \lambda_N$ in $C\Lambda$, the plane in ${\mathcal F}$ that consists of maps $f$ such that $f(x_j)=\lambda_j$, $j=1, \dots, N$, has codimension exactly $NW$ in ${\mathcal F}$;
2) for any natural $N$ the set of maps $f \in {\mathcal F}$ such that $f(x) \in C\Lambda$ for some $N$ points $x \in X \setminus A$ from pairwise different orbits of $G$ has codimension $\geq N(W-\dim X +\dim G- \dim C\Lambda)$ in ${\mathcal F}$.
A subspace ${\mathcal F}$ is called {\it $k$-transversal} if it satisfies condition 1) of this definition.
\end{definition}
We say that a subset of the space of $L$-dimensional subspaces of $\mbox{EM} _G(X,A; {\mathbb R}^W)$ is semialgebraic if its intersection with the space of $L$-dimensional subspaces of any finite-dimensional affine subspace of $\mbox{EM} _G(X,A; {\mathbb R}^W)$ is semialgebraic. Such a subset is {\em of codimension at least $T$} if for any finite-dimensional affine subspace $\Phi \subset \mbox{EM} _G(X,A; {\mathbb R}^W)$ there is a subspace $\tilde \Phi \supset \Phi$ such that the intersection of this subset with the space of $L$-dimensional affine subspaces of $\tilde \Phi$ has codimension at least $T$ in this space. It is easy to see that the complement of any semialgebraic subset of positive codimension is dense in the space of $L$-dimensional subspaces of $\mbox{EM} _G(X,A; {\mathbb R}^W)$, and the complement of any semialgebraic subset of codimension $\geq 2$ is path-connected.
\begin{proposition}
\label{ksuff}
Let $X$, $A$, $G$ and $\Lambda$ be as above. Then
1. For any natural $k$ and $L\geq k(W+\dim X - \dim G+ \dim C\Lambda),$ the set of not $k$-transversal subspaces is a semialgebraic subset of codimension at least $L- k(W+\dim X - \dim G+ \dim C\Lambda)+1$ in the space of $L$-dimensional affine subspaces of $\mbox{\rm EM} _G(X,A; {\mathbb R}^W)$.
2. If $\dim X - \dim G + \dim C\Lambda < W$ then for any natural $L$ there is a dense subset in the space of affine $L$-dimensional subspaces of $\mbox{\rm EM} _G(X,A; {\mathbb R}^W)$ such that condition 2$)$ of Definition \ref{kdef} is satisfied for all elements of this subset.
3. $k$-sufficient subspaces are dense in the space of affine subspaces of $\mbox{\rm EM} _G(X,A; {\mathbb R}^W)$ of any sufficiently large finite dimension.
\end{proposition}
\noindent
{\it Proof.} 1. For any collection of points $x_1, \dots, x_N$ and $\lambda_1, \dots, \lambda_N$ as in condition 1) of Definition \ref{kdef}, the maps $f$ such that $f(x_i)=\lambda_i$ for all $i=1, \dots, N$ form a subspace of codimension $N\, W$ in $\mbox{\rm EM} _G(X,A; {\mathbb R}^W)$. This subspace depends only on the class of the collection $\{(x_i, \lambda_i)\}$ in the space $\Pi_{N,G}(X,A)$, so we have an $N(\dim X-\dim G + \dim C\Lambda)$-parametric family of such subspaces. The set of $L$-dimensional affine subspaces in $\mbox{\rm EM} _G(X,A; {\mathbb R}^W)$ that are not in general position with respect to some fixed subspace of this family (i.e. have an empty or non-transversal intersection with it) has codimension $L-NW+1$ in the space of all $L$-dimensional subspaces. The union of all these sets over all points of $\Pi_{N,G}(X,A)$ is semialgebraic by the Tarski--Seidenberg theorem, and the codimension of this union is at least $L-NW+1-N(\dim X -\dim G + \dim C\Lambda) \geq L-k(W+\dim X -\dim G + \dim C\Lambda)+1 $. The codimension of the union of these unions over all $N \leq k$ also satisfies this estimate.
2. Set $k_0 = [L/(W-\dim X +\dim G - \dim C\Lambda)]+1$. Given an $L$-dimensional affine subspace ${\mathcal F}$ of $\mbox{\rm EM} _G(X,A; {\mathbb R}^W)$ and an arbitrary vicinity of ${\mathcal F}$ in the space of all subspaces of this dimension, we first choose a $k_0$-transversal affine subspace
$\tilde {\mathcal F} \subset \mbox{\rm EM} _G(X,A; {\mathbb R}^W)$ of finite dimension greater than $k_0(W+\dim X -\dim G +\dim C\Lambda)$, which contains some $L$-dimensional subspace ${\mathcal F}'$ from this vicinity: such a subspace $\tilde {\mathcal F}$ exists by statement 1 of our proposition. Then for any $N \leq k_0$ the
set of maps $f \in \tilde {\mathcal F}$ such that $f(x_i) \in C\Lambda$ for some $N$ points $x_i \in X$ from pairwise distinct $G$-orbits is swept out by an $N(\dim X -\dim G+\dim C\Lambda)$-parametric family of subspaces of codimension $NW$; in particular, it is a semialgebraic subvariety of codimension $\geq N(W-\dim X +\dim G - \dim C\Lambda)$ in $\tilde {\mathcal F}$.
Any $L$-dimensional affine subspace ${\mathcal F}''$ of $\tilde {\mathcal F}$ that is transversal to these subvarieties for all $N \leq k_0$ satisfies condition 2) of Definition \ref{kdef}. Indeed, for all $N\leq k_0$ this condition follows from the transversality; for $N =k_0$ it means that the intersection of ${\mathcal F}''$ with the corresponding subvariety is empty, which implies the same also for all $N> k_0$.
By the transversality theorem, we can choose this subspace ${\mathcal F}''$ to be arbitrarily close to ${\mathcal F}'$ and hence also to ${\mathcal F}$.
3. If $L \geq k(W+\dim X-\dim G + \dim C\Lambda)$ then we can choose this subspace ${\mathcal F}''$ to be also $k$-transversal.
$\Box$
\subsection{The simplicial resolution}
\label{sre}
We will show now that the groups $H^*({\mathcal F} \setminus \Sigma)$ for all $k$-sufficient subspaces ${\mathcal F}$ in
$\mbox{EM}_G(X, {\mathbb R}^W)$ or $ \mbox{EM}_G(X,A; {\mathbb R}^W)$
can be approximated using the standard spectral sequences of Theorems \ref{absv} and \ref{relav} in
all dimensions strictly smaller than $ (k+1)(W-\dim X + \dim G - \dim C\Lambda-1)$.
Let ${\mathcal F}$ be a $k$-sufficient subspace in $\mbox{EM}_G(X,{\mathbb R}^W)$. A simplicial resolution of the variety $\Sigma \cap {\mathcal F}$ is then constructed as follows.
Let $T$ be the maximal number of different orbits of $G$ in $X$ which are mapped to the points of $C\Lambda$ by a map of the class ${\mathcal F}$. By the second property of $k$-sufficient spaces, this number does not exceed \ $\dim {\mathcal F}/(W -\dim X +\dim G - \dim C\Lambda)$.
The simplicial resolution will be constructed as a subset in the product of the space ${\mathcal F}$ and the $T$-th {\em self-join} of the quotient space $X/G$.
Namely, let us embed this quotient space $X/G$ generically into an affine space of a very large dimension, and for any $N \leq T$ points of the image of the embedding consider their convex hull. If the dimension of the ambient space is sufficiently large and the embedding is indeed generic, then any such convex hull is a simplex of the corresponding dimension $N-1$, and these simplices have no unexpected intersections (that is, the intersection of any two such simplices is their common face, which is the convex hull of the intersection of their vertex sets). Let us assume that these conditions are satisfied, and define the $T$-th self-join $(X/G)^{\star T}$ of $X/G$ as the union of all these simplices with arbitrary $N \leq T$. Obviously, such spaces $(X/G)^{\star T}$ defined using different generic embeddings are homeomorphic to one another.
The space $(X/G)^{\star T}$ is compact since $X/G$ is.
Let $f \in \Sigma \cap {\mathcal F}$ be a $G$-equivariant map $f: X \to {\mathbb R}^W$ that takes exactly $N$ orbits of our action of $G$ on $X$ to $C\Lambda$. Associate with it the simplex in the self-join $(X/G)^{\star T}$ spanned by the corresponding $N$ points of $X/G$, and the simplex in $ (X/G)^{\star T}\times {\mathcal F}$ equal to the product of the previous simplex and the point $\{f\} \in {\mathcal F}$. Finally, define the simplicial resolution $\sigma \subset (X/G)^{\star T} \times {\mathcal F} $ as the union of these simplices for all discriminant maps $f \in \Sigma \cap {\mathcal F}$.
This space $\sigma$ has a natural filtration $\sigma_1 \subset \dots \subset \sigma_{T} \equiv \sigma$, namely, $\sigma_N$ is the union of all $\leq N$-vertex simplices in this construction.
\begin{proposition}
\label{pro6}
1. The map $\sigma \to \Sigma \cap {\mathcal F}$ induced by the obvious projection $ (X/G)^{\star T} \times {\mathcal F} \to {\mathcal F}$ is a proper map that induces an isomorphism $\bar H_*(\sigma) \to \bar H_*(\Sigma \cap {\mathcal F})$ of Borel--Moore homology groups of these spaces.
2. The dimension of any non-empty set $\sigma_N \setminus \sigma_{N-1}$ does not exceed \ $\dim{\mathcal F}-N(W-\dim X + \dim G - \dim C\Lambda-1)-1$.
3. If $N \leq k$ then $\sigma_N \setminus \sigma_{N-1}$ is the space of a fibered product of two fiber bundles with base $\Pi_{N,G}(X)$ $($see \S \ref{agen}$)$. The fibers of these two bundles are an $(N-1)$-dimensional open simplex and a $(\dim {\mathcal F} -N\, W)$-dimensional affine space respectively. The orientation sheaves of these two bundles are respectively the sign sheaf $\pm {\mathbb Z}$ of the unordered configuration space $B(X/G,N)$, and the sheaf ${\mathcal J} \otimes \pm {\mathbb Z}$, see \S \ref{agen}.
\end{proposition}
\noindent
{\it Proof.} Properness of the projection $\sigma \to \Sigma \cap {\mathcal F}$ follows immediately from the construction, and the isomorphism in Statement 1 is a standard property of simplicial resolutions, see e.g. \cite{Book}, \cite{gorinov}. Statement 2 follows from the second condition in the definition of $k$-sufficient subspaces.
The projection of the fiber bundle from statement 3 is defined as follows. Any point of the space $\sigma_N \setminus \sigma_{N-1}$ has the form $(s,f)$, where $s$ belongs to an $(N-1)$-dimensional simplex in $(X/G)^{\star T}$ the $N$ vertices of which are distinct $G$-orbits in $X$ mapped by $f \in \Sigma$ to some $\rho(G)$-orbits in $C\Lambda$. Taking a representative $x_j \in X$ of any of these $N$ \ $G$-orbits, and the corresponding points $f(x_j) \in C\Lambda$, we obtain a point of the direct product $X^N \times (C\Lambda)^N$. The orbit of this point under the action of all elements (\ref{semi}) described by (\ref{semact}) is a point of the space $\Pi_{N,G}(X)$ that does not depend on the choice of the representatives $x_j$.
By the construction and the first property of $k$-sufficient approximations, the resulting map $\sigma_N \setminus \sigma_{N-1} \to \Pi_{N,G}(X)$ is a fiber bundle, whose fiber is the set of all points $(s,f)$ such that $s$ is in some open simplex with $N$ vertices in $(X/G)^{\star T}$, and $f$ belongs to the affine subspace in ${\mathcal F}$ that consists of the maps which take fixed values on a fixed set of $N$ different $G$-orbits in $X$. The orientation rules for these bundles of simplices and subspaces follow immediately from the construction.
$\Box$
\begin{corollary} \label{cormain}
If ${\mathcal F}$ is a $k$-sufficient subspace, then there is a spectral sequence $E^{p,q}_r({\mathcal F})$ converging to the group $H^*({\mathcal F} \setminus \Sigma)$ with the term $E^{p,q}_1$ given by formula $($\ref{absvf}$)$ for $p \geq -k$ and equal to zero if either $p>0$ or $q < -p(W-\dim X + \dim G - \dim C\Lambda)$. In particular, formula $($\ref{absvf}$)$ describes $E^{p,q}_1({\mathcal F})$ for all $p,q$ such that $p<0$ and $p+q < (k+1)(W-\dim X + \dim G - \dim C\Lambda -1)$.
\end{corollary}
\noindent
{\it Proof}. Our filtration $\{\sigma_N\}$ defines a spectral sequence $E^r_{N,t}({\mathcal F})$ calculating the group $\bar H_*(\sigma)$; in particular $E^1_{N,t}({\mathcal F}) \simeq \bar H_{N+t}(\sigma_N \setminus \sigma_{N-1})$. Let us turn this homological spectral sequence into a cohomological one by setting
\begin{equation}
\label{inver}
E^{p,q}_r({\mathcal F}) \equiv E^r_{-p, \dim {\mathcal F}-q-1}({\mathcal F}).\end{equation} The resulting spectral sequence converges to the group $\tilde H^*({\mathcal F}\setminus \Sigma)$, which is Alexander dual to the group $\bar H_*(\Sigma \cap {\mathcal F})$ calculated by the previous spectral sequence.
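For orientation, the index conversion in (\ref{inver}) works out as follows (this is only the arithmetic of the previous formulas, spelled out for convenience): a class of $E^1_{N,t}({\mathcal F}) \simeq \bar H_{N+t}(\sigma_N \setminus \sigma_{N-1})$ appears in the cohomological spectral sequence in the position
\begin{displaymath}
p = -N, \qquad q = \dim {\mathcal F} - t - 1, \qquad \mbox{so that} \quad p+q = \dim {\mathcal F} - (N+t) - 1,
\end{displaymath}
in agreement with the Alexander duality isomorphism $\tilde H^{j}({\mathcal F} \setminus \Sigma) \simeq \bar H_{\dim {\mathcal F} - j - 1}(\Sigma \cap {\mathcal F})$.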
Statement 2 of Proposition \ref{pro6} implies that all its non-trivial groups $E^{p,q}_r({\mathcal F}),$ $r\geq 1$, lie in the wedge (\ref{wedge}), in particular such groups with $p<-k$ are trivial if $p+q<(k+1)(W-\dim X + \dim G - \dim C\Lambda -1)$. Statement 3 of Proposition \ref{pro6} implies that all groups $E^{p,q}_1({\mathcal F})$ with $p \geq -k$ are as described by formula (\ref{absvf}).
The construction in the relative case and the proof of formula (\ref{relaf}) are the same with obvious modifications.
$\Box$
\begin{corollary}
\label{cor78}
The group $H_i(\mbox{\rm EM}_G(X,A; {\mathbb R}^W) \setminus \Sigma)$ is finitely generated for any $i$.
\end{corollary}
\noindent{\it Proof.} Choose a natural number $k$ such that $i <(k+1)(W-\dim X + \dim G -\dim C\Lambda-1)$; then, by Corollary \ref{cormain}, the number of independent generators of the group $H_i({\mathcal F} \setminus \Sigma)$ for any $k$-sufficient subspace ${\mathcal F}$ is effectively bounded by a finite number $\mu$. If the group $H_i(\mbox{\rm EM}_G(X,A; {\mathbb R}^W) \setminus \Sigma)$ were not finitely generated, we could choose some $\mu+1$ of its independent generators. By Proposition \ref{appro} they can be realized by cycles in some finite-dimensional subspace, and hence also by cycles in a $k$-sufficient subspace approximating it: a contradiction.
$\Box$
\subsection{Isomorphism and stabilization of spectral sequences}
In this section we prove that, in the stable domain of the $(p,q)$-plane, not only are the groups $E^{p,q}_1({\mathcal F})$ for all $k$-sufficient subspaces ${\mathcal F}$ isomorphic to one another, but the entire spectral sequences are isomorphic there as well.
\begin{proposition}
\label{cor17}
If $L\geq k(W+\dim X - \dim G + \dim C \Lambda) +1 $ then
all spectral sequences $E^{p,q}_r({\mathcal F})$ corresponding to $k$-sufficient $L$-dimensional affine subspaces ${\mathcal F}$ of $ \mbox{\rm EM}_G(X,A; {\mathbb R}^W)$ are isomorphic to one another in the domain of the $(p,q)$-plane where $p+q < (k+1)(W-\dim X + \dim G - \dim C\Lambda -1)$.
\end{proposition}
\noindent
{\it Proof.} The $k$-th term $\sigma_k({\mathcal F})$ of the simplicial resolution of $\Sigma \cap {\mathcal F}$ can be constructed as above for an arbitrary $k$-transversal (not necessarily $k$-sufficient) affine subspace ${\mathcal F}$, and its terms $\sigma_N({\mathcal F}) \setminus \sigma_{N-1}({\mathcal F})$ are described by statement 3 of Proposition \ref{pro6} as well. Denote by $E^r_{p,q}({\mathcal F},k)$ the homological spectral sequence approximating the group $\bar H_*(\sigma_k({\mathcal F}))$ and defined by our standard filtration of this simplicial resolution.
\begin{lemma}
\label{fam}
Let $\{{\mathcal F}_\tau\}$, $\tau \in [0,1]$, be a continuous family of $L$-dimen\-si\-onal $k$-transversal subspaces in $ \mbox{\rm EM}_G(X,A; {\mathbb R}^W)$. Then all spectral sequences $E_{p,q}^r({\mathcal F}_\tau,k)$ are isomorphic to one another.
\end{lemma}
\noindent
{\it Proof of Lemma \ref{fam}}. Denote by $\hat \sigma_k$ the space of all pairs $(\tau,z)$ where $\tau \in [0,1]$ and $z$ is a point of the term $\sigma_k({\mathcal F}_\tau)$ of the simplicial resolution of $\Sigma \cap {\mathcal F}_\tau$. This space admits an obvious projection $\pi:\hat \sigma_k \to [0,1]$ defined by the first elements of these pairs, and is filtered by the subspaces $\hat \sigma_N$, $N =1, \dots, k$. Consider the homological spectral sequence defined by this filtration and converging to the group $\bar H_*(\hat \sigma_k)$. For any $\tau \in [0,1]$, the inclusion $\sigma_k({\mathcal F}_\tau) \equiv \pi^{-1}(\tau) \cap \hat \sigma_k \hookrightarrow \hat \sigma_k$ defines a homomorphism of spectral sequences. By construction, for any $N=1, \dots, k$ the space $\hat \sigma_N \setminus \hat \sigma_{N-1}$ is a locally trivial fiber bundle over the segment $[0,1]$ with fibers $\sigma_N({\mathcal F}_\tau) \setminus \sigma_{N-1}({\mathcal F}_\tau)$. Therefore this homomorphism induces an isomorphism of the terms $E^1$, and hence it is an isomorphism of the entire spectral sequences. In particular, the spectral sequences $E_{p,q}^r({\mathcal F}_\tau,k)$ associated with all values $\tau \in [0,1]$ are isomorphic to one another.
$\Box$
By Statement 1 of Proposition \ref{ksuff}, under the conditions of Proposition \ref{cor17} the space of $k$-transversal $L$-dimensional affine subspaces of $ \mbox{EM} _G(X,A; {\mathbb R}^W)$
is path-connected if $L$ is large enough; therefore the corresponding spectral sequences $E^r_{p,q}({\mathcal F},k)$ are all isomorphic to one another. However, for $k$-sufficient subspaces ${\mathcal F}$ these spectral sequences are isomorphic to the (non-restricted) spectral sequences $E^r_{p,q}({\mathcal F})$ in the domain where $p+q\geq L-(k+1)(W-\dim X + \dim G - \dim C\Lambda -1)$, hence the latter spectral sequences are isomorphic to one another too, and the corresponding cohomological spectral sequences $E^{p,q}_r({\mathcal F})$ (related to them by (\ref{inver})) are likewise isomorphic to one another in the domain where $p+q < (k+1)(W-\dim X + \dim G - \dim C\Lambda -1)$.
$\Box$
Let us prove that these spectral sequences coincide in such a domain also for $k$-sufficient subspaces ${\mathcal F}$ of different sufficiently large dimensions.
\begin{lemma}[cf. \cite{Book}]
\label{lem18}
Let ${\mathcal F} \subset {\mathcal F}'$ be two $k$-sufficient subspaces in $ \mbox{EM} _G(X,A; {\mathbb R}^W)$, and ${\mathcal F}$ be in general position with the semialgebraic variety $\Sigma \cap {\mathcal F}'$ $($that is, the closure of ${\mathcal F}$ in the projective compactification of ${\mathcal F}'$ is transversal to the stratified subvariety consisting of $\Sigma \cap {\mathcal F}'$ and the hyperplane added at the compactification$)$. Then the corresponding spectral sequences $E^{p,q}_r({\mathcal F})$ and $E^{p,q}_r({\mathcal F}')$ converging to the cohomology groups of ${\mathcal F} \setminus \Sigma$ and ${\mathcal F}' \setminus \Sigma$ are isomorphic to one another in the domain where $p+q < (k+1)(W-\dim X + \dim G - \dim C\Lambda -1)$, and the identical embedding ${\mathcal F} \setminus \Sigma \subset {\mathcal F}' \setminus \Sigma$ induces an isomorphism of these cohomology groups up to dimension $(k+1)(W-\dim X + \dim G - \dim C\Lambda -1)-1$.
\end{lemma}
\noindent
{\it Proof.}
By the Thom isotopy lemma (see e.g. \cite{GM}) there is a tubular neighborhood $U$ of ${\mathcal F}$ in ${\mathcal F}'$ such that the pair of stratified varieties $(U, \Sigma \cap U)$ is homeomorphic to the product of the pair $({\mathcal F}, \Sigma \cap {\mathcal F})$ and an open ball of dimension \ $\dim {\mathcal F}' -\dim {\mathcal F}$. The group $H^*(U \setminus \Sigma)$ can be calculated using a spectral sequence constructed in exactly the same way as $E^{p,q}_r({\mathcal F}')$, but with the terms $\sigma_N({\mathcal F}' ) \subset (X/G)^{\star T} \times {\mathcal F}'$ replaced by their intersections with the subset $(X/G)^{\star T} \times U$. The space $\sigma(U) $ of the simplicial resolution appearing in this construction is homeomorphic as a filtered space to the direct product of $\sigma({\mathcal F})$ and an open ball of dimension $\dim {\mathcal F}' -\dim {\mathcal F}$, hence the corresponding spectral sequence $E^{p,q}_r(U)$ is isomorphic to $E^{p,q}_r({\mathcal F})$. On the other hand, the inclusion $U \hookrightarrow {\mathcal F}'$ induces a natural homomorphism of the homological spectral sequences $E_{p,q}^r({\mathcal F}' ) \to E_{p,q}^r(U)$. It is an isomorphism of $E_{p,q}^1$ for $p \leq k$. Indeed, both terms $\sigma_p(U) \setminus \sigma_{p-1}(U)$ and $\sigma_p({\mathcal F}') \setminus \sigma_{p-1}({\mathcal F}')$ are the spaces of fiber bundles over one and the same base; the fibers of the latter bundle are some affine spaces, and fibers of the former one are open balls in these affine spaces.
All non-trivial terms $E_{p,q}^1$ of these spectral sequences with $p>k$ lie in the domain where $p+q \leq \dim {\mathcal F}'-(k+1)(W-\dim X + \dim G- \dim C\Lambda -1)-1$. Therefore neither these groups nor any higher differentials from them affect the groups $E_{p,q}^r$ with $p+q > \dim {\mathcal F}' - (k+1)(W-\dim X + \dim G - \dim C\Lambda -1)-1$ or, which is the same, the terms $E_r^{p,q}$ of the cohomological spectral sequence with $p+q < (k+1)(W-\dim X + \dim G - \dim C\Lambda -1).$
$\Box$
Theorems \ref{absv} and \ref{relav} follow immediately from Propositions \ref{appro}, \ref{ksuff}, \ref{pro6} (with Corollaries \ref{cormain} and \ref{cor78}), \ref{cor17}, and Lemma \ref{lem18}. Namely, for any natural number $T$ the desired spectral sequence $E^{p,q}_r$ coincides, in the domain where $p+q < T$, with the spectral sequence $E^{p,q}_r({\mathcal F})$ constructed in \S \ref{sre}, where ${\mathcal F}$ is an arbitrary $k$-sufficient affine subspace in $ \mbox{EM} _G(X,A; {\mathbb R}^W)$ whose dimension and the number $k$ are large enough with respect to $T$.
\subsection{Which equivariant function spaces can be reduced to the previous pattern?}
\label{which}
In the case of the trivial group $G$, the previous scheme allows one to calculate the cohomology groups of the spaces of continuous maps $X \to Y$ where $X$ is $m$-dimensional and $Y$ is an $m$-connected finite CW-complex. Indeed, we can embed $Y$ into a sphere $S^{W-1}$ as a deformation retract of the complement of a subcomplex $\Lambda$ of codimension $\geq m+2$ in $S^{W-1}$, so that the spaces of maps to $Y$ and to ${\mathbb R}^W \setminus C\Lambda$ are homotopy equivalent to one another. However, the proof of this fact known to me (see \cite{Book}) does not immediately extend to the case of non-trivial group actions.
On the other hand, it seems likely that even if we cannot realize this strategy geometrically, it can sometimes give us a hint about what a spectral sequence calculating the desired cohomology groups might look like, and the limit cohomology group of this virtual spectral sequence could be a correct guess at the answer. More precisely, suppose that $X$ is $m$-dimensional, $Y$ is $m$-connected, $G$ is a finite group acting freely on both $X$ and $Y$, and we study the homology of the space of $G$-equivariant maps $X \to Y.$
It is easy to construct a representation
$\rho: G \to \mbox{O}({\mathbb R}^W)$ with sufficiently large $W$ and also an embedding of $Y$ into the unit sphere $S^{W-1} \subset {\mathbb R}^W$ commuting with the actions of $G$. For instance, we can take for ${\mathbb R}^W$ a sufficiently large Cartesian power of the space of the regular representation of $G$. Increasing this power, we can make the codimension of the union of non-regular orbits arbitrarily large. The complement of this union in $S^{W-1}$ together with the $G$-action on it can then be made to homotopically approximate arbitrarily well the universal covering of $K(G,1)$. If the codimension of this union of non-regular orbits is greater than \ $\dim Y$, then there are no obstructions to embedding the space $Y/G$ into the quotient of this complement by this $G$-action so that after lifting the embedding to the $G$-covering we get a $G$-equivariant embedding $Y \to S^{W-1}$.
To solve our problem using the spectral sequence described above, we need to find a compact subcomplex $\Lambda \subset S^{W-1} \setminus Y$ invariant under our $G$-action and such that
a) $\Lambda$ contains all non-regular orbits of our $G$-action;
b) the complex $Y \subset S^{W-1}$ is a $G$-equivariant deformation retract of $S^{W-1} \setminus \Lambda$;
c) the codimension of $\Lambda$ in $S^{W-1}$ is at least $m +1$.
The main difficulty is with the property (c). The proof that for trivial $G$ there exists a $\Lambda$ with properties (a)-(c) (see
\cite{Book}) does not seem to generalize to $G \neq \{1\}$ in a straightforward way. Indeed, if we try to apply it, we need to cancel the critical points with high Morse indices of $G$-invariant functions on the manifold $S^{W-1} \setminus Y$. This is essentially equivalent to doing the same for usual functions on the space of regular $G$-orbits, which is not simply-connected if $G$ is non-trivial, and the standard methods (see e.g. \cite{hcob}) may not work. Whether or not such a representation $\rho: G \to \mbox{O}({\mathbb R}^W)$ and a subcomplex $\Lambda$ exist for arbitrary $G$ seems to be a good problem.
However, even if we are unable to find $\rho$ and $\Lambda$ with these properties, let us take $\Lambda$ to be an arbitrary subcomplex that only satisfies conditions (a) and (b), and then construct the complexes $\Pi_{N,G}(X)$ and write down our spectral sequence exactly as in the previous subsections. By Alexander duality, the {\it homological} dimension of $\Lambda$ does not exceed $W-m-2$, therefore the term $E^{p,q}_1$ of our spectral sequence looks as before; in particular, it has only finitely many non-trivial groups on any diagonal of the form $\{p+q= \mbox{const} \}$, and all these groups are finitely generated. It seems likely that this spectral sequence will still calculate the desired cohomology groups of the space of equivariant maps, even though our geometrical realization of it does not work in this case.
I thank F.~Cohen and V.~Turchin for advice and conversations.
\end{document}
\begin{document}
\title{\bfseries\scshape Color Hom-Akivis algebras, Color Hom-Leibniz algebras and Modules over Color Hom-Leibniz algebras}
\begin{abstract}
In this paper we introduce color Hom-Akivis algebras and prove that the commutator of any color non-associative Hom-algebra structure
map leads to a color Hom-Akivis algebra. We give various constructions of color Hom-Akivis algebras. Next we study flexible and alternative
color Hom-Akivis algebras. As for color Hom-Akivis algebras, we introduce non-commutative color Hom-Leibniz-Poisson algebras
and present several constructions. Moreover, we give the relationship between Hom-dialgebras and Hom-Leibniz-Poisson algebras:
every Hom-dialgebra gives rise to a Hom-Leibniz-Poisson algebra. Finally, we show that twisting a color Hom-Leibniz module
structure map by a color Hom-Leibniz algebra endomorphism yields another such module structure.
\end{abstract}
{\bf Subject Classification:} 17A30, 17D10, 16W30.
\noindent
{\bf Keywords:} Color Hom-Akivis algebras, Non-commutative Hom-Leibniz-Poisson algebra, Non-commutative Hom-Leibniz-Poisson modules,
Hom-dialgebras.
\section{Introduction}
Hom-algebraic structures appeared first as a generalization of Lie algebras in \cite{NH}, where the authors
studied $q$-deformations of the Witt and Virasoro algebras.
Other interesting Hom-type analogues of many classical structures have been studied: Hom-associative algebras, Hom-Lie admissible
algebras and more general $G$-Hom-associative algebras (\cite{AS2}), $n$-ary Hom-Nambu-Lie algebras (\cite{FSA}),
Hom-Lie admissible Hom-coalgebras and Hom-Hopf algebras (\cite{AM}), and Hom-alternative algebras, Hom-Malcev algebras and Hom-Jordan
algebras (\cite{DY3}). Hom-algebraic structures were extended to the case of $G$-graded Lie algebras through the study of Hom-Lie superalgebras and
Hom-Lie admissible superalgebras in \cite{MM}, color Hom-Lie algebras (\cite{KA}) and color Hom-Poisson algebras (\cite{IB1}).
In \cite{LY}, Yuan presents some constructions of quadratic color Hom-Lie algebras, which are used to provide several examples.
$T^*$-extensions and central extensions of color Hom-Lie algebras and some cohomological characterizations of them are established in \cite{FIS}.
Akivis algebras were introduced by M. A. Akivis (\cite{AMA}) as a tool in the study of some aspects of web geometry and its connection
with loop theory. These algebras were originally called ``W-algebras'' (\cite{AMA1}). Later, Hofmann and Strambach (\cite{HS}) introduced
the term ``Akivis algebras'' for such algebraic objects.
Hom-Akivis algebras were introduced in \cite{NI1}, where it is shown that the commutator of a non-Hom-associative algebra
leads to a Hom-Akivis algebra. It is also proved there that Hom-Akivis algebras can be obtained from Akivis algebras
by twisting along algebra endomorphisms, and that the class of Hom-Akivis algebras is closed under self-morphisms.
The connection between Hom-Akivis algebras and Hom-alternative algebras is given as well.
A non-commutative version of Lie algebras was introduced by Loday in \cite{JL}. The bracket of the so-called Leibniz algebras
satisfies the Leibniz identity, which, combined with skew-symmetry, is a variation of the Jacobi identity;
hence Lie algebras are exactly the skew-symmetric Leibniz algebras.
In the Leibniz setting, the objects that play the role of associative algebras are called dialgebras, which were
introduced by Loday in \cite{JL1}. Dialgebras have two associative binary operations that
satisfy three additional associative-type axioms.
The relationship between dialgebras and Leibniz algebras is analyzed in \cite{JMC}.
It is proved in \cite{SMR} that from any Leibniz algebra $L$ one can construct a Leibniz-Poisson algebra $A$ whose properties
are close to those of $L$. Leibniz algebras were extended to Hom-Leibniz algebras in \cite{AS1}.
A (co)homology theory of Hom-Leibniz algebras and an initial
study of universal central extensions of Hom-Leibniz algebras were given in \cite{YY}.
Representations of Hom-Leibniz algebras are introduced in \cite{JIR}, where the homology of Hom-Leibniz algebras is computed.
Recently Leibniz superalgebras were studied in \cite{KSM}.
Hom-dialgebras were introduced in \cite{DY1} as an extension of
some work of Loday (\cite{JL1}). In a Hom-dialgebra, there are two binary operations that satisfy five
$\alpha$-twisted associative-type axioms. Each Hom-Lie algebra can be thought of as a Hom-Leibniz algebra. Likewise, to
every Hom-associative algebra is associated a Hom-dialgebra in which both binary
operations are equal to the original one.
The purpose of this paper is to study color Hom-Akivis algebras (the graded version of the Hom-Akivis algebras of \cite{NI1}), color
non-commutative Hom-Leibniz-Poisson algebras and modules over color Hom-Leibniz algebras.
Section 2 is dedicated to the background material on graded vector spaces, bicharacters and non-Hom-associative algebras.
In Section 3, we define color Hom-Akivis algebras, the graded version of Hom-Akivis algebras,
and study various properties of them. More precisely, we show that the commutator
of a color non-Hom-associative algebra structure map leads to a color Hom-Akivis algebra (Theorem \ref{n3}).
Next, we provide some constructions of color Hom-Akivis algebras,
on the one hand from color Akivis algebras (Proposition \ref{n5}) and on the other hand from a given color Hom-Akivis algebra
(Theorem \ref{n4}). Flexible and Hom-alternative color Hom-Akivis algebras are also treated (Theorem \ref{n6}).
Section 4 is devoted to the study of color Hom-Leibniz algebras: we define color Hom-Leibniz algebras
and give their basic properties (Proposition \ref{n1}). Next, we introduce non-commutative
Hom-Leibniz-Poisson algebras (color NHLP-algebras for short). As in the previous section,
we give several twistings of non-commutative Hom-Leibniz-Poisson algebras (Theorem \ref{n1}).
Finally, we give the relationship between Hom-dialgebras and Hom-Leibniz-Poisson algebras (Theorem \ref{diax}).
In Section 5, we prove that the Yau twisting of module Hom-algebras (\cite[Lemma 2.5]{DY2}) works for modules over color
Hom-Leibniz algebras (Theorem \ref{mcl}), just as it does for modules over Hom-Lie algebras (\cite{BI}),
modules over color Hom-Lie algebras (\cite{IB1}) and modules over color Hom-Poisson algebras (\cite{IB1}).
Throughout this paper, $\bf K$ denotes a field of characteristic $0$ and $G$ is an abelian group.
\section{Preliminaries}
\begin{definition}
A vector space $V$ is said to be $G$-graded if there exists a family $(V_a)_{a\in G}$
of vector subspaces of $V$ such that $\displaystyle V=\bigoplus_{a\in G} V_a$.
An element $x\in V$ is said to be homogeneous of degree $a\in G$ if $x\in V_a$. We denote by $\mathcal{H}(V)$ the set of all homogeneous
elements in $V$.
\end{definition}
\begin{definition}
Let $\displaystyle V=\bigoplus_{a\in G} V_a$ and $\displaystyle V'=\bigoplus_{a\in G} V'_a$ be two $G$-graded
vector spaces, and $b\in G$. A linear map $f : V\to V'$ is said to be homogeneous of degree $b$ if
$f(V_a)\subseteq V'_{a+b}$, for all $a\in G$. If $f$ is homogeneous of degree zero, i.e. $f(V_a)\subseteq V'_{a}$
holds for any $a\in G$, then $f$ is said to be even.
\end{definition}
\begin{definition}
An algebra $(A, \mu)$ is said to be $G$-graded if its underlying vector space is $G$-graded and if, furthermore,
$\mu(A_a, A_b)\subseteq A_{a+b}$, for all $a, b\in G$. Let $A'$ be another $G$-graded algebra. A morphism $f : A\rightarrow A'$
of $G$-graded algebras is by definition an algebra morphism from $A$ to $A'$ which is, in addition an even mapping.
\end{definition}
\begin{definition}
Let $G$ be an abelian group. A map $\varepsilon :G\times G\rightarrow {\bf K^*}$ is called a skew-symmetric bicharacter on $G$
if the following identities hold: for all $a, b, c\in G$,
\begin{enumerate}
\item [(i)] $\varepsilon(a, b)\varepsilon(b, a)=1$;
\item [(ii)] $\varepsilon(a, b+c)=\varepsilon(a, b)\varepsilon(a, c)$;
\item [(iii)]$\varepsilon(a+b, c)=\varepsilon(a, c)\varepsilon(b, c)$.
\end{enumerate}
\end{definition}
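Two standard examples of skew-symmetric bicharacters, recorded here only for illustration: for $G={\mathbb Z}_2$,
\begin{displaymath}
\varepsilon(a, b)=(-1)^{ab},
\end{displaymath}
which recovers the sign rule of superalgebras, and for $G={\mathbb Z}_2\times{\mathbb Z}_2$,
\begin{displaymath}
\varepsilon\big((a_1, a_2), (b_1, b_2)\big)=(-1)^{a_1 b_2-a_2 b_1}.
\end{displaymath}
Conditions (i)--(iii) are verified directly in both cases.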
\begin{definition}
\begin{enumerate}
\item A multiplicative color non-Hom-associative (i.e. not necessarily Hom-associative) algebra is a quadruple
$(A, \mu, \varepsilon, \alpha_A)$ such that
\begin{enumerate}
\item $A$ is a $G$-graded vector space;
\item $\mu : A\otimes A\rightarrow A$ is an even bilinear map;
\item $\varepsilon : G\times G\rightarrow {\bf K}^*$ is a bicharacter;
\item $\alpha_A$ is an endomorphism of $(A,\mu)$ (multiplicativity).
\end{enumerate}
\item $(A, \mu, \varepsilon, \alpha_A)$ is said to be $\varepsilon$-skew-symmetric (resp. $\varepsilon$-commutative) if, for any $x, y\in \mathcal{H}(A)$,
$\mu(x, y)=-\varepsilon(x, y)\mu(y, x)$ (resp. $\mu(x, y)=\varepsilon(x, y)\mu(y, x)$).
\item
Let $(A, \mu, \varepsilon, \alpha_A)$ be a color non-Hom-associative algebra. For any $x, y, z\in \mathcal{H}(A)$, the Hom-associator is defined by
\begin{eqnarray}
as(x, y, z)=\mu(\mu(x, y), \alpha_A(z))-\mu(\alpha_A(x), \mu(y, z)).
\end{eqnarray}
Then $(A, \mu, \varepsilon, \alpha_A)$ is said to be
\begin{enumerate}
\item Hom-flexible, if $as(x, y, x)=0$;
\item Hom-alternative, if $as(x, y, z)$ is $\varepsilon$-skew-symmetric in $x, y, z$.
\end{enumerate}
\end{enumerate}
\end{definition}
\section{Color Hom-Akivis algebras}
Hom-Akivis algebras were introduced in \cite{NI1} as a twisted generalization of Akivis algebras. In the following, we generalize some
of these results. We prove that the commutator of a color non-Hom-associative algebra leads to a color Hom-Akivis algebra, and some properties
of color Hom-Akivis algebras are studied.
\begin{definition}\label{hai}
A color Hom-Akivis algebra is a quintuple $(A, [-, -], [-, -, -], \varepsilon, \alpha_A)$ consisting of a $G$-graded vector space $A$,
an even $\varepsilon$-skew-symmetric bilinear map $[-, -]$, an even trilinear map $[-, -, -]$ and an even linear map
$\alpha_A : A\rightarrow A$ such that for all $x, y, z\in \mathcal{H}(A)$,
\begin{equation}
\oint\varepsilon(z, x)[[x, y], \alpha_A(z)]=\oint\varepsilon(z, x)\Big([x, y, z]-\varepsilon(x, y)[y, x, z]\Big). \label{hai1}
\end{equation}
If in addition, $\alpha_A$ is an endomorphism with respect to $[-,-]$ and $[-,-,-]$, the color Hom-Akivis algebra $A$
is said to be multiplicative.
\end{definition}
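In particular, every color Hom-Lie algebra $(A, [-, -], \varepsilon, \alpha_A)$ becomes a color Hom-Akivis algebra with trivial ternary bracket: taking $[-, -, -]=0$, the identity (\ref{hai1}) reduces to the $\varepsilon$-Hom-Jacobi identity
\begin{displaymath}
\oint\varepsilon(z, x)[[x, y], \alpha_A(z)]=0.
\end{displaymath}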
\begin{theorem}\label{n3}
Let $(A, \mu, \varepsilon, \alpha_A)$ be a color non-Hom-associative algebra. Define the maps $[-, -]: A\otimes A\rightarrow A$ and
$[-, -, -]_{\alpha_A} : A\otimes A\otimes A\rightarrow A$ as follows: for all $x,y,z\in A$,
\begin{eqnarray}
[x, y] &:=& \mu(x, y)-\varepsilon(x, y)\mu(y, x), \cr
[x, y, z]_{\alpha_A}&:=&as(x, y, z).\nonumber
\end{eqnarray}
Then $(A, [-, -], [-, -, -]_{\alpha_A}, \varepsilon, \alpha_A)$ is a multiplicative color Hom-Akivis algebra.
\end{theorem}
\begin{proof}
The color Hom-Akivis identity (\ref{hai1}) is proved by a direct computation. For any homogeneous elements $x, y, z\in A$, we have
\begin{eqnarray}
\varepsilon(z, x)[[x, y], \alpha_A(z)]&=&\varepsilon(z, x)(xy)\alpha_A(z)-\varepsilon(y, z)\alpha_A(z)(xy)\nonumber\\
&&-\varepsilon(z, x)\varepsilon(x, y)(yx)\alpha_A(z)+\varepsilon(x, y)\varepsilon(y, z)\alpha_A(z)(yx).\nonumber
\end{eqnarray}
And,
\begin{eqnarray}
\varepsilon(z, x)([x, y, z]-\varepsilon(x, y)[y, x, z])&=&\varepsilon(z, x)(xy)\alpha_A(z)-\varepsilon(z, x)\alpha_A(x)(yz)\nonumber\\
&&-\varepsilon(z, x)
\varepsilon(x, y)(yx)\alpha_A(z)+\varepsilon(z, x)\varepsilon(x, y)\alpha_A(y)(xz).\nonumber
\end{eqnarray}
Expanding
$\oint\varepsilon(z, x)[[x, y], \alpha_A(z)]$ and $\oint\varepsilon(z, x)([x, y, z]-\varepsilon(x, y)[y, x, z])$, one gets (\ref{hai1})
and so the conclusion holds.
The multiplicativity follows from that of $(A, \mu, \varepsilon, \alpha_A)$.
\end{proof}
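The ungraded, untwisted special case of Theorem \ref{n3} (trivial grading, $\varepsilon\equiv 1$, $\alpha_A=\mathrm{id}$) is the classical Akivis identity, and it can be checked numerically. The following sketch (illustrative only; the random structure constants and all names are hypothetical, not part of the paper) verifies it in Python.

```python
import random

random.seed(0)
n = 4  # dimension of a random test algebra (arbitrary illustrative choice)

# Random structure constants: mu(e_i, e_j) = sum_k C[i][j][k] e_k
C = [[[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
     for _ in range(n)]

def mul(x, y):
    """Bilinear product determined by the structure constants C."""
    return [sum(C[i][j][k] * x[i] * y[j] for i in range(n) for j in range(n))
            for k in range(n)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def bracket(x, y):      # commutator [x, y] = xy - yx  (epsilon = 1)
    return sub(mul(x, y), mul(y, x))

def assoc(x, y, z):     # associator as(x, y, z) = (xy)z - x(yz)  (alpha = id)
    return sub(mul(mul(x, y), z), mul(x, mul(y, z)))

x, y, z = ([random.uniform(-1, 1) for _ in range(n)] for _ in range(3))

lhs = [0.0] * n         # cyclic sum of [[x, y], z]
rhs = [0.0] * n         # cyclic sum of as(x, y, z) - as(y, x, z)
for a, b, c in ((x, y, z), (y, z, x), (z, x, y)):
    lhs = add(lhs, bracket(bracket(a, b), c))
    rhs = add(rhs, sub(assoc(a, b, c), assoc(b, a, c)))

err = max(abs(u - v) for u, v in zip(lhs, rhs))
assert err < 1e-9       # the classical Akivis identity holds
```

The assertion reflects the identity $\oint[[x,y],z]=\oint\big(as(x,y,z)-as(y,x,z)\big)$, which holds exactly; only floating-point round-off is tolerated.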
\begin{definition}
Let $(A, [-, -], [-, -, -], \varepsilon, \alpha_A)$ and $(\tilde A, \{-, -\}, \{-, -, -\}, \varepsilon, \alpha_{\tilde A})$ be two color
Hom-Akivis algebras. A morphism $f : A\rightarrow\tilde A$ of color Hom-Akivis algebras is an even linear map of $G$-graded vector spaces
$A$ and $\tilde A$ such that
\begin{eqnarray}
f([x, y])&=&\{f(x), f(y)\},\nonumber\\
f([x, y, z])&=&\{f(x), f(y), f(z)\}.\nonumber
\end{eqnarray}
\end{definition}
For example, if $(A, [-, -], [-, -, -], \varepsilon, \alpha_A)$ is a multiplicative color Hom-Akivis algebra, then the twisting map
$\alpha_A$ is itself an endomorphism of $(A, [-, -], [-, -, -], \varepsilon)$.
The following theorem is the color version of Theorem 4.4 in \cite{NI1}.
\begin{theorem}\label{n4}
Let $A_{\alpha_A}=(A, [-, -], [-, -, -], \varepsilon, \alpha_A)$ be a color Hom-Akivis algebra and
$\beta : A\to A$ an even endomorphism of $(A, [-, -], [-, -, -], \varepsilon)$.
Then, for any integer $n\geq 1$, $A_{\beta^n}=(A, [-, -]_{\beta^n}, [-, -, -]_{\beta^{n}}, \varepsilon,\beta^n\circ\alpha_A)$
is a color Hom-Akivis algebra, where for all $x, y, z\in \mathcal{H}(A)$,
\begin{eqnarray}
[x, y]_{\beta^n}&:=&{\beta^n}([x, y]),\cr
[x, y, z]_{\beta^n}&:=&{\beta^{2n}}([x, y, z]). \nonumber
\end{eqnarray}
Moreover, if $A_{\alpha_A}$ is multiplicative and $\beta$ commutes with $\alpha_A$, then $A_{\beta^n}$ is multiplicative.
\end{theorem}
\begin{proof}
It is clear that $[-, -]_{\beta^n}$ and $[-, -, -]_{\beta^n}$ are bilinear and trilinear respectively. It is also clear that
the $\varepsilon$-skew-symmetry of $[-, -]_{\beta^n}$ comes from that of $[-, -]$. It remains to prove the color Hom-Akivis identity
(\ref{hai1}) and the multiplicativity of $A_{\beta^n}$. For any $x, y, z\in \mathcal{H}(A)$, we have,
\begin{eqnarray}
\oint\varepsilon(z, x)[[x, y]_{\beta^n}, (\beta^n\circ\alpha_A)(z)]_{\beta^n}
&=&\oint\varepsilon(z, x)\beta^n[\beta^n[x, y], (\beta^n\circ\alpha_A)(z)]\nonumber\\
&=&\oint\varepsilon(z, x)\beta^{2n}[[x, y], \alpha_A(z)]\nonumber\\
&=&\beta^{2n}\oint \varepsilon(z, x)([x, y, z]-\varepsilon(x, y)[y, x, z])\nonumber\\
&=&\oint \varepsilon(z, x)(\beta^{2n}[x, y, z]-\varepsilon(x, y)\beta^{2n}[y, x, z])\nonumber\\
&=&\oint \varepsilon(z, x)([x, y, z]_{\beta^{n}}-\varepsilon(x, y)[y, x, z]_{\beta^{n}}).\nonumber
\end{eqnarray}
The multiplicativity is proved similarly as in \cite[Theorem 4.4]{NI1}.
\end{proof}
\begin{corollary}\label{n2}
Let $(A, [-, -], [-, -, -], \varepsilon)$ be a color Akivis algebra and $\beta : A\rightarrow A$ an even endomorphism. Then
$(A, [-, -]_{\beta^n}, [-, -, -]_{\beta^{n}}, \varepsilon, \beta^n)$ is a multiplicative color Hom-Akivis algebra.
Moreover, suppose that $(\tilde A, \{-, -\}, \{-, -, -\}, \varepsilon)$ is another color Akivis algebra and $\tilde\beta$
an even endomorphism of $\tilde A$. If $f : A\rightarrow \tilde A$ is a morphism of color Akivis algebras such that
$f\circ \beta={\tilde\beta}\circ f$, then
$f : (A, [-, -]_{\beta^n}, [-, -, -]_{\beta^n}, \varepsilon, \beta^n)\rightarrow (\tilde A, \{-, -\}_{\tilde\beta^n}, \{-, -, -\}_{\tilde\beta^n}, \varepsilon, \tilde\beta^n)$
is a morphism of multiplicative color Hom-Akivis algebras.
\end{corollary}
\begin{proof}
It is similar to that of \cite[Corollary 4.5]{NI1}.
\end{proof}
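As a numerical illustration of Corollary \ref{n2} (under simplifying assumptions: trivial grading, zero ternary bracket, $n=1$), one can twist the Lie algebra $({\mathbb R}^3, \times)$ by a rotation $\beta$ about the $z$-axis, which is an automorphism of the cross product, and check that the twisted bracket $\beta\circ\times$ satisfies the Hom-Jacobi identity with structure map $\beta$. The sketch below is not part of the paper.

```python
import math
import random

random.seed(1)

def cross(a, b):
    """so(3) bracket on R^3: [a, b] = a x b."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

t = 0.7  # rotation angle (arbitrary choice)
beta = [[math.cos(t), -math.sin(t), 0.0],   # rotation about the z-axis:
        [math.sin(t),  math.cos(t), 0.0],   # an automorphism of (R^3, x)
        [0.0, 0.0, 1.0]]

def apply_m(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def tw_bracket(x, y):   # twisted bracket [x, y]_beta = beta([x, y])
    return apply_m(beta, cross(x, y))

x, y, z = ([random.uniform(-1, 1) for _ in range(3)] for _ in range(3))

# Hom-Jacobi identity for (R^3, [.,.]_beta, beta): the cyclic sum
# of [[x, y]_beta, beta(z)]_beta must vanish.
jac = [0.0, 0.0, 0.0]
for a, b, c in ((x, y, z), (y, z, x), (z, x, y)):
    term = tw_bracket(tw_bracket(a, b), apply_m(beta, c))
    jac = [u + v for u, v in zip(jac, term)]

assert max(abs(u) for u in jac) < 1e-9
```

Since $\beta\in SO(3)$ commutes with the cross product, each cyclic summand equals $\beta^2\big((a\times b)\times c\big)$, and the sum vanishes by the ordinary Jacobi identity.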
In the ungraded case, the above construction is used in \cite{NI1} to produce examples of Hom-Akivis algebras.
\begin{proposition}\label{n5}
Let $(A, [-, -], [-, -, -], \varepsilon)$ be a color Akivis algebra and $\beta : A\rightarrow A$ an even endomorphism. Define
$[-, -]_{\beta^n}$ and $[-, -, -]_{\beta^{n}}$ by: for all $x, y, z\in \mathcal{H}(A)$,
\begin{eqnarray}
[x, y]_{\beta^n}&:=&{\beta}([x, y]_{\beta^{n-1}}),\cr
[x, y, z]_{\beta^n}&:=&{\beta^{2}}([x, y, z]_{\beta^{n-1}}),\nonumber
\end{eqnarray}
Then $(A, [-, -]_{\beta^n}, [-, -, -]_{\beta^{n}}, \varepsilon, \beta^n)$ is a multiplicative color Hom-Akivis algebra.
\end{proposition}
\begin{proof}
It is proved by induction, applying Corollary \ref{n2}. The reader may also see \cite[Theorem 4.8]{NI1} for the proof.
\end{proof}
\begin{definition}
A color Hom-Akivis algebra $(A, [-, -], [-, -, -], \varepsilon, \alpha_A)$ is said to be
\begin{enumerate}
\item Hom-flexible, if $[x, y, x]=0$ for all $x, y\in A$;
\item Hom-alternative, if $[-, -, -]$ is $\varepsilon$-alternating (i.e. $[-, -, -]$ is $\varepsilon$-skew-symmetric
in any pair of variables).
\end{enumerate}
\end{definition}
\begin{theorem}\label{n6}
Let $(A, \mu, \varepsilon, \alpha_A)$ be a color non-Hom-associative algebra and the quintuple
$(A, [-, -], as(-, -, -), \varepsilon, \alpha_A)$
its associated color Hom-Akivis algebra (as in Theorem \ref{n3}). Then we have the following:
\begin{enumerate}
\item if $(A, \mu, \varepsilon, \alpha_A)$ is Hom-flexible, then $(A, [-, -], as(-, -, -), \varepsilon, \alpha_A)$ is Hom-flexible;
\item if $(A, \mu, \varepsilon, \alpha_A)$ is Hom-alternative, then so is $(A, [-, -], as(-, -, -), \varepsilon, \alpha_A)$.
\end{enumerate}
\end{theorem}
\begin{proposition}
Let $\mathcal{A}_{\alpha_A}=(A, [-, -], [-, -, -], \varepsilon, \alpha_A)$ be a Hom-flexible color Hom-Akivis algebra. Then we have
\begin{eqnarray}
\oint\varepsilon(z, x)[\alpha_A(x), [y, z]]=\oint\Big(\varepsilon(z, x)+\varepsilon(x, y)\varepsilon(y, z)\Big)[x, y, z].\label{cvf}
\end{eqnarray}
In particular, $\mathcal{A}_{\alpha_A}$ is a color Hom-Lie algebra if and only if
\begin{eqnarray}
\oint\Big(\varepsilon(z, x)+\varepsilon(x, y)\varepsilon(y, z)\Big)[x, y, z]=0.
\end{eqnarray}
\end{proposition}
\begin{proof}
By expanding the left hand side of color Hom-Akivis identity (\ref{hai1}), and using the Hom-flexibility we get (\ref{cvf}).
\end{proof}
For more details on color Hom-Lie algebras, see \cite{KA}, \cite{IB1}, \cite{LY}.
\section{Color Hom-Leibniz algebras}
In this section, we write NHLP-algebra (resp. NLP-algebra) for non-commutative Hom-Leibniz-Poisson algebra
(resp. non-commutative Leibniz-Poisson algebra). Interrelations between Hom-dialgebras and NHLP-algebras are presented.
\subsection{Color NHLP-algebras}
This subsection is devoted to color Hom-Leibniz algebras. Color NHLP-algebras are introduced, and their
various twistings are given, together with examples.
\begin{definition}
A color Hom-Leibniz algebra is a quadruple $(L, \mu, \varepsilon, \alpha_L)$, consisting of a graded vector space $L$, an even bilinear map
$\mu : L\otimes L\to L$, a bicharacter $\varepsilon : G\otimes G\rightarrow {\bf K}^*$ and an even linear space homomorphism
$\alpha_L : L\rightarrow L$ satisfying
\begin{eqnarray}
\mu\Big(\alpha_L(x), \mu(y, z)\Big)=\mu\Big(\mu(x, y), \alpha_L(z)\Big)+\varepsilon(x, y)\mu\Big(\alpha_L(y), \mu(x, z)\Big). \label{cla2}
\end{eqnarray}
A morphism of color Hom-Leibniz algebras is an even linear map which preserves the structures.
\end{definition}
\begin{remark}
\begin{enumerate}
\item When $G$ is an abelian group with trivial grading, we obtain the following Hom-Leibniz identity (\cite{NI1}):
\begin{eqnarray}
\mu\Big(\alpha_L(x), \mu(y, z)\Big)=\mu\Big(\mu(x, y), \alpha_L(z)\Big)+\mu\Big(\alpha_L(y), \mu(x, z)\Big). \label{cla}
\end{eqnarray}
\item The original definition of Hom-Leibniz algebras (\cite{AS1}) is related to the identity
\begin{eqnarray}
\mu\Big(\mu(x, y), \alpha_L(z)\Big)=\mu\Big(\alpha_L(x), \mu(y, z)\Big)+\mu\Big(\mu(x, z), \alpha_L(y)\Big) \label{cla4}
\end{eqnarray}
which is expressed in terms of the (right) adjoint homomorphisms $\mathrm{ad}_x y:=\mu(x, y)$ of $(A, \mu, \alpha_A)$. This justifies the term
``(right) Hom-Leibniz algebra'' that could be used for the Hom-Leibniz algebras defined in \cite{AS1}.
The dual of (\ref{cla}) is (\ref{cla4}), and in this paper we consider only left color Hom-Leibniz algebras.
\end{enumerate}
\end{remark}
We have the following result.
\begin{proposition}\label{n7}
Let $(L, \mu, \varepsilon, \alpha_L)$ be a color Hom-Leibniz algebra. Then
\begin{eqnarray}
\Big(x\cdot y+\varepsilon(x, y)\, y\cdot x\Big)\cdot\alpha_L(z)=0, \label{ia}
\end{eqnarray}
\begin{eqnarray}
[x\cdot y, \alpha_L(z)]+\varepsilon(x, y)[\alpha_L(y), x\cdot z]=\alpha_L(x)\cdot[y, z],
\end{eqnarray}
where we have set $\mu(x, y)=x\cdot y$ and $[x, y]=x\cdot y-\varepsilon(x, y)y\cdot x$.
\end{proposition}
\proof
The identity (\ref{cla2}) implies that
\begin{eqnarray}
(x\cdot y)\cdot\alpha_L(z) = \alpha_L(x)\cdot(y\cdot z)-\varepsilon(x, y)\alpha_L(y)\cdot(x\cdot z) \nonumber
\end{eqnarray}
Likewise, interchanging $x$ and $y$, we have
\begin{eqnarray}
(y\cdot x)\cdot\alpha_L(z) = \alpha_L(y)\cdot(x\cdot z)-\varepsilon(y, x)\alpha_L(x)\cdot(y\cdot z)\nonumber
\end{eqnarray}
Then, multiplying the second equality by $\varepsilon(x, y)$ and adding it to the first one, we obtain (\ref{ia}). Next, by direct
computation, we have
\begin{eqnarray}
[x\cdot y, \alpha_L(z)]+\varepsilon(x, y)[\alpha_L(y), x\cdot z]\!\!\!\!
&=&\!\!\!\!(x\cdot y)\cdot\alpha_L(z)-\varepsilon(x, z)\varepsilon(y, z)\alpha_L(z)\cdot(x\cdot y)\cr
& &\!\!\!\!+\varepsilon(x, y)\alpha_L(y)\cdot(x\cdot z)-\varepsilon(y, z)(x\cdot z)\cdot\alpha_L(y)\cr
&=&\!\!\!\!\alpha_L(x)\cdot(y\cdot z)-\varepsilon(x, z)\varepsilon(y, z)((z\cdot x)\cdot\alpha_L(y) \cr
& &\!\!\!\!+\varepsilon(z, x)\alpha_L(x)\cdot(z\cdot y))-\varepsilon(y, z)(x\cdot z)\cdot\alpha_L(y) \mbox{ (by (\ref{cla2})) }\nonumber\\
&=&\!\!\!\!\alpha_L(x)\cdot(y\cdot z)-\varepsilon(y, z)\alpha_L(x)\cdot(z\cdot y)\quad\mbox{(by}\;(\ref{ia}))\nonumber\\
&=&\!\!\!\!\alpha_L(x)\cdot[y, z]\nonumber. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\qed
\end{eqnarray}
Now we define color NHLP-algebras, which are the graded Hom-analogue of NLP-algebras (\cite{JMC}).
\begin{definition} \label{nhlp}
A color NHLP-algebra is a $G$-graded vector space $P$ together with two even bilinear maps
$[-, -] : P\otimes P\rightarrow P$ and $\mu : P\otimes P\rightarrow P$, a bicharacter $\varepsilon : G\otimes G\to {\bf K}^*$ and
$\alpha_P : P\rightarrow P$ an even linear map such that, for any $x, y, z\in P$,
\begin{enumerate}
\item [i)]
$(P, [\cdot, \cdot], \varepsilon, \alpha_P)$ is a color Hom-Leibniz algebra, i.e.
\begin{eqnarray}
[\alpha_P(x), [y, z]]=[[x, y], \alpha_P(z)]+\varepsilon(x, y)[\alpha_P(y), [x, z]], \label{cpa}
\end{eqnarray}
\item [ii)]
$(P, \mu, \varepsilon, \alpha_P)$ is a color Hom-associative algebra i.e.
\begin{eqnarray}
\mu\Big(\alpha_P(x),\mu(y,z)\Big)=\mu\Big(\mu(x,y),\alpha_P(z)\Big) \mbox{ (Hom-associativity) };
\end{eqnarray}
\item [iii)]
and the following identity holds:
\begin{eqnarray}
[\alpha_P(x), \mu(y, z)]=\mu([x, y], \alpha_P(z))+\varepsilon(x, y)\mu(\alpha_P(y), [x, z]). \label{comp}
\end{eqnarray}
\end{enumerate}
If in addition, $\alpha_P$ is an endomorphism with respect to $\mu$ and $[-, -]$, we say that
$(P, \mu, [\cdot, \cdot], \varepsilon, \alpha_P)$ is a multiplicative color NHLP-algebra.
\end{definition}
\begin{remark}
When the Hom-associative product $\mu$ is $\varepsilon$-commutative, then $(A, \mu, [-, -], \varepsilon, \alpha_A)$ is said to be
a commutative color Hom-Leibniz-Poisson algebra.
\end{remark}
\begin{example}
\begin{enumerate}
\item
Any color Hom-Poisson algebra is a color NHLP-algebra.
\item
Any color Hom-Leibniz algebra is a color NHLP-algebra.
\item
If $P$ is a Leibniz-Poisson algebra (viewed as a Hom-Leibniz-Poisson algebra with trivial twisting and trivial grading), then the vector
space $P\otimes P$ is an NHLP-algebra with the operations
$$ (x_1\otimes x_2)(y_1\otimes y_2)= x_1y_1\otimes x_2y_2,$$
$$[x_1\otimes x_2, y_1\otimes y_2]=[[x_1, x_2], y_1]\otimes y_2+y_1\otimes[[x_1, x_2], y_2].$$
\end{enumerate}
\end{example}
The proposition below is a direct consequence of Definition \ref{nhlp}.
\begin{proposition}
Let $(P, [\cdot, \cdot], \mu, \varepsilon, \alpha_P)$ be a color NHLP-algebra. Then
$(P, [\cdot, \cdot], \mu^{op}, \varepsilon, \alpha_P)$ and $(P, k\mu, k[-, -], \varepsilon, \alpha_P)$ are also color NHLP-algebras, with
$\mu^{op}(x, y)=\mu(y, x)$ and $k \in {\bf K}^*$.
\end{proposition}
It is well known in the Hom-algebra setting that one can obtain a Hom-algebra structure from an ordinary algebra and an endomorphism.
The following theorem, which gives a way to construct color NHLP-algebras from a color NLP-algebra and an even endomorphism,
is a result of this type.
\begin{theorem}\label{n1}
Let $(A, \mu, [-, -], \varepsilon)$ be a color NLP-algebra and $\alpha_A : A\rightarrow A$ an even endomorphism.
Then, for any integer $n\geq 1$, $(A, \mu_{\alpha_A^n}, [-, -]_{\alpha_A^{n}}, \varepsilon, \alpha^n_A)$
is a multiplicative color NHLP-algebra, where for all $x, y, z\in \mathcal{H}(A)$,
\begin{eqnarray}
\mu_{\alpha_A^n}(x, y)&:=&{\alpha_A^n}(\mu(x, y)),\cr
{[x, y]}_{\alpha_A^n}&:=&{\alpha_A^{n}}([x, y]).\nonumber
\end{eqnarray}
Moreover, suppose that $(\tilde A, \tilde\mu, \{-, -\}, \varepsilon)$ is another color NLP-algebra and $\alpha_{\tilde A}$
an even endomorphism of $\tilde A$. If $f : A\rightarrow \tilde A$ is a morphism of color NLP-algebras such that
$f\circ \alpha_A={\alpha_{\tilde A}}\circ f$, then
$f : (A, \mu, [-, -], \varepsilon, \alpha_A)\rightarrow (\tilde A, \tilde\mu, \{-, -\}, \varepsilon, \alpha_{\tilde A})$
is a morphism of multiplicative color NHLP-algebras.
\end{theorem}
\begin{proof}
We need to show that $(A, \mu_{\alpha_A^n}, [-, -]_{\alpha_A^{n}}, \varepsilon, \alpha^n_A)$ satisfies relations (\ref{cpa}) and
(\ref{comp}). We have, for any $x, y, z\in A$,
\begin{eqnarray}
[\alpha^n_A(x), [y, z]_{\alpha_A^n}]_{\alpha_A^{n}}
&=&\alpha_A^{n}([\alpha^n_A(x), \alpha_A^n([y, z])])\nonumber\\
&=&\alpha_A^{2n}([x, [y, z]])\nonumber\\
&=&\alpha_A^{2n}([[x, y], z]+\varepsilon(x, y)[y, [x, z]])\nonumber\\
&=&\alpha_A^{n}([\alpha_A^{n}([x, y]), \alpha_A^{n}(z)]+\varepsilon(x, y)[\alpha_A^{n}(y), \alpha_A^{n}([x, z])])\nonumber\\
&=&[[x, y]_{\alpha_A^{n}}, \alpha_A^{n}(z)]_{\alpha_A^{n}}
+\varepsilon(x, y)[\alpha_A^{n}(y), [x, z]_{\alpha_A^{n}}]_{\alpha_A^{n}}.\nonumber
\end{eqnarray}
The compatibility condition (\ref{comp}) is checked similarly, and
the second assertion is proved as in the case of Theorem \ref{n4}.
\end{proof}
\begin{example} The commutative Leibniz-Poisson algebra used in this example is given in \cite{SMR1}, \cite{SMR}.
Let $(A, [-, -])$ be a Leibniz algebra over a commutative infinite field $\bf K$, let $\alpha_A : A\rightarrow A$ be an endomorphism of $A$, and let
$$\tilde A=A\oplus {\bf K}$$
be a vector space with multiplications $\cdot$ and $\{-, -\}$ defined as: for $x, y\in A$ and $a, b\in {\bf K}$,
$$(x+a)\cdot(y+b)=(bx+ay)+ab\quad \text{ and } \quad \{x+a, y+b\}=[x, y].$$
Consider the homomorphism $\alpha_{\tilde A}=\alpha_A\oplus Id_{\bf K}:\tilde A\to\tilde A$, then
$(\tilde A, \cdot, \{-, -\}, \alpha_{\tilde A})$ is a commutative color Hom-Leibniz-Poisson algebra (with trivial grading).
\end{example}
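As a sanity check (our computation, not taken from \cite{SMR1}), the compatibility condition (\ref{comp}) can be verified directly in the untwisted case $\alpha_A=\mathrm{id}$: for $x, y, z\in A$ and $a, b, c\in{\bf K}$,
\begin{eqnarray}
\{x+a, (y+b)\cdot(z+c)\}&=&\{x+a, (cy+bz)+bc\}=c[x, y]+b[x, z],\nonumber\\
\{x+a, y+b\}\cdot(z+c)+(y+b)\cdot\{x+a, z+c\}
&=&[x, y]\cdot(z+c)+(y+b)\cdot[x, z]\nonumber\\
&=&c[x, y]+b[x, z],\nonumber
\end{eqnarray}
where we used that an element $u\in A$ has ${\bf K}$-component $0$, so that $u\cdot(z+c)=cu$ and $(y+b)\cdot u=bu$.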
The next result gives a procedure to produce color NHLP-algebras from given one.
\begin{theorem}
Let $(P, \mu, [-, -], \varepsilon, \alpha_P)$ be a color NHLP-algebra and $\beta : P\rightarrow P$ an even endomorphism.
Then $(P, \mu_{\beta^n}, [-, -]_{\beta^{n}}, \varepsilon, \beta^n\circ\alpha_P)$ is a color NHLP-algebra, with
\begin{eqnarray}
\mu_{\beta^n}(x, y)&:=&{\beta^n}(\mu(x, y)),\nonumber\\
{[x, y]}_{\beta^n}&:=&{\beta^{n}}([x, y]),\nonumber
\end{eqnarray}
for all $x, y\in \mathcal{H}(P)$.
\end{theorem}
\proof
We need to show that $(P, \mu_{\beta^n}, [-, -]_{\beta^{n}}, \varepsilon, \beta^n\circ\alpha_P)$ satisfies relations (\ref{cpa}) and
(\ref{comp}). We have, for any $x, y, z\in \mathcal{H}(P)$,
\begin{eqnarray}
[({\beta^{n}}\circ\alpha_P)(x), [y, z]_{\beta^{n}}]_{\beta^{n}}
&=&{\beta^{n}}[({\beta^{n}}\circ\alpha_P)(x), {\beta^{n}}[y, z]]\nonumber\\
&=&{\beta^{2n}}([\alpha_P(x), [y, z]])\nonumber\\
&=&{\beta^{2n}}([[x, y], \alpha_P(z)]+\varepsilon(x, y)[\alpha_P(y), [x, z]])\nonumber\\
&=&{\beta^{n}}([{\beta^{n}}[x, y], ({\beta^{n}}\circ\alpha_P)(z)]+\varepsilon(x, y)[({\beta^{n}}\circ\alpha_P)(y), {\beta^{n}}[x, z]])\nonumber\\
&=&[[x, y]_{\beta^{n}}, ({\beta^{n}}\circ\alpha_P)(z)]_{\beta^{n}}
+\varepsilon(x, y)[({\beta^{n}}\circ\alpha_P)(y), [x, z]_{\beta^{n}}]_{\beta^{n}}.\nonumber
\end{eqnarray}
Next,
\begin{eqnarray}
[(\beta^n\circ\alpha_P)(x), \mu_{\beta^n}(y, z)]_{\beta^{n}}
&=&\beta^{n}[(\beta^n\circ\alpha_P)(x), \beta^n\mu(y, z)]\nonumber\\
&=&\beta^{2n}[\alpha_P(x), \mu(y, z)]\nonumber\\
&=&\beta^{2n}(\mu([x, y], \alpha_P(z))+\varepsilon(x, y)\mu(\alpha_P(y), [x, z]))\nonumber\\
&=&\beta^{n}(\mu(\beta^{n}[x, y], (\beta^{n}\circ\alpha_P)(z))+\varepsilon(x, y)\mu((\beta^{n}\circ\alpha_P)(y), \beta^{n}[x, z]))\nonumber\\
&=&\mu_{\beta^n}([x, y]_{\beta^{n}}, (\beta^{n}\circ\alpha_P)(z))
+\varepsilon(x, y)\mu_{\beta^n}((\beta^{n}\circ\alpha_P)(y), [x, z]_{\beta^{n}})\nonumber. \qed
\end{eqnarray}
\begin{proposition}
Let $(P, \mu, [-, -], \varepsilon)$ be a color NLP-algebra and $\beta : P\rightarrow P$ an even endomorphism. Define
$\mu_{\beta^n}$ and $[-, -]_{\beta^{n}}$ by
\begin{eqnarray}
\mu_{\beta^n}(x, y)&:=&{\beta}(\mu_{\beta^{n-1}}(x, y)),\nonumber\\
{[x, y]}_{\beta^n}&:=&{\beta}([x, y]_{\beta^{n-1}}),\nonumber
\end{eqnarray}
for all $x, y\in \mathcal{H}(P)$.
Then $(P, \mu_{\beta^n}, [-, -]_{\beta^{n}}, \varepsilon, \beta^n)$ is a multiplicative color NHLP-algebra.
\end{proposition}
\subsection{NHLP-algebras and Dialgebras}
It is well known that Hom-dialgebras give rise to Hom-Leibniz algebras (\cite{DY1}).
In this subsection, we give a similar result for NHLP-algebras.
\begin{definition}[\cite{DY1}]\label{dia}
A Hom-dialgebra is a quadruple $(D, \dashv\,, \vdash, \alpha_D)$, where $D$ is a $\bf K$-vector space,
$\dashv\,, \vdash : D\otimes D\rightarrow D$ are
bilinear maps and $\alpha_D : D\rightarrow D$ a linear map such that the following five axioms are satisfied for $x, y, z\in D$ :
\begin{eqnarray}
(x\vdash y)\dashv\alpha_D(z)&=&\alpha_D(x)\vdash(y\dashv z), \nonumber\\
\alpha_D(x)\dashv (y\dashv z)&=&(x\dashv y)\dashv\alpha_D(z)=\alpha_D(x)\dashv(y\vdash z),\nonumber\\
(x\dashv y)\vdash\alpha_D(z)&=&\alpha_D(x)\vdash(y\vdash z)=(x\vdash y)\vdash\alpha_D(z).\nonumber
\end{eqnarray}
\end{definition}
We say that a Hom-dialgebra $(D, \dashv\,, \vdash, \alpha_D)$ is Hom-associative if each of the operations $\vdash$ and $\dashv$ is Hom-associative.
By the axioms above, any Hom-dialgebra is Hom-associative.
The following result connects Hom-dialgebras and Hom-Leibniz-Poisson algebras.
\begin{theorem}\label{diax}
Let $(D, \dashv\,, \vdash, \alpha_D)$ be a Hom-dialgebra. Define the bilinear map $[-, -] : D\otimes D\rightarrow D$ by setting
\begin{eqnarray}
[x, y]:=x\vdash y-y\dashv x.
\end{eqnarray}
Then $(D, \dashv\,, [-, -], \alpha_D)$ is a Hom-Leibniz-Poisson algebra.
\end{theorem}
\begin{proof}
We write down all twelve terms involved in the Hom-Leibniz identity (\ref{cla}) :
\begin{eqnarray}
[\alpha_D(x), [y, z]]&=& \alpha_D(x)\vdash(y\vdash z)-(y\vdash z)\dashv\alpha_D(x)
-\alpha_D(x)\vdash(z\dashv y)+(z\dashv y)\dashv \alpha_D(x)\cr
[\alpha_D(y), [x, z]] &=& \alpha_D(y)\vdash(x\vdash z)-(x\vdash z)\dashv\alpha_D(y)
-\alpha_D(y)\vdash(z\dashv x)+(z\dashv x)\dashv \alpha_D(y)\cr
[[x, y], \alpha_D(z)] &=& (x\vdash y)\vdash \alpha_D(z)-\alpha_D(z)\dashv(x\vdash y)
-(y\dashv x)\vdash\alpha_D(z)+\alpha_D(z)\dashv(y\dashv x)\nonumber.
\end{eqnarray}
Using the Hom-dialgebra axioms in Definition \ref{dia}, it is readily seen that (\ref{cla}) holds.
Next,
\begin{eqnarray}
[\alpha_D(x), y\dashv z]-[x, y]\dashv\alpha_D(z)-\alpha_D(y)\dashv[x, z]\!\!\!\! &=&\!\!\!\! \alpha_D(x)\vdash(y\dashv z)
-(y\dashv z)\dashv\alpha_D(x)\cr
& &-(x\vdash y)\dashv\alpha_D(z)+(y\dashv x)\dashv\alpha_D(z) \cr
& & -\alpha_D(y)\dashv(x\vdash z)+\alpha_D(y)\dashv(z\dashv x).\nonumber
\end{eqnarray}
The right-hand side vanishes, the six terms cancelling in pairs by the axioms of Definition \ref{dia}. Thus (\ref{comp}) holds and the conclusion follows.
\end{proof}
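For the reader's convenience (this grouping is ours), the six terms in the last expansion cancel in pairs by the Hom-dialgebra axioms of Definition \ref{dia}:
\begin{eqnarray}
\alpha_D(x)\vdash(y\dashv z)&=&(x\vdash y)\dashv\alpha_D(z),\nonumber\\
\alpha_D(y)\dashv(z\dashv x)&=&(y\dashv z)\dashv\alpha_D(x),\nonumber\\
\alpha_D(y)\dashv(x\vdash z)&=&(y\dashv x)\dashv\alpha_D(z).\nonumber
\end{eqnarray}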
\section{Modules over color Hom-Leibniz algebras}
In this section we introduce modules over color Hom-Leibniz algebras and prove that Yau's twisting of the module
structure maps behaves well for modules over color Hom-Leibniz algebras.
\begin{definition}
Let $G$ be an abelian group.
A Hom-module is a pair $(M,\alpha_M)$ in which $M$ is a $G$-graded vector space and $\alpha_M : M\longrightarrow M$ is an even linear map.
\end{definition}
\begin{definition}
Let $(L, [-, -], \varepsilon, \alpha_L)$ be a color Hom-Leibniz algebra and $(M, \alpha_M)$ a Hom-module. An $L$-module structure on $M$ consists of
even $\bf K$-bilinear maps
$\mu_L : L\otimes M\rightarrow M$ and $\mu_R : M\otimes L\rightarrow M$ such that for any $x, y\in L$, and $m\in M$,
\begin{eqnarray}
\alpha_M(\mu_L(x, m))&=&\mu_L(\alpha_L(x), \alpha_M(m))\\
\alpha_M(\mu_R(m, x))&=&\mu_R(\alpha_M(m), \alpha_L(x))\\
\mu_L([-, -]\otimes\alpha_M)&=&\mu_L(\alpha_L\otimes\mu_L)-\varepsilon(x, y)\mu_L(\alpha_L\otimes\mu_L)(\tau_{L, L}\otimes Id_M),\label{lm1}\\
\mu_R(\alpha_M\otimes[-, -])&=&\mu_R(\mu_L\otimes\alpha_L)(\tau_{M, L}\otimes Id_L)
+\varepsilon(x, m)\mu_L(\alpha_L\otimes\mu_R)(\tau_{M, L}\otimes Id_L),\label{lm2}\\
\mu_L(\alpha_L\otimes\mu_R)&=&\mu_R(\mu_L\otimes\alpha_L)+\varepsilon(m, x)\mu_R(\alpha_M\otimes [-, -])(\tau_{L, M}\otimes Id_L),\label{lm3}
\end{eqnarray}
where $\tau_{L, L}(x\otimes y)=y\otimes x$, $\tau_{L, M}(x\otimes m)=m\otimes x$, $\tau_{M, L}(m\otimes x)=x\otimes m$.
\end{definition}
\begin{remark}
The conditions (\ref{lm1}), (\ref{lm2}) and (\ref{lm3}) can be written respectively as
\begin{eqnarray}
[x, y]\cdot \alpha_M(m)&=&\alpha_L(x)\cdot(y\cdot m)-\varepsilon(x, y)\alpha_L(y)\cdot(x\cdot m)\label{lm11},\\
\alpha_M(m)\ast[x, y]&=&(x\cdot m)\ast\alpha_L(y)+\varepsilon(x, m)\alpha_L(x)\cdot(m\ast y),\label{lm22}\\
\alpha_L(x)\cdot(m\ast y) &=&(x\cdot m)\ast\alpha_L(y)+\varepsilon(m, x)\alpha_M(m)\ast[x, y],\label{lm33}
\end{eqnarray}
where $\mu_L(x\otimes m)=x\cdot m$ and $\mu_R(m\otimes x)=m\ast x$.
\end{remark}
\begin{example}
Any color Hom-Leibniz algebra, and in particular any Hom-Leibniz superalgebra, is a module over itself.
\end{example}
\begin{theorem}\label{mcl}
Let $(L, [-, -], \varepsilon, \alpha_L)$ be a multiplicative color Hom-Leibniz algebra and $(M, \mu_L, \mu_R, \alpha_M)$ a color Hom-Leibniz module. Then,
\begin{eqnarray}
\tilde\mu_L&=&\mu_L(\alpha_L^2\otimes Id_M) : L\otimes M\rightarrow M,\\
\tilde\mu_R&=&\mu_R(Id_M\otimes \alpha_L^2) : M\otimes L\rightarrow M,
\end{eqnarray}
define another color Hom-Leibniz module structure on $M$.
\end{theorem}
\begin{proof}
We have to verify relations (\ref{lm1})--(\ref{lm3}) for $\tilde\mu_L$ and $\tilde\mu_R$. We have, respectively, for any $x, y\in L$ and $m\in M$,
\begin{eqnarray}
\tilde\mu_L([-, -]\otimes\alpha_M)(x\otimes y\otimes m) &=& \tilde\mu_L([x, y]\otimes \alpha_M(m)) \cr
&=& \alpha_L^2([x, y])\cdot\alpha_M(m)\nonumber\\
&=& [\alpha_L^2(x), \alpha_L^2(y)]\cdot\alpha_M(m)\nonumber\\
&=& \alpha_L^3(x)\cdot(\alpha_L^2(y)\cdot m)-\varepsilon(x, y)
\alpha_L^3(y)\cdot(\alpha_L^2(x)\cdot m)\; (\mbox{ by } (\ref{lm11}))\nonumber\\
&=& \tilde\mu_L(\alpha_L\otimes\tilde\mu_L)(x\otimes y\otimes m)\nonumber\\
& &-\varepsilon(x, y)\tilde\mu_L(\alpha_L\otimes\tilde\mu_L)(\tau_{L, L}
\otimes Id_L)(x\otimes y\otimes m).\nonumber
\end{eqnarray}
\begin{eqnarray}
\tilde\mu_R(\alpha_M\otimes[-, -])(m\otimes x\otimes y)&=&\tilde\mu_R(\alpha_M(m)\otimes [x, y])\nonumber\\
&=&\alpha_M(m)\ast[\alpha_L^2(x), \alpha_L^2(y)]\nonumber\\
&=&(\alpha_L^2(x)\cdot m)\ast\alpha_L^3(y)+\varepsilon(x, m)\alpha_L^3(x)\cdot(m\ast\alpha_L^2(y))\quad(\mbox{by}\;(\ref{lm22}))\nonumber\\
&=&\tilde\mu_R(\tilde\mu_L\otimes\alpha_L)(\tau_{M, L}\otimes Id_L)(m\otimes x\otimes y)\nonumber\\
& &+\varepsilon(x, m)\tilde\mu_L(\alpha_L\otimes\tilde\mu_R)(\tau_{M, L}\otimes Id_L)(m\otimes x\otimes y).\nonumber
\end{eqnarray}
\begin{eqnarray}
\varepsilon(m, x)\tilde\mu_R(\alpha_M\otimes[-, -])(\tau_{L, M}\otimes Id_L)(x\otimes m\otimes y)
&=&\varepsilon(m, x)\tilde\mu_R(\alpha_M\otimes[-, -])(m\otimes x\otimes y)\nonumber\\
&=&\varepsilon(m, x)\tilde\mu_R(\alpha_M(m)\otimes [x, y]) \cr
&=&\varepsilon(m, x)\alpha_M(m)\ast[\alpha_L^2(x), \alpha_L^2(y)]\nonumber\\
&=&\alpha_L^3(x)\cdot(m\ast\alpha_L^2(y)) \cr
& &-(\alpha_L^2(x)\cdot m)\ast\alpha_L^3(y)\quad(\mbox{by}\;(\ref{lm33})) \nonumber\\
&=&\tilde\mu_L(\alpha_L\otimes\tilde\mu_R)(x\otimes m\otimes y) \cr
& &-\tilde\mu_R(\tilde\mu_L\otimes \alpha_L)(x\otimes m\otimes y).\nonumber
\end{eqnarray}
\end{proof}
\begin{corollary}
Let $A_{\alpha_A^n}=(A, [-, -]_{\alpha_A^n}, \varepsilon, \alpha^n_A)$ be a multiplicative color Hom-Leibniz algebra as in Theorem
\ref{n1} and $(M, \mu_L, \mu_R, \alpha_M)$ an $A_{\alpha_A^n}$-module. Then $(M, \tilde\mu_L, \tilde\mu_R, \alpha_M)$ is also
a module over $A_{\alpha_A^n}$.
\end{corollary}
\end{document}
\begin{document}
\title[Hyperuniformity]
{Hyperuniform point sets on the sphere:\\ probabilistic aspects}
\author[J. S. Brauchart]{Johann S. Brauchart\textsuperscript{\dag}}
\author[P. J. Grabner]{Peter J. Grabner\textsuperscript{\ddag}}
\address[J. B., P. G.]{Institute of Analysis and Number Theory,
Graz University of Technology,
Kopernikusgasse 24,
8010 Graz,
Austria}
\email{[email protected]}
\thanks{\dag{} This author is supported by the Lise Meitner scholarship M~2030
of the Austrian Science Foundation FWF}
\thanks{\textsuperscript{\ddag} These authors were supported by the Austrian
Science Fund FWF project F5503 (part of the Special Research Program (SFB)
``Quasi-Monte Carlo Methods: Theory and Applications'')}
\email{[email protected]}
\author[W. Kusner]{W\"oden Kusner\textsuperscript{\ddag}}
\address[W. K.]{Department of Mathematics, Vanderbilt University,
1326 Stevenson Center, Nashville, TN 37240, USA}
\email{[email protected]}
\author[J. Ziefle]{Jonas Ziefle}
\address[J. Z.]{Fachbereich Mathematik,
Auf der Morgenstelle 10, 72076 T\"ubingen, Germany}
\email{[email protected]}
\begin{abstract}
The concept of hyperuniformity was introduced by Torquato and Stillinger
in 2003 as a notion to detect structural behaviour intermediate
between crystalline order and amorphous disorder. The present paper studies a
generalisation of this concept to the unit sphere. It is shown that several
well-studied determinantal point processes are hyperuniform.
\end{abstract}
\subjclass[2010]{60G55 (Primary) 11K38 65C05 82D30 (Secondary)}
\maketitle
\section{Introduction}
It has been observed for a long time in the physics literature that
large (ideally infinite) particle systems can exhibit structural
behaviour between crystalline order and total disorder. Very prominent
examples are given by quasi-crystals and jammed sphere packings. Research in
mathematics and physics has been inspired by the
discovery of such materials which lie between crystalline
order and amorphous disorder. We just mention de~Bruijn's Fourier analytic
explanation for the diffraction pattern of quasi-crystals \cite{dBr86}
and the extensive collection of articles on quasi-crystals
\cite{AxGr95} as examples.
Hyperuniformity was introduced in \cite{torquato2003local} as a concept to
measure the occurrence of ``intermediate'' order. Such hyperuniform
configurations $X$ occur in jammed packings, in colloids, as well as in
quasi-crystals. The main feature of hyperuniformity is the fact that local
density fluctuations (``number variance'') are of smaller order than for an
i.i.d. random (``Poissonian'') point configuration.
The point of view taken in \cite{torquato2003local} was probabilistic, based on
point processes. It has since been observed that determinantal point processes
exhibit less disordered behaviour in comparison to i.i.d. points due to the
built in mutual repulsion of particles (see
\cite{Hough_Krishnapur_Peres+2009:zeros_gaussian}). The prototypical example of
such a point process is given by the distribution of fermionic particles, whose
joint wave function is given as a determinant expressed in terms of the
individual wave functions.
An infinite discrete point set $X\subset\mathbb{R}^d$ is defined to be
hyperuniform if the variance of the random variable (``number variance'')
$\#((\mathbf{x}+t\Omega)\cap X)$ behaves like $o(t^d)$ as $t\to\infty$. Here,
$\Omega$ is a fixed compact test set (``window''); in most of the cases
$\Omega$ is chosen as a Euclidean ball. Notice that the number variance for
i.i.d. point sets is of exact order $t^d$. Thus, hyperuniformity is
characterised by a smaller order of magnitude of the variance. It was shown in
\cite{torquato2003local} that the best possible order for the variance is
$t^{d-1}$.
In \cite{Brauchart_Grabner_Kusner2019:hyperuniform_deterministic} a notion of
hyperuniformity for sequences of finite point sets on the sphere was
introduced. In that paper three regimes of hyperuniformity were identified and
studied, and several sequences of deterministically given point sets such as
designs, QMC-designs, and certain energy minimising point sets were shown to
exhibit hyperuniform behaviour. We also refer the reader to related recent work
\cite{PhysRevE.100.022107,PhysRevE.99.032601}.
It is the aim of the present paper to study hyperuniformity on the sphere for
samples of point processes on the sphere. Especially, we study the spherical
ensemble (see \cite{Krishnapur2006:zeros_random_analytic,
Hough_Krishnapur_Peres+2009:zeros_gaussian}) on $\mathbb{S}^2$
(Section~\ref{sec:spherical-ensemble}), the harmonic ensemble introduced in
\cite{Beltran_Marzo_Ortega-Cerda2016:determinantal}
(Section~\ref{sec:harmonic}), and the jittered sampling process
(Section~\ref{sec:jittered}). We observe that the jittered sampling process can
be seen as a determinantal point process. All processes turn out to be
hyperuniform in all three regimes. The harmonic ensemble has slightly weaker
behaviour in the threshold order regime.
Throughout this paper $\sigma=\sigma_d$ will denote the normalised
surface area measure on $\mathbb{S}^d$. We suppress the dependence on $d$ in
this notation.
\section{Point Processes}\label{sec:point-processes}
We consider a point process $\mathscr{X}_N$ sampling $N$ points given by the \emph{joint
densities} $\rho^{(N)}$
in the sense that
\begin{equation*}
\mathbb{P}\left((X_1,\dots,X_N)\in B\right)=
\idotsint_B\rho^{(N)}(\mathbf{x}_1,\dots,\mathbf{x}_N)\,
\mathrm{d}\sigma (\mathbf{x}_1)
\cdots \mathrm{d}\sigma (\mathbf{x}_N),
\end{equation*}
where $B$ is a measurable subset of $(\mathbb{S}^d)^N$.
We will assume
throughout this paper that the number of points $N$ is fixed and that the
process is simple, which means that the probability of sampling a point more
than once is zero. In some of the studied examples the number of points will
depend on a parameter $L$; in these cases we write $N_L$ for this number.
Note that in the literature
(e.g.,~\cite{Hough_Krishnapur_Peres+2009:zeros_gaussian}) the process is often
given in terms of its \emph{joint intensities} (\emph{correlation functions})
which are given by $N!\cdot\rho^{(N)}$. We use joint densities with respect to
the natural measure $\sigma$ on $\mathbb{S}^d$ in this paper
since they make the asymptotic dependence on $N$ more transparent. By a result
of Lenard~\cite{Soshnikov2000}, locally integrable functions $\rho^{(N)}$ can be
represented as the joint densities of a point process if and only if they
satisfy a positivity condition and the particles are exchangeable; i.e., the
joint densities are invariant under permutation of the entries
\begin{align} \label{permutation-inv}
\rho^{(N)}(\mathbf{x}_{\tau(1)},\dots,\mathbf{x}_{\tau(N)})=
\rho^{(N)}(\mathbf{x}_1,\dots,\mathbf{x}_N) \
\text{for all} \
\mathbf{x}_i\in \mathbb{S}^d, \ \tau \in \text{S}_N .
\end{align}
The \emph{reduced densities}
\begin{align*}
\rho^{(N)}_k(\mathbf{x}_1,\dots,\mathbf{x}_k):=\int_{(\mathbb{S}^d)^{N-k}}
\rho^{(N)}(\mathbf{x}_1,\dots,\mathbf{x}_N)\, \mathrm{d}\sigma (\mathbf{x}_{k+1})
\cdots \mathrm{d}\sigma (\mathbf{x}_N),
\end{align*}
$1 \leq k \leq N$,
describe how $k$ of $N$ points are distributed. The joint intensities are $\frac{N!}{(N-k)!}\rho^{(N)}_k$.
The number of points that are put into a test set
$B\subseteq\mathbb{S}^d$ by the process is the random variable
$\mathscr{X}_N(B):=\sum_{i=1}^N \mathbbm{1}_B(X_i)$, or in other words $N$ times the
empirical measure of $B$. As usual, $\mathbbm{1}_B$ denotes the indicator function of
the set~$B$.
For most of our study, we restrict ourselves to processes that are invariant
under isometries of the sphere
\begin{equation} \label{isometry-inv}
\begin{split}
\rho^{(N)}(A \mathbf{x}_1,\dots,A
\mathbf{x}_N)&=\rho^{(N)}(\mathbf{x}_1,\dots,\mathbf{x}_N) \\ &\phantom{= }
\text{for all}
\ \mathbf{x}_i \in \mathbb{S}^d, \ A \in \SO(d+1) .
\end{split}
\end{equation}
By summation over permutations and integration over isometries, joint densities
satisfying \eqref{permutation-inv} and \eqref{isometry-inv} do exist. In this
case we obtain
\begin{align}
\mathbb{E}\mathscr{X}_N(B)&=N\sigma(B), \label{eq:expect}\\
\mathbb{V}\mathscr{X}_N(B)
&=\mathbb{E}(\mathscr{X}_N(B)^2)-(\mathbb{E}\mathscr{X}_N(B))^2\notag\\
\begin{split} \label{eq:var}
&=N\sigma(B)(1-\sigma(B)) \\
&\phantom{=}+N(N-1)\iint_{B\times B}
\left(\rho_2^{(N)}(\mathbf{x}_1,\mathbf{x}_2)-1
\right)\,\mathrm{d}\sigma(\mathbf{x}_1)\,\mathrm{d}\sigma(\mathbf{x}_2).
\end{split}
\end{align}
The variance is independent of the position and orientation of the test set
$B$. So for a spherical cap the number variance only depends on the radius of
the cap.
\subsection*{Determinantal Point Processes}
Following~\cite{Hough_Krishnapur_Peres+2009:zeros_gaussian}, we introduce
determinantal point processes on $\mathbb{S}^d$. As pointed out before, we formulate
the description in terms of joint densities, rather than joint intensities.
\begin{definition}
A simple point process on $\mathbb{S}^d$ is called determinantal with kernel~$K$ if
its joint densities with respect to $\sigma$ are given by
\begin{align}\label{DPPdensities}
\rho^{(N)}_k(\mathbf{x}_1,\dots,\mathbf{x}_k) = \frac{(N-k)!}{N!}
\det \left(K(\mathbf{x}_i,\mathbf{x}_j)\right)_{i,j=1}^k,
\qquad 1 \leq k \leq N.
\end{align}
\end{definition}
From the definition, permutations of the variables do not change the
process. Furthermore, if $\mathbf{x}_i=\mathbf{x}_j$ for some $i\neq j$, then
the density is zero.
In \cite{Hough_Krishnapur_Peres+2009:zeros_gaussian} it is shown that
a determinantal process $\mathscr{X}_N$ samples exactly $N$ points if and only if its kernel is
associated with the projection of $L^2(\sigma)$ onto an $N$-dimensional
subspace $H$. If $\psi_1,\ldots,\psi_N$ is an orthonormal basis of
$H$, then the kernel is given by
\begin{equation}
\label{eq:projkernel}
K_H(\mathbf{x},\mathbf{y})=\sum_{i=1}^N\psi_i(\mathbf{x})
\overline{\psi_i(\mathbf{y})}.
\end{equation}
\section{Hyperuniformity on the Sphere}\label{sec:sphere}
Complementing the extensive study of the notion of hyperuniformity in the
infinite setting, we are interested in studying an analogous property of
sequences of point sets in compact spaces. For convenience, we study the
$d$-dimensional unit sphere $\mathbb{S}^d$. Our ideas immediately generalise to
homogeneous spaces; further generalisations might be more elaborate,
since we rely heavily on harmonic analysis and specific properties of special
functions. For instance, for the flat torus a similar study has been carried
out in \cite{Stepanyuk2020:hyperuniform_point_sets}.
In order to adapt to the compact setting, we replace the infinite set $X$
studied in the classical notion of hyperuniformity by a sequence of finite
point sets, $(X_N)_{N\in \mathcal{J}}$, where we assume that the cardinality $\#X_N$ is
$N$. By using an infinite set $\mathcal{J}\subseteq\mathbb{N}$ as index set, we always
allow for subsequences.
Throughout the paper we use the notation
\begin{equation*}
C(\mathbf{x},\phi)=\{\mathbf{y}\in\mathbb{S}^d\mid
\langle\mathbf{x},\mathbf{y}\rangle>\cos\phi\}
\end{equation*}
for the (open) spherical cap with center $\mathbf{x}$ and opening angle
$\phi$. The normalised surface area of the cap is given by
\begin{equation} \label{eq:normalised.surface.cap}
\sigma\left(C(\mathbf{x},\phi)\right)=
\gamma_d\int_0^\phi\sin(\theta)^{d-1}\mathrm{d}\theta\asymp\phi^{d} \quad\text{as
}\phi\to0,
\end{equation}
where
\begin{equation*}
\gamma_d=\left(\int_0^\pi\sin(\theta)^{d-1}\mathrm{d}\theta\right)^{-1}=
\frac{\Gamma(d)}{2^{d-1}\Gamma(d/2)^2}.
\end{equation*}
Notice that $\gamma_d=\frac{\omega_{d-1}}{\omega_d}$, where $\omega_d$ is the
surface area of $\mathbb{S}^d$.
Here and throughout the paper, we shall use $f(x) \asymp g(x)$ as $x \to x_0$
to mean that there exist positive constants $c$ and $C$ such that
$c \, g(x) \leq f(x) \leq C \, g(x)$ for $x$ sufficiently close to~$x_0$. We
will write $\sigma(C(\phi))$ for the normalised surface area of the cap
$C(\cdot,\phi)$.
In this paper we shall study the \emph{number variance}.
\begin{definition}[Number variance]\label{def:number-v}
Let $\mathscr{X}_N$ be a point process on the sphere~$\mathbb{S}^d$ sampling
$N$ points. The \emph{number variance} of $\mathscr{X}_N$ for caps of opening
angle $\phi$ is given by
\begin{equation}
\label{eq:variance}
V(\mathscr{X}_N,\phi):=\mathbb{V}\mathscr{X}_N(C(\cdot,\phi)):=
\mathbb{E}\left(\mathscr{X}_N(C(\cdot,\phi))^2\right)-
\left(\mathbb{E}\mathscr{X}_N(C(\cdot,\phi))\right)^2.
\end{equation}
If the process $\mathscr{X}_N$ is rotation invariant, the implicit
integration with respect to the center of the cap $C(\cdot,\phi)$
can be omitted.
\end{definition}
As in the Euclidean case we define hyperuniformity by a comparison
between the behaviour of the number variance of a sequence of point
sets and of the i.i.d. case. For i.i.d. random points, the variance is
$N\sigma(C(\phi))(1-\sigma(C(\phi)))$ (see \eqref{eq:var}), which has order of magnitude $N$, $N\sigma(C(\phi_N))$, and $t^d$, respectively, in
the three cases \eqref{eq:large}, \eqref{eq:small}, and
\eqref{eq:threshold} listed below.
\begin{definition}[Hyperuniformity]\label{def-hyper}
Let $\mathscr{X}_N$ be a point process on the sphere~$\mathbb{S}^d$ sampling $N$ points. The process $(\mathscr{X}_N)_{N\in\mathbb{N}}$ is called
\begin{itemize}
\item \textbf{hyperuniform for large caps} if
\begin{equation}
\label{eq:large}
V(\mathscr{X}_N,\phi)=o\left(N\right)\quad \text{as } N\to\infty
\end{equation}
for all $\phi\in(0,\frac\pi2)$;
\item \textbf{hyperuniform for small caps} if
\begin{equation}
\label{eq:small}
V(\mathscr{X}_N,\phi_N)=o\left(N\sigma(C(\phi_N))\right)
\quad \text{as } N\to\infty
\end{equation}
for all sequences $(\phi_N)_{N\in\mathbb{N}}$ such that
\begin{enumerate}
\item $\lim_{N\to\infty}\phi_N=0$ and
\item $\lim_{N\to\infty}N\sigma(C(\phi_N))=\infty$,
which is equivalent to ${\phi_NN^{\frac1d}\to\infty}$ as ${N\to\infty}$;
\end{enumerate}
\item \textbf{hyperuniform for caps at threshold order} if
\begin{equation}
\label{eq:threshold}
\limsup_{N\to\infty}V(\mathscr{X}_N,tN^{-\frac1d})=
\mathcal{O}(t^{d-1}) \quad\text{as } t\to\infty.
\end{equation}
The $\mathcal{O}(t^{d-1})$ in \eqref{eq:threshold} could be replaced by the
less strict $o(t^d)$ in a more general setting.
\end{itemize}
\end{definition}
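To illustrate the i.i.d. benchmark $N\sigma(C(\phi))(1-\sigma(C(\phi)))$ against which the definition above is calibrated, here is a small Monte Carlo sketch for $d=2$ (our own illustration, not part of the paper's argument).

```python
import math
import random

def iid_cap_count_variance(N, phi, trials=20000, seed=1):
    # Monte Carlo number variance for N i.i.d. uniform points on S^2 and a
    # cap of opening angle phi; by rotation invariance we may fix the cap at
    # the north pole, and the z-coordinate of a uniform point on S^2 is
    # uniform on [-1, 1], so "in the cap" just means z > cos(phi).
    rng = random.Random(seed)
    c = math.cos(phi)
    counts = [sum(rng.uniform(-1, 1) > c for _ in range(N))
              for _ in range(trials)]
    m = sum(counts) / trials
    return sum((x - m) ** 2 for x in counts) / trials

N, phi = 100, math.pi / 3
sigma = (1 - math.cos(phi)) / 2          # sigma(C(phi)) on S^2
predicted = N * sigma * (1 - sigma)      # binomial variance N*sigma*(1-sigma)
```

The empirical variance matches the binomial prediction, which here equals $100\cdot\frac14\cdot\frac34=18.75$.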
\section{Intersection Volume of Spherical Caps}
\label{sec:intersection}
In this section we collect some formulas and properties of the intersection
volume of two spherical caps that will be needed in the discussion
later on. Besides a possibly new formula for the volume of the intersection of
two caps of equal size we provide sharp inequalities and asymptotic expansions,
which enable us to obtain precise results on the number variance.
We will briefly introduce some basic facts and notation regarding spherical
harmonics. Let $\mathcal H_\ell$ denote the vector space of spherical
harmonics of degree $\ell\in \mathbb{N}$. Its dimension is
\begin{equation*}
Z(d,\ell) = \frac{2\ell+d-1}{d-1}\binom{\ell+d-2}{d-2}.
\end{equation*}
With respect to the $L^2(\mathbb{S}^d,\sigma)$ inner product, $\mathcal H_\ell$ has a real
orthonormal basis $\{Y_{\ell,k}\}_{k=1}^{Z(d,\ell)}$. The addition theorem for
spherical harmonics (see~\cite{Mueller1966:spherical_harmonics}) gives
\begin{align*}
\sum_{k=1}^{Z(d,\ell)} Y_{\ell,k}(\mathbf{x})Y_{\ell,k}(\mathbf{y}) = Z(d,\ell)
P_\ell^{(d)}(\langle \mathbf{x},\mathbf{y} \rangle) , \quad
\mathbf{x},\mathbf{y} \in \mathbb{S}^d ,
\end{align*}
where $P_\ell^{(d)}$, $\ell \geq 0$, are the Legendre polynomials for the sphere $\mathbb{S}^d$
normalised by $P_\ell^{(d)}(1)=1$. Notice that for $d\geq2$ these are
Gegenbauer polynomials for the parameter $\frac{d-1}2$:
\begin{equation} \label{eq:interconnection.formula}
Z(d,\ell)P_\ell^{(d)}(x)=\frac{2\ell+d-1}{d-1} C_\ell^{\frac{d-1}2}(x).
\end{equation}
It is well-known that the Laplace series for the indicator function of the spherical cap
$C(\mathbf{x},\phi)$ is given by
\begin{equation*}
\mathbbm{1}_{C(\mathbf{x},\phi)}(\mathbf{y})=
\sigma(C(\cdot,\phi))+\sum_{n=1}^\infty a_n(\phi)Z(d,n)
P_n^{(d)}(\langle\mathbf{x},\mathbf{y}\rangle),
\end{equation*}
where the Laplace coefficients are given by
\begin{equation}\label{eq:an-phi}
a_n(\phi)=\gamma_d\int_0^\phi
P_n^{(d)}(\cos(\theta))\sin(\theta)^{d-1}\,\mathrm{d}\theta=
\frac{\gamma_d}d\sin(\phi)^dP_{n-1}^{(d+2)}(\cos(\phi))
\end{equation}
for $n\geq1$.
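The closed form in \eqref{eq:an-phi} can be checked numerically. In the sketch below, `legendre_sphere` implements $P_n^{(d)}$ via the Gegenbauer three-term recurrence with the normalisation $P_n^{(d)}(1)=1$ from \eqref{eq:interconnection.formula}; all helper names are our own.

```python
import math

def gegenbauer(n, lam, x):
    # three-term recurrence for the Gegenbauer polynomial C_n^lambda(x)
    if n == 0:
        return 1.0
    c0, c1 = 1.0, 2 * lam * x
    for k in range(2, n + 1):
        c0, c1 = c1, (2 * x * (k + lam - 1) * c1 - (k + 2 * lam - 2) * c0) / k
    return c1

def legendre_sphere(n, d, x):
    # P_n^{(d)}(x) = C_n^{(d-1)/2}(x) / C_n^{(d-1)/2}(1), so P_n^{(d)}(1) = 1
    lam = (d - 1) / 2
    return gegenbauer(n, lam, x) / gegenbauer(n, lam, 1.0)

def _gamma_d(d):
    return math.gamma(d) / (2 ** (d - 1) * math.gamma(d / 2) ** 2)

def a_n_closed(n, d, phi):
    # right-hand side of the displayed identity for a_n(phi)
    return (_gamma_d(d) / d * math.sin(phi) ** d
            * legendre_sphere(n - 1, d + 2, math.cos(phi)))

def a_n_integral(n, d, phi, steps=20000):
    # defining integral, midpoint rule
    h = phi / steps
    s = sum(legendre_sphere(n, d, math.cos((i + 0.5) * h))
            * math.sin((i + 0.5) * h) ** (d - 1) for i in range(steps))
    return _gamma_d(d) * h * s
```

For $d=2$, $n=2$ one recovers the elementary value $a_2(\phi)=\frac14\cos\phi\sin^2\phi$.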
The intersection volume is then obtained as the spherical convolution of the
indicator function with itself. This gives
\begin{equation}\label{eq:gphi}
\begin{split}
g_\phi(\langle\mathbf{x},\mathbf{y}\rangle)&:= \sigma(C(\mathbf{x},\phi)\cap
C(\mathbf{y},\phi))- \sigma(C(\phi))^2\\
&=\sum_{n=1}^\infty a_n(\phi)^2Z(d,n)
P_n^{(d)}(\langle\mathbf{x},\mathbf{y}\rangle).
\end{split}
\end{equation}
In \cite{Lee-Kim2014:concise_formulas_intersection} formulas for the volume of
the intersection of two spherical caps have been derived. In our special case
of the intersection of two caps of equal size, we get
\begin{equation*}
\sigma(C(\mathbf{x},\phi)\cap C(\mathbf{y},\phi))=
\frac{d-1}\pi\int_{\frac\psi2}^\phi\sin(t)^{d-1}
\int_0^{\arccos(\frac{\tan(\frac\psi2)}{\tan(t)})}\sin(u)^{d-2}\,\mathrm{d} u\,\mathrm{d} t,
\end{equation*}
where $\langle\mathbf{x},\mathbf{y}\rangle=\cos\psi$ and
$\psi\leq2\phi$.
The change of variables
\begin{align*}
\tan(v)&=\tan(t)\cos(u), \\
\sin(w)&=\sin(t)\sin(u)
\end{align*}
transforms the double integral into
\begin{equation*}
\frac1\pi\int_{\frac\psi2}^\phi
\frac{(\sin^2\phi-\sin^2v)^{\frac{d-1}2}}{\cos(v)^{d-1}}\,\mathrm{d} v.
\end{equation*}
This gives
\begin{equation}\label{eq:diff-cap-int}
g_\phi(1)-g_\phi(\cos\psi)=
\sigma(C(\mathbf{x},\phi)\setminus C(\mathbf{y},\phi))=
\frac1\pi\int_0^{\frac\psi2}
\frac{\left(\sin^2\phi-\sin^2v\right)_+^{\frac{d-1}2}}{\cos(v)^{d-1}}\,\mathrm{d} v
\end{equation}
for all $0<\psi<\pi$; here we define $(a)_+:=\max(0,a)$.
From this we obtain the following lemma.
\begin{lemma}\label{lem5}
There exists a positive constant $A_d$ depending only on $d$ such that for
all $(\phi,\psi)$ with $0\leq\psi\leq2\phi\leq\pi$ the inequalities
\begin{equation}
\label{eq:hypercap}
\frac1{2\pi}
\psi(\sin\phi)^{d-1}- A_d\psi^3\sin(\phi)^{d-3} \leq
\sigma(C(\mathbf{x},\phi)\setminus C(\mathbf{y},\phi))\leq\frac1{2\pi}
\psi(\sin\phi)^{d-1}
\end{equation}
hold. Here, $\cos\psi=\langle\mathbf{x},\mathbf{y}\rangle$.
For $d\leq3$, these
inequalities hold for $(\phi,\psi)\in[0,\frac\pi2]\times[0,\pi]$.
\end{lemma}
\begin{proof}
In the integral \eqref{eq:diff-cap-int}
we make the substitution $\sin(v)=\sin(\phi)\sin(u)$ to obtain
\begin{equation}\label{eq:capsize}
g_\phi(1)-g_\phi(\cos\psi)=\frac{\sin(\phi)^d}\pi
\int_0^{h(\phi,\psi)}\frac{\cos(u)^d}{(1-\sin(\phi)^2\sin(u)^2)^{\frac d2}}
\mathrm{d} u,
\end{equation}
where the upper bound in integration is given by
\begin{equation*}
h(\phi,\psi)=\arcsin\left(\min\left(1,
\frac{\sin(\frac\psi2)}{\sin(\phi)}\right)\right).
\end{equation*}
Now, the integrand in \eqref{eq:capsize} is bounded from below by
$\cos(u)^d\geq 1-\frac d2u^2$, from which we derive the estimate
\begin{equation*}
g_\phi(1)-g_\phi(\cos\psi)\geq \frac{\sin(\phi)^d}\pi
\left(\arcsin\left(\frac{\sin(\frac\psi2)}{\sin(\phi)}\right)-
\frac d6\left(\arcsin\left(\frac{\sin(\frac\psi2)}{\sin(\phi)}\right)
\right)^3\right).
\end{equation*}
From this we derive the lower bound in \eqref{eq:hypercap} using the
estimates
\begin{align*}
x&\leq\arcsin(x)\leq x+\left(\frac\pi2-1\right)x^3\\
x-\frac{x^3}3&\leq\sin(x)\leq x
\end{align*}
for $x\geq0$.
For the upper bound, we just observe that the integrand in
\eqref{eq:diff-cap-int} is bounded by $\sin(\phi)^{d-1}$.
\qed
\end{proof}
For $d=2$, we get the explicit formula
\begin{align*}
&\sigma\left(C(\mathbf{x},\phi)\setminus C(\mathbf{y},\phi)\right)\\
&=
\begin{cases}
\frac1\pi\left(\arcsin\left(\frac{\sin\frac\psi2}{\sin\phi}\right)-
\arcsin\left(\frac{\tan\frac\psi2}{\tan\phi}\right)\cos\phi\right)
&\text{for }\psi\leq2\phi
\\
\sin^2\frac\phi2&\text{for }\psi>2\phi,
\end{cases}
\end{align*}
where $\cos\psi=\langle\mathbf{x},\mathbf{y}\rangle$.
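As a sketch of how this explicit $d=2$ formula matches \eqref{eq:diff-cap-int}, one can compare it against direct numerical integration; the function names below are our own.

```python
import math

def sym_diff_half_integral(phi, psi, steps=20000):
    # (1/pi) * int_0^{psi/2} sqrt((sin^2 phi - sin^2 v)_+) / cos v dv  (d = 2)
    h = (psi / 2) / steps
    s = 0.0
    for i in range(steps):
        v = (i + 0.5) * h
        s += math.sqrt(max(0.0, math.sin(phi) ** 2 - math.sin(v) ** 2)) / math.cos(v)
    return s * h / math.pi

def sym_diff_closed(phi, psi):
    # explicit d = 2 formula for sigma(C(x,phi) \ C(y,phi)), cos(psi) = <x,y>
    if psi > 2 * phi:
        return math.sin(phi / 2) ** 2
    return (math.asin(math.sin(psi / 2) / math.sin(phi))
            - math.asin(math.tan(psi / 2) / math.tan(phi)) * math.cos(phi)) / math.pi
```

In the disjoint regime $\psi>2\phi$ both expressions reduce to the full cap area $\sin^2(\phi/2)$.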
\section{The Spherical Ensemble}\label{sec:spherical-ensemble}
The spherical ensemble of $N$ points is obtained by stereographically
projecting the eigenvalues of $A^{-1}B$ to the sphere $\mathbb{S}^2$, where $A$ and
$B$ are $N\times N$ matrices with i.i.d. random complex Gaussian entries
(see~\cite[Section~4.3.8]{Hough_Krishnapur_Peres+2009:zeros_gaussian} or
\cite[Chapter~5]{Krishnapur2006:zeros_random_analytic}).
These eigenvalues form a determinantal point process $\mathscr{X}_N^S$ on $\mathbb{C}$
with kernel
\begin{equation}\label{eq:widetildeK}
\widetilde K_N(z,w):=(1+z\overline{w})^{N-1}
\end{equation}
with respect to the measure
\begin{equation*}
\mathrm{d}\mu_N(z):=\frac{N}{\pi(1+|z|^2)^{N+1}}\,\mathrm{d}\lambda_2(z),
\end{equation*}
where $\lambda_2$ denotes the Lebesgue measure on $\mathbb{C}$.
The corresponding function space is the space of square integrable entire
functions
\begin{equation*}
\mathscr{P}_N:=L^2(\mathbb{C},\mathrm{d}\mu_N)\cap H(\mathbb{C}),
\end{equation*}
which consists exactly of the polynomials of degree $\leq N-1$. The kernel
$\widetilde K_N$ is the reproducing kernel of this Hilbert space.
Applying the stereographic projection
\begin{equation}\label{eq:stereo}
\mathbf{x}=(x_1,x_2,x_3)\in\mathbb{S}^2\mapsto\frac{x_1+ix_2}{1-x_3}
\end{equation}
transforms the measure by
\begin{equation}\label{eq:mustereo}
\mathrm{d}\mu_N(z)=\frac N{2^{N-1}}(1-x_3)^{N-1}\mathrm{d}\sigma(\mathbf{x}).
\end{equation}
In order to obtain an isometry of spaces $L^2(\mathbb{C},\mu_N)$ and
$L^2(\mathbb{S}^2,\sigma)$, the basis elements of $\mathscr{P}_N$ are mapped by
\begin{equation}\label{eq:zkmapsto}
z^k\mapsto \frac{\sqrt N}{2^{\frac{N-1}2}}(x_1+ix_2)^k(1-x_3)^{\frac{N-1}2-k}
\quad\text{for }k=0,\ldots,N-1.
\end{equation}
Inserting \eqref{eq:stereo} into \eqref{eq:widetildeK} and multiplying by
$\frac N{2^{N-1}}((1-x_3)(1-y_3))^{\frac{N-1}2}$ gives the projected kernel on
the sphere
\begin{equation*}
K_N(\mathbf{x},\mathbf{y})=\frac{N}{2^{N-1}}
\left(\frac{1+\langle\mathbf{x},\mathbf{y}\rangle-x_3-y_3+i(x_2y_1-x_1y_2)}
{\sqrt{(1-x_3)(1-y_3)}}\right)^{N-1}
\end{equation*}
and the space of functions on $\mathbb{S}^2$ is spanned by the functions given in
\eqref{eq:zkmapsto}. These functions are orthogonal with respect to $\sigma$.
In order to compute the expectation of a general energy sum with
respect to the process generated by $K_N$, we compute the determinant
\begin{equation*}
N(N-1) \rho_2^{(N)}(\mathbf{x},\mathbf{y})=
K_N(\mathbf{x},\mathbf{x})K_N(\mathbf{y},\mathbf{y})-
\left|K_N(\mathbf{x},\mathbf{y})\right|^2.
\end{equation*}
We notice that $K_N(\mathbf{x},\mathbf{x})=N$ and
\begin{equation*}
|1+z\overline{w}|^2=
\left|\left(1+\frac{x_1+ix_2}{1-x_3}\frac{y_1-iy_2}{1-y_3}\right)\right|^2=
\frac1{(1-x_3)(1-y_3)}\left(2+2\langle\mathbf{x},\mathbf{y}\rangle\right)
\end{equation*}
for $\mathbf{x},\mathbf{y}\in\mathbb{S}^2$, which gives
\begin{equation*}
\left|K_N(\mathbf{x},\mathbf{y})\right|^2=
N^2\left(\frac{1+\langle\mathbf{x},\mathbf{y}\rangle}2\right)^{N-1}
\end{equation*}
and
\begin{equation*}
N(N-1) \rho_2^{(N)}(\mathbf{x},\mathbf{y})=
N^2\left(1-\left(\frac{1+\langle\mathbf{x},\mathbf{y}\rangle}2\right)^{N-1}
\right).
\end{equation*}
Now let $g:[-1,1]\to\mathbb{R}$ be a function with
$\int_{-1}^1g(x)\,\mathrm{d} x=0$. Then
\begin{multline}
E_{g}(N):=\mathbb{E}\sum_{i,j=1}^Ng\left(\langle\mathbf{x}_i,\mathbf{x}_j\rangle\right)\\
=Ng(1)+N^2\iint_{\mathbb{S}^2\times\mathbb{S}^2}
g\left(\langle\mathbf{x},\mathbf{y}\rangle\right)
\left(1-\left(\frac{1+\langle\mathbf{x},\mathbf{y}\rangle}2\right)^{N-1}\right)
\,\mathrm{d}\sigma(\mathbf{x})\,\mathrm{d}\sigma(\mathbf{y})\label{eq:g-energy}\\
=
\frac{N^2}2\int_{-1}^1\left(g(1)-g(x)\right)\left(\frac{1+x}2\right)^{N-1}\,\mathrm{d}
x.
\end{multline}
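The reduction in \eqref{eq:g-energy} from the double integral over $\mathbb{S}^2\times\mathbb{S}^2$ to a single integral uses that $\langle\mathbf{x},\mathbf{y}\rangle$ has density $\frac12$ on $[-1,1]$, together with $\int_{-1}^1 g=0$. A quick numerical check with the test function $g(x)=x$ (our own illustration):

```python
import math

N = 20
p = lambda x: ((1 + x) / 2) ** (N - 1)   # kernel factor ((1+x)/2)^{N-1}
g = lambda x: x                          # test function with int_{-1}^1 g = 0

def quad(f, a=-1.0, b=1.0, steps=200000):
    # composite midpoint rule on [a, b]
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

# middle expression of (eq:g-energy), reduced to one dimension
middle = N * g(1) + N ** 2 / 2 * quad(lambda x: g(x) * (1 - p(x)))
# final expression of (eq:g-energy)
final = N ** 2 / 2 * quad(lambda x: (g(1) - g(x)) * p(x))
```

For this $g$ both expressions evaluate to $\frac{N^2}2\int_{-1}^1(1-x)\big(\tfrac{1+x}2\big)^{N-1}\mathrm{d}x=\frac{200}{105}$.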
We apply \eqref{eq:g-energy} to the function
$g_\phi$ given by \eqref{eq:gphi}.
Putting everything together, we obtain
\begin{align}
V(\mathscr{X}_N^S,\phi)
&=E_{g_\phi}(N) \notag \\
\begin{split}
&=\frac{N^2}{4\pi}\sin\phi
\int_{-1}^1\arccos(x)\left(\frac{1+x}2\right)^{N-1} \mathrm{d} x \\
&\phantom{=}+ \mathcal{O}\Bigg(\frac{N^2}{\sin\phi}\int_{-1}^1\arccos(x)^3
\left(\frac{1+x}2\right)^{N-1} \mathrm{d} x\Bigg)
\end{split} \\
&=\frac{\sin\phi}{2\sqrt\pi} \, \frac{\Gamma(N+\frac12)}{\Gamma(N)}+
\mathcal{O}\Big( \frac{1}{N^{1/2} \sin\phi} \Big) \notag \\
&=\frac{\sqrt{\sigma(C(\phi))(1-\sigma(C(\phi)))}}{\sqrt\pi}N^{1/2}+
\mathcal{O}\Big( \frac{1}{N^{1/2} \sin\phi} \Big)\label{eq:Eg-asymp}
\end{align}
valid for $\phi\in(0,\frac\pi2)$. Thus, we have proved the following lemma. We
note that \eqref{eq:Eg-asymp} was obtained in \cite[Lemma~2.1]{AlishahiZamani}
with the restriction that $\sigma(C(\phi))^{-1}=o(N)$ and with a weaker error
term.
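The last two steps of the computation above use the identity $\sqrt{\sigma(C(\phi))(1-\sigma(C(\phi)))}=\frac{\sin\phi}2$ for $d=2$ and the asymptotics $\Gamma(N+\frac12)/\Gamma(N)\sim N^{1/2}$; both are easy to confirm numerically (a sketch, using `lgamma` to avoid overflow):

```python
import math

phi = 0.8
sigma = math.sin(phi / 2) ** 2            # sigma(C(phi)) on S^2
lead = math.sqrt(sigma * (1 - sigma))     # should equal sin(phi)/2

N = 10000
# Gamma(N + 1/2) / Gamma(N) -> sqrt(N) as N -> infinity
ratio = math.exp(math.lgamma(N + 0.5) - math.lgamma(N))
```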
\begin{lemma}\label{lem1}
The number variance of the spherical ensemble satisfies for ${\phi\in(0,\pi)}$
\begin{equation}\label{eq:sphere-variance}
V(\mathscr{X}_N^S,\phi)=\frac{\sqrt{\sigma(C(\phi))(1-\sigma(C(\phi)))}}
{\sqrt\pi}N^{1/2}+\mathcal{O}\Big( \frac{1}{N^{1/2} \sin\phi} \Big)
\end{equation}
with an absolute implied constant; especially,
\begin{equation}\label{eq:spher-threshold}
\lim_{N\to\infty}V(\mathscr{X}_N^S,tN^{-\frac12})=
\frac t{2\sqrt\pi}+\mathcal{O}(t^{-1}).
\end{equation}
\end{lemma}
\begin{remark}
Inserting \eqref{eq:diff-cap-int} directly into \eqref{eq:g-energy} gives the
closed formula
\begin{equation*}
E_{g_\phi}( N ) = \frac{N \sin^2 \phi}{\pi} \int_0^1 \left( 1 - v^2 \right)^{\frac{1}{2}} \left( 1 - v^2 \, \sin^2 \phi \right)^{N-1} \mathrm{d} v,
\end{equation*}
which could be used for an alternative yet slightly more elaborate proof of
Lemma~\ref{lem1}.
\end{remark}
From this lemma we immediately obtain the following theorem.
\begin{theorem}\label{thm:spher-hyper}
The spherical ensemble is hyperuniform in all three regimes.
\end{theorem}
\begin{proof}
For the large cap case, we obtain $V(\mathscr{X}_N^S,\phi)=\mathcal{O}(N^{1/2})=o(N)$;
for the small cap case, we obtain
$V(\mathscr{X}_N^S,\phi_N)=\mathcal{O}\big((N\sigma(C(\phi_N)))^{1/2}\big)=
o(N\sigma(C(\phi_N)))$, since $N\sigma(C(\phi_N))\to\infty$. In the threshold
order case, we use \eqref{eq:spher-threshold}.
\qed
\end{proof}
\begin{remark}
The error term in \eqref{eq:sphere-variance} has the correct order with
respect to $N$ and $\phi$. This shows that taking $\phi_N=o(N^{-\frac12})$
is not meaningful, since the error term would then dominate and tend
to $\infty$.
\end{remark}
\section{The Harmonic Ensemble}\label{sec:harmonic}
The function space of spherical harmonics of degree $\leq L$ on the sphere
$\mathbb{S}^d$ and the projection kernel to this space of dimension
$Z(d+1,L)=\frac{2L+d}{d}\binom{L+d-1}{d-1}$ was used in
\cite{Beltran_Marzo_Ortega-Cerda2016:determinantal} to define a determinantal
point process $\mathscr{X}_L^H$, the \emph{harmonic ensemble}. This process samples
$N:=N_L:=Z(d+1,L)\asymp L^d$ points. We will study this process with respect to
hyperuniformity in this section.
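The count $N_L=Z(d+1,L)$ is the dimension of the space of spherical harmonics of degree at most $L$, that is, $\sum_{\ell=0}^L Z(d,\ell)=Z(d+1,L)$. This identity is easy to verify exactly (sketch with our own helper):

```python
from math import comb

def Z(d, l):
    # dimension of the space of degree-l spherical harmonics on S^d,
    # Z(d, l) = (2l + d - 1)/(d - 1) * binom(l + d - 2, d - 2); the division
    # is exact, so integer arithmetic is safe
    return (2 * l + d - 1) * comb(l + d - 2, d - 2) // (d - 1)
```

For $d=2$ this gives the familiar $Z(2,\ell)=2\ell+1$ and $\sum_{\ell\le L}(2\ell+1)=(L+1)^2=Z(3,L)$.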
The projection kernel to this space is given by
\begin{align*}
K_L(\langle\mathbf{x},\mathbf{y}\rangle):=
\sum_{\ell=0}^L Z(d,\ell) P_\ell^{(d)}(\langle \mathbf{x},\mathbf{y} \rangle) =
\frac{Z(d+1,L)}{\binom{L+d/2}{L}}
\mathcal{P}_L^{(\frac{d}{2},\frac{d}{2}-1)}(\langle\mathbf{x},\mathbf{y}\rangle)
\end{align*}
for $\mathbf{x},\mathbf{y} \in \mathbb{S}^d$, where $\mathcal{P}_L^{(\alpha,\beta)}$,
$L \geq 0$, are the usual Jacobi polynomials. This identity follows from
\cite[Theorem~7.1.3]{Andrews-Askey-Roy1999:Special_functions} after rewriting
the Legendre polynomials in terms of Gegenbauer and Jacobi polynomials using
\eqref{eq:interconnection.formula}.
\begin{theorem}\label{thm:harmonic}
The harmonic ensemble is hyperuniform for large and small caps. In the
threshold order regime the weaker property
\begin{equation}
\label{eq:threshold-log}
\limsup_{L\to\infty}V(\mathscr{X}_L^H,tN_L^{-\frac1d})=\mathcal{O}(t^{d-1}\log t)=o(t^d)
\end{equation}
holds.
\end{theorem}
\begin{proof}
The number variance $V(\mathscr{X}_L^H,\phi)$ can be expressed as (cf. similar
computations that lead to \eqref{eq:g-energy})
\begin{equation*}
\gamma_d \int_0^\pi (g_\phi(1)-g_\phi(\cos \theta))K_L(\cos \theta)^2
(\sin \theta)^{d-1}\,\mathrm{d}\theta,
\end{equation*}
where $g_\phi$ is given by
\eqref{eq:gphi}. Using Lemma~\ref{lem5}, we obtain
\begin{equation}\label{eq:harmonic_number_variance}
\begin{split}
&V(\mathscr{X}_L^H,\phi)\\
=& \frac{\gamma_d}{\pi} \left(\!\!\frac{Z(d+1,L)}{\binom{L+\frac d2}L}\!\right)^2
\!\!\!\!(2\sin\phi)^{d-1}\!\!\!\int_0^{2\phi}\!\!\!\left(\mathcal{P}_L^{(\frac d2,\frac d2-1)}(\cos
\theta)\right)^2\!\!\left(\sin\frac \theta2\right)^d\!\!
\left(\cos\frac \theta2\right)^{d-1}\!\!\!\!\!\!\mathrm{d}\theta\\+
&\mathcal{O}\Bigg(L^d(\sin\phi)^{d-3}\int_0^{2\phi}\left(\mathcal{P}_L^{(\frac d2,\frac d2-1)}(\cos
\theta)\right)^2\left(\sin\frac \theta2\right)^{d+2}
\left(\cos\frac \theta2\right)^{d-1}\mathrm{d}\theta\Bigg) \\+
&\gamma_d \left(\frac{Z(d+1,L)}{\binom{L+\frac d2}L}\right)^2
\sigma(C(\phi))\int_{2\phi}^\pi\left(\mathcal{P}_L^{(\frac d2,\frac d2-1)}(\cos
\theta)\right)^2(\sin \theta)^{d-1}\,\mathrm{d}\theta.
\end{split}
\end{equation}
The case of large and small caps was studied in
\cite{Beltran_Marzo_Ortega-Cerda2016:determinantal}; we summarise the
computations given there for completeness. The case of caps at threshold
order is new and will be given in more detail.
We make use of known asymptotic expansions for the Jacobi polynomials
(see~\cite[5.2.3 and 5.2.4]{Magnus_Oberhettinger_Soni1966:formulas_theorems})
\begin{align}
\label{eq:P-asymp1}
\mathcal{P}_L^{(\frac d2,\frac d2-1)}(\cos \theta)&=
\frac{\cos\left(\left(L+\frac d2\right)\theta-\frac\pi4(d+1)\right)}
{\sqrt{\pi L}\left(\sin\frac \theta2\right)^{\frac{d+1}2}
\left(\cos\frac \theta2\right)^{\frac{d-1}2}}+\mathcal{O}(L^{-\frac32})\\
\mathcal{P}_L^{(\frac d2,\frac d2-1)}\left(\cos \frac \tau L\right)&=L^{\frac d2}
\left(\frac 2\tau\right)^{\frac d2}J_{\frac d2}(\tau)+\mathcal{O}(L^{\frac d2-1}),
\label{eq:P-asymp2}
\end{align}
where $J_{\frac d2}$ denotes the Bessel function of the first kind of index $\frac d2$. Given
a constant $C>0$, the asymptotic relation \eqref{eq:P-asymp1} is used for
$\theta>\frac CL$, whereas relation \eqref{eq:P-asymp2} is used for
$\theta=\frac\tau L\leq\frac CL$.
This gives
\begin{multline}
\label{eq:int-1}
\int_0^{\frac CL}\left(\mathcal{P}_L^{(\frac d2,\frac d2-1)}(\cos
\theta)\right)^2\left(\sin\frac \theta2\right)^d \left(\cos\frac
\theta2\right)^{d-1}\,\mathrm{d}\theta\\= \frac 1L\int_0^C J_{\frac
d2}(\theta)^2\,\mathrm{d}\theta+\mathcal{O}(L^{-2})
\end{multline}
for the integral over the ``small'' values of $\theta$,
\begin{multline}
\label{eq:int-2}
\int_{\frac CL}^{\alpha}\left(\mathcal{P}_L^{(\frac d2,\frac d2-1)}(\cos
\theta)\right)^2\left(\sin\frac \theta2\right)^d
\left(\cos\frac \theta2\right)^{d-1}\,\mathrm{d}\theta\\=
\frac1{\pi L}\int_{\frac CL}^{\alpha}\frac{\cos\left(\left(L+\frac
d2\right)\theta-\frac\pi4(d+1)\right)^2}{\sin(\frac
\theta2)}\,\mathrm{d}\theta+\mathcal{O}(L^{-2})
\end{multline}
for the integral over the ``large'' values of $\theta$,
\begin{multline}
\label{eq:int-3}
\int_{\alpha}^\pi\left(\mathcal{P}_L^{(\frac d2,\frac d2-1)}(\cos
\theta)\right)^2\left(\sin\frac \theta2\right)^{d-1} \left(\cos\frac
\theta2\right)^{d-1}\,\mathrm{d}\theta\\
=\frac1{\pi L}\int_{\alpha}^\pi
\frac{\cos\left(\left(L+\frac
d2\right)\theta-\frac\pi4(d+1)\right)^2}{(\sin\frac \theta2)^2}\,\mathrm{d}\theta+
\mathcal{O}(L^{-2})=\mathcal{O}((L\alpha)^{-1})
\end{multline}
and
\begin{multline}
\label{eq:int-4}
\int_0^\alpha \left(\mathcal{P}_L^{(\frac d2,\frac d2-1)}(\cos
\theta)\right)^2\left(\sin\frac \theta2\right)^{d+2}
\left(\cos\frac \theta2\right)^{d-1}\,\mathrm{d} t\\=
\frac1{\pi L}\int_0^\alpha \!\!\cos\left(\left(L+\frac
d2\right)\theta-\frac\pi4(d+1)\right)^2 \sin\Big(\frac
\theta2\Big)\,\mathrm{d}\theta+\mathcal{O}(L^{-2})
=
\mathcal{O}(\alpha^2L^{-1})
\end{multline}
for the integral in the error term.
In the case of large caps ($0<\phi<\frac\pi2$ fixed), we compute the number
variance as
\begin{multline*}
V(\mathscr{X}_L^H,\phi)=\frac{\gamma_d}{\pi}\left(\frac{Z(d+1,L)}{\binom{L+\frac d2}L}\right)^2
\frac{(2\sin\phi)^{d-1}}L\Biggl(\int_0^C J_{\frac d2}(\theta)^2\,
\mathrm{d}\theta\\+
\frac1{\pi}\int_{\frac CL}^{2\phi}\frac{\cos\left(\left(L+\frac
d2\right)\theta-\frac\pi4(d+1)\right)^2}{\sin(\frac
\theta2)}\,\mathrm{d}\theta+
\mathcal{O}(L^{-1})+\mathcal{O}(\phi^{-1})\Biggr)\\=
\mathcal{O}((\sin\phi)^{d-1}L^{d-1}\log( L \phi ) ),
\end{multline*}
where we have used $\left(Z(d+1,L)/\binom{L+\frac d2}L\right)^2 \asymp L^d$ and
the logarithmic term comes from the second summand. This is the true asymptotic
order and due to $N_L\asymp L^d$ we have $V(\mathscr{X}_L^H,\phi)=o\left(N_L\right) $ as
$L\to\infty$ for all $\phi\in(0,\frac\pi2)$.
In the case of small caps, a similar computation gives
\begin{align*}
V(\mathscr{X}_L^H,\phi_L) &= \mathcal{O}((\sin\phi_L)^{d-1}L^{d-1}\log( L \phi_L ) )
= \mathcal{O}\Big( \frac{\log( L \phi_L )}{ L \phi_L } \, (\sin\phi_L)^{d}L^{d}\Big) \\
&= \mathcal{O}\Big( \frac{\log( L \phi_L )}{ L \phi_L } \, N_L\,\sigma(C(\phi_L)) \Big)=o(N_L\,\sigma(C(\phi_L))).
\end{align*}
For caps at threshold order, we analyse the three terms in
\eqref{eq:harmonic_number_variance} separately. The first term gives
\begin{equation*}
\frac{\gamma_d}{\pi}\left(\frac{Z(d+1,L)}{\binom{L+\frac d2}L}\right)^2
\frac{(2\sin tL^{-1})^{d-1}}L\left(\int_0^t J_{\frac d2}(\theta)^2\,\mathrm{d}\theta+
\mathcal{O}(L^{-1})\right)
\end{equation*}
using \eqref{eq:P-asymp2}.
We use the asymptotic behaviour of the Bessel function for
$\theta\to\infty$
(see~\cite[3.14.1]{Magnus_Oberhettinger_Soni1966:formulas_theorems})
\begin{equation*}
J_{\frac d2}(\theta)=\frac{\cos\left(\theta-\frac{\pi (d+1)}4\right)}
{\sqrt{\frac{\pi\theta}2}}+\mathcal{O}(\theta^{-\frac32})
\end{equation*}
to obtain
\begin{equation*}
\int_0^tJ_{\frac d2}(\theta)^2\,\mathrm{d}\theta=\frac1\pi\log t+\mathcal{O}(1).
\end{equation*}
Thus the first term in \eqref{eq:harmonic_number_variance} for $\phi_L\sim
tL^{-1}$ is of order $t^{d-1}\log t$.
The second term in \eqref{eq:harmonic_number_variance} is analysed using
\eqref{eq:int-4}, which gives an order of $t^{d-1}$ for $\phi_L\sim
tL^{-1}$. Similarly, the third term is analysed using \eqref{eq:int-3}, which
again gives an order of $t^{d-1}$.
Putting these order estimates together yields
\begin{equation*}
V(\mathscr{X}_L^H,tL^{-1})=\mathcal{O}(t^{d-1}\log t)
\end{equation*}
and concludes our proof.
\qed
\end{proof}
\section{Jittered Sampling}\label{sec:jittered}
In \cite{Gigante_Leopardi2017:diameter_ahlfors} it is shown that on arbitrary
Ahlfors regular metric measure spaces there exist area-regular partitions. For
the case of the sphere studied here, an area-regular partition is given by
$\mathcal{A}=\{A_1,\ldots,A_N\}$ with $\bigcup_{i=1}^N A_i =\mathbb{S}^d$,
$\sigma(A_i)=\frac1N$, and $A_i\cap A_j=\emptyset$ for $i\neq j$,
satisfying
\begin{equation}\label{eq:diam}
\diam(A_i) \le C_dN^{-1/d},\qquad i=1,\ldots,N,
\end{equation}
with a constant depending only on $d$
(see also \cite{alexander1972sum,bourgain1988distribution,
kuijlaars1998asymptotics,mhaskar2001spherical,Le2006}).
Such partitions allow us to consider the average behaviour of \emph{jittered
sampling}: the point process $\mathscr{X}_N^{\mathcal{A}}$ obtained by sampling $N$
points on the sphere, one uniformly distributed point in each region of the
partition.
The number variance of the jittered sampling process is given by
\begin{multline*}
V(\mathscr{X}_N^{\mathcal{A}},\phi)\\=\!\!
\int\limits_{\mathbb{S}^d}\!\int\limits_{A_1}\!\!\dots\!\! \int\limits_{A_N}\!\!
\left(\sum_{i=1}^N
\mathbbm{1}_{C(\mathbf{x},\phi)}(\mathbf{y}_i)-N \sigma(C(\phi))\right)^2\!\!\!
\mathrm{d}\sigma_1(\mathbf{y}_1)\cdots \mathrm{d}\sigma_N(\mathbf{y}_N)\,\mathrm{d}\sigma(\mathbf{x}),
\end{multline*}
where $\sigma_i(\cdot):=N\sigma(\cdot\cap A_i)$ is the uniform probability
measure on $A_i$.
The integral can be split into off-diagonal and diagonal terms
\begin{align}
&V(\mathscr{X}_N^{\mathcal{A}},\phi)
=\sum_{\substack{i,j=1\\i\neq j}}^N
\int\limits_{A_i}\int\limits_{A_j}\!\!\sigma(C(\mathbf{y}_i,\phi)\cap C(\mathbf{y}_j,\phi))
\,\mathrm{d}\sigma_i(\mathbf{y}_i)\,\mathrm{d}\sigma_j(\mathbf{y}_j)
\label{eq:variance_jittered}\\&+
N\sigma(C(\phi))-N^2\sigma(C(\phi))^2\notag\\
=&N\sum_{i=1}^N\int\limits_{A_i} \Bigg(\int\limits_{\mathbb{S}^d}\!\!
\sigma(C(\mathbf{y}_i,\phi)\cap
C(\mathbf{y},\phi))\,\mathrm{d}\sigma(\mathbf{y}) \notag\\ &-
\int\limits_{A_i}\!\!\sigma(C(\mathbf{y}_i,\phi)\cap C(\mathbf{y},\phi))
\,\mathrm{d}\sigma(\mathbf{y})\Bigg) \mathrm{d}\sigma_i(\mathbf{y}_i)+
N\sigma(C(\phi))-N^2\sigma(C(\phi))^2\notag\\
=&N^2\left(\int\limits_{\mathbb{S}^d}\int\limits_{\mathbb{S}^d}
\sigma(C(\mathbf{x},\phi)\cap C(\mathbf{y},\phi))
\,\mathrm{d}\sigma(\mathbf{x})\,\mathrm{d}\sigma(\mathbf{y})-\sigma(C(\phi))^2\right)\notag\\
&+\sum_{i=1}^N\int\limits_{A_i}\int\limits_{A_i}\Big(\sigma(C(\mathbf{x}_i,\phi))-
\sigma(C(\mathbf{x}_i,\phi)\cap C(\mathbf{y}_i,\phi))\Big)
\,\mathrm{d}\sigma_i(\mathbf{x}_i)\,\mathrm{d}\sigma_i(\mathbf{y}_i)\notag\\
=&\frac12\sum_{i=1}^N\int\limits_{A_i}\int\limits_{A_i}
\sigma(C(\mathbf{x}_i,\phi)\triangle C(\mathbf{y}_i,\phi))
\,\mathrm{d}\sigma_i(\mathbf{x}_i)\,\mathrm{d}\sigma_i(\mathbf{y}_i),\label{eq:diagonal}
\end{align}
where $\triangle$ denotes the symmetric difference operator of two sets. For
the last equality, we have used
\begin{align*}
&\int\limits_{\mathbb{S}^d}\int\limits_{\mathbb{S}^d}
\sigma(C(\mathbf{x},\phi)\cap C(\mathbf{y},\phi))
\,\mathrm{d}\sigma(\mathbf{x})\,\mathrm{d}\sigma(\mathbf{y})\\
&=\int\limits_{\mathbb{S}^d}\int\limits_{\mathbb{S}^d}\int\limits_{\mathbb{S}^d}
\mathbbm{1}_{C(\mathbf{x},\phi)}(\mathbf{z})
\mathbbm{1}_{C(\mathbf{y},\phi)}(\mathbf{z})\,
\mathrm{d}\sigma(\mathbf{z})\,\mathrm{d}\sigma(\mathbf{x})\,\mathrm{d}\sigma(\mathbf{y})\\
&= \int\limits_{\mathbb{S}^d}\int\limits_{\mathbb{S}^d}\int\limits_{\mathbb{S}^d}
\mathbbm{1}_{C(\mathbf{z},\phi)}(\mathbf{x})
\mathbbm{1}_{C(\mathbf{z},\phi)}(\mathbf{y})\,
\mathrm{d}\sigma(\mathbf{x})\,\mathrm{d}\sigma(\mathbf{y})\,\mathrm{d}\sigma(\mathbf{z})
=\sigma(C(\phi))^2.
\end{align*}
So in fact the variance of the jittered sampling process reduces to
the diagonal terms. The measure of the symmetric difference can be
bounded like
\begin{equation*}
\sigma(C(\mathbf{x}_i,\phi)\triangle C(\mathbf{y}_i,\phi))\leq
C_d\operatorname{surface}(\partial C(\mathbf{x}_i,\phi))
\arccos(\langle\mathbf{x}_i,\mathbf{y}_i\rangle),
\end{equation*}
where $C_d$ is a constant only depending on the dimension $d$. This can be
seen from Lemma~\ref{lem5} using the observation that
$\operatorname{surface}(\partial C(\mathbf{x}_i,\phi))\asymp
(\sin\phi)^{d-1}$.
From the diameter bounds coming from our choice of equipartition, every
summand in \eqref{eq:diagonal} can be bounded by
$\mathcal{O}(\phi^{d-1}N^{-\frac1d})$, which gives
\begin{equation}
\label{eq:jittered-bound}
V(\mathscr{X}_N^{\mathcal{A}},\phi)=\mathcal{O}\left(\phi^{d-1}N^{\frac{d-1}d}\right);
\end{equation}
the implied constant depends only on the dimension and the constants in
\eqref{eq:diam}.
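The bound \eqref{eq:jittered-bound} predicts a variance of order $\phi^{d-1}N^{\frac{d-1}d}$, which for $d=1$ (jittered sampling on the circle, with arcs as cells) is simply $\mathcal{O}(1)$. The toy Monte Carlo sketch below illustrates this on the circle; this circle analogue is our own illustration, not from the discussion above.

```python
import random

def jittered_circle_variance(N, w, trials=20000, seed=7):
    # Jittered sampling on the circle (d = 1): one uniform point per arc
    # [i/N, (i+1)/N); we count points in a uniformly rotated window of
    # length w (as a fraction of the circle) and estimate the number
    # variance by Monte Carlo.
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        c = rng.random()
        cnt = 0
        for i in range(N):
            y = (i + rng.random()) / N
            if (y - c) % 1.0 < w:
                cnt += 1
        counts.append(cnt)
    m = sum(counts) / trials
    return sum((x - m) ** 2 for x in counts) / trials
```

With $N=50$ and $w=0.2$ the variance stays bounded (around $\frac13$), far below the i.i.d. value $Nw(1-w)=8$.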
\begin{theorem}\label{thm:jittered}
The jittered sampling point process is hyperuniform in all three regimes.
\end{theorem}
\begin{proof}
From \eqref{eq:jittered-bound} it is now immediate that
$V(\mathscr{X}_N^{\mathcal{A}},\phi)=o(N)$ for all $\phi\in(0,\frac\pi2)$, which proves
hyperuniformity for large caps.
Again from \eqref{eq:jittered-bound} we obtain
\begin{equation*}
V(\mathscr{X}_N^{\mathcal{A}},\phi_N)=\mathcal{O}\Big((\phi_NN^{\frac1d})^{d-1}\Big)=
o\Big((\phi_NN^{\frac1d})^d\Big)=o\big(\phi_N^dN\big)
\end{equation*}
under the assumptions on $(\phi_N)_{N\in\mathbb{N}}$ in
Definition~\ref{def-hyper}, which proves hyperuniformity for small caps.
\label{sec:jitt-threshold}
Inserting $\phi_N=tN^{-\frac1d}$ into \eqref{eq:jittered-bound} yields
\begin{equation*}
V(\mathscr{X}_N^{\mathcal{A}},tN^{-\frac1d})=\mathcal{O}\big(t^{d-1}\big)\qquad\text{
as }t\to\infty,
\end{equation*}
which implies hyperuniformity at threshold order.
\qed
\end{proof}
\subsection*{Jittered Sampling is Determinantal}
In this subsection we consider a general probability space
$(\Lambda,\mathcal{B},\mu)$ with the only requirement that for any
$N\in\mathbb{N}$ there exists a partition of $\Lambda$ into $N$ disjoint
measurable sets of equal measure. Consider an area-regular partition
$\mathcal{A}=\{A_1,\ldots,A_N\}$ of the space $\Lambda$ into pairwise disjoint
measurable sets; i.e.,
\begin{align*}
A_i\cap A_j &= \emptyset, \qquad i\neq j,\\
\mu\Big(\bigcup_{i=1}^NA_i\Big)&=1, \\
\mu(A_i)&=\frac1N, \qquad i = 1, \dots, N.
\end{align*} Define the projection operator
\begin{equation*} p_{\mathcal{A}}(f)(x):=
\sum_{i=1}^N\frac{\mathbbm{1}_{A_i}(x)}{\mu(A_i)}\int_{A_i}f(y)\,\mathrm{d}\mu(y)=
\int_\Lambda K_{\mathcal{A}}(x,y)f(y)\,\mathrm{d}\mu(y)
\end{equation*} to the space of functions measurable with respect to
the finite $\sigma$-algebra generated by $\mathcal{A}$. The kernel of this operator
is given by
\begin{equation*}
K_{\mathcal{A}}(x,y):=\sum_{i=1}^N\frac{\mathbbm{1}_{A_i}(x)\mathbbm{1}_{A_i}(y)}{\mu(A_i)}.
\end{equation*} The determinantal point process $\mathscr{X}_N^{\mathcal{A}}$
defined by the projection kernel $K_{\mathcal{A}}$ is then equal to
the jittered sampling process associated to the partition
$\mathcal{A}$, which can be seen by computing
\begin{multline*} \mathbb{E}\mathscr{X}_N^{\mathcal{A}}(A_1)\cdots
\mathscr{X}_N^{\mathcal{A}}(A_N)\\= \int_{A_1}\cdots\int_{A_N}
\det\left(K_{\mathcal{A}}(x_i,x_j)_{i,j=1}^N\right)\,\mathrm{d}\mu(x_1)
\cdots \mathrm{d}\mu(x_N).
\end{multline*} Expanding the determinant gives
\begin{multline*} \mathbb{E}\mathscr{X}_N^{\mathcal{A}}(A_1)\cdots
\mathscr{X}_N^{\mathcal{A}}(A_N)\\= \sum_\pi\sgn(\pi)\int_{A_1}\cdots\int_{A_N}
\prod_{i=1}^NK_{\mathcal{A}}(x_i,x_{\pi(i)})\,\mathrm{d}\mu(x_1)\cdots\mathrm{d}\mu(x_N).
\end{multline*} Now we notice that $K_{\mathcal{A}}(x_i,x_j)=0$ if
$i\neq j$ and $x_i\in A_i$ and $x_j\in A_j$. Thus, the integrand in the
sum vanishes for all $\pi\neq\id$, which gives
\begin{equation}\label{eq:jexpect}
\mathbb{E}\mathscr{X}_N^{\mathcal{A}}(A_1)\cdots\mathscr{X}_N^{\mathcal{A}}(A_N)= \prod_{i=1}^N
\int_{A_i}K_{\mathcal{A}}(x_i,x_i)\,\mathrm{d}\mu(x_i)=1.
\end{equation} The process $\mathscr{X}_N^{\mathcal{A}}$ samples $N$ points almost
surely, since $K_{\mathcal{A}}$ is the kernel of a projection operator of rank $N$
(see~\cite{Hough_Krishnapur_Peres+2009:zeros_gaussian}). The product of random
variables $\mathscr{X}_N^{\mathcal{A}}(A_1)\cdots \mathscr{X}_N^{\mathcal{A}}(A_N)$ is either $0$
or $1$ and thus equal to $1$ (a.s.) by \eqref{eq:jexpect}. This implies that
the process samples exactly one point per set of the partition
$\mathcal{A}$. Furthermore, we have
\begin{align*}
\mathbb{E}\mathscr{X}_N^{\mathcal{A}}(D)&=\int_{D}K_{\mathcal{A}}(x,x)\,\mathrm{d}\mu(x)=
\sum_{i=1}^N\int_D\frac{\mathbbm{1}_{A_i}(x)^2}{\mu(A_i)}\,\mathrm{d}\mu(x)\\&=
\sum_{i=1}^N \frac{\mu(A_i\cap D)}{\mu(A_i)}=N\mu(D),
\end{align*}
and, for $D\subseteq A_i$, this implies
$\mathbb{E}\mathscr{X}_N^{\mathcal{A}}(D)=\mu(D)/\mu(A_i)$; the sample point
chosen from $A_i$ is distributed with measure $\mu_i$ on $A_i$, where
\begin{equation*}
\mu_i(A)=N\mu(A_i\cap A).
\end{equation*}
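The two defining properties of $K_{\mathcal{A}}$, idempotence of the associated integral operator and trace $N$, can be checked directly on a finite toy space (a minimal sketch with $\Lambda=\{0,\dots,5\}$, uniform $\mu$, and $N=3$ blocks; the setup is our own illustration):

```python
# Lambda = {0,...,5} with uniform measure mu({x}) = 1/6, partitioned into
# N = 3 blocks of equal measure 1/3
M, N = 6, 3
blocks = [{0, 1}, {2, 3}, {4, 5}]
mu = 1.0 / M
mu_A = 1.0 / N

def K(x, y):
    # K_A(x, y) = sum_i 1_{A_i}(x) 1_{A_i}(y) / mu(A_i)
    return sum((x in A) and (y in A) for A in blocks) / mu_A

# idempotence: (K o K)(x, y) = int K(x, z) K(z, y) dmu(z) equals K(x, y)
KK = [[sum(K(x, z) * mu * K(z, y) for z in range(M)) for y in range(M)]
      for x in range(M)]

# trace: int K(x, x) dmu(x) = N, the a.s. number of sampled points
trace = sum(K(x, x) * mu for x in range(M))
```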
\begin{ackno}
This material is based upon work supported by the National Science
Foundation under Grant No. DMS-1439786 while the first three authors
were in residence at the Institute for Computational and
Experimental Research in Mathematics in Providence, RI, during the
Spring 2018 semester.
\end{ackno}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\end{document}
\begin{document}
\newcommand{\per}{{\rm per}}
\newtheorem{teorema}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{utv}{Proposition}
\newtheorem{svoistvo}{Property}
\newtheorem{sled}{Corollary}
\newtheorem{con}{Conjecture}
\author{A. A. Taranenko}
\title{Upper bounds on the permanent of multidimensional (0,1)-matrices \thanks{The work is supported
by the Russian Science Foundation (grant 14--11--00555)}}
\date{}
\maketitle
\begin{abstract}
The permanent of a multidimensional matrix is the sum of products of entries over all diagonals.
By Minc's conjecture, proved by Bregman, there exists an attainable upper bound on the permanent of 2-dimensional (0,1)-matrices. In this paper we obtain some generalizations of Minc's conjecture to the multidimensional case. For this purpose we prove and compare several bounds on the permanent of multidimensional (0,1)-matrices.
Most of the estimates also apply to matrices with nonnegative entries not exceeding 1.
\end{abstract}
\section{Definitions and upper bounds on the permanent of 2-dimensional matrices}
Let $n,d \in \mathbb N$, and let $I_n^d= \left\{ (\alpha_1, \ldots , \alpha_d):\alpha_i \in \left\{1,\ldots,n \right\}\right\}$.
A \textit{$d$-dimensional matrix $A$ of order $n$} is an array $(a_\alpha)_{\alpha \in I^d_n}$, $a_\alpha \in\mathbb R$. A matrix $A$ is called \textit{nonnegative} if $a_\alpha \geq 0$ for all $\alpha \in I^d_n$.
Let $k\in \left\{0,\ldots,d\right\}$. A \textit{$k$-dimensional plane} in $A$ is the set of entries obtained by fixing $d-k$ indices and letting the other $k$ indices vary from 1 to $n$. A $(d-1)$-dimensional plane is said to be a \textit{hyperplane}. The \textit{direction} of a plane is the vector describing which indices are fixed in the plane.
Let $\alpha$ belong to $I_n^d$, and let $(A|\alpha)$ denote the $d$-dimensional matrix of order $n-1$ obtained from the matrix $A$ by deleting the entries $a_\beta$ such that $\alpha_i=\beta_i$ for some $i \in \left\{1,\ldots,d \right\}$.
For a $d$-dimensional matrix $A$ of order $n$, denote by $D(A)$ the set of its diagonals
$$D(A)=\left\{ (\alpha^1,\ldots,\alpha^n) | \alpha^i \in I_n^d, \forall i ~\forall j \neq i ~ \rho (\alpha^i,\alpha^j)=d\right\},$$
where $\rho$ is the Hamming distance (the number of positions at which the corresponding entries are different).
Then the \textit{permanent} of a matrix $A$ is
$${\rm per} A = \sum\limits_{p\in D(A)} \prod\limits_{\alpha \in p} a_\alpha.$$
In this paper we mostly consider (0,1)-matrices, that is, matrices all of whose entries are equal to 0 or 1. Occasionally we also deal with matrices whose entries are nonnegative and not greater than 1.
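To make the definition concrete, here is a brute-force sketch (the helper name `per_md` is ours; the enumeration over the $(n!)^{d-1}$ diagonals is feasible only for very small $n$ and $d$). A diagonal of a $d$-dimensional matrix of order $n$ is exactly a tuple $\big((i,p_1(i),\ldots,p_{d-1}(i))\big)_{i=1}^n$ for permutations $p_1,\ldots,p_{d-1}$ of $\{1,\ldots,n\}$.

```python
from itertools import permutations, product
from math import prod

def per_md(a, d, n):
    """Brute-force permanent of a d-dimensional array `a` of order n,
    indexed as a[i1][i2]...[id] with 0-based indices.  A diagonal is a
    set of n cells whose coordinates differ pairwise in every position,
    i.e. the cells (i, p1(i), ..., p_{d-1}(i)) for permutations p_k."""
    def entry(idx):
        x = a
        for i in idx:
            x = x[i]
        return x
    total = 0
    for ps in product(permutations(range(n)), repeat=d - 1):
        total += prod(entry((i,) + tuple(p[i] for p in ps)) for i in range(n))
    return total
```

For the all-ones matrix every diagonal contributes 1, so the routine returns $(n!)^{d-1}$.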
First we show the trivial upper bound on the permanent of nonnegative 2-dimensional matrices.
\begin{utv} \label{trivoc}
Let $A$ be a nonnegative 2-dimensional matrix of order $n$. Suppose that the sum of entries in the $i$th row of the matrix $A$ is not greater than $r_i$. Then
$${\rm per} A \leq \prod \limits_{i=1}^n r_i.$$
\end{utv}
\begin{proof}
The proof is by induction on the order of matrices. Using the definition of the permanent, we have
$$ {\rm per} A = \sum \limits_{j=1}^n a_{n,j} {\rm per} (A|(n,j)) \leq r_n \max_{j= 1, \ldots, n} {\rm per} (A|(n,j)).$$
Note that $(A|(n,j))$ are the matrices of order $n-1$ such that the sum of entries in their $i$th row is not greater than $r_i$. By the inductive assumption, ${\rm per} (A|(n,j)) \leq \prod \limits_{i=1}^{n-1} r_i$ for all $j=1,\ldots,n$. Therefore,
$${\rm per} A \leq \prod \limits_{i=1}^n r_i.$$
\end{proof}
The following inequality, proved by Bregman~\cite{bregman}, Schrijver~\cite{shriver}, and
Radhakrishnan~\cite{krishna}, is known as Minc's conjecture~\cite{gypminc}.
\begin{teorema}[\cite{bregman,krishna,shriver}] \label{gminc}
Let $A$ be a 2-dimensional matrix of order $n$, and let $r_i$ be the number of 1's in the $i$th row of the matrix $A$. Then
$${\rm per} A \leq \prod \limits_{i=1}^n r_i!^{1/r_i}.$$
\end{teorema}
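As a sanity check (our own sketch, not part of the original argument), the Bregman bound can be verified by brute force on small random (0,1)-matrices:

```python
from itertools import permutations
from math import factorial, prod
import random

def per2(a):
    """Brute-force permanent of a 2-dimensional matrix given as a list of rows."""
    n = len(a)
    return sum(prod(a[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def bregman_bound(a):
    """prod_i r_i!^(1/r_i) over the rows; a row of zeros forces per A = 0."""
    b = 1.0
    for row in a:
        r = sum(row)
        if r == 0:
            return 0.0
        b *= factorial(r) ** (1.0 / r)
    return b

random.seed(0)
for _ in range(200):
    n = random.randint(1, 5)
    a = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
    assert per2(a) <= bregman_bound(a) + 1e-9
```

The all-ones matrix attains the bound: both sides equal $n!$.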
Using the theorem, we can extend the inequality to nonnegative matrices with bounded entries.
\begin{sled} \label{mincsled}
Let $A$ be a nonnegative 2-dimensional matrix of order $n$ whose entries are not greater than 1. Suppose that the sum of entries in the $i$th row of $A$ is not greater than $r_i$. Then
$${\rm per} A \leq \prod \limits_{i=1}^n \left\lceil r_i\right\rceil!^{\frac{1}{\left\lceil r_i\right\rceil}}.$$
\end{sled}
\begin{proof}
Construct recursively nonnegative $2$-dimensional matrices $A=A^0, A^1, \ldots, A^n$ such that their entries are not greater than 1 and ${\rm per} A^i \leq {\rm per} A^{i+1}$ for all $i \in \left\{0, \ldots, n-1\right\}$.
Assume that the matrix $A^i$ is constructed. Let us construct $A^{i+1}.$ Rearrange the columns of the matrix $A^i$ so that ${\rm per} (A^i|(i+1,k)) \geq {\rm per}(A^i|(i+1,k+1))$ for all $k$. Call the resulting matrix $B^i$. Let $A^{i+1}= (a^{i+1}_{j,k})_{j,k=1}^n$ and $B^{i}= (b^{i}_{j,k})_{j,k=1}^n.$
Put
$a^{i+1}_{j,k} = b^i_{j,k}$ for $j \neq i+1$,
$a^{i+1}_{i+1,k} = 1$ for $k \leq \left\lceil r_{i+1} \right\rceil$,
and $a^{i+1}_{i+1,k} = 0$ for $k > \left\lceil r_{i+1} \right\rceil$.
Then $A^n$ is a (0,1)-matrix with $\left\lceil r_i\right\rceil$ ones in the $i$th row. By construction, ${\rm per} A^i = {\rm per} B^i \leq {\rm per} A^{i+1}.$ By Theorem~\ref{gminc}, we have
$${\rm per} A \leq {\rm per} A^n \leq \prod\limits_{i=1}^n \left\lceil r_i\right\rceil ! ^{\frac{1}{\left\lceil r_i\right\rceil}}.$$
\end{proof}
If we know only the sum of all entries of a 2-dimensional nonnegative matrix, then we can estimate its permanent by the following inequality.
\begin{sled}
Let $A$ be a nonnegative 2-dimensional matrix of order $n$ whose entries are not greater than 1. Suppose that $\sum\limits_{i,j=1}^n a_{i,j}= \gamma n$. Then
$${\rm per} A \leq (\gamma+1)^n e^{-n} (e\sqrt{ \gamma+1})^{\frac{n}{ \gamma+1}}.$$
\end{sled}
\begin{proof}
Suppose that the sum of entries in the $i$th row of $A$ is equal to $r_i$. By Corollary~\ref{mincsled},
$${\rm per} A \leq \prod \limits_{i=1}^n \left\lceil r_i\right\rceil!^{\frac{1}{\left\lceil r_i\right\rceil}}.$$
Note that $\sum\limits_{i=1}^n \left\lceil r_i\right\rceil \leq \sum\limits_{i=1}^n ( r_i +1 ) = ( \gamma +1) n.$ Using an approximation of the factorial
$$x! \leq e x^{x+1/2} e^{-x},$$
we obtain
$${\rm per} A \leq \prod\limits_{i=1}^n e^{-1 + 1/\left\lceil r_i\right\rceil} \left\lceil r_i\right\rceil^{1 + \frac{1}{2\left\lceil r_i\right\rceil}}.$$
It can be proved that $e^{1/x} x^{1+1/(2x)}$ is a concave function for $x>1$. Therefore,
$${\rm per} A \leq \prod\limits_{i=1}^n e^{-1 + \frac{1}{\gamma+1}} ( \gamma +1)^{1 + \frac{1}{2( \gamma+1)}} =
( \gamma+1)^n e^{-n} (e\sqrt{ \gamma+1})^{\frac{n}{ \gamma+1}}. $$
\end{proof}
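The final equality in the proof is elementary algebra; a throwaway numerical check of ours:

```python
from math import exp, sqrt

# Check that the product of n identical factors
#   e^{-1 + 1/(g+1)} * (g+1)^{1 + 1/(2(g+1))}
# equals the closed form (g+1)^n e^{-n} (e*sqrt(g+1))^{n/(g+1)}.
for g in [0.5, 1.0, 2.7, 10.0]:
    for n in [1, 3, 7]:
        lhs = (exp(-1 + 1 / (g + 1)) * (g + 1) ** (1 + 1 / (2 * (g + 1)))) ** n
        rhs = (g + 1) ** n * exp(-n) * (exp(1) * sqrt(g + 1)) ** (n / (g + 1))
        assert abs(lhs - rhs) < 1e-9 * rhs
```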
In the following section we prove upper bounds on the permanent of multidimensional (0,1)-matrices through the number of planes covering all ones of the matrix. Also, we propose an upper bound by means of sums of entries in hyperplanes and prove that it holds asymptotically. In addition, we estimate the permanent of a 3-dimensional (0,1)-matrix through the permanent of some 2-dimensional matrix.
\section{Upper bounds on the permanent of multidimensional matrices}
There is a trivial upper bound on the permanent of nonnegative multidimensional matrices, which is similar to Proposition~\ref{trivoc}.
\begin{utv} \label{mtriv}
Let $A$ be a nonnegative $d$-dimensional matrix of order $n$. Suppose that the sum of entries in the $i$th hyperplane of the matrix $A$ is not greater than $r_i$. Then
$${\rm per} A \leq \prod \limits_{i=1} ^n r_i.$$
\end{utv}
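A quick numerical check of the proposition for $d=3$ (our sketch; the hyperplanes are taken in the first direction):

```python
from itertools import permutations
from math import prod
import random

def per3(a):
    """Brute-force permanent of a 3-dimensional matrix of order n."""
    n = len(a)
    return sum(prod(a[i][s[i]][t[i]] for i in range(n))
               for s in permutations(range(n)) for t in permutations(range(n)))

random.seed(2)
n = 3
for _ in range(50):
    a = [[[random.random() for _ in range(n)] for _ in range(n)] for _ in range(n)]
    r = [sum(map(sum, a[i])) for i in range(n)]   # hyperplane sums, first direction
    assert per3(a) <= prod(r) + 1e-9              # trivial bound: per A <= r_1...r_n
```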
Consider multidimensional (0,1)-matrices now. In~\cite{my}, the author proved an asymptotic upper bound on the permanent of matrices such that each 1-dimensional plane of the matrix contains exactly one 1.
\begin{teorema}
Let $d \geq 3$, and let $\Omega^d(n)$ be the set of $d$-dimensional (0,1)-matrices of order $n$ such that each 1-dimensional plane contains exactly one 1. Then
$$\max \limits_{A \in \Omega^d(n)} {\rm per} A \leq n!^{d-2} e^{-n + o(n)} \mbox{ as } n \rightarrow \infty.$$
\end{teorema}
It would be desirable to generalize the bound of Theorem~\ref{gminc} to the multidimensional case and to get a bound on the permanent in terms of the sums in hyperplanes. At the moment, however, we can only estimate the permanent in terms of the number of planes covering all ones of a matrix. To make the further reasoning clearer, we first prove a simple lemma.
\begin{lemma} \label{vsp}
Let $A$ be a $d$-dimensional matrix of order $n$. Let us fix some direction of $k$-dimensional planes, $1 \leq k \leq d-2,$ and enumerate them by $(d-k)$-dimensional indices. Put
$$T = \left\{ \tau: I^1_n \rightarrow I^{d-k - 1}_n | ((1, \tau(1)), \ldots, (n, \tau(n))) \mbox{ is a diagonal in a } (d-k)\mbox{-dimensional matrix}\right\}.$$
Denote by $A_\tau$ the $(k+1)$-dimensional matrix of order $n$ such that the $i$th hyperplane of $A_\tau$ is the $(i,\tau(i))$-th $k$-dimensional plane of the matrix $A$. Then
$${\rm per} A = \sum\limits_{\tau \in T} {\rm per} A_\tau.$$
\end{lemma}
\begin{proof}
Without loss of generality we suppose that in the $k$-dimensional planes the first $d-k$ indices are fixed and the last $k$ indices vary. By the definition,
$${\rm per} A = \sum\limits_{p\in D} \prod\limits_{\alpha \in p} a_\alpha.$$
Divide the set $D$ of diagonals into the parts
$$D_\tau = \left\{p \in D| p=((1, \tau(1), *, \ldots,*), \ldots, (n, \tau(n),*,\ldots,*))\right\},$$
where $*$ stands for an arbitrary index. Rearranging the summands in the definition of the permanent, we obtain
$${\rm per} A = \sum\limits_{\tau \in T}\sum\limits_{p\in D_\tau} \prod\limits_{\alpha \in p} a_\alpha.$$
Since $\sum\limits_{p\in D_\tau} \prod\limits_{\alpha \in p} a_\alpha$ is the permanent of the matrix $A_\tau$, the proof is complete.
\end{proof}
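For $d=3$ and $k=1$ the lemma says that the 3-dimensional permanent splits over the permutations $\sigma$ selecting one 1-dimensional plane per hyperplane. This can be checked by brute force (our sketch):

```python
from itertools import permutations
from math import prod
import random

def per2(a):
    n = len(a)
    return sum(prod(a[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def per3(a):
    n = len(a)
    return sum(prod(a[i][s[i]][t[i]] for i in range(n))
               for s in permutations(range(n)) for t in permutations(range(n)))

random.seed(1)
n = 3
a = [[[random.randint(0, 1) for _ in range(n)] for _ in range(n)] for _ in range(n)]
# A_sigma: its i-th row is the 1-dimensional plane l_{i, sigma(i)}
lhs = per3(a)
rhs = sum(per2([a[i][s[i]] for i in range(n)]) for s in permutations(range(n)))
assert lhs == rhs   # per A = sum over sigma of per A_sigma
```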
Let $A$ be a $d$-dimensional matrix of order $n$. Denote by $L ^ k_i (A)$ the set of $k$-dimensional planes in $A$ such that their last $k$ indices vary, the first $d-k$ indices are fixed, and the very first index equals $i$.
Let us prove an upper bound on the permanent of multidimensional matrices.
\begin{teorema} \label{bad}
Let $A$ be a $d$-dimensional (0,1)-matrix of order $n$. Suppose that all ones in the $i$th hyperplane $L ^ {d-1}_i (A)$ can be covered by $s_{i,d-1}$ planes from $L ^ {d-2}_i (A)$, $\ldots$, all ones in a plane from $L ^ {k}_i (A)$ can be covered by $s_{i,k}$ planes from $L ^ {k-1}_i (A)$, $\ldots$, each 1-dimensional plane from $L ^ 1_i (A)$ contains $s_{i,1}$ ones at most. Then
$${\rm per} A \leq \prod \limits_{k=1}^{d-1} \prod \limits_{i=1}^n s_{i,k}!^{1/s_{i,k}}.$$
\end{teorema}
\begin{proof}
The proof is by induction on the dimension of matrices. We consider only the first step of the induction, that is, the step from the 2-dimensional case to the 3-dimensional case.
Let $A$ be a 3-dimensional (0,1)-matrix of order $n$, and let $l_{i,j}$ be 1-dimensional planes in the $i$th hyperplane of the matrix $A$. Suppose that there are $m_i$ 1-dimensional planes $l_{i,j}$ containing ones. Also assume that each plane $l_{i,j}$ contains at most $s_i$ ones.
Put $S= \left\{ \sigma \in S_n | \mbox{plane } l_{i,\sigma (i)} \mbox{ contains ones for all } i\right\},$ where $S_n$ is the symmetric group on $\left\{1, \ldots, n \right\}$.
By Lemma~\ref{vsp},
$${\rm per} A = \sum \limits_{\sigma \in S} {\rm per} A_{\sigma},$$
where $A_{\sigma}$ is the 2-dimensional (0,1)-matrix such that its $i$th row is the plane $l_{i, \sigma (i)}.$
Note that for all $\sigma \in S$, there are at most $s_i$ ones in the $i$th row of the matrix $A_\sigma$. Since $x!^{1/x}$ is nondecreasing, Theorem~\ref{gminc} yields
$${\rm per} A_\sigma \leq \prod \limits_{i=1}^n s_i!^{1/s_i}$$
for all $\sigma \in S.$ Consequently,
$${\rm per} A \leq |S| \prod \limits_{i=1}^n s_i!^{1/s_i}.$$
Estimate the cardinality of the set $S$ now. For this purpose, consider the (0,1)-matrix $B$ such that $b_{i,j} = 1$ if and only if the plane $ l_{i,j}$ contains ones. Notice that $|S| = {\rm per} B$ and that the $i$th row of $B$ contains $m_i$ ones. Using Theorem~\ref{gminc}, we obtain
$${\rm per} B \leq \prod \limits_{i=1}^n m_i!^{1/m_i}.$$
Therefore,
$${\rm per} A \leq \prod \limits_{i=1}^n m_i!^{1/m_i} s_i!^{1/s_i}.$$
\end{proof}
Equality holds, for example, when the matrix $A$ is block diagonal. In many cases the bound is rough, because it depends on the arrangement of ones in the matrix. Unfortunately, we have not succeeded in generalizing the known proofs of Theorem~\ref{gminc} to the multidimensional case, and we do not have a good estimate on the permanent of matrices in terms of the sums in hyperplanes. We propose, however, the following conjecture, which was tested on a number of matrices of small order and dimension.
\begin{con}\label{voc}
Let $A$ be a $d$-dimensional (0,1)-matrix of order $n$. Suppose that there are $r_i$ ones in the $i$th hyperplane of $A$. Then
$${\rm per} A \leq n!^{d-2} \prod \limits_{i=1}^n \left\lceil \frac{r_i}{n^{d-2}}\right\rceil ! ^{ \frac{1}{\left\lceil r_i /n^{d-2}\right\rceil}}.$$
\end{con}
The bound is attained, for example, when for some direction every 2-dimensional plane of the matrix $A$ is a 2-dimensional matrix on which the equality in Theorem~\ref{gminc} holds.
If Conjecture~\ref{voc} is true, we can estimate the permanent of matrices by means of the sums in planes of an arbitrary dimension.
Indeed, let $A$ be a $d$-dimensional (0,1)-matrix of order $n$, and let $l_\beta$ be the $k$-dimensional planes of some direction in the matrix $A$. Suppose that the sum of entries in the plane $l_\beta$ is equal to $r_\beta$. By Lemma~\ref{vsp},
$${\rm per} A = \sum \limits_{\tau \in T} {\rm per} A_\tau,$$
where
$$T = \left\{ \tau: I^1_n \rightarrow I^{d-k - 1}_n | ((1, \tau(1)), \ldots, (n, \tau(n))) \mbox{ is a diagonal in a } (d-k)\mbox{-dimensional matrix}\right\}.$$
Note that $A_\tau$ is the $(k+1)$-dimensional matrix whose hyperplanes are the planes $l_{i,\tau (i)}$. The sum of entries in the $i$th hyperplane of the matrix $A_\tau$ is equal to $r_{i,\tau (i)}.$ If the conjecture is true, then ${\rm per} A_\tau \leq n!^{k-1} \prod \limits_{i=1}^n h_{i,\tau (i)}!^{1/h_{i,\tau (i)}},$ where $h_{i,\tau (i)} = \left\lceil \frac{r_{i,\tau (i)}}{n^{k-1}}\right\rceil.$ Therefore,
$${\rm per} A \leq n!^{k-1} \sum \limits_{\tau \in T} \prod \limits_{i=1}^n h_{i,\tau (i)}!^{1/h_{i,\tau (i)}}.$$
Let $B$ be a $d$-dimensional matrix of order $n$ such that $b_\beta = h_{\beta}!^{1/h_{\beta}}$, $\beta \in I_n^{d-k}.$ Then
$${\rm per} A \leq n!^{k-1} {\rm per} B.$$
If $l_\beta$ are 1-dimensional planes, then we can estimate the permanent of $A_\tau$ with the help of Theorem~\ref{gminc}. Therefore the following proposition holds.
\begin{utv} \label{groc}
Let $A$ be a $d$-dimensional (0,1)-matrix of order $n$, and let $l_\beta$ be 1-dimensional planes of some direction in the matrix $A$. Suppose that the sum of entries in the plane $l_\beta$ is equal to $r_\beta$. Consider the $(d-1)$-dimensional matrix $B$ of order $n$ such that $b_\beta = r_{\beta}!^{1/r_{\beta}}.$ Then
$${\rm per} A \leq {\rm per} B.$$
\end{utv}
The following example illustrates that this bound is weaker than Conjecture~\ref{voc}. Consider the 3-dimensional (0,1)-matrix of order 4:
$$
A = \left( \begin{array} {cccc}
1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0
\end{array}
\times
\begin{array} {cccc}
0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0
\end{array}
\times
\begin{array} {cccc}
0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0
\end{array}
\times
\begin{array} {cccc}
0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 1
\end{array}
\right).
$$
Its permanent equals 74, and the sum of entries in each hyperplane equals 8. Consider the matrix $B$ such that $b_{i,j}= s_{i,j}!^{1/s_{i,j}}$, where $s_{i,j}$ is the sum of entries in the $(i,j)$-th row of the matrix $A$. It can be checked that ${\rm per} B \approx 104.23$. But Conjecture~\ref{voc} claims that ${\rm per} A \leq 4! \cdot 2^{4/2} = 96$, while Theorem~\ref{bad} only gives ${\rm per} A \leq 4!^2 = 576.$
However, we can prove that Conjecture~\ref{voc} holds asymptotically for matrices in which the number of ones in the hyperplanes is sufficiently large.
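The example can be cross-checked computationally by brute force over the $(4!)^2$ diagonals (our sketch; it asserts the hyperplane sums and the proven bound of Theorem~\ref{bad}, and recomputes the permanent reported above):

```python
from itertools import permutations
from math import prod, factorial

def per3(a):
    """Brute-force permanent of a 3-dimensional matrix of order n."""
    n = len(a)
    return sum(prod(a[i][s[i]][t[i]] for i in range(n))
               for s in permutations(range(n)) for t in permutations(range(n)))

# the four 4x4 slices displayed above
A = [
    [[1, 1, 1, 1], [1, 1, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]],
    [[0, 0, 1, 1], [1, 1, 1, 1], [0, 0, 1, 0], [0, 0, 1, 0]],
    [[0, 1, 0, 0], [0, 1, 0, 0], [1, 1, 1, 1], [1, 1, 0, 0]],
    [[0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 1, 1], [1, 1, 1, 1]],
]
p = per3(A)
assert all(sum(map(sum, slab)) == 8 for slab in A)  # 8 ones per hyperplane
assert p <= factorial(4) ** 2                       # Theorem bad: per A <= 4!^2 = 576
print(p)                                            # the text reports 74
```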
\begin{teorema} \label{asym}
Assume that for some integer $d \geq 2$ and for every integer $n$ there are $n$ integers $r_1(n), \ldots ,r_n(n)$ such that $\min\limits_{i=1 \ldots n} r_i(n)/ n^{d-2} \rightarrow \infty$ as $n \rightarrow \infty.$ Denote by $\Lambda^d(n,r)$ the set of $d$-dimensional (0,1)-matrices of order $n$ such that the number of ones in their $i$th hyperplane is not greater than $r_i(n).$ Put $F(x) = \left\lceil x\right\rceil!^{1/\left\lceil x\right\rceil}.$ Then
$$ \max\limits_{A \in \Lambda^d(n,r)} {\rm per} A \leq n!^{d-2} e^{o(n)} \prod \limits_{i=1}^n F\left(\frac{r_i(n)}{n^{d-2}}\right) \mbox{ as } n \rightarrow \infty.$$
\end{teorema}
\begin{proof}
The proof is by induction on the dimension of matrices. The basis of induction is the case $d=2$, which holds by Theorem~\ref{gminc}.
Assume that for dimension $d-1$ the theorem holds, and prove the theorem for dimension $d$. Let $A$ be an arbitrary matrix from $\Lambda^d(n,r).$ Then the number of ones in its $i$th hyperplane is not greater than $r_i(n).$
Divide these hyperplanes into 1-dimensional planes $l_\beta$ of some direction.
Assume that there are $s_\beta$ ones in the plane $l_\beta$. Consider the $(d-1)$-dimensional matrix $B$ such that $b_\beta= s_\beta!^{1/s_\beta}$.
By Proposition~\ref{groc}, ${\rm per} A \leq {\rm per} B.$ Put $f(x)= x!^{1/x}.$ Using an approximation of the factorial, we estimate $f(x):$
$$(2\pi x)^{1/2x} x e^{-1} \leq f(x) \leq e^{1/x} x^{1+1/2x} e^{-1}.$$
Denote by $g(x)$ the right-hand side of the inequality.
It can be checked that $g(x)$ is a concave function. Since $\sum \limits_{\beta = (i,*,\ldots,*)} s_\beta = r_i(n)$, it follows that
$$\sum \limits_{\beta = (i,*,\ldots,*)} g(s_\beta) \leq n^{d-2} g(r_i(n)/n^{d-2}),$$
where $*$ means an arbitrary symbol.
Estimate the sum of entries in the $i$th hyperplane of $B$:
$$\sum \limits_{\beta = (i,*,\ldots,*)} f(s_\beta) \leq n^{d-2}g(r_i(n)/n^{d-2}) = r_i(n) e^{-1+n^{d-2}/r_i(n)} \left(\frac{r_i(n)}{n^{d-2}}\right)^{n^{d-2}/2r_i(n)}.$$
Note that the entries of the matrix $B$ are not greater than $f(n).$ As in the proof of Corollary~\ref{mincsled}, we rearrange the sum of entries in the hyperplanes of $B$ and obtain the $(d-1)$-dimensional matrix $C$ of order $n$ such that the entries of $C$ equal 0 or $f(n)$ and ${\rm per} B \leq {\rm per} C.$ Note that there are at most $\frac{n^{d-2}g(r_i(n)/n^{d-2})}{f(n)}$ nonzero entries in the $i$th hyperplane of the matrix $C$.
Using the inequality for $f(n)$, we obtain an upper bound on the number of nonzero entries in the $i$th hyperplane of $C$
$$\frac{n^{d-2}g(r_i(n)/n^{d-2})}{f(n)} \leq \frac{e^{n^{d-2}/r_i(n)} }{(2\pi n)^{1/2n} } \left( \frac{r_i(n)}{n^{d-2}}\right)^{n^{d-2}/2r_i(n)} \frac{r_i(n)}{n}.$$
Denote by $s_i(n)$ the right-hand side of the inequality.
Let $\Lambda^{d-1}(n,s)$ be the set of $(d-1)$-dimensional (0,1)-matrices of order $n$ such that the number of ones in their $i$th hyperplane is not greater than $s_i(n).$ If we divide all entries of the matrix $C$ by $f(n)$, we obtain some matrix from $\Lambda^{d-1}(n,s)$.
Since ${\rm per} B \leq {\rm per} C$ and ${\rm per} A \leq {\rm per} B,$ it follows that ${\rm per} A \leq {\rm per} C.$ Because $A$ is an arbitrary matrix from $\Lambda^{d}(n,r)$, we have
$$\max\limits_{A \in \Lambda^d(n,r)} {\rm per} A \leq f(n)^n \max\limits_{C \in \Lambda^{d-1}(n,s)} {\rm per} C.$$
Under the hypothesis of the theorem, we get that $s_i(n) = (r_i(n)/n)^{1+o(1)}$ and $\min\limits_{i=1 \ldots n} s_i(n)/ n^{d-3} \rightarrow \infty$ as $n\rightarrow\infty$. Therefore $\Lambda^{d-1}(n,s)$ satisfies the conditions of the theorem. By the inductive assumption, we finally obtain
$$\max\limits_{A \in \Lambda^d(n,r)} {\rm per} A \leq f(n)^n n!^{d-3} e^{o(n)} \prod \limits_{i=1}^n F\left(\left(\frac{r_i(n)}{n^{d-2}}\right)^{1+ o(1)}\right) = n!^{d-2} e^{o(n)} \prod \limits_{i=1}^n F\left(\frac{r_i(n)}{n^{d-2}}\right) \mbox{ as } n \rightarrow \infty .$$
\end{proof}
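The factorial sandwich used at the start of the proof follows from the standard Stirling bounds $\sqrt{2\pi x}\,x^{x}e^{-x}\le x!\le e\,x^{x+1/2}e^{-x}$; a quick numerical check of ours:

```python
from math import factorial, pi, exp

# (2*pi*x)^{1/(2x)} * x * e^{-1}  <=  x!^{1/x}  <=  e^{1/x} * x^{1+1/(2x)} * e^{-1}
for x in range(1, 30):
    f = factorial(x) ** (1.0 / x)
    lower = (2 * pi * x) ** (1 / (2 * x)) * x * exp(-1)
    upper = exp(1 / x) * x ** (1 + 1 / (2 * x)) * exp(-1)
    assert lower <= f + 1e-9
    assert f <= upper + 1e-9
```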
By the same argument as in Corollary~\ref{mincsled}, we can obtain similar results for nonnegative matrices with bounded entries.
\end{document}
\begin{document}
\title{Coherently prepared nondegenerate Y-shaped four-level correlated emission laser: A source of tripartite entangled light}
\author{Sintayehu Tesfa} \email{[email protected]}
\affiliation{Max Planck Institute for the Physics of Complex Systems, N\"othnitzer Str. 38, 01187 Dresden, Germany\\
Physics Department, Dilla University, P. O. Box 419, Dilla, Ethiopia}
\date{\today}
\begin{abstract} A detailed derivation of the master equation of the cavity radiation of a coherently prepared $Y$-shaped four-level correlated emission laser is presented. An outline of the procedures that can be employed in analytically solving the stochastic differential equations and the rate equations of various correlations is also provided. It is shown that initially preparing the atoms in a coherent superposition of the upper two energy levels and the lower one can lead to a genuine continuous variable tripartite entanglement. Moreover, preparing the atoms in a coherent superposition, other than the possible maximum or minimum, of the upper two energy levels, leaving the lower unpopulated, may lead to a similar observation. Since the atom at the intermediate energy level can take three different transition routes guided by the induced coherence, this system is found to encompass versatile options for practical utilization. In particular, coupling at least one of the dipole forbidden transitions with an external radiation is expected to enhance the degree of detectable entanglement.\end{abstract}
\pacs{42.50.Ar, 42.50.Gy, 03.65.Ud}
\maketitle
\section{INTRODUCTION}
In recent years, due to the relative simplicity and high efficiency in the generation, manipulation, and detection of optical continuous variable (CV) states \cite{rmp77513}, the corresponding entangled states have been successfully implemented in unconditional quantum teleportation \cite{prl80869,s282706}, quantum dense coding \cite{prl88047904}, quantum error correction \cite{np5541}, and universal quantum computation \cite{prl821784} among others. With the progress in the CV entanglement research, the generation of more than a bipartite entanglement has attracted much attention. A truly $N$-partite entangled state generated by a single-mode squeezed state and linear optics \cite{prl91080404} along with the generation of CV tripartite entanglement using cascaded nonlinear interaction in an optical cavity without linear optics have been theoretically investigated \cite{pra74042332} and also experimentally realized \cite{prl91080404,prl97140504}. Nevertheless, the structure of the entanglement for the three-mode system is a bit more subtle than that for a bipartite case, wherein, different classes of entanglement are defined based on how the density matrix may be partitioned \cite{pra74063809}. The classifications range from fully inseparable, which means that the density matrix is not separable for any grouping of the modes (genuine tripartite entanglement), to fully separable, where the three modes are not entangled in any way. Despite the challenge, fortunately, there is a large number of works that are aimed at devising ways of detecting a tripartite entanglement \cite{pra64052303,pra67052315,pra74063809,pra75012311,pra81062322}.
On the other hand, there has been enormous effort in studying quantum optical systems capable of generating tripartite entangled light \cite{pra72053805,pra74042332,pra74063809}, including nondegenerate parametric oscillation and six-wave mixing in a nonlinear medium. As an alternative, the idea of generating a tripartite entanglement from various schemes of four-level atomic systems via coherent superposition has been under study recently \cite{jpb41035501,jpb42165506,oc2821593,jpb43155506,jmo561607,pra82032322,epjd56247}. Entangled photons from these systems are expected to have potential applications in quantum memory \cite{n432482} and long distance quantum communication \cite{prl855643}, since the low frequency and narrow linewidth of the light can enhance efficient coupling between photons and atomic memories in a quantum network \cite{prl96093604,n457859}. Some of the possible schemes include, but are not limited to, $\Lambda$-type \cite{pra82032322}, $V$-type \cite{jpb42165506}, cascade-type \cite{oc2821593,jpb41035501}, and $Y$-type \cite{jpb43155506}. In these works, the coherent superposition is induced by exciting an atom in a lower energy level to the upper one (using an external pumping mechanism), from where the atom undergoes direct spontaneous emission that leads to a genuine tripartite entanglement \cite{epjd56247,oc2821593}. Quite recently, Shi {\it{et al.}} \cite{jpb43155506} extensively discussed a way of generating a tripartite entanglement by applying a mechanism of six-wave mixing. The three external radiations they applied were believed to be responsible for creating the required nonclassical correlations.
In this contribution, however, following a similar line of reasoning, it is proposed that the $Y$-shaped four-level atomic scheme can be a reliable source of strongly entangled light if the initial preparation of the atoms is taken to be responsible for inducing the coherent superposition. Moreover, rather than placing the atoms in the cavity throughout the operation and exposing them to thermal fluctuations, as is usually the case, it is assumed that they are injected into the cavity at a constant rate. For the sake of convenience, the amplification of the light due to reflection between the walls of the coupler mirror, when photons with different frequencies emitted in the forked four-level cascade transition are correlated by the coherence induced via initial preparation, is dubbed a nondegenerate $Y$-shaped four-level correlated emission laser. Since a large number of atoms can, in principle, be injected into the cavity over a long period of time, and the direct spontaneous emission process can also be efficient if the atoms are properly prepared initially, it is reasonable to expect this system to be a reliable source of bright light.
In order to study the dynamics of the entanglement employing the existing criteria, it is necessary to establish the mathematical framework for solving the involved differential equations beforehand. To this effect, based on the structure of the atomic levels, the master equation is derived following the outline presented elsewhere for the corresponding two-mode case \cite{pra79033810,pra82053835}. Due to the coherent superposition induced via initial preparation and the cascading process, a significant correlation among the three generated modes is observed in the calculated master equation. Moreover, with the intention of paving the way for an in-depth analysis, the procedure for solving the emerging coupled differential equations is outlined. It is found that the required solutions can be explicitly written down once the corresponding $3\times3$ matrix constructed from the prefactors in the master equation is diagonalized and its eigenmatrix is constructed, which in principle is a surmountable task although the rigor may be somewhat lengthy.
\section{Description of the Model}
In earlier studies on the three-level cascade laser, it was observed that the spontaneous transition during the cascading process induces a coherent superposition that leads to enhanced quantum features, including a bipartite CV entanglement \cite{pra75062305,oc283781}. It can be asserted that initially preparing atoms in a certain coherent superposition and then allowing them to follow realistic spontaneous transition routes can yield a strongly entangled light \cite{pra74043816,pra77013815,jpb42215506}. Taking this as a motivation, we consider the $Y$-shaped four-level atomic system initially prepared in a coherent superposition of the energy levels between which a direct electric dipole transition is forbidden. For the sake of convenience, the lower energy level is denoted by $|0\rangle$, the intermediate energy level by $|1\rangle$, and the upper two energy levels by $|2\rangle$ and $|3\rangle$ (the schematic representation of the involved atomic energy levels is provided in Fig.~\ref{fig1}). In order to expedite the cascading process, it is assumed that the parity of the energy levels $|0\rangle$, $|2\rangle$, and $|3\rangle$ is the same whereas that of $|1\rangle$ is different. This entails that direct spontaneous transitions between energy levels $|2\rangle$ $\leftrightarrow$ $|3\rangle$, $|2\rangle$ $\leftrightarrow$ $|0\rangle$, and $|0\rangle$ $\leftrightarrow$ $|3\rangle$ are electric dipole forbidden, but due to the parity difference, the transitions between $|1\rangle$ $\leftrightarrow$ $|0\rangle$, $|2\rangle$ $\leftrightarrow$ $|1\rangle$, and $|1\rangle$ $\leftrightarrow$ $|3\rangle$ are allowed. It is worth noting that, if required, the dipole forbidden transitions can be induced by an external pumping mechanism in a similar manner as in the three-level case \cite{prl94023601, pra75033816,jpb41055503,jpb41145501}.
While the atom undergoes a direct spontaneous transition from energy level $|3\rangle$ to $|1\rangle$, suppose it emits a photon represented by the annihilation operator $\hat{a}_{3}$. In principle, it can still undergo a direct spontaneous emission and go over to the lower energy level $|0\rangle$, emitting in the process a photon described by $\hat{a}_{1}$. In the cascading transition from energy level $|3\rangle$ to $|0\rangle$ via $|1\rangle$, a correlation between the two emitted photons ($\hat{a}_{3}$ and $\hat{a}_{1}$) can readily be established. In a similar manner, it is not difficult to realize that there could be a correlation between the photons emitted ($\hat{a}_{2}$ and $\hat{a}_{1}$) when the atom undergoes a spontaneous transition from energy level $|2\rangle$ to $|0\rangle$ via $|1\rangle$. These two processes, which are not entirely independent, are expected to initiate nonclassical correlations between the photons emitted while the atom cascades from the upper two energy levels to the lower one via different forked routes. Nevertheless, to establish a genuine tripartite entanglement as prescribed by the van Loock and Furusawa criteria \cite{pra67052315}, a correlation between the photons emitted from the upper two energy levels ($\hat{a}_{2}$ and $\hat{a}_{3}$) is crucial. In order to initiate this important correlation, note that if the atom is initially prepared in a coherent superposition of the upper two energy levels and the lower one, it does not arbitrarily undergo the aforementioned spontaneous transitions, owing to the resulting population sharing.
If there is a triply resonant light in the cavity, the atom in energy level $|1\rangle$ has three distinct alternatives, apart from spontaneous decay to energy levels not involved in the present consideration. The first and most natural one is to continue with the direct spontaneous emission and go over to the lower energy level; alternatively, it can absorb a photon ($\hat{a}_{2}$) and be excited to the upper energy level $|2\rangle$, or absorb a photon ($\hat{a}_{3}$) and be excited to the upper energy level $|3\rangle$. In this description, if one manages to send a large number of initially prepared atoms through the cavity, the absorption-emission mechanism follows different routes, which leads to additional correlation between emitted photons. In this scenario, as long as there is a coherent superposition between the energy levels $|2\rangle$ and $|3\rangle$ initially, a meaningful correlation in a subsequent emission of the photons denoted by $\hat{a}_{2}$ and $\hat{a}_{3}$ is expected. It is, hence, envisaged that this process can forge the required nonclassical correlation between the photons emitted from the upper two energy levels.
The more general explanation and a possible setup for practical utilization of similar schemes were provided earlier in Refs. \cite{prl99123603,prl102013601}. However, this contribution has one essential difference: the initial preparation, rather than an external pumping mechanism, is assumed to be the prominent source of the atomic coherent superposition. Although the initial preparation and injection process lead to some technical difficulties in exploiting the potential of this system in practice, it is envisaged that the challenge would be less severe than for an external pumping mechanism, which naturally initiates thermal fluctuations and atomic broadening.
\begin{figure}\label{fig1}
\end{figure}
\section{Master equation}
The interaction of a nondegenerate $Y$-shaped four-level atom with triply resonant cavity
radiation can be described, in the rotating-wave approximation and
the interaction picture, by the Hamiltonian of the form
\begin{align}\label{fl1}\hat{H}_{I}&=ig\big[\hat{a}_{3}|3\rangle\langle1|-|1\rangle\langle3|\hat{a}^{\dagger}_{3}+\hat{a}_{2}|2\rangle\langle1|-|1\rangle\langle2|\hat{a}^{\dagger}_{2}\notag\\&+\hat{a}_{1}|1\rangle\langle0|-|0\rangle\langle1|\hat{a}^{\dagger}_{1}\big],\end{align} where $g$ is a coupling constant
chosen to be the same for all transitions for convenience and $\hat{a}_{i}$'s are
the annihilation operators that represent the three cavity modes.
Assuming that the atoms are initially prepared in a coherent superposition of the atomic energy levels excluding the intermediate one, the pertinent atomic state can be taken as
\begin{align}\label{fl2}|\Psi_{A}(0)\rangle=C_{3}(0)|3\rangle+C_{2}(0)|2\rangle+C_{0}(0)|0\rangle,\end{align}
where the $C_{i}(0)$'s are the probability amplitudes for the atom to be initially in the $i$th energy level and the corresponding density operator takes the form
\begin{align}\label{fl3}\rho_{A}^{(0)}&=\rho_{33}^{(0)}|3\rangle\langle3|+\rho_{32}^{(0)}|3\rangle\langle2|+\rho_{23}^{(0)}|2\rangle\langle3|+\rho^{(0)}_{30}|3\rangle\langle0|\notag\\&+\rho^{(0)}_{03}|0\rangle\langle3|+\rho^{(0)}_{22}|2\rangle\langle2|+\rho^{(0)}_{20}|2\rangle\langle0|+\rho^{(0)}_{02}|0\rangle\langle2|\notag\\&+\rho^{(0)}_{00}|0\rangle\langle0|,\end{align} where $\rho_{ii}^{(0)}$'s are the initial populations and $\rho_{ij(i\ne j)}^{(0)}$'s are the coherences between the atomic energy levels. It is worth noting that the intermediate energy level is initially unpopulated and the phase fluctuation resulting from imperfect preparation is not taken into consideration.
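For concreteness, the density operator \eqref{fl3} is simply $|\Psi_{A}(0)\rangle\langle\Psi_{A}(0)|$ built from the state \eqref{fl2}; a minimal numerical sketch, with hypothetical amplitude values chosen only for illustration:

```python
import numpy as np

# hypothetical real amplitudes with |C3|^2 + |C2|^2 + |C0|^2 = 1
C3, C2, C0 = 0.5, 0.5, np.sqrt(0.5)
psi = np.array([C0, 0.0, C2, C3])     # basis ordering |0>, |1>, |2>, |3>
rho = np.outer(psi, psi)              # rho_A(0) = |Psi_A(0)><Psi_A(0)|

assert np.isclose(np.trace(rho), 1.0)                            # normalized
assert np.allclose(rho, rho.T)                                   # Hermitian (real amplitudes)
assert np.allclose(rho[1, :], 0) and np.allclose(rho[:, 1], 0)   # |1> initially unpopulated
assert np.isclose(rho[3, 2], np.sqrt(rho[3, 3] * rho[2, 2]))     # coherence rho_32^(0)
```

The vanishing row and column associated with $|1\rangle$ is what later permits the adiabatic elimination of the intermediate level.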
The atoms prepared in this manner are assumed to be injected into a triply resonant cavity at a constant rate $r_{a}$
and removed after some time $T$, which is long enough for the
atoms to spontaneously decay to energy levels that are not involved in the process. The density operator
for the cavity radiation plus a single atom injected into the cavity at
time $t_{j}$ can be denoted by $\rho_{AR}(t,t_{j})$, where $t-T\le t_{j}\le
t$. The density operator for all the atoms in the cavity plus the
cavity radiation at time $t$ can be expressed as
\begin{align}\label{fl4}\hat{\rho}_{AR}(t) =
r_{a}\sum_{j}\hat{\rho}_{AR}(t,t_{j})\Delta t_{j},\end{align}
where $r_{a}\Delta t_{j}$ represents the number of atoms injected
into the cavity in a time interval of $\Delta t_{j}$. Assuming
that the atoms are continuously injected into the cavity and
taking the limit that $\Delta t_{j}\rightarrow 0$, the summation
over $j$ can be converted into integration with respect to $t'$,
\begin{align}\label{fl5}\hat{\rho}_{AR}(t) =
r_{a}\int_{t-T}^{t} \hat{\rho}_{AR}(t,t')dt'.\end{align}
Replacing the summation over randomly
injected atoms by integration in this manner has been done frequently \cite{pr159208,pra40237,pra82053835}.
It is also a well-established fact
that the density operator evolves in time
according to
\begin{align}\label{fl6}\frac{\partial}{\partial t}\hat{\rho}_{AR}(t,t') =
-i\big[\hat{H}, \;\hat{\rho}_{AR}(t,t')\big].\end{align}
It is not hard to observe that $t'$ can be chosen in such a way that $\hat{\rho}_{AR}(t,t)$ represents the density operator for an
atom plus the cavity radiation at a time when the atom is injected into the
cavity, whereas $\hat{\rho}_{AR}(t,t-T)$ represents the
density operator
when the atom is removed from the cavity. Since the atomic and radiation variables
are not correlated at the instant the atoms
are injected into or removed from the cavity, it is possible to propose that
\begin{align}\label{fl8}\hat{\rho}_{AR}(t,t) = \hat{\rho}_{A}(0)\hat{\rho}(t),\end{align}
\begin{equation}\label{fl9} \hat{\rho}_{AR}(t,t-T)=\hat{\rho}_{A}(t-T)\hat{\rho}(t),\end{equation}
where $ \hat{\rho}_{A}(0)=\hat{\rho}_{A}(t).$
Hence, in view of Eqs. \eqref{fl8} and \eqref{fl9}, differentiating Eq. \eqref{fl5} with respect to $t$ and making use of Eq. \eqref{fl6} results in
\begin{equation}\label{fl13}\frac{d}{dt}\hat{\rho}_{AR}(t) =
r_{a}[\hat{\rho}_{A}(0) - \hat{\rho}_{A}(t-T)]\hat{\rho}(t) -
i[\hat{H}, \;\hat{\rho}_{AR}(t)].\end{equation} Now taking the trace over the atomic variables and using the fact that
$Tr_{A}(\hat{\rho}_{A}(0)) = Tr_{A}(\hat{\rho}_{A}(t-T)) =1$ leads to
\begin{align}\label{fl15}\frac{d\hat{\rho}(t)}{dt}
= - iTr_{A}[\hat{H}, \;\hat{\rho}_{AR}(t)].\end{align}
Upon employing Eqs. \eqref{fl1} and \eqref{fl15}, the time evolution of the
reduced density operator for radiation turns out to be
\begin{align}\label{fl16}\frac{d\hat{\rho}(t)}{dt}& =
g\big[\hat{\rho}_{31}\hat{a}^{\dagger}_{3}-\hat{a}^{\dagger}_{3}\hat{\rho}_{31}+
\hat{a}_{3}\hat{\rho}_{13} - \hat{\rho}_{13}\hat{a}_{3}
\notag\\&+\hat{\rho}_{21}\hat{a}^{\dagger}_{2}-\hat{a}^{\dagger}_{2}\hat{\rho}_{21}+
\hat{a}_{2}\hat{\rho}_{12} - \hat{\rho}_{12}\hat{a}_{2}\notag\\&+\hat{\rho}_{10}\hat{a}^{\dagger}_{1}-\hat{a}^{\dagger}_{1}\hat{\rho}_{10}+
\hat{a}_{1}\hat{\rho}_{01} - \hat{\rho}_{01}\hat{a}_{1}\big],\end{align}
in which $\hat{\rho}_{ij} =\langle i|\hat{\rho}_{AR}|j\rangle$ with
$i,j =$ 0, 1, 2, 3.
On the other hand, on the basis of Eq. \eqref{fl13}, one can readily write
\begin{align}\label{fl18}\frac{d}{dt}\hat{\rho}_{ij}(t)& =
r_{a}\langle i|\hat{\rho}_{A}(0)|j\rangle\hat{\rho} -
r_{a}\langle i|\hat{\rho}_{A}(t-T)|j\rangle\hat{\rho} \notag\\&-
i\langle i|[\hat{H}, \;\hat{\rho}_{AR}(t)]|j\rangle -
\gamma\hat{\rho}_{ij},\end{align} where the last term is
introduced in order to account for the atomic decay process. Basically, $\gamma$ is the decay rate associated with every atomic transition, including the rate of dephasing. In a practical situation, assuming all decay rates to be equal may not be reasonable, as recently discussed elsewhere \cite{pra79033810,pra79063815}.
Assuming the atoms to be removed from
the cavity after they have decayed to energy levels that are not involved in the lasing process implies that $\langle i|\hat{\rho}_{A}(t-T)|j\rangle =0.$ Hence, on account of Eqs. \eqref{fl1}, \eqref{fl3}, and
\eqref{fl18}, one gets
\begin{align}\label{fl20}\frac{d}{dt}\hat{\rho}_{ij}(t) &=-\gamma\hat{\rho}_{ij}+
r_{a}\hat{\rho}(t)\big[\rho_{33}^{(0)}\delta_{i3}\delta_{3j} +
\rho_{32}^{(0)}\delta_{i3}\delta_{2j} \notag\\&+
\rho_{23}^{(0)}\delta_{i2}\delta_{3j} +
\rho_{30}^{(0)}\delta_{i3}\delta_{0j}+\rho_{03}^{(0)}\delta_{i0}\delta_{3j} +
\rho_{22}^{(0)}\delta_{i2}\delta_{2j} \notag\\&+
\rho_{20}^{(0)}\delta_{i2}\delta_{0j} +
\rho_{02}^{(0)}\delta_{i0}\delta_{2j}+\rho_{00}^{(0)}\delta_{i0}\delta_{0j}\big]
\notag\\&+ g[\hat{a}_{3}\hat{\rho}_{1j}\delta_{i3}-
\hat{a}^{\dagger}_{3}\hat{\rho}_{3j}\delta_{i1} +
\hat{a}_{2}\hat{\rho}_{1j}\delta_{i2} -
\hat{a}^{\dagger}_{2}\hat{\rho}_{2j}\delta_{i1} \notag\\&+ \hat{a}_{1}\hat{\rho}_{0j}\delta_{i1} -\hat{a}_{1}^{\dagger} \hat{\rho}_{1j}\delta_{0i}
-\hat{\rho}_{i3}\hat{a}_{3}\delta_{1j} +
\hat{\rho}_{i1}\hat{a}^{\dagger}_{3}\delta_{3j} \notag\\&-
\hat{\rho}_{i2}\hat{a}_{2}\delta_{1j} +
\hat{\rho}_{i1}\hat{a}^{\dagger}_{2}\delta_{2j} -
\hat{\rho}_{i1}\hat{a}_{1}\delta_{0j} +
\hat{\rho}_{i0}\hat{a}^{\dagger}_{1}\delta_{1j}],\end{align} from which follows
\begin{align}\label{fl21}\frac{d}{dt}\hat{\rho}_{33}(t) &= r_{a}\rho_{33}^{(0)}\hat{\rho}(t)
+ g[\hat{a}_{3}\hat{\rho}_{13} + \hat{\rho}_{31}\hat{a}^{\dagger}_{3}]-\gamma\hat{\rho}_{33},\end{align}
\begin{align}\label{fl22}\frac{d}{dt}\hat{\rho}_{32}(t) &= r_{a}\rho_{32}^{(0)}\hat{\rho}(t)
+ g[\hat{a}_{3}\hat{\rho}_{12} + \hat{\rho}_{31}\hat{a}^{\dagger}_{2}]
-\gamma\hat{\rho}_{32},\end{align}
\begin{align}\label{fl27}\frac{d}{dt}\hat{\rho}_{31}(t) =g[\hat{a}_{3}\hat{\rho}_{11} - \hat{\rho}_{33}\hat{a}_{3}-
\hat{\rho}_{32}\hat{a}_{2} + \hat{\rho}_{30}\hat{a}^{\dagger}_{1}] -\gamma\hat{\rho}_{31},\end{align}
\begin{align}\label{fl23}\frac{d}{dt}\hat{\rho}_{30}(t) &= r_{a}\rho_{30}^{(0)}\hat{\rho}(t)
+ g[\hat{a}_{3}\hat{\rho}_{10} - \hat{\rho}_{31}\hat{a}_{1}]
-\gamma\hat{\rho}_{30},\end{align}
\begin{align}\label{fl24}\frac{d}{dt}\hat{\rho}_{22}(t) &= r_{a}\rho_{22}^{(0)}\hat{\rho}(t)
+ g[\hat{a}_{2}\hat{\rho}_{12} + \hat{\rho}_{21}\hat{a}^{\dagger}_{2}]-\gamma\hat{\rho}_{22},\end{align}
\begin{align}\label{fl28}\frac{d}{dt}\hat{\rho}_{21}(t) =g[\hat{a}_{2}\hat{\rho}_{11} - \hat{\rho}_{23}\hat{a}_{3}-
\hat{\rho}_{22}\hat{a}_{2} + \hat{\rho}_{20}\hat{a}^{\dagger}_{1}] -\gamma\hat{\rho}_{21},\end{align}
\begin{align}\label{fl25}\frac{d}{dt}\hat{\rho}_{20}(t) &= r_{a}\rho_{20}^{(0)}\hat{\rho}(t)
+ g[\hat{a}_{2}\hat{\rho}_{10} - \hat{\rho}_{21}\hat{a}_{1}]
-\gamma\hat{\rho}_{20},\end{align}
\begin{align}\label{fl29}\frac{d}{dt}\hat{\rho}_{11}(t)& =g[\hat{a}_{1}\hat{\rho}_{01} - \hat{\rho}_{13}\hat{a}_{3}-
\hat{\rho}_{12}\hat{a}_{2} + \hat{\rho}_{10}\hat{a}^{\dagger}_{1}\notag\\&-\hat{a}^{\dagger}_{3}\hat{\rho}_{31}-\hat{a}^{\dagger}_{2}\hat{\rho}_{21}] -\gamma\hat{\rho}_{11},\end{align}
\begin{align}\label{fl30}\frac{d}{dt}\hat{\rho}_{10}(t) =g[\hat{a}_{1}\hat{\rho}_{00} - \hat{\rho}_{11}\hat{a}_{1}
-\hat{a}^{\dagger}_{3}\hat{\rho}_{30} - \hat{a}^{\dagger}_{2}\hat{\rho}_{20}] -\gamma\hat{\rho}_{10},\end{align}
\begin{align}\label{fl26}\frac{d}{dt}\hat{\rho}_{00}(t) &= r_{a}\rho_{00}^{(0)}\hat{\rho}(t)
- g[\hat{a}_{1}\hat{\rho}_{10} + \hat{\rho}_{01}\hat{a}^{\dagger}_{1}]
-\gamma\hat{\rho}_{00}.\end{align}
In the good cavity limit ($\gamma\gg\kappa$, where $\kappa$ is the cavity damping constant), the cavity mode
variables change slowly compared with the atomic variables. It is, hence, expected that
the atomic variables will reach steady state in a relatively short
time. The time derivative of such variables can be set to zero, while keeping the remaining atomic and cavity mode
variables at time $t$. This procedure is referred to as the
adiabatic approximation scheme. Confining the treatment to a linear
analysis, which amounts to dropping the terms containing $g$ in
Eqs. \eqref{fl21}, \eqref{fl22}, \eqref{fl23}, \eqref{fl24}, \eqref{fl25}, and \eqref{fl26},
and applying the adiabatic approximation scheme, one finds
\begin{align}\label{fl31}\hat{\rho}_{ij}= {r_{a}\rho_{ij}^{(0)}\over\gamma}\hat{\rho}(t),\end{align}
\begin{align}\label{fl32}\hat{\rho}_{11} =0,\end{align}
with $\hat{\rho} = \hat{\rho}(t)$ and $i,j=0,2,3$.
Eq. \eqref{fl32} reflects the procedure by which the initially unpopulated energy level ($|1\rangle$) is adiabatically eliminated.
At this juncture, making use of Eqs. \eqref{fl27}, \eqref{fl28}, \eqref{fl29}, \eqref{fl30},
\eqref{fl31}, \eqref{fl32},
and applying the adiabatic
approximation scheme once again, it is possible to verify that
\begin{align}\label{fl36}\hat{\rho}_{31}& =\frac{gr_{a}\hat{\rho}}{\gamma^{2}}\left[\rho_{30}^{(0)}\hat{a}_{1}^{\dagger} -
\rho_{33}^{(0)}\hat{a}_{3} -\rho_{32}^{(0)}\hat{a}_{2}\right],\end{align}
\begin{align}\label{fl37}\hat{\rho}_{21}& =\frac{gr_{a}\hat{\rho}}{\gamma^{2}}\left[\rho_{20}^{(0)}\hat{a}_{1}^{\dagger} -
\rho_{22}^{(0)}\hat{a}_{2} -\rho_{32}^{(0)}\hat{a}_{3}\right],\end{align}
\begin{align}\label{fl38}\hat{\rho}_{10}& =\frac{gr_{a}}{\gamma^{2}}\left[\rho_{00}^{(0)}\hat{a}_{1} -
\rho_{20}^{(0)}\hat{a}_{2}^{\dagger} -\rho_{30}^{(0)}\hat{a}_{3}^{\dagger}\right]\hat{\rho},\end{align} where $\rho_{ij}^{(0)}=\rho_{ji}^{(0)}$ is set based on the fact that the initial populations and coherences are real constants.
Now applying
Eqs. \eqref{fl36}, \eqref{fl37}, and \eqref{fl38}, it is possible to express Eq. \eqref{fl16} as
\begin{widetext}\begin{align}\label{fl39}\frac{d\hat{\rho}}{dt}& =\frac{r_{a}g^{2}\rho_{33}^{(0)}}{\gamma^{2}} \left[2\hat{a}^{\dagger}_{3}\hat{\rho}\hat{a}_{3} -
\hat{\rho}\hat{a}_{3}\hat{a}^{\dagger}_{3} - \hat{a}_{3}\hat{a}^{\dagger}_{3}\hat{\rho}\right] +
\frac{r_{a}g^{2}\rho_{32}^{(0)}}{\gamma^{2}} \left[2\hat{a}^{\dagger}_{3}\hat{\rho}\hat{a}_{2}+2\hat{a}_{2}^{\dagger}\hat{\rho}\hat{a}_{3} -
\hat{\rho}\hat{a}_{2}\hat{a}^{\dagger}_{3} - \hat{a}_{3}\hat{a}^{\dagger}_{2}\hat{\rho}-\hat{\rho}\hat{a}_{3}\hat{a}_{2}^{\dagger}-\hat{a}_{2}\hat{a}_{3}^{\dagger}\hat{\rho}\right]\notag\\&+
\frac{r_{a}g^{2}\rho_{22}^{(0)}}{\gamma^{2}} \left[2\hat{a}^{\dagger}_{2}\hat{\rho}\hat{a}_{2} -
\hat{\rho}\hat{a}_{2}\hat{a}^{\dagger}_{2} - \hat{a}_{2}\hat{a}^{\dagger}_{2}\hat{\rho}\right]-\frac{r_{a}g^{2}\rho_{30}^{(0)}}{\gamma^{2}} \left[2\hat{a}_{1}\hat{\rho}\hat{a}_{3}+2\hat{a}_{3}^{\dagger}\hat{\rho}\hat{a}_{1}^{\dagger} -
\hat{\rho}\hat{a}_{1}^{\dagger}\hat{a}^{\dagger}_{3} - \hat{a}_{1}^{\dagger}\hat{a}^{\dagger}_{3}\hat{\rho}-\hat{a}_{3}\hat{a}_{1}\hat{\rho}-\hat{\rho}\hat{a}_{3}\hat{a}_{1}\right]\notag\\&+
\frac{r_{a}g^{2}\rho_{00}^{(0)}}{\gamma^{2}} \left[2\hat{a}_{1}\hat{\rho}\hat{a}_{1}^{\dagger} -
\hat{\rho}\hat{a}_{1}^{\dagger}\hat{a}_{1} - \hat{a}_{1}^{\dagger}\hat{a}_{1}\hat{\rho}\right]
-\frac{r_{a}g^{2}\rho_{20}^{(0)}}{\gamma^{2}} \left[2\hat{a}_{1}\hat{\rho}\hat{a}_{2}+2\hat{a}_{2}^{\dagger}\hat{\rho}\hat{a}_{1}^{\dagger} -
\hat{\rho}\hat{a}_{2}\hat{a}_{1} - \hat{a}_{2}\hat{a}_{1}\hat{\rho}-\hat{a}_{1}^{\dagger}\hat{a}_{2}^{\dagger}\hat{\rho}-\hat{\rho}\hat{a}_{1}^{\dagger}\hat{a}_{2}^{\dagger}\right].\end{align}\end{widetext}
It is not difficult to observe from the form of this master equation that the two routes of the spontaneous transition resemble their counterparts in the three-level cascade scheme \cite{pra74043816}. It is good to note that, in addition to the correlation pertinent to the cascading process, the initial preparation also contributes a new term with prefactor ${r_{a}g^{2}\rho_{32}^{(0)}\over\gamma^{2}}$, since $\rho_{32}^{(0)}$ represents the initial coherence associated with these atomic energy levels. Based on the fact that the cross-correlation terms are indicative of nonclassical features, this master equation can be taken as evidence for the existence of correlation among the three emitted photons.
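As a quick consistency check, every bracket in Eq. \eqref{fl39} is traceless, so the master equation preserves $Tr\hat{\rho}$. This can be verified numerically in a truncated three-mode Fock space; the truncation dimension and the random test matrix below are arbitrary illustrative choices, not part of the derivation.

```python
import numpy as np

d = 3                                        # Fock truncation per mode (arbitrary)
a = np.diag(np.sqrt(np.arange(1.0, d)), 1)   # single-mode annihilation operator
I = np.eye(d)
kron3 = lambda x, y, z: np.kron(np.kron(x, y), z)
a1, a2, a3 = kron3(a, I, I), kron3(I, a, I), kron3(I, I, a)
dag = lambda x: x.conj().T

rng = np.random.default_rng(1)
X = rng.standard_normal((d**3, d**3))
rho = X @ X.T
rho /= np.trace(rho)                         # random density-like test matrix

# the four bracket patterns appearing in Eq. (fl39)
gain = lambda x, r: 2*dag(x) @ r @ x - r @ x @ dag(x) - x @ dag(x) @ r
loss = lambda x, r: 2*x @ r @ dag(x) - r @ dag(x) @ x - dag(x) @ x @ r
gain_x = lambda x, y, r: (2*dag(x) @ r @ y + 2*dag(y) @ r @ x
                          - r @ y @ dag(x) - x @ dag(y) @ r
                          - r @ x @ dag(y) - y @ dag(x) @ r)
loss_x = lambda x, y, r: (2*x @ r @ y + 2*dag(y) @ r @ dag(x)
                          - r @ dag(x) @ dag(y) - dag(x) @ dag(y) @ r
                          - y @ x @ r - r @ y @ x)

# each term contributes zero to d(Tr rho)/dt, by cyclicity of the trace
for term in (gain(a3, rho), gain(a2, rho), loss(a1, rho),
             gain_x(a3, a2, rho), loss_x(a1, a3, rho), loss_x(a1, a2, rho)):
    assert abs(np.trace(term)) < 1e-10
```

Note that the trace identity holds exactly even in a truncated space, since it relies only on cyclicity and not on the commutation relation $[\hat{a},\hat{a}^{\dagger}]=1$.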
In Eq. \eqref{fl39} there are six different prefactors that are not all independent, which might make the analysis more difficult. As a result, in order to rewrite this master equation in a more appealing manner, it proves convenient to introduce two parameters defined by
\begin{align}\label{fl43}\eta_{1}=\rho_{00}^{(0)}-\rho_{33}^{(0)},\end{align}
\begin{align}\label{fl44}\eta_{2}=\rho_{00}^{(0)}-\rho_{22}^{(0)}.\end{align} $\eta_{1}$ and $\eta_{2}$ are basically the population inversions viewed from different upper energy levels. In light of the anticipated initial preparation, $\eta_{1}$ and $\eta_{2}$ are not entirely independent. This can be evinced by the fact that \newline$\rho_{33}^{(0)}+\rho_{22}^{(0)}+\rho_{00}^{(0)}=1$, which leads to
\begin{align}\label{fl45}\rho_{00}^{(0)}={1+\eta_{1}+\eta_{2}\over3},\end{align}
\begin{align}\label{fl46}\rho_{22}^{(0)}={1+\eta_{1}-2\eta_{2}\over3},\end{align}
\begin{align}\label{fl47}\rho_{33}^{(0)}={1+\eta_{2}-2\eta_{1}\over3}.\end{align}
Moreover, based on the nature of the initial state (Eq. \eqref{fl2}), that is, $\rho_{33}^{(0)}=C_{3}(0)C^{*}_{3}(0)$, $\rho_{22}^{(0)}=C_{2}(0)C^{*}_{2}(0)$, and \newline$\rho_{00}^{(0)}=C_{0}(0)C^{*}_{0}(0)$, it may not be difficult to realize that
$\rho_{30}^{(0)}=\sqrt{\rho_{33}^{(0)}\rho_{00}^{(0)}},$
$\rho_{20}^{(0)}=\sqrt{\rho_{22}^{(0)}\rho_{00}^{(0)}},$ and
$\rho_{32}^{(0)}=\sqrt{\rho_{33}^{(0)}\rho_{22}^{(0)}}.$
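These relations, together with Eqs. \eqref{fl43}--\eqref{fl47}, can be verified numerically for any normalized set of real amplitudes; the amplitude values below are hypothetical illustrations only.

```python
import math

# hypothetical real amplitudes; C0 fixed by normalization
C3, C2, C0 = 0.6, 0.3, math.sqrt(1 - 0.36 - 0.09)
p33, p22, p00 = C3*C3, C2*C2, C0*C0

eta1, eta2 = p00 - p33, p00 - p22            # Eqs. (fl43), (fl44)

# Eqs. (fl45)-(fl47) recover the populations from the inversions
assert abs(p00 - (1 + eta1 + eta2)/3) < 1e-12
assert abs(p22 - (1 + eta1 - 2*eta2)/3) < 1e-12
assert abs(p33 - (1 + eta2 - 2*eta1)/3) < 1e-12

# coherences of the pure state with real amplitudes
assert abs(C3*C0 - math.sqrt(p33*p00)) < 1e-12   # rho_30^(0)
assert abs(C2*C0 - math.sqrt(p22*p00)) < 1e-12   # rho_20^(0)
assert abs(C3*C2 - math.sqrt(p33*p22)) < 1e-12   # rho_32^(0)
```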
Furthermore, assuming that the environment modes can be represented by a three-mode independent vacuum reservoir, it is possible to include its effect following the standard approach \cite{lou}. In this respect, with the aid of the above variable transformation, one readily finds
\begin{widetext}\begin{align}\label{fl48}\frac{d\hat{\rho}}{dt}& =AB \left[2\hat{a}^{\dagger}_{3}\hat{\rho}\hat{a}_{3} -
\hat{\rho}\hat{a}_{3}\hat{a}^{\dagger}_{3} - \hat{a}_{3}\hat{a}^{\dagger}_{3}\hat{\rho}\right]+
AE \left[2\hat{a}^{\dagger}_{3}\hat{\rho}\hat{a}_{2}-
\hat{\rho}\hat{a}_{2}\hat{a}^{\dagger}_{3}-\hat{a}_{2}\hat{a}_{3}^{\dagger}\hat{\rho}+2\hat{a}_{2}^{\dagger}\hat{\rho}\hat{a}_{3} -\hat{\rho}\hat{a}_{3}\hat{a}_{2}^{\dagger} - \hat{a}_{3}\hat{a}^{\dagger}_{2}\hat{\rho}\right] \notag\\&+
AC \left[2\hat{a}^{\dagger}_{2}\hat{\rho}\hat{a}_{2} -
\hat{\rho}\hat{a}_{2}\hat{a}^{\dagger}_{2} - \hat{a}_{2}\hat{a}^{\dagger}_{2}\hat{\rho}\right]-
AF \left[2\hat{a}_{1}\hat{\rho}\hat{a}_{3}-\hat{a}_{3}\hat{a}_{1}\hat{\rho}-\hat{\rho}\hat{a}_{3}\hat{a}_{1}+2\hat{a}_{3}^{\dagger}\hat{\rho}\hat{a}_{1}^{\dagger} -
\hat{\rho}\hat{a}_{1}^{\dagger}\hat{a}^{\dagger}_{3} - \hat{a}_{1}^{\dagger}\hat{a}^{\dagger}_{3}\hat{\rho}\right]\notag\\&+
AD \left[2\hat{a}_{1}\hat{\rho}\hat{a}_{1}^{\dagger} -
\hat{\rho}\hat{a}_{1}^{\dagger}\hat{a}_{1} - \hat{a}_{1}^{\dagger}\hat{a}_{1}\hat{\rho}\right]
-AG \left[2\hat{a}_{1}\hat{\rho}\hat{a}_{2}-
\hat{\rho}\hat{a}_{2}\hat{a}_{1} - \hat{a}_{2}\hat{a}_{1}\hat{\rho}+2\hat{a}_{2}^{\dagger}\hat{\rho}\hat{a}_{1}^{\dagger} -\hat{a}_{1}^{\dagger}\hat{a}_{2}^{\dagger}\hat{\rho}-\hat{\rho}\hat{a}_{1}^{\dagger}\hat{a}_{2}^{\dagger}\right]\notag\\&+{\kappa\over2}\sum_{i=1}^{3}\left[2\hat{a}_{i}\hat{\rho}\hat{a}^{\dagger}_{i}-\hat{a}^{\dagger}_{i}\hat{a}_{i}\hat{\rho}-\hat{\rho}\hat{a}_{i}^{\dagger}\hat{a}_{i}\right],\end{align}\end{widetext} where
\begin{align}\label{fl49}A = \frac{2r_{a}g^{2}}{\gamma^{2}},\end{align}
\begin{align}\label{fl50}B=\frac{1+\eta_{2}-2\eta_{1}}{6},\end{align}
\begin{align}\label{fl51}C=\frac{1+\eta_{1}-2\eta_{2}}{6},\end{align}
\begin{align}\label{fl52}D=\frac{1+\eta_{1}+\eta_{2}}{6},\end{align}
\begin{align}\label{fl53}E=\frac{\sqrt{1-\eta_{1}-\eta_{2}+5\eta_{1}\eta_{2}-2(\eta_{1}^{2}+\eta^{2}_{2})}}{6},\end{align}
\begin{align}\label{fl54}F=\frac{\sqrt{1-\eta_{1}+2\eta_{2}-\eta_{1}\eta_{2}-2\eta_{1}^{2}+\eta^{2}_{2}}}{6},\end{align}
\begin{align}\label{fl55}G=\frac{\sqrt{1-\eta_{2}+2\eta_{1}-\eta_{1}\eta_{2}+\eta_{1}^{2}-2\eta^{2}_{2}}}{6},\end{align}
and $\kappa$ is the cavity damping constant taken to be the same for all modes for convenience.
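The square-root arguments in Eqs. \eqref{fl53}--\eqref{fl55} are nothing but the expanded products of the initial populations; comparing Eqs. \eqref{fl39} and \eqref{fl48} with $A=2r_{a}g^{2}/\gamma^{2}$ suggests $B=\rho_{33}^{(0)}/2$, $C=\rho_{22}^{(0)}/2$, $D=\rho_{00}^{(0)}/2$, $E=\rho_{32}^{(0)}/2$, $F=\rho_{30}^{(0)}/2$, and $G=\rho_{20}^{(0)}/2$. A short numerical sketch of this identification (the sample values of $\eta_{1}$, $\eta_{2}$ are arbitrary):

```python
import math

def pops(eta1, eta2):
    # Eqs. (fl45)-(fl47): populations in terms of the inversions
    return ((1 + eta2 - 2*eta1)/3,   # rho_33
            (1 + eta1 - 2*eta2)/3,   # rho_22
            (1 + eta1 + eta2)/3)     # rho_00

for eta1, eta2 in [(0.0, 0.0), (0.1, 0.3), (0.25, 0.25), (-0.4, -0.1)]:
    p33, p22, p00 = pops(eta1, eta2)
    E = math.sqrt(1 - eta1 - eta2 + 5*eta1*eta2 - 2*(eta1**2 + eta2**2))/6
    F = math.sqrt(1 - eta1 + 2*eta2 - eta1*eta2 - 2*eta1**2 + eta2**2)/6
    G = math.sqrt(1 - eta2 + 2*eta1 - eta1*eta2 + eta1**2 - 2*eta2**2)/6
    assert abs(E - math.sqrt(p33*p22)/2) < 1e-12   # E = rho_32^(0)/2
    assert abs(F - math.sqrt(p33*p00)/2) < 1e-12   # F = rho_30^(0)/2
    assert abs(G - math.sqrt(p22*p00)/2) < 1e-12   # G = rho_20^(0)/2
```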
\section{Demonstration of the tripartite entanglement}
This master equation is essentially characterized by $A$, $\eta_{1}$, $\eta_{2}$, and $\kappa$. It is possible to infer from its form that the terms proportional to $AB$ and $AC$ indicate the gain in modes $\hat{a}_{3}$ and $\hat{a}_{2}$, respectively, whereas the term proportional to $AD$ represents the loss of mode $\hat{a}_{1}$. This outcome is fairly consistent with the earlier reports on the cascade three-level scheme. As already discussed, there is a cross-correlation between the three modes, taken pairwise: $\hat{a}_{1}$ and $\hat{a}_{2}$, $\hat{a}_{1}$ and $\hat{a}_{3}$, and $\hat{a}_{2}$ and $\hat{a}_{3}$. Hence, from the outset, it is not hard to envisage, according to the van Loock and Furusawa criteria \cite{pra67052315}, a genuine CV tripartite entanglement of the radiation generated by a coherently prepared $Y$-shaped four-level laser. The detectable degree of entanglement depends on the strength of $E$, $F$, and $G$, which in turn rely heavily on $A$, $\eta_{1}$, and $\eta_{2}$. It goes without saying that the strength of the entanglement by and large depends on the way the atoms are initially prepared and the rate at which they are injected, which is believed to give the experimenter considerable freedom for manipulation.
It is good to note that the relation between $\eta_{1}$ and $\eta_{2}$ is so subtle that it can lead to very rich alternatives in analyzing the system. For instance, when the atoms are initially prepared to be in the lower energy level, one can readily see that $\eta_{1}=\eta_{2}=1$. In this case, $B=C=0$, which indicates that no photon is generated from the two upper levels. Moreover, $E=F=G=0$, which implies that there is virtually no anticipated correlation, as it should be. However, assuming the atoms to be prepared initially with equal probability among the three levels, $\eta_{1}=\eta_{2}=0$, leads to $B=C=D=E=F=G=1/6$. This indicates that there is a meaningful correlation among the emitted radiations. Basically, these two options are the two extreme cases of no coherence and of the possible maximum coherence at the beginning, respectively.
For the sake of convenience, suppose the atoms are initially prepared so that 50\% of them are in the lower energy level, that is, $\eta_{1}+\eta_{2}=0.5$, and the remaining 50\% are in one of the upper energy levels (say $|3\rangle$); $\eta_{1}=0$ and $\eta_{2}=0.5$. In this case, one can readily see from Eqs. \eqref{fl51} and \eqref{fl53}--\eqref{fl55} that $C=E=G=0$, which shows that only one branch of the cascade transitions is active. Since there is no photon associated with $\hat{a}_{2}$, tripartite entanglement is not expected. In order to see the situation in depth, with the same assumption regarding the lower energy level, suppose the remaining 50\% of the population is equally shared between the upper two energy levels, that is, $\eta_{1}=\eta_{2}=0.25$. In this case, it is not difficult to assert that all the prefactors in the master equation are different from zero, which indicates the possibility of having nonclassical correlations among the emitted photons. It is not difficult to observe, at this juncture, that a similar outcome could have been predicted had the 50\% population been arbitrarily shared between the upper two energy levels, although the details can vary.
With the same convention, suppose the atoms are initially prepared in a 50:50 coherent superposition of the upper two energy levels; $\eta_{1}=\eta_{2}=-0.5$ (note that in this case the lower energy level is initially unpopulated). One can readily see that $D=F=G=0$. This indicates that the radiation emitted in the spontaneous transition between the lower two energy levels is not correlated with the radiation from the corresponding upper two transitions. This can be directly linked to the fact that, since the upper two energy levels are prepared with a maximum coherent superposition between them, the atomic transition is basically restricted to transitions from energy level $|3\rangle$ to $|1\rangle$ and then to $|2\rangle$ and vice versa. The chance that the atoms break the established coherence and go over to the lower energy level is quite small. That is why even the corresponding mean photon number of the radiation, which largely depends on the prefactor $D$, can also be quite small.
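The limiting preparations discussed above can be confirmed directly from Eqs. \eqref{fl50}--\eqref{fl55}; the following short sketch transcribes the prefactor expressions as printed (with a guard against rounding inside the square roots):

```python
import math

def BCDEFG(eta1, eta2):
    # prefactors of Eqs. (fl50)-(fl55)
    s = lambda x: math.sqrt(max(x, 0.0))   # guard against -0.0 from rounding
    return ((1 + eta2 - 2*eta1)/6,
            (1 + eta1 - 2*eta2)/6,
            (1 + eta1 + eta2)/6,
            s(1 - eta1 - eta2 + 5*eta1*eta2 - 2*(eta1**2 + eta2**2))/6,
            s(1 - eta1 + 2*eta2 - eta1*eta2 - 2*eta1**2 + eta2**2)/6,
            s(1 - eta2 + 2*eta1 - eta1*eta2 + eta1**2 - 2*eta2**2)/6)

B, C, D, E, F, G = BCDEFG(1.0, 1.0)      # all atoms initially in |0>
assert B == C == 0 and E == F == G == 0  # no gain, no correlation

vals = BCDEFG(0.0, 0.0)                  # equal populations in |0>, |2>, |3>
assert all(abs(v - 1/6) < 1e-12 for v in vals)

B, C, D, E, F, G = BCDEFG(-0.5, -0.5)    # 50:50 superposition of |2> and |3>
assert D == F == G == 0 and E > 0        # lower transition decoupled
```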
In the same manner, it is possible to assert that there could be a meaningful correlation between the photons emitted during a direct spontaneous transition from the upper two energy levels to the lower one, induced by the coherence of the cascading process, when the initial coherent superposition is not the maximum possible. The nonclassical correlation in this system is reminiscent of the absence of bipartite entanglement in a three-level cascade system when the lower and upper energy levels are prepared in a maximum coherent superposition \cite{jpb42215506}.
One may deduce from this interpretation that the nonclassical correlation that leads to a tripartite entanglement can be induced by initially preparing the atoms coherently among the upper two energy levels and the lower one with arbitrary, nonzero, probabilities. It is also envisioned that the cascading transitions are a vital mechanism in connecting the upper two energy levels with the lower one. Therefore, there is no doubt that properly harnessing the utility accorded by the initial preparation and the cascading mechanisms results in a genuine tripartite entangled light. Based on earlier studies of the three-level cascade laser \cite{jpb41055503,jpb41145501}, it is equally expected that externally pumping the atoms can establish a coupling between energy levels for which direct spontaneous transitions are dipole forbidden, which can significantly improve the tripartite nonclassical correlations.
\section{Conclusion}
The detailed derivation of the master equation that describes the radiation emitted from a coherently prepared nondegenerate $Y$-shaped four-level correlated emission laser is presented. To limit otherwise arising complications, the atoms are presumed to be prepared in an arbitrary perfect coherent superposition of the three energy levels, with the intermediate energy level taken to be unpopulated at the beginning so as to make use of the adiabatic elimination technique. In setting up the laser, the initially prepared atoms are presumed to be injected into a triply resonant cavity. To pave the way for further in-depth analysis, the approaches by which the corresponding stochastic differential equations can be obtained and solved are outlined in the Appendix. Moreover, the rate equations that describe the time evolution of the various correlations required in the study of the quantum features and statistical properties of the radiation, and the procedure applied in solving them, are provided.
It turns out that the quantum system under consideration can be a source of continuous-variable tripartite entangled light under certain conditions. Detailed investigation shows that, owing to the cascade transitions, the emission-absorption mechanism guided by the induced coherent superposition is responsible for establishing the required correlation between the emitted photons. Further analysis based on varying the way the atoms are initially prepared shows that coherently coupling the three atomic energy levels is vital in generating a tripartite entangled light. It is unequivocally asserted that leaving one of the upper energy levels unpopulated at the beginning leads to a bipartite entanglement at best, since there is no way for the atoms to reach the unpopulated level in the course of the process. However, initially preparing the atoms in the upper two energy levels with less than the possible maximum coherence, leaving the lower energy level unpopulated, can lead to the appearance of the tripartite entanglement, since the upper two energy levels can then be coupled to the lower one via the cascade transitions.
In relation to the similarity of the result of preparing the atoms in the coherent superposition of the upper two energy levels with the corresponding three-level scheme, it is expected that an external driving mechanism can be used to improve the generated entanglement in some respects. Furthermore, it may not be hard to realize that a highly intense light can be generated, since the injection mechanism allows one to send as many atoms as required through the cavity over a longer period of time without significantly exposing them to the fluctuations and broadening associated with heating. Hence, this study by and large tries to show that the nondegenerate $Y$-shaped four-level scheme can be a source of reliable, bright, genuine tripartite entanglement with considerable promise of flexible arrangement, essentially by combining the initial preparation and external driving options. Even though starting with the form of the master equation and the values of the involved prefactors yields encouraging outcomes, it is incontestable that an in-depth analysis is still lacking. Owing to the involved rigor, the length of the expressions, and the complications of the different viable scenarios, a simpler and more specialized analysis is deferred to subsequent communications.
\section*{ACKNOWLEDGMENTS}
I thank the Max Planck Institute for the Physics of Complex Systems for allowing me to visit and use their facility in carrying out this research and Dilla University for granting the leave of absence. I also acknowledge the valuable comments of Klaus Hornberger.
\section*{APPENDIX}
In order to study the quantum features and statistical properties of the radiation, different auto-correlations, which give the pertinent mean photon numbers, and cross-correlations are required. In many instances, it is possible to obtain these correlations by making use of the master equation and then solving the resulting coupled differential equations. To this
effect, employing Eq. \eqref{fl48} and the fact that
${d\over dt}\langle\hat{O}(t)\rangle =
Tr\left({d\hat{\rho}\over dt}\hat{O}\right),$ where $\hat{O}$ is an operator, the following essential correlations are obtained (the full list is too long to reproduce in its entirety):
$${d\over dt}\langle\hat{a}^{\dagger}_{1}(t)\rangle =-\left({\kappa\over2}+AD\right)\langle\hat{a}^{\dagger}_{1}(t)\rangle$$$$+AF\langle\hat{a}_{3}(t)\rangle+AG\langle\hat{a}_{2}(t)\rangle,$$
$${d\over dt}\langle\hat{a}_{2}(t)\rangle =-\left({\kappa\over2}-AC\right)\langle\hat{a}_{2}(t)\rangle$$$$+AE\langle\hat{a}_{3}(t)\rangle-AG\langle\hat{a}_{1}^{\dagger}(t)\rangle,$$
$${d\over dt}\langle\hat{a}_{3}(t)\rangle =-\left({\kappa\over2}-AB\right)\langle\hat{a}_{3}(t)\rangle$$$$+AE\langle\hat{a}_{2}(t)\rangle-AF\langle\hat{a}_{1}^{\dagger}(t)\rangle,$$
$${d\over dt}\langle\hat{a}_{3}^{\dagger}\hat{a}_{3}\rangle= (2AB-\kappa)\langle\hat{a}_{3}^{\dagger}\hat{a}_{3}\rangle +AE[\langle\hat{a}_{3}^{\dagger}\hat{a}_{2}\rangle+\langle\hat{a}_{3}\hat{a}_{2}^{\dagger}\rangle]$$$$- AF[\langle\hat{a}_{3}^{\dagger}\hat{a}_{1}^{\dagger}\rangle+\langle\hat{a}_{3}\hat{a}_{1}\rangle]+2AB,$$
$${d\over dt}\langle\hat{a}_{2}^{\dagger}\hat{a}_{2}\rangle= (2AC-\kappa)\langle\hat{a}_{2}^{\dagger}\hat{a}_{2}\rangle +AE[\langle\hat{a}_{3}^{\dagger}\hat{a}_{2}\rangle+\langle\hat{a}_{3}\hat{a}_{2}^{\dagger}\rangle]$$$$- AG[\langle\hat{a}_{2}^{\dagger}\hat{a}_{1}^{\dagger}\rangle+\langle\hat{a}_{2}\hat{a}_{1}\rangle]+2AC,$$
$${d\over dt}\langle\hat{a}_{1}^{\dagger}\hat{a}_{1}\rangle=- (2AD+\kappa)\langle\hat{a}_{1}^{\dagger}\hat{a}_{1}\rangle +AF[\langle\hat{a}_{3}^{\dagger}\hat{a}_{1}^{\dagger}\rangle+\langle\hat{a}_{3}\hat{a}_{1}\rangle]$$$$+ AG[\langle\hat{a}_{1}^{\dagger}\hat{a}_{2}^{\dagger}\rangle+\langle\hat{a}_{2}\hat{a}_{1}\rangle],$$
$${d\over dt}\langle\hat{a}_{3}^{\dagger}\hat{a}_{2}\rangle= (AB+AC-\kappa)\langle\hat{a}_{3}^{\dagger}\hat{a}_{2}\rangle-AG\langle\hat{a}^{\dagger}_{3}\hat{a}_{1}^{\dagger}\rangle$$$$+ AE[\langle\hat{a}_{3}^{\dagger}\hat{a}_{3}\rangle+\langle\hat{a}_{2}^{\dagger}\hat{a}_{2}\rangle] -AF\langle\hat{a}_{2}\hat{a}_{1}\rangle+2AE,$$
$${d\over dt}\langle\hat{a}_{3}\hat{a}_{1}\rangle= (AB-AD-\kappa)\langle\hat{a}_{3}\hat{a}_{1}\rangle+ AG\langle\hat{a}_{3}\hat{a}_{2}^{\dagger}\rangle$$$$+AF[\langle\hat{a}_{3}^{\dagger}\hat{a}_{3}\rangle-\langle\hat{a}_{1}^{\dagger}\hat{a}_{1}\rangle]+AF,$$
$${d\over dt}\langle\hat{a}_{2}\hat{a}_{1}\rangle= (AC-AD-\kappa)\langle\hat{a}_{2}\hat{a}_{1}\rangle+ AE\langle\hat{a}_{1}\hat{a}_{3}\rangle $$$$+AF\langle\hat{a}_{2}\hat{a}_{3}^{\dagger}\rangle+AG[\langle\hat{a}_{2}^{\dagger}\hat{a}_{2}\rangle-\langle\hat{a}_{1}^{\dagger}\hat{a}_{1}\rangle]+AG.\eqno(A1)$$
Since $c$-number expressions associated with normal ordering are mathematically more convenient, the first three equations are rewritten as
$${d\over dt}\alpha^{*}_{1}(t) =-\left({\kappa\over2}+AD\right)\alpha^{*}_{1}(t) +AF\alpha_{3}(t)$$$$+AG\alpha_{2}(t)+f^{*}_{1}(t),$$
$${d\over dt}\alpha_{2}(t) =-\left({\kappa\over2}-AC\right)\alpha_{2}(t) +AE\alpha_{3}(t)$$$$-AG\alpha_{1}^{*}(t)+f_{2}(t),$$
$${d\over dt}\alpha_{3}(t) =-\left({\kappa\over2}-AB\right)\alpha_{3}(t) +AE\alpha_{2}(t)$$$$-AF\alpha_{1}^{*}(t)+f_{3}(t),\eqno(A2)$$
where the $f_{i}(t)$ are the pertinent stochastic noise forces.
For the sake of convenience, these equations can be put in a more compact form as
\begin{equation*}{d\over dt}{\cal{R}}(t)=-{\cal{M}}{\cal{R}}(t)+{\cal{N}}(t),\eqno(A3)\end{equation*} where
\begin{equation*}{\cal{M}}=\left(\begin{matrix}
{\kappa\over2}+AD &-AG & -AF \\
AG & {\kappa\over2}-AC & -AE \\
AF & -AE & {\kappa\over2}-AB
\end{matrix}\right),\end{equation*}
\begin{equation*}{\cal{R}}(t)=\left(\begin{matrix}
\alpha_{1}^{*}(t) \\
\alpha_{2}(t)\\
\alpha_{3}(t)
\end{matrix}\right),\end{equation*}
\begin{equation*}{\cal{N}}(t)=\left(\begin{matrix}
f_{1}^{*}(t) \\
f_{2}(t)\\
f_{3}(t)
\end{matrix}\right).\eqno(A4)\end{equation*}
In principle, these coupled differential equations can be solved by somewhat lengthy but straightforward algebra. First of all, it is desirable, and possible, to diagonalize the matrix ${\cal{M}}$ using the eigenvalue equations ${\cal{M}} {\cal{V}}_{i}=\lambda_{i}{\cal{V}}_{i},$
where the ${\cal{V}}_{i}$ are the eigenvectors and the $\lambda_{i}$ are the corresponding eigenvalues. For a $3\times 3$ matrix, although the computation is lengthy, both the eigenvalues and the corresponding eigenvectors can be found explicitly. Writing ${\cal{V}}$ for the matrix whose columns are the eigenvectors, one has ${\cal{V}} {\cal{V}}^{-1}=\cal{I}$ and ${\cal{D}}={\cal{V}}^{-1} {\cal{M}} {\cal{V}},$ where ${\cal{D}}$ is the diagonal matrix corresponding to ${\cal{M}}$. With this arrangement, the solution of Eq. (A3) can be written as
\begin{equation*} {\cal{R}}(t)=\big[{\cal{V}} e^{-{\cal{D}} t}{\cal{V}}^{-1}\big] {\cal{R}}(0) +\int_{0}^{t}\big[ {\cal{V}} e^{-{\cal{D}}(t-t')}{\cal{V}}^{-1}\big] {\cal{N}}(t')dt'.\eqno(A5)\end{equation*}
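The eigen-decomposition solution of Eq. (A3) with the noise switched off can be sanity-checked numerically. The sketch below is illustrative only: the parameter values for $\kappa$, $A$, and $B$--$G$ are placeholders, not values fixed in the text.

```python
import numpy as np

# Illustrative placeholder values; kappa, A and B--G are not fixed in the text.
kappa, A = 0.8, 0.5
B, C, D, E, F, G = 1.0, 0.3, 0.4, 0.2, 0.25, 0.15

# The drift matrix M of Eq. (A4), acting on R = (alpha_1^*, alpha_2, alpha_3).
M = np.array([
    [kappa/2 + A*D, -A*G,          -A*F],
    [A*G,           kappa/2 - A*C, -A*E],
    [A*F,          -A*E,            kappa/2 - A*B],
])

# Diagonalize: M = V diag(evals) V^{-1}, so exp(-M t) = V exp(-D t) V^{-1}.
evals, V = np.linalg.eig(M)
Vinv = np.linalg.inv(V)

def propagate(R0, t):
    """Noise-free part of Eq. (A5): R(t) = V e^{-D t} V^{-1} R(0)."""
    return (V @ np.diag(np.exp(-evals * t)) @ Vinv @ R0).real

R0 = np.array([0.1, -0.2, 0.3])
t_final = 2.0
R_eig = propagate(R0, t_final)

# Cross-check against a fine RK4 integration of dR/dt = -M R.
def rk4(R, dt, steps):
    for _ in range(steps):
        k1 = -M @ R
        k2 = -M @ (R + 0.5*dt*k1)
        k3 = -M @ (R + 0.5*dt*k2)
        k4 = -M @ (R + dt*k3)
        R = R + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
    return R

R_num = rk4(R0.astype(float), t_final/4000, 4000)
assert np.allclose(R_eig, R_num, atol=1e-8)
```

The eigenvalues of ${\cal{M}}$ may be complex for some parameter choices; the solution is real nonetheless, which is why the sketch takes the real part.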
Here ${\cal{R}}(0)$ describes the properties of the system at the beginning of the lasing process. If the cavity is initially assumed to be in a three-mode vacuum state, the contribution of the first term in Eq. (A5) can be disregarded. It is also possible to verify, by taking the expectation values of the expressions in Eq. (A2) and comparing them with the first three expressions in Eq. (A1), that
\begin{equation*}\langle{\cal{N}}(t)\rangle=0,\eqno(A6)\end{equation*} which implies that $\langle f_{i}(t)\rangle=0$ for $i=1,2,3$. This essentially reflects the stochastic nature of the noise.
Furthermore, applying the various rate equations to the distinct terms of Eq. (A5), and assuming that the noise force at a later time does not affect the system variables at an earlier time, one finds
\begin{equation*}{\cal{F}}(t',t'')=A\left(\begin{matrix}
-2D & G & F \\
G& 2C &2E \\
F& 2E & 2B
\end{matrix}\right)\delta(t'-t''),\eqno(A7)\end{equation*} where
${\cal{F}}(t,t')=\langle{\cal{N}}(t) {\cal{N}}^{T}(t')\rangle,$ in which $T$ stands for complex conjugate transpose. Based on the results obtained so far, it can be observed that
\begin{equation*}\langle{\cal{R}}(t){\cal{R}}^{T}(t)\rangle=\int_{0}^{t}\int_{0}^{t}{\cal{P}}(t,t'){\cal{F}}(t',t''){\cal{P}}^{T}(t,t'')dt'dt'',\eqno(A8)\end{equation*}
where
${\cal{P}}(t,t_{i})={\cal{V}}e^{-{\cal{D}}(t-t_{i})}{\cal{V}}^{-1}.$
In principle, carrying out the involved matrix manipulations and then the term-by-term integrations yields the correlations required for studying the quantum features and statistical properties of the radiation without further approximation. Even though the number of terms to be handled is somewhat large, the integration is greatly simplified by the presence of the $\delta$-function in the correlation of the noise forces (${\cal{F}}(t',t'')$) and by the exponential dependence.
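Concretely, writing ${\cal{F}}(t',t'')={\cal{F}}_{0}\,\delta(t'-t'')$ with ${\cal{F}}_{0}$ the constant matrix appearing in Eq. (A7), the $\delta$-function collapses the double integral of Eq. (A8) to a single one:

```latex
\begin{equation*}
\langle{\cal{R}}(t){\cal{R}}^{T}(t)\rangle
=\int_{0}^{t}{\cal{P}}(t,t')\,{\cal{F}}_{0}\,{\cal{P}}^{T}(t,t')\,dt',
\end{equation*}
```

after which each matrix entry is a finite sum of elementary integrals of exponentials in $t'$.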
\end{document}
\begin{document}
\title[Lattice action on the boundary]{Lattice action on the boundary of $\hbox{\rm SL}(n,\mathbb{R})$}
\author{Alexander Gorodnik}
\address{Department of Mathematics, University of Michigan, Ann Arbor, MI 48109}
\email{[email protected]}
\thanks{This article is part of the author's PhD thesis at Ohio State University, written under the supervision of Prof.~Bergelson.}
\begin{abstract}
Let $\Gamma$ be a lattice in $G=\hbox{\rm SL}(n,\mathbb{R})$ and $X=G/S$ a homogeneous space of $G$,
where $S$ is a closed subgroup of $G$ which contains a real algebraic subgroup $H$ such that
$G/H$ is compact. We establish uniform distribution of orbits of $\Gamma$ in $X$, analogous
to the classical equidistribution on the torus. To obtain this result, we first prove
an ergodic theorem along balls in the connected component of a Borel subgroup of $G$.
\end{abstract}
\maketitle
\section{Introduction}
Let $G=\hbox{\rm SL}(n,\mathbb{R})$ and $\Gamma$ a lattice in $G$; that is, $\Gamma$ is a discrete subgroup of $G$ with finite covolume.
We study the action of $\Gamma$ on a compact homogeneous space $X$ of algebraic origin.
Namely, $X=G/S$ where $S$ is a closed subgroup of $G$ which contains the connected
component of a real algebraic subgroup $H$ of $G$ such that $G/H$ is compact.
An important example is provided by the Furstenberg boundary of $G$ \cite{f63}. In this case, $X=G/B$
where $B$ is the subgroup of upper triangular matrices in $G$.
It is possible to deduce from a result of Dani \cite[Theorem~13.1]{st}
that every orbit of $\Gamma$ in $X$ is dense.
We will prove a quantitative estimate for the distribution of orbits.
Introduce a norm on $G$:
\begin{equation}\label{eq_norm0}
\|g\|=\left(\hbox{Tr}({}^tgg)\right)^{1/2}=\left(\sum_{i,j} g_{ij}^2\right)^{1/2}\quad\hbox{for}\quad g=(g_{ij})\in G.
\end{equation}
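The norm (\ref{eq_norm0}) is invariant under left and right multiplication by elements of $\hbox{SO}(n)$, a fact used repeatedly below (e.g. in the proof of (\ref{eq_BTasy})). A quick numerical sketch of this invariance, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def frob(g):
    # ||g|| = (Tr(g^T g))^{1/2} = (sum_ij g_ij^2)^{1/2}, as in (eq_norm0).
    return np.sqrt(np.trace(g.T @ g))

n = 3
g = rng.standard_normal((n, n))

# The two expressions in (eq_norm0) agree.
assert np.isclose(frob(g), np.sqrt((g**2).sum()))

# A random element of SO(n) via QR (sign-fixed so that det = +1).
q, r = np.linalg.qr(rng.standard_normal((n, n)))
q = q @ np.diag(np.sign(np.diag(r)))
if np.linalg.det(q) < 0:
    q[:, 0] *= -1
assert np.isclose(np.linalg.det(q), 1.0)

# The norm is bi-invariant under K = SO(n): ||kg|| = ||gk|| = ||g||.
assert np.isclose(frob(q @ g), frob(g))
assert np.isclose(frob(g @ q), frob(g))
```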
For $T>0$, $\Omega\subseteq X$, and $x_0\in X$, define a counting function
\begin{equation}\label{eq_NT}
N_T(\Omega, x_0)=|\{\gamma\in \Gamma: \|\gamma\|<T, \gamma\cdot x_0\in \Omega\}|.
\end{equation}
Let $m$ be a normalized $\hbox{SO}(n)$-invariant measure on $X$.
It follows from the Iwasawa decomposition (see (\ref{eq_iwasawa}) below) that
$X$ is a homogeneous space of $\hbox{SO}(n)$.
Therefore, the measure $m$ is unique.
The following theorem shows that orbits of $\Gamma$ in $X$ are uniformly distributed with respect
to the measure $m$.
\begin{thm} \label{th_asympt}
For a Borel set $\Omega\subseteq X$ such that $m(\partial \Omega)=0$ and $x_0\in X$,
\begin{equation}\label{eq_asympt}
N_T(\Omega, x_0)\sim \frac{\gamma_n}{\bar\mu(\Gamma\backslash G)} m(\Omega) T^{n(n-1)}\quad\hbox{as}\quad T\rightarrow\infty,
\end{equation}
where $\gamma_n$ is a constant (defined in (\ref{eq_gamma}) below), and $\bar\mu$
is a finite $G$-invariant measure on $\Gamma\backslash G$ (defined in (\ref{eq_mubar})).
\end{thm}
It would be interesting to obtain an estimate for the error term in (\ref{eq_asympt}).
This, however, would demand introducing different techniques than those employed here.
Theorem \ref{th_asympt} is analogous to the result of Ledrappier \cite{l} (see also \cite{n})
who investigated the distribution of dense orbits of a lattice $\Gamma\subset\hbox{\rm SL}(2,\mathbb{R})$
acting on $\mathbb{R}^2$. Ledrappier used the equidistribution property of the horocycle flow.
Similarly, we deduce Theorem \ref{th_asympt} from an equidistribution property of orbits of the Borel subgroup.
Denote by $B^o$ the connected component of the upper triangular subgroup of $G$.
To prove Theorem \ref{th_asympt}, we use the following ergodic theorem for the right action of $B^o$
on $\Gamma\backslash G$.
\begin{thm} \label{th_ergodic}
Let $\varrho$ be a right Haar measure on $B^o$, and $\nu$ the normalized $G$-invariant measure
on $\Gamma\backslash G$. Then for any $\tilde{f}\in C_c(\Gamma\backslash G)$ and $y\in \Gamma\backslash G$,
$$
\frac{1}{\varrho(B^o_T)}\int_{B^o_T} \tilde{f}(yb^{-1})d\varrho(b)\rightarrow \int_{\Gamma\backslash G} \tilde{f}d\nu\quad\hbox{as}\quad T\rightarrow\infty,
$$
where $B^o_T=\{b\in B^o:\|b\|<T\}$.
\end{thm}
\begin{remark}[Remarks.]
\hspace{1cm}
\begin{enumerate}
\item[1.] One can consider the analogous limit for a left Haar measure on $B^o$.
In this case, it may happen that the limit is $0$ for some $y\in \Gamma\backslash G$
and all $\tilde{f}\in C_c(\Gamma\backslash G)$ (see Proposition \ref{l_contr}).
\item[2.] Since $B^o$ is solvable (hence, amenable),
one might expect that convergence for a.e. $y\in\Gamma\backslash G$
follows from known ergodic theorems for amenable group actions.
Moreover, since $\nu$ is the only normalized $B^o$-invariant measure on $\Gamma\backslash G$,
one could expect that convergence holds for every $y$.
However, this approach does not work
because the sets $B^o_{T}$ do not form a F\o lner sequence, and the space $\Gamma\backslash G$
is not compact in general.
\item[3.] To prove Theorem \ref{th_ergodic}, we use Ratner's classification
of ergodic measures for unipotent flows \cite{r1}. In fact, we don't need the full strength of
this result. Since the subgroup $U$ (defined in (\ref{eq_U}) below) is horospherical,
it is enough to know classification of ergodic measures for horospherical subgroups.
The situation is much easier in this special case (see \cite[\S 13]{st}).
\item[4.] We expect that analogs of Theorems \ref{th_asympt} and \ref{th_ergodic} hold
for a noncompact semisimple Lie group and its irreducible lattice with balls $B_T^o$
defined by the Riemannian metric. The main difficulty here is to show that
the measure of $B_T^o$ is ``concentrated'' on the ``cone'' $B_T^C$ (cf. Lemma \ref{lem_BTC2}).
\item[5.] It was pointed out by P.~Sarnak that it might be possible to prove the results of this article using
harmonic analysis on $\Gamma\backslash G$. In particular, Corollary \ref{th_distr_sl2} below can be deduced from
the result of Good (Corollary on page 119 of \cite{g}). Note that his method gives an estimate on the error term.
\end{enumerate}
\end{remark}
The paper is arranged as follows. In the next section, we give examples of applications of Theorem \ref{th_asympt}.
In Section \ref{sec_ba} we set up notations and prove some basic lemmas.
Theorem \ref{th_asympt} is deduced from Theorem \ref{th_ergodic} in Section \ref{sec_th1}.
In Sections \ref{sec_uni} and \ref{sec_rep}, we review results on the structure of unipotent flows and prove auxiliary facts about
finite-dimensional representations of $\hbox{\rm SL}(n,\mathbb{R})$. Finally, Theorem \ref{th_ergodic}
is proved in Section \ref{sec_th2}.
\section{Examples} \label{s_ex}
\begin{enumerate}
\item[1.] Let $X=\mathbb{R}\cup\{\infty\}$, which is considered as the boundary of the hyperbolic upper half plane.
The group $G=\hbox{\rm SL}(2,\mathbb{R})$ acts on $X$ by fractional linear transformations:
\begin{equation}\label{eq_frac}
g\cdot x=\frac{ax+b}{cx+d}\quad \hbox{for}\;x\in X,\; g=\left(
\begin{tabular}{cc}
$a$ & $b$\\
$c$ & $d$
\end{tabular}
\right)\in G.
\end{equation}
Let $\Gamma$ be a lattice in $\hbox{\rm SL}(2,\mathbb{R})$.
For $\Omega\subseteq X$ and $x_0\in X$, define the counting function $N_T(\Omega,x_0)$ as in (\ref{eq_NT}).
Its asymptotics can be derived from Theorem \ref{th_asympt}.
Note that the asymptotics of $N_T(X,x_0)$ as $T\rightarrow\infty$ provides a solution of
the so-called hyperbolic circle problem (cf. \cite[p.~266]{ter1} and references therein).
\begin{col} \label{th_distr_sl2}
{\sc (of Theorem \ref{th_asympt})}
For $x_0\in X$ and $-\infty\le a<b\le +\infty$,
$$
N_T((a,b),x_0)\sim c_{\Gamma}\left(\int_a^b\frac{dt}{1+t^2}\right) T^2\quad\hbox{as}\quad T\rightarrow\infty,
$$
where $c_\Gamma=\frac{1}{2\bar\mu(\Gamma\backslash G)}$ ($\bar\mu$ is the
$G$-invariant measure on $\Gamma\backslash G$ defined in (\ref{eq_mubar})).
\end{col}
\begin{proof}[Proof.]
It is easy to see from (\ref{eq_frac}) that $G$ acts
transitively on $X$, and the
stabilizer of $\infty$ in $G$ is the group of upper triangular matrices $B$.
Thus, Theorem \ref{th_asympt} is applicable to the space $X$.
Note that
$$
K=\hbox{SO}(2)=\left\{k_\theta=\left(\begin{tabular}{rr}
$\cos 2\pi\theta$ & $\sin 2\pi\theta$\\
$-\sin 2\pi\theta$ & $\cos 2\pi\theta$
\end{tabular}
\right):\theta\in [0,1)\right\},
$$
and the normalized Haar measure on $K$ is given by $dk=d\theta$.
The measure $m$ on $X$ can be defined as the image of $dk$ under the map
$K\rightarrow X:k\rightarrow k\cdot \infty$.
By (\ref{eq_frac}), $k_\theta\cdot\infty=-\cot 2\pi\theta$.
Then
$$
m((a,b))=\int_{k\cdot\infty\in (a,b)}dk=\mathop{\int_{-\cot 2\pi\theta\in (a,b)}}_{\theta\in [0,1)} d\theta
=\frac{1}{\pi}\int_a^b\frac{dt}{1+t^2}.
$$
We have used the substitution $t=-\cot 2\pi\theta$.
Finally, by Theorem \ref{th_asympt},
$$
N_T((a,b),x_0)\mathop{\sim}_{T\rightarrow\infty} \frac{\gamma_2 m((a,b))}{\bar\mu(\Gamma\backslash G)}T^2
=c_{\Gamma}\left(\int_a^b\frac{dt}{1+t^2}\right) T^2.
$$
Note that $\gamma_2=\frac{\pi}{2}$ by (\ref{eq_gamma}) below.
\end{proof}\medbreak
\item[2.] Let $X=\mathbb{P}^{n-1}$ be the projective space
(more generally $X=\mathcal{G}_{n,k}$, Grassmann variety, or $X=\mathcal{F}_n$,
flag variety), and $m$ be the rotation invariant normalized measure on $X$.
Then the asymptotic estimate (\ref{eq_asympt}) holds for the standard action of $G=\hbox{\rm SL}(n,\mathbb{R})$
on $X$. This is a special case of Theorem \ref{th_asympt} because $X$ can be identified
with $G/S$ where $S$ is a closed subgroup of $G$ that contains $B$, the group of
upper triangular matrices.
\end{enumerate}
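The pushforward computation in Example 1 can be sanity-checked by Monte Carlo sampling of $\theta$ (an illustrative check, not part of the argument): the image of the uniform measure on $[0,1)$ under $\theta\mapsto -\cot 2\pi\theta$ should have density $\frac{1}{\pi(1+t^2)}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# theta uniform on [0,1), pushed forward by k_theta . infinity = -cot(2 pi theta).
theta = rng.random(2_000_000)
with np.errstate(divide='ignore'):
    x = -1.0 / np.tan(2 * np.pi * theta)   # -cot(2 pi theta)

a, b = -1.0, 2.0
empirical = np.mean((x > a) & (x < b))
predicted = (np.arctan(b) - np.arctan(a)) / np.pi   # (1/pi) int_a^b dt/(1+t^2)
assert abs(empirical - predicted) < 1e-2
```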
\section{Basic facts} \label{sec_ba}
For $s=(s_1,\ldots, s_n)\in\mathbb{R}^n$, define
$$
a(s)=\hbox{diag}(e^{s_1},\ldots, e^{s_n}).
$$
For $t=(t_{ij}:1\le i<j\le n)$, $t_{ij}\in\mathbb{R}$, denote by $n(t)$ the unipotent upper triangular matrix with
entries $t_{ij}$ above the main diagonal.
We use the following notations:
\begin{eqnarray}
G&=&\hbox{\rm SL}(n,\mathbb{R}),\nonumber\\
K&=&\hbox{SO}(n),\nonumber\\
A^o&=&\{a(s)|\;s\in\mathbb{R}^n,\;\sum_i s_i=0\},\nonumber\\
N&=&\{n(t)|\; t_{ij}\in\mathbb{R},1\le i<j\le n\},\nonumber\\
B^o&=&A^oN=NA^o. \nonumber
\end{eqnarray}
For $s\in\mathbb{R}^n$, $\sum_i s_i=0$, denote $\alpha_{ij}(s)=s_i-s_j$, where $i,j=1,\ldots, n$,
$i<j$. These are the positive roots of the Lie algebra of $G$. Note that
\begin{equation} \label{eq_adj}
\hbox{Ad}_{a(s)}n\left(\{t_{ij}\}\right)=a(s)n\left(\{t_{ij}\}\right)a(s)^{-1}=n(\{e^{\alpha_{ij}(s)}t_{ij}\}).
\end{equation}
Let
\begin{equation} \label{eq_varrho}
\delta(s)=\frac{1}{2}\sum_{i<j} \alpha_{ij}(s)=\sum_{1\le k\le n} (n-k)s_k.
\end{equation}
For $C\in \mathbb{R}$, define
$$
A^C=\{a(s)\in A^o|\;s_i>C,i=1,\ldots, n-1\}.
$$
Also put $B^C=A^CN$.
Let $dk$ be the normalized Haar measure on $K$.
A Haar measure on $N$ is given by $dn=dt=\prod_{i<j}dt_{ij}$.
A Haar measure on $A^o$ is $da=ds=ds_1\ldots ds_{n-1}$.
The product map $A^o\times N\rightarrow B^o$ is a diffeomorphism.
The image of the product measure under this map is a left Haar measure on $B^o$.
Denote this measure by $\lambda$. Then a right Haar measure $\varrho$ on $B^o$ can be defined by
\begin{equation} \label{eq_lambda}
\varrho (f)=\int_{B^o} f(b^{-1})\, d\lambda(b)=\int_{A^o\times N} f(a(s)n(t))e^{2\delta(s)}dsdt,\quad f\in C_c(B^o).
\end{equation}
The map corresponding to the Iwasawa decomposition
\begin{equation} \label{eq_iwasawa}
(k,a,n)\mapsto kan: K\times A^o\times N\rightarrow G
\end{equation}
is a diffeomorphism. One can define a Haar measure $\mu$ on $G$
in terms of this decomposition:
\begin{equation} \label{eq_iwasawa2}
\int_G fd\mu=\int_{K\times A^o\times N}f(ka(s)n(t))e^{2\delta(s)}dkdsdt=\int_{K\times B^o} f(kb)dkd\varrho(b)\end{equation}
for $f\in C_c(G)$.
For a lattice $\Gamma$ in $G$, there exists
a finite measure $\bar \mu_\Gamma$ on $\Gamma\backslash G$ such that
\begin{equation} \label{eq_mubar}
\int_G fd\mu=\int_{\Gamma\backslash G}\sum_{\gamma\in\Gamma} f(\gamma g)d\bar\mu_\Gamma(g),\quad f\in C_c(G).
\end{equation}
Let $\beta$ be an automorphism of $G$. Then $\beta(\Gamma)$ is a lattice too.
Moreover, the following lemma holds.
\begin{lem}\label{l_auto}
Define a map
$$
\bar\beta:\Gamma\backslash G\rightarrow\beta(\Gamma)\backslash G:\Gamma g\mapsto \beta(\Gamma)\beta(g).
$$
Then $\bar\beta(\bar\mu_\Gamma)=\bar\mu_{\beta(\Gamma)}$.
\end{lem}
\begin{proof}[Proof.]
Since the automorphism group of $G$ is a finite extension of the group of inner automorphisms,
and $G$ is unimodular, it follows that the measure $\mu$ is $\beta$-invariant.
Every $\tilde{f}\in C_c(\beta(\Gamma)\backslash G)$ can be represented as
$\sum_{\gamma\in\beta(\Gamma)} f(\gamma g)$ for some $f\in C_c(G)$ (see \cite[Ch.~1]{rag}).
Then
\begin{eqnarray*}
\bar\beta(\bar\mu_\Gamma)(\tilde{f})&=&
\int_{\Gamma\backslash G}\sum_{\gamma\in\beta(\Gamma)} f(\gamma \beta(g))d\bar\mu_\Gamma(g)
=\int_G f(\beta(g))d\mu(g)\\
&=&\int_G f(g)d\mu(g)=\bar\mu_\Gamma(\tilde{f}).
\end{eqnarray*}
\end{proof}\medbreak
For a subset $D\subseteq G$ and $T>0$, put
$$
D_T=\{d\in D: \|d\|<T\}.
$$
Note that
\begin{equation} \label{eq_BT}
B_T^o=\left\{a(s)n(t): \sum_{1\le i\le n} e^{2s_i}+\sum_{1\le i<j\le n} e^{2s_i}t_{ij}^2<T^2 \right\}.
\end{equation}
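The description (\ref{eq_BT}) of $B^o_T$ amounts to the identity $\|a(s)n(t)\|^{2}=\sum_i e^{2s_i}+\sum_{i<j}e^{2s_i}t_{ij}^{2}$, which can be checked directly (an illustrative sketch for $n=3$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

s = rng.standard_normal(n)
s[-1] = -s[:-1].sum()            # enforce sum s_i = 0, so a(s) lies in A^o
a = np.diag(np.exp(s))

nt = np.eye(n)
t = {}
for i in range(n):
    for j in range(i + 1, n):
        t[(i, j)] = rng.standard_normal()
        nt[i, j] = t[(i, j)]

b = a @ nt                        # the element a(s)n(t) of B^o

lhs = (b ** 2).sum()              # ||a(s)n(t)||^2 in the norm (eq_norm0)
rhs = np.exp(2 * s).sum() + sum(np.exp(2 * s[i]) * t[(i, j)] ** 2
                                for (i, j) in t)
assert np.isclose(lhs, rhs)
```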
For $s\in\mathbb{R}^n$, define
\begin{equation} \label{eq_Ns}
N(s)=\sum_i e^{2s_i}.
\end{equation}
\begin{lem} \label{lem_BTC}
For $C\in\mathbb{R}$,
$$
\varrho(B_T^C)=c_n\int_{A^C_T} \Big(T^2-N(s)\Big)^{\frac{n(n-1)}{4}}\hbox{\rm exp}\left(\sum_{k} (n-k)s_k\right)ds,
$$
where $c_n=\pi^{n(n-1)/4}/\Gamma(1+n(n-1)/4)$.
\end{lem}
\begin{proof}[Proof.]
Use formulas (\ref{eq_lambda}), (\ref{eq_varrho}), (\ref{eq_BT}), and make change of
variables $t_{ij}\rightarrow e^{-s_i}t_{ij}$. Then the above formula follows from the fact
that the volume of the unit ball in $\mathbb{R}^m$ is $\pi^{m/2}/\Gamma(1+m/2)$.
\end{proof}\medbreak
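The constant $c_n$ in Lemma \ref{lem_BTC} is exactly the volume of the unit ball in $\mathbb{R}^{m}$ with $m=n(n-1)/2$; a quick numerical check of the volume formula (illustrative only):

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def ball_volume(m):
    # Volume of the unit ball in R^m, as used in Lemma (lem_BTC).
    return math.pi**(m / 2) / math.gamma(1 + m / 2)

# Exact low-dimensional values: length 2, area pi, volume 4*pi/3.
assert math.isclose(ball_volume(1), 2.0)
assert math.isclose(ball_volume(2), math.pi)
assert math.isclose(ball_volume(3), 4 * math.pi / 3)

# Monte Carlo check for m = n(n-1)/2 with n = 3, i.e. m = 3.
m = 3
pts = rng.uniform(-1, 1, size=(1_000_000, m))
mc = (np.linalg.norm(pts, axis=1) < 1).mean() * 2**m
assert abs(mc - ball_volume(m)) < 0.02
```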
It follows from Lemma \ref{lem_BTC} that $\varrho (B_T^o)=O\left(T^{(n^2-n)}\right)$ as $T\rightarrow\infty$.
In fact, a more precise statement is true:
\begin{lem}
\begin{equation} \label{eq_BTasy}
\varrho(B_T^o)\sim \gamma_n T^{(n^2-n)}\quad\hbox{as}\quad T\rightarrow\infty,
\end{equation}
where
\begin{equation} \label{eq_gamma}
\gamma_n=\frac{\pi^{n(n-1)/4}}{2^{n-1}\Gamma\left(\frac{n^2-n+2}{2}\right)}\prod_{k=1}^{n-1} \Gamma\left(\frac{n-k}{2}\right).
\end{equation}
\end{lem}
\begin{proof}[Proof.]
Since the norm (\ref{eq_norm0}) is $K$-invariant, $G_T=KB^o_T$. By (\ref{eq_iwasawa2}), $\mu (G_T)=\varrho(B^o_T)$.
The asymptotics of $\mu (G_T)$ as $T\rightarrow\infty$ was computed in \cite[Appendix~1]{drs}.
\end{proof}\medbreak
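The value $\gamma_2=\pi/2$, used in the proof of Corollary \ref{th_distr_sl2}, follows directly from (\ref{eq_gamma}): $\gamma_2=\pi^{1/2}\cdot\Gamma(1/2)/(2\,\Gamma(2))=\pi/2$. A quick numerical check of the formula:

```python
import math

def gamma_n(n):
    # The constant gamma_n of (eq_gamma).
    num = math.pi**(n * (n - 1) / 4)
    den = 2**(n - 1) * math.gamma((n * n - n + 2) / 2)
    prod = 1.0
    for k in range(1, n):
        prod *= math.gamma((n - k) / 2)
    return num / den * prod

# gamma_2 = pi/2, as used in Corollary (th_distr_sl2).
assert math.isclose(gamma_n(2), math.pi / 2)
```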
\begin{lem} \label{lem_BTC2}
For any $C\in\mathbb{R}$, $\varrho(B_T^C)\sim\varrho(B_T^o)$ as $T\rightarrow\infty$.
\end{lem}
\begin{proof}[Proof.]
For $i_0=1,\ldots, n-1$, put
$$
A_T^{i_0}=\{a(s)\in A_T^o: s_{i_0}\le C\}\;\;\hbox{and}\;\;B_T^{i_0}=\{a(s)n(t)\in B_T^o: s_{i_0}\le C\}.
$$
We claim that $\varrho (B_T^{i_0})=o(\varrho (B_T^o))$ as $T\rightarrow\infty$.
It follows from (\ref{eq_BT}) that if $a(s)\in A_T^o$, then
$s_i<\log T$ for every $i=1,\ldots, n$.
Then by Lemma \ref{lem_BTC},
\begin{eqnarray}
\nonumber \varrho(B_T^{i_0})&\le& c_nT^{\frac{n(n-1)}{2}}\int_{A_T^{i_0}} \hbox{\rm exp}\left(\sum_{k} (n-k)s_k\right)ds \\
\nonumber &\ll& T^{\frac{n(n-1)}{2}} \prod_{{k<n},{k\ne i_0}} \int_{-\infty}^{\log T} e^{(n-k)s_k}ds_k\ll T^{n(n-1)-(n-i_0)}.
\end{eqnarray}
(Here and later on $A\ll B$ means $A < c\cdot B$ for some absolute constant $c>0$.)
Now the claim follows from (\ref{eq_BTasy}).
Since $B_T^o-B_T^C=\cup_{i_0<n} B_T^{i_0}$, $\varrho (B_T^o-B_T^C)=o(\varrho (B_T^o))$
as $T\rightarrow\infty$. Therefore, $\varrho(B_T^C)\sim\varrho(B_T^o)$ as $T\rightarrow\infty$.
\end{proof}\medbreak
Next, we show that Theorem \ref{th_ergodic} fails for a left Haar measure on $B^o$.
\begin{pro}\label{l_contr}
Let $\Gamma$ be a lattice in $G=\hbox{\rm SL}(2,\mathbb{R})$, and $y\in\Gamma\backslash G$ be such that
the orbit $yN$ is periodic. Then for every $\tilde{f}\in C_c(\Gamma\backslash G)$,
$$
\frac{1}{\lambda(B^o_T)}\int_{B^o_T} \tilde{f}(yb^{-1})d\lambda(b)\rightarrow 0 \quad\hbox{as}\quad T\rightarrow\infty.
$$
\end{pro}
\begin{proof}[Proof.]
For $C\in \mathbb{R}$, put $\hat{A}^C=\{a(s)\in A^o:s_1<C\}$ and $\hat{B}^C=\hat{A}^C N$.
As in Lemma \ref{lem_BTC2}, one can show that $\lambda (B^o_T-\hat{B}^C_T)=o(\lambda (B^o_T))$
as $T\rightarrow\infty$. Therefore, for every $C\in\mathbb{R}$,
$$
\frac{1}{\lambda(B^o_T)}\int_{B^o_T-\hat{B}^C_T} \tilde{f}(yb^{-1})d\lambda(b)\rightarrow 0 \quad\hbox{as}\quad T\rightarrow\infty.
$$
On the other hand, according to \cite[Lemma~14.2]{st}, $yN^{-1}a(s)^{-1}\rightarrow\infty$ as
$s_1\rightarrow -\infty$. Thus, there exists $C\in\mathbb{R}$ such that
$y(\hat{B}^C_T)^{-1}\cap \hbox{supp} (\tilde{f})=\emptyset$. Then
$$
\int_{\hat{B}^C_T} \tilde{f}(yb^{-1})d\lambda(b)= 0.
$$
This proves the proposition.
\end{proof}\medbreak
\section{Proof of Theorem \ref{th_asympt}} \label{sec_th1}
{\bf Claim}. {\it It is enough to prove Theorem \ref{th_asympt} for $X=G/B^o$}.
\begin{proof}[Proof.]
Suppose that the theorem is proved for $X=G/B^o$.
At first, we consider a special case:
\begin{equation}\label{eq_spec}
X=G/(B^o)^{g_0}\quad\hbox{for}\;\;\hbox{some}\;\; g_0\in G,
\end{equation}
where $(B^o)^{g_0}=g_0^{-1}B^og_0$.
By the Iwasawa decomposition (\ref{eq_iwasawa}), $(B^o)^{g_0}=(B^o)^{k_0}$ for some
$k_0\in K$.
The normalized $K$-invariant measure $m$ on $G/(B^o)^{k_0}$ is defined as
$$
m(C)=\int_{k: k(B^o)^{k_0}\in C} dk\quad\hbox{for}\;\; \hbox{Borel}\;\hbox{set}\;\; C\subseteq G/(B^o)^{k_0}.
$$
Similarly, one defines the normalized $K$-invariant measure $m^*$ on $G/B^o$.
Consider a map
$$
\beta:G/B^o\rightarrow G/(B^o)^{k_0}: gB^o\mapsto g^{k_0} (B^o)^{k_0}.
$$
Clearly, $\beta$ is a diffeomorphism.
Using that $K$ is unimodular, one proves that
\begin{equation}\label{eq_m}
m^*(\beta^{-1}(C))=m(C)\quad\hbox{for}\;\; \hbox{Borel}\;\hbox{set}\;\; C\subseteq G/(B^o)^{k_0}.
\end{equation}
Take
$$
\Omega\subseteq G/(B^o)^{k_0}\quad\hbox{and}\quad x_0=h_0(B^o)^{k_0}\in G/(B^o)^{k_0}
$$
such that $m(\partial\Omega)=0$. Set
$$
{\Omega}^*=\beta^{-1}(\Omega)\subseteq G/B^o\quad\hbox{and}\quad {x}^*_0=h_0^{k_0^{-1}}B^o\in G/B^o.
$$
By (\ref{eq_m}), $m^*(\partial{\Omega^*})=0$ too. For $\gamma\in\Gamma$,
$\gamma\cdot x_0\in\Omega$ iff $\gamma^{k_0^{-1}}\cdot {x}^*_0\in {\Omega}^*$.
Note also that $\|\gamma^{k_0^{-1}}\|=\|\gamma\|$. Therefore,
$$
N_T(\Omega,x_0)=|\{\gamma\in \Gamma^{k_0^{-1}}: \|\gamma\|<T, \gamma\cdot {x}^*_0\in {\Omega}^*\}|
$$
Applying the assumption to the lattice $\Gamma^*=\Gamma^{k_0^{-1}}$, one gets
$$
N_T(\Omega, x_0)\sim \frac{\gamma_n}{\bar\mu^*(\Gamma^*\backslash G)} m^*(\Omega^*) T^{n(n-1)}\quad\hbox{as}\quad T\rightarrow\infty,
$$
where $\bar\mu^*$ is the measure on $\Gamma^*\backslash G$ defined in (\ref{eq_mubar}).
Now the special case (\ref{eq_spec}) follows
from Lemma \ref{l_auto} and (\ref{eq_m}).
Let us consider the general case.
Let $S$ be a closed subgroup of $G$ such that $S\supseteq H^o$, where $H$ is a real algebraic subgroup of
$G$, and $G/H$ is compact. Since $H$ has finitely many connected components, $G/H^o$
is compact too. Recall that the homogeneous space $G/H^o$ is
compact iff $H_\mathbb{C}$ contains a maximal connected $\mathbb{R}$-split solvable $\mathbb{R}$-subgroup of $G_\mathbb{C}$
(see, for example, \cite[Theorem 3.1]{pr}).
Since maximal connected $\mathbb{R}$-split solvable $\mathbb{R}$-subgroups of $G_\mathbb{C}$
are conjugate over $G_\mathbb{R}$ (see \cite{bt}, or Theorem 15.2.5 and Exercise 15.4.8 in \cite{spr}),
for some $g_0\in G$, $B^{g_0}\subseteq H$.
Hence, $(B^o)^{g_0}\subseteq H^o\subseteq S$.
Denote by $\pi$ the projection map $G/(B^o)^{g_0}\rightarrow G/S$.
Take
$$
\Omega\subseteq G/S\quad\hbox{and}\quad x_0\in G/S
$$
such that $m(\partial\Omega)=0$. Set
$$
{\Omega}^*=\pi^{-1}(\Omega)\subseteq G/(B^o)^{g_0}\quad\hbox{and}\quad {x}^*_0\in \pi^{-1}(x_0).
$$
Let $m^*$ be the $K$-invariant normalized measure on $G/(B^o)^{g_0}$. Then $m=\pi(m^*)$ is
the $K$-invariant normalized measure on $G/S$. It is easy to check that
$\pi(\partial\Omega^*)\subseteq \partial\Omega$. Hence,
$m^*(\partial\Omega^*)\le m^*(\pi^{-1}(\partial\Omega))=m(\partial\Omega)=0$.
Finally,
$$
N_T(\Omega,x_0)=N_T(\Omega^*,x_0^*)\mathop{\sim}_{T\rightarrow\infty}
\frac{\gamma_n}{\bar\mu(\Gamma\backslash G)} m^*(\Omega^*) T^{n(n-1)}
=\frac{\gamma_n}{\bar\mu(\Gamma\backslash G)} m(\Omega) T^{n(n-1)}.
$$
\end{proof}\medbreak
We need the following proposition that follows easily from Theorem \ref{th_ergodic}.
\begin{pro} \label{p_ergodic}
Let $f$ be the characteristic function of a relatively compact Borel subset $Z\subseteq G$
such that $\mu (\partial Z)=0$. Then for any $y\in \Gamma\backslash G$,
\begin{equation}\label{eq_char_last}
\frac{1}{\varrho(B^o_T)}\int_{B^o_T} \tilde{f}(yb^{-1})d\varrho(b)
\longrightarrow \frac{1}{\bar \mu(\Gamma\backslash G)}\int_{G} fd\mu\quad\hbox{as}\;\;T\rightarrow\infty,
\end{equation}
where $\tilde{f}(\Gamma g)=\sum_{\gamma\in \Gamma} f(\gamma g)\in C_c(\Gamma\backslash G)$.
\end{pro}
\begin{proof}[Proof.]
The argument is quite standard. One chooses functions $\phi_n,\psi_n\in C_c(G)$, $n\ge 1$,
such that $\phi_n\le f\le \psi_n$ and $\int_G (\psi_n-\phi_n)d\mu<\frac{1}{n}$.
By Theorem \ref{th_ergodic} and (\ref{eq_mubar}),
$$
\lim_{T\rightarrow\infty} \frac{1}{\varrho(B^o_T)}\int_{B^o_T} \tilde{\phi}_n(yb^{-1})d\varrho(b)
=\frac{1}{\bar \mu(\Gamma\backslash G)}\int_{\Gamma\backslash G} \tilde{\phi}_n d\bar\mu=
\frac{1}{\bar \mu(\Gamma\backslash G)}\int_{G} \phi_n d\mu,
$$
and
$$
\lim_{T\rightarrow\infty} \frac{1}{\varrho(B^o_T)}\int_{B^o_T} \tilde{\psi}_n(yb^{-1})d\varrho(b)
=\frac{1}{\bar \mu(\Gamma\backslash G)}\int_{G} \psi_n d\mu
$$
for every $n\ge 1$.
This implies (\ref{eq_char_last}).
\end{proof}\medbreak
The proof of Theorem \ref{th_asympt} should be compared with similar arguments in
\cite{drs}, \cite{em}, \cite{ems}, \cite{emm}
where other counting problems were also reduced to asymptotics of certain integrals.
\begin{proof}[Proof of Theorem \ref{th_asympt}.]
Let $\alpha: K\rightarrow G/B^o$ be a map defined by $\alpha(k)=kB^o$.
By (\ref{eq_iwasawa}), $\alpha$ is a diffeomorphism.
The measure $m$ is given by
\begin{equation} \label{eq_mOm}
m(C)=\int_{\alpha^{-1}(C)}dk\quad\hbox{for}\;\;\hbox{Borel}\;\;\hbox{set}\;\; C\subseteq G/B^o.
\end{equation}
Since $\alpha$ is surjective, $x_0=k_0^{-1}B^o$ for some $k_0\in K$.
It follows from the Iwasawa decomposition (\ref{eq_iwasawa}) that the product map
$$
K\times B^o\rightarrow G:(k,b)\mapsto kk_0^{-1}bk_0
$$
is a diffeomorphism. For $g\in G$, define $k_g\in K$ and $b_g\in B^o$
such that
$$
g=k_gk_0^{-1}b_gk_0.
$$
Since $G$ and $K$ are unimodular,
it follows from (\ref{eq_iwasawa2}) that for $f\in C_c(G)$,
$$
\int_G fd\mu= \int_{K\times B^o} f(kk_0^{-1}bk_0)dk d\varrho(b).
$$
Let $\phi$ be the characteristic function of $\tilde{\Omega}\stackrel{def}{=}\alpha^{-1}(\Omega)k_0\subseteq K$,
and $\psi_{\mathcal{O}}$ the characteristic function of an open bounded symmetric
neighborhood ${\mathcal{O}}$ of $1$ in $B^o$ with boundary of measure $0$ normalized so that
$\int_{B^o} \psi_{\mathcal{O}} d\varrho=1$.
Then $\int_{B^o} \psi_{\mathcal{O}} d\lambda=1$ too.
Note that for $g\in G$, $gx_0\in\Omega$ iff $k_g\in\tilde{\Omega}$.
Put $f_{\mathcal{O}}(g)=\phi(k_g)\psi_{\mathcal{O}}(b_g)$.
Let $\tilde{f}_{\mathcal{O}}(\Gamma g)=\sum_{\gamma\in\Gamma} f_{\mathcal{O}}(\gamma g)$.
Now Proposition \ref{p_ergodic} can be applied to $\tilde{f}_{\mathcal{O}}$:
\begin{eqnarray}
\nonumber \frac{1}{\varrho(B^o_T)}\int_{B^o_T} \tilde{f}_{\mathcal{O}} (\Gamma k_0^{-1} b^{-1} {k_0})d\varrho (b)
\mathop{\longrightarrow}_{T\rightarrow\infty} \frac{1}{\bar\mu(\Gamma\backslash G)}\int_{G} {f}_{\mathcal{O}} (gk_0)d\mu(g)\\
=\frac{1}{\bar\mu(\Gamma\backslash G)}\int_K \phi dk\cdot\int_{B^o} \psi_{\mathcal{O}} d\varrho=\frac{1}{\bar\mu(\Gamma\backslash G)}\int_{\tilde{\Omega}} dk=\frac{m(\Omega)}{\bar\mu(\Gamma\backslash G)}. \label{eq_bound0}
\end{eqnarray}
The last equality follows from (\ref{eq_mOm}).
Take $r>1$. There exists a bounded open symmetric neighborhood ${\mathcal{O}}$
of identity in $B^o$ (with boundary of measure $0$) such that for
any $b\in {\mathcal{O}}={\mathcal{O}}^{-1}$ and $x\in M(n,\mathbb{R})$,
\begin{equation} \label{eq_norm}
r^{-1}\|x\|\le \|b^{-1}x\|\le r\|x\|.
\end{equation}
Then for ${\mathcal{O}}$ as above,
\begin{eqnarray}
\nonumber \sum_{\gamma\in \Gamma} \int_{B^o_T} f_{\mathcal{O}} (\gamma k_0^{-1}b^{-1}{k_0})d\varrho(b)
=\sum_{\gamma\in \Gamma} \int_{(B^o_T)^{-1}} f_{\mathcal{O}} (k_\gamma k_0^{-1}b_\gamma b{k_0})d\lambda(b)\\
\nonumber =\sum_{\gamma\in \Gamma} \int_{b_\gamma(B^o_T)^{-1}} f_{\mathcal{O}} (k_\gamma k_0^{-1}b{k_0})d\lambda(b)=
\sum_{\gamma\in \Gamma} \int_{\|b^{-1}b_\gamma \|<T} \phi (k_\gamma)\psi_{\mathcal{O}}(b)d\lambda(b)\\
\nonumber =\sum_{\gamma: k_\gamma\in\tilde{\Omega}} \int_{\|b^{-1}b_\gamma \|<T} \psi_{\mathcal{O}}(b)d\lambda(b)
=\sum_{\gamma: \gamma\cdot x_0\in \Omega} \int_{\|b^{-1}b_\gamma \|<T} \psi_{\mathcal{O}}(b)d\lambda(b).
\end{eqnarray}
The integral
$$
I_\gamma\stackrel{def}{=}\int_{\|b^{-1}b_\gamma \|<T} \psi_{\mathcal{O}}(b)d\lambda(b)
$$
is not greater than $1$. By (\ref{eq_norm}), $I_\gamma=0$ for $\gamma\in\Gamma$ such that
$\|\gamma\|=\|b_\gamma\|\ge rT$. Therefore,
\begin{equation} \label{eq_bound1}
N_{rT}(\Omega, x_0)\ge\sum_{\gamma\in \Gamma} \int_{B^o_T} f_{\mathcal{O}} (\gamma k_0^{-1}b^{-1}{k_0})d\varrho(b).
\end{equation}
By (\ref{eq_norm}), $I_\gamma=1$ for $\gamma\in \Gamma$ such that $\|\gamma\|=\|b_\gamma\|< r^{-1}T$.
Thus,
\begin{equation} \label{eq_bound2}
N_{r^{-1}T}(\Omega,x_0)\le\sum_{\gamma\in \Gamma} \int_{B^o_T} f_{\mathcal{O}} (\gamma k_0^{-1}b^{-1}{k_0})d\varrho(b).
\end{equation}
It follows from (\ref{eq_BTasy}) that $\varrho(B_{r^{-1}T}^o)\sim \gamma_n r^{n-n^2} T^{n^2-n}$
as $T\rightarrow\infty$.
Then using (\ref{eq_bound1}) and (\ref{eq_bound0}), we get
\begin{eqnarray}
\nonumber \liminf_{T\rightarrow\infty} \frac{N_T(\Omega,x_0)}{T^{n^2-n}}&\ge&
\liminf_{T\rightarrow\infty} \frac{\gamma_n}{r^{n^2-n}\varrho(B^o_{r^{-1}T})}\int_{B^o_{r^{-1}T}} \tilde{f}_{\mathcal{O}} (\Gamma k_0^{-1}b^{-1}{k_0})d\varrho(b)\\
\nonumber &=&\frac{\gamma_n m(\Omega)}{r^{n^2-n}\bar\mu(\Gamma\backslash G)}.
\end{eqnarray}
This inequality holds for any $r>1$. Hence,
$$
\liminf_{T\rightarrow\infty} \frac{N_T(\Omega,x_0)}{T^{n^2-n}}\ge \frac{\gamma_n m(\Omega)}{\bar\mu(\Gamma\backslash G)}.
$$
Similarly, using (\ref{eq_bound2}) and (\ref{eq_bound0}), one can prove that
$$
\limsup_{T\rightarrow\infty} \frac{N_T(\Omega,x_0)}{T^{n^2-n}}\le \frac{\gamma_n m(\Omega)}{\bar\mu(\Gamma\backslash G)}.
$$
This finishes the proof of Theorem \ref{th_asympt}.
\end{proof}\medbreak
\section{Behavior of unipotent flows} \label{sec_uni}
In this section we review some deep results on
equidistribution of unipotent flows, which are crucial for the proof of Theorem \ref{th_ergodic}.
Note that there are two different approaches available: Ratner \cite{r2} and Dani, Margulis \cite{dm93}.
Both of these methods rely on Ratner's measure rigidity \cite{r1}. We follow the method of Dani and Margulis.
The results below were proved in \cite{dm91,dm93} for the case of one-dimensional flows
and extended to higher dimensional flows and even polynomial
trajectories in \cite{sh94,ems97,sh96}. See \cite[\S 19]{st} and \cite{kss01}
for a more detailed exposition.
Appropriate adjustments are made for
the right $G$-action on $\Gamma\backslash G$ instead of the left $G$-action on $G/\Gamma$.
\par{\sc Notations:}
Let $G$ be a connected semisimple Lie group
without compact factors, and $\Gamma$ a lattice in $G$.
Let $\mathfrak g$ be the Lie algebra of $G$. For positive integers $d$ and $n$, denote by
${\mathcal{P}}_{d,n}(G)$ the set of functions $q:\mathbb{R}^n\rightarrow G$ such that
for any $a,b\in\mathbb{R}^n$, the map
$$
t\in\mathbb{R}\mapsto \hbox{Ad}(q(at+b))\in\mathfrak{g}
$$
is a polynomial of degree at most $d$ with respect to some basis of $\mathfrak{g}$.
Let $V_G=\oplus_{i=1}^{\dim \mathfrak{g}}\wedge^i\mathfrak{g}$. There is a natural
action of $G$ on $V_G$ induced from the adjoint representation. Fix a norm on $V_G$.
For a Lie subgroup $H$ of $G$ with Lie algebra $\mathfrak{h}$, take a unit
vector $p_H\in \wedge^{\dim \mathfrak{h}}\mathfrak{h}$.
The following theorem allows us to estimate divergence of polynomial trajectories.
For its proof, see \cite[Theorems 2.1--2.2]{sh96}.
\begin{thm} \label{th_help1}
There exist closed subgroups $U_i$ ($i=1,\ldots, r$) such that each $U_i$
is the unipotent radical of a maximal parabolic subgroup, $\Gamma U_i$ is
compact in $\Gamma\backslash G$, and for any
$d,n\in\mathbb{N}$, $\varepsilon,\delta>0$, there exists a compact set $C\subseteq \Gamma\backslash G$
such that for any $q\in {\mathcal{P}}_{d,n}(G)$ and a bounded open convex set $D\subseteq \mathbb{R}^n$,
one of the following holds:
\begin{enumerate}
\item[1.] There exist $\gamma\in\Gamma$ and $i=1,\ldots, r$ such that
$\sup_{t\in D} \|q(t)^{-1}\gamma\cdot p_{U_i}\|\le\delta.$
\item[2.] $\ell(t\in D: \Gamma q(t)\notin C)< \varepsilon \ell(D)$, where $\ell$
is the Lebesgue measure on $\mathbb{R}^n$.
\end{enumerate}
\end{thm}
Denote by ${\mathcal{H}}_\Gamma$ the family of all proper closed connected subgroups $H$ of $G$
such that $\Gamma\cap H$ is a lattice in $H$, and $\hbox{Ad}(H\cap \Gamma)$ is Zariski-dense in $\hbox{Ad}(H)$.
\begin{thm} \label{th_help2}
The set ${\mathcal{H}}_\Gamma$ is countable. For any $H\in {\mathcal{H}}_\Gamma$,
$\Gamma\cdot p_H$ is discrete.
\end{thm}
For proofs, see \cite[Theorem 1.1]{r1} and \cite[Theorems 2.1 and 3.4]{dm93}.
Fix a subgroup $U$ generated by $1$-parameter unipotent subgroups.
For a closed subgroup $H$ of $G$, denote
$$
X(H,U)=\{g\in G: gU\subseteq Hg\}.
$$
Define a singular set
\begin{equation} \label{eq_sing}
Y=\bigcup_{H\in {\mathcal{H}}_\Gamma} \Gamma X(H,U)\subseteq \Gamma\backslash G.
\end{equation}
It follows from Dani's generalization of Borel density theorem and
Ratner's topological rigidity that $y\in Y$ iff $yU$ is not dense in $\Gamma\backslash G$
(see \cite[Lemma 19.4]{st}).
One needs to estimate the behavior of polynomial trajectories near the singular set $Y$.
The following result can be deduced from \cite[Proposition 5.4]{sh94}.
It is formulated in \cite{sh96} and \cite{kss01}.
Note that it is analogous to Theorem \ref{th_help1} with the point at infinity being
replaced by the singular set.
\begin{thm} \label{th_help3}
Let $d,n\in\mathbb{N}$, $\varepsilon>0$, $H\in {\mathcal{H}}_\Gamma$.
For any compact set $C\subseteq \Gamma X(H,U)$, there exists a compact set
$F\subseteq V_G$ such that for any neighborhood $\Phi$ of $F$ in $V_G$, there exists
a neighborhood $\Psi$ of $C$ in $\Gamma\backslash G$
such that for any $q\in {\mathcal{P}}_{d,n}(G)$ and a bounded open convex set $D\subseteq \mathbb{R}^n$,
one of the following holds:
\begin{enumerate}
\item[1.] There exists $\gamma\in\Gamma$ such that $q(D)^{-1}\gamma\cdot p_{H}\subseteq \Phi$.
\item[2.] $\ell(t\in D: \Gamma q(t)\in \Psi)< \varepsilon \ell(D)$, where $\ell$
is the Lebesgue measure on $\mathbb{R}^n$.
\end{enumerate}
\end{thm}
\section{Representations of $\hbox{\rm SL}(n,\mathbb{R})$}\label{sec_rep}
In order to be able to use the results from the previous section,
we collect here some information about representations of $\hbox{\rm SL}(n,\mathbb{R})$.
The next lemma is essentially Lemma 5.1 from \cite{sh96}.
We present its proof because more precise information about
dependence on $\beta$ in the inequality (\ref{eq_shineq}) is needed.
\begin{lem} \label{lem_shah}
Let $V$ be a finite dimensional vector space with a norm $\|\cdot\|$,
$\mathfrak n$ be a nilpotent Lie subalgebra of $\hbox{\rm End}(V)$ with a
basis $\{b_i:i=1,\ldots, m\}$, and $N=\exp(\mathfrak{n})\subseteq \hbox{\rm GL}(V)$
be the Lie group with Lie algebra $\mathfrak n$.
Define a map $\Theta:\mathbb{R}^m\rightarrow N$:
$$
\Theta(t_1,\ldots, t_m)=\exp\left(\sum_{i=1}^m t_ib_i\right),\quad (t_1,\ldots, t_m)\in\mathbb{R}^m.
$$
For $\beta>0$, put $D(\beta)=\Theta\left([0,\beta]\times\cdots \times[0,\beta]\right)$. Let
$$
W=\{v\in V: \mathfrak{n}\cdot v=0\}.
$$
Denote by $\hbox{\rm pr}_W$ the orthogonal projection on W with respect to some
scalar product on $V$.
Then there exists a constant $c_0>0$ such that for any $\beta\in (0,1)$ and $v\in V$,
\begin{equation} \label{eq_shineq}
\sup_{n\in D(\beta)} \|\hbox{\rm pr}_W(n\cdot v)\|\ge c_0\beta^d\|v\|,
\end{equation}
where $d$ is the degree of the polynomial map $\Theta$.
\end{lem}
\begin{proof}[Proof.]
Let
$$
{\mathcal{I}}=\left\{(i_1,\ldots, i_m)\in\mathbb{Z}^m: i_k\ge 0,\sum_k i_k\le d\right\}.
$$
For $t\in\mathbb{R}^m$ and $I=(i_1,\ldots, i_m)\in {\mathcal{I}}$, denote $t^I=\prod_k t_k^{i_k}$,
and $|I|=\sum_k i_k$.
One can write $\Theta(t)=\sum_{I\in {\mathcal{I}}} t^I B_I$ for some $B_I\in \hbox{End}(V)$.
Then
$$
\hbox{pr}_W(\Theta(t)v)=\sum_{I\in {\mathcal{I}}} t^I\hbox{pr}_W(B_Iv).
$$
Consider a map $T:V\rightarrow\oplus_{I\in {\mathcal{I}}} W$ defined by
$$
Tv=\sum_{I\in {\mathcal{I}}} \hbox{pr}_W(B_Iv),
$$
and a map $A_t:\oplus_{I\in {\mathcal{I}}} W\rightarrow W$ for $t\in\mathbb{R}^m$ defined by
$$
A_t\left(\bigoplus_{I\in {\mathcal{I}}} w_I\right)=\sum_{I\in {\mathcal{I}}} t^Iw_I.
$$
For ${I\in {\mathcal{I}}}$, take fixed $s_I\in\mathbb{R}^m$ such that $0<s_{I,k}<1$ and $s_{I_1,k}\ne s_{I_2,k}$
for $I_1\ne I_2$, and put $t_I=\beta s_I$. Let
$$
A=\bigoplus_{I\in {\mathcal{I}}} A_{t_I}:\bigoplus_{I\in {\mathcal{I}}} W\rightarrow\bigoplus_{I\in {\mathcal{I}}} W.
$$
The map $A$ has a matrix form $\left(t_I^J\right)_{I,J\in {\mathcal{I}}}$. This matrix is a Kronecker
product of Vandermonde matrices which implies that $A$ is invertible. Using elementary row and column
operations, one can write
\begin{equation} \label{eq_norm1}
\left(t_I^J\right)_{I,J\in {\mathcal{I}}}=B\cdot\hbox{diag}\left(\beta^{|I|}:I\in {\mathcal{I}}\right)\cdot C
\end{equation}
for some $B,C\in\hbox{GL}\left(\oplus_{I\in {\mathcal{I}}} W\right)$, which are independent of $\beta$.
It is convenient to use a norm on $\oplus_{I\in {\mathcal{I}}} W$ defined by
$$
\left\|\bigoplus_{I\in {\mathcal{I}}} w_I\right\|=\max_{I\in {\mathcal{I}}} \|w_I\|,\quad w_I\in W.
$$
Then by (\ref{eq_norm1}), for $w\in\oplus_{I\in {\mathcal{I}}} W$,
\begin{equation} \label{eq_norm2}
\|Aw\|\ge \|B^{-1}\|^{-1}\cdot\beta^d\cdot\|C^{-1}\|^{-1}\cdot\|w\|.
\end{equation}
It follows from the Lie-Kolchin theorem that $T$ is injective (see \cite[Lemma 5.1]{sh96}). Therefore, there
exists a constant $c_1>0$ such that $\|Tv\|\ge c_1\|v\|$ for $v\in V$.
Then using (\ref{eq_norm2}), we get
$$
\sup_{n\in D(\beta)} \|\hbox{\rm pr}_W(n\cdot v)\|=\sup_{t:0<t_i<\beta} \|A_tTv\|\ge \|ATv\|\ge c_0\beta^d\|v\|,
$$
where $c_0=\|B^{-1}\|^{-1}\cdot\|C^{-1}\|^{-1}\cdot c_1$.
\end{proof}\medbreak
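The key step above, that the matrix $\left(t_I^J\right)$ is a Kronecker product of Vandermonde matrices and hence invertible, can be checked numerically in a small case. The following sketch (assuming NumPy; the nodes and dimensions are illustrative, not taken from the paper) builds the two-variable analogue on a grid of distinct nodes:

```python
import numpy as np

# Two-variable analogue of the matrix (t_I^J): rows indexed by node
# pairs, columns by exponent pairs, on a grid of distinct nodes in (0,1).
d = 2
x = np.array([0.2, 0.5, 0.9])
y = np.array([0.1, 0.4, 0.8])

# One-variable Vandermonde matrices V[i, j] = node_i ** j.
Vx = np.vander(x, N=d + 1, increasing=True)
Vy = np.vander(y, N=d + 1, increasing=True)

# Entry x_{i1}^{j1} * y_{i2}^{j2}, rows (i1, i2), columns (j1, j2).
M = np.array([[x[i1] ** j1 * y[i2] ** j2
               for j1 in range(d + 1) for j2 in range(d + 1)]
              for i1 in range(d + 1) for i2 in range(d + 1)])

# M factors as a Kronecker product of Vandermonde matrices ...
assert np.allclose(M, np.kron(Vx, Vy))
# ... so it is invertible, each factor having distinct nodes.
assert abs(np.linalg.det(M)) > 1e-12
```

Replacing the nodes $x$ by $\beta x$ scales the columns with exponent sum $|J|$ by $\beta^{|J|}$, which is exactly the diagonal factor in (\ref{eq_norm1}).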
We will need the following elementary observation:
\begin{lem} \label{lem_polynom}
Let $F:\mathbb{R}^m\rightarrow\mathbb{R}^m$ be a $C^1$ bijection such that $F(0)=0$,
and $F^{-1}$ is $C^1$ too.
Fix a norm on $\mathbb{R}^m$ and denote by $D(r)$ a ball of radius $r$ centered at the origin.
Then there are $c_1,c_2>0$ such that
$$
D(c_1r)\subseteq F(D(r))\subseteq D(c_2r)
$$
for every $r\in (0,1)$.
\end{lem}
Let $\mathfrak g$ be the Lie algebra of $G=\hbox{\rm SL}(n,\mathbb{R})$,
and $\mathfrak{g}_\mathbb{C}=\mathfrak{g}\otimes \mathbb{C}$.
Recall the root space decomposition of $\mathfrak{g}_\mathbb{C}$:
$$
\mathfrak{g}_\mathbb{C}=\mathfrak{g}_0\oplus \sum_{i\ne j} \mathfrak{g}_{ij},
$$
where $\mathfrak{g}_0$ is the subalgebra of diagonal matrices of $\mathfrak{g}_\mathbb{C}$,
and $\mathfrak{g}_{ij}=\mathbb{C}E_{ij}$ ($E_{ij}$ is the matrix with $1$
in position $(i,j)$ and $0$'s elsewhere).
Introduce {\it fundamental weights} of $\mathfrak{g}_\mathbb{C}$:
\begin{equation} \label{eq_fw}
\omega_i (s)=s_1+\cdots +s_i,\quad 1\le i\le n-1,
\end{equation}
where $s\in \mathbb{C}^n$ and $\sum_i s_i=0$.
{\it Dominant weights} are defined as linear combinations with non-negative integer coefficients of
the fundamental weights $\omega_i$, $1\le i\le n-1$.
A {\it highest weight} of a representation of $\mathfrak{g}_\mathbb{C}$ is a weight
that is maximal with respect to the ordering on the dual space of $\mathfrak{g}_0$.
Recall that irreducible representations of $\mathfrak{g}_\mathbb{C}$ are classified by their highest weights
(see, for example, \cite[Ch.~5]{gw}).
The highest weights are precisely the dominant weights
defined above.
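As a concrete illustration (a hypothetical numerical sketch, assuming NumPy; the values of $n$, $s$, and the coefficients are illustrative), the fundamental weights of (\ref{eq_fw}) and a dominant weight can be evaluated at a point $s$ with $\sum_i s_i=0$:

```python
import numpy as np

n = 4
# A point of the diagonal Cartan subalgebra: s in R^n with sum(s) = 0.
s = np.array([3.0, 1.0, -1.0, -3.0])
assert abs(s.sum()) < 1e-12

# Fundamental weights omega_i(s) = s_1 + ... + s_i, i = 1..n-1.
omega = np.cumsum(s)[:-1]          # [s1, s1+s2, s1+s2+s3]
assert np.allclose(omega, [3.0, 4.0, 3.0])

# A dominant weight: a non-negative integer combination of the omega_i.
c = np.array([1, 0, 2])            # coefficients c_i >= 0
lam = c @ omega                    # lambda(s) = 1*3 + 0*4 + 2*3
assert lam == 9.0
```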
\begin{lem} \label{lem_diverge0}
Let $\pi$ be a representation of $G$ on a finite dimensional complex vector space $V$.
Let $x\in V-\{0\}$ be such that $\pi(N)x=x$.
Then $x$ is a sum of weight vectors with dominant weights.
Moreover, if $V$ does not contain non-zero $G$-fixed vectors, every weight
in this sum is not zero.
\end{lem}
\begin{proof}[Proof.]
Consider a representation $\tilde \pi$ of $\mathfrak{g}_\mathbb{C}$ on $V$ induced by the representation $\pi$.
Since this representation is completely reducible, it is enough to
consider the case when it is irreducible.
We claim that in this case, $x$ is a vector with the highest weight. Write $x=\sum_k x_k$ where each $x_k\in V$
is a weight vector with a weight $\lambda_k$. We may assume that $\lambda_k\ne \lambda_l$
for $k\ne l$. Since $\pi(N)x=x$, $\tilde\pi(E_{ij})x=0$ for $i<j$. Thus,
$\sum_k \tilde\pi(E_{ij})x_k=0$. Since $\tilde\pi(E_{ij})x_k$ is either $0$ or
a weight vector with the weight $\lambda_k+\alpha_{ij}$, the non-zero terms in the sum
are linearly independent. Hence, $\tilde\pi(E_{ij})x_k=0$ for $i<j$.
Suppose that $\lambda_k$ is not the highest weight. Note that $\tilde\pi(\mathfrak{g}_0)x_k=\mathbb{C}x_k$,
and $\tilde\pi(E_{ji})x_k$ has weight $\lambda_k-\alpha_{ij}<\lambda_k$ for $i<j$.
By the Poincar\'e-Birkhoff-Witt theorem, the universal enveloping algebra decomposes as
$\mathcal{U}(\mathfrak g_\mathbb{C})=\mathcal{U}(\mathfrak b^-)\oplus\mathcal{U}(\mathfrak g_\mathbb{C})\mathfrak n$,
where $\mathfrak b^-$ is the subalgebra of lower triangular matrices, and $\mathfrak n$ is the Lie algebra of $N$.
Therefore, the space $\tilde\pi(\mathcal{U}(\mathfrak{g}_\mathbb{C}))x_k=\tilde\pi(\mathcal{U}(\mathfrak{b}^-))x_k$ does not contain a vector with the highest weight.
This contradicts irreducibility of $\tilde\pi$. Thus, each $x_k$ is of the highest weight,
and $x$ is a highest weight vector.
Since every highest weight is dominant, and the highest weight of a nontrivial irreducible
representation is non-zero, the lemma is proved.
\end{proof}\medbreak
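The mechanism of Lemma \ref{lem_diverge0} can already be seen in the standard $2$-dimensional representation of $\hbox{\rm SL}(2,\mathbb{R})$, where the $N$-fixed vectors are exactly the highest weight vectors. A minimal numerical sketch (assuming NumPy; the sample group elements are illustrative):

```python
import numpy as np

# Standard 2-dimensional representation of SL(2, R).
def n_upper(t):                     # element of the unipotent group N
    return np.array([[1.0, t], [0.0, 1.0]])

def a_diag(r):                      # element of the diagonal torus
    return np.array([[r, 0.0], [0.0, 1.0 / r]])

e1 = np.array([1.0, 0.0])           # candidate highest weight vector
e2 = np.array([0.0, 1.0])

# e1 is fixed by every n(t) in N, as in the lemma ...
for t in (0.5, -2.0, 7.0):
    assert np.allclose(n_upper(t) @ e1, e1)

# ... and is a weight vector: a_diag(r) scales it by r.
assert np.allclose(a_diag(3.0) @ e1, 3.0 * e1)

# e2 has lower weight and is NOT fixed by N.
assert not np.allclose(n_upper(1.0) @ e2, e2)
```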
For a fixed $g_0\in G$, define $q_s(t)=g_0n(t)^{-1}a(s)^{-1}$. We are going to study $q_s$ using techniques from Section
\ref{sec_uni}. The next lemma guarantees that the first possibility in
Theorems \ref{th_help1} and \ref{th_help3} does not occur.
For $\beta>0$, define
\begin{equation} \label{eq_drho}
D(\beta)=\left\{n(t)\in N: \sum_{i<j} t_{ij}^2<\beta^2\right\}.
\end{equation}
\begin{lem} \label{l_diverge}
Let $\pi$ be a nontrivial representation of $G$ on a real vector space $V$ equipped with a norm $\|\cdot\|$.
Let $V_0=\{v\in V: \pi(G)v=v\}$, and $V_1$ be a $G$-invariant complement for $V_0$.
(Note that $V_1$ exists because $\pi$ is completely reducible.)
Denote by $\Pi$ the projection on $V_1$.
Then for any relatively compact set $K\subseteq V$ and
$r>0$, there exist $\alpha \in (0,1)$ and $C_0>0$ such that for any $s$ with $a(s)\in A_T^{C_0}$ and
$x\in V$ with $\|\Pi(x)\|>r$,
\begin{equation}\label{eq_div}
\pi\left(q_s\left(D(e^{-\alpha s_1})\right)\right)^{-1}\cdot x\nsubseteq K.
\end{equation}
\end{lem}
\begin{proof}[Proof.]
It is convenient to extend $\pi$ to $V_\mathbb{C}=V\otimes \mathbb{C}$;
then $(V_0)_\mathbb{C}$ is the space of $G$-fixed vectors in $V_\mathbb{C}$.
Thus, we may assume that $V$ is complex. Moreover, since we only deal with projections onto $V_1$, we may assume
that $V$ has no non-zero $G$-fixed vectors.
Since $\{g_0^{-1}\cdot x:\|x\|>r\}\subseteq \{x:\|x\|>r_1\}$ for some $r_1>0$,
we may assume that $g_0=1$.
For some $R>0$, $K\subseteq \{x\in V: \|x\|<R\}$. If (\ref{eq_div})
fails for some $s\in\mathbb{R}^n$ and $x\in V$, then
\begin{equation} \label{eq_g0}
\sup_{n\in D(e^{-\alpha s_1})} \|\pi(a(s)n)x\|<R.
\end{equation}
Let $W=\{x\in V:\pi(N)x=x\}$.
Clearly, the statement of the lemma is independent of the norm used. It is convenient
to use the max-norm with respect to a basis $\left\{ v_i\right\}$ of $V$ consisting of weight
vectors, i.e.
$$
\left\|\sum_i u_i v_i\right\|=\max_i |u_i|,\quad u_i\in \mathbb{C},
$$
and each $v_i$ is an eigenvector of $A$.
Moreover we can choose the basis $\{v_i\}$ so that it contains a basis of $W$.
Let $\hbox{pr}_W$ be the projection onto $W$ with respect to this basis.
Then $\hbox{pr}_W$ commutes with $a(s)$.
Note that there exists $C'>0$ such that for any $v\in V$,
\begin{equation} \label{eq_g1}
\|v\|\ge C'\|\hbox{pr}_W(v)\|.
\end{equation}
Let $\mathcal{K}\subseteq \mathbb{N}$ be such that $k\in \mathcal{K}$
iff $v_k$ has a non-zero dominant weight. Denote this weight by $\lambda_k$.
By Lemma \ref{lem_diverge0}, $W\subseteq \left<v_k:k\in \mathcal{K}\right>$.
In particular, for $n\in N$,
\begin{equation} \label{eq_g2}
\hbox{pr}_W(\pi(n)x)=\sum_{k\in {\mathcal{K}}} u_k(n)v_k\quad\hbox{for}\;\hbox{some}\; u_k(n)\in \mathbb{C}.
\end{equation}
Therefore,
\begin{equation} \label{eq_g3}
\|\hbox{pr}_W(\pi(n)x)\|=\max_{k\in {\mathcal{K}}} |u_k(n)|.
\end{equation}
Using the fact that $\hbox{pr}_W$ and $\pi(a(s))$ commute,
(\ref{eq_g1}), (\ref{eq_g2}), and (\ref{eq_g3}), we have that for any $n\in N$,
\begin{eqnarray}
\|\pi(a(s)n)x\|\ge C'\|\hbox{pr}_W(\pi(a(s))\pi(n)x)\|=
C'\left\|\pi(a(s))\left(\sum_{k\in {\mathcal{K}}} u_k(n)v_k\right)\right\| \nonumber\\
=C'\max_{k\in {\mathcal{K}}} \left(|u_k(n)|e^{\lambda_k(s)}\right)\ge C'\exp\left(\min_{k\in {\mathcal{K}}} \lambda_k(s)\right)\|\hbox{pr}_W(\pi(n)x)\|. \label{eq_g4}
\end{eqnarray}
Let $\mathfrak n$ be the Lie algebra of $N$.
Denote by $\tilde\pi$ the representation of $\mathfrak g$ induced by $\pi$.
Since $\mathfrak g$ is simple, $\tilde\pi$ is faithful.
Thus, $\tilde\pi$ defines an isomorphism between $\mathfrak n$ and $\tilde\pi(\mathfrak n)$.
Since the exponential map $\mathfrak n\to N$ is a polynomial isomorphism,
the coordinates on $N$ used in Lemma \ref{lem_shah}
and the coordinates $\{t_{ij}\}$ are connected by a polynomial isomorphism too.
By Lemma \ref{lem_polynom}, (\ref{eq_shineq}) holds for the set $D(\beta)$ defined in (\ref{eq_drho}).
Therefore,
\begin{equation}\label{eq_g5}
\sup_{n\in D(e^{-\alpha s_1})}\|\hbox{pr}_W(\pi(n)x)\|\ge c_0 (e^{-\alpha s_1})^d\|x\|\ge c_0re^{-\alpha ds_1}
\end{equation}
for some positive integer $d$.
It follows from (\ref{eq_g4}) and (\ref{eq_g5}) that if (\ref{eq_g0}) holds, then
$$
\exp\left(\min_{k\in {\mathcal{K}}} \lambda_k (s)-\alpha d s_1\right)\le \frac{R}{c_0C'r}.
$$
Take $\alpha<d^{-1}$. Since each $\lambda_k$ is a non-zero dominant weight, it follows from
(\ref{eq_fw}) that $\lambda_k(s)-\alpha d s_1\rightarrow\infty$
as $C\rightarrow\infty$ for $s$ such that $a(s)\in A^C$.
Hence, there exists $C_0>0$ such that (\ref{eq_g0}) does not hold
for $s$ with $a(s)\in A_T^{C_0}$.
Since (\ref{eq_g0}) fails, (\ref{eq_div}) holds.
\end{proof}\medbreak
\begin{lem} \label{l_diverge00}
Use notations from Lemma \ref{l_diverge}.
Let $H$ be a subgroup of $G$, and $x\in V$ such that $\pi(H)x$ is discrete in $V$.
Then $\Pi(\pi(H)x)$ is discrete in $V_1$.
\end{lem}
\begin{proof}[Proof.]
Denote by $x_0\in V_0$ and $x_1\in V_1$ the components of $x$ with respect to
the decomposition $V=V_0\oplus V_1$. Then $\Pi(\pi(H)x)=\pi(H)x_1$.
Suppose that for some $\{h_n\}\subseteq H$, $\pi(h_n)x_1\rightarrow y$
for some $y\in V_1$. Then $\pi(h_n)x$ converges to $x_0+y$. It follows that
$\pi(h_n)x$ is constant for large $n$. Therefore, $\pi(h_n)x_1=\pi(h_n)x-x_0$
is constant for large $n$ too.
\end{proof}\medbreak
\section{Proof of Theorem \ref{th_ergodic}}\label{sec_th2}
Let ${\mathcal{Z}}=(\Gamma\backslash G)\cup\{\infty\}$ be the $1$-point compactification
of $\Gamma\backslash G$.
For $T>0$, define a normalized measure on ${\mathcal{Z}}$ by
$$
\nu_T(\tilde{f})=\frac{1}{\varrho(B^o_T)}\int_{B^o_T} \tilde{f}(yb^{-1})d\varrho(b),\quad \tilde{f}\in C_c(\Gamma\backslash G).
$$
To prove Theorem \ref{th_ergodic},
we need to show that $\nu_T\rightarrow \nu$ in the weak-$*$ topology as $T\rightarrow\infty$.
Since the space of normalized measures on ${\mathcal{Z}}$ is compact,
it is enough to prove that if $\nu_{T_i}\rightarrow \eta$ as $T_i\rightarrow\infty$ for some normalized
measure $\eta$ on ${\mathcal{Z}}$, then $\eta$ is $G$-invariant, and $\eta (\{\infty\})=0$.
It follows from Lemma \ref{lem_BTC2} that for any $C\in\mathbb{R}$,
\begin{equation} \label{eq_eta}
\eta(\tilde{f})=\lim_{T_i\rightarrow\infty}
\frac{1}{\varrho(B^o_{T_i})}\int_{B^C_{T_i}} \tilde{f}(yb^{-1})d\varrho(b),\quad \tilde{f}\in C_c(\Gamma\backslash G).
\end{equation}
Let
\begin{equation}\label{eq_U}
U=\{n(t)\in N: t_{ij}=0\;\hbox{for}\; i<j<n\}.
\end{equation}
\begin{lem} \label{lem_eta}
The measure $\eta$ is $U$-invariant.
\end{lem}
\begin{proof}[Proof.]
For $\tilde{f}\in C_c(\Gamma\backslash G)$, and $g_0\in G$, define
$\tilde{f}^{g_0}(\Gamma g)=\tilde{f}(\Gamma gg_0)\in C_c(\Gamma\backslash G)$.
Let $\tilde{f}\in C_c(\Gamma\backslash G)$. Take $M>0$ such that $|\tilde{f}|<M$.
For $T>0$ and $s\in\mathbb{R}^{n-1}$, define a set
\begin{equation} \label{eq_Dts}
D_{s,T}=\{n\in N: \|a(s)n\|<T\}.
\end{equation}
Denote by $\chi_{s,T}(n)$ the characteristic function of the set $D_{s,T}$.
Then we can rewrite (\ref{eq_eta}) as
\begin{equation}\label{eq_eta1}
\eta(\tilde{f})=\lim_{T_i\rightarrow\infty}\frac{1}{\varrho(B^o_{T_i})}
\int_{A_{T_i}^C}\int_{N} \tilde{f}(yn^{-1}a(-s))\chi_{s,T_i}(n)e^{2\delta(s)}dnds.
\end{equation}
Let
\begin{equation}\label{eq_apr}
A_T^{'C}=\{a(s)\in A_T^C: T^2-N(s)> T\},
\end{equation}
where $N(s)$ is defined in (\ref{eq_Ns}), and $B_T^{'C}=A_T^{'C}N\cap B_T^o$.
We claim that the equality (\ref{eq_eta1})
holds when $A_{T_i}^C$ is replaced by $A_{T_i}^{'C}$. By Lemma \ref{lem_BTC},
\begin{eqnarray*}
&&\int_{A_{T_i}^C-A_{T_i}^{'C}}\int_{N} \tilde{f}(yn^{-1}a(-s))\chi_{s,T_i}(n)e^{2\delta(s)}dnds\\
&\ll& \int_{A_{T_i}^C-A_{T_i}^{'C}}\left(T_i^2-N(s)\right)^{\frac{n(n-1)}{4}}\exp\left(\sum_k (n-k)s_k\right)ds\\
&=& O\left(T_i^{\frac{3n(n-1)}{4}}\right)\quad\hbox{as}\quad T_i\rightarrow\infty.
\end{eqnarray*}
Now the claim follows from (\ref{eq_BTasy}).
Take $u\in U$.
Let $u(s)=\hbox{Ad}_{a(-s)}(u)$. Then
\begin{eqnarray}
\nonumber |\eta(\tilde{f}^{u})-\eta(\tilde{f})|\le
\limsup_{T_i\rightarrow\infty}\frac{1}{\varrho(B_{T_i}^o)}\int_{A_{T_i}^{'C}}\int_{N} |\tilde{f}(yn^{-1}u(s)a(-s))\chi_{s,T_i}(n)\\
- \tilde{f}(yn^{-1}a(-s))\chi_{s,T_i}(n)|dn e^{2\delta(s)}ds. \label{eq_mess1}
\end{eqnarray}
We estimate the last integral:
\begin{eqnarray}
\nonumber \int_{N} \left|\tilde{f}(yn^{-1}u(s)a(-s))\chi_{s,T_i}(n)-\tilde{f}(yn^{-1}a(-s))\chi_{s,T_i}(n)\right|dn\\
\nonumber =\int_{N} \left|\tilde{f}(yn^{-1}a(-s))\right|\cdot \left|\chi_{s,T_i}(u(s)n)-\chi_{s,T_i}(n)\right|dn\\
\le M \int_{N} \left|\chi_{s,T_i}(u(s)n)-\chi_{s,T_i}(n)\right|dn. \label{eq_mess2}
\end{eqnarray}
Recall that $\alpha_{i,n}(-s)=-s_i+s_n$. Therefore, by (\ref{eq_adj}),
$u(s)_{in}= e^{-s_i+s_n} u_{in}$ for $i=1,\ldots, n-1$. It follows from the triangle
inequality that
$$
\|a(s)u(s)n\|\le \|a(s)n\|+\sqrt{\sum_{i=1}^{n-1} e^{2s_i}u(s)_{in}^2}\le \|a(s)n\|+ e^{s_n}\|u\|,
$$
and similarly,
$$
\|a(s)n\|\le \|a(s)u(s)n\|+ e^{s_n}\|u^{-1}\|=\|a(s)u(s)n\|+ e^{s_n}\|u\|.
$$
Hence,
$$
\chi_{s,T_i-e^{s_n}\|u\|}(n)\le \chi_{s,T_i}(u(s)n)\le \chi_{s,T_i+e^{s_n}\|u\|}(n)
$$
for $n\in N$. Therefore,
\begin{equation}\label{eq_mess3}
\int_{N} \left|\chi_{s,T_i}(u(s)n)-\chi_{s,T_i}(n)\right|dn\le\int_{N}\left(\chi_{s,T_i+e^{s_n}\|u\|}-
\chi_{s,T_i-e^{s_n}\|u\|}\right)dn.
\end{equation}
Let $\varepsilon>0$. We claim that there exists $C_0>0$ such that for $C>C_0$ and
$a(s)\in A^{'C}_{T_i}$,
\begin{equation} \label{eq_mess4}
\int_{N}\left(\chi_{s,T_i+e^{s_n}\|u\|}-\chi_{s,T_i-e^{s_n}\|u\|}\right)dn
\le \varepsilon \int_{N}\chi_{s,T_i}dn.
\end{equation}
Similarly to Lemma \ref{lem_BTC},
$$
\int_{N}\chi_{s,T_i}dn=c_n\left(T_i^2-N(s)\right)^{\frac{n(n-1)}{4}}\exp\left(\sum_k (n-k)s_k\right).
$$
Also $e^{s_n}\rightarrow 0$ for $a(s)\in A^{'C}_{T_i}$ as $C\rightarrow\infty$.
Therefore, the inequality (\ref{eq_mess4}) will follow from the following claim.
{\it Claim}. There exists $d_0=d_0(\varepsilon)>0$ such that for any $d\in (0,d_0)$ and
$a(s)\in A^{'C}_{T_i}$,
\begin{equation}\label{eq_LLL}
\Big((T_i+d)^2-N(s)\Big)^{\frac{n(n-1)}{4}}-\Big((T_i-d)^2-N(s)\Big)^{\frac{n(n-1)}{4}}
<\varepsilon \Big((T_i-d)^2-N(s)\Big)^{\frac{n(n-1)}{4}}.
\end{equation}
Note that by (\ref{eq_apr}), $(T_i-d)^2-N(s)>0$ for $a(s)\in A^{'C}_{T_i}$ and $d<1/2$.
The inequality (\ref{eq_LLL}) is equivalent to
$$
(T_i+d)^2-N(s)<(1+\bar\varepsilon)\Big((T_i-d)^2-N(s)\Big),
$$
where $\bar\varepsilon=(1+\varepsilon)^\frac{4}{n(n-1)}-1$.
By (\ref{eq_apr}), the last inequality follows from
$$
(T_i+d)^2 +\bar\varepsilon (T_i^2-T_i)<(1+\bar\varepsilon)(T_i-d)^2,
$$
or equivalently,
$$
T_i(4d-\bar\varepsilon+2d\bar\varepsilon)<\bar\varepsilon d^2.
$$
If one takes $d<d_0=\bar\varepsilon/(2\bar\varepsilon+4)$, the left hand side is negative.
This proves the claim (\ref{eq_LLL}).
Thus, (\ref{eq_mess4}) holds.
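The arithmetic behind the claim can be verified numerically. A short sketch (assuming NumPy; the sample values of $\varepsilon$, $n$, $T_i$, and $N(s)$ are illustrative):

```python
import numpy as np

eps = 0.3                     # epsilon in the claim
n = 4                         # dimension parameter of SL(n, R)
eps_bar = (1 + eps) ** (4 / (n * (n - 1))) - 1
d0 = eps_bar / (2 * eps_bar + 4)

# For every d < d0 the coefficient 4d - eps_bar + 2d*eps_bar is negative,
# so T*(4d - eps_bar + 2d*eps_bar) < eps_bar * d^2 for all T > 0.
for d in (0.25 * d0, 0.5 * d0, 0.99 * d0):
    assert 4 * d - eps_bar + 2 * d * eps_bar < 0

# Direct check of (eq_LLL) at a sample point with T^2 - N(s) > T.
T, Ns, d = 100.0, 100.0 ** 2 - 2 * 100.0, 0.5 * d0
p = n * (n - 1) / 4
lhs = ((T + d) ** 2 - Ns) ** p - ((T - d) ** 2 - Ns) ** p
assert lhs < eps * ((T - d) ** 2 - Ns) ** p
```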
Then by (\ref{eq_mess1}), (\ref{eq_mess2}), (\ref{eq_mess3}), and (\ref{eq_mess4}),
\begin{eqnarray*}
|\eta(\tilde{f}^{u})-\eta(\tilde{f})|&\le&
(M\varepsilon)\limsup_{T_i\rightarrow\infty}\frac{1}{\varrho(B_{T_i}^o)}\int_{A_{T_i}^{'C}}\int_{N} \chi_{s,T_i}(n)e^{2\delta(s)} dnds\\
&=&(M\varepsilon)\limsup_{T_i\rightarrow\infty}\frac{\varrho(B_{T_i}^{'C})}{\varrho(B_{T_i}^o)}\le M\varepsilon.
\end{eqnarray*}
This shows that $\eta(\tilde{f}^{u})=\eta(\tilde{f})$.
\end{proof}\medbreak
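The conjugation formula $u(s)_{in}=e^{-s_i+s_n}u_{in}$ used in the proof can be checked directly on matrices: with $a(s)=\hbox{diag}(e^{s_1},\ldots,e^{s_n})$ and $u\in U$, one has $\hbox{Ad}_{a(-s)}(u)=a(s)^{-1}ua(s)$. A sketch assuming NumPy (the particular entries of $u$ and the random $s$ are illustrative):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
s = rng.normal(size=n)
s -= s.mean()                      # sum(s) = 0, so a(s) lies in SL(n, R)

a = np.diag(np.exp(s))             # a(s)
u = np.eye(n)
u[:-1, -1] = [2.0, -1.0, 3.0]      # u in U: entries only in the last column

u_s = np.linalg.inv(a) @ u @ a     # u(s) = Ad_{a(-s)}(u)

# entries scale as u(s)_{in} = e^{-s_i + s_n} * u_{in}
for i in range(n - 1):
    assert np.isclose(u_s[i, -1], np.exp(-s[i] + s[-1]) * u[i, -1])
```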
\begin{lem} \label{lem_tildeA}
Let $\alpha \in (0,1)$. Let
\begin{equation} \label{eq_tildeA}
\tilde{A}^C_T=\left\{a(s)\in A_T^C|\; (T^2-N(s))^{1/2}>\exp\left(\mathop{\max}_{1\le j\le n-1}\{s_j\}-\alpha s_1\right)\right\},
\end{equation}
where $N(s)$ is as in (\ref{eq_Ns}), and $\tilde{B}^C_T=\tilde{A}^C_TN\cap B_T^o$. Then for $C>0$,
$$
\eta(\tilde{f})=\lim_{T_i\rightarrow\infty}
\frac{1}{\varrho(B^o_{T_i})}\int_{\tilde{B}^C_{T_i}} \tilde{f}(yb^{-1})d\varrho(b),\quad \tilde{f}\in C_c(\Gamma\backslash G).
$$
\end{lem}
\begin{proof}[Proof.]
By (\ref{eq_eta}), it is enough to show that
\begin{equation} \label{eq_tBTC}
\frac{\varrho\left(B^C_{T_i}-\tilde{B}^C_{T_i}\right)}{\varrho\left(B^o_{T_i}\right)}\rightarrow 0\quad\hbox{as}\quad T_i\rightarrow \infty.
\end{equation}
As in Lemma \ref{lem_BTC},
$$
\varrho\left(B_{T_i}^C-\tilde{B}^C_{T_i}\right)=c_n\mathop{\int}_{A_{T_i}^C-\tilde{A}^C_{T_i}} \Big(T_i^2-N(s)\Big)^{\frac{n(n-1)}{4}}\hbox{\rm exp}\left(\sum_{k} (n-k)s_k\right)ds.
$$
Therefore,
\begin{eqnarray*}
\varrho\left(B_{T_i}^C-\tilde{B}^C_{T_i}\right)\le c_n \int_{A_{T_i}^C}\exp\Big(\frac{n(n-1)}{2}\mathop{\max}_{1\le j\le n-1}\{s_j\}\\
-\frac{\alpha n(n-1)}{2} s_1+\sum_{k} (n-k)s_k\Big)ds
\le c_n \mathop{\sum}_{1\le j\le n-1} \int_{A_{T_i}^C}\exp\Big(\frac{n(n-1)}{2}s_j\\
-\frac{\alpha n(n-1)}{2} s_1+\sum_{k} (n-k)s_k\Big)ds
\end{eqnarray*}
Then as in the proof of Lemma \ref{lem_BTC2}, for $j\ne 1$,
\begin{eqnarray*}
\int_{A_{T_i}^C}\exp\left(\frac{n(n-1)}{2}s_j-\frac{\alpha n(n-1)}{2} s_1+\sum_{k} (n-k)s_k\right)ds\\
\le \int_{-\infty}^{\log T_i} \exp\left(\left(-\frac{\alpha n(n-1)}{2}+n-1\right)s_1\right)ds_1\\
\cdot\int_{-\infty}^{\log T_i} \exp\left(\left(\frac{n(n-1)}{2}+n-j\right)s_j\right)ds_j\cdot
\prod_{{k<n},{k\ne 1,j}} \int_{-\infty}^{\log T_i} e^{(n-k)s_k}ds_k\\
\ll T_i^{n(n-1)-\alpha n(n-1)/2}
\end{eqnarray*}
as $T_i\rightarrow\infty$.
For $j=1$, the same estimate can be obtained by a similar calculation.
Now (\ref{eq_tBTC}) follows from (\ref{eq_BTasy}).
\end{proof}\medbreak
Let $y=\Gamma g_0$ for $g_0\in G$. Define $q_s(t)=g_0n(t)^{-1}a(s)^{-1}$.
We apply the results of Section \ref{sec_uni} to the map $q_s$.
\begin{lem} \label{lem_inf1}
$\eta(\{\infty\})=0$.
\end{lem}
\begin{proof}[Proof.]
Let $\varepsilon,\delta >0$. Apply Theorem \ref{th_help1} to the map $q_s(t)$.
By Theorem \ref{th_help2}, the set $\Gamma\cdot p_{U_i}\subseteq V_G$ is discrete.
Write $V_G=V_0\oplus V_1$, where $V_0$ is the space of vectors fixed by $G$,
and $V_1$ is its $G$-invariant complement. Denote by $\Pi$ the projection on $V_1$.
By Lemma \ref{l_diverge00}, $\Pi(\Gamma\cdot p_{U_i})$ is discrete.
Also $0\notin \Pi(\Gamma\cdot p_{U_i})$: otherwise $p_{U_i}$ would be fixed by $G$, and
it would follow that $U_i$ is normal in $G$, which is a contradiction.
Therefore, there exists $r>0$ such that
$$
\|\Pi(x)\|>r\quad\textrm{for}\quad x\in\bigcup_i \Gamma\cdot p_{U_i}.
$$
Now we can apply Lemma \ref{l_diverge}. Let
$$
K=\{x\in V_G: \|x\|\le \delta\}.
$$
By Lemma \ref{l_diverge},
there exist $\alpha \in (0,1)$ and $C_0>0$ such that the first case of Theorem \ref{th_help1}
fails for $q_s$ when $D$ is a bounded open convex set which contains
$D(e^{-\alpha s_1})$ (it is defined in (\ref{eq_drho})),
and $s$ is such that $a(s)\in A^{C_0}_{T_i}$. Therefore, for some compact set
$C\subseteq \Gamma\backslash G$,
\begin{equation} \label{eq_mC}
\omega \left(\left\{n(t)\in D: \Gamma q_s(t)\notin C\right\}\right)< \varepsilon
\omega (D)
\end{equation}
when $D\supseteq D(e^{-\alpha s_1})$ and $a(s)\in A^{C_0}_{T_i}$,
where $\omega=dt$ denotes the Lebesgue measure on $N$.
Let $D_{s,T_i}$ be as in (\ref{eq_Dts}). By Lemma \ref{lem_tildeA},
\begin{equation} \label{eq_tA}
\eta(\tilde{f})=\lim_{T_i\rightarrow\infty}
\frac{1}{\varrho(B^o_{T_i})}\int_{\tilde{A}^C_{T_i}}\int_{D_{s,T_i}}\tilde{f}(\Gamma q_s(t))e^{2\delta(s)}dtds,\quad \tilde{f}\in C_c(\Gamma\backslash G).
\end{equation}
Note that
$$
D_{s,T_i}=\left\{n(t)\in N: \sum_{i<j} e^{2s_i}t_{ij}^2<T_i^2-N(s)\right\},
$$
where $N(s)$ is defined in (\ref{eq_Ns}) (cf. (\ref{eq_BT})).
It follows that $D_{s,T_i}$ contains $D(\beta)$ for
$$
\beta<(T_i^2-N(s))^{1/2}\exp\left(-\mathop{\max}_{1\le i\le n-1} \{s_i\}\right).
$$
When $a(s)\in\tilde{A}^{C_0}_{T_i}$, the right hand side is greater than $e^{-\alpha s_1}$
(see (\ref{eq_tildeA})). Therefore, $D_{s,T_i}\supseteq D(e^{-\alpha s_1})$ when
$a(s)\in\tilde{A}^{C_0}_{T_i}$. By (\ref{eq_mC}),
\begin{equation} \label{eq_mC1}
\omega \left(\left\{n(t)\in D_{s,T_i}: \Gamma q_s(t)\notin C\right\}\right)< \varepsilon
\omega (D_{s,T_i})\quad \hbox{for}\;\; a(s)\in\tilde{A}^{C_0}_{T_i}.
\end{equation}
Let $\chi_C$ be the characteristic function of the set $C$.
Take $\tilde{f}\in C_c(\Gamma\backslash G)$
such that $\chi_C\le \tilde{f}\le 1$. Then using (\ref{eq_tA}) and (\ref{eq_mC1}), we get
\begin{eqnarray*}
\eta(\hbox{supp}(\tilde{f}))&\ge& \lim_{T_i\rightarrow\infty}
\frac{1}{\varrho(B^o_{T_i})}\int_{\tilde{A}^{C_0}_{T_i}}\int_{D_{s,T_i}}\chi_C(\Gamma q_s(t))e^{2\delta(s)}dtds\\
&\ge&\lim_{T_i\rightarrow\infty}
\frac{1}{\varrho(B^o_{T_i})}\int_{\tilde{A}^{C_0}_{T_i}}(1-\varepsilon)\omega(D_{s,T_i})e^{2\delta(s)}ds\\
&=&(1-\varepsilon)\lim_{T_i\rightarrow\infty} \frac{\varrho(\tilde{B}^{C_0}_{T_i})}{\varrho(B^o_{T_i})}=1-\varepsilon.
\end{eqnarray*}
Hence, $\eta(\{\infty\})\le\eta(\hbox{supp}(\tilde{f})^c)\le\varepsilon$ for every $\varepsilon>0$.
\end{proof}\medbreak
Recall that the singular set $Y$ of $U$ was defined in (\ref{eq_sing}).
\begin{lem} \label{lem_Y1}
$\eta(Y)=0$.
\end{lem}
\begin{proof}[Proof.]
By (\ref{eq_sing}) and Theorem \ref{th_help2}, it is enough to show that
$\eta(\Gamma X(H,U))=0$ for any $H\in {\mathcal{H}}_\Gamma$. Moreover, since $\Gamma X(H,U)$
is $\sigma$-compact, we just need to show that $\eta(C)=0$ for any compact set
$C\subseteq \Gamma X(H,U)$.
Take $\varepsilon>0$. Apply Theorem \ref{th_help3} to the map $q_s(t)$.
Let $F\subseteq V_G$ be as in Theorem \ref{th_help3}.
Fix a relatively compact neighborhood $\Phi$ of $F$ in $V_G$.
Take $\Psi\supseteq C$ as in Theorem \ref{th_help3}.
By Theorem \ref{th_help2}, the set $\Gamma\cdot p_H$ is discrete.
Let $\Pi$ be as in the proof of Lemma \ref{lem_inf1}.
By Lemma \ref{l_diverge00}, $\Pi(\Gamma\cdot p_H)$ is discrete.
If $0\in\Pi(\Gamma\cdot p_H)$, then $p_H$ is fixed by $G$, so $H$
is normal in $G$, which is impossible.
Therefore, for some $r>0$, $\|\Pi(x)\|>r$ for every $x\in \Gamma\cdot p_H$.
Applying Lemma \ref{l_diverge} with $K=\Phi$, one gets that
there exist $\alpha\in (0,1)$ and $C_0>0$ such that the first case of Theorem \ref{th_help1}
fails for $q_s$ when $D$ is a bounded open convex set containing
$D(e^{-\alpha s_1})$, and $s$ is such that $a(s)\in A^{C_0}_{T_i}$. Therefore,
the second case must hold:
\begin{equation} \label{eq_mC3}
\omega \left(\left\{n(t)\in D: \Gamma q_s(t)\in \Psi\right\}\right)< \varepsilon \omega (D)
\end{equation}
when $D\supseteq D(e^{-\alpha s_1})$ and $a(s)\in A^{C_0}_{T_i}$.
Let $D_{s,T_i}$ be as in (\ref{eq_Dts}). It was shown in the proof of Lemma \ref{lem_inf1}
that $D_{s,T_i}\supseteq D(e^{-\alpha s_1})$ when $a(s)\in\tilde{A}^{C_0}_{T_i}$.
It follows from (\ref{eq_mC3}) that
\begin{equation} \label{eq_mC4}
\omega \left(\left\{n(t)\in D_{s,T_i}: \Gamma q_s(t)\in \Psi\right\}\right)< \varepsilon
\omega (D_{s,T_i})\quad \hbox{for}\;\; a(s)\in\tilde{A}^{C_0}_{T_i}.
\end{equation}
Take a function $\tilde{f}\in C_c(\Gamma\backslash G)$ such that $\tilde{f}=1$ on $C$,
$\hbox{supp}(\tilde{f})\subseteq \Psi$, and $0\le \tilde{f}\le 1$.
Let $\chi_\Psi$ be the characteristic function of $\Psi$.
Then using (\ref{eq_tA}) and (\ref{eq_mC4}), we get
\begin{eqnarray*}
\eta(C)\le \lim_{T_i\rightarrow\infty}
\frac{1}{\varrho(B^o_{T_i})}\int_{\tilde{A}^{C_0}_{T_i}}\int_{D_{s,T_i}}\chi_\Psi(\Gamma q_s(t))e^{2\delta(s)}dtds\\
\le \lim_{T_i\rightarrow\infty}
\frac{1}{\varrho(B^o_{T_i})}\int_{\tilde{A}^{C_0}_{T_i}}\varepsilon\omega(D_{s,T_i})e^{2\delta(s)}ds
=\varepsilon\lim_{T_i\rightarrow\infty} \frac{\varrho(\tilde{B}^{C_0}_{T_i})}{\varrho(B^o_{T_i})}=\varepsilon.
\end{eqnarray*}
This shows that $\eta(C)=0$. Hence, $\eta (Y)=0$.
\end{proof}\medbreak
Recall that by Lemma \ref{lem_eta}, $\eta$ is $U$-invariant.
By Ratner's measure classification \cite{r1}, an ergodic component of $\eta$ is
either $G$-invariant or supported on $Y\cup\{\infty\}$. By Lemmas
\ref{lem_inf1} and \ref{lem_Y1}, the set of ergodic components of the second type
has measure $0$. Therefore, $\eta$ is $G$-invariant, and $\eta=\nu$.
This finishes the proof of Theorem \ref{th_ergodic}.
\section*{ Acknowledgments }
The author wishes to thank his advisor V.~Bergelson for constant
support, encouragement, and help with this paper, N.~Shah and Barak~Weiss for insightful comments about the paper,
and H.~Furstenberg and G.~Margulis for interesting discussions.
\end{document}
\begin{document}
\title[Solutions for fractional operator problem
]
{Solutions for fractional operator problem via local Pohozaev identities}
\author{Yuxia Guo, Ting Liu and Jianjun Nie
}
\address{Department of Mathematical Science, Tsinghua University, Beijing, P.R.China}
\email{[email protected]}
\address{Department of Mathematical Science, Tsinghua University, Beijing, P.R.China}
\email{[email protected]}
\address{Institute of Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, P.R.China}
\email{[email protected]}
\thanks{This work is supported by NSFC(11771235,11801545)}
\maketitle
\begin{abstract}
We consider the following fractional Schr\"{o}dinger equation involving the critical exponent:
\begin{equation*}
\left\{\begin{array}{ll}
(-\Delta)^s u+V(|y'|,y'')u=u^{2^*_s-1} \ \hbox{ in } \ \mathbb{R}^N, \\
u>0, \ y \in \mathbb{R}^N,
\end{array}\right. \eqno{(P)}
\end{equation*}
where $s\in(\frac{1}{2}, 1)$, $(y',y'')\in \mathbb{R}^2\times \mathbb{R}^{N-2}$, and $V(|y'|,y'')$ is a bounded nonnegative function satisfying a weaker symmetry condition. We prove the existence of infinitely many solutions for the above problem by a finite dimensional reduction method combined with various Pohozaev identities.
\end{abstract}
{\bf Keywords:} Fractional Laplacian, Critical exponent, Various Pohozaev identities, Infinitely many solutions.
{\bf AMS} Subject Classification: 35B05; 35B45.
\section{Introduction}
In this paper, we are concerned with the following problem:
\begin{equation}\label{problem1}
\left\{\begin{array}{ll}
(-\Delta)^s u+V(|y'|,y'')u=u^{2^*_s-1}, \ \hbox{ in } \ \mathbb{R}^N, \\
u>0, \ y \in \mathbb{R}^N,
\end{array}\right.
\end{equation}
where $s\in(\frac{1}{2}, 1)$ and $2^*_s=\frac{2N}{N-2s}$ is the critical Sobolev exponent. For any $s\in (0,1)$, $(-\Delta)^s$ is the fractional Laplacian in $\mathbb{R}^N$, a nonlocal operator defined by:
\begin{equation} \label{aum23}
\begin{aligned}
(-\Delta)^su(y)=c(N,s)\,\mathrm{P.V.}\int_{\mathbb{R}^N}\frac{u(y)-u(x)}{|x-y|^{N+2s}}dx=c(N,s)\lim\limits_{\varepsilon\rightarrow 0^+}\int_{\mathbb{R}^N\setminus B_\varepsilon(y)}\frac{u(y)-u(x)}{|x-y|^{N+2s}}dx,
\end{aligned}
\end{equation}
where $\mathrm{P.V.}$ denotes the Cauchy principal value and $c(N,s)$ is a constant depending on $N$ and $s$. This operator is well defined on $C^{1,1}_{loc}\cap \mathcal {L}_s$, where $\mathcal {L}_s=\{u\in L^1_{loc}:\int_{\mathbb{R}^N}\frac{|u(x)|}{1+|x|^{N+2s}}dx<\infty\}$. For more details on the fractional Laplacian, we refer to \cite{Eleonora} and the references therein.
The fractional Laplacian appears in diverse areas including biological modeling, physics and mathematical finance, and can be regarded as the infinitesimal generator of a stable L\'evy process (see for example \cite{ad}). From the mathematical point of view, an important feature of the fractional Laplacian is its nonlocality, which makes it more challenging to handle than the classical Laplacian. Problems involving the fractional Laplacian have therefore been studied extensively, both out of pure mathematical interest and in view of concrete real-world applications; see for example \cite{BBarrios}-\cite{XCabre}, \cite{CS2007}, \cite{Quaas}, \cite{Servadeia}, \cite{TJin}, \cite{JTan1}, \cite{JTan}, \cite{SYan} and the references therein.
Solutions of \eqref{problem1} are related to the existence of standing wave solutions to the following fractional Schr\"{o}dinger equation:
\begin{equation}
\left\{\begin{array}{ll}
i\partial_t\Psi+(-\Delta)^s\Psi=F(x,\Psi) \ \hbox{ in } \ \mathbb{R}^N, \\
\lim\limits_{|x|\rightarrow\infty}|\Psi(x,t)|=0, \ \hbox{for \ all} \ t>0,
\end{array}\right.
\end{equation}
that is, solutions of the form $\Psi(x,t)=e^{-ict}u(x)$, where $c$ is a constant.
In this paper, under a weak symmetry condition on $V(y)$, we construct multi-bump solutions of \eqref{problem1} through a finite-dimensional reduction method combined with various local Pohozaev identities. More precisely, we consider the case $V(y)=V(|y'|,y'')=V(r,y'')$, $y=(y',y'')\in \mathbb{R}^2\times\mathbb{R}^{N-2}$, and assume that:
\vskip8pt
{\it{($V$) \ $V(y)\geq0$ is a bounded function belonging to $C^2(\mathbb{R}^N)$. Suppose that $r^{2s}V(r,y'')$ has a critical point $(r_0,y_0'')$ satisfying $r_0 > 0$, $ V(r_0,y_0'') > 0$ and
$\deg \big(\nabla \big(r^{2s}V(r,y'')\big), (r_0,y_0'')\big) \neq 0.$}}
\vskip8pt
Before stating the main results, let us first introduce some notation.
Denote by $D^{s}(\mathbb{R}^N)$ the completion of $C_0^\infty(\mathbb{R}^N)$ under the norm $\|(-\Delta)^{\frac{s}{2}}u\|_{L^2(\mathbb{R}^N)}$, where $\|(-\Delta)^{\frac{s}{2}}u\|_{L^2(\mathbb{R}^N)}$ is defined by $(\int_{\mathbb{R}^N}|\xi|^{2s}|\mathcal {G}u(\xi)|^2 d\xi)^{\frac{1}{2}}$, and $\mathcal {G}u$ is the Fourier transform of $u$:
$$\mathcal {G}u(\xi)=\frac{1}{(2\pi)^{\frac{N}{2}}}\int_{\mathbb{R}^N}e^{-i\xi\cdot x}u(x)dx.$$
We will construct the solutions in the following energy space:
$$H^{s}(\mathbb{R}^N)=\{u\in D^{s}(\mathbb{R}^N): \int_{\mathbb{R}^N} V(y)u^2dy<+\infty\}$$
with the norm:
$$\|u\|_{H^{s}(\mathbb{R}^N)}=\big(\|(-\Delta)^{\frac{s}{2}}u\|^2_{L^2(\mathbb{R}^N)}+\int_{\mathbb{R}^N} V(y)u^2dy\big)^{\frac{1}{2}}.$$
We define the functional $I$ on $H^{s}(\mathbb{R}^N)$ by:
\begin{equation}\label{func}
I(u)=\frac{1}{2}\int_{\mathbb{R}^N}|(-\Delta)^{\frac{s}{2}}u|^2dy+\frac{1}{2}\int_{\mathbb{R}^N}V(|y'|,y'')u^2dy
-\frac{1}{2^*_s}\int_{\mathbb{R}^N}(u)_+^{2^*_s}dy,
\end{equation}
where $(u)_+=\max(u,0)$. The solutions of problem \eqref{problem1} then correspond to the critical points of the functional $I.$
It is well known that the functions $$U_{x,\lambda}(y)=C(N,s)(\frac{\lambda}{1+\lambda^2|y-x|^2})^{\frac{N-2s}{2}}, \ \ \lambda>0, \ x\in \mathbb{R}^N,$$ where $C(N,s)=2^{\frac{N-2s}{2}}\frac{\Gamma(\frac{N+2s}{2})}{\Gamma(\frac{N-2s}{2})}$,
are the only solutions of the problem (see \cite{ELieb}):
\begin{equation}\label{single}(-\Delta)^s u=u^{\frac{N+2s}{N-2s}}, \ \ u>0 \ \hbox{ in } \ \mathbb{R}^N.\end{equation}
Define
\begin{equation*}
\begin{aligned}
H_s=\{u: & u\in H^s(\mathbb{R}^{N}), u(y_1, y_2, y'')=u(y_1, -y_2, y''), \\
&u(r\cos(\theta+\frac{2\pi j}{k}),r\sin(\theta+\frac{2\pi j}{k}),y'')=u(r\cos\theta,r\sin\theta,y'')\}.
\end{aligned}
\end{equation*}
Let $$x_j=(\overline{r}\cos\frac{2(j-1)\pi}{k},\overline{r}\sin\frac{2(j-1)\pi}{k},\overline{y}''), \ \ j=1,\ldots,k.$$
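For later use, it is helpful to record the mutual distances between the spike points $x_j$; the following elementary computation is a routine sanity check (not taken verbatim from the text) and only uses that the $x_j$ lie equally spaced on a circle of radius $\overline{r}$.

```latex
% Since x_1 and x_j lie on a circle of radius \overline{r}, the chord length is
% |x_1-x_j| = 2\overline{r}\sin\frac{(j-1)\pi}{k}, so nearest neighbors are at
% distance of order \overline{r}/k:
\[
|x_1-x_2|=2\overline{r}\sin\frac{\pi}{k}\sim\frac{2\pi\overline{r}}{k},
\qquad
\sum_{j=2}^{k}\frac{1}{|x_1-x_j|^{N-2s}}
\leq C\Big(\frac{k}{\overline{r}}\Big)^{N-2s},
\]
% the last bound holding because N-2s>1 (indeed N>4s and s>1/2), so the
% series \sum_j j^{-(N-2s)} converges.
```

These bounds are what make sums of the form $\sum_{j=2}^k(\lambda|x_1-x_j|)^{-p}$ comparable to $(k/\lambda)^{p}$ in the estimates below.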
To construct the solution of \eqref{problem1}, we would like to use $U_{x,\lambda}(y)$ as an approximate solution. However, the decay of $U_{x_j,\lambda}$ is not fast enough for our purposes when the dimension $N\leq 6s$, so we need to cut off this function. Let $\delta>0$ be a small constant such that $r^{2s}V(r,y'')>0$ if $|(r,y'')-(r_0,y''_0)|\leq10\delta$. Let $\zeta(y)=\zeta(r,y'')$ be a $C^2$ smooth function satisfying $\zeta=1$ if $|(r,y'')-(r_0, y''_0)|\leq \delta$, $\zeta=0$ if $|(r,y'')-(r_0, y''_0)|\geq 2\delta$, $|\nabla\zeta|\leq C$ and $0\leq\zeta\leq1$.
Denote
$$Z_{x_j,\lambda}=\zeta U_{x_j,\lambda}, \ Z_{\overline{r}, \overline{y}'',\lambda}^*=\sum_{j=1}^kU_{x_j,\lambda}, \ Z_{\overline{r}, \overline{y}'',\lambda}=\sum_{j=1}^kZ_{x_j,\lambda}.$$
The cut-off, however, introduces a new difficulty. In the proof of Proposition \ref{proposition2}, we have to deal with $(-\Delta)^s(\zeta(y)U_{x_j,\lambda}(y))$. By \eqref{aum23}, we can deduce that
$$(-\Delta)^s\big(\zeta(y)U_{x_j,\lambda}(y)\big)=\zeta(y)U_{x_j,\lambda}^{2_s^*-1}(y)+
c(N,s)\lim\limits_{\varepsilon\rightarrow 0^+}\int_{\mathbb{R}^N\setminus B_\varepsilon(y)}\frac{\big(\zeta(y)-\zeta(x)\big)U_{x_j,\lambda}(x)}{|x-y|^{N+2s}}dx.$$
In order to obtain a sharp enough result, we need to estimate this last principal value very carefully (see Lemma \ref{lemma6}).
Throughout the rest of the paper, we always assume that $\tau=\frac{N-4s}{N-2s}$, $k>0$ is a large integer, $\lambda\in[L_{0}k^{\frac{N-2s}{N-4s}}, L_{1}k^{\frac{N-2s}{N-4s}}]$ for some constants $L_1>L_0>0$,
and $|(\overline{r},\overline{y}'')-(r_0, y''_0)|\leq \theta$, where $\theta>0$ is a small constant. We also set
$$\Omega_j=\{y: y=(y',y'')\in \mathbb{R}^2\times\mathbb{R}^{N-2}, \langle \frac{y'}{|y'|},\frac{x_j'}{|x_j'|}\rangle\geq \cos\frac{\pi}{k}\}.$$
Our main result is:
\begin{theorem}\label{th:1}
Suppose that $s\in(\frac{1}{2}, 1)$ and $N>4s+2\tau$. If $V(y)$
satisfies the condition $(V)$, then there is an integer $k_{0}>0$,
such that for any integer $k\geq k_{0}$, problem \eqref{problem1} has
a solution $u_k$ of the form
$$u_{k}=Z_{\overline{r}_k, \overline{y}''_k,\lambda_k}+\phi_{k},$$ where $\phi_{k}\in
H_{s}$, $\lambda_{k}\in[L_{0}k^{\frac{N-2s}{N-4s}}, L_{1}k^{\frac{N-2s}{N-4s}}]$, and, as $k\rightarrow\infty$, $\lambda_k^{-\frac{N-2s}{2}}\|\phi_{k}\|_{L^{\infty}}\rightarrow 0$ and
$(\overline{r}_k,\overline{y}''_k)\rightarrow(r_0,y''_0).$
\end{theorem}
We will prove Theorem \ref{th:1} by the finite-dimensional reduction method combined with various local Pohozaev identities. The finite-dimensional reduction method has been used extensively to construct solutions for equations with critical growth; we refer to \cite{bc}, \cite{cny}-\cite{clin2}, \cite{YGuo}, \cite{Guo2016}, \cite{GPY}, \cite{liyy19},
\cite{yylinwm}, \cite{niu}, \cite{nyan}, \cite{szhang}, \cite{yan}
and the references therein. Roughly speaking, the reduction argument proceeds as follows. We first construct a good enough approximate solution and linearize the original problem around it. We then solve the corresponding finite-dimensional problem to obtain a true solution. To carry out the second step, we need a sharp estimate for the error term. In this paper, since the operator is nonlocal and the potential is only assumed to have a weak symmetry, we have to modify both steps.
\vskip8pt
We introduce the following norms:
$$\|u\|_{*}=\sup\limits_{y\in\mathbb{R}^N}(\sum\limits_{j=1}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N-2s}{2}+\tau}})^{-1}\lambda^{-\frac{N-2s}{2}}|u(y)|$$
and
$$\|f\|_{**}=\sup\limits_{y\in\mathbb{R}^N}(\sum\limits_{j=1}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}})^{-1}\lambda^{-\frac{N+2s}{2}}|f(y)|.$$
We first use $Z_{\overline{r}, \overline{y}'',\lambda}$ as an approximate solution to obtain a unique function $\phi(\overline{r}, \overline{y}'',\lambda)$; the problem of finding critical points of $I(u)$ is then reduced to that of finding critical points of $F(\overline{r},\overline{y}'',\lambda)=I(Z_{\overline{r}, \overline{y}'',\lambda}+\phi(\overline{r}, \overline{y}'',\lambda))$. In the second step, we solve the corresponding finite-dimensional problem to obtain a solution. However, in the first step we can only obtain $\|\phi\|_*\leq\frac{C}{\lambda^{s+\sigma}}$ (see Proposition \ref{proposition2}). From Lemmas \ref{exp2} and \ref{exp3}, we know that
\begin{equation} \label{energyexpansion11}
\begin{aligned}
\frac{\partial F}{\partial\lambda}=\frac{\partial I(Z_{\overline{r},\overline{y}'',\lambda})}{\partial\lambda}+O(k\lambda^{-1}\|\phi\|^2_*)=k\left(-\frac{B_1}{\lambda^{2s+1}}V(\bar{r},\bar{y}'')+ \frac{B_3k^{N-2s}}{\lambda^{N-2s+1}}+O\big(\frac{1}{\lambda^{2s+1+\sigma}}\big)\right),
\end{aligned}
\end{equation}
\begin{equation} \label{energyexpansion12}
\begin{aligned}
\frac{\partial F}{\partial\overline{r}}
=&\frac{\partial I(Z_{\overline{r},\overline{y}'',\lambda})}{\partial\overline{r}}+O(k\lambda\|\phi\|^2_*)\\
=&k\left(\frac{B_1}{\lambda^{2s}}\frac{\partial V(\overline{r},\overline{y}'')}{\partial\overline{r}}+\sum\limits_{j=2}^k\frac{B_2}{\overline{r}\lambda^{N-2s}|x_1-x_j|^{N-2s}}+O(\frac{1}{\lambda^{s+\sigma}})\right),
\end{aligned}
\end{equation}
and
\begin{equation} \label{energyexpansion13}
\begin{aligned}
\frac{\partial F}{\partial\overline{y}''_j}=\frac{\partial I(Z_{\overline{r},\overline{y}'',\lambda})}{\partial\overline{y}''_j}+O(k\lambda\|\phi\|^2_*)=k\left(\frac{B_1}{\lambda^{2s}}\frac{\partial V(\overline{r},\overline{y}'')}{\partial\overline{y}''_j}+O(\frac{1}{\lambda^{s+\sigma}})\right).
\end{aligned}
\end{equation}
Note that the estimate of $\phi$ is good enough only for the expansion \eqref{energyexpansion11}; in \eqref{energyexpansion12} and \eqref{energyexpansion13} the error terms it produces are comparable to the main terms and thus destroy them.
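The scale $\lambda\sim k^{\frac{N-2s}{N-4s}}$ assumed throughout can be read off directly from \eqref{energyexpansion11}; the short balancing computation below is included for the reader's convenience (it is a heuristic, not a statement from the text).

```latex
% Setting \partial F/\partial\lambda = 0 and keeping only the two main
% terms of \eqref{energyexpansion11} gives
\[
\frac{B_1}{\lambda^{2s+1}}V(\bar r,\bar y'')
\approx\frac{B_3k^{N-2s}}{\lambda^{N-2s+1}}
\quad\Longrightarrow\quad
\lambda^{N-4s}\approx\frac{B_3}{B_1V(\bar r,\bar y'')}\,k^{N-2s},
\]
% i.e. \lambda is of order k^{(N-2s)/(N-4s)}, which is exactly the range
% [L_0 k^{(N-2s)/(N-4s)}, L_1 k^{(N-2s)/(N-4s)}] fixed for \lambda above.
```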
To overcome this difficulty, following the idea of Peng, Wang and Yan \cite{PWY}, instead of studying \eqref{energyexpansion12} and \eqref{energyexpansion13} we prove that $(\bar r, \bar{y}'', \lambda)$ satisfies the following local Pohozaev identities:
\begin{eqnarray}\label{firstpohozaevidentity11}
\begin{aligned}
&\quad-\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial \tilde{u}_{k}}{\partial \nu}\frac{\partial \tilde{u}_{k}}{\partial y_{i}}+\frac{1}{2}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla \tilde{u}_{k}|^{2}\nu_{i}\\
&=\int_{B_{\rho}}\big(-V(r,y'')u_k+(u_k)_+^{2^*_s-1}\big)\frac{\partial u_k}{\partial y_i}, \ \ \ \ \ \ i=3,\ldots, N,
\end{aligned}
\end{eqnarray}
and
\begin{eqnarray}\label{secondtpohozaevidentity11}
\begin{aligned}
&\quad-\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \tilde{u}_k, Y\rangle \frac{\partial\tilde{u}_k}{\partial\nu} +\frac{1}{2}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla \tilde{u}_k|^{2}\langle Y,\nu\rangle +\frac{2s-N}{2}\int_{\partial\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\tilde{u}_k}{\partial\nu} \tilde{u}_k\\
&=\int_{ B_{\rho}}\big(-V(r,y'')u_k+(u_k)_+^{2^*_s-1}\big)\langle y, \nabla u_k\rangle,
\end{aligned}
\end{eqnarray}
where $u_k=Z_{\overline{r}, \overline{y}'', \lambda}+\phi$, $\tilde{u}_k$ is the extension of $u_k$ (see \eqref{poisson involution} below),
$$ \mathcal B^{+}_{\rho}=\{Y=(y,t):|Y-(r_0,y''_0,0)|\leq\rho \ \hbox{and} \ t>0 \}\subseteq {\Bbb R}_{+}^{N+1},$$
$$ \partial'\mathcal B^{+}_{\rho}=\{Y=(y,t):|y-(r_0,y''_0)|\leq\rho,t=0\}\subseteq {\Bbb R}^{N},$$
$$ \partial''\mathcal B^{+}_{\rho}=\{Y=(y,t):|Y-(r_0,y''_0,0)|=\rho,t>0 \}\subseteq {\Bbb R}_{+}^{N+1},$$
$$\partial\mathcal B^{+}_{\rho}=\partial'\mathcal B^{+}_{\rho}\cup\partial''\mathcal B^{+}_{\rho},$$
$$ B_{\rho}=\{y:|y-(r_0,y''_0)|\leq\rho\}\subseteq {\Bbb R}^{N}.$$
For any $u\in D^{s}(\mathbb{R}^N)$, $\widetilde{u}$ is defined by:
\begin{eqnarray}\label{poisson involution}
\widetilde{u}(y,t)=\mathcal P_{s}[u]:=\int_{{\Bbb R}^{N}} P_s(y-\xi,t)u(\xi)d\xi,\quad (y,t)\in {\Bbb R}^{N+1}_+:={\Bbb R}^{N}\times (0,+\infty),
\end{eqnarray}
where
\[
P_s(x,t)=\beta(N,s)\frac{t^{2s}}{(|x|^2+t^2)^{\frac{N+2s}{2}}}
\] with the constant $\beta(N,s)$ chosen so that $\int_{{\Bbb R}^{N}}P_s(x,1)d x=1$.
We call $\widetilde{u}=\mathcal P_{s}[u]$ the \emph{extension} of $u$. Moreover, $\widetilde{u}$ satisfies (see \cite{CS2007})
\begin{eqnarray*}
\mathrm{div}(t^{1-2s}\nabla \widetilde{u})=0, \quad \mbox{in }\mathbb{R}^{N+1}_+
\end{eqnarray*}
and
\begin{eqnarray*}
-\lim\limits_{t\to 0}t^{1-2s}\partial_t \widetilde{u}(y,t)=\omega_s(-\Delta)^{s} u(y), \quad \mbox{on} \ {\Bbb R}^{N}
\end{eqnarray*}
in the distribution sense, where $\omega_s=2^{1-2s}\Gamma(1-s)/\Gamma(s)$.
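For orientation, we sketch informally how an identity such as \eqref{firstpohozaevidentity11} arises from the extension; this is only a heuristic outline (assuming enough regularity to integrate by parts freely), not the rigorous argument.

```latex
% Multiply \mathrm{div}(t^{1-2s}\nabla\tilde u_k)=0 by
% \partial\tilde u_k/\partial y_i and integrate over \mathcal B^+_\rho.
% Two integrations by parts (the weight t^{1-2s} is independent of y_i) give
\[
0=\int_{\partial\mathcal B^{+}_{\rho}}t^{1-2s}
\frac{\partial \tilde u_k}{\partial\nu}\frac{\partial \tilde u_k}{\partial y_i}
-\frac{1}{2}\int_{\partial\mathcal B^{+}_{\rho}}t^{1-2s}
|\nabla \tilde u_k|^{2}\nu_i .
\]
% On the flat part \partial'\mathcal B^+_\rho the outer normal is \nu=-e_t,
% so \nu_i=0 for i=3,\ldots,N and the second integral contributes nothing
% there, while the boundary relation
% -\lim_{t\to0}t^{1-2s}\partial_t\tilde u_k=\omega_s(-\Delta)^s u_k
% turns the first integral over \partial'\mathcal B^+_\rho into the volume
% term \int_{B_\rho}(-Vu_k+(u_k)_+^{2^*_s-1})\,\partial_{y_i}u_k (up to the
% normalizing constant \omega_s), leaving exactly the boundary terms on
% \partial''\mathcal B^+_\rho that appear in \eqref{firstpohozaevidentity11}.
```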
Due to the nonlocality of the fractional Laplacian, we have to overcome several serious difficulties. Indeed, the local Pohozaev identities hold not for $u$ itself but for its extension $\widetilde{u}$. The integrals appearing in \eqref{firstpohozaevidentity11} and \eqref{secondtpohozaevidentity11} are much more complicated: we must integrate in one more variable than in the classical Laplacian case. Deriving sharp estimates for each term in \eqref{firstpohozaevidentity11} and \eqref{secondtpohozaevidentity11} is therefore quite delicate and requires a number of preliminary lemmas.
Our paper is organized as follows. In Section 2, we perform a finite-dimensional reduction.
We prove Theorem \ref{th:1} in Section 3.
In Appendix A, we give some essential estimates.
We put the energy expansions for $\langle I'(Z_{\overline{r},\overline{y}'',\lambda}+\phi(\overline{r},\overline{y}'',\lambda)),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\lambda}\rangle$, $\langle I'(Z_{\overline{r},\overline{y}'',\lambda}+\phi(\overline{r},\overline{y}'',\lambda)),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\overline{r}}\rangle$ and $\langle I'(Z_{\overline{r},\overline{y}'',\lambda}+\phi(\overline{r},\overline{y}'',\lambda)),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\overline{y}''}\rangle$ in Appendix B.
\section{Finite dimensional reduction}
In this section, we perform a finite-dimensional reduction by using $Z_{\overline{r}, \overline{y}'',\lambda}$ as an
approximate solution and considering the linearization of
problem \eqref{problem1} around the approximate solution
$Z_{\overline{r}, \overline{y}'',\lambda}$.
Let $$Z_{i,1}=\frac{\partial Z_{x_i,\lambda}}{\partial \lambda}, \ \ Z_{i,2}=\frac{\partial Z_{x_i,\lambda}}{\partial \overline{r}}, \ \ Z_{i,l}=\frac{\partial Z_{x_i,\lambda}}{\partial \overline{y}''_l}, \ l=3,\ldots,N.$$
Then a direct computation shows that
$$Z_{i,1}=O(\lambda^{-1}Z_{x_i,\lambda}), \ \ Z_{i,l}=O(\lambda Z_{x_i,\lambda}), \ l=2,\ldots,N.$$
We consider the following linearized problem:
\begin{equation}\label{problem3}
\left\{\begin{array}{ll}
(-\Delta)^s \phi+V(r,y'')\phi-(2^*_s-1)Z_{\overline{r},\overline{y}'',\lambda}^{2^*_s-2}\phi=h+\sum\limits_{l=1}^Nc_l\sum\limits_{i=1}^kZ_{x_i,\lambda}^{2^*_s-2}Z_{i,l},\\
\phi\in H_{s}, \ \ \sum\limits_{i=1}^k\int_{\mathbb{R}^N}Z_{x_i,\lambda}^{2^*_s-2}Z_{i,l}\phi=0, \ l=1,2,\ldots,N,
\end{array}\right.
\end{equation}
for some numbers $c_l$.
\begin{lemma}\label{lemma4}
Suppose that $N>4s$ and $\phi_k$ solves problem \eqref{problem3} with $h=h_k$. If $\|h_k\|_{**}\rightarrow0$ as $k\rightarrow\infty$, then $\|\phi_k\|_{*}\rightarrow0$ as $k\rightarrow\infty$.
\end{lemma}
{\bf Proof.} We argue
by contradiction. Suppose there exist $h_{k}$
with $\|h_{k}\|_{\ast\ast}\rightarrow0$ as $k\rightarrow\infty$ and $\phi_k$ with $\|\phi_k\|_{\ast}\geq c>0$, where $\lambda=\lambda_k,$ $\lambda_{k}\in[L_{0}k^{\frac{N-2s}{N-4s}},L_{1}k^{\frac{N-2s}{N-4s}}]$ and
$(\overline{r}_k,\overline{y}''_k)\rightarrow(r_0,y''_0)$.
Without loss of generality, we may assume that
$\|\phi_{k}\|_{\ast}\equiv1.$ For simplicity, we drop the subscript $k$.
Firstly, we have
\begin{equation}
\begin{aligned}
|\phi(y)|\leq& C\int_{\mathbb{R}^{N}}\frac{1}{|y-z|^{N-2s}}Z_{\overline{r},\overline{y}'',\lambda}^{2^*_s-2}|\phi|dz\\
&+C\int_{\mathbb{R}^{N}}\frac{1}{|y-z|^{N-2s}}\Big[|h|+|\sum\limits_{l=1}^{N}c_l\sum\limits_{i=1}^{k}
Z_{x_{i},\lambda}^{2^*_s-2}Z_{i,l}|\Big]dz\\
=:&A_1+A_2.
\end{aligned}
\end{equation}
For the first term $A_1$, by Lemmas \ref{Lemma appendix1} and \ref{Lemma appendix2}, we can deduce that
\begin{equation}\label{equality6}
\begin{aligned}
\big|A_1\big|\leq&C\|\phi\|_{\ast}\int_{\mathbb{R}^{N}}\frac{1}{|y-z|^{N-2s}}Z_{\overline{r},\overline{y}'',\lambda}^{2^*_s-2}
\sum\limits_{i=1}^{k}\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|z-x_{i}|)^{\frac{N-2s}{2}+\tau}}dz\\
\leq&C\|\phi\|_{\ast}\lambda^{\frac{N-2s}{2}}\sum\limits_{i=1}^{k}\frac{1}{(1+\lambda|y-x_{i}|)^{\frac{N-2s}{2}+\tau+\theta}},
\end{aligned}
\end{equation}
where $\theta$ is a small constant. For the second term $A_2$, we make use of Lemma \ref{Lemma appendix2}, so that
\begin{equation}\label{equality7}
\begin{aligned}
\big|A_2\big|&\leq
C\|h\|_{\ast\ast}\int_{\mathbb{R}^{N}}\sum\limits_{i=1}^{k}
\frac{\lambda^{\frac{N+2s}{2}}}{|y-z|^{N-2s}(1+\lambda|z-x_{i}|)^{\frac{N+2s}{2}+\tau}}dz\\
&\quad +C\sum\limits_{l=1}^{N}|c_l|\int_{\mathbb{R}^{N}}\sum\limits_{i=1}^{k}\frac{\lambda^{\frac{N+2s}{2}+n_l}}{|y-z|^{N-2s}(1+\lambda|z-x_{i}|)^{N+2s}}dz\\
&\leq
C\|h\|_{\ast\ast}\lambda^{\frac{N-2s}{2}}\sum\limits_{i=1}^{k}\frac{1}{(1+\lambda|y-x_{i}|)^{\frac{N-2s}{2}+\tau}}
+C\sum\limits_{l=1}^{N}|c_l|\lambda^{\frac{N-2s}{2}+n_l}\sum\limits_{i=1}^{k}\frac{1}{(1+\lambda|y-x_{i}|)^{\frac{N-2s}{2}+\tau}},
\end{aligned}
\end{equation}
where $n_1=-1$ and $n_l=1$ for $l=2,\ldots,N$.
Then, we have
\begin{equation}\label{equality13}
\begin{aligned}
&\big(\sum\limits_{i=1}^{k}\frac{1}{(1+\lambda|y-x_{i}|)^{\frac{N-2s}{2}+\tau}}\big)^{-1}\lambda^{-\frac{N-2s}{2}}|\phi|
\\ \leq & C\|\phi\|_{\ast}\frac{\sum\limits_{i=1}^{k}\frac{1}{(1+\lambda|y-x_{i}|)^{\frac{N-2s}{2}+\tau+\theta}}}{\sum\limits_{i=1}^{k}\frac{1}{(1+\lambda|y-x_{i}|)^{\frac{N-2s}{2}+\tau}}}
+C\|h\|_{\ast\ast}+C\sum\limits_{l=1}^{N}|c_l|\lambda^{n_l}.
\end{aligned}
\end{equation}
Multiplying both sides of \eqref{problem3} by $Z_{1,t}$ and integrating, we have
\begin{equation}\label{equality8}
\begin{aligned}
&\sum\limits_{l=1}^{N}c_l\sum\limits_{i=1}^{k}\int_{\mathbb{R}^N}Z_{x_i,\lambda}^{2^*_s-2}Z_{i,l}Z_{1,t}\\
=&\big\langle(-\Delta )^s\phi+V(r,y'')\phi-(2^*_s-1)Z^{2^*_s-2}_{\overline{r},\overline{y}'',\lambda}\phi,Z_{1,t}\big\rangle-\langle h,Z_{1,t}\rangle.
\end{aligned}
\end{equation}
First of all, there exists a constant $\overline{c}>0$ such that
\begin{equation}\label{equality11}
\begin{aligned}
\sum\limits_{i=1}^{k}\int_{\mathbb{R}^N}Z_{x_i,\lambda}^{2^*_s-2}Z_{i,l}Z_{1,t} \left\{
\begin{array}{ll}
=(\overline{c}+o(1))\lambda^{2n_t}, & l=t, \\
\leq \frac{\overline{c}\lambda^{n_t}\lambda^{n_l}}{\lambda^{N}}, & l\neq t.
\end{array}
\right.
\end{aligned}
\end{equation}
On the other hand, we have
\begin{equation}\label{aum16}
\begin{aligned}
&|\langle V(r,y'')\phi,Z_{1,t}\rangle|\\
\leq& C\|\phi\|_*\int_{\mathbb{R}^N}\frac{\zeta\lambda^{N-2s+n_t}}{(1+\lambda|y-x_1|)^{N-2s}}
\sum\limits_{i=1}^{k}\frac{1}{(1+\lambda|y-x_i|)^{\frac{N-2s}{2}+\tau}}\\
\leq& C\|\phi\|_*\lambda^{N-2s+n_t}\big[\int_{\mathbb{R}^N}\frac{\zeta}{(1+\lambda|y-x_1|)^{\frac{3N-6s}{2}+\tau}}\\
&\quad \quad \quad \quad +\sum\limits_{i=2}^{k}\frac{1}{(\lambda|x_1-x_i|)^{\tau}}\int_{\mathbb{R}^N}\zeta(\frac{1}{(1+\lambda|y-x_1|)^{\frac{3N-6s}{2}}}+\frac{1}{(1+\lambda|y-x_i|)^{\frac{3N-6s}{2}}})\big]\\
\leq&C\|\phi\|_*\int_{\mathbb{R}^N}\zeta\frac{\lambda^{N-2s+n_t}}{(1+\lambda|y-x_1|)^{\frac{3N-6s}{2}}}\leq\frac{C\lambda^{n_t}\|\phi\|_*\log\lambda}{\lambda^{\min(2s,\frac{N-2s}{2})}}
\leq\frac{C\lambda^{n_t}\|\phi\|_*}{\lambda^{s+\sigma}}
\end{aligned}
\end{equation}
and
\begin{equation}\label{aum17}
\begin{aligned}
|\langle h,Z_{1,t}\rangle|&\leq C\|h\|_{\ast\ast}\int_{\mathbb{R}^{N}}\frac{\lambda^{N+n_t}}{(1+\lambda|y-x_1|)^{N-2s}}\sum\limits_{i=1}^{k}\frac{1}{(1+\lambda|y-x_i|)^{\frac{N+2s}{2}+\tau}}\\
&\leq C\lambda^{n_t}\|h\|_{\ast\ast}.
\end{aligned}
\end{equation}
Moreover, one has
\begin{equation}\label{equality12}
\begin{aligned}
|\langle(-\Delta)^s\phi-(2^*_s-1)Z^{2^*_s-2}_{\overline{r},\overline{y}'',\lambda}\phi,Z_{1,t}\rangle|\leq\frac{C\lambda^{n_t}\|\phi\|_{\ast}}{\lambda^{s+\sigma}}.
\end{aligned}
\end{equation}
Combining \eqref{equality8}, \eqref{equality11}, \eqref{aum16}, \eqref{aum17} and \eqref{equality12}, we have
$$|c_t|\leq \frac{C}{\lambda^{n_t}}(\frac{\|\phi\|_{\ast}}{\lambda^{\sigma}}+\|h\|_{\ast\ast})+\frac{C}{\lambda^{n_t}}\sum\limits_{l\neq t}\frac{\lambda^{n_l}|c_l|}{\lambda^{N}}.$$
This implies that
$$\sum\limits_{l=1}^N|c_l|\lambda^{n_l}\leq C(\frac{\|\phi\|_{\ast}}{\lambda^{\sigma}}+\|h\|_{\ast\ast}).$$
Thus by \eqref{equality13} and $\|\phi\|_{\ast}=1$, there is $R>0$ such that
\begin{equation}\label{equality20}
\begin{aligned}
\|\lambda^{-\frac{N-2s}{2}}\phi(y)\|_{L^\infty(B_{R/\lambda}(x_i))}\geq a>0,
\end{aligned}
\end{equation}
for some $i$. As a result, $\widetilde{\phi}=\lambda^{-\frac{N-2s}{2}}\phi(\frac{y}{\lambda}+x_i)$ converges uniformly on any compact set to a solution $u$ of the equation
$$(-\Delta)^su-(2^*_s-1)U_{0,\Lambda}^{2^*_s-2}u=0, \ \ \hbox{ in } \ \mathbb{R}^N,$$
for some $0<\Lambda_1\leq\Lambda\leq\Lambda_2$. Since $u$ is perpendicular to the kernel of this equation, $u=0$. This contradicts \eqref{equality20}.
{$\Box$}
\vskip8pt
Using the same argument as in the proof of Proposition 4.1 in \cite{pfm}, we can obtain the following proposition.
\begin{proposition}\label{proposition1}
There exist $k_0>0$ and a constant $C>0$, independent of $k$, such that for all
$k\geq k_0$ and all $h\in L^\infty(\mathbb{R}^N)$, problem \eqref{problem3} has a unique solution $\phi=L_k(h)$. Moreover,
\begin{equation}\label{equality1}
\begin{aligned}
\|L_k(h)\|_*\leq C\|h\|_{**}, \ \ \ |c_l|\leq \frac{C}{\lambda^{n_l}}\|h\|_{**}.
\end{aligned}
\end{equation}
\end{proposition}
Now we consider the following problem:
\begin{equation}\label{problem4}
\left\{\begin{array}{ll}
(-\Delta)^s(Z_{\overline{r},\overline{y}'',\lambda}+\phi)+V(r,y'')(Z_{\overline{r},\overline{y}'',\lambda}+\phi)=(Z_{\overline{r},\overline{y}'',\lambda}+\phi)^{2_s^*-1}+\sum\limits_{l=1}^Nc_l\sum\limits_{i=1}^kZ^{2_s^*-2}_{x_i,\lambda}Z_{i,l}, \ \ \hbox{ in } \mathbb{R}^N, \\
\phi\in H_{s},
\sum\limits_{i=1}^k\int_{\mathbb{R}^N}Z^{2_s^*-2}_{x_i,\lambda}Z_{i,l}\phi=0, \ l=1,\ldots, N.
\end{array}\right.
\end{equation}
In the rest of this section, we devote
ourselves to the proof of the following proposition by using the contraction mapping theorem.
\begin{proposition}\label{proposition2}
There exist $k_0>0$ and a constant $C>0$, independent of $k$, such that for all
$k\geq k_0$, $L_0k^{\frac{N-2s}{N-4s}}\leq \lambda\leq L_1k^{\frac{N-2s}{N-4s}}$ and $|(\overline{r},\overline{y}'')-(r_0, y''_0)|\leq \theta$, problem \eqref{problem4} has a unique solution $\phi=\phi(\overline{r},\overline{y}'',\lambda)$ satisfying
\begin{equation}\label{equality2}
\begin{aligned}
\|\phi\|_*\leq C(\frac{1}{\lambda})^{s+\sigma}, \ \ \ |c_l|\leq C(\frac{1}{\lambda})^{s+\sigma},
\end{aligned}
\end{equation}
where $\sigma>0$ is a small constant.
\end{proposition}
We rewrite \eqref{problem4} as
\begin{equation}\label{problem5}
\left\{\begin{array}{ll}
(-\Delta)^s\phi+V(r,y'')\phi-(2_s^*-1)(Z_{\overline{r},\overline{y}'',\lambda})^{2_s^*-2}\phi=\mathcal {F}(\phi)+l_k+\sum\limits_{l=1}^Nc_l\sum\limits_{i=1}^kZ^{2_s^*-2}_{x_i,\lambda}Z_{i,l}, \ \ \hbox{ in } \mathbb{R}^N, \\
\phi\in H_{s},
\sum\limits_{i=1}^k\int_{\mathbb{R}^N}Z^{2_s^*-2}_{x_i,\lambda}Z_{i,l}\phi=0, \ l=1,\ldots, N,
\end{array}\right.
\end{equation}
where
$$\mathcal{F}(\phi)=(Z_{\overline{r},\overline{y}'',\lambda}+\phi)^{2_s^*-1}_+-Z_{\overline{r},\overline{y}'',\lambda}^{2_s^*-1}-(2_s^*-1)Z_{\overline{r},\overline{y}'',\lambda}^{2_s^*-2}\phi,$$
and
\begin{equation}
\begin{aligned}
l_k(y)&=\big(Z_{\overline{r},\overline{y}'',\lambda}^{2_s^*-1}(y)-\zeta(y) \sum\limits_{j=1}^kU_{x_j,\lambda}^{2_s^*-1}(y)\big)-V(r,y'')Z_{\overline{r},\overline{y}'',\lambda}(y)\\
&\quad-\sum\limits_{j=1}^kc(N,s)\lim\limits_{\varepsilon\rightarrow 0^+}\int_{\mathbb{R}^N\setminus B_\varepsilon(y)}\frac{\big(\zeta(y)-\zeta(x)\big)U_{x_j,\lambda}(x)}{|x-y|^{N+2s}}dx\\
&=:J_1+J_2+J_3.
\end{aligned}
\end{equation}
In order to use the contraction mapping theorem to prove Proposition \ref{proposition2}, we need to estimate $\mathcal{F}(\phi)$ and $l_k$. In the following, we assume that $\|\phi\|_{\ast}$ is small.
\begin{lemma}\label{lemma5}
We have $\|\mathcal{F}(\phi)\|_{**}\leq C\|\phi\|_*^{\min(2,2_s^*-1)}$.
\end{lemma}
{\bf Proof.}
First, we consider the case $2_s^*\leq3$.
Using the H\"{o}lder inequality, we obtain:
\begin{equation*}
\begin{aligned}
|\mathcal{F}(\phi)|&\leq C\|\phi\|_*^{2_s^*-1}(\sum\limits_{j=1}^k\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|y-x_j|)^{\frac{N-2s}{2}+\tau}})^{2_s^*-1}\\
&\leq C\|\phi\|_*^{2_s^*-1}\lambda^{\frac{N+2s}{2}}\sum\limits_{j=1}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}}(\sum\limits_{j=1}^k\frac{1}{(1+\lambda|y-x_j|)^{\tau}})^{\frac{4s}{N-2s}}\\
&\leq C\|\phi\|_*^{2_s^*-1}\lambda^{\frac{N+2s}{2}}\sum\limits_{j=1}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}}.
\end{aligned}
\end{equation*}
When $2_s^*>3$, we have
\begin{equation*}
\begin{aligned}
|\mathcal{F}(\phi)|&\leq C\|\phi\|_*^{2}(\sum\limits_{j=1}^k\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|y-x_j|)^{\frac{N-2s}{2}+\tau}})^{2}(\sum\limits_{j=1}^k\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|y-x_j|)^{N-2s}})^{2_s^*-3}\\
&\quad+C\|\phi\|_*^{2_s^*-1}(\sum\limits_{j=1}^k\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|y-x_j|)^{\frac{N-2s}{2}+\tau}})^{2_s^*-1}\\
&\leq C(\|\phi\|_*^{2}+\|\phi\|_*^{2_s^*-1})\lambda^{\frac{N+2s}{2}}(\sum\limits_{j=1}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N-2s}{2}+\tau}})^{2_s^*-1}\\
&\leq C\|\phi\|_*^{2}\lambda^{\frac{N+2s}{2}}\sum\limits_{j=1}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}}.
\end{aligned}
\end{equation*}
Hence, we obtain
$\|\mathcal{F}(\phi)\|_{**}\leq C\|\phi\|_*^{\min(2,2_s^*-1)}$. \par
{$\Box$}
\vskip8pt
Next, we estimate $l_k$.
\begin{lemma}\label{lemma6}
If $N>4s+2\tau$, then there exists a small $\sigma>0$ such that $\|l_k\|_{**}\leq \frac{C}{\lambda^{s+\sigma}}$.
\end{lemma}
{\bf Proof.}
By symmetry, we may assume that $y\in\Omega_1$; then $|y-x_j|\geq|y-x_1|$. We first estimate the term $J_1$. We have
\begin{equation*}
\begin{aligned}
|J_1|&\leq C[(\sum\limits_{j=2}^kU_{x_j,\lambda})^{2_s^*-1}+U_{x_1,\lambda}^{2_s^*-2}\sum\limits_{j=2}^kU_{x_j,\lambda}+\sum\limits_{j=2}^kU_{x_j,\lambda}^{2_s^*-1}]\\
&\leq C\lambda^{\frac{N+2s}{2}}\big(\sum\limits_{j=2}^k\frac{1}{(1+\lambda|y-x_j|)^{N-2s}}\big)^{2_s^*-1}+\frac{C\lambda^{\frac{N+2s}{2}}}{(1+\lambda|y-x_1|)^{4s}}\sum\limits_{j=2}^k\frac{1}{(1+\lambda|y-x_j|)^{N-2s}}.
\end{aligned}
\end{equation*}
If $N-2s\geq\frac{N+2s}{2}-\tau,$ then we have
\begin{equation*}
\begin{aligned}
&\quad \frac{1}{(1+\lambda|y-x_1|)^{4s}}\sum\limits_{j=2}^k\frac{1}{(1+\lambda|y-x_j|)^{N-2s}}\\
&\leq\frac{1}{(1+\lambda|y-x_1|)^{\frac{N+2s}{2}+\tau}}\sum\limits_{j=2}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}-\tau}}\\
&\leq\frac{1}{(1+\lambda|y-x_1|)^{\frac{N+2s}{2}+\tau}}\sum\limits_{j=2}^k\frac{1}{(\lambda|x_1-x_j|)^{\frac{N+2s}{2}-\tau}}\\
&\leq\frac{1}{(1+\lambda|y-x_1|)^{\frac{N+2s}{2}+\tau}}(\frac{k}{\lambda})^{\frac{N+2s}{2}-\tau}.
\end{aligned}
\end{equation*}
If $N-2s<\frac{N+2s}{2}-\tau,$ then $4s>\frac{N+2s}{2}+\tau$, and we obtain
\begin{equation*}
\begin{aligned}
&\quad \frac{1}{(1+\lambda|y-x_1|)^{4s}}\sum\limits_{j=2}^k\frac{1}{(1+\lambda|y-x_j|)^{N-2s}}\\
&\leq\frac{1}{(1+\lambda|y-x_1|)^{\frac{N+2s}{2}+\tau}}\sum\limits_{j=2}^k\frac{1}{(\lambda|x_1-x_j|)^{N-2s}}\\
&\leq\frac{1}{(1+\lambda|y-x_1|)^{\frac{N+2s}{2}+\tau}}(\frac{k}{\lambda})^{N-2s}.
\end{aligned}
\end{equation*}
Using the H\"{o}lder inequality, we have
\begin{equation*}
\begin{aligned}
&\quad\big(\sum\limits_{j=2}^k\frac{1}{(1+\lambda|y-x_j|)^{N-2s}}\big)^{2_s^*-1}\\
&\leq \sum\limits_{j=2}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}}\big(\sum\limits_{j=2}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{4s}(\frac{N-2s}{2}-\frac{N-2s}{N+2s}\tau)}}\big)^{\frac{4s}{N-2s}}\\
&\leq C\sum\limits_{j=2}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}}(\frac{k}{\lambda})^{\frac{N+2s}{N-2s}(\frac{N-2s}{2}-\frac{N-2s}{N+2s}\tau)}\\
&\leq C\sum\limits_{j=2}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}}(\frac{1}{\lambda})^{s+\sigma}.
\end{aligned}
\end{equation*}
Thus
$$\|J_1\|_{**}\leq C(\frac{1}{\lambda})^{s+\sigma}.$$
Now, we estimate $J_2$. Note that $\zeta=0$ when $|(r,y'')-(r_0,y''_0)|\geq2\delta$ and
$\frac{1}{\lambda}\leq\frac{C}{1+\lambda|y-x_j|}$ when $|(r,y'')-(r_0,y''_0)|<2\delta$.
We have
\begin{equation*}
\begin{aligned}
|J_2|&\leq\frac{C}{\lambda^{2s}}\lambda^{\frac{N+2s}{2}}\sum\limits_{j=1}^k\frac{\zeta}{(1+\lambda|y-x_j|)^{N-2s}}\\
&\leq\frac{C}{\lambda^{\min(2s, N-\frac{N+2s}{2}-\tau)}}\lambda^{\frac{N+2s}{2}}\sum\limits_{j=1}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}}.
\end{aligned}
\end{equation*}
If $N>4s+2\tau$, then $\|J_2\|_{**}\leq\frac{C}{\lambda^{s+\sigma}}$.
\vskip8pt
Finally, we estimate $J_3$. We have
\begin{equation*}
\begin{aligned}
J_3&=\sum\limits_{j=1}^kc(N,s)\big(\lim\limits_{\varepsilon\rightarrow 0^+}\int_{B_{\frac{\delta}{4}}(y)\setminus B_\varepsilon(y)}\frac{\big(\zeta(y)-\zeta(x)\big)U_{x_j,\lambda}(x)}{|x-y|^{N+2s}}dx\\
&\quad\quad\quad\quad\quad\quad\quad+\int_{\mathbb{R}^N\setminus B_{\frac{\delta}{4}}(y)}\frac{\big(\zeta(y)-\zeta(x)\big)U_{x_j,\lambda}(x)}{|x-y|^{N+2s}}dx\big)\\
&=:\sum\limits_{j=1}^kc(N,s)(J_{31}+J_{32}).
\end{aligned}
\end{equation*}
We first estimate $J_{31}$. From the definition of the function $\zeta$, we have
$\zeta(y)-\zeta(x)=0$ when $x,y\in B_{\delta}(x_j)$ or $x,y\in \mathbb{R}^N\setminus \overline{B_{2\delta}(x_j)}$. So $J_{31}\neq0$ only when $B_{\frac{\delta}{4}}(y)\subset B_{\frac{5}{2}\delta}(x_j)\setminus B_{\frac{1}{2}\delta}(x_j)$. In that case, $\frac{3}{4}\delta\leq|y-x_j|\leq|x-y|+|x-x_j|\leq\frac{\delta}{4}+|x-x_j|\leq\frac{3}{2}|x-x_j|\leq\frac{15}{4}\delta$.
Furthermore, we split $J_{31}$ as follows:
\begin{equation*}
\begin{aligned}
J_{31}&=
\lim\limits_{\varepsilon\rightarrow 0^+}\int_{B_{\frac{\delta}{4}}(y)\setminus B_\varepsilon(y)}\frac{\nabla\zeta(y)\cdot (y-x)U_{x_j,\lambda}(x)}{|x-y|^{N+2s}}dx+O\Big(\lim\limits_{\varepsilon\rightarrow 0^+}\int_{B_{\frac{\delta}{4}}(y)\setminus B_\varepsilon(y)}\frac{U_{x_j,\lambda}(x)}{|x-y|^{N+2s-2}}dx\Big)\\
&=:J_{311}+J_{312}.
\end{aligned}
\end{equation*}
Note that $B_{\frac{\delta}{4}}(y)\setminus B_\varepsilon(y)$ is a symmetric set. Then by the mean value theorem, we get
\begin{equation*}
\begin{aligned}
|J_{311}|&=\big|\lim\limits_{\varepsilon\rightarrow 0^+}\int_{B_{\frac{\delta}{4}}(y)\setminus B_\varepsilon(y)}\frac{\nabla\zeta(y)\cdot (y-x)U_{x_j,\lambda}(x)}{|x-y|^{N+2s}}\big|\\
&=\big|C(N,s)\lim\limits_{\varepsilon\rightarrow 0^+}\int_{B_{\frac{\delta}{4}}(0)\setminus B_\varepsilon(0)}\frac{\nabla\zeta(y)\cdot z}{|z|^{N+2s}}\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda^2|z+y-x_j|^2)^{\frac{N-2s}{2}}}\big|\\
&=\big|\frac{C(N,s)\lambda^{\frac{N-2s}{2}}}{2}\lim\limits_{\varepsilon\rightarrow 0^+}\int_{B_{\frac{\delta}{4}}(0)\setminus B_\varepsilon(0)}\frac{\nabla\zeta(y)\cdot z}{|z|^{N+2s}}\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\times\big(\frac{1}{(1+\lambda^2|z+y-x_j|^2)^{\frac{N-2s}{2}}}-\frac{1}{(1+\lambda^2|-z+y-x_j|^2)^{\frac{N-2s}{2}}}\big)\big|\\
&\leq C\lambda^{\frac{N-2s}{2}+1}\int_{B_{\frac{\delta}{4}}(0)}\frac{|\nabla\zeta(y)|}{|z|^{N+2s-2}}
\frac{1}{(1+\lambda|(2\vartheta-1)z+y-x_j|)^{N-2s+1}}\\
&\leq\frac{C}{\lambda^{s+\sigma}}\lambda^{\frac{N+2s}{2}}\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}},
\end{aligned}
\end{equation*}
where $0<\vartheta<1$, and we used that $|(2\vartheta-1)z+y-x_j|\geq|y-x_j|-|(2\vartheta-1)z|\geq\frac{2}{3}|y-x_j|$ when $z\in B_{\frac{\delta}{4}}(0)$.
Similarly, we can obtain $$|J_{312}|\leq\frac{C}{\lambda^{s+\sigma}}\lambda^{\frac{N+2s}{2}}\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}}.$$
For the term $J_{32}$, we distinguish three cases.\\
Case 1: If $y\in B_\delta(x_j)$, then
\begin{equation*}
\begin{aligned}
|J_{32}|&\leq\int_{\mathbb{R}^N\setminus \big(B_{\frac{\delta}{4}}(y)\cup B_{\delta}(x_j)\big)}\frac{1}{|x-y|^{N+2s}}\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|x-x_j|)^{N-2s}}\\
&\leq\frac{C}{\lambda^{2s}}\lambda^{\frac{N+2s}{2}}\frac{1}{(1+\lambda|y-x_j|)^{N-2s}}\int_{\mathbb{R}^N\setminus B_{\frac{\delta}{4}}(y)}\frac{1}{|x-y|^{N+2s}}\\
&\leq\frac{C}{\lambda^{s+\sigma}}\lambda^{\frac{N+2s}{2}}\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}}.
\end{aligned}
\end{equation*}
Case 2: If $\delta\leq|y-x_j|\leq3\delta$, then by Lemma \ref{Lemma appendixaA30},
\begin{equation*}
\begin{aligned}
|J_{32}|&\leq\int_{\mathbb{R}^N\setminus B_{\frac{\delta}{4}}(y)}\frac{1}{|x-y|^{N+2s}}\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|x-x_j|)^{N-2s}}\\
&\leq C\lambda^{\frac{N+2s}{2}}\int_{\mathbb{R}^N\setminus B_{\frac{\delta\lambda}{4}}(\lambda y)}\frac{1}{|z-\lambda y|^{N+2s}}\frac{1}{(1+|z-\lambda x_j|)^{N-2s}}\\
&\leq C\lambda^{\frac{N+2s}{2}}\big(\frac{1}{(\lambda|y-x_j|)^N}+\frac{1}{\lambda^{2s}}\frac{1}{(\lambda|y-x_j|)^{N-2s}}\big)\\
&\leq\frac{C}{\lambda^{s+\sigma}}\lambda^{\frac{N+2s}{2}}\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}}.
\end{aligned}
\end{equation*}
Case 3: Suppose that $|y-x_j|>3\delta$. Note that $|x-y|\geq|y-x_j|-|x-x_j|\geq\frac{1}{3}|y-x_j|$ when $|y-x_j|\geq3\delta$ and $|x-x_j|\leq2\delta$. Then we have
\begin{equation*}
\begin{aligned}
|J_{32}|&\leq\int_{B_{2\delta}(x_j)}\frac{1}{|x-y|^{N+2s}}\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|x-x_j|)^{N-2s}}\\
&\leq \frac{C}{\lambda^{\frac{N-2s}{2}}}\int_{B_{2\delta}(x_j)}\frac{1}{|x-y|^{N+2s}}\frac{1}{|x-x_j|^{N-2s}}\\
&\leq \frac{C\lambda^{\frac{N+2s}{2}}}{\lambda^{N}}\frac{1}{|y-x_j|^{\frac{N+2s}{2}+\tau}}\int_{B_{2\delta}(x_j)}\frac{1}{|x-x_j|^{N-2s}}\\
&\leq\frac{C}{\lambda^{s+\sigma}}\lambda^{\frac{N+2s}{2}}\frac{1}{(1+\lambda|y-x_j|)^{\frac{N+2s}{2}+\tau}}.
\end{aligned}
\end{equation*}
Thus, we obtain $\|J_3\|_{**}\leq\frac{C}{\lambda^{s+\sigma}}$.
\vskip8pt
As a result, we have proved that $\|l_k\|_{**}\leq\frac{C}{\lambda^{s+\sigma}}$.
{$\Box$}
\vskip8pt
{\bf Proof of Proposition \ref{proposition2}.}
Let $y=(y', y'')$, $y'\in \mathbb{R}^2$, $y''\in \mathbb{R}^{N-2}$. Set
$$\begin{array}{ll}
E=\{u : u\in C(\mathbb{R}^N)\cap H_s, \|u\|_{\ast}\leq\frac{1}{\lambda^s}, \sum\limits_{i=1}^k\int_{\mathbb{R}^N}Z^{2_s^*-2}_{x_i,\lambda}Z_{i,l}u=0, \ l=1,\ldots, N\}.\end{array}$$
By Proposition \ref{proposition1}, solving \eqref{problem4} for
$\phi$ is equivalent to the following fixed point
problem:
$$\phi=A(\phi):=L_{k}\big(\mathcal{F}(\phi)\big)+L_{k}(l_{k}).$$
Hence, it suffices to prove that the operator $A$ is a
contraction map from the complete space $E$ to itself. In fact, for any $\phi\in E$, by Proposition \ref{proposition1}, Lemma
\ref{lemma5} and Lemma \ref{lemma6}, we have
\begin{displaymath}
\begin{aligned}
\|A(\phi)\|_\ast\leq&C\|L_{k}\big(\mathcal{F}(\phi)\big)\|_{\ast}+C\|L_{k}(l_{k})\|_{\ast}\\
\leq&C\Big[\|\mathcal{F}(\phi)\|_{\ast\ast}+\|l_{k}\|_{\ast\ast}\Big]\\
\leq&\frac{1}{\lambda^{s}},
\end{aligned}
\end{displaymath}
which shows that $A$ maps $E$ into itself.
If $2_s^*\leq3$, then for any $\phi_1, \phi_2\in E$, we have
\begin{displaymath}
\begin{aligned}
\|A(\phi_{1})-A(\phi_{2})\|_*=&\|L_{k}\big(\mathcal {F}(\phi_{1})-\mathcal {F}(\phi_{2})\big)\|_{\ast}\\
\leq&C\|\mathcal {F}(\phi_{1})-\mathcal {F}(\phi_{2})\|_{\ast\ast}\\
\leq&C\|(|\phi_{1}|+|\phi_{2}|)^{2_s^*-2}|\phi_{1}-\phi_{2}|\|_{\ast\ast}\\
\leq&\frac{1}{2}\|\phi_{1}-\phi_{2}\|_{\ast}.
\end{aligned}
\end{displaymath}
The case $2_s^*>3$ can be handled in a similar way.
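The chain of inequalities above rests on a standard pointwise bound; the following schematic computation (constants are not tracked, the passage from $\|\cdot\|_{**}$ to the $\|\cdot\|_*$ norms uses the weighted-sum structure of the norms, and the smallness comes from $\|\phi_i\|_*\leq\lambda^{-s}$ for $\phi_i\in E$) records the mechanism:

```latex
% For f(t)=(t)_+^{2_s^*-1} with 1<2_s^*-1\le 2, the mean value theorem gives
% |f(a)-f(b)|\le C(|a|+|b|)^{2_s^*-2}|a-b|; after the linear parts cancel,
\|\mathcal F(\phi_1)-\mathcal F(\phi_2)\|_{**}
  \le C\,\big\|\big(|\phi_1|+|\phi_2|\big)^{2_s^*-2}|\phi_1-\phi_2|\big\|_{**}
  \le C\,\big(\|\phi_1\|_*+\|\phi_2\|_*\big)^{2_s^*-2}\,\|\phi_1-\phi_2\|_*
  \le \frac{C}{\lambda^{s(2_s^*-2)}}\,\|\phi_1-\phi_2\|_*,
```

and the prefactor drops below $\frac{1}{2}$ once $\lambda$ (equivalently $k$) is large.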
\vskip8pt
Hence, $A$ is a contraction map, and the Banach fixed point theorem
yields a unique solution $\phi\in E$ of
problem \eqref{problem4}.\par
\vskip8pt
Finally, by Proposition \ref{proposition1}, we have
$$\|\phi\|_{\ast}\leq C(\frac{1}{\lambda})^{s+\sigma} \ \hbox{and} \ |c_l|\leq C\|\mathcal {F}(\phi)+l_k\|_{\ast\ast}\leq C(\frac{1}{\lambda})^{s+\sigma}.$$
{$\Box$}
\section{Proof of the Main Theorem}
Let $\phi$ be the function obtained in Proposition \ref{proposition2} and $u_k=Z_{\overline{r}, \overline{y}'', \lambda}+\phi$. In order to use local Pohozaev identities, we work with the extension of $u_k$, namely $\tilde{u}_{k}=\tilde{Z}_{\overline{r}, \overline{y}'', \lambda}+\tilde{\phi}$, where
$\tilde{Z}_{\overline{r}, \overline{y}'', \lambda}$ and $\tilde{\phi}$ are the extensions of $Z_{\overline{r}, \overline{y}'', \lambda}$ and $\phi$, respectively.
Then we have
\begin{equation}\label{widetildeuLm}
\begin{cases}
\displaystyle\mathrm{div}(t^{1-2s}\nabla \widetilde{u}_k)=0, \quad &\mbox{in} \quad \mathbb{R}^{N+1}_+, \\
\displaystyle-\lim\limits_{t\rightarrow 0^+}t^{1-2s}\partial_t \widetilde{u}_k=\omega_s \big(-V(r,y'')u_k+(u_k)_+^{2^*_s-1}+\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^kZ_{x_j,\lambda}^{2^*_s-2}Z_{j,l}\big), \quad & \mbox{on } \quad \mathbb{R}^{N}.
\end{cases}
\end{equation}
Without loss of generality, we may assume $\omega_s=1$. Multiplying \eqref{widetildeuLm} by $\frac{\partial \tilde{u}_{k}}{\partial y_{i}}$ ($i=3,\ldots, N$) and $\langle\nabla \tilde{u}_k, Y\rangle$ respectively, and integrating by parts, we obtain the following two Pohozaev identities:
\begin{eqnarray}\label{firstpohozaevidentity1}
\begin{aligned}
&\quad-\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial \tilde{u}_{k}}{\partial \nu}\frac{\partial \tilde{u}_{k}}{\partial y_{i}}+\frac{1}{2}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla \tilde{u}_{k}|^{2}\nu_{i}\\
&=\int_{B_{\rho}}\big(-V(r,y'')u_k+(u_k)_+^{2^*_s-1}+\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^kZ_{x_j,\lambda}^{2^*_s-2}Z_{j,l}\big)\frac{\partial u_k}{\partial y_i}, \ \ \ \ \ \ i=3,\ldots, N,
\end{aligned}
\end{eqnarray}
and
\begin{eqnarray}\label{secondtpohozaevidentity1}
\begin{aligned}
&\quad-\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \tilde{u}_k, Y\rangle \frac{\partial\tilde{u}_k}{\partial\nu} +\frac{1}{2}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla \tilde{u}_k|^{2}\langle Y,\nu\rangle +\frac{2s-N}{2}\int_{\partial\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\tilde{u}_k}{\partial\nu} \tilde{u}_k\\
&=\int_{ B_{\rho}}\big(-V(r,y'')u_k+(u_k)_+^{2^*_s-1}+\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^kZ_{x_j,\lambda}^{2^*_s-2}Z_{j,l}\big)\langle y, \nabla u_k\rangle.
\end{aligned}
\end{eqnarray}
In the following, we assume $\rho\in(2\delta,5\delta)$. We have the following lemma.
\begin{lemma}
Suppose that $(\overline{r}, \overline{y}'', \lambda)$ satisfies
\begin{eqnarray}\label{firstpohozaevidentity}
\begin{aligned}
&\quad-\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial \tilde{u}_{k}}{\partial \nu}\frac{\partial \tilde{u}_{k}}{\partial y_{i}}+\frac{1}{2}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla \tilde{u}_{k}|^{2}\nu_{i}\\
&=\int_{B_{\rho}}\big(-V(r,y'')u_k+(u_k)_+^{2^*_s-1}\big)\frac{\partial u_k}{\partial y_i}, \ \ \ \ \ \ i=3,\ldots, N,
\end{aligned}
\end{eqnarray}
\begin{eqnarray}\label{secondtpohozaevidentity}
\begin{aligned}
&\quad-\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \tilde{u}_k, Y\rangle \frac{\partial\tilde{u}_k}{\partial\nu} +\frac{1}{2}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla \tilde{u}_k|^{2}\langle Y,\nu\rangle +\frac{2s-N}{2}\int_{\partial\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\tilde{u}_k}{\partial\nu} \tilde{u}_k\\
&=\int_{ B_{\rho}}\big(-V(r,y'')u_k+(u_k)_+^{2^*_s-1}\big)\langle y, \nabla u_k\rangle
\end{aligned}
\end{eqnarray}
and
\begin{eqnarray}\label{thirdpohozaevidentity}
\begin{aligned}
\int_{\mathbb{R}^N}\big((-\Delta)^su_k+V(r,y'')u_k-(u_k)_+^{2^*_s-1}\big)\frac{\partial Z_{\overline{r}, \overline{y}'', \lambda}}{\partial\lambda}=0.
\end{aligned}
\end{eqnarray}
Then $c_l=0$ for $l=1,\ldots, N$.
\end{lemma}
{\bf Proof.} By \eqref{firstpohozaevidentity1}, \eqref{secondtpohozaevidentity1}, \eqref{firstpohozaevidentity} and \eqref{secondtpohozaevidentity}, we have
\begin{eqnarray}\label{cl1}
\begin{aligned}
\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,l}\frac{\partial u_k}{\partial y_i}=0, \ i=3,\ldots,N, \\ \sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,l}\langle y, \nabla u_k\rangle=0.
\end{aligned}
\end{eqnarray}
Note that $\zeta=0$ in $\mathbb{R}^N\setminus B_{\rho}$. By \eqref{thirdpohozaevidentity} and \eqref{cl1}, we have
\begin{eqnarray}\label{aum1}
\begin{aligned}
\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^k\int_{\mathbb{R}^N}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,l}v=\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,l}v=0,
\end{aligned}
\end{eqnarray}
for $v=\frac{\partial u_k}{\partial y_i}$, $v=\langle y, \nabla u_k\rangle$ and $v=\frac{\partial Z_{\overline{r}, \overline{y}'', \lambda}}{\partial\lambda}$.
By direct calculation, we have
\begin{eqnarray}\label{aum5}
\begin{aligned}
\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,2}\langle y', \nabla_{y'} Z_{\overline{r},\overline{y}'',\lambda}\rangle=k\lambda^{2}(a_1+o(1)),
\end{aligned}
\end{eqnarray}
\begin{eqnarray}\label{aum6}
\begin{aligned}
\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,i}\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial y_i}=k\lambda^{2}(a_2+o(1)), \ \ i=3,\ldots,N,
\end{aligned}
\end{eqnarray}
and
\begin{eqnarray}\label{aum7}
\begin{aligned}
\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,1}\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial \lambda}=\frac{k}{\lambda^{2}}(a_3+o(1)),
\end{aligned}
\end{eqnarray}
where $a_1>0$, $a_2>0$ and $a_3>0$.
Furthermore, we have
\begin{eqnarray}\label{aum3}
\begin{aligned}
&\quad\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,l}\langle y, \nabla Z_{\overline{r},\overline{y}'',\lambda}\rangle\\
&=\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,2}\langle y', \nabla_{y'} Z_{\overline{r},\overline{y}'',\lambda}\rangle c_2+O(\frac{k}{\lambda^{N-2}}|c_2|)+o(k\lambda^2\sum\limits_{l=3}^N|c_l|)+o(k|c_1|)\\
&=k\lambda^2(a_1+o(1))c_2+o(k\lambda^2\sum\limits_{l=3}^N|c_l|)+o(k|c_1|)
\end{aligned}
\end{eqnarray}
and
\begin{eqnarray}\label{aum4}
\begin{aligned}
&\quad\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,l}\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial y_i}\\
&=\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,i}\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial y_i} c_i+o(k\lambda^2\sum\limits_{l\neq 1,i}|c_l|)+o(k|c_1|)\\
&=k\lambda^2(a_2+o(1))c_i+o(k\lambda^2\sum\limits_{l\neq 1,i}|c_l|)+o(k|c_1|), \ \ \ \ i=3,\ldots,N.
\end{aligned}
\end{eqnarray}
Since $\phi$ is a solution to \eqref{problem4}, by regularity estimates for fractional elliptic equations (see for example \cite{Silvestre}), we obtain $\phi\in C^1$ when $s>\frac{1}{2}$.
Integrating by parts and using $\|\phi\|_*\leq\frac{C}{\lambda^{s+\sigma}}$, we have
\begin{eqnarray}
\begin{aligned}
\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,l}v=o(k\lambda^2\sum\limits_{l=2}^N|c_l|)+o(k|c_1|),
\end{aligned}
\end{eqnarray}
for $v=\langle y, \nabla\phi_{\overline{r},\overline{y}'',\lambda}\rangle$ and
$v=\frac{\partial\phi_{\overline{r},\overline{y}'',\lambda}}{\partial y_i}$.
\vskip8pt
It follows from \eqref{aum1} that
\begin{eqnarray}\label{aum2}
\begin{aligned}
\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,l}v=o(k\lambda^2\sum\limits_{l=2}^N|c_l|)+o(k|c_1|),
\end{aligned}
\end{eqnarray}
for $v=\langle y, \nabla Z_{\overline{r},\overline{y}'',\lambda}\rangle$ and $v=\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial y_i}$.
By \eqref{aum3}, \eqref{aum4} and \eqref{aum2}, we have
\begin{eqnarray}\label{aum8}
\begin{aligned}
c_l=o(\frac{1}{\lambda^2}|c_1|), \ l=2,\ldots,N.
\end{aligned}
\end{eqnarray}
From \eqref{aum1}, \eqref{aum7} and \eqref{aum8}, we deduce that
\begin{eqnarray}
\begin{aligned}
0&=\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,1}\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial \lambda}\\
&=\sum\limits_{j=1}^k\int_{B_{\rho}}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,1}\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial \lambda}c_1+o(\frac{k}{\lambda^2})c_1\\
&=\frac{k}{\lambda^{2}}(a_3+o(1))c_1+o(\frac{k}{\lambda^2})c_1,
\end{aligned}
\end{eqnarray}
which implies $c_1=0$, and hence $c_l=0$ for $l=2,\ldots,N$.
{$\Box$}
Note that
\begin{eqnarray*}
\begin{aligned}
&\frac{2s-N}{2}\int_{\partial\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\tilde{u}_k}{\partial\nu} \tilde{u}_k\\
=&\frac{2s-N}{2}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\tilde{u}_k}{\partial\nu} \tilde{u}_k+\frac{2s-N}{2}\int_{B_{\rho}}
\big(-V(r,y'')u_k+(u_k)_+^{2^*_s-1}+\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^kZ_{x_j,\lambda}^{2^*_s-2}Z_{j,l}\big)u_k,
\end{aligned}
\end{eqnarray*}
\begin{eqnarray*}
\begin{aligned}
&\quad\int_{B_{\rho}}\big(-V(r,y'')u_k+(u_k)_+^{2^*_s-1}\big)\langle y, \nabla u_k\rangle\\
&=\int_{B_{\rho}}\big(-\frac{1}{2}V(r,y'')\langle y, \nabla u_k^2\rangle+\frac{1}{2^*_s}\langle y, \nabla (u_k)_+^{2^*_s}\rangle\big)\\
&=-\frac{1}{2}\int_{\partial B_{\rho}}V(r,y'')u_k^2\langle y, \nu\rangle+\frac{1}{2}\int_{B_{\rho}}\big(NV(r,y'')+\langle\nabla V(r,y''), y\rangle\big)u_k^2\\
&\quad\quad+
\frac{1}{2^*_s}\int_{\partial B_{\rho}}(u_k)_+^{2^*_s}\langle y, \nu\rangle+\frac{2s-N}{2}\int_{B_{\rho}}(u_k)_+^{2^*_s},
\end{aligned}
\end{eqnarray*}
and $\sum\limits_{l=1}^Nc_l\int_{B_{\rho}}
\sum\limits_{j=1}^kZ_{x_j,\lambda}^{2^*_s-2}Z_{j,l}\phi=0$.
We find that \eqref{secondtpohozaevidentity} is equivalent to
\begin{eqnarray}\label{aum12}
\begin{aligned}
&\quad
\int_{ B_{\rho}}\big(sV(r,y'')+\frac{1}{2}\langle\nabla V(r,y''), y\rangle\big)u_k^2\\
&=-\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \tilde{u}_k, Y\rangle \frac{\partial\tilde{u}_k}{\partial\nu} +\frac{1}{2}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla \tilde{u}_k|^{2}\langle Y,\nu\rangle +\frac{2s-N}{2}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\tilde{u}_k}{\partial\nu} \tilde{u}_k\\
&\quad+\frac{1}{2}\int_{\partial B_{\rho}}V(r,y'')u_k^2\langle y,\nu\rangle-\frac{1}{2^*_{s}}\int_{\partial B_{\rho}}(u_k)_+^{2^*_{s}}\langle y,\nu\rangle+\frac{2s-N}{2}\sum\limits_{l=1}^Nc_l\int_{B_{\rho}}
\sum\limits_{j=1}^kZ_{x_j,\lambda}^{2^*_s-2}Z_{j,l}Z_{\overline{r},\overline{y}'',\lambda}.
\end{aligned}
\end{eqnarray}
Similarly, \eqref{firstpohozaevidentity} is equivalent to
\begin{eqnarray}\label{aum11}
\begin{aligned}
&\quad\frac{1}{2}\int_{B_{\rho}}\frac{\partial V(r,y'')}{\partial y_i''}u_k^2\\
&=\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial \tilde{u}_{k}}{\partial \nu}\frac{\partial \tilde{u}_{k}}{\partial y_{i}}-\frac{1}{2}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla \tilde{u}_{k}|^{2}\nu_{i}\\
&\quad+\frac{1}{2}\int_{\partial B_{\rho}}V(r,y'')u_k^2\nu_i
+\frac{1}{2_s^*}\int_{\partial B_{\rho}}(u_k)_+^{2^*_s}\nu_i , \ \ i=3,\ldots, N.
\end{aligned}
\end{eqnarray}
\begin{lemma}\label{relation estimate}
Relations \eqref{aum12} and \eqref{aum11} are equivalent to
\begin{equation}\label{aum9}
\begin{aligned}
\int_{B_{\rho}}\big(sV(r,y'')+\frac{1}{2}\langle\nabla V(r,y''),y\rangle\big)u_k^2=O(\frac{k}{\lambda^{2s+\sigma}})
\end{aligned}
\end{equation}
and
\begin{equation}\label{aum10}
\begin{aligned}
\int_{B_{\rho}}\frac{\partial V(r,y'')}{\partial y_i}u_k^2=O(\frac{k}{\lambda^{2s+\sigma}}), \ \ i=3,\ldots,N.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof} We only give the proof of \eqref{aum9}; the proof of \eqref{aum10} is similar.
Note that $\tilde{u}_k=\tilde{Z}_{\overline{r},\overline{y}'',\lambda}+\tilde{\phi}$. We have
\begin{eqnarray}
\begin{aligned}
&\quad\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \tilde{u}_k, Y\rangle \frac{\partial\tilde{u}_k}{\partial\nu}\\
&=\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \widetilde{Z}_{\overline{r},\overline{y}'',\lambda}, Y\rangle \frac{\partial\widetilde{Z}_{\overline{r},\overline{y}'',\lambda}}{\partial\nu}+\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \widetilde{\phi}, Y\rangle \frac{\partial\widetilde{\phi}}{\partial\nu}\\
&\quad\quad+\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \widetilde{Z}_{\overline{r},\overline{y}'',\lambda}, Y\rangle \frac{\partial\widetilde{\phi}}{\partial\nu}
+\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \widetilde{\phi}, Y\rangle \frac{\partial\widetilde{Z}_{\overline{r},\overline{y}'',\lambda}}{\partial\nu}.
\end{aligned}
\end{eqnarray}
Using Lemma \ref{estimate for Z}, we can obtain
\begin{eqnarray}\label{aum13}
\begin{aligned}
&\quad|\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \widetilde{Z}_{\overline{r},\overline{y}'',\lambda}, Y\rangle \frac{\partial\widetilde{Z}_{\overline{r},\overline{y}'',\lambda}}{\partial\nu}|\\
&\leq \frac{C}{\lambda^{N-2s}}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}(\sum\limits_{i=1}^k\frac{1}{(1+|y-x_{i}|)^{N-2s+1}})^2\\
&\leq \frac{Ck^2}{\lambda^{N-2s}}\int_{\partial''\mathcal B^{+}_{\rho}}\frac{t^{1-2s}}{(1+|y-x_{1}|)^{2N-4s+2}}\\
&\leq \frac{Ck^2}{\lambda^{N-2s}}.
\end{aligned}
\end{eqnarray}
By Lemma \ref{le:lemma appendixB.4},
\begin{equation}\label{aum15}
\begin{aligned}
|\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \widetilde{\phi}, Y\rangle \frac{\partial\widetilde{\phi}}{\partial\nu}|\leq C\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla\widetilde{\phi}|^2\leq \frac{Ck\|\phi\|_*^2}{\lambda^{\tau}}.
\end{aligned}
\end{equation}
Arguing as in the proofs of \eqref{aum13} and \eqref{aum15}, we also have
\begin{equation*}
\begin{aligned}
|\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \widetilde{Z}_{\overline{r},\overline{y}'',\lambda}, Y\rangle \frac{\partial\widetilde{\phi}}{\partial\nu}
+\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \widetilde{\phi}, Y\rangle \frac{\partial\widetilde{Z}_{\overline{r},\overline{y}'',\lambda}}{\partial\nu}|\leq\frac{Ck\|\phi\|_*}{\lambda^{\frac{N-2s}{2}}}.
\end{aligned}
\end{equation*}
Note that $N>4s+2\tau$. So we have proved that $$|\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\langle\nabla \tilde{u}_k, Y\rangle \frac{\partial\tilde{u}_k}{\partial\nu}|\leq \frac{Ck}{\lambda^{2s+\sigma}}.$$
Similarly, we can prove $$|\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla \tilde{u}_k|^{2}\langle Y,\nu\rangle|\leq \frac{Ck}{\lambda^{2s+\sigma}}.$$
Next, we estimate the term $\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\tilde{u}_k}{\partial\nu} \tilde{u}_k$:
\begin{equation*}
\begin{aligned}
&\quad\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\tilde{u}_k}{\partial\nu} \tilde{u}_k\\
&=\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\widetilde{Z}_{\overline{r},\overline{y}'',\lambda}}{\partial\nu} \widetilde{Z}_{\overline{r},\overline{y}'',\lambda}+\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\widetilde{\phi}}{\partial\nu}\widetilde{\phi}\\
&\quad+\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\widetilde{Z}_{\overline{r},\overline{y}'',\lambda}}{\partial\nu} \widetilde{\phi}+\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\widetilde{\phi}}{\partial\nu} \widetilde{Z}_{\overline{r},\overline{y}'',\lambda}.
\end{aligned}
\end{equation*}
By Lemma \ref{estimate for Z},
\begin{eqnarray}
\begin{aligned}
&\quad|\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\widetilde{Z}_{\overline{r},\overline{y}'',\lambda}}{\partial\nu} \widetilde{Z}_{\overline{r},\overline{y}'',\lambda}|\\
&\leq \frac{C}{\lambda^{N-2s}}\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\sum\limits_{i=1}^k\frac{1}{(1+|y-x_{i}|)^{N-2s+1}}\times\sum\limits_{j=1}^k\frac{1}{(1+|y-x_{j}|)^{N-2s}}\\
&\leq\frac{Ck^2}{\lambda^{N-2s}}\int_{\partial''\mathcal B^{+}_{\rho}}\frac{t^{1-2s}}{(1+|y-x_{1}|)^{2N-4s+1}}\\
&\leq \frac{Ck^2}{\lambda^{N-2s}}.
\end{aligned}
\end{eqnarray}
It follows from \eqref{B.4.2} that
\begin{equation}\label{aum14}
\begin{aligned}
\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\widetilde{\phi}|^2\leq \frac{Ck\|\phi\|_*^2}{\lambda^{\tau}}.
\end{aligned}
\end{equation}
By \eqref{aum14} and Lemma \ref{le:lemma appendixB.4}, one has
\begin{equation*}
\begin{aligned}
&\quad|\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\widetilde{\phi}}{\partial\nu}\widetilde{\phi}|\\
&\leq(\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla\widetilde{\phi}|^2)^{\frac{1}{2}}(\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\widetilde{\phi}^2)^{\frac{1}{2}}\\
&\leq \frac{Ck\|\phi\|_*^2}{\lambda^{\tau}}.
\end{aligned}
\end{equation*}
Similarly, we can get $$|\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\widetilde{Z}_{\overline{r},\overline{y}'',\lambda}}{\partial\nu} \widetilde{\phi}+\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\widetilde{\phi}}{\partial\nu} \widetilde{Z}_{\overline{r},\overline{y}'',\lambda}|\leq \frac{Ck\|\phi\|_*^2}{\lambda^{\tau}}.$$
We have thus proved that $$|\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}\frac{\partial\tilde{u}_k}{\partial\nu} \tilde{u}_k|\leq\frac{Ck}{\lambda^{2s+\sigma}}.$$
Since $\zeta=0$ on $\partial B_{\rho}$, we have $u_k=\phi$ on $\partial B_{\rho}$. We deduce that
\begin{equation*}
\begin{aligned}
|\int_{\partial B_{\rho}}V(r,y'')u_k^2\langle y,\nu\rangle|
&\leq C\|\phi\|_*^2\int_{\partial B_{\rho}}\big(\sum\limits_{j=1}^k\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|y-x_j|)^{\frac{N-2s}{2}+\tau}}\big)^2\\
&\leq \frac{Ck^2\|\phi\|_*^2}{\lambda^{2\tau}}\leq \frac{Ck}{\lambda^{2s+\tau}}
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
|\int_{\partial B_{\rho}}(u_k)_+^{2^*_{s}}\langle y,\nu\rangle|\leq\frac{Ck^{2^*_s}\|\phi\|_*^{2^*_s}}{\lambda^{2^*_s\tau}}\leq\frac{Ck}{\lambda^{2s+\tau}}.
\end{aligned}
\end{equation*}
From Proposition \ref{proposition2}, we have the following estimate for $c_l$:
\begin{equation}
\begin{aligned}
|c_l|\leq C(\frac{1}{\lambda})^{s+\sigma}.
\end{aligned}
\end{equation}
On the other hand,
\begin{equation}
\begin{aligned}
&\sum\limits_{j=1}^k\int_{B_{\rho}}
Z_{x_j,\lambda}^{2^*_s-2}Z_{j,l}Z_{\overline{r},\overline{y}'',\lambda}\\
=&\sum\limits_{j=1}^k\int_{B_{\rho}}
Z_{x_j,\lambda}^{2^*_s-1}Z_{j,l}+\sum\limits_{j=1}^k\int_{B_{\rho}}
\sum\limits_{i\neq j}Z_{x_j,\lambda}^{2^*_s-2}Z_{j,l}Z_{x_i,\lambda}\\
=&O(\frac{1}{\lambda^N})+O(\frac{k}{\lambda^{2s}}).
\end{aligned}
\end{equation}
These imply that
\begin{equation}
\begin{aligned}
&|\sum\limits_{l=1}^Nc_l\sum\limits_{j=1}^k\int_{B_{\rho}}
Z_{x_j,\lambda}^{2^*_s-2}Z_{j,l}Z_{\overline{r},\overline{y}'',\lambda}|\leq\frac{Ck}{\lambda^{2s+\sigma}}.
\end{aligned}
\end{equation}
Combining the above estimates, we find that \eqref{aum12} is equivalent to
\begin{equation*}
\begin{aligned}
\int_{B_{\rho}}\big(sV(r,y'')+\frac{1}{2}\langle\nabla V(r,y''),y\rangle\big)u_k^2=O(\frac{k}{\lambda^{2s+\sigma}}).
\end{aligned}
\end{equation*}
\end{proof}
\begin{lemma}\label{gry}
For any function $g(r,y'') \in C^1(\mathbb{R}^N)$, it holds that
\[
\displaystyle \int_{B_\rho}g(r,y'')u_k^2=k\left(\frac{1}{\lambda^{2s}}g(\bar{r},\bar{y}'')\displaystyle \int_{ \mathbb{R}^N}U_{0,1}^2+o\big(\frac{1}{\lambda^{2s}}\big)\right).
\]
\end{lemma}
\begin{proof}
We have
\[
\displaystyle \int_{B_\rho}g(r,y'')u_k^2=\displaystyle \int_{ B_\rho}g(r,y'')Z_{\bar{r},\bar{y}'',\lambda}^2+2\displaystyle \int_{ B_\rho}g(r,y'')Z_{\bar{r},\bar{y}'',\lambda}\phi+\displaystyle \int_{ B_\rho}g(r,y'')\phi^2.
\]
Note that
\begin{equation*}
\begin{aligned}
&\quad\big|2\int_{B_\rho}g(r,y'')Z_{\bar{r},\bar{y}'',\lambda}\phi+\int_{B_\rho}g(r,y'')\phi^2\big|\\
&\leq C\big(\|\phi\|_*\int_{B_\rho}\sum\limits_{i=1}^k\frac{\zeta\lambda^{N-2s}}{(1+\lambda|y-x_i|)^{N-2s}}\sum\limits_{j=1}^k\frac{1}{(1+\lambda|y-x_j|)^{\frac{N-2s}{2}+\tau}}\\
&\quad\quad\quad+\|\phi\|_*^2\int_{B_\rho}\big(\sum\limits_{i=1}^k\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|y-x_i|)^{\frac{N-2s}{2}+\tau}}\big)^2\big)\\
&\leq\frac{Ck\|\phi\|_*}{\lambda^{s}}+\frac{Ck\|\phi\|_*^2}{\lambda^{2\tau}}\leq\frac{Ck}{\lambda^{2s+\sigma}}
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\int_{B_\rho}g(r,y'')Z_{\bar{r},\bar{y}'',\lambda}^2&=\sum\limits_{j=1}^k\left(\int_{B_\rho}g(r,y'')Z^2_{x_j,\lambda}+\sum\limits_{i\neq j}\int_{B_\rho}g(r,y'')Z_{x_i,\lambda}Z_{x_j,\lambda}\right)\\
&=k\left(\frac{1}{\lambda^{2s}}g(\bar{r},\bar{y}'')\int_{\mathbb{R}^N}U^2_{0,1}+o(\frac{1}{\lambda^{2s}})\right).
\end{aligned}
\end{equation*}
This completes the proof.
\end{proof}
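The leading term in the last display can be seen by the change of variables $z=\lambda(y-x_j)$; schematically (suppressing the cutoff, assuming $g$ is continuous at the concentration points, which all approach $(\bar r,\bar y'')$, and absorbing the normalizing constant of the bubble into $U_{0,1}$):

```latex
\int_{B_\rho}g(r,y'')Z^2_{x_j,\lambda}
 \approx \int_{\mathbb{R}^N}g(r,y'')\,\frac{\lambda^{N-2s}}{(1+\lambda^2|y-x_j|^2)^{N-2s}}\,dy
 = \frac{1}{\lambda^{2s}}\int_{\mathbb{R}^N}\frac{g\big(x_j+\frac{z}{\lambda}\big)}{(1+|z|^2)^{N-2s}}\,dz
 = \frac{1}{\lambda^{2s}}\Big(g(\bar r,\bar y'')\int_{\mathbb{R}^N}U^2_{0,1}+o(1)\Big),
```

while each cross term $\int_{B_\rho}g\,Z_{x_i,\lambda}Z_{x_j,\lambda}$ with $i\neq j$ picks up an extra negative power of $\lambda|x_i-x_j|$ and is absorbed into the $o(\lambda^{-2s})$ remainder.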
{\bf Proof of Theorem \ref{th:1}.} By \eqref{aum9} and \eqref{aum10}, we can deduce that
\begin{equation*}
\begin{aligned}
\int_{B_{\rho}}\big(sV(r,y'')+\frac{1}{2}r\frac{\partial V(r,y'')}{\partial r}\big)u_k^2=O(\frac{k}{\lambda^{2s+\sigma}}).
\end{aligned}
\end{equation*}
That is,
\begin{equation}\label{aum18}
\begin{aligned}
\int_{B_{\rho}}\frac{1}{2r^{2s-1}}\frac{\partial (r^{2s}V(r,y''))}{\partial r}u_k^2=O(\frac{k}{\lambda^{2s+\sigma}}).
\end{aligned}
\end{equation}
Applying Lemma \ref{gry} to \eqref{aum10} and \eqref{aum18}, we obtain
\begin{equation*}
\begin{aligned}
k\big(\frac{1}{\lambda^{2s}}\frac{\partial V(\overline{r},\overline{y}'')}{\partial \overline{y}_i}\int_{\mathbb{R}^N}U_{0,1}^2+o(\frac{1}{\lambda^{2s}})\big)=o(\frac{k}{\lambda^{2s}})
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
k\big(\frac{1}{\lambda^{2s}}\frac{1}{\overline{r}^{2s-1}}\frac{\partial (\overline{r}^{2s}V(\overline{r},\overline{y}''))}{\partial \overline{r}}\int_{\mathbb{R}^N}U_{0,1}^2+o(\frac{1}{\lambda^{2s}})\big)=o(\frac{k}{\lambda^{2s}}).
\end{aligned}
\end{equation*}
Therefore, the equations determining $(\overline{r},\overline{y}'')$ are
\begin{equation}\label{aum21}
\begin{aligned}
\frac{\partial (\overline{r}^{2s}V(\overline{r},\overline{y}''))}{\partial \overline{y}_i}=o(1), \ \ i=3,\ldots,N,
\end{aligned}
\end{equation}
and
\begin{equation}\label{aum22}
\begin{aligned}
\frac{\partial (\overline{r}^{2s}V(\overline{r},\overline{y}''))}{\partial \overline{r}}=o(1).
\end{aligned}
\end{equation}
From \eqref{thirdpohozaevidentity} and \eqref{energy expansion1}, the equation determining $\lambda$ is
\begin{equation}\label{aum19}
\begin{aligned}
-\frac{B_1}{\lambda^{2s+1}}V(\bar{r},\bar{y}'')+ \frac{B_3k^{N-2s}}{\lambda^{N-2s+1}}=O\big(\frac{1}{\lambda^{2s+1+\sigma}}\big).
\end{aligned}
\end{equation}
Let $\lambda=tk^{\frac{N-2s}{N-4s}}$; then $t\in[L_0,L_1]$. It follows from \eqref{aum19} that
\begin{equation}\label{aum20}
\begin{aligned}
-\frac{B_1}{t^{2s+1}}V(\bar{r},\bar{y}'')+ \frac{B_3}{t^{N-2s+1}}=o(1), \ \ t\in[L_0,L_1].
\end{aligned}
\end{equation}
Define $$H(t,\bar{r},\bar{y}'')=\big(\nabla_{\overline{r},\overline{y}''}(\overline{r}^{2s}V(\overline{r},\overline{y}'')),
-\frac{B_1}{t^{2s+1}}V(\bar{r},\bar{y}'')+ \frac{B_3}{t^{N-2s+1}}\big).$$
Then
$$\deg\big(H(t,\bar{r},\bar{y}''),[L_0,L_1]\times B_\theta((r_0,y''_0))\big)=-\deg\big(\nabla_{\overline{r},\overline{y}''}(\overline{r}^{2s}V(\overline{r},\overline{y}'')), B_\theta((r_0,y''_0))\big)\neq0.$$
Hence, \eqref{aum21}, \eqref{aum22} and \eqref{aum20} have a solution $t_k\in[L_0,L_1]$ and $(\overline{r}_k,\overline{y}''_k)\in B_\theta((r_0,y''_0))$.
{$\Box$}
\betagin{appendices}
\section{Some estimates}
In this section, we give some essential estimates.
For $x_{i},x_{j},y\in {\Bbb R}^{N}$, define
$g_{ij}(y)=\frac{1}{(1+|y-x_{i}|)^{\alphapha}(1+|y-x_{j}|)^{\betata}},$
where $x_{i}\neq x_{j},$ $\alphapha>0$ and $\betata>0$ are two constants.
\begin{lemma}\label{Lemma appendix1}
For any constant $\gamma\in(0,\min(\alpha,\beta)]$, we have
$$g_{ij}(y)\leq\frac{C}{(1+|x_{i}-x_{j}|)^{\gamma}}\Big(\frac{1}{(1+|y-x_{i}|)^{\alpha+\beta-\gamma}}
+\frac{1}{(1+|y-x_{j}|)^{\alpha+\beta-\gamma}}\Big).$$
\end{lemma}
\begin{proof}
See the proof of Lemma A.1 in \cite{wy}.
\end{proof}
\begin{lemma}\label{Lemma appendix2}
For any constant $0<\vartheta< N-2s$, there is a constant $C>0$ such that
$$\int_{{\Bbb R}^{N}}\frac{1}{|y-z|^{N-2s}}\frac{1}{(1+|z|)^{2s+\vartheta}}dz\leq \frac{C}{(1+|y|)^{\vartheta}}.$$
\end{lemma}
\begin{proof}
See the proof of Lemma 2.1 in \cite{Guo2016}.
\end{proof}
\begin{lemma}\label{Lemma appendixaA30}
Let $\mu>0$. For any constant $0<\beta<N$, there exists a constant $C>0$ independent of $\mu$ such that
$$\int_{\mathbb{R}^N\setminus B_\mu(y)}\frac{1}{|y-z|^{N+2s}}\frac{1}{(1+|z|)^{\beta}}dz\leq C\big(\frac{1}{(1+|y|)^{\beta+2s}}+\frac{1}{\mu^{2s}}\frac{1}{(1+|y|)^{\beta}}\big).$$
\end{lemma}
\begin{proof} Without loss of generality, we may assume $|y|\geq2$, and
let $d=\frac{|y|}{2}.$
Then, we have
\begin{equation*}
\begin{aligned}
&\int_{\mathbb{R}^N\setminus B_\mu(y)}\frac{1}{|y-z|^{N+2s}}\frac{1}{(1+|z|)^{\beta}}dz\\
\leq& \int_{B_{d}(0)}+\int_{B_{d}(y)\setminus B_\mu(y)}+\int_{\mathbb{R}^{N}\setminus\big(B_{d}(0)\cup B_{d}(y)\big)}\frac{1}{|y-z|^{N+2s}}\frac{1}{(1+|z|)^{\beta}}dz.
\end{aligned}
\end{equation*}
By direct computation, we have
$$\int_{B_{d}(0)}\frac{dz}{|y-z|^{N+2s}(1+|z|)^{\beta}}\leq\frac{C}{d^{N+2s}}\int_{0}^{d}\frac
{r^{N-1}dr}{(1+r)^{\beta}}\leq\frac{C}{d^{\beta+2s}},$$
and
$$\int_{B_{d}(y)\setminus B_\mu(y)}\frac{dz}{|y-z|^{N+2s}(1+|z|)^{\beta}}\leq\frac{C}{d^{\beta}}\int_{B_{d}(y)\setminus B_\mu(y)}\frac{dz}
{|y-z|^{N+2s}}\leq\frac{C}{\mu^{2s}d^{\beta}}.$$
For $z\in\mathbb{R}^{N}\setminus\big(B_{d}(0)\cup B_{d}(y)\big)$, we
have $|y-z|\geq\frac{|y|}{2}$ and $|z|\geq\frac{|y|}{2}.$ If $|z|\geq 2|y|$, then $|y-z|\geq|z|-|y|\geq\frac{|z|}{2}$, and if $|z|<2|y|$, then $|y-z|\geq\frac{|y|}{2}>\frac{|z|}{4}$.
Thus, we have
\begin{equation*}
\begin{aligned}
\displaystyle\int_{\mathbb{R}^{N}\setminus\big(B_{d}(0)\cup
B_{d}(y)\big)}\frac{dz}{|y-z|^{N+2s}(1+|z|)^{\beta}} &\leq
C\displaystyle\int_{\mathbb{R}^{N}\setminus B_{d}(0)}\frac{dz}{(1+|z|)^{\beta}|z|^{N+2s}}\\
&\leq\frac{C}{d^{\beta+2s}}.
\end{aligned}
\end{equation*}
\end{proof}
\begin{lemma}\label{Lemma appendixa}
Let $\rho>0$. Suppose that $|y-x|^2+t^2=\rho^2$, $t>0$ and $\alpha>N$. Then, for $0<\beta<N$, we have
\begin{equation}\label{lemmaA3 1}
\int_{\mathbb{R}^N}\frac{1}{(t+|z|)^{\alpha}} \frac{1}{|y-z-x|^{\beta}}dz\leq C\big(\frac{1}{(1+|y-x|)^{\beta}}\frac{1}{t^{\alpha-N}}+\frac{1}{(1+|y-x|)^{\alpha+\beta-N}}\big).
\end{equation}
\end{lemma}
\begin{proof}
The proof is the same as that of Lemma A.3 in \cite{GNNT}.
\end{proof}
\begin{lemma}\label{estimate for Z}
Suppose that $|y-x|^2+t^2=\rho^2$. Then there exists a constant $C>0$ such that
\begin{equation}
\begin{aligned}
|\widetilde{Z}_{x_{i},\lambda}|\leq\frac{C}{\lambda^{\frac{N-2s}{2}}}\frac{1}{(1+|y-x_{i}|)^{N-2s}} \ \ \hbox{and} \ \ |\nabla \widetilde{Z}_{x_{i},\lambda}|\leq\frac{C}{\lambda^{\frac{N-2s}{2}}}\frac{1}{(1+|y-x_{i}|)^{N-2s+1}}.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
By Lemma \ref{Lemma appendixa}, we have
\begin{equation*}
\begin{aligned}
|\widetilde{Z}_{x_{i},\lambda}(y,t)|&=|\beta(N,s)\int_{\mathbb{R}^N}\frac{t^{2s}}{(|y-\xi|^2+t^2)^{\frac{N+2s}{2}}}\zeta(\xi)U_{x_{i},\lambda}(\xi)d\xi|\\
&=|\beta(N,s)C(N,s)\int_{\mathbb{R}^N}\frac{t^{2s}}{(|y-\xi|^2+t^2)^{\frac{N+2s}{2}}}\zeta(\xi)(\frac{\lambda}{1+\lambda^2|
\xi-x_{i}|^2})^{\frac{N-2s}{2}}d\xi|\\
&\leq \frac{C}{\lambda^{\frac{N-2s}{2}}}\int_{\mathbb{R}^N}\frac{1}{(1+|z|)^{N+2s}} \frac{1}{(\lambda^{-1}+|y-tz-x_{i}|)^{N-2s}}dz\\
&\leq \frac{C}{\lambda^{\frac{N-2s}{2}}}\int_{\mathbb{R}^N}\frac{t^{2s}}{(t+|z|)^{N+2s}} \frac{1}{(\lambda^{-1}+|y-z-x_{i}|)^{N-2s}}dz\\
&\leq\frac{C}{\lambda^{\frac{N-2s}{2}}}\frac{1}{(1+|y-x_{i}|)^{N-2s}}.
\end{aligned}
\end{equation*}
Note that for $l=1,\ldots,N$
\begin{equation*}
\begin{aligned}
&\quad \frac{\partial}{\partial y_l}\int_{\mathbb{R}^N}\frac{t^{2s}}{(|y-\xi|^2+t^2)^{\frac{N+2s}{2}}}\zeta(\xi)(\frac{\lambda}{1+\lambda^2|
\xi-x_{i}|^2})^{\frac{N-2s}{2}}d\xi\\
&=\frac{1}{\lambda^{\frac{N-2s}{2}}}\frac{\partial}{\partial y_l}\int_{\mathbb{R}^N}\frac{1}{(1+|z|^2)^{\frac{N+2s}{2}}}\zeta(y-tz)(\frac{1}
{\lambda^{-2}+|y-tz-x_{i}|^2})^{\frac{N-2s}{2}}dz\\
&=\frac{2s-N}{\lambda^{\frac{N-2s}{2}}}\int_{\mathbb{R}^N}\frac{1}{(1+|z|^2)^{\frac{N+2s}{2}}} \zeta(y-tz) \frac{(y-tz-x_{i})_l}{(\lambda^{-2}+|y-tz-x_{i}|^2)^{\frac{N-2s}{2}+1}}dz\\
&\quad+\frac{1}{\lambda^{\frac{N-2s}{2}}}\int_{\mathbb{R}^N}\frac{1}{(1+|z|^2)^{\frac{N+2s}{2}}}\frac{\partial\zeta(y-tz)}{\partial y_l} \frac{1}{(\lambda^{-2}+|y-tz-x_{i}|^2)^{\frac{N-2s}{2}}}dz,
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
&\quad \frac{\partial}{\partial t}\int_{\mathbb{R}^N}\frac{t^{2s}}{(|y-\xi|^2+t^2)^{\frac{N+2s}{2}}}\zeta(\xi)(\frac{\lambda}
{1+\lambda^2|\xi-x_{i}|^2})^{\frac{N-2s}{2}}d\xi\\
&=\frac{N-2s}{\lambda^{\frac{N-2s}{2}}}\int_{\mathbb{R}^N}\frac{1}{(1+|z|^2)^{\frac{N+2s}{2}}}\zeta(y-tz) \frac{\sum^{N}_{l=1}(y-tz-x_{i})_lz_l}{(\lambda^{-2}+|y-tz-x_{i}|^2)^{\frac{N-2s}{2}+1}}dz\\
&\quad+\frac{1}{\lambda^{\frac{N-2s}{2}}}\int_{\mathbb{R}^N}\frac{1}{(1+|z|^2)^{\frac{N+2s}{2}}} \frac{\nabla\zeta(y-tz)\cdot z}{(\lambda^{-2}+|y-tz-x_{i}|^2)^{\frac{N-2s}{2}}}dz.
\end{aligned}
\end{equation*}
Then, by the definition of $\zeta$ and \eqref{lemmaA3 1}, we have
\begin{equation}\label{nie1}
\begin{aligned}
|\nabla \widetilde{Z}_{x_{i},\lambda}|&\leq \frac{C}{\lambda^{\frac{N-2s}{2}}}\int_{\mathbb{R}^N}\frac{1}{(1+|z|)^{N+2s-1}} \frac{1}{(1+|y-tz-x_{i}|)^{N-2s+1}}dz\\
&\leq \frac{C}{\lambda^{\frac{N-2s}{2}}}\int_{\mathbb{R}^N}\frac{t^{2s-1}}{(t+|z|)^{N+2s-1}} \frac{1}{(1+|y-z-x_{i}|)^{N-2s+1}}dz\\
&\leq \frac{C}{\lambda^{\frac{N-2s}{2}}}\frac{1}{(1+|y-x_{i}|)^{N-2s+1}}.
\end{aligned}
\end{equation}
\end{proof}
For any $\delta>0$, we define the following two sets:
$$D_1=\{Y=(y,t):\delta<|Y-(r_0,y_0'',0)|<6\delta,\ t>0\},$$
and
$$D_2=\{Y=(y,t):2\delta<|Y-(r_0,y_0'',0)|<5\delta,\ t>0\}.$$
\begin{lemma}\label{le:lemma appendixB.4}
For any $\delta>0$, there is a $\rho=\rho(\delta)\in(2\delta, 5\delta)$ such that
\begin{equation}\label{B.4.1}
\begin{aligned}
\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla\widetilde{\phi}|^2dS\leq \frac{Ck\|\phi\|_*^2}{\lambda^{\tau}},
\end{aligned}
\end{equation}
where $C$ is a constant depending on $\delta$.
\end{lemma}
\begin{proof}
By \eqref{lemmaA3 1}, for $(y,t)\in D_1$, we have
\begin{equation}\label{B.4.2}
\begin{aligned}
|\widetilde{\phi}(y,t)|&=\Big|\int_{\mathbb{R}^N}\beta(N,s)\frac{t^{2s}}
{(|y-\xi|^2+t^2)^{\frac{N+2s}{2}}}\phi(\xi)d\xi\Big|\\
&\leq \frac{C\|\phi\|_{\ast}t^{2s}}{\lambda^\tau}
\sum^{k}_{i=1}\int_{\mathbb{R}^N}\frac{1}{(|z|+t)^{N+2s}}
\frac{1}{|y-z-x_{i}|^{\frac{N-2s}{2}+\tau}}dz\\
&\leq \frac{C\|\phi\|_{\ast}t^{2s}}{\lambda^\tau}
\sum^{k}_{i=1}\Big(\frac{1}{(1+|y-x_{i}|)^{\frac{N-2s}{2}+\tau}}\frac{1}{t^{2s}}+\frac{1}{(1+|y-x_{i}|)^{\frac{N+2s}{2}+\tau}}\Big)\\
&\leq \frac{C\|\phi\|_{\ast}}{\lambda^\tau}
\sum^{k}_{i=1}\frac{1}{(1+|y-x_{i}|)^{\frac{N-2s}{2}+\tau}}.
\end{aligned}
\end{equation}
Let $\varphi\in C_0^{\infty}(\mathbb{R}^{N+1})$ be a function with $\varphi(y,t)=1$ in $D_2$, $\varphi(y,t)=0$ in $\mathbb{R}^{N+1}\setminus D_1$ and $|\nabla\varphi|\leq C$.
Note that $\widetilde{\phi}$ satisfies
\begin{equation}
\left\{\begin{aligned}
&\displaystyle-\mathrm{div}(t^{1-2s}\nabla\widetilde{\phi})=0\quad \mbox{in} \quad \mathbb{R}^{N+1}_+,\\
&\quad\displaystyle-\lim\limits_{t\to 0}t^{1-2s}\partial_t \widetilde{\phi}(y,t)\\&=-V(r,y'')\phi+(2_s^*-1)(Z_{\overline{r},\overline{y}'',\lambda})^{2_s^*-2}\phi+\mathcal {F}(\phi)+l_k+\sum\limits_{l=1}^Nc_l\sum\limits_{i=1}^kZ^{2_s^*-2}_{x_i,\lambda}Z_{i,l}, \quad \mbox{in}\quad {\Bbb R}^{N}.
\end{aligned}\right.
\end{equation}
Multiplying both sides of the equation by $\varphi^2\widetilde{\phi}$ and integrating by parts over $D_1$, we have
\begin{equation*}
\begin{aligned}
0=&\int_{D_1}-\mathrm{div}(t^{1-2s}\nabla\widetilde{\phi})\varphi^2\widetilde{\phi}dydt\\
=&\int_{D_1}t^{1-2s}\nabla\widetilde{\phi}\cdot\nabla(\varphi^2\widetilde{\phi})dydt\\
=&\int_{D_1}t^{1-2s}\nabla\widetilde{\phi}\cdot(\varphi^2\nabla\widetilde{\phi}+2\varphi\nabla\varphi\widetilde{\phi})dydt.
\end{aligned}
\end{equation*}
For any $\varepsilon>0$, we have
\begin{equation*}
\begin{aligned}
&\quad\int_{D_1}t^{1-2s}\nabla\widetilde{\phi}\varphi\nabla\varphi\widetilde{\phi}dydt\\
&\leq \varepsilon\int_{D_1}t^{1-2s}|\nabla\widetilde{\phi}|^2\varphi^2dydt +C(\varepsilon)\int_{D_1}t^{1-2s}\widetilde{\phi}^2|\nabla\varphi|^2dydt.
\end{aligned}
\end{equation*}
Taking $\varepsilon=\frac{1}{4}$ and using \eqref{B.4.2}, we obtain
\begin{equation*}
\begin{aligned}
\int_{D_2}t^{1-2s}|\nabla\widetilde{\phi}|^2dydt&\leq C\int_{D_1}t^{1-2s}\widetilde{\phi}^2|\nabla\varphi|^2dydt\\
&\leq \frac{C\|\phi\|_*^2}{\lambda^{2\tau}}\int_{D_1}t^{1-2s}\big(\sum\limits_{i=1}^k\frac{1}{(1+|y-x_i|)^{\frac{N-2s}{2}+\tau}}\big)^2dydt\\
&\leq \frac{C\|\phi\|_*^2}{\lambda^{2\tau}}\int_{D_1}\frac{t^{1-2s}k^2}{(1+|y-x_1|)^{N-2s+2\tau}}dydt\\
&\leq \frac{Ck\|\phi\|_*^2}{\lambda^{\tau}}.
\end{aligned}
\end{equation*}
By the mean value theorem for integrals, there is a $\rho=\rho(\delta)\in(2\delta, 5\delta)$ such that
\begin{equation*}
\begin{aligned}
\int_{\partial''\mathcal B^{+}_{\rho}}t^{1-2s}|\nabla\widetilde{\phi}|^2dS\leq \frac{Ck\|\phi\|_*^2}{\lambda^{\tau}}.
\end{aligned}
\end{equation*}
\end{proof}
\section{Energy expansion}
In this section, we give some estimates of the energy expansions for
$$\langle I'(Z_{\overline{r},\overline{y}'',\lambda}+\phi(\overline{r},\overline{y}'',\lambda)),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\lambda}\rangle, \quad \langle I'(Z_{\overline{r},\overline{y}'',\lambda}+\phi(\overline{r},\overline{y}'',\lambda)),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\overline{r}}\rangle \quad \hbox{and} \quad \langle I'(Z_{\overline{r},\overline{y}'',\lambda}+\phi(\overline{r},\overline{y}'',\lambda)),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\overline{y}''}\rangle.$$
\begin{lemma}\label{exp1}
If $N>4s+\tau$, then
\[
\frac{\partial I(Z_{\bar{r},\bar{y}'',\lambda})}{\partial \lambda}=k\left(-\frac{B_1}{\lambda^{2s+1}}V(\bar{r},\bar{y}'')+ \sum_{j=2}^{k}\frac{B_2}{\lambda^{N-2s+1}|x_j-x_1|^{N-2s}}+O\bigg(\frac{1}{\lambda^{2s+1+\sigma}}\bigg)\right),
\]
where $B_1$ and $B_2$ are positive constants.
\end{lemma}
\begin{proof}
By a direct computation, we have
\begin{equation}
\begin{split}
&\quad \frac{\partial I(Z_{\bar{r},\bar{y}'',\lambda})}{\partial \lambda}\\&=\frac{\partial I(Z^*_{\bar{r},\bar{y}'',\lambda})}{\partial \lambda}+O(\frac{k}{\lambda^{2s+1+\sigma}})\\&
=\displaystyle \int_{\mathbb{R}^N}V(y)Z^*_{\bar{r},\bar{y}'',\lambda}\frac{\partial Z^*_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}-\displaystyle \int_{\mathbb{R}^N}\big((Z^*_{\bar{r},\bar{y}'',\lambda})^{2^*_s-1}-\sum_{j=1}^kU_{x_j,\lambda}^{2^*_s-1}\big)
\frac{\partial Z^*_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}+O(\frac{k}{\lambda^{2s+1+\sigma}})\\
&=I_1-I_2+O(\frac{k}{\lambda^{2s+1+\sigma}}).
\end{split}
\end{equation}
For the term $I_1$, by Lemma \ref{Lemma appendix1}, we can check that
\begin{equation}\label{VV}
\begin{array}{ll}
\displaystyle I_1
&=k\bigg(\displaystyle \int_{\mathbb{R}^N}V(y)U_{x_1,\lambda}\frac{\partial U_{x_1,\lambda}}{\partial \lambda}+O\big(\frac{1}{\lambda}\displaystyle \int_{\mathbb{R}^N}U_{x_1,\lambda}\displaystyle \sum_{j=2}^kU_{x_j,\lambda}\big)\bigg)\\
&=k\bigg(\displaystyle \frac{V(\bar{r},\bar{y}'')}{\lambda^{2s+1}}\displaystyle \frac{(N-2s)C^2(N,s)} {2}\int_{\mathbb{R}^N}\frac{\lambda^N(1-\lambda^2|y-x_1|^2)}{(1+\lambda^2|y-x_1|^2)^{N-2s+1}}dy\\
&\quad\quad\quad+O\big(\frac{1}{\lambda^{2s+1}}\displaystyle \sum_{j=2}^k\frac{1}{(\lambda|x_1-x_j|)^{N-4s-\sigma}}\big)
+O(\frac{k}{\lambda^{2s+1+\sigma}})\bigg)\\
&=k\bigg(\displaystyle-\frac{B_1V(\bar{r},\bar{y}'')}{\lambda^{2s+1}}\displaystyle+O(\frac{k}{\lambda^{2s+1+\sigma}})\bigg),\\
\end{array}
\end{equation}
where $B_1=-\frac{(N-2s)C^2(N,s)} {2}\displaystyle\int_{\mathbb{R}^N}\frac{1-|z|^2}{(1+|z|^2)^{N-2s+1}}dz>0$; note that after the change of variables $z=\lambda(y-x_1)$ this integral is independent of $\lambda$.
Next, we estimate $I_2$.
$$\aligned
I_2&=k\displaystyle \int_{\Omega_1}\big((Z^*_{\bar{r},\bar{y}'',\lambda})^{2^*_s-1}-\displaystyle\sum_{j=1}^kU_{x_j,\lambda}^{2^*_s-1}\big)
\frac{\partial Z^*_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}\\&=k\left(\displaystyle \int_{\Omega_1}(2^*_s-1)U_{x_1,\lambda}^{2^*_s-2}\displaystyle\sum_{j=2}^kU_{x_j,\lambda}\frac{\partial U_{x_1,\lambda}}{\partial \lambda}+O\bigg(\frac{1}{\lambda^{2s+1+\sigma}}\bigg)\right)\\&=k\left(-\displaystyle \sum_{j=2}^k\frac{B_2}{\lambda^{N-2s+1}|x_j-x_1|^{N-2s}}+O\bigg(\frac{1}{\lambda^{2s+1+\sigma}}\bigg)\right),
\endaligned$$
for some constant $B_2>0$.
Thus, we obtain that
\[
\frac{\partial I(Z_{\bar{r},\bar{y}'',\lambda})}{\partial \lambda}=k\left(-\frac{B_1}{\lambda^{2s+1}}V(\bar{r},\bar{y}'')+\displaystyle \sum_{j=2}^{k}\frac{B_2}{\lambda^{N-2s+1}|x_j-x_1|^{N-2s}}+O\bigg(\frac{1}{\lambda^{2s+1+\sigma}}\bigg)\right).
\]
\end{proof}
\begin{lemma} \label{exp2} We have
\begin{equation} \label{energy expansion1}
\begin{aligned}
&\langle I'(Z_{\overline{r},\overline{y}'',\lambda}+\phi),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\lambda}\rangle\\
=&\langle I'(Z_{\overline{r},\overline{y}'',\lambda}),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\lambda}\rangle+O(\frac{k}{\lambda^{2s+1+\sigma}})\\
=&k\left(-\frac{B_1}{\lambda^{2s+1}}V(\bar{r},\bar{y}'')+ \sum_{j=2}^{k}\frac{B_2}{\lambda^{N-2s+1}|x_j-x_1|^{N-2s}}+O\big(\frac{1}{\lambda^{2s+1+\sigma}}\big)\right)
\\=&k\left(-\frac{B_1}{\lambda^{2s+1}}V(\bar{r},\bar{y}'')+ \frac{B_3k^{N-2s}}{\lambda^{N-2s+1}}+O\big(\frac{1}{\lambda^{2s+1+\sigma}}\big)\right),
\end{aligned}
\end{equation}
where $B_1$ and $B_2$ are the same constants as in Lemma \ref{exp1}, and $B_3>0.$
\end{lemma}
\begin{proof}
By symmetry, we have
$$\aligned
&\quad \langle I'(Z_{\overline{r},\overline{y}'',\lambda}+\phi),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\lambda}\rangle\\
&=\int_{\mathbb{R}^{N}}\big((-\Delta)^s u_k+V(r,y'')u_k-(u_k)_{+}^{2^*_s-1}\big)\frac{\partial Z_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}\\&=\bigg\langle I'(Z_{\bar{r},\bar{y}'',\lambda}),\frac{\partial Z_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}\bigg\rangle+k\bigg\langle(-\Delta)^s \phi+V(r,y'')\phi-(2^*_s-1)Z_{\bar{r},\bar{y}'',\lambda}^{2^*_s-2}\phi,\frac{\partial Z_{x_1,\lambda}}{\partial \lambda}\bigg\rangle\\&\quad-\displaystyle \int_{\mathbb{R}^{N}}\bigg((Z_{\bar{r},\bar{y}'',\lambda}+\phi)_+^{2^*_s-1}-Z_{\bar{r},
\bar{y}'',\lambda}^{2^*_s-1}-(2^*_s-1)Z_{\bar{r},\bar{y}'',\lambda}^{2^*_s-2}\phi\bigg)\frac{\partial Z_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}\\&:=\bigg\langle I'(Z_{\bar{r},\bar{y}'',\lambda}),\frac{\partial Z_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}\bigg\rangle+kJ_1-J_2.
\endaligned$$
By \eqref{aum16} and \eqref{equality12}, we have
\[
J_1=O(\frac{||\phi||_*}{\lambda^{1+s+\sigma}})=O(\frac{1}{\lambda^{2s+1+\sigma}}).
\]
Note that $(1+t)_+^\gamma-1-\gamma t=O(t^2)$ for all $t\in \mathbb{R}$ if $1<\gamma\leq2$, and $|(1+t)_+^\gamma-1-\gamma t|\leq C(t^2+|t|^\gamma)$ for all $t\in \mathbb{R}$ if $\gamma>2$. So,
if $2^*_s\leq3$, we have
$$\aligned
|J_2|&= \bigg|\displaystyle \int_{\mathbb{R}^{N}}\bigg((Z_{\bar{r},\bar{y}'',\lambda}+\phi)_+^{2^*_s-1}-Z_{\bar{r},
\bar{y}'',\lambda}^{2^*_s-1}-(2^*_s-1)Z_{\bar{r},\bar{y}'',\lambda}^{2^*_s-2}\phi\bigg)\frac{\partial Z_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}\bigg| \\
&\leq C\displaystyle \int_{\mathbb{R}^{N}}Z_{\bar{r},
\bar{y}'',\lambda}^{2^*_s-3}\phi^2\big|\frac{\partial Z_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}\big|\\
& \leq \frac{C||\phi||_*^2}{\lambda}\displaystyle \int_{\mathbb{R}^{N}}\left(\displaystyle \sum_{j=1}^k\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|y-x_j|)^{N-2s}}\right)^{2^*_s-2}\left(\displaystyle \sum_{i=1}^k\frac{\lambda^{\frac{N-2s}{2}}}{(1+\lambda|y-x_i|)^{\frac{N-2s}{2}+\tau}}\right)^2
\\&\leq \frac{C||\phi||_*^2}{\lambda}\displaystyle \int_{\mathbb{R}^{N}}\lambda^N \sum_{j=1}^k\frac{1}{(1+\lambda|y-x_j|)^{4s}}\sum_{i=1}^k\frac{1}{(1+\lambda|y-x_i|)^{N-2s+\tau}}\\
&\leq \frac{Ck||\phi||_*^2}{\lambda}=O\left(\frac{k}{\lambda^{2s+1+\sigma}}\right).
\endaligned$$
If $2^*_s>3$, we have
$$\aligned
|J_2|&\leq C\displaystyle \int_{\mathbb{R}^{N}}\left(Z_{\bar{r},
\bar{y}'',\lambda}^{2^*_s-3}\phi^2\big|\frac{\partial Z_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}\big|+|\phi|^{2^*_s-1}\big|\frac{\partial Z_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}\big|\right)\\&=O\left(\frac{k}{\lambda^{2s+1+\sigma}}\right).
\endaligned$$
Thus we obtain
\[
\bigg\langle I'(Z_{\bar{r},\bar{y}'',\lambda}+\phi),\frac{\partial Z_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}\bigg\rangle=\bigg\langle I'(Z_{\bar{r},\bar{y}'',\lambda}),\frac{\partial Z_{\bar{r},\bar{y}'',\lambda}}{\partial \lambda}\bigg\rangle+O\left(\frac{k}{\lambda^{2s+1+\sigma}}\right).
\]
Combining this with Lemma \ref{exp1}, we finish the proof.
\end{proof}
Note that $Z_{i,l}=O(\lambda Z_{x_i,\lambda}), \ l=2,\ldots,N.$ Similarly, we can prove the following lemma:
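The bound $Z_{i,l}=O(\lambda Z_{x_i,\lambda})$ can be checked pointwise from the explicit form of the bubble (our sketch, assuming $Z_{i,l}$ is, up to the cut-off, the derivative of $U_{x_i,\lambda}$ with respect to the $l$-th coordinate of the center):

```latex
% With U_{x_i,\lambda}(y) = C(N,s)\big(\tfrac{\lambda}{1+\lambda^2|y-x_i|^2}\big)^{\frac{N-2s}{2}}:
\Big|\frac{\partial U_{x_{i},\lambda}}{\partial x_{i,l}}\Big|
 = (N-2s)\,C(N,s)\,\lambda^{\frac{N-2s}{2}}\,
   \frac{\lambda^{2}|(y-x_{i})_{l}|}{(1+\lambda^{2}|y-x_{i}|^{2})^{\frac{N-2s}{2}+1}}
 \leq \frac{N-2s}{2}\,\lambda\, U_{x_{i},\lambda},
```

using $\lambda|y-x_{i}|\leq\frac{1}{2}(1+\lambda^{2}|y-x_{i}|^{2})$.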
\begin{lemma} \label{exp3}
We have
\begin{equation} \label{energy expansion2}
\begin{aligned}
&\quad\langle I'(Z_{\overline{r},\overline{y}'',\lambda}+\phi),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\overline{r}}\rangle\\
&=\langle I'(Z_{\overline{r},\overline{y}'',\lambda}),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\overline{r}}\rangle+O(\frac{k}{\lambda^{s+\sigma}})\\
&=k\big(\frac{B_1}{\lambda^{2s}}\frac{\partial V(\overline{r},\overline{y}'')}{\partial\overline{r}}+\sum\limits_{j=2}^k\frac{B_2}{\overline{r}\lambda^{N-2s}|x_1-x_j|^{N-2s}}+O(\frac{1}{\lambda^{s+\sigma}})\big),
\end{aligned}
\end{equation}
and
\begin{equation} \label{energy expansion3}
\begin{aligned}
\langle I'(Z_{\overline{r},\overline{y}'',\lambda}+\phi),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\overline{y}''_j}\rangle
=\langle I'(Z_{\overline{r},\overline{y}'',\lambda}),\frac{\partial Z_{\overline{r},\overline{y}'',\lambda}}{\partial\overline{y}''_j}\rangle+O(\frac{k}{\lambda^{s+\sigma}})=k\big(\frac{B_1}{\lambda^{2s}}\frac{\partial V(\overline{r},\overline{y}'')}{\partial\overline{y}''_j}+O(\frac{1}{\lambda^{s+\sigma}})\big),
\end{aligned}
\end{equation}
where $B_1$ and $B_2$ are the same constants as in Lemma \ref{exp1}.
\end{lemma}
\end{appendices}
\newcommand{\noopsort}[1]{} \newcommand{\printfirst}[2]{#1}
\newcommand{\singleletter}[1]{#1} \newcommand{\switchargs}[2]{#2#1}
\begin{thebibliography}{60}
\bibitem{ad} D. Applebaum, L\'evy processes and stochastic calculus, Second edition, Cambridge Studies in Advanced Mathematics, 116, Cambridge University Press, Cambridge, 2009.
\bibitem{bc} A. Bahri and J. Coron, The scalar curvature problem on
the standard three-dimensional sphere, J. Funct. Anal., 95 (1991),
106-172.
\bibitem{BBarrios} B. Barrios, E. Colorado, A. de Pablo, U. S\'{a}nchez, On some critical problems for the fractional Laplacian operator, J. Differential Equations, 252 (2012), 6133-6162.
\bibitem{CBrandle} C. Br\"{a}ndle, E. Colorado, A. de Pablo, U. S\'{a}nchez, A concave-convex elliptic problem involving the fractional Laplacian, Proc. Roy. Soc. Edinburgh Sect. A, 143 (2013), 39-71.
\bibitem{XCabre2} X. Cabr\'{e}, J. Tan, Positive solutions of nonlinear problems involving the square root of the Laplacian, Adv. Math., 224 (2010), 2052-2093.
\bibitem{XCabre} X. Cabr\'{e}, Y. Sire, Nonlinear equations for fractional Laplacians I: regularity, maximum principles, and Hamiltonian estimates, Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire, 31 (2014), 23-53.
\bibitem{CS2007} L. Caffarelli and L. Silvestre, An extension problem related to the fractional Laplacian, Comm. in Part. Diff. Equa., 32 (2007), 1245-1260.
\bibitem{cny} D. Cao, E. Noussair and S. Yan, On the scalar curvature
equation $-\Delta u=(1+\varepsilon K)u^{\frac{N+2}{N-2}}$ in
$\mathbb{R}^N,$ Calc. Var. Partial Differential Equations, 15 (2002), 403-419.
\bibitem{aschang9} S. Y. A. Chang and P. C. Yang, A perturbation
result in prescribing scalar curvature on $S^n,$ Duke Math. J.,
64 (1991), 27-69.
\bibitem{clin1} C. C. Chen and C. S. Lin, Estimate of the
conformal scalar curvature equation via the method of moving
planes, II, J. Differential Geom., 49 (1998), 115-178.
\bibitem{clin2} C. C. Chen and C. S. Lin, Prescribing scalar
curvature on $S^N$, I. A priori estimates, J. Differential Geom.,
57 (2001), 67-171.
\bibitem{pfm} M. del Pino, P. Felmer, M. Musso, Two-bubble
solutions in the super-critical Bahri-Coron's problem, Calc. Var.
Partial Differential Equations, 16 (2003), 113-145.
\bibitem{Eleonora} E. Di Nezza, G. Palatucci and E. Valdinoci, Hitchhiker's guide to the fractional Sobolev spaces, Bull. Sci. Math., 136 (2012), no. 5, 521-573.
\bibitem{YGuo} Y. Guo, B. Li, Infinitely many solutions for the prescribed curvature problem of polyharmonic operator, Calc. Var., 46 (2013), 809-836.
\bibitem{GNNT} Y. Guo, J. Nie, M. Niu, Z. Tang, Local uniqueness and periodicity for the prescribed scalar curvature problem of fractional operator in $\mathbb{R}^N$, Calc. Var., 56 (2017).
\bibitem{Guo2016} Y. Guo and J. Nie, Infinitely many non-radial solutions for the prescribed curvature problem of fractional operator, Discrete Contin. Dyn. Syst., 36 (2016), 6873-6898.
\bibitem{GPY} Y. X. Guo, S. J. Peng, S. S. Yan, Local uniqueness and periodicity induced by concentration, Proc. London Math. Soc., 114 (2017), 1005-1043.
\bibitem{TJin} T. Jin, Y.Y. Li, J. Xiong, On a fractional Nirenberg problem, part I: blow up analysis and compactness of solutions, J. Eur. Math. Soc., 16 (2014), 1111-1171.
\bibitem{liyy19} Y. Li, Prescribed scalar curvature on $S^n$ and
related problems II, Existence and compactness, Comm. Pure Appl.
Math., 49 (1996), 541-597.
\bibitem{yylinwm} Y. Li and W. M. Ni, On the conformal scalar
curvature equation in $\mathbb{R}^N,$ Duke Math. J., 57 (1988),
859-924.
\bibitem{ELieb} E. Lieb, Sharp constants in the Hardy-Littlewood-Sobolev and related inequalities, Ann. of Math., (2) 118 (2) (1983), 349-374.
\bibitem{niu} M. Niu, Z. Tang and L. Wang, Solutions for conformally invariant fractional Laplacian equations with multi-bumps centered in lattices, J. Differential Equations, (4) (2019), 1756-1831.
\bibitem{nyan} E. Noussair and S. Yan, The scalar curvature
equation on $\mathbb{R}^N,$ Nonlinear Anal., 45 (2001), 483-514.
\bibitem{PWY} S. J. Peng, C. H. Wang, S. S. Yan, Construction of solutions via local Pohozaev identities, J. Funct. Anal., 274 (2018), 2606-2633.
\bibitem{Quaas} A. Quaas and J. G. Tan, Positive solutions of the nonlinear Schr\"{o}dinger equation with the fractional Laplacian, Proceedings of the Royal Society of Edinburgh, 142A (2012), 1237-1262.
\bibitem{szhang} R. Schoen and D. Zhang, Prescribed scalar
curvature problem on the $n$-sphere, Calc. Var. Partial
Differential Equations, 4 (1996), 1-25.
\bibitem{Servadeia} R. Servadeia and E. Valdinoci, Mountain Pass solutions for non-local elliptic operators, J. Math. Anal. Appl., 389 (2012), 887-898.
\bibitem{Silvestre} L. Silvestre, Regularity of the Obstacle Problem for a Fractional Power
of the Laplace Operator, Comm. Pure Appl. Math., 60 (2007), 67-112.
\bibitem{JTan1} J. Tan, The Br\'{e}zis-Nirenberg type problem involving the square root of the Laplacian, Calc. Var. Partial Differential Equations, 42 (2011), 21-41.
\bibitem{JTan} J. Tan, J. Xiong, A Harnack inequality for fractional Laplace equations with lower order terms, Discrete Contin. Dyn. Syst., 31 (2011), 975-983.
\bibitem{wy} J. Wei, S. Yan, Infinitely many solutions for
the prescribed scalar curvature problem on $\mathbb{S}^N,$ Journal of
Functional Analysis, 258 (2010), 3048-3081.
\bibitem{yan} S. Yan, Concentration of solutions for the scalar
curvature equation on $\mathbb{R}^N,$ J. Differential Equations,
163 (2000), 239-264.
\bibitem{SYan} S. Yan, J. Yang, X. Yu, Equations involving fractional Laplacian operator: Compactness and application, Journal of Functional Analysis, 269 (2015), 47-79.
\end{thebibliography}
\end{document}
\begin{document}
\title{First- and Second-Order Analysis for Optimization Problems with Manifold-Valued Constraints\thanks{This work was supported by DFG grants SCHI~1379/3--1 as well as HE~6077/10--1 within the \href{https://spp1962.wias-berlin.de}
\begin{abstract}
We consider optimization problems with manifold-valued constraints.
These generalize classical equality and inequality constraints to a setting in which both the domain and the codomain of the constraint mapping are smooth manifolds.
We model the feasible set as the preimage of a submanifold with corners of the codomain.
The latter is a subset which corresponds to a convex cone locally in suitable charts.
We study first- and second-order optimality conditions for this class of problems.
We also show the invariance of the relevant quantities with respect to local representations of the problem.\end{abstract}
\begin{keywords}
optimization on manifolds, manifold-valued constraints, manifold with corners, first- and second-order optimality conditions, Lagrangian function\end{keywords}
\begin{AMS}
\href{https://mathscinet.ams.org/msc/msc2010.html?t=90C30}{90C30}, \href{https://mathscinet.ams.org/msc/msc2010.html?t=90C46}{90C46}, \href{https://mathscinet.ams.org/msc/msc2010.html?t=49Q99}{49Q99}, \href{https://mathscinet.ams.org/msc/msc2010.html?t=65K05}{65K05}
\end{AMS}
\section{Introduction}
\label{section:Introduction}
The presence of constraints renders optimization problems not only more interesting, but also more difficult to analyze and solve.
Constrained nonlinear optimization problems on $\R^m$ can be cast in the following form,
\begin{equation}
\label{eq:NLP}
\begin{aligned}
\text{Minimize}
\quad
&
f(x)
,
\quad
\text{where }
x \in \R^m
\\
\text{subject to (\st)}
\quad
&
g(x) \in K
.
\end{aligned}
\end{equation}
Here $f \colon \R^m \to \R$ denotes the objective function, and $g \colon \R^m \to \R^n$ represents the constraint function.
Moreover, $K \subset \R^n$ is a convex cone satisfying $0 \in K$, \ie, it induces a preorder on $\R^n$ defined by
\begin{equation*}
y \le_K z
\quad
\Leftrightarrow
\quad
y - z \in K
.
\end{equation*}
The constraint in \eqref{eq:NLP} can thus be written as $g(x) \le_K 0$.
Problems of the form \eqref{eq:NLP} include classical nonlinear programming problems with equality and inequality constraints.
These are described by $g(x) = (g_I(x),g_E(x))^\transp$ and $K = \R_-^k \times \{0\}^{n-k} \subset \R^k \times \R^{n-k}$, where $\R_-^k$ is the non-positive orthant in $\R^k$.
It is well known that---under appropriate constraint qualifications---local minimizers of \eqref{eq:NLP} admit Lagrange multipliers, \ie, there exists $\mu \in \R^n$ such that
\begin{equation}
\label{eq:NLP_KKT}
\begin{aligned}
&
f'(x) + \sum_{i=1}^n \mu_i \, g_i'(x)
=
0
,
\\
&
\mu \in K^\circ
\coloneqq
\setDef{s \in \R^n}{s^\transp v \le 0 \text{ for all } v \in K}
,
\\
&
\mu^\transp g(x)
=
0
\end{aligned}
\end{equation}
holds.
In short, we can write $f'(x) + \mu^\transp \, g'(x) = 0$ with $\mu \in K^\circ$ and $\mu^\transp g(x) = 0$.
The set $K^\circ$ is called the polar cone of~$K$.
\Cref{eq:NLP_KKT} is known as the generalized Karush--Kuhn--Tucker (KKT) conditions pertaining to problem~\eqref{eq:NLP}.
We refer the reader to, \eg, \cite[Ch.~9]{Luenberger:1969:1}, \cite{ZoweKurcyusz:1979:1}, \cite{Troeltzsch:1984:1}, \cite[Ch.~5]{Jahn:2007:1}, \cite[Ch.~6]{Troeltzsch:2010:1}, for results in this direction in finite and infinite-dimensional spaces.
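As a concrete illustration of \eqref{eq:NLP_KKT} (our example, not taken from the cited references), take $m=2$, $n=1$, $f(x)=x_1+x_2$, $g(x)=x_1^2+x_2^2-1$ and $K=\R_-$, so that the feasible set is the closed unit disk.

```latex
% The minimizer is x^* = -\tfrac{1}{\sqrt{2}}(1,1)^\transp with multiplier \mu = \tfrac{1}{\sqrt{2}}:
f'(x^*) + \mu\, g'(x^*)
  = (1,1) + \tfrac{1}{\sqrt{2}}\,(2x_1^*,\, 2x_2^*)
  = (1,1) - (1,1) = 0,
\qquad
\mu \in K^\circ = \R_+,
\qquad
\mu\, g(x^*) = 0,
```

so all three conditions hold; since $g(x^*)=0$, the constraint is active and the complementarity condition is satisfied non-trivially.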
In this paper, we generalize \eqref{eq:NLP} to constrained optimization problems on manifolds, replacing $\R^m$ and $\R^n$ by finite-dimensional, smooth manifolds~$\cM$ and $\cN$, respectively.
Theory for the case of equality and inequality constraints $g \colon \cM \to \R^n$ has been considered in \cite{YangZhangSong:2014:1,BergmannHerzog:2019:1} and some algorithmic approaches have been discussed in \cite{LiuBoumal:2019:2,ObaraOkunoTakeda:2020:1}.
Theory and an algorithm for equality constraints of the form $g(p)=q_*$ with $g \colon \cM \to \cN$ were presented in \cite{SchielaOrtiz:2021:1}.
Here we aim to incorporate equality and inequality constraints for manifold-valued constraint mappings $g \colon \cM \to \cN$.
Such an extension is not straightforward since there is no natural way to define a cone (nor a preorder) on the manifold~$\cN$ which would take the role of the condition $g(x) \in K$.
We propose here to overcome this difficulty by requiring the constraint function to have values in a \emph{submanifold with corners}~$\cK \subset \cN$, a mathematical object that corresponds to a convex cone locally in adequate charts.
We thus consider the following class of problems,
\begin{equation}
\label{eq:problem_setting_introduction}
\begin{aligned}
\text{Minimize}
\quad
&
f(p)
,
\quad
\text{where }
p \in \cM
\\
\text{\st}
\quad
&
g(p) \in \cK
,
\end{aligned}
\end{equation}
which generalizes \eqref{eq:NLP}.
The description of the feasible set as $\cF \coloneqq \setDef{p \in \cM}{g(p) \in \cK}$ turns out to be convenient and relevant in a number of situations.
Moreover, it will be shown that this description is independent of possibly varying parametrizations of the given problem.
Our formulation differs from other generalizations of equality and inequality constraints.
Consider for instance a geodesic polygon as a feasible set~$\cF$, defined on the sphere $\cM = \sphere{2}$, \ie, a region bounded by finitely many geodesics.
More generally, we can also consider a geodesic polyhedron on $\sphere{m}$, \ie, a region bounded by a number of geodesic hyperplanes.
In other words, its boundary consists of totally geodesic submanifolds, \cf, \eg, \cite[Ch.~XI, \S~4]{Lang:1999:1}.
An example of a geodesic polygon in $\sphere{2}$ is given in \cref{fig:geodesic_triangle}.
$\cF$ constitutes a submanifold of $\cN = \cM$ with corners, so it can be naturally parametrized as $g(p) \in \cK$ with $g = \id_\cM$ and $\cK = \cF$.
By contrast, an algebraic description of $\cF$ in terms of classical inequalities runs into difficulties.
In the case of a vector space $\cM = \R^m$, the analogue of $\cF$ (an ordinary polygon) can be easily represented as the intersection of finitely many closed half spaces, using linear inequality constraints $g_i(x) = \inner{x - y_i}{n_i} \le 0$.
A similar attempt to describe $\cF$ on $\sphere{m}$ via inequality constraints of the type $g_i(p) = \riemannian{\logarithm{q_i} p}{n_i} \le 0$ can certainly be used locally; however, the exponential map on $\sphere{m}$ fails to be injective, so its inverse, the logarithmic map, is not globally well-defined, and neither is this inequality constraint.
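This local/global dichotomy is easy to check numerically. The following sketch (a toy illustration of ours, assuming the standard round metric on the unit sphere $\sphere{2} \subset \R^3$; all names and test points are our choice) implements the logarithmic map at a point $q$ and the local inequality $g(p) = \riemannian{\logarithm{q} p}{n} \le 0$; the map degenerates at the antipode of $q$, which is exactly the global obstruction described above.

```python
import numpy as np

def sphere_log(q, p):
    """Logarithmic map on the unit sphere: the tangent vector at q that
    points toward p and whose length is the geodesic distance.
    Ill-defined when p is antipodal to q, since the exponential map at q
    is not injective there."""
    c = np.clip(np.dot(q, p), -1.0, 1.0)
    theta = np.arccos(c)                  # geodesic distance from q to p
    if theta < 1e-12:
        return np.zeros_like(q)
    v = p - c * q                         # component of p orthogonal to q
    return theta * v / np.linalg.norm(v)

# one geodesic through q = north pole, with chosen "normal" n in T_q S^2
q = np.array([0.0, 0.0, 1.0])
n = np.array([1.0, 0.0, 0.0])

def g_local(p):
    """Local inequality description g(p) = <log_q p, n>."""
    return float(np.dot(sphere_log(q, p), n))

p_west = np.array([-np.sin(0.3), 0.0, np.cos(0.3)])   # feasible side
p_east = np.array([ np.sin(0.3), 0.0, np.cos(0.3)])   # infeasible side
```

Near $q$ the sign of `g_local` separates the two sides of the geodesic, but no single such function does so on all of $\sphere{2}$.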
\begin{figure}
\caption{A geodesic polygon on the $2$-sphere. Unlike in $\R^2$, this set cannot be described as the intersection of half spaces. Notice that, for instance, in the tangent space at the light blue point in the middle of the horizontal geodesic, the image of the upper half space under the exponential map is the entire sphere.}
\label{fig:geodesic_triangle}
\end{figure}
This paper is structured as follows.
We describe our approach to modeling manifold-valued constraints using manifolds with corners in \cref{section:constrained_optimization}.
Constraint qualifications are introduced and discussed in \cref{section:CQs}.
\Cref{section:first-order_conditions} is devoted to the derivation of first-order necessary optimality conditions.
We show in \cref{section:first-order_conditions_using_retractions} that equivalent conditions are obtained when the problem is pulled back to a tangent space, using a retraction.
In \cref{section:Lagrange_function} we introduce the analogue of a Lagrangian function for \eqref{eq:problem_setting_introduction}.
In preparation for the formulation of second-order optimality conditions in \cref{section:second-order_conditions}, we define the critical cone in \cref{section:Critical_cone}.
Finally, \cref{section:application_control_of_variational_problems} presents an application of our theory to the control of discretized variational problems.
We denote manifolds as well as subsets of manifolds by calligraphic letters.
For an introduction to differentiable manifolds, we refer the reader, \eg, to \cite{Lee:2012:1}.
Points on the manifold~$\cM$ are denoted by the letter~$p$, while points on $\cN$ are denoted by $q$.
Each manifold comes with a collection of charts $(\cU,\psi)$, and each chart maps an open subset $\cU$ of $\cM$ (or $\cN$) onto an open set in $\R^m$ (or $\R^n$), where $m$ and $n$ are the dimensions of $\cM$ and $\cN$, respectively.
We say that a chart $(\cU,\psi)$ is centered at a point $p$ if $p \in \cU$ and $\psi(p) = 0$ hold.
For the purpose of this paper, since we will be pursuing a first- and second-order analysis, we will mostly assume that $\cM$ and $\cN$ are of class $C^2$, \ie, the chart transition maps $\psi_2 \circ \psi_1^{-1}$ are of this class.
In chart space, we use the letters $x \in \R^m$ and $y \in \R^n$.
We write $C^j(\cM,\cN)$ for the set of all mappings $\cM \to \cN$ which are $j$~times continuously differentiable.
The identity mappings on a vector space $V$ or on a manifold~$\cM$ are denoted by $\id_V$ and $\id_\cM$, respectively.
The zero element in the tangent space~$\tangentspace{p}{\cM}$ of a manifold~$\cM$ at $p$ is denoted by $0_p$.
We distinguish primal elements $v \in \tangentspace{p}{\cM}$ and dual elements $\mu \in \cotangentspace{p}{\cM}$ and write dual pairings in the form $\dual{\mu}{v}$ and compositions with linear mappings~$A$ into $\tangentspace{p}{\cM}$ as $\mu \, A$.
\section{Manifold-Valued Constraints}
\label{section:constrained_optimization}
Our method of choice to generalize equality and inequality constrained problems to manifolds is to replace the usual cone~$K$, into which the equality and inequality constraints~$g$ map, by a submanifold with corners.
In the following we use $0 \le k \le n$ and write $\R^k \times \{0\}^{n-k}$ to denote the subset of $\R^n$ consisting of those elements whose last $n-k$ components vanish.
We define the map $W \colon \R^n \to \R^{n-k}$ by $W x = (x_{k+1}, \dots, x_n)^\transp$.
Further, as usual, $v \le 0$ in $\R^\ell$ means $v_i \le 0$ for $i = 1, \dots, \ell$.
\begin{definition}[Submanifold with corners (\cite{Michor:1980:1})]
\label{definition:submanifold_with_corners}
Suppose that $\cN$ is an $n$-dimensional $C^2$-manifold.
A subset $\cK \subset \cN$ is called a submanifold with corners of dimension~$k$ if, for each $q \in \cN$, there exists a local chart $(\cU,\psi)$ satisfying $\psi(q) = 0$, an index $\ell$ satisfying $0 \le \ell \le k$, and a \emph{surjective} linear operator
\begin{equation*}
A \colon \R^k \times \{0\}^{n-k} \to \R^\ell
\end{equation*}
such that
\begin{equation*}
\begin{aligned}
\psi(\cK \cap \cU)
&
=
\setDef{x \in \psi(\cU) \cap (\R^k \times \{0\}^{n-k})}{A \, x \le 0}
\\
&
=
\setDef{x \in \psi(\cU)}{A \, x \le 0, \; W x = 0}
\end{aligned}
\end{equation*}
holds.
In this case, $(\cU,\psi)$ is termed an adapted local chart centered at~$q$.
\end{definition}
We may identify $A$ with a matrix
$\begin{bsmallmatrix} \widehat A & 0 \end{bsmallmatrix}$
where $\widehat A \in \R^{\ell \times k}$ and $0 \in \R^{\ell \times (n-k)}$.
For $x \in \R^k \times \{0\}^{n-k}$, we then have $A \, x = \widehat A \, (x_1, \ldots, x_k)^\transp$.
We refer to $q$ in \cref{definition:submanifold_with_corners} as a corner of index~$\ell$.
It has been shown in \cite{Michor:1980:1} that the index~$\ell$, which may of course depend on $q$, does not depend on the particular choice of the adapted local chart centered at~$q$.
In terms of optimization, $\ell$ describes the number of active inequality constraints at~$q$.
This generalizes the notion of vertices ($\ell = k$), edges ($\ell = k-1$), and higher-dimensional facets.
The requirement $\ell \le k$ is essential in this definition.
In local charts, the description of a corner satisfies the linear independence constraint qualification (LICQ), because the rows of $\widehat A$ are necessarily linearly independent to guarantee surjectivity.
Thus, whenever $(\widetilde \cU, \widetilde \psi)$ is a (non-adapted) local chart on $\cN$ such that $\widetilde \psi(\cK \cap \widetilde \cU)$ is given by the nonlinear constraint $\widetilde A(x) \le 0$ with $\widetilde A(0) = 0$, we can use the surjective implicit function theorem to construct an adapted local chart~$\psi$ such that $\psi(\cK \cap \cU)$ is described by $\widetilde A'(0) \, x \le 0$.
\Cref{definition:submanifold_with_corners} can be conceived as a straightforward generalization of the concepts
\begin{enumerate}
\item
of an embedded submanifold~$\cK \subset \cN$, which is obtained when $\ell = 0$ holds for all $q \in \cK$,
\item
of a smoothly bounded subset~$\cK \subset \cN$ with non-empty interior, which is obtained when $k = n$ and, for every $q \in \cN$, either $\ell = 0$ (interior point) or $\ell = 1$ (boundary point) holds,
\item
and of a convex polyhedron $\cK\subset \cN = \R^n$, whose corners satisfy the above regularity condition.
In particular, the non-positive orthant~$\cK = \R_-^n\subset \R^n$ is a submanifold with corners of dimension~$n$ of $\R^n$.
For instance, the origin $q = 0$ is a corner of index~$n$ and it can be described by $\widehat A = \id_{\R^n}$.
As another example, the point $q = -e_j$ (the negative $j$-th unit vector in $\R^n$), is a corner of index~$n-1$ and a local description of $\cK$ can be defined via $\widehat A \in \R^{(n-1) \times n}$ whose rows are $e_i^\transp$ with $1 \le i \le n$, $i \neq j$.
\end{enumerate}
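For the non-positive orthant in the last item, the index of a corner is just the number of active components. A minimal numerical sketch (the function name and the test points are ours):

```python
import numpy as np

def corner_index(y):
    """Index l of y as a corner of the non-positive orthant K = R_-^n:
    the number of active inequality constraints y_i = 0.
    Here k = n, so there is no equality part W y = 0."""
    return int(np.sum(np.isclose(y, 0.0)))

n = 3
origin = np.zeros(n)        # corner of index n
minus_e1 = -np.eye(n)[0]    # the point -e_1, a corner of index n - 1
interior = -np.ones(n)      # interior point, index 0
```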
Next we discuss tangent spaces in the context of submanifolds with corners.
Among the various equivalent ways to define the tangent space for differentiable manifolds, we use the one given in \cite{Lang:1999:1,Michor:1980:1}.
Let $q \in \cN$ and consider the set
\begin{equation*}
\setDef{(\psi,v)}{\psi \colon \cU_\psi \to \R^n \text{ is a chart at } q \in \cN, \; v \in \R^n}
.
\end{equation*}
For two charts $\psi_1, \psi_2$, we denote the transition map by $T \coloneqq \psi_2 \circ \psi_1^{-1}$.
Define an equivalence relation $(\psi_1,v_{\psi_1}) \sim (\psi_2,v_{\psi_2})$ by
\begin{equation*}
T'(\psi_1(q)) \, v_{\psi_1}
=
v_{\psi_2}
.
\end{equation*}
We call any corresponding equivalence class a tangent vector $v$ of $\cN$ at~$q$ and $v_\psi$ its representative in the chart $\psi$.
For fixed $q \in \cN$, the set of these equivalence classes is a vector space $\tangentspace{q}{\cN}$, termed the tangent space of $\cN$ at~$q$.
The disjoint union of $\tangentspace{q}{\cN}$ over all $q \in \cN$ can be endowed with the structure of a manifold, more accurately a vector bundle, termed the tangent bundle $\tangentspace{}{\cN}$ of $\cN$.
Suppose now that $\cK$ is a submanifold with corners of $\cN$ of dimension~$k$.
For $q \in \cK$, we define the tangent space $\tangentspace{q}{\cK}$ as the set of all $v \in \tangentspace{q}{\cN}$ which possess a representative $v_\psi$ in an adapted chart $\psi$ centered at~$q$ such that $v_\psi$ is an element of $\R^k \times \{0\}^{n-k}$.
In this case, \emph{all} representatives of $v$ in all adapted charts centered at~$q$ satisfy the same relation.
It is easy to verify that $\tangentspace{q}{\cK}$ is a linear subspace of $\tangentspace{q}{\cN}$ of dimension~$k$.
Notice that the dimension of $\tangentspace{q}{\cK}$ does not depend on the index of $q$ as a corner of $\cK$.
Further, the set of inner tangent vectors $\innertangentcone{q}{\cK} \subset \tangentspace{q}{\cK}$ is defined as all $v \in \tangentspace{q}{\cK}$ which satisfy, in addition, $A \, v_\psi \le 0$ for representatives in adapted charts centered at~$q$.
As discussed in \cite{Michor:1980:1}, $\innertangentcone{q}{\cK}$ is well-defined and it is a polyhedral convex cone.
Similarly, we denote by $\zerotangentspace{q}{\cK}$ the linear subspace of all elements~$v$ of $\tangentspace{q}{\cK}$ for which the representatives in adapted charts centered at~$q$ satisfy $A \, v_\psi = 0$.
We refer the reader to \cref{figure:tangent_vectors} for an illustrative example.
\begin{figure}
\caption{Illustration of a $k = 2$-dimensional manifold with corners~$\cK$ (teal) as a subset of the $n = 2$-dimensional sphere $\cN = \sphere{2}$.}
\label{figure:tangent_vectors}
\end{figure}
The following are our standing assumptions for the remainder of this paper.
\begin{assumption}
Let $\cM$ and $\cN$ be $C^2$-manifolds of dimensions~$m$ and $n$, respectively.
Moreover, let $\cK$ be a submanifold with corners of $\cN$ of dimension~$k$.
We further suppose that $f \in C^2(\cM,\R)$ and $g \in C^2(\cM,\cN)$ hold and consider the following problem:
\begin{equation}
\label{eq:problem_setting}
\begin{aligned}
\text{Minimize}
\quad
&
f(p)
,
\quad
\text{where }
p \in \cM
\\
\text{\st}
\quad
&
g(p) \in \cK
.
\end{aligned}
\end{equation}
\end{assumption}
Notice that products of submanifolds with corners are again submanifolds with corners.
One can therefore easily combine several constraints, \eg, $g_1(p) \in \cK_1$ and $g_2(p) \in \cK_2$, into one single constraint mapping into a product manifold.
We re-iterate that \eqref{eq:problem_setting} generalizes classical nonlinear programming problems with equality and inequality constraints.
The latter are obtained in case $\cM = \R^m$, $\cN = \R^n$, $\cK = \R_-^k \times \{0\}^{n-k} \subset \R^k \times \R^{n-k}$.
At any point $p \in \cK$, an adapted local chart centered at~$p$ can be chosen as $\varphi(\tilde p) = \tilde p - p$, and $\widehat A$ consists of the appropriate rows of $\id_{\R^k}$.
Be aware that in general the feasible set~$\cF \coloneqq g^{-1}(\cK) \subset \cM$ is \emph{not} a submanifold with corners even though $\cK$ is.
For example, consider $p$ to be the tip of a pyramid~$\cP$ in $\R^3$, where $\ell>3$ planes meet.
Then, locally near $p$, $\cP$ is described by $\ell>3$ inequality constraints, and thus $\cP$ cannot be a submanifold with corners of $\R^3$, because this would violate the condition $\ell \le k = 3$ in \cref{definition:submanifold_with_corners}.
Nevertheless, with a suitable affine mapping $g \colon \R^3 \to \R^\ell$, $\cP$ can be described locally as $\cP = g^{-1}(\R_-^\ell)$.
Thus, by means of the constraint mapping~$g$ we can obtain feasible sets more general than submanifolds with corners of $\cM$.
Also in view of practical computational approaches, the set~$\cK$ should have a simple structure, allowing, \eg, a local representation in computable adapted charts.
Suppose that $\varphi \colon \cM \supset \cU_p \to \R^m$ is a chart centered at~$p$ and that $\psi \colon \cN \supset \cU_{g(p)} \to \R^n$ is a chart centered at~$g(p)$.
We may then define the following local representations of $f$ and $g$:
\begin{equation*}
f_\varphi
\coloneqq
f \circ \varphi^{-1}
\colon
\varphi(\cU_p)
\to
\R
,
\quad
g_{\psi,\varphi}
\coloneqq
\psi \circ g \circ \varphi^{-1}
\colon
\varphi(\cU_p)
\to
\R^n
\end{equation*}
and obtain the following classical constrained optimization problem locally:
\begin{equation}
\label{eq:problem_setting_local}
\begin{aligned}
\text{Minimize}
\quad
&
f_\varphi(p_\varphi),
\quad
\text{where }
p_\varphi \in \varphi(\cU_p)
\\
\text{\st}
\quad
&
\paren[auto]\{.{
\begin{aligned}
A \, g_{\psi,\varphi}(p_\varphi)
&
\le
0
,
\\
W g_{\psi,\varphi}(p_\varphi)
&
=
0
.
\end{aligned}
}
\end{aligned}
\end{equation}
As a general strategy, we will carry over results on first- and second-order optimality conditions from \eqref{eq:problem_setting_local} to \eqref{eq:problem_setting} by formulations that are independent of the local representation in charts.
We will use rather straightforward and well established strategies of proof but highlight invariance considerations which arise in the differential geometric context.
\begin{example}\label{example:classical}
Consider the standard case, \ie $\cN = \R^{n_I+n_E}$ and
\begin{equation*}
\begin{aligned}
g_I(x)
&
\le
0
&
&
\text{in }
\R^{n_I}
,
\\
g_E(x)
&
=
0
&
&
\text{in }
\R^{n_E}
.
\end{aligned}
\end{equation*}
This fits into our general setting \eqref{eq:problem_setting_introduction} if we define
\begin{equation*}
\begin{aligned}
\cK
\coloneqq
\setDef[auto]{y \in \R^{n_I+n_E}}{
\begin{aligned}
y_i
&
\le
0
\text{ for }
i = 1, \ldots, n_I
,
\\
y_i
&
=
0
\text{ for }
i = n_I + 1,\ldots, n_I + n_E
\end{aligned}
}
.
\end{aligned}
\end{equation*}
This set $\cK$ is a submanifold with corners of $\cN$ of dimension~$k = n_I$.
An adapted chart at a point $y \in \cK$ can be defined by $\varphi(\eta) = \eta - y$ and by choosing the chart domain~$\cU$ as an open $\norm{\cdot}_\infty$-ball about $y$ with radius $r = \min \setDef{\abs{y_i}}{y_i < 0}$.
The index~$\ell$ of any point $y \in \cK$ equals the number of components~$1 \le i \le n_I$ for which $y_i = 0$ holds.
Then the linear mapping $\widehat A \in \R^{\ell \times k}$ consists of rows equal to $e_i^\transp$ (the $i$-th unit vector in $\R^k$) for each index~$i$ with $y_i = 0$.
For any $y \in \cK$, the tangent space $\tangentspace{y}{\cK}$ (in its representation \wrt the chart $\varphi(\eta) = \eta - y$) is given by $\R^{n_I} \times \{0\}^{n_E}$.
At the point $y = (1, 0, \ldots, 0)^\transp \in \cK$, for instance, the cone of inner tangent vectors is described by $\R \times \R_-^{n_I-1} \times \{0\}^{n_E}$, while the subspace $\zerotangentspace{y}{\cK}$ is equal to $\R \times \{0\}^{n_I-1} \times \{0\}^{n_E}$.
\end{example}
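The three sets at the end of the example can be encoded directly in the chart $\varphi(\eta) = \eta - y$. The sketch below (sizes and test vectors are our choice) tests membership in $\tangentspace{y}{\cK}$, $\innertangentcone{y}{\cK}$, and $\zerotangentspace{y}{\cK}$ componentwise:

```python
import numpy as np

n_I, n_E = 3, 2                           # our choice of sizes
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # point in K; inequalities 2,3 active
active = [i for i in range(n_I) if np.isclose(y[i], 0.0)]

def in_tangent_space(v):
    """v in T_y K = R^{n_I} x {0}^{n_E} (chart representation)."""
    return bool(np.allclose(v[n_I:], 0.0))

def in_inner_cone(v):
    """Additionally A v <= 0, i.e. v_i <= 0 on the active components."""
    return in_tangent_space(v) and all(v[i] <= 0.0 for i in active)

def in_zero_space(v):
    """Additionally A v = 0, i.e. v_i = 0 on the active components."""
    return in_tangent_space(v) and all(np.isclose(v[i], 0.0) for i in active)
```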
\begin{example}\label{eq:geopoly}
Consider a geodesic polyhedron $\cK \subset \cN$ on a Riemannian manifold $\cN$, \ie, a set whose facets are totally geodesic submanifolds as in \cref{fig:geodesic_triangle}; \cf, \eg, \cite[Ch.~XI, \S4]{Lang:1999:1}.
We may use the logarithmic map $\logarithm{q} \colon \cN \to \tangentspace{q}{\cN} \cong \R^n$ to construct an adapted local chart at a point~$q \in \cK$.
Then $\cK$ can be represented as $A \, v_\psi \le 0$ and $\cK$ is a manifold with corners, provided that $A$ (which depends on $q$ and $\psi$, of course) is surjective at any $q \in \cK$.
\end{example}
\begin{example}\label{eq:constraintlr}
Given two mappings $g_\ell, g_r \colon \cM \to \cN$, consider the equality constraint
\begin{equation*}
g_\ell(p)
=
g_r(p)
.
\end{equation*}
Since $\cN$ in general is not a vector space, this constraint cannot be written in the usual form $g_\ell(p) - g_r(p) = 0$.
However, it can be formulated as $g(p) \in \cK$ via the mapping
\begin{equation*}
g
\colon
\cM
\ni
p
\mapsto
(g_\ell(p), g_r(p))
\in
\cN \times \cN
\end{equation*}
with $\cK = \setDef{(q_1,q_2) \in \cN \times \cN}{q_1 = q_2}$ the diagonal submanifold of $\cN \times \cN$.
\end{example}
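The reformulation of an equality $g_\ell(p) = g_r(p)$ via the diagonal can be sketched numerically. In this toy instance (the maps $g_\ell, g_r$ below are our invention; the ambient coordinates of $\cN$ suffice for the membership test):

```python
import numpy as np

def g_l(p):
    """Toy left-hand map into N, written in ambient R^3 coordinates."""
    return np.array([np.cos(p), np.sin(p), 0.0])

def g_r(p):
    """Toy right-hand map into N."""
    return np.array([np.sin(p), np.cos(p), 0.0])

def g(p):
    """Combined map g: M -> N x N."""
    return (g_l(p), g_r(p))

def in_diagonal(pair, tol=1e-12):
    """Membership in K = {(q1, q2) : q1 = q2}, up to numerical tolerance."""
    q1, q2 = pair
    return bool(np.linalg.norm(q1 - q2) <= tol)
```

Here $g(p) \in \cK$ holds exactly when $\cos p = \sin p$, \ie, at $p = \pi/4$ modulo $\pi$.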
\begin{example}
Consider a vector bundle $\pi \colon \cN \to \cB$, where $\cB$ and $\cN$ are smooth manifolds and $\pi$ is a smooth surjective map.
In fact, the total space~$\cN$ of a vector bundle is a manifold with special structure in the sense that, for each $q$ in the base manifold~$\cB$, the preimages $\pi^{-1}(q)$ (called fibres) are linear spaces; see, \eg, \cite[Ch.~III]{Lang:1999:1}.
In applications, a constraint mapping $g \colon \cM \to \cN$ of the form
\begin{equation*}
g(p)
=
0_{\pi(g(p))}
\end{equation*}
arises frequently, in particular when $\cN = \tangentspace{}{\cB}$ or $\cN = \cotangentspace{}{\cB}$ is the tangent bundle or cotangent bundle over $\cB$, respectively.
Since the mapping $q \mapsto 0_{\pi(q)}$ is well-defined and smooth on vector bundles, this constraint is of the form discussed in \cref{eq:constraintlr}.
If the fibres $\pi^{-1}(q)$ of $\cN$ are equipped with preorder cones $K_q \subset \pi^{-1}(q)$, then also inequality constraints of the form
\begin{equation*}
g(p)
\le
0_{\pi(g(p))}
,
\quad
\text{\ie,}
\quad
g(p) \in -K_{\pi(g(p))}
\end{equation*}
can be included under suitable assumptions on the choice of cones.
\end{example}
\section{Constraint Qualifications}
\label{section:CQs}
We recapitulate the definition of the tangent cone of a subset $\cF \subset \cM$ and generalize basic results, known for optimization problems on vector spaces, to the case of manifolds with corners.
We recall that $t_k \searrow 0$ denotes a sequence of strictly positive real numbers that converges to~$0$.
\begin{definition}[Tangent cone] \label{definition:tangent_cone}
Let $p \in \cF$ and $(\cU,\varphi)$ be a chart centered at~$p$.
A tangent vector~$v \in \tangentspace{p}{\cM}$ is said to belong to the tangent cone $\tangentcone{p}{\cF} \subset \tangentspace{p}{\cM}$ at~$p$ if there exists a representative $v_\varphi$ in the chart~$\varphi$ and sequences $t_k \searrow 0$ and $x_{\varphi,k} \in \R^m$ such that
\begin{equation}\label{eq:tangential_sequence}
x_{\varphi,k} \to v_\varphi
\text{ and }
t_k \, x_{\varphi,k} \in \varphi(\cF \cap \cU)
\end{equation}
holds.
We then call $x_{\varphi,k}$ a feasible tangential sequence for $v_\varphi$.
\end{definition}
The following result shows that \cref{definition:tangent_cone} does not depend on the chosen chart.
Indeed, the tangent cone can alternatively be defined without the use of a chart; compare \cite[Def.~3.2]{BergmannHerzog:2019:1}.
\begin{lemma}\label{lemma:tangentcone_representative}
Property~\eqref{eq:tangential_sequence} holds for one representative of $v \in \tangentspace{p}{\cM}$ if and only if it holds for every representative of $v$.
\end{lemma}
\begin{proof}
Consider two local charts $\varphi_1$ and $\varphi_2$ centered at $p$ and their smooth transition map $T = \varphi_2 \circ \varphi_1^{-1}$, defined in a neighborhood~$\cU$ of $0 = \varphi_1(p) = \varphi_2(p) = T(0)$.
Then the corresponding representatives $v_{\varphi_1}$ and $v_{\varphi_2}$ of a tangent vector $v \in \tangentspace{p}{\cM}$ are related by $v_{\varphi_2} = T'(0) \, v_{\varphi_1}$.
By differentiability of $T$ we obtain (for sufficiently large $k$ so that $t_k \, x_{\varphi_1,k} \in \cU$):
\begin{equation*}
x_{\varphi_2,k}
\coloneqq
\frac{T(t_k \, x_{\varphi_1,k})-T(0)}{t_k} \to T'(0) \, v_{\varphi_1}
=
v_{\varphi_2}
\end{equation*}
for any pair of sequences $x_{\varphi_1,k} \to v_{\varphi_1}$ and $t_k \searrow 0$.
Hence, $v_{\varphi_1}$ satisfies \eqref{eq:tangential_sequence} if and only if $v_{\varphi_2}$ does.
\end{proof}
Obviously, $\tangentcone{p}{\cF}$ is a cone and $0 \in \tangentcone{p}{\cF}$.
Furthermore, it is closed.
To see this, consider a sequence $v^i \in \tangentcone{p}{\cF}$ which converges to $v \in \tangentspace{p}{\cM}$ with $v \neq 0$.
Using a chart, we have sequences $t_k^i \searrow 0$ and $x_{\varphi,k}^i \to v_\varphi^i$.
From these, appropriate diagonal sequences can be chosen to verify $v \in \tangentcone{p}{\cF}$.
The following simple lemma can be proved as in the standard case:
\begin{lemma}\label{lemma:positive_derivative}
Let $f \in C^1(\cM,\R)$ and assume that $p$ is a local minimizer of $f$ on a set~$\cF \subset \cM$.
Then $f'(p) \, v \ge 0$ holds for all $v \in \tangentcone{p}{\cF}$.
\end{lemma}
\begin{proof}
Consider $v \in \tangentcone{p}{\cF}$ and a corresponding feasible tangential sequence $x_{\varphi,k} \to v_\varphi$ with $t_k \, x_{\varphi,k} \in \varphi(\cF \cap \cU)$.
Then, by local optimality, $t_k^{-1}(f_\varphi(t_k \, x_{\varphi,k}) - f_\varphi(0)) \ge 0$ holds for $k \in \N$ sufficiently large.
Since $x_{\varphi,k} \to v_\varphi$, we obtain $f_\varphi'(0) \, x_{\varphi,k} \to f_\varphi'(0) \, v_\varphi = f'(p) \, v$, and since $t_k^{-1}(f_\varphi(t_k \, x_{\varphi,k}) - f_\varphi(0)) - f_\varphi'(0) \, x_{\varphi,k} \to 0$ by differentiability, this limit must be non-negative.
\end{proof}
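A quick numerical illustration of the lemma (everything below is a toy instance of ours): take $\cM = \R^2$, $\cF = \R_-^2$, and $f(x) = x_1^2 + (x_2 - 1)^2$, whose minimizer over $\cF$ is the corner $p_* = 0$ with $\tangentcone{p_*}{\cF} = \R_-^2$. Sampling random directions in the tangent cone confirms $f'(p_*) \, v \ge 0$:

```python
import numpy as np

def f(x):
    """Toy smooth objective on M = R^2."""
    return x[0]**2 + (x[1] - 1.0)**2

p_star = np.zeros(2)                       # minimizer of f over F = R_-^2
grad = np.array([2.0 * p_star[0],          # f'(p_star) = (0, -2)
                 2.0 * (p_star[1] - 1.0)])

rng = np.random.default_rng(0)
dirs = -np.abs(rng.normal(size=(100, 2)))  # random v in the cone R_-^2

# directional derivatives f'(p_star) v over the sampled tangent cone
min_slope = float(np.min(dirs @ grad))
```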
The following result shows that the tangent cone to a submanifold with corners has a particularly simple structure since it agrees with the cone of inner tangent vectors defined in \cref{section:constrained_optimization}:
\begin{proposition}
Suppose that $\cK$ is a submanifold with corners of $\cN$ and $q \in \cK$.
Then
\begin{equation*}
\tangentcone{q}{\cK}
=
\innertangentcone{q}{\cK}
.
\end{equation*}
\end{proposition}
\begin{proof}
Let $v \in \tangentspace{q}{\cK}$.
Consider an adapted local chart $\psi$ of $\cK \subset \cN$, centered at~$q$, and defined on a neighborhood $\cU$ of $q$, and $v_\psi$ the corresponding representative of $v$.
Since both $\tangentcone{q}{\cK}$ and $\innertangentcone{q}{\cK}$ are cones, we may assume \wolog that $\lambda \psi(\cK \cap \cU) \subset \psi(\cK \cap \cU)$ and $\lambda v_\psi \in \psi(\cU)$ for $\lambda \in [0,1]$.
Two cases can occur.
If $v_\psi \in \psi(\cK \cap \cU)$, then $v \in \innertangentcone{q}{\cK}$ holds by definition, and $v \in \tangentcone{q}{\cK}$ follows because $t_k \, v_\psi \in \psi(\cK \cap \cU)$ is clearly a tangential sequence.
By contrast, if $v_\psi \notin \psi(\cK \cap \cU)$, then $v \notin \innertangentcone{q}{\cK}$ by definition.
Moreover,
\begin{equation*}
\dist{\psi(\cK \cap \cU)}{v_\psi}
\coloneqq
\inf_{w \in \psi(\cK \cap \cU)} \norm{v_\psi - w}
>
0
\end{equation*}
because $\psi(\cK \cap \cU)$ is closed in $\psi(\cU)$.
Then we can compute
\begin{equation*}
\dist{\psi(\cK \cap \cU)}{\lambda \, v_\psi}
\coloneqq
\inf_{w \in \psi(\cK \cap \cU)} \norm{\lambda(v_\psi - w)}
=
\lambda \dist{\psi(\cK \cap \cU)}{v_\psi}
\;
\text{for all } \lambda \in ]0,1]
.
\end{equation*}
Hence, there is no feasible tangential sequence for $v_\psi$.
\end{proof}
In the following we consider the linearization
\begin{equation*}
g'(p) \colon \tangentspace{p}{\cM} \to \tangentspace{g(p)}{\cN}
\end{equation*}
of $g$ at~$p$.
Its representation in a local chart $\varphi$, centered at $p$, and an adapted local chart $\psi$, centered at $g(p)$, reads:
\begin{equation*}
g_{\psi,\varphi}'(0)
\coloneqq
(\psi \circ g \circ \varphi^{-1})'(0)
\colon
\R^m \to \R^n
.
\end{equation*}
\begin{definition}[Linearizing cone]
\label{definition:linearizing_cone}
The linearizing cone at a point $p \in \cF$ is defined as
\begin{equation*}
\linearizingcone{p}{g}{\cK}
\coloneqq
\setDef[big]{v \in \tangentspace{p}{\cM}}{g'(p) \, v \in \innertangentcone{g(p)}{\cK}}
=
g'(p)^{-1}\paren[big](){\innertangentcone{g(p)}{\cK}}
\subset
\tangentspace{p}{\cM}
.
\end{equation*}
\end{definition}
\begin{lemma}
We have $\tangentcone{p}{\cF} \subset \linearizingcone{p}{g}{\cK}$.
\end{lemma}
\begin{proof}
Consider $v \in \tangentcone{p}{\cF}$, its representation in a chart $v_{\varphi}$ and corresponding sequences $t_k \searrow 0$ and $x_{\varphi,k} \to v_\varphi$, where $g_{\psi,\varphi}(t_k \, x_{\varphi,k}) \in \psi(\cK)$.
We obtain:
\begin{equation*}
A \, g_{\psi,\varphi}(t_k \, x_{\varphi,k})
\le
0
,
\quad
A \, g_{\psi,\varphi}(0)
=
0
.
\end{equation*}
It follows that
\begin{align*}
t_k A \, g_{\psi,\varphi}'(0)(v_\varphi)
&
=
A \, g_{\psi,\varphi}'(0)(t_k \, x_{\varphi,k}) + t_k A \, g_{\psi,\varphi}'(0)(v_\varphi - x_{\varphi,k})
\\
&
=
A \, g_{\psi,\varphi}(t_k \, x_{\varphi,k}) + \co(t_k)
,
\end{align*}
and thus
\begin{equation*}
\frac{1}{t_k} A \, g_{\psi,\varphi}(t_k \, x_{\varphi,k}) \to A \, g_{\psi,\varphi}'(0)(v_\varphi)
.
\end{equation*}
Since every component of the left-hand side is non-positive, so is its limit.
Thus, $A \, g_{\psi,\varphi}'(0)(v_\varphi) \le 0$ and similarly $W g_{\psi,\varphi}'(0)(v_\varphi) = 0$.
This implies $v \in \linearizingcone{p}{g}{\cK}$.
\end{proof}
\begin{definition}
The (description of the) feasible set $\cF$ is called transversal over $\cK$ at~$p \in \cF$ if
\begin{equation*}
\image g'(p) - \tangentspace{g(p)}{\cK}
=
\tangentspace{g(p)}{\cN}
.
\end{equation*}
It is said to satisfy the Zowe--Kurcyusz--Robinson constraint qualification\\ (ZKRCQ, compare \cite{ZoweKurcyusz:1979:1}) at~$p \in \cF$ if
\begin{equation}
\label{eq:ZKRCQ}
\tag{ZKRCQ}
\image g'(p) - \innertangentcone{g(p)}{\cK}
=
\tangentspace{g(p)}{\cN}
.
\end{equation}
It is said to satisfy the linear independence constraint qualification (LICQ) at~$p \in \cF$ if
\begin{equation}
\label{eq:LICQ}
\tag{LICQ}
\image g'(p) - \zerotangentspace{g(p)}{\cK}
=
\tangentspace{g(p)}{\cN}
.
\end{equation}
\end{definition}
Clearly, since $\zerotangentspace{g(p)}{\cK} \subset \innertangentcone{g(p)}{\cK} \subset \tangentspace{g(p)}{\cK}$ holds, \eqref{eq:LICQ} implies \eqref{eq:ZKRCQ}, which in turn implies transversality.
If the index $\ell$ of $g(p)$ satisfies $\ell = 0$, \ie $g(p)$ is not a corner of positive index, then all above notions are equivalent, because $\zerotangentspace{g(p)}{\cK} = \tangentspace{g(p)}{\cK}$ holds in this case.
\begin{proposition}\label{proposition:tangentcone_is_linearizingcone}
If \eqref{eq:ZKRCQ} holds, then $\tangentcone{p}{\cF} = \linearizingcone{p}{g}{\cK}$.
\end{proposition}
\begin{proof}
As above, consider a chart $\varphi$ of $\cM$ centered at~$p$ and an adapted chart $\psi$ of $\cN$ centered at~$g(p)$.
Then the feasible set is represented locally as follows:
\begin{equation*}
\cF_{\psi,\varphi}
\coloneqq
\setDef{x \in \varphi(\cU)}{W g_{\psi,\varphi}(x) = 0, \; A \, g_{\psi,\varphi}(x) \le 0}
,
\end{equation*}
while the representation of the linearizing cone is:
\begin{equation}\label{eq:LRep}
\linearizingcone{p}{g}{\cK}_{\psi,\varphi}
\coloneqq
\setDef{x \in \R^m}{W g_{\psi,\varphi}'(0) \, x = 0, \; A \, g_{\psi,\varphi}'(0) \, x \le 0}
.
\end{equation}
Then \eqref{eq:ZKRCQ} can be written as:
\begin{equation}\label{eq:ZKRCQ_Rep}
\image g_{\psi,\varphi}'(0) - \setDef{y \in \R^n}{W y = 0, \; A y \le 0}
=
\R^n
.
\end{equation}
Under assumption \eqref{eq:ZKRCQ}, we can apply \cite{ZoweKurcyusz:1979:1}
to conclude that $\linearizingcone{p}{g}{\cK}_{\psi,\varphi}$ coincides with the tangent cone of $\cF_{\psi,\varphi}$ at~$0$ in $\R^m$, which is, by \cref{lemma:tangentcone_representative}, a representative of $\tangentcone{p}{\cF}$.
Since both sets are representatives of subsets of $\tangentspace{p}{\cM}$, we conclude the result as claimed.
\end{proof}
Using the local representation \eqref{eq:LRep}, where our constraints are split into equality and inequality constraints, we can formulate the Mangasarian--Fromo\-vitz constraint qualification (MFCQ) in the following way:
\begin{equation}
\label{eq:MFCQ}
\tag{MFCQ}
\paren[auto].\}{
\begin{aligned}
&
\text{The mapping $W g_{\psi,\varphi}'(0)$ is surjective.}
\\
&
\text{There exists $\hat x \in \R^m$ such that $W g_{\psi,\varphi}'(0) \, \hat x = 0$}
\\
&
\qquad
\text{and $A \, g_{\psi,\varphi}'(0) \, \hat x < 0$ holds (in each component).}
\end{aligned}
\quad
}
\end{equation}
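Both MFCQ conditions are straightforward to verify for concrete chart data. In the following sketch (all matrices and the candidate direction are our toy choices, with $m = n = 3$, $k = 2$, and one active inequality), we check surjectivity of $W g_{\psi,\varphi}'(0)$ by a rank computation and exhibit a direction $\hat x$:

```python
import numpy as np

G = np.eye(3)                        # g'_{psi,phi}(0), our toy Jacobian
A = np.array([[1.0, 0.0, 0.0]])      # active inequality rows (independent)
W = np.array([[0.0, 0.0, 1.0]])      # equality part, W x = x_3

# first MFCQ condition: W g'(0) surjective onto R^{n-k}
surjective = np.linalg.matrix_rank(W @ G) == W.shape[0]

# second MFCQ condition: x_hat in ker(W g'(0)) with A g'(0) x_hat < 0
x_hat = np.array([-1.0, 0.0, 0.0])
on_kernel = bool(np.allclose(W @ G @ x_hat, 0.0))
strictly_feasible = bool(np.all(A @ G @ x_hat < 0.0))
```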
\begin{proposition}
\eqref{eq:MFCQ} and \eqref{eq:ZKRCQ} are equivalent.
\end{proposition}
\begin{proof}
Let \eqref{eq:MFCQ} hold and $y \in \R^n$ be arbitrary.
Define $\hat y \coloneqq g_{\psi,\varphi}'(0) \, \hat x \in \image g_{\psi,\varphi}'(0)$.
In addition, since $W g_{\psi,\varphi}'(0)$ is surjective, there is $\tilde x$, such that $W g_{\psi,\varphi}'(0) \, \tilde x = W y$ and we define $\tilde y \coloneqq g_{\psi,\varphi}'(0) \, \tilde x$.
Then we can write for any $\alpha > 0$:
\begin{equation*}
y
=
(\alpha \, \hat y + \tilde y) - (\alpha \, \hat y + \tilde y - y),
\end{equation*}
where $\alpha \, \hat y + \tilde y \in \image g_{\psi,\varphi}'(0)$.
By construction, $W(\alpha \, \hat y + \tilde y - y) = 0$ holds, and choosing $\alpha$ sufficiently large we also obtain $A(\alpha \, \hat y + \tilde y - y) \le 0$, because $A \hat y < 0$.
This shows \eqref{eq:ZKRCQ_Rep} and thus \eqref{eq:ZKRCQ}.
If \eqref{eq:ZKRCQ} holds, then for any $y \in \R^n$ there is $\hat y \in \image g_{\psi,\varphi}'(0)$, such that $W \hat y = W y$ and $A \hat y \le Ay$, because $y = \hat y - (\hat y - y)$ with $W(\hat y - y) = 0$ and $A(\hat y - y) \le 0$.
Since $W$ is surjective, this implies that $W g_{\psi,\varphi}'(0)$ is surjective as well.
Moreover, since $A$ is surjective on $\R^k \times \{0\}^{n-k}$, we can choose $y \in \R^k \times \{0\}^{n-k}$ with $A \, y < 0$; note that $W y = 0$ holds automatically.
The corresponding $\hat y$ lies in $\image g_{\psi,\varphi}'(0)$, \ie, $\hat y = g_{\psi,\varphi}'(0) \, \hat x$ for some $\hat x$, and satisfies $W g_{\psi,\varphi}'(0) \, \hat x = W y = 0$ as well as $A \, g_{\psi,\varphi}'(0) \, \hat x \le A \, y < 0$.
So \eqref{eq:MFCQ} holds.
\end{proof}
\begin{proposition}\label{proposition:LICQ}
$\cF$ satisfies \eqref{eq:LICQ} at $p \in \cF$ if and only if, for every representation in charts, the following linear mapping is surjective:
\begin{equation*}
B \, g_{\psi,\varphi}'(0)
\colon
\R^m \to \R^\ell \times \R^{n-k}
,
\quad
\text{where }
B
\coloneqq
\begin{pmatrix}
A
\\
W
\end{pmatrix}
.
\end{equation*}
\end{proposition}
\begin{proof}
Let $v \in \tangentspace{g(p)}{\cN}$ with representative $v_\psi \in \R^n$.
If $B \, g_{\psi,\varphi}'(0)$ is surjective, then we find $w_\varphi \in \R^m$, such that $B \, g_{\psi,\varphi}'(0) \, w_\varphi = -B \, v_\psi$.
This implies that $v^0_\psi \coloneqq g_{\psi,\varphi}'(0) \, w_\varphi + v_\psi \in \ker B$ and we may write $v_\psi = g_{\psi,\varphi}'(0) \, w_\varphi - v^0_\psi$. Thus, we have found $w \in \tangentspace{p}{\cM}$ and $v^0 \in \zerotangentspace{g(p)}{\cK} $, such that $v = g'(p)w-v^0$.
If, conversely, \eqref{eq:LICQ} holds, then we can write $v_\psi = g_{\psi,\varphi}'(0) \, w_\varphi - v^0_\psi$ for any $v \in \tangentspace{g(p)}{\cN}$ with $v^0_\psi \in \ker B$ and thus $B \, g_{\psi,\varphi}'(0) \, w_\varphi = B \, v_\psi$.
Hence the surjectivity of $B \, g_{\psi,\varphi}'(0)$ follows from the surjectivity of $B$, which holds by \cref{definition:submanifold_with_corners}.
\end{proof}
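\Cref{proposition:LICQ} reduces \eqref{eq:LICQ} to a rank condition in charts, which is easy to test numerically. A toy instance of ours (with $m = 4$, $n = 3$, $k = 2$, $\ell = 1$, so that $B$ has $\ell + (n-k) = 2$ rows):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0]])          # active inequality part
W = np.array([[0.0, 0.0, 1.0]])          # equality part, W x = x_3
B = np.vstack([A, W])                    # B = [A; W]

G = np.array([[1.0, 0.0, 0.0, 0.0],      # g'_{psi,phi}(0) in R^{3 x 4}
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

# LICQ in this chart: B g'(0) surjective, i.e. full row rank
licq = np.linalg.matrix_rank(B @ G) == B.shape[0]
```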
\section{First-Order Optimality Conditions}
\label{section:first-order_conditions}
In this section we address the first-order necessary optimality conditions for \eqref{eq:problem_setting} under the constraint qualification \eqref{eq:ZKRCQ}.
To this end, we recall that
\begin{equation*}
S^\circ
=
\setDef{v^* \in V^*}{\dual{v^*}{s} \le 0 \text{ for all $s \in S$}}
\end{equation*}
denotes the polar cone of an arbitrary set~$S \subset V$ of a normed vector space~$V$.
\begin{theorem}
\label{theorem:KKT}
Suppose that $p_* \in \cF$ is a local minimizer of \eqref{eq:problem_setting} such that \eqref{eq:ZKRCQ} holds at~$p_*$.
Then there exists a Lagrange multiplier $\mu \in \cotangentspace{g(p_*)}{\cN}$ such that the following KKT conditions hold:
\begin{subequations}\label{eq:KKT_conditions}
\begin{align}
\label{eq:KKT_conditions1}
&
f'(p_*) + \mu \, g'(p_*)
=
0
\quad
\text{on }
\cotangentspace{p_*}{\cM}
,
\\
\label{eq:KKT_conditions2}
&
\mu
\in
\paren[big](){\innertangentcone{g(p_*)}{\cK}}^\circ
.
\end{align}
\end{subequations}
The set of all possible Lagrange multipliers,
$\Lambda(p_*) = \setDef{\mu \in \cotangentspace{g(p_*)}{\cN}}{\eqref{eq:KKT_conditions} \text{ holds}}$
is compact.
If \eqref{eq:LICQ} holds, then $\Lambda(p_*)$ is a singleton.
\end{theorem}
\begin{proof}
By \cref{lemma:positive_derivative} we have $f'(p_*) \ge 0$ on $\tangentcone{p_*}{\cF}$ and thus, by \cref{proposition:tangentcone_is_linearizingcone} on $\linearizingcone{p_*}{g}{\cK}$.
Hence $v = 0$ is a minimizer of the following linear problem:
\begin{equation*}
\begin{aligned}
\text{Minimize}
\quad
&
f'(p_*) \, v
,
\quad
\text{where }
v \in \tangentspace{p_*}{\cM}
\\
\text{ \st}
\quad
&
g'(p_*) \, v \in \innertangentcone{g(p_*)}{\cK}
.
\end{aligned}
\end{equation*}
Due to the \eqref{eq:ZKRCQ} regularity condition, we can once more apply the results of \cite{ZoweKurcyusz:1979:1} to this problem to conclude the existence of a Lagrange multiplier $\mu$ such that the KKT conditions \eqref{eq:KKT_conditions} hold, so $\Lambda(p_*)$ is non-empty.
Being the intersection of closed sets, $\Lambda(p_*)$ is also closed.
In order to prove the boundedness of $\Lambda(p_*)$, we proceed by contradiction.
Consider a sequence $\mu_k$ of Lagrange multipliers with $\norm{\mu_k} \to \infty$ and set $\lambda_k \coloneqq (\mu_k-\mu_1)/\norm{\mu_k}$; since $\mu_1/\norm{\mu_k} \to 0$, this sequence is bounded.
By picking a subsequence we may assume that $\lambda_k$ converges to a limit $\lambda_*$ with $\norm{\lambda_*} = 1$.
Due to \eqref{eq:ZKRCQ}, every $v \in \tangentspace{g(p_*)}{\cN}$ can be written as $v = w-u$, where $w \in \image g'(p_*)$ and $u \in \innertangentcone{g(p_*)}{\cK}$.
Then we compute
\begin{equation*}
\dual{\lambda_*}{v}
=
\lim_{k \to \infty} \dual{\lambda_k}{v}
=
\lim_{k \to \infty} \paren[auto](){\frac{1}{\norm{\mu_k}} \dual{(\mu_k-\mu_1)}{w} - \frac{\dual{\mu_k}{u}}{\norm{\mu_k}} + \frac{\dual{\mu_1}{u}}{\norm{\mu_k}}}
.
\end{equation*}
Since $\dual{(\mu_k - \mu_1)}{w} = 0$ and $\dual{\mu_k}{u} \le 0$, and since the last term in the sum tends to $0$ as $k \to \infty$, it follows that $\dual{\lambda_*}{v} \ge 0$ holds for all $v \in \tangentspace{g(p_*)}{\cN}$ and thus $\lambda_* = 0$, contradicting $\norm{\lambda_*} = 1$.
Hence, $\Lambda(p_*)$ is bounded and therefore compact.
Now consider two solutions $\mu_1$ and $\mu_2$ of \eqref{eq:KKT_conditions}.
Then $\mu_1 - \mu_2 \in \paren[big](){\zerotangentspace{g(p_*)}{\cK}}^\circ$ and $(\mu_1 - \mu_2) \, g'(p_*) = 0$.
Hence, for all $v \in \image g'(p_*) - \zerotangentspace{g(p_*)}{\cK}$, it follows that $(\mu_1 - \mu_2) \, v = 0$.
If \eqref{eq:LICQ} holds, then this implies $(\mu_1 - \mu_2) \, v = 0$ for all $v \in \tangentspace{g(p_*)}{\cN}$ and thus $\mu_1 = \mu_2$.
\end{proof}
In the following, we derive a representation $\mu_\psi \in \R^n$ of $\mu \in \paren[big](){\innertangentcone{g(p)}{\cK}}^\circ$ with respect to an adapted local chart $\psi$ centered at~$g(p)$.
Recall that, by definition, $v \in \innertangentcone{g(p)}{\cK}$ holds if and only if $W \, v_\psi = 0$ and $A \, v_\psi \le 0$.
\begin{proposition}
$\mu \in \paren[big](){\innertangentcone{g(p)} \cK}^\circ$ holds if and only if its representation $\mu_\psi$ in an adapted chart is of the following form:
\begin{equation*}
\mu_\psi
=
\begin{pmatrix}
A^\transp \lambda_I
\\
W^\transp \lambda_E
\end{pmatrix}
\in \R^n
\end{equation*}
where $\lambda_I \in \R^\ell$ with $\lambda_I \ge 0$ and $\lambda_E \in \R^{n-k}$.
Hence, in local charts, \eqref{eq:KKT_conditions} reads:
\begin{align*}
f_\varphi'(p)
+ \lambda_I^\transp A \, g_{\psi,\varphi}'(p)
+ \lambda_E^\transp W g_{\psi,\varphi}'(p)
&
=
0
,
\\
\lambda_I
&
\ge
0
.
\end{align*}
\end{proposition}
\begin{proof}
Consider a representative $v_\psi$ of an element of $\innertangentcone{g(p)}{\cK}$, so that $W \, v_\psi = 0$ and $A \, v_\psi \le 0$ hold, and let $\mu_\psi$ be of the claimed form. Then
\begin{equation*}
\dual{\mu_\psi}{v_\psi}
=
\dual{\lambda_I}{A \, v_\psi}
+
\dual{\lambda_E}{W \, v_\psi}
=
\dual{\lambda_I}{A \, v_\psi}
\le
0
.
\end{equation*}
Hence, $\mu \in \paren[big](){\innertangentcone{g(p)}{\cK}}^\circ$.
For the converse, assume that $(\lambda_I)_i < 0$ for some $1\le i \le \ell$.
Since $A$ is surjective, choose $v_\psi$ such that $A \, v_\psi = -e_i$ holds, which implies $\dual{A^\transp \lambda_I}{v_\psi} = -(\lambda_I)_i > 0$, so $\mu \not \in \paren[big](){\innertangentcone{g(p)}{\cK}}^\circ$.
\end{proof}
We now return to \cref{example:classical} and recall that the rows of $\widehat A \in \R^{\ell \times k}$ consist of those unit vectors $e_i^\transp$ for which $g_I(x_*)_i = 0$ holds.
We obtain the representation
\begin{equation*}
\mu_\psi
=
\begin{pmatrix}
\eta_I
\\
\eta_E
\end{pmatrix}
,
\quad
\text{where }
\eta_I
=
\sum_{\mrep{{\setDef{i}{g_I(x_*)_i = 0}}}{}} \lambda_i \, e_i
\quad
\text{with some }
\lambda_i \ge 0
.
\end{equation*}
Thus we obtain the classical complementarity result:
\begin{equation*}
\eta_I \ge 0
,
\quad
g_I(x_*) \le 0
,
\quad
\dual{\eta_I}{g_I(x_*)}
=
0
,
\end{equation*}
together with the well-known dual equation:
\begin{equation*}
f'(x_*)
+ \eta_I^\transp g_I'(x_*)
+ \eta_E^\transp g_E'(x_*)
=
0
.
\end{equation*}
After transposition, it takes the more familiar form
\begin{equation*}
\nabla f(x_*)
+ g_I'(x_*)^\transp \eta_I
+ g_E'(x_*)^\transp \eta_E
=
0
.
\end{equation*}
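As a one-dimensional sanity check (ours, not drawn from \cref{example:classical}): for $f(x) = -x$ subject to $g_I(x) = x \le 0$, the minimizer is $x_* = 0$ with multiplier $\eta_I = 1$, and the complementarity system above is satisfied:

```latex
% One-dimensional check (illustrative data, not from the running example).
\begin{equation*}
  \nabla f(x_*) + g_I'(x_*)^\transp \eta_I = -1 + 1 = 0,
  \qquad
  \eta_I = 1 \ge 0,
  \quad
  g_I(x_*) = 0,
  \quad
  \dual{\eta_I}{g_I(x_*)} = 0
  .
\end{equation*}
```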
\section{Retractions and Linearizing Maps}
\label{section:first-order_conditions_using_retractions}
Numerical solution algorithms frequently employ retractions to pull back optimization problems on manifolds to the corresponding tangent spaces.
In this section we will consider reformulations of the KKT conditions \eqref{eq:KKT_conditions} in terms of these objects.
This is an alternative to our approach via local charts employed in \cref{section:first-order_conditions} and it allows us to argue more conveniently in some cases.
Moreover, retractions are also the approach we take for the second-order analysis in \cref{section:second-order_conditions}.
We will use the following definitions:
\begin{definition}\label{definition:retraction}
Let $V_{0_p} \subset \tangentspace{p}{\cM}$ be a neighborhood of $0_p \in \tangentspace{p}{\cM}$.
A $C^2$-mapping $\retract{p} \colon V_{0_p} \to \cM$ is called a local retraction at~$p$ if it satisfies:
\begin{enumerate}[label=\ensuremath{(\retractionSymbol\roman*)},leftmargin=*]
\item \label[property]{item:definition:retraction:zero}
$\retract{p}(0_p) = p$,
\item \label[property]{item:definition:retraction:diff}
$D \retract{p}(0_p) = \id_{\tangentspace{p}{\cM}}$.
\end{enumerate}
Let $\cU_{q} \subset \cN$ be a neighborhood of $q \in \cN$.
A $C^2$-mapping $S_q \colon \cU_q \to \tangentspace{q}{\cN}$ is called a local linearizing map at~$q$ if it satisfies:
\begin{enumerate}[label=\ensuremath{(S\roman*)},leftmargin=*]
\item \label[property]{item:definition:linearizing_map:y}
$S_q(q) = 0_q$,
\item \label[property]{item:definition:linearizing_map:Diff}
$D S_q(q) = \id_{\tangentspace{q}{\cN}}$.
\end{enumerate}
We call $S_q$ adapted to $\cK$ if $S_q(\cU_q \cap \cK) = S_q(\cU_q) \cap \innertangentcone{q}{\cK}$ holds.
\end{definition}
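A standard example of a local retraction (ours; it assumes $\cM = S^{n-1} \subset \R^n$ is the unit sphere with $\tangentspace{p}{\cM} = \setDef{v \in \R^n}{\dual{p}{v} = 0}$) is normalization:

```latex
% Normalization retraction on the unit sphere (illustrative assumption).
\begin{equation*}
  \retract{p}(v) = \frac{p + v}{\norm{p + v}},
  \qquad
  v \in \tangentspace{p}{\cM}
  .
\end{equation*}
```

It satisfies $\retract{p}(0_p) = p$, and a short computation using $\norm{p} = 1$ and $\dual{p}{v} = 0$ gives $D\retract{p}(0_p) \, v = v - p \, \dual{p}{v} = v$, so both properties of \cref{definition:retraction} hold.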
Every chart $\varphi$ on $\cM$, centered at~$p$, induces a local retraction at~$p$ via $\retract{p}(v) \coloneqq \varphi^{-1}(v_\varphi)$.
Moreover, every adapted chart $\psi$ on $\cN$, centered at~$q$, induces an adapted linearizing map:
for any $\eta \in \cU_q$ we define $v \coloneqq S_q(\eta) \in \tangentspace{q}{\cN}$ by the equivalence class of $v_\psi \coloneqq \psi(\eta)$. If $\cK$ is a geodesic polyhedron on a Riemannian manifold $\cN$ as in \cref{eq:geopoly}, then $\logarithm{q}$ yields an adapted linearizing map at $q$.
\begin{remark}
Retractions are widely used in optimization algorithms on manifolds; see, \eg, \cite{AbsilMahonySepulchre:2008:1}.
Linearizing maps for constrained problems were introduced in \cite{SchielaOrtiz:2021:1}, but a similar concept has been used in a different context in~\cite{Boumal:2010:1} under the name \enquote{generalized logarithmic map}.
The concept of adapted linearizing maps may be useful for the implementation of numerical algorithms in this setting.
As we will see below, it allows us to write down a local optimization problem at~$p_*$ in a way that resembles a classical formulation without the need of further linearization of $S_{g(p_*)}(\cU_{g(p_*)} \cap \cK)$.
\end{remark}
Let $p$ be a feasible point of \eqref{eq:problem_setting} and $\retract{p} \colon V_{0_p} \to \cM$ and $S_{g(p)} \colon \cU_{g(p)} \to \tangentspace{g(p)}{\cN}$ be a given local retraction and adapted linearizing map, respectively.
Choosing their domain of definition sufficiently small, we may assume without loss of generality that $\retract{p}$ and $S_{g(p)}$ are injective with $g(\retract{p}(V_{0_p})) \subset \cU_{g(p)}$.
We can now locally pull back our problem as follows:
\begin{align*}
\bf
&
\coloneqq
f \circ \retract{p}
\colon
V_{0_p}
\to \R
,
\\
\bg
&
\coloneqq
S_{g(p)} \circ g \circ \retract{p}
\colon
V_{0_p} \to \tangentspace{g(p)}{\cN}
,
\\
\bK
&
\coloneqq
S_{g(p)}(\cK \cap \cU_{g(p)}) = \innertangentcone{g(p)}{\cK} \cap S_{g(p)}(\cU_{g(p)})
,
\end{align*}
and formulate a local optimization problem on the tangent space at~$p$:
\begin{equation}
\label{eq:problem_setting_Retraction}
\begin{aligned}
\text{Minimize}
\quad
&
\bf(v)
,
\quad
\text{where }
v \in V_{0_p} \subset \tangentspace{p}{\cM}
\\
\text{\st}
\quad
&
\bg(v) \in \bK \subset \tangentspace{g(p)}{\cN}
,
\end{aligned}
\end{equation}
Note that $\bK$ is the intersection of a polyhedral convex cone and a neighborhood of $0_{g(p)}$.
It can thus be described by finitely many linear equality and inequality constraints on $\tangentspace{g(p)}{\cN}$.
Neglecting the local neighborhoods, \eqref{eq:problem_setting_Retraction} thus takes the form of a classical constrained optimization problem:
\begin{equation}
\label{eq:problem_setting_Retraction_classic}
\begin{aligned}
\text{Minimize}
\quad
&
\bf(v)
,
\quad
\text{where }
v \in \tangentspace{p}{\cM}
\\
\text{\st}
\quad
&
A_I \, \bg(v)
\le
0
\\
\text{and}
\quad
&
A_E \, \bg(v)
=
0
\end{aligned}
\end{equation}
with linear mappings $A_I \colon \tangentspace{g(p)} \cN \to \R^{\ell}$ and $A_E \colon \tangentspace{g(p)} \cN \to \R^{n-k}$.
{Notice that the data of problem \eqref{eq:problem_setting_Retraction_classic} is, of course, not uniquely defined.
For instance, we may premultiply $A_I$ by a positive diagonal matrix, and $A_E$ by any invertible matrix.
However, the viable choices for $A_I$ and $A_E$ do not depend on the choice of $S_{g(p)}$.}
\begin{theorem}\label{theorem:KKT_retractions}
Suppose that $p_*$ is a feasible point of \eqref{eq:problem_setting}.
Then $p_*$ is locally optimal for \eqref{eq:problem_setting} if and only if $v_* = 0 \in \tangentspace{p_*}{\cM}$ is a local minimizer of \eqref{eq:problem_setting_Retraction}.
In this case, when \eqref{eq:ZKRCQ} holds at~$p_*$, then there exists $\mu \in \cotangentspace{g(p_*)}{\cN}$ such that
\begin{align*}
\bf'(0_{p_*}) + \mu \, \bg'(0_{p_*})
&
=
0
\quad
\text{in }
\cotangentspace{p_*}{\cM}
,
\\
\mu
\in
\paren[big](){\tangentcone{0_{p_*}}{\bK}}^\circ
&
=
\paren[big](){\innertangentcone{g(p_*)}{\cK}}^\circ
.
\end{align*}
\end{theorem}
\begin{proof}
Clearly, $0_{p_*} \in \tangentspace{p_*}{\cM}$ is a local minimizer of
\eqref{eq:problem_setting_Retraction} if and only if $p_*$ is a local minimizer of \eqref{eq:problem_setting}.
Moreover, by the chain rule, using \cref{item:definition:retraction:diff} of $\retract{p_*}$ and \cref{item:definition:linearizing_map:Diff} of $S_{g(p_*)}$:
\begin{equation*}
\bf'(0_{p_*}) = f'(p_*)
,
\quad
\bg'(0_{p_*}) = g'(p_*)
,
\quad
\tangentcone{0_{p_*}}{\bK} = \innertangentcone{g(p_*)}{\cK}
.
\end{equation*}
Thus, our conditions directly follow from \eqref{eq:KKT_conditions}.
\end{proof}
As an alternative approach, we can apply a classical theorem on KKT conditions to \eqref{eq:problem_setting_Retraction_classic} and obtain
\begin{equation}\label{eq:KKT_retractions_explicit}
\begin{aligned}
\bf'(0_{p_*}) + \lambda_I^\transp A_I \, \bg'(0_{p_*}) + \lambda_E^\transp A_E \, \bg'(0_{p_*})
&
=
0
,
\\
\lambda_I
&
\ge
0
,
\end{aligned}
\end{equation}
with $\lambda_I \in \R^\ell$ and $\lambda_E \in \R^{n-k}$, which depend on the choice of $A_I$ and $A_E$.
By invariance, the first row equivalently yields:
\begin{equation*}
f'(p_*) + \lambda_I^\transp A_I \, g'(p_*) + \lambda_E^\transp A_E \, g'(p_*)
=
0
\end{equation*}
and thus by comparison,
\begin{equation*}
\mu
=
\lambda_I^\transp A_I + \lambda_E^\transp A_E
\in
\paren[big](){\innertangentcone{g(p_*)}{\cK}}^\circ
.
\end{equation*}
We emphasize that the number of rows in $A_I$, which is equal to the index $\ell$ of the corner $g(p_*)$, depends on $g(p_*)$.
Thus, there is no further distinction necessary between active and inactive constraints, because this is already built into the local representation of $\cK$.
The formulation \eqref{eq:KKT_retractions_explicit} allows us to split the given constraints into individual components and to distinguish strongly active and weakly active constraints, according to the structure of $\lambda_I$.
\begin{definition}
We call the $i$-th constraint $(A_I)_i \, \bg \le 0$ weakly active at $(p_*,\lambda_I,\lambda_E)$ if $(\lambda_I)_i = 0$ holds, and strongly active in case $(\lambda_I)_i > 0$.
\end{definition}
Observe that this definition does not depend on the particular choice of $A_I$.
If $A_I$ is premultiplied by a positive diagonal matrix, then the notion of weak and strong activity of $(A_I)_i$ is not changed.
\section{Lagrangian Functions}
\label{section:Lagrange_function}
When $\cN = V$ is a normed linear space with dual space $V^*$ and $g \colon \cM \to V$, then a Lagrangian function for our problem \eqref{eq:problem_setting} with Lagrange multiplier $\mu \in V^*$ can be defined as usual:
\begin{equation*}
L
\colon
\cM \times V^*
\ni
(p,\mu)
\mapsto
L(p,\mu)
\coloneqq
f(p) + \mu(g(p))
\in
\R
.
\end{equation*}
However, when $\cN$ is a nonlinear manifold, $\mu$ cannot be defined as a linear functional on $\cN$.
Rather, we need to replace it with a function $h \in C^1(\cN,\R)$ and define
\begin{equation*}
L
\colon
\cM \times C^1(\cN,\R)
\ni
(p,h)
\mapsto
L(p,h)
\coloneqq
f(p) + h(g(p))
\in \R
\end{equation*}
as a Lagrangian function.
In the following, we will consider $h$ fixed and regard the mapping $p \mapsto L(p,h) \colon \cM \to \R$ as a function of $p$.
Its derivative $L'$ is given by
\begin{equation*}
L'(p,h)
\coloneqq
\frac{\d}{\d p} L(p,h)
=
f'(p) + h'(g(p)) \, g'(p)
.
\end{equation*}
For these derivatives to be well-defined at a point $p$, it is enough that $h$ is defined in some neighborhood of $g(p)$.
We can observe two things.
First, $\mu \coloneqq h'(g(p)) \in \cotangentspace{g(p)}{\cN}$ can be interpreted as a Lagrange multiplier; second, $L'(p,h)$ only depends on $\mu = h'(g(p))$ and not on the particular choice of $h$.
The paragraph above explains how to obtain $\mu$ from $h$.
Conversely, let $p_* \in \cM$ be fixed and $q_* = g(p_*)$.
In view of the KKT conditions \eqref{eq:KKT_conditions} we would like to extend a Lagrange multiplier $\mu \in \cotangentspace{q_*}{\cN}$ locally to a nonlinear function $h$ on a neighborhood of $q_*$ such that $h'(q_*) = \mu$ holds.
This can be achieved by using a linearizing map $S_{q_*}$ about $q_*$ and defining $h \coloneqq \mu \circ S_{q_*}$.
Then we obtain a Lagrangian function of the form
\begin{equation*}
L_{S_{q_*}}(p,\mu)
\coloneqq
L(p,\mu \circ S_{q_*})
=
f(p) + \mu \circ S_{q_*} \!\! \circ g(p)
.
\end{equation*}
Since $h'(q_*) = \mu \circ DS_{q_*}(q_*) = \mu$, we obtain with this definition of $h$:
\begin{equation}
\label{eq:coincidence_of_Lagrangian_functions}
L'_{S_{q_*}}(p_*,\mu)
=
f'(p_*) + \mu \, g'(p_*) = L'(p_*,h)
.
\end{equation}
Alternatively we may define Lagrangian functions near $p_*$ with $q_* = g(p_*)$ via pull-backs:
\begin{equation*}
\begin{aligned}
&
\bL
\colon
\tangentspace{p_*}{\cM} \times \cotangentspace{q_*}{\cN}
\to
\R
\\
&
(v,\mu)
\mapsto
\bL(v,\mu)
\coloneqq
\bf(v) + \mu(\bg(v))
=
(f \circ \retract{p_*})(v) + (\mu \circ S_{q_*} \!\! \circ g \circ \retract{p_*})(v)
\end{aligned}
\end{equation*}
with derivative
\begin{equation*}
\bL'(v,\mu)
=
\bf'(v) + \mu \, \bg'(v)
\quad
\text{and thus}
\quad
\bL'(0_{p_*},\mu)
=
f'(p_*) + \mu \, g'(p_*).
\end{equation*}
It is therefore justified to define the derivative of the Lagrangian function in the following way:
\begin{equation}
\label{eq:coincidence_of_Lagrangian_functions_2}
\begin{aligned}
L'(p_*,\mu)
\coloneqq
f'(p_*) + \mu \, g'(p_*)
=
\bL'(0_{p_*},\mu)
=
L'_{S_{q_*}}(p_*,\mu)
=
L'(p_*,h)
\\
\text{for }
\mu = h'(q_*)
,
\end{aligned}
\end{equation}
independently of the choice of the retraction $\retract{p_*}$, linearizing map~$S_{q_*}$, and $h$, as long as $\mu = h'(q_*)$.
Utilizing the identifications $\mu \coloneqq h'(g(p_*))$ and $h \coloneqq \mu \circ S_{g(p_*)}$, we find that the KKT conditions \eqref{eq:KKT_conditions} can equivalently be written in the familiar way:
\begin{subequations}\label{eq:KKT_conditions_pull-back}
\begin{align}
\label{eq:KKT_conditions_pull-back1}
&
L'(p_*,\mu)
=
0
\quad
\text{on }
\cotangentspace{p_*}{\cM}
,
\\
\label{eq:KKT_conditions_pull-back2}
&
\mu
\in
\paren[big](){\innertangentcone{g(p_*)}{\cK}}^\circ
.
\end{align}
\end{subequations}
\section{The Critical Cone}
\label{section:Critical_cone}
To derive second-order optimality conditions, we need a definition of the critical cone at a KKT point $p_*$ as a subset of the tangent cone~$\tangentcone{p_*}{\cF}$.
Suppose that $(p_*,\mu)$ satisfies the KKT conditions \eqref{eq:KKT_conditions}.
We define the critical cone at $p_*$ as
\begin{align*}
\criticalcone{\cM}
&
\coloneqq
\setDef{v \in \tangentspace{p_*}{\cM}}{g'(p_*) \, v \in \innertangentcone{g(p_*)}{\cK} \text{ and } f'(p_*) \, v = 0}
\\
&
\mrep[r]{{}={}}{{}\coloneqq{}}
\setDef{v \in \tangentspace{p_*}{\cM}}{g'(p_*) \, v \in \innertangentcone{g(p_*)}{\cK} \text{ and } \mu \, g'(p_*) \, v = 0}
.
\end{align*}
We also introduce the definition
\begin{align*}
\criticalcone{\cN}
&
\coloneqq
g'(p_*) \, \criticalcone{\cM}
=
\setDef{w \in \innertangentcone{g(p_*)}{\cK}}{\dual{\mu}{w} = 0}
\\
&
\mrep[r]{{}={}}{{}\coloneqq{}}
\setDef{w \in \innertangentcone{g(p_*)}{\cK}}{(A_I)_j w = 0 \text{ for all } j = 1, \dots, \ell \text{ such that } (\lambda_I)_j = 0}
,
\end{align*}
where $(A_I)_j$ are the components of the mapping $A_I \colon \tangentspace{g(p_*)}{\cN} \to \R^\ell$ used in \eqref{eq:problem_setting_Retraction_classic}.
Then we can write $\mu \in \paren[big](){\Span \criticalcone{\cN}}^\circ$ for any Lagrange multiplier $\mu \in \Lambda(p_*)$.
The following considerations will be useful for the discussion of second-order conditions:
\begin{lemma}\label{lemma:PhiCone}
Suppose that $X$ is a normed linear space and $U$, $V$ are open neighborhoods of $0 \in X$.
Consider a diffeomorphism $\Phi \colon U \to V$ such that $\Phi(0) = 0$ and $\Phi'(0) = \id_X$ hold.
Let $K$ be a polyhedral cone of the form
\begin{equation*}
K
=
\setDef{v \in X}{A_I \, v \le 0, \; A_E \, v = 0}
\end{equation*}
with linear maps $A_I \colon X \to \R^{n_I}$ and $A_E \colon X \to \R^{n_E}$.
Suppose that
\begin{equation*}
\Phi
\colon
K \cap U
\to
K \cap V
\end{equation*}
is bijective.
Select a row $a_j = (A_I)_j$ and define the facet
\begin{equation*}
K_j
=
\setDef{v \in X}{A_I \, v \le 0, \; A_E \, v = 0, \; a_j v = 0}
.
\end{equation*}
Then there are neighborhoods $\tilde U$ and $\tilde V$ of $0$ such that
\begin{equation*}
\Phi
\colon
K_j \cap \tilde U
\to
K_j \cap \tilde V
\end{equation*}
is also bijective.
\end{lemma}
\begin{proof}
We may assume \wolog that $\tilde U = U = B_r(0)$ is an open ball of radius~$r$ about~$0$.
Since $\Phi$ is a homeomorphism and thus preserves boundaries of sets, we conclude in particular that
\begin{equation*}
\Phi
\colon
\partial K \cap U
\to
\partial K \cap V
\end{equation*}
is also a homeomorphism.
Consider now the \enquote{open} facet
\begin{equation*}
\tilde K_j
=
\setDef{v \in K_j}{(A_I)_i \, v < 0 \text{ for all } i \neq j}
,
\end{equation*}
which is a relatively open subset of $\partial K$.
Then $U \cap \tilde K_j$ is a connected set, because $U$ and $\tilde K_j$ are both convex and hence connected.
The continuity of $\Phi$ implies that $\Phi(U \cap \tilde K_j)$ is connected as well.
However, the union of two or more distinct open facets is not connected, because each open facet is a relatively open subset of this union.
Hence, $\Phi(U \cap \tilde K_j)$ is a subset of a single open facet $\tilde K_m$, and it remains to show $j = m$.
Since $\Phi'(0) = \id_X$ holds, we find that
\begin{equation*}
\Phi'(0)
\colon
\tilde K_j
\to
\tilde K_j
\end{equation*}
is bijective.
Using the differentiability of~$\Phi$ this implies that there exists $x_0 \in \tilde K_j$ such that $\Phi(x_0) \in \tilde K_j$ holds.
We thus conclude that $\Phi(U \cap \tilde K_j) \subset \tilde K_j$.
Picking some $B_\rho(0) \subset V$, the same argument shows
\begin{equation*}
\Phi^{-1}(B_\rho(0) \cap \tilde K_j) \subset \tilde K_j \cap U
\end{equation*}
and thus $B_\rho(0) \cap \tilde K_j \subset \Phi(\tilde K_j \cap U)$.
Thus, $\Phi(U \cap \tilde K_j)$ can be written as $\tilde K_j \cap \tilde V$, where $\tilde V$ is a neighborhood of $0$.
\end{proof}
This lemma can also be applied recursively to subfacets of $K$.
Hence, after finitely many steps of application, we conclude in particular that there are neighborhoods $U$ and $V$ of $0$ such that $\Phi$ maps $\criticalcone{\cN} \cap U$ bijectively onto $\criticalcone{\cN} \cap V$.
\begin{lemma}\label{lemma:ThetaIntoCone}
Consider two adapted linearizing maps $S_{q,1}$ and $S_{q,2}$ and the transition map~$\Theta \coloneqq S_{q,1} \circ S_{q,2}^{-1}$.
Then
\begin{equation*}
\begin{aligned}
v \in \innertangentcone{q}{\cK}
\quad
&
\Rightarrow
\quad
\Theta''(0_q)[v, v] \in \tangentspace{q}{\cK}
,
\\
v \in \criticalcone{\cN}
\quad
&
\Rightarrow
\quad
\Theta''(0_q)[v, v] \in \Span \criticalcone{\cN}
.
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}
Consider any cone $K \subset \tangentspace{q}{\cN}$ such that $\Theta$ maps $K$ into $K$.
Since $\Theta(0_q) = 0_q$ and $\Theta'(0_q) = \id_{\tangentspace{q}{\cN}}$ hold, we can compute
\begin{equation*}
\frac{1}{2} \, \Theta''(0_q)[v, v]
=
\lim_{t \to 0} t^{-2} \paren[auto](){\Theta(t \, v) - \Theta(0_q) - \Theta'(0_q) \, t \, v}
=
\lim_{t \to 0} t^{-2} \paren[auto](){\Theta(t \, v) - t \, v}
.
\end{equation*}
Since both $\Theta(t \, v)$ and $t \, v$ belong to $K$, $\Theta(t \, v) - t \, v$ belongs to $\Span K$ and thus so does the limit.
By definition, $\Theta$ maps $\bK \subset \innertangentcone{q}{\cK}$ into $\innertangentcone{q}{\cK}$ and thus $\Theta''(0_q)[v,v] \in \Span \innertangentcone{q}{\cK} = \tangentspace{q}{\cK}$ for $v \in \innertangentcone{q}{\cK} $, proving our first assertion.
Our second assertion follows similarly, because $\Theta$ maps $\criticalcone{\cN}$ into $\criticalcone{\cN}$ by \cref{lemma:PhiCone}.
\end{proof}
\section{Second-Order Optimality Conditions}
\label{section:second-order_conditions}
Compared to the case in vector spaces, the formulation of second-order conditions on manifolds exhibits an additional difficulty.
On a vector space $V$ the second derivative of a real-valued function $\sigma \colon V \to \R$ at $x \in V$ can be represented as a bilinear form $\sigma''(x) \colon V \times V \to \R$, whose definiteness properties can be studied.
In contrast, for $\sigma \colon \cM \to \R$ we have $\sigma' \colon \tangentspace{}{\cM} \to \R$ and thus $\sigma'' \colon \tangentspace{}{(\tangentspace{}{\cM})} \to \R$.
The required representation of $\sigma''(p)$ as a bilinear form on $\tangentspace{p}{\cM}$, \ie $\sigma''(p) \colon \tangentspace{p}{\cM} \times \tangentspace{p}{\cM} \to \R$, is not given canonically.
A connection or, equivalently, a covariant derivative has to be specified for this purpose.
However, at a stationary point $p_*\in \cM$, \ie $\sigma'(p_*) = 0$, second derivatives of scalar-valued functions \emph{can} be represented canonically by bilinear forms on $\tangentspace{p_*}{\cM}$ without the help of a covariant derivative, as shown in the following lemma.
\begin{lemma}\label{lemma:morse}
Suppose that $\sigma \in C^2(\cM,\R)$.
At a point~$p_* \in \cM$ satisfying $\sigma'(p_*) = 0$, the second derivative $\sigma''(p_*) \colon \tangentspace{p_*}{\cM} \times \tangentspace{p_*}{\cM} \to \R$ is a well-defined symmetric bilinear form, \ie, a symmetric $(2,0)$-tensor.
\end{lemma}
\begin{proof}
Consider two charts ${\varphi_1}$ and ${\varphi_2}$ centered at $p_*$ so that $\varphi_1(p_*) = \varphi_2(p_*) = 0$ holds.
Then $\sigma$ has representations $\sigma_{\varphi_1} \coloneqq \sigma \circ {\varphi_1}^{-1}$ and $\sigma_{\varphi_2} \coloneqq \sigma \circ {\varphi_2}^{-1}$ in charts, and $\sigma_{\varphi_1} = \sigma_{\varphi_2} \circ T$ with $T = \varphi_2 \circ \varphi_1^{-1}$.
Let $v_{\varphi_1}$ and $v_{\varphi_2}$ be the representatives of $v \in \tangentspace{p_*}{\cM}$.
Then $v_{\varphi_2} = T'(0) \, v_{\varphi_1}$ holds and we have
\begin{equation*}
\sigma_{\varphi_1}'(0) \, v_{\varphi_1}
=
\sigma_{\varphi_2}'(0) \, T'(0) \, v_{\varphi_1}
=
\sigma_{\varphi_2}'(0) \, v_{\varphi_2}
.
\end{equation*}
Using $\sigma'(p_*) = 0$ we find
\begin{align*}
\sigma_{\varphi_1}''(0)[v_{\varphi_1}, v_{\varphi_1}]
&
=
\sigma_{\varphi_2}''(0)[T'(0) \, v_{\varphi_1}, T'(0) \, v_{\varphi_1}]
+
\sigma_{\varphi_2}'(0) \, T''(0)[v_{\varphi_1}, v_{\varphi_1}]
\\
&
=
\sigma_{\varphi_2}''(0)[T'(0) \, v_{\varphi_1}, T'(0) \, v_{\varphi_1}]
.
\end{align*}
This implies the well-definedness of $\sigma''(p_*)$ on $\tangentspace{p_*}{\cM} \times \tangentspace{p_*}{\cM}$.
Its symmetry follows from Schwarz's theorem.
\end{proof}
As a consequence of \cref{lemma:morse}, second-order optimality conditions for unconstrained optimization problems on $C^2$-manifolds can be formulated without recourse to covariant derivatives.
Even for constrained problems for which the constraint target manifold $\cN = V$ is a \emph{linear space}, we can apply \cref{lemma:morse} to the Lagrangian function $L \colon \cM \times V^* \to \R$, \ie $\sigma(p) \coloneqq L(p,\mu)$, at a KKT point $p_*$ with Lagrange multiplier $\mu \in V^*$ and obtain a well-defined second derivative $L''(p_*,\mu) \colon \tangentspace{p_*}{\cM} \times \tangentspace{p_*}{\cM} \to \R$, because of $\sigma'(p_*) = L'(p_*,\mu) = 0$.
For the general case of manifold-valued constraints, the situation is more complex, since, as we have seen, a classical Lagrange multiplier~$\mu$ cannot be used directly to define a Lagrangian function due to lack of linearity of $\cN$.
Instead, a nonlinear function $h \in C^2(\cN,\R)$ was used to define $L(p,h)$.
Although $L'(p,h)$ only depends on $\mu \coloneqq h'(g(p))$, the situation is different for the second-order derivative.
Let $p_*$ be a KKT-point, $q_* = g(p_*)$, and $\mu \in \cotangentspace{q_*}{\cN}$ the corresponding Lagrange multiplier such that $h'(q_*) = \mu$ and $L'(p_*,h) = 0$ hold.
Then we can apply \cref{lemma:morse} to $\sigma(p) \coloneqq L(p,h) = f(p) + h(g(p))$ and obtain a well-defined bilinear form at $p_*$:
\begin{equation*}
L''(p_*,h)
\colon
\tangentspace{p_*}{\cM} \times \tangentspace{p_*}{\cM}
\to
\R
.
\end{equation*}
Unfortunately, $L''(p_*,h)$ still depends on the particular choice of $h$ and not only on $\mu =h'(q_*)$.
This can be seen most clearly when $\cM$ and $\cN$ are linear spaces.
Then we can compute $L''(p_*,h)$ as follows:
\begin{equation*}
L''(p_*,h)[v,v]
=
f''(p_*)[v,v]
+ \mu \, g''(p_*)[v,v]
+ h''(q_*)[g'(p_*) \, v, g'(p_*) \, v]
,
\end{equation*}
and we observe that the third term on the right-hand side depends on the second derivative of $h$. Of course, these second derivatives can be avoided when $\cN$ is a linear space by taking the canonical choice $h = \mu$, but such a canonical choice is not possible when $\cN$ is nonlinear.
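A one-line instance of this dependence (ours; it assumes $\cM = \cN = \R$, $g = \id$, and $q_* = p_* = 0$): every $h(q) = \mu \, q + \tfrac{c}{2} \, q^2$ with $c \in \R$ satisfies $h'(q_*) = \mu$, yet

```latex
% Dependence of L'' on the second derivative of h (illustrative data).
\begin{equation*}
  L''(p_*,h)[v,v]
  =
  f''(p_*) \, v^2 + c \, v^2
  ,
\end{equation*}
```

which varies with $c$ although the Lagrange multiplier $\mu = h'(q_*)$ is fixed.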
However, suppose we use an \emph{adapted} linearizing map $S_{q_*}$ about $q_*$ to define $h = \mu \circ S_{q_*}$ and thus $L_{S_{q_*}}(p,\mu) = L(p,h)$ holds.
In that case, as we will show now, $L''(p_*, h)[v, v] = L_{S_{q_*}}''(p_*,\mu)[v,v]$ \emph{is} independent of the particular choice of $S_{q_*}$ on the critical cone, \ie, for $v \in \criticalcone{\cM}$.
This is all we need in order to formulate second-order optimality conditions in an invariant way.
\begin{proposition}\label{pro:invarianceLpp}
Suppose that $p_*$ is a KKT point, $q_* = g(p_*)$ holds and $\mu \in \cotangentspace{q_*}{\cN}$ is a corresponding Lagrange multiplier so that \eqref{eq:KKT_conditions_pull-back} is satisfied.
Let $S_{q_*,1}$ and $S_{q_*,2}$ be adapted linearizing maps about $q_*$.
Then
\begin{equation}\label{eq:adaptedLMInvariance}
L_{S_{q_*,1}}''(p_*,\mu)[v, v]
=
L_{S_{q_*,2}}''(p_*,\mu)[v, v]
\quad
\text{for all }
v \in \criticalcone{\cM}
.
\end{equation}
In view of \eqref{eq:coincidence_of_Lagrangian_functions_2}, we therefore also refer to $L_{S_{q_*,i}}''(p_*,\mu)$ simply as $L''(p_*,\mu)$.
Moreover, for any pullback with retraction $\retract{p_*}$ and adapted linearizing map $S_{q_*}$, the relation
\begin{equation*}
L''(p_*,\mu)[v, v]
=
\bL''(0_{p_*},\mu)[v, v]
\quad
\text{for all }
v \in \criticalcone{\cM}
\end{equation*}
holds.
\end{proposition}
\begin{proof}
Defining $\Theta \coloneqq S_{q_*,1} \circ S_{q_*,2}^{-1}$, we observe $S_{q_*,1} = \Theta \circ S_{q_*,2}$ and hence $\mu \circ S_{q_*,1} = \mu \circ \Theta \circ S_{q_*,2}$.
Consequently, for $w \in \tangentspace{p_*}{\cM}$ and $p = \retract{p_*}(w)$, writing $\bg \coloneqq S_{q_*,2} \circ g \circ \retract{p_*}$, we have
\begin{align*}
L_{S_{q_*,2}}(p,\mu) - L_{S_{q_*,1}}(p,\mu)
&
=
\mu \circ (\id_{\tangentspace{q_*}{\cN}} - \Theta) \circ S_{q_*,2} \circ g(p)
\\
&
=
\mu \circ (\id_{\tangentspace{q_*}{\cN}} - \Theta) \circ \bg(w)
.
\end{align*}
The first derivatives read
\begin{equation*}
L_{S_{q_*,2}}'(p,\mu) \, v - L_{S_{q_*,1}}'(p,\mu) \, v
=
\mu \circ \paren[big](){\id_{\tangentspace{q_*}{\cN}} - \Theta'(\bg(w))} \circ \bg'(w) \, v
.
\end{equation*}
Since $p_*$ is stationary, second derivatives of $L_{S_{q_*}}(p,\mu)$
are well-defined and can be computed as follows, using the fact that $\Theta'(0_{q_*}) = \id_{\tangentspace{q_*}{\cN}}$ holds:
\begin{align*}
\MoveEqLeft
\paren[big](){L_{S_{q_*,2}}''(p_*,\mu) - L_{S_{q_*,1}}''(p_*,\mu)}[v, v]\\
&
=
\mu \circ \paren[big](){\id_{\tangentspace{q_*}{\cN}} - \Theta'(0_{q_*})} \, \bg''(0_{p_*})[v, v]
- \mu \circ \Theta''(0_{q_*})[\bg'(0_{p_*}) \, v, \; \bg'(0_{p_*}) \, v]
\\
&
=
- \mu \circ \Theta''(0_{q_*})[\bg'(0_{p_*}) \, v, \; \bg'(0_{p_*}) \, v]
.
\end{align*}
For $v \in \criticalcone{\cM}$ we conclude $\bg'(0_{p_*}) \, v \in \criticalcone{\cN}$ and thus we find by \cref{lemma:ThetaIntoCone}, using that the linearizing maps are adapted:
\begin{equation*}
\Theta''(0_{q_*})[\bg'(0_{p_*}) \, v, \bg'(0_{p_*}) \, v]
\in
\Span \criticalcone{\cN}
.
\end{equation*}
By stationarity and by definition of $\criticalcone{\cN}$, we infer $\restr{\mu}{\Span \criticalcone{\cN}} = 0$ and thus
\begin{equation}\label{eq:coincideOnCcrit}
\mu \circ \Theta''(0_{q_*})[\bg'(0_{p_*}) \, v, \bg'(0_{p_*}) \, v]
=
0
\quad
\text{for all }
v \in \criticalcone{\cM}
,
\end{equation}
which yields the desired result.
\end{proof}
\begin{remark}\label{rem:secondorderLM}
The conclusion of \cref{pro:invarianceLpp} can be extended slightly beyond the class of adapted linearizing maps: let us call $S_{q_*,1}$ and $S_{q_*,2}$ second-order consistent, if their transition map $\Theta \coloneqq S_{q_*,1} \circ S_{q_*,2}^{-1}$ satisfies $\Theta''(0_{q_*}) = 0$.
Clearly, \eqref{eq:coincideOnCcrit} holds for second-order consistent linearizing maps, even for all $v \in \tangentspace{p_*}{\cM}$.
Hence, \eqref{eq:adaptedLMInvariance} extends to linearizing maps each of which is second-order consistent with some adapted linearizing map.
\end{remark}
\begin{remark}
The restriction to \emph{adapted} linearizing maps in \cref{pro:invarianceLpp} is natural, taking into account the definition of a manifold with corners via adapted local charts.
To illustrate that this restriction is also essential (up to \cref{rem:secondorderLM}), consider $\cM = \cN = \R^2$ with $p = (p_1,p_2)^\transp$, $f(p) = -p_1$, $g = \id_\cM$ and $\cK = \setDef{p \in \cM}{p_1 \le 0}$.
Then $0$ is a local minimizer of $f$, $\criticalcone{\cM} = \criticalcone{\cN} = \setDef{v \in \cM}{v_1 = 0}$ hold, and
\begin{equation*}
0
=
L'(0,\mu) \, v
=
- v_1 + \mu \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
\quad
\text{for all } v \in \R^2
\quad
\text{implies}
\quad
\mu
=
(1, 0)
.
\end{equation*}
Using the adapted linearizing map $S_{0,1} = \id_\cM$, we obtain $\mu \circ S_{0,1}(p) = p_1$ and $L_{S_{0,1}}''(0,\mu)[v, v] = 0$, but using the non-adapted linearizing map $S_{0,2}(p) \coloneqq (p_1 + \alpha \, p_2^2, p_2)$ would yield $\mu \circ S_{0,2}(p) = p_1 + \alpha \, p_2^2$ and $L_{S_{0,2}}''(0,\mu)[v, v] = 2 \, \alpha \, v_2^2$.
In general, it is also not possible to extend \eqref{eq:adaptedLMInvariance} beyond $v \in \criticalcone{\cM}$.
Using the adapted local linearizing map $S_{0,3}(p) \coloneqq (p_1 + p_1 \, p_2,p_2)^\transp$ (when $\abs{p_2} < 1$, then $p_1 + p_1 p_2 \ge 0 \Leftrightarrow p_1 \ge 0$), we obtain $\mu \circ S_{0,3}(p) = p_1 + p_1 \, p_2$ and thus $L_{S_{0,3}}''(0,\mu)[v, v] = 2 \, v_1 \, v_2$, which coincides with $L_{S_{0,1}}''(0,\mu)[v, v] = 0$ on $\criticalcone{\cM}$ but not on all of $\R^2$.
\end{remark}
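The computations in this remark are easy to check numerically. The following sketch (our own; the finite-difference helper and the value $\alpha = 0.7$ are arbitrary choices) evaluates the Hessian of $p \mapsto \mu \, S(p)$ at the origin for the three linearizing maps above and compares the resulting quadratic forms on and off the critical cone $\{v_1 = 0\}$:

```python
import numpy as np

# Worked check of the R^2 example above (a sketch; helper names are ours).
# M = N = R^2, f(p) = -p1, g = id, K = {p1 <= 0}, multiplier mu = (1, 0).
mu = np.array([1.0, 0.0])
alpha = 0.7  # arbitrary nonzero parameter in S_{0,2}

# The three linearizing maps from the remark.
S1 = lambda p: p                                         # adapted (identity)
S2 = lambda p: np.array([p[0] + alpha * p[1]**2, p[1]])  # non-adapted
S3 = lambda p: np.array([p[0] + p[0] * p[1], p[1]])      # adapted

def hessian_mu_S(S, h=1e-5):
    """Central finite-difference Hessian of p -> mu . S(p) at the origin."""
    phi = lambda p: mu @ S(p)
    H = np.zeros((2, 2))
    e = np.eye(2)
    for i in range(2):
        for j in range(2):
            H[i, j] = (phi(h*e[i] + h*e[j]) - phi(h*e[i] - h*e[j])
                       - phi(-h*e[i] + h*e[j]) + phi(-h*e[i] - h*e[j])) / (4*h*h)
    return H

v_crit = np.array([0.0, 1.0])   # direction in the critical cone (v1 = 0)
v_gen  = np.array([1.0, 1.0])   # generic direction

for name, S in [("S1", S1), ("S2", S2), ("S3", S3)]:
    H = hessian_mu_S(S)
    print(name, "on critical cone:", round(v_crit @ H @ v_crit, 6),
          " generic:", round(v_gen @ H @ v_gen, 6))
```

As expected, only the non-adapted map $S_{0,2}$ produces a nonzero value on the critical cone, while $S_{0,1}$ and $S_{0,3}$ agree there and differ only in generic directions.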
Having achieved an invariant definition of $L''$ on the critical cone, second-order optimality conditions for manifold-valued constraints can now be reduced to the classical vector-valued case.
Suppose that $p_* \in \cM$ is a KKT point with Lagrange multiplier $\mu$.
For any choice of retraction at~$p_*$ and adapted linearizing map at~$g(p_*)$, we consider the second derivative of the pullback $\bL''(0_{p_*},\mu)$, which---as we have seen---is invariant on the critical cone $\criticalcone{\cM}$.
Invoking well-known results from the literature, we obtain the following second-order sufficient optimality conditions:
\begin{theorem}
Assume that $p_* \in \cM$ and $\mu \in \cotangentspace{g(p_*)}{\cN}$ satisfy the KKT conditions~\eqref{eq:KKT_conditions}.
Moreover, suppose that
\begin{equation*}
L''(p_*,\mu)[v, v]
>
0
\quad
\text{holds for all }
v \in \criticalcone{\cM} \setminus \{0_{p_*}\}
.
\end{equation*}
Then $p_*$ is a strict local minimizer of problem~\eqref{eq:problem_setting}.
\end{theorem}
\begin{proof}
It is clear that this result holds for $\bL''(0_{p_*},\mu)$ and thus, by invariance, it also holds for $L''(p_*,\mu)$; see, \eg, \cite[Thm.~12.6]{NocedalWright:2006:1}.
\end{proof}
Concerning second-order \emph{necessary} optimality conditions, a wide variety of constraint qualifications can be found in the literature (\cf, \eg, \cite{HaeserRamos:2019:1} and references therein), leading to second-order conditions of various strength.
We restrict our discussion here to the simplest case:
\begin{theorem}
Assume that $p_* \in \cM$ is a local minimizer of problem~\eqref{eq:problem_setting} and that \eqref{eq:LICQ} holds at $p_*$.
Then $p_*$ satisfies the KKT conditions~\eqref{eq:KKT_conditions} with some Lagrange multiplier $\mu \in \cotangentspace{g(p_*)}{\cN}$.
Moreover,
\begin{equation*}
L''(p_*,\mu)[v, v]
\ge
0
\quad
\text{holds for all } v \in \criticalcone{\cM}
.
\end{equation*}
\end{theorem}
\begin{proof}
It is clear that this result holds for $\bL''(0_{p_*},\mu)$ and thus, by invariance, it also holds for $L''(p_*,\mu)$; see, \eg, \cite[Thm.~12.5]{NocedalWright:2006:1}.
\end{proof}
\section{Application to the Control of Discretized Variational Problems}
\label{section:application_control_of_variational_problems}
Suppose that $\cY$ and $\cU$ are smooth manifolds and consider the following energy minimization problem, parametrized (or controlled) by $u$:
\begin{equation*}
\text{Minimize}
\quad
E(y,u)
,
\quad
\text{where }
y \in \cY
,
\end{equation*}
which we replace by its stationarity condition:
\begin{equation*}
0^*_y
=
c(y,u)
\coloneqq
\partial_y E(y,u) \in \cotangentspace{y}{\cY}
.
\end{equation*}
Such a situation occurs frequently in the infinite-dimensional context of variational problems, where occasionally $\cY$ and/or $\cU$ are nonlinear, smooth manifolds.
The principle of stationary action, which is applied, \eg, in classical mechanics, also leads to problems of a similar form.
After discretization, a similar problem on finite-dimensional manifolds is obtained.
Using the control variable~$u$, an optimal control problem or a parameter identification problem may then be formulated as follows:
\begin{equation*}
\begin{aligned}
\text{Minimize}
\quad
&
f(y,u)
,
\quad
\text{where }
(y,u) \in \cY \times \cU
\\
\text{\st}
\quad
&
0^*_y
=
c(y,u)
.
\end{aligned}
\end{equation*}
A simple concrete example, which has been considered, \eg, in \cite[Ch.~6]{OrtizLopez:2020:1}, is the optimal control of a static inextensible flexible rod. Here $y : [0,1]\to \R^3$ is the configuration of the rod, $u$ is an applied force, and $E(y,u)$ is the total energy of the rod.
Inextensibility is modelled by requiring $y'(t) \in \mathbb S^2$, the unit sphere in $\R^3$, for all $t \in [0,1]$, which renders $\cY$ a nonlinear manifold.
An appropriate objective function~$f$ may comprise the distance of $y$ to some desired configuration and a Tychonov term for $u$.
For details we refer to \cite[Ch.~6]{OrtizLopez:2020:1} and \cite{SchielaOrtiz:2021:1}.
Setting $p \coloneqq(y,u)$, $\cM \coloneqq \cY \times \cU$, $\cN \coloneqq \cotangentBundle[\cY]$, and taking $\cK$ to be the zero-section of $\cotangentBundle[\cY]$, \ie, the pairs $(y,0^*_y)\in \cotangentBundle[\cY]$, which can be identified with $\cK = \cY$, we observe that this problem fits into our theoretical framework, where the constraint mapping is defined as follows:
\begin{equation*}
g
\colon
\cY \times \cU \to \cotangentBundle[\cY]
\ni
p
=
(y,u)
\mapsto
g(p)
\coloneqq
(y,c(y,u))
.
\end{equation*}
To formulate first-order optimality conditions, we calculate the derivative at a feasible point:
\begin{equation*}
g'(p)
=
(\id_{\cM},c(y,u))'(y,u)
\colon
\tangentspace{y}{\cY} \times \tangentspace{u}{\cU}
\to
\tangentspace{(y,0^*_y)}{(\cotangentBundle[\cY])}
.
\end{equation*}
At $0_y$ we can utilize the canonical splitting (a connection or covariant derivative is not required here) of the cotangent's tangent space
\begin{equation*}
\tangentspace{(y,0^*_y)}{(\cotangentBundle[\cY])}
\isomorphic
\tangentspace{y}{\cY} \times \cotangentspace{y}{\cY}
\end{equation*}
into the tangent space of the base manifold and a fibre. This allows us to write $g'(p)$ as a pair:
\begin{align*}
g'(p)
\colon
\tangentspace{y}{\cY} \times \tangentspace{u}{\cU}
&
\to \tangentspace{y}{\cY} \times \cotangentspace{y}{\cY}\\
\delta p
=
(\delta y, \delta u)
&
\mapsto
g'(p) \, \delta p
=
\paren[big](){\delta y, c'(y,u)(\delta y,\delta u)}
\in
\tangentspace{y}{\cY} \times \cotangentspace{y}{\cY}
\end{align*}
and the tangent space of $\cK$ as:
\begin{equation*}
\innertangentcone{(y,0^*_y)}{\cK}
=
\tangentspace{y}{\cY}
=
\tangentspace{y}{\cY} \times \{0^*_y\}
\subset
\tangentspace{y}{\cY} \times \cotangentspace{y}{\cY}.
\end{equation*}
Thus the linearized constraints can be split into two parts, the first of which is redundant:
\begin{equation*}
g'(p) \, \delta p \in \innertangentcone{(y,0_y^*)}{\cK}
\quad \Leftrightarrow \quad
\delta y \in \tangentspace{y}{\cY}
,
\quad
c'(y,u)(\delta y,\delta u)
=
0^*_y
.
\end{equation*}
Constraint qualifications are fulfilled at $p$, provided that $\image g'(p) - \tangentspace{y}{\cY} \times \{0^*_y\} = \tangentspace{y}{\cY} \times \cotangentspace{y}{\cY}$ holds.
This is the case if and only if $c'(y,u) \colon \tangentspace{y}{\cY} \times \tangentspace{u}{\cU} \to \cotangentspace{y}{\cY}$ is surjective.
A Lagrange multiplier $\mu$ is an element of
\begin{equation*}
(T^i_{(y,0^*_y)} \cK)^\circ
=
(\tangentspace{y}{\cY})^\circ
=
(\tangentspace{y}{\cY} \times \{0^*_y\})^\circ
=
\{0_y^*\} \times (\tangentspace{y}{\cY})^{**}
=
\{0_y^*\} \times \tangentspace{y}{\cY}
,
\end{equation*}
where the last identity is the canonical identification of the bidual space with the primal space.
A Lagrange multiplier thus is a pair
\begin{equation*}
\mu
=
(0_y^*, \lambda) \in \cotangentspace{y}{\cY} \times \tangentspace{y}{\cY}
.
\end{equation*}
These splittings yield $\mu \, g'(p) \, \delta p = \paren[big](){0_y^*\delta y, \lambda c'(y,u)(\delta y,\delta u)} = \lambda c'(y,u)(\delta y,\delta u)$ and thus the KKT-conditions read
\begin{equation*}
0
=
f'(y,u)(\delta y,\delta u) + \lambda \, c'(y,u)(\delta y,\delta u)
\quad
\text{for all }
(\delta y,\delta u) \in \tangentspace{y}{\cY} \times \tangentspace{u}{\cU}
.
\end{equation*}
Since $c(y,u) = \partial_y E(y,u)$ is a linear form on $\tangentspace{y}{\cY}$, $c'(y,u)$ can be interpreted as a bilinear form on $(\tangentspace{y}{\cY} \times \tangentspace{u}{\cU}) \times \tangentspace{y}{\cY}$ and we have (notice that $\partial_{yy} E(y,u)$ is well-defined by \cref{lemma:morse}, since $\partial_yE(y,u) = 0$ holds):
\begin{equation*}
\lambda \, c'(y,u)(\delta y,\delta u)
\!=\!
(\partial_{y} E)'(y,u)(\lambda,\!\delta y,\!\delta u)
\!=\!
\partial_{yy} E(y,u)(\lambda,\!\delta y) +\partial_{yu} E(y,u)(\lambda,\!\delta u)
.
\end{equation*}
Then the KKT conditions read in more detail:
\begin{align*}
\partial_y f(y,u) \, \delta y + \partial_{yy} E(y,u)(\lambda,\delta y)
&
=
0
\quad
\text{for all } \delta y \in \tangentspace{y}{\cY}
,
\\
\partial_u f(y,u) \, \delta u + \partial_{yu} E(y,u)(\lambda,\delta u)
&
=
0
\quad
\text{for all } \delta u \in \tangentspace{u}{\cU}
,
\\
\partial_y E(y,u) \, \delta y
&
=
0
\quad
\text{for all } \delta y \in \tangentspace{y}{\cY}
.
\end{align*}
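In the flat special case $\cY = \R^n$, $\cU = \R^m$, the three KKT equations above become a linear saddle-point system. The following sketch (our own toy data: a quadratic energy $E(y,u) = \tfrac{1}{2} y^\transp A y - u^\transp B y$ with $A$ symmetric positive definite, and a tracking-type objective $f(y,u) = \tfrac{1}{2}\|y - y_d\|^2 + \tfrac{\gamma}{2}\|u\|^2$) assembles and solves this system:

```python
import numpy as np

# Flat special case of the KKT system above (all data are our own toy choices):
# Y = R^n, U = R^m, E(y,u) = 0.5 y^T A y - u^T B y, so
# c(y,u) = d_y E = A y - B^T u,  d_yy E = A,  d_yu E(lam, du) = -du^T B lam.
rng = np.random.default_rng(0)
n, m, gamma = 4, 2, 1e-1
A = 0.1 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T) + n * np.eye(n)          # symmetric positive definite
B = rng.standard_normal((m, n))
y_d = rng.standard_normal(n)                 # desired state in f

# The three KKT equations above, in matrix form for (y, u, lam):
#   (y - y_d) + A lam = 0,   gamma u - B lam = 0,   A y - B^T u = 0.
K = np.block([
    [np.eye(n),        np.zeros((n, m)),  A                ],
    [np.zeros((m, n)), gamma*np.eye(m),   -B               ],
    [A,                -B.T,              np.zeros((n, n)) ],
])
rhs = np.concatenate([y_d, np.zeros(m), np.zeros(n)])
sol = np.linalg.solve(K, rhs)
y, u, lam = sol[:n], sol[n:n+m], sol[n+m:]

# Residuals of the stationarity conditions and the linearized state equation.
print(np.linalg.norm(y - y_d + A @ lam),
      np.linalg.norm(gamma*u - B @ lam),
      np.linalg.norm(A @ y - B.T @ u))
```

The residual norms printed at the end verify the stationarity conditions in $y$ and $u$ and the state equation simultaneously; note that the assembled KKT matrix is symmetric, mirroring the bilinear-form structure of $\partial_{yy} E$ and $\partial_{yu} E$.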
To write down a Lagrangian function and second-order conditions, we need adapted linearizing maps on the zero section of $\cotangentspace{}{\cY}$ at a KKT-point $p_* = (y_*,u_*)$ with $q_* = g(p_*) = (y_*,0_{y_*}^*)$.
Utilizing the above splitting, these are those mappings $S_{q_*} \colon \cotangentspace{}{\cY} \to \tangentspace{y}{\cY} \times \cotangentspace{y}{\cY}$ which map the zero section $\cK = \cY$ to the first factor of the product, \ie $0_\eta \mapsto (\delta y(\eta),0_y)$.
For a specific example, consider a $C^2$-retraction $\retract{y_*} \colon T_{y_*} \cY \to \cY$ with derivative $D \retract{y_*}(v) \colon \tangentspace{{y_*}}{\cY} \to \tangentspace{\retract{y_*}(v)}{\cY}$.
Then an adapted linearizing map can be given as:
\begin{align*}
S_{q_*} (y,w)
\coloneqq
(v,w \, D \retract{y_*}(v))
,
\quad
\text{where }
v
=
\retractionSymbol^{-1}_{y_*}(y) \in \tangentspace{{y_*}}{\cY}
.
\end{align*}
Since $w \in \cotangentspace{y}{\cY}$ holds, it follows that $w \, D \retract{y_*}(v) \in \cotangentspace{y_*}{\cY}$, and $S_{q_*} (y,0^*_y) = (v,0^*_{y_*})$, as required.
With the help of this linearizing map, the Lagrange multiplier~$\mu$ can be extended locally to a function $h \in C^2(\cotangentspace{}{\cY},\R)$ as follows:
\begin{equation*}
h (\eta,w)
=
\mu \, S_{q_*}(\eta, w)
=
0^*_{y_*} \, v + w \, D \retract{y_*}(v) \lambda
=
w \, D \retract{y_*}(v)
\lambda
\end{equation*}
and thus the Lagrangian function near $p_*$ reads:
\begin{equation*}
L_{S_{q_*}}(p,\mu)
=
f(y,u) + \partial_y E(y,u) \, D \retract{y_*}(v) \lambda
,
\quad
v
=
\retractionSymbol^{-1}_{y_*}(y)
.
\end{equation*}
Its first derivative at a feasible point, where $\partial_y E(y,u) = 0$ holds, is given by
\begin{equation*}
L'_{S_{q_*}}(p,\mu)(\delta p)
\!=\!
f'(y,u)(\delta y,\delta u) + (\partial_y E)'(y,u) (D \retract{y_*}(v) \lambda,\delta y,\delta u),
\;
v
=
\retractionSymbol^{-1}_{y_*}(y)
.
\end{equation*}
At the KKT point $p_*$ we observe $L'_{S_{q_*}}(p_*,\mu) = 0$, since $D \retract{y_*}(0_{y_*}) = \id_{\tangentspace{y_*}{\cY}}$.
Since $\innertangentcone{(y,0_y^*)}{\cK}$ is a linear subspace in our setting, the critical cone $\criticalcone{\cM}$ is given as the preimage of $\criticalcone{\cN} = \tangentspace{y_*}{\cY} \times \{0^*_{y_*}\}$ under $g'(p_*)$, so it is the set
\begin{align*}
\criticalcone{\cM}
&
=
\setDef{(\delta y,\delta u)}{c'(y_*,u_*)(\delta y,\delta u) = 0}
\\
&
=
\setDef{(\delta y,\delta u)}{\partial_{yy} E(y_*,u_*)(v,\delta y) + \partial_{yu} E(y_*,u_*)(v,\delta u) = 0 \; \forall v \in \tangentspace{y_*}{\cY}}
.
\end{align*}
Finally, the second derivative of the Lagrangian at $p_*$ is well-defined on $\criticalcone{\cM}$ and can, at least formally, be written as:
\begin{equation*}
\begin{aligned}
L_{S_{q_*}}''(y_*,u_*,\lambda)[\delta p, \delta p]
=
(f''(y_*,u_*)+ (\partial_yE)''(y_*,u_*)(\lambda))[(\delta y,\delta u),(\delta y,\delta u)]
\\
\text{for all }
(\delta y,\delta u) \in \criticalcone{\cM}
.
\end{aligned}
\end{equation*}
As a consequence of the restriction $(\delta y,\delta u) \in \criticalcone{\cM}$ and the fact that $S_{q_*}$ is adapted, terms containing $DD \retract{p_*}$ are not present in this formula, which reflects \cref{pro:invarianceLpp}.
\section{Conclusion and Outlook}
\label{section:conclusion_outlook}
In this paper we have extended the analysis of optimization problems on manifolds from vector space-valued constraints to the much more flexible case of manifold-valued constraints.
We have seen that such problems arise naturally when constraints are formulated in a geometric way, and in the optimal control of variational problems on manifolds.
We generalized the polyhedric structure required for inequality constraints by using submanifolds with corners and adapted local charts.
First-order optimality conditions were derived, which directly generalize the known cases.
An appropriate definition of the Lagrangian function and the formulation of well-defined second-order optimality conditions, however, revealed the significance of the above-mentioned polyhedric structure, reflected by the important role played by adapted linearizing maps.
We emphasize that in order to derive the theory, Riemannian metrics or connections were not needed.
Most of the stated results may be generalized to infinite-dimensional Banach manifolds.
However, we expect additional technical difficulties.
First, it seems to be an open problem how to generalize \cref{definition:submanifold_with_corners} to the infinite dimensional case, \ie, to define corners of infinite index $\ell = \infty$ in a useful way.
Second, already in infinite-dimensional Banach spaces, optimality conditions exhibit a number of topological subtleties, which have to be tackled in the case of Banach manifolds as well.
Further, algorithmic approaches for this class of optimization problems are still to be developed, even in the finite-dimensional setting.
An idea would be to extend SQP methods to this setting.
At every iterate $p_k$ we perform a local pull-back of the given problem to tangent spaces, using retractions and adapted linearizing maps.
Locally, we end up with a problem of the form~\eqref{eq:problem_setting_Retraction_classic}.
A QP step may then be computed for this pull-back, and an update can be defined via a retraction.
A detailed realization of this basic idea is, however, subject to future research.
\section*{Data Availability}
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
\printbibliography
\end{document}
\begin{document}
\title{Sensing quantum chaos through the non-unitary geometric phase}
\author{Nicol\'{a}s Mirkin}
\email[Corresponding author:]{\,[email protected]}
\affiliation{
Departamento de F\'{i}sica “J. J. Giambiagi” and IFIBA, FCEyN, Universidad de Buenos Aires, 1428 Buenos Aires, Argentina
}
\author{Diego Wisniacki}
\affiliation{
Departamento de F\'{i}sica “J. J. Giambiagi” and IFIBA, FCEyN, Universidad de Buenos Aires, 1428 Buenos Aires, Argentina
}
\author{Paula I. Villar}
\affiliation{
Departamento de F\'{i}sica “J. J. Giambiagi” and IFIBA, FCEyN, Universidad de Buenos Aires, 1428 Buenos Aires, Argentina
}
\author{Fernando C. Lombardo}
\affiliation{
Departamento de F\'{i}sica “J. J. Giambiagi” and IFIBA, FCEyN, Universidad de Buenos Aires, 1428 Buenos Aires, Argentina
}
\date{\today}
\begin{abstract}
Quantum chaos is usually characterized through its statistical implications on the energy spectrum of a given system. In this work we propose a decoherent mechanism for sensing quantum chaos. The chaotic nature of a many-body quantum system is sensed by studying the implications that the system produces in the long-time dynamics of a probe coupled to it under a dephasing interaction. By introducing the notion of an effective averaged decoherence factor, we show that the correction to the geometric phase acquired by the probe with respect to its unitary evolution can be exploited as a robust tool for sensing the integrable to chaos transition of the many-body quantum system to which it is coupled. This sensing mechanism is verified for several systems with different types of symmetries, disorder and even in the presence of long-range interactions, evidencing its universality.
\end{abstract}
\maketitle
\section{Introduction}
What is the most universal feature of quantum chaos? Is it a spectral or a dynamical property? Historically, quantum chaos was introduced in the literature through a spectral definition, where the chaoticity of a many-body quantum system was characterized through its statistical similarity to the predictions of Random Matrix Theory (RMT) \cite{haake1991quantum,stockmann2000quantum}.
However, the approach later shifted toward a dynamical definition of quantum chaos. For instance, the seminal work of Peres proposed the Loschmidt Echo, a measure of irreversibility and sensitivity to perturbations, as a dynamical signature of quantum chaos \cite{peres1984stability}. While Peres' original proposal was explicitly conjectured for the long-time regime, since the appearance of some outstanding works that found a link between the short-time behaviour and the Lyapunov exponent \cite{jalabert2001environment,goussev2012loschmidt}, the long-time regime has been almost entirely neglected. More recently, the Out-of-Time-Ordered-Correlators (OTOCs) were also proposed as a dynamical quantifier of quantum chaos \cite{larkin1969quasiclassical,shenker2014black,aleiner2016microscopic,huang2017out} and it has been argued that the long-time regime is the most reliable one for characterizing the chaotic nature of a given many-body quantum system \cite{garcia2018chaos,fortes2019gauging}. Considering that the OTOCs and the Loschmidt Echo are deeply connected \cite{yan2020information}, the question of whether the chaotic nature of a quantum system can be universally measured through the long-time dynamics of the Loschmidt Echo remains unexplored.
Another remarkable feature of the Loschmidt Echo is its connection with decoherence under pure dephasing scenarios. As a consequence, some works have studied the interplay between chaos and decoherence, for example, by considering quantum systems with a classical chaotic counterpart as a reduced system, usually coupled to regular environments and with the focus on the short-time dynamics \cite{zurek1994decoherence,pattanayak1997exponentially, habib1998decoherence,monteoliva2000decoherence,jalabert2001environment,toscano2005decoherence, wisniacki2009scaling}. On the contrary, the idea of sensing the chaotic nature of an arbitrary many-body environment without classical counterpart through the long-time decoherent dynamics of a probe coupled to it has not been investigated yet. This is what we call a decoherent characterization of quantum chaos and it constitutes the main motivation of our present work.
We remark that in the past the task of sensing with a probe a certain property of a many-body quantum system has already been demonstrated in the literature, both theoretically and experimentally, for identifying critical points, phase transitions, unknown temperatures and non-Markovian behaviour \cite{quan2006decay,cucchietti2007universal,damski2011critical,haikka2012non}. In those works, a common approach that has already proven its worth is to monitor how the geometric phase acquired by the probe is corrected due to its coupling to the many-body environment with respect to its unitary evolution \cite{fuentes2002vacuum,carollo2005geometric,yi2007geometric,cucchietti2010geometric,martin2013berry,lombardo2020detectable,lombardo2021detectable}.
In this work, our ultimate goal is to construct a local decoherent mechanism for sensing quantum chaos. For that purpose, we propose a unified picture that bridges the gap between quantum chaos, decoherence, the Loschmidt Echo, long-time dynamics and the non-unitary geometric phase. Our proposal consists of probing the chaoticity of a many-body quantum system by monitoring the decoherence dynamics suffered by a two-level system coupled to it through a dephasing interaction (see Fig. \ref{fig1}). Given its robustness and sensitivity, the physical quantity that we use to quantify the detrimental effects generated by the many-body environment on the probe is the correction to its accumulated geometric phase with respect to an isolated unitary evolution. Under this general scheme, we find a precise correspondence between the degree of correction to the geometric phase acquired by the probe and the degree of chaos present in the many-body quantum environment being sensed. Seeking universality, we verify this sensing mechanism for several environmental spin chains with different conserved symmetries, in the presence of disorder and even in the realistic situation of long-range interactions. Remarkably, we achieve the latter while keeping both the intrinsic Hamiltonian of the probe and the interaction term fixed, which makes our sensing protocol experimentally feasible. Moreover, unlike previous methods considered in the literature, we can reproduce the whole integrable to chaos transition without any consideration about the energy level symmetries and resorting to not so large many-body quantum systems. Although a similar approach was followed in a recent work \cite{mirkin2021quantum}, there the probe was a reduced part of the chain being sensed and thus the interaction Hamiltonian was not fixed but depended on the specific model under consideration.
On the contrary, in this work the physical interpretation underlying the sensing mechanism is more transparent and the quantity that we use to monitor decoherence is more robust for being measured experimentally.
This work is organized as follows. In Section II we present the general framework on which we build our decoherent characterization of quantum chaos. In Section III we review the standard approach regarding the spectral definition of a quantum chaotic system. We continue in Section IV with a short description of the non-unitary geometric phase. Our main analysis is presented in Section V, where we consider four different physical systems as environments to be sensed. We conclude in Section VI with some final remarks.
\renewcommand{\figurename}{Figure}
\begin{figure}
\caption{Schematic representation of our local decoherent characterization of quantum chaos.
The chaoticity of a many-body quantum system is sensed through the long-time decoherent dynamics of a probe coupled to it under a dephasing interaction. While the decoherence factor has huge fluctuations when the energy-level separation of the many-body quantum environment follows a Poisson distribution, almost no coherence revivals are seen in the probe if the statistics is compatible to a Wigner-Dyson distribution. The impact of the decoherence on the probe is quantified through the correction to its accumulated geometric phase with respect to the unitary evolution.}
\label{fig1}
\end{figure}
\section{General framework}
The main idea of our work is schematically summarized in Fig. \ref{fig1}. As stated in the Introduction, our proposal consists of probing the chaotic behaviour of an arbitrary many-body quantum system by monitoring the long-time decoherence dynamics of a two-level quantum probe coupled to it. For this purpose, we consider a situation where the probe is coupled to the first spin of an environmental chain through a dephasing interaction. The Hamiltonian representing the above setup is given by
\begin{equation}
\begin{aligned}
\hat{H} & = \hat{H}_S + \hat{H}_{int} + \hat{H}_E \\ &= \frac{\omega}{2} \hat{\sigma}_{0}^{z} + g \hat{\sigma}_{0}^{z}\hat{\sigma}_{1}^{z} + \hat{H}_E
\end{aligned}
\end{equation}
where $\omega$ is the characteristic frequency of the probe, $g$ the coupling strength to the first spin of the chain and $\hat{H}_E$ the specific environmental Hamiltonian to be sensed. As we are interested in a dynamical approach, we will consider an initial product state of the form
\begin{equation}
\hat{\rho}(0)= \ket{\psi_{0}}\bra{\psi_{0}} \otimes \ket{\varepsilon(0)} \bra{\varepsilon(0)},
\end{equation}
where the initial state of the probe is
\begin{equation}
\left|\psi_{0}\right\rangle=\cos (\theta / 2)|0\rangle+ e^{i\phi}\sin (\theta / 2)|1\rangle,
\end{equation}
with $\theta \in [0,\pi)$ and $\phi \in [0,2\pi)$. Similarly, the initial state of the environment is a product of random pure states, one for each spin
\begin{equation}
\ket{\varepsilon(0)}= \ket{\psi_{1}} \ket{\psi_{2}} \dots \ket{\psi_{L}}
\end{equation}
where \begin{equation}
\left|\psi_{k}\right\rangle=\cos (\vartheta_k / 2)|0\rangle+e^{i\varphi_k}\sin (\vartheta_k / 2)|1\rangle \quad \forall k \in [1,L],
\label{in_state}
\end{equation}
with $\vartheta_k \in [0,\pi)$, $\varphi_k \in [0,2 \pi)$ and $L$ the number of environmental spins. Let us remark that in order to sense the entire spectrum of the environment in an unbiased way, we must consider several realizations with different random initial pure states. This procedure is important since, with a sufficiently large number of realizations, all the eigenstates of the environment contribute equally to the dynamics of the probe. Given the dephasing interaction between the probe and the environmental chain, the reduced dynamics of the probe can be solved analytically. Its reduced density matrix is given by
\begin{equation}
\begin{aligned}
\hat{\rho}_{\mathrm{r}}(t) & =\cos ^{2}(\theta / 2) \ket{0}\bra{0}
+\sin ^{2}(\theta / 2) \ket{1}\bra{1} \\
& +\frac{\sin \theta}{2} e^{- i (\omega t+\phi)} r(t) \ket{0}\bra{1}
+\frac{\sin \theta}{2} e^{i (\omega t+\phi)} r^{*}(t)\ket{1}\bra{0},
\end{aligned}
\label{rho_t}
\end{equation}
where $r(t)$ is the so-called decoherence factor and is given by
$r(t)=\left\langle\varepsilon_{1}(t) \mid \varepsilon_{0}(t)\right\rangle$ with $\ket{\varepsilon_{k}(t)}=e^{-i t\left[\hat{H}_{E}+(-1)^{k} \hat{H}_{\mathcal{S}\mathcal{E}} \right]}|\varepsilon(0)\rangle$, where $\hat{H}_{\mathcal{S}\mathcal{E}}$ refers to the term of $\hat{H}_{int}$ acting solely over the environmental degrees of freedom (in our case $\hat{\sigma}_{1}^{z}$). It is important to note that the square of the absolute value of the decoherence factor is exactly what is known as the Loschmidt Echo, which measures the sensitivity of the environment to the specific perturbation induced by its coupling to the probe.
As already stated, we are interested in sensing the environment homogeneously, so we must consider a full mixture of realizations of random initial pure states for the environment ($\hat{\rho}_{\varepsilon_m}(0)=\ket{\varepsilon_m(0)}\bra{\varepsilon_m(0)} \, \text{with} \, m \in [1,R]$). Consequently, if the number of realizations $R$ is sufficiently large, we will be dealing with an effective averaged decoherence factor $\tilde{r}_{e}(t)$ of the form
\begin{widetext}
\begin{equation}
\begin{aligned}
\tilde{r}_{e}(t)&= \lim_{R\to\infty} \left[ \frac{1}{R}\sum_{m=1}^{R} \bra{\varepsilon_m(0)} e^{i t\left[\hat{H}_{E}- \hat{H}_{\mathcal{S}\mathcal{E}} \right]} e^{-i t\left[\hat{H}_{E}+ \hat{H}_{\mathcal{S}\mathcal{E}} \right]}|\varepsilon_m(0) \rangle \right] \\
& = \lim_{R\to\infty} \left[ \frac{1}{R}\sum_{m=1}^{R} \Tr \left( \ket{\varepsilon_m(0)}\bra{\varepsilon_m(0)}
e^{i t\left[\hat{H}_{E}- \hat{H}_{\mathcal{S}\mathcal{E}} \right]} e^{-i t\left[\hat{H}_{E}+ \hat{H}_{\mathcal{S}\mathcal{E}} \right]} \right) \right] \simeq \frac{1}{2^L} \Tr \left( e^{i t\left[\hat{H}_{E}- \hat{H}_{\mathcal{S}\mathcal{E}} \right]} e^{-i t\left[\hat{H}_{E}+ \hat{H}_{\mathcal{S}\mathcal{E}} \right]} \right).
\end{aligned}
\label{dec_factor_eff}
\end{equation}
\end{widetext}
Due to its averaged nature, this effective decoherence factor is independent of the initial state, containing exclusively the environmental Hamiltonian and its perturbation due to the coupling with the probe. Equivalently, let us note that Eq. (\ref{dec_factor_eff}) is exactly the decoherence factor associated with a maximally mixed initial state for the environment $\hat{\rho}_\epsilon (0)=\mathbb{1}/2^L$ (see Appendix \ref{appA} for more details) and it is also related to the average of the Loschmidt Echo over initial states according to the Haar measure \cite{zanardi2004purity,dankert2005efficient,wisniacki2013sensitivity}. We can simplify Eq. (\ref{dec_factor_eff}) by diagonalizing both evolution operators separately,
\begin{equation}
\begin{aligned}
& \hat{U}^\dagger = e^{i t\left[\hat{H}_{E}- \hat{H}_{\mathcal{S}\mathcal{E}} \right]} = \sum_{k} e^{it \xi_k} \ket{\xi_k}\bra{\xi_k} \\
& \hat{V} = e^{-i t\left[\hat{H}_{E}+ \hat{H}_{\mathcal{S}\mathcal{E}} \right]} = \sum_{l} e^{-it \eta_l} \ket{\eta_l}\bra{\eta_l},
\end{aligned}
\label{aut_pert}
\end{equation}
where $\set{\xi_k , \eta_l}$ and $\set{\ket{\xi_k},\ket{\eta_l}}$ are the eigenvalues and eigenstates of the operators $\hat{U}^\dagger$ and $\hat{V}$, respectively. Using the above definitions, we have
\begin{equation}
\hat{U}^\dagger \hat{V}= \sum_{k=1} \sum_{l=1} e^{-it(\eta_l - \xi_k)} \braket{\xi_k}{\eta_l} \ket{\xi_k}\bra{\eta_l}.
\label{uv}
\end{equation}
Taking the trace in Eq. (\ref{uv}), the effective averaged decoherence factor $\tilde{r}_e (t)$ reduces to
\begin{equation}
\begin{aligned}
\tilde{r}_e(t)&= \frac{1}{2^L} \Tr \left( \hat{U}^\dagger \hat{V} \right) \\ &
= \frac{1}{2^L} \left( \sum_{k=1} \sum_{l=1} e^{-it(\eta_l - \xi_k)} \abs{\braket{\xi_k}{\eta_l}}^2 \right).
\end{aligned}
\label{r_Tinf}
\end{equation}
This constitutes the universal expression for an averaged decoherence factor associated to a completely arbitrary environment coupled to a probe through a dephasing interaction. In what follows, we will show that the chaotic nature of the environment is hidden in the long-time dynamics of this quantity, involving both the eigenvalues and eigenstates of the perturbed Hamiltonian $\hat{H}_E$.
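As a consistency check, the spectral expression in Eq. (\ref{r_Tinf}) can be compared numerically with the trace formula of Eq. (\ref{dec_factor_eff}) on a small toy environment. The sketch below (our own choice: a transverse-field Ising chain with $L = 3$ spins; all parameter values are arbitrary) confirms that both expressions agree to machine precision:

```python
import numpy as np
from scipy.linalg import expm

# Toy environment (our own choice): an L-site transverse-field Ising chain,
# with the probe dephasing-coupled to the first spin via sigma_z.
L, g, t = 3, 0.3, 1.7
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, site):
    """Embed a single-site operator at `site` in the L-spin Hilbert space."""
    mats = [single if k == site else I2 for k in range(L)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H_E = sum(op(sz, k) @ op(sz, k + 1) for k in range(L - 1)) \
    + 0.8 * sum(op(sx, k) for k in range(L))
H_SE = g * op(sz, 0)          # coupling term acting on the environment

# Trace formula, Eq. (dec_factor_eff):
r_trace = np.trace(expm(1j*t*(H_E - H_SE)) @ expm(-1j*t*(H_E + H_SE))) / 2**L

# Spectral formula, Eq. (r_Tinf):
xi, U = np.linalg.eigh(H_E - H_SE)    # eigenpairs {xi_k, |xi_k>}
eta_l, V = np.linalg.eigh(H_E + H_SE) # eigenpairs {eta_l, |eta_l>}
overlap = np.abs(U.conj().T @ V)**2   # |<xi_k|eta_l>|^2
phases = np.exp(-1j*t*(eta_l[None, :] - xi[:, None]))
r_spec = np.sum(phases * overlap) / 2**L

print(abs(r_trace - r_spec))  # agreement up to numerical round-off
```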
\section{Spectral characterization of quantum chaos}
Quantum chaos was first introduced in the literature as a spectral property. Under this context, a many-body quantum system was considered as chaotic if the statistics behind its energy level distribution fulfilled the predictions raised by RMT. Given the statistical nature of this approach, a usual requirement is to work under the limit of high dimensional Hilbert spaces and to separate the energy levels according to their symmetries \cite{percival1973regular, berry1977level,bohigas1984characterization}. By arranging the energy levels in an ordered set $e_n$, we can define the nearest neighbour spacings as $s_n=e_{n+1}-e_n$. Therefore, if the statistics behind $s_n$ follows a Wigner-Dyson distribution, the quantum system is said to be chaotic. On the contrary, if the statistics is Poissonian, the system is integrable. For quantifying the statistical distance between the actual distribution and a perfectly chaotic or integrable one, it is a standard procedure to use the so-called distribution of $\min(r_n,1/r_n)$, where $r_n$ is the ratio between the two nearest neighbour spacings of a given level ($r_n=s_n/s_{n-1}$). Consequently, the chaotic nature of a given system can be measured through the spectral indicator \cite{oganesyan2007localization,atas2013distribution,kudo2018finite}
\begin{equation}
\Tilde{r}_n=\dfrac{\min(s_n,s_{n-1})}{\max(s_n,s_{n-1})}=\min(r_n,1/r_n).
\end{equation}
Since the mean value of $\Tilde{r}_n$ ($\overline{\min(r_n,1/r_n)}$) attains a minimum value when the statistics is Poissonian ($\mathcal{I}_{P} \eqsim 0.386$) and a maximum
value when it is Wigner-Dyson ($\mathcal{I}_{WD} \eqsim 0.5307$), we can normalize it as
\begin{equation}
\eta=\dfrac{\overline{\min(r_n,1/r_n)}-\mathcal{I}_P}{\mathcal{I}_{WD}-\mathcal{I}_P}.
\label{eta}
\end{equation}
Consequently, an integrable regime is characterized by $\eta \simeq 0$ and a chaotic one by $\eta \simeq 1$. We emphasize that this measure is only useful in the limit of large many-body quantum systems, where the energy spectrum is sufficiently robust to compute the statistics. Also, note that this spectral indicator $\eta$ depends only on the eigenvalues of the spectrum, unlike the averaged decoherence factor in Eq. (\ref{r_Tinf}), which takes into account not only the eigenvalues but also the eigenvectors of the system.
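For illustration, the indicator $\eta$ of Eq. (\ref{eta}) can be evaluated on two synthetic spectra: one with Poissonian (exponential) level spacings and one drawn from a GOE random matrix. The following sketch (our own toy sizes) recovers $\eta \simeq 0$ and $\eta \simeq 1$, respectively:

```python
import numpy as np

# Sketch of the spectral indicator eta from Eq. (eta); sizes are our own.
rng = np.random.default_rng(1)
I_P, I_WD = 0.386, 0.5307   # reference values quoted in the text

def mean_r_tilde(levels):
    """Mean of min(r_n, 1/r_n) over consecutive spacing ratios."""
    s = np.diff(np.sort(levels))
    r = s[1:] / s[:-1]
    return np.mean(np.minimum(r, 1.0 / r))

def eta(levels):
    return (mean_r_tilde(levels) - I_P) / (I_WD - I_P)

# Integrable-like spectrum: iid exponential (Poisson) spacings.
poisson_levels = np.cumsum(rng.exponential(size=20000))

# Chaotic-like spectrum: eigenvalues of a GOE random matrix.
N = 2000
M = rng.standard_normal((N, N))
M = (M + M.T) / np.sqrt(2)
goe_levels = np.linalg.eigvalsh(M)

print(eta(poisson_levels), eta(goe_levels))
```

Note that, unlike nearest-neighbour spacing statistics, the ratio statistic requires no unfolding of the spectrum.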
\section{Non-unitary geometric phase}
During an adiabatic evolution, an isolated quantum system can acquire a phase that is geometric in nature, in addition to its usual dynamical phase \cite{berry1984quantal}. Naturally, if the evolution is non-unitary, the detrimental effects of the environment modify the geometric phase that the open system acquires relative to its closed evolution. As a consequence, great attention has been devoted to understanding the robustness of the geometric phase to different types of environment, due to its potential applications in geometric quantum computation \cite{lombardo2006geometric,lombardo2010environmentally,lombardo2013nonunitary,lombardo2015correction,villar2020geometric}.
The geometric phase acquired by a two-level system along a unitary cyclic evolution generated by $\hat{H}_S$ is given by
\begin{equation}
\Phi_u= N \pi(1+\cos(\theta)),
\end{equation}
where $N$ is the number of periods under consideration. When the evolution is instead non-unitary, the notion of geometric phase can be generalized via a quantum kinematic approach along a quasi-cyclic path $\mathcal{P}: t \in [0,\tau]$ ($\tau= 2\pi N/\omega$), defining
\begin{equation}
\begin{aligned}
\Phi=\arg \Bigg(\sum_{k=\pm} & \sqrt{\lambda_{k}(0) \lambda_{k}(\tau)} \left\langle\Psi_{k}(0) \mid \Psi_{k}(\tau)\right\rangle \\ & e^{-\int_{0}^{\tau} d t\left\langle\Psi_{k}\left| \frac{\partial}{\partial t} \right| \Psi_{k}\right\rangle}\Bigg),
\end{aligned}
\label{eq_phase}
\end{equation}
where $\lambda_k$ are the eigenvalues of the reduced density matrix of the open system and $\ket{\Psi_k}$ its eigenstates \cite{tong2004kinematic}. This expression is gauge invariant and reduces to the unitary case when the interaction strength between the open system and the environment vanishes ($g=0$). Since under our scheme the initial state of the probe is pure ($\lambda_+(0)=1$ and $ \lambda_-(0)=0$), Eq. (\ref{eq_phase}) simplifies to
\begin{equation}
\Phi = \arg \left(\langle\Psi_+(0) \mid \Psi_+(\tau)\rangle \right)-\operatorname{Im} \int_{0}^{\tau} d t\langle\Psi_+ \mid \dot{\Psi}_+\rangle.
\label{fase}
\end{equation}
Thus, we can define the correction of the non-unitary geometric phase $\Phi$ with respect to the unitary case $\Phi_u$ as $\delta \Phi= 1 -\Phi/\Phi_u$. It is important to remark that this correction depends strongly on the kind of environment coupled to the main system, and that it can be measured by means of NMR techniques \cite{cucchietti2010geometric} or by interferometric experiments \cite{zeilinger1988single,leek2007observation,maclaurin2012measurable,wood2020observation}.
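As a concrete numerical sketch (our own construction, not taken from the text), Eq. (\ref{fase}) can be evaluated on a discretized time series of eigenstates. For a pure state precessing once about the $z$ axis at polar angle $\theta$, the result reproduces the unitary value $\Phi_u = \pi(1+\cos\theta)$ modulo $2\pi$.

```python
import numpy as np

def geometric_phase(psi):
    """Discrete version of Eq. (fase) for a time series of pure states.

    psi: array of shape (T, d), with psi[j] the normalized eigenstate
    |Psi_+(t_j)>.  The -Im \int <Psi|dPsi/dt> dt term is approximated
    by the accumulated phases of successive overlaps.
    """
    total = np.angle(np.vdot(psi[0], psi[-1]))
    for j in range(len(psi) - 1):
        total -= np.angle(np.vdot(psi[j], psi[j + 1]))
    return total

# Sanity check: state at polar angle theta precessing once about z
theta, T = 3 * np.pi / 8, 2000
phi_t = np.linspace(0.0, 2 * np.pi, T)
psi = np.stack([np.full(T, np.cos(theta / 2)),
                np.sin(theta / 2) * np.exp(1j * phi_t)], axis=1)
phase = geometric_phase(psi)
expected = np.pi * (1 + np.cos(theta))   # Phi_u with N = 1
print(np.isclose(phase % (2 * np.pi), expected % (2 * np.pi), atol=1e-3))
```

The discretization replaces $\langle\Psi|\dot\Psi\rangle\,dt$ by $\arg\langle\psi_j|\psi_{j+1}\rangle$, valid when successive overlaps are close to unity.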
\section{Sensing quantum chaos on realistic spin chains}
In this Section we present our results for the general framework introduced in Section II, in which a two-level quantum probe senses the chaoticity of a realistic spin chain coupled to it through a dephasing interaction. To unveil the degree of quantum chaos in the environmental chain, we monitor how the reduced dynamics of the probe is affected as a parameter of $\hat{H}_E$ (which sets the chaoticity of the chain) is swept over some range. As argued above, we study how the geometric phase acquired by the probe over several periods is corrected with respect to its unitary evolution. We focus on this particular quantity because the non-unitary geometric phase has already proven its worth as a sensor of other properties of many-body quantum systems, whereas the task of sensing quantum chaos has so far been neglected. For the sake of universality, we test our sensing protocol on four spin chains as different as possible, involving different types of symmetries, disorder, and even long-range interactions.
\subsection{Ising with transverse magnetic field}
The first environmental spin chain in which we are going to deploy our analysis is an Ising spin chain with transverse magnetic field, whose Hamiltonian is given by
\begin{equation}
\hat{H}_E = \sum\limits_{k=1}^{L}\left(h_x \hat{\sigma}_{k}^{x}+h_z\hat{\sigma}_{k}^{z} \right) - J \sum\limits_{k=1}^{L-1}\hat{\sigma}_{k}^{z}\hat{\sigma}_{k+1}^{z},
\label{ising}
\end{equation}
where $h_x$ and $h_z$ are the strengths of the transverse and longitudinal magnetic fields, respectively, $J$ the nearest-neighbor coupling, and $L$ the number of spins in the chain. In this model, parity is a conserved quantity and we must take it into account when studying the integrable-to-chaos transition with the standard spectral measures. The parity is defined through the permutation operators $\hat{\Pi}=\hat{P}_{1,L}\hat{P}_{2,L-1}\dots \hat{P}_{L/2-1,L/2+1}$ for a chain of odd length $L$, and analogously for even $L$. This implies that the spanned space splits into odd and even subspaces with dimensions $D=D^{even}+D^{odd}$ ($D^{even/odd}\approx D/2$). While this model is integrable in the limits $h_z \gg h_x$ and $h_x \gg h_z$, it exhibits quantum chaos when the longitudinal and transverse fields are of comparable strength. Consequently, $h_z$ is the environmental parameter we sweep to see how the geometric phase of the probe is affected by the chaoticity of the environmental chain.
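A minimal sketch of how Eq. (\ref{ising}) can be assembled numerically from Kronecker products (the helper names are ours):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_at(op, k, L):
    """Single-site operator op acting on site k of an L-site chain."""
    return reduce(np.kron, [op if j == k else I2 for j in range(L)])

def ising_hamiltonian(L, hx, hz, J=1.0):
    """Eq. (ising): transverse/longitudinal fields minus zz couplings."""
    H = sum(hx * op_at(sx, k, L) + hz * op_at(sz, k, L) for k in range(L))
    H -= J * sum(op_at(sz, k, L) @ op_at(sz, k + 1, L) for k in range(L - 1))
    return H

H = ising_hamiltonian(6, hx=1.0, hz=0.5)
print(H.shape, np.allclose(H, H.conj().T))  # prints "(64, 64) True"
```

Dense matrices of this kind suffice for the short chains ($L=9$) used to compute $|\delta\Phi|$; the longer chains used for $\eta$ would require sparse or symmetry-resolved methods.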
As an introductory but representative example, in the upper panel of Fig. \ref{fig_dynamics} we show the accumulated non-unitary geometric phase of the probe as a function of the number of periods, for a typical regime where the environment is integrable ($h_z=0 \,;h_x=1$) and for a typical regime where it is chaotic ($h_z=0.5\,;h_x=1$). For greater clarity, in the inset we normalize it with respect to the unitary evolution, and there it becomes evident that during the first periods (short-time dynamics) the correction $|\delta \Phi|$ generated by the two environmental regimes is indistinguishable. However, as the evolution proceeds, the integrable regime leads to a significantly lower correction than the chaotic case. This qualitatively different behaviour of $|\delta \Phi|$ can be better understood by comparing with the lower panel of Fig. \ref{fig_dynamics}, where we show the mean value of the averaged decoherence factor $|\tilde{r}_e(t)|$ as a function of the number of periods for the same set of parameters as the inset. While in the chaotic regime the mean value of $|\tilde{r}_e(t)|$ goes to zero at long times, fully destroying the coherences of the probe and thus any vestige of the geometric phase, in the integrable regime $|\tilde{r}_e(t)|$ oscillates periodically around a mean value much greater than zero (see Fig. 1) and the information about the geometric phase is not totally lost.
\begin{figure}
\caption{\textbf{Upper panel:}
\label{fig_dynamics}
\end{figure}
Since we are dealing with a two-level system acting as a probe, for a better geometrical understanding, in Fig. \ref{fig_geometric} we show the trajectories of the probe parametrized by its Bloch vector $\vec{r}=(r_x,r_y,r_z)$ (recall that $r_z$ is constant) along specific periods, for the same set of parameters as in Fig. \ref{fig_dynamics}. As before, for short times ($N=1$) the trajectories in the two regimes are indistinguishable. However, as the probe evolves for longer times, the decoherence produced by the chaotic environment is so strong that no revivals of the coherences are observed (in this regime the trajectory of the probe is a quasi-stationary point). On the contrary, if the environment is integrable, there is a strong non-Markovian behaviour that leads to large revivals of both the coherences and the geometric phase. For example, Fig. \ref{fig_geometric} shows that although during a specific period ($N=15$) the geometric structure of the trajectory is almost entirely lost, several periods later ($N=25$) it is recovered quite well with respect to the unitary evolution. This can be interpreted as a flow of information from the probe to the environment around $N=15$ and a backflow of information around $N=25$, which evidences a strong relation between non-Markovianity and the integrable nature of the environmental chain (see Appendix B for more details) \cite{breuer2009measure,rivas2014quantum,garcia2012non,mirkin2019information,mirkin2019entangling}.
\begin{figure}
\caption{Each panel shows the trajectory of the probe along a specific period $N$ ($N=\tau \omega/2\pi$). The black solid line is for a unitary evolution where the trajectory of the probe is perfectly cyclic ($g=0$). The green squares refer to a non-unitary evolution caused by an integrable environmental chain ($h_z=0.0$) and the blue diamonds to a chaotic one ($h_z=0.5$). The initial state of the probe is a fixed pure state $(\theta=3\pi/8 ; \phi=0)$, R= 100 realizations for different pure random initial states for the environment were considered and the rest of the parameters are set as $\omega=1$, $h_x=1$, $J=1$, $g=0.2$, $L=9$. }
\label{fig_geometric}
\end{figure}
Let us now extend our analysis. So far we have focused on comparing the two limiting cases of interest, i.e. a purely integrable regime versus a maximally chaotic one. But the important question is: can our approach reproduce the whole integrable-to-chaos transition when sweeping over a certain parameter range? To address this question, we must compare the standard spectral indicator $\eta$, which is already normalized, with the correction to the geometric phase $|\delta \Phi|$ (or some other dynamical quantity), which is not. Therefore, to normalize a given quantity $X$ when sweeping over a parameter $y$ on which $X$ depends, we define
\begin{equation}
X_{Norm}= \frac{\max (X(y))-X(y)}{\max (X(y))-\min(X(y))}.
\end{equation}
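This normalization maps the sweep onto $[0,1]$, with $0$ at the maximum of $X$ and $1$ at its minimum, so that a larger $X_{Norm}$ tracks a smaller correction. A one-line sketch (function name ours):

```python
import numpy as np

def normalize_sweep(X):
    """X_Norm over a parameter sweep: 0 at max(X), 1 at min(X)."""
    X = np.asarray(X, dtype=float)
    return (X.max() - X) / (X.max() - X.min())

print(normalize_sweep([3.0, 2.0, 1.0]))
```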
In Fig. \ref{transition_ising} we study the normalized correction $|\delta \Phi|$, averaged over 100 different realizations of pure random initial states for the environment, together with the spectral indicator of chaos $\eta$, both as a function of the parameter $h_z$. While for computing $\eta$ we diagonalized a large spin chain of $L=14$ spins and took the symmetries of the environment into account by analyzing only the odd subspace, for calculating $|\delta \Phi|$ we used a much shorter environmental spin chain ($L=9$) and completely neglected its symmetries. Remarkably, we see a strong correspondence between the degree of chaos present in the environmental chain (quantified with the standard spectral approach) and the non-unitary correction to the accumulated geometric phase of the probe. However, this strong correspondence between the standard spectral characterization of quantum chaos and our decoherent approach holds only for sufficiently long times, as shown in the inset of Fig. \ref{transition_ising}. There it can be seen that as the number of periods involved in the dynamics increases, the degree of correction of the accumulated geometric phase of the probe converges to the degree of chaos present in the environmental spin chain being sensed.
\begin{figure}
\caption{\textbf{Main panel:}
\label{transition_ising}
\end{figure}
To gain further insight and test the universality of our decoherent mechanism for sensing quantum chaos, in the next subsections we cover a wide range of physical systems with different conserved symmetries and interactions, and we show that the same approach holds in all of these scenarios as well.
\begin{figure*}
\caption{In all panels, we plot the normalized correction $|\delta \Phi|$ averaged over several different realizations of pure random initial states for the specific environmental chain under consideration, together with the spectral indicator of chaos $\eta$, as a function of a certain parameter which tunes the integrable to chaos transition in the environment. For computing $|\delta \Phi|$, the initial state of the probe is always the same fixed pure state $(\theta=3\pi/7; \phi=0)$,
the environmental chain has always a length of $L=9$ spins and $N=20$ periods are considered ($\omega=1$). \textbf{Panel (a): Heisenberg with random magnetic field.}
\label{transition_all}
\end{figure*}
\subsection{Heisenberg with random magnetic field}
The next environmental model to be sensed is a spin chain with nearest-neighbor interaction coupled to a random magnetic field in the $\hat{z}$ direction at each site. The Hamiltonian with open boundary conditions is given by
\begin{equation}
\hat{H}_E=\sum_{k=1}^{L-1}\left(\hat{S}_{k}^{x}\hat{S}_{k+1}^{x} + \hat{S}_{k}^{y}\hat{S}_{k+1}^{y} + \hat{S}_{k}^{z}\hat{S}_{k+1}^{z} \right) + \sum_{k=1}^{L} h_k^z \hat{S}_{k}^{z},
\label{heisenberg}
\end{equation}
where $\{h_k^z\}$ is a set of random variables uniformly distributed within the interval $[-h,h]$. In this model, the $\hat{z}$ component of the total spin $\hat{S}^z=\sum_{k=1}^L \hat{S}_k^z$ is a conserved quantity. This conservation allows the spanned space to be separated into smaller subspaces $\hat{\mathcal{S}}_n$, where $n$ is a fixed number of up spins. The dimension of each subspace is given by
\begin{equation}
D_n= \dim\left(\hat{\mathcal{S}}_n\right)= \begin{pmatrix}
L\\
n
\end{pmatrix}
=\frac{L!}{n!(L-n)!}.
\end{equation}
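For illustration, a fixed-$\hat{S}^z$ subspace can be enumerated with bit strings, one bit per site; the sketch below (a hypothetical helper, not from the text) lists the computational-basis states with $n$ up spins and checks their count against $D_n$.

```python
from itertools import combinations
from math import comb

def sz_subspace_basis(L, n):
    """Computational-basis states of an L-site chain with n up spins,
    encoded as integers (bit k set <=> up spin at site k)."""
    states = []
    for up_sites in combinations(range(L), n):
        states.append(sum(1 << k for k in up_sites))
    return states

L, n = 9, 4
basis = sz_subspace_basis(L, n)
print(len(basis) == comb(L, n))  # D_n = 9!/(4! 5!) = 126 states
```

Restricting the diagonalization to one such block is what makes the spectral statistics of this model well defined.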
Taking this symmetry into account, the system is integrable for $h=0$. As the strength of the random magnetic field increases, the degree of disorder grows and the system reaches a Wigner-Dyson distribution near $h \simeq 0.5$. Finally, if the disorder is too strong, there is a transition to many-body localization (MBL) and the energy levels again follow a Poisson distribution. In the upper panel of Fig. \ref{transition_all} we show the correction to the geometric phase of the probe $|\delta \Phi|$, averaged over 250 realizations of different random sets $\{h_k^z\}$ and different pure random initial states for the environment, together with the spectral chaos indicator $\eta$, both as a function of the random magnetic field $h$. As in the Ising model analyzed before, a shorter environmental chain was used for computing $|\delta \Phi|$ and its energy-level symmetries were completely neglected. Although this particular model presents disorder and MBL, once again the global spectral characterization of quantum chaos coincides with our local decoherent approach for sufficiently long times.
\subsection{Perturbed XXZ model}
The many-body quantum system that we will sense now is an anisotropic spin chain with nearest-neighbor couplings and a perturbation with next-nearest-neighbor couplings. The Hamiltonian of this model with open boundary conditions is given by
\begin{equation}
\hat{H}_E(\lambda)=\hat{H}_0 + \lambda \hat{H}_1,
\end{equation}
where the parameter $\lambda$ quantifies the strength of the perturbation and each term is given by
\begin{equation}
\begin{split}
& \hat{H}_0= \sum_{k=1}^{L-1}\left(\hat{\sigma}_{k}^{x}\hat{\sigma}_{k+1}^{x} + \hat{\sigma}_{k}^{y}\hat{\sigma}_{k+1}^{y} +\mu \hat{\sigma}_{k}^{z}\hat{\sigma}_{k+1}^{z} \right) \\
& \hat{H}_1= \sum_{k=1}^{L-2}\left(\hat{\sigma}_{k}^{x}\hat{\sigma}_{k+2}^{x} + \hat{\sigma}_{k}^{y}\hat{\sigma}_{k+2}^{y} + \mu \hat{\sigma}_{k}^{z}\hat{\sigma}_{k+2}^{z} \right).
\end{split}
\end{equation}
In this particular model, the integrable-to-chaos transition arises as the next-nearest-neighbor term becomes comparable to the nearest-neighbor one. Nevertheless, to observe the transition with the standard spectral measures we must consider all the symmetries of this model. First, if the chain is isotropic ($\mu=1$), the total spin $\hat{S}^2$ is conserved, so we must take $\mu \neq 1$ to see the transition. Moreover, not only the $\hat{z}$ component of the spin $\hat{S}^z=\sum_{k=1}^L \hat{S}_k^z$ but also parity is conserved, so we need to analyze an odd or even subspace with a fixed number of excitations to be able to see the transition. In the middle panel of Fig. \ref{transition_all} we show the results for this model: the correction to the geometric phase of the probe $|\delta \Phi|$, as well as the standard spectral indicator $\eta$, is plotted as a function of the perturbation strength $\lambda$.
To sense this environmental chain homogeneously, $100$ realizations were considered for computing $|\delta \Phi|$, and a much shorter spin chain ($L=9$) was used than in the analysis of the spectral indicator of chaos $\eta$ ($L=15$).
\subsection{Ising with long range interactions}
The last model we consider differs from all the others in that it presents long-range interactions. The Hamiltonian is given by
\begin{equation}
\hat{H}_E=\sum_{j<j^{\prime}} J_{j j^{\prime}} \sigma_{j}^{x} \sigma_{j^{\prime}}^{x}+\sum_{j=1}^{N}\left(B^{z 0}+(j-1) g_e \right) \sigma_{j}^{z},
\end{equation}
where the long-range interactions are given by
$J_{j j^{\prime}}=\frac{J_0}{|j-j^{\prime}|^\gamma}$, with $J_0$ the nearest-neighbor coupling, $B^{z0}$ an overall bias field, $g_e$ a gradient strength, and $\gamma$ the long-range exponent. We fix $\gamma=1.3$ and $B^{z0}/J_0 \gg 1$ so that the total magnetization in the $\hat{z}$ direction is approximately conserved \cite{morong2021observation}. In the lower panel of Fig. \ref{transition_all} we show the correction to the geometric phase of the probe $|\delta \Phi|$, averaged over 100 realizations of different pure random initial states for the environment, together with the spectral chaos indicator $\eta$, both as a function of $g_e$. Although a shorter chain was used for computing $|\delta \Phi|$ and no considerations about the symmetries were made, as in all the other models we are able to diagnose quite accurately the degree of chaos through our local decoherent description.
\section{Conclusions}
In this work we have proposed a realistic decoherent mechanism for sensing the chaotic nature of a generic many-body quantum system. By locally coupling a two-level quantum probe to a many-body environmental chain through a dephasing interaction, we have shown that the long-time dynamics of the probe can be used as a sensor to monitor the degree of quantum chaos present in the chain. In particular, we have shown that the correction to the accumulated geometric phase of the probe with respect to its unitary evolution can be exploited as a robust tool for sensing the whole integrable-to-chaos transition within the specific spin chain acting as a many-body environment. For the sake of universality, we have used our decoherent mechanism to sense the chaoticity of four different environmental spin chains, each possessing different conserved symmetries, and even in the presence of disorder and long-range interactions. Besides its universality, our method is experimentally friendly with current technologies, since it requires neither long environmental chains nor any bookkeeping of the energy-level symmetries. By bridging the gap between Peres' original idea of focusing on the long-time dynamics of the Loschmidt echo, its relation to decoherence under pure dephasing interactions, and the use of the non-unitary geometric phase as a sensor, we hope our findings shed new light on the double nature of quantum chaos, both as a spectral and as a dynamical property.
\appendix
\section{Averaged decoherence factor}
\label{appA}
The decoherence factor plays an important role whenever an open quantum system is coupled to an environment through a dephasing interaction. For convenience, it is usually assumed that the environment starts evolving from a specific pure state. Under this non-trivial assumption, the decoherence factor takes the form
\begin{equation}
r(t)= \langle\varepsilon(0)| e^{i t\left[\hat{H}_{E}- \hat{H}_{\mathcal{S}\mathcal{E}} \right]} e^{-i t\left[\hat{H}_{E}+ \hat{H}_{\mathcal{S}\mathcal{E}} \right]}|\varepsilon(0)\rangle,
\end{equation}
where $|\varepsilon(0)\rangle$ refers to the pure initial state of the environment. However, as we are interested in the task of sensing quantum chaos, which is a property of the entire spectrum of the environment, we cannot focus merely on one pure initial state. If we did, a great part of the environmental spectrum would not contribute to the dynamics of the probe and we might lose important information about the chaoticity of the environmental chain. This is why we introduce the averaged decoherence factor $\tilde{r}_e (t)$, in which we simply average over different pure random initial states of the environment $|\varepsilon_m(0)\rangle$ (see Eq. (\ref{dec_factor_eff})).
As argued in the main text, if the number of realizations $R$ is sufficiently large, given the full mixture of pure initial states involved in the average, $\tilde{r}_{e}(t)$ will converge to the decoherence factor associated with a maximally mixed initial state. This is precisely what is shown numerically in Fig. \ref{fig_convergence} for the particular case of an environmental Ising spin chain with transverse magnetic field (see model in Eq. (\ref{ising})); the same argument holds for the rest of the models considered.
\begin{figure}
\caption{Absolute value of the averaged decoherence factor $|\tilde{r}
\label{fig_convergence}
\end{figure}
Finally, it is also important to remark that the square of the absolute value of the decoherence factor is exactly what is known as the Loschmidt echo (LE), i.e. $M(t)=|r(t)|^2$. In this context, a common procedure is to average the LE over initial states according to the Haar measure, which is uniform over all quantum states in the Hilbert space. This averaged LE is given by
\begin{widetext}
\begin{equation}
\overline{M}(t)= \int d \ket{\varepsilon (0)} |r(t)|^2 = \frac{2^L+\left|\Tr(e^{i t\left[\hat{H}_{E}- \hat{H}_{\mathcal{S}\mathcal{E}} \right]} e^{-i t\left[\hat{H}_{E}+ \hat{H}_{\mathcal{S}\mathcal{E}} \right]} )\right|^2}{2^L(2^L+1)},
\end{equation}
\end{widetext}
which, beyond a dimensional factor, presents the same qualitative behavior as the averaged decoherence factor $|\tilde{r}_e(t)|$ defined in Eq. (\ref{dec_factor_eff}).
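The formula above rests on the Haar-measure identity $\overline{|\langle\varepsilon|W|\varepsilon\rangle|^2} = (D+|\operatorname{Tr} W|^2)/(D(D+1))$ for a unitary $W$ on a $D$-dimensional space, with $D=2^L$ and $W=e^{it(\hat H_E-\hat H_{\mathcal{SE}})}e^{-it(\hat H_E+\hat H_{\mathcal{SE}})}$. The sketch below checks this identity by Monte Carlo, with small random Hermitian matrices standing in for $\hat H_E$ and $\hat H_{\mathcal{SE}}$ (an illustration only, not the chains of the main text).

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(D):
    A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
    return (A + A.conj().T) / 2

def expm_herm(H, t):
    """e^{-i t H} for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

D, t = 8, 0.7                        # D = 2^L with L = 3
HE, HSE = rand_herm(D), rand_herm(D)
# W = e^{+it(HE - HSE)} e^{-it(HE + HSE)}
W = expm_herm(HE - HSE, -t) @ expm_herm(HE + HSE, t)

# Haar average of |<eps|W|eps>|^2 by Monte Carlo over random pure states
R = 20000
samples = np.empty(R)
for m in range(R):
    v = rng.normal(size=D) + 1j * rng.normal(size=D)
    v /= np.linalg.norm(v)
    samples[m] = abs(v.conj() @ W @ v) ** 2
mc = samples.mean()
formula = (D + abs(np.trace(W)) ** 2) / (D * (D + 1))
print(abs(mc - formula) < 0.02)
```

Normalized complex Gaussian vectors are Haar-distributed on the unit sphere, which is what justifies the sampling step.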
\section{Chaos and non-Markovianity}
\label{appB}
Within the general framework of open quantum systems, non-Markovianity (NM) is usually associated with the existence of a flow of information from the environment back to the open system. One standard way of quantifying the degree of NM of a given dynamics is to monitor the revivals of distinguishability \cite{medida1}. The rate of change of distinguishability, $\sigma(\hat{\rho}_{r1}(0),\hat{\rho}_{r2}(0),t)$, is the time derivative of the trace distance, defined as $\mathcal{D}(\hat{\rho}_{r1},\hat{\rho}_{r2})=\dfrac{1}{2}||\hat{\rho}_{r1}-\hat{\rho}_{r2}||$, where $||A||=\operatorname{tr}(\sqrt{A^{\dagger}A})$. In this context, under a Markovian regime the information of the open system is continuously leaked to the environment, and quantum states become less and less distinguishable. On the contrary, under a non-Markovian dynamics information can flow from the environment back to the reduced system, and the distinguishability between states may increase within a given period of time. Consequently, a standard measure of NM (the BLP measure \cite{medida1}) consists of integrating over all time intervals in which the distinguishability between two orthogonal initial states increases, maximized over the pair of initial states, i.e.
\begin{equation}
\mathcal{N}^{BLP}=\max\limits_{{\lbrace \hat{\rho}_{r1}(0),\hat{\rho}_{r2}(0)\rbrace}} \int_{0, \sigma >0}^{\infty} \sigma \left (\hat{\rho}_{r1}(0),\hat{\rho}_{r2}(0),t'\right) dt'.
\label{BLP}
\end{equation}
Nevertheless, this measure of NM has some inconsistencies that have already been pointed out in the literature \cite{pineda2016measuring}. The main problem is that it overestimates the weight of fluctuations in the trace distance, which is unfortunate given the huge fluctuations present in our averaged decoherence factor in the integrable regime. To avoid this problem, a possible solution is to follow the approach proposed in Ref. \cite{pineda2016measuring}, where instead of integrating over all revivals, the calculation is restricted to the largest revival of $\mathcal{D}(\hat{\rho}_{r1},\hat{\rho}_{r2})$ with respect to its minimum value prior to the revival, i.e.
\begin{equation}
\mathcal{N}^{LR}=\max_{t_f,\, t \leq t_f}\left[\mathcal{D}\left(\hat{\rho}_{r1}\left(t_{f}\right), \hat{\rho}_{r2}\left(t_{f}\right)\right)-\mathcal{D}\left(\hat{\rho}_{r1}(t), \hat{\rho}_{r2}(t)\right)\right].
\end{equation}
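On a sampled trace-distance curve, $\mathcal{N}^{LR}$ reduces to the largest excess of $\mathcal{D}(t_f)$ over its running minimum; a short sketch (function name ours):

```python
import numpy as np

def nm_largest_revival(D_t):
    """N^LR: largest revival of the trace distance D(t) above its
    running minimum, following Ref. [pineda2016measuring]."""
    D_t = np.asarray(D_t, dtype=float)
    running_min = np.minimum.accumulate(D_t)
    return float(np.max(D_t - running_min))

# Markovian-like monotone decay -> no revival at all
assert nm_largest_revival(np.exp(-np.linspace(0, 5, 100))) == 0.0
# decay with one revival of height 0.25 above the preceding minimum
print(nm_largest_revival([1.0, 0.5, 0.25, 0.5, 0.4, 0.125]))  # 0.25
```

Unlike the integrated BLP measure, this quantity is insensitive to many small fluctuations, which is exactly why it behaves well with the noisy averaged decoherence factor of the integrable regime.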
In Fig. \ref{no_marko} we study the relation between quantum chaos (quantified through the standard spectral indicator $\eta$) and both measures of NM ($\mathcal{N}^{BLP}_{Norm}$ and $\mathcal{N}^{LR}_{Norm}$) as a function of $h_z$, for the particular case of an environmental Ising spin chain with transverse magnetic field (see model in Eq. (\ref{ising})). Remarkably, we see a great degree of correspondence between the chaotic nature of the environment and the $\mathcal{N}^{LR}$ measure of NM. In particular, it is clear that the more chaotic the environment, the more Markovian the dynamics ($\mathcal{N}^{LR}_{Norm} \simeq 0$). On the contrary, if the environment is integrable, the degree of NM is higher. This is consistent with the fact that the correction to the geometric phase of the probe is lower in the integrable regime than in the chaotic one. While in the integrable case a significant backflow of information allows the probe to partially recover the information that was previously lost, this does not happen in the chaotic regime, and thus the correction is much higher. This constitutes another concrete example where NM can be exploited as a resource for quantum information protocols \cite{mirkin2019entangling,mirkin2019information,mirkin2020quantum,bylicka2013non,berk2019resource,anand2019quantifying,bhattacharya2020convex,env_resource0}. Moreover, since NM is explicitly defined in the long-time regime ($T \to \infty$) and we obtain a remarkable correspondence between the degree of NM in the dynamics and the degree of chaos in the environment, this is another argument for the fact that quantum chaos is a spectral property that manifests itself in the long-time dynamics.
\begin{figure}
\caption{Normalized measures of NM ($\mathcal{N}
\label{no_marko}
\end{figure}
\end{document}
\begin{document}
\title[Subgaussian concentration and rates of convergence.]{Subgaussian concentration and rates of convergence in directed polymers.}
\author{Kenneth S. Alexander}
\address{Department of Mathematics \\
University of Southern California\\
Los Angeles, CA 90089-2532 USA}
\email{[email protected]}
\thanks{The research of the first author was supported by NSF grants
DMS-0405915 and DMS-0804934. The research of the second author was partially supported by IRG-246809. The second author would also like to acknowledge the hospitality of the University of Southern California and Academia Sinica, Taipei, where parts of this work were completed.
}
\author{Nikos Zygouras}
\address{Department of Statistics\\
University of Warwick\\
Coventry CV4 7AL, UK}
\email{[email protected]}
\keywords{directed polymers, concentration, modified Poincar\'e inequalities, coarse graining}
\subjclass{Primary: 82B44; Secondary: 82D60, 60K35}
\maketitle
\begin{abstract} We consider directed random polymers in $(d+1)$ dimensions with nearly gamma i.i.d. disorder. We study the partition function $Z_{N,\omega}$ and establish exponential concentration of $\log Z_{N,\omega}$ about its mean on the subgaussian scale $\sqrt{N/\log N}$. This is used to show that $\mathbb{E}[ \log Z_{N,\omega}]$ differs from $N$ times the free energy by an amount which is also subgaussian (i.e. $o(\sqrt{N})$), specifically $O( \sqrt{\frac{N}{\log N}}\log \log N)$.
\end{abstract}
\section{Introduction.}
We consider a symmetric simple random walk on $\mathbb{Z}^d$, $d\geq 1$. We denote the paths of the walk by $(x_n)_{n\geq 1}$ and its distribution (started from 0) by $P$. Let $(\omega_{n,x})_{n\in \mathbb{N},x\in \mathbb{Z}^d}$ be a collection of i.i.d. mean-zero random variables with distribution $\nu$, and denote their joint distribution by $\mathbb{P}$. We think of $(\omega_{n,x})_{n\in \mathbb{N},x\in \mathbb{Z}^d}$ as a random potential, with the random walk moving inside this potential. This interaction gives rise to the directed polymer in a random environment, formalised by the introduction of the following Gibbs measure on paths of length $N$:
\begin{eqnarray*}
d\mu_{N,\omega}=\frac{1}{Z_{N,\omega}} e^{\beta\sum_{n=1}^N\omega_{n,x_n}} dP,
\end{eqnarray*}
where $\beta>0$ is the inverse temperature. The normalisation
\begin{eqnarray}\label{partitionf}
Z_{N,\omega}=E\left[\exp\left(\beta\sum_{n=1}^N\omega_{n,x_n}\right)\right]
\end{eqnarray}
is the partition function.
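For small $N$ in $d=1$, $Z_{N,\omega}$ can be computed exactly by the standard transfer (dynamic programming) recursion over the walk's endpoint. The sketch below (our own illustration, with Gaussian disorder as an arbitrary choice) checks the recursion against brute-force enumeration of all $2^N$ paths.

```python
import numpy as np
from itertools import product
from collections import defaultdict

rng = np.random.default_rng(2)
beta, N = 0.5, 8
# disorder omega_{n,x} for n = 1..N and x in {-N,...,N} (d = 1)
omega = {(n, x): rng.normal() for n in range(1, N + 1) for x in range(-N, N + 1)}

def partition_dp(beta, N, omega):
    """Z_{N,omega} = E[exp(beta sum_n omega_{n,x_n})] via dynamic
    programming for the simple walk on Z (steps +-1, probability 1/2)."""
    W = {0: 1.0}                      # weight of paths ending at x
    for n in range(1, N + 1):
        Wn = defaultdict(float)
        for x, w in W.items():
            for step in (-1, 1):
                y = x + step
                Wn[y] += 0.5 * w * np.exp(beta * omega[(n, y)])
        W = dict(Wn)
    return sum(W.values())

def partition_bruteforce(beta, N, omega):
    Z = 0.0
    for steps in product((-1, 1), repeat=N):
        x, energy = 0, 0.0
        for n, s in enumerate(steps, start=1):
            x += s
            energy += omega[(n, x)]
        Z += 0.5 ** N * np.exp(beta * energy)
    return Z

Z_dp = partition_dp(beta, N, omega)
print(np.isclose(Z_dp, partition_bruteforce(beta, N, omega)))
```

The recursion costs $O(N^2)$ operations instead of $O(2^N)$, which is what makes numerical studies of $\log Z_{N,\omega}$ at large $N$ feasible.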
A central question for such polymers is how the fluctuations of the path are influenced by the presence of the disorder. Loosely speaking, consider the two exponents $\xi$ and $\chi$ given by
\begin{eqnarray*}
E_{N,\omega}[|x_N|^2]\sim N^{2\xi}, \quad \mathbb{V}\text{ar}\left(\log Z_{N,\omega}\right)\sim N^{2\chi}.
\end{eqnarray*}
It is believed that $\chi<1/2$ for all $\beta > 0$ and all $d$ (see \cite{KS}).
It is expected and partially confirmed for some related models (\cite{Wut1}, \cite{Chat1}) that the two exponents $\chi,\xi$ are related via
\begin{eqnarray}\label{hyperscaling}
\chi=2\xi-1.
\end{eqnarray}
So there is reason for interest in the fluctuations of $\log Z_{N,\omega}$, and in particular in establishing that these fluctuations are \emph{subgaussian}, that is, $o(N^{1/2})$, as compared to the gaussian scale $N^{1/2}$. It is the $o(\cdot)$ aspect that has not previously been proved: in \cite{Pi97} it is proved that in the point-to-point case (that is, with paths $(x_n)_{n\geq 1}$ restricted to end at a specific site at distance $N$ from the origin) one has variance which is $O(N)$ when the disorder has finite variance, and an exponential bound for $|\log Z_{N,\omega} - \mathbb{E}\log Z_{N,\omega}|$ on scale $N^{1/2}$ when the disorder has an exponential moment.
The zero-temperature case of the polymer model is effectively last passage percolation. More complete results exist in this case in dimension $1+1$, for specific distributions \cite{Joh}. There, based on exact computations related to combinatorics and random matrix theory, not only the scaling exponent $\chi$ for the directed last passage time was obtained, but also its limiting distribution after centering and scaling. A first step towards an extension of this type of result in the case of directed polymers in dimension $1+1$ for particular disorder is made in \cite{COSZ}; see also \cite{BC} for a step towards asymptotics. The best known result for undirected point-to-point last passage percolation is in \cite{BKS03}, stating that for $v\in \mathbb{Z}^d$, $d\geq 2$, one has $\mathbb{V}\text{ar}(\max_{\gamma:0\to v}\sum_{x\in\gamma}\omega_x)\leq C|v|/\log|v|$, when the disorder $\omega$ is Bernoulli. Some results on sublinear variance estimates for directed last passage percolation in $1+1$ dimensions with gaussian disorder were obtained in \cite{Chat2}, but the type of estimates there does not extend to higher dimensions, or to directed polymers at positive temperature. The assumption of gaussian disorder is also strongly used there. In \cite{G} estimates of the variance of directed last passage percolation are obtained via a coupling method, which appears difficult to extend to the case of polymers. In \cite{BR08} exponential concentration estimates on the scale $(|v|/\log |v|)^{1/2}$ were obtained for first passage percolation, for a large class of disorders.
The extension of these results to directed polymers is not straightforward. This can be seen, for example, from the fact that subgaussian fluctuations for a point-to-point directed polymer can naturally fail. Such failure occurs, for example, if one restricts the end point of a $(1+1)$-dimensional directed polymer to be $(N,N)$. Then \eqref{partitionf} reduces to a sum of i.i.d. variables whose fluctuations are therefore gaussian.
The first result of the present paper is to obtain exponential concentration estimates on the scale $(N/\log N)^{1/2}$. Specifically, for {\it nearly gamma} disorder distributions (see Definition \ref{nearly gamma}, a modification of the definition in \cite{BR08}) we prove the following; here and throughout the paper we use $K_i$ to denote constants which depend only on $\beta$ and $\nu$.
\begin{theorem} \label{expconc}
Suppose the disorder distribution $\nu$ is nearly gamma with $\int e^{4\beta|\omega|} \nu(d\omega)<\infty$. Then there exist $K_0, K_1$ such that
\begin{eqnarray*}
\mathbb{P}\left( \left| \log Z_{N,\omega}-\mathbb{E}\log Z_{N,\omega}\right|>t\sqrt{\frac{N}{\log N}}\right)\leq K_0e^{-K_1t},
\end{eqnarray*}
for all $N\geq 2$ and $t>0$.
\end{theorem}
The nearly gamma condition ensures that $\nu$ has some exponential moment (see Lemma \ref{neargmoment}), so for small $\beta$ the exponential moment hypothesis in Theorem \ref{expconc} is redundant.
The proof follows the rough outline of \cite{BR08}, and uses some results from there, which we summarize in Section \ref{review conc}.
We use Theorem \ref{expconc}, in combination with coarse graining techniques motivated by \cite{Al11}, to provide subgaussian estimates of the rate of convergence of $N^{-1}\mathbb{E}\log Z_{N,\omega}$ to the free energy.
Here the \emph{free energy} of the polymer (also called the \emph{pressure}) is defined as
\begin{equation} \label{aslimit}
p(\beta)= \lim_{N\to\infty} \frac{1}{N}\log Z_{N,\omega} \quad \mathbb{P}\text{-a.s.}
\end{equation}
The existence of the free energy is obtained by standard subadditivity arguments and concentration results \cite{CSY03}, which furthermore guarantee that
\begin{eqnarray}
p(\beta)&=& \lim_{N\to\infty} \frac{1}{N}\mathbb{E}\log Z_{N,\omega} \nonumber\\
&=&\sup_N \frac{1}{N}\mathbb{E}\log Z_{N,\omega}. \label{pbeta}
\end{eqnarray}
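For orientation only, the superadditivity behind the supremum in \eqref{pbeta} can be sketched as follows; this is a standard argument, with $Z^{(N,x)}_{M,\omega}$ the shifted partition function of \eqref{ZN} below and $\mu_{N,\omega}$ the Gibbs measure of the length-$N$ polymer started at the origin.

```latex
% Splitting a path of length N+M at time N and applying Jensen's
% inequality to the concave logarithm:
\begin{align*}
\log Z_{N+M,\omega}
 &= \log Z_{N,\omega}
  + \log \sum_{x} \mu_{N,\omega}\big(x_N=x\big)\, Z^{(N,x)}_{M,\omega}\\
 &\geq \log Z_{N,\omega}
  + \sum_{x} \mu_{N,\omega}\big(x_N=x\big)\, \log Z^{(N,x)}_{M,\omega}.
\end{align*}
% Taking expectations: \mu_{N,\omega}(x_N=x) depends only on the disorder
% up to time N, while each Z^{(N,x)}_{M,\omega} is independent of that
% disorder and distributed as Z_{M,\omega}, so
\begin{equation*}
\mathbb{E}\log Z_{N+M,\omega} \;\geq\;
\mathbb{E}\log Z_{N,\omega} + \mathbb{E}\log Z_{M,\omega},
\end{equation*}
% and Fekete's lemma yields the limit and the supremum in \eqref{pbeta}.
```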
Specifically, our second main result is as follows.
\begin{theorem}\label{ratemain}
Under the same assumptions as in Theorem \ref{expconc}, there exists $K_2$ such that for all $N\geq 3$,
\begin{equation} \label{rate2}
Np(\beta) \geq \mathbb{E}\log Z_{N,\omega} \geq Np(\beta) - K_2N^{1/2}\frac{\log \log N}{(\log N)^{1/2}}.
\end{equation}
\end{theorem}
Controlling the speed of convergence of the mean is useful when one considers deviations of $N^{-1}\log Z_{N,\omega}$ from its limit $p(\beta)$ instead of from its mean, analogously to \cite{Chat1}.
Regarding the organization of the paper, in Section \ref{review conc}
we review certain concentration inequalities and related results, mostly from \cite{BR08}, and give an extension of the definition from \cite{BR08} of a nearly gamma distribution so as to allow non-positive variables. In Section \ref{proofcon} we provide the proof of Theorem \ref{expconc}. In Section \ref{proofrates} we provide the proof of Theorem \ref{ratemain}. Finally, in Section \ref{prooflem} we provide the proof of a technical lemma used in Section \ref{proofrates}.
\section{Preliminary Results on Concentration and Nearly Gamma Distributions.}\label{review conc}
Let us first define the class of nearly gamma distributions. This class, introduced in \cite{BR08}, is quite wide; in particular it includes Gamma and normal variables. The definition given in \cite{BR08} required that the support contain no negative values. Here we extend the definition in order to accommodate such values as well.
\begin{definition}\label{nearly gamma}
Let $\nu$ be a probability measure on $\mathbb{R}$, absolutely continuous with respect to Lebesgue measure, with density $h$ and cumulative distribution function $H$. Let also $\Phi$ be the cumulative distribution function of the standard normal. The measure $\nu$ is said to be {\it nearly gamma} (with parameters $A,B$) if
\begin{itemize}
\item[(i)] The support $I$ of $\nu$ is an interval.\\
\item[(ii)] $h(\cdot)$ is continuous on $I$.\\
\item[(iii)] For every $y\in I$ we have
\begin{eqnarray} \label{psidef}
\psi(y) := \frac{\Phi'\circ\Phi^{-1} (H(y))}{h(y)}\leq \sqrt{B+A|y|},
\end{eqnarray}
where $A,B$ are nonnegative constants.
\end{itemize}
\end{definition}
The motivation for this definition (see \cite{BR08}) is that $H^{-1}\circ\Phi$ maps a gaussian variable to one with distribution $\nu$, and $\psi(y)$ is the derivative of this map, evaluated at the inverse image of $y$. With the bound on $\psi$ in (iii), the log Sobolev inequality satisfied by a gaussian distribution with respect to the differentiation operator translates into a useful log Sobolev inequality satisfied by the distribution $\nu$ with respect to the operator $\psi(y)d/dy$.
It was established in \cite{BR08} that a distribution is nearly gamma if (i), (ii) of Definition \ref{nearly gamma} are valid, and
(iii) is replaced by
\begin{itemize}
\item[(iv)]
if $ I=[\nu_-,\nu_+]$ with $ |\nu_{\pm}|<\infty$, then
\begin{eqnarray*}
\frac{h(x)}{|x-\nu_\pm|^{\alpha_{\pm}}}
\end{eqnarray*}
remains bounded away from zero and infinity for $x\sim \nu_{\pm}$, for some $\alpha_\pm>-1$.
\item[(v)]
If $\nu_+=+\infty$ then
\begin{eqnarray*}
\frac{\int_{x}^\infty h(t) dt}{h(x)}
\end{eqnarray*}
remains bounded away from zero and infinity, as $x\to+\infty$. The analogous statement is valid if $\nu_-=-\infty$.
\end{itemize}
The nearly gamma property ensures the existence of an exponential moment, as follows.
\begin{lemma}\label{neargmoment}
Suppose the distribution $\nu$ is nearly gamma with parameters $A,B$. Then $\int e^{tx}\ \nu(dx) < \infty$ for all $t<2/A$.
\end{lemma}
\begin{proof}
Let $T=H^{-1}\circ\Phi$, so that $T(\xi)$ has distribution $\nu$ for standard normal $\xi$; then \eqref{psidef} is equivalent to
\[
T'(x) \leq \sqrt{B+A|T(x)|} \quad \text{for all } x \in \mathbb{R}.
\]
Considering $T(x)\geq 0$ and $T(x)<0$ separately, it follows readily from this that
\[
\left| \frac{d}{dx} \sqrt{B+A|T(x)|} \right| \leq \frac{A}{2} \quad\text{for all $x$ with } T(x)\neq 0,
\]
so for some constant $C$ we have $\sqrt{B+A|T(x)|} \leq C+A|x|/2$, or
\[
|T(x)| \leq \frac{C^2-B}{A} + C|x| + \frac{A}{4}x^2,
\]
and the lemma follows, since for $t<2/A$ we have $tA/4<\tfrac{1}{2}$ and hence $\mathbb{E}\,e^{t|T(\xi)|}<\infty$ for a standard normal $\xi$.
\end{proof}
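As a numerical sanity check (illustrative only, not part of the argument), one can evaluate $\psi$ from \eqref{psidef} directly: for the standard normal, $H=\Phi$ and $h=\Phi'$, so $\psi\equiv 1$ and (iii) holds with $A=0$, $B=1$; for the exponential distribution, $\psi(y)$ grows like $\sqrt{2y}$ as $y\to\infty$, consistent with (iii). A minimal sketch using only the Python standard library; the generous constants $A=B=4$ for the exponential case are our choice here, not taken from \cite{BR08}.

```python
# Illustrative numerical check of the nearly gamma bound (iii).
# NormalDist supplies Phi (cdf), Phi' (pdf) and Phi^{-1} (inv_cdf).
from math import exp, sqrt
from statistics import NormalDist

ND = NormalDist()  # standard normal

def psi(H, h, y):
    """psi(y) = Phi'(Phi^{-1}(H(y))) / h(y), as in the definition."""
    return ND.pdf(ND.inv_cdf(H(y))) / h(y)

# Standard normal disorder: H = Phi, h = Phi', so psi is identically 1
# and (iii) holds with A = 0, B = 1.
for y in (-2.0, -0.5, 0.0, 0.5, 2.0):
    assert abs(psi(ND.cdf, ND.pdf, y) - 1.0) < 1e-6

# Exponential(1) disorder: psi(y) grows like sqrt(2y), so (iii) holds
# with generously chosen constants, e.g. A = B = 4.
H_exp = lambda y: 1.0 - exp(-y)
h_exp = lambda y: exp(-y)
for k in range(1, 2000):          # grid on (0, 20)
    y = 0.01 * k
    assert psi(H_exp, h_exp, y) <= sqrt(4.0 + 4.0 * y)
```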
For $\omega \in \mathbb{R}^{\mathbb{Z}^{d+1}}$ and $(m,y) \in \mathbb{Z}^{d+1}$ we define $\hat{\omega}^{(m,y)} \in \mathbb{R}^{\mathbb{Z}^{d+1}\backslash \{(m,y)\}}$ by the relation $\omega = (\hat{\omega}^{(m,y)},\omega_{m,y})$. In other words, $\hat{\omega}^{(m,y)}$ is $\omega$ with the coordinate $\omega_{m,y}$ removed. Given a function $F$ on $\mathbb{R}^{\mathbb{Z}^{d+1}}$ and a configuration $\omega$, the average sensitivity of $F$ to changes in the $(m,y)$ coordinate is given by
\[
Y^{(m,y)}(\omega) := \int \left| F(\hat{\omega}^{(m,y)},\tilde{\omega}_{m,y}) - F(\omega) \right|\ d\mathbb{P}(\tilde{\omega}_{m,y}).
\]
We define
\[
Y_N(\omega) := \sum_{(m,y) \in \{1,\dots,N\}\times\mathbb{Z}^d} Y^{(m,y)}(\omega),
\]
\[
\rho_N := \sup_{(m,y) \in \{1,\dots,N\}\times\mathbb{Z}^d} \sqrt{ \mathbb{E}\left[ (Y^{(m,y)})^2 \right]},
\]
\[
\sigma_N := \sqrt{ \mathbb{E}\left( Y_N^2\right) }.
\]
We use the same notation (a mild abuse) when $F$ depends on only a subset of the coordinates.
We are now ready to state the theorem of Benaim and Rossignol \cite{BR08}, specialized to the operator $\psi(s)d/ds$ applied to functions $e^{\frac{\theta}{2}F(\hat{\omega}^{(m,y)},\cdot)}$.
\begin{theorem}\label{main concentration}
Let $F\in L^2(\nu^{\{1,\dots,N\}\times\mathbb{Z}^d})$ and let $\rho_N,\sigma_N$ be as above. Suppose that there exists $K>e\rho_N\sigma_N$ such that
\begin{eqnarray} \label{assumption Poincare}
\sum_{(m,y) \in \{1,\dots,N\}\times\mathbb{Z}^d} \mathbb{E} \left[ \left( \psi(\omega_{m,y}) \frac{\partial}{\partial\omega_{m,y}}
e^{\frac{\theta}{2}F}\right)^2\right]
\leq
K\theta^2 \mathbb{E}\left[e^{\theta F}\right]
\end{eqnarray}
for all $|\theta|<\frac{1}{2\sqrt{l(K)}}$ where
\begin{eqnarray*}
l(K)=\frac{K}{\log \frac{K}{\rho_N\sigma_N \log \frac{K}{\rho_N\sigma_N } }}.
\end{eqnarray*}
Then for every $t>0$ we have that
\begin{eqnarray*}
\mathbb{P}\left( |F-\mathbb{E}[F] |\geq t\sqrt{l(K)} \right) \leq 8e^{-t}.
\end{eqnarray*}
\end{theorem}
Observe that if $K$ is of order $N$, then a bound on $\rho_N\sigma_N$ of order $N^\alpha$ with $\alpha<1$ is sufficient to ensure that $l(K)$ is of order $N/\log N$. In particular it is sufficient to have $\sigma_N$ of order $N$ and $\rho_N$ of order $N^{-\tau}$ with $\tau>0$, which is what we will use below.
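To make this order-of-magnitude remark concrete, here is a sketch under the assumptions that will hold below, namely $K=cN$, $\sigma_N\leq CN$ and $\rho_N\leq CN^{-\tau}$ for some $\tau>0$, so that $\rho_N\sigma_N\leq C^2N^{1-\tau}$.

```latex
% Writing u := K/(\rho_N\sigma_N) \ge c' N^{\tau}, note that
% \log(u/\log u) = \log u - \log\log u \ge \frac{1}{2}\log u
% (since \log x \le x/2 for all x>0, applied with x = \log u), so
\begin{equation*}
l(K) \;=\; \frac{K}{\log\dfrac{u}{\log u}}
 \;\leq\; \frac{cN}{\tfrac12\log u}
 \;\leq\; \frac{2c}{\tau}\cdot\frac{N}{\log N}\,\big(1+o(1)\big),
\end{equation*}
% which is the claimed order N/\log N.
```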
\section{Concentration for the Directed Polymer.}\label{proofcon}
In this section we will establish the first main result of the paper, Theorem \ref{expconc}. We assume throughout that the distribution $\nu$ of the disorder is nearly gamma with parameters $A,B$. Finally, we denote $\mathbb{P}=\nu^{\mathbb{Z}^{d+1}}$. We write $\mu(f)$ for the integral of a function $f$ with respect to a measure $\mu$.
Let $(n,x)\in \mathbb{N}\times \mathbb{Z}^d$. We denote the partition function of the directed polymer of length $N$ in the shifted environment $\omega_{n+\cdot,x+\cdot}$ by
\begin{eqnarray}\label{ZN}
Z_{N,\omega}^{(n,x)}:=E\left[ e^{\beta\sum_{i=1}^N\omega_{n+i,x+x_i}} \right],
\end{eqnarray}
and let $\mu_{N,\omega}^{(n,x)}$ be the corresponding Gibbs measure. For $I\subset \mathbb{N}\times \mathbb{Z}^d$ we define
\begin{eqnarray*}
\overline{F}_{N,\omega}^I:=\frac{1}{| I |}\sum_{(n,x)\in I} \log Z_{N,\omega}^{(n,x)}.
\end{eqnarray*}
Define the set of paths from the origin
\[
\Gamma_N = \{ \{(i,x_i)\}_{0\leq i\leq N}: x_0=0,\ |x_i-x_{i-1}|_1=1 \text{ for } 1\leq i\leq N\};
\]
we write $\gamma_N=\{(i,x_i)\colon i=0,\dots,N\}$ for a generic or random polymer path in $\Gamma_N$. Let
\begin{eqnarray}\label{absmax}
\mathcal{M}_{N,\omega}=\max_{\gamma_N} \sum_{(m,y)\in\gamma_N}|\omega_{m,y}|,
\end{eqnarray}
and let $\mathcal{M}^{(n,x)}_{N,\omega}$ denote the same quantity for the shifted disorder, analogously to \eqref{ZN}.
\begin{proposition} \label{Poinc}
There exists $\theta_0(\beta,\nu)$ such that for all $|\theta|<\theta_0$ and $|I|\leq (2d)^N$, the function $\overline{F}_{N,\omega}^I$ satisfies the following Poincar\'e type inequality:
\begin{eqnarray*}
\sum_{(m,y)\in \mathbb{N}\times\mathbb{Z}^d} \mathbb{E}\left[\left( \psi(\omega_{m,y})\frac{\partial }{\partial \omega_{m,y}} e^{\frac{\theta}{2} \overline{F}_{N,\omega}^I } \right)^2\right]
\leq
C_{AB} \theta^2\beta^2 N\,\,\mathbb{E}\left[ e^{\theta\overline{F}_{N,\omega}^I} \right],
\end{eqnarray*}
where $C_{AB}$ is a constant depending on the nearly gamma parameters $A,B$.
\end{proposition}
\begin{proof}
By the definition of nearly gamma we have that
\begin{align} \label{ABsub}
\sum_{(m,y)\in \mathbb{N}\times\mathbb{Z}^d} &\mathbb{E}\left[\left( \psi(\omega_{m,y})\frac{\partial }{\partial \omega_{m,y}} e^{\frac{\theta}{2} \overline{F}_{N,\omega}^I } \right)^2\right] \notag \\
&\leq
\sum_{(m,y)\in \mathbb{N}\times\mathbb{Z}^d} \left(
B\,\mathbb{E}\left[ \left(\frac{\partial}{\partial\omega_{m,y}} e^{\frac{\theta}{2} \overline{F}_{N,\omega}^I } \right)^2 \right] +
A\, \mathbb{E}\left[ |\omega_{m,y}| \left(\frac{\partial}{\partial\omega_{m,y}} e^{\frac{\theta}{2} \overline{F}_{N,\omega}^I } \right)^2 \right] \right).
\end{align}
Regarding the first term on the right side of \eqref{ABsub}, we have
\begin{eqnarray*}
\frac{\partial \overline{F}^I_{N,\omega}}{\partial \omega_{m,y}}=
\frac{\beta}{| I |}\sum_{(n,x)\in I} \mu_{N,\omega}^{n,x}(1_{(m-n,y-x)\in \gamma_N})
\end{eqnarray*}
and
\begin{align}\label{Poincare estimate}
\sum_{(m,y)\in \mathbb{N}\times\mathbb{Z}^d} &\mathbb{E}\left[\left( \frac{\partial}{\partial\omega_{m,y}}e^{\frac{\theta}{2}\overline{F}_{N,\omega}^I} \right)^2\right]\nonumber\\
&=\frac{1}{4}\theta^2
\sum_{(m,y)\in \mathbb{N}\times\mathbb{Z}^d} \mathbb{E}\left[\left( \frac{\partial \overline{F}_{N,\omega}^I }{\partial\omega_{m,y}}
\right)^2
e^{\theta\overline{F}_{N,\omega}^I}
\right]\nonumber\\
&= \frac{1}{4}\theta^2 \beta^2 \sum_{(m,y)\in \mathbb{N}\times\mathbb{Z}^d} \mathbb{E}\left[\left(
\frac{1}{| I |}\sum_{(n,x)\in I} \mu_{N,\omega}^{n,x}(1_{(m-n,y-x)\in \gamma_N})
\right)^2
e^{\theta\overline{F}_{N,\omega}^I}
\right]\nonumber\\
&\leq \frac{1}{4}\theta^2\beta^2 \sum_{(m,y)\in \mathbb{N}\times\mathbb{Z}^d} \mathbb{E}\left[
\frac{1}{| I |}\sum_{(n,x)\in I} \mu_{N,\omega}^{n,x}(1_{(m-n,y-x)\in \gamma_N})
\,\,e^{\theta\overline{F}_{N,\omega}^I}
\right]\\
&= \frac{1}{4}\theta^2\beta^2 N\,\, \mathbb{E}\left[
e^{\theta\overline{F}_{N,\omega}^I}
\right] ,\nonumber
\end{align}
where the last equality is achieved by performing first the summation over $(m,y)$ and using that the range of the path consists of $N$ sites after the starting site. Regarding the second term on the right side of \eqref{ABsub}, we define $\mathcal{M}^{I}_{N,\omega}=\max_{(n,x)\in I} \mathcal{M}^{(n,x)}_{N,\omega}$ for a set $I\subset \mathbb{N}\times \mathbb{Z}^{d}$. We then have
$-\beta \mathcal{M}^{I}_{N,\omega} \leq \overline{F}_{N,\omega}^I \leq \beta \mathcal{M}^{I}_{N,\omega}$, so, following steps similar to those in \eqref{Poincare estimate}, we have
\begin{eqnarray}\label{termtwo}
\sum_{(m,y)\in \mathbb{N}\times\mathbb{Z}^d}
&& \mathbb{E}\left[|\omega_{m,y}| \left(\frac{\partial}{\partial\omega_{m,y}} e^{\frac{\theta}{2} \overline{F}_{N,\omega}^I } \right)^2 \right]\nonumber\\
&\leq&
\frac{1}{4}\theta^2\beta^2 \sum_{(m,y)\in \mathbb{N}\times\mathbb{Z}^d} \mathbb{E}\left[
\frac{1}{| I |}\sum_{(n,x)\in I} \mu_{N,\omega}^{n,x}
(|\omega_{m,y}|1_{(m-n,y-x)\in \gamma_N})
\,\,e^{\theta\overline{F}_{N,\omega}^I}
\right] \nonumber\\
&\leq& \frac{1}{4}\theta^2\beta^2
\mathbb{E}\left[ \mathcal{M}^{I}_{N,\omega} e^{\theta \overline{F}_{N,\omega}^I}
\right] \nonumber \\
&\leq& \frac{1}{4}\theta^2\beta^2 \Big( bN
\mathbb{E}\left[ e^{\theta \overline{F}_{N,\omega}^I}
\right] +
\mathbb{E}\left[ \mathcal{M}^{I}_{N,\omega} e^{|\theta| \beta \mathcal{M}^{I}_{N,\omega}}
; \mathcal{M}^{I}_{N,\omega} > bN \right]
\Big),
\end{eqnarray}
where $b$ is a constant to be specified. We would like to show that the second term on the right side of \eqref{termtwo} is smaller than the first one. First, in the case that $\theta>0$, since the disorder has mean zero, bounding $Z_{N,\omega}^{(n,x)}$ below by the contribution from any one path and then applying Jensen's inequality to the expectation $\mathbb{E}[\cdot]$ we obtain
\begin{eqnarray}\label{viaJensen}
\mathbb{E}\left[ e^{\theta \overline{F}_{N,\omega}^I}\right] \geq e^{-\theta N \log(2d)},
\end{eqnarray}
while in the case that $\theta<0$, applying Jensen's inequality to the average over $I$ gives
\[
\mathbb{E}\left[ e^{\overline{F}_{N,\omega}^I}\right] \leq \mathbb{E}\left[ Z_{N,\omega} \right] = e^{\lambda(\beta)N},
\]
with $\lambda(\beta)$ the log-moment generating function of $\omega$, and hence, taking the $\theta$ power and then applying Jensen's inequality to $\mathbb{E}[\cdot]$,
\begin{eqnarray}\label{viaJensen2}
\mathbb{E}\left[ e^{\theta \overline{F}_{N,\omega}^I}\right] \geq e^{\theta N \lambda(\beta)}.
\end{eqnarray}
Moreover, for $b>0$ we have
\begin{eqnarray}\label{bterm}
&&\mathbb{E}\left[ \mathcal{M}^{I}_{N,\omega} e^{|\theta| \beta \mathcal{M}^{I}_{N,\omega}}
; \mathcal{M}^{I}_{N,\omega} > bN \right] \nonumber \\
&=& bN e^{|\theta| \beta bN} \mathbb{P}(\mathcal{M}^{I}_{N,\omega}>bN)
+ N\int_b^\infty (1+|\theta| \beta uN) e^{|\theta| \beta uN} \mathbb{P}(\mathcal{M}^{I}_{N,\omega}>uN) du.
\end{eqnarray}
Denoting by $\mathcal{J}(\cdot)$ the large deviation rate function related to $|\omega|$ we have that \eqref{bterm} is bounded by
\begin{eqnarray}\label{bbterm}
bN(2d)^N |I| e^{(|\theta|\beta b-\mathcal{J}(b))N} + N(2d)^N |I| \int_b^\infty (1+|\theta|\beta u N) e^{(|\theta|\beta u-\mathcal{J}(u))N} du.
\end{eqnarray}
Let $0<L< \lim_{x\to \infty} \mathcal{J}(x)/x$ (which exists since $\mathcal{J}(x)/x$ is nondecreasing for $x>\mathbb{E}|\omega|$) and choose $b$ large enough so $\mathcal{J}(b)/b>L$. Then provided $|\theta|$ is small enough (depending on $\beta, \nu$) and $b$ is large enough (depending on $\nu$), \eqref{bbterm} is bounded above by
\begin{eqnarray*}
&&bN(2d)^{2N} e^{(|\theta|\beta-L)bN}+ N(2d)^{2N}\int_b^\infty (1+|\theta| \beta uN) e^{(|\theta|\beta-L)uN}du\\
&\leq& bN (2d)^{2N} e^{-\frac{L}{2}bN} +N (2d)^{2N}\int_b^\infty (1+|\theta| \beta uN) e^{-\frac{L}{2}uN}du\\
&\leq&e^{-LbN/4}\\
&\leq& \mathbb{E}\left[ e^{\theta \overline{F}^I_{N,\omega}}\right],
\end{eqnarray*}
where the last inequality uses \eqref{viaJensen} and \eqref{viaJensen2}. This combined with \eqref{termtwo} and \eqref{Poincare estimate} completes the proof.
\end{proof}
The averaging over sets $I$ used in the preceding proof is related to the auxiliary randomness used in the main proof in \cite{BKS03}.
Define the point-to-point partition function
\[
Z_{N,\omega}(z)=E\left[\exp\left(\beta\sum_{n=1}^N\omega_{n,x_n}\right)1_{x_N=z}\right]
\]
and let $\mu_{N,\omega,z}$ be the corresponding Gibbs measure.
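Purely as an illustration (nothing below is used in the proofs), in dimension $1+1$ the point-to-point partition functions satisfy the transfer-matrix recursion $Z_n(z)=\tfrac12 e^{\beta\omega_{n,z}}\big(Z_{n-1}(z-1)+Z_{n-1}(z+1)\big)$, assuming that $E$ in \eqref{partitionf} denotes expectation over simple random walk started at the origin, so that each nearest-neighbour step carries weight $1/2$. A minimal sketch, checked against brute-force path enumeration:

```python
# Illustrative dynamic-programming computation of the point-to-point
# partition functions Z_N(z) in dimension 1+1 (simple random walk paths,
# each step weighted 1/2), checked against brute-force enumeration.
import itertools
import math
import random

def ptp_partition(omega, beta, N):
    """Return {z: Z_N(z)} via Z_n(z) = (1/2) e^{beta*w(n,z)} (Z_{n-1}(z-1) + Z_{n-1}(z+1))."""
    Z = {0: 1.0}  # Z_0(0) = 1: the walk starts at the origin
    for n in range(1, N + 1):
        Z = {z: 0.5 * math.exp(beta * omega[(n, z)])
                * (Z.get(z - 1, 0.0) + Z.get(z + 1, 0.0))
             for z in {zp + s for zp in Z for s in (-1, 1)}}
    return Z

def brute_force(omega, beta, N):
    """Z_N = E[exp(beta * sum omega_{n,x_n})], summing over all 2^N paths."""
    total = 0.0
    for steps in itertools.product((-1, 1), repeat=N):
        x, energy = 0, 0.0
        for n, s in enumerate(steps, start=1):
            x += s
            energy += omega[(n, x)]
        total += 2.0 ** (-N) * math.exp(beta * energy)
    return total

random.seed(0)
N, beta = 8, 0.7
omega = {(n, z): random.gauss(0.0, 1.0)
         for n in range(1, N + 1) for z in range(-N, N + 1)}
Z = ptp_partition(omega, beta, N)
Z_free = brute_force(omega, beta, N)
# summing the point-to-point functions over endpoints recovers Z_N
assert abs(sum(Z.values()) - Z_free) < 1e-9 * max(1.0, Z_free)
```

The same recursion, run for moderate $N$ over many disorder samples, is a convenient way to visualize the fluctuations of $\log Z_{N,\omega}$ discussed above.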
With $I$ fixed, we define
\begin{eqnarray*}
W_{N,\omega}^{(m,y)}:=\int\left|\overline{F}_{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}^I- \overline{F}_{N,\omega}^I \right|d\mathbb{P}(\tilde\omega_{m,y}),
\end{eqnarray*}
\[
L_{N,\omega}^{(m,y)}(z):=\int\left| \log Z_{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}(z) - \log Z_{N,\omega}(z) \right|
d\mathbb{P}(\tilde\omega_{m,y}),
\]
\begin{eqnarray*}
W_{N,\omega}:= \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} W_{N,\omega}^{(m,y)}, \qquad
L_{N,\omega}(z):= \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} L_{N,\omega}^{(m,y)}(z),
\end{eqnarray*}
\begin{eqnarray*}
W_{N,\omega,\pm}^{(m,y)}:=\int\left(\overline{F}_{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}^I- \overline{F}_{N,\omega}^I \right)_\pm d\mathbb{P}(\tilde\omega_{m,y}),
\end{eqnarray*}
\[
L_{N,\omega,\pm}^{(m,y)}(z):=\int\left( \log Z_{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}(z)
- \log Z_{N,\omega}(z) \right)_\pm d\mathbb{P}(\tilde\omega_{m,y}),
\]
and
\begin{eqnarray*}
W_{N,\omega,\pm}:= \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} W_{N,\omega,\pm}^{(m,y)}, \qquad
L_{N,\omega,\pm}(z):= \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} L_{N,\omega,\pm}^{(m,y)}(z).
\end{eqnarray*}
We finally define
\begin{eqnarray*}
r_N:=\sup_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \sqrt{\mathbb{E}\left[\big(W_{N,\omega}^{(m,y)} \big)^2\right] }, \qquad
\hat{r}_N(z) := \sup_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \sqrt{\mathbb{E}\left[\big(L_{N,\omega}^{(m,y)}(z) \big)^2\right] },
\end{eqnarray*}
\begin{eqnarray*}
s_N:=\sqrt{\mathbb{E}\left[\big(W_{N,\omega} \big)^2\right] }, \qquad
\hat{s}_N(z):=\sqrt{\mathbb{E}\left[\big(L_{N,\omega}(z) \big)^2\right] },
\end{eqnarray*}
\begin{eqnarray*}
r_N^{\pm}:=\sup_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \sqrt{\mathbb{E}\left[\big(W_{N,\omega,\pm}^{(m,y)} \big)^2\right] },
\qquad \hat{r}_N^{\pm}(z):=\sup_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d}
\sqrt{\mathbb{E}\left[\big(L_{N,\omega,\pm}^{(m,y)}(z) \big)^2\right] }
\end{eqnarray*}
and
\begin{eqnarray*}
s_N^\pm:=\sqrt{\mathbb{E}\left[\big(W_{N,\omega,\pm} \big)^2\right] }, \qquad
\hat{s}_N^\pm(z):=\sqrt{\mathbb{E}\left[\big(L_{N,\omega,\pm}(z) \big)^2\right] }.
\end{eqnarray*}
It is clear that $r_N\leq r_N^++r_N^-$ and $s_N\leq s_N^++s_N^-$.
We make use of two choices of the set $I$ of sites: let $0<\alpha<1/2$ and
\[
I_\pm^\alpha:=\{(n,x)\in \mathbb{Z}\times \mathbb{Z}^d\colon n=\pm\lfloor N^{\alpha}\rfloor,\ |x|_\infty<N^\alpha\}.
\]
\begin{proposition}\label{rNsN}
For $\alpha<1/2$ and $I = I_\pm^\alpha$, there exists $K_3$ such that the following estimates hold true:
\[
r_N^\pm \leq \frac{1}{|I_+^\alpha|^{1/4}} K_3, \qquad
\hat{r}_N^\pm(z) \leq K_3,
\]
\[
s_N^\pm \leq K_3 N, \qquad \hat{s}_N^\pm(z) \leq K_3 N.
\]
\end{proposition}
\begin{proof}
We first consider $r_N^\pm$ and $s_N^\pm$. Observe that
\begin{eqnarray} \label{posnegpart}
\left(\overline{F}_{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}^{I_+^\alpha}- \overline{F}_{N,\omega}^{I_+^\alpha} \right)_\pm
&=& \left( \frac{1}{| I_+^\alpha |}\sum_{(n,x)\in I_+^\alpha} \big(
\log Z _{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}^{(n,x)}- \log Z_{N,\omega}^{(n,x)}
\big)
\right)_\pm\nonumber\\
&\leq& \frac{1}{| I_+^\alpha |}\sum_{(n,x)\in I_+^\alpha} \big(
\log Z _{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}^{(n,x)}- \log Z_{N,\omega}^{(n,x)}
\big)_\pm.
\end{eqnarray}
The difference on the right side can be written as
\begin{eqnarray} \label{logdiff2}
\log Z _{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}^{(n,x)}- \log Z_{N,\omega}^{(n,x)}
&=&
\log \frac{1}{Z_{N,\omega}^{(n,x)}}
E\left[ e^{\beta\sum_{i=1}^N\omega_{n+i,x+x_i}} e^{\beta\big(\tilde\omega_{m,y}-\omega_{m,y}\big)1_{x+x_{m-n}=y}}
\right]\nonumber\\
&=& \log \mu_{N,\omega}^{(n,x)}\left( e^{\beta\big(\tilde\omega_{m,y}-\omega_{m,y}\big)1_{x+x_{m-n}=y}}
\right)\nonumber\\
&=& \log\left( 1+ \mu_{N,\omega}^{(n,x)}\left( e^{\beta\big(\tilde\omega_{m,y}-\omega_{m,y}\big)1_{x+x_{m-n}=y}}-1\right)
\right)\\
&\leq& \log\left( 1+ e^{\beta\big(\tilde\omega_{m,y}-\omega_{m,y}\big)} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y}
\right) \right) \notag \\
&\leq& e^{\beta\big(\tilde\omega_{m,y}-\omega_{m,y}\big)} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y} \right),\nonumber
\end{eqnarray}
so
\begin{align}\label{plus}
W_{N,\omega,+}^{(m,y)} &= \int \left(\overline{F}_{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}^{I_+^\alpha}- \overline{F}_{N,\omega}^{I_+^\alpha} \right)_+\,d\mathbb{P}(\tilde\omega_{m,y})\notag\\
&\leq\frac{1}{| I_+^\alpha |}\sum_{(n,x)\in I_+^\alpha} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y}
\right) \int_{\tilde\omega_{m,y}\geq\omega_{m,y}}
e^{\beta\big(\tilde\omega_{m,y}-\omega_{m,y}\big)} \,d\mathbb{P}(\tilde\omega_{m,y}).
\end{align}
To bound $r_N^+$, we have using \eqref{plus}:
\begin{align} \label{rNplus}
\mathbb{E}\left[\big(W_{N,\omega,+}^{(m,y)} \big)^2\right] &\leq
\mathbb{E}\left[ \left( \frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y}
\right) \int_{\tilde\omega_{m,y}\geq\omega_{m,y}}
e^{\beta\big(\tilde\omega_{m,y}-\omega_{m,y}\big)} \,d\mathbb{P}(\tilde\omega_{m,y}) \right)^2\right]
\nonumber \\
&\leq
\mathbb{E}\left[ \left( \frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y}
\right)\right)^4\right]^{1/2}\nonumber\\
&\qquad\times
\mathbb{E}\left[ \left(
\int_{\tilde\omega_{m,y}\geq\omega_{m,y}}
e^{\beta\big(\tilde\omega_{m,y}-\omega_{m,y}\big)} \,d\mathbb{P}(\tilde\omega_{m,y}) \right)^4\right]^{1/2} \nonumber\\
&\leq
\mathbb{E}\left[ \frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y}
\right)\right]^{1/2} e^{\frac{1}{2}(\lambda(-4\beta)+4\lambda(\beta))} \nonumber\\
&=
\mathbb{E}\left[ \frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha} \mu_{N,\omega}\left( 1_{x+x_{m-n}=y}
\right)\right]^{1/2} e^{\frac{1}{2}(\lambda(-4\beta)+4\lambda(\beta))} \\
&\leq
\frac{1}{| I_+^\alpha |^{1/2}} e^{\frac{1}{2}(\lambda(-4\beta)+4\lambda(\beta))}, \nonumber
\end{align}
where in the equality we used the homogeneity of the environment and in the last inequality we used the fact that the directed path has at most one contact point with the set $I_+^\alpha$ and, therefore,
$\sum_{(n,x)\in I_+^\alpha} 1_{x+x_{m-n}=y} \leq 1$. Hence
\begin{eqnarray*}
r_N^+\leq \frac{1}{| I_+^\alpha |^{1/4}} e^{\frac{1}{4}(\lambda(-4\beta)+4\lambda(\beta))}.
\end{eqnarray*}
The estimate on $s_N^+$ follows along the same lines. Specifically, we have using \eqref{plus} that
\begin{align} \label{summoment}
& \mathbb{E}\left[\left(W_{N,\omega,+}\right)^2\right] = \mathbb{E}\left[\left( \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} W_{N,\omega,+}^{(m,y)} \right)^2\right]\nonumber\\
&\leq
\mathbb{E}\left[ \left( \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y}
\right) \int_{\tilde\omega_{m,y}\geq\omega_{m,y}}
e^{\beta\big(\tilde\omega_{m,y}-\omega_{m,y}\big)} \,d\mathbb{P}(\tilde\omega_{m,y}) \right)^2\right]\nonumber\\
&\leq
e^{2\lambda(\beta)} \mathbb{E}\left[ \left( \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y}
\right) e^{-\beta\omega_{m,y}}
\right)^2\right]\nonumber\\
&\leq
e^{2\lambda(\beta)} \mathbb{E}\left[ \frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha}
\left( \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y} \right) e^{-\beta\omega_{m,y}}
\right)^2\right]\nonumber\\
&\leq
e^{2\lambda(\beta)} \mathbb{E}\left[ \frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha}
\left( \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y} \right)
\right)
\left( \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y} \right) e^{-2\beta\omega_{m,y}}
\right)\right]\nonumber\\
&=
N\,e^{2\lambda(\beta)} \,
\frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha}
\sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \mathbb{E}\left[ \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y} \right) e^{-2\beta\omega_{m,y}} \right] \nonumber\\
&\leq
N\,e^{2\lambda(\beta)} \,
\frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha}
\sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \mathbb{E}\left[ \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y} \right) \right] \mathbb{E}\left[ e^{-2\beta\omega_{m,y}}\right] \\
&= N^2\,e^{\lambda(-2\beta)+2\lambda(\beta)}\nonumber
\end{align}
where
in the equalities we used the fact that
\begin{equation}\label{rangeN}
\sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d}\mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y} \right) =N,
\end{equation}
and in the last inequality we used the fact that $\mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y}\right)$ and $e^{-\beta\omega_{m,y}}$ are negatively correlated; indeed, with the other coordinates fixed, the former is nondecreasing and the latter decreasing in $\omega_{m,y}$, so Chebyshev's correlation inequality applied to the integral over $\omega_{m,y}$ gives the claim.
It follows from \eqref{summoment} that
\begin{eqnarray*}
s_N^+\leq N e^{\frac{1}{2}(\lambda(-2\beta)+2\lambda(\beta))}.
\end{eqnarray*}
We now need to show how these estimates extend to $r_N^-,s_N^-$. Using \eqref{posnegpart} and the second equality in \eqref{logdiff2},
\begin{eqnarray} \label{Fminus}
&&\left(\overline{F}_{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}^{I_+^\alpha}
- \overline{F}_{N,\omega}^{I_+^\alpha} \right)_- \notag\\
&\leq&
-\frac{1}{| I_+^\alpha|}\sum_{(n,x)\in I_+^\alpha}\log \mu_{N,\omega}^{(n,x)}\left( e^{\beta(\tilde\omega_{m,y}-\omega_{m,y})1_{x+x_{m-n}=y}} \right) 1_{\tilde\omega_{m,y}<\omega_{m,y}}.
\end{eqnarray}
By Jensen's inequality this is bounded by
\begin{eqnarray} \label{jensen}
&& \frac{1}{| I_+^\alpha|}
\sum_{(n,x)\in I_+^\alpha} \mu_{N,\omega}^{(n,x)}
\left( 1_{x+x_{m-n}=y} \right)
\beta(\omega_{m,y}-\tilde\omega_{m,y} )1_{\tilde\omega_{m,y}<\omega_{m,y}}.
\end{eqnarray}
It follows that
\[
\mathbb{E}\left[\big(W_{N,\omega,-}^{(m,y)} \big)^2\right] \leq
\mathbb{E}\left[ \left( \frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y}
\right) \int_{\tilde\omega_{m,y}<\omega_{m,y}}
\beta\big(\omega_{m,y}-\tilde\omega_{m,y}\big) \,d\mathbb{P}(\tilde\omega_{m,y}) \right)^2\right].
\]
From this we can proceed analogously to \eqref{rNplus} and obtain
\[
r_N^- \leq \frac{\beta}{| I_+^\alpha |^{1/4}} \left( \mathbb{E}[\omega_{m,y}^4] +\mathbb{E}[|\omega_{m,y}|]^4 \right)^{1/4}.
\]
To bound $s_N^-$ we first observe that
\begin{equation} \label{omegas}
\int_{\tilde\omega_{m,y}\leq\omega_{m,y}}
\big(\omega_{m,y}-\tilde\omega_{m,y}\big) \,d\mathbb{P}(\tilde\omega_{m,y}) \leq (\omega_{m,y})_+ + \mathbb{E}[(\omega_{0,0})_-].
\end{equation}
Using \eqref{posnegpart}, \eqref{rangeN}, \eqref{omegas} and the three equalities in \eqref{logdiff2}, it follows that
\begin{align} \label{summinus}
\mathbb{E}[(W_{N,\omega,-})^2] &= \mathbb{E}\left[\left(\sum_{(m,y)\in \mathbb{N}\times\mathbb{Z}^d} W_{N,\omega,-}^{(m,y)}\right)^2\right]\notag\\
&\leq
\mathbb{E}\left[ \left( \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \frac{1}{| I_+^\alpha |}
\sum_{(n,x)\in I_+^\alpha} \mu_{N,\omega}^{(n,x)}\left( 1_{x+x_{m-n}=y}
\right) \beta ( (\omega_{m,y})_+ + \mathbb{E}[(\omega_{0,0})_-] ) \right)^2\right]\notag\\
&\leq 2\beta^2 N^2 ( \mathbb{E}[(\omega_{0,0})_-] )^2 + 2\beta^2 \mathbb{E}\left[ \left(
\frac{1}{| I_+^\alpha |} \sum_{(n,x)\in I_+^\alpha} \mathcal{M}_{N,\omega}^{(n,x)}\right)^2 \right] \notag\\
&\leq 2\beta^2 N^2 ( \mathbb{E}[(\omega_{0,0})_-] )^2 + 2\beta^2 \mathbb{E}\left[( \mathcal{M}_{N,\omega})^2 \right],
\end{align}
where
$\mathcal{M}_{N,\omega}$ is from \eqref{absmax}.
A similar computation to the one following \eqref{termtwo} shows that for $L,b$ as chosen after \eqref{bbterm}, with $b$ sufficiently large (depending on $\nu$),
\begin{align} \label{2ndmoment}
\mathbb{E}[(\mathcal{M}_{N,\omega})^2] &\leq (bN)^2 + \int_{(bN)^2}^\infty \mathbb{P}\left((\mathcal{M}_{N,\omega})^2 > t\right)\ dt \notag \\
&\leq (bN)^2 + N^2 \int_{b^2}^\infty \mathbb{P}\left(\mathcal{M}_{N,\omega} > N\sqrt{y}\right)\ dy \notag \\
&\leq (bN)^2 + N^2(2d)^N \int_{b^2}^\infty e^{-N\mathcal{J}(\sqrt{y})}\ dy \notag \\
&\leq (bN)^2 + N^2(2d)^N \int_{b^2}^\infty e^{-NL\sqrt{y}}\ dy \notag \\
&\leq (bN)^2 + N^2e^{-LbN/2} \notag\\
&\leq (b^2+1)N^2.
\end{align}
With \eqref{summinus} this shows that
\[
s_N^- \leq K_3 N.
\]
Turning to $\hat{r}_N^\pm(z)$ and $\hat{s}_N^\pm(z)$,
as in \eqref{logdiff2} we have
\begin{align} \label{logdiff3}
\log Z _{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}(z) - \log Z_{N,\omega}(z)
&\leq \mu_{N,\omega,z}\left(1_{x_m=y}\right) e^{\beta\big(\tilde\omega_{m,y}-\omega_{m,y}\big)}
\end{align}
and then as in \eqref{rNplus},
\begin{align} \label{rNplus2}
\mathbb{E}\left[ \big(L_{N,\omega,+}^{(m,y)}(z) \big)^2 \right] &\leq
\mathbb{E}\left[ \left( \int_{\tilde\omega_{m,y}\geq\omega_{m,y}}
e^{\beta\big(\tilde\omega_{m,y}-\omega_{m,y}\big)} \,d\mathbb{P}(\tilde\omega_{m,y}) \right)^2\right]
\nonumber \\
&\leq e^{\lambda(-2\beta)+2\lambda(\beta)},
\end{align}
so also
\begin{equation} \label{rNpbound2}
\hat{r}_N^+(z) \leq e^{\frac{1}{2}(\lambda(-2\beta)+2\lambda(\beta))}.
\end{equation}
Further, analogously to \eqref{summoment} but with $I_+^\alpha$ replaced by a single point, we obtain
\begin{align} \label{summoment2}
\hat{s}_N^+(z)^2 &= \mathbb{E}\left[ \big(L_{N,\omega,+}(z) \big)^2 \right] \notag \\
&\leq
N\,e^{2\lambda(\beta)} \,
\sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d}
\mathbb{E}\left[ \mu_{N,\omega,z}\left( 1_{x_m=y} \right) \right]
\mathbb{E}\left[ e^{-2\beta\omega_{m,y}}\right] \notag\\
&= N^2\,e^{\lambda(-2\beta)+2\lambda(\beta)}.
\end{align}
Next, analogously to \eqref{Fminus} and \eqref{jensen},
\begin{align} \label{minusbound}
&\left( \log Z _{N,(\hat\omega^{(m,y)},\tilde\omega_{m,y})}(z) - \log Z_{N,\omega}(z) \right)_-
\leq \beta\mu_{N,\omega,z}\left( 1_{x_m=y} \right) (\omega_{m,y}-\tilde\omega_{m,y} )1_{\tilde\omega_{m,y}<\omega_{m,y}}
\end{align}
so
\begin{equation}
\mathbb{E}\left[ \big(L_{N,\omega,-}^{(m,y)}(z) \big)^2 \right] \leq 2\beta^2 \mathbb{E}(\omega_{m,y}^2)
\end{equation}
and hence
\[
\hat{r}_N^-(z) \leq 2\beta \mathbb{E}(\omega_{0,0}^2)^{1/2}.
\]
To deal with $\hat{s}_N^-(z)$, observe that by \eqref{omegas} and \eqref{minusbound}, similarly to \eqref{summinus},
\begin{align}\label{Lbound}
L_{N,\omega,-}(z) &\leq \sum_{(m,y)\in \mathbb{N}\times \mathbb{Z}^d} \beta\mu_{N,\omega,z}\left( 1_{x_m=y} \right)
( (\omega_{m,y})_+ + \mathbb{E}[(\omega_{0,0})_-] ) \notag \\
&\leq \beta \mathcal{M}_{N,\omega} + \beta N \mathbb{E}[(\omega_{0,0})_-].
\end{align}
Therefore $\hat{s}_N^-(z)^2$ is bounded by the right side of \eqref{summinus}, which with \eqref{2ndmoment} shows $\hat{s}_N^-(z) \leq K_3 N$.
\end{proof}
Proposition \ref{rNsN} shows that $\log [N/ (r_Ns_N \log (N / r_Ns_N))]$ is of order $\log N$. We can apply Proposition \ref{Poinc} and Theorem \ref{main concentration}, the latter with $\rho_N = r_N, \sigma_N = s_N, F= \overline{F}_{N,\omega}^{I_+^\alpha}$ and $K$ a multiple of $N$, to yield part (i) of the next proposition. Part (ii) follows similarly, using $\hat{r}_N(z)$ and $\hat{s}_N(z)$ in place of $r_N$ and $s_N$, and $F(\omega) = \log Z_{N,\omega}(z)$.
\begin{proposition}\label{avgconc}
(i) There exist $K_4$ and $N_0=N_0(\beta,\nu)$ such that
\begin{eqnarray*}
\mathbb{P}\left( \left| \overline{F}_{N,\omega}^{I_+^\alpha}-\mathbb{E} \overline{F}_{N,\omega}^{I_+^\alpha}\right|>t\sqrt{\ell(N)}\right)\leq 8e^{-K_4 t},
\end{eqnarray*}
for $t>0$ and $N \geq N_0$, where $\ell(N)=N/\log N$.
(ii) There exist $K_5$ and $N_1=N_1(\beta,\nu)$ such that
\begin{equation}
\mathbb{P}\left( \left| \log Z_{N,\omega}(z)-\mathbb{E}\log Z_{N,\omega}(z)\right|>t\sqrt{N}\right)\leq 8e^{-K_5t},
\end{equation}
for all $N\geq N_1,t>1$ and all $z \in \mathbb{Z}^d$ with $|z|_1 \leq N$.
\end{proposition}
We can now prove the first main theorem.
\begin{proof}[Proof of Theorem \ref{expconc}.]
We start by obtaining a.s.\ upper and lower bounds on $\log Z_{N,\omega}$. Loosely, for the lower bound we consider a point $(\lfloor N^\alpha \rfloor,x)\in I_+^\alpha$ and we force the polymer started at $(0,0)$ to pass through that point; the energy accumulated by the first part of the polymer, i.e.\ $\sum_{i=1}^{\lfloor N^\alpha \rfloor} \omega_{i,x_i}$, is then bounded below by the minimum energy the polymer could accumulate during its first $\lfloor N^\alpha \rfloor$ steps. More precisely, we define
\begin{eqnarray*}
\text{M}_{N,\omega}^{n_1,n_2}:=\max\{| \omega_{n,x} | \colon n_1\leq n \leq n_2,\, |x|_\infty \leq N\},
\end{eqnarray*}
and then bound below by the minimum possible energy:
\begin{eqnarray*}
\sum_{i=1}^{\lfloor N^\alpha \rfloor} \omega_{i,x_i}\geq -N^\alpha \text{M}_{N^\alpha,\omega}^{0,N^{\alpha}}.
\end{eqnarray*}
Letting
\begin{eqnarray*}
\text{M}^+_{N,\omega}:=N^{\alpha}\log (2d)+\beta N^{\alpha} \left( \text{M}_{N^\alpha,\omega}^{0,N^{\alpha}}+ \text{M}_{N+N^\alpha,\omega}^{N,N+N^{\alpha}}\right)
\end{eqnarray*}
we then get that
\begin{eqnarray}\label{l bound 1}
\log Z_{N,\omega}&\geq & \log E\left[ e^{\beta\sum_{i=\lfloor N^\alpha \rfloor+1}^N \omega_{i,x_i} }\big|\ X_{\lfloor N^\alpha \rfloor}=x \right] +\log P(X_{\lfloor N^\alpha \rfloor}=x) -\beta N^{\alpha} \,\text{M}_{N^{\alpha},\omega}^{0,N^{\alpha}}\nonumber\\
&\geq& \log Z_{N,\omega}^{(\lfloor N^\alpha \rfloor,x)} - \text{M}^+_{N,\omega}.
\end{eqnarray}
Averaging \eqref{l bound 1} over $x\in I_+^\alpha$ yields
\begin{eqnarray}\label{lower bound}
\log Z_{N,\omega}&\geq &
\overline{F}_{N,\omega}^{I_+^\alpha}
- \text{M}_{N,\omega}^+.
\end{eqnarray}
In a related fashion we can obtain an upper bound on $\log Z_{N,\omega}$. In this case we start the polymer from a location $(-\lfloor N^\alpha \rfloor,x)\in I_-^\alpha$ and we force it to pass through $(0,0)$. Letting
\begin{eqnarray}
\text{M}_{N,\omega}^-:=N^\alpha \log(2d)+\beta N^\alpha \left( \text{M}_{N^{\alpha},\omega}^{-N^\alpha,0} +\text{M}_{N,\omega}^{N-N^\alpha,N}\right),
\end{eqnarray}
we then have analogously to \eqref{l bound 1} that
\begin{eqnarray}\label{u bound 1}
\log Z_{N,\omega}^{(-\lfloor N^\alpha \rfloor,x)}\geq \log Z_{N,\omega}-\text{M}_{N,\omega}^-,
\end{eqnarray}
so that, averaging over $I_-^\alpha$,
\begin{eqnarray}\label{upper bound}
\log Z_{N,\omega}&\leq &
\overline{F}_{N,\omega}^{I_-^\alpha}
+ \text{M}_{N,\omega}^-.
\end{eqnarray}
Using the fact that $\overline{F}_{N,\omega}^{I_+^\alpha}$ and $\overline{F}_{N,\omega}^{I_-^\alpha}$ have the same distribution, and that $\mathbb{E}\log Z_{N,\omega}=\mathbb{E} \overline{F}_{N,\omega}^{I_+^\alpha}=\mathbb{E} \overline{F}_{N,\omega}^{I_-^\alpha}$, we obtain from \eqref{lower bound} and \eqref{upper bound} that
\begin{align}\label{interpolation}
\mathbb{P}&\left( \left| \log Z_{N,\omega}-\mathbb{E}\log Z_{N,\omega}\right|>t\sqrt{\ell(N)}\right)\\
&\leq \mathbb{P}\left( \overline{F}_{N,\omega}^{I_-^\alpha} -\mathbb{E}\overline{F}_{N,\omega}^{I_-^\alpha}+M_{N,\omega}^- >t\sqrt{\ell(N)}\right)
+
\mathbb{P}\left( \overline{F}_{N,\omega}^{I_+^\alpha}-\mathbb{E}\overline{F}_{N,\omega}^{I_+^\alpha} -M_{N,\omega}^+<-t\sqrt{\ell(N)}\right) \nonumber\\
&\leq \mathbb{P}\left( \overline{F}_{N,\omega}^{I_-^\alpha} -\mathbb{E}\overline{F}_{N,\omega}^{I_-^\alpha} >\frac{1}{2}t\sqrt{\ell(N)}\right)
+
\mathbb{P}\left( \overline{F}_{N,\omega}^{I_+^\alpha}-\mathbb{E}\overline{F}_{N,\omega}^{I_+^\alpha}<-\frac{1}{2}t\sqrt{\ell(N)}\right)\nonumber\\
&\qquad + \mathbb{P}\left( M_{N,\omega}^+>\frac{1}{2}t\sqrt{\ell(N)}\right) +
\mathbb{P}\left( M_{N,\omega}^->\frac{1}{2}t\sqrt{\ell(N)}\right)\nonumber \\
&= \mathbb{P}\left( |\overline{F}_{N,\omega}^{I_+^\alpha}-\mathbb{E}\overline{F}_{N,\omega}^{I_+^\alpha}|>\frac{1}{2}t\sqrt{\ell(N)}\right)
+ \mathbb{P}\left( M_{N,\omega}^+>\frac{1}{2}t\sqrt{\ell(N)}\right)
+\mathbb{P}\left( M_{N,\omega}^->\frac{1}{2}t\sqrt{\ell(N)}\right).\nonumber
\end{align}
For $N\geq N_0(\beta,\nu)$, Proposition \ref{avgconc}(i) guarantees that the first term on the right side in \eqref{interpolation} is bounded by $8e^{-K_4t/2}$. The second and the third terms are similar so we consider only the second one. If $t>1$, then for some $K_6$, for large $N$,
\begin{eqnarray*}
\mathbb{P}\left( M_{N,\omega}^+>\frac{1}{2}t\sqrt{\ell(N)}\right)
&\leq&
K_6N^{1+\alpha} \mathbb{P}\left(|\omega_{0,0}|>\frac{t}{8\beta} N^{-\alpha}\sqrt{\frac{N}{\log N}}\right)\\
&\leq&K_6N^{1+\alpha}\exp\left(-\frac{t}{8} \frac{N^{\frac{1}{2}-\alpha}}{\sqrt{\log N}}\right)
\mathbb{E}\left[e^{\beta|\omega|}\right]\\
&\leq& \exp\left(-\frac{t}{16} \frac{N^{\frac{1}{2}-\alpha}}{\sqrt{\log N}}\right) .
\end{eqnarray*}
Putting the estimates together we get from \eqref{interpolation} that for some $K_7$,
\begin{eqnarray} \label{three_est}
\mathbb{P}\left( \left| \log Z_{N,\omega}-\mathbb{E}\log Z_{N,\omega}\right|>t\sqrt{\ell(N)}\right)
&\leq& 10e^{-K_7t}
\end{eqnarray}
for all $N$ large (say $N\geq N_2(\beta,\nu)\geq N_0(\beta,\nu)$) and $t>1$. For $t\leq 1$, \eqref{three_est} is trivially true if we take $K_7$ small enough. This completes the proof for $N\geq N_2$.
For $2 \leq N<N_2$ an essentially trivial proof suffices. Fix any (nonrandom) path $(y_n)_{n\leq N}$ and let $T_N = \sum_{n=1}^N \omega_{n,y_n}$, so that $Z_{N,\omega} \geq (2d)^{-N}e^{\beta T_N}$. Let $K_8 = N_2\log 2d + \max_{N<N_2} \mathbb{E}\log Z_{N,\omega}$, $K_9 = \min_{N<N_2} \mathbb{E}\log Z_{N,\omega}$ and $K_{10} = \max_{N<N_2} \mathbb{E} Z_{N,\omega}$. Then for some $K_{11},K_{12}$,
\[
\mathbb{P}\left( \log Z_{N,\omega}-\mathbb{E}\log Z_{N,\omega} < -t\sqrt{\frac{N}{\log N}}\right) \leq \mathbb{P}(\beta T_N <K_8-t)
\leq K_{11}e^{-K_{12}t}
\]
and by Markov's inequality,
\[
\mathbb{P}\left( \log Z_{N,\omega}-\mathbb{E}\log Z_{N,\omega} > t\sqrt{\frac{N}{\log N}}\right) \leq \mathbb{P}(Z_{N,\omega}>e^{K_9 + t})
\leq K_{10}e^{-K_9-t}.
\]
The theorem now follows for these $N$ as well, completing the proof.
\end{proof}
\section{Subgaussian rates of convergence}\label{proofrates}
In this section we prove Theorem \ref{ratemain}.
We start with the simple observation that $\mathbb{E}\log Z_{N,\omega}$ is superadditive:
\begin{equation} \label{superadd}
\mathbb{E}\log Z_{N+M,\omega} \geq \mathbb{E}\log Z_{N,\omega} + \mathbb{E}\log Z_{M,\omega},
\end{equation}
which by standard superadditivity results implies that the limit in \eqref{pbeta} exists, with
\begin{equation} \label{frombelow}
\lim_{N\to\infty} \frac{1}{N}\mathbb{E}\log Z_{N,\omega} = \sup_N \frac{1}{N}\mathbb{E}\log Z_{N,\omega}.
\end{equation}
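The superadditivity \eqref{superadd} rests on decomposing the partition function at an intermediate time. The decomposition itself is a deterministic identity in the disorder, which the following sketch verifies numerically by exact dynamic programming over hyperplanes; it is purely illustrative and assumes $d=1$, nearest-neighbor steps and standard Gaussian disorder (none of which is required above).

```python
import math
import random

def Z_from(omega, n0, x0, m, beta):
    """Exact point-to-level partition functions for a 1+1-dimensional
    polymer started at (n0, x0): returns a dict mapping x to
    E_{n0,x0}[ exp(beta * sum_{i=n0+1}^{n0+m} omega_{i,X_i}) ; X_{n0+m} = x ]."""
    Z = {x0: 1.0}
    for n in range(n0 + 1, n0 + m + 1):
        Znew = {}
        for x, w in Z.items():
            for y in (x - 1, x + 1):  # simple random walk steps, weight 1/2 each
                Znew[y] = Znew.get(y, 0.0) + 0.5 * w * math.exp(beta * omega[(n, y)])
        Z = Znew
    return Z

random.seed(1)
beta, N, M = 0.7, 6, 5
R = N + M
omega = {(n, x): random.gauss(0.0, 1.0)
         for n in range(1, R + 1) for x in range(-R, R + 1)}

# Free-endpoint partition function of length N+M ...
Z_total = sum(Z_from(omega, 0, 0, R, beta).values())
# ... equals the sum over midpoints x of Z_N(x) times the restarted
# partition function from (N, x) -- the identity behind superadditivity.
Z_first = Z_from(omega, 0, 0, N, beta)
Z_split = sum(Zx * sum(Z_from(omega, N, x, M, beta).values())
              for x, Zx in Z_first.items())
assert abs(Z_total - Z_split) <= 1e-9 * Z_total
```

Taking expectations of logarithms in this decomposition, via Jensen's inequality applied to the time-$N$ polymer measure and independence of the disorder before and after time $N$, yields \eqref{superadd}.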
Let $\mathbb{L}^{d+1}$ be the even sublattice of $\mathbb{Z}^{d+1}$:
\[
\mathbb{L}^{d+1} = \{(n,x) \in \mathbb{Z}^{d+1}: n+x_1 + \dots + x_d \text{ is even}\}.
\]
Let $H_N = \{(N,x): x \in \mathbb{Z}^d\} \cap \mathbb{L}^{d+1}$ and for $l<m$ and $(l,x),(m,y) \in \mathbb{L}^{d+1}$ define
\begin{eqnarray*}
Z_{m-l,\omega}((l,x)(m,y))=E_{l,x}\left[e^{\beta\sum_{n=l+1}^m\omega_{n,x_n}};x_m=y\right].
\end{eqnarray*}
Recall the notation \eqref{ZN} for a polymer in a shifted disorder.
The following lemma will be used throughout. Its proof follows the same lines as \cite{Al11}, Lemma 2.2(i); as there, it is a consequence of Theorem \ref{expconc} for part (i), and of Proposition \ref{avgconc}(ii) for part (ii).
\begin{lemma} \label{sums}
Let $\nu$ be nearly gamma. There exists $K_{13}$ as follows. Let $n_{\max} \geq 1$ and let $0 \leq s_1<t_1 \leq s_2 < t_2 < \dots \leq s_r < t_r$ with $t_j-s_j \leq n_{\max}$ for all $j \leq r$. For each $j \leq r$ let $(s_j,y_{j}) \in H_{s_j}$ and $(t_j,z_{j}) \in H_{t_j}$, and let
\[
\zeta_j = \log Z_{t_j-s_j,\omega}((s_j,y_{j})(t_j,z_{j})), \qquad \chi_j = \log Z_{t_j-s_j,\omega}^{(s_j,y_{j})}.
\]
Then for $a>0$, we have the following.
(i)
\begin{equation}
\mathbb{P}\left( \sum_{j=1}^r |\chi_j - \mathbb{E}\chi_j| > 2a\right) \leq 2^{r+1} \exp\left( -K_{13}a\left( \frac{ \log n_{\max}}{n_{\max}} \right)^{1/2} \right),
\end{equation}
(ii)
\begin{equation} \label{weak2side}
\mathbb{P}\left( \sum_{j=1}^r |\zeta_j - \mathbb{E}\zeta_j| > 2a\right) \leq 2^{r+1} \exp\left( -\frac{K_{13}a}{n_{\max}^{1/2}} \right),
\end{equation}
(iii)
\begin{equation} \label{oneside}
\mathbb{P}\left( \sum_{j=1}^r (\zeta_j - \mathbb{E}\chi_j)_+ > 2a\right) \leq 2^{r+1} \exp\left( -K_{13}a\left( \frac{ \log n_{\max}}{n_{\max}} \right)^{1/2} \right).
\end{equation}
\end{lemma}
Note (iii) follows from (i), since $\zeta_j \leq \chi_j$. We do not have a bound like \eqref{oneside}, with factor $(\log n_{\max})^{1/2}$, for the lower tail of the $\zeta_j$'s, but for our purposes such a bound is only needed for the upper tail, as \eqref{weak2side} suffices for lower tails.
We continue with a result which is like Theorem \ref{ratemain} but weaker (not subgaussian) and much simpler. Define the set of paths from the origin
\[
\Gamma_N = \{ \{(i,x_i)\}_{i\leq N}: x_0=0,|x_i-x_{i-1}|_1=1 \text{ for all } i\}.
\]
For a specified block length $n$, and for $N=kn$, the \emph{simple skeleton} of a path in $\Gamma_N$ is $\{(jn,x_{jn}):0 \leq j \leq k\}$. Let $\mathcal{C}_s$ denote the class of all possible simple skeletons of paths from $(0,0)$ to $(kn,0)$ and note that
\begin{equation} \label{Cs}
|\mathcal{C}_s| \leq (2n)^{dk}.
\end{equation}
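As an illustration of the count \eqref{Cs} (not needed for the argument), the following sketch computes the exact number of simple skeletons for $d=1$ and compares it with the bound $(2n)^{dk}$; the parameter values are arbitrary.

```python
from itertools import product

def count_simple_skeletons(n, k):
    """Exact number of simple skeletons {(jn, x_{jn}) : 0 <= j <= k} of
    walk paths from (0,0) to (kn,0) in d = 1: each increment
    x_{jn} - x_{(j-1)n} lies in {-n, -n+2, ..., n}."""
    counts = {0: 1}
    for _ in range(k):
        new = {}
        for x, c in counts.items():
            for dx in range(-n, n + 1, 2):
                new[x + dx] = new.get(x + dx, 0) + c
        counts = new
    return counts.get(0, 0)

n, k, d = 4, 3, 1
exact = count_simple_skeletons(n, k)
# brute-force cross-check over all increment sequences
brute = sum(1 for t in product(range(-n, n + 1, 2), repeat=k) if sum(t) == 0)
assert exact == brute
assert exact <= (2 * n) ** (d * k)   # the bound (Cs): |C_s| <= (2n)^{dk}
```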
For a skeleton $\mathcal{S}$ (of any type, including simple and types to be introduced below), we write $\Gamma_N(\mathcal{S})$ for the set of all paths in $\Gamma_N$ which pass through all points of $\mathcal{S}$. For a set $\mathcal{A}$ of paths of length $N$ we set
\[
Z_{N,\omega}(\mathcal{A}) = E\left( e^{\beta\sum_{i=1}^N \omega_{i,x_i}} 1_{\mathcal{A}} \right),
\]
and we write $Z_{N,\omega}(\mathcal{S})$ for $Z_{N,\omega}(\Gamma_N(\mathcal{S}))$.
\begin{lemma}\label{rateweak}
Suppose $\nu$ is nearly gamma. Then there exists $K_{14}$ such that
\begin{equation} \label{rate3}
\mathbb{E}\log Z_{n,\omega} \geq p(\beta)n - K_{14}n^{1/2}\log n \quad \text{for all } n \geq 2.
\end{equation}
\end{lemma}
\begin{proof}
It is sufficient to prove the inequality in \eqref{rate3} for sufficiently large $n$.
Fix $n$ and consider paths of length $N=kn$. For each $\mathcal{S} = \{(jn,x_{jn}):0 \leq j \leq k\} \in \mathcal{C}_s$ we have
\begin{equation} \label{blocks}
\mathbb{E}\log Z_{N,\omega}(\mathcal{S}) = \sum_{j=1}^k \mathbb{E}\log Z_{n,\omega}\bigg( ((j-1)n,x_{(j-1)n}),(jn,x_{jn}) \bigg) \leq k\mathbb{E}\log Z_{n,\omega}.
\end{equation}
By Lemma \ref{sums}(ii) (note $K_{13}$ is defined there),
\begin{align} \label{eachS}
\mathbb{P}\bigg( \log Z_{N,\omega}(\mathcal{S}) - \mathbb{E}\log Z_{N,\omega}(\mathcal{S}) \geq 16dK_{13}^{-1}kn^{1/2}\log n \bigg) \leq 2^{k+1}e^{-8dk\log n},
\end{align}
so by \eqref{Cs},
\begin{align} \label{allS}
\mathbb{P}\bigg( &\log Z_{N,\omega}(\mathcal{S}) - \mathbb{E}\log Z_{N,\omega}(\mathcal{S}) \geq 16dK_{13}^{-1}kn^{1/2}\log n \text{ for some } \mathcal{S} \in \mathcal{C}_s \bigg)
\leq e^{-4dk\log n}.
\end{align}
Combining \eqref{Cs}, \eqref{blocks} and \eqref{allS} we see that with probability at least $1-e^{-4dk\log n}$ we have
\begin{align} \label{ceiling}
\log Z_{kn,\omega} &= \log \left( \sum_{\mathcal{S} \in \mathcal{C}_s} Z_{N,\omega}(\mathcal{S}) \right) \notag\\
&\leq dk\log(2n) + k\mathbb{E}\log Z_{n,\omega} + 16dK_{13}^{-1}kn^{1/2}\log n.
\end{align}
But by \eqref{aslimit}, also with probability approaching 1 as $k\to \infty$ (with $n$ fixed), we have
\begin{equation} \label{nlimit}
\log Z_{kn,\omega} \geq knp(\beta) - k
\end{equation}
which with \eqref{ceiling} shows that
\[
\mathbb{E}\log Z_{n,\omega} \geq np(\beta) -1 - d\log(2n) - 16dK_{13}^{-1}n^{1/2}\log n.
\]
\end{proof}
The proof of Theorem \ref{ratemain} follows the general outline of the preceding proof. But to obtain that (stronger) theorem, we sometimes need to use Lemma \ref{sums}(i),(iii) in place of (ii), and to use a coarse-graining approximation to reduce the size of \eqref{Cs}, so that we avoid the $\log n$ in the exponent on the right side of \eqref{allS} and can work with $\log\log n$ instead.
For $(n,x)\in\mathbb{L}^{d+1}$ let
\[
s(n,x) = np(\beta) - \mathbb{E}\log Z_{n,\omega}(x), \qquad s_0(n) = np(\beta) - \mathbb{E}\log Z_{n,\omega},
\]
so $s(n,x) \geq 0$ by \eqref{frombelow}. The quantity $s(n,x)$ may be viewed as a measure of the inefficiency created when a path makes an increment of $(n,x)$. As in the proof of Lemma \ref{rateweak}, we consider a polymer of length $N=kn$ for some block length $n$ to be specified and $k \geq 1$. In general we take $n$ sufficiently large, and then take $k$ large, depending on $n$; we tacitly take $n$ to be even throughout. In addition to \eqref{superadd} we have the relation
\[
Z_{n+m,\omega}(x+y) \geq Z_{n,\omega}(x)Z_{m,\omega}^{(n,x)}(y) \qquad \text{for all $x,y \in \mathbb{Z}^d$ and all } n,m\geq 1,
\]
which implies that $s(\cdot,\cdot)$ is subadditive. Subadditivity of $s_0$ follows from \eqref{superadd}.
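In more detail (a routine check, using that the shifted disorder has the same law as $\omega$): taking logarithms and expectations in the preceding display gives

```latex
\mathbb{E}\log Z_{n+m,\omega}(x+y)
  \geq \mathbb{E}\log Z_{n,\omega}(x) + \mathbb{E}\log Z^{(n,x)}_{m,\omega}(y)
  = \mathbb{E}\log Z_{n,\omega}(x) + \mathbb{E}\log Z_{m,\omega}(y),
\qquad\text{hence}\qquad
s(n+m,x+y) \leq s(n,x) + s(m,y).
```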
Let
\begin{equation} \label{slowlyvar}
\rho(m) = \frac{\log \log m}{K_{13}(\log m)^{1/2}}, \quad \theta(m) = (\log m)^{5/2}, \quad \text{and} \quad \varphi(m) = \lfloor (\log m)^3 \rfloor.
\end{equation}
For our designated block length $n$, for $x \in \mathbb{Z}^d$ with $(n,x) \in \mathbb{L}^{d+1}$, we say the transverse increment $x$ is \emph{inadequate} if $s(n,x) > n^{1/2}\theta(n)$, and \emph{adequate} otherwise.
Note the dependence on $n$ is suppressed in this terminology. For general values of $m$, we say $(m,x)$ is \emph{efficient} if $s(m,x) \leq 4n^{1/2}\rho(n)$, and \emph{inefficient} otherwise; again there is a dependence on $n$. For $m=n$, efficiency is obviously a stronger condition than adequacy. In fact, to prove Theorem \ref{ratemain} it is sufficient to show that for large $n$, there exists $x$ for which $(n,x)$ is efficient.
Let
\[
h_n = \max\{|x|_\infty: x \text{ is adequate}\}.
\]
(Note we have not established any monotonicity for $s(n,\cdot)$, so some sites $x$ with $|x|_\infty\leq h_n$ may be inadequate.) We wish to coarse-grain on scale $u_n = 2\lfloor h_n/2\varphi(n) \rfloor$. A \emph{coarse-grained} (or \emph{CG}) \emph{point} is a point of form $(jn,x_{jn})$ with $j \geq 0$ and $x_{jn} \in u_n\mathbb{Z}^d$.
A \emph{coarse-grained} (or \emph{CG}) \emph{skeleton} is a simple skeleton $\{(jn,x_{jn}):0 \leq j \leq k\}$ consisting entirely of CG points. By a \emph{CG path} we mean a path from $(0,0)$ to $(kn,0)$ for which the simple skeleton is a CG skeleton.
\begin{remark} \label{strategy}
A rough strategy for the proof of Theorem \ref{ratemain} is as follows; what we actually do is based on this but requires certain modifications. It is enough to show that for some $K_{15}$, for large $n$, $s(n,x) \leq K_{15}n^{1/2}\rho(n)$ for some $x$. Suppose to the contrary $s(n,x) > K_{15}n^{1/2}\rho(n)$ for all $x$; this means that for every simple skeleton $\mathcal{S}$ we have
\[
\mathbb{E}\log Z_{kn,\omega}(\mathcal{S}) \leq knp(\beta) - kK_{15}n^{1/2}\rho(n).
\]
The first step is to use this and Lemma \ref{sums} to show that, if we take $n$ then $k$ large, with high probability
\[
\log Z_{kn,\omega}(\hat{\mathcal{S}}) \leq knp(\beta) - \half kK_{15}n^{1/2}\rho(n) \quad \text{for every CG skeleton } \hat{\mathcal{S}};
\]
this makes use of the fact that the number of CG skeletons is much smaller than the number of simple skeletons. The next step is to show that with high probability, every simple skeleton $\mathcal{S}$ can be approximated by a CG skeleton $\hat{\mathcal{S}}$ without changing $\log Z_{kn,\omega}(\mathcal{S})$ too much, and therefore
\[
\log Z_{kn,\omega}(\mathcal{S}) \leq knp(\beta) - \frac{1}{4} kn^{1/2}K_{15}\rho(n) \quad \text{for every simple skeleton } \mathcal{S}.
\]
The final step is to sum $Z_{kn,\omega}(\mathcal{S})$ over simple skeletons $\mathcal{S}$ (of which there are at most $(2n)^{dk}$) to obtain
\[
\log Z_{kn,\omega} \leq dk \log 2n + knp(\beta) - \frac{1}{4} kn^{1/2}K_{15}\rho(n).
\]
Dividing by $kn$ and letting $k\to\infty$ gives a limit which contradicts \eqref{aslimit}; this shows efficient values $x$ must exist.
\end{remark}
We continue with the proof of Theorem \ref{ratemain}. Let
\[
\hat{H}_N = \{x \in \mathbb{Z}^d: (N,x) \in H_N, |x|_1 \leq N\};
\]
when $N$ is clear from the context we refer to points $x\in \hat{H}_N$ as \emph{accessible sites}. Clearly $|\hat{H}_N| \leq (2N)^d$.
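For orientation (purely illustrative, not part of the proof), the crude count $|\hat{H}_N| \leq (2N)^d$ can be checked directly by enumerating accessible sites in small cases; the dimensions and values of $N$ below are arbitrary.

```python
from itertools import product

def accessible_sites(N, d):
    """The set hat{H}_N in dimension d: points x in Z^d with
    |x|_1 <= N and N + x_1 + ... + x_d even (so (N, x) lies in L^{d+1})."""
    return [x for x in product(range(-N, N + 1), repeat=d)
            if sum(abs(c) for c in x) <= N and (N + sum(x)) % 2 == 0]

# the crude bound |hat{H}_N| <= (2N)^d holds in every small case checked
for d in (1, 2):
    for N in (2, 3, 4, 5):
        assert len(accessible_sites(N, d)) <= (2 * N) ** d
```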
\begin{lemma} \label{unexcess}
(i) There exists $K_{16}$ such that for all $n\geq 2$, $s(n,0) \leq K_{16}n^{1/2}\log n$.
(ii) There exists $K_{17}$ such that for $n$ large (depending on $\beta$) and even, if $|x|_1 \leq K_{17} n^{1/2}\theta(n)$ then $x$ is adequate.
\end{lemma}
\begin{proof}
We first prove (i). It suffices to consider $n$ large. Let $m=n/2$. It follows from Proposition \ref{avgconc}(ii) that
\begin{align} \label{eachx}
\mathbb{P}&\left( \left| \log Z_{m,\omega}(x) - \mathbb{E}\log Z_{m,\omega}(x) \right| \geq 2dK_5^{-1}m^{1/2}\log m
\text{ for some } x \in \hat{H}_m \right) \notag \\
&\qquad \leq (2m)^de^{-2d\log m} \notag \\
&\qquad \leq \half.
\end{align}
It follows from \eqref{eachx}, Theorem \ref{expconc} and Lemma \ref{rateweak} that with probability at least 1/4, for some accessible site $x$ we have
\begin{align} \label{hyperfrac}
\exp\left( \mathbb{E}\log Z_{m,\omega}(x) + 2dK_5^{-1}m^{1/2}\log m \right) &\geq Z_{m,\omega}(x) \notag\\
&\geq \frac{1}{(2m)^d} Z_{m,\omega} \notag\\
&\geq \frac{1}{(2m)^d} \exp\left( \mathbb{E}\log Z_{m,\omega} - m^{1/2} \right) \notag\\
&\geq \exp\left( p(\beta)m - 2K_{14}m^{1/2}\log m \right),
\end{align}
and therefore we have the deterministic statement
\begin{equation}
\mathbb{E}\log Z_{m,\omega}(x) \geq p(\beta)m - K_{18}m^{1/2}\log m.
\end{equation}
Then by symmetry and subadditivity,
\begin{equation} \label{0unexc}
s(n,0) \leq s(m,x)+s(m,-x) \leq 2K_{18}n^{1/2}\log n.
\end{equation}
Turning to (ii), let $J=2\lfloor K_{19}n^{1/2}\theta(n) \rfloor$, with $K_{19}$ to be specified. Analogously to \eqref{eachx} we have using Proposition \ref{avgconc}(ii) that for large $n$,
\begin{align} \label{eachx2}
\mathbb{P}&\left( \left| \log Z_{n-2J,\omega}(-x) - \mathbb{E}\log Z_{n-2J,\omega}(-x) \right| \geq 2dK_5^{-1}n^{1/2}\log n
\text{ for some } x \in \hat{H}_J \right) \notag \\
&\qquad \leq 8(2J)^d e^{-2d\log n} \notag \\
&\qquad \leq \frac{1}{4}.
\end{align}
Similarly, also for large $n$,
\begin{align} \label{ratio}
\mathbb{P}&\left( \log Z_{J,\omega}\big((n-2J,x),(n-J,0)\big) > 2p(\beta)J \text{ for some } x \in \hat{H}_J \right) \notag \\
&\leq \mathbb{P}\left( \log Z_{J,\omega}\big((n-2J,x),(n-J,0)\big) - \mathbb{E}\log Z_{J,\omega}\big((n-2J,x),(n-J,0)\big) > p(\beta)J \text{ for some } x \in \hat{H}_J \right) \notag \\
&\leq 8(2J)^d e^{-K_5p(\beta)J^{1/2}} \notag \\
&<\frac{1}{4}.
\end{align}
Then analogously to \eqref{hyperfrac}, since
\[
Z_{n-J,\omega}(0) = \sum_{x \in \hat{H}_J} Z_{n-2J,\omega}(x)Z_{J,\omega}\big((n-2J,x),(n-J,0)\big),
\]
by \eqref{0unexc}--\eqref{ratio}, Proposition \ref{avgconc}(ii) and Lemma \ref{rateweak}, with probability at least 1/4, for some $x\in\hat{H}_J$ we have
\begin{align} \label{hyperfrac2}
\exp&\left( \mathbb{E}\log Z_{n-2J,\omega}(x) + 2dK_5^{-1}n^{1/2}\log n \right) \notag\\
&\geq Z_{n-2J,\omega}(x) \notag\\
&\geq Z_{n-2J,\omega}(x)Z_{J,\omega}\big((n-2J,x),(n-J,0)\big)e^{-2p(\beta)J} \notag \\
&\geq \frac{1}{|\hat{H}_J|} Z_{n-J,\omega}(0)e^{-2p(\beta)J} \notag\\
&\geq \frac{1}{(2J)^d} \exp\left( \mathbb{E}\log Z_{n-J,\omega}(0) - 2dK_5^{-1}n^{1/2}\log n - 2p(\beta)J \right) \notag\\
&\geq \exp\left( p(\beta)n - 5p(\beta)K_{19}n^{1/2}\theta(n) \right),
\end{align}
and therefore
\begin{equation} \label{thetagap}
\mathbb{E}\log Z_{n-2J,\omega}(x) \geq p(\beta)n - 6p(\beta)K_{19}n^{1/2}\theta(n).
\end{equation}
If $|y|_1 \leq J$, then $|y-x|_1 \leq 2J$, so there is a path $\{(i,x_i)\}_{n-2J \leq i \leq n}$ from $(n-2J,x)$ to $(n,y)$. Therefore, using \eqref{thetagap} and
bounding $Z_{2J,\omega}\big((n-2J,x),(n,y)\big)$ below by the term corresponding to this single path, we obtain
\begin{align}
\mathbb{E}\log Z_{n,\omega}(y) &\geq \mathbb{E}\log Z_{n-2J,\omega}(x) + \mathbb{E}\log Z_{2J,\omega}\big((n-2J,x),(n,y)\big) \notag \\
&\geq \mathbb{E}\log Z_{n-2J,\omega}(x) - 2J\log 2d + \beta\mathbb{E}\sum_{i=n-2J+1}^n \omega_{i,x_i} \notag \\
&= \mathbb{E}\log Z_{n-2J,\omega}(x) - 2J\log 2d \notag \\
&\geq p(\beta)n - K_{19}\big(6p(\beta)+4\log 2d \big)n^{1/2}\theta(n).
\end{align}
Taking $K_{19} = (6p(\beta) + 4 \log 2d )^{-1} $, this shows that $y$ is adequate whenever $|y|_1 \leq J$.
\end{proof}
Observe that for a simple skeleton $\mathcal{S} = \{(jn,x_{jn}), j \leq k\}$, we have a sum over blocks:
\begin{equation} \label{skelsum}
\log Z_{N,\omega}(\mathcal{S}) = \sum_{j=1}^k \log Z_{n,\omega}\bigg(((j-1)n,x_{(j-1)n}),(jn,x_{jn})\bigg).
\end{equation}
The rough strategy outlined in Remark \ref{strategy} involves approximating $Z_{N,\omega}(\mathcal{S})$ by $Z_{N,\omega}(\hat{\mathcal{S}})$, where $\hat{\mathcal{S}}$ is a CG skeleton which approximates the simple skeleton $\mathcal{S}$; equivalently, we want to replace $x_{(j-1)n},x_{jn}$ in \eqref{skelsum} by CG points. This may be problematic, however, for some values of $j$ and some paths in $\Gamma_N(\mathcal{S})$, for three reasons. First, if we do not restrict the possible increments to satisfy $|x_{jn} - x_{(j-1)n}|_\infty \leq h_n$, there will be too many CG skeletons to sum over. Second, even when increments satisfy this inequality, there are difficulties if increments are inadequate. Third, paths which veer too far off course transversally within a block present problems in the approximation by a CG path. Our methods for dealing with these difficulties principally involve two things: we do the CG approximation only for ``nice'' blocks, and rather than just CG skeletons, we allow more general sums of the form
\[
\sum_{j=1}^l \log Z_{\tau_j-\tau_{j-1},\omega}((\tau_{j-1},y_{j}),(\tau_j,z_{j})),
\]
which need not have $y_{j}=z_{j-1}$. We turn now to the details.
In approximating \eqref{skelsum} we want, in effect, to change paths only within a distance $n_1 \leq 6dn/\varphi(n)$ (to be specified) of each hyperplane $H_{jn}$. To this end, given a site $w=(jn\pm n_1,y_{jn\pm n_1}) \in H_{jn\pm n_1}$, let $z_{jn}$ be the site in $u_n\mathbb{Z}^d$ closest to $y_{jn\pm n_1}$ in $\ell^1$ norm (breaking ties by some arbitrary rule), and let $\pi_{jn}(w) = (jn,z_{jn})$, which may be viewed as the projection into $H_{jn}$ of the CG approximation to $w$ within the hyperplane $H_{jn\pm n_1}$. Given a path $\gamma=\{(i,x_{i}), i \leq kn\}$ from $(0,0)$ to $(kn,0)$, define points
\[
d_j = d_j(\gamma) = (jn,x_{jn}), \quad 0 \leq j \leq k,
\]
\[
e_j = (jn+n_1,x_{jn+n_1}), \quad 0 \leq j \leq k-1,
\]
\[
f_j = (jn-n_1,x_{jn-n_1}), \quad 1 \leq j \leq k.
\]
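The spatial part of the projection $\pi_{jn}$ is $\ell^1$-nearest rounding to the grid $u_n\mathbb{Z}^d$; since the $\ell^1$ norm separates coordinates, componentwise rounding achieves the minimum. The sketch below is purely illustrative (the function name and the tie-breaking rule are ours, not fixed by the text).

```python
from itertools import product

def cg_project(y, u):
    """l^1-nearest point of the grid u*Z^d to the point y in Z^d
    (ties broken by Python's round); the spatial part of pi_{jn}."""
    return tuple(u * round(c / u) for c in y)

def l1(a, b):
    return sum(abs(p - q) for p, q in zip(a, b))

# cross-check against brute-force minimization over nearby grid points
u, y = 4, (7, -3)
z = cg_project(y, u)
candidates = [tuple(u * m for m in v)
              for v in product(range(-3, 4), repeat=len(y))]
assert l1(z, y) == min(l1(c, y) for c in candidates)
```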
We say a \emph{sidestep} occurs in block $j$ in $\gamma$ if either
\[
|x_{(j-1)n+n_1} - x_{(j-1)n}|_\infty > h_n \quad \text{or} \quad |x_{jn} - x_{jn-n_1}|_\infty > h_n.
\]
Let
\[
\mathcal{E}_{in} = \mathcal{E}_{in}(\gamma) = \{1 \leq j \leq k: x_{jn} - x_{(j-1)n} \text{ is inadequate}\},
\]
\[
\mathcal{E}_{side} = \mathcal{E}_{side}(\gamma) = \{1 \leq j \leq k: j \notin \mathcal{E}_{in} \text{ and a sidestep occurs in block } j\},
\]
\[
\mathcal{E} = \mathcal{E}_{in} \cup \mathcal{E}_{side}
\]
and let
\[
e_{j-1}' = \pi_{(j-1)n}(e_{j-1}), \quad f_j' = \pi_{jn}(f_j), \quad j \notin \mathcal{E}.
\]
Blocks with indices in $\mathcal{E}$ are called \emph{bad blocks}, and $\mathcal{E}$ is called the \emph{bad set}. Define the tuples
\begin{equation} \label{Rjdef}
\mathcal{T}_j = \mathcal{T}_j(\gamma) = \begin{cases} (d_{j-1},e_{j-1},f_j,d_j) &\text{if } j \in \mathcal{E},\\
(e_{j-1}',f_j') &\text{if } j \notin \mathcal{E}, \end{cases}
\end{equation}
define the \emph{CG-approximate skeleton} of $\gamma$ to be
\[
S_{CG}(\gamma) = \{ \mathcal{T}_j: 1 \leq j \leq k\}
\]
and define the \emph{CG-approximate bad} (respectively \emph{good) skeleton} of $\gamma$ to be
\[
S_{CG}^{bad}(\gamma) = \{ \mathcal{T}_j: j \in \mathcal{E}\}, \qquad S_{CG}^{good}(\gamma) = \{ \mathcal{T}_j: j \notin \mathcal{E}\}.
\]
Note $\mathcal{E}_{in}(\gamma), \mathcal{E}_{side}(\gamma), S_{CG}^{bad}(\gamma)$ and $S_{CG}^{good}(\gamma)$ are all functions of $S_{CG}(\gamma)$.
We refer to the bad set $\mathcal{E}$ also as the \emph{index set of} $S_{CG}^{bad}(\gamma)$.
Let $\mathcal{C}_{CG}$ (respectively $\mathcal{C}_{CG}^{bad}$) denote the class of all possible CG-approximate skeletons (respectively bad skeletons) of paths of length $kn$ starting at $(0,0)$. For $B \subset \{1,\dots,k\}$ let
$\mathcal{C}_{CG}(B)$ denote the class of all CG-approximate skeletons in $\mathcal{C}_{CG}$ with bad set $B$, and
analogously, let $\mathcal{C}_{CG}^{bad}(B)$ denote the class of all possible CG-approximate bad skeletons in $\mathcal{C}_{CG}^{bad}$ with index set $B$.
Then for $b\leq k$ define
\[
\mathcal{C}_{CG}^{bad}(b) = \cup_{B:|B|=b}\ \mathcal{C}_{CG}^{bad}(B).
\]
The partition function corresponding to a CG-approximate skeleton $\mathcal{S}_{CG}$ is
\begin{align} \label{tZdef}
\tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) &= \left( \prod_{j\notin\mathcal{E}} Z_{n_1,\omega}(e_{j-1}',f_j') \right) \left( \prod_{j\in\mathcal{E}} Z_{n_1,\omega}(d_{j-1},e_{j-1}) Z_{n-2n_1,\omega}(e_{j-1},f_j)
Z_{n_1,\omega}(f_j,d_j) \right).
\end{align}
So that we may consider these two products separately, we denote the first as $\tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{good})$ and the second as $\tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{bad})$.
For a CG-approximate skeleton in $\mathcal{C}_{CG}(B)$, and for $j \notin B$, if $e_{j-1}'=(n(j-1),w), d_{j-1}=(n(j-1),x),f_j'=(nj,y)$ and $d_j=(nj,z)$, we always have
\[
|w-x|_\infty \leq h_n + \frac{u_n}{2}, \quad |z-y|_\infty \leq h_n + \frac{u_n}{2}.
\]
It follows readily that if $\mathcal{T}_1,\dots,\mathcal{T}_{j-1}$ are specified and $j \notin B$, then there are at most $(4h_nu_n^{-1}+3)^{2d} \leq (5\varphi(n))^{2d}$ possible values of $\mathcal{T}_j$; if $j \in B$ there are at most $(2n)^{4d}$.
It follows that the number of CG-approximate skeletons satisfies
\begin{equation} \label{CB}
|\mathcal{C}_{CG}(B)| \leq (5\varphi(n))^{2d(k-|B|)}(2n)^{4d|B|}.
\end{equation}
Note that the factor $\varphi(n)$ in place of $n$ in \eqref{CB} represents the entropy reduction resulting from the use of CG paths.
Summing \eqref{CB} over $B$ we obtain
\begin{equation} \label{CGsize}
|\mathcal{C}_{CG}| \leq 2^k (2n)^{4dk}.
\end{equation}
For $B=\{j_1<\dots<j_{|B|}\} \subset \{1,\dots,k\}$, setting $j_0=0$ we have
\begin{equation} \label{Cbadsize}
|\mathcal{C}_{CG}^{bad}(B)| \leq \prod_{1\leq i \leq |B|} [(2(j_i-j_{i-1})n)^d (2n)^{3d}]\leq \left( \frac{16n^4k}{|B|} \right)^{d|B|},
\end{equation}
so for each $b \leq k$, using ${k\choose b} \leq (ke/b)^b$,
\begin{equation}\label{Cbadbsize}
|\mathcal{C}_{CG}^{bad}(b)| \leq {k\choose b} \left( \frac{16n^4k}{b} \right)^{db} \leq \left( \frac{8n^2k}{b} \right)^{2db}.
\end{equation}
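The elementary step behind \eqref{Cbadbsize}, namely ${k\choose b}(16n^4k/b)^{db} \leq (8n^2k/b)^{2db}$, can be checked numerically in logarithmic form (to avoid overflow); the sketch below is illustrative only, with arbitrary parameter values.

```python
from math import comb, log

def bad_bound_holds(n, k, d=1):
    """Check, in log form, the estimate used for |C_CG^bad(b)|:
    C(k,b) * (16 n^4 k / b)^(d b)  <=  (8 n^2 k / b)^(2 d b)
    for all 1 <= b <= k (with a tiny tolerance for floating point)."""
    return all(
        log(comb(k, b)) + d * b * log(16 * n**4 * k / b)
        <= 2 * d * b * log(8 * n**2 * k / b) + 1e-9
        for b in range(1, k + 1)
    )

assert bad_bound_holds(10, 20)
assert bad_bound_holds(50, 100)
```

The inequality reduces, via ${k\choose b} \leq (ke/b)^b$, to $e \leq 4^d (k/b)^{d-1}$, which holds for all $d \geq 1$ and $b \leq k$.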
We also use the non-coarse-grained analogs of the $\mathcal{T}_j$, given by
\begin{equation} \label{Vjdef}
\mathcal{V}_j = \mathcal{V}_j(\gamma) = (d_{j-1},e_{j-1},f_j,d_j), \quad j \leq k,
\end{equation}
and define the \emph{augmented skeleton} of $\gamma$ to be
\[
\mathcal{S}_{aug}(\gamma) = \{\mathcal{V}_j, 1 \leq j \leq k\}.
\]
We write $\mathcal{C}_{aug}$ for the class of all possible augmented skeletons of paths from $(0,0)$ to $(kn,0)$.
Note that $\mathcal{E}_{side}(\gamma),\mathcal{E}_{in}(\gamma)$ and $\mathcal{S}_{CG}(\gamma)$ are functions of $\mathcal{S}_{aug}(\gamma)$; we denote by $F$ the ``coarse-graining map'' such that
\[
\mathcal{S}_{CG}(\gamma) = F\left( \mathcal{S}_{aug}(\gamma) \right).
\]
We can write
\[
Z_{N,\omega} = \sum_{\mathcal{S}_{CG}\in\mathcal{C}_{CG}}\ \sum_{\mathcal{S}_{aug} \in F^{-1}(\mathcal{S}_{CG})} Z_{N,\omega}(\mathcal{S}_{aug}),
\]
and define
\[
\tilde{Z}_{N,\omega} = \sum_{\mathcal{S}_{CG}\in\mathcal{C}_{CG}}\ |F^{-1}(\mathcal{S}_{CG})| \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}).
\]
Now for a given choice of $e_{j-1}'$ there are at most $(2n_1)^d$ possible choices of $e_{j-1}$ and then at most $(2n_1)^d$ for $d_{j-1}$, and similarly for $f_j',f_j,d_j$, so for all $\mathcal{S}_{CG}$,
\begin{equation} \label{Fsize}
|F^{-1}(\mathcal{S}_{CG})| \leq (2n_1)^{4dk}.
\end{equation}
The following will be proved in the next section.
\begin{lemma} \label{fastpi}
For $n$ sufficiently large, there exists an even integer $n_1 \leq 6dn/\varphi(n)$ such that for all $p \in H_{n_1}$ we have
\[
\mathbb{E}\log Z_{n_1,\omega}(\pi_0(p),p) \geq p(\beta)n_1 - 20dn^{1/2}\rho(n).
\]
\end{lemma}
This lemma is central to the following, which bounds the difference between partition functions for a skeleton and for its CG approximation.
\begin{lemma} \label{CGcorrec}
There exists $K_{20}$ such that under the conditions of Theorem \ref{expconc}, for $n$ sufficiently large,
\begin{align} \label{CGcorrec1}
\mathbb{P}&\left( \log Z_{N,\omega}(\mathcal{S}_{aug}) - \log \tilde{Z}_{N,\omega}(F(\mathcal{S}_{aug})) \geq 80dkn^{1/2}\rho(n)
\text{ for some } \mathcal{S}_{aug} \in \mathcal{C}_{aug} \right) \notag \\
&\leq e^{-K_{20}k(\log n)(\log \log n)}.
\end{align}
\end{lemma}
\begin{proof}
We have
\begin{align} \label{splitup}
\mathbb{P}&\left( \log Z_{N,\omega}(\mathcal{S}_{aug}) - \log \tilde{Z}_{N,\omega}(F(\mathcal{S}_{aug})) \geq 80dkn^{1/2}\rho(n)
\text{ for some } \mathcal{S}_{aug} \in \mathcal{C}_{aug} \right) \notag \\
&\leq \sum_{\mathcal{S}_{CG} \in \mathcal{C}_{CG}}\ \sum_{\mathcal{S}_{aug} \in F^{-1}(\mathcal{S}_{CG})}
\mathbb{P}\left( \log Z_{N,\omega}(\mathcal{S}_{aug}) - \log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) \geq 80dkn^{1/2}\rho(n) \right).
\end{align}
Fix $\mathcal{S}_{CG} \in \mathcal{C}_{CG}$ and $\mathcal{S}_{aug} \in F^{-1}(\mathcal{S}_{CG})$. We can write $\mathcal{S}_{aug}$ as $\{\mathcal{V}_j,j\leq k\}$ with $\mathcal{V}_j$ as in \eqref{Vjdef}.
Then using Lemma \ref{rateweak},
\begin{align} \label{sumsmall}
\log &Z_{N,\omega}(\mathcal{S}_{aug}) - \log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) \notag\\
&\leq \sum_{j \notin B} \bigg[ \left( \log Z_{n_1,\omega}(d_{j-1},e_{j-1})
- \log Z_{n_1,\omega}(e_{j-1}',e_{j-1}) \right) \notag \\
&\qquad \qquad + \left( \log Z_{n_1,\omega}(f_j,d_j) - \log Z_{n_1,\omega}(f_j,f_j') \right) \bigg] \notag\\
&\leq \sum_{j \notin B} \bigg[ \left( \log Z_{n_1,\omega}(d_{j-1},e_{j-1}) - \mathbb{E}\log Z_{n_1,\omega}(d_{j-1},e_{j-1}) \right) \notag \\
&\qquad\qquad - \left( \log Z_{n_1,\omega}(e_{j-1}',e_{j-1}) - \mathbb{E}\log Z_{n_1,\omega}(e_{j-1}',e_{j-1}) \right) \notag \\
&\qquad\qquad + \left( \log Z_{n_1,\omega}(f_j,d_j) - \mathbb{E}\log Z_{n_1,\omega}(f_j,d_j) \right) \notag \\
&\qquad\qquad - \left( \log Z_{n_1,\omega}(f_j,f_j') - \mathbb{E}\log Z_{n_1,\omega}(f_j,f_j') \right) \bigg] \notag \\
&\qquad + \sum_{j \notin B} \left[ 2p(\beta) n_1 - \mathbb{E}\log Z_{n_1,\omega}(e_{j-1}',e_{j-1})
- \mathbb{E}\log Z_{n_1,\omega}(f_j,f_j') \right].
\end{align}
By Lemma \ref{fastpi}, the last sum is bounded by $40dkn^{1/2}\rho(n)$. Hence letting $T$ denote the first sum on the right side of \eqref{sumsmall}, we have by \eqref{sumsmall} and Lemma \ref{sums}(ii):
\begin{align} \label{logdiff}
\mathbb{P}&\left( \log Z_{N,\omega}(\mathcal{S}_{aug}) - \log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) \geq 80dkn^{1/2}\rho(n) \right) \notag\\
&\leq \mathbb{P}\left( T > 40dkn^{1/2}\rho(n) \right) \notag\\
&\leq 2^{2k+1}\exp\left(- 20K_{13}dk\rho(n)\left( \frac{n}{n_1} \right)^{1/2} \right) \notag \\
&\leq e^{-kK_{21}(\log n)(\log \log n)}.
\end{align}
Combining \eqref{splitup} and \eqref{logdiff} with \eqref{CGsize} and \eqref{Fsize} we obtain that for large $n$,
\begin{align} \label{combine}
\mathbb{P}&\left( \log Z_{N,\omega}(\mathcal{S}_{aug}) - \log \tilde{Z}_{N,\omega}(F(\mathcal{S}_{aug})) \geq 80dkn^{1/2}\rho(n)
\text{ for some } \mathcal{S}_{aug} \in \mathcal{C}_{aug} \right) \notag\\
&\leq (2n)^{9dk} e^{-kK_{21}(\log n)(\log \log n)} \notag\\
&\leq e^{-kK_{21}(\log n)(\log \log n)/2}.
\end{align}
\end{proof}
It is worth noting that in \eqref{combine} we do not make use of the entropy reduction contained in \eqref{CB}. Nonetheless we are able to obtain a good bound because we apply Lemma \ref{sums}(ii) with $n_{max}=n_1$ instead of $n_{max}=n$.
Let $b_{nk} = \lfloor \frac{k\log\log n}{(\log n)^{3/2}} \rfloor$. We deal separately with CG-approximate skeletons according to whether the number of bad blocks exceeds $b_{nk}$. Let
\[
\mathcal{C}_{CG}^- = \cup_{B:|B|\leq b_{nk}} \mathcal{C}_{CG}(B), \quad \mathcal{C}_{CG}^+ = \cup_{B:|B|> b_{nk}} \mathcal{C}_{CG}(B).
\]
The next lemma shows that bad blocks have a large cost, in the sense of reducing the mean of the log partition function---compare the $n^{1/2}\theta(n)$ factor in \eqref{badcost2} to the $n^{1/2}\log n$ factor in \eqref{rate3}.
\begin{lemma} \label{badcost}
For $n$ sufficiently large, for all $1 \leq b \leq k$ and $\mathcal{S}_{CG}^{bad} \in \mathcal{C}_{CG}^{bad}(b)$,
\begin{equation} \label{badcost2}
\mathbb{E}\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{bad}) \leq p(\beta)bn - \half bn^{1/2}\theta(n).
\end{equation}
\end{lemma}
\begin{proof}
Fix $B \subset \{1,\dots,k\}$ with $|B|=b$, fix $\mathcal{S}_{CG}^{bad} \in \mathcal{C}_{CG}^{bad}(B)$, let $\mathcal{E}_{in},\mathcal{E}_{side}$ be the corresponding sets of indices of bad blocks, and let $\{\mathcal{T}_j, j \in B\}$ be as in \eqref{Rjdef}. Then
\begin{align} \label{CGdecomp}
\mathbb{E}\log &\tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{bad}) \notag\\
&= \sum_{j \in B} \big[ \mathbb{E}\log Z_{n_1,\omega}(d_{j-1},e_{j-1}) + \mathbb{E}\log Z_{n-2n_1,\omega}(e_{j-1},f_j)
+ \mathbb{E}\log Z_{n_1,\omega}(f_j,d_j) \big].
\end{align}
For $j \in \mathcal{E}_{in}$ we have
\begin{align} \label{exc}
\mathbb{E}\log &Z_{n_1,\omega}(d_{j-1},e_{j-1}) + \mathbb{E}\log Z_{n-2n_1,\omega}(e_{j-1},f_j)
+ \mathbb{E}\log Z_{n_1,\omega}(f_j,d_j) \notag\\
&\leq \mathbb{E}\log Z_{n,\omega}(d_{j-1},d_j) \notag \\
&\leq p(\beta)n - n^{1/2}\theta(n).
\end{align}
For $j \in \mathcal{E}_{side}$, write $e_{j-1}-d_{j-1}$ as $(n_1,x)$, so $|x|_\infty > h_n$ and therefore $x$ is inadequate. If the sidestep occurs from $(j-1)n$ to $(j-1)n+n_1$, then by superadditivity and Lemma \ref{unexcess}(i),
\begin{align} \label{straight}
\mathbb{E}\log Z_{n_1,\omega}(d_{j-1},e_{j-1}) &= \mathbb{E}\log Z_{n_1,\omega}((0,0),(n_1,x)) \notag\\
&\leq \mathbb{E}\log Z_{n,\omega}((0,0),(n,x)) - \mathbb{E}\log Z_{n-n_1,\omega}((n_1,x),(n,x)) \notag \\
&\leq p(\beta)n - n^{1/2}\theta(n) - \left( p(\beta)(n-n_1) - K_{16}n^{1/2}\log n \right) \notag\\
&\leq p(\beta)n_1 - \half n^{1/2}\theta(n),
\end{align}
and therefore
\begin{align} \label{side}
\mathbb{E}\log &Z_{n_1,\omega}(d_{j-1},e_{j-1}) + \mathbb{E}\log Z_{n-2n_1,\omega}(e_{j-1},f_j)
+ \mathbb{E}\log Z_{n_1,\omega}(f_j,d_j) \notag\\
&\leq p(\beta)n - \half n^{1/2}\theta(n).
\end{align}
Combining \eqref{CGdecomp}, \eqref{exc} and \eqref{side} we obtain
\begin{equation}
\mathbb{E}\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{bad})
\leq p(\beta)bn - \half bn^{1/2}\theta(n).
\end{equation}
\end{proof}
It follows by additivity that
\begin{equation} \label{ecompare}
\mathbb{E}\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) \leq k\mathbb{E}\log Z_{n,\omega}
\end{equation}
for all CG skeletons $\mathcal{S}_{CG}$. Rather than considering deviations of $\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG})$ above its mean, it will be advantageous to consider deviations above the right side of \eqref{ecompare}.
The next two lemmas show that it is unlikely for this deviation to be very large for any CG skeleton. We will use the fact that for each $\mathcal{S}_{CG} \in \mathcal{C}_{CG}$ with bad set $B$, we have by Lemmas \ref{rateweak} and \ref{badcost}
\begin{align} \label{meandiff}
|B|\mathbb{E}\log Z_{n,\omega} - \mathbb{E}\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{bad}) \geq \half |B|n^{1/2}\theta(n) - K_{14}|B|n^{1/2}\log n
\geq \frac{1}{4}|B|n^{1/2}\theta(n).
\end{align}
\begin{lemma} \label{fewbad}
Under the conditions of Theorem \ref{expconc}, if $n$ and then $k$ are chosen sufficiently large,
\begin{align} \label{CGbound1}
\mathbb{P}&\bigg( \log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) - k\mathbb{E}\log Z_{n,\omega} \geq 80dkn^{1/2}\rho(n)
\text{ for some } \mathcal{S}_{CG} \in \mathcal{C}_{CG}^- \bigg) \notag \\
&\leq e^{-16K_{13}dk\rho(n)}.
\end{align}
\end{lemma}
\begin{proof}
From \eqref{CB} we see that
\begin{equation} \label{CCGmsize}
|\mathcal{C}_{CG}^-| \leq 2^k(5\varphi(n))^{2dk} (2n)^{4db_{nk}} \leq e^{10dk\log\log n}.
\end{equation}
Combining this with Lemma \ref{sums}(ii),(iii) (with $n_{max}=n$) and \eqref{CGsize}, \eqref{Cbadbsize}, we obtain
\begin{align} \label{tZbound}
\mathbb{P}\bigg( &\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) - k\mathbb{E}\log Z_{n,\omega} \geq 80dkn^{1/2}\rho(n)
\text{ for some } \mathcal{S}_{CG} \in \mathcal{C}_{CG}^- \bigg) \notag\\
&\leq \sum_{b=0}^{b_{nk}} \mathbb{P}\bigg( \log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{bad}) - b\mathbb{E}\log Z_{n,\omega} \geq 40dkn^{1/2}\rho(n)
\text{ for some } \mathcal{S}_{CG}^{bad} \in \mathcal{C}_{CG}^{bad}(b) \bigg)\notag\\
&\qquad + \mathbb{P}\bigg( \log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{good}) - (k-|B|)\mathbb{E}\log Z_{n,\omega} \geq 40dkn^{1/2}\rho(n)
\text{ for some } \mathcal{S}_{CG} \in \mathcal{C}_{CG}^- \bigg) \notag\\
&\leq \sum_{b=0}^{b_{nk}} \mathbb{P}\bigg( \log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{bad}) - \mathbb{E}\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{bad})
\geq 40dkn^{1/2}\rho(n) \text{ for some } \mathcal{S}_{CG}^{bad} \in \mathcal{C}_{CG}^{bad}(b) \bigg) \notag\\
&\qquad + |\mathcal{C}_{CG}^-|2^{k+1} \exp\left( -20K_{13}dk\rho(n)(\log n)^{1/2} \right) \notag\\
&\leq \sum_{b=1}^{b_{nk}} |\mathcal{C}_{CG}^{bad}(b)| 2^{3b+1} e^{-20K_{13}dk\rho(n)}
+ |\mathcal{C}_{CG}^-|2^{k+1} e^{-20dk\log\log n} \notag\\
&\leq \sum_{b=1}^{b_{nk}} \left( \frac{32n^2k}{b} \right)^{2db} e^{-20K_{13}dk\rho(n)}
+ e^{-9dk\log\log n}.
\end{align}
Note that the event in the third line of \eqref{tZbound} is well-defined because $\mathcal{S}_{CG}^{good}$ is a function of $\mathcal{S}_{CG}$.
For each $b \leq b_{nk}$ we have
\begin{equation} \label{bnkprop1}
\frac{ \log \frac{32n^2k}{b} }{ \frac{32n^2k}{b} } \leq \frac{ \log \frac{32n^2k}{b_{nk}} }{ \frac{32n^2k}{b_{nk}} }
\leq \frac{ 3 \log\log n }{ 32n^2(\log n)^{1/2} }
\end{equation}
so
\begin{equation} \label{bnkprop2}
2db\log \frac{32n^2k}{b} \leq \frac{ 3dk \log\log n }{ (\log n)^{1/2} } = 3K_{13}dk\rho(n).
\end{equation}
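The first inequality in \eqref{bnkprop1} uses only the elementary fact that $t\mapsto(\log t)/t$ is decreasing for $t\geq e$: since $b\leq b_{nk}$ gives $32n^2k/b\geq 32n^2k/b_{nk}$, the ratio is largest at $b=b_{nk}$. A throwaway numerical confirmation of this monotonicity, in Python:

```python
import math

def log_over_t(t):
    """t -> (log t)/t: maximum 1/e at t = e, strictly decreasing for t > e,
    since d/dt [(log t)/t] = (1 - log t) / t**2 < 0 there."""
    return math.log(t) / t

# sample the function past t = e and confirm it never increases
ts = [math.e * (1 + 0.01 * i) for i in range(1, 500)]
vals = [log_over_t(t) for t in ts]
assert all(a > b for a, b in zip(vals, vals[1:]))
```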
With \eqref{tZbound} this shows that for $k$ sufficiently large (depending on $n$),
\begin{align} \label{tZbound2}
\mathbb{P}\bigg( &\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) - k\mathbb{E}\log Z_{n,\omega} \geq 80dkn^{1/2}\rho(n)
\text{ for some } \mathcal{S}_{CG} \in \mathcal{C}_{CG}^- \bigg) \notag\\
&\leq b_{nk}e^{-17K_{13}dk\rho(n)} + e^{-9dk\log\log n} \notag\\
&\leq e^{-16K_{13}dk\rho(n)}.
\end{align}
\end{proof}
We continue with a similar but simpler result for $\mathcal{C}_{CG}^+$.
\begin{lemma} \label{manybad2}
Under the conditions of Theorem \ref{ratemain}, for $n$ sufficiently large and $N=kn$,
\begin{align} \label{CGbound2}
\mathbb{P}&\bigg( \log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) - k\mathbb{E}\log Z_{n,\omega} \geq 0
\text{ for some } \mathcal{S}_{CG} \in \mathcal{C}_{CG}^+ \bigg) \leq e^{-K_{13}k(\log n)(\log\log n)/16}.
\end{align}
\end{lemma}
\begin{proof}
In contrast to \eqref{CB}, it is straightforward that
\begin{equation} \label{CCGplus}
|\mathcal{C}_{CG}^+| \leq (2n)^{4dk}.
\end{equation}
Using \eqref{meandiff} we obtain that for $\mathcal{S}_{CG} \in \mathcal{C}_{CG}(B)$ with $|B|\geq b_{nk}$,
\begin{align} \label{lowmean}
k\mathbb{E}&\log Z_{n,\omega} - \mathbb{E}\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) \notag\\
&= \left[ |B|\mathbb{E}\log Z_{n,\omega} - \mathbb{E}\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{bad}) \right]
+ \left[ (k-|B|)\mathbb{E}\log Z_{n,\omega} - \mathbb{E}\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}^{good}) \right] \notag\\
&\geq \frac{1}{4}b_{nk}n^{1/2}\theta(n).
\end{align}
Combining this with Lemma \ref{sums}(ii) (with $n_{max}=n$) and \eqref{CCGplus}, we obtain
\begin{align} \label{tZbound0}
\mathbb{P}\bigg( &\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) - k\mathbb{E}\log Z_{n,\omega} \geq 0
\text{ for some } \mathcal{S}_{CG} \in \mathcal{C}_{CG}^+ \bigg) \notag\\
&\leq \mathbb{P}\bigg( \log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) - \mathbb{E}\log \tilde{Z}_{N,\omega}(\mathcal{S}_{CG}) \geq \frac{1}{4}b_{nk}n^{1/2}\theta(n)
\text{ for some } \mathcal{S}_{CG} \in \mathcal{C}_{CG}^+ \bigg) \notag\\
&\leq |\mathcal{C}_{CG}^+|2^{3k+1} e^{-K_{13}b_{nk}\theta(n)/8} \notag\\
&\leq e^{-K_{13}k(\log n)(\log\log n)/16}.
\end{align}
\end{proof}
We can now complete the proof of Theorem \ref{ratemain}.
If we take first $n$ and then $k$ sufficiently large, then
with probability greater than 1/2, none of the events given in Lemmas \ref{CGcorrec}, \ref{fewbad} and \ref{manybad2} occurs, and we then have for all $\mathcal{S}_{aug} \in \mathcal{C}_{aug}$:
\begin{align} \label{eachskel}
\log Z_{N,\omega}(\mathcal{S}_{aug}) &\leq k\mathbb{E}\log Z_{n,\omega} + 160dkn^{1/2}\rho(n).
\end{align}
Then since $|\mathcal{C}_{aug}| \leq (2n)^{3dk}$, summing over $\mathcal{S}_{aug} \in \mathcal{C}_{aug}$ shows that, still with probability greater than 1/2,
\begin{equation} \label{allskel}
\log Z_{kn,\omega} \leq k\mathbb{E}\log Z_{n,\omega} + 160dkn^{1/2}\rho(n) + 3dk\log(2n) \leq k\mathbb{E}\log Z_{n,\omega} + 161dkn^{1/2}\rho(n).
\end{equation}
By \eqref{aslimit}, for fixed $n$, for sufficiently large $k$ we have, again with probability greater than 1/2:
\begin{equation} \label{aslimit2}
\frac{1}{kn}\log Z_{kn,\omega} \geq p(\beta) - \frac{1}{n}.
\end{equation}
Thus with positive probability, both \eqref{allskel} and \eqref{aslimit2} hold, and hence
\[
k\mathbb{E}\log Z_{n,\omega} + 161dkn^{1/2}\rho(n) \geq knp(\beta) - k,
\]
which implies
\[
\mathbb{E}\log Z_{n,\omega} \geq np(\beta) - 162dn^{1/2}\rho(n).
\]
\section{Proof of Lemma \ref{fastpi}}\label{prooflem}
We begin with some definitions.
A path $((l,x_{l}),(l+1,x_{l+1}),\dots,(l+m,x_{l+m}))$ is \emph{clean} if every increment $(t-s,x_{t}-x_{s})$ with $l \leq s < t \leq l+m$ is efficient.
Let $x^*$ be an adequate site with first coordinate $x_1^*=|x^*|_\infty=h_n$. Given a path $\gamma = \{(m,x_{m})\}$ from $(0,0)$ to $(n,x^*)$, let
\[
\tau_j = \tau_j(\gamma) = \min\{m: (x_m)_1 = ju_n\}, \quad 1 \leq j \leq \varphi(n).
\]
The \emph{climbing skeleton} of $\gamma$ is $\mathcal{S}_{cl}(\gamma) = \{(\tau_j,x_{\tau_j}): 1 \leq j \leq \varphi(n)\}$.
A \emph{climbing segment} of $\gamma$ is a segment of $\gamma$ from $(\tau_{j-1},x_{\tau_{j-1}})$ to $(\tau_j,x_{\tau_j})$ for some $j$. A climbing segment is \emph{short} if $\tau_j - \tau_{j-1} \leq 2n/\varphi(n)$, and \emph{long} otherwise. (Note $n/\varphi(n)$ is the average length of the climbing segments in $\gamma$.) Since the total length of $\gamma$ is $n$, there can be at most $\varphi(n)/2$ long climbing segments in $\gamma$, so there are at least $\varphi(n)/2$ short ones. Let
\[
\mathcal{J}_s(\gamma) = \{j \leq \varphi(n): \text{ the $j$th climbing segment of $\gamma$ is short} \},
\]
\[
\mathcal{J}_l(\gamma) = \{j \leq \varphi(n): \text{ the $j$th climbing segment of $\gamma$ is long} \},
\]
\[
J_l(\gamma) = \left( \cup_{j \in \mathcal{J}_l(\gamma)} (\tau_{j-1},\tau_j) \right) \cap \left\lfloor \frac{2n}{\varphi(n)} \right\rfloor \mathbb{Z}.
\]
If no short climbing segment of $\gamma$ is clean, we say $\gamma$ is \emph{soiled}. For soiled $\gamma$, for each $j \in \mathcal{J}_s(\gamma)$ there exist $\alpha_j(\gamma)<\beta_j(\gamma)$ in $[\tau_{j-1},\tau_j]$ for which the increment of $\gamma$ from $(\alpha_j,x_{\alpha_j})$ to $(\beta_j,x_{\beta_j})$ is inefficient. (If $\alpha_j,\beta_j$ are not unique we make a choice by some arbitrary rule.)
We can reorder the values $\{\tau_j, j \leq \varphi(n)\} \cup \{\alpha_j,\beta_j: j \in \mathcal{J}_s(\gamma)\} \cup J_l(\gamma)$ into a single sequence $\{\sigma_j, 1 \leq j \leq N(\gamma)\}$ with $\varphi(n) \leq N(\gamma) \leq 4\varphi(n)$, such that at least $\varphi(n)/2$ of the increments $(\sigma_j - \sigma_{j-1},x_{\sigma_j} - x_{\sigma_{j-1}}), j \leq N(\gamma),$ are inefficient. The \emph{augmented climbing skeleton} of $\gamma$ is then the sequence
$\mathcal{S}_{acl}(\gamma) = \{(\sigma_j,x_{\sigma_j}): 1 \leq j \leq N(\gamma)\}$. The set of all augmented climbing skeletons of soiled paths from $(0,0)$ to $(n,x^*)$ is denoted $\mathcal{C}_{acl}$.
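For a concrete feel for these definitions, the climbing times and the short/long classification of the climbing segments can be computed mechanically from a lattice path. The Python sketch below is purely illustrative: the path is encoded only through the first coordinates $(x_m)_1$, the inputs $u$ and $phi$ stand for $u_n$ and $\varphi(n)$, and it is assumed (as for the paths considered here) that each level $ju$ is actually reached.

```python
def climbing_skeleton(path, u, phi):
    """Given path[m] = first coordinate of x_m at time m, return the
    climbing times tau_j = min{m : path[m] = j*u} for j = 1..phi,
    together with the indices of short and long climbing segments.
    Hypothetical helper illustrating the definitions; it assumes each
    level j*u is reached by the path."""
    n = len(path) - 1
    taus = []
    m = 0
    for j in range(1, phi + 1):
        while path[m] != j * u:
            m += 1
        taus.append(m)
    short, long_ = [], []
    prev = 0
    for j, t in enumerate(taus, start=1):
        # a segment is short if its duration is at most twice the average n/phi
        (short if t - prev <= 2 * n / phi else long_).append(j)
        prev = t
    return taus, short, long_
```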
\begin{lemma} \label{existence}
Provided $n$ is large, there exists a path from $(0,0)$ to $(n,x^*)$ containing a short climbing segment which is clean.
\end{lemma}
Note that Lemma \ref{existence} is a purely deterministic statement, since the property of being clean does not involve the configuration $\omega$.
Translating the segment obtained in Lemma \ref{existence} to begin at the origin, we obtain a path $\alpha^*$ from $(0,0)$ to some site $(m^*,y^*)$, with the following properties:
\begin{equation} \label{properties}
m^* \leq \frac{2n}{\varphi(n)}, \quad y_1^* = u_n \quad \text{and $\alpha^*$ is clean}.
\end{equation}
By definition, every increment of $\alpha^*$ is efficient. The proof of \cite[Lemma 2.3]{Al11} then applies unchanged: for $n_1=2d(m^*+1)$, given $p \in \hat{H}_{n_1}$ with $\pi_0(p)=0$, one can find $4d+1$ segments of $\alpha^*$ (or reflections of such segments through coordinate hyperplanes, which are necessarily also efficient) such that the sum of the increments made by these segments is $p$. By subadditivity this shows that $s(p) \leq (4d+1)n^{1/2}\rho(n)$, proving Lemma \ref{fastpi}.
\begin{proof}[Proof of Lemma \ref{existence}]
Let $\mathcal{D}^*$ denote the set of all soiled paths from $(0,0)$ to $(n,x^*)$. We will show that $\mathbb{P}(Z_{n,\omega}(\mathcal{D}^*) < Z_{n,\omega}(x^*))>0$; since this event can only occur if some path from $(0,0)$ to $(n,x^*)$ lies outside $\mathcal{D}^*$, it follows that unsoiled paths exist, proving the lemma.
Since $x^*$ is adequate, it follows from Proposition \ref{avgconc}(ii) that
\begin{equation} \label{bigprob}
\mathbb{P}\bigg( \log Z_{n,\omega}(x^*) > p(\beta)n - 2n^{1/2}\theta(n) \bigg) > \half.
\end{equation}
On the other hand, for paths in $\mathcal{D}^*$, fixing $\mathcal{S}_{acl} = \{(\sigma_j,x_{\sigma_j}): 1 \leq j \leq r\} \in \mathcal{C}_{acl}$, since there are at least $\varphi(n)/2$ inefficient increments $(\sigma_j - \sigma_{j-1},x_{\sigma_j} - x_{\sigma_{j-1}})$, we have
\begin{equation} \label{ESacl}
\mathbb{E} \log Z_{n,\omega}(\mathcal{S}_{acl}) \leq p(\beta)n - 2n^{1/2} \varphi(n)\rho(n).
\end{equation}
Hence by Lemma \ref{sums}(ii) (with $n_{\max} = 6dn/\varphi(n)$),
\begin{align} \label{eachSacl}
\mathbb{P}&\left( \log Z_{n,\omega}(\mathcal{S}_{acl}) \geq p(\beta)n - n^{1/2} \varphi(n)\rho(n) \right) \notag\\
&\leq \mathbb{P}\left( \log Z_{n,\omega}(\mathcal{S}_{acl}) - \mathbb{E}\log Z_{n,\omega}(\mathcal{S}_{acl}) \geq n^{1/2} \varphi(n)\rho(n) \right) \notag\\
&\leq 2^{4\varphi(n)+1} \exp\left( -\frac{K_{13}}{2} n^{1/2} \varphi(n)\rho(n)\left( \frac{\varphi(n)}{6dn} \right)^{1/2} \right) \notag\\
&\leq e^{-(\log n)^4(\log\log n)/6d^{1/2}}.
\end{align}
Since $3\theta(n) \leq \varphi(n)\rho(n)$ and
\begin{equation} \label{Caclsize}
|\mathcal{C}_{acl}| \leq (2n)^{4(d+1)\varphi(n)},
\end{equation}
it follows from \eqref{eachSacl} that, in contrast to \eqref{bigprob},
\begin{align} \label{allSacl}
\mathbb{P}&\left( \log Z_{n,\omega}(\mathcal{D}^*) > p(\beta)n - 2n^{1/2}\theta(n) \right) \notag\\
&\leq \mathbb{P}\left( \log |\mathcal{C}_{acl}| + \max_{\mathcal{S}_{acl} \in \mathcal{C}_{acl}} \log Z_{n,\omega}(\mathcal{S}_{acl}) > p(\beta)n - 2n^{1/2}\theta(n) \right) \notag\\
&\leq \mathbb{P}\left( \log Z_{n,\omega}(\mathcal{S}_{acl}) \geq p(\beta)n - 3n^{1/2}\theta(n) \text{ for some } \mathcal{S}_{acl} \in \mathcal{C}_{acl} \right) \notag\\
&\leq |\mathcal{C}_{acl}| e^{-(\log n)^4(\log\log n)/6d^{1/2}} \notag\\
&\leq e^{-(\log n)^4(\log\log n)/12d^{1/2}}.
\end{align}
It follows from \eqref{bigprob} and \eqref{allSacl} that $\mathbb{P}(Z_{n,\omega}(\mathcal{D}^*) < Z_{n,\omega}(x^*))>0$, as desired.
\end{proof}
\begin{remark} \label{exponents}
The exponents on $\log m$ in the definition \eqref{slowlyvar} of $\theta(m)$ and $\varphi(m)$ are not the only ones that can be used. The proof of Lemma \ref{CGcorrec} requires (ignoring constants) $\varphi(n) \geq (\log n)^3$, Lemma \ref{manybad2} requires $\theta(n) \geq (\log n)^{5/2}$ and Lemma \ref{existence} requires $\varphi(n) \geq \theta(n)(\log n)^{1/2}$.
\end{remark}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Stochastic multi-symplectic Runge-Kutta methods for stochastic Hamiltonian PDEs}
\author[rvt]{Liying Zhang}
\ead{[email protected]}
\author[els]{Lihai Ji\corref{cor1}}
\ead{[email protected]}
\address[rvt]{School of Mathematical Science, China University of Mining and Technology, Beijing 100083, China}
\address[els]{Institute of Applied Physics and Computational Mathematics, Beijing 100094, China}
\cortext[cor1]{Corresponding author}
\fntext[fn1]{The first author is supported by NNSFC (NO. 11601514 and NO. 11771444) and the second author is supported by NNSFC (NO. 11471310 and NO. 11601032).}
\begin{abstract}
In this paper, we consider stochastic Runge-Kutta methods
for stochastic Hamiltonian partial differential equations and present some
sufficient conditions for their multi-symplecticity. In particular, we apply these ideas to the stochastic Maxwell equations with multiplicative noise, which possess both a stochastic multi-symplectic conservation law and an energy conservation law. Theoretical analysis shows that the methods preserve both the discrete stochastic multi-symplectic conservation law and the discrete energy conservation law almost surely.
\end{abstract}
\begin{keyword}
stochastic Hamiltonian partial differential equations \sep Runge-Kutta methods \sep multi-symplecticity \sep stochastic Maxwell equations
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{1.1}
Consider the following stochastic partial differential equations (SPDEs) in the sense of Stratonovich
\begin{equation}\label{SHPDE}
{\rm {\bf K}}{\rm d}_tz+{\rm {\bf L}}z_{x}{\rm d}t=\nabla_z S_1(z){\rm d}t+\nabla_z S_2(z)\circ{\rm d}W(t),~~z\in\mathbb{R}^n,~n\geq2,
\end{equation}
where ${\rm {\bf K}}$, ${\rm {\bf L}}$ are skew-symmetric matrices, and $S_1$, $S_2$ are
smooth functions of state variable $z$. Let $H=L^2(\mathbb{R},\mathbb{R})$ and
let $(\Omega,\mathcal{F},P)$ be a probability space with a normal filtration $\{\mathcal{F}_{t}\}_{0\leq t\leq T}$.
Moreover, let $W:[0,T]\times\Omega\rightarrow H$ be a standard $Q$-Wiener process with
respect to $\{{\mathcal F}_{t}\}_{0\leq t\leq T}$, with a trace class operator $Q: H\rightarrow H$.
SPDEs of the form \eqref{SHPDE} are called stochastic Hamiltonian PDEs (see \cite{JWH2013}). Moreover, the stochastic Hamiltonian PDEs \eqref{SHPDE} possess the stochastic multi-symplectic conservation law (Theorem 2.2 in \cite{JWH2013}) as follows
\begin{equation}\label{s_multi_symplectic_law}
{\rm d}_t\omega(t,x)+\partial_x\kappa(t,x){\rm d}t=0,~a.s.,
\end{equation}
where $\omega(t,x)=\frac{1}{2}{\rm d}z\wedge {\rm {\bf K}}{\rm d}z$, $\kappa(t,x)=\frac{1}{2}{\rm d}z\wedge {\rm {\bf L}}{\rm d}z$ are differential 2-forms associated with the two skew-symmetric matrices ${\rm {\bf K}}$ and ${\rm {\bf L}}$, respectively.
Many physical and engineering phenomena are modeled by stochastic Hamiltonian PDEs. The advantage of this modeling framework is that stochastic Hamiltonian PDEs possess the stochastic multi-symplectic geometric structure \eqref{s_multi_symplectic_law} and are therefore able to capture the behavior of the underlying phenomena more fully. It is thus desirable that the corresponding numerical schemes exactly preserve a discrete stochastic multi-symplectic structure; such schemes are known as stochastic multi-symplectic schemes. In recent years, many researchers have studied different stochastic Hamiltonian PDEs, and various stochastic multi-symplectic numerical schemes have been developed, analyzed and tested; see, e.g., the stochastic Maxwell equations \cite{CHZ2016,HJZ2014,HJZC2017}, the stochastic nonlinear Schr\"{o}dinger equation \cite{CHJK2017,CLHZ2017,HWZ2017,JWH2013}, and the stochastic Korteweg-de Vries equation \cite{JWH2012}.
In the well-developed numerical analysis of deterministic ODEs, derivative-free approximation methods are of particular interest; Runge-Kutta (RK) methods especially are widely used. In the last decade, it has been widely recognized that stochastic RK methods play an important role in the numerical solution of stochastic ODEs, and they are commonly employed; see, e.g.,~\cite{BB2001,BB2014,HXW2015,KP1999,MD2012,MD2015,Rosser2009,TV2002,Wang2008} and the references therein. The symplectic condition for stochastic RK methods applied to stochastic Hamiltonian ODEs was first obtained in \cite{MD2012}. Furthermore, \cite{MD2015} derived the symplectic conditions for stochastic partitioned Runge-Kutta (PRK) methods. Recently, based on a variational principle, \cite{CH2016} proposed a general class of stochastic symplectic RK methods in the temporal direction for the stochastic Schr\"{o}dinger equation in the Stratonovich sense and showed that these methods preserve the charge conservation law. In particular, the authors show that the mean-square convergence order of the semidiscrete scheme is 1 under appropriate assumptions.
For deterministic Hamiltonian PDEs, the multi-symplecticity of RK methods has been studied in \cite{HLS2005}. To the best of our knowledge, no work in the literature has studied multi-symplectic RK methods for the stochastic Hamiltonian PDEs \eqref{SHPDE}. Motivated by \cite{CH2016,MD2012,MD2015}, in this paper we consider the general case of stochastic Hamiltonian PDEs, investigate the multi-symplecticity of stochastic RK methods, and present some sufficient conditions for stochastic multi-symplectic RK methods. To this end, we mainly utilize the corresponding variational form of the stochastic Hamiltonian PDEs and the chain rule for the Stratonovich integral, and then derive the multi-symplectic conditions by a direct (if tedious) computation.
The rest of this paper is organized as follows. In Section 2, we apply stochastic RK methods to solve \eqref{SHPDE} and present conditions for the multi-symplecticity of stochastic RK methods. In particular, a one-stage stochastic multi-symplectic RK method is given. In Section 3, the multi-symplecticity of stochastic RK methods for the three-dimensional stochastic Maxwell equations with multiplicative noise is discussed. Concluding remarks are presented in Section 4.
\section{Stochastic multi-symplectic RK methods}
\subsection{Stochastic multi-symplectic conditions}
In this section, we give conditions for the multi-symplecticity
of stochastic RK discretizations of general stochastic Hamiltonian PDEs. In the sequel, we denote $Z_{m}^{k}\approx z(c_m h,d_k\tau)$, $Z_{m}^{1}\approx z(c_m h,\tau)$, $Z_{1}^{k}\approx z(h,d_k\tau)$, $c_m=\sum_{n=1}^s\widetilde{a}_{mn}$, $d_k=\sum_{j=1}^ra_{kj}$. To solve \eqref{SHPDE}, we present a class of stochastic RK methods with $r$ stages in the temporal direction and $s$ stages in the spatial direction, namely
\begin{subequations}\label{PRK}
\begin{align}
&Z_{m}^{k}=z_{m}^{p}+\tau\sum_{j=1}^{r}a_{kj}\delta_t^{m,j}Z_{m}^{j},\quad \forall~p=0,1,\cdots,P\label{PRK_sub1}\\[1mm]
&z_{m}^{p+1}=z_{m}^{p}+\tau\sum_{k=1}^{r}b_{k}\delta_t^{m,k}Z_{m}^{k},\quad \forall~p=0,1,\cdots,P\label{PRK_sub2}\\[1mm]
&Z_{m}^{k}=z_{i}^{k}+h\sum_{n=1}^{s}\widetilde{a}_{mn}\delta_{x}^{n,k}Z_{n}^{k},\quad \forall~i=0,1,\cdots,I\label{PRK_sub3}\\[1mm]
&z_{i+1}^{k}=z_{i}^{k}+h\sum_{m=1}^{s}\widetilde{b}_{m}\delta_{x}^{m,k}Z_{m}^{k},\quad \forall~i=0,1,\cdots,I\label{PRK_sub4}\\[1mm]
&\tau{\rm {\bf K}}\delta_{t}^{m,k}Z_{m}^{k}+\tau{\rm {\bf L}}\delta_{x}^{m,k}Z_{m}^{k}=\tau\nabla_{z}S_1(Z_{m}^{k})+\nabla_{z}S_2(Z_{m}^{k})\Delta W_m^k,\label{PRK_sub5}
\end{align}
\end{subequations}
where $\tau$ and $h$ are the step sizes in the temporal and spatial directions, respectively, and $\delta_t^{m,k}$ and $\delta_{x}^{m,k}$ denote discretizations of the partial derivatives $\partial_t$ and $\partial_x$, respectively. The increment $\Delta W_m^k:=W(t_{k+1},x_m,\omega)-W(t_{k},x_m,\omega)$ is given by
\begin{equation}\label{Q_process}
\Delta W_m^k(\omega):=\sum_{j=1}^{\infty}\sqrt{\eta_j}e_j(x_m)\left(\beta_j(t^{k+1},\omega)-\beta_j(t^k,\omega)\right)
\end{equation}
for all $\omega\in\Omega$. Here $e_j,~j\in\mathbb{N}$ is an orthonormal basis of $H$ consisting of eigenfunctions of $Q$ such that $Qe_j=\eta_j e_j,~j\in\mathbb{N}$ and $\beta_j,~j\in\mathbb{N}$ are independent real-valued Brownian motions on the
probability space $(\Omega,\mathcal{F},P,\{\mathcal{F}_{t}\}_{0\leq t\leq T})$.
In order not to complicate the notation, and for the sake of brevity, we shall use the abbreviations $\delta_{t}$ for $\delta_{t}^{m,k}$, $\delta_{x}$ for $\delta_{x}^{m,k}$, and so on.
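In simulations, the increment \eqref{Q_process} is sampled by truncating the series over the eigenbasis of $Q$. The following Python sketch is a minimal illustration under concrete choices not made in the paper: domain $[0,1]$, sine basis $e_j(x)=\sqrt{2}\sin(j\pi x)$, and eigenvalues $\eta_j=j^{-3}$ (any summable positive sequence would do for a trace-class $Q$).

```python
import math
import random

def q_wiener_increment(x, tau, J=100, rng=random):
    """Sample a truncated Q-Wiener increment
        Delta W = sum_{j<=J} sqrt(eta_j) e_j(x) (beta_j(t+tau) - beta_j(t))
    at one spatial point x.  The basis e_j(x) = sqrt(2) sin(j pi x) on [0,1]
    and eigenvalues eta_j = j**(-3) are illustrative choices, not taken from
    the paper; the Brownian increments are i.i.d. N(0, tau)."""
    return sum(
        math.sqrt(j ** -3) * math.sqrt(2.0) * math.sin(j * math.pi * x)
        * rng.gauss(0.0, math.sqrt(tau))
        for j in range(1, J + 1)
    )
```

At a fixed point $x$ the increment is Gaussian with mean zero and variance $\tau\sum_{j\leq J}\eta_j e_j(x)^2$, which the truncation approximates.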
\begin{definition}
A numerical method with the approximating solution $z_m^{k}$ for \eqref{SHPDE} is said to be stochastic multi-symplectic if it satisfies
\begin{equation}\label{def_sm}
\tau\delta_{t}\omega_{m}^{k}+h\delta_{x}\kappa_{m}^{k}=0,~a.s.,
\end{equation}
where $\omega_m^k=\frac{1}{2}{\rm d}z_{m}^{k}\wedge {\rm {\bf K}}{\rm d}z_{m}^{k}$, $\kappa_{m}^{k}=\frac{1}{2}{\rm d}z_{m}^{k}\wedge {\rm {\bf L}}{\rm d}z_{m}^{k}$.
\end{definition}
We now state the main theorem of this paper.
\begin{thm}
Assume that the coefficients $a_{kj},\widetilde{a}_{mn},b_k,\widetilde{b}_m$ of \eqref{PRK} satisfy the relations
\begin{align}\label{cond0}
b_kb_j-b_ka_{kj}-b_ja_{jk}=0~~{\rm and}~~\widetilde{b}_m\widetilde{b}_n-\widetilde{b}_m\widetilde{a}_{mn}-\widetilde{b}_n\widetilde{a}_{nm}=0,
\end{align}
for all $k,j=1,\cdots,r$ and $n,m=1,\cdots,s$. Then the stochastic RK method \eqref{PRK} is stochastic multi-symplectic and satisfies, almost surely, the discrete stochastic multi-symplectic conservation law
\begin{equation}\label{def_sm_prk}
\frac{\omega^{p+1}-\omega^{p}}{\tau}+\frac{\kappa_{i+1}-\kappa_{i}}{h}=0,\quad \forall~p=0,1,\cdots,P;~i=0,1,\cdots,I,
\end{equation}
where
\begin{equation*}
\begin{split}
\omega^p&=\frac{1}{2}\sum_{m=1}^s\widetilde{b}_m{\rm d}z_{m}^{p}\wedge {\rm {\bf K}}{\rm d}z_{m}^{p},\quad
\kappa_{i}=\frac{1}{2}\sum_{k=1}^rb_k{\rm d}z_{i}^{k}\wedge {\rm {\bf L}}{\rm d}z_{i}^{k}.
\end{split}
\end{equation*}
\end{thm}
\begin{pr}
Differentiating \eqref{PRK_sub1} and \eqref{PRK_sub2}, we obtain
\begin{subequations}
\begin{align}
{\rm d}Z_{m}^{k}&={\rm d}z_{m}^{p}+\tau\sum_{j=1}^{r}a_{kj}{\rm d}(\delta_tZ_{m}^{j}),\label{Diff_1}\\[1mm]
{\rm d}z_{m}^{p+1}&={\rm d}z_{m}^{p}+\tau\sum_{k=1}^{r}b_{k}{\rm d}(\delta_tZ_{m}^{k}).\label{Diff_2}
\end{align}
\end{subequations}
Then we have
\begin{equation}\label{Proof_1}
\begin{split}
{\rm d}z_m^{p+1}\wedge{\rm {\bf K}}{\rm d}z_m^{p+1}&=\left({\rm d}z_{m}^{p}+\tau\sum_{k=1}^{r}b_{k}{\rm d}\left(\delta_tZ_{m}^{k}\right)\right)\wedge{\rm {\bf K}}\left({\rm d}z_{m}^{p}+\tau\sum_{k=1}^{r}b_{k}{\rm d}\left(\delta_tZ_{m}^{k}\right)\right)\\
&={\rm d}z_{m}^{p}\wedge{\rm {\bf K}}{\rm d}z_{m}^{p}+\tau\sum_{k=1}^rb_k\left({\rm d}z_{m}^{p}\wedge{\rm {\bf K}}{\rm d}(\delta_tZ_{m}^{k})+{\rm d}(\delta_tZ_{m}^{k})\wedge{\rm {\bf K}}{\rm d}z_{m}^{p}\right)\\
&\quad\quad+\tau^2\sum_{k=1}^r\sum_{j=1}^rb_kb_j\left({\rm d}(\delta_tZ_{m}^{k})\wedge{\rm {\bf K}}{\rm d}(\delta_tZ_{m}^{j})\right).
\end{split}
\end{equation}
Substituting \eqref{Diff_1} into the second term on the right-hand side of \eqref{Proof_1} yields
\begin{equation*}
\begin{split}
{\rm d}z_m^{p+1}\wedge{\rm {\bf K}}{\rm d}z_m^{p+1}&={\rm d}z_{m}^{p}\wedge{\rm {\bf K}}{\rm d}z_{m}^{p}+\tau\sum_{k=1}^rb_k\left({\rm d}Z_m^k\wedge{\rm {\bf K}}{\rm d}(\delta_tZ_{m}^{k})+{\rm d}(\delta_tZ_{m}^{k})\wedge{\rm {\bf K}}{\rm d}Z_m^k\right)\\
&\quad\quad+\tau^2\sum_{k=1}^r\sum_{j=1}^r(b_kb_j-b_ka_{kj}-b_ja_{jk})\left({\rm d}(\delta_tZ_{m}^{k})\wedge{\rm {\bf K}}{\rm d}(\delta_tZ_{m}^{j})\right).
\end{split}
\end{equation*}
Using \eqref{cond0} and the skew-symmetry of matrix ${\rm {\bf K}}$, we have
\begin{equation}\label{Proof_2}
\begin{split}
{\rm d}z_m^{p+1}\wedge{\rm {\bf K}}{\rm d}z_m^{p+1}&={\rm d}z_{m}^{p}\wedge{\rm {\bf K}}{\rm d}z_{m}^{p}\\
&\quad\quad+\tau\sum_{k=1}^rb_k\left({\rm d}Z_m^k\wedge{\rm {\bf K}}{\rm d}(\delta_tZ_{m}^{k})+{\rm d}(\delta_tZ_{m}^{k})\wedge{\rm {\bf K}}{\rm d}Z_m^k\right)\\
&={\rm d}z_{m}^{p}\wedge{\rm {\bf K}}{\rm d}z_{m}^{p}+2\tau\sum_{k=1}^rb_k{\rm d}Z_m^k\wedge{\rm {\bf K}}{\rm d}(\delta_tZ_{m}^{k}).
\end{split}
\end{equation}
Similarly, one obtains
\begin{equation}\label{Proof_3}
\begin{split}
{\rm d}z_{i+1}^{k}\wedge{\rm {\bf L}}{\rm d}z_{i+1}^{k}&={\rm d}z_{i}^{k}\wedge{\rm {\bf L}}{\rm d}z_{i}^{k}\\
&\quad\quad+h\sum_{m=1}^s\widetilde{b}_m\left({\rm d}Z_m^k\wedge{\rm {\bf L}}{\rm d}(\delta_xZ_{m}^{k})+{\rm d}(\delta_xZ_{m}^{k})\wedge{\rm {\bf L}}{\rm d}Z_m^k\right)\\
&={\rm d}z_{i}^{k}\wedge{\rm {\bf L}}{\rm d}z_{i}^{k}+2h\sum_{m=1}^s\widetilde{b}_m{\rm d}Z_m^k\wedge{\rm {\bf L}}{\rm d}(\delta_xZ_{m}^{k}).
\end{split}
\end{equation}
Multiplying both sides of \eqref{Proof_2} and \eqref{Proof_3} by $\frac{1}{2}h\widetilde{b}_m$ and $\frac{1}{2}\tau b_k$, respectively, summing over all spatial grid
points $m$ and temporal grid points $k$, and adding the results, we have
\begin{equation}\label{Proof_4}
\begin{split}
\frac{h}{2}\sum_{m=1}^{s}&\widetilde{b}_m{\rm d}z_{m}^{p+1}\wedge{\rm {\bf K}}{\rm d}z_{m}^{p+1}+\frac{\tau}{2}\sum_{k=1}^{r}b_k{\rm d}z_{i+1}^{k}\wedge{\rm {\bf L}}{\rm d}z_{i+1}^{k}\\
&=\frac{h}{2}\sum_{m=1}^{s}\widetilde{b}_m{\rm d}z_{m}^{p}\wedge{\rm {\bf K}}{\rm d}z_{m}^{p}+\frac{\tau}{2}\sum_{k=1}^{r}b_k{\rm d}z_{i}^{k}\wedge{\rm {\bf L}}{\rm d}z_{i}^{k}\\
&+\tau h\sum_{m=1}^s\sum_{k=1}^{r}b_k\widetilde{b}_m\left({\rm d}Z_m^k\wedge{\rm {\bf K}}{\rm d}(\delta_tZ_{m}^{k})+{\rm d}Z_m^k\wedge{\rm {\bf L}}{\rm d}(\delta_xZ_{m}^{k})\right).
\end{split}
\end{equation}
Taking the differential in phase space on both sides of \eqref{PRK_sub5}, we deduce that
\begin{equation*}
{\rm {\bf K}}{\rm d}(\delta_{t}Z_{m}^{k})+{\rm {\bf L}}{\rm d}(\delta_{x}Z_{m}^{k})=D_{zz}S_1(Z_{m}^{k}){\rm d}Z_m^k+D_{zz}S_2(Z_{m}^{k}){\rm d}Z_m^k\frac{\Delta W_m^k}{\tau}
\end{equation*}
Taking the wedge product of ${\rm d}Z_m^k$ with the above equation yields
\begin{equation}\label{Proof_5}
{\rm d}Z_m^k\wedge{\rm {\bf K}}{\rm d}(\delta_{t}Z_{m}^{k})+{\rm d}Z_m^k\wedge{\rm {\bf L}}{\rm d}(\delta_{x}Z_{m}^{k})=0,
\end{equation}
where the equality is due to the symmetry of $D_{zz}S_1(Z_{m}^{k})$ and $D_{zz}S_2(Z_{m}^{k})$. Combining \eqref{Proof_4} and \eqref{Proof_5}, we get the stochastic multi-symplectic conservation law \eqref{def_sm_prk}.
\end{pr}
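Condition \eqref{cond0} coincides with the classical symplecticity condition on RK coefficients, so it can be checked mechanically for any given tableau. The Python sketch below (illustrative only) verifies it for the one-stage tableau $a=1/2$, $b=1$ constructed in the next subsection and for the two-stage Gauss method, and shows that the explicit Euler tableau fails it.

```python
def is_multisymplectic(A, b, tol=1e-12):
    """Check b_k b_j - b_k a_{kj} - b_j a_{jk} = 0 for all k, j,
    i.e. the multi-symplecticity condition from the theorem, for one
    of the two tableaux (the same check applies to the tilde tableau)."""
    r = len(b)
    return all(
        abs(b[k] * b[j] - b[k] * A[k][j] - b[j] * A[j][k]) < tol
        for k in range(r)
        for j in range(r)
    )

s3 = 3 ** 0.5
midpoint = ([[0.5]], [1.0])                        # a = 1/2, b = 1
gauss2 = ([[0.25, 0.25 - s3 / 6],
           [0.25 + s3 / 6, 0.25]], [0.5, 0.5])     # two-stage Gauss method
explicit_euler = ([[0.0]], [1.0])                  # not symplectic
```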
\subsection{One-stage stochastic multi-symplectic RK method}
In this subsection, we use the stochastic multi-symplectic conditions \eqref{cond0} to construct a one-stage stochastic multi-symplectic RK method. Consider one-stage stochastic RK methods given by the Butcher tableaux
\begin{center}
\begin{tabular}{c|c}
& a \\
\hline & b
\end{tabular},
~~~~\begin{tabular}{c|c}
& $\widetilde{a}$ \\
\hline & $\widetilde{b}$
\end{tabular}.
\end{center}
Using the stochastic multi-symplectic conditions \eqref{cond0}, the following results hold:
\begin{align}
b=2a,~ \widetilde{b}=2\widetilde{a}.
\end{align}
In particular, choosing $b=\widetilde{b}=1$, we obtain the scheme
\begin{center}
\begin{tabular}{c|c}
& {\rm 1/2} \\
\hline & {\rm 1}
\end{tabular},
~~~~\begin{tabular}{c|c}
& {\rm 1/2} \\
\hline & {\rm 1}
\end{tabular},
\end{center}
more precisely,
\begin{equation}
{\rm {\bf K}}\left(\frac{z_{i+\frac{1}{2}}^{p+1}-z_{i+\frac{1}{2}}^{p}}{\tau}\right)+{\rm {\bf L}}\left(\frac{z_{i+1}^{p+\frac{1}{2}}-z_{i}^{p+\frac{1}{2}}}{h}\right)=\nabla_{z}S_1(z_{i+\frac{1}{2}}^{p+\frac{1}{2}})+\nabla_{z}S_2(z_{i+\frac{1}{2}}^{p+\frac{1}{2}})\frac{\Delta W_{i+\frac{1}{2}}^{p+\frac{1}{2}}}{\tau},
\end{equation}
which is equivalent to the implicit midpoint scheme in \cite{JWH2013}.
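To illustrate why the midpoint choice is a natural one-stage scheme, the following sketch (a hypothetical minimal NumPy example, not taken from the paper) applies implicit midpoint steps to a deterministic linear test system $\dot z = Jz$ with $J$ skew-symmetric, and verifies that the quadratic invariant $|z|^2$ is conserved to machine precision:

```python
import numpy as np

# Implicit midpoint step for dz/dt = J z with J skew-symmetric:
# (z1 - z0)/tau = J (z0 + z1)/2  =>  (I - tau/2 J) z1 = (I + tau/2 J) z0.
# The propagator is a Cayley transform of a skew matrix, hence orthogonal,
# so |z|^2 is preserved exactly (up to roundoff).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
tau = 0.1
I = np.eye(2)
z = np.array([1.0, 0.0])
for _ in range(100):
    z = np.linalg.solve(I - 0.5 * tau * J, (I + 0.5 * tau * J) @ z)
assert abs(z @ z - 1.0) < 1e-10  # quadratic invariant preserved
```

This deterministic test problem is only a sanity check; the stochastic setting additionally requires the discrete noise increments of the full scheme.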
\section{Application of stochastic multi-symplectic RK methods}
In this section, we apply the stochastic RK methods \eqref{PRK} to the stochastic Maxwell equations with multiplicative noise arising in statistical radiophysics \cite{RKT1987}. We obtain sufficient conditions for the multi-symplecticity of stochastic RK methods, and we derive the discrete energy conservation law satisfied by the stochastic multi-symplectic RK methods.
Consider the following stochastic Maxwell equations with multiplicative noise, in the Stratonovich sense \cite{HJZC2017,LSY2010,RKT1987},
\begin{equation}\label{stochastic maxwell equations}
\begin{split}
{\rm d}\mathcal{E}(x,t)&=A_M\mathcal{E}(x,t){\rm d}t+B(\mathcal{E})(x,t)\circ {\rm d}W(t),~x\in D,~t>0\\
\mathcal{E}_0(x)&=\xi,~x\in D
\end{split}
\end{equation}
in $L^{2}(D)^6=L^{2}(D)^3\times L^{2}(D)^3$, $D\subset \mathbb{R}^3$, driven by a standard $Q$-Wiener process $W(t)$ and equipped with the perfect conductor boundary condition $n\times E=0$ on $\partial D$. Let $\mathcal{E}=\left(H^T,E^T\right)^T$ with $H=(H_1,H_2,H_3)$ and $E=(E_1,E_2,E_3)$; then the Maxwell operator is given by
\begin{equation*}
A_M\begin{pmatrix}
H\\
E
\end{pmatrix}=\begin{pmatrix}
0&-\nabla\times\\
\nabla\times&0
\end{pmatrix}\begin{pmatrix}
H\\
E
\end{pmatrix},
\end{equation*}
and
\begin{equation*}
B(\mathcal{E})=\begin{pmatrix}
0&\lambda\\
-\lambda &0
\end{pmatrix}\begin{pmatrix}
H\\
E
\end{pmatrix}.
\end{equation*}
In \cite{HJZC2017}, the authors show that the stochastic Maxwell equations can be written as
\begin{equation}\label{12}
{\rm {\bf K}}{\rm d}_{t}\mathcal{E}+{\rm {\bf L}}_{1}\mathcal{E}_{x}{\rm d}t+{\rm {\bf L}}_{2}\mathcal{E}_{y}{\rm d}t+{\rm {\bf L}}_{3}\mathcal{E}_{z}{\rm d}t=\nabla S(\mathcal{E})\circ
{\rm d}W(t),
\end{equation}
with
\begin{equation}\label{SS}
S(\mathcal{E})=\frac{\lambda}{2}\left(|E_1|^{2}+|E_2|^{2}+|E_3|^{2}+|H_1|^{2}+|H_2|^{2}+|H_3|^{2}\right)
\end{equation}
and
\begin{equation*}
{\rm {\bf K}}=\left(\begin{array}{ccccccc}
0&-I_{3\times3}\\[1mm]
I_{3\times3}&0\\[1mm]
\end{array}\right),\quad
{\rm {\bf L}}_{i}=\left(\begin{array}{cccccccc}
\mathcal{D}_{i}&0\\[1mm]
0&\mathcal{D}_{i}\\[1mm]
\end{array}
\right),~~i=1, 2, 3.
\end{equation*}
Here $I_{3\times3}$ denotes the $3\times3$ identity matrix and
\begin{small}
$$
\mathcal{D}_{1}=\left(\begin{array}{ccccccc}
0&0&0\\[1mm]
0&0&-1\\[1mm]
0&~1&0\\[1mm]
\end{array}\right),
\mathcal{D}_{2}=\left(\begin{array}{ccccccc}
0&0&~1\\[1mm]
0&0&0\\[1mm]
-1&0&0\\[1mm]
\end{array}\right),
\mathcal{D}_{3}=\left(\begin{array}{ccccccc}
0&-1&0\\[1mm]
~1&0&0\\[1mm]
0&0&0\\[1mm]
\end{array}\right).
$$
\end{small}
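For concreteness, the structure of these matrices can be checked numerically. The following sketch (assuming NumPy; not part of the paper) builds ${\rm {\bf K}}$ and ${\rm {\bf L}}_i$ and verifies their skew-symmetry and the non-singularity of ${\rm {\bf K}}$:

```python
import numpy as np

# The matrices K and L_i from the multi-symplectic form of the
# stochastic Maxwell equations; all four are skew-symmetric.
I3 = np.eye(3)
Z3 = np.zeros((3, 3))
K = np.block([[Z3, -I3], [I3, Z3]])
D1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
D2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
D3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
Ls = [np.block([[D, Z3], [Z3, D]]) for D in (D1, D2, D3)]
for M in [K] + Ls:
    assert np.allclose(M, -M.T)      # skew-symmetry
assert abs(np.linalg.det(K)) > 0.5   # K is nonsingular (det(K) = 1)
```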
Now we investigate the stochastic multi-symplecticity of RK methods for the equations \eqref{stochastic maxwell equations}. The stochastic RK method \eqref{PRK} applied to \eqref{stochastic maxwell equations} reads
\begin{subequations}\label{RK00}
\begin{align}
&\Upsilon_{mplk}=\mathcal{E}_{mpl}^{\rho}+\tau\sum_{j=1}^{r}a_{kj}\delta_t\Upsilon_{mplj},\quad \forall~\rho=0,1,\cdots,N\label{GME_sub1}\\[1mm]
&\mathcal{E}_{mpl}^{\rho+1}=\mathcal{E}_{mpl}^{\rho}+\tau\sum_{k=1}^{r}b_{k}\delta_t\Upsilon_{mplk},\quad \forall~\rho=0,1,\cdots,N\label{GME_sub2}\\[1mm]
&\Upsilon_{mplk}=\mathcal{E}_{i_1pl}^{k}+\Delta x\sum_{n=1}^{s}\widetilde{a}_{mn}\delta_{x}\Upsilon_{nplk},\quad \forall~i_1=0,1,\cdots,I_1\label{GME_sub3}\\[1mm]
&\mathcal{E}_{(i_1+1)pl}^{k}=\mathcal{E}_{i_1pl}^{k}+\Delta x\sum_{m=1}^{s}\widetilde{b}_{m}\delta_{x}\Upsilon_{mplk},\quad \forall~i_1=0,1,\cdots,I_1\label{GME_sub4}\\[1mm]
&\Upsilon_{mplk}=\mathcal{E}_{mi_2l}^{k}+\Delta y\sum_{q=1}^{\iota}\overline{a}_{pq}\delta_{y}\Upsilon_{mqlk},\quad \forall~i_2=0,1,\cdots,I_2\label{GME_sub5}\\[1mm]
&\mathcal{E}_{m(i_{2}+1)l}^{k}=\mathcal{E}_{mi_2l}^{k}+\Delta y\sum_{p=1}^{\iota}\overline{b}_{p}\delta_{y}\Upsilon_{mplk},\quad \forall~i_2=0,1,\cdots,I_2\label{GME_sub6}\\[1mm]
&\Upsilon_{mplk}=\mathcal{E}_{mpi_3}^{k}+\Delta z\sum_{v=1}^{\sigma}\widehat{a}_{lv}\delta_{z}\Upsilon_{mpv k},\quad \forall~i_3=0,1,\cdots,I_3\label{GME_sub7}\\[1mm]
&\mathcal{E}_{mp(i_3+1)}^{k}=\mathcal{E}_{mpi_3}^{k}+\Delta z\sum_{u=1}^{\sigma}\widehat{b}_{u}\delta_{z}\Upsilon_{mpuk},\quad \forall~i_3=0,1,\cdots,I_3\label{GME_sub8}\\[1mm]
&\tau{\rm {\bf K}}\delta_{t}\Upsilon_{mplk}+\tau\sum_{i=1}^3{\rm {\bf L}}_{i}\delta_{x_i}\Upsilon_{mplk}=\nabla S(\Upsilon_{mplk})\Delta W_m^k.\label{GME_sub9}
\end{align}
\end{subequations}
\begin{thm}
Assume that the coefficients $a_{kj},\widetilde{a}_{mn},\overline{a}_{pq},\widehat{a}_{lv},b_k,\widetilde{b}_m,\overline{b}_{p},\widehat{b}_{u}$ of \eqref{RK00} satisfy the relations
\begin{align}\label{cond1}
\begin{split}
&b_kb_j-b_ka_{kj}-b_ja_{jk}=0,\qquad \overline{b}_p\overline{b}_q-\overline{b}_p\overline{a}_{pq}-\overline{b}_q\overline a_{qp}=0\\
&\widetilde{b}_m\widetilde{b}_n-\widetilde{b}_m\widetilde{a}_{mn}-\widetilde{b}_n\widetilde{a}_{nm}=0,\qquad \widehat{b}_l\widehat{b}_v-\widehat{b}_l\widehat{a}_{lv}-\widehat{b}_v\widehat{a}_{vl}=0
\end{split}
\end{align}
for all $k,j=1,\cdots,r$, $n,m=1,\cdots,s$, $p,q=1,\cdots,\iota$ and $l,v=1,\cdots,\sigma$. Then the stochastic RK methods \eqref{RK00} are stochastic multi-symplectic with the discrete stochastic multi-symplectic conservation law
\begin{equation}\label{def_sm_rk_maxwell}
\frac{\omega^{\rho+1}-\omega^{\rho}}{\tau}+\frac{\kappa_{i_1+1}^{(1)}-\kappa_{i_1}^{(1)}}{\Delta x}+\frac{\kappa_{i_2+1}^{(2)}-\kappa_{i_2}^{(2)}}{\Delta y}+\frac{\kappa_{i_3+1}^{(3)}-\kappa_{i_3}^{(3)}}{\Delta z}=0,~a.s.,
\end{equation}
where
\begin{equation*}
\begin{split}
\omega^\rho&=\frac{1}{2}\sum_{m=1}^{s}\sum_{p=1}^{\iota}\sum_{l=1}^{\sigma}\widetilde{b}_m\overline{b}_p\widehat{b}_l
{\rm d}z_{mpl}^{\rho}\wedge {\rm {\bf K}}{\rm d}z_{mpl}^{\rho},\quad \rho=0,1,\cdots,N,\\
\kappa_{i_1}^{(1)}&=\frac{1}{2}\sum_{k=1}^{r}\sum_{p=1}^{\iota}\sum_{l=1}^{\sigma}b_k\overline{b}_p\widehat{b}_l
{\rm d}z_{i_1 pl}^{k}\wedge {\rm {\bf L}}_1{\rm d}z_{i_1 pl}^{k},\quad i_1=0,1,\cdots,I_1,\\
\kappa_{i_2}^{(2)}&=\frac{1}{2}\sum_{k=1}^{r}\sum_{m=1}^{s}\sum_{l=1}^{\sigma}b_k\widetilde{b}_m\widehat{b}_l
{\rm d}z_{mi_2 l}^{k}\wedge {\rm {\bf L}}_2{\rm d}z_{mi_2 l}^{k},\quad i_2=0,1,\cdots,I_2,\\
\kappa_{i_3}^{(3)}&=\frac{1}{2}\sum_{k=1}^{r}\sum_{m=1}^{s}\sum_{p=1}^{\iota}b_k\widetilde{b}_m\overline{b}_p
{\rm d}z_{mpi_3}^{k}\wedge {\rm {\bf L}}_3{\rm d}z_{mpi_3}^{k},\quad i_3=0,1,\cdots,I_3.
\end{split}
\end{equation*}
\end{thm}
\begin{pr}
The proof is analogous to that of Theorem \ref{cond0}, so we omit it here.
\end{pr}
For the stochastic Maxwell equations \eqref{stochastic maxwell equations}, it is shown in \cite{HJZC2017} that the energy is invariant, i.e.,
\begin{equation*}
I(t):=\int_{D}(|\textbf{E}(x,y,z,t)|^{2}+|\textbf{H}(x,y,z,t)|^{2})\,{\rm d}x\,{\rm d}y\,{\rm d}z={\rm const},~a.s.
\end{equation*}
In the context of our methods, it is natural to ask whether this energy functional $I(t)$ remains invariant under the fully discrete scheme, i.e., whether the integrability persists after discretization. The result is stated in the following theorem.
\begin{thm}\label{Theorem}
Under the periodic boundary condition and the conditions \eqref{cond1}, the stochastic Runge-Kutta methods \eqref{RK00} have the following discrete energy conservation law, that is, for all $\rho=0,1,\cdots,N$
\begin{align}\label{58}
\begin{split}
\sum_{i_1}&\sum_{i_2}\sum_{i_3}\sum_{m=1}^{s}\sum_{p=1}^{\iota}\sum_{l=1}^{\sigma}\widetilde{b}_{m}\overline{b}_p\widehat{b}_l
\Big(|\mathbf{E}_{mpl;i_1,i_2,i_3}^{\rho+1}|^2+|\mathbf{H}_{mpl;i_1,i_2,i_3}^{\rho+1}|^2\Big)\\
&=\sum_{i_1}\sum_{i_2}\sum_{i_3}\sum_{m=1}^{s}\sum_{p=1}^{\iota}\sum_{l=1}^{\sigma}\widetilde{b}_{m}\overline{b}_p\widehat{b}_l
\Big(|\mathbf{E}_{mpl;i_1,i_2,i_3}^{\rho}|^2+|\mathbf{H}_{mpl;i_1,i_2,i_3}^{\rho}|^2\Big),~a.s.
\end{split}
\end{align}
\end{thm}
\begin{pr}
Define the energy functional $\zeta(\mathcal{E})=\mathcal{E}^T\mathcal{E}$; then we have
\begin{equation*}\label{Proof_6}
\begin{split}
\zeta(\mathcal{E}_{mpl}^{\rho+1})- \zeta(\mathcal{E}_{mpl}^{\rho})&=\left(\mathcal{E}_{mpl}^{\rho+1}\right)^T\mathcal{E}_{mpl}^{\rho+1}-\left(\mathcal{E}_{mpl}^{\rho}\right)^T\mathcal{E}_{mpl}^{\rho}\\
&=\left(\mathcal{E}_{mpl}^{\rho+1}\right)^T\left(\mathcal{E}_{mpl}^{\rho+1}-\mathcal{E}_{mpl}^{\rho}\right)+\left(\mathcal{E}_{mpl}^{\rho+1}-\mathcal{E}_{mpl}^{\rho}\right)^T\mathcal{E}_{mpl}^{\rho}. \end{split}
\end{equation*}
It follows from \eqref{GME_sub1} and \eqref{GME_sub2} that
\begin{equation}\label{Proof_7}
\begin{split}
&\zeta(\mathcal{E}_{mpl}^{\rho+1})- \zeta(\mathcal{E}_{mpl}^{\rho})=\tau\left(\mathcal{E}_{mpl}^{\rho+1}\right)^T\sum_{k=1}^{r}b_k\delta_t\Upsilon_{mplk}+\tau\sum_{k=1}^{r}b_k\left(\delta_t\Upsilon_{mplk}\right)^T\mathcal{E}_{mpl}^{\rho}\\
&=\tau\left(\mathcal{E}_{mpl}^{\rho}+\tau\sum_{k=1}^{r}b_k\delta_t\Upsilon_{mplk}\right)^T\sum_{k=1}^{r}b_k\delta_t\Upsilon_{mplk}+\tau\sum_{k=1}^{r}b_k\left(\delta_t\Upsilon_{mplk}\right)^T\mathcal{E}_{mpl}^{\rho}\\
&=\tau\sum_{k=1}^{r}b_k\left((\mathcal{E}_{mpl}^{\rho})^T\delta_t\Upsilon_{mplk}+(\delta_t\Upsilon_{mplk})^T\mathcal{E}_{mpl}^{\rho}\right)+\tau^2\sum_{k=1}^{r}\sum_{j=1}^{r}b_kb_j(\delta_t\Upsilon_{mplk})^T(\delta_t\Upsilon_{mplj})\\
&=\tau\sum_{k=1}^{r}b_k\left(\left(\Upsilon_{mplk}-\tau\sum_{j=1}^{r}a_{kj}\delta_t \Upsilon_{mplj}\right)^T\delta_t\Upsilon_{mplk}+\left(\delta_t\Upsilon_{mplk}\right)^T\left(\Upsilon_{mplk}-\tau\sum_{j=1}^{r}a_{kj}\delta_t \Upsilon_{mplj}\right)\right)\\
&~~~~~~~~~~~~+\tau^2\sum_{k=1}^{r}\sum_{j=1}^{r}b_kb_j(\delta_t\Upsilon_{mplk})^T(\delta_t\Upsilon_{mplj})\\
&=2\tau\sum_{k=1}^{r}b_k\Upsilon_{mplk}^T\delta_t\Upsilon_{mplk}+\tau^2\sum_{k=1}^{r}\sum_{j=1}^{r}\left(b_kb_j-b_ka_{kj}-b_ja_{jk}\right)(\delta_t\Upsilon_{mplk})^T(\delta_t\Upsilon_{mplj})\\
&=2\tau\sum_{k=1}^{r}b_k\Upsilon_{mplk}^T\delta_t\Upsilon_{mplk},
\end{split}
\end{equation}
where the last equality is due to \eqref{cond1}.
Denote $\Lambda_i:={\rm {\bf K}}^{-1}{\rm {\bf L}}_i,~i=1,2,3$, and define $\alpha_i(\mathcal{E})=\mathcal{E}^T\Lambda_i\mathcal{E}$; then we obtain
\begin{equation*}\label{Proof_8}
\begin{split}
\alpha_1(\mathcal{E}_{(i_1+1)pl}^{\rho})&- \alpha_1(\mathcal{E}_{i_1pl}^{\rho})=\left(\mathcal{E}_{(i_1+1)pl}^{\rho}\right)^T\Lambda_1\mathcal{E}_{(i_1+1)pl}^{\rho}-\left(\mathcal{E}_{i_1pl}^{\rho}\right)^T\Lambda_1\mathcal{E}_{i_1pl}^{\rho}\\
&=\left(\mathcal{E}_{(i_1+1)pl}^{\rho}\right)^T\Lambda_1\left(\mathcal{E}_{(i_1+1)pl}^{\rho}-\mathcal{E}_{i_1pl}^{\rho}\right)+\left(\mathcal{E}_{(i_1+1)pl}^{\rho}-\mathcal{E}_{i_1pl}^{\rho}\right)^T\Lambda_1\mathcal{E}_{i_1pl}^{\rho}. \end{split}
\end{equation*}
It follows from \eqref{GME_sub3} and \eqref{GME_sub4} that
\begin{equation}\label{Proof_9}
\begin{split}
&\alpha_1(\mathcal{E}_{(i_1+1)pl}^{\rho})- \alpha_1(\mathcal{E}_{i_1pl}^{\rho})\\
&=\Delta x\left(\mathcal{E}_{(i_1+1)pl}^{\rho}\right)^T\Lambda_1\sum_{m=1}^{s}\tilde{b}_m\delta_x\Upsilon_{mplk}+\Delta x\sum_{m=1}^{s}\tilde{b}_m\left(\delta_x\Upsilon_{mplk}\right)^T\Lambda_1\mathcal{E}_{i_1pl}^{\rho}\\
&=\Delta x\left(\mathcal{E}_{i_1pl}^{\rho}+\Delta x\sum_{m=1}^{s}\tilde{b}_m\delta_x\Upsilon_{mplk}\right)^T\Lambda_1\sum_{m=1}^{s}\tilde{b}_m\delta_x\Upsilon_{mplk}+\Delta x\sum_{m=1}^{s}\tilde{b}_m\left(\delta_x\Upsilon_{mplk}\right)^T\Lambda_1\mathcal{E}_{i_1pl}^{\rho}\\
&=\Delta x\sum_{m=1}^{s}\tilde{b}_m\left((\mathcal{E}_{i_1pl}^{\rho})^T\Lambda_1\delta_x\Upsilon_{mplk}+(\delta_x\Upsilon_{mplk})^T\Lambda_1\mathcal{E}_{i_1pl}^{\rho}\right)+\Delta x^2\sum_{m=1}^{s}\sum_{n=1}^{s}\tilde{b}_m\tilde{b}_n(\delta_x\Upsilon_{mplk})^T\Lambda_1\delta_x\Upsilon_{nplk}\\
&=\Delta x\sum_{m=1}^{s}\tilde{b}_m\Bigg(\left(\Upsilon_{mplk}-\Delta x\sum_{n=1}^{s}\tilde{a}_{mn}\delta_x \Upsilon_{nplk}\right)^T\Lambda_1\delta_x\Upsilon_{mplk}\\
&+\left(\delta_x\Upsilon_{mplk}\right)^T\Lambda_1\left(\Upsilon_{mplk}-\Delta x\sum_{n=1}^{s}\tilde{a}_{mn}\delta_x \Upsilon_{nplk}\right)\Bigg)+\Delta x^2\sum_{m=1}^{s}\sum_{n=1}^{s}\tilde{b}_m\tilde{b}_n(\delta_x\Upsilon_{mplk})^T\Lambda_1(\delta_x\Upsilon_{nplk})\\
&=2\Delta x\sum_{m=1}^{s}\tilde{b}_m\Upsilon_{mplk}^T\Lambda_1\delta_x\Upsilon_{mplk}+\Delta x^2\sum_{m=1}^{s}\sum_{n=1}^{s}\left(\tilde{b}_m\tilde{b}_n-\tilde{b}_m\tilde{a}_{mn}-\tilde{b}_n\tilde{a}_{nm}\right)(\delta_x\Upsilon_{mplk})^T\Lambda_1(\delta_x\Upsilon_{nplk})\\
&=2\Delta x\sum_{m=1}^{s}\tilde{b}_m\Upsilon_{mplk}^T\Lambda_1\delta_x\Upsilon_{mplk},
\end{split}
\end{equation}
where the last two equalities are due to the symmetry of matrix $\Lambda_1$ and the multi-symplectic conditions \eqref{cond1}, respectively.
Similarly, we can prove that
\begin{equation}\label{Proof_10}
\begin{split}
\alpha_2(\mathcal{E}_{m(i_2+1)l}^{\rho})- \alpha_2(\mathcal{E}_{mi_2l}^{\rho})=2\Delta y\sum_{p=1}^{\iota}\bar{b}_p\Upsilon_{mplk}^T\Lambda_2\delta_y\Upsilon_{mplk},
\end{split}
\end{equation}
\begin{equation}\label{Proof_11}
\begin{split}
\alpha_3(\mathcal{E}_{mp(i_3+1)}^{\rho})- \alpha_3(\mathcal{E}_{mpi_3}^{\rho})=2\Delta z\sum_{l=1}^{\sigma}\hat{b}_l\Upsilon_{mplk}^T\Lambda_3\delta_z\Upsilon_{mplk}.
\end{split}
\end{equation}
Next, using the non-singularity of the matrix ${\rm {\bf K}}$ and the equality \eqref{SS}, we multiply both sides of \eqref{GME_sub9} by $\Upsilon_{mplk}^T{\rm {\bf K}}^{-1}$ and obtain
\begin{align}\label{variable}
\begin{split}
\Upsilon_{mplk}^T\delta_t\Upsilon_{mplk}+\Upsilon_{mplk}^T\sum_{i=1}^3\Lambda_i\delta_{x_i}\Upsilon_{mplk}=\lambda\Upsilon_{mplk}^T{\rm {\bf K}}^{-1}\Upsilon_{mplk}\frac{\Delta W_m^k}{\tau}.
\end{split}
\end{align}
Since ${\rm {\bf K}}^{-1}$ inherits the skew-symmetry of ${\rm {\bf K}}$, the right-hand side of \eqref{variable} vanishes; summing over $m,p,l,k$ with the weights $b_k\tilde{b}_m\bar{b}_p\hat{b}_l$, we have
\begin{equation*}\label{111}
\sum_{k=1}^{r}\sum_{m=1}^{s}\sum_{p=1}^{\iota}\sum_{l=1}^{\sigma}b_k\tilde{b}_m\bar{b}_p\hat{b}_l\left(\Upsilon_{mplk}^T\delta_t\Upsilon_{mplk}+\Upsilon_{mplk}^T\sum_{i=1}^3\Lambda_i\delta_{x_i}\Upsilon_{mplk}\right)=0,
\end{equation*}
Then, multiplying the above equation by $2\tau\Delta x\Delta y\Delta z$ and substituting \eqref{Proof_7}, \eqref{Proof_9}, \eqref{Proof_10} and \eqref{Proof_11}, we obtain
\begin{align*}
\begin{split}
\Bigg(\Delta x\Delta y\Delta z&\sum_{m=1}^{s}\sum_{p=1}^{\iota}\sum_{l=1}^{\sigma}\tilde{b}_m\bar{b}_p\hat{b}_l\left(\zeta(\mathcal{E}_{mpl}^{\rho+1})- \zeta(\mathcal{E}_{mpl}^{\rho})\right)\\
&+\tau\Delta y\Delta z\sum_{k=1}^{r}\sum_{p=1}^{\iota}\sum_{l=1}^{\sigma}b_k\bar{b}_p\hat{b}_l\left(\alpha_1(\mathcal{E}_{(i_1+1)pl}^{\rho})- \alpha_1(\mathcal{E}_{i_1pl}^{\rho})\right)\\
&+\tau\Delta x\Delta z\sum_{k=1}^{r}\sum_{m=1}^{s}\sum_{l=1}^{\sigma}b_k\tilde{b}_m\hat{b}_l\left(\alpha_2(\mathcal{E}_{m(i_2+1)l}^{\rho})- \alpha_2(\mathcal{E}_{mi_2l}^{\rho})\right)\\
&+\tau\Delta x\Delta y\sum_{k=1}^{r}\sum_{m=1}^{s}\sum_{p=1}^{\iota}b_k\tilde{b}_m\bar{b}_p\left(\alpha_3(\mathcal{E}_{mp(i_3+1)}^{\rho})- \alpha_3(\mathcal{E}_{mpi_3}^{\rho})\right)
\Bigg)=0.
\end{split}
\end{align*}
Summing over $i_1, i_2, i_3$ across the spatial domain and using the periodic boundary condition, we obtain \eqref{58}.
\end{pr}
\begin{rmk}
The results of these two theorems are consistent with the stochastic multi-symplectic conservation law and the discrete energy conservation law in \cite{HJZC2017}, respectively; that is, both conservation laws are exactly preserved by the proposed stochastic multi-symplectic RK methods \eqref{RK00}-\eqref{cond1}.
\end{rmk}
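The conditions \eqref{cond1} are exactly the classical symplecticity conditions on each Butcher tableau, and are satisfied, for instance, by Gauss collocation methods. The following small numerical check (a hypothetical NumPy sketch, not from the paper) verifies the cancellation $b_kb_j-b_ka_{kj}-b_ja_{jk}=0$ used in the last step of \eqref{Proof_7} for the 2-stage Gauss tableau:

```python
import numpy as np

# Symplecticity condition b_k*b_j - b_k*a_kj - b_j*a_jk = 0,
# checked for the 2-stage Gauss method (order 4).
s = np.sqrt(3) / 6
A = np.array([[0.25, 0.25 - s], [0.25 + s, 0.25]])
b = np.array([0.5, 0.5])
# M[k, j] = b_k*b_j - b_k*a_kj - b_j*a_jk
M = np.outer(b, b) - b[:, None] * A - (b[:, None] * A).T
assert np.allclose(M, 0.0)
```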
For general stochastic Hamiltonian PDEs in the Stratonovich sense
\begin{align}
{\rm {\bf K}}{\rm d}_tz+\sum_{m=1}^{M}{\rm {\bf L}}_mz_{x_m}{\rm d}t=\nabla_z S_1(z){\rm d}t+\nabla_z S_2(z)\circ{\rm d}W(t),~~z\in\mathbb{R}^n,
\end{align}
where ${\rm {\bf K}}$ and ${\rm {\bf L}}_m,~m=1,2,\cdots,M$ are skew-symmetric matrices, we can obtain the following corollary.
\begin{corr}
Let the matrix ${\rm {\bf K}}$ be nonsingular and the Hamiltonian functions $S_1=\frac{1}{2}z^T{\rm {\bf A}}z,~S_2=\frac{1}{2}z^T{\rm {\bf B}}z$, where ${\rm {\bf A}},~{\rm {\bf B}}$ are symmetric matrices. If ${\rm {\bf K}}^{-1}{\rm {\bf A}}$ and ${\rm {\bf K}}^{-1}{\rm {\bf B}}$ are skew-symmetric matrices, then for all $\rho=0,1,\cdots,N$, it holds
\begin{align}
\begin{split}
\sum_{i_1}\cdots\sum_{i_M}&\sum_{j_1=1}^{s_1}\cdots\sum_{j_M=1}^{s_M}b_{j_1}^{(1)}\cdots b_{j_M}^{(M)}|z_{j_1\cdots j_M;i_1,\cdots,i_M}^{\rho+1}|^2\\
&=\sum_{i_1}\cdots\sum_{i_M}\sum_{j_1=1}^{s_1}\cdots\sum_{j_M=1}^{s_M}b_{j_1}^{(1)}\cdots b_{j_M}^{(M)}|z_{j_1\cdots j_M;i_1,\cdots,i_M}^{\rho}|^2,~a.s.,
\end{split}
\end{align}
where $i_1,\cdots,i_M$ denote the spatial grid points, $s_1,\cdots,s_M$ denote the numbers of stages of the stochastic RK methods applied in the spatial directions, and $|z|^2$ denotes the sum of the squared components, i.e., $|z|^2:=|z_1|^2+\cdots+|z_n|^2$.
\end{corr}
\section{Conclusions}
In this paper, we construct a class of stochastic RK methods for stochastic Hamiltonian PDEs and give sufficient conditions for the stochastic multi-symplecticity of the constructed methods. We then apply these techniques to the three-dimensional stochastic Maxwell equations with multiplicative noise in the Stratonovich sense, and show that the proposed stochastic RK methods preserve the discrete stochastic multi-symplectic conservation law and the discrete energy conservation law almost surely. The theoretical analysis of the convergence of stochastic multi-symplectic RK methods is difficult, however, and we will study it rigorously in future work.
\parindent=6mm
\end{document}
\begin{document}
\title{Generalized Ramsey numbers: forbidding paths with few colors}
\begin{abstract}
Let $f(K_n, H, q)$ be the minimum number of colors needed to edge-color $K_n$ so that every copy of $H$ is colored with at least $q$ colors. Originally posed by Erd\H{o}s and Shelah when $H$ is complete, the asymptotics of this extremal function have been extensively studied when $H$ is a complete graph or a complete balanced bipartite graph. Here we investigate this function for some other $H$, and in particular we determine the asymptotic behavior of $f(K_n, P_v, q)$ for almost all values of $v$ and $q$, where $P_v$ is a path on $v$ vertices.
\end{abstract}
\section{Introduction}
The classical graph Ramsey problem asks, ``for a given $k$ and $p$, what is the smallest $n$ such that every edge-coloring of $K_n$ with $k$ colors contains a monochromatic $K_p$?'' We may invert the roles of $k$ and $n$ in this question: for a given $n$ and $p$, what is the largest $k$ such that every edge-coloring of $K_n$ with $k$ colors contains a monochromatic $K_p$? This is equivalent to determining the smallest $k+1$ such that there exists an edge-coloring of $K_n$ with $k+1$ colors such that every copy of $K_p$ contains at least $2$ colors. Let $f(G, H, q)$ be the minimum number of colors needed to edge-color $G$ so that every copy of $H$ in $G$ contains at least $q$ colors.\footnote{In the literature these are called $(H, q)$-colorings, and $G$ is typically taken to be $K_n$ or $K_{n,n}$ to ensure no difficulty in finding copies of $H$; see for example \cite{axenovich}.} The classical Ramsey problem is then equivalent to determining $f(K_n, K_p, 2)$; as an example from \cite{gyarfas}, the Ramsey number $R(3,3,3) = 17$ is equivalent to $f(K_{16}, K_3, 2) = 3$ and $f(K_{17}, K_3, 2) = 4$. Additionally, $f(G, H, e(H))$ can be viewed as a variant of the rainbow Ramsey problem, where every copy of $H$ must receive all distinct colors (as defined in \cite{rainbow}). In this way studying $f(G, H, q)$ for general $q$ bridges the gap between these problems.
The function $f(K_n, K_p, q)$ was first introduced by Erd\H{o}s and Shelah (see Section 5 of \cite{earlyerdos}), and while Elekes, Erd\H{o}s, and F\"uredi had some preliminary results (described in Section 9 of \cite{erdos}), the problem was first systematically studied by Erd\H{o}s and Gy\'arf\'as \cite{gyarfas}. They focused on the asymptotics of $f(K_n, K_p, q)$ for large $n$, with $p$ and $q$ fixed. They also determined the \textbf{linear and quadratic thresholds} of this function, that is, they determined the smallest $q(p)$ such that $f(K_n, K_p, q(p)) = \Omega(n)$ (or $=\Omega(n^2)$, resp.). Axenovich, F\"uredi, and Mubayi \cite{axenovich} adapted these results to $f(K_{n,n}, K_{p,p}, q)$ in addition to relating these problems to other results in extremal combinatorics. Many others (only a few of which are mentioned here) have studied $f(K_n, K_p, q)$ and $f(K_{n,n}, K_{p,p}, q)$ for $q$ between the linear and quadratic thresholds \cite{sarklin,sarkbiplin}, above the quadratic threshold \cite{axenovich,sarkquad}, and at the `polynomial' threshold \cite{poly}. Besides two general results in \cite{axenovich}, little has been said for $f(K_n, H, q)$ when $H$ is not complete or complete bipartite.
The aim of this paper is to open up the study of $f(K_n, H, q)$ for general $H$. We consider $H$ and $q$ as fixed, determining the asymptotics of $f(K_n, H, q)$ in terms of $n$. In Section 2 we make some general observations for all $H$, supplementing those in \cite{axenovich}. One of the first Ramsey results for non-complete graphs is due to Gerencs\'er and Gy\'arf\'as \cite{gg}, who determined that every 2-coloring of $K_n$ has a monochromatic path of order $\ceil{(2n+1)/3}$. Inspired by this, in Section 3 we find the asymptotics of $f(K_n, P_v, q)$ for most $v$ and $q$, where $P_v$ denotes the path on $v$ vertices:
\begin{theorem}\label{maintheorem}
Let $v \geq 3$. The smallest $q(v)$ for which $f(K_n, P_v, q(v)) = \Theta(n^2)$ is $q(v) = \ceil{v/2}+1$. The largest $q(v)$ for which $f(K_n, P_v, q(v)) = \Theta(n)$ is $q(v) = \floor{v/2}$, unless $v=3$ or $v=5$, in which case $q(3) = 2$ and $q(5)=3$.
\end{theorem}
\begin{figure}
\caption{Bounds on $f(K_n, P_v, q)$}
\label{results}
\end{figure}
See Figure \ref{results} for a complete summary of the results of Section 3. The only undetermined cases are when $v \geq 7$ is odd and $q = \frac{v+1}{2}$. Here we can show that $f(K_n, P_v, q)$ is neither linear nor quadratic in $n$, but somewhere in between. It is not clear what the correct asymptotics are for these cases, so we pose the following problem.
\begin{problem}
Asymptotically determine $f(K_n, P_v, \frac{v+1}{2})$ for odd $v \geq 7$.
\end{problem}
Note that $f(K_n, P_v, q)$ is almost always either $\Theta(n)$ or $\Theta(n^2)$ (with the only notable exception mentioned above). This is in great contrast to the behavior of $f(K_n, K_p, q)$. Pohoata and Sheffer \cite{adam} showed that for $p \geq 2(m+1) \geq 6$,
\[ f\left(K_n, K_p, \binom{p}{2} - m\cdot \floor{\frac{p}{m+1}} + m + 1\right) = \Omega\left(n^{1+\frac{1}{m}}\right) ,\]
and they noted that Theorem \ref{LLL} (below) implies an upper bound of
\[ f\left(K_n, K_p, \binom{p}{2} - m\cdot \floor{\frac{p}{m+1}} + m + 1\right) = O\left( n^{1 + \frac{1}{m} + \varepsilon(p)} \right) ,\]
where $\varepsilon(p)$ goes to zero as $p$ grows. Thus, for each value of $m$, the upper bound gets asymptotically close to the lower bound for sufficiently large $p$. This means that $f(K_n, K_p, q)$, as a function of $q$, attains arbitrarily many asymptotic values (even while ignoring subpolynomial factors), for sufficiently large $p$. Similarly tight results can be obtained for the bipartite variant $f(K_{n,n}, K_{p,p}, q)$, using essentially the same method as \cite{adam}.
Define $T(H)$, the number of `tiers' for $H$, to be the number of $q \geq 2$ such that $f(K_n, H, q)$ and $f(K_n, H, q+1)$ differ by some polynomial factor. Above we concluded that $T(K_p)$ is some unbounded function of $p$, whereas Theorem \ref{maintheorem} and the results of Section 2 show that $T(P_v), T(S_t), T(tK_2)$ are all at most $3$ for all $v$ and $t$, where $S_t$ and $tK_2$ are a star and a matching on $t$ edges, respectively. It would be interesting to know for which other classes $\mathcal{H}$ of graphs $T(H)$ is bounded for all $H \in \mathcal{H}$. This may be the case when the graphs of $\mathcal{H}$ are sufficiently sparse, perhaps when they are all trees.
\subsection{Notation and Terminology}
The number of edges of $H$ is denoted by $e(H)$, while the number of vertices of $H$ is denoted by $v(H)$. A copy of $H$ in $G$ is a subgraph of $G$ isomorphic to $H$. All colorings are edge-colorings. For a color $c$, the $c$-degree of a vertex $u$, denoted $d_c(u)$, is the number of edges of color $c$ incident to $u$. The Tur\'an number of $H$, denoted $\text{ex}(n, H)$, is the largest number of edges an $n$-vertex graph may have without containing a copy of $H$. For a positive integer $t$, $tH$ is the disjoint union of $t$ copies of $H$.
We employ the following asymptotic notation throughout: $f(n) = O(g(n))$ means that there exists $c>0$ and $n_0$ such that $f(n) \leq c g(n)$ for all $n \geq n_0$. Similarly, $f(n) = \Omega(g(n))$ corresponds to $f(n) \geq c g(n)$. If $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$, then we say $f(n) = \Theta(g(n))$. All logarithms are base $2$.
It will often be helpful to think of $f(G, H, q)$ in terms of \textbf{repeated colors}. If color $c$ appears on exactly $r+1$ edges in a coloring of $H$, then we say color $c$ is repeated $r$ times. We say $H$ has $r$ `repetitions' or `repeats' if $r=\sum r_i$, where color $c_i$ is repeated $r_i$ times, the sum being over all colors. Note that if $H$ has $r$ repetitions and contains exactly $q$ colors, then $r+q = e(H)$. With this in mind, let $\overline{f}(G, H, r) = f(G, H, e(H)-r)$. For example, the number of colors required so that there are no `isosceles' triangles is $\overline{f}(K_n, K_3, 0) = f(K_n, K_3, 3)$.
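The bookkeeping identity $r+q=e(H)$ relating repeats and colors can be illustrated by a tiny helper (a hypothetical Python sketch, not part of the paper):

```python
from collections import Counter

# If color c appears on r_c + 1 edges of H, it is repeated r_c times;
# the total number of repeats r satisfies r + q = e(H),
# where q is the number of distinct colors used.
def repeats(edge_colors):
    counts = Counter(edge_colors)
    return sum(c - 1 for c in counts.values())

coloring = ['red', 'red', 'blue', 'red']   # a 4-edge H with 2 colors
r = repeats(coloring)
q = len(set(coloring))
assert r == 2 and q == 2
assert r + q == len(coloring)              # r + q = e(H)
```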
\section{General Observations}
A wide-reaching upper bound on $f(G, H, q)$ is achieved with a simple application of the Lov\'asz Local Lemma. This idea first appeared in \cite{gyarfas} but was stated in full generality in \cite{axenovich}. The following bound is close to asymptotically tight in many cases when $q$ is large.
\begin{theorem}\label{LLL}
Let $v(H) = v$ and $e(H) = e$. Then $f(K_n, H, q) = O\left(n^{\frac{v-2}{e-q+1}}\right)$, or equivalently $\overline{f}(K_n, H, r) = O\left(n^{\frac{v-2}{r+1}}\right)$.
\end{theorem}
In the tradition of Erd\H{o}s and Gy\'arf\'as \cite{gyarfas}, we ask what are the linear and quadratic thresholds for any graph $H$. The question of the linear threshold was solved for connected $H$ in \cite{axenovich}, whose result and proof we give here in slightly more generality for completeness.
\begin{theorem}\label{lin}
Let $c$ be the number of connected components of $H$, let $v(H) = v$, and let $e(H) = e$. Then $f(K_n, H, e-v+2+c) = \Omega(n)$ and $f(K_n, H, e-v+2) = O\left(n^{1-\frac{1}{v-1}}\right)$.
\end{theorem}
\begin{proof}
The upper bound comes from Theorem \ref{LLL}.
For the lower bound, we show that if one color class is too large, we can find a copy of $H$ with at least $v-1-c$ repeats. Let $F$ be an edge-maximal spanning forest of $H$ (since $H$ may not be connected), and let $T$ be a tree on $v$ vertices which contains $F$. Note that $T$ consists of $F$ along with $c-1$ more edges. It is well known that $\text{ex}(n, T) \leq vn$, that is, any graph on $n$ vertices with at least $vn$ edges contains every tree on $v$ vertices.\footnote{Indeed, a graph on $n$ vertices with $vn$ edges has average degree $2v$. Upon repeatedly deleting the vertices of degree less than $v$, we have removed fewer than $vn$ edges, and so we have a nonempty graph of minimum degree at least $v$, into which we can embed $T$. In fact, the Erd\H{o}s-S\'os conjecture states that $\text{ex}(n, T) \leq (v-2)n/2$, which Ajtai, Koml\'os, Simonovits, and Szemer\'edi have shown to be true for sufficiently large $v$ and $n$ (unpublished, see e.g., Section 6 of \cite{trees}).} If some color in a coloring of $K_n$ contains more than $vn$ edges, then we can find a monochromatic $T$ and thus a monochromatic $F$, which has $v-c-1$ repeats. This in turn gives a copy of $H$ with at least $v-c-1$ repeats. If we require each copy of $H$ to have at most $v-c-2$ repeats, then we must use at least $\binom{n}{2} / (vn)$ colors. More succinctly,
\[ \overline{f}(K_n, H, v-c-2) \geq \overline{f}(K_n, F, v-c-2) \geq \overline{f}(K_n, T, v-3) \geq \binom{n}{2}/\text{ex}(n,T) = \Omega(n) .\]
\end{proof}
Note that when $H$ is connected, Theorem \ref{lin} gives the linear threshold as $q=e-v+3$. What is the best one can do when $H$ is disconnected? The threshold of $q=e-v+2+c$ given by Theorem \ref{lin} is still correct for forests, since in that case $e-v+2+c = 2$. On the other hand, it is not immediately clear if the linear threshold for $H=2K_3$ is $q=4$ or $q=3$.
We now turn to the quadratic threshold, the smallest $q$ for which $f(K_n, H, q)$ is quadratic in $n$. We can answer this question exactly for some graphs, specifically those with a perfect matching and maximum degree at least $v/2$, where $v$ is the number of vertices:
\begin{prop}\label{exactquad}
Suppose $v(H) = v$, $H$ has a matching of size $\floor{v/2}$, and $H$ has a vertex of degree at least $\floor{v/2}$. Letting $e(H) = e$, we have $f(K_n, H, e-\floor{v/2}+2) = \Theta\left(n^2\right)$ and $f(K_n, H, e-\ceil{v/2}+1) = O\left(n^{2-4/v}\right)$.
\end{prop}
\begin{proof}
The upper bound $f(K_n, H, e-\ceil{v/2}+1) = O\left(n^{2-4/v}\right)$ comes from Theorem \ref{LLL}. The upper bound $f(K_n, H, e-\floor{v/2}+2) = O\left(n^2\right)$ is trivial.
The lower bound $\overline{f}(K_n, H, \floor{v/2}-2) = \Omega\left(n^2\right)$ is obtained as follows. Suppose we have a coloring of $K_n$ where every copy of $H$ has at most $\floor{v/2}-2$ repeats. If there is either a monochromatic matching or a monochromatic star with $\floor{v/2}$ edges, then there is a copy of $H$ with at least $\floor{v/2}-1$ repeats. For a given color, the endpoints of a maximal matching form a vertex cover of size less than $v$, each vertex of which is incident to fewer than $\floor{v/2}$ edges. Thus each color class has size less than $v^2/2$, and so there are at least $\frac{2}{v^2} \binom{n}{2}$ colors.
\end{proof}
Note that the proof of Proposition \ref{exactquad} relies on only two forbidden substructures, both of which are monochromatic. The more advanced proof of Pohoata and Sheffer \cite{adam} uses two forbidden substructures, but only one is monochromatic. The other contains many colors and is crucial to obtaining lower bounds of the form $\Omega\left(n^{\alpha}\right)$ for fractional $\alpha$ (for example, in Proposition \ref{pathodd} below).
Proposition \ref{exactquad} clearly generalizes to the following statement.
\begin{prop}\label{quad}
If $H$ has a matching of size $b$ and a vertex of degree at least $b$, then $f(K_n, H, e(H)-b+2) = \Theta\left(n^2\right)$, or equivalently, $\overline{f}(K_n, H, b-2) = \Theta\left(n^2\right)$.
\end{prop}
One may hope that the quadratic threshold for $H$ is always the one given in Proposition \ref{exactquad}, $q = e-\floor{v/2}+2$, even when the hypotheses are not satisfied. In fact, this is the case for paths (see Theorem \ref{maintheorem}). However, there are three extreme cases where the quadratic threshold does not even exist: a star with $t$ edges, denoted $S_t$, a matching with $t$ edges, denoted $t K_2$, and the triangle. In the following it is understood that $t \geq 2$. Recall that $\overline{f}(K_n, H, 0) = f(K_n, H, e(H))$ is the minimum number of colors needed so that every copy of $H$ is `rainbow.'
\begin{coloring}\label{order}
Label the vertices $1, \dots, n$ and color the edge between $i$ and $j$ with color $i$ if $i < j$. This shows $\overline{f}(K_n, tK_2, 0) = O(n)$.
\end{coloring}
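This coloring is easy to verify computationally. The sketch below (our own illustration, not part of the paper) checks for a small $n$ that every copy of $3K_2$ is rainbow:

```python
from itertools import combinations

def min_label_coloring(n):
    # Color the edge {i, j} with i, its smaller endpoint, as in the coloring above.
    return {(i, j): i for i, j in combinations(range(1, n + 1), 2)}

n = 8
coloring = min_label_coloring(n)
assert len(set(coloring.values())) == n - 1  # colors 1, ..., n-1 are used

# Every color class is a star centered at its color, so pairwise-disjoint
# edges get pairwise-distinct colors: every copy of tK_2 is rainbow.
edges = list(coloring)
for e1, e2, e3 in combinations(edges, 3):
    if len(set(e1) | set(e2) | set(e3)) == 6:  # a matching of size 3
        assert len({coloring[e1], coloring[e2], coloring[e3]}) == 3
```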
In fact, if we color the edge between vertex $n-1$ and vertex $n$ with color $n-2$, this gives a coloring showing that $\overline{f}(K_n, tK_2, 0) \leq n-2$. To see that $\overline{f}(K_n, tK_2, 0) \geq n-2$, note that every color class must either be a star or a triangle. The number of edges covered by $k$ stars is at most $(n-1) + \cdots + (n-k)$, so the number of edges covered by $n-3$ of these color classes is at most $(n-1) + \cdots + 3 < \binom{n}{2}$.
\begin{coloring}\label{fact}
The edge set of $K_n$ can be partitioned into matchings of size $\floor{n/2}$ (when $n$ is even this partition is known as a $1$-factorization). Color the edges of $K_n$ according to which matching contains them. This coloring uses either $n$ or $n-1$ colors, and shows that $\overline{f}(K_n, S_t, 0) = O(n)$.
\end{coloring}
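For even $n$, a $1$-factorization can be produced by the standard round-robin (circle) method. The sketch below (our own illustration) builds one and checks the properties used above:

```python
from itertools import combinations

def one_factorization(n):
    # Circle method: fix vertex n-1 and rotate 0..n-2; yields n-1 perfect matchings.
    assert n % 2 == 0
    rounds = []
    for r in range(n - 1):
        pairs = [(r, n - 1)]  # the fixed vertex n-1 plays r in round r
        for k in range(1, n // 2):
            a, b = (r + k) % (n - 1), (r - k) % (n - 1)
            pairs.append((min(a, b), max(a, b)))
        rounds.append(pairs)
    return rounds

n = 10
rounds = one_factorization(n)
assert len(rounds) == n - 1  # n - 1 colors when n is even
for rnd in rounds:
    # each color class is a perfect matching: all endpoints distinct
    assert len({v for e in rnd for v in e}) == n
all_edges = [e for rnd in rounds for e in rnd]
assert sorted(all_edges) == sorted(combinations(range(n), 2))  # partition of E(K_n)
```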
This coloring actually shows that $\overline{f}(K_n, S_t, 0) = 2\ceil{n/2}-1$, since each color class must be a matching, and in a $1$-factorization all the color classes attain their maximum size. Similarly, we have $\overline{f}(K_n, K_3, 0) = 2\ceil{n/2}-1$.
\begin{cor}\label{cor}
If $\overline{f}(K_n, H, 0) = O\left(n^{2-\varepsilon}\right)$ for some $\varepsilon>0$, then $H$ is either a matching, a star, or a triangle. In that case, $\overline{f}(K_n, H, 0) = \Theta(n)$.
\end{cor}
\begin{proof}
If $H$ has a matching of size two and a vertex of degree at least two, then $\overline{f}(K_n, H, 0) = \Theta\left(n^2\right)$ by Proposition \ref{quad}. The only graphs without those properties are matchings, stars, and the triangle.
\end{proof}
\section{Paths}
We now turn our attention to the asymptotics of $f(K_n, P_v, q)$, where $P_v$ is a path on $v \geq 4$ vertices. (Recall that the asymptotics of $P_3 = S_2$ were found in Section 2.) We show the range of $q$ for which $f(K_n, P_v, q)$ is linear in $n$ in Proposition \ref{pathlin}, and the range for which it is quadratic in $n$ in Theorem \ref{pathquad}. The only gap in these ranges is bounded in Proposition \ref{pathodd}. See Figure \ref{results} for a summary of these results.
\begin{prop}\label{pathlin}
For every $2 \leq q \leq \floor{v/2}$, we have $f(K_n, P_v, q) = \Theta(n)$.
\end{prop}
\begin{proof}
\[ \Omega(n) = f(K_n, P_v, 2) \leq f(K_n, P_v, q) \leq f(K_n, P_v, \floor{v/2}) = O(n). \]
The lower bound follows by Theorem \ref{lin} and the upper bound by Coloring \ref{order}.
\end{proof}
Now we find the quadratic threshold for paths. We first give a lemma which facilitates the proofs of Theorem \ref{pathquad} and Proposition \ref{pathodd}.
\begin{lemma}\label{pathlemma}
Fix $v \geq 3$. In any coloring of $K_n$, one of three things must occur:
\begin{itemize}
\item there exists a copy of $P_v$ with at least $\floor{(v-1)/2}$ repeats,
\item $\Omega(n^2)$ colors are used, or
\item there exist at least $n^2/16k\log n$ color classes each with a matching of at least $k/16v$ edges, for some $k \geq 16v^2$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $K_n$ be colored with colors from $C$ such that every copy of $P_v$ repeats at most $\floor{(v-1)/2}-1$ colors. First note that every color class has size less than $vn$, or else we would find a monochromatic $P_v$ (see e.g.~\cite{gallai}). For a color $c$, let $E_c$ denote the set of edges of color $c$.
Suppose that for every $k \geq 16v^2$, there are at most $n^2/8k \log n$ color classes each of size at least $k$. Let $C_i = \{c : i \leq |E_c| \}$, and let $C' = \{c : |E_c| \leq 32 v^2\}$. Then
\[ \binom{n}{2} = \sum_{i=0}^{\floor{\log (vn)}} \sum_{c \in C_{2^{i}}\setminus C_{2^{i+1}}} |E_c| \leq |C'| 32v^2 + \sum_{i=\floor{\log (32v^2)}}^{\floor{\log (vn)}} 2^{i+1} \cdot \frac{n^2}{2^i \cdot 8 \log n} \leq |C'| 32v^2 + \frac{n^2}{4} \]
which implies $|C'| = \Omega(n^2)$, so $|C| = \Omega(n^2)$, as desired.
Now suppose that there is some $k \geq 16v^2$ such that $|C_k| \geq n^2/8k\log n$; fix one such $k$. Let $C^*$ be the set of colors $c \in C_k$ such that there is a monochromatic matching in color $c$ with at least $\frac{k}{16v}$ edges. If $|C^*| \geq \frac{1}{2} |C_k| \geq n^2/16k\log n$, then we are done. Otherwise, we have $|C^*| < \frac{1}{2} |C_k|$, and we will find a copy of $P_v$ with at least $\floor{(v-1)/2}$ repeats with the use of an auxiliary bipartite graph.
Let $c \in C_k \setminus C^*$, and let $S$ be a minimum vertex cover for the edges of color $c$. Since the endpoints of the edges of a maximal matching form a vertex cover and $c$ has a maximal matching of size at most $\frac{k}{16v}$, we have $|S| \leq \frac{k}{8v}$. Since the vertices of $S$ of $c$-degree less than $4v$ cover at most $4v |S| \leq \frac{k}{2}$ edges of $E_c$, at least half of $E_c$ is incident to vertices of $c$-degree at least $4v$.
Let $G$ be the bipartite graph with parts $X = V(K_n)$ and $Y = \{(u,c): c \in C_k \setminus C^*, u \in V(K_n), d_c(u) \geq 4v \}$ and an edge between $x \in X$ and $(u,c) \in Y$ if the edge $ux$ is colored with $c$. Note that for a given color $c$, the number of edges incident to vertices of the form $(u,c)$ is at least $\frac{1}{2}|E_c| \geq \frac{k}{2}$. This implies
\[ |E(G)| \geq \frac{k}{2}|C_k\setminus C^*| \geq \frac{k}{4}|C_k| \geq \frac{n^2}{32\log n} .\]
On the other hand, it is clear from the definition of $Y$ that $|E(G)| \geq 4v|Y|$.
We delete vertices of degree at most $v$ repeatedly from $X$ and $Y$ until we cannot delete any more, leaving us with a bipartite graph $G'$ with minimum degree greater than $v$. To show that $G'$ is not empty, suppose we delete $D$ edges from $G$; observe that
\[ |E(G')| = |E(G)| - D \geq \max\left(4v|Y|, \frac{n^2}{32\log n}\right) - v (|Y| + |X|) > 0 ,\]
which can be seen by splitting into the cases $|Y| > |X|$ and $n = |X| \geq |Y|$. Thus $G'$ is not empty, and the minimum degree of $G'$ is at least $v+1$.
We greedily construct a path $x_1 (u_1, c_1) \cdots x_{\floor{v/2}} (u_{\floor{v/2}}, c_{\floor{v/2}}) x_{\floor{v/2}+1}$ in $G'$ as follows: select $x_i$ not equal to any of $x_1,\dots,x_{i-1}$ or $u_1,\dots, u_{i-1}$, and select $u_i$ not equal to any of $x_1,\dots,x_i$ or $u_1,\dots,u_{i-1}$. When selecting $x_i$, only $2i-2 < v+1$ vertices of $X$ have been forbidden, so we may choose greedily. When selecting $u_i$, it is possible that we have forbidden many vertices of $Y$, since many vertices of $Y$ correspond to the same $u_j$. However, the vertex $x_j$ is adjacent to $(u_j, c_j)$ only if the edge $x_j u_j$ has color $c_j$, and so $x_j$ has at most one neighbor in $Y$ for every choice of $u_j$. Thus at most $2i-1 < v+1$ of $x_i$'s neighbors in $Y$ are forbidden, so we may choose $u_i$ greedily.
If $v$ is even, this allows us to construct the path $x_1 u_1 \cdots x_{v/2} u_{v/2}$, which has $v/2 - 1 = \floor{(v-1)/2}$ repeats. If $v$ is odd, we construct the path $x_1 u_1 \cdots x_{(v-1)/2} u_{(v-1)/2} x_{(v+1)/2}$, which has $(v-1)/2$ repeats (see Figure \ref{case1}).
\end{proof}
\begin{figure}
\caption{Path found in Case 1 of the proof of Theorem \ref{pathquad}}
\label{case1}
\end{figure}
We can think of the argument with the auxiliary bipartite graph as a kind of Tur\'an problem: the auxiliary graph $G$ is the incidence graph of a hypergraph on $V(K_n)$ (whose hyperedges are exactly the $c$-neighborhoods of the vertices $u$), and we wish to find a kind of `self-avoiding' Berge-path in this hypergraph given that the number of hyperedges present is large.
With Lemma \ref{pathlemma}, we find the quadratic threshold for paths with an even number of vertices in the following theorem.
\begin{theorem}\label{pathquad}
For every $\ceil{v/2}+1 \leq q \leq v-1$, we have $f(K_n, P_v, q) = \Theta\left(n^2\right)$.
\end{theorem}
\begin{proof}
It suffices to show $\overline{f}(K_n, P_v, \ceil{(v-1)/2}-2) = \Omega\left(n^2\right)$. Let $K_n$ be colored with colors from $C$ such that every copy of $P_v$ repeats at most $\ceil{(v-1)/2}-2$ colors. The first case of Lemma \ref{pathlemma} does not occur, and in the second case we are done. If the third case of Lemma \ref{pathlemma} occurs, then we have a monochromatic matching of size $k/16v \geq \floor{v/2}$, with which we have a copy of $P_v$ with $\ceil{(v-1)/2}-1$ repeats (see Figure \ref{matching}). So this case is impossible as well.
\end{proof}
\begin{figure}
\caption{Edges of a matching strung into a path, as in Case 2 of the proof of Theorem \ref{pathquad}}
\label{matching}
\end{figure}
There is a small gap in what we have shown when $v$ is odd, namely $f(K_n, P_v, (v+1)/2)$ has only been shown to be $\Omega(n)$ (and $O(n^2)$ trivially). Note that $v=3$ is equivalent to the star $S_2$, so by Corollary \ref{cor}, $f(K_n, P_3, 2) = \Theta(n)$. The $v=5$ case was shown to be linear by a construction of Rosta \cite{rosta}: label the vertices of $K_n$ with distinct binary strings of length $\ceil{\log n}$, and color the edge between vertices labeled $x$ and $y$ with color $x \oplus y$ (where $\oplus$ is bitwise exclusive-or). This coloring is proper, since $x \oplus y = x \oplus z$ implies $y=z$. In this coloring, if any $P_5$ with vertices $a,b,c,d,e$ (in that order) contains only two colors, it must be that $a \oplus b = c \oplus d$ and $b \oplus c = d \oplus e$. Upon `adding' these equations, we get $a \oplus c = c \oplus e$, meaning $a = e$, contradicting this being a path. Thus Rosta's construction shows $f(K_n, P_5, 3) \leq 2^{\ceil{\log n}} \leq 2n$.
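Rosta's construction can be verified computationally. The sketch below (our own illustration, using integer labels in place of binary strings) checks exhaustively for a small $n$ that the coloring is proper and that every $P_5$ receives at least three colors:

```python
from itertools import permutations

def color(x, y):
    # Rosta's coloring: the edge {x, y} gets color x XOR y.
    return x ^ y

n = 9  # small case; vertex labels double as binary strings

# Properness: x^y == x^z forces y == z, so edges at a vertex get distinct colors.
for x in range(n):
    assert len({color(x, y) for y in range(n) if y != x}) == n - 1

# Every path a-b-c-d-e uses at least 3 colors: with only two colors the
# (proper) coloring must alternate, forcing a == e as in the text.
for a, b, c, d, e in permutations(range(n), 5):
    assert len({color(a, b), color(b, c), color(c, d), color(d, e)}) >= 3
```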
We only have loose bounds when $v \geq 7$. First we need a combinatorial lemma of Erd\H{o}s \cite{erdoslemma}, which we state here for a particular case.
\begin{lemma}\label{erdos1}
Let $A_1, \dots, A_N$ be $N$ subsets of $A$, where $|A| = n$ and $|A_i| \geq d\sqrt{n}$ for some constant $d$. If $N \geq \frac{8}{d} \sqrt{n}$, then there are some $i \neq j$ such that $|A_i \cap A_j| \geq \frac{d^2}{2}$.
\end{lemma}
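The lemma can be sanity-checked on a concrete random instance (an illustration, not a proof; the parameters below are our own choices satisfying the hypotheses):

```python
import random
from itertools import combinations

random.seed(0)
n, d = 400, 2
size = int(d * n ** 0.5)      # |A_i| = 40 = d * sqrt(n)
N = int(8 / d * n ** 0.5)     # N = 80 = (8/d) * sqrt(n)

# Build N random subsets of an n-element ground set.
A = [set(random.sample(range(n), size)) for _ in range(N)]

# The lemma guarantees some pair intersects in at least d^2/2 = 2 elements.
best = max(len(Ai & Aj) for Ai, Aj in combinations(A, 2))
assert best >= d * d / 2
```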
\begin{prop}\label{pathodd}
For odd $v\geq 7$, we have $f(K_n, P_v, \frac{v+1}{2}) = O\left(n^{2-2/(v-1)}\right)$. For odd $v \geq 9$, we have $f(K_n, P_v, \frac{v+1}{2}) = \Omega\left(n^{3/2}/\log n\right)$, and for $v=7$, we have $f(K_n, P_7, 4) = \Omega\left(n^{4/3}/\log^{2/3} n\right)$.
\end{prop}
\begin{proof}
The upper bound comes from Theorem \ref{LLL}.
Let $K_n$ be colored with colors from $C$ so that every $P_7$ contains at least $4$ colors (and hence at most $2$ repeated colors). By Lemma \ref{pathlemma}, either there is a copy of $P_7$ with at least $3$ repeats (which does not occur), or $\Omega(n^2)$ colors are used (so we are done), or there exists a $k \geq 16v^2$ such that there are at least $n^2/16k\log n$ color classes each with a matching of at least $k/16v$ edges. Let $C^*$ be the set of these colors. If $k \leq n^{2/3}/\log^{1/3} n$, then there are already $\Omega(n^{4/3}/\log^{2/3} n)$ colors. Otherwise, $k \geq n^{2/3}/\log^{1/3} n$. For $c \in C^*$, there is a matching $M$ of size $ k/16v \geq n^{2/3}/(16v\log^{1/3} n)$ in color $c$; let $S$ be the set of matched vertices of $M$. If an edge of color $c$ appears between vertices of $S$ but is not in the matching $M$, then we have a $P_7$ with $3$ repeats, which is not allowed. If there are five edges of some other color $c'$ among the vertices of $S$, these edges must be incident to at least three edges of $M$, and so we may again find a $P_7$ with $3$ repeats (Figure \ref{P7} enumerates these possibilities and shows the desired $P_7$ in each case). Therefore there are $\Omega(n^{4/3}/\log^{2/3} n)$ colors (even just among the vertices of $S$).
\begin{figure}
\caption{Cases with a large matching when $v=7$ in Proposition \ref{pathodd}}
\label{P7}
\end{figure}
To show the lower bound when $v \geq 9$, again apply Lemma \ref{pathlemma}: the first case does not occur, and in the second case we are done, so suppose we are in the third case as before. If $k\leq 32 v^{3/2} n^{1/2}$, we have $\Omega(n^{3/2}/\log n)$ colors in $C^*$ and we are done. So suppose $k \geq 32 v^{3/2} n^{1/2}$. Let $C^* = \{c_1, \dots, c_{N}\}$, let $M_i$ be a maximum matching in color $c_i$, and let $d=4\sqrt{v}$. Note that $|V(M_i)| \geq 2 \frac{k}{16v} \geq d \sqrt{n}$ and $N = |C^*| \geq \frac{n^2}{16k \log n} \geq \frac{8}{d} \sqrt{n}$, the latter of which follows because $k \leq vn$, since no color class has size more than $vn$. By Lemma \ref{erdos1}, there are two indices $i$ and $j$ such that $|V(M_i) \cap V(M_j)| \geq \frac{d^2}{2} = 8v$. The graph whose edges are the edges of $M_i$ and $M_j$ incident to at least one vertex of $V(M_i) \cap V(M_j)$ contains a disjoint union of even cycles and paths of length at least two on at least $8v > \frac{5}{3} v$ vertices. Thus we can find $\ceil{\frac{v}{3}}$ disjoint copies of an edge of color $c_i$ incident with an edge of color $c_j$, as in Figure \ref{cherry}. Stringing these together into a path with $v$ vertices gives at least $\floor{\frac{2}{3}v}-2 \geq \frac{v-1}{2}$ repeated colors (with equality when $v = 9$, $11$, and $13$), which is not allowed.
\begin{figure}
\caption{Multicolored stars that are strung into a path.}
\label{cherry}
\end{figure}
\end{proof}
\end{document}
\begin{document}
\title[H\'enon renormalization]{
Probabilistic Universality in two-dimensional Dynamics\\}
\author{M. Lyubich, M. Martens}
\address {SUNY at Stony Brook}
\date{\today}
\begin{abstract}
In this paper we continue to explore infinitely renormalizable H\'enon maps with small Jacobian.
It was shown in \cite{CLM} that contrary to the one-dimensional intuition,
the Cantor attractor of such a map is non-rigid and the conjugacy
with the one-dimensional Cantor attractor is at most $1/2$-H\"older.
Another formulation of this phenomenon is that the scaling structure of the H\'enon Cantor attractor
differs from its one-dimensional counterpart. However, in this paper we prove
that the weight assigned by the canonical
invariant measure to these ``bad spots'' (the places where the scaling structure deviates from its one-dimensional counterpart) tends to zero on microscopic scales.
This phenomenon is called {\it Probabilistic Universality}.
It implies, in particular, that the Hausdorff dimension of the canonical measure is universal.
In this way, universality and rigidity phenomena of one-dimensional dynamics
assume a probabilistic nature in the two-dimensional world.
\end{abstract}
\maketitle
% Stony Brook IMS Preprint #2011/2, June 2011
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}
Renormalization ideas have played a central role in Dynamics since
the discovery of the Universality and Rigidity phenomena by Feigenbaum \cite{F}, and independently by Coullet and Tresser \cite{CT},
in the mid 1970s. Roughly speaking, it means that different systems in the same ``universality class''
have the same small scale geometry. In the one-dimensional setting this phenomenon has been viewed from many angles
(statistical physics, geometric function theory, Teichm\"uller theory, hyperbolic geometry, infinite-dimensional complex geometry)
and by now has been fully and rigorously justified, see \cite{Ep}, \cite{FMP}, \cite{L}, \cite{Lan}, \cite{Ma2}, \cite{McM}, \cite{S} and references therein.
In \cite{CT} Coullet and Tresser also conjectured that these phenomena remain valid in higher dimensional systems, even in infinite dimensional situations. Indeed, computer and physical experiments that followed suggested that universality and rigidity hold in a much more general context.
The simplest test case for it is the dissipative H\'enon family which can be viewed as a small perturbation of the one-dimensional
quadratic family. However, it was shown in \cite{CLM} that Universality and Rigidity break down already in this case.
This puts in question the relevance of one-dimensional models for higher dimensional problems.
In this paper we provide a resolution of this unsatisfactory situation:
namely, we show that for dissipative H\'enon maps, small scale universality is actually valid in a {\it probabilistic} sense,
almost everywhere with respect to the canonical invariant measure. The {\it probabilistic universality} and {\it probabilistic rigidity} phenomena may be valid for higher dimensional (including infinite dimensional) systems which are contracting in all but one direction.
Let us now formulate our results more precisely.
We consider a class of dissipative H\'enon-like maps on the unit box $B^0=[0,1]\times [0,1]$ of form
\begin{equation}\label{Henon family intro}
F(x,y)= ( f(x)-\varepsilon(x,y),x ),
\end{equation}
where $f(x)$ is a unimodal map with non-degenerate critical point and $\varepsilon$ is small.
It maps $B^0$ onto a slightly thickened parabola $x=f(y)$.
Such a map is called {\it renormalizable} if there exists a smaller box $B^1\subset B^0$
around the tip of the parabola which is mapped into itself under $F^2$.
The {\it renormalization} of $F$ is the map $RF= \Psi^{-1}\circ F^2\circ \Psi$,
where $\Psi: B^0\rightarrow B^1$ is an explicit non-linear change of variable (``rescaling'')
that brings $F^2$ to the normal form of type (\ref{Henon family intro}).
If $RF$ is in turn renormalizable then $F$ is called {\it twice renormalizable}, etc.
In this paper we will be concerned with {\it infinitely renormalizable} H\'enon-like maps.
Such a map admits a nest of $2^n$-periodic boxes $B^0\supset B^1\supset B^2\supset\dots$ shrinking to the {\it tip}
$\tau$ of $F$. The $n^{th}$-renormalization cycle is the orbit ${\mathcal B}^n=\{B^n_i= F^i (B^n),\ i=0,1,\dots, 2^n-1\}$. We obtain a hierarchy of such cycles
shrinking to the {\it Cantor attractor}
$$
{\mathcal O}_F= \bigcap_{n=0}^{\infty} \bigcup_{i=0}^{2^n-1} B^n_i
$$
on which $F$ acts as the dyadic adding machine.
In particular, the dynamics on ${\mathcal O}_F$ is uniquely ergodic, so we obtain a canonical invariant measure $\mu$ supported on ${\mathcal O}_F$.
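Since $F$ acts on ${\mathcal O}_F$ as the dyadic adding machine, unique ergodicity at depth $n$ simply says that an orbit spends an equal fraction of its time in each of the $2^n$ pieces $B^n_i$. A minimal sketch of this bookkeeping (our own illustration, not from the paper):

```python
def odometer(bits):
    # Dyadic adding machine: add 1 with carry in base 2 (least significant bit first).
    bits = list(bits)
    for i in range(len(bits)):
        if bits[i] == 0:
            bits[i] = 1
            return tuple(bits)
        bits[i] = 0
    return tuple(bits)  # all-ones wraps around to all-zeros

n, depth = 3, 10            # track the first `depth` binary digits
x = tuple([0] * depth)
visits = {}
for _ in range(2 ** depth):
    key = x[:n]             # which piece of the n-th cycle the orbit is in
    visits[key] = visits.get(key, 0) + 1
    x = odometer(x)

# Each of the 2^n cylinders is visited with frequency exactly 2^{-n}.
assert len(visits) == 2 ** n
assert all(v == 2 ** (depth - n) for v in visits.values())
```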
We define the {\it average Jacobian} of $F$ as follows:
$$
b_F=\exp \int_{{\mathcal O}_F} \log \operatorname{Jac} F \, d\mu .
$$
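As an illustration (ours, not the paper's): for the H\'enon family $F(x,y)=(f(x)-by,\,x)$ with $\varepsilon(x,y)=by$, the Jacobian determinant is identically $b$, so the Birkhoff average defining $b_F$ returns $b$ exactly; the parameter $c=1.4$ below is an arbitrary illustrative choice of quadratic map:

```python
import math

def F(x, y, b, c=1.4):
    # Henon-like map with eps(x, y) = b*y and f(x) = 1 - c*x^2.
    return 1 - c * x * x - b * y, x

def log_jac(x, y, b):
    # DF = [[f'(x), -b], [1, 0]], so det DF = b identically.
    return math.log(abs(b))

b = 1e-3
x, y = 0.1, 0.1
total, steps = 0.0, 1000
for _ in range(steps):
    total += log_jac(x, y, b)
    x, y = F(x, y, b)
b_F = math.exp(total / steps)   # average Jacobian along the orbit
assert abs(b_F - b) < 1e-9      # equals b, up to floating-point error
```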
Consider a strongly dissipative infinitely renormalizable H\'enon-like map.
The geometry of a piece $B\in {\mathcal B}^n$ can be very different from the geometry of the corresponding piece $I$ of the one-dimensional renormalization fixed point $f_*$. The pieces of the one-dimensional system are small intervals.
Take a piece $B\in {\mathcal B}^n$ and the two pieces $B_1, B_2\in {\mathcal B}^{n+1}$ with $B_1, B_2\subset B$. Let $I, I_1, I_2$ be the corresponding pieces of $f_*$. The piece $B$ of $F$ has {\it $\varepsilon$-precision} if after one
simultaneous rescaling and translation $A:\Bbb{R}^2\to \Bbb{R}^2$ we have
that the (Hausdorff) distance between $I$ and $A(B)$, $I_1$ and $A(B_1)$,
$I_2$ and $A(B_2)$ is at most $\varepsilon\cdot \diam(I)$. The triples $B_1, B_2\subset B$ and $I_1,I_2\subset I$ are then geometrically almost the same.
Collect the pieces of the $n^{th}$-cycle with $\varepsilon$-precision in
$$
{\mathcal S}_n(\varepsilon)=\{B\in {\mathcal B}^n \mid B \text{ has } \varepsilon\text{-precision}\}.
$$
\begin{defn} The geometry of the Cantor attractor ${\mathcal O}_F$ of a dissipative infinitely renormalizable H\'enon-like map is probabilistically universal
if there exists $\theta<1$ such that
$$
\mu({\mathcal S}_n(\theta^n))\ge 1-\theta^n.
$$
\end{defn}
\begin{thm}(Probabilistic universality) The geometry of the Cantor attractor of a strongly dissipative infinitely renormalizable H\'enon-like map is probabilistically universal.
\end{thm}
\begin{defn} The Cantor attractor ${\mathcal O}_F$ of a dissipative infinitely renormalizable H\'enon-like map is probabilistically rigid if the conjugation $h:{\mathcal O}_F\to {\mathcal O}_{f_*}$ to the attractor ${\mathcal O}_{f_*}$ of the one-dimensional renormalization fixed point $f_*$ has the following property.
There exist $\beta>0$, and a sequence $X_1\subset X_2\subset X_3\subset \cdots \subset {\mathcal O}_F$ such that
$
h:X_N\to h(X_N)\subset {\mathcal O}_{f_*}
$
is $(1+\beta)$-differentiable,
and
$
\mu(X_N)\to 1.
$
\end{defn}
\begin{thm}(Probabilistic Rigidity) The Cantor attractor of a dissipative infinitely renormalizable H\'enon-like map is probabilistically rigid.
\end{thm}
The Cantor attractor ${\mathcal O}_F$ is not part of a smooth curve, see \cite{CLM}. However, large parts of it, the sets
$$
X_N=\bigcap_{n\ge N} {\mathcal S}_n(\theta^n)
$$
where $\theta<1$ is close enough to $1$, satisfy the following.
\begin{thm} Each set $X_N\subset {\mathcal O}_F$ is part of a smooth $C^{1+\beta}-$curve.
\end{thm}
Let $\mu_*$ be the invariant measure on ${\mathcal O}_{f_*}$, the attractor of the one-dimensional renormalization fixed point. A consequence of probabilistic rigidity is
\begin{thm} The Hausdorff dimension is universal
$$
HD_{\mu}({\mathcal O}_F)=HD_{\mu_*}({\mathcal O}_{f_*}).
$$
\end{thm}
With these results, the theory of universality and rigidity becomes a probabilistic geometric theory for H\'enon dynamics.
We prove the above results by introducing the so-called {\it pushing-up} machinery. This method locates the pieces in the $n^{th}$-renormalization cycle that have exponential precision. The difficulty is that the orbit between two such good pieces may pass through poor pieces,
so one cannot recover all good pieces by simple iteration of the original map.
Instead, the pushing-up machinery relates pieces in the same renormalization cycle
by means of the diffeomorphic rescalings built into the notion of renormalization. The distortion of these rescalings can be controlled if the two pieces under consideration, viewed from an appropriate scale, do not lie ``too deep'' (in the sense precisely defined below).
This machinery might have applications beyond the present situation.
For the reader's convenience, the pushing-up machinery will be informally outlined in \S \ref{out}.
Also more special notations are collected in the Nomenclature.
For a survey on H\'enon renormalization see \cite{LM2}. For early experiments and results on H\'enon renormalization
see \cite{CEK}, \cite{Cv}, and \cite{GST}.
{\bf Acknowledgment.} We thank all the institutions and foundations that have supported us in the course of this work:
Simons Mathematics and Physics Endowment, Fields Institute, NSF, NSERC, University of Toronto. In fall 2005, when M. Feigenbaum saw the negative results of \cite{CLM}, he made computer experiments that suggested that the universal scaling of the attractor is violated very rarely.
Our paper provides a rigorous justification of Feigenbaum's experiments and conjectures. We also thank C. Tresser for many valuable renormalization discussions, and R. Schul for interesting comments on \cite{J}.
\comm{
Universality and rigidity are central themes in one-dimensional dynamics.
The position of these notions in two-dimensional dynamics will become
equally crucial but their role is more delicate. We will discuss renormalization results for infinitely renormalizable H\'enon maps. Let us first recall the
role of universality and rigidity in one-dimensional dynamics.
A unimodal map is a smooth map of the interval with only one critical point, which is non-degenerate. A smooth unimodal map $f\in {\mathcal U}$ is renormalizable if it contains two intervals which are exchanged by the map. The two intervals form the first renormalization
cycle, ${\mathcal C}_1=\{I^1_0, I^1_1\}$, where $I^1_0$ contains the critical point $c$ of $f$.
Let ${\mathcal U}_0$ be the collection of renormalizable maps. The renormalization of $f\in {\mathcal U}_0$ is an affinely rescaled version of the first return map to $I^1_0$,
$f^2:I^1_0\to I^1_0$. This defines an operator
$$
R_c:{\mathcal U}_0\to{\mathcal U}
$$
Similarly, one can rescale the first return map to $I^1_1$, the interval which contains the critical value $v$ of $f$. This defines the second renormalization operator
$$
R_v:{\mathcal U}_0\to {\mathcal U}.
$$
These renormalization operators are microscopes used to study the small scale geometry of the dynamics. In particular, $R_cf$ is a unimodal map which describes the dynamics one scale lower in $I^1_0$. Similarly, $R_vf$ describes the geometry one scale smaller in $I^1_1$.
A map is infinitely renormalizable if it can be renormalized infinitely many times; that is, $R^nf\in {\mathcal U}_0$ for each $n\ge 1$. An infinitely renormalizable map has cycles, pairwise disjoint intervals, ${\mathcal C}_n=\{I^n_i \mid i=0,1,2,\dots, 2^n-1\}$, with
$f(I^n_i)=I^n_{i+1}$ and
$$
\bigcup{\mathcal C}_{n+1}\subset \bigcup{\mathcal C}_n.
$$
This nested sequence of dynamical cycles accumulates at a Cantor set.
$$
{\mathcal C}=\bigcap\bigcup{\mathcal C}_n.
$$
This Cantor set attracts almost every orbit. It is called the Cantor attractor of the map. The only points whose orbits are not attracted to this Cantor set are the periodic points. The periodic points have periods of the form $2^n$. The cycle ${\mathcal C}_n$ is centered around a periodic orbit of
length $2^{n+1}$. It contains all the periodic orbits of period $2^s$ with $s\ge n+1$.
Every small part of the Cantor attractor ${\mathcal C}$ of some infinitely renormalizable map, say within an interval $I^n_i$ of the $n^{th}-$cycle, can be studied by repeatedly applying one of the renormalization operators $R_c$ or $R_v$. For each interval in the cycle there is a uniquely defined sequence of length $n$ of
choices $w=(c,c,v,c,\dots,v)$ such that the
$$
R_wf=R_c\circ R_c\circ R_v\circ R_c\circ \dots \circ R_vf
$$
describes the dynamics within the given $I^n_i$. Denote the length of a word $w$ by $|w|$.
\begin{thm}(Universality) There exists $\rho<1$ such that for any two infinitely renormalizable maps $f,g\in {\mathcal U}_0$ and any finite word $w$
$$
\text{dist}(R_wf, R_wg)=O(\text{dist}(f,g)\rho^{|w|}).
$$
\end{thm}
The universality theorem means that the Cantor attractors are all the same on
asymptotically small scales. However, the actual geometry one observes depends on the place where one zooms in. This universal geometric structure of the
attractor is far from the well-known middle-third Cantor set, where in every place one recovers the same geometry. In the Cantor attractor there are essentially no two places with the same asymptotic geometry (only points which lie on the same orbit can have the same asymptotic geometry).
Given two infinitely renormalizable maps $f, g\in {\mathcal U}$, there exists a homeomorphism $h$ between the domains of the two maps which maps orbits to orbits,
$$
h\circ f=g\circ h.
$$
The maps are then called conjugate, and the homeomorphism is called a conjugation. The dynamics of two conjugate maps are the same from a topological point of view.
\begin{thm}(Rigidity) The conjugation between two infinitely renormalizable maps is differentiable on the attractor.
\end{thm}
If a conjugation is differentiable, then on small scales it is essentially affine. This means that the microscopic geometrical properties of corresponding parts of the attractor are the same. One can deform an infinitely renormalizable map into another infinitely renormalizable map; however, the microscopic structure of the Cantor attractor does not change: this is rigidity.
The {\it topology} of the system determines the {\it geometry} of the system.
This central idea has been proved in one-dimensional dynamics. It also holds
when the systems are not of the period doubling type described above but have
topological characteristics which are tame. We will not discuss the most general statement and omit the precise definition of tame.
Renormalization has been the central tool in the development of one-dimensional
dynamics. The next step is to develop, if possible, a renormalization theory for higher dimensional systems. The long term goal is to use renormalization to describe the topology of higher dimensional systems, use renormalization to study the geometry of the attractors and finally, use the geometrical theory to explain the probabilistic behavior of higher dimensional systems. This is a very long term project. We will discuss renormalization of period doubling type for H\'enon maps.
A H\'enon map is a map of the plane of the form
$$
F(x,y)=(f(x)-\varepsilon(x,y),x)
$$
where $f\in {\mathcal U}$ and $\varepsilon$ is small. These maps are used to describe
the creation of chaos starting at a homoclinic bifurcation of a dissipative
map. In such situations we may assume that $\varepsilon$ is indeed very small.
A H\'enon map is renormalizable if there are two disjoint domains $B^1_v$ and $B^1_c$ which are exchanged by the map. This is the two-dimensional version of one-dimensional renormalizable maps. Let ${\mathcal B}_1=\{B^1_v, B^1_c\}$ be the first cycle. As before we can consider infinitely renormalizable maps. They have a nested sequence of cycles, as in the one-dimensional case,
$$
{\mathcal B}_n=\{B^n_j \mid j=0,1,2,\dots, 2^n-1\}
$$
The attractor of an infinitely renormalizable H\'enon map is
$$
{\mathcal O}_F=\bigcap \bigcup {\mathcal B}_n.
$$
Indeed, almost every orbit is attracted to this Cantor set. Similarly to the one-dimensional situation, the points whose orbits do not accumulate at the Cantor set converge to periodic orbits of period $2^n$, for some $n\ge 0$.
The Cantor attractor ${\mathcal O}_F$ is conjugate to the one-dimensional Cantor attractor of infinitely renormalizable maps. In particular, it carries a unique invariant measure $\mu$. This measure assigns to each piece $B\in {\mathcal B}_n$ a mass
$\mu(B)=\frac{1}{2^n}$. This measure describes the statistical distribution of orbits in this Cantor set. Namely, the average time spent in a piece $B\in {\mathcal B}_n$ equals $\mu(B)$,
$$
\lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} 1_B(F^i(x))=\mu(B),
$$
where $1_B(x)=1$ when $x\in B$ and $0$ otherwise. Historically, this measure has been used to describe the statistical behavior of the system. In two-dimensional H\'enon dynamics it will also be used to describe the small scale geometrical properties of the attractor. In particular, the {\it average Jacobian} plays a role:
$$
b_F=e^{\int \ln \det DF \, d\mu}.
$$
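As a quick numerical illustration (ours, not part of the text): for the H\'enon-like form $F(x,y)=(f(x)-\varepsilon(x,y),x)$ one has $|\det DF|=|\partial_y\varepsilon|$, so for the classical choice $f(x)=1-1.4x^2$, $\varepsilon(x,y)=-0.3\,y$ the Jacobian is constant and the Birkhoff average must return exactly $0.3$. A minimal sketch, with function names of our own choosing:

```python
import math

def step(x, y, a=1.4, b=0.3):
    # F(x, y) = (f(x) - eps(x, y), x), with f(x) = 1 - a x^2 and eps(x, y) = -b y,
    # so |det DF| = |d eps / dy| = b at every point.
    return 1.0 - a * x * x + b * y, x

def average_jacobian(n_transient=1000, n_iter=10000, a=1.4, b=0.3):
    """Estimate b_F = exp( mean of log|det DF| along an orbit )."""
    x, y = 0.0, 0.0
    for _ in range(n_transient):        # settle onto the attractor first
        x, y = step(x, y, a, b)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(b)              # |det DF| is identically b for this eps
        x, y = step(x, y, a, b)
    return math.exp(acc / n_iter)

print(average_jacobian())               # -> 0.3 (up to rounding)
```

For a genuinely position-dependent $\varepsilon$ one would accumulate $\log|\det DF(x,y)|$ evaluated along the orbit; the constant case merely checks the bookkeeping.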
A very small average Jacobian means that the system is very dissipative. The degenerate H\'enon map
$$
F_0(x,y)=(f(x),x)
$$
has the same dynamics as the corresponding unimodal map. In this case
$b_{F_0}=0$. Strongly dissipative systems can be thought of as small perturbations of one-dimensional systems. The attractor of the degenerate map $F_0$ lies on a smooth curve, the graph of $f$. The dynamics of the degenerate map can be understood as one-dimensional dynamics. Although the H\'enon maps with $b_F>0$ are small perturbations, the geometry of their attractors is surprisingly different from that of their one-dimensional counterparts. The geometry of two-dimensional
strongly dissipative infinitely renormalizable systems cannot be understood
directly from the one-dimensional theory, although these systems are small perturbations.
\begin{thm} The attractor ${\mathcal O}_F$ of a strongly dissipative infinitely renormalizable H\'enon map with $b_F>0$ does not lie on a smooth curve.
\end{thm}
There exists a renormalization operator $R$ which can be used to study the attractors of dissipative infinitely renormalizable H\'enon maps. Unfortunately, only the counterpart of the unimodal renormalization operator $R_v$ can be studied
successfully in the H\'enon case. Each infinitely renormalizable H\'enon map has a {\it tip}, the counterpart of the critical value $v$ of a unimodal map.
The microscopic geometry of the attractor around this tip can be studied by repeatedly applying this H\'enon renormalization operator. Around the tip there is a universal microscopic geometry.
\begin{thm}(Universality) If $F$ is a strongly dissipative infinitely renormalizable H\'enon map then
$$
R^nF\to F_*,
$$
where $F_*(x,y)=(f_*(x),x)$ is the degenerate H\'enon map corresponding to the unimodal renormalization fixed point $f_*$ with $R_cF_*=f_*$.
\end{thm}
Although there is universal geometry around the tip, there are other points in the Cantor attractor ${\mathcal O}_F$ which have a very different microscopic geometry compared to the expected geometry of their one-dimensional counterparts.
\begin{thm}(Non-Rigidity) If $F_1$ and $F_2$ are strongly dissipative infinitely renormalizable H\'enon maps, with $b_{F_1}>b_{F_2}$, then the attractors ${\mathcal O}_{F_1}$ and ${\mathcal O}_{F_2}$ are not smoothly conjugate. In particular, ${\mathcal O}_{F_1}$ is not smoothly conjugate to the attractor ${\mathcal O}_{F_*}$ of the degenerate H\'enon map.
\end{thm}
An attractor has bounded geometry if the diameters of the pieces $B_1, B_2\in {\mathcal B}_{n+1}$ with $B_1\cup B_2\subset B\in {\mathcal B}_n$ are uniformly comparable to the diameter of $B$. The microscopic geometry of maps with $b_F>0$ might be very different from the one-dimensional situation.
\begin{thm}(Unbounded Geometry) There is a small $b_0>0$ and a set $X\subset [0,b_0]$ of full Lebesgue measure such that the attractor of any $F$ with $b_F\in X\subset (0,b_0)$ has unbounded geometry.
\end{thm}
These theorems indicate that the discussion of universality and rigidity for H\'enon maps is not a straightforward generalization of the one-dimensional theory. Apparently, the topological property of being infinitely renormalizable is not enough to determine the geometry of the attractor. The asymptotic geometry can be changed dramatically by changing the average Jacobian. Fortunately, the global topological structure of the maps is not completely determined by the property of being infinitely renormalizable either.
The {\it global attractor} of an infinitely renormalizable H\'enon map $F$ is
$$
\mathcal{A}_F=\bigcap_{n\ge 0} F^n([0,1]^2).
$$
The global attractor consists of the Cantor attractor and the unstable manifolds of the periodic points.
\begin{thm} For every infinitely renormalizable H\'enon map $F$,
$$
\mathcal{A}_F={\mathcal O}_F\cup \bigcup_{n\ge 0} W^u(\text{Orb}(p_n)),
$$
where $p_n$ is a periodic point of period $2^n$.
\end{thm}
\begin{thm} The average Jacobian $b_F$ is a topological invariant of the global attractor $\mathcal{A}_F$.
\end{thm}
Maps with different average Jacobians have geometrically different attractors. Moreover, the topology of their global attractors is also different. It still might be true that the topology of the global attractor determines the geometry of the Cantor attractor.
Infinitely renormalizable H\'enon maps do play a role in models of the real
world. There have been no observations of the non-rigidity or of any
consequence of the bad geometrical parts of the Cantor attractor. The quantitative aspects of
one-dimensional universality correspond well with real world experiments.
These observations, which confirm the presence of one-dimensional geometry, are explained by two phenomena: {\it probabilistic universality} and
{\it probabilistic rigidity}.
Consider a strongly dissipative infinitely renormalizable H\'enon map.
The geometry of a piece $B\in {\mathcal B}_n$ can be very different from the geometry of the corresponding piece $I$ of the one-dimensional system $F_*$. The pieces of the one-dimensional system are small arcs, almost line segments.
Take a piece $B\in {\mathcal B}_n$ and the two pieces $B_1, B_2\in {\mathcal B}_{n+1}$ with $B_1, B_2\subset B$. Let $I, I_1, I_2$ be the corresponding pieces of the degenerate
map $F_*$. The piece $B$ of $F$ has {\it $\varepsilon$-precision} if after one
simultaneous rescaling and translation $A:\Bbb{R}^2\to \Bbb{R}^2$ we have
that the (Hausdorff) distance between $I$ and $A(B)$, between $I_1$ and $A(B_1)$,
and between $I_2$ and $A(B_2)$ is at most $\varepsilon$. The triples $B_1, B_2\subset B$ and $I_1,I_2\subset I$ are then geometrically almost the same.
Collect the pieces of the $n^{th}$ cycle with $\varepsilon$-precision in
$$
{\mathcal S}_n(\varepsilon)=\{B\in {\mathcal B}_n\mid B \text{ has } \varepsilon\text{-precision}\}.
$$
\begin{defn} The geometry of the Cantor attractor ${\mathcal O}_F$ of a dissipative infinitely renormalizable H\'enon map is universal in the probabilistic sense if there exists $\theta<1$ such that
$$
\mu({\mathcal S}_n(\theta^n))\ge 1-\theta^n.
$$
\end{defn}
\begin{thm}(Probabilistic universality) The geometry of the Cantor attractor of a strongly dissipative infinitely renormalizable H\'enon map is universal in the probabilistic sense.
\end{thm}
\begin{defn} The Cantor attractor ${\mathcal O}_F$ of a dissipative infinitely renormalizable H\'enon map is rigid in the probabilistic sense if the conjugation $h:{\mathcal O}_F\to {\mathcal O}_{F_*}$ with the attractor ${\mathcal O}_{F_*}$ of the degenerate map $F_*$ has the following property. There exist $\beta>0$, $\theta<1$, and a sequence
$$
X_N=\bigcap_{k\ge N} {\mathcal S}_k(\theta^k)\subset {\mathcal O}_F,
$$
with $N\ge 1$ and
$$
\mu(X_N)\ge 1-O(\theta^N)
$$
such that
$$
h:X_N\to h(X_N)\subset {\mathcal O}_{F_*}
$$
is differentiable with H\"older continuous derivative: $h$ is $C^{1+\beta}$.
\end{defn}
\begin{thm}(Probabilistic Rigidity) The Cantor attractor of a strongly dissipative infinitely renormalizable H\'enon map is rigid in the probabilistic sense.
\end{thm}
The Cantor attractor ${\mathcal O}_F$ is not part of a smooth curve. However, large parts of it, the sets $X_N$, are part of smooth curves. In a probabilistic sense, the parts which destroy the truly one-dimensional structure have smaller and smaller measure as we go to smaller and smaller scales.
\begin{thm} The sets $X_N\subset {\mathcal O}_F$ are part of a smooth $C^{1+\beta}$-curve.
\end{thm}
Let $\mu_*$ be the invariant measure on ${\mathcal O}_{F_*}$, the attractor of the degenerate H\'enon map. A consequence of probabilistic rigidity is
\begin{thm} The Hausdorff dimension is universal:
$$
HD_{\mu}({\mathcal O}_F)=HD_{\mu_*}({\mathcal O}_{F_*}).
$$
\end{thm}
The theory of universality and rigidity has thus become a probabilistic geometrical theory for H\'enon dynamics.
For the reader's convenience, more special notations are collected in the Nomenclature.
\centerline{\bf Acknowledgements}
We thank all the institutions and foundations that have supported us in the course of this work:
Simons Mathematics and Physics Endowment, Fields Institute, NSF, NSERC, University of Toronto.
}
\section{Outline}\label{out}
\subsection{Infinitely renormalizable H\'enon-like maps}
We will start with outlining the set-up developed in \cite{CLM,LM1} --
see \S \ref{prelim} for details.
We consider a class ${\mathcal H}={\mathcal H}(\bar{\varepsilon})$ of H\'enon-like maps of the form
$$
F\colon (x, y) \mapsto (f(x)-{\varepsilon}(x,y), x),
$$
acting on the unit box $B^0 =[0,1]\times [0,1]$,
where $f(x)$ is a unimodal map subject to certain regularity assumptions,
and $\|{\varepsilon}\|< \bar{\varepsilon}$ is small (for an appropriate norm).
If the unimodal map $f$ is renormalizable
then the renormalization $F_1=RF\in {\mathcal H}$ is defined as $(\Psi^1_0)^{-1} \circ (F^2|_{B^1}) \circ \Psi^1_0$,
where $B^1$ is a certain box around the {\it tip}, a point which plays the role of the ``critical value'',
and $\Psi^1_0 : \operatorname{Dom}(F_1)\rightarrow B^1$ is an explicit {\it non-linear} change of variables.
Inductively, we can define $n$ times renormalizable maps for any $n\in {\mathbb N}$,
and consequently, {\it infinitely renormalizable} H\'enon-like maps.
For such a map the $n$-fold renormalization $F_n= R^n F\in {\mathcal H}$
is obtained as $ (\Psi^n_0)^{-1} \circ (F^{2^n}|_{B^n}) \circ \Psi^n_0$,
where $B^n$ is an appropriate {\it renormalization box},
and $\Psi^n_0: \operatorname{Dom}(F_n)\rightarrow B^n$ is a non-linear change of variables.
These boxes $B^n$ form a nest around the {\it tip} of $F$:
$$
B^0\supset B^1\supset\dots\supset B^n\supset\dots \ni \tau
$$
Taking the iterates $F^k B^n$, $k=0,1,\dots, 2^n-1$,
we obtain a family ${\mathcal B}^n$ of $2^n$ pieces $\{B^n_{\omega}\}$, called the {\it $n^{th}$ renormalization level}, that can be naturally
labelled by strings ${\omega}\in \{c,v\}^n$ in two symbols, $c$ and $v$,
with $B^n_{v^n}\equiv B^n$. See \S \ref{prelim} for details. Then
$$
{\mathcal O}_F = \bigcap_n \bigcup_{\omega} B^n_{\omega}
$$
is an attracting Cantor set on which $F$ acts as the adding
machine. This Cantor set carries a unique invariant measure $\mu$.
This allows us to introduce
the most important geometric parameter attached to $F$, its {\it average Jacobian}
$$
b_F= \exp \int_{{\mathcal O}_F} \log \operatorname{Jac} F \, d\mu.
$$
Usually, we will denote the average Jacobian by $b$.
The size of the boxes decays exponentially:
\begin{equation}\label{sigma}
\operatorname{diam} B^n_{\omega} \leq C {\sigma}^n,
\end{equation}
where ${\sigma}\in (0,1)$ is the universal scaling factor
(coming from one-dimensional dynamics) while $C=C(\bar {\varepsilon})$ depends only on the geometry of $F$.
A surprising phenomenon discovered in \cite{CLM} is that unlike its one-dimensional counterpart,
the Cantor set ${\mathcal O}_F$ {\it does not have universal geometry}: it essentially depends on the average Jacobian $b$.
However, the difference appears only at scales of order $b$: if all the pieces $ B^n_{\omega}$ of level $n$ are much bigger than $ b$
then the geometry of the pieces $B^n_{\omega}$ is controlled by one-dimensional dynamics:
the pieces are aligned along the parabola $x=f(y)$ with thickness of order $b$.
According to (\ref{sigma}), this happens whenever
\begin{equation}\label{safe scales for F}
\alpha {\sigma}^n \geq b
\end{equation}
with sufficiently small (absolute) $\alpha>0$, i.e., when
\begin{equation}\label{safe scales}
n \leq c |\log b| - A, \quad {\mathrm {where}}\ c = \frac 1{|\log{\sigma}|}, \ A= \frac {\log \alpha} {\log {\sigma}}.
\end{equation}
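For the reader's convenience, the passage from (\ref{safe scales for F}) to (\ref{safe scales}) is elementary algebra: taking logarithms, and recalling that $\log {\sigma}<0$ and $\log b<0$ (so dividing by $\log{\sigma}$ reverses the inequality),
$$
\alpha{\sigma}^n\geq b
\;\Longleftrightarrow\;
\log\alpha+n\log{\sigma}\geq \log b
\;\Longleftrightarrow\;
n\leq \frac{\log b-\log\alpha}{\log{\sigma}}
=\frac{|\log b|}{|\log{\sigma}|}-\frac{\log\alpha}{\log{\sigma}}
= c\,|\log b| - A.
$$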
We will call these levels {\it safe}.
\subsection{Random walk model}\label{random walk}
\newcommand{\depth}{\operatorname{depth}}
To any point $x\in {\mathcal O}\equiv {\mathcal O}_F$ we can assign its {\it depth}
$$
\operatorname{depth} (x)\equiv k(x)= \sup \{ k: \ x\in B^k\}\in {\mathbb N}\cup \{\infty\}.
$$
Here the tip is the only point of infinite depth. If $\operatorname{depth}(x)=k$ then $x\in E^k\equiv B^{k}\setminus B^{k+1}$ (see Figures \ref{figE} and \ref{figEsch}).
We say that a point $x\in {\mathcal O}$ is {\it combinatorially closer to $\tau$ than $y\in {\mathcal O}$}
if $k(x)>k(y)$.
We will now encode any point $x\in {\mathcal O}$ by its {\it closest approaches} to $\tau$
in {\it backward} time. Namely, let us consider the backward orbit $\{F^{-t} x\}_{t=0}^\infty$,
and mark the moments $t_m$ ($m=0,1,\dots$) of closest approaches, i.e., at the moment $t_m$ the point $x_m:= F^{-t_m} x$ is combinatorially
closer to $\tau$ than all previous points $F^{-t} x$, $t=0,1,\dots, t_m-1$. Since the dynamics of $F$ on ${\mathcal O}$ is the adding machine,
this is an infinite sequence of moments for any $x\not\in \operatorname{orb} (\tau)$.
If $x=F^t(\tau)$, we terminate the code at the moment $t$.
Let
$$
k_m(x) = k(x_m),\ m=0,1,\dots,
$$
be the sequence of the corresponding depths.
Obviously, both sequences, $\bar t= \{t_m\}$ and $\bar k= \{k_m\}$, are {\it strictly increasing}.
For any depth $k$, let us consider the {\it first return map} (see Figures \ref{figE} and \ref{figEsch})
$$
G_k: B^{k+1} \rightarrow B^k, \quad G_k= F^{2^k},
$$
and the {\it first landing map in backward time}
$$
L_k: \bigcup_{m=0}^{2^k-1} F^m (B^k) \rightarrow B^k,\quad L_k(x) = F^{-m} x,\ \mbox{for} \ x\in F^m (B^k).
$$
Then we have by definition:
$$
x_m = G_{k_m(x)} (x_{m+1}),\quad x_{m} = L_{k_m(x)} (x)
$$
Let $\Sigma$ stand for the space of strictly increasing sequences $\bar k= \{k_m\}$ of symbols $k_m\in {\mathbb N}\cup \{\infty\}$
that terminate at moment $m$ if and only if $k_m=\infty$. Endow $\Sigma$ with the weak topology and the measure $\nu$
corresponding to the following {\it random walk} on ${\mathbb N}$: the probability of jumping from $k\in {\mathbb N}$ to $l\in {\mathbb N}$
is equal to $1/2^{l-k}$ if $l>k$, and it vanishes otherwise. The initial distribution on ${\mathbb N}$ is given by
$\nu\{k\}= 1/2^{k+1}$. We let $j_m := k_{m+1}-k_m$ be the {\it jumps} in our random walk.
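As a sanity check on these formulas (our own sketch, with hypothetical helper names): from any $k$ the jump probabilities form a geometric series summing to $1$, and the walk is easy to sample.

```python
import random

def jump_prob(k, l):
    # probability of jumping from depth k to depth l: 1/2^(l-k) for l > k, else 0
    return 2.0 ** -(l - k) if l > k else 0.0

# From any k the jump probabilities sum to 1 (geometric series):
total = sum(jump_prob(0, l) for l in range(1, 60))
print(total)  # -> 1.0 up to the truncation error 2^-59

def sample_path(n_stop, rng):
    """Sample the walk with initial law nu{k} = 1/2^(k+1), stopped at depth n_stop."""
    k = 0
    while rng.random() < 0.5:       # P(k) = 1/2^(k+1)
        k += 1
    path = [min(k, n_stop)]
    while path[-1] < n_stop:
        j = 1
        while rng.random() < 0.5:   # jump size j with P(j) = 1/2^j
            j += 1
        path.append(min(path[-1] + j, n_stop))
    return path

print(sample_path(10, random.Random(1)))  # a strictly increasing path ending at 10
```

The stopped version of the walk, as described below, corresponds to truncating the path at `n_stop`.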
\begin{lem}
The coding $x\mapsto \bar k(x)$ establishes a homeomorphism between ${\mathcal O}$ and $\Sigma$ and a measure-theoretic
isomorphism between $({\mathcal O},\mu)$ and $(\Sigma,\nu)$.
\end{lem}
We can also consider the random walk that {\it stops at depth $n$}.
This means that we consider the orbit $F^{-t} x$ only until the moment it lands in $B^n$.
The corresponding (finite) coding sequence $\{\tilde k_m \}_{m=0}^T$ is defined as follows:
$\tilde k_m= k_m$ whenever $k_m < n$ ($m=0,1,\dots, T-1$), while $\tilde k_T =n$.
(In what follows we will skip the ``tilde'' in the notation as long as it does not lead to confusion.)
\comm{****
Fix a ``control sequence'' $j_n{\bar i}n {\mathbb N}$, $n=0,1,{\diamond}ots$.
We say that a point $x{\bar i}n {\mathcal O}$ is {{\bar i}t ${\mbox{\boldmath$\alpha$} }r j$-controlled after a moment $N$}
if $j_n(x) \leq j_n$ for all $n{\bf g}eq N$.
(We will say just ``controlled'' if it is clear which sequence ${\mbox{\boldmath$\alpha$} }r j$ is meant,
or ``eventually controlled'' if we do not care of the precise moment $N$.)
\begin{cor}
Let $j_n$ be a control sequence satisfying the following summability condition:
$$
\sum_{n=0}^{\bar i}nfty {\bf f}rac 1{2^{j_n}} < {\bar i}nfty.
$$
Then a.e. $x{\bar i}n {\mathcal O}$ is eventually ${\mbox{\boldmath$\alpha$} }r j$-controlled.
More precisely, the probability that a point $x{\bar i}n {\mathcal O}$ is
not controlled after a moment $N$ is at most
$$
{\varepsilon}_n:= \sum_{n=N}^{\bar i}nfty {\bf f}rac 1{2^{j_n}}\to 0.
$$
\end{cor}
In particular, this Corollary is applicable to {{\bar i}t linearly growing} jumps $j_n{\bf g}eq a n$.
In this case, the probability that $x$ is not controlled after time $N$ is exponentially small,
namely, it is $O(1/2^{an})$.
***}
Fix an increasing {\it control function} $s: {\mathbb N}\rightarrow {\mathbb Z}_+$.
We say that a sequence $\bar k=\{k_m\}_{m=0}^\infty$ is {\it $s$-controlled after a moment $N$}
if $j_m \leq s(k_m)$ for all $k_m\geq N$.
We say that a point $x\in {\mathcal O}$ is {\it $s$-controlled after moment $N$} if its code
$\bar k(x)$ is such. The set of these points is denoted by $K_N$.
\begin{lem}\label{control criterion}
Under the summability assumption
$$
\sum_{k=0}^\infty \frac 1{2^{s(k)}} < \infty
$$
we have
$$
\nu(K_N)\ge 1-O\Big(\sum_{k=N}^\infty \frac 1{2^{s(k)}}\Big).
$$
\end{lem}
\comm{
\begin{lem}{\lambda}bel{control criterion}
Almost all points $x{\bar i}n {\mathcal O} $ are eventually $s$-controlled
under the following summability assumption:
$$
\sum_{k=0}^{\bar i}nfty {\bf f}rac 1{2^{s(k)}} < {\bar i}nfty.
$$
\end{lem}
}
\begin{proof}
It follows immediately from the definition of the random walk, using the monotonicity of the control function, that
$$
\nu(K_N)\ge \prod_{k=N}^\infty \Big(1-\frac 1{2^{s(k)}}\Big),
$$
which implies the Lemma.
\end{proof}
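The final implication rests on the elementary bound $\prod_k(1-a_k)\ge 1-\sum_k a_k$ for $a_k\in[0,1]$. A quick numerical check (ours, not part of the argument), with the illustrative summable control function $s(k)=k$:

```python
def tail_product(s, N, M=200):
    # product over k = N..M-1 of (1 - 2^-s(k)), the lower bound for nu(K_N)
    p = 1.0
    for k in range(N, M):
        p *= 1.0 - 2.0 ** -s(k)
    return p

def tail_sum(s, N, M=200):
    # the tail sum appearing in the O(...) term of the Lemma
    return sum(2.0 ** -s(k) for k in range(N, M))

s = lambda k: k   # our choice: sum_k 2^-s(k) = sum_k 2^-k < infinity
for N in range(1, 12):
    assert tail_product(s, N) >= 1.0 - tail_sum(s, N)
```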
\subsection{Geometric estimates}\label{Geom estimates outline}
Our analysis depends essentially on the geometric control of the renormalizations and changes of variables
established in \cite{CLM}.
The renormalizations have the following nearly {\it universal shape}:
\begin{equation}\label{universality formula}
R^n F = (f_n(x) -\, b^{2^n}\, a(x)\, y\, (1+ O(\rho^n)), \ x\, ),
\end{equation}
where the $f_n$ converge exponentially fast to the universal unimodal map $f_*$,
$a(x)$ is a universal function, and $\rho\in (0,1)$.
The changes of variables $\Psi_k^l: \operatorname{Dom}(F_l)\rightarrow \operatorname{Dom}(F_k)$ have the following form:
\begin{equation}\label{factoring-outline}
\Psi_k^l = D_k^l \circ (\operatorname{id} + {\bf S}_k^l),
\end{equation}
where
\begin{equation}{\lambda}bel{reshufflingout}
D_k^l=
\left(
\begin{array}{cc}
1 & t_k\\
0 & 1
\end{array}\right)
\left(
\begin{array}{cc}
({\sigma}^2)^{l-k} & 0\\
0 & (-{\sigma})^{l-k}
\end{array}\right) (1+O(\rho^k)).
\end{equation}
is a linear map with $t_k\asymp b_F^{2^k}$,
while $\operatorname{id} + {\bf S}_k^l: (x,y)\mapsto (x+S_k^l(x,y), y)$ is a horizontal non-linear map with
$$
| \partial_x S^l_k | = O(1),\quad |\partial_y S^l_k | = O(\bar{{\varepsilon}}^{2^k}).
$$
\subsection{Regular boxes}
In this section we outline the results of \S \ref{pieces}.
For any $x\in {\mathcal O}$, we let $B_n(x)$ be the box $B^n_{\omega}\in {\mathcal B}^n$
containing $x$ (in particular, $B_n(\tau) = B^n$).
Let ${\mathcal B}^n_*= {\mathcal B}^n\smallsetminus \{B^n\}$
stand for the family of boxes $B^n_{\omega}$ that do not contain the tip.
Notice that the depth of all points $x$ in any box $B\in {\mathcal B}^n_*$
is the same, so it can be assigned to the box itself.
In other words,
$$
\operatorname{depth} (B) = \sup \{ k: \ B\subset B^k\}\in \{0,1,\dots, n-1\}.
$$
Let ${\mathcal B}^n[l]$, $l<n$, be the family of all boxes of level $n$ whose depth is $l$.
Note that ${\mathcal B}^n[l]$ contains $2^{n-l-1}$ boxes.
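As a consistency check (ours): summing these counts over all depths, and adding the box $B^n$ itself (the unique box of the $n^{th}$ level containing the tip), recovers the full level:
$$
\sum_{l=0}^{n-1} 2^{n-l-1} + 1 = (2^{n}-1)+1 = 2^{n} = \#{\mathcal B}^n.
$$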
We can view the box $B$ in the renormalization coordinates on various scales.
Namely, to view $B$ from scale $k\leq n$ means that we consider its preimage ${\mathbf{B}}$
under the (nonlinear) rescaling $\Psi_0^k : \operatorname{Dom}(F_k)\rightarrow B^k$.
The main scale from which $B$ will be viewed is its depth $k$,
so from now on ${\mathbf{B}} := (\Psi_0^k)^{-1}(B)$ will stand for the corresponding box
(see Figure \ref{figregpiece}). This seemingly minor ingredient plays a crucial role in the estimates.
A box $B$ as above is called {\it regular} if the horizontal and vertical projections of ${\mathbf{B}}$
are $K$-comparable, where $K>0$ is a universal constant, to be specified in the main body of the paper.
In other words, $\operatorname{mod} {\mathbf{B}}$ (the ratio of the vertical and horizontal sizes of ${\mathbf{B}}$) is of order 1.
We will control depth by the control function
\begin{equation}\label{exp control f-n}
s(k)= a 2^k -A\quad \mathrm{where}\ a= \frac{\log b}{\log {\sigma}},\ A=\frac{\log \alpha}{\log {\sigma}},
\end{equation}
with a sufficiently small universal $\alpha>0$ to be specified in the main body of the paper.
With this choice, we have:
\begin{equation}\label{safe scales for F_k}
\alpha{\sigma}^{l-k} \geq b^{2^k}.
\end{equation}
Comparing it to (\ref{safe scales for F}) and (\ref{safe scales}),
we see that the level $l-k$ controlled in this way is safe for the renormalization $F_k$.
We say that the box $B\in {\mathcal B}^n[l]$ is {\it not too deep} in scale $B^k$ if
$$
l-k \leq s(k).
$$
There are a number of constants which have to be chosen appropriately, for example $\alpha$ and $K$.
In the main body of this paper it will be shown how to choose these constants carefully so that all Lemmas
and Propositions hold. From now on we will assume in this outline that the constants are chosen appropriately and
will not mention this matter any more.
We will number the Lemmas and Propositions in this outline as the corresponding statements in the main body. However,
the version in the outline should be viewed as an informal version of the actual statements.
\noindent
{\bf Proposition 4.1.}
{\it For all sufficiently big levels $k$, the following is true.
If a regular box $B\in {\mathcal B}^n_*$, $n>k$, is not too deep in scale $B^k$ then $G_k(B)$ is regular. }
{\it Outline of the proof}.
Let $B\in {\mathcal B}^n[l]$, $n>l>k$. We should view $B$ from scale $l$,
i.e., consider the piece ${\mathbf{B}}$ of level $n-l$ for the renormalization $F_l$,
see Figure \ref{mappiece}. As the piece $\tilde{B}=G_k(B)$ has depth $k$,
it should be viewed from this depth. So, we consider the corresponding piece
$\tilde{{\mathbf{B}}}$ of level $n-k$ for the renormalization $F_k$.
Then $\tilde{{\mathbf{B}}}=F_k\circ \Psi_k^l({\mathbf{B}})$.
Using geometric estimates for factorization (\ref{factoring-outline}) we show that
$$
\operatorname{mod} \Psi_k^l ({\mathbf{B}}) \asymp {\sigma}^{l-k} \operatorname{mod} {\mathbf{B}},
$$
provided ${\mathbf{B}}$ is regular. So $\Psi_k^l({\mathbf{B}})$ is highly stretched in the vertical direction.
The nearly universal map $F_k$, see (\ref{universality formula}), will contract the vertical
size by a factor of order $b^{2^k}\ll{\sigma}^{l-k}$ since the piece is not too deep.
This implies that the image under $F_k$ is essentially the
image of the horizontal side. We obtain a piece $\tilde{\mathbf{B}}$, which is essentially a curve, that gets roughly aligned with the parabola,
which makes its modulus of order 1.
\rlap{$\sqcup$}$\sqcap$\smallskip
\subsection{Universal sticks}
Given a box $B\in {\mathcal B}^n[l]$ of a map $F$,
let ${\mathcal O}(B): = {\mathcal O}_F \cap B$ be the part of the postcritical set ${\mathcal O}_F$ contained in $B$.
Respectively, ${\mathcal O}({\mathbf{B}})= {\mathcal O}_{F_l}\cap {\mathbf{B}}$, where ${\mathbf{B}}$ is the rescaled box corresponding to $B$.
We say that a box $B\in {\mathcal B}^n[l]$ is a $\delta$-stick if the postcritical set
${\mathcal O}({\mathbf{B}})$ is contained in a diagonal strip $\Pi$ of
thickness $\delta$, relative to the horizontal size of ${\mathbf{B}}$.
The minimal thickness is denoted by $\delta_{{\mathbf{B}}}$. See Figure \ref{wid}.
Let us consider the pieces $B_1$ and $B_2$ of level $n+1$ contained in $B$. The corresponding pieces ${\mathbf{B}}_1$ and ${\mathbf{B}}_2$ occupy fractions
$\sigma_{{\mathbf{B}}_1}$ and $\sigma_{{\mathbf{B}}_2}$ of ${\mathbf{B}}$, called {\it scaling ratios}, see Figure \ref{sca}. Let $\sigma^*_{{\mathbf{B}}_1}$ and $\sigma^*_{{\mathbf{B}}_2}$ be the scaling ratios of the corresponding
pieces for the degenerate renormalization fixed point $F_*$. Let $\Delta\sigma_{{\mathbf{B}}}$ be the maximal difference between the corresponding scaling ratios.
A piece $B\in {\mathcal B}^n$ is called ${\varepsilon}$-{\it universal} if $\delta_{{\mathbf{B}}}\le {\varepsilon}$ and $\Delta\sigma_{{\mathbf{B}}}\le {\varepsilon}$.
Consider very deep pieces $B\in {\mathcal B}^n[k]$, with $(1-q_0)\cdot n\le k\le n$, at scale $n-k$. Then we are watching pieces of
${\mathcal B}^{n-k}(F_{k})$ which can be obtained by following the orbit of $B^{n-k}_v(F_{k})$ for $2^{n-k}$ steps. $F_{k}$ is at a distance
$O(\rho^{k})$ from the degenerate renormalization fixed point $F_*$. When $q_0>0$ is small, these few iterates, $2^{n-k}\leq 2^{q_0\cdot n}$, with a map $O(\rho^{(1-q_0)\cdot n})$ close to the renormalization fixed point, can be well approximated by iterates of the renormalization fixed point. At this scale, one-dimensional dynamics is a good geometrical model.
We call this the {\it one-dimensional regime}.
\noindent
{\bf Proposition 7.2.} {\it There exist $\theta<1$ and $0<q_0<q_1$ such that every piece in ${\mathcal B}^n[k]$, with
$(1-q_1)\cdot n\le k\le (1-q_0)\cdot n$, is $O(\rho^{n})$-universal.
}
We are going to refine Proposition 4.1, in the sense that we estimate how ${\varepsilon}$-universality is distorted (or even improved!)
when we apply the maps $G_k$ to regular pieces which are not too deep in scale $B^k$.
\noindent
{\bf Propositions 5.1 and 6.1.}
{\it If $B\in{\mathcal B}^n[l]$ is regular and not too deep in $B^k$
then
$$
\delta_{\tilde{{\mathbf{B}}}}\le \frac12 \cdot \delta_{\mathbf{B}}+O({\sigma}^{n-l}),
$$
and
$$
\Delta\sigma_{\tilde{{\mathbf{B}}}}= \Delta\sigma_{{\mathbf{B}}}+O(\delta_{{\mathbf{B}}}+{\sigma}^{n-l}),
$$
where $\tilde{B}=G_k(B)\in {\mathcal B}^n[k]$ and $\tilde{{\mathbf{B}}}=F_k(\Psi^l_k({\mathbf{B}}))$.}
{\it Outline of the proof.}
We consecutively estimate, using the geometric estimates of \S \ref{Geom estimates outline},
the relative thickness of the pieces $B_{\text{diff}}=(\operatorname{id}+ {\bf S}^l_k)({\mathbf{B}})$,
$B_{\text{aff}}=D^l_k(B_{\text{diff}})$ and $\tilde{\mathbf{B}}= F_k(B_{\text{aff}})$, see Figure \ref{mapfac}.
The first one is comparable with the thickness of ${\mathbf{B}}$, up to an error of order ${\sigma}^{n-l}$,
since the horizontal map $\operatorname{id} + {\bf S}_k^l$ has bounded geometry
(where the error ${\sigma}^{n-l}\geq \operatorname{diam} {\mathbf{B}}$ comes from the second order correction).
Let us now represent the affine map $D^l_k$ as a composition of the diagonal part $\Lambda$ and
the shear part $T$, see (\ref{reshufflingout}).
The diagonal map $\Lambda$ preserves the horizontal thickness,
so the thickness is only affected by the shear part $T$, which has order $t_k\asymp b^{2^k}$.
Using this estimate and the fact that $B$ is not too deep in $B^k$, we show that $\delta(B_{\text{aff}}) =O (\delta_{B_{\text{diff}}})$.
Finally, we show that the map $F_k$, being strongly vertically contracting, improves the thickness,
again using that $B$ is not too deep in $B^k$.
The maps $\Psi^l_k$ do not distort the scaling ratios at all, as a consequence of the precise definition of scaling ratios. The piece $\tilde{\mathbf{B}}$ is the image under $F_k$ of $B_{\text{aff}}=\Psi^l_k({\mathbf{B}})$. This map is exponentially close to the degenerate renormalization fixed point. It will not distort the scaling ratios too much.
\qed
Starting with pieces obtained during the one-dimensional regime, we repeatedly apply the maps $G_k$ as long as the new pieces are not too deep.
This process is called the {{\bar i}t pushing-up regime}.
The pieces created by the combined one-dimensional and pushing-up regimes are $O(\rho^n)$-universal.
This can be seen as follows.
Proposition 7.2 states that the pieces from the one-dimensional regime are exponentially universal.
These pieces are the starting pieces of the pushing-up regime. Propositions 5.1 and 6.1 state that the
error in scaling ratios caused by pushing-up is of the order of the sum of the thicknesses observed during the
pushing-up process. Moreover, the thicknesses are essentially contracted at each pushing-up step.
Unfortunately, the pieces generated by the combination of the one-dimensional and pushing-up regimes do not
have a total measure which tends to $1$. In particular, Proposition 8.2 states that, asymptotically, these pieces will be missing
a fraction of the order $O(2^k(b^{\gamma})^{2^k})$ of $B^k$, where $\gamma>0$. This is an immediate consequence
of the fact that during the pushing-up regime we only pushed up pieces which are not too deep.
The solution to this problem is to stop the pushing-up regime at the level $\kappa(n)\asymp \ln n$.
Then $B^{\kappa(n)}$ will be filled, except for an exponentially small fraction, with $O(\rho^n)$-universal pieces. After level
$\kappa(n)$ we start the {\it brute-force} regime: push up all pieces without considering whether they are too deep or not.
In other words, just apply the original map $F$ for $2^{\kappa(n)}$ steps. But under these iterates the $O(\rho^n)$-universal
sticks get spoiled at most by a factor $O(C^{\kappa(n)})= O(n^c)$ with some $c>0$. Hence,
they are $O(n^c \rho^n)$-universal sticks, and we still see $O(\theta^n)$-universality, for some $\theta<1$.
Denote by ${\mathcal P}_n$ the pieces in ${\mathcal B}^n$ generated by combining these three regimes. These pieces are $\theta^n$-universal.
\comm{
Finally, the following lemma gives control of the push-forwards of universal sticks:
\begin{prop}{\lambda}bel{push-forward of universal sticks}
For all sufficiently big levels $k$, the following is true.
If a ${{\diamond}elta}$-universal stick $B{\bar i}n {\mathcal B}_n[l]$, $n>l{\bf g}eq k$, is not too deep in scale $B^k$
then $G_k(B)$ is an ${\varepsilon}$-universal stick with
$$
{\varepsilon}= {{\diamond}elta} + O(|\hat B| + O({\sigma}^{n-l})),
$$
where $\hat B$ is the box of level $n-1$ containing $B$.
\end{prop}
}
\subsection{Probabilistic universality}
We say that the geometry of $O$ is {\it probabilistically universal}
if there exists a $\theta\in (0,1)$ such that
the total measure of boxes $B\in {\mathcal B}^n$ which are $\theta^n$-universal sticks
is at least $1-O(\theta^n)$.
\begin{thm}
The geometry of $O$ is probabilistically universal.
\end{thm}
\begin{proof}
Let $n\ge 1$. The pieces in ${\mathcal P}_n$ are $\theta^n$-universal, so it remains to estimate $\mu({\mathcal P}_n)$.
The one-dimensional regime deals with the pieces of ${\mathcal B}^n$ in $B^k$ with $(1-q_1)\cdot n \leq k \leq (1-q_0)\cdot n$. They occupy a fraction
$1-O(\frac1{2^{(q_1-q_0)\cdot n}})$ of the measure of $B^{(1-q_1)\cdot n}$.
Push them up to $B^0$, regardless of whether they are too deep or not. They will occupy ${\mathcal B}^n$ up to
an exponentially small fraction. Let $R_n$ be the corresponding set of paths of the random walk. These are the paths which hit the interval
$[(1-q_1)\cdot n, (1-q_0)\cdot n]$ at least once but are not necessarily $s$-controlled. So
$$
\nu(R_n)=1-O\left(\frac1{2^{(q_1-q_0)\cdot n}}\right).
$$
Recall that the set $K_{\kappa(n)}$ consists of the paths which are $s$-controlled after depth $\kappa(n)\asymp \ln n$. Lemma \ref{control criterion} gives
$$
\nu(K_{\kappa(n)})= 1- O\left(\sum_{k=\kappa(n)}^{\infty} \frac 1{2^{s(k)}}\right) = 1- O\left(\frac {1} {{2}^{a 2^{\kappa(n)}}}\right) =
1-O(\rho^n )
$$
for some $\rho\in (0,1)$.
Observe that the set of paths corresponding to ${\mathcal P}_n$
is $R_n\cap K_{\kappa(n)}$.
Hence,
$$
\mu({\mathcal P}_n)=\nu(R_n\cap K_{\kappa(n)})=1-O(\theta^n),
$$
for some $\theta\in (0,1)$.
\end{proof}
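The middle step of the proof rests on the tail of the series $\sum_k 2^{-s(k)}$ being comparable to its first term for a control function of the assumed exponential form $s(k)=a\,2^k$ (the specific constants below are illustrative); a quick numerical sanity check:

```python
# Tail of sum_{k >= kappa} 2^{-s(k)} with s(k) = a*2^k: the terms decay
# superexponentially, so the tail is dominated by its first term.
a, kappa = 1.0, 3
first = 2.0 ** (-a * 2 ** kappa)
tail = sum(2.0 ** (-a * 2 ** k) for k in range(kappa, kappa + 20))
assert first <= tail < 2.0 * first   # tail comparable to the first term
```

This is exactly the shape of the estimate $\sum_{k\ge\kappa(n)} 2^{-s(k)} = O(2^{-a2^{\kappa(n)}})$ used above.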
\section{Preliminaries}\label{prelim}
A complete discussion of the following definitions and statements can be found in parts I and II of this series on renormalization of H\'enon-like maps, see \cite{CLM} and \cite{LM1}.
Let $\Omega^h, \Omega^v\subset \mathbb{C}$ be neighborhoods of $[-1,1]\subset \mathbb{R}$ and $\Omega=\Omega^h\times \Omega^v$. The set
${\mathcal H}_\Omega(\overline{\epsilon})$ consists of maps $F:[-1,1]^2\to [-1,1]^2$ of the following form:
$$
F(x,y)=(f(x)-\epsilon(x,y), x),
$$
where $f:[-1,1]\to [-1,1]$ is a unimodal map which admits a holomorphic extension to $\Omega^h$, and $\epsilon:[-1,1]^2\to \mathbb{R}$ admits a holomorphic extension to $\Omega$ with $|\epsilon|\le \overline{\epsilon}$. The critical point $c$ of $f$ is nondegenerate: $D^2f(c)<0$. A map in ${\mathcal H}_\Omega(\overline{\epsilon})$ is called a {\it H\'enon-like} map. Observe that H\'enon-like maps map vertical lines to horizontal lines.
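The vertical-to-horizontal behavior is immediate from the formula for $F$; a minimal numerical sketch, with an assumed quadratic $f$ and the toy perturbation $\epsilon(x,y)=by$ (illustrative choices, not data from the text):

```python
# Toy Henon-like map F(x,y) = (f(x) - eps(x,y), x), with the assumed data
# f(x) = 1 - mu*x^2 and eps(x,y) = b*y, so that |eps| <= b.
MU, B = 1.4, 0.1

def F(x, y):
    return (1.0 - MU * x * x - B * y, x)

# The vertical line {x = 0.3} maps into the horizontal line {y = 0.3}:
images = [F(0.3, y) for y in (-1.0, -0.5, 0.0, 0.5, 1.0)]
assert all(second == 0.3 for (_, second) in images)
```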
A unimodal map $f:[-1,1]\to [-1,1]$ with critical point $c\in [-1,1]$ is {\it renormalizable} if $f^2:[f^2(c),f^4(c)]\to [f^2(c),f^4(c)]$ is unimodal and
$[f^2(c),f^4(c)]\cap f([f^2(c),f^4(c)])=\emptyset$. The renormalization of $f$
is the affine rescaling of $f^2|[f^2(c),f^4(c)]$, denoted by $Rf$. The domain of $Rf$ is again $[-1,1]$. The renormalization operator $R$ has a unique fixed
point $f_*:[-1,1]\to [-1,1]$. The introduction of \cite{FMP} presents the history of renormalization of unimodal maps and describes the main results.
The {\it scaling factor} of this fixed point $f_*$ is
$$
\sigma=\frac{|[f_*^2(c),f_*^4(c)]|}{|[-1,1]|}.
$$
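The definition of $Rf$ can be made concrete numerically. The following sketch uses the quadratic family at an assumed Feigenbaum-like parameter value (our illustration, not data from the text) and checks the defining properties of renormalizability before rescaling $f^2$; for simplicity the rescaling below is taken orientation-preserving:

```python
# Period-doubling renormalization of f(x) = 1 - mu*x^2 at an assumed
# approximation of the Feigenbaum parameter (illustrative constants).
MU = 1.4011551890

def f(x):
    return 1.0 - MU * x * x

def iterate(x, n):
    for _ in range(n):
        x = f(x)
    return x

# Renormalization interval J = [f^2(c), f^4(c)] around the critical point c = 0
a, b = iterate(0.0, 2), iterate(0.0, 4)
assert a < 0.0 < b                        # J contains the critical point

samples = [a + i * (b - a) / 200.0 for i in range(201)]
assert min(f(t) for t in samples) > b     # f(J) is disjoint from J

def Rf(x):
    """Affine rescaling of f^2|J back to [-1,1]."""
    t = a + (x + 1.0) * (b - a) / 2.0     # [-1,1] -> J
    return 2.0 * (iterate(t, 2) - a) / (b - a) - 1.0

# f^2 maps J into itself, so Rf maps [-1,1] into itself:
samples_unit = [-1.0 + i / 100.0 for i in range(201)]
assert all(-1.0 - 1e-9 <= Rf(x) <= 1.0 + 1e-9 for x in samples_unit)
```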
A H\'enon-like map is renormalizable if there exists a domain $D\subset [-1,1]^2$ such that
$F^2:D\to D$. The construction of the domain $D$ is inspired by renormalization of unimodal maps; in particular, one would like it to be a topological construction. However, for small $\overline{\epsilon}>0$ the actual domain $A\subset [-1,1]^2$ used to renormalize in \cite{CLM} has an analytical definition. The precise definition can be found in \S 3.5 of part I. If the renormalizable H\'enon-like map is given by $F(x,y)=(f(x)-\epsilon(x,y),x)$ then the
domain $A$, an essentially vertical strip, is bounded by two curves of the form
$$
f(x)-\epsilon(x,y)=\text{Const}.
$$
These curves are graphs over the $y$-axis with a slope of the order
$\overline{\epsilon}>0$.
The domain $A$ satisfies similar combinatorial properties as the domain of renormalization of a unimodal map:
$$
F(A)\cap A=\emptyset,
$$
and
$$
F^2(A)\subset A.
$$
Unfortunately, the restriction $F^2|A$ is not a H\'enon-like map,
as it does not map vertical lines into horizontal lines.
This is the reason why the coordinate change needed to define the renormalization of $F$ is not an affine map,
but rather has the following form. Let
$$
H(x,y)=(f(x)-\epsilon(x,y),y)
$$
and
$$
G=H\circ F^2\circ H^{-1}.
$$
The map $H$ preserves horizontal lines and it is designed in such a way
that the map $G$ maps vertical lines into horizontal lines.
Moreover, $G$ is well defined on a rectangle $U\times [-1, 1]$ of
full height. Here $U\subset [-1,1]$ is an interval of length $2/|s|$
with $s<-1$.
Let us rescale the domain of $G$ by the $s$-dilation $\Lambda$,
such that the rescaled domain is of the form $[-1,1]\times V$,
where $V\subset \mathbb{R}$ is an interval of length $2/|s|$. Define the renormalization of $F$ by
$$
RF= \Lambda\circ G\circ \Lambda^{-1}.
$$
Notice that $RF$ is well defined on the rectangle $[-1,1]\times V$.
The coordinate change $\psi= H^{-1}\circ \Lambda^{-1}$ maps this rectangle
onto the topological rectangle $A$ of full height.
The set of $n$-times renormalizable maps is denoted by ${\mathcal H}^n_\Omega(\overline{\epsilon})\subset {\mathcal H}_\Omega(\overline{\epsilon})$. If
$F\in {\mathcal H}^n_\Omega(\overline{\epsilon})$ we
use the notation
$$
F_n=R^nF.
$$
The set of infinitely renormalizable maps is denoted by
$$
{\mathcal I}_\Omega(\overline{\epsilon})=\bigcap_{n\ge 1}
{\mathcal H}^n_\Omega(\overline{\epsilon}).
$$
The renormalization operator acting on ${\mathcal H}^1_\Omega(\overline{\epsilon})$,
$\overline{\epsilon}>0$ small enough, has a unique fixed point
$F_*\in {\mathcal I}_\Omega(\overline{\epsilon})$. It is the degenerate map
$$
F_*(x,y)=(f_*(x), x).
$$
This renormalization fixed point is hyperbolic and its stable manifold has codimension one. Moreover,
$$
W^s(F_*)={\mathcal I}_\Omega(\overline{\epsilon}).
$$
If we want to emphasize that some set, say $A$,
is associated with a certain map $F$ we
use notation like $A(F)$.
The coordinate change which conjugates $F_k^2|A(F_k)$ to $F_{k+1}$ is
denoted by
\begin{equation}\label{phi}
\psi^{k+1}_v=(\Lambda_k \circ H_k)^{-1}: \operatorname{Dom}(F_{k+1})\to A(F_k).
\end{equation}
Here $H_k$ is the non-affine part of the coordinate change used to define
$R^{k+1}F$ and $\Lambda_k$ is the dilation by $s_k<-1$.
Now, for $k<n$, let
\begin{equation}\label{Phi}
\Psi^n_k=\psi^{k+1}_v\circ \psi^{k+2}_v\circ \cdots \circ \psi^{n}_v:
\operatorname{Dom}(F_{n})\to A_{n-k}(F_k),
\end{equation}
where
$$
A_k(F)=\Psi^k_0(\operatorname{Dom}(F_k))\cap B.
$$
Notice that each $A_k\subset B$ is of full height and
$\Psi^k_0$ conjugates $R^kF$ to $F^{2^k}|A_k$.
Furthermore,
$
A_{k+1}\subset A_k
$.
The change of
coordinates conjugating the renormalization $RF$ to $F^2$ is denoted by
$\psi_v^1 := H^{-1}\circ \Lambda^{-1}$. To describe the attractor of an infinitely renormalizable H\'enon-like map we also need the map $\psi_c^1= F\circ
\psi^1_v$. The subscripts $v$ and $c$ indicate that these maps are
associated to the critical {\it value} and the {\it critical} point,
respectively.
Similarly, let $\psi^2_v$ and $\psi^2_c$ be the corresponding
changes of variable for $RF$, and let
$$
\psi^2_{vv}= \psi^1_v\circ \psi^2_v, \quad \psi^2_{cv}= \psi^1_c\circ
\psi^2_v, \quad \psi^2_{vc}=\psi^1_v\circ\psi^2_c,\quad \psi^2_{cc}=\psi^1_c\circ \psi^2_c.
$$
Proceeding this way, for any $n\ge 0$, we construct $2^{n}$ maps
$$ \psi^n_w = \psi^1_{w_1}\circ\dots\circ \psi^n_{w_{n}}, \quad
w=(w_1, \dots, w_{n})\in\{v,c\}^{n}. $$
The notation $\psi^n_w(F)$ will also be used to emphasize the map
under consideration, and we will let $W=\{v,c\}$ and
$W^n=\{v,c\}^n$ be the $n$-fold Cartesian product. The following Lemma and its proof can be found in \cite[Lemma 5.1]{CLM}.
\begin{lem}\label{contracting}
Let $F\in {\mathcal I}^c_\Omega(\overline{\epsilon})$. There exists $C>0$
such that for $w\in W^n$,
$
\| D\psi^n_w\|\leq C \sigma^n,
$
$n\ge 1$.
\end{lem}
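A toy illustration of this bound (an assumption standing in for the actual changes of variable: both branches are taken to be affine contractions with derivative exactly $\sigma$), composing along all $2^n$ words $w\in W^n$:

```python
from itertools import product

sigma = 0.4
# Assumed affine model branches with |D psi| = sigma.
psi = {'v': lambda x: sigma * x,
       'c': lambda x: sigma * x + (1.0 - sigma)}

def compose_word(w):
    """psi^n_w = psi_{w_1} o ... o psi_{w_n}: the last letter acts first."""
    def g(x):
        for letter in reversed(w):
            x = psi[letter](x)
        return x
    return g

n = 5
for w in product('vc', repeat=n):
    g = compose_word(w)
    deriv = g(1.0) - g(0.0)          # exact derivative of an affine map on [0,1]
    assert abs(deriv - sigma ** n) < 1e-12
```

In this affine model the estimate holds with $C=1$ and equality; for the genuine nonlinear changes of variable only the upper bound $C\sigma^n$ survives.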
Let $F\in {\mathcal I}_\Omega(\overline{\epsilon})$ and consider the domains
$$
B^n_\omega=\operatorname{Im} \psi^n_\omega.
$$
The first return maps to the domains
$$
B^n_{v^n}=\operatorname{Im} \Psi^n_0=\operatorname{Im} \psi^n_{v^n}
$$
correspond to the different renormalizations.
Notice that
$$
B^{n+1}_{v^{n+1}}\subset B^n_{v^n}.
$$
An infinitely renormalizable H\'enon-like map has an invariant Cantor set:
$$
{\mathcal O}_F=\bigcap_{n\ge 1} \bigcup_{i=0}^{2^n-1} F^i(B^n_{v^n})=
\bigcap_{n\ge 1} \bigcup_{\omega\in W^n} B^n_\omega.
$$
The dynamics on this Cantor set
is conjugate to an adding machine. Its unique invariant measure is denoted
by $\mu$.
The {\it average Jacobian}
$$
b_F=\exp\int \log \operatorname{Jac} F \, d\mu
$$
with respect to $\mu$ is an important parameter that
influences the geometry of ${\mathcal O}_F$, see \cite[Theorem 10.1]{CLM}.
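For the special H\'enon-like family with $\epsilon(x,y)=by$ (an assumed model case, not the general situation), $\operatorname{Jac} F\equiv b$, so $b_F=b$ for any invariant probability measure; a quick Birkhoff-average sanity check:

```python
import math

MU, B = 1.0, 0.05      # assumed toy parameters keeping the orbit bounded

def F(x, y):
    return (1.0 - MU * x * x - B * y, x)

def jac_det(x, y):
    # DF = [[-2*MU*x, -B], [1, 0]]  =>  det DF = B identically
    return (-2.0 * MU * x) * 0.0 - (-B) * 1.0

# Birkhoff average of log|Jac F| along an orbit, exponentiated:
x, y, acc, n = 0.1, 0.1, 0.0, 1000
for _ in range(n):
    acc += math.log(abs(jac_det(x, y)))
    x, y = F(x, y)
b_F = math.exp(acc / n)
assert abs(b_F - B) < 1e-12
```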
The critical point (and critical value) of a unimodal map plays a crucial role
in its dynamics. The counterpart of the critical value for infinitely renormalizable H\'enon-like maps is the
{\it tip}
$$
\{\tau_F\}=\bigcap_{n\ge 1} B^n_{v^n}.
$$
\subsection{One-dimensional maps}\label{1D universal f-s}
Recall that $f_*\colon [-1,1] \to [-1,1]$ stands for the one-dimensional
renormalization fixed point,
normalized so that $f_*(c_*)=1$ and $f_*^2(c_*)=-1$, where
$c_*\in [-1,1]$ is the critical point of $f_*$.
The renormalization fixed point $f_*$ has a nested sequence of renormalization cycles ${\mathcal C}_n$, $n\ge 1$. Let $c_*$ be the critical point of $f_*$ and let $v_*=f_*(c_*)$ be the critical value. A cycle consists of the intervals
$$
I^*_j(n)=[f^{j}_*(v_*), f^{j+2^n}_*(v_*)]\in {\mathcal C}_n,
$$
with $j=0,1,2, \cdots, 2^n-1$. The collection ${\mathcal C}_n$ consists of pairwise disjoint intervals. Notice that for $j=0,1,2,\dots, 2^n-2$
$$
f_*(I^*_j(n))=I^*_{j+1}(n),
$$
and
$$
f_*(I^*_{2^n-1}(n))=I^*_0(n).
$$
The interval in ${\mathcal C}_n$ which contains the critical point is denoted by
$$
U_n=I^*_{2^n-1}(n)\ni c_*.
$$
The {\it nonlinearity} of a $C^2$-diffeomorphism $\phi:I\to \phi(I)\subset \Bbb{R}$, $I\subset \Bbb{R}$, is
\begin{equation}\label{nonlin}
\eta_{\phi}=D\ln D\phi.
\end{equation}
The {\it distortion} of a diffeomorphism $\phi:I\to J$ between intervals $I,J\subset \Bbb{R}$ is defined as
$$
\text{Dist}(\phi)=\max_{x,y}\log \frac{D\phi(y)}{D\phi(x)}.
$$
If $\eta$ is the nonlinearity of $\phi$ then
\begin{equation}\label{distnonl}
\text{Dist}(\phi)\le \|\eta\|_{C^0}\cdot |I|.
\end{equation}
The distortion of a map does not change if we rescale domain and range.
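A concrete check of the bound (\ref{distnonl}) on an example of our own choosing, $\phi(x)=x^2$ on $I=[1,2]$, where $\eta_\phi(x)=1/x$:

```python
import math

# phi(x) = x^2 on I = [1,2]: D(phi) = 2x, eta = D ln D(phi) = 1/x.
a, b = 1.0, 2.0
Dphi = lambda x: 2.0 * x
eta_sup = 1.0                        # sup of |1/x| on [1,2], attained at x = 1
dist = math.log(Dphi(b) / Dphi(a))   # Dist(phi), attained at the endpoints
assert abs(dist - math.log(2.0)) < 1e-12
assert dist <= eta_sup * (b - a)     # Dist(phi) <= ||eta||_{C^0} * |I|
```

Here $\text{Dist}(\phi)=\log 2\approx 0.693$, safely below the bound $\|\eta\|_{C^0}\cdot|I|=1$.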
Given $r>0$, the $r$-neighborhood $T\supset I$ of an interval $I\subset \Bbb{R}$ is the interval such that both components of $T\setminus I$ have length $r|I|$.
\begin{lem}\label{distortion} There exist $r>0$ and $D>1$ such that the $r$-neighborhoods $T_j(n)\supset I^*_j(n)$ have the following property. For all $n\ge 1$ the following holds: if $\zeta_j\in T_j(n)$ for each $j$, then
$$
\prod_{j=k_1}^{k_2-1}\frac{|Df_*(\zeta_j)|}{|I^*_{j+1}(n)|/|I^*_j(n)|}\le D
$$
for all $0\le k_1<k_2<2^n$.
\end{lem}
\begin{proof} The a priori bounds on the cycles ${\mathcal C}_n$ are described in \cite{MS}, see also
\cite{CMMT}. The a priori bounds state that for some $r>0$ the gap between $I_j(n+1)$ and $I_{j+2^{n+1}}(n+1)$ satisfies
$$
|I_j(n)\setminus (I_j(n+1)\cup I_{j+2^{n+1}}(n+1))|\ge 5r\cdot |I_j(n)|.
$$
Hence, we have
$T_j(n+1)\cap T_{j+2^{n+1}}(n+1)=\emptyset$ and
$$
|T_j(n+1)|+|T_{j+2^{n+1}}(n+1)|\le (1-r)\cdot |T_j(n)|.
$$
Let $\eta_j(n)$ be the nonlinearity, see (\ref{nonlin}), of the rescaling of $f_*:I^*_j(n)\to I^*_{j+1}(n)$. The rescaling turns domain and range into $[-1,1]$.
Lemma 3.1 in \cite{Ma2} says that
$$
\begin{aligned}
\|\eta_j(n+1)\|_{C^0}&\le \frac{|T_j(n+1)|}{ |T_j(n)|}\cdot \|\eta_j(n)\|_{C^0}, \\
\|\eta_{j+2^{n+1}}(n+1)\|_{C^0}&\le \frac{|T_{j+2^{n+1}}(n+1)|}{ |T_j(n)|}\cdot \|\eta_j(n)\|_{C^0}.
\end{aligned}
$$
Hence,
$$
\|\eta_j(n+1)\|_{C^0}+\|\eta_{j+2^{n+1}}(n+1)\|_{C^0}\le (1-r)\cdot \|\eta_j(n)\|_{C^0},
$$
for $j=0,1,2,\cdots, 2^{n}-2$.
The a priori bounds also imply a universal bound
$$
\|\eta_{2^n-1}(n+1)\|_{C^0}\le K.
$$
Inductively, this gives a universal bound
$$
\sum_{j=0}^{2^n-2} \|\eta_{j}(n)\|_{C^0}\le K_0.
$$
Use (\ref{distnonl}) and observe that
$$
\log \frac{|Df_*(\zeta_j)|}{|I^*_{j+1}(n)|/|I^*_j(n)|}=O(\|\eta_j(n)\|_{C^0}).
$$
The Lemma follows.
\end{proof}
\begin{prop}\label{disto} There exists $\rho<1$ such that the following holds. Let $q_0>0$, and let
$I\in {\mathcal C}_n$ with $I\subset U_{k}\setminus U_{(1-q_0)\cdot n}$, where $k<(1-q_0)\cdot n$. Let $t_I=2^{k}$ be the first return time to
$U_{k}$. Then for every $j\le t_I$
$$
\text{Dist}(f_*^j|I)=O(\rho^{q_0\cdot n}).
$$
\end{prop}
\begin{proof} Let $s_I\ge t_I$ be the first return time of $I$ to $U_{(1-q_0)\cdot n}$. There exists $J_0\subset U_{k}$ with $I\subset J_0$ such that
$f_*^{s_I}$ maps $J_0$ diffeomorphically onto $U_{(1-q_0)\cdot n}$, see \cite{Ma1}. Let $J_k=f_*^k(J_0)$ and $I_k=f_*^k(I)$. The {\it a priori} bounds on the geometry of the cycles ${\mathcal C}_n$ imply
$$
\frac{|I_k|}{|J_k|}=O(\rho^{q_0\cdot n}).
$$
This estimate holds because the intervals $J_k$ are in ${\mathcal C}_{(1-q_0)\cdot n}$ and the intervals $I_k$ are in ${\mathcal C}_n$.
The nonlinearity of the rescaled map $f_*:J_k\to J_{k+1}$, which has the unit interval as its domain and range, is denoted by $\eta_k$. As in the proof of Lemma \ref{distortion} we obtain
$$
\sum_{k=0}^{s_I-1} \|\eta_k\|_{C^0}\le K_0.
$$
The nonlinearity of the rescaled version of the map $f_*:I_k\to I_{k+1}$, which has the unit interval as its domain and range, is denoted by $\eta^I_k$. Lemma 3.1 in \cite{Ma2} says that the nonlinearity of the restriction $f_*:I_k\to I_{k+1}$ of $f_*:J_k\to J_{k+1}$ satisfies
$$
\|\eta^I_k\|_{C^0}\le \frac{|I_k|}{|J_k|}\cdot \|\eta_k\|_{C^0}.
$$
Hence,
$$
\sum_{k=0}^{s_I-1} \|\eta^I_k\|_{C^0}=O(\rho^{q_0\cdot n}).
$$
The distortion of a map $f_*^t:I_k\to I_{k+t}$ is bounded as follows:
$$
\begin{aligned}
\text{Dist}(f^t_*|I_k)&\le \sum_{j=k}^{k+t-1} \text{Dist}(f_*|I_{j})\\
&\le \sum_{j=0}^{s_I-1} \|\eta^I_j\|_{C^0}=O(\rho^{q_0\cdot n}).
\end{aligned}
$$
This finishes the proof of the Proposition.
\end{proof}
\subsection{Geometrical properties of the Cantor attractor}
\begin{thm}[Universality]\label{universality}
For any $F\in {\mathcal I}_\Omega(\overline{\epsilon})$ with sufficiently small $\overline{\epsilon}$, we have
\[
R^n F = (f_n(x) -\, b^{2^n}\, a(x)\, y\, (1+ O(\rho^n)), \ x\, ),
\]
where $f_n\to f_*$ exponentially fast, $b$ is the average Jacobian, $\rho\in (0,1)$,
and $a(x)$ is a universal function. Moreover, $a$ is analytic and
positive.
\end{thm}
\begin{cor}\label{bdsFx} There exists a universal $d_1>0$ such that for $k\ge 1$ large enough
$$
\frac{1}{d_1}\le \left|\frac{\partial F_k}{\partial x}(z)\right|\le d_1,
$$
for every $z\in B^1_v(F_k)$.
\end{cor}
Let $\tau_n$ be the tip of $F_n=R^nF$ and let $\tau^*$ be the tip of $F_*$.
\begin{lem}\label{holmt} There exists $\rho<1$ such that the
conjugations
$$
h_n:{\mathcal O}_{F_*}\to {\mathcal O}_{F_n}
$$
with $h_n(\tau^*)=\tau_{n}$ satisfy
$$
|h_n(z)-z|=O(\rho^n),
$$
for every $z\in {\mathcal O}_{F_*}$.
\end{lem}
\begin{proof} Choose $z^*\in {\mathcal O}_{F_*}$ and let $z=h_n(z^*)$. There are unique sequences $w_{n+1},\dots, w_m, \dots$, and $z_n, z_{n+1},\dots, z_m,\dots$, and
$z^*_n,z^*_{n+1},\dots, z^*_m,\dots$ with $w_k\in \{c,v\}$, $z_k\in {\mathcal O}_{F_k}$, and $z^*_k\in {\mathcal O}_{F_*}$ such that $z=z_n$, $z^*=z^*_n$ and for $k\ge n$
$$
z_k=\psi^{k+1}_{w_{k+1}}(z_{k+1})
$$
and
$$
z^*_k=(\psi^{k+1}_{w_{k+1}})^*(z^*_{k+1}).
$$
This follows from the construction of ${\mathcal O}_F$ in \cite{CLM}.
Theorem \ref{universality} implies
$$
|\psi^{k+1}_w-(\psi^{k+1}_w)^*|=O(\rho^k)
$$
for some $\rho<1$. The proof of Lemma 5.6 in \cite{CLM} gives that
the maps $(\psi^{k+1}_w)^*=\psi^*_w$ are contractions, $|D\psi^*_w|\le \sigma<1$. Then for $k\ge n$
$$
\begin{aligned}
|z_k-z^*_k|=&|\psi^{k+1}_{w_{k+1}}(z_{k+1})-(\psi^{k+1}_{w_{k+1}})^*(z^*_{k+1})|\\
\le &|\psi^{k+1}_{w_{k+1}}(z_{k+1})-(\psi^{k+1}_{w_{k+1}})^*(z_{k+1})|+\\
&|(\psi^{k+1}_{w_{k+1}})^*(z_{k+1})-(\psi^{k+1}_{w_{k+1}})^*(z^*_{k+1})|\\
\le &O(\rho^k)+\sigma\cdot |z_{k+1}-z^*_{k+1}|.
\end{aligned}
$$
Then for every $m>n$ we have
$$
|z_n-z^*_n|\le \sum_{k=n+1}^m O(\rho^k)\cdot \sigma^{k-n-1}+\sigma^{m-n}\cdot |z_m-z^*_m|.
$$
Observe that $|z_m-z^*_m|\le 1$, and the Lemma follows by taking $m>n$ sufficiently large.
\end{proof}
\subsection{Analytical properties of the coordinate changes}
Fix an infinitely renormalizable H\'enon-like map $F\in {\mathcal I}_\Omega(\overline{\epsilon})$
to which we can apply the results of \cite{CLM} and \cite{LM1}, $\overline{\epsilon}>0$ small enough. For such an $F$, we have a well defined {\it tip}:
$$
\tau\equiv \tau(F)=\bigcap_{n\ge 0} B^n_{v^n}.
$$
Consider the tips of the renormalizations, $\tau_k=\tau(R^k F)$.
To simplify the notation, we will translate these tips to the origin
by letting
$$
\Psi_k(z) = \psi^1_v (R^k F)\, (z + \tau_{k+1}) - \tau_k.
$$
Denote the derivative of the map $\Psi_k$ at $0$ by $D_k$
and decompose it into unipotent and diagonal factors:
\begin{equation}\label{Dk}
D_k= \left(
\begin{array}{cc}
1 & t_k\\
0 & 1
\end{array}\right)
\left(
\begin{array}{cc}
\alpha_k & 0\\
0 & \beta_k
\end{array}\right).
\end{equation}
Let us factor this derivative out from $\Psi_k$:
$$
\Psi_k = D_k \circ (\operatorname{id} + {\bf s}_k),
$$
where ${\bf s}_k(z) = (s_k(z) , 0) = O(|z|^2)$ near $0$.
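Since the second coordinate of $\psi_v$ depends only on $y$, the derivative $D_k$ is upper triangular, and the factorization (\ref{Dk}) is elementary: for an upper-triangular matrix with diagonal entries $a, d$ and upper-right entry $c$, one must take $\alpha=a$, $\beta=d$, $t=c/d$. A two-line check with assumed sample entries:

```python
# Unipotent/diagonal factorization of an upper-triangular derivative
# D = [[a, c], [0, d]]: D = [[1, t], [0, 1]] @ [[alpha, 0], [0, beta]]
# forces alpha = a, beta = d and t = c / d.
def decompose(a, c, d):
    return a, d, c / d                       # alpha, beta, t

a, c, d = 0.25, -0.125, -0.5                 # sample entries: alpha ~ sigma^2, beta ~ -sigma for sigma = 0.5
alpha, beta, t = decompose(a, c, d)
# Multiply the factors back: [[1,t],[0,1]] @ diag(alpha, beta) = [[alpha, t*beta],[0,beta]]
assert (alpha, t * beta, beta) == (a, c, d)
```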
Lemma 7.4 in \cite{CLM} states:
\begin{lem} \label{smalls}
There exists $\rho<1$ such that for $k\in {\mathbb Z}_+$ the following estimates hold:
\begin{enumerate}
\item [\rm (1)] $\displaystyle
\alpha_k=\sigma^2 \cdot (1+O(\rho^k)),\quad \beta_k=-\sigma \cdot (1+O(\rho^k)), \quad
t_k=O(\overline{\epsilon}^{2^k});
$
\item [\rm (2)]
$\displaystyle | \partial_x s_k | = O(1),\quad |\partial_y s_k| = O(\overline{\epsilon}^{2^k}); $
\item[\rm (3)]
$\displaystyle
|\partial^2_{xx} s_k |= O(1),\quad
|\partial^2_{xy} s_k |= O(\overline{\epsilon}^{2^k}) ,\quad
|\partial^2_{yy} s_k| =O(\overline{\epsilon}^{2^k}).
$
\end{enumerate}
\end{lem}
\begin{lem}\label{tilt} The numbers $t_k$ quantifying the tilt satisfy
$$
t_k\asymp -b_F^{2^k}.
$$
\end{lem}
We will use the following notation:
$$
B^{n-k}_{v^{n-k}}(F_k)=\operatorname{Im} \Psi_k^n.
$$
Consider the derivatives of the maps $\Psi^n_k$ at the origin:
$$
D_k^n=D_k\circ D_{k+1}\circ \cdots \circ D_{n-1}.
$$
We can reshuffle this composition and obtain:
\begin{equation}\label{reshuffling}
D_k^n=
\left(
\begin{array}{cc}
1 & t_k\\
0 & 1
\end{array}\right)
\left(
\begin{array}{cc}
(\sigma^2)^{n-k} & 0\\
0 & (-\sigma)^{n-k}
\end{array}\right) (1+O(\rho^k)).
\end{equation}
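To see where this reshuffling comes from, one can use the elementary commutation rule for a diagonal matrix past a unipotent shear,
$$
\left(
\begin{array}{cc}
\alpha & 0\\
0 & \beta
\end{array}\right)
\left(
\begin{array}{cc}
1 & t\\
0 & 1
\end{array}\right)
=
\left(
\begin{array}{cc}
1 & (\alpha/\beta)\,t\\
0 & 1
\end{array}\right)
\left(
\begin{array}{cc}
\alpha & 0\\
0 & \beta
\end{array}\right).
$$
By Lemma \ref{smalls}, $\alpha_j/\beta_j=-\sigma\,(1+O(\rho^j))$, so moving the unipotent factor of $D_j$, $j>k$, to the far left multiplies its entry $t_j$ by a factor of order $\sigma^{j-k}$; since moreover $t_j=O(\bar{\varepsilon}^{2^j})$, all these contributions are exponentially small and are absorbed into the error term $1+O(\rho^k)$.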
Factoring the derivatives $D_k^n$ out from $\Psi_k^n$, we obtain:
\begin{equation}\label{factoring}
\Psi_k^n = D_k^n \circ (\operatorname{id} + {\bf S}_k^n),
\end{equation}
where ${\bf S}_k^n (z) = (S_k^n(z), 0) = O(|z|^2)$ near 0.
The following lemma is Lemma 7.6 in \cite{CLM}:
\begin{lem}\label{APPsi} For $k<n$, we have:
\begin{enumerate}
\item[{\rm (1)}]
$ \displaystyle
| \partial_x S^n_k | = O(1),\quad |\partial_y S^n_k | = O(\bar{\varepsilon}^{2^k});
$
\item[{\rm (2)}]
$ \displaystyle
|\partial^2_{xx} S^n_k| =O(1), \quad
|\partial^2_{yy} S^n_k| =O(\bar{\varepsilon}^{2^k}), \quad
|\partial^2_{xy} S^n_k| =O(\bar{\varepsilon}^{2^k}\, \sigma^{n-k}).
$
\end{enumerate}
\end{lem}
\begin{lem}\label{bdsS'} There exists a universal $d_0>0$ such that for $k\ge 1$ large enough
$$
\frac{1}{d_0}\le \left|\frac{\partial(\operatorname{id} + {\bf S}_k^n)}{\partial x}\right|\le d_0.
$$
\end{lem}
\begin{proof} According to Proposition 7.8 in \cite{CLM}, the diffeomorphisms
$\operatorname{id} + {\bf S}_k^n$ stay within a compact family of diffeomorphisms. This gives the upper bound on
the derivative. The partial derivative $\frac{\partial(\operatorname{id} + {\bf S}_k^n)}{\partial x}$ cannot vanish at any point, since otherwise the derivative of the diffeomorphism would be singular at that point. This gives the positive lower bound on the partial derivative.
\end{proof}
\begin{lem}\label{deltapsi} There exists $\rho<1$ such that
$$
|\Psi^n_k-(\Psi_k^n)^*|_{C^0}=O(\rho^k).
$$
\end{lem}
\begin{proof} The proof is a small modification of the proof of
Lemma \ref{holmt}. We use the same notation: $w_l=v$ for all $l\ge k$.
We have to incorporate the translations which center the maps around the tips.
The estimates in the proof of Lemma \ref{holmt} become
$$
|\Psi^n_k(z)-(\Psi_k^n)^*(z)|
\le O(\rho^k)+\sum_{l=k+1}^n O(\rho^l)\cdot \sigma^{l-k-1}+\sigma^{n-k}\cdot |z_n-z^*_n|,
$$
where $z_n=z+\tau_n$ and $z^*_n=z+\tau^*$. From Lemma \ref{holmt} we get that
$|z_n-z^*_n|=O(\rho^n)$, and the lemma follows.
\end{proof}
\subsection{General Notions}
We will use the following general notions and notations throughout the text.
Let $X\subset \Bbb{R}^2$ and let $Q=[a,a+h]\times [b,b+v]$ be the smallest rectangle containing $X$. Then $h\ge 0$ is the horizontal size of $X$ and $v\ge 0$ the vertical size.
$Q_1\asymp Q_2$ means that $C^{-1}\le Q_1/Q_2\le C$, where $C>0$ is
an absolute constant or a constant depending only on, say, $F$. Similarly, we will use $Q_1\gtrsim Q_2$.
\section{Regular Pieces}\label{pieces}
By saying that something depends on the geometry of $F$,
we mean that it depends on the $C^2$-norm of $F$.
Below, all constants depend only on the geometry of $F$
unless explicitly stated otherwise.
The tip piece $B^k\equiv B^k_{v^k}$ of level $k\in {\mathbb N}$ contains two pieces of level $k+1$,
the tip one, $B^{k+1}$, and the lateral one,
$$
E^k =B^{k+1}_{v^{k}c}.
$$
They are illustrated in Figure \ref{figE}, and more schematically, in Figure~\ref{figEsch}.
For $n\ge k\ge 0$, let
$$
{\mathcal B}^n[k]\equiv {\mathcal B}^n(F)[k] =\{B\in {\mathcal B}^n|\, B\subset E^k\}.
$$
We call $k$ the {\it depth} of any piece $B\in {\mathcal B}^n[k]$.
A piece $B^n_{\omega}$ belongs to ${\mathcal B}^n[k]$ if and only if
$$
\omega=v^{k}c\omega_{k+2}\dots\omega_n.
$$
Observe that
$$
\mu \left(\bigcup_{B\in {\mathcal B}^n[k]} B\right) = \mu(E^k) = \frac{1}{2^{k+1}},
$$
where $\mu$ is the invariant measure on ${\mathcal O}_F$: it assigns mass $2^{-n}$ to each of the $2^n$ pieces of level $n$, and $E^k$ is a single piece of level $k+1$.
Let
$$
G_k = F^{2^k}: \bigcup_{l>k} E^l\to E^k, \quad k\ge 0.
$$
Given a piece $B\in {\mathcal B}^n[k]$, there is a unique sequence
$$
k=k_0<k_1<\dots <k_t=n, \quad k_i = k_i(B)
$$
such that
$$
B=G_{k_0}\circ G_{k_1}\circ\cdots \circ G_{k_{t-1}}\circ G_{k_t}(B^n).
$$
To see this, consider the backward orbit $\{F^{-m} B\}$ that brings $B$ to the tip piece $B^n$.
Let $F^{-m_i}(B)$ be the moments of its closest combinatorial approaches to the tip,
in the sense of the nest $B^0\supset B^1\supset\dots$.
Then $k_i$ is the depth of $F^{-m_i}(B)$.
Thus, $F^{-m_i}(B)\in E^{k_i}$, while $F^{-m}(B)\cap B^{k_i}=\emptyset$ for all $m<m_i$; compare with \S \ref{random walk}.
The pieces
$$
B^{(i)}:= F^{-m_i}(B) = G_{k_i}\circ \cdots \circ G_{k_{t-1}}\circ G_{k_t}(B^n)\in {\mathcal B}^n[k_i],
$$
with $i=1,2,\dots, t$,
are called {\it predecessors} of $B$.
Let us view a piece $B=B^n_{v^{k}c\omega_{k+2}\cdots \omega_n}\in {\mathcal B}^n[k]$
from {\it scale} $k$, i.e., let us consider the following piece ${\mathbf{B}}$ of depth $0$ for
the renormalization $F_k \equiv R^k F$:
\begin{equation}\label{B'}
{\mathbf{B}}=B^{n-k}_{c\omega_{k+2}\cdots \omega_n}(F_k)\in {\mathcal B}^{n-k}(F_k)[0], \quad \text{so}\quad B=\Psi_0^k ({\mathbf{B}}),
\end{equation}
see Figure \ref{figregpiece}.
Below, a ``rectangle'' means a rectangle with horizontal and vertical sides.
Given a piece $B = B^n_{\omega} \in {\mathcal B}^n$, let us consider
the smallest rectangle $Q=Q^n_{\omega}$ containing $B\cap {\mathcal O}_F$.
We say that $Q=Q(B)$ is {\it associated with} $B$.
\begin{rem} We are primarily interested in the geometry of the Cantor attractor ${\mathcal O}_F$.
For this reason we consider rectangles $Q$ circumscribed about ${\mathcal O}_F\cap B$
rather than those circumscribed about the actual pieces $B$.
However, our results apply to the latter rectangles as well.
\end{rem}
Given $B\in {\mathcal B}^n[k]$,
let us consider the rectangle ${\mathbf{Q}}$ associated to ${\mathbf{B}}\in {\mathcal B}^{n-k}(F_k)$.
Let ${\bf h}$ and $\bv$ be the horizontal and vertical sizes of ${\mathbf{Q}}$, respectively.
We also call them the {\it sizes of $B$ viewed from scale $k$}.
We say that the piece $B\in {\mathcal B}^n[k]$ is {\it regular} if these sizes are comparable,
or, in other words, if $\operatorname{mod} {\mathbf{B}}: = {\bf h}/\bv$ is of order 1:
\begin{equation}\label{h/v is bounded}
\frac{1}{C_0} \le \operatorname{mod} {\mathbf{B}} \le C_0,
\end{equation}
with $C_0=3d_1$, where $d_1>0$ is the bound on $\partial F_k/ \partial x$ from
Corollary \ref{bdsFx}
(see~Figure~\ref{figregpiece}).
\begin{figure}
\caption{A regular piece}
\label{figregpiece}
\end{figure}
Notice that in the degenerate case, $F(x,y)=(f(x),x)$, every piece is regular
since the slope of $f$ in $E^0$ is squeezed in between $d_1$ and $1/d_1$.
Next, we will specify an exponential control function $s(k)=s_{\alpha}(k)= a2^k-A$, see (\ref{exp control f-n}).
Namely, we let
$$
a= \frac{\ln b}{\ln \sigma}, \quad A = \frac{\ln \alpha}{\ln \sigma},
$$
where $\alpha>0$ is a small parameter.
The actual choice of $\alpha=\alpha(\bar{\varepsilon}) >0$, depending only on the geometry of $F$,
will be made in the course of the paper (see Propositions \ref{regtoreg},
\ref{sticktostick}, etc.).
Let $ l(k)= l_{\alpha}(k) = s(k) +k.$
If $k\leq l \le l(k)$ we say that the pieces $B\in{\mathcal B}^n[l]$ are {\it not too deep in $B^k$}.
The choice of the control function is made so that
\begin{equation}\label{blalpha}
b^{2^k}\leq \alpha\, \sigma^{l-k} \quad \mbox{for $l\le l_{\alpha}(k)$}.
\end{equation}
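The choice of $a$ and $A$ makes this a direct computation: since $0<\sigma<1$, we have $\sigma^a=b$ and $\sigma^A=\alpha$, so for $l-k\le s(k)$,
$$
\alpha\,\sigma^{l-k}\;\ge\; \alpha\,\sigma^{s(k)}\;=\;\sigma^{A}\cdot \sigma^{a2^k-A}\;=\;\left(\sigma^{a}\right)^{2^k}\;=\;b^{2^k}.
$$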
\begin{prop}\label{regtoreg} There exists $k^*\ge 0$ and $\alpha^*>0$
such that for $\alpha<\alpha^*$ and $k\ge k^*$
the following holds.
If $B\in{\mathcal B}^n[l]$ is regular and not too deep in $B^k$,
$k<l\le l_{\alpha}(k)$,
then
$$
\tilde{B}=G_k(B)\in {\mathcal B}^n[k]
$$
is regular as well.
\end{prop}
\begin{proof}
We should view $B$ from scale $l$,
i.e., consider the piece ${\mathbf{B}} \in {\mathcal B}^{n-l}(F_l)[0]$ defined by (\ref{B'}).
As the puzzle piece $\tilde{B}=F^{2^k}(B)$ has depth $k$,
it should be viewed from this depth. So, we consider
\begin{equation}\label{tilde B'}
\tilde{{\mathbf{B}}}\in {\mathcal B}^{n-k}(F_k)[0], \quad \text{such that}\quad \Psi_0^k(\tilde{{\mathbf{B}}})=\tilde{B}.
\end{equation}
Observe that
$\tilde{{\mathbf{B}}}=F_k\circ \Psi_k^l({\mathbf{B}})$ (see Figure \ref{mappiece}).
\begin{figure}
\caption{Pieces $B$ and $\tilde B$ viewed from appropriate scales.}
\label{mappiece}
\end{figure}
As above, let $({\bf h}, \bv)$ be the sizes of ${\mathbf{B}}$,
and let $(\tilde{{\bf h}}, \tilde{\bv})$ be the sizes of $\tilde {\mathbf{B}}$.
Since $B$ is regular, bound (\ref{h/v is bounded}) holds for $\operatorname{mod} {\mathbf{B}} = {\bf h}/\bv$.
We want to show that the same bound holds for $\operatorname{mod} \tilde {\mathbf{B}} = \tilde {\bf h}/ \tilde \bv$.
The map $\Psi_k^l$ factors into a non-linear and an affine part as described in \S \ref{prelim}:
$$
\Psi_k^l = D_k^l \circ (\operatorname{id} + {\bf S}_k^l).
$$
Figure \ref{mapfac} shows details of this factorization
for the map $\Psi^l_k$ from Figure~\ref{mappiece}.
Let $h_{\text{diff}}$ and $v_{\text{diff}}$ be the sizes of the rectangle $Q_{\text{diff}}$
associated with the piece $(\operatorname{id} + {\bf S}_k^l)({\mathbf{B}})$, see Figure \ref{mapfac}.
Lemmas \ref{APPsi}(1) and \ref{bdsS'}
imply for $k$ big enough:
$$
h_{\text{diff}} \leq d_0 \cdot {\bf h} +O(\bar{\varepsilon}^{2^k})\cdot \bv \leq 2 d_0\cdot {\bf h},
$$
where the last estimate takes into account (\ref{h/v is bounded}).
Similarly,
\begin{equation}\label{hdifflow}
h_{\text{diff}}\geq \frac{1}{2d_0}\, {\bf h}.
\end{equation}
Moreover, since the map $\operatorname{id} + {\bf S}_k^l$ is horizontal, we have
\begin{equation}\label{vdiffup}
v_{\text{diff}}= \bv\leq C_0\cdot {\bf h}.
\end{equation}
\begin{figure}
\caption{Factorization of the map $\Psi_k^l$ into horizontal and affine parts.}
\label{mapfac}
\end{figure}
Let $h_{\text{aff}}$ and $v_{\text{aff}}$ be the sizes of the rectangle $Q_{\text{aff}}$
associated with the piece $B_{\text{aff}}=\Psi_k^l({\mathbf{B}})=D_k^l\circ(\operatorname{id} + {\bf S}_k^l)({\mathbf{B}})$
(which is the piece $B$ viewed from scale $k$).
Incorporating the above estimates into decomposition (\ref{reshuffling})
and using Lemma \ref{tilt}, we obtain for large $k$ (with $s=l-k$):
$$
\begin{aligned}
h_{\text{aff}}&\le (h_{\text{diff}} \cdot \sigma^{2s} + v_{\text{diff}} \cdot
| t_k |\cdot \sigma^s)\cdot (1+O(\rho^k)) \\
& \le [3d_0 \cdot \sigma^s+ O(b^{2^k})]\cdot \sigma^s \cdot {\bf h}. \\
\end{aligned}
$$
Similarly, we obtain a lower bound for $h_{\text{aff}}$:
$$
h_{\text{aff}} \geq [\frac{1}{3d_0} \cdot \sigma^s - O(b^{2^k})] \cdot \sigma^s \cdot {\bf h}.
$$
If $B$ is not too deep for scale $k$, then $b^{2^k}\leq \alpha \, \sigma^s$, and we obtain:
\begin{equation}\label{h_aff}
h_{\text{aff}}\asymp \sigma^{2s}\cdot {\bf h},
\end{equation}
as long as $\alpha$ is small enough (depending on the geometry of $F_k$).
Bounds on $v_{\text{aff}}$ are obtained similarly (in fact, easier):
\begin{equation}\label{v_aff}
v_{\text{aff}} = v_{\text{diff}}\cdot \sigma^{l-k}\cdot (1+O(\rho^k)) =
\bv \cdot \sigma^s\cdot (1+O(\rho^k))\asymp \bv\cdot \sigma^s.
\end{equation}
Thus,
\begin{equation}\label{mod B_aff}
\operatorname{mod} B_{\text{aff}}=\operatorname{mod} \Psi_k^l({\mathbf{B}}) \asymp \sigma^s \operatorname{mod} {\mathbf{B}}.
\end{equation}
Thus the piece $B_{\text{aff}}$ becomes very thin; however, after applying the map $F_k$,
it gets roughly aligned with the parabola-like curve inside $E^k$,
which makes its modulus of order 1.
Furthermore, Theorem \ref{universality} and Corollary \ref{bdsFx} imply,
for $k$ large enough, the following bounds on the sizes of $\tilde{{\mathbf{B}}}$:
$$
\begin{aligned}
\frac{1}{2d_1} h_{\text{aff}} -A_0b^{2^k}\cdot v_{\text{aff}}
\leq \tilde{{\bf h}}&
\le 2d_1\cdot h_{\text{aff}} +A_0b^{2^k}\cdot v_{\text{aff}},\\
\tilde{\bv}&= h_{\text{aff}},
\end{aligned}
$$
where $A_0>0$ is an upper bound for $ a(x)\, (1+ O(\rho^k))$
which controls the vertical derivative of $F_k$.
Hence
$$
\operatorname{mod} \tilde {\mathbf{B}} \leq 2d_1 + \frac{A_0 b^{2^k}} {\operatorname{mod} \Psi_k^l({\mathbf{B}})}
\leq 2d_1 + \frac{A_0 b^{2^k}} {\sigma^s \operatorname{mod} {\mathbf{B}}}\leq 2d_1+ A_0 C_0 \alpha\leq 3d_1,
$$
as long as $\alpha$ is small enough.
\begin{rem}
Notice that $\operatorname{mod} {\mathbf{B}}$ appears only in the residual term of the last estimate.
The main term ($2 d_1$) depends only on the geometry of $F$,
which makes the bound for $\operatorname{mod} \tilde{{\mathbf{B}}}$ as good as that for ${\mathbf{B}}$.
\end{rem}
The lower estimate, $\operatorname{mod}\tilde {\mathbf{B}} \geq (3d_1)^{-1}$, is similar.
\end{proof}
\section{Sticks}\label{stic}
Let us consider a piece $B\in {\mathcal B}^n[l]$ and the corresponding piece ${\mathbf{B}}\in {\mathcal B}^{n-l}(F_l)[0]$,
see (\ref{B'}) and Figure \ref{figregpiece}. In the degenerate case, most
of the pieces ${\mathbf{B}}\cap {\mathcal O}_{F_l}$ get squeezed in a narrow strip around the diagonal
of the associated rectangle ${\mathbf{Q}}$.
We will show that this is also the case for many pieces of H\'enon perturbations.
To this end, let us quantify the thickness of the pieces in question.
Let us first introduce two standard strips of thickness $\delta$:
$$
\Delta_{\delta}^\pm =\{(x,y)\in [0,1]^2\, | \, \,|y \pm x|\le \frac{\delta}{2}\}
$$
(oriented ``north-west'' and ``north-east'' respectively).
Given a piece $B\in {\mathcal B}^n$ and the associated rectangle $Q=Q(B)$,
let $L: [0,1]^2\rightarrow Q$ be the diagonal affine map.
Let $\Delta(B)= L(\Delta_{\delta}^\pm)$, where:
\smallskip\noindent
$\bullet$ we select the ``$+$''-sign if $B$ comes from the upper branch of the parabola $x=f(y)$,
and the ``$-$''-sign otherwise;
\smallskip\noindent $\bullet$
$\delta=\delta_B$ is selected to be the smallest one such that $\Delta(B)\supset B\cap {\mathcal O}$.
\smallskip This $\delta_B$ is called the {\it (relative) thickness} of $B$.
The horizontal size $h\delta_B$ of $\Delta(B)$ is called the {\it absolute thickness} of $B$.
$\Delta(B)$ is called the associated diagonal strip.
We let ${\boldsymbol{\Delta}} \equiv \Delta_{\mathbf{B}}$ and call it the {\it regular stick} associated with $B$,
see Figure \ref{wid}.
\begin{figure}
\caption{Regular stick}
\label{wid}
\end{figure}
\begin{prop}\label{sticktostick}
There exists $k^*\ge 0$ and $\alpha^*>0$ such that for $\alpha<\alpha^*$ and $k\ge k^*$ the following holds.
If $B\in{\mathcal B}^n[l]$ is regular and not too deep in $B^k$, $k<l\le l_{\alpha}(k)$,
then
$$
\delta_{\tilde{{\mathbf{B}}}}\le \frac12 \cdot \delta_{\mathbf{B}}+O(\sigma^{n-l}),
$$
where $\tilde{B}=G_k(B)\in {\mathcal B}^n[k]$ and $\tilde{{\mathbf{B}}}=F_k(\Psi^l_k({\mathbf{B}}))$.
\end{prop}
\begin{proof}
We will use the notation from \S \ref{pieces}.
Let $\boldsymbol{\delta}\equiv \delta_{\mathbf{B}}$,
and let ${\mathbf{w}}=\boldsymbol{\delta} \cdot {\mathbf{h}}$ be the absolute thickness of ${\mathbf{B}}$.
The relative thickness of $\tilde{{\mathbf{B}}}$ is denoted by $\tilde{\boldsymbol{\delta}}\equiv \delta_{\tilde{\mathbf{B}}}$.
To estimate $\tilde{\boldsymbol{\delta}}$, we will decompose $\Psi_k^l$ as in \S \ref{pieces}.
Let $w_{\text{diff}}$ be the absolute thickness of $B_{\text{diff}} \equiv (\operatorname{id} + {\bf S}_k^l) ({\mathbf{B}})$.
Then
\begin{equation}\label{wdiff}
w_{\text{diff}} = O( {\mathbf{w}} + {\mathbf{h}}\cdot\sigma^{n-l}).
\end{equation}
Indeed, let $\Gamma_y$ be the horizontal section of
$(\operatorname{id} + {\bf S}_k^l)(\Delta_{\mathbf{B}})$ at height $y$,
and let $\boldsymbol{\Gamma}_y= (\operatorname{id} + {\bf S}_k^l)^{-1} (\Gamma_y)$.
Then
$$
|\Gamma_y|\leq |\boldsymbol{\Gamma}_y|\cdot \|\operatorname{id}+{\bf S}_k^l\|_{C^1} = O({\mathbf{w}}),
$$
where the last estimate follows from Lemma \ref{APPsi}(1).
Furthermore, let us consider a boundary curve of
$(\operatorname{id} + {\bf S}_k^l)(\Delta_{\mathbf{B}})$.
Its horizontal deviation from any of its tangent lines is bounded by
\begin{equation}\label{curvature}
\frac12 \|\operatorname{id}+{\bf S}_k^l\|_{C^2}\cdot (\operatorname{diam} {\mathbf{B}})^2 = O(\sigma^{n-l})\cdot {\mathbf{h}},
\end{equation}
where the last estimate follows from Lemma \ref{APPsi} (2), bounded modulus of ${\mathbf{B}}$ (\ref{h/v is bounded}),
and Lemma \ref{contracting}.
Hence
$$
w_{\text{diff}} \leq \max_y|\Gamma_y| + O(\sigma^{n-l})\cdot {\mathbf{h}},
$$
and (\ref{wdiff}) follows.
Together with (\ref{hdifflow}), it implies:
\begin{equation}\label{de_diff}
\delta_{\text{diff}} = O(\boldsymbol{\delta} + \sigma^{n-l}).
\end{equation}
Let us now consider the piece
$B_{\text{aff}}\equiv \Psi^l_k({\mathbf{B}})=D^l_k(B_{\text{diff}})$, see Figure~\ref{mapfac}.
Let $D^l_k= T \circ \Lambda$, where $\Lambda=\Lambda^l_k$ and $T=T^l_k$ are the diagonal and shear parts of $D^l_k$ respectively, see (\ref{reshuffling}).
Let us consider the box $B_{\text{diag}}= \Lambda(B_{\text{diff}})$,
and let $h_{\text{diag}}=\sigma^{2(l-k)}h_{\text{diff}}$ and $v_{\text{diag}}=\sigma^{l-k} v_{\text{diff}}$ be its horizontal and vertical sizes.
Since diagonal affine maps preserve the horizontal thickness,
the thickness is only affected by the shear part $T^l_k$, which has order $t_k\asymp b^{2^k}$,
see Lemma \ref{tilt}; namely:
\begin{equation}\label{de_aff}
\begin{aligned}
\delta_{\text{aff}} &\le \delta_{\text{diff}}\cdot \frac{1}{1-\frac{v_{\text{diag}}}{h_{\text{diag}}} \cdot t_k}\\
&=\delta_{\text{diff}} \cdot \frac{1}{1-\frac{v_{\text{diff}}}{h_{\text{diff}}} \cdot \sigma^{-(l-k)} \cdot t_k} \\
&=O(\delta_{\text{diff}}) = O(\boldsymbol{\delta} + \sigma^{n-l}),
\end{aligned}
\end{equation}
where the passage to the last line comes from (\ref{blalpha}), (\ref{hdifflow}), (\ref{vdiffup}) and Lemma \ref{tilt}.
Let us also consider the absolute {\it vertical thickness} $u_{\text{aff}}$ of $B_{\text{aff}}$,
i.e., the vertical size of the stick $\Delta(B_{\text{aff}})$. From triangle similarity, we have:
$$
\frac{u_{\text{aff}}}{v_{\text{aff}}}=
\frac{w_{\text{aff}}}{h_{\text{aff}}}.
$$
So
\begin{equation}\label{u_aff}
u_{\text{aff}}= \frac{w_{\text{aff}}}{\operatorname{mod} B_{\text{aff}}} \asymp \frac{w_{\text{aff}}}{\sigma^s \operatorname{mod} {\mathbf{B}}} \asymp \sigma^{-s} \cdot w_{\text{aff}},\quad s=l-k,
\end{equation}
where the last estimate follows from regularity of $B$ while the previous one comes from (\ref{mod B_aff}).
We are now prepared to apply the map $F_k : (x,y)\mapsto (f_k(x)-\varepsilon_k(x,y), x)$,
where $\|\varepsilon_k\|_{C^2}= O(b^{2^k})$, see Theorem \ref{universality}.
Let $\tilde{{\mathbf{w}}}$ be the absolute thickness of $\tilde{{\mathbf{B}}}$.
By (\ref{h_aff})--(\ref{v_aff}),
the rectangle $Q_{\text{aff}}$ associated with $B_{\text{aff}}$ has sizes
$$
v_{\text{aff}}\asymp \sigma^{s}\bv \quad \text{and}\quad
h_{\text{aff}}\asymp \sigma^{2s}{\bf h}.
$$
Let us use an affine parametrization for the diagonal $Z$ of $B_{\text{aff}}$:
$$
x=x_0+t, \quad y=y_0+\frac{C}{\sigma^s}t,\quad 0\le t\le h_{\text{aff}},
$$
where $(x_0,y_0)$ is the corner where the stick $\Delta_{\text{aff}}$ begins.
Restrict $F_k$ to this diagonal:
$$
F_k(x(t),y(t))=(A+Bt+E(t), x_0+t),
$$
where $E(t)$ is the second order deviation of $F_k(Z)$ from the straight line.
We obtain:
$$
\begin{aligned}
E(t)&=O\left(\left\|\frac{\partial^2 (f_k-\varepsilon_k)}{\partial x^2}\right\|+
\left\|\frac{\partial^2 \varepsilon_k}{\partial x \partial y}\right\|\cdot \sigma^{-s}+
\left\|\frac{\partial^2 \varepsilon_k}{\partial y^2}\right\|\cdot (\sigma^{-s})^2\right)\cdot h_{\text{aff}}^2\\
&=O\left(h_{\text{aff}}+b^{2^k}\sigma^{-s}h_{\text{aff}}+
(b^{2^k}\sigma^{-s})\cdot (\sigma^{-s}h_{\text{aff}})\right)\cdot h_{\text{aff}}.
\end{aligned}
$$
From Lemma \ref{contracting} we have $h_{\text{aff}}=O(\sigma^{n-k})$.
Hence,
$$
\begin{aligned}
E(t)&=O(\sigma^{n-k}+b^{2^k}\sigma^{-(l-k)+n-k}+
\alpha\cdot \sigma^{-(l-k)+n-k})\cdot h_{\text{aff}}\\
&=O(\sigma^{n-l})\cdot h_{\text{aff}},
\end{aligned}
$$
where we used that $l$ is not too deep for $k$, i.e., $b^{2^k}\sigma^{-s}\le \alpha$.
It follows that
$$
\begin{aligned}
\tilde{{\mathbf{w}}} &= O(\sigma^{n-l}\cdot h_{\text{aff}}+b^{2^k}\cdot u_{\text{aff}})\\
&= O(\sigma^{n-l}\cdot h_{\text{aff}}+b^{2^k}\sigma^{-s}\cdot w_{\text{aff}}) \\
&= O(\sigma^{n-l}\cdot h_{\text{aff}}+ \alpha \cdot w_{\text{aff}}),
\end{aligned}
$$
where we used (\ref{u_aff}).
\begin{rem}
This is the moment when the thickness improves.
\end{rem}
From the regularity of $\tilde{{\mathbf{B}}}$ we get
$\tilde{{\bf h}}\asymp \tilde{\bv}=h_{\text{aff}}$. Thus,
$$
\begin{aligned}
\tilde{\boldsymbol{\delta}} &=O(\sigma^{n-l}+\alpha\cdot \delta_{\text{aff}})\\
&= O(\alpha\cdot \boldsymbol{\delta} + \sigma^{n-l}),
\end{aligned}
$$
where we used (\ref{de_aff}).
The proposition follows
as long as $\alpha$ is sufficiently small.
\end{proof}
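The proposition above is a contraction statement for the relative thickness: any sequence of thicknesses satisfying a recursion of the form $\delta_{m+1}\le \frac12\,\delta_m + c$ with a fixed $c\ge 0$ obeys, by induction,
$$
\delta_m \;\le\; 2^{-m}\,\delta_0 + c\sum_{j=0}^{m-1} 2^{-j}\;\le\; 2^{-m}\,\delta_0 + 2c,
$$
so after a bounded number of steps the thickness is dominated by the error term. This is the mechanism behind the asymptotic vanishing of the relative thickness of most pieces used in the sequel.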
\section{Scaling}\label{scaling}
Let $B\in {\mathcal B}^n[k]$ and $\hat{B}\in{\mathcal B}^{n-1}[k]$ with $B\subset \hat{B}$.
Say,
$$
B=B^n_{\omega\nu}\subset \hat{B}=B^{n-1}_{\omega}\subset E^k.
$$
Let ${\mathbf{B}}$ and $\hat{{\mathbf{B}}}$ be the corresponding rescaled pieces,
so $B=\Psi_0^k({\mathbf{B}})$ and $\hat{B}=\Psi_0^k(\hat{{\mathbf{B}}})$.
The horizontal and vertical sizes of the associated rectangles are denoted by ${\mathbf{h}},{\mathbf{v}}>0$ and $\hat{{\mathbf{h}}}, \hat{{\mathbf{v}}}>0$, respectively.
The {\it scaling number} of $B$ is
$$
\sigma_{\mathbf{B}}=\frac{{\mathbf{v}}}{\hat{{\mathbf{v}}}}.
$$
\begin{rem}\label{prosca}
The scaling number can be expressed directly in terms of the original pieces $B$ and $\hat{B}$.
Indeed, since the diffeomorphism $\Psi^k_0$ is a horizontal map,
we have $\sigma_{\mathbf{B}}= v/\hat v$, where $v$ and $\hat{v}$ are the vertical sizes of $B$ and $\hat{B}$.
We will use the notation $\sigma_B$ when we refer to the corresponding measurement in the domain of $F$.
This formal distinction will only play a role in (\ref{deltasigmapro}).
\end{rem}
\begin{rem}
There are many possible ways to define the scaling number.
The proof of the probabilistic universality will show that the relative thickness of most pieces asymptotically vanishes.
Because of this, most definitions of the scaling number become equivalent.
\end{rem}
For $B= B^n_{\omega\nu}(F)$ as above,
let $B^*= B^n_{\omega\nu}(F_*)$ be the corresponding degenerate
piece for the renormalization fixed point $F_*$.
The {\it proper scaling} for $B$ is
$$
\sigma^*_{{\mathbf{B}}}=\sigma_{B^n_{\omega\nu}(F_*)}.
$$
The function
$$
\underline{\sigma}: B\mapsto \sigma_{\mathbf{B}}
$$
is called the {\it scaling function} of $F$. The universal scaling function
$\underline{\sigma}^*$ of $F_*$ is injective, as was shown in \cite{BMT}.
\begin{rem}\label{1Dscalingstar} Given a piece $B\in {\mathcal B}^{n+1}(F_*)$, let $\hat{B}^*\in {\mathcal B}^n(F_*)$ be the piece which contains $B$.
For some $\hat{i}<2^n$ we have
$$
\pi_1(\hat{B}^*)=I^*_{\hat{i}}(n)\in {\mathcal C}_n.
$$
Similarly, $\pi_1(B^*)=I^*_{i}(n+1)\in {\mathcal C}_{n+1}$, for $i=\hat{i}$ or $i=\hat{i}+2^n$. The scaling ratios $\sigma_{\mathbf{B}}$ are
vertical measurements of pieces. Using that H\'enon-like maps take vertical lines to horizontal lines, $y'=x$, we have
$$
\sigma^*_{{\mathbf{B}}}=\frac{|I^*_{i-1}(n+1)|}{|I^*_{\hat{i}-1}(n)|}.
$$
\end{rem}
\begin{prop}\label{scapush}
There exists $k^*\ge 0$ and $\alpha^*>0$ such that for $\alpha<\alpha^*$ and $k\ge k^*$ the following holds.
If a piece $B\in{\mathcal B}^n[l]$ is regular and not too deep in $B^k$, i.e., $k<l\le l_{\alpha}(k)$,
then
$$
\sigma_{\tilde{{\mathbf{B}}}}= \sigma_{\mathbf{B}} + O(\delta_{\hat{{\mathbf{B}}}}+\sigma^{n-l}),
$$
where $\tilde{B}=G_k(B)\in {\mathcal B}^n[k]$ and $B\subset \hat{B}=\Psi^l_0(\hat{{\mathbf{B}}})\in {\mathcal B}^{n-1}[l]$.
\end{prop}
\begin{proof} As above in \S \ref{pieces}, let $h_{\text{aff}}$ stand for the horizontal length of $B_{\text{aff}} =\Psi_k^l({\mathbf{B}})$, see Figure \ref{mapfac}.
We will use the similar notation $\hat{h}_{\text{aff}}$ and $\hat{w}_{\text{aff}}$ for the corresponding measurements of
the piece $\hat B_{\text{aff}} := \Psi_k^l(\hat{{\mathbf{B}}})$.
Since $F_k$ maps vertical lines to horizontal lines, we have
$$
\sigma_{\tilde{{\mathbf{B}}}}=\frac{h_{\text{aff}}}{\hat{h}_{\text{aff}}}.
$$
Let $\gamma$ be the angle between the diagonal of $\hat B_{\text{aff}}$ and the vertical side, so $\operatorname{tg}\gamma = \operatorname{mod} \hat B_{\text{aff}}$.
Then
$$
v_{\text{aff}} \cdot \operatorname{tg} \gamma = \hat h_{\text{aff}}\, \frac{v_{\text{aff}}}{\hat v_{\text{aff}}} = \hat h_{\text{aff}}\cdot \sigma_{\mathbf{B}}.
$$
Now Figure \ref{figsca} shows:
$$
|h_{\text{aff}} - v_{\text{aff}} \cdot \operatorname{tg} \gamma|\leq \hat w_{\text{aff}}.
$$
Dividing by $\hat h_{\text{aff}}$ (taking into account the two previous formulas and the definition of the relative thickness
$\hat{\delta}_{\text{aff}}= \hat w_{\text{aff}}/\hat h_{\text{aff}}$), we obtain:
$$
| \sigma_{\tilde{{\mathbf{B}}}} - \sigma_{\mathbf{B}}| \leq \hat{\delta}_{\text{aff}}.
$$
Now the Proposition follows from (\ref{de_aff}).
\end{proof}
\section{Universal Sticks}\label{pres}
\subsection{Definition and statement} \label{defstat}
Let us consider a piece $B\in {\mathcal B}^n$ and the two pieces $B_1, B_2\in {\mathcal B}^{n+1}$ of level $n+1$ contained in $B$.
Rotate it to make it horizontal and then rescale it to horizontal size 1; denote the corresponding linear conformal map by $A$.
Let $\delta, \sigma_{B_1}, \sigma_{B_2}\ge 0$ be the smallest numbers such that:
\begin{itemize}
\item[(1)] $A(B\cap {\mathcal O}_F)\subset [0,1]\times [0,\delta]$,
\item[(2)] $A(B_1\cap {\mathcal O}_F)\subset [0,\sigma_{B_1}]\times [0,\delta]$,
\item[(3)] $A(B_2\cap {\mathcal O}_F)\subset [1-\sigma_{B_2},1]\times [0,\delta]$,
\end{itemize}
for the optimal choice of $A$. The numbers $\sigma_{B_1}$ and $\sigma_{B_2}$ are called the {\it scaling factors} of $B_1$ and $B_2$.
\begin{rem}\label{scaprec} The scaling factor $\sigma_{\mathbf{B}}$ of a piece $B$ is a measurement of the corresponding ${\mathbf{B}}$.
The scaling factor $\sigma_B$ of $B$ refers to measurements of the actual piece in the domain of $F$. The difference between the scaling factors $\sigma_B$ and $\sigma_{\mathbf{B}}$ is estimated in Proposition \ref{precision}.
\end{rem}
We say that $B$ is {\it $\varepsilon$-universal} if
$$
|\sigma_{B_1}-\sigma^*_{{\mathbf{B}}_1}|\le \varepsilon, \quad
|\sigma_{B_2}-\sigma^*_{{\mathbf{B}}_2}|\le \varepsilon, \quad
\text{and} \quad
\delta\le \varepsilon.
$$
The {\it precision} of the piece $B$ is the smallest $\varepsilon>0$ for which $B$ is $\varepsilon$-universal.
The optimal $A^{-1}([0,1]\times[0,\delta])$ is called the {\it $\varepsilon$-stick} for $B$. We will refer to the {\it (relative) length} and {\it (relative) height} of such a stick.
Let ${\mathcal S}^n(\varepsilon)\subset {\mathcal B}^n$ be the collection of $\varepsilon$-universal pieces.
\begin{defn}\label{defunidist}
The Cantor attractor ${\mathcal O}_F$ of an infinitely renormalizable H\'enon-like map $F\in {\mathcal H}_\Omega(\overline{\varepsilon})$
is {\it probabilistically universal} if there is $\theta<1$ such that
$$
\mu({\mathcal S}^n(\theta^n))\ge 1-\theta^n,\quad n\ge 1.
$$
\end{defn}
Now we can formulate the main result of this paper:
\begin{thm}\label{distuni}
The Cantor attractor ${\mathcal O}_F$ is probabilistically universal.
\end{thm}
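For orientation, we record a pointwise reading of this statement; it is a standard Borel--Cantelli argument and is not used in the proofs below.
\begin{rem}
Since $\sum_{n\ge 1}\mu\big({\mathcal O}_F\setminus \bigcup_{B\in {\mathcal S}^n(\theta^n)} B\big)\le \sum_{n\ge 1}\theta^n<\infty$, the Borel--Cantelli Lemma implies that $\mu$-almost every $x\in {\mathcal O}_F$ lies in a $\theta^n$-universal piece of ${\mathcal B}^n$ for all sufficiently large $n$.
\end{rem}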
After careful choices of $\theta<1$, $q_0<q_1$ and $\kappa(n)=-\text{Const}+ \ln n$, one distinguishes three regimes where pieces in ${\mathcal S}_n(\theta^n)\cap E^k$ are discovered by different techniques.

The {\it one-dimensional regime}: all the pieces in ${\mathcal B}^n[k]$ with $(1-q_1)\cdot n\le k\le (1-q_0)\cdot n$ are in ${\mathcal S}_n(\theta^n)$. These very deep pieces are controlled by the one-dimensional renormalization fixed point: they are perturbed versions of the corresponding pieces of $F_*$ and their relative displacements are exponentially small, see Lemma \ref{displacement} and Proposition \ref{asympwidth}. We have to exclude the pieces in ${\mathcal B}^n[k]$ with $k>(1-q_0)\cdot n$ because they do not have a small thickness. Viewed from their scale $k$, they are relatively large pieces close to the graph of $f_*$. The curvature of the graph of $f_*$ causes these pieces to have a large thickness.

The {\it pushing-up regime}: the pieces from the one-dimensional regime can be pushed up without being distorted too much, using Propositions \ref{sticktostick} and \ref{scapush}, as long as they are not too deep. The resulting pieces have exponentially small precision, see Proposition \ref{precision}. In this way one finds pieces in ${\mathcal S}_n(\theta^n)\cap E^k$ for
$0\le k<(1-q_1)\cdot n$. Unfortunately, the relative measure of the pieces in ${\mathcal S}_n(\theta^n)\cap E^k$ obtained by pushing up is only exponentially close to $1$ for $k\ge \kappa(n)\asymp \ln n$, see Proposition \ref{meas}. That is why the pushing-up regime is restricted to $\kappa(n)\le k< (1-q_1)\cdot n$, where these pieces occupy $E^k$ except for an exponentially small relative part.

The {\it brute-force regime}: the pieces obtained in the one-dimensional and pushing-up regimes are in $B^{\kappa(n)}$.
They will be spread around by brute-force iteration of the original map until returning.
The time to go from $B^{\kappa(n)}$ and return by iterating the original map is $2^{\kappa(n)}$.
The depth $\kappa(n)$ is the largest integer such that $2^{\kappa(n)}\le Kn\ln 1/\theta$.
The pieces in the one-dimensional and pushing-up regimes have exponentially small precision. Each of the
brute-force return steps used to spread around the pieces from the deeper regimes will distort their exponential precision $\theta^n$, see Proposition \ref{badpart}. The total distortion along such a return orbit can be bounded by $O(r^{2^{\kappa(n)}})=O(r^{Kn\ln 1/\theta})$, with $r\gtrsim 1/b\gg 1$.
However, this distortion cannot destroy the exponential precision when $\theta<1$ is chosen close enough to $1$.

The pushing-up regime is split into two parts. Let $\kappa_0(n)$ be the smallest integer such that $l(\kappa_0(n))\ge n$. As long as $\kappa_0(n)\le k<(1-q_1)\cdot n$, the pieces in ${\mathcal B}^n[l]$, $l>k$, are not too deep and can be pushed up into $E^k$. Indeed, $\kappa_0(n)\asymp \ln n$ is uniquely defined and cannot be adjusted. Unfortunately, we cannot use $\kappa(n)=\kappa_0(n)$ because the corresponding return time $2^{\kappa_0(n)}$ used to fill the brute-force regime might be too large: it might build up too much distortion, which is of the order $O(r_0^{n})$ for some definite $r_0>1$. We have to choose $\kappa(n)\asymp \ln n$ much smaller than $\kappa_0(n)$ to get an arbitrarily slowly growing rate for the distortion during the brute-force regime. The rate should be small enough that the exponentially decaying precision in the deeper regimes cannot be destroyed.

In the regime $\kappa(n)\le k<\kappa_0(n)$ we have
$l(k)<(1-q_1)\cdot n$, which means that we cannot push up all previously recovered pieces in $B^n[l]$ with $l>l(k)$. This is responsible for the super-exponential loss term in Proposition \ref{meas}.
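For orientation, the defining inequality for $\kappa(n)$ can be solved explicitly (this is our bookkeeping, with the constants as above):
$$
\kappa(n)=\left\lfloor \log_2\big(Kn\ln(1/\theta)\big)\right\rfloor=\log_2 n+O(1),
$$
so indeed $\kappa(n)\asymp \ln n$, and $2^{\kappa(n)}\le Kn\ln(1/\theta)<2^{\kappa(n)+1}$. Consequently the total distortion grows like $r^{Kn\ln(1/\theta)}=e^{Kn\ln(1/\theta)\ln r}$, an exponential whose rate $K\ln(1/\theta)\ln r$ can be made arbitrarily small by taking $\theta$ close to $1$.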
\subsection{Universal sticks created in the one-dimensional regime}
\begin{prop}\label{asympwidth} There exist $\rho<1$, $q^*>0$ with the following property. For every $0<q_0<q_1\le q^*$ there exists $n^*>0$ such that for $n\ge n^*$ and $(1-q_1)\cdot n\le k\le n$
\begin{itemize}
\item [(1)] every $B\in {\mathcal B}^n[k]$ is regular.
\item [(2)] for every $B\in {\mathcal B}^{n+1}[k]$
$$
|\sigma_{\mathbf{B}}-\sigma^*_{\mathbf{B}}|=O(\rho^{q_1\cdot n}),
$$
where $B=\Psi^{k+1}_0({\mathbf{B}})$.
\item [(3)] for every $B\in {\mathcal B}^n[k]$ with $(1-q_1)\cdot n\le k\le (1-q_0)\cdot n$
$$
\delta_{\mathbf{B}}=O(\rho^{q_0\cdot n}),
$$
where $B=\Psi^k_0({\mathbf{B}})$.
\end{itemize}
\end{prop}
Choose
$(1-q_1)\cdot n\le k\le n$
and $B\in {\mathcal B}^n[k]$. Let ${\mathbf{B}}\in {\mathcal B}^{n-k}(F_k)$ be such that $B=\Psi_0^k({\mathbf{B}})$.
Let $\tau_n$ be the tip of $F_n$ and $\tau_*$ the tip of $F_*$. In the next part we will have to compare the maps $\Psi^n_k$ related to $F$ with the maps $(\Psi^n_k)^*$ corresponding to $F_*$. Let
$$
{\mathbf{B}}_0=B_{v^{n-k}}^{n-k}(F_k)=\Psi^n_k(\operatorname{Dom}(F_n))
$$
and
$$
{\mathbf{B}}^*_0=B_{v^{n-k}}^{n-k}(F_*)=(\Psi^{n-k}_0)^*(\operatorname{Dom}(F_*)),
$$
where $(\Psi^{n-k}_0)^*$ is the change of coordinates used to construct $R^{n-k}F_*$.
Then ${\mathbf{B}}=F_k^j({\mathbf{B}}_0)$ for some odd $0\le j<2^{n-k}$. Let
${\mathbf{B}}_j=F_k^j({\mathbf{B}}_0)$ and ${\mathbf{B}}^*_j=F_*^j({\mathbf{B}}^*_0)$ for $0\le j<2^{n-k}$. We will analyse the relative positions of ${\mathbf{B}}_j$ and ${\mathbf{B}}^*_j$.
Let
$$
I_j=\pi_1({\mathbf{B}}_j)
\quad \mathrm{and} \quad
J_j=\pi_2({\mathbf{B}}_j).
$$
The intervals in the $n^{th}$ cycle of $f_*$ are denoted by $I^*_j(n)$, see
\S \ref{1D universal f-s}. Observe,
$$
I^*_j\equiv I^*_j(n-k)=\pi_1({\mathbf{B}}^*_j), \quad 0\le j<2^{n-k},
$$
and
$$
J^*_j=\pi_2({\mathbf{B}}^*_j)=I^*_{j-1}(n-k),
\quad 0< j<2^{n-k}.
$$
Consider the conjugations
$$
h_n:{\mathcal O}_{F_*}\to {\mathcal O}_{F_n}
$$
with $h_n(\tau_*)=\tau_{n}$. These conjugations allow us to label the points in
${\mathcal O}_{F_n}$.
Choose $z^*\in {\mathcal O}_{F_*}$ and let
$z=h_n(z^*)$.
Let $(x_0,y_0)=\Psi^n_k(z)\in {\mathbf{B}}_0$ and
$(x^*_0,y^*_0)=(\Psi^n_k)^*(z^*)\in {\mathbf{B}}^*_0$.
The points in the orbits are
$$
(x_j, y_j)=F_k^j(x_0,y_0)
\quad \mathrm{and}\quad
(x^*_j, y^*_j)=F_*^j(x^*_0,y^*_0),
$$
with $0\le j<2^{n-k}$. The first estimates will be on the relative displacements
$\frac{\Delta x_j}{|I^*_j|}$ and $\frac{\Delta y_j}{|J^*_j|}$, where
$\Delta x_j=x_j-x^*_j$
and
$\Delta y_j=y_j-y^*_j$.
\begin{lem}\label{displacement}
There exist $\rho<1$, $q^*>0$ with the following property. For every $0<q\le q^*$ there exists $n^*>0$ such that for $n\ge n^*$, $(1-q)\cdot n\le k\le n$, and $0\le j<2^{n-k}$
$$
\frac{|\Delta x_j|}{|I^*_j|}=O(\rho^{q \cdot n})
\quad\mathrm{and}\quad
\frac{|\Delta y_j|}{|J^*_j|}=O(\rho^{q \cdot n}).
$$
\end{lem}
\begin{proof} Recall that $y_{j+1}=x_{j}$. Hence,
$$
\frac{|\Delta y_{j+1}|}{|J^*_{j+1}|}= \frac{|\Delta x_{j}|}{|I^*_{j}|},
$$
and we only have to estimate the displacements $\Delta x_j$ and $\Delta y_0$.
Since $F_k\to F_*$ exponentially fast, at a rate controlled by some $\rho<1$, see Theorem \ref{universality}, we have
$$
\begin{aligned}
x_{j+1}&=f_*(x_j)+ O(\rho^k)\\
&=f_*(x^*_j)+Df_*(\zeta_j)\Delta x_j +O(\rho^k).
\end{aligned}
$$
Hence,
$$
\Delta x_{j+1}= Df_*(\zeta_j)\Delta x_j +O(\rho^k).
$$
There exists $K>1$ such that
\begin{equation}\label{delatxyest}
\frac{|\Delta x_{j+1}|}{|I^*_{j+1}|}\le \frac{Df_*(\zeta_j)}{\frac{|I^*_{j+1}|}{|I^*_j|}} \cdot \frac{ |\Delta x_j|}{|I^*_j|} +
K\frac{\rho^k}{\rho_0^{n-k}},
\end{equation}
where we used the {\it a priori} bounds: $|I^*_{j+1}|\ge \rho_0^{n-k}$ for some $\rho_0<1$.
We will use (\ref{delatxyest}) repeatedly, but to do so we first need to estimate $|\Delta x_0|$.
Let $\Delta z=z-z^*$ and use Lemmas \ref{deltapsi}, \ref{contracting}, and \ref{holmt} in the following estimate:
$$
\begin{aligned}
|(x_0,y_0)-(x^*_0, y^*_0)|&\le |\Psi^n_k(z)-(\Psi_k^n)^*(z^*)|\\
&\le |\Psi^n_k-(\Psi_k^n)^*|+|(\Psi^n_k)^*(z)-(\Psi^n_k)^*(z^*)|\\
&\le O(\rho^k)+ |D(\Psi^n_k)^*|\cdot |\Delta z|\\
&=O(\rho^k+ \sigma^{n-k}\cdot \rho^n )\\
&=O(\rho^k).
\end{aligned}
$$
Thus,
$$
\frac{ |\Delta x_0|}{|I^*_0|}=O(\frac{\rho^k}{\rho_0^{n-k}} )
$$
and
\begin{equation}\label{deltay0}
\frac{ |\Delta y_0|}{|J^*_0|}=O(\frac{\rho^k}{\rho_0^{n-k}}).
\end{equation}
Let $r>0$ and $D>1$ be given as in Lemma \ref{distortion} and $K>1$ as defined above.
For $q>0$ small enough and $n\ge 1$ large enough we have
\begin{equation}\label{deltax0}
\frac{ |\Delta x_0|}{|I^*_0|}=O(\frac{\rho^k}{\rho_0^{n-k}})=O((\frac{\rho^{1-q}}{\rho_0^{q}})^n)=O(\rho^{q\cdot n})\le \frac{r}{2D}
\end{equation}
and
\begin{equation}\label{deltax02}
DK(\frac{2}{\rho_0})^{n-k}\cdot \rho^k=O((\frac{\rho^{1-q}}{(\rho_0/2)^{q}})^n)=O(\rho^{q\cdot n})\le \frac{r}{2}.
\end{equation}
One has to be careful when applying (\ref{delatxyest}) repeatedly: the points $\zeta_j$ should not be too far from $I^*_j$, so that the distortion can be controlled.
\begin{clm} For $q>0$ small enough and $n>1$ large enough
$$
\frac{ |\Delta x_j|}{|I^*_j|}\le DK(\frac{2}{\rho_0})^{n-k}\cdot \rho^k+D \frac{ |\Delta x_0|}{|I^*_0|},
$$
for $0 \le j<2^{n-k}$.
\end{clm}
\begin{proof}
The proof is by induction: the statement holds for $j=0$ because $D>1$. Suppose it holds
up to $j<2^{n-k}-1$. The $r$-neighborhoods $U_l(n)\supset I^*_l$ were introduced in
Lemma \ref{distortion}. The induction hypothesis together with (\ref{deltax0}) and (\ref{deltax02}) implies that
$$
\zeta_l\in U_l(n-k)
$$
for $l\le j$. Now repeatedly apply (\ref{delatxyest}) and Lemma \ref{distortion} to get
$$
\begin{aligned}
\frac{ |\Delta x_{j+1}|}{|I^*_{j+1}|}&\le \sum_{l=1}^{j+1} (\prod_{k=l}^{j} \frac{Df_*(\zeta_k)}{\frac{|I^*_{k+1}|}{|I^*_k|}} )\cdot K\frac{ \rho^k}{\rho_0^{n-k}}+ (\prod_{k=0}^{j} \frac{Df_*(\zeta_k)}{\frac{|I^*_{k+1}|}{|I^*_k|}} )\cdot\frac{ |\Delta x_0|}{|I^*_0|}\\
&\le (j+1) D K\frac{ \rho^k}{\rho_0^{n-k}}+D \frac{ |\Delta x_0|}{|I^*_0|}\\
&\le DK(\frac{2}{\rho_0})^{n-k}\cdot \rho^k+D \frac{ |\Delta x_0|}{|I^*_0|}.
\end{aligned}
$$
This estimate finishes the induction step.
\end{proof}
Now incorporate the estimates (\ref{deltax0}) and (\ref{deltax02}) in the Claim; together with (\ref{deltay0}), Lemma \ref{displacement} follows.
\end{proof}
\noindent
{\it Proof of Proposition \ref{asympwidth}.}
Let $(1-q_1)\cdot n\le k\le n$ and assume that the conditions of Lemma \ref{displacement} are satisfied. Choose $B\in {\mathcal B}^n[k]$. Let ${\mathbf{B}}\in {\mathcal B}^{n-k}(F_k)$ be such that $B=\Psi_0^k({\mathbf{B}})$, say ${\mathbf{B}}={\mathbf{B}}_j$ with $0< j<2^{n-k}$ odd.
The pieces ${\mathbf{B}}^*_j\in {\mathcal B}^{n-k}(F_*)$, $0< j<2^{n-k}$ odd, are curves on the graph of $f_*$ contained in $B^1_c(F_*)$,
that is, they have a bounded slope. This bounded slope implies that
$$
|I^*_j|\asymp |J^*_j|.
$$
This bound and Lemma \ref{displacement} imply that the Hausdorff distance between ${\mathbf{B}}_j$ and ${\mathbf{B}}^*_j$ is
$O(\rho^{q_0\cdot n}\cdot |I^*_j|)$. We get that
$B_j=\Psi_0^k({\mathbf{B}}_j)$ is regular, which proves Proposition \ref{asympwidth}(1).

Let $B\in {\mathcal B}^{n+1}[k]$, say $B=\Psi_0^k({\mathbf{B}})$ with
${\mathbf{B}}\in {\mathcal B}^{n-k+1}(F_k)$ and ${\mathbf{B}}\subset {\mathbf{B}}_j\in {\mathcal B}^{n-k}(F_k)$, for some $0< j<2^{n-k}$.
Recall that the scaling ratio of $B\in {\mathcal B}^{n+1}[k]$ is a measurement in the vertical direction in the domain of $F_k$.
The relative displacement of every point $z^*\in {\mathcal O}_{F_*}$ is estimated in Lemma \ref{displacement}. These bounds imply
$$
|\sigma_{\mathbf{B}}-\sigma_{{\mathbf{B}}^*}|=O(\rho^{q_0\cdot n}).
$$
This finishes the proof of Proposition \ref{asympwidth}(2).

To control the thickness associated to $B\in {\mathcal B}^n[k]$ we have to restrict ourselves to
$(1-q_1)\cdot n\le k\le (1-q_0)\cdot n$.
The piece ${\mathbf{B}}\equiv {\mathbf{B}}_j$, which determines the relative thickness of $B=\Psi_0^k({\mathbf{B}})$, has a Hausdorff distance $O(\rho^{q_0\cdot n}\cdot |I^*_j|)$ to ${\mathbf{B}}^*_j$, by Lemma \ref{displacement}. This piece ${\mathbf{B}}^*_j$ is a curve
in the graph of $f_*$ contained in $B^1_c(F_*)$.
This curve has a bounded slope. Hence, its relative thickness is
proportional to its diameter,
which is of the order $\sigma^{n-k}\le \sigma^{q_0\cdot n}$, see
Lemma \ref{contracting}.
The control of the Hausdorff distance and the small relative thickness of ${\mathbf{B}}^*_j$ imply
$$
\delta_{\mathbf{B}}=O(\rho^{q_0\cdot n}).
$$
This finishes the proof of Proposition \ref{asympwidth}(3).
\qed
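Let us summarize what the Proposition gives in the one-dimensional regime (this is our restatement, combining parts (2) and (3) and using $q_0<q_1$, $\rho<1$): for $(1-q_1)\cdot n\le k\le (1-q_0)\cdot n$ and $B\in {\mathcal B}^{n+1}[k]$,
$$
|\sigma_{\mathbf{B}}-\sigma^*_{\mathbf{B}}|+\delta_{\mathbf{B}}=O(\rho^{q_0\cdot n}),
$$
that is, the rescaled pieces are universal with precision $O(\rho^{q_0\cdot n})$ in the sense of \S \ref{defstat}; passing from ${\mathbf{B}}$ to the actual piece $B$ is the content of Proposition \ref{precision}.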
\subsection{Universal sticks created in the pushing-up regime}
\begin{defn}\label{controlled}
Given $0<q_0<q_1$, the collection ${\mathcal P}_n(k; q_0,q_1)$ of {\it $(q_0,q_1)$-controlled} pieces
consists of the $B\in {\mathcal B}^n[k]$ with the following property. If $B^{(i)}$, $i=0,1,2,\dots,t$, are the predecessors of $B=B^{(0)}$ with
$$
k=k_{0}(B)<k_1(B)<k_{2}(B)<\dots< k_{t-1}(B)<k_t(B)<n,
$$
then
\begin{itemize}
\item[(1)] $k_{i+1}\le l(k_{i})$, $i=0,1,2,3,\dots, t-1$,
\item[(2)] there exists $0\le s\le t$ such that
$
(1-q_1)\cdot n\le k_s(B)\le (1-q_0)\cdot n,
$
and
\item[(3)]
$ k_{s-1}(B)\le (1-q_1)\cdot n.$
\end{itemize}
\end{defn}
\begin{rem}\label{topcontrolled} The definition of controlled pieces is combinatorial. It does not depend on $F$ but only on the average Jacobian $b_F$, which is a
topological invariant, \cite{LM1}. If $B$ is a $(q_0,q_1)$-controlled piece of $F$ then the corresponding piece $B^*$ is a $(q_0,q_1)$-controlled piece of $F_*$.
\end{rem}
The definition of controlled pieces implies
\begin{equation}\label{PP}
\bigcup_{k<l\le l(k)} G_k({\mathcal P}_n(l; q_0,q_1))={\mathcal P}_n(k; q_0,q_1).
\end{equation}
Proposition \ref{asympwidth} introduced the constants $\rho<1$ and $q^*>0$. The constants $\alpha^*>0$ and $k^*>0$ are the optimal choice given by Propositions \ref{regtoreg}, \ref{sticktostick} and \ref{scapush}.
Now Proposition \ref{regtoreg} and Proposition \ref{asympwidth}(1) imply
\begin{lem}\label{regPP}
Let $\alpha<\alpha^*$. For every $q^*>q_1>q_0>0$ there exists $n^*\ge 1$ such that
every $B\in {\mathcal P}_n(k; q_0,q_1)$ and all its predecessors are regular when $n\ge n^*$ and $k\ge k^*$.
\end{lem}
\begin{lem}\label{wdeltas}
Let $\alpha<\alpha^*$. For every $q^*>q_1>q_0>0$ there exists $n^*\ge 1$ such that
for every $\hat{B}\in {\mathcal P}_n(k;q_0,q_1)$ and $B\in {\mathcal B}^{n+1}[k]$ with $B\subset \hat{B}$
$$
\delta_{\hat{{\mathbf{B}}}}=O(\rho^{q_0\cdot n})
$$
and
$$
|\sigma_{\mathbf{B}}-\sigma^*_{\mathbf{B}}|=O(\rho^{q_0\cdot n})
$$
when $n\ge n^*$ and $k\ge k^*$.
\end{lem}
\begin{proof}
Denote the predecessors of $\hat{B}$ and $B$ by
$$
B^{(i)}\subset \hat{B}^{(i)},
$$
$i=0,1,2,\dots, t$.
Let $k_i=k_i(\hat{B})=k_i(B)$, let
$\delta_i$ be the relative thickness of $\hat{{\mathbf{B}}}^{(i)}$, where $\hat{B}^{(i)}=\Psi^{k_i}_0(\hat{{\mathbf{B}}}^{(i)})$, and let $\sigma_i=\sigma_{{\mathbf{B}}^{(i)}}$ be the scaling number of $B^{(i)}=\Psi^{k_i}_0({\mathbf{B}}^{(i)})$, $i=0,1,2,\dots, t$. Observe that the piece $B$
might have one predecessor more than $\hat{B}$.
Apply
Propositions \ref{sticktostick} and \ref{scapush}. In particular,
\begin{equation}\label{iterw}
\delta_{i-1}\le \frac12 \delta_i+O(\sigma^{n-k_{i}})
\end{equation}
and
\begin{equation}\label{iters}
|\sigma_{i-1}-\sigma_i|=O(\delta_i +\sigma^{n-k_i} )
\end{equation}
for $i=1,2,\dots,t$.
Iterating estimate (\ref{iterw}) we obtain
\begin{equation}\label{sumdelta}
\begin{aligned}
\sum_{i=0}^s \delta_i
&\le 2\delta_s+O(\sigma^{n-k_s})\\
&= O(\rho^{q_0\cdot n})+O(\sigma^{q_0\cdot n}),
\end{aligned}
\end{equation}
where we used Proposition \ref{asympwidth}(3) and property (2) of
Definition \ref{controlled}. We may assume $\sigma<\rho<1$.
The first estimate of the Lemma follows:
$$
\delta_{\hat{{\mathbf{B}}}}=\delta_0\le \sum_{i=0}^s \delta_i=O(\rho^{q_0\cdot n}).
$$
To establish the second estimate of the Lemma, first observe that
$$
\begin{aligned}
\sigma_{{\mathbf{B}}^{(0)}}&=\sigma_{{\mathbf{B}}^{(s)}}+
\sum_{i=0}^{s-1}( \sigma_{{\mathbf{B}}^{(i)}}-\sigma_{{\mathbf{B}}^{(i+1)}}).
\end{aligned}
$$
Hence, by using (\ref{iters}) and (\ref{sumdelta}),
$$
\begin{aligned}
|\sigma_{B^{(0)}}-\sigma_{B^{(s)}}|&\le
\sum_{i=0}^{s-1}| \sigma_{B^{(i)}}-\sigma_{B^{(i+1)}}|\\
&= O(\sum_{i=1}^s( \delta_i+ \sigma^{n-k_{i}}))\\
&=O(\rho^{q_0\cdot n}+\sigma^{n-k_s})
=O(\rho^{q_0\cdot n}).
\end{aligned}
$$
If $B\in {\mathcal P}_n(k,q_0, q_1)$ and $B^*$ is the corresponding piece of
$F_*$,
then $B^*$ is also controlled. Namely, each $l(k)=\infty$ because $b_{F_*}=0$. Hence, we have the same estimate for the proper scaling:
$$
|\sigma^*_{{\mathbf{B}}^{(0)}}-\sigma^*_{{\mathbf{B}}^{(s)}}|
=O(\rho^{q_0\cdot n}).
$$
Now observe that $B^{(s)}\in {\mathcal B}^{n+1}[k_s]$ with $(1-q_1)\cdot n\le k_s\le n$, so we can apply
Proposition \ref{asympwidth}(2):
$$
\begin{aligned}
|\sigma_{\mathbf{B}}-\sigma^*_{{\mathbf{B}}}|&=|\sigma_{{\mathbf{B}}^{(0)}}-\sigma^*_{{\mathbf{B}}^{(0)}}|\\
&\le |\sigma_{{\mathbf{B}}^{(0)}}-\sigma_{{\mathbf{B}}^{(s)}}|+
|\sigma_{{\mathbf{B}}^{(s)}}-\sigma^*_{{\mathbf{B}}^{(s)}}|+|\sigma^*_{{\mathbf{B}}^{(s)}}-\sigma^*_{{\mathbf{B}}^{(0)}}|\\
&=O(\rho^{q_0\cdot n}).
\end{aligned}
$$
This finishes the proof.
\end{proof}
The measurements of the pieces, such as scaling and thickness, are geometrical quantities observed when viewing a piece from its scale: they are geometrical measurements of ${\mathbf{B}}$ and not of $B$ itself. The next Proposition states that the actual pieces $B$ inherit exponentially small estimates for their precision. The Proposition is also a preparation for the brute-force regime, which concerns iteration of the original map.
\begin{prop}\label{precision} Let $\alpha<\alpha^*$. For every $q^*>q_1>q_0>0$ there exists $n^*\ge 1$ such that
$$
{\mathcal P}_n(k;q_0,q_1)\subset {\mathcal S}_n(O(\rho^{q_0\cdot n}))
$$
when $n\ge n^*$ and $k\ge k^*$.
\end{prop}
The estimates in the proof of this Proposition are like the estimates used to prove Propositions \ref{regtoreg}, \ref{sticktostick},
and \ref{scapush}.
\begin{proof}
Let $\hat{B}\in {\mathcal P}_n(k;q_0,q_1)$ and $B\in{\mathcal B}^{n+1}[k]$ with
$B\subset \hat{B}$.
Let ${\mathbf{B}}$ and $\hat{{\mathbf{B}}}$ be such that $B=\Psi_0^k({\mathbf{B}})$ and
$\hat{B}=\Psi_0^k(\hat{{\mathbf{B}}})$. Let $\mathbf{h},\mathbf{v}>0$ be the horizontal and vertical size of the smallest rectangle which contains $\hat{{\mathbf{B}}}$. Let $\boldsymbol{\delta}>0$ be the relative thickness of $\hat{{\mathbf{B}}}$; the absolute thickness of
$\hat{{\mathbf{B}}}$ is $\mathbf{w}=\boldsymbol{\delta}\cdot \mathbf{h}$. From Lemma \ref{contracting} we get
$$
\mathbf{h},\mathbf{v} =O(\sigma^{n-k}).
$$
Moreover, the regularity of $\hat{B}$ gives
$$
\mathbf{h}\asymp \mathbf{v}.
$$
The situation allows us to apply Lemma \ref{wdeltas}:
\begin{equation}\label{deltasigmaestimate}
\boldsymbol{\delta}=O(\rho^{q_0\cdot n})
\quad \mathrm{and} \quad
|\sigma_{\mathbf{B}}-\sigma^*_{{\mathbf{B}}}|=O(\rho^{q_0\cdot n}).
\end{equation}
We have to show that $\hat{B}\cap {\mathcal O}_{F}=\Psi^k_0(\hat{{\mathbf{B}}}\cap
{\mathcal O}_{F_k})$ is contained in a $O(\rho^{q_0\cdot n})$-stick. As before
we will decompose $\Psi_0^k$ into its diffeomorphic part
$(\operatorname{id}+\mathbf{S}_0^k)$ and its affine part. Let
$h_{\text{diff}}, v_{\text{diff}}>0$ be the horizontal and vertical size of the smallest rectangle containing the image of $\hat{{\mathbf{B}}}$ under $(\operatorname{id}+\mathbf{S}_0^k)$, let
$w_{\text{diff}}>0$ be the absolute thickness of its stick, and let $\sigma_{\text{diff}}>0$ be the scaling factor of the image of ${\mathbf{B}}$ under the same diffeomorphism. Then we have
$$
\begin{aligned}
\sigma_{\text{diff}}&=\sigma_{\mathbf{B}},\\
v_{\text{diff}}&=\mathbf{v}
\end{aligned}
$$
and, by recalling (\ref{wdiff}),
$$
\begin{aligned}
w_{\text{diff}}&=O( \mathbf{w} +\sigma^{n-k} \cdot \mathbf{h} ),\\
h_{\text{diff}}&\asymp \mathbf{h}.
\end{aligned}
$$
The last two estimates rely on $\mathbf{v}\asymp \mathbf{h}$. The term $\mathbf{h} \cdot \sigma^{n-k}$ reflects the distortion of $(\operatorname{id}+\mathbf{S}_0^k)$ on $\hat{{\mathbf{B}}}$, determined by the
diameter of $\hat{{\mathbf{B}}}$, which is of the order $\sigma^{n-k}$. The next step is to apply the affine part of $\Psi_0^k$. Denote the measurements after this step by $h_{\text{aff}}, v_{\text{aff}}, w_{\text{aff}}, \sigma_{\text{aff}}>0$, respectively.
Equation (\ref{reshuffling}) and Lemma \ref{tilt} yield
\begin{equation}\label{oldaffestimates}
\begin{aligned}
w_{\text{aff}}&\asymp \sigma^{2k}w_{\text{diff}},\\
\sigma_{\text{aff}}&=\sigma_{\text{diff}}=\sigma_{\mathbf{B}},\\
h_{\text{aff}}&\asymp \sigma^{2k} h_{\text{diff}}+ \sigma^kv_{\text{diff}}.
\end{aligned}
\end{equation}
Use the above estimates in the following:
\begin{equation}\label{waffhaff}
\frac{w_{\text{aff}}}{h_\text{aff}}=O(\frac{\sigma^{2k} \cdot [\mathbf{w}+\sigma^{n-k}\cdot \mathbf{h} ]}
{\sigma^{2k} \cdot \mathbf{h} +\sigma^{k}\cdot \mathbf{v}})= O(\sigma^k\cdot \boldsymbol{\delta} +\sigma^{n})=O(\rho^{q_0\cdot n}).
\end{equation}
Consider the smallest conformal image of a rectangle aligned along the diagonal of the rectangle containing $\hat{B}=\Psi^k_0(\hat{{\mathbf{B}}})$,
see Figure \ref{actualstick}. The precision of $\hat{B}$ will be better than the precision based on the measurements of this approximation of the
stick. Let $l'>0$ be the length, $w'>0$ the absolute thickness, and $\sigma'>0$ the scaling factor of $B\subset \hat{B}$ within this rectangle. Then
\begin{equation}\label{l'}
l'=\sqrt{h_{\text{aff}}^2+v^2_{\text{aff}}},
\end{equation}
and
\begin{equation}\label{w'}
w'\le w_{\text{aff}}.
\end{equation}
First we will estimate the precision of $\sigma'$. Let $\gamma$ be the angle between the diagonal of the rectangle and the horizontal. Observe,
$$
\cos \gamma=\frac{h_{\text{aff}}}{\sqrt{h_{\text{aff}}^2+v^2_{\text{aff}}}},
$$
see Figure \ref{actualstick}. The projection $\Delta l'$ of the horizontal interval of length $w_{\text{aff}}$ onto the diagonal has length
$$
\Delta l'=w_{\text{aff}} \cdot \cos \gamma.
$$
Observe,
$$
|\sigma' \cdot l'-\sigma_{\text{aff}}\cdot l'|\le \Delta l'=w_{\text{aff}} \cdot \frac{h_{\text{aff}}}{\sqrt{h_{\text{aff}}^2+v^2_{\text{aff}}}}.
$$
Then, by using (\ref{waffhaff}) and (\ref{l'}),
\begin{equation}\label{delsigma}
|\sigma' -\sigma_{\text{aff}}|\le \frac{w_{\text{aff}} }{h_{\text{aff}} }\cdot \frac{h^2_{\text{aff}}}{h_{\text{aff}}^2+v^2_{\text{aff}}}\le \frac{w_{\text{aff}} }{h_{\text{aff}} }=
O(\rho^{q_0\cdot n}).
\end{equation}
Use (\ref{deltasigmaestimate}), (\ref{oldaffestimates}), and (\ref{delsigma}) to estimate the precision of $\sigma'$:
\begin{equation}\label{delsig}
|\sigma'-\sigma_{{\mathbf{B}}}^*|\le | \sigma'-\sigma_{\text{aff}}|+|\sigma_{\text{aff}}-\sigma_{{\mathbf{B}}}^*|= O(\rho^{q_0\cdot n}).
\end{equation}
The estimate (\ref{w'}) says that the height of the stick containing $\hat{B}$ is at most $w_{\text{aff}}$.
The relative height is estimated by
\begin{equation}\label{relthick}
\frac{w'}
{l'}\le \frac{w_{\text{aff}}}
{\sqrt{h_{\text{aff}}^2+v^2_{\text{aff}}}}\le
\frac{w_{\text{aff}}}
{h_{\text{aff}}}=O(\rho^{q_0\cdot n}),
\end{equation}
where we used (\ref{waffhaff}) and (\ref{l'}).
The estimates (\ref{delsig}) and (\ref{relthick}) confirm that
$
\hat{B}\in {\mathcal S}_n(O(\rho^{q_0\cdot n})),
$
which finishes the proof of the Proposition.
\end{proof}
\subsection{Universal sticks created in the brute-force regime}
\begin{prop}\label{badpart} There exist $\varepsilon^*>0$ and $q^*>0$ such that the following holds.
Let $\varepsilon<\varepsilon^*$ and $0<q_0<q_1<q^*$. Then there exists $n^*\ge 1$ such that
if for some $0\le j< 2^{(1-q_1)\cdot n}$
$$
F^j(B)\in{\mathcal S}_n(\varepsilon),
$$
with
$B\in {\mathcal B}^n[k]$, $(1-q_1)\cdot n\le k\le (1-q_0)\cdot n$, and $n\ge n^*$, then
$$
F^{j+1}(B)\in {\mathcal S}_n(O(\varepsilon+\rho^{q_0\cdot n})).
$$
\end{prop}
\begin{proof}
Choose $\hat{B}\in {\mathcal B}^n[k]$
with $(1-q_1)\cdot n\le k\le (1-q_0)\cdot n$ and $B\in {\mathcal B}^{n+1}$ with $B\subset \hat{B}$. The iterates under the original map are denoted by
$B_j=F^j(B)$ and $\hat{B}_j=F^j(\hat{B})$, $j\le 2^{(1-q_1)\cdot n}$.
Assume that for some $j\le 2^{(1-q_1)\cdot n}$
$$
\hat{B}_j\in {\mathcal S}_n(\varepsilon).
$$
The piece $\hat{B}_j$ is contained in an $\varepsilon$-stick.
Say $\hat{B}_j\cap {\mathcal O}_F$ is contained in a rectangle of length $l>0$ and height
$w\le \varepsilon l$. The smaller rectangle which contains $B_j\cap {\mathcal O}_F$ has length $\sigma_{j} l$, where $\sigma_j=\sigma_{B_j}$ and
$|\sigma_j-\sigma^*_{{\mathbf{B}}_j}|\le \varepsilon$. Notice that we have to estimate the scaling factor $\sigma_{B_j}$ and not $\sigma_{{\mathbf{B}}_j}$, compare Remark \ref{prosca}.
Apply $F$ to this rectangle. The stick which contains $\hat{B}_{j+1}$ has length $l'>0$ and height $w'>0$. The relevant scaling factor of $B_{j+1}$ is $\sigma_{j+1}=\sigma_{B_{j+1}}$.
Choose $M, m>0$ such that
$$
m |v| \le |DF(x,y)v|\le M |v|.
$$
This is possible because $F$ is a diffeomorphism onto its image. However, $m=O(b)$.
Let $K>0$ be the maximum norm of the Hessian of $F$. The diameter of $\hat{B}_j\cap {\mathcal O}_F$, which is proportional to $l$, is of the order $\sigma^n$, see Lemma \ref{contracting}. We can estimate the sizes $l', w'$ and $\sigma'$ by applying the derivative of $F$ and correcting for
the distortion, which is bounded by $K l^2$. Let $D$ be the absolute value of the directional derivative of $F$ in the direction of the rectangle containing $\hat{B}_j$, measured in a corner of the rectangle.
Then
$$
\begin{aligned}
l'&\ge Dl-2K l^2-2M w,\\
w'&\le M w+2K l^2.
\end{aligned}
$$
Observe,
$$
|\sigma_{j+1} \cdot l'-D \cdot \sigma_{j} \cdot l|\le 2M w+2K l^2.
$$
Let us first estimate the relative height of the stick of $\hat{B}_{j+1}$. Use $w\le \varepsilon l$:
\begin{equation}\label{epsw}
\begin{aligned}
\frac{w'}{l'}&\le \frac{ M\varepsilon l+2K l^2}{ ml-2K l^2-2M\varepsilon l}\\
&\le \frac{M}{m-2K l-2M\varepsilon }\cdot \varepsilon+2\frac{K}{m-2K l-2M\varepsilon } \cdot l\\
&=O(\varepsilon+\sigma^n)=O(\varepsilon+\rho^{q_0\cdot n}),
\end{aligned}
\end{equation}
when $\varepsilon<\varepsilon^*$ and $q_0<q_1<q^*$ are small enough, and $n\ge n^*$ is large enough. Similarly,
\begin{equation}\label{deltasigma}
|\sigma_{j+1}-\sigma_{j}|=O(\varepsilon+\rho^{q_0\cdot n}).
\end{equation}
Use Remark \ref{1Dscalingstar} and apply Proposition \ref{disto} to get
\begin{equation}\label{deltasigmapro}
|\sigma^*_{{\mathbf{B}}_s}-\sigma^*_{{\mathbf{B}}}|=O(\rho^{q_0\cdot n}),
\end{equation}
with $0\le s< 2^{(1-q_1)\cdot n}$.
We need to estimate the scaling factor $\sigma_{j+1}$ of $B_{j+1}$.
Use (\ref{deltasigma}) and (\ref{deltasigmapro})
and the notation $\sigma^*_j=\sigma^*_{{\mathbf{B}}_j}$. Then
\begin{equation}\label{epssig}
\begin{aligned}
|\sigma_{j+1}-\sigma^*_{j+1}|&\le
|\sigma_{j+1}-\sigma_j|+|\sigma_j-\sigma^*_{j}|+|\sigma^*_j-\sigma^*_{j+1}|\\
&\le O(\varepsilon+\rho^{q_0\cdot n})+\varepsilon+O(\rho^{q_0\cdot n})\\
&=O(\varepsilon+\rho^{q_0\cdot n}),
\end{aligned}
\end{equation}
for $\varepsilon\le \varepsilon^*$, $0<q_0<q^*$ small enough and $n\ge n^*$ large enough. The estimates (\ref{epsw}) and (\ref{epssig}) together finish the
proof.
\end{proof}
\section{Probabilistic Universality}\label{conv}
In this section we are going to estimate the measure of the pieces created in the three regimes, see Proposition \ref{Pn1mintheta}.
Let $\alpha=\alpha^*$, $\varepsilon^*>0$, and $0< q^*_1<1/3$ be small and $k^*\ge 1$ be large enough to allow the use of Propositions \ref{precision} and \ref{badpart}.
For each $n\ge 1$, let $\kappa_0(n)\asymp \ln n$ be the smallest integer such that
$$
l(\kappa_0(n))\equiv 2^{\kappa_0(n)} \cdot \frac{\ln b}{\ln \sigma}-\frac{\ln \alpha}{\ln \sigma}+\kappa_0(n)\ge n.
$$
For $n\ge 1$ large enough we have
\begin{equation}\label{kapn}
\kappa_0(n)\le \frac{\ln n}{\ln 2}.
\end{equation}
\begin{lem}\label{pushupregime} Given $q_0<q_1$, there exists $n^*\ge 1$ such that for $n\ge n^*$ and
$\kappa_0(n)\le k<(1-q_0)\cdot n$,
$$
\mu({\mathcal P}_n(k; q_0,q_1))\ge [1-\frac{1}{2^{(q_1-q_0)\cdot n+1}}] \cdot \mu(E^k).
$$
\end{lem}
\begin{proof} Let $\beta_n(k; q_0,q_1)=\mu(E^k\setminus{\mathcal P}_n(k; q_0,q_1) )$ be the measure of the uncontrolled pieces.
The construction implies immediately
\begin{equation}\label{betamuEk}
\beta_n(k; q_0,q_1)=\mu(E^k), \quad (1-q_0)\cdot n< k\le n,
\end{equation}
and
\begin{equation}\label{beta0}
\beta_n(k; q_0,q_1)=0, \quad (1-q_1)\cdot n\le k\le (1-q_0)\cdot n,
\end{equation}
since every piece in the one-dimensional regime is controlled. In particular, the Lemma holds for $(1-q_1)\cdot n\le k\le (1-q_0)\cdot n$.
This implies that the fraction of the uncontrolled part in $\cup_{l\ge (1-q_1)\cdot n} E^l$ is
\begin{equation}\label{qq}
\frac{\sum_{l=(1-q_1)\cdot n}^n\beta_n(l; q_0,q_1)}
{\mu(B^{(1-q_1)\cdot n})}\le \frac{1}{2^{(q_1-q_0)\cdot n+1}}.
\end{equation}
Observe,
$$
\begin{aligned}
l((1-q_1)\cdot n-1)&= 2^{(1-q_1)\cdot n-1}\cdot \frac{\ln b}{\ln \sigma}-
\frac{\ln \alpha}{\ln \sigma}+(1-q_1)\cdot n-1\\
&\gtrsim 2^{(1-q_1)\cdot n-1}\ge n,
\end{aligned}
$$
which holds when $n\ge 1$ is large enough. No piece in ${\mathcal B}^n[k]$, with $k\ge (1-q_1)\cdot n$, is too deep for level
$(1-q_1)\cdot n-1$. Hence, equation (\ref{PP}) reduces to
$$
{\mathcal P}_n((1-q_1)\cdot n-1; q_0,q_1)=\bigcup_{(1-q_1)\cdot n\le l\le n} G_{(1-q_1)\cdot n-1}({\mathcal P}_n(l; q_0,q_1)).
$$
Hence, using (\ref{qq}),
$$
\begin{aligned}
\beta_n((1-q_1)\cdot n-1; q_0,q_1)&=\sum_{l=(1-q_1)\cdot n}^n\beta_n(l; q_0,q_1)\\
&\le \frac{1}{2^{(q_1-q_0)\cdot n+1}}\cdot\mu(B^{(1-q_1)\cdot n})\\
&=\frac{1}{2^{(q_1-q_0)\cdot n+1}}\cdot \mu(E^{(1-q_1)\cdot n-1}).
\end{aligned}
$$
Now we finish the proof by induction. The Lemma is proved for $k=(1-q_1)\cdot n-1$. Assume the Lemma holds from $(1-q_1)\cdot n-1$ down
to $k+1\le (1-q_1)\cdot n-1$. Because $k\ge \kappa_0(n)$ we have $l(k)\ge n$. Hence, again by using
(\ref{PP}), (\ref{betamuEk}), (\ref{beta0}), and $\mu(E^l)=\frac{1}{2^{l+1}}$, $l\ge 0$, we get
$$
\begin{aligned}
\mu({\mathcal P}_n(k; q_0,q_1))&=\mu(\bigcup_{l=k+1}^n G_k({\mathcal P}_n(l; q_0,q_1)))\\
&= \sum_{l=k+1}^{ (1-q_1)\cdot n-1} \mu({\mathcal P}_n(l; q_0,q_1))+
\sum_{l=(1-q_1)\cdot n}^{(1-q_0)\cdot n} \mu(E^l)\\
&\ge (1-\frac{1}{2^{(q_1-q_0)\cdot n+1}})\cdot [
\sum_{l=k+1}^{ (1-q_1)\cdot n-1} \mu(E^{l})+ \frac{1}{2^{(1-q_1)\cdot n}}]\\
&= (1-\frac{1}{2^{(q_1-q_0)\cdot n+1}})\cdot [
\sum_{l=k+1}^{ (1-q_1)\cdot n-1} \mu(E^{l})+\sum_{l=(1-q_1)\cdot n}^{\infty} \mu(E^l)]\\
&=(1-\frac{1}{2^{(q_1-q_0)\cdot n+1}})\cdot \mu(E^k).
\end{aligned}
$$
\end{proof}
\begin{prop}\label{meas} Given $q_0<q_1<q^*_1$, there exists $n^*\ge 1$ such that for $n\ge n^*$ and $k\le (1-q_0)\cdot n$
$$
\mu({\mathcal P}_n(k; q_0,q_1))\ge [1- \frac{1}{2^{(q_1-q_0)\cdot n+1}}
-2^{\frac{\ln \alpha\sigma}{\ln \sigma}}\sum_{l= k}^{\infty} 2^l (b^\gamma)^{2^l}]\cdot
\mu(E^k),
$$
where $\gamma=- \frac{\ln 2}{\ln \sigma}\in (0,1)$.
\end{prop}
\begin{proof}
According to Lemma \ref{pushupregime}, the Proposition holds for $\kappa_0(n)\le k\le (1-q_0)\cdot n$.
The proof for the lower values $k<\kappa_0(n)$ is by induction.
Assume by induction
$$
\beta_n(k; q_0,q_1)\le [\frac{1}{2^{(q_1-q_0)\cdot n+1}}
+2^{\frac{\ln \alpha\sigma}{\ln \sigma}}
\sum_{l= k}^{\kappa_0(n)-1} 2^l (b^\gamma)^{2^l}]\cdot \mu(E^k),
$$
which holds for $k=\kappa_0(n)$. Suppose it holds from $\kappa_0(n)$ down to
$k+1\le \kappa_0(n)$.
Observe,
$$
\frac{1}{2^{l(k)}}=2^{\frac{\ln \alpha}{\ln \sigma}}\cdot 2^{-[\frac{k}{2^k}+\frac{\ln b}{\ln \sigma}]\cdot 2^k}
\le 2^{\frac{\ln \alpha}{\ln \sigma}}\cdot 2^{-\frac{\ln b}{\ln \sigma}\cdot 2^k}.
$$
Hence,
\begin{equation}\label{2lk}
\frac{1}{2^{l(k)}}\le 2^{\frac{\ln \alpha}{\ln \sigma}}\cdot (b^\gamma)^{2^k}.
\end{equation}
Use (\ref{kapn}) and observe,
$$
\begin{aligned}
l(\kappa_0(n)-1)&=2^{\kappa_0(n)-1}\cdot \frac{\ln b}{\ln \sigma}-
\frac{\ln \alpha}{\ln \sigma}+\kappa_0(n)-1\\
&=\frac12 (n+\frac{\ln \alpha}{\ln \sigma}-\kappa_0(n))
-\frac{\ln \alpha}{\ln \sigma}+\kappa_0(n)-1\\
&\le \frac12 n(1+\frac{\kappa_0(n)}{n})+O(1)\\
&\le \frac12 n(1+\frac{\ln n}{n\ln 2})+O(1)\\
&<(1-q_1)\cdot n,
\end{aligned}
$$
which holds when $n\ge n^*$ is large enough because $q^*_1<\frac13$.
Hence, for $n\ge 1$ large enough, we have
\begin{equation}\label{lk}
l(k)\le l(\kappa_0(n)-1)< (1-q_1)\cdot n.
\end{equation}
Use (\ref{PP}), the induction hypothesis, (\ref{2lk}), and (\ref{lk}) in the following
estimates:
$$
\begin{aligned}
\beta_n(k; q_0,q_1)&\le \sum_{l=k+1}^{l(k)} \beta_n(l; q_0,q_1)+
\mu(B^{l(k)+1})\\
&\le [\frac{1}{2^{(q_1-q_0)\cdot n+1}}
+2^{\frac{\ln \alpha\sigma }{\ln \sigma}}
\sum_{l= k+1}^{\kappa_0(n)-1} 2^l (b^\gamma)^{2^l}]\cdot \sum_{l=k+1}^{l(k)}\mu(E^l)
+\frac{1}{2^{l(k)}}\\
&\le
[\frac{1}{2^{(q_1-q_0)\cdot n+1}}
+2^{\frac{\ln \alpha\sigma }{\ln \sigma}}
\sum_{l= k+1}^{\kappa_0(n)-1} 2^l (b^\gamma)^{2^l}]\cdot \mu(E^k)+
2^{\frac{\ln \alpha}{\ln \sigma}} (b^\gamma)^{2^k}\\
&= [\frac{1}{2^{(q_1-q_0)\cdot n+1}}
+2^{\frac{\ln \alpha\sigma}{\ln \sigma}}
\sum_{l= k}^{\kappa_0(n)-1} 2^l (b^\gamma)^{2^l}]\cdot \mu(E^k),
\end{aligned}
$$
where the last equality uses $\mu(E^k)=\frac{1}{2^{k+1}}$.
\end{proof}
For each $K>0$ and $\theta<1$, let $\kappa(n)$ be the largest integer such that
$$
2^{\kappa(n)}\le Kn\ln 1/\theta.
$$
\begin{lem}\label{kn} There exists $K>0$ such that for every $\theta<1$ there exists $n^*\ge 1$ such that $\kappa(n)\ge k^*$ for $n\ge n^*$ and
$$
2^{\frac{\ln \alpha\sigma}{\ln \sigma}}\sum_{l= \kappa(n)}^{\infty} 2^l (b^\gamma)^{2^l}\le \frac13 \theta^n.
$$
\end{lem}
\begin{proof} Observe,
$$
\sum_{l= \kappa(n)}^{\infty} 2^l (b^\gamma)^{2^l}=O(2^{\kappa(n)}(b^\gamma)^{2^{\kappa(n)}}).
$$
To achieve the property of the Lemma it suffices to satisfy
$$
\ln 2^{\kappa(n)}+ 2^{\kappa(n)}\ln b^\gamma +O(1)\le n\ln \theta.
$$
In turn, this holds when
$$
n\ln 1/\theta \cdot [\frac12 K \ln b^\gamma +1]+O(1)\le 0.
$$
This holds for large $n\ge 1$ when $K>0$ is chosen large enough.
\end{proof}
In the sequel we will fix $K>0$ according to the previous Lemma.
For each $Q>0$ and $\theta<1$, define $q_0$ by
$$
q_0=Q\ln 1/\theta
$$
and
$$
q_1=[Q+\frac{3}{2 \ln 2}]\cdot \ln 1/\theta.
$$
\begin{lem}\label{q0q1} For every $\theta<1$ there exists $n^*\ge 1$ such that for $Q>0$ and $n\ge n^*$
$$
\frac{1}{2^{(q_1-q_0)\cdot n +1}}\le \frac13 \theta^n.
$$
\end{lem}
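For completeness, the Lemma can be verified directly from the definitions of $q_0$ and $q_1$ above. Since $q_1-q_0=\frac{3}{2\ln 2}\ln 1/\theta$,
$$
2^{(q_1-q_0)\cdot n}=e^{(q_1-q_0)\cdot n\ln 2}=e^{\frac32 n\ln 1/\theta}=\theta^{-\frac32 n},
$$
and hence
$$
\frac{1}{2^{(q_1-q_0)\cdot n+1}}=\frac12\, \theta^{\frac{n}{2}}\cdot \theta^n\le \frac13 \theta^n,
$$
as soon as $\theta^{\frac{n}{2}}\le \frac23$, which holds for all $n\ge n^*$ with $n^*\ge 1$ large enough.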
The brute-force regime consists of iterates of $\bigcup_{l=\kappa(n)}^{(1-q_0)\cdot n} {\mathcal P}_n(l;q_0,q_1) $ up to just one step before the moment of return to $B^{\kappa(n)}\equiv \bigcup_{l=\kappa(n)}^\infty E^l$. The return uses exactly $2^{\kappa(n)}$ steps. Thus we obtain, for each choice of $Q>0$ and $\theta<1$, the collection
\begin{equation}\label{PPn}
{\mathcal P}_n=\bigcup_{j=0}^{2^{\kappa(n)} -1}F^j( \bigcup_{l=\kappa(n)}^{(1-q_0)\cdot n}
{\mathcal P}_n(l;q_0,q_1) ).
\end{equation}
\begin{prop}\label{muPnSn} There exist $Q>0$ and $\theta^*<1$ such that the following holds. For $\theta^*\le \theta<1$ there exists $n^*\ge 1$ such that for $n\ge n^*$
$$
{\mathcal P}_n \subset {\mathcal S}_n(\theta^n).
$$
\end{prop}
\begin{proof} Take $B\in \bigcup_{l=\kappa(n)}^{(1-q_0)\cdot n} {\mathcal P}_n(l;q_0,q_1) $. According to
Proposition \ref{precision} there exists $C>0$ such that
\begin{equation}\label{BinS}
B\in {\mathcal S}_n(C\rho^{q_0\cdot n}),
\end{equation}
when $\theta<1$ is close enough to $1$ and $n\ge 1$ is large enough (recall that $q_0$ depends on $\theta$).
Now consider an image $F^j(B)$ with $j\le 2^{\kappa(n)}-1<2^{(1-q_1)\cdot n}$. Denote its precision by $\varepsilon_j$. This is a piece in the brute-force regime.
If $\theta<1$ is close enough to $1$ and $n\ge 1$ is large enough we can apply Proposition \ref{badpart}: there exists $r>1$ such that if
$\varepsilon_j\le \varepsilon^*$ then
\begin{equation}\label{repeateps}
\varepsilon_{j+1}\le r\cdot (\varepsilon_j+\rho^{q_0\cdot n}).
\end{equation}
Choose $Q>0$ large enough such that
$$
Q\ln \rho +K\ln r+\frac32\le 0.
$$
This choice implies
\begin{equation}\label{Crbound}
\rho^{q_0\cdot n} \cdot r^{2^{\kappa(n)}}\le (\theta^{\frac32})^n.
\end{equation}
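To see why this choice of $Q$ suffices, one can verify (\ref{Crbound}) directly from $q_0=Q\ln 1/\theta$ and $2^{\kappa(n)}\le Kn\ln 1/\theta$. Using $\ln\rho<0$ and $\ln r>0$,
$$
\ln\big(\rho^{q_0\cdot n}\cdot r^{2^{\kappa(n)}}\big)= q_0\cdot n\ln\rho+2^{\kappa(n)}\ln r
\le n\ln 1/\theta\cdot [Q\ln\rho+K\ln r]\le -\frac32\, n\ln 1/\theta,
$$
and exponentiating gives $\rho^{q_0\cdot n}\cdot r^{2^{\kappa(n)}}\le \theta^{\frac32 n}$.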
Now we can repeatedly apply (\ref{repeateps}): for $n\ge 1$ large enough and $0\le j<2^{\kappa(n)}$
$$
\begin{aligned}
\varepsilon_j&\le C\rho^{q_0\cdot n} \cdot r^j
+ \rho^{q_0\cdot n}\cdot \sum_{i=0}^{j-1} r^{j-i}\\
&\le (C+\frac{r}{r-1})\cdot \rho^{q_0\cdot n} \cdot r^{2^{\kappa(n)}}\le \theta^n\le \varepsilon^*.
\end{aligned}
$$
Every piece in ${\mathcal P}_n$ is $\theta^n$-universal.
\end{proof}
In the sequel we will fix $Q>0$ according to the previous Proposition.
\begin{prop}\label{Pn1mintheta} There exists $\theta^*<1$ such that the following holds. For $\theta^*\le \theta<1$ there exists $n^*\ge 1$ such that for $n\ge n^*$
$$
\mu({\mathcal P}_n)\ge 1-\theta^n.
$$
\end{prop}
\begin{proof} For $\theta<1$ close enough to $1$ we have
$$
\frac{1}{2^{\frac12- Q\ln 1/\theta}}\le \theta.
$$
Hence, for $n\ge 1$ large enough
\begin{equation}\label{QKtheta}
\frac{Kn\ln 1/\theta}{2^{(1-Q\ln 1/\theta)\cdot n+1}}\le \frac13\cdot \theta^n.
\end{equation}
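Indeed, raising the previous inequality to the $n^{th}$ power gives $2^{-(\frac12-Q\ln 1/\theta)\cdot n}\le \theta^n$, so
$$
\frac{Kn\ln 1/\theta}{2^{(1-Q\ln 1/\theta)\cdot n+1}}
=\frac{Kn\ln 1/\theta}{2^{\frac{n}{2}+1}}\cdot 2^{-(\frac12-Q\ln 1/\theta)\cdot n}
\le \frac{Kn\ln 1/\theta}{2^{\frac{n}{2}+1}}\cdot \theta^n\le \frac13\,\theta^n,
$$
since the polynomial factor $Kn\ln 1/\theta$ is eventually dominated by $\frac13\cdot 2^{\frac{n}{2}+1}$.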
For $\theta<1$ close enough to $1$, and $n\ge 1$ large enough, we can apply Proposition \ref{meas}, Lemmas \ref{kn} and \ref{q0q1}, and (\ref{QKtheta})
to obtain
$$
\begin{aligned}
\mu( {\mathcal P}_n )&=
2^{\kappa(n)}\cdot \mu(\bigcup_{l=\kappa(n)}^{(1-q_0)\cdot n} {\mathcal P}_n(l;q_0,q_1) )\\
&\ge 2^{\kappa(n)}\cdot(1-\frac23\theta^n)\cdot \sum_{l=\kappa(n)}^{(1-q_0)\cdot n} \mu(E^l)\\
&=(1-\frac23\theta^n)\cdot (1-\frac{2^{\kappa(n)}}{2^{(1-q_0)\cdot n+1}})\\
&\ge (1-\frac23\theta^n)\cdot (1-\frac{Kn\ln 1/\theta}{2^{(1-Q\ln 1/\theta)\cdot n+1}})\\
&\ge 1-\theta^n.
\end{aligned}
$$
\end{proof}
Propositions \ref{Pn1mintheta} and \ref{muPnSn} together confirm probabilistic universality, Theorem \ref{distuni}.
\section{Recovery}
The pieces in ${\mathcal B}^n$ which are contained in $\theta^n$-sticks can be determined by purely combinatorial methods. In \cite{CLM}, it has been shown that there are pieces which are not contained in $\theta^n$-sticks. Probabilistic universality says that these {\it bad} spots will be filled on deeper levels with pieces contained in sticks with exponential precision. This recovery process has a combinatorial description.
A piece $B\in {\mathcal B}^n$ has an associated word
$\omega=w_1w_2 \dots w_n$, with letters $w_k\in\{c,v\}$, such that
$$
B=\operatorname{Im} \psi^1_{w_1}\circ \psi^2_{w_2}\circ \dots \circ \psi^n_{w_n},
$$
where $\psi^k_{v}$ is the non-affine rescaling used to renormalize $R^kF$ and to obtain $R^{k+1}F$, and $\psi^k_{c}=R^kF\circ \psi^k_{v}$. If $B_1, B_2\in {\mathcal B}^{n+1}$ are the two pieces contained in $B$ then the associated words for $B_1$ and $B_2$ are $\omega c$ and $\omega v$. This discussion defines a homeomorphism
$$
w:{\mathcal O}_F\to \{c,v\}^{\Bbb{N}}.
$$
The relation between the $k_i(B)$, $i=0,1,2,\dots, t$, which define the predecessors of $B\in {\mathcal B}^n$, and the word $\omega=w_1w_2\dots w_n$ is as follows: if
$i\in \{k_0(B),k_1(B),\dots, k_t(B)\}$ then
$w_i=c$; otherwise $w_i=v$.
\comm{
In the next section, see (\ref{PPn}), we will introduce collections ${\mathcal P}_n\subset {\mathcal B}_n$ of pieces
which are contained in $\theta^n$-sticks, and prove for some $\theta<1$,
\begin{equation}\label{PnSn}
{\mathcal P}_n\subset {\mathcal S}_n(\theta^n).
\end{equation}
These collections turn out to have a large measure,
\begin{equation}\label{mPn}
\mu({\mathcal P}_n)\ge 1-\theta^n.
\end{equation}
The selection process for the pieces which are in ${\mathcal P}_n$, see (\ref{PPn}), is purely topological.
The influence of the actual map is limited by the value of its average Jacobian, which is a topological invariant, see \cite{LM}.
Let $k_1(n)$ be defined by
$$
2^{k_1(n)}=A_1(1+n\ln \frac{1}{\theta}),
$$
and
$$
q_0(n)=A_2(1+n\ln \frac{1}{\theta}),
$$
$$
q_1(n)=(A_2+A_3)(1+n\ln \frac{1}{\theta}).
$$
The precise values of $\theta<1$ and $A_1, A_2, A_3>0$ will be discussed in the next section. These values are universal; they do not depend on the actual map. The value of $\theta$ will be close to $1$. This means that $q_0(n)$ and $q_1(n)$ are small fractions of $n$ and
\begin{equation}\label{k1n}
k_1(n)=(1+o(1))\frac{\ln n}{\ln 2}.
\end{equation}
}
In the previous section we constructed the collection ${\mathcal P}_n\subset {\mathcal S}_n(\theta^n)$, see (\ref{PPn}).
The word $\omega=w_1w_2\dots w_n$ of a piece $B\in {\mathcal P}_n$ is characterized by:
\begin{itemize}
\item [(1)] If $k\ge \kappa(n)$ and $w_k=c$ then there exists $k<i\le l(k)$ with $w_i=c$.
\item [(2)] There exists $n-q_1\cdot n\le k\le n-q_0\cdot n$ with $w_k=c$.
\end{itemize}
\begin{rem} Recall, $q_0$, $q_1$, and the function $l(k)$ depend only on the average Jacobian, which is a topological invariant, see \cite{LM1}. The characterization of the pieces in ${\mathcal P}_n$ is purely topological.
\end{rem}
\begin{defn}\label{controlpt} A point $x\in {\mathcal O}_F$ is {\it eventually controlled} if there exists $N_x\ge 1$ such that for all $n\ge N_x$ there exists
$n-q_1\cdot n\le k\le n-q_0\cdot n$
with
$$
w_k=c,
$$
where $w(x)=w_1w_2w_3\dots$. The collection of eventually controlled points is denoted by $C_F\subset {\mathcal O}_F$.
\end{defn}
\begin{lem}\label{controlPP} The set of eventually controlled points satisfies $\mu(C_F)=1$ and
$$
C_F=\bigcup_{N\ge 1} \bigcap_{n\ge N} {\mathcal P}_n.
$$
\end{lem}
\begin{proof} There exists $k^*\ge 1$ such that $(1-q_1)\cdot l(k)>k$ for $k\ge k^*$. Let $x\in C_F$. Choose $n\ge 1$ large enough such that
$n\ge \kappa(n)\ge N_x$ and $\kappa(n)\ge k^*$. The piece $B_n(x)\in {\mathcal B}^n$ contains $x$. Then $B_n(x)$ satisfies property (2).
Choose $k\ge \kappa(n)$. Then $l(k)> (1-q_1)\cdot l(k)> k\ge \kappa(n)\ge N_x$. Hence, there exists $w_i=c$ with
$(1-q_1)\cdot l(k)\le i\le (1-q_0)\cdot l(k)$. Now, $i\ge (1-q_1)\cdot l(k) >k$. Moreover, $i\le (1-q_0)\cdot l(k)<l(k)$. The piece $B_n(x)$ satisfies property (1). We proved
\begin{equation}\label{xP}
x\in \bigcap_{\kappa(n)\ge \max \{N_x,k^*\} }{\mathcal P}_n.
\end{equation}
Choose $x\in \bigcap_{n\ge N} {\mathcal P}_n$. Then property (2) implies that for
every $n\ge N$ there exists
$n-q_1\cdot n\le k\le n-q_0\cdot n$
with
$$
w_k=c.
$$
We proved that
$
\bigcap_{n\ge N} {\mathcal P}_n\subset C_F,
$
for $N\ge 1$.
The statement on the measure of $C_F$ follows from Proposition \ref{Pn1mintheta}.
This finishes the proof of Lemma \ref{controlPP}.
\end{proof}
The recovery process can be described by using Proposition \ref{muPnSn} and (\ref{xP}).
\begin{prop}\label{recov} If $x\in {\mathcal O}_F$ is eventually controlled and $\kappa(n)\ge N_x$ then
$
B_n(x)\in {\mathcal S}_n(\theta^n).
$
\end{prop}
\begin{rem} Given a conjugation $h:{\mathcal O}_{F_1}\to {\mathcal O}_{F_2}$, we have $b_{F_1}=b_{F_2}$, see \cite{LM1}, and
$h(C_{F_1})=C_{F_2}$. The set of eventually controlled points is a topological invariant.
\end{rem}
\section{Probabilistic Rigidity}\label{haus}
The geometry of large parts of ${\mathcal O}_F$ resembles the geometry of ${\mathcal O}_{F_*}$,
see Theorem \ref{distuni}, probabilistic universality.
The large parts are
\begin{equation}\label{XN}
X_N=\bigcap_{k\ge N}{\mathcal S}_k(\theta^k),
\end{equation}
where $\theta<1$ is given by Theorem \ref{distuni}, with
$$
\mu(X_N)\ge 1-O(\theta^N).
$$
Let
$$
X=\bigcup_{N\ge 1} X_N
$$
and note $\mu(X)=1$.
As a consequence of a result from \cite{CLM} we know that there is no continuous line field on ${\mathcal O}_F$ consisting of tangent lines to ${\mathcal O}_F$. However, the first step towards describing the geometry of ${\mathcal O}_F$ will be the construction of tangent lines to ${\mathcal O}_F$ at all points of $X\subset{\mathcal O}_F$.
Choose $N\ge 1$ and define for $n\ge N$
$$
T_n:X_N\to \Bbb{P}^1
$$
as follows. Let $x\in X_N$ and let $B_n(x)\in {\mathcal B}^n$, $n\ge N$, be the piece with $x\in B_n(x)$. The part ${\mathcal O}_F\cap B_n(x)$ is contained in a $\theta^n$-stick, see Figure \ref{hol}. The direction of the longest edge of this stick is denoted by $T_n(x)\in \Bbb{P}^1$.
The {\it a priori} bounds give that the scaling $\sigma_1$ of $B_{n+1}(x)$ is strictly away from
zero. Namely,
$
\sigma_1=\sigma_{B_{n+1}(x)}\ge\sigma^*_{B_{n+1}(x)}-\theta^n\ge a>0.
$
The angle between $T_n(x)$ and $T_{n+1}(x)$ is of the order $\theta^n$, see Figure \ref{hol}. The piecewise constant functions $T_n$ form a Cauchy sequence,
\begin{equation}\label{Cbeta}
\text{dist}(T_{n+1}(x),T_n(x))=O(\theta^n),
\end{equation}
for $n\ge N$ and $x\in X_N$.
The limit is denoted by
$$
T=\lim_{n\to\infty}T_{n} : X_N\to \Bbb{P}^1.
$$
The construction implies that we in fact get a map
$$
T:X\to \Bbb{P}^1.
$$
The actual line through $x\in X\subset {\mathcal O}_F$ with direction $T(x)$ is denoted by $T_x\subset \Bbb{R}^2$.
\begin{defn}\label{microh}
The Cantor set ${\mathcal O}_F$ is {\it almost everywhere $(1+\beta)$-differentiable}
if for each $N\ge 1$ there exists $C_N>0$ such that
$$
\text{dist}(x, T_{x_0})\le C_N|x-x_0|^{1+\beta}
$$
when $x\in {\mathcal O}_F$, $x_0\in X_N$.
The tangent line field of ${\mathcal O}_F$ is {\it weakly $\beta$-H\"older} if for each $N\ge 1$ there exists $C_N>0$ such that
$$
\text{dist}(T(x_0),T(x_1))\le C_N|x_0-x_1|^\beta,
$$
with $x_0,x_1\in X_N$.
\end{defn}
\comm{
\begin{defn}\label{microh}
The line field $T$ on $X$ is {\it almost everywhere $\beta$-H\"older}
if for each $N\ge 1$ the restriction $T|X_N$
is $\beta$-H\"older:
$$
\text{dist}(T(x_0),T(x_1))\le C_N|x_0-x_1|^\beta,
$$
with $x_0,x_1\in X_N$.
The line field $T$ consists of {\it $\beta$-H\"older tangent lines to ${\mathcal O}_F$} if for each $N\ge 1$ there exists $C_N>0$ such that
$$
\text{dist}(x, T_{x_0})\le C_N|x-x_0|^{1+\beta}
$$
when $x\in {\mathcal O}_F$, $x_0\in X_N$.
\end{defn}
}
\begin{rem}
The objects we consider have H\"older estimates on the growing sets $X_N$. Although the increasing sequence of sets $X_1\subset X_2\subset X_3\subset \cdots$ is intrinsically related to the notion of being almost everywhere H\"older, we will suppress it in the notation instead of writing
{\it almost everywhere H\"older with respect to the sequence $\{X_N\}$}.
\end{rem}
\begin{thm}\label{lines} The Cantor set ${\mathcal O}_F$ is almost everywhere $(1+\beta)$-differentiable, where $\beta>0$ is universal. The tangent line field is weakly $\beta$-H\"older.
\end{thm}
\begin{proof}
Choose $N\ge 1$.
Let
$$
d_N=\min_{B\in {\mathcal B}^N} \text{diam}(B\cap {\mathcal O}_F)>0.
$$
Choose $x_0,x_1\in X_N$. We will find a uniform H\"older estimate for the function $T|X_N$ at these two points.
Let $n\ge 1$ be such that $x_1\in B_n(x_0)$ and $x_1\notin B_{n+1}(x_0)$. To prove a H\"older estimate we may assume that $n\ge N$.
The {\it a priori} bounds for the Cantor set of the one-dimensional map $f_*$ and the probabilistic universality of ${\mathcal O}_F$ observed in the sets $X_N$, see (\ref{XN}), give a $\rho<1$ such that
$$
|x_1-x_0|\ge \rho^{n-N}\cdot d_N.
$$
Estimate (\ref{Cbeta}) implies
$$
\begin{aligned}
\text{dist}(T(x_1), T(x_0))&\le \text{dist}(T(x_1), T_{n}(x_1))+
\text{dist}(T_{n}(x_0), T(x_0))\\
&= O(\theta^n)\\
&\le C_N|x_1-x_0|^\beta,
\end{aligned}
$$
where $C_N=O(\frac{\theta^N}{(d_N)^\beta})$ and $\beta>0$ is such that
\begin{equation}\label{beta}
\rho^\beta=\theta.
\end{equation}
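For the record, the choice (\ref{beta}) produces exactly the stated constant: combining $\theta^n=\rho^{\beta n}$ with $|x_1-x_0|\ge \rho^{n-N}\cdot d_N$ gives
$$
\theta^n=\rho^{\beta N}\cdot\big(\rho^{n-N}\big)^{\beta}\le \theta^N\cdot\Big(\frac{|x_1-x_0|}{d_N}\Big)^{\beta}=\frac{\theta^N}{(d_N)^\beta}\cdot |x_1-x_0|^\beta.
$$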
The estimate only holds when $x_0$ and $x_1$ are in the same piece of ${\mathcal B}^N$. To get a global estimate we might have to increase the constant to obtain
$$
\text{dist}(T(x_1), T(x_0))\le C_N |x_1-x_0|^\beta,
$$
for any pair $x_0,x_1\in X_N$.
Choose $x\in {\mathcal O}_F$ to prove that $T_{x_0}$, $x_0\in X_N$, is a $\beta$-H\"older tangent
line to ${\mathcal O}_F$.
Again let
$n\ge 1$ be such that $x\in B_n(x_0)$ and $x\notin B_{n+1}(x_0)$. The distance between $x_0$ and $x$ is bounded from below when $n<N$. To find the H\"older estimate for the distance between $x$ and $T_{x_0}$ we may assume that $n\ge N$.
Recall, $\text{dist}(T(x_0), T_{n}(x_0))=O(\theta^n)$ and
$|x-x_0|\ge \rho^{n-N}\cdot d_N$.
Denote the length of the stick which contains ${\mathcal O}_F\cap B_n(x_0)$ by $l>0$.
The a priori bounds imply
$$
l=O(|x-x_0|).
$$
Then
\begin{equation}\label{distxT}
\begin{aligned}
\text{dist}(x, T_{x_0})&= O(\theta^n)\cdot l\\
&=O((\rho^n)^\beta |x-x_0|)\\
&\le C_N|x-x_0|^{1+\beta}.
\end{aligned}
\end{equation}
This estimate holds when $x_0$ and $x$ are in the same piece of ${\mathcal B}^N$. We might have to increase the constant $C_N$ to get a global H\"older estimate.
\end{proof}
In \cite{CLM} it has been shown that the Cantor attractors ${\mathcal O}_F$, with $b_F>0$, cannot be part of a smooth curve.
\begin{thm}\label{curve} Each set $X_N\subset {\mathcal O}_F$ is contained in a
$C^{1+\beta}$-curve.
\end{thm}
\begin{proof} The proof will not use the specific structure of the set $X_N$ described by the pieces in ${\mathcal B}^n$.
The proof holds for every closed set in the plane with a tangent line at each point which depends H\"older continuously on the point.
We will construct a $C^{1+\beta}$-curve through every set $X_N\cap B$ with $B\in {\mathcal B}^{N+K}$ and $K\ge 0$ large enough.
This suffices to prove the Theorem.
Choose $B\in {\mathcal B}^{N+K}$ with $X_N\cap B\ne \emptyset$. For each $x_0\in X_N\cap B$ consider the cusps
$$
S_{x_0}=\{x\in B\mid \text{dist}(x, T_{x_0})< C_N|x-x_0|^{1+\beta}\}.
$$
Note
$
X_N\cap B\subset S_{x_0}.
$
Thus
$$
S\equiv \bigcap_{x\in X_N}S_{x}\supset X_N\cap B.
$$
Fix $K\ge 0$ large enough such that each $S_x\setminus \{x\}$ has two components.
This already defines an order on $X_N\cap B$. Write
$$
S_x\setminus\{x\}=S^+_x\cup S^-_x,
$$
where $S^\pm_x$ are the connected components. We may assume that the assignment of connected components
preserves the order in the following sense:
if $x_1\in S^+_{x_0}$ then
$$
S^+_{x_1}\cap X_N\subset S^+_{x_0}.
$$
A point $x\in X_N$ is a boundary
point of $X_N$ if
$S^+_x\cap X_N=\emptyset$ or $S^-_x\cap X_N=\emptyset$.
A connected component $G\subset S\setminus X_N$, see Figure \ref{gap}, is called a gap of $X_N$. For every gap there exist two boundary points $x_0, x_1\in X_N$ such that
$$
G\subset S^+_{x_0}\cap S^-_{x_1}.
$$
Consider a gap between two boundary points $x_0$ and $x_1$ and the graph
over the tangent line $T_{x_0}$ of a cubic polynomial $\gamma_G$ which passes through $x_0$ and $x_1$ and is tangent to the tangent lines $T_{x_0}$ and $T_{x_1}$. Denote the graph of $\gamma_G$ also by $\gamma_G$. A calculation shows that
$$
|\gamma_G|_0\le 7 C_N|x_0-x_1|^{1+\beta}
$$
and, if $D\gamma_G(x)\in \Bbb{P}^1$ is the direction of the tangent line to the graph $\gamma_G$ at a point $x\in \gamma_G$, then
$$
|D\gamma_G(y)-D\gamma_G(x)|\le 21 C_N|y-x|^\beta.
$$
In particular, the distance between the tangent directions along the curve and
the directions at the boundary points shrinks to zero as the diameter of the gap shrinks. This implies that the closure of the union of the curves $\gamma_G$,
$$
\gamma=X_N \cup \bigcup_{G} \gamma_G,
$$
is a $C^1$ curve.
It remains to show that the tangent direction $D\gamma$ is $C^\beta$. Choose $x_0, x_1\in \gamma$. Let $a_0\in \gamma\cap X_N$ be the closest
point to $x_0$ on the line segment between $x_0$ and $x_1$. Similarly, let $a_1$ be the closest point to $x_1$. If $x_0\in G$ then $a_0$ is a boundary point of the gap $G$, see Figure \ref{gap}. For $K\ge 0$ large enough, the distances between these points are, up to a factor close to $1$, equal to the corresponding distances of the projections of these points to the tangent line through $a_0$. We may assume that $|x_1-a_1|,|a_1-a_0|,|a_0-x_0|\le 2|x_1-x_0|$.
Then
$$
\begin{aligned}
|D\gamma(x_1)-D\gamma(x_0)|&\le
C_N\cdot \{21|x_1-a_1|^\beta+ |a_1-a_0|^\beta+ 21|a_0-x_0|^\beta\}\\
&\le 86 C_N|x_1-x_0|^\beta.
\end{aligned}
$$
The curve $\gamma$ is $C^{1+\beta}$ and contains $X_N\cap B$.
\end{proof}
The following Theorem answers a question posed by J.C. Yoccoz.
\begin{thm} The Cantor attractor ${\mathcal O}_F$ is contained in a rectifiable curve without self-intersections.
\end{thm}
\begin{proof} Let $F_n:[0,1]^2\to [0,1]^2$ be the $n^{th}$ renormalization of $F$.
The piece $B^1_v(F_n)\subset \text{Dom}(F_n)$ is a strip bounded by two horizontal line segments and $B^1_c(F_n)\subset \text{Dom}(F_n)$ is a strip bounded by two vertical line segments. Let $\gamma_n$ be a collection of three line segments which connects the two pieces and each piece with the horizontal boundaries of $\text{Dom}(F_n)=[0,1]^2$, see Figure \ref{gn}.
For each $n\ge 1$ we will construct inductively a curve $\Gamma^n$ in the domain of $F$ which passes through all pieces $B\in {\mathcal B}^n$ of the $n^{th}$ cycle of $F$. Let $\Gamma^n_n$ consist of $\gamma_n$ and curves in the boundaries of $B^1_v(F_n)$ and $B^1_c(F_n)$ connecting the end points of $\gamma_n$, see Figure \ref{gn}.
Suppose $\Gamma^n_{k+1}$ is defined and its end points are in the two horizontal boundary parts of the domain of $F_{k+1}$, see Figure \ref{gn}. Let $\Gamma^n_k$ be the curve connecting the top and bottom of the domain of $F_k$ consisting of the curves
$$
\Gamma^n_k=\psi^k_v(\Gamma^n_{k+1})\cup \psi^k_c(\Gamma^n_{k+1})\cup \gamma_k\cup g^n_k,
$$
where $g^n_k$ consists of the two shortest horizontal line segments connecting the end points of $ \psi^k_v(\Gamma^n_{k+1})$ with the end points of $\gamma_k$ and the two vertical line segments connecting the end points of $ \psi^k_c(\Gamma^n_{k+1})$ with the end points of $\gamma_k$, see Figure \ref{gn}. Let $\Gamma^n=\Gamma^n_0$.
The curve $\Gamma^{n+1}$ is obtained from $\Gamma^n$ by changing it inside the pieces of ${\mathcal B}^n$. Hence,
$$
\Gamma^{n+1}\setminus {\mathcal B}^n=\Gamma^n \setminus {\mathcal B}^n.
$$
This refinement process induces natural parametrizations of the curves $\Gamma^n$, where the parametrization of $\Gamma^{n+1}$ is obtained from the one of $\Gamma^n$ by adjusting it only inside the pieces of ${\mathcal B}^{n}$. In each piece $B\in {\mathcal B}^n$, the curve $\Gamma^{n+1}$ is partitioned into five sub-curves, see Figure \ref{gn}. The refinement of the parametrization of $\Gamma^n$ spends equal time in each of these five sub-curves. The diameters of the pieces in ${\mathcal B}^n$ decay exponentially fast, $\sup_{B\in{\mathcal B}^n} \text{diam}(B)=O(\sigma^n)$. The construction and this decay imply that the parametrizations have a uniform H\"older bound. This bound allows us to take a limit. Let $\Gamma$ be the limiting H\"older curve. It contains ${\mathcal O}_F$.
The maps $\psi^k_v$ and $\psi^k_c$ contract distances by a factor of at least $\frac{1}{2.5}$, for $k\ge 1$ large enough, see Lemma \ref{contracting}.
Denote the length of $\Gamma^n_k$ by $|\Gamma^n_k|$. Then,
$$
\begin{aligned}
|\Gamma^n_k|&\le \frac{2}{2.5} \cdot |\Gamma^n_{k+1}|+|\gamma_k|+|g^n_k|\\
&\le \frac{2}{2.5} \cdot |\Gamma^n_{k+1}|+4.
\end{aligned}
$$
The curves $\Gamma^n_k$ have bounded length. In particular, the limiting curve $\Gamma$ is rectifiable.
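Solving this linear recursion makes the uniform bound explicit: iterating from level $n$ down to level $k$ gives
$$
|\Gamma^n_k|\le \Big(\frac{2}{2.5}\Big)^{n-k}\cdot|\Gamma^n_n|+4\sum_{j\ge 0}\Big(\frac{2}{2.5}\Big)^{j}\le |\Gamma^n_n|+20,
$$
and $|\Gamma^n_n|$ is uniformly bounded, since it consists of the three segments of $\gamma_n$ together with boundary curves inside the unit square.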
Outside the pieces $B\in {\mathcal B}^n$ the curve $\Gamma$ coincides with $\Gamma^n$, which consists of non-intersecting curves. A self-intersection has to be a point $x\in {\mathcal O}_F$. Let $B_n(x)\in {\mathcal B}^n$ be the piece which contains this self-intersection. The interval of parameter values which correspond to points in $B_n(x)$ is an interval of length $O(1/5^n)$. This means that the parametrization is injective. There are no self-intersections.
\end{proof}
\begin{rem} The curve $\Gamma$ for the degenerate maps follows the same combinatorial construction as for a non-degenerate maps. This implies that the order of the pieces $B{\bar i}n {\mathcal B}^n$ in the curve $\Gamma$ is the same order as observed in
one-dimensional maps.
\end{rem}
\begin{rem} The relative height (or thickness) of a piece $B\in {\mathcal B}^n$ coincides with the number $\beta(B)\le 1$ introduced by P. Jones. In \cite{J}, Jones characterizes sets which are contained in rectifiable curves. A set ${\mathcal O}$ is contained in a rectifiable curve if and only if its dyadic covers ${\mathcal B}^n$ satisfy the summability condition
$$
\sum_{n\ge 1} \sum_{B\in {\mathcal B}^n} \beta^2(B)\cdot \text{diam}(B)<\infty.
$$
In the present case of ${\mathcal O}_F$, one can use the dynamical covers ${\mathcal B}^n$ instead of the dyadic ones.
Since $\text{diam}(B)=O(\sigma^n)$ with $2\sigma<1$, the set ${\mathcal O}_F$ satisfies the summability condition with respect to these covers. The diameters of the pieces decay fast enough that we do not have to use actual geometric information about the pieces: the bound $\beta(B)\le 1$ suffices. For completeness we included a direct proof of rectifiability using the strongly contracting rescalings $\psi^k_c$ and $\psi^k_v$.
The sets $X_N$ have better geometric properties. The relative height (or thickness) of the pieces covering $X_N$, and the corresponding numbers $\beta(B)$, decay exponentially fast. This is responsible for the smooth curves containing these sets.
\end{rem}
The {\it tangent bundle} over ${\mathcal O}_F$ is defined by
$$
TX=\{(x,v)\in X\times \Bbb{R}^2| v\in T_x\}.
$$
If $Y\subset X$ then the tangent bundle over $Y$ is denoted by
$$
TY=\{(x,v)\in TX| x\in Y\}.
$$
We identify $T_x\subset \Bbb{R}^2$, $\{x\}\times T_x\subset TX$, with the
{\it tangent space} at $x\in X\subset {\mathcal O}_F$.
Let
$
\pi_x:\Bbb{R}^2\to T_x
$
be the orthogonal projection.
Let $Y\subset {\mathcal O}_{F_1}$. A map $h:Y\to h(Y)\subset {\mathcal O}_{F_2}$
is differentiable at $x_0\in Y$ if
$x_0$ and $h(x_0)$ have a tangent line,
and there exists a linear map $Dh(x_0): T_{x_0}\to T_{h(x_0)}$ such that for $x\in Y$
$$
h(x)=h(x_0)+Dh(x_0)(\pi_{x_0}(x)-x_0)+o(|x-x_0|).
$$
We will identify $Dh(x_0)$ with a number.
A bijection $h:X\to h(X)\subset {\mathcal O}_{F_2}$ is {\it almost everywhere a
$(1+\beta)$-diffeomorphism}
if for each $N\ge 1$ the restriction $h|X_N$ is differentiable at each $x\in X_N$ and
$$
Dh: TX_N\to Th(X_N)
$$
and its inverse are $\beta$-H\"older homeomorphisms.
Let ${\mathcal O}_{F_*}$ be the Cantor attractor of the fixed point of renormalization, the degenerate map $F_*$. Its invariant measure is denoted by $\mu_*$.
In \cite{LM1} it has been shown that every conjugation
which extends to a homeomorphism between neighborhoods of ${\mathcal O}_F$ and
${\mathcal O}_{F_*}$ respects the orbits of the tips. We will only consider conjugations
$$
h:{\mathcal O}_F\to {\mathcal O}_{F_*}
$$
with $h(\tau_{F})=\tau_{F_*}$.
\begin{defn}\label{defdistributionalrigidity} The attractor ${\mathcal O}_F$ of an infinitely renormalizable H\'enon map $F\in{\mathcal H}_{\Omega}(\overline{\epsilon})$ is {\it probabilistically rigid} if there exists $\beta>0$
such that the restriction
$
h:X\to h(X)
$
of the conjugation $h:{\mathcal O}_F\to {\mathcal O}_{F_*}$ is almost everywhere a $(1+\beta)$-diffeomorphism.
\end{defn}
\begin{thm}\label{distributionalrigidity} The Cantor attractor ${\mathcal O}_F$ is probabilistically rigid.
\end{thm}
\begin{proof} Fix $N\ge 1$ and choose $B^0\in {\mathcal S}_{N}(\theta^N)$ which intersects $X_N$. Consider the stick which contains $B^0$.
Call one of the long edges of this stick the bottom and choose an orientation of this line segment.
It suffices to show the differentiability of the conjugation restricted to such a piece.
We will construct a curve containing $X_N\cap B^0$. This curve will be the closure of a countable collection of pairwise disjoint line segments.
These line segments are called gaps. This piecewise affine curve is better adapted to the problem at hand than the curve of
Theorem \ref{curve}. Let
$$
{\mathcal X}_N(k)=\{B\in {\mathcal S}_k(\theta^k)| B\cap X_N\ne \emptyset \text{ and } B\subset B^0\}.
$$
Given $B\in {\mathcal X}_N(k)$, let $\delta>0$ be the relative height of the stick of $B$ and $\sigma_1, \sigma_2>0$ the scaling factors of the two
pieces $B_1, B_2\in {\mathcal B}^{k+1}$ contained in $B$. The stick of $B$ has three parts: two rectangles of relative length $\sigma_1$ and $\sigma_2$, containing respectively $B_1$ and $B_2$, and the complement within the stick. This last part does not intersect $X_N$. It could be that one of the other parts also does not intersect $X_N$. At least one of the parts does intersect $X_N$. Let $E$ be the union of the parts which do not intersect $X_N$ and let $H_-$ and $H_+$ be the vertical boundaries of $E$, see Figure \ref{gapseg}.
The {\it gap} of $B$ will be a line segment $G_B$ connecting $H_-$ with $H_+$. Let $B_l\in {\mathcal X}_N(l)$, $l=k, \dots, L$, be the pieces which intersect $H_+$. Choose
$$
x^+_B\in H_+\cap {\mathcal O}_F\cap \bigcap_{l=k}^L B_l.
$$
The point $x^+_B$ is uniquely defined when $L=\infty$. In fact, it will then be a point of $X_N$. When $L<\infty$ we have some freedom in choosing $x^+_B$. Choose it to be the point closest to the bottom of $B^0$.
Similarly, choose a point $x^-_B\in H_-$. The gap of $B$, denoted by $G_B$, is the line segment $(x^-_B, x^+_B)$.
The length of a gap is defined by
$$
|G_B|\equiv |x^+_B-x^-_B|.
$$
\begin{rem} \label{endpt} The gaps are pairwise disjoint. For $B_1\in {\mathcal X}_N(k+1)$ and $B\in {\mathcal X}_N(k)$ it might happen that $G_{B_1}$ and
$G_B$ have a common endpoint. The angle between the gap $G_B$ and the bottom of $B\in {\mathcal X}_N(k)$ is of order $\theta^k$. This is a consequence of $\delta=O(\theta^k)$ and the {\it a priori} bounds on $\sigma_1$ and $\sigma_2$.
\end{rem}
There is a natural order on $X_N\cap B^0$ and the collection of gaps. It coincides with the order of the projections of $X_N$ and
the gaps onto the bottom of $B^0$. Let us define the order between some $x\in X_N\cap B^0$ and a gap $G_{B_1}$. Let $k\ge N$ be maximal such that there is $B\in {\mathcal X}_N(k)$ with $x\in B$ and $G_{B_1}\cap B\ne\emptyset$.
The stick of $B$ has three parts, as described above. Observe that $x$ and $G_{B_1}$ cannot be in the same part of the stick of $B$.
The angle of the axis of $B$ with the bottom of $B^0$ is of order $\theta^N$. This defines an order on the three parts of this stick. Accordingly, this defines whether $x>G_{B_1}$ or $x<G_{B_1}$.
The {\it gap}-distance between $x,y\in X_N\cap B^0$ is
$$
|x-y|_g=\sum_{x<G_B<y} |G_B|.
$$
The gaps between $x,y\in X_N\cap B^0$ form a curve
$$
[x,y]_g\equiv \overline{\bigcup_{x<G_B<y} G_B}.
$$
It is a graph over the tangent line of $x$.
\begin{clm}\label{distg} If $x,y\in B\cap X_N$ with $B\in {\mathcal X}_N(k)$
then
$$
\frac{|x-y|_g}{|x-y|}=1+O(\theta^k).
$$
\end{clm}
\begin{proof} Let $\pi_x$ be the projection onto the tangent line $T_x$ of $x$. Then
\begin{equation}\label{distg1}
|x-\pi_x(y)|=\sum_{x<G_{B'}<y} |\pi_x(G_{B'})|.
\end{equation}
The angle between the tangent line of $x$ and each gap $G_{B'}$ between $x$ and $y$ is of order $\theta^k$, see (\ref{Cbeta}) and Remark \ref{endpt}. This implies that
\begin{equation}\label{distg2}
\frac{|\pi_x(G_{B'})|}{|G_{B'}|}=1+O(\theta^k).
\end{equation}
The Cantor set ${\mathcal O}_F$ is almost everywhere differentiable, see Theorem \ref{lines}. In particular, use (\ref{distxT}) to obtain
\begin{equation}\label{distg3}
\frac{|x-\pi_x(y)|}{|x-y|}=1+O(\theta^k).
\end{equation}
The estimates (\ref{distg1}), (\ref{distg2}), and (\ref{distg3}) prove the Claim.
\end{proof}
Given a piece $B$ of $F$, the corresponding piece of $F_*$ is denoted by $B^*=h(B)$.
\begin{clm}\label{GG*} Let $B_l\in {\mathcal X}_N(l)$ with $B_l\subset B_k\in {\mathcal X}_N(k)$. Then
$$
\ln\frac{|G_{B_l}|} {|G_{B_k}|}\cdot
\frac{|G_{B^*_k}|} {|G_{B^*_l}|} =O(\theta^k).
$$
\end{clm}
\begin{proof} The Claim holds for $l=k+1$ because the relevant pieces are in ${\mathcal S}_k(\theta^k)$ and
${\mathcal S}_{k+1}(\theta^{k+1})$.
In general, there is a unique sequence of pieces $B_j\in {\mathcal X}_N(j)$, $k\le j\le l$, with
$B_l\subset B_{l-1}\subset \dots \subset B_{k+1}\subset B_k$. Then
$$
\ln\frac{|G_{B_l}|} {|G_{B_k}|}\cdot
\frac{|G_{B^*_k}|}{|G_{B^*_l}|}
=\sum_{j=k}^{l-1} \ln\frac{|G_{B_{j+1}}|} {|G_{B_{j}}|} \cdot
\frac{|G_{B^*_{j}}|} {|G_{B^*_{j+1}}|} =\sum_{j=k}^{l-1} O(\theta^j)=O(\theta^k).
$$
\end{proof}
\begin{clm}\label{xyz} Let $x,y,z\in X_N\cap B$ with $B\in {\mathcal X}_N(k)$ and let $x^*,y^*,z^*\in h(X_N)$ be the corresponding images under $h$. Then
$$
\ln \frac{|x-y|_g}{|x-z|_g}\cdot \frac{|x^*-z^*|_g}{|x^*-y^*|_g}=O(\theta^k).
$$
\end{clm}
\begin{proof} Claim \ref{GG*} gives, for every piece $\tilde{B}\subset B$,
$$
|G_{\tilde{B}}|= |G_{\tilde{B}^*}|\cdot \frac{|G_{B}|} {|G_{B^*}|}\cdot (1+O(\theta^k)).
$$
This implies
$$
\begin{aligned}
\frac{|x-y|_g}{|x-z|_g}&=\frac{\sum_{x<\tilde{B}<y} |G_{\tilde{B}}|} {\sum_{x<\tilde{B}<z} |G_{\tilde{B}}|}\\
&=\frac{\sum_{x^*<\tilde{B}^*<y^*} |G_{\tilde{B}^*}|} {\sum_{x^*<\tilde{B}^*<z^*} |G_{\tilde{B}^*}|}\cdot (1+O(\theta^k))\\
&=\frac{|x^*-y^*|_g}{|x^*-z^*|_g}\cdot (1+O(\theta^k)).
\end{aligned}
$$
This finishes the proof of the Claim.
\end{proof}
A reformulation of this Claim is the following. Let $x,y,z\in X_N\cap B$ with $B\in {\mathcal X}_N(k)$. Then
\begin{equation}\label{deltaDh}
\left|\ln \frac{|h(y)-h(x)|_g}{|y-x|_g}-\ln \frac{|h(z)-h(x)|_g}{|z-x|_g}\right|=O(\theta^k).
\end{equation}
This implies that for $x,y\in X_N\cap B^0$ the following limit exists:
$$
Dh(x)=\lim_{y\to x} \frac{|h(y)-h(x)|_g}{|y-x|_g}.
$$
Moreover, the limit depends continuously on $x$.
\begin{clm}\label{hHolder} There exists a universal $\beta>0$, independent of $N$, such that $Dh:X_N\to \Bbb{R}$ is $\beta$-H\"older.
\end{clm}
\begin{proof}
Choose $x_0, x\in X_N\cap B^0$ to prove a H\"older estimate for $\ln Dh$. Let $k\ge N$ be maximal such that $x\in B_k(x_0)$.
Observe, as before in the proof of Theorem \ref{lines},
$$
|x-x_0|\ge \rho^{k-N}\cdot \text{diam}(B^0)
$$
where $\rho<1$. Choose $\beta>0$ such that $\rho^\beta=\theta$.
Then
\begin{equation}\label{deltaxtheta}
\theta^k=O(|x-x_0|^\beta).
\end{equation}
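In more detail: since $\rho^\beta=\theta$ and $\rho^{k-N}\le |x-x_0|/\text{diam}(B^0)$,
$$
\theta^k=\rho^{\beta k}=\rho^{\beta(k-N)}\cdot \theta^{N}\le \left(\frac{|x-x_0|}{\text{diam}(B^0)}\right)^{\beta}\cdot \theta^{N}=O(|x-x_0|^\beta),
$$
where the constant depends only on the fixed data $N$ and $B^0$.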
Hence, using (\ref{deltaDh}) and (\ref{deltaxtheta}),
$$
|\ln Dh(x)-\ln Dh(x_0)|=O(\theta^k)=O(|x-x_0|^\beta).
$$
This suffices to show the H\"older bound for $Dh$.
\end{proof}
We will identify $Dh(x)$ with a linear map $Dh(x):T_x\to T_{h(x)}$.
The positive function $Dh$ is bounded. This bound, (\ref{deltaDh}), and Claim \ref{distg} imply that for $x,x_0\in X_N$
\begin{equation}\label{dhdx}
|h(x)-h(x_0)|=O(|x-x_0|).
\end{equation}
\begin{clm}\label{Deltah} For $x,y\in X_N\cap B^0$
$$
|h(y)-h(x)|=Dh(x)\cdot |x-y|\cdot (1+O(|x-y|^\beta)).
$$
\end{clm}
\begin{proof} Let $k\ge N$ be maximal such that $y\in B_k(x)$.
Apply Claim \ref{distg}, (\ref{deltaDh}), and (\ref{deltaxtheta}) in the following estimate:
$$
\begin{aligned}
|h(y)-h(x)|&=\frac{|h(y)-h(x)|}{|h(y)-h(x)|_g}\cdot |h(y)-h(x)|_g\\
&=(1+O(\theta^k))\cdot Dh(x)\cdot |y-x|_g\\
&=(1+O(|y-x|^\beta))\cdot Dh(x)\cdot |y-x|.
\end{aligned}
$$
\end{proof}
Now we are prepared to show the differentiability of $h$. Choose $x,x_0\in X_N\cap B^0$. Let $k\ge N$ be maximal such that $x\in B_k(x_0)$.
Let $\Delta=Dh(x_0)(\pi_{x_0}(x)-x_0)\in T_{h(x_0)}$. Claim \ref{Deltah}, (\ref{distg3}), and (\ref{deltaxtheta}) imply
\begin{equation}\label{delta}
|\Delta|=|h(x)-h(x_0)|\cdot (1+O(|x-x_0|^\beta)).
\end{equation}
Let $J=\pi_{h(x_0)}(h(x))-h(x_0)\in T_{h(x_0)}$ and $V=h(x)-\pi_{h(x_0)}(h(x))$. The image $h({\mathcal O}_F)$ is contained in a smooth curve, the image of the degenerate map $F_*$. Hence,
\begin{equation}\label{JJ}
\begin{aligned}
|J|&=|h(x)-h(x_0)|\cdot (1+O(|h(x)-h(x_0)|^2))\\
&=|h(x)-h(x_0)|\cdot (1+O(|x-x_0|^\beta))
\end{aligned}
\end{equation}
and
\begin{equation}\label{V}
|V|=O(|h(x)-h(x_0)|^2).
\end{equation}
Apply (\ref{delta}), (\ref{JJ}), (\ref{V}), and (\ref{dhdx}) in the following estimate:
$$
\begin{aligned}
h(x)&=h(x_0)+\Delta+(J-\Delta)+V\\
&=h(x_0)+\Delta+O(|h(x)-h(x_0)|\cdot |x-x_0|^\beta)+O(|h(x)-h(x_0)|^2)\\
&=h(x_0)+Dh(x_0)(\pi_{x_0}(x)-x_0)+O(|x-x_0|^{1+\beta}).
\end{aligned}
$$
This finishes the proof of the differentiability and the Theorem.
\end{proof}
\comm{
\begin{proof} Let $x_0\in X_N$. Consider the collection of intervals, projections of pieces,
$$
{\mathcal I}_k=\{\text{hull}(\pi_{x_0}(B\cap {\mathcal O}_F))| B\in {\mathcal B}_k, B\cap X_N\ne \emptyset \}.
$$
The corresponding gaps are collected in
$$
{\mathcal G}_k=\{G| \exists J\in {\mathcal I}_{k-1} \text{ with } G=J\setminus \cup {\mathcal I}_k\}.
$$
Let
$$
\mathcal{A}_k={\mathcal I}_k\cup {\mathcal G}_k.
$$
Each piece $B\in {\mathcal B}_k$ has a well defined counterpart $B^*\in {\mathcal B}_k(F_*)$, defined by $h(B\cap {\mathcal O}_F)=B^*\cap {\mathcal O}_{F_*}$. This defines the corresponding intervals in the tangent space $T_{h(x_0)}$. For $I\in \mathcal{A}_k$, denote its counterpart by $I^*\in \mathcal{A}^*_k$.
If $I\in \mathcal{A}_{k+1}$ and
$I\subset J\subset \text{hull}(\pi_{x_0}(B'\cap {\mathcal O}_F))$ where $J\in \mathcal{A}_{k}$ let
$$
\sigma_{I}=\frac{|I|}{|J|}
$$
and $\sigma_I^*=\frac{|I^*|}{|J^*|}$ its counterpart. Observe that for all $I\in \mathcal{A}_k$, $k\ge N$,
$$
\sigma_I=\sigma^*_I\cdot (1+O(\theta^k)).
$$
This holds because $X_N\subset {\mathcal S}_k(\theta^k)$ with $k\ge N$.
Let $I_m(x_0)\in \mathcal{A}_m$ be the interval which contains
$x_0\in I_m(x_0)$. Observe,
$$
\frac{|I_{m+1}^*(x_0)|}{|I_{m+1}(x_0)|}
=(1+O(\theta^m))\cdot \frac{|I_m^*(x_0)|}{|I_m(x_0)|}.
$$
This implies that
\begin{equation}\label{Dh}
Dh(x_0)=\lim_{m\to \infty} \frac{|I_m^*(x_0)|}{|I_m(x_0)|}
\end{equation}
is well defined and is nonzero. Later we will show that $Dh$ serves as the derivative of $h$. To show the H\"older continuity of $Dh$ we use the following
refinement of the previous type of estimates.
Let $I_j\in \mathcal{A}_j$ with $I_{j+1}\subset I_j$. For $n>k$
\begin{equation}\label{telescope}
\frac{| I_{n}|\cdot | I^*_{k}|}
{| I^*_{n}|\cdot | I_{k}|}=
1+O(\theta^{k}).
\end{equation}
This holds because
$$
\begin{aligned}
\frac{| I_{n}|\cdot | I^*_{k}|}
{| I^*_{n}|\cdot | I_{k}|}&=
\prod_{j=k+1}^{n}\frac{\frac{| I_{j}|}{ | I_{j-1}|}}
{\frac{| I^*_{j}|}{ | I^*_{j-1}|}}
=\prod_{j=k+1}^{n}\frac{\sigma_{I_j}}{\sigma^*_{I_j}}\\
&=\prod_{j=k+1}^{n}(1+O(\theta^{j}))=1+O(\theta^{k}).
\end{aligned}
$$
Choose $x_0, x_1\in X_N$ to prove a H\"older estimate for $Dh$. Let $n\ge 1$ be such that $x_0,x_1\in B_n(x_0)$ and $x_1\notin B_{n+1}(x_0)$. To prove a H\"older estimate we may assume that $n\ge N$. Observe, as before in the proof of Theorem \ref{lines},
$$
|x_1-x_0|\ge C_N\cdot \rho^{n-N}\cdot d_N,
$$
where
$$
d_N=\min_{B\in {\mathcal B}_N} \text{diam}(B\cap {\mathcal O}_F)>0
$$
and $\rho<1$. This implies, for fixed $N\ge 1$,
\begin{equation}\label{dx}
\theta^n=O(|x_1-x_0|^\beta).
\end{equation}
The projection of $B_m(x_1)\cap {\mathcal O}_F$ onto $T_{x_0}$ is denoted by $J_m\in \mathcal{A}_m$. The projection onto $T_{x_1}$ is $I_m(x_1)$. Recall that the angle between $T_{x_0}$ and $T_{x_1}$ is bounded by the H\"older bound of $T:X_N\to \Bbb{P}^1$, see Theorem \ref{lines},
$$
d(T(x_0), T(x_1))=O(|x_1-x_0|^\beta).
$$
Because $x_1\in X_N$ we have that $B_m(x_1)$, $m\ge N$, has
$\theta^m$-precision: $B_m(x_1)\cap{\mathcal O}_F$ is contained in a stick with relative thickness $\theta^m$. This implies
\begin{equation}\label{IJ}
\frac{|I_m(x_1)|}{|J_m|}=1+O(\theta^n).
\end{equation}
Then, by using (\ref{telescope}), (\ref{dx}), and (\ref{IJ}), we get
$$
\begin{aligned}
\frac{Dh(x_1)}{Dh(x_0)}&=\lim_{m\to \infty} \frac{|I^*_m(x_1)|}{|I_m(x_1)|}\cdot
\frac{|I_m(x_0)|}{|I^*_m(x_0)|}\\
&=(1+O(\theta^n))\cdot \lim_{m\to \infty} \frac{|J^*_m|}{|J_m|}\cdot
\frac{|I_m(x_0)|}{|I^*_m(x_0)|}\\
&=(1+O(\theta^n))\cdot \lim_{m\to \infty} \frac{|J^*_m|}{|J_m|}
\frac{|I_n(x_0)|}{|I^*_n(x_0)|}\cdot
\frac{|I_m(x_0)|}{|I^*_m(x_0)|} \frac{|I^*_n(x_0)|}{|I_n(x_0)|}\\
&=1+O(\theta^n)\\
&=1+O(|x_1-x_0|^\beta).
\end{aligned}
$$
This implies that $x\mapsto Dh(x)$, $x\in X_N$, is continuous and hence bounded. In turn, this gives
$$
|Dh(x_1)-Dh(x_0)|=O(|x_1-x_0|^\beta),
$$
when $x_0,x_1\in X_N$.
It is left to prove that $h|X_N$, $N\ge 1$, is differentiable. Take
$x_0,x\in X_N$. Let $n\ge 1$ be such that $x_0,x\in B_n(x_0)$ and $x\notin B_{n+1}(x_0)$. To prove the differentiability at $x_0$ we may assume that $n\ge N$. Observe, as before,
$$
|x-x_0|\ge C_N\cdot \rho^{n-N}\cdot d_N.
$$
This implies
\begin{equation}\label{dx2}
\theta^n=O(|x-x_0|^\beta).
\end{equation}
Let $T\subset T_{x_0}$ be the interval connecting $x_0$ and $\pi_{x_0}(x)$. Observe,
\begin{equation}\label{Tdx}
|T|\asymp |x-x_0|.
\end{equation}
For each $m\ge N$ we can find a minimal collection of intervals, with pairwise disjoint interiors,
$$
J_{m,k}\in \bigcup_{l=N}^m \mathcal{A}_l
$$
such that
$$
T_m=\bigcup_{k} J_{m,k}\supset T,
$$
and
$$
J_{m,0}=I_m(x_0).
$$
Note that $T_{m+1}\subset T_m$ and $T=\cap T_m$. Let
$$
T^*_m=\bigcup_{k} J^*_{m,k}
$$
and $T^*\subset T_{h(x_0)}$ be the interval connecting $h(x_0)$ and
$\pi_{h(x_0)}(h(x))$. The directions of the tangent lines $T_x$, $x\in B_n(x_0)$, are very close to the direction of $T_{x_0}$. The same holds for their counterparts along ${\mathcal O}_{F_*}$. This implies that the map
$$
T_{x_0}\ni \pi_{x_0}(x)\mapsto \pi_{h(x_0)}(h(x))\in T_{h(x_0)}
$$
is monotone. Hence, the interiors of the intervals $J^*_{m,k}$ are disjoint and
$T^*=\cap T^*_m$.
Then, by using (\ref{telescope}),
$$
\begin{aligned}
|T^*|&=\lim_{m\to\infty}\frac{|T^*_m|}{|T_m|}\cdot |T|\\
&=\lim_{m\to\infty}\frac{\sum_{k}|J^*_{m,k}|}{|I^*_n(x_0)|}
\frac{|I_n(x_0)|}{\sum_{k}|J_{m,k}|}\cdot
\frac{|I^*_n(x_0)|}{|I_n(x_0)|}\frac{|I_m(x_0)|}{|I^*_m(x_0)|}\cdot
\frac{|I^*_m(x_0)|}{|I_m(x_0)|}\cdot |T|\\
&=(1+O(\theta^n))\cdot\lim_{m\to\infty}\frac{|I^*_m(x_0)|}{|I_m(x_0)|}\cdot |T|\\
&=(1+O(\theta^n))\cdot Dh(x_0)\cdot |T|.
\end{aligned}
$$
Finally, use Theorem \ref{lines},
$$
\begin{aligned}
h(x)&=h(x_0)+\pi_{x_0}(h(x))+O(|h(x)-h(x_0)|^{1+\beta})\\
&= h(x_0)+|T^*|+O(|T^*|^{1+\beta})\\
&=h(x_0)+Dh(x_0)|T|+ O(\theta^n\cdot |T|)+O(|T|^{1+\beta})\\
&=h(x_0)+Dh(x_0)(\pi_{x_0}(x)-x_0)+O(|x-x_0|^{1+\beta}),
\end{aligned}
$$
where we used (\ref{dx2}) and (\ref{Tdx}). The estimate for $Dh^{-1}$ can be obtained the same way by exchanging the roles of the intervals $I^*$ and $I$. Observe that the continuity of $Dh$ and $Dh(x)>0$ imply that $\min Dh>0$.
The tangent spaces $T_x$, $x\in X_N$, have almost constant direction when the points are in the same piece $B\in {\mathcal B}_N$. Choose the orientations of the tangent spaces such that the projections $\pi_{x_0}:T_{x_1}\to T_{x_0}$, $x_0, x_1\in B$ with
$B\in {\mathcal B}_N$, preserve orientation. Indeed, using this orientation we can
identify the derivative $Dh(x)$ with a positive number.
\end{proof}
}
\begin{rem} The conjugation $h:{\mathcal O}_F \to {\mathcal O}_{F_*}$ satisfies
$$
h(x)=h(x_0)+Dh(x_0)(\pi_{x_0}(x)-x_0)+O(|x-x_0|^{1+\beta})
$$
in almost every point $x_0\in {\mathcal O}_F$. Observe that the H\"older exponent is universal. The H\"older constant tends to infinity when $h$ is restricted to the larger and larger sets $X_N$, as $N\to \infty$.
\end{rem}
The Cantor attractor ${\mathcal O}_F$ has two characteristic exponents, \cite{O}.
One is zero; the other is $\ln b_F$, see \cite{CLM}. The function $T:X\to \Bbb{P}^1$ constructed before defines a measurable line field on ${\mathcal O}_F$, with respect to $\mu$.
\begin{prop}\label{lambda=0} The line field
$$
T:{\mathcal O}_F \to \Bbb{P}^1
$$
is the invariant line field of zero characteristic exponent.
\end{prop}
\begin{proof}
For each point $x_0\in X$ we have, see Theorem \ref{lines},
$$
\text{dist}(x, T_{x_0})\le C_{x_0}|x-x_0|^{1+\beta}
$$
with $x{\bar i}n {\mathcal O}_F$. The map $F$ is a diffeomorphism which preserves ${\mathcal O}_F$.
Hence,
$$
\text{dist}(x, DF(x_0)T_{x_0})=O(|x-F(x_0)|^{1+\beta})
$$
with $x\in {\mathcal O}_F$. For almost every $x_0\in X$ we have $F(x_0)\in X$. Hence, $T$ is an invariant line field, i.e.\ for almost every $x_0\in {\mathcal O}_F$ we have
$$
DF(x_0)T_{x_0}=T_{F(x_0)}.
$$
The map $F$ has only two invariant line fields, the two characteristic directions, \cite{O}. It is left to show that $T(x)$ corresponds to the zero exponent.
Choose $N\ge 1$. For almost every $x_0\in X_N$ there are $t_n\to \infty$ such
that
$$
F^{t_n}(x_0)\in X_N.
$$
This is because the ergodic measure $\mu$ assigns positive measure to $X_N$.
Let $v\in T_{x_0}$ and $v_*\in T_{h(x_0)}$ be unit vectors. Apply the chain rule:
$$
\begin{aligned}
|DF^{t_n}(x_0)v|&= |Dh^{-1}(F_*(h(x_0)))|\cdot |DF_*^{t_n}(h(x_0)) Dh(x_0)v| \cdot |Dh(x_0)|\\
&\asymp |DF_*^{t_n}(h(x_0)) v_*|.
\end{aligned}
$$
Observe that $v_*\in T_{h(x_0)}$, which is a tangent line to the graph of $f_*$. The degenerate H\'enon map $F_*$ has zero exponential contraction along this curve. Hence,
$$
\lim_{t\to\infty} \frac{1}{t}\ln |DF^t(x_0)v|=
\lim_{n\to\infty} \frac{1}{t_n}\ln |DF^{t_n}(x_0)v|=0.
$$
On a set of full measure in $X_N$ there is no exponential contraction along the direction $T(x)$. The line field $T$ has exponent zero.
\end{proof}
The Hausdorff dimension of a measure $\mu$ on a metric space ${\mathcal O}$ is defined as
$$
HD_{\mu}({\mathcal O})=\inf_{\mu(X)=1} HD(X).
$$
\begin{thm}\label{HD} The Hausdorff dimension of the invariant measure is universal:
$$
HD_\mu({\mathcal O}_F)=HD_{\mu_*}({\mathcal O}_{F_*}).
$$
\end{thm}
\begin{proof} Let $h:{\mathcal O}_F\to {\mathcal O}_{F_*}$ be a conjugation which exchanges the orbits of the tips. According to Theorem \ref{distributionalrigidity} there are sets $X_N\subset {\mathcal O}_F$ with $\mu(X_N)\ge 1-O(\theta^N)$ on which $h$ is a
$(1+\beta)$-diffeomorphism. The continuity of the derivative gives upper and lower bounds on the derivative. This implies
$$
HD(h(X_N))=HD(X_N).
$$
Hence, for
$
X=\bigcup_{N\ge 1} X_N
$
and every $Z\subset {\mathcal O}_F$
$$
HD(h(X\cap Z))=HD(X\cap Z).
$$
Let $Z_N\subset {\mathcal O}_F$ with $\mu(Z_N)=1$ and $\lim_{N\to \infty} HD(Z_N)=HD_\mu({\mathcal O}_F)$. Then
$$
\begin{aligned}
HD_\mu({\mathcal O}_F)&\ge \lim_{N\to\infty} HD(Z_N\cap X)\\
&=\lim_{N\to\infty} HD(h(Z_N\cap X))\\
&\ge HD_{\mu_*}({\mathcal O}_{F_*}),
\end{aligned}
$$
where the last inequality holds because $\mu_*(h(Z_N\cap X))=\mu(Z_N\cap X)=1$.
The opposite inequality $HD_{\mu_*}({\mathcal O}_{F_*}){\bf g}e HD_\mu({\mathcal O}_F)$ is obtained in the same way.
\end{proof}
\begin{rem} We can identify the Hausdorff dimension of the measure on the Cantor attractor. Namely,
$$
HD_\mu({\mathcal O}_F)=\frac{\ln 2 }{\int \ln | Dr_*| \,d\mu_*},
$$
where $r_*$ is the analytic expanding one-dimensional map constructed such that $\pi_1({\mathcal O}_{F_*})$ is its invariant Cantor set, see for example \cite{BMT} and references therein. The measure $\mu_*$ is the projected measure from
${\mathcal O}_{F_*}$.
\end{rem}
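As a purely illustrative sanity check of this formula in the simplest linear model (not the H\'enon renormalization fixed point itself): the middle-third Cantor set is invariant under an expanding map with $|Dr|\equiv 3$, so the integral in the denominator equals $\ln 3$ for any invariant probability measure, and the formula returns the classical value $\ln 2/\ln 3$. A minimal sketch, with all function names ours:

```python
import math
import random

random.seed(0)

def cantor_point(depth=40):
    """A random point of the middle-third Cantor set: ternary digits in {0, 2},
    each chosen with probability 1/2 (the balanced invariant measure)."""
    return sum(random.choice((0, 2)) / 3 ** (j + 1) for j in range(depth))

# Monte Carlo estimate of the denominator, the integral of ln|Dr| dmu.
# Here |Dr| == 3 on the whole Cantor set, so the estimate is exactly ln 3.
samples = [cantor_point() for _ in range(1000)]
lyapunov = sum(math.log(3.0) for _ in samples) / len(samples)

hd = math.log(2.0) / lyapunov  # the formula's value, ln 2 / ln 3
```

In a nonlinear model one would instead average $\ln|Dr_*|$ over the sampled orbit; the linear case is kept here only because its answer is known in closed form.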
\section*{Appendix: Open Problems}\label{problems}
Let us finish with some questions related to the previous discussion.
\noindent
\underline{Problem I:}
The collections ${\mathcal P}_n$, see (\ref{PPn}), of good pieces that we have constructed are determined by the average Jacobian of the map.
Observe that ${\mathcal S}_n(\theta^n)$ might be slightly larger than ${\mathcal P}_n$.
It was suggested by Feigenbaum's experiment, mentioned in the introduction, that
the statistics of the remaining {\it bad} pieces
might be governed by some universality law.
This problem is also related to one of the open problems in \cite{CLM} on the regularity of the conjugation $h:{\mathcal O}_F\to {\mathcal O}_G$ when $b_F=b_G$.
\noindent
\underline{Problem II:} Do wandering domains exist? This question was already formulated in \cite{LM1}. It is included again because its solution might be obtained by using the techniques developed in this paper.
\section*{Nomenclature}
\begin{itemize}
\item[$b_F$] average Jacobian, \S \ref{prelim}
\item[$B^n_\omega$] a piece of the $n^{th}$-renormalization cycle, \S \ref{prelim}
\item[${\mathcal B}^n$] collection of pieces in the $n^{th}$-renormalization cycle, \S \ref{prelim}
\item[${{\mathcal B}^n[k]}$] pieces of ${\mathcal B}^n$ in $E^k$, \S \ref{pieces}
\item[$B_n(x)$] the piece in ${\mathcal B}^n$ containing $x\in {\mathcal O}_F$
\item[${\mathbf{B}}$] the piece $B$ viewed from its proper scale, \S \ref{pieces}, Figure \ref{figregpiece}
\item[$\text{Dist}(\phi)$] Distortion, (\ref{distnonl})
\item[$D_k$] derivative of $\psi^k_v$ at the tip, (\ref{Dk})
\item[$\delta_B$] thickness of $B$, \S \ref{stic}
\item[$\Delta_B$] absolute thickness of $B$, \S \ref{stic}
\item[$E^k$] part of a dynamical partition, \S \ref{pieces}, Figures \ref{figE}, \ref{figEsch}
\item[$f_*$] unimodal renormalization fixed point, \S \ref{prelim}
\item[$G_k$] return map related to the partition by $E^k$, \S \ref{pieces}, Figures \ref{figE}, \ref{figEsch}
\item[$k_i(B)$] depth of the $i^{th}$-predecessor of $B$, \S \ref{pieces}, Definition \ref{controlled}
\item[$\kappa_0(n)$] minimal depth to safely push-up, \S \ref{conv} and \S \ref{defstat}
\item[$\kappa(n)$] upper bound of the brute-force regime, \S \ref{conv} and Lemma \ref{kn}
\item[$l(k)$] maximal allowable depth, \S \ref{pieces}
\item[$\eta_\phi$] Nonlinearity, (\ref{nonlin})
\item[${\mathcal O}_F$] invariant Cantor set of $F$, \S \ref{prelim}
\item[$\psi^k_{c,v}$] coordinate changes related to the renormalization $R(R^kF)$, (\ref{phi})
\item[$\psi^{n}_\omega$] coordinate change, \S \ref{prelim}
\item[$\Psi^{n}_k$] coordinate change relating $R^{n-k}(R^kF)$ to $R^n$, (\ref{Phi})
\item[${\mathcal P}_n(k; q_0,q_1)$] collection of $q_0,q_1$-controlled pieces, Definition \ref{controlled}
\item[${\mathcal P}_n$] pieces obtained by applying the three regimes, \S \ref{conv} and (\ref{PPn})
\item[$q_0,q_1$] boundary of the one-dimensional regime, \S \ref{conv} and Lemma \ref{q0q1}
\item[$\sigma$] scaling factor of the unimodal renormalization fixed point, \S \ref{prelim}
\item[$\sigma_B$] scaling factor of $B$, \S \ref{scaling}
\item[${\mathcal S}^n(\epsilon)$] collection of pieces in ${\mathcal B}^n$ with $\epsilon$ precision, \S \ref{scaling}
\item[$t_k$] tilt of the derivative of $\psi^k_v$ at the tip, (\ref{Dk})
\item[$T$] tangent line field to ${\mathcal O}_F$, \S \ref{haus}
\item[$\tau_F$] tip, \S \ref{prelim}
\item[$X$] the differentiable part of ${\mathcal O}_F$, \S \ref{haus}
\end{itemize}
\end{document}
\begin{document}
\title{The Compositional Integral\\A Brief Introduction}
\author{James David Nixon\\
[email protected]\\
University of Toronto}
\maketitle
\begin{abstract}
The Compositional Integral is defined, formally constructed, and discussed. A direct generalization of Riemann's construction of the integral, it is intended as an alternative way of looking at First Order Differential Equations. This brief notice aims to: familiarize the reader with a different approach to integration, fabricate a notation for a modified integral, and express a startling use for infinitely nested compositions. Taking inspiration from Euler's Method for approximating First Order Differential Equations, we affiliate the method with Riemann Sums and look at it from a different, modern angle.
\end{abstract}
\emph{Keywords}: Real Analysis, First Order Differential Equations, Riemann Integration, Infinitely Nested Compositions\\
2010 Mathematics Subject Classification. 26B10; 34A05.\\
\section{Introduction}\label{sec1}
\setcounter{equation}{0}
As an undergraduate student in mathematics we've all encountered (or will encounter) a theorem by Picard and Lindel\"{o}f. Though this theorem is named for \'{E}mile Picard and Ernst Lindel\"{o}f, its history traces from Augustin-Louis Cauchy, to Leonhard Euler, to Isaac Newton--and through much of 17th to 19th century mathematics. But as ingenious as Picard and Lindel\"{o}f's theorem is, it is non-productive, insofar as it does not \emph{produce} the function in any feasible manner. For a more detailed look at the history and development of differential equations, refer to \cite{HisODEMisc, HisODECaj, HisODEInce, HisODESas}; where arguably the climax of classical contributions to the theory of differential equations is Picard and Lindel\"{o}f's eponymous theorem. This paper aims to look at First Order Differential Equations from a different perspective; like the Necker cube, differential equations can be viewed in more ways than one.\\
Put brutely, Picard and Lindel\"{o}f's theorem can be summarized in a few key words. For $|x-x_0| < \delta$ with $\delta > 0$ small enough, and for $f(x, t)$ a Lipschitz continuous function, the mapping:
\[
\Phi u = y_0 + \int_{x_0}^x f(s,u(s))\,ds
\]
is a contraction. Therefore, by The Banach Fixed Point Theorem\footnote{This theorem will most likely be the first non-trivial case where a student encounters a use for The Banach Fixed Point Theorem.} it has a unique fixed point $y$, which inherently satisfies $y(x_0) = y_0$ and $\Phi y = y$, which reduces to $y'(x) = f(x, y(x))$.\\
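For illustration (a numerical sketch only, with all names ours): iterating $\Phi$ on a grid, with the integral replaced by the trapezoidal rule, exhibits the contraction concretely for $y' = y$, $y(0) = 1$, whose fixed point is $e^x$.

```python
import math

def picard_iterate(f, x0, y0, xs, n_iter):
    """Iterate the Picard map (Phi u)(x) = y0 + integral from x0 to x of f(s, u(s)) ds
    on the grid xs (with xs[0] == x0), using the trapezoidal rule for the integral."""
    u = [y0] * len(xs)
    for _ in range(n_iter):
        vals = [f(x, ux) for x, ux in zip(xs, u)]
        new_u, acc = [y0], y0
        for i in range(1, len(xs)):
            acc += 0.5 * (xs[i] - xs[i - 1]) * (vals[i - 1] + vals[i])
            new_u.append(acc)
        u = new_u
    return u

# y' = y, y(0) = 1 on [0, 1]; the unique fixed point is exp.
xs = [i / 200 for i in range(201)]
u = picard_iterate(lambda x, y: y, 0.0, 1.0, xs, 25)
err = max(abs(ui - math.exp(xi)) for xi, ui in zip(xs, u))
```

After 25 iterations the grid fixed point agrees with $e^x$ up to the quadrature error, which is exactly the practical limitation the text complains about: the iteration converges, but slowly and only after substantial numerical work.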
This provides us with a local solution to the differential equation $y' = f(x,y)$ subject to the constraint $y(x_0) = y_0$. It also ensures this solution is unique, thanks to our use of The Banach Fixed Point Theorem; and that it is continuously differentiable. A more in depth look at The Picard-Lindel\"{o}f Theorem can be found in \cite{TheOdDEq}.
The author would like to expand on this theorem. The theorem does not necessarily help us produce the solution $y$--it only informs us that it exists and is unique. Even though Picard and Lindel\"{o}f's theorem is considered constructive, the author feels it isn't productive. The result is restricted to tiny intervals about $x_0$, leaving us unsure of where the iteration of $\Phi$ actually converges--simply that it must converge somewhere. Numerical calculation of $y$ using iterations of $\Phi$ is also unfeasible in practice.
In no different a manner than how the Riemann Integral is \emph{productive}, we should have something similar for $y$. There is a Riemann Sum of $g$ which converges to an object $G$, and that object satisfies $G'(x) = g(x)$. What if there were the same thing for the equation $y' = f(x, y(x))$? A kind of ``Picard-Lindel\"{o}f Integral." We should have some \emph{thing}, and this \emph{thing} converges to $y$ everywhere; and in a useful productive sense.
This \emph{thing} should also behave in a manner similar to our usual notion of an integral. This \emph{thing} should be accessible and intuitive like the Riemann Sum. And above all, this thing should look and \emph{feel} like an integral. The author will argue that Euler had already met these criteria in the 18th century. He simply worded it in a language of infinitesimals.\\
We are going to start with a formal calculus and prove some less than obvious facts about it. The author will then propose a way of actually constructing this formal calculus. In order to do this, the author will be brash and introduce a new notation known as the differential bullet product. This will take some persuasion. He maintains, though, that the substance of this paper is the introduction of a formal calculus, The Compositional Integral, and an argument for a notation which describes said idea.
The methodology of our proposed, new kind of integral closely parallels Euler's Method for approximating First Order Differential Equations. The idea is to make this approximation notion more precise, re-approach it as a modified Riemann Sum, and describe some properties which, as the author would put it, have been overlooked. We are also going to steal Leibniz's notation for the integral, and twist it a bit.
However, and this is a strong however, the integral was originally a formal idea. It took much work to make it anything more than formal. A Riemann Sum, as beautiful and practical as it is, still couldn't accomplish a victory over the formal meaning. The integral is a mysterious thing. The author cannot fully construct The Compositional Integral to the extent mathematicians have constructed modern integration. He simply wishes to discuss the formal object, and give a concrete instance where it works. Therein, the majority of this paper will be formal arguments. He hopes mathematicians more knowledgeable in measure theory and Lebesgue's theory of integration will have something more interesting to say.
\section{So, what is The Compositional Integral?} \label{sec2}
\setcounter{equation}{0}
Let’s borrow Leibniz's notation for the integral. The Compositional Integral can be introduced modestly in a less than avant-garde fashion. The notation may look a little clunky, but the pieces fit together rather tightly.
Let $b \ge a$, supposing $f(s,t)$ is a \emph{nice} function\footnote{Bear with the author, as what we mean by \emph{nice} will have to be filled in as we progress.}, write:
\begin{equation}\label{eq:1A}
Y_{ba}(t) = \int_a^b f(s, t)\, ds \bullet t
\end{equation}
Unpacking what this expression means will be the point of this paper. And if the reader can absorb what this expression means, they can absorb the thesis of this paper. The author's goal is to acclimate the reader slowly to this notation. But the author will simply start with the notation $Y_{ba}(t)$.\\
\begin{definition}[The Compositional Integral]\label{def1}
The Compositional Integral:
\[
y(x) = Y_{xa}(t) = \int_a^x f(s, t)\, ds \bullet t
\]
is the \emph{unique}\footnote{Although one would usually have to show $y$ is unique, by The Picard-Lindel\"{o}f Theorem it certainly is if, for instance, $f$ is globally Lipschitz on its domain. We include uniqueness in the definition for convenience.} function $y$ such that $y'(x) = f(x, y(x))$ and $y(a) = t$.
\end{definition}
This is a bit of a mouthful, and imprecise on domains, but the imprecision of this definition is warranted. This definition is made the way it is to introduce more simply what the author calls the formal semi-group laws; where $a \le b \le c$:
\begin{enumerate}
\item $Y_{aa}(t) = t$\\
\item $Y_{cb}(Y_{ba}(t)) = Y_{ca}(t)$
\end{enumerate}
These laws comprise a modified additivity condition of the usual integral $\int_b^c + \int_a^b = \int_a^c$ and $\int_a^a = 0$. Where now addition is replaced with composition--and we have a semi-group-structure rather than a group-structure (at least for now).
As a brief digression, to gather some intuition; if we were to let $f(s,t)= f(s)$ be constant in $t$, then the differential equation would reduce to $y'(x) = f(x)$ and $y(a) = t$. The Compositional Integral becomes the integral. That is to mean $y(x) = t + \int_a^xf(s)\,ds$, and $Y_{ba}(t) = t + \int_a^b f(s)\,ds$. The composition law ($2$) across $t$ becomes the usual additivity condition of the integral--albeit written a bit strangely. Of which, the constant of integration plays a more prominent role as an argument of a function.
There is not much more than a trick to proving ($1$) and ($2$). We will restrict ourselves to a formal proof that breaks down the mechanism of it purely from Definition \ref{def1}. The following argument really only works for well behaved $f$, which we avoid describing as it may muddy the initial intuition. But the reader may guess that it is something like a \emph{nice} global Lipschitz condition.
\begin{theorem}\label{thm1}
Let $Y_{ba}(t)$ be The Compositional Integral of $f(s,t)$; then the following semi-group laws are satisfied:
\begin{enumerate}
\item $Y_{aa}(t) = t$\\
\item $Y_{cb}(Y_{ba}(t)) = Y_{ca}(t)$\\
\end{enumerate}
for all $a \le b \le c$.
\end{theorem}
\begin{proof}
Using the definition of $Y$ we can play a few tricks to get our result. When $y(x)= Y_{xa}(t)$ then $y(a) = t$ by definition, and so ($1$) is satisfied by first principles. Proving ($2$) is more wordplay than anything. Firstly, $u = Y_{xb}(Y_{ba}(t))$ is the unique function such that $u(b) = Y_{bb}(Y_{ba}(t))= Y_{ba}(t)$ (by ($1$)) and $u'(x) = f(x, u(x))$. Similarly though, the function $w = Y_{xa}(t)$ is the unique function such that $w(b) = Y_{ba}(t)$ and $w'(x) = f(x, w(x))$. Therefore they must be equal, $w = u$. Plugging in $x = c$ gives the result.
\end{proof}
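The composition law can also be sanity-checked numerically. The sketch below is a hypothetical discretization (an Euler sweep, anticipating the Riemann Composition introduced later) applied to $f(s,t) = e^{-st}$, the example studied in Section 5; the function `comp_integral` and all numerical parameters are the author's illustration, not a definitive construction.

```python
import math

# Approximate Y_ba(t) by composing the maps t -> t + f(s, t) ds over a
# fine partition of [a, b] -- equivalently, Euler's method run from a to b.
def comp_integral(f, a, b, t, n=20000):
    ds = (b - a) / n
    y = t
    for i in range(n):
        y = y + f(a + i * ds, y) * ds
    return y

f = lambda s, t: math.exp(-s * t)        # the example of Section 5
a, b, c, t = 0.0, 0.5, 1.0, 1.0
lhs = comp_integral(f, b, c, comp_integral(f, a, b, t))   # Y_cb(Y_ba(t))
rhs = comp_integral(f, a, c, t)                           # Y_ca(t)
assert abs(lhs - rhs) < 1e-3             # agreement up to discretization error
```

The two sides agree to within the $\mathcal{O}(\Delta s)$ error of the discretization, as the theorem predicts.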
The majority of this theorem relied on the uniqueness of a solution to a First Order Differential Equation; where again $f$ is \emph{nice}. This allowed for an identity principle, which was used as the cornerstone of this theorem. It isn't very hard to imagine the cases where $f$ is suitable and this argument works--again, something like a \emph{nice} global Lipschitz condition.\\
Now, this identity alone does not justify considering this object an integral. Luckily, there's more hidden to the proposed notation. Considering the usual integral, there is an iconic ability to substitute variables using Leibniz's differential calculus. If $u = \gamma(s)$, $du = \gamma'(s)ds$ and $u(\alpha) = a$ and $u(\beta) = b$ then,
\[
\int_a^b f(s)\,ds = \int_\alpha^\beta f(u)\, du = \int_\alpha^\beta f(\gamma(s))\gamma'(s)\,ds
\]
This leads us to the next nice fact about The Compositional Integral, and hints more aggressively as to the usefulness of the proposed notation. The same substitution of variables is still perfectly valid.
\[
\int_a^b f(s,t) \,ds\bullet t = \int_{\alpha}^\beta f(u,t) \,du \bullet t = \int_\alpha^\beta f(\gamma(s),t)\gamma'(s)\,ds\bullet t
\]
Remembering the definition of The Compositional Integral, this can be shown using the less than startling identity:
\[
\frac{d}{dx} y(\gamma(x)) = f(\gamma(x),y(\gamma(x)))\gamma'(x)
\]
More thoroughly, the function $w(x) = y(\gamma(x))$ is the \emph{unique} function such that $w'(x) = f(\gamma(x),w(x))\gamma'(x)$ and $w(\alpha) = y(\gamma(\alpha)) = y(a) = t$. Therefore $w(x) = \int_\alpha^x f(\gamma(s),t)\gamma'(s)\,ds\bullet t$. Similarly $w(\beta) = y(b)$. Since $y(b) = Y_{ba}(t)$, we must have:
\[
Y_{ba}(t) = \int_\alpha^\beta f(\gamma(s),t)\gamma'(s)\,ds\bullet t
\]
This also explicitly constructs the inverse for each element of the semi-group. Meaning, The Compositional Integral forms a group under composition. The function $Y_{ab}(t) = Y_{ba}^{-1}(t)$, which allows $Y_{ab}(Y_{ba}) = Y_{aa} = t$. Theorem \ref{thm1} formally suggests this, but it may be unclear what $Y_{ab}$ means when $a \le b$. Using Leibniz's rule of substitution where $u(s) = b+a - s$ with $u(a) = b$ and $u(b) = a$, the meaning of $Y_{ab}$ can be clarified by the identity:
\[
Y_{ab}(t) = \int_b^a f(s,t)\,ds \bullet t = \int_a^b f(u(s),t)\,u'(s)\,ds \bullet t = \int_a^b -f(b+a-s,t)\,ds \bullet t
\]
Therefore:
\[
Y_{ab}(t) = Y^{-1}_{ba}(t) = \int_a^b -f(b+a-s,t)\,ds \bullet t
\]
This leaves us with the conception that not only does $Y_{ba}(t)$ have a semi-group-structure for $a \le b$, it has a group-structure when we remove the restriction $a \le b$; the inverse of $Y_{ba}$ is $Y_{ab}$, which can be described using a substitution of variables.\\
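This inversion can also be witnessed numerically. The sketch below (again a hypothetical Euler-sweep discretization, with the Section 5 example $f(s,t)=e^{-st}$ and illustrative parameters) checks both that $Y_{01}$ undoes $Y_{10}$ and that the substitution formula computes the same inverse:

```python
import math

# Euler-composition approximation of Y_ba(t); with b < a the step ds is
# negative and the sweep runs backwards, giving the inverse Y_ab.
def comp_integral(f, a, b, t, n=20000):
    ds = (b - a) / n
    y = t
    for i in range(n):
        y = y + f(a + i * ds, y) * ds
    return y

f = lambda s, t: math.exp(-s * t)
t0 = 1.0
forward = comp_integral(f, 0.0, 1.0, t0)          # Y_10(t0)
undone = comp_integral(f, 1.0, 0.0, forward)      # Y_01(Y_10(t0))
assert abs(undone - t0) < 1e-3

# The explicit inversion formula, int_a^b -f(b+a-s,t) ds . t, agrees:
via_formula = comp_integral(lambda s, t: -f(1.0 + 0.0 - s, t), 0.0, 1.0, forward)
assert abs(via_formula - t0) < 1e-3
```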
The last facet of The Compositional Integral is perhaps the most important. The Compositional Integral can be constructed using something looking like a Riemann Sum. We will devote the majority of this brief paper justifying this.
The group structure of $Y_{ba}$ can be used to construct $Y_{ba}$. It is beneficial to think accurately about Euler's Method in the following argument. If throwing in one's hat as to who truly deserves priority over the ideas in this paper: the author feels the entirety of this paper could probably be attributed to Euler; it's simply that he said it differently.
Let $P = \{s_i\}_{i=0}^n$ be a partition of $[a, b]$ written in descending order (this can cause a bit of a trip up). That is to say $b = s_0 > s_1 > s_2 > ... > s_n = a$. Let's also write $Y_{\Delta s_i} = Y_{s_is_{i+1}}$ then by the group law $Y_{ca} = Y_{cb}(Y_{ba})$:
\[
Y_{ba} = Y_{bs_1}(Y_{s_1s_2}(Y_{s_2s_3}(...Y_{s_{n-1}a}))) = \prod_i Y_{\Delta s_i}
\]
Where the product is taken to mean composition. As $\Delta s_i = s_i -s_{i+1}$ tends to zero these $Y_{\Delta s_i} \to t$, which follows because $Y_{aa}(t) = t$. We know something stronger though, we know as $s_i,s_{i+1} \to s_{i}^*$ that $\dfrac{Y_{\Delta s_i} - t}{\Delta s_i} \to f(s_i^*, t)$ which is just the differential equation used to define $Y$. This implies each $Y_{\Delta s_i}$ looks like $t + f(s_i^*, t)\Delta s_i$, up to an error of order $\mathcal{O}(\Delta s_i^2)$. To make an educated guess then: shouldn't the $\mathcal{O}(\Delta s_i^2)$ part be negligible as the partition gets finer? It should be safe to write:
\[
Y_{ba} = \lim_{\Delta s_i \to 0} \prod_i t + f(s_i^*, t)\Delta s_i
\]
We may write this in a form that would be familiar to more Classical Analysts. Let $\Delta s_i \to ds$ and write:
\begin{equation}
Y_{ba} = \prod t + f(s, t)\, ds\\
\label{eq:1}
\end{equation}
Although this may seem unusual; this is no more than the combination of Euler's Method and Riemann Sums with their language of partitions. For context, Euler would usually fix $\Delta s_i$ small and approximate $Y_{ba}$ as we let $b$ grow. We are fixing $b$, but letting $\Delta s_i \to ds$ and thinking of this as a Riemann Sum. But then again, Classical Analysts never saw the need for a Riemann Sum, it was just how infinitesimals worked.
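The limit just described can be realized directly in code. The sketch below (a uniform partition, left-endpoint tags, and the closed-form check are the author's illustrative choices) composes the maps $t \mapsto t + f(s_i^*, t)\Delta s_i$ over a fine descending partition; composing right-to-left over a descending partition is the same as stepping forward from $a$ to $b$.

```python
import math

# Riemann Composition: compose t -> t + f(s_i*, t) ds over a fine
# partition of [a, b].  This is Euler's method, reread as a limit.
def riemann_composition(f, a, b, t, n=100000):
    ds = (b - a) / n
    y = t
    for i in range(n):
        y = y + f(a + i * ds, y) * ds
    return y

# Sanity check against a closed form: f(s, t) = t gives y' = y, y(0) = t,
# so Y_x0(t) = t e^x.
approx = riemann_composition(lambda s, t: t, 0.0, 1.0, 2.0)
assert abs(approx - 2.0 * math.e) < 1e-3
```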
\section{But where does the $\bullet$ come from?}\label{sec3}
\setcounter{equation}{0}
If the reader leaves this paper with one thing, it is hopefully an understanding of the differential bullet product $ds \bullet t$. And what it means when combined with the integral. The expression $\int ds \bullet t$ behaves similarly to the expression $\int ds$ (they both satisfy: a group structure, substitution of variables, and a First Order Differential Equation).
But, what does the differential bullet product mean? That's a tough question to answer without sufficient context. The author will boil it down to one thing. We cannot use the notation $\prod$ to represent nested compositions. Notation must represent clearly and precisely. We'll need a notation for iterated compositions, and we'll have to be clear. The author chooses the symbol $\OmSum$.
If $h_j$ is a sequence of functions taking some interval to itself, the expression $\OmSum_{j=0}^n h_j (t)$ can be understood to mean:
\[
\OmSum_{j=0}^n h_j (t) = h_0(h_1(h_2(...h_n(t))))
\]
No different than Euler's notation for products and sums, except we'll need some additional notation in the spirit of Leibniz. When our function $h_j$ depends on another variable which is not being composed across, our notation becomes unclear. For instance, when we write:
\[
\OmSum_{j=0}^n h_j (s, t)
\]
Does this mean we compose across $t$?
\[
h_0(s, h_1(s, ...h_n(s, t)))
\]
Or does this mean we compose across $s$?
\[
h_0(h_1(...h_n(s, t)..., t), t)
\]
This is no different a problem than when an undergraduate writes $\int e^{st}$ and the professor is expected to guess whether the integration is across $s$ or $t$. It becomes unclear whether one means $\int e^{st}\,ds$ or $\int e^{st} \, dt$. To reconcile the situation we’re going to use a bullet. Therein, the above expressions can be written more clearly:
\[
\OmSum_{j=0}^n h_j (s, t) \bullet t = h_0(s, h_1(s, ...h_n(s, t)))
\]
\[
\OmSum_{j=0}^n h_j (s, t) \bullet s = h_0(h_1(...h_n(s, t)..., t), t)
\]
So when the author uses a bullet ($\bullet$) followed by a variable, it is to mean that the operation is bound to the variable. Specifically compositions are across this variable.
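The $\OmSum$ operator and its binding convention can be mimicked in code. In the sketch below (the function `om_sum` and the sample maps $h_j(s,t) = t + js$ are the author's hypothetical illustrations), composition across $t$ means $s$ is held fixed while the $h_j$ are nested:

```python
from functools import reduce

# Iterated composition: given maps [h_0, ..., h_n], apply h_n first and
# h_0 last, producing h_0(h_1(...h_n(t))).
def om_sum(maps, t):
    return reduce(lambda acc, h: h(acc), reversed(maps), t)

# Composing across t with s held fixed: h_j(s, t) = t + j*s.
hs = [lambda s, t, j=j: t + j * s for j in range(3)]
s = 2.0
composed = om_sum([lambda t, h=h: h(s, t) for h in hs], 1.0)
# h_0(s, h_1(s, h_2(s, 1))) = ((1 + 2s) + s) + 0 = 1 + 3s = 7
assert composed == 7.0
```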
Returning to our discussion from above, we arrive at a more playful notation for The Compositional Integral. Let $P = \{s_i\}_{i=0}^n$ be a partition of $[a, b]$ (written in descending order) with $s_{i+1} \le s_i^* \le s_i$, then we can write, and aim to justify:
\[
\prod_i t + f(s_i^*,t)\Delta s_i = \OmSum_{i=0}^{n-1} t + f(s_i^*, t)\Delta s_i \bullet t \approx Y_{ba}(t)
\]
This is essentially the statement of Euler's Method as it's stated today, and it's still used by numerical approximation algorithms. Traditionally one would write this calculation a bit more sequentially:
\begin{eqnarray*}
t_0 &=& t\\
t_1 &=& t_0 + f(s_{n-1}^*,t_0)\Delta s_{n-1}\\
t_2 &=& t_1 + f(s_{n-2}^*,t_1)\Delta s_{n-2}\\
&\vdots&\\
t_n &=& t_{n-1} + f(s_0^*,t_{n-1})\Delta s_0 \approx Y_{ba}(t)\\
\end{eqnarray*}
The benefit of this notation is that $t_{n+1} \approx Y_{(b+\Delta)a}$ and $t_{n+2} \approx Y_{(b+2\Delta)a}$, and we can think of this as a sequence, or a process, which continues to approximate. The main proposition of our altered form is that this becomes an equality as $\Delta s_i \to 0$ (and $n\to\infty$), while $b$ is fixed and isn't allowed to vary. This isn't much of a leap of faith considering the vast amount of numerical evidence which supports this claim, and which lies at the foundation of numerical approximation algorithms.
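The sequential table above can be transcribed directly. The sketch below (an illustrative transcription, using the Section 5 example $y' = e^{-xy}$; function names and parameters are the author's) computes $t_0, t_1, \ldots, t_n$ and confirms that refining the partition changes the answer only slightly:

```python
import math

# The sequential computation t_0, t_1, ..., t_n from the table above.
# Note the descending partition: the first update uses the tag s*_{n-1}
# nearest a, so the sweep runs from a up toward b.
def euler_table(f, a, b, t, n):
    ts = [t]
    ds = (b - a) / n
    for k in range(n):
        s_star = a + k * ds
        ts.append(ts[-1] + f(s_star, ts[-1]) * ds)
    return ts                            # ts[-1] approximates Y_ba(t)

f = lambda s, t: math.exp(-s * t)
coarse = euler_table(f, 0.0, 1.0, 1.0, 1000)[-1]
fine = euler_table(f, 0.0, 1.0, 1.0, 100000)[-1]
assert abs(coarse - fine) < 1e-2         # the approximations stabilize
```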
At this point, the notation can be rephrased; the notation of Section 2 can be better motivated. Let $\Delta s_i \to ds$, the summatory part $\OmSum t + $ becomes an $\int$. This is a continuous, infinitesimal, composition; similar to a continuous sum. It becomes a sweep of $f(s,t)$ for $s \in [a,b]$ across $t$; which we write as $ds \bullet t$. Again, this is something like a Riemann Sum... but it's an infinitely nested composition. The bounds on the integral can be made explicit, and it leaves us with the expression:
\[
\lim_{\Delta s_i \to ds}\OmSum_{i=0}^{n-1} t + f(s_i^*, t)\Delta s_i \bullet t = \int_a^b f(s, t)\, ds \bullet t
\]
Which is what the author means by the differential bullet product. We specifically call it a product as our group law $Y_{cb}(Y_{ba}(t)) = Y_{ca}$ can be written as the product of integrals:
\[
\int_{a}^c f(s,t)\,ds\bullet t = \int_{b}^c f(s,t)\,ds\bullet \int_{a}^b f(s,t)\,ds\bullet t
\]
The bullet is composition. We choose a bullet for this product of integrals, rather than the traditional symbol $\circ$, as to specify the composition is across $t$; and to emphasize the group-structure.\\
It is helpful to note when $f(s, t) = f(s)$ is constant in $t$, then $t + f(s)$ is a translation and the compositions above revert to addition. Illustrated by the formal manipulations,
\begin{eqnarray*}
\int_a^b f(s) ds \bullet t &=& \lim_{\Delta s_i\to0}\OmSum_{i=0}^{n-1} t + f(s_i^*)\Delta s_i \bullet t\\
&=& \lim_{\Delta s_i \to 0} \big{(}t + \sum_{i=0}^{n-1} f(s_i^*)\Delta s_i\big{)}\\
&=& t + \int_a^b f(s)\, ds\\
\end{eqnarray*}
We are reduced to the usual Riemann Sum definition of an integral. Furthermore we can explicitly see $t$ as the constant of integration. The additivity of integrals is again the composition law
\[
Y_{cb}(Y_{ba}(t)) = t + \int_b^c f(s)\,ds + \int_a^b f(s)\,ds = t + \int_a^c f(s)\,ds = Y_{ca}(t)
\]
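This reduction is easy to watch happen numerically. In the sketch below (the choice $g = \cos$, the discretization, and all parameters are the author's illustration), the Euler composition of a $t$-independent integrand collapses to a left Riemann sum plus the constant of integration $t$:

```python
import math

# When f(s, t) = g(s) does not depend on t, each step is a translation,
# and the composition collapses to t + (left Riemann sum of g).
def comp_integral(f, a, b, t, n=100000):
    ds = (b - a) / n
    y = t
    for i in range(n):
        y = y + f(a + i * ds, y) * ds
    return y

g = math.cos
val = comp_integral(lambda s, t: g(s), 0.0, 1.0, 5.0)
# closed form: t + int_0^1 cos(s) ds = 5 + sin(1)
assert abs(val - (5.0 + math.sin(1.0))) < 1e-3
```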
\section{Approaching from the other side of the equation}\label{sec4}
\setcounter{equation}{0}
Continuing with the same idea, we are going to approach from the other side of the equation. We will start with our (or Euler's?) proposed definition of $Y$ and argue that it is $Y$. To separate the objects as two different things we will call the proposed definition $\widetilde{Y}$. The aim is to formally argue and motivate $Y = \widetilde{Y}$, but we will not attempt to prove it yet. The benefit of this side, is to construct $\widetilde{Y}$ first, and provide a constructive/productive form of Picard and Lindel\"{o}f's Theorem.
Recalling the proposed definition: if $P = \{s_i\}_{i=0}^n$ is a partition of $[a, b]$ in descending order, and $s_{i+1} \le s_i^* \le s_i$:
\[
\widetilde{Y}_{ba}(t) = \lim_{\Delta s_i\to 0} \OmSum_{i=0}^{n-1} t + f(s_i^*, t) \Delta s_i \bullet t
\]
We are going to take a leap of faith momentarily and assume this expression converges uniformly. Some things about $\widetilde{Y}$ are simple to prove offhand, but $\widetilde{Y}$ may seem so foreign to navigate that the reader may not know where to look. So to start slow, the semi-group laws from before hold. Firstly, $\widetilde{Y}_{aa}(t) = t$ as this becomes the null composition, which is the identity value $\text{Id} = t$. It is helpful to think about how the null sum is $0$, and the null product is $1$. More importantly, $\widetilde{Y}_{cb}(\widetilde{Y}_{ba}) = \widetilde{Y}_{ca}$, which is worthwhile for the author to write out for the reader.\\
\emph{A proof-sketch that $\widetilde{Y}_{cb}(\widetilde{Y}_{ba}) = \widetilde{Y}_{ca}$:} Let $P = \{s_i\}_{i=0}^n$ be a partition of $[a, b]$ in descending order, with $s_{i+1} \le s_i^* \le s_i$; and let $R = \{r_j\}_{j=0}^m$ be a partition of $[b, c]$ in descending order, with $r_{j+1} \le r_j^* \le r_j$. Then,
\begin{eqnarray*}
\widetilde{Y}_{cb}(\widetilde{Y}_{ba}) &=& \lim_{\Delta r_j\to 0} \OmSum_{j=0}^{m-1} t + f(r_j^*, t) \Delta r_j \bullet \lim_{\Delta s_i\to 0}\OmSum_{i=0}^{n-1} t + f(s_i^*, t)\Delta s_i \bullet t\\
&=& \lim_{\Delta r_j\to 0} \lim_{\Delta s_i\to 0} \OmSum_{j=0}^{m-1} t + f(r_j^*, t) \Delta r_j \bullet\OmSum_{i=0}^{n-1} t + f(s_i^*, t)\Delta s_i \bullet t\\
&=& \lim_{\Delta q_k\to 0}\OmSum_{k=0}^{n+m-1} t + f(q_k^*, t)\Delta q_k \bullet t\\
&=& \widetilde{Y}_{ca}\\
\end{eqnarray*}
Where $Q = P \cup R= \{q_k\}_{k=0}^{n+m}$ is a partition of $[a, c]$ written in descending order, consisting of $q_j = r_j$ for $0 \le j \le m$ and $q_{i+m} = s_i$ for $0 \le i \le n$ and similarly for $q_k^*$.\\
This is purely a formal manipulation, but the reader may care to see how a rigorous proof would evolve if these objects converge uniformly in $t$, or in some \emph{nice} way.
Reparametrizing the composition from $[a,b]$ to $[\alpha,\beta]$ with a continuously differentiable function $\gamma$, Leibniz's substitution of variables appears. Taking $\gamma(p^*_i) = s_i^*$ and $\gamma_i = \gamma(p_i) = s_i$, where $\beta = p_0 > p_1 > ... > p_{n-1} > p_n = \alpha$ and $p_{i+1} \le p_i^* \le p_i$, then by an approximate mean value theorem:
\[
f(s_i^*,t) (s_i - s_{i+1}) = f(\gamma(p_i^*),t) (\gamma_i - \gamma_{i+1}) \approx f(\gamma(p_i^*),t) \gamma'(p_i^*) (p_i - p_{i+1})
\]
So that,
\[
\widetilde{Y}_{ba}(t) = \lim_{\Delta p_i \to 0} \OmSum_{i=0}^{n-1} t + f(\gamma(p_i^*),t) \gamma'(p_i^*) \Delta p_i \bullet t
\]
So, our proposed definition also admits substitution of variables. It's also nice to see that the composition behaves little differently than how Riemann Sums behave, in this instance at least.\footnote{This hints aggressively to the idea of adding measure theory to the discussion.}
To extend our group-structure, if we invert $\widetilde{Y}_{ba}$ to $\widetilde{Y}_{ba}^{-1}$, then componentwise $t + f(s_i^*,t)\Delta s_i$ gets mapped to $\approx t - f(s_i^*,t)\Delta s_i$. Since composition is non-commutative, the partition is now in ascending order, and our inverse becomes precisely $\OmSum_{i=0}^{n-1} t - f(b+a-s_i^*,t)\Delta s_i \bullet t$. This agrees with our earlier inversion formula.\\
A more difficult idea to intuit is that $\widetilde{Y}_{xa}$, using this definition, satisfies the differential equation that $Y$ satisfies: $\frac{d}{dx} \widetilde{Y}_{xa} = f(x, \widetilde{Y}_{xa})$. And that this expression actually satisfies a First Order Differential Equation and we can come full circle. The author will only use intuition to morally justify this statement, for the moment. This logical sequence is a formal use of infinitesimals. Starting with the following identity:
\[
\widetilde{Y}_{(x+dx)x}(t) = t + f(x, t)dx
\]
Which can be sussed out as ``composing an infinitesimal amount,'' or ``composing over the interval $[x,x+dx]$.'' If one can accept this malignant use of infinitesimals\footnote{As the author would argue Classical Analysts took it as fact, though they definitely wrote it differently.}, it can be expanded using our semi-group laws $\widetilde{Y}_{cb}(\widetilde{Y}_{ba}(t)) = \widetilde{Y}_{ca}(t)$, so that:
\begin{eqnarray*}
\widetilde{Y}_{(x+dx)a} &=& \widetilde{Y}_{(x+dx)x}(\widetilde{Y}_{xa}) = \widetilde{Y}_{xa} + f(x, \widetilde{Y}_{xa}) dx\\
\frac{d \widetilde{Y}_{xa}}{dx} &=& \frac{\widetilde{Y}_{(x+dx)a} - \widetilde{Y}_{xa}}{dx}\\
&=&f(x,\widetilde{Y}_{xa})\\
\end{eqnarray*}
This kind of tells us this idea should work. If the objects converge in the best manner possible, all of this seems like a Leibnizian argument using infinitesimals.\\
The above arguments work out formally as we've written, but proving this generally and rigorously is difficult. For that reason, we will work through a specific case. It can be illuminating to use an example, and it may clear away some of the blockages which impede intuition on the matter. This will also give a glimpse of the difficulty of the problem in a rigorous setting.
\section{The nitty-gritty}\label{sec5}
\setcounter{equation}{0}
Now that we're caught up with the sweeping motions, we'll work through a case in which we can do everything we just did above, but with a bit more rigor. To do so, we'll work with the function $f(x, t) = e^{-xt}$ for $x \in [0, 1]$ and $t \in \mathbb{R}^+$, and we'll try to construct The Compositional Integral of $f$. Although we've just deliberated on The Compositional Integral as a formal thing, the author has yet to construct it, or even prove its existence. We aim to prove the following theorem:
\begin{theorem}\label{thm1B}
The following two claims hold:
\begin{enumerate}
\item For $0 \le a \le b \le 1$ and $t \in \mathbb{R}^+$, there is a unique Compositional Integral of $e^{-xt}$, denoted
\[
Y_{ba}(t) = \int_a^b e^{-st}\,ds \bullet t
\]
Where $Y_{ba}(t): \mathbb{R}^+ \to \mathbb{R}^+$.
\item Let $P = \{ s_i\}_{i=0}^{n}$ be a partition of $[a, b]$ written in descending order, with $s_{i+1} \le s_i^* \le s_i$; as $\Delta s_i = s_i - s_{i+1} \to 0$ the expression
\[
\OmSum_{i=0}^{n-1}t+e^{-s_i^*t}\Delta s_i \bullet t
\]
converges to The Compositional Integral $Y_{ba}(t)$ of $e^{-xt}$.
\end{enumerate}
\end{theorem}
In our proof, it will be shown in one motion that $\widetilde{Y}=Y$. It will then be evident the function $\widetilde{Y}_{xa}(t)$ is the unique function $y(x)$ such that $y(a) = t$ and $y'(x) = e^{-xy(x)}$.
This provides us with the quick and justifiable statement that The Compositional Integral is a meaningful thing and looks something like a Riemann Sum, if only a Riemann Sum involved compositions... A Riemann composition, if you will. For convenience, the author will call it \emph{The Riemann Composition} of The Compositional Integral.
The proof we will provide will require some hand waving, as to write out all the steps produces a mess of equations. For this reason we will try to be short but convincing nonetheless. We will try to argue classically, but will admit much more rigor than a Classical Analyst would.
\begin{proof}
To begin, we'll prove ($1$). For all $t\in \mathbb{R}^+$ and $0 \le a \le b \le 1$ there is a function $Y_{ba}(t):\mathbb{R}^+\to\mathbb{R}^+$--The Compositional Integral of $f(x,t) = e^{-xt}$.
To show this is an exercise in soft-analysis. For all $t_0,t_1 \in\mathbb{R}^+$ and $x \in [0,1]$ we have $|e^{-xt_0} - e^{-xt_1}| \le |t_0 - t_1|$. Therefore by The Picard-Lindel\"{o}f Theorem, for every $x_0 \in [0,1]$ there is a neighborhood $|x-x_0| < \delta$ ($\delta$ can be chosen to work for all $x_0$), where for each $ t \in \mathbb{R}^+$, we have a function $y_{t,x_0}$ in which $y_{t,x_0}'(x)= e^{-xy_{t,x_0}(x)}$ and $y_{t,x_0}(x_0) = t$. These neighborhoods $|x-x_0|<\delta$, and hence functions $y_{t,x_0}$, can be glued together. We can extend $y_{t,x_0}(x)$ from $|x-x_0| < \delta$ to $|x-x_0| < 3\delta/2$ by noticing
\[
y_{t,x_0}(x) = y_{y_{t,x_0}(x_0\pm\delta/2),\,x_0\pm\delta/2}(x)
\]
Continuing this process, $y_{t,x_0}$ can be extended from $|x-x_0|<\delta$ to $x \in [0,1]$ using a monodromy principle. The presiding identity principle is not that $y$ is analytic, though. Instead $y_{t,x_0}$ satisfies the same First Order Differential Equation for each $x_0$.
To elaborate: consider two intervals $I$ and $J$ where $I \cap J \neq \emptyset$. Let $u: I \to \mathbb{R}$ and $w: J \to \mathbb{R}$. Assume $u\Big{|}_{I \cap J} = w\Big{|}_{I \cap J}$, and they satisfy the same First Order Differential Equation, $y' = e^{-xy(x)}$. By the uniqueness property of First Order Differential Equations, $u = w$ on $I \cup J$. This monodromy principle forms $Y_{ba}(t) = y_{t,a}(b)$ for all $t\in \mathbb{R}^+$ and $0 \le a \le b \le 1$.
Lastly, $Y_{ba}(t) : \mathbb{R}^+ \to \mathbb{R}^+$ because $Y_{aa}(t) = t \in \mathbb{R}^+$ and $Y_{xa}(t)$ is increasing in $x$, because its derivative is greater than $0$. Theorem \ref{thm1} can now be thought of rigorously, and shows that $Y$ satisfies a group structure.\\
For our proof of ($2$)--that The Riemann Composition converges to $Y_{ba}(t)$--we begin with Taylor's theorem:
\[
Y_{ss'}(t) = t + f(s^*,t)(s-s') + R_\Delta
\]
Where here $\frac{R_\Delta}{\Delta} \to 0$ as $\Delta \to 0$, and $\Delta$ is an upper bound on $(s - s')$ where $0 \le s' \le s^* \le s \le 1$. Now $R_\Delta$ depends on $s^*, s', s$ and $t$, but we are going to throw its dependence away, as it can clutter the proof. Since we will be letting $\Delta\to 0$ its dependence on $t$ (and $s^*, s', s$) becomes irrelevant (especially because the convergence is uniform for $t \in \mathbb{R}^+$, and $0 \le s' \le s^* \le s \le 1$). Let $P= \{s_i\}_{i=0}^{n}$ be a partition of $[a,b]$ in descending order, and let $s_{i+1} \le s_i^* \le s_i$. Let $\max_{i=0,1,...,n-1} \Delta s_i=\Delta$. The following identities should illustrate the method of the proof:
\begin{eqnarray*}
Y_{ba}(t) &=& Y_{bs_1}(Y_{s_1s_2}(...Y_{s_{n-1}a}(t)))\\
&=&\OmSum_{i=0}^{n-1} Y_{s_is_{i+1}}(t)\bullet t\\
&=&\OmSum_{i=0}^{n-1} t+ f(s_i^*,t)\Delta s_i + R^{i}_\Delta \bullet t\\
&=& \OmSum_{i=0}^{n-1} t+ f(s_i^*,t)\Delta s_i\bullet t + \sum_{i=0}^{n-1} Q^i_\Delta\\
\end{eqnarray*}
Where here each $\frac{Q^i_\Delta}{\Delta} \to 0$ as $\Delta\to 0$ for all $0 \le i \le n-1$. Ignoring $R_\Delta^i$'s dependence on $t$, the justification of this identity follows from an inductive use of the rule $g(t+\mathcal{O}(\Delta^2)) = g(t) + \mathcal{O}(\Delta^2)$--where we must be sure to count how many error terms we are adding together. This crude formalism is justified, again due to the uniform convergence of $R_\Delta^i \to 0$.
Now since $\Delta = \mathcal{O}(1/n)$ we must have $Q^i_\Delta = \mathcal{O}(1/n^2)$. We are taking the sum $\sum_{i=0}^{n-1} Q^i_\Delta$, so we can see that
\[
\sum_{i=0}^{n-1} Q^i_\Delta = \sum_{i=0}^{n-1} \mathcal{O}(1/n^2) = n \mathcal{O}(1/n^2) = \mathcal{O}(1/n)
\]
This allows us to write that:
\[
Y_{ba}(t) - \OmSum_{i=0}^{n-1} t + f(s_i^*,t)\Delta s_i \bullet t = \mathcal{O}(1/n) = \mathcal{O}(\Delta)
\]
And so in letting $\Delta\to 0$ (and $n\to\infty$), the LHS tends to zero and our Riemann Composition converges to The Compositional Integral of $e^{-xt}$.
\end{proof}
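The convergence claimed in ($2$) can be observed directly. The sketch below (an illustration, not part of the proof: RK4 serves merely as a high-accuracy stand-in for the true solution, and all parameters are the author's choices) computes the Riemann Composition of $e^{-xt}$ at several partition sizes and watches the error shrink:

```python
import math

f = lambda x, y: math.exp(-x * y)

# The Riemann Composition of e^{-xt} over [0, 1]: an Euler sweep.
def riemann_composition(a, b, t, n):
    ds = (b - a) / n
    y = t
    for i in range(n):
        y = y + f(a + i * ds, y) * ds
    return y

# High-accuracy reference for y' = e^{-xy}, y(0) = t, via classical RK4.
def rk4(a, b, t, n=20000):
    h = (b - a) / n
    y = t
    for i in range(n):
        x = a + i * h
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

ref = rk4(0.0, 1.0, 1.0)
errs = [abs(riemann_composition(0.0, 1.0, 1.0, n) - ref) for n in (100, 1000, 10000)]
assert errs[2] < errs[0]          # the error shrinks as the partition refines
assert errs[2] < 1e-3             # consistent with the O(Delta) bound above
```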
To summarize what was especially needed from $f$ in this argument, in our exact choice of $f(x,t) = e^{-xt}$; the mapping $f(x,t) : [0,1] \times \mathbb{R}^+ \to \mathbb{R}^+$ and therefore the nested compositions are meaningful. Secondly, it was required that the function $f(x,t) = e^{-xt}$ is globally Lipschitz continuous in $t$ on $\mathbb{R}^+$ for all $x \in [0,1]$ as this allowed for the simple argument proving the function $Y_{ba}(t)$ even exists and is unique. The global Lipschitz condition also ensured the uniform convergence of the error term $R^i_\Delta \to 0$, which allowed for the error term to be pulled through the composition so easily.
Supposing we chose another function $f$ where $t$ was restricted to some interval $[c,d]$, this would cause innumerable problems. We would need the compositions of $t+f(s_i^*,t)\Delta s_i$ to be meaningful things, but this is difficult as $[c,d]$ is bounded and the composing functions may grow to a value greater than $d$, or less than $c$, and our composition may no longer make sense. Especially because of the dangling translation by $t$. We would need $t+f(s_i^*,t)\Delta s_i:[c,d] \to [c,d]$ for all $0 \le i \le n-1$ and as $\Delta s_i \to 0$, which is unnatural and quite restrictive. Avoiding this would require more clever topological arguments; they would surely not fit in the confines of this notice.
Therein, our choice of $e^{-xt}$ was very intentional, and a very special function for this argument to work. Constructing The Compositional Integral for arbitrary functions proves to be a much more difficult task, especially if the only condition we demand is that $f$ is Lipschitz. But if one takes Euler's word for it, it isn't much of a leap.
The author imagines it very plausible that The Riemann Composition converges to The Compositional Integral if all that is asked is that $f$ is Lipschitz. A proof of this would simply take longer than the space we have in this paper. And probably more expertise than the author has.
\section{In Conclusion}\label{sec6}
\setcounter{equation}{0}
The Compositional Integral can be made into a meaningful thing. It is a stark redefinition of the Riemann Integral, and provides a productive form of The Picard-Lindel\"{o}f Theorem, in which we have some \emph{thing} and this \emph{thing} converges to the solution of a First Order Differential Equation. It also looks like the integral in more ways than one. The author has remained as curt as possible, but hopes to excite a curiosity in the reader and leave the subject open ended. What else can be done with this strange new integral? Can we add measure theory by looking at $\mu (\Delta s_i)$ rather than $\Delta s_i$ for some measure $\mu$? Can we add contour integrals by parameterizing contours $C$ in the complex plane using some differentiable arc $\gamma$? How do we take limits at infinity? Are there dominated or monotone convergence theorems? The author can only imagine.\\
And as to what we've really done in this paper, it may be fun to hint at expansions of common functions using these methods. We can express $e^x$ in a somewhat new way, or at least provide a new way of justifying the expansion. For $x,t \in \mathbb{R}^+$:
\[
t e^x= \int_{0}^x t\,ds\bullet t
\]
because $y = te^x$ satisfies $y(0) = t$ and $y'(x) = y(x)$. Interestingly, the group structure of The Compositional Integral now becomes the multiplicative property of $e^x$. In this special case, The Riemann Composition reduces to an identity exactly of the form $\lim_{n\to\infty}(1+\frac{x}{n})^n = e^x$--the author thinks it's one of the many ways Euler probably derived the expression. Namely, if $P=\{s_i\}_{i=0}^n$ is a partition of $[0,x]$, then:
\begin{eqnarray*}
te^x &=& \lim_{\Delta s_i \to 0} \OmSum_{i=0}^{n-1} t + t\Delta s_i \bullet t\\
&=& \lim_{\Delta s_i \to 0} \OmSum_{i=0}^{n-1} t(1 + \Delta s_i) \bullet t\\
&=& \lim_{\Delta s_i \to 0} t \prod_{i=0}^{n-1}(1+\Delta s_i)\\
\end{eqnarray*}
Where here $\Delta s_i$ looks like $\frac{x}{n}$; so, $\prod_{i=0}^{n-1}(1+\Delta s_i)$ looks like $(1+\frac{x}{n})^n$. Using the same reasoning, we can generalize. The following identities written as though they are Riemann Sums, are derived in the same manner and are interesting--but are not unknown:
\begin{eqnarray*}
e^{x^2} &=& \lim_{\Delta s_i \to 0} \prod_{i=0}^{n-1} (1 + 2 s_i^*\Delta s_i) = \lim_{n\to\infty} \prod_{i=0}^{n-1} \Big{(}1 + 2 i \Big{(}\frac{x}{n}\Big{)}^2\Big{)}\\
e^{x^3} &=& \lim_{\Delta s_i \to 0} \prod_{i=0}^{n-1} (1 + 3 (s_i^*)^2\Delta s_i) = \lim_{n\to\infty} \prod_{i=0}^{n-1} \Big{(}1 + 3 i^2 \Big{(}\frac{x}{n}\Big{)}^3\Big{)}\\
&\vdots&\\
e^{x^k} &=& \lim_{\Delta s_i \to 0} \prod_{i=0}^{n-1} (1 + k (s_i^*)^{k-1}\Delta s_i) = \lim_{n\to\infty} \prod_{i=0}^{n-1} \Big{(}1 + k i^{k-1} \Big{(}\frac{x}{n}\Big{)}^{k}\Big{)}\\
\end{eqnarray*}
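These limits are easy to probe numerically. The sketch below (plain Python, an illustration rather than part of the text) evaluates the finite products with $s_i^*=ix/n$ and $\Delta s_i=x/n$, as in the displays above, and compares them with $e^{x^k}$ at $x=1$.

```python
import math

def product_limit(x, k, n):
    """Finite product prod_{i=0}^{n-1} (1 + k*(s_i)^(k-1) * ds) with
    left endpoints s_i = i*x/n and ds = x/n; approximates e^(x^k)."""
    ds = x / n
    prod = 1.0
    for i in range(n):
        s = i * ds                      # left endpoint s_i^*
        prod *= 1.0 + k * s ** (k - 1) * ds
    return prod

for k in (1, 2, 3):
    approx = product_limit(1.0, k, 200_000)
    print(k, approx, math.e)            # each product approaches e = e^{1^k}
```

The error of the left-endpoint product is of order $1/n$, so for $n=200{,}000$ the agreement is to several decimal places.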
The following derivation is left to the reader:
\[
te^{\int_0^x p(s)\,ds} = \int_0^x p(s)t\,ds\bullet t = \lim_{\Delta s_i \to 0} t\prod_{i=0}^{n-1} (1 + p(s_i^*)\Delta s_i)
\]
Therefore The Compositional Integral of $f$ reduces to Volterra's product integral of $p$ when $f(s,t) = p(s)t$ \cite{Volterra}. If the reader remains unconvinced, these identities can be proven by taking logarithms and using the estimate $\log(1+x) \sim x$, which is the driving point of Volterra's construction.
\eject
\end{document}
\begin{document}
\title{Projective limits of Poletsky--Stessin Hardy spaces}
\author{Evgeny A. Poletsky}
\begin{abstract} In this paper we show that on a strongly pseudoconvex domain $D$ the projective limit of all Poletsky--Stessin Hardy spaces $H^p_u(D)$, introduced in \cite{PS}, is isomorphic to the space $H^\infty(D)$ of bounded holomorphic functions on $D$ endowed with a special topology.
\par To prove this we show that Carath\'eodory balls lie in approach regions, establish a sharp inequality for the Monge--Amp\'ere mass of the envelope of plurisubharmonic exhaustion functions and use these facts to demonstrate that the intersection of all Poletsky--Stessin Hardy spaces $H^p_u(D)$ is $H^\infty(D)$.
\end{abstract}
\thanks{The author was partially supported by a grant from the Simons Foundation.}
\keywords{Hardy spaces, pluripotential theory}
\subjclass[2000]{ Primary: 32A35; secondary: 32A70, 32U10}
\address{Department of Mathematics, Syracuse University, \newline
215 Carnegie Hall, Syracuse, NY 13244} \email{[email protected]}
\maketitle
\section{Introduction}
\par In \cite{PS} M. Stessin and the author introduced, on a general hyperconvex domain $D$, the spaces of holomorphic functions $H^p_u(D)$ as analogs of the classical Hardy spaces on the unit disk. These spaces are parameterized by plurisubharmonic exhaustion functions $u$ of $D$. When $D$ is strictly pseudoconvex they all are subsets of the classical Hardy spaces $H^p(D)$ studied, for example, in \cite{S}, and coincide with $H^p(D)$ when $u$ is a pluricomplex Green function.
\par Recently, M. Alan and N. Gogus in \cite{AG}, S. Sahin in \cite{Sa}, K. R. Shrestha in \cite{Sh1} and the latter with the author in \cite{PSh} showed that if $D$ is the unit disk $\mathbb D$ these spaces form a subclass of the weighted Hardy spaces studied, for example, in \cite{M} and \cite{BPST}. However, this subclass has special properties and, moreover, has no analogs in several variables. That is why we kept for it the name of Poletsky--Stessin Hardy spaces, which is already used in these papers.
\par The parametrization of these spaces by plurisubharmonic exhaustion functions transforms this class into a projective system. In this paper we show that on a strongly pseudoconvex domain $D$ the projective limit of this system can be identified with the space $H^\infty(D)$ of bounded holomorphic functions on $D$ endowed with the projective topology. To prove this we construct for any unbounded holomorphic function $f$ a plurisubharmonic exhaustion function $u$ such that $f\not\in H^p_u(D)$. The construction is based on sharp estimates of the total Monge--Amp\'ere mass of the plurisubharmonic envelope of exhaustion functions (see Section \ref{S:mam}) and a placement of Carath\'eodory balls into Stein's approach regions in Section \ref{S:argb}.
\par We are grateful to M. Alan, N. Gogus, S. Sahin and K. R. Shrestha for stimulating discussions.
\section{Approach regions and balls}\label{S:argb}
\par Let $D$ be a bounded domain in $\mathbb C^n$ with $C^2$ boundary. For $z_0\in\partial D$ we denote by $\nu_{z_0}$ the unit outward normal to $\partial D$ at $z_0$. Following E. Stein in \cite{S} for $\alpha>1$ we define the approach region ${\mathcal A}_D^\alpha(z_0)$ at $z_0$ as \[{\mathcal A}_D^\alpha(z_0)=\{z\in D:\, |(z-z_0)\cdot\nu_{z_0}|<\alpha\delta_D(z), |z-z_0|^2<\alpha\delta_D(z)\},\]
where $\delta_D(z)$ is the minimum of the distances from $z$ to $\partial D$ or to the tangent plane to $\partial D$ at $z_0$.
\par Recall that the Carath\'eodory function $c(z,w)$ on $D$ is defined as the supremum of $|f(z)|$ over all holomorphic functions $f$ on $D$ such that $f(w)=0$ and $|f|\le1$ on $D$. We define Carath\'eodory balls centered at $w$ and of radius $r<1$ as the sets $C_D(w,r)=\{z\in D:\,c(z,w)\le r\}$.
\par We will need the following result (see \cite[Theorem 2]{G}).
\begin{Theorem}\label{T:Gt} Let $D$ be a strongly pseudoconvex domain in $\mathbb C^n$ with $C^2$ boundary and $z_0\in\partial D$. Let $p$ be a peak function on $D$ at $z_0$, i.e., $p$ is continuous on $\overline D$, holomorphic on $D$, $p(z_0)=1$ and $|p|<1$ elsewhere on $\overline D$. Let $0<a<b<1$ and let $S(a)=\{z\in D:\,|p(z)|>a\}$. Choose any $\eta>0$. Then there exists a positive constant $L=L(D,a,b,\eta)\ge1$ such that the following holds: given $f\in H^\infty(S(a))$, there exists $\hat f\in H^\infty(D)$ such that $\|\hat f\|_{H^\infty(D)}\le L\|f\|_{H^\infty(S(a))}$ and $\|f-\hat f\|_{H^\infty(S(b))}\le\eta\|f\|_{H^\infty(S(a))}$.
\end{Theorem}
\begin{Lemma}\label{L:balls} Let $D$ be a strongly pseudoconvex domain in $\mathbb C^n$ with $C^2$ boundary and $z_0\in\partial D$. For every $0< r<1$ there is $\alpha>0$ with the following property: for every neighborhood $U$ of $z_0$ there is $z\in D\cap U$ such that the Carath\'eodory ball $C_D(z,r)$ lies in the approach region ${\mathcal A}_D^\alpha(z_0)$.
\end{Lemma}
\begin{proof} We will prove this lemma in steps.
\par {\bf Step 1:} {\it The lemma holds when $D$ is the unit ball $B$ centered at the origin and $z_0=(1,0,\dots,0)$. One can take as $z$ any point $z=tz_0$, $0<t<1$, and $\alpha=20(1-r)^{-1}$.}
\par Since $B$ has a transitive group of biholomorphisms, $C_B(v,r)=F(C_B(0,r))$, where $F$ is a biholomorphism of $B$ moving $0$ into $v$. Note that $C_B(0,r)$ is the ball of radius $r$ centered at the origin.
\par We let $v=(t,0,\dots,0)$, where $0<t<1$. If $z=(z_1,\dots,z_n)\in\mathbb C^n$ then we set $z'=(z_2,\dots,z_n)$. The biholomorphism $(w_1,w')=F(z_1,z')$ moving 0 to $v$ is given by the formulas:
\[w_1=\frac{t+z_1}{1+tz_1}\text{ and }w'=(1-t^2)^{1/2}\frac{z'}{1+tz_1}.\]
Since for the ball the distance from a point in the ball to the boundary never exceeds the distance to the tangent plane, $\delta_B(w_1,w')=1-(|w_1|^2+|w'|^2)^{1/2}$.
If $(z_1,z')\in C_B(0,r)$ then
\[\delta_B(w_1,w')\ge \frac12(1-|w_1|^2-|w'|^2)\ge\frac{(1-t^2)(1-r^2)}{2|1+tz_1|^2}\ge\frac{(1-t)(1-r)}{2|1+tz_1|^2}. \]
Since $\nu_{z_0}=(1,0')$, for $(w_1,w')\in C_B(v,r)$ we have
\[|(w-z_0)\cdot\nu_{z_0}|=|1-w_1|=\frac{(1-t)|1-z_1|}{|1+tz_1|}\le\frac{4(1-t)}{|1+tz_1|^2}\]
and
\[|w-z_0|^2=|w'|^2+|1-w_1|^2=\frac{(1-t^2)|z'|^2+(1-t)^2|1-z_1|^2}{|1+tz_1|^2}\le
\frac{10(1-t)}{|1+tz_1|^2}.\]
\par Therefore, for every $0<t<1$ the Carath\'eodory ball $C_B((t,0'),r)$ lies in the approach region ${\mathcal A}_B^\alpha(z_0)$, when $\alpha=20(1-r)^{-1}$ and this ends Step 1.
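In the one-variable case ($D=\mathbb D$, $z_0=1$) Step 1 can be probed numerically: the Carath\'eodory ball $C_{\mathbb D}(t,r)$ is the pseudo-hyperbolic disk, the image of $\{|z|\le r\}$ under the M\"obius map $w=(t+z)/(1+tz)$, and $\delta_{\mathbb D}(w)=1-|w|$. The sketch below (an illustration, not part of the proof) samples such balls and checks both defining inequalities of ${\mathcal A}^\alpha_{\mathbb D}(1)$ with $\alpha=20(1-r)^{-1}$.

```python
import cmath, random

random.seed(0)
r = 0.5
alpha = 20.0 / (1.0 - r)

def in_approach_region(w, alpha):
    # approach region at z0 = 1: |w - 1| < alpha*delta and |w - 1|^2 < alpha*delta,
    # where delta(w) = 1 - |w| for the unit disk
    delta = 1.0 - abs(w)
    return abs(w - 1) < alpha * delta and abs(w - 1) ** 2 < alpha * delta

ok = True
for _ in range(2000):
    t = random.uniform(0.01, 0.999)
    # random z with |z| <= r, then its Moebius image w = (t+z)/(1+t*z)
    z = cmath.rect(random.uniform(0, r), random.uniform(0, 2 * cmath.pi))
    w = (t + z) / (1 + t * z)
    ok = ok and in_approach_region(w, alpha)
print(ok)
```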
\par {\bf Step 2:} {\it Let $0\le t<1$, $z_0=(1,0,\dots,0)$, $B_{-t}=\{z\in\mathbb C^n:\,|z+tz_0|<1+t\}$ and $B_t=\{z\in\mathbb C^n:\,|z-tz_0|<1-t\}$. Then ${\mathcal A}^\alpha_{B_{-t}}(z_0)\subset{\mathcal A}^{4\alpha}_{B_t}(z_0)$ when $0<t<(8\alpha)^{-1}$.}
\par If $z=(z_1,\dots,z_n)\in B_{-t}$ and $x=\operatorname{Re} z_1$, then
$\delta_{B_{-t}}(z)=1+t-|z+tz_0|$ and $\delta_{B_t}(z)=1-t-|z-tz_0|$. Direct calculations show that
\[(1+t+|z+tz_0|)\delta_{B_{-t}}(z)=(1-t+|z-tz_0|)\delta_{B_t}(z)+4t(1-x).\] Thus $\delta_{B_{-t}}(z)\le2\delta_{B_t}(z)+4t(1-x)$. But $1-x<\alpha\delta_{B_{-t}}(z)$. Hence $\delta_{B_{-t}}(z)\le 2(1-4t\alpha)^{-1}\delta_{B_t}(z)$. If $0<t<(8\alpha)^{-1}$ then $\delta_{B_{-t}}(z)\le 4\delta_{B_t}(z)$. So if $z\in{\mathcal A}_{B_{-t}}^\alpha(z_0)$ then $z\in{\mathcal A}_{B_t}^{4\alpha}(z_0)$.
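The displayed identity is elementary: writing $x=\operatorname{Re} z_1$ and expanding $|z\pm tz_0|^2=|z|^2\pm2tx+t^2$, both sides equal $1+2t-|z|^2-2tx$, for any $z\in\mathbb C^n$. A quick numerical confirmation (illustrative only):

```python
import random

random.seed(1)

def check_identity(n=3):
    # random z in C^n and 0 < t < 1; z0 = (1, 0, ..., 0)
    t = random.uniform(0.0, 1.0)
    z = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
    z0 = [1.0] + [0.0] * (n - 1)
    plus = sum(abs(a + t * b) ** 2 for a, b in zip(z, z0)) ** 0.5   # |z + t z0|
    minus = sum(abs(a - t * b) ** 2 for a, b in zip(z, z0)) ** 0.5  # |z - t z0|
    x = z[0].real
    # (1+t+|z+tz0|) * delta_{B_{-t}}(z), with delta_{B_{-t}}(z) = 1+t-|z+tz0|
    lhs = (1 + t + plus) * (1 + t - plus)
    # (1-t+|z-tz0|) * delta_{B_t}(z) + 4t(1-x), with delta_{B_t}(z) = 1-t-|z-tz0|
    rhs = (1 - t + minus) * (1 - t - minus) + 4 * t * (1 - x)
    return abs(lhs - rhs) < 1e-9

print(all(check_identity() for _ in range(1000)))
```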
\par {\bf Step 3:} {\it Let $p$ be a peak function at $z_0$. If the lemma holds for some $S(a)=\{z\in D:\,|p(z)|>a\}$ then it holds for $D$.}
\par The function $\delta(z)$ in the definition of approach regions is the same whether we take it with respect to $D$ or $S(a)$ when $z$ is sufficiently close to $z_0$. So we can take $b_0$, $a<b_0<1$ so that the intersections of approach regions with respect to $D$ or $S(a)$ coincide in $S(b_0)$.
\par Fix some positive $r<1$ and let $r'=r+(1-r)/2$. We take $\varepsilon,\eta>0$ such that
\[(1+2\eta)^{-1}(1-\varepsilon)(r'-2\eta)>r.\]
Let $L=L(D,a,b_0,\eta)$. We take an integer $m$ such that $b_0^mL<1$ and a number $b$ between $b_0$ and 1 such that $b^m>1-\varepsilon$. There is $c$, $b<c<1$, such that the Carath\'eodory balls $C_D(w,r')\subset S(b)$ when $w\in S(c)$. Indeed, if $z_0\in C(w,r')$ and $f$ is a conformal mapping $f$ of the unit disk onto itself such that $f(p(w))=0$, then $|f(p(z_0)|\le r'$. Direct calculations show that if $|p(w)|>(b+r')/(1+br')$ then $|p(z_0)|>b$.
\par Since the lemma holds on $S(a)$ we can find $\alpha$ and $w_0\in S(c)$ such that for every point $w\not\in{\mathcal A}_D^\alpha(z_0)\cap S(b)$ there is a holomorphic function $f$ on $S(a)$ such that $|f|<1$ on $S(a)$, $f(w_0)=0$ and $|f(w)|>r'$. By Theorem \ref{T:Gt} there is a function $\hat f\in H^\infty(D)$ such that $\|\hat f\|_{H^\infty(D)}\le L$ and $\|f-\hat f\|_{H^\infty(S(b_0))}\le\eta$.
\par Let $g=(1+2\eta)^{-1}p^m(\hat f-\hat f(w_0))$. If $z\in D\setminus S(b_0)$ then $|g(z)|\le b_0^mL\le 1$.
If $z\in S(b_0)$ then $|g(z)|<(1+2\eta)^{-1}(1+2\eta)=1$. Hence $|g|<1$ on $D$. Now
\[|g(w)|\ge(1+2\eta)^{-1}b^m(r'-2\eta)>(1+2\eta)^{-1}(1-\varepsilon)(r'-2\eta)>r.\]
Hence $w\not\in C_D(w_0,r)$ and $C_D(w_0,r)\subset{\mathcal A}_D^\alpha(z_0)\cap S(b)$. This ends Step 3.
\par We take a plurisubharmonic function $\phi\in C^2(\overline D)$ defining $D$ such that $\nabla\phi\ne0$ on $\partial D$.
Let \[L_{z_0}(z)=\sum_{i,j=1}^n\phi_{z_i,z_j}(z_0)(z_i-(z_0)_i)(z_j-(z_0)_j)\] and \[H_{z_0}(z)=\sum_{i,j=1}^n\phi_{z_i,\overline z_j}(z_0)(z_i-(z_0)_i)(\overline z_j-(\overline z_0)_j).\]
The Taylor expansion of $\phi$ at $z_0$ is
\[\phi(z)=2\operatorname{Re} (\nabla\phi(z_0),z-z_0) +\operatorname{Re} L_{z_0}(z)+\frac12H_{z_0}(z)+o(\|z-z_0\|^2).\]
\par {\bf Step 4:} {\it The lemma holds when $z_0=(1,0,\dots,0)$ and the Taylor expansion of $\phi$ at $z_0$ is
\[\phi(z)=-2(1-x)+|z-z_0|^2 +o(|z-z_0|^2).\]}
\par We take $\alpha=20(1-r)^{-1}$ and $t=(16\alpha)^{-1}$. By Step 1
$C_B(sz_0,r)\subset{\mathcal A}_B^\alpha(z_0)$ for any $0<s<1$. The dilation $d(z)=(1+t)z-tz_0$ moves $B$ onto $B_{-t}$ and $C_B(sz_0,r)$ onto $C_{B_{-t}}(s'z_0,r)$, $s'=(1+t)s-t$. If $z\in{\mathcal A}_B^\alpha(z_0)$ then
\[|d(z)-z_0|^2=(1+t)^2|z-z_0|^2<(1+t)^2\alpha\delta_B(z)=(1+t)\alpha\delta_{B_{-t}}(d(z))<
2\alpha\delta_{B_{-t}}(d(z)),\]
while $|(d(z)-z_0)\cdot\nu_{z_0}|=(1+t)|(z-z_0)\cdot\nu_{z_0}|<(1+t)\alpha\delta_B(z)<2\alpha\delta_{B_{-t}}(d(z))$.
Thus $d$ moves ${\mathcal A}_B^\alpha(z_0)$ into ${\mathcal A}_{B_{-t}}^{2\alpha}(z_0)$ and we see that $C_{B_{-t}}(sz_0,r)\subset{\mathcal A}_{B_{-t}}^{2\alpha}(z_0)$ for any $0<s<1$.
\par There is $x_0<1$ such that if $\Omega=\{z\in D:\,\operatorname{Re} z_1>x_0\}$, $B'_t=B_t\cap\Omega$, $B'_{-t}=B_{-t}\cap\Omega$ and $D'$ is the connected component of $D\cap\Omega$ containing $z_0$, then $B'_t\subset D'\subset B'_{-t}$. Hence by Step 2
\[C_{D'}(sz_0,r)\subset C_{B'_{-t}}(sz_0,r)\subset{\mathcal A}_{B_{-t}}^{2\alpha}(z_0)\subset{\mathcal A}_{B_t}^{8\alpha}(z_0)\subset{\mathcal A}_{D'}^{8\alpha}(z_0)\] when $s$ is sufficiently close to $1$. By Step 3 the statement holds.
\par {\bf Step 5:} {\it General case.} There is (see Lemma 5 and Proposition 2 in \cite{G}) a quadratic transformation $F$ of $\mathbb C^n$, biholomorphic in a neighborhood $U$ of $z_0$, that moves $D$ into a domain where the Taylor expansion of $\phi$ at $F(z_0)$ has the form
\[\phi(z)=-2\operatorname{Re} z_1+\sum_{j=1}^n|z_j-(z_0)_j|^2+o(|z-z_0|^2).\]
Since the image and the preimage of approach regions under the mapping $F$ lie in corresponding approach regions near the boundary, Steps 3 and 4 yield the lemma.
\end{proof}
\section{The Monge--Amp\'ere mass of envelopes}\label{S:mam}
\par A domain $D\subset\mathbb C^n$ is {\it hyperconvex} if there is a continuous function $u$ on $\overline D$ equal to zero on $\partial D$ and negative and plurisubharmonic on $D$; it is {\it strongly hyperconvex} if $u$ extends as a continuous plurisubharmonic function to a neighborhood of $\overline D$. We denote by ${\mathcal E}(D)$ the set of all continuous functions $u$ on $\overline D$ equal to zero on $\partial D$ and negative and plurisubharmonic on $D$. We allow such functions to take $-\infty$ as a value.
\par A {\it pluriregular condensor }
$K=(K_1,\dots,K_m,\sigma_1,\dots,\sigma_m)$ is a system of pluriregular compact sets \[K_m\subset K_{m-1}\subset\dots\subset K_1\subset D\subset\overline D=K_0\] and numbers
$\sigma_m<\sigma_{m-1}<\dots<\sigma_1<\sigma_0=0$ such that there is a continuous plurisubharmonic function $\omega(z)=\omega(z,K,D)$ on $D$ with zero boundary values,
$K_i=\{\omega\le\sigma_i\}$ and $\omega$ is maximal on $D_{\sigma_{i-1}}\setminus K_i$ for all $1\le i\le m$, where $D_\sigma=\{z\in D:\,\omega(z)<\sigma\}$ (see \cite{P} for more details). We will call this function {\it the relative extremal function} of the condensor $K$ in $D$. Of course, not every choice of sets $K_i$ and numbers $\sigma_i$ can be realized as a condensor. But if $u$ is a continuous negative plurisubharmonic function on $D$ and the sets $K_i=\{u\le\sigma_i\}$ are pluriregular,
then $K$ has a continuous relative extremal function.
\par The following lemma was proved in \cite[Lemma 4.2]{P}.
\begin{Lemma}\label{L:ca} Let $K=(K_1,\dots,K_m,\sigma_1,\dots,\sigma_m)$ be a pluriregular condensor in $D$. There is a sequence of pluricomplex multipole Green functions $g_j$ converging to $\omega(z)=\omega(z,K,D)$ uniformly on
compacta in $D_{\sigma_{i-1}}\setminus K_i$, $1\le i\le m$.
Moreover, if $\psi$ is a continuous function on ${\mathbb R}$, then
\[\lim_{j\to\infty}\int\limits_ D\psi(\omega(z))(dd^cg_j)^n=
\int\limits_ D\psi(\omega(z))(dd^c\omega)^n.\]\end{Lemma}
\par The following lemma is a slight but important elaboration of the previous result.
\begin{Lemma}\label{L:cae} Let $K=(K_1,\dots,K_m,\sigma_1,\dots,\sigma_m)$ be a pluriregular condensor in $D$. There is a sequence of pluricomplex multipole Green functions $g_j(z)<\omega(z)=\omega(z,K,D)$ on $D$ converging to $\omega(z)$ uniformly on compacta in $D_{\sigma_{i-1}}\setminus K_i$, $1\le i\le m$. Moreover, if $\psi$ is a continuous function on ${\mathbb R}$, then
\[\lim_{j\to\infty}\int\limits_ D\psi(\omega(z))(dd^cg_j)^n=
\int\limits_ D\psi(\omega(z))(dd^c\omega)^n.\]
\end{Lemma}
\begin{proof} By Lemma \ref{L:ca} there is a sequence of Green functions $h_j$ on $D$ converging to $\omega$ uniformly on
compacta in $D_{\sigma_{i-1}}\setminus K_i$, $1\le i\le m$. Moreover, if $\psi$ is a continuous function on ${\mathbb R}$, then
\[\lim_{j\to\infty}\int\limits_ D\psi(\omega(z))(dd^ch_j)^n=
\int\limits_ D\psi(\omega(z))(dd^c\omega)^n.\]
\par Let us choose a decreasing sequence of numbers $\alpha_k>1$ converging to 1. Define $\sigma'_{ik}=\alpha_k^{-1}\sigma_i$ and $\sigma''_{ik}=\alpha_k\sigma_i$. There is $k_0$ such that for all $k>k_0$ and $i=1,\dots,m$ we have
\[\sigma_{i+1}<\sigma_{ik}''<\sigma_i<\sigma'_{ik}<\sigma_{i-1}.\]
\par For any such $k$ there is $j_k>k$ such that $\alpha_kh_{j_k}<\sigma'_{ik}$ on $\partial D_{\sigma'_{ik}}$ and $\alpha_kh_{j_k}<\sigma''_{ik}$ on $\partial D_{\sigma''_{ik}}$. By the maximum principle $\alpha_kh_{j_k}<\sigma'_{ik}$ on $D_{\sigma'_{ik}}\setminus D_{\sigma''_{ik}}$. Hence
\[\alpha^3_kh_{j_k}<\alpha_k^2\sigma'_{ik}=\alpha_k\sigma_i=\sigma''_{ik}\] on $D_{\sigma'_{ik}}\setminus D_{\sigma''_{ik}}$. So if $g_k=\alpha_k^3h_{j_k}$ then $g_k<\omega$ on $D_{\sigma'_{ik}}\setminus D_{\sigma''_{ik}}$ for all $i=1,\dots,m$. By the maximality of $\omega$ on $D_{\sigma_i}\setminus\overline D_{\sigma_{i+1}}$ we see that $g_k<\omega$ on $D$. Clearly,
\[\lim_{k\to\infty}\int\limits_ D\psi(\omega(z))(dd^cg_k)^n=
\int\limits_ D\psi(\omega(z))(dd^c\omega)^n.\]
\end{proof}
\par Given a continuous function $\phi$ on $D$ we denote by $E\phi$ the plurisubharmonic envelope of $\phi$, i.e., the maximal plurisubharmonic function on $D$ less than or equal to $\phi$. Such a function exists due to the continuity of $\phi$. By \cite[Lemma 1]{W} if $\phi<0$ on $D$ and $\lim_{z\to\partial D}\phi(z)=0$, then $E\phi$ is continuous on $D$. For an at most countable family of functions $\{u_j\}\subset{\mathcal E}(D)$ we denote by $E\{u_j\}$ the envelope of $\min\{u_j\}$.
\begin{Theorem}\label{T:mae} If $D$ is a strongly hyperconvex domain and $\{u_j\}\subset{\mathcal E}(D)$ is an at most countable family of continuous plurisubharmonic functions, then $\sum MA(u_j)\ge MA(E\{u_j\})$, where
\[MA(u)=\int\limits_ D(dd^cu)^n.\]
\end{Theorem}
\begin{proof} First, we prove this theorem for two functions $u$ and $v$. Since $E(u,v)\ge u+v$ we see that $E(u,v)\in{\mathcal E}(D)$. We may assume that the functions $u,v\in{\mathcal E}(D)$ are bounded and of finite Monge--Amp\'ere mass. If the former does not hold then we replace $u$ and $v$ with $u_k=\max\{u,-k\}$ and $v_k=\max\{v,-k\}$ respectively and use the fact that $MA(u_k)\to MA(u)$ for a decreasing sequence $\{u_k\}$ and $E(u_k,v_k)\searrow E(u,v)$. If the latter is not true then the statement is evident.
\par If $K$ and $L$ are pluriregular condensors in $D$, $u(z)=\omega(z,K,D)$ and $v(z)=\omega(z,L,D)$, then by Lemma \ref{L:cae} there are sequences of pluricomplex multipole Green functions $\{g_j(z)<u(z)\}$ and $\{h_j(z)<v(z)\}$ on $D$ such that
\[\lim_{j\to\infty}MA(g_j)=MA(u)\] and
\[\lim_{j\to\infty}MA(h_j)=MA(v).\]
\par Clearly, $E(u,v)>E(g_j,h_j)$ and by the Comparison Principle
$MA(E(u,v))\le MA(E(g_j,h_j))$.
But $E(g_j,h_j)$ is a pluricomplex multipole Green function with poles at poles of $g_j$ and $h_j$ and weights equal to the maximum of weights $g_j$ or $h_j$ at a pole. Hence
$MA(E(g_j,h_j))\le MA(g_j)+MA(h_j)$ and our theorem holds in this case.
\par In the next step we prove the theorem for functions $u,v\in{\mathcal E}(D)$ for which there is an open set $D'\subset\subset D$ such that $\partial D'$ is a smooth hypersurface, $u$ and $v$ are equal to $\sigma_1<0$ on $\partial D'$, maximal on $D\setminus\overline D'$ and are of class $C^2$ on $D'$.
\par For this we will construct an inductive sequence of pluriregular condensors $K_j$ and $L_j$ such that the sequences of functions $u_j(z)=\omega(z,K_j,D)$ and $v_j(z)=\omega(z,L_j,D)$ are decreasing and converging to $u$ and $v$ respectively. Then $E(u,v)$ is the limit of the decreasing sequence of $E(u_j,v_j)$ and, consequently, the theorem holds in this case.
\par We let $K_0=(\overline D',\sigma_1)$. If $K_j=(K_{1j}=\overline D',K_{2j},\dots,K_{m_jj},\sigma_{1j}=\sigma_1,\sigma_{2j},\dots,\sigma_{m_jj})$ has been constructed, then by Sard's theorem for every $1\le i\le m_j-1$ we can find numbers \[\sigma_{j,i+1}=\delta_{l_{ij}}<\delta_{l_{ij}-1}<\dots<\delta_1<\delta_0=\sigma_{ij}\]
such that $\delta_l-\delta_{l+1}<1/j$ and the function $u$ is not degenerate on $\{u=\delta_l\}$, $1\le l\le l_{ij}$. For $i=m_j$ we select numbers $\delta_l$ as before between $\sigma_{m_jj}$ and the minimum of $u$ on $D$.
\par Since the hypersurfaces of $\{u=\delta_{l_{ij}}\}$ are smooth, the compact sets $K_{l_{ij}}=\{u\le\delta_{l_{ij}}\}$ are pluriregular. We relabel the numbers $\sigma_{kj}$ and $\delta_{l_{ij}}$ and compact sets $K_{l_{ij}}$ as $\sigma_{i,j+1}$ and $K_{i,j+1}$ respectively arranging them in the right order and define a pluriregular condensor
\[K_{j+1}=(\overline D',K_{2,j+1},\dots,K_{m_{j+1},j+1},\sigma_{1,j+1},\dots,\sigma_{m_{j+1},j+1}).\] We denote $\omega(z,K_{j+1},D)$ by $u_{j+1}(z)$.
\par Since the functions $u_j$ are maximal on $K_{ij}^o\setminus K_{i+1,j}$ and are equal to $u$ on $\partial K_{ij}$ we see that $u_j\ge u$ on $D$ and $u_j\ge u_{j+1}$ on $D$. Hence the sequence of $u_j$ is decreasing and, clearly, converging to $u$. Since a similar construction works for the function $v$ too, our theorem holds in this case.
\par For the general case we suppose $D=\{z\in\mathbb C^n:\,\phi(z)<0\}$, where $\phi$ is a continuous plurisubharmonic function defined on a neighborhood $V$ of $\overline D$ and its restriction to $D$ is in ${\mathcal E}(D)$. The sequence of plurisubharmonic functions $u_k$ on $V$ equal to $\max\{u,k\phi\}$ on $D$ and to $k\phi$ on $V\setminus D$ is decreasing on $D$ and converges to $u$ uniformly on $\overline D$. In particular, the total Monge--Amp\'ere masses of $u_k$ on $D$ converge to this mass of $u$. Hence if we prove our theorem for continuous functions that admit a continuous plurisubharmonic extension to $V$, then we prove it for all functions in ${\mathcal E}(D)$.
\par If $u$ is such a function then there is a decreasing sequence of plurisubharmonic functions $u_k$ on some domain $G$ containing $\overline D$ that belong to $C^\infty(G)$ (see \cite[Theorem 2.9.2]{K}) and converge to $u$ uniformly on $\overline D$. Let $\varepsilon_k=\sup_{z\in\partial D}u_k(z)$.
\par Let us choose a sequence of numbers $\sigma_{1k}<0$ converging to 0 such that for all $k$ the set $\{u_k=\sigma_{1k}\}$ is a smooth hypersurface compactly belonging to $D$. We define $u'_k$ as a function that is equal to $u_k-\varepsilon_k$ on the set $W_k=\{u_k\le\sigma_{1k}\}$, to 0 on $\partial D$ and to be maximal on $D\setminus W_k$. These functions uniformly converge to $u$ and they are plurisubharmonic because $u'_k\ge u_k-\varepsilon_k$ on $D\setminus W_k$. Hence the total Monge--Amp\'ere masses of $u'_k$ on $D$ converge to this mass of $u$. Since
for functions like this our theorem is already proved, it is proved for all $u\in{\mathcal E}(D)$.
\par For finitely many functions $u_1,\dots,u_k$ the result follows immediately by induction: the envelope $v_k$ of $\min\{u_1,\dots,u_k\}$ is equal to the envelope of \[\min\{\min\{u_1,\dots,u_{k-1}\},u_k\}.\] For the infinite case we note that $E(\{u_j\})$ is the limit of the decreasing sequence of $v_k$ and the inequality follows from the classical result of Bedford and Taylor.
\end{proof}
\par This result is sharp. If $D$ is hyperconvex and $W=\{w_1,\dots,w_k\}\subset D$, then the {\it pluricomplex Green
function with poles at the set $W$} is a unique function $g(z,W)\in{\mathcal E}(D)$ such that
$(dd^cg(z,W))^n=\sum_{j=1}^k(2\pi)^n\delta_{w_j}$, $|g(z,W)-\sum_{j=1}^k\log|z-w_j||$ is
bounded on $D$ and $g(z,W)$ is maximal outside $W$, i.e., $(dd^cg)^n=0$ on $D\setminus W$.
\par If $u$ and $v$ are two pluricomplex Green functions with non-overlapping poles, then $E(u,v)$ is the pluricomplex Green function whose set of poles is the union of the poles of $u$ and $v$. Hence we have an equality in Theorem \ref{T:mae}.
\par We finish this section with the following observation. Let ${\mathcal E}_1(D)$ be the set of all $u\in{\mathcal E}(D)$ such that $MA(u)=1$.
\begin{Corollary}\label{C:mae} If $u,v\in{\mathcal E}_1(D)$ then $MA(E(u,v))\le 2$.\end{Corollary}
\section{Poletsky--Stessin Hardy spaces}
\par Let $D$ be a hyperconvex domain in ${\mathbb C}^n$ and $u\in{\mathcal E}(D)$. Following \cite{D} we set $B_u(r)=\{z\in D:\,u(z)< r\}$ and $S_u(r)=\{z\in D:\,u(z)= r\}$. Let $$\mu_{u,r}=(dd^cu_r)^n-\chi_{D\setminus B_u(r)}(dd^cu)^n,$$
where $u_r=\max\{u,r\}$. The measure $\mu_{u,r}$ is nonnegative and
supported by $S_u(r)$. In \cite[Theorem 1.7]{D} Demailly proved the
following fundamental Lelong--Jensen formula.
\begin{Theorem}\label{T:ljf} For all $r<0$ and every plurisubharmonic function
$\phi$ on $D$
$$\mu_{u,r}(\phi)=\int\limits_ D\phi\mu_{u,r}$$ is finite and
\begin{equation}\label{e:ljf}
\mu_{u,r}(\phi)-\int\limits_ {B_u(r)}\phi(dd^cu)^n=\int\limits_
{B_u(r)}(r-u)dd^c\phi\wedge(dd^cu)^{n-1}.
\end{equation}
\end{Theorem}
\par The last integral in this formula can be equal to $\infty$. In that
case the second integral on the left side is equal to $-\infty$. This
cannot happen if $\phi\ge0$.
\par The function
\[\Phi(r)=\int\limits_{B_u(r)}(r-u)dd^c\phi\wedge(dd^cu)^{n-1}\] is, evidently, increasing, and it follows that the function $\mu_{u,r}(\phi)$ is increasing and continuous from the left.
\par As in \cite{PS} for $p\ge1$ we define the Hardy space $H^p_u(D)$ as the set of all holomorphic functions $f$ on $D$ such that
\[\limsup_{r\to0^-}\mu_{u,r}(|f|^p)<\infty.\]
\par Since $\mu_{u,r}(|f|^p)$ is an increasing function of $r$ for all $r<0$, we can replace $\limsup$ in the definition of this space by $\lim$. So we can introduce the norm on $H^p_u(D)$ as
\[\|f\|^p_{u,p}=\lim_{r\to0^-}\mu_{u,r}(|f|^p)=\int\limits_ {D}|f|^p(dd^cu)^n-\int\limits_
{D}udd^c|f|^p\wedge(dd^cu)^{n-1}.\]
It was shown (see \cite[Theorem 4.1]{PS}) that the spaces $H^p_u(D)$ are Banach for $p\ge1$. \par The following theorem, which is a direct consequence of \cite[Corollary 3.2]{PS}, shows that exhaustion functions decaying faster near the boundary of $D$ determine dominating norms.
\begin{Theorem}\label{T:it} Let $u$ and $v$ be continuous plurisubharmonic
exhaustion functions on $D$ and let $F$ be a compact set in $D$
such that $bv(z)\le u(z)$ for some constant $b>0$ and all $z\in
D\setminus F$. Then $H^p_v(D)\subset H^p_u(D)$ and $\|f\|_{u,p}\le
b^{n/p}\|f\|_{v,p}$.
\end{Theorem}
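To connect with the classical picture: on the unit disk with $u(z)=\log|z|$ (the pluricomplex Green function with pole at $0$), the measure $\mu_{u,r}$ is, up to the normalization of $d^c$, arc-length on the circle $|z|=e^r$, so $\mu_{u,r}(|f|^p)$ is a constant times the classical circular mean, which increases in $r$ by subharmonicity of $|f|^p$. The sketch below illustrates this monotonicity numerically with an arbitrarily chosen $f$ (not from the text):

```python
import cmath, math

def circular_mean(f, radius, p, samples=4096):
    """Approximate (1/2pi) * integral_0^{2pi} |f(radius*e^{i theta})|^p d(theta)."""
    total = 0.0
    for j in range(samples):
        theta = 2 * math.pi * j / samples
        total += abs(f(radius * cmath.exp(1j * theta))) ** p
    return total / samples

f = lambda z: cmath.exp(z) + z ** 2          # an arbitrary entire function
means = [circular_mean(f, math.exp(r), 2) for r in (-2.0, -1.0, -0.5, -0.1)]
print(means == sorted(means))                # circular means increase with r
```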
\par Let $u=(u_1,\dots,u_k)\in{\mathcal E}^k_1(D)$. Let $H^p_u(D)$ be the direct product $H^p_{u_1}(D)\times\cdots\times H^p_{u_k}(D)$ with the norm
\[\|(f_1,\dots,f_k)\|_{u,p}=\sum_{j=1}^k\|f_j\|_{u_j,p}.\] We denote by $B_{u,p}(r)$ the open ball of radius $r$ centered at the origin of $H^p_u$.
\par If $p=2$ then we introduce a sesqui-linear form on $H^2_u(D)$ as
\[(f,g)_u=\lim_{r\to 0^-}\sum_{j=1}^k\int\limits_{S_u(r)}f_j\overline g_j\,d\mu_{u,r}.\]
Since $2\operatorname{Re} f\overline g=|f+g|^2-|f|^2-|g|^2$ and $2\operatorname{Im} f\overline g=|f+ig|^2-|f|^2-|g|^2$, the existence of the limit follows. By the H\"older inequality $|(f,g)_u|^2\le(f,f)_u(g,g)_u<\infty$.
It follows that a continuous non-negative sesqui-linear form $(f,g)_u$ is well defined on $H^2_u(D)$ and makes this space a Hilbert space.
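The two polarization identities used here are straightforward to confirm numerically (an independent check, not part of the argument):

```python
import random

random.seed(2)

def polarization_ok():
    f = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    g = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    # 2 Re(f conj(g)) = |f+g|^2 - |f|^2 - |g|^2
    re_ok = abs(2 * (f * g.conjugate()).real
                - (abs(f + g) ** 2 - abs(f) ** 2 - abs(g) ** 2)) < 1e-9
    # 2 Im(f conj(g)) = |f+ig|^2 - |f|^2 - |g|^2
    im_ok = abs(2 * (f * g.conjugate()).imag
                - (abs(f + 1j * g) ** 2 - abs(f) ** 2 - abs(g) ** 2)) < 1e-9
    return re_ok and im_ok

print(all(polarization_ok() for _ in range(1000)))
```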
\par The norm of $f=(f_1,\dots,f_k)\in(H^\infty(D))^k$ will be defined as
\[\|f\|_{\infty}=\sum_{j=1}^k\|f_j\|_\infty\] and let $B^k_\infty(r)$ be the open ball of radius $r$ centered at the origin of $(H^\infty)^k$. If $f,g\in H^p_u(D)$ and $|f|\le|g|$ on $D$, then $\mu_{u,r}(|f|^p)\le \mu_{u,r}(|g|^p)$. Hence $\|f\|_{u,p}\le\|g\|_{u,p}$ and we see that $B^k_\infty(r)\subset B_{u,p}(r)$ when $u\in{\mathcal E}_1^k$.
\par If $u=(u_1,\dots,u_k)$ and $v=(v_1,\dots,v_k)$ are in ${\mathcal E}^k_1(D)$ then we say that $u\succeq v$ if there is a constant $c>0$ and a compact set $F\subset D$ such that $cu_j\le v_j$ on $D\setminus F$. In this case $H^p_u(D)\subset H^p_v(D)$ and there is a constant $a>0$ such that $\|f\|_{v,p}\le a\|f\|_{u,p}$.
\begin{Proposition}\label{P:cs} Let $u,v\in{\mathcal E}^k_1(D)$ and $u\succeq v$. Then:\begin{enumerate}
\item If $A\subset H^p_v(D)$ is closed in $H^p_v(D)$ then $A\cap H^p_u$ is closed in $H^p_u(D)$;
\item the closed balls $\overline B_{u,p}(R)$ in $H^p_u(D)$ of radius $R$ are closed in $H^p_v(D)$;
\item if $A\subset H^2_u(D)$ is a closed convex bounded set, then $A$ is a closed bounded set in $H^2_v(D)$.
\end{enumerate}
\end{Proposition}
\begin{proof} (1) Indeed, if a sequence $\{f_j\}\subset A\cap H^p_u(D)$ and $f_j\to f$ in $H^p_u(D)$, then $\|f_j-f\|_{v,p}\le a\|f_j-f\|_{u,p}$. Hence $f_j\to f$ in $H^p_v(D)$ and $f\in A$.
\par (2) Let $\{f_j=(f_{j1},\dots,f_{jk})\}$ be a sequence in $\overline B_{u,p}(R)$ converging in $H^p_v(D)$ to $g=(g_1,\dots,g_k)$. Then the functions $f_{jm}$ converge to $g_m$ in $H^p_{v_m}(D)$. Since the integrals $\mu_{u_m,r}(|f_{jm}|^p)$ are increasing in $r$ we see that $\mu_{u_m,r}(|f_{jm}|^p)\le \|f_{jm}\|^p_{u_m,p}$. By Theorem 3.6 from \cite{PS} $\{f_{jm}\}$ is a Cauchy sequence in the uniform metric on any compact set in $D$. Hence, for any $r<0$
\[\mu_{u_m,r}(|g_m|^p)=\lim_{j\to\infty}\mu_{u_m,r}(|f_{jm}|^p)\le
\lim_{j\to\infty}\|f_{jm}\|^p_{u_m,p}.\]
Consequently,
\[\|g\|_{u,p}\le \lim_{j\to\infty}\|f_j\|_{u,p}\le R\]
and we see that $g\in\overline B_{u,p}(R)$.
\par (3) The fact that $A$ is bounded in $H^2_v(D)$ follows from Theorem \ref{T:it}. Let $\{f_j=(f_{j1},\dots,f_{jk})\}$ be a sequence in $A$ converging in $H^2_v(D)$ to $g=(g_1,\dots,g_k)$. Then the functions $f_{jm}$ converge to $g_m$ in $H^2_{v_m}(D)$. As it was observed in part (2), $\{f_{jm}\}$ is a Cauchy sequence in the uniform metric on any compact set in $D$.
\par Since Hilbert spaces are reflexive, the closed balls are weakly compact. Since $A$ is convex and closed it is weakly closed in $H^2_u(D)$. Hence there is a subsequence $\{f_{j_k}\}$ weakly converging to $h=(h_1,\dots,h_k)\in A$.
\par If $D$ is a hyperconvex domain, $w_0\in D$ and $\omega(z,w_0)$ is the pluricomplex Green function with pole at $w_0$, then for any $u\in{\mathcal E}(D)$ there is a constant $c>0$ such that $cu\le\omega$ near $\partial D$. Hence $H^p_u(D)\subset H^p_\omega(D)$ and $\|f\|_{\omega,p}\le c^{n/p}\|f\|_{u,p}$. By formula (3.2) in \cite{PS}
\[(2\pi)^n|f(w)|^p\le\|f\|_{\omega,p}\le c^{n/p}\|f\|_{u,p}\] when $f\in H^p_u(D)$. Hence point evaluations are continuous functionals on $H^p_u(D)$. Thus $h_m=g_m$.
\end{proof}
\par The following result was proved by K. R. Shrestha in \cite{Sh2} when $D$ is the unit disk.
\begin{Theorem}\label{T:ib} If $\overline B=\cap_{u\in{\mathcal E}_1^k}\overline B_{u,p}(R)$ then $\overline B=\overline B_\infty(R)$.
\end{Theorem}
\begin{proof} Suppose that $f=(f_1,\dots,f_k)\in\overline B$ and $\sum\|f_j\|_\infty>R$. We fix an $\varepsilon>0$ and find points $w_1,\dots,w_k\in D$ such that $|f_j(w_j)|\ge\|f_j\|_\infty-\varepsilon$. Let $u_j(z)=g(z,w_j)$. Then $\|f_j\|_{u_j,p}\ge|f_j(w_j)|$ and $\|f\|_{u,p}\ge\|f\|_\infty-k\varepsilon$, where $u=(u_1,\dots,u_k)$. Since $\varepsilon$ is arbitrary we come to a contradiction.
\par If $f\in\overline B_\infty(R)$ then $\|f_j\|_{u,p}\le\|f_j\|_\infty$ for any $u\in{\mathcal E}_1^k$. Hence $f\in \overline B$.
\end{proof}
\par The following result offers a way to reduce $H^\infty$ problems to $H^2$ problems. When $D$ is the unit disk it was proved in \cite{PSh} by K. R. Shrestha and the author for any $p>1$ and without any conditions.
\begin{Theorem} \label{T:int1} Let $D$ be a strongly pseudoconvex domain, $w_0\in D$, $R>0$ and $g=g(z,w_0)$. Let $A \subset (H^2_g(D))^k$ be a closed convex set. Then $A\cap \overline B_\infty(R) \neq \emptyset$ if and only if for any finite collection of functions $u_1,\dots,u_m\in{\mathcal E}^k_1(D)$ the set $A\cap \overline B_{u_1,2}(R)\cap\cdots\cap \overline B_{u_m,2}(R) \neq \emptyset$. \end{Theorem}
\begin{proof} By Proposition \ref{P:cs}(1) for $u\in{\mathcal E}^k_1(D)$ the set $A_u=A\cap \overline B_{u,2}(R)$ is closed in $H^2_u(D)$. Since it is convex and bounded, by Proposition \ref{P:cs}(3) it is closed and bounded in $(H^2_g(D))^k$. Since it is convex it is weakly closed in $(H^2_g(D))^k$ and, consequently, weakly compact.
\par Since any finite collection of the sets $A_{u_1},\dots, A_{u_m}$ has a non-empty intersection, by weak compactness we see that $\cap_{u\in{\mathcal E}_1^k}A_u \neq \emptyset$. By Theorem \ref{T:ib} the latter set is equal to $A\cap \overline B_\infty(R)$.
\par If $f\in A\cap \overline B_\infty(R)$ then $\|f\|_{u,2}\le R$ and the theorem follows.
\end{proof}
\section{Projective limits of Poletsky--Stessin Hardy spaces}
\par The partially ordered set $({\mathcal E}_1(D),\succeq)$ is directed. Indeed, if $u,v\in{\mathcal E}^k_1(D)$ then $w=E(u,v)=(E(u_1,v_1),\dots,E(u_k,v_k))\in{\mathcal E}^k(D)$ and $w\succeq u,v$. Let $M_j$ be the total Monge--Amp\`ere mass of $w_j$. By Corollary \ref{C:mae} $M_j\le2$. Hence \[\tilde w=(M_1^{-1/n}w_1,\dots,M_k^{-1/n}w_k)\in{\mathcal E}^k_1(D)\] and $\tilde w\succeq u$ and $\tilde w\succeq v$.
\par By Theorem \ref{T:it}, if $u\succeq v$ then $H^p_u\subset H^p_v$ and the imbedding operator $i_{uv}$ is continuous. Thus the spaces $H^p_u(D)$, $u\in{\mathcal E}^k_1$, form a projective system (see \cite[II.6]{Sch}). Let $X^p$ be the projective limit of $(H^p_u(D),u\in{\mathcal E}^k_1(D))$, i.e., the subspace of all $x\in\prod_{u\in{\mathcal E}^k_1}H^p_u(D)$ such that $x_v=i_{uv}x_u$. Thus the mappings $i_u:\,X^p\to H^p_u(D)$ are defined. The projective topology on $X^p$ is the weakest topology that makes all mappings $i_u$ continuous.
\par If we fix a point $w_0\in D$ and let $g(z, w_0)$ be the pluricomplex Green function with pole at $w_0$, then $u\succeq {\bf g}=(g,\dots,g)$ for all $u\in{\mathcal E}^k(D)$ and $H^p_u(D)\subset H^p_{{\bf g}}(D)$. Thus we can identify $X^p$ with the set of all $f\in H^p_{{\bf g}}(D)$ such that $f\in H^p_u(D)$ for all $u\in{\mathcal E}^k_1(D)$.
\par Let $f$ be a holomorphic function on $D$ and $z_0\in\partial D$. The function $f$ has an admissible limit at $z_0$ if for every approach region ${\mathcal A}_D^\alpha(z_0)$ the limit
\[f^*(z_0)=\lim_{z\to z_0,z\in{\mathcal A}_D^\alpha(z_0)}f(z)\] exists.
\begin{Theorem}\label{T:mt} Let $f$ be a holomorphic function on a strongly pseudoconvex domain $D$ with the $C^2$ boundary. Suppose that $f$ has admissible limits at points $z_j\in\partial D$ and $\lim_{j\to\infty}f^*(z_j)=\infty$. Then for any $p>1$ there is $u\in{\mathcal E}_1(D)$ such that $f\not\in H^p_u(D)$.
\end{Theorem}
\begin{proof} The function $\log c(z,w)$ is plurisubharmonic, negative and has a simple pole at $w$. Hence $\log c(z,w)\le g(z,w)$, where $g(z,w)$ is the pluricomplex Green function with pole at $w$. We define Green balls $G_D(w,r)=\{z\in D:\,g(z,w)\le\log r\}$. Clearly $G_D(w,r)\subset C_D(w,r)$.
\par Let us take any positive converging series $\sum a_j$ and fix a sequence $z_j\in\partial D$ such that $f$ has admissible limits at $z_j$ and
\[\sum_{j=1}^\infty a_j^n|f^*(z_j)|^p=\infty.\]
Let ${\mathcal A}_j={\mathcal A}_D^{\alpha_j}(z_j)$, where $\alpha_j$ are chosen so that we can find a point $w_j$ as close to $z_j$ as we want such that $G_j=G_D(w_j,e^{-1})\subset{\mathcal A}_j$.
\par We will choose inductively points $w_j$. Let $w_0$ be any point. If $w_0,\dots,w_{k-1}$ have been chosen we select $w_k$ to satisfy the following conditions:
\begin{enumerate}\item $G_k\subset{\mathcal A}_k$ and $|f|>|f^*(z_k)|/2$ on $G_k$;
\item $a_jg(z,w_j)>-2^{-j-1}a_k$ on $G_k$, $0\le j\le k-1$;
\item $g(z,w_k)>-2^{-k-1}a_j$ on $G_j$, $0\le j\le k-1$.
\end{enumerate}
This is possible because, by Lemma \ref{L:balls}, we can take $w_k$ as close to $z_k$ as we want so that $G_k\subset{\mathcal A}_k$; by \cite{C}, $g(z,w)\to 0$ uniformly on compacta in $\overline D\setminus\{z_j\}$ when $w\to z_j$; and $g(z,w)$ is equal to 0 on $\partial D$ when $w$ is fixed.
\par Let $u_j=a_j\max\{g(z,w_j),-2\}$. Note that if $F$ is an open set in $D$ containing $G_D(w_j,e^{-2})$ then
\[\int\limits_ {F}(dd^cu_j)^n=a_j^n.\]
Let $u=E(\{u_j\})$. Since the series $v=\sum_{j=0}^\infty u_j$ converges uniformly on $\overline D$, we see that $v\in{\mathcal E}$; hence $u\ge v$ and $u$ is a continuous plurisubharmonic function on $D$ equal to 0 on $\partial D$. Since
\[\sum_{j=0}^\infty MA(u_j)=\sum_{j=0}^\infty a_j^n<\infty\]
by Theorem \ref{T:mae} $M=MA(u)<\infty$.
\par Let us evaluate the Monge--Amp\`ere mass of $u$ on $G_k$. From the inequalities $u_k\ge u\ge v$ on $D$ and the conditions on the choices of $w_j$, on $\partial G_k$ we get
\[-a_k\ge u\ge-\sum_{j=0}^{k-1}2^{-j-1}a_k-a_k-\sum_{j=k+1}^\infty 2^{-j-1}a_k\ge-\frac32a_k.\]
Hence $u+3a_k/2\ge 0$ on $\partial G_k$ and the set $F_k=\{6(u+\frac32a_k)< u_k\}$ is compactly contained in $G_k$. Moreover, if $z\in \partial G_D(w_k,e^{-2})$ then
\[6(u(z)+\frac32a_k)\le6(u_k(z)+\frac32a_k)=-3a_k<-2a_k=u_k(z).\]
Thus the set $F_k$ contains the ball $G_D(w_k,e^{-2})$. By the comparison principle
\[6^n\int\limits_ {G_k}(dd^cu)^n=\int\limits_ {G_k}(dd^c6(u(z)+\frac32a_k))^n\ge6^n\int\limits_ {F_k}(dd^cu_k)^n=6^na_k^n.\]
\par Hence
\[\|f\|^p_{u,p}\ge\int\limits_ D|f|^p(dd^cu)^n\ge\sum_{k=0}^\infty\int\limits_ {G_k}|f|^p(dd^cu)^n\ge2^{-p}\sum_{k=0}^\infty|f^*(z_k)|^pa_k^n=\infty.\]
Hence $f\not\in H^p_u(D)$.
\end{proof}
\par Let us introduce a new topology on the space $(H^\infty(D))^k$. Consider the imbeddings $j_u:\,(H^\infty(D))^k\to H^p_u(D)$, $u\in{\mathcal E}_1^k$, and for any $R>0$ the sets $j_u^{-1}(B_{u,p}(R))$. These sets, together with the empty set, form a basis because they evidently cover $(H^\infty(D))^k$ and for any $x,y\in (H^\infty(D))^k$, any $u,v\in{\mathcal E}^k_1(D)$ and any $R_1,R_2>0$ the intersection $A$ of the sets $x+j_u^{-1}(B_{u,p}(R_1))$ and $y+j_v^{-1}(B_{v,p}(R_2))$ contains an element of the basis. Indeed, if $A$ is empty then there is nothing to prove. If $z\in A$ then $\|z-x\|_{u,p}<R_1$ and $\|z-y\|_{v,p}<R_2$. Let $w=E(u,v)$ and $\tilde w=(\alpha_1w_1,\dots,\alpha_kw_k)$, where the coefficients $\alpha_j\ge 1/2$ have been chosen so that $\tilde w\in{\mathcal E}^k_1(D)$. Since $\|f\|_{w,p}\ge\max\{\|f\|_{u,p},\|f\|_{v,p}\}$ we see that $\|f\|_{\tilde w,p}\ge2^{-n/p}\max\{\|f\|_{u,p},\|f\|_{v,p}\}$. Hence $B_{\tilde w,p}(2^{-n/p}R)\subset B_{u,p}(R)\cap B_{v,p}(R)$ and we see that there is $c>0$ such that $z+j_{\tilde w}^{-1}(B_{\tilde w,p}(c))\subset A$.
\par We denote by $Y^p$ the space $(H^\infty(D))^k$ endowed with the topology defined by the basis of sets $j_u^{-1}(B_{u,p}(R))$ for all $u\in{\mathcal E}_1^k$ and all $R>0$.
\begin{Theorem}\label{T:int} Let $D$ be a strongly pseudoconvex domain with the $C^2$ boundary and let $p\ge1$. Then $\cap_{u\in{\mathcal E}^k_1(D)}H^p_u(D)=(H^\infty(D))^k$ and the projective limit $X^p$ of $(H^p_u(D),u\in{\mathcal E}^k_1(D))$ is isomorphic to $Y^p$.
\end{Theorem}
\begin{proof} It suffices to prove this theorem for $k=1$. Since all mappings $i_{uv}$ are imbeddings, if $x\in X^p$ and $x=(f_u,u\in{\mathcal E}_1)$ then $f_u=f_v=f$ and this $f$ belongs to all spaces $H^p_u$, i.e., $f\in \cap_{u\in{\mathcal E}^k_1(D)}H^p_u(D)$. Let us show that the latter space is $(H^\infty(D))^k$. Suppose that $f$ is unbounded. Since $f\in H^p_{{\bf g}}$, by \cite[Theorem 10]{S} $f$ has admissible limits a.e. on the boundary. If the function $f^*$ is bounded then the real and imaginary parts of $f$, which are harmonic functions, have bounded admissible limits equal to those of $f^*$ a.e. (see \cite{AS, Sm}) and this implies that $f$ is bounded. Hence $f^*$ is unbounded. By Theorem \ref{T:mt} there is $u\in{\mathcal E}_1(D)$ such that $f\not\in H^p_u(D)$. Thus $f\in H^\infty(D)$ and we obtain a mapping $\Phi:\,X^p\to Y^p$. Clearly, this mapping is an algebraic isomorphism.
\par By its definition, the projective topology on $X^p$ must contain all sets $A(x,u,R)=x+i_u^{-1}(B_{u,p}(R))$, where $u\in{\mathcal E}^k_1(D)$, $x\in (H^\infty(D))^k$ and $R>0$. It is easy to see that $\Phi(A(x,u,R))=\Phi(x)+j_u^{-1}(B_{u,p}(R))$. We conclude that these sets form a basis of the projective topology on $X^p$ and, therefore, $\Phi$ is a topological isomorphism.
\end{proof}
\par The duals of $H^p_u(D)$ form an inductive system and their inductive limit can be considered. We will not pursue this here. Instead, we will show that the intersection of any countable family of spaces $H^p_u(D)$ contains an unbounded function.
\begin{Theorem} Let $D$ be a strongly pseudoconvex domain with the $C^2$ boundary and let $p\ge1$. Let $\{u_j\}\subset{\mathcal E}_1(D)$. Then the space $X=\cap_{j=1}^\infty H^p_{u_j}(D)$ contains an unbounded function.
\end{Theorem}
\begin{proof} Let us choose positive coefficients $\alpha_j$ such that the function \[u=\sum_{j=1}^\infty \alpha_ju_j\in{\mathcal E}_1(D).\] Clearly $H^p_u(D)\subset X$ and we only need to prove that for any $u\in{\mathcal E}_1(D)$ the space $H^p_u(D)$ contains an unbounded function. If not, then the continuous imbedding $H^\infty(D)\to H^p_u(D)$ is onto. By a theorem of Banach the inverse mapping is also continuous. Let us find a point $z_0\in\partial D$ such that $\mu_u(\{z_0\})=0$ and take a peak function $q$ at $z_0$. The norm of the functions $q^m$, $m\in\mathbb N$, in $H^\infty(D)$ is 1. \cite[Theorem 3.1]{D} states that for a plurisubharmonic function $\phi$ on $D$ continuous up to the boundary
\[\mu_u(\phi)=\int\limits_ {B_u(r)}\phi(dd^cu)^n+\int\limits_
{B_u(r)}(r-u)dd^c\phi\wedge(dd^cu)^{n-1}.\] Thus the norms of the functions $q^m$ in $H^p_u(D)$ are equal to $\mu_u^{1/p}(|q|^{pm})$ and, consequently, converge to 0. We arrive at a contradiction.
\end{proof}
\begin{thebibliography}{999}
\bibitem{AG} M. Alan, N. G. Gogus, {\em Poletsky-Stessin-Hardy spaces in the plane,} Complex Anal. Oper. Theory, {\bf 8} (2014), 975--990
\bibitem{AS} N. Aronszajn, K. T. Smith, {\em Functional spaces and functional completion,} Ann. Inst. Fourier, {\bf 6} (1955--1956), 125--185
\bibitem{BPST} A. Bonilla, F. P\'erez-Gonz\'alez, A. Stray, R. Trujillo-Gonz\'alez, {\em Approximation in weighted Hardy spaces,} J. Anal. Math., {\bf 73} (1997), 65--89
\bibitem{C} D. Coman, {\em Boundary behavior of the pluricomplex Green function,} Ark. Mat., {\bf 36} (1998), 341--353
\bibitem{D} J.-P. Demailly, {\em Mesures de Monge--Amp\`ere et mesures plurisousharmoniques,} Math. Z., {\bf 194} (1987), 519--564
\bibitem{G} I. Graham, {\em Boundary behavior of the Carath\'eodory and Kobayashi metrics on strongly pseudoconvex domains in $\mathbb C^n$ with smooth boundary,} Trans. Amer. Math. Soc., {\bf 207} (1975), 219--240
\bibitem{K} M. Klimek, {\em Pluripotential Theory,} Oxford Sci. Publ., 1991
\bibitem{M} J. D. McPhail, {\em A weighted interpolation problem for analytic functions,} Studia Math., {\bf 96} (1990), 105--111
\bibitem{P} E. A. Poletsky, {\em Approximation of plurisubharmonic functions by multipole Green functions,} Trans. Amer. Math. Soc., {\bf 355} (2003), 1579--1591
\bibitem{PS} E. A. Poletsky, M. I. Stessin, {\em Hardy and Bergman spaces on hyperconvex domains and their composition operators,} Indiana Univ. Math. J., {\bf 57} (2008), 2153--2201
\bibitem{PSh} E. A. Poletsky, K. R. Shrestha, {\em On weighted Hardy spaces on the unit disk,} arXiv preprint
\bibitem{R} W. Rudin, {\em Function theory in the unit ball of $\mathbb C^n$,} Grundlehren der Mathematischen Wissenschaften, {\bf 241}, Springer-Verlag, New York-Berlin, 1980
\bibitem{S} E. M. Stein, {\em Boundary Behaviour of Holomorphic Functions of Several Complex Variables,} Princeton Univ. Press, 1971
\bibitem{Sa} S. \c{S}ahin, {\em Poletsky-Stessin Hardy spaces on domains bounded by an analytic Jordan curve in $\mathbb C$,} Complex Variables and Elliptic Equations, DOI:10.1080/17476933.2014.1001112, (2014)
\bibitem{Sch} H. H. Schaefer, {\em Topological Vector Spaces,} The Macmillan Co., 1966
\bibitem{Sh1} K. R. Shrestha, {\em Boundary value properties of functions in weighted Hardy spaces,} arXiv:1309.6561
\bibitem{Sh2} K. R. Shrestha, {\em Weighted Hardy spaces on the unit disk,} Complex Anal. Oper. Theory, DOI 10.1007/s11785-014-0427-6
\bibitem{Sm} K. T. Smith, {\em A generalization of an inequality of Hardy and Littlewood,} Canad. J. Math., {\bf 8} (1956), 157--170
\bibitem{W} J. B. Walsh, {\em Continuity of envelopes of plurisubharmonic functions,} J. Math. Mech., {\bf 18} (1968), 143--148
\end{thebibliography}
\end{document}
\begin{document}
\author{Wei Wei, Tsinghua University}
\title{Tutorials on Advanced Optimization Methods}
\subtitle{}
\maketitle
\frontmatter
\mainmatter
\appendix
\chapter*{}
\begin{center}
{\Large \bf Tutorials on Advanced Optimization Methods}
\end{center}
\
\begin{center}
{\bf Wei Wei, Tsinghua University}
\end{center}
\
This material is the appendix of my book coauthored with Professor Jianhui Wang at Southern Methodist University:
{\color{blue} Wei Wei, Jianhui Wang. Modeling and Optimization of Interdependent Energy Infrastructures. Springer Nature Switzerland, 2020.}
{\small \url{https://link.springer.com/book/10.1007}}
This material provides thorough tutorials on some optimization techniques frequently used in various engineering disciplines, including
\begin{enumerate}
\item[$\clubsuit$] Convex optimization
\item[$\clubsuit$] Linearization technique and mixed-integer linear programming
\item[$\clubsuit$] Robust optimization
\item[$\clubsuit$] Equilibrium/game problems
\end{enumerate}
It discusses how to reformulate a difficult (non-convex, multi-agent, min-max) problem to a solver-compatible form (semidefinite program, mixed-integer linear program) via convexification, linearization, and decomposition, so the original problem can be reliably solved by commercial/open-source software. Fundamental algorithms (simplex algorithm, interior-point algorithm) are not the main focus.
This material is a good reference for self-learners who have basic knowledge in linear algebra and linear programming. It is one of the main references for an optimization course taught at Tsinghua University. If you need teaching slides, please contact {\color{blue}[email protected]} or find the up-to-date contact information at {\small \url{https://sites.google.com/view/weiweipes/}}
\pdfbookmark[2]{Bookmarktitle}{internal_label}
\tableofcontents
\motto{The great watershed in optimization isn't between linearity and nonlinearity, but convexity and non-convexity. \\ \rightline{ $-$Ralph Tyrrell Rockafellar}}
\chapter{Basics of Linear and Conic Programs}
\label{App-A}
Mathematical programming theory has been thoroughly developed in width and depth since its birth in the 1940s, when George Dantzig invented the simplex algorithm for linear programming. The most influential findings in the field of optimization theory can be summarized as \cite{App-A-CVX-Book-Ben}:
1) Recognition of the fact that under mild conditions, a convex optimization program is computationally tractable: the computational effort under a given accuracy grows moderately with the problem size even in the worst case. In contrast, a non-convex program is generally computationally intractable: the computational effort of the best known methods grows prohibitively fast with respect to the problem size, and it is reasonable to believe that this is an intrinsic feature of such problems rather than a limitation of existing optimization techniques.
2) The discovery of interior-point methods, which were originally developed in the 1980s to solve LPs and can be generalized to solve convex optimization problems as well. Moreover, between these two extremes (LPs and general convex programs), there are many important and useful convex programs. Although nonlinear, they still possess nice structural properties, which can be utilized to develop more dedicated algorithms. These polynomial-time interior-point algorithms turn out to be considerably more efficient than those exploiting only convexity.
The superiority of formulating a problem as a convex optimization problem is apparent. The most appealing advantage is that the problem can be solved reliably and efficiently. It is also convenient to build the associated dual problem, which gives insights on sensitivity information and may help develop distributed algorithm for solving the problem. Convex optimization has been applied in a number of energy system operational issues, and well acknowledged for its computational superiority. We believe that it is imperative for researchers and engineers to develop certain understanding on this important topic.
As we have already learnt in previous chapters, many optimization problems in energy system engineering can be formulated as or converted to convex programs. The goal of this chapter is to help readers develop the necessary background knowledge and skills to apply several well-structured convex optimization models, including LPs, SOCPs, and SDPs, i.e., to formulate or transform their problems as these specific convex programs. Certainly, convex transformation (or convexification) may be rather tricky and require special knowledge and skills. Nevertheless, the attempt often turns out to be worthwhile. We also pay special attention to nonconvex QCQPs, which can model various decision-making problems in engineering, such as optimal power flow and optimal gas flow. We discuss the convex relaxation technique based on SDP, which is shown to be very useful for obtaining a high-quality lower bound on the optimal objective value. We also present MILP formulations for some special QPs; because of the special problem structure, these MILP models can tackle practically sized problems in reasonable time.
Most materials regarding convex sets and functions come from \cite{App-A-CVX-Book-Boyd} and its solution manual \cite{App-A-CVX-Book-Solution}; extensions of duality theory from linear programming to conic programming follows from \cite{App-A-CVX-Book-Ben}. We consolidate necessary contents in a convenient way to make this book self-contained and easy to follow.
\section{Basic Notations}
\label{App-A-Sect01}
\subsection{Convex Sets}
\label{App-A-Sect01-01}
A set $C \subseteq \mathbb R^n$ is convex if the line segment connecting any two points in $C$ is contained in $C$, i.e., for any $x_1,x_2 \in C$, we have $\theta x_1 + (1-\theta)x_2 \in C$, $\forall \theta \in [0,1]$. Roughly speaking, standing at anywhere in a convex set, you can see every other point in the set. Fig. \ref{fig:App-01-01} illustrates a simple convex set and a non-convex set in $\mathbb R^2$.
\begin{figure}
\caption{Left: the circle is convex; right: the ring is non-convex.}
\label{fig:App-01-01}
\end{figure}
The convex combination of $k$ points $x_1,\cdots,x_k$ is defined as $\theta_1 x_1 + \cdots + \theta_k x_k$, where $\theta_1,\cdots,\theta_k \ge 0$, and $\theta_1 + \cdots +\theta_k = 1$. A convex combination of points can be regarded as a weighted average of the points, with $\theta_i$ the weight of $x_i$ in the mixture.
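The weighted-average reading of a convex combination can be sketched in plain Python; the points and weights below are arbitrary sample data chosen for illustration:

```python
# A convex combination of points: weights are nonnegative and sum to 1,
# so the result is a weighted average of the points.
def convex_combination(points, thetas):
    assert all(t >= 0 for t in thetas)
    assert abs(sum(thetas) - 1.0) < 1e-12
    dim = len(points[0])
    return tuple(sum(t * p[i] for t, p in zip(thetas, points)) for i in range(dim))

pts = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]         # vertices of a triangle
print(convex_combination(pts, [0.25, 0.25, 0.5]))  # (0.5, 1.0)
```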
The convex hull of set $C$, denoted conv$(C)$, is the smallest convex set that contains $C$. Particularly, if $C$ has finite elements, then
\begin{equation}
\mbox{conv}(C) = \{ \theta_1 x_1 + \cdots + \theta_k x_k ~|~ x_i \in C,~ \theta_i \ge 0,~ i=1,~ \cdots,k,~ \theta_1 + \cdots + \theta_k = 1\} \notag
\end{equation}
Fig. \ref{fig:App-01-02} illustrates the convex hulls of two sets in $\mathbb R^2$.
Some useful convex sets are briefly introduced.
\begin{figure}
\caption{Left: The convex hull of eighteen points. Right: The convex hull of a kidney shaped set.}
\label{fig:App-01-02}
\end{figure}
{\noindent \bf 1. Cones}
A set $C$ is called a cone, or nonnegative homogeneous, if for any $x \in C$, we have $\theta x \in C$, $\forall \theta \ge 0$. A set $C$ is a convex cone if it is convex and a cone: for any $x_1, x_2 \in C$ and $\theta_1,\theta_2 \ge 0$, we have $\theta_1 x_1 + \theta_2 x_2 \in C$.
The conic combination (or nonnegative linear combination) of $k$ points $x_1,\cdots,x_k$ is defined as $\theta_1 x_1 + \cdots + \theta_k x_k$, where $\theta_1,\cdots,\theta_k \ge 0$. If a set of points $\{x_i\}$, $i=1,2,\cdots$, resides in a convex cone $C$, then every conic combination of $\{x_i\}$ remains in $C$. Conversely, a set $C$ is a convex cone if and only if it contains all conic combinations of its elements. The conic hull of set $C$ is the smallest convex cone that contains $C$. Fig. \ref{fig:App-01-03} illustrates the conic hulls of two sets in $\mathbb R^2$.
\begin{figure}
\caption{The conic hulls of the two sets \cite{App-A-CVX-Book-Boyd}.}
\label{fig:App-01-03}
\end{figure}
Some widely used cones are introduced.
{\noindent \bf a. The nonnegative orthant}
The nonnegative orthant is defined as
\begin{equation}
\label{eq:App-01-Orthant-P}
\mathbb R^n_+ = \{ x \in \mathbb R^n ~|~ x \ge 0 \}
\end{equation}
It is the set of vectors composed of non-negative entries. It is clearly a convex cone.
{\noindent \bf b. Second-order cone}
The unit second-order cone is defined as
\begin{equation}
\label{eq:App-01-Unit-SOC}
\mathbb L^{n+1}_C = \{(x,t) \in \mathbb R^{n+1} ~|~ \| x \|_2 \le t\}
\end{equation}
It is also called the Lorentz cone or ice-cream cone. Fig. \ref{fig:App-01-04} exhibits $\mathbb L^3_C$.
\begin{figure}
\caption{$\mathbb L^3_C = \left\{ (x_1,x_2,t) ~\middle|~ \sqrt{x^2_1 + x^2_2} \le t \right\}$}
\label{fig:App-01-04}
\end{figure}
For any $(x,t) \in \mathbb L^{n+1}_C$, $(y,z) \in \mathbb L^{n+1}_C$, and $\theta_1,\theta_2 \ge 0$, we have
\begin{equation}
\| \theta_1 x + \theta_2 y \|_2 \le \theta_1 \|x\|_2 + \theta_2 \|y\|_2 \le \theta_1 t + \theta_2 z \Rightarrow
\theta_1 \begin{bmatrix} x \\ t \end{bmatrix} +
\theta_2 \begin{bmatrix} y \\ z \end{bmatrix} \in \mathbb L^{n+1}_C \notag
\end{equation}
which means that the unit second-order cone is a convex cone.
Sometimes, it is convenient to use the following inequality to represent a second-order cone in optimization problems
\begin{equation}
\label{eq:App-01-General-SOC}
\|A x + b \|_2 \le c^T x + d
\end{equation}
where $A \in \mathbb R^{m \times n}$, $b \in \mathbb R^m$, $c \in \mathbb R^n$, $d \in \mathbb R$. It is the inverse image of the unit second-order cone under the affine mapping $f(x) =(A x + b, c^T x + d)$, and hence is convex. Second-order cones in forms of (\ref{eq:App-01-Unit-SOC}) and (\ref{eq:App-01-General-SOC}) are interchangeable.
\begin{equation}
\|A x + b \|_2 \le c^T x + d \Leftrightarrow
\begin{bmatrix} A \\ c^T \end{bmatrix} x +
\begin{bmatrix} b \\ d \end{bmatrix} \in \mathbb L^{m+1}_C
\notag
\end{equation}
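Membership in the general form $\|Ax+b\|_2 \le c^Tx+d$ is easy to test numerically. A minimal Python sketch with hand-picked data; taking $A=I$, $b=0$, $c=0$, $d=1$ recovers the Euclidean unit ball:

```python
import math

# Membership test for the second-order cone ||A x + b||_2 <= c^T x + d,
# with small dense data given as nested lists (illustrative values only).
def in_second_order_cone(A, b, c, d, x, tol=1e-12):
    y = [sum(Ai[j] * x[j] for j in range(len(x))) + bi for Ai, bi in zip(A, b)]
    lhs = math.sqrt(sum(v * v for v in y))
    rhs = sum(ci * xi for ci, xi in zip(c, x)) + d
    return lhs <= rhs + tol

A = [[1.0, 0.0], [0.0, 1.0]]  # with b = 0, c = 0, d = 1: the unit ball
print(in_second_order_cone(A, [0.0, 0.0], [0.0, 0.0], 1.0, [0.6, 0.8]))  # True
print(in_second_order_cone(A, [0.0, 0.0], [0.0, 0.0], 1.0, [0.8, 0.8]))  # False
```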
{\noindent \bf c. Positive semidefinite cone}
The set of symmetric $m \times m$ matrices is denoted by
\begin{equation}
\mathbb S^m = \{X \in \mathbb R^{m \times m}~|~ X = X^T\} \notag
\end{equation}
which is a vector space with dimension $m(m + 1)/2$.
The set of symmetric positive semidefinite matrices is denoted by
\begin{equation}
\mathbb S^m_+ = \{X \in \mathbb S^m ~|~ X \succeq 0 \} \notag
\end{equation}
The set of symmetric positive definite matrices is denoted by
\begin{equation}
\mathbb S^m_{++} = \{X \in \mathbb S^m ~|~ X \succ 0 \} \notag
\end{equation}
Clearly, $\mathbb S^m_+$ is a convex cone: if $A, B \in \mathbb S^m_+$, then for any $x \in \mathbb R^m$ and nonnegative scalars $\theta_1,\theta_2 \ge 0$, we have
\begin{equation}
x^T(\theta_1 A + \theta_2 B) x = \theta_1 x^T A x + \theta_2 x^T B x \ge 0 \notag
\end{equation}
implying $\theta_1 A + \theta_2 B \in \mathbb S^m_+$.
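For $2\times 2$ symmetric matrices, positive semidefiniteness reduces to nonnegative diagonal entries and a nonnegative determinant. The following Python sketch uses this criterion to spot-check the convexity argument above on two hand-picked PSD matrices:

```python
# For 2x2 symmetric X: X is PSD iff both diagonal entries and det(X) are >= 0.
def is_psd_2x2(X, tol=1e-12):
    return (X[0][0] >= -tol and X[1][1] >= -tol
            and X[0][0] * X[1][1] - X[0][1] * X[1][0] >= -tol)

A = [[2.0, 1.0], [1.0, 1.0]]    # det = 1 > 0, PSD
B = [[1.0, -1.0], [-1.0, 2.0]]  # det = 1 > 0, PSD
t1, t2 = 0.3, 1.7               # arbitrary nonnegative weights
C = [[t1 * A[i][j] + t2 * B[i][j] for j in range(2)] for i in range(2)]
assert is_psd_2x2(A) and is_psd_2x2(B) and is_psd_2x2(C)
print("conic combination stayed PSD")
```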
A positive semidefinite cone in $\mathbb R^2$ can be expressed via three variables $x,y,z$ as
\begin{equation}
\begin{bmatrix}
x & y \\ y & z
\end{bmatrix} \succeq 0
\Leftrightarrow x \ge 0,~ z \ge 0,~ xz \ge y^2 \notag
\end{equation}
which is plotted in Fig. \ref{fig:App-01-05}. In fact, $\mathbb L^3_C$ and $\mathbb S^2_+$ are equivalent to each other. To see this, the hyperbolic inequality $xz \ge y^2$ with $x \ge 0, z \ge 0$ defines the same feasible region in $\mathbb R^3$ as the following second-order cone
\begin{equation}
\left\| \begin{gathered}
2y \\ x-z
\end{gathered} \right\|_2 \le x + z,~
x \ge 0,~ z \ge 0 \notag
\end{equation}
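The equivalence just displayed can be spot-checked numerically via the identity $(x+z)^2-(2y)^2-(x-z)^2=4(xz-y^2)$. A Python sketch over a small grid of sample points (values chosen arbitrarily):

```python
# For x, z >= 0, the hyperbolic inequality x z >= y^2 matches the
# second-order cone form ||(2y, x - z)||_2 <= x + z, because
# (x + z)^2 - (2y)^2 - (x - z)^2 = 4 (x z - y^2).
import math, itertools

def in_hyperbolic(x, y, z):
    return x >= 0 and z >= 0 and x * z >= y * y

def in_soc_form(x, y, z):
    return x >= 0 and z >= 0 and math.hypot(2 * y, x - z) <= x + z

for x, y, z in itertools.product([0.0, 0.5, 1.0, 2.0], repeat=3):
    assert in_hyperbolic(x, y, z) == in_soc_form(x, y, z)
print("hyperbolic and SOC forms agree on all sample points")
```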
In higher-order dimensions, every second-order cone can be written as an LMI via Schur complement as
\begin{equation}
\label{eq:App-01-SOC-LMI}
\|A x + b \|_2 \le c^T x + d \Rightarrow
\begin{bmatrix} (c^T x + d)\, I & A x + b \\ (A x + b)^T & c^T x + d \end{bmatrix} \succeq 0
\end{equation}
\begin{figure}
\caption{Positive semidefinite cone in $\mathbb S^2$ (or in $\mathbb R^3$) \cite{App-A-CVX-Book-Boyd}.}
\label{fig:App-01-05}
\end{figure}
In this sense of representability, positive semidefinite cones are more general than second-order cones. However, the transformation in (\ref{eq:App-01-SOC-LMI}) may not be superior from the computational perspective, because SOCPs are more tractable than SDPs.
{\noindent \bf d. Copositive cone}
A copositive cone $\mathbb C^n_+$ consists of symmetric matrices whose quadratic form is nonnegative over the nonnegative orthant $\mathbb R^n_+$:
\begin{equation}
\label{eq:App-01-COP-Cone}
\mathbb C^n_+ = \{ A ~|~ A \in \mathbb S^n,~ x^T A x \ge 0,~
\forall x \in \mathbb R^n_+ \}
\end{equation}
The copositive cone $\mathbb C^n_+$ is closed, pointed, and convex \cite{App-A-COP-Cone-Property}. Clearly, $\mathbb S^n_+ \subseteq \mathbb C^n_+$, and every entry-wise nonnegative symmetric matrix $A$ belongs to $\mathbb C^n_+$. Actually, $\mathbb C^n_+$ is significantly larger than the positive semidefinite cone and the nonnegative symmetric matrix cone.
{\noindent \bf 2. Polyhedra}
A polyhedron is defined as the solution set of a finite number of linear inequalities:
\begin{equation}
\label{eq:App-01-Poly-H}
P = \{ x~|~ Ax \le b \}
\end{equation}
(\ref{eq:App-01-Poly-H}) is also called a hyperplane representation for a polyhedron. It is easy to show that polyhedra are convex sets. Sometimes, a polyhedron is also called a polytope. The two concepts are often used interchangeably in this book. Because of physical bounds of decision variables, the polyhedral feasible regions in practical energy system optimization problems are usually bounded, which means that there is no extreme ray.
Polyhedra can be expressed via the convex combination as well. The convex hull of a finite number of points
\begin{equation}
\label{eq:App-01-Poly-V-1}
\mbox{conv} \{v_1,\cdots,v_k\} = \{ \theta_1 v_1 + \cdots + \theta_k v_k ~|~ \theta \ge 0,~ 1^T \theta = 1 \}
\end{equation}
defines a polyhedron. (\ref{eq:App-01-Poly-V-1}) is called a convex hull representation. If the polyhedron is unbounded, a generalization of this convex hull representation is
\begin{equation}
\label{eq:App-01-Poly-V-2}
\{ \theta_1 v_1 + \cdots + \theta_k v_k ~|~ \theta \ge 0,~ \theta_1 + \cdots + \theta_m = 1, m \le k \}
\end{equation}
which considers nonnegative linear combinations of $v_i$, but only the first $m$ coefficients, whose summation is 1, are bounded; the remaining ones can take arbitrarily large values. In view of this, the convex hull of points $v_1,\cdots,v_m$ plus the conic hull of points $v_{m+1}, \cdots,v_k$ is a polyhedron. The reverse is also correct: any polyhedron can be represented by a convex hull and a conic hull.
How to represent a polyhedron depends on what information is available: if its boundaries are expressed via linear inequalities, the hyperplane representation is straightforward; if its extreme points and extreme rays are known in advance, the hull representation is more convenient. As the dimension grows, it becomes more and more difficult to switch between the hyperplane representation and the hull representation, i.e., to derive one from the other.
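To make the two representations concrete, consider the triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$: its hyperplane form $Ax\le b$ and its convex hull form describe the same set. A quick Python check, with weights picked by hand:

```python
# The triangle conv{(0,0), (1,0), (0,1)} in hyperplane form A x <= b:
# -x1 <= 0, -x2 <= 0, x1 + x2 <= 1.  A point built as a convex
# combination of the vertices must satisfy every inequality.
A = [[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]]
b = [0.0, 0.0, 1.0]
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
theta = [0.2, 0.3, 0.5]  # arbitrary nonnegative weights summing to 1
x = tuple(sum(t * v[i] for t, v in zip(theta, verts)) for i in range(2))
assert all(sum(Ai[i] * x[i] for i in range(2)) <= bi + 1e-12
           for Ai, bi in zip(A, b))
print(x)  # (0.3, 0.5)
```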
\subsection{Generalized Inequalities}
\label{App-A-Sect01-02}
A cone $K \subseteq \mathbb R^n$ is called a proper cone if it satisfies:
1) $K$ is convex and closed.
2) $K$ is solid, i.e., it has non-empty interior.
3) $K$ is pointed, i.e., $x \in K$, $-x \in K$ $\Rightarrow x = 0$.
A proper cone $K$ can be used to define a generalized inequality, a partial ordering on $\mathbb R^n$, as follows
\begin{equation}
\label{eq:App-01-GNI}
x \preceq_K y \Longleftrightarrow y - x \in K
\end{equation}
We denote $x \succeq_K y$ for $y \preceq_K x$. Similarly, a strict partial
ordering can be defined by
\begin{equation}
\label{eq:App-01-GNI-Strict}
x \prec_K y \Longleftrightarrow y - x \in \mbox{int}(K)
\end{equation}
where int$(K)$ stands for the interior of $K$, and write $x \succ_K y$ for $y \prec_K x$.
The nonnegative orthant $\mathbb R^n_+$ is a proper cone. When $K = \mathbb R^n_+$, the partial ordering $\preceq_K$ comes down to the element-wise comparison between vectors: for $x,y \in \mathbb R^n$, $x \preceq_{\mathbb R^n_+} y$ means $ x_i \le y_i$, $i = 1,\cdots,n$, or the traditional notation $x \le y$.
The positive semidefinite cone $\mathbb S^n_+$ is a proper cone in $\mathbb S^n$. When $K = \mathbb S^n_+$, the partial ordering $\preceq_K$ comes down to a linear matrix inequality between symmetric matrices: for $X,Y \in \mathbb S^n$, $X \preceq_{\mathbb S^n_+} Y$ means $ Y - X$ is positive semidefinite. Because it arises so frequently, we can drop the subscript $\mathbb S^n_+$ when we write a linear matrix inequality $Y \succeq X$ or $X \preceq Y$. It is understood that such a generalized inequality corresponds to the positive semidefinite cone without particular mention.
A generalized inequality is equivalent to linear constraints with $K = \mathbb R^n_+$; for other cones, such as the second-order cone $\mathbb L^{n+1}_C$ or the positive semidefinite cone $\mathbb S^n_+$, the feasible region is nonlinear but remains convex.
\subsection{Dual Cones and Dual Generalized Inequalities}
\label{App-A-Sect01-03}
Let $K$ be a cone in $\mathbb R^n$. Its dual is defined as the following set
\begin{equation}
\label{eq:App-01-Dual-Cone}
K^* = \{y ~|~ x^T y \ge 0,~ \forall x \in K\}
\end{equation}
Because $K^*$ is the intersection of homogeneous half spaces (half spaces passing through the origin), it is a closed convex cone.
The interior of $K^*$ is given by
\begin{equation}
\label{eq:App-01-Interior-DCone}
\mbox{int}(K^*) = \{y ~|~ x^T y > 0,~ \forall x \in K,~ x \ne 0 \}
\end{equation}
To see this, if $y^T x > 0$, $\forall x \in K$, then $(y+u)^T x > 0$, $\forall x \in K$ holds for all sufficiently small $u$; hence $y \in \mbox{int}(K^*)$. Conversely, if $y \in K^*$ and $\exists x \in K: y^T x = 0$, $x \ne 0$, then $(y - tx)^T x < 0$, $\forall t > 0$, indicating $y \notin \mbox{int}(K^*)$.
If int$(K) \ne \emptyset$, then $K^*$ is pointed. Indeed, suppose that $\exists y \ne 0$: $y \in K^*$, $-y \in K^*$, i.e., $y^T x \ge 0$, $\forall x \in K$ and $-y^T x \ge 0$, $\forall x \in K$; then $x^T y = 0$, $\forall x \in K$, which is in contradiction with int$(K) \ne \emptyset$.
In conclusion, $K^*$ is a proper cone, if the original cone $K$ is so; $K^*$ is closed and convex, regardless of the original cone $K$. Fig. \ref{fig:App-01-06} shows a cone $K$ (the region between $L_2$ and $L_3$) and its dual cone $K^*$ (the region between $L_1$ and $L_4$) in $\mathbb R^2$.
\begin{figure}
\caption{Illustration of a cone and its dual cone in $\mathbb R^2$.}
\label{fig:App-01-06}
\end{figure}
In light of the definition of $K^*$, a non-zero vector $y$ is the normal of a homogeneous half space containing $K$ if and only if $y \in K^*$. The intersection of all such half spaces constitutes the cone $K$ (if $K$ is closed and convex); in view of this,
\begin{equation}
\label{eq:App-01-Double-Dual}
K = \bigcap_{y \in K^*} \left\{x ~|~ y^T x \ge 0 \right\} =
\{x ~|~ y^T x \ge 0,~ \forall y \in K^* \} = K^{**}
\end{equation}
This fact can be also understood in $\mathbb R^2$ from Fig. \ref{fig:App-01-06}. The extreme cases for the normal vector $y$ such that the corresponding half space contains $K$ are $L_1$ and $L_4$, and the intersection of these half spaces for all $y \in K^*$ turns out to be the original cone $K$.
Next, we investigate the dual cones of three special proper cones, i.e., $\mathbb R^n_+$, $\mathbb L^{n+1}_C$, and $\mathbb S^n_+$, respectively.
{\noindent \bf 1. The nonnegative orthant}
By observing the fact
\begin{equation}
x^T y \ge 0,~ \forall x \ge 0 \Longleftrightarrow y \ge 0 \notag
\end{equation}
we naturally have $(\mathbb R^n_+)^* = \mathbb R^n_+$; in other words, the nonnegative orthant is self-dual.
{\noindent \bf 2. The second-order cone}
Now, we show that the second-order cone is also self-dual: $(\mathbb L^{n+1}_C)^* = \mathbb L^{n+1}_C$. To this end, we need to demonstrate
\begin{equation}
x^T u + t v \ge 0,~ \forall (x,t) \in \mathbb L^{n+1}_C
\Longleftrightarrow \|u\|_2 \le v \notag
\end{equation}
$\Rightarrow$: Suppose the right-hand condition fails, i.e., $\|u\|_2 > v$. Recalling the Cauchy-Schwarz inequality $|a^T b| \le \|a\|_2 \|b\|_2$, we have
\begin{equation}
\min_{x} \left\{ x^T u ~\middle|~ \left\| x \right\|_2 \le t \right\} = - t \| u\|_2 \notag
\end{equation}
Choosing such a minimizing $x$ gives $x^T u + t v = t(v - \|u\|_2) < 0$ for any $t > 0$, which contradicts the left-hand condition.
$\Leftarrow$: Again, by the Cauchy-Schwarz inequality and $t \ge \|x\|_2$, we have
\begin{equation*}
x^T u + t v \ge -\|x\|_2 \|u\|_2 + t v \ge -\|x\|_2 \|u\|_2 + \|x\|_2 v = \|x\|_2 \left( v - \|u\|_2 \right) \ge 0
\end{equation*}
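As a quick numerical sanity check of this self-duality (a randomized illustration, not a proof; NumPy is assumed), one can sample pairs of points of $\mathbb L^{4}_C$ and verify that their pairings are nonnegative, and that a pair $(u,v)$ with $\|u\|_2 > v$ is exposed by the worst-case $x$ from the Cauchy-Schwarz argument:

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomized check that <(x, t), (u, v)> >= 0 whenever both lie in the cone
for _ in range(1000):
    x = rng.normal(size=3)
    t = np.linalg.norm(x) + rng.uniform(0.0, 1.0)   # (x, t) in the second-order cone
    u = rng.normal(size=3)
    v = np.linalg.norm(u) + rng.uniform(0.0, 1.0)   # (u, v) in the second-order cone
    assert x @ u + t * v >= -1e-9

# Conversely, if ||u||_2 > v, the worst case x = -t*u/||u|| makes the pairing negative
u = np.array([3.0, 4.0, 0.0])
v = 4.0                                # ||u||_2 = 5 > v
t = 1.0
x = -t * u / np.linalg.norm(u)         # x^T u = -t ||u||_2
assert x @ u + t * v < 0               # hence (u, v) is not in the dual cone
```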
{\noindent \bf 3. The positive semidefinite cone}
We investigate the dual cone of $\mathbb S^n_+$. The inner product of $X,Y \in \mathbb S^n$ is defined by the element-wise summation
\begin{equation}
\langle X,Y \rangle = \sum_{i=1}^n \sum_{j=1}^n X_{ij} Y_{ij}
= \mbox{tr}(X Y^T) \notag
\end{equation}
We establish this fact: $(\mathbb S^n_+)^* = \mathbb S^n_+$, which boils down to
\begin{equation}
\mbox{tr}(X Y^T) \ge 0,~ \forall X \succeq 0
\Longleftrightarrow Y \succeq 0 \notag
\end{equation}
$\Rightarrow$: Suppose $Y \notin \mathbb S^n_+$, then $\exists q \in \mathbb R^n$ such that
\begin{equation}
q^T Y q = \mbox{tr}(q q^T Y^T) < 0 \notag
\end{equation}
which is in contradiction with the left-hand condition because $X = q q^T \in \mathbb S^n_+$.
$\Leftarrow$: Now suppose $X, Y \in \mathbb S^n_+$. $X$ can be expressed via its eigenvalues $\lambda_i \ge 0$ and eigenvectors $q_i$ as $X = \sum_{i=1}^n \lambda_i q_i q^T_i$, then we arrive at
\begin{equation}
\mbox{tr}(X Y^T) = \mbox{tr} \left( Y\sum_{i=1}^n \lambda_i q_i q^T_i \right)= \sum_{i=1}^n \lambda_i q^T_i Y q_i \ge 0 \notag
\end{equation}
In summary, it follows that the positive semidefinite cone is self-dual.
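The two directions of this argument can likewise be illustrated numerically (a sketch, assuming NumPy; random PSD matrices are generated in the form $AA^T$):

```python
import numpy as np

rng = np.random.default_rng(1)

# <X, Y> = tr(X Y^T) >= 0 for random PSD matrices X = A A^T, Y = B B^T
for _ in range(200):
    A = rng.normal(size=(4, 4))
    B = rng.normal(size=(4, 4))
    X, Y = A @ A.T, B @ B.T
    assert np.trace(X @ Y.T) >= -1e-9

# A symmetric Y that is not PSD is exposed by some rank-one X = q q^T
Y = np.diag([1.0, -2.0])
q = np.array([0.0, 1.0])
X = np.outer(q, q)
val = np.trace(X @ Y.T)
assert np.isclose(val, q @ Y @ q)   # tr(q q^T Y) equals q^T Y q
assert val < 0                      # so this Y lies outside the dual cone
```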
{\noindent \bf 4. The completely positive cone}
Following the same concept of matrix inner product, it can be shown \cite{App-A-COP-Cone-Dual} that $(\mathbb C^n_+)^*$ is the cone of so-called completely positive matrices, which can be expressed as
\begin{equation}
\label{eq:App-01-COMPL-Cone}
(\mathbb C^n_+)^* = \mbox{conv} \{ x x^T ~|~ x \in \mathbb R^n_+\}
\end{equation}
In contrast to the previous three cones, the copositive cone $\mathbb C^n_+$ is not self-dual.
When the dual cone $K^*$ is proper, it induces a generalized inequality $\preceq_{K^*}$, which is called the dual generalized inequality of the one induced by cone $K$ (if $K$ is proper). According to the definition of dual cone, an important fact relating a generalized inequality and its dual is
1) $x \preceq_K y$ if and only if $\lambda^T x \le \lambda^T y$, $\forall \lambda \in K^*$.
2) $x \prec_K y$ if and only if $\lambda^T x < \lambda^T y$, $\forall \lambda \in K^*$, $\lambda \ne 0$.
When $K = K^{**}$, the dual generalized inequality of $\preceq_{K^*}$ is $\preceq_K$, and the above property holds if the positions of $K$ and $K^*$ are swapped.
\subsection{Convex Function and Epigraph}
\label{App-A-Sect01-04}
A function $f:\mathbb R^n \to \mathbb R$ is convex if its domain $X$ is a convex set and, for all $x_1,x_2 \in X$, the following condition holds
\begin{equation}
\label{eq:App-01-Convex-Function}
f(\theta x_1 + (1-\theta)x_2) \le \theta f(x_1) + (1-\theta) f(x_2),~
\forall \theta \in [0,1]
\end{equation}
The geometrical interpretation of inequality (\ref{eq:App-01-Convex-Function}) is that the chord connecting points $(x_1, f(x_1))$ and $(x_2, f(x_2))$ always lies above the curve of $f$ between $x_1$ and $x_2$ (see Fig. \ref{fig:App-01-07}). Function $f$ is strictly convex if strict inequality holds in (\ref{eq:App-01-Convex-Function}) when $x_1 \ne x_2$ and $0 < \theta < 1$. Function $f$ is called (strictly) concave if $-f$ is (strictly) convex. An affine function is both convex and concave.
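For a given function, the defining inequality (\ref{eq:App-01-Convex-Function}) is easy to test numerically; below is a randomized sketch for the convex function $f(x)=\|x\|_2^2$ (an illustration only, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    """A convex function: f(x) = ||x||_2^2."""
    return float(x @ x)

# Chord condition: f(th*x1 + (1-th)*x2) <= th*f(x1) + (1-th)*f(x2)
for _ in range(500):
    x1, x2 = rng.normal(size=3), rng.normal(size=3)
    th = rng.uniform()
    lhs = f(th * x1 + (1 - th) * x2)
    rhs = th * f(x1) + (1 - th) * f(x2)
    assert lhs <= rhs + 1e-9   # the chord lies above the graph
```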
The graph of a function $f:\mathbb R^n \to \mathbb R$ is defined as
\begin{equation}
\label{eq:App-01-Graph-f}
\mbox{graph } f = \{(x,f(x))~|~ x \in X\}
\end{equation}
which is a subset of $\mathbb R^{n+1}$.
The epigraph of a function $f:\mathbb R^n \to \mathbb R$ is defined as
\begin{equation}
\label{eq:App-01-Epigraph-f}
\mbox{epi } f = \{(x,t)~|~ x \in X,~ f(x) \le t \}
\end{equation}
which is a subset of $\mathbb R^{n+1}$. These definitions are illustrated through Fig. \ref{fig:App-01-07}.
\begin{figure}
\caption{Illustration of the graph of a convex function $f(x)$ (the solid line) and its epigraph (the shaded area) in $\mathbb R^2$.}
\label{fig:App-01-07}
\end{figure}
Epigraph bridges the concepts of convex sets and convex functions: a function is convex if and only if its epigraph is a convex set. The epigraph is frequently used in formulating optimization problems: a nonlinear objective function can be replaced by a linear objective together with an additional constraint in epigraph form, so we may assume that any optimization problem has a linear objective function. This alone does not make the problem easier, since the non-convexity simply moves into the constraints when the objective function is not convex. Nevertheless, the solution of an optimization problem with a linear objective can always be found at the boundary of the convex hull of its feasible region, implying that if we can characterize this convex hull, a problem in epigraph form admits an exact convex hull relaxation. In general, however, it is difficult to express the convex hull in an analytical form.
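A concrete instance of the epigraph trick: minimizing the convex piecewise-linear function $f(x)=\max(2x+1,\,-x+3)$ becomes the LP $\min\{t \mid 2x+1 \le t,\ -x+3 \le t\}$ in the variables $(x,t)$. The sketch below solves it with SciPy's \texttt{linprog} (the solver choice is this illustration's assumption, not part of the text):

```python
import numpy as np
from scipy.optimize import linprog

# Epigraph form of min_x max(2x + 1, -x + 3): variables (x, t), minimize t
c = np.array([0.0, 1.0])
A_ub = np.array([[ 2.0, -1.0],   #  2x + 1 <= t   ->   2x - t <= -1
                 [-1.0, -1.0]])  # -x + 3 <= t   ->  -x - t <= -3
b_ub = np.array([-1.0, -3.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
assert res.status == 0
assert abs(res.fun - 7 / 3) < 1e-6   # the two pieces meet at x = 2/3, f = 7/3
```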
Analyzing convex functions is a well-developed field. Broadening one's knowledge of convex analysis can be mathematically demanding, especially for readers who are primarily interested in applications, so we do not pursue the more sophisticated theories in depth here. Readers are referred to the literature suggested at the end of this chapter for further information.
\section{From Linear to Conic Program}
\label{App-A-Sect02}
Linear programming is one of the most mature and tractable mathematical programming problems. In this section, we first investigate and explain the motivation of linear programming duality theory, and then provide a unified model for conic programming problems. LPs, SOCPs, and SDPs are special cases of conic programs associated with generalized inequalities $\preceq_K$ where $K=\mathbb R^n_+$, $\mathbb L^{n+1}_C$, and $\mathbb S^n_+$, respectively. Our aim is to help readers who are not familiar with conic programs build their decision-making problems in these formats with structured convexity, and write out their dual problems more conveniently. The presentation logic is consistent with \cite{App-A-CVX-Book-Ben}, and most of the material presented in this section also comes from \cite{App-A-CVX-Book-Ben}.
\subsection{Linear Program and its Duality Theory}
\label{App-A-Sect02-01}
A linear program is an optimization program with the form
\begin{equation}
\label{eq:App-01-LP-Compact}
\min \{ c^T x ~|~ A x \ge b \}
\end{equation}
where $x$ is the vector of decision variables, and $A$, $b$, $c$ are constant coefficients of compatible dimensions. We assume LP (\ref{eq:App-01-LP-Compact}) is feasible, i.e., its feasible set $X = \{x~|~ Ax \ge b\}$ is a non-empty polyhedron; moreover, because of the limited ranges of decision variables representing physical quantities, we assume $X$ is bounded. Under these assumptions, LP (\ref{eq:App-01-LP-Compact}) always has a finite optimum. LPs can be solved by mature algorithms, such as the simplex algorithm and the interior-point algorithm, which are not the main focus of this book.
A question which is important both in theory and practice is: how can we bound the optimal value of (\ref{eq:App-01-LP-Compact}) in a systematic way? Clearly, if $x$ is a feasible solution, an instant upper bound is given by $c^T x$. Lower bounding means finding a value $a$ such that $c^T x \ge a$ holds for all $x \in X$.
A trivial answer is to solve the problem and retrieve its optimal value, which is the tightest lower bound. However, there may be a smarter way to obtain a valid lower bound at much lower computational expense. To outline the basic motivation, let us consider the following example
\begin{equation}
\label{eq:App-01-LP-Example}
\min \left\{ \sum_{i=1}^6 x_i ~\middle|~ \begin{gathered}
2 x_1 + 1 x_2 + 3 x_3 + 8 x_4 + 5 x_5 + 3 x_6 \ge 5 \\
6 x_1 + 2 x_2 + 6 x_3 + 1 x_4 + 1 x_5 + 4 x_6 \ge 2 \\
2 x_1 + 7 x_2 + 1 x_3 + 1 x_4 + 4 x_5 + 3 x_6 \ge 1 \\
\end{gathered} \right\}
\end{equation}
Although LP (\ref{eq:App-01-LP-Example}) is merely a toy case for modern solvers and computers, it may still look a little complicated for mental arithmetic. In fact, we can tell that the optimal value is 0.8 at a glance, without any sophisticated calculation: summing up the three constraints yields the inequality
\begin{equation}
\label{eq:App-01-LP-Example-Weigh-Sum}
10(x_1 + x_2 + x_3 + x_4 + x_5 + x_6) \ge 8
\end{equation}
Dividing both sides of (\ref{eq:App-01-LP-Example-Weigh-Sum}) by 10 shows that the objective function must take a value greater than or equal to 0.8 at any feasible point; moreover, 0.8 is attainable, since we can find a point $x^*$ which activates the three constraints simultaneously, so that (\ref{eq:App-01-LP-Example-Weigh-Sum}) becomes an equality. LP duality is merely a formal generalization of this simple trick.
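This aggregation argument is easy to verify numerically. The sketch below (assuming SciPy's \texttt{linprog} is available; it plays no role in the text itself) checks that every column of the coefficient matrix sums to 10, which is exactly what makes the trick work, and that the LP optimum is indeed 0.8:

```python
import numpy as np
from scipy.optimize import linprog

# Data of the toy LP (eq:App-01-LP-Example)
A = np.array([[2., 1., 3., 8., 5., 3.],
              [6., 2., 6., 1., 1., 4.],
              [2., 7., 1., 1., 4., 3.]])
b = np.array([5., 2., 1.])
c = np.ones(6)

# Every column of A sums to 10: adding the rows yields 10 * sum(x) >= 8
assert np.allclose(A.sum(axis=0), 10.0)

# linprog expects A_ub x <= b_ub, so negate A x >= b; the variables are free
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 6)
assert res.status == 0
assert abs(res.fun - 0.8) < 1e-6   # the optimal value is 8/10
```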
Multiplying each constraint in $Ax \ge b$ by a non-negative weight $\lambda_i$ and adding the results together, we obtain
\begin{equation}
\lambda^T A x \ge \lambda^T b \notag
\end{equation}
If we choose $\lambda$ carefully such that $\lambda^T A = c^T$, then $\lambda^T b$ will be a valid lower bound on the optimal value of (\ref{eq:App-01-LP-Compact}). To improve the lower bound estimate, one may optimize the weighting vector $\lambda$, giving rise to the following problem
\begin{equation}
\label{eq:App-01-LP-Dual}
\max_{\lambda} \{ \lambda^T b ~|~ A^T \lambda = c,~ \lambda \ge 0 \}
\end{equation}
where $\lambda$ is the vector of decision variables, or dual variables, and the feasible region $D=\{\lambda ~|~ A^T \lambda = c,~ \lambda \ge 0\}$ is a polyhedron. Clearly, (\ref{eq:App-01-LP-Dual}) is also an LP, and is called the dual problem of LP (\ref{eq:App-01-LP-Compact}); correspondingly, (\ref{eq:App-01-LP-Compact}) is called the primal problem. From the above construction, we immediately conclude $c^T x \ge \lambda^T b$.
\begin{proposition}
\label{pr:App-01-LP-Weak-Duality}
(Weak duality): The optimal value of (\ref{eq:App-01-LP-Dual}) is less than or equal to the optimal value of (\ref{eq:App-01-LP-Compact}).
\end{proposition}
In fact, the optimal bound offered by (\ref{eq:App-01-LP-Dual}) is tight.
\begin{proposition}
\label{pr:App-01-LP-Strong-Duality}
(Strong duality): Optimal values of (\ref{eq:App-01-LP-Dual}) and (\ref{eq:App-01-LP-Compact}) are equal.
\end{proposition}
An explanation, following \cite{App-A-CVX-Book-Ben}, is as follows. If a real number $a$ is the optimal value of the primal LP (\ref{eq:App-01-LP-Compact}), the system of linear inequalities
\begin{equation}
S^P: \left\{ \begin{gathered}
-c^T x > - a : \lambda_0 \\
A x \ge b: \lambda
\end{gathered} \right. \notag
\end{equation}
must have an empty solution set, indicating that at least one of the following two systems has a solution (by the separation property invoked below)
\begin{equation}
S^D_1: \left\{ \begin{gathered}
-\lambda_0 c + A^T \lambda = 0 \\
-\lambda_0 a + b^T \lambda \ge 0 \\
~~ \lambda_0 > 0,~~ \lambda \ge 0
\end{gathered} \right. \notag
\end{equation}
\begin{equation}
S^D_2: \left\{ \begin{gathered}
-\lambda_0 c + A^T \lambda = 0 \\
-\lambda_0 a + b^T \lambda > 0 \\
~~ \lambda_0 \ge 0,~~ \lambda \ge 0
\end{gathered} \right. \notag
\end{equation}
We can show that $S^P$ has no solutions if and only if $S^D_1$ has a solution.
That a solution of $S^D_1$ implies $S^P$ has no solution is clear. Suppose, on the contrary, that $S^P$ has a solution $x$; because $\lambda_0$ is strictly positive, the weighted summation of the inequalities in $S^P$ leads to
\begin{equation*}
0 = 0^T x =(-\lambda_0 c + A^T \lambda)^T x =-\lambda_0 c^T x + \lambda^T A x > -\lambda_0 a + \lambda^T b
\end{equation*}
which is in contradiction with the second inequality in $S^D_1$.
$S^P$ has no solution $\Rightarrow$ $S^D_1$ has a solution. Suppose $S^D_1$ has no solution; then $S^D_2$ must have one, owing to the separation property (Theorem 1.2.1 in \cite{App-A-CVX-Book-Ben}). Moreover, if $\lambda_0 > 0$, the solution of $S^D_2$ would also solve $S^D_1$, so there must be $\lambda_0 = 0$. As a result, the solution of $S^D_2$ is independent of the values of $a$ and $c$. Setting $c = 0$ and $a = 0$, the solution $\lambda$ of $S^D_2$ satisfies $A^T \lambda = 0$, $b^T \lambda > 0$. Therefore, for any $x$ of compatible dimension, $\lambda^T(Ax-b) = \lambda^T Ax - \lambda^T b <0$ holds. Because $\lambda \ge 0$, this shows that $Ax \ge b$ has no solution, contradicting the assumption that (\ref{eq:App-01-LP-Compact}) is feasible.
Now, consider the solution of $S^D_1$. Without loss of generality, we can assume $\lambda_0 = 1$; otherwise, ($1,\lambda/\lambda_0$) also solves $S^D_1$. Under this normalization ($\lambda_0=1$), $S^D_1$ comes down to
\begin{equation}
S^D_3: \left\{
\begin{gathered}
A^T \lambda = c \\
b^T \lambda \ge a \\
\lambda \ge 0
\end{gathered} \right. \notag
\end{equation}
Now we can see the strong duality: let $a^*$ be the optimal value of (\ref{eq:App-01-LP-Compact}). For any $a < a^*$, $S^P$ has no solution, so $S^D_1$ has a solution $(1,\lambda^*)$. According to $S^D_3$, the optimal value of (\ref{eq:App-01-LP-Dual}) is no smaller than $a$, i.e., $a \le b^T \lambda^* \le a^*$. Letting $a$ tend to $a^*$, we conclude that the primal and dual optimal values are equal. Since the primal problem always has a finite optimum (as we assumed before), so does the dual problem, as they share the same optimal value. Nevertheless, even if the primal feasible region is bounded, the dual feasible set $D$ may be unbounded, although the dual problem is always bounded above. Please refer to \cite{App-A-LP-Book-Dantzig,App-A-LP-Book-Bertsimas,App-A-LP-Book-Vanderbei} for more information on duality theory in linear programming.
\begin{proposition}
\label{pr:App-01-LP-OCD-Primal-Dual}
(Primal-dual optimality condition) If LP (\ref{eq:App-01-LP-Compact}) is feasible and $X$ is bounded, then any feasible solution ($x^*,\lambda^*$) to the following system
\begin{equation}
\label{eq:App-01-LP-OCD-Primal-Dual}
\begin{gathered}
A x \ge b \\
A^T \lambda = c,~ \lambda \ge 0 \\
c^T x = b^T \lambda
\end{gathered}
\end{equation}
solves the original primal-dual pair of LPs: $x^*$ is the optimal solution of (\ref{eq:App-01-LP-Compact}), and $\lambda^*$ is the optimal solution of (\ref{eq:App-01-LP-Dual}).
\end{proposition}
\noindent (\ref{eq:App-01-LP-OCD-Primal-Dual}) is also called the primal-dual optimality condition of LPs. It consists of linear inequalities and equalities, and there is no objective function to be optimized.
Substituting $c = A^T \lambda$ into the last equation of (\ref{eq:App-01-LP-OCD-Primal-Dual}) gives $\lambda^T A x = \lambda^T b$, i.e.
\begin{equation}
\lambda^T (b-Ax) = 0 \notag
\end{equation}
Since $\lambda \ge 0$ and $Ax \ge b$, the above equation is equivalent to
\begin{equation}
\lambda_i (b-Ax)_i = 0,~ \forall i \notag
\end{equation}
where notation $(b-Ax)_i$ and $\lambda_i$ stand for the $i$-th components of vectors $b-Ax$ and $\lambda$, respectively. Since $(Ax-b)_i \ge 0$, this condition means that at most one of $\lambda_i$ and $(Ax-b)_i$ can take a strictly positive value. In other words, if the $i$-th inequality constraint is inactive, then its dual multiplier $\lambda_i$ must be 0; conversely, if $\lambda_i >0$, then the corresponding inequality constraint must be binding. This phenomenon is called the complementarity and slackness condition.
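For the toy LP (\ref{eq:App-01-LP-Example}), strong duality and complementary slackness can be checked numerically by solving the primal and dual with SciPy's \texttt{linprog} (again purely an illustration under that assumption):

```python
import numpy as np
from scipy.optimize import linprog

# Same toy data as in (eq:App-01-LP-Example)
A = np.array([[2., 1., 3., 8., 5., 3.],
              [6., 2., 6., 1., 1., 4.],
              [2., 7., 1., 1., 4., 3.]])
b = np.array([5., 2., 1.])
c = np.ones(6)

# Primal: min c^T x s.t. A x >= b (free x); dual: max b^T lam s.t. A^T lam = c, lam >= 0
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 6)
dual = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 3)
assert primal.status == 0 and dual.status == 0

# Strong duality: c^T x* = b^T lam* (dual.fun carries the negated dual objective)
assert abs(primal.fun + dual.fun) < 1e-6

# Complementary slackness: lam_i * (A x* - b)_i = 0 for every i
slack = A @ primal.x - b
assert np.all(np.abs(dual.x * slack) < 1e-6)
```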
Applying the KKT optimality condition for general nonlinear programs to LP (\ref{eq:App-01-LP-Compact}), we have:
\begin{proposition}
\label{pr:App-01-LP-OCD-KKT}
(KKT optimality condition) If LP (\ref{eq:App-01-LP-Compact}) is feasible and $X$ is bounded, the following system
\begin{equation}
\label{eq:App-01-LP-OCD-KKT}
\begin{gathered}
0 \le \lambda \bot A x - b \ge 0 \\
A^T \lambda = c
\end{gathered}
\end{equation}
has a solution ($x^*,\lambda^*$), which may not be unique, where $a \bot b$ means $a^T b = 0$; $x^*$ solves (\ref{eq:App-01-LP-Compact}) and $\lambda^*$ solves (\ref{eq:App-01-LP-Dual}).
\end{proposition}
The question of which of (\ref{eq:App-01-LP-OCD-Primal-Dual}) and (\ref{eq:App-01-LP-OCD-KKT}) is better can be subtle and has very different practical consequences. At first glance, the former seems more tractable, because (\ref{eq:App-01-LP-OCD-Primal-Dual}) is a linear system while (\ref{eq:App-01-LP-OCD-KKT}) contains complementarity and slackness conditions. However, the actual situation in practice is more complicated. For example, to solve a bilevel program with an LP lower level, the LP is often replaced by its optimality condition. In a bilevel optimization structure, some of the coefficients $A$, $b$, and $c$ are optimized by the upper-level agent; say, the coefficient vector $c$ representing the price is controlled by the upper-level decision maker, while $A$ and $b$ are constants. If we use (\ref{eq:App-01-LP-OCD-Primal-Dual}), the term $c^T x$ in the single-level equivalent becomes non-convex, even though $c$ is a constant in the lower level, preventing a global optimal solution from being found easily. In contrast, if we use (\ref{eq:App-01-LP-OCD-KKT}) and linearize the complementarity and slackness condition via auxiliary integer variables, the single-level equivalent problem can be formulated as an MILP, whose global optimal solution can be procured with reasonable computational effort.
The dual problem of LPs which maximize its objective can be derived in the same way. Consider the LP
\begin{equation}
\label{eq:App-01-LP-Max-Primal}
\max \{ c^T x ~|~ A x \le b \}
\end{equation}
For this problem, we need an upper bound on the objective function. To this end, associating a non-negative dual vector $\lambda$ with the constraint, and adding the weighted inequalities together, we have
\begin{equation}
\lambda^T A x \le \lambda^T b \notag
\end{equation}
If we intentionally choose $\lambda$ such that $\lambda^T A = c^T$, then $\lambda^T b$ will be a valid upper bound of the optimal value of (\ref{eq:App-01-LP-Max-Primal}). The dual problem
\begin{equation}
\label{eq:App-01-LP-Max-Dual}
\min_{\lambda} \{ \lambda^T b ~|~ A^T \lambda = c,~ \lambda \ge 0 \}
\end{equation}
optimizes the weighting vector $\lambda$ to offer the tightest upper bound.
Constraints in the form of equalities and $\ge$ inequalities can be treated within the same paradigm. Bearing in mind that we are seeking an upper bound, i.e., a certificate for $c^T x \le a$, the dual variables associated with equalities are free in sign, and those associated with $\ge$ inequalities should be non-positive.
Sometimes it is useful to define the dual cone of a polyhedron, even though a bounded polyhedron is not a cone. Recalling the definition, the dual cone of a polyhedron $P$ can be defined as
\begin{equation}
\label{eq:App-01-Dual-Polytope-1}
P^* = \{y ~|~ x^T y \ge 0,~ \forall x \in P \}
\end{equation}
where $P = \{x ~|~ Ax \ge b \}$. As we have demonstrated in Sect. \ref{App-A-Sect01-03}, the dual cone is always closed and convex; however, for a general set, its dual cone does not have an analytical expression.
For polyhedral sets, the condition in (\ref{eq:App-01-Dual-Polytope-1}) holds if and only if the minimal value of $x^T y$ over $P$ is non-negative. For a given vector $y$, let us investigate the minimum of $x^T y$ through an LP
\begin{equation}
\min_x \{y^T x ~|~ Ax \ge b \} \notag
\end{equation}
It is known from Proposition \ref{pr:App-01-LP-Weak-Duality} that $y^T x \ge b^T \lambda$, $\forall \lambda \in D_P$, where $D_P =\{\lambda ~|~ A^T \lambda = y, \lambda \ge 0 \}$. Moreover, if $\exists \lambda \in D_P$ such that $b^T \lambda < 0$, Proposition \ref{pr:App-01-LP-Strong-Duality} certifies the existence of $x \in P$ such that $y^T x = b^T \lambda < 0$. In conclusion, the dual cone of polyhedron $P$ can be cast as
\begin{equation}
\label{eq:App-01-Dual-Polytope-2}
P^* = \{ y ~|~ \exists \lambda: b^T \lambda \ge 0,~
A^T \lambda = y,~ \lambda \ge 0 \}
\end{equation}
which is also a polyhedron. It can be observed from (\ref{eq:App-01-Dual-Polytope-2}) that all constraints in $P^*$ are homogeneous, so $P^*$ is indeed a polyhedral cone.
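Membership in $P^*$ can thus be tested by an LP: $y \in P^*$ if and only if $\min_x\{y^T x \mid Ax \ge b\}$ is solvable with a nonnegative value, while an LP that is unbounded below certifies $y \notin P^*$. A sketch for the polyhedron $P=\{x ~|~ x \ge (1,1)^T\}$, whose dual cone is $\mathbb R^2_+$ (assuming SciPy; the example data is ours, not the text's):

```python
import numpy as np
from scipy.optimize import linprog

A = np.eye(2)
b = np.array([1.0, 1.0])   # P = {x | x >= (1, 1)}

def in_dual_cone(y):
    """y in P* iff the LP min_x {y^T x | A x >= b} is solvable with value >= 0."""
    res = linprog(y, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2)
    return res.status == 0 and res.fun >= -1e-9

assert in_dual_cone(np.array([1.0, 2.0]))       # nonnegative y lies in P* = R^2_+
assert not in_dual_cone(np.array([-1.0, 1.0]))  # LP unbounded below, so y not in P*
```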
\subsection{General Conic Linear Program}
\label{App-A-Sect02-02}
Linear programming covers vast topics in engineering optimization problems. Its duality theory provides informative quantifications and valuable insights into the problem at hand, which help develop efficient algorithms for the problem itself and facilitate building tractable reformulations for more complicated mathematical programming models, such as robust optimization, multi-level optimization, and equilibrium problems. The algorithms for LPs, which are perfectly developed by now, can solve quite large instances (with up to hundreds of thousands of variables and constraints). Nevertheless, there are practical problems which cannot be modeled as LPs. To cope with these essentially nonlinear cases, one needs to explore new models and computational methods beyond the reach of LPs.
The broadest class of optimization problems with which LP can be compared is the class of convex optimization problems. Convexity marks whether a problem can be solved efficiently: any local optimizer of a convex program must be a global optimizer. Efficiency is quantified by the number of arithmetic operations required to solve the problem. Suppose that all we know about the problem is its convexity: its objective and constraints are convex functions of the decision variables $x \in \mathbb R^n$, and their values, along with their derivatives, at any given point can be evaluated within $M$ arithmetic operations. The best known complexity bound for finding an $\epsilon$-solution turns out to be \cite{App-A-CVX-Book-Ben}
\begin{equation}
O(1) n(n^3+M) \ln \left( \frac{1}{\epsilon} \right) \notag
\end{equation}
Although this bound grows polynomially with $n$, the computation time may still be unacceptable for a large $n$ such as $n = 1,000$, in contrast to LPs, which are solvable with $n = 100,000$. The reason is that linearity is a much stronger property than convexity: the structure of an affine function $a^T x + b$ depends solely on its constant coefficients $a$ and $b$, and function values and derivatives are never evaluated in a state-of-the-art LP solver. There are many classes of convex programs which are essentially nonlinear but still possess nice analytical structure, which can be used to develop more dedicated algorithms. These algorithms may perform much more efficiently than those exploiting only convexity. In what follows, we consider one such class of convex programs, the conic program, which is a simple extension of LP. Its general form and mathematical model are briefly introduced, while the details of interior-point algorithms are beyond the scope of this book and can be found in \cite{App-A-CVX-Book-Ben,App-A-CVX-Book-Boyd}.
{\noindent \bf 1. Mathematical model}
When we consider adding nonlinear factors to LP (\ref{eq:App-01-LP-Compact}), the most common way is to replace a linear function $a^T x$ with a nonlinear but convex function $f(x)$. As explained above, this may not be advantageous from a computational perspective. Instead, we keep all functions linear, but inject nonlinearity into the comparative operators $\ge$ or $\le$. Recalling the definition of the generalized inequality $\succeq_K$ with cone $K$, we consider the following problem in this section
\begin{equation}
\label{eq:App-01-CP-Primal}
\min_x \{ c^T x ~|~ A x \succeq_K b \}
\end{equation}
which is called a conic programming problem. An LP is a special case of the conic program with $K = \mathbb R^n_+$. With this generalization, we are able to formulate a much wider spectrum of optimization problems which cannot be modeled as LPs, while enjoying the nice properties of structured convexity.
{\noindent \bf 2. Conic duality}
Aside from developing high-performance algorithms, the most important and elegant theoretical result in the area of LP is its duality theorem. In view of their similar mathematical appearance, how can the LP duality theorem be extended to conic programs? As before, the motivation for duality is the desire for a systematic way to certify a lower bound on the optimal value of conic program (\ref{eq:App-01-CP-Primal}). Let us try the same trick: taking the inner product of a dual vector $\lambda$ with both sides of $Ax \succeq_K b$, we obtain $\lambda^T A x$ and $b^T \lambda$; moreover, if we are lucky enough to have $A^T \lambda = c$, we may guess that $b^T \lambda$ can serve as a lower bound on the optimum of (\ref{eq:App-01-CP-Primal}) under some condition. The condition can be translated as: what is the admissible region of $\lambda$, such that the inequality $\lambda^T A x \ge b^T \lambda$ is a consequence of $Ax \succeq_K b$? A nice answer has been given at the end of Sect. \ref{App-A-Sect01-03}. Let us approach the problem from some simple cases.
Particularly, when $K = \mathbb R^n_+$, the admissible region of $\lambda$ is also $\mathbb R^n_+$, because we already know that the dual variables of $\ge$ inequalities in an LP which minimizes its objective should be non-negative. However, $\mathbb R^n_+$ is no longer the admissible region of $\lambda$ for conic programs with generalized inequality $\succeq_K$ if $K \ne \mathbb R^n_+$. To see this, consider $\mathbb L^3_C$ and the corresponding generalized inequality
\begin{equation}
\begin{bmatrix} x \\ y \\ z \end{bmatrix} \succeq_{\mathbb L^3_C}
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Longleftrightarrow
z \ge \sqrt{x^2+y^2} \notag
\end{equation}
$(x,y,z)=(-1,-1,1.5)$ is a feasible solution. However, the weighted summation of both sides with $\lambda = [1,1,1]^T$ gives a false inequality $-0.5 \ge 0$.
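The arithmetic of this counterexample, written out explicitly (a trivial check, assuming NumPy):

```python
import numpy as np

x = np.array([-1.0, -1.0, 1.5])          # the point (x, y, z) = (-1, -1, 1.5)
assert x[2] >= np.hypot(x[0], x[1])      # z >= sqrt(x^2 + y^2): feasible for L^3_C

lam = np.array([1.0, 1.0, 1.0])          # lam in R^3_+, but outside (L^3_C)* = L^3_C
assert np.linalg.norm(lam[:2]) > lam[2]  # sqrt(2) > 1, so lam is not in the cone
assert abs(lam @ x - (-0.5)) < 1e-12     # the weighted sum is -0.5: "-0.5 >= 0" fails
```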
To find the feasible region of $\lambda$, consider the condition
\begin{equation}
\label{eq:App-01-CP-Lambda}
\forall a \succeq_K 0 ~\Rightarrow~ \lambda^T a \ge 0
\end{equation}
If (\ref{eq:App-01-CP-Lambda}) is true, we have the following logical inferences
\begin{equation}
\begin{aligned}
& Ax \succeq_K b \\
\Leftrightarrow ~& Ax - b \succeq_K 0 \\
\Rightarrow ~& \lambda^T (Ax-b) \ge 0 \\
\Leftrightarrow ~& \lambda^T A x \ge \lambda^T b
\end{aligned} \notag
\end{equation}
Conversely, if $\lambda$ is an admissible vector for certifying
\begin{equation}
\forall (a,b: a \succeq_K b) ~\Rightarrow~ \lambda^T a \ge \lambda^T b \notag
\end{equation}
then, (\ref{eq:App-01-CP-Lambda}) is clearly true by letting $b=0$. Therefore, the admissible set of $\lambda$ for generalized inequality $\succeq_K$ with cone $K$ can be written as
\begin{equation}
\label{eq:App-01-CP-Dual-Cone}
K^* =\{ \lambda ~|~ \lambda^T a \ge 0,~ \forall a \in K \}
\end{equation}
which contains the vectors whose inner products with all vectors in $K$ are nonnegative. Recalling the definition in (\ref{eq:App-01-Dual-Cone}), we observe that the set $K^*$ is exactly the dual cone of $K$.
Now we are ready to set up the dual problem of conic program (\ref{eq:App-01-CP-Primal}). As in the case of LP duality, we try to recover the objective function from a linear combination of the constraints by choosing a proper dual variable $\lambda$, i.e., $\lambda^T A x = c^T x$; in addition, $\lambda \in K^*$ ensures $\lambda^T A x \ge \lambda^T b$, implying that $\lambda^T b$ is a valid lower bound on the objective function. The best bound one can expect is the optimum of the problem
\begin{equation}
\label{eq:App-01-CP-Dual}
\max_\lambda \{ b^T \lambda ~|~ A^T \lambda = c,~ \lambda \succeq_{K^*} 0\}
\end{equation}
which is also a conic program, called the dual problem of conic program (\ref{eq:App-01-CP-Primal}). From the above construction, we already know that $c^T x \ge b^T \lambda$ holds for all feasible $x$ and $\lambda$, which is the weak duality of conic programs.
In fact, the primal-dual pair of conic programs has the following properties:
\begin{proposition}
\label{pr:App-01-CP-Conic-Duality}
(Conic Duality Theorem) \cite{App-A-CVX-Book-Ben} : The following conclusions hold true for conic program (\ref{eq:App-01-CP-Primal}) and its dual (\ref{eq:App-01-CP-Dual}).
1) Conic duality is symmetric: the dual problem is still conic, and the primal and dual problems are dual to each other.
2) Weak duality holds: the duality gap $c^T x - b^T \lambda$ is nonnegative over the primal and dual feasible sets.
3) If either the primal problem or the dual problem is strictly feasible and has a finite optimum, then the other is solvable, and the duality gap is zero: $c^T x^* = b^T \lambda^*$ for some $x^*$ and $\lambda^*$.
4) If either the primal problem or the dual problem is strictly feasible and has a finite optimum, then a pair of primal-dual feasible solutions ($x, \lambda$) solves the respective problems if and only if
\begin{equation}
\label{eq:App-01-CP-OCD-PD}
\begin{gathered}
Ax \succeq_K b \\
A^T \lambda = c \\
\lambda \succeq_{K^*} 0 \\
c^T x = b^T \lambda
\end{gathered}
\end{equation}
or
\begin{equation}
\label{eq:App-01-CP-OCD-KKT}
\begin{gathered}
0 \preceq_{K^*} \lambda \bot Ax - b \succeq_K 0 \\
A^T \lambda = c
\end{gathered}
\end{equation}
where (\ref{eq:App-01-CP-OCD-PD}) is called the primal-dual optimality condition, and (\ref{eq:App-01-CP-OCD-KKT}) is called the KKT optimality condition.
\end{proposition}
The proof can be found in \cite{App-A-CVX-Book-Ben} and is omitted here. To highlight the role of strict feasibility in Proposition \ref{pr:App-01-CP-Conic-Duality}, consider the following example
\begin{equation}
\min_x \left\{ x_2 ~\middle|~
\begin{bmatrix} x_1 \\ x_2 \\ x_1 \end{bmatrix}
\succeq_{\mathbb L^3_C} 0
\right\} \notag
\end{equation}
The feasible region is
\begin{equation*}
\sqrt{x^2_1 + x^2_2} \le x_1 \Leftrightarrow x_2 = 0,~ x_1 \ge 0
\end{equation*}
So its optimal value is 0. As explained before, the second-order cone is self-dual: $(\mathbb L^3_C)^* = \mathbb L^3_C$, so it is easy to see that the dual problem is
\begin{equation}
\max_\lambda \left\{ 0 ~\middle|~ \lambda_1 + \lambda_3 = 0,~
\lambda_2 = 1,~ \lambda \succeq_{\mathbb L^3_C} 0
\right\} \notag
\end{equation}
The feasible region is
\begin{equation}
\left\{ \lambda ~\middle|~ \sqrt{\lambda_1^2 + \lambda^2_2} \le \lambda_3,~
\lambda_3 \ge 0,~ \lambda_2 = 1,~ \lambda_1 = - \lambda_3 \right\} \notag
\end{equation}
which is empty, because $ \sqrt{(-\lambda_3)^2 + 1} > \lambda_3$.
This example demonstrates that the existence of a strictly feasible point is indispensable for conic strong duality. This condition is not needed in LP duality; in other words, strong duality in conic programming requires stronger assumptions.
Several classes of conic programs with particular cones are of special interest. The cones in these problems are self-dual, so we can set up the dual program directly, which allows one to explore deeper into the original problem, or to convert it into equivalent formulations which are more computationally friendly. The structure of these relatively simple cones also helps develop efficient algorithms for the corresponding conic programs. In what follows, we will investigate two extremely important classes of conic programs.
\subsection{Second-order Cone Program}
\label{App-A-Sect02-03}
{\bf 1. Mathematical models of the primal and dual problems}
The second-order cone program is a special class of conic problem in which $K$ is built from second-order cones $\mathbb L^{n+1}_C$. It minimizes a linear function over the intersection of a polyhedron and the Cartesian product of second-order cones, and can be formulated as
\begin{equation}
\label{eq:App-01-SOCP-Conic-Primal}
\min_x \left\{ c^T x ~\middle|~ Ax-b \succeq_K 0 \right\}
\end{equation}
where $x \in \mathbb R^n$, and $K = \mathbb L^{m_1}_C \times \cdots \times \mathbb L^{m_k}_C \times \mathbb R^{m_p}_+$; in other words, the conic constraints in (\ref{eq:App-01-SOCP-Conic-Primal}) can be expressed as $k$ second-order cone constraints $A_i x - b_i \succeq_{\mathbb L^{m_i}_C} 0$, $i = 1,\cdots,k$, plus one polyhedral constraint $A_p x - b_p \ge 0$, with the following matrix partition
\begin{equation}
\begin{bmatrix} A; b\end{bmatrix} =
\begin{bmatrix}
[A_1;b_1] \\ \vdots \\ [A_k;b_k] \\ [A_p;b_p]
\end{bmatrix}
\notag
\end{equation}
Recalling the definition of the second-order cone, we further partition the sub-matrices $A_i,b_i$ into
\begin{equation}
\begin{bmatrix} A_i; b_i \end{bmatrix} = \left[
\begin{gathered} D_i \\ p^T_i \end{gathered} ~~
\begin{gathered} d_i \\ q_i \end{gathered} ~ \right],~
i = 1,\cdots,k \notag
\end{equation}
where $D_i \in \mathbb R^{(m_i-1) \times n}$, $p_i \in \mathbb R^n$, $d_i \in \mathbb R^{m_i-1}$, $q_i \in \mathbb R$. Then we can write (\ref{eq:App-01-SOCP-Conic-Primal}) as
\begin{equation}
\label{eq:App-01-SOCP-MP-Primal}
\begin{aligned}
\min_x ~~ & c^T x \\
\mbox{s.t.}~~ & A_p x \ge b_p \\
& \|D_i x - d_i\|_2 \le p^T_i x -q_i,~ i=1,\cdots,k
\end{aligned}
\end{equation}
(\ref{eq:App-01-SOCP-MP-Primal}) is often more convenient for model builders.
It is easy to see that the cone $K$ in (\ref{eq:App-01-SOCP-Conic-Primal}) is self-dual, as both the second-order cone and the non-negative orthant are self-dual. In this regard, the dual problem of SOCP (\ref{eq:App-01-SOCP-Conic-Primal}) can be expressed as
\begin{equation}
\label{eq:App-01-SOCP-Conic-Dual}
\max_\lambda \left\{ b^T \lambda ~\middle|~ A^T \lambda = c,~
\lambda \succeq_K 0 \right\}
\end{equation}
Partitioning the dual vector as
\begin{equation}
\lambda = \begin{bmatrix} \lambda_1 \\ \vdots \\ \lambda_k \\ \lambda_p \end{bmatrix},~
\lambda_i \in \mathbb L^{m_i}_C,~ i = 1, \cdots, k,~ \lambda_p \ge 0 \notag
\end{equation}
we can write the dual problem as
\begin{equation}
\label{eq:App-01-SOCP-Conic-Dual-Decomp}
\begin{aligned}
\max_\lambda~ & \sum_{i=1}^k b^T_i \lambda_i + b^T_p \lambda_p \\
\mbox{s.t.}~~ & \sum_{i=1}^k A^T_i \lambda_i + A^T_p \lambda_p = c \\
& \lambda_i \in \mathbb L^{m_i}_C,~ i = 1, \cdots, k \\
& \lambda_p \ge 0
\end{aligned}
\end{equation}
We further partition $\lambda_i$ according to the norm representation in (\ref{eq:App-01-SOCP-MP-Primal})
\begin{equation}
\lambda_i = \begin{bmatrix} \mu_i \\ \nu_i \end{bmatrix},~
\mu_i \in \mathbb R^{m_i-1},~ \nu_i \in \mathbb R \notag
\end{equation}
Then all second-order cone constraints are associated with dual variables as
\begin{equation}
\begin{bmatrix} D_i x \\ p^T_i x \end{bmatrix} -
\begin{bmatrix} d_i \\ q_i \end{bmatrix}
\in \mathbb L^{m_i}_C :
\begin{bmatrix} \mu_i \\ \nu_i \end{bmatrix},~
i = 1, \cdots, k \notag
\end{equation}
so the admissible region of dual variables $(\mu_i,\nu_i)$ is
\begin{equation}
\begin{bmatrix} \mu_i \\ \nu_i \end{bmatrix} \in (\mathbb L^{m_i}_C)^*
\Rightarrow \|\mu_i\|_2 \le \nu_i \notag
\end{equation}
Finally, we arrive at the dual form of (\ref{eq:App-01-SOCP-MP-Primal})
\begin{equation}
\label{eq:App-01-SOCP-MP-Dual}
\begin{aligned}
\max_\lambda~ & \sum_{i=1}^k \left(\mu^T_i d_i + \nu_i q_i \right) + b^T_p \lambda_p \\
\mbox{s.t.}~~ & \sum_{i=1}^k \left(D^T_i \mu_i + \nu_i p_i \right) + A^T_p \lambda_p = c \\
& \|\mu_i\|_2 \le \nu_i,~ i = 1, \cdots, k \\
& \lambda_p \ge 0
\end{aligned}
\end{equation}
(\ref{eq:App-01-SOCP-MP-Primal}) and (\ref{eq:App-01-SOCP-MP-Dual}) are more convenient than (\ref{eq:App-01-SOCP-Conic-Primal}) and (\ref{eq:App-01-SOCP-Conic-Dual-Decomp}) respectively because norm constraints can be recognized by most commercial solvers, whereas generalized inequalities $\succeq_K$ and constraints with the form $\in \mathbb L^{m_i}_C$ are supported only in some dedicated packages. Strict feasibility can be expressed in a more straightforward manner via norm constraints: the primal problem is strictly feasible if $\exists x: \|D_i x - d_i\|_2 < p^T_i x -q_i,~ i=1,\cdots,k,~A_p x > b_p$; the dual problem is strictly feasible if $\|\mu_i\|_2 < \nu_i,~ i = 1, \cdots, k,~ \lambda_p > 0$. In view of this, (\ref{eq:App-01-SOCP-MP-Primal}) and (\ref{eq:App-01-SOCP-MP-Dual}) are treated as the standard forms of an SOCP and its dual by practitioners whose primary interests are applications.
{\noindent \bf 2. What can be expressed via SOCPs?}
Mathematical programs arising in engineering applications may not always appear in standard convex forms, and convexity may be hidden in seemingly non-convex expressions. Therefore, an important step is to recognize the potential existence of a convex form that is equivalent to the original formulation. This task can be rather tricky. We introduce some frequently used functions and constraints that can be represented by second-order cone constraints.
{\noindent \bf a. Convex quadratic constraints}
A convex quadratic constraint has the form
\begin{equation}
\label{eq:App-01-SOCP-Convex-QC-1}
x^T P x + q^T x + r \le 0
\end{equation}
where $P \in \mathbb S^n_+$, $q \in \mathbb R^n$, $r \in \mathbb R$ are constant coefficients. Letting $t = q^Tx + r$, we have
\begin{equation}
t = \frac{(t+1)^2}{4} - \frac{(t-1)^2}{4} \notag
\end{equation}
Performing the Cholesky factorization $P = D^T D$, (\ref{eq:App-01-SOCP-Convex-QC-1}) can be represented by
\begin{equation}
\|Dx\|^2_2 + \frac{(t+1)^2}{4} \le \frac{(t-1)^2}{4} \notag
\end{equation}
Since any $x$ satisfying (\ref{eq:App-01-SOCP-Convex-QC-1}) has $t = q^T x + r \le -\|Dx\|^2_2 \le 0$, and hence $1 - t > 0$, (\ref{eq:App-01-SOCP-Convex-QC-1}) is equivalent to the following second-order cone constraint
\begin{equation}
\label{eq:App-01-SOCP-Convex-QC-2}
\left\| \begin{gathered}
2 D x \\ q^Tx + r + 1
\end{gathered} \right\|_2 \le
1 - q^Tx - r
\end{equation}
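A scalar sanity check of this equivalence, using the second-order cone form $\|[2Dx;\ q^Tx+r+1]\|_2 \le 1-q^Tx-r$ (the coefficients below are our own choices, with $P = D^2 \ge 0$):

```python
import math

D, q, r = 2.0, 1.0, -3.0  # arbitrary test coefficients, P = D^2

def quad_ok(x):
    # original convex quadratic constraint: P x^2 + q x + r <= 0
    return D * D * x * x + q * x + r <= 0

def soc_ok(x):
    # second-order cone form: ||[2Dx; qx + r + 1]||_2 <= 1 - qx - r
    t = q * x + r
    return math.hypot(2 * D * x, t + 1) <= 1 - t

xs = [i / 10 - 5 for i in range(101)]  # sample x in [-5, 5]
print(all(quad_ok(x) == soc_ok(x) for x in xs))  # True
```

Squaring the norm constraint and simplifying recovers exactly $4x^2 + x - 3 \le 0$ in this instance, which is why the two tests agree at every sample point.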
However, not every second-order cone constraint can be expressed via a convex quadratic constraint. By squaring $\|D x - d\|_2 \le p^T x -q$ we get an equivalent quadratic inequality
\begin{equation}
x^T (D^T D - p p^T ) x + 2(q p^T - d^T D)x +d^T d -q^2 \le 0
\end{equation}
with $p^T x - q \ge 0$. The matrix $M = D^T D - p p^T$ is not always positive semidefinite. Indeed, $M \succeq 0$ if and only if $\exists u, \|u\|_2 \le 1: p=D^T u$. On this account, SOCPs are more general than convex QCQPs.
{\noindent \bf b. Hyperbolic constraints}
Hyperbolic constraints are frequently encountered in engineering optimization problems. They are non-convex in their original forms but can be represented by a second-order cone constraint. A hyperbolic constraint has the form
\begin{equation}
\label{eq:App-01-SOCP-Hyper-1}
x^T x \le yz,~ y > 0,~ z > 0
\end{equation}
where $x \in \mathbb R^n$, $y,z \in \mathbb R_{++}$. Noting that $4yz = (y+z)^2 - (y-z)^2$, (\ref{eq:App-01-SOCP-Hyper-1}) is equivalent to the following second-order cone constraint
\begin{equation}
\label{eq:App-01-SOCP-Hyper-2}
\left\| \begin{gathered}
2 x \\ y-z
\end{gathered} \right\|_2 \le
y+z,~ y > 0,~z > 0
\end{equation}
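The equivalence between (\ref{eq:App-01-SOCP-Hyper-1}) and (\ref{eq:App-01-SOCP-Hyper-2}) can be verified on an integer grid (a quick sanity check with one-dimensional $x$; the grid bounds are arbitrary):

```python
import math

def hyper_ok(x, y, z):
    # original form: x^2 <= y z with y, z > 0
    return y > 0 and z > 0 and x * x <= y * z

def soc_ok(x, y, z):
    # second-order cone form: ||[2x; y - z]||_2 <= y + z with y, z > 0
    return y > 0 and z > 0 and math.hypot(2 * x, y - z) <= y + z

pts = [(x, y, z) for x in range(-6, 7) for y in range(1, 7) for z in range(1, 7)]
print(all(hyper_ok(*p) == soc_ok(*p) for p in pts))  # True
```

Squaring both sides of the norm constraint gives $4x^2 + (y-z)^2 \le (y+z)^2$, i.e., $4x^2 \le 4yz$, exactly the hyperbolic constraint.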
However, a hyperbolic constraint cannot be expressed via a convex quadratic constraint, because the compact quadratic form of (\ref{eq:App-01-SOCP-Hyper-1}) is
\begin{equation}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}^T P
\begin{bmatrix} x \\ y \\ z \end{bmatrix} \le 0,~
P = \begin{bmatrix}
2I & 0 & 0 \\
0 & 0 & -1 \\
0 & -1 & 0
\end{bmatrix} \notag
\end{equation}
where the matrix $P$ is indefinite.
Many instances can be regarded as special cases of hyperbolic constraints, such as the upper branch of a hyperbola
\begin{equation}
\{ (x,y) ~|~ xy \ge 1,~ x > 0 \} \notag
\end{equation}
and the epigraph of a fractional-quadratic function $g(x,s)=x^T x/s$, $s>0$
\begin{equation}
\left\{ (x,s,t) ~\middle|~ t \ge \frac{x^T x}{s},~ s > 0 \right\} \notag
\end{equation}
{\noindent \bf c. Composition of second-order cone representable functions}
A function is called second-order cone representable if its epigraph can be represented by second-order cone constraints. Second-order cone representable functions are closed under composition \cite{App-A-SOCP-Boyd}. Suppose two univariate convex functions $f_1(x)$ and $f_2(x)$ are second-order cone representable, and $f_1(x)$ is monotonically increasing; then the composition $g(x) = f_1(f_2(x))$ is also second-order cone representable, because its epigraph $\{(x,t) ~|~ g(x) \le t\}$ can be expressed by
\begin{equation}
\{(x,t) ~|~ \exists s: f_1(s) \le t,~ f_2(x) \le s \} \notag
\end{equation}
where $f_1(s) \le t$ and $f_2(x) \le s$ essentially come down to second-order cone constraints.
{\noindent \bf d. Maximizing the product of concave functions}
Suppose two functions $f_1(x)$
and $f_2(x)$ are concave with $f_1(x) \ge 0,~ f_2(x) \ge 0$, and $-f_1(x)$ and $-f_2(x)$ are second-order cone representable [which means $f_1(x) \ge t_1$ and $f_2(x) \ge t_2$ are (equivalent to) second-order cone constraints]. Consider the maximum of their product
\begin{equation}
\label{eq:App-01-SOCP-Bargain-1}
\max_x \{ f_1(x) f_2(x) ~|~ x \in X, f_1(x) \ge 0, f_2(x) \ge 0 \}
\end{equation}
where the feasible region $X$ is the intersection of a polyhedron and second-order cones. It is not instantly clear whether problem (\ref{eq:App-01-SOCP-Bargain-1}) is a convex optimization problem or not. This formulation frequently arises in engineering applications, such as the Nash Bargaining problem and multi-objective optimization problems.
By introducing auxiliary variables $t,t_1,t_2$, it is immediately seen that problem (\ref{eq:App-01-SOCP-Bargain-1}) is equivalent to the following SOCP
\begin{equation}
\label{eq:App-01-SOCP-Bargain-2}
\begin{aligned}
\max_{x,t,t_1,t_2}~~ & t \\
\mbox{s.t.}~~ & x \in X \\
& t_1 \ge 0,~ t_2 \ge 0,~ t_1t_2 \ge t^2 \\
& f_1(x) \ge t_1,~ f_2(x) \ge t_2
\end{aligned}
\end{equation}
At the optimal solution, $f_1(x)f_2(x)=t^2$.
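A toy brute-force check of this reformulation (the instance $f_1(x)=x$, $f_2(x)=1-x$ on $X=[0,1]$ is our own choice):

```python
import math

xs = [i / 1000 for i in range(1001)]          # grid on X = [0, 1]
best_product = max(x * (1 - x) for x in xs)   # direct maximization of f1 * f2

# In the SOCP reformulation, t1 * t2 >= t^2 with t1 <= f1, t2 <= f2,
# so at the optimum t^2 = f1 * f2, i.e., t = sqrt(best product):
best_t = math.sqrt(best_product)
print(best_product, best_t)  # 0.25 0.5
```

The hyperbolic constraint $t_1 t_2 \ge t^2$ is exactly the case treated in paragraph {\bf b}, which is what makes (\ref{eq:App-01-SOCP-Bargain-2}) an SOCP.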
{\noindent \bf 3. Polyhedral approximation of second-order cones}
Although SOCPs can be solved very efficiently, the state of the art in numerical computation of SOCPs is still incomparable to that of LPs. The salient computational superiority of LPs inspires a question: can we approximate an SOCP by an LP without dramatically increasing the problem size? There have been other reasons to explore LP approximations of SOCPs. For example, to solve a bilevel program with an SOCP lower level, the SOCP should be replaced by its optimality conditions. However, the primal-dual optimality condition (\ref{eq:App-01-CP-OCD-PD}) may introduce bilinear terms, while the second-order cone complementarity constraints in the KKT optimality condition (\ref{eq:App-01-CP-OCD-KKT}) cannot be linearized easily. If the SOCP can be approximated by an LP, then the KKT optimality condition can be linearized and the original bilevel program can be reformulated as an MILP. Clearly, if we work only in the original variables, the number of additional constraints would grow unacceptably fast with the problem dimension and the required accuracy. In this section, we introduce the technique developed in \cite{App-A-SOCP-LP}, which lifts the problem into a higher-dimensional space with moderate numbers of auxiliary variables and constraints.
We start with the basic question: find a polyhedral $\epsilon$-approximation ${\rm \Pi}$ for $\mathbb L^3_C$ such that:
1) If $x \in \mathbb L^3_C$, then $\exists u: (x,u) \in {\rm \Pi}$.
2) If $(x,u) \in {\rm \Pi}$ for some $u$, then $\sqrt{x^2_1+x^2_2} \le (1+\epsilon) x_3$.
Geometrically, the polyhedral cone $\rm \Pi$ is described by a system of homogeneous linear equalities and inequalities in the variables $x,u$; its projection onto $x$-space is an $\epsilon$-outer approximation of $\mathbb L^3_C$, and the error bound is quantified by $\epsilon x_3$. The answer to this question is given in \cite{App-A-SOCP-LP}. It is shown that ${\rm \Pi}$ can be expressed by
\begin{equation}
\label{eq:App-01-SOCP-Polyhedral}
\begin{aligned}
(a) \quad & \left\{ \begin{lgathered}
\xi^0 \ge |x_1| \\ \eta^0 \ge |x_2| \end{lgathered} \right. \\
(b) \quad & \left\{ \begin{lgathered}
\xi^j = \cos \left( \frac{\pi}{2^{j+1}} \right) \xi^{j-1} +
\sin \left( \frac{\pi}{2^{j+1}} \right) \eta^{j-1} \\
\eta^j \ge \left| -\sin \left( \frac{\pi}{2^{j+1}} \right) \xi^{j-1} +
\cos \left( \frac{\pi}{2^{j+1}} \right) \eta^{j-1} \right|
\end{lgathered} \right. ,~ j=1,\cdots,v \\
(c) \quad & \left\{ \begin{lgathered} \xi^v \le x_3 \\
\eta^v \le \tan \left( \frac{\pi}{2^{v+1}} \right) \xi^v
\end{lgathered} \right.
\end{aligned}
\end{equation}
Formulation (\ref{eq:App-01-SOCP-Polyhedral}) can be understood from a geometric point of view:
1) Given $x \in \mathbb L^3_C$, set $\xi^0 = |x_1|$, $\eta^0 = |x_2|$, which satisfies (a) in (\ref{eq:App-01-SOCP-Polyhedral}), and point $P^0 = (\xi^0,\eta^0)$ belongs to the first quadrant. Let
\begin{equation}
\left\{ \begin{lgathered}
\xi^j = \cos \left( \frac{\pi}{2^{j+1}} \right) \xi^{j-1} +
\sin \left( \frac{\pi}{2^{j+1}} \right) \eta^{j-1} \\
\eta^j = \left| -\sin \left( \frac{\pi}{2^{j+1}} \right) \xi^{j-1} +
\cos \left( \frac{\pi}{2^{j+1}} \right) \eta^{j-1} \right|
\end{lgathered} \right. \notag
\end{equation}
which ensures (b). Point $P^j = (\xi^j,\eta^j)$ is obtained from $P^{j-1}$ by the following operation: rotate $P^{j-1}$ clockwise by the angle $\phi_j = \pi/2^{j+1}$ to get an intermediate point $Q^{j-1}$; if $Q^{j-1}$ resides in the upper half-plane, $P^j = Q^{j-1}$; otherwise $P^j$ is the reflection of $Q^{j-1}$ with respect to the $x$-axis. By this construction, it is clear that all vectors from the origin to $P^j$ have the same Euclidean norm, i.e., $\|[x_1,x_2]\|_2$.
Moreover, as $P^0$ belongs to the first quadrant, the angle of $Q^0$ must satisfy $-\pi/4 \le \arg(Q^0) \le \pi/4$, and $0 \le \arg(P^1) \le \pi/4$. With the procedure going on, we have $|\arg(Q^j)| \le \pi/2^{j+1}$, and $0 \le \arg(P^{j+1}) \le \pi/2^{j+1}$, for $j=1,\cdots,v$. In the last step, $\xi^v \le \|P^v\|_2 = \|[x_1,x_2]\|_2 \le x_3$ and $ 0 \le \arg(P^v) \le \pi/2^{v+1}$ hold, ensuring condition (c). In this manner, a point in $\mathbb L^3_C$ has been extended to a solution of (\ref{eq:App-01-SOCP-Polyhedral}).
2) Given $(x,u) \in {\rm \Pi}$, where $u = \{\xi^j,\eta^j\}$, $j=1,\cdots,v$, define $P^j = [\xi^j, \eta^j]$; it directly follows from (a) and (b) that every $P^j$ belongs to the first quadrant, and $\left\|P^0\right\|_2 \ge \sqrt{x^2_1+x^2_2}$. Moreover, recalling the construction of $Q^j$ in the previous analysis, we see that $\|P^j\|_2 = \|Q^j\|_2$ and that the absolute value of the vertical coordinate of $P^{j+1}$ is no less than that of $Q^j$; therefore, $\|P^{j+1}\|_2 \ge \|Q^j\|_2 = \|P^j\|_2$. At last
\begin{equation}
\left\| P^v \right\|_2 \le \dfrac{x_3}{\cos \left( \dfrac{\pi}{2^{v+1}} \right)}\notag
\end{equation}
so we arrive at $\sqrt{x^2_1+x^2_2} \le (1+\epsilon) x_3$, where
\begin{equation}
\epsilon = \dfrac{1}{\cos \left( \dfrac{\pi}{2^{v+1}} \right)} - 1
\end{equation}
In this way, a solution of (\ref{eq:App-01-SOCP-Polyhedral}) has been approximately extended to $\mathbb L^3_C$.
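The construction (a)-(c) is short enough to implement directly; the sketch below builds the lifted variables for boundary points of $\mathbb L^3_C$ and checks membership in $\rm \Pi$, with a small tolerance to absorb floating-point roundoff (the sample points and $v=6$ are our own choices):

```python
import math

TOL = 1e-9  # guard against floating-point roundoff at the cone boundary

def in_Pi(x1, x2, x3, v):
    # construct xi^j, eta^j following steps (a)-(b), then test step (c)
    xi, eta = abs(x1), abs(x2)                      # step (a), tightest choice
    for j in range(1, v + 1):                       # step (b)
        phi = math.pi / 2 ** (j + 1)
        xi, eta = (math.cos(phi) * xi + math.sin(phi) * eta,
                   abs(-math.sin(phi) * xi + math.cos(phi) * eta))
    return (xi <= x3 + TOL and                      # step (c)
            eta <= math.tan(math.pi / 2 ** (v + 1)) * xi + TOL)

v = 6
# every boundary point of L^3_C should extend to a point of Pi ...
boundary = [(math.cos(a), math.sin(a), 1.0) for a in [0.05 * k for k in range(126)]]
print(all(in_Pi(*p, v) for p in boundary))  # True
# ... and the outer-approximation error eps is small even for modest v
eps = 1 / math.cos(math.pi / 2 ** (v + 1)) - 1
print(eps < 1e-3)  # True
```

With only $v=6$ rotation stages, the relative error is about $3 \times 10^{-4}$, illustrating how quickly the approximation tightens.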
Now, let us consider the general case: approximating
\begin{equation}
\mathbb L^{n+1}_C = \left\{(y,t)~\middle|~ \sqrt{y^2_1+\cdots+y^2_n} \le t \right\} \notag
\end{equation} via a polyhedral cone. Without loss of generality, we assume $n=2^K$. To make use of the outcome in (\ref{eq:App-01-SOCP-Polyhedral}), $y$ is split into $2^{K-1}$ pairs $(y_1,y_2),\cdots,(y_{n-1},y_n)$, which are called variables of generation 0. A successor variable, called a variable of generation 1, is associated with each pair; the variables of generation 1 are further divided into $2^{K-2}$ pairs, each associated with a variable of generation 2, and so on. After $K-1$ steps of dichotomy, we complete the variable splitting with two variables of generation $K-1$. The only variable of generation $K$ is $t$. For notational convenience, let $y^l_i$ be the $i$-th variable of generation $l$; the original vector is $y=[y^0_1,\cdots,y^0_n]$, and $t=y^K_1$. The ``parents'' of $y^l_i$ are the variables $y^{l-1}_{2i-1},y^{l-1}_{2i}$. The total number of variables in the ``tower'' is $2n-1$.
Using the tower of variables $y^l$, $\forall l$, the system of constraints
\begin{equation}
\label{eq:App-01-SOCP-Tower-Cone}
\sqrt{(y^{l-1}_{2i-1})^2 + (y^{l-1}_{2i})^2} \le y^l_i,~
i = 1,\cdots,2^{K-l},~ l = 1,\cdots,K
\end{equation}
gives the same feasible region for $y$ as $\mathbb L^{n+1}_C$, and each second-order cone $\mathbb L^3_C$ in (\ref{eq:App-01-SOCP-Tower-Cone}) can be approximated by a polyhedral cone as given in (\ref{eq:App-01-SOCP-Polyhedral}).
The size of this polyhedral approximation is unveiled in \cite{App-A-SOCP-LP}:
1) The dimension of the lifted variable is $p \le n+O(1)\sum_{l=1}^K 2^{K-l} v_l$.
2) The number of constraints is $q \le O(1) \sum_{l=1}^K 2^{K-l} v_l$.
The quality of the approximation is \cite{App-A-SOCP-LP}
\begin{equation}
\beta = \prod_{l=1}^K \dfrac{1}{\cos \left( \dfrac{\pi}{2^{v_l+1}} \right)} - 1 \notag
\end{equation}
Given a desired tolerance $\epsilon$, choose
\begin{equation}
v_l = \lfloor O(1) l \ln \frac{2}{\epsilon} \rfloor \notag
\end{equation}
with a proper constant $O(1)$, we can guarantee the following bounds:
\begin{equation}
\begin{aligned}
\beta & \le \epsilon \\
p & \le O(1)n \ln \frac{2}{\epsilon} \\
q & \le O(1)n \ln \frac{2}{\epsilon}
\end{aligned} \notag
\end{equation}
which implies that the required numbers of variables and constraints grow
linearly in the dimension of the target second-order cone.
\subsection{Semidefinite Program}
\label{App-A-Sect02-04}
{\bf 1. Notation clarification}
In this section, decision variables appear in the form of symmetric matrices, so some notation should be clarified first.
The Frobenius inner product of two matrices $A,B \in \mathbb M^n$ is defined by
\begin{equation}
\label{eq:App-01-SDP-Frobenius}
\langle A, B \rangle = \mbox{tr}(AB^T) =
\sum_{i=1}^n \sum_{j=1}^n A_{ij} B_{ij}
\end{equation}
The Euclidean norm of a matrix $X \in \mathbb M^n$ can be defined through the Frobenius inner product as follows
\begin{equation}
\label{eq:App-01-SDP-Matrix-Norm}
\left\| X \right\|_2 = \sqrt{\langle X,X \rangle} = \sqrt{\mbox{tr}(X^T X)}
\end{equation}
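For instance, with matrices stored as plain nested lists (a minimal illustration of (\ref{eq:App-01-SDP-Frobenius}) and (\ref{eq:App-01-SDP-Matrix-Norm}); the entries are arbitrary):

```python
def frob(A, B):
    # Frobenius inner product <A, B> = sum_ij A_ij * B_ij
    n = len(A)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))

A = [[1, 2], [2, 3]]
B = [[0, 1], [1, 4]]
print(frob(A, B))          # 0 + 2 + 2 + 12 = 16
print(frob(A, A) ** 0.5)   # Frobenius norm of A: sqrt(1 + 4 + 4 + 9) = sqrt(18)
```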
Equipped with the Frobenius inner product, the dual cone of a given cone $K \subset \mathbb S^n$ is defined by
\begin{equation}
\label{eq:App-01-SDP-Dual-Cone}
K^*=\{Y \in \mathbb S^n ~|~ \langle Y, X \rangle \ge 0,~ \forall X \in K\}
\end{equation}
Among the cones in $\mathbb S^n$, this section deals with the positive semidefinite cone $\mathbb S^n_+$. As demonstrated in Sect. \ref{App-A-Sect01-03}, $\mathbb S^n_+$ is self-dual, i.e., $(\mathbb S^n_+)^* = \mathbb S^n_+$. The interior of the cone $\mathbb S^n_+$ consists of all $n \times n$ matrices that are positive definite, and is denoted by $\mathbb S^n_{++}$.
{\noindent \bf 2. Primal and dual formulations of SDPs}
When $K=\mathbb S^n_+$, conic program (\ref{eq:App-01-CP-Primal}) boils down to an SDP
\begin{equation}
\min_x \{ c^T x ~|~ A x - b \in \mathbb S^n_+ \} \notag
\end{equation}
which minimizes a linear objective over the intersection of the affine plane $y = Ax -b$ and the positive semidefinite cone $\mathbb S^n_+$. However, the notation in such a form is a little confusing: $Ax-b$ is a vector, which is not dimensionally compatible with the cone $\mathbb S^n_+$. In fact, we met a similar difficulty at the very beginning: the vector inner product does not apply to matrices, and was consequently replaced with the Frobenius inner product. There are two prevalent ways to resolve the conflict in dimension, leading to the different formulations discussed below.
{\noindent \bf a. Formulation based on vector decision variables}
In this formulation, $b$ is replaced with a matrix $B \in \mathbb S^n$, and $Ax$ is replaced with a linear mapping $\mathcal A: \mathbb R^n \to \mathbb S^n$. In this way, $\mathcal A x - B$ becomes an element of $\mathbb S^n$. A simple way to specify the linear mapping $\mathcal A$ is
\begin{equation}
\mathcal A x = \sum_{j=1}^n x_j A_j,~ x = [x_1,\cdots,x_n]^T, ~
A_1,\cdots,A_n \in \mathbb S^n \notag
\end{equation}
With all these input matrices, an SDP can be written as
\begin{equation}
\label{eq:App-01-SDP-LMI-Primal}
\min_x \{ c^T x ~|~ x_1 A_1 + \cdots + x_n A_n - B \succeq 0 \}
\end{equation}
where the cone $\mathbb S^n_+$ is omitted from the operator $\succeq$ without causing confusion. The constraint in (\ref{eq:App-01-SDP-LMI-Primal}) is a linear matrix inequality (LMI).
This formulation is general enough to capture the situation in which multiple LMIs exist, because
\begin{equation}
\mathcal A_i x - B_i \succeq 0,~ i=1,\cdots,k
\Leftrightarrow \mathcal A x - B \succeq 0 \notag
\end{equation}
with $\mathcal A x = \mbox{Diag} (\mathcal A_1x,\cdots,\mathcal A_k x)$ and $B = \mbox{Diag} (B_1,\cdots,B_k)$.
The general form of conic duality can be specified in the case when the cone $K = \mathbb S^n_+$. Associating a matrix dual variable $\rm \Lambda$ with the LMI constraint, and recalling the fact that $(\mathbb S^n_+)^* = \mathbb S^n_+$, the dual program of SDP (\ref{eq:App-01-SDP-LMI-Primal}) reads:
\begin{equation}
\label{eq:App-01-SDP-LMI-Dual}
\max_{\rm \Lambda} \{ \langle B, {\rm \Lambda} \rangle ~|~
\langle A_i, {\rm \Lambda} \rangle =c_i,~ i = 1, \cdots,n, ~
{\rm \Lambda} \succeq 0 \}
\end{equation}
which remains an SDP.
Apply the conic duality theorem given in Proposition \ref{pr:App-01-CP-Conic-Duality}
to SDPs (\ref{eq:App-01-SDP-LMI-Primal}) and (\ref{eq:App-01-SDP-LMI-Dual}), and assume the following.
1) Suppose $A_1,\cdots,A_n$ are linearly independent, i.e., no nontrivial linear combination of $A_1,\cdots,A_n$ gives an all zero matrix.
2) The primal SDP (\ref{eq:App-01-SDP-LMI-Primal}) is strictly feasible, i.e., $\exists x: x_1 A_1 + \cdots + x_n A_n \succ B$, and is solvable (the minimum is attainable).
3) The dual SDP (\ref{eq:App-01-SDP-LMI-Dual}) is strictly feasible, i.e.,
$\exists {\rm \Lambda} \succ 0: \langle A_i, {\rm \Lambda} \rangle =c_i$, $i = 1,\cdots,n$, and is solvable (the maximum is attainable).
The optimal values of (\ref{eq:App-01-SDP-LMI-Primal}) and (\ref{eq:App-01-SDP-LMI-Dual}) are equal, and the complementary slackness condition
\begin{equation}
\label{eq:App-01-SDP-Complement}
\langle {\rm \Lambda}, x_1 A_1 + \cdots + x_n A_n - B \rangle = 0
\end{equation}
is necessary and sufficient for a pair of primal and dual feasible solutions $(x,{\rm \Lambda})$ to be optimal for their respective problems. For a pair of positive semidefinite matrices, it can be shown that
\begin{equation}
\langle X, Y \rangle = 0 \Leftrightarrow XY = YX = 0 \notag
\end{equation}
indicating that the eigenvalues of these two matrices in a certain common basis are ``complementary'': for every common eigenvector, at most one of the corresponding eigenvalues of $X$ and $Y$ can be strictly positive.
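This complementarity is easy to see on a pair of PSD matrices that are diagonal in a common basis (a toy example of our own):

```python
# X and Y are PSD and share eigenvectors; their positive eigenvalues
# occupy disjoint coordinates, so <X, Y> = 0 and X Y = 0.
X = [[2, 0], [0, 0]]
Y = [[0, 0], [0, 3]]

inner = sum(X[i][j] * Y[i][j] for i in range(2) for j in range(2))
XY = [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(inner, XY)  # 0 [[0, 0], [0, 0]]
```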
{\noindent \bf b. Formulation based on matrix decision variables}
This formulation directly incorporates a matrix decision variable $X \in \mathbb S^n_+$, and imposes other restrictions on $X$ through linear equations. In the objective function, the vector inner product $c^T x$ is replaced by a Frobenius inner product $\langle C, X \rangle$. In this way, an SDP can be written as
\begin{equation}
\label{eq:App-01-SDP-Hybrid-Primal}
\begin{aligned}
\min_X ~~ & \langle C, X \rangle \\
\mbox{s.t.}~~ & \langle A_i, X \rangle = b_i : \lambda_i,~ i=1,\cdots,m \\
& X \succeq 0 : {\rm \Lambda}
\end{aligned}
\end{equation}
By introducing dual variables (following the colon) for individual constraints, the dual program of (\ref{eq:App-01-SDP-Hybrid-Primal}) can be constructed as
\begin{equation}
\begin{aligned}
\max_{\lambda,\rm \Lambda} ~~ &
b^T \lambda + \langle 0, {\rm \Lambda} \rangle \\
\mbox{s.t.}~~ &
{\rm \Lambda} + \lambda_1 A_1 + \cdots + \lambda_m A_m = C \\
& {\rm \Lambda} \succeq 0
\end{aligned} \notag
\end{equation}
Eliminating $\rm \Lambda$, we obtain
\begin{equation}
\label{eq:App-01-SDP-Hybrid-Dual}
\begin{aligned}
\max_\lambda ~~ & b^T \lambda \\
\mbox{s.t.}~~ & C - \lambda_1 A_1 - \cdots - \lambda_m A_m \succeq 0
\end{aligned}
\end{equation}
It is observed that (\ref{eq:App-01-SDP-Hybrid-Primal}) and (\ref{eq:App-01-SDP-Hybrid-Dual}) take the same forms as (\ref{eq:App-01-SDP-LMI-Dual}) and (\ref{eq:App-01-SDP-LMI-Primal}), respectively, except for the signs of some coefficients.
SDP deals with positive semidefinite matrix variables, so it is especially powerful for eigenvalue-related problems, such as Lyapunov stability analysis and controller design, which are central topics in control theory. Moreover, every SOCP can be formulated as an SDP because
\begin{equation}
\| y \|_2 \le t \Leftrightarrow
\begin{bmatrix} tI & y \\ y^T & t\end{bmatrix} \succeq 0 \notag
\end{equation}
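For scalar $y$ the LMI reduces to a $2 \times 2$ matrix, whose positive semidefiniteness can be tested by its principal minors; a sanity check on a grid (of our own choosing) confirms the equivalence:

```python
def lmi_psd(y, t):
    # [[t, y], [y, t]] is PSD iff t >= 0 and det = t^2 - y^2 >= 0
    return t >= 0 and t * t - y * y >= 0

def soc(y, t):
    return abs(y) <= t

pairs = [(y / 2, t / 2) for y in range(-8, 9) for t in range(-4, 9)]
print(all(lmi_psd(y, t) == soc(y, t) for y, t in pairs))  # True
```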
Nevertheless, solving SOCPs via SDP may not be a good idea: interior-point algorithms for SOCPs have much better worst-case complexity than those for SDPs. In fact, SDPs are extremely popular in the convex relaxation of non-convex quadratic optimization problems, owing to their ability to offer a nearly globally optimal solution in many practical applications, such as the OPF problem in power systems. The SDP-based convex relaxation method for non-convex QCQPs will be discussed in the next section. Here we discuss some special cases involving homogeneous quadratic functions or at most two non-homogeneous quadratic functions.
{\noindent \bf 3. Homogeneous quadratic programs}
Consider the following quadratic program
\begin{equation}
\label{eq:App-01-SDP-Homo-QP}
\begin{aligned}
\min~~ & x^T B x \\
\mbox{s.t.}~~ & x^T A_i x \ge 0,~ i = 1,\cdots,m
\end{aligned}
\end{equation}
where $A_1,\cdots,A_m,B \in \mathbb S^n$ are constant coefficients. Suppose that problem (\ref{eq:App-01-SDP-Homo-QP}) is feasible. Due to its homogeneity, the optimal value is clear: $-\infty$ or 0, depending on whether there is a feasible solution $x$ such that $x^T B x < 0$ or not. But it is unclear which situation takes place; deciding this amounts to checking whether $x^T B x \ge 0$ holds over the intersection of the homogeneous inequalities $x^T A_i x \ge 0$, $i=1,\cdots,m$, i.e., whether the implication
\begin{equation}
\label{eq:App-01-SDP-H-SLemma-1}
x^T A_i x \ge 0,~ i = 1,\cdots,m \Rightarrow x^T B x \ge 0
\end{equation}
holds.
\begin{proposition}
\label{pr:App-01-SDP-H-SLemma-1}
If there exist $\lambda_i \ge 0$, $i=1,\cdots,m$ such that $B \succeq \sum_i \lambda_i A_i$, then the implication in (\ref{eq:App-01-SDP-H-SLemma-1}) is true.
\end{proposition}
To see this, note that $B \succeq \sum_i \lambda_i A_i \Leftrightarrow x^T (B - \sum_i \lambda_i A_i ) x \ge 0,~\forall x \Leftrightarrow x^T B x \ge \sum_i \lambda_i x^T A_i x,~\forall x$; therefore, $x^T B x \ge 0$ is a direct consequence of $x^T A_i x \ge 0$, $i = 1,\cdots,m$, as the right-hand side of the last inequality is then non-negative. Proposition \ref{pr:App-01-SDP-H-SLemma-1} provides a sufficient condition for (\ref{eq:App-01-SDP-H-SLemma-1}), and necessity is generally not guaranteed. Nevertheless, if $m=1$, the condition is both necessary and sufficient.
\begin{proposition}
\label{pr:App-01-SDP-H-SLemma-2}
(S-Lemma) Let $A,B \in \mathbb S^n$, and suppose the homogeneous quadratic inequality
\begin{equation}
(a) \quad x^T A x \ge 0 \notag
\end{equation}
is strictly feasible. Then the homogeneous quadratic inequality
\begin{equation}
(b) \quad x^T B x \ge 0 \notag
\end{equation}
is a consequence of (a) if and only if $\exists \lambda \ge 0: B \succeq \lambda A$.
\end{proposition}
Proposition \ref{pr:App-01-SDP-H-SLemma-2} is called the S-Lemma or S-Procedure. It can be proved in many ways. The most instructive one, in our opinion, is based on semidefinite relaxation, and can be found in \cite{App-A-CVX-Book-Ben}.
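A numeric illustration with matrices of our own choosing: $A = \mathrm{diag}(1,-1)$, for which $x^TAx \ge 0$ is strictly feasible at $x=(1,0)$, and $B = \mathrm{diag}(2,-1)$. Here $\lambda = 1$ gives $B - \lambda A = \mathrm{diag}(1,0) \succeq 0$, so by the S-Lemma the implication must hold, which sampling confirms:

```python
import math

def qa(x1, x2):  # x^T A x with A = diag(1, -1)
    return x1 * x1 - x2 * x2

def qb(x1, x2):  # x^T B x with B = diag(2, -1)
    return 2 * x1 * x1 - x2 * x2

# sample the unit circle; quadratic signs are invariant under scaling of x
pts = [(math.cos(0.01 * k), math.sin(0.01 * k)) for k in range(629)]
print(all(qb(*p) >= 0 for p in pts if qa(*p) >= 0))  # True
```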
{\noindent \bf 4. Non-homogeneous quadratic programs with a single constraint}
Consider the following quadratic program
\begin{equation}
\label{eq:App-01-SDP-Heter-QP}
\begin{aligned}
\min~~ & f_0 (x) = x^T A_0 x + 2 b^T_0 x + c_0 \\
\mbox{s.t.}~~ & f_1(x) = x^T A_1 x + 2 b^T_1 x + c_1 \le 0
\end{aligned}
\end{equation}
Let $f^*$ denote the optimal value, so that $f_0(x) - f^* \ge 0$ is a consequence of $-f_1(x) \ge 0$. A sufficient condition for this implication is $\exists \lambda \ge 0: f_0(x) - f^* + \lambda f_1(x) \ge 0,~ \forall x$. The left-hand side is a quadratic function of $x$ with matrix form
\begin{equation}
\left[ \begin{gathered} x \\ 1 \end{gathered} \right]^T
\left[ \begin{gathered} A_0 + \lambda A_1 \\ (b_0 + \lambda b_1)^T \end{gathered}\quad
\begin{gathered} b_0 + \lambda b_1 \\ c_0 + \lambda c_1 - f^* \end{gathered}\right]
\left[ \begin{gathered} x \\ 1 \end{gathered} \right] \notag
\end{equation}
Its non-negativeness is equivalent to
\begin{equation}
\left[ \begin{gathered}
A_0 + \lambda A_1 \\ (b_0 + \lambda b_1)^T
\end{gathered} \quad
\begin{gathered}
b_0 + \lambda b_1 \\ c_0 + \lambda c_1 - f^*
\end{gathered} \right] \succeq 0
\end{equation}
Similar to the homogeneous case, under the strict feasibility assumption this condition is also necessary. In view of this, the optimal value $f^*$ of (\ref{eq:App-01-SDP-Heter-QP}) solves the following SDP
\begin{equation}
\begin{aligned}
\max_{\lambda,f}~~ & f \\
\mbox{s.t.} ~~ & \lambda \ge 0 \\
&\left[ \begin{gathered}
A_0 + \lambda A_1 \\ (b_0 + \lambda b_1)^T
\end{gathered} \quad
\begin{gathered}
b_0 + \lambda b_1 \\ c_0 + \lambda c_1 - f
\end{gathered} \right] \succeq 0
\end{aligned}
\label{eq:App-01-SDP-Heter-QP-LMI}
\end{equation}
This conclusion is known as the non-homogeneous S-Lemma:
\begin{proposition}
\label{pr:App-01-SDP-N-SLemma-2}
(Non-homogeneous S-Lemma) Let $A_i \in \mathbb S^n$, $b_i \in \mathbb R^n$, and $c_i \in \mathbb R$, $i=0,1$, if $\exists x: x^T A_1 x + 2 b^T_1 x + c_1 < 0$, the implication
\begin{equation}
x^T A_1 x + 2 b^T_1 x + c_1 \le 0 \Rightarrow
x^T A_0 x + 2 b^T_0 x + c_0 \le 0 \notag
\end{equation}
holds if and only if
\begin{equation}
\exists \lambda \ge 0 :
\left[ \begin{gathered} A_0 \\ b_0^T \end{gathered} ~~
\begin{gathered} b_0 \\ c_0 \end{gathered} \right] \preceq \lambda
\left[ \begin{gathered} A_1 \\ b_1^T \end{gathered} ~~
\begin{gathered} b_1 \\ c_1 \end{gathered} \right]
\end{equation}
\end{proposition}
Because the implication amounts to requiring that the maximum of the quadratic function $x^T A_0 x + 2 b^T_0 x + c_0$ over the set $\{x~|~x^T A_1 x + 2 b^T_1 x + c_1 \le 0 \}$ be non-positive, which is a special case of (\ref{eq:App-01-SDP-Heter-QP}) with $f_0(x) = -x^T A_0 x - 2 b^T_0 x - c_0$, Proposition \ref{pr:App-01-SDP-N-SLemma-2} is a particular case of (\ref{eq:App-01-SDP-Heter-QP-LMI}) corresponding to the requirement $f^* \ge 0$.
A formal proof based on semidefinite relaxation is given in \cite{App-A-CVX-Book-Ben}. Since a quadratic inequality describes an ellipsoid, Proposition \ref{pr:App-01-SDP-N-SLemma-2} can be used to test whether an ellipsoid is contained in another one.
As a short conclusion, we summarize the relation of discussed convex programs in Fig. \ref{fig:App-01-08}.
\begin{figure}
\caption{Relations of the discussed convex programs.}
\label{fig:App-01-08}
\end{figure}
\section{Convex Relaxation Methods for Non-convex QCQPs}
\label{App-A-Sect03}
One of the most prevalent and promising applications of SDP is to build tractable approximations of computationally intractable optimization problems. One of the most quintessential appliances is the convex relaxation of
quadratically constrained quadratic programs (QCQPs), which cover a vast range of engineering optimization problems. QCQPs are generally non-convex and could have more than one locally optimal solution, and different local solutions may yield significantly different objective values. However, gradient-based algorithms can only find a local solution, which largely depends on the initial point. One primary interest is to identify the globally optimal solution or to determine a high-quality bound on the optimum, which can be used to quantify the optimality gap of a given local solution. The SDP relaxation technique for solving
non-convex QCQPs is briefly reviewed in this section.
\subsection{SDP Relaxation and Valid Inequalities}
\label{App-A-Sect03-01}
A standard fact of quadratic expression is
\begin{equation}
\label{eq:App-01-SDPr-Quad}
x^T Q x = \langle Q, x x^T \rangle
\end{equation}
where $\langle \cdot, \cdot \rangle$ stands for the Frobenius inner product, i.e., $\langle Q, X \rangle = \mbox{tr}(Q^T X)$.
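This identity is straightforward to check numerically. A minimal Python sketch (the matrices here are arbitrary illustrations, not from any particular problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                      # symmetric, possibly indefinite
x = rng.standard_normal(n)

X = np.outer(x, x)                     # lifted matrix X = x x^T
quad = x @ Q @ x                       # quadratic form x^T Q x
frob = np.sum(Q * X)                   # Frobenius inner product <Q, X>

assert np.isclose(quad, frob)
assert np.isclose(frob, np.trace(Q @ X))
```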
Following the logic in \cite{App-A-SDP-Relaxation-Tutor}, we focus our attention on QCQPs in the following form
\begin{equation}
\label{eq:App-01-SDPr-QCQP}
\min~~ \{x^T C x + c^T x ~|~ x \in F \}
\end{equation}
where
\begin{equation}
\label{eq:App-01-SDPr-QCQP-Fea}
F = \left\{ x \in \mathbb R^n ~\middle|~ x^T A_k x + a^T_k x \le b_k,~
k = 1, \cdots, m,~ l \le x \le u \right\}
\end{equation}
All coefficient matrices and vectors have compatible dimensions. If $A_k = 0$ in all constraints, the feasible set $F$ is a polyhedron and (\ref{eq:App-01-SDPr-QCQP}) reduces to a quadratic program (QP); if $A_k \succeq 0$, $k=1,\cdots,m$ and $C \succeq 0$, then (\ref{eq:App-01-SDPr-QCQP}) is a convex QCQP, which is easy to solve. Without loss of generality, we assume $A_k$, $k=1,\cdots,m$ and $C$ are indefinite, so $F$ is a non-convex set and the objective is a non-convex function. In fact, a number of hard optimization problems can be cast as non-convex QCQPs of the form (\ref{eq:App-01-SDPr-QCQP}). For example, a polynomial optimization problem can be reduced to a QCQP by introducing a tower of condensing variables, e.g., $x_1 x_2 x_3 x_4$ can be replaced by the quadratic term $x_{12} x_{34}$ with $x_{12} = x_1 x_2$ and $x_{34}= x_3 x_4$. Moreover, a binary constraint $x \in \{0,1\}$ is equivalent to the quadratic equality $x(x-1)=0$ where $x$ is continuous.
A common idea for linearizing the non-convex terms $x^T A_k x$ is to define new variables $X_{ij}=x_i x_j$, $i=1,\cdots,n$, $j=1,\cdots,n$. In this way, $x^T A_k x = \sum_i \sum_j (A_k)_{ij} x_i x_j = \sum_{i,j} (A_k)_{ij}X_{ij}$, and the last expression is linear in $X$. Recalling (\ref{eq:App-01-SDPr-Quad}), this fact can be written in the compact form
\begin{equation}
x^T A_k x = \langle A_k, X\rangle, ~ X = x x^T \notag
\end{equation}
With this transformation, QCQP (\ref{eq:App-01-SDPr-QCQP}) becomes
\begin{equation}
\label{eq:App-01-SDPr-QCQP-Ext}
\min~~ \{ \langle C, X \rangle + c^T x ~|~ (x,X) \in \hat F \}
\end{equation}
where
\begin{equation}
\label{eq:App-01-SDPr-QCQP-Fea-Ext}
\hat F = \left\{ (x,X) \in \mathbb R^n \times \mathbb S^n ~\middle|~
\begin{gathered}
\langle A_k, X \rangle + a^T_k x \le b_k,~ k = 1, \cdots, m~ \\
l \le x \le u,~~ X = x x^T
\end{gathered} \right\}
\end{equation}
In problem (\ref{eq:App-01-SDPr-QCQP-Ext}), the non-convexity is concentrated in the relation between the lifting variable $X$ and the original variable $x$, whereas all other constraints are linear. Moreover, if we replace $\hat F$ with its convex hull conv($\hat F$), the optimal solution of (\ref{eq:App-01-SDPr-QCQP-Ext}) does not change, because its objective function is linear. However, conv($\hat F$) has no closed-form expression. Convex relaxation approaches can be interpreted as attempts to approximate conv($\hat F$) through structured convex constraints that can be recognized by existing solvers.
We define the following linear relaxation
\begin{equation}
\label{eq:App-01-SDPr-QCQP-Fea-LPr}
\hat L = \left\{ (x,X) \in \mathbb R^n \times \mathbb S^n ~\middle|~
\begin{gathered}
\langle A_k, X \rangle + a^T_k x \le b_k \\
k = 1, \cdots, m~ \\ l \le x \le u
\end{gathered} \right\}
\end{equation}
which contains only linear constraints. Now, let us consider the lifting constraint
\begin{equation}
\label{eq:App-01-SDPr-X=xxT}
X = x x^T
\end{equation}
which is called a rank-1 constraint. A rank constraint is non-convex and cannot be handled directly by most solvers. Notice, however, that if (\ref{eq:App-01-SDPr-X=xxT}) holds, then
\begin{equation}
\begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix} =
\begin{bmatrix} 1 & x^T \\ x & x x^T \end{bmatrix} =
\begin{bmatrix} 1 \\ x \end{bmatrix}
\begin{bmatrix} 1 \\ x \end{bmatrix}^T \succeq 0 \notag
\end{equation}
Define an LMI constraint
\begin{equation}
\label{eq:App-01-SDPr-QCQP-Fea-LMI}
\mbox{LMI} = \left\{ (x,X) ~\middle|~ Y=
\begin{bmatrix}
1 & x^T \\ x & X
\end{bmatrix} \succeq 0 \right\}
\end{equation}
This positive semi-definiteness condition holds at every point of conv($\hat F$).
The basic SDP relaxation of (\ref{eq:App-01-SDPr-QCQP-Ext}) replaces the rank-1 constraint in $\hat F$ with the weaker but convex constraint (\ref{eq:App-01-SDPr-QCQP-Fea-LMI}), giving rise to the following SDP
\begin{equation}
\label{eq:App-01-SDPr-QCQP-Basic}
\begin{aligned}
\min~~ & \langle C, X \rangle + c^T x \\
\mbox{s.t.}~~ & (x,X) \in \hat L \cap \mbox{LMI}
\end{aligned}
\end{equation}
Clearly, the LMI constraint enlarges the feasible region defined by (\ref{eq:App-01-SDPr-X=xxT}), so the optimal solution to (\ref{eq:App-01-SDPr-QCQP-Basic}) may not be feasible in the original QCQP, and the optimal value is a lower bound on the true optimum. In this situation, the SDP relaxation is inexact. Conversely, if the matrix $Y$ is indeed rank-1 at the optimal solution, then the SDP relaxation is exact and $x$ solves the original QCQP (\ref{eq:App-01-SDPr-QCQP}).
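Exactness can be checked mechanically from the solver output by inspecting the eigenvalues of $Y$. The following sketch constructs an exact (rank-1) $Y$ by hand and recovers $x$ from it; it uses fabricated data and no SDP solver, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
x = rng.standard_normal(n)

# An exact solution of the relaxation has the form Y = [1, x^T; x, X]
# with X = x x^T, i.e. Y is the rank-1 outer product of (1, x).
v = np.concatenate(([1.0], x))
Y = np.outer(v, v)

eigvals = np.linalg.eigvalsh(Y)
rank = int(np.sum(eigvals > 1e-9 * eigvals.max()))
assert rank == 1                       # the relaxation would be exact

# Recover x from the first column of Y (its first entry is 1).
x_rec = Y[1:, 0] / Y[0, 0]
assert np.allclose(x_rec, x)
```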
The basic SDP relaxation model (\ref{eq:App-01-SDPr-QCQP-Basic}) can be further improved by enforcing additional linkages between $x$ and $X$, which are called valid inequalities. Suppose the linear inequalities $\alpha^T x \le \alpha_0$ and $\beta^T x \le \beta_0$ are chosen from $\hat L$; then the quadratic inequality
\begin{equation}
(\alpha_0 - \alpha^T x) (\beta_0 - \beta^T x) = \alpha_0 \beta_0 - \alpha_0 \beta^T x -\beta_0 \alpha^T x + x^T \alpha \beta^T x \ge 0 \notag
\end{equation}
holds for all $x$ satisfying the two linear inequalities. The quadratic term can be linearized via the lifting variable $X$, resulting in the linear inequality
\begin{equation}
\label{eq:App-01-SDPr-QCQP-VIN}
\alpha_0 \beta_0 - \alpha_0 \beta^T x - \beta_0 \alpha^T x +
\langle \beta \alpha^T, X \rangle \ge 0
\end{equation}
Any pair of linear inequalities in $\hat L$ (possibly the same one taken twice) can be used to construct valid inequalities. Because additional constraints are imposed on $X$, the relaxation is tightened: the feasible region shrinks, but may still be larger than conv($\hat F$).
If we construct valid inequality (\ref{eq:App-01-SDPr-QCQP-VIN}) from side constraint $l \le x \le u$, we get
\begin{equation}
\label{eq:App-01-SDPr-QCQP-VIN-Quad}
\left. \begin{gathered}
(x_i - l_i) (x_j - l_j) \ge 0 \\
(x_i - l_i) (u_j - x_j) \ge 0 \\
(u_i - x_i) (x_j - l_j) \ge 0 \\
(u_i - x_i) (u_j - x_j) \ge 0 \\
\end{gathered} \right\},~~
\forall i,j = 1,\cdots,n,~ i \le j
\end{equation}
Expanding these quadratic inequalities (the coefficient of each quadratic term $x_i x_j$ is $\pm 1$) and replacing $x_i x_j$ with $X_{ij}$, we obtain simple bounds on $X_{ij}$
\begin{equation}
\begin{gathered}
x_i l_j + x_j l_i - l_i l_j \le X_{ij} \\
x_i u_j + x_j l_i - l_i u_j \ge X_{ij} \\
u_i x_j - u_i l_j + x_i l_j \ge X_{ij} \\
u_i x_j + x_i u_j - u_i u_j \le X_{ij}
\end{gathered} \notag
\end{equation}
or in a compact matrix form \cite{App-A-SDP-Relaxation-Tutor}
\begin{equation}
\label{eq:App-01-SDPr-QCQP-VIN-RLT}
\mbox {RLT} = \left\{ (x,X) ~\middle|~
\begin{gathered}
l x^T + x l^T - l l^T \le X \\
u x^T + x u^T - u u^T \le X \\
x u^T + l x^T - l u^T \ge X
\end{gathered} \right\}
\end{equation}
Constraint set (\ref{eq:App-01-SDPr-QCQP-VIN-RLT}) is known as the reformulation-linearization technique (RLT), after the term coined in \cite{App-A-APP-RLT}. These constraints, first proposed in \cite{App-A-Convex-Concave-Envelop}, have been studied extensively owing to their simple structure and satisfactory performance in various applications.
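The RLT bounds are valid at every exactly lifted point, which is easy to confirm numerically. A quick sanity check with arbitrary illustrative bounds:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
l = -rng.random(n)                     # lower bounds
u = rng.random(n) + 0.5                # upper bounds, u > l
x = l + (u - l) * rng.random(n)        # any point with l <= x <= u
X = np.outer(x, x)                     # exact lifting X = x x^T

lx, ux = np.outer(l, x), np.outer(u, x)
# The three RLT matrix inequalities hold at the exactly lifted point.
assert np.all(lx + lx.T - np.outer(l, l) <= X + 1e-12)
assert np.all(ux + ux.T - np.outer(u, u) <= X + 1e-12)
assert np.all(np.outer(x, u) + np.outer(l, x) - np.outer(l, u) >= X - 1e-12)
```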
The improved SDP relaxation with valid inequalities can be written as
\begin{equation}
\label{eq:App-01-SDPr-QCQP-LMI+RLT}
\begin{aligned}
\min~~ & \langle C, X \rangle + c^T x \\
\mbox{s.t.}~~ & (x,X) \in \hat L \cap \mbox{LMI} \cap \mbox{RLT}
\end{aligned}
\end{equation}
From the construction of $\hat L$, LMI, and RLT, it follows directly that
\begin{equation}
\label{eq:App-01-SDPr-QCQP-Inclusion}
\mbox{conv}(\hat F) \subseteq \hat L \cap \mbox{LMI} \cap \mbox{RLT}
\end{equation}
The inclusion becomes tight only in some very special situations, such as those encountered in the homogeneous and non-homogeneous S-Lemma. Nevertheless, what we really need is the equivalence between the optimal solution of the relaxed problem (\ref{eq:App-01-SDPr-QCQP-LMI+RLT}) and that of the original problem (\ref{eq:App-01-SDPr-QCQP}): if the optimal matrix variable of (\ref{eq:App-01-SDPr-QCQP-LMI+RLT}) admits a rank-1 decomposition
\begin{equation}
\begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix} =
\begin{bmatrix} 1 \\ x \end{bmatrix}
\begin{bmatrix} 1 \\ x \end{bmatrix}^T \notag
\end{equation}
which indicates that $X = x x^T$, then $x$ is optimal in (\ref{eq:App-01-SDPr-QCQP}), and the SDP relaxation is said to be exact, even though $\mbox{conv}(\hat F)$ may be a strict subset of $\hat L \cap \mbox{LMI} \cap \mbox{RLT}$.
\subsection{Successively Tightening the Relaxation}
\label{App-A-Sect03-02}
If the matrix $X$ has a rank higher than 1, the corresponding optimal solution $x$ in (\ref{eq:App-01-SDPr-QCQP-LMI+RLT}) may be infeasible in (\ref{eq:App-01-SDPr-QCQP}).
The rank-1 constraint on $X$ can be exactly described by the pair of matrix inequalities $X \succeq x x^T$ and $X \preceq x x^T$. The former is equivalent to (\ref{eq:App-01-SDPr-QCQP-Fea-LMI}) by the Schur complement theorem; the latter is non-convex and is simply dropped in the SDP relaxation.
{\noindent \bf 1. A dynamical valid inequality generation approach}
An approach is proposed in \cite{App-A-SDP-Relaxation-Tutor} to generate valid inequalities dynamically by exploiting the violation of $X \preceq x x^T$. The motivation comes from the fact that
\begin{equation}
X -x x^T \preceq 0 \Leftrightarrow \langle X, v_i v^T_i \rangle \le (v^T_i x)^2,~ i=1,\cdots,n \notag
\end{equation}
where $\{v_1,\cdots,v_n\}$ is an orthonormal basis of $\mathbb R^n$ consisting of eigenvectors of $X - x x^T$. To see this, any vector $h \in \mathbb R^n$ can be expressed as a linear combination $h = \sum_{i=1}^n \lambda_i v_i$ of the basis; since the cross terms $v^T_i (X - x x^T) v_j$, $i \ne j$, vanish for an eigenvector basis, $h^T (X-x x^T) h = \sum_{i=1}^n \lambda^2_i v^T_i (X - x x^T) v_i = \sum_{i=1}^n \lambda^2_i [\langle X, v_i v^T_i \rangle - (v^T_i x)^2] \le 0$. In view of this,
\begin{equation}
\hat F = \hat L \cap \mbox{LMI} \cap \mbox{NSD} \notag
\end{equation}
where
\begin{equation}
\begin{aligned}
\mbox{NSD} & = \{(x,X)~|~ X -x x^T \preceq 0 \} \\
& = \{ (x,X) ~|~ \langle X, v_i v^T_i \rangle \le
(v^T_i x)^2, i=1,\cdots,n \}
\end{aligned} \notag
\end{equation}
If the standard orthonormal basis is used instead, these conditions reduce to
\begin{equation}
\label{eq:App-01-SDPr-QCQP-NSD-1}
X_{ii} \le x_i^2,~i=1,\cdots,n
\end{equation}
which are necessary for $(x,X) \in \mbox{NSD}$ but, in general, not sufficient.
It is proposed in \cite{App-A-SDP-Relaxation-Tutor} to instead impose the conditions
\begin{equation}
\label{eq:App-01-SDPr-QCQP-NSD-2}
\langle X, \eta_i \eta^T_i \rangle \le (\eta^T_i x)^2,~i=1,\cdots,n
\end{equation}
where $\{\eta_1,\cdots,\eta_n\}$ are the eigenvectors of the matrix $X - x x^T$ evaluated at the current relaxation solution, because these directions exclude points violating $X -x x^T \preceq 0$ most effectively.
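A sketch of this cut-generation step follows; the perturbed $X$ is fabricated to mimic an inexact relaxation solution. The violation of the $i$-th cut equals exactly the $i$-th eigenvalue of $X - x x^T$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
x = rng.standard_normal(n)
# Fabricate an X with X - x x^T not negative semidefinite, mimicking
# an inexact relaxation solution.
M = rng.standard_normal((n, n))
X = np.outer(x, x) + 0.1 * (M @ M.T)

w, V = np.linalg.eigh(X - np.outer(x, x))   # eigenpairs of X - x x^T
for i in range(n):
    eta = V[:, i]
    violation = eta @ X @ eta - (eta @ x) ** 2
    # <X, eta eta^T> - (eta^T x)^2 equals the i-th eigenvalue.
    assert np.isclose(violation, w[i])
```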
The non-convex constraints in (\ref{eq:App-01-SDPr-QCQP-NSD-1}) and (\ref{eq:App-01-SDPr-QCQP-NSD-2}) can be handled by the disjunctive programming technique derived in \cite{App-A-QCQP-Extended} or by the convex-concave procedure investigated in \cite{App-A-CCP-Boyd}. The former is an exact approach requiring binary variables to formulate the disjunctive constraints; the latter is a heuristic that solves only convex optimization problems. We do not detail these techniques further here.
{\noindent \bf 2. A rank penalty method \cite{App-A-SDP-Rank-CCP}}
In view of the rank-1 exactness condition, another way to tighten the SDP relaxation is to control the rank of the optimal solution. A successive rank penalization approach is proposed in \cite{App-A-SDP-Rank-CCP}. We consider problem (\ref{eq:App-01-SDPr-QCQP-Ext}) as a rank-constrained SDP
\begin{equation}
\label{eq:App-01-SDP-Rank}
\min~~ \{ \langle {\rm \Omega}, Y \rangle ~|~ Y \in \hat L \cap \mbox{LMI} \cap \mbox{RLT},~ \mbox{rank}(Y) = 1 \}
\end{equation}
where
\begin{equation}
{\rm \Omega} =
\begin{bmatrix}
0 & 0.5 c^T \\ 0.5 c & C
\end{bmatrix},~~ Y =
\begin{bmatrix}
1 & x^T \\ x & X
\end{bmatrix} \notag
\end{equation}
Constraints $\hat L$, LMI, and RLT (rearranged in terms of the variable $Y$) are defined in (\ref{eq:App-01-SDPr-QCQP-Fea-LPr}), (\ref{eq:App-01-SDPr-QCQP-Fea-LMI}), and (\ref{eq:App-01-SDPr-QCQP-VIN-RLT}), respectively. The last constraint in (\ref{eq:App-01-SDP-Rank}) ensures that $Y$ has a rank-1 decomposition, so that $X = x x^T$. In the presence of the rank-1 constraint, LMI and RLT are redundant, but they yield a high-quality convex relaxation once the rank constraint is relaxed.
To treat the rank-1 constraint in a soft manner, we introduce a dummy variable $Z$ and penalize the matrix rank in the objective function, giving rise to the following problem
\begin{equation}
\label{eq:App-01-SDP-Rank-Penalty}
\begin{aligned}
\min_{Y}~~ & \left\{ \langle {\rm \Omega}, Y \rangle +
\min_{Z} \frac{\rho}{2} \| Y-Z \|^2_2 \right\} \\
\mbox{s.t.} ~~ & Y \in \hat L \cap \mbox{LMI} \cap \mbox{RLT},~
\mbox{rank}(Z) = 1
\end{aligned}
\end{equation}
If the penalty parameter $\rho$ is sufficiently large, the penalty term is zero at the optimal solution, so $Y=Z$ and rank$(Z)=1$. One advantage of this treatment is that the constraints on $Y$ and $Z$ are decoupled, and the inner rank-minimization problem has a closed-form solution.
To see this, suppose rank$(Y) = k > 1$ and let the singular value decomposition of $Y$ be $Y = U {\rm \Sigma} V^T$, where
\begin{equation}
{\rm \Sigma} = \mbox{diag}(S,0),~
S = \mbox{diag}(\sigma_1,\cdots,\sigma_k),~
\sigma_1 \ge \cdots \ge \sigma_k > 0 \notag
\end{equation}
and $U$, $V$ are orthogonal matrices. Let the matrix $D$ have the same dimensions as $Y$, with $D_{11}=\sigma_1$ and $D_{ij}=0$ for all $(i,j) \ne (1,1)$. Then, by the unitary invariance of the Frobenius norm,
\begin{equation}
\begin{aligned}
& \quad \min_{Z}~ \left\{ \frac{\rho}{2} \| Y - Z \|^2_2 ~\middle|~
\mbox{rank}(Z) = 1 \right\} \\
& = \min_{Z}~ \left\{ \frac{\rho}{2} \| U^T (Y - Z) V \|^2_2 ~\middle|~
\mbox{rank}(Z) = 1 \right\} \\
& = \min_{Z}~ \left\{ \frac{\rho}{2} \| {\rm \Sigma} - U^T Z V \|^2_2 ~\middle|~ \mbox{rank}(Z) = 1 \right\} \\
& = \frac{\rho}{2} \| {\rm \Sigma} - D \|^2_2
= \frac{\rho}{2} \sum_{i=2}^k \sigma_i^2 \\
& = \frac{\rho}{2} \| Y \|^2_2 - \frac{\rho}{2} \sigma_1^2 (Y)
\end{aligned} \notag
\end{equation}
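This closed-form residual can be confirmed numerically; the sketch below omits the penalty factor $\rho/2$ and uses an arbitrary test matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
k = 5
Y = rng.standard_normal((k, k))

U, s, Vt = np.linalg.svd(Y)            # singular values in descending order
# Best rank-1 approximation keeps only the leading singular triplet.
Z = s[0] * np.outer(U[:, 0], Vt[0, :])

residual = np.linalg.norm(Y - Z, 'fro') ** 2
assert np.isclose(residual, np.sum(s[1:] ** 2))        # sum_{i>=2} sigma_i^2
assert np.isclose(residual, np.linalg.norm(Y, 'fro') ** 2 - s[0] ** 2)
```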
To represent the term $\sigma^2_1(Y)$ via a convex function, let the matrix $\rm \Theta$ have the same dimensions as $Y$, with ${\rm \Theta}_{11} = 1$ and ${\rm \Theta}_{ij}=0$ for all $(i,j) \ne (1,1)$. Then
\begin{equation}
\mbox{tr}(Y^T U {\rm \Theta} U^T Y) = \mbox{tr}(V {\rm \Sigma} U^T U {\rm \Theta} U^T U {\rm \Sigma} V^T) = \mbox{tr}(V {\rm \Sigma} {\rm \Theta} {\rm \Sigma} V^T)
= \mbox{tr}({\rm \Sigma} {\rm \Theta} {\rm \Sigma}) = \sigma^2_1 (Y) \notag
\end{equation}
Define two functions $f(Y) = \langle {\rm \Omega}, Y \rangle + \frac{\rho}{2} \|Y\|^2_2$ and $g(Y) = \mbox{tr}(Y^T U {\rm \Theta} U^T Y)$, where $U$ is held fixed at the current iterate. Because $\|Y\|_2$ is convex in $Y$ (Example 3.11, \cite{App-A-CVX-Book-Boyd}), so is $\|Y\|^2_2$ (composition rule, page 84, \cite{App-A-CVX-Book-Boyd}); hence $f(Y)$ is convex in $Y$, being the sum of a linear function and a convex function. As for $g(Y)$, its Hessian matrix is
\begin{equation*}
\nabla^2_Y g(Y) = U {\rm \Theta} U^T = U {\rm \Theta}^T {\rm \Theta} U^T = ({\rm \Theta} U^T)^T {\rm \Theta} U^T \succeq 0
\end{equation*}
so $g(Y)$ is also convex in $Y$. Substituting the above results into problem (\ref{eq:App-01-SDP-Rank-Penalty}), the rank-constrained SDP (\ref{eq:App-01-SDP-Rank}) boils down to
\begin{equation}
\label{eq:App-01-SDP-Rank-DCP}
\min_{Y} \left\{ \langle {\rm \Omega}, Y \rangle + \frac{\rho}{2} \|Y\|^2_2 - \frac{\rho}{2} \mbox{tr}(Y^T U {\rm \Theta} U^T Y) ~\middle|~ Y \in \hat L \cap \mbox{LMI} \cap \mbox{RLT} \right\}
\end{equation}
The objective function is a difference-of-convex (DC) function and the feasible region is convex, so (\ref{eq:App-01-SDP-Rank-DCP}) is a DC program. One can employ the convex-concave procedure discussed in \cite{App-A-CCP-Boyd} to solve this problem; the procedure is summarized in Algorithm \ref{Ag:App-01-SDP-Rank-CCP}.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf : Sequential SDP}
\begin{algorithmic}[1]
\STATE Choose an initial penalty parameter $\rho^0$, a penalty growth rate $\tau > 0$, and solve the following SDP relaxation model
\begin{equation}
\min~~ \{ \langle {\rm \Omega}, Y \rangle ~|~
Y \in \hat L \cap \mbox{LMI} \cap \mbox{RLT} \} \notag
\end{equation}
The optimal solution is $Y^*$.
\STATE Construct the linear approximation of $g(Y)$ as
\begin{equation}
\label{eq:App-01-SDP-DCP-Concave-Lin}
g_L (Y,Y^*) = g(Y^*) + \langle \nabla g(Y^*),Y-Y^* \rangle
\end{equation}
Solve the following SDP
\begin{equation}
\label{eq:App-01-SDP-DCP-Master}
\min_{Y} \left\{ f(Y) - \frac{\rho}{2} g_L(Y,Y^*) ~\middle|~ Y \in \hat L \cap \mbox{LMI} \cap \mbox{RLT} \right\}
\end{equation}
The optimal solution is $Y^*$.
\STATE If rank$(Y^*)=1$, terminate and report the optimal solution $Y^*$; otherwise, update $\rho \leftarrow (1+\tau) \rho$, and go to step 2.
\end{algorithmic}
\label{Ag:App-01-SDP-Rank-CCP}
\end{algorithm}
For the convergence of Algorithm \ref{Ag:App-01-SDP-Rank-CCP}, we have the following properties.
\begin{proposition}
\label{pr:App-01-SDP-Rank-CCP-1}
\cite{App-A-SDP-Rank-CCP} The optimal value sequence $F(Y^i)$ generated by Algorithm \ref{Ag:App-01-SDP-Rank-CCP} is monotonically decreasing.
\end{proposition}
Denote by $F(Y) = f(Y) - \frac{\rho}{2} g(Y)$ the objective function of (\ref{eq:App-01-SDP-Rank-DCP}) in the DC form, and $H(Y,Y^i) = f(Y) - \frac{\rho}{2} g_L(Y,Y^i)$ the convexified objective function in (\ref{eq:App-01-SDP-DCP-Master}) by linearizing the concave term in $F(Y)$. Two basic facts help explain this proposition:
1) $g_L(Y^*,Y^*) = g(Y^*)$ for any $Y^*$, which follows directly from the definition in (\ref{eq:App-01-SDP-DCP-Concave-Lin}).
2) For any given $Y^*$, $g(Y) \ge g_L(Y,Y^*)$, $\forall Y$, because the graph of a convex function must lie over its tangent plane at any fixed point.
First, we can assert the inequality $H(Y^{i+1},Y^i) \le H(Y^i,Y^i)$, because $H(Y,Y^i)$ is minimized in problem (\ref{eq:App-01-SDP-DCP-Master}): the optimum $H(Y^{i+1},Y^i)$ attains a value no greater than that at any other feasible point.
Furthermore, with the definition of $H(Y^i,Y^i)$, we have
\begin{equation}
H(Y^{i+1},Y^i) \le H(Y^i,Y^i) = f(Y^i) - \frac{\rho}{2} g_L(Y^i,Y^i) =
f(Y^i) - \frac{\rho}{2} g(Y^i) = F(Y^i) \notag
\end{equation}
On the other hand,
\begin{equation}
H(Y^{i+1},Y^i)=f(Y^{i+1}) - \frac{\rho}{2} g_L(Y^{i+1},Y^i) \ge f(Y^{i+1}) - \frac{\rho}{2} g(Y^{i+1})=F(Y^{i+1}) \notag
\end{equation}
Consequently, we arrive at the monotonic property
\begin{equation}
F(Y^{i+1}) \le F(Y^i) \notag
\end{equation}
\begin{proposition}
\label{pr:App-01-SDP-Rank-CCP-2}
\cite{App-A-SDP-Rank-CCP} The solution sequence $Y^i$ generated by Algorithm \ref{Ag:App-01-SDP-Rank-CCP} approaches the optimal solution of problem (\ref{eq:App-01-SDP-Rank}) as $\rho \rightarrow \infty$.
\end{proposition}
It is easy to understand that whenever $\rho$ is sufficiently large, the penalty term will tend to 0, and the rank-1 constraint in (\ref{eq:App-01-SDP-Rank}) is met. A formal proof can be found in \cite{App-A-SDP-Rank-CCP}. A few more remarks are given below.
1) The convex-concave procedure in \cite{App-A-SDP-Boyd} is a local algorithm under mild conditions and needs a manually supplied initial point. Algorithm \ref{Ag:App-01-SDP-Rank-CCP}, however, is deliberately initialized at the solution offered by the SDP relaxation model, which is usually close to the global optimum for many engineering optimization problems. Therefore, Algorithm \ref{Ag:App-01-SDP-Rank-CCP} generally performs well and often identifies the globally optimal solution, although a provable guarantee is non-trivial.
2) In practical applications, Algorithm \ref{Ag:App-01-SDP-Rank-CCP} may converge without the penalty parameter approaching infinity: when a suitable constraint qualification holds, there exists an exact penalty parameter $\rho^*$ such that the optimal solution yields a zero penalty term for any $\rho \ge \rho^*$ \cite{App-A-Exact-Penalty-1,App-A-Exact-Penalty-2}, and Algorithm \ref{Ag:App-01-SDP-Rank-CCP} converges in a finite number of steps.
If an exact penalty parameter does not exist, Algorithm \ref{Ag:App-01-SDP-Rank-CCP} may fail to converge. In such circumstances, one can impose an upper bound on $\rho$ and use an alternative convergence criterion: the change of the objective value $F(Y)$ in two consecutive iterations falls below a given threshold. As a result, Algorithm \ref{Ag:App-01-SDP-Rank-CCP} finds an approximate solution of problem (\ref{eq:App-01-SDP-Rank}), although the rank-1 constraint may not be enforced exactly.
3) From the perspective of numerical computation, a very large $\rho$ may make the problem ill-conditioned and lead to numerical instability, so it is useful to increase $\rho$ gradually from a small value. Another reason for the moderate growth of $\rho$ is that it avoids dramatic changes of the optimal solution between two successive iterations; as a result, $g_L(Y,Y^*)$ provides a relatively accurate approximation of $g(Y)$ in every iteration.
4) The penalty term $\rho_i p(Y^i)/2 = \rho_i \left[ \|Y^i\|^2_2 - \mbox{tr}(Y^{iT} U {\rm \Theta} U^T Y^i) \right]/2$ gives an upper bound on the optimality gap induced by the rank relaxation. To see this, let $\rho^*$ and $Y^*$ be the exact penalty parameter and the corresponding optimal solution of (\ref{eq:App-01-SDP-DCP-Master}), i.e., $p(Y^*)=0$, and let $\rho_i$ and $Y^i$ be the penalty parameter and optimal solution in the $i$-th iteration. According to Proposition \ref{pr:App-01-SDP-Rank-CCP-1}, we have $\langle {\rm \Omega}, Y^* \rangle \le \langle {\rm \Omega}, Y^i \rangle + \rho_i p(Y^i)/2$; moreover, since the rank-1 constraint is relaxed before Algorithm \ref{Ag:App-01-SDP-Rank-CCP} converges, $\langle {\rm \Omega}, Y^i \rangle \le \langle {\rm \Omega}, Y^* \rangle$ holds. Therefore, $\langle {\rm \Omega}, Y^i \rangle$ and $\langle {\rm \Omega}, Y^i \rangle + \rho_i p(Y^i)/2$ are lower and upper bounds for the optimal value of problem (\ref{eq:App-01-SDP-Rank}). In this regard, $\rho_i p(Y^i)/2$ is an estimate of the optimality gap.
\subsection{Completely Positive Program Relaxation}
\label{App-A-Sect03-03}
Inspired by the convex hull expression in (\ref{eq:App-01-COMPL-Cone}), researchers have shown that a large class of non-convex QCQPs can be modeled as linear programs over the intersection of a completely positive cone and a polyhedron \cite{App-A-COPr-1,App-A-COPr-2,App-A-COPr-3}. For example, consider minimizing a quadratic function over the standard simplex
\begin{equation}
\label{eq:App-01-COPr-Example-1}
\begin{aligned}
\min~~ & x^T Q x \\
\mbox{s.t.} ~~ & e^T x = 1 \\
& x \ge 0
\end{aligned}
\end{equation}
where $Q \in \mathbb S^n$, and $e$ denotes the all-one vector with $n$ entries. Following a paradigm similar to (\ref{eq:App-01-SDPr-QCQP-Ext}), let $X = x x^T$; then we can construct the valid equality
\begin{equation*}
1 = x^T e e^T x = x^T E x = \langle E, X \rangle
\end{equation*}
where $E=ee^T$ is the all-one matrix. According to (\ref{eq:App-01-COMPL-Cone}), conv$\{xx^T| x \in \mathbb R^n_+ \}$ is given by $(\mathbb C^n_+)^*$. Therefore, problem (\ref{eq:App-01-COPr-Example-1}) transforms to
\begin{equation}
\label{eq:App-01-COPr-Example-2}
\begin{aligned}
\min~~ & \langle Q, X \rangle \\
\mbox{s.t.} ~~ & \langle E, X \rangle = 1 \\
& X \in (\mathbb C^n_+)^*
\end{aligned}
\end{equation}
Problem (\ref{eq:App-01-COPr-Example-2}) is a convex relaxation of (\ref{eq:App-01-COPr-Example-1}). Because the objective is linear, an optimal solution is attained at an extreme point of the feasible region. In view of the representation in (\ref{eq:App-01-COMPL-Cone}), the extreme points are exactly the rank-1 matrices $x x^T$, so the convex relaxation (\ref{eq:App-01-COPr-Example-2}) is always exact.
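The lifting step preserves both the objective and the simplex constraint, as a quick numerical check confirms (random $Q$ and an arbitrary simplex point, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2
E = np.ones((n, n))                    # E = e e^T

x = rng.random(n)
x /= x.sum()                           # a point on the standard simplex
X = np.outer(x, x)                     # rank-1 lifting

assert np.isclose(np.sum(E * X), 1.0)          # <E, X> = (e^T x)^2 = 1
assert np.isclose(np.sum(Q * X), x @ Q @ x)    # objectives agree
```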
A much more general result is demonstrated in \cite{App-A-COPr-2}: every quadratic program with linear and binary constraints can be rewritten as a completely positive program. More precisely, the mixed-integer quadratic program
\begin{equation}
\label{eq:App-01-COPr-Example-3}
\begin{aligned}
\min~~ & x^T Q x + 2 c^T x \\
\mbox{s.t.} ~~ & a^T_i x = b_i,~ i = 1,\cdots,m \\
& x \ge 0,~ x_j \in \{0,1\}, j \in B
\end{aligned}
\end{equation}
and the following completely positive program
\begin{equation}
\label{eq:App-01-COPr-Example-4}
\begin{aligned}
\min~~ & \langle Q, X \rangle + 2 c^T x \\
\mbox{s.t.} ~~ & a^T_i x = b_i,~ i = 1,\cdots,m \\
& \langle a_i a^T_i, X \rangle = b^2_i,~ i = 1,\cdots,m \\
& x_j = X_{jj},~ j \in B \\
& X \in (\mathbb C^n_+)^*
\end{aligned}
\end{equation}
are equivalent (they attain the same optimal value), as long as problem (\ref{eq:App-01-COPr-Example-3}) satisfies the following condition: $a^T_i x = b_i$, $\forall i$ and $x \ge 0$ imply $x_j \le 1$, $\forall j \in B$. This is a relatively mild condition \cite{App-A-COPr-2}. Complementarity constraints can be handled in a similar way. Whether problems with general quadratic constraints can be restated as completely positive programs in a similar way remains an open question.
The NP-hardness of problem (\ref{eq:App-01-COPr-Example-3}) implies that (\ref{eq:App-01-COPr-Example-4}) is NP-hard as well: the complexity is encapsulated in the last conic constraint. The reformulation is nonetheless interesting owing to its convexity. Furthermore, it can be approximated to any prescribed accuracy by a sequence of SDPs of growing size \cite{App-A-COPr-SOS}.
\subsection{MILP Approximation}
\label{App-A-Sect03-04}
The SDP relaxation technique introduces a square matrix variable containing $n(n+1)/2$ independent entries. Although exploiting the sparsity pattern of $X$ via graph theory helps expedite the solution, the computational burden is still high, especially when the initial relaxation is inexact and a sequence of SDPs must be solved. Inspired by difference-of-convex programming, an alternative is to express the non-convexity of a QCQP through univariate concave functions and to approximate these concave functions via piecewise-linear (PWL) functions compatible with mixed-integer programming solvers. This approach is expounded in \cite{App-A-QCQP-MILP}.
Consider the nonconvex QCQP
\begin{equation}
\label{eq:App-01-QCQP-MILP-Appr-1}
\begin{aligned}
\min~~ & x^T A_0 x + a^T_0 x \\
\mbox{s.t.} ~~ & x^T A_k x + a^T_k x \le b_k,~ k = 1, \cdots, m
\end{aligned}
\end{equation}
We can always find $\delta_0$, $\delta_1$, $\cdots$, $\delta_m \in \mathbb R^+$, such that $A_k + \delta_k I \succeq 0$, $k=0,\cdots,m$. For example, $\delta_k$ can take the absolute value of the most negative eigenvalue of $A_k$, and $\delta_k=0$ if $A_k \succeq 0$. Then, problem (\ref{eq:App-01-QCQP-MILP-Appr-1}) can be cast as
\begin{equation}
\label{eq:App-01-QCQP-MILP-Appr-2}
\begin{aligned}
\min~~ & x^T (A_0+\delta_0I) x + a^T_0 x -\delta_0 1^T y \\
\mbox{s.t.} ~~ & x^T (A_k + \delta_k I) x + a^T_k x - \delta_k 1^T y \le b_k,~ k = 1, \cdots, m \\
& y_i = x^2_i, i=1, \cdots, n
\end{aligned}
\end{equation}
Problem (\ref{eq:App-01-QCQP-MILP-Appr-2}) is actually a difference-of-convex program; however, the non-convex terms are consolidated in much simpler parabolic equalities, which can be linearized via the SOS2-based PWL approximation technique discussed in Appendix \ref{App-B-Sect01}. Except for the last $n$ quadratic equalities, the remaining constraints and the objective function of problem (\ref{eq:App-01-QCQP-MILP-Appr-2}) are all convex, so the linearized problem gives rise to a mixed-integer convex quadratic program.
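Computing a valid shift $\delta_k$ requires only an eigenvalue decomposition; the sketch below uses an arbitrary symmetric matrix for illustration and also checks that the quadratic form is preserved once $y_i = x_i^2$ is substituted:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetric, generally indefinite

lam_min = np.linalg.eigvalsh(A)[0]     # most negative eigenvalue
delta = max(0.0, -lam_min)             # shift making A + delta*I PSD

shifted = A + delta * np.eye(n)
assert np.linalg.eigvalsh(shifted)[0] >= -1e-9

# The quadratic form is preserved once y_i = x_i^2 is substituted.
x = rng.standard_normal(n)
y = x ** 2
assert np.isclose(x @ A @ x, x @ shifted @ x - delta * y.sum())
```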
Alternatively, we can first perform convex relaxation by replacing $y_i = x^2_i$ with $y_i \ge x^2_i$, $i=1, \cdots, n$; if strict inequality holds at the optimal solution, a disjunctive cut is generated to remove this point from the feasible region. However, the initial convex relaxation can be very weak ($y=+\infty$ is usually an optimal solution). Predefined disjunctive cuts can be added \cite{App-A-QCQP-MILP}.
Finally, nonconvex QCQP is a hard optimization problem, and an efficient algorithm should leverage specific problem structure: SDP relaxation, for example, is suitable for OPF problems, while MILP approximation can be used for small and dense problems. Unlike SDP relaxation, which works on a square matrix variable, the number of auxiliary variables in (\ref{eq:App-01-QCQP-MILP-Appr-2}) and its mixed-integer convex quadratic programming approximation is moderate. Therefore, this approach is promising for tackling practical problems, whose coefficient matrices are usually sparse. Furthermore, no particular assumption is needed to guarantee the exactness of a relaxation, so the method is general enough to tackle a wide spectrum of engineering optimization problems.
\section{MILP Formulation of Nonconvex QPs}
\label{App-A-Sect04}
In a non-convex QCQP, if the constraints are all linear, the problem is called a nonconvex QP. The convex relaxation methods presented in the previous section can certainly be applied to nonconvex QPs, but the relaxation is generally inexact. In this section, we introduce exact MILP formulations for globally solving such non-convex optimization problems. Unlike the mixed-integer programming approximation method in Sect. \ref{App-A-Sect03-04}, in which approximation error is inevitable, the MILP models obtained via duality theory are completely equivalent to the original QP. Thanks to the advent of powerful MILP solvers, this method is becoming increasingly competitive with existing global solution methods and is attracting growing attention from the research community.
\subsection{Nonconvex QPs over polyhedra}
This approach is devised in \cite{App-A-QP-MILP}. A nonconvex QP with linear constraints has the form
\begin{equation}
\label{eq:App-01-NC-QP}
\begin{aligned}
\min~~ & \frac{1}{2} x^T Q x + c^T x \\
\mbox{s.t.} ~~ & Ax \le b
\end{aligned}
\end{equation}
where $Q$ is a symmetric but indefinite matrix, and $A$, $b$, $c$ are constant coefficients with compatible dimensions. We assume that finite lower and upper limits on the decision variable $x$ have been included among the constraints, so the feasible region is a bounded polyhedron. The KKT conditions of (\ref{eq:App-01-NC-QP}) are
\begin{equation}
\label{eq:App-01-NC-QP-KKT}
\begin{gathered}
c + Qx + A^T \xi = 0 \\
0 \le \xi \bot b - Ax \ge 0
\end{gathered}
\end{equation}
If there is a multiplier $\xi$ such that the pair $(x,\xi)$ of primal and dual variables satisfies the KKT condition (\ref{eq:App-01-NC-QP-KKT}), then $x$ is called a KKT point (or stationary point). The complementary slackness condition in (\ref{eq:App-01-NC-QP-KKT}) gives $b^T \xi = x^T A^T \xi$. Hence, for any primal-dual pair $(x,\xi)$ satisfying (\ref{eq:App-01-NC-QP-KKT}), the following relations hold
\begin{equation}
\label{eq:App-01-NC-QP-Lin}
\begin{aligned}
\frac{1}{2} x^T Qx + c^T x &= \frac{1}{2} c^T x +\frac{1}{2} x^T (c + Qx)\\
& = \frac{1}{2} c^T x - \frac{1}{2} x^T A^T \xi = \frac{1}{2} \left( c^T x - b^T \xi \right)
\end{aligned}
\end{equation}
As such, the non-convex quadratic objective function is stated equivalently as a linear function of the primal and dual variables, without any loss of accuracy.
Thus, if problem (\ref{eq:App-01-NC-QP}) has an optimal solution, that solution can be retrieved by solving a linear program with complementarity constraints (LPCC)
\begin{equation}
\label{eq:App-01-NC-QP-LPCC}
\begin{aligned}
\min~~ & \frac{1}{2} \left( c^T x - b^T \xi \right) \\
\mbox{s.t.} ~~ & c + Qx + A^T \xi = 0 \\
&0 \le \xi \bot b - Ax \ge 0
\end{aligned}
\end{equation}
which is equivalent to the following MILP
\begin{equation}
\label{eq:App-01-NC-QP-MILP}
\begin{aligned}
\min~~ & \frac{1}{2} \left( c^T x - b^T \xi \right) \\
\mbox{s.t.} ~~ & c + Qx + A^T \xi = 0 \\
& 0 \le \xi \le M(1-z) \\
& 0 \le b - Ax \le Mz \\
& z ~\mbox{ binary}
\end{aligned}
\end{equation}
where $M$ is a sufficiently large constant and $z$ is a vector of binary variables. For either value of $z_i$, at most one of $\xi_i$ and $(b-Ax)_i$ can take a strictly positive value. For a more rigorous discussion of this method, in which an unbounded feasible region is also considered, please see \cite{App-A-QP-MILP}. More MILP reformulation tricks can be found in the next chapter.
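To make the enumeration behind (\ref{eq:App-01-NC-QP-MILP}) concrete, the following toy Python sketch (our illustration, not part of the original method) solves the one-dimensional nonconvex QP $\min\, -x^2$ s.t. $0 \le x \le 1$ by enumerating the complementarity patterns that the binary vector $z$ encodes; for each pattern it solves the stationarity equations and evaluates the linearized objective $\frac{1}{2}(c^T x - b^T \xi)$.

```python
# Toy LPCC enumeration for: min -x^2  s.t. 0 <= x <= 1
# (Q = -2, c = 0, A = [1, -1], b = [1, 0]; constraints are Ax <= b.)
Q, c = -2.0, 0.0
A, b = [1.0, -1.0], [1.0, 0.0]

best = None  # (objective, x, xi)
for z in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    active = [i for i in range(2) if z[i] == 1]
    # Solve stationarity c + Q x + A^T xi = 0 with xi_i = 0 for inactive i.
    if len(active) == 0:
        if Q == 0:
            continue
        x = -c / Q
        xi = [0.0, 0.0]
    elif len(active) == 1:
        i = active[0]
        x = b[i] / A[i]                # active constraint: A_i x = b_i
        xi = [0.0, 0.0]
        xi[i] = -(c + Q * x) / A[i]    # stationarity solved for the single xi_i
    else:
        continue                       # x = 1 and x = 0 cannot hold simultaneously
    # Feasibility: Ax <= b and xi >= 0 (complementarity holds by construction).
    if any(A[i] * x - b[i] > 1e-9 for i in range(2)):
        continue
    if any(v < -1e-9 for v in xi):
        continue
    obj = 0.5 * (c * x - sum(b[i] * xi[i] for i in range(2)))
    if best is None or obj < best[0]:
        best = (obj, x, xi)

print(best)  # the global minimum is -1, attained at x = 1 with xi = (2, 0)
```

A MILP solver applied to (\ref{eq:App-01-NC-QP-MILP}) performs this enumeration implicitly via branch and bound rather than exhaustively.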
It should be pointed out that the set of optimal solutions of (\ref{eq:App-01-NC-QP}) is a subset of the stationary points described by (\ref{eq:App-01-NC-QP-KKT}), because (\ref{eq:App-01-NC-QP-KKT}) is only a necessary condition for optimality, not a sufficient one. Nevertheless, since we assumed that the feasible region is a bounded polytope (hence compact), QP (\ref{eq:App-01-NC-QP}) must have a finite optimum, and according to \cite{App-A-QP-LPCC-Opt-Eqv}, the optimal value equals the minimum of the objective values attained at the stationary points. Therefore, MILP (\ref{eq:App-01-NC-QP-MILP}) provides an exact solution to (\ref{eq:App-01-NC-QP}).
Finally, we shed some light on the selection of $M$, since it has a notable impact on the computational efficiency of (\ref{eq:App-01-NC-QP-MILP}). An LP-based bound preprocessing method is thoroughly discussed in \cite{App-A-QP-LPCC-Bounding}, where it is used in a finite branch-and-bound method for solving LPCC (\ref{eq:App-01-NC-QP-LPCC}). Here we briefly introduce the bounding method.
For the primal variable $x$, which represents physical quantities or measures, the bounds depend on the practical situation and security considerations; we assume that the bound is $0 \le x \le U$. It can be tightened by solving
\begin{equation}
\label{eq:App-01-NC-QP-LPCC-Primal-Bounding}
\min (\max)~ \{x_j ~|~ Ax \le b,~ 0 \le x \le U \}
\end{equation}
Through (\ref{eq:App-01-NC-QP-LPCC-Primal-Bounding}) we can compute individual bounds for the components of vector $x$; such bounds never cut off the optimal solution and can be added to (\ref{eq:App-01-NC-QP-MILP}).
For the dual variables, we consider (\ref{eq:App-01-NC-QP}) again with explicit bounds on primal variable $x$
\begin{equation*}
\begin{aligned}
\min~~ & \frac{1}{2} x^T Q x + c^T x \\
\mbox{s.t.} ~~ & Ax \le b: \xi \\
& 0 \le x \le U: \lambda,\rho
\end{aligned}
\end{equation*}
where the $\xi$, $\lambda$, $\rho$ following the colons are the dual variables. Its KKT conditions read
\begin{subequations}
\label{eq:App-01-NC-QP-KKT-Primal-Bound}
\begin{gather}
c + Qx + A^T \xi - \lambda + \rho = 0 \label{eq:App-01-NC-QP-KKT-Primal-Bound-1}\\
0 \le \xi \bot b - Ax \ge 0 \label{eq:App-01-NC-QP-KKT-Primal-Bound-2}\\
0 \le x \bot \lambda \ge 0 \label{eq:App-01-NC-QP-KKT-Primal-Bound-3} \\
0 \le U - x \bot \rho \ge 0 \label{eq:App-01-NC-QP-KKT-Primal-Bound-4}
\end{gather}
Multiplying both sides of (\ref{eq:App-01-NC-QP-KKT-Primal-Bound-1}) by a feasible solution $x^T$ gives
\begin{equation}
\label{eq:App-01-NC-QP-KKT-Primal-Bound-5}
c^T x + x^T Q x + x^T A^T \xi - x^T \lambda + x^T \rho = 0
\end{equation}
Substituting $\xi^T A x = \xi^T b$, $x^T \lambda = 0$, and $x^T \rho = \rho^T U$, which follow from (\ref{eq:App-01-NC-QP-KKT-Primal-Bound-2})-(\ref{eq:App-01-NC-QP-KKT-Primal-Bound-4}), into (\ref{eq:App-01-NC-QP-KKT-Primal-Bound-5}) yields
\begin{equation}
\label{eq:App-01-NC-QP-KKT-Primal-Bound-6}
c^T x + x^T Q x + b^T \xi + U^T \rho = 0
\end{equation}
\end{subequations}
The upper bounds (lower bounds are 0) on the dual variables required for MILP (\ref{eq:App-01-NC-QP-MILP}) can be computed from the following LP:
\begin{subequations}
\label{eq:App-01-NC-QP-KKT-Dual-Bounding}
\begin{align}
\max~~ & \lambda_j \label{eq:App-01-NC-QP-KKT-Dual-Bounding-Obj} \\
\mbox{s.t.}~~ & c + Qx + A^T \xi - \lambda + \rho = 0
\label{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-1} \\
& \mbox{tr}(Q^T X) + c^T x + b^T \xi + U^T \rho = 0
\label{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-2} \\
& \mbox{Cons-RLT}=\{ (x,X)~|~(\ref{eq:App-01-SDPr-QCQP-VIN-RLT})\}
\label{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-3} \\
& 0 \le x \le U,~ Ax \le b, \lambda, \xi, \rho \ge 0
\label{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-4}
\end{align}
\end{subequations}
In (\ref{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-2}), the quadratic equality (\ref{eq:App-01-NC-QP-KKT-Primal-Bound-6}) is linearized by letting $X = xx^T$, and (\ref{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-3}) is a linear relaxation of this rank-1 condition, as explained in Sect. \ref{App-A-Sect03-01}.
By exploiting the relaxation in (\ref{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-2}), it has been proved that problem (\ref{eq:App-01-NC-QP-KKT-Dual-Bounding}) always has a finite optimum, because the recession cone of the set comprising the primal and dual variables together with their associated valid inequalities is empty; see the proof of Proposition 3.1 in \cite{App-A-QP-LPCC-Bounding}. This is a pivotal theoretical guarantee: other bounding techniques that rely only on the KKT conditions can hardly ensure a finite optimum.
\subsection{Standard Nonconvex QPs}
The approach presented here is devised in \cite{App-A-Standard-QP}. A standard nonconvex QP entails minimizing a nonconvex quadratic function over the unit probability simplex
\begin{equation}
\label{eq:App-01-Stand-QP}
\begin{aligned}
v(Q) = \min~ & x^T Q x \\
\mbox{s.t.} ~ & x \in {\rm \Delta}_n
\end{aligned}
\end{equation}
where $Q$ is a symmetric matrix and the unit simplex is
\begin{equation*}
{\rm \Delta}_n = \{x \in \mathbb R^n_+~|~e^T x = 1 \}
\end{equation*}
where $e$ is the all-ones vector. A nonhomogeneous objective can always be transformed into a homogeneous quadratic form by exploiting the simplex constraint ${\rm \Delta}_n$:
\begin{equation*}
x^T Q x + 2c^T x = x^T (Q + e c^T + c e^T ) x,~ \forall x \in {\rm \Delta}_n
\end{equation*}
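This identity is easy to verify numerically. The following standalone Python check (our illustration, with arbitrarily chosen data) evaluates both sides at a point of the simplex; the homogenized matrix $Q + ec^T + ce^T$ has entries $Q_{ij} + c_i + c_j$:

```python
def quad(Q, x):
    # x^T Q x for a list-of-lists matrix Q
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

Q = [[1.0, -1.0], [-1.0, 2.0]]
c = [0.5, -0.3]
x = [0.25, 0.75]  # a point of the unit simplex: nonnegative, sums to 1

# nonhomogeneous objective x^T Q x + 2 c^T x
lhs = quad(Q, x) + 2 * sum(c[i] * x[i] for i in range(2))
# homogenized matrix Q + e c^T + c e^T
Qh = [[Q[i][j] + c[i] + c[j] for j in range(2)] for i in range(2)]
rhs = quad(Qh, x)
```

The equality relies on $e^T x = 1$; off the simplex the two expressions differ.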
Standard nonconvex QPs have wide applications in portfolio optimization, quadratic resource allocation, graph theory, and so on. In addition, for a given symmetric matrix $Q$, a necessary and sufficient condition for $Q$ to be copositive is $v(Q) \ge 0$. Copositive programming is a young and active research field that can support research on convex relaxation. A fundamental problem is the copositivity test, which entails solving (\ref{eq:App-01-Stand-QP}) globally.
Problem (\ref{eq:App-01-Stand-QP}) is a special case of the nonconvex QP (\ref{eq:App-01-NC-QP}), so the methods in the previous subsection also work for (\ref{eq:App-01-Stand-QP}). The core trick is to select a big-M parameter when linearizing the complementary slackness conditions. Owing to its specific structure, a valid big-M parameter for problem (\ref{eq:App-01-Stand-QP}) can be chosen in a much more convenient way. To see this, the KKT conditions of (\ref{eq:App-01-Stand-QP}) read
\begin{subequations}
\label{eq:App-01-Stand-QP-KKT}
\begin{align}
Qx - \lambda e - \mu & = 0 \label{eq:App-01-Stand-QP-KKT-1} \\
e^T x & = 1 \label{eq:App-01-Stand-QP-KKT-2} \\
x & \ge 0 \label{eq:App-01-Stand-QP-KKT-3} \\
\mu & \ge 0 \label{eq:App-01-Stand-QP-KKT-4} \\
x_j \mu_j & = 0,~ j=1,\cdots,n \label{eq:App-01-Stand-QP-KKT-5}
\end{align}
\end{subequations}
where $\lambda$ and $\mu$ are the dual variables associated with the equality constraint $e^T x = 1$ and the inequality constraint $x \ge 0$, respectively. Because the feasible region is polyhedral, a constraint qualification always holds, and any optimal solution of (\ref{eq:App-01-Stand-QP}) must solve the KKT system (\ref{eq:App-01-Stand-QP-KKT}).
Multiplying both sides of (\ref{eq:App-01-Stand-QP-KKT-1}) by $x^T$ results in $x^T Q x = \lambda x^T e + x^T \mu$; substituting (\ref{eq:App-01-Stand-QP-KKT-2}) and (\ref{eq:App-01-Stand-QP-KKT-5}) into the right-hand side yields $x^T Q x = \lambda$. Provided a valid big-M parameter is available, problem (\ref{eq:App-01-Stand-QP}) is (exactly) equivalent to the following MILP
\begin{equation}
\label{eq:App-01-Stand-QP-MILP-1}
\begin{aligned}
\min~~ & \lambda \\
\mbox{s.t.} ~~ & Qx - \lambda e - \mu = 0 \\
& e^T x = 1,~ 0 \le x \le y \\
&0 \le \mu_j \le M_j(1-y_j),~ j = 1, \cdots, n
\end{aligned}
\end{equation}
where $y \in \{0,1\}^n$ and $M_j$ is the big-M parameter, an upper bound on the dual variable $\mu_j$. To estimate such a bound, note from (\ref{eq:App-01-Stand-QP-KKT-1}) that
\begin{equation*}
\mu_j = e^{T}_j Q x - \lambda,~ j=1,\cdots,n
\end{equation*}
where $e_j$ is the $j$-th column of $n \times n$ identity matrix. For the first term,
\begin{equation*}
x^T Q e_j \le \max_{i \in \{1,\cdots,n\}} Q_{ij},~ j=1,\cdots,n
\end{equation*}
As for the second term, we know $\lambda \ge v(Q)$, so any known lower bound of $v(Q)$ can be used to obtain a valid $M_j$. One possible lower bound of $v(Q)$ is suggested in \cite{App-A-Standard-QP} as
\begin{equation*}
l(Q) = \min_{1 \le i,j \le n} Q_{ij} + \dfrac{1}{\sum_{k=1}^n \left(Q_{kk}- \min\limits_{1 \le i,j \le n} Q_{ij} \right)^{-1}}
\end{equation*}
If the minimal element of $Q$ lies on the main diagonal, the second term vanishes and $l(Q) = \min_{1 \le i,j \le n} Q_{ij}$.
In summary, a valid choice of $M_j$ would be
\begin{equation}
\label{eq:App-01-Stand-QP-Big-M}
M_j = \max_{i \in \{1,\cdots,n\}} Q_{ij} - l(Q),~ j = 1,\cdots,n
\end{equation}
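The bound (\ref{eq:App-01-Stand-QP-Big-M}) is inexpensive to compute. The following Python sketch (the helper names are ours) implements $l(Q)$ and $M_j$ for a dense symmetric matrix stored as a list of lists:

```python
def l_bound(Q):
    """Lower bound l(Q) on v(Q); Q is a symmetric matrix as a list of lists."""
    n = len(Q)
    qmin = min(min(row) for row in Q)
    # if the minimum lies on the diagonal, the second term vanishes
    if any(Q[k][k] == qmin for k in range(n)):
        return qmin
    return qmin + 1.0 / sum(1.0 / (Q[k][k] - qmin) for k in range(n))

def big_M(Q):
    """M_j = max_i Q_ij - l(Q), a valid upper bound on the dual variable mu_j."""
    n = len(Q)
    lQ = l_bound(Q)
    return [max(Q[i][j] for i in range(n)) - lQ for j in range(n)]
```

For example, with $Q = \bigl(\begin{smallmatrix}1 & -1\\ -1 & 2\end{smallmatrix}\bigr)$ we get $l(Q) = -1 + (1/2 + 1/3)^{-1} = 0.2$ and $M = (0.8,\ 1.8)$.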
It is found in \cite{App-A-Standard-QP} that if we relax (\ref{eq:App-01-Stand-QP-KKT-1}) to an inequality and solve the following MILP
\begin{equation}
\label{eq:App-01-Stand-QP-MILP-2}
\begin{aligned}
\min~~ & \lambda \\
\mbox{s.t.} ~~ & Qx - \lambda e - \mu \le 0 \\
& e^T x = 1,~ 0 \le x \le y \\
&0 \le \mu_j \le M_j(1-y_j),~ j = 1, \cdots, n
\end{aligned}
\end{equation}
which is a relaxation of (\ref{eq:App-01-Stand-QP-MILP-1}), the optimal solution does not change. However, in some instances, solving (\ref{eq:App-01-Stand-QP-MILP-2}) is significantly faster than solving (\ref{eq:App-01-Stand-QP-MILP-1}). A more thorough theoretical analysis can be found in \cite{App-A-Standard-QP}.
\section{Further Reading}
\label{App-A-Sect05}
Decades of research have resulted in elegant theoretical developments and sophisticated computational software, which have brought convex optimization to an unprecedented dominant position, where it serves as the baseline and reference model for optimization problems in almost every discipline: only problems that can be formulated as convex programs are regarded as theoretically tractable. We suggest the following materials for readers who want to build a solid mathematical background or learn more about applications of convex optimization.
1. Convex analysis and convex optimization. Convex analysis is a classic topic in mathematics, and focuses on basic concepts and topological properties of convex sets and convex functions. We recommend monographs \cite{App-A-Convex-Analysis-1,App-A-Convex-Analysis-2,App-A-Convex-Analysis-3}. The last one sheds more light on optimization related topics, including DC programming, polynomial programming, and equilibrium constrained programming, which are originally non-convex. The most popular textbooks on convex optimization include \cite{App-A-CVX-Book-Ben,App-A-CVX-Book-Boyd}. They contain important materials that everyone who wants to apply this technique should know.
2. Special convex optimization problems. The most mature convex optimization problems are LPs, SOCPs, and SDPs. We recommend \cite{App-A-LP-Book-Dantzig,App-A-LP-Book-Bertsimas,App-A-LP-Book-Vanderbei} for the basics of duality theory, the simplex algorithm, interior-point algorithms, and applications of LPs. The modeling abilities of SOCPs and SDPs are well discussed in \cite{App-A-CVX-Book-Ben,App-A-CVX-Book-Boyd}. A geometric program is a type of optimization problem whose objective and constraints are characterized by special monomial and posynomial functions. Through a logarithmic variable transformation, a geometric program can be mechanically converted to a convex optimization problem. Geometric programming is relatively restrictive in structure, and it may not be apparent whether a given problem can be expressed as a geometric program. We recommend the tutorial paper \cite{App-A-GOP-Boyd} and the references therein on this topic. Copositive programming is a relatively young field in operations research. It is a special class of conic programming that is more general than SDP. Basic information on copositive/completely positive programs is introduced in \cite{App-A-Copositive-1,App-A-Copositive-2,App-A-Copositive-3}. They are particularly useful in combinatorial and quadratic optimization. Though very similar to SDPs in appearance, copositive programs are NP-hard. Algorithms and applications of copositive and completely positive programs continue to be highly active research fields \cite{App-A-COP-New-1,App-A-COP-New-2,App-A-COP-New-3}.
3. General convex optimization problems. Besides the above mature convex optimization models, which can be specified without a high level of expertise, recognizing the convexity of a general mathematical programming problem may be rather tricky, and a deep understanding of convex analysis is indispensable. Furthermore, to solve the problem using off-the-shelf solvers, a user must find a way to transform it into one of the standard forms (if a general-purpose NLP solver fails to solve it). The so-called disciplined convex programming method is proposed in \cite{App-A-Disp-CVX} to lower this expertise barrier. The method consists of a set of rules and conventions that one must follow when setting up the problem so that convexity is naturally sustained. This methodology has been implemented in the cvx toolbox in the Matlab environment.
4. Convex relaxation methods. One major application of convex optimization is to derive tractable approximations of non-convex programs, so as to facilitate problem resolution in terms of computational efficiency and robustness. A general QCQP is a quintessential non-convex optimization problem. Among various convex relaxation approaches, SDP relaxation has been shown to offer high-quality solutions for many QCQPs arising in signal processing \cite{App-A-SDPr-Signal-1,App-A-SDPr-Signal-2} and power system energy management \cite{App-A-SDPr-Power-1,App-A-SDPr-Power-2}. Decades of excellent studies on SDP relaxation methods for QCQPs are comprehensively reviewed in \cite{App-A-SDP-Relaxation-Tutor,App-A-SDPr-QCQP-Rev-1,App-A-SDPr-QCQP-Rev-2}. Some recent advances are reported in \cite{App-A-SDPr-QCQP-1,App-A-SDPr-QCQP-2,App-A-SDPr-QCQP-3,App-A-SDPr-QCQP-4,App-A-SDPr-QCQP-5,App-A-SDPr-QCQP-6,App-A-SDPr-QCQP-7}. The rank of the matrix variable has a decisive impact on the exactness (or tightness) of the SDP relaxation. Low-rank SDP methods are attracting increasing attention from researchers, and many approaches have been proposed to recover a low-rank solution. More information can be found in \cite{App-A-SDP-Rank-1,App-A-SDP-Rank-2,App-A-SDP-Rank-3,App-A-SDP-Rank-4,App-A-SDP-Rank-5} and the references therein.
5. Sum-of-squares (SOS) programming was originally devised in \cite{App-A-SOS-1} to decompose a polynomial $f(x)$ into a sum of squares of other polynomials (if such a decomposition exists), which certifies that $f(x)$ is non-negative. Non-negativity of a polynomial over a semi-algebraic set can be certified in a similar way via Positivstellensatz refutations. This can be done by solving a structured SDP \cite{App-A-SOS-1}, as implemented in a Matlab-based toolbox \cite{App-A-SOS-2}. Based on these results, a promising methodology has quickly developed for polynomial programs, which cover a broader class of optimization problems than QCQPs. It is proved that the global solution of a polynomial program can be found by solving a hierarchy of SDPs under mild conditions. This is very inspiring, since polynomial programs are generally non-convex while SDPs are convex. We recommend \cite{App-A-Poly-SDP-1} for a very detailed discussion of this approach, and \cite{App-A-Poly-SDP-2,App-A-Poly-SDP-3,App-A-Poly-SDP-4,App-A-Poly-SDP-5} for some recent advances. However, users should be aware that this approach may be impractical because the size of the relaxed SDP quickly becomes unmanageable after a few steps of the hierarchy. Nonetheless, the elegant theory still marks a milestone in this research field.
\input{ap01ref}
\motto{There is no problem in all mathematics that cannot be solved by direct counting. But with the present implements of mathematics many operations can be performed in a few minutes which without mathematical methods would take a lifetime. \\ \rightline{ $-$Ernst Mach}}
\chapter{Formulation Recipes in Integer Programming}
\label{App-B}
As stated in Appendix \ref{App-A}, generally speaking, convex optimization problems can be solved efficiently. However, the majority of optimization problems encountered in practical engineering are non-convex, and gradient-based NLP solvers terminate at a local optimum, which may be far away from the global one. In fact, any nonlinear function can be approximated by a PWL function with adjustable error by controlling the granularity of the partition. A PWL function can be expressed in a logic form or by incorporating integer variables. Thanks to the latest progress in branch-and-cut algorithms and the development of state-of-the-art MILP solvers, a large-scale MILP can often be solved globally within reasonable computational effort \cite{App-MILP-Solver-Perform}, although MILP itself is NP-hard. In view of this fact, PWL/MILP approximation serves as a viable option for tackling real-world non-convex optimization problems, especially those with special structures.
This chapter introduces PWL approximation methods for nonlinear functions and linear representations of special non-convex constraints via integer programming techniques. When the majority of the problem at hand is linear or convex, while the non-convexity arises from nonlinear functions in only one or two variables, linear complementarity constraints, logical inferences, and the like, it is worth trying the methods in this chapter, given that MILP solvers are becoming increasingly efficient at retrieving a solution with a pre-defined optimality gap.
\section{Piecewise Linear Approximation of Nonlinear Functions}
\label{App-B-Sect01}
\subsection{Univariate Continuous Function}
\label{App-B-Sect01-01}
Consider a nonlinear continuous function $f(x)$ in a single variable $x$. We can evaluate the function values $f(x_0)$, $f(x_1)$, $\cdots$, $f(x_k)$ at given breakpoints $x_0$, $x_1$, $\cdots$, $x_k$, and replace $f(x)$ with the following PWL function
\begin{equation}
\label{eq:App-02-PWL-Logic}
f(x) = \begin{cases}
m_1 x + c_1, & x \in [x_0,x_1] \\
m_2 x + c_2, & x \in [x_1,x_2] \\
\qquad \vdots & \qquad \vdots \\
m_k x + c_k, & x \in [x_{k-1},x_k]
\end{cases}
\end{equation}
\begin{figure}
\caption{Piecewise linear and piecewise constant approximations.}
\label{fig:App-02-01}
\end{figure}
As an illustrative example, the curves of an original nonlinear function and its PWL approximation are portrayed in part (a) of Fig. \ref{fig:App-02-01}. The PWL function in (\ref{eq:App-02-PWL-Logic}) is a finite union of line segments, but it is still non-convex. Moreover, the logic representation in (\ref{eq:App-02-PWL-Logic}) is not compatible with commercial solvers. Given that any point on a line segment can be expressed as a convex combination of its two endpoints, (\ref{eq:App-02-PWL-Logic}) can be written as
\begin{equation}
\label{eq:App-02-PWL-CC}
\begin{aligned}
x & = \sum_i \lambda_i x_i \\
y & = \sum_i \lambda_i f(x_i) \\
\lambda & \ge 0,~ \sum_i \lambda_i =1 \\
\lambda & \in \mathbb{SOS}_2
\end{aligned}
\end{equation}
where $\mathbb{SOS}_2$ stands for a special ordered set of type 2, describing a vector of variables in which at most two adjacent entries can take nonzero values. The $\mathbb{SOS}_2$ constraint on $\lambda$ can be declared via the built-in modules of commercial solvers such as CPLEX and GUROBI. Please note that if $f(x)$ is convex and is to be minimized, the last $\mathbb{SOS}_2$ requirement is naturally met (and thus can be relaxed), because the epigraph of $f(x)$ is a convex region. Otherwise, relaxing the last $\mathbb{SOS}_2$ constraint in (\ref{eq:App-02-PWL-CC}) gives rise to the convex hull of the sampled points $(x_0,f(x_0))$, $\cdots$, $(x_k,f(x_k))$. In general, this relaxation is inexact.
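For a given $x$, the $\mathbb{SOS}_2$ weights in (\ref{eq:App-02-PWL-CC}) can be constructed explicitly. The following Python sketch (our illustration; helper names are ours) locates the active segment and assigns the two adjacent weights:

```python
import bisect

def pwl_weights(breaks, x):
    """SOS2 weights: x = sum_i lambda_i * breaks[i], at most two adjacent nonzero."""
    i = bisect.bisect_right(breaks, x) - 1   # segment [breaks[i], breaks[i+1]]
    i = min(max(i, 0), len(breaks) - 2)      # clamp to a valid segment
    t = (x - breaks[i]) / (breaks[i + 1] - breaks[i])
    lam = [0.0] * len(breaks)
    lam[i], lam[i + 1] = 1 - t, t
    return lam

def pwl_value(breaks, f, x):
    """PWL approximation of f at x, per the convex-combination representation."""
    lam = pwl_weights(breaks, x)
    return sum(l * f(b) for l, b in zip(lam, breaks))
```

With breakpoints $0,1,2$ and $f(x)=x^2$, the PWL value at $x=1.5$ is $0.5\cdot 1 + 0.5\cdot 4 = 2.5$, while the true value is $2.25$; refining the breakpoints shrinks this gap.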
Branch-and-bound algorithms that work directly on SOS variables exhibit good performance \cite{App-MILP-SOS}, but it is desirable to explore equivalent MILP formulations so as to leverage the power of state-of-the-art solvers.
To this end, we first provide an explicit formulation using additional integer variables:
\begin{equation}
\label{eq:App-02-SOS2-MILP}
\begin{aligned}
\lambda_0 & \le z_1 \\
\lambda_1 & \le z_1 + z_2 \\
\lambda_2 & \le z_2 + z_3 \\
\vdots & \\
\lambda_{k-1} & \le z_{k-1} + z_k \\
\lambda_k & \le z_k \\
z_i & \in \{ 0,1 \},~ \forall i,~ \sum\nolimits_{i=1}^k z_i = 1 \\
\lambda_i & \ge 0,~ \forall i,~ \sum\nolimits_{i=0}^k \lambda_i = 1
\end{aligned}
\end{equation}
Formulation (\ref{eq:App-02-SOS2-MILP}) illustrates how integer variables can be used to enforce the $\mathbb{SOS}_2$ requirement on the weighting coefficients. This formulation does not involve any manually supplied parameter, and it often gives stronger bounds when the integrality of the binary variables is relaxed.
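One can check mechanically that (\ref{eq:App-02-SOS2-MILP}) permits only adjacent pairs of weights to be nonzero. The short Python sketch below (our illustration) computes, for each choice of the one-hot vector $z$, which $\lambda_j$ are allowed to be positive:

```python
def allowed_support(k, i):
    """Indices j for which lambda_j may be nonzero in the SOS2 MILP
    when z_i = 1 and all other binaries are 0 (i ranges over 1..k)."""
    z = {t: (1 if t == i else 0) for t in range(1, k + 1)}
    # cap(lambda_j) = z_j + z_{j+1}, with out-of-range terms absent:
    # lambda_0 <= z_1 and lambda_k <= z_k are the boundary cases.
    return [j for j in range(k + 1) if z.get(j, 0) + z.get(j + 1, 0) > 0]

# every choice of the activated segment leaves exactly the adjacent pair {i-1, i}
assert all(allowed_support(4, i) == [i - 1, i] for i in range(1, 5))
```

Since $\sum_i z_i = 1$, exactly one segment is selected, and the constraints force all weights outside that segment's two endpoints to zero.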
Sometimes it is more convenient to use a piecewise constant approximation, especially when the original function $f(x)$ is not continuous. An example is exhibited in part (b) of Fig. \ref{fig:App-02-01}. In this approach, the feasible interval of $x$ is partitioned into $S-1$ segments (associated with binary variables $\theta_s$, $s=1,\cdots,S-1$) by $S$ breakpoints $x_1$, $\cdots$, $x_S$ (associated with $S$ continuous weight variables $\lambda_s$, $s=1,\cdots,S$). In the $s$-th interval between $x_s$ and $x_{s+1}$, the function value $f(x)$ is approximated by the arithmetic mean $f_s = 0.5[f(x_s)+f(x_{s+1})]$, $s=1,\cdots,S-1$, which is a constant, as illustrated in Fig. \ref{fig:App-02-01}. With an appropriate number of partitions, an arbitrary function $f(x)$ can be approximated by a piecewise constant function as follows
\begin{subequations}
\label{eq:App-02-PWC-CC}
\begin{equation}
x = \sum_{s=1}^S \lambda_s x_s,~
y = \sum_{s=1}^{S-1} \theta_s f_s
\label{eq:App-02-PWC-CC-1}
\end{equation}
\begin{equation}
\lambda_1 \le \theta_1,~ \lambda_S \le \theta_{S-1}
\label{eq:App-02-PWC-CC-2}
\end{equation}
\begin{equation}
\lambda_s \le \theta_{s-1} + \theta_s,~ s = 2,\cdots, S-1
\label{eq:App-02-PWC-CC-3}
\end{equation}
\begin{equation}
\lambda_s \ge 0,~ s = 1, \cdots, S,~
\sum\nolimits_{s=1}^S \lambda_s = 1
\label{eq:App-02-PWC-CC-4}
\end{equation}
\begin{equation}
\theta_s \in \{0,1\}, s=1,\cdots,S-1,~
\sum\nolimits_{s=1}^{S-1} \theta_s = 1
\label{eq:App-02-PWC-CC-5}
\end{equation}
\end{subequations}
In (\ref{eq:App-02-PWC-CC}), binary variable $\theta_s=1$ indicates that interval $s$ is activated, and constraint (\ref{eq:App-02-PWC-CC-5}) ensures that exactly one interval is activated. Furthermore, constraints (\ref{eq:App-02-PWC-CC-2})-(\ref{eq:App-02-PWC-CC-4}) enforce the weight coefficients $\lambda_s$, $s=1$, $\cdots$, $S$ to form an $\mathbb{SOS}_2$. Finally, constraint (\ref{eq:App-02-PWC-CC-1}) expresses $x$ and $y$ as linear combinations of the sampled values. The advantage of the piecewise constant formulation (\ref{eq:App-02-PWC-CC}) lies in the binary expression of the function value $y$, so that the product of $y$ and another continuous variable can be easily linearized via integer programming techniques, as shown in Sect. \ref{App-B-Sect02-03}.
Clearly, the number of binary variables introduced in formulation (\ref{eq:App-02-SOS2-MILP}) is $k$, which grows linearly with the number of breakpoints, so the final MILP model may suffer from computational overhead when many breakpoints are used to improve accuracy. In what follows, we present a useful formulation that engages only a logarithmic number of binary variables and constraints. This technique is proposed in \cite{App-MILP-SOS2-LogCC-1,App-MILP-SOS2-LogCC-2,App-MILP-SOS2-LogCC-3}. Consider the following constraints:
\begin{equation}
\label{eq:App-02-SOS2-Log}
\begin{aligned}
\sum_{i \in L_n} \lambda_i & \le z_n,~ \forall n \in N \\
\sum_{i \in R_n} \lambda_i & \le 1 - z_n,~ \forall n \in N \\
z_n & \in \{0, 1\},~ \forall n \in N \\
\lambda & \ge 0,~ \sum\nolimits^k_{i=0} \lambda_i = 1
\end{aligned}
\end{equation}
where $L_n$ and $R_n$ are index sets of the weights $\lambda_i$, and $N$ is an index set for the binary variables. The dichotomy sequences $\{L_n, R_n \}_{n \in N}$ constitute a branching scheme on the indices of the weights, such that constraint (\ref{eq:App-02-SOS2-Log}) guarantees that at most two adjacent elements of $\lambda$ can take strictly positive values, thereby meeting the $\mathbb{SOS}_2$ requirement. The required number of binary variables $z_n$ is $\lceil \log_2 k \rceil$, significantly smaller than that in formulation (\ref{eq:App-02-SOS2-MILP}).
\begin{figure}
\caption{Gray codes and sets $L_n$, $R_n$ for two and three binary variables.}
\label{fig:App-02-02}
\end{figure}
Next, we demonstrate how to design the sets $L_n$ and $R_n$ based on the concept of Gray codes. For notation brevity, we restrict the discussion to the instances with 2 and 3 binary variables (which are shown in Fig. \ref{fig:App-02-02}), indicating 5 and 9 breakpoints (or 4 and 8 intervals) in consequence.
As shown in Fig. \ref{fig:App-02-02}, the Gray codes G$_1$-G$_8$ form a binary system in which any two adjacent codes differ in only one bit. For example, G$_4$ and G$_5$ differ in the first bit, and G$_5$ and G$_6$ differ in the second bit. Such Gray codes are used to describe which two adjacent weights are activated. In general, the sets $L_n$ and $R_n$ are constructed as follows: an index $v$ belongs to $L_n$ if the $n$-th bits of the two successive codes $G_v$ and $G_{v+1}$ are both equal to 1, and $v$ belongs to $R_n$ if they are both equal to 0. This principle can be formally stated as
\begin{equation}
\label{eq:App-02-SOS2-Log-L}
L_n = \left\{v~\middle|~ \begin{aligned}
(G^n_v &= 1 \mbox{ and } G^n_{v+1} = 1) \\
\cup~ (v & = 0 \mbox{ and } G^n_{1} =1) \\
\cup~ (v & = k \mbox{ and } G^n_{k} =1)
\end{aligned} \right\}
\end{equation}
\begin{equation}
\label{eq:App-02-SOS2-Log-R}
R_n = \left\{v~\middle|~ \begin{aligned}
(G^n_v &= 0 \mbox{ and } G^n_{v+1} = 0) \\
\cup~ (v & = 0 \mbox{ and } G^n_{1} = 0) \\
\cup~ (v & = k \mbox{ and } G^n_{k} = 0)
\end{aligned} \right\}
\end{equation}
where $G^n_v$ stands for the $n$-th bit of code $G_v$.
For example, the sets R$_1$, R$_2$, R$_3$ and L$_1$, L$_2$, L$_3$ for Gray codes G$_1$-G$_8$ are shown in Fig. \ref{fig:App-02-02}. In this way, (\ref{eq:App-02-SOS2-Log}) establishes the rule that only two adjacent weights can be activated. To see this, suppose $\lambda_i > 0$ for $i=4,5$ and $\lambda_i = 0$ for all other indices; we let $z_1=1$, $z_2=1$, $z_3=0$, which leads to the following constraint set:
\begin{equation*}
\left\{ \begin{lgathered}
\lambda_0 + \lambda_1 + \lambda_2 + \lambda_3 \le 1 - z_1 = 0 \\
\lambda_5 + \lambda_6 + \lambda_7 + \lambda_8 \le z_1 = 1 \\
\lambda_0 + \lambda_1 + \lambda_6 \le 1 - z_2 = 0 \\
\lambda_3 + \lambda_4 + \lambda_8 \le z_2 = 1 \\
\lambda_0 + \lambda_4 + \lambda_5 \le 1 - z_3 = 1 \\
\lambda_2 + \lambda_7 + \lambda_8 \le z_3 = 0 \\
\lambda_i \ge 0, \forall i, \sum\nolimits^8_{i=0} \lambda_i = 1
\end{lgathered} \right.
\end{equation*}
Thus we can conclude that
\begin{gather*}
\lambda_4 + \lambda_5 = 1, \lambda_4 \ge 0, \lambda_5 \ge 0, \\
\lambda_0=\lambda_1=\lambda_2=\lambda_3=\lambda_6=\lambda_7=\lambda_8=0
\end{gather*}
This mechanism can be interpreted as follows: $z_1=1$ enforces $\lambda_i = 0$, $i=0,1,2,3$, through set $R_1$; $z_2=1$ further enforces $\lambda_6 = 0$ through set $R_2$; finally, $z_3=0$ enforces $\lambda_7 = \lambda_8 = 0$ through set $L_3$. The remaining weights $\lambda_4$ and $\lambda_5$ then constitute the positive coefficients. In this case, only $\log_2 8 = 3$ binary variables and $2 \log_2 8 = 6$ additional constraints are involved. Compared with formulation (\ref{eq:App-02-SOS2-MILP}), the Gray code can be regarded as an extra branching scheme enabled by the problem structure, so the number of binary variables in (\ref{eq:App-02-SOS2-Log}) is greatly reduced when $k$ is large.
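The sets (\ref{eq:App-02-SOS2-Log-L})-(\ref{eq:App-02-SOS2-Log-R}) can be generated programmatically. The Python sketch below (our illustration) uses the standard reflected Gray code, whose ordering may differ from the codes tabulated in Fig. \ref{fig:App-02-02} but has the same one-bit-adjacency property; it then verifies that, for every assignment of $z$, exactly one adjacent pair of weights survives the constraints (\ref{eq:App-02-SOS2-Log}):

```python
def gray(v):
    # reflected binary Gray code of integer v (0-based)
    return v ^ (v >> 1)

def bit(code, n):
    # n-th bit of code, n = 1, 2, ...
    return (code >> (n - 1)) & 1

def build_sets(k):
    """Sets L_n, R_n for k intervals (k a power of two), weights lambda_0..lambda_k."""
    nbits = k.bit_length() - 1
    G = [gray(v) for v in range(k)]  # G[v] is the code of interval v+1
    L, R = {}, {}
    for n in range(1, nbits + 1):
        L[n], R[n] = [], []
        for v in range(k + 1):
            left = bit(G[v - 1], n) if v >= 1 else None      # interval left of breakpoint v
            right = bit(G[v], n) if v <= k - 1 else None     # interval right of it
            vals = {x for x in (left, right) if x is not None}
            if vals == {1}:
                L[n].append(v)
            elif vals == {0}:
                R[n].append(v)
    return L, R

def allowed(k, L, R, zbits):
    """Weights not forced to zero by the log-formulation constraints for fixed z."""
    return [v for v in range(k + 1)
            if all((v not in L[n] or zbits[n] == 1) and
                   (v not in R[n] or zbits[n] == 0) for n in L)]
```

For $k=8$ only 3 binaries are needed, and setting $z$ to the Gray code of interval $u+1$ leaves exactly $\{\lambda_u, \lambda_{u+1}\}$ free.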
As a special case, consider the following problem
\begin{equation}
\label{eq:App-02-Example-1-NLP}
\min \left\{ \sum_i f_i(x_i) ~\middle|~ x \in X \right\}
\end{equation}
where $f_i(x_i)$, $i=1,2,\cdots$ are convex univariate functions and $X$ is a polytope. This problem is convex but nonlinear. The DCOPF problem, a fundamental issue in power market clearing, takes this form, with each $f_i(x_i)$ a convex quadratic function. Although any local NLP algorithm can find the global optimal solution of (\ref{eq:App-02-Example-1-NLP}), there are still reasons to seek approximate LP formulations. One is that problem (\ref{eq:App-02-Example-1-NLP}) may be embedded in another optimization problem and serve as its constraint. This is a pervasive modeling paradigm for studying the strategic behaviors and market power of energy providers, where the electricity market is cleared according to a DCOPF, and the delivered energy of generation companies and the nodal electricity prices are extracted from the optimal primal variables and from the dual variables associated with the power balancing constraints, respectively. An LP representation allows one to exploit the elegant LP duality theory for further analysis, and helps characterize the optimal solution through primal-dual or KKT optimality conditions. To this end, we can opt to solve the following LP
\begin{equation}
\label{eq:App-02-Example-1-LP-Approx}
\begin{aligned}
\min_{x,y,\lambda} ~~ & \sum_i y_i \\
\mbox{s.t.}~~ & y_i = \sum_{k} \lambda_{ik} f_i(x_{ik}),~ \forall i \\
& x_i = \sum_k \lambda_{ik} x_{ik},~\forall i,~ x \in X \\
& \lambda \ge 0,~ \sum_k \lambda_{ik} =1,~ \forall i \\
\end{aligned}
\end{equation}
where $x_{ik}$, $k=1,2,\cdots$ are breakpoints (constants) for variable $x_i$, and $\lambda_{ik}$ are the associated weights. Because the $f_i(x_i)$ are convex functions, the $\mathbb{SOS}_2$ requirement on the weight variables $\lambda$ is naturally met, so it is dropped from the constraints.
\subsection{Bivariate Continuous Nonlinear Function}
\label{App-B-Sect01-02}
Consider a continuous nonlinear function $f(x,y)$ in two variables $x$ and $y$. The feasible region is partitioned into $M \times N$ disjoint sub-rectangles by the $M+N+2$ breakpoints $x_n$, $n=0,1,\cdots,N$ and $y_m$, $m=0,1,\cdots,M$, as illustrated in Fig. \ref{fig:App-02-03}, and the corresponding function values are $f_{mn} = f(x_n,y_m)$. By introducing a planar weighting coefficient matrix $\{ \lambda_{mn}\}$ over the grid points that satisfies
\begin{subequations}
\begin{equation}
\label{eq:App-02-f(x,y)-Weight}
\begin{gathered}
\lambda_{mn} \ge 0,~ \forall m, \forall n \\
\sum^M_{m=0} \sum^N_{n=0} \lambda_{mn} =1
\end{gathered}
\end{equation}
we can represent any point $(x,y)$ in the feasible region as a convex combination of the extreme points of the sub-rectangle in which it resides:
\begin{equation}
\label{eq:App-02-f(x,y)-Variable}
\begin{gathered}
x = \sum^M_{m=0} \sum^N_{n=0} \lambda_{mn} x_n = \sum^N_{n=0}
\left( \sum^M_{m=0} \lambda_{mn} \right) x_n \\
y = \sum^M_{m=0} \sum^N_{n=0} \lambda_{mn} y_m = \sum^M_{m=0}
\left( \sum^N_{n=0} \lambda_{mn} \right) y_m
\end{gathered}
\end{equation}
and its function value
\begin{equation}
\label{eq:App-02-f(x,y)-Value}
f(x,y) = \sum^M_{m=0} \sum^N_{n=0} \lambda_{mn} f_{mn}
\end{equation}
\end{subequations}
is also a convex combination of the function values at the corner points.
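For a point in the interior of a sub-rectangle, one valid planar weight matrix is given by bilinear interpolation, which spreads the weight over all four corners (the triangle selection discussed below instead activates only three of them). A Python sketch with hypothetical helper names:

```python
import bisect

def planar_weights(xs, ys, x, y):
    """Weight matrix {lambda_mn} (rows index y-breakpoints, columns index
    x-breakpoints) expressing (x, y) as a convex combination of the corners
    of its sub-rectangle, via bilinear interpolation."""
    n = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    m = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    s = (x - xs[n]) / (xs[n + 1] - xs[n])
    t = (y - ys[m]) / (ys[m + 1] - ys[m])
    lam = [[0.0] * len(xs) for _ in ys]
    lam[m][n] = (1 - t) * (1 - s)
    lam[m][n + 1] = (1 - t) * s
    lam[m + 1][n] = t * (1 - s)
    lam[m + 1][n + 1] = t * s
    return lam

def pwl2_value(xs, ys, f, x, y):
    """PWL approximation of f(x, y) as the weighted sum of corner values."""
    lam = planar_weights(xs, ys, x, y)
    return sum(lam[m][n] * f(xs[n], ys[m])
               for m in range(len(ys)) for n in range(len(xs)))
```

Because the weights form a convex combination summing to 1, affine functions are reproduced exactly; for general $f$ the approximation error shrinks as the grid is refined.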
\begin{figure}
\caption{Breakpoints and active rectangle for PWL approximation.}
\label{fig:App-02-03}
\end{figure}
As we can see from Fig. \ref{fig:App-02-03}, in a valid representation, if ($x^*, y^*$) belongs to a sub-rectangle, only the weights associated with its four corner points may be strictly positive, while the others must be forced to 0. In such a pattern, the vector of column sums and the vector of row sums of the matrix ${\rm \Lambda} = [\lambda_{mn}]$, $\forall m,n$ should each constitute an $\mathbb{SOS}_2$; $\rm \Lambda$ is then called a planar $\mathbb{SOS}_2$, which can be implemented via two $\mathbb{SOS}_2$ constraints on the marginal weight vectors. In fact, at most three of the four corner points can be associated with uniquely determined non-negative weights. Consider point O and the active rectangle ABCD shown in Fig. \ref{fig:App-02-03}. The location of O can be expressed as a linear combination of the coordinates of the corner points $x_A,x_B,x_C,x_D$ with non-negative weights $\lambda_A,\lambda_B,\lambda_C,\lambda_D$ as:
\begin{subequations}
\begin{equation}
\label{eq:App-02-Rectangle}
x_O = \lambda_A x_A + \lambda_B x_B + \lambda_C x_C + \lambda_D x_D
\end{equation}
In the first case
\begin{equation}
\label{eq:App-02-Rectangle-Weight-Case1}
\lambda^1_A,~ \lambda^1_B,~ \lambda^1_C,~ \lambda^1_D \ge 0,~
\lambda^1_A + \lambda^1_B + \lambda^1_C + \lambda^1_D = 1
\end{equation}
In the second case
\begin{equation}
\label{eq:App-02-Rectangle-Weight-Case2}
\lambda^2_A, \lambda^2_C, \lambda^2_D \ge 0,~ \lambda^2_B = 0,~
\lambda^2_A + \lambda^2_C + \lambda^2_D = 1
\end{equation}
In the third case
\begin{equation}
\label{eq:App-02-Rectangle-Weight-Case3}
\lambda^3_B, \lambda^3_C, \lambda^3_D \ge 0,~ \lambda^3_A = 0,~
\lambda^3_B + \lambda^3_C + \lambda^3_D = 1
\end{equation}
We use superscripts 1, 2, 3 to distinguish the weights in the different representations. According to the Carathéodory theorem, the non-negative weights are uniquely determined in (\ref{eq:App-02-Rectangle-Weight-Case2}) and (\ref{eq:App-02-Rectangle-Weight-Case3}), and in the former (latter) case we say that $\rm \Delta$ACD ($\rm \Delta$BCD) is activated or selected. Denote the function values in these three cases by
\begin{gather}
f_1(x_O) = \lambda^1_A f(x_A) + \lambda^1_B f(x_B) + \lambda^1_C f(x_C) + \lambda^1_D f(x_D) \label{eq:App-02-Rectangle-Value-Case1} \\
f_2(x_O) = \lambda^2_A f(x_A) + \lambda^2_C f(x_C) + \lambda^2_D f(x_D)
\label{eq:App-02-Rectangle-Value-Case2} \\
f_3(x_O) = \lambda^3_B f(x_B) + \lambda^3_C f(x_C) + \lambda^3_D f(x_D)
\label{eq:App-02-Rectangle-Value-Case3}
\end{gather}
\end{subequations}
Suppose $f(x_A)< f(x_B)$; then the plane defined by points B, C, D lies above the one defined by points A, C, D, hence $f_2(x_O) < f_1(x_O) < f_3(x_O)$. If a smaller (larger) function value is preferred, $\rm \Delta$ACD ($\rm \Delta$BCD) will be activated at the optimal solution. Bear in mind that as long as A, B, C, D are not coplanar, $f_1(x_O)$ is strictly greater (less) than $f_2(x_O)$ ($f_3(x_O)$); therefore, representation (\ref{eq:App-02-Rectangle-Value-Case1}) is never selected at the optimal solution, and the weights of the active corners are uniquely determined. If rectangle ABCD is small enough, the discrepancy among the three representations can be neglected. In any case, the non-uniqueness of the corner weights does little harm in applications, because the optimal solution $x_O$ and the optimal value are consistent with those of the original problem. The weights do not correspond to physical strategies that need to be deployed, and the linearization method can be treated as a black box by the decision maker, who provides function values at $x_A,x_B,x_C,x_D$ and receives a unique solution $x_O$.
Detecting the active sub-rectangle in which $(x^*, y^*)$ resides requires additional constraints on the weight parameters $\lambda_{mn}$. The aforementioned integer formulation can be used to impose the planar $\mathbb{SOS}_2$ constraints. Let $\lambda^n$ and $\lambda^m$ be the aggregated weights for $x$ and $y$, respectively, i.e.,
\begin{equation}
\label{eq:App-02-f(x,y)-fmn}
\begin{gathered}
\lambda^n = \sum^M_{m=0} \lambda_{mn},~ \forall n \\
\lambda^m = \sum^N_{n=0} \lambda_{mn},~ \forall m
\end{gathered}
\end{equation}
which are also called the marginal weight vectors, and introduce the following constraints:
\begin{equation}
\label{eq:App-02-f(x,y)-MILP-x}
\mbox{For $x$: } \left\{
\begin{aligned}
\sum_{n \in L^1_k} \lambda^n & \le z^1_k \\
\sum_{n \in R^1_k} \lambda^n & \le 1 - z^1_k \\
z^1_k & \in \{0, 1\}
\end{aligned} \right\},~ \forall k \in K_1
\end{equation}
\begin{equation}
\label{eq:App-02-f(x,y)-MILP-y}
\mbox{For $y$: } \left\{
\begin{aligned}
\sum_{m \in L^2_k} \lambda^m & \le z^2_k \\
\sum_{m \in R^2_k} \lambda^m & \le 1 - z^2_k \\
z^2_k & \in \{0, 1\}
\end{aligned} \right\},~ \forall k \in K_2
\end{equation}
where $L^1_k$, $R^1_k$ and $L^2_k$, $R^2_k$ are index sets of the weights $\lambda^n$ and $\lambda^m$, and $K_1$ and $K_2$ are index sets of the binary variables. The dichotomy sequences $\{L^1_k,R^1_k \}_{k \in K_1}$ and $\{L^2_k,R^2_k\}_{k \in K_2}$ constitute a branching scheme on the indices of the weights, such that constraints (\ref{eq:App-02-f(x,y)-MILP-x}) and (\ref{eq:App-02-f(x,y)-MILP-y}) guarantee that at most two adjacent elements of $\lambda^n$ and of $\lambda^m$ can take strictly positive values, so as to detect the active sub-rectangle. In this approach, the required number of binary variables is $\lceil \log_2 M \rceil + \lceil \log_2 N \rceil$. The construction of these index sets has been explained in the univariate case.
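The index-set construction is given in the univariate case; as a concrete illustration, the sketch below uses one standard choice (reflected Gray-code labels on the intervals, in the spirit of logarithmic $\mathbb{SOS}_2$ branching) and checks that the resulting constraints admit weight vectors with adjacent support while cutting off non-adjacent ones. The function names and the Gray-code labeling are our own illustrative assumptions, not notation from the text.

```python
from itertools import product

def gray(j):
    # reflected Gray code of integer j; consecutive intervals differ in one bit
    return j ^ (j >> 1)

def index_sets(N, K):
    """One standard dichotomy scheme: Gray-code labels on intervals 1..N.
    Breakpoint n (0..N) is adjacent to intervals n and n+1 (clipped to 1..N)."""
    L, R = [], []
    for k in range(K):
        Lk = [n for n in range(N + 1)
              if all((gray(j - 1) >> k) & 1 == 1
                     for j in (n, n + 1) if 1 <= j <= N)]
        Rk = [n for n in range(N + 1)
              if all((gray(j - 1) >> k) & 1 == 0
                     for j in (n, n + 1) if 1 <= j <= N)]
        L.append(Lk)
        R.append(Rk)
    return L, R

def sos2_feasible(lam, L, R, K):
    """Does some binary z satisfy sum_{L_k} lam <= z_k and sum_{R_k} lam <= 1-z_k?"""
    return any(all(sum(lam[n] for n in L[k]) <= z[k] and
                   sum(lam[n] for n in R[k]) <= 1 - z[k] for k in range(K))
               for z in product((0, 1), repeat=K))

N, K = 4, 2                       # 4 intervals, ceil(log2 4) = 2 binaries
L, R = index_sets(N, K)
assert sos2_feasible([0, .5, .5, 0, 0], L, R, K)      # adjacent support: kept
assert not sos2_feasible([.5, 0, 0, .5, 0], L, R, K)  # non-adjacent: cut off
```

The same scheme applied independently to the marginal vectors $\lambda^n$ and $\lambda^m$ yields the planar $\mathbb{SOS}_2$ condition.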
Likewise, the discussion of problems (\ref{eq:App-02-Example-1-NLP}) and (\ref{eq:App-02-Example-1-LP-Approx}) extends easily when the objective function is a sum of bivariate convex functions, in which case the planar $\mathbb{SOS}_2$ condition is naturally met.
\subsection{Approximation Error}
This section answers a basic question: for a given function, how many intervals (break points) are needed to achieve a certain error bound $\varepsilon$? For ease of understanding, we restrict our attention to univariate functions, including the quadratic function $f(x)= ax^2$ and, more generally, continuous functions $f(x)$ that are three times continuously differentiable. Let $\phi_f(x)$ be the PWL approximation of $f(x)$ on $X=\{x \,|\, x_l \le x \le x_m \}$; the maximum absolute approximation error is defined by ${\rm \Delta} = \max_{x \in X} |f(x)-\phi_f (x)|$.
First let us consider the quadratic function $f(x) = a x^2$ with $a > 0$, which has been thoroughly studied in \cite{App-MILP-PWL-Error-1}. The analysis is briefly introduced here. Choose an arbitrary interval $[x_{i-1},x_i] \subset X$; the PWL approximation
can be parameterized by a single variable $t \in [0,1]$ as
\begin{gather*}
x(t) = x_{i-1} + t (x_i-x_{i-1}) \\
\phi_f(x(t)) = ax_{i-1}^2 + at (x_i^2 - x_{i-1}^2)
\end{gather*}
Clearly, $f(x(0))=\phi_f(x(0))=ax^2_{i-1}$, $f(x(1))=\phi_f(x(1))=ax^2_i$, and $\phi_f(x(t)) > f(x(t))$, $\forall t \in (0,1)$. The maximal approximation error in the interval must be attained at a critical point which satisfies
\begin{equation*}
\begin{aligned}
& \dfrac{d}{dt}\left( \phi_f(x(t)) - f(x(t)) \right) \\
=~ & \dfrac{d}{dt} a \left[ x_{i-1}^2 + t (x_i^2 - x_{i-1}^2) -
( x_{i-1} + t (x_i-x_{i-1}))^2 \right] \\
=~ & \dfrac{d}{dt} a (x_i - x_{i-1})^2 (t-t^2) \\
=~ & a (x_i - x_{i-1})^2 (1-2t) \\
=~ & 0 \Rightarrow t = \frac{1}{2}
\end{aligned}
\end{equation*}
implying that $x(1/2)$ is always a critical point where the approximation error reaches maximum, regardless of the partition of intervals, and the error is given by
\begin{equation*}
\begin{aligned}
{\rm \Delta} & = \phi_f \left( x \left( \frac{1}{2} \right) \right)
- f\left( x \left( \frac{1}{2} \right) \right) \\
& = a \left[ x_{i-1}^2 + \frac{1}{2} (x_i^2 - x_{i-1}^2)
- \frac{1}{4} ( x_{i-1} + x_i)^2 \right] \\
& = \frac{a}{4} (x_i - x_{i-1})^2
\end{aligned}
\end{equation*}
which is quadratic in the length of the interval and independent of its location. In this regard, the intervals must be evenly distributed with equal length in order to get the best performance. If $X$ is divided into $n$ intervals, the absolute maximum approximation error is
\begin{equation*}
{\rm \Delta} = \frac{a}{4n^2} (x_m - x_l)^2
\end{equation*}
Therefore, for a given tolerance $\varepsilon$, the number of intervals should satisfy
\begin{equation*}
n \ge \sqrt{\frac{a}{\varepsilon}} \frac{x_m-x_l}{2}
\end{equation*}
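The interval-count bound can be checked numerically. The sketch below, with arbitrarily chosen numbers, samples the chord error of a uniform PWL approximation of $ax^2$ and compares it with the closed form ${\rm \Delta} = a(x_m - x_l)^2/(4n^2)$:

```python
def pwl_max_error(a, xl, xm, n, samples=1000):
    """Max (phi_f - f) for f(x) = a*x^2 on n equal intervals of [xl, xm]."""
    h = (xm - xl) / n
    err = 0.0
    for i in range(n):
        x0, x1 = xl + i * h, xl + (i + 1) * h
        for s in range(samples + 1):
            x = x0 + (x1 - x0) * s / samples
            # chord (linear interpolant) between (x0, f(x0)) and (x1, f(x1))
            phi = a * x0**2 + (x - x0) * a * (x1**2 - x0**2) / (x1 - x0)
            err = max(err, phi - a * x**2)
    return err

a, xl, xm, n = 2.0, -1.0, 3.0, 8
predicted = a * (xm - xl)**2 / (4 * n**2)
assert abs(pwl_max_error(a, xl, xm, n) - predicted) < 1e-6
```

As the analysis predicts, the maximum chord error occurs at each interval's midpoint and depends only on the interval length, not its location.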
For the quadratic function $f(x)=a x^2$, the coefficient $a$ determines its second-order derivative. In more general situations, the above discussion implies that the number of intervals needed for a PWL approximation of $f(x)$ depends on its second-order derivative. This problem has been thoroughly studied in \cite{App-MILP-PWL-Error-2}. The conclusion is: for a three times continuously differentiable $f(x)$ on the interval $[x_l, x_m]$, the optimal number of segments $s(\varepsilon)$ under a given error tolerance $\varepsilon$ can be selected as
\begin{equation*}
s(\varepsilon) \propto \dfrac{c}{\sqrt{\varepsilon}},~ \varepsilon \to 0^+
\end{equation*}
where
\begin{equation*}
c = \dfrac{1}{4} \int^{x_m}_{x_l} \sqrt{|f^{\prime \prime}(x)|}~ dx
The conclusion still holds if $\sqrt{|f^{\prime \prime}(x)|}$ has integrable singularities at the endpoints.
\section{Linear Formulation of Product Terms}
\label{App-B-Sect02}
The product of two variables, or a bilinear term, naturally arises in optimization models from various disciplines. For example, in economic studies, if the price $c$ and the quantity $q$ of a commodity are both variables, then the cost $cq$ is a bilinear term. Similarly, in circuit analysis, if both the voltage $v$ and the current $i$ are variables, then the electric power $vi$ is a bilinear term. Bilinear terms are non-convex, and linearizing them using linear constraints and integer variables is a frequently used technique in the optimization community. This section presents several techniques for the central question of product linearization: how to enforce the constraint $z=xy$, depending on the types of $x$ and $y$.
\subsection{Product of Two Binary Variables}
\label{App-B-Sect02-01}
If $x \in \mathbb {B}$ and $y \in \mathbb {B}$, then $z = x y$ is equivalent to the following linear inequalities
\begin{equation}
\label{eq:App-02-xy-BB}
\begin{gathered}
0 \le z \le y \\
0 \le x-z \le 1-y \\
x \in \mathbb{B},~y \in \mathbb{B},~ z \in \mathbb{B}
\end{gathered}
\end{equation}
It can be verified that if $x=1$, $y=1$, then $z = 1$ is achieved; if $x = 0$ or $y = 0$, then $z = 0$ is enforced, regardless of the value of $y$ or $x$. This is equivalent to the requirement $z = x y$.
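The verification above is a four-case truth table, which a minimal sketch can enumerate: for each $(x,y)$, the only $z$ satisfying the inequalities is $xy$.

```python
def feasible_z(x, y):
    """All z in {0,1} satisfying 0 <= z <= y and 0 <= x - z <= 1 - y."""
    return [z for z in (0, 1) if 0 <= z <= y and 0 <= x - z <= 1 - y]

for x in (0, 1):
    for y in (0, 1):
        # the constraints leave exactly one feasible z, namely the product
        assert feasible_z(x, y) == [x * y]
```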
If $x \in \mathbb {Z}^+$ belongs to interval $[x^L,x^U]$ and $y \in \mathbb {Z}^+$ belongs to interval $[y^L,y^U]$, consider the binary expansions
\begin{equation}
\label{eq:App-02-xy-ZZ-BE}
\begin{aligned}
x = x^L + \sum_{k=1}^{K_1} 2^{k-1} u_k \\
y = y^L + \sum_{k=1}^{K_2} 2^{k-1} v_k
\end{aligned}
\end{equation}
where $K_1 = \lceil \log_2 (x^U-x^L) \rceil$, $K_2 = \lceil \log_2 (y^U-y^L) \rceil$. To develop a vector expression, define vectors $b^1 = [2^0, 2^1,\cdots,2^{K_1-1}]$, $u = [u_1,u_2,\cdots,u_{K_1}]^T$,
$b^2 = [2^0, 2^1,\cdots,2^{K_2-1}]$, $v = [v_1,v_2,\cdots,v_{K_2}]^T$, and matrices $B = (b^1)^T b^2$, $z = u v^T$; then
\begin{equation}
x y = x^L y^L + x^L b^2 v + y^L b^1 u + \langle B, z \rangle \notag
\end{equation}
where $\langle B, z \rangle = \sum_i \sum_j B_{ij} z_{ij}$. The relation among $u$, $v$, and $z$ can be linearized via equation (\ref{eq:App-02-xy-BB}) element-wise. Its compact form is given by
\begin{equation}
\label{eq:App-02-xy-ZZ-Comp}
\begin{gathered}
{\bf 0}^{K_1 \times K_2} \le z \le {\bf 1}^{K_1 \times 1} v^T \\
{\bf 0}^{K_1 \times K_2} \le u {\bf 1}^{1 \times K_2} - z \le {\bf 1}^{K_1 \times K_2} - {\bf 1}^{K_1 \times 1} v^T \\
u \in \mathbb{B}^{K_1 \times 1},~ v \in \mathbb{B}^{K_2 \times 1},~
z \in \mathbb{B}^{K_1 \times K_2}
\end{gathered}
\end{equation}
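The expansion identity can be sanity-checked by brute force. The sketch below, with arbitrarily chosen bounds, expands two integers into bits and verifies that $xy = x^L y^L + x^L b^2 v + y^L b^1 u + \langle B, z\rangle$ with $z = uv^T$:

```python
import math

def binary_expansion(val, vL, vU):
    """Bits u_k with val = vL + sum_k 2^(k-1) u_k, using ceil(log2(vU-vL)) bits."""
    K = max(1, math.ceil(math.log2(vU - vL)))
    d = val - vL
    return [(d >> k) & 1 for k in range(K)]

xL, xU, yL, yU = 3, 9, 5, 12
for x in range(xL, xU + 1):
    for y in range(yL, yU + 1):
        u = binary_expansion(x, xL, xU)
        v = binary_expansion(y, yL, yU)
        b1 = [2**k for k in range(len(u))]
        b2 = [2**k for k in range(len(v))]
        # <B, z> with B = (b1)^T b2 and z = u v^T
        inner = sum(b1[i] * b2[j] * u[i] * v[j]
                    for i in range(len(u)) for j in range(len(v)))
        xy = (xL * yL
              + xL * sum(b2[j] * v[j] for j in range(len(v)))
              + yL * sum(b1[i] * u[i] for i in range(len(u)))
              + inner)
        assert xy == x * y
```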
\subsection{Product of Integer and Continuous Variables}
\label{App-B-Sect02-02}
We consider the binary-continuous case first. If $x \in \mathbb {R}$ belongs to interval $[x^L,x^U]$ and $y \in \mathbb {B}$, then $z = x y$ is equivalent to the following linear inequalities
\begin{equation}
\label{eq:App-02-xy-BC}
\begin{gathered}
x^L y \le z \le x^U y \\
x^L (1-y) \le x-z \le x^U (1-y) \\
x \in \mathbb{R},~y \in \mathbb{B},~ z \in \mathbb{R}
\end{gathered}
\end{equation}
It can be verified that if $y = 0$, then $z$ is enforced to be 0 and $x^L \le x \le x^U$ is naturally met; if $y = 1$, then $z = x$ and $x^L \le z \le x^U$ must be satisfied, indicating the same relationship on $x$, $y$, and $z$. As for the integer-continuous case, the integer variable can be represented as (\ref{eq:App-02-xy-ZZ-BE}) using binary variables, yielding a linear combination of binary-continuous products.
It should be mentioned that the upper bound $x^U$ and the lower bound $x^L$ are crucial for creating the linearization inequalities. If explicit bounds are not available, one can incorporate a constant $M$ that is big enough. The value of $M$ has a notable impact on the computation time. To enhance efficiency, the ideal value is the smallest $M$ that ensures the inequality $-M \le x \le M$ never becomes binding at the optimum, as it yields the strongest bound when the integrality of the binary variables is relaxed, expediting the convergence of the branch-and-bound procedure. However, such a value is generally unknown before the problem is solved. Nevertheless, we do not actually need the smallest value $M^{*}$: any $M \ge M^*$ produces the same optimal solution and is valid for linearization. Bear in mind that an over-large $M$ not only deteriorates the computation time but also causes numerical instability due to a large condition number, so a proper tradeoff must be made between efficiency and accuracy. A suitable $M$ can be determined by estimating the bounds of $x$ via problem-dependent heuristics.
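The binary-continuous linearization (\ref{eq:App-02-xy-BC}) can be verified directly: for fixed $x$ and $y$, the four inequalities collapse the feasible $z$ to the single value $xy$. A minimal sketch with arbitrarily chosen bounds:

```python
def z_interval(x, y, xL, xU):
    """Interval of z allowed by xL*y <= z <= xU*y and xL(1-y) <= x-z <= xU(1-y)."""
    lo = max(xL * y, x - xU * (1 - y))
    hi = min(xU * y, x - xL * (1 - y))
    return lo, hi

xL, xU = -2.0, 5.0
for x in (-2.0, 0.0, 3.7, 5.0):
    for y in (0, 1):
        lo, hi = z_interval(x, y, xL, xU)
        # the interval degenerates to the point z = x*y
        assert abs(lo - x * y) < 1e-12 and abs(hi - x * y) < 1e-12
```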
\subsection{Product of Two Continuous Variables}
\label{App-B-Sect02-03}
If $x \in \mathbb {R}$ belongs to interval $[x^L,x^U]$, and $y \in \mathbb {R}$ belongs to interval $[y^L,y^U]$, there are three options for linearizing their product $xy$. The first one considers $z = xy$ as a bivariate function $f(x,y)$, and applies the planar $\mathbb{SOS}_2$ method in Sect. \ref{App-B-Sect01-02}. The second one discretizes $y$, for example, as follows
\begin{equation}
\begin{gathered}
y = y^L + \sum_{k=1}^{K} 2^{k-1} u_k {\rm \Delta} y \\
{\rm \Delta} y = \dfrac{y^U - y^L}{2^K},~ u_k \in \mathbb {B},~ \forall k
\end{gathered}
\label{eq:App-02-xy-CC-Discrete-y}
\end{equation}
and
\begin{equation}
xy = x y^L + \sum_{k=1}^{K} 2^{k-1} v_k {\rm \Delta} y
\end{equation}
where $v_k = u_k x$ can be linearized through equation (\ref{eq:App-02-xy-BC})
as
\begin{equation}
\label{eq:App-02-xy-CC-BE}
\begin{gathered}
x^L u_k \le v_k \le x^U u_k,~ \forall k \\
x^L (1 - u_k) \le x - v_k \le x^U (1 - u_k),~ \forall k \\
x \in \mathbb{R},~
u_k \in \mathbb{B},~ \forall k,~
v_k \in \mathbb{R},~ \forall k
\end{gathered}
\end{equation}
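When $y$ happens to lie on the discretization grid, the expansion reproduces the product exactly; otherwise the grid spacing ${\rm \Delta} y$ bounds the error. A minimal sketch with arbitrarily chosen numbers:

```python
def discretize(y, yL, yU, K):
    """Bits u_k with y = yL + sum_k 2^(k-1) u_k * dy, dy = (yU - yL)/2^K.
    Assumes y lies on the grid {yL, yL + dy, ..., yU - dy}."""
    dy = (yU - yL) / 2 ** K
    d = round((y - yL) / dy)
    return [(d >> k) & 1 for k in range(K)], dy

x, yL, yU, K = 3.5, 0.0, 8.0, 4
y = 6.5                               # on the grid (d = 13)
u, dy = discretize(y, yL, yU, K)
v = [uk * x for uk in u]              # v_k = u_k * x, linearized via (App-02-xy-BC)
xy = x * yL + sum(2 ** k * v[k] * dy for k in range(K))
assert abs(xy - x * y) < 1e-12
```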
In practical problems, bilinear terms often appear as the inner product of two vectors. For convenience, we present the compact linearization of $x^T y$ via binary expansion. Let $y$ be the vector variable chosen to be discretized; perform (\ref{eq:App-02-xy-CC-Discrete-y}) on each element of $y$
\begin{equation*}
y_j = y^L_j + \sum_{k=1}^{K} 2^{k-1} u_{jk} {\rm \Delta} y_j,~ \forall j
\end{equation*}
and thus
\begin{equation*}
x_j y_j = x_j y^L_j + \sum_{k=1}^{K} 2^{k-1} v_{jk} {\rm \Delta} y_j,~
\forall j,~ v_{jk} = u_{jk} x_j,~ \forall j, \forall k
\end{equation*}
Relation $v_{jk} = u_{jk} x_j$ can be expressed via linear constraints
\begin{equation*}
x^L_j u_{jk} \le v_{jk} \le x^U_j u_{jk},~
x^L_j(1-u_{jk}) \le x_j - v_{jk} \le x^U_j(1-u_{jk}),~
\forall j, \forall k
\end{equation*}
Denote by $V$ and $U$ the matrix variables consisting of $v_{jk}$ and $u_{jk}$, respectively; $1_K$ stands for the all-one column vector of dimension $K$; ${\rm \Delta}_Y$ is a diagonal matrix with the ${\rm \Delta} y_j$ as its non-zero entries; vector $\zeta = [2^0,2^1,\cdots,2^{K-1}]$. Combining all the above element-wise expressions, we have the linear formulation of $x^T y$ in a compact matrix form
\begin{equation*}
x^T y = x^T y^L + \zeta V^T {\rm \Delta} y
\end{equation*}
in conjunction with
\begin{equation}
\begin{gathered}
y = y^L + {\rm \Delta}_Y U \zeta^T \\
(x^L \cdot 1_K^T) \otimes U \le V \le (x^U \cdot 1_K^T) \otimes U \\
(x^L \cdot 1_K^T) \otimes (1-U) \le x \cdot 1^T_K - V \le (x^U \cdot 1_K^T) \otimes (1-U) \\
x \in \mathbb{R}^J,~ y \in \mathbb R^J,~
U \in \mathbb{B}^{J \times K},~ V \in \mathbb{R}^{J \times K}
\end{gathered}
\label{eq:App-02-xy-CC-Vector-BE}
\end{equation}
where $\otimes$ represents element-wise product of two matrices with the same dimension.
One possible drawback of this formulation is that the discretized variable is no longer continuous. The approximation accuracy can be improved by increasing the number of breakpoints without introducing too many binary variables, whose number is given by $\lceil \log_2 ((y^U-y^L)/{\rm \Delta} y) \rceil$. Furthermore, the variable to be discretized must have clear upper and lower bounds. This is not restrictive, because the decision variables of engineering problems are subject to physical operating limitations, such as the maximum and minimum output of a generator. Nevertheless, if $x$, for example, is unbounded in the formulation but the problem has a finite optimum, we can replace $x^U$ ($x^L$) in (\ref{eq:App-02-xy-CC-BE}) with a large enough big-M parameter $M$ ($-M$), so that the true optimal solution remains feasible. As mentioned previously, the value of $M$ may influence the computational efficiency of the equivalent MILP. The optimal choice of $M$ in general remains an open problem, but heuristic methods exist for specific instances. For example, if $x$ stands for the marginal production cost, a dual variable whose bounds are unclear, one can determine a suitable bound from historical data or a price forecast.
An alternative formulation for the second option deals with the product term $x f(y)$, of which $xy$ is the special case $f(y)=y$. By performing the piecewise constant approximation (\ref{eq:App-02-PWC-CC}) on function $f(y)$, the product becomes $xy = \sum_{s=1}^{S-1} x \theta_s y_s$, where $y_s$ is constant, $x$ is continuous, and $\theta_s$ is binary. The products $x \theta_s$, $s=1,\cdots,S-1$ can be readily linearized via the method in Sect. \ref{App-B-Sect02-02}. In this approach, the continuity of $x$ and $y$ is retained. However, the number of binary variables in the piecewise constant approximation of $f(y)$ grows linearly in the number of samples on $y$.
The third one converts the product into a separable form, and then performs PWL approximation for univariate nonlinear functions. To see this, consider a bilinear term $x y$. Introduce two continuous variables $u$ and $v$ defined as follows
\begin{equation}
\label{eq:App-02-xy-CC-Decomp-1}
\begin{gathered}
u = \frac{1}{2} (x+y) \\
v = \frac{1}{2} (x-y)
\end{gathered}
\end{equation}
Now we have
\begin{equation}
\label{eq:App-02-xy-CC-Decomp-2}
x y = u^2 - v^2
\end{equation}
In (\ref{eq:App-02-xy-CC-Decomp-2}), $u^2$ and $v^2$ are univariate nonlinear functions, and can be approximated by the PWL method presented in Sect. \ref{App-B-Sect01-01}. Furthermore, if $x_l \le x \le x_u$, $y_l \le y \le y_u$, then the lower and upper bounds of $u$ and $v$ are given by
\begin{equation*}
\begin{gathered}
\frac{1}{2} (x_l + y_l) \le u \le \frac{1}{2} (x_u + y_u) \\
\frac{1}{2} (x_l - y_u) \le v \le \frac{1}{2} (x_u - y_l)
\end{gathered}
\end{equation*}
Formulation (\ref{eq:App-02-xy-CC-Decomp-2}) has an additional advantage: if $xy$ appears in an objective function that is to be minimized and is not involved in the constraints, we only need to approximate $v^2$, because $u^2$ is convex and $-v^2$ is concave. The minimum number of binary variables in this method grows logarithmically in the number of break points, as explained in Sect. \ref{App-B-Sect01-01}.
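The separable identity itself is elementary and easy to confirm numerically; a minimal sketch with arbitrarily chosen inputs:

```python
def separable_product(x, y):
    """xy recovered as u^2 - v^2 with u = (x+y)/2 and v = (x-y)/2."""
    u, v = (x + y) / 2, (x - y) / 2
    return u * u - v * v

for x, y in [(3.0, 4.0), (-1.5, 2.0), (0.0, 7.0)]:
    assert abs(separable_product(x, y) - x * y) < 1e-12
```

In a full MILP model, $u^2$ and $v^2$ would each be replaced by their univariate PWL approximations.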
The bilinear term $xy$ can be replaced by a single variable $z$ in the following situation: 1) the lower bounds $x_l$ and $y_l$ are nonnegative; and 2) either $x$ or $y$ is not referenced anywhere else except in $xy$. Suppose, for instance, that $y$ is such a variable; then the bilinear term $xy$ can be replaced by variable $z$ and constraint $x y_l \le z \le x y_u$. Once the problem is solved, $y$ can be recovered as $y = z/x$ if $x > 0$, and the inequality constraints on $z$ guarantee $y \in [y_l,y_u]$; otherwise, if $x = 0$, then $y$ is undetermined and has no impact on the optimum.
\subsection{Monomial of Binary Variables}
\label{App-B-Sect02-04}
Previous cases discussed linearizing the product of two variables. Now we consider a binary monomial with $n$ variables
\begin{equation}
\label{eq:App-02-Monomial-Binary}
z = x_1 x_2 \cdots x_n,~ x_i \in \{0,1\},~ i=1,2,\cdots,n
\end{equation}
Clearly, this monomial takes a binary value. Since the product of two binary variables can be expressed by a single one in light of (\ref{eq:App-02-xy-BB}), the monomial can be linearized recursively. Nevertheless, by making full use of the binary nature of $z$, a smarter and more concise way to represent (\ref{eq:App-02-Monomial-Binary}) is given by
\begin{align}
z & \in \{0,1\} \label{eq:App-02-Monomial-BL-1} \\
z & \le \dfrac{x_1 + x_2 + \cdots + x_n}{n} \label{eq:App-02-Monomial-BL-2}\\
z & \ge \dfrac{x_1 + x_2 + \cdots + x_n -n +1}{n} \label{eq:App-02-Monomial-BL-3}
\end{align}
If at least one of $x_i$ is equal to 0, because $\sum_{i=1}^n x_i - n + 1 \le 0$, (\ref{eq:App-02-Monomial-BL-3}) becomes redundant; moreover, $\sum^n_{i=1} x_i /n \le 1 - 1/n$, which removes $z=1$ from the feasible region, so $z$ will take a value of 0; otherwise, if all $x_i$ are equal to 1, $\sum^n_{i=1} x_i /n = 1$, and the right-hand side of (\ref{eq:App-02-Monomial-BL-3}) is $1/n$, which removes $z=0$ from the feasible region. Hence $z$ is forced to be 1. In conclusion, linear constraints (\ref{eq:App-02-Monomial-BL-1})-(\ref{eq:App-02-Monomial-BL-3}) have the same effect as (\ref{eq:App-02-Monomial-Binary}).
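This case analysis can be exhaustively checked over all $2^n$ binary assignments; a minimal sketch for a hypothetical $n=4$:

```python
from itertools import product

def feasible_z(xs):
    """All z in {0,1} satisfying z <= sum(x)/n and z >= (sum(x) - n + 1)/n."""
    n, s = len(xs), sum(xs)
    return [z for z in (0, 1) if z <= s / n and z >= (s - n + 1) / n]

n = 4
for xs in product((0, 1), repeat=n):
    expected = 1 if all(xs) else 0   # the monomial x_1 x_2 ... x_n
    assert feasible_z(list(xs)) == [expected]
```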
In view of the above transformation technique, a binary polynomial program can always be reformulated as a binary linear program. Moreover, if a single continuous variable appears in the monomial, the problem can be reformulated as an MILP.
\subsection{Product of Functions in Integer Variables}
\label{App-B-Sect02-05}
First, let us consider $z = f_1(x_1)f_2(x_2)$, where decision variables are positive integers, i.e., $x_i \in \{d_{i,1},d_{i,2},\cdots,d_{i,r_i}\},i=1,2$. Without particular tricks, $f_1$ and $f_2$ can be expressed as
\begin{gather*}
f_1 = \sum_{j=1}^{r_1} f_1(d_{1,j})u_{1,j},~ u_{1,j} \in \{0,1\},~
\sum_{j=1}^{r_1} u_{1,j} = 1 \\
f_2 = \sum_{j=1}^{r_2} f_2(d_{2,j})u_{2,j},~ u_{2,j} \in \{0,1\},~
\sum_{j=1}^{r_2} u_{2,j} = 1
\end{gather*}
and the product of two binary variables can be linearized via (\ref{eq:App-02-xy-BB}). This formulation introduces many auxiliary binary variables and does not lend itself to representing a product of more functions recursively.
Ref. \cite{App-MILP-Fun-Prod} suggests another choice
\begin{equation}
z = \sum_{j=1}^{r_2} f_2(d_{2,j}) \sigma_{2,j},~
\sum_{j=1}^{r_2} \sigma_{2,j} = f_1(x_1),~
\sigma_{2,j} = f_1(x_1) u_{2,j}
\label{eq:App-02-Fun-Prod}
\end{equation}
where $u_{2,j} \in \{0,1\}$ and $\sum_{j=1}^{r_2} u_{2,j} = 1$. Although $f_1(x_1) u_{2,j}$ remains nonlinear because of decision variable $x_1$, (\ref{eq:App-02-Fun-Prod}) can be used to linearize a product with more than two nonlinear functions.
To see this, denote $z_1 = f_1(x_1)$ and $z_i = z_{i-1}f_i(x_i)$, $i = 2,\cdots,n$, where the integer variable $x_i \in \{d_{i,1},d_{i,2},\cdots,d_{i,r_i}\}$ and $f_i(x_i)>0$, $i=1,\cdots,n$. By using (\ref{eq:App-02-Fun-Prod}), $z_i$, $i=1,\cdots,n$ have the following expressions \cite{App-MILP-Fun-Prod}
\begin{equation}
\begin{aligned}
& z_1 = \sum_{j=1}^{r_1} f_1(d_{1,j})u_{1,j} \\
& z_2 = \sum_{j=1}^{r_2} f_2(d_{2,j})\sigma_{2,j},~
\sum_{j=1}^{r_2} \sigma_{2,j} = z_1,~ \cdots \\
& z_n = \sum_{j=1}^{r_n} f_n(d_{n,j})\sigma_{n,j},~
\sum_{j=1}^{r_n} \sigma_{n,j} = z_{n-1} \\
& \left. \begin{lgathered}
0 \le z_{i-1} - \sigma_{i,j} \le \bar z_{i-1}(1-u_{i,j}) \\
0 \le \sigma_{i,j} \le \bar z_{i-1} u_{i,j},~ u_{i,j} \in \{0,1\}
\end{lgathered} \right\},~
\begin{lgathered}
j = 1, \cdots, r_i, \\
i = 2, \cdots, n
\end{lgathered} \\
& x_i = \sum_{j=1}^{r_i} d_{i,j} u_{i,j}, ~ \sum_{j=1}^{r_i} u_{i,j} = 1,~ i = 1,2, \cdots,n
\end{aligned}
\label{eq:App-02-Funs-Prod}
\end{equation}
In (\ref{eq:App-02-Funs-Prod}), the number of binary variables is $\sum_{i=1}^n r_i$, which grows linearly in the dimension of $x$ and in the range of each $x_i$. To reduce the number of auxiliary binary variables $u_{i,j}$, the dichotomy procedure of Sect. \ref{App-B-Sect01-01} for $\mathbb{SOS}_2$ can be applied, as discussed in \cite{App-MILP-Fun-Prod}.
\subsection{Log-sum Functions}
\label{App-B-Sect02-06}
We consider the log-sum function $\log(x_1+x_2+\cdots+x_n)$, which arises in solving signomial geometric programming problems. The basic element in such a problem has the form
\begin{equation}
\label{eq:App-02-Signomial}
c_k \prod_{j=1}^l y_j^{a_{jk}}
\end{equation}
where $y_j > 0$, $c_k$ is a constant, and $a_{jk} \in \mathbb R$. Non-integer values of $a_{jk}$ make signomial geometric programming problems even harder than polynomial programs. Under a suitable variable transformation, the non-convexity of a signomial geometric program can be concentrated in some log-sum functions \cite{App-MILP-Signomial}. In view of the form of (\ref{eq:App-02-Signomial}), we discuss the log-sum function in this section.
We aim to represent the function $\log(x_1+x_2+\cdots+x_n)$ in terms of $\log x_1$, $\log x_2$, $\cdots$, $\log x_n$. Following the method in \cite{App-MILP-Signomial}, define a univariate function $F(X) = \log(1+e^X)$ and let $X_i=\log x_i$, ${\rm \Gamma}_i=\log(x_1+\cdots+x_i)$, $i = 1,\cdots,n$. The relation between $X_i$ and ${\rm \Gamma}_i$ can be revealed by observing that
\begin{equation*}
\begin{aligned}
F(X_{i+1}-{\rm \Gamma}_i) & = \log\left( 1+ e^{\log x_{i+1} - \log(x_1+\cdots+x_i)} \right) \\
& = \log \left( 1+ \dfrac{x_{i+1}}{x_1+\cdots+x_i}\right)
= {\rm \Gamma}_{i+1} - {\rm \Gamma}_i
\end{aligned}
\end{equation*}
By stipulating $W_i = X_{i+1}-{\rm \Gamma}_i $, we have the following recursive equations
\begin{equation}
\label{eq:App-02-Signomial-Recursive}
\begin{aligned}
{\rm \Gamma}_{i+1} & = {\rm \Gamma}_i + F(W_i),~ i = 1,\cdots,n-1 \\
W_i & = X_{i+1}-{\rm \Gamma}_i,~ i = 1,\cdots,n-1
\end{aligned}
\end{equation}
Function $F(W_i)$ can be linearized using the method in Sect. \ref{App-B-Sect01-01}. Based on this technique, an outer-approximation approach is proposed in \cite{App-MILP-Signomial} to solve signomial geometric programming problem via MILP.
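Before $F$ is replaced by its PWL approximation, the recursion itself is exact; a minimal sketch with arbitrarily chosen positive inputs confirms that it reproduces $\log \sum_i x_i$:

```python
import math

def log_sum(xs):
    """log(x1 + ... + xn) via Gamma_{i+1} = Gamma_i + F(X_{i+1} - Gamma_i)."""
    F = lambda W: math.log(1 + math.exp(W))   # F(X) = log(1 + e^X)
    X = [math.log(x) for x in xs]
    gamma = X[0]                              # Gamma_1 = log x_1
    for i in range(1, len(xs)):
        gamma = gamma + F(X[i] - gamma)       # Gamma_{i+1}
    return gamma

xs = [0.5, 2.0, 3.0, 1.2]
assert abs(log_sum(xs) - math.log(sum(xs))) < 1e-12
```

In the MILP version, each evaluation of $F(W_i)$ is replaced by its univariate PWL model, introducing a controlled approximation error at every step.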
\section{Other Frequently used Formulations}
\label{App-B-Sect03}
\subsection{Minimum Values}
\label{App-B-Sect03-01}
Let $x_1$, $x_2$, $\cdots$, $x_n$ be continuous variables with known lower bound $x^L_i$ and upper bound $x^U_i$, and $L = \min \{x^L_1, x^L_2, \cdots, x^L_n\}$, then their minimum $y = \min \{x_1, x_2, \cdots, x_n\}$ can be expressed via linear constraints
\begin{equation}
\label{eq:App-02-Minimum-PWL}
\begin{gathered}
x^L_i \le x_i \le x^U_i,~ \forall i \\
y \le x_i, \forall i \\
x_i-(x^U_i - L)(1-z_i) \le y, \forall i \\
z_i \in \mathbb {B},~ \forall i,~ \sum_i z_i = 1
\end{gathered}
\end{equation}
The second inequality guarantees $y \le \min \{x_1, x_2, \cdots, x_n\}$; in addition, if $z_i =1$, then $y \ge x_i$, hence $y$ achieves the minimal value of $\{x_i\}$. According to the definition of $L$, $x_i - y \le x^U_i - L, \forall i$ holds, thus the third inequality is inactive for the remaining $n-1$ variables with $z_i = 0$.
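For a fixed binary choice $z$, the constraints above bound $y$ from both sides, and choosing $z_i=1$ at the argmin pins $y$ to the minimum. A minimal sketch with arbitrarily chosen numbers:

```python
def feasible_y(xs, bounds, z):
    """Interval of y allowed by the min-value constraints for a fixed z."""
    L = min(lo for lo, hi in bounds)                  # L = min_i x^L_i
    hi_y = min(xs)                                    # y <= x_i for all i
    lo_y = max(x - (hi - L) * (1 - zi)                # x_i - (x^U_i - L)(1-z_i) <= y
               for x, (lo, hi), zi in zip(xs, bounds, z))
    return lo_y, hi_y

xs = [4.0, 2.0, 7.0]
bounds = [(0.0, 10.0), (1.0, 5.0), (3.0, 9.0)]
lo, hi = feasible_y(xs, bounds, [0, 1, 0])  # z picks the argmin (x_2 = 2)
assert lo == hi == min(xs)
lo, hi = feasible_y(xs, bounds, [1, 0, 0])  # picking a non-minimal x_i
assert lo > hi                              # ... leaves no feasible y
```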
\subsection{Maximum Values}
\label{App-B-Sect03-02}
Let $x_1$, $x_2$, $\cdots$, $x_n$ be continuous variables with known lower bound $x^L_i$ and upper bound $x^U_i$, and $U = \max \{x^U_1, x^U_2, \cdots, x^U_n\}$, then their maximum $y = \max \{x_1, x_2, \cdots, x_n\}$ can be expressed via linear constraints
\begin{equation}
\label{eq:App-02-Maximum-PWL}
\begin{gathered}
x^L_i \le x_i \le x^U_i,~ \forall i \\
y \ge x_i, \forall i \\
x_i + (U - x^L_i)(1-z_i) \ge y, \forall i \\
z_i \in \mathbb {B},~ \forall i,~ \sum_i z_i = 1
\end{gathered}
\end{equation}
The second inequality guarantees $y \ge \max \{x_1, x_2, \cdots, x_n\}$; in addition, if $z_i =1$, then $y \le x_i$, hence $y$ achieves the maximal value of $\{x_i\}$. According to the definition of $U$, $y - x_i \le U - x^L_i, \forall i$ holds, thus the third inequality is inactive for the remaining $n-1$ variables with $z_i = 0$.
\subsection{Absolute Values}
\label{App-B-Sect03-03}
Suppose $x \in \mathbb {R}$ and $|x| \le U$; then the absolute value function $y = |x|$, which is nonlinear, can be expressed via linear constraints as
\begin{equation}
\label{eq:App-02-Abs-PWL}
\begin{gathered}
0 \le y - x \le 2 U z,~ U (1-z) \ge x \\
0 \le y + x \le 2 U (1-z),~ -U z \le x \\
-U \le x \le U,~ z \in \mathbb{B}
\end{gathered}
\end{equation}
When $x > 0$, the first line yields $z = 0$ and $y = x$, while the second line is inactive. When $x < 0$, the second line yields $z = 1$ and $y = -x$, while the first line is inactive. When $x = 0$, either $z = 0$ or $z = 1$ gives $y = 0$. In conclusion, (\ref{eq:App-02-Abs-PWL}) has the same effect as $y=|x|$.
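The case analysis can be checked by brute force: for a fixed $x$, enumerating $(y, z)$ candidates over a grid shows that the only feasible value of $y$ is $|x|$. A minimal sketch with an arbitrarily chosen $U$ and grid points that contain $|x|$ exactly:

```python
def feasible_pairs(x, U, samples=201):
    """(y, z) pairs satisfying the MILP encoding of y = |x|, y on a grid."""
    out = []
    for z in (0, 1):
        for s in range(samples):
            y = 2 * U * s / (samples - 1)   # grid over [0, 2U]
            ok = (0 <= y - x <= 2 * U * z and U * (1 - z) >= x
                  and 0 <= y + x <= 2 * U * (1 - z) and -U * z <= x)
            if ok:
                out.append((y, z))
    return out

U = 2.0
for x in (-1.5, 0.0, 1.5):
    assert {y for y, z in feasible_pairs(x, U)} == {abs(x)}
```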
\subsection{Linear Fractional of Binary Variables}
\label{App-B-Sect03-04}
A linear fractional of binary variables takes the form of
\begin{equation}
\label{eq:App-02-BLF-1}
\dfrac{a_0+\sum_{i=1}^n a_i x_i}{b_0+\sum_{i=1}^n b_i x_i}
\end{equation}
We assume $b_0+\sum_{i=1}^n b_i x_i \ne 0$ for all $x \in \{0,1\}^n$. Define a new continuous variable
\begin{equation}
\label{eq:App-02-BLF-2}
y = \dfrac{1}{b_0+\sum_{i=1}^n b_i x_i}
\end{equation}
The lower bound and upper bound of $y$ can be easily computed. Then the linear fractional shown in (\ref{eq:App-02-BLF-1}) can be replaced with a linear expression
\begin{equation}
\label{eq:App-02-BLF-3}
a_0 y + \sum_{i=1}^n a_i z_i
\end{equation}
with constraints
\begin{equation}
\label{eq:App-02-BLF-4}
b_0 y + \sum_{i=1}^n b_i z_i = 1
\end{equation}
\begin{equation}
\label{eq:App-02-BLF-5}
z_i = x_i y,~ \forall i
\end{equation}
where (\ref{eq:App-02-BLF-5}) describes a product of a binary variable and a continuous variable, which can be linearized through equation (\ref{eq:App-02-xy-BC}).
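The substitution is an algebraic identity: with $y$ as defined, $a_0 y + \sum_i a_i z_i$ equals the original fraction and (\ref{eq:App-02-BLF-4}) holds automatically. A minimal sketch with arbitrarily chosen coefficients checks this over all binary $x$:

```python
from itertools import product

a0, b0 = 0.5, 1.0
a = [1.0, 2.0, -1.0]
b = [1.0, 0.5, 2.0]    # chosen so the denominator is nonzero for all binary x

def fractional(x):
    num = a0 + sum(ai * xi for ai, xi in zip(a, x))
    den = b0 + sum(bi * xi for bi, xi in zip(b, x))
    return num / den

def linearized(x):
    y = 1.0 / (b0 + sum(bi * xi for bi, xi in zip(b, x)))
    z = [xi * y for xi in x]                       # z_i = x_i * y
    # constraint (App-02-BLF-4) holds by construction
    assert abs(b0 * y + sum(bi * zi for bi, zi in zip(b, z)) - 1.0) < 1e-12
    return a0 * y + sum(ai * zi for ai, zi in zip(a, z))

for x in product((0, 1), repeat=3):
    assert abs(fractional(list(x)) - linearized(list(x))) < 1e-12
```

In the MILP model, $y$ is a decision variable and each $z_i = x_i y$ is linearized via (\ref{eq:App-02-xy-BC}); the sketch only verifies the algebra behind the reformulation.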
\subsection{Disjunctive Inequalities}
\label{App-B-Sect03-05}
Let $\{P^i\}$, $i = 1, 2, \cdots, m$ be a finite set of bounded polyhedra. Disjunctive inequalities usually arise when the solution space is characterized by the union $S = \cup_{i=1}^m P^i$ of these polyhedra. Unlike the intersection operator, which preserves convexity, disjunction forms a non-convex region. It can be represented by an MILP model using binary variables. We introduce three representative methods.
{\noindent \bf 1. Big-M formulation}
The hyperplane representations of polyhedra are given by $P^i = \{ x \in \mathbb{R}^n | A^i x \le b^i \}, i = 1, 2, \cdots, m$. By introducing binary variables $z_i, i = 1, 2, \cdots, m $, an MILP formulation for $S$ can be written as
\begin{equation}
\label{eq:App-02-Disj-Big-M}
\begin{gathered}
A^i x \le b^i + M^i(1-z_i) ,~ \forall i \\
z_i \in \mathbb{B},~\forall i,~ \sum^m_{i=1} z_i = 1
\end{gathered}
\end{equation}
where $M^i$ is a vector such that when $z_i = 0$, $A^i x \le b^i + M^i$ holds for every feasible $x$. To show the impact of the value of $M$ on the tightness of formulation (\ref{eq:App-02-Disj-Big-M}) when the integrality constraints $z_i \in \mathbb{B}$, $\forall i$ are relaxed to $z_i \in [0,1]$, $\forall i$, we construct 4 polyhedra in $\mathbb {R}^2$, depicted in Fig. \ref{fig:App-02-04}. The continuous relaxations of (\ref{eq:App-02-Disj-Big-M}) for different values of $M$ are illustrated in the same graph, showing that the smaller the value of $M$, the tighter the relaxation of (\ref{eq:App-02-Disj-Big-M}).
\begin{figure}
\caption{Big-M formulations and their relaxed regions.}
\label{fig:App-02-04}
\end{figure}
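To make the selection logic of (\ref{eq:App-02-Disj-Big-M}) concrete, here is a minimal sketch (toy one-dimensional polyhedra and hand-picked, sufficiently large $M^i$, all hypothetical) checking that a point is representable exactly when it lies in the union:

```python
import numpy as np

# Two toy polyhedra on the real line: P1 = [0,1], P2 = [3,4],
# each written as A x <= b^i with A = [[1],[-1]].
A = np.array([[1.0], [-1.0]])
b1 = np.array([1.0, 0.0])      # x <= 1, -x <= 0
b2 = np.array([4.0, -3.0])     # x <= 4, -x <= -3
M1 = np.array([3.0, 0.0])      # big enough so points of P2 satisfy A x <= b1 + M1
M2 = np.array([0.0, 3.0])      # big enough so points of P1 satisfy A x <= b2 + M2

def in_union_bigM(x):
    """x is feasible iff some binary choice z selects a satisfied polyhedron."""
    xv = np.array([x])
    feas1 = np.all(A @ xv <= b1) and np.all(A @ xv <= b2 + M2)  # z = (1,0)
    feas2 = np.all(A @ xv <= b2) and np.all(A @ xv <= b1 + M1)  # z = (0,1)
    return bool(feas1 or feas2)

assert in_union_bigM(0.5) and in_union_bigM(3.5)   # inside P1 and P2
assert not in_union_bigM(2.0)                      # in the gap between them
```

The relaxed constraints $A x \le b^i + M^i(1-z_i)$ are active only for the selected polyhedron, which is exactly the switching behavior described above.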
From a computational perspective, the elements of $M$ should be as small as possible: a huge constant chosen without any insight into the problem data leads to a badly conditioned model. Furthermore, the continuous relaxation of the MILP model will be very weak, resulting in poor objective value bounds and excessive branch-and-bound computation. The goal of big-M parameter selection is to create a model whose continuous relaxation is close to the convex hull of the original constraint, i.e., the smallest convex set that contains the original feasible region. A possible selection of the big-M parameter is
\begin{equation}
\label{eq:App-02-Value-Big-M}
\begin{gathered}
M^i_l = \left(\max_{j \ne i} M^{ij}_l \right) - b^i_l \\
M^{ij}_l = \max_x \left\{ [A^i x]_l: A^j x \le b^j \right\}
\end{gathered}
\end{equation}
where subscript $l$ stands for the $l$-th element of a vector or the $l$-th row of a matrix. As the polyhedra $P^i$ are bounded, all bound parameters in (\ref{eq:App-02-Value-Big-M}) are well defined.
However, even the tightest big-M parameter will yield a relaxed solution space that is generally larger than the convex hull of the original feasible set. In many applications, good variable bounds can be estimated from heuristic methods that exploit specific problem data and structure.
{\noindent \bf 2. Convex hull formulation}
Let $\mbox{vert}(P^i) = \{ v^i_l \}, l = 1,2, \cdots,L^i$ denote the set of vertices of polyhedron $P^i$, $i = 1,2,\cdots,m$, where $L^i$ is the number of vertices of $P^i$. The set of extreme rays is empty since $P^i$ is bounded. By introducing binary variables $z_i, i = 1, 2, \cdots, m$, an MILP formulation for $S$ is given by
\begin{equation}
\label{eq:App-02-Disj-Conv}
\begin{gathered}
\sum_{i=1}^m \sum_{l=1}^{L^i} \lambda^i_l v^i_l = x \\
\sum_{l=1}^{L^i} \lambda^i_l = z_i,~ \forall i \\
\lambda^i_l \ge 0,~ \forall i,~ \forall l \\
z_i \in \mathbb{B},~\forall i,~ \sum^m_{i=1} z_i = 1
\end{gathered}
\end{equation}
Formulation (\ref{eq:App-02-Disj-Conv}) does not rely on manually supplied parameters. Instead, it requires enumerating all extreme points of the polyhedra $P^i$. Although the vertex representation and the hyperplane representation of a polyhedron are interchangeable, vertex enumeration is time consuming for high-dimensional polyhedra, so (\ref{eq:App-02-Disj-Conv}) is useful mainly when the $P^i$ are originally represented by extreme points.
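A small feasibility check (a sketch with hypothetical one-dimensional polytopes) illustrates the convex hull formulation: for each choice of $z$, the system in $\lambda$ is a tiny LP whose feasibility decides membership.

```python
import numpy as np
from scipy.optimize import linprog

# vertices of P1 = [0,1] and P2 = [3,4] on the real line (toy data)
V = np.array([0.0, 1.0, 3.0, 4.0])
groups = [np.array([1., 1., 0., 0.]), np.array([0., 0., 1., 1.])]

def in_union(x):
    """x lies in P1 ∪ P2 iff the convex-hull system is feasible for some z."""
    for z in ((1, 0), (0, 1)):
        A_eq = np.vstack([V, groups[0], groups[1]])  # Σ λ_l v_l = x, Σ λ = z_i
        b_eq = np.array([x, z[0], z[1]])
        res = linprog(np.zeros(4), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * 4)        # λ >= 0, feasibility LP
        if res.status == 0:
            return True
    return False

assert in_union(0.25) and in_union(3.5) and not in_union(2.0)
```

Only the $\lambda^i_l$ of the selected polyhedron (the one with $z_i = 1$) may be nonzero, so $x$ is forced into exactly one $P^i$.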
{\noindent \bf 3. Lifted formulation}
A smarter formulation exploits the fact that the bounded polyhedra $P^i$ share the same recession cone $\{ 0 \}$, i.e., the system $A^i x \le 0$ has no non-zero solution. Otherwise, suppose $A^i x^* \le 0$ with $x^* \ne 0$, and let $y \in P^i$; then $y + \lambda x^* \in P^i$ for all $\lambda > 0$, because $A^i(y + \lambda x^*) = A^i y + \lambda A^i x^* \le b^i$. As a result, $P^i$ would be unbounded. Bearing this in mind, an MILP formulation for $S$ is given by
\begin{equation}
\label{eq:App-02-Disj-Lift}
\begin{gathered}
A^i x^i \le b^i z^i,~ \forall i \\
\sum_{i=1}^m x^i = x \\
z_i \in \mathbb{B},~\forall i \\
\sum^m_{i=1} z_i = 1
\end{gathered}
\end{equation}
Formulation (\ref{eq:App-02-Disj-Lift}) is also parameter-free. Since it incorporates an additional continuous variable for each polytope, we call it a lifted formulation. It is easy to see that the feasible region of $x$ is the union of the $P^i$: if $z_i = 0$, then $x^i = 0$ by the recession cone argument above; otherwise, if $z_i = 1$, then $x = x^i \in P^i$.
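The forcing effect of $A^i x^i \le b^i z_i$ can be checked numerically with the same toy intervals used before (hypothetical data; a sketch, not a general implementation):

```python
import numpy as np

# same toy intervals: P1 = [0,1], P2 = [3,4] as A x <= b^i with A = [[1],[-1]]
A = np.array([[1.0], [-1.0]])
bs = [np.array([1.0, 0.0]), np.array([4.0, -3.0])]

def lifted_feasible(x, x_parts, z):
    """Check A^i x^i <= b^i z_i for all i, sum x^i = x, sum z_i = 1."""
    ok = all(np.all(A @ np.array([xi]) <= b * zi)
             for xi, b, zi in zip(x_parts, bs, z))
    return ok and abs(sum(x_parts) - x) < 1e-12 and sum(z) == 1

# x = 3.5 in P2: pick z = (0,1); the inactive copy x^1 is forced to 0
assert lifted_feasible(3.5, [0.0, 3.5], (0, 1))
# a nonzero x^1 with z_1 = 0 violates A x^1 <= 0 (recession cone is {0})
assert not lifted_feasible(3.5, [0.5, 3.0], (0, 1))
```

The second assertion shows why boundedness matters: $z_i = 0$ turns the block into $A^i x^i \le 0$, whose only solution is $x^i = 0$.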
{\noindent \bf 4. Complementarity and slackness condition}
The complementarity and slackness condition naturally arises in the KKT optimality conditions of mathematical programming problems, equilibrium problems, hierarchical optimization problems, and so on. It is a fundamental way to characterize the logical conditions that a rational decision-making process must obey. Here we focus on the linear case and its equivalent MILP formulation, because nonlinear cases give rise to MINLPs, which are challenging to solve and not superior from the computational point of view.
A linear complementarity and slackness condition can be written as
\begin{equation}
\label{eq:App-B-LCP-MILP-1}
0 \le y \bot Ax - b \ge 0
\end{equation}
where vectors $x$ and $y$ are decision variables; $A$ and $b$ are constant coefficients with compatible dimensions; notation $\bot$ stands for the orthogonality of two vectors. In fact, (\ref{eq:App-B-LCP-MILP-1}) encompasses the following nonlinear constraints in traditional form
\begin{equation}
\label{eq:App-B-LCP-MILP-2}
y \ge 0,~ Ax - b \ge 0,~ y^T(Ax-b) = 0
\end{equation}
In view of the non-negativeness of $y$ and $Ax-b$, the orthogonality condition is equivalent to the element-wise logic form $y_i = 0$ or $a_i x-b_i=0$, $\forall i$, where $a_i$ is the $i$-th row of $A$; in other words, at most one of $y_i$ and $a_i x-b_i$ can take a strictly positive value, implying that the feasible region is either the slice $y_i = 0$ or the slice $a_i x-b_i=0$. Therefore, (\ref{eq:App-B-LCP-MILP-1}) can be regarded as a special case of the disjunctive constraints.
In practical applications, (\ref{eq:App-B-LCP-MILP-1}) usually serves as a constraint in an optimization problem. For example, in sequential decision making or a linear bilevel program, the KKT condition of the lower-level LP appears in the form of (\ref{eq:App-B-LCP-MILP-1}), which is a constraint of the upper-level optimization problem. The main computational challenge arises from the orthogonality condition, which is nonlinear and non-convex, and violates the linear independence constraint qualification; see Appendix \ref{App-D-Sect03} for an example. Nonetheless, in view of the switching logic between $y_i$ and $a_i x-b_i$, we can introduce a binary variable $z_i$ to select which slice is active \cite{App-MILP-Fortuny-Amat}
\begin{equation}
\label{eq:App-B-LCP-MILP-3}
\begin{gathered}
0 \le a_i x-b_i \le M z_i,~ \forall i \\
0 \le y_i \le M(1-z_i),~ \forall i
\end{gathered}
\end{equation}
where $M$ is a large enough constant. According to (\ref{eq:App-B-LCP-MILP-3}), if $z_i=0$, then $(Ax - b)_i=0$ must hold, and the second inequality is redundant; otherwise, if $z_i=1$, then we have $y_i=0$, and the first inequality becomes redundant. (\ref{eq:App-B-LCP-MILP-3}) can be written in a compact form as
\begin{equation}
\label{eq:App-B-LCP-MILP-4}
\begin{gathered}
0 \le A x - b \le M z \\
0 \le y \le M(1-z)
\end{gathered}
\end{equation}
It is worth mentioning that the big-M parameter $M$ has a notable impact on the feasible region of the relaxed problem as well as the computational efficiency of the MILP model, as illustrated in Fig. \ref{fig:App-02-04}. One should make sure that (\ref{eq:App-B-LCP-MILP-4}) would not remove the optimal solution from the feasible set. If both $x$ and $y$ have clear bounds, then $M$ can be easily estimated; otherwise, we may prudently employ a large $M$, at the cost of sacrificing the computational efficiency.
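A direct check of the big-M encoding (\ref{eq:App-B-LCP-MILP-4}) on a toy complementarity system (hypothetical $A$, $b$, and $M$; a sketch only) confirms that complementary points are feasible for some $z$ and non-complementary points for none:

```python
import numpy as np

# Toy complementarity condition in R^2: 0 <= y ⊥ A x - b >= 0.
A = np.eye(2); b = np.array([1.0, 2.0]); M = 10.0

def satisfies_bigM(x, y, z):
    s = A @ x - b
    return bool(np.all(s >= 0) and np.all(s <= M * z) and
                np.all(y >= 0) and np.all(y <= M * (1 - z)))

# z_i = 1 forces y_i = 0;  z_i = 0 forces (Ax - b)_i = 0
assert satisfies_bigM(np.array([3.0, 2.0]), np.array([0.0, 5.0]),
                      np.array([1, 0]))
# a pair with both y_i > 0 and (Ax - b)_i > 0 is cut off for every z
x, y = np.array([3.0, 3.0]), np.array([1.0, 1.0])
assert not any(satisfies_bigM(x, y, np.array([z1, z2]))
               for z1 in (0, 1) for z2 in (0, 1))
```

Enumerating all $z \in \{0,1\}^2$ in the second assertion mimics what a branch-and-bound solver does implicitly.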
Furthermore, if we are aiming to solve (\ref{eq:App-B-LCP-MILP-1}) without an objective function and other constraints, such a problem is called a linear complementarity problem (under some proper transformation), for which we can build parameter-free MILP models. More details can be found in Appendix \ref{App-D-Sect04-02}.
\subsection{Logical Conditions}
Logical conditions are associated with indicator constraints expressing statements like ``if event A then event B''. An event can be described in many ways. For example, a binary variable $a=1$ can stand for event A happening, and $a=0$ otherwise; a point $x$ belonging to a set $X$ can denote a system under secure operating conditions, and $x \notin X$ otherwise. In view of this, the disjunctive constraints discussed above are a special case of logical conditions. In this section, we explain how some common logical conditions can be expressed via linear constraints. Let A, B, C, $\cdots$ associated with binary variables $a$, $b$, $c$, $\cdots$ represent events. Main results for linearizing typical logical conditions are summarized in Table \ref{tab:App-B-Logic-MILP} \cite{App-MILP-Logic-Cons}.
\begin{table}[htp]
\small
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{1em}
\caption{Linear form of some typical logic conditions}
\centering
\begin{tabular}{ll}
\hline
If A then B & $b \ge a$ \\
Not B & $1-b$ \\
If A then not B & $a+b \le 1$ \\
If not A then B & $a+b \ge 1$ \\
A if and only if B & $a=b$ \\
If A then B and C & $b+c \ge 2a$ \\
If A then B or C & $b+c \ge a$ \\
If B or C then A & $2a \ge b+c$ \\
If B and C then A & $a \ge b+c-1$ \\
If M or more of N events then A & $(N-M+1)a \ge b+c+\cdots-M+1$ \\
\hline
\end{tabular}
\label{tab:App-B-Logic-MILP}
\end{table}
Logical AND is formulated as a function of two binary inputs. Specifically, $c = a$ AND $b$ can be expressed as $c=\min\{a,b\}$ or $c=ab$. The former can be linearized via (\ref{eq:App-02-Minimum-PWL}) and the latter through (\ref{eq:App-02-xy-BB}), and both of them render
\begin{equation}
\label{eq:App-B-Logic-AND-2}
c \le a,~ c \le b,~ c \ge a+b-1,~ c \ge 0
\end{equation}
For the case with multiple binary inputs, i.e., $c=\min\{c_1,\cdots,c_n\}$, or $c = \prod_{i=1}^n c_i$, (\ref{eq:App-B-Logic-AND-2}) can be generalized as
\begin{equation}
\label{eq:App-B-Logic-AND-N}
c \le c_i,~ \forall i,~ c \ge \sum\nolimits_{i=1}^n c_i -n +1,~ c \ge 0
\end{equation}
Logical OR is formulated as a function of two binary inputs, i.e., $c=\max \{a,b\}$, which can be linearized via (\ref{eq:App-02-Maximum-PWL}), yielding
\begin{equation}
\label{eq:App-B-Logic-OR-2}
c \ge a,~ c \ge b,~ c \le a+b,~ c \le 1
\end{equation}
For the case with multiple binary inputs, i.e., $c=\max \{c_1,\cdots,c_n\}$, (\ref{eq:App-B-Logic-OR-2}) can be generalized as
\begin{equation}
\label{eq:App-B-Logic-OR-N}
c \ge c_i,~ \forall i,~ c \le \sum\nolimits_{i=1}^n c_i,~ c \le 1
\end{equation}
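The AND and OR linearizations above can be verified exhaustively: for binary inputs, the linear systems admit exactly one feasible binary $c$, namely the minimum (resp. maximum) of the inputs. A brute-force check (pure Python, $n=3$):

```python
from itertools import product

def and_feasible(c, cs):
    # c <= c_i for all i,  c >= sum(c_i) - n + 1,  c >= 0
    n = len(cs)
    return all(c <= ci for ci in cs) and c >= sum(cs) - n + 1 and c >= 0

def or_feasible(c, cs):
    # c >= c_i for all i,  c <= sum(c_i),  c <= 1
    return all(c >= ci for ci in cs) and c <= sum(cs) and c <= 1

for cs in product((0, 1), repeat=3):
    # the constraints pin c to exactly min / max of the binary inputs
    assert [c for c in (0, 1) if and_feasible(c, cs)] == [min(cs)]
    assert [c for c in (0, 1) if or_feasible(c, cs)] == [max(cs)]
print("AND/OR linearizations match truth tables for n = 3")
```

The same enumeration extends to any $n$, though the $2^n$ loop is of course only for verification, not for use inside a model.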
\section{Further Reading}
\label{App-B-Sect04}
Over a half-century of research and development, MILP has become an indispensable and unprecedentedly powerful modeling tool in mathematics and engineering, thanks to the advent of efficient solvers that encapsulate many state-of-the-art techniques \cite{App-MILP-History}. This chapter aims to provide an overview of formulation recipes that transform complicated conditions into MILPs, so as to take full advantage of off-the-shelf solvers. The paradigm is able to deal with a fairly broad class of hard optimization problems.
Readers who are interested in the strength of MILP models may find in-depth discussions in \cite{App-MILP-Strength} and references therein. For those interested in the PWL approximation of nonlinear functions, we refer to \cite{App-MILP-PWL-Function-1,App-MILP-PWL-Function-2,App-MILP-PWL-Function-3} and references therein for various models and methods. The most promising one may be the convex combination model with a logarithmic number of binary variables, whose implementation has been thoroughly discussed in \cite{App-MILP-SOS2-LogCC-1,App-MILP-SOS2-LogCC-2,App-MILP-SOS2-LogCC-3}. Those who are interested in the polyhedral study of single-term bilinear sets and MILP-based methods for bilinear programs may find extensive information in \cite{App-MILP-MIBLP-1,App-MILP-MIBLP-2} and references therein.
For those who need more knowledge about mathematical programs with disjunctive constraints, in which constraint activity is controlled by logical conditions, we recommend \cite{App-MILP-Disj-Review}; specifically, the choice of the big-M parameter is discussed in \cite{App-MILP-Disj-Big-M}. For those who wish to learn more about integer programming techniques, we refer to \cite{App-MILP-Union} for the formulation of unions of polyhedra, \cite{App-MILP-Representability} for the representability of MILP, and \cite{App-MILP-MICQP,App-MILP-Duality} for the more general mixed-integer conic programming and its duality theory. To the best of our knowledge, dissertation \cite{App-MILP-Dissertation-MIT} offers the most comprehensive and in-depth study of MILP approximations of non-convex optimization problems. State-of-the-art MILP formulations which balance problem size, strength, and branching behavior are developed and compared, including those mentioned above. The discussions in \cite{App-MILP-Dissertation-MIT} offer insights on designing efficient MILP models that perform extremely well in practice, despite their theoretically non-polynomial complexity in the worst case.
\input{ap02ref}
\motto{To be uncertain is to be uncomfortable,
but to be certain is to be ridiculous.}
\chapter{Basics of Robust Optimization}
\label{App-C}
Real-world decision-making models often involve unknown data. Data uncertainty may stem from inexact measurements or forecast errors. For example, in power system operation, the wind power generation and system loads are barely known exactly at the time when the generation schedule should be made; in inventory management, market price and demand volatility is the main source of financial risks. In fact, optimal solutions to mathematical programming problems can be highly sensitive to parameter perturbations \cite{RO-Detail-1}. The optimal solution to the nominal problem may be highly suboptimal or even infeasible in reality due to parameter inaccuracy. Consequently, there is a great need for a systematic methodology that is capable of quantifying the impact of data inexactness on the solution quality, and is able to produce robust solutions that are insensitive to data uncertainty.
Optimization under uncertainty has been a focus of the operational research community for a long time. Two approaches are prevalent to deal with uncertain data in optimization, namely stochastic optimization (SO) and robust optimization (RO). They differ in the ways of modeling uncertainty. The former assumes that the true probability distribution of uncertain data is known or can be estimated from available information, and minimizes the expected cost in its objective function. SO provides strategies that are optimal in the sense of statistics. However, the probability distribution itself may be inexact owing to the lack of enough data, and the performance of the optimal solution could be sensitive to the probability distribution chosen in the SO model. The latter assumes uncertain data reside in a pre-defined uncertainty set, and minimizes the cost in the worst-case scenario in its objective function. Constraint violation is not allowed for any data realization in the uncertainty set. RO is popular because it relies on simple data and is distribution-free. From the computational perspective, it is equivalent to convex optimization problems for a variety of uncertainty sets and problem types; for the intractable cases, it can be solved via systematic iteration algorithms. For more technical details about RO, we refer to \cite{RO-Detail-1, RO-Detail-2, RO-Guide,RO-Convex}, survey articles \cite{RO-Survey,RO-Survey-2018}, and many references therein. Recently, distributionally robust optimization (DRO), an emerging methodology that inherits the advantages of SO and RO, has attracted wide attention. In DRO, uncertain data are described by probability distribution functions which are not known exactly and are restricted to a functional ambiguity set constructed from available information and structural properties.
The expected cost associated with the worst-case distribution is minimized, and the probability of constraint violations can be controlled via robust chance constraints. In many cases, the DRO can be reformulated as a convex optimization problem, or solved iteratively via convex optimization. RO and DRO approaches are young and active research fields, and the challenge is to explore tractable reformulations with various kinds of uncertainties. SO is a relatively mature technique, and the current research is focusing on probabilistic modeling of uncertainty, chance constrained programming, multi-stage SO such as stochastic dual dynamic programming, as well as more efficient computational methods.
There are several ways to categorize robust optimization methods. According to how uncertainty is dealt with, they can be classified into static (single-stage) RO and dynamic (multi-stage) RO. According to how uncertainty is modeled, they can be divided into RO and DRO. In the latter category, the ambiguity set for the probability distribution can be further classified into moment-based and divergence-based ones. We will shed light on each of them in this chapter. Specifically, RO will be discussed in Sect. \ref{App-C-Sect01} and Sect. \ref{App-C-Sect02}, moment-based DRO will be presented in Sect. \ref{App-C-Sect03}, and divergence-based DRO, also called robust SO, will be illuminated in Sect. \ref{App-C-Sect04}. In the operations research community, DRO and robust SO refer to the same thing: optimization under distributional uncertainty, and the terms can be used interchangeably, although DRO is preferred by the majority of researchers. In this book, we intentionally distinguish them because the moment ambiguity set can be set up with little information and is thus more akin to RO, whereas the divergence-based set relies on an empirical distribution (which may be inexact) and so is more similar to SO. In fact, the gap between SO and RO has been significantly narrowed by recent research progress in the sense of data-driven optimization.
\section{Static Robust Optimization}
\label{App-C-Sect01}
For the purpose of clarity, we begin explaining the paradigm of static RO with LPs, the best known and most frequently used class of mathematical programming problems in engineering applications. It is relatively easy to derive tractable robust counterparts with various uncertainty sets. Nevertheless, most results can be readily generalized to robust conic programs. The general form of an LP with uncertain parameters can be written as follows:
\begin{equation}
\label{eq:App-03-SRO-ULP}
\min_x \left\{ c^T x ~\middle|~ Ax \le b \right\}:(A,b,c) \in W
\end{equation}
where $x$ is the decision variable, $A$, $b$, $c$ are coefficient matrices with compatible dimensions, and $W$ denotes the set of all possible data realizations constructed from available information or historical data, or merely a rough estimation.
Without loss of generality, we can assume that the objective function and the constraint right-hand side in (\ref{eq:App-03-SRO-ULP}) are certain, and uncertainty only exists in coefficient matrix $A$. To see this, it is not difficult to observe that problem (\ref{eq:App-03-SRO-ULP}) can be written as an epigraph form
\begin{equation*}
\min_{t,x,y} \{t~|~c^T x - t \le 0,~ Ax-by \le 0,~ y=1 \}: (A,b,c) \in W
\end{equation*}
By introducing additional scalar variables $t$ and $y$, coefficients appearing in the objective function and constraint right-hand side are constants.
With this transformation, it will be more convenient to define the feasible solution and the optimal solution to (\ref{eq:App-03-SRO-ULP}). Hereinafter, we neglect the uncertainty in cost coefficient vector $c$ and constraint right-hand vector $b$ without particular mention, and consider problem
\begin{equation}
\label{eq:App-03-SRO-LP-UA}
\min_x \left\{ c^T x ~\middle|~ Ax \le b \right\}: A \in W
\end{equation}
Next we present solution concepts of static RO under uncertain data.
\subsection{Basic Assumptions and Formulations}
\label{App-C-Sect01-01}
Basic assumptions and definitions in static RO \cite{RO-Detail-1} are summarized as follows.
\begin{assumption}
\label{ap:App-03-SRO-1}
Vector $x$ represents ``here-and-now'' decisions: they should be determined without knowing exact values of uncertain parameters.
\end{assumption}
\begin{assumption}
\label{ap:App-03-SRO-2}
Once the decisions are made, constraints must be feasible when the actual data is within the uncertainty set $W$, and may be either feasible or not when the actual data steps outside the uncertainty set $W$.
\end{assumption}
These assumptions bring about the definition for a feasible solution of (\ref{eq:App-03-SRO-LP-UA}).
\begin{definition}
\label{df:App-03-SRO-Feasibility}
A vector $x$ is called a robust feasible solution to (\ref{eq:App-03-SRO-LP-UA}) if the following condition holds:
\begin{equation}
\label{eq:App-03-SRO-Robust-Fea}
A x \le b,~ \forall A \in W
\end{equation}
\end{definition}
To prescribe an optimal solution, the worst-case criterion is widely accepted in RO studies, leading to the following definition:
\begin{definition}
\label{df:App-03-SRO-Optimality}
The robust optimal value of (\ref{eq:App-03-SRO-LP-UA}) is the minimum value of the objective function over all possible $x$ that satisfies (\ref{eq:App-03-SRO-Robust-Fea}).
\end{definition}
After we have agreed on the meanings of feasibility and optimality of (\ref{eq:App-03-SRO-LP-UA}), we can seek the optimal solution among all robust feasible solutions to the problem. Now, the robust counterpart (RC) of the uncertain LP (\ref{eq:App-03-SRO-LP-UA}) can be described as:
\begin{equation}
\label{eq:App-03-SRO-RC}
\begin{aligned}
\min_x ~~ & c^T x \\
\mbox{s.t.} ~~ & a^T_i x \le b_i,~ \forall i,~ \forall A \in W
\end{aligned}
\end{equation}
where $a^T_i$ is the $i$-th row of matrix $A$, and $b_i$ is the $i$-th element of vector $b$. We have two observations on the formulation of robust constraints in (\ref{eq:App-03-SRO-RC}).
\begin{proposition}
\label{pr:App-03-Invariance-1}
Robust feasible solutions of (\ref{eq:App-03-SRO-RC}) remain the same if we replace $W$ with the Cartesian product $\hat W = W_1 \times \cdots \times W_n$, where $W_i = \{a_i | \exists A \in W \}$ is the projection of $W$ on the coefficient space of $i$-th row of $A$.
\end{proposition}
This is called the constraint-wise property in static RO \cite{RO-Detail-1}. The reason is
\begin{equation}
a^T_i x \le b_i,~ \forall A \in W~
\Leftrightarrow~ \max_{A \in W} a^T_i x \le b_i
\Leftrightarrow~ \max_{a_i \in W_i} a^T_i x \le b_i
\notag
\end{equation}
As a result, problem (\ref{eq:App-03-SRO-RC}) comes down to
\begin{equation}
\label{eq:App-03-SRO-RC-Hull}
\begin{aligned}
\min_x ~~ & c^T x \\
\mbox{s.t.} ~~ & a^T_i x \le b_i,\forall a_i \in W_i,~ \forall i
\end{aligned}
\end{equation}
Proposition \ref{pr:App-03-Invariance-1} seems rather counter-intuitive. One may perceive that (\ref{eq:App-03-SRO-RC}) will be less conservative with uncertainty set $W$ since it is a subset of $\hat W$. In fact, later we will see that this intuition is true for adjustable robustness.
\begin{proposition}
\label{pr:App-03-Invariance-2}
Robust feasible solutions of (\ref{eq:App-03-SRO-RC-Hull}) remain the same if we replace $W_i$ with its convex hull ${\rm conv}(W_i)$.
\end{proposition}
To see this, let vectors $a^j_i$, $j=1,2,\cdots$ be the extreme points of $W_i$; then any point $\bar a_i \in$ conv$(W_i)$ can be expressed as $\bar a_i = \sum_j \lambda_j a^j_i$, where $\lambda_j \ge 0$, $\sum_j \lambda_j = 1$ are weight coefficients. If $x$ is feasible for all extreme points $a^j_i$, i.e., $(a^j_i)^T x \le b_i$, $\forall j$, then
\begin{equation}
\bar a^T_i x = \sum_j \lambda_j (a^j_i)^T x \le \sum_j \lambda_j b_i = b_i \notag
\end{equation}
which indicates that the constraint remains intact for all uncertain parameters residing in conv$(W_i)$.
Combining Propositions \ref{pr:App-03-Invariance-1} and \ref{pr:App-03-Invariance-2}, we can conclude that the robust counterpart of an uncertain LP with a certain objective remains intact even if sets $W_i$ of uncertain data are extended to their closed convex hulls, and $W$ to the Cartesian product of the resulting sets. In other words, we can make a further assumption on the uncertainty set without loss of generality.
\begin{assumption}
\label{ap:App-03-SRO-3}
The uncertainty set $W$ is the Cartesian product of closed and convex sets.
\end{assumption}
\subsection{Tractable Reformulations}
\label{App-C-Sect01-02}
The constraint-wise property enables us to analyze the robustness of each constraint $a^T_i x \le b_i$, $\forall a_i \in W_i$ separately. Without particular mention, we will omit the subscript $i$ for brevity. To facilitate discussion, it is convenient to parameterize the uncertain vector as $a = \bar a + P \zeta$, where $\bar a$ is the nominal value of $a$, $P$ is a constant matrix, $\zeta$ is a new variable that is uncertain. This section will focus on how to derive tractable reformulation for robust constraints in the form of
\begin{equation}
\label{eq:App-03-SRO-RC-Single}
(\bar a + P \zeta)^T x \le b,~ \forall \zeta \in Z
\end{equation}
where $Z$ is the uncertainty set of the variable $\zeta$. For the same reasons, we can assume that $Z$ is closed and convex. A ``computationally tractable'' problem means that there are known solution algorithms which can solve the problem in polynomial running time in its input size even in the worst case. It has been shown in \cite{RO-Detail-1} that problem (\ref{eq:App-03-SRO-RC-Hull}) is generally intractable even if each $W_i$ is closed and convex. Nevertheless, tractability can be preserved for some special classes of uncertainty sets. Some well-known results are summarized in the following.
Condition (\ref{eq:App-03-SRO-RC-Single}) contains an infinite number of constraints due to the enumeration over set $Z$. Later we will see that for some particular uncertainty sets, the $\forall$ quantifier as well as the uncertain parameter $\zeta$ can be eliminated by using duality theory, and the resulting constraint in variable $x$ is still convex.
{\noindent \bf 1. Polyhedral uncertainty set}
We start with a commonly used uncertainty set: a polyhedron
\begin{equation}
\label{eq:App-03-SRO-US-Polyhedron}
Z = \{ \zeta ~|~ D \zeta + q \ge 0\}
\end{equation}
where $D$ and $q$ are constant matrices with compatible dimensions.
To exclude the $\forall$ quantifier for variable $\zeta$, we investigate the worst case of the left-hand side and require
\begin{equation}
\label{eq:App-03-SRO-Poly-1}
\bar a^T x + \max_{\zeta \in Z} (P^T x)^T \zeta \le b
\end{equation}
For a fixed $x$, the second term is the optimum of an LP in variable $\zeta$. Duality theory of LP says that the following relation holds
\begin{equation}
\label{eq:App-03-SRO-Poly-2}
(P^T x)^T \zeta \le q^T u,~ \forall \zeta \in Z,~ \forall u \in U
\end{equation}
where $u$ is the dual variable, and $U = \{ u ~|~ D^T u + P^T x =0,~ u \ge 0\}$ is the feasible region of the dual problem. Note the sign convention: we have replaced $u$ with $-u$ in the original dual LP. Therefore, a necessary condition to validate (\ref{eq:App-03-SRO-RC-Single}) is
\begin{equation}
\label{eq:App-03-SRO-Poly-3}
\exists u \in U: \bar a^T x + q^T u \le b
\end{equation}
It is also sufficient if the second term takes its minimum value over $U$, because strong duality always holds for LPs, i.e. $(P^T x)^T \zeta = q^T u$ is satisfied at the optimal solution. In this regard, (\ref{eq:App-03-SRO-Poly-1}) is equivalent to
\begin{equation}
\label{eq:App-03-SRO-Poly-4}
\bar a^T x + \min_{u \in U}~ q^T u \le b
\end{equation}
In fact, the ``min'' operator in (\ref{eq:App-03-SRO-Poly-4}) can be omitted in an RC optimization problem that minimizes the objective function, and thus renders polyhedral constraints, although (\ref{eq:App-03-SRO-Poly-1}) is not given in closed form and seems non-convex.
In summary, the RC problem of an uncertain LP with polyhedral uncertainty
\begin{equation}
\label{eq:App-03-SRO-RC-LP-1}
\begin{aligned}
\min_x ~~ & c^T x \\
\mbox{s.t.} ~~ &
(\bar a_i + P_i \zeta_i)^T x \le b_i,~ \forall \zeta_i \in Z_i,~\forall i\\
& Z_i = \{ \zeta_i ~|~ D_i \zeta_i + q_i \ge 0\},~ \forall i
\end{aligned}
\end{equation}
can be equivalently formulated as
\begin{equation}
\label{eq:App-03-SRO-RC-LP-2}
\begin{aligned}
\min_x ~~ & c^T x \\
\mbox{s.t.} ~~ &
\bar a^T_i x + q^T_i u_i \le b_i,~\forall i \\
& D^T_i u_i + P^T_i x = 0,~ u_i \ge 0,~\forall i
\end{aligned}
\end{equation}
which is still an LP.
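The key step in the derivation above is LP duality applied to the inner maximization over $Z$. A minimal numerical sketch (hypothetical box-shaped $Z$ and a fixed $P^T x$) confirms that the worst-case value and the dual minimum coincide:

```python
import numpy as np
from scipy.optimize import linprog

# Box uncertainty Z = {zeta : D zeta + q >= 0} = [-1,1]^2 (toy instance).
D = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
q = np.ones(4)
Px = np.array([2.0, -3.0])          # plays the role of P^T x for a fixed x

# primal worst case: max (P^T x)^T zeta over zeta in Z (linprog minimizes)
primal = linprog(-Px, A_ub=-D, b_ub=q, bounds=[(None, None)] * 2)
# dual form derived above: min q^T u  s.t.  D^T u + P^T x = 0,  u >= 0
dual = linprog(q, A_eq=D.T, b_eq=-Px, bounds=[(0, None)] * 4)

assert primal.status == 0 and dual.status == 0
assert abs(-primal.fun - dual.fun) < 1e-8   # strong LP duality
```

For this box, both sides equal $\|P^T x\|_1 = 5$, consistent with the box row of the table of tractable reformulations later in this section.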
{\noindent \bf 2. Cardinality constrained uncertainty set}
The cardinality constrained uncertainty set is a special class of
polyhedral uncertainty set which incorporates a budget constraint, and is defined as follows
\begin{equation}
\label{eq:App-03-SRO-US-Card}
Z({\rm \Gamma}) = \left\{ \zeta ~\middle|~ -1 \le \zeta_j \le 1,~ \forall j,~ \sum_j |\zeta_j| \le \rm \Gamma \right \}
\end{equation}
where $\rm \Gamma$ is called the budget of uncertainty \cite{RO-Price-Robust}. Motivated by the fact that the entries $\zeta_j$ are unlikely to reach $1$ or $-1$ simultaneously, the budget constraint controls the total data deviation from the forecast values. In other words, the decision maker can achieve a compromise between the level of solution robustness and the optimal cost by adjusting the value of $\rm \Gamma$, which should be less than the dimension of $\zeta$; otherwise the budget constraint will be redundant.
Although the cardinality constrained uncertainty set $Z({\rm \Gamma})$ is essentially a polyhedron, the number of its facets, or the number of linear constraints in (\ref{eq:App-03-SRO-US-Polyhedron}), grows exponentially in the dimension of $\zeta$, leading to a huge and dense coefficient matrix for the uncertainty set. To circumvent this difficulty, we can lift it into a higher dimensional space as follows by introducing auxiliary variables
\begin{equation}
\label{eq:App-03-SRO-US-Card-Lift}
Z({\rm \Gamma}) = \left\{ \zeta,\sigma ~\middle|~ - \sigma_j \le \zeta_j \le \sigma_j,~ \sigma_j \le 1,~\forall j,~ \sum_j \sigma_j \le \rm \Gamma \right \}
\end{equation}
The first inequality naturally implies $\sigma_j \ge 0$, $\forall j$.
It is easy to see the equivalence of (\ref{eq:App-03-SRO-US-Card}) and (\ref{eq:App-03-SRO-US-Card-Lift}), and the numbers of variables and constraints in the latter one grows linearly in the dimension of $\zeta$.
Following a similar paradigm, certifying constraint robustness with a cardinality constrained uncertainty set requires the optimal value of the following LP in the variables $\zeta$ and $\sigma$ representing the uncertainty
\begin{equation}
\label{eq:App-03-SRO-Card-1}
\begin{aligned}
\max_{\zeta,\sigma}~~ (P^T & x)^T \zeta \\
\mbox{s.t.}~~
- \zeta_j - \sigma_j & \le 0,~ \forall j : u^n_j \\
\zeta_j - \sigma_j & \le 0,~ \forall j : u^m_j \\
\sigma_j & \le 1,~ \forall j : u^b_j \\
\sum_j \sigma_j & \le {\rm \Gamma} : u_r
\end{aligned}
\end{equation}
where $u^n_j$, $u^m_j$, $u^b_j$, $\forall j$, and $u_r$ following a colon are the dual variables associated with each constraint. The dual problem of (\ref{eq:App-03-SRO-Card-1}) is given by
\begin{equation}
\label{eq:App-03-SRO-Card-2}
\begin{aligned}
\min_{u^n,u^m,u^b,u_r}~~ & u_r {\rm \Gamma} + \sum_j u^b_j \\
\mbox{s.t.}~~ & u^m_j - u^n_j = (P^T x)_j,~ \forall j \\
& -u^m_j - u^n_j + u^b_j + u_r = 0,~ \forall j \\
& u^m_j,~ u^n_j,~ u^b_j \ge 0,~ \forall j,~ u_r \ge 0
\end{aligned}
\end{equation}
In summary, the RC problem of an uncertain LP with cardinality constrained uncertainty
\begin{equation}
\label{eq:App-03-SRO-RC-Card-1}
\begin{aligned}
\min_x ~~ & c^T x \\
\mbox{s.t.} ~~ &
(\bar a_i + P_i \zeta_i)^T x \le b_i,~ \forall \zeta_i \in Z_i({\rm \Gamma}_i),~\forall i
\end{aligned}
\end{equation}
can be equivalently formulated as
\begin{equation}
\label{eq:App-03-SRO-RC-Card-2}
\begin{aligned}
\min_x ~~ & c^T x \\
\mbox{s.t.} ~~ & \bar a^T_i x + u_{ri} {\rm \Gamma}_i + \sum_j u^b_{ij} \le b_i,~\forall i \\
& u^m_{ij} - u^n_{ij} = (P^T_i x)_j,~ \forall i,~ \forall j \\
&-u^m_{ij} - u^n_{ij} + u^b_{ij} + u_{ri} = 0,~ \forall i,~ \forall j \\
& u^m_{ij},~ u^n_{ij},~ u^b_{ij} \ge 0,~ \forall i,~ \forall j,~
u_{ri} \ge 0,~ \forall i
\end{aligned}
\end{equation}
which is still an LP.
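The worst case over $Z({\rm \Gamma})$ also has a simple closed form: fill the components with the largest $|(P^T x)_j|$ up to the budget. A sketch (hypothetical data) solving the lifted primal LP with SciPy and comparing against this closed form:

```python
import numpy as np
from scipy.optimize import linprog

n, Gamma = 3, 1.5
d = np.array([4.0, -1.0, 2.0])            # stands in for P^T x

# lifted primal: max d^T zeta over (zeta, sigma) in the cardinality set
I = np.eye(n); Z = np.zeros((n, n))
A_ub = np.vstack([np.hstack([-I, -I]),    # -zeta - sigma <= 0
                  np.hstack([ I, -I]),    #  zeta - sigma <= 0
                  np.hstack([ Z,  I]),    #  sigma <= 1
                  np.r_[np.zeros(n), np.ones(n)][None, :]])  # sum sigma <= Gamma
b_ub = np.r_[np.zeros(2 * n), np.ones(n), Gamma]
c = np.r_[-d, np.zeros(n)]                # linprog minimizes
primal = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * n))

# closed-form worst case: largest |d_j| filled greedily up to the budget
mags = np.sort(np.abs(d))[::-1]
k = int(Gamma)
worst = mags[:k].sum() + (Gamma - k) * mags[k]
assert primal.status == 0 and abs(-primal.fun - worst) < 1e-8
```

By strong duality, this value also equals the optimum of the dual LP above, which is what the robust counterpart embeds as constraints.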
{\noindent \bf 3. Several other uncertainty sets}
Equivalent convex formulations of the uncertain constraint (\ref{eq:App-03-SRO-RC-Single}) with some other uncertainty sets are summarized in Table \ref{tab:App-03-SRO-RCs} \cite{RO-Guide}. These results are derived using methods similar to those described previously.
\begin{table}[!t]
\scriptsize
\renewcommand{\arraystretch}{2.0}
\setlength{\tabcolsep}{1em}
\caption{Equivalent convex formulations with different uncertainty sets}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Uncertainty & $Z$ & Robust reformulation & Tractability \\
\hline
Box & $\|\zeta\|_\infty \le 1$ & $\bar a^T x + \| P^T x \|_1 \le b$ & LP\\
Ellipsoidal & $\|\zeta\|_2 \le 1$ & $\bar a^T x+\| P^T x \|_2 \le b$ & SOCP\\
$p$-norm & $\|\zeta\|_p \le 1$ & $\bar a^T x + \| P^T x \|_q \le b$ & Convex program \\
Proper cone & $D \zeta + q \in K$ & $ \left\{ \begin{lgathered}
\bar a^T x + q^T u \le b \\ D^T u + P^T x = 0 \\ u \in K^* \end{lgathered} \right.$ & Conic LP \\
Convex constraints & $h_k(\zeta) \le 0, \forall k$ & $ \left\{ \begin{lgathered}
\bar a^T x + \sum_k \lambda_k h^*_k \left( \frac{u^k}{\lambda_k} \right) \le b \\ \sum_k u^k = P^T x \\ \lambda_k \ge 0, \forall k \end{lgathered} \right.$ & Convex program \\
\hline
\end{tabular}
\label{tab:App-03-SRO-RCs}
\end{table}
Table \ref{tab:App-03-SRO-RCs} includes three further cases: $p$-norm uncertainty, conic uncertainty, and general convex uncertainty.
In the $p$-norm case, H{\"o}lder's inequality is used:
\begin{equation}
(P^T x)^T \zeta \le \|P^T x\|_q \| \zeta \|_p
\end{equation}
where $\|\cdot\|_p$ and $\|\cdot\|_q$ with $p^{-1} + q^{-1} = 1$ are a pair of dual norms. Since the norm function of any order is convex \cite{CVX-Book-Boyd}, the resulting RC is a convex program. Moreover, if $q$ is a positive rational number, the $q$-order cone constraint can be represented by a set of SOC inequalities \cite{SOCP-p-norm}, which is computationally friendlier. Box ($\infty$-norm) and ellipsoidal ($2$-norm) uncertainty sets are special cases of $p$-norm sets.
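The dual-norm mechanism can be illustrated on made-up data (not from the source). Over the box, the worst case of $(P^T x)^T \zeta$ is attained at $\zeta = \operatorname{sign}(P^T x)$ and equals the 1-norm; over the ellipsoid, it is attained at $\zeta = P^T x / \|P^T x\|_2$ and equals the 2-norm:

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(size=5)                    # c = P^T x for some fixed x (made up)

# Box uncertainty (p = infinity): the maximizer is zeta = sign(c),
# so the worst-case value equals the dual (1-)norm of c.
box_worst = c @ np.sign(c)
assert np.isclose(box_worst, np.linalg.norm(c, 1))

# Ellipsoidal uncertainty (p = 2): the maximizer is zeta = c / ||c||_2,
# so the worst-case value equals ||c||_2 (the 2-norm is self-dual).
ell_worst = c @ (c / np.linalg.norm(c))
assert np.isclose(ell_worst, np.linalg.norm(c, 2))

# No feasible point of the box exceeds the worst case (Hoelder's inequality).
for _ in range(1000):
    z = rng.uniform(-1, 1, size=5)        # ||z||_inf <= 1
    assert c @ z <= box_worst + 1e-12

print(box_worst, ell_worst)
```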
In the general conic case, conic duality theory \cite{CVX-Book-Ben} is used. $K^*$ stands for the dual cone of $K$; polyhedral uncertainty is the special case in which $K$ is the nonnegative orthant.
In the general convex case, Fenchel duality, a basic tool of convex analysis, is needed. The notation $h^*$ stands for the convex conjugate function, i.e., $h^*(x) = \sup_y \{x^T y - h(y)\}$. Detailed proofs of the RC reformulations and more examples can be found in \cite{SRO-CVX-RCs}.
The above analysis focuses on the situation in which the problem functions are linear in the decision variables and the problem data are affine in the uncertain parameters, as in the form $a = \bar a + P \zeta$. For robust quadratic optimization, robust semidefinite optimization, robust conic optimization, and robust discrete optimization, in which the optimization problem is nonlinear or discontinuous, please refer to \cite{RO-Detail-1} and \cite{RO-Detail-2}; for quadratic-type uncertainty, please refer to \cite{RO-Detail-1} (Sect. 1.4) and \cite{SRO-CVX-RCs}.
\subsection{Formulation Issues}
\label{App-C-Sect01-03}
To help practitioners build a well-defined and easy-to-solve robust optimization model, some important modeling issues and deeper insights are discussed in this section.
{\noindent \bf 1. Choosing the uncertainty set}
Since a robust solution remains feasible as long as the uncertain data do not step outside the uncertainty set, the level of robustness mainly depends on the shape and size of that set. The more reliable the solution, the higher its cost, so one may wish to seek a trade-off between reliability and economy. This motivates the development of smaller uncertainty sets that still carry a probability guarantee that constraint violation is unlikely. Such guarantees are usually described via a chance constraint
\begin{equation}
\label{eq:App-03-SRO-Chance-Cons}
\Pr\nolimits _\zeta [a(\zeta)^T x \le b] \ge 1 - \varepsilon
\end{equation}
For $\varepsilon = 0$, chance constraint (\ref{eq:App-03-SRO-Chance-Cons}) is protected in the traditional sense of RO. When $\varepsilon > 0$, it becomes challenging to derive a tractable reformulation of (\ref{eq:App-03-SRO-Chance-Cons}), especially when the probability distribution of the uncertain data is unclear or inaccurate. In fact, this issue is closely related to the DRO approach discussed later. Here we provide some simple results that help the decision maker choose the parameters of the uncertainty set.
It has been shown that if $\mathbb E [\zeta] = 0$, the components of $\zeta$ are independent, and the uncertainty set takes the form
\begin{equation}
\label{eq:App-03-SRO-US-BOX-ELP}
Z = \{\zeta ~|~ \|\zeta\|_2 \le {\rm \Omega},~ \|\zeta\|_\infty \le 1 \}
\end{equation}
then chance constraint (\ref{eq:App-03-SRO-Chance-Cons}) holds with a probability of at least $1-\exp(-{\rm \Omega^2}/2)$ (see \cite{RO-Detail-1}, Proposition 2.3.3).
Moreover, if the uncertainty set takes the form
\begin{equation}
\label{eq:App-03-SRO-US-BOX-Budget}
Z = \{\zeta ~|~ \|\zeta\|_1 \le {\rm \Gamma},~ \|\zeta\|_\infty \le 1 \}
\end{equation}
then chance constraint (\ref{eq:App-03-SRO-Chance-Cons}) holds with a probability of at least $1-\exp(-{\rm \Gamma^2}/2L)$, where $L$ is the dimension of $\zeta$ (see \cite{RO-Detail-1}, Proposition 2.3.4, and \cite{RO-Price-Robust}).
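The budget-set guarantee can be checked by Monte Carlo simulation on a made-up instance (the vector $c = P^T x$, the budget $\Gamma$, and the uniform distribution are illustrative assumptions, not from the source); the empirical violation frequency stays well below $\exp(-\Gamma^2/2L)$:

```python
import numpy as np

rng = np.random.default_rng(1)
L, Gamma = 10, 6.0
c = np.ones(L)                       # c = P^T x for an illustrative fixed x

# Worst case of c^T zeta over Z = {||zeta||_1 <= Gamma, ||zeta||_inf <= 1}:
# for equal entries of c this equals min(Gamma, L).
worst = min(Gamma, L)

# Monte Carlo with zeta_i i.i.d. uniform on [-1, 1] (zero mean, bounded by 1).
zeta = rng.uniform(-1, 1, size=(200_000, L))
violation_freq = np.mean(zeta @ c > worst)

bound = np.exp(-Gamma**2 / (2 * L))  # the probability bound quoted above
print(violation_freq, "<=", bound)
```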
It is proposed to construct uncertainty sets based on the
central limit theorem. If each component of $\zeta$ is independent and identically distributed with mean $\mu$ and variance $\sigma^2$, the uncertainty set can be built as \cite{SRO-US-CLT}
\begin{equation}
\label{eq:App-03-SRO-US-CLT}
Z = \left\{ \zeta ~\middle|~ \left|\sum_{i=1}^L \zeta_i - L \mu \right| \le \rho \sqrt{L} \sigma,~ \|\zeta\|_\infty \le 1 \right\}
\end{equation}
where the parameter $\rho$ controls the probability guarantee. Variations of this formulation can take other distributional information into account, such as data correlation and long-tail effects. This is a special kind of polyhedral uncertainty; note, however, that the first constraint alone defines a set that is unbounded for $L > 1$, since individual components can be arbitrarily large as long as their sum is relatively small, and unboundedness may prevent establishing tractable RCs.
Additional references are introduced in further reading.
{\noindent \bf 2. How to solve a problem without a clear tractable reformulation?}
The existence of a tractable reformulation for a static RO problem largely depends on the type of the uncertainty set. If the robust counterpart cannot be written as a tractable convex program, a practical remedy is an adaptive scenario generation procedure: first solve the problem over a smaller uncertainty set $Z_S \subset Z$ for which a tractable reformulation is known. If the optimal solution $x^*$ is robust against all scenarios in $Z$, it is also an optimal solution of the original problem. Otherwise, we identify a scenario $\zeta^* \in Z$ that leads to the most severe violation, by solving
\begin{equation}
\label{eq:App-03-SRO-Scen-Gen-Sub}
\max~ \left\{ (P^T x^*)^T \zeta ~|~{\zeta \in Z} \right\}
\end{equation}
where $Z$ is a closed and convex set as validated in Assumption \ref{ap:App-03-SRO-3}, and then append a cutting plane
\begin{equation}
\label{eq:App-03-SRO-Scenario-Cut}
a(\zeta^*)^T x \le b
\end{equation}
to the reformulated problem. Constraint (\ref{eq:App-03-SRO-Scenario-Cut}) removes any $x$ that would be infeasible in scenario $\zeta^*$, so it is called a feasibility cut. It is linear and does not alter tractability. The updated problem is then solved again. According to Proposition \ref{pr:App-03-Invariance-2}, the new solution $x^*$ will be robust for uncertain data in the convex hull of $Z_S \cup \{\zeta^*\}$. The above procedure continues until robustness is certified over the original uncertainty set $Z$.
This simple approach often converges within a few iterations. Its advantage is that tractability is preserved. When we choose $Z_S = \{\zeta^0\}$, where $\zeta^0$ is the nominal or forecast scenario, it can be more efficient than using convex reformulations, because only LPs (whose sizes are almost equal to that of the problem without uncertainty and grow slowly) and simple convex programs (\ref{eq:App-03-SRO-Scen-Gen-Sub}) are solved; see \cite{SRO-Cut-Generation} for a comparison. This paradigm is an essential strategy for solving the adjustable RO problems in the next section.
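The loop can be sketched on a toy uncertain LP with an ellipsoidal set, for which the pessimization step has a closed form (all data below are made up for illustration; the exact robust counterpart here is the SOC constraint $\bar a^T x + \|P^T x\|_2 \le b$, which the cuts approach from outside):

```python
import numpy as np
from scipy.optimize import linprog

# Toy uncertain LP: min -x1 - x2  s.t.  (a_bar + P zeta)^T x <= b
# for all ||zeta||_2 <= 1, with x >= 0.  (Illustrative data.)
a_bar = np.array([1.0, 1.0])
P = 0.2 * np.eye(2)
b = 1.0
obj = np.array([-1.0, -1.0])

scenarios = [np.zeros(2)]                # Z_S starts from the nominal scenario
for _ in range(100):
    A_ub = np.array([a_bar + P @ z for z in scenarios])
    b_ub = np.full(len(scenarios), b)
    x = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=(0, None)).x
    # Pessimization step: for a unit 2-ball the worst zeta is g / ||g||_2.
    g = P.T @ x
    zeta_star = g / np.linalg.norm(g) if np.linalg.norm(g) > 0 else g
    if (a_bar + P @ zeta_star) @ x <= b + 1e-7:
        break                            # robust over the whole set Z
    scenarios.append(zeta_star)          # append a feasibility cut

# Converges to the value of the exact SOC reformulation, 2 / (2 + 0.2*sqrt(2)).
print(x, -obj @ x)
```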
{\noindent \bf 3. How to deal with equality constraints?}
Although the theory of static RO is relatively mature, it encounters difficulties with equality constraints. For example, consider $x + a = 1$, where $a \in [0,0.1]$ is uncertain: no single $x$ can make the equality hold for more than one value of $a$. The problem remains if the equality is written as a pair of opposite inequalities. In fact, this issue is inevitable in the static setting. In addition, this limitation leads to completely different robust counterparts for originally equivalent deterministic problems.
Consider the inequality $a x \le 1$, which is equivalent to $a x + s = 1,~ s \ge 0$. Suppose $a$ is uncertain and belongs to the interval $[1,2]$; the respective robust counterparts are given by
\begin{equation}
\label{eq:App-03-SRO-Example-1}
a x \le 1,~ \forall a \in [1,2]
\end{equation}
and
\begin{equation}
\label{eq:App-03-SRO-Example-2}
a x + s = 1,~ \forall a \in [1,2],~ s \ge 0
\end{equation}
The feasible set of (\ref{eq:App-03-SRO-Example-1}) is $x \le 1/2$, whereas that of (\ref{eq:App-03-SRO-Example-2}) is $x = 0$. In view of this difference, it is suggested that a static RO model should avoid slack variables in constraints containing uncertain parameters.
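The pitfall is easy to verify directly (a minimal sketch; the sample point $x = 0.4$ is an arbitrary choice). A solution of the inequality form can be checked at the interval endpoints, while the slack form demands one pair $(x, s)$ that works for every $a$, which forces $x = 0$:

```python
# Inequality form: a*x <= 1 for all a in [1, 2]  <=>  x <= 1/2 (for x >= 0).
x = 0.4
assert all(a * x <= 1 for a in (1.0, 2.0))      # endpoints suffice (linear in a)

# Slack form: a*x + s = 1 must hold for every a with ONE pair (x, s).
# The slack s = 1 - a*x needed depends on a unless x = 0:
s_needed = {a: 1 - a * x for a in (1.0, 2.0)}
assert s_needed[1.0] != s_needed[2.0]           # no single s works for x = 0.4

# Only x = 0 admits a common slack (s = 1) for all a in [1, 2].
assert len({1 - a * 0.0 for a in (1.0, 2.0)}) == 1
print("inequality form feasible at x=0.4; slack form feasible only at x=0")
```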
Sometimes, the optimization problem contains state variables that can respond to parameter changes by adjusting their values. In such circumstances, equality constraints can be used to eliminate state variables. Nevertheless, doing so may yield a problem with nonlinear uncertainty, which is challenging to solve. An example taken from \cite{RO-Guide} illustrates this issue. The constraints are
\begin{equation}
\label{eq:App-03-SRO-Example-3}
\begin{lgathered}
\zeta_1 x_1 + x_2 + x_3 = 1 \\
x_1 + x_2 + \zeta_2 x_3 \le 5
\end{lgathered}
\end{equation}
where $\zeta_1$ and $\zeta_2$ are uncertain.
If $x_1$ is a state variable and $\zeta_1 \ne 0$, substituting $x_1 = (1 - x_2 - x_3)/\zeta_1$ in the second inequality results in
\begin{equation}
\left( 1-\frac{1}{\zeta_1} \right) x_2 + \left( \zeta_2 - \frac{1}{\zeta_1} \right) x_3 \le 5 - \frac{1}{\zeta_1} \notag
\end{equation}
in which the uncertainty becomes nonlinear in the coefficients.
If $x_2$ is a state variable, substituting $x_2 = 1 - \zeta_1 x_1 - x_3$ in the inequality yields
\begin{equation}
(1 - \zeta_1) x_1 + (\zeta_2 -1) x_3 \le 4 \notag
\end{equation}
in which the uncertainty remains linear in the coefficients.
If $x_3$ is a state variable, substituting $x_3 = 1 - \zeta_1 x_1 - x_2$ in the inequality gives
\begin{equation}
(1 - \zeta_1 \zeta_2) x_1 + (1 - \zeta_2) x_2 \le 5 - \zeta_2 \notag
\end{equation}
in which the uncertainty is nonlinear in the coefficients.
In conclusion, when $x_2$ is the state variable, the problem is easier from a computational perspective. It is important to note that the physical interpretation of variable elimination is to determine the adjustable variable with exact knowledge of the uncertain data. If no adjustment were allowed in (\ref{eq:App-03-SRO-Example-3}), the only robust feasible solution would be $x_1 = x_3 =0$, $x_2 = 1$, which is rather restrictive. Adjustable RO will be elaborated in detail in the next section.
{\noindent \bf 4. Pareto efficiency of the robust solution}
The concept of Pareto efficiency in RO problems was proposed in \cite{SRO-Pareto-1}. If the optimal solution under the worst-case data realization is not unique, it is rational to compare the optima by their performance in non-worst-case scenarios: an alternative solution may improve the objective value in at least one data scenario without deteriorating the objective in any other scenario. To present the related concepts concisely, we restrict the discussion to the following robust LP with objective uncertainty
\begin{equation}
\label{eq:App-03-RLP-Obj}
\max\{p^T x~|~ \mbox{s.t. } x \in X,~ \forall p \in W \} =
\max_{x \in X} \left\{ \min_{p \in W} p^T x \right\}
\end{equation}
where $W = \{p ~|~ D p \ge d \}$ is a polyhedral uncertainty set for the price vector $p$; $X = \{ x~|~A x \le b \}$ is the feasible region which is independent of the uncertainty. More general cases are elaborated in \cite{SRO-Pareto-1}. We consider this form because it is easy to discuss related issues, although objective uncertainty can be moved into constraints.
For a given strategy $x$, the worst-case objective value is
\begin{equation}
\label{eq:App-03-RLP-Dual}
\min \{ p^T x ~|~ \mbox{s.t.}~ p \in W \} =
\max \{ d^T y ~|~ \mbox{s.t.}~ y \in Y \}
\end{equation}
where $Y =\{ y ~|~ D^T y = x,~ y \ge 0\}$ is the feasible set for dual variable $y$. Substituting (\ref{eq:App-03-RLP-Dual}) in (\ref{eq:App-03-RLP-Obj}) gives
\begin{equation}
\label{eq:App-03-RLP-Obj-RC}
\max\{d^T y~|~ \mbox{s.t. } D^T y = x,~ y \ge 0,~x \in X\}
\end{equation}
which is an LP. Its solution $x$ is the robust optimal one to (\ref{eq:App-03-RLP-Obj}), and the worst-case price $p$ can be found by solving the left-hand side LP in (\ref{eq:App-03-RLP-Dual}). Let $z^{RO}$ be the optimal value of (\ref{eq:App-03-RLP-Obj-RC}), and then the set of robust optimal solutions for (\ref{eq:App-03-RLP-Obj}) can be expressed via
\begin{equation}
\label{eq:App-03-RLP-Obj-XRO}
X^{RO} = \{x~|~ x \in X: \exists y \in Y
\mbox{ such that } y^T d \ge z^{RO} \}
\end{equation}
If (\ref{eq:App-03-RLP-Obj-RC}) has a unique optimal solution, $X^{RO}$ is a singleton; otherwise, a Pareto optimal robust solution can be formally defined.
\begin{definition}
\label{df:App-03-Pareto-Efficiency}
\cite{SRO-Pareto-1} $x \in X^{RO}$ is a Pareto optimal solution for problem (\ref{eq:App-03-RLP-Obj}) if there is no other $\bar x \in X$ such that $p^T \bar x \ge p^T x,~ \forall p \in W$ and $\bar p^T \bar x > \bar p^T x$ for some $\bar p \in W$.
\end{definition}
The terminology ``Pareto optimal'' is borrowed from multi-objective optimization theory: RO problem (\ref{eq:App-03-RLP-Obj}) is viewed as a multi-objective LP with infinitely many objectives, each corresponding to a particular $p \in W$. Several related questions are elaborated below.
{\noindent \bf a. Pareto efficiency test}
In general, it is not clear whether $X^{RO}$ contains multiple solutions, at least before a solution $x \in X^{RO}$ is found. To test whether a given solution $x \in X^{RO}$ is Pareto optimal, it is proposed to solve a new LP
\begin{equation}
\label{eq:App-03-PRO-Test-LP}
\begin{aligned}
\max_{y} ~~ & \bar p^T y \\
\mbox{s.t.} ~~ & y \in W^* \\
& x + y \in X
\end{aligned}
\end{equation}
where $\bar p$ is a relative interior point of the polyhedral uncertainty set $W$, usually set to the nominal scenario, and $W^* = \{ y~|~\exists \lambda:d^T \lambda \ge 0,~ D^T \lambda = y,~ \lambda \ge 0\}$ is the dual cone of $W$. Please refer to Sect. \ref{App-A-Sect02-01} and equation (\ref{eq:App-01-Dual-Polytope-2}) for the dual cone of a polyhedral set. Since $y = 0$, $\lambda = 0$ is always feasible in (\ref{eq:App-03-PRO-Test-LP}), the optimal value is either zero or strictly positive. In the former case, $x$ itself is a Pareto optimal solution; in the latter case, $\bar x = x + y^*$ dominates $x$ and is Pareto optimal, for any $y^*$ that solves LP (\ref{eq:App-03-PRO-Test-LP}) \cite{SRO-Pareto-1}. The interpretation of (\ref{eq:App-03-PRO-Test-LP}) is clear: since $y \in W^*$, $y^T p$ is non-negative for all $p \in W$; if some $y$ yields a strict objective improvement at $\bar p$, then $x+y$ dominates $x$.
In view of the above interpretation, it is a direct conclusion that for an arbitrary relative interior point $\bar p \in W$, the optimal solutions to the problem
\begin{equation}
\label{eq:App-03-PRO-Find-LP}
\max~ \left\{ \bar p^T x ~|~ x \in X^{RO} \right\}
\end{equation}
are Pareto optimal.
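The Pareto efficiency test can be sketched on a tiny made-up instance (the sets $X$, $W$, and the point $\bar p$ below are illustrative assumptions, not from the cited paper). Here $X = [0,1]^2$ and $W = \{p \mid p_1 = 1,\ 0 \le p_2 \le 1\}$, so the worst-case objective is $x_1$ and $X^{RO} = \{(1,t) : t \in [0,1]\}$ is not a singleton; the test LP returns a positive value for the dominated point $(1,0)$ and zero for the Pareto optimal point $(1,1)$:

```python
import numpy as np
from scipy.optimize import linprog

# W = {p | D p >= d} encodes p1 = 1, 0 <= p2 <= 1  (made-up instance).
D = np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1]])
d = np.array([1.0, -1.0, 0.0, -1.0])
p_bar = np.array([1.0, 0.5])              # a relative interior point of W

def pareto_test(x):
    # Variables v = [y (2), lam (4)]; maximize p_bar^T y.
    c = -np.concatenate([p_bar, np.zeros(4)])
    A_eq = np.hstack([-np.eye(2), D.T])   # D^T lam - y = 0  (y in dual cone W*)
    b_eq = np.zeros(2)
    # -d^T lam <= 0, plus 0 <= x + y <= 1 to keep x + y inside X = [0,1]^2.
    A_ub = np.vstack([np.concatenate([np.zeros(2), -d]),
                      np.hstack([np.eye(2), np.zeros((2, 4))]),
                      np.hstack([-np.eye(2), np.zeros((2, 4))])])
    b_ub = np.concatenate([[0.0], 1 - x, x])
    bounds = [(None, None)] * 2 + [(0, None)] * 4
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun                        # 0 -> Pareto optimal, > 0 -> dominated

print(pareto_test(np.array([1.0, 0.0])))   # positive: (1,0) is dominated by (1,1)
print(pareto_test(np.array([1.0, 1.0])))   # zero: (1,1) is Pareto optimal
```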
{\noindent \bf b. Characterizing the set of Pareto optimal solutions}
It is interesting to characterize the set $X^{PRO}$ of Pareto optimal solutions.
After obtaining $z^{RO}$ and $X^{RO}$, solve the following LP
\begin{equation}
\label{eq:App-03-XPRO}
\begin{aligned}
\max_{x,y,\lambda} ~~ & \bar p^T y \\
\mbox{s.t.} ~~ & d^T \lambda \ge 0,~ D^T \lambda = y,~ \lambda \ge 0 \\
& x \in X^{RO},~ x + y \in X
\end{aligned}
\end{equation}
and we can conclude that $X^{PRO}=X^{RO}$ if and only if the optimal value of (\ref{eq:App-03-XPRO}) is equal to 0 \cite{SRO-Pareto-1}. If this is true, the decision maker need not worry about Pareto efficiency, as any solution in $X^{RO}$ is also Pareto optimal. More generally, the set $X^{PRO}$ is shown to be possibly non-convex and contained in the boundary of $X^{RO}$.
{\noindent \bf c. Optimization over Pareto optimal solutions}
In the case that $X^{PRO}$ is not a singleton, one may consider optimizing a linear secondary objective over $X^{PRO}$:
\begin{equation}
\label{eq:App-03-OPTI-XPRO}
\max \{ r^T x ~|~ \mbox{s.t. } x \in X^{PRO} \}
\end{equation}
It is demonstrated in \cite{SRO-Pareto-1} that if $r$ lies in the relative interior of $W$, the decision maker can simply replace $X^{PRO}$ with $X^{RO}$ in (\ref{eq:App-03-OPTI-XPRO}) without altering the problem solution, due to the property revealed in (\ref{eq:App-03-PRO-Find-LP}). In more general cases, problem (\ref{eq:App-03-OPTI-XPRO}) can be formulated as an MILP \cite{SRO-Pareto-1}
\begin{equation}
\label{eq:App-03-OPTI-XPRO-MILP}
\begin{aligned}
\max_{x,\mu,\eta,z}~~ & r^T x \\
\mbox{s.t.}~~ & x \in X^{RO} \\
& \mu \le M(1-z) \\
& b - A x \le Mz \\
& DA^T \mu - d \eta \ge D \bar p \\
& \mu, \eta \ge 0, z \in \{0,1\}^{m}
\end{aligned}
\end{equation}
where $M$ is a sufficiently large number and $m$ is the dimension of the vector $z$. To show the equivalence, it is revealed in \cite{SRO-Pareto-1} that the feasible set of (\ref{eq:App-03-OPTI-XPRO-MILP}) characterizes an optimal solution of (\ref{eq:App-03-PRO-Test-LP}) with a zero objective value. In other words, the constraints of (\ref{eq:App-03-OPTI-XPRO-MILP}) contain the KKT optimality conditions of (\ref{eq:App-03-PRO-Test-LP}). To see this, note that the binary vector $z$ imposes the complementary slackness condition $\mu^T (b-Ax)=0$, which ensures that $\lambda$, $\mu$, $\eta$ are optimal solutions of the following primal-dual LP pair
\begin{equation}
\mbox{Primal : }
\begin{aligned}
\max_{\lambda}~~ & \bar p^T D^T \lambda \\
\mbox{s.t.}~~ & \lambda \ge 0 \\
& d^T \lambda \ge 0 \\
& A D^T \lambda \le b - A x
\end{aligned}
\quad
\mbox{Dual : }
\begin{aligned}
\min_{\mu, \eta}~~ & \mu^T (b-Ax) \\
\mbox{s.t.}~~ & \mu \ge 0 \\
& \eta \ge 0 \\
& D A^T \mu - d \eta \ge D \bar p
\end{aligned}
\notag
\end{equation}
The original variable $y$ in (\ref{eq:App-03-PRO-Test-LP}) is eliminated via equality $ D^T \lambda = y$ in the dual cone $W^*$. According to strong duality, the optimal value of the primal LP (\ref{eq:App-03-PRO-Test-LP}) is $\bar p^T D^T \lambda = \bar p^T y = 0$, and Pareto optimality is guaranteed.
In practice, Pareto inefficiency is not a contrived phenomenon; see the examples in \cite{SRO-Pareto-1} and the power market examples in \cite{SRO-Pareto-2,SRO-Pareto-3}.
{\noindent \bf 5. On max-min and min-max formulations}
In much of the literature, the robust counterpart of (\ref{eq:App-03-SRO-LP-UA}) is written in the min-max form
\begin{equation}
\label{eq:App-03-SRO-RC-min-max}
\begin{aligned}
\mbox{Opt-1} = \min_x \max_{A \in W} \left\{c^T x ~|~ \mbox{s.t. } A x \le b \right\}
\end{aligned}
\end{equation}
which means $x$ is determined before $A$ takes a value in $W$, and the decision maker can foresee the worst consequence of deploying $x$ brought by the perturbation of $A$. To make a prudent decision that is insensitive to data perturbation, the decision maker resorts to minimizing the maximal objective.
The max-min formulation
\begin{equation}
\label{eq:App-03-SRO-RC-max-min}
\begin{aligned}
\mbox{Opt-2} = \max_{A \in W} \min_x \left\{c^T x ~|~ \mbox{s.t. } A x \le b \right\}
\end{aligned}
\end{equation}
has a different interpretation: the decision maker first observes the realization of the uncertainty, and then recovers the constraints by deploying a corrective action $x$ in response to the observed $A$. Certainly, this specific $x$ may not be feasible for other $A \in W$. On the other hand, the uncertainty, like a rational player, can foresee the optimal action taken by the human decision maker, and selects the strategy that yields the maximal objective value even if an optimal corrective action is deployed.
From the above analysis, the feasible region of $x$ in (\ref{eq:App-03-SRO-RC-min-max}) is a subset of that in (\ref{eq:App-03-SRO-RC-max-min}), because (\ref{eq:App-03-SRO-RC-max-min})
only accounts for one particular scenario in $W$. As a result, their optimal values satisfy Opt-1 $\ge$ Opt-2.
Consider the following problem in which the uncertainty is not constraint-wise
\begin{equation}
\label{eq:App-03-SRO-RC-Toy}
\begin{aligned}
\min_x ~~ & x_1 + x_2 \\
\mbox{s.t.} ~~ & x_1 \ge a_1,~ x_2 \ge a_2,~ \forall a \in W
\end{aligned}
\end{equation}
where $W = \{a ~|~ a \ge 0,~ \|a\|_2 \le 1 \}$.
For the min-max formulation, since $x$ should be feasible for all possible values of $a$, it is necessary to require $x_1 \ge 1$ and $x_2 \ge 1$, and Opt-1 $=2$ for problem (\ref{eq:App-03-SRO-RC-Toy}).
As for the max-min formulation, as $x$ is determined in response to the value of $a$, it is clear that the optimal choice is $x_1 = a_1$ and $x_2 = a_2$, so the problem becomes
\begin{equation}
\begin{aligned}
\max_a ~~ & a_1 + a_2 \\
\mbox{s.t.} ~~ & a^2_1 + a^2_2 \le 1
\end{aligned} \notag
\end{equation}
whose optimal value is Opt-2 $=\sqrt{2} <$ Opt-1.
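The gap between the two formulations for this toy problem can be verified numerically (a minimal sketch; the solver choice is an implementation detail, not from the source):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: W = {a >= 0, ||a||_2 <= 1}.
# Min-max: x must dominate every a in W, so x_i >= max_{a in W} a_i = 1.
opt1 = 1.0 + 1.0                                # Opt-1 = 2

# Max-min: x responds to a with x = a, leaving max a1 + a2 over W.
res = minimize(lambda a: -(a[0] + a[1]), x0=[0.5, 0.5],
               constraints=[{'type': 'ineq', 'fun': lambda a: 1 - a @ a}],
               bounds=[(0, None), (0, None)])
opt2 = -res.fun                                 # Opt-2 = sqrt(2)

print(opt1, opt2)
```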
In short, the static RO models discussed in this section immunize against constraint violation or objective volatility caused by data perturbations without jeopardizing computational tractability. The general approach is to reformulate the original uncertainty-dependent constraints as deterministic convex constraints free of uncertain data, so that feasible solutions of the robust counterpart remain feasible for all data realizations in the pre-specified uncertainty set; this is the meaning of robustness.
\section{Adjustable Robust Optimization}
\label{App-C-Sect02}
Several reasons call for new decision-making mechanisms that overcome the limitations of the static RO approach: 1) equality constraints often give rise to infeasible robust counterparts in the static setting; 2) real-world decision-making may involve multiple stages, in which some decisions can be made after the uncertain data have been observed or can be predicted accurately. Take power system operation as an example: the on-off status of generation units must be fixed several hours before real-time dispatch, when the renewable power output is still unknown; however, the output of some units (called AGC units) can change in response to the realized system demands and renewable generation. This section is devoted to adjustable robust optimization (ARO) with two stages, which leverages the adaptability of the second stage. We still focus our attention on the linear case.
\subsection{Basic Assumptions and Formulations}
\label{App-C-Sect02-01}
The essential difference between static RO and ARO approaches stems from the manner of decision making.
\begin{assumption}
\label{ap:App-03-ARO-1}
In an ARO problem, some variables are ``here-and-now'' decisions, whereas the rest are ``wait-and-see'' decisions: they can be made at a later moment according to the observed data.
\end{assumption}
In analogy to the static case, the decision-making mechanism is captured by the following assumption.
\begin{assumption}
\label{ap:App-03-ARO-2}
Once the here-and-now decisions are made, there must be at least one valid wait-and-see decision which is able to recover constraints in response to the observed data realization, if the actual data is within the uncertainty set.
\end{assumption}
In this regard, we can say here-and-now decisions are robust against the uncertainty, and wait-and-see decisions are adaptive to the uncertainty. These terminologies are borrowed from two-stage SO models. In fact, there is a close relation between two-stage SO and two-stage RO \cite{ARO-TSSO-Relation-1,ARO-TSSO-Relation-2}.
Now we are ready to post the compact form of a linear ARO problem with an uncertain constraint right-hand side:
\begin{equation}
\label{eq:App-03-ARO}
\min_{x \in X} \left\{ c^T x + \max_{w \in W} \min_{y(w) \in Y(x,w)} d^T y(w) \right\}
\end{equation}
where $x$ is the here-and-now decision variable (or the first-stage decision variable), and $X$ is the feasible region of $x$; $w$ is the uncertain parameter, and $W$ is the uncertainty set, which has been discussed in the previous section; $y(w)$ is the wait-and-see decision variable (or second-stage decision variable), which can be adjusted according to the actual data of $w$, so it is represented as a function of $w$; $Y$ is the feasible region of $y$ given the values of $x$ and $w$, because the here-and-now decision is not allowed to change in this stage, and the exact value of $w$ is known. It has a polyhedral form
\begin{equation}
\label{eq:App-03-ARO-Y(x,w)}
Y(x,w) = \{y ~|~ Ax + By + Cw \le b \}
\end{equation}
where $A$, $B$, $C$, and $b$ are constant matrices and a vector with compatible dimensions. Clearly, both the here-and-now decision $x$ and the data uncertainty $w$ influence the second-stage feasible region $Y$. We define $w = 0$ as the nominal scenario and assume $0$ is a relative interior point of $W$. Otherwise, we can decompose the uncertainty as $w = w^0 + {\rm \Delta} w$ and merge the constant term $Cw^0$ into the right-hand side via $b \to b - C w^0$, where $w^0$ is the predicted or expected value of $w$ and ${\rm \Delta} w$ is the forecast error, which is the real uncertain parameter.
It should be pointed out that $x$, $w$, and $y$ may contain discrete decision variables. As we will see later, integer variables in $x$ and $w$ do not significantly alter the solution algorithms for ARO. However, integrality in $y$ prevents the use of LP duality theory, so the computation becomes much more challenging. Although we assume the coefficient matrices in (\ref{eq:App-03-ARO-Y(x,w)}) are constant, most results in this section generalize when the matrix $A$ is a linear function of $w$; the situation is more complicated when the matrix $B$ is uncertainty-dependent. The specific form (\ref{eq:App-03-ARO-Y(x,w)}) is adopted because it matches the problems considered in this book: uncertainty originating from renewable/load volatility is modeled by the term $Cw$ in (\ref{eq:App-03-ARO-Y(x,w)}), while the coefficients representing component and network parameters are constants.
Assumption \ref{ap:App-03-ARO-2} inspires the definition for a feasible solution of ARO (\ref{eq:App-03-ARO}).
\begin{definition}
\label{df:App-03-ARO-XR}
A first-stage decision $x$ is called robust feasible in (\ref{eq:App-03-ARO}) if the feasible region $Y(x,w)$ is non-empty for all $w \in W$; the set of robust feasible solutions is given by:
\begin{equation}
\label{eq:App-03-ARO-XR}
X_R = \{x ~|~ x \in X : \forall w \in W,~ Y(x,w) \ne \emptyset \}
\end{equation}
\end{definition}
Please be aware of the order of quantifiers in (\ref{eq:App-03-ARO-XR}): $x$ takes its value first, then the parameter $w$ chooses a value in $W$, and only then is some $y \in Y$ selected. The non-emptiness of $Y$ must be guaranteed by the choice of $x$ for every $w$. If we swap the latter two quantifiers and write $Y(x,w) \ne \emptyset$, $\forall w \in W$, as in a static RO, it may suggest that both $x$ and $y$ are here-and-now decisions; the adaptiveness vanishes, and $X_R$ may become empty if uncertainty appears in an equality constraint, as analyzed in the previous section.
The definition of an optimal solution depends on the decision maker's attitude towards the cost in the second stage. In (\ref{eq:App-03-ARO}), we adopt the following definition.
\begin{definition}
\label{df:App-03-SRO-Optimality-MMC}
(Min-max cost criterion) An optimal solution of (\ref{eq:App-03-ARO}) is a pair of here-and-now decision $x \in X_R$ and wait-and-see decision $y(w^*)$ corresponding to the worst-case scenario $w^* \in W$, such that the total cost in scenario $w^*$ is minimal, where the worst-case scenario $w^*$ means that for the fixed $x$, the optimal second-stage cost is maximized over $W$.
\end{definition}
Other criteria lead to different robust formulations, for example the minimum nominal cost formulation and the min-max regret formulation.
\begin{definition}
\label{df:App-03-SRO-Optimality-MNC}
(Minimum nominal cost criterion) An optimal solution under the minimum nominal cost criterion is a pair of here-and-now decision $x \in X_R$ and wait-and-see decision $y^0$ corresponding to the nominal scenario $w^0 = 0$, such that the total cost in scenario $w^0$ is minimal.
\end{definition}
The minimum nominal cost criterion leads to the following robust formulation
\begin{equation}
\label{eq:App-03-ARO-V1}
\begin{aligned}
\min~~ & c^T x + d^T y \\
\mbox{s.t.} ~~ & x \in X_R \\
& y \in Y(x,w^0)
\end{aligned}
\end{equation}
where robustness is guaranteed by $X_R$.
To explain the concept of regret, define the minimum perfect-information total cost
\begin{equation*}
C_P(w) = \min \left\{ c^T x + d^T y ~|~ x \in X,~ y \in Y(x,w) \right\}
\end{equation*}
where $w$ is known to the decision maker. For a fixed first-stage decision $x$, the maximum regret is defined as
\begin{equation*}
\mbox{Reg}(x) = \max_{w \in W} \left\{ \min_{y \in Y(x,w)} \{c^T x + d^T y\} - C_P(w) \right\}
\end{equation*}
\begin{definition}
\label{df:App-03-SRO-Optimality-MMR}
(Min-max regret criterion) An optimal solution under the min-max regret criterion is a pair of here-and-now decision $x$ and wait-and-see decision $y(w)$, such that the worst-case regret under all possible scenarios $w \in W$ is minimized.
\end{definition}
The min-max regret cost criterion leads to the following robust formulation
\begin{equation}
\label{eq:App-03-ARO-MMR}
\min_{x \in X} \left\{ c^T x + \max_{w \in W} \left\{ \min_{y \in Y(x,w)} d^T y - \min_{x^\prime \in X,y^\prime \in Y(x^\prime,w)} \left\{ c^T x^\prime + d^T y^\prime \right\} \right\} \right\}
\end{equation}
In an ARO problem, we can naturally assume that the uncertainty set is a polyhedron. To see this, suppose $x$ is a robust solution under an uncertainty set consisting of discrete scenarios, i.e., $W=\{w^1,w^2,\cdots, w^S\}$; then according to Definition \ref{df:App-03-ARO-XR}, there exist corresponding $\{y^1,y^2,\cdots, y^S\}$ such that
\begin{equation}
\begin{gathered}
B y^1 \le b - A x -C w^1 \\
B y^2 \le b - A x -C w^2 \\
\vdots \\
B y^S \le b - A x -C w^S
\end{gathered}
\notag
\end{equation}
For non-negative weighting parameters $\lambda_1, \lambda_2,\cdots, \lambda_S \ge 0$, $\sum_{s=1}^S \lambda_s =1$, we have
\begin{equation}
\sum_{s=1}^S \lambda_s ( B y^s ) \le \sum_{s=1}^S \lambda_s (b-Ax-Cw^s)
\notag
\end{equation}
or equivalently
\begin{equation}
B \sum_{s=1}^S \lambda_s y^s \le b - Ax - C \sum_{s=1}^S \lambda_s w^s
\notag
\end{equation}
indicating that for any $w = \sum_{s=1}^S \lambda_s w^s \in \mbox{conv}(\{w^1,w^2,\cdots, w^S\})$, the wait-and-see decision $y = \sum_{s=1}^S \lambda_s y^s$ recovers all constraints, and thus $Y(x,w) \ne \emptyset$. This property inspires the following proposition, which is analogous to Proposition \ref{pr:App-03-Invariance-2}.
\begin{proposition}
\label{pr:App-03-Invariance-3}
Suppose $x$ is a robust feasible solution for a discrete uncertainty set $\{w^1,w^2,\cdots w^S\}$, then it remains robust feasible if we replace the uncertainty set with its convex hull.
\end{proposition}
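The convex-combination argument behind Proposition \ref{pr:App-03-Invariance-3} can be checked directly on a small random instance (all data below are made up; $B = -I$ is chosen only so that a feasible recourse is trivial to write down):

```python
import numpy as np

rng = np.random.default_rng(2)
# A small random instance of the second-stage constraints A x + B y + C w <= b.
m = 4
A, C = rng.normal(size=(m, 2)), rng.normal(size=(m, 3))
B = -np.eye(m)                         # with B = -I, y^s = -(b - A x - C w^s) is feasible
b, x = rng.normal(size=m), rng.normal(size=2)

vertices = [rng.normal(size=3) for _ in range(3)]      # scenarios w^1, w^2, w^3
recourse = [-(b - A @ x - C @ w) for w in vertices]    # feasible y^s for each vertex

# Any point of the convex hull is covered by the matching combination of recourses.
lam = np.array([0.2, 0.5, 0.3])
w = sum(l * v for l, v in zip(lam, vertices))
y = sum(l * r for l, r in zip(lam, recourse))
print(A @ x + B @ y + C @ w - b)       # componentwise <= 0 (here: = 0)
```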
Proposition \ref{pr:App-03-Invariance-3} also implies that to ensure the robustness of $x$, it suffices to consider the extreme points of a bounded polyhedral uncertainty set. Suppose the vertices of the polyhedral uncertainty set are $w^s$, $s=1,2,\cdots,S$, and consider the set
\begin{equation}
\label{eq:App-03-ARO-XR-Lift}
{\rm \Xi} = \{x,y^1,y^2,\cdots,y^S ~|~
A x + B y^s \le b -C w^s,~ s = 1,2,\cdots,S \}
\end{equation}
The robust feasible region $X_R$ is the projection of the polyhedron $\rm \Xi$ onto the $x$-space, which is also a polyhedron (Theorem B.2.5 in \cite{CVX-Book-Ben}).
\begin{proposition}
\label{pr:App-03-XR}
If the uncertainty set has a finite number of extreme points, set $X_R$ is a polytope.
\end{proposition}
Despite these nice theoretical properties, it is still difficult to solve an ARO problem in its general form (\ref{eq:App-03-ARO}). Considerable effort has been devoted to developing approximations and algorithms that tackle the computational challenges. We leave the solution methods of ARO problems to the next subsection. Here we demonstrate the benefit of postponing some decisions to the second stage via a simple example taken from \cite{RO-Detail-1}.
Consider an uncertain LP
\begin{equation}
\begin{aligned}
\min_x ~~ & x_1 \\
\mbox{s.t.}~~ & x_2 \ge 0.5 \xi x_1 + 1 ~~ (a_\xi) \\
& x_1 \ge (2 - \xi) x_2 ~~~~ (b_\xi) \\
& x_1 \ge 0,~ x_2 \ge 0 ~~~~ (c_\xi)
\end{aligned} \notag
\end{equation}
where $\xi \in [0, \rho]$ is an uncertain parameter and $\rho$ is a constant (level of uncertainty) which may take a value in the open interval $(0,1)$.
In a static setting, both $x_1$ and $x_2$ must be independent of $\xi$. When $\xi = \rho$, constraint $(a_{\xi})$ requires $x_2 \ge 0.5 \rho x_1 +1$; when $\xi = 0$, constraint $(b_\xi)$ requires $x_1 \ge 2 x_2$. Combining the two gives $x_1 \ge \rho x_1 + 2$, so the optimal value in the static case satisfies
\begin{equation}
\mbox{Opt} \ge x_1 \ge \dfrac{2}{1-\rho} \notag
\end{equation}
Thus the optimal value tends to infinity when $\rho$ approaches 1.
Now consider the adjustable case, in which $x_2$ is a wait-and-see decision. With $x_2 = 0.5 \xi x_1 + 1$, constraint ($a_\xi$) is always satisfied; substituting this $x_2$ into constraint ($b_\xi$) yields
\begin{equation}
x_1 \ge (2 - \xi) (\dfrac{1}{2} \xi x_1 + 1),~ \forall \xi \in [0, \rho]
\notag
\end{equation}
Substituting $x_1=4$ into the above inequality gives
\begin{equation}
4 \ge 2(2 - \xi)\xi + 2-\xi, \forall \xi \in [0, \rho] \notag
\end{equation}
This inequality is certified by the facts that $\xi \ge 0$ and $\xi(2-\xi) \le 1$ for all $\xi \in \mathbb R$, so $x_1=4$ is a robust feasible solution. Therefore, the optimal value is no greater than 4 in the adjustable case for any $\rho$. The gap between the optimal values of the two cases can be arbitrarily large, depending on the value of $\rho$.
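A quick numeric check of this example, with the values above, can be sketched in pure Python (the grid size and tolerance are arbitrary choices for illustration):

```python
def adjustable_feasible(x1, rho, n=1000):
    """Check that x1, with the wait-and-see rule x2(xi) = 0.5*xi*x1 + 1,
    satisfies constraints (a_xi)-(c_xi) on a grid of xi in [0, rho]."""
    for i in range(n + 1):
        xi = rho * i / n
        x2 = 0.5 * xi * x1 + 1.0          # makes constraint (a_xi) tight
        if x1 < (2.0 - xi) * x2 - 1e-9 or x1 < 0 or x2 < 0:
            return False
    return True

rho = 0.99
static_bound = 2.0 / (1.0 - rho)           # static optimum is at least ~200 here
adjustable_ok = adjustable_feasible(4.0, rho)
```

With $\rho = 0.99$ the static optimum already exceeds about 200, while $x_1 = 4$ stays robust feasible in the adjustable setting.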
\subsection{Affine Policy Based Approximation Model}
\label{App-C-Sect02-02}
ARO problem (\ref{eq:App-03-ARO}) is difficult to solve because the functional dependence of the wait-and-see decision on $w$ is arbitrary, and there is no closed-form expression to characterize the optimal solution function $y(w)$ or to certify whether $Y(x,w)$ is empty. At this point, we consider approximating the recourse function $y(w)$ by a simpler one, naturally, an affine function
\begin{equation}
\label{eq:App-03-ARO-Affine-Policy}
y(w) = y^0 + G w
\end{equation}
where $y^0$ is the action in the second stage for the nominal scenario $w=0$, and $G$ is the gain matrix to be designed. (\ref{eq:App-03-ARO-Affine-Policy}) is called a linear decision rule or affine policy. It explicitly characterizes the wait-and-see decisions as an affine function of the revealed uncertain data. The rationale for employing an affine policy instead of other parametric ones is that it yields computationally tractable robust counterpart reformulations. This finding was first reported in \cite{ARO-Affine-Policy}.
To validate (\ref{eq:App-03-ARO-XR}) under the linear decision rule, substitute (\ref{eq:App-03-ARO-Affine-Policy}) into (\ref{eq:App-03-ARO-Y(x,w)}), which gives
\begin{equation}
\label{eq:App-03-AARO-RC-1}
Ax + By^0 + (BG + C) w \le b,~ \forall w \in W
\end{equation}
In (\ref{eq:App-03-AARO-RC-1}), the decision variables are $x$, $y^0$, and $G$, which must be chosen before $w$ is known, and thus are here-and-now decisions. The wait-and-see decision (or the incremental part) is determined directly from (\ref{eq:App-03-ARO-Affine-Policy}) without further optimization, and cost reduction is considered in the determination of the gain matrix $G$. (\ref{eq:App-03-AARO-RC-1}) is in the form of (\ref{eq:App-03-SRO-RC-Single}), and hence its robust counterpart can be derived via the methods in Appendix \ref{App-C-Sect01-02}. Here we provide the results for polyhedral uncertainty as an example.
Suppose the uncertainty set is described by
\begin{equation}
W = \{w ~|~ S w \le h \} \notag
\end{equation}
If we assume that $y^0$ is the optimal second stage decision when $w=0$, then we have
\begin{equation}
Ax + By^0 \le b \notag
\end{equation}
Furthermore, (\ref{eq:App-03-AARO-RC-1}) must hold if
\begin{equation}
\label{eq:App-03-AARO-RC-2}
\max_{w \in W} (BG + C)_i w \le 0,~ \forall i
\end{equation}
where $(\cdot)_i$ stands for the $i$-th row of the input matrix.
According to LP duality theory,
\begin{equation}
\label{eq:App-03-AARO-RC-3}
\max_{w \in W}~ (BG + C)_i w = \min_{{\rm \Lambda}_i \in {\rm \Pi}_i}~
{\rm \Lambda_i} h, ~ \forall i
\end{equation}
where $\rm \Lambda$ is a matrix consisting of the dual variables, ${\rm \Lambda_i}$ is the $i$-th row of $\rm \Lambda$ and also the dual variable of the $i$-th LP in (\ref{eq:App-03-AARO-RC-3}), and the set
\begin{equation}
{\rm \Pi}_i = \{ {\rm \Lambda}_i ~|~ {\rm \Lambda}_i \ge 0,~
{\rm \Lambda}_i S = (BG + C)_i \} \notag
\end{equation}
is the feasible region of the $i$-th dual LP.
The minimization operator on the right-hand side of (\ref{eq:App-03-AARO-RC-3}) can be omitted when the overall objective is a minimization: exhibiting any feasible ${\rm \Lambda}_i$ with ${\rm \Lambda}_i h \le 0$ certifies the constraint. Moreover, if we adopt the minimum nominal cost criterion, the ARO problem with a linear decision rule in the second stage can be formulated as an LP
\begin{equation}
\label{eq:App-03-AARO-RC-LP}
\begin{aligned}
\min~~ & c^T x + d^T y^0 \\
\mbox{s.t.} ~~ & Ax + By^0 \le b, {\rm \Lambda} h \le 0 \\
& {\rm \Lambda} \ge 0,~ {\rm \Lambda} S = BG + C
\end{aligned}
\end{equation}
In (\ref{eq:App-03-AARO-RC-LP}), decision variables are vectors $x$ and $y^0$, gain matrix $G$ and dual matrix $\rm \Lambda$. The constraints actually constitute a lifted formulation for $X_R$ in (\ref{eq:App-03-ARO-XR}). If the min-max cost criterion is employed, the objective can be transformed into a linear inequality constraint with uncertainty via an epigraph form, whose robust form can be derived using similar procedures shown above.
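The weak-duality certificate underlying this reformulation can be illustrated numerically: for any row $c$ of $BG+C$, a vector $\lambda \ge 0$ with $\lambda S = c$ and $\lambda h \le 0$ proves $c\, w \le 0$ on all of $W$. The 2-D box uncertainty set and the row $c$ below are made up for illustration.

```python
# Hypothetical instance: W = {w : 0 <= w_1 <= 1, 0 <= w_2 <= 2} as S w <= h.
S = [[1, 0], [0, 1], [-1, 0], [0, -1]]
h = [1, 2, 0, 0]
c = [-3, -1]                               # a made-up row of BG + C
lam = [0, 0, 3, 1]                         # lam >= 0, lam S = c, lam h = 0

assert all(l >= 0 for l in lam)
assert [sum(lam[k]*S[k][j] for k in range(4)) for j in range(2)] == c
bound = sum(l*v for l, v in zip(lam, h))   # dual bound lam h on max_w c w

# sample W on a grid and verify c w never exceeds the dual bound
worst = max(c[0]*(i/10) + c[1]*(j/10)
            for i in range(11) for j in range(21))
```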
The affine policy based method is attractive because it reduces the conservatism of the static RO approach by incorporating corrective actions, while retaining computational tractability. In theory, the affine assumption restricts the adaptability of the recourse stage. Nevertheless, the work in \cite{AARO-Opt-1,AARO-Opt-2,AARO-Opt-3} shows that linear decision rules are optimal or near optimal for many practical problems.
For more information on other decision rules and their reformulations, please see \cite{RO-Detail-1} (Chapter 14.3) for the quadratic decision rule, \cite{ARO-Extend-Affine-Policy} for the extended linear decision rule, \cite{ARO-Finite-Adapt-1,ARO-Finite-Adapt-2} for the piecewise constant decision rule (finite adaptability), \cite{ARO-PWL-DR-1,ARO-PWL-DR-2} for the piecewise linear decision rule, and \cite{ARO-General-DR} for generalized decision rules. The methods in \cite{ARO-Finite-Adapt-1,ARO-PWL-DR-2} can be used to cope with integer wait-and-see decision variables. See also \cite{RO-Guide}.
\subsection{Algorithms for Fully Adjustable Models}
\label{App-C-Sect02-03}
Fully adjustable models are generally NP-hard \cite{ARO-Benders-Decomposition}. To find the solution in Definition \ref{df:App-03-SRO-Optimality}, the model is decomposed into a master problem and a subproblem, which are solved iteratively; sequences of lower and upper bounds on the optimal value are generated until they get close enough to each other. To explain the algorithm for ARO problems, we discuss two instances.
{\noindent \bf 1. Second-stage problem is an LP}
Now we consider problem (\ref{eq:App-03-ARO}) without specific functional assumptions on the wait-and-see variables. We start from the second-stage LP with fixed $x$ and $w$:
\begin{equation}
\label{eq:App-03-ARO-Inner-LP}
\begin{aligned}
\min_{y}~~ & d^T y \\
\mbox{s.t.}~~ & By \le b - Ax - Cw : u
\end{aligned}
\end{equation}
where $u$ is the dual variable, and the dual LP of (\ref{eq:App-03-ARO-Inner-LP}) is
\begin{equation}
\label{eq:App-03-ARO-Inner-Dual}
\begin{aligned}
\max_{u}~~ & u^T (b - Ax - Cw) \\
\mbox{s.t.}~~ & B^T u = d,~ u \le 0
\end{aligned}
\end{equation}
If the primal LP (\ref{eq:App-03-ARO-Inner-LP}) has a finite optimum, the dual LP (\ref{eq:App-03-ARO-Inner-Dual}) is also feasible and has the same optimum; otherwise, if (\ref{eq:App-03-ARO-Inner-LP}) is infeasible, then (\ref{eq:App-03-ARO-Inner-Dual}) will be unbounded. Sometimes, an improper choice of $x$ indeed leads to an infeasible second-stage problem. To detect infeasibility, consider the following LP with slack variables
\begin{equation}
\label{eq:App-03-ARO-Inner-LP-Slack}
\begin{aligned}
\min_{y,s}~~ & 1^T s \\
\mbox{s.t.}~~ & s \ge 0 \\
& B y - I s \le b - Ax - Cw : u
\end{aligned}
\end{equation}
Its dual LP is
\begin{equation}
\label{eq:App-03-ARO-Inner-Dual-Slack}
\begin{aligned}
\max_{u}~~ & u^T (b - Ax - Cw) \\
\mbox{s.t.}~~ & B^T u = 0,~ -1 \le u \le 0
\end{aligned}
\end{equation}
(\ref{eq:App-03-ARO-Inner-LP-Slack}) and (\ref{eq:App-03-ARO-Inner-Dual-Slack}) are always feasible and have the same finite optimal value. If the optimal value is equal to 0, then LP (\ref{eq:App-03-ARO-Inner-LP}) is feasible; otherwise, if the optimal value is strictly positive, then LP (\ref{eq:App-03-ARO-Inner-LP}) is infeasible.
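The phase-1 idea behind the slack LP can be illustrated by brute force for a scalar $y$: given $y$, the optimal slack is $s_i = \max(0, (\text{row}_i) y - \text{rhs}_i)$, so scanning $y$ over a grid approximates the minimum total slack. The instance data below are made up; this is a sketch, not the LP itself.

```python
def infeasibility_gap(rows, rhs, y_grid):
    """min 1^T s over s >= 0, row_i * y - s_i <= rhs_i, for scalar y scanned
    over a grid; for fixed y, the optimal slack is s_i = max(0, row_i*y - rhs_i)."""
    return min(sum(max(0.0, a*y - r) for a, r in zip(rows, rhs))
               for y in y_grid)

grid = [i/100 for i in range(-500, 501)]
gap_infeasible = infeasibility_gap([1.0, -1.0], [1.0, -2.0], grid)  # y<=1 and y>=2 conflict
gap_feasible = infeasibility_gap([1.0, -1.0], [2.0, -1.0], grid)    # y in [1,2] works
```

A strictly positive gap (here 1, the distance between the conflicting bounds) flags infeasibility; a zero gap certifies feasibility, matching the criterion in the text.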
For notational brevity, define feasible sets for the dual variable
\begin{equation}
\begin{aligned}
U_O & = \{ u ~|~ B^T u = d,~ u \le 0 \} \\
U_F &= \{ u ~|~ B^T u = 0,~ -1 \le u \le 0 \}
\end{aligned}
\notag
\end{equation}
The former is associated with the dual form (\ref{eq:App-03-ARO-Inner-Dual}) of the second-stage optimization problem (\ref{eq:App-03-ARO-Inner-LP}); the latter corresponds to the dual form (\ref{eq:App-03-ARO-Inner-Dual-Slack}) of the second-stage feasibility test problem (\ref{eq:App-03-ARO-Inner-LP-Slack}).
Next, we proceed to the middle level with fixed $x$:
\begin{equation}
\label{eq:App-03-ARO-Middle-Linear-max-min}
R(x)=\max_{w \in W} \min_{y \in Y(x,w)} d^T y
\end{equation}
which is a linear max-min problem that identifies the worst-case uncertainty. If LP (\ref{eq:App-03-ARO-Inner-LP}) is feasible for every $w \in W$, then $x \in X_R$ as defined in (\ref{eq:App-03-ARO-XR}); otherwise, if LP (\ref{eq:App-03-ARO-Inner-LP}) is infeasible for some $w \in W$, then $x \notin X_R$ and $R(x) = +\infty$.
To check whether $x \in X_R$ or not, we investigate the following problem
\begin{equation}
\label{eq:App-03-ARO-Middle-Linear-max-min-Fea}
\begin{aligned}
\max_{w}~~ & \min_{y,s} 1^T s \\
\mbox{s.t.}~~ & w \in W,~ s \ge 0 \\
& B y - I s \le b - Ax - Cw : u
\end{aligned}
\end{equation}
It maximizes the minimum of (\ref{eq:App-03-ARO-Inner-LP-Slack}) over all possible values of $w \in W$. Since the optimal values of (\ref{eq:App-03-ARO-Inner-LP-Slack}) and (\ref{eq:App-03-ARO-Inner-Dual-Slack}) are equal, problem (\ref{eq:App-03-ARO-Middle-Linear-max-min-Fea}) is equivalent to maximizing the optimal value of (\ref{eq:App-03-ARO-Inner-Dual-Slack}) over the uncertainty set $W$, leading to the bilinear program
\begin{equation}
\label{eq:App-03-ARO-Middle-BLP-Fea}
\begin{aligned}
r(x) =\max_{u,w}~~ & u^T (b - Ax - Cw) \\
\mbox{s.t.}~~ & w \in W,~ u \in U_F
\end{aligned}
\end{equation}
Because both $W$ and $U_F$ are bounded, (\ref{eq:App-03-ARO-Middle-BLP-Fea})
must have a finite optimum. Clearly, $0 \in U_F$, so $r(x)$ is non-negative. If $r(x) = 0$, then $x \in X_R$; if $r(x)> 0$, then $x \notin X_R$. Through this duality transformation, the opposed optimization operators in (\ref{eq:App-03-ARO-Middle-Linear-max-min-Fea}) collapse into a single maximization, i.e., a traditional NLP.
For similar reasons, by replacing the second-stage LP (\ref{eq:App-03-ARO-Inner-LP}) with its dual LP (\ref{eq:App-03-ARO-Inner-Dual}), problem (\ref{eq:App-03-ARO-Middle-Linear-max-min}) is equivalent to the following bilinear program
\begin{equation}
\label{eq:App-03-ARO-Middle-BLP-Opt}
\begin{aligned}
r(x) =\max_{u,w}~~ & u^T (b - Ax - Cw) \\
\mbox{s.t.}~~ & w \in W,~ u \in U_O
\end{aligned}
\end{equation}
The fact that a linear max-min problem can be transformed into a bilinear program using LP duality is reported in \cite{Linear-max-min-BLP}. Bilinear programs can be solved locally by general purpose NLP solvers, but the non-convexity prevents a global optimal solution from being found easily. In what follows, we introduce some methods that exploit specific features of the uncertainty set and are widely used by the research community. Since (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) differ only in the dual feasibility set, we will use $U$ to refer to either $U_F$ or $U_O$ in the unified solution method.
{\noindent \bf a. General polytope}
Suppose that the uncertainty set is described by
\begin{equation}
W = \{w ~|~ S w \le h \} \notag
\end{equation}
An important feature of (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) is that the constraint sets $W$ and $U$ are separate and no constraint involves $w$ and $u$ simultaneously, so the bilinear program can be written in the following format
\begin{equation}
\label{eq:App-03-ARO-BLP-Poly-1}
\begin{aligned}
\max_{u \in U}~~ u^T (b - A x) + \max_{w} ~~ & (- u^T C w ) \\
\mbox{s.t.}~~ & S w \le h : \xi
\end{aligned}
\end{equation}
The bilinear term $u^T C w$ is non-convex. Treat the second part $\max _{w \in W} (- u^T C w )$ as an LP in $w$ with $u$ as a parameter; its KKT optimality conditions are
\begin{equation}
\label{eq:App-03-ARO-BLP-Poly-2}
\begin{gathered}
0 \le \xi \bot h - S w \ge 0 \\
S^T \xi + C^T u = 0
\end{gathered}
\end{equation}
A stationary point of LCP (\ref{eq:App-03-ARO-BLP-Poly-2}) gives the optimal primal and dual solutions simultaneously. As the uncertainty set is a bounded polyhedron, the optimal solution must be bounded and strong duality holds, so we can replace $- u^T C w$ in the objective with the linear term $h^T \xi$ and the additional constraints in (\ref{eq:App-03-ARO-BLP-Poly-2}). Moreover, the complementarity condition in (\ref{eq:App-03-ARO-BLP-Poly-2}) can be linearized via the method in Appendix \ref{App-B-Sect03-05}. In summary, problem (\ref{eq:App-03-ARO-BLP-Poly-1}) can be solved via an equivalent MILP
\begin{equation}
\label{eq:App-03-ARO-BLP-Poly-MILP}
\begin{aligned}
\max_{u,w,\xi}~~ & u^T (b - A x) + h^T \xi \\
\mbox{s.t.}~~ & u \in U,~ \theta \in \{0,1\}^m \\
& S^T \xi + C^T u = 0 \\
& 0 \le \xi \le M(1 - \theta) \\
& 0 \le h - S w \le M \theta
\end{aligned}
\end{equation}
where $m$ is the dimension of $\theta$, and $M$ is a large enough constant. Compared with (\ref{eq:App-03-ARO-BLP-Poly-1}), non-convexity migrates from the objective function to the constraints with binary variables. The number of binary variables in (\ref{eq:App-03-ARO-BLP-Poly-MILP}) only depends on the number of constraints in set $W$, and is independent of the dimension of $x$.
Another heuristic method for bilinear programs in the form of (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) is the mountain climbing method in \cite{BLP-Mountain-Climbing}, summarized in Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing}.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf : Mountain climbing}
\begin{algorithmic}[1]
\STATE Choose a convergence tolerance $\varepsilon>0$, and an initial $w^* \in W$;
\STATE Solve the following LP with current $w^*$
\begin{equation}
\label{eq:App-03-BLP-Mountain-Climbing-LP1}
R_1 = \max_{u \in U} ~ u^T ( b - A x - C w^*)
\end{equation}
The optimal solution is $u^*$ and the optimal value is $R_1$;
\STATE Solve the following LP with current $u^*$
\begin{equation}
\label{eq:App-03-BLP-Mountain-Climbing-LP2}
R_2 = \max_{w \in W} ~ (b - A x - C w)^T u^*
\end{equation}
The optimal solution is $w^*$ and the optimal value is $R_2$;
\STATE If $R_2 - R_1 \le \varepsilon$, report the optimal value $R_2$ as well as the optimal solution $w^* ,u^*$, and terminate; otherwise, go to step 2.
\end{algorithmic}
\label{Ag:App-03-BLP-Mountain-Climbing}
\end{algorithm}
The optimal solution of an LP can always be found at a vertex of its feasible region, hence $w^* \in {\rm{vert}}(W)$ and $u^* \in {\rm{vert}}(U)$ hold. As its name implies, the sequence of objective values generated by Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing} is monotonically increasing until a local maximum is found \cite{BLP-Mountain-Climbing}. Convergence is guaranteed by the finiteness of $\mbox{vert}(U)$ and $\mbox{vert}(W)$. If we try multiple carefully chosen initial points and keep the best of the returned results, the solution quality is often satisfactory. The key point is that these initial points should span most directions of the $w$-subspace. For example, one may search the $2m$ points on the boundary of $W$ in directions $\pm e^m_i$, $i=1,2,\cdots,m$, where $m$ is the dimension of $w$, and $e^m_i$ is the $i$-th column of an $m \times m$ identity matrix. As LPs can be solved very efficiently, Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing} is especially suitable for instances with very complicated $U$ and $W$, and usually outperforms general NLP solvers on bilinear programs with disjoint constraints.
Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing} remains valid if $W$ is another convex set, say, an ellipsoid, and converges to a local optimum in a finite number of iterations for a given precision \cite{BLP-Mountain-Climbing-BCVX}.
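As a concrete illustration of the mountain climbing iteration, the following minimal Python sketch alternates the two LPs for a toy disjoint bilinear program in which both $U$ and $W$ are boxes, so each LP has a closed-form vertex solution; all instance data (the vector $d = b - Ax$, the matrix $C$, and the box bounds) are made up for illustration.

```python
def lp_over_box(c, lo, hi):
    """max c^T v over the box lo <= v <= hi; an optimal vertex, coordinate-wise."""
    v = [hi[j] if cj > 0 else lo[j] for j, cj in enumerate(c)]
    return v, sum(cj*vj for cj, vj in zip(c, v))

def mountain_climb(d, C, u_lo, u_hi, w_lo, w_hi, w0, tol=1e-9, itmax=100):
    """Alternate LPs in u and w for max_{u, w} u^T (d - C w), with d = b - A x."""
    w = list(w0)
    for _ in range(itmax):
        # LP in u with w fixed (step 2 of the algorithm)
        cu = [d[i] - sum(C[i][j]*w[j] for j in range(len(w)))
              for i in range(len(d))]
        u, R1 = lp_over_box(cu, u_lo, u_hi)
        # LP in w with u fixed (step 3): objective u^T d - (C^T u)^T w
        cw = [-sum(C[i][j]*u[i] for i in range(len(u)))
              for j in range(len(w))]
        w, inc = lp_over_box(cw, w_lo, w_hi)
        R2 = sum(ui*di for ui, di in zip(u, d)) + inc
        if R2 - R1 <= tol:                 # step 4: converged
            break
    return R2, u, w

# made-up data: d = (-1,-1), C = I, U = [-1,0]^2, W = [0,1]^2, start at w = 0
R, u_opt, w_opt = mountain_climb([-1.0, -1.0], [[1.0, 0.0], [0.0, 1.0]],
                                 [-1.0, -1.0], [0.0, 0.0],
                                 [0.0, 0.0], [1.0, 1.0], [0.0, 0.0])
```

On this instance the iteration climbs from 2 to the optimum 4 in two rounds and stops when the two LP values coincide.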
{\noindent \bf b. Cardinality constrained uncertainty set}
A continuous cardinality constrained uncertainty set in the form of (\ref{eq:App-03-SRO-US-Card}) is a special class of the polyhedral case; see the transformation in (\ref{eq:App-03-SRO-US-Card-Lift}). Therefore, the previous method can be applied, and the number of inequalities in the polyhedral form is $3m+1$, which equals the number of binary variables in MILP (\ref{eq:App-03-ARO-BLP-Poly-MILP}). As revealed in Proposition \ref{pr:App-03-Invariance-3}, for a polyhedral uncertainty set we need only consider the extreme points.
Consider a discrete cardinality constrained uncertainty set
\begin{subequations}
\label{eq:App-03-ARO-US-Card}
\begin{equation}
\label{eq:App-03-ARO-W-Card}
W = \left\{ w \middle| \begin{gathered}
w_j = w^0_j + w^+_j z^+_j - w^-_j z^-_j, \forall j \\
\exists ~ z^+,z^- \in Z
\end{gathered} \right\}
\end{equation}
\begin{equation}
\label{eq:App-03-ARO-Z-Card}
Z = \left\{ z^+,z^- \middle| \begin{gathered}
z^+,~ z^- \in \{0,1\}^m \\
z^+_j + z^-_j \le 1,~ \forall j \\
1^T ( z^+ + z^-) \le {\rm \Gamma}
\end{gathered} \right\}
\end{equation}
\end{subequations}
where the budget of uncertainty ${\rm \Gamma} \le m$ is an integer. In (\ref{eq:App-03-ARO-W-Card}), each element $w_j$ takes one of three possible values: $w^0_j$, $w^0_j + w^+_j$, and $w^0_j - w^-_j$, and at most $\rm \Gamma$ of the $m$ elements $w_j$ can take a value different from $w^0_j$. If the forecast error is symmetric, i.e., $w^+_j = w^-_j$, then (\ref{eq:App-03-ARO-US-Card}) is called symmetric, as the nominal scenario lies at the center of $W$. We discuss this case separately because this representation allows the non-convexity in (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) to be linearized with fewer binary variables.
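The discrete budget set is small enough to enumerate for modest $m$ and $\Gamma$. A pure-Python sketch (nominal values and deviations chosen arbitrarily):

```python
from itertools import product

def card_scenarios(w0, wp, wm, gamma):
    """Enumerate the discrete cardinality constrained set: each w_j is one of
    w0_j, w0_j + wp_j, w0_j - wm_j, with at most gamma deviating components."""
    m = len(w0)
    scen = []
    for devs in product((0, 1, -1), repeat=m):
        if sum(d != 0 for d in devs) <= gamma:
            scen.append(tuple(
                w0[j] + (wp[j] if d == 1 else -wm[j] if d == -1 else 0.0)
                for j, d in enumerate(devs)))
    return scen

# m = 3, Gamma = 1: nominal scenario plus 3 * 2 single deviations = 7 scenarios
scen = card_scenarios([0.0]*3, [1.0]*3, [1.0]*3, gamma=1)
```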
Expanding the bilinear term $u^T C w$ in an element-wise form
\begin{equation}
u^T C w = u^T C w^0 + \sum_i \sum_j
( c_{ij} w^+_j u_i z^+_j - c_{ij} w^-_j u_i z^-_j ) \notag
\end{equation}
where $c_{ij}$ is the $(i,j)$-th element of matrix $C$. Let
\begin{equation}
v^+_{ij} = u_i z^+_j,~ v^-_{ij} = u_i z^-_j,~ \forall i,~ \forall j \notag
\end{equation}
the bilinear term can be expressed via a linear function. The product of a binary variable and a continuous variable can be linearized via the method described in Appendix \ref{App-B-Sect02-02}.
In conclusion, bilinear subproblems (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) can be solved via MILP
\begin{equation}
\label{eq:App-03-ARO-W-Card-MILP}
\begin{aligned}
\max ~~ & u^T ( b - A x ) - u^T C w^0 - \sum_i \sum_j
( c_{ij} w^+_j v^+_{ij} - c_{ij} w^-_j v^-_{ij} ) \\
\mbox{s.t.} ~~ & u \in U,~ \{z^+,z^- \} \in Z \\
& 0 \le v^+_{ij} - u_i \le M (1 - z^+_j),~ -M z^+_j \le v^+_{ij} \le 0,~
\forall i, \forall j \\
& 0 \le v^-_{ij} - u_i \le M (1 - z^-_j),~ -M z^-_j \le v^-_{ij} \le 0,~
\forall i, \forall j
\end{aligned}
\end{equation}
where $M=1$ for problem (\ref{eq:App-03-ARO-Middle-BLP-Fea}) since $-1 \le u \le 0$, and $M$ is a sufficiently large number for problem (\ref{eq:App-03-ARO-Middle-BLP-Opt}), because there are no clear bounds on the dual variable $u$. The number of binary variables in MILP (\ref{eq:App-03-ARO-W-Card-MILP}) is $2m$, which is less than that in (\ref{eq:App-03-ARO-BLP-Poly-MILP}) if the uncertainty set is replaced by its convex hull. The number of additional continuous variables $v^+_{ij}$ and $v^-_{ij}$ is also moderate when the matrix $C$ is sparse.
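That the big-M constraints pin down $v = u z$ exactly (for $u \in [-M, 0]$ and binary $z$) can be verified by brute force over a grid of candidate values; the grid resolution and tolerance below are arbitrary.

```python
def v_candidates(u, z, big_m, grid, eps=1e-9):
    """All v on a grid satisfying the big-M constraints
    0 <= v - u <= M(1-z) and -M z <= v <= 0."""
    return [v for v in grid
            if -eps <= v - u <= big_m*(1 - z) + eps
            and -big_m*z - eps <= v <= eps]

grid = [k/100 for k in range(-100, 1)]     # candidate v values in [-1, 0]
# for every u in [-1, 0] and z in {0, 1}, the unique feasible v equals u*z
unique = all(
    len(v_candidates(k/100, z, 1.0, grid)) == 1
    and abs(v_candidates(k/100, z, 1.0, grid)[0] - (k/100)*z) < 1e-9
    for k in range(-100, 1) for z in (0, 1)
)
```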
Finally, we are ready to present the decomposition algorithm proposed in \cite{ARO-CCG}. In light of Proposition \ref{pr:App-03-Invariance-3},
it is sufficient to consider the extreme points $w^1$, $w^2$, $\cdots$, $w^S$ of the uncertainty set, inspiring the following epigraph formulation, which is equivalent to (\ref{eq:App-03-ARO})
\begin{equation}
\label{eq:App-03-ARO-Epigraph}
\begin{aligned}
\min_{x,y^s,\eta } ~~ & c^T x + \eta \\
\mbox{s.t.} ~~ & x \in X \\
& \eta \ge d^T y^s,~ \forall s \\
& A x + B y^s \le b - C w^s, \forall s
\end{aligned}
\end{equation}
Recalling (\ref{eq:App-03-ARO-XR-Lift}), the last constraint is in fact a lifted formulation of $X_R$. For polytope and cardinality constrained uncertainty sets, the number of extreme points is finite, but may grow exponentially in the dimension of the uncertainty. In fact, it is difficult and also unnecessary to enumerate every extreme point, because most of them provide redundant constraints. A smarter method is to identify the active scenarios that contribute binding constraints in $X_R$. This idea has been widely used in complex optimization problems and is formalized in Sect. \ref{App-C-Sect01-03}. The adaptive scenario generation algorithm for ARO is summarized in Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation}.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf : Adaptive scenario generation}
\begin{algorithmic}[1]
\STATE Choose a tolerance $\varepsilon > 0$, set $LB = -\infty$, $UB = +\infty$, iteration index $k = 0$, and the critical scenario set $O = \{w^0\}$;
\STATE Solve the following master problem
\begin{equation}
\label{eq:App-03-ARO-ASG-Master}
\begin{aligned}
\min_{x,y^s,\eta } ~~ & c^T x + \eta \\
\mbox{s.t.} ~~ & x \in X,~ \eta \ge d^T y^s,~ s = 0,\cdots, k \\
& A x + B y^s \le b - C w^s, \forall w^s \in O
\end{aligned}
\end{equation}
The optimal solution is $x^{k+1}$, $\eta^{k+1}$, and update $LB = c^T x^{k+1} + \eta^{k+1} $;
\STATE Solve bilinear feasibility testing problem (\ref{eq:App-03-ARO-Middle-BLP-Fea}) with $x^{k+1}$, the optimal solution being $w^{k+1}$, $u^{k+1}$; if the optimal value $r^{k+1} > 0$, update $O = O \cup \{w^{k+1}\}$, and add a scenario cut
\begin{equation}
\label{eq:App-03-ARO-ASG-Cut}
\eta \ge d^T y^{k+1},~ A x + B y^{k+1} \le b - C w^{k+1}
\end{equation}
with a new variable $y^{k+1}$ to the master problem (\ref{eq:App-03-ARO-ASG-Master}), update $k \leftarrow k+1$, and go to Step 2;
\STATE Solve bilinear optimality testing problem (\ref{eq:App-03-ARO-Middle-BLP-Opt}) with $x^{k+1}$, the optimal solution being $w^{k+1}$, $u^{k+1}$ and the optimal value $R^{k+1}$; update $O = O \cup \{w^{k+1}\}$ and $UB = c^T x^{k+1} + R^{k+1}$, and create scenario cut (\ref{eq:App-03-ARO-ASG-Cut}) with a new variable $y^{k+1}$.
\STATE If $UB - LB \le \varepsilon$, report the optimal solution, terminate; otherwise, add the scenario cut in step 4 to the master problem (\ref{eq:App-03-ARO-ASG-Master}), update $k \leftarrow k+1$, and go to step 2;
\end{algorithmic}
\label{Ag:App-03-ARO-Adaptive-Scenario-Generation}
\end{algorithm}
Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation} converges in a finite number of iterations, bounded by the number of extreme points of the uncertainty set. In practice, the algorithm often converges in a few iterations, because problems (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) always identify the most critical scenario that should be considered. This is why we name the algorithm ``adaptive scenario generation''. It is called the ``constraint-and-column generation algorithm'' in \cite{ARO-CCG}, because the numbers of decision variables (columns) and constraints increase simultaneously. Note that the scenario cut unifies the feasibility cut and optimality cut used in the existing literature.
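The master-subproblem interplay can be sketched on a toy fully adjustable instance, $\min_x \{x + \max_w \min_y 2y : y \ge w - x,\ y \ge 0\}$ with a finite scenario set; the second stage then has a closed form, so both levels are solved by brute force. All data are invented for illustration, and the real algorithm solves LPs and bilinear programs instead of scanning grids.

```python
def second_stage(x, w):
    """min_y 2y s.t. y >= w - x, y >= 0 (closed form for this toy instance)."""
    return 2.0 * max(0.0, w - x)

def adaptive_scenario_generation(W, x_grid, tol=1e-9):
    """Master over the critical scenario set O; worst-case subproblem over the
    full (here: finite) uncertainty set W, mirroring the algorithm's loop."""
    O, LB, UB = [W[0]], -float("inf"), float("inf")
    x = x_grid[0]
    while UB - LB > tol:
        # master problem: min over x of first-stage cost + worst cut in O
        x, LB = min(((x, x + max(second_stage(x, w) for w in O))
                     for x in x_grid), key=lambda t: t[1])
        # subproblem: worst-case scenario over the whole uncertainty set
        w_star, R = max(((w, second_stage(x, w)) for w in W),
                        key=lambda t: t[1])
        UB = min(UB, x + R)
        if w_star not in O:
            O.append(w_star)                # scenario cut
    return x, UB, O

x_opt, val, crit = adaptive_scenario_generation(
    W=[0.0, 1.0, 2.0], x_grid=[i/10 for i in range(51)])
```

Here only two of the three scenarios ever enter the critical set before the bounds meet at the optimum $x = 2$, value $2$.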
Bilinear subproblems (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) can be solved by the methods discussed previously, according to the form of the uncertainty set. In Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation}, we utilize $w$ to create scenario cuts, which are also called primal cuts. In fact, the optimal dual variable $u$ of (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) provides sensitivity information, and can be used to construct dual cuts, each of which is a single inequality in the first-stage variable $x$; see the Benders decomposition algorithm in \cite{ARO-Benders-Decomposition}. Since scenario cuts are much tighter than Benders cuts, Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation} is the most prevalent method for solving ARO problems.
If matrix $A$ is uncertainty-dependent, the scenario constraints in the master problem (\ref{eq:App-03-ARO-ASG-Master}) become $A(w^s) x + B y^s \le b - C w^s$, $\forall w^s$, where $A(w^s)$ is constant but varies across scenarios; the objective function of the bilinear subproblems changes to $u^T [b - A(w)x - Cw]$, where $x$ is given in the subproblem. If $A$ can be expressed as a linear function of $w$, the problem structure remains the same, and the previous methods are still valid. Even if the second-stage problem is an SOCP, the adaptive scenario generation framework remains applicable, and the key procedure is to solve a max-min SOCP. Such a model originates from the robust operation of a power distribution network with uncertain generation and demand. By dualizing the inner-most SOCP, the max-min SOCP is cast as a bi-convex program, which can be solved globally via an MISOCP or locally via the mountain climbing method.
Recently, a duality theory for fully adjustable robust optimization problems has been proposed in \cite{Duality-ARO}. It has been shown that this kind of problem is self-dual, i.e., the dual problem remains an ARO problem; however, solving the dual problem may be more efficient. An extended CCG algorithm that always produces a feasible first-stage decision (if one exists) is proposed in \cite{Ext-CCG-ARO}.
{\noindent \bf 2. Second-stage problem is an MILP}
Now we consider the case in which some of the wait-and-see decisions are discrete. As observed in the previous case, the most important tasks in solving an ARO problem are to validate feasibility and optimality, which boil down to solving a linear max-min problem. When the wait-and-see decisions are continuous and the second-stage problem is linear, LP duality theory casts the linear max-min problem as a traditional bilinear program. However, discrete variables in the second stage make the recourse problem a mixed-integer linear max-min problem with a non-convex inner level, preventing the use of LP duality theory. As a result, validating feasibility and optimality becomes more challenging.
The compact form of an ARO problem with integer wait-and-see decisions can be written as
\begin{equation}
\label{eq:App-03-ARO-MIP-Recourse}
\min_{x \in X} \left\{ c^T x + \max_{w \in W} \min_{y,z \in Y(x,w)} d^T y + g^T z \right\}
\end{equation}
where $z$ is binary and depends on the exact value of $w$; the feasible region
\begin{equation}
Y(x,w) = \left\{ y,z ~\middle|~ \begin{gathered}
By + Gz \le b - Ax - Cw \\
y \in \mathbb{R}^{m_1},~ z \in {\rm \Phi}
\end{gathered} \right\} \notag
\end{equation}
where the feasible set ${\rm \Phi} = \{z ~|~ z \in \mathbb{B}^{m_2},~ T z \le v\}$; $m_1$ and $m_2$ are the dimensions of $y$ and $z$; $T$ and $v$ are constant coefficients; all coefficient matrices have compatible dimensions. We assume that the uncertainty set $W$ can be represented by a finite number of extreme points. This kind of problem is studied in \cite{ARO-MIP-Nested-CCG}, where a nested constraint-and-column generation algorithm is proposed.
Unlike the mainstream approach of solving a linear max-min program directly as a bilinear program, the mixed-integer max-min program in (\ref{eq:App-03-ARO-MIP-Recourse}) is expanded into a tri-level problem
\begin{equation}
\label{eq:App-03-MILP-max-min}
\begin{aligned}
\max_{w \in W} \min_{z \in {\rm \Phi}} g^T z + \min_{y}~~ & d^T y \\
\mbox{s.t.}~~ & By \le b - Ax - Cw -G z
\end{aligned}
\end{equation}
For ease of discussion, we assume all feasible sets are bounded, because the decision variables of practical problems have physical bounds. By replacing the innermost LP in variable $y$ with its dual, problem (\ref{eq:App-03-MILP-max-min}) becomes
\begin{equation}
\label{eq:App-03-MILP-Recoure-Trilevel}
\max_{w \in W} \left\{ \min_{z \in {\rm \Phi}} \left\{ g^T z +
\max_{u \in U} u^T (b - Ax - Cw -G z) \right\} \right\}
\end{equation}
where $u$ is the dual variable, and the set $U = \{ u ~|~ u \le 0,~ B^T u = d \}$. Because both $w$ and $z$ are expressed via binary variables, the bilinear terms $u^T C w$ and $u^T G z$ have linear representations by the method in Appendix \ref{App-B-Sect02-02}. Since $\rm \Phi$ has finitely many elements, problem (\ref{eq:App-03-MILP-Recoure-Trilevel}) (in its linearized version) has the same form as ARO problem (\ref{eq:App-03-ARO}), and can be solved by Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation}. More precisely, write (\ref{eq:App-03-MILP-Recoure-Trilevel}) in an epigraph form by enumerating all possible elements $z \in {\rm \Phi}$, then run Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation} to identify the binding elements. In this way, the minimization operator in the middle level is eliminated.
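The enumeration of $z \in {\rm \Phi}$ can be illustrated on a toy tri-level instance with one scalar binary $z$ and a closed-form inner LP; all coefficients ($d=2$, $g=1$, $G=1$, the threshold 3) are invented for illustration.

```python
def q(x, w, z):
    """Inner LP in closed form: min_y 2y s.t. y >= 3 - x - w - z, y >= 0."""
    return 2.0 * max(0.0, 3.0 - x - w - z)

def mixed_integer_max_min(x, W, Phi=(0, 1), g=1.0):
    """Worst case over w of the best mixed-integer recourse g*z + q(x, w, z);
    the finite set Phi is enumerated explicitly, mirroring the epigraph argument."""
    return max(min(g*z + q(x, w, z) for z in Phi) for w in W)

# worst-case recourse cost for x = 1 over two candidate scenarios
worst = mixed_integer_max_min(1.0, W=[-1.0, 0.0])
```

For each scenario, the inner minimum picks the better of $z=0$ and $z=1$; the outer maximum then selects the scenario where even the best discrete recourse remains expensive.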
The nested adaptive scenario generation algorithm for ARO problem (\ref{eq:App-03-ARO-MIP-Recourse}) with mixed-integer recourse is summarized in Algorithm \ref{Ag:App-03-ARO-Nested-ASG}. Because both $W$ and $\rm \Phi$ are finite sets with countably many elements, Algorithm \ref{Ag:App-03-ARO-Nested-ASG} converges in a finite number of iterations. Notice that we do not distinguish feasibility and optimality subproblems in the above algorithm due to their similarity. One can also introduce slack here-and-now variables in the second stage and penalty terms in the objective function, such that the recourse problem is always feasible. It should be pointed out that Algorithm \ref{Ag:App-03-ARO-Nested-ASG} incorporates double loops, and an MILP must be solved in each iteration of the inner loop, so one should not expect too much of its efficiency. Nonetheless, it is the first systematic method for solving an ARO problem with integer variables in the second stage. Another point worth clarifying is that although the second-stage discrete variable $z$ is treated as a scenario and enumerated on the fly when solving problem (\ref{eq:App-03-MILP-Recoure-Trilevel}) in step 3 (the inner loop), it is a decision variable of the master problem (\ref{eq:App-03-ARO-Nested-ASG-Master}) in the outer loop.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf : Nested adaptive scenario generation}
\begin{algorithmic}[1]
\STATE Choose a tolerance $\varepsilon > 0$; set $LB = -\infty$, $UB = +\infty$, iteration index $k = 0$, and the critical scenario set $O = \{w^0\}$;
\STATE Solve the following master problem
\begin{equation}
\label{eq:App-03-ARO-Nested-ASG-Master}
\begin{aligned}
\min_{x,y,z,\eta } ~~ & c^T x + \eta \\
\mbox{s.t.} ~~ & x \in X \\
& \eta \ge d^T y^s + g^T z^s,~ z^s \in {\rm \Phi},~ s = 0,\cdots,k \\
& A x + B y^s + G z^s \le b - C w^s,~ \forall w^s \in O
\end{aligned}
\end{equation}
Denote the optimal solution by $x^{k+1}$, $\eta^{k+1}$; update $LB = c^T x^{k+1} + \eta^{k+1}$;
\STATE Solve problem (\ref{eq:App-03-MILP-Recoure-Trilevel}) with $x^{k+1}$; denote the optimal solution by $(z^{k+1},w^{k+1},u^{k+1})$ and the optimal value by $R^{k+1}$; update $O = O \cup \{w^{k+1}\}$, $UB = \min \{UB, c^T x^{k+1} + R^{k+1}\}$; create new variables $(y^{k+1},z^{k+1})$ and scenario cuts
\begin{equation}
\label{eq:App-03-ARO-Nested-ASG-Cut}
\begin{gathered}
\eta \ge d^T y^{k+1} + g^T z^{k+1},~ z^{k+1} \in {\rm \Phi} \\
A x + B y^{k+1} + G z^{k+1} \le b - C w^{k+1}
\end{gathered}
\end{equation}
\STATE If $UB - LB \le \varepsilon$, terminate and report the optimal solution and optimal value; otherwise, add scenario cuts (\ref{eq:App-03-ARO-Nested-ASG-Cut}) to the master problem (\ref{eq:App-03-ARO-Nested-ASG-Master}), update $k \leftarrow k+1$, and go to step 2;
\end{algorithmic}
\label{Ag:App-03-ARO-Nested-ASG}
\end{algorithm}
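To make the scenario-generation mechanics concrete, the following minimal Python sketch runs the master/subproblem iteration on a made-up one-dimensional instance with purely continuous recourse (i.e., without the inner $z$-loop), using \texttt{scipy.optimize.linprog}; all data here is invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: min_{x>=0}  x + max_{w in W} min_{y>=0} { y : y >= w - x },
# i.e. the recourse cost is max(w - x, 0); W is a made-up finite scenario set.
W = [0.0, 1.0, 3.0, 2.0]
O = [W[0]]                          # critical scenario set (step 1)
LB, UB = -np.inf, np.inf
for _ in range(10):
    # Master: variables (x, eta, y_0, ..., y_{|O|-1}), all >= 0;
    # min x + eta  s.t.  eta >= y_s,  y_s >= w_s - x  for each w_s in O
    n = 2 + len(O)
    c = np.zeros(n)
    c[0], c[1] = 1.0, 1.0
    A_ub, b_ub = [], []
    for s, w in enumerate(O):
        row = np.zeros(n); row[1], row[2 + s] = -1.0, 1.0   # y_s - eta <= 0
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(n); row[0], row[2 + s] = -1.0, -1.0  # -x - y_s <= -w_s
        A_ub.append(row); b_ub.append(-w)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * n)
    x = res.x[0]
    LB = res.fun                     # the master value is a lower bound
    # Subproblem: worst-case recourse cost for the fixed x
    costs = [max(w - x, 0.0) for w in W]
    UB = min(UB, x + max(costs))     # update the upper bound
    if UB - LB <= 1e-9:
        break
    O.append(W[int(np.argmax(costs))])  # add the binding (worst) scenario
```

The loop reproduces the $LB$/$UB$ bookkeeping of the algorithm above and converges as soon as the worst scenario has been appended to $O$.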
As a short conclusion, to overcome the limitation of traditional static RO approaches, which require that all decisions be made without exact information on the underlying uncertainty, ARO employs a two-stage decision-making framework and allows a subset of decision variables to be determined after the uncertain data are revealed. Under some special decision rules, computational tractability can be preserved. In fully adjustable cases, the ARO problem can be solved by a decomposition algorithm, whose subproblem comes down to a (mixed-integer) linear max-min problem that is generally challenging to solve. We introduce MILP reformulations for special classes of uncertainty sets, which are compatible with commercial solvers and help solve engineering optimization problems in a systematic way.
\section{Distributionally Robust Optimization}
\label{App-C-Sect03}
Static and adjustable RO models presented in Sect. \ref{App-C-Sect01} and Sect. \ref{App-C-Sect02} do not rely on specifying probability distributions of the uncertain data, which are used in SO approaches for generating scenarios, evaluating the probability of constraint violation, or deriving analytic solutions for some specific problems. Instead, the RO design principle copes with the worst-case scenario in a pre-defined uncertainty set in the space of uncertain variables, which is a salient distinction between the two approaches. If the exact probability distribution is precisely known, optimal solutions to SO models would be less conservative than the robust ones from the statistical perspective. However, the optimal solution to an SO model could have poor statistical performance if the actual distribution is not identical to the designated one \cite{Bertsimas-2006}. As for the RO approach, since it hedges against the worst-case scenario, which rarely happens in reality, the robust strategy could be conservative and thus suboptimal in most cases.
A method which aims to build a bridge between the SO and RO approaches is DRO, whose optimal solutions are designed for the worst-case probability distribution within a family of candidate distributions described by statistical information, such as moments, and structural properties, such as symmetry and unimodality. This approach is generally less conservative than traditional RO because the dispersion effect of uncertainty is taken into account, i.e., the probability of an extreme event is low. Meanwhile, the statistical performance of the solution is less sensitive to perturbations in the probability distribution than that of an SO model, as it hedges against the worst distribution. Publications on this method have been proliferating rapidly in the past few years. This section only sheds light on some of the most representative methods which have been used in energy system studies.
\subsection{Static Distributionally Robust Optimization}
\label{App-C-Sect03-01}
In analogy with the terminology used in Sect. \ref{App-C-Sect01}, ``static'' means that all decision variables are of the here-and-now type. The theoretical outcomes in this part mainly come from \cite{Static-DRO}. A static DRO problem can be formulated as
\begin{equation}
\label{eq:App-03-DRO-Model-1}
\begin{aligned}
\min_x ~~ & c^T x \\
\mbox{s.t.} ~~ & x \in X \\
& \Pr \left( a_i (\xi)^T x \le b_i(\xi),~ i=1,\cdots,m \right) \ge 1 - \varepsilon,~ \forall f(\xi) \in {\mathcal P}
\end{aligned}
\end{equation}
where $x$ is the decision variable, $X$ is a closed and convex set that is independent of the uncertain parameter, $c$ is a deterministic vector, and $\xi$ is the uncertain data, whose probability density function $f(\xi)$ is not known exactly and belongs to $\mathcal P$, a set comprised of candidate distributions. The robust chance constraint in (\ref{eq:App-03-DRO-Model-1}) requires a finite number of linear inequalities depending on $\xi$ to be met with a probability of at least $1-\varepsilon$, regardless of the true probability density function of $\xi$. We assume the uncertain coefficients $a_i$ and $b_i$ are linear functions of $\xi$, i.e.
\begin{equation}
\begin{gathered}
a_i(\xi) = a^0_i + \sum_{j=1}^k a^j_i \xi_j \\
b_i(\xi) = b^0_i + \sum_{j=1}^k b^j_i \xi_j
\end{gathered} \notag
\end{equation}
where $a^0_i$, $a^j_i$ are constant vectors and $b^0_i$, $b^j_i$ are constant scalars. Define
\begin{equation}
y^j_i (x) = (a^j_i)^T x - b^j_i,~\forall i,~ \forall j \notag
\end{equation}
With this notation, the chance constraint in (\ref{eq:App-03-DRO-Model-1}) can be expressed as
\begin{equation}
\label{eq:App-03-DRO-RCC}
\Pr \left( y^0_i (x) + y_i(x)^T \xi \le 0,~ i=1,\cdots,m \right) \ge 1 - \varepsilon,~ \forall f(\xi) \in {\mathcal P}
\end{equation}
where vector $y_i(x)=[y^1_i(x),\cdots,y^k_i(x)]^T$ is affine in $x$. Since the objective is certain and constraint violation is bounded by a small probability, problem (\ref{eq:App-03-DRO-Model-1}) is also called a robust chance-constrained program.
Chance constraints can be transformed into tractable ones that are convex in the variable $x$ only in a few special cases. For example, if $\xi$ follows a Gaussian distribution, $\varepsilon \le 0.5$, and $m=1$, then the individual chance constraint without distribution uncertainty is equivalent to a single SOC constraint \cite{CCO-Gauss}. For $m >1$, joint chance constraints form a convex feasible region when the right-hand side terms $b_i(\xi)$ are uncertain and follow a log-concave distribution \cite{Static-DRO,CCP-RHS-Log-Concave}, while the coefficients $a_i$, $i=1,\cdots,m$, are deterministic.
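As a quick numerical illustration of the Gaussian case just mentioned (scalar, with made-up numbers), the deterministic SOC-type certificate $y^0 + y^1\mu + \Phi^{-1}(1-\varepsilon)\,\lvert y^1\rvert\,\sigma \le 0$ can be checked against a Monte Carlo estimate of the chance constraint:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma, eps = 1.0, 0.5, 0.05     # made-up Gaussian data and risk level
y0, y1 = -2.5, 1.0                  # loss L(xi) = y0 + y1*xi

# Deterministic SOC-type certificate for Pr(L(xi) <= 0) >= 1 - eps:
cert = y0 + y1 * mu + norm.ppf(1 - eps) * abs(y1) * sigma

# Monte Carlo check of the chance constraint itself:
xi = rng.normal(mu, sigma, 200_000)
p = np.mean(y0 + y1 * xi <= 0.0)
```

With these numbers the certificate is satisfied and the sampled violation probability is well below $\varepsilon$, as the equivalence predicts.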
Constraint (\ref{eq:App-03-DRO-RCC}) is even more challenging at first sight: not only the random vector $\xi$, but also the probability distribution function $f(\xi)$ itself is uncertain. This is because in many practical situations the probability distribution must be estimated from a sufficient amount of historical data, which may not be available at hand. Typically, one may only have access to some statistical indicators of $f(\xi)$, e.g., its mean value, covariance, and support set. Using a specific $f(\xi) \in \mathcal P$ may lead to over-optimistic solutions which fail to satisfy the probability guarantee under the true distribution.
Similar to the paradigm in static RO, a prudent way to immunize a chance constraint against uncertain probability distribution is to investigate the situation in the worst case, inspiring the following distributionally robust chance constraint, which is equivalent to (\ref{eq:App-03-DRO-RCC})
\begin{equation}
\label{eq:App-03-DRO-DRCC}
\inf_{f(\xi) \in \mathcal P} \Pr \left( y^0_i (x) + y_i(x)^T \xi \le 0,~ i=1,\cdots,m \right) \ge 1 - \varepsilon
\end{equation}
Clearly, if $x$ satisfies (\ref{eq:App-03-DRO-DRCC}), the probability of constraint violation is upper bounded by $\varepsilon$ for the true probability distribution of $\xi$.
This section introduces convex optimization models for approximating robust chance constraints under uncertain probability distributions whose first- and second-order moments, as well as the support set (or equivalently the feasible region) of the random variable, are known. More precisely, we let $\mathbb E_P (\xi) = \mu \in \mathbb R^k$ be the mean value and $\mathbb E_P ((\xi-\mu)(\xi-\mu)^T) = {\rm \Sigma} \in \mathbb S^k_{++}$ be the covariance matrix of the random variable $\xi$ under the true distribution $P$. We define the moment matrix
\begin{equation}
{\rm \Omega} = \begin{bmatrix}
{\rm \Sigma} + \mu \mu^T & \mu \\ \mu^T & 1
\end{bmatrix}
\notag
\end{equation}
for ease of notation.
To help readers understand the fundamental ideas in DRO, we briefly introduce the worst-case expectation problem, which will be used throughout this section. Recall that $\mathcal P$ represents the set of all probability distributions on $\mathbb R^k$ with mean vector $\mu$ and covariance matrix ${\rm \Sigma} \succ 0$. The problem is formulated as
\begin{equation}
\theta^m_P = \sup_{f(\xi) \in \mathcal P} \mathbb E
\left[ (g(\xi))^+ \right] \notag
\end{equation}
where $g:\mathbb R^k \to \mathbb R$ is a function of $\xi$, and $(g(\xi))^+$ denotes the maximum of $0$ and $g(\xi)$. Writing the problem in integral form gives
\begin{equation}
\label{eq:App-03-DRO-Worst-Expectation-Primal}
\begin{aligned}
\theta^m_P = \sup_{f(\xi) \in \mathcal P} ~~ &
\int_{\xi \in \mathbb R^k} \max \{0, g(\xi) \} f(\xi) \mbox{d} \xi \\
\mbox{s.t.} ~~ & f(\xi) \ge 0,~ \forall \xi \in \mathbb R^k \\
& \int_{\xi \in \mathbb R^k} f(\xi) \mbox{d} \xi = 1 :~ \lambda_0 \\
& \int_{\xi \in \mathbb R^k} \xi f(\xi) \mbox{d} \xi = \mu :~ \lambda \\
& \int_{\xi \in \mathbb R^k} \xi \xi^T f(\xi) \mbox{d} \xi
= {\rm \Sigma} + \mu \mu^T :~ {\rm \Lambda}
\end{aligned}
\end{equation}
In problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}), the decision variables are the values of $f(\xi)$ over all possible $\xi \in \mathbb R^k$, so there are infinitely many decision variables, and problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}) is an infinite-dimensional LP. The first two constraints enforce $f(\xi)$ to be a valid distribution function; the last two ensure consistent first- and second-order moments. The optimal solution gives the worst-case distribution. However, it is difficult to solve (\ref{eq:App-03-DRO-Worst-Expectation-Primal}) in its primal form. We therefore associate dual variables $\lambda_0 \in \mathbb R$, $\lambda \in \mathbb R^k$, and ${\rm \Lambda} \in \mathbb S^k$ with the integral constraints; the dual problem of (\ref{eq:App-03-DRO-Worst-Expectation-Primal}) can be constructed following the duality theory of conic LP, and is given by
\begin{equation}
\label{eq:App-03-DRO-Worst-Expectation-Dual}
\begin{aligned}
\theta^m_D = \inf_{\lambda_0,\lambda,{\rm \Lambda} } ~~ & \lambda_0 +
\mu^T \lambda + \mbox{tr} [{\rm \Lambda}^T({\rm \Sigma} + \mu \mu^T)] \\
\mbox{s.t.} ~~ & \lambda_0 + \xi^T \lambda + \mbox{tr} [{\rm \Lambda}^T
(\xi \xi^T)] \\
& \quad \ge \max \{ 0,g(\xi) \}, \forall \xi \in \mathbb R^k
\end{aligned}
\end{equation}
To understand the dual form (\ref{eq:App-03-DRO-Worst-Expectation-Dual}), we can imagine a discrete version of (\ref{eq:App-03-DRO-Worst-Expectation-Primal}), in which $\xi_1$, $\cdots$, $\xi_n$ are sampled scenarios of the uncertain parameter, and their associated probabilities $f(\xi_1)$, $\cdots$, $f(\xi_n)$ are the decision variables of (\ref{eq:App-03-DRO-Worst-Expectation-Primal}). Moreover, if we replace the integrals in the constraints with sums, (\ref{eq:App-03-DRO-Worst-Expectation-Primal}) comes down to a traditional LP, and its dual is also an LP, whose constraints become
\begin{equation*}
\lambda_0 + \xi^T_i \lambda + \mbox{tr} [{\rm \Lambda}^T
(\xi_i \xi^T_i)] \ge \max \{ 0,g(\xi_i) \},~ i = 1, \cdots, n
\end{equation*}
Letting $n \to +\infty$ so that the samples spread over $\mathbb R^k$, we recover the dual problem (\ref{eq:App-03-DRO-Worst-Expectation-Dual}).
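The discretized LP just described is easy to solve numerically. The sketch below (made-up scenario grid, $g(\xi)=\xi$, mean $0$ and variance $1$) solves the discrete primal of (\ref{eq:App-03-DRO-Worst-Expectation-Primal}) with \texttt{scipy.optimize.linprog}; the optimal value $0.5$ matches the classical two-point worst-case bound $(\mu+\sqrt{\mu^2+\sigma^2})/2$ for $\sup \mathbb E[(\xi)^+]$:

```python
import numpy as np
from scipy.optimize import linprog

xi = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # sampled scenarios (made up)
gplus = np.maximum(0.0, xi)                 # (g(xi))^+ with g(xi) = xi
mu, second = 0.0, 1.0                       # target E[xi] and E[xi^2]

# Discrete primal: max sum_i gplus_i * f_i  s.t.  f >= 0, moment constraints
A_eq = np.vstack([np.ones_like(xi), xi, xi ** 2])
b_eq = np.array([1.0, mu, second])
res = linprog(-gplus, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, None)] * len(xi))
theta = -res.fun                            # worst-case expectation
f = res.x                                   # worst-case discrete distribution
```

The LP places probability $1/2$ on each of $\xi=\pm 1$, i.e., the worst-case distribution concentrates on two points, consistent with the dual polynomial interpretation above.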
Unlike the primal problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}), which has infinitely many decision variables, the dual problem (\ref{eq:App-03-DRO-Worst-Expectation-Dual}) has finitely many variables and an infinite number of constraints. In fact, we are optimizing over the coefficients of a polynomial in $\xi$. Because ${\rm \Sigma} \succ 0$, the Slater condition is met, and thus strong duality holds (this conclusion can be found elsewhere in the literature, e.g., \cite{Zero-Gap-GPI}), i.e., $\theta^m_P = \theta^m_D$. In the following, we will eliminate $\xi$ and reduce the constraints to convex ones in the dual variables $\lambda_0$, $\lambda$, and $\rm \Lambda$. Recalling the definition of the matrix $\rm \Omega$, the compact form of problem (\ref{eq:App-03-DRO-Worst-Expectation-Dual}) can be expressed as
\begin{equation}
\label{eq:App-03-DRO-Worst-Expectation-Dual-Comp}
\begin{aligned}
\inf_{M \in \mathbb S^{k+1}} ~~ & \mbox{tr} [ {\rm \Omega}^T M ] \\
\mbox{s.t.} ~~ &
\begin{bmatrix} \xi^T & 1 \end{bmatrix} M
\begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge 0,~
\forall \xi \in \mathbb R^k \\
& \begin{bmatrix} \xi^T & 1 \end{bmatrix} M
\begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge g(\xi),~
\forall \xi \in \mathbb R^k \\
\end{aligned}
\end{equation}
where the matrix decision variable is
\begin{equation}
M = \begin{bmatrix}
{\rm \Lambda} & \dfrac{\lambda}{2} \\
\dfrac{\lambda^T}{2} & \lambda_0
\end{bmatrix} \notag
\end{equation}
and the first constraint is equivalent to an LMI $M \succeq 0$.
A special case of the worst-case expectation problem is
\begin{equation}
\label{eq:App-03-DRO-GPI-Primal}
\theta^m_P = \sup_{f(\xi) \in \mathcal P} \Pr [ \xi \in S ]
\end{equation}
which quantifies the maximum probability of the event $\xi \in S$, where $S$ is a Borel measurable set. This problem has a close relationship with generalized probability inequalities discussed in \cite{Zero-Gap-GPI} and the generalized moments problem studied in \cite{Moment-Book}.
Define the indicator function of $S$ as
\begin{equation}
\mathbb I_{S} (\xi) = \left\{
\begin{gathered} 1 \\ 0 \end{gathered} \quad
\begin{lgathered}
\mbox{if } \xi \in S \\
\mbox{otherwise}
\end{lgathered} \right. \notag
\end{equation}
Then the dual problem of (\ref{eq:App-03-DRO-GPI-Primal}) can be written as
\begin{equation}
\label{eq:App-03-DRO-GPI-Dual}
\begin{aligned}
\inf_{M \in \mathbb S^{k+1}} ~~ & \mbox{tr} [ {\rm \Omega}^T M ] \\
\mbox{s.t.} ~~ & M \succeq 0,~
\begin{bmatrix} \xi^T & 1 \end{bmatrix} M
\begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge 1,~
\forall \xi \in S \\
\end{aligned}
\end{equation}
which is a special case of (\ref{eq:App-03-DRO-Worst-Expectation-Dual-Comp}) when $g(\xi) = \mathbb I_S(\xi)$.
Next we present how to formulate a robust chance constraint
(\ref{eq:App-03-DRO-DRCC}) as convex constraints that can be recognized by convex optimization solvers.
{\noindent \bf 1. Individual chance constraints}
Consider a single robust chance constraint
\begin{equation}
\label{eq:App-03-DRO-DRCC-Single}
\inf_{f(\xi) \in \mathcal P} \Pr \left( y^0 (x) + y(x)^T \xi \le 0 \right) \ge 1 - \varepsilon
\end{equation}
The feasible set in $x$ is denoted by $X^S_R$.
To eliminate the optimization over the function $f(\xi)$, we leverage the concept of conditional value-at-risk (CVaR) introduced in \cite{CVaR}. For a given loss function $L(\xi)$ and tolerance $\varepsilon \in (0,1)$, the CVaR at level $\varepsilon$ is defined as
\begin{equation}
\label{eq:App-03-CVaR}
\mbox{CVaR}(L(\xi), \varepsilon) = \inf_{\beta \in \mathbb R} \beta +
\frac{1}{\varepsilon} \mathbb E_{f(\xi)} \left( \left[ L(\xi) - \beta \right]^+ \right)
\end{equation}
where the expectation is taken over a given probability distribution $f(\xi)$. CVaR is the conditional expectation of the loss exceeding the $(1-\varepsilon)$-quantile of the loss distribution. Indeed, the condition
\begin{equation}
\Pr \left[ L(\xi) \le \mbox{CVaR}(L(\xi), \varepsilon) \right] \ge 1- \varepsilon
\notag
\end{equation}
holds regardless of the probability distribution and loss function $L(\xi)$ \cite{Static-DRO}. Therefore, to certify $\Pr(L(\xi) \le 0) \ge 1-\varepsilon$, a sufficient condition without probability evaluation is $\mbox{CVaR}(L(\xi), \varepsilon) \le 0$, or more precisely:
\begin{equation}
\label{eq:App-03-CVaR-CC}
\begin{aligned}
& \sup_{f(\xi) \in \mathcal P} \mbox{CVaR} \left( y^0(x) + y(x)^T \xi, \varepsilon \right) \le 0 \\ \Longrightarrow
& \inf_{f(\xi) \in \mathcal P} \Pr \left( y^0(x) + y(x)^T \xi \le 0 \right) \ge 1 - \varepsilon
\end{aligned}
\end{equation}
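The definition (\ref{eq:App-03-CVaR}) and the probability guarantee above can be verified on samples. In the sketch below (made-up standard normal losses), the infimum over $\beta$ is attained at the empirical $(1-\varepsilon)$-quantile, so the sample CVaR equals the mean of the largest $\varepsilon$-fraction of losses:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.1
L = rng.normal(0.0, 1.0, 100_000)   # sampled losses L(xi) (made up)

# CVaR(L, eps) = inf_beta { beta + E[(L - beta)^+] / eps };
# the infimum is attained at the (1 - eps)-quantile of L.
beta = np.quantile(L, 1 - eps)
cvar = beta + np.mean(np.maximum(L - beta, 0.0)) / eps

# Distribution-free guarantee: Pr[ L <= CVaR(L, eps) ] >= 1 - eps
p = np.mean(L <= cvar)
```

Since $\mbox{CVaR}$ upper bounds the $(1-\varepsilon)$-quantile, $\mbox{CVaR}(L,\varepsilon)\le 0$ is indeed sufficient for $\Pr(L\le 0)\ge 1-\varepsilon$.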
According to (\ref{eq:App-03-CVaR}), the above worst-case CVaR can be expressed as
\begin{equation}
\begin{lgathered}
\sup_{f(\xi) \in \mathcal P} \mbox{CVaR} \left( y^0(x) + y(x)^T \xi, \varepsilon \right) \\
= \sup_{f(\xi) \in \mathcal P} \inf_{\beta \in \mathbb R} \left\{ \beta +
\frac{1}{\varepsilon} \mathbb E_{f(\xi)} \left( \left[ y^0(x) + y(x)^T \xi - \beta \right]^+ \right) \right\} \\
= \inf_{\beta \in \mathbb R} \left\{ \beta + \frac{1}{\varepsilon} \sup_{f(\xi) \in \mathcal P} \mathbb E_{f(\xi)} \left( \left[ y^0(x) + y(x)^T \xi - \beta \right]^+ \right) \right\}
\end{lgathered}
\end{equation}
The maximization and minimization operators are interchangeable because of the saddle point theorem in \cite{Saddle-Point}. Recalling the previous analysis, the worst-case expectation can be computed from the problem
\begin{equation}
\begin{aligned}
\inf_{\beta,M \in \mathbb S^{k+1}} ~~ & \mbox{tr} [ {\rm \Omega}^T M ] \\
\mbox{s.t.} ~~ & M \succeq 0,~ \\
&
\begin{bmatrix} \xi^T & 1 \end{bmatrix} M
\begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge
y^0(x) + y(x)^T \xi - \beta,~
\forall \xi \in \mathbb R^k \\
\end{aligned} \notag
\end{equation}
The semi-infinite constraint has a matrix quadratic form
\begin{equation}
\begin{bmatrix} \xi \\ 1 \end{bmatrix}^T
\left( M - \begin{bmatrix}
0 & \dfrac{y(x)}{2} \\
\dfrac{y(x)^T}{2} & y^0(x) - \beta
\end{bmatrix} \right)
\begin{bmatrix} \xi \\ 1 \end{bmatrix} \ge
0,~ \forall \xi \in \mathbb R^k \notag
\end{equation}
which is equivalent to
\begin{equation}
M - \begin{bmatrix}
0 & \dfrac{y(x)}{2} \\
\dfrac{y(x)^T}{2} & y^0(x) - \beta
\end{bmatrix} \succeq 0 \notag
\end{equation}
As a result, the worst-case CVaR can be calculated from an SDP
\begin{equation}
\label{eq:App-03-Worst-CVaR-SDP}
\begin{aligned}
\sup_{f(\xi) \in \mathcal P} ~~ & \mbox{CVaR} \left( y^0(x) + y(x)^T \xi, \varepsilon \right) \\
= \inf_{\beta, M} ~~ & \beta + \frac{1}{\varepsilon}
\mbox{tr}({\rm \Omega}^T M) \\
\mbox{s.t.} ~~ & M \succeq 0 \\
& M \succeq \begin{bmatrix}
0 & \dfrac{y(x)}{2} \\
\dfrac{y(x)^T}{2} & y^0(x) - \beta
\end{bmatrix}
\end{aligned}
\end{equation}
It is shown in \cite{Static-DRO} that in static DRO the implication $\Rightarrow$ in (\ref{eq:App-03-CVaR-CC}) is in fact an equivalence $\Leftrightarrow$. In conclusion, the robust chance constraint (\ref{eq:App-03-DRO-DRCC-Single}) can be written as a convex set in the variables $x$, $\beta$, and $M$ as follows
\begin{equation}
\label{eq:App-03-DRO-XSR}
X^S_R = \left\{ x ~\middle|~ \begin{lgathered}
\exists \beta \in \mathbb R,~ M \succeq 0 \mbox{ such that} \\
\beta + \frac{1}{\varepsilon}
\mbox{tr}({\rm \Omega}^T M) \le 0 \\
M \succeq \begin{bmatrix}
0 & \dfrac{y(x)}{2} \\
\dfrac{y(x)^T}{2} & y^0(x) - \beta
\end{bmatrix}
\end{lgathered} \right\}
\end{equation}
{\noindent \bf 2. Joint chance constraints}
Now consider the joint robust chance constraints
\begin{equation}
\label{eq:App-03-DRO-DRCC-Joint}
\inf_{f(\xi) \in \mathcal P} \Pr \left( y^0_i (x) + y_i(x)^T \xi \le 0,
i = 1,\cdots,m \right) \ge 1 - \varepsilon
\end{equation}
The feasible set in $x$ is denoted by $X^J_R$.
Let $\alpha$ be the vector of strictly positive scaling parameters, and $\mathcal A = \{ \alpha ~|~ \alpha > 0 \}$. It is clear that constraint
\begin{equation}
\label{eq:App-03-DRO-DRCC-Joint-Para}
\inf_{f(\xi) \in \mathcal P} \Pr \left[ \max_{i=1,\cdots, m} \left\{ \alpha_i \left( y^0_i (x) + y_i(x)^T \xi \right) \right\} \le 0 \right] \ge 1 - \varepsilon
\end{equation}
imposes the same feasible region in the variable $x$ as (\ref{eq:App-03-DRO-DRCC-Joint}). Nonetheless, it turns out that the parameters $\alpha_i$ can be co-optimized to improve the quality of the convex approximation of $X^J_R$. (\ref{eq:App-03-DRO-DRCC-Joint-Para}) is a single robust chance constraint, and can be conservatively approximated by a worst-case CVaR constraint
\begin{equation}
\label{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR}
\sup_{f(\xi) \in \mathcal P} \mbox{CVaR} \left[ \max_{i=1,\cdots, m} \left\{ \alpha_i \left( y^0_i (x) + y_i(x)^T \xi \right) \right\}, \varepsilon \right] \le 0
\end{equation}
It defines a feasible region in variable $x$ with auxiliary parameter $\alpha \in \mathcal A$, which is denoted by $X^J_R(\alpha)$. Clearly, $X^J_R(\alpha) \subseteq X^J_R$, $\forall \alpha \in \mathcal A$. Unlike (\ref{eq:App-03-Worst-CVaR-SDP}), condition (\ref{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR}) is $\alpha$-dependent.
By observing the fact that
\begin{equation}
\begin{aligned}
& \begin{bmatrix} \xi^T & 1 \end{bmatrix} M
\begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge
\max_{i=1,\cdots,m} \left\{ \alpha_i \left( y^0_i (x) + y_i(x)^T \xi \right) \right\} - \beta,~ \forall \xi \in \mathbb R^k \\ \Longleftrightarrow
& \begin{bmatrix} \xi^T & 1 \end{bmatrix} M
\begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge
\alpha_i \left( y^0_i (x) + y_i(x)^T \xi \right) - \beta,~
\forall \xi \in \mathbb R^k,~ i = 1,\cdots,m \\ \Longleftrightarrow
& M - \begin{bmatrix}
0 & \dfrac{\alpha_i y_i(x)}{2} \\
\dfrac{\alpha_i y_i(x)^T}{2} & \alpha_i y^0_i(x) - \beta
\end{bmatrix} \succeq 0, ~ i = 1,\cdots, m
\end{aligned} \notag
\end{equation}
and employing the optimization formulation of the worst-case expectation problem, the worst-case CVaR in (\ref{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR}) can be calculated by
\begin{equation}
\label{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR-SDP}
\begin{aligned}
& J(x,\alpha) = \sup_{f(\xi) \in \mathcal P} \mbox{CVaR} \left[ \max_{i=1,\cdots, m} \left\{ \alpha_i \left( y^0_i (x) + y_i(x)^T \xi \right) \right\}, \varepsilon \right] \\ = &
\inf_{\beta \in \mathbb R} \left\{ \beta + \frac{1}{\varepsilon} \sup_{f(\xi) \in \mathcal P} \mathbb E_{f(\xi)} \left( \left[ \max_{i=1,\cdots,m} \left\{ \alpha_i \left( y^0_i(x) + y_i(x)^T \xi \right) \right\} - \beta \right]^+ \right) \right\} \\ = &
\inf_{\beta, M} \left\{ \beta + \frac{1}{\varepsilon} \mbox{tr}({\rm \Omega}^T M) ~\middle|~ \mbox{s.t.} ~ M \succeq 0,~ M \succeq
\begin{bmatrix}
0 & \dfrac{\alpha_i y_i(x)}{2} \\
\dfrac{\alpha_i y_i(x)^T}{2} & \alpha_i y^0_i(x) - \beta
\end{bmatrix}, \forall i \right\}
\end{aligned}
\end{equation}
In conclusion, for any fixed $\alpha \in \mathcal A$, the worst-case CVaR constraint (\ref{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR}) can be written as a convex set in variables $x$, $\beta$, and $M$ as follows
\begin{equation}
\label{eq:App-03-DRO-XJR-alpha}
X^J_R(\alpha) = \left\{ x ~\middle|~ \begin{lgathered}
\exists \beta \in \mathbb R,~ M \succeq 0 \mbox{ such that} \\
\beta + \frac{1}{\varepsilon}
\mbox{tr}({\rm \Omega}^T M) \le 0 \\
M \succeq
\begin{bmatrix}
0 & \dfrac{\alpha_i y_i(x)}{2} \\
\dfrac{\alpha_i y_i(x)^T}{2} & \alpha_i y^0_i(x) - \beta
\end{bmatrix}, \forall i
\end{lgathered} \right\}
\end{equation}
Moreover, it is revealed in \cite{Static-DRO} that the union $\bigcup_{\alpha \in \mathcal A} X^J_R(\alpha)$ gives an exact description of $X^J_R$, which indicates that the original robust chance constrained program
\begin{equation}
\label{eq:App-03-DRO-Model-2}
\min_x \left\{ c^T x ~\middle|~ \mbox{s.t. } x \in X \cap X^J_R \right \}
\end{equation}
and the worst-case CVaR formulation
\begin{equation}
\min_{x,\alpha} \left\{ c^T x ~\middle|~ \mbox{s.t. }
x \in X \cap X^J_R(\alpha),~ \alpha \in \mathcal A \right \} \notag
\end{equation}
or equivalently
\begin{equation}
\label{eq:App-03-DRO-Model-3}
\min_{x,\alpha} \left\{ c^T x ~\middle|~ \mbox{s.t. }
x \in X,~ \alpha \in \mathcal A,~ J(x,\alpha) \le 0 \right \}
\end{equation}
have the same optimal value. The constraints of (\ref{eq:App-03-DRO-Model-3}) contain bilinear matrix inequalities: if either $x$ or $\alpha$ is fixed, $J(x,\alpha) \le 0$ in (\ref{eq:App-03-DRO-Model-3}) comes down to LMIs; however, when both $x$ and $\alpha$ are variables, the constraint is non-convex, making problem (\ref{eq:App-03-DRO-Model-3}) difficult to solve. In view of the biconvex feature \cite{BLP-Mountain-Climbing-BCVX}, a sequential convex optimization procedure is presented to find an approximate solution.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf : Mountain climbing}
\begin{algorithmic}[1]
\STATE Choose a convergence tolerance $\varepsilon > 0$; set the iteration counter $k=1$; let $x^0 \in X \cap X^J_R(\alpha)$ be a feasible solution for some $\alpha$, and set $f^0 = c^T x^0$;
\STATE Solve the following subproblem with input $x^{k-1}$
\begin{equation}
\label{eq:App-03-BMI-Sub}
\min_\alpha ~ \left\{ J(x^{k-1},\alpha) ~|~
\mbox{s.t. } \alpha \ge \delta \bf 1 \right\}
\end{equation}
where $\bf 1$ denotes the all-one vector with a compatible dimension, and $\delta > 0$ is a small constant; the worst-case CVaR functional is defined in (\ref{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR-SDP}). The optimal solution is $\alpha^k$;
\STATE Solve the following master problem with input $\alpha^k$
\begin{equation}
\label{eq:App-03-BMI-Master}
\min_x ~ \left\{ c^T x ~|~ \mbox{s.t. } x \in X,~ J(x,\alpha^k) \le 0\right\}
\end{equation}
The optimal solution is $x^k$ and the optimal value is $f^k$;
\STATE If $|f^k - f^{k-1}| / |f^{k-1}| \le \varepsilon$, terminate and report the optimal solution $x^k$; otherwise, update $k \leftarrow k+1$, and go to step 2.
\end{algorithmic}
\label{Ag:App-03-BMI-Mountain-Climbing}
\end{algorithm}
The main idea of this algorithm is to identify the best feasible region $X^J_R(\alpha)$ by successively solving the subproblem (\ref{eq:App-03-BMI-Sub}), thereby improving the objective value. The performance of Algorithm \ref{Ag:App-03-BMI-Mountain-Climbing} is intuitively explained below.
Because the parameter $\alpha$ is optimized in the subproblem (\ref{eq:App-03-BMI-Sub}) given the value $x^k$, we have $J(x^k,\alpha^{k+1}) \le J(x^k,\alpha^k) \le 0$, $\forall k$, demonstrating that $x^k$ is a feasible solution of the master problem (\ref{eq:App-03-BMI-Master}) in iteration $k+1$. Therefore, the optimal values of (\ref{eq:App-03-BMI-Master}) in two consecutive iterations satisfy $c^T x^{k+1} \le c^T x^k$, as the objective evaluated at the optimal solution $x^{k+1}$ in iteration $k+1$ is no greater than that incurred at any feasible solution. In this regard, the optimal value sequence $f^k$, $k=1,2,\cdots$ is monotonically decreasing. If $X$ is bounded, the optimal solution sequence $x^k$ is also bounded, and the optimal value converges. Algorithm \ref{Ag:App-03-BMI-Mountain-Climbing} does not necessarily find the global optimum of problem (\ref{eq:App-03-DRO-Model-3}). Nevertheless, it is attractive in practice due to its robustness, since it involves only convex optimization.
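The monotone-decrease argument above is easy to reproduce on a toy biconvex function. The sketch below (our own illustrative objective, not the CVaR functional $J$) alternates exact minimizations in $x$ and $\alpha$, mimicking steps 2-3 of Algorithm \ref{Ag:App-03-BMI-Mountain-Climbing}:

```python
import numpy as np

def f(x, a):
    """Toy biconvex objective: convex in x for fixed a, and vice versa."""
    return (x * a - 1.0) ** 2 + 0.1 * (x * x + a * a)

x, a = 5.0, 0.2                     # arbitrary starting point
vals = [f(x, a)]
for _ in range(50):
    x = a / (a * a + 0.1)           # exact minimizer in x for fixed a
    a = x / (x * x + 0.1)           # exact minimizer in a for fixed x
    vals.append(f(x, a))
```

Each update can only decrease the objective, so the sequence of values is non-increasing; as in the text, this guarantees convergence of the value, but not global optimality.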
In many practical applications, the uncertain data $\xi$ is known to be within a strict subset of $\mathbb R^k$, which is called the support set. We briefly outline how to incorporate the support set in the distributionally robust chance constraints. We assume the support set $\rm \Xi$ is the intersection of a finite number of ellipsoids, i.e.
\begin{equation}
\label{eq:App-03-Support-Set}
{\rm \Xi} = \left\{ \xi \in \mathbb R^k ~\middle|~
\xi^T W_i \xi \le 1,~ i = 1,\cdots,l \right\}
\end{equation}
where $W_i \in \mathbb S^k_+$, $i=1,\cdots,l$, and we have $\Pr(\xi \in {\rm \Xi}) = 1$. Let $\mathcal P_{\rm \Xi}$ be the set of all candidate probability
distributions supported on $\rm \Xi$ which have identical first- and second-order moments.
Consider the worst-case expectation problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}). If we replace $\mathcal P$ with $\mathcal P_{\rm \Xi}$, the constraints of the dual problem (\ref{eq:App-03-DRO-Worst-Expectation-Dual}) become
\begin{alignat}{2}
\begin{bmatrix} \xi^T & 1 \end{bmatrix} M
\begin{bmatrix} \xi^T & 1 \end{bmatrix}^T & \ge 0,~ & \quad
& \forall \xi \in {\rm \Xi} \label{eq:App-03-S-Lemma-1} \\
\begin{bmatrix} \xi^T & 1 \end{bmatrix} M
\begin{bmatrix} \xi^T & 1 \end{bmatrix}^T & \ge g(\xi),~ &
& \forall \xi \in {\rm \Xi} \label{eq:App-03-S-Lemma-2}
\end{alignat}
According to (\ref{eq:App-03-Support-Set}), $1-\xi^T W_i \xi \ge 0$ for all $i=1,\cdots,l$ if and only if $\xi \in {\rm \Xi}$, and hence a sufficient condition for (\ref{eq:App-03-S-Lemma-1}) is the existence of constants $\tau_i \ge 0$, $i = 1,\cdots,l$, such that
\begin{equation}
\label{eq:App-03-S-Lemma-3}
\begin{bmatrix} \xi^T & 1 \end{bmatrix} M
\begin{bmatrix} \xi^T & 1 \end{bmatrix}^T -
\sum_{i=1}^l \tau_i \left( 1-\xi^T W_i \xi \right) \ge 0
\end{equation}
Under this condition, as long as $\xi \in {\rm \Xi}$, we have
\begin{equation*}
\begin{bmatrix} \xi^T & 1 \end{bmatrix} M
\begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge
\sum_{i=1}^l \tau_i \left( 1-\xi^T W_i \xi \right) \ge 0
\end{equation*}
Arrange (\ref{eq:App-03-S-Lemma-3}) as a matrix quadratic form
\begin{equation*}
\begin{bmatrix} \xi^T & 1 \end{bmatrix}
\left(M - \sum_{i=1}^l \tau_i \begin{bmatrix}
-W_i & {\bf 0} \\
{\bf 0}^T & 1
\end{bmatrix} \right)
\begin{bmatrix} \xi \\ 1 \end{bmatrix} \ge 0,~
\forall \xi \in \mathbb R^k
\end{equation*}
As a result, (\ref{eq:App-03-S-Lemma-1}) can be reduced to an LMI in variables $M$ and $\tau$
\begin{equation}
\label{eq:App-03-S-Lemma-4}
M - \sum_{i=1}^l \tau_i
\begin{bmatrix}
-W_i & {\bf 0} \\
{\bf 0}^T & 1
\end{bmatrix} \succeq 0
\end{equation}
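The sufficiency of this LMI can be spot-checked numerically. In the sketch below (made-up data: a single ellipsoid with $W_1 = I$ and multiplier $\tau_1 = 0.4$), any $M$ satisfying the LMI yields a quadratic form $[\xi^T~1] M [\xi^T~1]^T$ that is non-negative at all sampled points of $\rm \Xi$:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 2
W1 = np.eye(k)                       # support Xi = { xi : xi^T W1 xi <= 1 }
tau = 0.4                            # multiplier tau_1 >= 0 (made up)
S = tau * np.block([[-W1, np.zeros((k, 1))],
                    [np.zeros((1, k)), np.ones((1, 1))]])
M = S + 0.1 * np.eye(k + 1)          # then M - tau*[[-W1,0],[0,1]] = 0.1*I >= 0

eig_min = np.linalg.eigvalsh(M - S).min()   # verify the LMI itself

# Spot-check: [xi; 1]^T M [xi; 1] >= 0 for sampled xi in Xi
quad_vals = []
for _ in range(2000):
    xi = rng.uniform(-1.0, 1.0, k)
    if xi @ W1 @ xi <= 1.0:          # keep only points inside the support
        v = np.append(xi, 1.0)
        quad_vals.append(v @ M @ v)
min_quad = min(quad_vals)
```

As the derivation shows, the quadratic form is bounded below by $\tau_1(1-\xi^T W_1 \xi) \ge 0$ on $\rm \Xi$, which the sampling confirms.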
For similar reasons, by letting $g(\xi) = y^0(x) + y(x)^T \xi - \beta$, (\ref{eq:App-03-S-Lemma-2}) can be conservatively approximated by the following LMI
\begin{equation}
\label{eq:App-03-S-Lemma-5}
M - \sum_{i=1}^l \tau_i
\begin{bmatrix}
-W_i & {\bf 0} \\
{\bf 0}^T & 1
\end{bmatrix} \succeq
\begin{bmatrix}
0 & \frac{1}{2} y(x) \\
\frac{1}{2} y(x)^T & y^0(x) - \beta
\end{bmatrix}
\end{equation}
In fact, (\ref{eq:App-03-S-Lemma-4}) and (\ref{eq:App-03-S-Lemma-5}) are special cases of S-Lemma. Based upon these outcomes, most formulations in this section can be extended to consider the bounded support set $\rm \Xi$ in the form of (\ref{eq:App-03-Support-Set}). For polyhedral and some special classes of convex support sets, one may utilize the nonlinear Farkas lemma (Lemma 2.2 in \cite{Static-DRO}) to derive tractable reformulations.
\subsection{Adjustable Distributionally Robust Optimization}
\label{App-C-Sect03-02}
As explained in Appendix \ref{App-C-Sect01}, the traditional static RO encounters difficulties in dealing with equality constraints. This plight persists in the DRO approach under a static setting. Consider $x + \xi = 1$ where $\xi \in [0,0.1]$ is uncertain, while its mean and variance are known. For any given $x^*$, the worst-case probability $\inf_{f(\xi) \in \mathcal P} \Pr[x^* + \xi = 1] =0$, because one can always find a feasible probability distribution function $f(\xi)$ that satisfies the first- and second-order moment constraints while $f(1-x^*) = 0$.
To overcome this difficulty, it is necessary to incorporate wait-and-see decisions. A simple remedy is to impose an affine recourse policy without involving optimization in the second stage, giving rise to an affine-adjustable RO with distributional uncertainty and a linear decision rule, which can be solved by the method in Appendix \ref{App-C-Sect03-01}.
This section aims to investigate the following adjustable DRO with completely flexible wait-and-see decisions
\begin{equation}
\label{eq:App-03-ADRO-Model-1}
\min_{x \in X} \left\{ c^T x + \sup_{f(w) \in \mathcal P}
\mathbb E_{f(w)} Q(x,w) \right\}
\end{equation}
where $x$ is the first-stage (here-and-now) decision, and $X$ is its feasible set; the uncertain parameter is denoted by $w$; the probability distribution $f(w)$ belongs to the Chebyshev ambiguity set (whose first- and second-order moments are known)
\begin{equation}
\label{eq:App-03-ADRO-Ambiguity-Set}
\mathcal P =\left\{ f(w) \middle| \begin{gathered}
f(w) \ge 0,~\forall w \in W \\
\int_{w \in W} f(w) \mbox{d} w = 1 \\
\int_{w \in W} w f(w)\mbox{d} w = \mu \\
\int_{w \in W} w w^T f(w)\mbox{d}w = {\rm \Theta}
\end{gathered} \right\}
\end{equation}
supported on $W = \{w~|~ (w - \mu)^T Q (w-\mu) \le {\rm \Gamma} \}$, where matrix ${\rm \Theta} = {\rm \Sigma} + \mu \mu^T$ represents the second-order moment; $\mu$ is the mean value and $\rm \Sigma$ is the covariance matrix. The expectation in (\ref{eq:App-03-ADRO-Model-1}) is taken over the worst-case $f(w)$ in $\mathcal P$, and the second-stage problem under fixed $x$ and $w$ is an LP
\begin{equation}
\label{eq:App-03-ADRO-Model-2}
Q(x,w) = \min_{y \in Y(x,w)} d^T y
\end{equation}
$Q(x,w)$ is its optimal value function under fixed $x$ and $w$. The feasible set of the second-stage problem is
\begin{equation}
Y(x,w) = \{ y ~|~ B y \le b - A x -C w \} \notag
\end{equation}
Matrices $A$, $B$, $C$ and vectors $b$, $c$, $d$ are constant coefficients of the model. We assume that the second-stage problem is always feasible and bounded, i.e., $\forall x \in X$, $\forall w \in W:$ $Y(x,w) \ne \emptyset$ and is bounded, so $Q(x,w)$ has a finite optimal value. This can be ensured by introducing wait-and-see slack variables and penalizing them in the objective of (\ref{eq:App-03-ADRO-Model-2}).
The difference between problems (\ref{eq:App-03-ARO}) and (\ref{eq:App-03-ADRO-Model-1}) stems from the description of uncertainty and the criterion in the objective function. More information about the dispersion effect, such as the covariance matrix, is taken into account in the latter, and the objective of (\ref{eq:App-03-ADRO-Model-1}) is an expectation reflecting the statistical behavior of the second-stage cost, whereas the objective of (\ref{eq:App-03-ARO}) is associated with only a single worst-case scenario and leaves the performance in all other scenarios un-optimized. Because the probability distribution is uncertain, it is prudent to investigate the worst case, in which the expected cost of the second stage is maximized. This formulation is advantageous in several ways. First, exact knowledge of the probability distribution is not required, and the optimal solution is insensitive to the choice among distributions with common mean and covariance. Second, the dispersion of the uncertainty is taken into account, which helps reduce model conservatism: since the variance is fixed, a scenario that lies far away from the forecast must have a low probability. Finally, it is often important to capture the tail effect: a rare event may induce heavy losses in spite of its low probability, and this phenomenon is naturally accounted for in (\ref{eq:App-03-ADRO-Model-1}). In what follows, we outline the method proposed in \cite{App03-Sect3-ADRO-1} to solve the adjustable DRO problem (\ref{eq:App-03-ADRO-Model-1}). A slight modification is that an ellipsoidal support set is considered.
{\noindent \bf 1. The worst-case expectation problem}
We consider the following worst-case expectation problem with a fixed $x$
\begin{equation}
\label{eq:App-03-ADRO-Sub-Worst-Expectation}
\sup_{f(w) \in \mathcal P}
\mathbb E_{f(w)} Q(x,w)
\end{equation}
According to the discussions for problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}), the dual problem of (\ref{eq:App-03-ADRO-Sub-Worst-Expectation}) is
\begin{equation}
\label{eq:App-03-ADRO-Sub-Worst-Expectation-Dual}
\begin{aligned}
\min_{H,h,h_0}~~ & \mbox{tr}(H^T {\rm \Theta}) + \mu^T h + h_0 \\
\mbox{s.t.}~~ & w^T H w + h^T w + h_0 \ge Q(x,w),~\forall w \in W
\end{aligned}
\end{equation}
where $H$, $h$, $h_0$ are dual variables. However, the optimal value function $Q(x,w)$ is not given in closed form. By LP duality theory,
\begin{equation}
Q(x,w) = \max_{u \in U} ~ u^T (b - A x - C w) \notag
\end{equation}
where $u$ is the dual variable of LP (\ref{eq:App-03-ADRO-Model-2}), and its feasible set is given by
\begin{equation}
U = \{ u ~|~ B^T u = d,~ u \le 0 \} \notag
\end{equation}
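This primal-dual pair can be checked numerically on a toy instance. The sketch below (all data $B$, $d$, and the right-hand side $r = b - Ax - Cw$ are hypothetical) solves the second-stage LP and its dual with \texttt{scipy.optimize.linprog} and confirms that the optimal values coincide.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical second-stage data: min d^T y  s.t.  B y <= r,
# where r = b - A x - C w for some fixed x and w.
B = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0]])
d = np.array([1.0, 1.0])
r = np.array([0.0, 0.0, -1.0])   # encodes y >= 0 and y1 + y2 >= 1

# Primal: Q(x, w) = min d^T y  s.t.  B y <= r
primal = linprog(c=d, A_ub=B, b_ub=r, bounds=[(None, None)] * 2)

# Dual:   Q(x, w) = max u^T r  s.t.  B^T u = d,  u <= 0
dual = linprog(c=-r, A_eq=B.T, b_eq=d, bounds=[(None, 0.0)] * 3)

print(primal.fun, -dual.fun)   # both equal 1.0: strong LP duality holds
```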
Because we have assumed that $Q(x,w)$ is bounded, the optimal solution of the dual problem can be found at one of the extreme points of
$U$, i.e.,
\begin{equation}
\label{eq:App-03-ADRO-Sub-Critical-Vertex}
\exists u^* \in \mbox{vert}(U): \ Q(x,w) = (b-Ax-Cw)^T u^*
\end{equation}
where $\mbox{vert}(U) = \{u^1,u^2,\cdots,u^{N_E }\}$ stands for the vertices of polyhedron $U$, and $N_E = |\mbox{vert}(U)|$ is the cardinality of vert($U$). In view of this, the constraint of (\ref{eq:App-03-ADRO-Sub-Worst-Expectation-Dual}) can be expressed as
\begin{equation}
w^T H w + h^T w + h_0 \ge (b-Ax-Cw)^T u^i,~
\forall w \in W,~ i=1,\cdots,N_E \notag
\end{equation}
Recalling the definition of $W$, a sufficient condition for the above constraint is
\begin{equation}
\begin{aligned}
w^T H w & + h^T w + h_0 - (b-Ax-Cw)^T u^i \\
& \ge \lambda [{\rm \Gamma} - (w-\mu)^T Q (w-\mu)] \ge 0,~
\forall w \in \mathbb R^k,~ i=1,\cdots,N_E
\end{aligned} \notag
\end{equation}
which has the following compact matrix form
\begin{equation}
\label{eq:App-03-ADRO-Sub-Cons-LMI}
\begin{bmatrix} w \\ 1 \end{bmatrix}^T M^i
\begin{bmatrix} w \\ 1 \end{bmatrix} \ge 0,
\forall w \in \mathbb R^k,~ i=1,\cdots,N_E
\end{equation}
where
\begin{equation}
\label{eq:App-03-ADRO-Sub-Cons-LMI-MI}
M^i = \begin{bmatrix}
H + \lambda Q & \dfrac{h+C^T u^i}{2} - \lambda Q \mu \\
\dfrac{h^T + (u^i)^T C}{2}-\lambda \mu^T Q & h_0 -(b-Ax)^T u^i - \lambda ({\rm \Gamma} - \mu^T Q \mu)
\end{bmatrix}
\end{equation}
and (\ref{eq:App-03-ADRO-Sub-Cons-LMI}) simply reduces to $M^i \succeq 0$, $i=1,\cdots,N_E$.
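The matrix $M^i$ can be assembled directly from the model data; the sketch below (in Python, with tiny hypothetical data) builds it and checks positive semi-definiteness via the minimum eigenvalue. The sign of the off-diagonal block follows from expanding $(b-Ax-Cw)^T u^i$, which contributes $+C^T u^i$ to the linear term in $w$.

```python
import numpy as np

def build_Mi(H, h, h0, lam, Q, mu, Gamma, u, C, bAx):
    """Assemble the (k+1) x (k+1) matrix M^i of the LMI M^i >= 0.

    bAx stands for the vector b - A x.  The blocks come from rewriting
    w^T H w + (h + C^T u)^T w + h0 - (b - A x)^T u
        - lam * (Gamma - (w - mu)^T Q (w - mu)) >= 0
    as [w; 1]^T M^i [w; 1] >= 0.
    """
    top_right = 0.5 * (h + C.T @ u) - lam * (Q @ mu)
    corner = h0 - bAx @ u - lam * (Gamma - mu @ Q @ mu)
    return np.block([[H + lam * Q, top_right[:, None]],
                     [top_right[None, :], np.array([[corner]])]])

# Tiny hypothetical instance (k = 2): the certificate holds iff the
# minimum eigenvalue of M^i is nonnegative.
k = 2
H, h, h0, lam = np.eye(k), np.zeros(k), 3.0, 1.0
Q, mu, Gamma = np.eye(k), np.zeros(k), 1.0
C, u, bAx = np.zeros((1, k)), np.zeros(1), np.zeros(1)

Mi = build_Mi(H, h, h0, lam, Q, mu, Gamma, u, C, bAx)
print(np.linalg.eigvalsh(Mi).min() >= 0)   # True: M^i is PSD here
```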
Finally, problem (\ref{eq:App-03-ADRO-Sub-Worst-Expectation-Dual}) comes down to the following SDP
\begin{equation}
\label{eq:App-03-ADRO-Sub-Worst-Expectation-Dual-SDP}
\begin{aligned}
\min_{H,h,h_0,\lambda}~~ & \mbox{tr}(H^T {\rm \Theta}) + \mu^T h + h_0 \\
\mbox{s.t.}~~ & M^{i} (H,h,h_0,\lambda) \succeq 0,~ i=1,\cdots,N_E \\
& \lambda \in \mathbb R^+
\end{aligned}
\end{equation}
where $M^{i} (H,h,h_0,\lambda)$ is defined in (\ref{eq:App-03-ADRO-Sub-Cons-LMI-MI}). The above results can be readily extended if the support set is an intersection of ellipsoids.
{\noindent \bf 2. Adaptive constraint generation algorithm}
Due to the positive semi-definiteness of the covariance matrix $\rm \Sigma$, the duality gap between problems (\ref{eq:App-03-ADRO-Sub-Worst-Expectation}) and (\ref{eq:App-03-ADRO-Sub-Worst-Expectation-Dual}) is zero \cite{App03-Sect3-ADRO-1}, and hence we can replace the worst-case expectation in (\ref{eq:App-03-ADRO-Model-1}) with its dual form, yielding
\begin{equation}
\label{eq:App-03-ADRO-Model-3}
\begin{aligned}
\min ~~ & c^T x + \mbox{tr}(H^T {\rm \Theta}) + \mu^T h + h_0 \\
\mbox{s.t.}~~ & M^{i} (H,h,h_0,\lambda) \succeq 0,~ i=1,\cdots,N_E \\
& x \in X,~ \lambda \in \mathbb R^+
\end{aligned}
\end{equation}
Problem (\ref{eq:App-03-ADRO-Model-3}) is an SDP. However, the number of vertices of $U$, $|\mbox{vert}(U)|$, may grow exponentially with the dimension of $U$, so enumerating all of them is non-trivial. Fortunately, only the vertex that is optimal in the dual problem provides an active constraint, as shown in (\ref{eq:App-03-ADRO-Sub-Critical-Vertex}); the rest give redundant inequalities. To identify the critical vertex in (\ref{eq:App-03-ADRO-Sub-Critical-Vertex}), we solve problem (\ref{eq:App-03-ADRO-Model-3}) iteratively: in the master problem, a subset of $\mbox{vert}(U)$ is used to formulate a relaxation; we then check whether the constraint
\begin{equation}
\label{eq:App-03-ADRO-Sub-Double-Enumeration}
w^T H w + h^T w + h_0 \ge (b-Ax-Cw)^T u,~
\forall w \in W,~ \forall u \in U
\end{equation}
is satisfied. If so, the relaxation is exact and the optimal solution has been found; otherwise, we find a vertex of $U$ at which constraint (\ref{eq:App-03-ADRO-Sub-Double-Enumeration}) is violated and add a cut to the master problem to tighten the relaxation, repeating until (\ref{eq:App-03-ADRO-Sub-Double-Enumeration}) is satisfied. The procedure is summarized in Algorithm \ref{Ag:App-03-ADRO-Delayed-Vertex-Generation}.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf Adaptive constraint generation}
\begin{algorithmic}[1]
\STATE Choose a convergence tolerance $\varepsilon > 0$ and an initial vertex set $V_E \subseteq \mbox{vert}(U)$.
\STATE Solve the following master problem
\begin{equation}
\label{eq:App-03-ADRO-Model-AVG-Master}
\begin{aligned}
\min ~~ & c^T x + \mbox{tr}(H^T {\rm \Theta}) + \mu^T h + h_0 \\
\mbox{s.t.}~~ & M^{i} (H,h,h_0,\lambda) \succeq 0,~ \forall u^i \in V_E \\
& x \in X,~ \lambda \in \mathbb R^+
\end{aligned}
\end{equation}
The optimal value is $R^*$, and the optimal solution is $(x^*, H, h, h_0)$.
\STATE Solve the following sub-problem with obtained $(x^*, H, h, h_0)$
\begin{equation}
\label{eq:App-03-ADRO-Model-AVG-Sub}
\begin{aligned}
\min_{w,u}~~ & w^T H w + h^T w+h_0 - (b-Ax^* -Cw)^T u \\
\mbox{s.t.}~~ & w \in W,~ u \in U
\end{aligned}
\end{equation}
The optimal value is $r^*$, and the optimal solution is $u^*$ and $w^*$.
\STATE If $r^* \ge - \varepsilon$, terminate and report the optimal solution $x^*$ and the optimal value $R^*$; otherwise, set $V_E = V_E \cup \{u^*\}$, add the LMI cut $M (H,h,h_0,\lambda) \succeq 0$ associated with the current $u^*$ to the master problem (\ref{eq:App-03-ADRO-Model-AVG-Master}), and go to step 2.
\end{algorithmic}
\label{Ag:App-03-ADRO-Delayed-Vertex-Generation}
\end{algorithm}
Algorithm \ref{Ag:App-03-ADRO-Delayed-Vertex-Generation} terminates in a finite number of iterations bounded by $|\mbox{vert}(U)|$. In practice, it converges within a few iterations, because the sub-problem (\ref{eq:App-03-ADRO-Model-AVG-Sub}) in step 3 always identifies the most critical vertex in $\mbox{vert}(U)$. It is worth mentioning that sub-problem (\ref{eq:App-03-ADRO-Model-AVG-Sub}) is a non-convex program. Although it can be solved by general NLP solvers, we suggest three approaches with different computational complexity and optimality guarantees.
1. If the support set $W = \mathbb R^k$, the multiplier $\lambda$ can be set to zero, and it can be verified that matrix $M^i$ becomes
\begin{equation}
\begin{bmatrix}
H & \dfrac{h + C^T u^i}{2} \\
\dfrac{(h + C^T u^i)^T}{2} & h_0 - (b - Ax)^T u^i
\end{bmatrix} \succeq 0 \notag
\end{equation}
Then $H \succeq 0$ must hold, and the non-convexity appears only in the bilinear term $u^T C w$. In this case, problem (\ref{eq:App-03-ADRO-Model-AVG-Sub}) can be solved by a mountain-climbing method similar to Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing} (although here the mountain is actually a pit, since the objective is minimized).
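The alternating idea can be illustrated on a toy bilinear program over box sets (all data below are hypothetical, and boxes stand in for the sets $W$ and $U$): with one block of variables fixed, the objective is linear in the other block, so each step is an LP and the objective value never increases.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
C = rng.normal(size=(3, 3))        # bilinear coupling matrix (hypothetical)
c_w, c_u = rng.normal(size=3), rng.normal(size=3)
box = [(-1.0, 1.0)] * 3            # box stand-ins for the sets W and U

def obj(w, u):
    return w @ C @ u + c_w @ w + c_u @ u

w = np.zeros(3)
values = []
for _ in range(20):
    # u-step: with w fixed, the objective is linear in u
    u = linprog(c=C.T @ w + c_u, bounds=box).x
    # w-step: with u fixed, the objective is linear in w
    w = linprog(c=C @ u + c_w, bounds=box).x
    values.append(obj(w, u))

# Each step solves its LP exactly, so the sequence is non-increasing
print(values[0] >= values[-1])   # True
```

The scheme only guarantees a local solution of the bilinear program, which is why the text also discusses globally convergent alternatives.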
2. If $W$ is an ellipsoid, the above iterative approach is still applicable; however, the $w$-subproblem, in which $w$ is optimized, may become non-convex because $H$ may be indefinite. Since $u$ is fixed in the $w$-subproblem, the non-convex term $w^T H w$ can be decomposed as a difference of convex functions, $w^T (H + \alpha I) w - \alpha w^T w$, where $\alpha$ is a constant such that $H + \alpha I$ is positive definite; the $w$-subproblem can then be solved by the convex-concave procedure elaborated in \cite{CCP-Boyd}, or by any existing NLP solver.
3. As a non-convex QP, problem (\ref{eq:App-03-ADRO-Model-AVG-Sub}) can be solved globally by the MILP method presented in Appendix \ref{App-A-Sect04}, although this method could become time-consuming as the problem size grows.
\section{Data-driven Robust Stochastic Program}
\label{App-C-Sect04}
Most classical SO methods assume that the probability distribution of the uncertain factors is exactly known as an input of the problem. However, such information heavily relies on historical data, and may not be available or accurate enough. Using an inaccurate distribution in a classical SO model could lead to biased results. To cope with ambiguous probability distributions, a natural way is to consider a set of possible candidates derived from the available data, instead of a single distribution, just as the moment-inspired ambiguity set used in DRO. In this section, we investigate some useful SO models with distributional uncertainty described by divergence ambiguity sets, which we refer to as robust SO. When the distribution is discrete, the distributional uncertainty is captured by perturbations of the probability associated with each scenario; when the distribution is continuous, a distance between density functions must be specified first. In this section, we consider $\phi$-divergence and Wasserstein-metric based ambiguity sets.
\subsection{Robust Chance Constrained Stochastic Program}
\label{App-C-Sect04-01}
We introduce chance-constrained stochastic programs with distributional robustness. The ambiguous PDF is modeled via $\phi$-divergence, and the optimal solution provides a constraint feasibility guarantee with the desired probability even under the worst-case distribution. In short, the underlying problem has the following features:
1) The PDF is continuous and the constraint violation probability is a functional.
2) Uncertain parameters do not explicitly appear in the objective function.
Main results of this section come from \cite{App03-Sect4-RCCP}.
{\noindent \bf 1. Problem formulation}
In a traditional chance-constrained stochastic
linear program, the decision maker seeks a cost-minimum solution at which some certain constraints can be met with a given probability, yielding:
\begin{equation}
\label{eq:App-03-CCP-Model}
\begin{aligned}
\min ~~ & c^T x \\
\mbox{s.t.}~~ & \Pr [C(x,\xi)] \ge 1-\alpha \\
& x \in X
\end{aligned}
\end{equation}
where $x$ is the vector of decision variables; $\xi$ is the vector of uncertain parameters, whose exact (joint) probability distribution is known to the decision maker; vector $c$ represents the cost coefficients; $X$ is a polyhedron that is independent of $\xi$; $\alpha$ is the risk level, i.e., the maximum allowed probability of constraint violation; $C(x,\xi)$ collects all uncertainty-dependent constraints, whose general form is given by
\begin{equation}
\label{eq:App-03-CCP-Cons-Uncertain}
C(x,\xi) = \{\xi~|~ \exists y : A(\xi) x + B(\xi) y \le b(\xi) \}
\end{equation}
where $A$, $B$, $b$ are constant coefficient matrices that may contain uncertain parameters; $y$ is a recourse action that can be taken after $\xi$ is known. In the presence of $y$, we call (\ref{eq:App-03-CCP-Model}) a two-stage problem; otherwise, it is a single-stage problem. In its current form, the cost of recourse actions is not considered in the objective function. If needed, we can add the second-stage cost $d^T y(\xi)$ to the objective function, where $y(\xi)$ corresponds to a specific scenario $\xi$; for instance, robust optimization may consider a worst-case cost scenario or a worst-case regret scenario, while traditional SO often tackles the expected second-stage cost $\mathbb E[d^T y(\xi)]$. We leave the treatment of a second-stage cost in the form of a worst-case expectation like (\ref{eq:App-03-ADRO-Sub-Worst-Expectation}) to the end of this section, where we show that the problem can be convexified under some technical assumptions.
In the chance constraint, for a given $x$, the probability of constraint satisfaction can be evaluated for a particular probability distribution of $\xi$. Traditional studies on chance-constrained programs often assume that the distribution of $\xi$ is perfectly known. However, this assumption can be very strong because it requires a lot of historical data. Moreover, the optimal solution may be sensitive to the true distribution and thus highly suboptimal in practice. To overcome these difficulties, a prudent method is to consider a set of probability distributions belonging to a pre-specified ambiguity set $D$, and require that the chance constraint should be satisfied under all possible distributions in $D$, resulting in the following robust chance-constrained programming problem:
\begin{equation}
\label{eq:App-03-RCCP-Model}
\begin{aligned}
\min ~~ & c^T x \\
\mbox{s.t.}~~ & \inf_{f(\xi) \in D} \Pr[C(x,\xi)]
\ge 1-\alpha \\
& x \in X
\end{aligned}
\end{equation}
where $f(\xi)$ is the probability density function of random variable $\xi$.
The ambiguity set $D$ in (\ref{eq:App-03-RCCP-Model}), which encodes distributional information, can be constructed in a data-driven fashion, such as the moment-based ones used in Appendix \ref{App-C-Sect03}. Please see \cite{Am-Set-Overview} for more information on establishing $D$ based on moment data and other structural properties, such as symmetry and unimodality. The tractability of (\ref{eq:App-03-RCCP-Model}) largely depends on the form of $D$. For example, if $D$ is built on the mean value and covariance matrix (which is called a Chebyshev ambiguity set), a single robust chance constraint can be reformulated as an LMI, and a set of joint robust chance constraints can be approximated by BMIs \cite{Static-DRO}; the probability of constraint violation under more general moment-based ambiguity sets can be evaluated by solving conic optimization problems \cite{Am-Set-Overview}.
A shortcoming of the moment description is that it provides no direct measure of the distance between the candidate PDFs in $D$ and a reference distribution: two PDFs with the same moments may differ greatly in other respects. Furthermore, the worst-case distribution corresponding to a Chebyshev ambiguity set always places more weight far from the mean value, subject to the variance constraint. This long-tail effect is a source of conservatism. In this section, we consider a confidence set built around a reference distribution. The motivation is that the decision maker may have some knowledge of what distribution the uncertainty follows; although such a distribution could be inexact, the true density function should not deviate far from it.
To describe distributional ambiguity in terms of a PDF, the first question is how to characterize the distance between two functions. One common measure of the distance between density functions is the $\phi$-divergence, which is defined as \cite{Am-Set-Phi-Divergence}
\begin{equation}
\label{eq:App-03-RCCP-Phi-Div}
D_\phi(f \| f_0) = \int_{\rm \Omega}
\phi \left( \dfrac{f(\xi)}{f_0(\xi)} \right) f_0(\xi) d\xi
\end{equation}
where $f$ and $f_0$ stand for the particular density function and the estimated one (or the reference distribution), respectively; function $\phi$ satisfies:
\begin{equation*}
\begin{lgathered}
\mbox{(C1)}~~ \phi(1) = 0 \\
\mbox{(C2)}~~ 0\phi(x/0) = \begin{cases}
x \lim_{p \to +\infty} \phi(p)/p & \mbox{if } x > 0 \\
0 & \mbox{if } x = 0
\end{cases} \\
\mbox{(C3)}~~ \phi(x)= +\infty~ \mbox{for } x<0 \\
\mbox{(C4)}~~ \phi(x)~ \mbox{is a convex function on}~ \mathbb R^+
\end{lgathered}
\end{equation*}
It is proposed in \cite{Am-Set-Phi-Divergence} that the ambiguity set can be built as:
\begin{equation}
\label{eq:App-03-Conf-Set-Phi-Div}
D = \{P: D_\phi(f \| f_0) \le d, f={\rm d} P/{\rm d} \xi\}
\end{equation}
where the tolerance $d$ can be adjusted by the decision maker according to their attitude towards risk. The ambiguity set in (\ref{eq:App-03-Conf-Set-Phi-Div}) can be denoted by $D_{\phi}$ without causing confusion with the $\phi$-divergence $D_\phi(f \| f_0)$. Compared to moment-based ambiguity sets, especially the Chebyshev ambiguity set, where only the first- and second-order moments are involved, the density-based description captures the overall profile of the ambiguous distribution and may therefore yield less conservative solutions, although it does not guarantee consistent moments. Which description is better depends on data availability: if we are more confident in the reference distribution, (\ref{eq:App-03-Conf-Set-Phi-Div}) may be better; if we only have limited statistical information, such as the mean and variance, the moment-based sets are more straightforward.
\begin{table}[!htp]
\footnotesize
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{1em}
\caption{Instances of $\phi$-divergences}
\centering
\begin{tabular}{cc}
\toprule
Divergence & function $\phi(x)$ \\
\midrule
KL-divergence & $x \log x - x + 1$ \\
reverse KL-divergence & $- \log x$ \\
Hellinger distance & $({\sqrt x} -1)^2$ \\
Variation distance & $|x-1|$ \\
J-divergence & $(x-1)\log x$ \\
$\chi^2$ divergence & $(x-1)^2$ \\
$\alpha$-divergence &
$\begin{cases}
\dfrac{4}{1-\alpha^2} \left( 1-x^{(1+\alpha)/2} \right)
& \mbox{if}~ \alpha \ne \pm 1 \\
x \ln x & \mbox{if}~ \alpha = 1 \\
- \ln x & \mbox{if}~ \alpha = -1
\end{cases}$ \\
\bottomrule
\end{tabular}
\label{tab:App03-RCCP-01}
\end{table}
Many common divergence measures are special cases of the $\phi$-divergence, each corresponding to a particular choice of the function $\phi$. Some examples are given in Table \ref{tab:App03-RCCP-01} \cite{App03-Sect4-Example-Phi-Div}. In what follows, we will use the KL-divergence. Substituting its function $\phi$ into (\ref{eq:App-03-RCCP-Phi-Div}), the KL-divergence is given by
\begin{equation}
\label{eq:App-03-RCCP-KL-Div}
D_\phi(f \| f_0) = \int_{\rm \Omega}
\log \left( \dfrac{f(\xi)}{f_0(\xi)} \right) f(\xi) d\xi
\end{equation}
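For discrete distributions the integral becomes a sum, and the basic properties of the KL-divergence (nonnegativity, and zero when the distributions coincide) are easy to check numerically. The probability vectors in the sketch below are hypothetical.

```python
import numpy as np

def kl_divergence(f, f0):
    """Discrete KL-divergence: sum_i f_i * log(f_i / f0_i).

    Terms with f_i = 0 contribute 0, consistent with condition (C2).
    """
    f, f0 = np.asarray(f, float), np.asarray(f0, float)
    mask = f > 0
    return float(np.sum(f[mask] * np.log(f[mask] / f0[mask])))

f0 = np.array([0.5, 0.3, 0.2])     # reference distribution (hypothetical)
f = np.array([0.4, 0.4, 0.2])      # candidate distribution

print(kl_divergence(f0, f0))       # 0.0: identical distributions
print(kl_divergence(f, f0) >= 0)   # True: KL-divergence is nonnegative
```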
Before presenting the main results in \cite{App03-Sect4-RCCP}, the definition of conjugate duality is given. For a univariate function $g: \mathbb R \to \mathbb R \cup \{+\infty\}$, its conjugate function $g^*: \mathbb R \to \mathbb R \cup \{+\infty\}$ is defined as
\begin{equation*}
g^*(t) = \sup_{x \in \mathbb R} \{tx - g(x)\}
\end{equation*}
For a valid function $\phi$ for $\phi$-divergence satisfying (C1)-(C4), its conjugate function $\phi^*$ is convex, nondecreasing, and the following condition holds \cite{App03-Sect4-RCCP}
\begin{equation}
\label{eq:App-03-DRSO-Conjugate-Ineq}
\phi^*(x) \ge x
\end{equation}
Besides, if $\phi^*$ is a finite constant on a closed interval $[a, b]$, then it is a finite constant on the interval $(-\infty,b]$.
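For the KL-divergence, $\phi(x) = x \log x - x + 1$, and its conjugate is known in closed form as $\phi^*(t) = e^t - 1$. A grid-based supremum reproduces this and confirms inequality (\ref{eq:App-03-DRSO-Conjugate-Ineq}); the grid range and test points below are arbitrary choices.

```python
import numpy as np

def phi_kl(x):
    # phi for the KL-divergence: x log x - x + 1 (with phi(0) = 1)
    return np.where(x > 0, x * np.log(np.maximum(x, 1e-300)) - x + 1, 1.0)

def conjugate(t, grid):
    # phi*(t) = sup_x { t x - phi(x) }, approximated over a dense grid
    return np.max(t * grid - phi_kl(grid))

grid = np.linspace(0.0, 50.0, 500_001)
for t in [-1.0, 0.0, 0.5, 1.0]:
    approx = conjugate(t, grid)
    exact = np.exp(t) - 1.0          # known closed form for the KL case
    assert abs(approx - exact) < 1e-3
    assert approx >= t - 1e-9        # the conjugate dominates the identity
print("conjugate checks passed")
```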
{\noindent \bf 2. Equivalent formulation}
It is revealed in \cite{App03-Sect4-RCCP} that when the confidence set $D$ is constructed based on $\phi$-divergence, robust chance constrained program (\ref{eq:App-03-RCCP-Model}) can be easily transformed into a traditional chance-constrained program (\ref{eq:App-03-CCP-Model}) at the reference distribution by calibrating the confidence tolerance $\alpha$.
\begin{theorem}
\label{thm:App03-RCCP-Phi-Div}
{\rm \cite{App03-Sect4-RCCP}} Let $\mathbb P_0$ be the cumulative distribution function generated by density function $f_0$, then the robust chance constraint
\begin{equation}
\label{eq:App-03-Thm-1}
\inf_{\mathbb P(\xi) \in \{D_{\phi}(f\|f_0) \le d\} }
\Pr[C(x,\xi)] \ge 1-\alpha
\end{equation}
constructed based on $\phi$-divergence is equivalent to a traditional chance constraint
\begin{equation}
\label{eq:App-03-Thm-2}
\Pr\nolimits_0[C(x,\xi)] \ge 1-\alpha^\prime_+
\end{equation}
where $\Pr_0$ means that the probability is evaluated at the reference distribution $\mathbb P_0$, $\alpha^\prime_+ = \max\{\alpha^\prime,0\}$, and $\alpha^\prime$ can be computed by
\begin{equation*}
\alpha^\prime = 1 - \inf_{z \in Z} \left\{
\dfrac{\phi^*(z_0+z)-z_0-\alpha z +d}{\phi^*(z_0+z)-\phi^*(z_0)} \right\}
\end{equation*}
where
\begin{equation*}
Z = \left\{ z \middle| \begin{lgathered}
z>0,~ z_0 + \pi z \le l_\phi \\
\underline m(\phi^*) \le z+z_0 \le \overline m(\phi^*)
\end{lgathered} \right\}
\end{equation*}
In the above formula, the constants are $l_\phi = \lim_{x \to +\infty} \phi(x)/x$, $\overline m(\phi^*) = \inf \{ m: \phi^*(m)=+\infty\}$, and $\underline m(\phi^*) = \sup \{ m: \phi^*$ is a finite constant on $(-\infty,m]\}$; Table \ref{tab:App03-RCCP-02} summarizes their values for typical $\phi$-divergence measures, and
\begin{equation*}
\pi = \begin{cases}
-\infty & \mbox{if Leb}\{[f_0=0]\}=0 \\
0 & \mbox{if Leb $\{[f_0=0]\}>0$ and Leb$\{[f_0=0] \backslash C(x,\xi) \}=0$} \\
1 & \mbox{otherwise}
\end{cases}
\end{equation*}
where Leb$\{\cdot\}$ is the Lebesgue measure on $\mathbb R^{Dim(\xi)}$.
\end{theorem}
\begin{table}[!htp]
\footnotesize
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{1em}
\caption{Values of $l_\phi$, $\underline m(\phi^*)$, and $\overline m(\phi^*)$ for $\phi$-divergences}
\centering
\begin{tabular}{cccc}
\toprule
$\phi$-Divergence & $l_\phi$ & $\underline m(\phi^*)$ & $\overline m(\phi^*)$\\
\midrule
KL-divergence & $+\infty$ & $-\infty$ & $+\infty$ \\
Hellinger distance & $1$ & $-\infty$ & $1$ \\
Variation distance & $1$ & $-1$ & $1$ \\
J-divergence & $+\infty$ & $-\infty$ & $+\infty$ \\
$\chi^2$ divergence & $+\infty$ & $-2$ & $+\infty$ \\
\bottomrule
\end{tabular}
\label{tab:App03-RCCP-02}
\end{table}
The values of $\alpha^\prime$ for the variation distance and the $\chi^2$ divergence have analytical expressions; for the KL-divergence, $\alpha^\prime$ can be computed by a one-dimensional line search. The results are shown in Table \ref{tab:App03-RCCP-03}.
\begin{table}[!htp]
\footnotesize
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{1em}
\caption{Values of $\alpha^\prime$ for some $\phi$-divergences}
\centering
\begin{tabular}{cc}
\toprule
$\phi$-Divergence & $\alpha^\prime$ \\
\midrule
$\chi^2$ divergence & $\alpha^\prime = \alpha- \dfrac{\sqrt{d^2 + 4d (\alpha-\alpha^2)}-(1-2\alpha)d}{2d+2}$ \\
& \\
Variation distance & $\alpha^\prime = \alpha - \dfrac{1}{2}d$ \\
& \\
KL-divergence & $\alpha^\prime = 1- \inf_{x \in (0,1)} \left\{ \dfrac{{\rm e}^{-d}x^{1-\alpha}-1}{x-1} \right\}$ \\
\bottomrule
\end{tabular}
\label{tab:App03-RCCP-03}
\end{table}
For the KL divergence, calculating $\alpha^\prime$ entails
solving $\inf_{x\in (0,1)} h(x)$ where
\begin{equation*}
h(x) = \dfrac{{\rm e}^{-d}x^{1-\alpha}-1}{x-1}
\end{equation*}
Its first-order derivative is given by
\begin{equation*}
h^\prime (x) = \dfrac{1-\alpha{\rm e}^{-d}x^{1-\alpha}-(1-\alpha){\rm e}^{-d} x^{-\alpha}}{(x-1)^2},~~ \forall x \in (0,1)
\end{equation*}
To establish the convexity of $h(x)$, we show that $h^\prime (x)$ is an increasing function on $(0,1)$. To this end, first notice that the denominator $(x-1)^2$ is positive and decreasing in $x$ on the open interval $(0,1)$; next, the numerator is increasing in $x$, because its first-order derivative gives
\begin{equation*}
(1-\alpha{\rm e}^{-d}x^{1-\alpha}-(1-\alpha){\rm e}^{-d} x^{-\alpha})^\prime _x = \alpha (1-\alpha){\rm e}^{-d} (x^{-\alpha-1}-x^{-\alpha}) >0,~ \forall x \in (0,1)
\end{equation*}
Hence $h^\prime(x)$ is monotonically increasing, and $h(x)$ is a convex function on $(0,1)$. Moreover, because $h^\prime(x)$ is continuous on $(0,1)$, with $\lim_{x \to 0^+} h^\prime(x) = -\infty$ and $\lim_{x \to 1^-} h^\prime(x) = +\infty$, there must be some $x^* \in (0,1)$ such that $h^\prime(x^*)=0$, i.e., the infimum of $h(x)$ is attained. The minimum of $h(x)$ can be calculated by solving the nonlinear equation $h^\prime(x)=0$ via Newton's method, or by a derivative-free line search such as golden section search. Either scheme is computationally inexpensive.
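A minimal numerical sketch of this line search, assuming the KL-divergence row of Table \ref{tab:App03-RCCP-03}; \texttt{scipy}'s bounded scalar minimizer plays the role of the derivative-free search, and the values of $\alpha$ and $d$ are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def alpha_prime_kl(alpha, d):
    """alpha' = 1 - inf_{x in (0,1)} (e^{-d} x^{1-alpha} - 1) / (x - 1)."""
    h = lambda x: (np.exp(-d) * x ** (1.0 - alpha) - 1.0) / (x - 1.0)
    res = minimize_scalar(h, bounds=(1e-9, 1.0 - 1e-9), method="bounded")
    return 1.0 - res.fun

alpha, d = 0.10, 0.05
ap = alpha_prime_kl(alpha, d)
print(0.0 < ap < alpha)   # True: the robust version tightens the tolerance
```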
Finally, we discuss the connection between the modified tolerance $\alpha^\prime$ and its original value $\alpha$. Because a set of distributions is considered in (\ref{eq:App-03-Thm-1}), the threshold in (\ref{eq:App-03-Thm-2}) should be no smaller than the original one, i.e., $1-\alpha^\prime \ge 1-\alpha$ must hold. To see this, recalling inequality (\ref{eq:App-03-DRSO-Conjugate-Ineq}) for the conjugate function, we have
\begin{equation*}
\alpha \phi^* (z_0 + z) + (1 - \alpha) \phi^* (z_0) \ge
\alpha (z_0 + z) + (1 - \alpha) z_0
\end{equation*}
The right-hand side equals $\alpha z + z_0$; in the ambiguity set (\ref{eq:App-03-Conf-Set-Phi-Div}), $d$ is strictly positive, therefore
\begin{equation*}
\alpha \phi^* (z_0 + z) + (1 - \alpha) \phi^* (z_0) \ge
\alpha z + z_0 -d
\end{equation*}
which gives
\begin{equation*}
\phi^* (z_0 + z) - z_0 - \alpha z + d \ge
(1 - \alpha) (\phi^* (z_0 + z) - \phi^* (z_0) )
\end{equation*}
Recalling the expression of $\alpha^\prime$ in Theorem \ref{thm:App03-RCCP-Phi-Div}, we arrive at
\begin{equation*}
1-\alpha^\prime = \dfrac{\phi^* (z_0+z)-z_0-\alpha z+d}{\phi^* (z_0+z)-\phi^*(z_0)}
\ge 1 - \alpha
\end{equation*}
which is the desired conclusion.
Theorem \ref{thm:App03-RCCP-Phi-Div} shows that handling a robust chance constraint is almost as easy as tackling a traditional chance constraint under the reference distribution $\mathbb P_0$, apart from the effort of computing $\alpha^\prime$. If $\mathbb P_0$ belongs to the family of log-concave distributions, the chance constraint is convex. As a special case, if $\mathbb P_0$ is a Gaussian distribution or a uniform distribution on an ellipsoidal support, a single chance constraint reduces to a second-order cone constraint \cite{App03-Sect4-Example-Q-Distribution}. In more general cases, the chance constraint is non-convex in $x$; in such circumstances, we use a risk-based reformulation and the sample average approximation (SAA) approach.
{\noindent \bf 3. Risk and SAA based reformulation}
Owing to the different description of distributional ambiguity and the presence of the wait-and-see decision $y$, constraint (\ref{eq:App-03-Thm-1}) is treated differently from the DRO problem (\ref{eq:App-03-DRO-Model-1}) with static robust chance constraint (\ref{eq:App-03-DRO-DRCC}), which can be transformed into an SDP. As demonstrated in Theorem \ref{thm:App03-RCCP-Phi-Div}, it comes down to a traditional chance constraint (\ref{eq:App-03-Thm-2}) in which the distributional ambiguity is absorbed into a modified confidence level. The remaining task is to express (\ref{eq:App-03-Thm-2}) in a solver-compatible form.
{\noindent \bf 1) Loss function}
For given $x$ and $\xi$, the constraints in $C(x,\xi)$ cannot be met if no $y$ satisfying $A(\xi) x + B(\xi) y \le b(\xi)$ exists. To quantify the constraint violation under scenario $\xi$ and first-stage decision $x$, define the following loss function $L(x,\xi)$
\begin{equation}
\label{eq:App-03-RCCP-Loss-Function}
\begin{aligned}
L(x,\xi) = \min_{y,\sigma} ~~ & \sigma \\
\text{s.t.}~~ & A(\xi) x + B(\xi) y \le b(\xi) + \sigma {\bf 1}
\end{aligned}
\end{equation}
where $\bf 1$ is an all-one vector of compatible dimension. If $L(x,\xi) > 0$, no recourse action $y$ can restore feasibility, and the minimum slack $\sigma$ quantifies the loss; otherwise, all constraints are satisfiable after the uncertain parameter is known. As we assume $C(x,\xi)$ is a bounded polytope, problem (\ref{eq:App-03-RCCP-Loss-Function}) is always feasible and bounded below. Therefore, the loss function $L(x,\xi)$ is well-defined, and the chance constraint (\ref{eq:App-03-Thm-2}) can be written as
\begin{equation}
\label{eq:App-03-RCC-Loss-Fun}
\Pr\nolimits_0 [ L(x,\xi) \leq 0 ] \geq 1 - \alpha^\prime_+
\end{equation}
In this way, the joint chance constraints are consolidated into a single one, just like what has been done in (\ref{eq:App-03-DRO-DRCC-Joint}) and (\ref{eq:App-03-DRO-DRCC-Joint-Para}).
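The loss function is itself a small LP over the decision vector $[y; \sigma]$. The sketch below evaluates it for a hypothetical one-dimensional scenario in which the realized constraints reduce to $1 \le y \le 2$, so the loss is negative (feasible with slack to spare).

```python
import numpy as np
from scipy.optimize import linprog

def loss(B, rhs):
    """L(x, xi) = min sigma  s.t.  B y <= rhs + sigma * 1,
    where rhs = b(xi) - A(xi) x for a fixed scenario xi (hypothetical data).
    Decision vector: [y, sigma]."""
    m, n = B.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                   # minimize sigma
    A_ub = np.hstack([B, -np.ones((m, 1))])       # B y - sigma * 1 <= rhs
    res = linprog(c, A_ub=A_ub, b_ub=rhs, bounds=[(None, None)] * (n + 1))
    return res.fun

# One recourse variable y whose realized constraints are 1 <= y <= 2:
B = np.array([[1.0], [-1.0]])
rhs = np.array([2.0, -1.0])
print(loss(B, rhs))   # -0.5: minimized at y = 1.5, halfway between the bounds
```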
{\noindent \bf 2) VaR based reformulation: An MILP}
For a given probability tolerance $\beta$ and a first-stage decision $x$, the $\beta$-VaR of the loss function $L(x,\xi)$ under the reference distribution $\mathbb P_0$ is defined as
\begin{equation}
\label{eq:App-03-RCC-VaR-Def}
\beta \mbox{-VaR}(x) = \min \left\{ a \in \mathbb {R} \middle| \int_{L(x,\xi) \leq a} f_0(\xi) \mbox{d} \xi \ge \beta \right\}
\end{equation}
i.e., the smallest threshold $a$ such that the loss does not exceed $a$ with probability at least $\beta$. According to (\ref{eq:App-03-RCC-VaR-Def}), an equivalent expression of chance constraint (\ref{eq:App-03-RCC-Loss-Fun}) is
\begin{equation}
\label{eq:App-03-RCC-CC-VaR}
(1 - \alpha^\prime_+) \mbox{-VaR} (x) \le 0
\end{equation}
so that explicit probability evaluation is obviated. Furthermore, if SAA is used, (\ref{eq:App-03-RCC-Loss-Fun}) and (\ref{eq:App-03-RCC-CC-VaR}) indicate that the scenarios leading to $L(x,\xi) > 0$ account for a fraction of at most $\alpha^\prime_+$ among all sampled data.
Let $\xi_1,\xi_2, \cdots, \xi_q$ be $q$ scenarios sampled from the random variable $\boldsymbol{\xi}$. We use $q$ binary variables $z_1, z_2, \cdots, z_q$ to identify possible infeasibility: $z_k = 1$ implies that the constraints cannot be satisfied in scenario $\xi_k$. To this end, let $M$ be a sufficiently large constant and consider the inequality
\begin{equation}
\label{eq:App-03-RCC-Loss-SSA}
A(\xi_k) x + B(\xi_k) y_k \le b(\xi_k) + M z_k
\end{equation}
In (\ref{eq:App-03-RCC-Loss-SSA}), if $z_k = 0$, the recourse action $y_k$ recovers all constraints in scenario $\xi_k$, and thus $C(x,\xi_k)$ is non-empty; otherwise, if no such recourse action $y_k$ exists, constraint violation takes place. To absorb the infeasibility, $z_k = 1$, which renders (\ref{eq:App-03-RCC-Loss-SSA}) redundant, so there is effectively no constraint for scenario $\xi_k$. The fraction of sampled scenarios that incur inevitable constraint violations is counted by $\sum_{k=1}^q z_k/q$. We can therefore write out the following MILP reformulation of the robust chance-constrained program (\ref{eq:App-03-RCCP-Model}) based on VaR and SAA
\begin{equation}
\label{eq:App-03-RCCP-VaR-MILP}
\begin{aligned}
\min ~~ & c^T x \\
\mbox{s.t.}~~ & x \in X \\
& A(\xi_k) x + B(\xi_k) y_k \le b(\xi_k) + M z_k,~ k=1,\cdots,q \\
& \sum_{k = 1}^{q} z_k \leq q\alpha^\prime_+,~ z_k \in \{ 0, 1 \},~ k=1,\cdots,q
\end{aligned}
\end{equation}
In MILP (\ref{eq:App-03-RCCP-VaR-MILP}), constraint violation can happen in at most $q \alpha^\prime_+$ out of $q$ scenarios under the reference distribution; according to Theorem \ref{thm:App03-RCCP-Phi-Div}, the reliability requirement (\ref{eq:App-03-Thm-1}) under all possible distributions in the ambiguity set $D_\phi$ can then be guaranteed by the selection of $\alpha^\prime_+$. Improved MILP formulations of chance constraints which do not rely on a specific big-M parameter are comprehensively studied in \cite{App03-Sect4-CC-SAA-MILP}, where some structural properties of the feasible region are also revealed.
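For intuition (a sketch of our own, not a full MILP solve): once $x$ is fixed, the optimal $z_k$ in (\ref{eq:App-03-RCCP-VaR-MILP}) simply flags the scenarios whose recourse problem is infeasible, so each $z_k$ can be determined by an LP feasibility check. The one-dimensional scenario model below is hypothetical.

```python
# For a FIXED candidate x, z_k = 1 exactly when no recourse y satisfies
# A(xi_k) x + B(xi_k) y <= b(xi_k); each check is a tiny LP.
# Toy scenario model (an assumption): constraints x - y <= xi_k and y <= 1.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
q, alpha = 200, 0.10                  # scenario count and risk budget
x = np.array([0.8])

def infeasible(xi, x):
    A_ub = np.array([[-1.0], [1.0]])  # -y <= xi - x  and  y <= 1
    b_ub = np.array([xi - x[0], 1.0])
    res = linprog(np.zeros(1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)])
    return res.status != 0            # status 2 = infeasible

xis = rng.normal(0.0, 0.2, size=q)
z = np.array([infeasible(xi, x) for xi in xis])
violation_fraction = z.mean()
chance_ok = z.sum() <= q * alpha      # SAA form of the chance constraint
```

In this toy model the recourse problem is infeasible exactly when $\xi_k < x - 1$, which the LP checks reproduce.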
{\noindent \bf 3) CVaR based reformulation: An LP}
The number of binary variables in MILP (\ref{eq:App-03-RCCP-VaR-MILP}) equals the number of sampled scenarios. To guarantee the accuracy of SAA, a large number of scenarios is required, preventing MILP (\ref{eq:App-03-RCCP-VaR-MILP}) from being solved efficiently. To alleviate this difficulty, we provide a conservative LP approximation of problem (\ref{eq:App-03-RCCP-Model}) based on the properties of CVaR revealed in \cite{CVaR}.
The $\beta$-CVaR for the loss function $L(x,\xi)$ is defined as
\begin{equation}
\label{eq:App-03-VaR-Def}
\beta \mbox{-CVaR} (x) = \frac{1} {1 - \beta} \int_{L(x,\xi) \ge \beta \text{-VaR} (x)} L(x,\xi) f_0(\xi) \mbox{d} \xi
\end{equation}
which is the conditional expectation of the losses that are no less than $\beta$-VaR; therefore, the relation
\begin{equation}
\label{eq:App-03-VaR-CVaR}
\beta \mbox{-VaR} \leq \beta \mbox{-CVaR}
\end{equation}
always holds, and a conservative approximation of constraint (\ref{eq:App-03-RCC-CC-VaR}) is
\begin{equation}
(1 - \alpha^\prime_+) \mbox{-CVaR} (x) \le 0
\label{eq:App-03-RCC-CC-CVaR}
\end{equation}
Inequality (\ref{eq:App-03-RCC-CC-CVaR}) is a sufficient condition for (\ref{eq:App-03-RCC-CC-VaR}) and hence for (\ref{eq:App-03-RCC-Loss-Fun}). This conservative replacement is consistent with the spirit of robust optimization. In what follows, we reformulate (\ref{eq:App-03-RCC-CC-CVaR}) in a solver-compatible form.
According to \cite{CVaR}, the left-hand side of (\ref{eq:App-03-RCC-CC-CVaR}) is equal to the optimum of the following minimization problem
\begin{equation}
\min_{\gamma} \left\{ \gamma + \frac{1}{\alpha^\prime_+} \int_{\xi \in \mathbb {R}^K} \max \{ L(x,\xi) - \gamma, 0 \} f_0(\xi) \mbox{d} \xi \right\}
\label{eq:App-03-CVaR-Opt-Int}
\end{equation}
By performing SAA, the integral in (\ref{eq:App-03-CVaR-Opt-Int}) becomes a summation over the discrete sampled scenarios $\xi_1,\xi_2, \cdots, \xi_q$, resulting in
\begin{equation}
\label{eq:App-03-CVaR-Opt-Discrete}
\min_{\gamma} \left\{ \gamma + \frac{1}{q \alpha^\prime_+} \sum_{k=1}^q \max \left\{ L(x,\xi_k) - \gamma, 0 \right\} \right\}
\end{equation}
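The minimization (\ref{eq:App-03-CVaR-Opt-Discrete}) is piecewise-linear and convex in $\gamma$, and its optimum is attained at one of the sample losses. The sketch below (with toy losses of our own) checks that the minimizer is the empirical VaR and the optimal value the empirical CVaR.

```python
# Toy check of the discrete Rockafellar-Uryasev minimization over gamma:
# min_gamma  gamma + (1/(q*alpha)) * sum_k max(L_k - gamma, 0).
# The losses below are arbitrary illustrative numbers.
import numpy as np

losses = np.array([0.1, 0.5, -0.2, 0.9, 0.3, 1.2, -0.1, 0.4, 0.7, 0.0])
q = len(losses)
alpha = 0.25                                  # so beta = 1 - alpha = 0.75

def objective(gamma):
    return gamma + np.maximum(losses - gamma, 0.0).sum() / (q * alpha)

# The optimum of this piecewise-linear convex function is attained at one
# of the sample losses, so a finite scan over them suffices.
gammas = np.sort(losses)
vals = np.array([objective(g) for g in gammas])
cvar = vals.min()                             # empirical beta-CVaR
var = gammas[vals.argmin()]                   # empirical beta-VaR
```

For this data the empirical $0.75$-VaR is $0.7$ and the $0.75$-CVaR is $0.98$, the average of the worst $q\alpha = 2.5$ losses.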
By introducing auxiliary variables $s_k$, the feasible region defined by (\ref{eq:App-03-RCC-CC-CVaR}) can be expressed via
\begin{gather*}
\exists \gamma \in \mathbb R, s_k \in \mathbb R^+,~\sigma_k \in \mathbb R,~ k = 1,\cdots,q \\
\sigma_k - \gamma \le s_k,~ k = 1,\cdots,q \\
A(\xi_k) x + B(\xi_k) y_k \le b(\xi_k) + \sigma_k {\bf 1},
~ k = 1,\cdots,q \\
\gamma + \frac{1}{q \alpha^\prime_+} \sum_{k=1}^q s_k \leq 0
\end{gather*}
Now we can write out the conservative LP reformulation of the robust chance-constrained program (\ref{eq:App-03-RCCP-Model}) based on CVaR and SAA
\begin{equation}
\label{eq:App-03-RCCP-CVaR-LP}
\begin{aligned}
\min_{x,y,s,\gamma} ~~ & c^T x \\
\mbox{s.t.}~~ & x \in X,~ \gamma + \frac{1}{q \alpha^\prime_+} \sum_{k=1}^q s_k \leq 0,~s_k \geq 0,~ k = 1,\cdots,q \\
& A(\xi_k) x + B(\xi_k) y_k - b(\xi_k) \le (\gamma + s_k) {\bf 1},
~ k = 1,\cdots,q
\end{aligned}
\end{equation}
where $\sigma_k$ is eliminated.
According to (\ref{eq:App-03-VaR-CVaR}), condition (\ref{eq:App-03-RCC-CC-CVaR}) guarantees (\ref{eq:App-03-RCC-CC-VaR}) as well as (\ref{eq:App-03-RCC-Loss-Fun}), so the chance constraint in (\ref{eq:App-03-Thm-1}) holds with a probability no less (and usually higher) than $1-\alpha$, regardless of the true distribution in the confidence set $D_\phi$. Since (\ref{eq:App-03-VaR-CVaR}) is usually a strict inequality, a certain degree of conservatism is introduced in the CVaR based LP model (\ref{eq:App-03-RCCP-CVaR-LP}).
Relations among different mathematical models discussed in this section are summarized in Fig. \ref{fig:Fig-App03-01}.
\begin{figure}
\caption{Relations of the models discussed in this section.}
\label{fig:Fig-App03-01}
\end{figure}
{\noindent \bf 4. Considering second-stage cost}
Finally, we elaborate how to solve problem (\ref{eq:App-03-RCCP-Model}) with a second-stage cost in the sense of worst-case expectation, i.e.
\begin{equation}
\label{eq:App-03-RCCP-MEdy-1}
\begin{aligned}
\min_x~ & \left\{c^T x + \max_{P(\xi) \in D_{KL}} \mathbb E_P [Q(x,\xi)] \right\} \\
\mbox{s.t.}~~ & x \in X \\
& \sup_{P(\xi) \in D^\prime} \Pr[C(x,\xi)]
\ge 1-\alpha
\end{aligned}
\end{equation}
where $Q(x,\xi)$ is the optimal value function of the second-stage problem
\begin{equation*}
\begin{aligned}
Q(x,\xi) = \min ~~ & q^T y \\
\mbox{s.t.}~~ & B(\xi) y \le b(\xi) - A(\xi) x
\end{aligned}
\end{equation*}
which is an LP for a fixed first-stage decision $x$ and a given parameter $\xi$;
\begin{equation*}
D_{KL}=\{P(\xi)~|~ D^{KL}_\phi(f \| f_0) \le d_{KL}(\alpha^*),~ f = {\rm d} P/ {\rm d} \xi\}
\end{equation*}
is the KL-divergence based ambiguity set, where $d_{KL}(\alpha^*)$ is a threshold which determines the size of the ambiguity set, and $\alpha^*$ reflects the confidence level: the true distribution is contained in $D_{KL}$ with a probability no less than $\alpha^*$. For discrete distributions, the KL-divergence measure has the form
\begin{equation*}
D^{KL}_\phi(f \parallel f_0) = \sum_s \rho_s \log \dfrac{\rho_s}{\rho^0_s}
\end{equation*}
In either case, there are infinitely many PDFs satisfying the inequality in the ambiguity set $D_{KL}$ when $d_{KL} > 0$. When $d_{KL}=0$, the ambiguity set $D_{KL}$ becomes a singleton, and model (\ref{eq:App-03-RCCP-MEdy-1}) degenerates to a traditional SO problem. In practice, the user can specify the value of $d_{KL}$ according to his attitude towards risk. Nevertheless, a proper value of $d_{KL}$ can be obtained from probability theory. Intuitively, the more historical data we possess, the closer the reference PDF $f_0$ is to the true one, and the smaller $d_{KL}$ should be.
Suppose we have $M$ samples in total, with equal probabilities, to fit into $N$ bins, and $M_1$, $M_2$, $\cdots$, $M_N$ samples fall into the respective bins; then the discrete reference PDF for the histogram is $\{\pi_1, \cdots,\pi_N\}$, where $\pi_i = M_i/M$, $i = 1, \cdots, N$. Let $\pi^r_1$, $\cdots$, $\pi^r_N$ be the true probabilities of the bins; according to the discussion in \cite{Am-Set-Phi-Divergence}, the random variable $2M \sum_{i=1}^N \pi^r_i \log (\pi^r_i/\pi_i)$ asymptotically follows the $\chi^2$ distribution with $N-1$ degrees of freedom. Therefore, the confidence threshold can be calculated from
\begin{equation*}
d_{KL}(\alpha^*) = \dfrac{1}{2M}\chi_{N-1,\alpha^*}^2
\end{equation*}
where $\chi_{N-1,\alpha^*}^2$ stands for the $\alpha^*$ upper quantile of the $\chi^2$ distribution with $N-1$ degrees of freedom. For other divergence based ambiguity sets, see the discussion in \cite{Am-Set-Phi-Divergence}. The robust chance constraints in (\ref{eq:App-03-RCCP-MEdy-1}) are tackled using the method presented previously, and the objective function is treated independently. The ambiguity sets in the objective function and in the chance constraints could be the same or different, and are thus distinguished by $D_{KL}$ and $D^\prime$.
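Numerically, the threshold is a one-liner; the values of $M$, $N$, and $\alpha^*$ below are assumptions for illustration.

```python
# d_KL(alpha*) = chi2_{N-1, alpha*} / (2M): the chi-square quantile below
# which the statistic falls with probability alpha*. M, N, conf are toy
# assumptions.
from scipy.stats import chi2

M, N = 1000, 10                 # sample size and number of histogram bins
conf = 0.95                     # confidence level alpha*
d_kl = chi2.ppf(conf, df=N - 1) / (2 * M)
```

Doubling $M$ halves the threshold, so the ambiguity set shrinks as more data arrive, consistent with the discussion above.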
Sometimes, it is imperative to coordinately optimize the costs in both stages. For example, in the facility planning problem, the first stage represents the investment decision and the second stage describes the operation management. If we only optimize the first-stage cost, then facilities with lower investment costs will be preferred, but they may suffer from higher operating costs and thus not be the optimal choice in the long run.
To solve (\ref{eq:App-03-RCCP-MEdy-1}), we need a tractable reformulation for the worst-case expectation problem under KL-divergence ambiguity set
\begin{equation}
\label{eq:App-03-MEdy-KL-Div}
\max_{P(\xi) \in D_{KL}} \mathbb E_P [Q(x,\xi)]
\end{equation}
under fixed $x$. It is proved in \cite{App03-Sect4-RCCP-mEdy,Am-Set-Phi-Divergence} that problem (\ref{eq:App-03-MEdy-KL-Div}) is equivalent to
\begin{equation}
\label{eq:App-03-MEdy-KL-Div-Dual}
\min_{\alpha \ge 0} ~ \alpha \log \mathbb E_{P_0} [{\rm e}^{Q(x,\xi)/\alpha}] + \alpha d_{KL}
\end{equation}
where $\alpha$ is the dual variable. Formulation (\ref{eq:App-03-MEdy-KL-Div-Dual}) has two advantages: first, the expectation is evaluated with respect to the reference distribution $P_0$, which is much easier than optimizing over the ambiguity set $D_{KL}$; second, the maximum operator switches to a minimum operator, which is consistent with the objective of the decision-making problem. We will use SAA to express the expectation, giving rise to a discrete version of problem (\ref{eq:App-03-MEdy-KL-Div-Dual}). In fact, in the discrete case, (\ref{eq:App-03-MEdy-KL-Div-Dual}) can be derived from (\ref{eq:App-03-MEdy-KL-Div}) using Lagrange duality. The following interpretation is given in \cite{App03-Sect4-KL-Div-UC}.
Denote by $\xi_1,\cdots,\xi_s$ the representative scenarios in the discrete distribution; their corresponding probabilities in the reference PDF and the actual PDF are given by $P_0 = \{p^0_1,\cdots,p^0_s\}$ and $P = \{p_1,\cdots,p_s\}$, respectively. Then problem (\ref{eq:App-03-MEdy-KL-Div}) can be written in a discrete form as
\begin{equation}
\label{eq:App-03-MEdy-KL-Div-Discrete}
\begin{aligned}
\max_p~~ & \sum_{i=1}^s p_i Q(x,\xi_i) \\
\mbox{s.t.} ~~ & \sum_{i=1}^s p_i \log \left( \dfrac{p_i}{p^0_i} \right) \le d_{KL} \\
& p \ge 0,~ 1^T p =1
\end{aligned}
\end{equation}
where vector $p=[p_1,\cdots,p_s]^T$ is the decision variable. According to Lagrange duality theory, the objective function of the dual problem is
\begin{equation}
\label{eq:App-03-MEdy-KL-Div-Discrete-Dual-Obj-1}
g(\alpha,\mu) = \alpha d_{KL} + \mu + \sum_{i=1}^s \max_{p_i \ge 0} p_i \left( Q(x,\xi_i) - \mu - \alpha \log \left( \dfrac{p_i}{p^0_i} \right) \right)
\end{equation}
where $\mu$ is the dual variable associated with the equality constraint $1^T p =1$, and $\alpha$ is associated with the KL-divergence inequality. Substituting $t_i = p_i / p^0_i$ into (\ref{eq:App-03-MEdy-KL-Div-Discrete-Dual-Obj-1}) and eliminating $p_i$, we get
\begin{equation*}
g(\alpha,\mu) = \alpha d_{KL} + \mu + \sum_{i=1}^s \max_{t_i \ge 0}~ p^0_i t_i \left( Q(x,\xi_i) - \mu - \alpha \log t_i \right)
\end{equation*}
Setting the first-order derivative of $t_i ( Q(x,\xi_i) - \mu - \alpha \log t_i )$ with respect to $t_i$ to zero, the optimal solution is
\begin{equation*}
t_i = {\rm e} ^{\frac{ Q(x,\xi_i) - \mu - \alpha}{\alpha}} > 0
\end{equation*}
and the maximum is
\begin{equation*}
\alpha {\rm e} ^{\frac{ Q(x,\xi_i) - \mu - \alpha}{\alpha}}
\end{equation*}
As a result, the dual objective reduces to
\begin{equation}
\label{eq:App-03-MEdy-KL-Div-Discrete-Dual-Obj-2}
g(\alpha,\mu) = \alpha d_{KL} + \mu + \alpha \sum_{i=1}^s
p^0_i {\rm e} ^{\frac{ Q(x,\xi_i) - \mu - \alpha}{\alpha}}
\end{equation}
and the dual problem of (\ref{eq:App-03-MEdy-KL-Div-Discrete}) can be rewritten as
\begin{equation}
\label{eq:App-03-MEdy-KL-Div-Discrete-Dual-1}
\min_{\alpha \ge 0,\mu}~ g(\alpha,\mu)
\end{equation}
The optimal solution $\mu^*$ must satisfy $\partial g / \partial \mu = 0$, yielding
\begin{equation*}
\sum_{i=1}^s p^0_i {\rm e} ^{\frac{ Q(x,\xi_i) - \mu^* - \alpha}{\alpha}}=1
\end{equation*}
or
\begin{equation*}
\mu^* = \alpha \log \sum_{i=1}^s p^0_i ~{\rm e}^{Q(x,\xi_i)/\alpha} - \alpha
\end{equation*}
Substituting the above relations into $g(\alpha,\mu)$ results in the following dual problem
\begin{equation}
\label{eq:App-03-MEdy-KL-Div-Discrete-Dual-2}
\min_{\alpha \ge 0}~ \left\{ \alpha d_{KL} + \alpha \log \sum_{i=1}^s p^0_i ~{\rm e}^{Q(x,\xi_i)/\alpha} \right\}
\end{equation}
which is a discrete form of (\ref{eq:App-03-MEdy-KL-Div-Dual}).
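A quick numerical sketch of (\ref{eq:App-03-MEdy-KL-Div-Discrete-Dual-2}) with toy data of our own: the dual is a one-dimensional minimization over $\alpha$, its value lies between the nominal expectation $\mathbb E_{P_0}[Q]$ and the largest scenario cost, and it grows with $d_{KL}$.

```python
# 1-D dual of the KL worst-case expectation:
#   min_{alpha >= 0} alpha*d_KL + alpha * log sum_i p0_i exp(Q_i/alpha).
# Q and p0 are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

Q = np.array([1.0, 2.0, 4.0, 7.0])        # second-stage costs per scenario
p0 = np.array([0.4, 0.3, 0.2, 0.1])       # reference probabilities

def dual_value(d_kl):
    def g(a):
        # numerically stabilized: shift by max(Q) before exponentiating
        m = Q.max()
        return a * d_kl + m + a * np.log(np.sum(p0 * np.exp((Q - m) / a)))
    res = minimize_scalar(g, bounds=(1e-6, 1e4), method='bounded')
    return res.fun

w1, w2 = dual_value(0.05), dual_value(0.2)
nominal = float(p0 @ Q)
```

As $d_{KL} \to 0$ the dual value approaches the nominal expectation, recovering the traditional SO objective.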
In (\ref{eq:App-03-RCCP-MEdy-1}), replacing the inner problem (\ref{eq:App-03-MEdy-KL-Div}) with its Lagrangian dual form (\ref{eq:App-03-MEdy-KL-Div-Discrete-Dual-2}), we can obtain an equivalent mathematical program
\begin{equation}
\label{eq:App-03-RCCP-MEdy-2}
\begin{aligned}
\min~ & \left\{c^T x + \alpha d_{KL} + \alpha \log \sum_{i=1}^s p^0_i ~{\rm e}^{\theta_i/\alpha} \right\} \\
\mbox{s.t.}~~ & x \in X,~ \alpha \ge 0,~ \theta_i = q^T y_i,~ \forall i \\
& A(\xi_i) x + B(\xi_i) y_i \le b(\xi_i),~ \forall i \\
& \mbox{Cons-RCC}
\end{aligned}
\end{equation}
where Cons-RCC stands for the LP based formulation of the robust chance constraints, so the constraints in problem (\ref{eq:App-03-RCCP-MEdy-2}) are all linear, and the only nonlinearity rests in the last term of the objective function. In what follows, we show that it is actually a convex function of $\theta_i$ and $\alpha$.
In the first step, we claim that the following function is convex (\cite{CVX-Book-Boyd}, page 87, Example 3.14)
\begin{equation*}
h_1(\theta) = \log \left( \sum_{i=1}^s {\rm e}^{\theta_i} \right)
\end{equation*}
Since the composition with an affine mapping preserves convexity (\cite{CVX-Book-Boyd}, Sect. 3.2.2), a new function
\begin{equation*}
h_2(\theta) = h_1 (A \theta + b)
\end{equation*}
remains convex under the affine mapping $\theta \to A \theta + b$. Let $A$ be the identity matrix, and
\begin{equation*}
b = \begin{bmatrix}
\log p^0_1 \\ \vdots \\ \log p^0_s
\end{bmatrix}
\end{equation*}
then
\begin{equation*}
h_2(\theta) = \log \left( \sum_{i=1}^s p^0_i {\rm e}^{\theta_i} \right)
\end{equation*}
is convex. Finally, the function
\begin{equation*}
h_3(\alpha,\theta) = \alpha h_2(\theta/\alpha)
\end{equation*}
is the perspective of $h_2(\theta)$, and is therefore also convex (\cite{CVX-Book-Boyd}, page 89, Sect. 3.2.6).
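A numerical spot-check (not a proof) of this convexity chain: the midpoint inequality for $h_3$ holds on randomly drawn points with $\alpha > 0$.

```python
# Midpoint-convexity check of h3(alpha, theta) = alpha*log sum p0_i e^{theta_i/alpha}
# on random points; p0 is an arbitrary illustrative distribution.
import numpy as np

rng = np.random.default_rng(1)
p0 = np.array([0.5, 0.3, 0.2])

def h3(alpha, theta):
    return alpha * np.log(np.sum(p0 * np.exp(theta / alpha)))

convex_ok = True
for _ in range(200):
    a1, a2 = rng.uniform(0.1, 5.0, size=2)
    t1, t2 = rng.normal(size=3), rng.normal(size=3)
    mid = h3((a1 + a2) / 2, (t1 + t2) / 2)
    if mid > (h3(a1, t1) + h3(a2, t2)) / 2 + 1e-9:
        convex_ok = False
```

Every drawn midpoint respects $h_3(\text{mid}) \le \tfrac12 (h_3(\cdot) + h_3(\cdot))$, as joint convexity requires.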
In view of this convex structure, (\ref{eq:App-03-RCCP-MEdy-2}) is essentially a convex program, and a local minimum is also the global one. However, according to our experiments, general purpose NLP solvers still have difficulty solving (\ref{eq:App-03-RCCP-MEdy-2}). Therefore, we employ the outer approximation method \cite{App03-Sect4-OA-1,App03-Sect4-OA-2}. The idea is to solve the epigraph form of (\ref{eq:App-03-RCCP-MEdy-2}), in which the nonlinearity is moved into the constraints, and then to linearize the feasible region with an increasing number of cutting planes generated in an iterative algorithm, until a certain convergence criterion is met. In this way, the hard problem (\ref{eq:App-03-RCCP-MEdy-2}) can be solved via a sequence of LPs. The outer approximation algorithm is outlined in Algorithm \ref{Ag:App-03-DRO-Outer-Approximation}. Because (\ref{eq:App-03-RCCP-MEdy-2}) is a convex program, the cutting planes do not remove any feasible point, and Algorithm \ref{Ag:App-03-DRO-Outer-Approximation} finds the global optimal solution in finitely many steps, regardless of the initial point. Of course, the number of iterations is affected by the quality of the initial guess. A proper initiation can be obtained by solving a traditional SO problem without considering distributional uncertainty.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf Outer Approximation}
\begin{algorithmic}[1]
\STATE Choose an initial point $(\theta^1,\alpha^1)$ and convergence tolerance $\varepsilon > 0$; the initial objective value is $R^1=0$, and the iteration index is $k=1$.
\STATE Solve the following master problem which is an LP
\begin{equation}
\label{eq:App-03-RCCP-MEdy-OA-Lin}
\begin{aligned}
\min_{\alpha,\theta,\gamma,x}~~ & c^T x + \alpha d_{KL} + \gamma \\
\mbox{s.t.}~~ & h_3(\alpha^j,\theta^j) + \nabla h_3(\alpha^j,\theta^j)^T
\begin{bmatrix}
\alpha - \alpha^j \\
\theta - \theta^j
\end{bmatrix}
\le \gamma,~ j=1,\cdots,k \\
& x \in X,~ \alpha \ge 0,~ \theta_i = q^T y_i,~ \forall i \\
& A(\xi_i) x + B(\xi_i) y_i \le b(\xi_i),~ \forall i \\
& \mbox{Cons-RCC}
\end{aligned}
\end{equation}
The optimal value is $R^{k+1}$, and the optimal solution is $(x^{k+1},\theta^{k+1},\alpha^{k+1})$.
\STATE If $R^{k+1} - R^k \le \varepsilon$, terminate and report the optimal solution $(x^{k+1},\theta^{k+1},\alpha^{k+1})$; otherwise, update $k \leftarrow k+1$, calculate the gradient $\nabla h_3$ at the obtained solution $(\alpha^k,\theta^k)$, add the following cut to problem (\ref{eq:App-03-RCCP-MEdy-OA-Lin}), and go to step 2.
\begin{equation}
\label{eq:App-03-RCCP-MEdy-OA-Cut}
h_3(\alpha^k,\theta^k) + \nabla h_3(\alpha^k,\theta^k)^T
\begin{bmatrix}
\alpha - \alpha^k \\
\theta - \theta^k
\end{bmatrix}
\le \gamma
\end{equation}
\end{algorithmic}
\label{Ag:App-03-DRO-Outer-Approximation}
\end{algorithm}
\begin{figure}
\caption{Illustration of the outer approximation algorithm.}
\label{fig:Fig-App03-02}
\end{figure}
The idea behind Algorithm \ref{Ag:App-03-DRO-Outer-Approximation} is illustrated in Fig. \ref{fig:Fig-App03-02}. The original objective function is nonlinear but convex. In the epigraph form (\ref{eq:App-03-RCCP-MEdy-OA-Lin}), a set of linear cuts (\ref{eq:App-03-RCCP-MEdy-OA-Cut}) is generated dynamically according to the optimal solution found in step 2, so the convex region can be approximated with arbitrarily high accuracy around the optimal solution. The convergence of the basic outer approximation method is analyzed in \cite{App03-Sect4-OA-3,App03-Sect4-OA-4}. In practice, Algorithm \ref{Ag:App-03-DRO-Outer-Approximation} solves problem (\ref{eq:App-03-RCCP-MEdy-2}) very efficiently, because problem (\ref{eq:App-03-RCCP-MEdy-OA-Lin}) is an LP, the objective function is smooth, and the algorithm often converges within a small number of iterations.
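The cutting-plane mechanism can be seen on a one-dimensional toy instance of our own (not problem (\ref{eq:App-03-RCCP-MEdy-2}) itself): gradient cuts accumulate in an LP master problem whose optimal value increases to the true minimum.

```python
# Minimal outer-approximation loop: minimize the smooth convex function
# f(x) = exp(x) - 2x on [-2, 2] by accumulating cuts
# gamma >= f(x_j) + f'(x_j)(x - x_j) and re-solving the LP master problem.
import numpy as np
from scipy.optimize import linprog

f = lambda x: np.exp(x) - 2 * x          # convex, minimized at x = ln 2
fp = lambda x: np.exp(x) - 2             # its derivative

x_j, cuts, eps, R_prev = -2.0, [], 1e-6, -np.inf
for _ in range(100):
    s = fp(x_j)                          # new cut: gamma >= s*x + c
    cuts.append((s, f(x_j) - s * x_j))
    # master LP over (x, gamma): min gamma  s.t.  s_j*x - gamma <= -c_j
    A_ub = np.array([[sj, -1.0] for sj, cj in cuts])
    b_ub = np.array([-cj for sj, cj in cuts])
    res = linprog([0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-2.0, 2.0), (None, None)])
    x_j, R = res.x[0], res.fun
    if R - R_prev <= eps:                # lower bounds have stalled
        break
    R_prev = R
```

The master values $R$ form a nondecreasing sequence of lower bounds, exactly as the $R^k$ do in Algorithm \ref{Ag:App-03-DRO-Outer-Approximation}.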
\subsection{Stochastic Program with Discrete Distributions}
\label{App-C-Sect04-02}
In the ARO approach discussed in Appendix \ref{App-C-Sect02}, the uncertain parameter is assumed to reside in the so-called uncertainty set. Every element in this set is treated equally, so the worst-case scenario must be one of the extreme points of the uncertainty set, which is the main source of conservatism in the traditional RO paradigm. In contrast, in classic two-stage SO, the uncertain parameter $\xi$ is modeled through a certain probability distribution $P$, and the expected cost is minimized, giving rise to
\begin{equation}
\label{eq:App-03-TTSP}
\begin{aligned}
\min ~~ & c^T x + \mathbb E_P [Q(x,\xi)] \\
\mbox{s.t.}~~ & x \in X
\end{aligned}
\end{equation}
where the bounded polyhedron $X$ is the feasible region of first-stage decision $x$, $\xi$ is the uncertain parameter, and $Q(x,\xi)$ is the optimal value function of the second-stage problem, which is an LP for fixed $x$ and $\xi$
\begin{equation}
\label{eq:App-03-TTSP-Stage2}
\begin{aligned}
Q(x,\xi) = \min ~~ & q^T y \\
\mbox{s.t.}~~ & B(\xi) y \le b(\xi) - A(\xi) x
\end{aligned}
\end{equation}
where $q$ is the vector of cost coefficients, $A(\xi)$, $B(\xi)$, and $b(\xi)$ are constant matrices affected by the uncertain data, and $y(\xi)$ is the second-stage decision, i.e., the reaction to the realization of the uncertainty.
Since the true PDF of $\xi$ is difficult to obtain in some circumstances, in this section we do not require perfect knowledge of the probability distribution $\mathbb P$ of the random variable $\xi$; instead, we let it be ambiguous around a reference distribution and reside in an ambiguity set $D$, which can be constructed from limited historical data. We take all possible distributions in the ambiguity set into consideration, so as to minimize the expected cost under the worst-case distribution, resulting in the following model
\begin{equation}
\label{eq:App-03-RTSP}
\begin{aligned}
\min ~~ & c^T x + \max_{P(\xi) \in D} \mathbb E_P [Q(x,\xi)] \\
\mbox{s.t.}~~ & x \in X
\end{aligned}
\end{equation}
Compared with (\ref{eq:App-03-RCCP-Model}), constraint violation is not allowed in problem (\ref{eq:App-03-RTSP}), and the second-stage expected cost under the worst-case distribution is considered. It is a particular case of (\ref{eq:App-03-RCCP-MEdy-1}) without chance constraints. Specifically, we will use discrete distributions in this section. This formulation enjoys several benefits. One is the explicit exposition of the density function: in the previous sections, a candidate in the moment or divergence based ambiguity set is not given in analytical form and vanishes during the dual transformation, so we have no clear knowledge of the worst-case distribution; for discrete distributions, the density function is a vector of real entries associated with the probabilities of the representative scenarios, and we can easily construct the ambiguity set and optimize an expectation over discrete distributions. The other benefit is computational, as will be seen later. The main results in this section come from \cite{App03-Sect4-DRSO-Zhao,App03-Sect4-DRSO-Ding}.
{\noindent \bf 1. Modeling the confidence set}
For a given set of historical data with $M$ elements, which can be regarded as $M$ samples of the random variable, we can draw a histogram with $K$ bins as an estimate of the reference distribution. Suppose the numbers of samples falling into the bins are $M_1,M_2,\cdots,M_K$, where $\sum^K_{i=1} M_i=M$; then the reference (empirical) distribution of the uncertain data is given by $\mathbb P_0 = [p^0_1,\cdots,p^0_K]$, where $p^0_i = M_i/M$, $i=1,\cdots,K$. Since the data may not suffice to fit a PDF with high accuracy, the actual distribution should be close to, but might be different from, its reference. It is proposed in \cite{App03-Sect4-DRSO-Zhao} to construct the ambiguity set using statistical inference corresponding to a given tolerance. Two types of ambiguity sets are suggested, based on the $L_1$ norm and the $L_{\infty}$ norm
\begin{equation}
\label{eq:App-03-RTSP-D1}
D_1 = \left\{ \mathbb P \in \mathbb R^K_+ \middle| \| \mathbb P - \mathbb P_0 \|_1 \le \theta \right\} = \left\{ p \in {\rm \Delta}_K \middle| \sum_{i=1}^K \left| p_i - p^0_i \right| \le \theta \right\}
\end{equation}
\begin{equation}
\label{eq:App-03-RTSP-D2}
D_\infty = \left\{ \mathbb P \in \mathbb R^K_+ \middle| \| \mathbb P - \mathbb P_0 \|_\infty \le \theta \right\} = \left\{ p \in {\rm \Delta}_K \middle| \max_{1\le i \le K} \left| p_i - p^0_i \right| \le \theta \right\}
\end{equation}
where ${\rm \Delta}_K = \{p \in [0,1]^K:{\bf 1}^T p = 1\}$. These two ambiguity sets can be easily expressed by polyhedral sets as follows
\begin{equation}
\label{eq:App-03-RTSP-D1-Poly}
D_1 = \left\{ p \in {\rm \Delta}_K \middle|
\begin{lgathered}
\exists t \in \mathbb R^K_+ :
\sum\nolimits_{k=1}^K t_k \le \theta \\
t_k \ge p_k - p^0_k,~ k = 1,\cdots,K \\
t_k \ge p^0_k - p_k,~ k = 1,\cdots,K
\end{lgathered}
\right\}
\end{equation}
\begin{equation}
\label{eq:App-03-RTSP-D2-Poly}
D_\infty = \left\{ p \in {\rm \Delta}_K \middle|
\begin{lgathered}
\theta \ge p_k - p^0_k,~ k=1,\cdots, K \\
\theta \ge p^0_k - p_k,~ k=1,\cdots, K
\end{lgathered} \right\}
\end{equation}
where $p=[p_1,\cdots,p_K]^T$ is the variable in the ambiguity set; $t=[t_1,\cdots,t_K]^T$ is the lifting (auxiliary) variable in $D_1$; parameter $\theta$ reflects the decision maker's confidence level on the distance between the reference distribution and the true one. Apparently, the more historical data we utilize, the smaller this distance will be. Provided with $M$ observations and $K$ bins, the quantitative relation between the value of $\theta$ and the number of samples is given in \cite{App03-Sect4-DRSO-Zhao} as
\begin{equation}
\label{eq:App-03-RTSP-D1-Conf}
\Pr \{ \|\mathbb P - \mathbb P_0\|_1 \le \theta \} \ge
1-2K {\rm e}^{-2M \theta/K}
\end{equation}
\begin{equation}
\label{eq:App-03-RTSP-D2-Conf}
\Pr \{ \|\mathbb P - \mathbb P_0\|_\infty \le \theta \} \ge
1-2K {\rm e}^{-2M \theta}
\end{equation}
According to (\ref{eq:App-03-RTSP-D1-Conf}) and (\ref{eq:App-03-RTSP-D2-Conf}), if we want to maintain (\ref{eq:App-03-RTSP-D1}) and (\ref{eq:App-03-RTSP-D2}) with a confidence level of $\beta$, parameter $\theta$ should be selected as
\begin{equation}
\label{eq:App-03-RTSP-D1-Theta}
\mbox{For } D_1:~ \theta_1 = \dfrac{K}{2M} \ln \dfrac{2K}{1-\beta}
\end{equation}
\begin{equation}
\label{eq:App-03-RTSP-D2-Theta}
\mbox{For } D_\infty:~ \theta_\infty = \dfrac{1}{2M} \ln \dfrac{2K}{1-\beta}
\end{equation}
As the size of the sampled data approaches infinity, $\theta_1$ and $\theta_\infty$ decrease to 0, and the reference distribution converges to the true one. Accordingly, problem (\ref{eq:App-03-RTSP}) becomes a traditional two-stage SO problem.
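The radii (\ref{eq:App-03-RTSP-D1-Theta}) and (\ref{eq:App-03-RTSP-D2-Theta}) are direct to evaluate; the numbers below are illustrative.

```python
# theta_1 = K/(2M) ln(2K/(1-beta)),  theta_inf = 1/(2M) ln(2K/(1-beta));
# M, K, beta below are toy values.
import numpy as np

def theta_1(M, K, beta):
    return K / (2 * M) * np.log(2 * K / (1 - beta))

def theta_inf(M, K, beta):
    return 1 / (2 * M) * np.log(2 * K / (1 - beta))

K, beta = 20, 0.99
t1_small, t1_big = theta_1(100, K, beta), theta_1(10000, K, beta)
ti_small = theta_inf(100, K, beta)
```

Note that $\theta_1 = K \theta_\infty$ for the same $M$, $K$, $\beta$, and both shrink as the sample size $M$ grows.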
{\noindent \bf 2. CCG based decomposition algorithm}
Let $\xi^k$ denote the representative scenario of the $k$-th bin and $p_k$ the corresponding probability, where $\mathbb P = [p_1,\cdots,p_K]$ belongs to the ambiguity set in the form of (\ref{eq:App-03-RTSP-D1}) or (\ref{eq:App-03-RTSP-D2}); then problem (\ref{eq:App-03-RTSP}) can be written as
\begin{equation}
\label{eq:App-03-RTSP-Discrete}
\begin{aligned}
\min ~~ & c^T x + \max_{\mathbb P} \sum_{k=1}^K p_k \min_{y^k} q^T y^k \\
\mbox{s.t.}~~ & x \in X,~ \mathbb P \in D \\
& A(\xi^k) x + B(\xi^k) y^k \le b(\xi^k), \forall k
\end{aligned}
\end{equation}
Problem (\ref{eq:App-03-RTSP-Discrete}) has a min-max-min structure and can be solved by the Benders decomposition method \cite{App03-Sect4-DRSO-Zhao} or the CCG method \cite{App03-Sect4-DRSO-Ding}. The latter will be introduced in the rest of this section. It decomposes problem (\ref{eq:App-03-RTSP-Discrete}) into a lower-bounding master problem and an upper-bounding subproblem, which are solved iteratively until the gap between the upper and lower bounds becomes smaller than a convergence tolerance. The basic idea has been explained in Appendix \ref{App-C-Sect02-03}. As shown in \cite{App03-Sect4-DRSO-Ding}, the second-stage problem can be a broader class of convex programs, such as an SOCP.
{\noindent \bf 1) Subproblem}
For a given first-stage decision $x$, the subproblem aims to find the worst-case distribution, which comes down to a max-min program shown below
\begin{equation}
\label{eq:App-03-RTSP-SP-1}
\max_{\mathbb P \in D} \sum_{k=1}^K p_k \min_{y^k \in Y_k(x)} q^T y^k
\end{equation}
where
\begin{equation}
\label{eq:App-03-RTSP-SP-Yk}
Y_k(x) = \{ y^k ~|~ B(\xi^k) y^k \le b(\xi^k) - A(\xi^k) x\},~ \forall k
\end{equation}
Problem (\ref{eq:App-03-RTSP-SP-1}) has some unique features
that facilitate the computation:
(1) Feasible sets $Y_k$ are decoupled.
(2) The probability variables $p_k$ do not affect feasible sets $Y_k$.
(3) The ambiguity set $D$ and feasible sets $Y_k$ are decoupled.
Although (\ref{eq:App-03-RTSP-SP-1}) appears nonlinear due to the product of the scalar variable $p_k$ and the vector variable $y^k$ in the objective function, as we will see in the following discussion, it is equivalent to an LP or can be decomposed into several LPs, and thus can be solved efficiently.
{\emph{An equivalent LP} }
Because $p_k \ge 0$, we can exchange the summation operator and the minimization operator, and problem (\ref{eq:App-03-RTSP-SP-1}) can be written as
\begin{equation}
\label{eq:App-03-RTSP-SP-2}
\max_{\mathbb P \in D} \min_{y^k \in Y_k(x)} \sum_{k=1}^K p_k q^T y^k
\end{equation}
For the inner minimization problem, $p_k$ is constant, so it is an LP, whose dual problem is
\begin{equation*}
\begin{aligned}
\max_{\mu^k}~~ & \sum_{k=1}^K \left(b(\xi^k)-A(\xi^k) x \right)^T \mu^k \\
\mbox{s.t.}~~ & \mu^k \le 0,~ B^T (\xi^k) \mu^k = p_k q,~ \forall k
\end{aligned}
\end{equation*}
where $\mu^k$ are dual variables. Substituting it into (\ref{eq:App-03-RTSP-SP-2}), and combining two maximization operators, we obtain
\begin{equation}
\label{eq:App-03-RTSP-SP-3}
\begin{aligned}
\max_{p_k,\mu^k}~~ & \sum_{k=1}^K \left(b(\xi^k)-A(\xi^k) x \right)^T \mu^k\\
\mbox{s.t.}~~ & \mu^k \le 0,~ B^T (\xi^k) \mu^k = p_k q,~ \forall k \\
& (p_1,\cdots,p_K) \in D
\end{aligned}
\end{equation}
Since $D$ is polyhedral, problem (\ref{eq:App-03-RTSP-SP-3}) is in fact an LP. The optimal solution offers the worst-case distribution $[p^*_1,\cdots,p^*_K]$, which will be used to generate cuts in the master problem. The recourse actions $y^k$ in each scenario will be provided by the optimal solution of the master problem.
Although LP is acknowledged as the most tractable class of mathematical programs, when $K$ is extremely large it is still challenging to solve (\ref{eq:App-03-RTSP-SP-3}), or even to store it in a computer. Nevertheless, the separability of the feasible regions allows solving (\ref{eq:App-03-RTSP-SP-1}) in a decomposed manner.
{\emph{A decomposition method} }
As mentioned above, $p_k$ has no impact on the sets $Y_k$, which are decoupled; moreover, because $p_k$ is a scalar in the objective function of each inner minimization problem, it does not affect the optimal solution $y^k$. In view of this, problem (\ref{eq:App-03-RTSP-SP-1}) can be decomposed into $K+1$ smaller LPs, $K$ of which can be solved in parallel. To this end, for each $\xi^k$, solve the following LP:
\begin{equation*}
h^*_k = \min_{y^k \in Y_k(x)} q^T y^k,~ k=1,\cdots,K
\end{equation*}
The optimal value is $h^*_k$; after obtaining optimal values $(h^*_1,\cdots,h^*_K)$ of the $K$ LPs, we can retrieve the worst-case distribution through solving an additional LP
\begin{equation*}
\max_{\mathbb P \in D}~~ \sum_{k=1}^K p_k h^*_k
\end{equation*}
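The decomposition can be sketched as follows on a toy instance of our own (one recourse variable, three bins): $K$ scenario LPs give $h^*_k$, and one more LP over the polyhedral set $D_\infty$ recovers the worst-case distribution.

```python
# Decomposed subproblem: K per-scenario LPs, then one LP over D_inf.
# Scenario model (an assumption): min y  s.t.  y >= xi_k - x,  y >= 0.
import numpy as np
from scipy.optimize import linprog

K_bins = 3
q_cost = np.array([1.0])
x = np.array([0.5])
xi = np.array([0.2, 0.6, 1.0])            # representative scenarios
h = np.empty(K_bins)
for k in range(K_bins):
    res = linprog(q_cost, A_ub=np.array([[-1.0]]),
                  b_ub=np.array([x[0] - xi[k]]), bounds=[(0.0, None)])
    h[k] = res.fun                        # h*_k = max(xi_k - x, 0)

# worst-case distribution over D_inf = {p in simplex : ||p - p0||_inf <= theta}
p0, theta = np.array([0.5, 0.3, 0.2]), 0.1
res = linprog(-h, A_eq=np.ones((1, K_bins)), b_eq=np.array([1.0]),
              bounds=[(max(0.0, p0[k] - theta), min(1.0, p0[k] + theta))
                      for k in range(K_bins)])
worst_exp = -res.fun
nominal = float(p0 @ h)
```

As expected, the worst-case distribution shifts probability mass toward the costly scenarios, so the worst-case expectation exceeds the nominal one.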
In fact, if the second-stage problem is a conic program (in \cite{App03-Sect4-DRSO-Ding}, it is an SOCP), the above discussion remains valid as long as strong duality holds.
It is interesting to notice that for the ARO problem in Appendix \ref{App-C-Sect02-03}, the subproblem comes down to a non-convex bilinear program after dualizing the inner minimization problem, and is generally NP-hard; in this section, the subproblem gives rise to LPs, whose complexity is polynomial in the problem size. The reason for this difference is that the uncertain parameter in (\ref{eq:App-03-RTSP-Discrete}) is expressed by sampled scenarios and is thus constant; the distributional uncertainty appears only in the objective function and does not influence the constraints of the second-stage problem, so the linear max-min problem (\ref{eq:App-03-RTSP-SP-2}) reduces to an LP after a dual transformation.
{\noindent \bf 2) The CCG algorithm}
The motivation of the CCG algorithm has been thoroughly discussed in Appendix \ref{App-C-Sect02-03}. In this section, for a fixed $x$, the optimal value of subproblem (\ref{eq:App-03-RTSP-SP-1}) is denoted by $Q(x)$, and $c^T x + Q(x)$ gives an upper bound on the optimal value of (\ref{eq:App-03-RTSP-Discrete}), because the first-stage variable is not yet optimized. Then a set of new variables and optimality cuts are generated and added to the master problem. If the subproblem is infeasible in some scenario, a set of feasibility cuts is assigned to the master problem instead. The master problem starts from a subset of $D$, which is updated by including the worst-case distribution identified by the subproblem. Consequently, the master problem is a relaxed version of the original problem (\ref{eq:App-03-RTSP-Discrete}), and provides a lower bound on its optimal value. The flowchart of the CCG procedure for problem (\ref{eq:App-03-RTSP-Discrete}) is given in Algorithm \ref{Ag:App-03-RTSO-CCG}. This algorithm terminates in a finite number of iterations, because the confidence set $D$ has finitely many extreme points.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf CCG algorithm for problem (\ref{eq:App-03-RTSP-Discrete})}
\begin{algorithmic}[1]
\STATE Choose a convergence tolerance $\varepsilon>0$, and an initial probability vector $p^0 \in D$; Set LB $=-\infty$, UB $=+\infty$, and iteration index $s=0$.
\STATE Solve the master problem
\begin{equation}
\label{eq:App-03-RTSO-CCG-MP}
\begin{aligned}
\min_{x,\eta,y^{k,m}} ~~ & c^T x + \eta \\
\mbox{s.t.} ~~ & x \in X,~~ \eta \ge \sum_{k=1}^K p^m_k q^T y^{k,m},~
m \in \mbox{Opt}\{ 0,1,\cdots,s\} \\
& A(\xi^k) x + B(\xi^k) y^{k,m} \le b(\xi^k),~
m \in \mbox{Opt}\{ 0,1,\cdots,s\},~ \forall k \\
& A(\xi^k) x + B(\xi^k) y^{k,m} \le b(\xi^k),~
m \in \mbox{Fea}\{ 0,1,\cdots,s\},~k \in I(s)
\end{aligned}
\end{equation}
where Opt$\{*\}/$Fea$\{*\}$ selects the iterations in which an optimality (feasibility) cut is generated; $I(s)$ denotes the index set of scenarios in which the second-stage problem is infeasible in iteration $s$. The optimal solution is $(x^*,\eta^*)$; update LB $=c^T x^* + \eta^*$;
\STATE Solve subproblem (\ref{eq:App-03-RTSP-SP-1}) with current $x^*$. If there exists some $\xi^k$ such that $Y_k(x^*)=\emptyset$, then generate new variable $y^{k,s}$, update $I(s)$, and add the following feasibility cut to the master problem
\begin{equation}
\label{eq:App-03-RTSO-CCG-Fea-Cut}
A(\xi^k) x + B(\xi^k) y^{k,s} \le b(\xi^k),~ k \in I(s)
\end{equation}
Otherwise, if $Y_k(x^*) \ne \emptyset, \forall k$, subproblem (\ref{eq:App-03-RTSP-SP-1}) can be solved. The optimal solution is $p^{s+1}$, and the optimal value is $Q(x^*)$; update UB $= \min\{\mbox{UB}, c^T x^* + Q(x^*)\}$, create new variables $(y^{1,s+1},\cdots,y^{K,s+1})$, and add the following optimality cut to the master problem
\begin{equation}
\label{eq:App-03-RTSO-CCG-Opt-Cut}
\begin{gathered}
\eta \ge \sum_{k=1}^K p^{s+1}_k q^T y^{k,s+1} \\
A(\xi^k) x + B(\xi^k) y^{k,s+1} \le b(\xi^k),~ \forall k
\end{gathered}
\end{equation}
\STATE If UB$-$LB$< \varepsilon$, terminate and report the optimal first-stage solution $x^*$ as well as the worst-case distribution $p^{s+1}$; otherwise, update $s \leftarrow s+1$, and go to step 2.
\end{algorithmic}
\label{Ag:App-03-RTSO-CCG}
\end{algorithm}
\subsection{Formulations based on Wasserstein Metric}
Up to now, formulations based on the KL-divergence ambiguity set have received plenty of attention, because they enjoy some convenience when deriving the robust counterpart. For example, it is already known from Sect. \ref{App-C-Sect04-01} that a robust chance constraint under the KL-divergence ambiguity set reduces to a traditional chance constraint under the empirical distribution with a rescaled confidence level, and that the worst-case expectation problem under the KL-divergence ambiguity set is equivalent to a convex program. However, by its very definition, the KL-divergence ambiguity set encounters a theoretical difficulty in representing confidence sets for continuous distributions \cite{Am-Set-Wasserstein-1}: the empirical distribution calibrated from finite data must be discrete, and any distribution in the KL-divergence ambiguity set must assign positive probability mass to each sampled scenario. Since a continuous distribution has a density function, it must reside outside the KL-divergence ambiguity set regardless of the sampled scenarios. In contrast, Wasserstein metric based ambiguity sets contain both discrete and continuous distributions. They offer an explicit confidence level for the unknown distribution belonging to the set, giving the decision maker more informative guidance for controlling the model's conservativeness. This section introduces state-of-the-art results in robust SO with Wasserstein metric based ambiguity sets. The most critical issues are the robust counterparts of the worst-case expectation problem and of robust chance constraints, which will be discussed in turn. They can be embedded in single- and two-stage robust SO problems without substantial barriers. The materials in this section mainly come from \cite{Am-Set-Wasserstein-1}.
{\noindent \bf 1. Wasserstein metric based ambiguity set}
Let $\rm \Xi$ be the support set of the multi-dimensional random variable $\xi \in \mathbb R^m$, and let $M({\rm \Xi})$ denote the set of all probability distributions $\mathbb Q$ supported on $\rm \Xi$ with $\mathbb E_{\mathbb Q}[\|\xi\|]=\int_{\rm \Xi} \|\xi\| \mathbb Q({\rm d}\xi) < \infty$, where $\|\cdot\|$ stands for an arbitrary norm on $\mathbb R^m$.
\begin{definition}
\label{def:App-03-Wass-Metric}
The Wasserstein metric $d_W: M({\rm \Xi}) \times M({\rm \Xi}) \to \mathbb R_+$ is defined as
\begin{equation*}
d_W(\mathbb Q,\mathbb Q_0) = \inf \left( \int_{\rm \Xi^2} \left\| \xi - \xi^0 \right\| {\rm \Pi} ({\rm d} \xi, {\rm d} \xi^0) \middle|
\begin{gathered}
\mbox{ $\rm \Pi$ is a joint distribution of $\xi$ and} \\
\mbox{ $\xi^0$ with marginals $\mathbb Q$ and $\mathbb Q_0$}
\end{gathered} \right)
\end{equation*}
for two probability distributions $\mathbb Q, \mathbb Q_0 \in M({\rm \Xi})$.
\end{definition}
As a special case, for two discrete distributions the Wasserstein metric is given by
\begin{equation}
d_W(\mathbb Q,\mathbb Q_0) = \inf_{\pi \ge 0} \left( \sum_i \sum_j \pi_{ij} \left\| \xi_j - \xi^0_i \right\| ~\middle|~
\begin{gathered}
\sum\nolimits_j \pi_{ij} = p^0_i,~ \forall i \\
\sum\nolimits_i \pi_{ij} = p_j,~ \forall j
\end{gathered}~~ \right)
\label{eq:App-03-Wass-Def-Dis}
\end{equation}
where $p^0_i$ and $p_j$ denote the probabilities of the representative scenarios $\xi^0_i$ and $\xi_j$, respectively.
In either case, the decision variable $\rm \Pi$ (or $\pi_{ij}$) represents the probability mass transported from $\xi^0_i$ to $\xi_j$; therefore, the Wasserstein metric can be viewed as the minimal cost of a transportation plan, where the distance $\| \xi_j - \xi^0_i \|$ encodes the cost of moving a unit of mass.
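The transportation-problem view of (\ref{eq:App-03-Wass-Def-Dis}) can be checked directly by solving the LP with \texttt{scipy.optimize.linprog}; the two discrete distributions below are hypothetical one-dimensional examples.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_1d(p0, xs0, p, xs):
    """Solve the transportation LP defining the discrete Wasserstein metric:
       min sum_ij pi_ij * |xs[j] - xs0[i]|
       s.t. sum_j pi_ij = p0_i (rows), sum_i pi_ij = p_j (columns), pi >= 0."""
    I, J = len(p0), len(p)
    cost = np.abs(np.subtract.outer(xs0, xs)).ravel()  # cost of cell (i, j)
    A_eq = np.zeros((I + J, I * J))
    for i in range(I):              # row-sum constraints (marginal Q0)
        A_eq[i, i * J:(i + 1) * J] = 1.0
    for j in range(J):              # column-sum constraints (marginal Q)
        A_eq[I + j, j::J] = 1.0
    b_eq = np.concatenate([p0, p])
    res = linprog(c=cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (I * J))
    return res.fun

# Uniform distribution on {0, 1} vs uniform distribution on {2, 3}:
# every unit of mass travels a total distance of 2, so d_W = 2.
d = wasserstein_1d([0.5, 0.5], [0.0, 1.0], [0.5, 0.5], [2.0, 3.0])
print(d)
```

In one dimension this value also equals the area between the two cumulative distribution functions, which gives an independent way to verify the LP result.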
The Wasserstein metric can also be represented in the dual form
\begin{equation}
d_W(\mathbb Q,\mathbb Q_0) = \sup_{f \in L} \left(
\int_{\rm \Xi} f(\xi) \mathbb Q({\rm d} \xi) -
\int_{\rm \Xi} f(\xi) \mathbb Q_0({\rm d} \xi) \right)
\label{eq:App-03-Wass-Def-Dual}
\end{equation}
where $L=\{f:|f(\xi)-f(\xi^0)|\le \| \xi - \xi^0 \|,\forall \xi,\xi^0 \in {\rm \Xi} \}$ (Theorem 3.2 in \cite{Am-Set-Wasserstein-1}; this duality was first established by Kantorovich and Rubinstein \cite{Kantorovich-Rubinstein} for distributions with a bounded support).
With the above definition, the Wasserstein ambiguity set is the ball of radius $\epsilon$ centered at the empirical distribution $\mathbb Q_0$
\begin{equation}
\label{eq:App-03-Wass-Ambiguity-Set}
D_W = \left\{ \mathbb Q \in M({\rm \Xi}):d_W(\mathbb Q,\mathbb Q_0) \le \epsilon \right\}
\end{equation}
where $\mathbb Q_0$ is constructed with $N$ independent data samples
\begin{equation*}
\mathbb Q_0 = \frac{1}{N} \sum_{i=1}^N \delta_{\xi^0_i}
\end{equation*}
and $\delta_{\xi^0_i}$ stands for the Dirac distribution concentrating a unit mass at $\xi^0_i$.
In particular, we require the unknown distribution $\mathbb Q$ to satisfy a light-tail assumption, i.e., there exists $a > 1$ such that
\begin{equation*}
\int_{\rm \Xi} {\rm e}^{\|\xi\|^a} \mathbb Q({\rm d} \xi) < \infty
\end{equation*}
This assumption indicates that the tail of the distribution $\mathbb Q$ decays at an exponential rate. If $\rm \Xi$ is compact, the assumption holds trivially. Under this assumption, modern measure concentration theory provides the following finite-sample guarantee for the unknown distribution belonging to the Wasserstein ambiguity set
\begin{equation}
\Pr\left[ d_W (\mathbb Q, \mathbb Q_0) \ge \epsilon \right] \le
\begin{dcases}
c_1 {\rm e}^{-c_2 N \epsilon^{\max\{m,2\}}} & \mbox{ if }\epsilon \le 1\\
c_1 {\rm e}^{-c_2 N \epsilon^a} & \mbox{ if } \epsilon > 1
\end{dcases}
\label{eq:App-03-Wass-Conf-Level}
\end{equation}
where $c_1, c_2$ are positive constants that depend on $a$, the value $A$ of the exponential moment above, and the dimension $m$; the first branch requires $m \ne 2$.
Equation (\ref{eq:App-03-Wass-Conf-Level}) provides an a priori estimate of the confidence level for $\mathbb Q \notin D_W$. On the other hand, we can utilize (\ref{eq:App-03-Wass-Conf-Level}) to select the parameter $\epsilon$ of the Wasserstein ambiguity set such that $D_W$ contains the unknown distribution $\mathbb Q$ with probability $1-\beta$ for some prescribed $\beta$. This requires solving for $\epsilon$ from the right-hand side of (\ref{eq:App-03-Wass-Conf-Level}) with the left-hand side set to $\beta$, resulting in
\begin{equation}
\epsilon = \begin{dcases}
\left( \dfrac{\ln(c_1 \beta^{-1})}{c_2 N} \right)^{1/\max\{m,2\}} & \mbox{ if } N \ge \dfrac{\ln(c_1 \beta^{-1})}{c_2} \\
\left( \dfrac{\ln(c_1 \beta^{-1})}{c_2 N} \right)^{1/a} & \mbox{ if } N < \dfrac{\ln(c_1 \beta^{-1})}{c_2}
\end{dcases}
\label{eq:App-03-Wass-Radius}
\end{equation}
A Wasserstein ambiguity set with the above radius can be regarded as a confidence set for the unknown distribution $\mathbb Q$, as in statistical testing.
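As a sketch, the radius rule (\ref{eq:App-03-Wass-Radius}) translates directly into code; the constants $c_1$, $c_2$ below are placeholders, since their true values are instance-dependent.

```python
import math

def wasserstein_radius(N, beta, m, a, c1=1.0, c2=1.0):
    """Radius eps such that Q lies in D_W with confidence 1 - beta,
       following the two-branch rule; c1, c2 are hypothetical constants."""
    t = math.log(c1 / beta) / c2
    if N >= t:                          # large-sample branch (eps <= 1)
        return (t / N) ** (1.0 / max(m, 2))
    return (t / N) ** (1.0 / a)         # small-sample branch (eps > 1)

# The radius shrinks as the sample size N grows.
eps_small_N = wasserstein_radius(N=5, beta=math.exp(-10), m=3, a=2)
eps_large_N = wasserstein_radius(N=20, beta=math.exp(-10), m=3, a=2)
print(eps_small_N, eps_large_N)
```

Note how the branch condition $N \ge \ln(c_1\beta^{-1})/c_2$ is exactly the point where the two expressions for $\epsilon$ cross the value 1.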
{\noindent \bf 2. Worst-case expectation problem}
A robust SO problem under the Wasserstein metric naturally requires minimizing the worst-case expected cost:
\begin{equation}
\label{eq:App-03-Wass-RSO}
\inf_{x \in X} \sup_{\mathbb Q \in D_W} \mathbb E_{\mathbb Q} [h(x,\xi)]
\end{equation}
We demonstrate how to solve the core problem: the worst-case expectation
\begin{equation}
\label{eq:App-03-Wass-SupE}
\sup_{\mathbb Q \in D_W} \mathbb E_{\mathbb Q} [l(\xi)]
\end{equation}
where $l(\xi) = \max_{1 \le k \le K} l_k(\xi)$ is the payoff function, the point-wise maximum of $K$ elementary functions. For notational brevity, the dependence on $x$ is suppressed and will be recovered later when necessary. We further assume that the support set $\rm \Xi$ is closed and convex; specific choices of $l(\xi)$ will be discussed below.
Problem (\ref{eq:App-03-Wass-SupE}) is an infinite-dimensional optimization problem over continuous distributions. Nonetheless, the inspiring work in \cite{Am-Set-Wasserstein-1} shows that (\ref{eq:App-03-Wass-SupE}) can be reformulated as a finite-dimensional convex program for various payoff functions. To see this, expand the worst-case expectation as
\begin{equation*}
\sup_{\mathbb Q \in D_W} \mathbb E_{\mathbb Q} [l(\xi)] = \left\{
\begin{aligned}
\sup_{\rm \Pi}~ & \int_{\rm \Xi} l(\xi) \mathbb Q ({\rm d} \xi) \\
\mbox{s.t.}~ & \int_{\rm \Xi^2} \left\| \xi - \xi^0 \right\| {\rm \Pi} ({\rm d} \xi, {\rm d} \xi^0) \le \epsilon \\
& \mbox{ $\rm \Pi$ is a joint distribution of $\xi$ and} \\
& \mbox{ $\xi^0$ with marginals $\mathbb Q$ and $\mathbb Q_0$}
\end{aligned} \right.
\end{equation*}
According to the law of total probability, $\rm \Pi$ can be decomposed as the marginal distribution $\mathbb Q_0$ of $\xi^0$ and the conditional distributions $\mathbb Q_i$ of $\xi$ given $\xi^0 = \xi^0_i$:
\begin{equation*}
{\rm \Pi} = \frac{1}{N} \sum_{i=1}^N \delta_{\xi^0_i} \otimes \mathbb Q_i
\end{equation*}
and the worst-case expectation evolves into a generalized moment problem in conditional distributions $\mathbb Q_i$, $i \le N$
\begin{equation*}
\sup_{\mathbb Q \in D_W} \mathbb E_{\mathbb Q} [l(\xi)] = \left\{
\begin{aligned}
\sup_{\mathbb Q_i \in M({\rm \Xi})} & \frac{1}{N} \sum_{i=1}^N
\int_{\rm \Xi} l(\xi) \mathbb Q_i ({\rm d} \xi) \\
\mbox{s.t.} ~~~ & \frac{1}{N} \sum_{i=1}^N \int_{\rm \Xi}
\left\| \xi - \xi^0_i \right\| \mathbb Q_i ({\rm d} \xi) \le \epsilon
\end{aligned} \right.
\end{equation*}
Using standard Lagrangian duality, we obtain
\begin{equation*}
\begin{aligned}
\sup_{\mathbb Q \in D_W} \mathbb E_{\mathbb Q} [l(\xi)] =
\sup_{\mathbb Q_i \in M({\rm \Xi})} \inf_{\lambda \ge 0} &
\frac{1}{N} \sum_{i=1}^N \int_{\rm \Xi} l(\xi) \mathbb Q_i ({\rm d} \xi)\\
& + \lambda \left(\epsilon -\frac{1}{N} \sum_{i=1}^N \int_{\rm \Xi}
\left\| \xi - \xi^0_i \right\| \mathbb Q_i ({\rm d} \xi) \right)
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
& \le \inf_{\lambda \ge 0} \sup_{\mathbb Q_i \in M({\rm \Xi})} \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N \int_{\rm \Xi} \left( l(\xi) - \lambda \left\| \xi - \xi^0_i \right\| \right) \mathbb Q_i ({\rm d} \xi) \\
& = \inf_{\lambda \ge 0} \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N \sup_{\xi \in {\rm \Xi}} \left( l(\xi) - \lambda \left\| \xi - \xi^0_i \right\| \right)
\end{aligned}
\end{equation*}
Both decision variables $\lambda$ and $\xi$ are finite-dimensional. The last problem can be reformulated as
\begin{equation}
\begin{aligned}
\inf_{\lambda, s_i}~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\
\mbox{s.t.} ~ & \sup_{\xi \in {\rm \Xi}} \left( l_k(\xi) - \lambda \left\| \xi - \xi^0_i \right\| \right) \le s_i \\
& i=1,\cdots,N,~k=1,\cdots,K \\
& \lambda \ge 0
\end{aligned}
\label{eq:App-03-Wass-SupE-Reduce-1}
\end{equation}
From the definition of the dual norm, we know $\lambda \left\| \xi - \xi^0_i \right\| = \max_{\|z_{ik}\|_* \le \lambda} \langle z_{ik}, \xi-\xi^0_i \rangle$, so the constraints give rise to
\begin{equation*}
\begin{aligned}
\sup_{\xi \in {\rm \Xi}} \left( l_k(\xi) - \lambda \left\| \xi - \xi^0_i \right\|\right)
= & \sup_{\xi \in {\rm \Xi}} \left( l_k(\xi) - \max_{\|z_{ik} \|_* \le \lambda} \langle z_{ik}, \xi-\xi^0_i \rangle \right) \\
= & \sup_{\xi \in {\rm \Xi}} \min_{\|z_{ik}\|_* \le \lambda} l_k(\xi) - \langle z_{ik}, \xi-\xi^0_i \rangle \\
\le & \min_{\|z_{ik}\|_* \le \lambda} \sup_{\xi \in {\rm \Xi}}~ l_k(\xi) - \langle z_{ik}, \xi-\xi^0_i \rangle
\end{aligned}
\end{equation*}
Substituting this upper bound into problem (\ref{eq:App-03-Wass-SupE-Reduce-1}) leads to a more restricted feasible set and hence a possibly larger objective value, yielding
\begin{equation}
\begin{aligned}
\inf_{\lambda, s_i}~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\
\mbox{s.t.} ~ & \min_{\|z_{ik}\|_* \le \lambda} \sup_{\xi \in {\rm \Xi}}~ l_k(\xi) - \langle z_{ik}, \xi-\xi^0_i \rangle \le s_i \\
& i=1,\cdots,N,~ k = 1,\cdots, K \\
& \lambda \ge 0
\end{aligned}
\label{eq:App-03-Wass-SupE-Reduce-2}
\end{equation}
The constraints of (\ref{eq:App-03-Wass-SupE-Reduce-2}) imply that the feasible set of $\lambda$ satisfies $\lambda \ge \|z_{ik}\|_*$, and the min operator in the constraints can be omitted because minimizing over $z_{ik}$ is consistent with the direction of the objective function. Therefore, we arrive at
\begin{equation}
\begin{aligned}
\inf_{\lambda, s_i}~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\
\mbox{s.t.} ~ & \sup_{\xi \in {\rm \Xi}}~ \left( l_k(\xi) - \langle z_{ik}, \xi \rangle \right) + \langle z_{ik}, \xi^0_i \rangle \le s_i,~ \lambda \ge \|z_{ik}\|_* \\
& i=1,\cdots,N, ~ k = 1,\cdots,K
\end{aligned}
\label{eq:App-03-Wass-SupE-Reduce-3}
\end{equation}
It is proved in \cite{Am-Set-Wasserstein-1} that problems (\ref{eq:App-03-Wass-SupE}) and (\ref{eq:App-03-Wass-SupE-Reduce-3}) are actually equivalent. Next, we derive the concrete forms of (\ref{eq:App-03-Wass-SupE-Reduce-3}) under specific payoff functions $l(\xi)$ and uncertainty sets $\rm \Xi$. Unlike \cite{Am-Set-Wasserstein-1}, which relies on conjugate functions from convex analysis, we mainly exploit LP duality theory, which is more familiar to readers with an engineering background.
{\bf Case 1:} Convex PWL payoff function $l(\xi) = \max_{1 \le k \le K} \{a^T_k \xi + b_k\}$ and bounded polyhedral uncertainty set ${\rm \Xi} = \{\xi \in \mathbb R^m:C \xi \le d\}$. The key point is the supremum regarding $\xi$ in the following constraint
\begin{equation*}
\sup_{\xi \in {\rm \Xi}} \left( a^T_k \xi - \langle z_{ik}, \xi \rangle \right) + b_k + \langle z_{ik}, \xi^0_i \rangle \le s_i
\end{equation*}
For each $k$, the supremum is an LP
\begin{equation*}
\begin{aligned}
\max~~ & (a_k - z_{ik})^T \xi \\
\mbox{s.t.} ~~& C \xi \le d
\end{aligned}
\end{equation*}
Its dual LP reads
\begin{equation*}
\begin{aligned}
\min~~ & d^T \gamma_{ik} \\
\mbox{s.t.} ~~& C^T \gamma_{ik} = a_k - z_{ik} \\
& \gamma_{ik} \ge 0
\end{aligned}
\end{equation*}
Therefore, $z_{ik} = a_k - C^T \gamma_{ik}$. Because of strong duality, we can replace the supremum by the objective of the dual LP, which gives rise to:
\begin{equation*}
d^T \gamma_{ik} + b_k + \langle a_k - C^T \gamma_{ik}, \xi^0_i \rangle \le s_i,~ i=1,\cdots,N,~ k = 1,\cdots,K
\end{equation*}
Collecting all constraints, we obtain a convex program equivalent to problem (\ref{eq:App-03-Wass-SupE-Reduce-3}) in Case 1:
\begin{equation}
\begin{aligned}
\inf_{\lambda, s_i}~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\
\mbox{s.t.} ~ & b_k + a^T_k \xi^0_i + \gamma^T_{ik} (d - C \xi^0_i) \le s_i,~ i=1,\cdots,N,~ k = 1,\cdots,K \\
& \lambda \ge \|a_k - C^T \gamma_{ik}\|_*,~ \gamma_{ik} \ge 0,~ i=1,\cdots,N,~ k = 1,\cdots,K
\end{aligned}
\label{eq:App-03-Wass-SupE-Conic-Case1}
\end{equation}
In the absence of distributional uncertainty, i.e., $\epsilon = 0$, the Wasserstein ambiguity set $D_W$ is a singleton, and $\lambda$ can take any non-negative value without changing the objective function. Because all sampled scenarios belong to the support set, $d - C \xi^0_i \ge 0$ holds for all $i$, so $\gamma_{ik}=0$ at the optimal solution, leading to an optimal value of $\sum_{i=1}^N s_i/N$ with $s_i=\max_{1 \le k \le K} \{a^T_k \xi^0_i + b_k\}$, which is the sample average of the payoff function under the empirical distribution.
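The $\epsilon = 0$ sanity check above can be verified numerically. The sketch below instantiates (\ref{eq:App-03-Wass-SupE-Conic-Case1}) for a hypothetical one-dimensional example with the 1-norm Wasserstein metric, so the dual norm is the $\infty$-norm and the program is an LP.

```python
import numpy as np
from scipy.optimize import linprog

def case1_value(eps, a, b, C, d, samples):
    """LP reformulation of the worst-case expectation for a convex PWL
    payoff max_k a[k]*xi + b[k] over a 1-D support {xi : C xi <= d},
    1-norm Wasserstein metric (dual norm = infinity-norm).
    Variables: [lam, s_1..s_N, gamma_{ik} in R^r for each (i, k)]."""
    N, K, r = len(samples), len(a), len(d)
    nvar = 1 + N + N * K * r
    cost = np.zeros(nvar)
    cost[0] = eps
    cost[1:1 + N] = 1.0 / N
    A_ub, b_ub = [], []
    Cf = C.flatten()
    for i, xi0 in enumerate(samples):
        for k in range(K):
            g = 1 + N + (i * K + k) * r       # columns of gamma_{ik}
            # b_k + a_k*xi0 + gamma^T (d - C*xi0) <= s_i
            row = np.zeros(nvar)
            row[1 + i] = -1.0
            row[g:g + r] = d - Cf * xi0
            A_ub.append(row)
            b_ub.append(-(b[k] + a[k] * xi0))
            # lam >= |a_k - C^T gamma|, written as two linear inequalities
            for sgn in (1.0, -1.0):
                row = np.zeros(nvar)
                row[0] = -1.0
                row[g:g + r] = -sgn * Cf
                A_ub.append(row)
                b_ub.append(-sgn * a[k])
    bounds = [(0, None)] + [(None, None)] * N + [(0, None)] * (N * K * r)
    res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.fun

# Hypothetical instance: l(xi) = max(xi, -0.5*xi + 1), support Xi = [-2, 4].
a = np.array([1.0, -0.5]); b = np.array([0.0, 1.0])
C = np.array([[1.0], [-1.0]]); d = np.array([4.0, 2.0])
samples = [0.0, 2.0]
saa = np.mean([np.max(a * s + b) for s in samples])  # sample average = 1.5
v0 = case1_value(0.0, a, b, C, d, samples)           # eps = 0: recovers SAA
v1 = case1_value(0.5, a, b, C, d, samples)           # eps > 0: adds a margin
print(saa, v0, v1)
```

With $\epsilon = 0$ the optimal value coincides with the sample average; enlarging $\epsilon$ can only increase the worst-case expectation.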
{\bf Case 2:} Concave PWL payoff function $l(\xi) = \min_{1 \le k \le K} \{a^T_k \xi + b_k\}$ and bounded polyhedral uncertainty set ${\rm \Xi} = \{\xi \in \mathbb R^m:C \xi \le d\}$. In this case, the supremum regarding $\xi$ in the constraint becomes
\begin{equation*}
\max_{\xi \in {\rm \Xi}}~ \left\{ -z^T_i \xi + \min_{1 \le k \le K} \left\{ a^T_k \xi + b_k \right\} \right\}
\end{equation*}
which is equivalent to an LP
\begin{equation*}
\begin{aligned}
\max~~ & -z^T_i \xi + \tau_i \\
\mbox{s.t.} ~~ & A \xi + b \ge \tau_i {\bf 1} \\
& C \xi \le d
\end{aligned}
\end{equation*}
where the $k$-th row of $A$ is $a^T_k$, the $k$-th entry of $b$ is $b_k$, and $\bf 1$ is the all-one vector of compatible dimension. Its dual LP reads
\begin{equation*}
\begin{aligned}
\min~~ & b^T \theta_i + d^T \gamma_i \\
\mbox{s.t.} ~~& - A^T \theta_i + C^T \gamma_i= -z_i \\
& {\bf 1}^T \theta_i = 1,~ \theta_i \ge 0,~ \gamma_i \ge 0
\end{aligned}
\end{equation*}
Therefore, $z_i = A^T \theta_i - C^T \gamma_i$. Because of strong duality, we can replace the supremum by the objective of the dual LP, which gives rise to:
\begin{equation*}
b^T \theta_i + d^T \gamma_i + \langle A^T \theta_i - C^T \gamma_i, \xi^0_i \rangle \le s_i,~ i=1,\cdots,N
\end{equation*}
Collecting all constraints, we obtain a convex program equivalent to problem (\ref{eq:App-03-Wass-SupE-Reduce-3}) in Case 2:
\begin{equation}
\begin{aligned}
\inf_{\lambda, s_i}~~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\
\mbox{s.t.} ~~ & \theta^T_i (b+A\xi^0_i) + \gamma^T_i ( d - C \xi^0_i) \le s_i,~ i=1,\cdots,N \\
& \lambda \ge \|A^T \theta_i - C^T \gamma_i \|_*,~ i=1,\cdots,N \\
&\gamma_i \ge 0,~ \theta_i \ge 0,~ {\bf 1}^T \theta_i = 1,~ i=1,\cdots,N
\end{aligned}
\label{eq:App-03-Wass-SupE-Conic-Case2}
\end{equation}
There is no $k$ index in the constraints, because it is absorbed into $A$ and $b$.
An analogous analysis shows that if $\epsilon=0$, there must be $\gamma_i = 0$ and
\begin{equation*}
s_i = \min \{\theta^T_i (b+A\xi^0_i): \theta_i \ge 0,~ {\bf 1}^T \theta_i = 1 \} = \min_{1 \le k \le K} \{a^T_k \xi^0_i + b_k\}
\end{equation*}
implying $\sum_{i=1}^N s_i/N$ is the sample average of the payoff function
under the empirical distribution.
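The same $\epsilon = 0$ check applies to Case 2. The sketch below (all data hypothetical, one-dimensional, $\infty$ dual norm) instantiates (\ref{eq:App-03-Wass-SupE-Conic-Case2}) and recovers the sample average of the concave payoff.

```python
import numpy as np
from scipy.optimize import linprog

def case2_value(eps, a, b, C, d, samples):
    """LP reformulation for a concave PWL payoff min_k a[k]*xi + b[k]
    over a 1-D support {xi : C xi <= d}, infinity dual norm.
    Variables: [lam, s_1..s_N, (theta_i in R^K, gamma_i in R^r) per i]."""
    N, K, r = len(samples), len(a), len(d)
    nvar = 1 + N + N * (K + r)
    cost = np.zeros(nvar); cost[0] = eps; cost[1:1 + N] = 1.0 / N
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    Cf = C.flatten()
    for i, xi0 in enumerate(samples):
        t = 1 + N + i * (K + r)   # theta_i block
        g = t + K                 # gamma_i block
        # theta^T (b + a*xi0) + gamma^T (d - C*xi0) <= s_i
        row = np.zeros(nvar)
        row[1 + i] = -1.0
        row[t:t + K] = b + a * xi0
        row[g:g + r] = d - Cf * xi0
        A_ub.append(row); b_ub.append(0.0)
        # lam >= |a^T theta - C^T gamma|
        for sgn in (1.0, -1.0):
            row = np.zeros(nvar)
            row[0] = -1.0
            row[t:t + K] = sgn * a
            row[g:g + r] = -sgn * Cf
            A_ub.append(row); b_ub.append(0.0)
        # 1^T theta_i = 1
        row = np.zeros(nvar); row[t:t + K] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    bounds = [(0, None)] + [(None, None)] * N + [(0, None)] * (N * (K + r))
    res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
    return res.fun

# Hypothetical instance: l(xi) = min(xi, -0.5*xi + 1), support Xi = [-2, 4].
a = np.array([1.0, -0.5]); b = np.array([0.0, 1.0])
C = np.array([[1.0], [-1.0]]); d = np.array([4.0, 2.0])
samples = [0.0, 2.0]
saa = np.mean([np.min(a * s + b) for s in samples])  # = 0.0
v0 = case2_value(0.0, a, b, C, d, samples)
print(saa, v0)
```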
Now we focus on the min-max problem (\ref{eq:App-03-Wass-RSO}), which frequently arises in two-stage robust SO and entails evaluating the expected recourse cost of an LP parameterized in $\xi$. We investigate two cases depending on where $\xi$ appears.
{\bf Case 3:} Uncertain cost coefficients: $l(\xi) = \min_y \{y^T Q \xi: Wy \ge h - Ax\}$ where $x$ is the first-stage decision variable, $y$ represents the recourse action, and the feasible region is always non-empty. In this case, the supremum regarding $\xi$ in the constraint becomes
\begin{equation*}
\begin{aligned}
& \max_{\xi \in {\rm \Xi}}~ \left\{ -z^T_i \xi + \min_{y} \left\{ y^T Q \xi: Wy \ge h - Ax \right\} \right\} \\
=& \min_{y} \left\{\max_{\xi \in {\rm \Xi}} \left\{ \left(Q^T y-z_i\right)^T \xi \right\} : Wy \ge h - Ax\right\}
\end{aligned}
\end{equation*}
Replacing the inner LP with its dual, we get an equivalent LP
\begin{equation*}
\begin{aligned}
\min_{\gamma_i,y_i}~~ & d^T \gamma_i \\
\mbox{s.t.} ~~ & C^T \gamma_i = Q^T y_i - z_i,~ \gamma_i \ge 0 \\
& Wy_i \ge h - Ax
\end{aligned}
\end{equation*}
Here we associate variable $y$ with a subscript $i$ to highlight its dependence on the value of $\xi$. Therefore, $z_i = Q^T y_i - C^T \gamma_i$, and we can replace the supremum by the objective of the dual LP, which gives rise to:
\begin{equation*}
d^T \gamma_i + \langle Q^T y_i - C^T \gamma_i, \xi^0_i \rangle \le s_i,~ i=1,\cdots,N
\end{equation*}
Collecting all constraints, we obtain a convex program equivalent to problem (\ref{eq:App-03-Wass-SupE-Reduce-3}) in Case 3:
\begin{equation}
\begin{aligned}
\inf_{\lambda, s_i}~~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\
\mbox{s.t.} ~~ & y^T_i Q \xi^0_i + \gamma^T_i(d - C \xi^0_i) \le s_i,~ i=1,\cdots,N\\
& \lambda \ge \|Q^T y_i - C^T \gamma_i \|_*, ~\gamma_i \ge 0,~ i=1,\cdots,N\\
& W y_i \ge h-Ax,~ i=1,\cdots,N
\end{aligned}
\label{eq:App-03-Wass-SupE-Conic-Case3}
\end{equation}
Without distributional uncertainty ($\epsilon = 0$), $\lambda$ can take any nonnegative value; by a similar argument, $\gamma_i=0$ and $s_i =y^T_i Q \xi^0_i$ at the optimum. So problem (\ref{eq:App-03-Wass-SupE-Conic-Case3}) is equivalent to the SAA problem under the empirical distribution
\begin{equation*}
\min_{y_i}~ \left\{ \frac{1}{N} \sum_{i=1}^N y^T_i Q \xi^0_i: W y_i \ge h-Ax \right\}
\end{equation*}
{\bf Case 4:} Uncertain constraint right-hand side:
\begin{equation*}
\begin{aligned}
l(\xi) & = \min_y ~\{q^T y: Wy \ge H \xi + h - Ax\} \\
& = \max_{\theta} \left\{ \theta^T(H\xi + h - Ax):W^T \theta = q,
\theta \ge 0 \right\} \\
& = \max_k ~ v^T_k (H\xi + h - Ax) = \max_k \left\{ v^T_k H \xi + v^T_k(h-Ax) \right\}
\end{aligned}
\end{equation*}
where the $v_k$ are the vertices of the polyhedron $\{\theta:W^T \theta = q, \theta \ge 0\}$. In this way, $l(\xi)$ is expressed as a convex PWL function. Applying the result in Case 1, we obtain a convex program equivalent to problem (\ref{eq:App-03-Wass-SupE-Reduce-3}) in Case 4:
\begin{equation}
\begin{aligned}
\inf_{\lambda, s_i}~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\
\mbox{s.t.} ~ & v^T_k(h-Ax) + v^T_k H \xi^0_i + \gamma^T_{ik} (d - C \xi^0_i) \le s_i,~ i=1,\cdots,N,~ \forall k \\
& \lambda \ge \|H^T v_k - C^T \gamma_{ik}\|_*,~ \gamma_{ik} \ge 0,~ i=1,\cdots,N,~ \forall k
\end{aligned}
\label{eq:App-03-Wass-SupE-Conic-Case4}
\end{equation}
For a similar reason, without distributional uncertainty we have $\gamma_{ik}=0$ and $s_i = \max_k \{ v^T_k(h-Ax) + v^T_k H \xi^0_i \} = q^Ty_i$ at the optimum, where the last equality follows from strong duality. So problem (\ref{eq:App-03-Wass-SupE-Conic-Case4}) is equivalent to the SAA problem under the empirical distribution
\begin{equation*}
\min_{y_i}~ \left\{ \frac{1}{N} \sum_{i=1}^N q^T y_i: W y_i \ge H \xi^0_i + h - Ax \right\}
\end{equation*}
The following discussions are devoted to the computational tractability.
\begin{itemize}
\item If the 1-norm or $\infty$-norm is used to define the Wasserstein metric, their dual norms are the $\infty$-norm and the 1-norm, respectively, and problems (\ref{eq:App-03-Wass-SupE-Conic-Case1})-(\ref{eq:App-03-Wass-SupE-Conic-Case4}) reduce to LPs whose sizes grow with the number $N$ of sampled data points. If the Euclidean norm is used, the resulting problems are SOCPs.
\item For Cases 1, 2, and 3, the resulting equivalent LPs scale polynomially in the problem size and can therefore be readily solved. As for Case 4, the number of vertices may grow exponentially in the problem size. However, one can adopt a decomposition algorithm similar to CCG, which iteratively identifies critical vertices without enumerating all of them.
\item The computational complexity of all equivalent convex programs is independent of the size of the Wasserstein ambiguity set.
\item It is shown in \cite{Am-Set-Wasserstein-1} that the worst-case expectation can also be computed from the following problem
\begin{equation}
\label{eq:App-03-Wass-SupE-Extreme-Q}
\begin{aligned}
\sup_{\alpha_{ik},q_{ik}} ~~& \frac{1}{N} \sum_{i=1}^N \sum_{k=1}^K
\alpha_{ik} l_k \left( \xi^0_i - \frac{q_{ik}}{\alpha_{ik}} \right) \\
\mbox{s.t.}~~ & \frac{1}{N} \sum_{i=1}^N \sum_{k=1}^K \|q_{ik}\| \le \epsilon\\
& \alpha_{ik} \ge 0, \forall i,\forall k,~
\sum_{k=1}^K \alpha_{ik} =1, \forall i \\
& \xi^0_i - \frac{q_{ik}}{\alpha_{ik}} \in {\rm \Xi},~
\forall i, \forall k
\end{aligned}
\end{equation}
A non-convex term seems to arise from the fraction $q_{ik}/\alpha_{ik}$. In fact, problem (\ref{eq:App-03-Wass-SupE-Extreme-Q}) is convex by the definition of the extended perspective function \cite{Am-Set-Wasserstein-1}. Moreover, if $[\alpha_{ik}(r),q_{ik}(r)]_{r \in \mathbb N}$ is a sequence of feasible solutions whose objective values converge to the supremum of (\ref{eq:App-03-Wass-SupE-Extreme-Q}), then the discrete distribution
\begin{equation*}
\mathbb Q_r = \frac{1}{N} \sum_{i=1}^N \sum_{k=1}^K \alpha_{ik}(r)\delta_{\xi_{ik}(r)},~ \xi_{ik}(r) = \xi^0_i - \frac{q_{ik}(r)}{\alpha_{ik}(r)}
\end{equation*}
approaches the worst-case distribution in $D_W$ \cite{Am-Set-Wasserstein-1}.
\end{itemize}
{\noindent \bf 3. Static robust chance constraints}
Another important issue in SO is chance constraints. Here we discuss robust joint chance constraints of the following form
\begin{equation}
\label{eq:App-03-Wass-RCC-Def}
\inf_{\mathbb Q \in D_W} \Pr [a(x)^T \xi_i \le b_i(x), i=1,\cdots,I] \ge 1-\beta
\end{equation}
where $x$ is the decision variable, and the chance constraint involves $I$ inequalities with uncertain parameters $\xi_i$ supported on sets ${\rm \Xi}_i \subseteq \mathbb R^n$. The joint probability distribution $\mathbb Q$ belongs to the Wasserstein ambiguity set. $a(x) \in \mathbb R^n$ and $b_i(x) \in \mathbb R$ are affine mappings of $x$, where $a(x)=\eta x + (1-\eta) {\bf 1}$, $\eta \in \{0,1\}$, and $b_i(x) = B^T_ix + b^0_i$. When $\eta=1$ ($\eta=0$), (\ref{eq:App-03-Wass-RCC-Def}) involves left-hand (right-hand) uncertainty. ${\rm \Xi}=\prod_i {\rm \Xi_i}$ is the support set of $\xi = [\xi^T_1,\cdots,\xi^T_I]^T$. The robust chance constraint (\ref{eq:App-03-Wass-RCC-Def}) requires all inequalities to hold jointly, under every distribution in the Wasserstein ambiguity set $D_W$, with probability at least $1-\beta$, where $\beta \in (0,1)$ is a prescribed risk tolerance. The feasible region stipulated by (\ref{eq:App-03-Wass-RCC-Def}) is denoted by $X$. We introduce the main results from \cite{Am-Set-Wasserstein-2} while avoiding rigorous mathematical proofs.
\begin{assumption}
\label{ap:App-03-Wass-RCC}
The support set $\rm \Xi$ is an $n \times I$-dimensional vector space, and the distance metric in Wasserstein ambiguity set is $d(\xi,\zeta)=\|\xi - \zeta\|$.
\end{assumption}
\begin{theorem}
\cite{Am-Set-Wasserstein-2} Under Assumption (\ref{ap:App-03-Wass-RCC}), $X=Z_1 \cup Z_2$, where
\begin{equation}
\label{eq:App-03-Wass-RCC-Z1}
Z_1 = \left\{x \in \mathbb R^n ~\middle|~ \begin{lgathered}
\epsilon v - \beta \gamma \le \frac{1}{N} \sum_{j=1}^N z_j \\
z_j + \gamma \le \max \left\{ b_i(x) - a(x)^T \zeta^j_i,0 \right\} \\
i = 1, \cdots, I,~ j = 1,\cdots, N \\
z_j \le 0,~ j = 1,\cdots,N \\
\|a(x)\|_* \le v,~ \gamma \ge 0
\end{lgathered} \right\}
\end{equation}
where $\epsilon$ is the radius of the Wasserstein ambiguity set, $N$ is the number of sampled scenarios in the empirical distribution, and
\begin{equation}
\label{eq:App-03-Wass-RCC-Z2}
Z_2 = \{x \in \mathbb R^n ~|~ a(x) = 0,~ b_i(x) \ge 0,~i=1,\cdots,I \}
\end{equation}
\label{th:App-03-Wass-RCC-Exact}
\end{theorem}
In Theorem \ref{th:App-03-Wass-RCC-Exact}, $Z_2$ is trivial: if $\eta = 1$, then $Z_2=\{x \in \mathbb R^n~|~ x=0,~b^0_i \ge 0,~ \forall i\}$; if $\eta = 0$, then $a(x)={\bf 1} \ne 0$ and $Z_2=\emptyset$. $Z_1$ can be reformulated in an MILP-compatible form if it is bounded. Linearizing the second constraint, we have
\begin{equation}
Z_1 = \left\{x \in \mathbb R^n ~\middle|~ \begin{lgathered}
\epsilon v - \beta \gamma \le \frac{1}{N} \sum_{j=1}^N z_j \\
z_j + \gamma \le s_{ij},~ \forall i, \forall j \\
b_i(x) - a(x)^T \zeta^j_i \le s_{ij} \le M_{ij} y_{ij},~\forall i, \forall j \\
s_{ij} \le b_i(x) - a(x)^T \zeta^j_i+M_{ij}(1-y_{ij}),~\forall i, \forall j \\
\|a(x)\|_* \le v,~ \gamma \ge 0,~z_j \le 0,~ \forall j \\
s_{ij} \ge 0,~ y_{ij} \in \{0,1\},~ \forall i,~ \forall j
\end{lgathered} \right\}
\label{eq:App-03-Wass-RCC-Z1-MILP}
\end{equation}
where $\forall i$ and $\forall j$ are short for $i=1,\cdots,I$ and $j=1,\cdots,N$, respectively;
\begin{equation*}
M_{ij} \ge \max_{x \in Z_1} \left| b_i(x) - a(x)^T \zeta^j_i \right|
\end{equation*}
It is easy to see that if $b_i(x) - a(x)^T \zeta^j_i <0$, then $y_{ij}=0$ (otherwise $s_{ij} \le b_i(x) - a(x)^T \zeta^j_i <0$), hence $s_{ij} = 0=\max \{ b_i(x) - a(x)^T \zeta^j_i,0 \}$. If $b_i(x) - a(x)^T \zeta^j_i >0$, then $y_{ij}=1$ (otherwise $b_i(x) - a(x)^T \zeta^j_i \le M_{ij}y_{ij} = 0$), hence $s_{ij} = b_i(x) - a(x)^T \zeta^j_i =\max\{ b_i(x) - a(x)^T \zeta^j_i,0\}$. If $b_i(x) - a(x)^T \zeta^j_i =0$, then we have $s_{ij} = 0$ regardless of the value of $y_{ij}$. In conclusion, (\ref{eq:App-03-Wass-RCC-Z1}) and (\ref{eq:App-03-Wass-RCC-Z1-MILP}) are equivalent.
For right-hand uncertainty, in which $\eta=0$ and $a(x)={\bf 1}$, we have $X = Z_1$ because $Z_2 = \emptyset$. Moreover, if the 1-norm is used in the Wasserstein ambiguity set $D_W$, the variable $v$ in (\ref{eq:App-03-Wass-RCC-Z1-MILP}) can be fixed to 1, since $Z_1$ only requires $v \ge \|{\bf 1}\|_\infty =1$ and a smaller $v$ is always preferred.
In (\ref{eq:App-03-Wass-RCC-Z1-MILP}), a total of $I \times N$ binary variables are introduced to linearize the $\max\{\cdot,0\}$ function, making the problem challenging to solve. An inner approximation of $Z_1$ is obtained by simply replacing $\max \{ b_i(x) - a(x)^T \zeta^j_i,0 \}$ with its first argument, yielding a parameter-free approximation
\begin{equation}
\label{eq:App-03-Wass-RCC-CVaR}
Z = \left\{x \in \mathbb R^n ~\middle|~ \begin{lgathered}
\epsilon v - \beta \gamma \le \frac{1}{N} \sum_{j=1}^N z_j \\
z_j + \gamma \le b_i(x) - a(x)^T \zeta^j_i,~ \forall i,\forall j \\
z_j \le 0,~ \forall j,~ \|a(x)\|_* \le v,~ \gamma \ge 0
\end{lgathered} \right\}
\end{equation}
This formulation can be derived from the CVaR model and enjoys better computational tractability.
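To illustrate, the sketch below treats right-hand uncertainty in one dimension ($a(x)={\bf 1}$, so $v=1$) and computes, by a small LP, the smallest threshold $b$ for which the CVaR-based set (\ref{eq:App-03-Wass-RCC-CVaR}) is non-empty. All data are hypothetical; for this particular instance the answer works out to the empirical CVaR shifted by $\epsilon/\beta$, which serves as a cross-check.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: scalar uncertainty samples, risk level, Wasserstein radius.
zeta = np.array([0.0, 1.0, 2.0, 3.0])   # empirical scenarios zeta^j
N, beta, eps = len(zeta), 0.25, 0.1

# Right-hand uncertainty with a(x) = 1 (so ||a(x)||_* = 1 and v = 1).
# Find the smallest b such that (b, gamma, z) satisfies
#   eps*1 - beta*gamma <= (1/N) sum_j z_j
#   z_j + gamma <= b - zeta_j,   z_j <= 0,   gamma >= 0.
# Variables: [b, gamma, z_1..z_N]; minimize b.
nvar = 2 + N
c = np.zeros(nvar); c[0] = 1.0
A_ub, b_ub = [], []
row = np.zeros(nvar)                 # eps - beta*gamma - (1/N) sum z <= 0
row[1] = -beta; row[2:] = -1.0 / N
A_ub.append(row); b_ub.append(-eps)
for j in range(N):                   # z_j + gamma - b <= -zeta_j
    row = np.zeros(nvar)
    row[0] = -1.0; row[1] = 1.0; row[2 + j] = 1.0
    A_ub.append(row); b_ub.append(-zeta[j])
bounds = [(None, None), (0, None)] + [(None, 0)] * N
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
b_min = res.fun

# Empirical CVaR at level beta: average of the worst beta-fraction of samples.
cvar_emp = 3.0                       # worst 25% of {0,1,2,3} is the point 3
print(b_min, cvar_emp + eps / beta)
```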
{\noindent \bf 4. Adaptive robust chance constraints}
Robust chance-constrained programs with the Wasserstein metric are studied in \cite{App03-Sect4-DRSO-Was-4} in a different but more general form. The problem is as follows
\begin{equation}
\label{eq:App-03-DRCC-Wass-1}
\begin{aligned}
\min_{x \in X}~~ & c^T x \\
\mbox{s.t.} ~~ & \inf_{\mathbb Q \in D_W} \Pr [F(x,\xi) \le 0] \ge 1-\beta
\end{aligned}
\end{equation}
where $X$ is a bounded polyhedron, and $F: \mathbb R^n \times {\rm \Xi} \to \mathbb R$ is a scalar function that is convex in $x$ for every $\xi$. This formulation is general enough to capture joint chance constraints: if $F$ comprises $K$ individual constraints, it can be defined as their component-wise maximum, as in (\ref{eq:App-03-DRO-DRCC-Joint-Para}).
Here we develop a technique to solve two-stage problems where $F(x,\xi)$ is the optimal value of another LP parameterized in $x$ and $\xi$. More precisely, we consider
\begin{subequations}
\label{eq:App-03-TSDRCC-Wass}
\begin{equation}
\label{eq:App-03-TSDRCC-Wass-1}
\begin{aligned}
\min_{x \in X}~~ & c^T_1 x \\
\mbox{s.t.} ~~ & \sup_{\mathbb Q \in D_W} \Pr [f(x,\xi) \ge c^T_2 \xi] \le \beta
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:App-03-TSDRCC-Wass-2}
\begin{aligned}
f(x,\xi) = \min ~~ & c^T_3 y \\
\mbox{s.t.} ~~ & Ax + By + C\xi \le d
\end{aligned}
\end{equation}
\end{subequations}
where in (\ref{eq:App-03-TSDRCC-Wass-1}), the robust chance constraint can be regarded as a risk-limiting requirement whose threshold value depends on the uncertain parameter $\xi$. We assume LP (\ref{eq:App-03-TSDRCC-Wass-2}) is always feasible (relatively complete recourse) and has a finite optimum. The second-stage cost could also enter the objective function of (\ref{eq:App-03-TSDRCC-Wass-1}) in the form of a worst-case expectation, which has been discussed in previous sections and is omitted here for brevity. We focus on coping with the second-stage LP in the robust chance constraint.
Define loss function
\begin{equation}
\label{eq:App-03-TSDRCC-Wass-Loss-Fun}
g(x,\xi) = f(x,\xi) - c^T_2 \xi
\end{equation}
Recalling the relation between chance constraints and CVaR discussed in Sect. \ref{App-C-Sect03-01}, a sufficient condition for the robust chance constraint in (\ref{eq:App-03-TSDRCC-Wass-1}) is CVaR$(g(x,\xi),\beta) \le 0$, $\forall \mathbb Q \in D_W$, or equivalently
\begin{equation}
\label{eq:App-03-TSDRCC-CVaR-1}
\sup_{\mathbb Q \in D_W} \inf_{\gamma \in \mathbb R} \beta \gamma + \mathbb E_{\mathbb Q} (\max \{g(x,\xi)-\gamma,0\}) \le 0
\end{equation}
According to \cite{App03-Sect4-DRSO-Was-4}, constraint (\ref{eq:App-03-TSDRCC-CVaR-1}) can be conservatively approximated by
\begin{equation}
\label{eq:App-03-TSDRCC-CVaR-2}
\epsilon L + \inf_{\gamma \in \mathbb R} \left\{ \beta \gamma + \frac{1}{N} \sum_{i=1}^N \max \{g(x,\xi^i)-\gamma,0\} \right\} \le 0
\end{equation}
where $\epsilon$ is the parameter in the Wasserstein ambiguity set $D_W$, $L$ is a constant satisfying $g(x,\xi) \le L\| \xi \|_1$, and $\xi^i$, $i=1,\cdots,N$ are samples of the uncertain data. Substituting (\ref{eq:App-03-TSDRCC-Wass-2}) and (\ref{eq:App-03-TSDRCC-Wass-Loss-Fun}) into (\ref{eq:App-03-TSDRCC-CVaR-2}), we obtain an LP that conservatively approximates problem (\ref{eq:App-03-TSDRCC-Wass})
\begin{equation}
\label{eq:App-03-TSDRCC-CVaR-Eqv-LP}
\begin{aligned}
\min~~ & c^T_1 x \\
\mbox{s.t.} ~~ & x \in X,~ \epsilon L + \beta \gamma + \frac{1}{N} \sum_{i=1}^N s_{i} \le 0 \\
& s_i \ge 0,~ s_i \ge c_3^T y^i - c^T_2 \xi^i - \gamma,~ i = 1,\cdots,N \\
& Ax + By^i + C\xi^i \le d,~ i = 1,\cdots,N \\
\end{aligned}
\end{equation}
where $y^i$ is the second-stage decision associated with $\xi^i$. This formulation could be very conservative for three reasons: first, the worst-case distribution is considered; second, the CVaR constraint (\ref{eq:App-03-TSDRCC-CVaR-1}) is a pessimistic approximation of the chance constraint; finally, the sampled constraint (\ref{eq:App-03-TSDRCC-CVaR-2}) is a pessimistic approximation of (\ref{eq:App-03-TSDRCC-CVaR-1}).
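For a fixed $x$, the left-hand side of (\ref{eq:App-03-TSDRCC-CVaR-2}) can be evaluated directly from loss samples: the inner minimization over $\gamma$ is the Rockafellar--Uryasev CVaR formula, whose minimum is attained at one of the samples. The following sketch assumes the loss samples $g(x,\xi^i)$ have already been computed for the fixed $x$.

```python
import numpy as np

def cvar_lhs(g_samples, beta, eps, L):
    """Left-hand side of the sampled CVaR approximation (a sketch):
    eps*L + inf_gamma { beta*gamma + mean(max(g - gamma, 0)) }.

    The inner function of gamma is convex and piecewise linear with
    breakpoints at the samples, so its minimum is attained at a sample;
    the minimizer is a (1 - beta)-quantile of the sampled losses.
    """
    g = np.asarray(g_samples, dtype=float)
    inner = min(beta * gm + np.maximum(g - gm, 0.0).mean() for gm in g)
    return eps * L + inner
```

A point $x$ passes the approximate test exactly when `cvar_lhs(...) <= 0`.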
More discussions on robust chance constraints with Wasserstein metric under various settings can be found in \cite{App03-Sect4-DRSO-Was-4}.
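As a concrete sketch, the LP (\ref{eq:App-03-TSDRCC-CVaR-Eqv-LP}) can be assembled and solved with an off-the-shelf LP solver. All data below (a scalar $x \in [0,10]$, second stage $f(x,\xi)=\min\{y : y \ge \xi - x,\, y \ge 0\}$, threshold $0.5\xi$, five samples, and $L=1$) are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (assumptions): x in [0, 10]; f(x, xi) = min{y : y >= xi - x, y >= 0};
# threshold c_2^T xi = 0.5*xi; L = 1 is valid here since
# g(x, xi) <= max(xi - x, 0) <= ||xi||_1 for x, xi >= 0.
xi = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
N, beta, eps, L = len(xi), 0.1, 0.01, 1.0

# Variable vector u = [x, gamma, s_1..s_N, y_1..y_N].
nv = 2 + 2 * N
c = np.zeros(nv)
c[0] = 1.0                                   # minimize c_1^T x = x
A, b = [], []

row = np.zeros(nv)                           # eps*L + beta*gamma + mean(s) <= 0
row[1] = beta
row[2:2 + N] = 1.0 / N
A.append(row); b.append(-eps * L)

for i in range(N):                           # s_i >= y_i - 0.5*xi_i - gamma
    row = np.zeros(nv)
    row[1] = -1.0; row[2 + i] = -1.0; row[2 + N + i] = 1.0
    A.append(row); b.append(0.5 * xi[i])

for i in range(N):                           # second stage: y_i >= xi_i - x
    row = np.zeros(nv)
    row[0] = -1.0; row[2 + N + i] = -1.0
    A.append(row); b.append(-xi[i])

bounds = [(0, 10), (None, None)] + [(0, None)] * (2 * N)
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
              bounds=bounds, method="highs")
```

For this toy data the binding sample is $\xi = 5$, and the minimal feasible $x$ can be checked by hand: $\epsilon L + \beta\,(2.5 - x) = 0$ gives $x^* = 2.6$.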
{\noindent \bf 5. Use of forecast data}
Wasserstein metric enjoys many advantages, such as a finite-sample performance guarantee and the existence of tractable reformulations. However, moment information is not used, especially the first-order moment, which reflects the forecast and can be updated as time rolls on; as a result, the worst-case distribution generally has a mean value different from the forecast (if one is available). To incorporate forecast data, we propose the following Wasserstein ambiguity set with fixed mean
\begin{equation}
\label{eq:App-03-Wass-Mom}
D^M_W = \left\{ \mathbb Q \in D_W \middle| \mathbb E_{\mathbb Q} [\xi] = \hat \xi \right\}
\end{equation}
and the worst-case expectation problem can be expressed as
\begin{subequations}
\label{eq:App-03-MaxE-Wass-Mom}
\begin{align}
\sup_{\mathbb Q \in D^M_W} ~~ & \mathbb E_{\mathbb Q} [l(\xi)] \\
= \sup_{f^n(\xi)} ~~ & \frac{1}{N} \sum_{n=1}^N \int_{\rm \Xi} l(\xi) f^n(\xi) {\rm d} \xi \\
\mbox{s.t.} ~~ & \frac{1}{N} \sum_{n=1}^N \int_{\rm \Xi} \| \xi - \xi^n \|_p f^n(\xi) {\rm d} \xi \le \epsilon : \lambda \\
& \int_{\rm \Xi} f^n(\xi) {\rm d} \xi = 1 : \theta_n,~ n = 1,\cdots, N \\
& \frac{1}{N} \sum_{n=1}^N \int_{\rm \Xi} \xi f^n(\xi) {\rm d} \xi = \hat \xi : \rho
\end{align}
\end{subequations}
where $l(\xi)$ is a loss function similar to that in (\ref{eq:App-03-Wass-SupE}), $f^n(\xi)$ is the conditional density function under historical data sample $\xi^n$, and the dual variables $\lambda$, $\theta_n$, and $\rho$ are listed after the colons. Similar to the discussion of problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}), the dual problem of (\ref{eq:App-03-MaxE-Wass-Mom}) is
\begin{subequations}
\label{eq:App-03-Dual-MaxE-Wass-Mom}
\begin{align}
\min_{\lambda \ge 0,\theta_n,\rho} ~~ & (\lambda \epsilon + \rho^T \hat \xi) N + \sum_{n=1}^N \theta_n \label{eq:App-03-Dual-MaxE-Wass-Mom-Obj}\\
\mbox{s.t.} ~~ & \theta_n + \lambda \| \xi - \xi^n \|_p + \rho^T \xi \ge l(\xi), \forall \xi \in {\rm \Xi},~ \forall n \label{eq:App-03-Dual-MaxE-Wass-Mom-Cons}
\end{align}
\end{subequations}
For $p=2$, polyhedral $\rm \Xi$ and PWL $l(\xi)$, constraint (\ref{eq:App-03-Dual-MaxE-Wass-Mom-Cons}) can be transformed into the intersection of PSD cones, and problem (\ref{eq:App-03-Dual-MaxE-Wass-Mom}) gives rise to an SDP; some examples can be found in Sect. \ref{App-C-Sect03-01}.
If $\rm \Xi$ is described by a single quadratic constraint, constraint (\ref{eq:App-03-Dual-MaxE-Wass-Mom-Cons}) can be reformulated by using the well-known S-Lemma, which has been discussed in Sect. \ref{App-A-Sect02-04}, and problem (\ref{eq:App-03-Dual-MaxE-Wass-Mom}) still comes down to an SDP. For $p=1$ or $p=+\infty$, polyhedral $\rm \Xi$ and PWL $l(\xi)$, constraint (\ref{eq:App-03-Dual-MaxE-Wass-Mom-Cons}) can be transformed into a polyhedron using duality theory, and problem (\ref{eq:App-03-Dual-MaxE-Wass-Mom}) gives rise to an LP. Because the ambiguity set is more restrictive, problem (\ref{eq:App-03-Dual-MaxE-Wass-Mom}) would be less conservative than problem (\ref{eq:App-03-Wass-SupE-Reduce-3}) in which the mean value of uncertain data is free.
A Wasserstein-moment metric with variance is exploited in \cite{App03-Sect4-DRSO-Was-5} and applied to wind power dispatch. Nevertheless, that ambiguity set neglects the first-order moment and considers only the second-order moment. This formulation is useful when little historical data is available at hand.
As a short conclusion, distributionally robust optimization and data-driven robust stochastic optimization leverage statistical information on the uncertain data and overcome the conservatism of traditional robust optimization approaches, which are built upon the worst-case scenario. The core issue is the equivalent convex reformulation of the worst-case expectation problem or the robust chance constraint over the uncertain probability distribution restricted to the ambiguity set. Optimization over a moment-based ambiguity set can be formulated as a semi-infinite LP, whose dual problem gives rise to SDPs and hence can be readily solved. When additional structural properties are taken into account, such as unimodality, more sophisticated treatment is needed. As for robust stochastic programming, tractable reformulation of the worst-case expectation and the robust chance constraints is the central issue. A robust chance constraint under a $\phi$-divergence based ambiguity set is equivalent to a traditional chance constraint under the empirical distribution but with a modified confidence level; it can be transformed into an MILP or approximated by an LP based on risk theory with the help of the sample average approximation technique, and the same holds, with somewhat different expressions, for a robust chance constraint under a Wasserstein-metric based ambiguity set. The worst-case expectation under a $\phi$-divergence based ambiguity set boils down to a convex program with linear constraints and a nonlinear objective function, which can be efficiently solved via an outer approximation algorithm. The worst-case expectation under a Wasserstein ambiguity set comes down to a conic program, which is convex and readily solvable.
Unlike the max-min problem in the traditional robust optimization method, which identifies the worst-case scenario the decision maker wishes to avoid, the worst-case expectation problem in distributionally robust optimization and robust stochastic programming is solved in its dual form, whose solution is less intuitive to the decision maker; moreover, it may not be easy to recover the primal optimal solution, i.e., the worst-case probability distribution. The worst-case distribution in robust chance-constrained stochastic programming is discussed in \cite{App03-Sect4-RCCP,Am-Set-Wasserstein-1}; the worst-case discrete distribution in a two-stage stochastic program with min-max expectation can be computed via a polynomial-complexity algorithm. Nonetheless, from a practical perspective, what the decision maker actually needs to deploy is merely the here-and-now decision; the worst-case probability distribution is usually not very important, since corrective actions can be postponed to a later stage when the uncertain data have been observed or can be predicted with high accuracy.
\section{Further Reading}
\label{App-C-Sect05}
Uncertainty is ubiquitous in real-life decision-making problems, and the decision maker usually has limited information and statistical data on the uncertain factors, which makes robust optimization very attractive in practice, as it is tailored to the information available at hand and often gives rise to computationally tractable reformulations. Although the original idea dates back to \cite{RO-Soyster} in the 1970s, it is during the past two decades that the fundamental theory of robust optimization has been systematically developed, and the field has been even more active during the past five years. This chapter aims to help beginners get an overview of this method and understand how to apply robust optimization in practice. We provide basic models and tractable reformulations, called robust counterparts, for various robust optimization models under different assumptions on the uncertainty and the decision-making manner. Basic theory of robust optimization is provided in \cite{RO-Detail-1,RO-Detail-2}. Comprehensive surveys can be found in \cite{RO-Guide,RO-Survey}. Here we shed more light on several important topics in robust optimization.
Uncertainty sets play a decisive role in the performance of a robust solution. A larger set protects the system against a higher level of uncertainty, but increases the cost as well; since the probability that the uncertain data take their worst-case values is usually small, the decision maker needs to make a trade-off between reliability and economy. Ambiguous chance constraints and their approximations are discussed in Chapter 2 of \cite{RO-Detail-1}, based on which the parameter in the uncertainty set can be selected. A data-driven approach is proposed in \cite{Un-Set-Data-Driven} to construct uncertainty sets from historical data and statistical hypothesis tests; the resulting counterpart problems are shown to be tractable, and their optimal solutions satisfy the constraints with finite-sample probabilistic guarantees. The connection between uncertainty sets and coherent risk measures is revealed in \cite{Un-Set-Risk-Measure}, where it is shown that the distortion risk measure leads to a polyhedral uncertainty set. Specifically, the connection between CVaR and uncertainty sets is discussed in \cite{Un-Set-CVaR}. A reverse correspondence is reported in \cite{Risk-Measure-Un-Set}, demonstrating that robust optimization could generalize the concept of risk measures.
Distributionally robust optimization integrates statistic information, worst-case expectation, and robust probability guarantee in a holistic optimization framework, in which the uncertainty is modeled via an ambiguous probability distribution. The choice of ambiguity sets for candidate distributions affects not only the model conservatism, but also the existence of tractable reformulations. Various ambiguity sets have been proposed in the literature, which can be roughly classified into two categories:
1) Moment ambiguity sets. All PDFs share the same moment data, usually the first- and second-order moments, and structural properties, such as symmetry and unimodality. For example, the Markov ambiguity set contains all distributions with the same mean and support, and the worst-case expectation is shown to be equivalent to LPs \cite{Am-Set-Markov}. The Chebyshev ambiguity set is composed of all distributions with known expectation and covariance matrix, and usually leads to SDP counterparts \cite{Static-DRO,Am-Set-Chebyshev-1,Am-Set-Chebyshev-2}; the Gauss ambiguity set contains all unimodal distributions in the Chebyshev ambiguity set, and also gives rise to SDP reformulations \cite{Am-Set-Gauss-1}.
2) Divergence ambiguity sets. All PDFs are close to a reference distribution in terms of a specified measure. For example, the Wasserstein ambiguity set quantifies the divergence via the Wasserstein metric \cite{Am-Set-Wasserstein-1,Am-Set-Wasserstein-2,Am-Set-Wasserstein-3}; the $\phi$-divergence ambiguity set \cite{Am-Set-Phi-Divergence,Am-Set-Phi-Div-1} characterizes the divergence of two probability density functions through the distance between special non-negative weights (for discrete distributions) or integrals (for continuous distributions).
More information on the types of ambiguity sets and the reformulations of their distributionally robust counterparts can be found in \cite{Am-Set-Overview}. According to the latest research progress, moment-based distributionally robust optimization is relatively mature and has been widely adopted in engineering, because the semi-infinite LP formulation and its dual for the worst-case expectation problem offer a systematic approach to analyzing the impact of uncertain distributions. However, when more complicated ambiguity sets are involved, such as the Gauss ambiguity set, deriving a tractable reformulation requires more sophisticated approaches. The study of the latter category, which directly imposes uncertainty on the distributions, has attracted growing attention in the past two or three years, because it makes full use of historical data and can better capture the unique features of the uncertain factors under investigation.
Data-driven robust stochastic programming, conceptually the same as distributionally robust optimization but preferred by some researchers, has been studied using $\phi$-divergence in \cite{App03-Sect4-DRSO-Phi-1,App03-Sect4-DRSO-Phi-2}, and Wasserstein metric in \cite{Am-Set-Wasserstein-1,Am-Set-Wasserstein-2,Am-Set-Wasserstein-3,App03-Sect4-DRSO-Was-1,App03-Sect4-DRSO-Was-2,App03-Sect4-DRSO-Was-3,App03-Sect4-DRSO-Was-4}, because a tractable counterpart problem can be derived under such ambiguity sets.
Many decision-making problems in engineering and finance often require that a certain risk measure associated with random variables should be limited below a threshold. However, the probability distribution of random variables is not exactly known; therefore, the risk limiting constraint must be able to withstand perturbations of distribution in a reasonable range. This entails a tractable reformulation of a risk measure under distributional uncertainty. This problem has been comprehensively discussed in \cite{Am-Set-Wasserstein-3}. In more recent publications, CVaR under moment ambiguity set with unimodality is studied in \cite{App03-Sect4-DR-Risk-1}; VaR and CVaR under moment ambiguity set are discussed in \cite{App03-Sect4-DR-Risk-2}; distortion risk measure under Wasserstein ambiguity set is considered in \cite{App03-Sect4-DR-Risk-3}.
In multi-stage decision making, causality is a pivotal issue for practical implementation, which means that the wait-and-see decisions in the current stage cannot depend on the information of uncertainty in future stages. For example, in a unit commitment problem with 24 periods, the wind power output is observed period-by-period. It is shown in \cite{App03-Sect5-Causal-1} that the two-stage robust model in \cite{ARO-Benders-Decomposition} offers non-causal dispatch strategies, which are in fact not robust. A multi-stage causal unit commitment model is suggested in \cite{App03-Sect5-Causal-1,App03-Sect5-Causal-2} based on affine policy. Causality is put into effect by imposing block-diagonal constraints on the gain matrix of the affine policy. Causality is also called non-anticipativity in some literature, such as \cite{App03-Sect5-Causal-3}, and is attracting attention from practitioners \cite{App03-Sect5-Causal-4,App03-Sect5-Causal-5}.
For some other interesting topics on robust optimization, such as the connection with stochastic optimization, connection with risk theory, and applications in engineering problems other than those in power systems, readers can refer to \cite{RO-Survey}. Nonlinear issues have been addressed in \cite{SRO-CVX-RCs,App03-Sect5-RNLP-1}. Optimization models with uncertain SOC and SDP constraints are discussed in \cite{App03-Sect5-RSDP-1,App03-Sect5-RSDP-2}. The connection among robust optimization, data utilization, and machine learning has been reviewed in \cite{App03-Sect5-Opt-Data-ML}.
\input{ap03ref}
\motto{Life is not a game. Still, in this life, we choose the games we live to play.}
\chapter{Equilibrium Problems}
\label{App-D}
The concept of an equilibrium describes a state in which the system has no incentive to change. The incentives can be profit-driven, as in competitive markets, or a reflection of physical laws, such as energy flow equations. In this sense, an equilibrium encompasses broader concepts than the solution of a game. Equilibrium is a fundamental notion appearing in various disciplines of economics and engineering. Identifying the equilibria allows eligible authorities to predict the system state at a future time or to design reasonable policies for regulating a system or a market. This is not to say that an equilibrium state must appear sooner or later, partly because decision makers in reality have only limited rationality and information; nevertheless, awareness of such an equilibrium can be helpful for system design and operation. In this chapter, we restrict our attention to the field of game theory, which entails simultaneously solving multiple interactive optimization problems. We review the notions of some quintessential equilibrium problems and show how they can be solved via traditional optimization methods. These problems can be roughly categorized into two classes. The first contains only one level: all players must make decisions simultaneously, which is referred to as a Nash-type game. The second has two levels: decisions are made sequentially by two groups of players, called the leaders and the followers; this category is widely known as Stackelberg-type games, multi-leader-follower games, or equilibrium programs with equilibrium constraints (EPECs). Unlike a traditional mathematical programming problem, where the decision maker is unique, in an equilibrium problem or a game, multiple decision makers seek the optima of individual optimization problems parameterized in the optimal solutions of the others.
General notations used throughout this chapter are defined as follows. Specific symbols are explained in the individual sections. In the game theoretic language, a decision maker is called a player. Vector $x=(x_1,\cdots,x_n)$ refers to the joint decisions of all upper-level players or the so-called leaders in a bilevel setting, where $x_i$ stands for the decisions of leader $i$; $x_{-i}=(x_1,\cdots,x_{i-1},x_{i+1},\cdots,x_n)$ refers to the rivals' actions for leader $i$. Similarly, $y=(y_1,\cdots,y_m)$ refers to the joint decisions of all lower-level players or the so-called followers, where $y_j$ stands for the decisions of follower $j$; $y_{-j}=(y_1,\cdots,y_{j-1},y_{j+1},\cdots,y_m)$ refers to the rivals' actions for follower $j$. $\lambda$ and $\mu$ are Lagrangian dual multipliers associated with inequality and equality constraints.
\section{Standard Nash Equilibrium Problem}
\label{App-D-Sect01}
After J. F. Nash published his work on the equilibrium of $n$-person non-cooperative games in the early 1950s \cite{App-04-Nash-1,App-04-Nash-2}, game theory quickly became a new branch of operations research. The Nash equilibrium problem (NEP) captures the interactive behavior of strategic players, in which each player's utility depends on the actions of the other players. Over decades of research, a variety of concepts and algorithms for Nash equilibria have been proposed and applied to almost every area of knowledge. This section reviews some basic concepts and the most prevalent best-response algorithms.
\subsection{Formulation and Optimality Condition}
\label{App-D-Sect01-01}
In a standard $n$-person non-cooperative game, each player minimizes his payoff function $f_i(x_i,x_{-i})$ which depends on all players' actions.
The strategy set $X_i=\{x_i \in \mathbb R^{k_i} ~|~ g_i(x_i) \le 0\}$ of player $i$ is independent of $x_{-i}$. The joint strategy set of the game is the Cartesian product of $X_i$, i.e., $X = \prod_{i=1}^n X_i$, and $X_{-i} = \prod_{j \ne i} X_j$. Roughly speaking, the non-cooperative game is a collection of coupled optimization problems, where player $i$ chooses $x_i \in X_i$ that minimizes his payoff $f_i(x_i,x_{-i})$ given his rivals' strategies $x_{-i}$, or mathematically
\begin{equation}
\label{eq:App-04-NE-Problem}
\left.
\begin{aligned}
\min_{x_i} ~~ & f_i(x_i,x_{-i}) \\
\mbox{s.t.}~~ & g_i(x_i) \le 0 : \lambda_i
\end{aligned}
\right\},~ i = 1,\cdots,n
\end{equation}
In the problem of player $i$, the decision variable is $x_i$, and $x_{-i}$ is regarded as parameters; $\lambda_i$ is the dual variable.
The Nash equilibrium consists of a strategy profile such that every player's strategy constitutes the best response to all other players' strategies; in other words, no player can further reduce his payoff by changing his action unilaterally. Therefore, the Nash equilibrium is a stable state that can sustain itself spontaneously. The mathematical definition is formally given below.
\begin{definition}
\label{df:App-04-NE}
A strategy vector $x^* \in X$ is a Nash equilibrium if the condition
\begin{equation}
\label{eq:App-04-NE-Definition}
f_i(x^*_i,x^*_{-i}) \le f_i(x_i,x^*_{-i}),~ \forall x_i \in X_i
\end{equation}
holds for all players.
\end{definition}
Condition (\ref{eq:App-04-NE-Definition}) naturally interprets the fact that at a Nash equilibrium, if any player chooses an alternative strategy, his payoff cannot decrease, which is undesirable for a minimizer. To characterize a Nash equilibrium, a usual approach is the fixed point of the best-response mapping. Let $B_i(x_{-i})$ be the set of optimal strategies of player $i$ given the strategies $x_{-i}$ of the others; then the set $B(x) = \prod_{i=1}^n B_i(x_{-i})$ is the best-response mapping of the game. It is clear that $x^*$ is a Nash equilibrium if and only if $x^* \in B(x^*)$, i.e., $x^*$ is a fixed point of $B(x)$. This fact establishes the foundation for analyzing Nash equilibria using the well-developed fixed-point theory. However, conducting a fixed-point analysis usually requires the best-response mapping $B(x)$ in closed form. Moreover, to establish the existence and uniqueness of a Nash equilibrium, the mapping should be contractive \cite{App-04-Fixed-Point-1}. These strong assumptions inevitably limit the applicability of the fixed-point method. For example, in many instances the best-response mapping $B(x)$ is neither contractive nor continuous, yet Nash equilibria may still exist.
Another way to characterize the Nash equilibrium is the KKT system approach. Generally speaking, in a standard Nash game, each player is facing an NLP parameterized in the rivals' strategies. If we consolidate the KKT optimality conditions of all these NLPs in (\ref{eq:App-04-NE-Problem}), we get the following KKT system
\begin{equation}
\label{eq:App-04-NE-KKT}
\left. \begin{gathered}
\nabla_{x_i} f_i (x_i ,x_{-i}) + \lambda_i^T \nabla_{x_i} g_i (x_i) = 0 \\
\lambda_i \ge 0,~g_i(x_i) \le 0,~ \lambda_i^T g_i (x_i) = 0
\end{gathered} \right\}~i = 1,\cdots,n
\end{equation}
If $x^*$ is a Nash equilibrium that satisfies (\ref{eq:App-04-NE-Definition}), and any standard constraint qualification holds for every player's problem in (\ref{eq:App-04-NE-Problem}), then $x^*$ must be a stationary point of the concentrated KKT system (\ref{eq:App-04-NE-KKT}) \cite{App-04-GNEP-KKT-1}; and vice versa: if all problems in (\ref{eq:App-04-NE-Problem}) meet a standard constraint qualification, and a point $x^*$ together with a proper vector of dual multipliers $\lambda = (\lambda_1,\cdots,\lambda_n)$ solves KKT system (\ref{eq:App-04-NE-KKT}), then $x^*$ is also a Nash equilibrium that satisfies (\ref{eq:App-04-NE-Definition}).
Problem (\ref{eq:App-04-NE-KKT}) is an NCP and constitutes the optimality condition of a Nash equilibrium. It is a natural attempt to retrieve an equilibrium by solving the NCP (\ref{eq:App-04-NE-KKT}) directly, instead of deploying an iterative algorithm, which may suffer from divergence. To obviate the computational challenges brought by the complementarity and slackness constraints in the KKT system (\ref{eq:App-04-NE-KKT}), a merit function approach and an interior-point method are comprehensively discussed in \cite{App-04-GNEP-KKT-1}.
\subsection{Variational Inequality Formulation}
\label{App-D-Sect01-02}
An alternative perspective for studying the NEP is to formulate it as a variational inequality (VI) problem. This approach is pursued in \cite{App-04-GNEP-VI-1}. The advantage of the variational inequality approach is that it permits easy access to existence and uniqueness results without the best-response mapping. From a computational point of view, it naturally leads to easily implementable algorithms with provable convergence performance.
Given a closed and convex set $X \subseteq \mathbb R^n$ and a mapping $F:X \to \mathbb R^n$, the variational inequality problem, denoted by VI($X,F$), is to determine a point $x^* \in X$ satisfying \cite{App-04-VI-1}
\begin{equation}
\label{eq:App-04-VI}
(x-x^*)^T F(x^*) \ge 0,~ \forall x \in X
\end{equation}
To see the connection between a VI problem and a traditional convex optimization problem that seeks the minimum of a convex function $f(x)$ over a convex set $X$, suppose the optimal solution is $x^*$; then no feasible direction at $x^*$ can be a descent direction of $f$. Geometrically, the line segment connecting any $x \in X$ with $x^*$ must form a non-obtuse angle with the gradient of $f$ at $x^*$, which can be mathematically described as $(x-x^*)^T \nabla f(x^*) \ge 0,~ \forall x \in X$. This condition is concisely expressed by VI($X,\nabla f$) \cite{App-04-VI-2}.
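This optimality condition is easy to verify numerically on a toy instance. The sketch below (the data $p$ and the box $X=[0,1]^3$ are assumptions) uses $f(x)=\|x-p\|^2$, whose minimizer over a box is the clipping of $p$.

```python
import numpy as np

# Toy check of VI(X, grad f) for convex minimization (assumed data):
# f(x) = ||x - p||^2 over the box X = [0, 1]^3; the minimizer is the
# projection (clipping) of p onto X.
p = np.array([1.5, -0.3, 0.4])
x_star = np.clip(p, 0.0, 1.0)
grad = 2.0 * (x_star - p)

# The VI condition (x - x*)^T grad f(x*) >= 0 should hold for all x in X;
# we verify it on a dense random sample of the box.
rng = np.random.default_rng(0)
samples = rng.random((100000, 3))
assert ((samples - x_star) @ grad).min() >= -1e-12
```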
However, when the Jacobian matrix of $F$ is not symmetric, $F$ cannot be written as the gradient of another scalar function, and hence the variational inequality problem encompasses broader classes of problems than traditional mathematical programs. For example, when $X = \mathbb R^n$, problem (\ref{eq:App-04-VI}) degenerates into a system of equations $F(x^*)=0$; when $X = \mathbb R^n_+$, problem (\ref{eq:App-04-VI}) comes down to an NCP $0 \le x^* \bot F(x^*) \ge0$.
To see the latter case, note that $x^* \ge 0$ because it belongs to $X$. If any element of $F(x^*)$ were negative, say the first element $[F(x^*)]_1 < 0$, we could let $x_1 = x^*_1 +1$ and $x_i=x^*_i$, $i=2,\cdots,n$; then $(x-x^*)^T F(x^*) = [F(x^*)]_1 < 0$, which contradicts (\ref{eq:App-04-VI}). Hence $F(x^*) \ge 0$ must hold. Letting $x=0$ in (\ref{eq:App-04-VI}), we have $(x^*)^T F(x^*) \le 0$. Because $x^* \ge 0$ and $F(x^*) \ge 0$, there must be $(x^*)^T F(x^*) = 0$, resulting in the target NCP.
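A small NCP of this kind can be solved by the classic projection iteration $x \leftarrow \max(x - \tau F(x), 0)$, which is a contraction for small $\tau$ when $F$ is strongly monotone and Lipschitz. The affine map below is an assumed toy instance, not data from the text.

```python
import numpy as np

# NCP instance (assumed data): F(x) = M x + q with a nonsymmetric M whose
# symmetric part is positive definite, so F is strongly monotone.
M = np.array([[3.0, 1.0],
              [-1.0, 2.0]])
q = np.array([-2.0, 1.0])
F = lambda x: M @ x + q

# Projection iteration on VI(R^n_+, F): x <- max(x - tau*F(x), 0).
x = np.zeros(2)
for _ in range(2000):
    x = np.maximum(x - 0.1 * F(x), 0.0)

# The limit satisfies the NCP 0 <= x, F(x) >= 0, x^T F(x) = 0;
# for this data it is x = (2/3, 0) with F(x) = (0, 1/3).
```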
The monotonicity of $F$ plays a central role in the theoretical analysis of VI problems, just like the role of convexity in mathematical programming. It has a close relationship with the Jacobian matrix $\nabla F$ \cite{App-04-GNEP-VI-1,App-04-VI-3}:
\begin{svgraybox}
\normalsize
\renewcommand{\arraystretch}{1.5}
\setlength{\tabcolsep}{1em}
\centering
\begin{tabular}{lll}
$F(x)$ is monotone on $X$ & $\Leftrightarrow$ & $\nabla F(x) \succeq 0$, $\forall x \in X$ \\
$F(x)$ is strictly monotone on $X$ & $\Leftarrow$ & $\nabla F(x) \succ 0$, $\forall x \in X$ \\
$F(x)$ is strongly monotone on $X$ & $\Leftrightarrow$ & $\nabla F(x) -c_{m}I \succeq 0$, $\forall x \in X$ \\
\end{tabular}
\end{svgraybox}
\noindent where $c_m$ is a strictly positive constant. As a correspondence to convexity, a differentiable function $f$ is convex (strictly convex, strongly convex) on $X$ if and only if $\nabla f$ is monotone (strictly monotone, strongly monotone) on $X$.
Conceptually, monotonicity (convexity) is the weakest property, since the matrix $\nabla F(x)$ can have zero eigenvalues; strict monotonicity (strict convexity) is stronger, as all eigenvalues of $\nabla F(x)$ are strictly positive; strong monotonicity (strong convexity) is the strongest, because the smallest eigenvalue of $\nabla F(x)$ must be bounded below by a given positive number. Intuitively, a strongly convex function must be at least as convex as some convex quadratic function; for example, $f(x)=x^2$ is strongly convex on $\mathbb R$, while $f(x)=1/x$ is convex on $\mathbb R_{++}$ and strongly convex on $(0,1]$.
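For a nonsymmetric Jacobian, the condition $\nabla F \succeq 0$ is a statement about the symmetric part, since $d^T J d = d^T \tfrac{1}{2}(J + J^T) d$ for every direction $d$. A minimal sketch of this check for an affine map (a simplifying assumption, so that the Jacobian is constant):

```python
import numpy as np

def monotonicity(J, tol=1e-9):
    """Classify an affine map F(x) = J x + q via its constant Jacobian J
    (a sketch). For nonsymmetric J only the symmetric part matters,
    because d^T J d = d^T (J + J^T)/2 d for every direction d."""
    lam_min = np.linalg.eigvalsh((J + J.T) / 2.0).min()
    if lam_min > tol:
        return "strongly monotone"   # c_m can be taken as lam_min > 0
    if lam_min >= -tol:
        return "monotone"
    return "not monotone"
```

For instance, the rotation-like map with $J = \begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix}$ is monotone but nowhere strictly monotone, since its symmetric part is the zero matrix.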
To formulate an NEP as a VI problem and establish the existence and uniqueness result of Nash equilibria, we list some assumptions on the convexity and smoothness of each player's problem.
\begin{assumption}
\label{ap:App-04-NE-Convex-Smooth} \cite{App-04-GNEP-VI-1}
1) The strategy set $X_i$ is non-empty, closed and convex;
2) Function $f_i(x_i,x_{-i})$ is convex in $x_i \in X_i$ for fixed $x_{-i} \in X_{-i}$;
3) Function $f_i(x_i,x_{-i})$ is continuously differentiable in $x_i \in X_i$ for fixed $x_{-i} \in X_{-i}$;
4) Function $f_i(x_i,x_{-i})$ is twice continuously differentiable in $x \in X$ with bounded second derivatives.
\end{assumption}
\begin{proposition}
\label{pr:App-04-NE-VI} \cite{App-04-GNEP-VI-1}
In a standard NEP NE($X,f$), where $f = (f_1,\cdots,f_n)$, if conditions 1)-3) in Assumption \ref{ap:App-04-NE-Convex-Smooth} are met, then the game is equivalent to a variational inequality problem VI($X,F$) with
\begin{equation*}
X = X_1 \times \cdots \times X_n
\end{equation*}
and
\begin{equation*}
F(x) = (\nabla_{x_1} f_1 (x),\cdots,\nabla_{x_n} f_n (x))
\end{equation*}
\end{proposition}
In the VI problem corresponding to a traditional mathematical program, the Jacobian matrix $\nabla F$ is symmetric, because it is the Hessian matrix of a scalar function. However, in Proposition \ref{pr:App-04-NE-VI}, the Jacobian matrix $\nabla F$ for an NEP is generally non-symmetric. Building upon the VI reformulation, the standard results on solution properties of VI problems \cite{App-04-VI-1} can be extended to standard NEPs.
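A concrete sketch of this construction for a two-player quadratic game (all payoff data below are assumptions chosen for illustration): stacking the partial gradients yields a VI map whose Jacobian is not symmetric, yet whose symmetric part is positive definite, so the equilibrium is unique and a projected-gradient iteration on VI($X,F$) finds it.

```python
import numpy as np

# Toy 2-player NEP (assumed data): player i minimizes over X_i = [0, 1]
#   f_i(x_i, x_-i) = 0.5*x_i**2 + c_i*x_1*x_2 - b_i*x_i.
c1, c2, b1, b2 = 0.4, -0.3, 0.6, 0.3
F = lambda x: np.array([x[0] + c1 * x[1] - b1,    # grad_{x_1} f_1
                        x[1] + c2 * x[0] - b2])   # grad_{x_2} f_2
J = np.array([[1.0, c1], [c2, 1.0]])              # nonsymmetric Jacobian

# The symmetric part of J is positive definite, so F is strongly
# monotone; a projected-gradient iteration on VI([0,1]^2, F) converges.
x = np.zeros(2)
for _ in range(400):
    x = np.clip(x - 0.5 * F(x), 0.0, 1.0)

# For this data the unique (interior) equilibrium is x_1 = x_2 = 3/7.
```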
\begin{proposition}
\label{pr:App-04-NE-Property}
Given an NEP NE($X,f$) in which all conditions in Assumption \ref{ap:App-04-NE-Convex-Smooth} are met, we have the following statements:
1) If $F(x)$ is strictly monotone, then the game has at most one Nash equilibrium.
2) If $F(x)$ is strongly monotone, then the game has a unique Nash equilibrium.
\end{proposition}
Some sufficient conditions guaranteeing that $F(x)$ is (strictly, strongly) monotone are given in \cite{App-04-GNEP-VI-1}. It should be pointed out that the equilibrium concept in the sense of Proposition \ref{pr:App-04-NE-Property} is termed a pure-strategy Nash equilibrium, to distinguish it from the mixed-strategy Nash equilibrium that will appear later on.
\subsection{Best Response Algorithms}
\label{App-D-Sect01-03}
A major benefit of the VI reformulation is that it leads to easily implementable solution algorithms. Here we list two of them; readers interested in the proofs of their convergence properties can consult \cite{App-04-GNEP-VI-1}.
{\noindent \bf 1. Algorithms for strongly convex cases}
The first algorithm is a totally asynchronous iterative one, in which players may update their strategies with different frequencies. Let $T=\{0,1,2,\cdots\}$ be the set of iteration indices, and $T_i \subseteq T$ be the set of steps at which player $i$ updates his own strategy $x_i$. The notation $x^k_i$ denotes the strategy of player $i$ at step $k$; if $k \notin T_i$, then $x^k_i$ remains unchanged. Let $t^i_j(k)$ be the latest step at which the strategy of player $j$ was received by player $i$ before step $k$. Therefore, if player $i$ updates his strategy at step $k$, he uses the following strategy profile offered by the other players:
\begin{equation}
\label{eq:App-04-NE-Strategy-Update}
x^{t^i(k)}_{-i} = \left(x^{t^i_1(k)}_1,\cdots,x^{t^i_{i-1}(k)}_{i-1},
x^{t^i_{i+1}(k)}_{i+1},\cdots,x^{t^i_n(k)}_{n} \right)
\end{equation}
Using the above definitions, the totally asynchronous iterative algorithm is summarized in Algorithm \ref{Ag:App-04-NE-Asy-BR}. Some technical conditions that the schedules $T_i$ and $t^i_j(k)$ must satisfy in order to be implementable in practice are discussed in \cite{App-04-Update-Sequence-1,App-04-Update-Sequence-2}; these are assumed to hold without particular mention.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf : Asynchronous best-response algorithm}
\begin{algorithmic}[1]
\STATE Choose a convergence tolerance $\varepsilon>0$ and a feasible initial point $x^0 \in X$; the iteration index is $k=0$;
\STATE For player $i=1,\cdots,n$, update the strategy $x^{k+1}_i$ as
\begin{equation}
\label{eq:App-04-NE-Asy-BR}
x^{k+1}_i = \begin{cases}
x^*_i \in \arg \min_{x_i} \left\{ f_i\left( x_i,x^{t^i(k)}_{-i} \right) ~\middle|~ x_i \in X_i \right\} & \mbox{if } k \in T_i \\
x^k_i & \mbox{otherwise}
\end{cases}
\end{equation}
\STATE If $\| x^{k+1} - x^k \|_2 \le \varepsilon$, terminate and report $x^{k+1}$ as the Nash equilibrium; otherwise, update $k \leftarrow k+1$, and go to step 2.
\end{algorithmic}
\label{Ag:App-04-NE-Asy-BR}
\end{algorithm}
A sufficient condition which guarantees the convergence of Algorithm \ref{Ag:App-04-NE-Asy-BR} is provided in \cite{App-04-GNEP-VI-1}. Roughly speaking, Algorithm \ref{Ag:App-04-NE-Asy-BR} converges if $f_i(x)$ is strongly convex in $x_i$ for every $i$. However, this is a strong assumption: it fails even if there is a single point at which the partial Hessian matrix $\nabla^2_{x_i} f_i(x)$ of some player $i$ is singular.
Algorithm \ref{Ag:App-04-NE-Asy-BR} reduces to some classic algorithms by enforcing a special updating procedure, i.e., a particular selection of $T_i$ and $t^i_j(k)$.
For example, if players update their strategies simultaneously (sequentially), Algorithm \ref{Ag:App-04-NE-Asy-BR} becomes a Jacobi (Gauss-Seidel) type iterative scheme. Interestingly, the asynchronous best-response algorithm is robust against missing or delayed data and, under the above assumption, is guaranteed to find the unique Nash equilibrium. This feature greatly relaxes the requirement of data synchronization, simplifies the design of communication systems, and makes this class of algorithms very appealing in distributed system operation.
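As a concrete illustration, the following sketch (the quadratic game and all numbers are our own assumptions, not from the text) runs a Gauss-Seidel best-response iteration on a two-player game whose payoffs $f_i(x_i,x_j) = (x_i - a_i)^2 + c\,x_i x_j$ are strongly convex in $x_i$:

```python
import numpy as np

# Hypothetical two-player game: f_i(x_i, x_j) = (x_i - a_i)^2 + c*x_i*x_j.
# The best response of player i is x_i = a_i - (c/2)*x_j, and the iteration
# is a contraction when |c/2| < 1.
a, c = np.array([1.0, 2.0]), 0.5

def best_response(i, x):
    j = 1 - i
    return a[i] - (c / 2) * x[j]

x = np.zeros(2)
for k in range(200):                 # Gauss-Seidel sweeps: sequential updates
    x_old = x.copy()
    for i in range(2):
        x[i] = best_response(i, x)   # always use the latest rival strategy
    if np.linalg.norm(x - x_old) <= 1e-10:
        break

# The Nash equilibrium solves x1 = a1 - (c/2)x2, x2 = a2 - (c/2)x1.
x_star = np.linalg.solve(np.array([[1, c / 2], [c / 2, 1]]), a)
assert np.allclose(x, x_star, atol=1e-8)
```

A Jacobi variant would compute both best responses from `x_old` before overwriting `x`; for this contraction it converges to the same fixed point.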
{\noindent \bf 2. Algorithms for convex cases}
To relax the strong monotonicity assumption on $F(x)$, a second algorithm was proposed in \cite{App-04-GNEP-VI-1}; it only requires the monotonicity property and is summarized below. Algorithm \ref{Ag:App-04-NE-Proximal} converges to a Nash equilibrium if each player's optimization problem is convex (or $F(x)$ is monotone), which significantly improves its applicability.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf : Proximal Decomposition Algorithm}
\begin{algorithmic}[1]
\STATE Given $\{ \rho_n\}_{n=0}^\infty$, $\varepsilon >0$, and $\tau > 0$, choose a feasible initial point $x^0 \in X$;
\STATE Find an equilibrium $z^0$ of the following NEP using Algorithm \ref{Ag:App-04-NE-Asy-BR}
\begin{equation}
\label{eq:App-04-NE-Proximal}
\left. \begin{aligned}
\min_{x_i} ~~ & f_i( x_i,x_{-i}) + \tau \| x_i - x^0_i \|^2_2 \\
\mbox{s.t.}~~ & x_i \in X_i
\end{aligned} \right\}, i=1,\cdots,n
\end{equation}
\STATE If $\| z^0 - x^0 \|_2 \le \varepsilon$, terminate and report $x^0$ as the Nash equilibrium; otherwise, update $x^0 \leftarrow (1-\rho_n) x^0 + \rho_n z^0$, and go to step 2.
\end{algorithmic}
\label{Ag:App-04-NE-Proximal}
\end{algorithm}
Algorithm \ref{Ag:App-04-NE-Proximal} is a double-loop method: the inner loop identifies a Nash equilibrium of the regularized game (\ref{eq:App-04-NE-Proximal}) with $x^0$ being a parameter, which is updated in each iteration, and the outer loop updates $x^0$ by selecting a new point along the line connecting $x^0$ and $z^0$. Notice that in step 2, as long as $\tau$ is large enough, the Hessian matrix $\nabla^2_{x_i} f_i (x) + 2 \tau I$ must be positive definite, and thus the best-response algorithm applied to (\ref{eq:App-04-NE-Proximal}) is guaranteed to converge to the unique Nash equilibrium. See \cite{App-04-GNEP-VI-1} for more details about parameter selection.
The penalty term $\tau \| x_i - x^0_i \|^2_2$ limits the change of the optimal strategy between two consecutive iterations, and can be interpreted as a damping factor that attenuates possible oscillations during the computation. It is worth mentioning that the penalty parameter $\tau$ significantly impacts the convergence rate of Algorithm \ref{Ag:App-04-NE-Proximal} and should be selected carefully. If it is too small, the damping effect of the penalty term is limited and oscillation may still take place; if it is too large, the increment of $x$ in each step is very small and Algorithm \ref{Ag:App-04-NE-Proximal} may suffer from a slow convergence rate. The optimal value of $\tau$ is problem-dependent, and there is no universal way to determine it.
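A minimal sketch of the double-loop scheme (the game, $\tau$, and $\rho$ are our own choices, not from the text) on a monotone but not strongly monotone game, where plain best response would oscillate:

```python
import numpy as np

# Hypothetical game: f1 = x1*x2, f2 = -x1*x2 on X_i = [-1, 1], so
# F(x) = (x2, -x1) has a skew-symmetric Jacobian (monotone, not strictly).
# The proximal term tau*||x_i - x0_i||^2 makes the inner game strongly convex.
tau, rho, eps = 1.0, 0.5, 1e-9

def inner_equilibrium(x0, sweeps=500):
    """Gauss-Seidel best response for the regularized game (inner loop)."""
    z = x0.copy()
    for _ in range(sweeps):
        z_old = z.copy()
        # argmin_{z1 in [-1,1]}  z1*z2 + tau*(z1 - x0[0])^2
        z[0] = np.clip(x0[0] - z[1] / (2 * tau), -1, 1)
        # argmin_{z2 in [-1,1]} -z1*z2 + tau*(z2 - x0[1])^2
        z[1] = np.clip(x0[1] + z[0] / (2 * tau), -1, 1)
        if np.linalg.norm(z - z_old) <= 1e-12:
            break
    return z

x0 = np.array([0.8, -0.6])
for _ in range(200):                           # outer loop
    z0 = inner_equilibrium(x0)
    if np.linalg.norm(z0 - x0) <= eps:
        break
    x0 = (1 - rho) * x0 + rho * z0             # move along the line x0 -> z0

assert np.allclose(x0, [0.0, 0.0], atol=1e-6)  # unique NE of this game
```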
Recently, single-loop distributed algorithms for monotone Nash games were proposed in \cite{App-04-GNEP-ITR}, which the authors believe to be promising in practical applications. In these schemes, the regularization parameter is updated immediately after each iteration, rather than only after the regularized problem has been approximately solved, and each player can select its parameter independently.
\subsection{Nash Equilibrium of Matrix Games}
\label{App-D-Sect01-04}
As explained before, not all Nash games have an equilibrium, especially when the strategy sets or the payoff functions are non-convex or discrete. To widen the equilibrium notion and reveal deeper insights into the behavior of players in such instances, it is instructive to revisit a simple class of games, called matrix games, which are a primary research object of game theorists.
A bimatrix game is a matrix game involving two players, P1 and P2, who have $m$ and $n$ possible strategies, respectively. $A = \{a_{ij}\} \in \mathbb R^{m \times n}$ is the payoff matrix of P1: when P1 chooses strategy $i$ and P2 chooses strategy $j$, the payoff of P1 is $a_{ij}$. The payoff matrix $B \in \mathbb R^{m \times n}$ of P2 is defined in the same way. In a matrix game, each player seeks a probability distribution over his actions that minimizes his expected payoff. Let $x_i$ ($y_j$) be the probability that P1 (P2) will use strategy $i$ ($j$); the vectors $x=[x_1,\cdots,x_m]^T$ and $y = [y_1,\cdots,y_n]^T$ are called mixed strategies. Clearly,
\begin{equation}
\label{eq:App-04-MG-Mixed-Strategy}
\begin{gathered}
x \ge 0,~ \sum_{i=1}^m x_i = 1 \quad \mbox{or} \quad x \in {\rm \Delta}_m\\
y \ge 0,~ \sum_{j=1}^n y_j = 1 \quad \mbox{or} \quad y \in {\rm \Delta}_n\\
\end{gathered}
\end{equation}
where ${\rm \Delta}_m$ and ${\rm \Delta}_n$ are the standard simplices in $\mathbb R^m$ and $\mathbb R^n$.
{\noindent \bf 1. Two-person zero-sum games}
The zero-sum game represents a totally competitive situation: P1's gain is P2's loss, so the sum of their payoff matrices is $A+B = 0$, as the name suggests. This type of game has been well studied in a vast literature since von Neumann proved the famous Minimax Theorem in 1928. The game is revisited from a mathematical programming perspective in \cite{App-04-Minimax-LP}; the proposed linear programming method is especially useful for instances with high-dimensional payoff matrices. Next, we briefly introduce this method.
Let us begin with a payoff matrix $A = \{a_{ij}\}$ with strictly positive entries, i.e., $a_{ij} > 0$, $\forall i,j$ (otherwise, we can add a constant to every entry so that the smallest entry becomes positive; the equilibrium strategies remain the same). The expected payoff of P1 is given by
\begin{equation}
\label{eq:App-04-ZSG-Payoff}
V_A = \sum_{i=1}^m \sum_{j=1}^n x_i a_{ij} y_j = x^T A y
\end{equation}
which must be positive because of the element-wise positivity assumption on $A$.
Since $B=-A$ and minimizing $-x^T A y$ is equivalent to maximizing $x^T A y$, the two-person zero-sum game has a min-max form as
\begin{equation}
\label{eq:App-04-ZSG-1}
\min_{x \in {\rm \Delta}_m} \max_{y \in {\rm \Delta}_n} ~~ x^T A y
\end{equation}
or
\begin{equation}
\label{eq:App-04-ZSG-2}
\max_{y \in {\rm \Delta}_n} \min_{x \in {\rm \Delta}_m} ~~ x^T A y
\end{equation}
The solution to the two-person zero-sum matrix game (\ref{eq:App-04-ZSG-1}) or (\ref{eq:App-04-ZSG-2}) is called a mixed-strategy Nash equilibrium, or the saddle point of a min-max problem. It satisfies
\begin{gather}
(x^*)^T A y^* \le x^T A y^* ,~ \forall x \in {\rm \Delta}_m \notag \\
(x^*)^T A y^* \ge (x^*)^T A y ,~ \forall y \in {\rm \Delta}_n \notag
\end{gather}
To solve this game, consider (\ref{eq:App-04-ZSG-1}) in the following format
\begin{equation}
\label{eq:App-04-ZSG-3}
\min_x \left\{ v_1(x) ~\middle|~ x \in {\rm \Delta}_m \right\}
\end{equation}
where $v_1(x)$ is the optimal value function of the problem faced by P2 with the fixed strategy $x$ of P1
\begin{equation}
v_1(x) = \max_y \left\{ x^T A y ~\middle|~ y \in {\rm \Delta}_n \right\}\notag
\end{equation}
In view of the feasible region defined in (\ref{eq:App-04-MG-Mixed-Strategy}), $v_1(x)$ is equal to the maximal element of vector $x^T A$, which is strictly positive, and the inequality
\begin{equation}
A^T x \le {\bf 1}^n v_1(x) \notag
\end{equation}
holds. Furthermore, introducing a normalized vector $\bar x = x/v_1(x)$, we have
\begin{equation}
\begin{gathered}
\bar x \ge 0, ~~ A^T \bar x \le {\bf 1}^n \\
v_1(x) = ({\bar x}^T {\bf 1}^m)^{-1}
\end{gathered} \notag
\end{equation}
Taking these relations into account, problem (\ref{eq:App-04-ZSG-3})
becomes
\begin{equation}
\label{eq:App-04-ZSG-4}
\begin{aligned}
\min_{\bar x} ~~ & ({\bar x}^T {\bf 1}^m)^{-1} \\
\mbox{s.t.}~~ & A^T \bar x \le {\bf 1}^n \\
& \bar x \ge 0
\end{aligned}
\end{equation}
Because the objective is strictly positive and monotone, the optimal solution of (\ref{eq:App-04-ZSG-4}) remains unchanged if we instead maximize ${\bar x}^T {\bf 1}^m$ under the same constraints, giving rise to the following LP
\begin{equation}
\label{eq:App-04-ZSG-5}
\begin{aligned}
\max_{\bar x} ~~ & {\bar x}^T {\bf 1}^m \\
\mbox{s.t.}~~ & A^T \bar x \le {\bf 1}^n \\
& \bar x \ge 0
\end{aligned}
\end{equation}
Let $\bar x^*$ and $\bar v^*_1$ be the optimal solution and optimal value of LP (\ref{eq:App-04-ZSG-5}). According to the analysis of variable transformation, the optimal expected payoff $v^*_1$ and the optimal mixed strategy $x^*$ of P1 in this game are given by
\begin{equation}
\label{eq:App-04-ZSG-6}
v^*_1 = 1 / \bar v^*_1,~~ x^* = \bar x^* / \bar v^*_1
\end{equation}
Considering (\ref{eq:App-04-ZSG-2}) in the same way, we obtain the following LP for P2:
\begin{equation}
\label{eq:App-04-ZSG-7}
\begin{aligned}
\min_{\bar y} ~~ & {\bar y}^T {\bf 1}^n \\
\mbox{s.t.}~~ & A \bar y \ge {\bf 1}^m \\
& \bar y \ge 0
\end{aligned}
\end{equation}
Denote by $\bar y^*$ and $\bar v^*_2$ the optimal solution and optimal value of LP (\ref{eq:App-04-ZSG-7}), and then the optimal expected payoff $v^*_2$ and the optimal mixed strategy $y^*$ of P2 in this game can be posed as
\begin{equation}
\label{eq:App-04-ZSG-8}
v^*_2 = 1 / \bar v^*_2,~~ y^* = \bar y^* / \bar v^*_2
\end{equation}
In summary, the mixed-strategy Nash equilibrium of two-person zero-sum matrix game (\ref{eq:App-04-ZSG-1}) is $(x^*,y^*)$, and the payoff of P1 is $v^*_1$. Interestingly, we notice that problems (\ref{eq:App-04-ZSG-5}) and (\ref{eq:App-04-ZSG-7}) constitute a pair of dual LPs, implying that their optimal values are equal, and the optimal solution $y^*$ in (\ref{eq:App-04-ZSG-8}) also solves the inner LP of (\ref{eq:App-04-ZSG-1}). This observation
leads to two important conclusions:
1) The Nash equilibrium of a two-person zero-sum matrix game, i.e., the saddle point, can be computed by solving a pair of dual LPs. In fact, once one player's strategy, say $x^*$, has been obtained from (\ref{eq:App-04-ZSG-5}) and (\ref{eq:App-04-ZSG-6}), the rival's strategy can be retrieved by solving the inner problem of (\ref{eq:App-04-ZSG-1}) with $x = x^*$.
2) The decision sequence of a two-person zero-sum game is interchangeable without influencing the saddle point.
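The LP route can be sketched as follows (the payoff matrix is a made-up example; we use SciPy's `linprog` as the LP solver):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical positive payoff matrix of P1; P1 minimizes x^T A y,
# P2 maximizes it.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
m, n = A.shape

# LP (ZSG-5) for P1: max 1^T xbar  s.t.  A^T xbar <= 1, xbar >= 0
res1 = linprog(c=-np.ones(m), A_ub=A.T, b_ub=np.ones(n))
vbar1 = -res1.fun
v1, x_star = 1 / vbar1, res1.x / vbar1     # recover value and strategy (ZSG-6)

# LP (ZSG-7) for P2: min 1^T ybar  s.t.  A ybar >= 1, ybar >= 0
res2 = linprog(c=np.ones(n), A_ub=-A, b_ub=-np.ones(m))
vbar2 = res2.fun
v2, y_star = 1 / vbar2, res2.x / vbar2     # recovery step (ZSG-8)

assert np.isclose(v1, v2)                  # dual LPs have equal optimal values
assert np.allclose(x_star, [0.5, 0.5]) and np.allclose(y_star, [0.5, 0.5])
assert np.isclose(x_star @ A @ y_star, 2.0)   # game value at the saddle point
```

Note that `linprog` defaults to variable bounds $[0,\infty)$, which matches $\bar x \ge 0$, $\bar y \ge 0$.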
{\noindent \bf 2. General bimatrix games}
In more general two-person matrix games, the sum of payoff matrices is not equal to zero, and each player wishes to minimize its own expected payoff taking the other player's strategy as given. In the setting of mixed strategies, players are selecting the probability distribution among available strategies rather than a single action (the pure strategy), and the respective optimization problems are
as follows
\begin{equation}
\label{eq:App-04-BMG-1}
\begin{aligned}
\min_x ~~ & x^T A y \\
\mbox{s.t.}~~ & x^T {\bf 1}^m =1 : \lambda \\
& x \ge 0
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:App-04-BMG-2}
\begin{aligned}
\min_y ~~ & x^T B y \\
\mbox{s.t.}~~ & y^T {\bf 1}^n =1 : \gamma \\
& y \ge 0
\end{aligned}
\end{equation}
The pair of probability distributions $(x^*,y^*)$ is called a mixed-strategy Nash equilibrium if
\begin{gather}
(x^*)^T A y^* \le x^T A y^* ,~ \forall x \in {\rm \Delta}_m \notag \\
(x^*)^T B y^* \le (x^*)^T B y ,~ \forall y \in {\rm \Delta}_n \notag
\end{gather}
Unlike the zero-sum case, there is no equivalent LP that extracts the Nash equilibrium. Applying the KKT approach, we write out the KKT conditions of (\ref{eq:App-04-BMG-1})
\begin{gather*}
A y - \lambda {\bf 1}^m - \mu = 0 \\
0 \le \mu \bot x \ge 0 \\
x^T {\bf 1}^m = 1
\end{gather*}
where $\mu$ is the vector of dual variables associated with the non-negativity constraint, which can be eliminated using the first equality. Concatenating the KKT conditions of LPs (\ref{eq:App-04-BMG-1}) and (\ref{eq:App-04-BMG-2}) gives
\begin{equation}
\label{eq:App-04-BMG-3}
\begin{gathered}
0 \le A y - \lambda {\bf 1}^m \bot x \ge 0,~ x^T {\bf 1}^m = 1 \\
0 \le B^T x - \gamma {\bf 1}^n \bot y \ge 0,~ y^T {\bf 1}^n = 1 \\
\end{gathered}
\end{equation}
The complementarity conditions (\ref{eq:App-04-BMG-3}) can be solved by setting $\lambda = \gamma = 1$, omitting the equality constraints, and recovering them in a later normalization step; i.e., we first solve
\begin{equation}
\label{eq:App-04-BMG-4}
\begin{gathered}
0 \le A y - {\bf 1}^m \bot x \ge 0 \\
0 \le B^T x - {\bf 1}^n \bot y \ge 0 \\
\end{gathered}
\end{equation}
Suppose that the solution is ($\bar x, \bar y$), then the Nash equilibrium is
\begin{equation}
\label{eq:App-04-BMG-5}
\begin{gathered}
x^* = \bar x / \bar x^T {\bf 1}^m \\
y^* = \bar y / \bar y^T {\bf 1}^n
\end{gathered}
\end{equation}
and the corresponding multipliers are derived from (\ref{eq:App-04-BMG-3}) as
\begin{equation}
\label{eq:App-04-BMG-6}
\begin{gathered}
\lambda^* = (x^*)^T A y^* \\
\gamma^* = (x^*)^T B y^* \\
\end{gathered}
\end{equation}
On the other hand, if ($x^*, y^*$) is a Nash equilibrium and solves (\ref{eq:App-04-BMG-3}) with multipliers ($\lambda^*,\gamma^*$), we can observe that ($x^*/\gamma^*,y^*/\lambda^*$) solves (\ref{eq:App-04-BMG-4}), therefore
\begin{equation}
\label{eq:App-04-BMG-7}
\begin{gathered}
\bar x = \dfrac{x^*}{(x^*)^T B y^*} \\
\bar y = \dfrac{y^*}{(x^*)^T A y^*}
\end{gathered}
\end{equation}
Now we can see that identifying the mixed-strategy Nash equilibrium of a bimatrix game entails solving KKT system (\ref{eq:App-04-BMG-3}) or (\ref{eq:App-04-BMG-4}), which is called a linear complementarity problem (LCP). A classical algorithm for LCPs is Lemke's method \cite{App-04-LCP-Lemke-1,App-04-LCP-Lemke-2}. Another systematic way to solve an LCP is to reformulate it as an MILP using the method described in Appendix \ref{App-B-Sect03-05}. Nonetheless, there are more tailored MILP models for LCPs, which will be detailed in Sect. \ref{App-D-Sect04-02}.
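For small games, the LCP (\ref{eq:App-04-BMG-4}) can also be solved by brute-force support enumeration rather than Lemke's method; the sketch below (the game data is our own example with no pure-strategy equilibrium) does so and then normalizes per (\ref{eq:App-04-BMG-5}):

```python
import itertools
import numpy as np

# Hypothetical 2x2 bimatrix game; both players minimize. Best responses cycle
# through the pure strategies, so no pure-strategy equilibrium exists.
A = np.array([[1.0, 3.0], [3.0, 1.0]])   # payoff matrix of P1
B = np.array([[3.0, 1.0], [1.0, 3.0]])   # payoff matrix of P2

def solve_lcp(A, B, tol=1e-9):
    """Find xbar, ybar >= 0 with 0 <= A ybar - 1 perp xbar,
    0 <= B^T xbar - 1 perp ybar, by enumerating equal-size supports."""
    m, n = A.shape
    supports = lambda k: (s for r in range(1, k + 1)
                          for s in itertools.combinations(range(k), r))
    for Sx in supports(m):
        for Sy in supports(n):
            if len(Sx) != len(Sy):       # square subsystems only
                continue
            try:
                # complementarity: (A ybar - 1) = 0 on Sx, (B^T xbar - 1) = 0 on Sy
                ybar_S = np.linalg.solve(A[np.ix_(Sx, Sy)], np.ones(len(Sx)))
                xbar_S = np.linalg.solve(B[np.ix_(Sx, Sy)].T, np.ones(len(Sy)))
            except np.linalg.LinAlgError:
                continue
            xbar, ybar = np.zeros(m), np.zeros(n)
            xbar[list(Sx)], ybar[list(Sy)] = xbar_S, ybar_S
            if (xbar >= -tol).all() and (ybar >= -tol).all() \
               and (A @ ybar >= 1 - tol).all() and (B.T @ xbar >= 1 - tol).all():
                return xbar, ybar
    return None

xbar, ybar = solve_lcp(A, B)
x_star = xbar / xbar.sum()               # normalization (BMG-5)
y_star = ybar / ybar.sum()
assert np.allclose(x_star, [0.5, 0.5]) and np.allclose(y_star, [0.5, 0.5])
# multipliers (BMG-6) are the expected payoffs at equilibrium
assert np.isclose(x_star @ A @ y_star, 2.0) and np.isclose(x_star @ B @ y_star, 2.0)
```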
Unlike the pure-strategy Nash equilibrium, whose existence relies on certain convexity assumptions, the mixed-strategy Nash equilibrium of a matrix game, being a discrete probability distribution over available actions, always exists \cite{App-04-Nash-2}. If a two-player game has no pure-strategy Nash equilibrium and each player chooses actions from a finite strategy set, we can calculate the payoff matrices as well as a mixed-strategy Nash equilibrium, which gives the probability with which each player adopts each pure strategy.
\subsection{Potential Games}
\label{App-D-Sect01-05}
Although directly certifying the existence and uniqueness of a pure-strategy Nash equilibrium for a general game model is non-trivial, the certification becomes much easier when the game possesses special structure. One such guarantee is the existence of an exact potential function; the associated problem is known as a potential game \cite{App-04-Potential-Game-1}. Four types of potential games, categorized by the type of potential function, are listed in \cite{App-04-Potential-Game-1}. Other extensions of potential games have been studied as well; for a complete introduction, we recommend \cite{App-04-Potential-Game-2}.
\begin{definition}
\label{df:App-04-PG-1}
(Exact potential game) A game is an exact potential game if there is a potential function $U(x)$ such that:
\begin{equation}
\label{eq:App-04-PG-1}
\begin{gathered}
f_i(x_i,x_{-i}) - f_i(y_i,x_{-i}) = U(x_i,x_{-i}) - U(y_i,x_{-i}) \\
\forall x_i,y_i \in X_i,~ \forall x_{-i} \in X_{-i},~ i = 1,\cdots,n
\end{gathered}
\end{equation}
\end{definition}
In an exact potential game, the change in the payoff of any single player due to a unilateral strategy deviation equals the corresponding change in the potential function. Among the variants of potential games defined by relaxing the strict equality (\ref{eq:App-04-PG-1}), the exact potential game is the most fundamental one and has attracted the majority of research interest. Throughout this section, the term potential game means an exact potential game unless otherwise stated.
The condition for a game being a potential game and the method for constructing the potential function are given in the following proposition.
\begin{proposition}
\label{pr:App-04-PG-1}
\cite{App-04-Potential-Game-1} Suppose the payoff functions $f_i$, $i=1,\cdots,n$ of a game are twice continuously differentiable; then a potential function exists if and only if
\begin{equation}
\label{eq:App-04-PG-2}
\dfrac{\partial^2 f_i}{\partial x_i \partial x_j} =
\dfrac{\partial^2 f_j}{\partial x_i \partial x_j},~~ \forall i,j = 1,\cdots,n
\end{equation}
and the potential function can be constructed as
\begin{equation}
\label{eq:App-04-PG-3}
U(v) - U(z) = \sum_{i=1}^n \int_0^1 (x^\prime_i(t))^T \dfrac{\partial f_i}{\partial x_i} (x(t)) d t
\end{equation}
where $x(t): [0,1] \to X$ is a continuously differentiable path in $X$ connecting strategy profile $v$ and a fixed strategy profile $z$, such that $x(0)=z$, $x(1)=v$.
\end{proposition}
To obtain (\ref{eq:App-04-PG-3}), first, a direct consequence of (\ref{eq:App-04-PG-1}) is
\begin{equation}
\label{eq:App-04-PG-4}
\dfrac{\partial f_i}{\partial x_i} = \dfrac{\partial U}{\partial x_i},~ i=1,\cdots,n
\end{equation}
For any smooth curve $C(t): [0,1] \to X$ and any function $U$ with a continuous gradient $\nabla U$, the gradient theorem in calculus tells us
\begin{equation}
U(C_{end}) - U(C_{start}) = \int_C \nabla U(s) \cdot d s \notag
\end{equation}
where $s$ represents points along the integration path $C$, parameterized by a scalar variable. Introduce $s=x(t)$: when $t=0$, $s=C_{start}=z$; when $t=1$, $s=C_{end}=v$. By the chain rule, $ds = x^\prime(t)\, dt$, and hence we get
\begin{equation}
\begin{aligned}
U(v) - U(z) & = \int_0^1 (x^\prime(t))^T \nabla U(x(t)) d t \\
& = \sum_{i=1}^n \int_0^1 (x_i^\prime(t))^T
\dfrac{\partial U}{\partial x_i} (x(t)) d t
\end{aligned} \notag
\end{equation}
Then, if $U$ is a potential function, substituting (\ref{eq:App-04-PG-4}) into the above equation gives (\ref{eq:App-04-PG-3}).
In summary, for a standard NEP with continuous payoff functions, we can check whether it is a potential game and further construct its potential function, provided the right-hand side of (\ref{eq:App-04-PG-3}) has a closed-form expression. Nevertheless, in some particular cases, the potential function can be found without calculating an integral.
1. The payoff functions of the game can be decomposed as
\begin{equation}
f_i(x_i,x_{-i}) = p_i(x_i) + Q(x),~ \forall i = 1,\cdots,n \notag
\end{equation}
where the first term depends only on $x_i$, and the second term, which couples all players' strategies, appears identically in every payoff function. In this case, the potential function is immediately
\begin{equation}
U(x) = Q(x) + \sum_{i=1}^n p_i(x_i) \notag
\end{equation}
which can be verified through its definition in (\ref{eq:App-04-PG-1}).
2. The payoff functions of the game can be decomposed as
\begin{equation}
f_i(x_i,x_{-i}) = p_i(x_{-i}) + Q(x),~ \forall i = 1,\cdots,n \notag
\end{equation}
where the first term depends only on the joint actions of the opponents $x_{-i}$, and the second term is common and identical to all players. In this case, the potential function is $Q(x)$. This is easy to understand because $x_{-i}$ is constant in the decision-making problem of player $i$, so the first term $p_i(x_{-i})$ can be omitted from the objective function.
3. The payoff function of each player has a form of
\begin{equation}
f_i(x_i,x_{-i}) = \left( a+b \sum_{j=1}^n x_j \right) x_i + c_i(x_i),~
i=1,\cdots,n \notag
\end{equation}
Obviously,
\begin{equation}
\dfrac{\partial^2 f_i}{\partial x_i \partial x_j} =
\dfrac{\partial^2 f_j}{\partial x_i \partial x_j} = b \notag
\end{equation}
therefore, a potential function exists and is given by
\begin{equation}
U(x) = a \sum_{i=1}^n x_i + \dfrac{b}{2} \sum_{i=1}^n \sum_{j \ne i} x_ix_j + b \sum_{i=1}^n x^2_i + \sum_{i=1}^n c_i(x_i) \notag
\end{equation}
The potential function provides a convenient way to analyze the Nash equilibria of potential games, since the function coincides with incentives of all players.
\begin{proposition}
\label{pr:App-04-PG-2} \cite{App-04-Potential-Game-2}
Suppose game $G_1$ is a potential game with potential function $U(x)$, and $G_2$ is another game with the same number of players whose payoff functions are $F_1(x_1,x_{-1}) = \cdots = F_n(x_n,x_{-n}) = U(x)$. Then $G_1$ and $G_2$ have the same set of Nash equilibria.
\end{proposition}
This is easy to understand because an equilibrium of $G_1$ satisfies
\begin{equation}
f_i(x^*_i,x^*_{-i}) \le f_i(x_i,x^*_{-i}), \forall x_i \in X_i,~
i = 1,\cdots,n \notag
\end{equation}
By the definition of potential function (\ref{eq:App-04-PG-1}), this gives
\begin{equation}
\label{eq:App-04-PG-5}
U(x^*_i,x^*_{-i}) \le U(x_i,x^*_{-i}), \forall x_i \in X_i,~ i = 1,\cdots,n
\end{equation}
So any equilibrium of $G_1$ is an equilibrium of $G_2$. Similarly, the reverse holds, too.
In Proposition \ref{pr:App-04-PG-2}, the identical-interest game $G_2$ is actually an optimization problem; the potential function thus builds a bridge between an NEP and a mathematical programming problem. Letting $X = X_1 \times \cdots \times X_n$, (\ref{eq:App-04-PG-5}) can be written as
\begin{equation}
\label{eq:App-04-PG-6}
U(x^*) \le U(x), \forall x \in X
\end{equation}
On this account, we have
\begin{proposition}
\label{pr:App-04-PG-3} \cite{App-04-Potential-Game-1}
Every minimizer of the potential function $U(x)$ in $X$ is a (pure-strategy) Nash equilibrium of the potential game.
\end{proposition}
Proposition \ref{pr:App-04-PG-3} is very useful: it reveals that computing a Nash equilibrium of a potential game is equivalent to solving a traditional mathematical program. Meanwhile, the existence and uniqueness of Nash equilibria of potential games can be understood from the solution properties of NLPs.
\begin{proposition}
\label{pr:App-04-PG-4} \cite{App-04-Potential-Game-2}
Every potential game with a continuous potential function $U(x)$ and a compact strategy space $X$ has at least one (pure-strategy) Nash equilibrium. If $U(x)$ is strictly convex, then the Nash equilibrium is unique.
\end{proposition}
Propositions \ref{pr:App-04-PG-3}-\ref{pr:App-04-PG-4} make no reference to the convexity of the individual payoff functions of the players. Moreover, if the potential function $U(x)$ is non-convex and has multiple local minima, then each local minimizer corresponds to a local Nash equilibrium, in which $X_i$ in (\ref{eq:App-04-PG-5}) is replaced by the intersection of $X_i$ with a neighborhood of $x^*_i$.
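Proposition \ref{pr:App-04-PG-3} suggests a direct computational recipe: minimize $U$ and check that no unilateral deviation is profitable. The sketch below applies it to the case-3 game above, with coefficients of our own choosing and $c_i(x_i)=d_i x_i^2$, so that $U$ is strictly convex:

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Case-3 potential game with assumed data: f_i = (a + b*sum x)*x_i + d_i*x_i^2.
a, b, n = 1.5, 0.7, 3
d = np.array([1.0, 2.0, 3.0])

def f(i, xi, x_minus):                  # player i's payoff
    s = xi + x_minus.sum()
    return (a + b * s) * xi + d[i] * xi ** 2

def U(x):                               # closed-form potential function
    cross = sum(x[i] * x[j] for i in range(n) for j in range(n) if j != i)
    return a * x.sum() + (b / 2) * cross + b * (x ** 2).sum() + (d * x ** 2).sum()

x_star = minimize(U, np.zeros(n)).x     # minimize the potential (strictly convex)

# Each x*_i is the best response to x*_{-i}: no profitable unilateral deviation.
for i in range(n):
    x_minus = np.delete(x_star, i)
    br = minimize_scalar(lambda xi: f(i, xi, x_minus)).x
    assert np.isclose(br, x_star[i], atol=1e-4)
```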
\section{Generalized Nash Equilibrium Problem}
\label{App-D-Sect02}
In the above developments for standard NEPs, we have assumed that the strategy sets are decoupled: the available strategies of each player do not depend on the other players' choices. However, there are many practical cases in which the strategy sets interact. For example, when players consume a common resource, the total consumption should not exceed the available inventory. The generalized Nash equilibrium problem (GNEP), introduced in \cite{App-04-GNEP-1}, relaxes the strategy-independence assumption of classic NEPs and allows the feasible set of each player's actions to depend on the rivals' strategies. For a comprehensive review, we recommend \cite{App-04-GNEP-2}.
\subsection{Formulation and Optimality Condition}
\label{App-D-Sect02-01}
Denote by $X_i(x_{-i})$ the strategy set of player $i$ when others select $x_{-i}$. In a GNEP, given the value of $x_{-i}$, each player $i$ determines a strategy $x_i \in X_i(x_{-i})$ which minimizes a payoff function $f_i(x_i,x_{-i})$. In this regard, a GNEP with $n$ players is the joint solution of $n$ coupled optimization problems
\begin{equation}
\label{eq:App-04-GENP-MP}
\left. \begin{aligned}
\min_{x_i} ~~ & f_i(x_i,x_{-i}) \\
\mbox{s.t.}~~ & x_i \in X_i(x_{-i})
\end{aligned} \right\},~~ i=1,\cdots,n
\end{equation}
In (\ref{eq:App-04-GENP-MP}), correlation
takes place not only in the objective function, but also in the constraints.
\begin{definition}
\label{df:App-04-GENP}
A generalized Nash equilibrium (GNE), or the solution of a
GNEP, is a feasible point $x^*$ such that
\begin{equation}
f_i(x^*_i,x^*_{-i}) \le f_i(x_i,x^*_{-i}),~ \forall x_i \in X_i (x^*_{-i})
\end{equation}
holds for all players.
\end{definition}
In its full generality, the GNEP is much more difficult than an NEP due to the variability of strategy sets. In this section, we restrict our attention to a particular class of GNEP: the so-called GNEP with shared convex constraints. In such a problem, the strategy sets can be expressed as
\begin{equation}
\label{eq:App-04-GENP-Convex-Xi}
X_i(x_{-i}) = \left\{x_i ~\middle|~ x_i \in Q_i,~
g(x_i,x_{-i}) \le 0 \right\},~ i = 1,\cdots,n
\end{equation}
where $Q_i$ is a closed and convex set involving only $x_i$, and $g(x_i,x_{-i}) \le 0$ represents the shared constraints: a set of convex inequalities coupling all players' strategies that appears identically in every $X_i(x_{-i})$, $i=1,\cdots,n$. Sometimes, $Q_i$ and $g(x_i,x_{-i}) \le 0$ are also referred to as local and global constraints, respectively.
In the absence of shared constraints, the GNEP reduces to a standard NEP. Define the feasible set of strategy profile $x=(x_1,\cdots,x_n)$ in a GNEP
\begin{equation}
\label{eq:App-04-GENP-Convex-X}
X = \left\{x ~\middle|~ x \in \prod_{i=1}^n Q_i,~ g(x) \le 0 \right\}
\end{equation}
It is easy to see that $X_i(x_{-i})$ is a slice of $X$. A geometric interpretation of (\ref{eq:App-04-GENP-Convex-Xi}) is illustrated in Fig. \ref{fig:App-04-01}. It is seen that the choice of $x_1$ influences the feasible interval $X_2(x_1)$ of Player 2.
\begin{figure}
\caption{Relations of $X$ and the individual strategy sets.}
\label{fig:App-04-01}
\end{figure}
We make some assumptions on the smoothness and convexity for a GNEP with shared constraints.
\begin{assumption}
\label{ap:App-04-GNEP-Convex-Smooth} \
1) Strategy set $Q_i$ of each player is nonempty, closed, and convex.
2) Payoff function $f_i(x_i,x_{-i})$ of each player is twice continuously differentiable in $x$ and convex in $x_i$ for every fixed $x_{-i}$.
3) Functions $g(x) = (g_1(x),\cdots,g_m(x))$ are differentiable and convex in $x$.
\end{assumption}
In analogy with the NEP, concatenating the KKT optimality conditions of the optimization problems in (\ref{eq:App-04-GENP-MP}) gives what is called the KKT condition of the GNEP. For notational brevity, we omit the local constraints ($Q_i=\mathbb R^{n_i}$) and assume that $X_i(x_{-i})$ contains only global constraints. The KKT condition of GNEP (\ref{eq:App-04-GENP-MP}) reads
\begin{equation}
\label{eq:App-04-GNEP-KKT}
\left. \begin{gathered}
\nabla_{x_i} f_i (x_i ,x_{-i}) + \lambda_i^T \nabla_{x_i} g (x) = 0 \\
\lambda_i \ge 0,~g(x) \le 0,~ \lambda_i^T g (x) = 0
\end{gathered} \right\}~i = 1,\cdots,n
\end{equation}
where $\lambda_i$ is the Lagrange multiplier vector associated with the global constraints in the $i$-th player's problem.
\begin{proposition}
\label{pr:App-04-GNEP-KKT} \cite{App-04-GNEP-2} \
1) Let $\bar x = (\bar x_1,\cdots,\bar x_n)$ be the equilibrium of a GNEP, then a multiplier vector $\bar \lambda = (\bar \lambda_1,\cdots,\bar \lambda_n)$ exists, such that the pair $(\bar x, \bar \lambda)$ solves KKT system (\ref{eq:App-04-GNEP-KKT}).
2) If $(\bar x, \bar \lambda)$ solves KKT system (\ref{eq:App-04-GNEP-KKT}), and Assumption \ref{ap:App-04-GNEP-Convex-Smooth} holds, then $\bar x$ is an equilibrium of GNEP (\ref{eq:App-04-GENP-MP}) with shared convex constraints.
\end{proposition}
However, in contrast to an NEP, the solutions of a GNEP may be non-isolated and constitute a low-dimensional manifold, because $g(x)$ is a common constraint shared by all players, and the Jacobian of the KKT system may be singular. A meticulous explanation is provided in \cite{App-04-GNEP-2}; here we give a graphic interpretation of this phenomenon.
Consider a GNEP with two players:
\begin{gather}
\mbox{Player 1:} \quad
\left\{ \begin{aligned}
\max_{x_1} ~~ & x_1 \\
\mbox{s.t.}~~ & x_1 \in X_1(x_2)
\end{aligned} \right. \notag \\
\mbox{Player 2:} \quad
\left\{ \begin{aligned}
\max_{x_2} ~~ & x_2 \\
\mbox{s.t.}~~ & x_2 \in X_2(x_1)
\end{aligned} \right. \notag
\end{gather}
where
\begin{gather}
X_1(x_2) = \{x_1 ~|~ x_1 \ge 0,~ g(x_1,x_2) \le 0 \} \notag \\
X_2(x_1) = \{x_2 ~|~ x_2 \ge 0,~ g(x_1,x_2) \le 0 \} \notag
\end{gather}
and the global constraint set is
\begin{equation*}
\left\{ x ~\middle|~ \begin{gathered}
g_1 = 2 x_1 + x_2 - 2 \le 0 \\
g_2 = x_1 + 2 x_2 - 2 \le 0
\end{gathered} \right\}
\end{equation*}
The feasible set $X$ of the strategy profile is plotted in Fig. \ref{fig:App-04-02}. It can be verified that any point on the line segments
\begin{equation}
L_1 = \left\{ (x_1,x_2) ~\middle|~ 0 \le x_1 \le \frac{2}{3},~
\frac{2}{3} \le x_2 \le 1,~ x_1 + 2 x_2 = 2 \right\} \notag
\end{equation}
and
\begin{equation}
L_2 = \left\{ (x_1,x_2) ~\middle|~ \frac{2}{3} \le x_1 \le 1,~
0 \le x_2 \le \frac{2}{3},~ 2 x_1 + x_2 = 2 \right\} \notag
\end{equation}
is an equilibrium point that satisfies Definition \ref{df:App-04-GENP}.
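This claim is easy to check numerically. The following sketch (with the shared constraints written as $2x_1+x_2\le 2$ and $x_1+2x_2\le 2$, consistent with the segments $L_1$ and $L_2$) verifies that sampled points on $L_1$ are equilibria, while an interior point is not:

```python
# Numerical sanity check (sketch) for the two-player example. Each player
# maximizes its own coordinate with the rival's coordinate held fixed,
# subject to x >= 0, 2*x1 + x2 <= 2, x1 + 2*x2 <= 2.

def best_response_1(x2):
    # max x1 s.t. x1 >= 0, 2*x1 + x2 <= 2, x1 + 2*x2 <= 2
    return max(0.0, min((2 - x2) / 2, 2 - 2 * x2))

def best_response_2(x1):
    # max x2 s.t. x2 >= 0, 2*x1 + x2 <= 2, x1 + 2*x2 <= 2
    return max(0.0, min(2 - 2 * x1, (2 - x1) / 2))

def is_equilibrium(x1, x2, tol=1e-9):
    return (abs(best_response_1(x2) - x1) < tol
            and abs(best_response_2(x1) - x2) < tol)

# sampled points on L1: x1 + 2*x2 = 2 with 0 <= x1 <= 2/3
for x1 in (0.0, 0.2, 0.5, 2 / 3):
    assert is_equilibrium(x1, (2 - x1) / 2)

# an interior point is not an equilibrium
assert not is_equilibrium(0.3, 0.3)
```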
\begin{figure}
\caption{Illustration of the equilibria of a simple GNEP.}
\label{fig:App-04-02}
\end{figure}
To select a meaningful equilibrium from the infinitely many candidates, it has been proposed to impose additional conditions on the Lagrange multipliers associated with the shared constraints \cite{App-04-GNEP-3}. The outcome is called a restricted Nash equilibrium. Two special cases are discussed here.
{\noindent \bf 1. Normalized Nash equilibrium}
The normalized Nash equilibrium
was first introduced in \cite{App-04-NNE-Rosen}. It imposes the following proportionality constraint on the dual multipliers
\begin{equation}
\label{eq:App-04-NNE}
\lambda_i = \beta_i \lambda_0,~ \beta_i > 0,~ i = 1,\cdots,n
\end{equation}
where $\lambda_0 \in \mathbb R^m_+$. Solving KKT system (\ref{eq:App-04-GNEP-KKT}) together with constraint (\ref{eq:App-04-NNE}) gives an equilibrium solution. It is shown that for any given $\beta \in \mathbb R^n_{++}$, a normalized Nash equilibrium exists as long as the game is feasible. Moreover, if the mapping $F_\beta: \mathbb R^n \to \mathbb R^n$ defined by
\begin{equation}
F_\beta (x) = \left(
\begin{gathered}
\frac{1}{\beta_1} \nabla_{x_1} f_1(x_1,x_{-1}) \\ \vdots \\
\frac{1}{\beta_n} \nabla_{x_n} f_n(x_n,x_{-n})
\end{gathered} \right) \notag
\end{equation}
and parameterized in $\beta$ is strictly monotone (which holds under suitable convexity assumptions on the payoff functions), then the normalized Nash equilibrium is unique.
The relation in (\ref{eq:App-04-NNE}) indicates that the dual variables $\lambda_i$ associated with the shared constraints are a common vector $\lambda_0$ scaled by different positive scalars. From an economic perspective, this means that the shadow prices of common resources perceived by the players at a normalized Nash equilibrium are proportional to one another.
{\noindent \bf 2. Variational equilibrium}
Recall the variational inequality formulation of the NEP in Proposition \ref{pr:App-04-NE-VI}; a GNEP with shared convex constraints can be treated in the same way. Let $F(x) = (\nabla_{x_1} f_1 (x),\cdots,\nabla_{x_n} f_n (x))$ and let the feasible region $X$ be defined as in (\ref{eq:App-04-GENP-Convex-X}); then every solution of the variational inequality problem VI($X,F$) is an equilibrium of the GNEP, called a variational equilibrium (VE).
However, unlike an NEP and its associated VI problem which have the same solutions, not all equilibria of the GNEP are preserved when it is passed to a corresponding VI problem; see \cite{App-04-GNEP-VI-2,App-04-GNEP-VI-3} for examples and further details. In fact, a solution $x^*$ of a GNEP is a VE if and only if it solves KKT system (\ref{eq:App-04-GNEP-KKT}) with the following constraints on the Lagrange dual multipliers \cite{App-04-GNEP-VI-1,App-04-GNEP-2,App-04-GNEP-VI-2}:
\begin{equation}
\label{eq:App-04-VE}
\lambda_1 = \cdots = \lambda_n = \lambda_0 \in \mathbb R^m_+
\end{equation}
implying that all players perceive the same shadow prices of the common resources at a VE. The VI approach has two important implications. First, it allows us to analyze a GNEP using well-developed VI theory, such as conditions that guarantee the existence and uniqueness of the equilibrium point; second, condition (\ref{eq:App-04-VE}) gives an interesting economic interpretation of the VE, and inspires pricing-based distributed algorithms to compute an equilibrium solution, which will be discussed in the next section.
The concept of a potential game for NEPs directly applies to GNEPs. If a GNEP with shared convex constraints possesses a potential function $U(x)$ satisfying (\ref{eq:App-04-PG-1}), an equilibrium can be retrieved from a mathematical program that minimizes the potential function over the feasible set $X$ defined in (\ref{eq:App-04-GENP-Convex-X}). To reveal the connection between the optimal solution and the VE, we omit constraints in the local strategy sets $Q_i$, $i=1,\cdots,n$, for notational simplicity, and write the mathematical program as
\begin{equation}
\label{eq:App-04-Potential-GNEP-1}
\begin{aligned}
\min_x ~~ & U(x) \\
\mbox{s.t.} ~~ & g(x) \le 0
\end{aligned}
\end{equation}
whose KKT optimality condition is given by
\begin{equation}
\label{eq:App-04-Potential-GNEP-2}
\begin{gathered}
\nabla_{x} U (x) + \lambda^T \nabla_{x} g (x) = 0 \\
\lambda \ge 0,~g(x) \le 0,~ \lambda^T g (x) = 0
\end{gathered}
\end{equation}
The first equality can be decomposed into $n$ sub-equations
\begin{equation}
\label{eq:App-04-Potential-GNEP-3}
\nabla_{x_i} U (x) + \lambda^T \nabla_{x_i} g (x) = 0,~ i=1,\cdots,n
\end{equation}
Recalling from (\ref{eq:App-04-PG-4}) that $\nabla_{x_i} U (x) = \nabla_{x_i} f_i (x_i ,x_{-i})$, substituting this into (\ref{eq:App-04-Potential-GNEP-2}) yields
\begin{gather}
\nabla_{x_i} f_i (x_i ,x_{-i}) + \lambda^T \nabla_{x_i} g (x) = 0,~ i=1,\cdots,n \notag \\
\lambda \ge 0,~g(x) \le 0,~ \lambda^T g (x) = 0 \notag
\end{gather}
which is exactly KKT system (\ref{eq:App-04-GNEP-KKT}) with the identical-shadow-price constraint (\ref{eq:App-04-VE}). In this regard, we arrive at
\begin{proposition}
\label{pr:App-04-Potential-GNEP}
Optimizing the potential function of a GNEP with shared convex constraints gives a variational equilibrium.
\end{proposition}
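For the running example, each player maximizes $x_i$ (i.e., minimizes $-x_i$), so $U(x)=-(x_1+x_2)$ is a potential function. A brute-force sketch (with the shared constraints written as $2x_1+x_2\le 2$ and $x_1+2x_2\le 2$, consistent with $L_1$ and $L_2$; a grid search stands in for an LP solver) confirms that minimizing $U$ recovers the VE $(2/3,2/3)$:

```python
# Minimize the potential U(x) = -(x1 + x2) over the shared feasible set
# by exhaustive grid search; the minimizer approximates the VE (2/3, 2/3).
best, best_val = None, float("inf")
n = 600
for i in range(n + 1):
    for j in range(n + 1):
        x1, x2 = 2 * i / n, 2 * j / n
        if 2 * x1 + x2 <= 2 and x1 + 2 * x2 <= 2:
            val = -(x1 + x2)
            if val < best_val:
                best_val, best = val, (x1, x2)

assert abs(best[0] - 2 / 3) < 1e-2 and abs(best[1] - 2 / 3) < 1e-2
```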
Consider the example shown in Fig. \ref{fig:App-04-02} again. The point $(2/3,2/3)$ is the unique VE of the GNEP, which is plotted in Fig. \ref{fig:App-04-03}. The corresponding dual variables of the global constraints $g_1 \le 0$ and $g_2 \le 0$ are $(1/3,1/3)$.
\begin{figure}
\caption{Illustration of the variational equilibrium.}
\label{fig:App-04-03}
\end{figure}
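The VE and its multipliers can be verified directly against KKT system (\ref{eq:App-04-GNEP-KKT}) with the common-multiplier condition (\ref{eq:App-04-VE}). A small sketch (constraints written as $2x_1+x_2\le 2$ and $x_1+2x_2\le 2$, consistent with the equilibrium segments):

```python
# Check (sketch) the VE conditions at x* = (2/3, 2/3) with the common
# multiplier vector lambda = (1/3, 1/3); players maximize x_i, so the
# minimization form of the payoff is -x_i with gradient -1.
x = (2 / 3, 2 / 3)
lam = (1 / 3, 1 / 3)
# shared constraints g1 = 2*x1 + x2 - 2 <= 0, g2 = x1 + 2*x2 - 2 <= 0
g = (2 * x[0] + x[1] - 2, x[0] + 2 * x[1] - 2)
# stationarity of player i: -1 + lam1 * dg1/dx_i + lam2 * dg2/dx_i = 0
station_1 = -1 + lam[0] * 2 + lam[1] * 1
station_2 = -1 + lam[0] * 1 + lam[1] * 2
assert abs(station_1) < 1e-9 and abs(station_2) < 1e-9
# primal feasibility and complementarity (both constraints are active)
assert all(gi <= 1e-9 and abs(li * gi) < 1e-9 for gi, li in zip(g, lam))
```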
\subsection{Best-Response Algorithm}
\label{App-D-Sect02-02}
The presence of shared constraints wrecks the Cartesian structure of $\prod_{i=1}^n Q_i$ found in a standard Nash game, and prevents a direct application of the best-response methods presented in Appendix \ref{App-D-Sect01-03} for NEPs. Moreover, even if an equilibrium can be found, it may depend on the initial point as well as the optimization sequence, because the solutions of a GNEP are non-isolated. To illustrate this pitfall, take Fig. \ref{fig:App-04-03} as an example. Suppose we pick an arbitrary point $x_0 \in X$ as the initial value. If we first maximize $x_1$ ($x_2$), the point moves to $B$ ($A$); in the second step, $x_2$ ($x_1$) does not change, because the point is already an equilibrium in the sense of Definition \ref{df:App-04-GENP}. In view of this, the fixed-point iteration may return any outcome on the line segments connecting $(2/3,2/3)$ with $(0,1)$ or $(1,0)$, depending on the initialization.
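This path dependence is easy to reproduce. The sketch below (constraints written as $2x_1+x_2\le 2$, $x_1+2x_2\le 2$, consistent with the equilibrium segments) runs a Gauss-Seidel best-response iteration from the same starting point under the two possible player orderings and reaches two different equilibria:

```python
def br1(x2):  # best response of player 1: max x1 s.t. shared constraints
    return max(0.0, min((2 - x2) / 2, 2 - 2 * x2))

def br2(x1):  # best response of player 2: max x2 s.t. shared constraints
    return max(0.0, min(2 - 2 * x1, (2 - x1) / 2))

def gauss_seidel(x1, x2, player1_first=True, iters=50):
    for _ in range(iters):
        if player1_first:
            x1 = br1(x2)
            x2 = br2(x1)
        else:
            x2 = br2(x1)
            x1 = br1(x2)
    return x1, x2

# same initial point, different orderings, different equilibria
a = gauss_seidel(0.1, 0.1, player1_first=True)   # lands on segment L2
b = gauss_seidel(0.1, 0.1, player1_first=False)  # lands on segment L1
assert abs(a[0] - 0.95) < 1e-6 and abs(a[1] - 0.1) < 1e-6
assert abs(b[0] - 0.1) < 1e-6 and abs(b[1] - 0.95) < 1e-6
```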
This section introduces the distributed algorithms proposed in \cite{App-04-GNEP-VI-1} which identify a VE of GNEP (\ref{eq:App-04-GENP-MP}) with shared convex constraints. Motivated by the Lagrange decomposition framework, we can rewrite problem (\ref{eq:App-04-GENP-MP}) in a more convenient form. Consider finding a pair ($x,\lambda$), where $x$ is the equilibrium of the following standard NEP $\mathcal G(\lambda)$ with a given vector $\lambda$ of Lagrange multipliers
\begin{equation}
\label{eq:App-04-GNEP-DIS-1}
\mathcal G(\lambda): \quad \left\{
\begin{aligned}
\min_{x_i} ~~ & f_i(x_i,x_{-i}) + \lambda ^T g(x) \\
\mbox{s.t.}~~ & x_i \in Q_i
\end{aligned} \right\},~ i=1,\cdots,n
\end{equation}
and furthermore, a complementarity constraint
\begin{equation}
\label{eq:App-04-GNEP-DIS-2}
0 \le \lambda \bot -g(x) \ge 0
\end{equation}
Problem (\ref{eq:App-04-GNEP-DIS-1})-(\ref{eq:App-04-GNEP-DIS-2}) has a clear economic interpretation: if the shared constraints represent the availability of some common resources, the vector $\lambda$ can be viewed as the prices paid by players for consuming these resources. When a resource is adequate, the corresponding inequality constraint is non-binding and its Lagrange multiplier is zero; the multiplier, or shadow price, is positive only when a resource becomes scarce, as indicated by a binding inequality constraint. This relation is imposed by constraint (\ref{eq:App-04-GNEP-DIS-2}).
The KKT conditions of $\mathcal G(\lambda)$ (\ref{eq:App-04-GNEP-DIS-1}) in conjunction with condition (\ref{eq:App-04-GNEP-DIS-2}) turn out to be the VE condition of GNEP (\ref{eq:App-04-GENP-MP}). In view of this connection, a VE can be found by solving (\ref{eq:App-04-GNEP-DIS-1})-(\ref{eq:App-04-GNEP-DIS-2}) in a distributed manner based on previous algorithms developed for NEPs. Likewise, we discuss strongly convex cases and convex cases separately, due to their different convergence guarantees.
{\noindent \bf 1. Algorithms for strongly convex cases}
Suppose that the game $\mathcal G(\lambda)$ in (\ref{eq:App-04-GNEP-DIS-1}) is strongly convex and has a unique Nash equilibrium $x(\lambda)$ for any given $\lambda \ge 0$. This uniqueness condition allows defining the map
\begin{equation}
\label{eq:App-04-GNEP-ALG-1}
{\rm \Phi} : \lambda \mapsto -g(x(\lambda))
\end{equation}
which quantifies the negative violation of the shared constraints at $x(\lambda)$. Based on (\ref{eq:App-04-GNEP-DIS-1})-(\ref{eq:App-04-GNEP-DIS-2}), the distributed algorithm is provided as follows.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf }
\begin{algorithmic}[1]
\STATE Choose an initial price vector $\lambda^0 \ge 0$. The iteration index is $k = 0$.
\STATE Given $\lambda^k$, find the unique equilibrium $x(\lambda^k)$ of $\mathcal G(\lambda^k)$ using Algorithm \ref{Ag:App-04-NE-Asy-BR}.
\STATE If $0 \le \lambda^k \bot {\rm \Phi} (\lambda^k) \ge 0$ is satisfied, terminate and report $x(\lambda^k)$ as the VE; otherwise, choose $\tau_k > 0$, and update the price vector according to
\begin{equation}
\lambda^{k+1}=\left[\lambda^k - \tau_k {\rm \Phi} (\lambda^k) \right]^+ \notag
\end{equation}
set $k \leftarrow k+1$, and go to step 2.
\end{algorithmic}
\label{Ag:App-04-GNE-1}
\end{algorithm}
Algorithm \ref{Ag:App-04-GNE-1} is a double-loop method. The admissible range of the step size $\tau_k$ and the convergence proof are thoroughly discussed in \cite{App-04-GNEP-VI-1}, based on the monotonicity of the mapping $F+ \nabla g(x) \lambda$, where $F=(\nabla_{x_i} f_i)_{i=1}^n$ and $\nabla g(x)$ is the matrix whose $i$-th column is $\nabla g_i$.
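A minimal sketch of the price-update loop on an assumed strongly convex toy instance (two players with $f_i(x)=(x_i-1)^2$ and one shared constraint $x_1+x_2\le 1$; this instance is not from the text). For fixed $\lambda$, the inner game decouples and its unique equilibrium is available in closed form, so the inner loop is trivial here:

```python
# Toy instance: min_{x_i >= 0} (x_i - 1)^2 + lam * x_i for each player,
# shared constraint g(x) = x1 + x2 - 1 <= 0. The unique inner-game
# equilibrium is x_i(lam) = max(0, 1 - lam/2).

def inner_equilibrium(lam):
    xi = max(0.0, 1 - lam / 2)
    return (xi, xi)

def Phi(lam):
    # negative constraint violation -g(x(lam))
    x1, x2 = inner_equilibrium(lam)
    return 1 - x1 - x2

lam, tau = 0.0, 0.5
for _ in range(100):
    # outer loop: projected price update lam <- [lam - tau * Phi(lam)]^+
    lam = max(0.0, lam - tau * Phi(lam))

x = inner_equilibrium(lam)
# the VE of this instance is x = (1/2, 1/2) with shadow price lam = 1
assert abs(lam - 1.0) < 1e-6
assert abs(x[0] - 0.5) < 1e-6 and abs(x[1] - 0.5) < 1e-6
```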
{\noindent \bf 2. Algorithms for convex cases}
Now we consider the case in which the VI associated with problem (\ref{eq:App-04-GNEP-DIS-1})-(\ref{eq:App-04-GNEP-DIS-2}) is merely monotone (at least one problem in (\ref{eq:App-04-GNEP-DIS-1}) is not strongly convex). In such circumstances, the convergence of Algorithm \ref{Ag:App-04-GNE-1} is no longer guaranteed, not only because Algorithm \ref{Ag:App-04-NE-Asy-BR} for the inner-loop game $\mathcal G(\lambda)$ may fail to converge, but also because the outer-loop update would have to be modified. To circumvent this difficulty, we convexify the game using regularization terms, as was done in Algorithm \ref{Ag:App-04-NE-Proximal}. To this end, we need an optimization reformulation of the complementarity constraint (\ref{eq:App-04-GNEP-DIS-2}), which is given by
\begin{equation}
\lambda \in \arg \min_{\bar \lambda} \left\{ - \bar \lambda^T g(x) ~\middle|~ \bar \lambda \ge 0 \right\} \notag
\end{equation}
Then, consider the following ordinary NEP with $n+1$ players in which the last player controls the price vector $\lambda$:
\begin{equation}
\label{eq:App-04-GNEP-n+1}
\begin{aligned}
\min_{x_i} ~~ & \left\{ f_i(x_i,x_{-i}) + \lambda ^T g(x)
~\middle|~ x_i \in Q_i \right\},~ i = 1,\cdots,n \\
\min_{\lambda} ~~ & \left\{ -\lambda^T g(x) ~\middle|~ \lambda \ge 0 \right\}
\end{aligned}
\end{equation}
where the last player solves an LP in the variable $\lambda$, parameterized in $x$. At the equilibrium, $g(x) \le 0$ is implicitly satisfied. To see this, note that because each $Q_i$ is bounded, the problems of the first $n$ players have finite optima for arbitrary $\lambda$. If $g(x) \nleq 0$, the last player's LP is unbounded, imposing an arbitrarily large penalty on the violated constraint, and thus the first $n$ players will alter their strategies accordingly. Whenever $g(x) \le 0$ holds, the last LP attains a zero minimum, which enforces (\ref{eq:App-04-GNEP-DIS-2}). In summary, the extended game (\ref{eq:App-04-GNEP-n+1}) has the same equilibria as problem (\ref{eq:App-04-GNEP-DIS-1})-(\ref{eq:App-04-GNEP-DIS-2}). Since the strategy sets of (\ref{eq:App-04-GNEP-n+1}) have a Cartesian structure, Algorithm \ref{Ag:App-04-NE-Proximal} can be applied to find an equilibrium.
\begin{algorithm}[!htp]
\normalsize
\caption{\bf }
\begin{algorithmic}[1]
\STATE Given $\{ \rho_k\}_{k=0}^\infty$, $\varepsilon >0$, and $\tau > 0$, choose a feasible initial point $x^0 \in X$ and an initial price vector $\lambda^0$; the iteration index is $k=0$.
\STATE Given $z^k = (x^k,\lambda^k)$, find a Nash equilibrium $z^{k+1} = (x^{k+1},\lambda^{k+1})$ of the following regularized NEP using Algorithm \ref{Ag:App-04-NE-Asy-BR}
\begin{equation}
\begin{aligned}
\min_{x_i} ~~ & \left\{ f_i(x_i,x_{-i}) + \lambda ^T g(x) + \tau \left\| x_i - x^k_i \right\|^2_2 ~\middle|~ x_i \in Q_i \right\},~ i = 1,\cdots,n\\
\min_{\lambda} ~~ & \left\{ -\lambda^T g(x) + \tau \left\| \lambda - \lambda^k \right\|^2_2 ~ ~\middle|~ \lambda \ge 0 \right\}
\end{aligned} \notag
\end{equation}
\STATE If $\| z^{k+1} - z^k \|_2 \le \varepsilon$, terminate and report $x^{k+1}$ as the variational equilibrium; otherwise, update $k \leftarrow k+1$, $z^k \leftarrow (1-\rho_k) z^{k-1} + \rho_k z^k$, and go to step 2.
\end{algorithmic}
\label{Ag:App-04-GNE-2}
\end{algorithm}
The convergence of Algorithm \ref{Ag:App-04-GNE-2} is guaranteed for a sufficiently large $\tau$. More quantitative discussions on parameter selection and convergence conditions can be found in \cite{App-04-GNEP-VI-1}. In practice, the value of $\tau$ should be carefully chosen to achieve satisfactory computational performance.
In NEPs and GNEPs, players make simultaneous decisions. In real-life decision-making problems, there are many situations in which players move sequentially. In the rest of this chapter, we consider three kinds of bilevel games, in which the upper-level (lower-level) players are called leaders (followers), and leaders make decisions prior to the followers'. The simplest one is the Stackelberg game, also known as the single-leader-single-follower game, or simply the bilevel program. The Stackelberg game can be generalized by incorporating multiple players in the upper and lower levels. Players at the same level make decisions simultaneously, whereas the followers' actions are subject to the leaders' moves, forming an NEP parameterized in the leaders' decisions. When there is only one leader, the problem is called a mathematical program with equilibrium constraints (MPEC); when there are multiple leaders, the problem is referred to as an equilibrium program with equilibrium constraints (EPEC), which is essentially a bilevel GNEP among the leaders.
\section{Bilevel Programs}
\label{App-D-Sect03}
A bilevel program is a special mathematical program with another optimization problem nested in its constraints. The main problem is called the upper-level problem, and its decision maker is the leader; the problem nested in the constraints is called the lower-level problem, and its decision maker is the follower. In game theory, a bilevel program is usually referred to as a Stackelberg game, which arises in many economic and engineering design problems.
\subsection{Bilevel Programs with a Convex Lower Level}
{\bf 1. Mathematical model and single-level equivalent}
A bilevel program is the most basic instance of a bilevel game. The leader moves first and chooses a decision $x$; then the follower selects its strategy $y$ by solving the lower-level problem parameterized in $x$
\begin{equation}
\label{eq:App-04-BLP-LLP}
\begin{aligned}
\min_y ~~ & f(x,y) \\
\mbox{s.t.}~~ & g(x,y) \le 0: \lambda \\
& h(x,y) = 0: \mu
\end{aligned}
\end{equation}
where $\lambda$ and $\mu$ following the colons are the dual variables associated with the inequality and equality constraints, respectively. We assume that problem (\ref{eq:App-04-BLP-LLP}) is convex and that its KKT condition is necessary and sufficient for a global optimum
\begin{equation}
\label{eq:App-04-BLP-LLP-KKT}
\mbox{Cons-KKT} = \left\{(x,y,\lambda,\mu) ~\middle|~
\begin{gathered}
\nabla_y f(x,y) + \lambda^T \nabla_y g(x,y) + \mu^T \nabla_y h(x,y) = 0 \\
0 \le \lambda \bot - g(x,y) \ge 0, ~ h(x,y) = 0
\end{gathered} \right\}
\end{equation}
The set of optimal solutions of problem (\ref{eq:App-04-BLP-LLP}) is denoted by $S(x)$. If (\ref{eq:App-04-BLP-LLP}) is strictly convex, the optimal solution is unique, and $S(x)$ reduces to a singleton.
When the leader minimizes its payoff function $F(x,y)$, the best response $y(x) \in S(x)$ is taken into account. The leader's problem is formally
described as
\begin{equation}
\label{eq:App-04-BLP-ULP}
\begin{aligned}
\min_{x, \bar y} ~~ & F(x,\bar y) \\
\mbox{s.t.}~~ & x \in X \\
& \bar y \in S(x)
\end{aligned}
\end{equation}
Notice that although $\bar y$ appears as a decision variable of the leader, it is actually controlled by the follower through the best-response mapping $S(x)$; the leader merely anticipates the follower's response when making its decision. When $S(x)$ is a singleton, the qualifier $\in$ reduces to $=$; otherwise, if $S(x)$ contains more than one element, (\ref{eq:App-04-BLP-ULP}) assumes that the follower chooses the element preferred by the leader. Therefore, (\ref{eq:App-04-BLP-ULP}) is called the optimistic equivalent. On the contrary, the pessimistic equivalent assumes that the follower chooses the element least favorable for the leader, which is more difficult to solve. In the optimistic case, replacing $\bar y \in S(x)$ with KKT condition (\ref{eq:App-04-BLP-LLP-KKT}) leads to an NLP formulation of the bilevel program, or more precisely, a mathematical program with complementarity constraints (MPCC)
\begin{equation}
\label{eq:App-04-BLP-MPCC}
\begin{aligned}
\min_{x,\bar y,\lambda,\mu} ~~ & F(x,\bar y) \\
\mbox{s.t.}~~ & x \in X,~
(x,\bar y,\lambda,\mu) \in \mbox{Cons-KKT}
\end{aligned}
\end{equation}
Although the lower-level problem (\ref{eq:App-04-BLP-LLP}) is convex, the follower's best-response map characterized by Cons-KKT is non-convex; hence a bilevel program is intrinsically non-convex and generally difficult to solve.
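To make the optimistic convention concrete, consider a tiny assumed instance (not from the text): the follower solves $\min_y\{y : y\ge x-1,\ y\ge 1-x\}$, whose unique solution is $y(x)=\max(x-1,1-x)$, and the leader maximizes $x+y(x)$ over $0\le x\le 2$. A brute-force sketch over a grid of leader decisions:

```python
def follower(x):
    # the follower's LP min y s.t. y >= x - 1, y >= 1 - x has the unique
    # solution y(x) = max(x - 1, 1 - x), so S(x) is a singleton here
    return max(x - 1, 1 - x)

# leader: max x + y(x) over 0 <= x <= 2, by brute force on a grid
grid = [2 * i / 1000 for i in range(1001)]
x_best = max(grid, key=lambda x: x + follower(x))
assert abs(x_best - 2.0) < 1e-9 and abs(follower(x_best) - 1.0) < 1e-9
```

Even in this one-dimensional example, the leader's reduced objective $x + |x-1|$ is non-convex (piecewise linear with a kink), reflecting the non-convexity discussed above.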
{\noindent \bf 2. Why are bilevel programs difficult to solve?}
Two difficulties prevent an MPCC from being solved reliably and efficiently.
1) The feasible region of (\ref{eq:App-04-BLP-ULP}) is non-convex: even if the objective functions and constraints of the leader and the follower are linear, the complementarity and slackness condition in (\ref{eq:App-04-BLP-LLP-KKT}) is still non-convex. An NLP solver only finds a local solution of a non-convex problem, if it succeeds at all, and global optimality can hardly be guaranteed.
2) Besides non-convexity, the failure to meet ordinary constraint qualifications creates another barrier to solving an MPCC. NLP algorithms generally stop when a stationary point of the KKT conditions is found; however,
due to the presence of the complementarity and slackness condition, the dual multipliers may not be well defined, because standard constraint qualifications are violated. Therefore, NLP solvers may fail to find a local optimum without particular treatment of the complementarity constraints. To see how constraint qualifications are violated, consider the simplest linear complementarity constraint
\begin{equation}
x \ge 0,~ y \ge 0, ~ x^T y =0,~ x \in \mathbb R^5,~ y \in \mathbb R^5 \notag
\end{equation}
The Jacobian matrix of the active constraints at point ($\bar x, \bar y$) is
\begin{equation}
J = \begin{bmatrix}
e_{\bar x} & 0 \\
0 & e_{\bar y} \\
\bar y^T & \bar x^T
\end{bmatrix} \notag
\end{equation}
where $e_{\bar x}$ and $e_{\bar y}$ are zero-one matrices whose rows correspond to the active constraints $x_i =0$, $i \in I$, and $y_j = 0$, $j \in J$, where $I \cup J = \{1,2,3,4,5\}$ and $I \cap J$ is not necessarily empty. Suppose that $I = \{1,2,4\}$ and $J = \{3,5\}$; then
\begin{equation}
J = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\bar y_1 & \bar y_2 & \bar y_3 & \bar y_4 & \bar y_5 &
\bar x_1 & \bar x_2 & \bar x_3 & \bar x_4 & \bar x_5
\end{bmatrix} \notag
\end{equation}
Since $\bar x_1 = \bar x_2 = \bar x_4 = 0$ and $\bar y_3 = \bar y_5 = 0$, the row vectors of $J$ are linearly dependent at the point ($\bar x, \bar y$). The same holds at any ($\bar x, \bar y$), regardless of the index sets $I$ and $J$ of active constraints: whenever $\bar y_i > 0$, complementarity enforces $\bar x_i=0$, creating a binding inequality in $x \ge 0$ and a row of $J$ whose $i$-th element is 1; whenever $\bar x_i > 0$, complementarity enforces $\bar y_i=0$, creating a binding inequality in $y \ge 0$ and a row of $J$ whose $(i+5)$-th element is 1. Therefore, the last row of $J$ can always be represented as a linear combination of the other rows.
The above discussion of linear complementarity constraints also applies to the nonlinear case, because the Jacobian matrix has the same structure. In this regard, the difficulty is an intrinsic phenomenon in MPCCs.
\begin{proposition}
\label{pr:App-04-MPCC-MFCQ}
Complementarity and slackness conditions violate the linear independence constraint qualification (LICQ) at any feasible solution.
\end{proposition}
From a geometric perspective, the feasible region of complementarity constraints consists of slices such as $x_i=0$ and $y_j = 0$; there is no strictly feasible point, and Slater's condition does not hold. In conclusion, general-purpose NLP solvers are not numerically reliable for solving MPCCs, although they were once used for such tasks.
{\noindent \bf 3. Methods for solving MPCCs}
In view of the limitations of standard NLP algorithms, refined stationarity concepts and constraint qualifications have been proposed for MPCCs, such as Bouligand, Clarke, Mordukhovich, weak, and strong stationarity; see \cite{App-04-BLP-CQ-1,App-04-BLP-CQ-2,App-04-BLP-CQ-3,App-04-BLP-CQ-4} for further information. Through proper transformations, MPCCs can also be solved via standard NLP algorithms. Several approaches are available for this task.
{\noindent \bf a. Regularization method \cite{App-04-MPCC-Reg-1,App-04-MPCC-Reg-2,App-04-MPCC-Reg-3}}
In this approach, the non-negativity and complementarity requirements
\begin{equation}
\label{eq:App-04-MPCC-Reg-1}
x \ge 0,~ y \ge 0,~ xy =0
\end{equation}
are approximated by
\begin{equation}
\label{eq:App-04-MPCC-Reg-2}
x \ge 0,~ y \ge 0, ~ xy \le \varepsilon
\end{equation}
Please note that $xy \ge 0$ is a natural consequence of the non-negativity of $x$ and $y$. When $\varepsilon=0$, (\ref{eq:App-04-MPCC-Reg-2}) is equivalent to (\ref{eq:App-04-MPCC-Reg-1}); when $\varepsilon > 0$, (\ref{eq:App-04-MPCC-Reg-2}) defines a larger feasible region than (\ref{eq:App-04-MPCC-Reg-1}), so this approach is sometimes called a relaxation method. The smaller $\varepsilon$ is, the closer any feasible point $(x,y)$ is to achieving complementarity.
If $x$ and $y$ are vectors with non-negative elements, $x^T y =0$ is equivalent to $x_i y_i =0$, $i=1,2,\cdots$. The same procedure applies if $x$ and $y$ are replaced by nonlinear functions.
Since Slater's condition holds for the feasible set defined by (\ref{eq:App-04-MPCC-Reg-2}) with $\varepsilon > 0$, NLP solvers can be applied to the related optimization problem. In a regularization procedure for solving an MPCC, the relaxation (\ref{eq:App-04-MPCC-Reg-2}) is applied with a gradually decreasing value of $\varepsilon$. If the initial value of $\varepsilon$ is too small, the solver may be numerically unstable and fail to find a feasible solution.
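A sketch of the regularization idea on an assumed toy problem (not from the text): minimize $(x-1)^2+(y-1)^2$ subject to $x,y\ge 0$ and $xy=0$, relaxed to $xy\le\varepsilon$. A grid search stands in for an NLP solver; as $\varepsilon$ shrinks, the relaxed optimal value approaches the true optimal value 1 (attained at $(1,0)$ or $(0,1)$):

```python
def relaxed_min(eps, n=400):
    # minimize (x-1)^2 + (y-1)^2 over [0, 2]^2 subject to x*y <= eps,
    # by exhaustive grid search (a stand-in for an NLP solver)
    best = float("inf")
    for i in range(n + 1):
        for j in range(n + 1):
            x, y = 2 * i / n, 2 * j / n
            if x * y <= eps:
                best = min(best, (x - 1) ** 2 + (y - 1) ** 2)
    return best

vals = [relaxed_min(eps) for eps in (1.0, 0.1, 0.001)]
# the relaxed optimal value increases monotonically toward 1 as eps -> 0
assert vals[0] <= vals[1] <= vals[2] <= 1.0 + 1e-9
assert abs(vals[2] - 1.0) < 0.05
```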
{\noindent \bf b. Penalization method \cite{App-04-MPCC-Pen-1,App-04-MPCC-Pen-2,App-04-MPCC-Pen-3}}
In this approach, the complementarity condition $xy=0$ is removed from the set of constraints; instead, an associated penalty term $xy/\varepsilon$ is added to the objective function to create an extra cost whenever complementarity is not satisfied. Since $x$ and $y$ are non-negative, as indicated by (\ref{eq:App-04-MPCC-Reg-1}), the penalty term would never take a negative value. In this way, the feasible region becomes much simpler.
In a penalization procedure for solving an MPCC, a sequence of NLPs is solved with a gradually decreasing value of $\varepsilon$, and the violation of the complementarity condition gradually approaches 0 as the iterations proceed. If $\varepsilon$ is initiated too small, the penalty coefficient $1/\varepsilon$ is very large, which may cause an ill-conditioned problem and numerical instability. One advantage of this iterative procedure is that the optimal solution in iteration $k$ can be used as the initial guess in iteration $k+1$, since the feasible region does not change and the solution of every iteration remains feasible in the next one. A downside of this approach is that the NLP solver generally identifies only a local optimum; in consequence, a smaller $\varepsilon$ does not necessarily lead to a solution closer to the feasible region.
{\noindent \bf c. Smoothing method \cite{App-04-MPCC-Smooth-1,App-04-MPCC-Smooth-2}}
This approach employs the perturbed Fischer-Burmeister function
\begin{equation}
\label{eq:App-04-Fischer-Burmeister-1}
\phi(x,y,\varepsilon) = x + y - \sqrt{x^2 + y^2 + \varepsilon}
\end{equation}
which was first introduced in \cite{App-04-MPCC-SQP-1} for LCPs, and shown to be particularly useful in SQP methods for solving MPCCs in \cite{App-04-MPCC-SQP-2}. Clearly, when $\varepsilon= 0$, the function $\phi$ reduces to the standard Fischer-Burmeister function, which satisfies
\begin{equation}
\label{eq:App-04-Fischer-Burmeister-2}
\phi(x,y,0) = 0 ~\Longleftrightarrow~ x \ge 0,~ y \ge 0,~ xy=0
\end{equation}
$\phi(x,y,0)$ is not smooth at the origin $(0,0)$. When $\varepsilon > 0$, the function $\phi$ satisfies
\begin{equation}
\label{eq:App-04-Fischer-Burmeister-3}
\phi(x,y,\varepsilon) = 0 ~\Longleftrightarrow~ x \ge 0,~ y \ge 0,~
xy = \varepsilon/2
\end{equation}
and is smooth in $x$ and $y$.
In view of this, complementarity and slackness condition (\ref{eq:App-04-MPCC-Reg-1}) can be replaced by $\phi(x,y,\varepsilon) = 0$ and further embedded in NLP models. When $\varepsilon$ tends to 0, (\ref{eq:App-04-MPCC-Reg-1}) is enforced approximately.
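These properties are easy to check numerically; a small sketch:

```python
import math

def phi(x, y, eps):
    # perturbed Fischer-Burmeister function
    return x + y - math.sqrt(x * x + y * y + eps)

# eps = 0: phi vanishes exactly on the complementarity set
for x, y in [(0.0, 3.0), (2.0, 0.0), (0.0, 0.0)]:
    assert abs(phi(x, y, 0.0)) < 1e-12

# eps > 0: phi(x, y, eps) = 0 holds precisely when x, y >= 0, x*y = eps/2
eps = 0.01
x = 1.0
y = eps / (2 * x)          # chosen so that x*y = eps/2
assert abs(phi(x, y, eps)) < 1e-9
# the perturbed function no longer vanishes (and is smooth) at the origin
assert abs(phi(0.0, 0.0, eps) + math.sqrt(eps)) < 1e-12
```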
{\noindent \bf d. Sequential quadratic programming (SQP) \cite{App-04-MPCC-SQP-1,App-04-MPCC-SQP-2,App-04-MPCC-SQP-3}}
SQP is a general-purpose NLP method. In each iteration of SQP, the quadratic functions in the complementarity constraints are approximated by linear ones, and the nonlinear objective function is replaced by its second-order Taylor expansion, constituting a quadratic program with linear constraints (possibly in conjunction with trust-region bounds). At the optimal solution of this subproblem, the nonlinear constraints are linearized and the objective function is approximated again, and the SQP algorithm proceeds to the next iteration.
When applied to an MPCC, the SQP method is often capable of finding a local optimal solution without a user-specified sequence $\varepsilon_k \to 0$, probably because the SQP solver itself is endowed with some softening ability; e.g., when a quadratic program encounters numerical issues, the SQP solver SNOPT automatically relaxes some hard constraints and penalizes their violation in the objective function.
The aforementioned classical methods are discussed in \cite{App-04-BLP-NLP-1}, and numeric experiences are reported in \cite{App-04-MPCC-NLP-Test}.
{\noindent \bf e. MINLP methods}
Due to their wide applications in various engineering disciplines, solution methods for MPCCs continue to be an active research area. Recall that a complementarity constraint of the form $g(x) \ge 0$, $h(x) \ge 0$, $g(x)h(x)=0$ is equivalent to
\begin{equation}
0 \le g(x) \le M z,~ 0 \le h(x) \le M(1-z) \notag
\end{equation}
where $z$ is a binary variable and $M$ is a sufficiently large constant. Therefore, an MPCC can be converted into a mixed-integer nonlinear program
(MINLP). The MINLP formulation removes the numerical difficulty of the MPCC; however, the computational complexity remains. If all functions in (\ref{eq:App-04-BLP-MPCC}) are linear, or there are only a few complementarity constraints, the resulting MILP or MINLP model may be solved within reasonable time; otherwise, the branch-and-bound algorithm can still offer upper and lower bounds on the optimal value.
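The equivalence of the two formulations can be sanity-checked by sampling (a sketch for the scalar case, assuming the sampled values are bounded by $M$):

```python
import random

M, TOL = 10.0, 1e-9

def complementary(g, h):
    # original form: g >= 0, h >= 0, g*h = 0
    return g >= -TOL and h >= -TOL and abs(g * h) <= TOL

def bigM_feasible(g, h):
    # big-M form: exists binary z with 0 <= g <= M*z, 0 <= h <= M*(1-z)
    return any(-TOL <= g <= M * z + TOL and -TOL <= h <= M * (1 - z) + TOL
               for z in (0, 1))

random.seed(1)
samples = [(random.uniform(-1, 5), random.uniform(-1, 5)) for _ in range(500)]
samples += [(0.0, 3.0), (4.0, 0.0), (0.0, 0.0), (-0.5, 0.0)]
for g, h in samples:
    assert complementary(g, h) == bigM_feasible(g, h)
```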
{\noindent \bf f. Convex relaxation/approximation methods}
If all functions in MPCC (\ref{eq:App-04-BLP-MPCC}) are linear, it is a non-convex QCQP whose non-convexity originates from the complementarity constraints. When the problem scale is large, the MILP method may be time-consuming. Inspired by the success of convex relaxation methods for non-convex QCQPs, there has been growing interest in developing convex relaxation methods for MPCCs. An SDP relaxation method is proposed in \cite{App-04-MPCC-SDP-1}, which is embedded in a branch-and-bound algorithm to solve the MPCC. For the MPCC derived from a bilevel polynomial program, it is proposed to solve a sequence of SDPs of increasing size, so as to solve the original problem globally \cite{App-04-MPCC-SDP-2,App-04-MPCC-SDP-3}. Convex relaxation methods have been applied to power market problems in \cite{App-04-MPCC-SDP-4,App-04-MPCC-SDP-5}. Numerical experiments show that the combination of MILP and SDP relaxation can greatly reduce the computation time. Nonetheless, bear in mind that the decision variable of the SDP relaxation model is an $n \times n$ matrix, so solving the SDP model may still be challenging, although it is convex.
Recently, a DC programming approach was proposed in \cite{App-04-LPCC-DCP} to solve the LPCC in its penalized version. In this approach, the quadratic penalty term is decomposed into the difference of two convex quadratic functions, and the concave part is then linearized. The computational performance reported in \cite{App-04-LPCC-DCP} is very promising.
\subsection{Special Bilevel Programs }
Although general bilevel programs are difficult, there are special cases that can be solved relatively easily. One such class is the linear bilevel program, in which the objective functions are linear and the constraints are polyhedral. The linear max-min problem is a special case of the linear bilevel program, in which the leader and the follower have completely opposite objectives. Furthermore, two special market models are studied.
{\noindent \bf 1. Linear bilevel program}
A linear bilevel program can be written as
\begin{equation}
\label{eq:App-04-Linear-Bilevel-1}
\begin{aligned}
\max_x ~~ & c^T x + d^T y(x) \\
\mbox{s.t.} ~~& C x \le d \\
& \begin{aligned}
y(x) \in \arg \min_y ~~ & f^T y \\
\mbox{s.t.} ~~ & By \le b - Ax
\end{aligned}
\end{aligned}
\end{equation}
In problem (\ref{eq:App-04-Linear-Bilevel-1}), the follower chooses its decision $y$ after the leader deploys its action $x$, which influences the feasible region of $y$. Meanwhile, the leader can predict the follower's optimal response $y(x)$, and chooses a strategy that optimizes $c^T x + d^T y(x)$. All matrices and vectors are constant coefficients.
Given the upper level decision $x$, the follower is facing an LP, whose KKT optimality condition is given by
\begin{equation}
\begin{gathered}
B^T u = f \\
0 \ge u ~\bot~ Ax + By - b \le 0
\end{gathered} \notag
\end{equation}
The last constraint is equivalent to the following linear constraints
\begin{equation}
\begin{gathered}
-M(1-z) \le u \le 0 \\
-Mz \le Ax+By-b \le 0
\end{gathered} \notag
\end{equation}
where $z$ is a vector consisting of binary variables, and $M$ is a large enough constant.
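The exactness of this big-M encoding can be checked numerically. The following Python sketch (with illustrative data; an actual model would pass these constraints to an MILP solver) verifies that, on a sample grid, a pair $(u, g)$ with $g = Ax + By - b$ satisfies the big-M system for some binary $z$ exactly when it satisfies the original complementarity condition.

```python
# Check that the big-M box constraints encode the complementarity
#   0 >= u  ⊥  g <= 0   (with g = Ax + By - b)
# exactly, for a scalar pair and a sufficiently large M (illustrative).
M = 100.0

def bigM_feasible(u, g):
    """True if (u, g) satisfies the big-M system for SOME binary z."""
    for z in (0, 1):
        if -M * (1 - z) <= u <= 0 and -M * z <= g <= 0:
            return True
    return False

def complementary(u, g):
    """True if (u, g) satisfies the original complementarity condition."""
    return u <= 0 and g <= 0 and abs(u * g) < 1e-9

# The two feasible sets coincide on a sample grid inside [-10, 0]^2
grid = [k / 2.0 for k in range(-20, 1)]          # -10.0, -9.5, ..., 0.0
for u in grid:
    for g in grid:
        assert bigM_feasible(u, g) == complementary(u, g)
print("big-M encoding matches complementarity on the grid")
```

Note that $z = 1$ pins $u = 0$ while leaving $g$ free, and $z = 0$ pins $g = 0$ while leaving $u$ free, so $u^T(Ax + By - b) = 0$ holds in either case.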
In problem (\ref{eq:App-04-Linear-Bilevel-1}), replacing follower's LP with its KKT condition gives rise to an MILP
\begin{equation}
\label{eq:App-04-Linear-Bilevel-MILP}
\begin{aligned}
\max_{x,y,u,z} ~~ & c^T x + d^T y \\
\mbox{s.t.} ~~& C x \le d, ~ B^T u = f \\
& -M(1-z) \le u \le 0 \\
&-Mz \le Ax + By - b \le 0
\end{aligned}
\end{equation}
If the number of complementarity constraints is moderate, MILP (\ref{eq:App-04-Linear-Bilevel-MILP}) can often be solved efficiently, despite its NP-hardness in the worst case. Since MILP solvers and computation hardware keep improving, this technique is always worth bearing in mind. Please also be aware that the big-M parameter notably impacts the performance of solving MILP (\ref{eq:App-04-Linear-Bilevel-MILP}). A heuristic method to determine this parameter in linear bilevel programs is proposed in \cite{App-04-LPCC-BigM}. The method first solves two LPs and generates a feasible solution of the equivalent MPCC; it then solves a regularized version of the MPCC model using NLP solvers and identifies a local optimal solution near the obtained feasible point; finally, the big-M parameter and the binary variables are initialized according to the local optimal solution. In this way, no manually supplied parameter is needed, and the MILP model is properly strengthened.
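Before resorting to a solver, the anticipated-response structure of (\ref{eq:App-04-Linear-Bilevel-1}) can be illustrated on a toy instance. The Python sketch below uses illustrative data, and grids stand in for the LPs an actual implementation would solve: the follower's best response is computed first, and the leader optimizes against it.

```python
# Toy linear bilevel program (illustrative data):
#   leader:   max  x + 2*y(x)       s.t. 0 <= x <= 4
#   follower: y(x) = argmin_y  -y   s.t. y <= 4 - x,  0 <= y <= 3
# The follower pushes y to its largest feasible value, so
# y(x) = min(3, 4 - x); the leader anticipates this response.

def follower(x):
    """Best response of the follower: minimize -y over its polyhedron."""
    ys = [k / 100.0 for k in range(0, 301)]            # grid on [0, 3]
    feas = [y for y in ys if y <= 4.0 - x + 1e-12]
    return max(feas)                                    # argmin of -y

best_x, best_val = None, float("-inf")
for i in range(0, 401):                                 # grid on [0, 4]
    x = i / 100.0
    val = x + 2.0 * follower(x)
    if val > best_val:
        best_x, best_val = x, val

assert abs(best_x - 1.0) < 1e-6 and abs(best_val - 7.0) < 1e-6
print(f"leader optimum: x = {best_x}, value = {best_val}")
```

For $x \le 1$ the follower's response is capped at $y = 3$, and for $x \ge 1$ it is $4 - x$; the leader's value $x + 2y(x)$ peaks at $x = 1$, which the enumeration recovers.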
Another optimality certification of follower's LP is the following primal-dual optimality condition
\begin{equation}
\begin{gathered}
B^T u = f,~ u \le 0,~ Ax + By \le b \\
u^T (b-Ax) = f^T y
\end{gathered} \notag
\end{equation}
The first line summarizes the feasible regions of the primal and dual variables. The last equation enforces equality of the primal and dual optimal values, which is known as the strong duality condition.
Replacing follower's LP with the primal-dual optimality condition gives an NLP:
\begin{equation}
\label{eq:App-04-Linear-Bilevel-NLP}
\begin{aligned}
\max_{x,y,u} ~~ & c^T x + d^T y \\
\mbox{s.t.} ~~& C x \le d, ~ Ax + By \le b \\
& u \le 0,~ B^T u = f \\
& u^T (b-Ax) = f^T y
\end{aligned}
\end{equation}
The following discussion is divided into three cases based on the type of variable $x$.
a. $x$ is {\bf continuous}. In this general situation, there is no effective way to solve problem (\ref{eq:App-04-Linear-Bilevel-NLP}) directly, due to the last bilinear equality. Noticing that $f^T y \ge u^T (b-Ax)$ always holds on the feasible region because of weak duality, the last constraint can be relaxed and penalized in the objective function, resulting in a bilinear program over a polyhedron \cite{App-04-LBLP-Pen-1,App-04-LBLP-Pen-2,App-04-LBLP-Pen-3}
\begin{equation}
\label{eq:App-04-Linear-Bilevel-NLP-Pen}
\begin{aligned}
\max_{x,y,u} ~~ & c^T x + d^T y - \sigma [f^T y - u^T (b-Ax)] \\
\mbox{s.t.} ~~& C x \le d, ~ Ax + By \le b \\
& u \le 0,~ B^T u = f
\end{aligned}
\end{equation}
where $\sigma > 0$ is a penalty parameter. In problem (\ref{eq:App-04-Linear-Bilevel-NLP-Pen}), the constraints on $u$ and $(x,y)$ are decoupled, so this problem can be solved by Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing} (mountain climbing) in Appendix \ref{App-C-Sect02-03}, if global optimality is not mandatory.
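The alternating scheme can be sketched on a toy bilinear program with decoupled boxes. In the Python fragment below (illustrative data; each ``LP'' is solved in closed form because the subproblems are one-dimensional), fixing one block makes the objective linear in the other, exactly as in the mountain climbing algorithm; only a local optimum is guaranteed in general.

```python
# Mountain-climbing sketch for a bilinear program with decoupled
# feasible sets (toy instance): maximize h(x, u) = 2x + u + 3xu
# over x in [0, 1] and u in [-1, 0].  With one block fixed, the
# objective is linear in the other, so each step is a (trivial) LP.

def best_x(u):            # argmax over x in [0, 1] of (2 + 3u) * x
    return 1.0 if 2.0 + 3.0 * u > 0 else 0.0

def best_u(x):            # argmax over u in [-1, 0] of (1 + 3x) * u
    return 0.0 if 1.0 + 3.0 * x > 0 else -1.0

def h(x, u):
    return 2.0 * x + u + 3.0 * x * u

prev, u = float("-inf"), -1.0          # arbitrary starting point
while True:                            # alternate until no improvement
    x = best_x(u)
    u = best_u(x)
    val = h(x, u)
    if val <= prev + 1e-12:
        break
    prev = val

assert (x, u) == (1.0, 0.0) and h(x, u) == 2.0
print(f"stationary point: x = {x}, u = {u}, value = {h(x, u)}")
```

On this instance the alternation happens to reach the global maximizer $(1, 0)$; in general the limit point depends on the starting point.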
In some problems, the upper-level decision influences the lower-level cost function and has no impact on the lower-level feasible region. For example, tax rate design and retail market pricing belong to this category. The same procedure can be applied to this kind of bilevel problem. We recommend the MILP model, because in the penalized model both $f^T y$ and $u^T Ax$ are non-convex. A tailored retail market model will be introduced later.
b. $x$ is {\bf binary}. In this case, the bilinear term $u^T A x =\sum_{ij} A_{ij} u_i x_j$ can be linearized by replacing $u_i x_j$ with a new continuous variable $v_{ij}$, together with auxiliary linear inequalities enforcing $v_{ij}=u_i x_j$. In this way, the last equality translates into
\begin{equation*}
\begin{gathered}
u^T b - \sum\nolimits_{ij} A_{ij} v_{ij} = f^T y \\
u^l_i x_j \le v_{ij} \le 0,~
u^l_i (1-x_j) \le u_i-v_{ij} \le 0,~ \forall i,j
\end{gathered}
\end{equation*}
where $u^l_i$ is a valid lower bound on $u_i$ that does not cut off the original optimal solution. As we can see, a bilevel linear program with binary upper-level variables is not necessarily harder than an all-continuous instance. This formulation is very useful for modeling interdiction problems, in which $x$ models the attack strategy.
c. $x$ can be {\bf discretized}. Even if $x$ is continuous, we can approximate it via binary expansion
\begin{equation*}
x_i = x^l_i + {\rm \Delta}_i \sum_{k=0}^{K} 2^k z_{ik},~ z_{ik} \in \{0,1\}
\end{equation*}
where $x^l_i$ ($x^m_i$) is the lower (upper) bound of $x_i$, and ${\rm \Delta}_i = (x^m_i-x^l_i)/2^{K+1}$ is the step size. With this transformation, the bilinear term $u^T A x$ becomes
\begin{equation*}
\sum_{ij} A_{ij} u_i x^l_i + \sum_{ij} A_{ij} u_i {\rm \Delta}_j \sum_{k=0}^{K} 2^k z_{jk}
\end{equation*}
The first term is linear, and the product $u_i z_{jk}$ in the second term can be linearized in a similar way. However, this entails introducing continuous variables indexed by $i$, $j$, and $k$. A lower-complexity linearization method is suggested in \cite{App-04-BLLP-MILP-Sim}. It re-orders the summations in the second term as
\begin{equation*}
\sum_j \sum_{k=0}^{K} {\rm \Delta}_j 2^k z_{jk} \sum_i A_{ij} u_i
\end{equation*}
which can be linearized through defining an auxiliary continuous variable $v_{jk} = z_{jk} \sum_i A_{ij} u_i$ and stipulating
\begin{equation*}
- M z_{jk} \le v_{jk} \le M z_{jk},~
- M (1-z_{jk}) \le \sum_i A_{ij} u_i - v_{jk} \le M (1-z_{jk})
\end{equation*}
where $M$ is a large enough constant.
The core idea behind this trick is to treat $u^T A$ as a whole vector of the same dimension as $x$: for the bilinear form $x^T v = \sum_i x_i v_i$ the summation is one-dimensional, while for $x^T Q v = \sum_{ij} Q_{ij} x_i v_j$ it is two-dimensional. This observation suggests matching vector dimensions when deploying such a linearization.
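The reordering can be sanity-checked numerically. The Python sketch below (illustrative $2 \times 2$ data, $K = 1$, $x^l = 0$) confirms that $\sum_j \sum_{k} {\rm \Delta}_j 2^k z_{jk} \sum_i A_{ij} u_i$ reproduces $u^T A x$ for every bit pattern; note the expansion reaches at most $x^m - {\rm \Delta}$, reflecting its approximate nature.

```python
# Binary expansion of x plus the reordered linearization of u^T A x.
# Illustrative data: x_l = 0, x_m = 3, K = 1 (4 discrete levels,
# step Delta = (x_m - x_l) / 2**(K+1) = 0.75), A is 2x2, u fixed.
from itertools import product

A = [[1.0, -2.0], [3.0, 0.5]]
u = [-1.0, -0.5]
K, x_l, x_m = 1, 0.0, 3.0
delta = (x_m - x_l) / 2 ** (K + 1)

s = [sum(u[i] * A[i][j] for i in range(2)) for j in range(2)]  # s_j = sum_i A_ij u_i

for bits in product((0, 1), repeat=2 * (K + 1)):   # z_{jk}: j = 0,1; k = 0,1
    z = [bits[:K + 1], bits[K + 1:]]
    x = [x_l + delta * sum(2 ** k * z[j][k] for k in range(K + 1))
         for j in range(2)]
    lhs = sum(u[i] * A[i][j] * x[j] for i in range(2) for j in range(2))
    rhs = sum(delta * 2 ** k * z[j][k] * s[j]
              for j in range(2) for k in range(K + 1))
    assert abs(lhs - rhs) < 1e-9
print("reordered summation matches u^T A x for all bit patterns")
```

In an MILP model, each product $z_{jk} s_j$ would then be replaced by a variable $v_{jk}$ with the big-M constraints stated above.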
{\noindent \bf 2. Linear max-min problem}
A linear max-min problem is a special case of the linear bilevel program, which can be written as
\begin{equation}
\label{eq:App-04-Linear-Max-Min-1}
\begin{aligned}
\max_x ~~ & c^T x + d^T y(x) \\
\mbox{s.t.} ~~& x \in X \\
& \begin{aligned}
y(x) \in \arg \min_y ~~ & c^T x + d^T y \\
\mbox{s.t.} ~~ & y \in Y,~ By \le b -Ax
\end{aligned}
\end{aligned}
\end{equation}
In problem (\ref{eq:App-04-Linear-Max-Min-1}), the follower seeks an objective that is completely opposite to that of the leader. This kind of problem frequently arises in robust optimization and has been discussed in Appendix \ref{App-C-Sect02-03} from the computational perspective. Here we revisit it from a game theoretical point of view.
Problem (\ref{eq:App-04-Linear-Max-Min-1}) can be expressed as a two-person zero-sum game
\begin{equation}
\label{eq:App-04-Linear-Max-Min-2}
\max_{x \in X} \min_{y \in Y} \left\{ c^T x + d^T y ~\middle|~
Ax + By \le b \right\} \\
\end{equation}
However, the coupled constraints make it different from a saddle point problem in the sense of a Nash game or a matrix game; indeed, it is a Stackelberg game. Let us investigate the interchangeability of the max and min operators (i.e., the decision sequence). We have already shown in Appendix \ref{App-D-Sect01-04} that swapping the order of the max and min operators in a two-person zero-sum matrix game does not influence the equilibrium. However, this is not the case for (\ref{eq:App-04-Linear-Max-Min-2}) \cite{App-04-Linear-max-min}, because
\begin{equation}
\begin{aligned}
& \max_{x \in X} \min_{y \in Y} \{ c^T x + d^T y ~|~ Ax + By \le b \} \\
=& \max_{x \in X} \left\{ c^T x + \min_{y \in Y} \{d^T y ~|~
By \le b - Ax\} \right\} \\
\ge & \max_{x \in X} \left\{ c^T x + \min_{y \in Y}~ d^T y \right\} \\
=& \max_{x \in X} ~ c^T x + \min_{y \in Y} ~ d^T y \\
=& \min_{y \in Y} \left\{ d^T y + \max_{x \in X} ~ c^T x \right\} \\
\ge & \min_{y \in Y} \left\{ d^T y + \max_{x \in X} \{ c^T x ~|~
Ax \le b - By \} \right\} \\
=& \min_{y \in Y} \max_{x \in X} \{ c^T x + d^T y ~|~ Ax + By \le b \}
\end{aligned} \notag
\end{equation}
In fact, strict inequality usually holds in the third and sixth lines. This result implies that, owing to the strategy coupling, the leader enjoys a superior position, in contrast to a Nash game, where the players are on an equal footing.
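This asymmetry can be observed on a one-dimensional toy instance with $c = 1$, $d = -1$, $X = Y = [0,1]$, and coupling constraint $x + y \le 1$ (illustrative data, brute-forced on a grid in Python): the max-min value is $1$ while the min-max value is $-1$.

```python
# With coupled constraints the leader's advantage is real: on this
# toy instance (c = 1, d = -1, X = Y = [0, 1], coupling x + y <= 1)
# the max-min value strictly exceeds the min-max value.
grid = [k / 100.0 for k in range(101)]         # [0, 1] in steps of 0.01

def obj(x, y):
    return x - y                                # c^T x + d^T y

# leader moves first: max over x of min over feasible y
maxmin = max(min(obj(x, y) for y in grid if x + y <= 1.0 + 1e-12)
             for x in grid)
# follower moves first: min over y of max over feasible x
minmax = min(max(obj(x, y) for x in grid if x + y <= 1.0 + 1e-12)
             for y in grid)

assert abs(maxmin - 1.0) < 1e-9 and abs(minmax + 1.0) < 1e-9
assert maxmin > minmax                          # leader is better off
print(f"max-min = {maxmin}, min-max = {minmax}")
```

Playing first, the leader can take $x = 1$ and squeeze the follower's feasible set to $y = 0$; playing second, it is the leader's own set that gets squeezed.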
To solve the linear max-min problem (\ref{eq:App-04-Linear-Max-Min-2}), the aforementioned MILP transformation for general linear bilevel programs certainly provides one possible means. Nevertheless, the special structure of (\ref{eq:App-04-Linear-Max-Min-2}) allows several alternatives that are more specialized and effective. To this end, we transform it into an equivalent optimization problem using LP duality theory. For ease of notation, we merge the polytope $Y$ into the coupled constraint; the dual of the lower-level LP in (\ref{eq:App-04-Linear-Max-Min-1}) (or the inner LP in (\ref{eq:App-04-Linear-Max-Min-2})) reads
\begin{equation}
\max_u ~ \{ u^T (b - Ax) ~|~ u \in U \} \notag
\end{equation}
where $U = \{u ~|~ B^T u = d,~ u \le 0\}$ is the feasible region of dual variable $u$. As strong duality always holds for LPs, we have $d^T y = u^T (b-Ax)$. Substituting it into (\ref{eq:App-04-Linear-Max-Min-1}) we obtain
\begin{equation}
\label{eq:App-04-Linear-Max-Min-Bilinear}
\begin{aligned}
\max ~~ & c^T x + u^T b - u^T A x \\
\mbox{s.t.} ~~ & x \in X,~ u \in U
\end{aligned}
\end{equation}
Problem (\ref{eq:App-04-Linear-Max-Min-Bilinear}) is a bilinear program due to the product term $u^T A x$ in variables $u$ and $x$. Several methods for solving such a problem locally or globally have been set forth in Appendix \ref{App-C-Sect02-03}, as a fundamental methodology in robust optimization. Although variable $y$ of the follower does not appear in (\ref{eq:App-04-Linear-Max-Min-Bilinear}), it can be easily recovered from the lower level of (\ref{eq:App-04-Linear-Max-Min-1}) with the obtained leader's strategy $x$.
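The equivalence can be checked on a small instance. The Python sketch below (illustrative one-dimensional data; grids replace the LPs a real implementation would solve) brute-forces the max-min value and the bilinear reformulation and confirms they coincide.

```python
# Duality-based reformulation of a toy linear max-min problem:
#   max_{x in [0,1]}  x + min_y { 2y : y >= 1 - x, y >= 0 }
# The inner LP's dual feasible set is U = { u <= 0 : u1 + u2 = -2 },
# with dual objective u1*(x - 1); the bilinear program attains the
# same optimal value as the original max-min problem.
grid01 = [k / 100.0 for k in range(101)]

def inner_value(x):
    """Primal inner LP by brute force: min 2y over a feasible y grid."""
    ys = [k / 100.0 for k in range(201)]       # y in [0, 2]
    return min(2.0 * y for y in ys if y >= 1.0 - x - 1e-12)

maxmin = max(x + inner_value(x) for x in grid01)

# Bilinear reformulation: dual value is u1*(x - 1) with u1 in [-2, 0]
u_grid = [-k / 50.0 for k in range(101)]       # u1 in [-2, 0]
bilinear = max(x + u1 * (x - 1.0) for x in grid01 for u1 in u_grid)

assert abs(maxmin - 2.0) < 1e-6 and abs(bilinear - 2.0) < 1e-6
print(f"max-min value = {maxmin}, bilinear reformulation = {bilinear}")
```

Both formulations attain their optimum at $x = 0$ with dual weight $u_1 = -2$ on the coupling constraint, matching $y^* = 1 - x$ in the primal.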
{\noindent \bf 3. A retail market problem}
In a retail market, a retailer releases the prices of some goods; according to the retail prices, the customer decides on the optimal purchasing strategy subject to the demand for each good as well as production constraints; finally, the retailer produces or trades with a higher-level market to manage the inventory, and delivers the goods to customers. This retail market can be modeled through a bilevel program. In the upper level
\begin{subequations}
\label{eq:App-04-RMarket-Retailer}
\begin{align}
\max_{x,z} ~~ & x^T D_C y(x) - p^T D_M z \label{eq:App-04-RMarket-UP-1}\\
\mbox{s.t.}~~ & A x \le a \label{eq:App-04-RMarket-UP-2} \\
& B_1 y(x) + B_2 z \le b \label{eq:App-04-RMarket-UP-3}
\end{align}
\end{subequations}
(\ref{eq:App-04-RMarket-UP-1})-(\ref{eq:App-04-RMarket-UP-3}) form the retailer's problem, where vector $x$ denotes the prices of goods released by the retailer; vector $y(x)$ stands for the amounts of goods purchased by the customer, which is determined from an optimal production planning problem; $p$ is the production cost or the price in the higher-level market; $z$ represents the production/purchase strategy of the retailer. Other matrices and vectors are constant coefficients. The first term in objective function (\ref{eq:App-04-RMarket-UP-1}) is the income paid by the customer, and the second term is the cost borne by the retailer. The objective function is the total profit to be maximized. Because there is no competition and the retailer has full market power, to avoid unfair retail prices we assume that both sides have reached certain agreements on the pricing policy, which are modeled through constraint (\ref{eq:App-04-RMarket-UP-2}). It includes simple lower and upper bounds as well as other bilateral contracts, such as restrictions on the average price over a certain period or on the price correlation among multiple goods. The inventory dynamics and other technical constraints are captured by constraint (\ref{eq:App-04-RMarket-UP-3}).
Given the retail prices, customers solve the optimal production planning problem in the lower level
\begin{equation}
\label{eq:App-04-RMarket-Customer}
\begin{aligned}
\min_y ~~ & x^T D_C y \\
\mbox{s.t.}~~ & F y \ge f
\end{aligned}
\end{equation}
and determine the optimal purchasing strategy. The objective function in (\ref{eq:App-04-RMarket-Customer}) is the total cost of customers, where the price vector $x$ is a constant coefficient; the constraints capture the demands and all other technical requirements of the production process.
Bilevel program (\ref{eq:App-04-RMarket-Retailer})-(\ref{eq:App-04-RMarket-Customer}) is not linear, although (\ref{eq:App-04-RMarket-Customer}) itself is an LP, because of the bilinear term $x^T D_C y$ in (\ref{eq:App-04-RMarket-UP-1}), where both $x$ and $y$ are variables (the retailer controls $y$ indirectly through prices). The KKT condition of LP (\ref{eq:App-04-RMarket-Customer}) reads
\begin{equation}
\begin{gathered}
D^T_C x = F^T u \\
0 \le u \bot Fy - f \ge 0
\end{gathered} \notag
\end{equation}
where $u$ is the dual variable. The complementarity constraints can be linearized via binary variables, as clarified in Appendix \ref{App-B-Sect03-05}. Furthermore, strong duality gives
\begin{equation}
x^T D_C y = f^T u \notag
\end{equation}
The right-hand side is linear in $u$. Therefore, problem (\ref{eq:App-04-RMarket-Retailer})-(\ref{eq:App-04-RMarket-Customer}) and the following MILP
\begin{equation}
\label{eq:App-04-RMarket-MILP}
\begin{aligned}
\max_{x,y,u,v,z} ~~ & f^T u - p^T D_M z \\
\mbox{s.t.} ~~ & A x \le a,~ B_1 y + B_2 z \le b \\
& v \in \mathbb B^{N_f},~ D^T_C x = F^T u \\
& 0 \le u \le M(1-v) \\
& 0 \le Fy - f \le Mv
\end{aligned}
\end{equation}
have the same optimal solution in primal variables, where $N_f$ is the dimension of $f$.
We can learn from this example that when the problem exhibits a certain structure, the non-convexity can be eliminated without introducing additional complexity. In problem (\ref{eq:App-04-RMarket-Retailer}), the price is a primal variable quoted by a decision maker, and is equal to the knock-down price. This scheme is called pay-as-bid. Next, we give an example of a marginal-pricing market, where the price is determined by the dual variables of a market clearing problem.
{\noindent \bf 4. A wholesale market problem}
In a wholesale market, a provider bids its offering prices to a market organizer. The organizer collects information on available resources and the bidding of the provider, and then clears the market by scheduling the production in the most economic way. The provider is paid at the marginal cost. This problem can be modeled by a bilevel program
\begin{equation}
\label{eq:App-04-Pool-Market-Provider}
\max_{\beta} ~ \lambda(\beta)^T p(\beta) - f(p(\beta))
\end{equation}
where $\beta$ is the offering price vector of the provider, $p(\beta)$ is the quantity of goods ordered by the market organizer, $f(p) = \sum_i f_i(p_i)$, where each $f_i(p_i)$ is a univariate convex function representing the production cost, and $\lambda(\beta)$ is the vector of marginal prices of the goods. Both $p(\beta)$ and $\lambda(\beta)$ depend on the value of $\beta$, and are determined from the market clearing problem in the lower level
\begin{subequations}
\label{eq:App-04-Pool-Market-MC}
\begin{align}
\min_{p,u} ~~ & \beta^T p + c^T u \label{eq:App-04-Pool-Market-MC-Obj} \\
\mbox{s.t.}~~ & p_n \le p \le p_m: \eta_n, \eta_m \label{eq:App-04-Pool-Market-Cons-1}\\
& p + F u = d: \lambda \label{eq:App-04-Pool-Market-Cons-2} \\
& A u \le a: \xi \label{eq:App-04-Pool-Market-Cons-3}
\end{align}
\end{subequations}
where $u$ includes all other variables, such as the amounts of goods collected from other providers or produced locally, the system operating variables, and so on; $c$ is the corresponding cost coefficient, including the prices of goods offered by other providers and the production cost if the organizer produces the goods itself. Objective function (\ref{eq:App-04-Pool-Market-MC-Obj}) represents the total cost in the market to be minimized. Constraint (\ref{eq:App-04-Pool-Market-Cons-1}) defines the offering limits of the upper-level provider; constraint (\ref{eq:App-04-Pool-Market-Cons-2}) is the system-wide production-demand balancing condition for each good, whose dual variable $\lambda$ at the optimal solution gives the marginal cost of each good; (\ref{eq:App-04-Pool-Market-Cons-3}) imposes constraints which the system operation must obey, such as network flow limits and inventory dynamics.
In the provider's problem (\ref{eq:App-04-Pool-Market-Provider}), the offering price $\beta$ is not restricted by a finite upper bound or by pricing policies (although such policies can certainly be modeled), because the competition appears in the lower level: if $\beta$ is unreasonable, the market organizer would resort to other providers or count on its own production capability.
Compared with the situation in a retail market, problem (\ref{eq:App-04-Pool-Market-Provider})-(\ref{eq:App-04-Pool-Market-MC}) is even more complicated: the dual variable $\lambda$ appears in the objective function of the provider, and the term $\lambda^T p$ is non-convex. In the following, we show that it can be exactly expressed as a linear function of the primal and dual variables via (somewhat tricky) algebraic transformations.
KKT conditions of the market clearing LP (\ref{eq:App-04-Pool-Market-MC}) are summarized as follows
\begin{subequations}
\label{eq:App-04-Pool-Market-MC-KKT}
\begin{gather}
\beta = \lambda + \eta_n + \eta_m
\label{eq:App-04-Pool-Market-MC-KKT-1} \\
\eta_n^T (p-p_n) = 0
\label{eq:App-04-Pool-Market-MC-KKT-2} \\
\eta_m^T (p_m-p) = 0
\label{eq:App-04-Pool-Market-MC-KKT-3} \\
c= A^T \xi + F^T \lambda
\label{eq:App-04-Pool-Market-MC-KKT-4} \\
\xi^T (Au - a) = 0
\label{eq:App-04-Pool-Market-MC-KKT-5} \\
\eta_n \ge 0,~ \eta_m \le 0,~ \xi \le 0
\label{eq:App-04-Pool-Market-MC-KKT-6} \\
(\ref{eq:App-04-Pool-Market-Cons-1})-(\ref{eq:App-04-Pool-Market-Cons-3})
\label{eq:App-04-Pool-Market-MC-KKT-7}
\end{gather}
\end{subequations}
According to (\ref{eq:App-04-Pool-Market-MC-KKT-1}),
\begin{subequations}
\begin{equation}
\label{eq:App-04-Pool-Market-MILP-1}
\beta^T p = \lambda^T p + \eta^T_n p + \eta^T_m p
\end{equation}
From (\ref{eq:App-04-Pool-Market-MC-KKT-2}) and (\ref{eq:App-04-Pool-Market-MC-KKT-3}) we have
\begin{equation}
\label{eq:App-04-Pool-Market-MILP-2}
\eta_n^T p = \eta^T_n p_n,~ \eta_m^T p = \eta^T_m p_m
\end{equation}
Substituting (\ref{eq:App-04-Pool-Market-MILP-2}) in (\ref{eq:App-04-Pool-Market-MILP-1}) renders
\begin{equation}
\label{eq:App-04-Pool-Market-MILP-3}
\lambda^T p = \beta^T p - \eta^T_n p_n - \eta^T_m p_m
\end{equation}
Furthermore, strong duality of LP implies the following equality
\begin{equation}
\beta^T p + c^T u = \eta^T_n p_n + \eta^T_m p_m + d^T \lambda + a^T \xi \notag
\end{equation}
or
\begin{equation}
\label{eq:App-04-Pool-Market-MILP-4}
\beta^T p - \eta^T_n p_n - \eta^T_m p_m = d^T \lambda + a^T \xi - c^T u
\end{equation}
Substituting (\ref{eq:App-04-Pool-Market-MILP-4}) in (\ref{eq:App-04-Pool-Market-MILP-3}) results in
\begin{equation}
\label{eq:App-04-Pool-Market-MILP-5}
\lambda^T p = d^T \lambda + a^T \xi - c^T u
\end{equation}
The right-hand side is a linear expression for $\lambda^T p$ in primal variable $u$ and dual variables $\lambda$ and $\xi$.
\end{subequations}
Combining the KKT condition (\ref{eq:App-04-Pool-Market-MC-KKT}) and (\ref{eq:App-04-Pool-Market-MILP-5}) gives an MPCC which is equivalent to the bilevel wholesale market problem (\ref{eq:App-04-Pool-Market-Provider})-(\ref{eq:App-04-Pool-Market-MC})
\begin{equation}
\label{eq:App-04-Pool-Market-MPCC}
\begin{aligned}
\max ~~ & d^T \lambda + a^T \xi - c^T u - f(p(\beta)) \\
\mbox{s.t.} ~~ & (\ref{eq:App-04-Pool-Market-MC-KKT-1})-(\ref{eq:App-04-Pool-Market-MC-KKT-7})
\end{aligned}
\end{equation}
Because complementarity conditions (\ref{eq:App-04-Pool-Market-MC-KKT-2}), (\ref{eq:App-04-Pool-Market-MC-KKT-3}), (\ref{eq:App-04-Pool-Market-MC-KKT-5}) can be linearized, and convex function $f(p)$ can be approximated by PWL functions, MPCC (\ref{eq:App-04-Pool-Market-MPCC}) can be recast as an MILP.
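The chain of substitutions can be verified numerically on a one-good toy market (illustrative data; the optimal primal and dual values below are derived by hand rather than by an LP solver):

```python
# Numeric check of the identity λ^T p = d^T λ + a^T ξ - c^T u on a
# one-good toy market: min β*p + c*u subject to
#   0 <= p <= 2 (duals η_n, η_m),  p + u = 5 (dual λ),  -u <= 0 (dual ξ).
beta, c = 1.0, 3.0
p_n, p_m, d, a = 0.0, 2.0, 5.0, 0.0

# Hand-derived optimal primal/dual solution of the clearing LP:
p, u = 2.0, 3.0                   # cheap capacity at its bound, rest from u
lam, eta_n, eta_m, xi = 3.0, 0.0, -2.0, 0.0

# KKT conditions: stationarity, complementarity, dual signs
assert beta == lam + eta_n + eta_m
assert c == -xi + lam                          # c = A^T ξ + F^T λ, A = -1, F = 1
assert eta_n * (p - p_n) == 0 and eta_m * (p_m - p) == 0
assert xi * (-u - a) == 0
assert eta_n >= 0 and eta_m <= 0 and xi <= 0

# The linearizing identity for the non-convex revenue term λ^T p
assert lam * p == d * lam + a * xi - c * u     # 6 == 15 + 0 - 9
print("λ·p =", lam * p)
```

Here the provider's capacity is fully dispatched, the marginal unit is the organizer's own production at cost $c$, so $\lambda = c$, and both sides of (\ref{eq:App-04-Pool-Market-MILP-5}) evaluate to $6$.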
\subsection{Bilevel Mixed-integer Program}
Although LP can model many economic problems and market activities in real life, there are many decision-making problems beyond the reach of LP, for example, power market clearing considering unit commitment \cite{App-04-BiMIP-TEP}. Neither the KKT optimality conditions nor strong duality from LP theory applies to discrete optimization problems, owing to their intrinsic non-convexity. Furthermore, there is no computationally viable approach to express the optimality condition of a general discrete program in closed form, making a bilevel mixed-integer program much more challenging to solve than a bilevel linear program. Traditional algorithms either rely on enumerative branch-and-bound strategies based on a weak relaxation or depend on complicated problem-specific operations. To our knowledge, the reformulation and decomposition algorithm proposed in \cite{App-04-BiMIP-Zeng} is the first approach that can solve general bilevel mixed-integer programs in a systematic way, and it is introduced in this section.
The bilevel mixed-integer program has the following form
\begin{equation}
\label{eq:App-04-BiMIP-Comp-1}
\begin{aligned}
\min ~~ & f^T x + g^T y + h^T z \\
\mbox{s.t.} ~~ & Ax \le b,~ x \in \mathbb R^{m_c} \times \mathbb B^{m_d}\\
& (y,z) \in \arg \max~~ w^T y + v^T z \\
& \qquad \qquad \quad \mbox{s.t.}~~ P y + Nz \le r -K x \\
& \qquad \qquad \qquad ~~~ y \in \mathbb R^{n_c},~ z \in \mathbb B^{n_d}
\end{aligned}
\end{equation}
where $x$ is the upper-level decision variable and appears in the constraints of the lower-level problem; $y$ and $z$ represent the lower-level continuous and discrete decision variables, respectively. We do not distinguish upper-level continuous and discrete variables because the distinction has little impact on the exposition of the algorithm, unlike in the lower level. If the lower level has multiple optimal solutions, the follower is assumed to choose the one in favor of the leader. In the current form, the upper-level constraints are independent of lower-level variables; nevertheless, coupling constraints in the upper level can be easily incorporated \cite{App-04-BiMIP-Yue}.
In this section, we assume that the relatively complete recourse property in \cite{App-04-BiMIP-Zeng} holds, i.e., for any feasible pair $(x, z)$, the feasible set for lower-level continuous variable $y$ is non-empty. Under this premise, the optimal solution exists. This assumption is mild because we can add slack variables in the lower-level constraints and penalize constraint violation in the lower-level objective function. For instances in which the relatively complete recourse property is missing, please refer to the remedy in \cite{App-04-BiMIP-Yue}.
To eliminate the $\in$ qualifier in (\ref{eq:App-04-BiMIP-Comp-1}), we duplicate the decision variables and constraints of the lower-level problem and set up an equivalent formulation:
\begin{equation}
\label{eq:App-04-BiMIP-Comp-2}
\begin{aligned}
\min ~~ & f^T x + g^T y^0 + h^T z^0 \\
\mbox{s.t.} ~~ & Ax \le b,~ x \in \mathbb R^{m_c} \times \mathbb B^{m_d}\\
& K x + P y^0 + N z^0 \le r \\
& w^T y^0 + v^T z^0 \ge \max~~ w^T y + v^T z \\
& \qquad \qquad \qquad ~~~ \mbox{s.t.}~~ P y + Nz \le r -K x \\
& \qquad \qquad \qquad \qquad ~~ y \in \mathbb R^{n_c},~ z \in \mathbb B^{n_d}
\end{aligned}
\end{equation}
In this formulation, the leader controls its original variable $x$ as well as the replicated variables $y^0$ and $z^0$. Conceptually, the leader uses $(y^0, z^0)$ to anticipate the follower's response and its impact on the leader's objective function. Clearly, if the lower-level problem has a unique optimal solution, it must equal $(y^0, z^0)$. It is worth mentioning that although more variables and constraints appear in (\ref{eq:App-04-BiMIP-Comp-2}), this formulation is an informative and convenient expression for algorithm development, as the $\ge$ constraint is friendlier to general-purpose mathematical programming solvers.
Up to now, the main obstacle to solving (\ref{eq:App-04-BiMIP-Comp-2}) remains: the discrete variable $z$ in the lower level, which prevents the use of LP optimality conditions. To overcome this difficulty, we treat $y$ and $z$ separately and restructure the lower-level problem as:
\begin{equation}
\label{eq:App-04-BiMIP-Obj-L}
w^T y^0 + v^T z^0 \ge \max_{z \in Z}~ v^T z + \max_y \{w^T y |Py \le r-Kx-Nz\}
\end{equation}
where $Z$ represents the set of all possible values of $z$. Despite the large cardinality of $Z$, the second maximization is a pure LP, and can be replaced with its KKT condition, resulting in:
\begin{equation}
\label{eq:App-04-BiMIP-L-LP-KKT}
\begin{aligned}
w^T y^0 + v^T z^0 & \ge \max_{z \in Z}~ v^T z + w^T y \\
& \qquad \mbox{s.t.}~ P^T \pi = w \\
& \qquad \quad ~~ 0 \le \pi \bot r-Kx-Nz-Py \ge 0
\end{aligned}
\end{equation}
The complementarity constraints can be linearized via the method in Appendix \ref{App-B-Sect03-05}. Then, by enumerating $z^j$ over $Z$ with associated variables $(y^j,\pi^j)$, we arrive at an MPCC that is equivalent to problem (\ref{eq:App-04-BiMIP-Comp-2})
\begin{equation}
\label{eq:App-04-BiMIP-MPCC-Full}
\begin{aligned}
\min ~~ & f^T x + g^T y^0 + h^T z^0 \\
\mbox{s.t.} ~~ & Ax \le b,~ x \in \mathbb R^{m_c} \times \mathbb B^{m_d}\\
& K x + P y^0 + N z^0 \le r,~ P^T \pi^j = w,~ \forall j \\
& 0 \le \pi^j \bot r - K x - N z^j - P y^j \ge 0,~ \forall j \\
& w^T y^0 + v^T z^0 \ge w^T y^j + v^T z^j,~ \forall j
\end{aligned}
\end{equation}
After this linearization, (\ref{eq:App-04-BiMIP-MPCC-Full}) is compatible with MILP solvers.
Besides the KKT optimality condition, another popular approach applies the primal-dual optimality condition to the LP in the lower-level continuous variable $y$. Following this line and rewriting the LP in (\ref{eq:App-04-BiMIP-Obj-L}) via strong duality, we obtain
\begin{equation}
\label{eq:App-04-BiMIP-L-LP-PD}
\begin{aligned}
w^T y^0 + v^T z^0 & \ge \max_{z \in Z}~ v^T z + \min ~ \pi^T ( r-Kx-Nz) \\
& \qquad \qquad \qquad ~ \mbox{s.t.}~ P^T \pi = w,~ \pi \ge 0
\end{aligned}
\end{equation}
In (\ref{eq:App-04-BiMIP-L-LP-PD}), if all variables in $x$ are binary, the bilinear terms $\pi^T K x$ and $\pi^T N z$ can be linearized via the method in Appendix \ref{App-B-Sect02-02}. The min operator on the right-hand side can be omitted because the upper-level objective function is to be minimized, giving rise to
\begin{equation}
\label{eq:App-04-BiMIP-BLIP-Full}
\begin{aligned}
\min ~~ & f^T x + g^T y^0 + h^T z^0 \\
\mbox{s.t.} ~~ & Ax \le b,~ x \in \mathbb R^{m_c} \times \mathbb B^{m_d}\\
& K x + P y^0 + N z^0 \le r \\
& w^T y^0 + v^T z^0 \ge v^T z^j + ( r-Kx-Nz)^T \pi^j,~ \forall j \\
& P^T \pi^j = w,~ \pi^j \ge 0,~ \forall j
\end{aligned}
\end{equation}
Clearly, (\ref{eq:App-04-BiMIP-BLIP-Full}) has fewer constraints than (\ref{eq:App-04-BiMIP-MPCC-Full}). Nevertheless, whenever $x$ contains continuous variables, linearizing $\pi^T K x$ would require additional binary variables.
One might think it hopeless to solve the enumeration forms (\ref{eq:App-04-BiMIP-MPCC-Full}) and (\ref{eq:App-04-BiMIP-BLIP-Full}) due to the large cardinality of $Z$. In a way similar to the CCG algorithm for solving robust optimization, we can start with a subset of $Z$ and solve a relaxed version of problem (\ref{eq:App-04-BiMIP-MPCC-Full}), until the lower and upper bounds of the optimal value converge. The procedure is summarized in Algorithm \ref{Ag:App-04-BiMIP-CCG}.
\begin{algorithm}[!t]
\normalsize
\caption{\bf : CCG algorithm for bilevel MILP}
\begin{algorithmic}[1]
\STATE Set LB $=-\infty$, UB $= +\infty$, and $l = 0$;
\STATE Solve the following master problem
\begin{equation}
\label{eq:App-04-BiMIP-CCG-Master}
\begin{aligned}
\min ~~ & f^T x + g^T y^0 + h^T z^0 \\
\mbox{s.t.} ~~ & Ax \le b,~ x \in \mathbb R^{m_c} \times \mathbb B^{m_d}\\
& K x + P y^0 + N z^0 \le r,~ P^T \pi^j = w,~ \forall j \le l \\
& 0 \le \pi^j \bot r - K x - N z^j - P y^j \ge 0,~ \forall j \le l \\
& w^T y^0 + v^T z^0 \ge w^T y^j + v^T z^j,~ \forall j \le l
\end{aligned}
\end{equation}
The optimal solution is $(x^*, y^{0*}, z^{0*}, y^{1*},\cdots,y^{l*}, \pi^{1*}, \cdots, \pi^{l*})$, and the optimal value is $v^*$. Update lower bound LB $=v^*$.
\STATE Solve the following lower-level MILP with obtained $x^*$
\begin{equation}
\label{eq:App-04-BiMIP-CCG-SP-1}
\begin{aligned}
\theta (x^*) = \max ~~ & w^T y + v^T z \\
\mbox{s.t.}~~ & P y + Nz \le r - K x^* \\
& y \in \mathbb R^{n_c},~ z \in \mathbb B^{n_d}
\end{aligned}
\end{equation}
The optimal value is $\theta (x^*)$.
\STATE Solve an additional MILP to refine a solution in favor of the leader
\begin{equation}
\label{eq:App-04-BiMIP-CCG-SP-2}
\begin{aligned}
{\rm \Theta} (x^*) = \min ~~ & g^T y + h^T z \\
\mbox{s.t.}~~ & w^T y + v^T z \ge \theta(x^*) \\
& P y + Nz \le r - K x^* \\
& y \in \mathbb R^{n_c},~ z \in \mathbb B^{n_d}
\end{aligned}
\end{equation}
The optimal solution is $(y^*,z^*)$, and the optimal value is ${\rm \Theta}(x^*)$. Update upper bound UB $= \min \{\mbox{UB}, f^T x^* + {\rm \Theta} (x^*)\}$.
\STATE If UB $-$ LB $=0$, terminate and report the optimal solution; otherwise, set $z^{l+1}=z^*$, create new variables $(y^{l+1}, \pi^{l+1})$, and add the following cuts to the master problem
\begin{equation*}
\begin{gathered}
w^T y^0 + v^T z^0 \ge w^T y^{l+1} + v^T z^{l+1} \\
0 \le \pi^{l+1} \bot r - K x - N z^{l+1} - P y^{l+1} \ge 0,~
P^T \pi^{l+1} = w
\end{gathered}
\end{equation*}
Update $l \leftarrow l+1$, and go to step 2.
\end{algorithmic}
\label{Ag:App-04-BiMIP-CCG}
\end{algorithm}
Because $Z$ has finitely many elements, Algorithm \ref{Ag:App-04-BiMIP-CCG} must terminate in a finite number of iterations, bounded by the cardinality of $Z$. When it converges, LB equals UB without a positive gap.
To see this, suppose that in iteration $l_1$, $(x^*, y^{0*}, z^{0*})$ is obtained in step 2 with LB $<$ UB, and $z^*$ is produced in step 4. In particular, assume that $z^*$ was previously derived in some iteration $l_0 < l_1$. Then, in step 5, new variables and cuts associated with $z^* = z^{l_1+1}$ will be generated and appended to the master problem. As those variables and constraints already exist after iteration $l_0$, the augmentation is essentially redundant, and the optimal value of the master problem in iteration $l_1+1$ remains the same as in iteration $l_1$; so does LB. Consequently, in iteration $l_1+1$
\begin{equation*}
\begin{aligned}
\mbox{LB} & = f^T x^* + g^T y^{0*} + h^T z^{0*} \\
& = f^T x^* + \min~ g^T y^0 + h^T z^0 \\
& \qquad \qquad ~~ \mbox{s.t.}~ P y^0 + N z^0 \le r - Kx^*,~ P^T \pi^j = w,~ \forall j \le l_1+1 \\
& \qquad \qquad \qquad 0 \le \pi^j \bot r - K x^* - N z^j - P y^j \ge 0,~ \forall j \le l_1+1 \\
& \qquad \qquad \qquad w^T y^0 + v^T z^0 \ge w^T y^j + v^T z^j,~ \forall j \le l_1+1 \\
& \ge f^T x^* + \min~ g^T y^0 + h^T z^0 \\
& \qquad \qquad ~~ \mbox{s.t.}~ P y^0 + N z^0 \le r - Kx^*,~ P^T \pi^j = w,~ j = l_1+1 \\
& \qquad \qquad \qquad 0 \le \pi^j \bot r - K x^* - N z^j - P y^j \ge 0,~ j = l_1+1 \\
& \qquad \qquad \qquad w^T y^0 + v^T z^0 \ge w^T y^j + v^T z^j,~ j = l_1+1\\
& \ge f^T x^* + \min~ g^T y^0 + h^T z^0 \\
& \qquad \qquad ~~ \mbox{s.t.}~ P y^0 + N z^0 \le r - Kx^* \\
& \qquad \qquad \qquad w^T y^0 + v^T z^0 \ge \theta(x^*) \\
& = f^T x^* + {\rm \Theta}(x^*)
\end{aligned}
\end{equation*}
The second $\ge$ follows from the fact that $z^{l_1+1}$ is the optimal solution to problem (\ref{eq:App-04-BiMIP-CCG-SP-1}), and the KKT condition in the constraints guarantees that $v^Tz^{l_1+1} + w^Ty^{l_1+1} = \theta(x^*)$. In the next iteration, the algorithm terminates since LB $\ge$ UB.
It should be pointed out that although a large number of variables and constraints are generated in step 5, in practice Algorithm \ref{Ag:App-04-BiMIP-CCG} often converges to an optimal solution within a small number of iterations, which can be drastically smaller than the cardinality of $Z$, because the most critical scenarios in $Z$ are discovered by problem (\ref{eq:App-04-BiMIP-CCG-SP-1}).
It is suggested in \cite{App-04-BiMIP-Zeng} that the master problem could be tightened by introducing variables $(\hat y, \hat \pi)$ representing the primal and dual variables of lower-level problem corresponding to $(x, z^0)$ and augmenting the following constraints
\begin{equation*}
\begin{gathered}
w^T y^0 + v^T z^0 \ge w^T \hat y + v^T z^0 \\
0 \le \hat \pi \bot r - K x - N z^0 - P \hat y \ge 0,~
P^T \hat \pi = w
\end{gathered}
\end{equation*}
It is believed that such constraints include useful information that is parametric not only in $x$ but also in $z^0$, and is not available from any fixed samples $z^1,\cdots,z^l$. It is also pointed out that for instances with pure integer variables in the lower-level problem, this strategy is generally ineffective.
\section{Mathematical Programs with Equilibrium Constraints}
\label{App-D-Sect04}
A mathematical program with equilibrium constraints (MPEC) is an extension of the bilevel program that incorporates multiple followers competing with each other, resulting in a GNEP in the lower level. In this regard, an MPEC is a single-leader-multi-follower Stackelberg game. In a broader sense, an MPEC is an optimization problem with variational inequality constraints. MPECs are difficult to solve because of the complementarity constraints.
\subsection{Mathematical Formulation}
\label{App-D-Sect04-01}
In an MPEC, the leader deploys its action $x$ prior to the followers; then each follower selects its optimal decision $y_j$ taking the decision of the leader $x$ and rivals' strategies $y_{-j}$ as given. The MPEC can be formulated in two levels:
\begin{subequations}
\label{eq:App-04-MPEC-1}
\begin{align}
\mbox{Leader:} \qquad & \left\{
\begin{aligned}
\min_{x,\bar y, \bar \lambda, \bar \mu} ~~ &
F(x,\bar y, \bar \lambda, \bar \mu) \\
\mbox{s.t.} ~~ & G(x,\bar y) \le 0 \\
& (\bar y, \bar \lambda, \bar \mu) \in S(x)
\end{aligned} \right. \label{eq:App-04-MPEC-Leader} \\
\mbox{Followers:} \qquad & \left\{
\begin{aligned}
\min_{y_j,\lambda_j, \mu_j} ~~ & f_j (x,y_j,y_{-j}) \\
\mbox{s.t.} ~~ & g_j (x,y_j) \le 0 : \mu_j \\
& h(x,y) \le 0 : \lambda_j
\end{aligned} \right\},~ \forall j \label{eq:App-04-MPEC-Followers}
\end{align}
\end{subequations}
In (\ref{eq:App-04-MPEC-Leader}), the leader minimizes its payoff function $F$, which depends on its own choice $x$, the decisions of the followers $y$, and the dual variables $\lambda$ and $\mu$ from the lower level, because these dual variables may represent the prices of goods determined by the lower-level market-clearing model. Constraints include inequalities and equalities (as a pair of opposite inequalities), as well as the optimality condition of the lower-level problem. In (\ref{eq:App-04-MPEC-Followers}), $x$ is treated as a parameter, and the competition among followers comes down to a GNEP with shared convex constraints: the payoff function $f_j(x,y_j,y_{-j})$ of follower $j$ is assumed to be convex in $y_j$; inequality $g_j(x,y_j) \le 0$ defines a local constraint of follower $j$ which is convex in $y_j$ and does not involve $y_{-j}$; inequality $h(x,y) \le 0$ is the shared constraint, which is convex in $y$. Since each follower's problem is convex, the KKT condition is both necessary and sufficient for optimality. We assume that the set of GNEP solutions $S(x)$ is always non-empty.
The GNEP encompasses several special cases in the lower level. If the global constraint is absent, it degenerates into an NEP; moreover, if the objective functions of followers are also decoupled, the lower level reduces to independent convex optimization programs.
By replacing the lower-level GNEP with its KKT condition (\ref{eq:App-04-GNEP-KKT}), the MPEC (\ref{eq:App-04-MPEC-1}) becomes an MPCC, which can be solved by the suitable methods explained before. As the lower-level GNEP usually possesses infinitely many equilibria, the outcome found by the MPCC reformulation is the one most favourable from the leader's perspective. We can also require that the Lagrange multipliers for the shared constraints be equal, so as to restrict the GNEP solutions to VEs. If the followers' problems are linear, the primal-dual optimality condition is an alternative to the KKT condition, as it often involves fewer constraints. Nevertheless, strong duality may introduce products of primal and dual variables, such as those in (\ref{eq:App-04-Linear-Bilevel-NLP}) and (\ref{eq:App-04-Linear-Bilevel-NLP-Pen}), which remain non-convex and require special treatment.
\subsection{Linear Complementarity Problem}
\label{App-D-Sect04-02}
A linear complementarity problem (LCP) requires finding a feasible solution subject to the following constraints
\begin{equation}
\label{eq:App-04-LCP-1}
0 \le x \bot P x + q \ge 0
\end{equation}
where $P$ is a square matrix; $q$ is a vector. Their dimensions are compatible with $x$.
The LCP is a special case of an MPCC without an objective function. This type of problem frequently arises in various disciplines including market equilibrium analysis, computational mechanics, game theory, and mathematical programming. The theory of LCPs is a well-developed field; detailed discussions can be found in \cite{App-04-LCP-Book}. In general, an LCP is NP-hard, although it is polynomially solvable in some special cases. One such case is when the matrix $P$ is positive semidefinite. In that case, problem (\ref{eq:App-04-LCP-1}) can be solved via the following convex quadratic program
\begin{equation}
\label{eq:App-04-LCP-CQP}
\begin{aligned}
\min~~ & x^T P x + q^T x \\
\mbox{s.t.}~~ & x \ge 0,~ P x + q \ge 0
\end{aligned}
\end{equation}
(\ref{eq:App-04-LCP-CQP}) is a CQP which is readily solvable. Its optimal value must be non-negative by the constraints. If the optimal value of (\ref{eq:App-04-LCP-CQP}) is 0, then its optimal solution also solves LCP (\ref{eq:App-04-LCP-1}); otherwise, if the optimal value is strictly positive, LCP (\ref{eq:App-04-LCP-1}) is infeasible. In fact, this conclusion holds regardless of whether $P$ is positive semidefinite. However, if $P$ is indefinite, identifying the global optimum of the non-convex QP (\ref{eq:App-04-LCP-CQP}) is itself NP-hard, and thus does not facilitate solving the LCP.
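As a minimal numerical sketch (with a hypothetical $2\times 2$ instance, and SciPy's general-purpose solver standing in for a dedicated QP code), the convex quadratic program above can be used to extract an LCP solution when $P$ is positive semidefinite:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative toy LCP  0 <= x  ⟂  Px + q >= 0  with P positive definite,
# solved via the convex QP  min x^T P x + q^T x  s.t.  x >= 0, Px + q >= 0.
P = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
q = np.array([-1.0, 1.0])

obj = lambda x: x @ (P @ x + q)
cons = [{'type': 'ineq', 'fun': lambda x: x},            # x >= 0
        {'type': 'ineq', 'fun': lambda x: P @ x + q}]    # Px + q >= 0
res = minimize(obj, np.ones(2), constraints=cons)
x = res.x

# An optimal value of (numerically) zero certifies an LCP solution,
# i.e. complementarity holds element-wise.
assert abs(obj(x)) < 1e-5
assert np.all(x >= -1e-5) and np.all(P @ x + q >= -1e-5)
```

An optimal value of zero certifies that the minimizer solves the LCP; a strictly positive optimal value would certify that the LCP is infeasible.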
There is a large body of literature on algorithms for solving LCPs. One of the most representative is Lemke's pivoting method developed in \cite{App-04-LCP-Lemke}; another emblematic one is the interior-point method proposed in \cite{App-04-LCP-IPO}. A drawback of the former is its exponential worst-case complexity, which makes it less efficient for large problems. The latter runs in polynomial time, but requires positive semidefiniteness of $P$, which is a strong assumption and limits its application. In this section, we do not present a comprehensive review of algorithms for LCPs. Instead, we introduce MILP formulations for problem (\ref{eq:App-04-LCP-1}) devised in \cite{App-04-LCP-MILP-1,App-04-LCP-MILP-2}. These make no reference to any special structure of the matrix $P$. More importantly, they offer a systematic way to access solutions of practical problems.
Recalling the MILP formulation techniques presented in Appendix \ref{App-B-Sect03-05}, it is easy to see that problem (\ref{eq:App-04-LCP-1}) can be equivalently expressed as linear constraints with an additional binary variable $z$ as follows
\begin{equation}
\label{eq:App-04-LCP-MILC}
0 \le x \le Mz,~ 0 \le P x + q \le M(1-z)
\end{equation}
Integrality of $z$ maintains the element-wise complementarity of $x$ and $Px+q$: at most one of $x_i$ and $(Px+q)_i$ can be strictly positive. Formulation (\ref{eq:App-04-LCP-MILC}) entails a manually specified parameter $M$, which is not readily available. On the one hand, it must be large enough to preserve all extreme points of (\ref{eq:App-04-LCP-1}). On the other hand, it should be as small as possible from a computational perspective; otherwise, the continuous relaxation of (\ref{eq:App-04-LCP-MILC}) would be very loose. In this regard, (\ref{eq:App-04-LCP-MILC}) is rather crude, although it may work well in practice.
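On a small instance, the big-M formulation (\ref{eq:App-04-LCP-MILC}) can be illustrated by enumerating the binary vector $z$ and checking each resulting LP for feasibility (a sketch only: in practice the MILP would be passed to a solver, and the choice $M=10$ is an assumption valid for this toy instance):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Illustrative sketch: enumerate z in the big-M reformulation
#   0 <= x <= Mz,   0 <= Px + q <= M(1-z)
# and test each fixed-z LP for feasibility.
P = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
M = 10.0                      # assumed big enough for this instance

solution = None
for z in itertools.product([0, 1], repeat=2):
    z = np.array(z, dtype=float)
    # Stack  x <= Mz,  -x <= 0,  Px <= M(1-z)-q,  -Px <= q  as A_ub v <= b_ub
    I = np.eye(2)
    A_ub = np.vstack([I, -I, P, -P])
    b_ub = np.concatenate([M * z, np.zeros(2), M * (1 - z) - q, q])
    res = linprog(np.zeros(2), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 2)
    if res.success:
        solution = res.x
        break

assert solution is not None
assert np.all(np.abs(solution * (P @ solution + q)) < 1e-6)  # complementarity
```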
To circumvent the above difficulty, it is proposed in \cite{App-04-LCP-MILP-1} to solve a bilinear program without a big-M parameter
\begin{equation}
\label{eq:App-04-LCP-BLP}
\begin{aligned}
\min_{x,z}~~ & z^T ( P x + q ) + ( {\bf 1} - z )^T x \\
\mbox{s.t.}~~ & x \ge 0,~ P x + q \ge 0,~ z \mbox{ binary}
\end{aligned}
\end{equation}
If (\ref{eq:App-04-LCP-1}) has a solution $x^*$, the optimal value of (\ref{eq:App-04-LCP-BLP}) is 0: for $x^*_i>0$, we have $z^*_i=1$ and $(P x^* + q)_i=0$; for $(P x^* + q)_i>0$, we have $z^*_i=0$ and $x^*_i=0$. The optimal solution is consistent with the feasible solution of (\ref{eq:App-04-LCP-MILC}). The objective can be linearized by introducing auxiliary variables $w_{ij}=z_i x_j$, $\forall i,j$. However, applying the standard integer formulation techniques of Appendix \ref{App-B-Sect02-02} to the variable $w_{ij}$ again requires an upper bound on $x_j$, another incarnation of the big-M parameter.
A parameter-free MILP formulation is suggested in \cite{App-04-LCP-MILP-1}. To understand the basic idea, recall that $(1-z_i)x_i=0$; if we impose $x_i=w_{ii}=x_iz_i$, $i=1,2,\cdots$ in the constraints, the term $({\bf 1} - z )^T x$ in the objective can be omitted. Furthermore, multiplying both sides of $\sum_j P_{kj} x_j + q_k \ge 0$, $k=1,2,\cdots$ by $z_i$ gives $\sum_j P_{kj} w_{ij} + q_k z_i\ge 0$, $\forall i,k$. Since $z_i \in \{0,1\}$, the inequalities $\sum_j P_{kj} x_j + q_k \ge \sum_j P_{kj} w_{ij} + q_k z_i$, $\forall i,k$ and $0 \le w_{ij} \le x_j$, $\forall i,j$ naturally hold. Collecting these valid inequalities, we obtain the MILP
\begin{equation}
\label{eq:App-04-LCP-MILP-1}
\begin{aligned}
\min_{x,z,w}~~ & q^T z + \sum_i \sum_j P_{ij} w_{ij} \\
\mbox{s.t.}~~ & \sum_j P_{kj} x_j + q_k \ge \sum_j P_{kj} w_{ij} + q_k z_i \ge 0,~ \forall i,k \\
& 0 \le w_{ij} \le x_j,~ \forall i,j,~ w_{jj} = x_j,~ \forall j, ~ z \mbox{ binary}
\end{aligned}
\end{equation}
Instead of enforcing every $\sum_j P_{kj} w_{ij} + q_k z_i$ to be 0, we relax them into inequalities and minimize their sum. More valid inequalities can be added to (\ref{eq:App-04-LCP-MILP-1}) by exploiting linear cuts of $z$. It is proved in \cite{App-04-LCP-MILP-1} that the relation $w_{ij}=z_i x_j$, $\forall i,j$ is implicitly guaranteed at the optimal solution of (\ref{eq:App-04-LCP-MILP-1}). In view of this, MILP (\ref{eq:App-04-LCP-MILP-1}) is equivalent to LCP (\ref{eq:App-04-LCP-1}) in the following sense: (\ref{eq:App-04-LCP-1}) has a solution if and only if (\ref{eq:App-04-LCP-MILP-1}) has an optimal value equal to zero, and an optimal solution of (\ref{eq:App-04-LCP-MILP-1}) incurring a zero objective value is a solution of LCP (\ref{eq:App-04-LCP-1}). MILP (\ref{eq:App-04-LCP-MILP-1}) is superior to (\ref{eq:App-04-LCP-MILC}) and to the big-M linearization of MINLP (\ref{eq:App-04-LCP-BLP}) because it is parameter-free and gives a tighter continuous relaxation. Nevertheless, the number of constraints in (\ref{eq:App-04-LCP-MILP-1}) is significantly larger than in formulation (\ref{eq:App-04-LCP-MILC}). This method has been further analyzed in \cite{App-04-LCP-MILP-2} and extended to binary-constrained mixed LCPs.
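To see formulation (\ref{eq:App-04-LCP-MILP-1}) in action on the same toy instance, the sketch below brute-forces the binary vector $z$ and solves the remaining LP in $(x,w)$ with SciPy (illustrative only; a MILP solver would handle $z$ directly):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy data; variables ordered as v = (x_0,...,x_{n-1}, w_00, w_01, ..., w_{n-1,n-1}).
P = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
n = 2
idx = lambda i, j: n + i * n + j      # position of w_ij after x

best = None
for z in itertools.product([0, 1], repeat=n):
    z = np.array(z, dtype=float)
    c = np.zeros(n + n * n)
    for i in range(n):
        for j in range(n):
            c[idx(i, j)] = P[i, j]    # objective: q^T z + sum_ij P_ij w_ij
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for i in range(n):
        for k in range(n):
            row = np.zeros(n + n * n)
            for j in range(n):
                row[idx(i, j)] = -P[k, j]          # sum_j P_kj w_ij >= -q_k z_i
            A_ub.append(row.copy()); b_ub.append(q[k] * z[i])
            row2 = -row
            for j in range(n):
                row2[j] -= P[k, j]                 # sum_j P_kj (w_ij - x_j) <= q_k(1-z_i)
            A_ub.append(row2); b_ub.append(q[k] * (1 - z[i]))
    for i in range(n):
        for j in range(n):
            row = np.zeros(n + n * n)
            row[idx(i, j)] = 1.0; row[j] = -1.0    # w_ij <= x_j
            A_ub.append(row); b_ub.append(0.0)
    for j in range(n):
        row = np.zeros(n + n * n)
        row[idx(j, j)] = 1.0; row[j] = -1.0        # w_jj = x_j
        A_eq.append(row); b_eq.append(0.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq))  # x, w >= 0 by default
    if res.success:
        val = res.fun + q @ z
        if best is None or val < best[0]:
            best = (val, res.x[:n])

val, x = best
assert abs(val) < 1e-7                 # zero objective <=> LCP solvable
assert np.all(np.abs(x * (P @ x + q)) < 1e-6)
```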
Another parameter-free MILP formulation is suggested in \cite{App-04-LCP-MILP-3}, which takes the form of
\begin{equation}
\label{eq:App-04-LCP-MILP-2}
\begin{aligned}
\max_{\alpha,y,z}~~ & \alpha \\
\mbox{s.t.}~~ & 0 \le (P y)_i + q_i \alpha \le 1-z_i,~\forall i \\
& 0 \le y_i \le z_i,~ z_i \in \{0,1\},~ \forall i \\
& 0 \le \alpha \le 1
\end{aligned}
\end{equation}
Since $\alpha =0$, $y=0$, $z=0$ is always feasible, MILP (\ref{eq:App-04-LCP-MILP-2}) is feasible and has an optimal value no greater than 1. By inspecting the constraints, we conclude that if MILP (\ref{eq:App-04-LCP-MILP-2}) has a feasible solution with $\bar \alpha > 0$, then $x=y/\bar \alpha$ solves problem (\ref{eq:App-04-LCP-1}). If the optimal value is $\bar \alpha= 0$, then problem (\ref{eq:App-04-LCP-1}) has no solution; otherwise, suppose $\bar x$ solves (\ref{eq:App-04-LCP-1}), and let $\bar \alpha^{-1} = \max \{\bar x_i, (P\bar x)_i+q_i,i=1,\cdots\}$; then for any $0 <\alpha \le \bar \alpha$, $\bar y = \alpha \bar x$ is feasible in (\ref{eq:App-04-LCP-MILP-2}), so the optimal value would be no less than $\bar \alpha$, rather than 0. Compared with formulation (\ref{eq:App-04-LCP-MILC}), the big-M parameter is adaptively scaled by optimizing $\alpha$.
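A sketch of formulation (\ref{eq:App-04-LCP-MILP-2}) on the same toy instance, again brute-forcing $z$ and solving the resulting LP in $(y,\alpha)$ (the data and the enumeration are assumptions for demonstration):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# For each binary z, maximize alpha subject to
#   0 <= Py + q*alpha <= 1-z,  0 <= y <= z,  0 <= alpha <= 1,
# then recover x = y/alpha when the best alpha is positive.
P = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
n = 2

best_alpha, best_y = 0.0, None
for z in itertools.product([0, 1], repeat=n):
    z = np.array(z, dtype=float)
    c = np.concatenate([np.zeros(n), [-1.0]])      # maximize alpha
    A_ub = np.vstack([np.hstack([P, q.reshape(-1, 1)]),
                      np.hstack([-P, -q.reshape(-1, 1)])])
    b_ub = np.concatenate([1 - z, np.zeros(n)])
    bounds = [(0.0, zi) for zi in z] + [(0.0, 1.0)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    if res.success and -res.fun > best_alpha:
        best_alpha, best_y = -res.fun, res.x[:n]

assert best_alpha > 0                   # a positive alpha certifies solvability
x = best_y / best_alpha
assert np.all(np.abs(x * (P @ x + q)) < 1e-6)
```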
Because (\ref{eq:App-04-LCP-MILP-2}) works with the intermediate variable $y$, formulation (\ref{eq:App-04-LCP-MILP-2}) is less advantageous when LCP (\ref{eq:App-04-LCP-1}) must be solved jointly with other conditions on $x$, since the non-convex variable transformation $x=y/\alpha$ must be appended to link the two parts.
Robust solutions of LCPs with uncertain $P$ and $q$ are discussed in \cite{App-04-Robust-LCP}. It is found that when $P \succeq 0$, robust solutions can be extracted from an SOCP under some mild assumptions on the uncertainty set; otherwise, the more general problem with uncertainty can be reduced to a deterministic non-convex QCQP. This technique is particularly useful in uncertain traffic equilibrium problems and uncertain Nash-Cournot games. Uncertain VI problems and MPCCs can be tackled in a similar vein after some proper transformations.
It is shown in \cite{App-04-LCP-BLP-MPEC} that a linear bilevel program or its equivalent MPEC can be globally solved via a sequential LCP method. A hybrid enumerative method is suggested which substantially reduces the effort for searching a solution of the LCP or certifying that the LCP has no solution. When the LCP is easy to solve, this approach is attractive.
Several extensions of the LCP, including the discretely-constrained mixed LCP, discretely-constrained Nash-Cournot game, discretely-constrained MPEC, and logic-constrained equilibrium problem, as well as their applications in energy markets and traffic system equilibrium, have been investigated in \cite{App-04-DM-LCP-1,App-04-DM-LCP-2,App-04-DM-LCP-3,App-04-DM-LCP-4}. In short, due to its wide applications, the LCP remains an active research field, and MILP remains an appealing method for solving LCPs arising from practical problems.
\subsection{Linear Programs with Complementarity Constraints}
A linear program with complementarity constraints (LPCC) entails solving a linear optimization problem with linear complementarity constraints. It is a special case of MPCC if all functions in the problem are linear, and a generalization of LCP by incorporating an objective function to be optimized. An LPCC has the following form
\begin{equation}
\label{eq:App-04-LPCC}
\begin{aligned}
\max_{x,y}~~ & c^T x + d^T y \\
\mbox{s.t.}~~ & Ax + By \ge f \\
& 0 \le y \bot q + N x + M y \ge 0
\end{aligned}
\end{equation}
A standard approach for solving (\ref{eq:App-04-LPCC}) is to linearize the complementarity constraint by introducing a binary vector $z$
and solve the following MILP
\begin{equation}
\label{eq:App-04-LPCC-MILP-BigM}
\begin{aligned}
\max_{x,y,z}~~ & c^T x + d^T y \\
\mbox{s.t.}~~ & Ax + By \ge f \\
& 0 \le q + N x + M y \le M z \\
& 0 \le y \le M(1-z) \\
& z \in \{0,1\}^m
\end{aligned}
\end{equation}
If both $x$ and $y$ are bounded variables, we can readily derive a proper value of $M$ for each inequality; otherwise, finding high-quality bounds is nontrivial even when they exist. The method in \cite{App-04-LPCC-BigM} can be used to determine proper bounds for $M$, provided the NLP solver can successfully find local solutions of the bounding problems.
Using an arbitrarily large value may solve the problem correctly. Nevertheless, parameter-free methods remain of great theoretical interest. A clever Benders decomposition algorithm is proposed in \cite{App-04-LPCC-Benders} to solve (\ref{eq:App-04-LPCC-MILP-BigM}) without requiring a value of $M$. The completely positive programming method developed in \cite{App-04-LPCC-CPP-Relax} can also be used to solve (\ref{eq:App-04-LPCC-MILP-BigM}). For more theory and algorithms for LPCCs, see \cite{App-04-LPCC-DCP}, \cite{App-04-LPCC-1}-\cite{App-04-LPCC-7} and references therein. Interesting connections among conic QPCCs, QCQPs, and completely positive programs are revealed in \cite{App-04-LPCC-8}.
\section{Equilibrium Programs with Equilibrium Constraints}
\label{App-D-Sect05}
An equilibrium program with equilibrium constraints (EPEC) is the most general extension of the bilevel program. It incorporates multiple leaders and multiple followers competing with each other in the upper level and the lower level, respectively, resulting in two GNEPs in both levels. In this regard, an EPEC is a multi-leader-follower Stackelberg game.
\subsection{Mathematical model}
\label{App-D-Sect05-01}
In an EPEC, each leader $i$ deploys an action $x_i$ prior to the followers while taking movements of other leaders $x_{-i}$ into account and anticipating the best responses $y(x)$ from the followers; then each follower selects its optimal decision $y_j$ by taking the strategies of leaders $x$ and rivals' actions $y_{-j}$ as given. The EPEC can be formulated in two levels
\begin{subequations}
\label{eq:App-04-EPEC-1}
\begin{align}
\mbox{Leaders:} \qquad & \left\{
\begin{aligned}
\min_{x_i,\bar y, \bar \lambda, \bar \mu} ~~ &
F_i (x_i,x_{-i},\bar y, \bar \lambda, \bar \mu) \\
\mbox{s.t.} ~~ & G_i (x_i) \le 0 \\
& (\bar y, \bar \lambda, \bar \mu) \in S(x_i,x_{-i})
\end{aligned} \right\},~\forall i
\label{eq:App-04-EPEC-Leaders} \\
\mbox{Followers:} \qquad & \left\{
\begin{aligned}
\min_{y_j} ~~ & f_j (x,y_j,y_{-j}) \\
\mbox{s.t.} ~~ & g_j (x,y_j) \le 0 : \mu_j \\
& h(x,y) \le 0 : \lambda_j
\end{aligned} \right\},~ \forall j
\label{eq:App-04-EPEC-Followers}
\end{align}
\end{subequations}
In (\ref{eq:App-04-EPEC-Leaders}), each leader minimizes its payoff function $F_i$, which depends on its own choice $x_i$, the decisions of the followers $y$, and the dual variables $\lambda$ and $\mu$, and is parameterized by the competitors' strategies $x_{-i}$. The tuple $(\bar y, \bar \lambda, \bar \mu)$ in the upper level is restricted by the optimality condition of the lower-level problem. Although the inequality constraints of the leaders are decoupled, and we do not explicitly consider global constraints in the upper-level GNEP, the leaders' strategy sets as well as their payoff functions are still correlated through the best reaction map $S(x_i,x_{-i})$; hence (\ref{eq:App-04-EPEC-Leaders}) is itself a GNEP, which is non-convex. The followers' problem (\ref{eq:App-04-EPEC-Followers}) is a GNEP with shared constraints, the same situation as in an MPEC, and the same convexity assumptions are made in (\ref{eq:App-04-EPEC-Followers}). The structure of EPEC (\ref{eq:App-04-EPEC-1}) is depicted in Fig. \ref{fig:App-04-04}. The equilibrium solution of EPEC (\ref{eq:App-04-EPEC-1}) is defined as the GNE among the leaders' MPECs. It is common knowledge that EPECs often have no pure-strategy equilibrium due to the intrinsic non-convexity of MPECs.
\begin{figure}
\caption{The structure of an EPEC.}
\label{fig:App-04-04}
\end{figure}
\subsection{Methods for Solving an EPEC}
\label{App-D-Sect05-02}
An EPEC can be viewed as a set of coupled MPEC problems: leader $i$ faces an MPEC composed of problem $i$ in (\ref{eq:App-04-EPEC-Leaders}) together with all followers' problems in (\ref{eq:App-04-EPEC-Followers}), parameterized in $x_{-i}$. Replacing the lower-level GNEP with its KKT optimality conditions, one can imagine that the GNEP among leaders has non-convex constraints that inherit the difficult properties of complementarity constraints. Thus, solving an EPEC is usually extremely challenging. To our knowledge, systematic algorithms for EPECs were first developed in the dissertations \cite{App-04-EPEC-Algorithm-1,App-04-EPEC-Algorithm-2,App-04-EPEC-Algorithm-3}. The primary application of such equilibrium models is in energy market problems; see \cite{App-04-EPEC-Algorithm-4} for an excellent introduction.
Unlike the NEP and GNEP discussed in Sect. \ref{App-D-Sect01} and Sect. \ref{App-D-Sect02}, where the strategy sets are convex or jointly convex, here the lower-level problems are replaced with KKT optimality conditions and each leader's MPEC is intrinsically non-convex, so provable existence and uniqueness guarantees for EPEC solutions are non-trivial. There have been sustained attempts to analyze EPEC solution properties. For example, the existence of a unique equilibrium for certain EPEC instances is discussed in \cite{App-04-EPEC-Solution-1,App-04-EPEC-Solution-2}, and in \cite{App-04-EPEC-Solution-3} for a nodal-price based power market model. However, existence and uniqueness of solutions are only guaranteed under restrictive conditions. Counterexamples have been given in \cite{App-04-EPEC-Solution-4} demonstrating that, owing to non-convexity, there is no general result for the existence of solutions to EPECs. The non-uniqueness issue is studied in \cite{App-04-EPEC-Algorithm-2,App-04-EPEC-Solution-5}. It is shown that even in the simplest instances, local uniqueness of the EPEC equilibrium solution may not be guaranteed, and a manifold of equilibria may exist. This is understandable because the EPEC generalizes the GNEP, whose solution properties are illustrated in Sect. \ref{App-D-Sect02-01}. When the payoff functions possess special structure, say, a potential function exists, the existence of a global equilibrium can be investigated using the theory of potential games \cite{App-04-EPEC-Solution-6,App-04-EPEC-Solution-7,App-04-EPEC-Potential-MPEC}. In summary, the theory of EPEC solutions is much more complicated than that of single-level NEPs and GNEPs.
This section reviews several representative algorithms that are widely used in the literature. The first two are generic and appear in \cite{App-04-EPEC-Algorithm-1,App-04-EPEC-Algorithm-2}; the third exploits the convenience afforded by potential games, as reported in \cite{App-04-EPEC-Potential-MPEC}; finally, a pricing game in a competitive market, which appears non-convex at first sight, is presented to show the hidden convexity in such a special equilibrium model.
{\noindent \bf 1. Best response algorithm}
Since the equilibrium of an EPEC is a GNEP among leaders' MPEC problems, the most intuitive strategy for identifying an equilibrium solution
is the best response algorithm. In some literature, it is also called the diagonalization method or the sequential MPEC method. This approach can be further categorized into Jacobian-type and Gauss-Seidel-type methods, according to the information used when players update their strategies.
To explain the algorithmic details, denote by MPEC($i$) the problem of leader $i$: the upper level is problem $i$ in (\ref{eq:App-04-EPEC-Leaders}), and the lower level is the GNEP described in (\ref{eq:App-04-EPEC-Followers}) given all leaders' strategies. Let $x^k_i$ be the strategy of leader $i$ in iteration $k$, and $x^k=(x^k_1,\cdots,x^k_m)$ the strategy profile of leaders. The Gauss-Seidel type algorithm proceeds as follows \cite{App-04-Complement-Book}:
\begin{algorithm}[H]
\normalsize
\caption{\bf : Best-response (Diagonalization) algorithm for EPEC}
\begin{algorithmic}[1]
\STATE Choose an initial strategy profile $x^0$ for leaders, set convergence tolerance $\varepsilon>0$, an allowed number of iterations $K$, and the iteration index $k=0$;
\STATE Let $x^{k+1}=x^k$. Loop for players $i=1,\cdots,m$:
\begin{enumerate}
\item[a.] Solve MPEC($i$) for leader $i$ given $x^{k+1}_{-i}$.
\item[b.] Replace $x^{k+1}_i$ with the optimal strategy of leader $i$ just obtained.
\end{enumerate}
\STATE If $\| x^{k+1} - x^k \|_2 \le \varepsilon$, the upper level converges; solve lower-level GNEP (\ref{eq:App-04-EPEC-Followers}) with $x^*=x^{k+1}$ using the algorithms elaborated in Sect. \ref{App-D-Sect02-02}, and the equilibrium among followers is $y^*$. Report $(x^*,y^*)$ and terminate.
\STATE If $k = K$, report failure of convergence and quit.
\STATE Update $k \leftarrow k+1$, and go to step 2.
\end{algorithmic}
\label{Ag:App-04-EPEC-Diagonalization}
\end{algorithm}
Without an executable criterion for judging existence and uniqueness of a solution, the possible outcomes of Algorithm \ref{Ag:App-04-EPEC-Diagonalization} fall into three situations.
1. There is no equilibrium. Algorithm \ref{Ag:App-04-EPEC-Diagonalization} does not converge. In this case, one may turn to seeking a mixed-strategy Nash equilibrium, which always exists. Examples are given in \cite{App-04-Complement-Book}: if there are two leaders, we can list possible strategy combinations, solve the lower-level GNEP among followers, compute the respective payoffs of the two leaders, and then build a bimatrix game, whose mixed-strategy Nash equilibrium can be computed by solving an LCP, as explained in Sect. \ref{App-D-Sect01-04}.
2. There is a unique equilibrium, or there are multiple equilibria. Algorithm \ref{Ag:App-04-EPEC-Diagonalization} may converge or not, and which equilibrium will be found (if it converges) depends on the initial strategy profile offered in step 1.
3. Algorithm \ref{Ag:App-04-EPEC-Diagonalization} may converge to a local equilibrium in the sense of \cite{App-04-EPEC-Solution-3}, if each MPEC is solved by a local NLP method which does not guarantee global optimality. The true equilibrium can be found only if each leader's MPEC can be globally solved. The MILP reformulation (if possible) offers one plausible way for this task.
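The Gauss-Seidel iteration of the best-response algorithm can be sketched on a toy problem in which each leader's MPEC solve is replaced by a closed-form Cournot best response (a deliberate simplification; in a real EPEC this step requires solving MPEC($i$) to global optimality):

```python
# Schematic Gauss-Seidel best-response iteration. Each player's "MPEC solve"
# is stood in for by the Cournot best response max(0, (a - c - x_other)/2)
# arising from the payoff x_i (a - x_1 - x_2) - c x_i  (illustrative data).
a, c = 10.0, 1.0
best_response = lambda x_other: max(0.0, (a - c - x_other) / 2)

x = [0.0, 0.0]                    # initial strategy profile
eps, K = 1e-10, 1000              # tolerance and iteration cap
for k in range(K):
    x_prev = x[:]
    for i in range(2):            # leaders update sequentially (Gauss-Seidel)
        x[i] = best_response(x[1 - i])
    if max(abs(x[j] - x_prev[j]) for j in range(2)) <= eps:
        break

# The iteration converges to the Nash equilibrium x_i = (a - c)/3 = 3.
assert all(abs(xi - (a - c) / 3) < 1e-8 for xi in x)
```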
{\noindent \bf 2. KKT system method}
To tackle the divergence issue of the best-response algorithm, it has been proposed to apply the KKT condition to each leader's MPEC and solve the resulting KKT systems simultaneously \cite{App-04-EPEC-Algorithm-3,App-04-EPEC-Algorithm-5}. The solution is a strong stationarity point of EPEC (\ref{eq:App-04-EPEC-1}). There is no convergence issue in this approach, since no iteration is involved. However, special attention should be paid to the potential problems mentioned below.
1. Since the EPEC is essentially a GNEP among the leaders, the concentrated KKT system may have non-isolated solutions. To refine a meaningful outcome, one can manually specify a secondary objective function to be optimized subject to the KKT system.
2. The nested (twice) application of the KKT condition, first to the lower-level problems and then to the upper-level problems, inevitably introduces a large number of complementarity and slackness conditions, which greatly complicates solving the concentrated KKT system. In this regard, scalability may be the main bottleneck of this approach. If the lower-level GNEP is linear, it may be better to use the primal-dual optimality condition for the followers first, and then the KKT condition for the leaders.
3. Because each leader's MPEC is non-convex, a stationary point of the KKT condition is not necessarily an optimal solution for that leader; as a result, the solution of the concentrated KKT system may not be an equilibrium of the EPEC. To validate the result, one can run the best-response method initialized at the candidate solution with a slight perturbation.
{\noindent \bf 3. Potential MPEC method}
When the upper-level problems among leaders admit a potential function satisfying (\ref{eq:App-04-PG-1}), the EPEC can be reformulated as an MPEC, and the relations of their solutions are revealed by comparing the KKT condition of the normalized Nash stationary points of the EPEC and the KKT condition of the associated MPEC \cite{App-04-EPEC-Potential-MPEC}.
For example, if the leaders' objectives are given by
\begin{equation}
F_i(x_i,x_{-i},y) = F^S_i(x_i) + H(x,y) \notag
\end{equation}
In other words, the payoff function $F_i(x_i,x_{-i},y)$ decomposes into two parts: the first, $F^S_i(x_i)$, depends only on the local variable $x_i$; the second, $H(x,y)$, is common to all leaders. In this case, the potential function can be expressed as
\begin{equation}
U(x,y) = H(x,y) + \sum_{i=1}^m F^S_i(x_i) \notag
\end{equation}
Please see Sect. \ref{App-D-Sect01-05} for the condition under which a potential function exists and special instances in which a potential function can be easily found.
Suppose that leaders' local constraints are given by $x_i \in X_i$ which is independent of $x_{-i}$ and $y$, and the best reaction map of followers with fixed $x$ is given by $(\bar y, \bar \lambda, \bar \mu) \in S(x)$. Clearly, the solution of MPEC
\begin{equation}
\begin{aligned}
\min_{x,\bar y,\bar \lambda,\bar \mu} ~~ & U(x,\bar y) \\
\mbox{s.t.} ~~ & x_i \in X_i,~ i=1,\cdots,m \\
& (\bar y, \bar \lambda, \bar \mu) \in S(x)
\end{aligned} \notag
\end{equation}
must be an equilibrium solution of the original EPEC.
This approach leverages the properties of potential games and is superior to the previous two methods (when a potential function exists): the KKT condition is applied only once, to the lower-level problems, and the equilibrium is retrieved by solving a single MPEC.
{\noindent \bf 4. A pricing game in a competitive market}
We consider an EPEC taken from the examples in \cite{App-04-EPEC-Hidden-Convexity-1}, which models a strategic pricing game in a competitive market. The hidden convexity in this EPEC is revealed. For ease of exposition, we study the case with two leaders and one follower. The results can be extended to the situation where more than two leaders exist. The pricing game with two leaders can be formulated by the following EPEC
\begin{subequations}
\label{eq:App-04-Two-Leader-Pricing-Game}
\begin{align}
\mbox{Leader 1:} \quad & \max_{x_1}~ \left\{ y^T(x_1,x_2) A_1 x_1 ~\middle|~ \ B_1 x_1 \le b_1 \right\} \label{eq:App-04-TLPG-Leader-1} \\
\mbox{Leader 2:} \quad & \max_{x_2}~ \left\{ y^T(x_1,x_2) A_2 x_2 ~\middle|~ \ B_2 x_2 \le b_2 \right\} \label{eq:App-04-TLPG-Leader-2} \\
\mbox{Follower:} \quad & \max_y~ \left\{ f(y) - y^T A_1 x_1 - y^T A_2 x_2
~\middle|~ Cy=d \right\} \label{eq:App-04-TLPG-Follower}
\end{align}
\end{subequations}
In (\ref{eq:App-04-TLPG-Leader-1}) and (\ref{eq:App-04-TLPG-Leader-2}), two leaders announce their offering prices $x_1$ and $x_2$, respectively, subject to some certain pricing policy described in their corresponding constraints. The follower then decides how many goods should be purchased from each leader, according to the optimal solution of problem (\ref{eq:App-04-TLPG-Follower}), where the profit of the follower
\begin{equation}
f(y)=-\dfrac{1}{2} y^T Q y + c^T y \notag
\end{equation}
is a strongly concave quadratic function, i.e., $Q \succ 0$, and the matrix $C$ has full row rank. Each player in the market wishes to maximize its own profit. The utilities of the leaders are the payments from trading with the follower; the profit of the follower is its revenue minus the purchasing cost.
At first sight, EPEC (\ref{eq:App-04-Two-Leader-Pricing-Game}) is non-convex, not only because the leaders' objective functions are bilinear, but also because the best response mapping is generally non-convex. In light of the strong concavity of the follower's objective in (\ref{eq:App-04-TLPG-Follower}), the following KKT condition:
\begin{equation}
\begin{gathered}
c-Qy-A_1 x_1-A_2 x_2-C^T \lambda = 0 \\ Cy-d=0
\end{gathered} \notag
\end{equation}
is necessary and sufficient for a global optimum. Because the constraints in (\ref{eq:App-04-TLPG-Follower}) are all equalities, there are no complementarity and slackness conditions. Solving this set of linear equations yields the optimal solution $y$ in closed form. To this end, substituting
\begin{equation}
y=Q^{-1}(c-A_1 x_1-A_2 x_2-C^T \lambda) \notag
\end{equation}
into the second equation, we have
\begin{equation}
\lambda = M \left[N(c-A_1 x_1-A_2 x_2)-d \right] \notag
\end{equation}
where
\begin{equation}
M = \left[C Q^{-1} C^T \right]^{-1},~ N = C Q^{-1} \notag
\end{equation}
Moreover, eliminating $\lambda$ in the expression of $y$ gives the best reaction map
\begin{equation}
\label{eq:App-04-TLPG-Follower-Best-Reaction}
y = r + D_1 x_1 + D_2 x_2
\end{equation}
where
\begin{equation}
\begin{gathered}
r = Q^{-1} c + N^T M d - N^T M N c \\
D_1 = N^T M N A_1 -Q^{-1} A_1 \\
D_2 = N^T M N A_2 -Q^{-1} A_2 \\
\end{gathered} \notag
\end{equation}
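As a sanity check on the algebra, the closed-form best reaction map can be compared with a direct solve of the follower's KKT system. The sketch below uses NumPy on a small random instance; the dimensions and data are illustrative assumptions of ours, not part of the model.

```python
import numpy as np

# Illustrative instance (our own choice): dim(y)=3, one equality row, dim(x_i)=2
rng = np.random.default_rng(0)
n, m, k = 3, 1, 2
Q = 2.0 * np.eye(n)                 # Q > 0, so f(y) is strongly concave
c = rng.standard_normal(n)
C = rng.standard_normal((m, n))     # full row rank (generically)
d = rng.standard_normal(m)
A1 = rng.standard_normal((n, k))
A2 = rng.standard_normal((n, k))
x1 = rng.standard_normal(k)
x2 = rng.standard_normal(k)

# Closed-form pieces from the text: M, N, r, D1, D2
Qinv = np.linalg.inv(Q)
M = np.linalg.inv(C @ Qinv @ C.T)
N = C @ Qinv
r = Qinv @ c + N.T @ M @ d - N.T @ M @ N @ c
D1 = N.T @ M @ N @ A1 - Qinv @ A1
D2 = N.T @ M @ N @ A2 - Qinv @ A2
y_closed = r + D1 @ x1 + D2 @ x2

# Direct KKT solve: [Q  C^T; C  0] [y; lam] = [c - A1 x1 - A2 x2; d]
KKT = np.block([[Q, C.T], [C, np.zeros((m, m))]])
rhs = np.concatenate([c - A1 @ x1 - A2 @ x2, d])
y_kkt = np.linalg.solve(KKT, rhs)[:n]

assert np.allclose(y_closed, y_kkt)
```

The two routes agree to machine precision, confirming the expressions for $r$, $D_1$ and $D_2$.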
Substituting (\ref{eq:App-04-TLPG-Follower-Best-Reaction}) into the objective functions of leaders, EPEC (\ref{eq:App-04-Two-Leader-Pricing-Game}) reduces to a standard Nash game
\begin{equation}
\begin{aligned}
\mbox{Leader 1:} \quad & \max_{x_1}~ \left\{ \theta_1 (x_1,x_2) ~\middle|~ \ B_1 x_1 \le b_1 \right\} \\
\mbox{Leader 2:} \quad & \max_{x_2}~ \left\{ \theta_2 (x_1,x_2) ~\middle|~ \ B_2 x_2 \le b_2 \right\} \\
\end{aligned} \notag
\end{equation}
where
\begin{equation}
\begin{gathered}
\theta_1(x_1,x_2) = r^T A_1 x_1 + x_1^T D_1^T A_1 x_1 + x_2^T D_2^T A_1 x_1\\
\theta_2(x_1,x_2) = r^T A_2 x_2 + x_2^T D_2^T A_2 x_2 + x_1^T D_1^T A_2 x_2
\end{gathered} \notag
\end{equation}
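The reduced payoffs can be checked against the original bilinear payoffs in the same way. The following self-contained sketch (again with illustrative random data of our own) confirms that $\theta_1(x_1,x_2)$ equals $y(x_1,x_2)^T A_1 x_1$ under the follower's best response.

```python
import numpy as np

# Illustrative random instance (our own choice of dimensions and data)
rng = np.random.default_rng(2)
n, m, k = 3, 1, 2
Q = 2.0 * np.eye(n)
c = rng.standard_normal(n)
C = rng.standard_normal((m, n))
d = rng.standard_normal(m)
A1 = rng.standard_normal((n, k))
A2 = rng.standard_normal((n, k))
x1 = rng.standard_normal(k)
x2 = rng.standard_normal(k)

# Best reaction map y = r + D1 x1 + D2 x2, as derived in the text
Qinv = np.linalg.inv(Q)
M = np.linalg.inv(C @ Qinv @ C.T)
N = C @ Qinv
r = Qinv @ c + N.T @ M @ d - N.T @ M @ N @ c
D1 = N.T @ M @ N @ A1 - Qinv @ A1
D2 = N.T @ M @ N @ A2 - Qinv @ A2
y = r + D1 @ x1 + D2 @ x2

# Reduced payoff theta1 vs. the original bilinear payoff y^T A1 x1
theta1 = r @ A1 @ x1 + x1 @ (D1.T @ A1) @ x1 + x2 @ (D2.T @ A1) @ x1
assert np.isclose(theta1, y @ A1 @ x1)
```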
The partial Hessian matrix of $\theta_1(x_1,x_2)$ can be calculated as
\begin{equation*}
\nabla^2_{x_1} \theta_1(x_1,x_2) = 2A^T_1 (N^T M N -Q^{-1}) A_1
\end{equation*}
As $Q \succ 0$, its inverse $Q^{-1} \succ 0$. Denote by $Q^{-1/2}$ the square root of $Q^{-1}$, and define
\begin{equation}
P_J = I - Q^{-1/2} C^T (CQ^{-1} C^T)^{-1} C Q^{-1/2} \notag
\end{equation}
It is easy to check that $P_J$ is a projection matrix: it is symmetric and idempotent, i.e., $P_J = P^2_J =P^3_J =\cdots$. Moreover, it can be verified that the Hessian matrix $\nabla^2_{x_1} \theta_1(x_1,x_2)$ can be expressed as
\begin{equation}
\nabla^2_{x_1} \theta_1(x_1,x_2) = 2A^T_1 (N^T M N -Q^{-1}) A_1
= -2 A^T_1 Q^{-1/2} P_J Q^{-1/2} A_1 \notag
\end{equation}
For any vector $z$ with a proper dimension,
\begin{equation}
\begin{aligned}
z^T \nabla^2_{x_1} \theta_1(x_1,x_2) z ~
=~ & -2 z^T \left( A^T_1 Q^{-1/2} P_J Q^{-1/2} A_1 \right) z \\
=~ & -2 z^T A^T_1 Q^{-1/2} P^T_J P_J Q^{-1/2} A_1 z \\
=~ & -2 (P_J Q^{-1/2} A_1 z)^T (P_J Q^{-1/2} A_1 z) \le 0
\end{aligned} \notag
\end{equation}
We can see that $\nabla^2_{x_1} \theta_1(x_1,x_2) \preceq 0$. A similar analysis applies to $\nabla^2_{x_2} \theta_2(x_1,x_2)$. Therefore, the leaders' problems are in fact convex programs, and a pure-strategy Nash equilibrium exists.
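The projection and definiteness claims above can also be verified numerically. The sketch below (illustrative data of our own; $Q$ is taken diagonal so that $Q^{-1/2}$ is immediate) checks that $P_J$ is symmetric and idempotent, that the two expressions for the Hessian agree, and that the Hessian is negative semidefinite.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 4, 2, 3
q = rng.uniform(0.5, 2.0, n)
Q = np.diag(q)                        # Q > 0, diagonal so Q^{-1/2} is easy
Qinv_half = np.diag(1.0 / np.sqrt(q))
Qinv = Qinv_half @ Qinv_half
C = rng.standard_normal((m, n))       # full row rank (generically)
A1 = rng.standard_normal((n, k))

M = np.linalg.inv(C @ Qinv @ C.T)
N = C @ Qinv

# P_J = I - Q^{-1/2} C^T (C Q^{-1} C^T)^{-1} C Q^{-1/2}
PJ = np.eye(n) - Qinv_half @ C.T @ M @ C @ Qinv_half
assert np.allclose(PJ, PJ.T)          # symmetric
assert np.allclose(PJ, PJ @ PJ)       # idempotent

# Hessian identity and negative semidefiniteness
H = 2 * A1.T @ (N.T @ M @ N - Qinv) @ A1
assert np.allclose(H, -2 * A1.T @ Qinv_half @ PJ @ Qinv_half @ A1)
assert np.all(np.linalg.eigvalsh(H) <= 1e-8)
```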
\section{Conclusions and Further Reading}
\label{App-D-Sect06}
Equilibrium problems entail solving interdependent optimization problems simultaneously; they serve as the foundation for modeling competitive behaviors among strategic decision makers and for analyzing the stable outcome of a game. This chapter provides an overview of two kinds of equilibrium problems that frequently arise in economic and engineering applications.
One-level equilibrium problems, including the NEP and GNEP, are introduced first. The existence of an equilibrium can be ensured under certain convexity and monotonicity assumptions. Distributed methods for solving one-level games are presented. When each player solves a strictly convex optimization problem, distributed algorithms converge with provable guarantees and are thus preferred, whereas the KKT system leads to nonlinear equations and is relatively difficult to solve. To address incomplete information and uncertainty in players' decision making, a robust optimization based game model is proposed in \cite{App-04-Robust-Game-Theory}, which is distribution-free and relaxes Harsanyi's assumptions on Bayesian games. In particular, the robust Nash equilibrium of a bimatrix game with uncertain payoffs can be characterized via the solution of a second-order cone complementarity problem \cite{App-04-Robust-NE-1}, and more general cases involving $n$ players and continuous payoffs are discussed in \cite{App-04-Robust-NE-2}. Distributional uncertainty is tackled in \cite{App-04-DR-CC-Game}, which studies the mixed-strategy Nash equilibrium of a distributionally robust chance-constrained game. A generalized Nash game arises when the strategy sets of players are coupled. Owing to practical interest from a variety of engineering disciplines, the solution of GNEPs remains an active research area, and the volume of articles has grown quickly in recent years; see, e.g., \cite{App-04-GNEP-Algorithm-1,App-04-GNEP-Algorithm-2,App-04-GNEP-Algorithm-3,App-04-GNEP-Algorithm-4,App-04-GNEP-Algorithm-5,App-04-GNEP-Algorithm-6,App-04-GNEP-Algorithm-7,App-04-GNEP-Algorithm-8,App-04-GNEP-Algorithm-9}, to name just a few. GNEPs with uncertainties are studied in \cite{App-04-GNEP-Uncertainty-1,App-04-GNEP-Uncertainty-2}.
Bilevel equilibrium problems, including the bilevel program, MPEC, and EPEC, are investigated next. These problems are intrinsically hard to solve, owing to the non-convexity induced by the best reaction map of the followers, and solution properties have been revealed only for specific instances under restrictive assumptions. We recommend \cite{App-04-Complement-Book,App-04-BLP-Book-1,App-04-BLP-Book-2} for theoretical foundations and energy market applications of bilevel equilibrium models, and \cite{App-04-BLP-Review-Pozo} for an up-to-date survey. The theories on bilevel programs and MPECs are relatively mature. Recent research efforts have focused on new constraint qualifications and optimality conditions; see, for example, the work in \cite{App-04-MPEC-CQ-1,App-04-MPEC-CQ-2,App-04-MPEC-CQ-3,App-04-MPEC-CQ-4}. The MILP reformulation is preferred in most power system applications, because the ability of MILP solvers keeps improving and a globally optimal solution can be found. The stochastic MPEC is proposed in \cite{App-04-Stochastic-MPEC-1} to model uncertainty using probability distributions. Algorithms are developed in \cite{App-04-Stochastic-MPEC-2,App-04-Stochastic-MPEC-3,App-04-Stochastic-MPEC-4,App-04-Stochastic-MPEC-5}, and a literature review can be found in \cite{App-04-Stochastic-MPEC-6}. Owing to the inherent hardness, discussions on EPEC models are limited to special cases, such as those with shared P-matrix linear complementarity constraints \cite{App-04-Muti-Leader-Follower-Game-1}, power market models \cite{App-04-Muti-Leader-Follower-Game-1,App-04-Muti-Leader-Follower-Game-2,App-04-Muti-Leader-Follower-Game-3}, those with convex quadratic objectives and linear constraints \cite{App-04-Muti-Leader-Follower-Game-1}, and Markov game models \cite{App-04-EPEC-Markov-Regularization}.
Methods for solving EPECs are based on relaxing or regularizing complementarity constraints \cite{App-04-EPEC-Markov-Regularization,App-04-EPEC-Relaxation}, as well as evolutionary algorithms \cite{App-04-Muti-Leader-Follower-Game-EA}. Robust equilibria of EPECs are discussed in \cite{App-04-Robust-SNE}. An interesting connection between the bilevel program and the GNEP has been revealed in \cite{App-04-BiP-GNEP}, offering a new perspective on these game models.
We believe that equilibrium programming models will become an indispensable tool for designing and analyzing interconnected energy systems and related markets, in view of the physical interdependence of heterogeneous energy flows and the strategic interactions among different network operators.
\input{ap04ref}
\backmatter
\printindex
\end{document}
\begin{document}
\title[Annihilator of Top Local Cohomology and Lynch's Conjecture]{ Annihilator of Top Local Cohomology and Lynch's Conjecture}
\author[A.~Fathi]{Ali Fathi}
\address{Department of Mathematics, Zanjan Branch,
Islamic Azad University, Zanjan, Iran.}
\email{[email protected]}
\keywords{ Local cohomology, annihilator, Lynch's Conjecture}
\subjclass[2010]{13D45, 14B15, 13E05}
\dedicatory{Dedicated to Professor Hossein Zakeri}
\begin{abstract}
Let $R$ be a commutative Noetherian ring, $\mathfrak a$ a proper ideal of $R$ and $N$ a non-zero finitely generated $R$-module with $N\neq \mathfrak a N$. Let $d$ (respectively $c$) be the smallest (respectively greatest) non-negative integer $i$ such that the local cohomology $\operatorname{H}^i_{\mathfrak a}(N)$ is non-zero. In this paper, we provide sharp bounds under inclusion for the annihilators of the local cohomology modules
$\operatorname{H}^d_{\mathfrak a}(N)$, $\operatorname{H}^c_{\mathfrak a}(N)$ and these annihilators are computed in certain cases. Also, we construct a counterexample to Lynch's conjecture.
\end{abstract}
\maketitle
\section{\bf Introduction}
Throughout this paper, $R$ is a commutative Noetherian ring with non-zero identity. Let $\fa$ be an ideal of $R$, $N$ an $R$-module and $i$ a non-negative integer. The {\it $i$-th local cohomology } of $N$ with respect to $\fa$ was defined by Grothendieck as follows:
$$\h i{\fa}N:={\underset{n\in\N}{\varinjlim}\operatorname{Ext}^i_R(R/\fa^n, N)};$$
see \cite{bs} and \cite{g} for more details.
Throughout this section, let $\fa$ be an ideal of $R$ and $N$ a finitely generated $R$-module.
We recall that the cohomological dimension (respectively, the depth) of $N$ with respect to $\fa$, denoted by $\cdd R {\fa}N$ (respectively, $\grad{R}{\fa}N$), is defined as the supremum (respectively, infimum) of the non-negative integers $i$ such that $\h i{\fa}N$ is non-zero.
The arithmetic rank of $\fa$, denoted by $\textrm{ara}(\fa)$, is the least number of elements of $R$ required to generate an ideal which has the same radical as $\fa$. The $N$-height of $\fa$ is defined as $\Ht N{\fa}:=\inf\{\dim_{R_\fp}(N_\fp): \fp\in\supp_R(N)\cap\operatorname{V}(\fa)\}$, where $\operatorname{V}(\fa)$ denotes the set of all prime ideals of $R$ containing $\fa$. We denote the set of minimal elements of $\ass_R(N)$ by $\mass_R(N)$; also, the set of elements $\fp$ of $\ass_R(N)$ with $\dim_R(R/\fp)=\dim_R(N)$ is denoted by $\assh_R(N)$.
For a submodule $L$ of $N$ and $\fp\in\supp_R(N)$, we denote the contraction of $L_\fp$ under the canonical map $N\rightarrow N_\fp$ by $C^N_\fp(L)$. Finally, we denote the set of integers (respectively, positive integers, non-negative integers) by $\mathbb Z$ (respectively, $\N$, $\N_0$). For any unexplained notation and terminology,
we refer the reader to \cite{bs}, \cite{bh} and \cite{mat}.
We adopt the convention that the intersection (respectively, union) of empty family of subsets of a set $A$ is $A$ (respectively, the empty set). Also, we adopt the convention that the infimum (respectively, supremum) of empty set of integers is $\infty$ (respectively, $-\infty$).
Since $\bigcup_{i\in\N_0}\supp_R(\h i{\fa}N)=\supp_R(N/\fa N)$ (see \cite[Lemma 2.4]{f2018}), we obtain $N=\fa N$ if and only if $\h i{\fa}N=0$ for all $i\in\N_0$. In this case we have $\cdd R{\fa}N=-\infty$, $\grad{R}{\fa}N=\infty$ and $\Ht N{\fa}=\infty$ by the above convention.
If $R$ is a regular local ring containing a field and $\fa$ a proper ideal of $R$, then it is known that $\h i{\fa}R\neq 0$ if and only if $\h i{\fa}R$ is faithful (i.e., $\operatorname{Ann}_R (\h i{\fa}R)=0$). This was proved by Huneke and Koh in prime characteristic (see \cite[Lemma 2.2]{hk}) and by Lyubeznik in characteristic zero (see \cite[Corollary 3.5]{ly} and the proof of \cite[Theorem 2.3]{hk}). Boix and Eghbali provided a characteristic-free proof of this result in \cite[Theorem 3.6]{be}. This leads to the following conjecture of Lynch; see \cite[Conjecture 1.2]{l}.
\textbf{Lynch's Conjecture.} If $R$ is a local ring and $\fa$ a proper ideal of $R$ with $c:=\cdd R {\fa}R>0$, then $\dim_R(R/\operatorname{Ann}_R (\h c{\fa}R))=\dim_R(R/\gam {\fa}R)$. In particular, if $\fa$ contains a non-zerodivisor, then $\dim_R(R/\operatorname{Ann}_R (\h c{\fa}R))=\dim_R(R)$.
Let $\fa\neq R$ and $c:=\cdd R {\fa}R$.
The conjecture is known to be false: the first counterexample was constructed by Bahmanpour
in \cite[Example 3.2]{b2017} over a nonequidimensional local ring of dimension at least $5$ with cohomological dimension $c=2$. Also, Singh and Walther
\cite{sw} provided a counterexample over a nonequidimensional local ring of dimension $3$ with cohomological dimension $c=2$. In both examples $\operatorname{Ann}_R (\h c{\fa}R)$
has height zero. Datta, Switala and Zhang, in \cite{dsw}, produced a counterexample to this conjecture over a regular local ring of mixed characteristic such that
$\operatorname{Ann}_R (\h c{\fa}R)$ is non-zero and $\cdd R {\fa}R=\operatorname{ara} (\fa)$. There are also some affirmative answers to Lynch's conjecture. When either $R$ is a ring of prime
characteristic with $\cdd R {\fa}R=\operatorname{ara} (\fa)$, or $R$ is pure in a regular ring containing a field, Hochster and Jeffries proved that $\operatorname{Ann}_R (\h c{\fa}R)$ has
height zero; see
\cite[Corollary 2.7 and Theorem 2.9]{hj}. Also, we note that Datta--Switala--Zhang's example shows that the result of Hochster and Jeffries does not hold for an arbitrary commutative Noetherian ring $R$ and ideal $\fa$ with $\cdd R {\fa}R=\operatorname{ara} (\fa)$. Note that if $\dim_R(R/\operatorname{Ann}_R (\h c{\fa}R))=\dim_R(R)$, then $\Ht R{\operatorname{Ann}_R (\h c{\fa}R)}=0$; if in addition $c>0$, then $\gam {\fa}R\subseteq\operatorname{Ann}_R (\h c{\fa}R)$ and so $\dim_R(R/\operatorname{Ann}_R (\h c{\fa}R))=\dim_R(R/\gam {\fa}R)=\dim_R(R)$.
We assume for the remainder of this section that $N\neq\fa N$, $c:=\cdd R{\fa}N$ and $0=N_1\cap\dots\cap N_n$ is a minimal primary decomposition of the zero submodule of $N$
with $\ass_R(N/N_i):=\{\fp_i\}$ for all $1\leq i\leq n$.
If $c=\dim_R(N)$, then we have
\begin{equation}{\textstyle\operatorname{Ann}_R (\h c{\fa}N)=\operatorname{Ann}_R (N/{\bigcap_{\cdd R {\fa}{R/\fp_i}=c}N_i})}.\end{equation}
This equality was proved by Lynch \cite[Theorem 2.4]{l} when $R$ is a complete local ring and $N=R$. Bahmanpour et al. \cite[Theorem 1.1]{bag} proved it when $R$ is a complete local ring and $\fa$ is the maximal ideal of $R$. Finally, it was proved in full generality by Atazadeh et al. in \cite[Theorem 2.3]{asn}. Therefore in the case $c=\dim_R(N)$, Lynch's conjecture is true and the $N$-height of $\operatorname{Ann}_R (\h c{\fa}N)$ is zero. Also, recently, in \cite[Theorem 1.2]{an2022}, Atazadeh and Naghipour showed that the $N$-height of $\operatorname{Ann}_R (\h c{\fa}N)$ is zero in the case when $c=\dim_R(N)-1$.
If $c$ is not necessarily equal to $\dim_R(N)$, then in \cite[Theorem 3.4]{f} we gave the following bound for the annihilator of $\h c{\fa}N$:
\begin{equation}\textstyle{\operatorname{Ann}_R ({N}/{\bigcap_{\fp_i\in\Delta}N_i})\subseteq\operatorname{Ann}_R (\h c{\fa}N)
\subseteq \operatorname{Ann}_R ({N}/{\bigcap_{\fp_i\in\Sigma}N_i})},\end{equation}
where $\Delta:=\{\fp\in\ass_R(N): \cdd R {\fa}{R/\fp}=c\}$ and $\Sigma:=\{\fp\in\ass_R(N): \cdd R {\fa}{R/\fp}=\dim_R (R/\fp)=c\}$.
If $c=\dim_R(N)$, then $\Delta=\Sigma$ and this bound yields equation (1.1). This paper is divided into five sections.
In Sec. 2, for a submodule $L$ of $N$ and $\fp\in\supp_R(N)$, we present some properties of $C^N_\fp(L)$ which are used in the sequel.
In Sec. 3, for an arbitrary non-negative integer $t$, we provide a lower bound for the annihilator of $\h t{\fa}N$. More precisely, we show that
$C^t(\fa, N):=\bigcap_{\fp_i\in\Delta(t)}N_i=\bigcap_{\fp\in\Delta(t)}C^N_\fp(0)=\gam {\fa(t)}N$ is the largest submodule $L$ of $N$ such that
$\cdd R{\fa}L<t$, where $\Delta(t):=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}\geq t\}$ and $\fa(t):=\bigcap_{\fp\in\ass_R(N)\setminus\Delta(t)}\fp$.
This yields the following lower bound for the annihilator of $\h t{\fa}N$:
\begin{equation}
\operatorname{Ann}_R(N/C^t(\fa, N))\subseteq \operatorname{Ann}_R(\h t{\fa}N);
\end{equation}
see Theorem \ref{lower}.
Now set $d:=\grad{R}{\fa}N$. We denote $C^{d}(\fa, N)$ and $C^{c}(\fa, N)$ by ${\rm S}(\fa, N)$ and $\T {\fa}N$ respectively.
For each $t\leq d$, we have $C^t(\fa, N)={\rm S}(\fa, N)$ and for each $t\geq c+1$ we have $C^t(\fa, N)=N$ and there is the filtration
\begin{equation}{\rm S}(\fa, N)=C^{d}(\fa, N)\subseteq\dots\subseteq C^{c}(\fa, N)=\T {\fa}N\subset N\end{equation}
of submodules of $N$ such that, for each $d\leq t\leq c$,
$\cdd R{\fa}{C^{t+1}(\fa, N)/C^t(\fa, N)}=t$ whenever $C^{t}(\fa, N)\neq C^{t+1}(\fa, N)$; see Proposition \ref{prop5}. The submodule $\T {\fa}N$ (respectively ${\rm S}(\fa, N)$) of $N$ is used in Sec. 4 (respectively Sec. 5) to study the annihilator of the last (respectively first) non-zero local cohomology module of $N$ with respect to $\fa$. For a submodule $L$ of $N$, we have $L=\fa L$ if and only if $L\subseteq {\rm S}(\fa, N)$. We can regard this as a version of Nakayama's Lemma because ${\rm S}(\fa, N)=0$ if and only if $1-a$ is a non-zerodivisor on $N$ for all $a\in \fa$; see Proposition \ref{prop5}.
In Sec. 4, we consider the annihilator of the top local cohomology $\h c{\fa}N$. In Theorem \ref{annh2}, the upper bound for the annihilator of $\h c{\fa}N$ in (1.2) is improved as follows.
\begin{equation} \textstyle{\operatorname{Ann}_R(N/\T{\fa}N)\subseteq\operatorname{Ann}_R (\h c{\fa}N)\subseteq \operatorname{Ann}_R(N/\bigcap_{\fp\in\Sigma}C^N_\fp(0))},\end{equation}
where $\Sigma:=\{\fp\in\supp_R(N): \cdd R {\fa}{R/\fp}=\dim_R(R/\fp)=c\}$. Also, it is proved that if for each $\fp\in\ass_R(N)$ with $\cdd R{\fa}{R/\fp}=c$ there exists $\fq\in\Sigma$ such that $\fp\subseteq\fq$, then the above upper and lower bounds are equal. By using this theorem, we construct a counterexample to Lynch's conjecture (see Example \ref{exa}) which extends Bahmanpour's example \cite{b2017} and Singh--Walther's example \cite{sw}. We also conclude from this theorem (see Corollary \ref{cor1}) that if $\Sigma\neq\emptyset$ or $c=\dim_R(N)-1$, then
\begin{equation}
\Ht N{\operatorname{Ann}_R (\h c{\fa}N)}=0. \end{equation}
Next, it is shown in Theorem \ref{annh4} that if $(0:_{\h c{\fa}{N/N_i}}\fa)$ is finitely generated for all $\fp_i\in\ass_R(N)$ with $\cdd R{\fa}{R/\fp_i}=c$, then
\begin{equation}\operatorname{Ann}_R (\h c{\fa}N)=\operatorname{Ann}_R (N/\T{\fa}N).\end{equation}
In particular, if $N$ is coprimary and $(0:_{\h c{\fa}N}\fa)$ is finitely generated, then we have $\operatorname{Ann}_R(\h c{\fa}N)=\operatorname{Ann}_R (N)$. Therefore if $R$ is a domain and $(0:_{\h c{\fa}R}\fa)$ is finitely generated, then $\h c{\fa}R$ is faithful.
As an application of Theorem \ref{annh4}, we will prove in Corollary \ref{cor2} that the equality (1.7) holds in the following cases:
(i) $c\leq 1$;
(ii) $\dim_R(N/\fa N)\leq 1$;
(iii) $\dim_R(N)\leq 2$;
(iv) $\h c{\fa}N$ is $\fa$-cofinite minimax.
Also, if $(R, \fn)$ is a complete local ring, $\h c{\fa}N$ is Artinian and $(0:_{\h c{\fa}N}\fa)$ is finitely generated, then it is proved in Lemma 4.9 that
\begin{align}
&\att_R(\h c{\fa}N)=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}=c\}\\
&\nonumber=\{\fp\in\mass_R(N): \surd(\fa+\fp)=\fn,\, \dim_R(R/\fp)=c\}.
\end{align}
Now, let $(R, \fn)$ be a local ring. Then $\dim_R(N)-\dim_R(N/\fa N)$ is a lower bound for the cohomological dimension $c:=\cdd R{\fa}N$ of $N$ with respect to $\fa$. If $\dim_R(N)-\dim_R(N/\fa N)=c$, then we show in Theorem \ref{cd2} that
\begin{align}
&\dim_R(N)=\dim_R(R/\operatorname{Ann}_R(\h c{\fa}N)),\\
&\nonumber\textstyle{\operatorname{Ann}_R(\h c{\fa}N)\subseteq\operatorname{Ann}_R(N/\bigcap_{\fp\in\assh_R(N)}C^N_\fp(0))}\end{align}
and equality holds if $N$ is unmixed, that is, $\dim_R(R/\fp)=\dim_R(N)$ for all $\fp\in\ass_R(N)$. Note that the above theorem gives an affirmative answer to Lynch's conjecture in this case. We then deduce in Corollary \ref{sys} that if
$0\leq t\leq \dim_R(N)$ and $x_1,\dots,x_t\in\fn$ is a part of a system of parameters for $N$, then
\begin{align}
&\cdd R{(x_1,\dots,x_t)}N=t,\\
\nonumber &\dim_R(R/\operatorname{Ann}_R(\h t{(x_1,\dots,x_t)}N))=\dim_R(N),\\
\nonumber&\textstyle{\operatorname{Ann}_R(\h t{(x_1,\dots,x_t)}N)\subseteq\operatorname{Ann}_R(N/\bigcap_{\fp\in\assh_R(N)}C^N_\fp(0))}\end{align}
and equality holds if $N$ is unmixed. Hence in this case also Lynch's conjecture holds.
In Sec. 5, we consider the annihilator of the first non-zero local cohomology $\h d{\fa}N$, where $d:=\grad{R}{\fa}N$. More precisely, we show in Theorem \ref{annh7} that
\begin{equation}\textstyle{\operatorname{Ann}_R(N/{\rm S}(\fa, N))\subseteq\operatorname{Ann}_R(\h d{\fa}N)\subseteq\operatorname{Ann}_R(N/\bigcap_{\fp\in\Sigma} C^N_\fp(0))\cap(\bigcap_{\fp\in\Sigma'}\fp)},\end{equation}
where $\Sigma:=\{\fp\in\mass_R(N): \Ht {R/\fp}{(\fa+\fp)/\fp}=d\}$ and $\Sigma':=\{\fp\in\ass_R(N)\setminus\mass_R(N): \Ht {R/\fp}{(\fa+\fp)/\fp}=d\}$. In Lemma \ref{loc}, for an arbitrary non-negative integer $t$, when $(R,\fn)$ is local we improve the upper bound presented in \cite[Theorem 3.2]{f} for the annihilator of $\h t{\fn}N$ as follows:
\begin{equation}
\textstyle{\operatorname{Ann}_R(\h t{\fn}N)\subseteq\operatorname{Ann}_R(N/\bigcap_{\fp\in\Sigma(t)}C_\fp^N(0))\cap(\bigcap_{\fp\in\Sigma'(t)}\fp),}
\end{equation}
where $\Sigma(t):=\{\fp\in\mass_R(N): \dim_R(R/\fp)=t\}$ and $\Sigma'(t):=\{\fp\in\ass_R(N)\setminus\mass_R(N): \dim_R(R/\fp)=t\}$. The last inclusion in (1.11) is an equality whenever $N$ is Cohen-Macaulay; see Corollary \ref{cor3}. Note that for $\fp\in\supp_R(N)$, $\operatorname{Ann}_R(N/C^N_\fp(0))=C^R_\fp(\operatorname{Ann}_R(N))\subseteq\fp$. Example \ref{exa2} shows that, to improve the upper bound for the annihilator of $\h d{\fa}N$ in (1.11), we cannot replace $\mass_R(N)$ by $\ass_R(N)$ in the index set $\Sigma$. This example also shows that, in general, there is no subset $\Sigma$ of $\supp_R(N)$ such that $\operatorname{Ann}_R(\h d{\fa}N)=\operatorname{Ann}_R(N/\bigcap_{\fp\in\Sigma}C^N_\fp(0))$, even if $R$ is a complete regular local ring and $\fa$ is its maximal ideal. Also, in Lemma \ref{annh6}, when $N$ is coprimary, we show that
\begin{equation}
\operatorname{Ann}_R(\h {\Ht N{\fa}}{\fa}N)=\operatorname{Ann}_R(N).
\end{equation}
In particular, if $R$ is a domain, then $\h {\Ht R{\fa}}{\fa}R$ is faithful.
In Proposition \ref{prop3}, it is proved that if $(R, \fn)$ is a homomorphic image of a Cohen-Macaulay local ring and $t\in\N_0$ is such that
$\h t{\fn}N\neq 0$, then
\begin{equation}\dim_R(R/\operatorname{Ann}_R(\h t{\fn}N))\leq t\end{equation}
and equality holds if $\dim_R(R/\fp)=t$ for some $\fp\in\ass_R(N)$.
Example \ref{exa3} shows that there is a local ring $(R, \fn)$ which is a homomorphic image of a complete regular local ring such that
\begin{equation}
\dim_R(R/\operatorname{Ann}_R(\h {\depth_R(R)}{\fn}{R}))<\depth_R(R)=\depth_R(R/\gam {\fn}R).
\end{equation}
Therefore the analogous version of Lynch's conjecture is not true for $\h {\depth_R(R)}{\fn}R$, and inequality (1.14) may be strict.
\section{\bf Preliminaries}
Let $L$ be a proper submodule of an $R$-module $N$. Then $L$ is called a {\it primary submodule} of $N$ when for all $r\in R$ and $x\in N$ if $rx\in L$, then $x\in L$ or $r^nN\subseteq L$ for some $n\in\N$. If $L$ is a primary submodule of $N$, then $\fp:=\surd(\operatorname{Ann}_R(N/L))$
is a prime ideal of $R$ and $L$ is called a $\fp$-primary submodule of $N$. An expression of $L$ as an intersection of finitely many primary submodules of $N$ is called a {\it primary decomposition} of $L$ in $N$.
Such a primary decomposition
$$L=N_1\cap\dots \cap N_n\quad \textrm {with } N_i\ \textrm{ $\fp_i$-primary in } N\ (1\leq i\leq n)$$
of $L$ in $N$ is said to be a {\it minimal primary decomposition} when $\fp_1,\dots,\fp_n$ are distinct and $\bigcap_{1\leq j\neq i\leq n}N_j\nsubseteq N_i$ for all $1\leq i\leq n$. In this case, we have $\ass_R(N/L)=\{\fp_1,\dots, \fp_n\}$, and hence $n$ and the set $\{\fp_1,\dots, \fp_n \}$ are independent of the choice of minimal primary decomposition of $L$ in $N$. When $\ass_R(N)$ has just one element (or equivalently, $0$ is a primary submodule of $N$), then $N$ is called
{\it coprimary}. See \cite{at, mat} for more details about the primary decomposition of modules.
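A small standard example (ours, included for orientation; it is not taken from the text) may help fix these notions: in $N=\mathbb{Z}$ over $R=\mathbb{Z}$, the submodule $L=12\mathbb{Z}$ admits a minimal primary decomposition with primary components for the primes $(2)$ and $(3)$.

```latex
% Minimal primary decomposition of L = 12Z in N = Z over R = Z:
% 4Z is (2)-primary and 3Z is (3)-primary, so
\[
  12\mathbb{Z} \;=\; 4\mathbb{Z}\cap 3\mathbb{Z},
  \qquad
  \ass_{\mathbb{Z}}(\mathbb{Z}/12\mathbb{Z})=\{(2),(3)\}.
\]
```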
Let $S$ be a multiplicatively closed subset of $R$ and $L$ a submodule of an $R$-module $N$. We denote the contraction of $S^{-1}L$ under the canonical map $N\rightarrow S^{-1}N$ by $C^N_S(L)$ (in \cite{f} it is denoted by $S_N(L)$). If $\fp\in\supp_R(N)$ and $S=R\setminus \fp$, we write $C_\fp^N(L)$ instead of $C^N_S(L)$. For an ideal $\fa$ and a prime ideal $\fp$ of $R$ we show $C_\fp^R(\fa)$ by $C_\fp(\fa)$.
Also, a subset $\Sigma$ of $\ass_R(N)$ is called an {\it isolated subset} of $\ass_R(N)$ when it satisfies the following condition: if $\fq\in \ass_R(N)$ and $\fq\subseteq\fp$ for some $\fp\in\Sigma$, then $\fq\in\Sigma$.
\begin{lem}\label{cont2} Let $L$ be a submodule of an $R$-module $N$. Let $\Sigma$ be a finite subset of $\spec(R)$ and $S:=R\setminus\bigcup_{\fp\in\Sigma}\fp$. Then $C^N_S(L)=\bigcap_{\fp\in\Sigma} C^N_{\fp}(L)$.
\end{lem}
\begin{proof}
If $\Sigma=\emptyset$, then $S=R$, so $S^{-1}(L)=S^{-1}(N)=0$ and $C^N_S(L)=N$. On the other hand, the intersection of the empty family of submodules of $N$ is $N$. Hence the
assertion holds in this case. Now assume that $\Sigma$ is non-empty.
If $x\in C^N_S(L)$, then in $S^{-1}(N)$, we have $x/1=l/s$ for some $s\in S$ and $l\in L$. Now for each $\fp\in\Sigma$, it is easy to see that $x/1=l/s$ in $N_{\fp}$. Therefore
$x\in C^N_{\fp}(L)$ for all $\fp\in\Sigma$ and so $C^N_S(L)\subseteq\bigcap_{\fp\in\Sigma} C^N_{\fp}(L)$.
To prove the reverse inclusion, let $x$ be an arbitrary element of $N$ with $x\notin C^N_S(L)$; it suffices to show that
$x\notin \bigcap_{\fp\in\Sigma} C^N_{\fp}(L)$. Since $x/1\notin S^{-1}(L)$ in $S^{-1}(N)$, we have $sx/s=x/1\notin S^{-1}(L)$ for all
$s\in S$. Therefore $sx\notin L$ for all $s\in S$. Consequently, $(L:_Rx)\cap S=\emptyset$, where $(L:_Rx)$ denotes the ideal $\{r\in R: rx\in L\}$ of $R$.
It follows that $(L:_Rx)\subseteq\bigcup_{\fp\in\Sigma}\fp$ and so $(L:_Rx)\subseteq\fp$ for some $\fp\in\Sigma$ by the Prime Avoidance Theorem.
Now in $N_{\fp}$, if $x/1=l/s$ for some $l\in L$ and some $s\in R\setminus \fp$, then
there exists $t\in R\setminus \fp$ such that $tsx=tl$ and so
$ts\in(L:_Rx)\subseteq\fp$, which is impossible. Therefore $x/1\notin L_{\fp}$ or equivalently $x\notin C^N_{\fp}(L)$, as required.
\end{proof}
\begin{prop}\label{prop1}
Let $L$ be a proper submodule of a non-zero finitely generated $R$-module $N$ and let $L=N_1\cap\dots\cap N_n$ be a minimal primary decomposition of $L$ in $N$
with $\ass_R(N/N_i):=\{\fp_i\}$ for all $1\leq i\leq n$. Let $\Sigma$ be an isolated subset of $\ass_R(N/L)$. Then
$$\textstyle{\bigcap_{\fp_i\in\Sigma}N_i=C^N_S(L)=\bigcap_{\fp\in\Sigma}C^N_\fp(L)=\bigcup_{t\in\N}(L:_N\fb^t)},$$ where $S:=R\setminus \bigcup_{\fp\in\Sigma}\fp$ and $\fb:=\bigcap_{\fp\in\ass_R(N/L)\setminus\Sigma} \fp$.
\end{prop}
\begin{proof}
If $\Sigma=\emptyset$, then $S=R$ and $\fb=\surd(\operatorname{Ann}_R(N/L))$. Thus $\bigcap_{\fp_i\in\Sigma}N_i=C^N_S(L)=\bigcap_{\fp\in\Sigma}C^N_\fp(L)=\bigcup_{t\in\N}(L:_N\fb^t)=N$. So assume that $\Sigma\neq \emptyset$. The first claimed equality is proved in \cite[Lemma 2.2]{f} and the second equality is proved in Lemma \ref{cont2}. Now we show that $\bigcap_{\fp_i\in\Sigma}N_i=\bigcup_{t\in\N}(L:_N\fb^t)$. Assume that $x\in \bigcup_{t\in\N}(L:_N\fb^t)$. Therefore $\fb^tx\subseteq L$ for some $t\in\N$ and so $\fb^tx\subseteq N_i$ for all $1\leq i\leq n$. Let $\fp_i\in\Sigma$. Since $N_i$ is a $\fp_i$-primary submodule of $N$, it follows from $\fb^tx\subseteq N_i$ that either $x\in N_i$ or $\fb^t\subseteq \fp_i$. If $\fb^t\subseteq \fp_i$, then $\fp\subseteq \fp_i$ for some $\fp\in\ass_R(N/L)\setminus \Sigma$. Since $\Sigma$ is an isolated subset of $\ass_R(N/L)$, we obtain $\fp\in\Sigma$, which is impossible. Therefore $x\in N_i$ and so
$\bigcup_{t\in\N}(L:_N\fb^t)\subseteq\bigcap_{\fp_i\in\Sigma}N_i$. To prove the reverse inclusion, assume that $x\in\bigcap_{\fp_i\in\Sigma}N_i$; we show that $x\in\bigcup_{t\in\N}(L:_N\fb^t)$. For each $1\leq i\leq n$, $\fp_i=\surd(\operatorname{Ann} (N/N_i))$ and hence there exists $r_i\in\N$ such that $\fp_i^{r_i}N\subseteq N_i$. Set $r=\max \{r_i: \fp_i\in\ass_R(N/L)\setminus\Sigma\}$. Then $\fb^rN\subseteq\bigcap_{\fp_i\in\ass_R(N/L)\setminus\Sigma}N_i$ and so $\fb^rx\subseteq\bigcap_{\fp_i\in\ass_R(N/L)\setminus\Sigma}N_i$. Also $\fb^rx\subseteq\bigcap_{\fp_i\in\Sigma}N_i$ because $x\in\bigcap_{\fp_i\in\Sigma}N_i$. Therefore
$$\textstyle{\fb^rx\subseteq (\bigcap_{\fp_i\in\ass_R(N/L)\setminus\Sigma}N_i)\cap(\bigcap_{\fp_i\in\Sigma}N_i)=\bigcap_{\fp_i\in\ass_R(N/L)}N_i=L}$$
and consequently $x\in\bigcup_{t\in\N}(L:_N\fb^t)$. This completes the proof.
\end{proof}
Let $N$ be a non-zero finitely generated $R$-module, $\fp\in\supp_R(N)$ and $n\in\N$.
Then we have
$$\mass_R(N/\fp^nN)=\msupp_R(N/\fp^nN)={\rm Min}({\rm V}(\fp)\cap\supp_R(N))=\{\fp\}.$$
Thus $\{\fp\}$ is an isolated subset of $\ass_R(N/\fp^nN)$. Therefore, by Proposition \ref{prop1}, the $\fp$-primary component of each minimal primary decomposition of $\fp^nN$ in $N$ is $C^N_\fp(\fp^nN)$ and hence it is uniquely determined by $n$, $\fp$ and $N$.
The $\fp$-primary component of $\fp^nN$ in $N$ is called the {\it $n$-th symbolic power} of $\fp$ with respect to $N$, denoted by $(\fp N)^{(n)}$.
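A classical example (standard in the literature; we include it for orientation, it is not taken from this paper) shows that, already for $N=R$, the symbolic power can be strictly larger than the ordinary power.

```latex
% Let R = k[x,y,z]/(xy - z^2) and \fp = (x,z); \fp is prime since
% R/\fp = k[y] is a domain. Then x*y = z^2 lies in \fp^2 while y does not
% lie in \fp, so x/1 lies in \fp^2 R_\fp and hence x belongs to the
% symbolic power (\fp R)^{(2)}; but x has degree 1, so x is not in \fp^2.
% Thus
\[
  \fp^{2} \subsetneq (\fp R)^{(2)}
  \qquad\text{for}\quad R=k[x,y,z]/(xy-z^{2}),\ \fp=(x,z).
\]
```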
\begin{lem} \label{sym1}
Let $N$ be a non-zero finitely generated $R$-module and $\fp\in\supp_R(N)$. Then
$$\textstyle{C^N_\fp(0)=\bigcap_{n\in\N}(\fp N)^{(n)}=\bigcap_{L\in\mathcal P}L,}$$
where $\mathcal P$ denotes the set of all $\fp$-primary submodules of $N$.
\end{lem}
\begin{proof}
We prove the claimed equalities in several steps.
1) $\ker(N\rightarrow N_\fp)\subseteq \bigcap_{L\in\mathcal P}L$. Assume that $x\in\ker(N\rightarrow N_\fp)$ and $L$ is an arbitrary $\fp$-primary submodule of $N$. Since $x/1$ is zero in $N_\fp$, there exists $s\in R\setminus \fp$ such that $sx=0\in L$. As $s\notin \fp$ and $L$ is a $\fp$-primary submodule of $N$, we have $x\in L$.
2) $\bigcap_{L\in\mathcal P}L\subseteq \bigcap_{n\in\N}(\fp N)^{(n)}$. This inclusion is obvious because $(\fp N)^{(n)}$ is a $\fp$-primary submodule of $N$ for all $n$. We recall that $(\fp N)^{(n)}=C^N_\fp(\fp^nN)$ is the unique $\fp$-primary component of every minimal primary decomposition of $\fp^nN$ in $N$.
3) $\bigcap_{n\in\N}(\fp N)^{(n)}\subseteq \ker(N\rightarrow N_\fp)$. Assume that $x\in\bigcap_{n\in\N}(\fp N)^{(n)}$. Then $x/1\in\bigcap_{n\in\N}(\fp^nR_\fp)N_\fp$. Now Krull's Intersection Theorem \cite[Theorem 8.10]{mat} implies that $\bigcap_{n\in\N}(\fp^nR_\fp)N_\fp=0$ and so $x/1=0$. Therefore $x\in\ker(N\rightarrow N_\fp)$.
\end{proof}
\begin{lem}\label{cont} Let $L$ be a submodule of a non-zero finitely generated $R$-module $N$. Then the following statements hold.
\begin{enumerate}[\rm(i)]
\item If $\fp_1,\fp_2\in\supp_R(N)$ are such that $\fp_1\subseteq\fp_2$, then $C^{N}_{\fp_2}(L)\subseteq C^{N}_{\fp_1}(L)$.
\item If $\fp\in\supp_R(N)$, then $\operatorname{Ann}_R (N/C^N_\fp(L))=C_\fp(\operatorname{Ann}_R (N/L))$. In particular, $\operatorname{Ann}_R (N/C^N_\fp(0))=C_\fp(\operatorname{Ann}_R (N))$.
\item If $\fp\in\supp_R(N)$, then $C_\fp(\operatorname{Ann}_R (N))\subseteq\fq$ for all $\fq\in\supp_R(N)$ with $\fq\subseteq\fp$. In particular, $\Ht N{C_\fp(\operatorname{Ann}_R (N))}=0$.
\end{enumerate}
\end{lem}
\begin{proof}
The statement (i) is obvious. To prove (ii), assume that $\fp\in\supp_R(N)$ and $r\in R$. Then we have
\begin{align*}
&\textstyle{r\in\operatorname{Ann}_R (N/C^N_\fp(L))\Leftrightarrow rN\subseteq C^N_\fp(L)\Leftrightarrow {r\over 1} N_\fp\subseteq L_\fp}\\
&\textstyle{\Leftrightarrow {r\over 1}\in\operatorname{Ann}_{R_\fp}(N_\fp/L_\fp)\Leftrightarrow r\in C_\fp(\operatorname{Ann}_R (N/L)).}
\end{align*}
(Note that $(\operatorname{Ann}_R(N/L))_\fp=\operatorname{Ann}_{R_\fp}(N_\fp/L_\fp)$ by \cite[Proposition 3.14]{at}.) Now we prove (iii). Let $\fp, \fq\in\supp_R(N)$ with $\fq\subseteq\fp$. Then, by (i), $C_\fp(\operatorname{Ann}_R (N))\subseteq C_\fq(\operatorname{Ann}_R (N))$. Also if $r\in C_\fq(\operatorname{Ann}_R (N))$, then $(r/1)N_\fq=0$. Therefore $r/1$ is not a unit in $R_\fq$ and so $r\in\fq$. Thus $C_\fq(\operatorname{Ann}_R (N))\subseteq\fq$. These inclusions prove (iii).
\end{proof}
\begin{lem}\label{lem5}
Let $L$ be a submodule of a non-zero $R$-module $N$ and $\fp, \fq\in\supp_R(N)$ with $\fp\subseteq\fq$. Then $(C^N_\fp(L))_\fq=C^{N_\fq}_{\fp R_\fq}(L_\fq)$.
\end{lem}
\begin{proof}
Let $\alpha\in(C^N_\fp(L))_\fq$. Hence, in $N_\fq$, $\alpha=x/s$ for some $s\in R\setminus \fq$ and some $x\in C^N_\fp(L)$. Therefore, in $N_\fp$, $x/1=l/t$ for some $l\in L$ and some $t\in R\setminus \fp$. It follows that $xtt'=lt'$ for some $t'\in R\setminus \fp$. Since $tt'/1\in R_\fq\setminus \fp R_\fq$, in $(N_\fq)_{\fp R_\fq}$ we have
${x\over s}/{1\over 1}={tt'\over 1}{x\over s}/{tt'\over 1}={lt'\over s}/{tt'\over 1}\in (L_\fq)_{\fp R_\fq}.$
This means that $\alpha=x/s$ is an element of $C^{N_\fq}_{\fp R_\fq}(L_\fq)$ and so $(C^N_\fp(L))_\fq\subseteq C^{N_\fq}_{\fp R_\fq}(L_\fq)$.
Now we prove the reverse inclusion. Assume that $\alpha\in C^{N_\fq}_{\fp R_\fq}(L_\fq)$. Hence $\alpha=x/s$ for some $x\in N$ and some $s\in R\setminus\fq$, and in $(N_\fq)_{\fp R_\fq}$ we have ${x\over s}/{1\over 1}\in (L_\fq)_{\fp R_\fq}$. Therefore there exists $l/s'\in L_\fq$ with $l\in L, s'\in R\setminus \fq$ and $t/s''\in R_\fq\setminus\fp R_\fq$ with $t\in R\setminus \fp, s''\in R\setminus \fq$ such that ${x\over s}/{1\over 1}={l\over s'}/{t\over s''}$ in $(N_\fq)_{\fp R_\fq}$. It follows that there exists $t'/s'''\in R_\fq\setminus \fp R_\fq$ with $t'\in R\setminus \fp, s'''\in R\setminus \fq$ such that
${x \over s}{t\over s''}{t'\over s'''}={l\over s'}{t'\over s'''}$ in $N_\fq$.
Thus $xtt's's'''s^{iv}=lt'ss''s'''s^{iv}$ for some $s^{iv}\in R\setminus \fq$. Since $R\setminus\fq\subseteq R\setminus \fp$ and $\fp$ is prime, setting $t'':=tt's's'''s^{iv}\in R\setminus \fp$ and $l':=lt'ss''s'''s^{iv}\in L$ gives $xt''=l'$. Hence, in $N_\fp$ we have
$x/1=xt''/t''=l'/t''\in L_\fp$
and so $x\in C^N_\fp(L)$. Therefore $\alpha=x/s\in (C^N_\fp(L))_\fq$ and hence $C^{N_\fq}_{\fp R_\fq}(L_\fq)\subseteq(C^N_\fp(L))_\fq$. This completes the proof.
\end{proof}
\section{\bf A lower bound for the annihilator of local cohomology}
Let $N$ be a finitely generated $R$-module, $\fa$ an ideal of $R$ and $t$ an arbitrary non-negative integer.
In this section, we provide a lower bound for the annihilator of the local cohomology $\h t{\fa}N$; see Theorem \ref{lower}.
We recall that the {\it cohomological dimension} of an $R$-module $N$ with respect to an ideal $\fa$ is defined as $\cdd R{\fa}N:=\sup\{i\in\N_0: \h i{\fa}N\neq 0\}$. The {\it arithmetic rank} of $\fa$, denoted by $\textrm{ara}(\fa)$, is the least number of elements of $R$ required to generate an ideal which has the same radical as $\fa$. By \cite[Corollary 3.3.3]{bs}, $\cdd R{\fa}N\leq \textrm{ara}(\fa)<\infty$ and hence $\cdd R{\fa}N\in \N_0\cup\{-\infty\}$. Also, it follows from \cite[Exercise 6.2.6 and Theorem 6.2.7]{bs} that $\h i{\fa}N=0$ for all $i\in\N_0$ if and only if $N=\fa N$. Therefore $\cdd R {\fa}N\in\N_0$ when $N\neq\fa N$ and $\cdd R{\fa}N=-\infty$ when $N=\fa N$.
\begin{lem}[See {\cite[Theorem 1.2]{dnt} or \cite[Proposition 4.7]{cjr}}]\label{cd}
Let $N$ and $L$ be two finitely generated $R$-modules and $\fa$ an ideal of $R$. If $\supp_R(N)\subseteq \supp_R(L)$, then $\cdd R {\fa}N\leq\cdd R {\fa}L$.
In particular,
$\cdd R {\fa}N=\cdd R {\fa}L$ whenever $\supp_R(N)=\supp_R(L)$.
\end{lem}
Let $N$ be a finitely generated $R$-module and $\fa$ an ideal of $R$. Since $\supp_R(N)=\supp_R(\bigoplus_{\fp\in\ass_R(N)}R/\fp)$, Lemma \ref{cd} implies that $\cdd R {\fa}N=\max\{\cdd R {\fa}{R/\fp}: \fp\in\ass_R(N)\}$. We will often use this fact and Lemma \ref{cd} without explicit mention. We refer the reader to
\cite[Sec. 4]{cjr} and \cite{dnt} for more details about the cohomological dimension.
\begin{thm}\label{lower} Let $N$ be a non-zero finitely generated $R$-module and $\fa$ an ideal of $R$. Let
$0=N_1\cap\hdots\cap N_n$ be a minimal primary decomposition of the zero submodule of $N$ with $\ass_R(N/N_i):=\{\fp_i\}$ for all $1\leq i\leq n$. For each $t\in\mathbb Z$, set $\Delta(t):=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}\geq t\}$.
Then the following statements hold.
\begin{enumerate}[\rm(i)]
\item The following equalities hold:
$$\textstyle{\bigcap_{\fp_i\in\Delta(t)}N_i=\bigcap_{\fp\in\Delta(t)}C_\fp^N(0)=C^N_{S(t)}(0)=\gam {\fa (t)}N,}$$ where $S(t):=R\setminus\bigcup_{\fp\in\Delta(t)}\fp$ and $\fa (t):=\bigcap_{\fp\in\ass_R(N),\ \cdd R{\fa}{R/\fp}<t}\fp$. In particular, $\bigcap_{\fp_i\in\Delta(t)}N_i$ is independent of the choice of minimal primary decomposition of the zero submodule of $N$.
\item For each submodule $L$ of $N$, $\cdd R{\fa}L<t$ if and only if $L\subseteq C^N_{S(t)}(0)$.
\item The annihilator of $\h t{\fa}N$ admits the following lower bound:
$$\textstyle{\operatorname{Ann}_R({N}/\bigcap_{\fp\in\Delta(t)}C_\fp^N(0))=\bigcap_{\fp\in\Delta(t)}C_\fp(\operatorname{Ann}_R(N))\subseteq\operatorname{Ann}_R(\h t{\fa}N).}$$
\end{enumerate}
\end{thm}
\begin{proof}
(i) If $\fq\in\ass_R (N)$ and $\fq\subseteq \fp$ for some $\fp\in\Delta(t)$, then $t\leq\cdd R{\fa}{R/\fp}\leq\cdd R{\fa}{R/\fq}$ and so $\fq\in\Delta(t)$. Therefore
$\Delta(t)$ is an isolated subset of $\ass_R (N)$ and hence (i) follows from Proposition \ref{prop1}.
(ii) Set $C=C^N_{S(t)}(0)$. Then, by (i), we have
\begin{align*}
&\ass_R (C)=\textstyle{\ass_R(\bigcap_{\fp_i\in\Delta(t)}N_i)=\ass_R(N)\setminus\Delta(t)}\\
&=\{\fp\in\ass_R (N): \cdd R{\fa}{R/\fp}<t\}.
\end{align*}
Hence, $\cdd R{\fa}{C}<t$ and so, for each submodule $L$ of $C$, $\cdd R{\fa}L\leq\cdd R{\fa}C<t$. Conversely,
if $L$ is a submodule of $N$ such that $\cdd R{\fa}L<t$, then
$$\ass_R ({L}/({L\cap C}))=\ass_R (({L+C})/{C})\subseteq\ass_R ({N}/{C})=\Delta(t).$$
Thus if $\ass_R ({L}/({L\cap C}))\neq\emptyset$, then $t\leq\cdd R{\fa}{{L}/({L\cap C})}\leq\cdd R{\fa}L$, which is impossible. Therefore
$L\subseteq C$ and the proof of (ii) is completed.
(iii) By (ii), $\cdd R{\fa}{C}<t$. Therefore $\h t{\fa}N\cong\h t{\fa}{N/C}$ and hence
$$\operatorname{Ann}_R(N/C)\subseteq\operatorname{Ann}_R(\h t{\fa}{N/C})=\operatorname{Ann}_R(\h t{\fa}N).$$
\end{proof}
\begin{defn} \label{def1} Let $N$ be a non-zero finitely generated $R$-module, $\fa$ an ideal of $R$ and $t\in\N_0$. Let $0=N_1\cap\dots\cap N_n$ be a minimal primary decomposition of the zero submodule of $N$ with $\ass_R(N/N_i):=\{\fp_i\}$ for all $1\leq i\leq n$. We set $\Delta(t):=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}\geq t\}$, $S(t):=R\setminus\bigcup_{\fp\in\Delta(t)}\fp$ and $\fa(t):=\bigcap_{\fp\in\ass_R(N)\setminus\Delta(t)}\fp$. Then, by Theorem \ref{lower},
$$\textstyle{\bigcap_{\fp_i\in\Delta(t)}N_i=\bigcap_{\fp\in\Delta(t)}C_\fp^N(0)=C^N_{S(t)}(0)=\gam {\fa (t)}N}$$
is the largest submodule $L$ of $N$ with the property that $\cdd R{\fa}L<t$.
We denote this submodule of $N$ by $C^t(\fa, N)$. If $N=\fa N$, then $\Delta(t)=\emptyset$, $S(t)=R$ and $\fa(t)=\surd(\operatorname{Ann}_R(N))$. Thus $C^t(\fa, N)=N$ for all $t$. If $N\neq\fa N$, then $d:=\grad{R} {\fa}N$ and $c:=\cdd R{\fa}N$ are finite non-negative integers and we denote $C^d(\fa, N)$ and $C^c(\fa, N)$ by ${\rm S}(\fa, N)$ and ${\rm T}(\fa, N)$ respectively.
\end{defn}
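A small worked instance of these notations (a hypothetical example of our own, not drawn from the cited sources):

```latex
% Hypothetical instance of the definition:
% $R=K[x,y]$, $N=R/(xy)$, $\fa=(x)$, so $0=(x)/(xy)\cap(y)/(xy)$ is a minimal
% primary decomposition and $\ass_R(N)=\{(x),(y)\}$. Here $\cdd R{\fa}{R/(x)}=0$
% (as $x$ kills $R/(x)$) and $\cdd R{\fa}{R/(y)}=1$ (as $R/(y)\cong K[x]$);
% every element of $(x)$ is a zerodivisor on $N$, so $d=0$ and $c=1$. Consequently
\begin{align*}
&\Delta(1)=\{(y)\}, \qquad {\rm T}(\fa,N)=C^{1}(\fa,N)=(y)/(xy)\cong R/(x),\\
&\Delta(0)=\ass_R(N), \qquad {\rm S}(\fa,N)=C^{0}(\fa,N)=0,
\end{align*}
% and indeed $\cdd R{\fa}{{\rm T}(\fa,N)}=0<1$.
```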
In Proposition \ref{prop5}, we give some properties of $C^t(\fa, N)$. The following lemma is needed.
\begin{lem}\label{grad}
Let $N$ be a finitely generated $R$-module and $\fa$ an ideal of $R$ with $N\neq \fa N$. Then for each $\fp\in \ass_R(N)$ with $\fa+\fp\neq R$, the following inequalities hold:
$$\grad{R} {\fa}N\leq\Ht {R/\fp}{\fa+\fp}\leq \cdd R{\fa}{R/\fp}.$$
In particular, for each $\fp\in \ass_R(N)$, $\cdd R{\fa}{R/\fp}<\grad{R} {\fa}N$ if and only if $\cdd R{\fa}{R/\fp}=-\infty$ or equivalently $\fa+\fp=R$.
\end{lem}
\begin{proof}
Assume that $\fp\in\ass_R(N)$ with $\fa+\fp\neq R$ and $\fq$ is a prime ideal of $R$ containing $\fa+\fp$ such that $\Ht {R/\fp}{\fa+\fp}=\Ht {R/\fp}{\fq/\fp}$. Then we have
$$\grad{R} {\fa}N\leq\grad{R} {\fq}N\leq\depth_{R_\fq}({N_\fq}).$$
Also, the depth formula \cite[Lemma 9.3.2]{bs} yields
$$\depth_{R_\fq}({N_\fq})\leq\depth_{R_\fp}(N_\fp)+\Ht {R/\fp}{\fq/\fp}=\Ht {R/\fp}{\fq/\fp}=\Ht {R/\fp}{\fa+\fp}.$$
These inequalities prove the first claimed inequality. To prove the other inequality, we set $\bar{R}:=R/\fp$.
Since $\fa\bar R$ is a proper ideal of $\bar R$, it follows from \cite[Exercise 7.3.4]{bs} and the Independence Theorem that
$$\Ht{\bar{R}}{\fa \bar{R}}\leq\cdd {\bar R}{\fa \bar R}{\bar R}=\cdd R{\fa}{R/\fp}.$$
This proves the second claimed inequality. The last assertion follows immediately from the first part.
\end{proof}
\begin{defn} Let $N$ be a finitely generated $R$-module and $\fa$ an ideal of $R$. We say that $N$ is relative Cohen-Macaulay with respect to $\fa$ if $\grad{R} {\fa}N=\cdd R{\fa}N$ or equivalently there is precisely one non-vanishing local cohomology
module of $N$ with respect to $\fa$; see \cite{ra}.
\end{defn}
Note that if $N=\fa N$, then $\h i{\fa}N=0$ for all $i\in\N_0$ and hence $\grad{R} {\fa}N=\inf\{i\in\N_0: \h i{\fa}N\neq 0\}=\infty$ and $\cdd R{\fa}N=\sup\{i\in\N_0: \h i{\fa}N\neq 0\}=-\infty$. Therefore $\grad{R} {\fa}N\neq\cdd R{\fa}N$ and so $N$ is not relative Cohen-Macaulay with respect to $\fa$ in this case.
\begin{cor}\label{rel} Let $\fa$ be an ideal of $R$ and $N$ a finitely generated $R$-module. If $N$ is relative Cohen-Macaulay with respect to $\fa$, then, for each $\fp\in\ass_R(N)$ with $\fa+\fp\neq R$, the following equalities hold:
$$\grad{R} {\fa}N=\Ht {R/\fp}{\fa+\fp}=\cdd R{\fa}{R/\fp}=\cdd R{\fa}N.$$
\end{cor}
\begin{proof}
It is an immediate consequence of Lemmas \ref{grad} and \ref{cd}.
\end{proof}
\begin{prop}\label{prop5}
Let $N$ be a finitely generated $R$-module and $\fa$ an ideal of $R$ such that $N\neq\fa N$. Set $d:=\grad{R}{\fa}N$, $c:=\cdd R{\fa}N$ and, for each $t\in\mathbb Z$, $\Delta(t):=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}\geq t\}$.
The following statements hold.
\begin{enumerate}[\rm(i)]
\item The following equalities hold:
\begin{align*}
&\ass_R(N/C^t(\fa, N))=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}\geq t\},\\
&\ass_R(C^t(\fa, N))=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}<t\},\\
&\ass_R(C^{t+1}(\fa, N)/C^t(\fa, N))=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}=t\}.
\end{align*}
In particular, $C^t({\fa}, N)=N$ if and only if $\Delta(t)=\emptyset$; and $C^t({\fa}, N)=0$ if and only if $\Delta(t)=\ass_R(N)$.
\item $C^t(\fa, N)=N$ for all $t>c$; $C^t(\fa, N)={\rm S}(\fa, N)$ for all $t\leq d$; and
$\{C^i(\fa, N)\}_{i\in\mathbb Z}$ gives the following bounded ascending chain
$${\rm S}(\fa, N)=C^d(\fa, N)\subseteq\dots\subseteq C^c(\fa, N)=\T{\fa}N\subsetneqq C^{c+1}(\fa, N)=N$$
of submodules of $N$ such that, for each $d\leq t\leq c$, $\cdd R{\fa}{C^{t+1}(\fa, N)/C^t(\fa, N)}=t$ whenever $C^{t}(\fa, N)\neq C^{t+1}(\fa, N)$.
\item $\ass_R({\rm S}(\fa, N))=\{\fp\in\ass_R(N): \fa+\fp=R\}$ and $\ass_R(N/{\rm S}(\fa, N))=\{\fp\in\ass_R(N): \fa+\fp\neq R\}.$
\item ${\rm S}(\fa, N)=0$ if and only if for each $a\in\fa$, $1-a$ is a non-zerodivisor on $N$.
\item For each submodule $L$ of $N$, $L=\fa L$ (or equivalently $\h i{\fa}L=0$ for all $i\in \N_0$) if and only if $L\subseteq{\rm S}(\fa, N)$.
In particular, the following statements are equivalent.
\begin{enumerate}[(1)]
\item For each $a\in\fa$, $1-a$ is a non-zerodivisor on $N$.
\item For each submodule $L$ of $N$, $L=\fa L$ if and only if $L=0$.
\end{enumerate}
\end{enumerate}
\end{prop}
\begin{proof}
(i) Let $0=N_1\cap\dots\cap N_n$ be a minimal primary decomposition of the zero submodule of $N$ with $\ass_R(N/N_i)=\{\fp_i\}$ for all $1\leq i\leq n$. By definition, $C^t(\fa, N)=\bigcap_{\fp_i\in\Delta(t)}N_i$. Now by \cite[Lemma 2.1]{f}, we have
\begin{align*}
&\ass_R(N/C^t(\fa, N))=\Delta(t)=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}\geq t\},\\
&\ass_R(C^t(\fa, N))=\ass_R(N)\setminus\Delta(t)=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}<t\}.
\end{align*}
To prove the last claimed equality, assume that $\fp\in\ass_R(C^{t+1}(\fa, N)/C^t(\fa, N))$. Since $C^{t+1}(\fa, N)/C^t(\fa, N)$ is a submodule of $N/C^t(\fa, N)$, $\cdd R{\fa}{R/\fp}\geq t$. On the other hand, $\fp\in\supp_R(C^{t+1}(\fa, N))$ and so $\cdd R{\fa}{R/\fp}\leq\cdd R{\fa}{C^{t+1}(\fa, N)}< t+1$. Therefore $$\ass_R(C^{t+1}(\fa, N)/C^t(\fa, N))\subseteq\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}=t\}.$$
Now we prove the reverse inclusion. Assume that $\fp\in\ass_R(N)$ and $\cdd R{\fa}{R/\fp}=t$. Hence $\fp\in\ass_R(N/C^t(\fa, N))\setminus\ass_R(N/C^{t+1}(\fa, N))$. It follows from the exact sequence $0\rightarrow C^{t+1}(\fa, N)/C^t(\fa, N)\rightarrow N/C^t(\fa, N)\rightarrow N/C^{t+1}(\fa, N)\rightarrow 0$ that $\fp\in\ass_R(C^{t+1}(\fa, N)/C^t(\fa, N))$. This proves the reverse inclusion.
(ii) If $t>c$, then $\Delta(t)=\emptyset$ and so $C^t(\fa, N)=N$ by (i). Also for $t\leq d$ and $\fp\in\ass_R(N)$, Lemma \ref{grad} implies that $\cdd R{\fa}{R/\fp}\geq t$ when $\fa+\fp\neq R$, and $\cdd R{\fa}{R/\fp}=-\infty$ when $\fa+\fp=R$. Thus $\Delta(t)=\{\fp\in\ass_R(N): \fa+\fp\neq R\}=\Delta(d)$ and consequently $C^t(\fa , N)={\rm S}(\fa, N)$ for all $t\leq d$. Also, it is clear by definition that $\T {\fa}N\neq N$ and $C^t(\fa, N)\subseteq C^{t+1}(\fa, N)$ for all $t$. The final assertion follows from (i).
(iii) By Lemma \ref{grad}, we have $\Delta(d)=\{\fp\in\ass_R(N): \fa+\fp\neq R\}$. Now (iii) follows from (i).
(iv) Since $\operatorname{Zdv}_R(N)=\bigcup_{\fp\in\ass_R(N)}\fp$, $1-a$ is a non-zerodivisor on $N$ for all $a\in\fa$, if and only if $\fa+\fp\neq R$ for all $\fp\in\ass_R(N)$ or equivalently $\ass_R({\rm S}(\fa, N))=\emptyset$. This proves (iv).
(v) By (ii), $C^0(\fa, N)={\rm S}(\fa, N)$. Therefore it follows from Theorem \ref{lower}(ii) that for each submodule $L$ of $N$, $\h i{\fa}L=0$ for all $i\in\N_0$ (or equivalently $L=\fa L$) if and only if $L\subseteq {\rm S}(\fa, N)$. The last assertion follows from (iv). This completes the proof.
\end{proof}
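Items (iii)--(v) can be checked in a toy case (a hypothetical example of our own, not taken from the cited sources):

```latex
% Hypothetical check: $R=\mathbb{Z}$, $N=\mathbb{Z}\oplus\mathbb{Z}/3$, $\fa=(5)$.
% Here $\ass_R(N)=\{(0),(3)\}$ with $(5)+(3)=R$ and $(5)+(0)\neq R$, so (iii) gives
\begin{align*}
\ass_R({\rm S}(\fa,N))=\{(3)\}, \qquad {\rm S}(\fa,N)=0\oplus\mathbb{Z}/3.
\end{align*}
% Indeed $5\cdot(0\oplus\mathbb{Z}/3)=0\oplus\mathbb{Z}/3$, so $L:=0\oplus\mathbb{Z}/3$
% satisfies $L=\fa L$ and is the largest such submodule, as in (v); consistently
% with (iv), $1-10=-9$ lies in $1-\fa$ and is a zerodivisor on $N$.
```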
\section{\bf An upper bound for the annihilator of top local cohomology and Lynch's conjecture}
In \cite[Theorem 3.4]{f}, we provided a bound for the annihilator of top local cohomology. In the following theorem, we establish a sharper upper bound for this annihilator. Using this theorem, we can compute the annihilators of top local cohomology modules in certain cases, and we are able to construct a counterexample to Lynch's conjecture.
\begin{thm} \label{annh2} Let $N$ be a finitely generated $R$-module and $\fa$ an ideal of $R$ with $N\neq\fa N$. Set $c:=\cdd R {\fa}N$, $\Delta:=\{\fp\in\ass_R(N): \cdd R {\fa}{R/\fp}=c\}$ and $\Sigma:=\{\fp\in\supp_R(N): \cdd R {\fa}{R/\fp}=\dim_R (R/\fp)=c\}$.
Then
\begin{align*} {\operatorname{Ann}_R({N}/\operatorname{T}({\fa}, N))}\subseteq\operatorname{Ann}_R(\h c{\fa}N)
\textstyle{\subseteq \operatorname{Ann}_R (N/\bigcap_{\fp\in\Sigma}C^N_{\fp}(0))}.
\end{align*}
Moreover, if for each $\fp\in\Delta$ there exists $\fq\in\Sigma$ with $\fp\subseteq\fq$, then
$$\textstyle{\operatorname{Ann}_R(\h c{\fa}N)=\operatorname{Ann}_R (N/\T {\fa}N)=\operatorname{Ann}_R(N/\bigcap_{\fp\in\Sigma}C^N_{\fp}(0)).}$$
\end{thm}
\begin{proof}
The first claimed inclusion follows from Theorem \ref{lower}(iii). Now we prove the second inclusion. If $\Sigma=\emptyset$, then $\bigcap_{\fp\in\Sigma}C^N_{\fp}(0)=N$ and so there is nothing to prove. Hence assume that $\Sigma\neq\emptyset$ and $\fp\in\Sigma$. Let $L$ be an arbitrary $\fp$-primary submodule of $N$. Since $\ass_R(N/L)=\{\fp\}$, we obtain
\begin{align*}
&\{\fq\in\ass_R(N/L): \cdd R {\fa}{R/\fq}=\cdd R {\fa}{N/L}\}=\{\fp\}\\&=\{\fq\in\ass_R(N/L): \cdd R {\fa}{R/\fq}=\dim_R(R/\fq)=\cdd R {\fa}{N/L}\}.
\end{align*}
Therefore $\operatorname{Ann}_R (\h c{\fa}{N/L})=\operatorname{Ann}_R (N/L)$ by \cite[Theorem 3.4(iii)]{f}.
Also, the exact sequence $0\rightarrow L\rightarrow N\rightarrow N/L\rightarrow 0$ induces the epimorphism
$\h c{\fa}N\rightarrow \h c{\fa}{N/L}$ and so
$$\operatorname{Ann}_R (\h c{\fa}{N})\subseteq\operatorname{Ann}_R (\h c{\fa}{N/L})=\operatorname{Ann}_R (N/L).$$
Since $\fp$ is an arbitrary element of $\Sigma$ and $L$ is an arbitrary $\fp$-primary submodule of $N$, it follows from the above inclusion in view of Lemma \ref{sym1} that
\begin{align*}
&\operatorname{Ann}_R (\h c{\fa}{N})\subseteq\textstyle{\bigcap_{\fp\in\Sigma}\, \bigcap_{L\in\mathcal P}\operatorname{Ann}_R ({N/L})}
=\textstyle{\bigcap_{\fp\in\Sigma}\operatorname{Ann}_R ({N/\bigcap_{L\in\mathcal P}L})}\\
&=\textstyle{\bigcap_{\fp\in\Sigma}\operatorname{Ann}_R ({N/C^N_{\fp}(0)})}=\textstyle{\operatorname{Ann}_R (N/\bigcap_{\fp\in\Sigma}C^N_{\fp}(0)),}
\end{align*}
where $\mathcal P$ denotes the set of all $\fp$-primary submodules of $N$. This proves the second claimed inclusion.
Finally, assume that $0=N_1\cap\dots\cap N_n$ is a minimal primary decomposition of the zero submodule of $N$ with $\ass_R(N/N_i):=\{\fp_i\}$ for all $1\leq i\leq n$ and assume that for each $\fp_i\in\Delta$ there exists $\fq_i\in\Sigma$ such that $\fp_i\subseteq\fq_i$. Since $N_i$ is a $\fp_i$-primary submodule of $N$, by Lemma \ref{sym1}, we have $C^N_{\fp_i}(0)\subseteq N_i$ and so $\bigcap_{\fq\in\Sigma}C^N_{\fq}(0)\subseteq C^N_{\fq_i}(0)\subseteq C^N_{\fp_i}(0)\subseteq N_i$. Since $\fp_i$ is an arbitrary element of $\Delta$, we obtain $\bigcap_{\fq\in\Sigma}C^N_{\fq}(0)\subseteq \bigcap_{\fp_i\in\Delta}N_i=\T {\fa}N,$ see Definition \ref{def1}. Hence
$\operatorname{Ann}_R(N/\bigcap_{\fq\in\Sigma}C^N_{\fq}(0))\subseteq\operatorname{Ann}_R(N/\T {\fa}N)$.
Now the first part of theorem gives the claimed equalities and the proof is completed.
\end{proof}
Let $N$ be a finitely generated $R$-module of dimension $n\geq 1$, $\fa$ an ideal of $R$ with $N\neq\fa N$ and $c:=\cdd R {\fa}N$. In \cite[Theorems 1.1 and 1.2]{an2022}, Atazadeh and Naghipour, as one of their main results, proved that $\Ht N{\operatorname{Ann}_R (\h c{\fa}N)}<n-c$. In particular, $\Ht N{\operatorname{Ann}_R (\h c{\fa}N)}=0$ when $c=n-1$. In the following corollary, we prove their result by using Theorem \ref{annh2}, and we also show that if there exists $\fp\in\supp_R(N)$ with $\cdd R {\fa}{R/\fp}=\dim_R(R/\fp)=c$ (where $c$ is not necessarily equal to $n-1$), then $\operatorname{Ann}_R (\h c{\fa}N)$ has $N$-height zero.
\begin{lem}\label{lem3} Let $N$ be a finitely generated $R$-module, $\fa$ an ideal of $R$ with $N\neq\fa N$ and $c:=\cdd R{\fa}N$. Then
$\h c{\fa}{N\otimes_R(\cdot)}\cong \h c{\fa}N\otimes_R(\cdot)$ on the category of $R$-modules and $R$-homomorphisms.
\end{lem}
\begin{proof}
Set $\bar{R}:=R/\operatorname{Ann}_R(N)$. It follows from the Independence Theorem \cite[Theorem 4.2.1]{bs} and Lemma \ref{cd} that $\cdd {\bar R}{\fa\bar{R}}{\bar{R}}=\cdd R {\fa}{\bar{R}}=\cdd R {\fa}{N}=c$. Hence $\h c{\fa\bar{R}}{\cdot}$ is a right exact functor on the category of $\bar{R}$-modules. Furthermore $\h c{\fa\bar{R}}{\cdot}$ is an additive functor that preserves direct sums. Therefore $\h c{\fa\bar{R}}{\cdot}$ is naturally isomorphic to $\h c{\fa \bar{R}}{\bar{R}}\otimes_{\bar{R}}(\cdot)$ on the category of $\bar{R}$-modules; see \cite[Theorem 5.45]{r}. Since $N$ is an $\bar R$-module, $N\otimes_RM$ has an $\bar R$-module structure for each $R$-module $M$ and so, on the category of $R$-modules, we have
\begin{align*}
&\h c{\fa}{N\otimes_R(\cdot)}\cong\h c{\fa \bar{R}}{N\otimes_R(\cdot)}\cong N\otimes_R(\cdot)\otimes_{\bar{R}}\h c{\fa\bar{R}}{\bar{R}}\\
&\cong(\cdot)\otimes_RN\otimes_{\bar{R}}\h c{\fa\bar{R}}{\bar{R}}
\cong (\cdot)\otimes_R\h c {\fa\bar{R}}{N}
\cong(\cdot)\otimes_R\h c{\fa}N.
\end{align*}
\end{proof}
\begin{cor} \label{cor1} Let $\fa$ be an ideal of $R$ and $N$ a finitely generated $R$-module of dimension $n$ such that $N\neq \fa N$. Let $c:=\cdd R{\fa}{N}$ and let $\fp\in\supp_R(N)$ be such that $\cdd R {\fa}{R/\fp}=c$.
\begin{enumerate}[\rm(i)]
\item If $\dim_R(R/\fp)=c$, then $\Ht N {\operatorname{Ann}_R(\h {c}{\fa}N)}=0$.
\item If $\dim_R(R/\fp)>c$, then $\Ht N {\fp}\leq n-c-1$.
\end{enumerate}
In particular, if $n\geq 1$ and $c=n-1$, then $\Ht N {\operatorname{Ann}_R (\h {c}{\fa}N)}=0$.
\end{cor}
\begin{proof}
We set $J:=\operatorname{Ann}_R (\h {c}{\fa}N)$. If $\dim_R(R/\fp)=c$, then $J\subseteq \operatorname{Ann}_R (N/C^N_{\fp}(0))=C_\fp(\operatorname{Ann}_R (N))$ by Theorem \ref{annh2}. Hence (i) is an immediate consequence of Lemma \ref{cont}(iii). Now to prove (ii), suppose that $\dim_R(R/\fp)>c$. Therefore
$$c=\cdd R {\fa}{R/\fp}<\dim_R(R/\fp)\leq n-\Ht N {\fp}$$
and hence $\Ht N {\fp}<n-c$. This proves (ii). Finally, assume that $c=n-1$. By Lemma \ref{lem3}, $\h c{\fa}{N/J N}\cong\h c{\fa}N/J \h c{\fa}N$. Since $J\h c{\fa}N=0$,
$\h c{\fa}{N/J N}\cong\h c{\fa}N$.
Therefore $\cdd R {\fa}{N/JN}=c$. Hence there exists $\fp\in\supp_R(N/JN)=\operatorname{V}(J)$ such that $\cdd R {\fa}{R/\fp}=c$. If $\dim_R(R/\fp)=c$, then $\Ht NJ=0$ by (i). Otherwise, by (ii), $\Ht N{\fp}=0$. As $J\subseteq\fp$, we obtain $\Ht NJ=0$.
\end{proof}
\begin{rem}\label{rem2} Let $\fa, \fb$ be ideals of $R$ and $N$ a finitely generated $R$-module. It follows from the Independence Theorem that $\h i{\fa}N\cong\h i{\fb}N$ for all $i$ when
$\fa+\operatorname{Ann}_R(N)=\fb+\operatorname{Ann}_R(N)$; see \cite[Theorem 4.2.1]{bs}. This fact is used in Example \ref{exa}. Furthermore, we proved, in \cite[Theorem 2.2]{f2018}, that
\begin{align*}
&\inf\{i\in\N_0: \h i{\fa}N\ncong\h i{\fb}N\}=\fgrad {\fa+\fb}{\fa\cap\fb}N\\
&=\inf\{\depth {N_\fp}: \fp\in \operatorname{V}(\fa+\operatorname{Ann}_R(N))\triangle\operatorname{V}(\fb+\operatorname{Ann}_R(N))\},
\end{align*}
where $A\triangle B=(A\cup B)\setminus(A\cap B)$ denotes the symmetric difference of the sets $A$ and $B$ and for ideals $\fa$ and $\fb$, $\fgrad{\fa}{\fb}N$ denotes
the $\fa$-filter grade of $\fb$ on $N$; see \cite{f2015} for definition and basic properties. In particular, $\h i{\fa}N\cong\h i{\fb}N$ for all $i$ if and only if
$\surd(\fa+\operatorname{Ann}_R(N))=\surd(\fb+\operatorname{Ann}_R(N))$.
\end{rem}
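The final criterion can be checked on a small hypothetical example of our own:

```latex
% Hypothetical check: $R=\mathbb{Z}$, $N=\mathbb{Z}/4$, $\fa=(2)$, $\fb=(6)$. Then
\begin{align*}
\surd(\fa+\operatorname{Ann}_R(N))=\surd((2)+(4))=(2)
  =\surd((6)+(4))=\surd(\fb+\operatorname{Ann}_R(N)),
\end{align*}
% so $\h i{\fa}N\cong\h i{\fb}N$ for all $i$; concretely, both are
% $\mathbb{Z}/4$ for $i=0$ and vanish for $i>0$, since $N$ is $\fa$-torsion.
```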
Now, by using Theorem \ref{annh2}, we construct a counterexample to Lynch's conjecture which extends the examples given in \cite{b2017} and \cite{sw}.
\begin{ex}\label{exa} Let $S$ be a commutative Noetherian ring of finite dimension $d\geq 3$ such that there exists a subset $U:=\{u_1,\dots, u_d\}$ of $S $ with $\Ht SU=d$ and suppose that each ideal generated by a subset of $U$ is a prime ideal of $S$. (When $S$ is a local ring, it is easy to see that $S$ must be a regular ring with maximal ideal $(U)$ and that $U$ is a regular system of parameters for $S$.)
Let $X$, $Y$ and $Z$ be disjoint non-empty subsets of $U$ such that $|X|\leq|Y|\leq|Z|$ (here, for a set $A$, $|A|$ denotes the cardinal number of $A$) and let $X'$ and $Y'$ be non-empty subsets of $X$ and $Y$ respectively. Set $J:=(X)\cap (Y)\cap (Z)$, $R:=S/J$ and $I:=((X')+(Y')+J)/J$. Then
\begin{enumerate}[\rm(i)]
\item $\ass_R(R)=\{\fp_1:=(X)/J, \ \fp_2:=(Y)/J, \ \fp_3:=(Z)/J\}$.
\item $\grad{R} I{R/\fp_1}=\cdd R I{R/\fp_1}=|Y'|$, $\grad{R} I{R/\fp_2}=\cdd R I{R/\fp_2}=|X'|$ and $\grad{R} I{R/\fp_3}=\cdd R I{R/\fp_3}=|X'|+|Y'|$.
\item $\dim_R(R/\fp_1)=d-|X|$, $\dim_R(R/\fp_2)=d-|Y|$ and $\dim_R(R/\fp_3)=d-|Z|$.
\item $\gam IR=0$, $\dim_R(R)=\dim_R(R/\gam IR)=d-|X|$ and $c:=\cdd R IR=|X'|+|Y'|$.
\item Set $\fq:=(U-X'\cup Y')/J$; then $\fp_3\subseteq\fq$ and $\cdd R I{R/\fq}=\dim_R(R/\fq)=c$.
\item $\operatorname{Ann}_R (\h cIR)=(Z)/J$ and $\dim_R(R/\operatorname{Ann}_R (\h cIR))=d-|Z|$. In particular, $$\dim_R(R/\gam IR)-\dim_R(R/\operatorname{Ann}_R (\h cIR))=|Z|-|X|.$$
\end{enumerate}
\end{ex}
\begin{proof}
Since $\Ht SU=d$, Krull's Generalized Principal Ideal Theorem \cite[Theorem 13.5]{mat} implies that $u_i\notin (U\setminus\{u_i\})$ for all $1\leq i\leq d$. For each subset $V$ of $U$, since $(V\setminus\{u_i\})$ is a prime ideal of $S$, $\ass_S(S/(V\setminus\{u_i\}))=\{(V\setminus\{u_i\})\}$ and so $u_i$ is a non-zerodivisor on $S/(V\setminus\{u_i\})$ for all $1\leq i\leq d$. Hence every permutation of $u_1,\dots,u_d$ is an $S$-sequence.
Next, for each $1\leq i\leq d$, since $u_i\notin(u_1,\dots,u_{i-1})$ and $(u_1,\dots,u_{i-1})$ is a prime ideal of $S$, we have $\dim_S(S/(u_1,\dots,u_i))\leq \dim_S(S/(u_1,\dots,u_{i-1}))-1$. We can now repeat this argument to deduce that $\dim_S(S/(u_1,\dots,u_i))\leq \dim_S(S)-i=d-i$ for all $1\leq i\leq d$. On the other hand, the strict chain of prime ideals $(u_1,\dots,u_i)\subset\dots\subset(u_1,\dots,u_d)$ yields $\dim_S(S/(u_1,\dots,u_i))\geq d-i$. Therefore $\dim_S(S/(u_1,\dots,u_i))= d-i$ for all $1\leq i\leq d$. By renaming the elements of $U$, we deduce that $\dim_S(S/(V))=d-|V|$ for all subsets $V$ of $U$. These facts are used in the sequel.
(i) It is clear that $\fp_1\cap\fp_2\cap\fp_3=0$ is a minimal primary decomposition of $0$ in $R$ and $\ass_R(R)=\{\fp_1, \fp_2, \fp_3\}$.
(ii) It follows from the Independence Theorem (see Remark \ref{rem2}) that
$$\h iI{R/\fp_1}\cong\h iI{S/(X)}\cong \h i{(X', Y')}{S/(X)}\cong\h i{(Y')}{S/(X)}.$$
We also have $$|Y'|=\grad S{(Y')}{S/(X)}\leq \cdd S {(Y')}{S/(X)}\leq \operatorname{ara}(Y')\leq |Y'|.$$
Therefore $\h i{(Y')}{S/(X)}$ and consequently $\h iI{R/\fp_1}$ are non-zero only at $i=|Y'|$. Similarly, we have
$\h iI{R/\fp_2}\cong\h i{(X')}{S/(Y)}$ and $\h iI{R/\fp_3}\cong\h i{(X', Y')}{S/(Z)}.$
Thus $\h iI{R/\fp_2}$ is non-zero only at $i=|X'|$ and $\h iI{R/\fp_3}$ is non-zero only at $i=|X'|+|Y'|$.
(iii) As was mentioned at the beginning of the proof, we have $\dim_R(R/\fp_1)=\dim_S(S/(X))=d-|X|$, $\dim_R(R/\fp_2)=\dim_S(S/(Y))=d-|Y|$ and $\dim_R(R/\fp_3)=\dim_S(S/(Z))=d-|Z|$.
(iv) Since $\spec(R)=\supp_R(\bigoplus_{\fp\in\ass_R(R)}R/\fp)$, we obtain
$$\textstyle{c:=\cdd R IR=\cdd R I{\bigoplus_{1\leq i\leq 3}R/\fp_i}=\max_{1\leq i\leq 3}\cdd R I{R/\fp_i}=|X'|+|Y'|},$$
$$\textstyle{\dim_R(R)=\max_{1\leq i\leq 3}\dim_R({R/\fp_i})=d-|X|.}$$
Next, as $X'$ and $Y'$ are nonempty sets, there exist $x\in X'$ and $y\in Y'$. We have $x+y+J\in I\setminus \bigcup_{1\leq i\leq 3}\fp_i$ because $X, Y, Z$ are disjoint sets and $u_i\notin (U\setminus \{u_i\})$ for all $1\leq i\leq d$. Therefore $x+y+J\in I$ is a non-zerodivisor on $R$ and hence $\gam IR=0$. This completes the proof of (iv).
(v) It is clear that $Z\subseteq U-X\cup Y\subseteq U-X'\cup Y'$ and consequently $\fp_3\subseteq \fq$. Also, it follows from the Independence Theorem that (note that $J\subseteq (U-X'\cup Y')$)
$$\h iI{R/\fq}\cong\h iI{S/(U-X'\cup Y')}\cong\h i{(X', Y')}{S/(U-X'\cup Y')}.$$
Similar to (ii) we can deduce that $\h i{(X', Y')}{S/(U-X'\cup Y')}$ and consequently $\h iI{R/\fq}$ are non-zero only at $i=|X'|+|Y'|$. Therefore $\cdd RI{R/\fq}=c$. Finally, we have
$$\dim_R(R/\fq)=\dim_S(S/(U-X'\cup Y'))=d-|U-X'\cup Y'|=|X'|+|Y'|=c.$$
(vi) It follows from (v) and Theorem \ref{annh2} that
$\operatorname{Ann}_R (\h cIR)=\operatorname{Ann}_R (R/\fp_3)=\fp_3$
and hence $\dim_R(R/\operatorname{Ann}_R (\h cIR))=d-|Z|$. Therefore
$$\dim_R(R/\gam IR)-\dim_R(R/\operatorname{Ann}_R (\h cIR))=d-|X|-(d-|Z|)=|Z|-|X|.$$
\end{proof}
\begin{rem}
(i) (Bahmanpour's example \cite[Example 3.2]{b2017}). Let $S$ be a regular local ring of dimension $d\geq 7$ and $U:=\{u_1,\dots,u_d\}$ a system of parameters for $S$. Let $l$ be an integer with $7\leq l\leq d$. Set $X:=\{u_1, u_2\}$, $Y:=\{u_3, u_4\}$, $Z:=\{u_5,\dots, u_l\}$, $X':=\{u_1\}$ and $Y':=\{u_3\}$. Let
$J:=(X)\cap(Y)\cap(Z)$, $R:=S/J$ and $I:=((X')+(Y')+J)/J$. Then, by Example \ref{exa}, we have
\begin{align*}
& c:=\cdd R IR=|X'|+|Y'|=2,\\
& \dim_R(R/\gam IR)=\dim_R(R)=d-|X|=d-2,\\
&\operatorname{Ann}_R (\h cIR)=(Z)/J=(u_5,\dots, u_l)/J,\\
& \dim_R(R/ \operatorname{Ann}_R (\h cIR))=d-|Z|=d-l+4<d-2.
\end{align*}
In particular, Lynch's conjecture does not hold in this case. Note that Bahmanpour, in \cite[Example 3.2]{b2017}, obtained $\dim_R(R/ \operatorname{Ann}_R (\h cIR))$ without computing $\operatorname{Ann}_R (\h cIR)$.
(ii) (Singh--Walther's example \cite{sw}). Let $K$ be a field and $S:=K[x, y, z_1, z_2]$ (or $S:=K[[x, y, z_1, z_2]]$). If we set $U:=\{x, y, z_1, z_2\}$, $X=X':=\{x\}$, $Y=Y':=\{y\}$, $Z:=\{z_1, z_2\}$, $J:=(X)\cap(Y)\cap(Z)=(xyz_1, xyz_2)$, $R:=S/J$ and $I:=((X')+(Y')+J)/J=(x, y)/J$, then Example \ref{exa} implies that
\begin{align*}
& c:=\cdd R IR=|X'|+|Y'|=2,\\
& \dim_R(R/\gam IR)=\dim_R(R)=d-|X|=4-1=3,\\
&\operatorname{Ann}_R (\h cIR)=(Z)/J=(z_1, z_2)/J,\\
& \dim_R(R/ \operatorname{Ann}_R (\h cIR))=d-|Z|=4-2=2.
\end{align*}
It follows that Lynch's conjecture is false. This example also shows that \cite[Proposition 4.3 and Theorem 4.4]{l} are not true.
\end{rem}
We recall that an $R$-module $N$ is called {\it minimax} if there exists a finitely generated submodule $L$ of $N$ such that $N/L$ is Artinian. The class of minimax modules includes all Noetherian and all Artinian modules; see \cite{zo}. Also, for an ideal $\fa$ of $R$, Hartshorne \cite{h} defined an $R$-module $N$ to be {\it $\fa$-cofinite} if $\supp_R(N)\subseteq\operatorname{V}(\fa)$ and $\ext iR{R/\fa}N$ is finitely generated for all $i\in\N_0$. The following lemmas are needed to prove our next theorem.
\begin{lem}[{\cite[Theorem 1.6]{mel1999}}]\label{cofart} Let $(R, \fn)$ be a complete local ring, $\fa$ a proper ideal of $R$ and $N$ an Artinian $R$-module. Then the following conditions on $N$ are equivalent:
\begin{enumerate}[\rm(i)]
\item $N$ is $\fa$-cofinite.
\item $(0:_{N}\fa)$ is finitely generated.
\item For each attached prime ideal $\fp$ of $N$, $\surd(\fa+\fp)=\fn$.
\end{enumerate}
\end{lem}
\begin{lem}[{\cite[Corollary 4.4]{mel}}]\label{ser} Let $\fa$ be an ideal of $R$. The class of $\fa$-cofinite minimax $R$-modules is closed under taking submodules,
quotients and extensions, i.e., it is a Serre subcategory of the category of $R$-modules.
\end{lem}
Let $(R, \fn)$ be a complete local ring, $\fa$ an ideal of $R$, $N$ a finitely generated $R$-module with $N\neq \fa N$ and $c:=\cdd R{\fa}N$. In \cite[Theorem 2.4]{nr}, Rastgoo and Nazari proved that if $\h c{\fa}N$ is $\fa$-cofinite Artinian, then $\att_R(\h c{\fa}N)=\{\fp\in\mass_R(N): \dim_R(R/\fp)=c, \surd(\fa+\fp)=\fn\}$. The following lemma generalizes this theorem.
\begin{lem}\label{annh5} Let $(R, \fn)$ be a complete local ring, $\fa$ an ideal of $R$, $N$ a finitely generated $R$-module with $N\neq \fa N$ and $c:=\cdd R{\fa}N$. Suppose that $\h c{\fa}N$ is Artinian and $(0:_{\h c{\fa}N}\fa)$ is finitely generated. Set $\Delta:=\{\fp\in\ass_R(N): \cdd R{\fa}{R/ \mathfrak {p}}=c\}$ and
$\Sigma:=\{\fp\in\mass_R(N): \dim_R(R/\fp)=c, \ \surd(\fa+\fp)=\fn\}$. Then
$$\matt_R(\h c{\fa}N)=\att_R(\h c{\fa}N)=\Delta=\Sigma,$$
$$\textstyle{\T {\fa}N=\bigcap_{\fp\in\Sigma}C^N_\fp(0)},$$
$$ \textstyle{\operatorname{Ann}_R(\h c{\fa}N)=\operatorname{Ann}_R(N/\T {\fa}N)}.$$
\end{lem}
\begin{proof}
Note that, by Lemma \ref{cofart}, $\h c{\fa}N$ is $\fa$-cofinite and Artinian. We prove the claimed equalities in several steps.
1) $\att_R(\h c{\fa}N)\subseteq\Delta$. Assume that $\fp\in\att_R(\h c{\fa}N)$. Hence by Lemma \ref{lem3}, $\h c{\fa}{N/\fp N}\cong\h c{\fa}{N}/\fp\h c{\fa}N$.
Thus $\h c{\fa}{N/\fp N}\neq 0$ because $\fp\in\att_R(\h c{\fa }{N})$. Since $\att_R(\h c{\fa}N)\subseteq\operatorname{V}(\operatorname{Ann}_R(N))$, we have $\supp_R(N/\fp N)=\supp_R(R/\fp)$ and so $\cdd R{\fa}{R/\fp}=\cdd R{\fa}{N/\fp N}=c$. Therefore $\fp\in\Delta$.
2) $\Sigma\subseteq\Delta$. If $\fp\in\Sigma$, then $\cdd R{\fa}{R/\fp}=\cdd R{\fa+\fp}{R/\fp}=\cdd R{\fn}{R/\fp}=\dim_R(R/\fp)=c$ and so $\fp\in\Delta$.
3) $\Delta\subseteq \Sigma$ and $\Delta\subseteq \matt_R(\h c{\fa}N)$. Assume that $\fp\in\Delta$
and $L$ is an arbitrary $\fp$-primary submodule of $N$. Since $\ass_R(N/L)=\{\fp\}$, we obtain $\cdd R {\fa}{N/L}=\cdd R {\fa}{R/\fp}=c$. Also, it follows from the exact sequence $\h c{\fa}N\rightarrow\h c{\fa}{N/L}\rightarrow 0$ in view of Lemma \ref{ser} that $\h c{\fa}{N/L}$ is $\fa$-cofinite Artinian. Now assume that $\fq\in\att_R(\h c{\fa}{N/L})$. Since
$\att_R(\h c{\fa}{N/L})\subseteq\operatorname{V}({\operatorname{Ann}_R (\h c{\fa}{N/L})})\subseteq\operatorname{V}(\operatorname{Ann}_R (N/L)),$ we have $\fp\subseteq\fq$. Moreover, since $\h c{\fa}{N/L}$ is a homomorphic image of $\h c{\fa}N$, every attached prime of $\h c{\fa}{N/L}$ is an attached prime of $\h c{\fa}N$; in particular, $\fq\in\att_R(\h c{\fa}N)$. By Lemma \ref{lem3}, $\h c{\fa}{N/\fq N}\cong\h c{\fa}{N}/\fq \h c{\fa}N$.
Thus $\h c{\fa}{N/\fq N}\neq 0$ because $\fq\in\att_R(\h c{\fa }{N})$. Since $\supp_R(N/\fq N)=\supp_R(R/\fq)$, $\cdd R {\fa}{R/\fq}=\cdd R {\fa}{N/\fq N}=c$. Also, Lemma \ref{cofart} implies that $\surd(\fa+\fq)=\fn$ and hence, by Remark \ref{rem2}, $c=\cdd R {\fa}{R/\fq}=\cdd R {\fn}{R/\fq}=\dim_R(R/\fq)$. Therefore $\operatorname{Ann}_R (\h c{\fa}N)\subseteq C_\fq(\operatorname{Ann}_R (N))$ by Theorem \ref{annh2}. We show that $\fq$ is minimal in $\supp_R(N)$. Assume that $\fq'\in{\rm Min}\supp_R(N)$ is such that $\fq'\subseteq\fq$. Then, by Lemma \ref{cont}(iii),
$\operatorname{Ann}_R (\h c{\fa}N)\subseteq\fq'$. Thus $\fq'\in{\rm Min}\operatorname{V}(\operatorname{Ann}_R (\h c{\fa}N))={\rm Min}\att_R(\h c{\fa}N)$ and so $\surd(\fa+\fq')=\fn$ by Lemma \ref{cofart}. Hence $\cdd R {\fa}{R/\fq'}=\cdd R {\fn}{R/\fq'}=\dim_R(R/\fq')$. Also, we have $c=\cdd R {\fa}{R/\fq}\leq \cdd R {\fa}{R/\fq'}\leq \cdd R {\fa}{N}=c$. It follows that $\cdd R {\fa}{R/\fq'}=\dim_R(R/\fq')=c$. Therefore $\fq=\fq'$ and consequently $\fq$ is a minimal element of $\supp_R(N)$. Since $\fp\subseteq\fq$, we have $\fp=\fq$. The equalities $\fp=\fq=\fq'$ show that $\fp\in\matt_R(\h c{\fa}N)$ and $\fp\in\Sigma$. Therefore $\Delta\subseteq \Sigma$ and $\Delta\subseteq \matt_R(\h c{\fa}N)$.
Steps (1)--(3) prove that $\matt_R(\h c{\fa}N)=\att_R(\h c{\fa}N)=\Delta=\Sigma$. Finally, since $\Delta=\Sigma$, Theorem \ref{annh2} implies that $\operatorname{Ann}_R(\h c{\fa}N)=\operatorname{Ann}_R(N/\T {\fa}N)$. Also, it is clear that $\T {\fa}N=\bigcap_{\fp\in\Sigma}C^N_\fp(0)$ (see Definition \ref{def1}). This completes the proof.
\end{proof}
\begin{thm}\label{annh4}
Let $N$ be a finitely generated $R$-module, $\fa$ an ideal of $R$ such that $N\neq\fa N$ and $c:=\cdd R {\fa}N$. Let $0=N_1\cap\hdots\cap N_n$ be a minimal primary decomposition of the zero submodule of $N$ with $\ass_R(N/N_i):=\{\fp_i\}$ for all $1\leq i\leq n$. Set $\Delta:=\{\fp\in\ass_R(N): \cdd R {\fa}{R/\fp}=c\}$. If $(0:_{\h c{\fa}{N/N_i}}\fa)$ is finitely generated for all $\fp_i\in\Delta$ (or $(0:_{\h c{\fa}{N}/M}\fa)$ is finitely generated for all submodules $M$ of $\h c{\fa}N$), then
$$\operatorname{Ann}_R (\h c{\fa}N)=\operatorname{Ann}_R (N/\T {\fa}N).$$
In particular, if $N$ is coprimary and $(0:_{\h c{\fa}N}\fa)$ is finitely generated, then $$\operatorname{Ann}_R (\h c{\fa}N)=\operatorname{Ann}_R (N).$$
\end{thm}
\begin{proof}
Set $T:=\T {\fa}N=\bigcap_{\fp_i\in\Delta}N_i$. Since $\cdd R {\fa}T<c$,
$\h c{\fa}N\cong\h c{\fa}{N/T}$ and hence $\operatorname{Ann}_R (N/T)\subseteq\operatorname{Ann}_R (\h c{\fa}{N/T})=\operatorname{Ann}_R (\h c{\fa}N)$. Now we show that $$\operatorname{Ann}_R (\h c{\fa}N)\subseteq\operatorname{Ann}_R (N/T).$$
To prove this inclusion, let $r\in R$ with $r\notin \operatorname{Ann}_R (N/T)$; it suffices to show that $r\notin\operatorname{Ann}_R(\h c{\fa}N)$. Since $rN\nsubseteq T$, we have $rN\nsubseteq N_i$ for some $N_i$ with $\fp_i\in\Delta$. As $\ass_R (r(N/N_i))=\ass_R(N/N_i)=\{\fp_i\}$, we see that $\supp_R (r(N/N_i))=\supp_R(N/N_i)=\supp_R(R/\fp_i)$ and so
$\cdd R {\fa}{r(N/N_i)}=\cdd R {\fa}{N/N_i}=c.$
Now suppose that $\fn\in{\rm Min}\supp_R(\h c{\fa}{N/N_i})$. It follows from $\supp_{R_\fn} ((r(N/N_i))_\fn)=\supp_{R_\fn}((N/N_i)_\fn)=\operatorname{V}(\fp_iR_\fn)$ that
$\cdd {R_\fn} {\fa R_\fn}{(r/1)(N/N_i)_\fn}=\cdd {R_\fn} {\fa R_\fn}{(N/N_i)_\fn}=c$. Therefore $\h c{\fa R_\fn}{(r/1)(N/N_i)_\fn}\neq 0$ and hence
$$\h c{\fa \widehat{R_\fn}}{(r/1) \hat\,((N/N_i)_\fn)\, \hat{}\, }\neq 0,$$ where ``\,$\hat{}$\,'' denotes the $\fn R_\fn$-adic completion.
As $(0:_{\h c{\fa}{N/N_i}}\fa)$ is a finitely generated $R$-module, in view of \cite[Theorem 7.11]{mat}, $(0:_{\h c{\fa \widehat{R_\fn}}{((N/N_i)_\fn)\, \hat{}\,}}\fa\widehat{R_\fn})$ is a finitely generated $\widehat{R_\fn}$-module. Also, by \cite[Theorem 4.3.2]{bs}, \cite[Theorem 23.2(ii)]{mat} and the facts that $\ass_{R_\fn}(\h c{\fa R_\fn}{(N/N_i)_\fn})=\{\fn R_\fn\}$ and that $\widehat{R_\fn}$ is a local ring with maximal ideal $\fn\widehat{R_\fn}$, we have
\begin{align*}
&\ass_{\widehat{R_\fn}}(0:_{\h c{\fa \widehat{R_\fn}}{((N/N_i)_\fn)\, \hat{}\,}}\fa\widehat{R_\fn})=\operatorname{V}(\fa\widehat{R_\fn})\cap\ass_{\widehat{R_\fn}}({\h c{\fa \widehat{R_\fn}}{((N/N_i)_\fn)\, \hat{}\,}})\\
&= \ass_{\widehat{R_\fn}}({\h c{\fa \widehat{R_\fn}}{((N/N_i)_\fn)\, \hat{}\,}})
=\ass_{\widehat{R_\fn}}({\h c{\fa {R_\fn}}{{(N/N_i)_\fn}}}\otimes_{R_\fn}\widehat{R_\fn})\\
&=\textstyle{\bigcup_{\fp R_\fn\in\ass_{R_\fn}(\h c{\fa R_\fn}{(N/N_i)_\fn})}\ass_{\widehat{R_\fn}}(\widehat{R_\fn}/\fp \widehat{R_\fn})}
=\ass_{\widehat{R_\fn}}(\widehat{R_\fn}/\fn \widehat{R_\fn})=\{\fn \widehat{R_\fn}\}.
\end{align*}
It follows that $(0:_{\h c{\fa \widehat{R_\fn}}{((N/N_i)_\fn)\, \hat{}\,}}\fa\widehat{R_\fn})$ has finite length and so ${\h c{\fa \widehat{R_\fn}}{((N/N_i)_\fn)\, \hat{}\,}}$ is an Artinian $\widehat{R_\fn}$-module by Melkersson's Theorem (see \cite[Theorem 7.1.2]{bs}).
Now, $\h c{\fa \widehat{R_\fn}}{(r/1) \hat\,((N/N_i)_\fn)\, \hat{}\, }\neq 0$ yields $(r/1) \hat\,((N/N_i)_\fn)\, \hat{}\, \nsubseteq \T{\fa \widehat{R_\fn}}{((N/N_i)_\fn)\, \hat{} \,}$ because $\T{\fa \widehat{R_\fn}}{((N/N_i)_\fn)\, \hat{} \,}$ is the largest $\widehat{R_\fn}$-submodule $S$ of $((N/N_i)_\fn)\, \hat{}$
such that $\h c{\fa \widehat{R_\fn}}{S}=0$.
By Lemma \ref{annh5}, $$\operatorname{Ann}_{\widehat {R_\fn}}(\h c{\fa \widehat{R_\fn}}{((N/N_i)_\fn)\, \hat{}\, })=\operatorname{Ann}_{\widehat {R_\fn}}(((N/N_i)_\fn)\, \hat{}/\T{\fa \widehat{R_\fn}}{((N/N_i)_\fn)\, \hat{} \,}).$$
Thus $(r/1) \hat\,\h c{\fa \widehat{R_\fn}}{((N/N_i)_\fn)\, \hat{}\, }\neq 0$ and so $r\h c{\fa}{N/N_i}\neq 0$. Next, it follows from the exact sequence $\h c{\fa}N\rightarrow \h c {\fa}{N/N_i}\rightarrow 0$ that $r\h c{\fa}N\neq 0$, as required. This proves the first claimed equality. Note that if $(0:_{\h c{\fa}{N}/M}\fa)$ is finitely generated for all submodules $M$ of $\h c{\fa}N$, then the $R$-module $(0:_{\h c{\fa}{N/N_i}}\fa)$ is finitely generated for all $1\leq i\leq n$ because $\h c{\fa}{N/N_i}$ is a homomorphic image of $\h c{\fa}N$. Finally, assume that $N$ is coprimary with $\ass_R(N)=\{\fp\}$. Then $0$ is a $\fp$-primary submodule of $N$ and so $\T {\fa}N=0$. Therefore, by the first part of the theorem, we have $\operatorname{Ann}_R(\h c{\fa}N)=\operatorname{Ann}_R(N)$.
\end{proof}
\begin{cor}\label{cor2} Let $N$ be a non-zero finitely generated $R$-module, $\fa$ an ideal of $R$ with $N\neq\fa N$ and $c:=\cdd R{\fa}N$. Suppose that one of the following conditions holds:
\begin{enumerate}[\rm(i)]
\item $c\leq 1$;
\item $\dim_R(N/\fa N)\leq 1$;
\item $\dim_R(N)\leq 2$;
\item $\h {c}{\fa}N$ is $\fa$-cofinite minimax.
\end{enumerate}
Then
$$\operatorname{Ann}_R (\h {c}{\fa}N)=\operatorname{Ann}_R (N/\T {\fa}N),$$
$$\Ht N{\operatorname{Ann}_R(\h {c}{\fa}N)}=0,$$
and
\begin{align*}&\dim_R(R/\operatorname{Ann}_R (\h {c}{\fa}N))\\
&=\max\{\dim_R(R/\fp): {\fp\in\mass_R(N),}\ {\cdd R {\fa}{R/\fp}=c}\}.
\end{align*}
\end{cor}
\begin{proof}
Set $\Delta:=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}=c\}$. We have $\operatorname{Ann}_R(N/\T{\fa}N)=\operatorname{Ann}_R(N/\bigcap_{\fp\in\Delta}C^N_{\fp}(0))=\bigcap_{\fp\in\Delta}C_\fp(\operatorname{Ann}_R(N))$. Since $\Delta\neq\emptyset$, it follows from Lemma \ref{cont}(iii) that $$\Ht N{\operatorname{Ann}_R(N/\T{\fa}N)}=0.$$ Also,
by Proposition \ref{prop5}(i), we have
$$\dim_R(R/\operatorname{Ann}_R(N/\T {\fa}N))=\max_{\fp\in\ass_R(N/\T {\fa}N)}\dim_R(R/\fp)=\max_{\fp\in\Delta}\ \dim_R(R/\fp).$$
Now if $\fp\in\Delta$ and $\fq\in\mass_R(N)$ is such that $\fq\subseteq\fp$, then $c=\cdd R{\fa}{R/\fp}\leq\cdd R{\fa}{R/\fq}\leq c$ and so $\fq\in\Delta$. Therefore
$$\dim_R(R/\operatorname{Ann}_R(N/\T {\fa}N))=\max_{\fp\in\mass_R(N), \ \cdd R{\fa}{R/\fp}=c}\dim_R(R/\fp).$$
Thus, to complete the proof, it remains to show that the first claimed equality $\operatorname{Ann}_R(\h c{\fa}N)=\operatorname{Ann}_R(N/\T {\fa}N)$ holds in each of the given cases.
Suppose that $0=N_1\cap\hdots\cap N_n$ is a minimal primary decomposition of the zero submodule of $N$ with $\ass_R(N/N_j):=\{\fp_j\}$ for all $1\leq j\leq n$. Assume that $\fp_i$ is an arbitrary element of $\Delta$. We claim that in all the given cases either $\h c{\fa}{N/N_i}$ is $\fa$-cofinite or $(0:_{\h c{\fa}{N/N_i}}\fa)$ is finitely generated and hence we can deduce from Theorem \ref{annh4} that the equality $\operatorname{Ann}_R (\h c{\fa}N)=\operatorname{Ann}_R (N/\T {\fa}N)$ holds.
(i) If $\cdd R {\fa}N\leq 1$, then $\cdd R {\fa}{N/N_i}\leq 1$ and so, by \cite[Corollary 3.14]{mel}, $\h c{\fa}{N/N_i}$ is $\fa$-cofinite.
(ii) If $\dim_R(N/\fa N)\leq 1$, then $\dim_R((N/N_i)/\fa(N/N_i))\leq 1$ and so $\h c{\fa}{N/N_i}$ is $\fa$-cofinite by \cite[Corollary 2.7]{bn} (see also \cite[Theorem 1]{dm} and \cite[Theorem 1.1]{y} for the local case).
(iii) If $c=\dim_R(N/N_i)$, then $\h c{\fa}{N/N_i}$ is $\fa$-cofinite by \cite[Proposition 5.1]{mel} (see also \cite[Theorem 3]{dm} for the local case). If $c<\dim_R(N/N_i)$, then the result follows from (i).
(iv) It follows from the exact sequence $\h c{\fa}N\rightarrow \h c{\fa}{N/N_i}\rightarrow 0$ that $\h c{\fa}{N/N_i}$ is also $\fa$-cofinite minimax because the category of $\fa$-cofinite minimax modules is a Serre subcategory of the category of $R$-modules; see Lemma \ref{ser}.
\end{proof}
Bahmanpour in \cite[Theorem 2.9]{b2015}, as one of his main results, computed the annihilator of $\h c{\fa}N$ when $c:=\cdd R{\fa}N=1$. The corollary below shows that our result in Corollary \ref{cor2}(i) coincides with that of Bahmanpour and gives an affirmative answer to Lynch's conjecture. We first record an auxiliary lemma.
\begin{lem}\label{prop2} Let $N$ be a non-zero finitely generated coprimary $R$-module, say $\ass_R(N)=\{\fp\}$, and let $\fa$ be an ideal of $R$.
Then $\cdd R{\fa}N=0$ if and only if $\fa\subseteq\fp$.
\end{lem}
\begin{proof}
If $\fa\subseteq \fp$, then, since $\surd(\operatorname{Ann}_R(N))=\fp$, we obtain $\fa^nN=0$ for some $n\in\N$ and so $\gam {\fa}N=N\neq 0$. It follows that $\h i{\fa}N=0$ for all $i>0$ and consequently $\cdd R{\fa}N=0$. Conversely, assume that $\cdd R{\fa}N=0$. Hence $\gam {\fa}N\neq 0$. Thus $\ass_R(\gam {\fa}N)=\operatorname{V} {(\fa)}\cap\ass_R(N)\neq \emptyset$ and so $\fa\subseteq\fp$.
\end{proof}
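As a simple illustration of Lemma \ref{prop2} (a routine verification, added here for the reader's convenience): let $K$ be a field, $R:=K[[x, y]]$ and $N:=R/Rx$, so that $N$ is coprimary with $\ass_R(N)=\{\fp\}$ for $\fp:=Rx$. If $\fa:=Rx$, then $\fa\subseteq\fp$ and indeed $\gam {\fa}N=N$, so $\cdd R{\fa}N=0$. If instead $\fa:=Ry$, then $\fa\nsubseteq\fp$ and, identifying $N$ with $K[[y]]$, we have $\h 1{\fa}N\cong K[[y]]_y/K[[y]]\neq 0$, so $\cdd R{\fa}N=1$.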
\begin{cor} Let $\fa$ be an ideal of $R$ and $N$ a finitely generated $R$-module such that $\cdd R{\fa}N=1$. Then $\operatorname{Ann}_R(\h 1{\fa}N)=\operatorname{Ann}_R(N/\gam {\fb}N),$ where $\fb:=\bigcap_{\fp\in\Delta}\fp$ and $\Delta:=\{\fp\in\ass_R(N): \fa+\fp=R \textrm{ or } \fa\subseteq\fp\}$.
In particular, if for each $a\in\fa$, $1-a$ is a non-zerodivisor on $N$, then $\operatorname{Ann}_R(\h 1{\fa}N)=\operatorname{Ann}_R(N/\gam {\fa}N)$.
\end{cor}
\begin{proof}
By Corollary \ref{cor2}(i), $\operatorname{Ann}_R(\h 1{\fa}N)=\operatorname{Ann}_R(N/\T{\fa}N)$. Also, by Definition \ref{def1}, $\T{\fa}N=\gam {\fb'}N$, where $\fb'=\bigcap_{\fp\in\Delta'}\fp$ and $\Delta'=\{\fp\in\ass_R(N): \cdd R{\fa}{R/\fp}<1\}$.
Now assume that $\fp\in\ass_R(N)$. Then $\cdd R{\fa}{R/\fp}=-\infty$ if and only if $\fa(R/\fp)=R/\fp$ or equivalently $\fa+\fp=R$. Also, by Lemma \ref{prop2}, $\cdd R{\fa}{R/\fp}=0$ if and only if $\fa\subseteq\fp$. Therefore $\Delta'=\Delta$ and so $\fb=\fb'$. This proves the first part of the assertion.
Now assume that for each $a\in\fa$, $1-a$ is a non-zerodivisor on $N$. Thus $\Delta=\{\fp\in\ass_R(N): \fa\subseteq\fp\}=\ass_R(\gam {\fa}N)$ and so
$\fb=\surd(\operatorname{Ann}_R(\gam {\fa}N))$. It follows that $\gam {\fb}N=\gam {\operatorname{Ann}_R(\gam {\fa}N)}N=\gam {\fa}N$ because $\fa^t\gam {\fa}N=0$ for some $t\in\N_0$.
\end{proof}
Let $\fa$ be a proper ideal of $R$. Then, by \cite[Corollary 3.3.3]{bs}, $\operatorname{ara} (\fa)$ is an upper bound for the invariant $\cdd R{\fa}R$.
Hochster and Jeffries \cite[Theorem 2.6]{hj} proved that if $R$ is a domain of prime characteristic and $\cdd R{\fa}R=\operatorname{ara} (\fa)$, then $\h {\cdd R{\fa}R}{\fa}R$ is faithful. On the other hand, if $R$ is local, then $\dim_R(R)-\dim_R(R/\fa)$ is a lower bound for the invariant $\cdd R{\fa}R$; see the following lemma. If $R$ is a local domain and $\cdd R{\fa}R=\dim_R(R)-\dim_R(R/\fa)$, then $\h {\cdd R{\fa}R}{\fa}R$ is faithful; see \cite[Theorem 2.7]{b2015}. In Theorem \ref{cd2} below, we generalize this result.
We note that Theorem \ref{cd2} and Corollary \ref{sys} give affirmative answers to Lynch's conjecture.
\begin{lem}[{\cite[Corollary 2.3]{ey}}]\label{erd} Let $(R, \fn)$ be a local ring, $\fa$ an ideal of $R$ and $N$ a finitely generated $R$-module with $N\neq \fa N$. Then
$$\dim_R(N)-\dim_R(N/\fa N)\leq \cdd R{\fa}N.$$
Moreover, if $\cdd R{\fa}N=\dim_R(N)-\dim_R(N/\fa N)$, then $$\h {\dim_R(N/\fa N)}{\fn}{\h {\cdd R{\fa}N}{\fa}N}\cong \h {\dim_R(N)}{\fn}N$$
and $\dim\supp_R({\h {\cdd R{\fa}N}{\fa}N})=\dim_R(N/\fa N)$.
\end{lem}
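To illustrate Lemma \ref{erd} in a simple case (the following computations are standard and included only for illustration), let $K$ be a field, $(R, \fn):=(K[[x, y]], (x, y))$, $N:=R$ and $\fa:=Rx$. Since $\fa$ is generated by one element and $\h 1{\fa}R\cong R_x/R\neq 0$, we have $\cdd R{\fa}R=1=\dim_R(R)-\dim_R(R/\fa)$. The lemma then gives $\h 1{\fn}{\h 1{\fa}R}\cong \h 2{\fn}R$ and $\dim\supp_R(\h 1{\fa}R)=\dim_R(R/\fa)=1$.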
\begin{thm}\label{cd2} Let $(R, \fn)$ be a local ring, $\fa$ an ideal of $R$, $N$ a finitely generated $R$-module with $N\neq \fa N$ and $c:=\cdd R{\fa}N=\dim_R(N)-\dim_R(N/\fa N)$. Then \begin{enumerate}[\rm(i)]
\item $\operatorname{Ann}_R(\h c{\fa}N)\subseteq\operatorname{Ann}_R(N/\bigcap_{\fp\in\assh_R(N)}C^N_\fp(0)).$
\item If $N$ is unmixed (that is, $\dim_R(N)=\dim_R(R/\fp)$ for all $\fp\in\ass_R(N)$), then $\operatorname{Ann}_R(\h {c}{\fa}N)=\operatorname{Ann}_R(N)$.
\item $\Ht N{\operatorname{Ann}_R(\h c{\fa}N)}=0$.
\item $\dim_R(R/\operatorname{Ann}_R(\h c{\fa}N))=\dim_R(N)$. Moreover, if $c>0$, then $\dim_R(N)=\dim_R(N/\gam {\fa}N)=\dim_R(R/\operatorname{Ann}_R(\h c{\fa}N))$.
\end{enumerate}
\end{thm}
\begin{proof}
(i) By Lemma \ref{erd}, we have $\h {\dim_R(N/\fa N )}{\fn}{\h c{\fa}N}\cong\h {\dim_R(N)}{\fn}N$. Thus
$$\operatorname{Ann}_R(\h c{\fa}N)\subseteq\operatorname{Ann}_R(\h {\dim_R(N)}{\fn}N).$$
Now (i) follows from Theorem \ref{annh2} (set $\fa=\fn$ in Theorem \ref{annh2}).
(ii) Assume that $N$ is unmixed. Then we have $\assh_R(N)=\ass_R(N)$ and hence $\bigcap_{\fp\in\assh_R(N)}C^N_\fp(0)=\bigcap_{\fp\in\ass_R(N)}C^N_\fp(0)=0$. Therefore, by (i), $\operatorname{Ann}_R(\h c{\fa}N)\subseteq\operatorname{Ann}_R(N)$. The reverse inclusion is clear and so the claimed equality holds.
(iii) It is an immediate consequence of (i).
(iv) Let $\fp\in\assh_R(N)$. By (i), $\operatorname{Ann}_R(N)\subseteq\operatorname{Ann}_R(\h c{\fa}N)\subseteq C_\fp(\operatorname{Ann}_R(N))\subseteq \fp$ and so
$$\dim_R(R/\fp)\leq\dim_R(R/\operatorname{Ann}_R(\h c{\fa}N))\leq\dim_R(R/\operatorname{Ann}_R(N)).$$
Therefore $\dim_R(R/\operatorname{Ann}_R(\h c{\fa}N))=\dim_R(N)$. Now assume that $c>0$. Hence
$$\operatorname{Ann}_R(N)\subseteq\operatorname{Ann}_R(N/\gam {\fa}N)\subseteq\operatorname{Ann}_R(\h c{\fa}{N/\gam {\fa}N})=\operatorname{Ann}_R(\h c{\fa}N)\subseteq\fp$$
and so the claimed equalities hold.
\end{proof}
The following corollary generalizes two main theorems of \cite{b2015} (see \cite[Theorems 2.2 and 2.3]{b2015}).
\begin{cor}\label{sys} Let $(R, \fn)$ be a local ring and $N$ a non-zero finitely generated $R$-module. Let $0\leq t\leq \dim_R(N)$ and $x_1,\dots,x_t\in\fn$ be a part of a system of parameters for $N$. Then the following statements hold.
\begin{enumerate}[\rm(i)]
\item $\cdd R{(x_1,\dots,x_t)}N=t$.
\item $\operatorname{Ann}_R(\h t{(x_1,\dots,x_t)}N)\subseteq\operatorname{Ann}_R(N/\bigcap_{\fp\in\assh_R(N)}C_\fp^N(0))$. In particular, if $N$ is unmixed, then $\operatorname{Ann}_R(\h t{(x_1,\dots,x_t)}N)=\operatorname{Ann}_R(N)$.
\item $\Ht N{\operatorname{Ann}_R(\h t{(x_1,\dots,x_t)}N)}=0$.
\item $\dim_R(R/\operatorname{Ann}_R(\h t{(x_1,\dots,x_t)}N))=\dim_R(N)$ and if $t>0$, then $$\dim_R(R/\operatorname{Ann}_R(\h t{(x_1,\dots,x_t)}N))=\dim_R(N/\gam {(x_1,\dots,x_t)}N)=\dim_R(N).$$
\item $\dim\supp_R(\h t{(x_1,\dots,x_t)}N)=\dim_R(N)-t$.
\end{enumerate}
\end{cor}
\begin{proof}
We have $\dim_R(N/(x_1,\dots, x_t)N)=\dim_R(N)-t$ because $x_1,\dots, x_t$ is a part of a system of parameters of $N$. Hence, by Lemma \ref{erd} and \cite[Corollary 3.3.3]{bs}, we obtain
\begin{align*}
&t=\dim_R(N)-\dim_R(N/(x_1,\dots, x_t)N)\\
&\leq\cdd R{(x_1,\dots,x_t)}N\leq\operatorname{ara}{(x_1,\dots,x_t)}\leq t.
\end{align*}
It follows that $$\cdd R{(x_1,\dots,x_t)}N=\dim_R(N)-\dim_R(N/(x_1,\dots, x_t)N)=t.$$
Therefore (i) holds, and (ii)--(iv) follow from Theorem \ref{cd2}. Finally, (v) holds because, by Lemma \ref{erd}, $$\dim\supp_R(\h t{(x_1,\dots,x_t)}N)=\dim_R(N/(x_1,\dots,x_t)N)=\dim_R(N)-t.$$
\end{proof}
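As a concrete instance of Corollary \ref{sys} (a standard example, added for illustration): let $K$ be a field, $(R, \fn):=(K[[x, y, z]], (x, y, z))$ and $N:=R$, and take the part of a system of parameters $x_1:=x$, $x_2:=y$, so that $t=2$. Then (i) gives $\cdd R{(x, y)}R=2$; since $R$ is a domain, it is unmixed and (ii) yields $\operatorname{Ann}_R(\h 2{(x, y)}R)=0$; finally, (iv) and (v) give $\dim_R(R/\operatorname{Ann}_R(\h 2{(x, y)}R))=3$ and $\dim\supp_R(\h 2{(x, y)}R)=3-2=1$.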
\section{\bf Annihilator of the first non-zero local cohomology module}
Let $\fa$ be an ideal of $R$ and $N$ a finitely generated $R$-module with $N\neq\fa N$. Our purpose in this section is to provide a sharp upper bound for the annihilator of
$\h {\grad{R} {\fa}N}{\fa}N$, see Theorem \ref{annh7} and Corollary \ref{cor3}. Also, we consider and compute $\dim_R(R/\operatorname{Ann}_R(\h {\grad{R}{\fa}N}{\fa}N))$ in certain cases. Before that, we need some lemmas.
\begin{lem}\label{loc} Let $(R, \fn)$ be a local ring and $N$ a non-zero finitely generated $R$-module. For each $t\in\N_0$, set $\Delta(t):=\{\fp\in\ass_R(N): \dim_R(R/\fp)\geq t\}$,
$\Sigma(t):=\{\fp\in\mass_R(N): \dim_R(R/\fp)=t\}$ and $\Sigma'(t):=\{\fp\in\ass_R(N)\setminus\mass_R(N): \dim_R(R/\fp)=t\}$. Then we have the following bounds for the annihilator of $\h t{\fn}N$:
\begin{align*}
&\textstyle{\operatorname{Ann}_R(N/\bigcap_{\fp\in\Delta(t)}C_\fp^N(0))\subseteq\operatorname{Ann}_R(\h t{\fn}N)}\\
&\textstyle{\subseteq\operatorname{Ann}_R(N/\bigcap_{\fp\in\Sigma(t)}C_\fp^N(0))\cap(\bigcap_{\fp\in\Sigma'(t)}\fp).}
\end{align*}
\end{lem}
\begin{proof}
We set $S(t):=R\setminus\bigcup_{\fp\in\Delta(t)}\fp$ and $T(t):=R\setminus\bigcup_{\fp\in\Sigma(t)}\fp$. By \cite[Theorem 3.2]{f}, we have
$$\operatorname{Ann}_R(N/C_{S(t)}^N(0))\subseteq\operatorname{Ann}_R(\h t{\fn}N)\subseteq\operatorname{Ann}_R(N/C_{T(t)}^N(0)).$$
Also, if $\fp\in\Sigma'(t)$, then \cite[Corollary 4.9]{s} implies that $\fp\in\att_R(\h t{\fn}N)$ and so $\operatorname{Ann}_R(\h t{\fn}N)\subseteq\fp$. This completes the proof.
\end{proof}
Note that if $N$ is a non-zero finitely generated $R$-module and $\fp\in\supp_R(N)$, then we have $\operatorname{Ann}_R(N/C^N_\fp(0))=C_\fp(\operatorname{Ann}_R(N))\subseteq\fp$. Example \ref{exa2} shows that, to improve the upper bound for the annihilator of the local cohomology module $\h t{\fn}N$ in the above lemma, we cannot replace $\mass_R(N)$ by $\ass_R(N)$ in the index set $\Sigma(t)$.
\begin{lem}\label{annh6}
Let $N$ be a finitely generated $R$-module and $\fa$ an ideal of $R$ with $N\neq\fa N$. Let $\fp\in\ass_R(N)$ with $\fa+\fp\neq R$ and $n:=\Ht {R/\fp}{(\fa+\fp)/\fp}$. Then
$$\operatorname{Ann}_R(\h n{\fa+\fp}N)\subseteq\fp.$$ If, in addition, $\fp\in\mass_R(N)$, then
$$\operatorname{Ann}_R(\h n{\fa+\fp}N)\subseteq\operatorname{Ann}_R(N/C^N_\fp(0)).$$
In particular, if $N$ is coprimary, then $\operatorname{Ann}_R(\h {\Ht N{\fa}}{\fa}N)=\operatorname{Ann}_R(N).$
\end{lem}
\begin{proof}
Assume that $\fq$ is a prime ideal of $R$ containing $\fa+\fp$ such that $\Ht{R/\fp}{\fq/\fp}=\Ht {R/\fp}{(\fa+\fp)/\fp}$. Then
$$\dim_{R_\fq}(R_\fq/\fp R_\fq)=\Ht {R_\fq/\fp R_\fq}{\fq R_\fq/\fp R_\fq}=\Ht {R/\fp}{\fq/\fp}=n.$$
Also we have
$$(\h n{\fa+\fp}N)_\fq\cong \h n{(\fa+\fp)R_\fq}{N_\fq}\cong\h n{\fq R_\fq}{N_\fq}.$$
Since $\fp R_\fq\in\ass_{R_\fq}(N_\fq)$ and $\dim_{R_\fq}(R_\fq/\fp R_\fq)=n$, we have $\fp R_\fq\in\att_{R_\fq}(\h n{\fq R_\fq}{N_\fq})$ by \cite[Corollary 4.9]{s} and so
$\operatorname{Ann}_{R_\fq}(\h n{\fq R_\fq}{N_\fq})\subseteq\fp R_\fq$. To prove $\operatorname{Ann}_R(\h n{\fa+\fp}N)\subseteq\fp$, assume for the sake of contradiction that
$x\in\operatorname{Ann}_R(\h n{\fa+\fp}N)$ and $x\notin\fp$. Since $\fp$ is prime, $x/1\notin\fp R_\fq$ and so $(x\h n{\fa+\fp}N)_\fq\cong x/1\h n{\fq R_\fq}{N_\fq}\neq 0$, a contradiction. This proves the first claimed inclusion.
Next, assume in addition that $\fp$ is a minimal element of $\ass_R(N)$. Thus $C:=C^N_\fp(0)$ is a $\fp$-primary submodule of $N$ (in fact, $C^N_\fp(0)$ is the unique $\fp$-primary component of every minimal primary decomposition of the zero submodule of $N$).
Therefore $\ass_R(N/C)=\{\fp\}$.
Since $\fp R_{\fq}$ is a minimal element of $\ass_{R_\fq}(N_\fq)$ with $\dim_{R_\fq}({R_\fq}/{\fp R_\fq})=n$, it follows from Lemma \ref{loc} that
$$\operatorname{Ann}_{R_\fq}((\h n{\fa+\fp}N)_\fq)=\operatorname{Ann}_{R_\fq}(\h n{\fq R_\fq}{N_\fq})\subseteq \operatorname{Ann}_{R_\fq}(N_\fq/C^{N_\fq}_{\fp R_\fq}(0)).$$
Now suppose that $x\in R$ is such that $x N\nsubseteq C$. We have $\emptyset\neq\ass_R(x(N/C))\subseteq\ass_R(N/C)=\{\fp\}$ and so $\ass_R(x(N/C))=\{\fp\}$. Therefore
$\ass_{R_\fq}(x/1(N_\fq/C_\fq))=\{\fp R_\fq\}$ and hence $x/1\notin \operatorname{Ann}_{R_\fq}(N_\fq/C_\fq)$. By Lemma \ref{lem5}, $C_\fq=(C^N_\fp(0))_\fq=C^{N_\fq}_{\fp R_\fq}(0)$.
Hence the above inclusion implies that $x/1(\h n{\fa+\fp}N)_\fq\neq 0$ and so $x \h n{\fa+\fp}N\neq 0$. It follows that
$$\operatorname{Ann}_R(\h n{\fa+\fp}N)\subseteq\operatorname{Ann}_R(N/C)$$
because $x$ is an arbitrary element of $R$ with $x N\nsubseteq C$. The last assertion follows immediately from the first part, because if $\ass_R(N)=\{\fp\}$, then $C^N_\fp(0)=0$, $\Ht N{\fa}=\Ht {R/\fp}{(\fa+\fp)/\fp}$ and, since $\surd(\operatorname{Ann}_R(N))=\fp$, $\h {\Ht N{\fa}}{\fa}N=\h {\Ht N{\fa}}{\fa+\operatorname{Ann}_R(N)}N=\h {\Ht N{\fa}}{\fa+\fp}N$.
\end{proof}
\begin{lem}\label{iso}
Let $N$ be a finitely generated $R$-module, $\fa$ an ideal of $R$ such that $\fa N\neq N$ and $n:=\grad{R} {\fa}N$. Then, for each ideal $\fb$ of $R$, there is the following isomorphism:
$$\h n{\fa+\fb}N\cong\gam {\fb}{\h n{\fa}N}.$$
\end{lem}
\begin{proof}
Note that $n$ is a finite non-negative integer because $\fa N\neq N$. By \cite[Theorem 10.47]{r}, there is the following Grothendieck's third quadrant
spectral sequence
$$\textstyle{E_2^{p, q}=\h p{\fb}{\h q{\fa}N}\underset{p}{\Rightarrow} \h {p+q}{\fa+\fb}N.}$$
We set ${H}^n:=\h n{\fa+\fb}N$. Hence there is a finite filtration
$$0=\Phi_{n+1}{H}^n\subseteq\Phi_{n}{H}^n\subseteq\cdots\subseteq\Phi_0{H}^n={H}^n$$
of submodules of ${H}^n$ such that $\Phi_{p}{H}^n/\Phi_{p+1}{H}^n\cong E^{p, q}_\infty$ for all $0\leq p\leq n$.
If $p>0$, then $q<n$ and so $E_2^{p, q}=0$. Therefore $E_{\infty}^{p, q}=0$ because $E_\infty^{p, q}$ is a subquotient of $E_2^{p, q}$. Thus
$\Phi_1{H}^n=\Phi_2{H}^n=\dots=\Phi_{n+1}{H}^n=0$ and $H^n=\Phi_0H^n\cong E_\infty^{0,n}$.
On the other hand, for each $r\geq 2$, there is the following exact sequence
\begin{displaymath}
\xymatrix {E^{-r,n+r-1}_r\ar[rr]^{\quad d^{-r,n+r-1}_r}&&E^{0, n}_r\ar[r]^{d^{0, n}_r\qquad}&E^{r, n-r+1}_r }.
\end{displaymath}
The module $E^{-r,n+r-1}_r$ is a subquotient of $E^{-r,n+r-1}_2$ and so is zero. Also, since $n-r+1<n$, $E_2^{r, n-r+1}$ and, consequently, $E_r^{r, n-r+1}$ are zero. Therefore
$$E^{0, n}_{r+1}= \ker d^{0, n}_r/ \operatorname{im}\, d^{-r,n+r-1}_r\cong E^{0, n}_r$$
for all $r\geq2$. It follows that $E_\infty^{0, n}\cong E_2^{0, n}$ and so
$$\h n{\fa+\fb}N=H^n\cong E_\infty^{0, n}\cong E_2^{0, n}=\gam {\fb}{\h n{\fa}N}.$$
\end{proof}
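As a quick check of Lemma \ref{iso} (a standard computation, added only as an illustration), let $K$ be a field, $R:=K[[x, y]]$, $N:=R$, $\fa:=Rx$ and $\fb:=Ry$, so that $n=\grad{R} {\fa}R=1$. The lemma predicts $\h 1{(x, y)}R\cong\gam {(y)}{\h 1{(x)}R}$. Indeed, $x, y$ is a regular sequence on $R$, so $\h 1{(x, y)}R=0$; on the other hand, if a class $f/x^k+R\in R_x/R\cong\h 1{(x)}R$ is killed by $y^m$, then $y^mf\in x^kR$ and, since $R$ is a unique factorization domain with $x\nmid y$, we get $f\in x^kR$. Hence $\gam {(y)}{\h 1{(x)}R}=0$ as well.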
Now we are ready to state and prove the main theorem of this section which provides a bound for the annihilator of the first non-zero local cohomology module. We recall that if $N$ is a finitely generated $R$-module and $\fa$ is an ideal of $R$ with $N\neq\fa N$, then ${\rm S}(\fa, N)=\bigcap_{\fp\in\ass_R(N),\, \fa+\fp\neq R}C^N_\fp(0)=\gam {\bigcap_{\fp\in\ass_R(N),\, \fa+\fp=R}\fp}{N}$ is the largest submodule $L$ of $N$ such that $L=\fa L$; also, ${\rm S}(\fa, N)=0$ if and only if for each $a\in\fa$, $1-a$ is a non-zerodivisor on $N$; see Definition \ref{def1}, Lemma \ref{grad} and Proposition \ref{prop5}.
\begin{thm}\label{annh7}
Let $N$ be a finitely generated $R$-module, $\fa$ an ideal of $R$ with $N\neq \fa N$ and $n:=\grad{R} {\fa}N$. Then
$$\textstyle{\operatorname{Ann}_R(N/{\rm S}(\fa, N))\subseteq\operatorname{Ann}_R(\h n{\fa}N)\subseteq\operatorname{Ann}_R(N/\bigcap_{\fp\in\Sigma} C^N_\fp(0))\cap(\bigcap_{\fp\in\Sigma'}\fp)}$$
where $\Sigma:=\{\fp\in\mass_R(N): \Ht {R/\fp}{(\fa+\fp)/\fp}=n\}$ and $\Sigma':=\{\fp\in\ass_R(N)\setminus\mass_R(N): \Ht {R/\fp}{(\fa+\fp)/\fp}=n\}$.
\end{thm}
\begin{proof}
The lower bound for the annihilator of $\h n{\fa}N$ follows from Theorem \ref{lower}(iii). Also, it follows from Lemmas \ref{iso} and \ref{annh6} that
$$\operatorname{Ann}_R(\h n{\fa}N)\subseteq\operatorname{Ann}_R(\gam {\fp}{\h n{\fa}N})=\operatorname{Ann}_R(\h n{\fa+\fp}N)\subseteq\operatorname{Ann}_R(N/C^N_{\fp}(0))$$
for all $\fp\in \Sigma$. Similarly, $\operatorname{Ann}_R(\h n{\fa}N)\subseteq\fq$ for all $\fq\in\Sigma'$. Hence
\begin{align*}
\operatorname{Ann}_R(\h n{\fa}N)&\textstyle{\subseteq\bigcap_{\fp\in\Sigma}\operatorname{Ann}_R(N/C^N_{\fp}(0))\cap(\bigcap_{\fq\in\Sigma'}\fq)}\\
&=\textstyle{\operatorname{Ann}_R(N/\bigcap_{\fp\in\Sigma} C^N_\fp(0))\cap(\bigcap_{\fq\in\Sigma'}\fq).}
\end{align*}
\end{proof}
\begin{cor}\label{cor3}
Let $N$ be a finitely generated Cohen-Macaulay $R$-module, $\fa$ an ideal of $R$ with $N\neq \fa N$, $n:=\grad{R} {\fa}N$ and $\Sigma:=\{\fp\in\ass_R(N): \Ht {R/\fp}{(\fa+\fp)/\fp}=n\}$. Then
$$\textstyle{\operatorname{Ann}_R(\h n{\fa}N)=\operatorname{Ann}_R(N/\bigcap_{\fp\in\Sigma} C^N_\fp(0)).}$$
In particular, $\Ht N{\operatorname{Ann}_R(\h n{\fa}N)}=0$ and $\dim_R(R/\operatorname{Ann}_R(\h n{\fa}N))=\dim_R(N)$.
\end{cor}
\begin{proof}
Assume that $x\in R$ and $x\h n{\fa}N\neq 0$. Hence $(x\h n{\fa}N)_\fq\cong x/1\h n{\fa R_\fq}{N_\fq}\neq 0$ for some $\fq\in\ass_R(\h n{\fa}N)$. By \cite[Theorem 2.1]{ctt}, $\depth_{R_\fq}(N_\fq)=\grad{R}{\fa}N=n$ and so $\Ht {N}\fq=\Ht N{\fa}$. Therefore $\fq$ is a minimal prime ideal of $\fa+\operatorname{Ann}_R(N)$ and hence $
\h n{\fa R_\fq}{N_\fq}\cong\h n{\fq R_\fq}{N_\fq}$. Since $\dim_{R_\fq}(N_\fq)=n$, by \cite[Theorem 3.2(iii)]{f}, $\operatorname{Ann}_{R_\fq}(\h n{\fq R_\fq}{N_\fq})=\operatorname{Ann}_{R_\fq}(N_\fq)$. Thus $(xN)_\fq\neq 0$ and so there exists $\fp R_\fq\in\ass_{R_\fq}(N_\fq)$ such that $(xN_\fq)_{\fp R_\fq}\cong(xN)_\fp\neq 0$. Since $N_\fq$ is Cohen-Macaulay, $\dim_{R_\fq}(R_\fq/\fp R_\fq)=\dim_{R_\fq}(N_\fq)=n$ and so Lemma \ref{grad} yields
$$n=\grad{R} {\fa}N\leq \Ht {R/\fp}{\fa+\fp}\leq\Ht {R/\fp}{\fq/\fp}=\dim_{R_\fq}(R_\fq/\fp R_\fq)=n.$$
Therefore $\fp\in\Sigma$ and hence $(\bigcap_{\fp'\in\Sigma} C^N_{\fp'}(0))_\fp\subseteq (C^N_\fp(0))_\fp=0$. It follows that $(x(N/\bigcap_{\fp'\in\Sigma} C^N_{\fp'}(0)))_\fp\neq 0$ and so $xN\nsubseteq\bigcap_{\fp'\in\Sigma} C^N_{\fp'}(0)$. Since $x$ is an arbitrary element of $R$ with $x\h n{\fa}N\neq 0$, we have
$\operatorname{Ann}_R(N/\bigcap_{\fp'\in\Sigma} C^N_{\fp'}(0))\subseteq\operatorname{Ann}_R(\h n{\fa}N)$. The reverse inclusion follows from Theorem \ref{annh7}.
Finally, since $\h n{\fa}N\neq 0$, $\Sigma\neq\emptyset$. Assume that $\fp\in\Sigma$. Then $$\operatorname{Ann}_R(N)\subseteq\operatorname{Ann}_R(\h n{\fa}N)\subseteq \operatorname{Ann}_R(N/C^N_\fp(0))=C_\fp(\operatorname{Ann}_R(N))\subseteq\fp.$$
Since $\Ht N{\fp}=0$ and $\dim_R(R/\fp)=\dim_R(N)$, the above inclusions imply
the last assertion.
\end{proof}
\begin{ex} \label{exa2} Let $K$ be a field and $R:=K[[x, y]]$ be the ring of formal power series over $K$ in indeterminates $x, y$. Set $N:=R/(Rx^2+Rxy), N_1:=Rx/(Rx^2+Rxy)$ and $N_2:=(Rx^2+Ry)/(Rx^2+Rxy)$. Then $0=N_1\cap N_2$ is a minimal primary decomposition of the zero submodule of $N$ with $\ass_R(N/N_1)=\{\fp:=Rx\}$ and $\ass_R(N/N_2)=\{\fn:=Rx+Ry\}$. Thus $\ass_R(N)=\{\fp, \fn\}$.
We have $\ext 1RNR\cong R/\fp$, $\ext 2RNR\cong R/\fn$ and $\ext iRNR=0$ for all $i\neq 1, 2$; see \cite[Example 2.7]{f}. Therefore
Grothendieck's Duality Theorem (see \cite[Theorem 11.2.5 or 11.2.8]{bs}) implies that
$$\gam {\fn}N\cong\Hom R{\ext 2RNR}{E_R(R/\fn)}\cong\Hom R{R/\fn}{E_R(R/\fn)}\cong R/\fn,$$
\begin{align*}
&\h 1{\fn}N\cong\Hom R{\ext 1RNR}{E_R(R/\fn)}\cong\Hom R{R/\fp}{E_R(R/\fn)}\\
&\cong E_{R/\fp}(R/\fn)\cong E_{K[[y]]}(K)\cong K[y^{-1}]
\end{align*}
($K[y^{-1}]$ is a $K[[y]]$-module via the convention that $y^r\cdot y^{-s}$ equals $y^{-(s-r)}$ when $s\geq r$ and zero otherwise) and $\h i{\fn}N=0$ for all $i\neq 0, 1$. Thus $\depth_R(N)=0$, $\cdd R{\fn}N=\dim_R(N)=1$ and
$$\operatorname{Ann}_R(\h {\depth_R(N)}{\fn}N)=Rx+Ry,$$
$$\operatorname{Ann}_R(\h {\dim_R(N)}{\fn}N)= \operatorname{Ann}_R(\Hom R{R/\fp}{E_R(R/\fn)})= \operatorname{Ann}_R(R/\fp)=Rx.$$
On the other hand, $\supp_R(N)=\operatorname{V}(\fp)\cup\operatorname{V}(\fn)=\{\fp, \fn\}$ and since $\fp$ is a minimal element of $\ass_R(N)$, $C^N_\fp(0)=N_1=Rx/(Rx^2+Rxy)$. Also, it is clear that $C^N_\fn(0)=0$. Now assume that $\Delta $ is a subset of $\supp_R(N)$. Then
$$\textstyle{\operatorname{Ann}_R(N/\bigcap_{\fq\in\Delta}C^N_\fq(0))}=\left\{\begin{array}{lll}
R& \textrm{ if } \Delta=\emptyset,\\
Rx&\textrm{ if }\Delta=\{\fp\},\\
Rx^2+Rxy&\textrm{ otherwise. }
\end{array}\right. $$
Therefore the following statements hold.
(i) There is no subset $\Delta$ of $\supp_R(N)$ such that $\operatorname{Ann}_R(\h {\depth_R(N)}{\fn}N)=\operatorname{Ann}_R(N/\bigcap_{\fq\in\Delta}C^N_\fq(0)).$
(ii) By setting $\fa:=\fn$, this example shows that, to improve the upper bound for the annihilator of $\h {\grad{R} {\fa}N}{\fa}N$ in Theorem \ref{annh7}, we cannot replace $\mass_R(N)$ by $\ass_R(N)$ in the index set $\Sigma$.
\end{ex}
In the following remark, we consider the analogues of Theorems \ref{annh2} and \ref{annh4} with $\grad{R} {\fa}N$ in place of $\cdd R{\fa}N$.
\begin{rem} Let $N$ be a finitely generated $R$-module, $\fa$ an ideal of $R$ with $N\neq\fa N$, $n:=\grad{R} {\fa}N$ and $c:=\cdd R{\fa}N$.
(i) In Theorem \ref{annh2}, we see that $\operatorname{Ann}_R(\h c{\fa}N)\subseteq\operatorname{Ann}_R(N/\bigcap_{\fq\in\Sigma}C^N_\fq(0))$, where $\Sigma:=\{\fq\in\supp_R(N): \cdd R{\fa}{R/\fq}=\dim_R(R/\fq)=c \}$.
Now assume that $R$, $\fp$, $\fn$ and $N$ are as in Example \ref{exa2} and we set $\fa:=\fn$,
\begin{align*}
&\Sigma_1:=\{\fq\in\supp_R(N): \cdd R{\fa}{R/\fq}=\dim_R(R/\fq)=n \},\\
&\Sigma_2:=\{\fq\in\supp_R(N): \cdd R{\fa}{R/\fq}=\depth_R(R/\fq)=n \},\\
&\Sigma_3:=\{\fq\in\supp_R(N): \grad{R} {\fa}{R/\fq}=\dim_R(R/\fq)=n \},\\
&\Sigma_4:=\{\fq\in\supp_R(N): \grad{R} {\fa}{R/\fq}=\depth_R(R/\fq)=n \}.
\end{align*}
By the above example, we have $\operatorname{Ann}_R(N/\bigcap_{\fq\in\Sigma_i}C^N_\fq(0))=Rx^2+Rxy$ for all $1\leq i\leq 4$ and $\operatorname{Ann}_R(\h n{\fa}N)=Rx+Ry$. Hence
$$\textstyle{\operatorname{Ann}_R(\h n{\fa}N)\nsubseteq\operatorname{Ann}_R(N/\bigcap_{\fp\in\Sigma_i}C^N_\fp(0))}$$
for all $1\leq i\leq 4$.
(ii) Assume again that $R$, $\fp$, $\fn$ and $N$ are as in Example \ref{exa2} and we set $\fa:=\fn$. Since $\fa$ is the maximal ideal of $R$, $(0:_{\h n{\fa}{N/N'}}\fa)$ has finite length for all submodules $N'$ of $N$. But, by Example \ref{exa2}, there is not a subset $\Sigma'$ of $\supp_R(N)$ such that
$\operatorname{Ann}_R(\h n{\fa}N)=\operatorname{Ann}_R(N/\bigcap_{\fp\in\Sigma'}C^N_\fp(0))$. Therefore the analogue of Theorem \ref{annh4} does not hold with $\grad{R} {\fa}N$ in place of $\cdd R{\fa}N$.
\end{rem}
\begin{prop}\label{prop3}
Let $(R, \fn)$ be a homomorphic image of a Cohen-Macaulay local ring, $N$ a non-zero finitely generated $R$-module and let $t\in\N_0$ be such that $\h t{\fn}N\neq 0$. Then
$$\dim_R(R/\operatorname{Ann}_R(\h t{\fn}N))\leq t.$$
Equality holds whenever there exists $\fp\in\ass_R(N)$ with $\dim_R(R/\fp)=t$.
\end{prop}
\begin{proof}
Since $\surd(\operatorname{Ann}_R(\h t{\fn}N))=\bigcap_{\fp\in\att_R(\h t{\fn}N)}\fp$ and $\att_R(\h t{\fn}N)$ is a finite set, we have $\dim_R(R/\operatorname{Ann}_R(\h t{\fn}N))=\dim_R(R/\fp)$ for some $\fp\in\att_R(\h t{\fn}N)$. By \cite[Corollary 1.2]{k}, $R$ is universally catenary and all its formal fibers are Cohen-Macaulay. Thus \cite[Theorem 1.1]{nq} implies that $\fp R_\fp\in\att_{R_\fp}(\h {t-\dim_R(R/\fp)}{\fp R_\fp}{N_\fp})$. Hence $t-\dim_R(R/\fp)\geq 0$ and so $$\dim_R(R/\operatorname{Ann}_R(\h t{\fn}N))\leq t.$$
If there exists $\fq\in\ass_R(N)$ with $\dim_R(R/\fq)=t$, then \cite[Corollary 4.9]{s} implies that $\fq\in\att_R(\h t{\fn}N)$ and so $\operatorname{Ann}_R(\h t{\fn}N)\subseteq\fq$. Thus $\dim_R(R/\operatorname{Ann}_R(\h t{\fn}N))\geq\dim_R(R/\fq)=t$ and so the equality holds.
\end{proof}
The following example shows that there is a local ring $(A, \fn)$ which is a homomorphic image of a complete regular local ring such that
$$\dim_A(A/\operatorname{Ann}_A(\h {\depth_A(A)}{\fn}{A}))<\depth_A(A)=\depth_A(A/\gam {\fn}A).$$
Therefore the inequality in Proposition \ref{prop3} may be strict. Also, the analogous version of Lynch's conjecture is not true for $\h {\depth_A(A)}{\fn}A$.
\begin{ex}\label{exa3}
Let $K$ be a field and let $R:=K[[x, y, z, w]]$ be the ring of formal power series over $K$ in indeterminates $x, y, z, w$. We set $\fm:=(x, y, z, w)$ and $I:=(x, y)\cap(z, w)$. Then $A:=R/I$ is a local ring with maximal ideal $\fn:=\fm/I$. By \cite[Example 2.8]{f} and the Independence Theorem, we have $\gam {\fn}{A}\cong\gam {\fm}{R/I}=0$ and
$\h 1{\fn}{A}\cong\h 1{\fm}{R/I}\cong R/\fm\cong A/\fn$. Therefore $\depth_A(A)=\depth_A(A/\gam {\fn}A)=1$ and
$$\dim_A(A/\operatorname{Ann}_A(\h {\depth_A(A)}{\fn}{A}))=\dim_A(A/\fn)=0.$$
Therefore, in general, $\dim_A(A/\operatorname{Ann}_A(\h {\depth_A(A)}{\fn}{A}))$ is not equal to $\depth_A(A)$ or $\depth_A(A/\gam {\fn}A)$, and the inequality in Proposition \ref{prop3} may be strict.
\end{ex}
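For the reader's convenience, we sketch a direct verification of the computation in Example \ref{exa3}; this is a routine check, consistent with the cited \cite[Example~2.8]{f}. Since $(x,y)+(z,w)=\fm$ and $(x,y)\cap(z,w)=I$, there is a short exact sequence
\begin{equation*}
0\longrightarrow R/I\longrightarrow R/(x,y)\oplus R/(z,w)\longrightarrow R/\fm\longrightarrow 0.
\end{equation*}
As $R/(x,y)\cong K[[z,w]]$ and $R/(z,w)\cong K[[x,y]]$ are Cohen-Macaulay of dimension two, their local cohomology modules with respect to $\fm$ vanish in degrees $i\leq 1$, and the long exact sequence of local cohomology yields $\gam {\fm}{R/I}=0$ and $\h 1{\fm}{R/I}\cong\gam {\fm}{R/\fm}\cong K$.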
Let $(R, \fn)$ be a local ring and $N$ a non-zero finitely generated $R$-module. Then, for each $\fp\in\ass_R(N)$, $\depth_R(N)\leq\dim_R(R/\fp)$. We say that $N$ has {\it maximal depth} if $\depth_R(N)=\dim_R(R/\fp)$ for some $\fp\in\ass_R(N)$. Cohen-Macaulay modules and sequentially Cohen-Macaulay modules have maximal depth; see \cite{r2} for more details. Now assume in addition that $R$ is a homomorphic image of a Cohen-Macaulay local ring. Proposition \ref{prop3} shows that $\dim_R(R/\operatorname{Ann}_R(\h {\depth_R(N)}\fn N))\leq\depth_R(N)$ and Example \ref{exa3} shows that this inequality may be strict. In the following corollary we see that the equality holds if $N$ has maximal depth.
\begin{cor} \label{prop4} Let $(R, \fn)$ be a homomorphic image of a Cohen-Macaulay local ring and let $N$ be a non-zero finitely generated $R$-module which has maximal depth. Then
$$\dim_R(R/\operatorname{Ann}_R(\h {\depth_R(N)}{\fn} N))=\depth_R(N).$$
\end{cor}
\begin{proof} This is an immediate consequence of Proposition \ref{prop3}.
\end{proof}
\end{document}
\begin{document}
\title[On Landis Conjecture for the Fractional Schr\"{o}dinger Equation]{On Landis Conjecture for the Fractional Schr\"{o}dinger Equation}
\author{Pu-Zhao Kow}
\address{Department of Mathematics and Statistics, P.O. Box 35 (MaD), FI-40014 University of Jyv\"{a}skyl\"{a}, Finland.}
\email{\href{mailto:[email protected]}{[email protected]}}
\begin{abstract}
In this paper, we study a Landis-type conjecture for the general fractional Schr\"{o}dinger equation $((-P)^{s}+q)u=0$. As a byproduct, we also prove the additivity and boundedness of the linear operator $(-P)^{s}$ for non-smooth coefficients. For differentiable potentials $q$, if a solution decays at the rate $\exp(-|x|^{1+})$, then it vanishes identically. For non-differentiable potentials $q$, if a solution decays at the rate $\exp(-|x|^{\frac{4s}{4s-1}+})$, then it must again be trivial. The proofs rely on delicate Carleman estimates. This study extends the work of R\"{u}land and Wang (2019).
\end{abstract}
\subjclass[2020]{35R11; 35A02; 35B60. }
\keywords{Landis conjecture; unique continuation at infinity; fractional Schr\"{o}dinger equation; Carleman-type estimates. }
\maketitle
\section{Introduction}
In this work, we study a Landis-type conjecture for the fractional Schr\"{o}dinger equation
\begin{equation}
((-P)^{s}+q)u=0\quad\text{in }\mathbb{R}^{n}, \quad \text{where} \quad P=\sum_{j,k=1}^{n}\partial_{j}a_{jk}(x)\partial_{k} \label{eq:Sch}
\end{equation}
with $s\in(0,1)$ and $|q(x)|\le1$. Here, the operator $(-P)^s$ is defined as
\begin{equation}
(-P)^{s}u(x) := \int_{0}^{\infty} \lambda^{s} \, \mathsf{d}E_{\lambda}u(x) = \frac{1}{\Gamma(-s)} \int_{0}^{\infty}(e^{tP}-1)u(x)\,\frac{\mathsf{d}t}{t^{1+s}} \label{eq:definition-fractional}
\end{equation}
for all
\begin{equation*}
u \in {\rm dom}\,((-P)^{s}) := \bigg\{ u \in L^{2}(\mathbb{R}^{n}) : \int_{0}^{\infty} \lambda^{2s} \, \mathsf{d} \| E_{\lambda}u \|^{2} < \infty \bigg\}
\end{equation*}
where $\{E_{\lambda}\}$ is the spectral resolution of $-P$ (each $\{E_{\lambda}\}$ is a projection in $L^{2}(\mathbb{R}^{n})$) and $\{e^{tP}\}_{t\ge0}$ is the heat-diffusion semigroup generated by $-P$, see e.g. \cite{GLX17FractionalCalderon,ST10ExtensionProblem}.
The Landis conjecture was proposed by E.~M. Landis in the 1960s \cite{KL88}. He conjectured the following statement: let $|q(x)|\le1$ and let $u$ be a solution to \eqref{eq:Sch} with $P=\Delta$ and $s=1$. If $|u(x)|\le C_{0}$ and $|u(x)|\le\exp(-C|x|^{1+})$, then $u\equiv0$. However, this statement is false: in \cite{Mes92Landis}, Meshkov constructed a (complex-valued) potential $q$ and a (complex-valued) nontrivial solution $u$ with $|u(x)|\le C\exp(-C|x|^{\frac{4}{3}})$. In the same work, he also showed that if $|u(x)|\le C\exp(-C|x|^{\frac{4}{3}+})$, then $u\equiv0$. In other words, the exponent $\frac{4}{3}+$ is optimal. In \cite{BK05quantitativeLandis}, Bourgain and Kenig derived a quantitative form of Meshkov's result based on the Carleman method; their result was then extended by Davey \cite{Dav14MagneticSch} to include a drift term. Subsequently, in \cite{LW14Landis}, Lin and Wang further extended Davey's result by replacing $\Delta$ with $P$.
The results mentioned above allow \emph{complex-valued} solutions. It is also interesting to study the real version of the Landis conjecture, which was proposed by Kenig in \cite[Question~1]{Ken06RealLandis}. The cases $n=1$ and $n=2$ were resolved in \cite{Ros21realLandis} and \cite{LMNN20realLandis}, respectively. To the best of the author's knowledge, the real version of the Landis conjecture is still open for $n \ge 3$. We also refer to the related works \cite{Dav20realLandis,DKW17realLandis,DKW20realLandis,DW20LandisConjecturePlane,KSW15realLandis}.
In \cite{RW19Landis}, R\"{u}land and Wang considered the Landis conjecture for the fractional Schr\"{o}dinger equation \eqref{eq:Sch} with $P=\Delta$ and $0<s<1$. For the case $s=1/2$, we remark that Cassano \cite{Cas20SharpExponentialDecay} proved the Landis conjecture for the Dirac equation. In some sense, the Dirac operator is the square root of the Laplacian, so the phenomena are similar when $s=1/2$.
\subsection{Main results}
We assume that the second order elliptic operator $P$ satisfies the elliptic condition
\begin{equation}
\lambda|\xi|^{2}\le\sum_{j,k=1}^{n}a_{jk}(x)\xi_j\xi_{k}\le\lambda^{-1}|\xi|^{2}\quad\text{for some constant }0<\lambda\le 1.\label{eq:unif-ellip}
\end{equation}
Assume that $a_{jk}=a_{kj} \in \mathcal{C}^{0,1}(\mathbb{R}^{n})$ for all $1\le j,k \le n$, and satisfy
\begin{equation}
\max_{1\le j,k\le n}\sup_{|x|\ge1}|a_{jk}(x)-\delta_{jk}|+\max_{1\le j,k\le n}\sup_{|x|\ge1}|x||\nabla a_{jk}(x)|\le\epsilon\label{eq:decay}
\end{equation}
for some sufficiently small $\epsilon>0$ and
\begin{equation}
\max_{1\le j,k\le n}\sup_{|x|\ge1}|\nabla^{2}a_{jk}(x)|\le C\label{eq:2der-bdd}
\end{equation}
for some positive constant $C$.
In this paper, we prove the following Landis-type conjecture for the fractional Schr\"{o}dinger equations.
\begin{thm}
\label{thm:result1}Let $s\in(0,1)$ and assume that $u\in {\rm dom}\,((-P)^{s})$ is a solution to \eqref{eq:Sch} with \eqref{eq:unif-ellip}, \eqref{eq:decay} and \eqref{eq:2der-bdd}. We assume that the potential $q\in\mathcal{C}^{1}(\mathbb{R}^{n})$
satisfies $|q(x)|\le1$ and
\[
|x||\nabla q(x)|\le1.
\]
If $u$ further satisfies
\[
\int_{\mathbb{R}^{n}}e^{|x|^{\alpha}}|u|^{2}\,\mathsf{d}x\le C<\infty\quad\text{for some }\alpha>1,
\]
then $u\equiv0$.
\end{thm}
We also have the following result for non-differentiable potential
$q$.
\begin{thm}
\label{thm:result2}Let $s\in(1/4,1)$ and assume that $u\in {\rm dom}\,((-P)^{s})$
is a solution to \eqref{eq:Sch} with \eqref{eq:unif-ellip}, \eqref{eq:decay}, and \eqref{eq:2der-bdd}. Now we assume that the potential $q$ satisfies
$|q(x)|\le1$. If $u$ satisfies
\[
\int_{\mathbb{R}^{n}}e^{|x|^{\alpha}}|u|^{2}\,\mathsf{d}x\le C<\infty\quad\text{for some }\alpha>\frac{4s}{4s-1},
\]
then $u\equiv0$.
\end{thm}
\begin{rem}
When $s=\frac 12$, Theorem \ref{thm:result1} and Theorem
\ref{thm:result2} still hold without \eqref{eq:2der-bdd}.
\end{rem}
\begin{rem}
We prove Theorem~\ref{thm:result2} using the splitting arguments in \cite{RW19Landis}. Therefore, due to the sub-elliptic nature of the problem, we have the same restriction $s\in(\frac{1}{4},1)$. We also see that, as $s\rightarrow1$, the exponent $\frac{4s}{4s-1}$ in Theorem~\ref{thm:result2} tends to $\frac{4}{3}$, which is the optimal exponent for the classical Schr\"{o}dinger equation.
\end{rem}
\begin{rem}
The condition \eqref{eq:decay} allows only small perturbations of the Laplacian; it serves as a sufficient condition for deriving the Carleman estimate. In \cite{GR19unique}, a similar assumption was imposed to prove the strong unique continuation property for \eqref{eq:Sch}. In contrast, the works \cite{DKW17realLandis,Ros21realLandis}, which studied the \emph{real version} of the Landis conjecture, do not need such a condition, since their proofs do not involve any Carleman estimate.
\end{rem}
\subsection{Main ideas}
The main tool in the proofs of Theorems~\ref{thm:result1} and \ref{thm:result2} is Carleman estimates. However, due to the non-locality of $(-P)^s$, the techniques here are much more complicated than those for the classical case $s=1$. One of the major tricks is to localize $(-P)^s$, which is motivated by Caffarelli-Silvestre's fundamental work \cite{CS07extension}. Here we use the Caffarelli-Silvestre type extension of $(-P)^s$ proved in \cite{ST10ExtensionProblem,Sti10Fractional}. After localizing $(-P)^s$, we derive a Carleman estimate on $\mathbb{R}_{+}^{n+1}$ mimicking the one proved in \cite{RS20Calderon}. This Carleman estimate allows us to pass from boundary decay to bulk decay.
\subsection{\label{sec:regularity}Main difficulties: Regularity of $(-P)^{s}$}
Using the Fourier transform, it is easy to see that
\[
(-\Delta)^{\alpha}(-\Delta)^{\beta}=(-\Delta)^{\alpha+\beta} \quad \text{and} \quad (-\Delta)^{s} \in \mathcal{L}(\dot{H}^{\beta+s}(\mathbb{R}^{n}),\dot{H}^{\beta-s}(\mathbb{R}^{n})).
\]
However, extending these properties to $(-P)^{s}$ is not trivial. We establish the additivity property of $(-P)^{s}$ by introducing the Balakrishnan definition of $(-P)^{s}$, which is equivalent to \eqref{eq:definition-fractional}, see e.g. \cite{MS01FractionalOperators} or \cite[Section IX.11]{Yos80FunctionalAnalysis}. The continuity of $(-P)^{s}:H^{2s}(\mathbb{R}^{n})\rightarrow L^{2}(\mathbb{R}^{n})$ can also be obtained via the Balakrishnan operator, together with interpolation for the single operator $-P$. Here we do not interpolate within the family of operators $(-P)^{s}$; see \cite{GM14AnalyticFamiliesMultilinear} for the interpolation theory of analytic families of multilinear operators.
\begin{rem}
In \cite{See67ComplexPowerEllipticOperator}, R.~T. Seeley showed that the operator $(-P)^{s}$ is a pseudo-differential operator of order $2s$ if the $a_{jk}$ are smooth. In this case, one can apply the theory of pseudo-differential operators, see e.g. \cite{Tay74PseudoDifferentialOperator}. As a byproduct, we loosen the smoothness hypothesis required by the pseudo-differential theory. Moreover, boundary value theories for the fractional Laplacian have been elaborated in recent years, see e.g. \cite{Gru14MuTransmissionFractionalElliptic,Gru15FractionalLaplacianDomains,Gru16IntegrationByPartsPohozaev,Gru16RegularitySpectralFractional,Gru20ExactGreenFormulaFractionalLaplacian}. In \cite{Gru20ExactGreenFormulaFractionalLaplacian}, Grubb calculated the first few terms in the symbol of $(-P)^{s}$.\footnote{I would like to thank Prof.\ Gerd Grubb for bringing these issues to my attention and for pointing out several related references.}
\end{rem}
\subsection{Main difficulties: Carleman estimates}
In \cite{RW19Landis}, R\"{u}land and Wang proved their Carleman estimates by estimating a certain commutator term, see \cite[(31)--(33)]{RW19Landis}. In our case, we approximate $P$ by $\Delta$. However, we face difficulties in controlling the remainder terms; we solve this problem using the ideas in \cite{Reg97StrongUniquenessSecondOrderElliptic}. It is also interesting to mention that the second-derivative terms in the Carleman estimate should be $\tilde{\nabla}(\nabla\tilde{u})$ rather than $\tilde{\nabla}^{2}\tilde{u}$, where $\tilde{\nabla}=(\nabla,\partial_{n+1})$ is the gradient operator on $\mathbb{R}^{n+1}$ and $\tilde{u}$ is the Caffarelli-Silvestre type extension of $u$.
\subsection{Organization of the paper}
In Section~\ref{sec:definition}, we localize the operator $(-P)^{s}$ and resolve the issues described in Paragraph~\ref{sec:regularity}. In Section~\ref{sec:decay}, we show that the decay of $u$ implies the decay of its Caffarelli-Silvestre type extension $\tilde{u}$. We then derive some delicate Carleman estimates on $\mathbb{R}_{+}^{n+1}$ in Section~\ref{sec:Carleman}. Finally, we prove Theorem~\ref{thm:result1} and Theorem~\ref{thm:result2} in Section~\ref{sec:result}.
\section{\label{sec:definition}Caffarelli-Silvestre type Extension}
Let $\mathbb{R}_{+}^{n+1}=\mathbb{R}^{n}\times\mathbb{R}_{+}=\{(x',x_{n+1}):x_{n+1} > 0\}$, and write $x=(x',x_{n+1})$ with $x'\in\mathbb{R}^{n}$ and $x_{n+1}\in\mathbb{R}_{+}$. We also denote $\nabla'=(\partial_{1},\cdots,\partial_{n})$ and $\nabla = (\nabla' , \partial_{n+1})$. For $x_{0}\in\mathbb{R}^{n}\times\{0\}$, we denote the half balls in $\mathbb{R}_{+}^{n+1}$ and the balls in $\mathbb{R}^{n}\times\{0\}$ by
\begin{align*}
B_{r}^{+}(x_{0})&:=\{x\in\mathbb{R}_{+}^{n+1}:|x-x_{0}|\le r\}, \\
B_{r}'(x_{0})&:=\{(x',0) \in \mathbb{R}^{n}\times\{0\} : |(x',0) - x_{0}| \le r\},
\end{align*}
$B_{r}^{+}(0)=B_{r}^{+}$, and $B_{r}'(0)=B_{r}'$. We define the annuli
\begin{align*}
A_{r,R}^{+} &:= \{x\in\mathbb{R}_{+}^{n+1}:r\le|x|\le R\}, \\
A_{r,R}' &:= \{(x',0)\in\mathbb{R}^{n}\times\{0\}:r\le|(x',0)|\le R\}.
\end{align*}
We consider the following Sobolev spaces:
\begin{align*}
L^{2}(D,x_{n+1}^{1-2s}) & :=\bigg\{ v:D\rightarrow\mathbb{R}:\int_{D}x_{n+1}^{1-2s}|v|^{2}\,\mathsf{d}x<\infty\bigg\},\\
\dot{H}^{1}(D,x_{n+1}^{1-2s}) & :=\bigg\{ v:D\rightarrow\mathbb{R}:\int_{D}x_{n+1}^{1-2s}|\nabla v|^{2}\,\mathsf{d}x<\infty\bigg\},\\
H^{1}(D,x_{n+1}^{1-2s}) & :=\bigg\{ v:D\rightarrow\mathbb{R}:\int_{D}x_{n+1}^{1-2s}(|v|^{2}+|\nabla v|^{2})\,\mathsf{d}x<\infty\bigg\},
\end{align*}
where $D$ is a relative open set in $\overline{\mathbb{R}_{+}^{n+1}}$.
For $s\in(0,1)$, let $\tilde{u}$ be a solution to the following degenerate elliptic equation:
\begin{align}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}P\bigg]\tilde{u} & =0\quad\text{in }\mathbb{R}_{+}^{n+1},\label{eq:Sch-ext}\\
\tilde{u} & =u\quad\text{on }\mathbb{R}^{n}\times\{0\}.\label{eq:Dirichlet-BC}
\end{align}
By \cite[equation~(1.8) in Theorem 1.1]{ST10ExtensionProblem}, the fractional elliptic operator $(-P)^{s}$ satisfies
\begin{equation}
(-P)^{s}u(x')=c_{s}\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}(x) \label{eq:Neumann-BC}
\end{equation}
with
\[
c_{s} = \frac{4^{s}\Gamma(s)}{2s\Gamma(-s)} < 0 \quad (\text{In particular }c_{1/2}=-1),
\]
see also \cite{Sti10Fractional}.
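As a quick sanity check of the constant, for $s=\frac{1}{2}$ one can verify the normalization directly: using $\Gamma(\frac{1}{2})=\sqrt{\pi}$ and $\Gamma(-\frac{1}{2})=-2\sqrt{\pi}$,
\[
c_{1/2}=\frac{4^{1/2}\Gamma(\frac{1}{2})}{2\cdot\frac{1}{2}\cdot\Gamma(-\frac{1}{2})}=\frac{2\sqrt{\pi}}{-2\sqrt{\pi}}=-1,
\]
so that \eqref{eq:Neumann-BC} reduces to $(-P)^{1/2}u=-\lim_{x_{n+1}\rightarrow0}\partial_{n+1}\tilde{u}$ in this case.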
The following lemma is a special case of \cite[Proposition~2.1]{GR19unique}:
\begin{lem}
\label{lem:well0} Let $0<s<1$, and assume that $a_{jk}=a_{kj}\in \mathcal{C}^{0,1}(\mathbb{R}^{n})$ satisfy the elliptic condition \eqref{eq:unif-ellip}. Then there exists an extension operator
\[
\mathsf{E}_{s}:{\rm dom}\,((-P)^{s}) \rightarrow
H_{\rm loc}^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s}) \cap \mathcal{C}_{\rm loc}^{2,1}(\mathbb{R}_{+}^{n+1})
\]
such that $\tilde{u} = \mathsf{E}_{s}(u)$ is a solution of \eqref{eq:Sch-ext} and the boundary conditions \eqref{eq:Dirichlet-BC} and \eqref{eq:Neumann-BC} are attained as $L^{2}(\mathbb{R}^{n})$-limits.
\end{lem}
The proof of Lemma~\ref{lem:well0} is the same as in \cite{ST10ExtensionProblem,Sti10Fractional}. The following estimate also holds:
\begin{equation}
\|\tilde{u}(\bullet,x_{n+1})\|_{L^{2}(\mathbb{R}^{n})}\le\|u\|_{L^{2}(\mathbb{R}^{n})} \quad \text{for all}\;\; x_{n+1}>0\label{eq:apriori}
\end{equation}
with $\tilde{u} = \mathsf{E}_{s}(u)$; see \cite[page~2097]{ST10ExtensionProblem} or \cite[pages~48--49]{Sti10Fractional}. Moreover, by \cite[Proposition~2.6]{Yu17unique},
\begin{equation}
\mathsf{E}_{s} : H^{s}(\mathbb{R}^{n}) \rightarrow H_{\rm loc}^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s}) \label{eq:extension-domain}
\end{equation}
is a bounded linear operator. Using \cite[Remark~7.4]{LM72FractionalSobolevSpacesVol1}, we know that
\begin{equation*}
\mathcal{C}_{c}^{\infty}(\overline{\mathbb{R}_{+}^{n+1}}) \text{ is dense in } H_{\rm loc}^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s}),
\end{equation*}
thus, given any $v \in H^{s}(\mathbb{R}^{n})$, we have $\tilde{v} = \mathsf{E}_{s}(v) \in H_{\rm loc}^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s})$ and
\begin{align*}
& \bigg| \int_{\mathbb{R}^{n} \times \{0\}} ((-P)^{s}u) v \, \mathsf{d}x' \bigg| \equiv \bigg| \int_{\mathbb{R}^{n} \times \{0\}} \bigg( \lim_{x_{n+1} \rightarrow 0}x_{n+1}^{1-2s} \partial_{n+1}\tilde{u}\bigg) v \, \mathsf{d}x' \bigg|\\
= & \bigg| \int_{\mathbb{R}_{+}^{n+1}} x_{n+1}^{1-2s} \partial_{n+1} \tilde{u} \, \partial_{n+1} \tilde{v} \, \mathsf{d}x + \int_{\mathbb{R}_{+}^{n+1}} x_{n+1}^{1-2s} A(x') \nabla' \tilde{u} \cdot \nabla' \tilde{v} \, \mathsf{d}x \bigg| \\
\le & \lambda^{-1} \| \nabla \tilde{u} \|_{L^{2}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s})} \| \nabla \tilde{v} \|_{L^{2}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s})} \\
\equiv & \lambda^{-1} \| \mathsf{E}_{s}(u) \|_{\dot{H}^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s})} \| \mathsf{E}_{s}(v) \|_{\dot{H}^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s})} \\
\le & C \| u \|_{H^{s}(\mathbb{R}^{n})} \| v \|_{H^{s}(\mathbb{R}^{n})} \quad \text{using \eqref{eq:extension-domain}}.
\end{align*}
Therefore, by arbitrariness of $v \in H^{s}(\mathbb{R}^{n})$, we conclude the following lemma:
\begin{lem}
\label{lem:well1} Let $0<s<1$ and let $a_{jk}$ be as in Lemma~{\rm \ref{lem:well0}}. Then $(-P)^{s} : H^{s}(\mathbb{R}^{n}) \rightarrow H^{-s}(\mathbb{R}^{n})$ is a bounded linear operator.
\end{lem}
Note that
\[
Pu=\sum_{j,k=1}^{n}a_{jk}\partial_{j}\partial_{k}u+\sum_{j,k=1}^{n}(\partial_{j}a_{jk})\partial_{k}u.
\]
Since the $a_{jk}$ are uniformly Lipschitz, we have
\begin{equation}
\|Pu\|_{L^{2}(\mathbb{R}^{n})}\le C\|u\|_{H^{2}(\mathbb{R}^{n})}.\label{eq:dom1}
\end{equation}
We here also remark that ${\rm dom}\,(-P) = H^{2}(\mathbb{R}^{n})$ is the maximal extension such that $-P$ is self-adjoint and densely defined in $L^{2}(\mathbb{R}^{n})$, see \cite[equation~(2.8)]{GLX17FractionalCalderon}. Given any $\phi \in \mathcal{C}_{c}^{\infty}(\mathbb{R}^{n})$, we see that
\[
\langle Pu,\phi \rangle = (u,P\phi)_{L^{2}(\mathbb{R}^{n})} \le \|u\|_{L^{2}(\mathbb{R}^{n})} \|P\phi\|_{L^{2}(\mathbb{R}^{n})} \le C\|u\|_{L^{2}(\mathbb{R}^{n})} \|\phi\|_{H^{2}(\mathbb{R}^{n})}
\]
where $\langle\bullet,\bullet\rangle$ denotes the duality pairing between $H^{-2}(\mathbb{R}^{n})$ and $H^{2}(\mathbb{R}^{n})$. Since
\begin{equation*}
\mathcal{C}_{c}^{\infty}(\mathbb{R}^{n}) \text{ is dense in } H^{\gamma}(\mathbb{R}^{n}) \text{ for each }\gamma \in \mathbb{R} \quad (\text{see e.g. \cite[Remark~7.4]{LM72FractionalSobolevSpacesVol1}}),
\end{equation*}
then we know that
\begin{equation}
\|Pu\|_{H^{-2}(\mathbb{R}^{n})} \le C\|u\|_{L^{2}(\mathbb{R}^{n})}.\label{eq:dom2}
\end{equation}
We shall prove the following:
\begin{lem}
\label{lem:well2} Let $0<s<1$ and let $a_{jk}$ be as in Lemma~{\rm \ref{lem:well0}}. Then we have the inequality
\begin{equation}
\|(-P)^{s}u\|_{L^{2}(\mathbb{R}^{n})}\le C\|u\|_{H^{2s}(\mathbb{R}^{n})}.\label{eq:well2-1}
\end{equation}
Moreover, we have
\begin{equation}
\|(-P)^{s}u\|_{H^{-2s}(\mathbb{R}^{n})}\le C\|u\|_{L^{2}(\mathbb{R}^{n})}.\label{eq:well2-2}
\end{equation}
\end{lem}
\begin{rem}
Using the duality argument as in \eqref{eq:dom2}, we know that \eqref{eq:well2-1} and \eqref{eq:well2-2} are equivalent.
\end{rem}
In order to prove Lemma~\ref{lem:well2}, we introduce the Balakrishnan operator as in \cite[Definition~3.1.1 and Definition~5.1.1]{MS01FractionalOperators}.
\begin{defn}
Let $\alpha\in\mathbb{C}_{+}=\{z\in\mathbb{C}:\Re z>0\}$.
\begin{enumerate}
\item If $0<\Re\alpha<1$, then ${\rm dom}\,((-P)_{B}^{\alpha})={\rm dom}\,(-P)$
and
\[
(-P)_{B}^{\alpha}\phi=\frac{\sin\alpha\pi}{\pi}\int_{0}^{\infty}\lambda^{\alpha-1}(\lambda-P)^{-1}(-P)\phi\,d\lambda.
\]
\item If $\Re\alpha=1$, then ${\rm dom}\,((-P)_{B}^{\alpha})={\rm dom}\,((-P)^{2})$
and
\[
(-P)_{B}^{\alpha}\phi=\frac{\sin\alpha\pi}{\pi}\int_{0}^{\infty}\lambda^{\alpha-1}\bigg[(\lambda-P)^{-1}-\frac{\lambda}{\lambda^{2}+1}\bigg](-P)\phi\,d\lambda+\sin\frac{\alpha\pi}{2}(-P)\phi.
\]
\item If $n<\Re\alpha<n+1$ for $n\in\mathbb{N}$, then ${\rm dom}\,((-P)_{B}^{\alpha})={\rm dom}\,((-P)^{n+1})$
and
\[
(-P)_{B}^{\alpha}\phi=(-P)_{B}^{\alpha-n}(-P)^{n}\phi.
\]
\item If $\Re\alpha=n+1$ for $n\in\mathbb{N}$, then ${\rm dom}\,((-P)_{B}^{\alpha})={\rm dom}\,((-P)^{n+2})$
and
\[
(-P)_{B}^{\alpha}\phi=(-P)_{B}^{\alpha-n}(-P)^{n}\phi.
\]
\end{enumerate}
\end{defn}
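As an illustration (not needed in the sequel), when $P=\Delta$ the Balakrishnan formula reduces to the usual Fourier definition of the fractional Laplacian: for $0<s<1$ and $\phi\in H^{2}(\mathbb{R}^{n})$, taking Fourier transforms in case (1) above gives
\[
\widehat{(-\Delta)_{B}^{s}\phi}(\xi)=\frac{\sin s\pi}{\pi}\int_{0}^{\infty}\lambda^{s-1}\frac{|\xi|^{2}}{\lambda+|\xi|^{2}}\,d\lambda\,\hat{\phi}(\xi)=|\xi|^{2s}\hat{\phi}(\xi),
\]
where we used the elementary identity $\int_{0}^{\infty}\frac{\lambda^{s-1}}{\lambda+\mu}\,d\lambda=\frac{\pi}{\sin s\pi}\mu^{s-1}$ for $\mu>0$.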
The following proposition, which can be found in \cite[Theorem~6.1.6]{MS01FractionalOperators}, shows that $(-P)_{B}^{s}$ and $(-P)^{s}$ are equivalent.
\begin{prop}
Let $0<s<1$. If $u\in{\rm dom}\,((-P)_{B}^{s})$, then the strong limit
\[
\lim_{\epsilon\rightarrow0_{+}}\int_{\epsilon}^{\infty}(1-e^{tP})u\,\frac{dt}{t^{1+s}}\quad\text{exists}
\]
and
\[
(-P)_{B}^{s}u=c_{s}'\lim_{\epsilon\rightarrow0_{+}}\int_{\epsilon}^{\infty}(1-e^{tP})u\,\frac{dt}{t^{1+s}}\quad\text{for some positive constant }c_{s}',
\]
where $\{e^{tP}\}_{t\ge0}$ is the heat-diffusion semigroup generated
by $-P$.
\end{prop}
Henceforth, we shall not distinguish between $(-P)^{s}$ and $(-P)_{B}^{s}$, nor between ${\rm dom}\,((-P)^{s})$ and ${\rm dom}\,((-P)_{B}^{s})$. Using \cite[Theorem~5.1.2]{MS01FractionalOperators}, we have the following fact:
\[
\text{If } u \in {\rm dom}\,((-P)^{\alpha + \beta}) \text{, then }(-P)^{\beta}u \in {\rm dom}\,((-P)^{\alpha}),
\]
and the following identity holds:
\begin{equation}
(-P)^{\alpha}(-P)^{\beta}u=(-P)^{\alpha+\beta}u \quad \text{for all}\;\;u\in {\rm dom}\,((-P)^{\alpha + \beta}) \label{eq:add}
\end{equation}
for all $\alpha,\beta\in\mathbb{C}$ with $\Re \alpha>0$
and $\Re \beta>0$. Since $(-P)^{s}$ is self-adjoint in $L^{2}(\mathbb{R}^{n})$, \eqref{eq:add} gives
\[
\|(-P)^{s}u\|_{L^{2}(\mathbb{R}^{n})}^{2} = ((-P)^{s}(-P)^{s}u,u)_{L^{2}(\mathbb{R}^{n})} = ((-P)^{2s}u,u)_{L^{2}(\mathbb{R}^{n})}.
\]
Now we are ready to prove Lemma~\ref{lem:well2}.
\begin{proof}[Proof of Lemma~{\rm \ref{lem:well2}}]
We first consider the case $0< s \le 1/2$. Since $(-P)^{s}$ is self-adjoint and $(-P)^{2s}=(-P)^{s}(-P)^{s}$ by \eqref{eq:add}, Lemma~\ref{lem:well1} immediately implies
\begin{equation}
\|(-P)^{s}u\|_{L^{2}(\mathbb{R}^{n})}^{2} = ((-P)^{2s}u,u)_{L^{2}(\mathbb{R}^{n})} \le \|(-P)^{2s}u\|_{H^{-2s}(\mathbb{R}^{n})} \|u\|_{H^{2s}(\mathbb{R}^{n})} \le C\|u\|_{H^{2s}(\mathbb{R}^{n})}^{2}.\label{eq:continuity-2s}
\end{equation}
When $1/2<s<1$, by observing that $(-P)^{2s}=(-P)^{2s-1}(-P)=(-P)(-P)^{2s-1}$ (using \eqref{eq:add}) and $0<2s-1<1$, using Lemma~\ref{lem:well1} we can easily show that
\begin{align*}
\|(-P)^{2s}u\|_{H^{1-2s}(\mathbb{R}^{n})} & \le C\|u\|_{H^{1+2s}(\mathbb{R}^{n})} \\
\|(-P)^{2s}u\|_{H^{-1-2s}(\mathbb{R}^{n})} & \le C\|u\|_{H^{-1+2s}(\mathbb{R}^{n})}.
\end{align*}
By interpolating the above two inequalities, we conclude that \eqref{eq:continuity-2s} holds for all $0<s<1$, and we complete the proof of Lemma~\ref{lem:well2}.
\end{proof}
\section{\label{sec:decay}Boundary Decay Implies Bulk Decay}
First, we translate the decay behavior of $u$ on $\mathbb{R}^{n}$
into decay behavior of its extension on $\mathbb{R}_{+}^{n+1}$.
\begin{prop}
\label{prop:bulk-decay} Let $s\in(0,1)$ and $u\in H^{s}(\mathbb{R}^{n})$
be a solution to \eqref{eq:Sch}, with \eqref{eq:unif-ellip} and \eqref{eq:decay}.
For $s\neq\frac{1}{2}$, we further assume \eqref{eq:2der-bdd}. Assume
that $|q(x)|\le1$ and there exists $\alpha>1$ such that
\[
\int_{\mathbb{R}^{n}}e^{|x|^{\alpha}}|u|^{2}\,dx\le C<\infty.
\]
Then there exist constants $C_{1},C_{2}>0$ so that
the Caffarelli-Silvestre type extension $\tilde{u}(x)$ satisfies
\[
|\tilde{u}(x)|\le C_{1}e^{-C_{2}|x|^{\alpha}}\quad\text{for all }x\in\mathbb{R}_{+}^{n+1}.
\]
\end{prop}
The idea of the proof of Proposition~\ref{prop:bulk-decay} is similar to that of \cite[Proposition~2.2]{RW19Landis}, which utilized \cite[Propositions~5.10--5.12]{RS20Calderon}. The extension of these propositions involves many details, especially the Carleman estimate in \cite[Proposition~5.7]{RS20Calderon}. For the sake of readability, we present the details of the proofs here.
In order to obtain the interior decay, similarly to \cite[Proposition~2.3]{RW19Landis}, we need the following three-ball inequality.
\begin{lem}
\label{lem:interior}Let $s\in(0,1)$ and $\tilde{u}\in H^{1}(B_{4}^{+},x_{n+1}^{1-2s})$
be a solution to
\[
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}P\bigg]\tilde{u}=0\quad\text{in }\mathbb{R}_{+}^{n+1}
\]
with \eqref{eq:unif-ellip}. Assume that $r\in(0,1)$ and $\overline{x}_{0}=(\overline{x}_{0}',5r)\in B_{2}^{+}$.
Then, there exists $\alpha=\alpha(n,s)\in(0,1)$ such that
\[
\|\tilde{u}\|_{L^{\infty}(B_{2r}^{+}(\overline{x}_{0}))}\le C\|\tilde{u}\|_{L^{\infty}(B_{r}^{+}(\overline{x}_{0}))}^{\alpha}\|\tilde{u}\|_{L^{\infty}(B_{4r}^{+}(\overline{x}_{0}))}^{1-\alpha}.
\]
\end{lem}
\begin{proof}
As $(\overline{x}_{0})_{n+1}=5r$, this follows from a standard interior
$L^{2}$ three-ball inequality together with $L^{\infty}$-$L^{2}$
estimates for uniformly elliptic equations.
\end{proof}
We also need the following boundary-bulk propagation of smallness
estimate:
\begin{lem}
\label{lem:small}Let $s\in(0,1)$ and let $\tilde{u}\in H^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s})$
be a solution to \eqref{eq:Sch-ext} with \eqref{eq:unif-ellip} and
$q\in L^{\infty}(\mathbb{R}^{n})$. We assume that
\[
\max_{1\le j,k\le n}\|a_{jk}-\delta_{jk}\|_{\infty}+\max_{1\le j,k\le n}\|\nabla'a_{jk}\|_{\infty}\le\epsilon
\]
for some sufficiently small $\epsilon>0$. For $s\neq\frac{1}{2}$,
we further assume
\[
\max_{1\le j,k\le n}\|(\nabla')^{2}a_{jk}\|_{\infty}\le C
\]
for some positive constant $C$. Assume that $x_{0}\in\mathbb{R}^{n}\times\{0\}$.
Then
\begin{enumerate}
\renewcommand{\theenumi}{(\alph{enumi})}
\renewcommand{\labelenumi}{(\alph{enumi})}
\item \label{itm:part-a:lem:small} There exist $\alpha=\alpha(n,s)\in(0,1)$ and $c=c(n,s)\in(0,1)$
such that
\begin{align*}
& \|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{cr}^{+}(x_{0}))}\\
\le & C\bigg[\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{16r}^{+}(x_{0}))}+r^{1-s}\|u\|_{L^{2}(B_{16r}'(x_{0}))}\bigg]^{\alpha}\times\\
& \quad\times\bigg[r^{s+1}\|\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{16r}'(x_{0}))}+r^{1-s}\|u\|_{L^{2}(B_{16r}'(x_{0}))}\bigg]^{1-\alpha}\\
& +C\bigg[\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{16r}^{+}(x_{0}))}+r^{1-s}\|u\|_{L^{2}(B_{16r}'(x_{0}))}\bigg]^{\frac{2s}{1+s}}\times\\
& \quad\times\bigg[r^{s+1}\|\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{16r}'(x_{0}))}+r^{1-s}\|u\|_{L^{2}(B_{16r}'(x_{0}))}\bigg]^{\frac{1-s}{1+s}}.
\end{align*}
\item \label{itm:part-b:lem:small} There exist $\alpha=\alpha(n,s)\in(0,1)$ and $c=c(n,s)\in(0,1)$
such that
\begin{align*}
& \|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{\infty}(B_{\frac{cr}{2}}^{+}(x_{0}))}\\
\le & Cr^{-\frac{n}{2}}\bigg[r^{s-1}\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{16r}^{+}(x_{0}))}+\|u\|_{L^{2}(B_{16r}'(x_{0}))}\bigg]^{\alpha}\times\\
& \quad\times\bigg[r^{2s}\|\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{16r}'(x_{0}))}+\|u\|_{L^{2}(B_{16r}'(x_{0}))}\bigg]^{1-\alpha}\\
& +Cr^{-\frac{n}{2}}\bigg[r^{s-1}\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{16r}^{+}(x_{0}))}+\|u\|_{L^{2}(B_{16r}'(x_{0}))}\bigg]^{\frac{2s}{1+s}}\times\\
& \quad\times\bigg[r^{2s}\|\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{16r}'(x_{0}))}+\|u\|_{L^{2}(B_{16r}'(x_{0}))}\bigg]^{\frac{1-s}{1+s}}\\
& +Cr^{-\frac{n}{2}}r^{s}\|qu\|_{L^{2}(B_{16r}'(x_{0}))}^{\frac{1}{2}}\|u\|_{L^{2}(B_{16r}'(x_{0}))}^{\frac{1}{2}}.
\end{align*}
\end{enumerate}
\end{lem}
Using Lemma~\ref{lem:interior} and Lemma~\ref{lem:small}, and imitating the chain-ball argument in \cite{RW19Landis}, we can obtain Proposition~\ref{prop:bulk-decay}.
\subsection{Proof of the part~\ref{itm:part-a:lem:small} of Lemma~\ref{lem:small} for the case $s\in[1/2,1)$}
We first prove the following extension of the Carleman estimate in \cite[Proposition 5.7]{RS20Calderon}.
\begin{lem}
\label{lem:0Carl}Let $s\in[\frac{1}{2},1)$ and let $w\in H^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s})$
with ${\rm supp}(w)\subset B_{1/2}^{+}$ be a solution to
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}a_{jk}\partial_{j}\partial_{k}\bigg]w & =f\quad\text{in }\mathbb{R}_{+}^{n+1},\\
w & =0\quad\text{on }\mathbb{R}^{n}\times\{0\}.
\end{align*}
Suppose that
\[
\phi(x)=\phi(x',x_{n+1}):=-\frac{|x'|^{2}}{4}+2\bigg(-\frac{1}{2-2s}x_{n+1}^{2-2s}+\frac{1}{2}x_{n+1}^{2}\bigg).
\]
We assume that
\[
\max_{1\le j,k\le n}\|a_{jk}-\delta_{jk}\|_{\infty}+\max_{1\le j,k\le n}\|\nabla'a_{jk}\|_{\infty}\le\epsilon
\]
for some sufficiently small $\epsilon>0$. For $s\neq\frac{1}{2}$,
we further assume
\[
\max_{1\le j,k\le n}\|(\nabla')^{2}a_{jk}\|_{\infty}\le C
\]
for some positive constant $C$. Assume additionally that
\begin{align*}
& \|x_{n+1}^{\frac{2s-1}{2}}f\|_{L^{2}(\mathbb{R}_{+}^{n+1})}+\lim_{x_{n+1}\rightarrow0}\|\Delta'w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}\\
& +\lim_{x_{n+1}\rightarrow0}\|\nabla'w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}+\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}<\infty.
\end{align*}
Then there exist $\tau_{0}>1$ and a constant $C$ such that
\begin{align*}
& \tau^{3}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
\le & C\bigg(\|e^{\tau\phi}x_{n+1}^{\frac{2s-1}{2}}f\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau^{-1}\lim_{x_{n+1}\rightarrow0}\|e^{\tau\phi}\Delta'w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\\
& +\tau\lim_{x_{n+1}\rightarrow0}\|e^{\tau\phi}x'\cdot\nabla'w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}+\tau\lim_{x_{n+1}\rightarrow0}\|e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\bigg)
\end{align*}
for all $\tau\ge\tau_{0}$.
\end{lem}
\begin{proof}
We prove the Carleman estimate for $s\in(\frac{1}{2},1)$; the case $s=\frac{1}{2}$ is naturally included in our estimates.
\textbf{Step 1: Conjugation.} Setting $\tilde{u}=x_{n+1}^{\frac{1-2s}{2}}w$, we have
\[
x_{n+1}^{\frac{2s-1}{2}}f = \Delta\tilde{u} + \mathring{c}_{s} x_{n+1}^{-2}\tilde{u}+\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\partial_{j}\partial_{k}\tilde{u},
\]
where $\mathring{c}_{s} = \frac{1-4s^{2}}{4}$. Setting $u=e^{\tau\phi}\tilde{u}$, we have
\begin{align*}
e^{\tau\phi}x_{n+1}^{\frac{2s-1}{2}}f= & \bigg[\Delta+\tau^{2}|\nabla\phi|^{2} + \mathring{c}_{s} x_{n+1}^{-2}-\tau\Delta\phi-2\tau\nabla\phi\cdot\nabla\bigg]u\\
& +\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\partial_{j}\partial_{k}u -\tau\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[(\partial_{k}\phi)\partial_{j}+(\partial_{j}\phi)\partial_{k}\bigg]u\\
& +\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[\tau^{2}(\partial_{k}\phi)(\partial_{j}\phi)-\tau(\partial_{j}\partial_{k}\phi)\bigg]u.
\end{align*}
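For the reader's convenience, we record where $\mathring{c}_{s}$ comes from: writing $\alpha=\frac{1-2s}{2}$, conjugating the degenerate second-order term by the weight $x_{n+1}^{\alpha}$ gives, by a direct computation,
\[
x_{n+1}^{-\alpha}\partial_{n+1}\big(x_{n+1}^{2\alpha}\partial_{n+1}(x_{n+1}^{-\alpha}\tilde{u})\big)=\partial_{n+1}^{2}\tilde{u}-\alpha(\alpha-1)x_{n+1}^{-2}\tilde{u},
\]
and $-\alpha(\alpha-1)=\frac{(1-2s)(1+2s)}{4}=\frac{1-4s^{2}}{4}=\mathring{c}_{s}$.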
We write $L^{+}=S+A+(I)+(II)+(III)$, where
\begin{align*}
S & =\Delta+\tau^{2}|\nabla\phi|^{2} + \mathring{c}_{s} x_{n+1}^{-2}, \quad A = -2\tau\nabla\phi\cdot\nabla-\tau\Delta\phi,\\
(I) & =\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\partial_{j}\partial_{k}\\
(II) & =-\tau\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[(\partial_{k}\phi)\partial_{j}+(\partial_{j}\phi)\partial_{k}\bigg]\\
(III) & =\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[\tau^{2}(\partial_{k}\phi)(\partial_{j}\phi)-\tau(\partial_{j}\partial_{k}\phi)\bigg].
\end{align*}
We now define $L^{-}:=S-A+(I)-(II)+(III)$,
\[
\mathscr{D}:=\|L^{+}u\|^{2}-\|L^{-}u\|^{2} \quad \text{and} \quad \mathscr{S} := \|L^{+}u\|^{2}+\|L^{-}u\|^{2},
\]
where
\begin{align*}
\|\bullet\| =\|\bullet\|_{L^{2}(\mathbb{R}_{+}^{n+1})}, & \qquad \|\bullet\|_{0} =\|\bullet\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}\\
\langle\bullet,\bullet\rangle =\langle\bullet,\bullet\rangle_{L^{2}(\mathbb{R}_{+}^{n+1})}, & \qquad \langle\bullet,\bullet\rangle_{0} =\langle\bullet,\bullet\rangle_{L^{2}(\mathbb{R}^{n}\times\{0\})}
\end{align*}
and we omit the notation ``$\lim_{x_{n+1}\rightarrow0}$'' in $\|\bullet\|_{0}$
and $\langle\bullet,\bullet\rangle_{0}$.
\textbf{Step 2: Estimating the bulk contributions.}
\textbf{Step 2.1: Estimating the difference $\mathscr{D}$.} Observe that $\mathscr{D}=4\langle Su,Au\rangle+R$, where
\[
R=4\langle Su,(II)u\rangle+4\langle Au,(I)u\rangle+4\langle Au,(III)u\rangle+4\langle(I)u,(II)u\rangle+4\langle(II)u,(III)u\rangle.
\]
\textbf{Step 2.1.1: Computing the principal term.} Note that
\begin{equation}
2\langle Su,Au\rangle=\langle[S,A]u,u\rangle+2\tau\langle Su,(\partial_{n+1}\phi)u\rangle_{0}-\langle Au,\partial_{n+1}u\rangle_{0}+\langle\partial_{n+1}(Au),u\rangle_{0}.\label{eq:0Carl1}
\end{equation}
Observe that $[S,A]=[S,A]_{1}+[S,A]_{2}$, where
\begin{align*}
[S,A]_{1}= & [\Delta'+\tau^{2}|\nabla'\phi|^{2},-2\tau\nabla'\phi\cdot\nabla'-\tau\Delta'\phi],\\{}
[S,A]_{2}= & \bigg[\partial_{n+1}^{2}+\tau^{2}(\partial_{n+1}\phi)^{2}+\frac{1-4s^{2}}{4}x_{n+1}^{-2},-2\tau\partial_{n+1}\phi\partial_{n+1}-\tau\partial_{n+1}^{2}\phi\bigg].
\end{align*}
The following identity can be found in \cite[equation~(5.20) of Proposition~5.7]{RS20Calderon}:
\begin{equation}
\langle[S,A]_{1}u,u\rangle=-\frac{1}{2}\tau^{3}\||x'|u\|^{2}-2\tau\|\nabla'u\|^{2}.\label{eq:0Carl2}
\end{equation}
For our purpose, we need to refine the estimate \cite[equation~(5.22) of Proposition~5.7]{RS20Calderon}. The following identity can be found in \cite[equation~(5.19) of Proposition~5.7]{RS20Calderon}:
\begin{align*}
\langle[S,A]_{2}u,u\rangle= & 4\tau^{3}\langle u,(\partial_{n+1}\phi)^{2}(\partial_{n+1}^{2}\phi)u\rangle+4\tau\langle\partial_{n+1}u,(\partial_{n+1}^{2}\phi)\partial_{n+1}u\rangle\\
& -\tau\langle u,(\partial_{n+1}^{4}\phi)u\rangle -4\mathring{c}_{s} \tau\langle u,x_{n+1}^{-3}(\partial_{n+1}\phi)u\rangle\\
& +4\tau\langle(\partial_{n+1}^{2}\phi)\partial_{n+1}u,u\rangle_{0}.
\end{align*}
From \cite[equation~(5.21) of Proposition~5.7]{RS20Calderon}, we have
\[
(\partial_{n+1}\phi)^{2}(\partial_{n+1}^{2}\phi)=8(x_{n+1}^{1-2s}-x_{n+1})^{2}((2s-1)x_{n+1}^{-2s}+1)
\]
and
\begin{align*}
& -\tau\langle u,(\partial_{n+1}^{4}\phi)u\rangle+(2s+1)(2s-1)\tau\langle u,x_{n+1}^{-3}(\partial_{n+1}\phi)u\rangle\\
= & 2\tau(1-2s)(1+2s)^{2}\|x_{n+1}^{-1-s}u\|^{2} -8 \tau \mathring{c}_{s} \|x_{n+1}^{-1}u\|^{2}.
\end{align*}
Hence, we have
\begin{align}
\langle[S,A]_{2}u,u\rangle= & 32\tau^{3}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}+32(2s-1)\tau^{3}\|(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{-s}u\|^{2}\nonumber \\
& +8\tau(2s-1)\|x_{n+1}^{-s}\partial_{n+1}u\|^{2} +2\tau(1-2s)(1+2s)^{2}\|x_{n+1}^{-1-s}u\|^{2}\nonumber \\
& +8\tau\|\partial_{n+1}u\|^{2} -8 \tau \mathring{c}_{s} \|x_{n+1}^{-1}u\|^{2}\nonumber \\
& +4\tau\langle(\partial_{n+1}^{2}\phi)\partial_{n+1}u,u\rangle_{0}.\label{eq:0Carl3}
\end{align}
Combining \eqref{eq:0Carl1}, \eqref{eq:0Carl2} and \eqref{eq:0Carl3},
we reach
\begin{align}
4\langle Su,Au\rangle= & 64\tau^{3}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}+64(2s-1)\tau^{3}\|(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{-s}u\|^{2}\nonumber \\
& +16\tau(2s-1)\|x_{n+1}^{-s}\partial_{n+1}u\|^{2}+16\tau\|\partial_{n+1}u\|^{2} -16 \tau \mathring{c}_{s} \|x_{n+1}^{-1}u\|^{2}\nonumber \\
& +4\tau(1-2s)(1+2s)^{2}\|x_{n+1}^{-1-s}u\|^{2}-\tau^{3}\||x'|u\|^{2}-4\tau\|\nabla'u\|^{2}\nonumber \\
& +8\tau\langle(\partial_{n+1}^{2}\phi)\partial_{n+1}u,u\rangle_{0}\nonumber \\
& +4\tau\langle Su,(\partial_{n+1}\phi)u\rangle_{0}-2\langle Au,\partial_{n+1}u\rangle_{0}+2\langle\partial_{n+1}(Au),u\rangle_{0}.\label{eq:0Carl4}
\end{align}
\textbf{Step 2.1.2: Estimating the remainder.} Using integration by parts, we can estimate $R$ from below:
\begin{align*}
R\ge & -C\epsilon\bigg[\tau\|(x_{n+1}^{1-2s}-x_{n+1})\nabla'u\|^{2}+\tau\|\partial_{n+1}u\|^{2}+\tau^{3}\|(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{-s}u\|^{2}+\tau\|x_{n+1}^{-1}u\|^{2}\\
& +\tau\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}u\|_{0}^{2}+\tau\|x_{n+1}^{\frac{2s-1}{2}}|x'|\nabla'u\|_{0}^{2}+\tau^{3}\|(x_{n+1}^{1-2s}-x_{n+1})^{\frac{1}{2}}u\|_{0}^{2}\bigg].
\end{align*}
Here we would like to highlight some features when estimating the
second term of $R$, that is, $\langle Au,(I)u\rangle$. Note that
\begin{align}
& \langle-2\tau\partial_{n+1}\phi\partial_{n+1}u,(a_{jk}-\delta_{jk})\partial_{j}\partial_{k}u\rangle\nonumber \\
= & \tau\langle\partial_{n+1}\phi\partial_{n+1}u,(\partial_{j}a_{jk})\partial_{k}u\rangle - \boxed{\tau\langle\partial_{n+1}^{2}\phi\partial_{j}u,(a_{jk}-\delta_{jk})\partial_{k}u\rangle} \nonumber \\
& +\tau\langle\partial_{n+1}\phi\partial_{j}u,(\partial_{k}a_{jk})\partial_{n+1}u\rangle-\tau\langle\partial_{n+1}\phi\partial_{j}u,(a_{jk}-\delta_{jk})\partial_{k}u\rangle_{0}\label{eq:harm1}
\end{align}
and
\begin{align}
& \langle-\tau(\partial_{n+1}^{2}\phi)u,(a_{jk}-\delta_{jk})\partial_{j}\partial_{k}u\rangle\nonumber \\
= & \tau\langle(\partial_{n+1}^{2}\phi)u,(\partial_{j}a_{jk})\partial_{k}u\rangle+\tau\langle\partial_{n+1}^{2}\phi\partial_{j}u,(a_{jk}-\delta_{jk})\partial_{k}u\rangle\label{eq:harm2-1}\\
= & -\frac{\tau}{2}\langle(\partial_{n+1}^{2}\phi)u,(\partial_{j}\partial_{k}a_{jk})u\rangle + \boxed{\tau\langle\partial_{n+1}^{2}\phi\partial_{j}u,(a_{jk}-\delta_{jk})\partial_{k}u\rangle}.\label{eq:harm2-2}
\end{align}
So, summing up \eqref{eq:harm1} and \eqref{eq:harm2-2}, we note
that the problematic term $\tau\langle\partial_{n+1}^{2}\phi\partial_{j}u,(a_{jk}-\delta_{jk})\partial_{k}u\rangle$
is canceled. It is problematic because $\partial_{n+1}^{2}\phi$ has a singularity of order $x_{n+1}^{-2s}$ for $s\in(1/2,1)$. However, when
$s=\frac{1}{2}$, $\partial_{n+1}^{2}\phi$ has no singularity; in
this case, we consider \eqref{eq:harm2-1} rather than \eqref{eq:harm2-2}.
This is why we can relax the second-derivative assumption
in the case $s=\frac{1}{2}$.
\textbf{Step 2.1.3: Combining the commutator and the remainder.} Using the Hardy inequality in Lemma \ref{lem:Hardy}, we reach
\[
\|x_{n+1}^{-s-1}u\|^{2}\le\frac{4}{(2s+1)^{2}}\|x_{n+1}^{-s}\partial_{n+1}u\|^{2}+\frac{2}{2s+1}\|x_{n+1}^{-\frac{1}{2}-s}u\|_{0}^{2},
\]
thus
\begin{align*}
& 16\tau(2s-1)\|x_{n+1}^{-s}\partial_{n+1}u\|^{2}-4\tau(2s-1)(2s+1)^{2}\|x_{n+1}^{-1-s}u\|^{2}\\
\ge & -8\tau(2s-1)(2s+1)\|x_{n+1}^{-\frac{1}{2}-s}u\|_{0}^{2}.
\end{align*}
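Indeed, multiplying the Hardy inequality above by $4\tau(2s-1)(2s+1)^{2}\ge0$ gives
\[
4\tau(2s-1)(2s+1)^{2}\|x_{n+1}^{-1-s}u\|^{2}\le16\tau(2s-1)\|x_{n+1}^{-s}\partial_{n+1}u\|^{2}+8\tau(2s-1)(2s+1)\|x_{n+1}^{-\frac{1}{2}-s}u\|_{0}^{2},
\]
so the two bulk terms cancel exactly and only the boundary term survives.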
Therefore, choosing sufficiently small $\epsilon>0$, we reach
\begin{align}
\mathscr{D}\ge & 64\tau^{3}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}+\frac{639}{10}(2s-1)\tau^{3}\|(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{-s}u\|^{2}\nonumber \\
& +\frac{159}{10}\tau\|\partial_{n+1}u\|^{2}+\frac{39}{10}\tau(2s-1)(2s+1)\|x_{n+1}^{-1}u\|^{2}-4\tau\|\nabla'u\|^{2}\nonumber \\
& -C\epsilon\tau\|(x_{n+1}^{1-2s}-x_{n+1})\nabla'u\|^{2}+8\tau\langle(\partial_{n+1}^{2}\phi)\partial_{n+1}u,u\rangle_{0}+4\tau\langle Su,(\partial_{n+1}\phi)u\rangle_{0}\nonumber \\
& -2\langle Au,\partial_{n+1}u\rangle_{0}+2\langle\partial_{n+1}(Au),u\rangle_{0}-\tau\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}u\|_{0}^{2}-\tau\|x_{n+1}^{\frac{2s-1}{2}}|x'|\nabla'u\|_{0}^{2}\nonumber \\
& -\tau^{3}\|(x_{n+1}^{1-2s}-x_{n+1})^{\frac{1}{2}}u\|_{0}^{2}-8\tau(2s-1)(2s+1)\|x_{n+1}^{-\frac{1}{2}-s}u\|_{0}^{2}.\label{eq:0Carl5}
\end{align}
\textbf{Step 2.2: Estimating the sum $\mathscr{S}$.} Observe that
\[
\mathscr{S} \ge 2\|Su\|^{2}+2\|Au\|^{2}-C\epsilon\bigg[\sum_{j,k=1}^{n}\|\partial_{j}\partial_{k}u\|^{2}+\tau^{2}\|\nabla'u\|^{2}+\tau^{4}\|u\|^{2}\bigg].
\]
Since $\mathring{c}_{s} < 0$, we have
\begin{align*}
2\|Su\|^{2}= & 2\| \Delta'u + (\partial_{n+1}^{2}u+\tau^{2}|\nabla\phi|^{2}u + \mathring{c}_{s} x_{n+1}^{-2}u ) \|^{2}\\
= & 2\|\Delta'u\|^{2}+4\langle\Delta'u,\partial_{n+1}^{2}u\rangle+4\tau^{2}\langle\Delta'u,|\nabla\phi|^{2}u\rangle \\
& + 4\mathring{c}_{s} \langle\Delta'u,x_{n+1}^{-2}u\rangle +2 \| \partial_{n+1}^{2}u+\tau^{2}|\nabla\phi|^{2}u + \mathring{c}_{s} x_{n+1}^{-2}u \|^{2}\\
= & 2\sum_{j,k=1}^{n}\|\partial_{j}\partial_{k}u\|^{2}+4\langle\Delta'u,\partial_{n+1}^{2}u\rangle+4\tau^{2}\langle\Delta'u,|\nabla\phi|^{2}u\rangle\\
& -4\mathring{c}_{s} \langle\nabla'u,x_{n+1}^{-2}\nabla'u\rangle +2 \| \partial_{n+1}^{2}u+\tau^{2}|\nabla\phi|^{2}u + \mathring{c}_{s} x_{n+1}^{-2}u \|^{2}\\
\ge & 2\sum_{j,k=1}^{n}\|\partial_{j}\partial_{k}u\|^{2}+4\langle\Delta'u,\partial_{n+1}^{2}u\rangle+4\tau^{2}\langle\Delta'u,|\nabla\phi|^{2}u\rangle.
\end{align*}
Since
\[
4\langle\Delta'u,\partial_{n+1}^{2}u\rangle=4\langle\nabla'\partial_{n+1}u,\nabla'\partial_{n+1}u\rangle-4\langle\Delta'u,\partial_{n+1}u\rangle_{0}
\]
and for $\epsilon_{0}>0$, we have
\begin{align*}
& 4\tau^{2}\langle\Delta'u,|\nabla\phi|^{2}u\rangle\\
= & \tau^{2}\langle\Delta'u,|x'|^{2}u\rangle+16\tau^{2}\langle\Delta'u,(x_{n+1}^{1-2s}-x_{n+1})^{2}u\rangle\\
\ge & -\tau^{2}(1+\epsilon_{0})\|\nabla'u\|^{2}-\tau^{2}C\epsilon_{0}^{-1}\|u\|^{2}-16\tau^{2}\|(x_{n+1}^{1-2s}-x_{n+1})\nabla'u\|^{2}.
\end{align*}
Thus,
\begin{align}
\mathscr{S}\ge & 2\|Su\|^{2}+2\|Au\|^{2}-C\epsilon\bigg[\sum_{j,k=1}^{n}\|\partial_{j}\partial_{k}u\|^{2}+\tau^{2}\|\nabla'u\|^{2}+\tau^{4}\|u\|^{2}\bigg]\nonumber \\
\ge & 2\|\nabla(\nabla'u)\|^{2}-\tau^{2}(1+\epsilon_{0})\|\nabla'u\|^{2}-\tau^{2}C\epsilon_{0}^{-1}\|u\|^{2}-16\tau^{2}\|(x_{n+1}^{1-2s}-x_{n+1})\nabla'u\|^{2}\nonumber \\
& -C\epsilon\bigg[\sum_{j,k=1}^{n}\|\partial_{j}\partial_{k}u\|^{2}+\tau^{2}\|\nabla'u\|^{2}+\tau^{4}\|u\|^{2}\bigg]-4\langle\Delta'u,\partial_{n+1}u\rangle_{0}.\label{eq:0Carl6}
\end{align}
\textbf{Step 2.3: Combining the difference $\mathscr{D}$ and the sum $\mathscr{S}$.} After combining \eqref{eq:0Carl5} and \eqref{eq:0Carl6}, we first choose $\epsilon>0$ small, then $\epsilon_{0}>0$ small, and finally $\tau$ large, and hence
\begin{align}
& \bigg(\tau+s+\frac{1}{2}\bigg)\|L^{+}u\|^{2}\nonumber \\
\ge & \frac{9}{10}(2s-1)\|\nabla(\nabla'u)\|^{2}+\|Su\|^{2}+64\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}\nonumber \\
& +\frac{639}{10}(2s-1)\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{-s}u\|^{2}+\frac{159}{10}\tau^{2}\|\partial_{n+1}u\|^{2}-4\tau^{2}\|\nabla'u\|^{2}\nonumber \\
& -\frac{171}{20}(2s-1)\tau^{2}\|(x_{n+1}^{1-2s}-x_{n+1})\nabla'u\|^{2}+\frac{39}{10}\tau^{2}(2s-1)(2s+1)\|x_{n+1}^{-1}u\|^{2}\nonumber \\
& +8\tau^{2}\langle(\partial_{n+1}^{2}\phi)\partial_{n+1}u,u\rangle_{0}+4\tau^{2}\langle Su,(\partial_{n+1}\phi)u\rangle_{0}-2\tau\langle Au,\partial_{n+1}u\rangle_{0}+2\tau\langle\partial_{n+1}(Au),u\rangle_{0}\nonumber \\
& -\tau^{2}\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}u\|_{0}^{2}-\tau^{2}\|x_{n+1}^{\frac{2s-1}{2}}|x'|\nabla'u\|_{0}^{2}-\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})^{\frac{1}{2}}u\|_{0}^{2}\nonumber \\
& -8\tau^{2}(2s-1)(2s+1)\|x_{n+1}^{-\frac{1}{2}-s}u\|_{0}^{2}-2(2s-1)\langle\Delta'u,\partial_{n+1}u\rangle_{0}.\label{eq:0Carl7}
\end{align}
\textbf{Step 2.4: Obtaining gradient estimates.} Since ${\rm supp}(u)\subset B_{1/2}^{+}$ and $s>\frac{1}{2}$, we have
\[
0\le(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{s}=x_{n+1}^{1-s}-x_{n+1}^{1+s} \le x_{n+1}^{1-s} \le 1,
\]
and hence
\begin{align*}
& \frac{172}{20}(2s-1)\tau^{2}\|(x_{n+1}^{1-2s}-x_{n+1})\nabla'u\|^{2}\\
= & -\frac{172}{20}(2s-1)\tau^{2}\langle(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{s}\Delta'u,(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{-s}u\rangle\\
\le & \frac{86}{20}(2s-1)\delta\|(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{s}\Delta'u\|^{2}+\frac{86}{20}(2s-1)\tau^{4}\delta^{-1}\|(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{-s}u\|^{2}\\
\le & \frac{86}{20}(2s-1)\delta\|\Delta'u\|^{2}+\frac{86}{20}(2s-1)\tau^{4}\delta^{-1}\|(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{-s}u\|^{2}.
\end{align*}
Choosing $\delta=\frac{8}{43}$, we reach
\begin{align}
& \frac{172}{20}(2s-1)\tau^{2}\|(x_{n+1}^{1-2s}-x_{n+1})\nabla'u\|^{2}\nonumber \\
\le & \frac{8}{10}(2s-1)\|\Delta'u\|^{2}+23.1125(2s-1)\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{-s}u\|^{2}.\label{eq:0Carl8}
\end{align}
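The choice $\delta=\frac{8}{43}$ is made so that the coefficient of $\|\Delta'u\|^{2}$ equals $\frac{8}{10}(2s-1)$; indeed,
\[
\frac{86}{20}\cdot\frac{8}{43}=\frac{8}{10}\quad\text{and}\quad\frac{86}{20}\cdot\frac{43}{8}=\frac{3698}{160}=23.1125.
\]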
Moreover, we have
\begin{align}
& \frac{41}{10}\tau^{2}\langle Su,u\rangle \nonumber \\
= & \frac{41}{10}\tau^{2}\|\nabla u\|^{2}-\frac{41}{10}\tau^{4}\||\nabla\phi|u\|^{2} + \frac{41}{5}(2s+1)(2s-1)\tau^{2}\|x_{n+1}^{-1}u\|^{2} +\frac{41}{10}\tau^{2}\langle\partial_{n+1}u,u\rangle_{0} \nonumber \\
\ge & \frac{41}{10}\tau^{2}\|\nabla u\|^{2}-\frac{41}{10}\tau^{4}\bigg(\frac{1}{16}\|u\|^{2}+4\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}\bigg) \nonumber \\
& + \frac{41}{5}(2s+1)(2s-1)\tau^{2}\|x_{n+1}^{-1}u\|^{2} + \frac{41}{10}\tau^{2}\langle\partial_{n+1}u,u\rangle_{0} \nonumber \\
= & \frac{41}{10}\tau^{2} \| \nabla u \|^{2} - \frac{41}{160} \tau^{4} \|u\|^{2} - \frac{164}{10}\tau^{4} \| (x_{n+1}^{1-2s} - x_{n+1}) u \|^{2} \nonumber \\
& + \frac{41}{5}(2s+1)(2s-1)\tau^{2}\|x_{n+1}^{-1}u\|^{2} + \frac{41}{10}\tau^{2}\langle\partial_{n+1}u,u\rangle_{0}. \label{eq:rev1-1}
\end{align}
Define $\psi_{s}(x_{n+1}) := x_{n+1}^{1-2s} - x_{n+1}$. Since ${\rm supp}\,(u) \subset B_{1/2}^{+}$, we have $0 \le x_{n+1} \le 1/2$ on ${\rm supp}\,(u)$. For $s \in (1/2,1)$, the derivative satisfies
\[
\psi_{s}'(x_{n+1}) = (1-2s)x_{n+1}^{-2s} - 1 < 0 \quad \text{for } 0 < x_{n+1} \le 1/2.
\]
Since $\psi_{s}$ is decreasing on $(0,1/2]$, for $s \in (1/2,1)$,
\[
\inf_{0 < x_{n+1} \le 1/2} (x_{n+1}^{1-2s} - x_{n+1}) = \inf_{0 < x_{n+1} \le 1/2} \psi_{s}(x_{n+1}) = \psi_{s}\bigg( \frac{1}{2} \bigg) = \frac{1}{2} (4^{s} - 1) \ge \frac{1}{2}.
\]
Combining this with \eqref{eq:rev1-1}, we reach the estimate
\begin{align*}
& \frac{41}{10}\tau^{2}\|\nabla'u\|^{2}+\frac{41}{5}(2s+1)(2s-1)\tau^{2}\|x_{n+1}^{-1}u\|^{2}+\frac{41}{10}\tau^{2}\langle\partial_{n+1}u,u\rangle_{0}\\
\le & \frac{41}{10}\tau^{2}\|\nabla u\|^{2}+\frac{41}{5}(2s+1)(2s-1)\tau^{2}\|x_{n+1}^{-1}u\|^{2}+\frac{41}{10}\tau^{2}\langle\partial_{n+1}u,u\rangle_{0}\\
\le & \frac{41}{10}\tau^{2}\langle Su,u\rangle+\frac{41}{160}\tau^{4}\|u\|^{2}+\frac{164}{10}\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}\\
\le & \frac{41}{20}\delta\|Su\|^{2}+\frac{41}{20}\delta^{-1}\tau^{4}\|u\|^{2}+\frac{41}{160}\tau^{4}\|u\|^{2}+\frac{164}{10}\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}\\
\le & \frac{41}{20}\delta\|Su\|^{2}+\frac{82}{10}\delta^{-1}\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}+\frac{41}{40}\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}\\
& +\frac{164}{10}\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}.
\end{align*}
Choosing $\delta=\frac{20}{41}$, we reach
\begin{align}
& \frac{41}{10}\tau^{2}\|\nabla'u\|^{2}+\frac{41}{5}(2s+1)(2s-1)\tau^{2}\|x_{n+1}^{-1}u\|^{2}+\frac{41}{10}\tau^{2}\langle\partial_{n+1}u,u\rangle_{0}\nonumber \\
\le & \|Su\|^{2}+34.235\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}.\label{eq:0Carl9}
\end{align}
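Here the constant $34.235$ simply collects the three coefficients of $\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}$ with $\delta=\frac{20}{41}$:
\[
\frac{82}{10}\cdot\frac{41}{20}+\frac{41}{40}+\frac{164}{10}=16.81+1.025+16.4=34.235.
\]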
\textbf{Step 2.5: Plugging gradient estimates into \eqref{eq:0Carl7}.} Combining \eqref{eq:0Carl7}, \eqref{eq:0Carl8} and \eqref{eq:0Carl9},
we reach
\begin{align}
& \bigg(\tau+s+\frac{1}{2}\bigg)\|L^{+}u\|^{2}\nonumber \\
\ge & \frac{1}{10}(2s-1)\|\nabla(\nabla'u)\|^{2}+29.765\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}\nonumber \\
& +40.7875(2s-1)\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})x_{n+1}^{-s}u\|^{2}+\frac{159}{10}\tau^{2}\|\partial_{n+1}u\|^{2}+\frac{1}{10}\tau^{2}\|\nabla'u\|^{2}\nonumber \\
& +\frac{1}{20}(2s-1)\tau^{2}\|(x_{n+1}^{1-2s}-x_{n+1})\nabla'u\|^{2}+12.1\tau^{2}(2s-1)(2s+1)\|x_{n+1}^{-1}u\|^{2}\nonumber \\
& +8\tau^{2}\langle(\partial_{n+1}^{2}\phi)\partial_{n+1}u,u\rangle_{0}+4\tau^{2}\langle Su,(\partial_{n+1}\phi)u\rangle_{0}-2\tau\langle Au,\partial_{n+1}u\rangle_{0}+2\tau\langle\partial_{n+1}(Au),u\rangle_{0}\nonumber \\
& -\tau^{2}\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}u\|_{0}^{2}-\tau^{2}\|x_{n+1}^{\frac{2s-1}{2}}|x'|\nabla'u\|_{0}^{2}-\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})^{\frac{1}{2}}u\|_{0}^{2}\nonumber \\
& -8\tau^{2}(2s-1)(2s+1)\|x_{n+1}^{-\frac{1}{2}-s}u\|_{0}^{2}-4(2s-1)\langle\Delta'u,\partial_{n+1}u\rangle_{0}+\frac{41}{10}\tau^{2}\langle\partial_{n+1}u,u\rangle_{0}.\label{eq:0Carl10}
\end{align}
Hence, we reach
\begin{align}
& 2\tau\|L^{+}u\|^{2}\nonumber \\
\ge & 25\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}+\frac{1}{10}\tau^{2}\|\nabla u\|^{2}+12\tau^{2}(2s-1)(2s+1)\|x_{n+1}^{-1}u\|^{2}\nonumber \\
& +8\tau^{2}\langle(\partial_{n+1}^{2}\phi)\partial_{n+1}u,u\rangle_{0}+4\tau^{2}\langle Su,(\partial_{n+1}\phi)u\rangle_{0}-2\tau\langle Au,\partial_{n+1}u\rangle_{0}+2\tau\langle\partial_{n+1}(Au),u\rangle_{0}\nonumber \\
& -\tau^{2}\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}u\|_{0}^{2}-\tau^{2}\|x_{n+1}^{\frac{2s-1}{2}}|x'|\nabla'u\|_{0}^{2}-\tau^{4}\|(x_{n+1}^{1-2s}-x_{n+1})^{\frac{1}{2}}u\|_{0}^{2}\nonumber \\
& -8\tau^{2}(2s-1)(2s+1)\|x_{n+1}^{-\frac{1}{2}-s}u\|_{0}^{2}-4(2s-1)\langle\Delta'u,\partial_{n+1}u\rangle_{0}+\frac{41}{10}\tau^{2}\langle\partial_{n+1}u,u\rangle_{0}.\label{eq:0Carl11}
\end{align}
Since $u=e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}w$, we estimate that
\begin{align*}
\|\nabla u\|^{2}\ge & \frac{1}{2}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|^{2}-2\tau^{2}\|e^{\tau\phi}|\nabla\phi|x_{n+1}^{\frac{1-2s}{2}}w\|^{2}-2\bigg(\frac{2s-1}{2}\bigg)^{2}\|e^{\tau\phi}x_{n+1}^{-\frac{1+2s}{2}}w\|^{2}\\
\ge & \frac{1}{2}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|^{2}-16\tau^{2}\|(x_{n+1}^{1-2s}-x_{n+1})u\|^{2}-(2s-1)^{2}\|x_{n+1}^{-1}u\|^{2}.
\end{align*}
\textbf{Step 3: Estimating the boundary contributions.} We want to show that
\begin{equation}
\|e^{\tau\phi}x_{n+1}^{-2s}w\|_{0}\le C_{s}\|e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}w\|_{0}<\infty.\label{eq:0Carl12}
\end{equation}
Indeed, since $w(x',0)\equiv0$, we have
\[
x_{n+1}^{-2s}w(x',x_{n+1})=x_{n+1}^{1-2s}\int_{0}^{1}\partial_{n+1}w(x',tx_{n+1})\,dt=\int_{0}^{1}(tx_{n+1})^{1-2s}\partial_{n+1}w(x',tx_{n+1})t^{2s-1}\,dt.
\]
Multiplying the above identity by $e^{\tau\phi}$, taking the $L^{2}$-norm with respect
to $x'$ and using the fact that $\partial_{n+1}\phi<0$ on ${\rm supp}(w)$
gives
\[
\|e^{\tau\phi}x_{n+1}^{-2s}w(\bullet,x_{n+1})\|_{0}\le\sup_{t\in(0,1)}\|e^{\tau\phi(\bullet,tx_{n+1})}(tx_{n+1})^{1-2s}\partial_{n+1}w(\bullet,tx_{n+1})\|_{0}\int_{0}^{1}t^{2s-1}\,dt.
\]
Taking $x_{n+1}\rightarrow0$ proves \eqref{eq:0Carl12}.
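In fact, the argument above produces the constant in \eqref{eq:0Carl12} explicitly: the only factor appearing is
\[
C_{s}=\int_{0}^{1}t^{2s-1}\,dt=\frac{1}{2s}.
\]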
We observe that
\begin{align*}
& 4\tau^{2}\langle Su,(\partial_{n+1}\phi)u\rangle_{0}-2\tau\langle Au,\partial_{n+1}u\rangle_{0}+2\tau\langle\partial_{n+1}(Au),u\rangle_{0}\\
= & 8\tau^{2}\langle\partial_{n+1}u,\nabla'\phi\cdot\nabla'u\rangle_{0}+4\tau^{2}\langle(\partial_{n+1}u)^{2},\partial_{n+1}\phi\rangle_{0}-4\tau^{2}\langle(\partial_{n+1}\phi),|\nabla'u|^{2}\rangle_{0}\\
& +4\tau^{2}\langle(\Delta'\phi-\partial_{n+1}^{2}\phi)u,\partial_{n+1}u\rangle_{0}-2\tau^{2}\langle(\partial_{n+1}^{3}\phi)u,u\rangle_{0}+4\tau^{4}\langle(\partial_{n+1}\phi)|\nabla\phi|^{2}u,u\rangle_{0}\\
& -\tau^{2}(2s+1)(2s-1)\langle x_{n+1}^{-2}u,(\partial_{n+1}\phi)u\rangle_{0}\\
\ge & 8\tau^{2}\langle\partial_{n+1}u,\nabla'\phi\cdot\nabla'u\rangle_{0}+4\tau^{2}\langle(\partial_{n+1}u)^{2},\partial_{n+1}\phi\rangle_{0}+4\tau^{2}\langle(\Delta'\phi-\partial_{n+1}^{2}\phi)u,\partial_{n+1}u\rangle_{0}\\
& +4\tau^{4}\langle(\partial_{n+1}\phi)|\nabla\phi|^{2}u,u\rangle_{0}.
\end{align*}
Note that \eqref{eq:0Carl12} implies
\begin{align*}
\partial_{n+1}u & =e^{\tau\phi}\bigg(x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}w-\frac{2s-1}{2}x_{n+1}^{-\frac{1+2s}{2}}w\bigg)+x_{n+1}^{\frac{3-2s}{2}}R\\
\nabla'u & =e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}\nabla'w+x_{n+1}^{s+\frac{1}{2}}R',
\end{align*}
where $\|R\|_{0}\le C\tau$ and $\|R'\|_{0}\le C\tau$.
Hence,
\begin{align*}
& |\langle\partial_{n+1}u,\nabla'\phi\cdot\nabla'u\rangle_{0}|\\
= & \bigg|\bigg\langle e^{\tau\phi}\bigg(x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}w-\frac{2s-1}{2}x_{n+1}^{-\frac{1+2s}{2}}w\bigg),e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}\nabla'\phi\cdot\nabla'w\bigg\rangle_{0}\bigg|\\
= & \bigg|\bigg\langle e^{\tau\phi}\bigg(x_{n+1}^{1-2s}\partial_{n+1}w-\frac{2s-1}{2}x_{n+1}^{-2s}w\bigg),\frac{1}{2}e^{\tau\phi}x'\cdot\nabla'w\bigg\rangle_{0}\bigg|\\
\le & \frac{1}{2}|\langle e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}w,e^{\tau\phi}x'\cdot\nabla'w\rangle_{0}|+\frac{2s-1}{4}|\langle e^{\tau\phi}x_{n+1}^{-2s}w,e^{\tau\phi}x'\cdot\nabla'w\rangle_{0}|.
\end{align*}
Using \eqref{eq:0Carl12}, we reach
\[
|\langle\partial_{n+1}u,\nabla'\phi\cdot\nabla'u\rangle_{0}|\le C\|e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}w\|_{0}\|e^{\tau\phi}x'\cdot\nabla'w\|_{0}.
\]
Similarly, using \eqref{eq:0Carl12}, we have
\begin{align*}
|\langle(\partial_{n+1}u)^{2},\partial_{n+1}\phi\rangle_{0}|+|\langle(\Delta'\phi-\partial_{n+1}^{2}\phi)u,\partial_{n+1}u\rangle_{0}| & \le C\|e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}w\|_{0}^{2}\\
|\langle(\partial_{n+1}\phi)|\nabla\phi|^{2}u,u\rangle_{0}| & \le C\|e^{\tau\phi}x_{n+1}^{2-4s}w\|_{0}^{2}\rightarrow0.
\end{align*}
Also,
\begin{align*}
|\langle(\partial_{n+1}^{2}\phi)\partial_{n+1}u,u\rangle_{0}| & \le C\|e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}w\|_{0}^{2}\\
\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}u\|_{0}^{2} & =\bigg\| e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}w-\frac{2s-1}{2}e^{\tau\phi}x_{n+1}^{-2s}w\bigg\|_{0}^{2}\le C\|e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}w\|_{0}^{2}\\
\|x_{n+1}^{\frac{2s-1}{2}}|x'|\nabla'u\|_{0}^{2} & =\|e^{\tau\phi}|x'|\nabla'w\|_{0}^{2}\\
\|(x_{n+1}^{1-2s}-x_{n+1})^{\frac{1}{2}}u\|_{0}^{2} & \rightarrow0\\
\|x_{n+1}^{-\frac{1}{2}-s}u\|_{0}^{2} & =\|e^{\tau\phi}x_{n+1}^{-2s}w\|_{0}^{2}\le C\|e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}w\|_{0}^{2}\\
|\langle\partial_{n+1}u,u\rangle_{0}| & \rightarrow0.
\end{align*}
Finally, we also have
\begin{align*}
|\langle\Delta'u,\partial_{n+1}u\rangle_{0}|\le & \|x_{n+1}^{\frac{2s-1}{2}}\Delta'u\|_{0}^{2}+\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}u\|_{0}^{2}\\
= & \bigg\|-\frac{n\tau}{2}e^{\tau\phi}w+\frac{\tau^{2}}{4}|x'|^{2}e^{\tau\phi}w-\tau e^{\tau\phi}x'\cdot\nabla'w+e^{\tau\phi}\Delta'w\bigg\|_{0}^{2}+\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}u\|_{0}^{2}\\
\le & C\|e^{\tau\phi}\Delta'w\|_{0}^{2}+C\tau^{2}\|e^{\tau\phi}x'\cdot\nabla'w\|_{0}^{2}+C\|e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}w\|_{0}^{2}.
\end{align*}
\textbf{Step 4: Conclusion.} Putting everything together, we reach
\begin{align*}
& \tau^{3}\|u\|^{2}+\tau\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|^{2}\\
\le & C\bigg(\|L^{+}u\|^{2}+\tau^{-1}\|e^{\tau\phi}\Delta'w\|_{0}^{2}+\tau\|e^{\tau\phi}x'\cdot\nabla'w\|_{0}^{2}+\tau\|e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}w\|_{0}^{2}\bigg),
\end{align*}
which is our desired result.
\end{proof}
As in \cite{RS20Calderon}, we introduce the following sets for $s\in[\frac{1}{2},1)$:
\begin{align*}
C_{s,r}^{+} & :=\bigg\{(x',x_{n+1})\in\mathbb{R}_{+}^{n+1}:x_{n+1}\le\bigg[(1-s)\bigg(r-\frac{|x'|^{2}}{4}\bigg)\bigg]^{\frac{1}{2-2s}}\bigg\}\\
C_{s,r}' & := \bigg\{(x',0)\in \mathbb{R}^{n} \times \{0\} : \frac{|x'|^{2}}{4}\le r\bigg\}.
\end{align*}
With this notation, we infer the following analogue of \cite[Proposition 5.10]{RS20Calderon}:
\begin{lem}
Let $s\in[\frac{1}{2},1)$. Suppose that $\tilde{w}\in H^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s})$
is a solution to
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}\bigg]\tilde{w} & =0\quad\text{in }\mathbb{R}_{+}^{n+1},\\
\tilde{w} & =w\quad\text{on }\mathbb{R}^{n}\times\{0\},
\end{align*}
with $w=0$ on $B_{1}'$. We assume that
\[
\max_{1\le j,k\le n}\|a_{jk}-\delta_{jk}\|_{\infty}+\max_{1\le j,k\le n}\|\nabla'a_{jk}\|_{\infty}\le\epsilon
\]
for some sufficiently small $\epsilon>0$. For $s\neq\frac{1}{2}$,
we further assume
\[
\max_{1\le j,k\le n}\|(\nabla')^{2}a_{jk}\|_{\infty}\le C
\]
for some positive constant $C$. Then there exists $\alpha=\alpha(n,s)\in(0,1)$
such that
\[
\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1/8}^{+})}\le C\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1/2}^{+})}^{\alpha}\cdot\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{L^{2}(C_{s,1/2}')}^{1-\alpha}.
\]
\end{lem}
\begin{proof}
We may assume that $\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1/2}^{+})}>0$
and
\[
\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1/2}^{+})}\ge c_{0}\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{L^{2}(C_{s,1/2}')}
\]
for some sufficiently large constant $c_{0}>0$. Otherwise the result
is trivial.
Let $\eta$ be a smooth cut-off function satisfying
\[
\eta(x)=\begin{cases}
1 & \text{in }C_{s,3/16}^{+},\\
0 & \text{in }\mathbb{R}_{+}^{n+1}\setminus C_{s,1/4}^{+},
\end{cases}
\]
and $|\partial_{n+1}\eta|\le Cx_{n+1}$ in $\mathbb{R}_{+}^{n+1}$
with $\partial_{n+1}\eta=0$ on $\mathbb{R}^{n}\times\{0\}$. Define
$\overline{w}=\eta\tilde{w}$. Note that $\overline{w}$ satisfies
${\rm supp}(\overline{w})\subset B_{1/2}^{+}$ and it solves
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}a_{jk}\partial_{j}\partial_{k}\bigg]\overline{w} & =f\quad\text{in }\mathbb{R}_{+}^{n+1},\\
\overline{w} & =0\quad\text{on }\mathbb{R}^{n}\times\{0\},
\end{align*}
where
\begin{align*}
f= & \partial_{n+1}(x_{n+1}^{1-2s}\partial_{n+1}\eta)\tilde{w}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}\partial_{j}(a_{jk}\partial_{k}\eta)\tilde{w}\\
& +2x_{n+1}^{1-2s}\partial_{n+1}\eta\partial_{n+1}\tilde{w}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}a_{jk}\partial_{k}\eta\partial_{j}\tilde{w}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}a_{jk}\partial_{j}\eta\partial_{k}\tilde{w}\\
& -x_{n+1}^{1-2s}\sum_{j,k=1}^{n}(\partial_{j}a_{jk})\partial_{k}\overline{w}.
\end{align*}
Since $\eta$ and $\nabla\eta$ are bounded and $|\partial_{n+1}\eta|\le Cx_{n+1}$,
we know that
\[
\|x_{n+1}^{\frac{2s-1}{2}}f\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\le C(\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1/4}^{+})}+\|x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{w}\|_{L^{2}(C_{s,1/4}^{+})})<\infty.
\]
Moreover, since $w|_{B_{1}'}=0$ and ${\rm supp}(\eta)\subset B_{1}'$
on $\mathbb{R}^{n}\times\{0\}$, we have
\[
\lim_{x_{n+1}\rightarrow0}\nabla'\overline{w}=0,\quad\lim_{x_{n+1}\rightarrow0}\Delta'\overline{w}=0\quad\text{and also}\quad\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\overline{w}=\eta\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}.
\]
So, by the Carleman estimate in Lemma \ref{lem:0Carl}, there exists $\tau_{0}>1$ such that
\begin{align*}
& \tau^{3}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}\overline{w}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}\nabla\overline{w}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
\le & C(\|e^{\tau\phi}x_{n+1}^{\frac{2s-1}{2}}f\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\lim_{x_{n+1}\rightarrow0}\|e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}\overline{w}\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2})
\end{align*}
for all $\tau\ge\tau_{0}$. Then, for $\tau_{0}$ large, the last term of $f$ can be absorbed by the gradient term on the left-hand side, so we have
\[
\tau^{3}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}\overline{w}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\le C(\|e^{\tau\phi}x_{n+1}^{\frac{2s-1}{2}}g\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\lim_{x_{n+1}\rightarrow0}\|e^{\tau\phi}x_{n+1}^{1-2s}\partial_{n+1}\overline{w}\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}),
\]
where $g=f+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}(\partial_{j}a_{jk})\partial_{k}\overline{w}$.
Let
\[
\phi_{-}:=\inf_{x\in C_{s,1/8}^{+}}\phi(x)\quad\text{and}\quad\phi_{+}:=\sup_{x\in C_{s,1/4}^{+}\setminus C_{s,3/16}^{+}}\phi(x).
\]
Hence,
\begin{align*}
& \tau^{3}e^{2\tau\phi_{-}}\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1/8}^{+})}^{2}\\
\le & C\bigg[e^{2\tau\phi_{+}}\|x_{n+1}^{\frac{2s-1}{2}}g\|_{L^{2}(C_{s,1/4}^{+}\setminus C_{s,3/16}^{+})}^{2}+\tau\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{L^{2}(C_{s,1/4}')}^{2}\bigg].
\end{align*}
Dividing the above inequality by $\tau^{3}$, using $\tau\ge1$, taking square roots,
and applying Caccioppoli's inequality (Lemma \ref{lem:Caccio}), we obtain
\[
\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1/8}^{+})}\le C\bigg[e^{\tau(\phi_{+}-\phi_{-})}\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1/2}^{+})}+e^{-\tau\phi_{-}}\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{L^{2}(C_{s,1/2}')}\bigg].
\]
Observe that
\[
-\frac{|x'|^{2}}{4}\ge\frac{1}{1-s}x_{n+1}^{2-2s}-\frac{1}{8}\quad\text{in }C_{s,1/8}^{+},
\]
and also since $s\ge\frac{1}{2}$,
\[
x_{n+1}^{2}\le\bigg[(1-s)\bigg(\frac{1}{4}-\frac{|x'|^{2}}{4}\bigg)\bigg]^{\frac{1}{1-s}}\le\frac{1}{8^{\frac{1}{1-s}}}\le\frac{1}{64}
\]
and
\[
-\frac{|x'|^{2}}{4}\le\frac{1}{1-s}x_{n+1}^{2-2s}-\frac{3}{16}\quad\text{in }C_{s,1/4}^{+}\setminus C_{s,3/16}^{+},
\]
so $\phi_{-}\ge-\frac{1}{8}$ and $\phi_{+}\le-\frac{11}{64}$, that is, $\phi_{+}-\phi_{-}\le-\frac{11}{64}+\frac{8}{64}=-\frac{3}{64}<0$. So, we can choose $\tau$ (which is large) to satisfy
\[
e^{\tau(\phi_{+}-\phi_{-})}=\frac{\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{L^{2}(C_{s,1/2}')}^{1-\alpha}}{\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1/2}^{+})}^{1-\alpha}}\le\frac{1}{c_{0}}
\]
for large $c_{0}$, where $\alpha\in(0,1)$ will be chosen later.
Note that
\[
e^{-\tau\phi_{-}}=\frac{\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1/2}^{+})}^{\frac{\phi_{-}}{\phi_{+}-\phi_{-}}(1-\alpha)}}{\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{L^{2}(C_{s,1/2}')}^{\frac{\phi_{-}}{\phi_{+}-\phi_{-}}(1-\alpha)}}.
\]
Finally, choosing $\alpha\in(0,1)$ such that $\alpha=\frac{\phi_{-}}{\phi_{+}-\phi_{-}}(1-\alpha)$
implies our desired result.
\end{proof}
For our purpose, we only need the following simplified version of
the lemma above:
\begin{cor}
\label{cor:interIII}Let $s\in[\frac{1}{2},1)$. Suppose that $\tilde{w}\in H^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s})$
is a solution to
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}\bigg]\tilde{w} & =0\quad\text{in }\mathbb{R}_{+}^{n+1},\\
\tilde{w} & =w\quad\text{on }\mathbb{R}^{n}\times\{0\},
\end{align*}
with $w=0$ on $B_{1}'$. We assume that
\[
\max_{1\le j,k\le n}\|a_{jk}-\delta_{jk}\|_{\infty}+\max_{1\le j,k\le n}\|\nabla'a_{jk}\|_{\infty}\le\epsilon
\]
for some sufficiently small $\epsilon>0$. For $s\neq\frac{1}{2}$,
we further assume
\[
\max_{1\le j,k\le n}\|(\nabla')^{2}a_{jk}\|_{\infty}\le C
\]
for some positive constant $C$. Then there exist $\alpha=\alpha(n,s)\in(0,1)$,
$c=c(n,s)\in(0,1)$ and a constant $C$ such that
\[
\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(B_{c}^{+})}\le C\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(B_{2}^{+})}^{\alpha}\cdot\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{L^{2}(B_{2}')}^{1-\alpha}.
\]
\end{cor}
Now we are ready to prove part~\ref{itm:part-a:lem:small} of Lemma~\ref{lem:small} for
the case when $s\in[\frac{1}{2},1)$.
\begin{proof}
[Proof of part~{\rm \ref{itm:part-a:lem:small}} of Lemma~{\rm \ref{lem:small}} for $s \in [ \frac{1}{2} ,1 )$]
In order to invoke the estimate from Corollary~\ref{cor:interIII},
we split our solution $u$ into two parts $\tilde{u}=u_{1}+u_{2}$,
where $u_{1}:= \mathsf{E}_{s}(\zeta u)$ satisfies
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}\bigg]u_{1} & =0\quad\text{in }\mathbb{R}_{+}^{n+1},\\
u_{1} & =\zeta u\quad\text{on }\mathbb{R}^{n}\times\{0\},
\end{align*}
where $\zeta\in\mathcal{C}_{0}^{\infty}(B_{16}')$ is a smooth cut-off
function with $\zeta=1$ on $B_{8}'$. By \eqref{eq:apriori},
we have
\[
\int_{\mathbb{R}^{n}}|u_{1}(x',x_{n+1})|^{2}\,dx'\le\|u_{1}\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\le\|u\|_{L^{2}(B_{16}')}^{2}.
\]
So,
\begin{align}
\|x_{n+1}^{\frac{1-2s}{2}}u_{1}\|_{L^{2}(B_{10}^{+})}^{2} & \le\int_{0}^{10}\int_{\mathbb{R}^{n}}x_{n+1}^{1-2s}|u_{1}(x',x_{n+1})|^{2}\,dx'\,dx_{n+1}\nonumber \\
& \le\bigg(\int_{0}^{10}x_{n+1}^{1-2s}\,dx_{n+1}\bigg)\|u\|_{L^{2}(B_{16}')}^{2}=C\|u\|_{L^{2}(B_{16}')}^{2}.\label{eq:small1}
\end{align}
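The constant $C$ in \eqref{eq:small1} can be computed explicitly: since $s\in[\frac{1}{2},1)$ we have $2-2s\in(0,1]$, so
\[
\int_{0}^{10}x_{n+1}^{1-2s}\,dx_{n+1}=\frac{10^{2-2s}}{2-2s}<\infty,
\]
and we may take $C=\frac{10^{2-2s}}{2-2s}$.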
Note that $u_{2}$ satisfies
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}\bigg]u_{2} & =0\quad\text{in }\mathbb{R}_{+}^{n+1},\\
u_{2} & =u-\zeta u\quad\text{on }\mathbb{R}^{n}\times\{0\}.
\end{align*}
Since $u_{2}=0$ on $B_{8}'$, by Corollary~\ref{cor:interIII}, there
exist $\alpha=\alpha(n,s)\in(0,1)$, $c=c(n,s)\in(0,1)$ and a constant
$C$ such that
\begin{equation}
\|x_{n+1}^{\frac{1-2s}{2}}u_{2}\|_{L^{2}(B_{c}^{+})}\le C\|x_{n+1}^{\frac{1-2s}{2}}u_{2}\|_{L^{2}(B_{2}^{+})}^{\alpha}\cdot\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}u_{2}\|_{L^{2}(B_{2}')}^{1-\alpha}.\label{eq:small2}
\end{equation}
Let $\eta$ be a smooth, radial cut-off function with $\eta=1$ in
$B_{2}^{+}$ and $\eta=0$ outside $B_{4}^{+}$. Plugging $w=\eta x_{n+1}^{1-2s}\partial_{n+1}u_{2}$
into the trace characterization lemma (Lemma \ref{lem:inter2(b)}),
we reach
\begin{align}
& \lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}u_{2}\|_{L^{2}(B_{2}')}\nonumber \\
\le & C\bigg[\mu^{1-s}(\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}u_{2}\|_{L^{2}(B_{4}^{+})}+\|x_{n+1}^{\frac{2s-1}{2}}\nabla(\eta x_{n+1}^{1-2s}\partial_{n+1}u_{2})\|_{L^{2}(\mathbb{R}_{+}^{n+1})})\nonumber \\
& +\mu^{-2s}\lim_{x_{n+1}\rightarrow0}\|\eta x_{n+1}^{1-2s}\partial_{n+1}u_{2}\|_{H^{-2s}(\mathbb{R}^{n}\times\{0\})}\bigg].\label{eq:small2-0}
\end{align}
We first control the boundary term of \eqref{eq:small2-0}. Since $\eta$ is a bounded multiplier on $H^{2s}(\mathbb{R}^{n})$, using duality, we have
\begin{align*}
\|\eta v\|_{H^{-2s}(\mathbb{R}^{n}\times\{0\})} & =\sup_{\|\varphi\|_{H^{2s}(\mathbb{R}^{n}\times\{0\})}=1}|\langle v,\eta\varphi\rangle_{L^{2}(\mathbb{R}^{n}\times\{0\})}|\\
& \le\|v\|_{H^{-2s}(B_{8}')}\sup_{\|\varphi\|_{H^{2s}(\mathbb{R}^{n}\times\{0\})}=1}\|\eta\varphi\|_{H^{2s}(\mathbb{R}^{n}\times\{0\})}\\
& \le C\|v\|_{H^{-2s}(B_{8}')}\sup_{\|\varphi\|_{H^{2s}(\mathbb{R}^{n}\times\{0\})}=1}\|\varphi\|_{H^{2s}(\mathbb{R}^{n}\times\{0\})}=C\|v\|_{H^{-2s}(B_{8}')}.
\end{align*}
Plugging in $v=x_{n+1}^{1-2s}\partial_{n+1}u_{2}$, we have
\begin{equation}
\lim_{x_{n+1}\rightarrow0}\|\eta x_{n+1}^{1-2s}\partial_{n+1}u_{2}\|_{H^{-2s}(\mathbb{R}^{n}\times\{0\})}\le C\cdot\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}u_{2}\|_{H^{-2s}(B_{8}')}.\label{eq:small2-1}
\end{equation}
Applying Caccioppoli's inequality (Lemma~\ref{lem:Caccio}) with zero Dirichlet condition and zero inhomogeneous terms, we have
\begin{equation}
\|x_{n+1}^{\frac{1-2s}{2}}\nabla u_{2}\|_{L^{2}(B_{4}^{+})}\le C\|x_{n+1}^{\frac{1-2s}{2}}u_{2}\|_{L^{2}(B_{8}^{+})}.\label{eq:small2-2a}
\end{equation}
Also, we have
\begin{align*}
& \|x_{n+1}^{\frac{2s-1}{2}}\nabla(\eta x_{n+1}^{1-2s}\partial_{n+1}u_{2})\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\\
\le & \|x_{n+1}^{\frac{1-2s}{2}}(\nabla\eta)\partial_{n+1}u_{2}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}+\|x_{n+1}^{\frac{1-2s}{2}}\eta\nabla'\partial_{n+1}u_{2}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\\
& +\|x_{n+1}^{\frac{2s-1}{2}}\eta\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}u_{2}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\\
\le & C\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}u_{2}\|_{L^{2}(B_{4}^{+})}+\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}(\nabla'u_{2})\|_{L^{2}(B_{4}^{+})}+\bigg\| x_{n+1}^{\frac{1-2s}{2}}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}u_{2}\bigg\|_{L^{2}(B_{4}^{+})}\\
\le & C\bigg[\|x_{n+1}^{\frac{1-2s}{2}}\nabla u_{2}\|_{L^{2}(B_{4}^{+})}+\|x_{n+1}^{\frac{1-2s}{2}}\nabla(\nabla'u_{2})\|_{L^{2}(B_{4}^{+})}\bigg],
\end{align*}
where the last inequality follows from the boundedness assumptions
on $a_{jk}$. Observe that
\begin{align*}
0 & =\nabla'\bigg[\partial_{n+1}(x_{n+1}^{1-2s}\partial_{n+1}u_{2})+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}\partial_{j}(a_{jk}\partial_{k}u_{2})\bigg]\\
& =\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}\bigg](\nabla'u_{2})+x_{n+1}^{1-2s}\sum_{j=1}^{n}\partial_{j}\bigg(\sum_{k=1}^{n}\nabla'a_{jk}\partial_{k}u_{2}\bigg).
\end{align*}
Applying Caccioppoli's inequality (Lemma~\ref{lem:Caccio}) to $\nabla'u_{2}$
with zero Dirichlet condition and $f_{j}=\sum_{k=1}^{n}\nabla'a_{jk}\partial_{k}u_{2}$,
and using $\|\nabla'a_{jk}\|_{\infty}\le\epsilon$, we have
\begin{equation}
\|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}\nabla(\nabla'u_{2})\|_{L^{2}(B_{4}^{+})}\le C'\|x_{n+1}^{\frac{1-2s}{2}}\nabla'u_{2}\|_{L^{2}(B_{6}^{+})}\le C\|x_{n+1}^{\frac{1-2s}{2}}u_{2}\|_{L^{2}(B_{8}^{+})},\label{eq:small2-2b}
\end{equation}
where the second inequality follows from \eqref{eq:small2-2a}. Hence,
we reach
\begin{equation}
\|x_{n+1}^{\frac{2s-1}{2}}\nabla(\eta x_{n+1}^{1-2s}\partial_{n+1}u_{2})\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\le C\|x_{n+1}^{\frac{1-2s}{2}}u_{2}\|_{L^{2}(B_{8}^{+})}.\label{eq:small2-3}
\end{equation}
Plugging \eqref{eq:small2-1}, \eqref{eq:small2-2a} and \eqref{eq:small2-3}
into \eqref{eq:small2-0}, and optimizing the resulting estimate in $\mu>0$,
gives
\[
\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}u_{2}\|_{L^{2}(B_{2}')}\le C\|x_{n+1}^{\frac{1-2s}{2}}u_{2}\|_{L^{2}(B_{8}^{+})}^{\frac{2s}{1+s}}\cdot\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}u_{2}\|_{H^{-2s}(B_{8}')}^{\frac{1-s}{1+s}}.
\]
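For the reader's convenience, the optimization in $\mu$ can be carried out as follows. Abbreviating (in this remark only) $A:=\|x_{n+1}^{\frac{1-2s}{2}}u_{2}\|_{L^{2}(B_{8}^{+})}$ and $B:=\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}u_{2}\|_{H^{-2s}(B_{8}')}$, the estimates \eqref{eq:small2-1}, \eqref{eq:small2-2a} and \eqref{eq:small2-3} reduce the right-hand side of \eqref{eq:small2-0} to a bound of the form $C(\mu^{1-s}A+\mu^{-2s}B)$. Balancing the two terms,
\[
\mu^{1-s}A=\mu^{-2s}B\quad\Longleftrightarrow\quad\mu=\bigg(\frac{B}{A}\bigg)^{\frac{1}{1+s}},
\]
which gives
\[
\mu^{1-s}A=A^{1-\frac{1-s}{1+s}}B^{\frac{1-s}{1+s}}=A^{\frac{2s}{1+s}}B^{\frac{1-s}{1+s}},
\]
in accordance with the exponents above.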
Plugging this into \eqref{eq:small2} leads to
\begin{align}
& \|x_{n+1}^{\frac{1-2s}{2}}u_{2}\|_{L^{2}(B_{c}^{+})}\nonumber \\
\le & C\|x_{n+1}^{\frac{1-2s}{2}}u_{2}\|_{L^{2}(B_{8}^{+})}^{\tilde{\alpha}}\cdot\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}u_{2}\|_{H^{-2s}(B_{8}')}^{1-\tilde{\alpha}}\nonumber \\
\le & C(\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{8}^{+})}+\|x_{n+1}^{\frac{1-2s}{2}}u_{1}\|_{L^{2}(B_{8}^{+})})^{\tilde{\alpha}}\times\nonumber \\
& \quad\times(\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{H^{-2s}(B_{8}')}+\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}u_{1}\|_{H^{-2s}(B_{8}')})^{1-\tilde{\alpha}},\label{eq:small3}
\end{align}
where $\tilde{\alpha}=\frac{1-s}{1+s}\alpha+\frac{2s}{1+s}$. Then
we have
\begin{align}
\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}u_{1}\|_{H^{-2s}(\mathbb{R}^{n}\times\{0\})} & =\|(-P)^{s}u_{1}\|_{H^{-2s}(\mathbb{R}^{n}\times\{0\})}\nonumber \\
& \le C\|u_{1}\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}\le C\|\tilde{u}\|_{L^{2}(B_{16}')},\label{eq:small4}
\end{align}
where the second inequality follows from Lemma~\ref{lem:well2}.
By combining \eqref{eq:small1}, \eqref{eq:small3} and \eqref{eq:small4},
we reach
\begin{align}
& \|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{c}^{+})}\nonumber \\
\le & C\bigg(\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{16}^{+})}+\|\tilde{u}\|_{L^{2}(B_{16}')}\bigg)^{\tilde{\alpha}}\bigg(\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{16}')}+\|u\|_{L^{2}(B_{16}')}\bigg)^{1-\tilde{\alpha}},\label{eq:small5}
\end{align}
which is our desired claim in part~\ref{itm:part-a:lem:small}.
\end{proof}
Indeed, by combining \eqref{eq:small5} with Caccioppoli's inequality
(Lemma~\ref{lem:Caccio}), we reach
\begin{align}
& \|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{\tilde{c}}^{+})}+\|x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(B_{\tilde{c}}^{+})}\nonumber \\
\le & C\bigg(\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{16}^{+})}+\|\tilde{u}\|_{L^{2}(B_{16}')}\bigg)^{\tilde{\alpha}}\bigg(\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{16}')}+\|u\|_{L^{2}(B_{16}')}\bigg)^{1-\tilde{\alpha}}\nonumber \\
& +\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{16}')}^{\frac{1}{2}}\|u\|_{L^{2}(B_{16}')}^{\frac{1}{2}}\label{eq:small6}
\end{align}
with $\tilde{c}=c/2$. Slightly modifying the proof of \eqref{eq:small3},
we can obtain the following analogue of \cite[Proposition 5.11]{RS20Calderon}:
\begin{lem}
\label{lem:inter-IV}Let $s\in[\frac{1}{2},1)$ and let $\tilde{w}$ be
the Caffarelli-Silvestre type extension of some $f\in H^{\gamma}(\mathbb{R}^{n})$
as in \eqref{eq:Sch-ext}, where $\gamma\in\mathbb{R}$ and $f|_{C_{s,1}'}=0$.
We assume that
\[
\max_{1\le j,k\le n}\|a_{jk}-\delta_{jk}\|_{\infty}+\max_{1\le j,k\le n}\|\nabla'a_{jk}\|_{\infty}\le\epsilon
\]
for some sufficiently small $\epsilon>0$. For $s\neq\frac{1}{2}$,
we further assume
\[
\max_{1\le j,k\le n}\|(\nabla')^{2}a_{jk}\|_{\infty}\le C
\]
for some positive constant $C$. Then there exist $C=C(n,s)$ and
$\alpha=\alpha(n,s)\in(0,1)$ such that
\[
\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1/8}^{+})}\le C\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{s,1}^{+})}^{\alpha}\cdot\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{H^{-s}(C_{s,1/2}')}^{1-\alpha}.
\]
\end{lem}
\begin{proof}
Let $\eta$ be a smooth cut-off function supported in $C_{s,1/2}^{+}$ with $\eta=1$ in $C_{s,1/4}^{+}$. With this cut-off function, following the ideas in the proof of \eqref{eq:small3} and using Lemma~\ref{lem:inter2(a)} rather than Lemma~\ref{lem:inter2(b)}, we obtain the stated inequality.
\end{proof}
\subsection{Proof of part~\ref{itm:part-a:lem:small} of Lemma \ref{lem:small} for the case $s\in(0,1/2)$}
Let $\tilde{w}$ solve \eqref{eq:Sch-ext}. If we define $\overline{s}:=1-s\in(1/2,1)$,
\begin{equation}
v(x)=x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}(x)\quad\text{and}\quad f=\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}=c^{-1}_{s}(-P)^{s}u,\label{eq:0small1}
\end{equation}
then
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2\overline{s}}\partial_{n+1}+x_{n+1}^{1-2\overline{s}}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}\bigg]v & =0\quad\text{in }\mathbb{R}_{+}^{n+1},\\
v & =f\quad\text{on }\mathbb{R}^{n}\times\{0\}.
\end{align*}
Using this observation and following the ideas in \cite[Proposition 5.12]{RS20Calderon}, we can obtain an analogue of Lemma~\ref{lem:inter-IV}:
\begin{lem}
Let $s\in(0,1/2)$ and let $x_{0}\in\mathbb{R}^{n}\times\{0\}$. Suppose
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}\bigg]\tilde{w} & =0\quad\text{in }\mathbb{R}_{+}^{n+1},\\
\tilde{w} & =w\quad\text{on }\mathbb{R}^{n}\times\{0\},
\end{align*}
with $w=0$ on $C_{\overline{s},2}'$. We assume that
\[
\max_{1\le j,k\le n}\|a_{jk}-\delta_{jk}\|_{\infty}+\max_{1\le j,k\le n}\|\nabla'a_{jk}\|_{\infty}\le\epsilon
\]
for some sufficiently small $\epsilon>0$. We further assume
\[
\max_{1\le j,k\le n}\|(\nabla')^{2}a_{jk}\|_{\infty}\le C
\]
for some positive constant $C$. Then there exist $C=C(n,s)$ and
$\alpha=\alpha(n,s)\in(0,1)$ such that
\begin{align*}
& \|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},1/8}^{+})}\\
\le & C\max\{\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})},\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{H^{-s}(C_{\overline{s},2}')}\}^{\alpha}\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{H^{-s}(C_{\overline{s},2}')}^{1-\alpha}.
\end{align*}
\end{lem}
\begin{proof}
Let $v$ and $f$ be as in \eqref{eq:0small1}. Let $\tilde{v}$ be
the Caffarelli-Silvestre type extension of $\eta f$ as in \eqref{eq:Sch-ext},
where $\eta$ is a cut-off function satisfying
\[
\eta=\begin{cases}
1 & \text{in }C_{\overline{s},1}^{+},\\
0 & \text{outside }C_{\overline{s},2}^{+},
\end{cases}
\]
with $|\partial_{n+1}\eta|\le Cx_{n+1}$. As a consequence, the function
$\overline{v}:=v-\tilde{v}$ is the Caffarelli-Silvestre extension
of $(1-\eta)f$ and solves
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}\bigg]\overline{v} & =0\quad\text{in }\mathbb{R}_{+}^{n+1},\\
\overline{v} & =0\quad\text{on }C_{\overline{s},1}'.
\end{align*}
Hence, by Lemma~\ref{lem:inter-IV} and since $\overline{s}=1-s$,
we have
\begin{align*}
\|x_{n+1}^{\frac{1-2\overline{s}}{2}}\overline{v}\|_{L^{2}(C_{\overline{s},1/8}^{+})} & \le C\|x_{n+1}^{\frac{1-2\overline{s}}{2}}\overline{v}\|_{L^{2}(C_{\overline{s},1}^{+})}^{\alpha}\cdot\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2\overline{s}}\partial_{n+1}\overline{v}\|_{H^{-\overline{s}}(C_{\overline{s},1/2}')}^{1-\alpha}\\
& =C\|x_{n+1}^{\frac{1-2\overline{s}}{2}}\overline{v}\|_{L^{2}(C_{\overline{s},1}^{+})}^{\alpha}\cdot\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2\overline{s}}\partial_{n+1}\overline{v}\|_{H^{-1+s}(C_{\overline{s},1/2}')}^{1-\alpha}.
\end{align*}
Since $\tilde{w}=0$ on $C_{\overline{s},2}'$, we have
\begin{align*}
\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2\overline{s}}\partial_{n+1}v\bigg|_{C_{\overline{s},1/2}'} & =\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2\overline{s}}(\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}\tilde{w})\bigg|_{C_{\overline{s},1/2}'}\\
& =-\lim_{x_{n+1}\rightarrow0}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}\tilde{w}\bigg|_{C_{\overline{s},1/2}'}=0.
\end{align*}
Hence,
\[
\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2\overline{s}}\partial_{n+1}\overline{v}\bigg|_{C_{\overline{s},1/2}'}=\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2\overline{s}}\partial_{n+1}\tilde{v}\bigg|_{C_{\overline{s},1/2}'},
\]
and thus
\[
\|x_{n+1}^{\frac{1-2\overline{s}}{2}}\overline{v}\|_{L^{2}(C_{\overline{s},1/8}^{+})}\le C\|x_{n+1}^{\frac{1-2\overline{s}}{2}}\overline{v}\|_{L^{2}(C_{\overline{s},1}^{+})}^{\alpha}\cdot\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2\overline{s}}\partial_{n+1}\tilde{v}\|_{H^{-1+s}(C_{\overline{s},1/2}')}^{1-\alpha}.
\]
Using $\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2\overline{s}}\partial_{n+1}\tilde{v}=-c_{\overline{s}}(-P)^{\overline{s}}(\eta f)=-c_{\overline{s}}(-P)^{1-s}(\eta f)$, we have
\begin{equation}
\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2\overline{s}}\partial_{n+1}\tilde{v}\|_{H^{-1+s}(C_{\overline{s},1/2}')}\le C\|(-P)^{1-s}(\eta f)\|_{H^{-1+s}(\mathbb{R}^{n})}\le C\|\eta f\|_{H^{1-s}(\mathbb{R}^{n})},\label{eq:0small1-1}
\end{equation}
where the last inequality follows from Lemma~\ref{lem:well1}. Thus,
\begin{equation}
\|x_{n+1}^{\frac{1-2\overline{s}}{2}}\overline{v}\|_{L^{2}(C_{\overline{s},1/8}^{+})}\le C\|x_{n+1}^{\frac{1-2\overline{s}}{2}}\overline{v}\|_{L^{2}(C_{\overline{s},1}^{+})}^{\alpha}\|\eta f\|_{H^{1-s}(\mathbb{R}^{n})}^{1-\alpha}.\label{eq:0small2}
\end{equation}
First, we estimate the right-hand side of \eqref{eq:0small2} by
\begin{align*}
\|x_{n+1}^{\frac{1-2\overline{s}}{2}}\overline{v}\|_{L^{2}(C_{\overline{s},1}^{+})}\le & \|x_{n+1}^{\frac{1-2\overline{s}}{2}}v\|_{L^{2}(C_{\overline{s},1}^{+})}+\|x_{n+1}^{\frac{1-2\overline{s}}{2}}\tilde{v}\|_{L^{2}(C_{\overline{s},1}^{+})}\\
\le & \|x_{n+1}^{\frac{1-2\overline{s}}{2}}v\|_{L^{2}(C_{\overline{s},1}^{+})}+C\|\eta f\|_{H^{\overline{s}}(\mathbb{R}^{n}\times\{0\})}\\
= & \|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}\tilde{w}\|_{L^{2}(C_{\overline{s},1}^{+})}+C\|\eta f\|_{H^{1-s}(\mathbb{R}^{n}\times\{0\})}\\
\le & C\bigg[\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})}+\|\eta f\|_{H^{1-s}(\mathbb{R}^{n}\times\{0\})}\bigg],
\end{align*}
where the second inequality follows from \eqref{eq:apriori} and the
last one follows from Caccioppoli's inequality (Lemma~\ref{lem:Caccio}).
Similarly, we can estimate the left-hand side of \eqref{eq:0small2}
by
\begin{align*}
\|x_{n+1}^{\frac{1-2\overline{s}}{2}}\overline{v}\|_{L^{2}(C_{\overline{s},1/8}^{+})}\ge & \|x_{n+1}^{\frac{1-2\overline{s}}{2}}v\|_{L^{2}(C_{\overline{s},1/8}^{+})}-\|x_{n+1}^{\frac{1-2\overline{s}}{2}}\tilde{v}\|_{L^{2}(C_{\overline{s},1/8}^{+})}\\
\ge & \|x_{n+1}^{\frac{1-2s}{2}}\partial_{n+1}\tilde{w}\|_{L^{2}(C_{\overline{s},1/8}^{+})}-C\|\eta f\|_{H^{1-s}(\mathbb{R}^{n}\times\{0\})}\\
\ge & c\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},1/8}^{+})}-C\|\eta f\|_{H^{1-s}(\mathbb{R}^{n}\times\{0\})},
\end{align*}
where the last inequality follows from the Poincar\'{e} inequality. Thus, \eqref{eq:0small2} becomes
\begin{equation}
\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},1/8}^{+})}\le C\bigg[\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})}+\|\eta f\|_{H^{1-s}(\mathbb{R}^{n}\times\{0\})}\bigg]^{\alpha}\|\eta f\|_{H^{1-s}(\mathbb{R}^{n})}^{1-\alpha}.\label{eq:0small3}
\end{equation}
Next, we estimate the boundary contribution $\|\eta f\|_{H^{1-s}(\mathbb{R}^{n}\times\{0\})}$.
Using the interpolation inequality in Lemma \ref{lem:inter2(a)}, we have
\begin{align*}
\|\eta f\|_{H^{\beta}(\mathbb{R}^{n}\times\{0\})}= & \|\langle D'\rangle^{\beta}\eta f\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}\\
\le & C\mu^{1-s}\bigg(\|x_{n+1}^{\frac{2s-1}{2}}\langle D'\rangle^{\beta}(\eta v)\|_{L^{2}(\mathbb{R}_{+}^{n+1})}+\|x_{n+1}^{\frac{2s-1}{2}}\nabla(\langle D'\rangle^{\beta}(\eta v))\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\bigg)\\
& +C\mu^{-s}\|\langle D'\rangle^{\beta}(\eta f)\|_{H^{-s}(\mathbb{R}^{n}\times\{0\})}.
\end{align*}
Using $\|\langle D'\rangle^{\beta}u\|_{L^{2}}\le\|u\|_{L^{2}}+\|\nabla' u\|_{L^{2}}$
for $\beta\le1$, we have
\begin{align*}
\|x_{n+1}^{\frac{2s-1}{2}}\langle D'\rangle^{\beta}(\eta v)\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\le & \|x_{n+1}^{\frac{2s-1}{2}}\eta v\|_{L^{2}(\mathbb{R}_{+}^{n+1})}+\|x_{n+1}^{\frac{2s-1}{2}}\nabla'(\eta v)\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\\
\|x_{n+1}^{\frac{2s-1}{2}}\nabla(\langle D'\rangle^{\beta}(\eta v))\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\le & \|x_{n+1}^{\frac{2s-1}{2}}\nabla(\eta v)\|_{L^{2}(\mathbb{R}_{+}^{n+1})}+\|x_{n+1}^{\frac{2s-1}{2}}\nabla\nabla'(\eta v)\|_{L^{2}(\mathbb{R}_{+}^{n+1})}.
\end{align*}
Using \eqref{eq:small2-2a} and \eqref{eq:small2-2b}, we know that
\[
\|x_{n+1}^{\frac{2s-1}{2}}\langle D'\rangle^{\beta}(\eta v)\|_{L^{2}(\mathbb{R}_{+}^{n+1})}+\|x_{n+1}^{\frac{2s-1}{2}}\nabla(\langle D'\rangle^{\beta}(\eta v))\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\le C\|x_{n+1}^{\frac{2s-1}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})},
\]
hence
\begin{equation}
\|\eta f\|_{H^{\beta}(\mathbb{R}^{n}\times\{0\})}\le C\bigg[\mu^{1-s}\|x_{n+1}^{\frac{2s-1}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})}+\mu^{-s}\|\eta f\|_{H^{\beta-s}(\mathbb{R}^{n}\times\{0\})}\bigg].\label{eq:0small4}
\end{equation}
We choose $\mu>0$ in \eqref{eq:0small4} so that the two contributions
on the right-hand side become equal, i.e.
\[
\mu=\frac{\|\eta f\|_{H^{\beta-s}(\mathbb{R}^{n}\times\{0\})}}{\|x_{n+1}^{\frac{2s-1}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})}}.
\]
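Indeed, writing (in this remark only) $T:=\|x_{n+1}^{\frac{2s-1}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})}$ and $E:=\|\eta f\|_{H^{\beta-s}(\mathbb{R}^{n}\times\{0\})}$, this choice $\mu=E/T$ makes both terms on the right-hand side of \eqref{eq:0small4} equal:
\[
\mu^{1-s}T=\bigg(\frac{E}{T}\bigg)^{1-s}T=T^{s}E^{1-s}=\bigg(\frac{E}{T}\bigg)^{-s}E=\mu^{-s}E.
\]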
Here, by unique continuation, we note that $\|x_{n+1}^{\frac{2s-1}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})}\neq0$
unless $\tilde{w}$ vanishes globally. With this choice of $\mu>0$,
we reach the multiplicative estimate
\begin{equation}
\|\eta f\|_{H^{\beta}(\mathbb{R}^{n}\times\{0\})}\le C\|x_{n+1}^{\frac{2s-1}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})}^{s}\|\eta f\|_{H^{\beta-s}(\mathbb{R}^{n}\times\{0\})}^{1-s}.\label{eq:0small5}
\end{equation}
Starting from $\beta=1-s$ and iterating \eqref{eq:0small5}
$k$ times, we reach
\[
\|\eta f\|_{H^{1-s}(\mathbb{R}^{n}\times\{0\})}\le C\|x_{n+1}^{\frac{2s-1}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})}^{\gamma}\|\eta f\|_{H^{1-s-ks}(\mathbb{R}^{n}\times\{0\})}^{1-\gamma}.
\]
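The exponent $\gamma$ here can be tracked explicitly. Abbreviating (in this remark only) $E_{j}:=\|\eta f\|_{H^{1-s-js}(\mathbb{R}^{n}\times\{0\})}$ and $T:=\|x_{n+1}^{\frac{2s-1}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})}$, the estimate \eqref{eq:0small5} reads $E_{j}\le CT^{s}E_{j+1}^{1-s}$, and iterating $k$ times gives
\[
E_{0}\le C'T^{s\sum_{i=0}^{k-1}(1-s)^{i}}E_{k}^{(1-s)^{k}}=C'T^{1-(1-s)^{k}}E_{k}^{(1-s)^{k}},
\]
so that $\gamma=1-(1-s)^{k}\in(0,1)$.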
Choosing $k\in\mathbb{N}$ to be the smallest integer such that $1-ks<0$,
we reach
\begin{align}
\|\eta f\|_{H^{1-s}(\mathbb{R}^{n}\times\{0\})} & \le C\|x_{n+1}^{\frac{2s-1}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})}^{\gamma}\|\eta f\|_{H^{-s}(\mathbb{R}^{n}\times\{0\})}^{1-\gamma}\nonumber \\
& \le C\|x_{n+1}^{\frac{2s-1}{2}}\tilde{w}\|_{L^{2}(C_{\overline{s},2}^{+})}^{\gamma}\|f\|_{H^{-s}(C_{\overline{s},2}')}^{1-\gamma}.\label{eq:0small6}
\end{align}
Inserting \eqref{eq:0small6} into \eqref{eq:0small3} gives our desired
result.
\end{proof}
For our purposes, we only need the following version of the inequality:
\begin{cor}
\label{cor:inter-V}Let $s\in(0,1/2)$ and let $x_{0}\in\mathbb{R}^{n}\times\{0\}$.
Suppose
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}\bigg]\tilde{w} & =0\quad\text{in }\mathbb{R}_{+}^{n+1},\\
\tilde{w} & =w\quad\text{on }\mathbb{R}^{n}\times\{0\},
\end{align*}
with $w=0$ on $C_{\overline{s},2}'$. We assume that
\[
\max_{1\le j,k\le n}\|a_{jk}-\delta_{jk}\|_{\infty}+\max_{1\le j,k\le n}\|\nabla'a_{jk}\|_{\infty}\le\epsilon
\]
for some sufficiently small $\epsilon>0$. We further assume
\[
\max_{1\le j,k\le n}\|(\nabla')^{2}a_{jk}\|_{\infty}\le C
\]
for some positive constant $C$. Then there exist $C=C(n,s)$, $c=c(n,s)$
and $\alpha=\alpha(n,s)\in(0,1)$ such that
\begin{align*}
& \|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(B_{c}^{+})}\\
\le & C\max\{\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(B_{2}^{+})},\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{H^{-s}(B_{2}')}\}^{\alpha}\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{H^{-s}(B_{2}')}^{1-\alpha}\\
\le & C\bigg[\|x_{n+1}^{\frac{1-2s}{2}}\tilde{w}\|_{L^{2}(B_{2}^{+})}^{\alpha}\cdot\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{H^{-s}(B_{2}')}^{1-\alpha}+\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{w}\|_{H^{-s}(B_{2}')}\bigg].
\end{align*}
\end{cor}
Now we are ready to prove part~\ref{itm:part-a:lem:small} of Lemma~\ref{lem:small} for the case $s\in(0,1/2)$.
\begin{proof}
[Proof of part~{\rm \ref{itm:part-a:lem:small}} of Lemma~{\rm \ref{lem:small}} for $s \in (0, \frac{1}{2})$]
The case $s\in(0,1/2)$ is similar to the case $s\in[\frac{1}{2},1)$. As
above, the estimate for $u_{1}$ is a direct consequence of \eqref{eq:apriori}.
For $u_{2}$, we use Corollary~\ref{cor:inter-V} and the interpolation
inequality in Lemma~\ref{lem:inter2(b)}. With these estimates,
the analogues of \eqref{eq:small5} and \eqref{eq:small6} follow by combining the estimates in the splitting argument as above. Note
that \eqref{eq:small6} becomes
\begin{align}
& \|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{\tilde{c}}^{+})}+\|x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(B_{\tilde{c}}^{+})}\nonumber \\
\le & C\bigg(\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{16}^{+})}+\|\tilde{u}\|_{L^{2}(B_{16}')}\bigg)^{\alpha} \bigg(\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{16}')}+\|u\|_{L^{2}(B_{16}')}\bigg)^{1-\alpha}\nonumber \\
& +C\bigg(\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{16}^{+})}+\|\tilde{u}\|_{L^{2}(B_{16}')}\bigg)^{\frac{2s}{1+s}} \bigg(\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{16}')}+\|u\|_{L^{2}(B_{16}')}\bigg)^{\frac{1-s}{1+s}}\nonumber \\
& +\lim_{x_{n+1}\rightarrow0}\|x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{16}')}^{\frac{1}{2}}\|u\|_{L^{2}(B_{16}')}^{\frac{1}{2}},\label{eq:0small7}
\end{align}
which is our desired result.
\end{proof}
Finally, combining \eqref{eq:0small7} and Lemma~\ref{lem:Linfty-L2}, we immediately obtain part~\ref{itm:part-b:lem:small} of Lemma~\ref{lem:small}.
\section{\label{sec:Carleman}Carleman Estimate}
\subsection{A Carleman estimate with differentiability assumption}
Modifying the arguments in \cite{Reg97StrongUniquenessSecondOrderElliptic}, we can prove the following
Carleman estimate.
\begin{thm}
\label{thm:Carl-diff} Let $s\in(0,1)$ and let $\tilde{u}\in H^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s})$
with ${\rm supp}(\tilde{u})\subset\mathbb{R}_{+}^{n+1}\setminus B_{1}^{+}$
be a solution to
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}a_{jk}\partial_{j}\partial_{k}\bigg]\tilde{u} & =f\quad\text{in }\mathbb{R}_{+}^{n+1},\\
\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u} & =V\tilde{u}\quad\text{on }\mathbb{R}^{n}\times\{0\},
\end{align*}
where $x=(x',x_{n+1})\in\mathbb{R}^{n}\times\mathbb{R}_{+}$, $f\in L^{2}(\mathbb{R}_{+}^{n+1},x_{n+1}^{2s-1})$
with compact support in $\mathbb{R}_{+}^{n+1}$, and $V\in\mathcal{C}^{1}(\mathbb{R}^{n})$.
Assume that
\[
\max_{1\le j,k\le n}\sup_{|x'|\ge1}|a_{jk}(x')-\delta_{jk}|+\max_{1\le j,k\le n}\sup_{|x'|\ge1}|x'||\nabla'a_{jk}(x')|\le\epsilon
\]
for some sufficiently small $\epsilon>0$. Let further $\phi(x)=|x|^{\alpha}$
for $\alpha\ge1$. Then there exist constants $C=C(n,s,\alpha)$
and $\tau_{0}=\tau_{0}(n,s,\alpha)$ such that
\begin{align*}
& \tau^{3}\|e^{\tau\phi}|x|^{\frac{3\alpha}{2}-1}x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\|e^{\tau\phi}|x|^{\frac{\alpha}{2}}x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
& +\tau^{-1}\|e^{\tau\phi}|x|^{-\frac{\alpha}{2}+1}x_{n+1}^{\frac{1-2s}{2}}\nabla(\nabla'\tilde{u})\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
\le & C\bigg[\|e^{\tau\phi}x_{n+1}^{\frac{2s-1}{2}}|x|f\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\|e^{\tau\phi}|x|^{\frac{\alpha}{2}}(|V|^{\frac{1}{2}}+|x'|^{\frac{1}{2}}|\nabla'V|^{\frac{1}{2}})\tilde{u}\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\\
& \qquad+\tau^{-1}\|e^{\tau\phi}|x|^{-\frac{\alpha}{2}+1}(|V|^{\frac{1}{2}}+|x'|^{\frac{1}{2}}|\nabla'V|^{\frac{1}{2}})\nabla'\tilde{u}\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\bigg]
\end{align*}
for all $\tau\ge\tau_{0}$. Here, $\nabla'=(\partial_{1},\cdots,\partial_{n})$
and $\nabla=(\partial_{1},\cdots,\partial_{n},\partial_{n+1})$.
\end{thm}
\begin{proof}
[Proof of Theorem {\rm \ref{thm:Carl-diff}}] \textbf{Step 1: Changing the coordinates.} Writing $x=e^{t}\omega$ with
$t\in\mathbb{R}$ and $\omega\in\mathcal{S}_{+}^{n}$, we have
\[
\partial_{j}=e^{-t}(\omega_{j}\partial_{t}+\Omega_{j})\quad\text{for all }j=1,\cdots,n+1.
\]
Since
\begin{equation}
\Omega_{k}\omega_{j}=\delta_{jk}-\omega_{k}\omega_{j},\label{eq:Carl0}
\end{equation}
we have
\[
\partial_{j}\partial_{k}=e^{-2t}(\omega_{j}\omega_{k}\partial_{t}^{2}+\omega_{j}\Omega_{k}\partial_{t}+\omega_{k}\Omega_{j}\partial_{t}+(\delta_{jk}-2\omega_{j}\omega_{k})\partial_{t}+\Omega_{j}\Omega_{k}-\omega_{j}\Omega_{k}).
\]
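For completeness, this identity follows by composing $\partial_{j}=e^{-t}(\omega_{j}\partial_{t}+\Omega_{j})$ with $\partial_{k}=e^{-t}(\omega_{k}\partial_{t}+\Omega_{k})$: since $\omega_{k}$ and $\Omega_{k}$ are independent of $t$ and $e^{-t}$ is independent of $\omega$,
\begin{align*}
\omega_{j}\partial_{t}\big[e^{-t}(\omega_{k}\partial_{t}+\Omega_{k})\big] & =e^{-t}\big[\omega_{j}\omega_{k}\partial_{t}^{2}+\omega_{j}\Omega_{k}\partial_{t}-\omega_{j}\omega_{k}\partial_{t}-\omega_{j}\Omega_{k}\big],\\
\Omega_{j}\big[e^{-t}(\omega_{k}\partial_{t}+\Omega_{k})\big] & =e^{-t}\big[(\delta_{jk}-\omega_{j}\omega_{k})\partial_{t}+\omega_{k}\Omega_{j}\partial_{t}+\Omega_{j}\Omega_{k}\big],
\end{align*}
where we used \eqref{eq:Carl0} in the second line; summing the two lines and multiplying by the outer factor $e^{-t}$ yields the formula above.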
Since $\partial_{j}$ and $\partial_{k}$ commute, it follows that
\[
\Omega_{j}\Omega_{k}-\omega_{j}\Omega_{k}=\Omega_{k}\Omega_{j}-\omega_{k}\Omega_{j},
\]
that is, $\Omega_{j}$ and $\Omega_{k}$ commute up to lower
order terms. Writing $\partial_{j}\partial_{k}=\frac{1}{2}(\partial_{j}\partial_{k}+\partial_{k}\partial_{j})$,
we reach
\begin{align*}
\partial_{j}\partial_{k}= & e^{-2t}\bigg(\omega_{j}\omega_{k}\partial_{t}^{2}+\omega_{j}\Omega_{k}\partial_{t}+\omega_{k}\Omega_{j}\partial_{t}+(\delta_{jk}-2\omega_{j}\omega_{k})\partial_{t}\\
& \qquad+\frac{1}{2}\Omega_{j}\Omega_{k}+\frac{1}{2}\Omega_{k}\Omega_{j}-\frac{1}{2}\omega_{j}\Omega_{k}-\frac{1}{2}\omega_{k}\Omega_{j}\bigg).
\end{align*}
Also, the vector fields have the following properties:
\begin{align*}
\sum_{j=1}^{n+1}\omega_{j}\Omega_{j} & =0\quad\text{and}\quad\sum_{j=1}^{n+1}\Omega_{j}\omega_{j}=n\quad\text{in }\mathcal{S}_{+}^{n},\\
\sum_{j=1}^{n}\omega_{j}\Omega_{j} & =0\quad\text{and}\quad\sum_{j=1}^{n}\Omega_{j}\omega_{j}=n\quad\text{on }\partial\mathcal{S}_{+}^{n}.
\end{align*}
In these coordinates,
\begin{align*}
f= & e^{-(1+2s)t}\bigg[\omega_{n+1}^{1-2s}\partial_{t}^{2}+\omega_{n+1}^{1-2s}(n-2s)\partial_{t}+\sum_{j=1}^{n+1}\Omega_{j}\omega_{n+1}^{1-2s}\Omega_{j}\bigg]\tilde{u}\\
& +e^{-(1+2s)t}\omega_{n+1}^{1-2s}\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[\omega_{j}\omega_{k}\partial_{t}^{2}+\omega_{j}\Omega_{k}\partial_{t}+\omega_{k}\Omega_{j}\partial_{t}+\frac{1}{2}\Omega_{j}\Omega_{k}+\frac{1}{2}\Omega_{k}\Omega_{j}\bigg]\tilde{u}\\
& +e^{-(1+2s)t}\omega_{n+1}^{1-2s}\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[(\delta_{jk}-2\omega_{j}\omega_{k})\partial_{t}-\frac{1}{2}\omega_{j}\Omega_{k}-\frac{1}{2}\omega_{k}\Omega_{j}\bigg]\tilde{u}\quad\text{in }\mathcal{S}_{+}^{n}\times\mathbb{R}.
\end{align*}
Next, letting $\overline{u}=e^{\frac{n-2s}{2}t}\tilde{u}$ and $\tilde{f}=e^{\frac{n-2s}{2}t}e^{(1+2s)t}f=e^{\frac{n+2+2s}{2}t}f$, we obtain
\begin{align}
\tilde{f}= & \bigg[\omega_{n+1}^{1-2s}\partial_{t}^{2}+\sum_{j=1}^{n+1}\Omega_{j}\omega_{n+1}^{1-2s}\Omega_{j}-\omega_{n+1}^{1-2s}\frac{(n-2s)^{2}}{4}\bigg]\overline{u}\nonumber \\
& +\omega_{n+1}^{1-2s}\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[\omega_{j}\omega_{k}\partial_{t}^{2}+\omega_{j}\Omega_{k}\partial_{t}+\omega_{k}\Omega_{j}\partial_{t}+\frac{1}{2}\Omega_{j}\Omega_{k}+\frac{1}{2}\Omega_{k}\Omega_{j}\bigg]\overline{u}\nonumber \\
& +\omega_{n+1}^{1-2s}\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[(\delta_{jk}-(n+2-2s)\omega_{j}\omega_{k})\partial_{t}\nonumber \\
& \qquad-\frac{n+1-2s}{2}\omega_{j}\Omega_{k}-\frac{n+1-2s}{2}\omega_{k}\Omega_{j}\bigg]\overline{u}\nonumber \\
& +\omega_{n+1}^{1-2s}\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[\frac{(n-2s)^{2}}{4}\omega_{j}\omega_{k}-\frac{n-2s}{2}(\delta_{jk}-2\omega_{j}\omega_{k})\bigg]\overline{u}\quad\text{in }\mathcal{S}_{+}^{n}\times\mathbb{R}.\label{eq:Carl1}
\end{align}
Also,
\[
\lim_{\omega_{n+1}\rightarrow0}\omega_{n+1}^{1-2s}\Omega_{n+1}\overline{u}=\tilde{V}\overline{u},
\]
where $\tilde{V}=e^{2st}V$.

\textbf{Step 2: Conjugation.} Next, setting $\overline{v}=\omega_{n+1}^{\frac{1-2s}{2}}e^{\tau\varphi}\overline{u}$,
where $\varphi(t)=\phi(e^{t}\omega)=e^{\alpha t}$, we reach
\begin{equation}
\omega_{n+1}^{\frac{2s-1}{2}}e^{\tau\varphi}\tilde{f}=L^{+}\overline{v}=(S-A+(I)+(II)+(III))\overline{v}\quad\text{in }\mathcal{S}_{+}^{n}\times\mathbb{R},\label{eq:Carl2}
\end{equation}
where
\begin{align*}
S & =\partial_{t}^{2}+\tilde{\Delta}_{\omega}+\tau^{2}|\varphi'|^{2}-\tau\varphi''-\frac{(n-2s)^{2}}{4},\quad\tilde{\Delta}_{\omega}=\sum_{j=1}^{n+1}\omega_{n+1}^{\frac{2s-1}{2}}\Omega_{j}\omega_{n+1}^{1-2s}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\\
A & =2\tau\varphi'\partial_{t}\\
(I) & =\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[\omega_{j}\omega_{k}\partial_{t}^{2}+\omega_{j}\Omega_{k}\partial_{t}+\omega_{k}\Omega_{j}\partial_{t}+\frac{1}{2}\Omega_{j}\Omega_{k}+\frac{1}{2}\Omega_{k}\Omega_{j}\bigg]\\
(II) & =\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[(-2\tau\varphi'\omega_{j}\omega_{k}+(\delta_{jk}-(n+1)\omega_{j}\omega_{k}))\partial_{t}-\bigg(\tau\varphi'+\frac{n}{2}\bigg)(\omega_{j}\Omega_{k}+\omega_{k}\Omega_{j})\bigg]\\
(III) & =\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[\omega_{j}\omega_{k}(\tau^{2}|\varphi'|^{2}-\tau\varphi''+(n+1)\tau\varphi'+C_{1})+C_{2}\bigg],
\end{align*}
for some constants $C_{1}$ and $C_{2}$. Also,
\begin{equation}
\lim_{\omega_{n+1}\rightarrow0}\omega_{n+1}^{1-2s}\Omega_{n+1}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}=\tilde{V}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\quad\text{on }\partial\mathcal{S}_{+}^{n}\times\mathbb{R}.\label{eq:Carl3}
\end{equation}
We denote the norm and the scalar product in the bulk and the boundary
space by
\begin{align*}
\|\bullet\| :=\|\bullet\|_{L^{2}(\mathcal{S}_{+}^{n}\times\mathbb{R})}, & \qquad \|\bullet\|_{0} :=\|\bullet\|_{L^{2}(\partial\mathcal{S}_{+}^{n}\times\mathbb{R})}\\
\langle\bullet,\bullet\rangle :=\langle\bullet,\bullet\rangle_{L^{2}(\mathcal{S}_{+}^{n}\times\mathbb{R})}, & \qquad
\langle\bullet,\bullet\rangle_{0} :=\langle\bullet,\bullet\rangle_{L^{2}(\partial\mathcal{S}_{+}^{n}\times\mathbb{R})}
\end{align*}
and we omit the notation ``$\lim_{\omega_{n+1}\rightarrow0}$''
in $\|\bullet\|_{0}$ and $\langle\bullet,\bullet\rangle_{0}$.

\textbf{Step 3: Ellipticity of $\tilde{\Delta}_{\omega}$.} We now establish the following ellipticity estimate for $\tilde{\Delta}_{\omega}$:
\begin{lem}
\label{lem:ellip} Suppose \eqref{eq:Carl3} holds. Then
\begin{align*}
\|\tilde{\Delta}_{\omega}\overline{v}\|^{2}\ge & c_{0}\sum_{(j,k)\neq(n+1,n+1)}\|\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\Omega_{k}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}\\
& -C\bigg(\sum_{j=1}^{n+1}\|\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}+\|\overline{v}\|^{2}+\|(|\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})\nabla_{\omega}'\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}\\
& \qquad+\||\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}\bigg).
\end{align*}
\end{lem}
\begin{proof}
Note that
\begin{align*}
\|\tilde{\Delta}_{\omega}\overline{v}\|^{2}= & \bigg\|\sum_{j=1}^{n}\omega_{n+1}^{\frac{2s-1}{2}}\Omega_{j}\omega_{n+1}^{1-2s}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}+\omega_{n+1}^{\frac{2s-1}{2}}\Omega_{n+1}\omega_{n+1}^{1-2s}\Omega_{n+1}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\bigg\|^{2}\\
\ge & \bigg\|\sum_{j=1}^{n}\omega_{n+1}^{\frac{2s-1}{2}}\Omega_{j}\omega_{n+1}^{1-2s}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\bigg\|^{2}\\
& +2\sum_{j=1}^{n}\langle\omega_{n+1}^{\frac{2s-1}{2}}\Omega_{j}\omega_{n+1}^{1-2s}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v},\omega_{n+1}^{\frac{2s-1}{2}}\Omega_{n+1}\omega_{n+1}^{1-2s}\Omega_{n+1}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\rangle.
\end{align*}
Integration by parts gives
\begin{align*}
& \int_{\mathbb{R}_{+}^{n+1}}(\Omega_{n+1}v)u\,dx+\int_{\mathbb{R}_{+}^{n+1}}v(\Omega_{n+1}u)\,dx=\int_{\mathbb{R}_{+}^{n+1}}\Omega_{n+1}(uv)\,dx\\
= & \int_{\mathbb{R}_{+}^{n+1}}|x|\partial_{n+1}(uv)\,dx-\int_{\mathcal{S}_{+}^{n}}\int_{0}^{\infty}r\omega_{n+1}\partial_{t}(uv)r^{n}\,dr\,d\omega\\
= & -\int_{\mathbb{R}^{n}\times\{0\}}|x'|uv\,dx'-\int_{\mathbb{R}_{+}^{n+1}}\omega_{n+1}uv\,dx+(n+1)\int_{\mathcal{S}_{+}^{n}}\int_{0}^{\infty}\omega_{n+1}(uv)r^{n}\,dr\,d\omega\\
= & -\int_{\mathbb{R}^{n}\times\{0\}}|x'|uv\,dx'+n\int_{\mathbb{R}_{+}^{n+1}}\omega_{n+1}uv\,dx.
\end{align*}
A similar integration by parts formula holds for $\Omega_{j}$, $j=1,\cdots,n$.
Indeed, by \eqref{eq:Carl0}, we know that for $j=1,\cdots,n$, $\Omega_{j}$
and $\omega_{n+1}$ commute up to lower order terms. So, to
estimate the first term, it suffices to estimate $\|\sum_{j=1}^{n}\Omega_{j}^{2}\overline{v}\|^{2}$.
Finally, the lower order terms can be easily estimated using integration by parts.
\end{proof}
Define $L^{-}:=S+A+(I)-(II)+(III)$, and set
\[
\mathscr{D} := \|L^{+}\overline{v}\|^{2}-\|L^{-}\overline{v}\|^{2} \quad \text{and} \quad \mathscr{S} := \||\varphi'|^{-\frac{1}{2}}L^{+}\overline{v}\|^{2}+\||\varphi'|^{-\frac{1}{2}}L^{-}\overline{v}\|^{2}.
\]
\textbf{Step 4: Estimating the difference $\mathscr{D}$.} Observe that $\mathscr{D}=-4\langle S\overline{v},A\overline{v}\rangle+R$, where
\[
R=4\langle S\overline{v},(II)\overline{v}\rangle-4\langle A\overline{v},(I)\overline{v}\rangle-4\langle A\overline{v},(III)\overline{v}\rangle+4\langle(I)\overline{v},(II)\overline{v}\rangle+4\langle(II)\overline{v},(III)\overline{v}\rangle.
\]
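The decomposition of $\mathscr{D}$ above is an instance of polarization. Assuming the sign convention $L^{+}=S-A+(I)+(II)+(III)$ (which is consistent with the cross terms collected in $R$), one may group
\[
L^{\pm}=\big(S+(I)+(III)\big)\pm\big((II)-A\big),
\]
so the elementary identity $\|a+b\|^{2}-\|a-b\|^{2}=4\langle a,b\rangle$ yields
\[
\mathscr{D}=4\big\langle(S+(I)+(III))\overline{v},\,((II)-A)\overline{v}\big\rangle=-4\langle S\overline{v},A\overline{v}\rangle+R.
\]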
By using \eqref{eq:Carl0} and integration by parts, we can compute
\begin{align*}
-4\langle S\overline{v},A\overline{v}\rangle\ge & 4\tau\||\varphi''|^{\frac{1}{2}}\partial_{t}\overline{v}\|^{2}-4\tau\sum_{j=1}^{n+1}\||\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}+\frac{119}{10}\tau^{3}\|\varphi'|\varphi''|^{\frac{1}{2}}\overline{v}\|^{2}\\
& -2\tau\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}})|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}.
\end{align*}
Since
\[
\max_{1\le j,k\le n}|a_{jk}-\delta_{jk}|+\max_{1\le j,k\le n}|\partial_{t}a_{jk}|+\max_{1\le j,k\le n}|\nabla_{\omega}'a_{jk}|\le\epsilon,
\]
by using integration by parts again, we reach
\begin{align*}
R\ge & -\tau\epsilon C\||\varphi'|^{\frac{1}{2}}\partial_{t}\overline{v}\|^{2}-\tau\epsilon C\sum_{j=1}^{n+1}\||\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}-\tau^{3}\epsilon C\|\varphi'|\varphi''|^{\frac{1}{2}}\overline{v}\|^{2}\\
& -\tau\epsilon C\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}.
\end{align*}
Hence, for sufficiently small $\epsilon>0$ and sufficiently large $\tau_{0}$, we obtain
\begin{align}
\mathscr{D}\ge & \frac{39}{10}\tau\||\varphi''|^{\frac{1}{2}}\partial_{t}\overline{v}\|^{2}-\frac{41}{10}\tau\sum_{j=1}^{n+1}\||\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}+\frac{118}{10}\tau^{3}\|\varphi'|\varphi''|^{\frac{1}{2}}\overline{v}\|^{2}\nonumber \\
& -C\tau\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}.\label{eq:Carl-difference}
\end{align}
\textbf{Step 5: Estimating the sum $\mathscr{S}$.} Note that
\begin{align*}
\mathscr{S}\ge & 2\||\varphi'|^{-\frac{1}{2}}S\overline{v}\|^{2}+2\||\varphi'|^{-\frac{1}{2}}A\overline{v}\|^{2}\\
& -C\epsilon\||\varphi'|^{-\frac{1}{2}}\partial_{t}^{2}\overline{v}\|^{2}-C\epsilon\sum_{j=1}^{n}\||\varphi'|^{-\frac{1}{2}}\partial_{t}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}-C\epsilon\sum_{j,k=1}^{n}\||\varphi'|^{-\frac{1}{2}}\Omega_{j}\Omega_{k}\overline{v}\|^{2}\\
& -C\epsilon\tau^{2}\||\varphi'|^{\frac{1}{2}}\partial_{t}\overline{v}\|^{2}-C\epsilon\tau^{2}\sum_{j=1}^{n}\||\varphi'|^{\frac{1}{2}}\Omega_{j}\overline{v}\|^{2}-C\epsilon\tau^{4}\||\varphi'|^{\frac{3}{2}}\overline{v}\|^{2}.
\end{align*}
Observe that
\[
2\||\varphi'|^{-\frac{1}{2}}S\overline{v}\|^{2}\ge\frac{19}{10}\||\varphi'|^{-\frac{1}{2}}\partial_{t}^{2}\overline{v}+|\varphi'|^{-\frac{1}{2}}\tilde{\Delta}_{\omega}\overline{v}+\tau^{2}|\varphi'|^{\frac{3}{2}}\overline{v}\|^{2}-C\tau^{2}\||\varphi''|^{\frac{1}{2}}\overline{v}\|^{2}.
\]
For $\delta\in(0,1)$, write
\begin{align*}
& \||\varphi'|^{-\frac{1}{2}}\partial_{t}^{2}\overline{v}+|\varphi'|^{-\frac{1}{2}}\tilde{\Delta}_{\omega}\overline{v}+\tau^{2}|\varphi'|^{\frac{3}{2}}\overline{v}\|^{2}\\
= & \||\varphi'|^{-\frac{1}{2}}\partial_{t}^{2}\overline{v}\|^{2}+(1-\delta)\||\varphi'|^{-\frac{1}{2}}\tilde{\Delta}_{\omega}\overline{v}\|^{2}+\delta\||\varphi'|^{-\frac{1}{2}}\tilde{\Delta}_{\omega}\overline{v}\|^{2}+\tau^{4}\||\varphi'|^{\frac{3}{2}}\overline{v}\|^{2}\\
& +\langle|\varphi'|^{-1}\partial_{t}^{2}\overline{v},\tilde{\Delta}_{\omega}\overline{v}\rangle+\tau^{2}\langle\varphi'\partial_{t}^{2}\overline{v},\overline{v}\rangle+\tau^{2}\langle\varphi'\tilde{\Delta}_{\omega}\overline{v},\overline{v}\rangle.
\end{align*}
Hence, integrating by parts, applying Lemma \ref{lem:ellip}
to the term $\delta\||\varphi'|^{-\frac{1}{2}}\tilde{\Delta}_{\omega}\overline{v}\|^{2}$,
and choosing first $\delta>0$ small and then $\epsilon>0$ small, we
obtain
\begin{align}
\mathscr{S}\ge & \frac{19}{10}\||\varphi'|^{-\frac{1}{2}}\partial_{t}^{2}\overline{v}\|^{2}+\frac{19}{10}\sum_{j=1}^{n+1}\||\varphi'|^{-\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\partial_{t}\overline{v}\|^{2}\nonumber \\
& +c_{1}\sum_{(j,k)\neq(n+1,n+1)}\||\varphi'|^{-\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\Omega_{k}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}+\frac{18}{10}\||\varphi'|^{-\frac{1}{2}}\tilde{\Delta}_{\omega}\overline{v}\|^{2}\nonumber \\
& +\frac{9}{10}\tau^{4}\||\varphi'|^{\frac{3}{2}}\overline{v}\|^{2}+\frac{39}{10}\tau^{2}\||\varphi'|^{\frac{1}{2}}\partial_{t}\overline{v}\|^{2}-\frac{11}{10}\tau^{2}\sum_{j=1}^{n+1}\||\varphi'|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}\nonumber \\
& -C\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})|\varphi'|^{-\frac{1}{2}}\partial_{t}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}\nonumber \\
& -C\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})|\varphi'|^{-\frac{1}{2}}\nabla_{\omega}'\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}\nonumber \\
& -C\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})|\varphi'|^{\frac{1}{2}}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}.\label{eq:Carl-sum}
\end{align}
\textbf{Step 6: Combining the difference $\mathscr{D}$ and the sum $\mathscr{S}$.} Multiplying \eqref{eq:Carl-difference} by $\tau$ and adding \eqref{eq:Carl-sum},
we obtain
\begin{align}
& (\tau+1)\|L^{+}\overline{v}\|^{2}\ge\tau\mathscr{D}+\mathscr{S}\nonumber \\
\ge & c_{1}\bigg(\||\varphi'|^{-\frac{1}{2}}\partial_{t}^{2}\overline{v}\|^{2}+\sum_{j=1}^{n+1}\||\varphi'|^{-\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\partial_{t}\overline{v}\|^{2}\nonumber \\
& \qquad+\sum_{(j,k)\neq(n+1,n+1)}\||\varphi'|^{-\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\Omega_{k}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}\bigg)\nonumber \\
& +\frac{39}{5}\tau^{2}\||\varphi'|^{\frac{1}{2}}\partial_{t}\overline{v}\|^{2}+\frac{208}{10}\tau^{4}\|\varphi'|\varphi''|^{\frac{1}{2}}\overline{v}\|^{2}-\frac{11}{10}\tau^{2}\sum_{j=1}^{n+1}\||\varphi'|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}\nonumber \\
& +\frac{18}{10}\||\varphi'|^{-\frac{1}{2}}\tilde{\Delta}_{\omega}\overline{v}\|^{2}\nonumber \\
& -C\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})|\varphi'|^{-\frac{1}{2}}\partial_{t}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}\nonumber \\
& -C\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})|\varphi'|^{-\frac{1}{2}}\nabla_{\omega}'\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}\nonumber \\
& -C\tau^{2}\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}.\label{eq:Carl4}
\end{align}
\textbf{Step 7: Obtaining gradient estimates.} Note that
\begin{align}
& \frac{12}{10}\tau^{2}\sum_{j=1}^{n+1}\||\varphi'|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}+\frac{12}{10}\tau^{2}\langle\tilde{V}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v},\varphi'\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\rangle_{0}\nonumber \\
= & -\frac{12}{10}\tau^{2}\langle\varphi'\overline{v},\tilde{\Delta}_{\omega}\overline{v}\rangle\le\frac{16}{10}\||\varphi'|^{-\frac{1}{2}}\tilde{\Delta}_{\omega}\overline{v}\|^{2}+\frac{144}{100}\tau^{4}\||\varphi'|^{\frac{3}{2}}\overline{v}\|^{2}.\label{eq:Carl5}
\end{align}
\textbf{Step 8: Conclusion.} Summing up \eqref{eq:Carl4} and \eqref{eq:Carl5}, we reach
\begin{align}
& \||\varphi'|^{-\frac{1}{2}}\partial_{t}^{2}\overline{v}\|^{2}+\sum_{j=1}^{n+1}\||\varphi'|^{-\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\partial_{t}\overline{v}\|^{2}\nonumber \\
& +\sum_{(j,k)\neq(n+1,n+1)}\||\varphi'|^{-\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\Omega_{k}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}\nonumber \\
& +\tau^{2}\||\varphi'|^{\frac{1}{2}}\partial_{t}\overline{v}\|^{2}+\tau^{2}\sum_{j=1}^{n+1}\||\varphi'|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\Omega_{j}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}+\tau^{4}\|\varphi'|\varphi''|^{\frac{1}{2}}\overline{v}\|^{2}\nonumber \\
\le & C\tau\|\tilde{f}\|^{2}+C\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})|\varphi'|^{-\frac{1}{2}}\partial_{t}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}\nonumber \\
& +C\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})|\varphi'|^{-\frac{1}{2}}\nabla_{\omega}'\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}\nonumber \\
& +C\tau^{2}\|(|\tilde{V}|^{\frac{1}{2}}+|\partial_{t}\tilde{V}|^{\frac{1}{2}}+|\nabla_{\omega}'\tilde{V}|^{\frac{1}{2}})|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|_{0}^{2}.\label{eq:Carl6}
\end{align}
Changing back to Cartesian coordinates, we obtain our result.
\end{proof}
\subsection{A Carleman estimate without differentiability assumptions}
Imitating the splitting arguments in \cite[Theorem~5]{RW19Landis}, we can prove the following Carleman estimate.
\begin{thm}
\label{thm:Carl-ndiff} Let $s\in(0,1)$ and let $\tilde{u}\in H^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s})$
with ${\rm supp}(\tilde{u})\subset\mathbb{R}_{+}^{n+1}\setminus B_{1}^{+}$
be a solution to
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}a_{jk}\partial_{j}\partial_{k}\bigg]\tilde{u} & =f\quad\text{in }\mathbb{R}_{+}^{n+1},\\
\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u} & =V\tilde{u}\quad\text{on }\mathbb{R}^{n}\times\{0\},
\end{align*}
where $x=(x',x_{n+1})\in\mathbb{R}^{n}\times\mathbb{R}_{+}$, $f\in L^{2}(\mathbb{R}_{+}^{n+1},x_{n+1}^{2s-1})$
with compact support in $\mathbb{R}_{+}^{n+1}$, and $V\in L^{\infty}(\mathbb{R}^{n})$.
Assume that
\[
\max_{1\le j,k\le n}\sup_{|x'|\ge1}|a_{jk}(x')-\delta_{jk}|+\max_{1\le j,k\le n}\sup_{|x'|\ge1}|x'||\nabla'a_{jk}(x')|\le\epsilon
\]
for some sufficiently small $\epsilon>0$. Let further $\phi(x)=|x|^{\alpha}$
for $\alpha\ge1$. Then there exist constants $C=C(n,s,\alpha)$
and $\tau_{0}=\tau_{0}(n,s,\alpha)$ such that
\begin{align*}
& \tau^{3}\|e^{\tau\phi}|x|^{\frac{3\alpha}{2}-1}x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\|e^{\tau\phi}|x|^{\frac{\alpha}{2}}x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
\le & C\bigg[\|e^{\tau\phi}x_{n+1}^{\frac{2s-1}{2}}|x|f\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau^{2-2s}\|e^{\tau\phi}V|x|^{(1-\alpha)s}\tilde{u}\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\bigg]
\end{align*}
for all $\tau\ge\tau_{0}$.
\end{thm}
\begin{proof}
[Proof of Theorem~{\rm \ref{thm:Carl-ndiff}}] \textbf{Step 1: Changing the coordinates.} As in the proof of Theorem
\ref{thm:Carl-diff}, we first pass to conformal coordinates. With
the notation introduced before, recall \eqref{eq:Carl1}:
\[
\bigg[\omega_{n+1}^{1-2s}\partial_{t}^{2}+\sum_{j=1}^{n+1}\Omega_{j}\omega_{n+1}^{1-2s}\Omega_{j}-\omega_{n+1}^{1-2s}\frac{(n-2s)^{2}}{4}\bigg]\overline{u}+R\overline{u}=\tilde{f},
\]
where
\begin{align*}
R= & \omega_{n+1}^{1-2s}\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[\omega_{j}\omega_{k}\partial_{t}^{2}+\omega_{j}\Omega_{k}\partial_{t}+\omega_{k}\Omega_{j}\partial_{t}+\frac{1}{2}\Omega_{j}\Omega_{k}+\frac{1}{2}\Omega_{k}\Omega_{j}\bigg]\\
& +\omega_{n+1}^{1-2s}\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[(\delta_{jk}-(n+2-2s)\omega_{j}\omega_{k})\partial_{t}\\
& \qquad-\frac{n+1-2s}{2}\omega_{j}\Omega_{k}-\frac{n+1-2s}{2}\omega_{k}\Omega_{j}\bigg]\\
& +\omega_{n+1}^{1-2s}\sum_{j,k=1}^{n}(a_{jk}-\delta_{jk})\bigg[\frac{(n-2s)^{2}}{4}\omega_{j}\omega_{k}-\frac{n-2s}{2}(\delta_{jk}-2\omega_{j}\omega_{k})\bigg].
\end{align*}
\textbf{Step 2: Splitting $\overline{u}$ into elliptic and subelliptic parts.} We split $\overline{u}$ into two parts $\overline{u}=u_{1}+u_{2}$.
Here $u_{1}$ is a solution to
\begin{align}
\bigg[\omega_{n+1}^{1-2s}\partial_{t}^{2}+\sum_{j=1}^{n+1}\Omega_{j}\omega_{n+1}^{1-2s}\Omega_{j}-\omega_{n+1}^{1-2s}\frac{(n-2s)^{2}}{4}-K^{2}\tau^{2}|\varphi'|^{2}\omega_{n+1}^{1-2s}\bigg]u_{1}+Ru_{1} & =\tilde{f}\quad\text{in }\mathcal{S}_{+}^{n}\times\mathbb{R},\label{eq:Carl-ndiff1}\\
\lim_{\omega_{n+1}\rightarrow0}\omega_{n+1}^{1-2s}\Omega_{n+1}u_{1}=\lim_{\omega_{n+1}\rightarrow0}\omega_{n+1}^{1-2s}\Omega_{n+1}\overline{u} & \quad\text{on }\partial\mathcal{S}_{+}^{n}\times\mathbb{R}.\nonumber
\end{align}
We remark that the existence of a unique energy solution to this problem follows from the Lax--Milgram theorem in $H^{1}(\mathcal{S}_{+}^{n}\times\mathbb{R},\omega_{n+1}^{1-2s})$.
\textbf{Step 2.1: Obtaining an elliptic estimate.} Testing \eqref{eq:Carl-ndiff1} with $\tau^{2}e^{2\tau\varphi}|\varphi''|^{2}u_{1}$, we reach, for any $\delta>0$,
\begin{align*}
& \tau^{2}\|e^{\tau\varphi}\varphi''\omega_{n+1}^{\frac{1-2s}{2}}\partial_{t}u_{1}\|^{2}+\tau^{2}\|e^{\tau\varphi}\varphi''\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\mathcal{S}^{n}}u_{1}\|^{2}+\tau^{2}\frac{(n-2s)^{2}}{4}\|e^{\tau\varphi}\varphi''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|^{2}\\
& +K^{2}\tau^{4}\|e^{\tau\varphi}\varphi'\varphi''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|^{2}\\
= & -\tau^{2}\langle e^{\tau\varphi}\omega_{n+1}^{\frac{2s-1}{2}}\tilde{f},e^{\tau\varphi}|\varphi''|^{2}\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\rangle-\langle\tau e^{\tau\varphi}\varphi'''\omega_{n+1}^{\frac{1-2s}{2}}\partial_{t}u_{1},\tau^{2}e^{\tau\varphi}\varphi'\varphi''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\rangle\\
& +\tau^{2}\langle Ru_{1},e^{2\tau\varphi}|\varphi''|^{2}u_{1}\rangle-2\langle\tau e^{\tau\varphi}\varphi''\omega_{n+1}^{\frac{1-2s}{2}}\partial_{t}u_{1},\tau e^{\tau\varphi}\varphi'''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\rangle\\
& -\tau^{2}\langle e^{\tau\varphi}\varphi''e^{\alpha st}\omega_{n+1}^{1-2s}\Omega_{n+1}u_{1},e^{\tau\varphi}\varphi'\varphi''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\rangle_{0}\\
\le & \|e^{\tau\varphi}\omega_{n+1}^{\frac{2s-1}{2}}\tilde{f}\|^{2}+\tau^{4}\|e^{\tau\varphi}|\varphi''|^{2}\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|^{2}+\delta\tau^{2}\|e^{\tau\varphi}\varphi'''\omega_{n+1}^{\frac{1-2s}{2}}\partial_{t}u_{1}\|^{2}\\
& +C_{\delta}\tau^{4}\|e^{\tau\varphi}\varphi'\varphi''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|^{2}+\delta\tau^{2}\|e^{\tau\varphi}\varphi'''\omega_{n+1}^{\frac{1-2s}{2}}\partial_{t}u_{1}\|^{2}+C_{\delta}\tau^{2}\|e^{\tau\varphi}\varphi'''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|^{2}\\
& +\tau^{2}\|e^{\tau\varphi}\varphi''e^{\alpha st}\omega_{n+1}^{1-2s}\Omega_{n+1}\overline{u}\|_{0}\|e^{\tau\varphi}\varphi'\varphi''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|_{0}+\tau^{2}|\langle Ru_{1},e^{2\tau\varphi}|\varphi''|^{2}u_{1}\rangle|.
\end{align*}
First, we choose $\delta>0$ and $\epsilon>0$ small, and then choose $K>1$ large, so that
\begin{align}
& \tau^{2}\|e^{\tau\varphi}\varphi''\omega_{n+1}^{\frac{1-2s}{2}}\partial_{t}u_{1}\|^{2}+\tau^{2}\|e^{\tau\varphi}\varphi''\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\mathcal{S}^{n}}u_{1}\|^{2}+\tau^{4}\|e^{\tau\varphi}\varphi'\varphi''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|^{2}\nonumber \\
\le & C\|e^{\tau\varphi}\omega_{n+1}^{\frac{2s-1}{2}}\tilde{f}\|^{2}+C\tau^{2}\|e^{\tau\varphi}\varphi''e^{\alpha st}\omega_{n+1}^{1-2s}\Omega_{n+1}\overline{u}\|_{0}\|e^{\tau\varphi}\varphi'\varphi''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|_{0}\nonumber \\
\le & C\|e^{\tau\varphi}\omega_{n+1}^{\frac{2s-1}{2}}\tilde{f}\|^{2}+C_{\eta}\tau^{2-2s}\|e^{\tau\varphi}\varphi''e^{\alpha st}\omega_{n+1}^{1-2s}\Omega_{n+1}\overline{u}\|_{0}^{2}+\eta\tau^{2+2s}\|e^{\tau\varphi}\varphi'\varphi''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|_{0}^{2}.\label{eq:Carl-ndiff2}
\end{align}
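The last absorption is Young's inequality with the splitting $\tau^{2}=\tau^{1-s}\cdot\tau^{1+s}$: for any $\eta>0$ and nonnegative $a,b$,
\[
\tau^{2}ab=\big(\tau^{1-s}a\big)\big(\tau^{1+s}b\big)\le C_{\eta}\,\tau^{2-2s}a^{2}+\eta\,\tau^{2+2s}b^{2},\qquad C_{\eta}=\frac{1}{4\eta},
\]
which is the origin of the exponents $2-2s$ and $2+2s$ in the boundary terms.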
From Proposition \ref{prop:inter1}, we have
\[
|\varphi''|^{2}e^{2\alpha st}\int_{\partial\mathcal{S}_{+}^{n}}u_{1}^{2}\le C\tilde{\tau}^{2-2s}|\varphi''|^{2}e^{2\alpha st}\int_{\mathcal{S}_{+}^{n}}\omega_{n+1}^{1-2s}u_{1}^{2}+C\tilde{\tau}^{-2s}|\varphi''|^{2}e^{2\alpha st}\int_{\mathcal{S}_{+}^{n}}\omega_{n+1}^{1-2s}|\nabla_{\mathcal{S}^{n}}u_{1}|^{2}.
\]
Choosing $\tilde{\tau}=e^{\alpha t}\tau$, we reach
\[
|\varphi''|^{2}e^{2\alpha st}\int_{\partial\mathcal{S}_{+}^{n}}u_{1}^{2}\le C\tau^{2-2s}|\varphi''|^{2}e^{2\alpha t}\int_{\mathcal{S}_{+}^{n}}\omega_{n+1}^{1-2s}u_{1}^{2}+C\tau^{-2s}|\varphi''|^{2}\int_{\mathcal{S}_{+}^{n}}\omega_{n+1}^{1-2s}|\nabla_{\mathcal{S}^{n}}u_{1}|^{2}.
\]
Multiplying by $e^{2\tau\varphi}$, using that $\varphi'=\alpha e^{\alpha t}$,
and integrating in the radial direction, we obtain
\[
\tau^{2+2s}\|e^{\tau\varphi}|\varphi''|e^{\alpha st}u_{1}\|_{0}^{2}\le C\tau^{4}\|e^{\tau\varphi}\omega_{n+1}^{\frac{1-2s}{2}}\varphi'\varphi''u_{1}\|^{2}+C\tau^{2}\|e^{\tau\varphi}\omega_{n+1}^{\frac{1-2s}{2}}\varphi''\nabla_{\mathcal{S}^{n}}u_{1}\|^{2}.
\]
Plugging the inequality above into \eqref{eq:Carl-ndiff2} and choosing $\eta>0$ small, we obtain
\begin{align}
& \tau^{2}\|e^{\tau\varphi}\varphi''\omega_{n+1}^{\frac{1-2s}{2}}\partial_{t}u_{1}\|^{2}+\tau^{2}\|e^{\tau\varphi}\varphi''\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\mathcal{S}^{n}}u_{1}\|^{2}+\tau^{4}\|e^{\tau\varphi}\varphi'\varphi''\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|^{2}\nonumber \\
\le & C\|e^{\tau\varphi}\omega_{n+1}^{\frac{2s-1}{2}}\tilde{f}\|^{2}+C\tau^{2-2s}\|e^{\tau\varphi}\varphi''e^{\alpha st}\omega_{n+1}^{1-2s}\Omega_{n+1}\overline{u}\|_{0}^{2}.\label{eq:Carl-ndiff3}
\end{align}
\textbf{Step 2.2: Obtaining a sub-elliptic estimate.} Note that $u_{2}$ satisfies
\begin{align*}
\bigg[\omega_{n+1}^{1-2s}\partial_{t}^{2}+\sum_{j=1}^{n+1}\Omega_{j}\omega_{n+1}^{1-2s}\Omega_{j}-\omega_{n+1}^{1-2s}\frac{(n-2s)^{2}}{4}\bigg]u_{2}+Ru_{2}=-K^{2}\tau^{2}|\varphi'|^{2}\omega_{n+1}^{1-2s}u_{1} & \quad\text{in }\mathcal{S}_{+}^{n}\times\mathbb{R},\\
\lim_{\omega_{n+1}\rightarrow0}\omega_{n+1}^{1-2s}\Omega_{n+1}u_{2}=0 & \quad\text{on }\partial\mathcal{S}_{+}^{n}\times\mathbb{R}.
\end{align*}
To compare with \eqref{eq:Carl1}, we should put
\[
\tilde{f}=-K^{2}\tau^{2}|\varphi'|^{2}\omega_{n+1}^{1-2s}u_{1}\quad\text{and}\quad\tilde{V}\equiv0
\]
in \eqref{eq:Carl6}. Omitting the second derivative terms, we obtain
\[
\tau^{3}\|\varphi'|\varphi''|^{\frac{1}{2}}\overline{v}\|^{2}+\tau\||\varphi''|^{\frac{1}{2}}\partial_{t}\overline{v}\|^{2}+\tau\||\varphi'|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\mathcal{S}^{n}}\omega_{n+1}^{\frac{2s-1}{2}}\overline{v}\|^{2}\le CK^{4}\tau^{4}\|e^{\tau\varphi}|\varphi'|^{2}\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|^{2},
\]
that is,
\begin{align}
& \tau^{3}\|e^{\tau\varphi}\varphi'|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}u_{2}\|^{2}+\tau\|e^{\tau\varphi}|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\partial_{t}u_{2}\|^{2}+\tau\|e^{\tau\varphi}|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\mathcal{S}^{n}}u_{2}\|^{2}\nonumber \\
\le & CK^{4}\tau^{4}\|e^{\tau\varphi}|\varphi'|^{2}\omega_{n+1}^{\frac{1-2s}{2}}u_{1}\|^{2}.\label{eq:Carl-ndiff4}
\end{align}
\textbf{Step 3: Conclusion.} Summing \eqref{eq:Carl-ndiff3} and \eqref{eq:Carl-ndiff4} and using
$\overline{u}=u_{1}+u_{2}$, we obtain
\begin{align}
& \tau^{3}\|e^{\tau\varphi}\varphi'|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\overline{u}\|^{2}+\tau\|e^{\tau\varphi}|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\partial_{t}\overline{u}\|^{2}+\tau\|e^{\tau\varphi}|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\mathcal{S}^{n}}\overline{u}\|^{2} \nonumber \\
\le & C\bigg[\|e^{\tau\varphi}\omega_{n+1}^{\frac{2s-1}{2}}\tilde{f}\|^{2}+\tau^{2-2s}\|e^{\tau\varphi}\varphi''e^{\alpha st}\omega_{n+1}^{1-2s}\Omega_{n+1}\overline{u}\|_{0}^{2}\bigg]. \label{eq:carl-ndff-polar}
\end{align}
Finally, plugging in the boundary condition
\[
\lim_{\omega_{n+1}\rightarrow0}\omega_{n+1}^{1-2s}\Omega_{n+1}\overline{u}=\tilde{V}\overline{u},
\]
and switching back to Cartesian coordinates, we obtain our result.
\end{proof}
\section{\label{sec:result}Proofs of Theorem \ref{thm:result1} and Theorem \ref{thm:result2}}
\begin{proof}
[Proof of Theorem~{\rm \ref{thm:result1}}] \textbf{Step 1: Applying Carleman estimate.} Define $w:=\eta_{R}\tilde{u}$,
where $\eta_{R}$ is radial,
\begin{equation}
\eta_{R}(x)=\begin{cases}
1 & ,2\le|x|\le R,\\
0 & ,|x|\le1\text{ or }|x|\ge2R,
\end{cases} \label{eq:eta-R-cutoff}
\end{equation}
and satisfies
\begin{align*}
|\nabla\eta_{R}| & \le C/R,\quad|\nabla^{2}\eta_{R}|\le C/R^{2}\quad\text{in }A_{R,2R}^{+},\\
|\nabla\eta_{R}| & \le C,\quad|\nabla^{2}\eta_{R}|\le C\quad\text{in }A_{1,2}^{+}.
\end{align*}
Note that
\[
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}a_{jk}\partial_{j}\partial_{k}\bigg]w=f,
\]
where
\begin{align*}
f= & x_{n+1}^{1-2s}\bigg[(1-2s)x_{n+1}^{-1}\partial_{n+1}\eta_{R}\bigg]\tilde{u}+x_{n+1}^{1-2s}\bigg[\partial_{n+1}^{2}\eta_{R}+\sum_{j,k=1}^{n}a_{jk}\partial_{j}\partial_{k}\eta_{R}\bigg]\tilde{u}\\
& +2x_{n+1}^{1-2s}\bigg[(\partial_{n+1}\eta_{R})(\partial_{n+1}\tilde{u})+\sum_{j,k=1}^{n}a_{jk}(\partial_{k}\eta_{R})(\partial_{j}\tilde{u})\bigg]\\
& -x_{n+1}^{1-2s}\sum_{j,k=1}^{n}(\partial_{j}a_{jk})(\partial_{k}\tilde{u})\eta_{R}.
\end{align*}
Since $\eta_{R}$ is radial, we have $\partial_{n+1}\eta_{R}=\eta_{R}'\partial_{n+1}|x|=0$
on $\mathbb{R}^{n}\times\{0\}$. Thus,
\[
\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}w=\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\eta_{R}\partial_{n+1}\tilde{u}=c_{n,s}^{-1}q\eta_{R}u=c_{n,s}^{-1}qw\quad\text{on }\mathbb{R}^{n}\times\{0\}.
\]
Note that $w$ is admissible in the Carleman estimate in Theorem \ref{thm:Carl-diff}.
For $\beta>1$, since $|q|\le1$ and $|x'||\nabla' q|\le1$, we have
\begin{align}
& \tau^{3}\|e^{\tau\phi}|x|^{\frac{3\beta}{2}-1}x_{n+1}^{\frac{1-2s}{2}}w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\|e^{\tau\phi}|x|^{\frac{\beta}{2}}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\nonumber \\
& +\tau^{-1}\|e^{\tau\phi}|x|^{-\frac{\beta}{2}+1}x_{n+1}^{\frac{1-2s}{2}}\nabla(\nabla'w)\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\nonumber \\
\le & C\bigg[\|e^{\tau\phi}x_{n+1}^{\frac{2s-1}{2}}|x|f\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\|e^{\tau\phi}|x|^{\frac{\beta}{2}}w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\nonumber \\
& \qquad+\tau^{-1}\|e^{\tau\phi}|x|^{-\frac{\beta}{2}+1}\nabla'w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\bigg].\label{eq:pf1}
\end{align}
\textbf{Step 2: Estimating the bulk contributions.} Since $1\le\frac{|x|}{R}$ in $A_{R,2R}^{+}$ and $1\le|x|$ in $A_{1,2}^{+}$,
we have
\begin{align*}
& \|e^{\tau\phi}x_{n+1}^{\frac{2s-1}{2}}|x|f\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
\le & C\bigg[R^{-4}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}+R^{-2}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\nabla\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}\\
& +\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\tilde{u}\|_{L^{2}(A_{1,2}^{+})}^{2}+\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\nabla\tilde{u}\|_{L^{2}(A_{1,2}^{+})}^{2}+\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\bigg].
\end{align*}
Write $\tilde{\phi}(r)=\phi(x)=r^{\beta}$ with $r=|x|$, and note that
\begin{align*}
& R^{-4}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}+R^{-2}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\nabla\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}\\
\le & C\bigg[R^{-2}e^{\tau\tilde{\phi}(2R)}\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}+e^{\tau\tilde{\phi}(2R)}\|x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}\bigg].
\end{align*}
Now we estimate $\|x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}$.
Choose $\xi_{R}$ satisfying
\[
\xi_{R}(x)=\begin{cases}
1 & ,R\le|x|\le2R,\\
0 & ,|x|\le\frac{R}{2}\text{ or }|x|\ge3R,
\end{cases}
\]
with $|\nabla\xi_{R}|\le C/R$ for $x\in A_{\frac{R}{2},R}^{+}$
or $x\in A_{2R,3R}^{+}$. Testing the equation $\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}+x_{n+1}^{1-2s}\sum_{j,k=1}^{n}\partial_{j}a_{jk}\partial_{k}\tilde{u}=0$
against the function $\tilde{u}\xi_{R}^{2}$, we obtain
\[
\|x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}\le C \bigg[ \|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(A_{\frac{R}{2},3R}')}^{2}+R^{-2}\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(A_{\frac{R}{2},3R}^{+})}^{2} \bigg].
\]
So,
\begin{align*}
& R^{-4}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}+R^{-2}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\nabla\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}\\
\le & C e^{\tau\tilde{\phi}(2R)} \bigg[\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(A_{\frac{R}{2},3R}')}^{2}+R^{-2}\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(A_{\frac{R}{2},3R}^{+})}^{2}\bigg].
\end{align*}
Using Proposition~\ref{prop:bulk-decay}, we have
\[
|\tilde{u}(x)|\le C_{1}e^{-C_{2}R^{\alpha}}\quad\text{for }x\in A_{\frac{R}{2},3R}^{+}.
\]
So, if we choose $\beta=\alpha-\epsilon$ for some $\epsilon\in(0,\alpha-1)$, then we have
\[
\lim_{R\rightarrow\infty}(R^{-4}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}+R^{-2}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\nabla\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2})=0.
\]
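This limit is a direct competition of exponents. Since $\beta=\alpha-\epsilon<\alpha$, on $A_{R,2R}^{+}$ we have $e^{2\tau\phi}\le e^{2\tau\tilde{\phi}(2R)}$, so the pointwise decay above gives, up to factors growing at most polynomially in $R$,
\[
e^{2\tau\tilde{\phi}(2R)}e^{-2C_{2}R^{\alpha}}=\exp\big(2^{\beta+1}\tau R^{\alpha-\epsilon}-2C_{2}R^{\alpha}\big)\longrightarrow0\quad\text{as }R\rightarrow\infty
\]
for each fixed $\tau$, because the term $-2C_{2}R^{\alpha}$ dominates.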
On the other hand, \eqref{eq:pf1} reads
\begin{align*}
& \tau^{3}\|e^{\tau\phi}|x|^{\frac{3\beta}{2}-1}x_{n+1}^{\frac{1-2s}{2}}w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\|e^{\tau\phi}|x|^{\frac{\beta}{2}}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
& +\tau^{-1}\|e^{\tau\phi}|x|^{-\frac{\beta}{2}+1}x_{n+1}^{\frac{1-2s}{2}}\nabla(\nabla'w)\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
\le & C\bigg[R^{-4}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}+R^{-2}\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\nabla\tilde{u}\|_{L^{2}(A_{R,2R}^{+})}^{2}\\
& +\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\tilde{u}\|_{L^{2}(A_{1,2}^{+})}^{2}+\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\nabla\tilde{u}\|_{L^{2}(A_{1,2}^{+})}^{2}+\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
& +\tau\|e^{\tau\phi}|x|^{\frac{\beta}{2}}w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}+\tau^{-1}\|e^{\tau\phi}|x|^{-\frac{\beta}{2}+1}\nabla'w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\bigg].
\end{align*}
Taking $R\rightarrow\infty$ in \eqref{eq:pf1} and choosing large
$\tau$, we reach
\begin{align}
& \tau^{3}\|e^{\tau\phi}|x|^{\frac{3\beta}{2}-1}x_{n+1}^{\frac{1-2s}{2}}w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\|e^{\tau\phi}|x|^{\frac{\beta}{2}}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\nonumber \\
& +\tau^{-1}\|e^{\tau\phi}|x|^{-\frac{\beta}{2}+1}x_{n+1}^{\frac{1-2s}{2}}\nabla(\nabla'w)\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\nonumber \\
\le & C\bigg[\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\tilde{u}\|_{L^{2}(A_{1,2}^{+})}^{2}+\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\nabla\tilde{u}\|_{L^{2}(A_{1,2}^{+})}^{2}\nonumber \\
& +\tau\|e^{\tau\phi}|x|^{\frac{\beta}{2}}w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}+\tau^{-1}\|e^{\tau\phi}|x|^{-\frac{\beta}{2}+1}\nabla'w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\bigg].\label{eq:pf2}
\end{align}
\textbf{Step 3: Estimating the boundary contributions.} Using Proposition~\ref{prop:inter1}, we have
\[
|\varphi''|e^{2st}\|v\|_{L^{2}(\partial\mathcal{S}_{+}^{n})}^{2}\le C\bigg[\tilde{\tau}^{2-2s}e^{2st}|\varphi''|\|\omega_{n+1}^{\frac{1-2s}{2}}v\|_{L^{2}(\mathcal{S}_{+}^{n})}^{2}+\tilde{\tau}^{-2s}e^{2st}|\varphi''|\|\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\omega}v\|_{L^{2}(\mathcal{S}_{+}^{n})}^{2}\bigg].
\]
Setting $e^{2st}\tilde{\tau}^{-2s}=\tau^{-2s}$ (i.e. $\tilde{\tau}=\tau e^{t}$),
our choice of $\varphi$ gives
\[
\tilde{\tau}^{2-2s}e^{2st}|\varphi''|=\tau^{2-2s}e^{2t}|\varphi''|\le\tau^{2-2s}|\varphi'|^{2}|\varphi''|.
\]
Hence, we reach
\[
\tau^{2s+1}|\varphi''|\|v\|_{L^{2}(\partial\mathcal{S}_{+}^{n})}^{2}\le C\bigg[\tau^{3}|\varphi''||\varphi'|^{2}\|\omega_{n+1}^{\frac{1-2s}{2}}v\|_{L^{2}(\mathcal{S}_{+}^{n})}^{2}+\tau|\varphi''|\|\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\omega}v\|_{L^{2}(\mathcal{S}_{+}^{n})}^{2}\bigg].
\]
Multiplying the above inequality by $e^{2\tau\varphi}$, and then integrating with respect to the radial variable
$t$, we obtain
\begin{align*}
& \tau^{2s+1}\|e^{\tau\varphi}|\varphi''|^{\frac{1}{2}}v\|_{L^{2}(\partial\mathcal{S}_{+}^{n}\times\mathbb{R})}^{2}\\
\le & C\bigg[\tau^{3}\|e^{\tau\varphi}\varphi'|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}v\|_{L^{2}(\mathcal{S}_{+}^{n}\times\mathbb{R})}^{2}+\tau\|e^{\tau\varphi}|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\omega}v\|_{L^{2}(\mathcal{S}_{+}^{n}\times\mathbb{R})}^{2} \bigg],
\end{align*}
that is,
\begin{align*}
& \tau^{2s+1}\|e^{\tau\phi}|x|^{\frac{\beta}{2}}w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\\
\le & C\bigg[\tau^{3}\|e^{\tau\phi}|x|^{\frac{3\beta}{2}-1}x_{n+1}^{\frac{1-2s}{2}}w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\|e^{\tau\phi}|x|^{\frac{\beta}{2}}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\bigg].
\end{align*}
Similarly, we have
\begin{align*}
& \tau^{2s-1}\|e^{\tau\phi}|x|^{-\frac{\beta}{2}+1}\nabla'w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}\\
\le & C\bigg[\tau\|e^{\tau\phi}|x|^{\frac{\beta}{2}}x_{n+1}^{\frac{1-2s}{2}}\nabla'w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau^{-1}\|e^{\tau\phi}|x|^{-\frac{\beta}{2}+1}x_{n+1}^{\frac{1-2s}{2}}\nabla(\nabla'w)\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\bigg].
\end{align*}
So, for large $\tau$, the boundary terms of \eqref{eq:pf2} are absorbed,
and we reach
\begin{align*}
& \tau^{3}\|e^{\tau\phi}|x|^{\frac{3\beta}{2}-1}x_{n+1}^{\frac{1-2s}{2}}w\|_{L^{2}(B_{6}^{+}\setminus B_{4}^{+})}^{2}+\tau\|e^{\tau\phi}|x|^{\frac{\beta}{2}}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|_{L^{2}(B_{6}^{+}\setminus B_{4}^{+})}^{2}\\
\le & \tau^{3}\|e^{\tau\phi}|x|^{\frac{3\beta}{2}-1}x_{n+1}^{\frac{1-2s}{2}}w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\tau\|e^{\tau\phi}|x|^{\frac{\beta}{2}}x_{n+1}^{\frac{1-2s}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
& +\tau^{-1}\|e^{\tau\phi}|x|^{-\frac{\beta}{2}+1}x_{n+1}^{\frac{1-2s}{2}}\nabla(\nabla'w)\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
\le & C\bigg[\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\tilde{u}\|_{L^{2}(A_{1,2}^{+})}^{2}+\|e^{\tau\phi}x_{n+1}^{\frac{1-2s}{2}}|x|\nabla\tilde{u}\|_{L^{2}(A_{1,2}^{+})}^{2}\bigg].
\end{align*}
Pulling out the exponential weight in the above estimate yields
\begin{align*}
& \tau^{3}e^{\tau\tilde{\phi}(4)}\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{6}^{+}\setminus B_{4}^{+})}^{2}+\tau e^{\tau\tilde{\phi}(4)}\|x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(B_{6}^{+}\setminus B_{4}^{+})}^{2}\\
\le & C\bigg[e^{\tau\tilde{\phi}(2)}\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(A_{1,2}^{+})}^{2}+e^{\tau\tilde{\phi}(2)}\|x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(A_{1,2}^{+})}^{2}\bigg].
\end{align*}
\textbf{Step 4: Conclusion.} Since $\tilde{\phi}(4)\ge\tilde{\phi}(2)$, letting $\tau\rightarrow\infty$
leads to a contradiction, unless $\tilde{u}=0$ in $B_{6}^{+}\setminus B_{4}^{+}$.
Finally, applying the unique continuation property for classical second order elliptic equations (see e.g. \cite[Theorem~1.1]{Reg97StrongUniquenessSecondOrderElliptic}), we conclude that $\tilde{u}\equiv0$.
\end{proof}
Following exactly the arguments in \cite[Theorem~2]{RW19Landis}, we can obtain Theorem~\ref{thm:result2}. For the sake of completeness, we give a sketch of its proof here.
\begin{proof}
[Sketch of the proof of Theorem~{\rm \ref{thm:result2}}] Let $\eta_{R}$
be the function given in \eqref{eq:eta-R-cutoff}, and write $\overline{w}(t,\theta)=\overline{u}(t,\theta)\eta_{R}(e^{t}\theta)\equiv\tilde{u}(e^{t}\theta)\eta_{R}(e^{t}\theta)$,
where $(t,\theta)$ is the conformal polar coordinate used in the
proof of Carleman estimates (Theorem~\ref{thm:Carl-diff} and Theorem~\ref{thm:Carl-ndiff}).
Plugging $\overline{w}$ into \eqref{eq:carl-ndff-polar} (i.e. the
Carleman estimate in Theorem~\ref{thm:Carl-ndiff} in conformal
polar coordinates) with $\varphi(t)=e^{\beta t}$ (that is, $\phi(x)=|x|^{\beta}$),
where $\frac{4s}{4s-1}<\beta<\alpha$, and taking the limit $R\rightarrow\infty$,
we obtain \cite[equation~(49)]{RW19Landis}:
\begin{align}
& \tau^{3}\|e^{\tau\varphi}|\varphi'||\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\overline{w}\|_{L^{2}(\mathcal{S}_{+}^{n}\times\mathbb{R})}^{2}+\tau\|e^{\tau\varphi}|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\partial_{t}\overline{w}\|_{L^{2}(\mathcal{S}_{+}^{n}\times\mathbb{R})}^{2}\nonumber \\
& \quad+\tau\|e^{\tau\varphi}|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\mathcal{S}^{n}}\overline{w}\|_{L^{2}(\mathcal{S}_{+}^{n}\times\mathbb{R})}^{2}\nonumber \\
\le & C\bigg(\|e^{\tau\varphi}\omega_{n+1}^{\frac{2s-1}{2}}\tilde{f}\|_{L^{2}(\mathcal{S}_{+}^{n}\times[1,2])}^{2}+\tau^{2-2s}\|e^{\tau\varphi}|\varphi''|\tilde{q}e^{-\beta st}\overline{w}\|_{L^{2}(\partial\mathcal{S}_{+}^{n}\times\mathbb{R})}^{2}\bigg),\label{eq:Carl-utilize1}
\end{align}
with $|\tilde{f}|\le C\omega_{n+1}^{1-2s}(|\partial_{t}\overline{u}|+|\nabla_{\mathcal{S}^{n}}\overline{u}|+|\overline{u}|)$.
Using the trace estimate in Proposition~\ref{prop:inter1} (with
$\tau$ replaced by $e^{\beta t}\tau$), the boundary term in (\ref{eq:Carl-utilize1})
can be absorbed into the left-hand side of this estimate:
\begin{align}
& \tau^{3}\|e^{\tau\varphi}|\varphi'||\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\overline{w}\|_{L^{2}(\mathcal{S}_{+}^{n}\times\mathbb{R})}^{2}+\tau\|e^{\tau\varphi}|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\partial_{t}\overline{w}\|_{L^{2}(\mathcal{S}_{+}^{n}\times\mathbb{R})}^{2}\nonumber \\
& \quad+\tau\|e^{\tau\varphi}|\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\mathcal{S}^{n}}\overline{w}\|_{L^{2}(\mathcal{S}_{+}^{n}\times\mathbb{R})}^{2}\le C\|e^{\tau\varphi}\omega_{n+1}^{\frac{2s-1}{2}}\tilde{f}\|_{L^{2}(\mathcal{S}_{+}^{n}\times[1,2])}^{2}.\label{eq:Carl-utilize2}
\end{align}
Here the observation $2\beta+4s-2\beta s\le\beta+2\beta s$, which is equivalent to $\beta\ge\frac{4s}{4s-1}$ and hence holds by our choice of $\beta$, is helpful.
Pulling out the weight $e^{\tau\varphi}$ in (\ref{eq:Carl-utilize2})
leads to
\[
e^{\tau\varphi(4)}\tau^{3}\||\varphi'||\varphi''|^{\frac{1}{2}}\omega_{n+1}^{\frac{1-2s}{2}}\overline{u}\|_{L^{2}(\mathcal{S}_{+}^{n}\times[4,6])}^{2}\le Ce^{\tau\varphi(2)}\|\omega_{n+1}^{\frac{2s-1}{2}}\tilde{f}\|_{L^{2}(\mathcal{S}_{+}^{n}\times[1,2])}^{2}.
\]
Using the monotonicity of $\varphi$, and passing to the limit $\tau\rightarrow\infty$,
we see that $\overline{u}=0$ in $\mathcal{S}_{+}^{n}\times(4,6)$,
i.e. $\tilde{u}=0$ in $B_{6}^{+}\setminus \overline{B_{4}^{+}}$. By the unique continuation property, we conclude that $\tilde{u}\equiv0$
in $\mathbb{R}_{+}^{n+1}$, which completes the argument.
\end{proof}
\appendix
\section{Auxiliary Lemmas}
\subsection{Some interpolation inequalities}
The following Hardy inequality can be found in \cite[Lemma~4.6]{RS20Calderon}:
\begin{lem}
\label{lem:Hardy}If $\alpha\neq\frac{1}{2}$
and if $u$ vanishes for $x_{n+1}$ large, then
\[
\|x_{n+1}^{-\alpha}u\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\le\frac{4}{(2\alpha-1)^{2}}\|x_{n+1}^{1-\alpha}\partial_{n+1}u\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\frac{2}{2\alpha-1}\|\lim_{x_{n+1}\rightarrow0}x_{n+1}^{\frac{1}{2}-\alpha}u\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2}.
\]
\end{lem}
\begin{proof}
Using integration by parts, we have
\begin{align*}
\|x_{n+1}^{-\alpha}u\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}= & \int\partial_{n+1}\bigg[\frac{x_{n+1}^{1-2\alpha}}{1-2\alpha}\bigg]u^{2}\\
= & \frac{2}{2\alpha-1}\int x_{n+1}^{1-2\alpha}u\partial_{n+1}u+\frac{1}{2\alpha-1}\int\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2\alpha}u^{2}\\
\le & \frac{1}{2}\frac{4}{(2\alpha-1)^{2}}\|x_{n+1}^{1-\alpha}\partial_{n+1}u\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\frac{1}{2}\|x_{n+1}^{-\alpha}u\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\\
& +\frac{1}{2\alpha-1}\|\lim_{x_{n+1}\rightarrow0}x_{n+1}^{\frac{1}{2}-\alpha}u\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}^{2},
\end{align*}
which gives our desired result.
\end{proof}
We shall use the following interpolation inequality in \cite{GR19unique,Rul15unique,RW19Landis}:
\begin{prop}
[Interpolation inequality I]\label{prop:inter1}Let $s\in(0,1)$ and
$u:\mathcal{S}_{+}^{n}\rightarrow\mathbb{R}$ with $u\in H^{1}(\mathcal{S}_{+}^{n},\omega_{n+1}^{1-2s})$.
Then there exists a constant $C=C(n,s)$ such that
\[
\|u\|_{L^{2}(\partial\mathcal{S}_{+}^{n})}\le C\bigg[\tau^{1-s}\|\omega_{n+1}^{\frac{1-2s}{2}}u\|_{L^{2}(\mathcal{S}_{+}^{n})}+\tau^{-s}\|\omega_{n+1}^{\frac{1-2s}{2}}\nabla_{\omega}u\|_{L^{2}(\mathcal{S}_{+}^{n})}\bigg]
\]
for all $\tau>1$.
\end{prop}
The following trace characterization lemma can be found in \cite[Lemma~4.4]{RS20Calderon}:
\begin{lem}
\label{lem:traceHs}Let $n\ge1$ and $0<\tilde{s}<1$.
There is a bounded surjective linear map
\[
T:H^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2\tilde{s}})\rightarrow H^{\tilde{s}}(\mathbb{R}^{n}\times\{0\})
\]
so that $u(\bullet,x_{n+1})\rightarrow Tu$ in $L^{2}(\mathbb{R}^{n})$
as $x_{n+1}\rightarrow0$.
\end{lem}
We need the following interpolation inequality in \cite[Proposition 5.11: Step 1]{RS20Calderon}:
\begin{lem}
[Interpolation inequality II(a)]\label{lem:inter2(a)}For any $w\in H^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{2s-1})$
and any $\mu>0$, the following interpolation inequality holds:
\[
\|w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}\le C\bigg[\mu^{1-s}(\|x_{n+1}^{\frac{2s-1}{2}}w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}+\|x_{n+1}^{\frac{2s-1}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})})+\mu^{-s}\|w\|_{H^{-s}(\mathbb{R}^{n}\times\{0\})}\bigg].
\]
\end{lem}
\begin{proof}
Let $\langle\bullet\rangle:=\sqrt{1+|\bullet|^{2}}$. Note that
\begin{align*}
\|w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})} & =\bigg[\int_{\mathbb{R}^{n}\times\{0\}}(\langle\xi\rangle^{2-2s}|\hat{w}|^{2})^{s}(\langle\xi\rangle^{-2s}|\hat{w}|^{2})^{1-s}\,d\xi\bigg]^{\frac{1}{2}}\\
& \le(\mu^{1-s}\|w\|_{H^{1-s}(\mathbb{R}^{n}\times\{0\})})^{s}(\mu^{-s}\|w\|_{H^{-s}(\mathbb{R}^{n}\times\{0\})})^{1-s}
\end{align*}
and hence our result follows by Lemma~\ref{lem:traceHs} with $\tilde{s}=1-s$.
\end{proof}
Slightly modifying the proof, we obtain the following:
\begin{lem}
[Interpolation inequality II(b)]\label{lem:inter2(b)}For any $w\in H^{1}(\mathbb{R}_{+}^{n+1},x_{n+1}^{2s-1})$
and any $\mu>0$, the following interpolation inequality holds:
\[
\|w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})}\le C\bigg[\mu^{1-s}(\|x_{n+1}^{\frac{2s-1}{2}}w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}+\|x_{n+1}^{\frac{2s-1}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})})+\mu^{-2s}\|w\|_{H^{-2s}(\mathbb{R}^{n}\times\{0\})}\bigg].
\]
\end{lem}
\begin{proof}
Using Lemma \ref{lem:traceHs} with $\tilde{s}=1-s$, we have
\begin{align*}
\|w\|_{L^{2}(\mathbb{R}^{n}\times\{0\})} & \le C\|w\|_{H^{1-s}(\mathbb{R}^{n}\times\{0\})}^{\frac{2s}{1+s}}\|w\|_{H^{-2s}(\mathbb{R}^{n}\times\{0\})}^{\frac{1-s}{1+s}}\\
& \le C\bigg(\|x_{n+1}^{\frac{2s-1}{2}}w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}+\|x_{n+1}^{\frac{2s-1}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\bigg)^{\frac{2s}{1+s}}\|w\|_{H^{-2s}(\mathbb{R}^{n}\times\{0\})}^{\frac{1-s}{1+s}}\\
& \le C\bigg[\mu^{1-s}\bigg(\|x_{n+1}^{\frac{2s-1}{2}}w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}+\|x_{n+1}^{\frac{2s-1}{2}}\nabla w\|_{L^{2}(\mathbb{R}_{+}^{n+1})}\bigg)+\mu^{-2s}\|w\|_{H^{-2s}(\mathbb{R}^{n}\times\{0\})}\bigg],
\end{align*}
which is our desired result.
\end{proof}
\subsection{Caccioppoli inequality}
We need the following generalization of the Caccioppoli inequality in \cite[Lemma~4.5]{RS20Calderon}:
\begin{lem}
\label{lem:Caccio}Let $s\in(0,1)$ and let $\tilde{u}\in H^{1}(B_{2r}^{+},x_{n+1}^{1-2s})$
be a solution to
\[
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}P\bigg]\tilde{u}=-x_{n+1}^{1-2s}\sum_{j=1}^{n}\partial_{j}f_{j}\quad\text{in }B_{2r}^{+}.
\]
Then there exists a constant $C=C(n,\lambda)$ such that
\begin{align*}
& \|x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(B_{r}^{+})}^{2}\\
\le & C\bigg[r^{-2}\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{2r}^{+})}^{2}+\sum_{j=1}^{n}\|x_{n+1}^{\frac{1-2s}{2}}f_{j}\|_{L^{2}(B_{2r}^{+})}^{2}+\|\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{2r}')}\|\tilde{u}\|_{L^{2}(B_{2r}')}\bigg].
\end{align*}
\end{lem}
\begin{proof}
Let $\eta:B_{2r}^{+}\rightarrow\mathbb{R}$ be a smooth, radial cut-off
function such that $0\le\eta\le1$, $\eta=1$ on $B_{r}^{+}$, ${\rm supp}(\eta)\subset B_{2r}^{+}$,
and $|\nabla\eta|\le C/r$ for some constant $C$. Note that
\begin{align}
& 2\sum_{j=1}^{n}\int_{\mathbb{R}_{+}^{n+1}}\bigg(x_{n+1}^{\frac{1-2s}{2}}\eta f_{j}\bigg)\bigg(x_{n+1}^{\frac{1-2s}{2}}(\partial_{j}\eta)\tilde{u}\bigg)+\sum_{j=1}^{n}\int_{\mathbb{R}_{+}^{n+1}}\bigg(x_{n+1}^{\frac{1-2s}{2}}\eta f_{j}\bigg)\bigg(x_{n+1}^{\frac{1-2s}{2}}\eta\partial_{j}\tilde{u}\bigg)\nonumber \\
= & -\sum_{j=1}^{n}\int_{\mathbb{R}_{+}^{n+1}}x_{n+1}^{1-2s}(\partial_{j}f_{j})(\eta^{2}\tilde{u})\nonumber \\
= & \int_{\mathbb{R}_{+}^{n+1}}\bigg(\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}+x_{n+1}^{1-2s}\sum_{i,j=1}^{n}\partial_{i}a_{ij}\partial_{j}\tilde{u}\bigg)(\eta^{2}\tilde{u})\nonumber \\
= & -\int_{\mathbb{R}^{n}\times\{0\}}\eta^{2}\tilde{u}\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}-\int_{\mathbb{R}_{+}^{n+1}}(x_{n+1}^{1-2s}\partial_{n+1}\tilde{u})\partial_{n+1}(\eta^{2}\tilde{u})\nonumber \\
& -\int_{\mathbb{R}_{+}^{n+1}}x_{n+1}^{1-2s}\sum_{i,j=1}^{n}a_{ij}\partial_{j}\tilde{u}\partial_{i}(\eta^{2}\tilde{u})\nonumber \\
= & -\int_{\mathbb{R}^{n}\times\{0\}}\eta^{2}\tilde{u}\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}-2\int_{\mathbb{R}_{+}^{n+1}}(x_{n+1}^{1-2s}\partial_{n+1}\tilde{u})\eta\partial_{n+1}\eta\tilde{u}\nonumber \\
& -\int_{\mathbb{R}_{+}^{n+1}}\eta^{2}(x_{n+1}^{1-2s}\partial_{n+1}\tilde{u})\partial_{n+1}\tilde{u}-2\int_{\mathbb{R}_{+}^{n+1}}x_{n+1}^{1-2s}\sum_{i,j=1}^{n}a_{ij}(\eta\partial_{j}\tilde{u})(\partial_{i}\eta\tilde{u})\nonumber \\
& -\int_{\mathbb{R}_{+}^{n+1}}\eta^{2}x_{n+1}^{1-2s}\bigg(\sum_{i,j=1}^{n}a_{ij}\partial_{j}\tilde{u}\partial_{i}\tilde{u}\bigg)\nonumber \\
= & -\int_{\mathbb{R}^{n}\times\{0\}}\lim_{x_{n+1}\rightarrow0}\eta^{2}\tilde{u}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}-2\langle\eta\nabla\tilde{u},\tilde{u}\nabla\eta\rangle-\|\eta\nabla\tilde{u}\|^{2}\label{eq:Cacc1}
\end{align}
where $\tilde{A}=\begin{pmatrix}A & 0\\
0 & 1
\end{pmatrix}$. Here we use the notation
\[
\langle\bullet,\bullet\rangle=\langle\bullet,\bullet\rangle_{L^{2}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s}\tilde{A})}\quad\text{and}\quad\|\bullet\|=\|\bullet\|_{L^{2}(\mathbb{R}_{+}^{n+1},x_{n+1}^{1-2s}\tilde{A})}.
\]
By \eqref{eq:unif-ellip}, indeed
\[
\|\eta\nabla\tilde{u}\|^{2}\ge\lambda\|\eta x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}\ge\lambda\|x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(B_{r}^{+})}^{2}.
\]
Also, by \eqref{eq:unif-ellip}, for $\delta>0$, we have
\begin{align*}
2\langle\eta\nabla\tilde{u},\tilde{u}\nabla\eta\rangle & \le\delta\|\eta\nabla\tilde{u}\|^{2}+\delta^{-1}\|\tilde{u}\nabla\eta\|^{2}\\
 & \le\delta\lambda^{-1}\|\eta x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}+\delta^{-1}\lambda^{-1}\|\nabla\eta\, x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(\mathbb{R}_{+}^{n+1})}^{2}.
\end{align*}
Moreover, we have
\[
\bigg|\int_{\mathbb{R}^{n}\times\{0\}}\lim_{x_{n+1}\rightarrow0}\eta^{2}\tilde{u}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\bigg|\le\|\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}\|_{L^{2}(B_{2r}')}\|\eta^{2}\tilde{u}\|_{L^{2}(B_{2r}')}.
\]
Plugging the inequalities above into \eqref{eq:Cacc1} and choosing $\delta>0$ small enough,
we obtain our desired result.
\end{proof}
\subsection{$L^{\infty}$-$L^{2}$ type interior inequality}
Following the arguments in \cite[Proposition 3.1]{TX11HarnackFractionalLaplace} (see also \cite[Proposition 2.6]{JLX14Nirenberg} or \cite[Proposition 3.2]{FF14unique}),
we can obtain the following:
\begin{lem}
\label{lem:Linfty-L2}Let $s\in(0,1)$ and let $\tilde{u}\in H^{1}(B_{2r}^{+},x_{n+1}^{1-2s})$
be a solution to
\begin{align*}
\bigg[\partial_{n+1}x_{n+1}^{1-2s}\partial_{n+1}+x_{n+1}^{1-2s}P\bigg]\tilde{u} & =0\quad\text{in }\mathbb{R}_{+}^{n+1},\\
\tilde{u} & =u\quad\text{on }\mathbb{R}^{n}\times\{0\},\\
\lim_{x_{n+1}\rightarrow0}x_{n+1}^{1-2s}\partial_{n+1}\tilde{u}(x) & =Vu\quad\text{on }\mathbb{R}^{n}\times\{0\},
\end{align*}
with \eqref{eq:unif-ellip} and $|V| \le 1$. Then there exists a constant $C=C(n,\lambda)$
such that
\[
\|\tilde{u}\|_{L^{\infty}(B_{1/2}^{+})}\le C\bigg[\|x_{n+1}^{\frac{1-2s}{2}}\tilde{u}\|_{L^{2}(B_{1}^{+})}+\|x_{n+1}^{\frac{1-2s}{2}}\nabla\tilde{u}\|_{L^{2}(B_{1}^{+})}\bigg].
\]
\end{lem}
\section*{Acknowledgments}
I would like to thank Prof. Jenn-Nan Wang for suggesting the problem and for many helpful discussions. This research is partially supported by MOST 105-2115-M-002-014-MY3, MOST 108-2115-M-002-002-MY3, and MOST 109-2115-M-002-001-MY3.
\end{document}
\begin{document}
\title{Schur Polynomials and Pl\"ucker degree of Schubert Varieties}
\begin{abstract}
\noindent
The following is an informal report on the contributed talk given by the author during the INPANGA 2020[+1] meeting on Schubert Varieties.
The polynomial ring $B$ in infinitely many indeterminates $(x_1,x_2,\ldots)$, with rational coefficients, has a vector space basis of Schur polynomials, parametrized by partitions. The goal of this note is to provide an explanation of the following fact. If ${\bm \lambda}$ is a partition of weight $d$, then the partial derivative of order $d$ with respect to $x_1$ of the Schur polynomial $S_{\bm \lambda}({\mathbf x})$ coincides with the Pl\"ucker degree of the Schubert variety of dimension $d$ associated to ${\bm \lambda}$, equal to the number of standard Young tableaux of shape ${\bm \lambda}$. The generating function encoding all the degrees of Schubert varieties is determined and some (known) corollaries are also discussed.
\end{abstract}
\section{Introduction}
\claim{} Let $G(r,n)$ be the complex Grassmann variety parametrizing $r$-dimensional vector subspaces of $\mathbb C^n$, $\mathcal Q_r \rightarrow G(r,n)$ be its universal quotient bundle and $c_t(\mathcal Q_r)$ its Chern polynomial. Following \cite[p.~271]{Ful}, let
$
F_\bullet({\bm \lambda})\,\, :\,\, 0\subseteq F_1\subsetneq F_2\subsetneq \cdots\subsetneq F_r\subseteq \mathbb C^n
$
be a flag of $r\geq 1$ subspaces of $\mathbb C^n$ such that
$
\dim F_i=i+\lambda_{r-i}.
$
Then ${\bm \lambda}:=(\lambda_1\geq\cdots\geq \lambda_r)\in{\mathcal P}_{r,n}$, a partition whose Young diagram is contained in a $r\times (n-r)$ rectangle.
Let $\Omega^{\bm \lambda}$ be the class of the closed (Schubert) irreducible variety of dimension $|{\bm \lambda}|:=\lambda_1+\cdots+\lambda_r$
(\cite[Example 14.7.11]{Ful}).
$$
\Omega^{\bm \lambda}(F_\bullet):=\Omega(F_1,\ldots,F_r):=\{\Lambda\in G(r,n)\,|\,
\dim(\Lambda\cap F_i)\geq i\}.
$$
Its Pl\"ucker degree $f^{\bm \lambda}$ (which coincides, by
\cite[Theorem 2.39]{smirnov}, with the number os standard Young tableaux of
shape ${\bm \lambda}$), does not depend on $n\geq r$.
\claim{} Let now $B:=\mathbb Q[{\mathbf x}]$ be the polynomial ring in the infinitely many indeterminates ${\mathbf x}:=(x_1,x_2,\ldots)$. It possesses a basis parametrized by the set ${\mathcal P}$ of all the partitions
\begin{equation}
B:=\bigoplus_{{\bm \lambda}\in {\mathcal P}}\mathbb Q\cdot S_{\bm \lambda}({\mathbf x}).
\end{equation}
If each indeterminate $x_i$ is given weight $i$, then $S_{\bm \lambda}({\mathbf x})$ is a homogeneous polynomial of weighted degree $|{\bm \lambda}|$. Consider the vector subspace $\widetilde{B}_{r,n}:=\bigoplus_{{\bm \lambda}\in{\mathcal P}_{r,n}}\mathbb Q\cdot S_{\bm \lambda}({\mathbf x})$ of $B$.
The map
\begin{equation}
\left\{\matrix{\pi_{r,n}&:&\widetilde{B}_{r,n}&\longrightarrow& H_*(G(r,n),\mathbb Q)\cr\cr
&&S_{\bm \lambda}({\mathbf x})&\longmapsto&\Omega^{\bm \lambda}}\right.
\end{equation}
is a vector space isomorphism for trivial reasons.
The main result of this note is the following
\begin{claim}{\bf Theorem.}\label{thm:thm13}{\em
\begin{equation}
c_t(\mathcal Q_r)\cap \Omega^{\bm \lambda}=\pi_{r,n}\Big(\exp\left(\sum_{i\geq 1}\frac{t^i}{i}\frac{\partial}{\partial x_i}\right)S_{\bm \lambda}({\mathbf x})\Big)\label{eq1:mnth}
\end{equation}
}
\end{claim}
In particular, equating the coefficients of the powers of $t$ of the same degree:
\begin{equation}
c_i(\mathcal Q_r)\cap \Omega^{\bm \lambda}=S_i(\widetilde{\partial})\cap \Omega^{\bm \lambda}
\end{equation}
where $S_i(\widetilde{\partial})$ is an explicit polynomial expression in $\widetilde{\partial}:=\displaystyle{\left(\frac{\partial}{\partial x_1}, \frac{1}{2}\frac{\partial}{\partial x_2}, \frac{1}{3}\frac{\partial}{\partial x_3},\ldots\right)}$
corresponding to the coefficient of $t^i$ in the expansion of $\displaystyle{\exp\left(\sum_{i\geq 1}\frac{t^i}{i}\frac{\partial}{\partial x_i}\right)}$.
Theorem \ref{thm:thm13} will be shortly proven in Section \ref{sec:proof}, building on the notion of Schubert derivation on an exterior algebra as in \cite{SCHSD, gln, pluckercone}, along with its extension to an infinite wedge power, as in \cite{SDIWP, BeGa}.
\claim{} Theorem \ref{thm:thm13} has a number of corollaries, all collected in Section \ref{sec:sec3}. The most important is:
\noindent
{\bf Corollary \ref{cor:cor31}.}\label{thm:thm11} {\em For all ${\bm \lambda}\in{\mathcal P}$
\begin{equation}
f^{\bm \lambda}=\frac{\partial^{d} S_{\bm \lambda}({\mathbf x})}{\partial x_1^{d}}.\label{eq10:fl}
\end{equation}
}
\noindent
For example $f^{(2,2)}=\displaystyle{\frac{\partial^4 S_{(2,2)}}{\partial x_1^4}}=2$, which is the Pl\"ucker degree of the Grassmannian $G(1,\mathbb P^3)$ of lines in the three dimensional projective space.
We emphasize that we have not been able to find any reference to (\ref{eq10:fl}); it appears to be new and is the main motivation of this note.
Recall that the classical way to compute $f^{\bm \lambda}$ is to rely on a formula due to Schubert, accounted for in \cite[Example 14.7.11]{Ful}. Because of its combinatorial interpretation in terms of Young tableaux, it is also computed by the celebrated {\em hook length formula} (see e.g.
\cite[p.~53]{Fulyoung} or \cite[Theorem 4.33]{gillespie})
$$
f^{\bm \lambda}:=\frac{|{\bm \lambda}|!}{\prod_{x\in Y({\bm \lambda})}h(x)},
$$
proved in \cite{thrall},
where $Y({\bm \lambda})$ is the Young diagram of ${\bm \lambda}$ and $h(x)$ is the hook
length of the box $x\in Y({\bm \lambda})$.
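The hook length formula is easy to check by machine. The following Python sketch (a convenience script of ours, not part of the mathematical text) computes $f^{\bm \lambda}$ from a partition given as a weakly decreasing list of row lengths.

```python
from math import factorial

def hooks(shape):
    """Hook lengths of all boxes of the Young diagram of a partition,
    given as a weakly decreasing list of positive row lengths."""
    conj = [sum(1 for row in shape if row > j) for j in range(shape[0])]
    return [shape[i] - j + conj[j] - i - 1          # arm + leg + 1
            for i in range(len(shape)) for j in range(shape[i])]

def f_lambda(shape):
    """Number of standard Young tableaux of the given shape, by the
    hook length formula f = |shape|! / (product of hook lengths)."""
    prod = 1
    for h in hooks(shape):
        prod *= h
    return factorial(sum(shape)) // prod
```

For example, `f_lambda([2, 2])` returns $2$ and `f_lambda([3, 2, 1])` returns $16$, matching the values $f^{(2,2)}$ and $f^{(3,2,1)}$ computed in the text.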
\noindent
{\bf Corollary~\ref{cor:cor32}.} {\em
\begin{equation}
f^{\bm \lambda}:=|{\bm \lambda}|!\cdot \Delta_{\bm \lambda}(\exp(t))\label{eq:behzadf}
\end{equation} }
\noindent
Formula (\ref{eq:behzadf}) was first observed by O.~Behzad during the investigations which led to her Ph.D.\ thesis \cite{BeThesis}. See also the forthcoming \cite{BeGa2}.
\noindent
{\bf Corollary~\ref{cor:34}.} {\em Let ${\bm \lambda}\in{\mathcal P}$ and $Y({\bm \lambda})$ its Young diagram. Then
\begin{equation}
\prod_{x\in Y({\bm \lambda})}h(x)=\frac{1}{\Delta_{\bm \lambda}(\exp(t))}
\end{equation}
where $h(x)$ denotes the {\em hook length} of the box $x$ in the Young diagram of ${\bm \lambda}$.
}
\noindent
{\bf Corollary \ref{cor:35}} {\em Let ${\mathcal P}_r$ be the set of all partitions of length at most $r$. For all ${\bm \lambda}\in{\mathcal P}_r$, let $s_{\bm \lambda}({\mathbf z}_r)$ denote the Schur symmetric polynomial in the $r$ indeterminates $(z_1,\ldots,z_r)$, i.e.
$$
s_{\bm \lambda}({\mathbf z}_r)=\frac{\det\big(z_i^{\lambda_j-j+r}\big)_{1\leq i,j\leq r}}{\Delta_0({\mathbf z}_r)}.
$$
Then
\begin{equation}
\sum_{d\geq 0}\frac{t^d}{d!}\sum_{{\bm \lambda}\vdash d}f^{\bm \lambda} s_{\bm \lambda}({\mathbf z}_r)=\exp(tp_1({\mathbf z}_r))=\exp(t\cdot(z_1+\cdots+z_r))
\end{equation}
In particular, for all $d\geq 0$
\begin{equation}
(z_1+\cdots+z_r)^d=\sum_{{\bm \lambda}\vdash d}f^{\bm \lambda}\cdot s_{\bm \lambda}({\mathbf z}_r)\label{eqi:prewrds}
\end{equation}
}
We additionally observe that evaluating the equality at $z_i=1$, formula (\ref{eqi:prewrds}) turns into
\begin{equation}
r^d=\sum_{{\bm \lambda}\vdash d}s_{\bm \lambda}(1,\ldots,1)f^{\bm \lambda}.\label{eq:comp}
\end{equation}
Comparing (\ref{eq:comp}) with \cite[Formula (5), p.~52]{Fulyoung}, one deduces that $s_{\bm \lambda}(\underbrace{1,\ldots,1}_{r\text{ times}})$ is precisely the number $d_{\bm \lambda}(r)$ of semistandard Young tableaux of shape ${\bm \lambda}$, whose entries are taken from the alphabet
$\{1,2,\ldots,r\}.$
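The identity (\ref{eq:comp}) can also be tested numerically: $s_{\bm \lambda}(1,\ldots,1)$ ($r$ ones) is given by the classical hook content formula $\prod_{x\in Y({\bm \lambda})}(r+c(x))/h(x)$, where $c(x)=j-i$ is the content of the box $x$. The sketch below (the helper names are ours, not part of the text) checks $r^{d}=\sum_{{\bm \lambda}\vdash d}s_{\bm \lambda}(1,\ldots,1)f^{\bm \lambda}$ for small $r$ and $d$.

```python
from fractions import Fraction
from math import factorial

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def boxes(shape):
    """Yield (content, hook length) for every box of the Young diagram."""
    conj = [sum(1 for row in shape if row > j) for j in range(shape[0])]
    for i, row in enumerate(shape):
        for j in range(row):
            yield j - i, row - j + conj[j] - i - 1

def f_lambda(shape):
    """Standard Young tableaux count (hook length formula)."""
    prod = 1
    for _, h in boxes(shape):
        prod *= h
    return factorial(sum(shape)) // prod

def schur_at_ones(shape, r):
    """s_lambda(1,...,1) with r ones, by the hook content formula."""
    val = Fraction(1)
    for c, h in boxes(shape):
        val *= Fraction(r + c, h)
    return val
```

Then `sum(schur_at_ones(p, r) * f_lambda(p) for p in partitions(d))` equals $r^{d}$; partitions with more than $r$ rows contribute $0$, since some factor $r+c(x)$ vanishes.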
\claim{} Let $\displaystyle{\sum_{j\geq 0}}S_j(\widetilde{\partial})t^j=\exp\left(\sum_{i\geq 1}\frac{t^i}{i}\frac{\partial}{\partial x_i}\right)$. It is not difficult to see that
\begin{equation}
\langle P(S_i({\mathbf x})),S_{\bm \lambda}({\mathbf x})\rangle=P(S_i(\widetilde{\partial}))S_{\bm \lambda}({\mathbf x})
\end{equation}
where $P(S_i(\widetilde{\partial}))$ denotes the evaluation of $P$ at $\displaystyle{x_i=\frac{1}{i}\frac{\partial}{\partial x_i}}$.
In particular
$$
x_1^n=\sum_{{\bm \lambda}\vdash n}\langle x_1^n,S_{\bm \lambda}({\mathbf x})\rangle S_{\bm \lambda}({\mathbf x})=\sum_{{\bm \lambda}\vdash n}\frac{\partial^n S_{\bm \lambda}({\mathbf x})}{\partial x_1^n}\cdot S_{\bm \lambda}({\mathbf x}),
$$
which, due to Corollary \ref{thm:thm11}, gives:
$$
x_1^n=\sum_{{\bm \lambda}\vdash n}f^{\bm \lambda} S_{\bm \lambda}({\mathbf x})
$$
from which, taking the derivative of order $n$ with respect to $x_1$ and using Corollary~\ref{cor:cor31} again, we get:
\begin{equation}
n!=\sum_{{\bm \lambda}\vdash n}(f^{\bm \lambda})^2\label{eq:square}
\end{equation}
which is \cite[Formula (4), p.~50]{Fulyoung}.
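The identity (\ref{eq:square}) is also easy to verify computationally. A minimal sketch (again ours), relying only on the hook length formula recalled above:

```python
from math import factorial

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def f_lambda(shape):
    """Standard Young tableaux count via the hook length formula."""
    conj = [sum(1 for row in shape if row > j) for j in range(shape[0])]
    prod = 1
    for i, row in enumerate(shape):
        for j in range(row):
            prod *= row - j + conj[j] - i - 1
    return factorial(sum(shape)) // prod

def check_square_identity(n):
    """True iff n! equals the sum of (f^lambda)^2 over partitions of n."""
    return factorial(n) == sum(f_lambda(p) ** 2 for p in partitions(n))
```

Running `check_square_identity(n)` for small $n$ confirms the identity.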
\noindent
In Section \ref{sec:sec2} we recall a few preliminaries. Then we will state and prove the main corollaries in Section \ref{sec:sec3}. In Section~\ref{sec:sec4} the notion of Schubert derivation (as in \cite{SCHSD}, \cite{HSDGA}, \cite{gln}) is reviewed. The short proof of Theorem \ref{thm:thm13} will conclude this short note.
\section{Preliminaries}\label{sec:sec2}
The content of this section is very well known and easily available in many
common textbooks and its only purpose is to introduce the notation adopted in the
sequel.
\claim{\bf Partitions.} Let ${\mathcal P}$ be the set of all partitions, namely the
monoid of all non-increasing sequences ${\bm \lambda}:=(\lambda_1\geq \lambda_2\geq
\cdots)$ of non-negative integers with finite support (all terms zero but
finitely many). The nonzero terms of ${\bm \lambda}$ are called {\em parts}; the
number $\ell({\bm \lambda})$ of parts is called the {\em length}. Let ${\mathcal P}_r:={\mathcal P}
\cap \mathbb N^r$: it is the set of all partitions with at most $r$ parts, and $
{\mathcal P}_\infty={\mathcal P}$.
The Young diagram of a partition is the left
justified array of $r$-rows such that the $i$th row has $\lambda_i$ boxes.
For all $1\leq r\leq n$ we denote by ${\mathcal P}_{r,n}$ the set of partitions whose
Young diagram is contained in a $r\times (n-r)$ rectangle. Then ${\mathcal P}_{r,
\infty}={\mathcal P}_r$.
If ${\bm \lambda}\in
{\mathcal P}_{r,n}$, we denote by ${\bm \lambda}^c$ the partition whose
Young diagram is the complement of the Young diagram of ${\bm \lambda}$ in the $r
\times (n-r)$ rectangle. For example the complement of the partition $(3,3,2,
1)$ in the $4\times 3$ rectangle is $(2,1)$. Its complement in the $5\times
4$
rectangle is $(4,3,2,1,1)$.
\claim{\bf Schur Determinants.} Let $A$ be any commutative algebra. To each pair
$$
\Big(f(t)=\sum_{n\geq 0}f_nt^n,{\bm \lambda}\Big)\in A\llbracket t^{-1},t\rrbracket\times {\mathcal P}_r
$$
one attaches the {\em Schur determinant}:
\begin{equation}
\Delta_{\bm \lambda}(f(t))=\det(f_{\lambda_j-j+i})_{1\leq i,j\leq r}\in A\label{eq:scud}
\end{equation}
If $f(t)\in A[[t]]$, one thinks of it as a formal Laurent series with $f_j=0$ for $j<0$.
Putting $S_{\bm \lambda}({\mathbf x}):=\det(S_{\lambda_j-j+i}({\mathbf x}))$, where the polynomials $S_i({\mathbf x})$ are defined by the generating series $\sum_{i\geq 0}S_i({\mathbf x})t^i=\exp\big(\sum_{i\geq 1}x_it^i\big)$, it is well known that
$$
B:=\bigoplus_{{\bm \lambda}\in{\mathcal P}}\mathbb Q\cdot S_{\bm \lambda}({\mathbf x}).
$$
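For instance, for ${\bm \lambda}=(2,1)$ and $r=2$, definition (\ref{eq:scud}) reads
$$
\Delta_{(2,1)}(f(t))=\left|\matrix{f_{2}&f_{0}\cr
f_{3}&f_{1}}\right|=f_{1}f_{2}-f_{0}f_{3}.
$$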
\claim{} \label{notpier} For all $(i,{\bm \lambda})\in\mathbb N\times {\mathcal P}_r$, define
$$
PF_i({\bm \lambda}):=\{{\bm \mu}\in {\mathcal P}_r\,\, |\,\, |{\bm \mu}|=|{\bm \lambda}|+i\quad \mathrm{and}\quad \mu_1\geq\lambda_1\geq\mu_2\geq\lambda_2\geq\cdots\geq\mu_r\geq\lambda_r\}
$$
and
$$
PF_{-i}({\bm \lambda}):=\{{\bm \mu}\in {\mathcal P}_r\,\, |\,\, |{\bm \mu}|=|{\bm \lambda}|-i\quad \mathrm{and}\quad \lambda_1\geq \mu_1\geq\lambda_2\geq\mu_2\geq \cdots\geq\lambda_r\geq\mu_r\}
$$
\claim{\bf Proposition.} \label{prop:prop24}{\em Pieri's rule for the Schur $S$-functions holds:
\begin{equation}
S_i({\mathbf x})\cdot S_{\bm \lambda}({\mathbf x})=\sum_{{\bm \mu}\in PF_i({\bm \lambda})} S_{\bm \mu}({\mathbf x})\label{eq:PieriS}
\end{equation}
}
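For instance, for $r\geq2$ and ${\bm \lambda}=(1)$ one has $PF_{1}((1))=\{(2),(1,1)\}$, so that (\ref{eq:PieriS}) reads
$$
S_{1}({\mathbf x})\cdot S_{(1)}({\mathbf x})=S_{(2)}({\mathbf x})+S_{(1,1)}({\mathbf x}).
$$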
\claim{\bf Proposition.}\label{prop:dualp} {\em The ``dual'' Pieri's rule for $\Omega^{\bm \lambda}$ holds:
\begin{equation}
c_i(\mathcal Q_r)\cap \Omega^{\bm \lambda}=\sum_{{\bm \mu}\in PF_{-i}({\bm \lambda})} \Omega^{\bm \mu}\label{eq:PieriOmega}
\end{equation}
}
Proposition \ref{prop:dualp} is basically another phrasing of \cite[Example 14.7.1]{Ful}.
It is very well known that
$H_*(G(r,n),\mathbb Q):=\bigoplus_{{\bm \lambda}\in{\mathcal P}_{r,n}}\mathbb Q\cdot \Omega^{\bm \lambda}$. Moreover, Giambelli's formula says that
$$
\Omega^{\bm \lambda}=\Delta_{{\bm \lambda}^c}(c_t(\mathcal Q_r))\cap [G(r,n)]=\det\big(c_{\lambda^c_j-j+i}(\mathcal Q_r)\big)_{1\leq i,j\leq r}\cap [G(r,n)],
$$
where ${\bm \lambda}^c$ denotes the complement of ${\bm \lambda}$ in the $r\times (n-r)$ rectangle.
\section{Corollaries to Theorem \ref{thm:thm13}}\label{sec:sec3}
In this section we assume Theorem \ref{thm:thm13} and derive a few corollaries from it. As is customary, we set $\sigma_i:=c_i(\mathcal Q_r)$.
\begin{cor}\label{cor:cor31} {\em Let ${\bm \lambda}\in{\mathcal P}_{r,n}$.
Then
\begin{equation}
\sigma_1\cap \Omega^{\bm \lambda}=\pi_{r,n}\left(\frac{\partial S_{\bm \lambda}}{\partial x_1}\right)\label{eq:s1}
\end{equation}
In particular, if $d=|{\bm \lambda}|$:
\begin{equation}
f^{\bm \lambda}:=\sigma_1^d\cap \Omega^{\bm \lambda}=\frac{\partial^d}{\partial x_1^{d}}S_{\bm \lambda}({\mathbf x})\label{eq2:degr}
\end{equation}
}
\end{cor}
\noindent{\bf Proof.}\,\,
Formula (\ref{eq:s1}) comes from equating the coefficients of the linear terms on both sides of~(\ref{eq1:mnth}).
Iterating it $d$ times one obtains (\ref{eq2:degr}), where the projection $\pi_{r,n}$ can be omitted ($\pi_{r,n}$ is the identity on constants).{
\vrule height4pt width4pt depth0pt}
Formula (\ref{eq:s1}) generalizes to
$$
\sigma_i\cap \Omega^{\bm \lambda}=\pi_{r,n}\left(S_i(\widetilde{\partial})S_{\bm \lambda}({\mathbf x})\right)
$$
For example
$$
\sigma_3\cap \Omega^{\bm \lambda}=\pi_{r,n}\left[\left(\frac{1}{6}\frac{\partial^3}{\partial x_1^3}+\frac{1}{2}\frac{\partial^2}{\partial x_1 \partial x_2} +\frac{1}{3}\frac{\partial}{\partial x_3}\right)S_{\bm \lambda}({\mathbf x})\right]
$$
\claim{} Let $\Delta_{\bm \lambda}(\exp(t))$ be the Schur determinant as in (\ref{eq:scud}), attached to the exponential formal power series. For example
$$
\Delta_{(3,2,2)}(\exp(t))=\left|\matrix{\displaystyle{\frac{1}{3!}}&\displaystyle{\frac{1}{1!}}&\displaystyle{\frac{1}{0!}}\cr\cr
\displaystyle{\frac{1}{4!}}&\displaystyle{\frac{1}{2!}}&\displaystyle{\frac{1}{1!}}\cr\cr \displaystyle{\frac{1}{5!}}&\displaystyle{\frac{1}{3!}}&\displaystyle{\frac{1}{2!}}}\right|=\frac{1}{240}
$$
\claim{\bf Corollary.}\label{cor:cor32}
\begin{equation}
f^{\bm \lambda}:=|{\bm \lambda}|!\cdot \Delta_{\bm \lambda}(\exp(t))
\end{equation}
\noindent{\bf Proof.}\,\,
Let $d:=|{\bm \lambda}|$. Then
$$
f^{\bm \lambda}=\frac{\partial^d S_{\bm \lambda}({\mathbf x})}{\partial x_1^d}=
\left|\matrix{S_{\lambda_1}({\mathbf x})&S_{\lambda_2-1}({\mathbf x})&\cdots&S_{\lambda_r-r+1}({\mathbf x})\cr\cr
S_{\lambda_1+1}({\mathbf x})&S_{\lambda_2}({\mathbf x})&\cdots&S_{\lambda_r-r+2}({\mathbf x})\cr
\vdots&\vdots&\ddots&\vdots\cr
S_{\lambda_1+r-1}({\mathbf x})&S_{\lambda_2+r-2}({\mathbf x})&\cdots&S_{\lambda_r}({\mathbf x})
}\right|\label{eq:dete}
$$
Now $S_i({\mathbf x})=\displaystyle{\frac{x_1^i}{i!}}+g_i$, where
$$
g_i:=g_i(x_1,x_2,\ldots,x_i)
$$
is a polynomial in which $x_1$ occurs with degree strictly smaller than $i$. Therefore the determinant occurring in (\ref{eq:dete}) can be written as
\begin{equation}
\left|\matrix{\displaystyle{\frac{x_1^{\lambda_1}}{\lambda_1!}}+g_{\lambda_1}&\displaystyle{\frac{x_1^{\lambda_2-1}}{(\lambda_2-1)!}}+g_{\lambda_2-1}&\cdots&\displaystyle{\frac{x_1^{\lambda_r-r+1}}{(\lambda_r-r+1)!}}+g_{\lambda_r-r+1}\cr\cr\cr
\displaystyle{\frac{x_1^{\lambda_1+1}}{(\lambda_1+1)!}}+g_{\lambda_1+1}&\displaystyle{\frac{x_1^{\lambda_2}}{\lambda_2!}}+g_{\lambda_2}&\cdots&\displaystyle{\frac{x_1^{\lambda_r-r+2}}{(\lambda_r-r+2)!}}+g_{\lambda_r-r+2}\cr
\vdots&\vdots&\ddots&\vdots\cr\cr
\displaystyle{\frac{x_1^{\lambda_1+r-1}}{(\lambda_1+r-1)!}}+g_{\lambda_1+r-1}&\displaystyle{\frac{x_1^{\lambda_2+r-2}}{(\lambda_2+r-2)!}}+g_{\lambda_2+r-2}&\cdots&\displaystyle{\frac{x_1^{\lambda_r}}{\lambda_r!}}+g_{\lambda_r}
}\right|\label{eq:hugedet}
\end{equation}
Easy manipulations with determinants show that (\ref{eq:hugedet}) can be written as
$$
x_1^{d}\Delta_{\bm \lambda}(\exp(t))+F(x_1,x_2,\ldots,x_{\lambda_1})
$$
where $F$ is a polynomial in which $x_1$ occurs in degree smaller than $d$.
Therefore
$$
f^{\bm \lambda}:={\partial^d S_{\bm \lambda}({\mathbf x})\over \partial x_1^d}={\partial^d\over \partial x_1^d}\Big(x_1^{d}\Delta_{\bm \lambda}(\exp(t))+F(x_1,x_2,\ldots,x_{\lambda_1})\Big)=d!\cdot\Delta_{\bm \lambda}(\exp(t))
$$
{
\vrule height4pt width4pt depth0pt}
\claim{\bf Example.} The degree of the Schubert variety
$
\Omega^{(3,2,1)}(F^\bullet)
$
is
$$
f^{(3,2,1)}=6!\left|\matrix{\displaystyle{1\over 3!}&\displaystyle{1\over 1!}&0\cr\cr
\displaystyle{1\over 4!}&\displaystyle{1\over 2!}&1\cr\cr
\displaystyle{1\over 5!}&\displaystyle{1\over 3!}&1
}\right|=16
$$
which is also the number of standard Young tableaux of shape $(3,2,1)$.
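As a quick numerical sanity check (ours, not part of the paper), the determinantal formula above can be evaluated with exact rational arithmetic; `degree_det` is a hypothetical helper computing $f^{\bm\lambda}=|{\bm\lambda}|!\cdot\det\big(1/(\lambda_j-j+i)!\big)$, with $1/n!$ taken as $0$ for $n<0$:

```python
from fractions import Fraction
from math import factorial

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def degree_det(lam):
    """f^lam = d! * det(1/(lam_j - j + i)!), 1-based indices,
    with 1/n! read as 0 when n < 0, as in the displayed determinant."""
    r, d = len(lam), sum(lam)
    M = [[Fraction(1, factorial(lam[j] - (j + 1) + (i + 1)))
          if lam[j] - (j + 1) + (i + 1) >= 0 else Fraction(0)
          for j in range(r)] for i in range(r)]
    return factorial(d) * det(M)

print(degree_det((3, 2, 1)))  # 16, the number of SYT of shape (3,2,1)
```

The same helper gives $f^{(2,1)}=2$, again matching the count of standard Young tableaux.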
\claim{\bf Corollary.} \label{cor:34} {\em
Let $Y({\bm \lambda})$ denote the Young tableau of the partition ${\bm \lambda}$. Let $h(x)$ denote the {\em hook length} of the box $x$ of $Y({\bm \lambda})$. Then
$$
\prod_{x\in Y({\bm \lambda})} h(x)={1\over \Delta_{\bm \lambda}(\exp(t))}
$$
}
\noindent{\bf Proof.}\,\, As in \cite{smirnov}, the degree of a Schubert variety coincides with the number of standard Young tableaux of shape ${\bm \lambda}$. Then one invokes the celebrated hook length formula proven in \cite{thrall}.{
\vrule height4pt width4pt depth0pt}
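As a check of the corollary (ours, with a hypothetical helper name), the product of hook lengths of $(3,2,1)$ is $45$, and indeed $d!/\prod h(x)=720/45=16=f^{(3,2,1)}$, i.e. $\prod h(x)=1/\Delta_{\bm\lambda}(\exp(t))$:

```python
from math import factorial

def hook_product(lam):
    """Product of hook lengths over the Young diagram of lam."""
    conj = [sum(1 for part in lam if part > c) for c in range(lam[0])]
    prod = 1
    for i, row in enumerate(lam):
        for j in range(row):
            # hook length of box (i, j): arm + leg + 1
            prod *= (row - j - 1) + (conj[j] - i - 1) + 1
    return prod

lam = (3, 2, 1)
d = sum(lam)
print(hook_product(lam))                  # 45
print(factorial(d) // hook_product(lam))  # 16 = f^(3,2,1)
```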
\claim{\bf Corollary.}\label{cor:35}
\begin{equation}
\sum_{d\geq 0}{t^d\over d!}\sum_{{\bm \lambda}\in{\mathcal P}_r\,|\,|{\bm \lambda}|=d}f^{\bm \lambda} s_{\bm \lambda}(z_1,\ldots,z_r)=\exp(t(z_1+\cdots+z_r))
\end{equation}
As a consequence, for each $d\geq 0$, the Pl\"ucker coordinates of the symmetric polynomial $e_1({\mathbf z}_r)^d$ are the degrees of the Schubert varieties $\Omega^{\bm \lambda}(F^\bullet)$:
$$
(z_1+\cdots+z_r)^d=\sum_{{\bm \lambda}\vdash d}f^{\bm \lambda} s_{\bm \lambda}(z_1,\ldots,z_r)
$$
\noindent{\bf Proof.}\,\,
Let ${\mathbf z}_r:=(z_1,\ldots,z_r)$ and consider the generating function
$$
\sum_{{\bm \lambda}\in{\mathcal P}_r}X^r({\bm \lambda})s_{\bm \lambda}({\mathbf z}_r)
$$
of the basis $X^r({\bm \lambda})$ of $B_r$.
By \cite{BeCoGaVi}, this is equal to
$$
\sigma_+(z_1,\ldots,z_r)X^r(0)=\exp\left(\sum_{i\geq 0}x_ip_i({\mathbf z}_r)\right)X^r(0)
$$
Then
\begin{eqnarray*}
\sum_{d\geq 0}{t^d\over d!}\sum_{|{\bm \lambda}|=d} f^{\bm \lambda} X^r(0)s_{\bm \lambda}({\mathbf z}_r)&=& \exp\left(t{\partial\over \partial x_1}\right)_{|{\mathbf x}=0}\exp\left(\sum_{i\geq 0}x_ip_i({\mathbf z}_r)\right)X^r(0)\cr\cr\cr
&=&\exp((x_1+t)p_1({\mathbf z}_r))_{{\mathbf x}=0}=\exp(tp_1({\mathbf z}_r))
\end{eqnarray*}
which concludes the proof because $p_1({\mathbf z}_r)=e_1({\mathbf z}_r)$.
{
\vrule height4pt width4pt depth0pt}
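The expansion $(z_1+\cdots+z_r)^d=\sum_{{\bm\lambda}\vdash d}f^{\bm\lambda}s_{\bm\lambda}$ of the Corollary can be tested numerically (our sketch; `schur` evaluates the bialternant formula $s_{\bm\lambda}({\mathbf z})=\det(z_i^{\lambda_j+r-j})/\det(z_i^{r-j})$ at sample points, and `f_lam` counts SYT via the hook length formula):

```python
from fractions import Fraction
from math import factorial

def det(M):
    # Laplace expansion along the first row; enough for 3x3 matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def schur(lam, z):
    """Bialternant formula, evaluated at the integer points z."""
    r = len(z)
    lam = list(lam) + [0] * (r - len(lam))
    num = det([[z[i] ** (lam[j] + r - 1 - j) for j in range(r)] for i in range(r)])
    den = det([[z[i] ** (r - 1 - j) for j in range(r)] for i in range(r)])
    return Fraction(num, den)

def f_lam(lam):
    # number of standard Young tableaux, via the hook length formula
    conj = [sum(1 for p in lam if p > c) for c in range(lam[0])]
    h = 1
    for i, row in enumerate(lam):
        for j in range(row):
            h *= (row - j) + (conj[j] - i) - 1
    return factorial(sum(lam)) // h

z = (1, 2, 3)                      # any distinct sample points will do
d, parts = 3, [(3,), (2, 1), (1, 1, 1)]
lhs = sum(z) ** d                  # e_1(z)^3 = 216
rhs = sum(f_lam(p) * schur(p, z) for p in parts)
print(lhs, rhs)  # 216 216
```

Here $p_1^3=s_{(3)}+2s_{(2,1)}+s_{(1,1,1)}$, with coefficients $f^{(3)}=1$, $f^{(2,1)}=2$, $f^{(1,1,1)}=1$.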
\noindent
More generally, one can consider Schur polynomials in infinitely many indeterminates ${\mathbf z}:=(z_1,z_2,\ldots)$. We apply the translation operator along the $x_1$ direction to the generating function
$$
S_{\bm \lambda}({\mathbf x},{\mathbf z})=\exp(\sum_{i\geq 0}x_ip_i({\mathbf z}))
$$
of the basis elements of $B$, obtaining:
$$
\exp\left(t{\partial\over \partial x_1}\right)S_{\bm \lambda}({\mathbf x},{\mathbf z})=\exp\left((t+x_1)p_1({\mathbf z})+\sum_{j\geq 2}x_jp_j({\mathbf z})\right)
$$
Evaluating at ${\mathbf x}=0$ one gets
$$
\sum_{d\geq 0}{t^d\over d!}\sum_{{\bm \lambda}\vdash d}f^{\bm \lambda}s_{\bm \lambda}({\mathbf z})=\exp(tp_1({\mathbf z}))
$$
\section{Review on Schubert Derivations}\label{sec:sec4}
Consider the following vector spaces over the rationals:
\begin{equation}
\mathcal V:=\mathbb Q[X^{-1},X],\qquad V=V_\infty=\mathbb Q[X],\qquad V_n:={V\over X^n\cdot V}\label{eq:spaces}
\end{equation}
along with their restricted duals
$$
\mathcal V^*=\bigoplus_{i\in\mathbb Z}\mathbb Q\cdot \partial^i,\qquad V^*:=\bigoplus_{i\geq 0}\mathbb Q
\cdot \partial^i,\qquad V^*_n=\bigoplus_{0\leq j< n} \mathbb Q\cdot \partial^j.
$$
where $\partial^j$ stands for the unique linear form on $\mathcal U$ such that $\partial^j(X^i)=\delta^{ij}$.
There is a natural chain of inclusions
$$
V_n\hookrightarrow V\hookrightarrow \mathcal V
$$
where the first map is the natural section $X^i+(X^n)\mapsto X^i$ associated to the canonical projection $V\rightarrow V_n$ and the second views a polynomial as a Laurent polynomial with no singular part.
\claim{\bf Exterior Algebras.} For all $r\geq 0$ and all ${\bm \lambda}\in{\mathcal P}_r$ let
\begin{equation}
X^r({\bm \lambda})=X^{r-1+\lambda_1}\wedge \cdots\wedge X^{\lambda_r}
\end{equation}
The exterior algebra of $V_n$ ($n\in\mathbb N\cup\{\infty\}$) is:
$$
\bigwedge V_n=\bigoplus_{r\geq 0}\bigwedge ^rV_n\qquad\mathrm{where}\qquad
\bigwedge ^rV_n=\bigoplus_{{\bm \lambda}\in{\mathcal P}_{r,n}}\mathbb Q\cdot X^r({\bm \lambda})\qquad
$$
For each $m\in\mathbb Z$, let ${[\mathbf X]}^{m+(0)}:=X^{m}\wedge X^{m-1}\wedge X^{m-2}\wedge \cdots$.
The fermionic Fock space of charge $0$ is $F_0:=\bigoplus_{{\bm \lambda}\in{\mathcal P}}\mathbb Q\cdot {[\mathbf X]}^{0+{\bm \lambda}}$ where
\begin{equation}
{[\mathbf X]}^{0+{\bm \lambda}}:=X^{\lambda_1}\wedge X^{-1+\lambda_2}\wedge X^{-2+\lambda_3}\wedge \cdots\wedge X^{1-r+\lambda_r}\wedge {[\mathbf X]}^{-r+(0)}
\end{equation}
an expression which does not depend on the choice of $r\geq \ell({\bm \lambda})$.
The {\em boson-fermion correspondence} can be phrased by saying that
$F_0$ is an invertible $B$-module generated by ${[\mathbf X]}^0:={[\mathbf X]}^{0+(0)}$, such that
$$
{[\mathbf X]}^{0+{\bm \lambda}}=S_{\bm \lambda}({\mathbf x}){[\mathbf X]}^0
$$
Let now $\mathcal U$ denote any one of the spaces listed in (\ref{eq:spaces}).
\begin{claim}{\bf Definition.} {\em A {\em Hasse-Schmidt} (HS) derivation on $\bigwedge \mathcal U$ is a $\mathbb Q$-linear map $\mathcal D(z):\bigwedge \mathcal U\rightarrow \bigwedge \mathcal U\llbracket z\rrbracket$ such that
\begin{equation}
\mathcal D(z)(u\wedge v)=\mathcal D(z)u\wedge \mathcal D(z)v,\qquad \forall u,v\in\bigwedge \mathcal U\label{eq:HSD1}
\end{equation}
}
\end{claim}
Writing $\mathcal D(z)=\sum_{i\geq 0}D_iz^i\in \mathrm{End}_\mathbb Q(\bigwedge \mathcal U)\llbracket z\rrbracket$, equation (\ref{eq:HSD1}) is equivalent to
\begin{equation}
D_j(u\wedge v)=\sum_{i=0}^jD_iu\wedge D_{j-i}v.
\end{equation}
If $A\in \mathrm{End}(\mathcal U)$, denote by $\delta(A)\in \mathrm{End}_\mathbb Q(\bigwedge \mathcal U)$ the unique derivation of $\bigwedge \mathcal U$ such that
\begin{equation}
\delta(A)u=A\cdot u, \qquad \forall u\in \mathcal U=\bigwedge ^1\mathcal U.
\end{equation}
\begin{claim}{\bf Proposition.} {\em The plethystic exponential
\begin{equation} \mathcal D^A(z)=\mathrm{Exp}(\delta(A)z)=\exp\left(\sum_{i\geq 1}{1\over i}\delta(A^i)z^i\right):\bigwedge \mathcal U\longrightarrow \bigwedge \mathcal U\llbracket z\rrbracket
\end{equation}
is the unique Hasse-Schmidt (HS) derivation on $\bigwedge \mathcal U$,
such that
$\mathcal D^A(z)u=\sum_{i\geq 0}(A^iu)z^i$.
}
\end{claim}
\noindent{\bf Proof.}\,\, Based on the general fact that the exponential of a derivation of an algebra is an algebra homomorphism.{
\vrule height4pt width4pt depth0pt}
Abusing notation, $X$ will also stand for the endomorphism of $\mathcal U$
given by $u\mapsto Xu$, which is nilpotent if $\mathcal U=V_n$ and $n<\infty$.
\begin{claim}{\bf Definition.} {\em The {\em Schubert derivation} $\sigma_+(z):\bigwedge V_n\rightarrow \bigwedge V_n[[z]]$ is
\begin{equation}
\sigma_+(z):=\sum_{i\geq 0}\sigma_iz^i=\mathrm{Exp}(\delta(X)z).
\end{equation}
Its inverse is
\begin{equation}
\overline{\sigma}_+(z):=\sum_{i\geq 0}(-1)^i\overline{\sigma}_iz^i=\mathrm{Exp}(-\delta(X)z).
\end{equation}
}
\end{claim}
They are clearly the unique HS derivations such that $\sigma_+(z)u=\sum_{i\geq 0}X^iu\cdot z^i$ and $\overline{\sigma}_+(z)u=u-Xuz$, for all $u\in V_n$.
\claim{} If $u=\sum_{{\bm \lambda}\in{\mathcal P}_{r,n}}a_{\bm \lambda}\cdot X^r({\bm \lambda})\in\bigwedge ^rV_n$, we denote by $u^c$ the sum $\sum_{{\bm \lambda}\in{\mathcal P}_{r,n}}a_{\bm \lambda} X^r({\bm \lambda}^c)$.
\begin{claim}{\bf Proposition.} {\em Let $\sigma_-(z):=\sigma_+(z)^*$ be the $\langle,\rangle$--adjoint of the Schubert derivation $\sigma_+(z)$, i.e.
\begin{equation}
\langle \sigma_-(z)u,v\rangle=\langle u,\sigma_+(z)v\rangle.
\end{equation}
Then $\sigma_-(z)=\mathrm{Exp}(\delta(X^{-1})z)$, where $X^{-1}$ is the unique endomorphism of $V_n$ mapping $X^j$ to $X^{j-1}$ if $j\geq 1$ and to $0$ otherwise.
}
\end{claim}
\noindent{\bf Proof.}\,\, To show that $\sigma_-(z)$ is a HS-derivation one first identifies $V_n^*$ with $V_n$ through the isomorphism $u\mapsto \langle u,\cdot\rangle$ and then argues as in \cite{pluckercone}. Then one observes that
$$
\langle \sigma_{-j}X^i,X^k\rangle=\langle X^i,\sigma_jX^k\rangle=\langle X^i,X^{j+k}\rangle=\delta^{i,j+k}=\delta^{i-j,k}=\langle X^{i-j},X^k\rangle
$$
which proves that $\sigma_{-j}X^i=X^{i-j}$. Thus $\sigma_-(z)=\mathrm{Exp}(\delta(X^{-1})z)$, because both sides restrict to the same endomorphism of $V_n$. {
\vrule height4pt width4pt depth0pt}
Recall the notation of \ref{notpier}. The $\langle,\rangle$--adjoint $\sigma_-(z)$ of $\sigma_+(z)$ will be called a Schubert derivation as well. The reason is the following:
\claim{\bf Theorem.} {\em Schubert derivations $\sigma_\pm(z)$ satisfy Pieri's rule, i.e.
\begin{equation}
\sigma_iX^r({\bm \lambda})=\sum_{{\bm \mu}\in PF_i({\bm \lambda})}X^r({\bm \mu})\label{eq:mypier}
\end{equation}
and
\begin{equation}
\sigma_{-i}X^r({\bm \lambda})=\sum_{{\bm \mu}\in PF_{-i}({\bm \lambda})}X^r({\bm \mu})\label{eq:mypier-}
\end{equation}
}
\noindent{\bf Proof.}\,\,
Formula (\ref{eq:mypier}) is, up to notation, \cite[Theorem 2.4]{SCHSD}. To prove (\ref{eq:mypier-}), due to the fact that $(X^r({\bm \lambda}))_{{\bm \lambda}\in{\mathcal P}_{r,n}}$ is an orthonormal basis of $\bigwedge ^rV_n$, one has
\begin{eqnarray*}
\sigma_{-i}X^r({\bm \lambda})&=&\sum_{{\bm \mu}\in{\mathcal P}_{r,n}}\langle \sigma_{-i}
X^r({\bm \lambda}), X^r({\bm \mu})\rangle X^r({\bm \mu})\cr\cr
&=&\sum_{{\bm \mu}\in{\mathcal P}_{r,n}}\langle X^r({\bm \lambda}),\sigma_i X^r({\bm \mu})\rangle
X^r({\bm \mu})
\end{eqnarray*}
Using Pieri's formula (\ref{eq:mypier}) one has
$$
\langle X^r({\bm \lambda}),\sigma_i X^r({\bm \mu})\rangle=\langle X^r({\bm \lambda}),\sum_{{\bm \nu}\in PF_i({\bm \mu})}
X^r({\bm \nu})\rangle
$$
and ${\bm \nu}\in PF_i({\bm \mu})$ if and only if
$
|{\bm \nu}|=|{\bm \mu}|+i\qquad\mathrm{and}\qquad \nu_1\geq \mu_1\geq\cdots\geq\nu_r
\geq \mu_r.
$
Thus
$$
\langle X^r({\bm \lambda}),\sigma_i X^r({\bm \mu})\rangle=\delta_{{\bm \lambda},{\bm \nu}}
$$
i.e. the only nonzero coefficients are those for which $\nu_1=\lambda_1,
\ldots,\nu_r=\lambda_r$, which are then the summands of $\sigma_{-i}X^r({\bm \lambda})$.
{
\vrule height4pt width4pt depth0pt}
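Pieri's rule for the Schubert derivation can be verified mechanically in small rank. The sketch below (ours, with hypothetical helper names) expands $D_i(X^a\wedge X^b)$ using the HS property $D_i(u\wedge v)=\sum_{p+q=i}D_pu\wedge D_qv$ with $\sigma_jX^k=X^{j+k}$, straightens the wedges with signs, and compares the result with the interlacing enumeration of (\ref{eq:mypier}):

```python
from collections import Counter

def sigma_i_wedge(i, a, b):
    """D_i(X^a ∧ X^b), with D_i the i-th coefficient of Exp(delta(X)z):
    D_i(u∧v) = sum_{p+q=i} D_p u ∧ D_q v, wedges straightened."""
    out = Counter()
    for p in range(i + 1):
        x, y = a + p, b + (i - p)
        if x == y:
            continue          # X^x ∧ X^x = 0
        if x > y:
            out[(x, y)] += 1
        else:
            out[(y, x)] -= 1  # swap factors, pick up a sign
    return +out               # drop entries that cancelled to zero

def pieri(i, lam):
    """Right-hand side of Pieri for r=2: partitions mu with |mu|=|lam|+i
    and the interlacing condition mu_1 >= lam_1 >= mu_2 >= lam_2."""
    l1, l2 = lam
    out = Counter()
    for m2 in range(l2, l1 + 1):
        m1 = l1 + l2 + i - m2
        if m1 >= l1:
            out[(1 + m1, m2)] += 1  # X^2(mu) = X^{1+mu_1} ∧ X^{mu_2}
    return +out

lam = (2, 1)                        # X^2(lam) = X^3 ∧ X^1
for i in range(4):
    assert sigma_i_wedge(i, 3, 1) == pieri(i, lam)
print("Pieri rule verified for lam =", lam)
```

For example $\sigma_1(X^3\wedge X^1)=X^4\wedge X^1+X^3\wedge X^2$, matching the two horizontal strips on $(2,1)$ of size one.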
\section{Proof of Theorem 1 and one generalization}\label{sec:proof}
By Proposition~\ref{prop:prop24}, the product $S_i({\mathbf x})S_{\bm \lambda}({\mathbf x})$ obeys Pieri's formula. With respect to the inner product $\langle,\rangle$ for which $(S_{\bm \lambda}({\mathbf x}))_{{\bm \lambda}\in{\mathcal P}}$ is an orthonormal basis of $B$,
one has
\begin{equation}
\left\langle \exp(\sum_{i\geq 0}x_it^i)S_{\bm \lambda}({\mathbf x}),S_{\bm \mu}({\mathbf x})\right\rangle=\left\langle S_{\bm \lambda}({\mathbf x}),\exp\left(\sum_{i\geq 1}{t^i\over i}{\partial\over \partial x_i}\right)S_{\bm \mu}({\mathbf x})\right\rangle\label{eq:s+s-ad}
\end{equation}
Because of (\ref{eq:s+s-ad}), it follows that the coefficients $S_i(\widetilde{\partial})$ of $\exp\left(\displaystyle{\sum_{i\geq 1}{t^i\over i}{\partial\over \partial x_i}}\right)$ satisfy the dual Pieri formula as in (\ref{eq:mypier-}).
Therefore
\begin{eqnarray*}
\pi_{r,n}(S_i(\widetilde{\partial})S_{\bm \lambda}({\mathbf x}))&=&\pi_{r,n}\Big(\sum_{{\bm \mu}\in PF_{-i}({\bm \lambda})}S_{\bm \mu}({\mathbf x})\Big)=\sum_{{\bm \mu}\in PF_{-i}({\bm \lambda})}\pi_{r,n}(S_{\bm \mu}({\mathbf x}))\cr\cr\cr
&=&\sum_{{\bm \mu}\in PF_{-i}({\bm \lambda})}\Omega^{\bm \mu}=\sigma_i\cap \Omega^{\bm \lambda}.
\end{eqnarray*}
{
\vrule height4pt width4pt depth0pt}
Let
\begin{equation}
F({\bf u}, {\bf t}):=\left.\exp\left(\sum_{i\geq 1}t_i{\partial\over \partial x_i}\right)\right|_{x_i=0}\exp\left(\sum_{i\geq 1}u_iS_i({\mathbf x})\right).
\end{equation}
\claim{\bf Proposition.} {\em
\begin{equation}
F({\bf u},{\bf t})=\exp\left(\sum_{i\geq 1}u_iS_i({\bf t})\right),\label{eq:integrals}
\end{equation}
where $S_i({\bf t})$ is the Schur polynomial in the variable ${\bf t}$.
}
\noindent{\bf Proof.}\,\,
It amounts to straightforward manipulation with the Taylor formula.{
\vrule height4pt width4pt depth0pt}
\claim{\bf Remark.} Formula (\ref{eq:integrals}) is the generating function of the ``integrals'' of products of special Schubert cycles.
Putting $h_i:=S_i({\mathbf x})$, it is the generating function of
$$
\left({\partial\over \partial x_1}\right)^{|{\bm \mu}|}h_{\bm \mu}
$$
where if ${\bm \mu}=(\mu_1,\ldots,\mu_r)\in{\mathcal P}_r$, one sets
$
h_{\bm \mu}:=h_{\mu_1}\cdots h_{\mu_r}
$.
\claim{} A remarkable special case is obtained by setting $t_1=t$, and $t_j=0$ for all $j\geq 2$:
\begin{equation}
F({\bf u}, t)=\exp\left(\sum_{i\geq 1}u_i{t^i\over i!}\right).
\end{equation}
If ${\bm \mu}=(1^{m_1}\cdots r^{m_r})$ it is easy to see that
$$
\left({\partial\over \partial x_1}\right)^{|{\bm \mu}|}h_{\bm \mu}={(m_1+2m_2+\cdots +rm_r)!\over (1!)^{m_1}(2!)^{m_2}\cdots(r!)^{m_r}}
$$
because it is merely the coefficient of $\displaystyle{t^{m_1+2m_2+\cdots+r m_r}\over (m_1+2m_2+\cdots+r m_r)!}$
in the expansion of $F({\bf u},t)$. In particular
\begin{equation}
h_{\bm \mu}=\sum_{{\bm \lambda}\vdash m_1+2m_2+\cdots+rm_r}\langle h_{\bm \mu},S_{\bm \lambda}({\mathbf x})\rangle S_{\bm \lambda}({\mathbf x})\label{eq:itera}
\end{equation}
from which, applying the derivative with respect to $x_1$ to (\ref{eq:itera}) $(m_1+2m_2+\cdots+rm_r)$ times:
\begin{equation}
{(m_1+2m_2+\cdots +rm_r)!\over (1!)^{m_1}(2!)^{m_2}\cdots(r!)^{m_r}}=\sum_{{\bm \lambda}\vdash m_1+2m_2+\cdots+rm_r}S_{\bm \mu}(\widetilde{\partial})S_{\bm \lambda}({\mathbf x})\cdot f^{\bm \lambda}\label{eq:lasf}
\end{equation}
which generalizes (\ref{eq:square}). Special cases of (\ref{eq:lasf}) are studied explicitly in \cite{BeGa2}.
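The coefficient extraction above is elementary to test (our sketch; `integral` is a hypothetical name for the quantity $\big(\partial/\partial x_1\big)^{|{\bm\mu}|}h_{\bm\mu}$, which equals $|{\bm\mu}|!/\prod_i\mu_i!$, i.e. $(m_1+2m_2+\cdots+rm_r)!/\prod_i(i!)^{m_i}$ in exponential notation):

```python
from math import factorial

def integral(mu):
    """d! / (mu_1! * mu_2! * ...) with d = |mu|: the coefficient of
    t^d/d! in prod_i t^(mu_i)/mu_i!, read off from F(u, t)."""
    d = sum(mu)
    denom = 1
    for part in mu:
        denom *= factorial(part)
    return factorial(d) // denom

# mu = (2,1): coefficient of t^3/3! in (t^2/2!)(t/1!) = t^3/2, so 3!/2 = 3,
# matching (1+2)!/((1!)^1 (2!)^1) in the (1^{m_1} ... r^{m_r}) notation.
print(integral((2, 1)))  # 3
```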
\noindent
{\bf Acknowledgment.} For supplying the opportunity to give the talk and for logistic and financial support, the Scientific and Organizing Committees of INPANGA 2020[+1] are warmly acknowledged. Thanks are due to Piotr Pragacz and Pietro Pirola for helpful discussions.
Work sponsored by INPAN and, partially, by Finanziamento Diffuso della Ricerca, no. 53 RBA17GATLET del Politecnico di
Torino.
\begin{thebibliography}{99}
\bibitem{BeThesis} O.~Behzad, {\em Hasse-Schmidt Derivations and Vertex Operators on Exterior Algebras}, Ph.D. Thesis, Institute for Advanced Studies in Basic Sciences, 2021.
\bibitem{BeGa2} O.~Behzad, L.~Gatto, {\em Integrals on Fermionic Fock Spaces}, in progress (2021).
\bibitem{BeCoGaVi}
O.~Behzad, A.~Contiero, L.~Gatto, and R.~Vidal Martins, \emph{Polynomial
representations of endomorphisms of exterior powers}, Collect. Math. (2021), \href{https://link.springer.com/article/10.1007/s13348-020-00310-5}{https://doi.org/10.1007/s13348-020-00310-5.}
\bibitem{Fulyoung}
W.~Fulton, \emph{Young tableaux}, London Mathematical Society Student Texts,
vol.~35, Cambridge University Press, Cambridge, 1997, With applications to
representation theory and geometry.
\bibitem{Ful}
W.~Fulton, \emph{Intersection theory}, Springer, 1984.
\bibitem{SCHSD}
L.~Gatto, \emph{Schubert calculus via {H}asse-{S}chmidt derivations}, Asian J.
Math. \textbf{9} (2005), no.~3, 315--321.
\bibitem{HSDGA}
L.~Gatto and P.~Salehyan, \emph{Hasse-{S}chmidt derivations on {G}rassmann
algebras}, IMPA Monographs, vol.~4, Springer, [Cham], 2016, With applications
to vertex operators.
\bibitem{pluckercone}
L.~Gatto and P.~Salehyan, \emph{On {P}l\"ucker equations characterizing
{G}rassmann cones}, Schubert varieties, equivariant cohomology and
characteristic classes\, ---\,{IMPANGA} 15, EMS Ser. Congr. Rep., Eur. Math.
Soc., Z\"urich, 2018, pp.~97--125,
\href{https://ga.org/pdf/1603.00510.pdf}{\tt{arXiv:1603.00510}}.
\bibitem{gln}
\bysame, \emph{The cohomology of the {G}rassmannian is a {$gl_n$}-module},
Comm. Algebra \textbf{48} (2020), no.~1, 274--290.
\bibitem{SDIWP}
\bysame, \emph{Schubert derivations on the infinite exterior power}, Bull.
Braz. Math. Soc., New Series \textbf{xxx} (2020), no.~1, 2s.
\bibitem{gillespie}
M.~Gillespie, \emph{Variations on a theme of Schubert calculus},
pp.~115--158, Springer International Publishing, Cham, 2019.
\bibitem{thrall}
J.~S.~Frame, G. de B.~Robinson, R.~M.~Thrall \emph{The hook graphs of the
symmetric groups}, Canad. J. Math. \textbf{6} (1954), 316--324.
\bibitem{BeGa}
O.~Behzad, L.~Gatto, \emph{{Bosonic and Fermionic Representations of
Endomorphisms of Exterior Algebras}},
{\href{https://arxiv.org/abs/2009.00479}{\tt ArXiv:2009.00479}}, 2018.
\bibitem{smirnov}
E.~Smirnov, \emph{Grassmannians, flag varieties, and {G}elfand-{Z}etlin
polytopes}, Recent developments in representation theory, Contemp. Math.,
vol. 673, Amer. Math. Soc., Providence, RI, 2016, pp.~179--226.
\end{thebibliography}
\noindent
{\rm Letterio~Gatto}\\
{\tt \href{mailto:[email protected]}{[email protected]}}\\
{\it Dipartimento~di~Scienze~Matematiche}\\
{\it Politecnico di Torino}
\end{document}
\begin{document}
\begin{abstract}
Let $k$ be a rational congruence function field and
consider an arbitrary finite separable
extension $K/k$. If for each
prime in $k$ ramified in $K$ we have that at least one
ramification index is not divided by the characteristic of $K$,
we find the genus field $\g K$, except for constants,
of the extension $K/k$. In general, we describe the
genus field of a global function field.
\end{abstract}
\maketitle
\section{Introduction}\label{S1}
C. F. Gauss \cite{Gau1801} was the first to consider what
is now called the {\em genus field} of a quadratic
number field. H. Hasse \cite{Has51} introduced genus theory
for quadratic number fields describing the theory invented by
Gauss by means of class field theory. H. W. Leopoldt \cite{Leo53}
determined the genus field $\g K$ of an absolute abelian
number field $K$ generalizing the work of Hasse. Leopoldt
developed the theory using Dirichlet characters and relating them
with the arithmetic of $K$. A. Fr\"ohlich \cite{Fro59-1, Fro59-2} introduced
the concept of genus fields for nonabelian number fields.
Fr\"ohlich defined the genus field $\g K$ of an arbitrary
finite number field $K/{\mathbb Q}$ as $\g K:= Kk^{\ast}$ where
$k^{\ast}$ is the maximal abelian number field such that $Kk^{\ast}/
K$ is unramified. We have that $k^{\ast}$ is the maximal abelian
number field contained in $\g K$. The degree $[\g K:K]$ is called
the {\em genus number} of $K$ and the Galois group $\operatorname{Gal}(
\g K/K)$ is called the {\em genus group} of $K$.
If $K_H$ denotes the Hilbert class field (HCF) of $K$, we have
$K\subseteq \g K\subseteq K_H$ and $\operatorname{Gal}(K_H/K)$ is
isomorphic to the class group $Cl_K$ of $K$. Then $\g K$
corresponds to a subgroup $G_K$ of $Cl_K$, that is,
$\operatorname{Gal}(\g K/K)\cong Cl_K/G_K$. The subgroup $G_K$ is called
the {\em principal genus} of $K$ and $|Cl_K/G_K|$ is equal to
the genus number of $K$.
M. Ishida \cite{Ish76} described the genus field $\g K$ of any
finite extension of ${\mathbb Q}$, allowing ramification at the
infinite primes. Given a number field $K$, Ishida found an
abelian number field $k_1^{\ast}$ and described another number
field $k_2^{\ast}$ such that $k^{\ast}=k_1^{\ast}k_2^{\ast}$ and
$k_1^{\ast}\cap k_2^{\ast}={\mathbb Q}$. The field $k_1^{\ast}$ was
constructed by means of the finite primes $p$ such that at least
one ramification index of the decomposition of $p$ in $K$ is not
divisible by $p$. In other words, by those primes $p$ such that
at least one prime in $K$ above $p$ is tamely ramified.
We are interested in genus fields in the context of congruence
(global) function fields. In this case there is no proper notion of
Hilbert class field since all the constant field extensions are abelian
and unramified. In fact, if the class number of a congruence function
field $K$ is $h_K$ then there are exactly $h:=h_K$ abelian
extensions $K_1,\ldots, K_h$ of $K$ such that $K_i/K$ are maximal
unramified with exact field of constants of each $K_i$ the same
as the one of $K$, ${\mathbb F}_q$, the finite field of $q$ elements
and $\operatorname{Gal}(K_i/K)\cong Cl_{K,0}$ the group of classes of
divisors of degree zero
(\cite[Chapter 8, page 79]{ArTa67}).
M. Rosen \cite{Ros87} gave a definition of Hilbert class fields of $K$,
fixing a nonempty finite set $S_{\infty}$ of prime divisors of $K$.
Using Rosen's definition of HCF it is possible to give a proper concept
of genus fields along the lines of number fields. In the literature
there have been different definitions of genus fields according to
different HCF definitions. R. Clement \cite{Cle92} found a narrow genus
field of a cyclic extension of $k={\mathbb F}_q(T)$ of prime degree dividing
$q-1$. She used the concept of HCF similar to that of a quadratic
number field $K$: it is the finite abelian extension of $K$ such
that the prime ideals of the ring of integers ${\mathcal O}_K$
of $K$ splitting are precisely the principal ideals generated by
an element of positive norm. S. Bae and J. K. Koo \cite{BaKo96}
were able to generalize the results of Clement with the methods
developed by Fr\"ohlich \cite{Fro83}. They defined the genus
field for general global function fields and developed the analogue
of the classical genus theory.
B. Angl\`es and J.-F. Jaulent
\cite{AnJa2000} used narrow $S$--class groups to establish
the fundamental results for genus theory of finite extensions of
global fields, where $S$ is a finite nonempty set of places.
G. Peng \cite{Pen2003} explicitly described the genus theory of
Kummer function fields of prime degree $l$,
based on the global function field analogue of the
exact hexagon of P. E. Conner and J. Hurrelbrink.
C. Wittmann \cite{Wit2007} extended Peng's results to the
case $l\nmid q(q-1)$ and used his results to study the $l$--part
of the ideal class groups of cyclic extensions of degree $l$.
S. Hu and Y. Li \cite{HuLi2010} described explicitly the genus
field of an Artin--Schreier extension of $k={\mathbb F}_q(T)$.
In \cite{MaRzVi2013, MaRzVi2015} we
developed a theory of genus fields of
congruence function fields using Rosen's definition of HCF.
The methods we used there
are based on the ideas of Leopoldt using Dirichlet
characters and a general description of $\g K$ in terms of
Dirichlet characters. The genus field
$\g K$ was obtained for an abelian extension $K$ of
$k$. The method can be used to give $\g K$ explicitly when
$K/k$ is a cyclic extension of prime degree $l\mid q-1$ (Kummer)
or $l=p$ where $p$ is the characteristic (Artin--Schreier) and
also when $K/k$ is a $p$--cyclic extension (Witt). Later
on, the method was used in \cite{BaRzVi2013} to describe
$\g K$ explicitly when $K/k$ is a cyclic extension of degree
$l^n$, where $l$ is a prime number and $l^n\mid q-1$.
In this paper we consider a congruence function field $K$ with
exact field of constants a finite extension of ${\mathbb F}_q$ and such that $K$
is a finite separable extension of $k=
{\mathbb F}_q(T)$. We use
Rosen's definition of HCF
of $K$ to find $\g K$ as $\g K=Kk^{\ast}$, where $k^{\ast}$ is the
composite of two fields $k_1^{\ast}$ and $k_2^{\ast}$.
Following Ishida's methods, we prove that $k_1^{\ast}$ is
contained in the composite of constants with
fields $F_{{\mathcal P}}$ where
$k\subseteq F_{{\mathcal P}} \subseteq \lam P$, ${\mathcal P}$
is fully ramified in $F_{{\mathcal P}}/k$, ${\mathcal P}$ running in the set of finite
primes ramified in $K/k$. Here $P$ is the monic irreducible
polynomial in $T$ associated to ${\mathcal P}$ and $\lam P$
is the $P$--th cyclotomic function field. The field $k_2^{\ast}$
encodes the wild ramification of the extension $k^{\ast}/k$.
The main difficulty handling $k_1^{\ast}$
is the decomposition of the infinite primes.
\section{Notation and basic results on cyclotomic function fields}\label{S2}
The results on function fields and cyclotomic function fields
we need in this
paper may be consulted in \cite{Vil2006}. Let ${\mathbb F}_q$ be the finite
field of $q$ elements and of characteristic $p$. Let $k={\mathbb F}_q(T)$ be a fixed
rational function field. Let $K$ be a congruence (global) function field with
exact field of constants a finite extension of ${\ma F}_q$ and
such that $K/k$ is a finite separable
extension. Let $R_T={\mathbb F}_q[T]$ be the ring
of polynomials and let $R_T^+$ denote the subset of monic irreducible
polynomials. The pole of $T$ in $k$ will be denoted by ${\mathcal P}_{\infty}$. We
say that ${\mathcal P}_{\infty}$ is the {\em infinite prime} of $k$.
For $M\in R_T\setminus
\{0\}$, $\Lambda_M$ denotes the $M$ torsion of the Carlitz module
and $\lam M$ denotes the $M$--th cyclotomic function field.
We have that $\lam M/k$ is an abelian extension and $\operatorname{Gal}(\lam M/k)
\cong \G M$. For any $M\in R_T\setminus \{0\}$ the prime ${\mathcal P}_{\infty}$ has
ramification index $q-1$ and
decomposes in $\Phi(M)/(q-1)$ primes of degree one in $\lam M$.
The inertia group of ${\mathcal P}_{\infty}$ in $\lam M/k$ is identified with ${\mathbb F}_q^{\ast}
\subseteq \G M$. The fixed field $\lam M^{{\mathbb F}_q^{\ast}}=\lam M^+$
is the {\em maximal real subfield} of $\lam M$.
By a geometric extension we mean an extension without new
constants.
The definition we use for the Hilbert class field is the one given by
Rosen. That is:
\begin{definition}\label{D2.1}{\rm{
Given a congruence function field $K$, the {\em Hilbert class field}
$K_H$ of $K$ is defined as the maximal unramified abelian extension
of $K$ such that all the primes in $K$ above ${\mathcal P}_{\infty}$ decompose fully.
}}
\end{definition}
With this definition of Hilbert class field, the definition of the genus
field of $K/k$ is given as follows.
\begin{definition}\label{D2.2}{\rm{
Let $K$ be a finite separable
extension of $k$. The {\em genus field} $\g K$ of
$K$ is the maximal extension of $K$ contained in $K_H$ that is the
composite of $K$ and an abelian extension of $k$. Equivalently,
$\g K=K k^{\ast}$, where $k^{\ast}$ is the maximal abelian extension of
$k$ contained in $K_H$.
}}
\end{definition}
Note that it is possible to have $\g K=K E$ for several different
subfields $E\subsetneqq k^{\ast}$. We are interested in $k^{\ast}$
itself. In particular we have $k^{\ast}=\g {k^{\ast}}$ and since $k^{\ast}/k$
is abelian, the description of such $k^{\ast}$ may be found in
\cite{MaRzVi2013}.
The set of prime divisors in $k$ will be denoted by ${\mathbb P}_k$ and
let ${\mathbb P}_k^{\ast}:={\mathbb P}_k\setminus \{{\mathcal P}_{\infty}\}$ be the set of
finite primes of $k$.
The conorm map
from a field $E$ to a field $F$ will be denoted by $\operatorname{con}_{E/F}$. In an extension
$F/E$, $e(F|E)$ denotes the ramification index of a prime in $F$ above
one in $E$. If the primes are ${\mathbbthfrak P}$ and ${\mathbbthfrak p}$ we also write
$e(F|E)=e({\mathbbthfrak P}|{\mathbbthfrak p})=e_{F/E}({\mathbbthfrak P}|{\mathbbthfrak p})$.
The symbol $d_E({\mathbbthfrak p})$
denotes the degree of ${\mathbbthfrak p}$ for a prime ${\mathbbthfrak p}$ in $E$.
If ${\mathcal P}$ is a prime in $k$, its degree will be denoted by $d_{{\mathcal P}}$.
For any finite extension $E/k$ and any $m\in{\mathbb N}$ we denote
the extension of constants $E{\mathbb F}_{q^m}$ by $E_m$.
Let ${\mathcal P}_1,\ldots, {\mathcal P}_s,{\mathcal P}_{s+1},\ldots, {\mathcal P}_t$
be the finite primes in $k$ ramified in $K$. Let
$P_i\in R_T^+$ be such that the divisor $(P_i)_k$ is
$(P_i)_k=\frac{{\mathcal P}_i}{{\mathcal P}_{\infty}^{\deg P_i}}$ for $1\leq i\leq t$.
For a prime ${\mathcal P}\in{\mathbb P}_k$, if $\operatorname{con}_{k/K}{\mathcal P}=\prod_{i=1}^{r}{\mathfrak p}_i^{e_i}$,
we denote
\begin{gather}\label{Eq0.1}
e_{{\mathcal P}}=\gcd(e_1,\ldots,e_r)= p^{u_{{\mathcal P}}} e_{{\mathcal P}}^{(0)}, \quad u_{{\mathcal P}}\geq
0,\quad \gcd(p,e^{(0)}_{{\mathcal P}})=1.
\end{gather}
We assume that
$p\nmid e_{{\mathcal P}_i}$ for $1\leq i\leq s$ and $p\mid e_{{\mathcal P}_j}$ for
$s+1\leq j\leq t$. That is, $u_{{\mathcal P}_i}=0$ for
$1\leq i\leq s$ and $u_{{\mathcal P}_j}
\geq 1$ for $s+1\leq j\leq t$.
One of the main tools used in this paper is the following result.
\begin{theorem}[Abhyankar's Lemma]\label{T2.3}
Let $F/E$ be a finite separable extension of function fields. Suppose that
$F=E_1E_2$ with $E\subseteq E_i\subseteq F$. Let ${{\mathcal P}}$ be a prime
of $E$ and ${\mathbbthfrak P}$ a prime in $F$ above ${{\mathcal P}}$. Let
${\mathbbthfrak p}_i={\mathbbthfrak P}
\cap E_i$ for $i=1,2$. If at least one of the extensions $E_i/E$ is tamely
ramified at ${\mathbbthfrak p}_i$, then
\[
e_{F/E}({\mathbbthfrak P}|{{\mathcal P}})=\operatorname{lcm}[e_{E_1/E}({\mathbbthfrak p}_1|{{\mathcal P}}), e_{E_2/E}
({\mathbbthfrak p}_2|{{\mathcal P}})].
\]
\end{theorem}
\noindent{\em Proof.}\,\, \cite[Theorem 12.4.4]{Vil2006}. $\Box$
We also recall two results.
\begin{proposition}\label{PalestineP4.1} Let $L/k$ be a finite abelian
extension, $P\in R_T^+$ and $d:=\deg P$. Assume $P$
is tamely ramified in $L/k$. If $e$ denotes
the ramification index of $P$ in $L/k$, we have
$e\mid q^d-1$.
\end{proposition}
\noindent{\em Proof.}\,\, \cite[Proposition 4.1]{SaRzVi2013}. $\Box$
\begin{proposition}\label{PalestineP4.2}
Let $L/k$ be an abelian extension where at most
a prime divisor ${\mathbbthfrak p}_0$ of degree $1$ is
ramified and the extension is tamely ramified. Then
$L/k$ is a constant extension.
\end{proposition}
\noindent{\em Proof.}\,\, \cite[Proposition 4.2]{SaRzVi2013}. $\Box$
With respect to the genus field of a finite abelian
extension of $k$, we have the following result (see \cite[Theorem
4.2]{MaRzVi2013} and \cite{MaRzVi2015}).
\begin{theorem}\label{T2.6}
Let $K$ be a finite abelian
extension of $k$ where ${\mathcal P}_{\infty}$ is tamely ramified. Let $N\in R_T$
and $m\in{\mathbb N}$ be such that $K\subseteq {\lam N}{\mathbb F}_{q^m}$.
Let $\g E$ be the genus field of $E:=k(\Lambda_N)\cap
K{\mathbb F}_{q^m}$.
Let $H_1$ be the subgroup that corresponds
to the decomposition group of the infinite primes of $K$ in
$\g E K/K$ under the Galois correspondence. Then
the genus field of $K$ is
$$
\g K= \g {E^{H_1}}K.
\eqno{\Box}
$$
\end{theorem}
\begin{remark}\label{R2.7} {\rm{In Theorem \ref{T2.6}, it was assumed
originally that
$K/k$ is a geometric extension. This is not necessary; the same proof
that works for the geometric case works for any
finite abelian extension $K/k$.
}}
\end{remark}
\section{General case}\label{S4}
We consider a finite separable extension $K/k$ and let
$\g K=Kk^{\ast}$.
First we prove some general results.
\begin{proposition}\label{P3.1}
Let $E/k$ be a finite tamely ramified abelian extension.
For a finite prime ${\mathcal P}\in {\mathbb P}_k$, let
$\operatorname{con}_{k/K}{\mathcal P}=\prod_{i=1}^{r}{\mathfrak p}_i^{e_i}$ and
let $e^{\ast}_{{\mathcal P}}$ be the ramification index of ${\mathcal P}$ in $E/k$.
Then $KE/K$ is
unramified at the primes above ${\mathcal P}$ if and only if
$e^{\ast}_{{\mathcal P}}\mid e_{{\mathcal P}}$, where $e_{{\mathcal P}}$ is given
by {\rm{(\ref{Eq0.1})}}.
\end{proposition}
\noindent{\em Proof.}\,\, Let ${\mathfrak P}$ be a prime in $KE$ above
${\mathcal P}={\mathfrak P}\cap k$. Thus
${\mathfrak P}\cap K={\mathfrak p}_i$ for some $i$.
Then, from Abhyankar's Lemma, we
have
\[
e({\mathfrak P}|{\mathcal P})=\operatorname{lcm}[e({\mathfrak p}_i|{\mathcal P}),e({\mathfrak P}\cap E|{\mathcal P})]=
\operatorname{lcm} [e_i,e^{\ast}_{{\mathcal P}}]=e({\mathfrak P}|{\mathfrak p}_i)e({\mathfrak p}_i|{\mathcal P})=
e({\mathfrak P}|{\mathfrak p}_i) e_i.
\]
Therefore ${\mathfrak P}$ is unramified in $KE/K
\iff e({\mathfrak P}|{\mathfrak p}_i)=1 \iff \operatorname{lcm}[e_i,e^{\ast}_{{\mathcal P}}]=
e_i\iff e_{{\mathcal P}}^{\ast}\mid e_i$. The result follows. $\Box$
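The unramifiedness criterion in Proposition \ref{P3.1} is a purely arithmetic statement about least common multiples, and it can be sanity-checked numerically. The following Python sketch (illustrative only, with arbitrary small indices; not part of the argument) confirms that $\operatorname{lcm}[e_i,e^{\ast}]/e_i=1$ exactly when $e^{\ast}\mid e_i$:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def relative_index(e_i, e_star):
    """e(P|p_i) recovered from Abhyankar's Lemma:
    lcm[e_i, e*] = e(P|p_i) * e_i."""
    return lcm(e_i, e_star) // e_i

# Unramified above p_i exactly when e* divides e_i.
for e_i in range(1, 30):
    for e_star in range(1, 30):
        assert (relative_index(e_i, e_star) == 1) == (e_i % e_star == 0)
```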
Consider the conorm of the infinite prime of $k$:
\begin{gather}\label{Eq3.2'}
\operatorname{con}_{k/K} {\mathcal P}_{\infty}=\prod_{i=1}^{r_{\infty}}{\mathfrak p}_{i,\infty}^{e_{i,\infty}}.
\end{gather}
Let $t_i$ be the degree of ${\mathfrak p}_{i,\infty}$ and
\begin{gather}\label{Eq3.2''}
t_0:=\gcd (t_1,\ldots,t_{r_{\infty}}).
\end{gather}
\begin{proposition}\label{P3.2} The field of constants of $\g K$ is
${\mathbb F}_{q^{t_0}}$.
\end{proposition}
\textit{Proof.} Consider the extension of constants $K{\mathbb F}_{q^m}/K$ with
$m\geq 1$. We have that ${\mathfrak p}_{i,\infty}$ splits into $\gcd(t_i,m)$ factors
(\cite[Theorem 6.2.1]{Vil2006}). Therefore ${\mathfrak p}_{i,\infty}$
decomposes fully
in $K{\mathbb F}_{q^m} \iff \gcd(t_i,m)=m\iff m\mid t_i$.
Thus the infinite primes of $K$ decompose
fully in $K{\mathbb F}_{q^m} \iff m\mid t_0$. It
follows that ${\mathbb F}_{q^{t_0}}$ is the field of constants of $\g K$. $\Box$
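The splitting rule used in the proof, namely that a prime of degree $t$ splits into $\gcd(t,m)$ factors in the constant extension of degree $m$, can be modeled by Frobenius orbits: the $q^m$-power Frobenius acts on the $t$ conjugate roots as $i\mapsto i+m \pmod t$. A Python sketch (not part of the argument) counting these orbits:

```python
from math import gcd

def orbit_count(t, m):
    """Number of orbits of i -> i + m (mod t) on Z/t: models how many
    factors a degree-t prime acquires in the constant extension of degree m."""
    seen = set()
    count = 0
    for start in range(t):
        if start in seen:
            continue
        count += 1
        i = start
        while i not in seen:
            seen.add(i)
            i = (i + m) % t
    return count

# The prime splits into gcd(t, m) factors, and decomposes fully
# (that is, into m factors) exactly when m divides t.
for t in range(1, 20):
    for m in range(1, 20):
        assert orbit_count(t, m) == gcd(t, m)
```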
\begin{proposition}\label{P3.3} Let ${\mathcal P}$
be a prime divisor of $k$ of degree
$d$ such that ${\mathcal P}\neq {\mathcal P}_{\infty}$ and
$p\nmid e_{{\mathcal P}}$. Let $\g K=Kk^{\ast}$ and
let $e_{{\mathcal P}}^{\ast}$ be the
ramification index of ${\mathcal P}$ in $k^{\ast}/k$.
Then $\gcd\big(e_{{\mathcal P}},
\frac{q^d-1}{q-1}\big)\mid e_{{\mathcal P}}^{\ast}$
and $e_{{\mathcal P}}^{\ast}\mid e_{{\mathcal P}}$.
\end{proposition}
\textit{Proof.} From Proposition \ref{P3.1} we have $e^{\ast}_{{\mathcal P}} \mid
\gcd (e_1,\ldots,e_r)=e_{{\mathcal P}}$. Furthermore ${\mathcal P}$ is fully ramified
in $k(\Lambda_P)/k$, where $(P)_k=\frac{{\mathcal P}}{{\mathcal P}_{\infty}^{\deg P}}$, and
${\mathcal P}_{\infty}$ decomposes fully in $k(\Lambda_P)^{+}/k$. The degree
of the extension $k(\Lambda_P)^{+}/k$
is $(q^d-1)/(q-1)$. Let $S$ be the subfield $k\subseteq
S\subseteq k(\Lambda_P)^{+}$ of degree $\gcd\big(e_{{\mathcal P}},\frac{q^d-1}{q-1}\big)$.
Then by Proposition \ref{P3.1} we have that
$S$ satisfies that $KS/K$ is unramified and the infinite primes in $K$
decompose fully in $KS/K$, since ${\mathcal P}_{\infty}$ decomposes fully in $S/k$.
Therefore $KS\subseteq Kk^{\ast}$, $S\subseteq k^{\ast}$ and
$\gcd\big(e_{{\mathcal P}},\frac{q^d-1}{q-1}\big)\mid e_{{\mathcal P}}^{\ast}$. $\Box$
Let $G:=\operatorname{Gal}(k^{\ast}/k)$ and let $G_p$ be the $p$--Sylow subgroup
of $G$. Then $G=G_0\times G_p$ with $p\nmid |G_0|$.
Therefore we have the decomposition
\begin{gather}\label{Eq3.1''}
k^{\ast}=k_1^{\ast}k_2^{\ast},\quad k_1^{\ast}\cap k_2^{\ast}=k, \quad
G_0=\operatorname{Gal}(k_1^{\ast}/k),\quad G_p=\operatorname{Gal}(k_2^{\ast}/k).
\end{gather}
Thus, $k_1^{\ast}/k$ is tamely ramified and $k_2^{\ast}/k$
is a $p$--extension so that it is wildly
ramified unless it is an extension of constants.
Now we study the field $k_1^{\ast}$.
To find an explicit description of $k_1^{\ast}$ we proceed as
follows. Let
\begin{gather}\label{Eq3.1'}
F_0:=\prod_{{\mathcal P}\in{\mathbb P}_k^{\ast}}F_{{\mathcal P}}=\prod_{i=1}^t F_{{\mathcal P}_i}
\end{gather}
where $k\subseteq F_{{\mathcal P}}\subseteq k(\Lambda_P)$ is the unique subfield of
the extension $k(\Lambda_P)/k$ of degree
\begin{gather}\label{Eq3.3'}
c_{{\mathcal P}}:=\gcd (e_{{\mathcal P}},q^{d_{{\mathcal P}}}-1)=\gcd (e^{(0)}_{{\mathcal P}},q^{d_{{\mathcal P}}}-1).
\end{gather}
Therefore $F_0$
satisfies that $KF_0/K$ is unramified at every finite
prime (Proposition \ref{P3.1}).
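Since $q^{d_{{\mathcal P}}}-1$ is coprime to the characteristic $p$, the two gcd's in (\ref{Eq3.3'}) agree whenever $e^{(0)}_{{\mathcal P}}$ denotes the prime-to-$p$ part of $e_{{\mathcal P}}$. A quick Python sanity check (the values $q=9$, $p=3$ are illustrative):

```python
from math import gcd

def prime_to_p_part(e, p):
    """Largest divisor of e coprime to p (the e^(0) of the text)."""
    while e % p == 0:
        e //= p
    return e

q, p = 9, 3  # illustrative: q a power of the characteristic p
for d in range(1, 6):
    for e in range(1, 200):
        e0 = prime_to_p_part(e, p)
        # gcd(e, q^d - 1) = gcd(e^(0), q^d - 1), since p does not divide q^d - 1
        assert gcd(e, q**d - 1) == gcd(e0, q**d - 1)
```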
Let $R:=k(\Lambda_{P_1\cdots P_t})$ and $R^+:=k(
\Lambda_{P_1\cdots P_t})^+$. Then $F_0\subseteq R$.
\begin{theorem}\label{T3.4'} With the notation as above, we have
\[
k_1^{\ast}\subseteq F_0{\mathbb F}_{q^{u_1}}\quad \text{and}\quad
K(F_0\cap R^+){\mathbb F}_{q^{t_0^{\prime}}} \subseteq K k_1^{\ast}
\subseteq KF_0 {\mathbb F}_{q^{u_1}},
\]
for some $u_1\in{\mathbb N}$,
where $F_0$ is given by {\rm (\ref{Eq3.1'})}, $t_0$
is given by Proposition {\rm{\ref{P3.2}}} and $t_0=t_0^{\prime}
p^v$ with $\gcd(t_0^{\prime}, p)=1$.
\end{theorem}
\textit{Proof.}
We will prove that $k_1^{\ast}\subseteq F_0{\mathbb F}_{q^{u_1}}$ for some
$u_1\in{\mathbb N}$.
For any prime ${\mathcal P}\in {\mathbb P}_k^{\ast}$
we obtain from Proposition \ref{P3.1} that if the ramification
index of ${\mathcal P}$ in $k_1^{\ast}/k$ is $b_{{\mathcal P}}$, then
$b_{{\mathcal P}}|e_{{\mathcal P}}$ and since $k_1^{\ast}/k$ is a finite abelian
tamely ramified
extension, we have $b_{{\mathcal P}}\mid q^{d_{{\mathcal P}}}-1$
(Proposition \ref{PalestineP4.1}). Hence
$b_{{\mathcal P}}\mid
c_{{\mathcal P}}=\gcd(e_{{\mathcal P}},q^{d_{{\mathcal P}}}-1)=[F_{{\mathcal P}}:k]$.
Let $F_{{\mathcal P}}^{\prime}$ be the
subfield of $F_{{\mathcal P}}$ of degree $b_{{\mathcal P}}$ over $k$.
We may assume that the finite primes ramified in $k_1^{\ast}/k$
are exactly ${\mathcal P}_1,\ldots,
{\mathcal P}_t$: if some of the $b_{{\mathcal P}_i}$ are equal to $1$,
the argument below still goes through.
We start with ${\mathcal P}_1$. From Abhyankar's Lemma we have that the
ramification index of ${\mathcal P}_1$ in $k_1^{\ast}F_{{\mathcal P}_1}^{\prime}$ over $k$ is
$b_{{\mathcal P}_1}$. Let $I_{{\mathcal P}_1}$ be the inertia group of ${\mathcal P}_1$
in $k_1^{\ast} F^{\prime}_{{\mathcal P}_1}$, which is of order $b_{{\mathcal P}_1}$.
Let $E_1$ be the fixed field of $k_1^{\ast}F^{\prime}_{{\mathcal P}_1}$ under
$I_{{\mathcal P}_1}$. Since ${\mathcal P}_1$ is fully ramified in $F^{\prime}_{{\mathcal P}_1}/k$
and unramified in $E_1/k$ we have $E_1\cap F^{\prime}_{{\mathcal P}_1}=k$
and
\begin{gather*}
[E_1F^{\prime}_{{\mathcal P}_1}:k]=[E_1:k][F^{\prime}_{{\mathcal P}_1}:k]=
\frac{[k_1^{\ast}F^{\prime}_{{\mathcal P}_1}:k]}{|I_{{\mathcal P}_1}|}|I_{{\mathcal P}_1}|=
[k_1^{\ast}F^{\prime}_{{\mathcal P}_1}:k].
\end{gather*}
\[
\xymatrix{k_1^{\ast}\ar@{-}[rr]
\ar@{-}[dd]&&k_1^{\ast}F^{\prime}_{{\mathcal P}_1}=
E_1F^{\prime}_{{\mathcal P}_1}\ar@{-}[dd]
\ar@{-}[dl]_{I_{{\mathcal P}_1}}\\ &E_1=(k_1^{\ast}F^{\prime}_{{\mathcal P}_1})^{I_{{\mathcal P}_1}}
\ar@{-}[dl]\\
k\ar@{-}[rr]^{b_{{\mathcal P}_1}}&&F^{\prime}_{{\mathcal P}_1}}
\]
Therefore $k_1^{\ast}F^{\prime}_{{\mathcal P}_1}=E_1F^{\prime}_{{\mathcal P}_1}$.
Furthermore, since
${\mathcal P}_2, \ldots, {\mathcal P}_t$ are unramified in $F^{\prime}_{{\mathcal P}_1}$, their
ramification indices are $b_{{\mathcal P}_2},\ldots, b_{{\mathcal P}_t}$
in $E_1F^{\prime}_{{\mathcal P}_1}/F^{\prime}_{{\mathcal P}_1}$.
Thus ${\mathcal P}_2, \ldots, {\mathcal P}_t$
have ramification indices $b_{{\mathcal P}_2},\ldots, b_{{\mathcal P}_t}$
in $E_1/k$.
Take now $E_1$ instead of $k_1^{\ast}$ and $F^{\prime}_{{\mathcal P}_2}$
instead of $F^{\prime}_{{\mathcal P}_1}$.
We obtain $E_2$ such that
$E_1F_{{\mathcal P}_2}^{\prime}=E_2F_{{\mathcal P}_2}^{\prime}$ and ${\mathcal P}_3,
\ldots,{\mathcal P}_t$ are the only finite primes of $k$ ramified
in $E_2$, with ramification indices $b_{{\mathcal P}_3},\ldots, b_{{\mathcal P}_t}$
respectively. Note that
\[
k_1^{\ast}F^{\prime}_{{\mathcal P}_1}F^{\prime}_{{\mathcal P}_2}=
E_1F^{\prime}_{{\mathcal P}_1}F^{\prime}_{{\mathcal P}_2}=F^{\prime}_{{\mathcal P}_1}E_1
F^{\prime}_{{\mathcal P}_2}=F^{\prime}_{{\mathcal P}_1}E_2F^{\prime}_{{\mathcal P}_2}=
E_2F^{\prime}_{{\mathcal P}_1}F^{\prime}_{{\mathcal P}_2}.
\]
In the general
step we have $E_{i-1}F_{{\mathcal P}_i}^{\prime}=E_iF_{{\mathcal P}_i}^{\prime}$, the
ramification indices of ${\mathcal P}_{i+1},\ldots, {\mathcal P}_t$ in $E_i/k$ are
$b_{{\mathcal P}_{i+1}}, \ldots, b_{{\mathcal P}_t}$,
and $k_1^{\ast}F^{\prime}_{{\mathcal P}_1}\cdots F^{\prime}_{{\mathcal P}_i}=
E_i F^{\prime}_{{\mathcal P}_1}\cdots F^{\prime}_{{\mathcal P}_i}$.
Continuing in this way we finally obtain
$E_t$ such that
$E_{t-1}F_{{\mathcal P}_t}^{\prime}=E_tF_{{\mathcal P}_t}^{\prime}$,
no finite prime is ramified in $E_t/k$ and
$k_1^{\ast} F_0^{\prime} = E_t F_0^{\prime}$, where
$F_0^{\prime}=\prod_{i=1}^t F^{\prime}_{{\mathcal P}_i}$.
Since the only possibly ramified
prime in $E_t/k$ is ${\mathcal P}_{\infty}$ and it is tamely ramified,
from Proposition \ref{PalestineP4.2}
we obtain that $E_t/k$ is a constant field extension,
say $E_t={\mathbb F}_{q^{u_1}}(T)=k_{u_1}$.
Since $\big\{F_{{\mathcal P}_i}^{\prime}\big\}_{i=1}^t$ are pairwise
linearly disjoint and $F_0^{\prime}/k$ is a geometric extension, we have
\[
[F_0^{\prime}:k]=\prod_{i=1}^t [F_{{\mathcal P}_i}^{\prime}:k]=\prod_{i=1}^t
b_{{\mathcal P}_i}, \quad E_t\cap F_0^{\prime}=k \quad\text{and}\quad
[k_1^{\ast}F_0^{\prime}:k]=[E_t:k][F_0^{\prime}:k].
\]
In particular, ${\mathbb F}_{q^{u_1}}$ is the field of constants of
$k_1^{\ast}F_0^{\prime}$.
Therefore $k_1^{\ast}\subseteq k_1^{\ast}F_0^{\prime}= E_t F_0^{\prime} \subseteq F_0{\mathbb F}_{q^{u_1}}$ and
$Kk_1^{\ast}\subseteq KF_0{\mathbb F}_{q^{u_1}}$.
Finally, since the extension $K(F_0\cap R^+){\mathbb F}_{q^{t_0^{\prime}}}/K$
is unramified, of degree prime to $p$, and the infinite primes
decompose fully, it follows
that $K(F_0\cap R^+){\mathbb F}_{q^{t_0^{\prime}}}
\subseteq Kk_1^{\ast}$. $\Box$
\begin{remark}\label{R3.4''}{\rm{
In the proof of Theorem \ref{T3.4'} we have in fact obtained that
$k_1^{\ast}\subseteq E_tF_0^{\prime}$ and that ${\mathbb F}_{q^{u_1}}$ is the
field of constants of $k_1^{\ast}F_0^{\prime}$.
}}
\end{remark}
To study $k_2^{\ast}$ we first prove:
\begin{lemma}\label{L3.5}
We have $\g {k^{\ast}} =\g {({k^{\ast}_1})}
\g {({k^{\ast}_2})}= k^{\ast}$. Furthermore $\g {({k^{\ast}_1})}
=k^{\ast}_1$ and $\g {({k^{\ast}_2})}=k^{\ast}_2$.
\end{lemma}
\textit{Proof.} We have $k^{\ast}=k_1^{\ast}k_2^{\ast}$ and we have
already noted that $\g {k^{\ast}}=k^{\ast}$.
Since $\g {({k^{\ast}_1})}/k_1^{\ast}$ is unramified and the
infinite primes decompose fully, the same holds in the
extension $k^{\ast}
\g {({k^{\ast}_1})}/k^{\ast}$ so that $\g {({k^{\ast}_1})}\subseteq
\g {k^{\ast}}$. Similarly $\g {({k^{\ast}_2})}\subseteq \g {k^{\ast}}$.
Hence $\g {({k^{\ast}_1})}\g {({k^{\ast}_2})}\subseteq \g {k^{\ast}}$.
Now, since $\g {({k^{\ast}_1})}\supseteq k_1^{\ast}$ and
$\g {({k^{\ast}_2})}\supseteq k_2^{\ast}$, we obtain
\[
\g {k^{\ast}}=k^{\ast} =k_1^{\ast} k_2^{\ast}\subseteq
\g {({k^{\ast}_1})}\g {({k^{\ast}_2})}\subseteq \g {k^{\ast}}.
\]
Let now $[k_1^{\ast}:k]=a$ and $[k_2^{\ast}:k]=p^v$ where $p\nmid a$.
If $k_1^{\ast}\subsetneqq \g {({k^{\ast}_1})}$, let
$M:=\g {({k^{\ast}_1})}\cap k_2^{\ast}$. From the Galois correspondence
we obtain that $M\neq k$.
Let $[M:k]=p^b$ with $b\geq 1$. The extension $M/k$ is unramified:
otherwise some prime of $k$ would have ramification index $p^{c}$, $c\geq 1$,
in $M$, and since $p\nmid a$, there would be a
ramified prime in $\g {({k^{\ast}_1})}/k_1^{\ast}$ with ramification index
$p^c$, contradicting that $\g {({k^{\ast}_1})}/k_1^{\ast}$ is unramified. Thus $M/k$
is an extension of constants. It follows that ${\mathcal P}_{\infty}$ has inertia degree
$p^b$ in $M/k$ but this implies that the inertia degree of the
infinite primes in $\g {({k^{\ast}_1})}/k_1^{\ast}$ is $p^b$
which is impossible.
Therefore $\g {({k^{\ast}_1})}=k_1^{\ast}$.
Similarly $\g {({k^{\ast}_2})}=k^{\ast}_2$. $\Box$
\begin{remark}\label{R3.6}{\rm{
In general, if $L=L_1L_2$, then $\g {(L_1)}\g {(L_2)} \subseteq
\g L$ but not necessarily $\g L=
\g {(L_1)}\g {(L_2)}$. For instance, let $q>2$ and
let $P,Q,R,S\in R_T$ be four monic
irreducible polynomials in $k$. Let $L_1:=k(\Lambda_{P^2Q^2})^+$
and $L_2:=k(\Lambda_{R^2S^2})^+$. Then $L_1=\g {(L_1)}$ and
$L_2=\g {(L_2)}$. Let $L:=L_1L_2$. Then $\g L=k(\Lambda_{
P^2Q^2R^2S^2})^+$ and $[\g L:L]=q-1>1$. Thus $\g L=\g {(L_1
L_2)}\neq \g {(L_1)}\g {(L_2)}= L$.
}}
\end{remark}
Now let $\operatorname{Gal}(k_2^{\ast}/k)\cong C_{p^{n_1}}\times \cdots
\times C_{p^{n_{\nu}}}$ and, for each $1\leq i\leq \nu$, let $E_i$
be the subfield $k\subseteq E_i\subseteq k_2^{\ast}$
such that $\operatorname{Gal}(E_i/k)\cong C_{p^{n_i}}$. Then from
\cite[Theorem 5.7]{MaRzVi2013}, we obtain that $\g {(E_i)}$ is the
composite of $p$--cyclic extensions of $k$ such that in each
one only one prime is ramified or is an extension of constants.
Therefore, $\g {({k^{\ast}_2})}=k_2^{\ast}$ is the composite
of this type of cyclic $p$--extensions.
Finally, since $u_{{\mathcal P}_j}\geq 1$ for $s+1\leq j\leq t$ (see (\ref{Eq0.1})),
we have the following result.
\begin{theorem}\label{T3.6}
The field $k_2^{\ast}$ is of the form
$k_2^{\ast} = J_{s+1} J_{s+2}\cdots J_t J_{\infty}$ where
${\mathcal P}_j$ is the only ramified prime in $J_j/k$,
$[J_j:k]=p^{v_j}$ with $0\leq v_j\leq u_{{\mathcal P}_j}$ for $s+1\leq j\leq t$,
and $J_{\infty}$ is an abelian $p$--extension of $k$ that is
either an extension of constants or
such that ${\mathcal P}_{\infty}$ is the only ramified prime. $\Box$
\end{theorem}
\section{The genus field in a special case}\label{S3}
Let $K/k$ be a finite separable extension
such that for all ${\mathcal P}\in{\mathbb P}_k$, $p\nmid e_{{\mathcal P}}=
\gcd(e_1,\ldots,e_r)$, where $\operatorname{con}_{k/K}{\mathcal P}=\prod_{i=1}^{r}{\mathfrak p}_i^{e_i}$.
That is, we assume $t=s$ and also $p\nmid e_{{\mathcal P}_{\infty}}$.
We have that $k_1^{\ast}$ is given by (\ref{Eq3.1''}) and
in this case we have $k_2^{\ast}/k$ is unramified.
Hence $k_2^{\ast}/k$ is
an extension of constants.
To find a more explicit description of $k_1^{\ast}$ we proceed as
follows. First we consider the behavior of ${\mathcal P}_{\infty}$.
Let $\operatorname{con}_{k/K}{\mathcal P}_{\infty}$ be given by (\ref{Eq3.2'}).
We have
$e_{\infty}(F_{{\mathcal P}}|k)\mid
\gcd(c_{{\mathcal P}},q-1)$ for ${\mathcal P}\in {\mathbb P}_k^{\ast}$. By
Abhyankar's Lemma we obtain that if
\[
c_{\infty}:=e_{\infty}(F_0|k),
\]
is the ramification index of ${\mathcal P}_{\infty}$ in $F_0/k$
then
\[
c_{\infty}\mid \operatorname{lcm}\big[\gcd(e_{{\mathcal P}_1},q-1),\ldots,\gcd(e_{{\mathcal P}_s},q-1)\big]=
\gcd\big(\operatorname{lcm}[e_{{\mathcal P}_1},\ldots, e_{{\mathcal P}_s}],q-1\big).
\]
To obtain a formula for $c_{\infty}$, we consider the following.
We have from (\ref{Eq3.3'})
\[
c_{{\mathcal P}}=[F_{{\mathcal P}}:k]=\gcd (e_{{\mathcal P}},q^{d_{{\mathcal P}}}-1),
\]
where $d_{{\mathcal P}}=d_k({\mathcal P})$. Let $H:=\operatorname{Gal}(R/F_0)$, where $R=k(
\Lambda_{P_1\cdots P_s})$. Let $M:=F_0R^+$, where $R^+=k(
\Lambda_{P_1\cdots P_s})^+$.
Therefore
\begin{align*}
e_{\infty}(M|k)&=e_{\infty}(M|R^+)e_{\infty}(R^+|F_0\cap R^+)
e_{\infty}(F_0\cap R^+|k)\\
&=[M:R^+]\cdot 1\cdot 1=[M:R^+]=
[F_0:F_0\cap R^+];\\
e_{\infty}(M|k)&=e_{\infty}(M|F_0)e_{\infty}(F_0|F_0\cap R^+)
e_{\infty}(F_0\cap R^+|k)\\
&=1\cdot e_{\infty}(F_0|F_0\cap R^+)\cdot 1=
e_{\infty}(F_0|F_0\cap R^+).
\end{align*}
Hence
\begin{gather}\label{Eq3.2}
c_{\infty}=e_{\infty}(F_0|k)=e_{\infty}(F_0|F_0\cap R^+)=e_{\infty}
(M|k) =[F_0:F_0\cap R^+]=[M:R^+].
\end{gather}
We choose the maximal field $F$ with $F_0\cap R^+\subseteq F
\subseteq F_0$ and such that the
infinite primes of $K$ decompose fully in $KF$.
Note that such field $F$ exists since if $F_1,F_2$ are two fields such
that $F_0\cap R^+\subseteq F_i\subseteq F_0$ and such that the
infinite primes of $K$ decompose fully in $KF_i/K$, $i=1,2$, then
$F_1F_2$ satisfies the same properties.
\begin{remark}\label{R4.-1}{\rm{
With the notation
of Theorem \ref{T3.4'},
observe that since $KF/K$ is unramified, and
because ${\mathcal P}_{\infty}$ splits fully in $KF/K$,
it follows that $F\subseteq k_1^{\ast}$
so that $Fk_1^{\ast}=k_1^{\ast}
\subseteq k_1^{\ast}F_0^{\prime}$. Since $F_0\cap
R^+\subseteq F_0^{\prime}\subseteq F_0$, we have
$F\subseteq F_0^{\prime}$.
In general we may have $F_0^{\prime}\neq F$,
see Example \ref{Ex5.1}.
}}
\end{remark}
Next, we determine $F$ for an abelian extension $K/k$.
\begin{proposition}\label{P4.0}
Let $K/k$ be a finite abelian tamely ramified extension. With the
notation in Theorem {\rm{\ref{T2.6}}} we have
\begin{gather*}
F\subseteq \g E \subseteq F_0,
\intertext{more precisely}
F=\g {E^{H_1}}\quad\text{and}\quad \g K=KF.
\end{gather*}
\end{proposition}
\textit{Proof.} In this case $s=t$ and $N=P_1\cdots P_t$. Since
for any prime ${\mathcal P}$ in $k$, the ramification index in $K/k$ is the
same as the ramification index in $E/k$ (see \cite[Section 4.1]
{MaRzVi2013}) and $F_0=\prod_{{\mathcal P}\in{\mathbb P}_k^{\ast}} F_{{\mathcal P}}$,
we have $\g E\subseteq F_0$.
The infinite prime decomposes fully in $F_0\cap R^+/k$. Hence
the infinite primes decompose fully in $E(F_0\cap R^+)/E$. Since
the extension $E(F_0\cap R^+)/E$ is unramified, we have
$F_0\cap R^+\subseteq \g E$.
We observe that by Abhyankar's Lemma (Theorem \ref{T2.3}) the
extension $K(F_0\cap R^+)/K$ is unramified and the infinite primes
decompose fully. Thus $F\subseteq \g E$.
Finally, again by Abhyankar's Lemma, $K\g E/K$ is unramified and the
inertia of the infinite primes corresponds to $H_1$, that is,
$\g {E^{H_1}}$ is the maximal
extension such that $F_0 \cap R^+\subseteq \g {E^{H_1}}\subseteq
F_0$ and that in $K\g {E^{H_1}}/K$ the infinite primes decompose
fully. Therefore $F=\g {E^{H_1}}$. From Theorem \ref{T2.6},
it follows that $\g K=KF$. $\Box$
\begin{remark}\label{Ex5.4}{\rm{
Let $K/k$ be a finite tamely ramified abelian extension. Let ${\mathcal P}_1,
\ldots,{\mathcal P}_s$ be the finite ramified primes. Then
$F_0=\prod_{i=1}^s F_{{\mathcal P}_i}$ with $k\subseteq F_{{\mathcal P}_i}
\subseteq k(\Lambda_{P_i})$. We have $[F_{{\mathcal P}_i}:k]=c_{{\mathcal P}_i}=\gcd
(e_{{\mathcal P}_i},q^{\deg P_i}-1)$. Since $K/k$ is abelian and tamely
ramified we have $e_{{\mathcal P}_i}\mid q^{\deg P_i}-1$
(Proposition \ref{PalestineP4.1}). Therefore, $c_{{\mathcal P}_i} = e_{{\mathcal P}_i}$.
}}
\end{remark}
Now let
\[
c^{\prime}_{\infty}:=[F:F_0\cap R^+]=e_{\infty}(F|k).
\]
Since
\[
c_{\infty}=[F_0:F_0\cap R^+] =[F_0:F][F:F_0\cap R^+]=
[F_0:F]c^{\prime}_{\infty}
\]
we have $c^{\prime}_{\infty}\mid c_{\infty}$.
\[
\xymatrix{&R\ar@{-}[d]_{H\cap {\mathbb F}_q^{\ast}}
\ar@/^5pc/@{-}[ddd]^{q-1}\ar@/^3pc/@{-}[dd]^{H_2}\\
F_0\ar@{-}[r]\ar@{-}[d]_{H_2/(H\cap {\mathbb F}_q^{\ast})}\ar@/^1pc/@{-}[ur]^H
\ar@/_6pc/@{-}[dd]_{c_{\infty}}&
M=F_0 R^+\ar@{-}[d]_{H_2/(H\cap {\mathbb F}_q^{\ast})}\\
F\ar@{-}[r]\ar@{-}[d]_{c^{\prime}_{\infty}}&N=FR^+\ar@{-}[d]_{
c^{\prime}_{\infty}}\\
F_0\cap R^+
\ar@{-}[r]&R^+}
\]
The degree $c^{\prime}_{\infty}$ must satisfy the following. By
Abhyankar's Lemma, we have that if ${\mathfrak P}$ is a prime in $KF$
dividing ${\mathcal P}_{\infty}$ and if ${\mathfrak P}\cap K={\mathfrak p}_{i,\infty}$, then
\begin{align*}
e({\mathfrak P}|{\mathcal P}_{\infty})&=\operatorname{lcm}[e_{i,\infty},c^{\prime}_{\infty}]=\frac{
e_{i,\infty}c^{\prime}_{\infty}}{\gcd(e_{i,\infty},c^{\prime}_{\infty})}\\
&=e({\mathfrak P}|{\mathfrak p}_{i,\infty})e({\mathfrak p}_{i,\infty}|{\mathcal P}_{\infty})=
e({\mathfrak P}|{\mathfrak p}_{i,\infty})e_{i,\infty}.
\end{align*}
It follows that
\begin{gather}\label{Eq3.3}
e({\mathfrak P}|{\mathfrak p}_{i,\infty})=\frac{c^{\prime}_{\infty}}
{\gcd(e_{i,\infty},c^{\prime}_{\infty})}.
\end{gather}
Therefore
\begin{gather*}
e({\mathfrak P}|{\mathfrak p}_{i,\infty})=1\iff
\gcd (e_{i,\infty},c^{\prime}_{\infty})=c^{\prime}_{\infty}
\iff c^{\prime}_{\infty}\mid e_{i,\infty}.
\end{gather*}
Thus, $KF/K$ is unramified if and only if
$c^{\prime}_{\infty}\mid e_{{\mathcal P}_{\infty}}=\gcd (e_{1,\infty},\ldots,
e_{r_{\infty},\infty})$.
Therefore $c^{\prime}_{\infty}$ must be maximal
in the sense that $c^{\prime}_{\infty}\mid c_{\infty}$,
$c^{\prime}_{\infty}\mid
e_{\infty}$, where $e_{\infty}=e_{{\mathcal P}_{\infty}}$,
$c_{\infty}$ is given by (\ref{Eq3.2}),
and the infinite primes of $K$ decompose
fully in $KF$. Hence
\begin{gather}\label{Eq3.30}
c^{\prime}_{\infty}\mid\gcd (c_{\infty}, e_{\infty}).
\end{gather}
That is, $F$ is the field
\begin{gather}\label{Eq3.31}
F_0\cap R^+\subseteq F\subseteq F_0\quad\text{such that}\quad
[F:F_0\cap R^+]=c^{\prime}_{\infty}.
\end{gather}
Let $H_2$ be the subgroup of ${\mathbb F}_q^{\ast}$ of order $\frac{q-1}
{c^{\prime}_{\infty}}$ and let $N:=R^{H_2}$. Note that
$|H\cap {\mathbb F}_q^{\ast}|=[R:F_0R^+]=
\frac{q-1}{c_{\infty}}$. Hence $|H\cap {\mathbb F}_q^{\ast}|\mid |H_2|$ and from
(\ref{Eq3.30}) we obtain
\[
[M:N]=[F_0:F]=\frac{c_{\infty}}{c^{\prime}_{\infty}}.
\]
With the above notation we have the following result.
\begin{theorem}\label{T3.4} Let $K/k$ be a finite separable
extension such that
every prime ${\mathcal P}\in {\mathbb P}_k$ satisfies that if
$\operatorname{con}_{k/K}{\mathcal P}=\prod_{i=1}^{r}{\mathfrak p}_i^{e_i}$, then $p\nmid e_{{\mathcal P}}= \gcd(e_1,\ldots,e_r)$.
Then
\[
KF{\mathbb F}_{q^{t_0}} \subseteq \g K \subseteq KF_0 {\mathbb F}_{q^{u}},
\]
where $F_0$ is given by {\rm (\ref{Eq3.1'})}, $F$ is
given by {\rm (\ref{Eq3.31})}, $t_0$
is given by {\rm{(\ref{Eq3.2''})}},
and $u\in{\mathbb N}$.
Furthermore, $KF_0{\mathbb F}_{q^{u}}/K$ is unramified
at every finite prime, and the ramification index of the
infinite prime ${\mathfrak p}_{i,\infty}$
is $\frac{c_{\infty}}{\gcd(e_{i,\infty},c_{\infty})}$, $1\leq i\leq r_{\infty}$,
where $c_{\infty}$ is given by {\rm{(\ref{Eq3.2})}}.
\end{theorem}
\textit{Proof.}
Because $KF/K$ is unramified and the infinite
primes decompose fully, we have $F\subseteq k_1^{\ast}$.
Therefore by Proposition \ref{P3.2} we have
$KF{\mathbb F}_{q^{t_0}}\subseteq K k^{\ast}=\g K$.
Since $p\nmid e_{{\mathcal P}}$ for all ${\mathcal P}\in{\mathbb P}_k$ it follows
from Theorem \ref{T3.6} and
Proposition \ref{PalestineP4.2} that $k_2^{\ast}/k$
is an extension of constants, so that $k_2^{\ast}={\mathbb F}_{
q^{u_2}}(T)$. Furthermore, since $k_2^* \subseteq \g K$ we have $u_2|t_0$.
From Theorem \ref{T3.4'} we have $k_1^{\ast}\subseteq F_0
{\mathbb F}_{q^{u_1}}$ for some $u_1\in{\mathbb N}$
and $Kk_1^{\ast}\subseteq KF_0{\mathbb F}_{q^{u_1}}$.
Hence $\g K=Kk^{\ast}=Kk_1^{\ast}k_2^{\ast}\subseteq
KF_0{\mathbb F}_{q^{u}}$, where $u=\operatorname{lcm} [u_1,u_2]$.
The ramification index of ${\mathcal P}_{\infty}$ in
$F_0/k$ is $c_{\infty}$ where $c_{\infty}$ is
given by (\ref{Eq3.2}).
Now applying (\ref{Eq3.3}) to $F_0$ and $c_{\infty}$,
we obtain that the ramification index of ${\mathfrak p}_{i,\infty}$
in $KF_0{\mathbb F}_{q^{u}}/K$ is $\frac{c_{\infty}}
{\gcd(e_{i,\infty},c_{\infty})}$.
$\Box$
\begin{remark}\label{R3.5} {\rm{
Note that
$F\subseteq \g K\cap F_0$.
Because $\g K\cap F_0 \subseteq \g K$, the infinite primes of $K$ decompose
fully in $(\g K\cap F_0 )K$. Moreover, $F_0 \cap R^+ \subseteq \g K\cap F_0 \subseteq F_0$. It follows from the maximality
of $F$ that $$\g K\cap F_0=F.$$ Therefore $KF{\mathbb F}_{q^{t_0}}
\cap F_0=F$ and $\g K\cap KF_0{\mathbb F}_{q^{t_0}}=KF{\mathbb F}_{q^{t_0}}$.
Observe that if in the proof we knew that $
(\g K)_u\cap F_0=F$ (where $X_u$ denotes $X{\mathbb F}_{q^u}$), then by the Galois correspondence we
would obtain $(\g K)_u=((\g K)_u\cap F_0) K_u=FK{\mathbb F}_{q^u}$.
Hence $FK{\mathbb F}_{q^{t_0}} \subseteq \g K\subseteq (\g K)_u=
FK{\mathbb F}_{q^u}=KF{\mathbb F}_{q^{t_0}}{\mathbb F}_{q^u}$.
Therefore $\g K/FK{\mathbb F}_{q^{t_0}}$ would be an extension of
constants, and since the field of constants of $\g K$ is
${\mathbb F}_{q^{t_0}}$, the equality
\[
\g K =FK {\mathbb F}_{q^{t_0}}
\]
would follow.
If $F=F_0$, then $KF{\mathbb F}_{q^{t_0}}\subseteq
\g K\subseteq KF{\mathbb F}_{q^u}$, so that $\g K/KF{\mathbb F}_{q^{t_0}}$
is an extension of constants and then $\g K= KF{\mathbb F}_{q^{t_0}}$.
Finally, if $u=t_0$, then $\g K=\g K \cap KF_0{\mathbb F}_{q^{t_0}}=
KF{\mathbb F}_{q^{t_0}}$ (see the diagram below). Hence
$\g K=KF{\mathbb F}_{q^{t_0}}$.
Also, when $K/k$ is an abelian tamely ramified extension, we
have $\g K= KF$ (see
Proposition \ref{P4.0}).
In short, it is very likely that $\g K=KF{\mathbb F}_{q^{t_0}}$ always holds.
\begin{scriptsize}
\[
\xymatrix{F_0\ar@{-}[d]\ar@{-}[r]\ar@/_3pc/@{-}[ddd]_{c_{\infty}}
&KF_0\ar@{-}[dd]\ar@{-}[r]&(KF_0)_ {t_0}\ar@{-}[r]
\ar@{-}[dd]&\g K F_0\ar@{-}[d]\ar@{-}[r]&
(\g K F_0)_u=(KF_0)_u\ar@{-}[d]\\
({\g K})_u\cap F_0\ar@{-}[d]\ar@{-}[rrr]
|!{[r];[dd]}\hole|!{[rr];[dd]}\hole&&& ({\g K})_v\ar@{-}[r]^{u/v}
\ar@{-}[d]^{v/t_0}&({\g K})_{u}\\
F\ar@{-}[r]\ar@{-}[d]_{c^{\prime}_{\infty}}
&KF\ar@{-}[r]\ar@{-}[d]&(KF)_{t_0}\ar@{-}[r]
&\g K\ar@{-}[ur]_{u/t_0}\ar@{-}[dll]\\
F_0\cap R^+\ar@{-}[d]&K\ar@{-}[dl]\\
k}
\]
\end{scriptsize}
}}
\end{remark}
\section{Applications and examples}\label{S5}
\begin{example}\label{Ex5.1} {\rm{Consider
$q=3$ and $P=T^3+2T+1$. We have
$P$ is irreducible in ${\mathbb F}_3(T)$. Let $K=k(\sqrt{P})$. In our
construction, if ${\mathcal P}$ is the prime corresponding to $P$, we have
$F_0=F_{{\mathcal P}}=k(\sqrt{(-1)^{\deg P}P})=k(\sqrt{-P})$. Now
${\mathcal P}_{\infty}$ is ramified in $K$ and in $F_0=F_{{\mathcal P}}$. Therefore
$t_0=1$, that is, the field of constants of $\g K$ is ${\mathbb F}_3$.
Since $[R^+:k]=13$ and $[F_0:k]=2$, we have $F_0\cap R^+
=k$. Now $KF_0=K(\sqrt{-1})$. Since $\sqrt{-1}\notin
{\mathbb F}_3$ we have $K(\sqrt{-1})=K{\mathbb F}_9$ and the infinite
primes are inert in $KF_0/K$. Hence $F=k$ and $\g K=K$.
Here we have $F_0^{\prime}=F_0=k(\sqrt{-P})\neq k=F$.
}}
\end{example}
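The arithmetic behind Example \ref{Ex5.1} is easy to verify by machine: $P=T^3+2T+1$ has no root in ${\mathbb F}_3$ (hence, being a cubic, is irreducible), and $-1$ is not a square in ${\mathbb F}_3$. A Python sketch, included only as a sanity check:

```python
def P(t, q=3):
    """The polynomial P = T^3 + 2T + 1 of Example 5.1, evaluated mod q."""
    return (t**3 + 2 * t + 1) % q

# A cubic is irreducible over F_3 iff it has no root there.
assert [t for t in range(3) if P(t) == 0] == []

# sqrt(-1) does not lie in F_3: the squares mod 3 are {0, 1}.
squares = {t * t % 3 for t in range(3)}
assert (-1) % 3 not in squares
```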
\subsection{Cyclic extensions of prime degree not dividing $q(q-1)$}\label{S5.1}
Let $l$ be a prime not dividing $q(q-1)$ and let $K/k$ be a cyclic extension
of degree $l$. Let ${\mathcal P}_1,\ldots, {\mathcal P}_t$ be the primes in $k$
ramified in $K$. Note that since the inertia group of a ramified prime
is contained in the multiplicative group of the residue field, we have
$l\mid (q^{\deg P_i}-1)$ for $1\leq i\leq t$. In particular
${\mathcal P}_{\infty}$ is not ramified. In this case we have
$k\subseteq F_{{\mathcal P}_i}\subseteq k(\Lambda_{P_i})$
for $1\leq i\leq t$, where $F_{{\mathcal P}_i}$ is
the unique subfield of $k(\Lambda_{P_i})$ of degree $c_{{\mathcal P}_i}=\gcd(
e_{{\mathcal P}_i},q^{d_{{\mathcal P}_i}}-1)=l$. Then
\[
F_0=\prod_{i=1}^t F_{{\mathcal P}_i}\subseteq k(\Lambda_{P_1\cdots P_t})^+.
\]
Therefore we have $c_{\infty}=e_{{\mathcal P}_{\infty}}=1$ and $F_0\cap R^+=F_0$.
Thus $F=F_0$ and $c^{\prime}_{\infty}=1$. Furthermore, if $t_0$
is the degree of the infinite prime(s) above ${\mathcal P}_{\infty}$ in $K$, then
$t_0=1$ or $l$. In fact $t_0=1$ iff ${\mathcal P}_{\infty}$ decomposes in $K/k$.
This is equivalent to
$K\subseteq k(\Lambda_{P_1\cdots P_t})^+$. We have $t_0=l$
iff ${\mathcal P}_{\infty}$ is inert in $K/k$ iff
$K\nsubseteq k(\Lambda_{P_1\cdots P_t})^+$.
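The condition $l\mid q^{\deg P_i}-1$ noted above says precisely that $\deg P_i$ is a multiple of the multiplicative order of $q$ modulo $l$. A Python sketch with illustrative values ($q=4$, $l=5$, so that $l\nmid q(q-1)=12$):

```python
def mult_order(q, l):
    """Multiplicative order of q modulo the prime l (assumes l does not divide q)."""
    x, n = q % l, 1
    while x != 1:
        x = x * q % l
        n += 1
    return n

q, l = 4, 5  # illustrative; l does not divide q(q-1) = 12
r = mult_order(q, l)
for d in range(1, 30):
    # l | q^d - 1 exactly when ord_l(q) | d
    assert ((q**d - 1) % l == 0) == (d % r == 0)
```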
From Proposition \ref{P3.2} and Theorem \ref{T3.4}
we have $\g K {}=KF{\mathbb F}_{q^{t_0}}$, since in this case
$F=F_0$ and $u=t_0$.
First we consider
$K\subseteq k(\Lambda_{P_1\cdots P_t})^+$. Then $\g K {}=
KF{\mathbb F}_{q^{t_0}}=KF=F$ and $[\g K {}:K]=l^{t-1}$.
Now we consider $K\nsubseteq k(\Lambda_{P_1\cdots P_t})^+$.
Then $K\nsubseteq F$ and in particular $k\subseteq K\cap F
\subsetneqq K$ so that $K\cap F=k$ and $[KF:K]=[F:k]=l^t$.
We will prove ${\mathbb F}_{q^l}\subseteq KF$. First, we have
$[KF:k]=[{\mathbb F}_{q^l} F:k]=l^{t+1}$. Now if $k_l:={\mathbb F}_{q^l}(T)$,
then $k_l
\cap K=k$. Now ${\mathcal P}_{\infty}$ is inert in $K/k$ and in $k_l/k$. The
decomposition group ${\mathcal D}$ of ${\mathcal P}_{\infty}$ in $K_l=Kk_l$ is
a cyclic group of order
$l$. Consider $L:=(K_l)^{{\mathcal D}}$. The prime ${\mathcal P}_{\infty}$ decomposes fully
in $L/k$ and ${\mathcal P}_1,\ldots,{\mathcal P}_t$ are the ramified primes in $L/k$.
It follows that $L\subseteq F$. Since $L\neq K$ we obtain that
$KL=K_l$ and $KL=K_l=K{\mathbb F}_{q^l}\subseteq KF$. Thus
${\mathbb F}_{q^l}\subseteq KF$.
Therefore $\g K {}=KF{\mathbb F}_{q^l}=KF$ and $[\g K {}:K]=
[KF:K]=l^t$.
Note that this example is consistent with the results of
\cite{MaRzVi2013, MaRzVi2015}, in particular with
Theorem 4.2 and Remark 4.3 of \cite{MaRzVi2015}. In the notation
of those papers, $K\subseteq F\iff E=K$, in which case $[\g K {}:K]=
[\g E {}:E]=l^{t-1}$. When $K\nsubseteq F$, we have $\g E{}=F$
and $[\g K{}:K]=l^t=l\cdot l^{t-1}=l\,[\g E {}:E]$.
\subsection{Radical extensions}\label{S5.2}
Let $K=k(\sqrt[n]{\gamma D})$, where $D\in R_T$ is a monic polynomial
and $\gamma \in
{\mathbb F}_q^{\ast}$. Let $D=P_1^{\alpha_1}
\cdots P_s^{\alpha_s}$ be the decomposition of $D$ as product of
irreducible polynomials. We assume that $D$ is $n$--th power free, that is,
$0<\alpha_i<n$ for $1\leq i\leq s$ and we also assume $p\nmid n$.
The finite ramified primes are ${\mathcal P}_1,\ldots,{\mathcal P}_s$ and they are tamely ramified.
Indeed, let ${\mathfrak p}_i$ be a prime in $K$ above ${\mathcal P}_i$. We have
\begin{gather}\label{Eq5.1}
e({\mathfrak p}_i|{\mathcal P}_i) v_{{\mathcal P}_i}(D)=e({\mathfrak p}_i|{\mathcal P}_i) \alpha_i=
v_{{\mathfrak p}_i}(D)=v_{{\mathfrak p}_i}((\sqrt[n]{\gamma D})^n)=nv_{{\mathfrak p}_i}
(\sqrt[n]{\gamma D}).
\end{gather}
Let $d_i=\gcd(\alpha_i,n)$. We obtain from (\ref{Eq5.1}) that
$\frac{n}{d_i}\mid e({\mathfrak p}_i|{\mathcal P}_i)$. On the other hand, if we write $K=k(y)$,
where $y^n=\gamma D$, set $z=y^{n/d_i}$. Thus $z^{d_i}=y^n =
\gamma D= \gamma (P_i^{\alpha_i/d_i})^{d_i} (D/P_i^{\alpha_i})$. Therefore
$k(z)=k(\sqrt[d_i]{\gamma D/P_i^{\alpha_i}})$. In particular ${\mathcal P}_i$
is unramified in $k(z)/k$. It follows that $e({\mathfrak p}_i|{\mathcal P}_i)=n/d_i$.
Thus $e_{{\mathcal P}_i}=n/d_i$.
Similarly we obtain $e_{\infty}=e_{\infty}(K|k) =
\frac{n}{d}$, where $d=\gcd(\deg D,n)$.
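The formulas $e_{{\mathcal P}_i}=n/\gcd(\alpha_i,n)$ and $e_{\infty}=n/\gcd(\deg D,n)$ can be packaged in a small Python helper; the sample data $D=P_1^2P_2^3$ with $\deg P_1=1$, $\deg P_2=2$ and $n=6$ below are illustrative only:

```python
from math import gcd

def ram_indices(alphas, degs, n):
    """Ramification data for K = k((gamma*D)^(1/n)), D = prod P_i^{alpha_i},
    deg P_i = degs[i], with p not dividing n.  Returns ([e_{P_i}], e_infty)."""
    e_fin = [n // gcd(a, n) for a in alphas]
    deg_D = sum(a * d for a, d in zip(alphas, degs))
    e_inf = n // gcd(deg_D, n)
    return e_fin, e_inf

# Illustrative data: D = P1^2 * P2^3, deg P1 = 1, deg P2 = 2, n = 6.
e_fin, e_inf = ram_indices([2, 3], [1, 2], 6)
assert e_fin == [3, 2]   # e_{P_i} = n / gcd(alpha_i, n)
assert e_inf == 3        # deg D = 8, gcd(8, 6) = 2, so e_infty = 6/2 = 3
```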
Therefore $F_0={\mathcal P}_{\infty}rod_{i=1}^s F_{{\mathcal P}_i}$ with
$c_{{\mathcal P}_i}=\gcd(e_{{\mathcal P}_i},q^{\deg P_i}-1)=
\gcd\big({\ma F}_q^{\ast}rac{n}{d_i},q^{\deg P_i}-1\big)$. Then
\begin{gather*}
e_{\infty}(F_{{\mathcal P}_i}|k)\mid \gcd(c_{{\mathcal P}_i},q-1)=\gcd(e_{{\mathcal P}_i},q-1)
=\gcd\big(\frac{n}{d_i},q-1\big).\\
\intertext{Hence}
c_{\infty}\mid \gcd(\operatorname{lcm}[e_{{\mathcal P}_1},\ldots,e_{{\mathcal P}_s}],q-1)=\gcd\big(\operatorname{lcm}\big[
\frac{n}{d_1},
\ldots,\frac{n}{d_s}\big],q-1\big)=\gcd\big(\frac{n}{d_0},q-1\big),\\
\intertext{where $d_0=\gcd(d_1,\ldots,d_s)$. We also have}
c^{\prime}_{\infty}\mid \gcd(c_{\infty},e_{\infty})\mid\gcd\big(\frac{n}{d_0},
\frac{n}{d},q-1\big).
\end{gather*}
By Theorem \ref{T3.4} we obtain
\[
KF{\mathbb F}_{q^{t_0}} =
k(\sqrt[n]{\gamma D})F{\mathbb F}_{q^{t_0}}\subseteq \g{K} =
\g {k(\sqrt[n]{\gamma D})}\subseteq KF_0{\mathbb F}_{q^u},
\]
where $t_0, u\in{\mathbb N}$.
To find $t_0$,
we consider the subfield $E=k(\sqrt[d]{\gamma
D})\subseteq k(\sqrt[n]{\gamma D})$. Since ${\mathcal P}_{\infty}$ is unramified
in $E/k$ and fully ramified in $K/E$, we have that the
inertia degree of ${\mathcal P}_{\infty}$ in $K/k$ is equal to the inertia degree
of ${\mathcal P}_{\infty}$ in $E/k$. Note that $D(T)=T^l+a_{l-1}T^{l-1}+\cdots+a_1
T+a_0=T^l\big(1+a_{l-1}(\frac{1}{T})+\cdots +a_1(\frac{1}{T})^{l-1}
+a_0 (\frac{1}{T})^l\big)=T^l D_1(\frac{1}{T})$ with $D_1(0)=1$
and $d|l$. Hence $E=k(\sqrt[d]{\gamma D_1(1/T)})$
with $D_1(1/T)\in {\mathbb F}_q[1/T]$ and $D_1(1/T)
\equiv 1 \bmod (1/T) $. Therefore $X^d-\gamma D_1(1/T) \bmod\ {\mathcal P}_{\infty}$
becomes $\bar{X}^d-\gamma\in {\mathbb F}_q [\bar{X}]$.
Let $\mu\in\bar{\mathbb F}_q$ be a fixed $d$-th root of $\gamma$. If
$\zeta_d$ denotes a primitive $d$--th root of unity, we have that
the factorization of $\bar{X}^d-\gamma$ in ${\mathbb F}_q[\bar{X}]$ is
of the form
\[
\bar{X}^d-\gamma=\prod_{j=1}^r\operatorname{Irr}(\zeta_d^{i_j}\mu,\bar{X},{\mathbb F}_q)
\]
for some $0\leq i_1<i_2<\cdots <i_r\leq d-1$. From Hensel's Lemma,
we obtain $X^d-\gamma D_1\big(\frac{1}{T}\big)=\prod_{j=1}^r
F_j(X)$ with $F_j(X)\in k_{\infty}[X]$ distinct irreducible polynomials.
In particular $\operatorname{con}_{k/K}{\mathcal P}_{\infty}={\mathfrak p}_{\infty, 1}\cdots {\mathfrak p}_{\infty,r}$
with $\deg_K {\mathfrak p}_{\infty, j}=\deg F_j(X)$, $1\leq j\leq r$. Therefore
$t_0=\gcd_{1\leq j\leq r}\{\deg F_j(X)\}=\gcd_{1\leq j\leq r}
\{[{\mathbb F}_q(\zeta_d^{i_j}\mu):{\mathbb F}_q]\}=\gcd_{0\leq i\leq d-1}
\{[{\mathbb F}_q(\zeta_d^i \mu):{\mathbb F}_q]\}$. In short if we write $\sqrt[d]{
\gamma}=\mu$,
\begin{gather}\label{Eq5.1*}
t_0=\gcd_{1\leq j\leq r}\big[{\mathbb F}_q(\zeta_d^{i_j}\sqrt[d]{\gamma}):{\mathbb F}_q\big]
=\gcd_{0\leq i\leq d-1}\big[{\mathbb F}_q(\zeta_d^{i}\sqrt[d]{\gamma}):{\mathbb F}_q\big].
\end{gather}
It follows from (\ref{Eq5.1}) that if $\gcd(\alpha_i,n)=1$
for some $i$, then $K/k$ is a geometric extension.
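Formula (\ref{Eq5.1*}) is easy to evaluate in concrete cases: \(t_0\) is the gcd of the degrees of the irreducible factors of \(X^d-\gamma\) over \({\mathbb F}_q\). A short Python sketch using sympy illustrates the computation (the helper name \verb|t0| is ours; sympy factors over the prime field, so \(q\) is assumed prime here):

```python
# t_0 = gcd of the degrees of the irreducible factors of X^d - gamma over F_q,
# as in (Eq5.1*).  sympy factors over the prime field GF(q), so q must be prime.
from math import gcd
from functools import reduce
from sympy import symbols, factor_list, degree

def t0(q, d, gamma):
    X = symbols('X')
    _, factors = factor_list(X**d - gamma, X, modulus=q)
    return reduce(gcd, (int(degree(f, X)) for f, _ in factors))

# Matches Example 5.3 below: q = 3, gamma = -1, d = gcd(deg D, n) = gcd(4, 10) = 2;
# X^2 + 1 is irreducible over F_3, so t_0 = 2.
print(t0(3, 2, -1))   # -> 2
print(t0(5, 2, 1))    # -> 1, since X^2 - 1 splits into linear factors over F_5
```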
\begin{example}\label{Ex5.3} {\rm{Consider
$q=3$, $P_1 = T$, $P_2=T^2 - T -1$ and $D=P_1^2P_2$. Both
$P_1$ and $P_2$ are irreducible in ${\mathbb F}_3[T]$. Let $K=k(\sqrt[10]{-D})$. In our
construction, if ${\mathcal P}_i$ is the prime corresponding to $P_i$, we have
$F_{{\mathcal P}_1}=k$ and $F_{{\mathcal P}_2}=k(\sqrt{P_2})$. Hence $F_0=k(\sqrt{P_2})$.
On the other hand, ${\mathcal P}_{\infty}$ decomposes in $k(\sqrt{P_2})/k$, thus
$k(\sqrt{P_2}) \subseteq k(\Lambda_{P_1P_2})^+$. Therefore $F =F_0=k(\sqrt{P_2})$.
Since $d=2$, we obtain $t_0 = 2$. Because $u_2$ is a power of $3$ and $u_2$
divides $t_0$, we have $u_2 = 1$. From the proof of Theorem \ref{T3.4'}, we obtain that in this case
$E_1 = k_1^*$ and ${\mathbb F}_{q^{u_1}} = E_2 \subseteq k_1^*$. Therefore $u_1$ divides $t_0$. Thus
$u = u_1 \in \{1,2 \}$. It follows from Theorem \ref{T3.4} that $\g K = Kk(\sqrt{P_2}) {\mathbb F}_9$.
}}
\end{example}
\subsection{Radical extensions of prime power degree dividing
$q-1$}\label{S5.3}
As a particular case of
Subsection \ref{S5.2}, let $l$ be a prime number
such that $l^n\mid q-1$. Let $D\in R_T$ be a monic polynomial $l^n$--power
free. Let $D=P_1^{\alpha_1}\cdots P_s^{\alpha_s}$ with $P_1,\ldots,
P_s\in R_T^+$ and $v_l(\alpha_i)=a_i<n$. Let $\gamma \in {\mathbb F}_q^{\ast}$ and
$K=k(\sqrt[l^n]{\gamma D})$. Then $e_{{\mathcal P}_i}=l^{n-a_i}$, $1\leq i\leq s$.
Since $K/k$ is a cyclic extension of degree $l^n$, $K/k$ is a geometric
extension if and only if $a_i=0$ for some $1\leq i\leq s$.
Now, we have $F_{{\mathcal P}_i}\subseteq k(\Lambda_{P_i})$ and $c_{{\mathcal P}_i}=
\gcd(e_{{\mathcal P}_i},q^{\deg P_i}-1)= e_{{\mathcal P}_i}=l^{n-a_i}$. Therefore
$F_{{\mathcal P}_i}=k(\sqrt[l^{n-a_i}]{(-1)^{\deg P_i}P_i})$ and
$F_0=\prod_{i=1}^s F_{{\mathcal P}_i}$.
We have $e_{{\mathcal P}_{\infty}}=e_{\infty}=l^{n-d}$, where $d=\min\{n,d^{\prime}\}$
and $v_l(\deg D)=d^{\prime}$. Furthermore the inertia degree of
${\mathcal P}_{\infty}$ is $f_{\infty}=l^m$, where ${\mathbb F}_{q^{l^m}}={\mathbb F}_q(\sqrt[l^d]
{(-1)^{\deg D}\gamma})$ (see \cite[Proposition 2.8]{BaRzVi2013}).
Hence $t_0=l^m$ and the field of constants of $\g K$ is ${\mathbb F}_{q^{l^m}}$.
Now, with respect to ${\mathcal P}_{\infty}$ we have
$e_{\infty}(F_{{\mathcal P}_i}|k)=l^{n-a_i-d_i}$, where
$d_i=\min\{n-a_i,d_i^{\prime}\}$, $v_l(\deg P_i)=d^{\prime}_i$.
From Abhyankar's Lemma we obtain
\[
e_{\infty}(F_0|k)=\operatorname{lcm}[e_{\infty}(F_{{\mathcal P}_i}|k)\mid 1\leq i\leq s]=
\operatorname{lcm}[l^{n-a_i-d_i}\mid 1\leq i\leq s]=l^{n-\delta},
\]
where $\delta=\min\limits_{1\leq i\leq s}\{a_i+d_i\}=
\min\limits_{1\leq i\leq s}\{a_i+\min\{n-a_i,d_i^{\prime}\}\}=
\min\limits_{1\leq i\leq s}\{n,v_l(\deg P_i^{\alpha_i})\}$. That is
\[
c_{\infty}=[F_0:F_0\cap R^+]=e_{\infty}(F_0|k)=l^{n-\delta}.
\]
Note that $d\geq \delta$. Then $c^{\prime}_{\infty}\mid\gcd(c_{\infty},
e_{\infty})=\gcd(l^{n-\delta},l^{n-d})=l^{n-d}$.
We have that $F$ is the subfield $F_0\cap R^+\subseteq F
\subseteq F_0$ such that $[F:F_0\cap R^+]=c_{\infty}^{\prime}\mid
l^{n-d}$.
\[
\xymatrix{
F_0\ar@{-}[d]^{\frac{c_{\infty}}{c_{\infty}^{\prime}}}
\ar@/_2pc/@{-}[dd]_{l^{n-\delta}=c_{\infty}}\\
F\ar@{-}[d]^{c^{\prime}_{\infty}\mid l^{n-d}}\\ F_0\cap R^+}
\]
\begin{example}\label{Ex5.3'}{\rm{
Let $k={\mathbb F}_5(T)$ and
$K:=k\big(\sqrt[3]{T(T^2+T+1)}\big)\cdot {\mathbb F}_{5^2}=
k\big(T,\sqrt[3]{D(T)}\big)\cdot {\mathbb F}_{5^2}$, where $D(T)=T(T^2+T+1)$. Let
$K_0:=k(\sqrt[3]{T(T^2+T+1)})$. Note that $K=K_0(\zeta_3)
=K_0\cdot {\mathbb F}_{5^2}$
is the Galois closure of $K_0/k$.
We have that $T$ and $T^2+T+1$ are irreducible in ${\mathbb F}_5[T]$
since $\zeta_3\notin {\mathbb F}_5$ and $T^2+T+1=\frac{T^3-1}{T-1}=
(T-\zeta_3)(T-\zeta_3^2)$. In fact ${\mathbb F}_5(\zeta_3)={\mathbb F}_{5^2}
={\mathbb F}_{25}$.
Let ${\mathcal P}_T$ and ${\mathcal P}_{
T^2+T+1}$ be the prime divisors in $k={\mathbb F}_5(T)$ corresponding
to $T$ and $T^2+T+1$ respectively.
The infinite prime ${\mathcal P}_{\infty}$ is unramified in $K_0/k$ and in $K/k$ because $\deg
D(T)=3$. Let $t_0(K_0)$ and $t_0(K)$ be given by (\ref{Eq3.2''}) with
respect to the fields $K_0$ and $K$ respectively. Since $\gamma=1$, from
(\ref{Eq5.1*}) we obtain
\begin{gather*}
t_0(K_0)=\gcd_{0\leq i\leq 2}\big\{\big[{\mathbb F}_5(\zeta_3^i):{\mathbb F}_5\big]\big\}=
\gcd\{1,2,2\}=1
\intertext{and}
t_0(K)=\gcd_{0\leq i\leq 2}\big\{\big[{\mathbb F}_{5^2}(\zeta_3^i):{\mathbb F}_5\big]\big\}
=\gcd_{0\leq i\leq 2}\{[{\mathbb F}_{5^2}:{\mathbb F}_5]\}= \gcd\{2,2,2\}=2.
\end{gather*}
Since $t_0(K_0)\neq t_0(K)$, it follows that $\g{(K_0)}\neq \g K$.
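The two values of \(t_0\) can be checked numerically: the degree \([{\mathbb F}_{5^r}(\zeta_m):{\mathbb F}_5]\) equals \(\operatorname{lcm}(r,\operatorname{ord}_m 5)\), where \(\operatorname{ord}_m 5\) is the multiplicative order of \(5\) modulo \(m\), and the relevant roots of unity \(\zeta_3^i\) (\(i=0,1,2\)) have orders \(1,3,3\). A Python sketch (function names are ours):

```python
# Recompute t_0(K_0) and t_0(K) from Example 5.3' via multiplicative orders:
# [F_{5^r}(zeta_m) : F_5] = lcm(r, ord(5 mod m)); zeta_3^i has order 1, 3, 3.
from math import gcd, lcm

def mult_order(q, m):
    """Multiplicative order of q modulo m (order mod 1 taken to be 1)."""
    if m == 1:
        return 1
    o, x = 1, q % m
    while x != 1:
        x, o = (x * q) % m, o + 1
    return o

def t0_with_constants(r):   # constant field F_{5^r}: r = 1 gives K_0, r = 2 gives K
    degs = [lcm(r, mult_order(5, m)) for m in (1, 3, 3)]
    return gcd(*degs)

print(t0_with_constants(1), t0_with_constants(2))   # -> 1 2
```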
Let $F_0=F_{{\mathcal P}_T} F_{{\mathcal P}_{T^2+T+1}}$. We have
\begin{gather*}
[F_{{\mathcal P}_T}:k]=c_{{\mathcal P}_T}=\gcd(e_{{\mathcal P}_T},
5^{\deg {\mathcal P}_T}-1)=\gcd(3,4)=1.
\intertext{Therefore $F_{{\mathcal P}_T}=k$.
Now}
[F_{{\mathcal P}_{T^2+T+1}}:k]=c_{{\mathcal P}_{T^2+T+1}}=
\gcd(e_{{\mathcal P}_{T^2+T+1}}, q^{\deg {\mathcal P}_{T^2+T+1}}-1)=
\gcd(3,24)=3.
\end{gather*}
Since $q-1=e_{{\mathcal P}_{\infty}}(k(\Lambda_{T^2+T+1})|k)=4$ it
follows that $F_{{\mathcal P}_{T^2+T+1}}\subseteq k(\Lambda_{T^2
+T+1})^+$ and $F_0=F_{{\mathcal P}_{T^2+T+1}}=F_0\cap
k(\Lambda_{T(T^2+T+1)})^+$. Hence $F_0$ is the
unique subfield of $k(\Lambda_{T^2+T+1})$ of degree $3$
over $k$ and $F_0=F$.
We have that $F_{{\mathcal P}_{T^2+T+1}}\cdot {\mathbb F}_{5^2}/
{\mathbb F}_{5^2}(T)$ is a Kummer extension of degree $3$ where
the finite ramified primes are the primes in ${\mathbb F}_{5^2}(T)$
dividing $T^2+T+1=(T-\zeta_3)(T-\zeta_3^2)$. Since ${\mathcal P}_{\infty}$ decomposes
fully in $F_{{\mathcal P}_{T^2+T+1}}/k$, ${\mathfrak p}_{\infty}$, the infinite prime
in ${\mathbb F}_{5^2}(T)$, decomposes fully in $F_{{\mathcal P}_{T^2+T+1}}
\cdot {\mathbb F}_{5^2}/{\mathbb F}_{5^2}(T)$. Therefore
$F_{{\mathcal P}_{T^2+T+1}} \cdot {\mathbb F}_{5^2}={\mathbb F}_{5^2}(T)
\big(\sqrt[3]{(-1)^{\deg Q}Q(T)}\big) = {\mathbb F}_{5^2}(T)
\big(\sqrt[3]{Q(T)}\big)$ with $\deg Q(T)=3$ and $T-\zeta_3, T-\zeta_3^2$ are the
unique irreducible polynomials dividing $Q(T)$.
\[
\xymatrix{
F_{{\mathcal P}_{T^2+T+1}}\ar@{-}[r]
\ar@{-}[d]_3 & F_{{\mathcal P}_{T^2+T+1}}
\cdot {\mathbb F}_{5^2}={\mathbb F}_{25}(T)\big(\sqrt[3]{Q(T)}\big)
\ar@{-}[d]^3\\
k\ar@{-}[r]&{\mathbb F}_{25}(T)
}
\]
It follows that
\begin{gather*}
F_{{\mathcal P}_{T^2+T+1}}
\cdot {\mathbb F}_{5^2}={\mathbb F}_{25}(T)\Big(\sqrt[3]{(T-\zeta_3)(T-\zeta_3^2)^2}\Big)
={\mathbb F}_{25}(T)\Big(\sqrt[3]{(T-\zeta_3)^2(T-\zeta_3^2)}\Big).
\intertext{Now, since $F_0=F$, from Remark \ref{R3.5} we obtain}
\g K=KF{\mathbb F}_{5^2}={\mathbb F}_{25}\Big(T,\sqrt[3]{T(T-\zeta_3)(T-\zeta_3^2)},
\sqrt[3]{(T-\zeta_3)(T-\zeta_3^2)^2}\Big).
\end{gather*}
Let $\g {K^{\prime}}$ be the genus
field of $K/{\mathbb F}_{25}(T)$. We may apply Peng's
Theorem. With the notations from \cite[Theorem 5.2]{MaRzVi2015}, we have
$r=3, P_1=T, P_2=T-\zeta_3, P_3=
T-\zeta_3^2, \gamma =1, \alpha=(-1)^{\deg D}\gamma =-1\in({\mathbb F}_{25}^{
\ast})^3, a_1=a_2=2$. Thus
\[
\g {K^{\prime}}={\mathbb F}_{25}\Big(T, \sqrt[3]{T(T-\zeta_3^2)^2},\sqrt[3]
{(T-\zeta_3)(T-\zeta_3^2)^2}\Big).
\]
We also have
\begin{multline*}
{\mathbb F}_{25}\Big(T,\sqrt[3]{T(T-\zeta_3)(T-\zeta_3^2)},
\sqrt[3]{(T-\zeta_3)(T-\zeta_3^2)^2}\Big) \\
={\mathbb F}_{25}\Big(T, \sqrt[3]{T(T-\zeta_3^2)^2},\sqrt[3]
{(T-\zeta_3)(T-\zeta_3^2)^2}\Big).
\end{multline*}
Therefore $\g K=\g {K^{\prime}}$ (see Remark \ref{R5.4}).
Finally, from Remark \ref{R3.5}, $\g {(K_0)}=K_0F=k\big(T,\sqrt[3]{D(T)}\big)
F_{{\mathcal P}_{T^2+T+1}}$.
}}
\end{example}
\begin{remark}\label{R5.4}{\rm{
Let $k={\mathbb F}_q(T)$ and let $k_n$ be the extension of constants
of $k$ of degree $n$.
Let $K$ be any finite extension of $k$ such that ${\mathbb F}_{q^n}
\subseteq K$. Let $\g K$ and
$\g {K^{\prime}}$ be the genus fields of $K/k$ and $K/k_n$ respectively.
Since the infinite prime divisors of $K$ decompose fully in $\g K$
and in $\g {K^{\prime}}$, and $\g K/K$ and $\g {K^{\prime}}/K$ are abelian
and unramified, it follows that $\g K=\g {K^{\prime}}$.
}}
\end{remark}
\end{document}
\begin{document}
\title[Irrationality]{Irrationality of growth constants associated with polynomial recursions}
\subjclass[2010]{11J72,11B37} \keywords{Irrationality, polynomial recursion, growth constant}
\author[S. Wagner]{Stephan Wagner}
\address{S. Wagner,
Department of Mathematical Sciences,
Stellenbosch University,
Private Bag X1,
Matieland 7602, South Africa
and
Department of Mathematics,
Uppsala Universitet,
Box 480,
751 06 Uppsala,
Sweden
}
\email{swagner\char'100sun.ac.za,stephan.wagner\char'100math.uu.se}
\author[V. Ziegler]{Volker Ziegler}
\address{V. Ziegler,
University of Salzburg,
Hellbrunnerstrasse 34/I,
A-5020 Salzburg, Austria}
\email{volker.ziegler\char'100sbg.ac.at}
\begin{abstract}
We consider integer sequences that satisfy a recursion of the form $x_{n+1} = P(x_n)$ for some polynomial $P$ of degree $d > 1$. If such a sequence tends to infinity, then it satisfies an asymptotic formula of the form $x_n \sim A \alpha^{d^n}$, but little can be said about the constant $\alpha$. In this paper, we show that $\alpha$ is always irrational or an integer. In fact, we prove a stronger statement: if a sequence $G_n$ satisfies an asymptotic formula of the form $G_n = A \alpha^n + B + O(\alpha^{-\epsilon n})$, where $A,B$ are algebraic and $\alpha > 1$, and the sequence contains infinitely many integers, then $\alpha$ is irrational or an integer.
\end{abstract}
\maketitle
\section{Introduction}
Integer sequences obtained by polynomial iteration, i.e., sequences that satisfy a recursion of the form
$$x_{n+1} = P(x_n),$$
occur in several areas of mathematics. Several interesting examples can be found in Finch's book \cite{Finch:2003} on mathematical constants, Chapter 6.10.
Let us give two concrete examples: the first is the sequence given by $x_0 = 0$ and $x_{n+1} = x_n^2 + 1$ for $n \geq 0$, which is entry A003095 in the On-Line Encyclopedia of Integer Sequences (OEIS, \cite{OEIS}). Among other things, $x_n$ is the number of binary trees whose height (greatest distance from the root to a leaf) is less than $n$. This sequence grows very rapidly: there exists a constant $\beta \approx 1.2259024435$ (the digits are A076949 in the OEIS) such that $x_n = \lfloor \beta^{2^n} \rfloor$. However, this formula is not an efficient way to compute the elements of the sequence, since one needs the whole sequence to evaluate the constant $\beta$ numerically: it is given by
$$\beta = \prod_{n=1}^{\infty} \big(1 + x_n^{-2} \big)^{2^{-n-1}}.$$
Another well-known example is Sylvester's sequence (A000058 in the OEIS), which is given by $y_0 = 2$ and $y_{n+1} = y_n^2-y_n+1$. It arises in the context of Egyptian fractions, $y_n$ being the smallest positive integer for each $n$ such that
$$\frac{1}{y_0} + \frac{1}{y_1} + \frac{1}{y_2} + \cdots + \frac{1}{y_n} < 1.$$
Again, there is a pseudo-explicit formula for the sequence: for a constant $\gamma \approx 1.5979102180$, we have $y_n = \lfloor \gamma^{2^n} + \frac12 \rfloor$. However, $\gamma$ can again only be expressed in terms of the elements of the sequence. This is also the reason why little is known about the constants $\beta$ and $\gamma$ in these two examples and generally growth constants associated with similar sequences that satisfy a polynomial recursion.
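Both constants are easy to approximate numerically. The following Python sketch computes \(\beta\) from the product formula above, recovers \(\gamma\) from the growth of Sylvester's sequence itself (using that \(y_n - \tfrac12 \sim \gamma^{2^n}\)), and verifies the two floor formulas for small \(n\):

```python
# Approximate beta (for x_{n+1} = x_n^2 + 1, x_0 = 0) from the product formula,
# recover gamma (for Sylvester's sequence) from the sequence itself, and verify
# the floor formulas x_n = floor(beta^(2^n)) and y_n = floor(gamma^(2^n) + 1/2).
from math import floor, prod

x = [0]
for _ in range(8):
    x.append(x[-1]**2 + 1)
beta = prod((1 + x[n]**-2)**(2.0**(-n - 1)) for n in range(1, 8))

y = [2]
for _ in range(8):
    y.append(y[-1]**2 - y[-1] + 1)
gamma = (y[8] - 0.5)**(2.0**-8)   # y_n - 1/2 ~ gamma^(2^n)

print(round(beta, 6), round(gamma, 6))   # -> 1.225902 1.59791
for n in range(1, 5):
    assert floor(beta**(2**n)) == x[n]
    assert floor(gamma**(2**n) + 0.5) == y[n]
```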
In this short note, we will prove that the constants $\beta$ and $\gamma$ in these examples are---perhaps unsurprisingly---irrational, as are all growth constants associated with similar sequences that follow a polynomial recursion. The precise statement reads as follows:
\begin{theorem}\label{thm:main_recseq}
Suppose that an integer sequence satisfies a recursion of the form $x_{n+1} = P(x_n)$ for some polynomial $P$ of degree $d > 1$ with rational coefficients. Assume further that $x_n \to \infty$ as $n \to \infty$. Set
$$\alpha = \lim_{n \to \infty} \sqrt[d^n]{x_n}.$$
Then $\alpha$ is a real number greater than $1$ that is either irrational or an integer.
\end{theorem}
It is natural to conjecture that the constants $\beta$ and $\gamma$ in our first two examples are not only irrational, but even transcendental. This is not always true for polynomial recurrences in general, though: for example, consider the sequence given by $z_1 = 3$ and $z_{n+1} = z_n^2 - 2$. It is not difficult to prove that
$$z_n = L_{2^n} = \Big( \frac{1 + \sqrt{5}}{2} \Big)^{2^n} + \Big( \frac{1 - \sqrt{5}}{2} \Big)^{2^n}$$
for all $n \geq 1$, where $L_{n}$ denotes the $n$-th Lucas number. Thus the constant $\alpha$ in Theorem~\ref{thm:main_recseq} would be the golden ratio in this example.
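The identity \(z_n = L_{2^n}\) involves only exact integer arithmetic, so it can be tested directly for several \(n\):

```python
# Check z_n = L_{2^n} for z_1 = 3, z_{n+1} = z_n^2 - 2, where L_k is the
# k-th Lucas number (L_0 = 2, L_1 = 1).
def lucas(k):
    a, b = 2, 1          # L_0, L_1
    for _ in range(k):
        a, b = b, a + b
    return a

z = 3                    # z_1 = L_2 = 3
for n in range(1, 8):
    assert z == lucas(2**n)
    z = z*z - 2
```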
In the following section, we briefly review the classical method to determine the asymptotic behaviour of polynomially recurrent sequences. Theorem~\ref{thm:main_recseq} will follow as a consequence of a somewhat stronger result, Theorem~\ref{th:irr_crit}. This theorem and its proof, which makes use of the subspace theorem, will be given in Section~\ref{sec:subspace}.
\section{Asymptotic formulas for polynomially recurrent sequences}\label{sec:asymp}
There is a classical technique for the analysis of polynomial recursions. A treatment of the two examples given in the introduction
can already be found in the 1973 paper of Aho and Sloane \cite{AhoSloane:1973} (along with many other examples). See also Chapter 2.2.3 of the book of Greene and Knuth~\cite{GreeneKnuth:1990} for a discussion of the method.
Let the polynomial $P$ in the recursion $x_{n+1} = P(x_n)$ be given by
$$P(x) = c_d x^d + c_{d-1} x^{d-1} + \cdots + c_0.$$
Note that
$$P(x) = c_d \Big( x + \frac{c_{d-1}}{d c_d} \Big)^d + O(x^{d-2}).$$
So if we perform the substitution $y_n = c_d^{1/(d-1)} (x_n + \frac{c_{d-1}}{d c_d})$, the recursion becomes
\begin{align*}
y_{n+1} &= c_d^{1/(d-1)} \Big(P(x_n) + \frac{c_{d-1}}{d c_d} \Big)\\
&= c_d^{d/(d-1)} \Big( x_n + \frac{c_{d-1}}{d c_d} \Big)^d + O(x_n^{d-2}) \\
&= y_n^d + O(y_n^{d-2}).
\end{align*}
Let us assume that $x_n \to \infty$, thus also $y_n \to \infty$. It is easy to see that $x_n$ and $y_n$ are increasing from some point onwards in this case. We can also assume, without loss of generality, that none of the $y_n$ is zero: if not, we simply choose a later starting point. Taking the logarithm, we obtain
$$\log y_{n+1} = d \log y_n + O(y_n^{-2})$$
or equivalently
\begin{equation}\label{eq:O-est}
\log \Big( \frac{y_{n+1}}{y_n^d} \Big) = O(y_n^{-2}).
\end{equation}
Next express $\log y_n$ as follows:
\begin{align*}
\log y_n &= d \log y_{n-1} + \log \Big( \frac{y_n}{y_{n-1}^d} \Big) =
d^2 \log y_{n-2} + d \log \Big( \frac{y_{n-1}}{y_{n-2}^d} \Big) + \log \Big( \frac{y_n}{y_{n-1}^d} \Big) \\
&= \cdots = d^n \log y_0 + \sum_{k=0}^{n-1} d^{n-k-1} \log \Big( \frac{y_{k+1}}{y_k^d} \Big).
\end{align*}
Extending to an infinite sum (which converges since $\log (y_{k+1}/y_k^d)$ is bounded) yields
$$\log y_n = d^n \Big( \log y_0 + \sum_{k=0}^{\infty} d^{-k-1} \log \Big( \frac{y_{k+1}}{y_k^d} \Big) \Big) - \sum_{k=n}^{\infty} d^{n-k-1} \log \Big( \frac{y_{k+1}}{y_k^d} \Big).$$
Set
$$\log \alpha = \log y_0 + \sum_{k=0}^{\infty} d^{-k-1} \log \Big( \frac{y_{k+1}}{y_k^d} \Big),$$
so that
$$\log y_n = d^n \log \alpha - \sum_{k=n}^{\infty} d^{n-k-1} \log \Big( \frac{y_{k+1}}{y_k^d} \Big).$$
In view of~\eqref{eq:O-est} and the fact that $y_n \leq y_{n+1} \leq \cdots$ for sufficiently large $n$, this gives
$$\log y_n = d^n \log \alpha + O(y_n^{-2}),$$
and thus finally
$$y_n = \alpha^{d^n} + O\big(\alpha^{-d^n}\big).$$
This means that
$$x_n = c_d^{-1/(d-1)} \alpha^{d^n} - \frac{c_{d-1}}{d c_d} + O\big(\alpha^{-d^n}\big).$$
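The derivation above can be checked numerically for a concrete polynomial; the one below, \(P(x)=2x^2+3x+1\) with \(x_0=1\) (so \(c_2=2\), \(c_1=3\), \(d=2\)), is an arbitrary choice of ours for illustration:

```python
# Numerical sanity check of x_n = c_d^(-1/(d-1)) alpha^(d^n) - c_{d-1}/(d c_d)
# + O(alpha^(-d^n)) for P(x) = 2x^2 + 3x + 1, x_0 = 1.  alpha is computed from
# the series for log(alpha) derived above.
from math import log, exp

c_d, c_dm1, d = 2, 3, 2
x = [1]
for _ in range(8):
    t = x[-1]
    x.append(c_d*t**2 + c_dm1*t + 1)

# substitution y_n = c_d^(1/(d-1)) (x_n + c_{d-1}/(d c_d))
y = [c_d**(1/(d-1)) * (t + c_dm1/(d*c_d)) for t in x]
log_alpha = log(y[0]) + sum(d**(-k-1) * log(y[k+1]/y[k]**d) for k in range(8))
alpha = exp(log_alpha)

for n in range(1, 5):
    approx = c_d**(-1/(d-1)) * alpha**(d**n) - c_dm1/(d*c_d)
    assert round(approx) == x[n]   # x_n is recovered exactly after rounding
```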
\section{Application of the subspace theorem}\label{sec:subspace}
We now combine the asymptotic formula from the previous section with an application of the subspace theorem to prove our main result on polynomial recursions. In fact, we first state and prove a somewhat stronger result that implies Theorem~\ref{thm:main_recseq}.
\begin{theorem}\label{th:irr_crit}
Assume that the sequence $G_n$ attains an integral value infinitely often, and that it satisfies an asymptotic formula of the form
$$G_n=A \alpha^n+B+O(\alpha^{-\epsilon n}),$$
where $\alpha > 1$, $A$ and $B$ are algebraic numbers with $A \neq 0$, $\epsilon > 0$, and the constant implied by the $O$-term does not depend on $n$. Then the number $\alpha$ is either irrational or an integer.
\end{theorem}
In order to prove the irrationality of $\alpha$ we make use of the following version of the subspace theorem, which is most suitable for our purposes (cf. \cite[Chapter V, Theorem 1D]{Schmidt:1991}).
\begin{theorem}[Subspace theorem]
Let $K$ be an algebraic number field and let
$S \subset M(K)=\{\text{canonical absolute values of}\; K\}$ be a finite set of absolute
values which contains all of the Archimedean ones. For each $\nu \in S$
let $L_{\nu,1}, \ldots, L_{\nu,N}$ be $N$ linearly independent linear
forms in $N$ variables with coefficients in $K$. Then for given
$\delta>0$, the solutions of the inequality
\[\prod_{\nu \in S} \prod_{i=1}^N |L_{\nu ,i}(\mathbf x) |_{\nu}^{n_\nu} <
\overline{|\mathbf x|}^{-\delta}\]
with $\mathbf x = (x_1,x_2,\ldots,x_N) \in \mathfrak a_K^N$ ($\mathfrak a_K$ being the maximal order of $K$) and $\mathbf x \neq \mathbf 0$, where
$|\cdot|_{\nu}$ denotes the valuation corresponding to $\nu$, $n_{\nu}$ is the
local degree, and
\[\overline{|\mathbf x|}= \max_{1 \leq i \leq N \atop 1 \leq j
\leq \deg K}|x_i^{(j)}|,\]
the maximum being taken over all conjugates $x_i^{(j)}$ of all entries $x_i$ of $\mathbf x$, lie in finitely many proper subspaces of $K^N$.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{th:irr_crit}]
Let us assume contrary to the statement of Theorem \ref{th:irr_crit} that $\alpha=p/q$ is rational, where $p$ and $q$ are coprime positive integers, $p > q$, and $q \neq 1$. Moreover, assume that their prime factorizations are
$$p=p_1^{n_1} \dots p_k^{n_k} \quad \text{and} \quad q=q_1^{m_1} \dots q_{\ell}^{m_\ell}.$$
We choose $K$ in the subspace theorem to be the normal closure of $\mathbb{Q}(A,B)$ and write $D=[K:\mathbb{Q}]$. We fix one embedding of $K$ into $\mathbb{C}$ so that we can assume that $K\subseteq \mathbb{C}$. Moreover, let us write $A$ and $B$ as $A=\beta_1/Q$ and $B=\beta_2/Q$, where $\beta_1$ and $\beta_2$ lie in the maximal order $\mathfrak a_K$ of $K$ and $Q$ is a positive integer such that the ideals $(\beta_1,\beta_2)$ and $(Q)$ are coprime.
If $n$ is an index such that $G_n$ is an integer we deduce that there exists an algebraic integer $a$ which may depend on $n$ such that
\begin{equation}\label{eq:G_integral}
G_n=\frac{\beta_1 p^n+\beta_2 q^n+a}{Q q^n}=A \alpha^n+B+\frac{a}{Q q^n}=A\alpha^n+B+O(\alpha^{-\epsilon n}).
\end{equation}
Since we are assuming that $G_n$ is a rational integer, we can write the algebraic integer $a$ in the form $a=X-\beta_1 p^n-\beta_2q^n$, with $X\in \mathbb{Z}$.
Moreover, we know that
\begin{equation}\label{eq:a_small}
|a|<CQ\left(\frac{q^{1+\epsilon}}{p^{\epsilon}}\right)^n,
\end{equation}
where $C$ is the constant implied by the $O$-term.
Assume that $K$ has signature $(r,s)$. We choose
$$S=\{\infty_1,\dots,\infty_{r+s},\mathfrak p_{1,1},\dots,\mathfrak p_{k,t_k},\mathfrak q_{1,1},\dots,\mathfrak q_{\ell,u_{\ell}}\},$$
where the valuations $\mathfrak p_{i,1},\dots,\mathfrak p_{i,t_i}$ are all valuations lying above $p_i$ for $1\leq i \leq k$ and the valuations $\mathfrak q_{j,1},\dots,\mathfrak q_{j,u_j}$ are all valuations lying above $q_j$ for $1\leq j \leq \ell$. Moreover, let
$$\Gal(K/\mathbb{Q})=\{\sigma_1,\dots,\sigma_r,\sigma_{r+1},\bar\sigma_{r+1},\dots,\sigma_{r+s},\bar\sigma_{r+s}\},$$
so that the valuation $\infty_i$ is given by $|x|_{\infty_i}=|\sigma_i^{-1} x|$, where $|\cdot|$ is the usual absolute value of $\mathbb{C}$. Finally, denote the conjugates of $\beta_1$ and $\beta_2$ by $\beta_j^{(i)} = \sigma_i(\beta_j)$. We have the formula
$$|x_1\beta_1^{(i)}+x_2\beta_2^{(i)}+x_3|_{\infty_i}=|x_1\beta_1+x_2\beta_2+x_3|$$
for arbitrary rational numbers $x_1,x_2,x_3$.
Next, we construct suitable linear forms to apply the subspace theorem. Let us write
$x_1=p^n$, $x_2=q^n$ and $x_3=a$, thus $N=3$. We choose our linear forms as $L_{\nu,1}(\mathbf x)=x_1, L_{\nu,2}(\mathbf x)=x_2$ for all $\nu\in S$ and $L_{\nu,3}(\mathbf x)=x_3$ if $\nu$ lies above one of the primes $p_1,\dots,p_k$. We choose $L_{\nu,3}(\mathbf x)=\beta_1 x_1+\beta_2 x_2+x_3$ if $\nu$ lies above one of the primes $q_1,\dots,q_\ell$. Finally if $\nu=\infty_i$ then we put $L_{\infty_i,3}(\mathbf x)=(\beta_1-\beta_1^{(i)}) x_1+(\beta_2-\beta_2^{(i)}) x_2+x_3$.
Using the product formula (cf. \cite[pages 99--100]{Lang:ANT}) and trivial estimates we obtain
\begin{align*}
\prod_{\nu\in S} |L_{\nu,1}(\mathbf x)|_\nu^{n_\nu}&=1,&
\prod_{\nu\in S} |L_{\nu,2}(\mathbf x)|_\nu^{n_\nu}&=1,\\
\prod_{\nu|q_j \atop 1\leq j\leq \ell} |L_{\nu,3}(\mathbf x)|_\nu^{n_\nu}&\leq q^{-Dn},&
\prod_{\nu|p_i \atop 1\leq i\leq k} |L_{\nu,3}(\mathbf x)|_\nu^{n_\nu}&\leq 1.
\end{align*}
Thus we are left to compute the quantities $|L_{\infty_i,3}(\mathbf x)|_{\infty_i}$. We obtain
\begin{align*}
|L_{\infty_i,3}(\mathbf x)|_{\infty_i} &=|(\beta_1-\beta_1^{(i)}) x_1+(\beta_2-\beta_2^{(i)}) x_2+x_3 |_{\infty_i}\\
&= |\beta_1 p^n+\beta_2 q^n+a-\beta_1^{(i)}p^n-\beta_2^{(i)}q^n|_{\infty_i}\\
&=|X-\beta_1^{(i)}p^n-\beta_2^{(i)}q^n|_{\infty_i}\\
&=|X-\beta_1 p^n-\beta_2 q^n|=|a|.
\end{align*}
Combining all inequalities, we have
\begin{equation}\label{eq:prod_bound}
\prod_{\nu \in S} \prod_{i=1}^3 |L_{\nu ,i}(\mathbf x) |_{\nu}^{n_\nu}\leq q^{-Dn}|a|^D< (CQ)^D\left(\frac{q}{p}\right)^{\epsilon Dn}.
\end{equation}
Now choose $\delta>0$ small enough so that
$$\left(\frac{q}{p}\right)^{\epsilon D}<p^{-\delta}.$$
In view of~\eqref{eq:a_small}, the inequality $|a|_\nu \leq p^n$ holds for all valuations $\nu$ lying above $\infty$ for sufficiently large $n$, so that $\overline{|\mathbf x|} = |x_1| = p^n$. Hence we obtain
$$(CQ)^D\left(\frac{q}{p}\right)^{\epsilon Dn}<(p^n)^{-\delta}=\overline{|\mathbf x|}^{-\delta}$$
for sufficiently large $n$. In view of \eqref{eq:prod_bound} we have shown that
\begin{equation}\label{eq:ST}
\prod_{\nu \in S} \prod_{i=1}^3 |L_{\nu ,i}(\mathbf x) |_{\nu}^{n_\nu} <
\overline{|\mathbf x|}^{-\delta}.
\end{equation}
By the subspace theorem all solutions $x_1,x_2$ and $x_3$ to \eqref{eq:ST} lie in finitely many subspaces of $K^3$. Since by assumption there are infinitely many solutions, there exists one subspace $T\subseteq K^3$ which contains infinitely many solutions. Let $T$ be defined by
$t_1x_1+t_2x_2+t_3x_3=0$, with fixed algebraic integers $t_1,t_2,t_3\in \mathfrak a_K$. Then there must be infinitely many integers $n$ such that $t_1p^n+t_2q^n+t_3a=0$ which is in contradiction to \eqref{eq:a_small} and the assumption that $p > q > 1$. Thus we can conclude that $\alpha$ cannot be rational, unless $q=1$ so that $\alpha$ is an integer.
\end{proof}
Now the proof of Theorem~\ref{thm:main_recseq} is straightforward.
\begin{proof}[Proof of Theorem~\ref{thm:main_recseq}]
As derived in Section~\ref{sec:asymp}, if an integer sequence satisfies a recursion of the form $x_{n+1} = P(x_n)$ for some polynomial $P$ of degree $d > 1$ with rational coefficients and $x_n \to \infty$ as $n \to \infty$, then an asymptotic formula
of the form
$$x_n = A \alpha^{d^n} + B + O(\alpha^{-d^n})$$
holds. If $\alpha$ is rational, but not an integer, then we have an immediate contradiction with Theorem~\ref{th:irr_crit}.
\end{proof}
\section{Further generalizations}
Let us remark that Theorem \ref{th:irr_crit} can be extended to number fields:
\begin{theorem}\label{th:general-irr-crit}
Let $L$ be a number field and $\mathfrak a_L$ its maximal order.
Assume that the sequence $G_n$ attains values in $\mathfrak a_L$ infinitely often, and that it satisfies an asymptotic formula of the form
$$G_n=A \alpha^n+B+O(|\alpha|^{-\epsilon n}),$$
where $|\alpha| > 1$, $A$ and $B$ are algebraic numbers with $A \neq 0$, $\epsilon > 0$, and the constant implied by the $O$-term does not depend on $n$. Then the number $\alpha$ is either an algebraic integer in $\mathfrak a_L$ or $\alpha\not\in L$.
\end{theorem}
The proof of this theorem is similar to the proof of Theorem \ref{th:irr_crit}. In particular let $K$ be the normal closure of $L(A,B)$ and assume that $\alpha=p/q$ with $p\in \mathfrak a_L$ and $q\in \mathbb{Z}$ with $q>1$. Then we consider the prime ideal factorizations
$$(p)=\mathfrak p_1^{n_1} \dots \mathfrak p_k^{n_k} \quad \text{and} \quad (q)=\mathfrak q_1^{m_1} \dots \mathfrak q_{\ell}^{m_\ell}$$
in $K$. We can construct the same linear forms as in the proof of Theorem \ref{th:irr_crit} and use the subspace theorem to get a contradiction.
It is also possible to consider a higher-dimensional variant of Theorem \ref{thm:main_recseq}. Let $f_1,\dots,f_N \in \mathbb{Z}[X_1,\dots,X_N]$ be polynomials of degree $d$. Then we can consider a sequence $(\mathbf x_n)$ with $\mathbf x_n=\left(x^{(1)}_n,\dots,x^{(N)}_n\right)\in \mathbb{Z}^N$ for all $n\geq 0$ satisfying the polynomial recurrence
$$\mathbf x_{n+1}=f(\mathbf x_n)=(f_1(\mathbf x_n),\dots,f_N(\mathbf x_n)).$$
With this notation at hand we pose the following problem:
\begin{problem}
Assume that $\max \left\{x^{(1)}_n,\dots,x^{(N)}_n\right\} \rightarrow \infty$ as $n\rightarrow \infty$, and let
$$\alpha=\lim_{n\rightarrow \infty} \sqrt[d^n]{\max\left\{x^{(1)}_n,\dots,x^{(N)}_n\right\}}.$$
Is $\alpha$ necessarily irrational or an integer?
\end{problem}
\end{document}
\begin{document}
\begin{abstract}
Let \((G,\alpha)\) and \((H,\beta)\) be locally compact Hausdorff groupoids with Haar systems, and let \((X,\lambda)\) be a topological correspondence from \((G,\alpha)\) to \((H,\beta)\) which induces the \(\textup{C}^*\)\nb-correspondence \(\Hilm(X)\colon \textup{C}^*(G,\alpha)\to \textup{C}^*(H,\beta)\). We give sufficient topological conditions under which the \(\textup{C}^*\)\nb-correspondence \(\Hilm(X)\) is proper, that is, the groupoid \(\textup{C}^*\)\nb-algebra \(\textup{C}^*(G,\alpha)\) acts on the \(\textup{C}^*(H,\beta)\)\nb-Hilbert module \(\Hilm(X)\) via compact operators. Thus a proper topological correspondence produces an element in \(\textup{KK}(\textup{C}^*(G,\alpha),\textup{C}^*(H,\beta))\).
\end{abstract}
\maketitle
\tableofcontents
\section*{Introduction}
\label{sec:intro}
Given \(\textup{C}^*\)\nb-algebras \(A\) and \(B\), a \(\textup{C}^*\)\nb-correspondence from \(A\) to \(B\) is a pair \((\Hils, \phi)\) where \(\Hils\) is a Hilbert \(B\)\nb-module, and \(\phi\colon A\to \mathbb B(\Hils)\) is a nondegenerate representation. We call the \(\textup{C}^*\)\nb-correspondence \((\Hils,\phi)\) \emph{proper} if the representation \(\phi\) maps \(A\) into \(\mathbb{K}(\Hils)\). Proper correspondences are important and are studied for various purposes. In this article, we shall denote a \(\textup{C}^*\)\nb-correspondence merely by the Hilbert module in its definition. We shall not come across an occasion where the representation needs to be explicitly spelled out.
In~\cite{Holkar2017Construction-of-Corr}, topological correspondences of locally compact groupoids equipped with Haar systems are defined. Let \((G,\alpha)\) and \((H,\beta)\) be locally compact groupoids equipped with Haar systems. A topological correspondence from \((G,\alpha)\) to \((H,\beta)\) is a pair \((X,\lambda)\) where \(X\) is a \(G\)\nb-\(H\)-bispace with the \(H\)\nb-action proper, and \(\lambda\defeq\{\lambda_u\}_{u\in\base[H]}\) is a proper \(H\)\nb-invariant family of measures along the right momentum map \(s_X\colon X\to \base[H]\) such that each measure \(\lambda_u\) in \(\lambda\) is \((G,\alpha)\)\nb-quasi-invariant. The main theorem in~\cite{Holkar2017Construction-of-Corr}*{Theorem 2.10} asserts that a certain completion of \(\mathrm{C_c}(X)\) gives a \(\textup{C}^*\)\nb-correspondence \(\Hilm(X,\lambda)\) from \(\textup{C}^*(G,\alpha)\) to \(\textup{C}^*(H,\beta)\).
For the above topological correspondence \((X,\lambda)\), let \(\base\xleftarrow{r_X} X\xrightarrow{s_X}\base[H]\) be the momentum maps. The \(\textup{C}^*\)\nb-algebra \(\mathbb{K}(\Hilm(X,\lambda))\) of compact operators on the Hilbert \(\textup{C}^*(H,\beta)\)-module \(\Hilm(X,\lambda)\) can be described using a topological object, namely, a hypergroupoid equipped with a Haar system. To do this, one observes that the right diagonal action of \(H\) on the fibre product \(X\times_{s_X,\base[H],s_X}X\) is proper. In~\cite{Renault2014Induced-rep-and-Hpgpd}, Renault shows that the quotient space \((X\times_{s_X,\base[H],s_X}X)/H\) is a hypergroupoid equipped with a Haar system \(\tilde{\lambda}\); the Haar system \(\tilde{\lambda}\) is induced by \(\lambda\). The \(\textup{C}^*\)\nb-algebra \(\textup{C}^*((X\times_{s_X,\base[H],s_X}X)/H, \tilde{\lambda})\) of this hypergroupoid is the \(\textup{C}^*\)\nb-algebra of compact operators on \(\Hilm(X,\lambda)\). We recall this result in Section~\ref{comp-op}; it was written for locally compact, Hausdorff and second countable groupoids and spaces in~\cite{mythesis}.
The notion of topological correspondence in~\cite{Holkar2017Construction-of-Corr} generalises many existing notions of topological correspondences in the literature, including Jean-Louis Tu's notion of a \emph{locally proper generalised morphism} in~\cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}. In the same article, Tu defines \emph{a proper locally proper generalised morphism}, which is a \emph{proper topological correspondence}. That is, he adds an extra condition to his topological correspondence so that the \(\Cst\)\nb-correspondence \(\Hilm(X,\lambda)\) it produces is proper. This work of Tu raises a question: when is a topological correspondence as defined in~\cite{Holkar2017Construction-of-Corr} proper? To be precise, under what extra hypotheses on the topological correspondence in~\cite{Holkar2017Construction-of-Corr} is the corresponding \(\Cst\)\nb-correspondence proper?
Apart from Tu's above-mentioned work, this question has been discussed and answered in special cases. Macho-Stadler and O'uchi define a topological correspondence, and a proper one, in~\cite{Stadler-Ouchi1999Gpd-correspondences}; this work is a special case of the work in~\cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}. In~\cite{Muhly-Tomforde-2005-Topological-quivers}, Muhly and Tomforde define topological quivers, which are special types of topological correspondences (see~\cite{Holkar2017Construction-of-Corr}*{Example 3.3}). In the same article, they also characterise proper topological quivers, see~\cite{Muhly-Tomforde-2005-Topological-quivers}*{Theorem 3.11}.
Now we briefly elaborate on the above works, putting Tu's at the end. Let \(Y\) be a locally compact Hausdorff space. Slightly modifying~\cite{Muhly-Tomforde-2005-Topological-quivers}*{Definition 3.1}, we may say that a topological quiver from \(Y\) to itself is a quadruple \((X,b,f,\lambda)\) where \(b,f\colon X\to Y\) are continuous maps and \(\lambda\) is a continuous family of measures along \(f\). \cite{Muhly-Tomforde-2005-Topological-quivers}*{Theorem 3.11} says that the quiver is proper if and only if \(f\) is a local homeomorphism and \(b\) is proper.
Let \(Y\) and \(Z\) be locally compact Hausdorff spaces. Then, as in~\cite{Buss-Holkar-Meyer2016Gpd-Uni-I}, one defines a quiver from \(Y\) to \(Z\) to be a quadruple \((X,b,f,\lambda)\) where \(b\colon X\to Y\) and \(f\colon X\to Z\) are continuous maps, and \(\lambda\) is a continuous family of measures along \(f\). One may check that~\cite{Muhly-Tomforde-2005-Topological-quivers}*{Theorem 3.11} is valid for this slightly generalised notion of quivers as well. If one thinks of the spaces \(Y\) and \(Z\) above as trivial groupoids, then both the above notions of quivers are topological correspondences, see~\cite{Holkar2017Construction-of-Corr}*{Example 3.3}.
Tu~\cite{Tu2004NonHausdorff-gpd-proper-actions-and-K} works with locally compact groupoids equipped with Haar systems, and the space involved in his (proper) topological correspondence is locally compact but not necessarily Hausdorff. Macho-Stadler and O'uchi's work~\cite{Stadler-Ouchi1999Gpd-correspondences} involves Hausdorff groupoids equipped with Haar systems and Hausdorff spaces; their proper correspondence is a special case of Tu's, hence we focus on~\cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}. Let \((G,\alpha)\) and \((H,\beta)\) be locally compact groupoids equipped with Haar systems. A topological correspondence from \((G,\alpha)\) to \((H,\beta)\) in the sense of Tu is a \(G\)\nb-\(H\)-bispace \(X\) such that
\begin{enumerate}[i), leftmargin=*]
\item both the actions are proper,
\item the action of \(G\) is free,
\item the right momentum map \(s_X\colon X\to\base[H]\) induces an isomorphism \([s_X]\colon X/H\to\base[H]\).\label{intro-Tu-1}
\end{enumerate}
Property~\ref{intro-Tu-1} above is equivalent to
\begin{enumerate}[resume, label=iii')]
\item the map \(\mathrm{m}_{s_X}\colon G\times_{s_G,\base, r_X}X\to X\times_{s_X,\base[H],s_X}X\),
\[
\mathrm{m}_{s_X}(\gamma,x)= (\gamma x,x),
\]
is a homeomorphism,\label{intro-Tu-2} where \(s_G\) is the source map of \(G\), \(r_X\colon X\to\base\) and \(s_X\colon X\to\base[H]\) are the momentum maps for the actions of \(G\) and \(H\), respectively, and the domain and codomain of the function \(\mathrm{m}_{s_X}\) are the obvious fibre products.
\end{enumerate}
\cite{Holkar2017Construction-of-Corr}*{Example 3.7} shows that this is a topological correspondence in our sense, that is, in the sense of~\cite{Holkar2017Construction-of-Corr}*{Definition 2.1}. \cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}*{Definition 7.6} says that the above correspondence is proper if the map \([r_X]\colon X/H\to \base\) induced by the momentum map \(r_X\colon X\to \base\) for the action of \(G\) is proper. \cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}*{Theorem 7.8} proves that if the above topological correspondence \(X\) is proper, then so is the \(\Cst\)\nb-correspondence \(\Hilm(X)\colon \textup{C}^*_{\textup{r}}(G,\alpha)\to \textup{C}^*_{\textup{r}}(H,\beta)\).
Now we discuss the three conditions in the definition of a proper correspondence proposed in this article. Let \((X,\lambda)\) be a topological correspondence from a locally compact groupoid equipped with a Haar system \((G,\alpha)\) to another one, say \((H,\beta)\). Let \(r_X\) and \(s_X\) be the momentum maps for the actions of \(G\) and \(H\), respectively, on \(X\). First of all, the preceding literature survey suggests that the map \([r_X]\colon X/H\to \base\) should be proper. This is true; it is one of the conditions we need. However, it is not a sufficient condition when dealing with a general topological correspondence. For example, if \(G=H=\{*\}\), the trivial group, and \((X,\lambda)\) is a compact measure space, then \([r_X]\) is proper. In this case, \(\Hilm(X,\lambda)=\mathcal{L}^2(X,\lambda)\) and \(\Cst(G)=\Cst(H)=\mathbb{C}\). The action of \(\mathbb{C}\) on \(\Hilm(X,\lambda)\) is by scalar multiplication, so the unit \(1\in\mathbb{C}\) acts by the identity operator, which is not compact unless \(\mathcal{L}^2(X,\lambda)\) is finite dimensional. Hence, in general, \(\Hilm(X,\lambda)\colon \mathbb{C}\to \mathbb{C}\) is not a proper correspondence.
Secondly, since there are families of measures involved, one may expect conditions that relate the family of measures on the bispace to the Haar system on the left groupoid. This is a technical condition; it is the third condition in Definition~\ref{def:prop-corr}.
Thirdly and finally, an interesting and not-at-all-obvious condition is the rephrasing~\ref{intro-Tu-2} of property~\ref{intro-Tu-1} appearing in the definition of Tu's proper correspondence above. It is a \emph{classical} condition that appears in the definition of a groupoid equivalence (\cite{Muhly-Renault-Williams1987Gpd-equivalence}) and in many other notions of groupoid morphisms. This property gives a family of measures on \(X\), as observed in~\cite{Holkar2017Construction-of-Corr}*{Example 3.7}. What is so interesting about this property? In its other form, namely~\ref{intro-Tu-2}, the property proves very useful.
With the same notation as in the last paragraph, let \(b\in\Contc(G)\) and \(\xi\in\Contc(X)\). Then the action of \(b\) on \(\xi\) that gives the representation of \(\Cst(G,\alpha)\) on \(\Hilm(X,\lambda)\) is given by
\[
b\xi(x)=\int_G b(\gamma)\xi(\gamma^{-1} x)\Delta(\gamma,\gamma^{-1} x)\,\mathrm{d}\alpha^{r_X(x)}(\gamma)
\]
where \(\Delta\) is the adjoining function for the correspondence \((X,\lambda)\). Now, we aim to assign to \(b\) an element \(\tilde{b}\) in \(\Contc((X\times_{s_X,\base[H],s_X}X)/H)\) such that \(\tilde{b}\xi=b\xi\). For this purpose, using the fact that \([r_X]\) is proper, we choose a \emph{dummy} function \(t\) in \(\Contc(X)\). Then \(b\otimes_{\base} t\in \Contc(G\times_{\base}X)\). If \(\mathrm{m}_{s_X}\) were a homeomorphism, then \((b\otimes_{\base} t)\circ\mathrm{m}_{s_X}^{-1}\) would be an element of \(\Contc(X\times_{s_X,\base[H],s_X}X)\). If one chooses the dummy function \(t\) carefully, then after averaging over \(H\), the image of \(b\otimes_{\base} t\) in \(\Contc((X\times_{s_X,\base[H],s_X}X)/H)\) serves as the required function \(\tilde{b}\).
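For instance, in the toy case where \(G\) is a finite group equipped with the counting-measure Haar system and the adjoining function satisfies \(\Delta\equiv 1\), the above integral reduces to the finite sum
\[
b\xi(x)=\sum_{\gamma\in G} b(\gamma)\,\xi(\gamma^{-1}x),
\]
the familiar convolution-type action of the group algebra \(\mathbb{C}[G]\) on functions on \(X\).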
Note that if \(\mathrm{m}_{s_X}\) is a homeomorphism, then the action of \(G\) on \(X\) is free as well as proper. A bit more work shows that the above argument goes through when \(\mathrm{m}_{s_X}\) is a homeomorphism onto its image. An action of \(G\) on \(X\) for which \(\mathrm{m}_{s_X}\) is a homeomorphism onto its image is called \emph{basic} in~\cite{Meyer-Zhu2014Groupoids-in-categories-with-pretopology}. However, the following simple example shows that for us even basic actions are too much to ask for. Let \(\mathbb{S}^1\) denote the unit circle; consider this compact space as the trivial groupoid. Let the multiplicative group of order two, \(\mathbb{Z}/2\mathbb{Z}\defeq \{1,-1\}\), act on the space \(\mathbb{S}^1\) from the left by \((-1)\cdot z =\bar{z}\) for \(-1\in \mathbb{Z}/2\mathbb{Z}\) and \(z\in \mathbb{S}^1\). Let \(\mathbb{S}^1\) act on itself trivially from the right. The momentum map for this action is the identity map \(\textup{Id}_{\mathbb{S}^1}\colon \mathbb{S}^1\to \mathbb{S}^1\). Let \(\tau_{\textup{Id}_{\mathbb{S}^1}}\) be the family of measures along \(\textup{Id}_{\mathbb{S}^1}\) which consists of point masses. Then \((\mathbb{S}^1, \tau_{\textup{Id}_{\mathbb{S}^1}})\) is a topological correspondence from \(\mathbb{Z}/2\mathbb{Z}\) to \(\mathbb{S}^1\); the adjoining function of this correspondence is the constant function \(1\). The topological correspondence \((\mathbb{S}^1, \tau_{\textup{Id}_{\mathbb{S}^1}})\) gives the \(\Cst\)\nb-correspondence \(\Cont(\mathbb{S}^1)\colon \Cst(\mathbb{Z}/2\mathbb{Z}) \to \Cont(\mathbb{S}^1)\). Note that here we make the \(\Cst\)\nb-algebra \(\Cont(\mathbb{S}^1)\) a Hilbert module over itself in the obvious way. Since this \(\Cst\)\nb-algebra is unital, we have \(\mathbb{K}(\Cont(\mathbb{S}^1))=\Mult(\Cont(\mathbb{S}^1))=\Cont(\mathbb{S}^1)\).
Therefore, the \(\Cst\)\nb-correspondence \(\Cont(\mathbb{S}^1)\colon \Cst(\mathbb{Z}/2\mathbb{Z})\to \Cont(\mathbb{S}^1)\) is proper.
In the above example, \(\mathrm{m}_{s_{\mathbb{S}^1}}\colon \mathbb{Z}/2\mathbb{Z}\times \mathbb{S}^1\to \mathbb{S}^1 \times \mathbb{S}^1\) is the map \((1,z)\mapsto (z,z)\), \((-1,z)\mapsto (\bar{z},z)\), which is not a homeomorphism, even onto its image, as \((-1)\cdot(1+0i)=1\cdot(1+0i)\) where \(1, -1\in \mathbb{Z}/2\mathbb{Z}\) and \(1+0i\in \mathbb{S}^1\). However, one may check that this map is a local homeomorphism onto its image: for \(\pm1\in \mathbb{Z}/2\mathbb{Z}\), the restriction \(\mathrm{m}_{s_{\mathbb{S}^1}}|_{\{\pm1\}\times {\mathbb{S}^1}}\) is a homeomorphism onto its image; the inverse of this restriction is \((z',z)\mapsto (\pm1,z)\) for \((z',z)\in \Img(\mathrm{m}_{s_{\mathbb{S}^1}}|_{\{\pm1\}\times {\mathbb{S}^1}})\).
This and similar examples motivate us to add the condition that \emph{\(\mathrm{m}_{s_X}\) is a local homeomorphism onto its image}, and this serves our purpose well.
\medskip
Once maps which are local homeomorphisms onto their images are in the picture, we have to study them: we have to study how a map of this type behaves
\begin{enumerate*}[(i)]
\item with respect to extensions of continuous functions along it,
\item with respect to induction of measures along it.
\end{enumerate*}
We study these technical issues in Section~\ref{sec:measure-th}.
One may put all these conditions and the above study together and state the main result of this article. However, we also investigate when the map \(\mathrm{m}_{s_X}\) above is a local homeomorphism. This question breaks down into two pieces:
\begin{enumerate*}[(i)]
\item when is \(\mathrm{m}_{s_X}\) locally one-to-one?
\item when is \(\mathrm{m}_{s_X}\) open onto its image?
\end{enumerate*}
The second question has no concrete answer, and examples show that the openness of \(\mathrm{m}_{s_X}\) has to be a part of the data.
The first question leads us to the study of locally free actions of groupoids; this study is carried out in Section~\ref{sec:locally-free-actions}. While studying locally free actions, we also define transitive actions of groupoids in Section~\ref{sec:act-gpd}, which facilitates rephrasing a classical condition more theoretically, see Remark~\ref{rem:relation-between-sX-quo-sX-Mlp}.
Since Muhly and Tomforde~\cite{Muhly-Tomforde-2005-Topological-quivers} not only \emph{define} a proper topological correspondence but also \emph{characterise} it, we wish to see to what extent our definition of a proper correspondence reproduces their results; for this purpose, we study measures which are concentrated on certain sets (Section~\ref{sec:measure-th}).
What do we finally achieve? We define a proper topological correspondence (Definition~\ref{def:prop-corr}) and show that a proper topological correspondence of groupoids equipped with Haar systems produces a proper \(\Cst\)\nb-correspondence of full groupoid \(\Cst\)\nb-algebras (Theorem~\ref{thm:prop-corr-main-thm}). We study \emph{locally free} actions of groupoids: we define locally free and strongly locally free actions of groupoids and show that they do not mean the same (Example~\ref{exa:local-free-and-not-str-loc-free-action}). However, in the case of groups, these notions coincide.
Along the way, Example~\ref{exa:local-free-and-not-str-loc-free-action} shows that a groupoid with discrete fibres need not be \'etale. Moreover, while studying measures concentrated on a set, we prove Lemma~\ref{lem:conc-measu-equi}, which gives different characterisations of when a measure is concentrated on a set; this lemma is not deep, but it is convenient to have. In Section~\ref{sec:exa}, we show that all the definitions of various proper topological correspondences mentioned earlier fit in Definition~\ref{def:prop-corr}. The following is a detailed discussion of the results in this section.
We discuss the (\'etale) proper correspondences of spaces and \'etale groupoids in detail in Sections~\ref{sec:ex-space-case} and~\ref{sec:ex-egpd-case}, respectively. We show that in a proper topological correspondence of spaces in the sense of Definition~\ref{def:prop-corr}, the family of measures on the middle space consists of atomic measures and has full support, see Lemma~\ref{lem:prop-quiver-labda-is-atomic}. The reader may compare this with~\cite{Muhly-Tomforde-2005-Topological-quivers}*{Theorem 3.11}. This result generalises to \'etale groupoids as well, see Lemma~\ref{lem:etale-corr-full-supp}.
Let \(X,Y\) and \(Z\) be spaces, and let \(Y\xleftarrow{b} X\xrightarrow{f} Z\) be continuous maps where \(f\) is a local homeomorphism. Then \(X\) carries a continuous family of counting measures, \(\tau_{s_X}\), along the \'etale map \(f\), which makes \(X\) into a topological correspondence from \(Y\) to \(Z\). Definition~\ref{def:prop-corr} implies that \(X\) is proper if \(b\) is a proper map.
Section~\ref{sec:ex-egpd-case} discusses the case of \'etale groupoids. In this section, we study \'etale topological correspondences of \'etale groupoids. Let \(X\) be a \(G\)\nb-\(H\)-bispace, where \(G\) and \(H\) are \'etale groupoids. Assume that the right momentum map \(s_X\) is a local homeomorphism, which gives \(X\) a continuous family of atomic measures, \(\tau_{s_X}\), along \(s_X\). If \((X, \tau_{s_X})\) is a topological correspondence from \(G\) to \(H\), then we call \(X\) an \'etale correspondence. An example of such a morphism is the Hilsum-Skandalis morphism, see~\cite{Hilsum-Skandalis1987Morphismes-K-orientes-deSpaces-deFeuilles-etFuncto-KK}. Proposition~\ref{prop:gen-etale-corr-iso} shows that any (proper) topological correspondence obtained from the above \(G\)\nb-\(H\)-bispace \(X\) by changing the family of measures on \(X\) is isomorphic to \((X,\tau_{s_X})\). Thus the \(\mathrm{K}K\)\nb-class in \(\mathrm{K}K(\Cst(G),\Cst(H))\) determined by such a bispace does not depend on the family of measures the bispace carries.
We show that \(X\) is a proper correspondence if the momentum map for the right action is proper, see Proposition~\ref{prop:prop-etale-cor-char}.
What could we not achieve? The proof of Theorem~\ref{thm:prop-corr-main-thm} uses the cutoff function on a proper groupoid, which requires the space of units to be Hausdorff. This forces the bispace involved in a topological correspondence to be Hausdorff. Thus, though we write results for locally compact groupoids, when it comes to proving Theorem~\ref{thm:prop-corr-main-thm} the fact that the groupoids may be non-Hausdorff does not play a great role. We do not know examples of groupoid actions which are locally strictly locally free but not strictly locally free, or locally free but not locally locally free, see Figure~\ref{fig:LF-actions-rel} and the discussion below it.
Before proceeding to the summary of the article, we recommend that, during a first reading, the reader assume the function \(\RN\) and the adjoining function of a topological correspondence to be the constant function \(1\), especially in Section~\ref{sec:prop-corr} and the proof of Theorem~\ref{thm:prop-corr-main-thm}. This reduces the complexity of the proofs and ideas.
The following is a section-wise summary.
\noindent
\textbf{Section~\ref{sec:prelim}:}
We fix some important notation in this section. The section has five subsections. The first one, namely Section~\ref{sec:measure-th}, discusses extensions of functions along local homeomorphisms which are not necessarily surjective, and then some measure-theoretic preliminaries. The continuous extensions of a function along a local homeomorphism, which need not be surjective, are used to formulate one of the conditions for a proper correspondence, Definition~\ref{def:prop-corr}{(iii)}. In the measure-theoretic preliminaries, we first discuss constructing measures using local data and restrictions of measures. Then, in Lemma~\ref{lem:conc-measu-equi}, we discuss various equivalent ways of saying that a measure is concentrated on a subset. This is used to prove the next lemma, Lemma~\ref{lem:loc-concentrated-measure}, which is one of the important results for this article. We use Lemma~\ref{lem:loc-concentrated-measure} to interpret a condition in the definition of a proper correspondence in certain cases. The last part of this subsection discusses absolute continuity of families of measures.
In the next subsection, Section~\ref{sec:act-gpd}, we discuss free, proper and transitive actions.
In Section~\ref{sec:locally-free-actions}, we introduce different notions of locally free actions of groupoids and give examples of some of them. In Proposition~\ref{prop:etale-gpd-loc-free-act}, we show that any action of an \'etale groupoid is strongly locally free.
Section~\ref{sec:prop-group-cutoff} is a revision of some well-known results about proper groupoids and cutoff functions.
Let \((H,\beta)\) be a locally compact groupoid with a Haar system. Let \(X\) be a proper \(H\)\nb-space and \(\lambda\) an \(H\)\nb-invariant family of measures on \(X\). In Section~\ref{comp-op}, we describe \(\mathbb{K}_{\mathrm{T}}(X)\), the \(\Cst\)\nb-algebra of compact operators on the Hilbert \(\Cst(H,\beta)\)-module associated with \((X,\lambda)\).
\noindent
\textbf{Section~\ref{sec:prop-corr}:}
This section starts with three examples which, we expect, may prove helpful for understanding the definition of a proper correspondence. The examples are followed by the definition of a proper correspondence, Definition~\ref{def:prop-corr}. Then we make a few remarks. The rest of the section is the proof of our main theorem, namely, Theorem~\ref{thm:prop-corr-main-thm}.
\noindent
\textbf{Section~\ref{sec:exa}:}
In this section, we give examples of proper correspondences and show that some well-known proper topological correspondences are proper in our sense. Example~\ref{exa:Tu} shows that a proper locally proper generalised morphism defined by Jean--Louis Tu in~\cite{Tu2004NonHausdorff-gpd-proper-actions-and-K} is a proper topological correspondence. Example~\ref{exa:gpd-equi} shows that an equivalence of groupoids is one, and Example~\ref{exa:prop-quiver} shows that a proper quiver defined by Muhly and Tomforde in~\cite{Muhly-Tomforde-2005-Topological-quivers} is a proper topological correspondence. This section also discusses \'etale topological correspondences of spaces and \'etale groupoids. Proposition~\ref{prop:prop-etale-cor-char} shows that for an \'etale correspondence, one of the groupoids can be reduced to a space.
\section{Preliminaries}
\label{sec:prelim}
\paragraph{Topological conventions}
We assume that the reader is familiar with the basic theory of locally compact groupoids (\cite{Renault1980Gpd-Cst-Alg}, \cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}), continuous families of measures (\cite{Renault1985Representations-of-crossed-product-of-gpd-Cst-Alg}, \cite{mythesis}*{Section 2.5.2}) and topological correspondences of locally compact groupoids equipped with Haar systems (\cite{Holkar2017Construction-of-Corr}).
Let \(X\) be a topological space. A subset \(A \subseteq X\) is called quasi-compact if every open cover of \(A\) has a finite subcover, and compact if it is quasi-compact and Hausdorff. The space \(X\) is called locally compact if every point has a compact neighbourhood. We call a groupoid locally compact if it is a locally compact topological space and its space of units is Hausdorff. For a locally compact space \(X\), \(\Contc(X)_0\) denotes the set of functions \(f\) such that \(f\) vanishes outside a compact set \(V\) and \(f|_V\in\Contc(V)\), and \(\Contc(X)\) is defined as the linear span of \(\Contc(X)_0\). The main definition and theorem in this article are stated for locally compact groupoids and Hausdorff spaces, so the reader may simply assume that the groupoids are also Hausdorff.
For a function \(f\colon X\to Y\) of sets, \(\Img(f)\) denotes the image of \(f\). Assume that \(X\) and \(Y\) are spaces, and \(f\colon X\to Y\) is a continuous map. We say \(f\) is open if \(f(U)\) is open \emph{in} \(Y\) for every open \(U\subseteq X\). We call \(f\) open onto its image if \(f(U)\) is open \emph{in} \(f(X)\) for every open \(U\subseteq X\).
The function \(f\) above is called locally one-to-one if for each \(x\in X\) there is a neighbourhood \(U\subseteq X\) of \(x\) such that \(f\vert_U\colon U\to f(U)\) is a one-to-one function. The function \(f\) is called a local homeomorphism if \(f\) is surjective and for each \(x\in X\) there is a neighbourhood \(U\subseteq X\) of \(x\) such that \(f\vert_U\colon U\to f(U)\) is a homeomorphism. In this case, \(f\) is an open map. We call \(f\) a local homeomorphism \emph{onto its image} if for each \(x\in X\) there is a neighbourhood \(U\subseteq X\) of \(x\) such that \(f\vert_U\colon U\to f(U)\) is a homeomorphism; this is equivalent to saying that the function \(f'\colon X\to f(X)\) obtained from \(f\) by restricting its codomain is a local homeomorphism.
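For instance, the map \(f\colon \mathbb{R}\setminus\{0\}\to\mathbb{R}\), \(f(x)=x^2\), is a local homeomorphism onto its image \(\mathbb{R}^+\): each \(x\neq 0\) has a neighbourhood avoiding \(-x\) on which \(f\) restricts to a homeomorphism onto its image. But \(f\) is not a local homeomorphism in the above sense, since it is not surjective onto \(\mathbb{R}\); and it is locally one-to-one without being one-to-one.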
Let \(X\) be a space. If \(\sim\) is an equivalence relation on \(X\), then \([x]\in X/\!\sim\) denotes the equivalence class of \(x\in X\) under \(\sim\), and \(q_X\colon X\to X/\!\sim\) denotes the quotient map \(q_X(x)=[x]\). Let \(Y\) be another space and \(f\colon X\to Y\) a continuous map. If \(f\) preserves \(\sim\), that is, \(f(x)=f(y)\) for all \(x,y\in X\) with \(x\sim y\), then the universal property of the quotient induces a map \(X/\!\sim \to Y\), \([x]\mapsto f(x)\). We denote this map by \([f]\).
Let \(X,Y\) and \(Z\) be spaces, and let \(a\colon X\to Z\) and \(b\colon Y\to Z\) be functions. Then we denote the set \(\{(x,y)\in X\times Y: a(x)=b(y)\}\), which is the fibre product of \(X\) and \(Y\) over \(Z\) along \(a\) and \(b\), most often by \(X\times_{a,Z,b}Y\). Sometimes, when the maps \(a\) and \(b\) are clear, we write \(X\times_ZY\). For \(U\subseteq X\) and \(V\subseteq Y\), \(U\times_{a,Z,b}V\) denotes the set \((U\times V)\cap( X\times_ZY)\). Note that if \(U\subseteq X\) and \(V\subseteq Y\) are open, then \(U\times V\subseteq X\times Y\) is open, and hence \(U\times_{a,Z,b}V\) is open in \(X\times_{a,Z,b}Y\).
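For instance, take \(X=Y=Z=\mathbb{R}\) and \(a(x)=x^2\), \(b(y)=y^2\). Then the fibre product \(X\times_{a,Z,b}Y=\{(x,y)\in\mathbb{R}^2 : x^2=y^2\}\) is the union of the two lines \(y=x\) and \(y=-x\), equipped with the subspace topology of \(\mathbb{R}^2\).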
Let \(X,Y,Z,a\) and \(b\) be as in the last paragraph. Let \(f\) be a function on \(X\) and \(g\) a function on \(Y\). Then \(f\otimes g\) is the function on \(X\times Y\) given by \((x,y)\mapsto f(x)g(y)\). The restriction of \(f\otimes g\) to the fibre product \(X\times_{a,Z,b}Y\) is written \(f\otimes_Zg\).
Let \(X,Y\) and \(Z\) be spaces, and \(f\colon X\to Z\) and \(g\colon Y\to Z\) continuous maps. Then a basic open set for the product topology on \(X\times Y\) is a product \(U\times V\) where \(U\subseteq X\) and \(V\subseteq Y\) are open. Hence the sets of type \(U\times_{f,Z,g} V= (U\times V)\cap (X\times_{f,Z,g} Y)\), where \(U\subseteq X\) and \(V\subseteq Y\) are open, are basic open sets for the subspace topology of the fibre product \(X\times_{f,Z,g} Y\subseteq X\times Y\).
For a groupoid \(G\), \(r_G\) and \(s_G\) denote its range and source maps, respectively. For a left (right) \(G\)\nb-space, we always assume that the corresponding momentum map is \(r_X\) (respectively, \(s_X\)), with the exception of Example~\ref{exa:prop-quiver}. We denote the fibre product \(G\times_{s_G,\base,r_X}X\) (respectively, \(X\times_{s_X,\base, r_G}G\)) by \(G\times_{\base}X\) (respectively, \(X\times_{\base}G\)). We say \(\gamma\) and \(x\) are composable, or \((\gamma, x)\) is a composable pair, if \((\gamma,x)\in G\times_{\base}X\); similarly for the right action.
Let \(X\) be a left \(G\)\nb-space and a right \(H\)\nb-space for groupoids \(G\) and \(H\). We call \(X\) a \(G\)\nb-\(H\)-bispace if the actions of \(G\) and \(H\) commute in the usual sense, that is, for all composable pairs \((\gamma, x)\in G\times_{\base}X\) and \((x,\eta)\in X\times_{\base[H]}H\) we have that
\begin{enumerate*}[(i)]
\item \((\gamma x, \eta)\in X\times_{\base[H]}H\)
and \((\gamma, x\eta )\in G\times_{\base}X\),
\item \((\gamma x)\eta=\gamma(x\eta)\).
\end{enumerate*}
Let \((H,\beta)\) be a locally compact groupoid with a Haar system and \(X\) a right \(H\)\nb-space. Then there is a continuous family of measures with full support \(\beta_X=\{\beta^{[x]}_X\}_{[x]\in X/H}\) along the quotient map \(X\to X/H\) which is given by
\begin{equation}
\label{eq:measures-along-the-quotient}
\int_{X} f\;\mathrm{d}\beta^{[x]}_X \defeq \int_{H} f(x\eta)\;\mathrm{d}\beta^{s_X(x)}(\eta)
\end{equation}
for \([x]\in X/H\) and \(f\in \Contc(X)\).
See~\cite{mythesis}*{Proposition 1.3.21} for the proof.
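For instance, the fact that \(\beta^{[x]}_X\) depends only on the class \([x]\) can be checked directly: if \(x\eta_0\), with \(r_H(\eta_0)=s_X(x)\), is another representative of \([x]\), then \(s_X(x\eta_0)=s_H(\eta_0)\), and the left invariance of the Haar system \(\beta\) gives
\[
\int_{H} f(x\eta_0\eta)\,\mathrm{d}\beta^{s_H(\eta_0)}(\eta)
=\int_{H} f(x\eta)\,\mathrm{d}\beta^{r_H(\eta_0)}(\eta)
=\int_{H} f(x\eta)\,\mathrm{d}\beta^{s_X(x)}(\eta).
\]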
Let \(\mathcal{A}\) be a category and let \(\mathcal{A}^0\) denote its class of objects. By \(x\in\!\in\mathcal{A}\) we mean that \(x\) is an object in \(\mathcal{A}\).
The symbols \(\mathbb{R}^*\) and \(\mathbb{C}^*\) denote, respectively, the sets of nonzero real and complex numbers, and \(\mathbb{R}^-\) and \(\mathbb{R}^+\) denote the sets of negative and positive real numbers, respectively.
\subsection{Measures concentrated on sets}
\label{sec:measure-th}
Our reference for measure theory is Bourbaki~\cite{Bourbaki2004Integration-I-EN}. For a topological space \(X\), let \(\mathrm{Bor}(X)\) denote the set of Borel functions on \(X\). Given \(Y\subseteq X\), \(\chi_Y\) denotes the characteristic function of the set \(Y\). In this article, all the measures are assumed to be positive, Radon and \(\sigma\)\nb-finite. Now we introduce some notation.
\begin{notation}[Notation and remarks]
\label{not:1}
\begin{enumerate}[leftmargin=*]
\item Let \(X\) be a space and \(V,U\subseteq X\) with \(U\subseteq V\). Let \(f\) be a function on \(U\). Then we write \(0^V(f)\) for the function on \(V\) which equals \(f\) on \(U\) and zero on \(V-U\).
Let \(X\) be a locally compact space and \(U\) an open Hausdorff subset of it. Then define the set
\[
\Contc(X, U)=\{f\in \Contc(X): {\textup{supp}}(f)\subseteq U \}.
\]
When \(X\) is Hausdorff, one may identify \(\Contc(X,U)\) with \(\Contc(U)\) as follows: if \(g\) is in \(\Contc(X,U)\), then the restriction of \(g\) to \(U\) lies in \(\Contc(U)\). On the other hand, if \(f\) is in \(\Contc(U)\), then \(0^X(f)\) is in \(\Contc(X,U)\). These two processes are inverses of each other. We shall frequently use this identification.
\item Let \(X\) and \(Y\) be locally compact Hausdorff spaces, and \(\phi\colon X\to Y\) a map which is open onto its image. For an open set \(U\subseteq X\), define
\[
\mathcal{E}S_\phi(U)=\{U'\subseteq Y: U'\text{ is open in }Y\text{ and } U'\cap \phi(X)=\phi(U)\}
\]
(here and everywhere else, \(\mathcal{E}\) stands for `extension'). Note that for \(U\neq\emptyset\), \(\mathcal{E}S_\phi(U)\) is not empty: since \(\phi(U)\subseteq \phi(X)\) is open, there is an open set \(U'\subseteq Y\) with \(\phi(U)=U'\cap\phi(X)\). We call an element of \(\mathcal{E}S_\phi(U)\) an extension of \(U\) via \(\phi\), or simply an extension of \(U\).
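For instance, let \(\phi\colon (0,1)\to\mathbb{R}\) be the inclusion map and \(U=(0,1/2)\). Then both \((0,1/2)\) and \((-1,1/2)\) belong to \(\mathcal{E}S_\phi(U)\), since each of them intersects \(\phi(X)=(0,1)\) in \(\phi(U)=(0,1/2)\); thus extensions are, in general, far from unique.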
\item A set is called cocompact if it has compact closure. Let \(X\) and \(Y\) be locally compact Hausdorff spaces, and \(\phi\colon X\to Y\) a local homeomorphism onto its image. Then \(\phi\) is an open map onto its image. Let
\begin{align*}
\phi^{\mathrm{Ho}}&\defeq\{U\subseteq X : U \text{ is open and } \phi|_{U} \text{ is a homeomorphism}\},\\
\phi^{\mathrm{Ho}C}&\defeq\{U\subseteq X : U \text{ is cocompact, }U\in\phi^{\mathrm{Ho}} \text{ and } \phi|_{\overline{U}} \text{ is a homeomorphism}\}.
\end{align*}
Then \(\phi^{\mathrm{Ho}}\neq \emptyset\) and it is an open cover of \(X\) for \(X\neq \emptyset\). Given \(U\in\phi^{\mathrm{Ho}}\), we write \(\phi|_U^{-1}\) for \((\phi|_U)^{-1}\). Note that \(\phi|_U^{-1}\colon \phi(U)\to U\) is a homeomorphism. The map \(\phi|_U^{-1}\) induces an obvious isomorphism \(\Contc(U)\to \Contc(\phi(U))\) which sends a function \(f\in \Contc(U)\) to the function \(f\circ \phi|_{U}^{-1}\) in \(\Contc(\phi(U))\).
Since \(X\) is locally compact, for any \(U\in\phi^{\mathrm{Ho}}\) and \(x\in U\), there is an open cocompact neighbourhood \(V\) of \(x\) with \(\overline{V}\subseteq U\). Hence \(\phi^{\mathrm{Ho}C}\neq \emptyset\). Moreover, \(\phi^{\mathrm{Ho}}\) and \(\phi^{\mathrm{Ho}C}\) are open covers of \(X\) which are closed under intersections of sets.
\item Let \(X,Y\) and \(\phi\) be as in (3) above. Let \(U\in \phi^{\mathrm{Ho}}\). For \(f\in \Contc(X,U)\) define the set
\[
\mathcal{E}_\phi(f) =\{f'\in \Contc(Y,U'): U'\in \mathcal{E}S_\phi(U)\text{ and } f'= f\circ\phi|_{U}^{-1} \text{ on } \phi(U)\}.
\]
Since \(\phi|_U\) is a homeomorphism onto its image, we may write:
\[
\mathcal{E}_\phi(f) =\{f'\in \Contc(Y,U'): U'\in \mathcal{E}S_\phi(U)\text{ and } f'\circ \phi|_U= f \text{ on } U\}.
\]
Lemma~\ref{lem:loc-homeo-fn-ext-restr} says that for \(U\in \phi^{\mathrm{Ho}C}\) and \(f\in \Contc(X,U)\), the set \(\mathcal{E}_\phi(f)\) is nonempty.
We call an element of \(\mathcal{E}_\phi(f)\) \emph{an extension of \(f\) to \(Y\) via \(\phi\)} or, sometimes, simply an extension. Clearly, any two extensions of \(f\) agree on \(\phi(U)\), where they equal \(f\circ \phi|_U^{-1}\).
Since the intersection of any two sets in \(\mathcal{E}S_\phi(U)\) with \(\phi(X)\) is \(\phi(U)\), we have
\(f'|_{\phi(X)}=f''|_{\phi(X)}=0^{\phi(X)}(f\circ \phi|_U^{-1})\) for any two extensions \(f',f''\in \mathcal{E}_\phi(f)\).
For \(U\in \phi^{\mathrm{Ho}}\), let \(U'\in \mathcal{E}S_\phi(U)\). For \(g\in \Contc(Y, U')\) we define the \emph{function} \(\Res^\phi(g)\) on \(X\) to be \(0^X(g\circ \phi|_U)\), that is,
\[
\mathbb{R}es^\phi(g)= \begin{cases} g\circ \phi|_U &\mbox{on } U, \\
0 & \mbox{on } X-U \mathrm{e}nd{cases}
\]
The function \(\mathbb{R}es^\phi(g)\) is continuous on \(U\). Lemma~\ref{lem:restr-is-Cc} says that \(\mathbb{R}es^\phi(g)\in \mathbb{C}ontc(X;U)\) when \(U\in \phi^{\mathrm{Ho}C}\). Here \(\mathbb{R}es\) stands for `restriction'.
\item Let \(X,Y\) and \(\phi\) be as in (3) above. Analogous to \(\mathbb{C}ontc(X,U)\), for \(U\subseteq X\), we define \(\mathrm{Bor}(X,U)\) as the set of Borel measurable functions on \(X\) whose support lies in \(U\). For \(U\in \phi^{\mathrm{Ho}}\) and \(f\in \mathrm{Bor}(X,U)\), we define \(\mathcal{E}B_\phi(f)\) analogous to \(\mathcal{E}_\phi(f)\) in (4) above but with \(\mathbb{C}ontc(Y,U')\) replaced by \(\mathrm{Bor}(Y,U')\). Finally, observe that \(\mathbb{R}es^\phi(g)\) makes sense for a Borel function \(g\in \mathrm{Bor}(Y,U')\) where \(U'\in\mathcal{E}S_\phi(U)\).
\mathrm{e}nd{enumerate}
\mathrm{e}nd{notation}
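The interplay between \(\mathcal{E}_\phi\) and \(\mathbb{R}es^\phi\) above can be illustrated in a toy discrete model. The following sketch is our own illustration with hypothetical names, not part of the theory above: finite sets play the role of the spaces, \(\phi\) is injective on \(U\), and functions are dictionaries with missing keys counting as \(0\).

```python
# Finite model of the notation above: X, Y finite sets, phi a map X -> Y that
# is injective on U (so phi|_U plays the role of a homeomorphism onto its image).
# All names here are our own, chosen only for illustration.

def extend(f, phi, U, Y):
    """An element of E_phi(f): f o phi|_U^{-1} on phi(U), zero elsewhere on Y."""
    inv = {phi[x]: x for x in U}          # phi|_U^{-1} as a dict
    return {y: f.get(inv[y], 0) if y in inv else 0 for y in Y}

def restrict(g, phi, U, X):
    """Res^phi(g): g o phi|_U on U, zero on X - U."""
    return {x: g.get(phi[x], 0) if x in U else 0 for x in X}

X = {0, 1, 2}
Y = {'a', 'b', 'c', 'd'}
phi = {0: 'a', 1: 'b', 2: 'b'}            # injective on U = {0, 1} only
U = {0, 1}
f = {0: 5, 1: 7, 2: 0}                    # supported in U

fp = extend(f, phi, U, Y)                  # an extension of f via phi
assert restrict(fp, phi, U, X) == f        # Res(extension of f) recovers f
```

In this model the identity \(\mathbb{R}es^\phi(f')=f\) for \(f'\in\mathcal{E}_\phi(f)\) is immediate; the point of the lemmas below is that the same works in the topological setting once supports are controlled.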
Now we start preparing for Lemma~\ref{lem:restr-is-Cc}. Let \(X\) be a space, and \(Y\) and \(A\) its subspaces with \(A\subseteq Y\). The set \(A\) is (quasi-)compact in \(Y\) if and only if it is so in \(X\). To prove the \mathrm{e}mph{if} part, let \(\{U_j'\}\) be an open cover of \(A\) in \(Y\). For each \(U_j'\), there is an open set \(U_j\) of \(X\) with \(U_j'=U_j\cap Y\). Since \(\{U_j'\}\) covers \(A\), so does \(\{U_j\}\). Let \(U_{j_1},\dots,U_{j_n}\) be a finite subcover of \(A\) in \(X\); then \(U_{j_1}',\dots,U_{j_n}'\) is a finite subcover of \(A\) in \(Y\).
Conversely, let \(\{U_j\}\) be an open cover of \(A\) in \(X\). Then the sets \(U_j'\defeq U_j\cap Y\) are open in \(Y\) and cover \(A\). Since \(A\) is compact in \(Y\), we may choose a finite subcover, say \(U_{j_1}',\dots,U_{j_n}'\). Then \(U_{j_1},\dots,U_{j_n}\) is a finite subcover of \(A\) in \(X\).
We need the \mathrm{e}mph{if} part of this observation in the final argument of the proof of Lemma~\ref{lem:restr-is-Cc}; it will also be used later.
Let \(X\) be a space, \(Y\) and \(W\) its subspaces with \(Y\supseteq W\). In the next lemma, \(\overline{W}\) and \(\overline{W}^Y\) denote the closures of \(W\) in \(X\) and \(Y\), respectively. In general, \(\overline{W}\) denotes the closure of \(W\) in the biggest space in the discussion that contains \(W\). It is a basic result in point-set topology that \(\overline{W}^Y= \overline{W}\cap Y\).
\begin{lemma}
\label{lem:restr-is-Cc}
Let \(X\) and \(Y\) be locally compact Hausdorff spaces, and \(\phi\colon X\to Y\) be a local homeomorphism onto its image. Let \(U\in \phi^{\mathrm{Ho}C}\) and \(V\in \mathcal{E}S_\phi(U)\). Then for \(g\in \mathbb{C}ontc(Y,V)\), \(\mathbb{R}es^\phi(g)\in \mathbb{C}ontc(X;U)\). A similar claim holds for \(g\in \mathrm{Bor}(Y,V)\), that is, for \(g\in \mathrm{Bor}(Y,V)\), \(\mathbb{R}es^\phi(g)\in \mathrm{Bor}(X;U)\).
\mathrm{e}nd{lemma}
\begin{proof}
The Borel part of the lemma follows easily from the definition of \(\mathbb{R}es^\phi(g)\). We deal with the continuous case. If \(g|_{\phi(U)}\in \mathbb{C}ontc(\phi(U))\), then \(\mathbb{R}es^\phi(g)\in \mathbb{C}ontc(X;U)\). Hence it suffices to show that \(g|_{\phi(U)}\in \mathbb{C}ontc(\phi(U))\). We write \(Z\) for \(\phi(X)\) in the discussion that follows.
We need the following observation: for any subset \(A\) of \(V\), \(A\cap \phi(\overline{U})=A\cap \phi(U)\).
The proof of the observation is as follows: it is obvious that \(A\cap \phi(\overline{U})\supseteq A\cap \phi(U)\). On the other hand, \(A\cap \phi(\overline{U})=(A\cap V)\cap(\phi(\overline{U})\cap Z)\subseteq A\cap (V\cap Z)=A\cap \phi(U)\).
Now the fact that \(g^{-1}(\mathbb{C}^*)\subseteq {\textup{supp}}(g)\subseteq V\) and the observation in the last paragraph together give us that \(g^{-1}(\mathbb{C}^*)\cap \phi(U)=g^{-1}(\mathbb{C}^*)\cap \phi(\overline{U})\). But
\begin{align*}
g^{-1}(\mathbb{C}^*)\cap \phi(U)&= \{y\in \phi(U): g(y)\neq 0 \}=g|_{\phi(U)}^{-1}(\mathbb{C}^*) \text{ and}\\
g^{-1}(\mathbb{C}^*)\cap \phi(\overline{U})&= \{y\in \phi(\overline{U}): g(y)\neq 0 \}=g|_{\phi(\overline{U})}^{-1}(\mathbb{C}^*).
\mathrm{e}nd{align*}
Thus \(g|_{\phi(U)}^{-1}(\mathbb{C}^*)=g|_{\phi(\overline{U})}^{-1}(\mathbb{C}^*)\); denote either of these sets by \(B\). Then \({\textup{supp}}(g|_{\phi(U)})=\overline{B}^{\phi(U)}\) and \({\textup{supp}}(g|_{\phi(\overline{U})})=\overline{B}^{\phi(\overline{U})}\). Furthermore,
\begin{equation}
\label{eq:res-is-cc-1}
{\textup{supp}}(g|_{\phi(U)})=\overline{B}^{\phi(\overline{U})} \cap \phi(U).
\mathrm{e}nd{equation}
Now we claim that \(\overline{B}^{\phi(\overline{U})} \subseteq V\). This is because \(g|_{\phi(\overline{U})}^{-1}(\mathbb{C}^*)\subseteq g^{-1}(\mathbb{C}^*)\) and hence
\[\overline{B}^{\phi(\overline{U})}=\overline{g|_{\phi(\overline{U})}^{-1}(\mathbb{C}^*)}\cap \phi(\overline{U})\subseteq \overline{g|_{\phi(\overline{U})}^{-1}(\mathbb{C}^*)}\subseteq \overline{g^{-1}(\mathbb{C}^*)}={\textup{supp}}(g)\subseteq V.
\]
Now Equation~\mathrm{e}qref{eq:res-is-cc-1} and the observation made at the beginning of the proof together yield that
\[
{\textup{supp}}(g|_{\phi(U)})=\overline{B}^{\phi(\overline{U})} \cap \phi(U)=\overline{B}^{\phi(\overline{U})} \cap \phi(\overline{U})=\overline{B}^{\phi(\overline{U})}={\textup{supp}}(g|_{\phi(\overline{U})}).
\]
Note that since \(\overline{B}^{\phi(\overline{U})}\) is a closed subset of the compact set \(\phi(\overline{U})\), \(\overline{B}^{\phi(\overline{U})}\) is compact in \(\phi(\overline{U})\). This along with the observation prior to the statement of this lemma implies that \(\overline{B}^{\phi(\overline{U})}={\textup{supp}}(g|_{\phi(U)})\) is compact in \(\phi(U)\) as well as in \(Y\).
\mathrm{e}nd{proof}
\begin{lemma}
\label{lem:loc-homeo-fn-ext-restr}
Let \(X\) and \(Y\) be locally compact spaces with \(Y\) Hausdorff. Let \(\phi\colon X\to Y\) be a local homeomorphism onto its image. Let \(U\in\phi^{\mathrm{Ho}C}\) and \(U'\in \mathcal{E}S_\phi(U)\). Then
\begin{enumerate}[i)]
\item for \(f\in\mathbb{C}ontc(X;U)\), \(\mathcal{E}_\phi(f)\) is nonempty;
\item for \(f\in \mathbb{C}ontc(X;U)\) and \(f'\in \mathcal{E}_\phi(f)\), \(\mathbb{R}es^\phi(f')=f\);
\item for \(g\in \mathbb{C}ontc(Y,U')\), \(g\in \mathcal{E}_\phi(\mathbb{R}es^\phi(g))\).
\mathrm{e}nd{enumerate}
Similar claims hold in the measurable case.
\mathrm{e}nd{lemma}
\begin{proof}
Consider the measurable case first.
i): For \(f\in \mathrm{Bor}(X;U)\), \(0^Y(f\circ\phi|_U^{-1})\) lies in \(\mathcal{E}B_\phi(f)\). Claims ii) and iii) can be proved along the same lines as in the continuous case. Now we prove the claims for the continuous case.
\noindent i): We know that \(\phi(\overline{U})\subseteq Y\) is compact, therefore,
\[
f\circ\phi|_{\phi(\overline{U})}^{-1}\colon \phi(\overline{U})\to \mathbb{C}
\]
is a well-defined continuous function. We extend \(f\circ\phi|_{\phi(\overline{U})}^{-1}\) to a function \(\tilde f\in\mathbb{C}ontc(Y)\) with \(\tilde f=f\circ\phi|_{\phi(\overline{U})}^{-1}\) on \(\phi(\overline{U})\) using Tietze's extension theorem.
Let \(V\in \mathcal{E}S_\phi(U)\). Then \(\phi({\textup{supp}}(f))\subseteq V\) is compact, since \(\phi({\textup{supp}}(f))\) is compact in the subspace \(\phi(U)\) of \(V\). Let \(w\in \mathbb{C}ontc(Y)\) be a function which is \(1\) on the compact set \(\phi({\textup{supp}}(f))\) and has support in \(V\). Then \(f'\defeq \tilde f w\in\mathbb{C}ontc(Y)\) is an element of \(\mathcal{E}_\phi(f)\): it is plain that \(f'|_{\phi(U)}=\tilde f|_{\phi(U)}=f\circ {\phi|}_{U}^{-1}\). Moreover,
\[
\left({\textup{supp}}(f')\cap \phi(X)\right) \subseteq \left(({\textup{supp}}(\tilde f)\cap{\textup{supp}}(w))\cap \phi(X)\right)\subseteq\,{\textup{supp}}(w)\cap \phi(X)\subseteq \phi(U).
\]
Thus \(f'\in \mathcal{E}_\phi(f)\).
\noindent ii): If \(f'\in \mathcal{E}_\phi(f)\), then by definition \(f'\circ \phi|_U=f\) on \(U\); hence \(\mathbb{R}es^\phi(f') = f'\circ \phi|_U=f\) on \(U\). Since \(f\in \mathbb{C}ontc(X;U)\), \(f=0\) on \(X-U\), and \(\mathbb{R}es^\phi(f')=0\) there by definition.
\noindent iii): Lemma~\ref{lem:restr-is-Cc} says that \(\mathbb{R}es^\phi(g)\in \mathbb{C}ontc(X;U)\), hence we may define \(\mathcal{E}_\phi(\mathbb{R}es^\phi(g))\). The rest follows directly from the definitions of \(\mathbb{R}es^\phi(g)\) and \(\mathcal{E}_\phi(f)\).
\mathrm{e}nd{proof}
\paragraph{\textbf{Restriction of a measure, and construction of measures using local data:}}
Let \(X\) be a locally compact, Hausdorff space and \(U\) an open subset of \(X\). Identify \(\mathbb{C}ontc(X,U)\) with \(\mathbb{C}ontc(U)\) in the obvious way, see~\ref{not:1}(1). Recall from elementary measure theory that a measure \(\lambda\colon \mathbb{C}ontc(X)\to\mathbb{R}\) on \(X\), when restricted to the subspace \(\mathbb{C}ontc(X,U)\simeq\mathbb{C}ontc(U)\), gives a measure on \(U\), called the restriction of \(\lambda\) to \(U\). We denote this restriction by \(\lambda|_U\).
In general, the measure \(\lambda\) can be restricted not only to an open set but to any Borel set. Let \((X,\lambda)\) be a Borel measure space and \(Y\) a Borel subset of \(X\). Then any Borel subset \(U\subseteq Y\) is of the form \(U=U'\cap Y\) for a Borel set \(U'\subseteq X\). For \(Y\subseteq X\) as above, define \(\lambda|_Y\), the restriction of \(\lambda\) to \(Y\), by \(\lambda|_Y(U)=\lambda(U'\cap Y)\), where \(U\subseteq Y\) and \(U'\subseteq X\) are Borel sets with \(U=U'\cap Y\). We shall often use the following result from elementary measure theory: if \(f\) is an integrable function on \(X\), then
\[
\int_X f|_Y\,\mathrm{d}\lambda\defeq\int_Y f\,\mathrm{d}\lambda=\int_Y f\,\mathrm{d}\lambda|_Y.
\]
Let \(X\) and \(\lambda\) be as in the last paragraph, and let \(U\) and \(V\) be Borel subsets of \(X\) with \(V\subseteq U\). It is easy to check that the restricted measures satisfy the equality \(\lambda|_V=(\lambda|_U)|_V\). We will not need such a generalised notion of restricted measures, but we shall need the notion of restriction of a measure to a locally compact space, see~\cite{Bourbaki2004Integration-I-EN}*{Nr.\,7, {\S}5, IV.\,74}.
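The displayed restriction identity can be checked in a discrete toy model, where a measure on a finite set is a dictionary of point masses. The sketch below is our own illustration with hypothetical names, under the assumption that integration is a finite weighted sum.

```python
# Discrete sketch of the restriction lambda|_Y: restricting a measure on a
# finite set X to a subset Y keeps only the point masses that lie in Y.
# This checks  int_X f|_Y dlam = int_Y f dlam|_Y  in the finite model.

def integral(f, lam):
    return sum(f.get(x, 0) * m for x, m in lam.items())

X = {0, 1, 2, 3}
Y = {1, 3}
lam = {0: 1.0, 1: 2.0, 2: 0.5, 3: 4.0}
lam_Y = {x: m for x, m in lam.items() if x in Y}   # lambda|_Y
f = {0: 10, 1: 1, 2: 7, 3: 2}

f_Y = {x: (f[x] if x in Y else 0) for x in X}      # f|_Y = f * chi_Y
assert integral(f_Y, lam) == integral(f, lam_Y)    # both sides agree
```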
The following proposition gives a criterion to extend measures defined on elements of an open cover of \(X\) to the whole space.
\begin{proposition}[Proposition~\cite{Bourbaki2004Integration-I-EN}*{Chapter III, \S 2, 1, Proposition 1}]
\label{prop:bourbaki-pasting-measures}
Let \(\{Y_\alpha\}_{\alpha\in A}\) be an open cover of \(X\), \(X\) a locally compact Hausdorff space, and suppose given, on each subspace \(Y_\alpha\), a measure \(\mu_\alpha\), in such a way that for every pair \((\alpha,\beta)\), the restrictions of \(\mu_\alpha\) and \(\mu_\beta\) to \(Y_\alpha\cap Y_\beta\) are identical. Under these conditions, there is one and only one measure \(\mu\) on \(X\) whose restriction to \(Y_\alpha\) is equal to \(\mu_\alpha\) for every index \(\alpha\).
\mathrm{e}nd{proposition}
\begin{lemma}
\label{lem:pulling-and-pasting-measures-1}
Let \(X\) and \(Y\) be locally compact, Hausdorff spaces and let \(\lambda\) be a measure on \(Y\). If \(\phi\colon X\to Y\) is a local homeomorphism onto \(Y\), then \(\lambda\) induces a unique measure \(\phi^*(\lambda)\) on \(X\) such that for every \(f\in \mathbb{C}ontc(X,U)\), where \(U\in\phi^{\mathrm{Ho}}\), we have
\begin{equation}
\label{eq:pullback-measure-1}
\int_X f\,\mathrm{d}\phi^*(\lambda)=\int_Y 0^Y(f\circ \phi|_U^{-1}) \;\mathrm{d}\lambda.
\mathrm{e}nd{equation}
\mathrm{e}nd{lemma}
\begin{proof}
We know that \(\phi^{\mathrm{Ho}}\) is an open cover of \(X\). For each \(U\in\phi^{\mathrm{Ho}}\) define a measure \(\phi^*(\lambda)_U\) on \(U\) by
\[
\phi^*(\lambda)_U(f)=\int_{\phi(U)} f\circ \phi|_U^{-1} \,\mathrm{d}\lambda.
\]
If \(U, V\in \phi^{\mathrm{Ho}}\), then \(U\cap V\in \phi^{\mathrm{Ho}}\). Furthermore, for \(f\in \mathbb{C}ontc(U\cap V)\),
\[
\phi^*(\lambda)_{U}(f)=\phi^*(\lambda)_{U\cap V}(f)=\phi^*(\lambda)_{V}(f).
\]
Now Proposition~\ref{prop:bourbaki-pasting-measures} says that there is a unique measure on \(X\), which we denote by \(\phi^*(\lambda)\), whose restriction to each \(U\) in \(\phi^{\mathrm{Ho}}\) equals \(\phi^*(\lambda)_U\). The result now follows from the observation that \(\lambda|_{\phi(U)}(f\circ \phi|_U^{-1})=\lambda(0^Y(f\circ \phi|_{U}^{-1}))\) for \(f\in\mathbb{C}ontc(X,U)\), where \(U\in\phi^{\mathrm{Ho}}\).
\mathrm{e}nd{proof}
One may observe that Lemma~\ref{lem:pulling-and-pasting-measures-1} can be restated when the function \(f\) in Equation~\mathrm{e}qref{eq:pullback-measure-1} is an integrable function in \(\mathrm{Bor}(X,U)\) for \(U\in \phi^{\mathrm{Ho}}\). The proof is similar to the one above.
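In a discrete model the pullback measure of Lemma~\ref{lem:pulling-and-pasting-measures-1} is easy to compute: when \(\phi\) is injective, \(\phi^*(\lambda)\) assigns to \(x\) the mass of the singleton \(\{\phi(x)\}\). The sketch below is our own toy model with hypothetical names; it checks Equation~\mathrm{e}qref{eq:pullback-measure-1} for point masses.

```python
# Discrete sketch of the pullback measure: for injective phi (every point has
# a neighbourhood on which phi is a homeomorphism onto its image), the point
# x receives the mass lambda({phi(x)}).  Then
#   int_X f d phi^*(lambda) = int_Y 0^Y(f o phi^{-1}) d lambda.

def pullback(lam, phi, X):
    return {x: lam.get(phi[x], 0) for x in X}

def integral(f, lam):
    return sum(f.get(x, 0) * m for x, m in lam.items())

X = {0, 1}
phi = {0: 'a', 1: 'c'}
lam = {'a': 2.0, 'b': 5.0, 'c': 0.25}      # a measure on Y = {'a','b','c'}
f = {0: 4, 1: 8}

lhs = integral(f, pullback(lam, phi, X))   # int_X f d phi^*(lambda)
ext = {'a': 4, 'b': 0, 'c': 8}             # 0^Y(f o phi^{-1})
assert lhs == integral(ext, lam)           # the two integrals agree
```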
Let \(Y\) be a locally compact, Hausdorff space and \(\lambda\) a measure on it. Then~\cite{Bourbaki1966Topologoy-Part-1}*{Proposition 12, {\S}9.\,7, Chapter I} says that any locally compact Hausdorff subspace \(Z\) of \(Y\) is locally closed. Moreover,~\cite{Bourbaki1966Topologoy-Part-1}*{Proposition 5, {\S}3.\,3, Chapter I} implies that \(Z\) is the intersection of a closed subset and an open subset of \(Y\). Hence \(Z\) is a Borel subspace of \(Y\) and the restriction of \(\lambda\) to \(Z\) is defined.
\begin{corollary}
\label{cor:pulling-and-pasting-measures-2}
Let \(X\) and \(Y\) be locally compact Hausdorff spaces, and \(\lambda\) a measure on \(Y\). If \(\phi\colon X\to Y\) is a local homeomorphism onto its image, then \(\lambda\) induces a unique measure \(\phi^*(\lambda)\) on \(X\) such that for every \(f\in \mathbb{C}ontc(X,U)\) where \(U\in\phi^{\mathrm{Ho}C}\) we have
\begin{equation}
\label{eq:pullback-measure-2}
\int_X f\,\mathrm{d}\phi^*(\lambda)=\int_{\phi(U)} f'\,\mathrm{d}\lambda=\int_{\phi(X)} f'\,\mathrm{d}\lambda
\mathrm{e}nd{equation}
for any extension \(f'\in\mathcal{E}_\phi(f)\).
A similar statement holds for \(f\in \mathrm{Bor}(X,U)\) and \(f'\in\mathcal{E}B_\phi(f)\).
\mathrm{e}nd{corollary}
\begin{proof}
Let \(\phi'\colon X\to \phi(X)\) be the map obtained from \(\phi\) by restricting the codomain; here \(\phi(X)\subseteq Y\) is equipped with the subspace topology. Then \(\phi'\) is a local homeomorphism and hence open. Since the image of a locally compact space under an open map is locally compact, \(\phi(X)\) is locally compact when equipped with the subspace topology from \(Y\). Thus \(\phi(X)\) is a locally compact subspace of \(Y\). Now we endow \(\phi(X)\) with the restricted measure \(\lambda|_{\phi(X)}\) and apply Lemma~\ref{lem:pulling-and-pasting-measures-1} to \(\phi'\colon X\to \phi(X)\). This gives us the measure \({\phi'}^*(\lambda|_{\phi(X)})\), which we denote by \(\phi^*(\lambda)\). Equation~\mathrm{e}qref{eq:pullback-measure-1} tells us that for \(f\in \mathbb{C}ontc(X;U)\), where \(U\in\phi^{\mathrm{Ho}}\),
\begin{equation}
\label{eq:ext-1}
\int_X f\,\mathrm{d}\phi^*(\lambda)=\int_{\phi(X)}0^{\phi(X)}(f\circ \phi|_{\phi(U)}^{-1})\;\,\mathrm{d}\lambda.
\mathrm{e}nd{equation}
Since \(0^{\phi(X)}(f\circ \phi|_{\phi(U)}^{-1})\in \mathbb{C}ontc(\phi(X),\phi(U))\), we can write
\[
\int_X f\,\mathrm{d}\phi^*(\lambda)=\int_{\phi(X)}0^{\phi(X)}(f\circ \phi|_{\phi(U)}^{-1})\;\mathrm{d}\lambda=\int_{\phi(U)}0^{\phi(X)}(f\circ \phi|_{\phi(U)}^{-1})\;\mathrm{d}\lambda.
\]
Let \(f'\in\mathcal{E}_\phi(f)\). Then Notation~\ref{not:1}(4) tells us that \(f'|_{\phi(X)}=0^{\phi(X)}(f\circ\phi|_U^{-1})\) and \(f'|_{\phi(X)}=0^{\phi(X)}(f'|_{\phi(U)})\). From this discussion and Equation~\mathrm{e}qref{eq:ext-1} we may write that
\begin{equation*}
\int_X f\,\mathrm{d}\phi^*(\lambda)
=\int_{\phi(U)}0^{\phi(X)}(f\circ\phi|_U^{-1})\,\mathrm{d}\lambda
=\int_Y f'|_{\phi(X)}\,\mathrm{d}\lambda
=\int_Y f'|_{\phi(U)}\,\mathrm{d}\lambda.
\mathrm{e}nd{equation*}
The claim of the corollary follows by observing that the last two terms in the above equation are \(\int_{\phi(X)} f'\,\mathrm{d}\lambda\) and \(\int_{\phi(U)} f'\,\mathrm{d}\lambda\), respectively.
The proof for the measurable case can be written along the same lines as above.
\mathrm{e}nd{proof}
\begin{lemma}
\label{lem:conc-measu-equi}
Let \(Y\) be a locally compact, Hausdorff space, \(\lambda\) a measure on it and \(Z\) a locally compact subspace of \(Y\). Then the following are equivalent:
\begin{enumerate}[i)]
\item For every nonnegative \(f\in \mathbb{C}ontc(Y)\), \(\int_Y f\,\mathrm{d}\lambda = \int_Z f\,\mathrm{d}\lambda\).
\item For every \(f\in \mathbb{C}ontc(Y)\), \(\int_Y f\,\mathrm{d}\lambda = \int_Z f\,\mathrm{d}\lambda\).
\item For every \(f\in \mathcal{L}^1(Y)\), \(\int_Y f\,\mathrm{d}\lambda = \int_Z f\,\mathrm{d}\lambda\).
\item For every nonnegative \(f\in \mathcal{L}^1(Y)\), \(\int_Y f\,\mathrm{d}\lambda = \int_Z f\,\mathrm{d}\lambda\).
\item Every \(y\in Y\) has a neighbourhood \(V\subseteq Y\) such that \(\lambda(V\cap (Y-Z))=0\).
\mathrm{e}nd{enumerate}
\mathrm{e}nd{lemma}
\begin{proof}
\noindent (i)\(\implies\)(ii): Let \(f\in \mathbb{C}ontc(Y)\). Put \(f^+ = \max\{f,0\}\) and \(f^-=-\min\{f,0\}\), where \(0\) is the constant function \(0\) on \(Y\). Then \(f^+\) and \(f^-\) are continuous, and they are compactly supported since \({\textup{supp}}(f^+)\) and \({\textup{supp}}(f^-)\) are contained in \({\textup{supp}}(f)\). Furthermore, \(f= f^+ - f^-\). Hence
\[
\int_Y f\,\mathrm{d}\lambda = \int_Y f^+\,\mathrm{d}\lambda - \int_Y f^-\,\mathrm{d}\lambda = \int_Z f^+\,\mathrm{d}\lambda - \int_Z f^-\,\mathrm{d}\lambda =\int_Z f\,\mathrm{d}\lambda.
\]
\noindent (ii)\(\implies\)(iii): Let \(f\in \mathcal{L}^1(Y)\). Then \(f-f|_Z = f-f\chi_Z\) is also in \(\mathcal{L}^1(Y)\) and we may choose \(g\in \mathbb{C}ontc(Y)\) such that \(\int_Y \abs{(f-f|_Z)-g}\,\mathrm{d}\lambda < \mathrm{e}psilon/2\) for given \(\mathrm{e}psilon >0\). Now
\begin{multline}
\label{eq:meas-conc-qui-1}
\int_Y \abs{g}\,\mathrm{d}\lambda=\int_Z \abs{g}\,\mathrm{d}\lambda\leq \int_Z \abs{g-(f-f|_Z)}\,\mathrm{d}\lambda + \int_Z \abs{f-f|_Z}\,\mathrm{d}\lambda\\
=\int_Z \abs{g-(f-f|_Z)}\,\mathrm{d}\lambda\leq \int_Y \abs{g-(f-f|_Z)}\,\mathrm{d}\lambda< \mathrm{e}psilon/2.
\mathrm{e}nd{multline}
In the above computation, the first equality is due to the hypothesis, the second equality holds because \(\int_Z \abs{f-f|_Z}\,\mathrm{d}\lambda=\int_Y \chi_Z \abs{f-f \chi_Z}\,\mathrm{d}\lambda=0\), and the second last inequality holds because \(\abs{h\chi_Z}\leq \abs{h}\) for any integrable function \(h\in \mathcal{L}^1(Y)\).
Now
\[
0\leq \abs{\int_Y f\,\mathrm{d}\lambda-\int_Y f|_Z\,\mathrm{d}\lambda}\leq \int_Y\abs{f-f|_Z}\,\mathrm{d}\lambda\leq \int_Y\abs{(f-f|_Z)-g}\,\mathrm{d}\lambda+ \int_Y \abs{g}\,\mathrm{d}\lambda< \mathrm{e}psilon.
\]
The last inequality above is due to the choice of \(g\) and Equation~\mathrm{e}qref{eq:meas-conc-qui-1}. Since the above inequality holds for every \(\mathrm{e}psilon>0\), we have \(\int_Y f\,\mathrm{d}\lambda=\int_Y f|_Z\,\mathrm{d}\lambda\).
\noindent (iii)\(\implies\)(iv): Clear.
\noindent (iv)\(\implies\)(v): For \(y\in Y\), let \(V\subseteq Y\) be a relatively compact neighbourhood of \(y\). Then \(\lambda(\chi_V)=\lambda(V)\leq \lambda(\overline{V})< \infty\); thus \(\chi_V\in \mathcal{L}^1(Y)\) is a nonnegative function. Using the hypothesis we compute
\[
\lambda(V\cap (Y-Z))= \int_Y \chi_V\cdot\; \chi_{(Y-Z)}\;\mathrm{d}\lambda=\int_Z \chi_V\cdot\; \chi_{(Y-Z)}\;\mathrm{d}\lambda=\int_Y \chi_V\cdot\; \chi_Z\cdot\;\chi_{(Y-Z)}\;\mathrm{d}\lambda=0, \]
as \(\chi_Z\cdot\;\chi_{(Y-Z)}=0\).
\noindent (v)\(\implies\)(i): Let \(y\in Y\) and \(V\) a neighbourhood of \(y\) in \(Y\) such that \(\lambda(V\cap (Y-Z))=0\). Let \(g\in \mathbb{C}ontc(Y,V)\) be a nonnegative function. Then
\[\int_{Y-Z} g\,\mathrm{d}\lambda=\int_Y g\cdot\, \chi_V\cdot\;\chi_{Y-Z}\;\mathrm{d}\lambda\leq \norm{g}_\infty \int_Y \chi_V\cdot\;\chi_{Y-Z}\;\mathrm{d}\lambda =\norm{g}_\infty\lambda(V\cap (Y-Z))=0.
\]
Now the equality \(\int_Y g\,\mathrm{d}\lambda = \int_Z g\,\mathrm{d}\lambda + \int_{Y-Z} g\,\mathrm{d}\lambda\) gives us that \(\int_Y g\,\mathrm{d}\lambda = \int_Z g\,\mathrm{d}\lambda\).
Let \(f\) be a nonnegative function in \(\mathbb{C}ontc(Y)\). For every \(y\in {\textup{supp}}(f)\), let \(V_y\) be a neighbourhood of \(y\) such that \(\lambda(V_y\cap (Y-Z))=0\). Because the support of \(f\) is compact and \(\{V_y\}_{y\in {\textup{supp}}(f)}\) cover \({\textup{supp}}(f)\), we may choose finitely many neighbourhoods \(V_{y_1},\dots,V_{y_n}\) which cover \({\textup{supp}}(f)\). Let \(u_1,\dots,u_n\) be a partition of unity on \({\textup{supp}}(f)\) subordinate to this finite cover. Then for every \(i\in \{1,\dots,n\}\), \(f u_i\) is a nonnegative function in \(\mathbb{C}ontc(Y,V_{y_i})\). Since \(f=\sum_{i=1}^n fu_i\), the result proved in the previous paragraph gives
\[
\int_Y f\,\mathrm{d}\lambda= \int_Y\sum_{i=1}^n fu_i\,\mathrm{d}\lambda= \sum_{i=1}^n \int_Y fu_i\,\mathrm{d}\lambda =\sum_{i=1}^n \int_Z fu_i\,\mathrm{d}\lambda=\int_Z f\,\mathrm{d}\lambda.
\]
\mathrm{e}nd{proof}
Let \(Y\) be a locally compact, Hausdorff space and \(\lambda\) a measure on it. Let \(Z\) be a locally compact subspace of \(Y\). We say \(\lambda\) is \mathrm{e}mph{concentrated} on \(Z\) if any one of the equivalent conditions in Lemma~\ref{lem:conc-measu-equi} holds. Condition~{(v)} in Lemma~\ref{lem:conc-measu-equi} is~\cite{Bourbaki2004Integration-I-EN}*{Definition 4, Nr.\,7, {\S}5, V.\,55}.
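In a discrete setting, condition (v) of Lemma~\ref{lem:conc-measu-equi} reduces to \(\lambda(Y-Z)=0\), and the equivalence with the integral conditions is immediate to verify. The sketch below is our own toy model with hypothetical names; it checks condition (ii) against condition (v).

```python
# Discrete sketch of "lambda is concentrated on Z": with point masses,
# condition (v) amounts to lambda(Y - Z) = 0, and then int_Y f = int_Z f
# for every f, matching the integral conditions of the lemma.

def integral(f, lam, domain=None):
    return sum(f.get(x, 0) * m for x, m in lam.items()
               if domain is None or x in domain)

Y = {0, 1, 2, 3}
Z = {0, 2}
lam = {0: 3.0, 1: 0.0, 2: 1.5, 3: 0.0}      # no mass outside Z
assert sum(lam[y] for y in Y - Z) == 0       # condition (v), discretely

f = {0: 2, 1: 99, 2: 4, 3: -7}
assert integral(f, lam) == integral(f, lam, Z)   # condition (ii)
```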
\begin{lemma}
\label{lem:loc-concentrated-measure}
Let \(Y,Z\) and \(\lambda\) be as in Lemma~\ref{lem:conc-measu-equi} above. Let \(\{U_i\}\) be an open cover of \(Y\). Then \(\lambda\) is concentrated on \(Z\) if and only if the restricted measure \(\lambda|_{U_i}\) is concentrated on \(U_i\cap Z\) for each \(i\).
\mathrm{e}nd{lemma}
\begin{proof}
If \(\lambda\) is concentrated on \(Z\), then for every \(U_i\) and every \(f\in \mathbb{C}ontc(Y,U_i)\simeq \mathbb{C}ontc(U_i)\) we have
\begin{multline*}
\int_{U_i} f\,\mathrm{d}\lambda= \int_Y f\,\chi_{U_i}\;\mathrm{d}\lambda= \int_Y f\,\mathrm{d}\lambda\\
=\int_Z f\,\mathrm{d}\lambda= \int_Z f\,\chi_{U_i}\,\mathrm{d}\lambda=\int_Y f\,\chi_{Z\cap U_i}\;\mathrm{d}\lambda=\int_{Z\cap U_i} f\,\;\mathrm{d}\lambda,
\mathrm{e}nd{multline*}
which proves that \(\lambda|_{U_i}\) is concentrated on \(Z\cap U_i\).
Conversely, for given \(f\in \mathbb{C}ontc(Y)\), cover its support by finitely many elements of \(\{U_i\}\), say \(U_{i_1},\dots,U_{i_n}\) where \(n\in \mathbb{N}\). Let \(u_1,\dots,u_n\) be a partition of unity on \({\textup{supp}}(f)\) subordinate to the open cover \(\{U_{i_1},\dots,U_{i_n}\}\). Then an argument as in the proof of the part (v)\(\implies\)(i) in Lemma~\ref{lem:conc-measu-equi} shows that \(\int_Y f\,\mathrm{d}\lambda=\int_Z f\,\mathrm{d}\lambda\).
\mathrm{e}nd{proof}
\begin{notation}
\label{not:cov-ext}
Let \(X\), \(Y\) be spaces and \(\phi\colon X\to Y\) a continuous map which is also a local homeomorphism onto its image. Let \(\mathcal{A}\) be an open cover of \(X\). An extension of \(\mathcal{A}\) via \(\phi\) is a collection of open sets \(\mathcal{A}'\) of \(Y\) with the following properties
\begin{enumerate}[(i)]
\item if \(U'\in\mathcal{A}'\), then either there is
\(U\in\mathcal{A}\) such that \(\phi(U)=\phi(X)\cap U'\), or
\(\phi(X)\cap U'=\mathrm{e}mptyset\);
\item for each \(U\in\mathcal{A}\), \(\mathcal{E}S_\phi(U)\cap\mathcal{A}'\neq \mathrm{e}mptyset\), that is, there is at least one set \(U'\in\mathcal{A}'\) such that \(U'\cap\phi(X)=\phi(U)\).
\mathrm{e}nd{enumerate}
That is, each set in \(\mathcal{A}'\) which intersects \(\Img(\phi)\) is an extension of a set in \(\mathcal{A}\) via \(\phi\), and for each \(U\in\mathcal{A}\), \(\mathcal{A}'\) contains at least one extension of \(U\). Note that an extension via \(\phi\) always exists for any cover of \(X\).
The collection \(\mathcal{A}'\) is called \mathrm{e}mph{exhaustive} if it covers \(Y\). If \(\phi\) is an inclusion map of a subspace, we shall simply say an ``extension'' or ``exhaustive extension''.
\mathrm{e}nd{notation}
\begin{example}
\label{exa:ext-of-open-cov-closed-sb}
Let \(\phi\colon X\to Y\) be a local homeomorphism onto its image where \(X\) and \(Y\) are spaces. Assume that \(Z\defeq\Img(\phi)\) is closed in \(Y\). Then every open cover \(\{U_i\}_{i\in I}\) of \(X\) has an exhaustive extension in \(Y\), namely
\[
\{U_i'\subseteq Y: i\in I,\ U_i' \text{ is open in } Y, \text{ and } \phi(U_i)= U_i'\cap Z\}\cup \{Y-Z\}.
\]
\mathrm{e}nd{example}
As the next example shows, it is not true that any open cover of a subspace admits an exhaustive extension.
\begin{example}
\label{exa:refined-ext-counterex}
We show that there is an open cover of the open subspace \(\mathbb{R}-\{0\}\) of \(\mathbb{R}\) which does not admit an exhaustive extension to \(\mathbb{R}\). Consider the cover \(\{\mathbb{R}^-,\mathbb{R}^+\}\) of \(\mathbb{R}^*\defeq\mathbb{R}-\{0\}\). Suppose the open cover \(\{U_i\}_{i\in I}\) of \(\mathbb{R}\) were an exhaustive extension, and choose an index \(j\in I\) such that \(0\in U_j\). Then \(U_j\cap \mathbb{R}^-\neq\mathrm{e}mptyset\) and \(U_j\cap \mathbb{R}^+\neq\mathrm{e}mptyset\). Hence \(U_j\cap \mathbb{R}^* \nsubseteq \mathbb{R}^-\) as well as \(U_j\cap \mathbb{R}^* \nsubseteq \mathbb{R}^+\), so \(U_j\) extends neither set of the cover, a contradiction.
In fact, a similar argument shows that no open cover of \(\mathbb{R}-\{0\}\) which refines \(\{\mathbb{R}^-,\mathbb{R}^+\}\) admits an exhaustive extension to \(\mathbb{R}\).
\mathrm{e}nd{example}
\begin{lemma}
\label{lem:pulling-and-pasting-measures-3}
Let \(X\) and \(Y\) be locally compact Hausdorff spaces, and \(\lambda\) a measure on \(Y\). Let \(\phi\colon X\to Y\) be a local homeomorphism onto its image. Let \(\phi^*(\lambda)\) be as in Corollary~\ref{cor:pulling-and-pasting-measures-2}. Let \(\mathcal{A}\) be an open cover of \(X\) consisting of sets in \(\phi^{\mathrm{Ho}C}\), and let \(\mathcal{A}'\) be an extension of \(\mathcal{A}\) via \(\phi\). Assume that for every \(V\in\mathcal{A}\), \(f\in \mathbb{C}ontc(X,V)\), every \(V'\in \mathcal{A}'\) which is an extension of \(V\) via \(\phi\), and any extension \(f'\in\mathbb{C}ontc(Y,V')\) of \(f\) via \(\phi\), we have
\begin{equation}
\label{eq:pullback-measure-3}
\phi^*(\lambda)(f)=\lambda(f').
\mathrm{e}nd{equation}
Then the following statements are true:
\begin{enumerate}[i)]
\item For every \(V\in \mathcal{A}\) and its extension \(V'\in \mathcal{A}'\), the measure \(\lambda|_{V'}\) is concentrated on \(\phi(V)\).
\item If \(B\defeq \cup_{V'\in \mathcal{A}'}\; V'\), then \(B\subseteq Y\) is open and the measure \(\lambda|_{B}\) is concentrated on \(\phi(X)\).
\item If the open cover \(\mathcal{A}'\) is exhaustive, then \(\lambda\) is concentrated on \(\phi(X)\).
\mathrm{e}nd{enumerate}
\mathrm{e}nd{lemma}
\begin{proof}
(i): Let \(g\in \mathbb{C}ontc(V')\). Then Lemma~\ref{lem:loc-homeo-fn-ext-restr}(iii) says that \(g\) is in \(\mathcal{E}_\phi(\mathbb{R}es^\phi(g))\) and the hypothesis tells us that
\[
\int_X\mathbb{R}es^\phi(g)\, \mathrm{d}\phi^*(\lambda)=\int_Y g \,\mathrm{d}\lambda.
\]
Note that the last term equals \(\int_{V'} g\,\mathrm{d}\lambda\) since \(g\in \mathbb{C}ontc(Y,V')\). Using Equation~\mathrm{e}qref{eq:pullback-measure-2} we see that
\[
\int_X \mathbb{R}es^\phi(g)\;\mathrm{d}\phi^*(\lambda)= \int_{\phi(V)} g\,\mathrm{d}\lambda.
\]
The above two equations imply that \(\int_{V'} g\,\mathrm{d}\lambda = \int_{\phi(V)} g\,\mathrm{d}\lambda\) for any \(g\in \mathbb{C}ontc(Y,V')\simeq \mathbb{C}ontc(V')\). Hence using Lemma~\ref{lem:conc-measu-equi}(ii) we infer that \(\lambda|_{V'}\) is concentrated on \(\phi(V)\).
\noindent (ii): By definition, the space \(B\) is covered by the sets \(V'\in \mathcal{A}'\), where each \(V'\) is an extension of some \(V\in \mathcal{A}\). For every such \(V'\), \(V'\cap \phi(X)=\phi(V)\). Now (i) of this lemma and Lemma~\ref{lem:loc-concentrated-measure} together yield the desired result.
\noindent (iii): In this case, \(B=Y\) and the result follows from~{(ii)} above.
\mathrm{e}nd{proof}
\medskip
Let \(X\) be a space, and \(\lambda\) and \(\mu\) measures on \(X\). To say that \(\lambda\) is absolutely continuous with respect to \(\mu\), we write ``\(\lambda\ll\mu\)'' and to say that \(\lambda\) and \(\mu\) are equivalent we write ``\(\lambda\sim\mu\)''.
\begin{lemma}
\label{lem:RN-positive-makes-pullback-measure-equi}
Let \(X,Y,\phi\) and \(\lambda\) be as in Lemma~\ref{lem:pulling-and-pasting-measures-3}. Let \(\mu\) be another measure on \(Y\).
\begin{enumerate}[i)]
\item If \(\mu\ll\lambda\), then \(\phi^*(\mu)\ll\phi^*(\lambda)\). Moreover, \(\frac{\mathrm{d}\phi^*(\mu)}{\mathrm{d}\phi^*(\lambda)}=\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\circ \phi\).
\item If \(\mu\sim\lambda\), then \(\phi^*(\mu)\sim\phi^*(\lambda)\).
\mathrm{e}nd{enumerate}
\mathrm{e}nd{lemma}
\begin{proof}
(i): Due to Corollary~\ref{cor:pulling-and-pasting-measures-2}, for \(U\in \phi^{\mathrm{Ho}C}\) and \(f\in \mathbb{C}ontc(X;U)\), we have that
\begin{align}
\int_X f(x)\,\mathrm{d}\phi^*(\mu)(x)\notag
&= \int_{\phi(X)} f'(y)\,\mathrm{d}\mu(y)\notag\\
&= \int_{\phi(X)} f'(y)\,\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}(y)\;\mathrm{d}\lambda(y)\label{eq:pulback-qui-lemma-1}
\mathrm{e}nd{align}
for \(f'\in \mathcal{E}_\phi(f)\). Let
\[
F= f\cdot \left(\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\circ\phi\right)
\]
be a function on \(X\). Then \(F\in \mathrm{Bor}(X;U)\) is integrable. Now note that \(f'\,\frac{\mathrm{d}\mu}{\mathrm{d}\lambda} \in \mathcal{E}B_\phi\left(F\right)\) and compute
\begin{align*}
\int_X \left(f\, \frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\circ \phi\right)\mathrm{d}\phi^*(\lambda)(x)
&= \int_X F\,\mathrm{d}\phi^*(\lambda)(x)\\
&=\int_{\phi(X)} f'(y)\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}(y)\,\mathrm{d}\lambda(y)\\
&=\int_Xf(x)\,\mathrm{d}\phi^*(\mu)(x) && \text{(due to Equation~\mathrm{e}qref{eq:pulback-qui-lemma-1}).}
\mathrm{e}nd{align*}
The second equality in the above computation uses the measurable version of Equation~\mathrm{e}qref{eq:pullback-measure-2}. This shows that \(\phi^*(\mu)\ll\phi^*(\lambda)\) and \(\frac{\mathrm{d}\phi^*(\mu)}{\mathrm{d}\phi^*(\lambda)}=\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\circ \phi\).
\noindent (ii): The proof of (i) above shows that \(\frac{\mathrm{d}\phi^*(\mu)}{\mathrm{d}\phi^*(\lambda)}=\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\circ \phi\). If \(\mu\sim\lambda\), then \(\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}>0\). Hence \(\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\circ \phi>0\), which is equivalent to saying that \(\phi^*(\lambda)\sim\phi^*(\mu)\).
\mathrm{e}nd{proof}
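The identity \(\frac{\mathrm{d}\phi^*(\mu)}{\mathrm{d}\phi^*(\lambda)}=\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\circ \phi\) can be verified directly for point masses, where the Radon--Nikodym derivative at a point is the ratio of the masses there. The sketch below is our own toy model with hypothetical names.

```python
# Discrete check of the Radon-Nikodym compatibility: with point masses,
# (d mu / d lambda)(y) = mu({y}) / lambda({y}), and the pullback measures
# satisfy  d phi^*(mu) / d phi^*(lambda) = (d mu / d lambda) o phi  pointwise.

def pullback(lam, phi, X):
    return {x: lam.get(phi[x], 0) for x in X}

X = {0, 1}
phi = {0: 'a', 1: 'b'}
lam = {'a': 2.0, 'b': 4.0, 'c': 1.0}
mu  = {'a': 1.0, 'b': 8.0, 'c': 3.0}        # mu << lambda: both fully supported

rn = {y: mu[y] / lam[y] for y in lam}        # d mu / d lambda
pl, pm = pullback(lam, phi, X), pullback(mu, phi, X)
for x in X:
    assert pm[x] / pl[x] == rn[phi[x]]       # (d mu / d lambda) o phi
```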
\subsection{Free, proper and transitive actions of groupoids}
\label{sec:act-gpd}
Let \(G\) be a topological groupoid and \(X\) a left \(G\)\nb-space. Let \(Y\) be a space. We call a map \(f\colon X\to Y\) \(G\)\nb-invariant if \(f(\gamma x)=f(x)\) for all \((\gamma,x)\in G\times_{\base}X\).
A map \(f\colon X\to Y\) of locally compact spaces is called proper if the inverse image of a (quasi)-compact set in \(Y\) under \(f\) is quasi-compact. A proper map has various characterisations, for example, see~\cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}*{Proposition 1.6}.
\begin{definition}
\label{def:mult-fn}
For a groupoid \(G\), a left \(G\)\nb-space \(X\), and a \(G\)\nb-invariant map \(f\) from \(X\) to a space \(Y\), define the function \( \mathrm{m}_f\colon G\times_{\base}X\to X\times_{f,Y,f}X\) by
\[
\mathrm{m}_f(\gamma,x)=(\gamma x,x).
\]
\mathrm{e}nd{definition}
\noindent A similar definition can be written for a right \(G\)\nb-space.
Let \(G\) be a groupoid acting on a space \(X\). Then the map \((\gamma, x)\mapsto (\gamma x,x)\) from \(G\times_{\base}X\) to \(X\times X\) can be realised as the one in Definition~\ref{def:mult-fn} by taking the space \(Y\) to be the singleton \(\{*\}\) and \(f\) the constant map; in this case, we denote the map by \(\mathrm{m}_0\).
We call the map in Definition~\ref{def:mult-fn} ``the map \(\mathrm{m}_f\)''.
Let \(G\) be a topological groupoid, \(X\) a left \(G\)\nb-space and \(\mathrm{m}_0\) as in the above paragraphs. We call the action of \(G\) on \(X\)
\begin{enumerate*}[(i)]
\item \emph{free} if the map \(\mathrm{m}_0\) is one-to-one;
\item \emph{proper} if \(\mathrm{m}_0\) is proper;
\item \emph{basic} if \(\mathrm{m}_0\) is a homeomorphism onto its image (see~\cite{Meyer-Zhu2014Groupoids-in-categories-with-pretopology}).
\end{enumerate*}
\noindent It can be checked that the action of \(G\) on \(X\) is free if and only if for any two composable pairs \((\gamma,x)\) and \((\gamma',x)\) in \(G\times_{\base}X\), \(\gamma x=\gamma' x\) implies \(\gamma=\gamma'\).
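To illustrate these notions (this example is ours and standard), view the group \(\mathbb{Z}\) as a groupoid with a single unit and let it act on \(\mathbb{R}\) by translation, \(n\cdot t=n+t\). Then
\[
\mathrm{m}_0\colon \mathbb{Z}\times\mathbb{R}\to \mathbb{R}\times\mathbb{R},\qquad \mathrm{m}_0(n,t)=(n+t,t),
\]
is one-to-one, so the action is free. The action is also proper: the preimage of a compact set \(K\times L\subseteq\mathbb{R}\times\mathbb{R}\) under \(\mathrm{m}_0\) is a closed subset of the compact set \((\mathbb{Z}\cap(K-L))\times L\), where \(K-L=\{k-l: k\in K,\, l\in L\}\).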
\begin{remark}
\label{rem:prop-act-im-closed}
Since we are dealing with proper actions, recall that a proper map is closed,
see~\cite{Bourbaki1966Topologoy-Part-1}*{Chapter I, \S 10.1, Proposition 1}. Thus for a groupoid \(G\) and a \(G\)\nb-space \(X\)
\begin{enumerate*}[(i)]
\item the image of \(G\times_{\base}X\)
under \(\mathrm{m}_0\)
is a closed subset of \(X\times X\)
if the action of \(G\) on \(X\) is proper;
\item the action of \(G\)
on \(X\)
is free as well as proper if and only if the image of \(\mathrm{m}_0\)
is a closed subset of \(X\times X\)
and \(\mathrm{m}_0\) is a homeomorphism onto its image; in particular, such an action is basic.
\end{enumerate*}
The inclusion of the open interval \((0,1)\hookrightarrow [0,1]\)
is a homeomorphism onto its image but is not proper: the preimage of the compact set \([0,1]\) is \((0,1)\), which is not compact.
\end{remark}
Let \(G\) be a topological groupoid and \(X\) a left \(G\)\nb-space. Let \(Y\) be a space, and let \(f\colon X\to Y\) be a \(G\)\nb-invariant map. Let \([f]\colon G\backslash X\to Y\) be the continuous map induced by \(f\); thus \([f]([x])=f(x)\) for all \(x\in X\). We always endow \(G\backslash X\) with the quotient topology.
\begin{lemma}
\label{lem:m-and-quotient-of-sx}
Let \(G\) be a topological groupoid and \(X\) a left \(G\)\nb-space. Let \(Y\) be a space and \(f\colon X\to Y\) a \(G\)\nb-invariant map. Then the following statements hold:
\begin{enumerate}[i)]
\item The action of \(G\) on \(X\) is free if and only if the map \(\mathrm{m}_f\colon G\times_{\base}X\to X\times_{f,Y,f}X\) is one-to-one.
\item The map \([f]\colon G\backslash X\to Y\) is surjective if and only if \(f\colon X\to Y\) is surjective. If \(f\) is open, then so is \([f]\).
\item \([f]\colon G\backslash X\to Y\) is one-to-one if and only if the map \(\mathrm{m}_f\) is surjective.
\end{enumerate}
\end{lemma}
\begin{proof}
\noindent (i): Note that the images of \(\mathrm{m}_f\) and \(\mathrm{m}_0\) are the same subsets of \(X\times X\), since \(f\) is invariant. Let \(\iota\colon X\times_{f,Y,f}X\hookrightarrow X\times X\) be the inclusion map. Then \(\mathrm{m}_0=\iota\circ \mathrm{m}_f\). Since \(\iota\) is one-to-one, \(\mathrm{m}_f\) is one-to-one if and only if \(\mathrm{m}_0\) is one-to-one.
\noindent (ii): The first claim follows from the fact that \(f\) and \([f]\) have the same images. The second claim follows from the universal property of the quotient map of topological spaces.
\noindent (iii): Firstly, assume that \([f]\) is one-to-one. Let \((x,z)\in X\times_{f,Y,f}X\) be given; then \(f(x)=f(z)\). Now the fact that \([f]([x])=f(x)=f(z)=[f]([z])\), along with the injectivity of \([f]\), implies that \([x]=[z]\), that is, \(x=\gamma z\) for some \(\gamma\in G\). Thus \((x,z)=(\gamma z, z)=\mathrm{m}_f(\gamma,z)\).
Conversely, assume that \(\mathrm{m}_f\) is surjective. Let \([x],[z]\in G\backslash X\) be such that \([f]([x])=[f]([z])\). Then \(f(x)=f(z)\) and we see that \((x,z)\in X\times_{f,Y,f}X\). Now the surjectivity of \(\mathrm{m}_f\) gives \(\gamma\in G\) such that \((\gamma z,z)=\mathrm{m}_f(\gamma,z)=(x,z)\), thus \([x]=[z]\).
\end{proof}
\begin{definition}
\label{def:trans-act}
Let \(G\) be a groupoid, \(X\) a \(G\)\nb-space, and \(Y\) a space. For a \(G\)\nb-invariant map \(f\colon X\to Y\) we say that the action of \(G\) on \(X\) is \emph{transitive over \(f\)} if one of the following equivalent conditions holds:
\begin{enumerate}[i), leftmargin=*]
\item the map \(\mathrm{m}_f\) is surjective, that is, for all \((x,x')\in X\times_{f,Y,f}X\), there is \(\gamma\in G\) with \(\gamma x'=x\);
\item the map \([f]\colon G\backslash X\to Y\) induced by \(f\) is one-to-one.
\end{enumerate}
\end{definition}
\noindent Lemma~\ref{lem:m-and-quotient-of-sx}~{(iii)} shows that the two conditions in Definition~\ref{def:trans-act} are equivalent. We say that an action of a groupoid \(G\) on a space \(X\) is transitive if it is transitive over the constant map \(X\to{\{*\}}\). A groupoid is transitive if and only if the obvious action of the groupoid on its space of units is transitive.
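For instance (an illustration of our own), let a groupoid \(G\) act on \(X=G\) by left multiplication, with \(r_G\) as the momentum map, and take \(f=s_G\colon G\to \base\), which is \(G\)\nb-invariant since \(s_G(\gamma\eta)=s_G(\eta)\). The map
\[
\mathrm{m}_{s_G}\colon G\times_{\base}G\to G\times_{s_G,\base,s_G}G,\qquad (\gamma,\eta)\mapsto (\gamma\eta,\eta),
\]
is a bijection with inverse \((\alpha,\eta)\mapsto(\alpha\eta^{-1},\eta)\). Hence this action is free and transitive over \(s_G\), and \([s_G]\colon G\backslash G\to \base\) is a bijection.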
Let \(G\) and \(H\) be groupoids, and \(X\) a \(G\)-\(H\)\nb-bispace. The conditions that \(\mathrm{m}_{s_X}\colon G\times_{\base}X\to X\times_{s_X,\base[H],s_X}X\) is a homeomorphism, or that the map \([s_X]\colon G\backslash X\to \base[H]\) is one-to-one (or a homeomorphism), appear in many works concerning equivalences, generalised morphisms, and actions of groupoids. We revisit these conditions in the following remark.
\begin{remark}
\label{rem:relation-between-sX-quo-sX-Mlp}
Let \(X\) be a \(G\)\nb-\(H\)-bispace for groupoids \(G\) and \(H\). Let \(s_X\colon X\to \base[H]\) be an open surjection. Recall the following one-sided classical conditions for a groupoid equivalence from~\cite{Muhly-Renault-Williams1987Gpd-equivalence}, namely,
\begin{enumerate}[i)]
\item The action of \(G\) on \(X\) is free.
\item The action of \(G\) on \(X\) is proper.
\item The map \(s_X\) induces an isomorphism \([s_X]\colon G\backslash X\to \base[H]\).
\end{enumerate}
\noindent 1) Since \(s_X\) is an open surjection, the induced map \([s_X]\) is open and surjective due to Lemma~\ref{lem:m-and-quotient-of-sx}{(ii)}. Hence Condition~{(iii)} above is equivalent to, and can be replaced by, the condition that the action of \(G\) on \(X\) is transitive over \(s_X\).
\noindent 2) As the action of \(G\) on \(X\) is free and proper, Remark~\ref{rem:prop-act-im-closed} implies that \(\mathrm{m}_{s_X}\) is a homeomorphism onto its image. But the action of \(G\) on \(X\) is transitive over \(s_X\), hence the image of \(\mathrm{m}_{s_X}\) is all of \(X\times_{s_X,\base[H],s_X}X\), that is, \(\mathrm{m}_{s_X}\) is a homeomorphism. This is one of the classical conditions given as an alternative to the third one above and, as we see, in the present situation this condition is also equivalent to the action of \(G\) on \(X\) being transitive over \(s_X\).
\end{remark}
\subsection{Locally free actions of groupoids}
\label{sec:locally-free-actions}
Let \(X\) be a left \(G\)\nb-space where \(G\) is a groupoid. For \(x\in X\), the isotropy group of \(x\), which we denote by \(\text{Fix}(x)_G\), is the group \(\{\gamma\in G: \gamma x=x\}\). The element \(r_X(x)\) is the unit in \(\text{Fix}(x)_G\).
Let \(G\) and \(X\) be as above, and let \(\gamma\in G\). We say that the action of \(G\) is \emph{locally free at \(\gamma\)} if there is a neighbourhood \(U\subseteq G\) of \(\gamma\) with the property that for all \(\eta\in U\) and \(x\in X\), \(\gamma x=\eta x\) implies \(\gamma=\eta\). In this case, we say that `\(\gamma\) acts freely on \(X\) in the neighbourhood \(U\)'. Caution: this nomenclature does not mean that for \emph{any} two elements \(\eta,\delta\in U\) and any \(x\in X\), \(\eta x= \delta x\) implies \(\eta =\delta\); the implication is guaranteed only when one of \(\eta\) or \(\delta\) is \(\gamma\).
\begin{lemma}
\label{lem:loc-free-act}
Let \(G\) be a groupoid and \(X\) a \(G\)\nb-space. Assume that the momentum map \(r_X\) is surjective. Then the following statements are equivalent.
\begin{enumerate}[i)]
\item The action of \(G\) on \(X\) is locally free at every \(u\in \base\).
\item The action of \(G\) on \(X\) is locally free at every \(\gamma\in G\).
\item The isotropy group of every \(x\in X\) is discrete.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove that \((i)\mathrel{\iff}(ii)\) and \((i)\iff (iii)\).
\noindent \((i)\iff (ii)\): The implication \((ii)\mathrel{\implies}(i)\) is clear. To prove the converse, let \(\gamma\in G\) be given. Let \(Z\subseteq G\) be an open neighbourhood of \(s_G(\gamma)\) in which \(s_G(\gamma)\) acts freely on \(X\). Let \(\textup{mult}_G\colon G\times_{s_G,\base,r_G}G\to G\) denote the multiplication map, \(\textup{mult}_G(\eta,\eta')= \eta\eta'\). Then \( \textup{mult}_G(\gamma^{-1}, \gamma)=s_G(\gamma)\). Using the continuity of \(\textup{mult}_G\), choose open neighbourhoods \(U_1\subseteq G\) of \(\gamma^{-1}\), and \(U_2\subseteq G\) of \(\gamma\) with \(\textup{mult}_G(U_1\times_{\base}U_2)\subseteq Z\).
Let \(\textup{inv}_G\colon G\to G\) denote the inverting function \(\eta\mapsto\eta^{-1}\). Then \(U=\textup{inv}_G(U_1)\cap U_2\) is an open neighbourhood containing \(\gamma\): indeed, \(\gamma=(\gamma^{-1})^{-1}\in \textup{inv}_G(U_1)\) and \(\gamma \in U_2\), hence \(\gamma\in U\); and \(U\) is open because \(\textup{inv}_G\) is a homeomorphism.
We claim that \(\gamma\) acts freely on \(X\) in \(U\). Let \(\eta\in U\) and \(x\in X\) be such that \(\gamma x=\eta x\), that is, \(\eta^{-1}\gamma x=\textup{mult}_G(\eta^{-1},\gamma)x=x\). Since \(\eta\in U = \textup{inv}_G(U_1)\cap U_2\), we have that \(\eta^{-1} \in U_1\). Then \(\textup{mult}_G(\eta^{-1}, \gamma)\in Z\). But \(Z\) acts freely on \(X\). Hence \(\eta^{-1}\gamma x= x\) implies \(\eta^{-1}\gamma = r_X(x)\), that is, \(\gamma=\eta\).
\noindent \((i)\iff (iii)\):
Assume that the action of \(G\) is locally free at every unit \(u\in \base\). Then, for each \(x\in X\), we produce a neighbourhood of the unit \(r_X(x)\) in \(\text{Fix}(x)_G\) which contains only the unit. Let \(x\in X\) be given. Let \(U\subseteq G\) be a neighbourhood of the unit \(r_X(x)\in \text{Fix}(x)_G\) in which \(r_X(x)\) acts freely on \(X\). Then \(U\cap\text{Fix}(x)_G=\{r_X(x)\}\) because for every \(\eta\in U\), \(\eta x = x\) is equivalent to \(\eta x = r_X(x)x = x\), which implies that \(\eta=r_X(x)\).
Conversely, assume that \(\text{Fix}(x)_G\) is discrete for all \(x\) in \(X\). Let \(u\in \base\) be given. Then \(u=r_X(x)\) for some \(x\in X\). Let \(U\subseteq G\) be an open neighbourhood with \(U\cap \text{Fix}(x)_G=\{r_X(x)\}\). Then \(u\) acts freely on \(X\) in the neighbourhood \(U\): if \(\gamma\in U\) satisfies \(\gamma x = x\), then \(\gamma\in\text{Fix}(x)_G\), thus \(\gamma\in (U\cap\, \text{Fix}(x)_G)=\{r_X(x)\}\), hence \(\gamma = r_X(x)\).
\end{proof}
Assume that \(G\) and \(X\) are as in Lemma~\ref{lem:loc-free-act}. Then the proof of (i)\(\iff\)(iii) above shows that for \(x\in X\) the isotropy group \(\text{Fix}(x)_G\) is discrete if and only if the action of \(G\) is locally free at \(r_X(x)\in\base\); equivalently, for \(x\in X\) the group \(\text{Fix}(x)_G\) is discrete if and only if the action of \(G\) is locally free at \(\gamma\) for some \(\gamma\in G_{r_X(x)}\).
\begin{definition}
\label{def:loc-free-act}
\begin{enumerate}[i),leftmargin=*]
\item An action of a groupoid \(G\) on a space \(X\) is called \emph{locally free} if any of (i)--(iii) in Lemma~\ref{lem:loc-free-act} holds.
\item An action of a groupoid \(G\) on a space \(X\) is called \emph{locally locally free} if every \(\gamma\in G\) has a neighbourhood \(U\subseteq G\) and every \(y\in X^{s_G(\gamma)}\) has a neighbourhood \(V\subseteq X\) such that for all \(\eta\in U\) and \(x\in V\), \(\gamma x=\eta x\) implies \(\gamma=\eta\).
\end{enumerate}
\end{definition}
Clearly, a locally free action is locally locally free. For a group, the converse holds. To see it, one may use an argument as in the proof of Lemma~\ref{lem:loc-free-act}{(i)\(\implies\)(iii)}, and observe that all isotropy groups are discrete.
\begin{example}
\label{exa:R-S-1}
The action of \(\mathbb{R}\) on the unit circle \(\mathbb{S}^1\), \(r\cdot\mathrm{e}^{2\pi\mathrm{i}\theta}=\mathrm{e}^{2\pi\mathrm{i}(\theta+r)}\) is not free; here \(r\in \mathbb{R}\) and \(\mathrm{e}^{2\pi\mathrm{i}\theta}\in \mathbb{S}^1\). But this action is locally free since \(\text{Fix}(\mathrm{e}^{2\pi\mathrm{i}\theta})_{\mathbb{R}}\simeq \mathbb{Z}\) is discrete for all \(\mathrm{e}^{2\pi\mathrm{i}\theta} \in \mathbb{S}^1\).
\end{example}
Let \(G\) be a groupoid acting on a space \(X\). For \(\gamma\in G\), we say that the action of \(G\) is \emph{strongly locally free at \(\gamma\)} if there is a neighbourhood \(U\subseteq G\) of \(\gamma\) with the property that for all \(\eta,\delta\in U\) and \(x\in X\), \(\eta x= \delta x\) implies that \(\eta = \delta\). We say that `the neighbourhood \(U\) (of \(\gamma\))' acts freely on \(X\).
\begin{lemma}
\label{lem:str-loc-free-act}
Let \(G\) be a groupoid and \(X\) a \(G\)\nb-space. Assume that the momentum map \(r_X\) is surjective. Let \(Y\) be a space and \(f\colon X\to Y\) a \(G\)\nb-invariant map. Then the following statements are equivalent.
\begin{enumerate}[i)]
\item The action of \(G\) on \(X\) is strongly locally free at every \(u\in \base\).
\item Every unit \(u\in \base\) has a neighbourhood \(U\) in \(G\) such that the restriction of the map \(\mathrm{m}_f\) to \(U\times_{\base}X\) is one-to-one.
\item The action of \(G\) on \(X\) is strongly locally free at every \(\gamma\in G\).
\item Every element \(\gamma\in G\) has a neighbourhood \(U\) in \(G\) such that the restriction of the map \(\mathrm{m}_f\) to \(U\times_{\base}X\) is one-to-one.
\end{enumerate}
\end{lemma}
\begin{proof}
It can be readily seen that (i) and (ii) are different phrasings of the same fact, and so are (iii) and (iv).
The implication (iii)\(\implies\)(i) is obvious, and the proof of its converse can be written along exactly the same lines as the proof of (i)\(\implies\)(ii) of Lemma~\ref{lem:loc-free-act}. We sketch the proof roughly: let \(\gamma \in G\). Let \(Z\subseteq G\) be an open neighbourhood of \(s_G(\gamma)\) that acts freely on \(X\). Let \(U_1\) and \(U_2\) be as in the proof of Lemma~\ref{lem:loc-free-act}. Then, as shown in the proof of Lemma~\ref{lem:loc-free-act}, \(U=\textup{inv}_G(U_1)\cap U_2\) is an open neighbourhood of \(\gamma\) in \(G\). We claim that \(U\) acts freely on \(X\). Let \(\eta,\delta\in U\); then \(\eta x=\delta x\iff \delta^{-1} \eta x = x\). As in the same proof above, we observe that \(\delta^{-1} \eta \in Z\). And since \(Z\) acts freely on \(X\), we must have \(\delta^{-1}\eta=s_G(\eta)\), that is, \(\eta=\delta\).
\end{proof}
\begin{definition}
\label{def:str-loc-free-act}
\begin{enumerate}[i),leftmargin=*]
\item An action of a groupoid \(G\) on a space \(X\) is called \emph{strongly locally free} if any one of (i)--(iv) in Lemma~\ref{lem:str-loc-free-act} above holds.
\item An action of a groupoid \(G\) on a space \(X\) is called \emph{locally strongly locally free} if every \(\gamma\in G\) has a neighbourhood \(U\subseteq G\) and every \(y\in X^{s_G(\gamma)}\) has a neighbourhood \(V\subseteq X\) such that for all \(\delta,\eta\in U\) and \(x\in V\), \(\eta x=\delta x\) implies \(\eta=\delta\).
\end{enumerate}
\end{definition}
Clearly, a strongly locally free action is locally free as well as locally strongly locally free. As discussed on page~\pageref{page:gp-LF-is-SLF}, for groups, all these four notions of free action coincide.
\begin{example}
\label{exa:local-free-and-not-str-loc-free-action}
Let
\begin{align*}
G&=\{(0,0)\}\cup\{(1/n,i/n^2)\in \mathbb{R}^2: n\in\mathbb{N} \text{ and } 0\leq i\leq n\},\\
\base&\defeq \{(0,0)\}\cup \{(1/n,0)\in \mathbb{R}^2: n\in\mathbb{N}\}
\end{align*}
and equip both sets with the subspace topology coming from \(\mathbb{R}^2\); then \(\base\) is a subspace of \(G\). We think of \(G\) as a group bundle over the space \(\base\): for \(n\in\mathbb{N}\), the fibre over \((1/n,0)\) is
\[
\{(1/n,i/n^2): 0\leq i\leq n\}=\{(1/n,0),(1/n,1/n^2),\dots,(1/n,n/n^2)\}
\]
which we identify with the finite cyclic group \(\mathbb{Z}/(n+1)\mathbb{Z}\) of order \(n+1\) by the map \(i/n^2\mapsto [i]\in \mathbb{Z}/(n+1)\mathbb{Z}\). The fibre over the origin \((0,0)\) is the trivial group. One may see that this group bundle is a continuous group bundle. We make this bundle into a groupoid in the standard way: \(\base\) is the space of units, and the projection onto the first factor, \(G\to \base\), \((x,y)\mapsto (x,0)\), is both the source and the range map of \(G\). The composite of \((1/n,i/n^2),(1/n,j/n^2)\in G\) is \((1/n,k/n^2)\), where \(0\leq k\leq n\) and \(k\equiv i+j \pmod{n+1}\); the inverse of \((1/n,i/n^2)\) is \((1/n,(n+1-i)/n^2)\) for \(1\leq i\leq n\), while \((1/n,0)\) is its own inverse. For the fibre over zero, the operations are defined in the obvious way.
Let \(X=\base\) and let the momentum map be the identity map \(\base\to\base\). Then we claim that the trivial left action of \(G\) on \(X\) is locally free but not strongly locally free. The action is locally free because for every \((x,0)\in\base\) the isotropy group \(\text{Fix}_G((x,0))\) is discrete; to be precise, for \(n\in\mathbb{N}\), \(\text{Fix}_G((1/n,0))\simeq\mathbb{Z}/(n+1)\mathbb{Z}\), and \(\text{Fix}_G((0,0))\) is the trivial group. Now Lemma~\ref{lem:loc-free-act} tells us that the action of \(G\) is locally free. We claim that no neighbourhood of \((0,0)\) in \(G\) acts freely on \(X\).
Let \(U\) be an open neighbourhood of \((0,0)\) and let \(U'\subseteq \mathbb{R}^2\) be an open set such that \(U=U'\cap G\). Let \(\epsilon>0\) be sufficiently small so that the open square \(W'\defeq (-\epsilon,\epsilon)\times(-\epsilon,\epsilon)\) centred at the origin is contained in \(U'\). Put \(W=W'\cap G\); then \(W\subseteq U\) is an open set containing \((0,0)\). Since the sequences \(\{(1/n,1/n)\}_{n\in\mathbb{N}}\) and \(\{(1/n,0)\}_{n\in\mathbb{N}}\) converge to \((0,0)\) in \(G\) as well as in \(\mathbb{R}^2\), for a sufficiently large \(n\) we have \((1/n,0),(1/n, 1/n)\in W\subseteq U\). Thus for \((1/n,0),(1/n, 1/n)\in G\) and for \(x\defeq(1/n,0)\in X\), we have
\[
x=(1/n,0)\cdot x=(1/n,1/n)\cdot x;
\]
here \(\cdot\) is the trivial action of \(G\) on \(X\). Thus \(U\) does not act freely on \(X\), and hence the action is not strongly locally free at \((0,0)\).
Furthermore, the contrapositive of Proposition~\ref{prop:etale-gpd-loc-free-act} says that \(G\) is not {\'e}tale. Thus \(G\) is a groupoid in which the fibre over every unit is discrete but the groupoid is not {\'e}tale.
\end{example}
In this article, locally strongly locally free actions are what interest us the most.
\begin{lemma}
\label{lem:loc-str-loc-free-act}
Let \(G\) be a groupoid and \(X\) a \(G\)\nb-space. Assume that the momentum map \(r_X\) is surjective. Let \(f\colon X\to Y\) be a \(G\)\nb-invariant map to a space \(Y\). Then the following statements are equivalent.
\begin{enumerate}[i), leftmargin=*]
\item The action of \(G\) on \(X\) is locally strongly locally free at every \(\gamma\in G\).
\item The map \(\mathrm{m}_f\) is locally one-to-one.
\end{enumerate}
Additionally, if \(\mathrm{m}_f\) is an open map onto its image, then the following statements are equivalent.
\begin{enumerate}[resume, leftmargin=*, label=\roman*)]
\item The action of \(G\) on \(X\) is locally strongly locally free at every \(\gamma\in G\).
\item The map \(\mathrm{m}_f\) is a local homeomorphism onto its image.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume that the action of \(G\) is locally strongly locally free at every element \(\gamma\). Given a point \((\gamma,x)\in G\times_{\base}X\), we need to find a neighbourhood of it such that the restriction of \(\mathrm{m}_f\) to this neighbourhood is one-to-one. Let \(U\) be a neighbourhood of \(\gamma\) and \(V\subseteq X\) a neighbourhood of \(x\) with the property that for all \(\delta,\eta\in U\) and \(y\in V\), \(\eta y=\delta y\implies\eta=\delta\). Then \(U\times_{\base}V\) is a neighbourhood of \((\gamma,x)\) on which the restriction \(\mathrm{m}_f|_{U\times_{\base}V}\) is one-to-one.
Conversely, let \(\gamma\in G\) be given. Choose \(x\in X\) composable with \(\gamma\); this can be done since \(r_X\) is assumed to be surjective. Let \(W\) be an open neighbourhood of \((\gamma,x)\) in \(G\times_{\base}X\) such that \(\mathrm{m}_f|_{W}\) is one-to-one. Let \(U\subseteq G\) and \(V\subseteq X\) be open neighbourhoods of \(\gamma\) and \(x\), respectively, such that \(U\times_{\base}V\) is a basic open neighbourhood of \((\gamma,x)\) with \(U\times_{\base}V\subseteq W\). Then \(U\) and \(V\) are the required neighbourhoods which prove the claim.
The rest of the proof is an easy exercise.
\end{proof}
One can see that a free action is locally free, locally locally free, strongly locally free and locally strongly locally free.
It is well known that an action of a group on a space is called locally free if the isotropy group of each point of the space is discrete; for example, see~\cite{Candel-Conlon2000Foliations-I}*{Definition 11.3.7}, which discusses the special case of a locally free action of a group on a foliation. It can be shown that for groups a locally free action as in Definition~\ref{def:loc-free-act} and a strongly locally free one as in Definition~\ref{def:str-loc-free-act} are equivalent:\label{page:gp-LF-is-SLF} assume that \(G\) is a group which acts on a space \(X\) and the action is locally free. Thus for each point in \(X\) the corresponding isotropy group is discrete. Let \(\gamma\in G\) be given. Choose \(x\in X\) and let \(U'\subseteq G\) be a neighbourhood of \(e\), the unit of \(G\), that intersects \(\text{Fix}(x)_G\) only at \(e\). Let \(U\subseteq U'\) be a neighbourhood of \(e\) such that \(U^{-1} U\subseteq U'\). Then \(\gamma U\) is a neighbourhood of \(\gamma\) which acts freely on \(X\). This is because, if \(\eta,\delta\in \gamma U\), then \(\eta'\defeq \gamma^{-1} \eta\), \(\delta'\defeq\gamma^{-1} \delta\) and \({\delta'}^{-1}\eta'\) are in \(U'\). And then \(\eta' x = \delta' x \iff {\delta'}^{-1} \eta' x=x\), which implies \(\eta'=\delta'\) and hence \(\eta=\delta\). The neighbourhood \(U\) of \(e\) inside \(U'\) above can be constructed, for example, using~\cite{Folland1995Harmonic-analysis-book}*{Proposition 2.1(b)}.
The proof above also shows that, for groups, a locally locally free action is locally strongly locally free. Let \(G\) and \(X\) be as above, and assume that the action of \(G\) on \(X\) is locally locally free. Let \(U'\times V\subseteq G\times X\) be a basic neighbourhood of \((e,x)\in G\times X\) such that \(\mathrm{m}_0\) is one-to-one when restricted to \(U'\times V\). Now the above proof goes through with \(X\) replaced by \(V\).
Thus, for groups, the notion of a locally free action is equivalent to that of a locally locally free action (see page~\pageref{def:loc-free-act}) and to that of a strongly locally free action, and a locally locally free action is the same as a locally strongly locally free action. This allows us to conclude that, for groups, a strongly locally free action is the same as a locally strongly locally free action. This shows that the four notions of locally free action coincide for groups.
The proofs above, however, do not work for groupoids; the reason is that a groupoid may have more than one unit. As mentioned earlier, locally strongly locally free actions interest us the most. Since, in the case of groups, every locally free action is locally strongly locally free, we pocket a large class of examples of locally strongly locally free actions.
\begin{figure}[htb]
\centering
\[
\begin{tikzcd}[scale=2]
\text{SLF}\arrow[Rightarrow]{r}\arrow[Rightarrow]{d}
& \text{LF} \arrow[Rightarrow]{d}& &\text{SLF}\arrow[Leftrightarrow]{r}\arrow[Leftrightarrow]{d}& \text{LF} \arrow[Leftrightarrow]{d}\\
\text{LSLF} \arrow[Rightarrow]{r} & \text{LLF}& &\text{LSLF} \arrow[Leftrightarrow]{r} & \text{LLF}
\end{tikzcd}
\]
\caption{The different types of free actions and their interrelations; the first square shows the case of groupoids, and the second one shows the case of groups; here S stands for \emph{strongly}, L for \emph{locally} and F for \emph{free}.}
\label{fig:LF-actions-rel}
\end{figure}
Speaking of the square for groupoids in Figure~\ref{fig:LF-actions-rel}, Example~\ref{exa:local-free-and-not-str-loc-free-action} shows that a locally free action need not be strongly locally free. But as of now, we do not know whether a locally locally free action must be locally free, or whether a locally strongly locally free action must be strongly locally free.
\medskip
Now we turn our attention to a special class of groupoids, namely, {\'e}tale groupoids. Here is an example first:
\begin{example}
\label{exa:disc-grp}
Let \(\mathbb{Z}/2\mathbb{Z}=\{-1,1\}\) act on \([-1,1]\) by \(\pm 1\cdot t=\pm t\) for all \(t\in [-1,1]\). This action is not free since \(\pm 1\cdot 0 =0\). But \(\text{Fix}(t)_{\mathbb{Z}/2\mathbb{Z}}\) is discrete for each \(t\in [-1,1]\). Hence this action is locally free.
In general, if \(G\) is a discrete group, then any \(G\)\nb-action is locally free: if \(X\) is a \(G\)\nb-space, then for each \(x\in X\) the isotropy group \(\text{Fix}(x)_G\subseteq G\) is discrete. This example generalises very well to {\'e}tale groupoids. A groupoid is called {\'e}tale if its range map is a local homeomorphism.
\end{example}
Recall from~\cite{Renault1980Gpd-Cst-Alg}*{Definition 1.2.6} that a locally compact groupoid is called \(r\)\nb-discrete if its space of units is an open subset of the groupoid. A groupoid is {\'e}tale if and only if it is \(r\)\nb-discrete and has a Haar system (\cite{Renault1980Gpd-Cst-Alg}*{Proposition 1.2.8}).
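For instance (our example), if a discrete group \(\Gamma\) acts on a locally compact space \(X\) by homeomorphisms, then the transformation groupoid \(\Gamma\ltimes X\) is {\'e}tale: its range map sends the arrow \((\gamma,x)\) to \(\gamma x\), and the restriction of this map to the open set \(\{\gamma\}\times X\) is the homeomorphism \(x\mapsto \gamma x\); hence the range map is a local homeomorphism.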
\begin{proposition}
\label{prop:etale-gpd-loc-free-act}
Let \(G\) be an \(r\)\nb-discrete groupoid and \(X\) a \(G\)\nb-space. Then the action of \(G\) on \(X\) is strongly locally free.
\mathrm{e}nd{proposition}
\begin{proof}
Let \(Y\) be a space and \(f\colon X\to Y\) a \(G\)\nb-invariant map. We claim that the restriction of \(\mathrm{m}_f\) to \(\base\times_{\base}X\) is a homeomorphism onto its image. This is because the map
\[
(r_X,\textup{Id})\colon X\to \base\times_{\base}X, \quad x\mapsto (r_X(x),x),
\]
is a homeomorphism, and so is the diagonal embedding
\[
\mathrm{dia}_X\colon X\hookrightarrow X\times_{f,Y,f} X,\quad x\mapsto (x,x).
\]
Now observe that, on \(\base\times_{\base}X\), \(\mathrm{m}_f\) is the composite \(\mathrm{dia}_X\circ(r_X,\textup{Id})^{-1}\); this proves the claim. Since \(G\) is \(r\)\nb-discrete, \(\base\) is an open subset of \(G\), and hence \(\base\times_{\base}X\) is an open neighbourhood of \(\{u\}\times_{\base}X\) for every unit \(u\in\base\). Now Lemma~\ref{lem:str-loc-free-act} implies that the action of \(G\) on \(X\) is strongly locally free.
\end{proof}
\medskip
We conclude this section by introducing some notation. Let \(G\) be a locally compact groupoid, \(X\) a \(G\)\nb-space, and let the \(G\)\nb-action be locally strongly locally free. Let \(Y\) be a space and \(f\colon X\to Y\) a \(G\)\nb-invariant map. Assume that \(\mathrm{m}_f\) is open onto its image. Then every \((\gamma,x)\in G\times_{\base}X\) has a basic neighbourhood \(A\times_{\base}B \subseteq G\times_{\base}X\) restricted to which \(\mathrm{m}_f\) is one-to-one. Thus \(G\times_{\base}X\) has an open cover consisting of basic open sets restricted to which \(\mathrm{m}_f\) is a homeomorphism onto its image. One may choose an open cover in which each basic open set is relatively compact. Note that since \(X\) is Hausdorff, the fibre product \(G\times_{\base}X\) is Hausdorff.
\begin{notation}[Notation and discussion]
\label{not:loc-sec}
Let \(G\) be a locally compact groupoid
and \(X\) a left \(G\)\nb-space. Let \(Y\) be a space and \(f\colon X\to Y\) be a \(G\)\nb-invariant map. Assume that \(\mathrm{m}_f\colon G\times_{\base}X\to X\times_{f,Y,f} X\), \((\gamma, x)\mapsto (\gamma x,x)\), is a local homeomorphism onto its image, that is, the action of \(G\) on \(X\) is locally strongly locally free and \(\mathrm{m}_f\) is open onto its image.
For brevity, write \(\mathrm{m}\) for \(\mathrm{m}_f\). Let \(U\subseteq G\) and \(V\subseteq X\) be open, Hausdorff sets with \(\mathrm{m}|_{U\times_{\base}V}\) a homeomorphism onto its image. Then for each \((x,y)\in \mathrm{m}(U\times_{\base}V)\), there is a unique \(\gamma\in U\) with \(\gamma y=x\). We define the function
\[
\mathcal{S}|_{\mathrm{m}(U\times_{\base}V)}\colon \mathrm{m}(U\times_{\base}V)\to G
\]
which sends \((x,y)\in \mathrm{m}(U\times_{\base}V)\) to the unique \(\gamma\in U\) such that \(x=\gamma y\). Note that if \(pr_1\colon G\times_{\base}X\to G\) is the projection onto the first factor, then \(\mathcal{S}|_{\mathrm{m}(U\times_{\base}V)}=pr_1\circ\mathrm{m}|_{\mathrm{m}(U\times_{\base}V)}^{-1}\). This shows that \(\mathcal{S}|_{\mathrm{m}(U\times_{\base}V)}\) is continuous.
We call \(\mathcal{S}|_{\mathrm{m}(U\times_{\base}V)}\) \emph{the local section} at \(U\times_{\base}V\). Using \(\mathcal{S}|_{\mathrm{m}(U\times_{\base}V)}\) we may write \(\mathrm{m}|_{\mathrm{m}(U\times_{\base}V)}^{-1}\colon \mathrm{m}(U\times_{\base}V)\to U\times_{\base}V\) as
\begin{align}
\mathrm{m}|_{\mathrm{m}(U\times_{\base}V)}^{-1}(x,y)&=(\mathcal{S}|_{\mathrm{m}(U\times_{\base}V)}(x,y),y)\notag\\
&=(\mathcal{S}|_{\mathrm{m}(U\times_{\base}V)}(x,y),\mathcal{S}|_{\mathrm{m}(U\times_{\base}V)} (x,y)^{-1} x).\notag
\end{align}
If one writes \(\gamma_{(x,y)}\) for \(\mathcal{S}|_{\mathrm{m}(U\times_{\base}V)}(x,y)\), then the above formula looks better:
\begin{align}
\mathrm{m}|_{\mathrm{m}(U\times_{\base}V)}^{-1}(x,y)&=(\gamma_{(x,y)},\,y) \label{eq:loc-sec-inv-y}\\
&=(\gamma_{(x,y)},\, \gamma_{(x,y)}^{-1} x).\label{eq:loc-sec-inv-x}
\end{align}
Equation~\eqref{eq:loc-sec-inv-y} expresses the local section \(\mathcal{S}|_{\mathrm{m}(U\times_{\base}V)}\) of \(U\times_{\base}V\) in terms of \(y\), whereas Equation~\eqref{eq:loc-sec-inv-x} expresses it in terms of \(x\).
\end{notation}
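As a special case of Notation~\ref{not:loc-sec} (an observation of ours), suppose that \(G\) and \(X\) are Hausdorff and the action of \(G\) on \(X\) is free and basic, that is, \(\mathrm{m}_0\) is a homeomorphism onto its image. Taking \(U=G\) and \(V=X\) yields a single globally defined continuous section
\[
\mathcal{S}\colon \mathrm{m}_0(G\times_{\base}X)\to G,\qquad \mathcal{S}(\gamma y,\, y)=\gamma.
\]
For a free and proper action of a locally compact group, this map is the classical translation function appearing in the theory of principal bundles.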
\subsection{Proper groupoids and cutoff functions}
\label{sec:prop-group-cutoff}
This subsection is based on Section~{1.2} of~\cite{Holkar2017Composition-of-Corr}.
Let \(G\) be a groupoid. We call \(G\) \emph{proper} if the map \(s_G\times r_G\colon G\to \base\times\base\), \((s_G\times r_G)(\gamma)=(s_G(\gamma), r_G(\gamma))\), is proper. An action of \(G\) on a space \(X\) is proper if the transformation groupoid \(G\ltimes X\) is proper; this definition is equivalent to the one given after Definition~\ref{def:mult-fn} on page~\pageref{def:mult-fn}. It is well known that if the action of \(G\) on \(X\) is proper, then \(G/X\) inherits many \emph{good} topological properties from \(X\): if \(X\) is locally compact (respectively Hausdorff, paracompact or second countable), then \(G/X\) is also locally compact (respectively Hausdorff, paracompact or second countable).
For a groupoid \(G\), the space of units \(\base\) carries an action of \(G\) for which \(s_G\) is the momentum map; the action is given by \(\gamma u=r_G(\gamma)\) for \(u\in\base\) and \(\gamma\in G_u\). It can be checked that \(G\) is proper if and only if this action of \(G\) on \(\base\) is proper. We call this action \emph{the} left action of \(G\) on its space of units. \emph{The} right action of \(G\) on its space of units is defined similarly.
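For example (our remark), a locally compact group \(K\), viewed as a groupoid with a single unit, is proper if and only if \(K\) is compact: here
\[
s_K\times r_K\colon K\to \{*\}\times\{*\}
\]
is the constant map, and it is proper precisely when the preimage of the one-point space, namely \(K\) itself, is compact. Consistently with the statement above, the left action of \(K\) on its space of units is the action on a point, which is proper exactly when \(K\) is compact.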
Let \(G\) be a groupoid. For nonempty subsets \(V,W\subseteq\base\), we denote the set \(\{\gamma\in G: r_G(\gamma)\in V\text{ and } s_G(\gamma)\in W\}\) by \(G^V_W\). If \(W=V\), then \(G^V_V\) is a groupoid, called the \emph{restriction groupoid} for the subset \(V\subseteq \base\); its space of units is \(\base[G_V^V]=V\). When bestowed with the subspace topology of \(G\), \(G^V_V\) is a topological subgroupoid of \(G\).
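For example (our illustration), let \(X\) be a space and \(G=X\times X\) the pair groupoid, with \(r_G(x,y)=x\), \(s_G(x,y)=y\) and \((x,y)(y,z)=(x,z)\). For nonempty subsets \(V,W\subseteq X=\base\) we have
\[
G^V_W=V\times W,
\]
and the restriction groupoid is \(G^V_V=V\times V\), the pair groupoid of \(V\).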
\begin{lemma}
\label{lem:rest-prop-gpd}
Let \(G\) be a proper groupoid and \(V\) a subset of \(\base\).
\begin{enumerate}[i)]
\item If \(G\) is proper, then so is the restriction groupoid \(G_V^V\).
\item The inclusion map \(V\hookrightarrow \base\) induces a topological embedding \(G/V\hookrightarrow G/\base\).
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Firstly, note that \(G^V_V=(r_G\times s_G)^{-1}(V\times V)\). Now the claim of the lemma follows from~\cite{Bourbaki1966Topologoy-Part-1}*{Chapter I, \S 10 No.\,1, Proposition 3(a)}.
\noindent
(ii) First of all, observe that for every \(v\in V\) the equivalence class of \(v\) in \(G/V\) and the equivalence class of \(v\) in \(G/\base\) are the same set. Thus the inclusion \(V\hookrightarrow \base\) induces a well-defined one-to-one map \(G/V\hookrightarrow G/\base\). This map is continuous by the universal property of the quotient topology, and it is open onto its image because \(V\hookrightarrow\base\) is so.
\end{proof}
Let \(G\) and \(V\) be as in Lemma~\ref{lem:rest-prop-gpd}. Assume that \(r_G\) is an open map. If \(V\) is an open subset of \(\base\), then {(ii)}~of Lemma~\ref{lem:rest-prop-gpd} says that \(G/V\) is homeomorphic to an open subset of \(G/\base\) since the quotient map \(\base\to G/\base\) is open in this case. Thus if \(G/\base\) is paracompact, then \(G/V\) is paracompact.
\begin{lemma}
\label{lem:cutoff-function}
Let \(G\) be a locally compact proper groupoid whose range map is open. Assume that the quotient \(G/\base\) of the action of \(G\) on its space of units \(\base\) is paracompact. Then there is a continuous real-valued function \(F\) on \(\base\) with the following properties:
\begin{enumerate}[i)]
\item $F$ is not identically zero on any equivalence class for the action of \(G\) on \(\base\);
\item for every compact subset $K$ of $G/\base$, the intersection of $r_G^{-1}(K)$ with ${\textup{supp}}(F)$ is compact.
\end{enumerate}
\end{lemma}
\begin{proof}
This follows from the proof of~\cite{Holkar2017Composition-of-Corr}*{Lemma 1.7}.
\end{proof}
Let \((G,\alpha)\) be a locally compact proper groupoid with a Haar system. Then \(r_G\) is open. Let \(F\) be a function on \(\base\) as in Lemma~\ref{lem:cutoff-function}. Now the proof of \cite{Holkar2017Composition-of-Corr}*{Lemma 1.7} shows that the function \(\bar F\) on \(\base\) defined as
\[
\bar F(u)=\int_G F\circ s_G(\gamma) \;\mathrm{d}\alpha^u(\gamma)
\]
is continuous and \(0<\bar F(u)<\infty\) for all \(u\in\base\). The discussion in the same proof also shows that the function
\[
c_G\defeq \frac{F}{\bar F}\circ s_G
\]
on \(G\) is continuous and has the property that
\[
\int_{G^u} c_G(\gamma) \;\mathrm{d}\alpha^u(\gamma) =1
\]
for all \(u\in\base\). We call \(c_G\) a \emph{cutoff function} for \((G,\alpha)\).
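As a simple sanity check, not spelled out in the text: if \(G\) is a compact group, viewed as a groupoid with a single unit \(u\), and \(\alpha\) is its normalised Haar measure, then any constant function \(F\equiv t\) with \(t>0\) satisfies the conditions of Lemma~\ref{lem:cutoff-function}, and
\[
\bar F(u)=\int_G F\circ s_G\,\mathrm{d}\alpha^u=t,
\qquad
c_G=\frac{F}{\bar F}\circ s_G\equiv 1,
\qquad
\int_{G^u} c_G\,\mathrm{d}\alpha^u=\alpha(G)=1.
\]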
\begin{example}
\label{exa:cutoff-for-H-space}
Let \((H,\beta)\) be a locally compact groupoid with a Haar system and \(X\) a proper right \(H\)\nb-space with \(X/H\) paracompact. Then \(X\rtimes H\) is a locally compact proper groupoid. Recall that for \((x,\eta)\in X\rtimes H\), \[
(x,\eta)^{-1}=(x\eta,\eta^{-1}),\quad r_{X\rtimes H}(x,\eta)= x,\quad \text{and}\quad s_{X\rtimes H}(x,\eta)=x\eta.
\] It is well known that the Haar system \(\beta\) induces a Haar system \(\tilde\beta\) for \(X\rtimes H\): for \(x\in X=\base[(X\rtimes H)]\) and \(f\in C_c(X\rtimes H)\)
\[
\int_{X\rtimes H} f\,\mathrm{d}\tilde\beta^x\defeq \int_{H} f(x,\eta)\,\mathrm{d}\beta^{s_X(x)}(\eta).
\]
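As a consistency check, not carried out in the text, the left invariance of \(\tilde\beta\) follows directly from the left invariance of \(\beta\): for \((x,\eta)\in X\rtimes H\) and \(f\in C_c(X\rtimes H)\),
\begin{align*}
\int_{X\rtimes H} f\bigl((x,\eta)(x\eta,\eta')\bigr)\,\mathrm{d}\tilde\beta^{x\eta}(\eta')
&=\int_H f(x,\eta\eta')\,\mathrm{d}\beta^{s_H(\eta)}(\eta')\\
&=\int_H f(x,\eta'')\,\mathrm{d}\beta^{r_H(\eta)}(\eta'')
=\int_{X\rtimes H} f\,\mathrm{d}\tilde\beta^{x},
\end{align*}
where the second equality is the left invariance of \(\beta\) and the last one uses \(r_H(\eta)=s_X(x)\).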
Let \(c\) be a function on \(X\) as in Lemma~\ref{lem:cutoff-function}. Then
\[
\bar c(x)=\int_{X\rtimes H} c\circ s_{X\rtimes H}\;\mathrm{d}\tilde\beta^x= \int_{H} c(x\eta) \;\mathrm{d}\beta^{s_X(x)}(\eta),
\]
where \(x\in X\), is the analogue of the function \(\bar F\) above, and
\[
c_{X\rtimes H}= \frac{c}{\bar c}\circ s_{X\rtimes H}
\]
is a cutoff function like \(c_G\) above. Note that \(\bar c\) is \(H\)\nb-invariant, hence \(X\rtimes H\)\nb-invariant. In this example, we call the function \(c\) a \emph{pre-cutoff} function and \(c/\bar c\) the \emph{normalised pre-cutoff function} corresponding to \(c\). Thus the cutoff function \(c_{X\rtimes H}\) is the composite of the normalised pre-cutoff function corresponding to \(c\) with the source map of the groupoid \(X\rtimes H\). We shall use this example and notation in (4) of the proof of the main theorem, Theorem~\ref{thm:prop-corr-main-thm}.
\end{example}
\subsection{Compact operators}
\label{comp-op}
Let \((H,\beta)\) be a locally compact groupoid with a Haar system. Let \((X,\lambda)\) be a pair consisting of a proper right \(H\)\nb-space \(X\) and an \(H\)\nb-invariant family of measures \(\lambda\) on \(X\) along the momentum map \(s_X\). For $\zeta,\xi \in C_c(X)$ and $g \in C_c(H)$ define
\begin{equation}\label{def:r-act-inerpro}
\left\{\begin{aligned}
(\zeta\cdot g )(x) &\defeq \int_{H^{s_X(x)}} \zeta(x\eta)\, g({\eta}^{-1}) \; \mathrm{d}\beta^{s_X(x)}(\eta) \text{ and }\\
\langle \zeta, \xi \rangle (\eta) &\defeq \int_{X_{r_H(\eta)}}\, \overline{\zeta(x)} \xi(x\eta) \; \mathrm{d}\lambda_{r_H(\eta)}(x).
\end{aligned}\right.
\end{equation}
Using the above equations, one may make \(C_c(X)\) into an inner product \(C_c(H)\)\nb-module which completes to a Hilbert \(C^*(H,\beta)\)\nb-module; see~\cite{Renault1985Representations-of-crossed-product-of-gpd-Cst-Alg}*{Corollaire 5.2} for details. We denote this Hilbert \(C^*(H,\beta)\)\nb-module by \(\Hilm(X)\).
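As a consistency check, not carried out in the text, the pairing in Equation~\eqref{def:r-act-inerpro} is compatible with the involution of \(C_c(H)\), that is, \(\langle\zeta,\xi\rangle^*=\langle\xi,\zeta\rangle\): using \(r_H(\eta^{-1})=s_H(\eta)\), the substitution \(x=y\eta\) and the \(H\)\nb-invariance of \(\lambda\),
\begin{align*}
\langle\zeta,\xi\rangle^*(\eta)
=\overline{\langle\zeta,\xi\rangle(\eta^{-1})}
&=\overline{\int_{X_{s_H(\eta)}} \overline{\zeta(x)}\,\xi(x\eta^{-1})\;\mathrm{d}\lambda_{s_H(\eta)}(x)}\\
&=\overline{\int_{X_{r_H(\eta)}} \overline{\zeta(y\eta)}\,\xi(y)\;\mathrm{d}\lambda_{r_H(\eta)}(y)}
=\int_{X_{r_H(\eta)}} \overline{\xi(y)}\,\zeta(y\eta)\;\mathrm{d}\lambda_{r_H(\eta)}(y)
=\langle\xi,\zeta\rangle(\eta).
\end{align*}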
Let \((X,\lambda)\) and \((H,\beta)\) be as above. Then \(X\times_{s_X,\base[H],s_X}X\) carries the diagonal action of \(H\), which is proper; for \((x,y)\in X\times_{s_X,\base[H],s_X}X\) and \(\eta\in H^{s_X(x)}\) the action is \((x,y)\eta=(x\eta,y\eta)\). Let \(G\) denote the quotient space \((X\times_{s_X,\base[H],s_X}X)/H\). For \(f,g\in C_c(G)\) define
\begin{align}
f*g([x,z])&=\int_X f([x,y])\,g([y,z])\,\mathrm{d}\lambda_{s_X(z)}(y),\label{eq:hgpd-con}\\
f^*([x,y])&=\overline{f([y,x])}\label{eq:hgpd-inv}.
\end{align}
Then \(f^*\) is in \(C_c(G)\), and~\cite{Holkar-Renault2013Hypergpd-Cst-Alg} shows that \(f*g\) is well defined and lies in \(C_c(G)\). When equipped with the convolution in Equation~\eqref{eq:hgpd-con} and the involution in Equation~\eqref{eq:hgpd-inv}, \(C_c(G)\) becomes a {\Star}algebra. \cite{Holkar-Renault2013Hypergpd-Cst-Alg}*{Theorem 2.2} shows that the representations of the {\Star}algebra \(C_c(H)\) induce representations of \(C_c(G)\). Using these induced representations, we complete \(C_c(G)\) to a \(C^*\)\nb-algebra, which we denote by \(C^*(G)\); see~\cite{Holkar-Renault2013Hypergpd-Cst-Alg} for details.
In~\cite{Renault2014Induced-rep-and-Hpgpd}, Renault shows that the quotient space \(G\) and the above \(C^*\)\nb-algebra carry a well-defined mathematical structure. He shows that \(G\) is the \emph{spatial hypergroupoid} associated with the proper \(H\)\nb-space \(X\). The \(H\)\nb-invariant family of measures \(\lambda\) induces a Haar system \(\lambda_G\) on \(G\), and the algebra \(C^*(G)\) is the \(C^*\)\nb-algebra \(C^*(G,\lambda_G)\) of the hypergroupoid \(G\) equipped with the Haar system \(\lambda_G\).
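As a quick consistency check, not carried out in the text, the involution in Equation~\eqref{eq:hgpd-inv} is formally anti-multiplicative for the convolution in Equation~\eqref{eq:hgpd-con}: for \(f,g\in C_c(G)\) and \((x,z)\) in the fibre product, so that \(s_X(x)=s_X(z)\),
\begin{align*}
(f*g)^*([x,z])=\overline{f*g([z,x])}
&=\overline{\int_X f([z,y])\,g([y,x])\;\mathrm{d}\lambda_{s_X(x)}(y)}\\
&=\int_X g^*([x,y])\,f^*([y,z])\;\mathrm{d}\lambda_{s_X(z)}(y)
=(g^**f^*)([x,z]).
\end{align*}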
\begin{remark}[A long remark that explains the origins of Equations~\eqref{eq:hgpd-con}, \eqref{eq:hgpd-inv} and~\eqref{eq:hgpd-l-act-inpro}]
\label{rem:cst-cat}
We advise the reader to have a look at Proposition~\ref{prop:implications-of-Cst-cat-2} and its proof up to Equation~\eqref{eq:hgpd-l-act-inpro} below before reading this remark further. In this remark, we discuss a {\Star}category studied in~\cite{Holkar-Renault2013Hypergpd-Cst-Alg} and its relation to Equations~\eqref{eq:hgpd-con}, \eqref{eq:hgpd-inv} and~\eqref{eq:hgpd-l-act-inpro}. For a locally compact groupoid \(H\) equipped with a Haar system \(\beta\), one may form a {\Star}category \(\mathfrak{C}_c(H,\beta)\) as follows: the objects of \(\mathfrak{C}_c(H,\beta)\) are the vector spaces \(C_c(X,\lambda)\) where \((X,\lambda)\) is a proper \(H\)\nb-space with an \(H\)\nb-invariant family of measures.
The arrows from \(C_c(X,\lambda)\) to \(C_c(Y,\mu)\), where \(C_c(X,\lambda), C_c(Y,\mu)\in\!\in \mathfrak{C}_c(H,\beta)^0\), are functions in \(C_c((X\times_{s_X,\base[H],s_Y}Y)/H)\). The involution of \(f\in C_c((X\times_{s_X,\base[H],s_Y}Y)/H)\) is \(f^*([x,y])=\overline{f([y,x])}\).
Let \(C_c(Z,\nu)\) be one more object in \(\mathfrak{C}_c(H,\beta)\). If \(f\colon C_c(X)\to C_c(Y)\) and \(g\colon C_c(Y)\to C_c(Z)\) are arrows in \(\mathfrak{C}_c(H,\beta)\), then their composite arrow, which goes from \(C_c(X,\lambda)\) to \(C_c(Z,\nu)\), is denoted by \(f*g\) and defined as
\[f*g[x,z]=\int_Y f[x,y]\,g[y,z]\,\mathrm{d}\mu_{s_Z(z)}(y).\]
When equipped with the involution in Equation~\eqref{eq:hgpd-inv}, \(\mathfrak{C}_c(H)\) becomes a {\Star}category. See~\cite{Holkar-Renault2013Hypergpd-Cst-Alg} for details.
\cite{Holkar-Renault2013Hypergpd-Cst-Alg}*{Theorem 2.2} shows that the representations of \((H,\beta)\) induce representations of \(\mathfrak{C}_c(H)\), which makes \(\mathfrak{C}_c(H)\) a normed {\Star}category. Using these induced representations, we complete \(\mathfrak{C}_c(H)\) to a \(C^*\)\nb-category \(\mathfrak{C}(H,\beta)\).
Let \((X,\lambda)\in\!\in\mathfrak{C}_c(H,\beta)\). Note that \(C_c(H,\beta^{-1})\) is also an object in \(\mathfrak{C}_c(H,\beta)\). Now it can be checked that Equation~\eqref{eq:hgpd-con} gives the composition of two arrows from \(C_c(X,\lambda)\) to itself, and Equation~\eqref{eq:hgpd-inv} is the involution of an arrow from \(C_c(X,\lambda)\) to itself. Furthermore,~\cite{mythesis}*{Table 3.2} shows that Equation~\eqref{eq:hgpd-l-act-inpro} below is equivalent to the composition of certain arrows in \(\mathfrak{C}_c(H)\). This explains the origins and well-definedness of Equations~\eqref{eq:hgpd-con}, \eqref{eq:hgpd-inv} and~\eqref{eq:hgpd-l-act-inpro}.
\end{remark}
\begin{proposition}
\label{prop:implications-of-Cst-cat-2}
Let \((X,\lambda)\) be a proper \(H\)\nb-space equipped with an \(H\)\nb-invariant family of measures. Let \(C^*(G)\) be the \(C^*\)\nb-algebra obtained by completing the {\Star}algebra \(C_c(G)\) as in~\cite{Holkar-Renault2013Hypergpd-Cst-Alg}. Then $C^*(G)\simeq\mathbb{K}(\Hilm(X))$.
\end{proposition}
\begin{proof}
We show that \(C^*(G)\) and \(C^*(H,\beta)\) are Morita equivalent with \(\Hilm(X)\) as the imprimitivity bimodule; this implies that \(C^*(G)\simeq\mathbb{K}(\Hilm(X))\). The isomorphism \(C^*(G)\simeq\mathbb{K}(\Hilm(X))\) is given by the representation of \(C^*(G)\) on \(\Hilm(X)\).
For \(f\in C_c(G)\), \(\xi, \zeta \in C_c(X)\) and \(g\in C_c(H)\), let \(\zeta\cdot g\) and \(\inpro{\zeta}{\xi}\) be as in Equation~\eqref{def:r-act-inerpro}. Furthermore, define
\begin{align}
\left\{\begin{aligned}\label{eq:hgpd-l-act-inpro}
f\zeta(x)&=\int_X f[x,y]\,\zeta(y)\,\mathrm{d}\lambda_{s_X(x)}(y) \;\text{ and}\\
{_G\inpro{\zeta}{\xi}}[x,y]&=\int_H \zeta(x\eta)\,\overline{\xi(y\eta)}\,\mathrm{d}\beta^{s_X(x)}(\eta).
\end{aligned}\right.
\end{align}
The integrals in Equation~\eqref{eq:hgpd-l-act-inpro} are well defined; moreover, \(f\zeta \in C_c(X)\) and \({_G\inpro{\zeta}{\xi}}\in C_c(G)\); see Remark~\ref{rem:cst-cat} above. The two formulae in Equation~\eqref{eq:hgpd-l-act-inpro} define a representation of the {\Star}algebra \(C_c(G)\) on \(C_c(X)\) and a \(C_c(G)\)\nb-valued inner product on \(C_c(X)\), respectively. Completing the {\Star}category \(\mathfrak{C}_c(H,\beta)\) to the \(C^*\)\nb-category \(\mathfrak{C}(H,\beta)\), we get a representation of \(C^*(G)\) on \(\Hilm(X)\), and the \(C_c(G)\)\nb-valued inner product makes \(\Hilm(X)\) into a left Hilbert \(C^*(G)\)\nb-module.
It is a standard computation to check that the actions of \(C_c(G)\) and \(C_c(H)\) on \(C_c(X)\) commute in the usual sense, that is, \((f\xi)g=f(\xi g)\) for all \(f\in C_c(G)\), \(\xi\in C_c(X)\) and \(g\in C_c(H)\). Thus \(\Hilm(X)\) is a \(C^*(G)\)\nb-\(C^*(H,\beta)\)-bimodule which carries the \(C^*(G)\)\nb- and \(C^*(H,\beta)\)\nb-valued inner products \({_G\inpro{}{}}\) and \(\inpro{}{}\), respectively.
Since \(\lambda\) is a proper family of measures along \(s_X\), the set \(\{\inpro{\zeta}{\xi}:\zeta,\xi\in C_c(X)\}\) is dense in \(C_c(H,\beta)\); see~\cite{Renault1985Representations-of-crossed-product-of-gpd-Cst-Alg}*{Lemme 1.1}.
The set \(\{ {_G\inpro{\zeta}{\xi}}:\zeta,\xi\in C_c(X)\}\), that is, the image of \(C_c(X)\mathbin{\otimes}_{\mathbb{C}}C_c(X)\) in \(C_c(G)\) under the map \((\zeta,\xi)\mapsto {_G\inpro{\zeta}{\xi}}\), is dense; this follows from a standard argument that uses the family of measures in Equation~\eqref{eq:measures-along-the-quotient}. The argument is as follows: the restriction map sending \(\zeta\otimes\xi\in C_c(X)\otimes_{\mathbb{C}}C_c(X)\) to \(\zeta\otimes_{s_X,\base[H],s_X}\xi\in C_c(X\times_{s_X,\base[H],s_X}X)\) has dense image by the Stone--Weierstra{\ss} theorem. The Haar system \(\beta\) induces a family of measures \(\beta_{X\times_{s_X,\base[H],s_X}X}\) along the quotient map \(X\times_{s_X,\base[H],s_X}X\to G\), similar to the one in Equation~\eqref{eq:measures-along-the-quotient}. Hence the image of the integration map \(B_{X\times_{s_X,\base[H],s_X}X}\colon C_c(X\times_{s_X,\base[H],s_X}X)\to C_c(G)\) induced by the family of measures \(\beta_{X\times_{s_X,\base[H],s_X}X}\) is dense.
Let \(\iota\) be the bijection from \(C_c(X)\otimes_{\mathbb{C}}C_c(X)\) to itself given by
\[
\iota \colon \zeta\otimes_{\mathbb{C}}\xi \mapsto \zeta\otimes_{\mathbb{C}}\overline{\xi}.
\]
Then we observe that \(\{ {_G\inpro{\zeta}{\xi}}:\zeta,\xi\in C_c(X)\}\subseteq C_c(G)\) is the image of the composite \(B_{X\times_{s_X,\base[H],s_X}X}\circ \iota\). Hence \(\{ {_G\inpro{\zeta}{\xi}}:\zeta,\xi\in C_c(X)\}\subseteq C_c(G)\) is dense.
To prove that \(\Hilm(X)\) implements a Morita equivalence between the two \(C^*\)\nb-algebras, we need to check that the equality \( {_G\inpro{\zeta}{\xi}\cdot\theta}= \zeta\cdot\inpro{\xi}{\theta}\) holds for all \(\zeta,\xi\) and \(\theta\) in \(\Hilm(X)\). To check this, let \(\zeta,\xi,\theta\in C_c(X)\); then for any \(x\in X\)
\begin{align}
{_G\inpro{\zeta}{\xi}\cdot\theta}\,(x)
&=\int_X {_G\inpro{\zeta}{\xi}}[x,y]\,\theta(y)\;\mathrm{d}\lambda_{s_X(x)}(y)\notag\\
&=\int_X\int_H \zeta(x\eta)\,\overline{\xi(y\eta)}\,\theta(y)\;\mathrm{d}\beta^{s_X(x)}(\eta)\,\mathrm{d}\lambda_{s_X(x)}(y),\label{eq:implications-of-Cst-cat-2-1} && \text{ and}
\end{align}
\begin{align*}
\zeta\cdot\inpro{\xi}{\theta}\,(x)
&=\int_H \zeta(x\eta)\,\inpro{\xi}{\theta}(\eta^{-1})\;\mathrm{d}\beta^{s_X(x)}(\eta)\\
&=\int_H\int_X \zeta(x\eta)\,\overline{\xi(y)}\,\theta(y\eta^{-1})\;\mathrm{d}\lambda_{s_H(\eta)}(y)\,\mathrm{d}\beta^{s_X(x)}(\eta).
\end{align*}
Change the variable \(y\mapsto y\eta\) in the last term above and use the right invariance of the family of measures \(\lambda\) to see that the equation transforms to
\begin{equation}
\label{eq:implications-of-Cst-cat-2-2}
\zeta\cdot\inpro{\xi}{\theta}\,(x)= \int_H\int_X \zeta(x\eta)\,\overline{\xi(y\eta)}\,\theta(y)\;\mathrm{d}\lambda_{s_X(x)}(y)\,\mathrm{d}\beta^{s_X(x)}(\eta).
\end{equation}
Now use Fubini's theorem in the above equation to interchange the integrals and see that Equations~\eqref{eq:implications-of-Cst-cat-2-1} and~\eqref{eq:implications-of-Cst-cat-2-2} are the same. Thus
\[ {_G\inpro{\zeta}{\xi}\cdot\theta}= \zeta\cdot\inpro{\xi}{\theta}\]
for all \(\zeta,\xi\) and \(\theta\in C_c(X)\). One may see that the \(C^*\)\nb-category in~\cite{Holkar2017Construction-of-Corr} allows one to extend the above equality to \(C^*(G)\), \(\Hilm(X)\) and \(C^*(H,\beta)\), which proves that \(\Hilm(X)\) is the imprimitivity bimodule between \(C^*(G)\) and \(C^*(H,\beta)\).
\end{proof}
\begin{notation}
\label{not:Hgpd-of-comp-operators}
Let \(X,H,\beta\) and \(\lambda\) be as in Proposition~\ref{prop:implications-of-Cst-cat-2}. We denote the hypergroupoid \(G=(X\times_{s_X,\base[H],s_X}X)/H\) in the proof of the proposition by \(\mathbb{K}_{\mathrm{T}}(X)\); the subscript `T' stands for \emph{topological}. Accordingly, in what follows we write \(\sigma_{\mathbb{K}_{\mathrm{T}}(X)}\) and \( _{\mathbb{K}_{\mathrm{T}}(X)}\inpro{}{}\) instead of \(\sigma_{G}\) and \( _{G}\inpro{}{}\), respectively.
\end{notation}
\section{Proper topological correspondences}
\label{sec:prop-corr}
In this section, we define a proper topological correspondence, see Definition~\ref{def:prop-corr}. Section~\ref{sec:some-examples} discusses some details that may prove helpful for understanding the definition of a proper topological correspondence. Without these details, the definition may look artificial or technical. We hope that Example~\ref{exa:concrete-exa-of-G-X-H} in Section~\ref{sec:some-examples} sets up the background for Definition~\ref{def:prop-corr}.
\subsection{Some examples}
\label{sec:some-examples}
Example~\ref{exa:transf-gpd} reviews some well-known facts about transformation groupoids; Example~\ref{exa:equi-rel-gpd} discusses the groupoid associated with a continuous equivalence relation; and Example~\ref{exa:concrete-exa-of-G-X-H} puts these two examples together to elaborate the background of the definition of a proper correspondence.
\begin{example}[Transformation groupoid]
\label{exa:transf-gpd}
Let \(G\) be a locally compact groupoid and \(X\) a left \(G\)\nb-space. For \((\gamma, x)\in G\ltimes X\), \(r_{G\ltimes X}(\gamma,x)=(r_{G}(\gamma), \gamma x)\) and \(s_{G\ltimes X}(\gamma,x)=(s_G(\gamma),x)\). We often identify the space of units \(\base\times_{\textup{Id}_{\base},\base,r_X}X\) of this transformation groupoid with \(X\) in which case, the range and source of the above element are \(r_{G\ltimes X}(\gamma,x)=\gamma x\) and \(s_{G\ltimes X}(\gamma,x)=x\). Given a unit \((r_X(x),x)\in \base[G\ltimes X]\), \(r_{G\ltimes X}^{-1}(r_X(x),x)=\{(\gamma, \gamma^{-1} x): r_G(\gamma)=r_X(x)\}\).
It is well known that a Haar system on \(G\) induces one on the transformation groupoid \(G\ltimes X\); if \(\alpha\) is a Haar system on \(G\), then the Haar system on \(G\ltimes X\), which we denote by \({\alpha_2}\), is given by
\[
\int_{G\ltimes X} f\,\mathrm{d}{\alpha_2}^{x}\defeq \int_G f(\gamma,\gamma^{-1} x)\,\mathrm{d}\alpha^{r_X(x)}(\gamma)
\]
for \(f\in C_c(G\ltimes X)\) and \(x\in X\approx\base[(G\ltimes X)]\).
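As a routine consistency check, not written out in the text, \({\alpha_2}\) is indeed left invariant: for \((\gamma,x)\in G\ltimes X\) and \(f\in C_c(G\ltimes X)\), using \(r_X(x)=s_G(\gamma)\), the left invariance of \(\alpha\) (substitute \(\gamma''=\gamma\gamma'\), so that \(\gamma'^{-1}x=\gamma''^{-1}\gamma x\)) and \(r_X(\gamma x)=r_G(\gamma)\),
\begin{align*}
\int_{G\ltimes X} f\bigl((\gamma,x)(\gamma',\gamma'^{-1}x)\bigr)\,\mathrm{d}{\alpha_2}^{x}(\gamma',\gamma'^{-1}x)
&=\int_G f(\gamma\gamma',\gamma'^{-1}x)\,\mathrm{d}\alpha^{s_G(\gamma)}(\gamma')\\
&=\int_G f\bigl(\gamma'',\gamma''^{-1}\gamma x\bigr)\,\mathrm{d}\alpha^{r_G(\gamma)}(\gamma'')
=\int_{G\ltimes X} f\,\mathrm{d}{\alpha_2}^{\gamma x}.
\end{align*}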
Let \(H\) be another groupoid. Assume that \(H\) acts on \(X\) from the right. Then \((\gamma,x)\eta\mapsto (\gamma,x\eta)\), for \((\gamma,x)\in G\ltimes X\) and \((x,\eta)\in X\times_{\base[H]}H\), is a right action of the groupoid \(H\) on the groupoid \(G\ltimes X\) by invertible functors, see~\cite{Holkar2017Construction-of-Corr}*{Definition 1.8 and Example 1.11}.
\end{example}
\begin{example}[The groupoid of an equivalence relation]
\label{exa:equi-rel-gpd}
Let \(X\) and \(Y\) be spaces, and \(\phi\colon X\to Y\) a continuous map. Then the fibre product \(X\times_{\phi,Y,\phi}X\defeq\{(x,x'): \phi(x)=\phi(x')\}\) carries a groupoid structure: \((x,x')\) and \((x'',x''')\)
in \(X\times_{\phi,Y,\phi}X\) are composable if and only if \(x'=x''\), and the composite is \((x,x')(x', x''')=(x,x''')\). The space of units of this groupoid is the diagonal \(\{(x,x)\in X\times_{\phi,Y,\phi}X:x\in X\}\), which can be identified with \(X\). For the element \((x,x')\) as above, its
\begin{enumerate*}[(i)]
\item source is \(x'\),
\item range is \(x\) and
\item inverse is \((x,x')^{-1}= (x',x)\).
\end{enumerate*}
We denote this groupoid by \(X_\phi\); it is the groupoid of the equivalence relation \(x\sim x'\) if and only if \(\phi(x)=\phi(x')\).
In addition to the above data, let \(\lambda=\{\lambda_y\}_{y\in Y}\) be a continuous family of measures with full support along \(\phi\). Then \(\lambda\) induces a Haar system \({\lambda_1}\) on \(X_\phi\); for \(f\in C_c(X_\phi)\) and \(z\in X\simeq \base[X_\phi]\),
\[
\int_{X_\phi} f\,\mathrm{d}{\lambda_1}^z \defeq \int_X
f(z,x)\,\mathrm{d}\lambda_{\phi(z)}(x).
\]
Note that \(\lambda_1^z=\delta_z\otimes\lambda_{\phi(z)}\), where \(\delta_z\) is the point mass at \(z\in X\). The family of measures \({\lambda_1}\) is continuous due to~\cite{Renault1985Representations-of-crossed-product-of-gpd-Cst-Alg}*{Lemme 1.2} and has full support because \(\lambda\) has full support. It is easy to see that \({\lambda_1}\) is left invariant: for \(f\in C_c(X_\phi)\) and \((v,w)\in X_\phi\)
\begin{multline*}
\int_{X_\phi} f((v,w)(w,x))\,\mathrm{d}{\lambda_1}^{s_{X_\phi}(v,w)}(w,x)= \int_{X_\phi} f((v,w)(w,x))\,\mathrm{d}{\lambda_1}^{w}(w,x)\\
=\int_{X} f(v,x)\,\mathrm{d}\lambda_{\phi(w)}(x)=\int_{X} f(v,x)\,\mathrm{d}\lambda_{\phi(v)}(x)= \int_{X_\phi} f(v,x)\,\mathrm{d}{\lambda_1}^{r_{X_\phi}(v,w)}(v,x).
\end{multline*}
In the above computation, the third equality is due to the fact that \(\phi(v)=\phi(w)\). If \(\lambda\) does not have full support, then \({\lambda_1}\) is a continuous family of left invariant measures but not a Haar system.
Additionally, if we assume that \(X\) and \(Y\) are right \(H\)\nb-spaces for a groupoid \(H\), and \(\phi\) is an \(H\)\nb-equivariant map, then \((x,y)\eta\mapsto (x\eta,y\eta)\), for \((x,y)\in X\times_{\phi,Y,\phi}X\) and \(\eta\in H\) with \(r_H(\eta)=s_X(x)\), is an action of \(H\) on the groupoid \(X_\phi\) in the sense of~\cite{Holkar2017Construction-of-Corr}*{Definition 1.8}. The momentum map \(r_{X_\phi,H}\colon X_\phi\to \base[H]\) of this action is \(r_{X_\phi,H}(x,y)=s_X(x)\).
If \(\lambda\) is an \(H\)\nb-invariant family of measures, then a routine computation shows that the family of measures \(\lambda_1\) is also \(H\)\nb-invariant.
\end{example}
We need the following remark: it is well known that \(r_H(\eta)\eta\defeq s_H(\eta)\) defines a continuous right action of a groupoid \(H\) on its space of units \(\base[H]\). Moreover, for any right \(H\)\nb-space \(X\), the momentum map \(s_X\colon X\to \base[H]\) is \(H\)\nb-equivariant for this action.
\begin{example}
\label{exa:concrete-exa-of-G-X-H}
Let \(G\) and \(H\) be locally compact groupoids, and \(X\) a \(G\)\nb-\(H\)-bispace. Let \(\alpha\) be a Haar system on \(G\) and \(\lambda\) an \(H\)\nb-invariant continuous family of measures on \(X\) along \(s_X\).
\begin{enumerate}[leftmargin=*]
\item Then \(s_X\colon X\to \base[H]\) is a \(G\)\nb-invariant map. Let \(X_{s_X}\) be the groupoid associated with the continuous map \(s_X\colon X\to \base[H]\) as in Example~\ref{exa:equi-rel-gpd}. It is a routine calculation to check that the map
\[
\mathrm{m}_{s_X}\colon G\ltimes X \to X_{s_X},\quad (\gamma, x)\mapsto (\gamma x,x),
\]
is a homomorphism of groupoids. Furthermore, since \(X\) is a \(G\)\nb-\(H\)-bispace, both groupoids \(G\ltimes X\) and \(X_{s_X}\) carry the actions of \(H\) described in Examples~\ref{exa:equi-rel-gpd} and~\ref{exa:transf-gpd}, and the map \(\mathrm{m}_{s_X}\) is an \(H\)\nb-equivariant homomorphism of groupoids. Note that \(\mathrm{m}_{s_X}\) is the restriction of \(r_{G\ltimes X}\times s_{G\ltimes X}\colon G\ltimes X\to X\times X\) to the closed subspace \(X\times_{s_X,\base[H],s_X}X\subseteq X\times X\).
Identify \(X\) with the space of units of \(G\ltimes X\) as well as \(X_{s_X}\). Then
\begin{enumerate}
\item \(\mathrm{m}_{s_X}|_{\base[G\ltimes X]}=\textup{Id}_X\colon X\to X\),
\item \(r_{G\ltimes X}=r_{X_{s_X}}\circ \mathrm{m}_{s_X}\).
\end{enumerate}
The last equality above implies that \(\mathrm{m}_{s_X}^{-1}\circ r_{X_{s_X}}^{-1} (x)=r_{G\ltimes X}^{-1}(x)\) for all \(x\in X\).
\item The Haar system \(\alpha\) on \(G\) induces the Haar system \({\alpha_2}\) on \(G\ltimes X\) (see Example~\ref{exa:transf-gpd}). As in Example~\ref{exa:equi-rel-gpd}, \(\lambda\) induces a continuous invariant family of measures \({\lambda_1}\) on the groupoid \(X_{s_X}\). Since \(\lambda\) is \(H\)\nb-invariant, so is \(\lambda_1\). If \(\lambda\) has full support, then \({\lambda_1}\) is a Haar system for \(X_{s_X}\).
\item In addition to the current hypotheses, assume that the action of \(G\) on \(X\) is locally strongly locally free and that \(\mathrm{m}_{s_X}\) is open onto its image, that is, \(\mathrm{m}_{s_X}\) is a local homeomorphism onto its image (see Lemma~\ref{lem:loc-str-loc-free-act}). Let \(\mathbb{R}N\colon X\times_{s_X,\base[H],s_X}X\to \mathbb{R}\) be a nonnegative \(H\)\nb-invariant
continuous function. Then, for each \(x\in X\approx\base[X_{s_X}]\), \(\mathbb{R}N(x,\_){\lambda_1}^x\) is a measure on \(X_{s_X}^x\) which is absolutely continuous with respect to \({\lambda}_1^x\). Furthermore, the Radon-Nikodym derivative is \(\frac{\mathrm{d}\mathbb{R}N(x,\_){\lambda}_1^x}{\mathrm{d}{\lambda}_1^x}(x,y)=\mathbb{R}N(x,y)\). Let \(\mathbb{R}N\lambda_1\defeq \{\mathbb{R}N(x,\_)\lambda_1^x\}_{x\in X}\). We abuse the notation: for each \(x\in X\approx\base[X_{s_X}]\), let \(\mathbb{R}N(x,\_){\lambda}_1^x\) denote the measure on \(X_{s_X}\) which equals \(\mathbb{R}N(x,\_){\lambda}_1^x\) on the fibre \(X_{s_X}^x\) and zero outside it. Now Corollary~\ref{cor:pulling-and-pasting-measures-2} says that for each \(x\in X\approx\base[X_{s_X}]\), \(\mathbb{R}N(x,\_){\lambda_1}^x\) induces a measure \(\mathrm{m}_{s_X}^*(\mathbb{R}N(x,\_)\lambda_1^x)\) on \(G\ltimes X\), to be precise, on \(r_{G\ltimes X}^{-1}(x)\subseteq G\ltimes X\).
Since \(\mathbb{R}N\) is \(H\)\nb-invariant, so is the family of measures \(\mathbb{R}N\lambda_1\). Recall from (1) above that \(\mathrm{m}_{s_X}\) is an \(H\)\nb-equivariant map. The \(H\)\nb-equivariance of \(\mathrm{m}_{s_X}\) and the \(H\)\nb-invariance of \(\mathbb{R}N\lambda_1\) imply that the family of measures
\[
\mathrm{m}_{s_X}^*(\mathbb{R}N\lambda_1)\defeq\{\mathrm{m}_{s_X}^*(\mathbb{R}N(x,\_){\lambda}_1^x)\}_{x\in X}
\]
is also \(H\)\nb-invariant; this claim follows from a direct computation.
Lemma~\ref{lem:loc-str-loc-free-act} describes this family of measures: for \(U\in \mathrm{m}_{s_X}^{\mathrm{Ho}C}\) and \(f\in C_c(G\ltimes X,U)\),
\begin{equation}
\label{eq:RN-lambda-induced-measure}
\mathrm{m}^*_{s_X}(\mathbb{R}N(x,\_){\lambda_1^x})(f)= (\mathbb{R}N(x,\_){\lambda_1^x})(f')=\int_X f'(x,y) \mathbb{R}N(x,y)\,\mathrm{d}\lambda_{s_X(x)}(y)
\end{equation}
for any extension \(f'\in \mathcal{E}_{\mathrm{m}_{s_X}}(f)\).
\item
In Definition~\ref{def:prop-corr}, one of the conditions demands that
\[
\mathrm{m}_{s_X}^* (\mathbb{R}N(x,\_){\lambda_1^x}) = {\alpha_2^x}
\]
for each \(x\in X\).
Recall from (3) above that \(\mathbb{R}N(x,\_){\lambda_1^x}\ll {\lambda_1^x}\) and the Radon-Nikodym derivative is \(\frac{\mathrm{d}(\mathbb{R}N(x,\_){\lambda_1^x})}{\mathrm{d}{\lambda_1^x}}(x,\_)=\mathbb{R}N(x,\_)\). Lemma~\ref{lem:RN-positive-makes-pullback-measure-equi} says that \(\alpha_2^x=\mathrm{m}_{s_X}^*(\mathbb{R}N(x,\_){\lambda_1^x})\ll \mathrm{m}_{s_X}^*({\lambda_1^x})\) and that the Radon-Nikodym derivative is
\[
\frac{\mathrm{d}{\alpha_2^x}}{\mathrm{d} \mathrm{m}_{s_X}^*({\lambda_1^x})}=\mathbb{R}N\circ\mathrm{m}_{s_X}.
\]
\item Note that, for \(U\in \mathrm{m}_{s_X}^{\mathrm{Ho}C}\) and \(f\in C_c(G\ltimes X,U)\), we may write
\[
f\circ \mathrm{m}_{s_X}|_U^{-1}(x,y)= f(\gamma_{(x,y)}^{-1}, y)
\]
using Equation~\eqref{eq:loc-sec-inv-y} or
\[
f\circ \mathrm{m}_{s_X}|_U^{-1}(x,y)=
f(\gamma_{(x,y)},\gamma_{(x,y)}^{-1} x)
\]
using Equation~\eqref{eq:loc-sec-inv-x}. We use the second form of \(f\circ \mathrm{m}_{s_X}|_U^{-1}\) in Definition~\ref{def:prop-corr}.
\end{enumerate}
\end{example}
\subsection{Proper correspondences}
\label{sec:prop-corr-1}
As advised in the Introduction, the reader may ignore the function \(\mathbb{R}N\) and the adjoining function \(\Delta\) in this section during a first reading; one may take \(\mathbb{R}N\) and \(\Delta\) to be the constant function \(1\) and skip the remarks discussing these functions.
\begin{definition}
\label{def:prop-corr}
Let \((G,\alpha)\) and \((H,\beta)\) be locally compact groupoids with Haar systems, and \((X,\lambda)\) a topological correspondence from \((G,\alpha)\) to \((H,\beta)\). Let \(X\) be Hausdorff and \(X/H\) paracompact. We call \((X,\lambda)\) \emph{proper} if there is an \(H\)\nb-invariant nonnegative continuous function \(\mathbb{R}N\colon X\times_{s_X,\base[H],s_X}X\to \mathbb{R}\) and the following hold:
\begin{enumerate}[i)]
\item the left action of \(G\) on \(X\) is locally strongly locally free, and the map \(\mathrm{m}_{s_X}\) is open onto its image,
\item the map \([r_X]\colon X/H\to \base\) induced by the left momentum map is proper,
\item there is an open cover \(\mathcal{A}\) of \(G\times_{\base} X\) consisting of sets in \(\mathrm{m}_{s_X}^{\mathrm{Ho}C}\) and there is an extension \(\mathcal{A}'\) of \(\mathcal{A}\) via \(\mathrm{m}_{s_X}\) with the following property: for every \(U\in \mathcal{A}\), every extension \(U'\in \mathcal{A}'\) of \(U\) via \(\mathrm{m}_{s_X}\), every \(f\in C_c(G\ltimes X, U)\) and every extension \(f'\in C_c(X\times_{s_X,\base[H],s_X}X, U')\) of \(f\), the following equality holds
\begin{equation}
\label{eq:prop-corr}
\int_{X} f' (x,y)\;\mathbb{R}N(x,y)\;\mathrm{d}\lambda_{s_X(x)}(y)= \int_{G} f(\gamma,\gamma^{-1} x)\,\mathrm{d}\alpha^{r_X(x)}(\gamma)
\end{equation}
for every \(x\in X\).
\end{enumerate}
\end{definition}
We make a few remarks. In the following remarks, we denote the restriction of a measure \(\mu\) to a set \(U\) by \(\mu|_U\) instead of \(\mu_U\). We adopt this convention only for the following discussion.
Lemma~\ref{lem:loc-str-loc-free-act} implies that the first condition in Definition~\ref{def:prop-corr} is equivalent to \(\mathrm{m}_{s_X}\) being a local homeomorphism. Thus (iii) in Definition~\ref{def:prop-corr} makes sense; see Example~\ref{exa:concrete-exa-of-G-X-H} for details. Equation~\eqref{eq:RN-lambda-induced-measure} in that example says that the left-hand side of Equation~\eqref{eq:prop-corr} is \(\mathrm{m}^*_{s_X}(\mathbb{R}N(x,\_){\lambda_1^x})(f)\), and thus Equation~\eqref{eq:prop-corr} says that
\begin{equation}
\label{equ:meaning-eq-prop-corr}
\mathrm{m}^*_{s_X}(\mathbb{R}N(x,\_){\lambda_1^x})(f)=\alpha_2^x(f)
\end{equation}
for all \(f\in C_c(G\ltimes X,U)\), where \(U\in \mathcal{A}\subseteq \mathrm{m}_{s_X}^{\mathrm{Ho}C}\) and \(x\in X\approx\base[G\ltimes X]\approx \base[X_{s_X}]\). Now Equation~\eqref{equ:meaning-eq-prop-corr} along with Lemma~\ref{lem:pulling-and-pasting-measures-3} tells us that for \(U'\in \mathcal{E}S_{s_X}(U)\cap\mathcal{A}'\), the measure \({\mathbb{R}N(x,\_){\lambda_1^x}}|_{U'}\) is concentrated on \(\mathrm{m}_{s_X}(U)\); Lemma~\ref{lem:closed-image-of-Mlp-measure-conc} discusses when the measure \(\mathbb{R}N(x,\_){\lambda_1^x}\) is concentrated on \(\mathrm{m}_{s_X}(G\times_{\base} \{x\})\).
Furthermore, as discussed in Example~\ref{exa:concrete-exa-of-G-X-H}, the measure \(\alpha_2^x=\mathrm{m}^*_{s_X}(\mathbb{R}N(x,\_){\lambda_1^x})\ll\mathrm{m}^*_{s_X}({\lambda_1^x})\) and the Radon-Nikodym derivative
\[
\frac{\mathrm{d}\mathrm{m}^*_{s_X}({\lambda_1^x})}{\mathrm{d}{\alpha_2^x}} = \mathbb{R}N(x,\_)\circ \mathrm{m}_{s_X}\circ \textup{inv}_{G\ltimes X},
\]
see the composite map in~{(4)} of Example~\ref{exa:concrete-exa-of-G-X-H}.
In Definition~\ref{def:prop-corr}, if the open cover \(\{\mathrm{m}_{s_X}(U)\}_{U\in \mathrm{m}_{s_X}^{\mathrm{Ho}C}}\) of \(\mathrm{m}_{s_X}(G\ltimes X)\) admits an extension to \(X\times_{s_X,\base[H],s_X}X\), then, due to Lemma~\ref{lem:pulling-and-pasting-measures-3}, we may conclude that the measure \(\mathbb{R}N(x,\_){\lambda_1^x}\) is concentrated on the image of \(\mathrm{m}_{s_X}\).
\begin{lemma}
\label{lem:closed-image-of-Mlp-measure-conc}
In Definition~\ref{def:prop-corr}, if the image of \(\mathrm{m}_{s_X}\) is a closed subset of \(X\times_{s_X,\base[H],s_X}X\), then the measure \(\mathbb{R}N(x,\_){\lambda_1^x}\) is concentrated on the image of \(\mathrm{m}_{s_X}\).
\end{lemma}
\begin{proof}
This follows from Example~\ref{exa:ext-of-open-cov-closed-sb} and the remark immediately above this lemma.
\end{proof}
A particular instance in which the hypothesis of Lemma~\ref{lem:closed-image-of-Mlp-measure-conc} is fulfilled is when \(\mathrm{m}_{s_X}\) is a proper map. We encounter examples of this type when \(\mathrm{m}_{s_X}\) is a homeomorphism or a homeomorphism onto a closed subspace of \(X\times_{s_X,\base[H],s_X}X\).
\begin{remark}
\label{rem:gp-case-cond-iii}
Let \((X,\lambda)\) be a topological correspondence from a groupoid with a Haar system \((G,\alpha)\) to another one \((H,\beta)\). If \(X\) is \(H\)\nb-compact, that is, \(X/H\) is compact, then the map \([r_X]:X/H\to\base\) induced by the right momentum map \(r_X\) is proper. On the other hand, assume that the space of units of the groupoid \(G\) is compact, for example, when \(G\) is a group. If \([r_X]\) is a proper map, then \(X/H=[r_X]^{-1}(\base)\) is a compact space. Thus, when \(\base\) is a compact space, Condition~{(ii)} in Definition~\ref{def:prop-corr} is equivalent to the quotient space \(X/H\) being compact.
\end{remark}
\medskip
Now we prepare to state the main theorem. Let \((G,\alpha)\) and \((H,\beta)\) be locally compact groupoids with Haar systems, and \((X,\lambda)\) a topological correspondence from \((G,\alpha)\) to \((H,\beta)\) with \(\Delta\) as the adjoining function. This gives a \(\mathbb{C}st\)\nb-correspondence \(\Hilm(X)\colon \mathbb{C}st(G,\alpha)\to \mathbb{C}st(H,\beta)\) which is a certain completion of \(\mathbb{C}ontc(X)\). From~\cite{Holkar2017Construction-of-Corr}, recall the formulae for the actions of \(\mathbb{C}ontc(G)\) and \(\mathbb{C}ontc(H)\) on \(\mathbb{C}ontc(X)\), and the \(\mathbb{C}ontc(H)\)\nb-valued inner product on \(\mathbb{C}ontc(X)\), which give this \(\mathbb{C}st\)\nb-correspondence: for \(b \in C_c(G)\), \(\zeta,\xi \in C_c(X)\) and \(a \in C_c(H)\),
\begin{equation}\label{def:lr-act-inerpro}
\left\{\begin{aligned}
(b\cdot \zeta)(x) &\defeq \int_{G^{r_X(x)}} b(\gamma)\, \zeta(\gamma^{-1} x) \,
\Delta^{1/2}(\gamma, \gamma^{-1} x) \; \mathrm{d} \alpha^{r_X(x)}(\gamma),\\
(\zeta\cdot a )(x) &\defeq \int_{H^{s_X(x)}} \zeta(x\eta)\, a({\eta}^{-1}) \; \mathrm{d}\beta^{s_X(x)}(\eta) \text{ and }\\
\langle \zeta, \xi \rangle (\eta) &\defeq \int_{X_{r_H(\eta)}}\, \overline{\zeta(x)} \xi(x\eta) \; \mathrm{d}\lambda_{r_H(\eta)}(x).
\end{aligned}\right.
\end{equation}
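These formulae can be sanity-checked in a toy model. The following sketch is our own illustration, not part of the construction: it takes \(G=H=\mathbb{Z}/n\) acting on \(X=\mathbb{Z}/n\) by translation, counting measures for all the Haar systems and for \(\lambda\), and \(\Delta\equiv 1\), and verifies the adjoint relation \(\langle b\cdot \zeta,\xi\rangle=\langle \zeta, b^{*}\cdot \xi\rangle\) with \(b^{*}(\gamma)=\overline{b(\gamma^{-1})}\).

```python
import numpy as np

n = 5  # G = H = Z/n, X = Z/n, counting measures, adjoining function Delta = 1
rng = np.random.default_rng(0)
b    = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # element of C_c(G)
zeta = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # element of C_c(X)
xi   = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def left_act(b, zeta):
    # (b . zeta)(x) = sum_gamma b(gamma) zeta(gamma^{-1} x); here gamma^{-1} x = x - gamma mod n
    return np.array([sum(b[g] * zeta[(x - g) % n] for g in range(n)) for x in range(n)])

def inner(zeta, xi):
    # <zeta, xi>(eta) = sum_x conj(zeta(x)) xi(x eta); here x eta = x + eta mod n
    return np.array([sum(np.conj(zeta[x]) * xi[(x + e) % n] for x in range(n)) for e in range(n)])

bstar = np.conj(b[(-np.arange(n)) % n])  # b*(gamma) = conj(b(gamma^{-1}))
lhs = inner(left_act(b, zeta), xi)
rhs = inner(zeta, left_act(bstar, xi))
assert np.allclose(lhs, rhs)  # <b.zeta, xi> = <zeta, b*.xi>
```

The same check works for any finite group; only the translation rule inside `left_act` and `inner` changes.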
Denote the {\Star}representation of the pre-\(\mathbb{C}st\)\nb-algebra \(\mathbb{C}ontc(G)\), defined in Equation~\eqref{def:lr-act-inerpro}, on the dense subspace \(\mathbb{C}ontc(X)\) of the Hilbert \(\mathbb{C}st(H,\beta)\)\nb-module \(\Hilm(X)\) by \(\sigma_G\), that is, \(\sigma_G(b)(\zeta)=b\cdot \zeta\). Similarly, let \(\sigma_H\) denote the representation of the pre-\(\mathbb{C}st\)\nb-algebra \(\mathbb{C}ontc(H)\) on the pre-Hilbert \(\mathbb{C}st(H,\beta)\)\nb-module \(\mathbb{C}ontc(X)\).
\begin{theorem}
\label{thm:prop-corr-main-thm}
Let \((G,\alpha)\) and \((H,\beta)\) be locally compact groupoids with Haar systems. If \((X,\lambda)\) is a proper topological correspondence from \((G,\alpha)\) to \((H,\beta)\), then the \(\mathbb{C}st\)\nb-correspondence \(\Hilm(X)\colon \mathbb{C}st(G,\alpha)\to\mathbb{C}st(H,\beta)\) is proper.
\end{theorem}
The rest of this section is devoted to proving Theorem~\ref{thm:prop-corr-main-thm}. The proof is broken into three steps:
\begin{enumerate*}[(i)]
\item first we show that if \(h\in \mathbb{C}ontc(G)\) and
\(\xi\in\mathbb{C}ontc(X)\), then the restriction \((h\otimes\xi)|_{G\times_{\base}X}\in\mathbb{C}ontc(G\times_{\base}X)\); this is a well-known fact that we recall.
\item For given \(b\in\mathbb{C}ontc(G)\), we find suitable functions
\(\epsilon_i\in\mathbb{C}ontc(G)\) and \(\delta_j\in \mathbb{C}ontc(X)\), where
\(1\leq i\leq m\) and \(1\leq j\leq n\), and \(m\) and \(n\) are
natural numbers, such that
\(f_{ij}\defeq b\epsilon_i\otimes \frac{c}{\bar c}\,\delta_j\) is in
\(\mathbb{C}ontc(G\times_{\base}X,D_{ij})\) for some
\(D_{ij}\in \mathbb{C}ov\), here \(c\) and \(\bar c\) are fixed
pre-cutoff and cutoff functions. Recall from Definition~\ref{def:prop-corr} that \(\mathbb{C}ov\) is a fixed cover of \(G\ltimes X\) by sets in \(\mathrm{m}_{s_X}^{\mathrm{Ho}C}\).
\item Finally, let \(F_{ij}'\in\mathbb{C}ontc(X\times_{s_X,\base[H],s_X}X)\) be
an extension of \(f_{ij}\) via \(\mathrm{m}_{s_X}\) which is supported in a set in \(\mathbb{C}ov'\). We show that the operator \(\sigma_G(b)\) on \(\Hilm(X)\) is the same as the operator \(\sigma_{\mathbb{K}_{\mathrm{T}}(X)}\left(\sum_{i=1,j=1}^{i=m,j=n} B_{X\times_{s_X,\base[H],s_X}X}(F_{ij}')\right)\). See the proof of Proposition~\ref{prop:implications-of-Cst-cat-2} for the meaning of \(B_{X\times_{s_X,\base[H],s_X}X}(F_{ij}')\).
\end{enumerate*}
We start by proving the following lemmas.
\begin{lemma}
\label{lem:top-1}
Let \(X\) be a topological space and \(L, K\subseteq X\) subspaces. If \(L\) is closed in \(X\) and \(K\) is (quasi-)compact in \(X\), then \(L\cap K\) is (quasi-)compact in \(L\) as well as in \(X\).
\end{lemma}
\begin{proof}
Let \(\{U_i\}_{i\in I}\) be an open cover of \(K\cap L\) in \(L\), where \(I\) is an index set. For each \(i\in I\), let \(U'_i\subseteq X\) be an open set such that \(U_i=U_i'\cap L\). Then \(\{U_i'\}_{i\in I}\cup \{X-L\}\) is an open cover of \(K\) in \(X\). Since \(K\subseteq X\) is (quasi-)compact, this cover admits a finite subcover of \(K\); thus there are finitely many sets \(U_{i_1}',\dots,U_{i_n}'\) which, together with \(X-L\), cover \(K\). Now one may see that \(U_{i_1},\dots,U_{i_n}\) is an open cover of \(K\cap L\).
Moreover, since \(L\cap K\) is (quasi-)compact in \(L\), it is (quasi-)compact in any subspace of \(X\) containing \(L\), in particular in \(X\) itself.
\end{proof}
\begin{lemma}
\label{lem:main-thm-1}
Let \(X\) be a locally compact left \(G\)\nb-space for a locally compact groupoid \(G\). Let \(K\subseteq G\) and \(Q\subseteq X\) be nonempty subsets with \(s_G(K)\cap r_X(Q)\neq \emptyset\). If \(K\) and \(Q\) are quasi-compact (or compact) in \(G\) and \(X\), respectively, then so is \(K\times_{\base}Q\subseteq G\times_{\base}X\).
\end{lemma}
\begin{proof}
Since all the sets \(K\), \(Q\) and \(s_G(K)\cap r_X(Q)\) are nonempty, so is \(K\times_{\base}Q\).
Assume that \(K\subseteq G\) and \(Q\subseteq X\) are quasi-compact. We observe that \(G\times_{\base} X\subseteq G\times X\) is closed; this is proved in the last paragraph below. Then \(K\times_{\base} Q\defeq (K\times Q)\cap (G\times_{\base} X)\) is quasi-compact in \(G\times_{\base} X\) due to Lemma~\ref{lem:top-1}. This proves the assertion about the quasi-compactness of \(K\times_{\base} Q\). The argument for the claim regarding compactness is similar.
A proof that \(G\times_{\base} X\subseteq G\times X\) is closed: recall from basic topology that a space \(Y\) is Hausdorff if and only if the diagonal \(\mathrm{dia}(Y)\defeq \{(y,y):y\in Y\}\subseteq Y\times Y\) is closed. By hypothesis, \(\base\), the space of units of the groupoid \(G\), is Hausdorff, hence \(\mathrm{dia}(\base)\subseteq \base\times\base\) is closed. Now the function \(s_G\times r_X\colon G\times X\to \base\times\base\), \((\gamma, x)\mapsto (s_G(\gamma), r_X(x))\), is continuous, and \(G\times_{\base} X= (s_G\times r_X)^{-1}(\mathrm{dia}(\base))\). Being the inverse image of a closed set under a continuous map, \(G\times_{\base} X\subseteq G\times X\) is closed.
\end{proof}
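The key point of the last paragraph, \(G\times_{\base} X=(s_G\times r_X)^{-1}(\mathrm{dia}(\base))\), can be illustrated in a finite toy model (our own sketch, with morphisms and points tagged by hypothetical units):

```python
# Finite sketch: the fiber product G x_base X equals the preimage of the diagonal.
G = [("a", 0), ("b", 1), ("c", 0)]   # morphisms tagged with their source unit s_G
X = [("x", 0), ("y", 1)]             # points tagged with their range unit r_X
units = {0, 1}

fiber_product = {(g, x) for g in G for x in X if g[1] == x[1]}
diagonal = {(u, u) for u in units}
diag_preimage = {(g, x) for g in G for x in X if (g[1], x[1]) in diagonal}
assert fiber_product == diag_preimage
```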
\begin{lemma}
\label{lem:main-thm-3}
Let \(Y\) and \(X\) be locally compact spaces, and let \(K\subseteq Y\) and \(Q\subseteq X\) be quasi-compact subspaces. Let \(\{W_d\}_{d\in D}\) be an open cover of \(K\times Q\), where \(D\) is an index set and each \(W_d\) is open in \(Y\times X\). Then there are finite families \(\{U_1,\dots,U_m\}\) and \(\{V_1,\dots,V_n\}\) of open sets \(U_i\subseteq Y\) and \(V_j\subseteq X\) which cover \(K\) and \(Q\), respectively, such that for each \((i,j)\in\{1,\dots,m\}\times\{1,\dots,n\}\) there is an index \(d\in D\) with \(U_i\times V_j\subseteq W_d\).
\end{lemma}
\begin{proof}
The proof of this lemma is exactly the same as that of \cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}*{Lemma 7.15}, except that one replaces all the open covers in \(K\) and \(Q\) by open covers in \(Y\) and \(X\), respectively.
\end{proof}
Now we write the proof of Theorem~\ref{thm:prop-corr-main-thm}; during a first reading of this proof, the reader may assume that \(\mathbb{R}N\) and the adjoining function \(\Delta\) are the constant function \(1\).
\begin{proof}[Proof of Theorem~\ref{thm:prop-corr-main-thm}]
\noindent 1) Let \((G,\alpha)\), \((H,\beta)\) and \((X,\lambda)\) be as in the statement of the theorem. Let \(\Delta\) be the adjoining function of the topological correspondence. Let \(\sigma_G\) and \(\sigma_{\mathbb{K}_{\mathrm{T}}}\) have the meanings given in the discussion following Equation~\eqref{def:lr-act-inerpro} and in Notation~\ref{not:Hgpd-of-comp-operators}, respectively.
Recall from the proof of Proposition~\ref{prop:implications-of-Cst-cat-2} that the Haar system \(\beta\) of \(H\) induces a family of measures \(\beta_{X\times_{s_X,\base[H],s_X}X}\) along the quotient map \(X\times_{s_X,\base[H],s_X}X\to (X\times_{s_X,\base[H],s_X}X)/H\) similar to the one in Equation~\eqref{eq:measures-along-the-quotient}. The image of the corresponding integration map \(B_{X\times_{s_X,\base[H],s_X}X}\colon \mathbb{C}ontc(X\times_{s_X,\base[H],s_X}X)\to \mathbb{C}ontc((X\times_{s_X,\base[H],s_X}X)/H)\) is dense.
\medskip
\noindent 2) Let \(b\in\mathbb{C}ontc(G)\) be a given nonzero function. In (6) of this proof, we show that there are natural numbers \(m\) and \(n\), and functions \(F_{ij}\in \mathbb{C}ontc(X\times_{s_X,\base[H],s_X}X)\) for \(1\leq i \leq m\) and \(1\leq j\leq n\), with
\[
\sigma_G(b)\xi=\sum_{i,j=1}^{i=m,j=n}\sigma_{\mathbb{K}_{\mathrm{T}}(X)}(B_{X\times_{s_X,\base[H],s_X}X}(F_{ij}))\xi
\]
for all \(\xi\in\mathbb{C}ontc(X)\).
Since every function in \(\mathbb{C}ontc(G)\) is a finite linear combination of functions in \({\mathbb{C}ontc}_0(G)\), we may assume that \(b\in {\mathbb{C}ontc}_0(G)\); see~\cite{Khoshkam-Skandalis2002Reg-Rep-of-Gpd-Cst-Alg-and-Applications}*{Lemma 1.3} for details. Then \(K={\textup{supp}}(b)\subseteq T\), where \(T\subseteq G\) is open and Hausdorff. Consequently, \(s_G(K)\subseteq\base\) is compact. Use Condition~{(ii)} in Definition~\ref{def:prop-corr} to see that \([r_X]^{-1}(s_G(K))\subseteq X/H\) is compact. Since the quotient map \(q\colon X\to X/H\) is open, we may choose a compact set \(Q\subseteq X\) with \(q(Q)=[r_X]^{-1}(s_G(K))\). Then \(s_G(K)\cap r_X(Q)\neq\emptyset\). Now apply Lemma~\ref{lem:main-thm-1} to the sets \(K\subseteq G\) and \(Q\subseteq X\) to see that \(K\times_{\base}Q\subseteq G\times_{\base}X\) is compact.
\medskip
\noindent 3) Let
\(
\tilde{\mathbb{C}ov}= \{\tilde{A}\subseteq G\times X: \tilde{A}\text{ is open in } G\times X, \text{ and } \tilde{A}\cap (G\times_{\base} X)\in \mathbb{C}ov \}\);
this is a collection of open sets in \(G\times X\) which covers \(G\times_{\base}X\). Since \(K\times Q={\textup{supp}}(b)\times Q\) and \(G\times_{\base}X\) are closed in \(G\times X\), so is \(K\times_{\base} Q\defeq (K\times Q)\cap (G\times_{\base}X)\). Therefore, \(\tilde{\mathbb{C}ov}\cup \{(G\times X)-(K\times_{\base} Q)\}\) is an open cover of \(K\times Q\) consisting of open sets in \(G\times X\). Apply Lemma~\ref{lem:main-thm-3} to this cover to get finitely many open sets \(U_1,\dots,U_m\) in \(G\) and finitely many open sets \(V_1,\dots,V_n\) in \(X\) which cover \(K\) and \(Q\), respectively, such that for each \((\gamma,x)\in K\times_{\base}Q\) there are indices \(i\in\{1,\dots,m\}\) and \(j\in\{1,\dots,n\}\) with \((\gamma,x)\in U_i\times V_j\subseteq A\) for some \(A\in\tilde{\mathbb{C}ov}\).
Therefore, for \(1\leq i \leq m\), \(1\leq j\leq n\) and any functions \(f\in \mathbb{C}ontc(G,U_i)\) and \(g\in \mathbb{C}ontc(X, V_j)\), the restriction \(f\otimes_{\base}g\) is supported in a set in \(\mathbb{C}ov\).
Let \(\{\epsilon_i\}_{i=1}^m\) be a partition of unity of \(K\) subordinate to the open cover \(\{U_i\}_{i=1}^m\), and let \(\{\delta_j\}_{j=1}^n\) be one of \(Q\) subordinate to \(\{V_j\}_{j=1}^n\).
\medskip
\noindent 4) Recall Example~\ref{exa:cutoff-for-H-space}. As in this example, let \(c\colon X\to \overline{\mathbb{R}}^+\) be a pre-cutoff function, that is, a function similar to the function \(F\) in Lemma~\ref{lem:cutoff-function}. Let \(\bar c\) be the same as in the example; \(\bar c\) is obtained by averaging \(c\) over \(H\):
\[
\bar c(x)= \int_{H} c(x\eta) \;\mathrm{d}\beta^{s_X(x)}(\eta) \quad \text{ for } x\in X.
\]
Therefore, as discussed in the example, \(\bar c\) is an \(H\)\nb-invariant function.
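A minimal numerical sketch of this averaging (our own illustration, assuming counting measures, with \(H=\{0,2\}\subseteq \mathbb{Z}/4\) acting on \(X=\mathbb{Z}/4\) by addition) confirms that \(\bar c\) is \(H\)\nb-invariant:

```python
import numpy as np

rng = np.random.default_rng(2)
c = rng.random(4)        # pre-cutoff c: X -> R_+, X = Z/4
H = [0, 2]               # H = {0, 2} inside Z/4, acting by addition

# cbar(x) = sum over eta in H of c(x + eta): the average of c over the H-orbit
cbar = np.array([sum(c[(x + e) % 4] for e in H) for x in range(4)])

for x in range(4):
    for e in H:
        assert np.isclose(cbar[(x + e) % 4], cbar[x])  # cbar is H-invariant
```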
Fix \((i,j)\in\{1,\dots,m\}\times\{1,\dots,n\}\). Then the product \(b\epsilon_i \in \mathbb{C}ontc(G, U_i)\). Similarly, \((c/\bar c)\,\delta_j\in \mathbb{C}ontc(X,V_j)\), and the function
\begin{equation}
\label{eq:def-l-ij}
l_{ij}\defeq \left(b\epsilon_i\,\otimes_{G\times_{\base}X}\; \frac{c}{\bar c}\, \delta_j\right) \;\cdot\,\Delta
\end{equation}
is in \(\mathbb{C}ontc(G\times_{\base}X, U_i\times_{\base} V_j)\), due to Lemma~\ref{lem:main-thm-1}.
Recall from (3) above that each \(l_{ij} \) is supported in a set in \(\mathbb{C}ov\subseteq \mathrm{m}_{s_X}^{\mathrm{Ho}C}\). Fix an extension \(F_{ij}\in \mathcal{E}_{\mathrm{m}_{s_X}}(l_{ij})\) which is supported in a set in \(\mathbb{C}ov'\). Using the local sections in Equation~\eqref{eq:loc-sec-inv-x}, we see that for \((x,y)\in\mathrm{m}_{s_X}(U_i\times_{\base} V_j)\),
\begin{align*}
F_{ij}(x,y)&= l_{ij}\circ \mathrm{m}_{s_X}|_{U_i\times_{\base}V_j}^{-1}(x,y) \\
&= b(\gamma_{(x,y)})\;\epsilon_i(\gamma_{(x,y)})\;\delta_j(\gamma_{(x,y)}^{-1}\, x)\, \frac{c(\gamma_{(x,y)}^{-1}\, x)}{\bar{c}(\gamma_{(x,y)}^{-1}\, x)}\; \Delta(\gamma_{(x,y)},\gamma_{(x,y)}^{-1}\, x).
\end{align*}
\medskip
\noindent 5)
Let
\(
F_{ij}'=F_{ij}\mathbb{R}N
\)
where \(\mathbb{R}N\colon X\times_{s_X,\base[H],s_X}X\to \mathbb{R}^*\) is the \(H\)\nb-invariant nonnegative
continuous function associated to the proper correspondence \((X,\lambda)\). Since \(F_{ij}\in \mathcal{E}_{\mathrm{m}_{s_X}}(l_{ij})\) is a continuous function with compact support and \(\mathbb{R}N\) is continuous, \(F_{ij}'\) is also continuous and compactly supported.
Let \(\xi\in \mathbb{C}ontc(X)\), and recall from (1) above that \(B_{X\times_{s_X,\base[H],s_X}X}\) is the family of measures along the quotient map \(X\times_{s_X,\base[H],s_X}X\to (X\times_{s_X,\base[H],s_X}X )/H\) as in Equation~\eqref{eq:measures-along-the-quotient}. Then
\begin{align*}
&\sigma_{\mathbb{K}_{\mathrm{T}}(X)}(B_{X\times_{s_X,\base[H],s_X}X}(F_{ij}'))\xi(x)\\
=&\int_XB_{X\times_{s_X,\base[H],s_X}X}(F_{ij}')[x,y]\xi(y)\;\mathrm{d}\lambda_{s_X(x)}(y)\\
=&\int_X\int_H F_{ij}'(x\eta,y\eta)\xi(y)\;\mathrm{d}\beta^{s_X(x)}(\eta)\,\mathrm{d}\lambda_{s_X(x)}(y) \\
=&\int_X\int_H F_{ij}(x\eta,y\eta)\;\mathbb{R}N(x\eta,y\eta)\;\xi(y)\;\mathrm{d}\beta^{s_X(x)}(\eta)\,\mathrm{d}\lambda_{s_X(x)}(y).
\end{align*}
Now use the \(H\)\nb-invariance of \(\mathbb{R}N\) which is built in its definition (see Definition~\ref{def:prop-corr}) to see that the above term equals
\[
\int_X\int_H F_{ij}(x\eta,y\eta)\;\mathbb{R}N(x,y)\;\xi(y)\;\mathrm{d}\beta^{s_X(x)}(\eta)\,\mathrm{d}\lambda_{s_X(x)}(y).
\]
Using Fubini's theorem we write the above term as
\[
\int_H\left(\int_X F_{ij}(x\eta,y\eta)\;\mathbb{R}N(x,y)\;\xi(y)\;\mathrm{d}\lambda_{s_X(x)}(y)\right)\,\mathrm{d}\beta^{s_X(x)}(\eta).
\]
Recall from (4) above that \(F_{ij}\in \mathcal{E}_{\mathrm{m}_{s_X}}(l_{ij})\) and \(F_{ij}\) is supported in an element of \(\mathbb{C}ov'\). Therefore, Equation~\eqref{eq:prop-corr} in Definition~\ref{def:prop-corr} allows us to change the above integral on \(X\) to the one on \(G\) as
\[
\int_H\int_G l_{ij} (\gamma_{(x\eta,y\eta)}, \gamma_{(x\eta,y\eta)}^{-1} x\eta)\;\xi(y)\;\mathrm{d}\alpha^{r_X(x)}(\gamma)\,\mathrm{d}\beta^{s_X(x)}(\eta).
\]
Now substitute the value of \(l_{ij}\) in the above equation from Equation~\eqref{eq:def-l-ij}; writing \(\gamma\) instead of \(\gamma_{(x\eta,y\eta)}\) for simplicity, the previous term reads
\[
\int_H\int_G b(\gamma)\epsilon_i(\gamma)\, \delta_j(\gamma^{-1} x\eta)\xi(\gamma^{-1} x\eta)\frac{c(\gamma^{-1} x\eta)}{\bar c(\gamma^{-1} x\eta)}\;\Delta(\gamma,\gamma^{-1} x\eta)
\;\,\mathrm{d}\alpha^{r_X(x)}(\gamma)\,\mathrm{d}\beta^{s_X(x)}(\eta).
\]
\noindent 6)
Recall from Example~\ref{exa:cutoff-for-H-space} that \(\bar c\) is \(H\)\nb-invariant, and recall from~\cite{Holkar2017Construction-of-Corr} that \(\Delta\) is also \(H\)\nb-invariant. We incorporate these changes and compute:
\begin{align*}
&\left(\sum_{i,j=1}^{i=m,j=n}\sigma_{\mathbb{K}_{\mathrm{T}}(X)}\left(B_{X\times_{s_X,\base[H],s_X}X}(F_{ij}')\right)\right)\xi(x)\\
&=\sum_{i,j=1}^{i=m,j=n}\int_H\int_G b(\gamma)\,\epsilon_i(\gamma)\,\delta_j(\gamma^{-1} x \eta)\, \frac{c(\gamma^{-1} x \eta)}{\bar{c}(\gamma^{-1} x)}\,\xi(\gamma^{-1} x )\, \Delta(\gamma,\gamma^{-1} x)
\;\,\mathrm{d}\alpha^{r_X(x)}(\gamma)\,\mathrm{d}\beta^{s_X(x)}(\eta)\\
&=\sum_{i=1}^{i=m}\int_G b(\gamma)\,\epsilon_i(\gamma)\,\xi(\gamma^{-1} x ) \Delta(\gamma,\gamma^{-1} x) \left(\sum_{j=1}^n\int_H\delta_j(\gamma^{-1} x \eta)\, \frac{c(\gamma^{-1} x \eta)}{\bar{c}(\gamma^{-1} x)}
\,\mathrm{d}\beta^{s_X(x)}(\eta)\right)\,\mathrm{d}\alpha^{r_X(x)}(\gamma)\\
&=\sum_{i=1}^{i=m}\int_G b(\gamma)\,\epsilon_i(\gamma)\,\xi(\gamma^{-1} x ) \Delta(\gamma,\gamma^{-1} x)\! \left(\int_H\left(\sum_{j=1}^n\delta_j(\gamma^{-1} x \eta)\right) \frac{c(\gamma^{-1} x \eta)}{\bar{c}(\gamma^{-1} x)}\,\mathrm{d}\beta^{s_X(x)}(\eta)\right)\!\mathrm{d}\alpha^{r_X(x)}(\gamma).
\end{align*}
We use Fubini's theorem during the second step above. Now use the fact that \(\{\delta_j\}_{j=1}^{n}\) is a partition of unity subordinate to the cover \(\{V_j\}_{j=1}^n\) of \(Q\). Then the last term equals
\[
\sum_{i=1}^{i=m}\int_G b(\gamma)\,\epsilon_i(\gamma)\,\xi(\gamma^{-1} x ) \Delta(\gamma,\gamma^{-1} x) \left(\int_H \frac{c(\gamma^{-1} x \eta)}{\bar{c}(\gamma^{-1} x)}\, \mathrm{d}\beta^{s_X(x)}(\eta)\right)\;\,\mathrm{d}\alpha^{r_X(x)}(\gamma).
\]
Note that the integrand of the inner integral against \(\mathrm{d}\beta^{s_X(x)}\) is the cutoff function \((c/\bar c )\circ s_{X\rtimes H}\); see Example~\ref{exa:cutoff-for-H-space}. With this observation, we continue computing the last term:
\begin{align*}
&\sum_{i=1}^{i=m}\int_G b(\gamma)\,\epsilon_i(\gamma)\,\xi(\gamma^{-1} x ) \Delta(\gamma,\gamma^{-1} x) \left( \frac{1}{\bar{c}(\gamma^{-1} x)}\int_H c(\gamma^{-1} x \eta)\,\mathrm{d}\beta^{s_X(x)}(\eta)\right)\;\,\mathrm{d}\alpha^{r_X(x)}(\gamma)\\
&= \sum_{i=1}^{i=m}\int_G b(\gamma)\,\epsilon_i(\gamma)\,\xi(\gamma^{-1} x ) \Delta(\gamma,\gamma^{-1} x)\,\mathrm{d}\alpha^{r_X(x)}(\gamma)\\
&= \int_G b(\gamma)\left(\sum_{i=1}^{i=m} \epsilon_i(\gamma)\right)\xi(\gamma^{-1} x ) \Delta(\gamma,\gamma^{-1} x)\,\mathrm{d}\alpha^{r_X(x)}(\gamma)=\sigma_G(b)\xi(x).
\end{align*}
To get the last equality, we use the fact that \(\{\epsilon_i\}_{i=1}^m\) is a partition of unity subordinate to the cover \(\{U_i\}_{i=1}^m\) of \(K\).
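Both resummation steps use only that the bump functions sum to \(1\) on the relevant compact set. A toy check of this mechanism, with hypothetical bump functions on \([0,1]\):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
b = np.sin(2 * np.pi * x)          # a sample function on [0, 1]
eps1, eps2 = 1.0 - x, x            # partition of unity: eps1 + eps2 = 1 on [0, 1]

# splitting b against the partition and resumming recovers b exactly
resummed = b * eps1 + b * eps2
assert np.allclose(resummed, b)
```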
Thus, for a given \(b\in\mathbb{C}ontc(G)\), we have found natural numbers \(m\) and \(n\), and functions \(F_{ij}'\in \mathbb{C}ontc(X\times_{s_X,\base[H],s_X}X)\), where \(1\leq i \leq m\) and \(1\leq j\leq n\), such that
\[
\sigma_G(b)=\sum_{i,j=1}^{i=m,j=n}\sigma_{\mathbb{K}_{\mathrm{T}}(X)}(B_{X\times_{s_X,\base[H],s_X}X}(F_{ij}'))
\]
where each \(\sigma_{\mathbb{K}_{\mathrm{T}}(X)}(B_{X\times_{s_X,\base[H],s_X}X}(F_{ij}'))\) is, as per Proposition~\ref{prop:implications-of-Cst-cat-2}, a compact operator on \(\Hilm(X)\).
\end{proof}
\begin{corollary}
\label{cor:prop-corr-gives-kk-cycle}
Let \((G,\alpha)\) and \((H,\beta)\) be locally compact groupoids equipped with Haar systems. Then a proper topological correspondence \((X,\lambda)\) from \((G,\alpha)\) to \((H,\beta)\) defines an element in \(\mathrm{K}K(\mathbb{C}st(G,\alpha),\mathbb{C}st(H,\beta))\).
\end{corollary}
\begin{proof}
Theorem~\ref{thm:prop-corr-main-thm} says that \(\Hilm(X)\) is a proper \(\mathbb{C}st\)\nb-correspondence from \(\mathbb{C}st(G,\alpha)\) to \(\mathbb{C}st(H,\beta)\). Hence \([0,\Hilm(X)]\in\mathrm{K}K(\mathbb{C}st(G,\alpha),\mathbb{C}st(H,\beta))\) where \(0\in \mathbb B_{\mathbb{C}st(H,\beta)}(\Hilm(X))\) is the zero operator.
\end{proof}
Recall Section~{2.5.2} in~\cite{mythesis}, which discusses isomorphisms of topological correspondences.
\begin{proposition}
\label{prop:prop-iso-corr}
Let \((G,\alpha)\) and \((H,\beta)\) be locally compact groupoids, and let \((X,\lambda)\) and \((Y,\mu)\) be two correspondences from \((G,\alpha)\) to \((H,\beta)\). Let \(\phi\colon X\to Y\) be a \(G\)-\(H\)\nb-equivariant homeomorphism which implements an isomorphism of correspondences. If \((X,\lambda)\) is proper, then so is \((Y,\mu)\).
\end{proposition}
\begin{proof}
The isomorphism of correspondences \(\phi\) induces an \(H\)\nb-equivariant isomorphism of topological groupoids
\[
\phi'\colon G\ltimes X\to G\ltimes Y,\quad (\gamma,x)\mapsto (\gamma, \phi(x)).
\]
We also get the following commutative square
\begin{figure}[htb]
\centering
\[
\begin{tikzcd}[scale=2]
G\times_{\base}X\arrow{r}{\mathrm{m}_{s_X}}\arrow{d}{\phi'}
& X\times_{s_X,\base[H],s_X}X \arrow{d}{\phi\times_{\base[H]}\phi}\\
G\times_{\base}Y \arrow{r}{\mathrm{m}_{s_Y}} & Y\times_{s_Y,\base[H],s_Y}Y
\end{tikzcd}
\]
\caption{}
\label{fig:iso-prop-corr-1}
\end{figure}
In this square, the vertical arrows are isomorphisms. We leave it as an exercise to the reader to prove that if \(\mathrm{m}_{s_X}\) is a local homeomorphism onto its image, then so is \(\mathrm{m}_{s_Y}\).
The map \(\phi\) induces a \(G\)\nb-equivariant isomorphism \([\phi]\colon X/H\to Y/H\). Furthermore, \([r_X]=[r_Y]\circ [\phi]\), that is, \([r_X]\circ [\phi]^{-1}=[r_Y]\). Therefore, \([r_X]\) is a proper map if and only if \([r_Y]\) is so. By hypothesis \([r_X]\) is proper, hence \([r_Y]\) is proper.
Finally, let \(\mathbb{C}ov\) and \(\mathbb{C}ov'\) be open covers of \( G\times_{\base}X\) and \(X\times_{s_X,\base[H],s_X}X\), respectively, which satisfy the last condition in Definition~\ref{def:prop-corr}. Then
\begin{align*}
\mathbb{C}ov[B]\defeq& \{\phi'(U): U\in \mathbb{C}ov\},\\
\mathbb{C}ov[B]'\defeq& \{\phi\times_{\base[H]}\phi(V): V\in \mathbb{C}ov'\}
\end{align*}
are open covers of \( G\times_{\base}Y\) and \(Y\times_{s_Y,\base[H],s_Y}Y\), respectively, which satisfy the last condition in Definition~\ref{def:prop-corr}.
For \(u\in \base[H]\), let \(\phi_*(\lambda_u)\) be the measure induced by \(\lambda_u\) on \(Y\) via \(\phi\). Let \(\mathbb{R}N_X\) denote the function \(\mathbb{R}N\) appearing in Definition~\ref{def:prop-corr}. Define \(\mathbb{R}N_Y(x,y)=\mathbb{R}N_X\circ (\phi\times_{\base[H]}\phi)^{-1}(x,y) \,\frac{\mathrm{d}\phi_*(\lambda_{s_Y(x)})}{\mathrm{d}\mu_{s_Y(x)}}(y)\).
Let \(f\in \mathbb{C}ontc( G\times_{\base}Y,U)\), where \(U\in \mathcal{A}'\), and let \(f'\) be an extension of it in \(\mathcal{E}_{\mathrm{m}_{s_Y}}(f)\). Observe that \(f'\circ(\phi\times_{\base[H]}\phi)\) is an extension of \(f\circ \phi'\). Now
\begin{align*}
& \int_Y f'(x,y)\,\mathbb{R}N_Y(x,y)\;\mathrm{d}\mu_{s_Y(x)}(y)\\
&=\int_Y f'(x,y)\,\mathbb{R}N_X\circ(\phi\times_{\base[H]}\phi)^{-1}(x,y)\,\frac{\mathrm{d}\phi_*(\lambda_{s_Y(x)})}{\mathrm{d}\mu_{s_Y(x)}}(y)\;\mathrm{d}\mu_{s_Y(x)}(y)\\
&= \int_X f'\circ(\phi\times_{\base[H]}\phi) (a,b)\,\mathbb{R}N_X(a,b)\;\mathrm{d}\lambda_{s_X(a)}(b).
\end{align*}
Since \((X,\lambda)\) is a proper correspondence, using Equation~\eqref{eq:prop-corr} we see that the last term equals
\[
\int_G f\circ \phi' (\gamma, \gamma^{-1} a)\,\mathrm{d}\alpha^{r_X(a)}(\gamma)=\int_G f(\gamma, \gamma^{-1} x)\,\mathrm{d}\alpha^{r_Y(x)}(\gamma).
\]
The last equality is due to the fact that \(\phi\) is a \(G\)\nb-equivariant isomorphism. This proves that \((Y,\mu)\) is proper.
\end{proof}
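The change of variables in this proof is the defining property of the pushforward measure, \(\int_Y f\,\mathrm{d}\phi_*(\mu)=\int_X f\circ \phi\,\mathrm{d}\mu\). A finite-measure sketch (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
lam = rng.random(5)                # weights of a measure on X = {0, ..., 4}
perm = np.array([2, 0, 4, 1, 3])   # phi: X -> Y, a bijection of finite sets
f = rng.random(5)                  # a function on Y

# pushforward: (phi_* lam)({y}) = lam(phi^{-1}(y))
push = np.zeros(5)
push[perm] = lam

lhs = (f * push).sum()             # integral of f against phi_* lam
rhs = (f[perm] * lam).sum()        # integral of f o phi against lam
assert np.isclose(lhs, rhs)
```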
\section{Examples}
\label{sec:exa}
\begin{example}
\label{exa:Tu}
In~\cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}, Tu defines \emph{locally proper generalised morphisms} and \emph{proper} locally proper generalised morphisms from a locally compact groupoid \(G\) to another one, say, \(H\). Such a morphism is a locally compact \(G\)\nb-\(H\)-bispace \(X\) satisfying certain properties.
We repeat Tu's definitions of a locally proper generalised morphism and of a proper locally proper generalised morphism when \(X\) is Hausdorff; we modify the wording to suit our setting. For the original definitions, see~\cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}*{Definitions 7.3 and 7.6}.
\begin{definition}[\cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}*{Definitions 7.3 and 7.6} when \(X\) is Hausdorff]
\label{def:Tu-corr}
A locally proper generalised morphism from a locally compact groupoid with Haar system $(G,\alpha)$ to a groupoid with Haar system $(H,\beta)$ is a $G$-$H$\nb-bispace $X$ such that
\begin{enumerate}[label=\roman*)]
\item the action of $G$ is proper and free,
\item the right momentum map induces a homeomorphism \(G\backslash X\to \base[H]\),
\item the action of $H$ is proper.
\end{enumerate}
Furthermore, the locally proper generalised morphism \(X\) is called proper if
\begin{enumerate}[label=\roman*), resume]
\item \([r_X]\colon X/H\to\base\) is a proper map.
\end{enumerate}
\end{definition}
\cite{Tu2004NonHausdorff-gpd-proper-actions-and-K}*{Theorem 7.8} proves that a proper correspondence \(X\) from \((G,\alpha)\) to \((H,\beta)\) produces a proper \(\mathbb{C}st\)\nb-correspondence from \(\textup{C}^*_{\textup{r}}(G,\alpha)\) to \(\textup{C}^*_{\textup{r}}(H,\beta)\). The discussion in the sixth paragraph on page 219 of~\cite{Holkar2017Construction-of-Corr} and Example 3.8 in the same article show that a locally proper generalised morphism is a topological correspondence; this example also shows that the \(H\)\nb-invariant family of measures \(\lambda\) on \(X\) is given by
\begin{equation}
\label{eq:Tu-corr-measures}
\int_X f\,\mathrm{d}\lambda_u\defeq \int_G f(\gamma^{-1} x)\,\mathrm{d}\alpha^{r_X(x)}(\gamma)
\end{equation}
where \(f\in \mathbb{C}ontc(X)\), \(u\in \base[H]\) and \(x\in s_X^{-1}(u)\). The same computation shows that \(\lambda_u\) is well defined and is independent of the choice of \(x\).
Let \(X\) be a proper locally proper generalised morphism from a locally compact groupoid with Haar system \((G,\alpha)\) to \((H,\beta)\). We show that when \(X\) is Hausdorff and \(X/H\) is paracompact, \((X,\lambda)\) is a proper topological correspondence; here \(\lambda\) is the family of measures discussed above.
In this case, Remark~\ref{rem:relation-between-sX-quo-sX-Mlp} shows that the map \(\mathrm{m}_{s_X}\) is a homeomorphism. Hence the first condition in Definition~\ref{def:prop-corr} is satisfied.
Condition~{(iv)} in Definition~\ref{def:Tu-corr} above agrees with Definition~\ref{def:prop-corr}{(ii)}.
Finally, to see that Definition~\ref{def:prop-corr}{(iii)} is satisfied, let \(\mathbb{C}ov=\{G\times_{\base}X\}\) be the open cover of \(G\times_{\base}X\), and \(\mathbb{C}ov'=\{X \times_{s_X,\base[H],s_X}X\}\) be its extension via \(\mathrm{m}_{s_X}\); recall from Remark~\ref{rem:relation-between-sX-quo-sX-Mlp} that in the present situation \(\mathrm{m}_{s_X}\) is a homeomorphism. Let \(f\in \mathbb{C}ontc(G\times_{\base}X)\); then \(\{f\circ \mathrm{m}_{s_X}^{-1}\}= \mathcal{E}_{\mathrm{m}_{s_X}}(f)\). Take \(\mathbb{R}N\) to be the constant function 1 on \(X\times_{s_X,\base[H],s_X}X\). We compute the left and right side terms in Equation~\eqref{eq:prop-corr}:
\[
\int_{X} f'(x,y)\;\mathbb{R}N(x,y)\;\mathrm{d}\lambda_{s_X(x)}(y)=\int_{X} f\circ\mathrm{m}_{s_X}^{-1}(x,y)\;\mathrm{d}\lambda_{s_X(x)}(y).
\]
Since \(\mathrm{m}_{s_X}\) is a homeomorphism, there is a unique element \(\gamma_{(x,y)}\in G\) such that \(\gamma_{(x,y)} y=x\), that is, \(\gamma_{(x,y)}^{-1} x=y\); we drop the suffix of \(\gamma\) for the sake of simplicity. Now we may write the right-hand side of the previous equation as
\[
\int_{X} f\circ\mathrm{m}_{s_X}^{-1}(x,\gamma^{-1} x)\;\mathrm{d}\lambda_{s_X(x)}(\gamma^{-1} x).
\]
Using the definition of the measure \(\lambda_{s_X(x)}\) in Equation~\eqref{eq:Tu-corr-measures}, we write the last term as
\[
\int_{G} f\circ\mathrm{m}_{s_X}^{-1}(x,\eta^{-1} \gamma^{-1} x)\;\mathrm{d}\alpha^{s_G(\gamma)}(\eta).
\]
Now use Equation~\eqref{eq:loc-sec-inv-x} to see that \(f\circ\mathrm{m}_{s_X}^{-1}(x,\eta^{-1}\gamma^{-1} x)=f(\gamma\eta,\eta^{-1} \gamma^{-1} x)\). Put this value of \(f\circ\mathrm{m}_{s_X}^{-1}\) into the last integral; it becomes
\[
\int_{G} f(\gamma\eta,\eta^{-1} \gamma^{-1} x)\;\mathrm{d}\alpha^{s_G(\gamma)}(\eta).
\]
Now change the variable \(\gamma\eta\mapsto \gamma\) and use the left invariance of the Haar system \(\alpha\) to see that the above term equals
\[
\int_{G} f(\gamma,\gamma^{-1} x)\;\mathrm{d}\alpha^{r_X(x)}(\gamma)
\]
which is the right hand side of Equation~\mathrm{e}qref{eq:prop-corr}.
\end{example}
\begin{example}[Groupoid equivalence]
\label{exa:gpd-equi}
Recall the definition of groupoid equivalence,~\cite{Muhly-Renault-Williams1987Gpd-equivalence}*{Definition 2.1}. Let \(X\) be a groupoid equivalence between locally compact groupoids \(G\) and \(H\). Let \(\alpha\) and \(\beta\) be the Haar systems on \(G\) and \(H\), respectively. Then \(X\) is a proper locally proper generalised morphism; this follows from a straightforward comparison between~\cite{Muhly-Renault-Williams1987Gpd-equivalence}*{Definition 2.1} and Definition~\ref{def:Tu-corr}, and Example~{3.9} shows this. Example~\ref{exa:Tu} now shows that, when equipped with the family of measures \(\lambda\) in Equation~\eqref{eq:Tu-corr-measures}, \((X,\lambda)\) is a proper correspondence. The function~\(\mathbb{R}N\) on \(X\times_{s_X,\base[H],s_X}X\) is the constant function 1.
\end{example}
\begin{example}
\label{exa:cpt-gp-to-pt}
Let \(G\) be a locally compact group and \(\{*\}\) the trivial group(oid). Let \(\alpha\) be the Haar system on \(G\), then \((G,\alpha)\) is a topological correspondence from \(G\) to \(\{*\}\): \(G\) acts on itself with the left multiplication and \(\{*\}\) acts on \(G\) trivially; \(\alpha\) is \((G,\alpha)\)\nb-quasi-invariant measure on \(G\). This is a special case of Example~\cite{Holkar2017Construction-of-Corr}*{Example 3.5} with the measure on the bisapce inverted; the map is the zero map \(\{*\}\to G\). Assume that \(G\) is compact, then \((G,\alpha)\) is a proper correspondence. In this case, the first two conditions in Definition~\ref{def:prop-corr} are clearly satisfied. Finally, observe that the map \(\mathrm{m}_0\colon G\times G\to G\times G\) is an isomorphism. To see that the last condition in Definition~\ref{def:prop-corr} holds, the computation is similar to the one in Example~\ref{exa:Tu} with \(X\) is replaced by \(G\).
To elaborate on the algebraic side of this example: the topological correspondence \((G,\alpha)\) from \(G\) to \(\{*\}\) gives the left regular representation of \(\mathrm{C}^*(G)\) on \(\mathcal{L}^2(G,\alpha)\). Now Theorem~\ref{thm:prop-corr-main-thm} reproduces the basic result that in this case the left regular representation takes values in \(\mathbb{K}(\mathcal{L}^2(G,\alpha))\).
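As a concrete illustration (this special case is ours, not part of the original example), take \(G=\mathbb{T}\) with normalised Haar measure. For \(f\in \mathrm{C}(\mathbb{T})\) the left regular representation acts by convolution, and on the orthonormal basis \(e_n(t)=e^{2\pi int}\) of \(\mathcal{L}^2(\mathbb{T})\) one computes
\[
(f*e_n)(s)=\int_{\mathbb{T}}f(t)\,e_n(s-t)\,\mathrm{d}t=\hat{f}(n)\,e_n(s),
\qquad \hat{f}(n)\to 0 \text{ as } \abs{n}\to\infty,
\]
so the representation is diagonal with coefficients vanishing at infinity and hence takes values in \(\mathbb{K}(\mathcal{L}^2(\mathbb{T}))\), as asserted.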
\end{example}
\begin{example}
\label{exa:transversal}
Let \((G,\alpha)\) be a locally compact groupoid with a Haar system, and \(H\subseteq G\) a closed subgroupoid. Let \(\beta\) be a Haar system on \(H\). Let \(X\defeq \{\gamma\in G: r_G(\gamma)\in \base[H]\}\); commonly, \(X\) is denoted by \(G^{\base[H]}\). Then \(H\) and \(G\) act on \(X\) from the left and the right, respectively, by multiplication. The momentum maps for these actions are \(r_X=r_{G}|_{X}\) and \(s_X=s_G|_X\). Both actions are free, and as~\cite{Holkar2017Construction-of-Corr}*{Example 3.14} shows, they are proper. In this case, \(X\) is a topological correspondence from \(H\) to \(G\), where the family of measures is given by Equation~\eqref{eq:Tu-corr-measures}; note that in this case the map \([s_X]\colon H\backslash X\to \base\) need not be surjective. This is a correspondence similar to the one in~\cite{Holkar2017Construction-of-Corr}*{Example 3.14} with the direction inverted.
Since the action of \(H\) on \(X\) is proper, the map \(\textup{m}\colon H\times_{\base[H]}X\to X\times X\), \((\eta,x)\mapsto (\eta x,x)\), is proper. As \(\textup{m}^{-1}(X \times_{s_X,\base[H],s_X} X)=H\times_{\base[H]}X\),
\cite{Bourbaki1966Topologoy-Part-1}*{Chapter 1, Proposition 10.1.3(a)} implies that \(\mathrm{m}_{s_X}\) is proper. Since \(\mathrm{m}_{s_X}\) is one-to-one and proper, \cite{Bourbaki1966Topologoy-Part-1}*{Chapter 1, Proposition 10.1.2} shows that \(\mathrm{m}_{s_X}\) is a homeomorphism onto a closed subspace of \( X \times_{s_X,\base[H],s_X} X\). This shows that the first condition in Definition~\ref{def:prop-corr} holds.
We observe that \([r_X]\colon X/G\to\base[H] \) is the identity map. Thus the second condition in Definition~\ref{def:prop-corr} is satisfied. The third condition can be checked as in Example~\ref{exa:Tu}.
\end{example}
\begin{example}
\label{exa:gp-to-sgp}
Let \(G\) be a locally compact group and \(H\) a closed subgroup of it. Assume that \(G\) is \(H\)\nb-compact, that is, \(G/H\) is a compact space. Let \(\alpha\) and \(\beta\) be the Haar measures on \(G\) and \(H\), respectively. Then, with the left and right multiplication actions, \((G,\alpha^{-1})\) is a topological correspondence from \(G\) to \(H\). The reader may check that this is an example of the correspondences in Definition~\ref{def:Tu-corr}.
In this case, \(\mathrm{m}_0\colon G\times G\to G\times G\) is the map \((\gamma,\gamma')\mapsto (\gamma\gamma',\gamma')\), which is a homeomorphism; this fulfils the first condition in Definition~\ref{def:prop-corr}. Since \(G/H\) is compact, the constant map \(G/H\to \{*\}\) is proper, which fulfils the second condition for a proper correspondence. A computation as in Example~\ref{exa:Tu} shows that the third condition in Definition~\ref{def:prop-corr} holds. Thus \((G,\alpha^{-1})\colon G\to H\) is a proper correspondence. A concrete example of this case is \(G=\mathbb{Z}\) and \(H=n\mathbb{Z}\) where \(n=1,2,\dots\).
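To make the concrete case \(G=\mathbb{Z}\), \(H=2\mathbb{Z}\) explicit (the following computation is a sketch we add for illustration): the pre-Hilbert \(\mathrm{C}^*(2\mathbb{Z})\)-module structure on \(\mathrm{C_c}(\mathbb{Z})\) has the inner product
\[
\langle\xi,\eta\rangle(h)=\sum_{\gamma\in\mathbb{Z}}\overline{\xi(\gamma)}\,\eta(\gamma+h),\qquad h\in 2\mathbb{Z}.
\]
Under the Fourier isomorphisms \(\mathrm{C}^*(\mathbb{Z})\cong \mathrm{C}(\mathbb{T})\) and \(\mathrm{C}^*(2\mathbb{Z})\cong \mathrm{C}(\mathbb{T})\), the completed module becomes the finitely generated projective \(\mathrm{C}(\mathbb{T})\)-module \(\mathrm{C}(\mathbb{T})^2\), with the two generators corresponding to the cosets \(2\mathbb{Z}\) and \(1+2\mathbb{Z}\). Hence the compact operators on this module form a unital algebra and the left action of \(\mathrm{C}^*(\mathbb{Z})\) automatically takes values in it; this is the properness reflecting the compactness of \(\mathbb{Z}/2\mathbb{Z}\).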
\end{example}
\subsection{The case of spaces}
\label{sec:ex-space-case}
A locally compact space \(X\) is a locally compact groupoid; in this groupoid every arrow is a unit arrow, and the source and range maps both equal \(\textup{Id}_X\), the identity map on \(X\). Equip the groupoid \(X\) with the Haar system \(\delta^X\defeq\{\delta^X_x\}_{x\in X}\), where \(\delta_x^X\) is the point mass at \(x\in X\). From now on, whenever we think of a space as a locally compact groupoid equipped with \emph{the} (or standard, or obvious) Haar system, we shall mean this setting. Note that \(X\) is an {\'etale} groupoid. Let \(Y\) be another space. Then the only possible action of \(X\) on \(Y\) is the trivial one: there is a momentum map \(b\colon Y\to X\), and for \((x,y)\in X\times_{\textup{Id}_X,X,b}Y\) the action is \(x\cdot y=y\). The actions of \(X\) on \(Y\) are thus in one-to-one correspondence with the continuous maps from \(Y\) to \(X\).
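As a small illustration of the last statement (our example): take \(X=\{1,2\}\) with the discrete topology. An action of \(X\) on a space \(Y\) is then precisely a continuous map \(b\colon Y\to X\), that is, a decomposition
\[
Y=b^{-1}(1)\sqcup b^{-1}(2)
\]
of \(Y\) into two disjoint open (hence also closed) subsets; the action itself does not move any point of \(Y\).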
\begin{lemma}
\label{lem:prop-quiver-labda-is-atomic}
Let \(Y\xleftarrow{b} X\xrightarrow[\lambda]{f} Z\) be a topological correspondence from a space \(Y\) to a space \(Z\). If \((X,\lambda)\) is proper in the sense of Definition~\ref{def:prop-corr}, then
\begin{enumerate}[i)]
\item for each \(z\in Z\), the measure \(\lambda_z\) is atomic,
\item for each \(z\in Z\), each atom of \(\lambda_z\) is nonzero and the function \(\mathrm{RN}\) appearing in Definition~\ref{def:prop-corr}(3) is positive. Thus the family of measures \(\lambda\) has full support.
\end{enumerate}
\end{lemma}
\begin{proof}
i): The discussion regarding Figure~\ref{fig:quiver-homeo} in Example~\ref{exa:proper-basic-corr} shows that the image of \(\mathrm{m}_f\) is the diagonal \(\{(x,x)\in X\times_{f,Z,f}X:x\in X\}\) which is closed in \(X\times_{f,Z,f}X\) (since \(X\) is Hausdorff). Fix \(x\in X\). Then using Lemma~\ref{lem:closed-image-of-Mlp-measure-conc} we see that the measure \(\lambda_1^x\) is concentrated on the diagonal in \(X\times_{f,Z,f}X\). Note that the measure \(\lambda_1^x\) is defined on \(\pi_2^{-1}(x)\) where \(\pi_2\) is the projection onto the second factor of \(X\times_{f,Z,f}X\). Thus \(\lambda_1^x\) is concentrated on \(\pi_2^{-1}(x)\cap\mathrm{dia}_X(X)\) which equals
\[
\{(x,x')\in X\times_{f,Z,f}X:
f(x)=f(x')\}\cap \{(x'',x'')\in X\times_{f,Z,f}X: x''\in X\}=\{(x,x)\}.
\]
Thus
\[
{\textup{supp}}(\lambda_1^x)\subseteq \{(x,x)\}=\{x\}\times\{x\}
\]
for each \(x\in X\).
Now \(\lambda_1^x=\delta_x\otimes\lambda_{f(x)}\) hence
\[
{\textup{supp}}(\lambda_1^x)= {\textup{supp}}(\delta_x)\times {\textup{supp}}(\lambda_{f(x)})=\{x\}\times{\textup{supp}}(\lambda_{f(x)}),
\]
due to~\cite{Bourbaki2004Integration-I-EN}*{Chapter III, \S4.2, Proposition 2, page III.44}. Hence \({\textup{supp}}(\lambda_{f(x)})\subseteq \{x\}\) for each \(x\in X\).
\noindent ii): Let \(x_0\in X\) be given. Let \(\tilde g\in \mathrm{C_c}(Y\times_{\textup{Id}_Y,Y,b}X)\) be a nonnegative function with \(\tilde g(b(x_0),x_0)>0\). Let \(\mathrm{RN}\) be the function in the third condition of Definition~\ref{def:prop-corr}. Then Equation~\eqref{eq:quiver-right-eq} says that
\[
\int_Y \tilde g(y,y^{-1}\cdot x_0)\,\mathrm{d}\delta_Y^{b(x_0)}(y)= \tilde g(b(x_0),x_0)>0.
\]
Let \(g'\in \mathcal{E}_f(\tilde g)\) be a nonnegative extension of \(\tilde g\). Since the third condition in Definition~\ref{def:prop-corr} holds, we get
\[
\int_X g'(x_0,x)\,\mathrm{RN}(x_0,x)\,\mathrm{d}\lambda_{f(x_0)}(x)=\int_Y \tilde g(y,y^{-1}\cdot x_0)\,\mathrm{d}\delta_Y^{b(x_0)}(y)= \tilde g(b(x_0),x_0)>0.
\]
Thus the measure \(\lambda_{f(x_0)}\neq 0\); that is, \(\lambda_z\neq 0\) for each \(z\in Z\). Furthermore, the function \(g'\) and the measure \(\lambda_{f(x_0)}\) on the left hand side of the above equation are nonnegative and the integral is positive; since \(\lambda_{f(x_0)}\) is concentrated at \(x_0\) by i), we get \(\mathrm{RN}(x_0,x_0)>0\).
\end{proof}
Let \(X\) and \(Y\) be spaces and \(f\colon X\to Y\) a (surjective) local homeomorphism. Then there is a canonical continuous family of measures along \(f\), which we denote by \(\tau^f\):
\begin{equation}
\label{eq:etale-measure-family}
\int_X g\,\mathrm{d}\tau^f_y=\sum_{x\in {\textup{supp}}(g)\cap f^{-1}(y)}g(x)
\end{equation}
where \(g\in \mathrm{C_c}(X)\) and \(y\in Y\).
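As a quick illustration of Equation~\eqref{eq:etale-measure-family} (the following instance is ours): take \(X=\mathbb{R}\setminus\{0\}\), \(Y=(0,\infty)\) and \(f(x)=x^2\), a surjective local homeomorphism with two-point fibres. Then for \(g\in \mathrm{C_c}(X)\) and \(y>0\),
\[
\int_X g\,\mathrm{d}\tau^f_y=g(\sqrt{y}\,)+g(-\sqrt{y}\,),
\]
and the continuity of \(y\mapsto g(\sqrt{y})+g(-\sqrt{y})\) exhibits the continuity of the family \(\tau^f\).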
\begin{example}
\label{exa:proper-basic-corr}
Let \(X,Y\) and \(Z\) be locally compact Hausdorff spaces, and \(Y\xleftarrow{b} X\xrightarrow{f} Z\) continuous maps. Assume that \(f\) is a local homeomorphism. Then \((X,\tau^f)\) is a topological correspondence from \(Y\) to \(Z\). The adjoining function of this correspondence is the constant function 1; this follows directly if one writes out the equation in Definition~2.1(iv) of~\cite{Holkar2017Construction-of-Corr}, keeping in mind that the actions are trivial.
Additionally, assume that \(b\) is a proper map. Then \((X,\tau^f)\) is a proper correspondence. Here are the details: in this case, \(Y\times_{Y}X=Y\times_{\textup{Id}_Y,Y,b}X\approx X\). Hence \(\mathrm{m}_f\colon Y\times_{Y}X\to X\times_{f,Z,f}X\) can be identified with the diagonal embedding (see Figure~\ref{fig:quiver-homeo}); thus the first condition in Definition~\ref{def:prop-corr} is satisfied. Since \(b\) is proper and the action of \(Z\) on \(X\) is trivial, the second condition is also fulfilled. To see that the third condition is satisfied, one identifies \(Y\times_{Y}X\) with \(X\) and \(\mathrm{m}_f\) with the diagonal embedding \(X\hookrightarrow X\times_{f,Z,f}X\), and uses this embedding to prove the required claim.
Let \(\theta\colon Y\times_{Y}X\to X\) be the homeomorphism \((b(x),x)\mapsto x\); the inverse of \(\theta\) is \(x\mapsto (b(x),x)\). Let \(\mathrm{dia}_X\colon X\to X\times_{f,Z,f}X\) be the diagonal embedding. We draw the commutative triangle in Figure~\ref{fig:quiver-homeo}.
\begin{figure}[htb]
\centering
\[
\begin{tikzcd}[scale=2]
Y\times_{Y}X\arrow{r}{\mathrm{m}_f}\arrow{d}{\approx}[swap]{\theta}
& X\times_{f,Z,f}X\\
X\arrow{ru}[swap]{\mathrm{dia}_X} &
\end{tikzcd}
\]
\caption{The map \(\mathrm{m}_f\) identified with the diagonal embedding \(\mathrm{dia}_X\) via the homeomorphism \(\theta\).}
\label{fig:quiver-homeo}
\end{figure}
Take the cover \(\mathcal{U}\) of \(Y\times_{Y}X\) to be
\(
\{\theta^{-1}(U): U\in f^{\mathrm{HoC}}\}.
\)
Let \(\mathcal{U}'\defeq\{U\times_{f,Z,f}U: U\in f^{\mathrm{HoC}}\}\). We observe that \(\mathcal{U}'\) is an extension of \(f^{\mathrm{HoC}}\) via \(\mathrm{dia}_X\), and hence it is an extension of \(\mathcal{U}\) via \(\mathrm{m}_f\). Note that for \(U',U\in f^{\mathrm{HoC}}\), \((U\times_{f,Z,f}U) \cap \mathrm{dia}_X(X)=\mathrm{dia}_X(U')\) if and only if \(U=U'\). Thus every element in \(f^{\mathrm{HoC}}\) or \(\mathcal{U}\) has a unique extension in \(\mathcal{U}'\). Let \(\mathrm{RN}\) be the constant function 1.
Let \(U\in f^{\mathrm{HoC}}\) be given, so that \(\theta^{-1}(U)\) belongs to the cover above. Let \(\tilde{g}\in \mathrm{C_c}(Y\times_{Y}X;\theta^{-1}(U))\); then \(g\defeq \tilde{g}\circ \theta^{-1}\in \mathrm{C_c}(X;U)\). Let \(g'\in \mathrm{C_c}(X\times_{f,Z,f}X; U\times_{f,Z,f} U)\) be an extension of \(g\) via \(\mathrm{dia}_X\). Then \(g'\) is an extension of \(\tilde{g}\) as well. For \(x\in X\), the right side of Equation~\eqref{eq:prop-corr} for \(\tilde{g}\) is
\begin{equation}
\label{eq:quiver-right-eq}
\int_{Y} \tilde{g}(y,y^{-1}\cdot x)\,\mathrm{d}\delta_Y^{b(x)}(y)=\int_{Y} \tilde{g}(y,x)\,\mathrm{d}\delta_Y^{b(x)}(y)= \tilde{g}(b(x), x).
\end{equation}
For the same \(x\in X\) as above, the left side of Equation~\eqref{eq:prop-corr} for \(g'\) reads
\[
\int_{X} g'(x,x')\,\mathrm{d}\tau^f_{f(x)}(x').
\]
Note that \(g'\) is supported in \(U\times_{f,Z,f}U\), and \(f|_U\) is a homeomorphism. Therefore, the only element of \(U\) which \(f\) maps to \(f(x)\) is \(x\) itself. Now recall that \(g'\) is an extension of \(\tilde{g}\) via \(\mathrm{m}_f\), hence \(g'\circ\mathrm{m}_f=\tilde{g}\). Thus
\[
\int_{X} g'(x,x')\,\mathrm{d}\tau^f_{f(x)}(x')=g'(x,x)=g'\circ\mathrm{m}_f(b(x),x)=\tilde{g}(b(x),x);
\]
the claim follows by comparing the above equation with Equation~\mathrm{e}qref{eq:quiver-right-eq}.
\end{example}
\begin{example}
\label{exa:prop-quiver}
Let \(Y\) and \(Z\) be spaces considered as locally compact groupoids equipped with the canonical Haar system.
Let \((X,\lambda)\) be a topological correspondence from \(Y\) to \(Z\), see~\cite{Holkar2017Construction-of-Corr}*{Example 3.3}. Thus there are momentum maps \(Y\xleftarrow{b} X\xrightarrow{f} Z\) and a continuous family of measures \(\lambda\) along \(f\). The topological correspondence \((X,\lambda)\) gives a \(\mathrm{C}^*\)\nb-correspondence \(\Hilm(X)\) from \(\mathrm{C_0}(Y)\) to \(\mathrm{C_0}(Z)\). We use the shorthand notation \(Y\xleftarrow{b} X\xrightarrow[\lambda]{f} Z\) for this correspondence.
When the spaces \(Y\) and \(Z\) above are the same and the family of measures \(\lambda\) has full support, Muhly and Tomforde call this topological correspondence a \emph{topological quiver}, see~\cite{Muhly-Tomforde-2005-Topological-quivers}*{Definition 3.1}. They call the quiver \((X,\lambda)\) proper if \(b\) is a proper map and \(f\) is a local homeomorphism. \cite{Muhly-Tomforde-2005-Topological-quivers}*{Theorem 3.11} asserts that the \(\mathrm{C}^*\)\nb-correspondence \(\Hilm(X)\) from \(\mathrm{C_0}(Y)\) to itself is proper if and only if the quiver \((X,\lambda)\) is proper. One can readily see that \cite{Muhly-Tomforde-2005-Topological-quivers}*{Theorem 3.11} is valid for a topological correspondence of spaces if the family of measures on the middle space has full support.
\medskip
Returning to our example of the topological correspondence \(Y\xleftarrow{b} X\xrightarrow[\lambda]{f} Z\), assume that \(\lambda\) has full support, \(f\) is a local homeomorphism and \(b\) is proper. Then Example~\ref{exa:proper-basic-corr} shows that \((X,\tau^f)\) is a proper correspondence from \(Y\) to \(Z\). We show that the topological correspondences \((X,\lambda)\) and \((X,\tau^f)\) are isomorphic. Then Proposition~\ref{prop:prop-iso-corr} implies that \((X,\lambda)\) is proper.
We claim that the identity map, \(\textup{Id}_X\colon X\to X\), gives the isomorphism \((X,\lambda)\to (X,\tau^f)\) of correspondences. Define
\[
D\colon X \to \mathbb{R}, \quad D(x)= \lambda_{f(x)}(\{x\})\;.
\]
Then \(D\) is positive since \(\lambda\) has full support, and continuous since \(\lambda\) is continuous. We note that
\[
D(x')=\frac{\mathrm{d}\lambda_{f(x)}}{\mathrm{d}\tau^f_{f(x)}}(x')
\]
where the latter term is the Radon-Nikodym derivative of \(\lambda_{f(x)}\) with respect to \(\tau^f_{f(x)}={(\textup{Id}_X)}_*(\tau^f_{f(x)})\). This implies that \({(\textup{Id}_X)}_*(\tau^f_{z})\sim \lambda_{z}\) for every \(z\in Z\) and the Radon-Nikodym derivative is continuous.
Thus \(\textup{Id}_X\) gives the isomorphism of the correspondences \((X,\lambda)\to (X,\tau^f)\), and the corresponding positive Radon-Nikodym derivative is given by the function \(D\) above.
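A minimal illustration (with data of our choosing): let \(Y=X=Z=\mathbb{R}\), \(b=f=\textup{Id}_{\mathbb{R}}\), and \(\lambda_z=w(z)\delta_z\) for a continuous function \(w>0\). Then \(\tau^f_z=\delta_z\), the function above is \(D(x)=w(x)\), and
\[
\frac{\mathrm{d}\lambda_z}{\mathrm{d}\tau^f_z}\equiv w(z),
\]
so \(\textup{Id}_{\mathbb{R}}\) is an isomorphism of correspondences \((X,\lambda)\to(X,\tau^f)\) with continuous positive Radon-Nikodym derivative \(w\).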
The isomorphism of correspondences in this example generalises to {\'etale} groupoids, see Proposition~\ref{prop:gen-etale-corr-iso}.
\end{example}
\medskip
\cite{Muhly-Tomforde-2005-Topological-quivers}*{Theorem 3.11} implies that if \(Y\xleftarrow{b} X\xrightarrow[\lambda]{f} Z\) is a topological correspondence of spaces, then the corresponding \(\mathrm{C}^*\)\nb-correspondence is proper if and only if the left momentum map \(b\) is proper and the right momentum map \(f\) is a local homeomorphism.
\begin{proposition}
\label{prop:etale-corr-iso-kk-cycles}
Let \(X,Y\) and \(Z\) be spaces, \(Y\xleftarrow{b}X\xrightarrow{f}Z\) maps, and \(\lambda\) and \(\lambda'\) two continuous families of measures along \(f\) having full support. Assume that \(f\) is a local homeomorphism and \(b\) a proper map. Thus \((X,\lambda)\) and \((X,\lambda')\) are topological correspondences from \(Y\) to \(Z\) (Example~\ref{exa:prop-quiver}). Then, in \(\mathrm{KK}(\mathrm{C_0}(Y),\mathrm{C_0}(Z))\), \([(\Hilm(X,\lambda)),0]=[(\Hilm(X,\lambda')),0]\).
\end{proposition}
\begin{proof}
Example~\ref{exa:prop-quiver} shows that \((X,\lambda)\) and \((X,\lambda')\) are both isomorphic to \((X,\tau^f)\). Hence the required \(\mathrm{KK}\)-classes are the same.
\end{proof}
\subsection{The case of {\'etale} groupoids}
\label{sec:ex-egpd-case}
Let \(G\) be an {\'etale} groupoid. Assume that \(\base\) is a locally compact Hausdorff subspace of \(G\) and that \(G\) is covered by countably many compact sets whose interiors form a basis for the topology of \(G\). Then \(G\) is a locally compact groupoid in the sense of~\cite{PatersonA1999Gpd-InverseSemigps-Operator-Alg}*{Definition 2.2.1}. In this section, the term \emph{{\'etale} groupoid} shall stand for groupoids which are {\'etale} and fulfil the above conditions. In this case, \(G\) is a locally compact groupoid equipped with a Haar system, and it has a nice approximate identity: one writes \(\base\) as an increasing union of compact sets \(K_n\), \(n\in\mathbb{N}\), whose interiors cover \(\base\). Then, using Urysohn's lemma, one gets functions \(u_n\in \mathrm{C_c}(\base)\) for each \(n\in\mathbb{N}\) such that \(0\leq u_n\leq 1\) and \(u_n=1\) on \(K_n\). The sequence \(\{u_n\}_{n\in\mathbb{N}}\) is an approximate identity for the {\Star}algebra \(\mathrm{C_c}(G)\); for details see~\cite{PatersonA1999Gpd-InverseSemigps-Operator-Alg}*{pages 47--49}.
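For instance (our remark, a special case of the construction above): if \(\base\) is compact, one may take \(K_n=\base\) and \(u_n=\chi_{\base}\), the characteristic function of \(\base\), for all \(n\). Since \(G\) is {\'etale}, \(\base\) is open in \(G\), so \(\chi_{\base}\in \mathrm{C_c}(G)\), and for \(f\in\mathrm{C_c}(G)\) one computes
\[
(f*\chi_{\base})(\gamma)=\sum_{\gamma'\gamma''=\gamma}f(\gamma')\,\chi_{\base}(\gamma'')=f(\gamma),
\]
because the only term with \(\gamma''\in\base\) is \(\gamma''=s_G(\gamma)\), \(\gamma'=\gamma\). Thus \(\chi_{\base}\) is in fact a unit for \(\mathrm{C_c}(G)\) in this case.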
For an {\'etale} groupoid \(G\), \(\mathrm{C}^*(\base)=\mathrm{C_0}(\base)\) is a subalgebra of \(\mathrm{C}^*(G)\), and the map \(\mathrm{C_c}(G)\to\mathrm{C_c}(\base)\), \(f\mapsto f|_{\base}\), of {\Star}algebras induces an expectation \(\mathrm{C}^*(G)\to \mathrm{C_0}(\base)\); see~\cite{Renault1980Gpd-Cst-Alg}*{page 61} for details.
In this section, during computations, the canonical Haar system on the {\'etale} groupoid \(G\) shall be denoted by \(\alpha\) and the one on \(H\) by \(\beta\).
Let \(G\) and \(H\) be {\'etale} groupoids and \(X\) a \(G\)-\(H\)-bispace. Assume that the momentum map \(s_X\colon X\to \base[H]\) is a local homeomorphism. Let \(\tau^{s_X}\) be the family of point-mass measures along \(s_X\) (see Equation~\eqref{eq:etale-measure-family}). Assume that \((X,\tau^{s_X})\) is a topological correspondence, that is, the action of \(H\) is proper and each measure in \(\tau^{s_X}\) is \(G\)\nb-quasi-invariant. Then we call \(X\), or to be precise \((X,\tau^{s_X})\), an \emph{{\'etale} correspondence} from \(G\) to \(H\). The adjoining function of this correspondence is the constant function 1. This is because, for each \(u\in \base[H]\), the measure \(\tau^{s_X}_u\) on the space of units of the transformation groupoid \(G\ltimes X\) is quasi-invariant; here \(G\ltimes X\) is an {\'etale} groupoid, and the adjoining function is the modular function of \(G\ltimes X\) for \(\tau^{s_X}_u\), which is the constant function 1 due to~\cite{Renault1980Gpd-Cst-Alg}*{1.3.22}.
\begin{proposition}
\label{prop:gen-etale-corr-iso}
Let \(G\) and \(H\) be {\'etale} groupoids and \(X\) a \(G\)\nb-\(H\)-bispace. Assume that the right momentum map \(s_X\colon X\to \base[H]\) is a local homeomorphism. Let \(\lambda\) be a continuous family of measures with full support along \(s_X\). Assume that \((X,\lambda)\) is a topological correspondence from \(G\) to \(H\). Let \(\tau^{s_X}\) be the family of measures consisting of point-masses along \(s_X\). Then \((X,\tau^{s_X})\) is a topological correspondence from \(G\) to \(H\) and the identity map \(\textup{Id}_X\colon X\to X\) induces an isomorphism of topological correspondences.
\end{proposition}
\begin{proof}
Since the family of measures \(\lambda\) is continuous and has full support, the function
\[
D\colon X\to \mathbb{R}^+,\quad D(x)= \lambda_{s_X(x)}(\{x\}),
\]
is continuous. Since \(\lambda\) is \(H\)\nb-invariant, so is \(D\). Then
\[
\mathrm{C_c}(X)\to \mathrm{C_c}(X), \quad f\mapsto fD,
\]
is an isomorphism of complex vector spaces.
Now we note that, for \(f\in\mathrm{C_c}(X)\) and every \(u\in \base[H]\),
\begin{equation}
\label{eq:gen-etale-corr-iso}
\lambda(f)(u)=\sum_{x\in s_X^{-1}(u)}f(x)D(x)=\tau^{s_X}(fD)(u).
\end{equation}
Thus \(\lambda_u\sim\tau^{s_X}_u\) for every \(u\in\base[H]\), and the Radon-Nikodym derivative \(\frac{\mathrm{d}\lambda_u}{\mathrm{d}\tau^{s_X}_u}(x)=D(x)\) is continuous. Now it follows from~\cite{Renault1980Gpd-Cst-Alg}*{Proposition 1.3.3(ii)} that \(\tau^{s_X}_u\) is \(G\ltimes X\)-quasi-invariant for each \(u\in\base[H]\). This shows that \((X,\tau^{s_X})\) is a topological correspondence from \(G\) to \(H\). Since \(\lambda_u\sim\tau^{s_X}_u\), which follows from Equation~\eqref{eq:gen-etale-corr-iso}, we have also proved that the identity map \(X\to X\) is an isomorphism of the topological correspondences \((X,\lambda)\) and \((X,\tau^{s_X})\).
\end{proof}
The converse of Proposition~\ref{prop:gen-etale-corr-iso} clearly holds. Let \(G\) and \(H\) be {\'etale} groupoids, \(X\) a \(G\)\nb-\(H\)-bispace and \(s_X\colon X\to \base[H]\) a local homeomorphism such that \((X,\tau^{s_X})\colon G\to H\) is a topological correspondence. If \(D\colon X\to \mathbb{R}\) is an \(H\)\nb-invariant positive continuous function, then \(D\tau^{s_X}\defeq\{D\tau^{s_X}_u\}_{u\in\base[H]}\) is an \(H\)\nb-invariant family of measures, and since each measure \(D\tau^{s_X}_u\) in this family is equivalent to \(\tau^{s_X}_u\), \cite{Renault1980Gpd-Cst-Alg}*{Proposition 1.3.3(ii)} implies that \(D\tau^{s_X}_u\) is a \(G\ltimes X\)\nb-quasi-invariant measure on its space of units.
Let \(X\) be a \(G\)\nb-\(H\)-bispace and let \(s_X\colon X\to \base[H]\) be a local homeomorphism. Assume that \((X,\tau^{s_X})\colon G\to H\) is a topological correspondence. Then Proposition~\ref{prop:etale-gpd-loc-free-act} implies that this correspondence is proper if \(\mathrm{m}_{s_X}\colon G\times_{\base} X\to X\times_{s_X,\base[H],s_X} X\) is open and the remaining two conditions in Definition~\ref{def:prop-corr} hold. Due to Proposition~\ref{prop:prop-iso-corr}, the correspondence \((X,\lambda)\) in Proposition~\ref{prop:gen-etale-corr-iso} above is proper if and only if \((X,\tau^{s_X})\) is proper. Thus, for a correspondence \((X,\lambda)\) as in Proposition~\ref{prop:gen-etale-corr-iso}, the space \(X\) determines the correspondence up to isomorphism of correspondences.
Clearly, the correspondences of spaces in Examples~\ref{exa:proper-basic-corr} and~\ref{exa:prop-quiver} are {\'etale} correspondences. The following are a few more examples of {\'etale} correspondences.
\begin{example}
\label{exa:space-to-gpd-etale-corr}
Let \(Y\) be a space considered as a groupoid, \(H\) an {\'etale} groupoid and \(X\) a \(Y\)\nb-\(H\)-bispace. Let \(s_X\colon X\to\base[H]\) be a local homeomorphism. Then \((X,\tau^{s_X})\colon Y\to H\) is a topological correspondence. This correspondence is proper if the left momentum map \(r_X\colon X\to Y\) is proper.
To confirm that \((X,\tau^{s_X})\) is a topological correspondence, one needs to check that \(\tau^{s_X}_u\) is \(Y\)\nb-quasi-invariant. This follows directly if one substitutes appropriate values in the required equation in~\cite{Holkar2017Construction-of-Corr}*{Definition 2.1}.
Now assume that \(r_X\) is proper. To see that this correspondence is proper, we need to check that Definition~\ref{def:prop-corr}(iii) holds, as the other conditions obviously hold. The map \(\mathrm{m}_{s_X}\colon Y\times_{\textup{Id}_Y,Y,r_X} X\to X\times_{s_X,\base[H],s_X}X\) is \((r_X(x),x)\mapsto (x,x)\), and checking the necessary condition is the same as in Example~\ref{exa:proper-basic-corr}. Thus the proof follows from the same computation as in Example~\ref{exa:proper-basic-corr}.
\end{example}
Let \(G\) and \(H\) be {\'etale} groupoids and \(X\) an {\'etale} correspondence from \(G\) to \(H\). Then \(\Hilm(X)\) is a Hilbert \(\mathrm{C}^*(H)\)\nb-module; let \(\sigma_G\colon \mathrm{C}^*(G)\to\mathbb{B}(\Hilm(X))\) be the representation which gives the \(\mathrm{C}^*\)\nb-correspondence. One may restrict the codomain of \(r_X\) to \(\base\) and consider \(X\) as an {\'etale} correspondence from \(\base\) to \(H\); we call this the correspondence obtained from \(X\colon G\to H\) by restricting to \(\base\). Note that in this case \(\Hilm(X)\) is the same Hilbert \(\mathrm{C}^*(H)\)\nb-module. Since \(\mathrm{C_0}(\base)\subseteq \mathrm{C}^*(G)\) is a \(\mathrm{C}^*\)\nb-subalgebra, the representation \(\sigma_{\base}\colon \mathrm{C_0}(\base)\to \mathbb{B}(\Hilm(X))\) is obtained by restricting \(\sigma_G\) to \(\mathrm{C_0}(\base)\). Thus
\begin{equation}
\label{eq:space-to-e-gpd-corr}
\left\{\begin{aligned}
X&\colon G\to H &\text{gives } \quad (\Hilm(X),\sigma_G)&\colon \mathrm{C}^*(G)\to \mathrm{C}^*(H),\\
X&\colon \base\to H &\text{gives }\quad (\Hilm(X), \sigma_{\base})&\colon \mathrm{C_0}(\base)\to \mathrm{C}^*(H).
\end{aligned}\right.
\end{equation}
\begin{proposition}\label{prop:prop-etale-cor-char}
Let \(G\) and \(H\) be {\'etale} groupoids and \(X\) an {\'etale} correspondence from \(G\) to \(H\). Then the following hold:
\begin{enumerate}[i)]
\item The \(\mathrm{C}^*\)\nb-correspondence \((\Hilm(X),\sigma_G)\colon \mathrm{C}^*(G)\to \mathrm{C}^*(H)\) is proper if and only if the correspondence \(X\colon \base\to H\), obtained by restricting \(X\colon G\to H\) to \(\base\), is proper in the sense of Definition~\ref{def:prop-corr}.
\item If \(r_X\colon X\to \base\) is a proper map, then the \(\mathrm{C}^*\)\nb-correspondence \((\Hilm(X),\sigma_G)\) from \(\mathrm{C}^*(G)\) to \(\mathrm{C}^*(H)\) is proper.
\end{enumerate}
\end{proposition}
\begin{proof}
(i): Let \(\iota\colon \mathrm{C_0}(\base)\hookrightarrow \mathrm{C}^*(G)\) be the inclusion. Then the representations in Equation~\eqref{eq:space-to-e-gpd-corr} are related by \(\sigma_{\base}=\sigma_G\circ \iota\); this can be checked by writing formulae for the representations of the pre-\(\mathrm{C}^*\)\nb-algebras \(\mathrm{C_c}(\base)\) and \(\mathrm{C_c}(G)\) on the pre-Hilbert \(\mathrm{C}^*(H)\)\nb-module \(\mathrm{C_c}(X)\). Therefore, if \(\sigma_G(\mathrm{C}^*(G))\subseteq \mathbb{K}(\Hilm(X))\), then \(\sigma_{\base}(\mathrm{C_0}(\base))\subseteq \mathbb{K}(\Hilm(X))\).
Conversely, assume that \(X\colon \base\to H\) is proper. Then Theorem~\ref{thm:prop-corr-main-thm} says that \(\sigma_{\base}(\mathrm{C_0}(\base))\subseteq \mathbb{K}(\Hilm(X))\). Let \(\{u_i\}\) be the approximate identity of the pre-\(\mathrm{C}^*\)\nb-algebra \(\mathrm{C_c}(G)\) with each \(u_i\in\mathrm{C_c}(\base)\); such an approximate identity is discussed at the beginning of this section. Note that for \(u\in \mathrm{C_c}(\base)\), \(\sigma_G(u)(\xi)=\sigma_{\base}(u)(\xi)\) for every \(\xi\in\mathrm{C_c}(X)\). Let \(f\in \mathrm{C_c}(G)\). Then for every \(\xi\in\mathrm{C_c}(X)\), we have
\[
\sigma_G(f)(\xi)=\lim_i\sigma_G(f u_i)(\xi)=\lim_i\sigma_G(f)\sigma_{G}(u_i)(\xi)
\]
where each \(\sigma_G(f)\sigma_{G}(u_i)\in \mathbb{K}(\Hilm(X))\). Therefore, \(\sigma_G(f)=\lim_i\sigma_G(f)\sigma_{G}(u_i)\) is also in \(\mathbb{K}(\Hilm(X))\).
\noindent (ii): If \(r_X\colon X\to \base\) is a proper map, then Examples~\ref{exa:space-to-gpd-etale-corr} says that \(X\colon \base\to H\) is a proper correspondence in the sense of Definition~\ref{def:prop-corr}. Theorem~\ref{thm:prop-corr-main-thm} implies that
\((\Hilm(X),\sigma_G)\colon \mathbb{C}st(G)\to \mathbb{C}st(H)\) is a proper \(\mathbb{C}st\)\nb-correspondence.
\mathrm{e}nd{proof}
\begin{lemma}
\label{lem:etale-corr-full-supp}
Let \(G\) and \(H\) be {\'etale} groupoids, and let \((X,\lambda)\colon G\to H\) be a proper topological correspondence in the sense of Definition~\ref{def:prop-corr}. Then
\begin{enumerate}[i)]
\item for each \(u\in \base[H]\), the measure \(\lambda_u\) is atomic,
\item for each \(u\in \base[H]\), each atom of \(\lambda_u\) is nonzero and the function \(\mathrm{RN}\) appearing in Definition~\ref{def:prop-corr}(3) is positive. Thus the family of measures \(\lambda\) has full support.
\end{enumerate}
\end{lemma}
\begin{proof}
In this case, since \(r_X\colon X\to\base\) is a proper map, Examples~\ref{exa:space-to-gpd-etale-corr} says that the correspondence \(X\colon\base\to H\) obtained by restricting \(X\colon G\to H\) is proper. We prove the claims of present lemma for the restricted correspondence. If one writes the map \(\mathrm{m}_{s_X}\) for the restricted correspondence, then one observes that the proof of both the claims run on the same lines as the one of Lemma~\ref{lem:prop-quiver-labda-is-atomic}.
\mathrm{e}nd{proof}
\paragraph{\bfseries Acknowledgement:} We are grateful to Ralf Meyer, Suliman Albandik and Alcides Buss for many fruitful discussions. Special thanks to S.\,Albandik for many motivating discussions. We thank Ralf Meyer and Jean Renault for checking the manuscript, pointing out mistakes and suggesting improvements.
This work was done during the author's stay at the Federal University of Santa Catarina, Florian{\'o}polis, Brazil, and the Indian Institute of Science Education and Research, Pune, India. The author is thankful to both institutes for their hospitality. We are thankful to CNPq, Brazil, and SERB, India, whose funding made this work possible.
\begin{bibdiv}
\begin{biblist}
\bib{Bourbaki1966Topologoy-Part-1}{book}{
author={Bourbaki, Nicolas},
title={Elements of mathematics. {G}eneral topology. {P}art 1},
publisher={Hermann, Paris; Addison-Wesley Publishing Co., Reading,
Mass.-London-Don Mills, Ont.},
date={1966},
review={\MR{0205210 (34 \#5044a)}},
}
\bib{Bourbaki2004Integration-I-EN}{book}{
author={Bourbaki, Nicolas},
title={Integration. {I}. {C}hapters 1--6},
series={Elements of Mathematics (Berlin)},
publisher={Springer-Verlag, Berlin},
date={2004},
ISBN={3-540-41129-1},
note={Translated from the 1959, 1965 and 1967 French originals by
Sterling K. Berberian},
review={\MR{2018901 (2004i:28001)}},
}
\bib{Buss-Holkar-Meyer2016Gpd-Uni-I}{article}{
author={Buss, Alcides},
author={Holkar, Rohit~Dilip},
author={Meyer, Ralf},
title={A universal property for groupoid {$C^*$}-algebras},
date={2016},
note={Preprint- arxiv:1612.04963v1},
}
\bib{Candel-Conlon2000Foliations-I}{book}{
author={Candel, Alberto},
author={Conlon, Lawrence},
title={Foliations. {I}},
series={Graduate Studies in Mathematics},
publisher={American Mathematical Society, Providence, RI},
date={2000},
volume={23},
ISBN={0-8218-0809-5},
review={\MR{1732868}},
}
\bib{Folland1995Harmonic-analysis-book}{book}{
author={Folland, Gerald~B.},
title={A course in abstract harmonic analysis},
series={Studies in Advanced Mathematics},
publisher={CRC Press, Boca Raton, FL},
date={1995},
ISBN={0-8493-8490-7},
review={\MR{1397028 (98c:43001)}},
}
\bib{Hilsum-Skandalis1987Morphismes-K-orientes-deSpaces-deFeuilles-etFuncto-KK}{article}{
author={Hilsum, Michel},
author={Skandalis, Georges},
title={Morphismes {$K$}-orient\'es d'espaces de feuilles et
fonctorialit\'e en th\'eorie de {K}asparov (d'apr\`es une conjecture d'{A}.
{C}onnes)},
date={1987},
ISSN={0012-9593},
journal={Ann. Sci. \'Ecole Norm. Sup. (4)},
volume={20},
number={3},
pages={325\ndash 390},
url={http://www.numdam.org/item?id=ASENS_1987_4_20_3_325_0},
review={\MR{925720 (90a:58169)}},
}
\bib{mythesis}{thesis}{
author={Holkar, Rohit~Dilip},
title={Topological construction of $\textup{C}^*$-correspondences for
groupoid ${C}^*$-algebras},
type={Ph.D. Thesis},
date={2014},
}
\bib{Holkar2017Composition-of-Corr}{article}{
author={Holkar, Rohit~Dilip},
title={Composition of topological correspondences},
date={2017},
journal={Journal of {O}perator {T}heory},
volume={78.1},
pages={89\ndash 117},
}
\bib{Holkar2017Construction-of-Corr}{article}{
author={Holkar, Rohit~Dilip},
title={Topological construction of {$C^*$}-correspondences for groupoid
{$C^*$}-algebras},
date={2017},
journal={Journal of {O}perator {T}heory},
volume={77:1},
number={23-24},
pages={217\ndash 241},
}
\bib{Holkar-Renault2013Hypergpd-Cst-Alg}{article}{
author={Holkar, Rohit~Dilip},
author={Renault, Jean},
title={Hypergroupoids and {$C^*$}-algebras},
date={2013},
ISSN={1631-073X},
journal={C. R. Math. Acad. Sci. Paris},
volume={351},
number={23-24},
pages={911\ndash 914},
url={http://dx.doi.org/10.1016/j.crma.2013.11.003},
review={\MR{3133603}},
}
\bib{Khoshkam-Skandalis2002Reg-Rep-of-Gpd-Cst-Alg-and-Applications}{article}{
author={Khoshkam, Mahmood},
author={Skandalis, Georges},
title={Regular representation of groupoid {$C^*$}-algebras and
applications to inverse semigroups},
date={2002},
ISSN={0075-4102},
journal={J. Reine Angew. Math.},
volume={546},
pages={47\ndash 72},
url={http://dx.doi.org/10.1515/crll.2002.045},
review={\MR{1900993}},
}
\bib{Stadler-Ouchi1999Gpd-correspondences}{article}{
author={Macho~Stadler, Marta},
author={O'uchi, Moto},
title={Correspondence of groupoid {$C^\ast$}-algebras},
date={1999},
ISSN={0379-4024},
journal={J. Operator Theory},
volume={42},
number={1},
pages={103\ndash 119},
review={\MR{1694789 (2000f:46077)}},
}
\bib{Meyer-Zhu2014Groupoids-in-categories-with-pretopology}{article}{
author={{Meyer}, R.},
author={{Zhu}, C.},
title={{Groupoids in categories with pretopology}},
date={2014-08},
journal={ArXiv e-print},
eprint={1408.5220},
}
\bib{Muhly-Renault-Williams1987Gpd-equivalence}{article}{
author={Muhly, Paul~S.},
author={Renault, Jean~N.},
author={Williams, Dana~P.},
title={Equivalence and isomorphism for groupoid {$C^\ast$}-algebras},
date={1987},
ISSN={0379-4024},
journal={J. Operator Theory},
volume={17},
number={1},
pages={3\ndash 22},
review={\MR{873460 (88h:46123)}},
}
\bib{Muhly-Tomforde-2005-Topological-quivers}{article}{
author={Muhly, Paul~S.},
author={Tomforde, Mark},
title={Topological quivers},
date={2005},
ISSN={0129-167X},
journal={Internat. J. Math.},
volume={16},
number={7},
pages={693\ndash 755},
url={http://dx.doi.org/10.1142/S0129167X05003077},
review={\MR{2158956 (2006i:46099)}},
}
\bib{PatersonA1999Gpd-InverseSemigps-Operator-Alg}{book}{
author={Paterson, Alan L.~T.},
title={Groupoids, inverse semigroups, and their operator algebras},
series={Progress in Mathematics},
publisher={Birkh\"auser Boston, Inc., Boston, MA},
date={1999},
volume={170},
ISBN={0-8176-4051-7},
url={http://dx.doi.org/10.1007/978-1-4612-1774-9},
review={\MR{1724106 (2001a:22003)}},
}
\bib{Renault1980Gpd-Cst-Alg}{book}{
author={Renault, Jean},
title={A groupoid approach to {$C^{\ast} $}-algebras},
series={Lecture Notes in Mathematics},
publisher={Springer, Berlin},
date={1980},
volume={793},
ISBN={3-540-09977-8},
review={\MR{584266 (82h:46075)}},
}
\bib{Renault1985Representations-of-crossed-product-of-gpd-Cst-Alg}{article}{
author={Renault, Jean},
title={Repr\'esentation des produits crois\'es d'alg\`ebres de
groupo\"\i des},
date={1987},
ISSN={0379-4024},
journal={J. Operator Theory},
volume={18},
number={1},
pages={67\ndash 97},
review={\MR{912813 (89g:46108)}},
}
\bib{Renault2014Induced-rep-and-Hpgpd}{article}{
author={Renault, Jean},
title={Induced representations and hypergroupoids},
date={2014},
ISSN={1815-0659},
journal={SIGMA Symmetry Integrability Geom. Methods Appl.},
volume={10},
pages={Paper 057, 18},
review={\MR{3226993}},
}
\bib{Tu2004NonHausdorff-gpd-proper-actions-and-K}{article}{
author={Tu, Jean-Louis},
title={Non-{H}ausdorff groupoids, proper actions and {$K$}-theory},
date={2004},
ISSN={1431-0635},
journal={Doc. Math.},
volume={9},
pages={565\ndash 597 (electronic)},
review={\MR{2117427 (2005h:22004)}},
}
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
A horospherical variety is a normal algebraic variety where a connected
reductive algebraic group acts with an open orbit isomorphic to a torus bundle over a flag variety.
In this article we study the cohomology of line bundles on complete horospherical varieties.
\end{abstract}
\tableofcontents
\section{Introduction}
Let $G$ be a connected reductive algebraic group over the field of complex numbers $\mathbb C$ and let $H$ be a closed subgroup of $G$.
A homogeneous space $G/H$ is said to be horospherical if $H$ contains the unipotent radical of a Borel subgroup of $G$, or equivalently, $G/H$ is isomorphic to a torus bundle over a flag variety $G/P$.
A normal $G$-variety is called horospherical if it contains an open dense $G$-orbit isomorphic to a horospherical homogeneous space $G/H$.
Toric varieties and flag varieties are horospherical varieties (see \cite{Pa} for more details).
Horospherical varieties form a special class of spherical varieties. A spherical variety is a normal $G$-variety $X$ such that there exist $x\in X$ and a Borel subgroup $B$ of $G$ for which the
$B$-orbit of $x$ is open in $X$.
Let $\mathcal L$ be a line bundle on $X$. Up to replacing $G$ by a finite cover, we can assume that $G=C\times [G, G]$, where $C$ is a torus and $[G, G]$ is simply connected, hence one can assume that $\mathcal{L}$ is $G$-linearized. If $X$ is complete,
then the cohomology
groups $H^i(X, \mathcal L)$ are finite dimensional representations of $G$.
If $X$ is a flag variety, the Borel-Weil-Bott theorem describes these cohomology groups.
If $X$ is a toric variety, the cohomology groups can be described using the combinatorics of its associated fan (see \cite{Cox}).
For spherical varieties, Brion gave a bound on the multiplicities of these modules in \cite{brion94}. In \cite{Brion1990}, Brion proved that if $X$ is projective spherical and $\mathcal L$ is generated by global sections,
then the cohomology groups $H^i(X, \mathcal L)$ vanish for $i>0$. In \cite{Tchoudjem1}, \cite{Tchoudjem2}, \cite{Tchoudjem3} and \cite{Tchoudjem4},
Tchoudjem studied these cohomology groups in the cases where $X$ is a compactification of an adjoint semisimple
group, a wonderful compactification of a reductive group, a wonderful variety of minimal rank and
a complete symmetric variety, respectively. In this article we consider the cohomology of line bundles on complete horospherical varieties.
We prove that these cohomology groups split into cohomology groups of line bundles on flag varieties and on toric varieties.
Our main tool in this article is the machinery of Grothendieck-Cousin complexes (see \cite{Ke1} for more details), and we also prove a K\"{u}nneth-like formula for local cohomology.
To describe our results we need some notation:
let $X$ and $Y$ be affine schemes. Let $Z_1\subset X$ and $Z_2\subset Y$ be two locally complete intersection subschemes, of respective codimensions $l_1$ and $l_2$.
Let $L_1$ and $L_2$ be two invertible sheaves respectively on $X$ and $Y$. Let $p_1:X\times Y \to X$ and $p_2:X\times Y\to Y$ be the projections, and
let ${\mathcal L}_i:=p_i^*L_i$. Assume $X$ and $Y$ are irreducible and Cohen-Macaulay.
Then we have the following isomorphism (See Theorem \ref{kunneth-local}):
\begin{theorem*}
$
H^{l_1+l_2}_{Z_1\times Z_2}(X\times Y, {\mathcal L}_1\otimes{\mathcal L}_2)\simeq H^{l_1}_{Z_1}(X,L_1)\otimes H^{l_2}_{Z_2}(Y,L_2).
$
\end{theorem*}
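The simplest instance of this statement, not spelled out in the paper but perhaps useful for orientation, is the following standard computation. Take $X=Y=\mathbb{A}^1$, $Z_1=Z_2=\{0\}$ (so $l_1=l_2=1$) and $L_1=L_2=\mathcal O$. Then
\[
H^1_{\{0\}}(\mathbb{A}^1,\mathcal O)\simeq \mathbb{C}[t,t^{-1}]/\mathbb{C}[t],
\]
with basis $\{t^{-1},t^{-2},\dots\}$, while
\[
H^2_{\{(0,0)\}}(\mathbb{A}^2,\mathcal O)\simeq \mathbb{C}[x^{\pm 1},y^{\pm 1}]\big/\bigl(\mathbb{C}[x^{\pm 1},y]+\mathbb{C}[x,y^{\pm 1}]\bigr)
\]
has basis $\{x^{-i}y^{-j} : i,j\geq 1\}$, which is indeed the tensor product of the two one-variable answers.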
Let $X$ be a complete horospherical variety. Any Weil divisor $D$ of $X$ is linearly equivalent to a $B$-stable divisor, that is, to one of the form (not necessarily unique)
$$D\approx \sum_{i=1}^kd_iX_i+\sum_{\alpha\in I}d_{\alpha}D_{\alpha},$$
where the $X_i$ are $G$-stable prime divisors, the $D_{\alpha}$ are $B$-stable prime divisors, and $I$ is a subset of the set of simple roots of $G$ (for more details see Section 2).
If we further assume that $X$ is smooth and toroidal, then $D\approx p^*E_1 + E_2$, where $E_1\approx \sum\limits_{\alpha\in I}d_{\alpha}D_{\alpha}$ is a divisor on the base space $G/P$, $p:X\to G/P$ is the projection map, and
$E_2\approx \sum\limits_{i=1}^kd_iX_i$ can be seen as a divisor coming from the toric fibre $Y$, that is $X_i=G\times^PY_i$ where $Y_i$ is a toric prime divisor in $Y$ (see Section 2 for precise details).
Let $T$ be a maximal torus of a Borel subgroup $B$ and let $T'$ be
the torus corresponding to the toric fibre $Y$. Let $E_2'=\sum_{i=1}^kd_iY_i$.
Let $\mathfrak{g}$ be the Lie algebra of $G$.
Then we prove the following (See Corollary \ref{horospherical}):
\begin{corollary*}
Let $X$ be a smooth complete toroidal horospherical variety. There is a (noncanonical) isomorphism of $T\times T'$-modules and of $\mathfrak{g}$-modules
\[
H^n(X, \mathcal O _X(D))\simeq H^n(X,\mathcal{O}_X(p^*E_1 + E_2)) = \bigoplus\limits_{p+q=n}H^p(G/P,{\mathcal O}_{G/P}(E_1))\otimes H^q(Y,{\mathcal O}_Y(E'_2))
\]
\end{corollary*}
\begin{remark*}
The $T\times T'$-module and $\mathfrak{g}$-module structures on these cohomology groups are compatible. By the Borel-Weil-Bott theorem, at most one of the factors in the previous direct sum is nonzero.
\end{remark*}
The case of other complete horospherical varieties is straightforward: they all admit a $G$-equivariant resolution by a smooth complete toroidal horospherical variety (see Proposition \ref{nontoroidalcase}).
We now give a more concrete description of the cohomology groups $H^n(X,\mathcal{O}_X(p^*E_1 + E_2))$.
Let $\varpi_{\alpha}$ be the fundamental weight associated to a simple root $\alpha$.
Let $W=W(G,T)$ be the Weyl group, let $W_P$ be the Weyl group of $P$, and let $W^P\subset W$ be the
set of minimal length representatives of the cosets in $W/W_P$.
Let $\rho$ be the half-sum of all positive roots, and define $w\star \lambda:=w(\lambda + \rho) -\rho$ for any $w\in W$ and $\lambda$ in the weight lattice $\Lambda$ of $T$.
Note that the line bundle $\mathcal O_{G/P}(E_1)$ is the homogeneous line bundle
on $G/P$ corresponding to the weight $\sum\limits_{\alpha\in I}d_{\alpha}\varpi_{\alpha}$.
Then the Borel-Weil-Bott theorem states that $H^i(G/P,\mathcal O_{G/P}(E_1))=0$
unless there exists a $w\in W^P$ of length $i$ such that
$w\star (\sum\limits_{\alpha\in I}d_{\alpha}\varpi_{\alpha})$ is a dominant weight
(equivalently, $w(\sum\limits_{\alpha\in I}d_{\alpha}\varpi_{\alpha}+\rho)$ lies in the
interior of the dominant chamber), and in that case $H^{i}(G/P, \mathcal O_{G/P}(E_1))$ is the dual
of the irreducible
highest weight module $V_{w\star (\sum\limits_{\alpha\in I}d_{\alpha}\varpi_{\alpha})}$.
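For instance (this example is ours and is only meant to illustrate the statement), take $G=\mathrm{SL}_2$ and $P=B$, so that $G/P=\mathbb{P}^1$, $\rho=\varpi_\alpha$, $W^P=\{e,s_\alpha\}$ and $\mathcal O_{G/P}(E_1)=\mathcal O_{\mathbb{P}^1}(d)$ with $d=d_\alpha$. Then
\[
e\star d\varpi_\alpha=d\varpi_\alpha,
\qquad
s_\alpha\star d\varpi_\alpha=(-d-2)\varpi_\alpha,
\]
so for $d\geq 0$ only $w=e$ (of length $0$) gives a dominant weight, and $H^0(\mathbb{P}^1,\mathcal O(d))$ is the dual of $V_{d\varpi_\alpha}$, of dimension $d+1$; for $d\leq -2$ only $w=s_\alpha$ (of length $1$) works, and $H^1(\mathbb{P}^1,\mathcal O(d))$ is the dual of $V_{(-d-2)\varpi_\alpha}$; and for $d=-1$ neither does, recovering the fact that $\mathcal O_{\mathbb{P}^1}(-1)$ has no cohomology.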
On the other hand, we can describe the cohomology of line bundles on the smooth projective
toric fibre by a result of Demazure (we refer to \cite[Chapter 9]{Cox} or \cite[Section 3.4]{Ful}
for more details).
Let $N$ be the lattice of one-parameter subgroups of the torus $T'=P/H$ and let $M$ be
the lattice of characters of $T'$. Let $M_{\mathbb R}:=M\otimes \mathbb R$ and
$N_{\mathbb R}:=N\otimes \mathbb R$. Then we have a natural bilinear pairing
$\langle-,-\rangle:M_{\mathbb R}\times N_{\mathbb R}\to \mathbb R.$
Let $\Delta$ be the fan of the toric variety $Y$.
By completeness of $X$, $\Delta$ is a complete fan in $N_{\mathbb R}$, that is, $\bigcup_{\sigma\in \Delta}\sigma =N_{\mathbb R}$.
We denote by $\Delta(l)$ the set of $l$-dimensional cones in $\Delta$.
Then the $T'$-invariant prime divisors of $Y$ are parametrized by the set $\Delta(1)$.
\begin{comment}
For any cone $\sigma\in\Delta$, let $M(\sigma)$ denote the cospan of $\sigma$ in $M$, the , and $M_{\sigma}=M/M(\sigma)$.
Fix a trivialization ${\mathcal O}_Y(E_2)$ on the open orbit $T'\subset Y$.
Let $\sigma\in\Delta$ be a cone, let $A=k[M\cap\sigma^{\vee}]$, and
$Y_{\sigma}=\textnormal{Spec}(A)$.
The choice of a trivialization of ${\mathcal O}_Y(E_2)$ on $T'$ induces an
equality ${\mathcal O}_Y(E_2)(Y_{\sigma})=A.x^{m_{\sigma}}$ with a unique $m_{\sigma}\in M_{\sigma}$. These functions $m_{\sigma}$ on $\sigma$ can be glued together into a single function $\psi_{E_2}$ defined on the support $|\Delta|=N_{\mathbb{R}}$ of $\Delta$, defined on each cone $\sigma$ by $\psi_{E_2}(v)=\langle m_{\sigma},v\rangle$.
\end{comment}
Since the cohomology spaces $H^i(Y,{\mathcal O}_Y(E'_2))$ are $T'$-modules,
these spaces have a weight decomposition:
$$H^i(Y,{\mathcal O}_Y(E'_2))=\bigoplus_{m\in M} H^i(Y,{\mathcal O}_Y(E'_2))_m$$
and by a theorem of Demazure its degree $m$ part is described as follows. Let
$Z_{E'_2}(m):=\{v\in N_{\mathbb{R}} : \langle m,v\rangle \geq \psi_{E'_2}(v)\}$,
where $\psi_{E_2'}$ is the support function corresponding to the divisor $E_2'$
(for the definition of the support function of a divisor we refer to \cite[Theorem 4.2.12]{Cox}).
Then
\[
H^i(Y,{\mathcal O}_Y(E'_2))_m=H^i_{Z_{E'_2}(m)}(N_{\mathbb{R}},\mathbb{C}).
\]
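As an illustration (ours, with the support function conventions of \cite{Cox}), take $Y=\mathbb{P}^1$, so that $N_{\mathbb{R}}=\mathbb{R}$ with the complete fan $\{\mathbb{R}_{\geq 0},\{0\},\mathbb{R}_{\leq 0}\}$, and let $E_2'=dD_0$ with $d\geq 0$, where $D_0$ is the prime divisor of the ray $\mathbb{R}_{\geq 0}$. Then $\psi_{E_2'}(v)=-dv$ for $v\geq 0$ and $\psi_{E_2'}(v)=0$ for $v\leq 0$, so that $Z_{E_2'}(m)=\mathbb{R}$ for $-d\leq m\leq 0$, while $Z_{E_2'}(m)$ is a half-line for the other $m$. Since $H^0_{\mathbb{R}}(\mathbb{R},\mathbb{C})=\mathbb{C}$ and the local cohomology of $\mathbb{R}$ supported on a half-line vanishes, one recovers $\dim H^0(\mathbb{P}^1,\mathcal O(d))=d+1$ and $H^1(\mathbb{P}^1,\mathcal O(d))=0$. For $d\leq -2$ one instead finds $Z_{E_2'}(m)=\{0\}$ for $0<m<-d$, and $H^1_{\{0\}}(\mathbb{R},\mathbb{C})=\mathbb{C}$ gives $\dim H^1(\mathbb{P}^1,\mathcal O(d))=-d-1$.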
Now we can reformulate our result as follows:
\begin{corollary*}
Let $X$ be a smooth complete horospherical variety. Then
\item[1)] If there is no $w\in W^P$ such that $w \star (\sum\limits_{\alpha\in I}d_{\alpha}\varpi_{\alpha})$ is dominant, then $H^n(X,\mathcal{O}_X(p^*E_1 + E_2))=0$ for all $n$.
\item[2)] If there exists $w\in W^P$ such that $w \star (\sum\limits_{\alpha\in I}d_{\alpha}\varpi_{\alpha})$ is dominant, then such a $w$ is unique, and as a $\mathfrak{g}\times T'$-module we have
\[
H^{l(w)+q}(X,\mathcal{O}_X(p^*E_1 + E_2)) = V_{w\star (\sum\limits_{\alpha\in I}d_{\alpha}\varpi_{\alpha})}\otimes \bigoplus\limits_{m\in M}H^q_{Z_{E'_2}(m)}(N_{\mathbb{R}},\mathbb{C})\]
and $H^i(X,\mathcal{O}_X(p^*E_1 + E_2))=0$ for $i<l(w)$.
\end{corollary*}
This paper is organized as follows. Section 2 reviews background on horospherical varieties. Section 3 is devoted to proving Theorem \ref{kunneth-local}. In Section 4, we recall the
Cousin complexes and show how to compute, under suitable assumptions, the cohomology of line bundles on some locally trivial fibrations. The behaviour of the resulting isomorphisms with respect to actions of
algebraic groups is investigated in Section 5, and we prove Corollary \ref{horospherical} in Section 6. It should be noted that the results of Sections 3 to 5 are characteristic-free.
\section{Preliminaries on horospherical varieties}
In this section we recall some basic definitions and properties of complete horospherical varieties.
For more details we refer to \cite{Pa} and \cite{perrin2014geometry}.
Let $G$ be a connected reductive algebraic group over the field of complex numbers. A $G$-variety is a reduced scheme of finite type over the field of complex numbers, with an algebraic action of $G$.
Let $B$ be a Borel subgroup of $G$.
\begin{definition}
A spherical variety is a normal $G$-variety with a dense open $B$-orbit.
\end{definition}
These varieties have many interesting properties, for example, there are only finitely many $B$-orbits (see \cite{brion1986quelques}).
Here, we consider a special class of spherical varieties, called horospherical varieties.
\begin{definition}
Let $H$ be a closed subgroup of $G$. Then $H$ is said to be horospherical if it contains the unipotent radical of a Borel subgroup of $G$.
\end{definition}
We also say that the homogeneous space $G/H$ is horospherical if $H$ is horospherical. Up to conjugation, we can assume that $H$ contains $U$, the maximal unipotent subgroup of $G$ contained in
the Borel subgroup $B$.
Let $P$ be the normalizer $N_{G}(H)$ of $H$ in $G$. It is a parabolic subgroup of $G$, and $T':=P/H$ is a torus.
Then it is clear that $G/H$ is a torus bundle over the flag variety $G/P$ with fibers $P/H$.
\begin{definition}
Let $X$ be a $G$-spherical variety and let $H$ be the stabilizer of a point in the dense $G$-orbit in $X$.
Then we say that $X$ is horospherical if $H$ is a horospherical subgroup of $G$.
\end{definition}
A geometric characterization can be given as follows (see for example, \cite{Pa}):
Let $X$ be a normal $G$-variety. Then $X$ is horospherical if there exists a parabolic subgroup $P$ of $G$, and a smooth toric $T'$-variety $Y$ such that there is a diagram of $G$-equivariant morphisms
\begin{center}
\begin{tikzcd}
G\times^P Y \arrow{d}{p}\arrow{r}{\pi} & X\\
G/P &
\end{tikzcd}
\end{center}
\noindent where $G\times^P Y:=(G\times Y)/P$, the action of $P$ is given by $p\cdot (g, y)=(gp, p^{-1}\cdot y)$, $\pi$ is birational and proper (i.e., a resolution of singularities), and $p$ is a locally trivial fibration with fibers isomorphic to $Y$.
The classification of horospherical varieties can be given by using colored fans, see \cite{Pa} for precise details.
\subsection{Divisors of horospherical varieties}
In this subsection we recall some properties of divisors of a spherical variety $X$. We start with the following result from \cite{brion1989groupe}:
\begin{proposition}
Any divisor of $X$ is linearly equivalent to a $B$-stable divisor.
\end{proposition}
Let $X$ be a horospherical variety. We now consider the $B$-stable prime divisors of $X$. We denote by $X_1,X_2,\ldots, X_r$ the $G$-stable prime divisors of $X$. The other $B$-stable prime divisors
(i.e., those that are not $G$-stable) are the closures of the $B$-stable prime divisors
of $G/H$, which are called the colors of $G/H$. These are the inverse images under the torus fibration $G/H\longrightarrow G/P$ of the Schubert divisors of the flag variety $G/P$.
Now we recall the description of the Schubert divisors of $G/P$. Fix a maximal torus $T$ of $B$. Then we denote by $S$ the set of simple roots of $G$ with respect to $B$ and $T$.
Also denote by $S_P$ the subset of $S$ consisting of the simple roots of $P$, that is, the simple roots of the Levi factor $L_P$ of $P$.
Then the Schubert divisors of $G/P$ are indexed by the subset of simple roots $S\setminus S_P$ and are of the form
$\overline{Bw_0s_{\alpha}P/P}$, where $w_0$ denotes the longest element of the Weyl group of $G$ and $s_{\alpha}$ denotes the simple reflection associated to the simple root $\alpha$.
Hence the $B$-stable irreducible divisors of $G/H$ are of the form $\overline{Bw_0s_{\alpha}P/H}$, which we denote by $D_{\alpha}$ for $\alpha\in S\setminus S_P$.
Then the following holds:
\begin{lemma}
Any divisor $D$ of $X$ is linearly equivalent to a linear combination of the $X_i$ and the $D_{\alpha}$ for $\alpha\in S\setminus S_P$. That is,
$$D\approx \sum_{i=1}^rd_iX_i+\sum_{\alpha\in S\setminus S_P}d_{\alpha}D_{\alpha}.$$
\end{lemma}
A horospherical variety is called toroidal if it has no color containing a $G$-orbit. Such a variety is of the form $G\times^P Y$, where $Y$ is as above. We summarize the description of smooth toroidal varieties that will be
used later in the following proposition (we refer to \cite[prop. 2.2 and ex. 2.3]{Pa} for its proof):
\begin{proposition}
Let $X$ be a smooth toroidal horospherical variety, and let $H$ be the stabilizer of a point in the open $B$-orbit of $X$.
\item (i) There is a parabolic subgroup $P$ containing $B$ such that
\[
H=\bigcap\limits_{\lambda\in {\mathcal X}(P)}\ker(\lambda)
\]
where ${\mathcal X}(P)$ denotes the characters of $P$. Moreover, $P=N_G(H)$, and $T':= P/H$ is a torus.
\item (ii) There is a smooth $T'$-toric variety $Y$ such that $X=G\times^P Y$, and the natural map $X=G\times^P Y\to G/P$ is a locally trivial fibration, where $P$ acts on $Y$ via the quotient $P\to T'$.
\item (iii) $G\times T'$ acts on $X=G\times^P Y$ via
\begin{equation}\label{action}
(g,pH).[h, y]= [gh, p\cdot y]
\end{equation}
for all $g, h \in G$ and $y\in Y$.
\end{proposition}
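A minimal example, not discussed in the text, may help fix ideas. Take $G=\mathrm{SL}_2$ and $H=U$ the unipotent radical of $B$. Then $P=N_G(U)=B$, $T'=B/U\simeq\mathbb{G}_m$, and $G/H\simeq\mathbb{C}^2\setminus\{0\}$ is a $\mathbb{G}_m$-bundle over $G/B=\mathbb{P}^1$. Choosing for $Y$ the smooth complete $T'$-toric variety $\mathbb{P}^1$, the variety $X=G\times^B\mathbb{P}^1$ is a smooth complete toroidal horospherical variety: a $\mathbb{P}^1$-bundle over $\mathbb{P}^1$ (a Hirzebruch surface). Its two $G$-stable prime divisors are the sections $G\times^B\{0\}$ and $G\times^B\{\infty\}$, and its unique color is the closure of the fiber of $G/H\to G/B$ over the Schubert divisor of $\mathbb{P}^1$ (a point).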
\subsection{Local structure of toroidal varieties}
In this subsection, we recall the local structure of a toroidal horospherical
variety $X$. We keep the notations of the previous subsection.
Let $P^-$ be the parabolic subgroup opposite to $P$, and let $L_P=P\cap P^-$ be the Levi factor of $P$ containing $T$.
Notice that there is a quotient map $L_P\to T'$, whose kernel is denoted by $L_0$.
We denote by ${\mathcal D}(X)$ the set of colors of $X$, and define
\[
X_0:=X\setminus\bigcup\limits_{D\in{\mathcal D}(X)}D
\]
Then $X_0$ is a $P$-stable open subset such that $X$ is covered by the
$G$-translates of $X_0$, and the local structure theorem describes $X_0$ precisely:
\begin{proposition}\label{local_structure}\
\begin{enumerate}
\item
There exists a closed $L_P$-stable subvariety $Z$ of $X_0$, fixed pointwise by $L_0$, such that
\[
X_0\simeq P^-\times^{L_P} Z\simeq R^u(P^-)\times Z
\]
where $R^u(P^-)$ is the unipotent radical of $P^-$. Moreover, $Z$ is isomorphic to the $T'$-toric variety $Y$,
it is defined by the same fan as $X$, and any $G$-orbit intersects $Z$ along a unique $T'$-orbit.
\item The action of $B\times T'$ on $X=G\times^P Y$ (see (\ref{action})) stabilizes $X_0$, and the above isomorphism is $B\times T'$-equivariant (where the action on $R^u(P^-)\times Z$ is the product action).
\end{enumerate}
\end{proposition}
For a proof we refer to \cite[Proposition 3.4]{brion1987valuations} and also see \cite[Theorem 29.1]{Ti}.
Note that in particular, the fibration $P^-\times^{L_P} Z$ is trivial,
and that any $G$-stable divisor of $G\times^P Y$ is of the form $G\times^P D'$,
where $D'$ is a $T'$-stable divisor of $Y$. From now on, assume that $G\simeq C\times [G,G]$, where $C$ is a torus.
We also have an exact sequence $0\to \textnormal{Pic}(G/P)\to \textnormal{Pic}(X)\to\textnormal{Pic}(Y)\to 0$ (see Proposition \ref{Pic}).
Since both $G$ and $P$ are factorial, and since both $G/P$ and $Y$ are normal, we have the following exact sequences (see \cite[Remark 2.4]{KKLV} and \cite[Proposition 2.10]{brion-lin})
$$0\to {\mathcal X}(G)\to \textrm{Pic}^G(G/P)\to \textrm{Pic}(G/P)\to 0$$ and
\begin{equation}\label{1}
0\to {\mathcal X}(P) \to \textrm{Pic}^P(Y)\to \textrm{Pic}(Y)\to 0
\end{equation}
First observe that $\textrm{Pic}^G(G/P)=\mathcal X(P)$, and by the above discussion we see that $$\textrm{Pic}^G(X)=\textrm{Pic}^G(G\times^P Y)=\textrm{Pic}^P(Y).$$
The natural morphism $X\to G/P$ induces a map $\textrm{Pic}^G(G/P)\to \textrm{Pic}^G(X)$ via pullback.
\begin{comment}
$$\xymatrix{
0\ar[r] & \mathcal X (P)\ar[d]^{\approx} \ar[r] & Pic^P(Y)\ar[r] \ar[d]^{\approx} & Pic(Y) \ar[r] & 0 \\
& Pic^G(G/P) \ar[r] & Pic^G(X) & &
}$$
\end{comment}
Then by exact sequence (\ref{1}) we have the following short exact sequence:
$$0\to \textrm{Pic}^G(G/P) \to \textrm{Pic}^G(X)\to \textrm{Pic}(Y) \to 0.$$
Since $Y$ is a toric variety, the $\mathbb{Z}$-module $\textrm{Pic}(Y)$ is projective. Hence the above short exact sequence splits and we have
$$\textrm{Pic}^{G}(X)\simeq \textrm{Pic}^G(G/P)\oplus\textrm{Pic}(Y)$$
The same section provides a canonical splitting
\begin{equation}\label{equi}
\textrm{Pic}^{G\times T'}(X)=\textrm{Pic}^{G}(G/P)\oplus\textrm{Pic}^{T'}(Y)
\end{equation}
\section{A K\"{u}nneth-like formula for local cohomology}
We start by recalling a few results on local cohomology functors. We refer the reader to \cite{SGA2} and \cite{Ke1} for more details.
In Sections 3 to 5, for simplicity and unless noted otherwise, all schemes are assumed to be Noetherian over an algebraically closed field $k$.\\
\indent Let $X=\textrm{Spec}(A)$ be an affine scheme, let $QCoh(X)$ be the category of quasi-coherent ${\mathcal O}_X$-modules, and let $Z\subset X$ be a closed subscheme.
\begin{proposition}
Let ${\mathcal F}\in QCoh(X)$, and $p\geq 0$. The functor ${\mathcal H}^p_Z({\mathcal F}\otimes \bullet) : QCoh(X) \to QCoh(X)$ is additive and commutes with direct limits.
\end{proposition}
\begin{proof}
Since we assume $X$ to be Noetherian, all open subsets $U\subset X$ are quasi-compact.
Then to prove the proposition, one can use the same arguments as in \cite[Section 2]{Ke2},
where this is done for $H^p(X,\bullet)$.
One should recall that, as for ordinary sheaf cohomology, one can compute $H^p_Z(X,\bullet)$ using a flabby resolution.
\end{proof}
We can now prove the following useful lemma.
\begin{lemma}{\label{isomorphism-flat-modules}}
Let ${\mathcal F},{\mathcal G}\in QCoh(X)$. Let $t^p_{{\mathcal F},{\mathcal G}} : {\mathcal H}^p_Z({\mathcal G})\otimes_{{\mathcal O}_X}{\mathcal F}\to
{\mathcal H}^p_Z({\mathcal G}\otimes_{{\mathcal O}_X}{\mathcal F})$ be the natural map, and let $l$ be the codimension of $Z$ in $X$.
\item (i) If ${\mathcal F}$ is a flat ${\mathcal O}_X$-module, $t^p_{{\mathcal F},{\mathcal G}}$ is an isomorphism for all $p\geq 0$.
\item (ii) Assume that $X$ is Cohen-Macaulay. If ${\mathcal G}$ is a flat ${\mathcal O}_X$-module, and if $Z$ is a locally complete intersection, $t^l_{{\mathcal F},{\mathcal G}}$ is an isomorphism.
\end{lemma}
\begin{proof}
Proof of (i): We first recall the following facts:
\item (1) If $T: \{\textrm{left} A-\textrm{Mod}\}\to {\mathcal A}b$ is an additive functor, and if $L$ is a free $A$-module, then
\[
T(A)\otimes_A L \to T(L)
\]
is an isomorphism (see \cite[Lemma 7.2.4]{EGA3})
\item (2) (Lazard's theorem) Let $M$ be an $A$-module. Then $M$ is flat if and only if it is a filtered colimit of finitely generated free $A$-modules.\\
Then by flatness of ${\mathcal F}$, ${\mathcal F}(X)$ is flat over $A$, hence ${\mathcal F}(X)$ is a directed colimit of free $A$-modules of finite rank, and ${\mathcal F}$ is a directed
colimit of free ${\mathcal O}_X$-modules of finite rank. Now since both ${\mathcal H}^p_Z({\mathcal G}\otimes \bullet)$ and the global sections functor commute with direct limits, it suffices to prove
the isomorphism for free ${\mathcal O}_X$-modules of finite rank, which is exactly (1).\\
\indent
Proof of (ii): We start by proving that the functor ${\mathcal H}^l_Z({\mathcal G}\otimes \bullet)$ is right exact. Let
\[
0\to{\mathcal F}_1\to{\mathcal F}_2\to{\mathcal F}_3\to 0
\]
be an exact sequence of quasi-coherent ${\mathcal O}_X$-modules. Since ${\mathcal G}$ is flat, this gives a long exact sequence of local cohomology sheaves
\[
\ldots \to {\mathcal H}^l_Z({\mathcal G}\otimes{\mathcal F}_1)\to {\mathcal H}^l_Z({\mathcal G}\otimes{\mathcal F}_2)\to {\mathcal H}^l_Z({\mathcal G}\otimes{\mathcal F}_3)\to
{\mathcal H}^{l+1}_Z({\mathcal G}\otimes{\mathcal F}_1)
\]
But the conditions on $Z$ ensure the vanishing of ${\mathcal H}^{l+1}_Z({\mathcal G}\otimes{\mathcal F}_1)$ (see \cite[Lemma 3.12, III]{SGA2}), hence the right-exactness.\\
Now pick an exact sequence ${\mathcal L}'\to {\mathcal L}\to {\mathcal F}\to 0$ with ${\mathcal L}$ and ${\mathcal L}'$ being free ${\mathcal O}_X$-modules. We have a commutative diagram
\begin{center}
\begin{tikzcd}
{\mathcal H}^l_Z({\mathcal G})\otimes{\mathcal L}' \arrow{d}\arrow{r} & {\mathcal H}^l_Z({\mathcal G})\otimes{\mathcal L} \arrow{d}\arrow{r} & {\mathcal H}^l_Z({\mathcal G})\otimes{\mathcal F} \arrow{d}\arrow{r} & 0 \\
{\mathcal H}^l_Z({\mathcal G}\otimes{\mathcal L}') \arrow{r} & {\mathcal H}^l_Z({\mathcal G}\otimes{\mathcal L}) \arrow{r} & {\mathcal H}^l_Z({\mathcal G}\otimes{\mathcal F}) \arrow{r} & 0 \\
\end{tikzcd}
\end{center}
Since ${\mathcal L}$ and ${\mathcal L}'$ are free, the first two vertical maps are isomorphisms, hence so is the third.
\end{proof}
\begin{remark}
Any invertible sheaf ${\mathcal L}$ on $X= \textrm{Spec}(A)$ is flat.
\end{remark}
Before stating our K\"{u}nneth-like formula for local cohomology, let us recall a few known results on local cohomology (which hold in much more generality).
\begin{lemma}
Let ${\mathcal F}\in QCoh(X)$.
\item (i) If $Z_2\subset Z_1$ are two closed subschemes of $X$ such that $Z_1\setminus Z_2$ is affine, then
\[
H^i_{Z_1/Z_2}(X,{\mathcal F}) \simeq H^0(X,{\mathcal H}^i_{Z_1/Z_2}({\mathcal F}))
\]
\item (ii) Let $Z_2\subset Z_1$, $W_2\subset W_1$ be closed subschemes, and let $S_1=W_1\cap Z_1$, $S_2=(W_1\cap Z_2)\cup (W_2\cap Z_1)$. Then we have a spectral sequence
\[
E_2^{p,q}={\mathcal H}^p_{W_1/W_2}({\mathcal H}^q_{Z_1/Z_2}({\mathcal F}))\Rightarrow {\mathcal H}^*_{S_1/S_2}({\mathcal F})
\]
In particular, when $Z_2=W_2=\emptyset$, we have
\[
E_2^{p,q}={\mathcal H}^p_{W_1}({\mathcal H}^q_{Z_1}({\mathcal F}))\Rightarrow {\mathcal H}^*_{W_1\cap Z_1}({\mathcal F})
\]
\item (iii) Let $Z_2\subset Z_1$ be two closed subschemes of $X$. Let $Y$ be a scheme, and let $p:X\times Y\to X$ be the first projection. There is an isomorphism
\[
p^*{\mathcal H}^i_{Z_1/Z_2}({\mathcal F})\simeq {\mathcal H}^i_{Z_1\times Y/Z_2\times Y}(p^*{\mathcal F})
\]
\end{lemma}
\begin{proof}
For proof of (i) see \cite[Theorem 9.5 (d)]{Ke1}. For proof of (ii) see \cite[Lemma 8.5 (d)]{Ke1}. For proof of (iii) see \cite[Proposition 11.5]{Ke1}.
\end{proof}
Let $Y$ be an affine scheme. Let $Z_1\subset X$ and $Z_2\subset Y$ be two locally complete intersections, of respective codimensions $l_1$ and $l_2$.
Let $L_1$ and $L_2$ be two invertible sheaves respectively on $X$ and $Y$. Let $p_1:X\times Y \to X$ and $p_2:X\times Y\to Y$ be the projections, and
let ${\mathcal L}_i:=p_i^*L_i$. We are now ready to state our result.
\begin{theorem}\label{kunneth-local}
Assume $X$ and $Y$ are irreducible and Cohen-Macaulay. Then
\[
H^{l_1+l_2}_{Z_1\times Z_2}(X\times Y, {\mathcal L}_1\otimes{\mathcal L}_2)\simeq H^{l_1}_{Z_1}(X,L_1)\otimes_k H^{l_2}_{Z_2}(Y,L_2)
\]
\end{theorem}
\begin{proof}
First let us notice that
\[
H^{n}_{Z_1\times Z_2}(X\times Y,{\mathcal L}_1\otimes {\mathcal L}_2)=H^0(X\times Y,{\mathcal H}^n_{Z_1\times Z_2}({\mathcal L}_1\otimes{\mathcal L}_2))
\]
Since ${\mathcal L}_1$ and ${\mathcal L}_2$ are locally free, and by the conditions on the $Z_i$, the spectral sequence
\[
E^{p,q}_2 = {\mathcal H}^p_{Z_1\times Y}({\mathcal H}^q_{X\times Z_2}({\mathcal L}_1\otimes {\mathcal L}_2))\Rightarrow {\mathcal H}^*_{Z_1\times Z_2}({\mathcal L}_1\otimes {\mathcal L}_2)
\]
collapses: the $E_2$-term can only be nonzero for $p=l_1$ and $q=l_2$. Here we use that ${\mathcal L}_1\otimes {\mathcal L}_2$ is also locally free, and that
${\mathcal I}_{Z_1\times Z_2}$ is locally generated by $l_1+l_2$ elements.\\
\indent
Using Lemma \ref{isomorphism-flat-modules}, we compute
\[
\begin{array}{rclr}
{\mathcal H}^p_{Z_1\times Y}({\mathcal H}^q_{X\times Z_2}({\mathcal L}_1\otimes {\mathcal L}_2)) & \simeq & {\mathcal H}^p_{Z_1\times Y}({\mathcal L}_1\otimes {\mathcal H}^q_{X\times Z_2}({\mathcal L}_2)) & (\textrm{by Lemma \ref{isomorphism-flat-modules} (i))}\\
& \simeq & {\mathcal H}^p_{Z_1\times Y}({\mathcal L}_1)\otimes {\mathcal H}^q_{X\times Z_2}({\mathcal L}_2) & (\textrm{by Lemma \ref{isomorphism-flat-modules} (ii))}
\end{array}
\]
Taking global sections yields the desired isomorphism.
\end{proof}
\section{Locally trivial fibrations and Cousin complexes}
Let $S$ and $F$ be smooth complete connected schemes, and let $E\to S$ be a (Zariski) locally trivial
fibration, with fiber $F$. Moreover, assume that the Picard group Pic$(F)$ of $F$ is a projective $\mathbb{Z}$-module, and that $F$ is rational. We start with the following useful
description of the Picard group Pic$(E)$ of $E$.
\begin{proposition}\label{Pic}
We have a (noncanonical) isomorphism $\textnormal{Pic}(E)\simeq \textnormal{Pic}(S)\oplus \textnormal{Pic}(F)$.
\end{proposition}
\begin{proof}
Using \cite[Proposition 2.3]{FI}, we have an exact sequence
\[
H^0(F,{\mathcal O}_F^*)/k^* \to \textrm{Pic}(S) \to \textrm{Pic}(E) \to \textrm{Pic}(F) \to 0
\]
By the hypotheses on $F$, $H^0(F,{\mathcal O}_F^*)=k^*$. Since $\textrm{Pic}(F)$ is projective,
we get $\textnormal{Pic}(E)\simeq \textnormal{Pic}(S)\oplus \textnormal{Pic}(F)$. This completes the proof of the proposition.
\end{proof}
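For instance (our illustration), if $E=\mathbb{P}(\mathcal O\oplus\mathcal O(n))$ is a Hirzebruch surface, viewed as a locally trivial fibration over $S=\mathbb{P}^1$ with fiber $F=\mathbb{P}^1$, the proposition gives
\[
\textnormal{Pic}(E)\simeq\textnormal{Pic}(\mathbb{P}^1)\oplus\textnormal{Pic}(\mathbb{P}^1)\simeq\mathbb{Z}^2,
\]
generated by the pullback of $\mathcal O_{\mathbb{P}^1}(1)$ and the relative $\mathcal O_E(1)$.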
Let $\mathcal L$ be a line bundle on $E$. Then, by Proposition \ref{Pic}, we can identify $\mathcal L$ with $\mathcal L_1\otimes \mathcal L_2$ for some line bundles ${\mathcal L}_1$ and ${\mathcal L}_2$ on $S$ and $F$ respectively.
The aim of this section is to compute the cohomology groups $H^n(E,\mathcal L)$.
To do that, we shall need the formalism of Cousin complexes.\\
\indent
Let $X$ be a Noetherian scheme, let ${\mathcal F}$ be an ${\mathcal O}_X$-module, and let $\{Z\}:=\emptyset=Z_{n+1}\subset Z_n\subset \ldots \subset Z_1\subset Z_0=X$ be a filtration by
closed subschemes. For simplicity, let us assume that $X$ is irreducible. Then one can construct (see \cite[Lemma 7.8]{Ke1}) the Cousin complex of ${\mathcal F}$ relative to the filtration $\{Z\}$
\[
\textrm{Cousin}_{\{Z\}}({\mathcal F}) := 0\to H^0(X,{\mathcal F})\to H^0_{Z_0/Z_1}(X,{\mathcal F})\to H^1_{Z_1/Z_2}(X,{\mathcal F}) \to H^2_{Z_2/Z_3}(X,{\mathcal F}) \to \ldots
\]
and its sheaf analogue, denoted by $\underline{{\mathcal C}\textrm{ousin}}_{\{Z\}}({\mathcal F})$. Kempf showed in \cite[Theorem 9.6, 10.3 and 10.5]{Ke1} that under some conditions, these Cousin
complexes can be used to compute the cohomology groups $H^i(X,{\mathcal F})$:
\begin{proposition}\label{Cousin-Kempf}
Assume that the $Z_i\setminus Z_{i+1}$ are all affine, that $Z_i$ is of codimension $i$ for all $i\geq 0$, and that ${\mathcal F}$ is locally free. Then we have an isomorphism of complexes
\[
\textnormal{Cousin}_{\{Z\}}({\mathcal F}) \simeq H^0(X, \underline{{\mathcal C}\textnormal{ousin}}_{\{Z\}}({\mathcal F}))
\]
and $H^i(X,{\mathcal F})$ is the $i$-th homology group of $\textnormal{Cousin}_{\{Z\}}({\mathcal F})$.
\end{proposition}
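To make the proposition concrete, here is the classical example of $X=\mathbb{P}^1$ and ${\mathcal F}=\mathcal O(d)$ (this computation is ours). Take the filtration $Z_0=\mathbb{P}^1\supset Z_1=\{\infty\}\supset Z_2=\emptyset$. Writing $t$ for the coordinate on $\mathbb{P}^1\setminus\{\infty\}$ and $s=1/t$, and trivializing $\mathcal O(d)$ on the two charts so that a section $f(t)$ reads $s^{d}f(1/s)$ near $\infty$, the complex $H^0_{Z_0/Z_1}\to H^1_{Z_1/Z_2}$ becomes
\[
k[t]\longrightarrow k[s,s^{-1}]/k[s],
\qquad
t^{k}\longmapsto s^{d-k} \bmod k[s].
\]
Its kernel is spanned by $\{t^k : 0\leq k\leq d\}$, of dimension $d+1$ for $d\geq 0$, recovering $H^0(\mathbb{P}^1,\mathcal O(d))$; its cokernel vanishes for $d\geq -1$ and has dimension $-d-1$ for $d\leq -2$, recovering $H^1(\mathbb{P}^1,\mathcal O(d))$.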
We shall now state our main result.
\begin{theorem}\label{cohomology-fibration}
Assume that $S$ is stratified by locally closed affine irreducible schemes $\{Z^1_i\}_{i\in I}$, and that $F$ is stratified by locally closed affine irreducible schemes $\{Z^2_j\}_{j\in J}$ such that:
\begin{enumerate}
\item[(i)] there are affine open subsets $U^1_i\subset S$ and $U^2_j\subset F$ containing respectively $Z^1_i$ and $Z^2_j$, in which they are locally complete intersections;
\item[(ii)] $E$ is stratified by the $\{Z^1_i\times Z^2_j\}$, and there are open embeddings $U^1_i\times U^2_j\to E$.
\end{enumerate}
Then we can compute $H^n(E,{\mathcal L}_1\otimes {\mathcal L}_2)$ by means of the following convergent spectral sequence:
\[
E^{p,q}_2=H^p(S,{\mathcal L}_1)\otimes H^q(F,{\mathcal L}_2) \Rightarrow H^{p+q}(E,{\mathcal L}_1\otimes {\mathcal L}_2).
\]
If, furthermore, either ${\mathcal L}_1$ or ${\mathcal L}_2$ has its cohomology concentrated in a single degree, then this spectral sequence collapses, and we get
\[
H^n(E,{\mathcal L}_1\otimes {\mathcal L}_2) = \bigoplus\limits_{p+q=n}H^p(S,{\mathcal L}_1)\otimes H^q(F,{\mathcal L}_2).
\]
\end{theorem}
\begin{proof}
Let us start by fixing some notation. Let
\[
\overline{S_l}:=\bigcup\limits_{\textrm{codim}(Z^1_i)\geq l} Z^1_i \textrm{, } \overline{F_l}:=\bigcup\limits_{\textrm{codim}(Z^2_j)\geq l} Z^2_j\textrm {, and } \overline{E_l}:=\bigcup\limits_{\textrm{codim}(Z^1_i\times Z^2_j)\geq l} Z^1_i\times Z^2_j
\]
and let $S_l=\overline{S_l}\setminus\overline{S_{l+1}}$, $F_l=\overline{F_l}\setminus\overline{F_{l+1}}$, and $E_l=\overline{E_l}\setminus\overline{E_{l+1}}$.\\
By using the excision formula twice, we can rewrite the cohomology groups occurring in the Cousin complex of ${\mathcal L}_1\otimes {\mathcal L}_2$ in the following way:
\[
\begin{array}{rcll}
H^l_{\overline{E_l}/\overline{E_{l+1}}}(E,{\mathcal L}_1\otimes {\mathcal L}_2)&\simeq &H^l_{E_l}(E\setminus \overline{E_{l+1}},{\mathcal L}_1\otimes {\mathcal L}_2)&\\
&\simeq &\bigoplus\limits_{p+q=l}\bigoplus\limits_{\substack{\textrm{codim}(Z^1_i)=p \\ \textrm{codim}(Z^2_j)=q}} H^l_{Z^1_i\times Z^2_j}(E\setminus \overline{E_{l+1}},{\mathcal L}_1\otimes {\mathcal L}_2)&\\
&\simeq &\bigoplus\limits_{p+q=l}\bigoplus\limits_{\substack{\textrm{codim}(Z^1_i)=p \\ \textrm{codim}(Z^2_j)=q}} H^l_{Z^1_i\times Z^2_j}(U^1_i\times U^2_j,{\mathcal L}_1\otimes {\mathcal L}_2)&\\
&\simeq &\bigoplus\limits_{p+q=l}\bigoplus\limits_{\substack{\textrm{codim}(Z^1_i)=p \\ \textrm{codim}(Z^2_j)=q}} H^l_{Z^1_i\times Z^2_j/\partial (Z^1_i\times Z^2_j)}(E,{\mathcal L}_1\otimes {\mathcal L}_2)&(*)\\
\end{array}
\]
where $\partial (Z^1_i\times Z^2_j)$ denotes $\overline{Z^1_i\times Z^2_j}\setminus Z^1_i\times Z^2_j$. The boundary maps
\[
\partial_E^l : H^l_{Z^1_i\times Z^2_j/\partial (Z^1_i\times Z^2_j)}(E,{\mathcal L}_1\otimes {\mathcal L}_2) \to H^{l+1}_{Z^1_{i'}\times Z^2_{j'}/\partial (Z^1_{i'}\times Z^2_{j'})}(E,{\mathcal L}_1\otimes {\mathcal L}_2)
\]
are zero whenever $Z^1_{i'}\times Z^2_{j'}\not\subset \partial (Z^1_i\times Z^2_j)$. But a stratum $Z^1_{i'}\times Z^2_{j'}$ of codimension $l+1$ lies in $\partial (Z^1_i\times Z^2_j)$ exactly when it is of the form $Z^1_{i'}\times Z^2_j$ or $Z^1_i\times Z^2_{j'}$, with either $Z^1_{i'}\subset\partial Z^1_i$ of codimension $\textrm{codim}(Z^1_i)+1$ or $Z^2_{j'}\subset\partial Z^2_j$ of codimension $\textrm{codim}(Z^2_j)+1$.\\
\indent
We shall now introduce the following bicomplex:
\[
\begin{array}{rcll}
K^{p,q}&:=&\bigoplus\limits_{\substack{\textrm{codim}(Z^1_i)=p \\ \textrm{codim}(Z^2_j)=q}} H^{p+q}_{Z^1_i\times Z^2_j}(U^1_i\times U^2_j,{\mathcal L}_1\otimes {\mathcal L}_2)& \\
&\simeq& \bigoplus\limits_{\substack{\textrm{codim}(Z^1_i)=p \\ \textrm{codim}(Z^2_j)=q}} H^p_{Z^1_i}(U^1_i,{\mathcal L}_1)\otimes H^q_{Z^2_j}(U^2_j,{\mathcal L}_2)& \textrm{by Theorem \ref{kunneth-local}}.
\end{array}
\]
Notice that, by the same kind of argument as for $(*)$, $\bigoplus\limits_{\textrm{codim}(Z^1_i)=p} H^p_{Z^1_i}(U^1_i,{\mathcal L}_1)$ is the $(p+1)$-th term of
the complex $\textrm{Cousin}_{\{S_l\}}({\mathcal L}_1)$; denote by $\partial^p_{S,i}$ the induced map on $H^p_{Z^1_i}(U^1_i,{\mathcal L}_1)$. For the same reasons,
$\bigoplus\limits_{\textrm{codim}(Z^2_j)=q} H^q_{Z^2_j}(U^2_j,{\mathcal L}_2)$ is the $(q+1)$-th term of the complex $\textrm{Cousin}_{\{F_l\}}({\mathcal L}_2)$;
denote by $\partial^q_{F,j}$ the induced map on $H^q_{Z^2_j}(U^2_j,{\mathcal L}_2)$. Define the boundary maps on $K^{p,q}$ by:
\begin{center}
\begin{tikzcd}
{} & \vdots & \vdots& \\
\ldots \arrow{r} & K^{p,q+1} \arrow{u}\arrow{r}{\partial^{p,q+1}_h} & K^{p+1,q+1}\arrow{u}\arrow{r} & \ldots \\
\ldots \arrow{r} & K^{p,q} \arrow{r}{\partial^{p,q}_h} \arrow{u}{\partial^{p,q}_c} & K^{p+1,q} \arrow{u}{\partial^{p+1,q}_c}\arrow{r} & \ldots \\
& \vdots \arrow{u} &\vdots \arrow{u} &
\end{tikzcd}
\end{center}
\begin{itemize}
\item the horizontal boundary maps $\partial^{p,q}_h:K^{p,q}\to K^{p+1,q}$ are given by $\bigoplus\limits_{\textrm{codim}(Z^1_i)=p} (\partial^p_{S,i}\otimes 1)$;
\item the vertical boundary maps $\partial^{p,q}_c:K^{p,q}\to K^{p,q+1}$ are given by $\bigoplus\limits_{\textrm{codim}(Z^2_j)=q} (1\otimes \partial^q_{F,j})$.
\end{itemize}
\indent
The previous argument on the boundary maps $\partial^l_E$ of $\textrm{Cousin}_{\{E_l\}}({\mathcal L}_1\otimes {\mathcal L}_2)$ shows that
\[
Tot(K) = \textrm{Cousin}_{\{E_l\}}({\mathcal L}_1\otimes {\mathcal L}_2)
\]
where $Tot(K)$ denotes the total complex of $K$. Since $K^{p,q}$ vanishes for $p<0$ or $q<0$, the bigraded group $^cH_{*,*}(K)$ obtained from $K$ by taking the homology of the columns gives rise to a spectral sequence whose $E^2$-page is given by
\[
E_{p,q}^2=H_p(^cH_{q,*}(K))
\]
converging to $H_*(Tot(K))$. But by Proposition \ref{Cousin-Kempf}, we have
\[
H_n(Tot(K))=H_n(\textrm{Cousin}_{\{E_l\}}({\mathcal L}_1\otimes {\mathcal L}_2))=H^n(E,{\mathcal L}_1\otimes {\mathcal L}_2)
\]
and
\[
\begin{array}{rcl}
H_p(^cH_{q,*}(K))&=&H_p(\bigoplus\limits_{\textrm{codim}(Z^1_i)=*}H^*_{Z^1_i}(U^1_i,{\mathcal L}_1)\otimes H^q(F,{\mathcal L}_2))\\
&=&H^p(S,{\mathcal L}_1)\otimes H^q(F,{\mathcal L}_2)
\end{array}
\]
If either ${\mathcal L}_1$ or ${\mathcal L}_2$ has its cohomology concentrated in a single degree, the spectral sequence degenerates at $E^2$, and the last claim follows.
\end{proof}
\section{Equivariance}
Let $G_1$ and $G_2$ be two reductive algebraic groups. In this section, we shall take into account actions of $G_1\times G_2$ in the previous results. The key point will come from \cite[Lemma 11.6]{Ke1}.\\
\indent
We start by showing the following lemma:
\begin{lemma}\label{equivariance-lemma}
With the notation and setting of Lemma \ref{isomorphism-flat-modules}, if $G_1$ acts on $X$ in such a
way that $Z$ is $G_1$-stable, and that both ${\mathcal F}$ and ${\mathcal G}$ are $G_1$-linearized, then
the isomorphisms $t_{\mathcal{F},\mathcal{G}}^p$ given by Lemma \ref{isomorphism-flat-modules} are
$G_1$-equivariant.
\end{lemma}
\begin{proof}
We recall here the $G_1$-linearization of ${\mathcal H}^p_Z({\mathcal F})$ given by Kempf:
Let $\rho : G_1\times X\to X$ be the action map, and let $\pi$ denote the second projection. The $G_1$-linearization of ${\mathcal F}$ gives an isomorphism $\rho^*{\mathcal F}\to \pi^*{\mathcal F}$.
Since $Z$ is $G_1$-stable, we get a map $\rho^*({\mathcal H}^p_Z({\mathcal F}))\to {\mathcal H}^p_{G_1\times Z}(\pi^*{\mathcal F})$,
and composing it with the isomorphism ${\mathcal H}^p_{G_1\times Z}(\pi^*{\mathcal F}) \to \pi^*({\mathcal H}^p_Z({\mathcal F}))$ gives
the required natural $G_1$-linearization on ${\mathcal H}^p_Z({\mathcal F})$, see \cite[Lemma 11.3]{Ke1}. The commutativity of the diagram
(whose horizontal arrows are isomorphisms thanks to the linearizations, and all vertical arrows are isomorphisms by flatness)
\begin{center}
\begin{tikzcd}
\rho^*{\mathcal H}^p_Z({\mathcal F}\otimes{\mathcal G}) \arrow{r} &{\mathcal H}^p_Z(\pi^*({\mathcal F}\otimes {\mathcal G})) \arrow{r} &\pi^*{\mathcal H}^p_Z({\mathcal F}\otimes {\mathcal G})\\
\rho^*({\mathcal H}^p_Z({\mathcal F})\otimes{\mathcal G}) \arrow{u}\arrow{r} &{\mathcal H}^p_Z(\pi^*{\mathcal F})\otimes \pi^*{\mathcal G} \arrow{u}\arrow{r} &\pi^*({\mathcal H}^p_Z({\mathcal F})\otimes {\mathcal G}) \arrow{u}
\end{tikzcd}
\end{center}
is enough to conclude the lemma.
\end{proof}
Lemma \ref{equivariance-lemma} is enough to prove equivariant analogues of Theorems \ref{kunneth-local} and \ref{cohomology-fibration}. Let us keep the notation and setting of Theorem \ref{kunneth-local}.
Assume that $G_1$ acts on $X$ and that $Z_1$ is $G_1$-stable, that $G_2$ acts on $Y$ and that $Z_2$ is $G_2$-stable, and that the ${\mathcal L}_i$ are
$G_i$-linearized.
Equip ${\mathcal L}_1\otimes {\mathcal L}_2$ with the induced $G_1\times G_2$-linearization. Then we have:
\begin{proposition}\label{equivariant-kunneth}
The isomorphism given in Theorem \ref{kunneth-local} is an isomorphism of $G_1\times G_2$-modules.
\end{proposition}
\begin{proof}
By Lemma \ref{equivariance-lemma}, the isomorphism
\[
{\mathcal H}^{p+q}_{Z_1\times Z_2}({\mathcal L}_1\otimes {\mathcal L}_2)\simeq {\mathcal H}^p_{Z_1\times Y}({\mathcal L}_1)\otimes {\mathcal H}^q_{X\times Z_2}({\mathcal L}_2)
\]
obtained in the proof is an isomorphism of $G_1\times G_2$-equivariant sheaves, hence taking global sections, we get an isomorphism of $G_1\times G_2$-modules.
\end{proof}
We now adopt the notation and setting of Theorem \ref{cohomology-fibration}. As in the introduction, replacing $G_i$ by a finite cover, we assume that $G_i$ is factorial for $i=1, 2$. Assume that $G_1\times G_2$ acts on $E$, that $G_1$ acts on $S$ and trivially on $F$, and that $G_2$ acts on $F$ and trivially on $S$, in such a way that the morphism $E\to S$ is $G_2$-invariant and $G_1$-equivariant, and the inclusions of the fibers $F\to E$ are $G_2$-equivariant and $G_1$-invariant. Assume that the subsets $Z^1_i$ and $U^1_i$ are $G_1$-stable,
that the subsets $Z^2_j$ and $U^2_j$ are $G_2$-stable, and that the ${\mathcal L}_i$ are $G_i$-linearized (hence $G_1\times G_2$-linearized because the action of the other group $G_j$ is trivial).
Note that we have an exact sequence
\begin{equation}\label{exact-equiv}
0\to \textrm{Pic}^{G_1}(S) \to \textrm{Pic}^{G_1\times G_2}(E) \to \textrm{Pic}^{G_2}(F) \to 0
\end{equation}
since it sits in the following diagram with exact columns whose first and last rows are exact:
\begin{center}
\begin{tikzcd}
&0 \arrow{d} &0 \arrow{d}
&0 \arrow{d} &\\
0 \arrow{r} &{\mathcal X}(G_1) \arrow{d}\arrow{r} &{\mathcal X}(G_1\times G_2) \arrow{d}{i_E}\arrow{r}
&{\mathcal X}(G_2) \arrow{d}{i_F}\arrow{r} &0\\
&\textrm{Pic}^{G_1}(S) \arrow{d}\arrow{r} &\textrm{Pic}^{G_1\times G_2}(E) \arrow{d}{p_E}\arrow{r}
&\textrm{Pic}^{G_2}(F) \arrow{d}{p_F} &\\
0 \arrow{r} &\textrm{Pic}(S) \arrow{d}\arrow{r} &\textrm{Pic}(E) \arrow{d}\arrow{r}
&\textrm{Pic}(F) \arrow{d}\arrow{r} &0\\
&0 &0 &0 &
\end{tikzcd}
\end{center}
Recall that we have fixed a section $s:\textrm{Pic}(F)\to\textrm{Pic}(E)$. We also have a natural section $s_{\mathcal X}:{\mathcal X}(G_2)\to{\mathcal X}(G_1\times G_2)$ sending a character $\chi$ to the character $\tilde{\chi}(g_1,g_2)=\chi(g_2)$.
Since $\textrm{Pic}(F)$ is a projective module, so is $\textrm{Pic}^{G_2}(F)$, and we get a (non-canonical) section
$\textrm{Pic}^{G_2}(F) \to \textrm{Pic}^{G_1\times G_2}(E)$. Let us fix such a section $\tilde{s}$. Then it satisfies $\tilde{s}\circ i_F=i_E\circ s_{\mathcal X}$ and $p_E\circ\tilde{s}=s\circ p_F$. The above short exact sequence splits and we have
\begin{equation}\label{exact-equiv1}
\textrm{Pic}^{G_1\times G_2}(E)=\textrm{Pic}^{G_1}(S) \oplus \textrm{Pic}^{G_2}(F).
\end{equation}
This induces a $G_1\times G_2$-linearization on ${\mathcal L}_1 \otimes {\mathcal L}_2$, and we get:
\begin{proposition}\label{equivariant-cohomology-fibration}
The isomorphism given in Theorem \ref{cohomology-fibration} is an isomorphism of $G_1\times G_2$-modules.
\end{proposition}
\begin{proof}
This is a direct consequence of \cite[Theorem 11.6 (e)]{Ke1}.
\end{proof}
Now assume that $k$ is of characteristic $0$. Then we can relax the hypotheses to obtain a weaker result.
Namely, we do not need to assume that the closed subschemes occurring in the filtration are stable: in the setting of Theorem \ref{cohomology-fibration},
let $\mathcal L$ be a $G_1\times G_2$-linearized line bundle on $E$.
By (\ref{exact-equiv1}), we can write $\mathcal L= {\mathcal L}_1\otimes {\mathcal L}_2$, where $\mathcal L_1$ is a $G_1$-linearized line bundle on $S$ and $\mathcal L_2$ is a $G_2$-linearized line bundle on $F$.
Let $\mathfrak{g}_i$ be the Lie algebra of $G_i$. Then we have:
\begin{proposition}
The isomorphism given in Theorem \ref{cohomology-fibration} is an isomorphism of $\mathfrak{g}_1\times \mathfrak{g}_2$-modules.
\end{proposition}
\begin{proof}
For an affine algebraic group $G$, let $\widehat{G}$ denote the formal completion of $G$ at the identity.
The $G_1\times G_2$-linearization of ${\mathcal L}_1\otimes {\mathcal L}_2$ gives rise to a $\widehat{G_1\times G_2}$-linearization on ${\mathcal L}_1\otimes {\mathcal L}_2$.
Then, by using the natural map $\widehat{G_1}\times \widehat{G_2}\to \widehat{G_1 \times G_2}$, we get a $\widehat{G_1}\times \widehat{G_2}$-linearization.
Hence by \cite[Lemma 11.1(a)]{Ke1}, the cohomology groups $H^{l_1+l_2}_{Z^1_i\times Z^2_j}(U^1_i\times U^2_j, {\mathcal L}_1\otimes{\mathcal L}_2)$, $H^{l_1}_{Z^1_i}(U^1_i,{\mathcal L_1})$
and $H^{l_2}_{Z^2_j}(U^2_j,{\mathcal L_2})$
have natural structures of $\widehat{G_1}\times\widehat{G_2}$-, $\widehat{G_1}$- and $\widehat{G_2}$-modules respectively, which we describe explicitly here.\\
\indent
A $\widehat{G}$-linearization of a sheaf ${\mathcal F}$ of ${\mathcal O}_X$-modules on a scheme $X$ is given by an inverse system of morphisms
${\mathcal F}\to k[G]/\mathfrak{m}^i\otimes {\mathcal F}$ (where $\mathfrak{m}$ denotes the ideal of regular functions vanishing at the identity).
Since the $k[G]/\mathfrak{m}^i$ are finite-dimensional vector spaces and the local cohomology functor commutes with direct sums, we have that
for any closed subsets $Z_2\subset Z_1\subset X$, there is an inverse system $H^p_{Z_1/Z_2}(X,{\mathcal F})\to k[G]/\mathfrak{m}^i\otimes H^p_{Z_1/Z_2}(X,{\mathcal F})$,
which is precisely the required linearization.\\
\indent
Now that the linearization construction has been made explicit, it is immediate that the isomorphism
\[
H^{l_1+l_2}_{Z^1_i\times Z^2_j}(U^1_i\times U^2_j, {\mathcal L}_1\otimes{\mathcal L}_2)\to H^{l_1}_{Z^1_i}(U^1_i, {\mathcal L_1})\otimes H^{l_2}_{Z^2_j}(U^2_j, {\mathcal L_2})
\]
is an isomorphism of $\widehat{G_1}\times \widehat{G_2}$-modules.
Notice that, for example, we can rewrite $H^{l_1}_{Z^1_i}(U^1_i, {\mathcal L_1})=H^{l_1}_{Z^1_i/\partial Z^1_i}(S, {\mathcal L}_1)$; hence these modules are exactly the direct summands of
the modules occurring in the Cousin complexes.
Using \cite[Lemma 11.1(d)]{Ke1}, the Cousin complex of ${\mathcal L}_1\otimes {\mathcal L}_2$ relatively to $\{E_l\}$ is a complex of $\widehat{G_1}\times \widehat{G_2}$-modules.
Hence the isomorphism given in Theorem \ref{cohomology-fibration} is an isomorphism of $\widehat{G_1}\times \widehat{G_2}$-modules.
Since we are in characteristic 0, this gives us an isomorphism of modules over the Lie algebra $\mathfrak{g}_1\times \mathfrak{g}_2$.
\end{proof}
\section{Application to horospherical varieties}
Let $G$ be a complex connected reductive group. Let $X$ be a smooth toroidal horospherical variety.
Let $T\subset B$ be a maximal torus of $G$. We need to stratify $X$ in such a way that we can apply Theorem \ref{cohomology-fibration}.
This is done by looking at the stratification of $X$ by $B\times T'$-orbits. The $B\times T'$-orbits are of the form $BwP\times^P C_{\sigma}$, where $w\in W^P$
and $C_{\sigma}$ is the $T'$-orbit in $Y$ associated to a cone $\sigma$ (living in the fan defining $Y$).\\
\indent
Since $X$ is horospherical and toroidal, we can apply the local structure theorem (Proposition \ref{local_structure}).
Let $U_w=w(w_0^P)^{-1}Bw_0^PP/P\subset G/P$ and
$U_{\sigma}=\textrm{Spec}(k[M\cap \sigma^{\vee}])\subset Y$.
Let $p:X\to G/P$ be the projection map.
Then $BwP\times^P C_{\sigma}$ is closed in $U_wP\times^P U_{\sigma}$,
which is open in $p^{-1}(U_w)=w(w_0^P)^{-1}.X_0$.
But $p^{-1}(U_w)\simeq U_w\times Y$ by the local structure theorem.
Hence the $B\times T'$-orbits are
isomorphic to $BwP/P \times C_{\sigma}$, and they are closed in an open subset of $X$ of the form
$U_w\times U_{\sigma}$.
Remark that $U_w$ is $T$-stable, and that $U_{\sigma}$ is $T'$-stable.\\
\indent Both $BwP/P$ and $C_{\sigma}$ are
locally complete intersections, of respective dimensions $l(w)$ and $\dim(\sigma)$. Assume now that $X$ is a complete smooth toroidal horospherical variety.
Then we have the action of $G\times T'$ on $X$ such that $X\to G/P$ is $G$-equivariant. By the isomorphism (\ref{equi}), we have $\textrm{Pic}^{G\times T'}(X)\simeq \textrm{Pic}^G(G/P)\oplus \textrm{Pic}^{T'}(Y)$. Hence
we write any $G$-linearized line bundle $\mathcal L$ on $X$ as $\mathcal L\simeq \mathcal L_1 \otimes \mathcal L_2$, where $\mathcal L_1$ and $\mathcal L_2$ are line bundles on $G/P$ and on $Y$ respectively.
By the Borel--Weil--Bott theorem, the cohomology of a line bundle on $G/P$ is concentrated in at most one degree.
Hence we can apply Theorem \ref{cohomology-fibration} to complete toroidal horospherical varieties. Then,
\begin{corollary}\label{horospherical}
We have an isomorphism of $\mathfrak g \times T'$-modules
\[
H^n(X, \mathcal L) = \bigoplus\limits_{p+q=n}H^p(G/P,{\mathcal L}_1)\otimes H^q(Y,{\mathcal L}_2)
\]
\end{corollary}
Now let $X$ be a complete horospherical variety. Then it admits a $G$-equivariant complete toroidal
resolution of singularities $\pi: \tilde{X}\to X$.
By equivariance, $\tilde{X}$ is also horospherical. Then, using Corollary \ref{horospherical}, we can compute
the cohomology groups of line bundles on $X$ as well.
\begin{proposition}\label{nontoroidalcase}
Let ${\mathcal L}$ be a line bundle on $X$. Then we have $H^i(X,{\mathcal L})=H^i(\tilde{X},\pi^*{\mathcal L})$ for all $i\geq 0$.
\end{proposition}
\begin{proof} First note that, $X$ being a spherical variety, it has rational singularities (see for example \cite[Corollary 2.3.4]{perrin2014geometry}).
Then we have $R^q\pi_*\mathcal O_{\tilde X}=0$ for all $q>0$.
By the projection formula, we can see that $R^q\pi_*\pi^*{\mathcal L}=0$ for all $q>0$.
Hence the Leray spectral sequence
\[
H^p(X,R^q\pi_*\pi^*{\mathcal L})\Rightarrow H^{p+q}(\tilde{X},\pi^*{\mathcal L})
\]
degenerates and we get $H^i(X,{\mathcal L})=H^i(\tilde{X},\pi^*{\mathcal L})$ for all $i\geq 0$.
\end{proof}
{\bf Acknowledgments:}
We would like to thank Michel Brion for valuable discussions and many critical comments. The
first author would also like to thank the Max Planck Institute for Mathematics for the postdoctoral fellowship, and for providing very pleasant hospitality.
\end{document}
\begin{document}
\title{Comparison of approximation algorithms for the travelling salesperson problem on semimetric graphs}
\author{Mateusz Krukowski, Filip Turobo\'s}
\affil{Institute of Mathematics, \L\'od\'z University of Technology, \\ W\'ol\-cza\'n\-ska 215, \
90-924 \ \L\'od\'z, \ Poland \\
e-mail: [email protected]}
\maketitle
\begin{abstract}
The aim of the paper is to compare different approximation algorithms for the travelling salesperson problem. We pick the most popular and widespread methods known in the literature and contrast them with a novel approach (the polygonal Christofides algorithm) described in our previous work. The paper contains a brief summary of the theory behind the algorithms and culminates in a series of numerical simulations (or ``experiments''), whose purpose is to determine ``the best'' approximation algorithm for the travelling salesperson problem on complete, weighted graphs.
\end{abstract}
\noindent
\textbf{Keywords:} semimetric spaces, travelling salesperson problem, minimal spanning tree, approximate solutions, polygonal Christofides algorithm
\\
\textbf{Mathematics Subject Classification (2020):} 54E25, 90C27, 68R10, 05C45
\section{Introduction}
The travelling salesperson problem (or the TSP for short) hardly needs any introduction. It is a sure bet that every mathematician at one point or another has heard something along these lines: ``Suppose we have $n$ cities and you are a salesperson delivering a product to each of those cities. You cannot travel the same road between two cities twice, and (obviously) you have to supply all the customers with the desired products (or else you get fired). At the end of the journey you have to come back to where you started (because the boss is waiting for your report). In what order will you visit all the cities?''
Anyone who has attended at least a couple of lectures on graph theory will immediately recognize that the story is asking for a Hamiltonian cycle of minimal weight in a given graph $G$ (representing the cities and roads between them). However, the story as presented above demands that we deal with a few technical caveats. First off, we assume that $G$ is a complete and weighted graph. Intuitively, this means that between any two cities there is a direct road connecting them.
Next, a couple of words regarding the weights of the edges (roads between cities) are in order. The first thought that springs to mind is that the ``weight of the road'' should be its length (in km, miles or whatever metric system is used in a given country). Under the assumption that the roads are as straight as a ruler, such a choice of weights leads to the so-called \textit{metric TSP}, where
\begin{center}
distance between city A and city B + distance between city B and city C \\
$\geqslant$ distance between city A and city C.
\end{center}
However, anyone who owns a car knows fairly well that reality does not always pan out that conveniently. For instance, we may imagine a highway which goes straight from city A to city B and then to city C and that a direct road from city A to city C leads through hills and valleys and is filled with numerous turns. It is definitely conceivable that in such a scenario we actually have
\begin{center}
distance between city A and city B + distance between city B and city C \\
$<$ distance between city A and city C.
\end{center}
If the Reader considers this example to be too far-fetched, let us suggest another reasoning.\footnote{For this argument to work we may even assume that the salesperson lives in a world where all roads are straight lines (or rather intervals).} As we all know from personal experience, time is a much more valuable resource in our lives than the number of kilometers (miles etc.) we have travelled. Hence, we should not be surprised that the salesperson would rather take the ring road, travelling a longer distance but saving precious time, than get stuck on a shorter road in traffic jams at every junction. This means that if the weight of the edge/road is the time it takes to travel that distance, the TSP may easily be nonmetric.
The discussion we carried out above supports the claim that the nonmetric instances of the travelling salesperson problem should not be discarded as ``uninteresting''. As we have argued, these instances model the scenarios we encounter in our daily lives and as such constitute a sufficient motivation for further research in this area.
Having justified why we feel that the nonmetric TSP is an essential part of mathematical research we proceed with laying out the general schedule of the paper. We present this brief overview to facilitate the comprehension of the ``big picture'' before we dive into technical details.
Section \ref{section:framework} introduces preliminary notions in graph theory and semimetric spaces, which are indispensable for further reading. Additionally, the section establishes the notation used throughout the paper. Section \ref{section:approximatesolutions} opens with an explanation of why the travelling salesperson problem is not as easy as ``simply checking all the Hamiltonian cycles'' on a graph. Although such an approach seems perfectly valid from a theoretical standpoint, the computational complexity of the TSP is so immense (even for relatively small graphs) that no computer will ever be able to ``brute-force this problem'' in reasonable time. Section \ref{section:approximatesolutions} goes on to describe the following methods, which return approximate solutions to the TSP:
\begin{description}
\item[$\bullet$] double minimal spanning tree algorithm (or DMST algorithm for short),
\item[$\bullet$] (refined) Andreae-Bandelt algorithm (or (r)AB algorithm for short),
\item[$\bullet$] path matching Christofides algorithm (or PMCh algorithm for short),
\item[$\bullet$] polygonal Christofides algorithm (or PCh algorithm for short), which is a novel method constructed in our previous paper.\footnote{See \cite{Krukowski2021}.}
\end{description}
\noindent
To every method we attach a pseudocode, so everyone (if they so please) can implement these algorithms in a programming language of their own choosing.
Section \ref{section:numericalcomparison} is where we put our new PCh algorithm to the test and juxtapose it with other methods generating approximate solutions to the TSP. We verify that the PCh method performs better (returns Hamiltonian cycles with lower total weight) than the rest of the algorithms on a series of random graphs of different sizes. We also test numerically that the execution time of the PCh algorithm does not deviate much from those of other algorithms (bar the DMST method, which is significantly faster). Naturally, the paper concludes with the bibliography.
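The pseudocode for each of these methods is given later in the paper. Purely as an appetizer, the core of the DMST idea can be written in a few lines of Python: build a minimum spanning tree, double its edges, and shortcut the Euler tour of the doubled tree, which amounts to visiting the vertices in depth-first preorder. The sketch below is our own illustration (names and conventions are ours, not the attached pseudocode):

```python
import math

def dmst_tour(n, w):
    """Double-MST heuristic sketch: w[i][j] is the weight of edge {i, j}.

    Builds a minimum spanning tree with Prim's algorithm, then visits the
    vertices in depth-first preorder, which shortcuts the Euler tour of the
    doubled tree into a Hamiltonian cycle.
    """
    # Prim's algorithm: grow the MST from vertex 0.
    in_tree = [False] * n
    parent = [0] * n
    best = [math.inf] * n
    best[0] = 0.0
    children = {v: [] for v in range(n)}
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and w[u][v] < best[v]:
                best[v], parent[v] = w[u][v], u
    # Depth-first preorder = shortcut Euler tour of the doubled tree.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour  # visit in this order, then return to tour[0]
```

Closing the returned order up by the edge from the last vertex back to \texttt{tour[0]} yields a Hamiltonian cycle; its behaviour on general semimetric graphs is among the things compared numerically in Section \ref{section:numericalcomparison}.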
\section{Framework of semimetric spaces and graph theory}
\label{section:framework}
In the introductory section we laid down (in rather broad terms) the travelling salesperson problem and argued that its nonmetric instances are equally important as their metric counterparts. It is high time we recalled the preliminaries of semimetric spaces in greater detail.
\begin{defin}
For a nonempty, finite set $X$, a function $d:X\times X \longrightarrow [0,+\infty)$ is called a semimetric if it satisfies the following two conditions:
\begin{description}
\item[$\bullet$] $\forall_{x,y\in X}\ d(x,y) = 0$ if and only if $x=y$, and
\item[$\bullet$] $\forall_{x,y\in X}\ d(x,y) = d(y,x)$.
\end{description}
\noindent
The pair $(X,d)$ is called a semimetric space.
\label{semimetricdefinition}
\end{defin}
Let us remark that we insist on the set $X$ being finite simply because our model example and primary motivation is the travelling salesperson problem, which makes no sense on infinite graphs. Hence, we refrain from unnecessary and excessive generality and do not consider infinite semimetric spaces.
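For a finite space stored as a symmetric matrix, both conditions of the definition above can be checked mechanically. The helper below is a minimal sketch of such a check (the matrix representation and the function name are our own conventions):

```python
def is_semimetric(d):
    """Check the semimetric axioms for a finite space given as an n-by-n matrix d."""
    n = len(d)
    for x in range(n):
        for y in range(n):
            if (d[x][y] == 0) != (x == y):   # d(x, y) = 0 if and only if x = y
                return False
            if d[x][y] != d[y][x]:           # symmetry
                return False
            if d[x][y] < 0:                  # values must lie in [0, +infinity)
                return False
    return True
```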
Next, we define $\beta-$metric spaces and $\gamma-$polygon spaces:\footnote{Both $\beta-$metric and $\gamma-$polygon spaces are well-established in the literature: \cite{An2015,Andreae1995,Bakthin1989,Bandelt1991,Bourbaki1966,Chrzaszcz2018,Chrzaszcz2018.2,Dung2016,Fagin2003,Jachymski2020,Paluszynski2009,Schroeder2006,Suzuki2017,Wilson1931,Xia2009} serve just as a couple of examples.}
\begin{defin}
A semimetric space $(X,d)$ is said to be:
\begin{description}
\item[$\bullet$] $\beta-$metric space if $\beta\geqslant 1$ is the smallest number such that the semimetric $d$ satisfies the $\beta-$triangle inequality:
\begin{gather}
\forall_{x,y,z\in X}\ d(x,z)\leqslant \beta ( d(x,y) + d(y,z) ),
\label{betainequality}
\end{gather}
\item[$\bullet$] $\gamma-$polygon space if $\gamma\geqslant 1$ is the smallest number such that the semimetric $d$ satisfies the $\gamma-$polygon inequality:
\begin{gather}
\forall_{n\in\mathbb{N}} \ \forall_{x_1,\dots,x_n\in X}\ d(x_1,x_n)\leqslant \gamma\cdot \sum_{k=1}^{n-1} d(x_k,x_{k+1}).
\label{polygonalinequality}
\end{gather}
\end{description}
\end{defin}
It is a relatively easy observation\footnote{As far as we know it first appeared in \cite{Chrzaszcz2018.2}.} that every semimetric space admits both $\beta$-metric and $\gamma$-polygon structure. Naturally, $\beta\leqslant \gamma$ but the two constants may differ in general.\footnote{For an example see \cite{Krukowski2021}.}
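Both constants can be computed by brute force for any finite semimetric space given as a matrix: $\beta$ is the worst ratio $d(x,z)/(d(x,y)+d(y,z))$ over triples of distinct points, and for $\gamma$ it suffices to compare $d(x,y)$ with the shortest-path distance between $x$ and $y$, since the chain minimising the right-hand side of \eqref{polygonalinequality} is a shortest path. The following sketch (our own helper, using Floyd--Warshall for the shortest paths) implements this:

```python
def beta_gamma(d):
    """Brute-force computation of the optimal constants beta and gamma
    for a finite semimetric given as an n-by-n matrix d."""
    n = len(d)
    # beta: worst ratio d(x, z) / (d(x, y) + d(y, z)) over distinct triples.
    beta = max(
        (d[x][z] / (d[x][y] + d[y][z])
         for x in range(n) for y in range(n) for z in range(n)
         if len({x, y, z}) == 3),
        default=1.0,
    )
    # gamma: compare d(i, j) with the shortest-path distance, computed by
    # Floyd-Warshall (d itself serves as the edge-weight matrix).
    sp = [row[:] for row in d]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                sp[i][j] = min(sp[i][j], sp[i][k] + sp[k][j])
    gamma = max(
        (d[i][j] / sp[i][j] for i in range(n) for j in range(n) if i != j),
        default=1.0,
    )
    return max(beta, 1.0), max(gamma, 1.0)
```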
Let us proceed with a brief summary of graph theory notions which are necessary for further reading. An additional advantage of our concise review is that we lay down the notational conventions used in the sequel. There could be no other starting point than the definition of a graph itself:\footnote{The definition of a graph is based on \cite[p. 2]{Diestel2000}, whereas the definition of a weighted graph was taken from \cite[p. 463]{Fletcher1991}.}
\begin{defin}
A pair $G := (V,E)$ is called a graph if $V$ is a nonempty, finite set and
$$E \subset \bigg\{\{x,y\} \ : \ x,y\in V,\ x\neq y \bigg\}.$$
\noindent
The elements of $V$ and $E$ are called vertices (or nodes) and edges, respectively.
A weighted graph is a pair $(G,\omega)$, where $G = (V,E)$ is a graph, $E\neq \emptyset$ and $\omega : E \longrightarrow (0,+\infty)$ is a positive function, called the weight.
\end{defin}
Let us pause for a moment and discuss this definition. First off, we will often utter the phrase ``\textit{Let $G$ be a graph}'' and then refer to the vertex and edge set of $G$ as $V(G)$ and $E(G)$, respectively.\footnote{An identical convention can be found in \cite{Diestel2000}.} Furthermore, those acquainted with the graph terminology will surely recognize our graphs to be \textit{undirected} and \textit{simple}. ``\textit{Undirectedness}'' of a graph means that the edges are not oriented, i.e., every edge is a set $\{x,y\}$ rather than an ordered pair $(x,y).$ On the other hand, ``\textit{simplicity}'' means that the graph contains no ``\textit{loops}'' (i.e. edges of the form $\{x,x\}$) or \textit{multiedges} (i.e. $E$ is a set and not a multiset). However, the need for \textit{multiedges} and \textit{multigraphs} will arise in Section \ref{section:approximatesolutions}, so we take the liberty of including the formal definition of these objects here:
\begin{defin}
A pair $\mathbb{G} :=(V,\mathbb{E})$ is called a multigraph if $V$ is a nonempty, finite set and
$$\mathbb{E}\subset \bigg\{ (k,\{x,y\})\ : \ x,y\in V,\ x\neq y,\ k\in\mathbb{N}_0 \bigg\}.$$
The elements of $\mathbb{E}$ are called multiedges (while the elements of $V$ are still called vertices or nodes). A weighted multigraph is a pair $(\mathbb{G},\omega)$, where $\mathbb{G}= (V,\mathbb{E})$ is a multigraph, $\mathbb{E}\neq \emptyset$ and $\omega:\mathbb{E}\longrightarrow (0,+\infty)$ is a positive function, called the weight.
\end{defin}
We proceed with a series of familiar graph theory concepts:\footnote{See \cite{Anderson2018,Bondy2008,Diestel2000}.}
\begin{defin}\label{def:subgraph etc}
Let $G$ be a graph.
\begin{description}
\item[$\bullet$] Graph $F$ is called a subgraph of $G$ if $V(F)\subset V(G)$ and $E(F)\subset E(G)$. We denote this situation by writing $F\subset G$.
\item[$\bullet$] Graph $P$ is called a path if $V(P)$ can be arranged in a sequence so that two vertices are adjacent if and only if they are consecutive in this sequence.
\item[$\bullet$] Graph $C$ is called a cycle if the removal of any edge in $C$ turns it into a path. If $G$ does not contain any cycle as a subgraph, then it is called an acyclic graph.
\item[$\bullet$] Subgraph $H\subset G$ is called a Hamiltonian cycle if it is a cycle visiting each vertex of $G$ exactly once.
\item[$\bullet$] Graph $G$ is said to be connected if for any pair of vertices $x,y\in V(G)$ there exists a path $P\subset G$ such that $x,y\in V(P)$.
\item[$\bullet$] An acyclic, connected graph is called a tree.
\end{description}
\end{defin}
The notion of a subgraph introduced in Definition \ref{def:subgraph etc} is inherently independent of the weight function (if such exists) on graph $G$. However, if $\omega$ is a weight on $G$ and $F$ is a subgraph of $G$, then $(F,\omega|_{E(F)})$ is a weighted graph and $\omega|_{E(F)}$ is called an \textit{induced weight}. Furthermore, it is often convenient to speak of a weight of a subgraph (or the whole graph itself), which we define as
\begin{equation*}
\omega(F):=\sum_{e\in E(F)} \omega(e).
\end{equation*}
It is hard to deny that this definition is a slight abuse of notation -- after all, we use the same symbol ``$\omega$'' to weigh both edges and (sub)graphs. Formally this is a mistake since edges and (sub)graphs are objects from different ``categories'' -- an edge is an unordered pair of elements while a (sub)graph is a pair consisting of a vertex set and an edge set. However, we believe that such a small notational inconsistency should not lead to any kind of misapprehension.
We are now in position to formulate the \textit{travelling salesperson problem} (or TSP for short) in a formal manner:\footnote{It should be emphasized that there are multiple other ways to define this problem, for example as an integer programming problem with constraints on vertex degrees -- see \cite{Danzig1959,Laporte1992,Matai2010,Miller1960}.}\\
\\
\fbox{
\parbox{0.95\textwidth}{\large
\textit{For a complete, weighted graph $(K_n,\omega)$ find a Hamiltonian cycle with minimal weight.}
}
}
\\
\\
As we remarked earlier, TSP is one of the most famous mathematical puzzles\footnote{Timothy Lanzone even directed a movie ``\textit{Travelling Salesman}'', which premiered at the International House in Philadelphia on June 16, 2012. The thriller won multiple awards at Silicon Valley Film Festival and New York City International Film Festival the same year.} and a detailed account of its history is far beyond the scope of this paper. Instead, we recommend just a handful of sources -- for an in-depth discussion of both the history of this problem and possible attempts at solving it see \cite{Applegate2006,Cook2012,Fleischmann1988,Fonlupt1992,Gutin2001,Laporte1992,Reinelt1994,Snyder2019,Van2020}.
To conclude this section let us bridge the gap between graph theory and semimetric spaces. Given a complete, weighted graph $G := (K_n, \omega)$ it is most natural to impose the semimetric structure on $V(K_n)$ in the following way:\footnote{Clearly, the function defined in \eqref{semimetriconwholegraph} is symmetric (due to the ``undirectedness'' of the graph $G$) and equals zero if and only if $x=y$ for all $x,y\in V(G)$.}
\begin{equation}\label{semimetriconwholegraph}
d_G(x,y):=\begin{cases}
\omega(\{x,y\}),& \text{ if }x\neq y,\\
0,& \text{ otherwise.}
\end{cases}
\end{equation}
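In code, the induced semimetric \eqref{semimetriconwholegraph} is a one-liner; the sketch below (our naming, edges again stored as frozensets) also illustrates its symmetry:

```python
# Induced semimetric d_G on V(K_n): the edge weight off the diagonal, 0 on it.
def d_G(omega, x, y):
    return 0.0 if x == y else omega[frozenset({x, y})]

omega = {frozenset({0, 1}): 2.0, frozenset({0, 2}): 1.0, frozenset({1, 2}): 4.0}
print(d_G(omega, 1, 1))                     # 0.0
print(d_G(omega, 0, 1), d_G(omega, 1, 0))   # 2.0 2.0 (symmetry)
```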
Due to this ``graph-semimetric marriage'' we are able to introduce the following convenient definition:
\begin{defin}
A complete, weighted graph $G:= (K_n,\omega)$ is called
\begin{description}
\item[$\bullet$] a metric graph if $d_G$ is a metric on $V(K_n)$,
\item[$\bullet$] a $\beta$-metric graph if $d_G$ satisfies \eqref{betainequality},
\item[$\bullet$] a $\gamma$-metric graph if $d_G$ satisfies \eqref{polygonalinequality}.
\end{description}
\end{defin}
\section{Approximate solutions to the TSP on semimetric graphs}
\label{section:approximatesolutions}
A brute-force solution to the travelling salesperson problem is to go over all possible Hamiltonian cycles in a given graph, calculate the total weight of each of them and choose one with the smallest value.\footnote{Note that we do not say \textit{the} one with the smallest value, since there might be multiple Hamiltonian cycles with equal, minimal total weight.} Each Hamiltonian cycle can be thought of as a permutation of nodes -- for instance, the Hamiltonian cycle $(x_1,x_2,x_3,\ldots, x_{n-1},x_n)$ in $K_n$ corresponds to the permutation $(1,2,3,\ldots,n-1,n)$ whereas $(x_n,x_2,x_3,\ldots,x_{n-1},x_1)$ corresponds to $(n,2,3,\ldots,n-1,1).$ However, the permutation representation of a Hamiltonian cycle is not unique, since $(1,2,3,\ldots,n-1,n),\ (2,3,\ldots,n-1,n,1),\ (3,\ldots,n-1,n,1,2),\ \ldots,\ (n-1,n,1,2,3,\ldots,n-2)$ and $(n,1,2,3,\ldots,n-1)$ all represent the same Hamiltonian cycle, namely $(x_1,x_2,x_3,\ldots,x_{n-1},x_n).$ Furthermore, every ``\textit{order reversal}'' also represents the same cycle, so $(n,n-1,\ldots,3,2,1),\ (1,n,n-1,\ldots,3,2),\ (1,2,n,n-1,\ldots,3),\ \ldots$ and $(n-1,\ldots,3,2,1,n)$ also correspond to $(x_1,x_2,x_3,\ldots,x_{n-1},x_n).$ In summary, there are $2n$ permutations of the set $\{1,2,\ldots,n\},$ which correspond to a single Hamiltonian cycle. This means that in order to find a ``true'' (rather than an approximate) solution to the travelling salesperson problem, one needs to examine $\frac{1}{2}\cdot (n-1)!$ permutations.\footnote{There are a number of algorithms for generating all the permutations of a given set (for a thorough exposition see \cite{Sedgewick1977}), with one of the most popular being Heap's algorithm (see \cite{Heap1963}).}
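The brute-force search just described can be sketched as follows (all names are ours). Fixing the first vertex already removes the $n$ rotations, so the loop examines $(n-1)!$ permutations; each cycle is still met twice, once per orientation, which affects the running time but not the result:

```python
from itertools import permutations

def brute_force_tsp(n, d):
    """Examine the Hamiltonian cycles of K_n by fixing vertex 0 and
    permuting the rest; d(i, j) is the edge weight.
    Returns (best_weight, best_cycle)."""
    best_w, best_cycle = float("inf"), None
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        w = sum(d(cycle[i], cycle[(i + 1) % n]) for i in range(n))
        if w < best_w:
            best_w, best_cycle = w, cycle
    return best_w, best_cycle

# Toy weights of a small complete graph K_4.
W = {frozenset({0, 1}): 1, frozenset({0, 2}): 2, frozenset({0, 3}): 3,
     frozenset({1, 2}): 1, frozenset({1, 3}): 2, frozenset({2, 3}): 1}
d = lambda i, j: W[frozenset({i, j})]
print(brute_force_tsp(4, d))  # (6, (0, 1, 2, 3))
```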
To illustrate the computational complexity of the TSP let us imagine a complete, weighted graph $K_{61}.$ Due to the analysis above, there are $\frac{1}{2}\cdot 60!$ Hamiltonian cycles on this graph, which is more than the estimated number of particles in the observable universe! This means that even for relatively small graphs ($61$ nodes are certainly within human comprehension) the brute-force approach of checking every Hamiltonian cycle and looking for the one with the minimal total weight is extremely time-consuming.
One possibility of speeding up the search for the ``ideal'' solution to the travelling salesperson problem is to use the Held-Karp algorithm.\footnote{See \cite{Held1971}.} This method relies on the function $HK$ defined recursively for every node $x \in \{x_2,\ldots,x_n\}$ and every subset of nodes $S\subset \{x_2,\ldots,x_n\}\backslash \{x\}$ as
$$HK(x, S) := \min_{y\in S}\ \bigg(HK(y, S\backslash\{y\}) + d_G(y,x)\bigg)\hspace{0.4cm}\text{and}\hspace{0.4cm} HK(x,\emptyset) := d_G(x_1,x).$$
\noindent
Intuitively, the value $HK(x,S)$ is the minimal total weight of a path between node $x_1$ and node $x,$ which goes through all the vertices of $S$ in some order (the path does not use any other nodes). With this interpretation in mind, solving the TSP boils down to calculating the values $HK(x_i,\{x_2,\ldots,x_n\}\backslash\{x_i\}) + d_G(x_i,x_1)$ for every $i = 2,\ldots,n$ and choosing the minimal one (the term $d_G(x_i,x_1)$ closes the cycle). Tracing back all the choices of the algorithm we are able to reconstruct the Hamiltonian cycle with the minimal weight (i.e., an ideal solution to the TSP).
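A memoized sketch of this recursion (our naming; vertex $0$ plays the role of $x_1$, and the edge back to it is added at the end so as to close the cycle):

```python
from functools import lru_cache

def held_karp(n, d):
    """Weight of a minimal Hamiltonian cycle on K_n with weights d(i, j)."""
    @lru_cache(maxsize=None)
    def HK(x, S):                       # S: frozenset of intermediate nodes
        if not S:
            return d(0, x)
        return min(HK(y, S - {y}) + d(y, x) for y in S)

    rest = frozenset(range(1, n))
    # min over x of (best path 0 -> ... -> x) plus the closing edge d(x, 0)
    return min(HK(x, rest - {x}) + d(x, 0) for x in rest)

W = {frozenset({0, 1}): 1, frozenset({0, 2}): 2, frozenset({0, 3}): 3,
     frozenset({1, 2}): 1, frozenset({1, 3}): 2, frozenset({2, 3}): 1}
d = lambda i, j: W[frozenset({i, j})]
print(held_karp(4, d))  # 6
```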
Although the Held-Karp algorithm is a reasonable improvement with respect to the brute-force approach it is still characterized by the exponential time complexity. This renders the method impractical for graphs of larger size. Hence the need for algorithms that solve the TSP considerably faster even at the price of returning approximate solutions rather than the ideal ones. The current section aims to summarize these approximation methods.
\subsection{Double minimal spanning tree algorithm}
As the name itself suggests, the first method we discuss hinges upon the notion of a (minimal) spanning tree:
\begin{defin}
\label{def:paths trees spanning trees}
Let $(K_n,\omega)$ be a complete, weighted graph. A tree $T$, which satisfies $V(T) = V(K_n)$ is called a spanning tree. Furthermore, if $\mathcal{S}$ denotes the set of all spanning trees of $(K_n,\omega)$, then a tree $T$ which satisfies
$$\omega(T) = \min_{T' \in \mathcal{S}} \omega(T').$$
\noindent
is called a minimal spanning tree.
\end{defin}
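A minimal spanning tree in the sense of Definition \ref{def:paths trees spanning trees} can be found with, e.g., Prim's algorithm; below is a lazy-deletion sketch on a complete graph (all names are ours):

```python
import heapq

def prim_mst(n, d):
    """Prim's algorithm on K_n with weights d(i, j); returns MST edges."""
    in_tree = [True] + [False] * (n - 1)      # grow the tree from vertex 0
    heap = [(d(0, v), 0, v) for v in range(1, n)]
    heapq.heapify(heap)
    edges = []
    while len(edges) < n - 1:
        w, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue                           # stale heap entry, skip
        in_tree[v] = True
        edges.append((u, v))
        for z in range(n):
            if not in_tree[z]:
                heapq.heappush(heap, (d(v, z), v, z))
    return edges

W = {frozenset({0, 1}): 1, frozenset({0, 2}): 2, frozenset({0, 3}): 3,
     frozenset({1, 2}): 1, frozenset({1, 3}): 2, frozenset({2, 3}): 1}
d = lambda i, j: W[frozenset({i, j})]
print(prim_mst(4, d))  # [(0, 1), (1, 2), (2, 3)]
```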
We usually say that $T$ is ``\textit{a}'' rather than ``\textit{the}'' minimal spanning tree since for a given graph there might be more than one such tree. The most obvious example for such a situation is a complete graph with every edge of equal weight.
Prior to elaborating on the double minimal spanning tree method itself, let us recall the concepts of a walk and a tree traversal:\footnote{See \cite[p.10]{Diestel2000}.}
\begin{defin}
\label{def:walk and tree traversal}
Let $G$ be a graph (not necessarily complete or weighted). A $j$-element sequence of vertices $(x_1,\dots,x_j)$ such that $\{x_i,x_{i+1}\}\in E(G)$ for every $i<j$ is called a walk on $G$. If $T$ is a tree on $K_n$, then any walk on $T$ which visits every edge exactly twice is called a tree traversal.
\end{defin}
We are now fully-equipped to formulate the general framework of the \textit{double minimal spanning tree algorithm} (or the \textit{DMST algorithm} for short):\\
\noindent
\textbf{Input:} A complete, weighted graph $(K_n,\omega).$
\begin{description}
\item[\textbf{Step 1.}] Find a minimal spanning tree $T$ of the graph.
\item[\textbf{Step 2.}] Via a depth-first search (or DFS for short) on $T$ construct a tree traversal (which depends on the root of the algorithm).
\item[\textbf{Step 3.}] Perform a shortcutting procedure on the tree traversal (from the previous step) to obtain a Hamiltonian cycle $H_{DMST}$.
\end{description}
The first two steps are widely discussed in numerous sources.\footnote{See \cite[Chapter 3.10]{Deo1974} or \cite[Chapter 12]{Papadimitriou1998} for the algorithms generating a minimal spanning tree, and \cite[Chapter 22.3]{Cormen2009} for the DFS algorithm.} Hence, we restrict ourselves to presenting an overview of the shortcutting procedure.
Given a minimal spanning tree $T$ in a complete, weighted graph $(K_n,\omega)$ we choose an arbitrary vertex $x_1\in V(T),$ which we refer to as the \textit{root of the tree}. Performing a DFS yields a tree traversal $(x_1,\ldots, x_{2n-1}).$ Next, we define $(y_1,\dots,y_n,y_{n+1})$ to be a sequence obtained from the traversal $(x_1,\dots,x_{2n-1})$ by ``\textit{shortcutting}'', i.e.:
\begin{equation}\label{eq:definition_of_yi}
y_i:=\begin{cases}
x_1, & i=1 \mbox{ or } i=(n+1), \\
x_{m(i)}, & 1< i <n+1,
\end{cases}
\end{equation}
\noindent
where
$$m(i):=\min\bigg\{j: i\leqslant j \leqslant 2n-1 \ \text{and} \ x_j \not\in \{x_1,\dots,x_{j-1} \} \bigg\}.$$
\noindent
Although the definition (\ref{eq:definition_of_yi}) seems rather daunting, there is in fact a simple way to obtain $(y_1,\dots,y_n,y_{n+1})$ from $(x_1,\ldots,x_{2n-1})$ -- we go through the elements of the tree traversal one by one and cross out every ``repetition'' (an element that we have seen earlier in the sequence) except for $x_{2n-1},$ which is the same as $x_1$.
The shortcutting procedure results in a sequence $(y_i)_{i=1}^{n+1},$ where every vertex (except for the root $x_1$) appears precisely once (after all, we did cross out all repetitions other than $x_1=x_{2n-1}$). This means that the sequence is in fact a Hamiltonian cycle on $K_n$, which we denote by $H_{DMST}$ and refer to as the \textit{DMST Hamiltonian cycle}. We should, however, bear in mind that there might be multiple DMST Hamiltonian cycles on a given graph, which depend on the choice of both the minimal spanning tree $T$ and the root of the algorithm $x_1.$
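Steps 2 and 3 of the DMST algorithm can be sketched as follows (our naming; the tree is given by an adjacency dictionary). Note that the DFS walk records a vertex each time it is entered or returned to, so every tree edge is used exactly twice and the walk has $2n-1$ entries:

```python
def tree_traversal(adj, root):
    """DFS walk on a tree that records each entry and return; the result
    is a tree traversal (x_1, ..., x_{2n-1})."""
    walk = []
    def dfs(u, parent):
        walk.append(u)
        for v in adj[u]:
            if v != parent:
                dfs(v, u)
                walk.append(u)   # returning along the same edge
    dfs(root, None)
    return walk

def shortcut(walk):
    """Cross out repetitions, then close the cycle at the root."""
    seen, cycle = set(), []
    for x in walk:
        if x not in seen:
            seen.add(x)
            cycle.append(x)
    cycle.append(walk[0])        # y_{n+1} = x_1
    return cycle

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}    # a small tree
walk = tree_traversal(adj, 0)                    # [0, 1, 3, 1, 0, 2, 0]
print(shortcut(walk))                            # [0, 1, 3, 2, 0]
```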
If $H_{ideal}$ denotes the ideal (i.e., not approximate) solution to the TSP, then every minimal spanning tree $T$ satisfies the inequality:
\begin{equation}\label{eq:comparing optimal and MST}
\omega(T)\leqslant \omega(H_{ideal}).
\end{equation}
\noindent
The proof of this fact can be shortened to a simple observation that removing a single edge from $H_{ideal}$ leaves us with a spanning tree, whose cost is (by the very definition of the minimal spanning tree) bounded from below by $\omega(T)$. Inequality \eqref{eq:comparing optimal and MST} enables us to write down the following:
\begin{thm}\label{thm:on approximation of TSP in polygon graphs}
Let
\begin{description}
\item[$\bullet$] $(K_n,\omega)$ be a complete, $\gamma$-polygon graph,
\item[$\bullet$] $T$ be its minimal spanning tree,
\item[$\bullet$] $x_1$ be any node,
\item[$\bullet$] $H_{DMST}$ be a Hamiltonian cycle (corresponding to tree $T$ and root $x_1$) constructed by the double minimal spanning tree method.
\end{description}
\noindent
Then
\begin{equation}
\label{omegaHDMST}
\omega(H_{DMST}) \leqslant 2\gamma \omega(H_{ideal}).
\end{equation}
\end{thm}
\begin{proof}
Let $(x_k)_{k=1}^{2n-1}$ be a tree traversal for the minimal spanning tree $T$ and let $H_{DMST} = (y_k)_{k=1}^{n+1}.$ We have
\begin{equation}
\forall_{i=1,\ldots,n}\ d(y_{i},y_{i+1}) = d(x_{m(i)},x_{m(i+1)}) \stackrel{ \eqref{polygonalinequality}}{\leqslant} \gamma \cdot \sum_{j=m(i)}^{m(i+1)-1} d(x_{j},x_{j+1}).
\label{rpi_case_assumption}
\end{equation}
\noindent
Consequently, we obtain
{\small \begin{gather*}
\omega(H_{DMST}) = \sum_{i=1}^{n} d(y_i,y_{i+1}) \stackrel{\eqref{rpi_case_assumption}}{\leqslant}
\gamma \cdot \sum_{i=1}^n \sum_{j=m(i)}^{m(i+1)-1} d(x_{j},x_{j+1})
\leqslant \gamma\sum_{k=1}^{2n-2} d(x_k,x_{k+1}) \leqslant 2\gamma\omega(T).
\end{gather*}}
\noindent
Due to the inequality \eqref{eq:comparing optimal and MST} we conclude the proof.
\end{proof}
\subsection{Andreae-Bandelt algorithms}
The basic version of the \textit{Andreae-Bandelt algorithm} (or \textit{AB algorithm} for short) was laid down in \cite{Andreae1995}. The crux of the algorithm is the fact that the cube $T^3$ of any tree $T$ contains a Hamiltonian cycle.\footnote{Let us recall that for a tree $T$, the cube $T^3$ is a graph on the same set of vertices, i.e. $V(T^3) = V(T)$, such that $\{x,y\} \in E(T^3)$ if and only if there exists a path in $T$ between $x$ and $y$ which consists of at most $3$ edges.} The Andreae-Bandelt method (referred to as the $T^3$-algorithm by the authors themselves) is summarized by the following pseudocode:\footnote{See \cite{Andreae2001}.}\\
\noindent
\textbf{Input:} A tree $T$ with $|V(T)|\geqslant 3$ and an edge $\{x_1,x_2\} \in E(T).$
\begin{description}
\item[\textbf{Step 1.}] Let $T_i$ be the connected component of $T-\{x_1,x_2\}$ containing the node $x_i$ for $i=1,2.$
\item[\textbf{Step 2.}] If $|V(T_i)|\geqslant 2$ pick any $y_i\in V(T_i)$ such that $\{x_i,y_i\} \in E(T_i).$ If $|V(T_i)| = 1$ put $y_i := x_i.$
\item[\textbf{Step 3.}] If $|V(T_i)|\geqslant 3,$ apply recursively the algorithm with $T_i$ and $\{x_i,y_i\}$ as the inputs, thus obtaining a Hamiltonian cycle $H_i$ on $T_i^3$ which contains the edge $\{x_i,y_i\}.$
\item[\textbf{Step 4.}] If $|V(T_i)| \geqslant 3$ put $P_i := H_i - \{x_i,y_i\},$ otherwise put $P_i := T_i.$
\item[\textbf{Step 5.}] Construct the Hamiltonian cycle by joining $P_1, P_2$ and the edges $\{x_1,x_2\},\ \{y_1,y_2\}.$
\end{description}
Andreae and Bandelt showed\footnote{See Theorem 2 in \cite{Andreae1995}.} that the application of their method to a minimal spanning tree (and an arbitrary edge) of a complete, weighted graph $(K_n,\omega)$ yields a Hamiltonian cycle $H_{AB}$ which satisfies
\begin{gather}\label{omegaHAB}
\omega(H_{AB}) \leqslant \frac{3\beta^2 + \beta}{2}\cdot \omega(H_{ideal}),
\end{gather}
\noindent
where $H_{ideal}$ is an ideal solution (a minimal Hamiltonian cycle) to the travelling salesperson problem. A couple of years later, Andreae and Bandelt reanalysed their method and came up with a way of enhancing its performance.\footnote{See \cite{Andreae2001}.} For the most part, the \textit{refined Andreae-Bandelt algorithm} (or \textit{rAB algorithm} for short) follows the same steps as its ``basic'' counterpart. The only difference lies in the choice of vertices $y_i,$ so \textbf{Step 2.} is replaced with
\begin{description}
\item[\textbf{(refined) Step 2.}] If $|V(T_i)|\geqslant 2$ pick $y_i\in V(T_i)$ such that $\{x_i,y_i\} \in E(T_i)$ and
$$\omega(\{x_i,y_i\}) = \min\bigg\{ \omega(\{x_i,y\})\ :\ \{x_i,y\} \in E(T_i) \bigg\}.$$
\noindent
If $|V(T_i)| = 1$ put $y_i := x_i.$
\end{description}
\noindent
If $H_{rAB}$ denotes the Hamiltonian cycle constructed by the refined Andreae-Bandelt method, then it turns out\footnote{See Theorem 1 in \cite{Andreae2001}.} that the following estimate holds true:
\begin{gather}\label{omegaHrAB}
\omega(H_{rAB}) \leqslant \frac{\beta^2 + \beta}{2}\cdot \omega(H_{ideal}).
\end{gather}
\subsection{Path matching Christofides algorithm}
The idea of replacing the matching in the original Christofides algorithm by minimum-weight perfect path matching is due to B\"{o}ckenhauer et al. \cite{Bockenhauer2002}. This approach led to a procedure known as the \textit{path matching Christofides algorithm} (or \textit{PMCh} algorithm for short), which was then slightly refined by Krug \cite{Krug2013} as the original version of the algorithm failed (in certain cases) to deliver a Hamiltonian cycle! The steps for this refined PMCh method are as follows:\\
\noindent
\textbf{Input:} A complete, weighted graph $G:= (K_n,\omega).$
\begin{description}
\item[\textbf{Step 1.}] Find a minimal spanning tree $T$ of $G$.\footnote{We refer the Reader to \cite[Chapter 3.10]{Deo1974}, \cite[Chapter 12]{Papadimitriou1998} for the descriptions of Kruskal's and Prim's algorithm and \cite{Nesetril2001} for a thorough exposition of Boruvka's algorithm.}
\item[\textbf{Step 2.}] Let $V_{odd}(T)$ be the set of all odd vertices (i.e., vertices with odd degrees) of $T$. Let $F$ be a weighted subgraph induced on $G$ by $V_{odd}(T)$.
\item[\textbf{Step 3.}] Find a minimum-weight perfect path matching\footnote{The path matching $M$ is a family of paths in $G$ with disjoint endpoints which connect pairs of nodes from $F$. The minimum-weight path matching is a path matching with the least total weight. It is said to be perfect if every node in $F$ is an endpoint for one of the paths in $M$. Due to the minimality of this structure, the paths in $M$ form a forest and every pair of paths is edge-disjoint (see \cite[Claim 1]{Bockenhauer2002}).} in $F$ and denote it by $M$.
\begin{description}
\item[\textbf{Step 3.1.}] Compute the shortest paths connecting vertices of $F$ in the original graph $G$.\footnote{Our implementation uses Floyd-Warshall method (see \cite{Floyd1962}) as it is well-suited for ``dense'' graphs.}
\item[\textbf{Step 3.2.}] Construct a complete graph $F'$ on the vertices of $F$ with edge weights equal to the weights of the shortest paths computed in \textbf{Step 3.1}.
\item[\textbf{Step 3.3.}] Find a minimum-weight perfect matching $M$ in $F'$.\footnote{Such a matching exists because $F$ has an even number of vertices due to the ``handshaking lemma'' (see Proposition 1.2.1 in \cite{Diestel2000}).}
\item[\textbf{Step 3.4.}] Let $\mathcal{P}$ be a family of paths $\{P_i\}$, where $P_i$ is the shortest path (computed in \textbf{Step 3.1}) connecting the endpoints of the $i$-th edge from $M$. $\mathcal{P}$ is the sought minimum-weight perfect path matching.
\end{description}
\item[\textbf{Step 4.}] Resolve conflicts on $\mathcal{P}$ to obtain vertex-disjoint path matching $\mathcal{P}'$. This can be done by finding a path with only one conflict\footnote{``\textit{Conflict}'' is defined as a vertex belonging to more than one path in $\mathcal{P}$. As long as $\mathcal{P}$ is not vertex-disjoint, there always exists a path with exactly one conflict, since the graph composed of all paths in $\mathcal{P}$ is cycle-free.} in $\mathcal{P}$, then either bypassing the conflicting vertex if it is internal to this path or recombining it with the other conflicting path. \footnote{A detailed graphic description of this procedure can be found in \cite[Procedure 1, Fig. 4]{Bockenhauer2002} and in \cite[Algorithm 2]{Krug2013}.}
\item[\textbf{Step 5.}] Construct an Eulerian walk $\eta$ on a multigraph obtained from combining $T$ with the paths from $\mathcal{P},$ which alternates between complete paths from $\mathcal{P}$ and the paths which are subgraphs of $T$. This Eulerian walk can be found in analogous way to the one presented in \textbf{Step 4} of the polygonal Christofides algorithm (see the next subsection) -- simply replace paths from $\mathcal{P}$ with single edges (connecting the endpoints of each path) and apply the enhanced Hierholzer procedure to such a multigraph. Let $\mathcal{Q}$ denote the set of all paths in $\eta$ which were constructed in this part of the procedure as the subpaths of $T$.
\item[\textbf{Step 6.}] Transform $\mathcal{Q}$ to obtain a forest of degree at most $3$ as follows:
\begin{description}
\item[\textbf{Step 6.1}] Fix any root vertex $r\in T$. For every vertex $x$ in $T$, let $d_r(x)$ denote its node-distance to the selected root (it can be calculated easily by standard depth-first search).
\item[\textbf{Step 6.2}] For every path $Q\in \mathcal{Q}$ let $x_Q$ be the vertex in $Q$ with the minimal value of $d_r$. If $x_Q$ is not an endpoint of $Q$ and its degree in $T$ exceeds $3$,\footnote{This condition imposed on the degree of $x_Q$ in $T$ is precisely the remedy which was introduced by Krug in \cite{Krug2013}.} redefine $Q$ by omitting the vertex $x_Q$.\footnote{Notice that $Q$ is still a path in $G$ (however, it no longer is a subgraph of $T$) which connects the same endpoints as previously.}
\end{description}
\item[\textbf{Step 7.}] Replace paths from $T$ which were used in $\eta$ with their refined versions obtained in \textbf{Step 6}. Remove the remaining conflicts as follows:
\begin{description}
\item[\textbf{Step 7.1}] Let $x\in \eta$ be any vertex appearing in $\eta$ twice.\footnote{A single vertex can appear in $\eta$ either once or twice.} If its neighbours in the initial state of $\eta$ are conflicts, bypass one of them, otherwise -- bypass any other duplicated vertex.
\item[\textbf{Step 7.2}] Repeat \textbf{Step 7.1} until no vertex appears in $\eta$ more than once.
\end{description}
\item[\textbf{Step 8.}] Return the refined $\eta$ from \textbf{Step 7.} as the Hamiltonian cycle $H_{PMCh}$.
\end{description}
It can be proved\footnote{See \cite[Theorem 2.1]{Krug2013} and \cite[Claim 8.]{Bockenhauer2002}.} that:
\begin{gather}\label{omegaPMCh}
\omega(H_{PMCh}) \leqslant \frac{3\beta^2}{2}\cdot \omega(H_{ideal}).
\end{gather}
\subsection{Polygonal Christofides algorithm}
The final approximation algorithm we take into consideration is the \textit{polygonal Christofides algorithm} (or \textit{PCh algorithm} for short), which we introduced in our previous work.\footnote{See \cite{Krukowski2021}.} Using the $\gamma-$polygon structure of the graph, we made necessary adjustments to the classical Christofides algorithm accounting for the fact that the graph need not be metric (i.e., $\gamma$ need not be equal to $1$):\\
\noindent
\textbf{Input:} A complete, weighted graph $G:=(K_n,\omega).$
\begin{description}
\item[\textbf{Step 1.}] Find a minimal spanning tree $T$ of $G$.\footnote{We refer the Reader to \cite[Chapter 3.10]{Deo1974}, \cite[Chapter 12]{Papadimitriou1998} for descriptions of Kruskal's and Prim's algorithm and \cite{Nesetril2001} for a thorough exposition of Boruvka's algorithm.}
\item[\textbf{Step 2.}] Let $V_{odd}(T)$ be the set of all odd vertices (i.e., vertices with odd degrees) of $T$. Let $F$ be a weighted subgraph induced on $G$ by $V_{odd}(T)$.
\item[\textbf{Step 3.}] Find a minimum-weight perfect matching\footnote{Such a matching exists because $F$ is a complete graph with an even number of vertices. The latter observation is due to the ``handshaking lemma'' -- see Proposition 1.2.1 in \cite{Diestel2000}.} in $F$ and denote it by $M$. This can be done by applying the original blossom algorithm (due to Edmonds) or one of its subsequent versions.\footnote{The initial version of the minimum-weight perfect matching algorithm \cite{Edmonds1965} had complexity of order $\mathcal{O} \left(|E|\cdot |V|^2\right)$. This bound has been consistently improved over the years -- see Tables I and II in \cite{Cook1999} for a detailed exposition.}
\item[\textbf{Step 4.}] Use the following steps (from 4.1 to 4.3) to perform the enhanced Hierholzer algorithm and find an Eulerian walk\footnote{Just as Hamiltonian cycle is a cycle which visits every vertex exactly once, the \textit{Eulerian walk} is a walk which traverses through each edge of the graph exactly once. This walk can be found using either Fleury's algorithm or Hierholzer's algorithm (with time-complexities $\mathcal{O}(|E|^2)$ and $\mathcal{O}(|E|),$ respectively). These algorithms can be found as X.2 and X.4 in \cite[Chapter 10]{Fleischner1991}, respectively.} $W$ in the multigraph $\mathbb{G} = (V(T), \mathbb{E}),$ where
$$\mathbb{E} := \bigg\{(0,e) \ :\ e\in E(T)\bigg\} \cup \bigg\{(1,e) \ :\ e\in E(M)\bigg\}.$$
\begin{description}
\item[\textbf{Step 4.1}] Select an arbitrary starting vertex $x_1\in V(\mathbb{G})$, which is incident to an edge in $E(M)$. Let $W:=(y_1)$, where $y_1:=x_1$. Mark $y_1$ as a ``\textit{recently visited vertex}''.
\item[\textbf{Step 4.2}] Extend the sequence of vertices $W$ in the following way:
\begin{description}
\item[(a)] select any neighbour $y$ of the ``\textit{recently visited vertex}'' connected to it by an ``\textit{unused}'' edge in $E(M)$ (if possible) or in $E(\mathbb{G})$ (this is always possible because each vertex in $V(\mathbb{G})$ has even degree).\footnote{Due to this ``prioritization'' $\{y_1,y\}$ is guaranteed to be taken from $E(M)$.}
\item[(b)] add $y$ to the sequence $W$. Mark $y$ as the ``\textit{recently visited vertex}'' and denote the traversed edge as ``\textit{used}''. If there is an unused edge incident to $y$, go back to step a).\footnote{At each step of the algorithm the only vertices with odd number of unused edges are $x_1$ and the current vertex $y$. Therefore, the loop can terminate only if we return to the initial vertex $x_1$.}
\end{description}
\item[\textbf{Step 4.3}] If there are any unused edges after this process, start at any vertex $x\in W$ which has at least one neighbour not in $W$. Repeat the procedure described in Step 4.2 obtaining a closed walk $W_x.$ Replace the last appearance of $x$ in sequence $W$ with $W_x$.
\end{description}
\item[\textbf{Step 5.}] Use the following steps (5.1 and 5.2) to perform the enhanced shortcutting procedure and obtain a Hamiltonian cycle $H_{PCh} = (y_1,y_2,\ldots,y_n,y_{n+1})$ on $G$.
\begin{description}
\item[\textbf{Step 5.1}] Put $y_1:=x_1$ and $y_2:=x_2$. From the construction (in Step 4.) of the Eulerian walk $W$ it follows that $\{x_1,x_{2}\}\in E(M)$. Let $i:=3$ and $j:=3$.
\item[\textbf{Step 5.2}] While $i\leqslant n$ perform the following steps:
\begin{description}
\item[(a)] If $x_j\notin V(M)$ and it has already appeared in $H_{PCh}$, increment $j$.
\item[(b)] If $x_j\notin V(M)$ and it has not yet appeared in $H_{PCh}$, put $y_i:=x_j$ and increment both $i$ and $j$.
\item[(c)] If $x_j\in V(M)$, $\{x_j,x_{j+1}\}\notin E(M)$ and $\{x_{j-1},x_j\}\notin E(M)$, increment $j$.
\item[(d)] Otherwise, i.e., in the situation where $x_j\in V(M)$ and either $\{x_j,x_{j+1}\}\in E(M)$ or $\{x_{j-1},x_j\}\in E(M)$, let $y_i:=x_j$ and increment both $i$ and $j$.
\end{description}
\end{description}
\end{description}
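A plain (non-prioritized) Hierholzer sketch for the Eulerian-walk part of Step 4 is given below; the matching-first edge choice of Step 4.2(a) and the splicing bookkeeping of Step 4.3 are folded into the standard stack-based formulation, and all names are ours:

```python
from collections import defaultdict

def eulerian_walk(multi_edges, start):
    """Hierholzer's algorithm on a multigraph given as a list of (u, v)
    pairs; every vertex must have even degree.  Returns a closed walk
    that uses each edge exactly once."""
    incident = defaultdict(list)      # vertex -> indices of incident edges
    for k, (u, v) in enumerate(multi_edges):
        incident[u].append(k)
        incident[v].append(k)
    used = [False] * len(multi_edges)
    stack, walk = [start], []
    while stack:
        u = stack[-1]
        while incident[u] and used[incident[u][-1]]:
            incident[u].pop()         # discard already-traversed edges
        if incident[u]:
            k = incident[u].pop()
            used[k] = True
            a, b = multi_edges[k]
            stack.append(b if a == u else a)
        else:
            walk.append(stack.pop())  # dead end: emit vertex and backtrack
    return walk[::-1]

# The 4-cycle as a multigraph: an Eulerian walk is the cycle itself.
print(eulerian_walk([(0, 1), (1, 2), (2, 3), (3, 0)], 0))  # [0, 3, 2, 1, 0]
```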
The climax of our previous paper was the proof that the PCh algorithm produces a Hamiltonian cycle $H_{PCh},$ which satisfies the following estimate:\footnote{See Theorem 8 in \cite{Krukowski2021}.}
\begin{gather}\label{omegaHpolygon}
\omega(H_{PCh}) \leqslant \frac{3\gamma}{2}\cdot \omega(H_{ideal}).
\end{gather}
As a closing remark of this section we may compare the estimate \eqref{omegaHpolygon} with those of the previous algorithms (see \eqref{omegaHDMST}, \eqref{omegaHAB}, \eqref{omegaHrAB}, \eqref{omegaPMCh}) and arrive at the conclusion that the PCh algorithm exhibits the best worst-case behaviour of all the approximation methods (whenever $\gamma\in [\beta,2\beta)$ and $\beta \geqslant 3$). The next section is devoted to supporting this claim on the basis of numerical simulations.
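For a quick numerical comparison of the worst-case constants appearing in \eqref{omegaHDMST}, \eqref{omegaHAB}, \eqref{omegaHrAB}, \eqref{omegaPMCh} and \eqref{omegaHpolygon}, one may simply tabulate them for sample values of $\beta$ and $\gamma$ (the snippet is purely illustrative):

```python
# Worst-case approximation constants quoted in this section.
bounds = {
    "DMST": lambda beta, gamma: 2 * gamma,
    "AB":   lambda beta, gamma: (3 * beta**2 + beta) / 2,
    "rAB":  lambda beta, gamma: (beta**2 + beta) / 2,
    "PMCh": lambda beta, gamma: 3 * beta**2 / 2,
    "PCh":  lambda beta, gamma: 3 * gamma / 2,
}
beta, gamma = 3, 3          # sample values with gamma in [beta, 2*beta)
for name, f in sorted(bounds.items(), key=lambda kv: kv[1](beta, gamma)):
    print(f"{name}: {f(beta, gamma):g}")
# PCh: 4.5, DMST: 6, rAB: 6, PMCh: 13.5, AB: 15
```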
\section{Numerical comparison of the approximation methods}
\label{section:numericalcomparison}
In the previous section we reviewed a number of algorithms generating approximate solutions to the travelling salesperson problem. Apart from the methods themselves, we have provided estimates \eqref{omegaHDMST}, \eqref{omegaHAB}, \eqref{omegaHrAB}, \eqref{omegaPMCh} and \eqref{omegaHpolygon}, which tell us how far the total weight of an approximate solution can deviate from the total weight of an ideal Hamiltonian cycle (i.e., a ``true'' solution to the TSP). For instance, \eqref{omegaHDMST} guarantees that the weight of the approximate solution generated by the DMST method cannot exceed $2\gamma$ times the weight of the ideal Hamiltonian cycle, whereas the application of the PCh algorithm reduces this constant to $\frac{3\gamma}{2}.$ These estimates provide valuable insights into worst-case scenarios of the algorithms' performance, but these theoretical deliberations are far from being the sole way of measuring the quality of the presented methods. In the current section, instead of focusing on the worst-case scenarios, we concentrate on examples and numerical simulations, which test how the algorithms fare in ``real life''.
We commence with a concrete example of a weighted $K_7$ graph, whose nodes are labeled ``$0$'', ``$1$'', $\ldots$, ``$6$'' (see Fig. \ref{fig:exampleofK7}). Browsing through every Hamiltonian cycle (there are 360 of them) or running the Held-Karp algorithm we discover the solution $(0,1,2,4,5,3,6)$ to the TSP with the total weight of $2.07$ (see Fig. \ref{fig:K7_and_solution_to_TSP}). The following table presents the performance of approximation algorithms reviewed in the previous section:
\begin{center}
\begin{tabular}{ | c | c | c |}
\hline
Algorithm & Hamiltonian cycle & Total weight \\ \hline
DMST & $(0,1,2,6,3,5,4)$ & 2.22 \\ \hline
AB & $(0,2,1,3,5,6,4)$ & 3.29 \\ \hline
rAB & $(0,1,2,6,5,3,4)$ & 3.08 \\ \hline
PMCh & $(0,1,2,6,3,5,4)$ & 2.22 \\ \hline
PCh & $(0,4,2,1,6,3,5)$ & 2.18 \\ \hline
\end{tabular}
\end{center}
The cycles obtained by each of the algorithms are presented in the following figures:
\begin{figure}
\caption{Example of a weighted $K_7$ graph.}
\label{fig:exampleofK7}
\end{figure}
\begin{figure}
\caption{Ideal solution to the TSP (in red).}
\label{fig:K7_and_solution_to_TSP}
\end{figure}
\begin{figure}
\caption{Approximate solution generated by the DMST or the PMCh algorithm (in red).}
\label{fig:K7_and_DMST}
\end{figure}
\begin{figure}
\caption{Approximate solution generated by the AB algorithm (in red).}
\label{fig:K7_and_A}
\end{figure}
\begin{figure}
\caption{Approximate solution generated by the rAB algorithm (in red).}
\label{fig:K7_and_rA}
\end{figure}
\begin{figure}
\caption{Approximate solution generated by the PCh algorithm (in red).}
\label{fig:K7_and_KT}
\end{figure}
Although this preliminary instance seems promising, we should not jump to any conclusions on the basis of a solitary example. In order to avoid accusations that the $K_7$ example given above was a ``fluke'', we have devised the following experiment: we generate 45 random graphs with 75 nodes and run all 5 algorithms to find the approximate solutions on these graphs. The results are enclosed below:
\begin{figure}
\caption{Approximate solutions to the TSP on 45 randomized graphs with 75 nodes. X-axis represents the number of the ``test'', Y-axis represents the total weight of the Hamiltonian cycle.}
\label{fig:multiplot75}
\end{figure}
We can distinguish 3 ``layers'' in this plot. The first one spans roughly from 25 to 35 -- this interval contains most of the approximate solutions generated by AB and rAB algorithms. The second ``layer'' is from 13 or 14 to 20 and contains the bulk of approximate solutions returned by the DMST and PMCh methods. The third and final layer consists of a single (red) plot representing the approximate solutions produced by the PCh algorithm. It is clear that these approximate solutions are better (i.e., have lower total weight) than the ones generated by all other methods.
In order to confirm the conclusions drawn from the first experiment, we have run it again, increasing the size of the graphs to 100 and then to 125 nodes. As seen in the figures below, the dominance of the PCh algorithm over all other methods remains unquestioned.
\begin{figure}
\caption{Approximate solutions to the TSP on 45 randomized graphs with 100 nodes. X-axis represents the number of the ``test'', Y-axis represents the total weight of the Hamiltonian cycle.}
\label{fig:multiplo100}
\end{figure}
\begin{figure}
\caption{Approximate solutions to the TSP on 45 randomized graphs with 125 nodes. X-axis represents the number of the ``test'', Y-axis represents the total weight of the Hamiltonian cycle.}
\label{fig:multiplot125}
\end{figure}
At this point we should be fairly convinced that the PCh algorithm returns better approximate solutions for larger graphs than the rest of the discussed methods. One last doubt that should be dispelled is that of time complexity. After all, the Held-Karp algorithm returns the ideal solution to the TSP, but as we have remarked earlier, its exponential time complexity makes it impractical. In theory, the PCh method should not suffer from this cardinal flaw since its time complexity equals $\mathcal{O}(n^3)$ (see Theorem 9 in \cite{Krukowski2021}). Let us verify this claim with the following numerical simulation: we generate 100 random graphs of size $n$ for every $n = 5,\ldots, 100$ and on every chunk of these 100 graphs we run all 5 algorithms and compute the average time of execution. The results are illustrated in the picture below:
\begin{figure}
\caption{Average time of execution of approximate algorithms on chunks of 100 randomized graphs. X-axis represents the number of the chunk and the size of graphs at the same time. Y-axis represents time in seconds.}
\label{fig:timecomplexity}
\end{figure}
One thing that is impossible to overlook in this plot is the excellent time-efficiency of the DMST method. This is consistent with its theoretical time complexity of $\mathcal{O}(n^2\log(n))$. The next thing that catches one's eye is that the PCh algorithm is more time-efficient than the PMCh method and both variants of the Andreae-Bandelt method.
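The timing experiment described above can be sketched in a few lines of Python. The snippet below is a minimal, hypothetical harness: the actual PCh, rAB, AB, PMCh and DMST implementations are not reproduced here, so a simple nearest-neighbour heuristic stands in for them purely to illustrate the benchmarking loop (random complete weighted graphs, averaged wall-clock time per chunk).

```python
import random
import time

def random_complete_graph(n, seed=None):
    # Symmetric weight matrix of a complete graph with random edge weights,
    # mirroring the randomized instances used in the experiments.
    rng = random.Random(seed)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w[i][j] = w[j][i] = rng.uniform(1.0, 100.0)
    return w

def nearest_neighbour(w):
    # Placeholder heuristic standing in for PCh/DMST/etc.;
    # returns a Hamiltonian cycle as a permutation of the vertices.
    n = len(w)
    tour, seen = [0], {0}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((j for j in range(n) if j not in seen),
                  key=lambda j: w[last][j])
        tour.append(nxt)
        seen.add(nxt)
    return tour

def cycle_weight(w, tour):
    # Total weight of the closed tour (wraps around to the start).
    return sum(w[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def benchmark(solver, sizes, reps=100):
    # Average running time of `solver` over `reps` random graphs per size.
    # Graphs are generated first so that only the solver itself is timed.
    out = {}
    for n in sizes:
        graphs = [random_complete_graph(n, seed=r) for r in range(reps)]
        t0 = time.perf_counter()
        for g in graphs:
            solver(g)
        out[n] = (time.perf_counter() - t0) / reps
    return out
```

Replacing `nearest_neighbour` with the real solvers and plotting `benchmark(solver, range(5, 101))` for each reproduces the figure above.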
To conclude, we have strong evidence that our PCh method is one of the best approximation algorithms for solving the TSP. It produces Hamiltonian cycles of lower weights than the rest of the methods. Furthermore, its average execution time is comparable with those of the other algorithms (bar the DMST method). It is our firm belief that these features position the PCh method as one of the best approximation algorithms for solving the TSP currently known in the literature.
\end{document}
\begin{document}
\title{Entanglement-assisted capacities of time-correlated amplitude-damping
channel}
\author{Nigum Arshed and A. H. Toor \\
\textit{Department of Physics, Quaid-i-Azam University}\\
\textit{Islamabad 45320, Pakistan}}
\maketitle
\begin{abstract}
We calculate the information capacities of a time-correlated
amplitude-damping channel, provided the sender and receiver share prior
entanglement. Our analytical results show that a noisy channel with zero
capacity can transmit information if it has finite memory. The capacities
increase as the memory increases, attaining their maximum value for a
perfect-memory channel.
\end{abstract}
\section{Introduction}
Entanglement, a purely quantum phenomenon, describes global states of
composite systems which cannot be written as products of individual system
states. Once seen as a counterintuitive feature of quantum mechanics, it has
emerged as a useful resource for quantum information and computation \cite
{Horodecki 2009}. It performs tasks which cannot be accomplished by
classical resources, such as quantum cryptography \cite{Ekert 1991}, quantum
dense coding \cite{Bennett and Wiesner 1992}, quantum teleportation \cite
{Bennett Brassard Crepeau Jozsa Peres and Wootters 1993} and communication
of reliable information over quantum channels.
Quantum channels model the noise processes which occur in quantum systems
due to their unavoidable interaction with the environment \cite{Nielson and
Chuang}. There are broadly two types of quantum channels: memoryless quantum
channels, for which the channel acts independently on each channel input,
and quantum memory channels, where the noise over successive channel uses
exhibits some correlation. In practice, the memoryless channel is an
approximation, as the channel properties are modified by its uses. The
maximum amount of information reliably transmitted over a channel, per
channel use, is known as its capacity \cite{Thomas and Cover}. In comparison
to classical channels \cite{Shannon 1948}, the theory of quantum channel
capacities is more involved, as more than one capacity is associated with
a quantum channel \cite{Bennett and Shor 2004}.
Early research in quantum channel capacities focused on memoryless channels
[9-24]. The study of quantum memory channel capacities has attracted a
lot of attention lately, and many interesting results have been reported [25-36].
It was established that memory increases the capacity of a quantum channel
and that, beyond a certain memory threshold, the classical capacity of both unital
and non-unital channels is higher for entangled state encoding [25-27].
Noisy quantum channels have non-zero quantum capacity in the presence of
memory, as it suppresses the channel noise, similar to the superactivation
phenomenon \cite{Smith and Yard 2008, Jahangir Arshed and Toor 2012}.
The entanglement-assisted classical capacity of unital quantum channels also
increases if the noise over successive channel uses is correlated \cite{Arshed
and Toor 2006}.
In this work, we study information transmission over an amplitude-damping
channel. We assume that the noise over consecutive uses of the channel is
correlated and calculate its classical and quantum capacities. Entanglement,
unlimited or limited, is shared prior to the communication. Our results show
that the capacities of the channel increase as its memory increases. The
channel noise is suppressed by its memory and its capacity to transmit
information is always non-zero in the presence of memory.
The article is organized as follows. In Sec. 2, we review quantum channels
and give a brief description of entanglement-assisted capacities in Sec. 3.
In Sec. 4, we discuss the time-correlated amplitude-damping channel. We
calculate its entanglement-assisted capacities in Sec. 5. Finally, in Sec. 6
we discuss the results and present our conclusions.
\section{Quantum channels}
The basic task of quantum communication is to transmit information encoded
in quantum states across a quantum channel reliably \cite{Bennett and Shor
1998}. The reliability of communication depends upon the channel noise.
Mathematically, a quantum channel $\mathcal{N}$ is a completely positive
and trace-preserving map taking a quantum system from an initial state $\rho
_{s}$ to a final state \cite{Nielson and Chuang},
\begin{equation}
\mathcal{N\colon }\rho _{s}\rightarrow \mathcal{N}\left( \rho _{s}\right) .
\label{CPTP map}
\end{equation}
It is assumed that the total system $\mathcal{H}_{s}\otimes \mathcal{H}_{e}$,
consisting of the quantum system $\rho _{s}$ and its environment $\rho
_{e}$, is closed and undergoes unitary evolution. After the interaction, the final
state of the system is given by
\begin{equation}
\mathcal{N}\left( \rho _{s}\right) =\text{Tr}_{e}\left[ U\left( \rho
_{s}\otimes \rho _{e}\right) U^{\dagger }\right] , \label{Unitary evolution}
\end{equation}
where Tr$_{e}$ is the partial trace over the environment. This can be described
by Kraus operators acting on the system $\rho _{s}$ \cite{Kraus 1983}, as
\begin{equation}
\mathcal{N}\left( \rho _{s}\right) =\sum_{i}A_{i}\rho _{s}A_{i}^{\dagger },
\label{Kraus representation}
\end{equation}
which satisfy the completeness relationship $\sum_{i}A_{i}^{\dagger }A_{i}=I_{s}$.
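As an illustration, Eq. (\ref{Kraus representation}) can be implemented directly. The sketch below is a minimal numerical example of ours (not part of the original derivation): it applies a Kraus set to a $2\times 2$ density matrix and checks that the trace is preserved, using the single-use amplitude-damping operators of Eq. (\ref{ADC Kraus}) below for a sample damping parameter.

```python
import math

def mat_mul(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    # Conjugate transpose; the entries here are real, so this is the transpose.
    return [[A[j][i] for j in range(2)] for i in range(2)]

def apply_channel(kraus, rho):
    # N(rho) = sum_i A_i rho A_i^dagger  (the Kraus representation).
    out = [[0.0, 0.0], [0.0, 0.0]]
    for A in kraus:
        M = mat_mul(mat_mul(A, rho), dagger(A))
        for i in range(2):
            for j in range(2):
                out[i][j] += M[i][j]
    return out

# Amplitude-damping Kraus operators A_0, A_1 for a sample damping parameter chi.
chi = 0.7
kraus = [[[math.cos(chi), 0.0], [0.0, 1.0]],
         [[0.0, 0.0], [math.sin(chi), 0.0]]]
```

Applying this channel to the state $\left\vert 0\right\rangle \left\langle 0\right\vert$ yields the diagonal state $\mathrm{diag}(\cos ^{2}\chi ,\sin ^{2}\chi )$, with unit trace, as completeness requires.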
In comparison to classical channels, which are characterized by a unique
capacity \cite{Shannon 1948, Thomas and Cover}, the situation in the quantum
realm is richer and more complicated. The capacity of a quantum channel
depends on the type of information transmitted, the resources shared and the
protocols allowed \cite{Bennett and Shor 2004}. In this work, we are
interested in the classical and quantum capacities assisted by entanglement.
\section{Entanglement-assisted capacities}
The capacity of a quantum channel can be enhanced either by encoding
information on entangled states [25-27] or by sharing entanglement between
sender and receiver \cite{Bennett and Wiesner 1992} prior to the
communication. The amount of classical information transmitted across a
quantum channel, provided the sender and receiver share unlimited prior
entanglement, is given by the \textit{entanglement-assisted classical
capacity }$C_{E}$ [17-19]. It is obtained from the quantum mutual information \cite
{Nielson and Chuang}, maximized over the input state $\rho _{s}$, i.e.,
\begin{equation}
C_{E}=\max_{\rho _{s}}\left[ S\left( \rho _{s}\right) +S\left( \mathcal{N}
\left( \rho _{s}\right) \right) -S_{e}\left( \rho _{s}\right) \right] ,
\label{Entanglement assisted classical capacity}
\end{equation}
where $S\left( \rho \right) =-$Tr$\left[ \rho \log _{2}\rho \right] $ is the
von Neumann entropy and the entropy exchange $S_{e}\left( \rho _{s}\right)
=S\left( \rho _{e}^{\prime }\right) $ is the von Neumann entropy of the final
environment state \cite{Nielson and Chuang}
\begin{equation}
\rho _{e}^{\prime }=\sum_{i,j}\text{Tr}\left( A_{i}\rho
_{s}A_{j}^{\dagger }\right) \left\vert e_{i}\right\rangle \left\langle
e_{j}\right\vert . \label{Entropy exchange}
\end{equation}
The pure-state entanglement consumed per channel use is $S\left( \rho
_{s}\right) $, with $\rho _{s}$ maximizing Eq. (\ref{Entanglement assisted
classical capacity}). The capacity $C_{E}$ is additive \cite{Holevo 2002},
unlike the classical capacity $C$ \cite{Hastings 2009} and the quantum
capacity $Q$ \cite{Barnum Nielson Schumacher 1998} of quantum channels. The
\textit{entanglement-assisted quantum capacity} $Q_{E}=C_{E}/2$ can be
determined by superdense coding \cite{Bennett and Wiesner 1992} and quantum
teleportation \cite{Bennett Brassard Crepeau Jozsa Peres and Wootters 1993}.
If the entanglement shared prior to the communication is limited then the
\textit{classical capacity assisted by limited entanglement} of a quantum
channel is given by \cite{Shor 2004},
\begin{eqnarray}
C_{E}^{\lim } &=&\max_{p_{i},\rho _{s_{i}}}\left[ \sum_{i}p_{i}S\left( \rho
_{s_{i}}\right) +S\left\{ \mathcal{N}\left( \sum_{i}p_{i}\rho
_{s_{i}}\right) \right\} -\sum_{i}p_{i}S_{e}\left( \rho _{s_{i}}\right)
\right] , \notag \\
&& \label{Classical capacity assisted by limited entanglement}
\end{eqnarray}
with $\sum_{i}p_{i}S\left( \rho _{s_{i}}\right) \leq P$, where $P$ is the
amount of entanglement available. The maximization is performed over the
input states $\rho _{s_{i}}$ and probability distribution $p_{i}$ with $
\sum_{i}p_{i}=1$. This provides a trade-off curve of the classical capacity
as a function of the amount of entanglement shared. If the shared
entanglement is sufficiently large then Eq. (\ref{Classical capacity
assisted by limited entanglement}) gives $C_{E}$, while for $P=0$ it reduces
to the classical capacity $C$ \cite{Hausladen Jozsa Schumacher 1996,
Schumacher and Westmoreland 1997}.
\section{Time correlated amplitude-damping channel}
Energy dissipation from a quantum system, such as, spontaneous emission of
an atom and relaxation of a spin system at high temperature into the
equilibrium state \cite{Nielson and Chuang}, is modeled by amplitude damping
channel. It is a non-unital channel, i. e., $\mathcal{N}\left( I\right) \neq
I$ \cite{King and Ruskai 2001}, with Kraus operators
\begin{equation}
A_{0}=\left(
\begin{array}{cc}
\cos \chi & 0 \\
0 & 1
\end{array}
\right) ,A_{1}=\left(
\begin{array}{cc}
0 & 0 \\
\sin \chi & 0
\end{array}
\right) , \label{ADC Kraus}
\end{equation}
where $\chi $ is the damping parameter with $0\leq \chi \leq \frac{\pi }{2}$.
We consider an amplitude-damping channel with time-correlated Markov noise
over two consecutive uses \cite{Yeo and Skeen 2003}, given by
\begin{equation}
\mathcal{N}\left( \rho _{s}\right) =\left( 1-\mu \right)
\sum_{i,j=0}^{1}A_{ij}^{u}\rho _{s}A_{ij}^{u\dag }+\mu
\sum_{k=0}^{1}A_{kk}^{c}\rho _{s}A_{kk}^{c\dag }, \label{Partial ADC}
\end{equation}
where $0\leq \mu \leq 1$ is the memory parameter. The channel noise is
uncorrelated with probability $\left( 1-\mu \right) $, described by the Kraus
operators
\begin{equation}
A_{ij}^{u}=A_{i}\otimes A_{j}, \label{Un-correlated ADC}
\end{equation}
where $i,j=0,1$ with $A_{i}$ given by Eq. (\ref{ADC Kraus}), while with
probability $\mu $ it is correlated and described by $A_{kk}^{c}$. The Kraus
operators $A_{kk}^{c}$ of the time-correlated amplitude-damping channel for two
channel uses are determined by solving the Lindbladian \cite{Yeo and Skeen
2003},
\begin{eqnarray}
\mathcal{L}\rho _{s} &=&-\frac{\alpha }{2}\left[ \left( \sigma ^{\dag
}\otimes \sigma ^{\dag }\right) \left( \sigma \otimes \sigma \right) \rho
_{s}+\rho _{s}\left( \sigma ^{\dag }\otimes \sigma ^{\dag }\right) \left(
\sigma \otimes \sigma \right) \right. \notag \\
&&\left. -2\left( \sigma \otimes \sigma \right) \rho _{s}\left( \sigma
^{\dag }\otimes \sigma ^{\dag }\right) \right] .
\label{Lindbladian ADC-Two uses}
\end{eqnarray}
The parameter $\alpha $ is analogous to the Einstein coefficient for
spontaneous emission \cite{Daffer Wodkiewicz McIver 2003}, and $\sigma
^{\dag }\equiv \frac{1}{2}\left( \sigma ^{x}+i\sigma ^{y}\right) $ and $
\sigma \equiv \frac{1}{2}\left( \sigma ^{x}-i\sigma ^{y}\right) $ are the
raising and lowering operators, respectively. The Lindbladian is solved using
the damping basis method \cite{Daffer Wodkiewicz McIver 2003, Yeo and Skeen
2003}. The map
\begin{equation}
\Phi \left( \rho \right) =\exp \left( \mathcal{L}t\right) \rho =\sum_{i}
\text{Tr}\left( L_{i}\rho \right) \exp \left( \lambda _{i}t\right) R_{i},
\label{Damping basis}
\end{equation}
describes a wide class of Markov quantum channels, where $\lambda _{i}$ are
the damping eigenvalues. The right eigen operators $R_{i}$ satisfy the
eigenvalue equation
\begin{equation}
\mathcal{L}R_{i}=\lambda _{i}R_{i}, \label{Eigenvalue equation}
\end{equation}
and the duality relation
\begin{equation}
\text{Tr}\left( L_{i}R_{j}\right) =\delta _{ij}, \label{Duality relation}
\end{equation}
with the left eigenoperators $L_{i}$. The resulting completely positive and
trace preserving map is given by
\begin{equation}
\mathcal{E}\left( \rho _{s}\right) =\sum_{i,j=0}^{3}\text{Tr}\left(
L_{ij}\rho _{s}\right) \exp \left( \lambda _{ij}t\right)
R_{ij}=\sum_{k=0}^{1}A_{kk}^{c}\rho _{s}A_{kk}^{c\dag }, \label{Correlated ADC}
\end{equation}
where the Kraus operators for correlated noise are
\begin{equation}
A_{00}^{c}=\left(
\begin{array}{cccc}
\cos \chi & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array}
\right) ,\text{ }A_{11}^{c}=\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\sin \chi & 0 & 0 & 0
\end{array}
\right) . \label{Correlated ADC Kraus}
\end{equation}
Eqs. (\ref{Partial ADC}), (\ref{Un-correlated ADC}) and (\ref{Correlated ADC
Kraus}) give the time-correlated amplitude-damping channel with memory.
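As a consistency check, one can verify numerically that this channel is trace preserving. The sketch below is an independent check of ours (assuming NumPy is available); the probability weights are absorbed into the Kraus operators as $\sqrt{1-\mu }\,A_{ij}^{u}$ and $\sqrt{\mu }\,A_{kk}^{c}$, so completeness reduces to the weighted operators summing to the identity.

```python
import numpy as np

def adc_kraus(chi):
    # Single-use amplitude-damping Kraus operators, Eq. (ADC Kraus).
    A0 = np.array([[np.cos(chi), 0.0], [0.0, 1.0]])
    A1 = np.array([[0.0, 0.0], [np.sin(chi), 0.0]])
    return A0, A1

def two_use_kraus(chi, mu):
    # Kraus set of the time-correlated channel, Eq. (Partial ADC):
    # uncorrelated part A_i (x) A_j weighted by (1 - mu),
    # correlated part A_00^c, A_11^c weighted by mu.
    A0, A1 = adc_kraus(chi)
    uncorrelated = [np.sqrt(1 - mu) * np.kron(Ai, Aj)
                    for Ai in (A0, A1) for Aj in (A0, A1)]
    A00c = np.diag([np.cos(chi), 1.0, 1.0, 1.0])
    A11c = np.zeros((4, 4))
    A11c[3, 0] = np.sin(chi)
    correlated = [np.sqrt(mu) * A00c, np.sqrt(mu) * A11c]
    return uncorrelated + correlated

def completeness_defect(kraus):
    # Deviation of sum_i A_i^dagger A_i from the identity;
    # zero for a trace-preserving channel.
    total = sum(A.conj().T @ A for A in kraus)
    return np.max(np.abs(total - np.eye(total.shape[0])))
```

The defect vanishes (up to rounding) for every value of $\chi $ and $\mu $, because the uncorrelated part contributes $(1-\mu )I_{4}$ and the correlated part contributes $\mu I_{4}$.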
\section{Entanglement-assisted capacities of time-correlated
amplitude-damping channel}
We now calculate the entanglement-assisted capacities of an
amplitude-damping channel with time-correlated Markov noise, for two channel
uses. Consider the protocol where the sender and receiver share two (same
or different) maximally entangled Bell states \cite{Bell 1966} prior to the
communication,
\begin{eqnarray}
\left\vert \psi ^{\pm }\right\rangle &=&\frac{1}{\sqrt{2}}\left( \left\vert
00\right\rangle \pm \left\vert 11\right\rangle \right) , \notag \\
\left\vert \phi ^{\pm }\right\rangle &=&\frac{1}{\sqrt{2}}\left( \left\vert
01\right\rangle \pm \left\vert 10\right\rangle \right) . \label{Bell states}
\end{eqnarray}
The first qubits of the shared states belong to the sender, while the second
qubits are in the possession of the receiver. The state $\rho _{s}$ input to
the channel is given by
\begin{equation}
\rho _{s}=\text{Tr}_{r}\left( \left\vert \psi ^{+}\right\rangle \left\langle
\psi ^{+}\right\vert \right) \otimes \text{Tr}_{r}\left( \left\vert \phi
^{+}\right\rangle \left\langle \phi ^{+}\right\vert \right) =\frac{I}{4},
\label{Channel input}
\end{equation}
where $I$ is the $4\times 4$ identity matrix. Information is encoded by the
sender on the input state $\rho _{s}$ and transmitted over the
amplitude-damping channel. The channel maps it to the output state $\mathcal{N
}\left( \rho _{s}\right) $, given by Eq. (\ref{Partial ADC}), with eigenvalues
\begin{eqnarray}
\omega _{1} &=&\frac{1}{4}\left[ \left( 1-\mu \right) \cos ^{4}\chi +\mu
\cos ^{2}\chi \right] , \notag \\
\omega _{2} &=&\omega _{3}=\frac{1}{4}\left[ \left( 1-\mu \right) \cos
^{2}\chi \left( 2-\cos ^{2}\chi \right) +\mu \right] , \notag \\
\omega _{4} &=&-\frac{1}{4}\left( 2-\cos ^{2}\chi \right) \left[ \left(
1-\mu \right) \cos ^{2}\chi +\mu -2\right] . \label{Output eigenvalues}
\end{eqnarray}
The amount of information lost due to coupling with the environment during
the transmission is determined by calculating the entropy exchange $
S_{e}\left( \rho _{s}\right) $ given by Eq. (\ref{Entropy exchange}). We
assume without loss of generality that initially the state of the
environment is pure,
\begin{equation}
\rho _{e}=\left( \left\vert 00\right\rangle \left\langle 00\right\vert
\right) _{e}, \label{Environment input}
\end{equation}
which is modified to
\begin{eqnarray}
\rho _{e}^{\prime } &=&\left( 1-\mu \right) \sum_{i,j,k,l=0}^{1}\text{Tr}
_{s}\left( A_{ij}^{u}\rho _{s}A_{kl}^{u\dag }\right) \left\vert
e_{ij}\right\rangle \left\langle e_{kl}\right\vert \notag \\
&&+\mu \sum_{m,n=0}^{1}\text{Tr}_{s}\left( A_{mm}^{c}\rho _{s}A_{nn}^{c\dag
}\right) \left\vert e_{mm}\right\rangle \left\langle e_{nn}\right\vert ,
\label{Environment Output}
\end{eqnarray}
after interaction with the input state $\rho _{s}$. In the above expression, $
\left\vert e_{ij}\right\rangle =\left\vert e_{i}\right\rangle \otimes
\left\vert e_{j}\right\rangle $ form an orthonormal basis of the environment,
and the eigenvalues of the output state $\rho _{e}^{\prime }$ are
\begin{eqnarray}
\widetilde{\omega }_{1} &=&\frac{1}{4}\left[ \left( 1-\mu \right) \sin
^{4}\chi +\mu \sin ^{2}\chi \right] , \notag \\
\widetilde{\omega }_{2} &=&\widetilde{\omega }_{3}=\frac{1}{4}\left( 1-\mu
\right) \left( 2-\sin ^{2}\chi \right) \sin ^{2}\chi , \notag \\
\widetilde{\omega }_{4} &=&\frac{1}{4}\left[ \left( 1-\mu \right) \left(
2-\sin ^{2}\chi \right) ^{2}+\mu \left( 4-\sin ^{2}\chi \right) \right] .
\label{Environment output eigenvalues}
\end{eqnarray}
The entanglement-assisted classical capacity of the amplitude-damping channel
with correlated noise, calculated using Eq. (\ref{Entanglement assisted
classical capacity}), is
\begin{equation}
C_{E}^{2}=2-\sum_{i=1}^{4}\left( \omega _{i}\log _{2}\omega _{i}-\widetilde{
\omega }_{i}\log _{2}\widetilde{\omega }_{i}\right) . \label{Ce}
\end{equation}
\begin{figure}
\caption{Entanglement-assisted classical capacity $C_{E}^{2}$ as a function of the channel noise $\chi $ and memory $\mu $.}
\label{C_E versus Kie and Meu}
\end{figure}
In Fig. (\ref{C_E versus Kie and Meu}) we plot the capacity $C_{E}^{2}$
against the channel noise $\chi $ and memory parameter $\mu $.
Amplitude-damping channel affects both the populations and coherences of the
quantum system. Therefore, the capacity $C_{E}^{2}$ decreases as the channel
becomes noisy and reduces to zero for maximum value of channel noise.
However, as we increase the memory $\mu $ the capacity $C_{E}^{2}$ increases
for all values of channel noise parameter $\chi $. We infer that memory of
the channel suppresses the noise introduced by it and increases the
capacity. It is interesting to note that for the memoryless amplitude-damping
channel the capacity $C_{E}^{2}$ is zero at maximum channel noise $\chi =
\frac{\pi }{2}$. However, if the channel has finite memory, its capacity to
transmit information is always non-zero. This is similar to the
superactivation phenomenon, where two channels with zero quantum capacity can
transmit quantum information if used together \cite{Smith and Yard 2008}. It
reveals the richness of quantum communication theory, in which whether a channel
can transmit information depends on the context. The entanglement-assisted
quantum capacity is given by
\begin{equation}
Q_{E}^{2}=\frac{C_{E}^{2}}{2}=1-\sum_{i=1}^{4}\frac{1}{2}\left( \omega
_{i}\log _{2}\omega _{i}-\widetilde{\omega }_{i}\log _{2}\widetilde{\omega }
_{i}\right) . \label{Qe}
\end{equation}
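The limiting behaviour described above can be confirmed numerically. The sketch below is our own check (not from the original text): it evaluates the spectra of Eqs. (\ref{Output eigenvalues}) and (\ref{Environment output eigenvalues}) and the capacity of Eq. (\ref{Ce}), verifying that both spectra are normalized, that $C_{E}^{2}=4$ for a noiseless channel, that $C_{E}^{2}=0$ for a memoryless channel at maximum noise, and that $C_{E}^{2}>0$ at maximum noise whenever $\mu >0$.

```python
import math

def output_eigs(chi, mu):
    # Eigenvalues of N(rho_s), Eq. (Output eigenvalues); c = cos^2(chi).
    c = math.cos(chi) ** 2
    w1 = ((1 - mu) * c * c + mu * c) / 4
    w2 = w3 = ((1 - mu) * c * (2 - c) + mu) / 4
    w4 = -(2 - c) * ((1 - mu) * c + mu - 2) / 4
    return [w1, w2, w3, w4]

def env_eigs(chi, mu):
    # Eigenvalues of the environment output, Eq. (Environment output
    # eigenvalues); s = sin^2(chi).
    s = math.sin(chi) ** 2
    v1 = ((1 - mu) * s * s + mu * s) / 4
    v2 = v3 = (1 - mu) * (2 - s) * s / 4
    v4 = ((1 - mu) * (2 - s) ** 2 + mu * (4 - s)) / 4
    return [v1, v2, v3, v4]

def entropy(eigs):
    # Von Neumann entropy in bits, skipping zero eigenvalues.
    return -sum(w * math.log2(w) for w in eigs if w > 0)

def C_E2(chi, mu):
    # Eq. (Ce): C_E^2 = 2 - sum_i (w_i log w_i - v_i log v_i)
    #         = 2 + S(N(rho_s)) - S_e(rho_s).
    return 2 + entropy(output_eigs(chi, mu)) - entropy(env_eigs(chi, mu))
```

For instance, at $\chi =0$ the output spectrum is maximally mixed and the environment remains pure, giving $C_{E}^{2}=4$, i.e., two bits per channel use via superdense coding.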
Next we calculate the classical capacity assisted by limited entanglement.
In contrast to the unlimited entanglement shared above, an ensemble of
orthogonal states
\begin{eqnarray}
\left\vert \nu _{1}\right\rangle &=&\cos \theta _{1}\left\vert
00\right\rangle +\sin \theta _{1}\left\vert 11\right\rangle , \notag \\
\left\vert \nu _{2}\right\rangle &=&\sin \theta _{1}\left\vert
00\right\rangle -\cos \theta _{1}\left\vert 11\right\rangle , \notag \\
\left\vert \nu _{3}\right\rangle &=&\cos \theta _{1}\left\vert
01\right\rangle +\sin \theta _{1}\left\vert 10\right\rangle , \notag \\
\left\vert \nu _{4}\right\rangle &=&\sin \theta _{1}\left\vert
01\right\rangle -\cos \theta _{1}\left\vert 10\right\rangle ,
\label{Ansatz 1}
\end{eqnarray}
and
\begin{eqnarray}
\left\vert \upsilon _{1}\right\rangle &=&\cos \theta _{2}\left\vert
00\right\rangle +\sin \theta _{2}\left\vert 11\right\rangle , \notag \\
\left\vert \upsilon _{2}\right\rangle &=&\sin \theta _{2}\left\vert
00\right\rangle -\cos \theta _{2}\left\vert 11\right\rangle , \notag \\
\left\vert \upsilon _{3}\right\rangle &=&\cos \theta _{2}\left\vert
01\right\rangle +\sin \theta _{2}\left\vert 10\right\rangle , \notag \\
\left\vert \upsilon _{4}\right\rangle &=&\sin \theta _{2}\left\vert
01\right\rangle -\cos \theta _{2}\left\vert 10\right\rangle ,
\label{Ansatz 2}
\end{eqnarray}
is shared prior to the communication, with $0\leq \theta _{1},\theta
_{2}\leq \frac{\pi }{4}$. The input states $\rho _{s_{i}}$ are
\begin{equation}
\rho _{s_{i}}=\text{Tr}_{r}\left( \left\vert \nu _{j}\right\rangle
\left\langle \nu _{j}\right\vert \right) \otimes \text{Tr}_{r}\left(
\left\vert \upsilon _{k}\right\rangle \left\langle \upsilon _{k}\right\vert
\right) , \label{Limited input}
\end{equation}
where $j,k=1,\ldots ,4$ and
\begin{eqnarray}
\rho _{s_{1}} &=&\cos ^{2}\theta _{1}\cos ^{2}\theta _{2}\left\vert
00\right\rangle \left\langle 00\right\vert +\cos ^{2}\theta _{1}\sin
^{2}\theta _{2}\left\vert 01\right\rangle \left\langle 01\right\vert \notag
\\
&&+\sin ^{2}\theta _{1}\cos ^{2}\theta _{2}\left\vert 10\right\rangle
\left\langle 10\right\vert +\sin ^{2}\theta _{1}\sin ^{2}\theta
_{2}\left\vert 11\right\rangle \left\langle 11\right\vert , \notag \\
\rho _{s_{2}} &=&\cos ^{2}\theta _{1}\sin ^{2}\theta _{2}\left\vert
00\right\rangle \left\langle 00\right\vert +\cos ^{2}\theta _{1}\cos
^{2}\theta _{2}\left\vert 01\right\rangle \left\langle 01\right\vert \notag
\\
&&+\sin ^{2}\theta _{1}\sin ^{2}\theta _{2}\left\vert 10\right\rangle
\left\langle 10\right\vert +\sin ^{2}\theta _{1}\cos ^{2}\theta
_{2}\left\vert 11\right\rangle \left\langle 11\right\vert , \notag \\
\rho _{s_{3}} &=&\sin ^{2}\theta _{1}\cos ^{2}\theta _{2}\left\vert
00\right\rangle \left\langle 00\right\vert +\sin ^{2}\theta _{1}\sin
^{2}\theta _{2}\left\vert 01\right\rangle \left\langle 01\right\vert \notag
\\
&&+\cos ^{2}\theta _{1}\cos ^{2}\theta _{2}\left\vert 10\right\rangle
\left\langle 10\right\vert +\cos ^{2}\theta _{1}\sin ^{2}\theta
_{2}\left\vert 11\right\rangle \left\langle 11\right\vert , \notag \\
\rho _{s_{4}} &=&\sin ^{2}\theta _{1}\sin ^{2}\theta _{2}\left\vert
00\right\rangle \left\langle 00\right\vert +\sin ^{2}\theta _{1}\cos
^{2}\theta _{2}\left\vert 01\right\rangle \left\langle 01\right\vert \notag
\\
&&+\cos ^{2}\theta _{1}\sin ^{2}\theta _{2}\left\vert 10\right\rangle
\left\langle 10\right\vert +\cos ^{2}\theta _{1}\cos ^{2}\theta
_{2}\left\vert 11\right\rangle \left\langle 11\right\vert .
\label{Limited input states}
\end{eqnarray}
Therefore, for all $\rho _{s_{i}}$
\begin{equation}
S\left( \rho _{s_{i}}\right) =-\sum_{k=1}^{4}\vartheta _{k}\log
_{2}\vartheta _{k}, \label{Limited input entropy}
\end{equation}
with
\begin{eqnarray}
\vartheta _{1} &=&\cos ^{2}\theta _{1}\cos ^{2}\theta _{2}, \notag \\
\vartheta _{2} &=&\cos ^{2}\theta _{1}\sin ^{2}\theta _{2}, \notag \\
\vartheta _{3} &=&\sin ^{2}\theta _{1}\cos ^{2}\theta _{2}, \notag \\
\vartheta _{4} &=&\sin ^{2}\theta _{1}\sin ^{2}\theta _{2}.
\label{Limited input eigenvalues}
\end{eqnarray}
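The spectrum in Eq. (\ref{Limited input eigenvalues}) is easily checked numerically; the sketch below is an illustrative check of ours, confirming that the $\vartheta _{k}$ form a probability distribution, that $\theta _{1}=\theta _{2}=\frac{\pi }{4}$ gives the maximally mixed spectrum with entropy $2$, and that $\theta _{1}=\theta _{2}=0$ gives a pure state with entropy $0$.

```python
import math

def theta_eigs(t1, t2):
    # Eigenvalues of rho_{s_i}, Eq. (Limited input eigenvalues).
    c1, s1 = math.cos(t1) ** 2, math.sin(t1) ** 2
    c2, s2 = math.cos(t2) ** 2, math.sin(t2) ** 2
    return [c1 * c2, c1 * s2, s1 * c2, s1 * s2]

def entropy(eigs):
    # Entropy in bits, skipping zero eigenvalues.
    return -sum(p * math.log2(p) for p in eigs if p > 0)
```

This entropy is the amount of shared entanglement consumed per input state, which is why the capacity below interpolates between the product-state and fully entanglement-assisted cases as $\theta _{1},\theta _{2}$ vary.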
It has been established that the maximization over input probabilities in
Eq. (\ref{Classical capacity assisted by limited entanglement}) is achieved
when the input states are equiprobable \cite{Arshed Toor Lidar 2010}.
Therefore, $\rho _{s}=\sum_{i}p_{i}\rho _{s_{i}}=\frac{I}{4}$, which after
transmission through the channel is mapped to $\mathcal{N}\left( \rho
_{s}\right) $ with eigenvalues given by Eq. (\ref{Output eigenvalues}). Once
again we assume that initially the environment is in the pure state $\rho _{e}$
given by Eq. (\ref{Environment input}) and calculate the entropy exchange
for each $\rho _{s_{i}}$ using Eq. (\ref{Environment Output}). The classical
capacity assisted by limited entanglement is calculated using Eq. (\ref
{Classical capacity assisted by limited entanglement}) as
\begin{equation}
C_{E}^{\lim }=-\sum_{i=1}^{4}\left[ \vartheta _{i}\log _{2}\vartheta
_{i}+\omega _{i}\log _{2}\omega _{i}\right] +\frac{1}{4}\sum_{i,j=1}^{4}
\widetilde{\omega }_{s_{i}}^{j}\log _{2}\widetilde{\omega }_{s_{i}}^{j},
\label{Limited Ce for ADC}
\end{equation}
where
\begin{eqnarray*}
\widetilde{\omega }_{s_{1}}^{1} &=&\left( 1-\mu \right) \left( \cos
^{2}\theta _{1}\cos ^{2}\chi +\sin ^{2}\theta _{1}\right) \left( \cos
^{2}\theta _{2}\cos ^{2}\chi +\sin ^{2}\theta _{2}\right) \\
&&+\mu \left[ \sin ^{2}\theta _{1}+\cos ^{2}\theta _{1}\left( \cos
^{2}\theta _{2}\cos ^{2}\chi +\sin ^{2}\theta _{2}\right) \right] , \\
\widetilde{\omega }_{s_{1}}^{2} &=&\left( 1-\mu \right) \cos ^{2}\theta
_{2}\left( \cos ^{2}\theta _{1}\cos ^{2}\chi +\sin ^{2}\theta _{1}\right)
\sin ^{2}\chi , \\
\widetilde{\omega }_{s_{1}}^{3} &=&\left( 1-\mu \right) \cos ^{2}\theta
_{1}\left( \cos ^{2}\theta _{2}\cos ^{2}\chi +\sin ^{2}\theta _{2}\right)
\sin ^{2}\chi , \\
\widetilde{\omega }_{s_{1}}^{4} &=&\cos ^{2}\theta _{1}\cos ^{2}\theta
_{2}\sin ^{2}\chi \left[ \left( 1-\mu \right) \sin ^{2}\chi +\mu \right] , \\
\widetilde{\omega }_{s_{2}}^{1} &=&\left( 1-\mu \right) \left( \cos
^{2}\theta _{1}\cos ^{2}\chi +\sin ^{2}\theta _{1}\right) \left( \cos
^{2}\theta _{2}+\cos ^{2}\chi \sin ^{2}\theta _{2}\right) \\
&&+\mu \left[ \sin ^{2}\theta _{1}+\cos ^{2}\theta _{1}\left( \cos
^{2}\theta _{2}+\cos ^{2}\chi \sin ^{2}\theta _{2}\right) \right] , \\
\widetilde{\omega }_{s_{2}}^{2} &=&\left( 1-\mu \right) \sin ^{2}\theta
_{2}\left( \cos ^{2}\theta _{1}\cos ^{2}\chi +\sin ^{2}\theta _{1}\right)
\sin ^{2}\chi , \\
\widetilde{\omega }_{s_{2}}^{3} &=&\left( 1-\mu \right) \cos ^{2}\theta
_{1}\left( \cos ^{2}\theta _{2}+\cos ^{2}\chi \sin ^{2}\theta _{2}\right)
\sin ^{2}\chi , \\
\widetilde{\omega }_{s_{2}}^{4} &=&\cos ^{2}\theta _{1}\sin ^{2}\theta
_{2}\sin ^{2}\chi \left[ \left( 1-\mu \right) \sin ^{2}\chi +\mu \right] , \\
\widetilde{\omega }_{s_{3}}^{1} &=&\left( 1-\mu \right) \left( \cos
^{2}\theta _{1}+\cos ^{2}\chi \sin ^{2}\theta _{1}\right) \left( \cos
^{2}\theta _{2}\cos ^{2}\chi +\sin ^{2}\theta _{2}\right) \\
&&+\mu \left[ \cos ^{2}\theta _{1}+\sin ^{2}\theta _{1}\left( \cos
^{2}\theta _{2}\cos ^{2}\chi +\sin ^{2}\theta _{2}\right) \right] , \\
\widetilde{\omega }_{s_{3}}^{2} &=&\left( 1-\mu \right) \cos ^{2}\theta
_{2}\left( \cos ^{2}\theta _{1}+\cos ^{2}\chi \sin ^{2}\theta _{1}\right)
\sin ^{2}\chi , \\
\widetilde{\omega }_{s_{3}}^{3} &=&\left( 1-\mu \right) \sin ^{2}\theta
_{1}\left( \cos ^{2}\theta _{2}\cos ^{2}\chi +\sin ^{2}\theta _{2}\right)
\sin ^{2}\chi , \\
\widetilde{\omega }_{s_{3}}^{4} &=&\cos ^{2}\theta _{2}\sin ^{2}\theta
_{1}\sin ^{2}\chi \left[ \left( 1-\mu \right) \sin ^{2}\chi +\mu \right] ,
\end{eqnarray*}
\begin{eqnarray}
\widetilde{\omega }_{s_{4}}^{1} &=&\left( 1-\mu \right) \left( \cos
^{2}\theta _{1}+\cos ^{2}\chi \sin ^{2}\theta _{1}\right) \left( \cos
^{2}\theta _{2}+\cos ^{2}\chi \sin ^{2}\theta _{2}\right) \notag \\
&&+\mu \left[ \cos ^{2}\theta _{1}+\sin ^{2}\theta _{1}\left( \cos
^{2}\theta _{2}+\cos ^{2}\chi \sin ^{2}\theta _{2}\right) \right] , \notag
\\
\widetilde{\omega }_{s_{4}}^{2} &=&\left( 1-\mu \right) \sin ^{2}\theta
_{2}\left( \cos ^{2}\theta _{1}+\cos ^{2}\chi \sin ^{2}\theta _{1}\right)
\sin ^{2}\chi , \notag \\
\widetilde{\omega }_{s_{4}}^{3} &=&\left( 1-\mu \right) \sin ^{2}\theta
_{1}\left( \cos ^{2}\theta _{2}+\cos ^{2}\chi \sin ^{2}\theta _{2}\right)
\sin ^{2}\chi , \notag \\
\widetilde{\omega }_{s_{4}}^{4} &=&\sin ^{2}\theta _{1}\sin ^{2}\theta
_{2}\sin ^{2}\chi \left[ \left( 1-\mu \right) \sin ^{2}\chi +\mu \right] .
\label{Limited entropy exchange eigenvalues}
\end{eqnarray}
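A useful consistency check on the sixteen eigenvalues above is that each spectrum $\{\widetilde{\omega }_{s_{i}}^{j}\}_{j=1}^{4}$ is normalized. All four sets share one algebraic pattern, which the sketch below (our own reformulation: $x$ and $y$ select $\cos ^{2}\theta $ or $\sin ^{2}\theta $ for the first and second shared state, respectively) exploits to verify normalization and the reduction at $\theta _{1}=\theta _{2}=0$ to the spectra of Eq. (\ref{Classical capacity eigenvalues}) below.

```python
import math

def env_spectrum(x, y, chi, mu):
    # Environment eigenvalues for input rho_{s_i}, with x = cos^2(t1) or
    # sin^2(t1) and y = cos^2(t2) or sin^2(t2) selecting which of the four
    # spectra of Eq. (Limited entropy exchange eigenvalues) is meant.
    c, s = math.cos(chi) ** 2, math.sin(chi) ** 2
    f1, f2 = x * c + (1 - x), y * c + (1 - y)
    w1 = (1 - mu) * f1 * f2 + mu * ((1 - x) + x * f2)
    w2 = (1 - mu) * y * f1 * s
    w3 = (1 - mu) * x * f2 * s
    w4 = x * y * s * ((1 - mu) * s + mu)
    return [w1, w2, w3, w4]
```

Normalization follows because the $(1-\mu )$ terms factor as $(1-\mu )\left( x(c+s)+1-x\right) \left( y(c+s)+1-y\right) =1-\mu $ and the $\mu $ terms sum to $\mu $.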
The capacity $C_{E}^{\lim }$ increases with $\theta _{1}$
and $\theta _{2}$, which correspond to a higher amount of shared entanglement.
It acquires its maximum value for $\theta _{1}=\theta _{2}=\frac{\pi }{4}$,
for which the shared states are maximally entangled and Eq. (\ref{Limited Ce
for ADC}) reduces to Eq. (\ref{Ce}). For $\theta _{1}=\theta _{2}=0$, Eq. (
\ref{Limited Ce for ADC}) reduces to the product state classical capacity $
C_{p}$ of the time-correlated amplitude-damping channel, given as
\begin{equation}
C_{p}^{2}=-\sum_{i=1}^{4}\omega _{i}\log _{2}\omega _{i}+\frac{1}{4}
\sum_{i,j=1}^{4}\widetilde{\omega }_{s_{i}}^{j}\log _{2}\widetilde{\omega }
_{s_{i}}^{j}, \label{Product state classical capacity}
\end{equation}
with
\begin{eqnarray}
\widetilde{\omega }_{s_{1}}^{1} &=&\left( 1-\mu \right) \cos ^{4}\chi +\mu
\cos ^{2}\chi , \notag \\
\widetilde{\omega }_{s_{1}}^{2} &=&\widetilde{\omega }_{s_{1}}^{3}=\left(
1-\mu \right) \cos ^{2}\chi \sin ^{2}\chi , \notag \\
\widetilde{\omega }_{s_{1}}^{4} &=&\left( 1-\mu \right) \sin ^{4}\chi +\mu
\sin ^{2}\chi , \notag \\
\widetilde{\omega }_{s_{2}}^{1} &=&\left( 1-\mu \right) \cos ^{2}\chi +\mu ,
\notag \\
\widetilde{\omega }_{s_{2}}^{2} &=&0, \notag \\
\widetilde{\omega }_{s_{2}}^{3} &=&\left( 1-\mu \right) \sin ^{2}\chi ,
\notag \\
\widetilde{\omega }_{s_{2}}^{4} &=&0, \notag \\
\widetilde{\omega }_{s_{3}}^{1} &=&\left( 1-\mu \right) \cos ^{2}\chi +\mu ,
\notag \\
\widetilde{\omega }_{s_{3}}^{2} &=&\left( 1-\mu \right) \sin ^{2}\chi ,
\notag \\
\widetilde{\omega }_{s_{3}}^{3} &=&\widetilde{\omega }_{s_{3}}^{4}=0, \notag
\\
\widetilde{\omega }_{s_{4}}^{1} &=&1, \notag \\
\widetilde{\omega }_{s_{4}}^{2} &=&\widetilde{\omega }_{s_{4}}^{3}=
\widetilde{\omega }_{s_{4}}^{4}=0. \label{Classical capacity eigenvalues}
\end{eqnarray}
\begin{figure}
\caption{Classical capacity $C_{p}^{2}$ as a function of the channel noise $\chi $ and memory $\mu $.}
\label{C_p versus Kie and Meu}
\end{figure}
We plot the capacity $C_{p}^{2}$ as a function of the channel noise $\chi $
and memory $\mu $ in Fig. (\ref{C_p versus Kie and Meu}). It is evident that
the capacity decreases as the channel becomes noisy, reducing to zero at the
maximum channel noise $\chi =\frac{\pi }{2}$. In the presence of memory, the
capacity $C_{p}^{2}$ is non-zero independent of the channel noise $\chi $ and
increases with the channel memory, attaining its maximum value for a
perfect-memory channel, i.e., $\mu =1$. This is in agreement with the numerical
study of the product state classical capacity of the amplitude-damping channel
presented in \cite{Yeo and Skeen 2003}.
\section{Conclusion}
We have studied an amplitude-damping channel for the transmission of
information, provided entanglement is shared prior to the communication. The
noise over consecutive uses of the channel is time-correlated Markov noise.
We analytically determine the classical and quantum capacities of this
channel, which exhibit a strong dependence on the channel memory. The capacities
decrease as the channel noise increases, reducing to zero at maximum channel
noise. However, if the channel has memory, its capacity to transmit
information is always non-zero. Memory increases the predictability of the
channel action on the successive input states and thus suppresses the noise
introduced by the channel. In the presence of channel memory, the capacity
increases, acquiring its maximum value for a perfect-memory channel. We also
calculate the classical capacity assisted by limited entanglement, which
increases with the amount of shared entanglement. It gives the
entanglement-assisted classical capacity if maximally entangled states are
shared, while it reduces to the product state classical capacity when the
amount of shared entanglement reduces to zero. For a given value of
entanglement, the classical capacity assisted by limited entanglement
increases with the channel memory, independent of the channel noise. It is
always non-zero in the presence of channel memory, attaining its maximum
value for perfect memory.
\end{document}
\begin{document}
\begin{frontmatter}
\title{Probabilistic Rely-guarantee Calculus}
\author[mq]{Annabelle McIver}
\ead{[email protected]}
\author[mq]{Tahiry Rabehaja}
\ead{[email protected]}
\author[sh]{Georg Struth}
\ead{[email protected]}
\blfootnote{This research was supported by an iMQRES from Macquarie University, the ARC Discovery Grant DP1092464 and the EPSRC Grant EP/J003727/1.}
\address[mq]{Department of Computing, Macquarie University, Australia}
\address[sh]{Department of Computer Science, University of Sheffield, United Kingdom}
\begin{abstract} Jones' rely-guarantee calculus for shared variable concurrency is extended to include probabilistic behaviours. We use an algebraic approach based on a combination of probabilistic Kleene algebra with concurrent Kleene algebra. Soundness of the algebra is shown relative to a general probabilistic event structure semantics. The main contribution of this paper is a collection of rely-guarantee rules built on top of that semantics. In particular, we show how to obtain bounds on probabilities of correctness by deriving quantitative extensions of rely-guarantee rules.
The use of these rules is illustrated by a detailed verification of a simple probabilistic concurrent program: a faulty Eratosthenes sieve. \end{abstract}
\begin{keyword}probabilistic programs, concurrency, rely-guarantee, program verification, program semantics, Kleene algebra, event structures.
\end{keyword}
\end{frontmatter}
\section{Introduction}
The rigorous study of concurrent systems remains a difficult task due to the intricate interactions and interferences between their components. A formal framework for concurrent systems ultimately depends on the kind of concurrency considered. Jones' rely-guarantee calculus provides a mathematical foundation for proving the correctness of programs with shared variable concurrency in a compositional fashion~\cite{Jon81}. This paper extends Jones' calculus to the quantitative correctness of probabilistic concurrent programs.
Probabilistic programs have become popular due to their ability to express quantitative rather than merely qualitative properties. Probabilities are particularly important for protocols that rely on the unpredictability of probabilistic choices.
The sequential probabilistic semantics, originating with Kozen~\cite{Koz81} and Jones~\cite{Jon92}, have been extended with nondeterminism~\cite{He97,Mci04}, yielding methods for quantitative reasoning based on partial orders.
We aim to obtain similar methods for reasoning in compositional ways about probabilistic programs with shared variable concurrency. In algebraic approaches, compositionality arises quite naturally through congruence or monotonicity properties of algebraic operations such as sequential and concurrent composition or probabilistic choice.
It is well known that compositional reasoning is nontrivial both for concurrent and for sequential probabilistic systems. In the concurrent case, the obvious source of non-compositionality is communication or interaction between components. In the rely-guarantee approach, interference conditions are imposed between individual components and their environment in order to achieve compositionality. Rely conditions account for the global effect of the environment's interference with a component; guarantee conditions express the effect of a particular component on the environment. Compositionality is then obtained by considering rely conditions within components and guarantee conditions within the environment.
In the presence of probabilistic behaviours, a problem of congruence (and hence non-compositionality) arises when considering the natural extension of trace-based semantics to probabilistic automata~\cite{Seg94}, where a standard work-around is to define a partial order based on simulations.
In this paper, we define a similar construct to achieve compositionality. However, simulation-based equivalences are usually too discriminating for program verification. Therefore, we also use a weaker semantics that is essentially based on sequential behaviours. Such a technique has been motivated elsewhere~\cite{Arm14}, although the sequential order is usually not a congruence. Therefore, the simulation-based order is used for properties requiring composition, while the second order provides a tool that captures the sequential behaviours of the system.
Concurrent Kleene algebra~\cite{Hoa11,Arm14} provides an algebraic account of Jones' rely-guarantee framework. Algebras provide an abstract view of a program by focusing on control flow rather than data flow. All the rely-guarantee rules described in~\cite{Hoa11,Arm14} were derived by equational reasoning from a finite set of algebraic axioms. Often, verifying these axioms in an intended semantics is easier than proving the inference rules directly in that semantics. Moreover, every structure satisfying these laws automatically incorporates a direct interpretation of the rely-guarantee rules, as well as additional rules that can be used for program refinement. Therefore, we also adopt an algebraic approach to the quantitative extension of rely-guarantee: we establish some basic algebraic properties of a concrete event structure model and derive the rely-guarantee rules by algebraic reasoning.
In summary, the main contribution of this paper is the development of a mathematical foundation for \textit{probabilistic rely-guarantee calculi}. The inference rules are expressed algebraically, and we illustrate their use on an example based on the Sieve of Eratosthenes which incorporates a probability of failure. We also outline two rules that provide probabilistic lower bounds for the correctness of the concurrent execution of multiple components.
A short summary of the algebraic approach to the rely-guarantee calculus and its extension to probabilistic programs can be found in Section~\ref{sec:standard-rg} and Sections~\ref{sec:prgc}--\ref{sec:prg-rules}, respectively. Sections~\ref{sec:sequential-programs} and~\ref{sec:es} are devoted to the construction of a denotational model for probabilistic concurrent programs.
Section~\ref{sec:application} closes this paper with a detailed verification of the faulty Eratosthenes sieve.
\section{Non-probabilistic rely-guarantee calculus}\label{sec:standard-rg}
The rely-guarantee approach, originally put forward by Jones~\cite{Jon81}, is a compositional method for developing and verifying large concurrent systems. An algebraic formulation of the approach has been proposed recently in the context of concurrent Kleene algebras~\cite{Hoa11}. In a nutshell, a bi-Kleene algebra is an algebraic structure $(K,+,\cdot,\|,0,1,^\ast,^{(\ast)})$ such that $(K,+,\cdot,0,1,^\ast)$ is a Kleene algebra and $(K,+,\|,0,1,^{(\ast)})$ is a commutative Kleene algebra. The axioms of Kleene algebra and related structures are given in Appendix~\ref{A:ka}.
Intuitively, the set $K$ models the actions a system can take; the operation $(+)$ corresponds to the nondeterministic choice between actions, $(\cdot)$ to their sequential composition and $(\|)$ to their parallel or concurrent composition. The constant $0$, the unit of addition, models the abortive action; $1$, the unit of sequential and concurrent composition, models the ineffective action $\mathtt{skip}$. The operation $(^\ast)$ is a sequential finite iteration of actions; the corresponding parallel finite iteration operation $(^{(\ast)})$ is not considered further in this article. Two standard models of bi-Kleene algebras are languages, with $(+)$ interpreted as language union, $(\cdot)$ as language product, $(\|)$ as shuffle product, $0$ as the empty language, $1$ as the empty word language and $(^\ast)$ as the Kleene star, and pomset languages under operations similar to those in Section~\ref{S:seqred} below (cf.~\cite{BloomEsik}).
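The language model just described can be made concrete. The following is a minimal sketch of ours (not from the paper), with languages as finite sets of Python strings, $(+)$ as union, $(\cdot)$ as language product, $(\|)$ as shuffle, and a truncated Kleene star:

```python
# Sketch of the bi-Kleene language model: languages as finite sets of words.
def plus(L, M):
    """Nondeterministic choice: language union."""
    return L | M

def seq(L, M):
    """Sequential composition: language product."""
    return {u + v for u in L for v in M}

def shuffle_words(u, v):
    """All interleavings of two words."""
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle_words(u[1:], v)} |
            {v[0] + w for w in shuffle_words(u, v[1:])})

def par(L, M):
    """Concurrent composition: shuffle product."""
    return {w for u in L for v in M for w in shuffle_words(u, v)}

def star(L, n=4):
    """Kleene star, truncated at n-fold products (the full star may be infinite)."""
    result, power = {""}, {""}
    for _ in range(n):
        power = seq(power, L)
        result |= power
    return result
```

Here `set()` plays the role of $0$ and `{""}` that of $1$; for instance `par({"ab"}, {"c"})` yields the three interleavings `{"abc", "acb", "cab"}`.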
Language-style models with interleaving or shuffle also form the standard semantics of rely-guarantee calculi. In that context, traces are typically of the form $(s_1,s_1'),(s_2,s_2')\dots (s_k,s_k')$, where the $s_i$ and $s_i'$ denote states of a system, pairs $(s_i,s_i')$ correspond to internal transitions of a component, and fragments $s_i'),(s_{i{+}1}$ to transitions caused by interferences of the environment. Behaviours of a concurrent system are associated with sets of such traces.
With semantics for concurrency in mind, a generalised encoding of the validity of Hoare triples becomes useful:
\begin{equation*}
\triple{P}{S}{Q} \Leftrightarrow P{\cdot} S \le Q,
\end{equation*}
where $P\le Q \Leftrightarrow P{\cup} Q = Q$. It was originally proposed by Tarlecki~\cite{Tar85} for sequential programs with a relational semantics. In contrast to Hoare's standard approach, where $P$ and $Q$ are assertions and $S$ a program, all three elements are now allowed to be programs. In the context of traces, $\triple{P}{S}{Q}$ holds if all traces that are initially in $P$ and then in $S$ are also in $Q$. This comprises situations where program $P$ models traces ending in a set of states $p$ (a precondition) and $Q$ models traces ending in a set of states $q$ (a postcondition). The Hoare triple then holds if all traces establishing precondition $p$ can be extended by program $S$ to traces establishing postcondition $q$, whenever $S$ terminates, as in the standard interpretation. We freely write $\triple{p}{S}{q}$ in such cases. It turns out that all the inference rules of Hoare logic except the assignment rule can be derived in the setting of Kleene algebra~\cite{Hoa11}.
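In the trace model, this encoding reduces validity of a triple to a set inclusion after a language product; a minimal sketch (our illustration, with hypothetical trace names):

```python
def seq(L, M):
    """Language product: extend each trace in L by each trace in M."""
    return {u + v for u in L for v in M}

def triple(P, S, Q):
    """{P} S {Q} holds iff P.S <= Q, where <= is set inclusion."""
    return seq(P, S) <= Q

# Hypothetical traces: P establishes the precondition, S is the program.
P = {"init;"}
S = {"x:=1;"}
Q = {"init;x:=1;", "other;"}
```

With these sets, `triple(P, S, Q)` holds, while `triple(P, S, {"other;"})` fails because `"init;x:=1;"` is not in the postcondition language.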
For concurrency applications, the algebraic encoding of Hoare triples has been expanded to Jones quintuples $\triple{P\ R}{S}{G\ Q}$, also written $R,G\vdash\triple{P}{S}{Q}$, with respect to rely conditions $R$ and guarantee conditions $G$~\cite{Hoa11}. The basic intuition is as follows. A rely condition $R$ is understood as a special program that constrains the behaviour of a component $S$ by executing it in parallel as $R\| S$. This is consistent with the above trace interpretation where parallel composition is interpreted as shuffle and gaps in traces correspond to interferences by the environment. Typical properties of relies are $1\le R$ (where $1$ is $\mathtt{skip}$) and $R^\ast = R{\cdot} R= R\| R = R$. Moreover, relies distribute over nondeterministic choices as well as sequential and concurrent compositions: $R\| (S{+}T)= R\|S {+} R\|T$, $R\| (S{\cdot} T)=(R\| S) {\cdot} (R\| T)$ and $R\|(S\| T)=(R\| S)\| (R\| T)$, hence they apply to all subcomponents of a given component~\cite{Arm14}. A guarantee $G$ of a given component $S$ is only constrained by the fact that it should include all behaviours of $S$, that is, $S\le G$.
Consequently, a Jones quintuple is valid if the component $S$ constrained by the rely satisfies the Hoare triple---the relationship between precondition and postcondition---and the guarantee includes all behaviours of $S$~\cite{Hoa11}: \begin{equation}{\label{eq:rgspec}}
\triple{P\ R}{S}{G\ Q} \Leftrightarrow \triple{P}{R\| S}{Q} \wedge S\le G.
\end{equation}
The rules of Hoare logic without the assignment axiom are still derivable from the axioms of bi-Kleene algebra, when Hoare triples are replaced by Jones quintuples~\cite{Hoa11}. To derive the standard rely-guarantee concurrency rule, one can expand bi-Kleene algebra by a meet operation $(\sqcap)$ and assume that $(K,+,\sqcap)$ forms a distributive lattice~\cite{Arm14}. Then
\begin{equation}\label{rule:rg-standard}
\frac{\triple{P\ R}{S}{G\ Q}\quad\triple{P\ R'}{S'}{G'\ Q'}\quad G\leq R'\quad G'\leq R}{\triple{P\ R{\sqcap} R'}{S\|S'}{G{+} G'\ Q{\sqcap} Q'}}.
\end{equation}
This inference rule demonstrates how the rely-guarantee specifications of components can be composed into a rely-guarantee specification of a larger system. If $S$ and $S'$ satisfy the premises, then $S\| S'$ satisfies both postconditions $Q$ and $Q'$ when run in an environment satisfying both relies $R$ and $R'$. Moreover, $S\| S'$ guarantees either of $G$ or $G'$.
Deriving these inference rules from the algebraic axioms mentioned makes them sound with respect to all models of these axioms, including trace-based semantics with parallel composition interpreted as interleaving, and true-concurrency semantics such as pomset languages and the event structures considered in this article. Without the algebraic layer, Dingel~\cite{Din02} and Coleman and Jones~\cite{Col07} have already proved the soundness of rely-guarantee rules with respect to trace-based semantics, more precisely \emph{Aczel traces}~\cite{Roe97}. This paper follows previous algebraic developments, but for probabilistic programs.
In Section~\ref{sec:prgc}, we provide a suitable extension of the rely-guarantee formalism, in particular Rule~(\ref{rule:rg-standard}), to probabilistic concurrent programs. The soundness of such a formalism is shown relative to a semantic space that allows sequential probabilistic programs to include concurrent behaviours.
\section{Sequential probabilistic programs}\label{sec:sequential-programs}
We start by giving a brief summary of the denotation of sequential probabilistic programs using the powerdomain construction of McIver and Morgan~\cite{Mci04}. All probabilistic programs are considered to have a finite state space denoted by $\Omega$. A distribution over the set $\Omega$ is a function $\mu{:}\Omega{\to}[0,1]$ such that $\sum_{s{\in}\Omega}\mu(s) {=} 1$. The set of distributions over $\Omega$ is denoted by $\mathbb{D}\Omega$. Since $\Omega$ is a finite set, we identify a distribution with the associated measure. For every $\mu{\in}\mathbb{D}\Omega$ and $O{\subseteq}\Omega$, we write $\mu(O) {=} \sum_{s{\in} O}\mu(s)$. An example of a distribution is the point distribution $\delta_s$, centred at the state $s{\in}\Omega$, such that
\begin{displaymath}
\delta_s(s') = \begin{cases}
1 & \textrm{ if } s {=} s',\\
0 & \textrm{otherwise.}
\end{cases}
\end{displaymath}
A (nondeterministic) probabilistic program $r$ is modelled as a map of type $\Omega{\to}\mathbb{P}\mathbb{D} \Omega$ such that $r(s)$ is a non-empty, topologically closed and convex subset of $\mathbb{D} \Omega$ for every state $s{\in} \Omega$. The set $\mathbb{D}\Omega$ is a topological subspace of the finite product ${\mathbb R}^\Omega$ (endowed with the usual product topology), and the topological closure is taken with respect to the induced topology on $\mathbb{D}\Omega$\footnote{These healthiness conditions are set out and fully explained in the work of McIver and Morgan~\cite{Mci04}.}. We denote by $\mathbb{H}_1 \Omega $ the set of probabilistic programs that terminate almost certainly. Notice that the set $\mathbb{D}\Omega$ contains only distributions instead of the subdistributions considered by McIver and Morgan~\cite{Mci04}. Therefore, we only model nondeterministic programs that terminate with probability $1$.
Programs in $\mathbb{H}_1 \Omega$ are ordered by pointwise inclusion, i.e. $r\sqsubseteqh r'$ if $r(s)\subseteq r'(s)$ for every $s{\in} \Omega$. A program $r$ is deterministic if, for every $s$, $r(s) = \{\mu_s\}$ (i.e. a singleton) for some distribution $\mu_s{\in} \mathbb{D} \Omega$. The set of deterministic programs is denoted by $\mathbb{J}_1 \Omega$ (as in Jones' spaces~\cite{Jon92}). If $f{\in}\mathbb{J}_1\Omega$ is a deterministic program such that $f(s) = \{\mu_s\}$, then we usually just write $f(s) = \mu_s$. A particularly useful example of a deterministic probabilistic program is the ineffectual program $\mathtt{skip}$, which we denote by $\delta$. Thus $\delta(s) = \{\delta_s\}$.
Let $p\in[0,1]$. The probabilistic combination of two probabilistic programs $r$ and $r'$ is defined as~\cite[Def.~5.4.5]{Mci04}
\begin{equation}\label{def:prog-probabilistic-choice}
(r\pc{p} r')(s) = \{\mu\pc{p}\mu'\ |\ \mu{\in} r(s)\wedge\mu'{\in} r'(s)\},
\end{equation}
where $(\mu \pc{p}\mu')(s) = (1{-}p)\mu(s) {+} p\mu'(s)$ for every state $s{\in}\Omega$. Thus, the program $r$ (resp. $r'$) is executed with probability $1{-}p$ (resp. $p$).
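As a concrete illustration (ours, not the paper's), distributions over a finite state space can be represented as dictionaries, and $\mu\pc{p}\mu'$ computed pointwise:

```python
def delta(s):
    """Point distribution centred at state s."""
    return {s: 1.0}

def pchoice(mu, nu, p):
    """(mu p-choice nu)(s) = (1-p)*mu(s) + p*nu(s):
    run mu with probability 1-p and nu with probability p."""
    states = set(mu) | set(nu)
    return {s: (1 - p) * mu.get(s, 0.0) + p * nu.get(s, 0.0) for s in states}
```

For instance, `pchoice(delta("s0"), delta("s1"), 0.25)` gives mass `0.75` to `"s0"` and `0.25` to `"s1"`, and the result is again a distribution (total mass $1$).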
Nondeterminism is obtained as the set of all probabilistic choices~\cite[Def.~5.4.6]{Mci04}, that is, \begin{equation}\label{def:prog-nondeterminism} (r{+}r')(s) = {\cup}_{p{\in}[0,1]} (r\pc{p}r')(s). \end{equation}
The sequential composition of $r$ by $r'$ is defined as~\cite[Def.~5.4.7]{Mci04}:
\begin{equation}\label{eq:6-sequential-H}
(r{\cdot} r')(s) = \left\lbrace\left. f{\circledast} \mu\right| f{\in}\mathbb{J}_1\Omega\wedge \mu{\in} r(s)\wedge f\sqsubseteqh r' \right\rbrace
\end{equation}
where
\[
(f{\circledast}\mu)(s') = \sum_{s''{\in}\Omega}f(s'')(s')\mu(s'')
\]
for every state $s'{\in}\Omega$.
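For a deterministic program, this composition is simply a Markov-kernel application; a sketch (our illustration, with a hypothetical coin-flip kernel):

```python
def convolve(f, mu):
    """(f * mu)(s') = sum over s'' of f(s'')(s') * mu(s''):
    push the input distribution mu through the deterministic program f."""
    out = {}
    for s2, w in mu.items():           # s'' with weight mu(s'')
        for s1, q in f(s2).items():    # s' with weight f(s'')(s')
            out[s1] = out.get(s1, 0.0) + q * w
    return out

def fair_flip(_state):
    """Hypothetical atomic program: move to heads or tails with prob 1/2 each."""
    return {"heads": 0.5, "tails": 0.5}
```

Applying `convolve(fair_flip, {"start": 1.0})` yields the uniform distribution on `heads`/`tails`; total mass is preserved, matching almost-certain termination.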
For $r,r'{\in}\mathbb{H}_1\Omega$, the binary Kleene star $r{*} r'$ is the least fixed point of the function $f_{r,r'}(X) = r' {+} r{\cdot} X$ in $\mathbb{H}_1\Omega$. It has been shown in~\cite{Mci04} that the function $r'\mapsto r{\cdot} r'$ is continuous---it preserves directed suprema. Notice that a topological closure is sometimes needed to ensure that we obtain an element of $\mathbb{H}_1\Omega$. Hence, the Kleene star $r{*} r'$ is the program such that
$r{*}r'(s) = \overline{{\cup}_{n}f_{r,r'}^n(\bot)(s)}$,
where $\overline{A}$ is the topological closure of the set $A\subseteq\mathbb{D}\Omega$ and the constant $\bot$ is defined, as usual, such that $r''{\cdot} \bot {=} \bot{\cdot} r'' {=} \bot$, $\bot{+}r'' = r''$ and $\bot\sqsubseteqh r''$ for every $r''{\in}\mathbb{H}_1\Omega{\cup}\{\bot\}$.
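For concreteness, the first approximations of this least fixed point can be unrolled using the laws for $\bot$ stated above (a routine calculation, not spelled out in the text):
\begin{align*}
f_{r,r'}(\bot) &= r' {+} r{\cdot}\bot = r',\\
f_{r,r'}^2(\bot) &= r' {+} r{\cdot} r',\\
f_{r,r'}^{n{+}1}(\bot) &= r' {+} r{\cdot} r' {+} \dots {+} r^{n}{\cdot} r',
\end{align*}
so that $r{*}r'(s)$ is the topological closure of the union of these approximations evaluated at $s$.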
We introduce tests, which are used for conditional constructs, following the idea adopted in various algebras of programs. We define a test to be a map $b:\Omega\to\mathbb{P}\mathbb{D}\Omega$ such that $b(s) \subseteq\{\delta_s\}$. Indeed, an ``if statement'' is modelled algebraically as $b{\cdot} r {+} (\neg b){\cdot} r'$ where $(\neg b)(s) = \emptyset$ if the test underlying $b$ holds at state $s$ and it is $\{\delta_s\}$ otherwise. The sub-expression $b{\cdot} r(s)$ still evaluates to $\emptyset$ if $b(s)$ is empty, but care should be taken to avoid expressions such as $r{\cdot} b$ (if $f$ is a deterministic refinement of $b$, then $f(s'')(s')$ may have no meaning if $b(s'') {=} \emptyset$). A test that is always false can be identified with $\bot$.
We denote by $\overline{\mathbb{H}}_1\Omega$ the set of tests together with the set of probabilistic programs. The refinement order $\sqsubseteqh$ is extended to $\overline{\mathbb{H}}_1\Omega$ in a straightforward manner. For every test $b$, we have $b\sqsubseteqh \delta$; hence, we refer to tests as \emph{subidentities}. All elements of $\overline{\mathbb{H}}_1\Omega$ are called programs, unless otherwise specified.
\section{An event structure model for probabilistic concurrent programs}\label{sec:es}
The set $\mathbb{H}_1\Omega$ of probabilistic programs provides a full semantics for program constructs such as (probabilistic) assignments, probabilistic choices, conditionals and while loops that terminate almost surely. Unfortunately, it is impossible to define the concurrent composition of two sequential programs as an operation on $\mathbb{H}_1\Omega$, because the result would always be a sequential program. Thus we are forced to look for a more general framework in order to formally model concurrency. Fortunately, there are several suitable mathematical models that allow the formal verification of programs with concurrent behaviours. A powerful example that accounts for true concurrency is Winskel's \emph{event structures}~\cite{Win80,Win86}. In this section, we outline a denotational semantics for probabilistic concurrent programs based on Langerak's bundle event structures~\cite{Lan92}, which have been extended successfully with quantitative features~\cite{Kat93,Kat96,Var03}. This construction is necessary to ensure the soundness of the extended rely-guarantee formalism.
A bundle event structure has a set $E$ of \emph{events} as its fundamental objects. Intuitively, an event is an occurrence of an action at a certain moment in time. Thus an action can be repeated, but each of its occurrences is associated with a unique event. Events are (partially) ordered by a causality relation, denoted $\mapsto$: if an event $e''$ causally depends on either $e$ or $e'$ (i.e. $\{e,e'\}{\mapsto} e''$), then either $e$ or $e'$ must have happened before $e''$ can happen, i.e. before $e''$ is \emph{enabled}. The relationship between $e$ and $e'$ is called \emph{conflict}, written $e\#e'$, because the two events cannot both occur.
In general, the conflict relation $\#$ is a binary relation on $E$. Given two subsets $x,x'\subseteq E$, the predicate $x\#x'$ holds iff for every $(e,e'){\in} x{\times} x'$ such that $e{\neq}e'$, we have $e\#e'$.
\begin{definition}\label{def:ipbes}
A quintuple $\mathcal{E} = (E,\mapsto,\#,\lambda,\mathbf{\Phi})$ is a \emph{bundle event structure with internal probability} (i.e. an ipBES) if
\begin{itemize}
\item $\#$ is an irreflexive symmetric binary relation on $E$, called \emph{conflict relation}.
\item $\mapsto\subseteq\mathbb{P} E{\times} E$ is a \emph{bundle relation}, i.e. if $x{\mapsto} e$ for some $x\subseteq E$ and $e{\in} E$, then $x\#x$.
\item $\lambda{:}E{\to}\overline{\mathbb{H}}_1\Omega$, i.e. it labels events with (atomic) probabilistic programs.
\item $\mathbf{\Phi}\subseteq \mathbb{P} E$ is such that $x\#x$ holds for every $x{\in}\mathbf{\Phi}$.
\end{itemize}
The finite state space $\Omega$ of the programs used as labels is fixed.
\end{definition}
The intuition behind this definition is that events are occurrences of atomic program fragments, i.e. they can happen without interferences from an environment. Hence, we need to distinguish all atomic program fragments when translating a program into a bundle event structure. Atomicity can be enforced by a dedicated language construct; ``atomic brackets''~\cite{Jon12} are an example of such a technique. In this paper, we always state which actions are atomic rather than using such a device.
Given an ipBES $\mathcal{E}$, a \emph{finite trace} of $\mathcal{E}$ is a sequence of events $e_1e_2\dots e_n$ such that for all distinct $1\leq i,j\leq n$, $\neg (e_i\#e_j)$, and if $j {=} i{+}1$ then there exists an $x\subseteq E$ such that $x{\mapsto} e_j$ and $e_i{\in} x$~\cite{Lan92,Kat96,Rab13}. In other words, a trace is safe (an event may occur only when it is enabled) and is conflict free. The set of all finite traces of $\mathcal{E}$ is denoted by $\mathcal{T}(\mathcal{E})$. The set of maximal traces of $\mathcal{E}$ (w.r.t.\ the prefix ordering) is denoted $\mathcal{T}_{\max}(\mathcal{E})$. We simply write $\mathcal{T}$ (resp. $\mathcal{T}_{\max}$) instead of $\mathcal{T}(\mathcal{E})$ (resp. $\mathcal{T}_{\max}(\mathcal{E})$) when no confusion may arise.
The aim of this section is to elaborate two relationships between the sets of traces of given event structures. The first comparison is based on a sequential reduction using schedulers; the second one is simulation. We will show that the sequential comparison is strictly weaker than the simulation relation.
\subsection{Schedulers on ipBES}\label{sec:scheduler}
As in the case of automata, we define schedulers on ipBES in order to obtain a sequential equivalence on bundle event structures with internal probability. Intuitively, a scheduler reduces an ipBES to an element of $\overline{\mathbb{H}}_1\Omega$. While the technicalities of the schedulers defined in this paper are tailored towards rely-guarantee reasoning, there may be relationships with previous work~\cite{Geo10,Geo12}, where schedulers (and associated testing theories) are restricted in order to achieve a broader class of observationally equivalent processes.
A \emph{subdistribution} is a map $\mu:\Omega\to[0,1]$ such that $\sum_{s{\in}\Omega}\mu(s)\leq 1$. The set of subdistributions over $\Omega$ is denoted by $\mathbb{J}ip\Omega$.
\begin{definition}\label{def:ipscheduler}
A \emph{scheduler} $\sigma$ on an ipBES $\mathcal{E}$ is a map
\[
\sigma{:}\mathcal{T}{\to} [(E{\times} \Omega){\rightharpoondown} \mathbb{J}ip\Omega]
\]
such that for all $\alpha{\in}\mathcal{T}$:
\begin{enumerate}
\item $\mathrm{dom}(\sigma(\alpha)) = \{(e,s)\ |\ \alpha e{\in}\mathcal{T}\wedge s{\in}\Omega\}$,\label{pr:sched-dom}
\item there exists a function $w{:}E{\times}\Omega{\to}[0,1]$ such that, for every $(e,s){\in}\mathrm{dom}(\sigma(\alpha))$, $\sigma(\alpha)(e,s) = w(e,s)\mu$ for some $\mu{\in}\lambda(e)(s)$.
\label{pr:sched-choice}
\item for every $ s{\in}\Omega$, we have $\sum_{(e,s){\in}\mathrm{dom}(\sigma(\alpha))} w(e,s) = 1$,\label{pr:sched-prob}
\item for every $(e,s){\in}\mathrm{dom}(\sigma(\alpha))$, if $\lambda(e)(s) {=} \emptyset$, then $w(e,s) {=} 0$ and $\sigma(\alpha)(e,s) {=} 0$ (the subdistribution that evaluates to $0$ everywhere).\label{pr:sched-consistent}
\end{enumerate}
The set of all schedulers on $\mathcal{E}$ is denoted by $\mathbf{Sched}(\mathcal{E})$.
\end{definition}
Property~\ref{pr:sched-dom} says that we may schedule an event provided it does not depend on unscheduled events.
Property~\ref{pr:sched-choice} states that, given a trace $\alpha$, the scheduler will resolve the nondeterminism between events enabled after $\alpha$ by using the weight function $w$. This may include immediate conflicts or interleavings of concurrent events. Moreover, the scheduler has access to the current program state when resolving that nondeterminism. This means that $w(e,s)$ is the probability that the event $e$ is scheduled, knowing that the program state is $s$. If the event $e$ is successfully scheduled, then the scheduler performs a last choice of distribution, say $\mu$ from $\lambda(e)(s)$, to generate the next state of the program.
Property~\ref{pr:sched-prob} ensures that when the state $s$ is known, then the choice between the events, enabled after the trace $\alpha$, is indeed probabilistic.
Property~\ref{pr:sched-consistent} says that a scheduler is forced to choose events whose labels do not evaluate to the empty set at the current state of the program. This is particularly important when the program contains conditionals and the label of an event is a test. A scheduler is forced to choose the branch whose test holds. If two tests hold at state $s$, then a branch is chosen probabilistically using the weight function $w$.
The motivation behind Property~\ref{pr:sched-consistent} is to ensure that, for every trace $\alpha$ such that $\mathrm{dom}(\sigma(\alpha)){\neq}\emptyset$, and every state $s{\in}\Omega$, we have
\[
\sum_{(e,s){\in} \mathrm{dom}(\sigma(\alpha))} \sigma(\alpha)(e,s) {\in}\mathbb{D}\Omega,
\]
hence that sum is indeed a distribution. To ensure that a scheduler satisfying that condition can be constructed, we restrict ourselves to \textit{feasible} event structures. Given an element $r{\in}\overline{\mathbb{H}}_1\Omega$, we write $\mathrm{dom}(r) {=} \{s \ |\ r(s){\neq}\emptyset\}$.
\begin{definition}
An ipBES $\mathcal{E}$ is \emph{feasible} if for every trace $\alpha {\in}\mathcal{T}{\setminus}\mathcal{T}_{\max}$, we have ${\cup}_{\alpha e{\in}\mathcal{T}}\mathrm{dom}(\lambda(e)) {=} \Omega$.
\end{definition}
A consequence of this assumption is that an ``if clause'' always needs to have a corresponding ``else clause''.
\begin{example}
Let us consider the program $r{\cdot}(\delta{+}r)$. In this program, $r$ is atomic and deterministic (such as an assignment to a variable), and the associated event structure has three events:
\[
\mathcal{E} = (\{e_{r},e_{r}',e_\delta\},\{\{e_{r}\}{\mapsto} e_\delta,\{e_{r}\}{\mapsto} e_{r}'\},\{e_\delta\#e_{r}'\},\{(e_r,r),(e_r',r),(e_\delta,\delta)\},\mathbf{\Phi}),
\]
where $\mathbf{\Phi} = \{\{e_r',e_\delta\}\}$ (see \Sec{S:seqred} for an inductive construction of ipBES from primitive blocks). This event structure is feasible, and a scheduler $\sigma$ on $\mathcal{E}$ is characterised by a weight function $w{:}\{e_r,e_r',e_\delta\}{\times}\Omega{\to}[0,1]$ resolving the choice $\delta {+} r$. In fact, for every fixed state $s{\in}\Omega$, we have $\sigma(e_r)(e_\delta,s){=}w(e_\delta,s)\delta_s$ and $\sigma(e_r)(e_r',s){=}w(e_r',s)r(s)$ and $w(e_\delta,s) {+} w(e_r',s) {=} 1$.
\end{example}
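The trace sets used in this example can be enumerated mechanically. The following sketch is ours (it uses a Langerak-style reading of enabledness: every bundle pointing at an event must contain an already-occurred event) and recovers the traces of the structure above, writing `er2` for $e_r'$ and `ed` for $e_\delta$:

```python
def traces(events, bundles, conflict):
    """All finite traces: conflict-free sequences of fresh, enabled events.
    bundles: list of (frozenset_x, e); conflict: set of frozenset pairs."""
    def enabled(e, alpha):
        # every bundle x -> e must contain some event already in alpha
        return all(set(x) & set(alpha) for (x, e2) in bundles if e2 == e)
    def ok(e, alpha):
        return (e not in alpha and enabled(e, alpha)
                and all(frozenset((e, f)) not in conflict for f in alpha))
    out, stack = {()}, [()]
    while stack:
        alpha = stack.pop()
        for e in events:
            if ok(e, alpha):
                beta = alpha + (e,)
                if beta not in out:
                    out.add(beta)
                    stack.append(beta)
    return out

# The example structure: e_r -> e_delta, e_r -> e_r', with e_delta # e_r'.
E = {"er", "er2", "ed"}
B = [(frozenset({"er"}), "ed"), (frozenset({"er"}), "er2")]
C = {frozenset(("ed", "er2"))}
T = traces(E, B, C)
```

As expected, the maximal traces are `("er", "ed")` and `("er", "er2")`; the conflict forbids `("er", "ed", "er2")`, and neither `ed` nor `er2` can occur first.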
\subsection{Generating sequential probabilistic programs from ipBES and schedulers}
Similar to the case of probabilistic automata~\cite{Seg94}, our scheduler resolves branching as encoded in the conflict relation of an event structure. In addition, a scheduler also ``flattens'' concurrency into interleaving by choosing an enabled event according to the associated weight function. The flattening of concurrent behaviours is sound because actions labelling events are assumed atomic and we are using schedulers to generate sequential behaviours from an ipBES. True concurrency is accounted for in \Sec{s1511}.
Let $\sigma{\in}\mathbf{Sched}(\mathcal{E})$ and $s{\in}\Omega$ be an initial state. We inductively construct a sequence of functions $\varphi_n$ that map a trace in $\mathcal{T}$ to a subdistribution on $\Omega$ according to $\sigma$ and $s$. Intuitively, if $\alpha{\in}\mathcal{T}$, then $\varphi_n(\alpha){\in}\mathbb{J}ip\Omega$ is the sequential composition of the first $n$ probabilistic actions labelling events in $\alpha$, applied to the initial state $s$. This yields a subdistribution because $\alpha$ is weighted with respect to the scheduler $\sigma$. The sequence of partial functions $\varphi_n{:}\mathcal{T}{\rightharpoondown}\mathbb{J}ip\Omega$ is the \emph{computation sequence} of $\mathcal{E}$ with respect to $\sigma$ from initial state $s$.
Formally, for each $n{\in}{\mathbb N}$, we have $\mathrm{dom}(\varphi_n) {=} {\cup}_{k\leq n}\mathcal{T}_k$, where $\mathcal{T}_n$ is the set of traces of length $n$ and
\begin{enumerate}
\item $\varphi_0(\emptyset) {=} \delta_s$, where $s$ is the initial state,
\item if $\alpha e{\in}\mathcal{T}_{n{+}1}$ then
\[
\varphi_{n{+}1}(\alpha e)(s) = \sum_{t{\in}\Omega}[\sigma(\alpha)(e,t)(s)]\varphi_n(\alpha)(t)
\]
and $\varphi_{n{+}1}(\alpha e) {=} \varphi_n(\alpha e)$ otherwise.\label{pr:induction-computation-function}
\end{enumerate}
To emphasise that this computation function refers to a specific initial state $t{\in}\Omega$, we sometimes write $\varphi_{n,t}$ instead of $\varphi_n$.
The \textit{complete run} of $\mathcal{E}$ with respect to $\sigma$ is the limit $\varphi$ of that sequence, i.e. $\varphi {=} {\cup}_n\varphi_n$, which exists because $\varphi_n$ defines a sequence of partial functions such that $\varphi_{n}$ is the restriction of $\varphi_{n{+}1}$ to $\mathrm{dom}(\varphi_n)$. Since we consider finite traces only, we have $\mathrm{dom}(\varphi) {=} \mathcal{T}$. The \textit{sequential behaviour} of $\mathcal{E}$ with respect to $\sigma$ from the initial state $s$ is defined by the sum \[
\sigma_s(\mathcal{E}) = \sum_{\alpha{\in}\mathcal{T}_{\max}}\varphi(\alpha).
\]
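To illustrate the whole construction, here is a sketch of ours computing $\sigma_s(\mathcal{E})$ for the earlier example $r{\cdot}(\delta{+}r)$, with a hypothetical two-state space $\{0,1\}$, $r$ an assignment setting the state to $1$, and a scheduler that splits the choice $\delta{+}r$ evenly:

```python
# States: 0 and 1; the atomic program r sets the state to 1.
def r(s):
    return {1: 1.0}

def skip(s):
    return {s: 1.0}

def weighted_step(phi_alpha, action, weight):
    """Extend the subdistribution of a trace by one scheduled event,
    weighted by the scheduler's choice probability."""
    out = {}
    for t, w in phi_alpha.items():
        for s, q in action(t).items():
            out[s] = out.get(s, 0.0) + weight * q * w
    return out

phi_er = weighted_step({0: 1.0}, r, 1.0)       # only e_r is enabled initially
phi_er_ed = weighted_step(phi_er, skip, 0.5)   # w(e_delta, s) = 0.5
phi_er_er2 = weighted_step(phi_er, r, 0.5)     # w(e_r', s) = 0.5

# sigma_s(E): sum of the subdistributions of the two maximal traces
sigma = {}
for phi in (phi_er_ed, phi_er_er2):
    for s, w in phi.items():
        sigma[s] = sigma.get(s, 0.0) + w
```

Each maximal trace contributes mass $0.5$ to state $1$, so the sequential behaviour is the full distribution $\delta_1$, consistent with Proposition~\ref{pro:scheduler-well-defined}.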
\begin{proposition}\label{pro:scheduler-well-defined}
For every bundle event structure $\mathcal{E}$, scheduler $\sigma{\in}\mathbf{Sched}(\mathcal{E})$ and initial state $s$, $\sigma_s(\mathcal{E})$ is a subdistribution.
\end{proposition}
\begin{proofsummary}
The proof is by induction on the sequence of computation functions. We can show by induction on $n$ that
\[
\mu_{n}(\Omega) = \sum_{\alpha{\in} \mathcal{T}_{n}{\cup}(\mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_n))}\varphi(\alpha)(\Omega) = \sum_{t{\in}\Omega} \sum_{\alpha{\in} \mathcal{T}_{n}{\cup}(\mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_n))}\varphi(\alpha)(t) = 1
\]
and deduce that, at the limit, $\sum_{\alpha{\in}\mathcal{T}_{\max}}\varphi(\alpha)(\Omega) \leq 1$.
\end{proofsummary}
\begin{proof}
Let $\varphi$ be the complete run of $\mathcal{E}$ with respect to a given scheduler $\sigma$. We show by induction on $n$ that
\[
\mu_{n}(\Omega) = \sum_{\alpha{\in} \mathcal{T}_{n}{\cup}(\mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_n))}\varphi(\alpha)(\Omega) = \sum_{t{\in}\Omega} \sum_{\alpha{\in} \mathcal{T}_{n}{\cup}(\mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_n))}\varphi(\alpha)(t) = 1.
\]
For the base case $n=0$, we have $\mu_0(\Omega) = \varphi(\emptyset)(\Omega) = \delta_s(\Omega) = 1$, where $s$ is the initial state. Assume the induction hypothesis $\mu_n(\Omega) = 1$. We have
\begin{align*}
\mu_{n{+}1}(\Omega) & = \sum_{\alpha{\in} \mathcal{T}_{n{+}1}{\cup}(\mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_{n{+}1}))}\varphi(\alpha)(\Omega)\\
& =^{\dag} \sum_{\alpha{\in} \mathcal{T}_{n{+}1}}\varphi(\alpha)(\Omega) {+} \sum_{\alpha{\in}\mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_n)}\varphi(\alpha)(\Omega)\\
& = \sum_{\alpha e{\in} \mathcal{T}_{n{+}1}}\sum_{t{\in}\Omega}\sigma(\alpha)(e,t)(\Omega)\varphi(\alpha)(t) {+} \sum_{\alpha{\in} \mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_n)}\varphi(\alpha)(\Omega)\\
& = \sum_{\alpha{\in} \mathcal{T}_{n}{\setminus}\mathcal{T}_{\max}}\sum_{\alpha e{\in} \mathcal{T}}\sum_{t{\in}\Omega}\sigma(\alpha)(e,t)(\Omega)\varphi(\alpha)(t) {+} \sum_{\alpha{\in} \mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_n)}\varphi(\alpha)(\Omega)\\
& = \sum_{\alpha{\in} \mathcal{T}_{n}{\setminus}\mathcal{T}_{\max}}\sum_{t{\in}\Omega}\left[\sum_{e\,:\,(e,t){\in}\mathrm{dom}(\sigma(\alpha))}\sigma(\alpha)(e,t)(\Omega)\right]\varphi(\alpha)(t) {+} \sum_{\alpha{\in}\mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_n)}\varphi(\alpha)(\Omega)\\
& =^{\ddag} \sum_{\alpha{\in} \mathcal{T}_{n}{\setminus}\mathcal{T}_{\max}}\varphi(\alpha)(\Omega) {+} \sum_{\alpha{\in}\mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_n)}\varphi(\alpha)(\Omega)\\
& = \mu_n(\Omega) = 1.
\end{align*}
($\dag$) Follows from $\mathcal{T}_{n{+}1}{\cup}(\mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_{n{+}1})) = \mathcal{T}_{n{+}1}{\cup}(\mathcal{T}_{\max}{\cap} \mathrm{dom}(\varphi_{n}))$ and the fact that the second union is disjoint.
($\ddag$) The square-bracketed term equals $1$ because of Properties~\ref{pr:sched-choice} and~\ref{pr:sched-prob} of the scheduler $\sigma$.
Therefore, each partial computation $\varphi_n$ can be seen as a probability distribution $\varphi_n(-)(\Omega)$ supported on $\mathcal{T}_n{\cup}(\mathcal{T}_{\max}{\cap}\mathrm{dom}(\varphi_n))$. Hence, the limit is a subdistribution $\varphi(-)(\Omega)$ on $\mathcal{T}_{\max}$. It does not necessarily add up to $1$ because the elements of $\mathcal{T}_{\max}$ are finite maximal traces only, and non-termination decreases that quantity (we take the empty sum to be $0$, which occurs when there are no maximal traces).
\end{proof}
Given a state $t{\in}\Omega$, $\sigma_s(\mathcal{E})(t)$ is the probability that the concurrent probabilistic program denoted by $\mathcal{E}$ terminates in state $t$ when conflicts (resp. concurrent events) are resolved (resp. interleaved) according to the scheduler $\sigma$. Since we consider terminating programs only, we denote by $\mathbf{Sched}_1(\mathcal{E})$ the set of schedulers $\sigma$ of $\mathcal{E}$ such that, for every initial state $s$, $\sigma_s(\mathcal{E})$ is a distribution. A scheduler in $\mathbf{Sched}_1(\mathcal{E})$ generates a sequential behaviour that terminates almost surely. This leads to our definition of a bracket $\sem{\ }$ that transforms each feasible ipBES into an element of $\mathbb{H}_1\Omega$: \[
\sem{\mathcal{E}}(s) = \overline{\mathrm{conv}\{\sigma_s(\mathcal{E})\ |\ \sigma{\in}\mathbf{Sched}_1(\mathcal{E})\}}
\]
where $\mathrm{conv}(A)$ (resp. $\overline{A}$) is the convex (resp. topological) closure of the set of distributions $A$ in ${\mathbb R}^\Omega$.
\begin{definition}\label{def:semantics-sequential}
Let $\mathcal{E},\mathcal{F}$ be two feasible event structures. We say that $\mathcal{E}$ (sequentially) refines $\mathcal{F}$, denoted by $\mathcal{E}\sqsubseteq\mathcal{F}$, if $\sem{\mathcal{E}}\sqsubseteqh\sem{\mathcal{F}}$ holds in $\mathbb{H}_1\Omega$.
\end{definition}
The relation $\sqsubseteq$ is a preorder on ipBES. Whilst this order is not a congruence, it is used to specify the desired sequential properties of a feasible event structure $\mathcal{E}$ with $\mathbf{Sched}_1(\mathcal{E}){\neq}\emptyset$. We will show that feasibility and non-emptiness of $\mathbf{Sched}_1$ are preserved by the regular operations of the next section (Propositions~\ref{pro:homomorphism} and~\ref{pro:*-homomorphism}).
\subsection{Regular operations on ipBES}\label{S:seqred}
This section provides interpretations of the operations $(+,\cdot,*,\|)$ and constants $0,1$ on event structures with disjoint sets of events. These definitions allow the inductive translation of program texts into event structure objects.
\begin{itemize}
\item[-] The algebraic constant $1$ is interpreted as $(\{e\},\emptyset,\emptyset,\{(e,\delta)\},\{e\})$.
\item[-] The algebraic constant $0$ is interpreted as $(\emptyset,\emptyset,\emptyset,\emptyset,\emptyset)$.
\item[-] Each atomic action $r{\in}\overline{\mathbb{H}}_1\Omega$ is associated with $(\{e\},\emptyset,\emptyset,\{(e,r)\},\{e\})$. This event structure is again denoted by $r$.
\item[-] The nondeterministic choice between the event structures $\mathcal{E}$ and $\mathcal{F}$ is constructed as
\[
\mathcal{E} {+} \mathcal{F} = (E{\cup} F,\#_{\mathcal{E}{+}\mathcal{F}},\mapsto_\mathcal{E}{\cup}\mapsto_\mathcal{F},\lambda_\mathcal{E}{\cup}\lambda_\mathcal{F},\{x{\cup} y\ |\ x{\in}\mathbf{\mathbb{P}hi}_\mathcal{E}\wedge y{\in}\mathbf{\mathbb{P}hi}_\mathcal{F}\})
\]
where $\#_{\mathcal{E}{+}\mathcal{F}} = [{\cup}_{x{\in}\mathbf{\mathbb{P}hi}_\mathcal{E}\wedge y{\in}\mathbf{\mathbb{P}hi}_\mathcal{F}}\textrm{sym}(x{\times} y)]{\cup} \#_\mathcal{E}{\cup}\#_\mathcal{F}{\cup}\textrm{sym}({\mathbf{in}(\mathcal{E}){\times}\mathbf{in}(\mathcal{F})})$ and $\textrm{sym}$ is the symmetric closure of a relation on $E{\cup} F$. The square-bracketed set ensures that every final event in $\mathcal{E}$ is in conflict with every final event in $\mathcal{F}$. This ensures that, if $z{\in}\mathbf{\mathbb{P}hi}_{\mathcal{E}{+}\mathcal{F}}$, then $z\#z$.
\item[-] The sequential composition of $\mathcal{E}$ by $\mathcal{F}$ is
\[
\mathcal{E}{\cdot}\mathcal{F} = (E{\cup} F,\#_\mathcal{E}{\cup}\#_\mathcal{F},\mapsto_\mathcal{E}{\cup}\mapsto_\mathcal{F}{\cup}\{x\mapsto e\ |\ e{\in}\mathbf{in}(\mathcal{F})\wedge x{\in}\mathbf{\mathbb{P}hi}_\mathcal{E} \},\lambda_\mathcal{E}{\cup}\lambda_\mathcal{F},\mathbf{\mathbb{P}hi}_\mathcal{F}).
\]
\item[-] The concurrent composition of $\mathcal{E}$ and $\mathcal{F}$ is
\[
\mathcal{E}\|\mathcal{F} = (E{\cup} F,\#_\mathcal{E}{\cup}\#_\mathcal{F},\mapsto_{\mathcal{E}}{\cup} \mapsto_\mathcal{F},\lambda_\mathcal{E}{\cup}\lambda_\mathcal{F},\mathbf{\mathbb{P}hi}_\mathcal{E}{\cup}\mathbf{\mathbb{P}hi}_\mathcal{F}). \]
\item[-] The binary Kleene star of $\mathcal{E}$ and $\mathcal{F}$ is the supremum of the sequence \[
\mathcal{F}, \mathcal{F} {+} \mathcal{E}{\cdot}\mathcal{F}, \mathcal{F} {+} \mathcal{E}{\cdot}(\mathcal{F} {+} \mathcal{E}\cdot\mathcal{F}),\dots
\]
of bundle event structures with respect to the $\omega$-complete sub-BES order~\cite{Rab13b}.
\end{itemize}
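As a reading aid, the operations above can be sketched in Python. The 5-tuple encoding below (events as strings, conflicts as unordered pairs, bundles as (configuration, event) pairs, the termination set as a set of frozensets) and the helper \texttt{initial} for $\mathbf{in}(\cdot)$ are our own illustrative assumptions, not part of the formal development.

```python
# Schematic encoding of an event structure as
# (E, conflict, bundles, labels, Phi); all names are ours.

def initial(es):
    """in(E): events with no incoming bundle."""
    E, _, bundles, _, _ = es
    return {e for e in E if not any(e == tgt for _, tgt in bundles)}

def sym(pairs):
    """Symmetric closure, encoded as unordered pairs (frozensets)."""
    return {frozenset(p) for p in pairs}

def choice(es1, es2):
    """E + F: union, plus conflicts between initial events and between
    final configurations, so at most one side runs to completion."""
    E1, c1, b1, l1, P1 = es1
    E2, c2, b2, l2, P2 = es2
    cross = {(e, f) for x in P1 for y in P2 for e in x for f in y}
    init = {(e, f) for e in initial(es1) for f in initial(es2)}
    phi = {x | y for x in P1 for y in P2}
    return (E1 | E2, c1 | c2 | sym(cross) | sym(init),
            b1 | b2, {**l1, **l2}, phi)

def seq(es1, es2):
    """E . F: every final configuration of E becomes a bundle pointing
    at each initial event of F; final configurations are those of F."""
    E1, c1, b1, l1, P1 = es1
    E2, c2, b2, l2, P2 = es2
    link = {(frozenset(x), e) for x in P1 for e in initial(es2)}
    return (E1 | E2, c1 | c2, b1 | b2 | link, {**l1, **l2}, P2)

def par(es1, es2):
    """E || F: plain union; final configurations of either side."""
    E1, c1, b1, l1, P1 = es1
    E2, c2, b2, l2, P2 = es2
    return (E1 | E2, c1 | c2, b1 | b2, {**l1, **l2}, P1 | P2)

# Building (r || r) + 1, as in the example that follows:
r1 = ({'er'}, set(), set(), {'er': 'r'}, {frozenset({'er'})})
r2 = ({'er2'}, set(), set(), {'er2': 'r'}, {frozenset({'er2'})})
one = ({'ed'}, set(), set(), {'ed': 'delta'}, {frozenset({'ed'})})
prog = choice(par(r1, r2), one)
```

On this toy instance, the conflicts and final configurations of `prog` coincide with those computed by hand in the example below.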
\begin{example}
Let us consider the sequential programs $r,\delta{\in}\mathbb{H}_1\Omega$. A concurrent program that is skipping or running $r$ in parallel with itself is algebraically denoted by $(r\|r){+}1$. The construction of the associated event structure starts from the innermost operation $(r\|r)$, assuming that each occurrence of the atomic action $r$ is associated with an event from $\{e_r,e_r'\}$. Thus
\[
\mathcal{E}_{r\|r} = (\{e_r,e_r'\}, \emptyset, \emptyset,\underbrace{\{(e_r,r),(e_r',r)\}}_{\lambda_{r\|r}}, \{\{e_r\},\{e_r'\}\}).
\]
We can now construct the nondeterministic choice between $r\|r$ and the constant $1$ (whose single event $e_\delta$ is labelled $\delta$) as
\begin{footnotesize}
\[
\mathcal{E}_{(r\|r){+}1} = (\{e_r,e_r',e_\delta\}, \{e_r\#e_\delta,e_r'\#e_\delta\},\emptyset,\lambda_{r\|r}{\cup}\{(e_\delta,\delta)\}, \{\{\epsilon,e_\delta\}\ |\ \epsilon{\in}\{e_r,e_r'\}\}).
\]
\end{footnotesize}
In this example, we have $e_r\#e_\delta$ and $e_r'\#e_\delta$ but $e_r$ and $e_r'$ are concurrent.
\end{example}
For every bundle event structure $\mathcal{E}$, $0 {+} \mathcal{E} {=} \mathcal{E}$, $0{\cdot} \mathcal{E} {=} \mathcal{E}{\cdot} 0 {=} \mathcal{E}$, and in particular, $0{\cdot} 1 {=} 1$. The constant $0$ was only introduced to have a bottom element on the set of bundle event structures with internal probabilities. It ensures that we can compute the Kleene star inductively from the least element. Moreover, $0$ will disappear in mixed expressions because of these properties.
We now show that the operations $(+)$ and $(\cdot)$ are preserved by the map $\sem{\ }$. The case of the binary Kleene star $({*})$ is proven in Proposition~\ref{pro:*-homomorphism}.
\begin{proposition}\label{pro:homomorphism}
For $\mathcal{E},\mathcal{F}$ non-zero, feasible and terminating event structures, we have $\sem{\mathcal{E}{+}\mathcal{F}} = \sem{\mathcal{E}}{+}\sem{\mathcal{F}}$ and $\sem{\mathcal{E}{\cdot}\mathcal{F}} = \sem{\mathcal{E}}{\cdot}\sem{\mathcal{F}}$.
\end{proposition}
\begin{proof}
For the case of nondeterminism $(+)$, let $s{\in}\Omega$ be the initial state and $\mu{\in}\sem{\mathcal{E}{+}\mathcal{F}}(s)$. Let us firstly assume that $\mu {=} \sigma_s(\mathcal{E}{+}\mathcal{F})$ for some $\sigma{\in}\mathbf{Sched}_1(\mathcal{E}{+}\mathcal{F})$. By definition of the sum $\mathcal{E}{+}\mathcal{F}$, the sets of events $E$ and $F$ are disjoint, so we can define two schedulers $\sigma^\mathcal{E}{\in}\mathbf{Sched}_1(\mathcal{E})$ and $\sigma^\mathcal{F}{\in}\mathbf{Sched}_1(\mathcal{F})$ as follows. For $\alpha{\in}\mathcal{T}(\mathcal{E}{+}\mathcal{F})$ and $(e,t){\in}\mathrm{dom}(\sigma(\alpha))$, we define
\begin{displaymath}
\sigma^\mathcal{E}(\alpha)(e,t) = \begin{cases}
\sigma(\alpha)(e,t) & \textrm{if }\alpha{\in}\mathcal{T}(\mathcal{E}){\setminus}\{\emptyset\},\\
\frac{\sigma(\emptyset)(e,t)}{p^\mathcal{E}_t} & \textrm{if } \alpha {=} \emptyset.
\end{cases}
\end{displaymath}
where $p^\mathcal{E}_t {=} \sum_{e'{\in}\mathbf{in}(\mathcal{E})}w(e',t)$, $w$ is the weight function associated to $\sigma$ at the trace $\emptyset$ and $s$ is the initial state. The real number $p^\mathcal{E}_t$ is just a normalisation constant required by Property~\ref{pr:sched-prob} in the definition of schedulers.~\footnote{If $p^\mathcal{E}_t {=} 0$, then $\sigma{\in}\mathbf{Sched}_1(\mathcal{F})$.} The scheduler $\sigma^\mathcal{F}$ is similarly defined. It follows directly from these definitions of $\sigma^\mathcal{E}$ and $\sigma^\mathcal{F}$ that $\sigma(\emptyset)(e,t) {=} p^\mathcal{E}_t\sigma^\mathcal{E}(\emptyset)(e,t) {+} p^{\mathcal{F}}_t\sigma^\mathcal{F}(\emptyset)(e,t)$, where $p^\mathcal{E}_t {+} p^\mathcal{F}_t {=} 1$ because of Property~\ref{pr:sched-prob}. Hence, $\sigma_s(\mathcal{E}{+}\mathcal{F}) {=} p^\mathcal{E}_s\sigma_s^\mathcal{E}(\mathcal{E}) {+} p^{\mathcal{F}}_s\sigma^{\mathcal{F}}_s(\mathcal{F})$, i.e. $\sigma_s(\mathcal{E}{+}\mathcal{F}){\in}(\sem{\mathcal{E}} {+} \sem{\mathcal{F}})(s)$. Since $\sem{\mathcal{E}} {+} \sem{\mathcal{F}}$ is convex and topologically closed, we deduce that $\sem{\mathcal{E} {+} \mathcal{F}}(s)\subseteq(\sem{\mathcal{E}}{+}\sem{\mathcal{F}})(s)$.
For the converse inclusion $(\sem{\mathcal{E}}{+}\sem{\mathcal{F}})(s){\subseteq}\sem{\mathcal{E} {+} \mathcal{F}}(s)$, notice that $\overline{\mathrm{conv}(A)} = \mathrm{conv}(\overline{A})$ holds for every subset $A{\subseteq}{\mathbb R}^\Omega$. If we write $A {=} \{\sigma_s(\mathcal{E}) \ | \ \sigma{\in}\mathbf{Sched}_1(\mathcal{E})\}$ and $B {=} \{\sigma_s(\mathcal{F}) \ | \ \sigma{\in}\mathbf{Sched}_1(\mathcal{F})\}$, then
\[
(\sem{\mathcal{E}}{+}\sem{\mathcal{F}})(s) = \overline{\mathrm{conv}(\overline{\mathrm{conv}(A)}{\cup}\overline{\mathrm{conv}(B)})} = \overline{\mathrm{conv}(A{\cup} B)}.
\]
But it is clear that $A{\subseteq}\sem{\mathcal{E}{+}\mathcal{F}}(s)$ (a scheduler that does not choose $\mathcal{F}$ is possible because $\mathcal{E}$ is feasible) and $B{\subseteq}\sem{\mathcal{E}{+}\mathcal{F}}(s)$. Therefore, $(\sem{\mathcal{E}}{+}\sem{\mathcal{F}})(s) = \overline{\mathrm{conv}(A{\cup} B)}{\subseteq} \sem{\mathcal{E}{+}\mathcal{F}}(s)$ because the last set is convex and topologically closed.
The sequential composition is proven using a similar reasoning. Let $\mathcal{E},\mathcal{F}$ be two bundle event structures satisfying the hypothesis, and $\mu{\in}\sem{\mathcal{E}{\cdot}\mathcal{F}}(s)$ for some initial state $s{\in}\Omega$.
The proof of $\sem{\mathcal{E}{\cdot}\mathcal{F}}(s)\subseteq \sem{\mathcal{E}}{\cdot}\sem{\mathcal{F}}(s)$ goes as follows. Firstly, let us assume that there is a scheduler $\sigma$ on $\mathcal{E}{\cdot}\mathcal{F}$ such that $\mu {=} \sigma_s(\mathcal{E}{\cdot}\mathcal{F})$. Since schedulers are inductively constructed, there exist $\sigma^\mathcal{E}{\in}\mathbf{Sched}(\mathcal{E})$ and $\sigma^\mathcal{F}{\in}\mathbf{Sched}(\mathcal{F})$ such that
\begin{displaymath}
\sigma(\alpha)(e,t) =
\begin{cases}
\sigma^\mathcal{E}(\alpha)(e,t) & \textrm{if }\alpha e{\in}\mathcal{T}(\mathcal{E}),\\
\sigma^\mathcal{F}(\alpha'')(e,t) & \textrm{if } \alpha {=} \alpha'\alpha''\textrm{ and }(\alpha',\alpha''){\in} \mathcal{T}_{\max}(\mathcal{E}){\times}\mathcal{T}(\mathcal{F}).
\end{cases}
\end{displaymath}
Let us denote by $\varphi_n$, $\varphi^\mathcal{E}_n$ and $\varphi^\mathcal{F}_{n,t}$\footnote{Recall that $\varphi_{n,t}$ is the computation function computed from the initial state $t$.} the computation sequences associated to the schedulers $\sigma$ and $\sigma^\mathcal{E}$ (from the initial state $s$) and to $\sigma^\mathcal{F}$ (from the initial state $t$). It follows directly that $\varphi_n(\alpha) {=} \varphi_n^\mathcal{E}(\alpha)$ for every $\alpha{\in}\mathcal{T}_n(\mathcal{E})$. If $\alpha'{\in}\mathcal{T}_{\max}(\mathcal{E}){\cap}\mathcal{T}_n(\mathcal{E})$ and $e{\in}\mathbf{in}(\mathcal{F})$ then, for every state $u{\in}\Omega$,
\[
\varphi_{n{+}1}(\alpha' e)(u) = \sum_{t{\in}\Omega}\sigma^\mathcal{F}(\emptyset)(e,t)(u)\varphi^\mathcal{E}(\alpha')(t).
\]
Similarly, we have
\begin{align*}
\varphi_{n{+}2}(\alpha' ee')(u) & = \sum_{t'{\in}\Omega}\sigma^\mathcal{F}(e)(e',t')(u)\left[\sum_{t{\in}\Omega}\sigma^\mathcal{F}(\emptyset)(e,t)(t')\varphi^\mathcal{E}(\alpha')(t)\right]\\
& = \sum_{t{\in}\Omega}\left[\sum_{t'{\in}\Omega}\sigma^\mathcal{F}(e)(e',t')(u)\sigma^\mathcal{F}(\emptyset)(e,t)(t')\right]\varphi^\mathcal{E}(\alpha')(t) \\
& = \sum_{t{\in}\Omega} \varphi_{2,t}^\mathcal{F}(ee')(u)\,\varphi^\mathcal{E}(\alpha')(t).
\end{align*}
By simple induction on the length of $\alpha''$, we deduce that
\[
\varphi(\alpha'\alpha'')(u) = \sum_{t{\in}\Omega}\varphi_{t}^\mathcal{F}(\alpha'')(u)\varphi^\mathcal{E}(\alpha')(t),
\]
where $\varphi^\mathcal{F}_t$ is the complete run obtained from the sequence $\varphi^\mathcal{F}_{n,t}$.
It follows by definition of the sequential composition on $\mathbb{H}_1\Omega$ (\Eqn{eq:6-sequential-H}) that
\[
\sigma_s(\mathcal{E}{\cdot}\mathcal{F})(u) = \sum_{t{\in}\Omega}\sigma^{\mathcal{F}}_t(\mathcal{F})(u)\,\sigma^{\mathcal{E}}_s(\mathcal{E})(t)
\]
for every state $u{\in}\Omega$, and hence $\sigma_s(\mathcal{E}{\cdot}\mathcal{F}){\in} \sem{\mathcal{E}}{\cdot}\sem{\mathcal{F}}(s)$. Secondly, since $\sem{\mathcal{E}}{\cdot}\sem{\mathcal{F}}(s)$ is upclosed and topologically closed, we deduce that $\sem{\mathcal{E}{\cdot}\mathcal{F}}(s)\subseteq \sem{\mathcal{E}}{\cdot}\sem{\mathcal{F}}(s)$.
Conversely, if $\mu{\in} \sem{\mathcal{E}}{\cdot}\sem{\mathcal{F}}(s)$, then either $\mu(u) = \sum_{t{\in} \Omega}\sigma^{\mathcal{F}}_t(\mathcal{F})(u)\sigma^{\mathcal{E}}_s(\mathcal{E})(t)$ or $\mu$ is in the closure of the set of these distributions. Either way, the closure properties of $\sem{\mathcal{E}{\cdot}\mathcal{F}}(s)$ imply that $\sem{\mathcal{E}}{\cdot}\sem{\mathcal{F}}(s){\subseteq}\sem{\mathcal{E}{\cdot}\mathcal{F}}(s)$.
\end{proof}
\subsection{Simulation for ipBES}\label{s1511}
The partial order defined in Definition~\ref{def:semantics-sequential} compares the sequential behaviours of two systems. However, it suffers from a congruence problem, i.e. there exist programs $\mathcal{E},\mathcal{F}$ and $\mathcal{G}$ such that $\mathcal{E}{\sqsubseteq}\mathcal{F}$ but $\mathcal{E}\|\mathcal{G} {\not\sqsubseteq}\mathcal{F}\|\mathcal{G}$. A known technique for achieving congruence is to construct an order based on simulations, and that is the approach we follow in this subsection.
We say that a trace $\alpha$ is \emph{weakly maximal} if it is maximal or there exist some events $e_1,\dots,e_n$ such that $\alpha e_1\cdots e_n{\in}\mathcal{T}_{\max}$ and $\delta{\sqsubseteqh}\lambda(e_i)$ for every $1\leq i\leq n$.
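Under an illustrative encoding of our own (traces as tuples of event names, a predicate \texttt{leq} standing for $\sqsubseteqh$, and the string \texttt{'delta'} standing for $\delta$), weak maximality can be phrased as:

```python
# Sketch of weak maximality over finite trace sets; the encoding
# (tuples, string labels, predicate leq) is ours, not the paper's.

def weakly_maximal(alpha, max_traces, label, leq):
    """alpha is weakly maximal if it is maximal, or extends to a
    maximal trace alpha e1 ... en with delta <= label(ei) for all i."""
    if alpha in max_traces:
        return True
    return any(len(m) > len(alpha) and m[:len(alpha)] == alpha
               and all(leq('delta', label[e]) for e in m[len(alpha):])
               for m in max_traces)

# A toy instance: the only maximal trace is ('a', 'd'), where the
# event 'd' carries a skip-like label refined by delta.
label = {'a': 'act', 'd': 'delta'}
leq = lambda a, b: a == b           # a trivial refinement order
```

Here $(\texttt{'a'},)$ is weakly maximal because it extends to the maximal trace by the skip-like event alone, whereas $(\texttt{'d'},)$ is not.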
\begin{definition}\label{def:t-simulation}
A function $f{:}\mathcal{T}(\mathcal{E}){\to}\mathcal{T}(\mathcal{F})$ is called a \emph{t-simulation} if the following conditions hold:
\begin{itemize}
\item[-] $f(\emptyset) = \emptyset$ and $f^{-1}(\beta)$ is a finite set for every $\beta{\in}\mathcal{T}(\mathcal{F})$,
\item[-] if $\alpha e{\in}\mathcal{T}(\mathcal{E})$ then either:
\begin{itemize}
\item $f(\alpha e) = f(\alpha)$ and $\lambda(e)\sqsubseteqh\delta$ holds in $\mathbb{H}_1\Omega$,
\item or there exists an event $e'$ of $\mathcal{F}$ such that $\lambda(e){\sqsubseteqh}\lambda(e')$ and $f(\alpha e) {=} f(\alpha) e'$.
\end{itemize}
\item[-] if $\alpha e$ is maximal in $\mathcal{T}(\mathcal{E})$ then $f(\alpha e) = f(\alpha) e'$, for some $e'$ (with $\lambda(e)\sqsubseteqh\lambda(e')$), and $f(\alpha e)$ is weakly maximal in $\mathcal{T}(\mathcal{F})$~\footnote{If $f(\alpha)$ is maximal then $\alpha$ is necessarily maximal.}.
\end{itemize}
We say that $\mathcal{E}$ is simulated by $\mathcal{F}$, written $\mathcal{E}\refby_{\mathrm{sim}}\mathcal{F}$, if there exists a t-simulation from $\mathcal{E}$ to $\mathcal{F}$. The equivalence generated by this preorder is denoted $\equiv_{\textrm{sim}}$.
\end{definition}
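The conditions of this definition can be checked mechanically over finite trace sets. The sketch below uses our own encoding (traces as tuples, a predicate \texttt{leq} for $\sqsubseteqh$, \texttt{'delta'} for $\delta$) and omits the finite-preimage condition, which holds trivially when the trace sets are finite:

```python
# Sketch of the t-simulation conditions; all encodings are ours.

def is_t_simulation(f, traces_E, max_E, weakly_max_F, label, leq):
    """f: dict mapping traces of E to traces of F; leq(a, b) encodes
    the refinement order a <= b on labels; 'delta' encodes skip."""
    if f[()] != ():                       # the empty trace is preserved
        return False
    for alpha_e in traces_E:
        if not alpha_e:
            continue
        alpha, e = alpha_e[:-1], alpha_e[-1]
        if f[alpha_e] == f[alpha]:
            # the step is dropped: only allowed for sub-identities
            if not leq(label[e], 'delta'):
                return False
        elif f[alpha_e][:-1] == f[alpha] and len(f[alpha_e]) == len(f[alpha]) + 1:
            # the step is matched by a single refining event of F
            if not leq(label[e], label[f[alpha_e][-1]]):
                return False
        else:
            return False
        if alpha_e in max_E:
            # maximal traces must step and land on a weakly maximal trace
            if f[alpha_e] == f[alpha] or f[alpha_e] not in weakly_max_F:
                return False
    return True

# Toy instance: E = delta . a is simulated by F = a, dropping the
# initial skip event 'ed'.
label = {'ed': 'delta', 'ea': 'a', 'fa': 'a'}
leq = lambda a, b: a == b
f = {(): (), ('ed',): (), ('ed', 'ea'): ('fa',)}
```

Mapping the skip event onto a non-skip event of $\mathcal{F}$ instead would break the label condition and be rejected.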
The notion of t-simulation has been designed to simulate event structures correctly in the presence of tests. For instance, given a test $b$, the simulation $\delta\refby_{\mathrm{sim}}(b{+}\neg b)$ fails because a t-simulation is a total function and it does not allow the removal of ``internal'' events labelled with subidentities during a refinement step. The finiteness condition on $f^{-1}(\beta)$ ensures that we do not refine a terminating specification with a diverging implementation. Without that constraint, we would be able to write the refinement \[
\mathtt{if}\ (0{=}1)\ \mathtt{then}\ s{:=}0\ \mathtt{ else } [\mathtt{if}\ (0{=}1)\ \mathtt{then}\ s{:=}0\ \mathtt{else} [\dots]]\refby_{\mathrm{sim}} s{:=}0.
\]
However, this should not hold because the left-hand side is a non-terminating program and cannot refine the terminating assignment $s{:=}0$.
A t-simulation is used to compare bundle event structures without looking in detail at the labels of events. It can be seen as a refinement order on the higher-level structure of a concurrent program. Once a sequential behaviour has to be checked, we use the previously defined functional equivalence on event structures with internal probabilities.
\begin{example}\label{ex:t-sim}
Consider a program variable $x$ of type Boolean (with value $0$ or $1$). A t-simulation from $(x{=}1){+}(x{\neq} 1){\cdot} (x{:=}1)$ to $1 {+} (x{:=}0{\sqcap} 1)$~\footnote{$x{:=}0{\sqcap} 1$ is an atomic nondeterministic assignment; it cannot be interfered with.} is given by the dotted arrows in the following diagram:
\begin{displaymath}
\xymatrix{
&\emptyset\ar[dl]\ar[dr]\ar@{.>}[rrrr]& & && \ar[dl]\ar[dr]\emptyset&\\
e_{x{=}1}\ar@{.>}@/_/[rrrr]& & e_{x{\neq}1}\ar[d]\ar@{.>}[urrr] &&e_\delta&&e_{x{:=}0{\sqcap} 1} \\
&&e_{x{\neq}1}e_{x{:=}1}\ar@{.>}[urrrr]&&&&
}
\end{displaymath}
This t-simulation refines two nondeterministic choices, one at the program structure level and the other at the atomic level.
\end{example}
\begin{proposition}
The t-simulation relation $\refby_{\mathrm{sim}}$ is a preorder.
\end{proposition}
\begin{proof}
Reflexivity follows from the identity function, and transitivity is obtained by composing t-simulations, which yields a new t-simulation. Notice that care should be taken with respect to the third property of a t-simulation. If $f{:}\mathcal{T}(\mathcal{E}){\to}\mathcal{T}(\mathcal{F})$, $g{:}\mathcal{T}(\mathcal{F}){\to}\mathcal{T}(\mathcal{G})$ are t-simulations, $\alpha e{\in}\mathcal{T}_{\max}(\mathcal{E})$ and $\lambda(e){\sqsubseteqh}\delta$, then $f(\alpha e) {=} f(\alpha) e'$ for some $e'$ of $\mathcal{F}$ such that $\lambda(e){\sqsubseteqh}\lambda(e')$. If $\lambda(e'){\sqsubseteqh}\delta$, then it is possible that $g(f(\alpha)e') {=} g(f(\alpha))$. However, since $f(\alpha)e'$ is weakly maximal, $g(f(\alpha)e')$ is also weakly maximal and we can find an event $e''{\in} G$ such that $g(f(\alpha))e''$ is weakly maximal and $\lambda(e'){\sqsubseteqh}\lambda(e'')$. We then map $\alpha e$ to $g(f(\alpha))e''$ in the t-simulation from $\mathcal{E}$ to $\mathcal{G}$.
\end{proof}
\begin{proposition}\label{pro:necessary-axioms}
If $\mathcal{E},\mathcal{F},\mathcal{G}$ are ipBES, then
\begin{align}
\mathcal{E} \| \mathcal{F} &\equiv_{\mathrm{sim}} \mathcal{F}\|\mathcal{E},\label{eq:par-comm}\\
\mathcal{E} \| (\mathcal{F} \| \mathcal{G}) &\equiv_{\mathrm{sim}} (\mathcal{E}\|\mathcal{F})\|\mathcal{G},\label{eq:par-assoc}\\
\mathcal{E}{*} \mathcal{F}&\equiv_{\mathrm{sim}}\mathcal{F} {+} \mathcal{E}{\cdot} (\mathcal{E}{*} \mathcal{F}),\label{eq:unfold}\\
\mathcal{E}\refby_{\mathrm{sim}}\mathcal{F} & \Rightarrow\mathcal{G}{+}\mathcal{E}\refby_{\mathrm{sim}}\mathcal{G}{+}\mathcal{F},\label{eq:+-monotony}\\
\mathcal{E}\refby_{\mathrm{sim}}\mathcal{F} & \Rightarrow\mathcal{G}{\cdot}\mathcal{E}\refby_{\mathrm{sim}}\mathcal{G}{\cdot}\mathcal{F},\label{eq:monotony}\\
\mathcal{E}\refby_{\mathrm{sim}}\mathcal{F} & \Rightarrow \mathcal{E}\|\mathcal{G}\refby_{\mathrm{sim}}\mathcal{F}\|\mathcal{G}.\label{eq:congruence}
\end{align}
\end{proposition}
\begin{proofsummary}
The constructions $\mathcal{E}\|\mathcal{F}$ and $\mathcal{F}\|\mathcal{E}$ result in the same event structure and similarly for associativity.
For the implication~(\ref{eq:congruence}), let $f{:}\mathcal{T}(\mathcal{E}){\to}\mathcal{T}(\mathcal{F})$ be a t-simulation. We construct a t-simulation $g{:}\mathcal{T}(\mathcal{E}\|\mathcal{G}){\to}\mathcal{T}(\mathcal{F}\|\mathcal{G})$ inductively. We set $g(\emptyset) {=} \emptyset$. Let $\alpha{\in} \mathcal{T}(\mathcal{E}\|\mathcal{G})$ and $e{\in} E{\cup} G$ such that $\alpha e$ is a trace of $\mathcal{E}\|\mathcal{G}$. We write $\alpha|_E$ for the restriction of $\alpha$ to the events occurring in $\mathcal{E}$. The inductive definition of $g$ is:
\begin{displaymath}
g(\alpha e) = \begin{cases}
g(\alpha)e & \textrm{if } e{\in} G,\\
g(\alpha) & \textrm {if } e{\in} E\textrm{ and } f(\alpha|_Ee) {=} f(\alpha|_E), \\
g(\alpha)e' & \textrm{if } e{\in} E\textrm{ and } f(\alpha|_Ee) {=} f(\alpha|_E)e'
\end{cases}
\end{displaymath}
Since the sets of events of $\mathcal{E}$ and $\mathcal{G}$ are disjoint, the cases in the above definition of $g$ are disjoint. That is, $g$ is indeed a function and it satisfies the second property of a t-simulation. The last property is clear because if $\alpha e$ is maximal in $\mathcal{T}(\mathcal{E}\|\mathcal{G})$, then either $\alpha|_E$ is maximal in $\mathcal{E}$ and $\alpha|_Ge$ is maximal in $\mathcal{T}(\mathcal{G})$, or $\alpha|_Ee$ is maximal in $\mathcal{T}(\mathcal{E})$ and $\alpha|_G$ is maximal in $\mathcal{T}(\mathcal{G})$. In both cases, $g(\alpha e) {=} g(\alpha)e'$ for some $e'{\in} F{\cup} G$ and $g(\alpha e)$ is weakly maximal in $\mathcal{T}(\mathcal{F}\|\mathcal{G})$.
Similarly, the other cases are shown by constructing t-simulations.
\end{proofsummary}
\begin{proof}
The constructions $\mathcal{E}\|\mathcal{F}$ and $\mathcal{F}\|\mathcal{E}$ result in the same event structure, and similarly for associativity.
The unfolding law \Eqn{eq:unfold} is clear because the left- and right-hand side event structures are exactly the same up to renaming of events.
Implication~(\ref{eq:+-monotony}) follows by considering the function $\mathrm{id}_{\mathcal{T}(\mathcal{G})}{\cup} f{:}\mathcal{T}(\mathcal{G}{+}\mathcal{E}){\to}\mathcal{T}(\mathcal{G} {+} \mathcal{F})$. It is indeed a function because the sets of events $G$ and $E$ (resp. $F$) are disjoint. The properties of a t-simulation follow directly because the set of traces $\mathcal{T}(\mathcal{G}{+}\mathcal{E})$ is the disjoint union $\mathcal{T}(\mathcal{G}){\cup}\mathcal{T}(\mathcal{E})$ (similarly for $\mathcal{G}{+}\mathcal{F}$).
For the case of sequential composition~(\ref{eq:monotony}), let $f$ be a t-simulation from $\mathcal{E}$ to $\mathcal{F}$. It is clear that the function $g{:}\mathcal{T}(\mathcal{G}{\cdot}\mathcal{E}){\to}\mathcal{T}(\mathcal{G}{\cdot}\mathcal{F})$ such that $g(\alpha) {=} \alpha|_Gf(\alpha|_E)$ is a t-simulation.
For Implication~(\ref{eq:congruence}), let $f{:}\mathcal{T}(\mathcal{E}){\to}\mathcal{T}(\mathcal{F})$ be a t-simulation. Let us construct a t-simulation $g{:}\mathcal{T}(\mathcal{E}\|\mathcal{G}){\to}\mathcal{T}(\mathcal{F}\|\mathcal{G})$ inductively. We set $g(\emptyset) {=} \emptyset$. Let $\alpha{\in} \mathcal{T}(\mathcal{E}\|\mathcal{G})$ and $e{\in} E{\cup} G$ such that $\alpha e$ is a trace of $\mathcal{E}\|\mathcal{G}$. We write $\alpha|_E$ for the restriction of $\alpha$ to the events occurring in $\mathcal{E}$. The inductive definition of $g$ is:
\begin{displaymath}
g(\alpha e) = \left\lbrace
\begin{array}{cl}
g(\alpha)e & \textrm{if } e{\in} G,\\
g(\alpha) & \textrm {if } e{\in} E\textrm{ and } f(\alpha|_Ee) {=} f(\alpha|_E), \\
g(\alpha)e' & \textrm{if } e{\in} E\textrm{ and } f(\alpha|_Ee) {=} f(\alpha|_E)e'.
\end{array}\right.
\end{displaymath}
Since the sets of events of $\mathcal{E}$ and $\mathcal{G}$ are disjoint, the cases in the above definition of $g$ are disjoint. That is, $g$ is indeed a function and it satisfies the second property of a t-simulation. The last property is clear because if $\alpha e$ is maximal in $\mathcal{T}(\mathcal{E}\|\mathcal{G})$, then either $\alpha|_E$ is maximal in $\mathcal{E}$ and $\alpha|_Ge$ is maximal in $\mathcal{T}(\mathcal{G})$, or $\alpha|_Ee$ is maximal in $\mathcal{T}(\mathcal{E})$ and $\alpha|_G$ is maximal in $\mathcal{T}(\mathcal{G})$. In both cases, $g(\alpha e) {=} g(\alpha)e'$ for some $e'{\in} F{\cup} G$ and $g(\alpha e)$ is weakly maximal in $\mathcal{T}(\mathcal{F}\|\mathcal{G})$.
\end{proof}
We now state the main result of this section, which is the backbone of our probabilistic rely-guarantee calculus.
\begin{theorem}\label{thm:trace-imply-distribution}
Let $\mathcal{E}$ and $\mathcal{F}$ be feasible and terminating ipBES. Then $\mathcal{E}\refby_{\mathrm{sim}}\mathcal{F}$ implies $\mathcal{E}\sqsubseteq\mathcal{F}$.
\end{theorem}
\begin{proofsummary}
The proof amounts to showing that, given an initial state $s{\in}\Omega$ and a scheduler $\sigma$ of $\mathcal{E}$, there exists a scheduler $\sigma'$ of $\mathcal{F}$ that generates exactly the same distribution as $\sigma$ from the state $s$. The scheduler $\sigma'$ is constructed inductively from the t-simulation from $\mathcal{E}$ to $\mathcal{F}$.
\end{proofsummary}
\begin{proof}
Let $f$ be a t-simulation from $\mathcal{E}$ to $\mathcal{F}$, let $s{\in}\Omega$ be the initial state, let $\sigma{\in}\mathbf{Sched}_1(\mathcal{E})$ and let $\varphi$ be the complete run of $\sigma$ on $\mathcal{E}$ from $s$. We have to generate a scheduler $\tau{\in}\mathbf{Sched}_1(\mathcal{F})$ such that the measures $\sigma_s(\mathcal{E})$ and $\tau_s(\mathcal{F})$ are equal, i.e. they produce the same value for every state $u{\in}\Omega$.
For every $\beta{\in}\mathcal{T}(\mathcal{F})$, we define $f^{-1}_{\min}(\beta)$ to be the set of minimal traces in $f^{-1}(\beta)$, that is,
\[
f_{\min}^{-1}(\beta) = \{\alpha{\in} f^{-1}(\beta)\ |\ \forall e{\in} E:\ \alpha {=} \alpha' e\Rightarrow \alpha' {\notin} f^{-1}(\beta)\}.
\]
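Under an illustrative encoding of our own (traces as tuples of event names, the t-simulation $f$ as a dict), the minimal preimage can be computed as:

```python
# Sketch of f^{-1} and f^{-1}_min on finite trace sets; encoding ours.

def preimage(f, beta):
    """f^{-1}(beta): all traces of E mapped to beta."""
    return {alpha for alpha, b in f.items() if b == beta}

def min_preimage(f, beta):
    """f^{-1}_min(beta): traces of f^{-1}(beta) whose immediate prefix
    is not already in f^{-1}(beta)."""
    pre = preimage(f, beta)
    return {alpha for alpha in pre if not (alpha and alpha[:-1] in pre)}

# Toy map in which the skip-like event 'd' is dropped by f.
f = {(): (), ('d',): (), ('d', 'a'): ('a',)}
```

On this toy map, $f^{-1}(\emptyset)$ contains both the empty trace and $(\texttt{'d'},)$, but only the empty trace is minimal.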
We now construct the scheduler $\tau$. Let $\beta{\in}\mathcal{T}(\mathcal{F})$. We consider two cases:
\begin{itemize}
\item If $f^{-1}(\beta) = \emptyset$ then we set $\tau(\beta)(e,t) = 0{\in}\mathbb{J}ip\Omega$, except for some particular maximal traces that are handled in $(\dagger)$ below.
\item Otherwise, given a state $t{\in}\Omega$, we define a normalisation factor
\[
C_{\beta,t} = \sum_{\alpha{\in} f_{\min}^{-1}(\beta)}{\varphi}(\alpha)(t),
\]
and we set~\footnote{Notice that if $C_{\beta,t} = 0$ for some $t{\in}\Omega$, then ${\varphi}(\alpha)(t) = 0$ for every $\alpha{\in} f_{\min}^{-1}(\beta)$. In other words, none of these $\alpha$ will be scheduled at all. Hence, $\beta$ need not be scheduled either.}
\[
\tau(\beta)(e,t) = \frac{1}{C_{\beta,t}}\left(\sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\varphi(\alpha)(t)\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\prod_{i=1}^{k}w_{i-1}(e_i,t)\mu_k\right)
\]
where $w_{i-1}(e_i,t)$ is the weight function such that $\sigma(\alpha e_1\cdots e_{i-1})(e_i,t) = w_{i-1}(e_i,t)\mu$ with $\mu{\in}\lambda(e_{i})(t)$ (if $\lambda(e_{i})(t)$ is empty then $w_{i-1}(e_i,t) = 0$). The distribution $\mu_k$ is chosen by $\sigma$ from $\lambda(e_k)(t)$, when scheduling $e_k$.
\end{itemize}
Firstly, we show that $\tau$ is indeed a scheduler on $\mathcal{F}$. Property~(\ref{pr:sched-dom}) of Definition~\ref{def:ipscheduler} is clear. Let us show the other properties. Let $\beta e{\in}\mathcal{T}(\mathcal{F})$ and
let $W{:}E{\times}\Omega{\to}{\mathbb R}$ be the weight function such that
\[
W(e,t) = \frac{1}{C_{\beta,t}}\sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\varphi(\alpha)(t)\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\prod_{i=1}^{k}w_{i-1}(e_i,t).
\]
Indeed, $\mu {=} \frac{\tau(\beta)(e,t)}{W(e,t)}$ is in $\lambda(e)(t)$~\footnote{The case $W(e,t) = 0$ can be adapted easily because the numerator in the definition of $\tau(\beta)(e)$ is also $0$. For instance, we can assume that $\frac{0}{0} = 1$.} because $\lambda(e)(t)$ is convex and, for each $\alpha e_1\cdots e_k{\in}f_{\min}^{-1}(\beta e)$, $\mu_k{\in}\lambda(e_k)(t){\subseteq}\lambda(e)(t)$.
Hence $\tau(\beta)(e,t) = W(e,t)\mu$ with $\mu{\in}\lambda(e)(t)$, and $\tau$ satisfies Property~(\ref{pr:sched-choice}) of Definition~\ref{def:ipscheduler}. As for Property~(\ref{pr:sched-prob}), let us compute the quantity
\[
V(t) = \sum_{e\,:\,(e,t){\in}\mathrm{dom}(\tau(\beta))}W(e,t),
\]
for a fixed $t{\in}\Omega$. Let us write $\mathrm{dom}(\beta) = \{e \ | \ \beta e{\in}\mathcal{T}(\mathcal{F}) \}$.
\begin{eqnarray}
V(t) & = & \sum_{e\,:\,(e,t){\in}\mathrm{dom}(\tau(\beta))}\frac{1}{C_{\beta,t}}\sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\varphi(\alpha)(t)\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\prod_{i=1}^{k}w_{i-1}(e_i,t)\nonumber\\
& = & \frac{1}{C_{\beta,t}}\sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\varphi(\alpha)(t)\sum_{e\,:\,(e,t){\in}\mathrm{dom}(\tau(\beta))}\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\prod_{i=1}^{k}w_{i-1}(e_i,t)\nonumber\\
& = & \frac{1}{C_{\beta,t}}\sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\varphi(\alpha)(t)\sum_{\alpha e_1\cdots e_k{\in} {\cup}_{e{\in}\mathrm{dom}(\beta)}f_{\min}^{-1}(\beta e)}\prod_{i=1}^{k}w_{i-1}(e_i,t).\nonumber
\end{eqnarray}
From the second to the third expression, the two rightmost sums were merged into a single one because $f^{-1}_{\min}(\beta e){\cap} f^{-1}_{\min}(\beta e') = \emptyset$ whenever $e{\neq} e'$ ($f$ is a function). It follows from Property~(\ref{pr:sched-prob}), applied to the weights $w_{i-1}(e_i,t)$ of $\sigma$, that
\[
\sum_{\alpha e_1\cdots e_k{\in} {\cup}_{e{\in}\mathrm{dom}(\beta)}f_{\min}^{-1}(\beta e)}\prod_{i=1}^{k}w_{i-1}(e_i,t) = 1
\]
and hence $V(t) = 1$ (cf. Figure~\ref{fig:6-concrete-sched} for a concrete example). The last Property~(\ref{pr:sched-consistent}) of Definition~\ref{def:ipscheduler} is clear: if $\lambda(e)(t) = \emptyset$, then the coefficient of $\sigma(\alpha e_1\cdots e_{k-1})(e_k,t)$ is $0$ because $\lambda(e_k)(t) = \emptyset$. Hence, the product is also $0$.
\begin{figure}
\caption{An example showing that $V(t) = 1$.}
\label{fig:6-concrete-sched}
\end{figure}
Secondly, let $\psi$ be the complete run of $\mathcal{F}$ with respect to $\tau$. We now show by induction on $\beta$ that
\begin{equation}\label{eq:subgoal}
\psi(\beta)(t) = \sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\varphi(\alpha)(t) = C_{\beta,t},
\end{equation}
where the empty sum evaluates to zero. The base case is clear because $\psi(\emptyset) {=} \delta_{s} {=} \varphi(\emptyset)$ where $s$ is the initial state. Let us assume the above identity for $\beta{\in}\mathcal{T}(\mathcal{F})$ and let $e{\in} F$ be such that $\beta e {\in} \mathcal{T}(\mathcal{F})$ and $f_{\min}^{-1}(\beta e){\neq}\emptyset$. By definition of $\psi$, if $u{\in}\Omega$, we have:
\begin{footnotesize}
\begin{align*}
\psi(\beta e)(u) &= \sum_{t{\in}\Omega} \frac{1}{C_{\beta,t}}\sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\varphi(\alpha)(t)\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\prod_{i=1}^{k}w_{i-1}(e_i,t)\mu_k(u)\psi(\beta)(t)\\
& = \sum_{t{\in}\Omega}\sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\prod_{i=1}^{k}w_{i-1}(e_i,t)\mu_k(u)\varphi(\alpha)(t) \\
& = \sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\sum_{t{\in} \Omega}\prod_{i=1}^{k}w_{i-1}(e_i,t)\mu_k(u)\varphi(\alpha)(t)\\
& = \sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\sum_{t{\in} \Omega} \sum_{t'{\in}\Omega} w_0(e_1,t')\delta_{t'}(t)\left[\prod_{i=2}^{k}w_{i-1}(e_i,t)\mu_k(u)\right]\varphi(\alpha)(t')\\
& =\sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\sum_{t{\in} \Omega}\prod_{i=2}^{k}w_{i-1}(e_i,t)\mu_k(u)\varphi(\alpha e_1)(t)\\
& = \sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\sum_{t{\in} \Omega} \sum_{t'{\in}\Omega} w_1(e_2,t')\delta_{t'}(t)\left[\prod_{i=3}^{k}w_{i-1}(e_i,t)\mu_k(u)\right]\varphi(\alpha e_1)(t')\\
& = \cdots .
\end{align*}
\end{footnotesize}
Continuing the above reasoning for every $e_i$ with $i\leq k-1$ (by induction), we obtain
\begin{align*}
\psi(\beta e)(u) & = \sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\sum_{t{\in} \Omega}w_{k-1}(e_k,t)\mu_k(u)\varphi(\alpha e_1\cdots e_{k-1})(t)\nonumber\\
& = \sum_{\alpha{\in} f_{\min}^{-1}(\beta)}\sum_{\alpha e_1\cdots e_k{\in} f_{\min}^{-1}(\beta e)}\varphi(\alpha e_1\cdots e_k)(u).\nonumber
\end{align*}
Hence,
\[
\psi(\beta e)(u) = \sum_{\alpha'{\in} f_{\min}^{-1}(\beta e)}\varphi(\alpha')(u).
\]
$(\dagger)$ We finally compute the sum $\tau_s(\mathcal{F}) = \sum_{\beta{\in}\mathcal{T}_{\max}(\mathcal{F})}\psi(\beta)$. First, notice that $\tau$ may not schedule some traces of $\mathcal{F}$. In particular, the third property in the definition of simulation implies that a maximal element of $\mathcal{T}(\mathcal{E})$ may be mapped to a weakly maximal element of $\mathcal{T}(\mathcal{F})$. Hence, we need to extend the scheduler $\tau$ so that it is non-zero for exactly one maximal extension of that weakly maximal trace. More precisely, if $\beta' = f(\alpha)$ is weakly maximal for some maximal trace $\alpha{\in}\mathcal{T}_{\max}(\mathcal{E})$, then there exists a sequence $e_1,\dots,e_n$ such that $\beta = \beta' e_1\cdots e_n{\in}\mathcal{T}_{\max}(\mathcal{F})$ and $\delta\sqsubseteqh\lambda(e_i)$. We extend $\tau$ so that $\tau(\beta' e_1\cdots e_i)(e_{i{+}1},t) = \delta_t$, which implies that $\psi(\beta)(t) = \psi(\beta')(t)$. The other case is that $\beta$ is maximal and belongs to the image of $f$. In both cases, we have
\[
\psi(\beta)(t) = \sum_{\alpha{\in} A_\beta}\varphi(\alpha)(t),
\]
where $A_\beta = f^{-1}_{\min}(\beta)$ if $\beta$ is in the image of $f$, $A_\beta = f^{-1}_{\min}(\beta')$ if there is a $\beta'$ as above, and $A_\beta = \emptyset$ otherwise. Thus, $A_\beta$ contains maximal traces only (if it is not empty). Since $f$ is a total function, the set $\{A_\beta\ |\ \beta{\in}\mathcal{T}_{\max}(\mathcal{F})\}$ is a partition of $\mathcal{T}_{\max}(\mathcal{E})$ and we have
\[
\sum_{\beta{\in}\mathcal{T}_{\max}(\mathcal{F})}\psi(\beta)(t) = \sum_{\beta{\in}\mathcal{T}_{\max}(\mathcal{F})}\sum_{\alpha{\in} A_\beta}\varphi(\alpha)(t) = \sum_{\alpha{\in}\mathcal{T}_{\max}(\mathcal{E})}\varphi(\alpha)(t),
\]
i.e. we obtain $\tau_s(\mathcal{F}) = \sigma_s(\mathcal{E})$.
\end{proof}
\begin{example}
Reconsider the t-simulation of Example~\ref{ex:t-sim}. By definition, the unique scheduler $\sigma$ on $(x{=}1) {+} (x{\neq}1){\cdot} (x{:=}1)$ satisfies:
\begin{itemize}
\item[-] $\sigma(\emptyset)(e_{x{=}t},s) = w(e_{x{=}t},s)\delta_t$ where
$w(e_{x{=}t},s) = \begin{cases}
1 & \textrm{if } s {=} t, \\
0 & \textrm{otherwise.}
\end{cases}$
\item[-] $\sigma(e_{x{=}0})(e_{x{:=}1},s) = \delta_1$, for $s{\in}\{0,1\}$.
\end{itemize}
The corresponding scheduler $\tau$ on $1{+}(x{:=}0{\sqcap}1)$, constructed (as per the proof of \Thm{thm:trace-imply-distribution}) from $\sigma$ using the illustrated t-simulation, satisfies:
\begin{itemize}
\item[-] $\tau(\emptyset)(e_\delta,1) = w(e_{x{=}1},1)\delta_1 = \delta_1$ and $\tau(\emptyset)(e_\delta,0) = 0$,
\item[-] $\tau(\emptyset)(e_{x{:=}0{\sqcap}1},1)=0$ and $\tau(\emptyset)(e_{x{:=}0{\sqcap}1},0) = w(e_{x{=}0},0)\delta_1 = \delta_1$.
\end{itemize}
Since $(x{=}1) {+} (x{\neq}1){\cdot} (x{:=}1)$ is sequentially equivalent to $x{:=}1$, we can see that the scheduler $\tau$ on $1{+}(x{:=}0{\sqcap}1)$ forces the final value of $x$ to be $1$ by resolving $(+)$ and $(\sqcap)$ as they were resolved in the program $(x{=}1) {+} (x{\neq}1){\cdot} (x{:=}1)$.
\end{example}
We now show that the binary Kleene star is preserved by the semantics map.
\begin{proposition}\label{pro:*-homomorphism}
For all non-zero, feasible and terminating event structures $\mathcal{E}$ and $\mathcal{F}$, we have $\sem{\mathcal{E}{*}\mathcal{F}} = \sem{\mathcal{E}}{*}\sem{\mathcal{F}}$.
\end{proposition}
\begin{proofsummary}
Note that $\sem{\mathcal{E}}{*}\sem{\mathcal{F}}$ is the least fixed point of the function $f(X) = \sem{\mathcal{F}}{+} \sem{\mathcal{E}}{\cdot} X$ in $\mathbb{H}_1\Omega$\footnote{Notice that the least fixed point is in $\mathbb{H}_1\Omega$ but not $\overline{\mathbb{H}}_1\Omega$, because $\sem{\mathcal{E}}$ and $\sem{\mathcal{F}}$ are elements of $\mathbb{H}_1\Omega$ by feasibility and termination.}, and $\mathcal{E}{*}\mathcal{F}$ satisfies $\mathcal{F} {+} \mathcal{E}{\cdot}(\mathcal{E}{*}\mathcal{F}) \equiv_{\textrm{sim}} \mathcal{E}{*}\mathcal{F}$ by construction of the sequence of bundle event structures defining $\mathcal{E}{*}\mathcal{F}$. Therefore, \Thm{thm:trace-imply-distribution} and \Prop{pro:homomorphism} imply that $\sem{\mathcal{E}}{*}\sem{\mathcal{F}}\sqsubseteqh \sem{\mathcal{E}{*}\mathcal{F}}$.
The converse refinement $\ \sem{\mathcal{E}{*}\mathcal{F}}\sqsubseteqh\sem{\mathcal{E}}{*}\sem{\mathcal{F}}$ holds because every scheduler of $\mathcal{E}{*}\mathcal{F}$ is the ``limit" of a sequence of schedulers used in the construction of $\sem{\mathcal{E}}{*}\sem{\mathcal{F}}$.
\end{proofsummary}
\begin{proof}
Since $\sem{\mathcal{E}}{*}\sem{\mathcal{F}}$ is the least fixed point of $f(X) = \sem{\mathcal{F}}{+} \sem{\mathcal{E}}{\cdot} X$ in $\mathbb{H}_1\Omega$\footnote{Notice that the least fixed point is in $\mathbb{H}_1\Omega$ but not $\overline{\mathbb{H}}_1\Omega$, because $\sem{\mathcal{E}}$ and $\sem{\mathcal{F}}$ are elements of $\mathbb{H}_1\Omega$ by feasibility and termination.}, and $\mathcal{E}{*}\mathcal{F}$ satisfies
\[
\mathcal{F} {+} \mathcal{E}{\cdot}(\mathcal{E}{*}\mathcal{F}) \equiv_{\textrm{sim}} \mathcal{E}{*}\mathcal{F}
\]
by construction of the sequence of bundle event structures defining $\mathcal{E}{*}\mathcal{F}$. Therefore, \Thm{thm:trace-imply-distribution} and \Prop{pro:homomorphism} imply that $\sem{\mathcal{E}}{*}\sem{\mathcal{F}}\sqsubseteqh \sem{\mathcal{E}{*}\mathcal{F}}$.
Conversely, let $\mu{\in} \sem{\mathcal{E}{*}\mathcal{F}}(s)$ for some initial state $s{\in}\Omega$. As in the case of \Prop{pro:homomorphism}, we assume that $\mu$ is computed from a scheduler $\sigma$ on $\mathcal{E}{*}\mathcal{F}$. We construct a sequence of schedulers $\sigma_n$ that ``converges" to $\sigma$ as follows. We set $\sigma_0$ to be any element of $\mathbf{Sched}_1(\mathcal{F})$, and $\sigma_1(\alpha) {=} \sigma(\alpha)$ if $\alpha$ is a trace of $\mathcal{F}$ or $\mathcal{E}$; otherwise, we set $\sigma_1(\alpha'\alpha'') {=} \sigma_0(\alpha'')$ where $\alpha'{\in}\mathcal{T}_{\max}(\mathcal{E})$ (notice that $\sigma_0$ is applied to a different copy of $\mathcal{F}$, but this is not important as event names can be abstracted). Inductively, we define
\begin{displaymath}
\sigma_{n}(\alpha) =\begin{cases}
\sigma(\alpha)&\textrm{if } \alpha{\in} \mathcal{T}(\underbrace{\mathcal{F} {+} \mathcal{E}{\cdot}(\dots \mathcal{E}{\cdot}(\mathcal{F} {+} \mathcal{E}{\cdot}\mathcal{F}))}_{n \textrm{ occurrences of }\mathcal{E}}), \\
\sigma_{0}(\alpha|_F)&\textrm{otherwise.}
\end{cases}
\end{displaymath}
Again, $\sigma_0$ is applied to the $(n{+}1)^{\textrm{st}}$ copy of $\mathcal{F}$. Indeed, we have
\[
\sigma_n{\in}\mathbf{Sched}_1(\underbrace{\mathcal{F} {+} \mathcal{E}{\cdot}(\cdots\mathcal{E}{\cdot}(\mathcal{F} {+} \mathcal{E}{\cdot}\mathcal{F}))}_{n\textrm{ occurrences of } \mathcal{E}})
\]
by construction. On the one hand, the sequence of distributions $\sigma_{n,s}(\mathcal{E}{*}\mathcal{F})$ forms a subset of $\sem{\mathcal{E}}{*}\sem{\mathcal{F}}(s)$. On the other hand, let $u{\in}\Omega$ and let us denote
\[
\mathcal{T}_{\leq n} = \mathcal{T}(\underbrace{\mathcal{F} {+} \mathcal{E}{\cdot}(\dots \mathcal{E}{\cdot}(\mathcal{F} {+} \mathcal{E}{\cdot}\mathcal{F}))}_{n \textrm{ occurrences of }\mathcal{E}}).
\]
If we denote by $\varphi_n$ the complete run of $\sigma_n$ on $\mathcal{E}{*}\mathcal{F}$, then we have
\begin{eqnarray*}
\left|\sigma_s(\mathcal{E}{*}\mathcal{F})(u) - \sigma_{n,s}(\mathcal{E}{*}\mathcal{F})(u)\right| & = & \left|\sum_{\alpha{\in}\mathcal{T}_{\max}(\mathcal{E}{*}\mathcal{F})}\varphi(\alpha)(u){-}\sum_{\alpha{\in}\mathcal{T}_{\leq n}{\cap}\mathcal{T}_{\max}(\mathcal{E}{*}\mathcal{F})}\varphi_n(\alpha)(u)\right|\\
& = & \left|\sum_{\alpha{\in}\mathcal{T}_{\max}(\mathcal{E}{*}\mathcal{F}){\setminus}\mathcal{T}_{\leq n{-}1}}\left(\varphi(\alpha)(u) {-} \varphi_n(\alpha)(u)\right)\right|\\
&\leq& \sum_{\alpha{\in}\mathcal{T}_{\max}(\mathcal{E}{*}\mathcal{F}){\setminus}\mathcal{T}_{\leq n{-}1}}\left|\varphi(\alpha)(u) {-} \varphi_n(\alpha)(u)\right|.\\
\end{eqnarray*}
The set $\mathcal{T}_{\max}(\mathcal{E}{*}\mathcal{F}){\setminus}\mathcal{T}_{\leq n{-}1}$ shrinks as $n$ increases, because every finite trace of $\mathcal{E}{*}\mathcal{F}$ belongs to some set $\mathcal{T}_{\leq k}$. Therefore, the last sum above decreases to $0$. Hence, since $\Omega$ is a finite set, the sequence $\sigma_{n,s}(\mathcal{E}{*}\mathcal{F})$ converges (pointwise) to $\sigma_s(\mathcal{E}{*}\mathcal{F})$ in $\mathbb{D}\Omega$. Since $\sem{\mathcal{E}}{*}\sem{\mathcal{F}}(s)$ is topologically closed, we deduce that $\sigma_s(\mathcal{E}{*}\mathcal{F}){\in}\sem{\mathcal{E}}{*}\sem{\mathcal{F}}(s)$. Therefore, $\sem{\mathcal{E}{*}\mathcal{F}}\sqsubseteqh\sem{\mathcal{E}}{*}\sem{\mathcal{F}}$.
\end{proof}
\begin{proposition}\label{pro:get-rid-of-par}
Let $r,r'{\in}\mathbb{H}_1\Omega$ be two atomic programs and let $\mathcal{E},\mathcal{F}$ be two bundle event structures with internal probability, then
\begin{align}
r^{*}\|r^{*}&\refby_{\mathrm{sim}} r^{*},\label{eq:star-trans}\\
r^{*}\|r'&\refby_{\mathrm{sim}} r{*}(r'{\cdot} r^{*}),\label{eq:par-atom}\\
r^{*}\|(b{\cdot} \mathcal{E} {+} c{\cdot}\mathcal{F})& \refby_{\mathrm{sim}} r{*}(b{\cdot} (r^{*}\|\mathcal{E})
{+}c{\cdot} (r^{*}\|\mathcal{F})),\label{eq:par-nondet}\\
r^{*}\|(r'{\cdot}\mathcal{E})&\refby_{\mathrm{sim}} r{*}(r'{\cdot} (r^{*}\|\mathcal{E})), \label{eq:par-seq}
\end{align}
where $r^{*} = r{*} 1$.
\end{proposition}
\begin{proofsummary}
These inequations are again verified by explicitly constructing a t-simulation from the left-hand side to the right-hand side of the inequality. The following reasoning illustrates such a construction for \Eqn{eq:star-trans}.
Denote by $e_1$ and $e_2$ (resp.\ $e$) the events labelled by $\delta$ in the event structure associated to $r^{*}\|r^{*}$ (resp.\ $r^{*}$). Given a trace $\alpha$ of $r^{*}\|r^{*}$ that does not contain any of the $e_i$s, we write $\alpha'$ for the unique trace corresponding to $\alpha$ in $r^{*}$ (i.e.\ with the same number of events labelled by $r$). A t-simulation from $r^{*}\|r^{*}$ to $r^{*}$ is obtained by considering a function $f$ such that
\begin{displaymath}
f(\alpha) = \begin{cases}
(\alpha{\setminus}\{e_1,e_2\})' & \textrm{if } e_1{\notin}\alpha\textrm{ or }e_2{\notin}\alpha,\\
(\alpha{\setminus}\{e_1,e_2\})'e & \textrm{if } e_1,e_2{\in}\alpha.
\end{cases}
\end{displaymath}
\end{proofsummary}
\begin{proof}
Let us denote by $e_1$ and $e_2$ (resp.\ $e$) the events that are labelled by $\delta$ in the event structure associated to $r^{*}\|r^{*}$ (resp.\ $r^{*}$). Given a trace $\alpha$ of $r^{*}\|r^{*}$ that does not contain any of the $e_i$s, we denote by $\alpha'$ the unique trace corresponding to $\alpha$ in $r^{*}$ (i.e.\ with the same number of events labelled by $r$).
A t-simulation from $r^{*}\|r^{*}$ to $r^{*}$ is obtained by considering a function $f$ such that
\begin{displaymath}
f(\alpha) = \left\lbrace
\begin{array}{cl}
(\alpha{\setminus}\{e_1,e_2\})' & \textrm{if } e_1{\notin}\alpha \textrm{ or } e_2{\notin}\alpha,\\
(\alpha{\setminus}\{e_1,e_2\})'e & \textrm{if } e_1,e_2{\in}\alpha.
\end{array}\right.
\end{displaymath}
The t-simulation~(\ref{eq:par-atom}) is constructed as follows. Let us abstract from the event names, i.e.\ $r^k$ denotes a trace in which each $r$ labels a unique event. Every trace of $r^{*}\|r'$ is a prefix of $r^mr'r^{n}\delta$ or $r^m\delta r'$, for some $m,n\geq 0$. Every prefix of either trace corresponds to a unique trace of $r{*} (r'{\cdot} r^{*})$. For instance, the maximal trace $r^m\delta r'$ is associated to the weakly maximal trace $r^mr'$ of $r{*} (r'{\cdot} r^{*})$. Figure~\ref{fig:messy-simulation} shows an explicit construction of the t-simulation.
\begin{figure}
\caption{The t-simulation from $r^{*}\|r'$ to $r{*}(r'{\cdot} r^{*})$.}
\label{fig:messy-simulation}
\end{figure}
The Simulation~(\ref{eq:par-nondet}) is similar. Every trace of $r^{*}\|(b{\cdot}\mathcal{E}{+}c{\cdot}\mathcal{F})$ is a prefix of $r^mb\alpha$ or $r^mc\beta$ or $r^m\delta b\gamma$ or $r^m\delta c\zeta$, where $\alpha{\in}\mathcal{T}(r^{*}\|\mathcal{E})$, $\beta{\in}\mathcal{T}(r^{*}\|\mathcal{F})$, $\gamma{\in}\mathcal{T}(\mathcal{E})$, $\zeta{\in}\mathcal{T}(\mathcal{F})$ and $m\geq0$. Again, prefixes of the first two traces correspond to a unique trace of $r{*}(b{\cdot} (r^{*}\|\mathcal{E}) {+} c{\cdot}(r^{*}\|\mathcal{F}))$. The maximal trace $r^m\delta b\gamma$ is again mapped to the weakly maximal trace $r^mb\gamma$, and similarly for the fourth case. This indeed results in a t-simulation.
The Simulation~(\ref{eq:par-seq}) is constructed as follows. Every trace of $r^{*}\|(r'{\cdot}\mathcal{E})$ is a prefix of $r^mr'\alpha$ or $r^m\delta r'\beta$ for some trace $\alpha{\in}\mathcal{T}(r^{*}\|\mathcal{E})$ and $\beta{\in}\mathcal{T}(\mathcal{E})$. We continue as in the previous case.
\end{proof}
\Prop{pro:get-rid-of-par} is used mainly to interleave the rely condition $r^{*}$ systematically with the internal structure of $\mathcal{E}$, while preserving the simulation order. More precisely, these equations are applied to generate algebraic proofs for the reduction of one expression into another, where the occurrence of $\|$ is pushed deeper into the sub-expressions (and possibly removed).
\section{Probabilistic rely-guarantee conditions}\label{sec:prgc}
Our first task towards the extension of the rely-guarantee method to probabilistic systems is to provide a suitable definition of a rely condition that contains sufficient quantitative information about the environment and the components of a system.
From a relational point of view, as in Jones' thesis~\cite{Jon81}, a guarantee condition expresses a constraint between a state and its successor by running the relation as a nondeterministic program. Therefore, it is important to know whether some action is executed atomically or whether it is split into smaller components.
For instance, when run in the same environment, a probabilistic choice between $x{:=}x{+}1$ and $x{:=}x{-}1$ produced from an \texttt{if\dots then\dots else} clause may behave differently from an atomic probabilistic assignment that assigns $x{+}1$ and $x{-}1$ to $x$ with the exact same probability.
Without probability, a common example of a guarantee condition for a given program is the reflexive transitive closure with respect to $(\|)$ of the union of all atomic actions in that program~\cite{Hoa09a}, which completely captures all possible ``effects" of the program. Such a closure property plays a crucial role in the algebraic proof of Rule~\ref{rule:rg-standard} and is achieved through \Prop{pro:get-rid-of-par}. This construction was introduced by Jones~\cite{Jon81} and later refined by others~\cite{Din02,Jon12,Hoa09a}.
Non-probabilistic rely-guarantee conditions usually take the form $\rho^{*}$ for some binary relation $\rho$, defined on the state space of the studied program. The transitive closure of $\rho$ with respect to the relational composition $(\cdot)$ is usually a desirable property. To obtain a probabilistic guarantee condition from a relation $\rho{\subseteq}\Omega{\times}\Omega$, we construct a probabilistic program $r{\in} \mathbb{H}_1\Omega$ such that
\[
r(s) = \{\mu{\in}\mathbb{D} \Omega\ |\ \mu(\{s'\ |\ (s,s'){\notin} \rho\}) = 0\}.
\]
Equivalently, $r$ is the convex closure of $\rho$. The following proposition then follows from that construction.
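On a finite state space this membership condition is easy to check mechanically. The following sketch (ours, not part of the development; the helper name \texttt{in\_convex\_closure} is hypothetical) encodes a distribution as a dictionary and tests the support condition defining $r$:

```python
# Sketch: a distribution is a dict from states to probabilities,
# and a relation rho is a set of state pairs.

def in_convex_closure(mu, s, rho):
    """mu is in r(s) iff mu puts no mass outside rho(s),
    i.e. mu({s' | (s, s') not in rho}) = 0."""
    return all((s, t) in rho for t, p in mu.items() if p > 0)

rho = {(0, 0), (0, 1), (1, 1)}                       # a relation on {0, 1}
assert in_convex_closure({0: 0.3, 1: 0.7}, 0, rho)   # support inside rho(0)
assert not in_convex_closure({0: 1.0}, 1, rho)       # (1, 0) is not in rho
```

Any convex combination of distributions supported in $\rho(s)$ is again supported in $\rho(s)$, so this support test characterises membership in the convex closure.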
\begin{proposition}\label{pro:transitive-convex-closure}
If a relation $\rho{\subseteq}\Omega{\times}\Omega$ is transitive, then the convex closure $r$ of $\rho$ satisfies $r{\cdot}(r{+}\delta)\sqsubseteqh r$.
\end{proposition}
\begin{proofsummary}
It follows immediately from the transitivity of $\rho$ and the definition of $r$.
\end{proofsummary}
\begin{proof}
Let $\rho$ be a transitive relation, $r$ its associated probabilistic program, $s{\in}\Omega$ a state and $\mu{\in} [r{\cdot}(r{+}\delta)](s)$. We need to show that $\mu{\in} r(s)$. By definition of the sequential composition $(\cdot)$ (\Eqn{eq:6-sequential-H}), there exist $\nu{\in} r(s)$ and a deterministic program $f\sqsubseteqh (r{+}\delta)$ such that $\mu = f{\ast}\nu$. Let $u{\in}\Omega$ be such that $(s,u){\notin}\rho$; we are going to show that $\mu(u) = 0$. We have:
\begin{align*}
\mu(u) = \sum_{t{\in}\Omega} f(t)(u)\nu(t) = \sum_{t{\in}\Omega\wedge (s,t){\in}\rho} f(t)(u)\nu(t) = \sum_{t{\in}\Omega\wedge (s,t){\in}\rho\wedge (t,u){\in}\rho}f(t)(u)\nu(t).
\end{align*}
The second equality follows from $\nu(t) = 0$ for every $(s,t){\notin}\rho$. Similarly, the last equality follows from $f(t)(u) = 0$ for $(t,u){\notin}\rho$. By transitivity of $\rho$, every remaining term satisfies $(s,u){\in}\rho$, so the last sum is empty because $(s,u){\notin}\rho$. Therefore, $\mu(u) = 0$ for every $(s,u){\notin}\rho$, that is, $\mu{\in} r(s)$.
\end{proof}
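The argument can be replayed numerically on a small finite state space. The following sketch (our own; the helper \texttt{convolve} and the particular $\rho$, $f$, $\nu$ are illustrative choices) checks that the convolution of a program below $r{+}\delta$ with a distribution in $r(s)$ stays supported inside $\rho(s)$ when $\rho$ is transitive:

```python
from fractions import Fraction

Omega = [0, 1, 2]
rho = {(a, b) for a in Omega for b in Omega if a <= b}   # transitive relation

def convolve(f, nu):
    """(f * nu)(u) = sum_t f(t)(u) * nu(t), as in the proof."""
    mu = {u: Fraction(0) for u in Omega}
    for t, p in nu.items():
        for u, q in f(t).items():
            mu[u] += p * q
    return mu

def f(t):
    """A deterministic program below r + delta: stay put with probability
    1/3, otherwise move one step upwards (stay if already at the top)."""
    if t == 2:
        return {2: Fraction(1)}
    return {t: Fraction(1, 3), t + 1: Fraction(2, 3)}

s = 1
nu = {1: Fraction(1, 2), 2: Fraction(1, 2)}   # nu in r(1): support in rho(1)
mu = convolve(f, nu)
# transitivity of rho forces the support of mu to stay inside rho(s)
assert all((s, u) in rho for u, p in mu.items() if p > 0)
```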
The convex closure of a relation $\rho$, given in \Prop{pro:transitive-convex-closure}, sometimes provides a very general rely condition that is too weak to be useful in the probabilistic case. In practice, a probabilistic assignment is considered atomic and the correctness of many protocols relies on that crucial assumption. Hence the random choice and the writing of the chosen value into a program variable $x$ are assumed to happen instantaneously, and no other program can modify $x$ during or in between these two operations. Thus, probabilistic rely and guarantee conditions need to capture the probabilistic information in such an assignment.
\begin{example} Let $x$ be an (integer) program variable with values bounded by $0$ and $n$. Let us write $x{:=}\mathtt{uniform}(0,n)$ for the program that assigns a uniformly random integer between $0$ and $n$ to the variable $x$. A probabilistic guarantee condition for that assignment is obtained from the probabilistic program $r$ that satisfies, for every integer $s\in[0,n]$,
\begin{equation}\label{eq:example-rely}
r(s) = \left\lbrace\mu\ \left| \ \mu(\{0,n\}) \geq\frac{1}{n{+}1}\right.\right\rbrace. \end{equation}
The condition $r$ specifies the convex set of all probabilistic deterministic programs whose atomic actions establish a state in $\{0,n\}$ with probability at least $\frac{1}{n{+}1}$. In particular, $r$ is an overspecification of $x{:=}\mathtt{uniform}(0,x)$, where the right-hand occurrence of $x$ is evaluated to the initial value of $x$. Since $r$ is transitive, it can prove useful to deduce quantitative properties of $(x{:=}\mathtt{uniform}(0,x))^{*}$. \end{example}
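As a quick sanity check (our own sketch, not part of the development), the uniform assignment indeed meets the bound required by the condition $r$: the mass it places on $\{0,n\}$ is $\frac{2}{n+1}\geq\frac{1}{n+1}$.

```python
from fractions import Fraction

def uniform_assignment(n):
    """Distribution of x := uniform(0, n): each value in 0..n has mass 1/(n+1)."""
    return {k: Fraction(1, n + 1) for k in range(n + 1)}

n = 4
mu = uniform_assignment(n)
mass_on_extremes = mu[0] + mu[n]                # mu({0, n}) = 2/(n+1)
assert mass_on_extremes == Fraction(2, n + 1)
assert mass_on_extremes >= Fraction(1, n + 1)   # the bound required by r
```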
In practice, constructing a useful transitive probabilistic rely-guarantee condition is difficult, but the standard technique is still valid: the strongest guarantee condition of a given program is the nondeterministic choice of all atomic actions found in that program.
\begin{definition}\label{def:rely}
A \emph{probabilistic rely} or \emph{guarantee condition} $R$ is a probabilistic concurrent program such that $R\|R \refby_{\mathrm{sim}} R$.
\end{definition}
In particular, the concurrent program $r^{*} = r{*}1$ is a rely condition because
\begin{equation}\label{eq:rely-closure}
r^{*}\|r^{*}\refby_{\mathrm{sim}} r^{*}
\end{equation}
holds in the event structure model (\Prop{pro:get-rid-of-par}, \Eqn{eq:star-trans}). This illustrates the idea that a rely condition specifies an environment that can stutter or execute a sequence of actions that are bounded by $r$.
\section{Probabilistic rely-guarantee calculus}\label{sec:prg-rules}
In this section, we develop the rely-guarantee rules governing programs involving probability and concurrency. An example is given by Rule~\ref{rule:rg-standard}, which allows us to check the safety properties of the subsystems and infer the correctness of the whole system in a compositional fashion. We provide a probabilistic version of that rule.
In the previous sections, we have developed the mathematical foundations needed for our interpretation of Hoare triples and \emph{guarantee} relations, namely, the sequential refinement $\sqsubseteq$ and simulation-based order $\refby_{\mathrm{sim}}$. Following~\cite{Hoa09a}, we only adapt the orders in the algebraic interpretation of rely-guarantee quintuples (\Eqn{eq:rgspec}). That is, validity of probabilistic rely-guarantee quintuples is captured by
\[
\triple{P\ R}{\mathcal{E}}{G\ Q} \ \Leftrightarrow \ P{\cdot}(R\|\mathcal{E})\sqsubseteq Q\, \wedge\, \mathcal{E}\refby_{\mathrm{sim}} G,
\]
where $P,\mathcal{E}$ and $Q$ are probabilistic concurrent programs and $R$ and $G$ are rely-guarantee conditions. The first part is seen as a probabilistic instance of the contraction of~\cite{Arm14}, which specifies the functional behaviour of $R\|\mathcal{E}$ under a precondition $P$. The second part uses the simulation order, which is compositional and very sensitive to the structural properties of the program.
The conditions $R$ and $G$ specify how the component $\mathcal{E}$ interacts with its environment. As we discussed in the previous section, rely and guarantee conditions are obtained by taking $r^{*} = r{*}\delta$ for some atomic probabilistic program $r$. Therefore, $\mathcal{E}\refby_{\mathrm{sim}} r^{*}$ implies that every action carried by an event of $\mathcal{E}$ either stutters or satisfies the specification $r$. This corresponds to the standard approach of Jones~\cite{Jon12,Jon81}.
The following rules are probabilistic extensions of the related rely-guarantee rules developed in~\cite{Hoa11,Jon12}. These rules are sound with respect to the event structure semantics of Section~\ref{sec:es}.
\noindent{\textbf{Atomic action: }}
The rely-guarantee rule for an atomic statement $r'$ is provided by the equation
\begin{equation}\label{rule:atomic}
r^{*}\|r'\refby_{\mathrm{sim}} r{*}(r'{\cdot} r^{*})
\end{equation}
where $r$ is the rely condition. This equation shows that a (background) program satisfying the rely condition $r$ will not interfere with the low-level operations involved in the atomic execution of $r'$: the programs are simply interleaved.
\noindent{\textbf{Conditional statement:}}
The rely-guarantee rule for a conditional statement is provided by the equation
\begin{equation}\label{rule:conditional}
r^{*}\|(b{\cdot} \mathcal{E} {+} c{\cdot}\mathcal{F}) \refby_{\mathrm{sim}} r{*}(b{\cdot} (r^{*}\|\mathcal{E}) {+} c{\cdot}(r^{*}\|\mathcal{F})).
\end{equation}
This equation shows how a rely condition $r^{*}$ distributes through branching structures. The tests $b$ and $c$ are assumed to be atomic and their disjunction is always \emph{true} (this is necessary for feasibility). This assumption may be too strong in general because $b$ may involve the reading of some large data that is too expensive to be performed atomically. However, we may assume that such a reading is done before the guard $b$ is checked and the non-atomic evaluation of the variables involved in $b$ may be assigned to some auxiliary variable that is then checked atomically by $b$.
\noindent{\textbf{Prefixing: }}
the sequential rely-guarantee rule for a probabilistic program is expressed using prefixing. We have
\begin{equation}\label{rule:prefix}
r^{*}\|(r'{\cdot}\mathcal{E})\refby_{\mathrm{sim}} r{*}(r'{\cdot}(r^{*}\|\mathcal{E})).
\end{equation}
It generalises Rule~\ref{rule:atomic} and tells us that a rely condition $r^{*}$ distributes through the prefixing operation. In other words, the programs $r'$ and $\mathcal{E}$ should tolerate the same rely condition in order to prove any meaningful property of $r'{\cdot}\mathcal{E}$. This results from our interpretation of $\|$, where no synchronisation is assumed.
\noindent{\textbf{Concurrent execution:}}
in Rule~\ref{rule:rg-standard}, the concurrent composition $\mathcal{E}\|\mathcal{E}'$ requires an environment that satisfies $R{\cap} R'$ to establish the postcondition $Q{\cap} Q'$. However, such an intersection is not readily accessible at the structural level of event structures. Therefore, the most general probabilistic extension of Rule~\ref{rule:rg-standard} which applies to our algebraic setting is:
\begin{equation}\label{rule:rg-general}
\frac{\triple{P\ R}{\mathcal{E}}{G\ Q}\qquad \triple{P\ R'}{\mathcal{E}'}{G'\ Q'}\qquad G\refby_{\mathrm{sim}} R'\qquad G'\refby_{\mathrm{sim}} R}{\triple{P\ R''}{\mathcal{E}\|\mathcal{E}'}{G\|G'\ Q}},
\end{equation}
where $R''$ is a rely condition such that $R''\refby_{\mathrm{sim}} R$ and $R''\refby_{\mathrm{sim}} R'$.
The proof of this rule is exactly the same as in~\cite{Hoa11,Rab13}. In fact, we have $R''\refby_{\mathrm{sim}} R$, $\mathcal{E}'\refby_{\mathrm{sim}} R$, $R\| R\refby_{\mathrm{sim}} R$, therefore \Eqn{eq:par-assoc} and Equational Implication~(\ref{eq:congruence}) imply
\[
R''\|(\mathcal{E}'\|\mathcal{E}) \refby_{\mathrm{sim}} R\|(R\|\mathcal{E})\refby_{\mathrm{sim}} R\|\mathcal{E},
\]
and we obtain $P{\cdot} R''\|(\mathcal{E}'\|\mathcal{E}) \refby_{\mathrm{sim}} P{\cdot} (R\|\mathcal{E})$ by \Eqn{eq:monotony}. It follows from \Thm{thm:trace-imply-distribution} that $P{\cdot} R''\|(\mathcal{E}'\|\mathcal{E}) \sqsubseteq Q$.
The conclusion does not contain any occurrence of $Q'$, but by symmetry, the rule is also valid if $Q'$ is substituted for $Q$. The combined rely condition $R''$ is constructed such that it is below $R$ and $R'$. Indeed, if $R,R'$ have a greatest lower bound with respect to $\refby_{\mathrm{sim}}$, then $R''$ can be taken to be that bound, so that the strengthening of the rely is as weak as possible.
The above rule can be specialised by considering rely-guarantee conditions of the form $r^{*}$, where $r$ is an atomic probabilistic program. The following rule is expressed exactly as in the standard case~\cite{Hoa11}. This is possible because probabilities are internal.
\begin{proposition}\label{pro:rule1}
The following rule is valid in BES:
\begin{equation}\label{rule:rg-atom-rely}
\frac{\triple{P\ r_1^{*}}{\mathcal{E}_1}{g_1^{*}\ Q_1}\qquad \triple{P\ r_2^{*}}{\mathcal{E}_2}{g_2^{*}\ Q_2}\qquad g_1\sqsubseteqh r_2\qquad g_2\sqsubseteqh r_1}{\triple{P\ (r_1{\cap} r_2)^{*}}{\mathcal{E}_1\|\mathcal{E}_2}{(g_1 {+} g_2)^{*}\ Q_1}},
\end{equation}
where $r_1,r_2,g_1,g_2{\in}\mathbb{H}_1\Omega$ and $g_1{+}g_2$ is the nondeterministic choice on $\mathbb{H}_1\Omega$.
\end{proposition}
\begin{proof}
This follows from substituting $r_i^{*}$ for $R$ and $g_i^{*}$ for $G$ in Rule~\ref{rule:rg-general}. Moreover, $g_1^{*}\|g_2^{*}\refby_{\mathrm{sim}} (g_1{+} g_2)^{*}$ holds because $g_i^{*}\refby_{\mathrm{sim}}(g_1{+}g_2)^{*}$ and $(g_1{+}g_2)^{*}\|(g_1{+}g_2)^{*}\refby_{\mathrm{sim}} (g_1{+}g_2)^{*}$ (\Eqn{eq:rely-closure}).
\end{proof}
Recall that the nondeterministic choice of $\mathbb{H}_1\Omega$ is obtained by the pointwise union followed by the necessary closure properties for the elements of $\mathbb{H}_1\Omega$. The intersection $r{\cap} r'$ is obtained by pointwise intersection.
\noindent{\textbf{Iteration:}}
a while program is modelled using the binary Kleene star. The idea is to unfold the loop as far as necessary. The conditional and prefix (sequential) rules can then be applied to the unfolded structure to distribute the rely condition. That is, we write
\begin{align*}\label{rule:while}
r^{*}\|((b{\cdot} \mathcal{E}){*} c) & \refby_{\mathrm{sim}} r{*}(c{\cdot} r^{*} {+} b{\cdot} (r^{*}\|[\mathcal{E}{\cdot}((b{\cdot}\mathcal{E}){*} c)])).
\end{align*}
If $\mathcal{E}$ is sequential, then $r^{*}$ can be ``interleaved" within the internal structure of $\mathcal{E}{\cdot}((b{\cdot}\mathcal{E}){*} c)$ by applying the prefixing and conditional statement rules.
The sequential correctness is achieved by the usual generation of probability distributions, obtained from terminating sequential behaviours, on the ``totally" unfolded event structure (assuming that $\mathcal{E}$ is sequential). The sequential behaviours are usually obtained by interleaving the rely condition $r^{*}$ through the internal structure of the unfolded loop. A bounded loop, such as a for loop, should be modelled using a sequence of sequential compositions or prefixing.
\section{Application: a faulty Eratosthenes sieve}\label{sec:application}
In this section, we show how to use the previously established rely-guarantee rules to verify a probabilistic property of a faulty Eratosthenes sieve, which is a quantitative variant of Jones' example~\cite{Jon81}.
Let $n\geq 2$ be a natural number and $s_0 = \{2,3,\dots,n\}$. For each integer $i$ such that $2\leq i\leq \sqrt n$, we consider a program $\mathtt{thd}_i$ that sequentially removes all (strict) multiples of $i$ from the shared set variable $s$ with a fixed probability $p$. More precisely, each thread $\mathtt{thd}_i$ is implemented as the following program:
\[
\begin{array}{lc}
\texttt{for(j = 2 to n/i)} & \\
\qquad u_{i,j}:\ \texttt{skip}\ \pc{p}\ \texttt{remove(i*j from s)};&
\end{array}
\]
where $\texttt{n/i}$ is the integer division of $\texttt{n}$ by $\texttt{i}$. Each $u_{i,j}$ can be seen as a faulty action that removes the product ${ij}$ from the current value of $s$ with probability $p$. The state space of each atomic deterministic program $u_{i,j}$ is $\Omega = \{ s\ |\ s{\subseteq} s_0\}$. In $\mathbb{H}_1\Omega$, $u_{i,j}$ is defined by $u_{i,j}(s) = (1{-}p)\delta_s {+} p\delta_{s{\setminus}\{ij\}}$.
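To make the atomic actions concrete, here is a small sketch (ours, not the paper's formalism) of $u_{i,j}$ as a map from states to distributions, together with the two properties used below: it removes $ij$ with probability exactly $p$ and never adds elements, so it refines the rely condition $r$.

```python
from fractions import Fraction

p = Fraction(1, 2)   # illustrative value of the removal probability p

def u(i, j, s):
    """u_{i,j}(s) = (1 - p)*delta_s + p*delta_{s \ {i*j}}."""
    s = frozenset(s)
    removed = frozenset(s - {i * j})
    mu = {s: 1 - p}
    mu[removed] = mu.get(removed, Fraction(0)) + p
    return mu

s0 = frozenset(range(2, 11))           # initial state for n = 10
mu = u(2, 4, s0)
# 8 is removed with probability exactly p ...
assert sum(q for t, q in mu.items() if 8 not in t) == p
# ... and no element is ever added (so u(i, j) is bounded by the rely r)
assert all(t <= s0 for t in mu)
```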
The whole system is specified by the concurrent execution
\[
\mathtt{thd}_2\| \cdots \|\mathtt{thd}_{\sqrt n}= \|_{i=2}^{\sqrt n}(u_{i,2}\cdots u_{i,\nicefrac{n}{i}}),
\]
where, in the sequel, $\sqrt n$ is rounded down to the nearest integer.
Let $\pi$ be the set of prime numbers in $s_0$. Our goal is to compute a ``good'' lower bound on the probability that the final state is $\pi$, after executing the threads $\mathtt{thd}_i$ concurrently from the initial state $s_0$.
We denote by $O_{i,j} = \{s\ |\ ij{\notin} s\}\subseteq\Omega$ the set of states from which $ij$ is absent, and by
\[
Q_{i,j}(s) = \{\mu{\in}\mathbb{D}\Omega \ |\ \mu(O_{i,j}){\geq} p\wedge \mu(\{s'\ | \ s'{\subseteq} s\}) {=} 1\}
\]
a specification of a probabilistic program that removes $ij$ from the state $s$ with at least probability $p$ and does not add anything to it. We define $O_i = {\cap}_{j=2}^{\nicefrac{n}{i}} O_{i,j}$, $Q_i = Q_{i,2}{\cdot} Q_{i,3}{\cdot}\dots{\cdot} Q_{i,\nicefrac{n}{i}}$ and $r$ to be the probabilistic program such that $r(s)$ is the convex closure of $\{\delta_{s'} \ |\ s'{\subseteq} s\}$.
First, we show that every thread $\mathtt{thd}_i$ guarantees $r^{*}$. Second, we show that $\mathtt{thd}_i$ establishes $Q_i$ when run in an environment satisfying $r$, i.e. $r^{*}\|\mathtt{thd}_i\sqsubseteq Q_i$, using the atomic and prefix rules~\ref{rule:atomic} and~\ref{rule:prefix}. Finally, we apply the concurrency rule~\ref{rule:rg-atom-rely} to deduce that the system $\|_{i=2}^{\sqrt n}\mathtt{thd}_i$ establishes all postconditions $Q_2,Q_3,\dots, Q_{\sqrt n}$, when run in an environment satisfying $r$.
\noindent{\textbf{Establishing $\mathtt{thd}_i\refby_{\mathrm{sim}} r^{*}$ and $r^{*}\|\mathtt{thd}_i\sqsubseteq Q_i$}}
On the one hand, it is clear that $u_{i,j}\sqsubseteqh r$, for every $i,j$, and thus $\mathtt{thd}_i\refby_{\mathrm{sim}} r^{*}$ follows from the unfold law~(\ref{eq:unfold}). On the other hand, let us show that $r^{*}\|\mathtt{thd}_i\sqsubseteq Q_i$. Multiple applications of the prefix rule give
\[
r^{*}\|\mathtt{thd}_i \refby_{\mathrm{sim}} r{*}(u_{i,2}{\cdot} (r{*} (u_{i,3}{\cdot}(\dots r{*} (u_{i,\nicefrac{n}{i}}{\cdot} r^{*}))))).
\]
Since the right multiplication $X\mapsto X{\cdot} r$, by any program $r{\in}\mathbb{H}_1\Omega$, is the lower adjoint in a Galois connection~\cite{Mci04}, the fixed point fusion theorem~\cite{Bac02} implies
\[
r{*}(u_{i,2}{\cdot} (r{*} (u_{i,3}{\cdot}(\dots r{*} (u_{i,\nicefrac{n}{i}}{\cdot }r^{*}))))) = r^{*}{\cdot} u_{i,2}{\cdot} r^{*}{\cdot} u_{i,3}{\cdot}\dots r^{*}{\cdot} u_{i,\nicefrac{n}{i}}{\cdot} r^{*},
\]
where the equality is in $\mathbb{H}_1\Omega$. Thus,
\[
r^{*}\|\mathtt{thd}_i \sqsubseteq r^{*}{\cdot} u_{i,2}{\cdot} r^{*}{\cdot} u_{i,3}{\cdot}\dots{\cdot} r^{*}{\cdot} u_{i,\nicefrac{n}{i}}{\cdot} r^{*}
\]
follows from the fact that $\sqsubseteq$ is weaker than $\refby_{\mathrm{sim}}$ (\Thm{thm:trace-imply-distribution}). The right hand side explicitly states the interleaving of the rely condition $r^{*}$ in-between the atomic executions in $\mathtt{thd}_i$ as in~\cite{Jon12}.
Moreover, since $r$ is the probabilistic version of a transitive binary relation, \Prop{pro:transitive-convex-closure} implies that $r{\cdot}(r{+}\delta)\sqsubseteqh r$. Since $\mathbb{H}_1\Omega$ is a probabilistic Kleene algebra~\cite{Mci05}, the right induction law of pKA then gives $r^{*} = \delta {+} r$ (the inequality $\delta{+}r\sqsubseteqh r^{*}$ always holds). This reduction of $r^{*}$ to $\delta{+}r$ illustrates the practical importance of transitive rely conditions. Therefore, \[
r^{*}\|\mathtt{thd}_i \sqsubseteq (\delta{+}r){\cdot} u_{i,2}{\cdot} (\delta{+}r){\cdot} u_{i,3}{\cdot}\dots (\delta{+}r){\cdot} u_{i,\nicefrac{n}{i}}{\cdot} (\delta {+}r),
\]
where the left hand side is a sequential program, so \Prop{pro:homomorphism} allows us to compute it directly with the sequential composition of $\mathbb{H}_1\Omega$. Since $u_{i,j}\sqsubseteq Q_{i,j}$, it remains to show that $(\delta{+}r){\cdot} Q_{i,2}{\cdot} (\delta{+}r){\cdot} Q_{i,3}{\cdot}\dots (\delta{+}r){\cdot} Q_{i,\nicefrac{n}{i}}{\cdot} (\delta {+}r)\sqsubseteq Q_i$.
First we show that $Q_{i,j}{\cdot}(\delta{+}r)\sqsubseteq Q_{i,j}$ and
$(\delta{+}r){\cdot} Q_{i,j}\sqsubseteq Q_{i,j}$. Let $s{\in}\Omega$ and $\nu{\in} (Q_{i,j}{\cdot} (\delta{+}r))(s)$. By definition of the sequential composition in $\mathbb{H}_1\Omega$, there exists a probabilistic deterministic program $f\sqsubseteqh \delta {+}r$ and a distribution $\mu{\in} Q_{i,j}(s)$ such that
$\nu(s') = \sum_{t{\in}\Omega}f(t)(s')\mu(t)$,
for every $s'{\in}\Omega$. Therefore,
\[
\nu (O_{i,j}{\cap}\{s'\ |\ s'{\subseteq} s\}) = \sum_{t{\in}\Omega}f(t)(O_{i,j})\mu(t) = \sum_{t{\subseteq} s}f(t)(O_{i,j})\mu(t),
\]
where the second equality follows from $\mu(\{t\ |\ t{\not\subseteq} s\}) {=} 0$, for every $\mu{\in} Q_{i,j}(s)$. We deduce $\sum_{t{\subseteq} s}f(t)(O_{i,j})\mu(t)\geq p$, i.e. $\nu{\in}Q_{i,j}(s)$, by observing
\[
\sum_{t{\subseteq} s}f(t)(O_{i,j})\mu(t)\geq \sum_{ij{\notin} t\wedge t{\subseteq} s}f(t)(O_{i,j})\mu(t) = \mu(O_{i,j}{\cap}\{t \ |\ t{\subseteq} s\})\geq p,
\]
because $f(t)(O_{i,j}) {=} 1$ for every $t$ such that $ij{\notin}t$ and $\mu(O_{i,j}{\cap}\{t\ |\ t{\subseteq} s\}) = \mu(O_{i,j})$ for every $\mu{\in} Q_{i,j}(s)$. Consequently, $Q_{i,j}{\cdot}(\delta{+}r)\sqsubseteq Q_{i,j}$.
Similarly, we can show that $(\delta{+}r){\cdot} Q_{i,j}\sqsubseteq Q_{i,j}$ and thus $r^{*}\|\mathtt{thd}_i\sqsubseteq Q_i$.
\noindent{\textbf{Establishing the property of $r^{*}\|_{i=2}^{\sqrt n}\mathtt{thd}_i$}}
Applying the rule~\ref{rule:rg-atom-rely} $\sqrt n{-}1$ times, we obtain, for every $Q_j$ such that $2{\leq} j{\leq} \sqrt n$,
\[
r^{*}\|_{i=2}^{\sqrt n}\mathtt{thd}_i\sqsubseteq Q_j.
\]
\noindent{\textbf{Inferring a lower bound for the probability of correctness}}
Unfortunately, Rule~\ref{rule:rg-atom-rely} does not give any explicit quantitative bound in terms of probability of correctness. It does provide quantitative correctness, but all the probabilities are buried in the $Q_i$.
To obtain an explicit lower bound for the probability of removing all composite numbers, we first study the case of two threads that run concurrently. We know from Rule~\ref{rule:rg-atom-rely} that $r^{*}\|\mathtt{thd}_2\|\mathtt{thd}_3\sqsubseteq Q_2$ and $r^{*}\|\mathtt{thd}_2\|\mathtt{thd}_3\sqsubseteq Q_3$. Therefore, for every $\mu{\in}\sem{r^{*}\|\mathtt{thd}_2\|\mathtt{thd}_3}(s_0)$, we have $\mu(O_2)\geq p^{\nicefrac{n}{2}{-}1}$ and $\mu(O_3)\geq p^{\nicefrac{n}{3}{-}1}$ because there are $\nicefrac{n}{2}{-}1$ (resp. $\nicefrac{n}{3}{-}1$) multiples of $2$ (resp. $3$) in $[3,n]$ (resp. $[4,n]$). Hence, $\mu(O_2{\cup} O_3) {+} \mu(O_2{\cap} O_3) = \mu(O_2) + \mu(O_3) \geq p^{\nicefrac{n}{2}{-}1} {+} p^{\nicefrac{n}{3}{-}1}$ and
\begin{equation}\label{e1601}
\mu(O_2{\cap} O_3) \geq p^{\nicefrac{n}{2}{-}1} {+} p^{\nicefrac{n}{3}{-}1} {-} 1.
\end{equation}
In the construction of the lower bound in \Eqn{e1601}, we have only used the modularity of measures and, therefore, it can be transformed into a more general rely-guarantee rule with explicit probabilities (\Prop{p1605}).
Given a subset $O{\subseteq}\Omega$ and $p{\in}[0,1]$, we write $\sem{\mathcal{E}}(s_0)(O)\geq p$ if for every $\mu{\in}\sem{\mathcal{E}}(s_0)$ we have $\mu(O)\geq p$.
\begin{proposition}\label{p1605}
For every initial state $s_0$ and for all subsets $O_1,O_2{\subseteq}\Omega$,
\begin{footnotesize}
\begin{displaymath}
\frac{\sem{r_1^{*}\|\mathcal{E}_1}(s_0)(O_1)\geq p_1\quad \sem{r_2^{*}\|\mathcal{E}_2}(s_0)(O_2)\geq p_2\quad \mathcal{E}_1\refby_{\mathrm{sim}} g^{*}\refby_{\mathrm{sim}} r_2^{*}\quad \mathcal{E}_2\refby_{\mathrm{sim}} g'^{*}\refby_{\mathrm{sim}} r_1^{*}}{\sem{(r_1{\cap} r_2)^{*}\|\mathcal{E}_1\|\mathcal{E}_2}(s_0)(O_1{\cap} O_2)\geq p_1 {+} p_2 {-} 1\qquad \mathcal{E}_1\|\mathcal{E}_2\refby_{\mathrm{sim}} (g{+} g')^{*}}
.
\end{displaymath}
\end{footnotesize}
\end{proposition}
\begin{proof}
Let $\mu\in\sem{(r_1{\cap} r_2)^*\|\mathcal{E}_1\|\mathcal{E}_2}(s_0)$; we need to show that $\mu(O_1{\cap} O_2)\geq p_1{+}p_2{-}1$, where $p_1$ and $p_2$ are as in the premises.
Let us define $Q_1$ to be the (single event) ipBES whose event is labelled by the probabilistic program $u_1$ such that $u_1(s_0) = \{\mu\ |\ \mu(O_1){\geq} p_1\}$ and $u_1(s) = \mathbb{D}\Omega$ for $s\neq s_0$. We define $Q_2$ similarly. Then the premises imply $r_1^*\| \mathcal{E}_1\sqsubseteq Q_1$ and
$r_2^*\|\mathcal{E}_2\sqsubseteq Q_2$. By \Prop{pro:rule1}, we have \[
\sem{(r_1{\cap} r_2)^*\|\mathcal{E}_1\|\mathcal{E}_2}\sqsubseteqh \sem{Q_1}\qquad \textrm{ and }\qquad \sem{(r_1{\cap} r_2)^*\|\mathcal{E}_1\|\mathcal{E}_2}\sqsubseteqh \sem{Q_2}.
\]
Therefore $\mu(O_1)\geq p_1$ and $\mu(O_2)\geq p_2$. Modularity of finite measures implies that $\mu(O_1{\cap} O_2) {+} \mu(O_1{\cup} O_2) = \mu(O_1) {+} \mu(O_2)\geq p_1 {+} p_2$. Hence, $\mu(O_1{\cap} O_2)\geq p_1 {+} p_2 {-} \mu(O_1{\cup} O_2)\geq p_1 {+} p_2 {-} 1$ since $\mu(O_1{\cup}O_2)\leq 1$.
The simulation $\mathcal{E}_1\|\mathcal{E}_2\refby_{\mathrm{sim}} (g{+} g')^{*}$ is also clear from \Prop{pro:rule1}.
\end{proof}
We know from the above discussion that
\[
\sem{r^{*}\|\mathtt{thd}_2\|\mathtt{thd}_3}(s_0)(O_2{\cap} O_3)\geq p^{\nicefrac{n}{2}{-}1} {+} p^{\nicefrac{n}{3}{-}1} {-} 1.
\]
Applying \Prop{p1605} on $\mathtt{thd}_2\|\mathtt{thd}_3$ and $\mathtt{thd}_4$ yields
\[
\sem{r^{*}\|\mathtt{thd}_2\|\mathtt{thd}_3\|\mathtt{thd}_4}(s_0)(O_2{\cap} O_3{\cap}O_4)\geq p^{\nicefrac{n}{2}{-}1} {+} p^{\nicefrac{n}{3}{-}1} {+}p^{\nicefrac{n}{4}{-}1}{-} 2.
\]
Thus $\sqrt n{-}1$ applications of \Prop{p1605} give
\[
\sem{r^{*}\|_{i=2}^{\sqrt n}\mathtt{thd}_i}(s_0)({\cap}_{i=2}^{\sqrt n} O_i) \geq \sum_{i=2}^{\sqrt n} p^{\nicefrac{n}{i}{-}1} {-} (\sqrt n{-}2) = f(p,n).
\]
The lower bound $f(p,n)$ can be a poor lower approximation of the probability that the system establishes ${\cap}_{i=2}^{\sqrt n}O_i$. However, it is clear that $\lim_{p{\to} 1}f(p,n) = f(1,n) = 1$.
In the particular case of $n=15$, we have $\sqrt{15} = 3$ and we only need to consider $\mathtt{thd}_2$ and $\mathtt{thd}_3$ so that $f(p,15) = p^6 {+} p^4 {-} 1$. The plot of $f(p,15)$ in \Fig{fig:comparison} shows that $f(p,15)$ gives a positive lower bound when $p\geq 0.868$, the exact probability being $p^{10} {+} 4p^9(1{-}p) {+} 4p^8(1{-}p)^2$.
\noindent{\textbf{Refining the lower bound}}
We can use other internal properties of the system to obtain a better lower bound. It is clear that $O_i$ is an invariant for every $\mathtt{thd}_j$ (for $j{\neq} i$) and that all actions $u_{i,j}$ (sequentially) commute with each other. Thus, we should obtain a better lower bound by noticing that the system is ``sequentially better'' than the following interleaving: $\mathtt{thd}_2$ removes all (strict) multiples of $2$, $\mathtt{thd}_3$ removes all multiples of $3$ assuming that all multiples of $\mathrm{lcm'}(2,3)$ (the lowest common multiple of $2$ and $3$ that is strictly greater than both) have been removed by $\mathtt{thd}_2$, and so on~\footnote{The probability of removing all composite numbers is usually above that bound because $6$ can be removed by either $\mathtt{thd}_2$ or $\mathtt{thd}_3$.}. Thus
\begin{align*}
\sem{r^{*}\|_{i=2}^{\sqrt n}\mathtt{thd}_i}(s_0)({\cap}_{i=2}^{\sqrt n} O_i) &\geq p^{\nicefrac{n}{2}-1}p^{\nicefrac{n}{3}{-}1{-}[\nicefrac{n}{6}]}p^{\nicefrac{n}{4}{-}1{-}[\nicefrac{n}{4}{-}1]}p^{\nicefrac{n}{5}{-}1{-}[\nicefrac{n}{10}{+}\nicefrac{n}{15} - \nicefrac{n}{30}]}\cdots \\
&= g(p,n),
\end{align*}
where the square-bracketed terms are the numbers of multiples removed by threads with smaller indices. For example, before $\mathtt{thd}_5$ runs, $\mathtt{thd}_2$ removes $\nicefrac{n}{10}$ multiples of $\mathrm{lcm'}(2,5)$, $\mathtt{thd}_3$ removes $\nicefrac{n}{15}{-}\nicefrac{n}{30}$ multiples of $\mathrm{lcm'}(3,5)$ (not multiples of $\mathrm{lcm'}(2,5)$), thus $\mathtt{thd}_5$ removes the remaining $\nicefrac{n}{5}{-}1{-}[\nicefrac{n}{10}{+}\nicefrac{n}{15}{-}\nicefrac{n}{30}]$ multiples of $5$. In the particular case of $n=15$, this yields
\[
g(p,15) = p^{\nicefrac{15}{2}{-}1}p^{\nicefrac{15}{3}-1-\nicefrac{15}{6}} = p^{7-1+5-1-2} = p^8.
\] A graphical comparison of $f,g$ and the actual probability is displayed in Figure~\ref{fig:comparison} for $n = 15$.
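The three quantities are easy to compare numerically. The Python sketch below (ours, with the bounds hard-coded for $n=15$) computes $f$, $g$ and the exact probability; the exact value exploits the commutativity of the actions noted above, so the final state does not depend on the interleaving and a composite $c$ survives only if every covering action $u_{i,j}$ with $ij = c$ fails:

```python
def f(p):
    # additive bound for n = 15: sqrt(15) truncates to 3, so only
    # thd_2 (6 actions) and thd_3 (4 actions) contribute
    return p**6 + p**4 - 1

def g(p):
    # refined sequential bound for n = 15: thd_3 skips the multiples of 6
    return p**8

def exact(p, n=15):
    """Probability that every composite in {2,...,n} is removed: each
    composite c survives only if all covering actions u_{i,j} with
    i*j == c fail, independently with probability 1 - p each."""
    comps = [c for c in range(2, n + 1) if any(c % d == 0 for d in range(2, c))]
    prob = 1.0
    for c in comps:
        k = sum(1 for i in (2, 3) for j in range(2, n // i + 1) if i * j == c)
        prob *= 1 - (1 - p) ** k
    return prob
```

For $n=15$ this recovers the closed form $p^{10}+4p^{9}(1-p)+4p^{8}(1-p)^{2} = p^{8}(2-p)^{2}$, and $f(p,15)$ becomes positive around $p\approx 0.868$.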
\begin{figure}
\caption{Comparison of the quantities $f(p,15)$ (dotted), $g(p,15)$ (dashed) and the actual probability $p^{10} {+} 4p^9(1{-}p) {+} 4p^8(1{-}p)^2$ (solid).}
\label{fig:comparison}
\end{figure}
\noindent{\textbf{Establishing the property of $\|_{i=2}^{\sqrt n}\mathtt{thd}_i$}}
Finally, notice that $\emptyset{\in} {\cap}_{i=2}^{\sqrt n}O_i$, which means that $r^{*}\|_{i=2}^{\sqrt n}\mathtt{thd}_i$ can establish $s=\emptyset$ with positive probability. This issue is resolved by using a stronger guarantee property such as ``$u_{i,j}$ never removes $i$''. Therefore, $\|_{i=2}^{\sqrt n}\mathtt{thd}_i$ never removes any prime number, i.e. any element of ${\cap}_{i=2}^{\sqrt n}O_i$ that does not contain all the prime numbers below $n$ occurs with probability $0$.
\section{Conclusion}
We have presented an extension of the rely-guarantee calculus that accounts for probabilistic programs running in a shared variable environment. The rely-guarantee rules are expressed and derived by and large by using the algebraic properties of a bundle event structure semantics for concurrent programs.
In our approach, the specification of a probabilistic concurrent program is expressed with a rely-guarantee quintuple. Each quintuple is defined algebraically through the use of a sequential order $\sqsubseteq$, which captures all possible sequential behaviours when a suitable definition of the concurrency operation $\|$ is given, and a simulation order $\refby_{\mathrm{sim}}$, which specifies the level of interference between the specified component and the environment. Various probabilistic rely-guarantee rules have been established and applied on a simple example of a faulty concurrent system. We have also shown some rules that provide explicit quantitative properties, including a lower bound for the probability of correctness. In particular, a better lower-approximation can be derived if further internal properties of the systems are known.
The framework developed in this paper has its current limitations. Firstly, neither the algebra nor the event structure model support non-terminating probabilistic concurrent programs at the moment. That is, the rely-guarantee rules of this paper can only be applied in a partial correctness setting. Secondly, the concrete model is restricted to programs with finite state spaces. We will focus particularly on the first limitation in our future work.
\begin{append}
\appendix
\section{Axioms of Kleene algebra and related structures}\label{A:ka}
\subsection{Idempotent semiring}
An \emph{idempotent semiring} is an algebraic structure $(K,+,\cdot,0,1)$ such that, for every $x,y,z{\in}K$, the following axioms hold
\begin{eqnarray}
x + x & = & x,\label{eq:+-idem}\\
x + y & = &y + x,\label{eq:+-comm}\\
x + (y + z) & = & (x + y) + z,\label{eq:+-assoc}\\
x + 0 & = & x, \label{eq:+-zero}\\
x{\cdot} 1 & = & x,\label{eq:rone}\\
1{\cdot} x & = & x, \label{eq:lone}\\
x {\cdot} (y {\cdot} z) & = & (x {\cdot} y) \cdot z,\label{eq:seq-assoc}\\
0{\cdot} x& = & 0, \label{eq:left-zero}\\
x{\cdot} 0& = & 0, \label{eq:righ-zero}\\
(x+y){\cdot} z & = & x{\cdot} z + y\cdot z,\label{eq:+-dist-seq} \\
x{\cdot} y+x{\cdot} z & = & x {\cdot} (y+z).\label{eq:+-rdist-seq}
\end{eqnarray}
\subsection{Kleene algebra}
A \emph{Kleene algebra} is an algebraic structure $(K,+,\cdot,^*,0,1)$ where $(K,+,\cdot,0,1)$ is an idempotent semiring and the Kleene star $(^*)$ satisfies Kozen's axioms:
\begin{eqnarray}
x^* & = & 1 + x{\cdot} x^*,\label{eq:*-unfold}\\
z+x{\cdot} y\leq y & \Rightarrow & x^*{\cdot} z\leq y,\label{eq:*-linduction}\\
z+y{\cdot} x\leq y& \Rightarrow &z{\cdot} x^*\leq y.\label{eq:*-rinduction}
\end{eqnarray}
The induction law~\ref{eq:*-linduction} (resp.~\ref{eq:*-rinduction}) implies that $x^*$ is the least fixed point of $\lambda y.1+x{\cdot} y$ (resp. $\lambda y.1 + y{\cdot}x$).
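These axioms are satisfied, for instance, by binary relations under union and relational composition, with $x^{*}$ the reflexive-transitive closure. The following Python sketch (our illustration, with relations encoded as frozensets of pairs) computes the star as the least fixed point of $\lambda y.\,1+x{\cdot}y$; the unfold axiom~(\ref{eq:*-unfold}) can then be checked on examples, and the left induction law~(\ref{eq:*-linduction}) exhaustively on a small carrier:

```python
from itertools import product

def one(n):
    # identity relation, playing the role of 1
    return frozenset((i, i) for i in range(n))

def dot(x, y):
    # relational composition, playing the role of the product
    return frozenset((a, d) for (a, b) in x for (c, d) in y if b == c)

def star(x, n):
    """Least fixed point of y -> 1 + x.y: the reflexive-transitive
    closure of x on the carrier {0, ..., n-1}."""
    y = one(n)
    while True:
        y2 = one(n) | dot(x, y)   # + is union in this model
        if y2 == y:
            return y
        y = y2
```

Here $+$ is union, $0$ is the empty relation, and the order $\leq$ is inclusion of relations.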
\subsection{Probabilistic Kleene algebra}
A \emph{probabilistic Kleene algebra} has the same signature as Kleene algebra but weakens the distributivity law~\ref{eq:+-rdist-seq} and the induction rule~\ref{eq:*-rinduction} to:
\begin{eqnarray}
x{\cdot} y+x{\cdot} z & \leq & x {\cdot} (y+z),\label{eq:+-subdist-seq}\\
z+y{\cdot} (x+1)\leq y& \Rightarrow &z{\cdot} x^*\leq y.\label{eq:*-rweakinduction}
\end{eqnarray}
\subsection{Concurrent Kleene algebra}
A \emph{concurrent Kleene algebra} is composed of a Kleene algebra $(K,+,\cdot,^*,0,1)$ and a commutative Kleene algebra $(K,+,\|,^{(*)},0,1)$ (i.e. $\|$ is commutative) linked by the interchange law:
\begin{eqnarray}
(x\|y)\cdot(x'\|y')&\leq& (x\cdot x')\|(y\cdot y').
\end{eqnarray}
\end{append}
\end{document}
\begin{document}
\title[Counting Markov Equivalence Classes for DAG models on Trees]{Counting Markov Equivalence Classes for DAG models on Trees}
\author{Adityanarayanan Radhakrishnan}
\address{
Laboratory for Information and Decision Systems,\\
and Institute for Data, Systems, and Society\\
MIT\\
Cambridge, MA, USA}
\email{[email protected]}
\author{Liam Solus}
\address{KTH Royal Institute of Technology\\
Stockholm, Sweden}
\email{[email protected]}
\author{Caroline Uhler}
\address{
Laboratory for Information and Decision Systems,\\
and Institute for Data, Systems, and Society\\
MIT\\
Cambridge, MA, USA}
\email{[email protected]}
\date{28 May 2017}
\begin{abstract}
DAG models are statistical models satisfying a collection of conditional independence relations encoded by the nonedges of a directed acyclic graph (DAG) $\mathcal{G}$.
Such models are used to model complex cause-effect systems across a variety of research fields.
From observational data alone, a DAG model $\mathcal{G}$ is only recoverable up to \emph{Markov equivalence}.
Combinatorially, two DAGs are Markov equivalent if and only if they have the same underlying undirected graph (i.e. skeleton) and the same set of induced subDAGs $i\to j \leftarrow k$, known as immoralities.
Hence it is of interest to study the number and size of Markov equivalence classes (MECs).
In a recent paper, the authors introduced a pair of generating functions that enumerate the number of MECs on a fixed skeleton by number of immoralities and by class size, and they studied the complexity of computing these functions.
In this paper, we lay the foundation for studying these generating functions by analyzing their structure for trees and other closely related graphs.
We describe these polynomials for some important families of graphs including paths, stars, cycles, spider graphs, caterpillars, and complete binary trees.
In doing so, we recover important connections to independence polynomials, and extend some classical identities that hold for Fibonacci numbers.
We also provide tight lower and upper bounds for the number and size of MECs on any tree.
Finally, we use computational methods to show that the number and distribution of high degree nodes in a triangle-free graph dictates the number and size of MECs.
\end{abstract}
\maketitle
\thispagestyle{empty}
\section{Introduction}
\label{sec: introduction}
A graphical model based on a directed acyclic graph (DAG), known as a \emph{DAG model} or \emph{Bayesian network}, is a type of statistical model used to model complex cause-and-effect systems.
DAG models are popular in numerous areas of research including computational biology, epidemiology, environmental management, and sociology~\cite{A11, Friedman_2000, Pearl_2000, Robins_2000, Spirtes_2001}.
Given a DAG $\mathcal{G} := ([p],A)$ with nodes $[p]=\{1, \dots , p\}$ and arrows $i \rightarrow j\in A$, the DAG model associates to each node $i\in[p]$ of $\mathcal{G}$ a random variable $X_i$.
The collection of non-arrows of $\mathcal{G}$ encodes those conditional independence (CI) relations typical of cause-effect relationships:
$$
X_i \independent X_{\nd(i)\backslash\pa(i)} \,\mid\, X_{\pa(i)},
$$
where $\nd(i)$ and $\pa(i)$ respectively denote the \emph{nondescendants} and \emph{parents} of the node $i$ in $\mathcal{G}$.
A probability distribution $\mathbb{P}$ is said to satisfy the \emph{Markov assumption} with respect to $\mathcal{G}$ if it entails these CI relations, and the DAG model associated to $\mathcal{G}$ is the complete set of all such joint probability distributions.
The global consequences of the Markov assumption in terms of CI relations can be captured via the combinatorics of the DAG $\mathcal{G}$ with a notion of directed separation called \emph{$d$-separation} \cite[Chapter 3]{DSS08}.
Unfortunately, multiple DAGs can encode the same set of CI relations.
Such DAGs are said to be \emph{Markov equivalent}, and the complete collection of DAGs encoding the same set of CI relations as $\mathcal{G}$ is called the \emph{Markov equivalence class} (MEC) of $\mathcal{G}$.
Verma and Pearl show in \cite{VP92} that a MEC is combinatorially determined by the underlying undirected graph $G$ (or \emph{skeleton}) of $\mathcal{G}$ and the placement of immoralities, i.e.~induced subgraphs of the form $i\rightarrow j\leftarrow k$.
From observational data, the underlying DAG $\mathcal{G}$ of a DAG model can only be determined up to Markov equivalence.
It is therefore of interest to gain a combinatorial understanding of MECs, in particular their number and sizes.
The literature on the MEC enumeration problem can be summarized via the following three perspectives: (1) count the number of MECs on all DAGs on $p$ nodes \cite{GP01}, (2) count the number of MECs of a given size \cite{G06,S03,W13}, or (3) determine the size of a specific MEC \cite{HJY15, HY16}.
In \cite{GP01}, the authors approach perspective (1) computationally and compute the number of MECs for all DAGs on $p\leq 10$ nodes.
In \cite{G06,S03,W13}, the authors provide partial results for perspective (2) using inclusion-exclusion formulae that work nicely for small MECs sizes.
Then in \cite{HJY15, HY16}, the authors explore efficient techniques for computing the size of a fixed MEC via algorithms that manipulate \emph{$v$-rooted} and \emph{core} subgraphs of chordal graphs.
Recently, \cite{RSU17} addresses this question from a new perspective by introducing a pair of generating functions that enumerate the number of MECs on a \emph{fixed skeleton} $G=(V,E)$ by number of immoralities in each class and by class size.
Their results reveal connections to graphical enumeration problems that are well-studied from the perspective of combinatorial optimization.
A main goal of this paper is to make these connections explicit and to use them to study the generating functions of \cite{RSU17} for sparse graphs.
Throughout, we use calligraphic letters for DAGs, such as $\mathcal{G}$, and standard letters for the corresponding undirected graph (i.e.~skeleton), such as $G$. In addition, we use $A$ to denote a collection of arrows and $E$ to denote a collection of undirected edges.
The first generating function is the graph polynomial
$$
M(G;x) := \sum_{k\geq0}m_k(G)x^k,
$$
where $m_k(G)$ denotes the number of MECs with skeleton $G$ that contain precisely $k$ immoralities.
The degree of $M(G;x)$, denoted $m(G)$, is called the \emph{immorality number} of $G$, and it counts the maximum number of immoralities possible in an MEC with skeleton $G$.
The second generating function is the arithmetic function
$$
S(G;x) := \sum_{k\geq0}\frac{s_k(G)}{k^x},
$$
where $s_k(G)$ denotes the number of MECs with skeleton $G$ that have size $k$.
We let $M(G):=M(G;1)=S(G;0)$ denote the total number of MECs with skeleton $G$.
In \cite{RSU17}, the authors showed that computing a DAG with $m(G)$ immoralities is an NP-hard problem, and that $S(G;x)$ is a complete graph isomorphism invariant for all connected graphs on $p\leq 10$ nodes.
Otherwise, very little is known about the structure of these generating functions.
In this paper, we lay the foundation for the study of the graph polynomial $M(G;x)$ by providing a detailed analysis of its properties for trees (and their closely related graphs).
Within this context, we draw explicit connections between properties of $M(G;x)$ and the \emph{independence polynomial} of $G$; i.e. the graph polynomial
$
I(G;x):=\sum_{k\geq0}\alpha_k(G)x^k,
$
where $\alpha_k(G)$ denotes the number of pairwise disjoint $k$-subsets of vertices (\emph{independent sets}) of $G$.
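For small skeletons, the coefficients $m_k(G)$ can be computed by brute force using the Verma-Pearl characterization: enumerate all acyclic orientations of $G$ and group them by their immorality set. The following Python sketch (ours, exponential in the number of edges and only meant for small graphs) does exactly this:

```python
from itertools import combinations, permutations, product

def immoralities(nodes, arrows):
    """Induced subDAGs i -> j <- k with i, k nonadjacent (Verma-Pearl)."""
    arrowset = set(arrows)
    skeleton = {frozenset(a) for a in arrowset}
    imms = set()
    for j in nodes:
        parents = sorted(i for i in nodes if (i, j) in arrowset)
        for i, k in combinations(parents, 2):
            if frozenset((i, k)) not in skeleton:
                imms.add((i, j, k))
    return frozenset(imms)

def mec_coefficients(nodes, edges):
    """Coefficients m_k(G) of M(G;x): enumerate all acyclic orientations
    of the skeleton and group them by their immorality set."""
    def acyclic(arrows):
        return any(all(order.index(a) < order.index(b) for a, b in arrows)
                   for order in permutations(nodes))
    classes = set()
    for bits in product([0, 1], repeat=len(edges)):
        arrows = [(u, v) if b else (v, u) for (u, v), b in zip(edges, bits)]
        if acyclic(arrows):
            classes.add(immoralities(nodes, arrows))
    ks = [len(c) for c in classes]
    return [ks.count(k) for k in range(max(ks) + 1)]
```

For the path $I_4$ this returns $[1,2]$, i.e. $M(I_4;x) = 1+2x$, and for the star on four nodes it returns $[1,3,0,1]$.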
The remainder of this paper is structured as follows:
In Section~\ref{sec: some first examples} we compute $M(G;x)$ and $S(G;x)$ for some fundamental examples, including paths, cycles, and stars.
We find that $M(G;x)$ coincides with an independence polynomial for paths and cycles, therein providing connections to Fibonacci numbers and Fibonacci-like sequences.
Paths and stars give tight bounds on the number of independent sets in a tree \cite{PT82}.
We show in Section~\ref{sec: bounding the size and number of mecs on trees} that they also provide tight upper and lower bounds for the number and sizes of MECs on a tree.
In Section~\ref{sec: classic families of trees} we then use $M(G;x)$ for stars and paths to compute $M(G;x)$ and $M(G)$ for families of trees that are significant in both mathematical and statistical settings.
The graphs analyzed include spider graphs, caterpillar graphs, and complete binary trees.
In the case of spider graphs, the resulting formulae yield generalizations of classic identities known for Fibonacci numbers, and reveal a multivariate extension of $M(G;x)$ exhibiting nice combinatorial properties that can be recursively computed for any tree.
In Section~\ref{sec: beyond trees}, we use computational methods to examine properties of $M(G;x)$ and $M(G)$ for the more general family of triangle-free graphs.
The results of \cite{RSU17} and those of Sections~\ref{sec: some first examples},~\ref{sec: bounding the size and number of mecs on trees}, and~\ref{sec: classic families of trees} exhibit an underlying relationship between the number and size of MECs and the number of cycles and high degree nodes in the graph.
Using a program first described in \cite{RSU17}, we study this connection by examining data collected on MECs for all connected graphs on $p\leq 10$ nodes.
We compare class size and the number of MECs per skeleton to skeletal features including average degree, maximum degree, clustering coefficient, and the ratio of number of immoralities in the MEC to the number of induced $3$-paths in the skeleton.
Unlike $S(G;x)$, the polynomial $M(G;x)$ is not a complete graph isomorphism invariant over all connected graphs on $p\leq10$ nodes.
However, using this program, we observe that it is such an invariant when restricted to triangle-free graphs.
\section{Some First Examples}
\label{sec: some first examples}
In this section, we compute the generating functions $M(G;x)$ and $S(G;x)$ for paths, cycles, stars, and bistars.
We show that $M(G;x)$ is an independence polynomial for all paths and cycles.
Similarly, we show that for the star graphs $M(G;x)$ has nonzero coefficients given by the binomial coefficients, which are precisely the coefficients of its corresponding independence polynomial.
These examples are fundamental to the theory developed in Sections~\ref{sec: bounding the size and number of mecs on trees} and~\ref{sec: classic families of trees}, in which we bound the number and size of MECs on trees and compute $M(G;x)$ for more general families of graphs using paths and stars.
Recall that the \emph{$p$-path} is the (undirected) graph $I_p := ([p],E)$ for which $E := \{\{i,i+1\} : i\in[p-1]\}$, and the \emph{$p$-cycle} is the (undirected) graph $C_p := ([p],E)$ for which $E := \{\{i,i+1\} : i\in[p-1]\}\cup\{\{1,p\}\}$.
We also define the graph $G_p(q_1,q_2,\ldots,q_p)$ to be the undirected graph given by attaching $q_i$ leaves to node $i$ of the $p$-path $I_p$.
The \emph{$p$-star} is the graph $G_1(p)$ and the \emph{$p,q$-bistar} is the graph $G_2(p,q)$.
The \emph{center} node of $G_1(p)$ is its unique node of degree $p$.
\subsection{Paths and cycles}
\label{subsec: paths and cycles}
We introduce two well-studied combinatorial sequences, and their associated polynomial filtrations that will play a fundamental role in the formulae computed in this section as well as in Sections~\ref{sec: bounding the size and number of mecs on trees} and~\ref{sec: classic families of trees}.
Recall that the \emph{$p^{th}$ Fibonacci number} $F_p$ is defined by the recursion
$$
F_0 := 1,\quad F_1 := 1, \quad \mbox{and} \quad F_p := F_{p-1}+F_{p-2} \quad\mbox{ for $p\geq2$.}
$$
The \emph{$p^{th}$ Fibonacci polynomial} is defined by
$$
F_p(x) := \sum_{k=0}^{\lfloor\frac{p}{2}\rfloor}{p-k\choose k}x^k,
$$
and it has the properties that $F_p(1) = F_p$ for all $p\geq 1$ and $F_p(x) = F_{p-1}(x)+xF_{p-2}(x)$ for all $p\geq2$.
Analogously, the $p^{th}$ Lucas number $L_p$ is given by the Fibonacci-like recursion
$$
L_0 := 2,\quad L_1 := 1, \quad \mbox{and} \quad L_p := L_{p-1}+L_{p-2} \quad\mbox{ for $p\geq2$.}
$$
The $p^{th}$ \emph{Lucas polynomial} is given by
$$
L_0(x) := 2,\quad L_1(x) := 1, \quad \mbox{and} \quad L_p(x) := L_{p-1}(x)+xL_{p-2}(x) \quad\mbox{ for $p\geq2$.}
$$
It is well known that the independence polynomial of the $p$-path is equal to the $(p+1)^{st}$ Fibonacci polynomial and the independence polynomial of the $p$-cycle is given by the $p^{th}$ Lucas polynomial; i.e.
$$
I(I_p;x) = F_{p+1}(x) \quad \mbox{and} \quad I(C_p;x) = L_p(x).
$$
$$
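These identities are easy to check numerically. The following Python sketch (ours, by exhaustive enumeration of independent sets) compares the path case against the Fibonacci polynomials $F_{p+1}(x)$ under the conventions above:

```python
from itertools import combinations
from math import comb

def ind_poly(n, edges):
    """Coefficient list of the independence polynomial I(G;x) of a
    graph on vertices 0..n-1."""
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        for sub in combinations(range(n), k):
            s = set(sub)
            if not any(u in s and v in s for u, v in edges):
                coeffs[k] += 1
    while len(coeffs) > 1 and coeffs[-1] == 0:   # trim trailing zeros
        coeffs.pop()
    return coeffs

def path_edges(n):
    return [(i, i + 1) for i in range(n - 1)]

def cycle_edges(n):
    return path_edges(n) + [(0, n - 1)]

def fib_poly(p):
    """Coefficient list of F_p(x) = sum_k C(p-k, k) x^k."""
    return [comb(p - k, k) for k in range(p // 2 + 1)]
```

For example, $I(I_4;x) = 1 + 4x + 3x^2 = F_5(x)$ and $I(C_4;x) = 1 + 4x + 2x^2 = L_4(x)$.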
With these facts in hand we prove the following theorem.
\begin{theorem}
\label{thm: path and cycle polynomials}
For the path $I_p$ and the cycle $C_p$ on $p$ nodes we have that
$$
M(I_p; x) = F_{p-1}(x) \quad \mbox{ and} \quad M(C_p;x) = L_p(x)-1.
$$
In particular, the number of MECs on $I_p$ and $C_p$, respectively, is
$$
M(I_p) = F_{p-1} \quad \mbox{and} \quad M(C_p) = L_p -1,
$$
and the maximum number of immoralities is
$$m(I_{p+1}) = m(C_p) = \left\lfloor\frac{p}{2}\right\rfloor.$$
\end{theorem}
\begin{proof}
The result follows from a simple combinatorial bijection.
Since paths and cycles are the graphs with the property that the degree of any vertex is at most two, then the possible locations of immoralities are exactly the degree two nodes.
That is, the unique head node $j$ in an immorality $i\rightarrow j\leftarrow k$ must be a degree two node.
In the path $I_p$, this corresponds to all $p-2$ non-leaf vertices, and for the cycle $C_p$ this is all the vertices of the graph.
Notice then that no two adjacent degree two nodes can simultaneously be the unique head node of an immorality, since this would require one arrow to be bidirected.
Thus, a viable placement of immoralities corresponds to a choice of any subset of degree two nodes that are mutually non-adjacent, i.e. that form an independent set.
Conversely, given any independent set in $I_p$, a DAG can be constructed by placing the head node of an immorality at each element of the set and directing all other arrows in one direction.
Similarly, this works for any nonempty independent set in $C_p$.
(Notice that any MEC on the cycle must have at least one immorality since all DAGs have at least one sink node.)
The resulting formulas are then
$$
M(I_p; x) = I(I_{p-2}; x) = F_{p-1}(x) \quad \mbox{and} \quad M(C_p;x) = I(C_p; x) -1 = L_p(x)-1,
$$
which completes the proof.
\end{proof}
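The bijection in the proof can be checked by direct enumeration: MECs on $I_p$ correspond to independent sets of the $p-2$ internal nodes, and MECs on $C_p$ to nonempty independent sets of $C_p$. A small Python check (ours):

```python
from itertools import product

def fib(p):              # F_0 = F_1 = 1, F_p = F_{p-1} + F_{p-2}
    a, b = 1, 1
    for _ in range(p - 1):
        a, b = b, a + b
    return b

def lucas(p):            # L_0 = 2, L_1 = 1, L_p = L_{p-1} + L_{p-2}
    a, b = 2, 1
    for _ in range(p):
        a, b = b, a + b
    return a

def ind_sets(n, cyclic=False):
    """All independent subsets of the n-path (or n-cycle), as 0/1 tuples."""
    edges = [(i, i + 1) for i in range(n - 1)]
    if cyclic:
        edges.append((0, n - 1))
    return [bits for bits in product([0, 1], repeat=n)
            if not any(bits[u] and bits[v] for u, v in edges)]

# MECs on I_5 <-> independent sets of its 3 internal nodes:
assert len(ind_sets(5 - 2)) == fib(5 - 1)    # M(I_5) = F_4 = 5
```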
We now compute the generating functions $S(I_p;x)$ and $S(C_p;x)$.
The desired formulae follow naturally from the description of the placement of immoralities given in Theorem~\ref{thm: path and cycle polynomials}.
\begin{theorem}
\label{thm: path vectors}
The number $s_\ell(I_p)$ of MECs of size $\ell$ with skeleton $I_p$ is the number of compositions $c_1+\cdots+c_{k+1} = p-k$ of $p-k$ into $k+1$ parts that satisfy
$$
\ell = \prod_{i = 1}^{k+1}c_i
$$
as $k$ varies over $0,1,\ldots,\lfloor\frac{p}{2}\rfloor$.
\end{theorem}
\begin{proof}
Let $\mathcal{G}$ be a DAG with skeleton $I_p$. We denote the Markov equivalence class of $\mathcal{G}$ by $[\mathcal{G}]$. By the proof of Theorem~\ref{thm: path and cycle polynomials}, we know that the immorality placements in $[\mathcal{G}]$ correspond to the nodes in an independent $k$-subset $\mathcal{I}\subset[p]$ on the subpath $I_{p-2}$ of $I_p$ induced by the non-leaf nodes of $I_p$.
The induced graph of the complement of $\mathcal{I}$ is a forest of $k+1$ paths.
Since each member of $[\mathcal{G}]$ is a DAG with skeleton $I_p$ that has no immoralities on these $k+1$ paths, then each path contains a unique sink.
Each independent $k$-subset yields a distinct forest of $k+1$ paths on $[p]\backslash\mathcal{I}$, which corresponds to a unique composition of $p-k$ into $k+1$ parts.
The formula for $s_\ell(I_p)$ is then given by considering all such possible placements of sinks on each path in the forests over all independent sets.
\end{proof}
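The composition description in this proof is easy to enumerate directly (a sketch of ours, with our own helper name), and two consistency checks come for free: the class counts must sum to $M(I_p) = F_{p-1}$, and the class sizes, weighted by multiplicity, must sum to $2^{p-1}$, the total number of (automatically acyclic) orientations of $I_p$.

```python
from itertools import combinations
from math import prod

def mec_size_counts_path(p):
    """Size distribution of the MECs on the path I_p: for each k, every
    composition of p-k into k+1 positive parts gives one class whose
    size is the product of its parts."""
    counts = {}
    for k in range(p // 2 + 1):
        n = p - k  # nodes remaining after deleting the k immorality heads
        for cuts in combinations(range(1, n), k):
            parts = [b - a for a, b in zip((0,) + cuts, cuts + (n,))]
            size = prod(parts)
            counts[size] = counts.get(size, 0) + 1
    return counts
```

For instance, `mec_size_counts_path(5)` yields one class of size $5$, two of size $3$, one of size $4$, and one of size $1$; the counts sum to $5 = F_4$ and the weighted sizes to $16 = 2^4$.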
A similar argument using integer partitions allows us to compute the number of MECs of size $\ell$ on the $p$-cycle.
\begin{theorem}
\label{thm: cycle vectors}
The number of MECs of size $\ell$ in the $p$-cycle is
$$
s_\ell(C_p) = \sum_{k=1}^{\left\lfloor\frac{p}{2}\right\rfloor}\,\sum_{\substack{{\bf m} \,\in\, \mathbb{P}[p-2k+1,k,p-k], \\ \ell=\prod_{i=1}^k i^{m_i}}}\frac{p}{k}{k\choose m_1,\ldots,m_{p-2k+1}},
$$
where $\mathbb{P}[j,k,n]$ denotes the partitions of $n$ into $k$ parts with largest part at most $j$.
\end{theorem}
\begin{proof}
Since every node of $C_p$ has degree 2, each MEC of $C_p$ containing $k$ immoralities corresponds to an independent $k$-subset of $[p]$, and the subgraph of $C_p$ given by deleting this $k$-subset consists of $k$ disjoint paths.
The size of this MEC is then the product of the lengths of these paths.
So we need only count the number of such subgraphs for which this product equals $\ell$.
To count these objects, consider that each subgraph of $C_p$ given by deleting an independent $k$-subset of $C_p$ forms a partition of the $p-k$ remaining vertices into $k$ parts with maximum possible part size being $p-2k+1$.
Such a partition is represented by
$$
\left\langle 1^{m_1},2^{m_2},\ldots,(p-2k+1)^{m_{p-2k+1}}\right\rangle\in \mathbb{P}[p-2k+1,k,p-k],
$$
where $m_1,\ldots,m_{p-2k+1}\geq 0$ and $\sum_i m_i = k$.
Each such partition corresponds to an unlabeled forest consisting of $m_i$ $i$-paths, and the number of subgraphs of $C_p$ isomorphic to this forest is
$$
\frac{p}{k}{k\choose m_1,\ldots,m_{p-2k+1}}.
$$
The claim follows since the size of each corresponding MEC is $\prod_{i=1}^k i^{m_i}$.
\end{proof}
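The partition formula of this theorem can likewise be checked numerically (our own sketch): summing the class counts over $\ell$ must give $M(C_p) = L_p - 1$, and weighting by class size must recover the $2^p - 2$ acyclic orientations of $C_p$.

```python
from math import factorial, prod

def partitions(n, k, maxpart):
    """Partitions of n into exactly k parts, each in [1, maxpart],
    listed in weakly decreasing order."""
    if k == 0:
        if n == 0:
            yield ()
        return
    for first in range(min(n - k + 1, maxpart), 0, -1):
        for rest in partitions(n - first, k - 1, first):
            yield (first,) + rest

def mec_size_counts_cycle(p):
    """Size distribution of the MECs on the cycle C_p: each partition of
    p-k into k parts of size at most p-2k+1 contributes
    (p/k) * multinomial(k; m_1, ..., m_{p-2k+1}) classes, each of size
    equal to the product of the parts."""
    counts = {}
    for k in range(1, p // 2 + 1):
        for lam in partitions(p - k, k, p - 2 * k + 1):
            mult = [lam.count(i) for i in set(lam)]
            num = p * factorial(k)
            den = k * prod(factorial(m) for m in mult)
            counts[prod(lam)] = counts.get(prod(lam), 0) + num // den
    return counts
```

For $p = 4$ this gives four classes of size $3$ and two of size $1$: six classes in total, matching $L_4 - 1 = 6$.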
\begin{remark}
\label{rmk: lucas triangle}
It is a well-known result that the coefficient of $x^k$ in the $(p-1)^{st}$ Fibonacci polynomial is the binomial coefficient ${p-k-1\choose k}$, and that this is also the number of compositions of $p-k$ into $k+1$ parts.
The former result says that the $(p-1)^{st}$ Fibonacci polynomial has coefficients given by the $(p-1)^{st}$ diagonal of Pascal's triangle, and so the latter result gives a compositional interpretation of the corresponding entry in Pascal's Triangle; see Figure~\ref{fig: pascal-and-lucas-triangles} (left).
In this section, we saw that this compositional interpretation of ${p-k-1\choose k}$ results in the proof of Theorem~\ref{thm: path vectors}.
\begin{figure}
\caption{The $p^{th}$ diagonals of Pascal's triangle (left) and Lucas' triangle (right).}
\label{fig: pascal-and-lucas-triangles}
\end{figure}
Analogously, the $p^{th}$ diagonal of a second triangle, called \emph{Lucas' triangle} in \cite{BS15}, corresponds to the coefficients of the $p^{th}$ Lucas polynomial.
This triangle is depicted on the right in Figure~\ref{fig: pascal-and-lucas-triangles}.
Thus, the proof of Theorem~\ref{thm: cycle vectors} results in a combinatorial interpretation of the entries of this triangle via partitions.
In particular, the entry of the Lucas triangle corresponding to the $k^{th}$ coefficient of $L_p(x)$ is
$$
[x^k]L_p(x) = \sum_{{\bf m}\in \mathbb{P}[p-2k+1,k,p-k]}\frac{p}{k}{k\choose m_1,\ldots,m_{p-2k+1}}.
$$
Moreover, the binomial recursion on the triangle implies that these coefficients satisfy the identity
$$
[x^k]L_p(x) = [x^{k-1}]L_{p-2}(x)+[x^k]L_{p-1}(x).
$$
To the best of the authors' knowledge, such a partition identity is new to the combinatorial literature.
\end{remark}
\subsection{Stars and bistars}
\label{subsec: stars and bistars}
We now study the star and bistar graphs, $G_1(p)$ and $G_2(p,q)$.
An example of a star and a bistar is given in Figure~\ref{fig: stars and bistars}.
\begin{figure}
\caption{On the left is a star and on the right is a bistar.}
\label{fig: stars and bistars}
\end{figure}
The number of MECs on stars and their sizes will play an important role in Sections~\ref{sec: bounding the size and number of mecs on trees} and~\ref{sec: classic families of trees}.
\begin{theorem}
\label{thm: stars}
The MECs on the $p$-star $G_1(p)$ have the polynomial generating function
$$
M(G_1(p);x) = 1+\sum_{k\geq2}{p\choose k}x^{{k\choose 2}}.
$$
In particular,
$$
M(G_1(p)) = 2^p-p.
$$
Moreover, the corresponding class sizes are
$$
s_1(G_1(p)) = 2^p-p-1
\qquad
\mbox{and}
\qquad
s_{p+1}(G_1(p)) = 1.
$$
\end{theorem}
\begin{proof}
Any immorality $i \rightarrow j \leftarrow k$ in a DAG on $G_1(p)$ must have the unique head node $j$ being the center node of $G_1(p)$, and the tail nodes $i$ and $k$ must be leaves of $G_1(p)$.
It follows that each MEC on $G_1(p)$ having at least one immorality is given by selecting any $k$-subset of the $p$ leaves for $k\geq2$ to be directed towards the center node and then directing all other edges outwards.
Each such $k$-subset yields a unique MEC of size one containing ${k\choose 2}$ immoralities.
The final MEC is the class containing no immoralities.
This class consists of all DAGs on $G_1(p)$ with a unique source node, and there are $p+1$ such DAGs.
\end{proof}
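The class structure just described admits a quick numerical cross-check (a sketch of ours, not the paper's code): the size-one classes are exactly the choices of at least two inward-directed leaves, the total class count must be $2^p - p$, and the class sizes, weighted by multiplicity, must account for all $2^p$ edge orientations of the star.

```python
from math import comb

def star_mec_data(p):
    """MEC size distribution on the p-star G_1(p): one size-one class for
    each subset of >= 2 leaves directed toward the center, plus a single
    immorality-free class of size p + 1."""
    num_one = sum(comb(p, k) for k in range(2, p + 1))  # = 2^p - p - 1
    return {1: num_one, p + 1: 1}
```

Summing sizes with multiplicity gives $(2^p - p - 1)\cdot 1 + (p+1) = 2^p$, as it must, since every orientation of a tree is acyclic.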
The formulas in Theorem~\ref{thm: stars} allow us to obtain similar formulas for bistars.
For convenience, we let
$$
P_m := \sum_{k=1}^{m}{m\choose k}x^{{k+1\choose 2}}.
$$
It will also be helpful to label edges that have specified roles in certain MECs.
The green edges (also labeled with $\square$) indicate that these edges cannot be involved in any immorality.
The red arrows (also labeled with $\ast$) indicate a fixed immorality in the partially directed graph, and the blue arrows (also labeled with $\circ$) represent fixed arrows that are not in immoralities.
\begin{theorem}
\label{thm: bistars}
The MECs on the bistar $G_2(p,q)$ have the polynomial generating function
$$
M(G_2(p,q);x) = M(G_1(p);x)P_q+ M(G_1(q);x)P_p + M(G_1(p);x) + M(G_1(q);x) - 1.
$$
In particular,
$$
M(G_2(p,q)) = 2^{p+q+1}-p2^q-q2^p-1.
$$
Moreover, the corresponding class sizes are
$$
s_1(G_2(p,q)) = 2^{p+q+1}-p2^q-q2^p-2^p-2^q,
$$
$$
s_{p+1}(G_2(p,q)) = 2^q-1,
\quad
s_{q+1}(G_2(p,q)) = 2^p-1,
\quad
{\it m}box{and}
\quad
s_{p+q+2}(G_2(p,q)) = 1.
$$
\end{theorem}
\begin{proof}
To count the MECs on the bistar $G_2(p,q)$ we consider three separate cases defined in terms of the edge $\{1,2\}$.
These three cases are:
\begin{enumerate}
\item The edge $\{1,2\}$ is in an immorality with at least one of the $p$ leaves attached to node $1$.
\item The edge $\{1,2\}$ is in an immorality with at least one of the $q$ leaves attached to node $2$.
\item The edge $\{1,2\}$ is not in an immorality.
\end{enumerate}
The three cases are depicted in Figure~\ref{fig: bistar cases}.
\begin{figure}
\caption{The three cases of the proof of Theorem~\ref{thm: bistars}.}
\label{fig: bistar cases}
\end{figure}
In the first case, at least one of the $p$ leaves attached to node $1$ must be in an immorality with the edge $\{1,2\}$, and the $q$ leaves attached to node $2$ can display any pattern of immoralities of the star $G_1(q)$.
This yields $M(G_1(q);x)P_p$ MECs as counted by their number of immoralities.
Similarly, case two yields $M(G_1(p);x)P_q$.
In the third case, in order for the edge $\{1,2\}$ to not appear in any immorality, we need that all edges at the head of $\{1,2\}$ point towards the leaves.
This yields $M(G_1(p);x) + M(G_1(q);x) - 1$ MECs as counted by their number of immoralities.
Thus,
$$
M(G_2(p,q);x) = M(G_1(p);x)P_q+ M(G_1(q);x)P_p + M(G_1(p);x) + M(G_1(q);x) - 1,
$$
and evaluating this polynomial at $1$ yields
$$
M(G_2(p,q)) = 2^{p+q+1}-p2^q-q2^p-1.
$$
Finally, to count the classes by size we again filter by the three cases $(1),(2),$ and $(3)$.
In the first case, there are $2^p-1$ ways for the edge $\{1,2\}$ to be in an immorality with any of the $p$ leaves at node $1$, and there are $2^q-q$ possible patterns of immoralities that can occur among the $q$ leaves at node $2$.
One of these $2^q-q$ patterns has class size $q+1$ (the class with no immoralities), and all others have size one.
Thus, case $(1)$ yields $2^p-1$ classes of size $q+1$ and $(2^q-q-1)(2^p-1)$ classes of size $1$.
Similarly, case $(2)$ yields $2^q-1$ classes of size $p+1$ and $(2^p-p-1)(2^q-1)$ classes of size $1$.
In case $(3)$, if both sets of leaves contain no immoralities, then we get a single class of size $p+q+2$.
If the $p$ leaves at node $1$ contain at least one immorality, then all leaves at node $2$ must be directed away from node $2$, yielding $2^p-p-1$ classes of size $1$.
Similarly, if the $q$ leaves at node $2$ contain at least one immorality, then we get another $2^q-q-1$ classes of size one.
Summing over these cases yields the desired formulae.
\end{proof}
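As with the star, the bistar counts admit a quick consistency check (our own sketch): the class count must match $M(G_2(p,q))$, and the class sizes, weighted by multiplicity, must sum to $2^{p+q+1}$, the number of orientations of the $p+q+1$ edges.

```python
def bistar_size_counts(p, q):
    """Class-size counts for the bistar G_2(p,q), assembled from the four
    size formulas of the theorem (sizes accumulated additively, since
    p + 1 and q + 1 coincide when p = q)."""
    s1 = 2**(p + q + 1) - p * 2**q - q * 2**p - 2**p - 2**q
    counts = {}
    for size, num in ((1, s1), (q + 1, 2**p - 1),
                      (p + 1, 2**q - 1), (p + q + 2, 1)):
        counts[size] = counts.get(size, 0) + num
    return counts
```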
\section{Bounding the Size and Number of MECs on Trees}
\label{sec: bounding the size and number of mecs on trees}
We begin this section by deriving upper and lower bounds on the number of MECs for trees on $p$ nodes.
We show that these bounds are achieved by the $(p-1)$-star $G_1(p-1)$ and the $p$-path $I_p$, respectively.
This result parallels the classic result of \cite{PT82}, which states that the number of independent sets in a tree on $p$ nodes is bounded by the number of independent sets in $G_1(p-1)$ and $I_p$, respectively.
\begin{theorem}
\label{thm: bound on the number of mecs for trees}
Let $T_p$ be a tree on $p$ nodes. Then
$$
F_{p-1} = M(I_p) \leq M(T_p) \leq M(G_1(p-1)) = 2^{p-1}-p+1.
$$
\end{theorem}
\begin{proof}
We first prove the upper bound on $M(T_p)$.
Since $T_p$ is a tree, it has precisely $p-1$ edges, and so there are $2^{p-1}$ edge orientations on $T_p$.
Of these $2^{p-1}$ orientations, the $p$ orientations given by selecting a unique source node in $T_p$ all belong to the same MEC.
So there are at most $2^{p-1}-p+1$ MECs for $T_p$.
By Theorem~\ref{thm: stars}, this bound is achieved by the $(p-1)$-star $G_1(p-1)$.
To prove the lower bound, we use a simple inductive argument.
Notice first that the bound is true when $p\leq 5$.
Now recall that every tree on $p$ nodes can be constructed from a tree $T_{p-1}$ on $p-1$ nodes in one of two ways: $(1)$ attaching a leaf to a degree 1 node of $T_{p-1}$, or $(2)$ attaching a leaf to a node of $T_{p-1}$ that is a neighbor of a leaf. (Choosing the new leaf to be an endpoint of a longest path in the tree ensures that one of these two cases applies.)
Thus, given a tree $T_{p-1}$ on $p-1$ nodes, it suffices to show that when we construct $T_p$ from $T_{p-1}$ via $(1)$ or $(2)$, the number of MECs increases by at least $F_{p-3}$.
In case $(1)$, we attach a leaf node $v$ to a leaf $u$ of $T_{p-1}$, whose only neighbor in $T_{p-1}$ is some node $w$.
The MECs on $T_{p}$ then come in two types: either the edge $\{v,u\}$ is not in an immorality or it is in the immorality $v\rightarrow u\leftarrow w$.
The number of classes in the first case is $M(T_{p-1})$ and the number of classes in the second case is $M(T_{p-1}\backslash u)$.
So by the inductive hypothesis we have that
$$
M(T_p)\geq M(T_{p-1})+M(T_{p-1}\backslash u)\geq F_{p-2}+F_{p-3} = F_{p-1}.
$$
In case $(2)$, the leaf node $v$ is attached to some node $u$ of $T_{p-1}$ that has at least one leaf $w$ in $T_{p-1}$. The MECs on $T_p$ contain two disjoint types of classes: classes in which the edge $\{v,u\}$ is not in an immorality and classes containing the immorality $v\rightarrow u\leftarrow w$. Similar to the previous case, it then follows from the inductive hypothesis that
$$
M(T_p)\geq M(T_{p-1})+M(T_{p-1}\backslash w)\geq F_{p-2}+F_{p-3} = F_{p-1},
$$
which completes the proof.
\end{proof}
We now derive bounds on the size of the MEC for a fixed DAG $\mathcal{T}_p$ on the underlying undirected graph $T_p$.
These bounds will be computed in terms of the structure of the \emph{essential graph} $\widehat{\mathcal{T}_p}$ of the MEC $[\mathcal{T}_p]$.
Recall that the essential graph of an MEC $[\mathcal{G}]$ is a partially directed graph $\widehat{\mathcal{G}}:=([p],E,A)$, where the collection of arrows $A$ in $\widehat{\mathcal{G}}$ are the arrows that point in the same direction for every member of the class, and the undirected edges $E$ represent the arrows that change orientation to distinguish between members of the class; see~\cite{AMP97}.
The \emph{chain components} of $\widehat{\mathcal{G}}$ are its undirected connected components, and its \emph{essential components} are its directed connected components.
To see why it is reasonable to work with the essential graph to derive such bounds, consider the analysis of the MEC sizes for stars and bistars given in Theorems~\ref{thm: stars} and \ref{thm: bistars}.
In order to derive the possible sizes of these MECs, we implicitly counted all possible orientations of the undirected edges in the essential graph of each class.
Since understanding the possible orientations of these edges is equivalent to knowing the size of the class, we will bound the size of the MEC of $\mathcal{T}_p$ in terms of the number and size of the chain components of $\widehat{\mathcal{T}_p}$.
We will see that the computed bounds are tight, and that stars play an important role in achieving these bounds.
We refer the reader to \cite{AMP97} for the basics relating to essential graphs.
In the following, we assume that the essential graph $\widehat{\mathcal{T}_p}$ has chain components $\tau_1,\tau_2,\ldots,\tau_\ell$ for $\ell>0$.
We also assume that each $\tau_i$ is \emph{nontrivial}; i.e. it has at least two vertices.
We let $\mathcal{G}(\widehat{\mathcal{T}_p})$ denote the directed subforest of the essential graph $\widehat{\mathcal{T}_p}$ consisting of all directed edges of $\widehat{\mathcal{T}_p}$, and we let $\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_m$ denote its connected components.
\begin{lemma}
\label{lem: size of MEC for trees}
Let $\mathcal{T}_p$ be a directed tree on $p$ nodes and $\widehat{\mathcal{T}_p}$ the corresponding essential graph. If $\widehat{\mathcal{T}_p}$ has chain components $\tau_1,\tau_2,\ldots,\tau_\ell$, then the size of the Markov equivalence class $[\mathcal{T}_p]$ is
$$
\#[\mathcal{T}_p] = \prod_{i=1}^\ell|V(\tau_i)|.
$$
\end{lemma}
\begin{proof}
Each element of $[\mathcal{T}_p]$ corresponds to one of the ways to direct the components $\tau_1,\ldots,\tau_\ell$, each of which is a tree.
Suppose we directed $\tau_i$ so that it has two source nodes $s_1$ and $s_2$.
Then along the unique path between $s_1$ and $s_2$ in the directed $\tau_i$, there must lie an immorality that is not present in $\widehat{\mathcal{T}_p}$.
Thus, the only admissible directions of the components $\tau_i$ have no more than one source node.
Since every DAG has at least one source node, the number of admissible directions of each $\tau_i$ is precisely the number of ways to pick the unique source node of $\tau_i$.
This is precisely the number of vertices in $\tau_i$, thereby completing the proof.
\end{proof}
\begin{theorem}
\label{thm: bounding the size of an MEC for trees}
Let $\mathcal{T}_p$ be a directed tree on $p$ nodes and $\widehat{\mathcal{T}_p}$ the corresponding essential graph.
Suppose that $\widehat{\mathcal{T}_p}$ has $\ell>0$ chain components $\tau_1,\tau_2,\ldots,\tau_\ell$ and that the directed subforest $\mathcal{G}(\widehat{\mathcal{T}_p})$ of $\widehat{\mathcal{T}_p}$ has $m\geq 0$ connected components $\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_m$.
Then
$$
2^\ell\leq \#[\mathcal{T}_p] \leq \left(\frac{p-m}{\ell}\right)^\ell.
$$
\end{theorem}
\begin{proof}
Notice first that the lower bound is immediate from Lemma~\ref{lem: size of MEC for trees} and the assumption that each $\tau_i$ is nontrivial.
So it only remains to verify the proposed upper bound.
Let $\ell_i$ denote the number of chain components that are adjacent to $\varepsilon_i$ for all $i\in[m]$.
Since the chain components $\tau_1,\ldots,\tau_\ell$ are all disjoint, it follows that
$$
1+\ell_i\leq |V(\varepsilon_i)|
$$
for all $i\in[m]$.
Therefore, a lower bound on the number of nodes in the directed subforest $\mathcal{G}(\widehat{\mathcal{T}_p})$ is given by
$$
m+\sum_{i=1}^m \ell_i
\leq
|V(\mathcal{G}(\widehat{\mathcal{T}_p}))|.
$$
A closed form for the sum $\sum_{i=1}^m \ell_i$ is recovered as follows.
Consider a complete bipartite graph $K_{\ell,m}$ whose vertices are partitioned into two blocks $A$ and $B$ where $|A|=\ell$ and $|B|=m$.
The possible ways to assemble the components $\tau_1,\ldots,\tau_\ell$ and $\varepsilon_1,\ldots,\varepsilon_m$ into an essential tree are in bijection with the spanning trees of $K_{\ell,m}$.
For any such spanning tree $T$ of $K_{\ell,m}$, each edge of $T$ has exactly one vertex in each of $A$ and $B$.
Thus,
$$
\sum_{i=1}^m \ell_i
= \sum_{v\in A}\deg_{T}(v)
= \sum_{v\in B}\deg_{T}(v).
$$
Since $T$ is a tree, it follows that
\begin{equation}
\label{eq_1}
\sum_{i=1}^m \ell_i = \frac{\sum_{v\in A}\deg_{T}(v)+\sum_{v\in B}\deg_{T}(v)}{2} = \ell + m-1.
\end{equation}
Therefore,
$$
2m+\ell-1
\leq
|V(\mathcal{G}(\widehat{\mathcal{T}_p}))|.
$$
Moreover, since $\mathcal{T}_p$ has $p$ vertices, and each edge of a spanning tree of $K_{\ell,m}$ corresponds to exactly one of the vertices shared by $\mathcal{G}(\widehat{\mathcal{T}_p})$ and the chain components $\tau_1,\ldots, \tau_\ell$, we have that
\begin{equation}
\label{eqn: chain components vertex sum}
\sum_{j=1}^\ell |V(\tau_j)| = p+m+\ell-1-|V(\mathcal{G}(\widehat{\mathcal{T}_p}))|.
\end{equation}
Now by Lemma~\ref{lem: size of MEC for trees} and the arithmetic-geometric mean inequality, we have
\begin{equation*}
\begin{split}
\#[\mathcal{T}_p]=\prod_{j=1}^\ell |V(\tau_j)|\leq\left(\frac{\sum_{j=1}^\ell |V(\tau_j)|}{\ell}\right)^\ell.
\end{split}
\end{equation*}
Thus, by applying equation~(\ref{eqn: chain components vertex sum}), we conclude that
\begin{equation*}
\begin{split}
\#[\mathcal{T}_p]&\leq\left(\frac{p+m+\ell-1-|V(\mathcal{G}(\widehat{\mathcal{T}_p}))|}{\ell}\right)^\ell\leq\left(\frac{p+m+\ell-1-(2m+\ell-1)}{\ell}\right)^\ell,
\end{split}
\end{equation*}
and so $\#[\mathcal{T}_p]\leq ((p-m)/\ell)^\ell$, which completes the proof.
\end{proof}
We now examine the tightness of the bounds in Theorem~\ref{thm: bounding the size of an MEC for trees} by considering some special cases.
Notice first that the lower bound is tight exactly when each chain component is a single edge.
The upper bound is tight exactly when $|V(\mathcal{G}(\widehat{\mathcal{T}_p}))|=2m+\ell-1$ and each chain component has exactly $\frac{p-m}{\ell}$ vertices.
\begin{corollary}
\label{cor: tightness when m=1}
Suppose $\mathcal{G}(\widehat{\mathcal{T}_p})$ has precisely one connected component, i.e., $\mathcal{G}(\widehat{\mathcal{T}_p})$ is a directed tree.
Then
$$
2^\ell\leq\#[\mathcal{T}_p]\leq\left(\frac{p-1}{\ell}\right)^\ell,
$$
and every directed tree $\mathcal{T}_p$ for which the upper bound is tight has the same subtree $\mathcal{G}(\widehat{\mathcal{T}_p})$, namely $G_1(\ell)$ with all edges directed inwards.
\end{corollary}
\begin{proof}
The statement of the bounds is immediate from Theorem~\ref{thm: bounding the size of an MEC for trees}. So we only need to verify the claim on the tightness of the upper bound.
It follows from the more general bounds described above that the upper bound is tight exactly when $|V(\mathcal{G}(\widehat{\mathcal{T}_p}))|=\ell+1$ and each chain component has exactly $\frac{p-1}{\ell}$ vertices.
Since the chain components $\tau_1,\ldots,\tau_\ell$ are all distinct and $\mathcal{G}(\widehat{\mathcal{T}_p})$ is a directed tree with $\ell+1$ vertices, each $\tau_j$ is adjacent to exactly one of the $\ell$ vertices of $\mathcal{G}(\widehat{\mathcal{T}_p})$, and there remains only one vertex to connect these $\ell$ vertices.
Therefore, the skeleton of $\mathcal{G}(\widehat{\mathcal{T}_p})$ is the star $G_1(\ell)$.
Moreover, since all essential edges in $\widehat{\mathcal{T}_p}$ are exactly the edges of $\mathcal{G}(\widehat{\mathcal{T}_p})$, all edges of $\mathcal{G}(\widehat{\mathcal{T}_p})$ must be directed inwards towards the center node.
An example of a graph for which this upper bound is tight is presented on the left in Figure~\ref{fig: tight upper bound graphs}.
\end{proof}
\begin{figure}
\caption{Graphs for which the bounds in Corollary~\ref{cor: tightness when m=1} (left) and Corollary~\ref{cor: tightness when k=1} (right) are tight.}
\label{fig: tight upper bound graphs}
\end{figure}
\begin{corollary}
\label{cor: tightness when k=1}
Suppose $\widehat{\mathcal{T}_p}$ has precisely one chain component $\tau_1$.
Then
$$
m\leq\#[\mathcal{T}_p]\leq p-2m,
$$
and both bounds are tight when $\tau_1=G_1(m-1)$.
\end{corollary}
\begin{proof}
By Lemma~\ref{lem: size of MEC for trees} we know that $\#[\mathcal{T}_p] = |V(\tau_1)|$; so the bounds presented here are bounds on the size of the vertex set of the chain component $\tau_1$.
Since the connected components $\varepsilon_1,\ldots,\varepsilon_m$ of $\mathcal{G}(\widehat{\mathcal{T}_p})$ are all disjoint, we know that $\tau_1$ contains at least $m$ vertices.
On the other hand, since each $\varepsilon_i$ contains at least one immorality and attaches to $\tau_1$ at precisely one node, each $\varepsilon_i$ contains at least two nodes that are not also nodes of $\tau_1$.
A graph for which the bounds are simultaneously tight is depicted on the right in Figure~\ref{fig: tight upper bound graphs}.
Notice that the chain component $\tau_1$ is $G_1(m-1)$.
\end{proof}
Corollary~\ref{cor: tightness when m=1} and Corollary~\ref{cor: tightness when k=1} suggest the important role of the maximum degree of a graph for the size of MECs.
This is further supported and discussed via the results in the next section and the simulations in Section~\ref{subsec: skeletal structure in relation to the number and size of MECs}.
\section{Classic Families of Trees}
\label{sec: classic families of trees}
In this section, we study some classic families of trees that arise naturally in both applied and theoretical contexts.
Namely, we will study the graph polynomials $M(G;x)$ for \emph{spider graphs, caterpillar graphs}, and \emph{complete binary trees}.
A \emph{spider graph} (or \emph{star-like tree}) is any tree containing precisely one node with degree greater than two, a \emph{caterpillar graph} is any tree for which deleting all leaves results in a path, and a complete binary tree is a tree for which every nonleaf node (except for possibly a root node) has precisely three neighbors.
Caterpillars and complete binary trees play important roles for modeling events in time, as for example in phylogenetics.
Caterpillars and spiders also provide large families of supporting examples for long-standing conjectures about well-studied generating functions associated to trees.
Alavi, Malde, Schwenk, and Erd\H{o}s conjectured that the independence polynomial of every tree is unimodal \cite{AMSE87}, and Stanley conjectured that the chromatic symmetric function is a complete graph isomorphism invariant for trees \cite{S95}.
In \cite{LM02,LM03} and \cite{MMW08} the authors, respectively, verify that these conjectures hold for caterpillars and (some) spiders.
We show in the following that these important families of graphs also yield nice properties for the generating polynomial $M(G;x)$.
In Section~\ref{subsec: spiders} we provide a formula for $M(G;x)$ for spider graphs that generalizes our formula for stars and paths given in Section~\ref{sec: some first examples}.
Using these formulae we compute expressions for $M(G)$ that extend classical identities of the Fibonacci numbers.
The method for computing $M(G;x)$ for spiders generalizes to a multivariate formula for $M(G;x)$ for arbitrary trees with interesting combinatorial structure, which will also be described.
In Section~\ref{subsec: caterpillars} we recursively compute $M(G;x)$ for the caterpillars.
Using this recursive formula, we observe that these polynomials are all unimodal and estimate the expected number of immoralities in a randomly selected MEC on a caterpillar.
Finally, in Section~\ref{subsec: complete binary trees} we compute the number of MECs for a complete binary tree, and study the rate at which this value increases.
\subsection{Spiders}
\label{subsec: spiders}
We call the unique node of degree more than two in a spider its \emph{center} node.
A spider $G$ on $n$ nodes with center node of degree $k$ corresponds to a partition $\lambda = (\lambda_1,\ldots,\lambda_k)$ of $n-1$ into $k$ parts.
Following the standard notation, we assume $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_k>0$.
Here, $\lambda_i$ denotes the number of vertices on the $i^{th}$ \emph{leg} of $G$; i.e., the $i^{th}$ maximal connected subgraph of $G$ in which every vertex has degree at most two.
Conversely, given a partition $\lambda$ of $n-1$ into $k$ parts, we write $G_\lambda$ for the corresponding spider graph.
In the following, we label the vertices of $G_\lambda$ such that $p_0$ denotes the center node and $p_{ij}$, for $1\leq i\leq k$ and $1\leq j\leq\lambda_i$, denotes the $j^{th}$ node from $p_0$ along the $i^{th}$ leg of $G_\lambda$.
For a subset $S\subset[k]$, define the following polynomial:
$$
L(S;x) :=\left(\prod_{i\in S}M(I_{\lambda_i-1};x)\right)\left(\prod_{i\in[k]\backslash S}M(I_{\lambda_i};x)\right).
$$
We then have the following formula for the generating polynomial $M(G_\lambda;x)$.
\begin{theorem}
\label{thm: spider generating polynomials}
Let $G_\lambda$ denote the spider on $n$ nodes with center node of degree $k$ and partition $\lambda$ of $n-1$ into $k$ parts.
If $\lambda$ has $\ell$ parts of size one, then
$$
M(G_\lambda;x) = \sum_{j=0}^{k-\ell}\left(\sum_{S\in{[k-\ell]\choose j}}L(S;x)\right)x^jM(G_1(k-j);x).
$$
\end{theorem}
\begin{proof}
To arrive at this formula, simply notice that all possible placements of immoralities can be computed as follows:
First choose a subset of the $k-\ell$ nodes $\{p_{ij}:\lambda_i>1,i\in[k]\}$ at which to place immoralities.
Call this set $S$.
Since the nodes in $\{p_{ij}:\lambda_i>1,i\in[k]\}\backslash S$ are not heads of immoralities, all remaining immoralities are either at the center node $p_0$, which are counted by $M(G_1(k-|S|);x)$, or they are further down the legs of the spider, which are counted by $L(S;x)$.
\end{proof}
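The formula of this theorem is straightforward to implement with polynomials represented as coefficient dictionaries (a sketch of ours; the helper names are our own). For simplicity the sketch sums over subsets of the legs of length greater than one, which is equivalent to the subsets $S$ in the theorem.

```python
from itertools import combinations
from math import comb

def pmul(a, b):
    """Multiply two polynomials given as {degree: coefficient} dicts."""
    out = {}
    for i, c in a.items():
        for j, d in b.items():
            out[i + j] = out.get(i + j, 0) + c * d
    return out

def padd(a, b):
    out = dict(a)
    for i, c in b.items():
        out[i] = out.get(i, 0) + c
    return out

def M_path(p):
    """M(I_p; x) via the Fibonacci-polynomial recursion
    M(I_p) = M(I_{p-1}) + x * M(I_{p-2}), with M(I_1) = M(I_2) = 1."""
    a, b = {0: 1}, {0: 1}
    for _ in range(p - 2):
        a, b = b, padd(b, pmul({1: 1}, a))
    return a if p == 1 else b

def M_star_poly(k):
    """M(G_1(k); x) = 1 + sum_{j >= 2} C(k, j) x^{C(j, 2)}."""
    out = {0: 1}
    for j in range(2, k + 1):
        e = j * (j - 1) // 2
        out[e] = out.get(e, 0) + comb(k, j)
    return out

def M_spider(lam):
    """M(G_lambda; x) per the theorem, for a partition lam of n - 1."""
    k = len(lam)
    long_legs = [i for i in range(k) if lam[i] > 1]
    total = {}
    for j in range(len(long_legs) + 1):
        for S in combinations(long_legs, j):
            L = {0: 1}
            for i in range(k):
                L = pmul(L, M_path(lam[i] - 1 if i in S else lam[i]))
            total = padd(total, pmul(pmul(L, {j: 1}), M_star_poly(k - j)))
    return total
```

For example, `M_spider((2, 2))` returns the coefficients of $1 + 3x + x^2$, in agreement with the $\lambda = (2,2)$ computation in Remark~\ref{rem: special case of daddy long-legs formula}.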
The general formula in Theorem~\ref{thm: spider generating polynomials} specializes to $M(G_1(p-1);x)$ when $\lambda = (1,1,\ldots,1)$ is the partition of $p-1$ into $p-1$ parts; i.e., when $G_\lambda = G_1(p-1)$.
Similarly, for $k = 2$, it reduces to $M(I_p;x)$.
It also yields a nice formula for the number of MECs on the spiders with $\lambda = (m,m,\ldots,m)$ a partition of $mk$ into $k$ parts.
\begin{corollary}
\label{cor: daddy long-legs}
For $k>1$ and $m\geq 1$, the spider $G_\lambda$ on $mk+1$ nodes with partition $\lambda = (m,m,\ldots,m)$ of $mk$ into $k$ parts has
$$
M(G_\lambda) = F_{m+1}^k-kF_{m-1}F_m^{k-1}.
$$
\end{corollary}
\begin{proof}
For $m=1$ we have that $G_\lambda = G_1(k)$, and the above formula reduces to $2^k - k = M(G_1(k))$.
For $k>1$, we simplify the formula given in Theorem~\ref{thm: spider generating polynomials} to
$$
M(G_\lambda;x) = \sum_{j=0}^k{k\choose j}(xM(I_{m-1};x))^jM(I_m;x)^{k-j}M(G_1(k-j);x).
$$
Evaluating at $x =1$ yields
\begin{equation*}
\begin{split}
M(G_\lambda)
&= \sum_{j=0}^k{k\choose j}F_{m-2}^jF_{m-1}^{k-j}(2^{k-j}-(k-j)),\\
&= \sum_{j=0}^k{k\choose j}F_{m-2}^j(2F_{m-1})^{k-j}-\sum_{j=0}^k{k\choose j}(k-j)F_{m-2}^j(F_{m-1})^{k-j},\\
&= (F_{m-2}+2F_{m-1})^{k}-kF_{m-1}(F_{m-2}+F_{m-1})^{k-1},\\
&= F_{m+1}^k-kF_{m-1}F_m^{k-1},\\
\end{split}
\end{equation*}
which completes the proof.
\end{proof}
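In the paper's indexing convention ($F_1 = 1$, $F_2 = 2$, so $F_n$ is the standard Fibonacci number shifted by one), the identities this corollary produces can be checked numerically with a few lines (our own sketch):

```python
def fib(n):
    """Fibonacci numbers in the paper's convention: F_0 = F_1 = 1, F_2 = 2."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# For k = 2 the spider G_lambda is the path I_{2m+1}, so the corollary's
# formula F_{m+1}^2 - 2 F_{m-1} F_m must equal M(I_{2m+1}) = F_{2m},
# which also equals F_m^2 + F_{m-1}^2 by Lucas' identity.
```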
\begin{remark}
\label{rem: special case of daddy long-legs formula}
In the special case of Corollary~\ref{cor: daddy long-legs} for which $k =2$ we have that $G_\lambda = I_{2m+1}$, and so $M(G_\lambda) = F_{2m}$ by Theorem~\ref{thm: path and cycle polynomials}.
In Corollary~\ref{cor: daddy long-legs}, we see that the formula for $M(G_\lambda)$ given by Theorem~\ref{thm: spider generating polynomials} is computing the Fibonacci number $F_{2m}$ via a classic identity discovered by Lucas in 1876 (see for instance \cite{K11}):
$$
F_{2m} = F_{m+1}^2-2F_{m-1}F_m = F_m^2+F_{m-1}^2.
$$
Notice that the same expression does not hold for the generating polynomials:
$$
M(G_\lambda;x) \neq M(I_{m+2};x)^k - kM(I_m;x)M(I_{m+1};x)^{k-1}.
$$
This is because $M(G_1(p);x) = 1+\sum_{k\geq2}{p\choose k}x^{k\choose 2}$ as opposed to $1+\sum_{k\geq2}{p\choose k}x^k$.
However, when the formula for $M(G_\lambda;x)$ used in the proof of Corollary~\ref{cor: daddy long-legs} is evaluated at $x = 1$, the exponents in the formula for $M(G_1(p);x)$ become irrelevant.
For instance, in the case when $\lambda = (2,2)$, we have that
\begin{equation*}
\begin{split}
M(G_\lambda;x) &= x^2+3x+1, \mbox{ but} \\
M(I_{m+2};x)^k - kM(I_m;x)&M(I_{m+1};x)^{k-1} = 4x^2+2x-1.\\
\end{split}
\end{equation*}
However, evaluating both polynomials at $x=1$ results in the Fibonacci number $F_4=5$, as predicted by Corollary~\ref{cor: daddy long-legs}.
\end{remark}
We end this section with a remark and example illustrating the more general consequences of the techniques used in the computation of $M(G_\lambda;x)$ in Theorem~\ref{thm: spider generating polynomials}.
\begin{remark}
\label{rem: a general formula for all trees}
It is natural to ask if the recursive approach used to prove Theorem~\ref{thm: spider generating polynomials} generalizes to arbitrary trees.
In particular, it would be nice if for any tree $T$, the polynomial $M(T;x)$ can be expressed as
\begin{equation}
\label{eqn: desired form}
M(T;x) = \sum_{\alpha=(\alpha_2,\ldots,\alpha_{n-1})\in\mathbb{Z}^{n-2}_{\geq0}}c_\alpha {\bf s}^{\alpha},
\end{equation}
where $s_i :=M(G_1(i);x)$ for $i = 2,\ldots,n-1$, ${\bf s}^\alpha:=s_2^{\alpha_2}s_3^{\alpha_3}\cdots s_{n-1}^{\alpha_{n-1}}$, and the $c_\alpha$ are polynomials in $x$ with nonnegative integer coefficients.
On the one hand, there exists an (albeit cumbersome) recursion for computing $M(T;x)$ that generalizes the one used in Theorem~\ref{thm: spider generating polynomials}.
On the other hand, this recursion will not yield an expression of the form in equation~(\ref{eqn: desired form}) unless $T$ has at most one node of degree more than two.
Instead, if we take
$$
a_p := \sum_{k\geq2}{p-1\choose k}x^{k\choose2}
\quad
\mbox{ and}
\quad
b_p := \sum_{k\geq2}{p-1\choose k-1}x^{k\choose2},
$$
then we can express $M(T;x)$ as
\begin{equation}
\label{eqn: obtainable form}
M(T;x) = \sum_{\alpha = (\alpha_1,\alpha_2,\alpha_3)\in\mathbb{Z}^{n-2}_{\geq0}\times\mathbb{Z}^{n-2}_{\geq0}\times\mathbb{Z}^{n-2}_{\geq0}}c_\alpha {\bf s}^{\alpha_1}{\bf a}^{\alpha_2}{\bf b}^{\alpha_3},
\end{equation}
where ${\bf a}^{\alpha}$ and ${\bf b}^{\alpha}$ are defined analogously to ${\bf s}^{\alpha}$, and the $c_\alpha$ are polynomials in $x$ with nonnegative integer coefficients.
The algorithm resulting in the expression for $M(T;x)$ given in equation~(\ref{eqn: obtainable form}) is the natural generalization of Theorem~\ref{thm: spider generating polynomials}. Since it is technical to formalize, we only illustrate it here with Example~\ref{ex: illustrative example}.
\end{remark}
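The auxiliary polynomials $a_p$ and $b_p$ are easy to tabulate directly from their definitions. The following Python sketch is our own illustration (not part of the paper); it stores each polynomial as a dictionary mapping exponents to coefficients, which is collision-free since the exponents $\binom{k}{2}$ are distinct for distinct $k$.

```python
from math import comb

def a_poly(p):
    """a_p = sum_{k>=2} C(p-1, k) * x^{C(k,2)}, as {exponent: coefficient}."""
    return {comb(k, 2): comb(p - 1, k) for k in range(2, p)}

def b_poly(p):
    """b_p = sum_{k>=2} C(p-1, k-1) * x^{C(k,2)}, as {exponent: coefficient}."""
    return {comb(k, 2): comb(p - 1, k - 1) for k in range(2, p + 1)}

# For example, a_6 = 10x + 10x^3 + 5x^6 + x^{10} and
# b_6 = 5x + 10x^3 + 10x^6 + 5x^{10} + x^{15}.
```

These are exactly the quantities $a_6$ and $b_6$ used in Example~\ref{ex: illustrative example} below.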
\begin{example}
\label{ex: illustrative example}
Consider the tree $T$ on $12$ nodes depicted in Figure~\ref{fig: illustrative example}.
We follow the same approach for counting MECs in $T$ that we used to count the MECs in $G_\lambda$ in Theorem~\ref{thm: spider generating polynomials}.
That is, we select a center node, choose a collection of immoralities at its nonleaf neighbors, and count the possible classes containing these immoralities.
Thinking of node $0$ as the analogous vertex to the center node of a spider, we notice that it has precisely one nonleaf neighbor, namely node $1$.
The MECs on $T$ with node $1$ in an immorality are counted by $xs_5^2$.
Now consider those MECs on $T$ for which $1$ is not in an immorality.
Analogous to the proof of Theorem~\ref{thm: spider generating polynomials}, we must consider the MECs on the $6$-star with center node $0$ and leaves $1,8,9,10,11,$ and $12$.
Notice $b_6$ enumerates the MECs on this $6$-star that use the arrow $0\leftarrow 1$, and $a_6+1$ enumerates those MECs not using this arrow.
For those enumerated by $b_6$, we then count the number of MECs on the induced subtree $T^\prime$ with vertex set $[7]$.
This gives $b_6M(T^\prime;x)$.
For the MECs enumerated by $a_6+1$, we must consider more carefully the structure of immoralities on $T^\prime$.
The constant $1$ counts the choice of no immoralities on the $6$-star, and this yields $1M(T^\prime;x)$ MECs on $T$.
On the other hand, $a_6$ counts those classes on the $6$-star with at least one immorality using the arrow $0\leftarrow1$.
For these, we take node $2$ as the center node of $T^\prime$, which has precisely one nonleaf neighbor, node $3$.
The ways in which node $3$ can be in an immorality are counted by $b_5$.
If node $3$ is not in an immorality, then either $2$ is in an immorality or there are no immoralities on $T^\prime$.
This yields
$
a_6(1+b_5+xs_4).
$
Using the same techniques, we compute that $M(T^\prime;x) = s_2s_4+b_5$.
Combining these formulae yields
\begin{equation*}
\begin{split}M(T;x)
&=xs_5^2+s_2s_4+s_2s_4b_6+b_5+b_5b_6+xs_4a_6+a_6+a_6b_5.\\
\end{split}
\end{equation*}
In general, this iterative process of picking a center node for a tree $T$, choosing immorality placements for its nonleaf neighbors, and then enumerating the resulting possible MECs based on these choices results in an expression of the form given by equation~(\ref{eqn: obtainable form}).
The monomial ${\bf s}^{\alpha_1}{\bf a}^{\alpha_2}{\bf b}^{\alpha_3}$ enumerates the possible placements of immoralities at the chosen sequence of center nodes and the coefficient polynomial $c_\alpha$ is enumerating the ways to fix immoralities at their nonleaf neighbors to allow for these placements. \qed
\end{example}
\begin{figure}
\caption{The tree for Example~\ref{ex: illustrative example}.}
\label{fig: illustrative example}
\end{figure}
Theorem~\ref{thm: spider generating polynomials} demonstrates that for some trees the expression for $M(T;x)$ given by the algorithmic approach described in Example~\ref{ex: illustrative example} can have nice coefficient polynomials $c_\alpha$.
It is important to note that the expression for $M(T;x)$ given in equation~(\ref{eqn: obtainable form}) depends on the initial choice of center node.
However, as exhibited by Theorem~\ref{thm: spider generating polynomials}, a well-chosen initial center node and number of iterations of this decomposition can yield nice combinatorial expressions for $M(T;x)$ of the form~(\ref{eqn: desired form}) and/or~(\ref{eqn: obtainable form}).
For example, if $T$ is the spider graph, one iteration of this decomposition initialized at the spider's center yields coefficient polynomials $c_\alpha$ that are products of Fibonacci polynomials, and when all legs are the same length, they are therefore real-rooted, log-concave, and unimodal.
It would be interesting to know whether other families of trees yield coefficient polynomials $c_\alpha$ with nice combinatorial properties.
Moreover, it is unclear if for every tree $T$ the polynomial $M(T;x)$ admits an expression as in equation~(\ref{eqn: desired form}).
\subsection{Caterpillars}
\label{subsec: caterpillars}
We denote the caterpillar graph $W_p$ as
$$
W_p :=
\begin{cases}
G_{\frac{p}{2}}\left(1,1,\ldots,1\right) & \mbox{if $p$ is even,} \\
G_{\frac{p+1}{2}}\left(1,1,\ldots,1,0 \right) & \mbox{if $p$ is odd.} \\
\end{cases}
$$
\begin{figure}
\caption{The first few caterpillar graphs.}
\label{fig: caterpillars}
\end{figure}
The first few caterpillar graphs are depicted in Figure~\ref{fig: caterpillars}.
Since the caterpillar graphs are closely related to paths, we would expect that a similar recursive approach also works for counting the number of MECs on $W_p$.
Indeed, with the following theorem, we provide a recursive formula for $M(W_p;x)$.
\begin{theorem}
\label{thm: caterpillars polynomials}
Let $\mathbb{W}_p:=M(W_p;x)$ for $p\geq 1$.
These generating polynomials satisfy a recursion with initial conditions
\begin{equation*}
\mathbb{W}_1 = 1,
\quad
\mathbb{W}_2 = 1,
\quad
\mathbb{W}_3 = 1+x,
\quad
\mathbb{W}_4 = 1+2x,
\end{equation*}
and for $p\geq5$
$$
\mathbb{W}_p = \begin{cases}
\mathbb{W}_{p-1} + x\mathbb{W}_{p-2} & \mbox{for $p$ even,} \\
(x+2)\mathbb{W}_{p-2} + (x^3-x^2+x-2)\mathbb{W}_{p-3} +(x^2+1)\mathbb{W}_{p-4} & \mbox{for $p$ odd.}\\
\end{cases}
$$
\end{theorem}
\begin{figure}
\caption{The four cases for the recursion on the caterpillar graph for $p$ odd. }
\label{fig: caterpillar recursion cases}
\end{figure}
\begin{proof}
Notice first that when $p$ is even, we can simply apply the Fibonacci recursion
$$
M(G_{\frac{p}{2}}(1,1,\ldots,1);x) = M(G_{\frac{p}{2}}(1,1,\ldots,1,0);x) + xM(G_{\frac{p}{2}-1}(1,1,\ldots,1);x).
$$
The recursion is based on whether or not the final edge is contained within an immorality.
Now let $p = 2k+1$ be odd.
We first show that
$$
\mathbb{W}_p = \mathbb{W}_{p-1} + (x^3+x)\mathbb{W}_{p-3} + x\mathbb{W}_{p-2} - x^2\sum_{j = 2}^{\left\lfloor\frac{p}{2}\right\rfloor}\mathbb{W}_{p-2j-1}.
$$
This recursion can be derived by considering the ways in which the final edge can or cannot be in an immorality.
That is, either it is not in an immorality, or it is in an immorality with some nonempty subset of edges adjacent to it, as depicted in Figure~\ref{fig: caterpillar recursion cases}. Collectively, cases $(1)$, $(2)$, and $(3)$ yield
$$
\mathbb{W}_{p-1} + (x^3+x)\mathbb{W}_{p-3}
$$
MECs.
On the other hand, case $(4)$ yields $x\mathbb{W}_{p-2}$ minus some over-counted cases.
The over-counted cases correspond to exactly when the first immorality to the right of the one depicted in case $(4)$ points towards the right, as depicted in Figure~\ref{fig: caterpillar overcounted cases}.
\begin{figure}
\caption{The over-counted cases of case $(4)$ in the caterpillar recursion for $p$ odd.}
\label{fig: caterpillar overcounted cases}
\end{figure}
Each such case would naturally force one more unspecified immorality.
Thus, the total number of MECs counted by case $(4)$ is
$$
x\mathbb{W}_{p-2} - x^2\sum_{j = 2}^{\left\lfloor\frac{p}{2}\right\rfloor}\mathbb{W}_{p-2j-1}.
$$
Since $p-1$ is even, we may apply the Fibonacci recursion to $\mathbb{W}_{p-1}$ to obtain
$$
\mathbb{W}_p - (x+1)\mathbb{W}_{p-2} = (x^3+2x)\mathbb{W}_{p-3} - x^2\sum_{j = 2}^{\left\lfloor\frac{p}{2}\right\rfloor}\mathbb{W}_{p-2j-1}.
$$
We then consider the difference between $\mathbb{W}_p - (x+1)\mathbb{W}_{p-2}$ and $\mathbb{W}_{p-2} - (x+1)\mathbb{W}_{p-4}$, and repeatedly apply the Fibonacci recursion to the even terms.
The result is
\begin{equation*}
\begin{split}
\mathbb{W}_p - (x+2)\mathbb{W}_{p-2} &= (x^3+x-1)\mathbb{W}_{p-4} + (x^3-x^2+x-2)(\mathbb{W}_{p-3} - \mathbb{W}_{p-4}).
\end{split}
\end{equation*}
This simplifies to
$$
\mathbb{W}_p = (x+2)\mathbb{W}_{p-2} + (x^3-x^2+x-2)\mathbb{W}_{p-3} +(x^2+1)\mathbb{W}_{p-4},
$$
thereby completing the proof.
\end{proof}
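The recursion of Theorem~\ref{thm: caterpillars polynomials} is easy to verify numerically. The following Python sketch is our own check (not part of the paper); polynomials are stored as coefficient lists in increasing degree, and the Fibonacci-type step is applied at even $p$ and the four-term step at odd $p$, exactly as in the proof.

```python
def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def caterpillar_polys(n):
    """Return {p: coefficients of M(W_p; x)} for p = 1..n."""
    W = {1: [1], 2: [1], 3: [1, 1], 4: [1, 2]}
    x = [0, 1]
    c1, c2, c3 = [2, 1], [-2, 1, -1, 1], [1, 0, 1]  # x+2, x^3-x^2+x-2, x^2+1
    for p in range(5, n + 1):
        if p % 2 == 0:  # Fibonacci-type step, as in the proof
            W[p] = poly_add(W[p - 1], poly_mul(x, W[p - 2]))
        else:           # four-term step for odd p
            W[p] = poly_add(poly_add(poly_mul(c1, W[p - 2]),
                                     poly_mul(c2, W[p - 3])),
                            poly_mul(c3, W[p - 4]))
    return W
```

Evaluating each polynomial at $x=1$ (i.e., summing its coefficients) recovers the counts $1,1,2,3,7,10,22,\ldots$ in the first column of Table~\ref{table: caterpillars}.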
The first few polynomials $M(W_p;x)$ for $1\leq p \leq 14$, and the number of MECs on $W_p$, are displayed in Table~\ref{table: caterpillars}.
These polynomials all appear to be unimodal.
Using the recursion in Theorem~\ref{thm: caterpillars polynomials} we can estimate that the immorality number of $W_p$ is $m(W_p)=\left\lfloor\frac{p}{2}\right\rfloor+\left\lfloor\frac{p}{4}\right\rfloor$, and that the expected number of immoralities in a randomly chosen MEC on $W_p$ approaches $\left\lfloor\frac{m(W_p)}{2}\right\rfloor$.
As an immediate corollary to Theorem~\ref{thm: caterpillars polynomials}, we get a recursion for the number of MECs $M(W_p)$.
\begin{corollary}
\label{thm: caterpillars}
The number of MECs for the caterpillar graph $W_p$ is given by the recursion
$$
M(W_1) = 1,
\qquad
M(W_2) = 1,
\qquad
M(W_3) = 2,
\qquad
M(W_4) = 3,
$$
and for $p\geq5$
$$
M(W_p) =
\begin{cases}
M(W_{p-1}) + M(W_{p-2}) & {\it m}box{if $p$ is even,} \\
3M(W_{p-2}) + M(W_{p-4}) - M(W_{p-5}) & {\it m}box{if $p$ is odd.} \\
\end{cases}
$$
\end{corollary}
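The integer recursion of Corollary~\ref{thm: caterpillars} can likewise be iterated directly. The following minimal Python sketch is ours; note that we adopt the convention $M(W_0)=0$, which is needed for the odd case at $p=5$.

```python
def caterpillar_counts(n):
    """M(W_p) for p = 1..n via the corollary; convention M(W_0) = 0."""
    M = {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
    for p in range(5, n + 1):
        if p % 2 == 0:
            M[p] = M[p - 1] + M[p - 2]
        else:
            M[p] = 3 * M[p - 2] + M[p - 4] - M[p - 5]
    return [M[p] for p in range(1, n + 1)]
```

The output for $n=14$ matches the first column of Table~\ref{table: caterpillars}.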
\begin{table}[t]
\centering
\begin{tabular}{ c | l }
$M(W_p)$ & $M(W_p;x)$ \\ \hline
$1$ & $1$ \\ \hline
$1$ & $1$ \\ \hline
$2$ & $x+1$ \\ \hline
$3$ & $2x+1$ \\ \hline
$7$ & $x^3+x^2+4x+1$ \\ \hline
$10$ & $x^3+3x^2+5x+1$ \\ \hline
$22$ & $3x^4+3x^3+8x^2+7x+1$ \\ \hline
$32$ & $4x^4+6x^3+13x^2+8x+1$ \\ \hline
$70$ & $x^6+6x^5+13x^4+16x^3+23x^2+10x+1$ \\ \hline
$102$ & $x^6+10x^5+19x^4+29x^3+31x^2+11x+1$ \\ \hline
$222$ & $5x^7+13x^6+39x^5+46x^4+59x^3+46x^2+13x+1$ \\ \hline
$324$ & $6x^7+23x^6+58x^5+75x^4+90x^3+57x^2+14x+1$ \\ \hline
$704$ & $x^9+15x^8+39x^7+97x^6+147x^5+158x^4+153x^3+77x^2+16x+1$ \\ \hline
$1028$ & $x^9+21x^8+62x^7+155x^6+222x^5+248x^4+210x^3+91x^2+17x+1$ \\
\end{tabular}
\caption{The number of MECs of $W_p$ for $1\leq p \leq 14$, and the associated polynomial generating functions $M(W_p;x)$.}
\label{table: caterpillars}
\end{table}
\subsection{Complete Binary Trees}
\label{subsec: complete binary trees}
In the following, we let $T_k$ denote the complete binary tree containing $2^k-1$ nodes and $A_k$ denote the additive tree constructed by adding one leaf to the root node of $T_k$.
These two trees are depicted in Figure~\ref{fig: complete and additive trees} for $k=3$.
\begin{figure}
\caption{The complete binary tree $T_3$ is depicted on the left and the additive tree $A_3$ is depicted on the right.}
\label{fig: complete and additive trees}
\end{figure}
We will now use a series of recursions to enumerate the number of MECs on $T_k$ and $A_k$. We will then show that the ratio $\frac{M(A_k)}{M(T_k)} < 4$, which means that adding an edge to the root of a complete binary tree increases the number of MECs by at most a factor of 4.
In practice, we observed that the factor is around 2 for large $k$.
Before providing a recursion for $M(T_k)$ and $M(A_k)$, we introduce three new graph structures $X_k$, $Y_k$, and $Z_k$ in order to help simplify our recursions.
Similar to Section~\ref{subsec: stars and bistars}, in the following it will be helpful to label edges that have specified roles in certain MECs.
The green edges (also labeled with $\square$) indicate that these edges cannot be involved in any immorality.
The red arrows (also labeled with $\ast$) indicate a fixed immorality in the partially directed graph, and the blue arrows (also labeled with $\circ$) represent fixed arrows that are not in immoralities.
\begin{enumerate}
\item Let $X_k$ denote the partially directed tree whose skeleton is $A_k$ and for which there is exactly one immorality at the child of the root (note that the root of $A_k$ has degree $1$).
\item Let $Y_k$ denote the partially directed complete binary tree with $2^k -1$ nodes in which the root's edges are not involved in any immoralities.
\item Let $Z_k$ denote the partially directed additive tree with $2^k$ nodes in which there are edges directed from the root $r$ to its child $c$ and from $c$ to each of its children.
\end{enumerate}
\begin{figure}
\caption{From left-to-right, the graphs $X_3, Y_3$, and $Z_3$.}
\label{fig: complete and additive subgraphs}
\end{figure}
The graphs $X_3, Y_3$, and $Z_3$ are depicted from left-to-right in Figure~\ref{fig: complete and additive subgraphs}.
Now we have the following series of recursions for the graphs listed above.
\begin{theorem}
\label{thm: complete binary trees}
The following recursions hold for the partially directed graphs $T_k$, $A_k$, $X_k$, $Y_k$, and $Z_k$:
\begin{enumerate}
\item[(a)] $M(T_k) = M(A_{k-1})^2+ M(Y_k)$ with $M(T_1) = 1$,
\item[(b)] $M(A_k) = M(T_k) + 2M(X_k) + M(T_{k-1})^2$ with $M(A_1) = 1$,
\item[(c)] $M(X_k) = M(T_{k-1}) \sqrt{M(Z_k)}$ with $M(X_1) = 1$,
\item[(d)] $M(Y_k) = 2M(Z_{k-1})M(T_{k-1}) - M(Z_{k-1})^2$ with $M(Y_1) = 1$, and
\item[(e)] $M(Z_k) = (2M(X_{k-1}) + M(T_{k-2})^2 + M(Z_{k-1}))^2$ with $M(Z_1) = M(Z_2) = 1$.
\end{enumerate}
\end{theorem}
We first prove statements $(e), (c), (d)$ in this order and then use them to prove statements $(b)$ and $(a)$.
\newline
\noindent
\textit{Proof of statement (e)}. We prove this by analyzing the cases for the left subgraph of $Z_k$, considering the possible immoralities at node $s$ in Figure~\ref{fig: cases for statement 5}.
\begin{figure}\label{fig: cases for statement 5}
\end{figure}
\begin{enumerate}
\item If node $s$ has exactly one immorality (as in the leftmost figure), then this substructure contributes exactly $M(X_{k-1})$ MECs.
By symmetry, there are two ways in which node $s$ can have exactly one immorality, which means these cases contribute $2M(X_{k-1})$ MECs.
\item If node $s$ has three immoralities (as in the center figure), then this substructure contributes exactly $M(T_{k-2})^2$ MECs as we may treat nodes $u, v$ as roots of complete binary trees $T_{k-2}$.
\item If node $s$ has no immoralities (as in the rightmost figure), then this substructure contributes exactly $M(Z_{k-1})$ MECs as we may treat the left subgraph as the graph $Z_{k-1}$.
\end{enumerate}
\noindent
Finally, as we have just considered the cases on the left subgraph of $Z_k$ and as the immoralities on the right subgraph of $Z_k$ are independent of the immoralities on the left subgraph, we square the number of MECs on the left subgraph to conclude that $M(Z_k) = (2M(X_{k-1}) + M(T_{k-2})^2 + M(Z_{k-1}))^2$.
\newline
\noindent
\textit{Proof of statement (c)}. Suppose we label two nodes $p$ and $q$ in $X_k$ as in Figure~\ref{fig: cases for statement 3}. By treating node $p$ as the root of the complete binary tree, and by treating node $q$ as node $s$ in the proof of statement (e), we directly have that $M(X_{k}) = M(T_{k-1}) \sqrt{M(Z_k)}$.
\newline
\begin{figure}\label{fig: cases for statement 3}
\end{figure}
\noindent
\textit{Proof of statement (d)}. We will prove the desired recursion by considering the equivalence classes for which the edges $e_a$ and $e_b$ in Figure~\ref{fig: cases for statement 4} are directed towards the root or away from the root.
\begin{figure}\label{fig: cases for statement 4}
\end{figure}
\begin{enumerate}
\item Suppose that edge $e_a$ is directed away from the root. Then edge $e_b$ can always be directed so that it is not in an immorality at the root's right child, so we can consider the root's right child to be the root of the complete binary tree $T_{k-1}$. Now since there cannot be an immorality at the root's left child, the left subgraph of the root can be treated as the subgraph $Z_{k-1}$. This case thus gives us $M(Z_{k-1})M(T_{k-1})$ MECs.
\item Suppose instead that edge $e_b$ is directed away from the root. This case is symmetric to the one above, so there are again $M(Z_{k-1})M(T_{k-1})$ MECs formed.
\item In the above cases we have double-counted the cases where the edges $e_a$ and $e_b$ are both directed away from the root. Thus we must subtract the number of MECs formed in this case. However, in this case the left and right subgraphs from the root both represent $Z_{k-1}$. Thus, there are $M(Z_{k-1})^2$ MECs in this case.
\end{enumerate}
\noindent
Hence we have that $M(Y_k) = 2M(Z_{k-1})M(T_{k-1}) - M(Z_{k-1})^2$.
\newline
\noindent
\textit{Proof of statement (b)}. To prove recursion (b), we will consider the three possible cases of immoralities that can occur at the child $c$ of the root as depicted in Figure~\ref{fig: cases for statement 2}.
\begin{figure}\label{fig: cases for statement 2}
\end{figure}
\begin{enumerate}
\item In the leftmost figure, if there is no immorality formed by the edge from the root to $c$, then $c$ can be treated as the root of the complete binary tree $T_k$. This case contributes $M(T_k)$ MECs.
\item In the center figure, if there is exactly one immorality formed by the edge from the root to $c$, then the graph can be treated as the tree $X_k$. This case contributes $2M(X_k)$ MECs, as there are two ways in which the edge from the root to $c$ can be in exactly one immorality.
\item In the rightmost figure, if there are three immoralities formed by the edge from the root to $c$, then the children of $c$ can be treated as roots of complete binary trees $T_{k-1}$. This case contributes $M(T_{k-1})^2$ MECs.
\end{enumerate}
Thus, summing over the three cases we have that $M(A_k) = M(T_k) + 2M(X_k) + M(T_{k-1})^2$.
\newline
\noindent
\textit{Proof of statement (a)}. We can consider the following four cases depicted in Figure~\ref{fig: cases for statement 1} based on the immoralities formed by the root's edges $e_a$ and $e_b$.
\begin{figure}\label{fig: cases for statement 1}
\end{figure}
\begin{enumerate}
\item If the edges $e_a$ and $e_b$ form an immorality at the root, then the root's children $p$ and $q$ can be treated as roots of complete binary trees $T_{k-1}$. This case contributes $M(T_{k-1})^2$ MECs.
\item If the edge $e_a$ forms at least one immorality at $p$ but edge $e_b$ is not in any immoralities, then $q$ can be treated as the root of a complete binary tree $T_{k-1}$. Now $p$ can have exactly one immorality, in which case the left subgraph of the root is the structure $X_{k-1}$, or $p$ can have three immoralities, in which case the children of $p$ can each be treated as the root of a complete binary tree $T_{k-2}$. Now by symmetry we may consider immoralities formed by the edge $e_b$ as well, which will double the number of MECs formed. Thus, there are $2M(T_{k-1})[2M(X_{k-1}) + M(T_{k-2})^2]$ MECs.
\item If the edges $e_a$ and $e_b$ form immoralities at $p$ and $q$, then by applying the reasoning of the previous case independently on each side, there are $[2M(X_{k-1}) + M(T_{k-2})^2]^2$ MECs formed.
\item If the edges $e_a$ and $e_b$ form no immoralities, then the remaining graph is simply the structure $Y_k$. This case contributes $M(Y_k)$ MECs.
\end{enumerate}
\noindent
Summing over the different cases we have that
\begin{equation*}
\begin{split}
M(T_k) &= M(T_{k-1})^2 + 2M(T_{k-1})[2M(X_{k-1}) + M(T_{k-2})^2] \\
&\hspace{15pt}+ [2M(X_{k-1}) + M(T_{k-2})^2]^2 + M(Y_k), \\
&= [M(T_{k-1}) + 2M(X_{k-1}) + M(T_{k-2})^2]^2 + M(Y_k), \\
&= M(A_{k-1})^2 + M(Y_k). \\
\end{split}
\end{equation*}
This completes the proof of Theorem~\ref{thm: complete binary trees}.
$\square$
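The recursions of Theorem~\ref{thm: complete binary trees} can be iterated directly to compute $M(T_k)$ and $M(A_k)$. The following Python sketch is our own; since statement (e) exhibits $M(Z_k)$ as a perfect square, the integer square root in statement (c) is exact.

```python
from math import isqrt

def binary_tree_counts(kmax):
    """M(T_k) and M(A_k) for k = 1..kmax via recursions (a)-(e)."""
    T, A, X, Y, Z = {1: 1}, {1: 1}, {1: 1}, {1: 1}, {1: 1, 2: 1}
    for k in range(2, kmax + 1):
        if k >= 3:
            Z[k] = (2 * X[k - 1] + T[k - 2] ** 2 + Z[k - 1]) ** 2  # (e)
        X[k] = T[k - 1] * isqrt(Z[k])                              # (c)
        Y[k] = 2 * Z[k - 1] * T[k - 1] - Z[k - 1] ** 2             # (d)
        T[k] = A[k - 1] ** 2 + Y[k]                                # (a)
        A[k] = T[k] + 2 * X[k] + T[k - 1] ** 2                     # (b)
    return T, A
```

For instance, $M(T_2)=2$ and $M(A_2)=5$ (the path on three nodes and the star $K_{1,3}$), while $M(T_3)=28$ and $M(A_3)=48$.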
Now that we have recursions for $T_k$ and $A_k$, we can bound the increase in the number of MECs that results from adding an edge to the root of $T_k$ to produce $A_k$.
In order to do this, we will use the following lemma.
\begin{lemma}
\label{lem: T_k Z_k relation}
For the partially directed graphs $T_k$ and $Z_k$ we have that
$$
M(Z_k) < M(T_k).
$$
\end{lemma}
\begin{proof}
If we omit the root and its edge from the graph $Z_k$, then we see that every MEC formed in $Z_k$ can also be formed in $T_k$.
Further, since the MEC in $T_k$ with an immorality at the root cannot appear in $Z_k$, we have a strict inequality. Hence, we have that $M(Z_k) < M(T_k)$.
\end{proof}
Now we show that adding an edge to the root of $T_k$ increases the number of MECs by a factor of at most $4$.
\begin{theorem}
\label{thm: complete/additive tree ratio}
The number of MECs on $A_k$ and $T_k$ satisfy
$$
1 < \frac{M(A_k)}{M(T_k)} < 4.
$$
\end{theorem}
\begin{proof}
First we let $R_k = \frac{M(A_k)}{M(T_k)}$ and $S_{k-1} = \frac{M(T_k)}{M(T_{k-1})^2}$.
By equation (b) of Theorem~\ref{thm: complete binary trees} we know that
$$M(A_k) = M(T_k) + 2M(X_k) + M(T_{k-1})^2,$$
and hence by equation (c) of Theorem~\ref{thm: complete binary trees}
\begin{equation*}
\begin{split}
R_k &= 1 + \frac{2M(X_k)}{M(T_k)} + \frac{M(T_{k-1})^2}{M(T_k)},\\
&= 1 + \frac{2M(T_{k-1})\sqrt{M(Z_k)}}{M(T_k)} + \frac{M(T_{k-1})^2}{M(T_k)}.
\end{split}
\end{equation*}
Thus, it follows by Lemma~\ref{lem: T_k Z_k relation} that
$$R_k < 1 + \frac{2}{\sqrt{S_{k-1}}} + \frac{1}{S_{k-1}} = \left(1 + \frac{1}{\sqrt{S_{k-1}}}\right)^2.$$
Moreover, since $M(A_{k-1})\geq M(T_{k-1})$ and $M(Y_k)\geq1$, statement (a) of Theorem~\ref{thm: complete binary trees} gives $M(T_k) > M(T_{k-1})^2$; that is, $S_{k-1} > 1$. Hence
$$R_k < \left(1 + \frac{1}{\sqrt{1}}\right)^2 = 4.$$
The lower bound $R_k > 1$ is immediate from the displayed expression for $R_k$, which completes the proof.
\end{proof}
\section{Beyond Trees: Observations for Triangle-Free Graphs}
\label{sec: beyond trees}
\begin{figure}
\caption{Two graphs with the same polynomial $M(G;x)$.}
\label{fig: same MEC polynomials}
\end{figure}
We end this paper with an analysis of the natural generalization of trees, the triangle-free graphs.
As we will see, much of the intuition for the distribution of immoralities and number of MECs on trees carries over into the more general context of triangle-free graphs.
However, explicitly computing the generating functions $M(G;x)$ and $S(G;x)$ becomes increasingly difficult.
In Section~\ref{subsec: the complete bipartite graph K_2,p}, we illustrate the increasing level of difficulty in computing these generating functions for triangle-free, non-tree, graphs by computing $M(G;x)$ and $S(G;x)$ for the complete bipartite graph $K_{2,p}$.
In Section~\ref{subsec: skeletal structure in relation to the number and size of MECs}, we then take a computational approach to this problem, and we study the number and size of MECs relative to properties of the skeleton.
Using data collected by a program described in \cite{RSU17}, we examine the number and size of MECs on all connected graphs for $p\leq10$ nodes and all triangle-free graphs for $p\leq12$ nodes.
We compare the number of MECs and their sizes to skeletal properties including average degree, maximum degree, clustering coefficient, and the ratio of the number of immoralities in the MEC to the number of induced $3$-paths in the skeleton.
For triangle-free graphs, we see that much of the intuition captured by the results of the previous sections extends to this setting.
In particular, the number and distribution of high degree nodes in a triangle-free skeleton plays a key role in the number and sizes of MECs.
Finally, unlike $S(G;x)$, the polynomial $M(G;x)$ is not a complete graph isomorphism invariant for connected graphs on $p$ nodes, as can already be seen on graphs with few nodes.
For instance, the two graphs on four nodes in Figure~\ref{fig: same MEC polynomials} both have $M(G;x) = 1+2x+x^2$.
However, using this program, we verify that $M(G;x)$ is a complete graph isomorphism invariant for all triangle-free connected graphs on $p\leq 10$ nodes.
That is, $M(G;x)$ is distinct for each triangle-free connected graph on $p$ nodes for $p\leq10$.
\subsection{The complete bipartite graph $K_{2,p}$: a triangle-free, non-tree example}
\label{subsec: the complete bipartite graph K_2,p}
We now give explicit formulae for the number and sizes of the MECs on the complete bipartite graph $K_{2,p}$.
For convenience, we consider the vertex set of $K_{2,p}$ to be two distinguished nodes $\{a,b\}$ together with the remaining $p$ nodes, labeled by $[p]$, which are collectively referred to as the \emph{spine} of $K_{2,p}$.
This labeling of $K_{2,p}$ is depicted on the left in Figure~\ref{fig: k-two-p}.
It is easy to see that the maximum number of immoralities is given by orienting the edges such that all edge heads are at the nodes $a$ and $b$. This results in $m(K_{2,p}) = 2\binom{p}{2}$.
Next, we compute a closed-form formula for the number of MECs for $K_{2,p}$.
\begin{theorem}
\label{thm: K_2,p number of MECs}
The number of MECs with skeleton $K_{2,p}$ is
$$
M(K_{2,p}) = \sum_{k=0}^p{p\choose k}\left(2^{p-k}-1+2^k-k\right) - p2^{p-1}.
$$
\end{theorem}
\begin{proof}
To arrive at the desired formula, we divide the problem into three cases:
\begin{enumerate}[(1)]
\item[(a)] The number of immoralities at node $b$ is ${p\choose 2}$.
\item[(b)] The number of immoralities at node $b$ is strictly between $0$ and ${p\choose 2}$.
\item[(c)] There are no immoralities at node $b$.
\end{enumerate}
Notice that cases (a) and (b) have a natural interpretation via the indegree at node $b$ of the essential graph of the corresponding MECs.
If the indegree at $b$ is two or more, all edges adjacent to $b$ are essential, and the number of immoralities at node $b$ is given by its indegree.
Thus, we can rephrase cases (a) and (b) as follows:
\begin{enumerate}[(1)]
\item[(a)] The indegree of node $b$ in the essential graph of the MEC is $p$.
\item[(b)] The indegree of node $b$ in the essential graph of the MEC is $1<k<p$.
\end{enumerate}
In case (a), the MEC is determined exactly by the MEC on the star with center node $a$ and $p$ edges.
One can easily check (this was also proven as part of Theorem~\ref{thm: stars}) that this yields $2^p-p$ MECs.
Case (b) is more subtle.
First, assume that the indegree at node $b$ is $1<k<p$, and that the arrows with head $b$ have tails $\{1,2,\ldots,k\}\subset[p]$.
Then the remaining arrows adjacent to $b$ are all directed outwards with heads $\{k+1,\ldots,p\}$.
\begin{figure}
\caption{The graph $K_{2,p}$.}
\label{fig: k-two-p}
\end{figure}
Notice that no immoralities can happen at nodes $[k]$ along the spine, but some may occur at the nodes $[p]\backslash[k]$.
If there are no such immoralities, then node $a$ must have indegree $p$, since otherwise the essential graph would contain a directed $4$-cycle.
Similarly, if, without loss of generality, we denote the nodes in $[p]\backslash[k]$ that are the heads of immoralities by $\{k+1,k+2,\ldots,k+s\}$ for $0\leq s<p-k$, then the nodes $k+s+1,\ldots,p$ are tails of the arrows adjacent to node $a$.
Thus, if the number of immoralities with heads in $[p]\backslash[k]$ is $0\leq s<p-k$, then the immoralities with heads at node $a$ are completely determined.
Therefore, each $s$-subset of $[p]\backslash[k]$ yields a single MEC.
Figure~\ref{fig: k-two-p} depicts an example of one such choice of immoralities.
We start by selecting the arrows to form immoralities at node $b$ which forces the remaining arrows at $b$ to point towards the spine.
We then select some of these to form immoralities at the spine, and this forces the remaining arrows to be directed inwards towards $a$.
However, if $s=p-k$, the star induced by nodes $\{a,1,2,\ldots,k\}$ determines the MECs.
This yields $2^k-k$ classes (see again Theorem~\ref{thm: stars}).
In total, for case (b) the number of MECs is
$$
\sum_{k=2}^{p-1}{p\choose k}\left(2^{p-k}-1+2^k-k\right).
$$
In case (c), we consider the case when there are no immoralities at node $b$, and we count via placement of immoralities along the spine.
There are $2^p$ ways to place immoralities along the spine, one for each subset of $[p]$.
Suppose the immoralities along the spine have the heads $\{1,2,\ldots, k\}$ for $k<p-1$ (the cases $k=p-1$ and $k=p$ are considered separately).
Then the remaining immoralities can happen at node $a$.
However, if there is an immorality with head at node $a$ then all other arrows adjacent to $a$ are essential, some of which may point towards the spine with heads in the set $[p]\backslash[k]$.
Since there are no immoralities with head in the set $[p]\backslash[k]$, then any such outward pointing arrow is part of a directed path from $a$ to $b$.
However, since there are no immoralities at node $b$, there can be at most one such directed path.
The presence of any such directed path forces a directed $4$-cycle since $k<p-1$.
Therefore, for $k<p-1$ the nodes $\{k+1,\ldots,p\}$ must be tails of arrows oriented towards node $a$, thereby yielding only a single MEC.
Since $k = p$ and $k = p-1$ also yield only a single MEC, case (c) yields a total of $2^p$ classes.
Combining the total number of MECs counted for each of these cases yields the desired formula.
\end{proof}
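For small $p$, Theorem~\ref{thm: K_2,p number of MECs} can also be confirmed by brute force: enumerate the acyclic orientations of $K_{2,p}$ and group them by their sets of immoralities, since two DAGs with the same skeleton are Markov equivalent if and only if they have the same immoralities. The Python sketch below is our own verification code (the node labels follow Figure~\ref{fig: k-two-p}).

```python
from itertools import product, combinations
from math import comb

def mec_count_formula(p):
    """The closed form of the theorem for M(K_{2,p})."""
    return sum(comb(p, k) * (2 ** (p - k) - 1 + 2 ** k - k)
               for k in range(p + 1)) - p * 2 ** (p - 1)

def mec_count_bruteforce(p):
    """Count MECs with skeleton K_{2,p} by enumerating acyclic orientations."""
    nodes = ['a', 'b'] + list(range(p))
    edges = [('a', i) for i in range(p)] + [('b', i) for i in range(p)]
    skeleton = {frozenset(e) for e in edges}

    def is_acyclic(arcs):
        # Kahn's algorithm: all nodes are removable iff there is no cycle.
        indeg = {v: 0 for v in nodes}
        out = {v: [] for v in nodes}
        for u, v in arcs:
            out[u].append(v)
            indeg[v] += 1
        stack = [v for v in nodes if indeg[v] == 0]
        seen = 0
        while stack:
            u = stack.pop()
            seen += 1
            for w in out[u]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    stack.append(w)
        return seen == len(nodes)

    def immoralities(arcs):
        # v-structures: two parents of v that are not adjacent in the skeleton.
        parents = {v: [] for v in nodes}
        for u, v in arcs:
            parents[v].append(u)
        return frozenset((v, frozenset(pair))
                         for v in nodes
                         for pair in combinations(parents[v], 2)
                         if frozenset(pair) not in skeleton)

    classes = set()
    for bits in product((0, 1), repeat=len(edges)):
        arcs = [(u, v) if b else (v, u) for (u, v), b in zip(edges, bits)]
        if is_acyclic(arcs):
            classes.add(immoralities(arcs))
    return len(classes)
```

For example, $M(K_{2,1})=2$ (the path on three nodes) and $M(K_{2,2})=6$ (the $4$-cycle).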
Using the case-by-case analysis from the proof of Theorem~\ref{thm: K_2,p number of MECs} we can count the number of MECs with skeleton $K_{2,p}$ of each possible size.
Similarly, one can also recover the statistics $m_k(K_{2,p})$ from this proof.
However, to avoid overwhelming the reader with formulae, we omit the expressions for $m_k(K_{2,p})$.
\begin{corollary}
\label{cor: K_2,p sizes of MECs}
The possible sizes of a MEC with skeleton $K_{2,p}$ and the number of classes having each size is as follows:
\begin{center}
\begin{tabular}{c | c}
Class size & Number of Classes \\\hline
$1$ & $2+\sum_{k=2}^{p-1}{p\choose k}2^{p-k}$ \\\hline
$2$ & $2+{p\choose2} $ \\\hline
$3\leq k\leq p-1$ & $1+{p\choose2}$ \\\hline
$p$ & $2$ \\
\end{tabular}
\end{center}
\end{corollary}
\begin{proof}
Recall the case analysis from the proof of Theorem~\ref{thm: K_2,p number of MECs}.
In case (a) all MECs are size $1$ except for one which is size $p$.
This yields $2^p-p-1$ classes of size one and one class of size $p$.
In case (b), all MECs have size $1$, unless $s=p-k$ and there are no immoralities at node $a$, in which case the class size is $k$.
This yields ${p\choose k}$ classes of size $k$ for $1<k<p$, and
$$
\sum_{k=2}^{p-1}{p\choose k}\left(2^{p-k}-1\right)
$$
classes of size $1$.
In case (c), all MECs have size $p-k$ for $0\leq k<p-1$.
When $k=p-1$, we get a single class of size $2$, and when $k=p$ we get one more class of size $1$.
The total number of MECs of size $1$ is then
\begin{equation*}
\begin{split}
(2^p-p-1)+1+\sum_{k=2}^{p-1}{p\choose k}\left(2^{p-k}-1\right)
&= (2^p-p-1)+1+\sum_{k=2}^{p-1}{p\choose k}2^{p-k}-\sum_{k=2}^{p-1}{p\choose k},\\
&= 2^p+2+\sum_{k=2}^{p-1}{p\choose k}2^{p-k}-\sum_{k=0}^{p}{p\choose k},\\
&= 2+\sum_{k=2}^{p-1}{p\choose k}2^{p-k}.\\
\end{split}
\end{equation*}
The other formulae are quickly realized from the above arguments.
\end{proof}
\subsection{Skeletal structure in relation to the number and size of MECs}
\label{subsec: skeletal structure in relation to the number and size of MECs}
We now take a computational approach to analyzing the number and size of MECs on triangle-free graphs with respect to their skeletal structure.
The data analyzed here was collected using the program described in \cite{RSU17}, and this program can be found at \url{https://github.com/aradha/mec_generation_tool}.
The results of \cite{RSU17}, and those provided in the previous sections of this paper, indicate that the number and distribution of high degree nodes in a triangle-free graph dictate the size and number of MECs allowable on the skeleton.
In this section, we parse these observations in terms of the data collected via our computer program.
\begin{figure}
\caption{Clustering coefficient as compared to log average class size and the average number of MECs for connected graphs with $p\leq 10$ nodes and 25 edges.}
\label{fig: clustering coefficient all graphs}
\end{figure}
Recall that the \emph{(global) clustering coefficient} of a graph $G$ is defined as the ratio of the number of triangles in $G$ to the number of connected triples of vertices in $G$.
The clustering coefficient serves as a measure of how much the nodes in $G$ cluster together.
Figure~\ref{fig: clustering coefficient all graphs} presents two plots: one compares the clustering coefficient to the log average class size and the other compares it to the average number of MECs.
This data is taken over all connected graphs on $p\leq 10$ nodes with $25$ edges (to achieve a large number of MECs).
As we can see, the average class size grows as the clustering coefficient increases.
This is to be expected, since an increase in the number of triangles within the DAG should correspond to an increase in the size of the chain components of the essential graph.
On the other hand, the average number of MECs decreases with respect to the clustering coefficient, which is to be expected given that the class sizes are increasing.
This decrease in the average number of MECs empirically captures the intuition that having many triangles in a graph results in fewer induced $3$-paths, which represent the possible choices for distinct MECs with the same skeleton.
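The clustering coefficient defined above is straightforward to compute directly. A minimal sketch, assuming the graph is stored as a dict mapping each vertex to its neighbour set (the function name is ours, not from \cite{RSU17}); it uses the standard normalisation in which each triangle is counted once per centre vertex, i.e. closed triples over all connected triples:

```python
from itertools import combinations

def global_clustering(adj):
    """Global clustering coefficient: #closed triples / #connected triples,
    for an undirected graph given as {vertex: set of neighbours}."""
    closed = 0   # length-2 paths whose endpoints are also adjacent
    triples = 0  # all length-2 paths, centred at each vertex in turn
    for v, nbrs in adj.items():
        d = len(nbrs)
        triples += d * (d - 1) // 2
        closed += sum(1 for u, w in combinations(nbrs, 2) if w in adj[u])
    return closed / triples if triples else 0.0

# Triangle 0-1-2 with a pendant edge 2-3: one triangle, five connected triples.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(global_clustering(adj))  # -> 0.6
```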
\begin{figure}
\caption{Average degree versus log average class size and average number of MECs for all graphs and triangle-free graphs on $10$ nodes.}
\label{fig: average degree vs log average class size}
\end{figure}
Figure~\ref{fig: average degree vs log average class size} presents a pair of plots, the first of which compares the average degree of the underlying skeleton of the DAG to the log average class size of the associated MEC.
The second plot compares the average degree of the skeleton to the average number of MECs it supports.
Both plots present one curve for all connected graphs and a second curve for triangle-free graphs on $10$ nodes.
For connected graphs on $10$ nodes the left-most plot shows a strict increase in the log average MEC class size as the average degree of the nodes in the underlying skeleton increases.
This is to be expected since graphs with a higher average degree are more likely to contain larger chain components.
On the other hand, the average class size for triangle-free graphs increases for average degree up until approximately $2.0$, and then shows a steady decrease for larger average degree.
Since the average degree of a tree on $p$ nodes is $2-\frac{2}{p}$, this suggests that the largest MECs amongst triangle-free graphs have skeleta that are trees.
As such, the bounds developed in Section~\ref{sec: bounding the size and number of mecs on trees} of this paper can, heuristically, be thought of as applying more generally to all triangle-free graphs.
The right-most plot in Figure~\ref{fig: average degree vs log average class size} describes the relationship between average degree and the average number of MECs for all connected graphs and triangle-free graphs on $10$ nodes.
We see from this that in the setting of all connected graphs, the skeleta with the largest average number of MECs appear to have average degree $7$, whereas in the triangle-free setting, the higher the average degree the more equivalence classes the skeleta can support.
This supports the intuition that the more high degree nodes there are in a triangle-free graph, the more equivalence classes the graph can support.
\begin{figure}
\caption{Maximum degree versus log average class size and average number of MECs for all graphs and triangle-free graphs on $10$ nodes.}
\label{fig: max degree vs log average class size}
\end{figure}
The left-most plot in Figure~\ref{fig: max degree vs log average class size} depicts the relationship between the maximum degree of a node in a skeleton and the average class size on the skeleton for all connected graphs and for triangle-free graphs on $10$ nodes.
For all graphs, the relationship appears to be almost linear beginning with maximum degree $5$, suggesting that average class size grows linearly with the maximum degree of the underlying skeleton.
This growth in class size is due to the introduction of many triangles as the maximum degree grows.
On the other hand, in the triangle-free setting we actually see a decrease in average class size as the maximum degree grows, which empirically reinforces this intuition.
The right-most plot in Figure~\ref{fig: max degree vs log average class size} records the relationship between the maximum degree of a node in a skeleton and the average number of MECs supported by that skeleton for all connected graphs and triangle-free graphs on at most $10$ nodes.
For all graphs, we see that the average number of MECs grows with the maximum degree of the graphs, and this growth is approximately exponential.
In the triangle-free setting, the average number of MECs appears to be unimodal, although it would be increasing if we also considered graphs on $p>10$ nodes. Amongst triangle-free graphs there is only one graph with maximum degree $9$, namely the star $G_1(9)$, for which the number of MECs is $2^9-9$. For connected graphs, the average number of MECs is pushed up by those cases consisting of a complete bipartite graph in which one additional node is connected to all other nodes.
\begin{figure}
\caption{Class size versus the ratio of the number of immoralities to the number of induced $3$-paths in all MECs on 10 nodes.}
\label{fig: class size versus immorality ratio}
\end{figure}
The final plot of interest is in Figure~\ref{fig: class size versus immorality ratio}, and it shows the relationship between MEC size and the ratio of the number of immoralities in the MEC to the number of induced $3$-paths in the skeleton for all connected graphs and triangle-free graphs on $10$ nodes.
That is, it shows the relationship between the class size and how many of the potential immoralities presented by the skeleton are used by the class.
It is interesting to note that, in the triangle-free setting, as the class size grows, this ratio appears to approach $0.3$, suggesting that most large MECs use about a third of the possible immoralities in triangle-free graphs.
In the connected graph setting, as the class size grows, we see a steady decrease in the value of this ratio.
This supports the intuition that a larger class size corresponds to an essential graph with large chain components and few immoralities.
\noindent
{\bf Acknowledgements}.
We wish to thank Brendan McKay for some helpful advice in the use of the programs {\tt nauty} and {\tt Traces} \cite{MP14}.
Adityanarayanan Radhakrishnan was supported by ONR (N00014-17-1-2147). Liam Solus was partially supported by an NSF Mathematical Sciences Postdoctoral Research Fellowship (DMS - 1606407).
Caroline Uhler was partially supported by DARPA (W911NF-16-1-0551), NSF (1651995), and ONR (N00014-17-1-2147).
\end{document}
\begin{document}
\title{Lagrangian Descriptors for Two Dimensional, Area Preserving, Autonomous and Nonautonomous Maps}
\author{Carlos Lopesino$^{1}$, Francisco Balibrea$^{1}$, Stephen Wiggins$^2$, Ana M. Mancho$^1$\\
$^1$Instituto de Ciencias Matem\'aticas, CSIC-UAM-UC3M-UCM,
\\ C/ Nicol\'as Cabrera 15, Campus Cantoblanco UAM, 28049
Madrid, Spain\\
$^2$School of Mathematics, University of Bristol, \\Bristol BS8 1TW, United Kingdom}
\maketitle
\begin{abstract}
In this paper we generalize the method of Lagrangian descriptors to two dimensional, area preserving, autonomous and
nonautonomous discrete time dynamical systems. We consider four generic model problems: a hyperbolic saddle point for a
linear, area-preserving autonomous map, a hyperbolic saddle point for a nonlinear, area-preserving autonomous map, a
hyperbolic saddle point for a linear, area-preserving nonautonomous map, and a hyperbolic saddle point for a nonlinear,
area-preserving nonautonomous map. The discrete time setting allows us to evaluate the expression for the Lagrangian
descriptors explicitly for a certain class of norms. This enables us to provide a rigorous setting for the notion that
the ``singular sets'' of the Lagrangian descriptors correspond to the stable and unstable manifolds of hyperbolic
invariant sets, as well as to understand how this depends upon the particular norms that are used. Finally we analyze,
from the computational point of view, the performance of this tool for general nonlinear maps, by computing the
``chaotic saddle'' for autonomous and nonautonomous versions of the H\'enon map.
\end{abstract}
\section{Introduction}\label{sec:intro}
Lagrangian descriptors (also referred to in the literature as the ``$M$ function'') were first introduced as a tool for
finding hyperbolic trajectories in \cite{chaos}. In this paper the notion of {\em distinguished trajectory} was
introduced as a generalization of the well-known idea of distinguished {\em hyperbolic} trajectory. The numerical
computation of distinguished trajectories was discussed in some detail, and applications to known benchmark examples, as
well as to geophysical fluid flows defined as data sets were also given.
Later \cite{prl} showed that it could be used to reveal Lagrangian invariant structures in realistic fluid flows. In
particular, a geophysical data set in the region of the Kuroshio current was analysed and it was shown that Lagrangian
descriptors could be used to reveal the ``Lagrangian skeleton'' of the flow, i.e. hyperbolic and elliptic regions, as well
as the invariant manifolds that delineate these regions. A deeper study of the Lagrangian transport issue associated
with the Kuroshio using Lagrangian descriptors is given in \cite{jfm}. Advantages of the method over finite time
Lyapunov exponents (FTLE) and finite size Lyapunov exponents (FSLE) were also discussed.
Since then Lagrangian descriptors have been further developed and their ability to reveal phase space structures in
dynamical systems more generally has been confirmed. In particular, Lagrangian descriptors are used in \cite{amism11} to
reveal the Lagrangian structures that define transport routes across the Antarctic polar vortex. Further studies of
transport issues related to the Antarctic polar vortex using Lagrangian descriptors are given in \cite{ammsi13} where
vortex Rossby wave breaking is related to Lagrangian structures. In \cite{rempel} Lagrangian descriptors are used to
study the influence of coherent structures on the saturation of a nonlinear dynamo. In \cite{mmw14} Lagrangian
descriptors are used to analyse the influence of Lagrangian structure on the transport of buoys in the Gulf stream and
in a region of the Gulf of Mexico relevant to the Deepwater Horizon oil spill. In \cite{mwcm13} a detailed analysis of
the behaviour of Lagrangian descriptors is provided in terms of benchmark problems, new Lagrangian descriptors are
introduced, extension of Lagrangian descriptors to 3D flows is given (using the time dependent Hill's spherical vortex
as a benchmark problem), and a detailed analysis and discussion of the computational performance (with a comparison with
FTLE) is presented.
Lagrangian descriptors are based on the integration, for a finite time, along trajectories of an intrinsic bounded,
positive geometrical and/or physical property of the trajectory itself, such as the norm of the velocity, acceleration,
or curvature. Hyperbolic structures are revealed as singular features of the contours of the Lagrangian descriptors, but
the sharpness of these singular features depends on the particular norm chosen. These issues were explored in
\cite{mwcm13}, and further explored in this paper.
All of the work thus far on Lagrangian descriptors has been in the continuous time setting. In this paper we generalize
the method of Lagrangian descriptors to the discrete time setting of two dimensional area preserving maps, both
autonomous and nonautonomous, and provide theoretical support for their performance.
This paper is organized as follows. In section \ref{sec:DLDdef} we define discrete Lagrangian descriptors. We then
consider four examples. In section \ref{sec:examp1} we consider a linear autonomous area preserving map having a
hyperbolic saddle point at the origin, in \ref{sec:examp2} we consider a nonlinear autonomous area preserving map having
a hyperbolic saddle point at the origin, in \ref{sec:examp3} we consider a linear nonautonomous area preserving map
having a hyperbolic saddle trajectory at the origin, and in \ref{sec:examp4} we consider a nonlinear nonautonomous area
preserving map having a hyperbolic trajectory at the origin. For each example we show that the Lagrangian descriptors
reveal the stable and unstable manifolds by being singular on the manifolds. The notion of ``being singular'' is made
precise in Theorem \ref{thm:lin_aut_map}. In section \ref{sec:henon} we explore further the method beyond the
analytical examples. We use discrete Lagrangian descriptors to computationally reveal the chaotic saddle of the
H\'enon map, and in section \ref{sec:NAhenon} we consider a nonautonomous version of the H\'enon map. In section
\ref{sec:summ} we summarize the conclusions and suggest future directions for this work.
\section{Lagrangian Descriptors for Maps}
\label{sec:DLDdef}
Let
\begin{equation}
\{ x_n, y_n \}_{n=-N}^{n=N}, \quad N \in \mathbb{N},
\label{orbit}
\end{equation}
\noindent
denote an orbit of length $2N +1$ generated by a two dimensional map. At this point it does not matter whether or not
the map is autonomous or nonautonomous. The method of Lagrangian descriptors applies to orbits in general, regardless of
the type of dynamics that generate the orbit.
The first Lagrangian descriptor (also known as the ``$M$ function'') for continuous time systems was based on computing
the arclength of trajectories for a finite time (\cite{chaos}). Extending this idea to maps is straightforward, and the
corresponding discrete Lagrangian descriptor (DLD) is given by:
\begin{equation}
MD_2 = \sum^{N-1}_{i=-N} \sqrt{ (x_{i+1}-x_i)^2 + (y_{i+1}-y_i)^2 }.
\label{eq:DLD_al}
\end{equation}
\noindent
In analogy with the work on continuous time Lagrangian descriptors in \cite{mwcm13}, we consider different norms for
the discretized arclength as follows:
\begin{equation}
MD_p = \sum^{N-1}_{i=-N} \sqrt[p]{ |x_{i+1}-x_i|^p + |y_{i+1}-y_i|^p }, \quad p>1,
\label{eq:DLD_p>1}
\end{equation}
\noindent
and
\begin{equation}
MD_p = \sum^{N-1}_{i=-N} |x_{i+1}-x_i|^p + |y_{i+1}-y_i|^p , \quad p \leq 1.
\label{eq:DLD_p<1}
\end{equation}
\noindent
Considering the space of orbits as a sequence space, \eqref{eq:DLD_p>1} and \eqref{eq:DLD_p<1} are the $\ell^p$ norms
of an orbit.
Henceforth, we will consider only the case $p \leq 1$, since the proofs are simpler in this case. We will now
explore these definitions in the context of some easily understood, but generic, examples.
\subsection{Example 1: A Hyperbolic Saddle Point for Linear, Area-Preserving Autonomous Maps}
\label{sec:examp1}
\subsubsection{Linear Saddle point}
Consider the following linear, area-preserving autonomous map:
\begin{equation}
\left \{ \begin{array}{ccc}
x_{n+1} & = & \lambda x_n,\\
y_{n+1} & = & \frac{1}{\lambda} y_n, \\
\end{array}\right .
\label{eq:lin_aut_map}
\end{equation}
\noindent
where we will take $\lambda > 1$. Note that this map is area-preserving, but area-preservation was not used in the
definition of the DLD's above.
Now we will compute \eqref{eq:DLD_p<1} for this example. Towards this end, we introduce the notation
$$MD_p = MD^{+}_p + MD^{-}_p$$
\noindent
where
$$MD^{+}_p = \sum ^{N-1}_{i=0} |x_{i+1}-x_i|^p + |y_{i+1}-y_i|^p,$$
\noindent
and
$$MD^{-}_p = \sum ^{-N}_{i=-1} |x_{i+1}-x_i|^p + |y_{i+1}-y_i|^p.$$
\noindent
We begin by computing $MD^{+}_p$. The computation of $MD^{-}_p$ is completely analogous, and therefore we will not
provide the details. We have:
\begin{eqnarray}
MD^{+}_p & = & \displaystyle{\sum ^{N-1}_{i=0} |x_{i+1}-x_i|^p + |y_{i+1}-y_i|^p} \nonumber\\
& & \nonumber\\
& = & \displaystyle{|x_1-x_0|^p + |y_1-y_0|^p + ... + |x_{N}-x_{N-1}|^p + |y_{N}-y_{N-1}|^p}\nonumber\\
& & \nonumber \\
& = & \displaystyle{|\lambda x_0-x_0|^p + |1/\lambda y_0-y_0|^p + ... + |\lambda^{N} x_0 - \lambda
^{N-1}x_0|^p + |1/\lambda^{N} y_0 - 1/\lambda^{N-1} y_0|^p}\nonumber\\
& & \nonumber\\
& = & \displaystyle{|x_0|^p|\lambda-1|^p \left (1+\lambda^p+...+\lambda^{(N-1)p}\right ) +
|y_0|^p|1/\lambda-1|^p \left (1+1/\lambda^p+...+1/\lambda^{(N-1)p}\right )}\nonumber\\
& & \nonumber \\
& = & \displaystyle{|x_0|^p|\lambda-1|^p \left (\frac{\lambda^{Np}-1}{\lambda^p-1}\right ) +
|y_0|^p|1/\lambda-1|^p \left (\frac{1/\lambda^{Np}-1}{1/\lambda^p-1}\right )}\nonumber
\end{eqnarray}
\normalsize
\noindent
where in the last step we have used that the sums are geometric with rates $\lambda^p$ and $1/\lambda^p$, respectively.
By completely analogous calculations we obtain $MD^{-}_p$ as:
$$MD^{-}_p = |x_0|^p|1/\lambda-1|^p \left (\frac{1/\lambda^{Np}-1}{1/\lambda^p-1}\right ) +
|y_0|^p|\lambda-1|^p \left (\frac{\lambda^{Np}-1}{\lambda^p-1}\right ).$$
\noindent
Putting the two terms together, we obtain:
\begin{equation}
\begin{array}{ccl}
MD_p & = & MD^{+}_p + MD^{-}_p \\
& & \\
& = & (|x_0|^p+|y_0|^p)(|\lambda-1|^p \left (\frac{\lambda^{Np}-1}{\lambda^p-1}\right ) +
|1/\lambda-1|^p\left (\frac{1/\lambda^{Np}-1}{1/\lambda^p-1}\right ))\\
& & \\
& = & (|x_0|^p+|y_0|^p)f(\lambda,p,N),\\
\end{array}
\label{DLD_lin_aut}
\end{equation}
\noindent
where $\lambda$, $p$ and $N$ are fixed.
Extensive numerical simulations in a variety of examples (cf. \cite{chaos, prl, nlpg2, amism11, jfm, mwcm13, mmw14})
have shown that ``singular features'' of Lagrangian descriptors correspond to stable and unstable manifolds of
hyperbolic trajectories. We can make this statement rigorous and precise in the context of this example.
\begin{theorem}
Consider a vertical line perpendicular to the unstable manifold of the origin. In particular, consider an arbitrary
point $x = \bar{x}$ and a line parallel to the $y$ axis passing through this point. Then the derivative of $MD_p$, $p
<1$, along this line becomes unbounded on the unstable manifold of the origin.
Similarly, consider a horizontal line perpendicular to the stable manifold of the origin. In particular, consider an
arbitrary point $y = \bar{y}$ and a line parallel to the $x$ axis passing through this point. Then the derivative of
$MD_p$, $p <1$, along this line becomes unbounded on the stable manifold of the origin.
\label{thm:lin_aut_map}
\end{theorem}
\begin{proof} This is a simple calculation using \eqref{DLD_lin_aut} and the fact that $p<1$. This is illustrated in
Figure \ref{Saddle_vs_MDp}.
\end{proof}
\begin{figure}
\caption{The left-hand panel shows contours of $MD_p$ for $p=0.5$, $N=20$ and $\lambda=1.1$, with a grid point spacing
of $0.005$. The horizontal black line is at $y=0.25$. The right-hand panel shows the graph of $MD_p$ along this
horizontal black line, which illustrates the singular nature of the derivative of $MD_p$ on the stable manifold across
the line $x=0$.}
\label{Saddle_vs_MDp}
\end{figure}
\subsubsection{Linear Rotated Saddle point}
In the example studied in the previous section the DLD is singular along the stable and unstable manifolds for any
iteration $n$. However, the results discussed in \cite{prl,mwcm13} for the continuous time case show that the
manifolds are observed for $\tau$ ``sufficiently large'', which is related to a large number of iterations in the
discrete time case. We explore further these connections by studying the case of the rotated saddle point. In order to
establish a direct link to the continuous time case, we consider the limits of small and large numbers of iterations,
and $\lambda \approx 1$.
\noindent
We have the following discrete dynamical system:
\begin{equation}\label{mapF}
F(x,y) = A
\left(
\begin{array}{c}
x \\
\\
y \\
\end{array}
\right)
\end{equation}
\noindent
where
\begin{equation}\label{matrizA}
A = \frac{1}{2}\left(
\begin{array}{cc}
\frac{1}{\lambda} + \lambda & \frac{1}{\lambda} - \lambda \\
& \\
\frac{1}{\lambda} - \lambda & \frac{1}{\lambda} + \lambda \\
\end{array}
\right)= \frac{1}{2\lambda} \left(
\begin{array}{cc}
1+\lambda^2 & 1-\lambda^2 \\
& \\
1-\lambda^2 & 1+\lambda^2 \\
\end{array}
\right)
\end{equation}
\noindent
in our case with $\lambda > 1$. It is easy to see that the stable and the unstable manifolds are given by the vectors
$(1,1)$ and $(1,-1)$ respectively. We want to compute $A^{i}-A^{i-1}$ in order to obtain the expression of the DLD:
\begin{equation}\label{DLD}
MD_p = \sum ^{N-1}_{i=-N} |x_{i+1}-x_i|^p + |y_{i+1}-y_i|^p
\end{equation}
and to find where the ``singularities'' are produced and why.
We know that $A$ can be diagonalized, so there exist matrices $D$ and $T$ such that
\begin{equation}\label{diagonalization}
D = T^{-1} \cdot A \cdot T
\end{equation}
where $D$ is a diagonal matrix. Therefore we obtain
$$D^i = T^{-1} \cdot A^i \cdot T, \quad \text{for every } i,$$
which is equivalent to
\begin{equation}\label{A_i}
A^i = T \cdot D^i \cdot T^{-1}, \quad \text{for every } i.
\end{equation}
It is clear that the matrix $T$ is
\begin{equation}\label{matrizT}
T = \left(
\begin{array}{cc}
1 & 1 \\
-1 & 1 \\
\end{array}
\right)
\end{equation}
and therefore
$$T^{-1} = \frac{1}{2}\left (
\begin{array}{cc}
1 & -1 \\
1 & 1 \\
\end{array}
\right )$$
We can check equation \eqref{diagonalization}
\begin{equation}\label{matrizD}
\small{
D = \frac{1}{4\lambda}\left(
\begin{array}{ccc}
1 & & -1 \\
& & \\
1 & & 1 \\
\end{array}
\right) \left(
\begin{array}{cc}
1+\lambda^2 & 1-\lambda^2 \\
& \\
1-\lambda^2 & 1+\lambda^2 \\
\end{array}
\right) \left(
\begin{array}{ccc}
1 & & 1 \\
& & \\
-1 & & 1 \\
\end{array}
\right) = \left(
\begin{array}{cc}
\lambda & 0 \\
& \\
0 & \frac{1}{\lambda} \\
\end{array}
\right)
}
\end{equation}
We can now compute $A^i$ using equation \eqref{A_i}:
\begin{equation}
\footnotesize{
A^i = \frac{1}{2}\left(
\begin{array}{ccc}
1 & & 1 \\
& & \\
-1 & & 1 \\
\end{array}
\right) \left(
\begin{array}{cc}
\lambda^i & 0 \\
& \\
0 & \frac{1}{\lambda^i} \\
\end{array}
\right) \left(
\begin{array}{ccc}
1 & & -1 \\
& & \\
1 & & 1 \\
\end{array}
\right) = \frac{1}{2\lambda^i}\left(
\begin{array}{cc}
1+\lambda^{2i} & 1-\lambda^{2i} \\
& \\
1-\lambda^{2i} & 1+\lambda^{2i} \\
\end{array}
\right)
}
\end{equation}
Therefore
\begin{equation}\label{A_i_expression}
A^{i}-A^{i-1} = \frac{1}{2\lambda^i}\left(
\begin{array}{cc}
\lambda^{2i}-\lambda^{2i-1}-\lambda+1 & -\lambda^{2i}+\lambda^{2i-1}-\lambda+1 \\
& \\
-\lambda^{2i}+\lambda^{2i-1}-\lambda+1 &
\lambda^{2i}-\lambda^{2i-1}-\lambda+1 \\
\end{array}
\right)
\end{equation}
Now we are going to study the analytical expression of the stable and unstable manifolds. For that purpose we will
develop only the $MD^{+}_p$ expression ($MD^-_p$ is analogous). Recall that the expression for $MD^{+}_p$
is
\begin{equation}
MD^{+}_p = \sum ^{N-1}_{i=0} |x_{i+1}-x_i|^p + |y_{i+1}-y_i|^p
\end{equation}
therefore using equation \eqref{A_i_expression} for $N \geq 1$
\begin{equation}\label{suma_A_n}
\scriptsize{
\begin{array}{rl}
MD^{+}_p = & \displaystyle{\sum^{N-1}_{i=0}} \frac{1}{(2\lambda^{i+1})^{p}}|(\lambda^{2(i+1)}-\lambda^{2(i+1)-1}-\lambda+1)x_0 + (-\lambda^{2(i+1)}+\lambda^{2(i+1)-1}-\lambda+1)y_0|^p \\
& \\
& + \frac{1}{(2\lambda^{i+1})^{p}}|(-\lambda^{2(i+1)}+\lambda^{2(i+1)-1}-\lambda+1)x_0 +
(\lambda^{2(i+1)}-\lambda^{2(i+1)-1}-\lambda+1)y_0|^p \\
\end{array}}
\end{equation}
\noindent
Each term in this sum has singularities along two different lines. In particular, for each $i$ and $\lambda$, we have
the two singular lines
\begin{equation}\label{slope_m}
y_0 = \frac{\lambda^{2(i+1)}-\lambda^{2(i+1)-1}-\lambda+1}{\lambda^{2(i+1)}-\lambda^{2(i+1)-1}+\lambda-1}x_0 = m(\lambda,i)x_0
\end{equation}
\noindent
and
\begin{equation}\label{slope_-m}
y_0 = \frac{1}{m(\lambda,i)}x_0
\end{equation}
\noindent
where $m(\lambda,i)$ and $\displaystyle{\frac{1}{m(\lambda,i)}}$ are, respectively, the slopes of the singular lines.
If we fix $\lambda = \lambda_0$ and increase the number of iterations, we can see the evolution of the singular
features towards the limit shown in Figure \ref{sequence_of_DLD}:
\begin{equation}\label{slope_1}
\lim_{i \to \infty}m(\lambda_0,i) = 1
\end{equation}
\noindent
This convergence is reached rapidly; for example, for $\lambda=1.1$ it is noticeable from $i=20$ onwards. Thus at
large $i$ most of the terms in the summation \eqref{suma_A_n} contribute with the same slope, i.e., \eqref{slope_1},
and the contributions of terms with small $i$ make little impact
on the global sum \eqref{suma_A_n}. If $i$ is small, the number of terms contributing to the DLD is small, and each
term is a $C^0$ function with discontinuities along {\em different} lines. Since all terms contribute equally to the
total pattern, no particular feature is highlighted (see Figures \ref{sequence_of_DLD}b) and \ref{sequence_of_DLD}c)).
The limit $\lambda \approx 1$ is closely related to the Lagrangian Descriptors defined for the continuous time case.
This can be seen by considering the limit and noting that $\lambda$ quantifies the separation of points as they are
iterated and relating this to the arclength integral for the linear saddle point discussed in \cite{mwcm13}.
For any fixed $i = n_0$, it is possible to find a $\lambda$ close to $1$ that makes the slope $m$ close
to the limit value:
\begin{equation}
\lim_{\lambda \to 1}m(\lambda,n_0) = 0
\end{equation}
\noindent
In this case, equations \eqref{slope_m} and \eqref{slope_-m} tend to $y=0$ and $x=0$, respectively. The approach to
this limit can be observed in the sequence of images shown in Figure \ref{sequence_of_DLD} and in the DLD derivative
along the line $y=0.25$ shown in Figure \ref{derivative_of_DLD}.
\begin{figure*}
\caption{DLD for different values of $\lambda$ and iterations $i$.}
\label{sequence_of_DLD}
\end{figure*}
\begin{figure*}
\caption{Derivative of the DLD along the line $y=0.25$ for different values of $\lambda$ and iterations $i$.}
\label{derivative_of_DLD}
\end{figure*}
\subsection{Example 2: A Hyperbolic Saddle Point for Nonlinear, Area-Preserving Autonomous Maps}
\label{sec:examp2}
We will analyze this case using a theorem of \cite{moser56}. Moser's theorem applies to analytic, area
preserving maps in a neighborhood of a hyperbolic fixed point. We will discuss how the assumptions of analyticity and
area preservation can be removed later on, but for now we proceed with these assumptions.
We consider an analytic, area-preserving map in a neighborhood of $x=y=0$ of the form:
\begin{equation}\label{nonlinear_equation}
\left \{ \begin{array}{ccccc}
x_{n+1} & = & f(x_n,y_n) & = & \lambda x_n + \cdots\\
y_{n+1} & = & g(x_n,y_n) & = & \lambda^{-1} y_n + \cdots\\
\end{array}\right .
\end{equation}
\noindent
where $\lambda>1$ and ``$\cdots$'' represent nonlinear terms that obey the area-preserving constraint. Moser's Theorem
states that there exists a real analytic, area preserving change of variables of the following form:
\begin{equation}\label{change_variables}
\begin{array}{ccccc}
x & = & x(\xi,\eta), \\
y & = & y(\xi,\eta), \\
\end{array}
\end{equation}
\noindent
with inverse
\begin{equation}\label{change_variables_inverse}
\begin{array}{ccccc}
\xi & = & \xi(x,y), \\
\eta & = & \eta(x, y), \\
\end{array}
\end{equation}
\noindent
such that in these new coordinates
\eqref{nonlinear_equation} has the following {\em normal form}:
\begin{equation}\label{normal_form}
\left \{ \begin{array}{ccl}
\xi_{n+1} & = & U(\xi_n\eta_n)\xi_n\\
\eta_{n+1} & = & U^{-1}(\xi_n\eta_n)\eta_n\\
\end{array}\right .
\end{equation}
\noindent
where $U(\xi\eta)$ is a power series in the product $\xi \eta$ of the form $U_0 + U_2\xi\eta + \cdots $, with $U_0 =
\lambda$, which converges in a neighborhood of the hyperbolic point. Note that it follows from the form of
\eqref{normal_form} that $U(\cdot)$ is constant on orbits of \eqref{normal_form}, i.e. $U(\xi_{i+1}
\eta_{i+1}) = U(\xi_i \eta_i) = U, \forall i$.
The form of \eqref{normal_form} implies that the same computation described in Section \ref{sec:examp1} applies.
Therefore for $MD^{+}_p$ we have:
\begin{eqnarray}
MD^{+}_p & = & \displaystyle{\sum ^{N-1}_{i=0} |\xi_{i+1}-\xi_i|^p + |\eta_{i+1}-\eta_i|^p} \nonumber\\
& & \nonumber\\
& = & \displaystyle{\sum ^{N-1}_{i=0} |\xi_i|^p|U(\xi_i\eta_i)-1|^p + |\eta_i|^p|U^{-1}(\xi_i \eta_i)-1|^p
= \sum ^{N-1}_{i=0} |\xi_i|^p|U-1|^p + |\eta_i|^p|U^{-1}-1|^p} \nonumber\\
& & \nonumber \\
& = & \displaystyle{|\xi_0|^p|U-1|^p \left (1+|U|^p+...+|U|^{(N-1)p}\right ) + |\eta_0|^p|U^{-1}-1|^p \left
(1+|U^{-1}|^p+...+|U^{-1}|^{(N-1)p}\right )} \nonumber\\
& & \nonumber \\
& = & \displaystyle{|\xi_0|^p|U-1|^p\left | \frac{U^{Np}-1}{U^p-1} \right | +
|\eta_0|^p|U^{-1}-1|^p\left | \frac{1/U^{Np}-1}{1/U^p-1} \right |}. \nonumber
\end{eqnarray}
\normalsize
\noindent
$MD^{-}_p$ is computed analogously, and therefore $MD_p = MD^{+}_p + MD^{-}_p$
is given by:
$$ MD_p = \displaystyle{(|\xi_0|^p + |\eta_0|^p) \left (|U-1|^p\left
| \frac{U^{Np}-1}{U^p-1} \right | +|U^{-1}-1|^p\left
| \frac{1/U^{Np}-1}{1/U^p-1} \right | \right )},$$
\normalsize
\noindent
In this expression $U$ is constant along trajectories, {\em i.e.}, $U(\xi_0\eta_0) = U(\xi_i\eta_i) = U, \forall
i$. But in general, different initial conditions $( \xi_0, \eta_0)$ do not belong to the same trajectory, thus $U$
depends on $( \xi_0, \eta_0)$.
More succinctly we express this as:
\begin{equation}
\begin{array}{ccl}
MD_p & = & \displaystyle{(|\xi_0|^p + |\eta_0|^p)f(U(\xi_0,\eta_0),p,N)}\\
\end{array}
\label{eq:DLD_nonlin_aut}
\end{equation}
\noindent
This expression has the same form as \eqref{DLD_lin_aut}, except for the dependence of the function $f$ on
$U(\xi_0,\eta_0)$. We note that $U$ is analytic and thus a smooth function. Therefore Theorem
\ref{thm:lin_aut_map} still applies because the first derivative is infinite
due to the first factor in expression (\ref{eq:DLD_nonlin_aut}). We can conclude that the derivative of $MD_p$
transverse to the stable manifold is singular on the manifold and the derivative of $MD_p$ transverse to the unstable
manifold is singular on the manifold. However, this is a statement that is true in the $\xi-\eta$ normal form
coordinates. In practice we will compute the Lagrangian descriptor in the original $x-y$ coordinates and therefore we
would like to conclude that the ``singular sets'' of the Lagrangian descriptor in the $x-y$ coordinates correspond to
the stable and unstable manifolds of the hyperbolic fixed point. We will now show that this is the case. We will carry
out the argument for the stable manifold. The argument for the unstable manifold is completely analogous.
First, using \eqref{change_variables}, in the $x-y$ coordinates the stable manifold of the origin is given by the curve
$(x(0, \eta), y(0, \eta))$. Here $\eta$ is viewed as a parameter for this parametric representation of the stable
manifold in the original $x-y$ coordinates. A vector perpendicular to this curve at any point on the
curve is given by $\left( -\frac{dy}{d \eta} (0, \eta), \frac{dx}{d \eta} (0, \eta) \right)$. Now we compute the rate
of change of $MD_p=MD_p (x, y)$ in this direction and consider its behavior on the stable manifold of the origin. This
is given by the directional derivative of $MD_p (x, y)$ in this direction evaluated on the stable manifold:
\begin{equation}
\left( \frac{\partial MD_p}{\partial x}(x(0, \eta), y(0, \eta)), \frac{\partial MD_p}{\partial y} (x(0, \eta), y(0,
\eta))\right) \cdot
\left( -\frac{dy}{d \eta} (0, \eta), \frac{dx}{d \eta} (0, \eta) \right),
\label{eq:direc_deriv}
\end{equation}
\noindent
where the derivatives are evaluated on $(x(0, \eta), y(0, \eta))$, but we will omit this explicitly for the sake of a
less cumbersome notation. Next we will use the chain rule to express partial derivatives with respect to $x$ and $y$ in
terms of $\xi$ and $\eta$ as follows:
\begin{eqnarray}
\frac{\partial MD_p}{\partial x} & = & \frac{\partial MD_p}{ \partial \xi} \frac{\partial \xi}{\partial x} +
\frac{\partial MD_p}{ \partial \eta} \frac{\partial \eta}{\partial x}, \nonumber \\
\frac{\partial MD_p}{\partial y} & = & \frac{\partial MD_p}{ \partial \xi} \frac{\partial \xi}{\partial y} +
\frac{\partial MD_p}{ \partial \eta} \frac{\partial \eta}{\partial y}.
\label{eq:cr}
\end{eqnarray}
\noindent
Substituting \eqref{eq:cr} into \eqref{eq:direc_deriv} gives:
\begin{eqnarray}
-\left( \frac{\partial MD_p}{ \partial \xi} \frac{\partial \xi}{\partial x} + \frac{\partial MD_p}{ \partial \eta}
\frac{\partial \eta}{\partial x}\right) \frac{dy}{d \eta} +
\left( \frac{\partial MD_p}{ \partial \xi} \frac{\partial \xi}{\partial y} + \frac{\partial MD_p}{ \partial \eta}
\frac{\partial \eta}{\partial y}\right) \frac{dx}{d \eta}.
\end{eqnarray}
Now it follows from the argument given in Theorem \ref{thm:lin_aut_map} that
$\frac{\partial MD_p}{ \partial \xi} $ does not exist on the stable manifold ($\xi =0$) for $p<1$. Hence
\eqref{eq:DLD_nonlin_aut} is not differentiable in a direction transverse to the stable manifold at a point on the
stable manifold in the $x-y$ coordinates.
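This transverse non-differentiability is easy to see numerically in the simplest linear setting. The sketch below is our own illustration (not code from the paper); the parameter values $\lambda = 2$, $p = 1/2$, $N = 20$ are arbitrary choices. Since $MD_p(x_0, y_0) = |x_0|^p f + |y_0|^p g$ for the linear saddle, the difference quotient across the stable manifold $x_0 = 0$ grows like $h^{p-1}$ for $p < 1$:

```python
# Minimal sketch (our illustration, not code from the paper): for the linear
# saddle x -> lam*x, y -> y/lam the descriptor has the form
# MD_p(x0, y0) = |x0|^p f + |y0|^p g, so for p < 1 the difference quotient
# transverse to the stable manifold x0 = 0 blows up like h**(p-1).

def md_p(x0, y0, lam=2.0, p=0.5, N=20):
    """Forward descriptor sum_{i=0}^{N-1} |x_{i+1}-x_i|^p + |y_{i+1}-y_i|^p."""
    total, x, y = 0.0, x0, y0
    for _ in range(N):
        xn, yn = lam * x, y / lam
        total += abs(xn - x) ** p + abs(yn - y) ** p
        x, y = xn, yn
    return total

for h in (1e-2, 1e-4, 1e-6):
    dq = (md_p(h, 1.0) - md_p(0.0, 1.0)) / h
    print(h, dq)   # grows like h**(p-1) as h -> 0
```

The unbounded growth of the printed quotients is exactly the "singular feature" of $MD_p$ along the stable manifold.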
\subsection{Example 3: A Hyperbolic Saddle Point for Linear, Area-Preserving Nonautonomous Maps}
\label{sec:examp3}
In this section we will consider the nonautonomous analog of Example 1 in Section \ref{sec:examp1}. Namely, we will
consider a linear, area-preserving nonautonomous map having a hyperbolic trajectory at the origin. The map that we
consider has the following form:
$$\left \{ \begin{array}{ccc}
x_{n+1} & = & \lambda_n x_n\\
y_{n+1} & = & \frac{1}{\lambda_n} y_n \\
\end{array}\right .$$
\noindent
where $\lambda_n >1, \, \forall n$. Note that $x =y =0$ is a hyperbolic trajectory with stable manifold given by
$x=0$ and unstable manifold given by $y=0$ {\em for all $n$}.
We will only compute $MD^+_p$ since the computation of $MD^-_p$ is analogous. Hence, for $MD^+_p$ we have:
\begin{eqnarray}
MD^{+}_p & = & \displaystyle{\sum ^{N-1}_{i=0} |x_{i+1}-x_i|^p + |y_{i+1}-y_i|^p =
\sum^{N-1}_{i=0} |x_i|^p|\lambda_i-1|^p + |y_i|^p|1/\lambda_i-1|^p} \nonumber \\
& & \nonumber \\
& = & \displaystyle{|x_0|^p\left (|\lambda_0-1|^p+|\lambda_0|^p|\lambda_1-1|^p +\cdots+ |\lambda_0 \cdots
\lambda_{N-2}|^p|\lambda_{N-1}-1|^p\right ) + }\nonumber\\
& & \nonumber \\
& & \displaystyle{|y_0|^p\left (|1/\lambda_0-1|^p+|1/\lambda_0|^p|1/\lambda_1-1|^p +\cdots+ |1/\lambda_0
\cdots 1/\lambda_{N-2}|^p|1/\lambda_{N-1}-1|^p\right )} \nonumber\\
& & \nonumber \\
& = & \displaystyle{|x_0|^p \left ( |\lambda_0-1|^p + \sum^{N-1}_{i=1} \left
(\prod^{i-1}_{j=0}|\lambda_j|^p \right )|\lambda_i-1|^p\right ) +} \nonumber \\
& & \displaystyle{|y_0|^p \left ( |1/\lambda_0-1|^p + \sum^{N-1}_{i=1} \left (\prod^{i-1}_{j=0}|1/\lambda_j|^p \right )|1/\lambda_i-1|^p\right ) } \nonumber
\end{eqnarray}
\normalsize
\noindent
A similar calculation gives:
\begin{eqnarray}
MD^{-}_p & = & \displaystyle{|x_0|^p \left ( |1-1/\lambda_{-1}|^p + \sum^{-N}_{i=-2} \left
(\prod^{i+1}_{j=-1}|1/\lambda_j|^p \right )|1-1/\lambda_i|^p\right )} +\nonumber\\
& & \displaystyle{|y_0|^p \left ( |1-\lambda_{-1}|^p +
\sum^{-N}_{i=-2} \left (\prod^{i+1}_{j=-1}|\lambda_j|^p \right )|1-\lambda_i|^p\right )} .\nonumber
\end{eqnarray}
\normalsize
\noindent
Combining these two expressions gives:
\begin{equation}
\begin{array}{ccl}
MD_p & = & \displaystyle{|x_0|^p f(\Lambda,p,N) + |y_0|^p g(\Lambda^{*},p,N)}\\
\end{array}
\label{DLD_lin_nonaut}
\end{equation}
\noindent
where
$$\Lambda = (\lambda_0,\lambda_1,\dots,\lambda_{N-1},1/\lambda_{-1},1/\lambda_{-2},\dots,1/\lambda_{-N})$$
and
$$\Lambda^{*} = (1/\lambda_0,1/\lambda_1,\dots,1/\lambda_{N-1},\lambda_{-1},\lambda_{-2},\dots,\lambda_{-N}).$$
Now \eqref{DLD_lin_nonaut} has the same functional form as \eqref{DLD_lin_aut}. So for $p<1$ the same argument as given
in Theorem \ref{thm:lin_aut_map} holds. Therefore, along a line transverse to the stable manifold (i.e. $x=0$), $MD_p$
is not differentiable at the point where this line intersects the stable manifold. The analogous statement holds for
the unstable manifold.
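The closed-form expression \eqref{DLD_lin_nonaut} can be checked against direct iteration of the map. The sketch below is our own illustration; the sequence $\lambda_n \in (1,2)$ is drawn at random (an assumption made only for this test), and only the forward sum $MD^+_p$ is compared:

```python
import numpy as np

# Sanity check of the closed form MD_p^+ = |x0|^p f + |y0|^p g (our
# illustration; the lambda_n are random numbers in (1, 2)).
rng = np.random.default_rng(0)
p, N = 0.5, 10
lam = 1.0 + rng.random(N)          # lambda_0, ..., lambda_{N-1} > 1

def md_plus_direct(x0, y0):
    """MD_p^+ by iterating x_{n+1} = lam_n x_n, y_{n+1} = y_n / lam_n."""
    total, x, y = 0.0, x0, y0
    for n in range(N):
        xn, yn = lam[n] * x, y / lam[n]
        total += abs(xn - x) ** p + abs(yn - y) ** p
        x, y = xn, yn
    return total

def md_plus_closed(x0, y0):
    """Closed form |x0|^p f + |y0|^p g from the displayed computation."""
    f = sum(np.prod(lam[:i]) ** p * abs(lam[i] - 1.0) ** p for i in range(N))
    g = sum(np.prod(1.0 / lam[:i]) ** p * abs(1.0 / lam[i] - 1.0) ** p
            for i in range(N))
    return abs(x0) ** p * f + abs(y0) ** p * g

print(md_plus_direct(0.3, -1.2), md_plus_closed(0.3, -1.2))  # agree
```

The two evaluations agree to machine precision, confirming the algebra leading to \eqref{DLD_lin_nonaut}.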
\subsection{Example 4: A Hyperbolic Saddle Point for a Nonlinear, Area Preserving Nonautonomous Map}
\label{sec:examp4}
We now consider a two dimensional nonlinear area-preserving nonautonomous map having the following form:
\begin{eqnarray}
x_{n+1} & = & \lambda_n x_n + f_n(x_n,y_n), \nonumber \\
y_{n+1} & = & \lambda_n^{-1} y_n + g_n(x_n,y_n), \quad (x_n, y_n) \in \mathbb{R}^2, \forall n,
\label{nonlinear_nonautonomous}
\end{eqnarray}
\noindent
where $\lambda_n >1, \, \forall n$, with $f_n (0, 0) = g_n (0, 0)=0, \, \forall n$. We assume that $f_n (\cdot, \cdot)$
and $g_n (\cdot, \cdot)$ are real valued nonlinear functions (i.e. of order quadratic or higher), that they are at least
$C^1$, and that they satisfy the constraint that the nonlinear map defined by \eqref{nonlinear_nonautonomous} is area
preserving.
Since the origin is a hyperbolic trajectory, it has (one dimensional) stable and unstable manifolds
(\cite{irwin,deblasi,KH}). We will apply the method of discrete
Lagrangian descriptors to \eqref{nonlinear_nonautonomous} and show that the stable and unstable manifolds of the origin
correspond to the ``singular features'' of $MD_p$ ($p<1$), in the sense described in Theorem \ref{thm:lin_aut_map}.
Our method of proof will be similar in spirit to how we showed the result for nonlinear autonomous maps by using
Moser's theorem. Unfortunately, there is
no analog of Moser's theorem for nonlinear, nonautonomous area preserving two dimensional maps. Nevertheless, we will
still use a ``change of variables'', or ``conjugation'' result that is a nonautonomous map version of the
Hartman-Grobman theorem due to \cite{bv06}.
The classical Hartman-Grobman (\cite{hart60a,hart60b,hart63,grob59,grob62}) theorem applies to autonomous maps in a
neighborhood of a hyperbolic fixed point. The result states that there exists a homeomorphism, defined in a
neighborhood of the fixed point, which conjugates the map to its linear part. Stated another way, the homeomorphism
provides a new set of coordinates where the map is given by its linear part in the new coordinates. There are two issues
that we must immediately face in order for this approach to work as it did for the linear and nonlinear autonomous maps.
One is the generalization of the Hartman-Grobman theorem to the setting of nonautonomous maps (this is dealt with in
\cite{bv06}) and the other is the smoothness of the conjugation (``change of coordinates'') since a derivative is
required in the application of the chain rule (see \eqref{eq:cr}).
In general, the conjugacy provided by the Hartman-Grobman theorem is not differentiable (see \cite{meyer86} for
examples). However, there has been much work in determining conditions under which the conjugacy is at least $C^1$,
see, e.g., \cite{svs90,ghr03}. Moreover, Hartman has proven (\cite{hart60b}) that in two dimensions, a $C^2$
diffeomorphism having a hyperbolic saddle can be linearized with a $C^1$ conjugacy (see also \cite{s86}). We also point
out that differentiability is a property defined pointwise, and differentiability of the conjugacy typically
fails only at the fixed point itself (see the examples in \cite{meyer86}); we are not interested in differentiability at
the fixed point, but at points along the stable and unstable manifolds of the fixed point. The conjugacy is
differentiable at these points, as is described in the lecture notes of Rauch entitled ``Conjugacy Outline'', available
at
\url{http://www.math.lsa.umich.edu/~rauch/courses.html}. This result also follows from the rectification theorem for
ordinary differential equations (\cite{arnold73}) which says that, away from points where the vector field vanishes,
the vector field is conjugate to ``rectilinear flow'', and this conjugacy is as smooth as the vector field. Note that
this result is valid for both autonomous and nonautonomous vector fields.
So setting aside the smoothness issues, we will give a brief discussion of the set-up of \cite{bv06} for the
nonautonomous Hartman-Grobman theorem. They take the phase space to be a Banach space, denoted $X$ (for us $X$ is
$\mathbb{R}^2$). The dynamics is described by a sequence of maps on $X$:
\begin{equation}
F_n (v) = A_n v + f_n (v), \quad v \in X, \, n \in \mathbb{Z}.
\label{eq:genNAmap}
\end{equation}
\noindent
Precise assumptions on $A_n$ and $f_n (v)$ are given in \cite{bv06}. In particular, $A_n$ is a hyperbolic operator,
which for us is:
\begin{equation}
A_n = \left(
\begin{array}{cc}
\lambda_n & 0 \\
0 & \lambda_n^{-1}
\end{array}
\right)
\end{equation}
\noindent
and where $f_n (v)$ is ``small'', in some sense, e.g. $f_n (0)=0$ with $f_n (v)$ satisfying a Lipschitz condition. Our
$f_n (v)$ will be at least $C^1$ and satisfy the condition for the map \eqref{nonlinear_nonautonomous} to be area
preserving.
For each $n \in \mathbb{Z}$ one constructs a homeomorphism $h_n (\cdot)$ that conjugates
\eqref{eq:genNAmap} to its linear part, i.e.,
\begin{equation}
A_n \circ h_n = h_{n+1} \circ F_n,
\end{equation}
\noindent
or, expressing this in a diagram for the full dynamics (following \cite{bv06}) we have:
\begin{equation}
\begin{array}{clclclclc}
& & F_{n-1} & & F_n & & F_{n+1} & & \\
\longrightarrow & X & \longrightarrow & X & \longrightarrow & X & \longrightarrow & X & \longrightarrow\\
& \downarrow h_{n-1} & &\downarrow h_{n}& &\downarrow h_{n+1} & & \downarrow h_{n+2} &\\
& & A_{n-1} & & A_n & & A_{n+1} & & \\
\longrightarrow & X & \longrightarrow & X & \longrightarrow & X & \longrightarrow & X & \longrightarrow
\end{array}
\end{equation}
In Section \ref{sec:examp3} we proved that the discrete Lagrangian descriptor for the linear, area preserving
nonautonomous map is singular along the stable and unstable manifolds of the hyperbolic trajectory at the origin, i.e.
$x=0$ and $y=0$, respectively. Note that the discrete Lagrangian descriptor is only a function of the initial
condition, $(x_0, y_0)$. Hence we can use the change of coordinates $h_0 (\cdot)$ and the argument given in Section
\ref{sec:examp2} to conclude that the discrete Lagrangian descriptor for the nonlinear nonautonomous area preserving
map \eqref{nonlinear_nonautonomous} is singular along the stable and unstable manifolds.
\section{Application to the Chaotic Saddle of the H\'enon Map}
\label{sec:henon}
We now illustrate the method of discrete Lagrangian descriptors for autonomous, area preserving nonlinear maps by
applying it to the H\'enon map (\cite{henon76}):
\begin{equation}
H(x,y) = (A+By-x^2,x).
\label{eq:henonmap}
\end{equation}
\noindent
The map is area preserving for $|B|=1$ and is orientation-preserving if $B<0$. Moreover, it
follows from work in \cite{dn79} that for values of $A$ larger than
\begin{equation}
A_{2} = (5 + 2\sqrt{5})(1 + |B|)^2/4,
\label{eq:ACM}
\end{equation}
\noindent
the H\'enon map has a hyperbolic invariant Cantor set which is topologically conjugate to a Bernoulli shift on two
symbols, i.e. it has a {\em chaotic saddle}. We will use the method of discrete Lagrangian descriptors to visualize this
chaotic saddle.
We consider $B=-1$ which, after substitution into \eqref{eq:ACM}, gives $A_{2} = 5 + 2\sqrt{5} \approx
9.47$; we therefore choose $A=9.5$, which satisfies the chaos condition. With these choices of parameters we have
$H(x,y) = (9.5-y-x^2,x)$. Applying the method of discrete Lagrangian descriptors to this map gives the structures shown
in Figure \ref{dibujo_chaotic_saddle_Henon}, where the chaotic saddle is the set that appears as dark blue. This
method, in contrast to other techniques for computing chaotic saddles (see for instance \cite{yorke}), has the advantage
that it simultaneously provides insight into the manifold structure associated with the chaotic saddle.
\begin{figure}
\caption{Computation of the chaotic saddle of the H\'enon map for $A =9.5, \, B = -1$, after $N=5$ iterations and
$p=0.05$.}
\label{dibujo_chaotic_saddle_Henon}
\end{figure}
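The computation behind Figure \ref{dibujo_chaotic_saddle_Henon} can be sketched as follows. This is our own reconstruction, not the authors' code; the grid bounds are an assumption, while $A = 9.5$, $B = -1$, $N = 5$ and $p = 0.05$ are the values stated above:

```python
import numpy as np

# Sketch (our reconstruction): MD_p = MD_p^+ + MD_p^- for the area-preserving
# Henon map H(x, y) = (A + B*y - x^2, x) with A = 9.5, B = -1, N = 5, p = 0.05.
A, B, p, N = 9.5, -1.0, 0.05, 5

def henon(x, y):
    return A + B * y - x ** 2, x

def henon_inv(x, y):
    # From (x', y') = (A + B*y - x^2, x): x = y', y = (x' - A + y'^2) / B.
    return y, (x - A + y ** 2) / B

def md_p(x0, y0):
    total, x, y = 0.0, x0, y0
    for _ in range(N):                       # forward piece MD_p^+
        xn, yn = henon(x, y)
        total += abs(xn - x) ** p + abs(yn - y) ** p
        x, y = xn, yn
    x, y = x0, y0
    for _ in range(N):                       # backward piece MD_p^-
        xn, yn = henon_inv(x, y)
        total += abs(x - xn) ** p + abs(y - yn) ** p
        x, y = xn, yn
    return total

xs = np.linspace(-6.0, 6.0, 200)             # grid bounds are an assumption
grid = np.array([[md_p(x, y) for x in xs] for y in xs])
# Low values of `grid` trace out the chaotic saddle and its manifolds.
```

Plotting `grid` as a heat map reproduces the qualitative structure described in the text, with the chaotic saddle appearing as the set of lowest descriptor values.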
\section{Application to the Chaotic Saddle of a Nonautonomous H\'enon Map}
\label{sec:NAhenon}
We now illustrate the method of discrete Lagrangian descriptors for nonautonomous, area preserving maps by
applying it to a nonautonomous version of the H\'enon map. In particular, in \eqref{eq:henonmap} we take:
\begin{equation}
B= -1, \quad A = 9.5 +\epsilon \, \cos (n).
\end{equation}
\noindent
For $\epsilon$ ``small'', this is a nonautonomous perturbation of the situation considered in Section \ref{sec:henon},
so that we would expect to have a structure similar to that shown in Figure \ref{dibujo_chaotic_saddle_Henon}, but
slightly varying with $n$, i.e. a nonautonomous chaotic saddle (see \cite{wiggins99}).
The discrete Lagrangian descriptor method provides us with a numerical tool to explore this question.
Figure \ref{fig:NAhenon} illustrates the phase space structure at different times for the nonautonomous H\'enon map.
Clearly the output is similar to that shown in Fig. \ref{dibujo_chaotic_saddle_Henon}, but varying with respect to $n$.
\begin{figure*}
\caption{Computation of the
chaotic saddle of the nonautonomous H\'enon map for $A =9.5 +\epsilon \, \cos (n), \, B = -1$, after $N=5$ iterations
and $p=0.05$. The output is shown for four different times.}
\label{fig:good_array}
\label{fig:bad2_array}
\label{fig:NAhenon}
\end{figure*}
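The nonautonomous computation only differs in that the map, and hence its inverse, carries a time index. The sketch below is our own reconstruction; $\epsilon = 0.1$ is an arbitrary "small" value, not one specified in the text:

```python
import numpy as np

# Sketch for the nonautonomous case (our reconstruction; eps = 0.1 is an
# arbitrary "small" value): A_n = 9.5 + eps*cos(n), B = -1, N = 5, p = 0.05.
eps, B, p, N = 0.1, -1.0, 0.05, 5

def A(n):
    return 9.5 + eps * np.cos(n)

def step(n, x, y):        # time-n map: (x, y) -> (A_n + B*y - x^2, x)
    return A(n) + B * y - x ** 2, x

def step_inv(n, x, y):    # inverse of the time-n map
    return y, (x - A(n) + y ** 2) / B

def md_p(n0, x0, y0):
    """Descriptor of the trajectory through (x0, y0) at time n0."""
    total, x, y = 0.0, x0, y0
    for k in range(N):                       # forward piece
        xn, yn = step(n0 + k, x, y)
        total += abs(xn - x) ** p + abs(yn - y) ** p
        x, y = xn, yn
    x, y = x0, y0
    for k in range(N):                       # backward piece
        xn, yn = step_inv(n0 - k - 1, x, y)
        total += abs(x - xn) ** p + abs(y - yn) ** p
        x, y = xn, yn
    return total

# Evaluating md_p(n0, ., .) on a grid for several n0 gives the
# time-dependent pictures of the nonautonomous chaotic saddle.
```

Since the descriptor now depends on the base time $n_0$, repeating the grid evaluation for several values of $n_0$ yields the slowly varying pictures described in the text.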
\section{Summary and Conclusions}
\label{sec:summ}
In this paper we have generalized the notion of Lagrangian descriptors to autonomous and nonautonomous maps. We have
restricted our discussion to two dimensional, area preserving maps, but with additional work it should be possible to
remove these restrictions.
In the discrete time setting explicit expressions for the Lagrangian descriptors were derived, and for the $\ell^p$
norm, $p <1$, we proved a theorem that gave rigorous meaning to the statement that ``singular sets'' of the Lagrangian
descriptors correspond to the stable and unstable manifolds of hyperbolic invariant sets.
\section*{\bf Acknowledgments.} The research of AMM is supported by the MINECO under grant MTM2011-26696. The research
of SW is supported by ONR Grant No.~N00014-01-1-0769. We acknowledge support from MINECO: ICMAT Severo Ochoa project
SEV-2011-0087.
\end{document}
\begin{document}
\allowdisplaybreaks
\title{Weak Dirichlet processes and generalized martingale problems}
\begin{abstract}
In this paper we explain how the notion of {\it weak Dirichlet process}
is the natural generalization of that of semimartingale with jumps.
For such a process we provide a unique decomposition which is new also for semimartingales:
in particular we introduce {\it characteristics} for weak Dirichlet processes. We also introduce a weak concept (in law) of finite quadratic variation. We investigate a set of new useful chain rules
and we discuss a general framework of (possibly path-dependent with jumps)
martingale problems with a set of examples
of SDEs with jumps driven by a distributional drift.
\end{abstract}
{\bf Key words:}
Weak Dirichlet processes; c\`adl\`ag semimartingales; jump processes;
martingale problem; singular drift; random measure.
\\
{\small\textbf{MSC 2020:}
60H10; 60G48; 60G57
}
\section{Introduction}
The central notion of this work is the one of
{\it weak Dirichlet process} with jumps and the related
{\it martingale problem}.
In this work we want in particular to convince the reader
that the concept of weak Dirichlet process plays a
central role similar to that of semimartingale.
An $({\cal F}_t)$-weak Dirichlet process $X$
is the
sum of an $({\cal F}_t)$-local martingale $M$ and an $({\cal F}_t)$-martingale orthogonal process $\Gamma$, usually
normalized to vanish at zero. When self-explanatory, the filtration will be omitted. A {\it martingale orthogonal process} $A$
is one such that $[A,N] = 0$ for every continuous
martingale $N$; in particular, every purely discontinuous martingale is a martingale orthogonal process. The process $\Gamma$ replaces the usual bounded variation
component $V$ appearing when $X$ is a semimartingale. As a matter of fact, any bounded variation process is $({\cal F}_t)$-martingale orthogonal, see Proposition 2.14 in \cite{BandiniRusso1}.
When $X$ is a continuous process, the notion of weak Dirichlet process was introduced in \cite{er}
and largely investigated in \cite{gr}.
In \cite{BandiniRusso1}, we extended the concept of weak Dirichlet process
to the jump case, with the related calculus. In particular, generalizing the notion of special semimartingale, we introduced that of special weak Dirichlet process $X = M + \Gamma,$
where $M$ is a (possibly c\`adl\`ag) local martingale
and $\Gamma$ is predictable;
in that case the decomposition $X = M + \Gamma$ is unique.
In fact this notion was introduced earlier in \cite{cjms}
and partially studied there, without the qualifier ``special''.
An important feature of the calculus beyond semimartingales
is the one of stability, which often constitutes a generalization of It\^o formula. If $X$ is a c\`adl\`ag semimartingale and $f\in C^2(\mathbb R)$, we recall that
$f(X)$ is again a semimartingale by a direct application of It\^o formula.
However, if $f$ is only of class $C^1(\mathbb R)$, then $f(X)$ is generally
no longer a semimartingale; nevertheless it
remains a finite quadratic variation process,
see \cite{rv4}
when $X$ is a continuous process. For instance, if $X$ is a Brownian motion and $f$ is not the difference of convex functions, then it is well-known that $f(X)$ is not a semimartingale. On the other hand, if $f$ is bounded and of class $C^1$ and $X$ is a Poisson process, then $f(X)$ is a special semimartingale.
A c\`adl\`ag process $X$ will be called \emph{finite quadratic variation process} (see \cite{rv95, BandiniRusso1})
whenever the u.c.p. limit (which will be denoted by $[X,X]$) of
$ [X,X]^{ucp}_{\varepsilon}$ exists, where
\begin{equation}
\label{Appr_cov_ucpI}
[X,Y]^{ucp}_{\varepsilon}(t):= \,\int_{]0,\,t]}\,
\frac{(X((s+\varepsilon)\wedge t)-X(s))(Y((s+\varepsilon)\wedge t)-Y(s))}{\varepsilon}\,ds.
\end{equation}
By Lemma 2.10 in \cite{BandiniRusso1}, we know that
\begin{equation} \label{QVC}
[X,X] = [X, X]^c + \sum_{s \leq \cdot} |\Delta X_s|^2,
\end{equation}
where $[X,X]^c$ is the continuous component.
The covariation of two c\`adl\`ag
processes $X$ and $Y$ was defined in \cite{rv95} as the u.c.p. limit
of $[X,Y]^{ucp}_{\varepsilon}$, whenever it exists.
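To fix ideas, the regularized bracket \eqref{Appr_cov_ucpI} can be approximated numerically. The sketch below is our own illustration (not part of the paper): a Brownian path is simulated on a uniform grid, the $ds$-integral is replaced by a Riemann sum, and the well-known identity $[W,W]_t = t$ is recovered in the limit $\varepsilon \to 0$; grid size and seed are arbitrary:

```python
import numpy as np

# Numerical illustration (ours) of the regularized bracket: for a simulated
# Brownian path W on [0, T], the approximation [W, W]^{ucp}_eps(T) should be
# close to T for small eps. The ds-integral is replaced by a Riemann sum.
rng = np.random.default_rng(1)
T, n = 1.0, 200_000
dt = T / n
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

def bracket_eps(X, Y, eps):
    """Riemann-sum version of
    (1/eps) * int_0^T (X((s+eps) ^ T) - X(s)) * (Y((s+eps) ^ T) - Y(s)) ds."""
    k = int(round(eps / dt))                              # shift eps in grid points
    idx = np.minimum(np.arange(len(X)) + k, len(X) - 1)   # index of (s+eps) ^ T
    return np.sum((X[idx] - X) * (Y[idx] - Y)) * dt / eps

for eps in (0.05, 0.01, 0.002):
    print(eps, bracket_eps(W, W, eps))   # approaches T = 1 up to Monte Carlo error
```

For a c\`adl\`ag path with jumps the same estimator picks up the sum of squared jumps, consistently with the decomposition \eqref{QVC}.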
The notion of quadratic variation for a non-semimartingale $X$
was introduced in \cite{fo} by means of discretizations (instead
of regularizations); we denote it here by $[X]$.
It was also proved there that if $f \in C^1(\mathbb R)$ and $[X]$ exists, then
$[f(X)]$ also exists.
A natural extension of the notion of semimartingale is that of
$({\cal F}_t)$-Dirichlet process introduced in \cite{FolDir} still
in the discretization approach, which is the (unique) sum
of an $({\cal F}_t)$-local martingale and a zero quadratic variation process
(vanishing at zero).
For us, an $({\cal F}_t)$-Dirichlet process will be the analogous
concept in the regularization approach.
Let $X$ be such a Dirichlet process, which is obviously in
particular an $({\cal F}_t)$-(even special) weak Dirichlet process. Let
$X = M + \Gamma$ be its unique decomposition.
Observe that the notion of Dirichlet process does not
naturally fit the
jump case; indeed $[\Gamma,\Gamma] = 0$ implies
that $\Gamma$ is continuous by \eqref{QVC}, therefore predictable.
On the other hand, since
$X$ is a finite quadratic variation process, if $f \in C^1(\mathbb R)$ then
$f(X)$ is also a finite quadratic variation process but it is not necessarily
a Dirichlet process, see
\cite{BandiniRusso_DistrDrift}.
For applications (for instance to control problems and BSDEs theory, see e.g. \cite{gr1, BandiniRusso2, FuhrmanTessitore}), it is useful to investigate stability for functions
$f \in C^{0,1}([0,\,T] \times \mathbb R; \mathbb R)$.
Let $f \in C^{0,1}([0,\,T] \times \mathbb R; \mathbb R)$ and $X$ be a finite quadratic variation process. In general we cannot expect $f(t,X_t)$ to be a finite quadratic variation process: for instance a very irregular function $f$ not depending on $x$ may not be of finite quadratic variation.
If $X$ is a continuous weak Dirichlet process with finite quadratic variation, it is known that $f(t, X_t)$ is a weak Dirichlet process, see
Proposition 3.10 in \cite{gr}.
This stability result was extended to the case where $X$ is a discontinuous
process, provided a specific relation between the jumps of $X$ and $f$ holds, see \cite{BandiniRusso1}. Under these conditions, $f(t, X_t)$ is a special weak Dirichlet process. Recently, an interesting generalization in the continuous framework has been provided in \cite{BLT}, where $f$ is a $C^{0,1}$-path-dependent functional in the sense of the horizontal-vertical Dupire derivative.
A former work including $C^{1,2}$-chain rules for a significant class of path-dependent processes with jumps is \cite{cordoni}.
The family of weak Dirichlet processes has also interesting connections
with the so called stochastically controlled processes,
see \cite{GOR}, and also the related interesting recent reference \cite{Hocquet}.
As mentioned earlier, the second central notion of the present
paper is the one of martingale problem, whose classical notion is due to Stroock and Varadhan,
see e.g. \cite{Stroock_Varadhan}. In general, one says that a process
$X$ is a solution to the martingale problem with respect to a probability
$\mathbb P$ (on some probability space), to some domain
${\mathcal D} $ and to a time-indexed family of operators $L_t$ if, for every $f \in {\mathcal D}$,
$$ f(X_t) - f(X_0) - \int_0^t L_sf(X_s) ds, $$
is a $\mathbb P$-local martingale.
One also says that $(X,\mathbb P)$ is a solution to the martingale problem related to
${\mathcal D}$ and $L_t$.
In the classical martingale problem in \cite{Stroock_Varadhan}
one takes ${\mathcal D} = C^2(\mathbb R^d)$ and $L_t$ is a
second order PDE operator. One also knows that
solving the Stroock-Varadhan martingale problem is equivalent
to solving an SDE in law (i.e. in the weak sense).
In more singular situations, the notion of SDE seems
difficult to exploit and define, and in that case it
is substituted by the more flexible notion of martingale problem.
This is the case for instance when one investigates
the notion of {\it SDEs with distributional drift}, i.e.
of the
type
$$ X_t = X_0 + \int_0^t \sigma(X_s) dW_s + \textup{``}\int_0^t b(X_s) ds\textup{''},$$
and $b$ is a Schwartz distribution. In this case it is more convenient
to express those processes as solutions to a martingale problem
with respect to some domain $ {\mathcal D} $ which is a suitable
subset of $C^1(\mathbb R)$, where
the formal map $L_t f(x) := \frac{1}{2} \sigma^2(x) f''(x) + b(x) f'(x)$
is well-defined (independently of $t$).
We remark that this is not the case in general for $f \in C^\infty_0(\mathbb R)$.
This was the object of \cite{frw1,frw2,rtrut}.
Later extensions to the multi-dimensional case were performed,
see \cite{issoglio, diel, cannizzaro}.
Generalizations to the path-dependent case were done
by \cite{ORT1_PartI, ORT1_Bessel}.
In all these cases the solutions are constituted by continuous processes which are not necessarily semimartingales.
In the literature of jump processes the notion of
martingale problem has followed two different routes.
The first one is a new formulation of martingale problem
given in \cite{JacodBook} where the
formulation makes essentially use of the notion
of characteristics. This approach is particularly natural in the purely discontinuous framework, see e.g. \cite{BandiniCalviaColaneri}.
The second one continues the Stroock-Varadhan approach, see e.g.
\cite{LaachirRusso, barrasso3}.
We describe now the main contributions of the paper.
\begin{enumerate}
\item First we formulate a unique decomposition
for a weak Dirichlet process $X = X^c + A$,
where $X^c$ is a continuous local martingale
and $A$ a martingale orthogonal process vanishing at zero,
see Proposition \ref{P:uniqdec}.
Until now unique decompositions of weak Dirichlet processes
were established when $X$ has the special weak Dirichlet property,
in particular if $X$ is a special semimartingale.
Even when $X = M + V$ is a general c\`adl\`ag semimartingale,
no unique decomposition was available because
the bounded variation property of $V$ was not
enough to determine it; recall that purely
discontinuous martingales often have bounded variation.
\item
Let $X$ be a c\`adl\`ag process satisfying
\begin{equation}\label{int_small_jumps}
\sum_{s \leq \cdot} |\Delta X_s|^2
< \infty \,\,\,\,\textup{a.s.}
\end{equation}
Notice that condition \eqref{int_small_jumps} is equivalent to asking that
$ (1 \wedge |x|^2) \star \mu^X \in \mathcal A_{\textup{loc}}^+$
(see Proposition \ref{P:new}), where $\mu^X$ is the jump measure related to $X$ defined in \eqref{jumpmeasure}.
\\
In Corollary \ref{C:3.21} we prove that if $X$ is
a weak Dirichlet process then it
is a special weak Dirichlet process if and only if
\begin{equation}\label{int_big_jumps_INTRO}
x \,\ensuremath{\mathonebb{1}}_{\{|x| > 1\}} \star \mu^X \in \mathcal{A}_{\textup{loc}}.
\end{equation}
This result in particular extends the classical characterization of a special semimartingale, see Proposition 2.29, Chapter II, in \cite{JacodBook}. We recall that if $X$ is a special weak Dirichlet process and a semimartingale, then it is a special semimartingale, see Proposition 5.14 in \cite{BandiniRusso1}.
\\
More generally, let $v: \mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ be continuous.
In Theorem \ref{T:spw} we
give a necessary and sufficient condition on the weak Dirichlet process $Y_t=v(t, X_t)$ to be a special weak Dirichlet process, namely
\begin{equation*}
(v(s,X_{s-}+x)-v(s,X_{s-})) \,\ensuremath{\mathonebb{1}}_{\{|x| >1\}} \star \mu^X \in \mathcal{A}_{\textup{loc}}.
\end{equation*}
Notice that in the literature only sufficient conditions were available for $Y$ to be a special weak Dirichlet process, see for instance Theorem 5.31 in \cite{BandiniRusso1}.
\item In Theorem \ref{T:2.11} we provide a first chain rule expanding a process $v(s, X_s)$, where $v$ has no regularity at all,
that extends a similar chain rule established in \cite{BandiniRusso1} but only for purely jump processes. Indeed, under the conditions \eqref{cond5.36} and \eqref{G1_cond}, if $v(t,X_t)$ is an $\mathbb F$-weak Dirichlet process with unique continuous martingale component $Y^c$, then we have
\begin{align}\label{dec_wD1_intro}
Y &= Y^c + (v(s,X_{s-} + x)-v(s,X_{s-}))
\frac{k(x)}{x}\,\star (\mu^X- \nu^X) \notag\\
&+ \Gamma^{k}(v) + (v(s,X_{s-} + x)-v(s,X_{s-}))
\frac{x- k(x)}{x}\,
\star \mu^X,
\end{align}
with
$ \Gamma^{k}(v)$ a predictable and martingale orthogonal process.
Our result constitutes an important tool to solve the identification problem for a BSDE driven by a random measure and a Brownian motion, see Remark \ref{R:previouschainrule}.
\item
We relax the notion of finite quadratic variation process, by giving the notion of weakly finite quadratic variation, see Definition \ref{D:weakfin}.
This is a notion which is more related to the convergence in law of subsequences. The u.c.p. convergence of \eqref{Appr_cov_ucpI} is replaced by
the requirement that, for every $T>0$, the family $([X,X]^{ucp}_{\varepsilon}(T))_{0 < \varepsilon \le \varepsilon_0}$ is tight
for $\varepsilon_0 >0$ small enough.
A classical example of weakly finite quadratic variation
process comes up when
$$ \sup_{0 < \varepsilon \le \varepsilon_0}
[X,X]_\varepsilon^{ucp}(T) < \infty \,\,\textup{a.s.},$$
see Remark \ref{R:twoitems}-(i).
Another example is given when $X$
has finite energy, see Remark \ref{R:twoitems}-(ii).
It is not difficult to exhibit
a process with finite energy which has no quadratic variation, see Example \ref{E31}.
Notice that condition \eqref{int_small_jumps} holds if $X$ is a finite quadratic variation process, and it also holds under the weaker condition that $X$ is a weakly finite quadratic variation process, see Proposition \ref{P:3.12}.
\item Let $v \in C^{0,1}(\mathbb R_+ \times \mathbb R)$. If $X$ is a weakly finite quadratic variation process, then Theorem
\ref{P:3.10} states that
the process $Y = v(\cdot,X)$ is a weak Dirichlet process,
and identifies the unique continuous local martingale component
$Y^c$ of $Y$ as
$$
Y^c = Y_0 + \int_{0}^{\cdot}\partial_x v(s, X_{s})\,dX^c_s.
$$
The combined results in Theorems \ref{T:2.11}
and \ref{P:3.10} constitute indeed a $C^{0,1}$-type chain rule for weak Dirichlet processes
which generalizes Theorem 5.15
in \cite{BandiniRusso1}.
As far as special weak Dirichlet processes are concerned, the corresponding $C^{0,1}$-type chain rule is given in Corollary \ref{C:new} generalizing Theorem 5.31 in \cite{BandiniRusso1},
see also Remark \ref{R:3.10bis}.
Notice that in \cite{BandiniRusso1}
$X$ was also
supposed to be a finite quadratic variation process.
\item
We introduce the notion of {\it characteristics} $(B^k, C, \nu)$ for weak Dirichlet processes (see Definition \ref{D:genchar}), that extends the corresponding one for semimartingales.
We remark that, if $X$ is a weak Dirichlet process, then $\Gamma^k(Id)=B^k\circ X$ with $ \Gamma^{k}(v)$ defined in \eqref{dec_wD1_intro}, see Corollary \ref{C:genchar} and Remark \ref{R:cor}. Given the characteristics $(B^k, C, \nu)$ of a weak Dirichlet process $X$, a natural question is to determine the characteristics of a process $h(\cdot, X)$, where $h \in C^{0,1}$. In fact, it is possible to provide the second and third characteristic of $h(\cdot, X)$ in terms of $C$ and $\nu$, see Remark \ref{R:332}, while it is a challenging problem to evaluate the first characteristic. Nevertheless, we are able to solve this problem in the case when $h$ is bijective and time-homogeneous, see Remark \ref{R:Rred}.
\item We introduce a notion of martingale problem, which applies in a general framework including possibly non-Markovian jump processes and non-semimartingales, by generalizing the classical Stroock-Varadhan martingale problem with respect to some domain $\mathcal D_{\mathcal A}\subseteq C^{0,1}$ (replacing $\mathcal D$) and operator $\mathcal A$ (replacing $\partial_ t + L_t$), see Definition \ref{D:mtpb1}.
Moreover, here the Lebesgue measure $dt$ can be substituted by some random kernel.
\\
Let $X$ be a c\`adl\`ag weakly finite quadratic variation process. The fact that $X$ is a solution to some martingale problem in the sense of Definition \ref{D:mtpb1} does not imply that it is a semimartingale (indeed, we are in particular interested in the case where $X$ is not a semimartingale).
Among others, if $X$ is a solution of a martingale problem with respect to $\mathcal D_{\mathcal A}$ and $\mathcal A$, with $\mathcal D_{\mathcal A}$ dense in $C^{0,1}$, we do not even know if $X$ is a weak Dirichlet process.
Corollary \ref{C:3.30bis} provides necessary and sufficient conditions under which $X$ is weak Dirichlet.
To get those conditions, Theorem \ref{T:new4.4} together with Proposition \ref{T:3.30_bis} give some crucial preparatory stochastic calculus tools.
\item Section \ref{S:mtpb_hom} relates our inhomogeneous formulation
of martingale problem with the (more classical) time-homogeneous
expression. Knowing that a process $X$ solves some martingale
problem (in the time-homogeneous sense), the fact that it also
solves a non-homogeneous martingale problem corresponds
to some general chain rule.
\item In Section \ref{SSExamples} we discuss
five classes of examples of martingale problems. The first two are respectively the case of general semimartingales and the case where there is a bijective function $h$ in $C^{0,1}$ such that $h(t, X_t)$ is a semimartingale.
The third one concerns discontinuous processes
solving martingale problems with distributional drift.
For this, existence and uniqueness is
discussed systematically in the companion
paper \cite{BandiniRusso_DistrDrift}.
The fourth one is about continuous path-dependent
problems involving distributional drifts.
The fifth one concerns the martingale problem
solved by a piecewise deterministic Markov process.
\end{enumerate}
\section{Preliminaries and notations}\label{SPrelim}
In the sequel we will consider the space of functions
$
u: \mathbb R_+ \times \mathbb R \rightarrow \mathbb R$, $(t,x)\mapsto u(t,x)$,
which are of class $C^{0,1}$ or $C^{1,2}$.
$C^{0,1}_b$ (resp. $C^{1,2}_b$) stands for the class of bounded
functions which belong to $C^{0,1}$ (resp. $C^{1,2}$).
$C^{0,1}$ is equipped with the topology of uniform convergence of $u$ and $\partial_x u$ on each compact set.
$C^0$ (resp. $C^0_b$) will denote the space of continuous functions (resp. continuous and bounded functions) on $\mathbb R$ equipped with the topology of uniform convergence on each compact set (resp. equipped with the topology of uniform convergence). $C^1$ (resp. $C^2$) will be the space of continuously differentiable (resp. twice continuously differentiable) functions $u:\mathbb R\rightarrow \mathbb R$. $C^{1}_b$ (resp. $C^{2}_b$) stands for the class of bounded
functions which belong to $C^{1}$ (resp. $C^{2}$).
$D(\mathbb R_+)$ will denote the space of real c\`adl\`ag functions on $\mathbb R_+$.
Let $T>0$ be a finite horizon.
$C^{0,1}([0,T]\times \mathbb R)$ will denote the space of functions in $C^{0,1}$ restricted to $[0,T]\times \mathbb R$. In the following, $D(0,\,T)$ (resp.
$D_{-}(0,\,T)$, $C(0,\,T)$, $C^1(0,\,T)$)
will indicate the space of real c\`adl\`ag
(resp. c\`agl\`ad, continuous, continuously differentiable) functions on $[0,\,T]$.
These spaces
are equipped with the uniform norm.
We will also indicate by $||\cdot||_{\infty}$ the essential supremum norm and by $||\cdot||_{var}$ the total variation norm.
Given a topological space $E$, in the sequel $\mathcal{B}(E)$ will denote
the Borel $\sigma$-field associated with $E$.
A stochastic basis $(\Omega, \mathcal F, \mathbb F, \mathbb P)$ is fixed
throughout the section. We will suppose that
$\mathbb F=(\mathcal F_t)$ satisfies the usual conditions.
By convention, any c\`adl\`ag process (or function) defined on $[0,\,T]$ is extended to
$\mathbb R$ by continuity.
A similar convention is made for random fields (or functions) on $[0,T] \times \mathbb R$.
Related to $\mathbb F$,
the symbol $\mathbb{D}^{ucp}$
will denote the space of all adapted c\`adl\`ag
processes endowed with the u.c.p. (uniform convergence in probability)
topology on each compact interval.
$\mathcal{P}$ (resp. $\mathcal{\tilde{P}}:=\mathcal{P}\otimes \mathcal{B}(\mathbb R)$) will denote the predictable $\sigma$-field on $\Omega \times \mathbb R_+$ (resp. on $\tilde{\Omega} := \Omega \times \mathbb R_+\times \mathbb R$).
For a random field $W$, the simplified notation $W \in\mathcal{\tilde{P}}$ means that $W$ is $\mathcal{\tilde{P}}$-measurable.
A process $X$ indexed by $\mathbb R_+$ will be said to be with integrable variation if the expectation of its total variation is finite.
$\mathcal{A}$ (resp. $\mathcal{A}_{\textup{loc}}$) will denote the collection of all adapted processes with integrable variation (resp. with locally integrable variation), and $\mathcal{A}^+$ (resp. $\mathcal{A}_{\textup{loc}}^+$) the collection of all adapted integrable increasing (resp. adapted locally integrable increasing) processes.
The meaning of ``locally'' is the usual one, referring
to localization by stopping times, see e.g. (0.39) of
\cite{jacod_book}.
The concept of random measure
will be extensively used throughout the paper.
For a detailed discussion on this topic and the unexplained notations,
we refer to
Chapter I and Chapter II, Section 1, in \cite{JacodBook}, Chapter III in \cite{jacod_book}, and Chapter XI, Section 1, in \cite{chineseBook}.
In particular, if $\mu$ is a random measure on $[0,\,T]\times \mathbb R$, for any measurable real function $H$ defined on $\Omega \times [0,\,T] \times \mathbb R$, one denotes $H \star \mu_t:= \int_{]0,\,t] \times \mathbb R} H(\cdot, s,x) \,\mu(\cdot, ds \,dx)$,
whenever the integral on the right-hand side is defined
(with possibly infinite values).
We recall that a transition kernel $Q(e, dx)$ from
a measurable space $(E, \mathcal E)$ into another measurable space $(G,\mathcal G)$ is a family
$\{Q(e, \cdot): e \in E\}$ of positive measures on $(G,\mathcal G)$ such that $Q(\cdot ,C)$ is $\mathcal E$-measurable for each $C \in \mathcal G$, see
for instance Section 1.1, Chapter I, of \cite{JacodBook}.
Let $X$ be an adapted c\`adl\`ag process.
We define the
corresponding jump measure $\mu^X$ by
\begin{equation}\label{jumpmeasure}
\mu^X(dt\,dx)= \sum_{s >0} \ensuremath{\mathonebb{1}}_{\{\Delta X_s \neq 0\}}\, \delta_{(s, \Delta X_s)}(dt\,dx).
\end{equation}
We denote by $\nu^X= \nu^{X, \mathbb P}$ the compensator of $\mu^X$,
see \cite{JacodBook} (Theorem 1.8, Chapter II).
The dependence on $\mathbb P$ will be omitted when self-explanatory.
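To make the formula $H \star \mu^X_t$ concrete, here is a minimal Python sketch (not from the paper): for a deterministic c\`adl\`ag path with finitely many jumps, the integral against $\mu^X$ reduces to a finite sum over jump times. The jump times and sizes below are hypothetical.

```python
# Sketch: integral of H against the jump measure mu^X of a path with
# finitely many jumps.  Since mu^X = sum_s 1_{Delta X_s != 0} delta_{(s, Delta X_s)},
# we have  H * mu^X_t = sum_{0 < s <= t, Delta X_s != 0} H(s, Delta X_s).
jumps = [(0.5, 1.0), (1.2, -2.0), (3.0, 0.5)]  # hypothetical (s, Delta X_s)

def integral_vs_jump_measure(H, t, jumps):
    # finite-sum version of H * mu^X_t
    return sum(H(s, dx) for (s, dx) in jumps if 0 < s <= t and dx != 0)

# Example: H(s, x) = x^2 recovers the sum of squared jumps up to time t.
sum_sq = integral_vs_jump_measure(lambda s, x: x * x, 2.0, jumps)  # 1 + 4 = 5
```

Only the jumps at times $0.5$ and $1.2$ contribute for $t=2$; the jump at time $3$ is excluded.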
For any random field
$W$,
we set
\begin{align*}
\hat{W}_t = \int_{\mathbb R} W_t(x)\,\nu^X(\{t\} \times dx),
\quad
\tilde{W}_t = \int_{\mathbb R} W_t(x)\,\mu^X(\{t\}\times dx)
-\hat{W}_t,
\end{align*}
whenever they are well-defined.
We also define
$$
C(W) := |W- \hat W|^2\star \nu^X + \sum_{s \leq \cdot}|\hat{W}_s|^2\,(1-\nu^X(\{s\}\times \mathbb R)),
$$
and, for every $q \in [1,\,\infty[$, the linear spaces
\begin{align*}
\mathcal{G}^q(\mu^X)=\Big\{W \in \mathcal{\tilde{P}}
:\,\, \forall s \geq 0 \,\int_\mathbb R |W(s,x)|\,\nu^X(\{s\} \times dx)< \infty, \,\,
\Big[\sum_{s\leq \cdot}|\tilde{W}_s|^2\Big]^{q/2}\in \mathcal{A}^+\Big\},\quad
\\
\mathcal{G}^q_{\textup{loc}}(\mu^X)=\Big\{W \in \mathcal{\tilde{P}}
: \,\, \forall s \geq 0 \,\int_\mathbb R |W(s,x)|\,\nu^X(\{s\} \times dx)< \infty, \,\,
\Big[\sum_{s\leq \cdot}|\tilde{W}_s|^2\Big]^{q/2}\in \mathcal{A}_{\textup{loc}}^+\Big\}.\nonumber
\end{align*}
For a random field $W$ on $[0,T] \times \mathbb R$ we set the norms
$
||W||^2_{\mathcal{G}^2(\mu^X)}
:=\sper{C(W)_T}$,
$||W||^2_{\mathcal{L}^2(\mu^X)}:=\mathbb E[ |W|^2 \star\nu^X_T]$,
and the space
$\mathcal{L}^2(\mu^X):=\{W \in \tilde{\mathcal{P}}
:\,\, ||W||_{\mathcal{L}^2(\mu^X)}< \infty\}
$.
If $W \in \mathcal G_{\textup{loc}}^1(\mu^X)$, we call stochastic integral of $W$ with respect to $\mu^X-\nu^X$, and we denote it by $W \star(\mu^X-\nu^X)$, any purely discontinuous local martingale $M$ such that $\Delta M$ and $\tilde W$ are indistinguishable, see Definition 1.27, Chapter II, in \cite{JacodBook}.
We recall that, if $W \in \tilde {\mathcal P}$ is such that $|W|\star \mu^X \in \mathcal{A}_{\textup{loc}}^+$, then $W \in \mathcal G_{\textup{loc}}^1(\mu^X)$ and
$
W \star (\mu^X - \nu^X) = W \star \mu^X - W \star \nu^X$, see Theorem 1.28, Chapter II, in \cite{JacodBook}. Moreover, by Theorem 11.21, point 3) in \cite{chineseBook},
the following statements are equivalent:
\begin{enumerate}
\item $W \in \mathcal{G}^2_{\textup{loc}}(\mu^X)$;
\item
$
C(W) \in \mathcal{A}_{\textup{loc}}^+
$;
\item $W \star (\mu^X - \nu^X)$ is a square integrable local martingale.
\end{enumerate}
In this case
$\langle W \star (\mu^X-\nu^X), W \star (\mu^X-\nu^X)\rangle = C(W)$.
Finally, if $W \in \mathcal{L}^2(\mu^X)$
then $W \in \mathcal{G}^2(\mu^X)$, and
$C(W)= |W|^2 \star\nu^X- \sum_{s \leq\cdot}|\hat{W}_s|^2$.
In this case
$||W||^2_{\mathcal{G}^2(\mu^X)} \leq ||W||^2_{\mathcal{L}^2(\mu^X)}$.
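As a sanity check of the two expressions for $C(W)$ and of the inequality $||W||^2_{\mathcal{G}^2(\mu^X)} \leq ||W||^2_{\mathcal{L}^2(\mu^X)}$, one can evaluate them in a purely formal toy model (not from the paper): a single deterministic $\omega$ and a measure $\nu$ supported on finitely many atoms $(s,x)$ with $\nu(\{s\}\times\mathbb R)\leq 1$. The atoms and the field $W$ below are made up for illustration.

```python
# Toy discrete nu: {time s: {jump size x: mass}}, with nu({s} x R) <= 1.
nu = {1.0: {1.0: 0.3, -1.0: 0.2}, 2.0: {2.0: 0.4}}
W = {(1.0, 1.0): 1.0, (1.0, -1.0): 2.0, (2.0, 2.0): 0.5}

# W_hat_s = int_R W(s, x) nu({s} x dx), and the total mass nu({s} x R).
W_hat = {s: sum(W[(s, x)] * m for x, m in atoms.items()) for s, atoms in nu.items()}
mass = {s: sum(atoms.values()) for s, atoms in nu.items()}

# C(W)_T via the definition: |W - W_hat|^2 * nu + sum_s |W_hat_s|^2 (1 - nu({s} x R)).
C_def = sum((W[(s, x)] - W_hat[s]) ** 2 * m
            for s, atoms in nu.items() for x, m in atoms.items())
C_def += sum(W_hat[s] ** 2 * (1.0 - mass[s]) for s in nu)

# C(W)_T via the identity valid for W in L^2(mu^X): |W|^2 * nu - sum_s |W_hat_s|^2.
L2_norm_sq = sum(W[(s, x)] ** 2 * m
                 for s, atoms in nu.items() for x, m in atoms.items())
C_id = L2_norm_sq - sum(W_hat[s] ** 2 for s in nu)
```

Both expressions agree, and $C(W)_T \leq |W|^2\star\nu_T$, in accordance with the inequality above.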
\section{Weak Dirichlet processes: the suitable generalization of semimartingales with jumps}
\label{S3}
\subsection{A new unique decomposition}
\label{S31}
A stochastic basis $(\Omega, \mathcal F, \mathbb F, \mathbb P)$ is fixed
throughout the section. Sometimes the dependence on $\mathbb F$ will be omitted.
Given an adapted (c\`adl\`ag) process $X$ on it, we will denote by $\mu^X$ its jump measure given in \eqref{jumpmeasure} and by $\nu^X$ the corresponding compensator.
We recall that an $\mathbb F$-weak Dirichlet process is a process of the type
\begin{equation} \label{GenDec}
X= M+ \Gamma,
\end{equation}
where $M$ is an $\mathbb F$-local martingale and $\Gamma$ is an $\mathbb F$-martingale orthogonal process vanishing at zero, while a special weak Dirichlet process is a weak Dirichlet process $X=M+\Gamma$, where $\Gamma$ is in addition predictable, see Definitions 5.5 and 5.6 in \cite{BandiniRusso1}.
For complementary results, the reader can consult Section 5 in \cite{BandiniRusso1}.
\begin{remark}\label{R:decomp_mart}
Any local martingale $M$ can be uniquely decomposed as the sum of a continuous local martingale $M^c$ and a purely discontinuous local martingale $M^d$ such that $M^d_0 =0$,
see Theorem 4.18, Chapter I, in \cite{JacodBook}.\end{remark}
The decomposition \eqref{GenDec} is not unique, but the result below proposes a particularly natural one, which is unique.
\begin{proposition}\label{P:uniqdec}
Let $X$ be a c\`adl\`ag $\mathbb F$-weak Dirichlet process. Then there exist a unique continuous $\mathbb F$-local martingale $X^c$
and a unique $\mathbb F$-martingale orthogonal process $A$ vanishing at zero such that
\begin{equation}\label{E:dec}
X=X^c + A.
\end{equation}
\end{proposition}
\proof
\noindent \emph{ Existence.}
Since $X$ is an $\mathbb F$-weak Dirichlet process, by \eqref{GenDec} it is a process of the type $X= M+ \Gamma$, with $M$ an $\mathbb F$-local martingale and $\Gamma$ an $\mathbb F$-martingale orthogonal process vanishing at zero.
Recalling Remark \ref{R:decomp_mart}, it follows that $X$ admits the decomposition
\begin{equation}\label{weakDir}
X= M^c + M^d + \Gamma,
\end{equation}
that provides \eqref{E:dec} by setting $A := M^d + \Gamma$ and $X^c:= M^c$.
\noindent \emph{Uniqueness.}
Assume that $X$ admits the two decompositions
\begin{align*}
X&= M^1 + A^1, \quad
X = M^2 + A^2,
\end{align*}
with $M^1, M^2$ continuous $\mathbb F$-local martingales and $A^1, A^2$ $\mathbb F$-martingale orthogonal processes vanishing at zero.
So we have $0 = M^{1}-M^{2} + A^1-A^2$. Taking the covariation of the previous identity with $M^{1}-M^{2}$, and using that $A^1-A^2$ is an $\mathbb F$-martingale orthogonal process,
we get $ [M^{1}-M^{2},M^{1}-M^{2}] \equiv 0$. Since $M^1 - M^2$ is a continuous local martingale vanishing at zero,
we finally obtain $M^1 = M^2$, and so $A^1 = A^2$.
\endproof
\begin{remark} \label{R32}
Notice that decomposition \eqref{weakDir} of the weak Dirichlet process $X$ is not unique.
\end{remark}
\begin{remark} \label{Edec}
The unique decomposition \eqref{E:dec} holds (and is new)
in particular for semimartingales.
On the other hand, when $X$ is a semimartingale, the notion of $X^c$ (as the unique continuous martingale component)
was introduced in Definition 2.6, Chapter II, in \cite{JacodBook} (in that case one fixes $X_0^c=0$). There, a unique decomposition was provided
only after fixing a truncation function.
\end{remark}
\begin{proposition}\label{P:2.14}
Let $X$ be an $\mathbb F$-semimartingale. Then $[X, X]^c = \langle X^c, X^c\rangle$.
\end{proposition}
\proof
By Proposition \ref{P:uniqdec} we have the unique decomposition $X= X^c + A$. On the other hand, since $X$ is a semimartingale, we also have $X= M+ V$ with $V$ a bounded variation process and $M$ a local martingale. Since the unique decomposition $M=M^c + M^d$ holds (see Remark \ref{R:decomp_mart}),
we get that $X^c=M^c$ and
\begin{align}\label{GammaDef}
A = M^d + V.
\end{align}
Indeed $M^d $ and $V$ are $\mathbb F$-martingale orthogonal processes.
Now, by \eqref{QVC} and \eqref{E:dec}
\begin{align}\label{cov1}
[X,X] =[X, X]^c + \sum_{s \leq \cdot} |\Delta A_s|^2.
\end{align}
On the other hand, the bilinearity of the covariation gives
\begin{align}\label{cov2}
[X,X] = [X^c, X^c] + [A, A]
\end{align}
also taking into account that $A$ is an
$\mathbb F$-martingale orthogonal process.
To conclude, comparing \eqref{cov1} and \eqref{cov2}, we have to show that
$
[A, A]=\sum_{s \leq \cdot} |\Delta A_s|^2
$.
Now, by \eqref{GammaDef}, Proposition 5.3 and Proposition 2.14 in \cite{BandiniRusso1},
\begin{align*}
[A, A] &= [M^d, M^d]+ [V, V] + 2 [M^d, V]=\sum_{s \leq \cdot} (|\Delta M^d_s|^2 + |\Delta V_s|^2 + 2 \Delta M^d_s \Delta V_s)\\
&= \sum_{s \leq \cdot} |\Delta M^d_s + \Delta V_s|^2= \sum_{s \leq \cdot} |\Delta A_s|^2.
\end{align*}
\endproof
\begin{remark}
The equality $[X,X]^c = \langle X^c, X^c\rangle$ valid for semimartingales (see Proposition \ref{P:2.14}) may fail for weak Dirichlet processes, even if they have finite quadratic variation. Indeed, let $W$, $B$ be two independent
Brownian motions, and $\mathbb F$ be the canonical filtration associated with $W$ and $B$. We set
$$
X_t = \int_0^t B_{t-s}\, dW_s.
$$
By Proposition 2.10 of \cite{er2}, $X$ is an $\mathbb F$-weak Dirichlet process and an $\mathbb F$-martingale orthogonal process. By Proposition \ref{P:uniqdec} it follows that $X^c\equiv 0$. On the other hand, by Remark 2.16-(2) in \cite{er2}, $[X, X]_t = \frac{t^2}{2}$, which is different from zero.
\end{remark}
\subsection{Fundamental chain rules}
From here on we will denote by $\mathcal K$ the set of truncation functions, namely
$$
\mathcal K:=\{k:\mathbb R \rightarrow \mathbb R \textup{ bounded with compact support: } k(x) =x \textup{ in a neighborhood of } 0\}.
$$
A typical choice of $k(x)$ will be $k(x) = x \ensuremath{\mathonebb{1}}_{\{|x|\leq 1\}}$. In this case, $\frac{x-k(x)}{x}= \ensuremath{\mathonebb{1}}_{\{|x| >1\}}$.
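The displayed identity can be checked mechanically. The following illustrative sketch (not from the paper) encodes the truncation function $k(x) = x\,\ensuremath{\mathonebb{1}}_{\{|x|\leq a\}}$ and verifies that $\frac{x-k(x)}{x} = \ensuremath{\mathonebb{1}}_{\{|x|>a\}}$ for $x \neq 0$.

```python
def k(x, a=1.0):
    # canonical truncation function: k(x) = x on {|x| <= a}, 0 outside
    return x if abs(x) <= a else 0.0

def indicator_big(x, a=1.0):
    # indicator of {|x| > a}
    return 1.0 if abs(x) > a else 0.0

# (x - k(x)) / x equals the indicator of {|x| > a} for every x != 0:
checks = [((x - k(x)) / x, indicator_big(x)) for x in (-3.0, -0.5, 0.25, 1.0, 2.0)]
```

Every element of $\mathcal K$ agrees with the identity near $0$, so the quotient $\frac{x-k(x)}{x}$ always vanishes in a neighborhood of the origin.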
We will make use of the following assumption on
a pair $(v, X)$, with $v:\mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ locally bounded and
$X$ a c\`adl\`ag process.
\begin{hypothesis}\label{H:3.7-3.8}
\begin{align}
&v(t, X_t) \,\,\textup{is a c\`adl\`ag process, and for every}\,\, t \in \mathbb R_+, \,\,\Delta v(t, X_t)= v(t, X_{t}) - v(t, X_{t-});\label{cond5.36}\\
&\textup{$\exists \,k \in \mathcal K$ \,\,such that}\,\,\,\,\,(v(s,X_{s-} + x)-v(s,X_{s-}))\,\frac{k(x)}{x}\in {\mathcal G}^1_{\textup{loc}}(\mu^X). \label{G1_cond}
\end{align}
\end{hypothesis}
\begin{remark}\label{R:cont}
\begin{itemize}
\item [(i)]
If $v$ is continuous, then the pair $(v, X)$ obviously fulfills
\eqref{cond5.36}. For a more refined condition on $(v, X)$ to guarantee the validity of \eqref{cond5.36} we refer to Hypothesis 5.34 in \cite{BandiniRusso1}.
\item[(ii)] Assume the validity of \eqref{cond5.36}. If there is $a \in\mathbb R_+$ such that $\sum_{s \leq \cdot }|\Delta v(s, X_s)|\ensuremath{\mathonebb{1}}_{\{|\Delta X_s|\leq a\}} <\infty$ a.s., then \eqref{G1_cond} is verified. This is trivially the case if $v(\cdot, X)$ is a bounded variation process.
\end{itemize}
\end{remark}
\begin{proposition}\label{R:th_chainrule}
Let $X$ be an adapted c\`adl\`ag process satisfying \eqref{int_small_jumps}.
Let $v:\mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ be a function of class $C^{0,1}$. Then Hypothesis \ref{H:3.7-3.8} holds true.
\end{proposition}
\proof
Condition \eqref{cond5.36} holds true since $v$ is continuous, see Remark \ref{R:cont}-(i).
On the other hand, by Proposition \ref{P:2.10},
$$
|v(s,X_{s-} + x)-v(s,X_{s-})|^2\,\frac{k^2(x)}{x^2}\star \mu^X \in \mathcal{A}^+_{\rm loc}, \quad \forall k \in \mathcal K.
$$
In particular, $(v(s,X_{s-} + x)-v(s,X_{s-}))\,\frac{k(x)}{x} \in \mathcal G^2_{\textup{loc}}(\mu^X)$, which in turn implies condition \eqref{G1_cond}, since $\mathcal G^2_{\textup{loc}}(\mu^X) \subseteq \mathcal G^1_{\textup{loc}}(\mu^X)$.
\endproof
\begin{theorem}\label{T:2.11}
Let $X$ be a c\`adl\`ag and $\mathbb F$-adapted process satisfying \eqref{int_small_jumps}.
Let $v:\mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ be a locally bounded function such that $(v, X)$ satisfies Hypothesis \ref{H:3.7-3.8}.
Let $Y_t= v(t,X_t)$ be an $\mathbb F$-weak Dirichlet process with
continuous martingale component $Y^c$.
Then,
for every $k \in \mathcal K$, one can write the decomposition
\begin{equation}\label{dec_wD1}
Y = Y^c + M^{k,d} + \Gamma^{k}(v) + (v(s,X_{s-} + x)-v(s,X_{s-}))
\frac{x- k(x)}{x}\,
\star \mu^X,
\end{equation}
with
\begin{align}\label{Mdk}
M^{k,d}&:= (v(s,X_{s-} + x)-v(s,X_{s-}))
\frac{k(x)}{x}\,\star (\mu^X- \nu^X)
\end{align}
and $ \Gamma^{k}(v)$ a predictable and $\mathbb F$-martingale orthogonal process.
\end{theorem}
\begin{remark}\label{R:previouschainrule}
\begin{itemize}
\item[(i)] Sufficient conditions for $Y$ to be weak Dirichlet are given in Theorem \ref{P:3.10}.
\item [(ii)] Theorem \ref{T:2.11} considerably generalizes the result given in Proposition 5.37 in \cite{BandiniRusso1}, where we considered the case of an $\mathbb F$-martingale orthogonal process $Y_t= v(t, X_t)$ such that $\sum_{s \leq \cdot} |\Delta Y_s| \in \mathcal A_{\textup{loc}}^+$; in particular, $\sum_{s \leq T} |\Delta Y_s| < \infty$ a.s. Notice that, by Remark \ref{R:cont}-(ii), Hypothesis \ref{H:3.7-3.8} is verified.
In that case $Y^c=0$
by Proposition \ref{P:uniqdec}.
\end{itemize}
\end{remark}
\begin{remark}\label{R:linmapGamma}
Let
$\mathcal D$ be the set of locally bounded functions $v:\mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ such that $v(\cdot, X)$ is a weak Dirichlet process and $(v, X)$ satisfies Hypothesis \ref{H:3.7-3.8}.
We observe that
$v \mapsto \Gamma^k(v)$ in Theorem \ref{T:2.11} can be seen as a linear map from $\mathcal D$
to the space of c\`adl\`ag adapted processes.
In the sequel $\mathcal D$ will also denote a similar set of functions defined on $[0,T]\times \mathbb R$ instead of $\mathbb R_+ \times \mathbb R$, and related to a process $X=(X(t))_{t \in [0,T]}$.
\end{remark}
\noindent \emph{Proof of Theorem \ref{T:2.11}.}
Let $k \in \mathcal K$. We claim that
\begin{equation}\label{Yk}
Y^k := Y
-(v(s,X_{s-} + x)-v(s,X_{s-}))
\frac{x- k(x)}{x}
\star \mu^X
\end{equation}
is an $\mathbb F$-special weak Dirichlet process.
Indeed, denoting by $Y = Y^c + A^Y$ the unique decomposition of $Y$ provided by Proposition \ref{P:uniqdec}, we notice that
\begin{equation}\label{eqAbis}
Y^k= Y^c + A^Y -(v(s,X_{s-} + x)-v(s,X_{s-}))
\frac{x- k(x)}{x}\star \mu^X=: Y^c + \tilde A^k,
\end{equation}
where $\tilde A^k$ is an $\mathbb F$-martingale orthogonal process.
By condition \eqref{G1_cond} and Lemma \ref{R:B}, $(v(s,X_{s-} + x)-v(s,X_{s-}))
\frac{k(x)}{x} \in \mathcal G^1_{\textup{loc}}(\mu^X)$.
Then the process $M^{k, d}$ in \eqref{Mdk} is a
purely discontinuous
$\mathbb F$-local martingale, see Section \ref{SPrelim}.
We rewrite \eqref{eqAbis} as
\begin{equation}\label{tot_jumps}
Y^k= Y^c + M^{k,d} + \Gamma^{k}(v),
\end{equation}
with
\begin{equation}\label{Gammak}
\Gamma^{k}(v):= \tilde A^k- M^{k,d}.
\end{equation}
This gives
$
\Delta Y^k_t = \Delta M^{k,d}_t + \Delta \Gamma^{k}_t(v)
$.
Now, by \eqref{Yk} and \eqref{Mdk}
we get
\begin{align*}
\Delta Y^k_t &= \int_{\mathbb R} (v(t,X_{t-}+x)-v(t,X_{t-})) \,\frac{k(x)}{x} \,\mu^X(\{t\}\times\,dx),\\
\Delta M^{k,d}_t &= \int_{\mathbb R} (v(t,X_{t-}+x)-v(t,X_{t-})) \,\frac{k(x)}{x} \,\mu^X(\{t\}\times\,dx)\\
&- \int_{\mathbb R} (v(t,X_{t-}+x)-v(t,X_{t-})) \,\frac{k(x)}{x} \,\nu^X(\{t\}\times\,dx),
\end{align*}
so that, by \eqref{tot_jumps},
\begin{align*}
\Delta \Gamma^{k}_t(v)&= \int_{\mathbb R} (v(t,X_{t-}+x)-v(t,X_{t-})) \,\frac{k(x)}{x} \,\nu^X(\{t\}\times\,dx).
\end{align*}
Recall that an adapted c\`adl\`ag process is predictable if and only if its jump process is predictable, see Remark \ref{R:pred}-2. below. We conclude that $\Gamma^{k}(v)$ is an $\mathbb F$-predictable process; moreover, by \eqref{Gammak}, $\Gamma^k(v)$ is also an $\mathbb F$-martingale orthogonal process.
This yields decomposition \eqref{dec_wD1}.
\endproof
\begin{remark}\label{R:pred}
\begin{enumerate}
\item
Any c\`agl\`ad process is locally bounded, see the lines above Theorem 15,
Chapter IV, in
\cite{protter}.
\item Let $X$ be a c\`adl\`ag adapted process. Recalling that $\Delta X_s = X_{s} - X_{s-}$, we have the following.
\begin{itemize}
\item $X$ is locally bounded if and only if $\Delta X$ is locally bounded. As a matter of fact, $(X_{s-})$ is a c\`agl\`ad process and therefore locally bounded.
\item $X$ is predictable if and only if $\Delta X$ is predictable. Indeed, $(X_{s-})$ is a predictable process, being adapted and left-continuous.
\end{itemize}
\end{enumerate}
\end{remark}
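For intuition on item 2, the relation $\Delta X_s = X_s - X_{s-}$ can be illustrated on a toy pure-jump path (hypothetical jump data, not from the paper), where the left limit is obtained by summing only strictly earlier jumps.

```python
# Toy pure-jump cadlag path: X_t = sum of jump sizes with jump time <= t, X_0 = 0.
jump_data = [(0.5, 1.0), (1.2, -2.0)]  # hypothetical (time, size)

def X(t):
    # cadlag evaluation: jumps at time t are already included
    return sum(dx for (s, dx) in jump_data if s <= t)

def X_left(t):
    # left limit X_{t-}: only strictly earlier jumps contribute
    return sum(dx for (s, dx) in jump_data if s < t)

def delta_X(t):
    return X(t) - X_left(t)
```

At the jump times $0.5$ and $1.2$ the difference recovers the jump sizes, and it vanishes elsewhere.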
Taking $k(x) = x \ensuremath{\mathonebb{1}}_{\{|x|\leq a\}}$ in Theorem \ref{T:2.11} we get the following result.
\begin{corollary}\label{P:boundedjumps}
Let $X$ be a c\`adl\`ag $\mathbb F$-adapted process
satisfying \eqref{int_small_jumps}. Let $v:\mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ be a locally bounded function,
such that $(v,X)$ satisfies
Hypothesis \ref{H:3.7-3.8}.
Assume moreover that, for some $a \in \mathbb R_+$,
\begin{equation}\label{deltaXa}
|\Delta X_t |\leq a, \quad \forall t \in \mathbb R_+.
\end{equation}
If $Y_t = v(t,X_t)$ is an $\mathbb F$-weak Dirichlet process, then it is an $\mathbb F$-special weak Dirichlet process.
\end{corollary}
The particular case of Corollary \ref{P:boundedjumps} with $v \equiv \textup{Id}$ is stated below.
\begin{corollary}\label{C:gencharbis}
Let $X$ be a c\`adl\`ag and $\mathbb F$-adapted process satisfying \eqref{int_small_jumps} and \eqref{deltaXa}. Then, if $X$ is an $\mathbb F$-weak Dirichlet process, it is an $\mathbb F$-special weak Dirichlet process.
\end{corollary}
\begin{theorem}\label{T:spw}
Let $X$ be a c\`adl\`ag and $\mathbb F$-adapted process satisfying \eqref{int_small_jumps}. Let $v:\mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ be a locally bounded function such that $(v, X)$ satisfies Hypothesis \ref{H:3.7-3.8}.
Set $Y_t = v(t,X_t)$, and assume that $Y$ is an $\mathbb F$-weak Dirichlet process.
Then $Y$ is an $\mathbb F$-special weak Dirichlet process if and only if
\begin{equation}\label{intY}
\textup{$\exists \,a \in \mathbb R_+$ s.t.}\,\,\, (v(s,X_{s-}+x)-v(s,X_{s-})) \,\ensuremath{\mathonebb{1}}_{\{|x| >a\}} \star \mu^X \in \mathcal{A}_{\textup{loc}}.
\end{equation}
\end{theorem}
\begin{remark}\label{R:boundspecfirst}
If $v$ is bounded and $X$ is c\`adl\`ag and $\mathbb F$-adapted,
then condition \eqref{intY} is satisfied because of Lemma \ref{L:c}.
\end{remark}
\noindent \emph{Proof of Theorem \ref{T:spw}.}
Let $a \in \mathbb R_+$ and set
$$
\tilde Y:= \sum_{s \leq \cdot} \Delta Y_s \,\ensuremath{\mathonebb{1}}_{\{|\Delta X_s| >a\}}= (v(s,X_{s-}+x)-v(s,X_{s-})) \,\ensuremath{\mathonebb{1}}_{\{|x| >a\}} \star \mu^X.
$$
By Theorem \ref{T:2.11} with $k(x)= x\,\ensuremath{\mathonebb{1}}_{\{|x| \leq a\}}$,
$
Y - \tilde Y
$
is an $\mathbb F$-special weak Dirichlet process. It follows that $Y$ is an $\mathbb F$-special weak Dirichlet process if and only if
$
\tilde Y
$
is an $\mathbb F$-special weak Dirichlet process.
$\tilde Y$ has bounded variation, so it is a semimartingale; being a semimartingale, it is an $\mathbb F$-special weak Dirichlet process if and only if it is a special semimartingale, see Proposition 5.14 in \cite{BandiniRusso1}.
The latter property can be shown to be equivalent to condition \eqref{intY} by making use of the first three equivalent items of
Proposition 4.23, Chapter I, in \cite{JacodBook}.
\endproof
Corollary \ref{C:3.21} below follows from Theorem \ref{T:spw}
taking $v \equiv \textup{Id}$.
It extends a characterization stated in Proposition 5.24 in \cite{BandiniRusso1}: therein, $X$ was assumed to belong to a particular class of weak Dirichlet processes.
\begin{corollary}\label{C:3.21}
Let $X$ be an $\mathbb F$-weak Dirichlet process satisfying \eqref{int_small_jumps}. Then $X$ is an $\mathbb F$-special weak Dirichlet process if and only if
\begin{equation}\label{int_big_jumps}
\exists \,a \in \mathbb R_+ \,\,\textup{s.t.}\,\, x \,\ensuremath{\mathonebb{1}}_{\{|x| > a\}} \star \mu^X \in \mathcal{A}_{\textup{loc}}.
\end{equation}
\end{corollary}
The following result is the analogue of Theorem \ref{T:2.11} for special weak Dirichlet processes.
\begin{theorem}\label{T:2.11bis}
Let $X$ be a c\`adl\`ag and $\mathbb F$-adapted process satisfying \eqref{int_small_jumps}.
Let $v:\mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ be a locally bounded function such that $(v, X)$ satisfies Hypothesis \ref{H:3.7-3.8}.
Let $Y_t= v(t,X_t)$ be an $\mathbb F$-weak Dirichlet process
with continuous martingale component $Y^c$.
Assume moreover that the pair $(v, X)$
satisfies condition \eqref{intY}. Then $Y_t = v(t,X_t)$ is an $\mathbb F$-special weak Dirichlet process, admitting the unique decomposition
\begin{align}\label{Spec_weak_chainrule1}
Y&= Y^c + (v(s,X_{s-} + x)-v(s,X_{s-}))\,\star (\mu^X- \nu^X) +\Gamma(v),
\end{align}
with
$\Gamma(v)$ a predictable and $\mathbb F$-martingale orthogonal process.
Moreover, \begin{equation}\label{GammaY}
\Gamma(v) = \Gamma^k(v) +
(v(s,X_{s-} + x)-v(s,X_{s-}))
\frac{x- k(x)}{x}\,\,\star \nu^X,
\end{equation}
with $\Gamma^k(v)$ the predictable and $\mathbb F$-martingale orthogonal process appearing in \eqref{dec_wD1}.
\end{theorem}
\proof
Thanks to condition \eqref{intY}, by Theorem \ref{T:spw} $Y$ is an $\mathbb F$-special weak Dirichlet process.
By Theorem \ref{T:2.11},
\begin{align}\label{3.9}
Y &= Y^c + (v(s,X_{s-} + x)-v(s,X_{s-}))
\frac{k(x)}{x}\,\star (\mu^X- \nu^X) + \Gamma^{k}(v)\notag\\
& + (v(s,X_{s-} + x)-v(s,X_{s-}))
\frac{x- k(x)}{x}\,
\star \mu^X.
\end{align}
We add and subtract in \eqref{3.9} the term
$
(v(s,X_{s-} + x)-v(s,X_{s-}))
\frac{x- k(x)}{x}\,\,\star \nu^X$.
Recalling that, for every random field $W$, $|W|\star \mu^X \in \mathcal{A}_{\textup{loc}}^+$ implies $W \in \mathcal G_{\textup{loc}}^1(\mu^X)$ and
$
W \star (\mu^X - \nu^X) = W \star \mu^X - W \star \nu^X$ (see Section \ref{SPrelim}),
we get decomposition \eqref{Spec_weak_chainrule1} with $\Gamma(v)$ provided by \eqref{GammaY}.
\endproof
\begin{remark}\label{R:3items}
\begin{enumerate}
\item
It directly follows from \eqref{Spec_weak_chainrule1} that
$$
\Delta \Gamma_s(v) = \int_\mathbb R (v(s,X_{s-} + x)-v(s,X_{s-}))
\,\nu^X(\{s\}\times dx).
$$
\item
Let $X$ be a c\`adl\`ag and $\mathbb F$-adapted process satisfying \eqref{int_small_jumps}. Let $v: \mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ be such that $(v, X)$ satisfies Hypothesis \ref{H:3.7-3.8}.
If $v$ is moreover a bounded function, by Remark \ref{R:boundspecfirst} condition \eqref{intY} is satisfied as well, and all the assumptions of Theorem \ref{T:2.11bis} are verified.
\end{enumerate}
\end{remark}
\begin{remark}\label{R: BSDE}
Theorem \ref{T:2.11bis} is in particular useful to solve the so-called identification problem for BSDEs.
Proposition 5.37 in \cite{BandiniRusso1} allowed us to solve the identification problem when the BSDE was exclusively driven by the random measure, see Theorem 3.14 in \cite{BandiniRusso2}.
Let $\zeta$ be a non-decreasing, adapted and continuous process, and $\lambda$ be a predictable random measure on $\Omega \times [0,T] \times \mathbb R$.
We consider a BSDE driven by a random measure $\mu-\nu$ and a continuous martingale $M$ of the type
\begin{align}\label{BSDE}
Y_t &= \xi + \int_{]t, T]}\tilde g(s, Y_{s-}, Z_s) d\zeta_s + \int_{]t, T]\times \mathbb R}\tilde f(s, Y_{s-}, U_s(e)) \lambda(ds\,de) \notag\\
&- \int_{]t, T]}Z_s dM_s - \int_{]t, T]\times \mathbb R}U_s(e)(\mu-\nu)(ds\,de),
\end{align}
whose solution is a triple of processes $(Y, Z, U(\cdot))$.
If $Y_t=v(t, X_t)$ for some function $v$ and some adapted c\`adl\`ag process $X$, the identification problem consists in expressing $Z$ and $U(\cdot)$ in terms of $v$.
\begin{itemize}
\item [(i)]
Since $Y_t = v(t, X_t)$ is a solution to a BSDE, it is a special weak Dirichlet process
(even a special semimartingale),
so that $(v, X)$ satisfies condition \eqref{intY}. Therefore, if Hypothesis \ref{H:3.7-3.8} holds for $(v, X)$, then Theorem \ref{T:2.11bis} allows us to identify $U(\cdot)$. More precisely, this provides
$$
U(e)\star (\mu-\nu)= (v(s,X_{s-} + x)-v(s,X_{s-}))\,\star (\mu^X- \nu^X), \quad \textup{a.s.}
$$
From now on suppose $\mu=\mu^X$, even though this can be generalized, see Hypothesis 2.9 in \cite{BandiniRusso2}. This yields
$$
H(x)\star (\mu^X- \nu^X)=0, \quad \textup{a.s.},
$$
with $H(x):=U(x)- (v(s,X_{s-} + x)-v(s,X_{s-}))$.
If $H \in \mathcal G^2_{\textup{loc}}(\mu^X)$, then the predictable bracket of $H(x)\star (\mu^X- \nu^X)$ is well-defined and equals $C(H)$. Since $C(H)_T=0$, we get (see Proposition 2.8 in \cite{BandiniRusso2}) that there is a predictable process $(l_s)$ such that
$$
H_s(x)= l_s \ensuremath{\mathonebb{1}}_K(s)\quad d\mathbb P\, \nu^X(ds\,dx) \,\,\textup{a.e.},
$$
where $K:=\{(\omega, t):\,\,\nu^X(\omega, \{t\} \times \mathbb R)=1\}$.
\item [(ii)]
In order to identify the process $Z$ we need that $v$ belongs to $C^{0,1}([0,T]\times \mathbb R)$ and that $X$ is a weakly finite quadratic variation process, and this will be discussed in Remark \ref{R:BSDEid2}.
\end{itemize}
\end{remark}
Theorems \ref{T:2.11} and \ref{T:2.11bis} with $v \equiv \textup{Id}$ give in particular the following result.
\begin{corollary}\label{C:genchar}
Let $X$ be an $\mathbb F$-weak Dirichlet process satisfying condition \eqref{int_small_jumps}.
Let $X^c$ be the continuous martingale part of $X$. Then, the following holds.
\begin{itemize}
\item[(i)]
Let $k \in \mathcal K$. Then $X$ can be decomposed as
\begin{equation}\label{dec_X}
X = X^c + k(x)\,\star (\mu^X- \nu^X) + \Gamma^{k}(Id) +
(x- k(x))\,
\star \mu^X,
\end{equation}
with $\Gamma^{k}$ the operator introduced in Theorem \ref{T:2.11}.
\item[(ii)]If \eqref{int_big_jumps} holds, then the $\mathbb F$-special weak Dirichlet process $X$ admits the decomposition
\begin{equation}\label{dec_X_spec}
X = X^c + x\,\star (\mu^X- \nu^X) + \Gamma
\end{equation}
with
$\Gamma := \Gamma^{k}(Id) +
x \,
\ensuremath{\mathonebb{1}}_{\{|x| >1\}}\,\star \nu^X$.
\end{itemize}
\end{corollary}
\subsection{The notion of characteristics for weak Dirichlet processes} \label{S:genchar}
In the present section we provide a generalization of the concept of characteristics for semimartingales (see Appendix \ref{A:D}) in the case of weak Dirichlet processes.
\begin{remark}\label{R:cor}
Let $X$ be an $\mathbb F$-weak Dirichlet process with jump measure $\mu^X$ satisfying condition \eqref{int_small_jumps}. Given $k \in \mathcal K$, by Corollary \ref{C:genchar}-(i), we have that
$
X^k= X- \sum_{s \leq \cdot} [\Delta X_s - k(\Delta X_s)]
$
is an $\mathbb F$-special weak Dirichlet process with unique decomposition
\begin{equation}\label{dec_Xk}
X^k=
X^c + k(x) \star (\mu^X- \nu^X) + B^{k, X},
\end{equation}
where
\begin{itemize}
\item $X^c$ is the unique continuous $\mathbb F$-local martingale part of $X$ introduced in Proposition \ref{P:uniqdec};
\item $B^{k, X}:=\Gamma^{k}(Id)$, which is in particular a predictable and $\mathbb F$-martingale orthogonal process.
\end{itemize}
\end{remark}
\begin{remark}
When $X$ is a semimartingale, $B^{k, X}$ is a bounded variation process, so in particular $\mathbb F$-martingale orthogonal.
\end{remark}
From here on we denote by $\check \Omega$ the canonical space of all c\`adl\`ag functions $\check \omega: \mathbb R_+ \rightarrow \mathbb R$, namely $\check \Omega= D(\mathbb R_+)$, and by $\check X$ the canonical process defined by $\check X_t(\check \omega)= \check \omega(t)$. We also set $\check {\mathcal F}= \sigma(\check X)$, and $\check {\mathbb F}= (\check {\mathcal F}_t)_{t \geq 0}$. Let $\mu$ be the jump measure of $\check X$ and $\nu$ the compensator of $\mu$ under the law $\mathcal L(X)$ of $X$.
\begin{definition}\label{D:genchar}
We call \emph{characteristics} of $X$, associated with $k \in \mathcal K$,
the triplet $(B^k,C= \langle \check X^c, \check X^c\rangle,\nu)$ on $(\check \Omega, \check {\mathcal F}, \check {\mathbb F})$ obtained from the unique decomposition \eqref{dec_Xk} for $\check X$ under $\mathcal L(X)$.
In particular,
\begin{itemize}
\item[(i)] $B^k$ is a predictable and $\check{\mathbb F}$-martingale orthogonal process, with $B_0^k=0$;
\item[(ii)] $C$ is an $\check{\mathbb F}$-predictable and increasing process, with $C_0=0$;
\item[(iii)] $\nu$ is an $\check{\mathbb F}$-predictable random measure on $\mathbb R_+ \times \mathbb R$.
\end{itemize}
\end{definition}
\begin{remark}\label{R:3.27}
\begin{itemize}
\item [a)] $\mu \circ X$ is the jump measure of $X$, and its compensator under $\mathbb P$ is $\nu \circ X$, see Proposition 10.39-b) in \cite{jacod_book}.
\item [b)] It is not difficult to show that $\check X^c \circ X$ is a continuous local martingale under $\mathbb P$.
\item [c)] Let $Y$ and $Z$ be two processes on $(\check \Omega, \check {\mathcal F}, \check {\mathbb F})$ such that $[Y, Z]$ exists under $\mathcal L(X)$. Then $[Y, Z]\circ X= [Y \circ X, Z \circ X]$ under $\mathbb P$. In particular, $B^k \circ X$ is an $\mathbb F$-martingale orthogonal process.
\item [d)] By previous items we have a new decomposition of the process $X^k$:
$$
X^k = \check X^c \circ X + k \star (\mu \circ X-\nu \circ X) + B^k \circ X.
$$
\item [e)] By the uniqueness of decomposition \eqref{dec_Xk} we have $X^c= \check X^c \circ X$ and $B^{k, X}=B^k \circ X$.
\item [f)]$C^X:= C \circ X= \langle X^c, X^c\rangle$ by c) and e).
\end{itemize}
\end{remark}
\begin{remark}
Assume that $X$ admits characteristics $(B^{k_0}, C, \nu)$ associated with a given truncation function $k_0$. If we choose another truncation function $k$, the corresponding characteristics are $(B^{k}, C, \nu)$, with
$$
B^{k} \circ X= B^{k_0} \circ X + (k - k_0)\star (\nu \circ X),
$$
where $(k - k_0)\star (\nu \circ X) \in \mathcal{A}_{\textup{loc}}$, see Lemma \ref{L:D4}.
\end{remark}
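The change-of-truncation formula can be recovered directly from the definitions (a sketch, assuming as usual that $X^k = X - (x - k(x))\star \mu^X$):
\begin{align*}
X^{k} - X^{k_0} = (k - k_0)\star \mu^X = (k-k_0)\star(\mu^X-\nu^X) + (k-k_0)\star \nu^X,
\end{align*}
so passing from $k_0$ to $k$ shifts the purely discontinuous martingale part by $(k-k_0)\star(\mu^X-\nu^X)$ and the predictable part by $(k-k_0)\star \nu^X$; comparing with the unique decomposition \eqref{dec_Xk} gives the stated relation between $B^{k}$ and $B^{k_0}$.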
\begin{remark}
\begin{itemize}
\item[a)] Given a process $X \in \mathcal A_{\textup{loc}}^+$, Theorem 3.17, Chapter I, in \cite{JacodBook} shows that there is a unique predictable process $X^p \in \mathcal A_{\textup{loc}}^+$, called \emph{compensator} of $X$, such that $X-X^p$ is a local martingale. In particular, $X$ is a special semimartingale.
\item [b)] The notion of compensator can be naturally extended. Given a process $X$, we denote by $X^p$ a process verifying the following conditions:
\begin{enumerate}
\item[(i)] $X^p$ is predictable;
\item[(ii)] $X^p$ is martingale orthogonal;
\item[(iii)] $X- X^p$ is a local martingale.
\end{enumerate}
Obviously, such $X^p$ exists if and only if $X$ is a special weak Dirichlet process, and $X^p=\Gamma$, with $\Gamma$ the process in Corollary \ref{C:genchar}-(ii), so it is uniquely determined.
\item[c)] The two notions of $X^p$ in a) and b) coincide when $X\in \mathcal A_{\textup{loc}}^+$. Indeed, a special semimartingale is a special weak Dirichlet process.
\end{itemize}
\end{remark}
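A standard example illustrating item a): if $N$ is a Poisson process with intensity $\lambda$, then $N \in \mathcal A_{\textup{loc}}^+$ and $N^p_t = \lambda t$, since $N_t - \lambda t$ is a martingale and $t \mapsto \lambda t$ is predictable (being deterministic and continuous).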
\subsection{A weak notion of finite quadratic variation and a stability theorem}
\label{S32}
In the following we will use the notation (for $t \geq 0$)
\begin{align}
[X,X]_{\varepsilon}^{ucp}(t)&:=\int_0^t \frac{(X_{(s+ \varepsilon) \wedge t}- X_s)^2}{\varepsilon} ds, \quad \varepsilon>0,\label{ucpbrack}\\
C_\varepsilon(X,X)(t) &:=\int_0^t \frac{(X_{s+ \varepsilon}- X_s)^2}{\varepsilon} ds, \quad \varepsilon>0. \label{contbrack}
\end{align}
From now on $T>0$ will denote a fixed maturity time.
\begin{definition}\label{D:weakfin}
A c\`adl\`ag process $X=(X(t))_{t \in [0,T]}$ is said to be a weakly finite quadratic variation process if there is $\varepsilon_0 >0$ such that the laws of the random variables
$[X,X]_{\varepsilon}^{ucp}(T)$, $0<\varepsilon\leq \varepsilon_0$,
are tight.
\end{definition}
Below, $\varepsilon>0$ will mean $0<\varepsilon \leq \varepsilon_0$ for some $\varepsilon_0$ small enough.
For instance, a family $(Z_\varepsilon)_{\varepsilon >0}$ of random variables will indicate a family $(Z_\varepsilon)_{0<\varepsilon \leq \varepsilon_0}$ for some $\varepsilon_0$ small enough.
\begin{remark}
A finite quadratic variation process is a weakly finite quadratic variation process. Indeed, if $\int_0^\cdot \frac{(X_{(s+ \varepsilon) \wedge \cdot}- X_s)^2}{\varepsilon} ds$ converges u.c.p.\ as $\varepsilon \rightarrow 0$, the random variables $[X,X]_{\varepsilon}^{ucp}(T)$ converge in probability, hence in law, and a family converging in law is tight.
\end{remark}
\begin{proposition}\label{P:criteriumtight}
Let $(Z_\varepsilon)_{\varepsilon>0}$ be a family of nonnegative random variables.
Suppose that one of the two conditions below holds.
\begin{itemize}
\item [(i)] $\sup_{\varepsilon>0}Z_\varepsilon< \infty$ a.s.
\item [(ii)] $\sup_{\varepsilon>0}\mathbb E[Z_\varepsilon]< \infty$.
\end{itemize}
Then the family of distributions of $(Z_\varepsilon)_{\varepsilon >0}$ is tight.
\end{proposition}
\proof
Let $\delta>0$. In order to prove the result, we need to find $M >0$ such that
$$
\mathbb P\left(Z_\varepsilon\notin [0, M]\right)\leq \delta, \quad \forall \varepsilon>0.
$$
\noindent (i) We choose $M=M(\delta)>0$ such that
$\mathbb P\left(\sup_{\varepsilon>0} Z_\varepsilon>M\right)\leq \delta$.
\noindent (ii) For every $M$, by the Markov--Chebyshev inequality,
$$
\mathbb P\left( Z_\varepsilon>M\right)\leq \frac{\mathbb E[ Z_\varepsilon]}{M}\leq \frac{1}{M}\sup_{\varepsilon >0} \mathbb E[ Z_\varepsilon].
$$
We choose $M$ so that the previous upper bound is smaller than or equal to $\delta$.
\endproof
\begin{remark}\label{R:twoitems}
It follows from Proposition \ref{P:criteriumtight} that a c\`adl\`ag process $X$ is a weakly finite quadratic variation process when one of the following conditions holds:
\begin{enumerate}
\item [(i)] $\sup_{\varepsilon>0}\int_0^T \frac{(X_{s+ \varepsilon}- X_s)^2}{\varepsilon} ds < \infty$ a.s.
\item [(ii)] $\sup_{\varepsilon>0}\mathbb E \Big[\int_0^T \frac{(X_{s+ \varepsilon}- X_s)^2}{\varepsilon} ds\Big] < \infty$. A process $X$ fulfilling this condition was called a finite energy process; see \cite{cjms} (in the framework of F\"ollmer's discretizations) and \cite{rv4}.
\end{enumerate}
\end{remark}
\begin{example}\label{E31}
Let $X$ be a square integrable process with weakly stationary increments. Set $V(a):=\mathbb E \left[ (X_{a}- X_0)^2 \right]$, $a \in \mathbb R_+$. Assume that $V(\varepsilon)=O(\varepsilon)$ as $\varepsilon \rightarrow 0$, i.e., there is a constant $C>0$ such that $V(\varepsilon) \leq C \varepsilon$ for small $\varepsilon>0$. Then condition (ii) of Remark \ref{R:twoitems} is verified. A classical example of a process satisfying this condition is a weak Brownian motion of order $\kappa=2$, see \cite{fwy}; in this case $V(a)=a$, since the bivariate distributions of such a process are the same as those of Brownian motion. In general, such a process is not a semimartingale, and not even a finite quadratic variation process.
\end{example}
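For completeness, we verify condition (ii) of Remark \ref{R:twoitems} in the setting of Example \ref{E31} (an elementary computation): by weak stationarity of the increments and Fubini's theorem, for small $\varepsilon>0$,
$$
\mathbb E \Big[\int_0^T \frac{(X_{s+ \varepsilon}- X_s)^2}{\varepsilon} ds\Big] = \int_0^T \frac{V(\varepsilon)}{\varepsilon}\, ds = \frac{T\,V(\varepsilon)}{\varepsilon} \leq C\,T,
$$
so the expectations are bounded uniformly in $\varepsilon$.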
\begin{proposition}\label{P:3.12}
Suppose that $X$ is a weakly finite quadratic variation process. Then condition \eqref{int_small_jumps} holds true.
\end{proposition}
\begin{remark}\label{R:equiv}
By Proposition \ref{P:new}, condition \eqref{int_small_jumps} is equivalent to $(1 \wedge |x|^2 )\star \nu^X \in \mathcal {A}_{\textup{loc}}^+$. The validity of the latter condition was known for semimartingales, see Theorem 2.42, Chapter II, in \cite{JacodBook}.
\end{remark}
\begin{proof}
Let $\gamma>0$ be a constant.
For each fixed $\omega$, let $\tau_{0}=\tau_{0, \gamma}:=0$ and set
$$
\tau_{i}(\omega)=\tau_{i, \gamma}(\omega):=\inf\{t > \tau_{i-1, \gamma}(\omega): \,\,|\Delta X_t(\omega)|>\gamma\}, \quad i \in \mathbb N,
$$
with the convention that $\inf \emptyset = + \infty$.
Notice that, almost surely, $X$ has only finitely many jumps of size greater than $\gamma$ on $[0,T]$.
Let thus $N= N(\omega)$
be such that $\tau_{N, \gamma}$ is the largest of those jump times.
Let $\varepsilon >0$, and define
$
\Omega_{\varepsilon,\gamma}^0=\{\omega \in \Omega: \tau_{i}(\omega) - \tau_{i-1}
(\omega) >\varepsilon, \,\,i \in \mathbb N\}$,
with the convention that $\infty - \infty = \infty$. We have that
\begin{equation}\label{unionOmega}
\cup_\varepsilon \Omega_{\varepsilon, \gamma}^0=\Omega,
\end{equation}
up to a null set.
We set
$$
J_A(\varepsilon):= \sum_{i=1}^N \frac{1}{\varepsilon} \int_{\tau_i - \varepsilon}^{\tau_i} (X_{(s + \varepsilon) \wedge T}- X_s)^2 ds.
$$
On $\Omega_{\varepsilon, \gamma}^0$ we have
\begin{equation}\label{estbrackucp}
[X,X]_{\varepsilon}^{ucp}(T) \geq J_A(\varepsilon).
\end{equation}
We also know, by Lemma 2.11 in \cite{BandiniRusso1}, that
\begin{equation}\label{JAconv}
J_A(\varepsilon)\underset{\varepsilon \rightarrow 0}{\rightarrow} \sum_{i=1}^N \ensuremath{\mathonebb{1}}_{]0,T]}(\tau_i) |\Delta X_{\tau_i}|^2=\sum_{t \leq T}|\Delta X_t|^2 \ensuremath{\mathonebb{1}}_{\{|\Delta X_t| >\gamma\}}\,\,\,\,\textup{a.s.}
\end{equation}
Let $\kappa >0$. Since $X$ is a weakly finite quadratic variation process, there exists $\ell=\ell(\kappa)>0$ such that, for every $\varepsilon$ (small enough), $\mathbb P(\Omega_{\varepsilon, \ell}^c) < \kappa$ with
$
\Omega_{\varepsilon, \ell}:=\{\omega \in \Omega:\,\,[X,X]_{\varepsilon}^{ucp}(T) \leq \ell\}
$.
We get
\begin{align}\label{Psumjumps}
&\mathbb P\Big(\sum_{t \leq T}|\Delta X_t |^2 \ensuremath{\mathonebb{1}}_{\{|\Delta X_t |>\gamma\}} > K\Big)\leq \kappa + \mathbb P\Big(\sum_{t \leq T}|\Delta X_t |^2 \ensuremath{\mathonebb{1}}_{\{|\Delta X_t |>\gamma\}} > K; \Omega_{\varepsilon,\ell}\Big)\notag\\
& \leq \kappa +\mathbb P\Big(|J_A(\varepsilon)| > \frac{K}{2}; \Omega_{\varepsilon,\ell}\Big)+ \mathbb P\Big(\Big|\sum_{t \leq T}|\Delta X_t |^2\ensuremath{\mathonebb{1}}_{\{|\Delta X_t |>\gamma\}}-J_A(\varepsilon)\Big| >\frac{K}{2}; \Omega_{\varepsilon,\ell}\Big)\notag\\
& \leq \kappa +\mathbb P\Big(|J_A(\varepsilon)| > \frac{K}{2}; \Omega_{\varepsilon,\ell}\cap \Omega^0_{\varepsilon,\gamma}\Big)+\mathbb P\Big(|J_A(\varepsilon)| > \frac{K}{2}; \Omega_{\varepsilon,\ell} \setminus \Omega^0_{\varepsilon,\gamma}\Big)\notag\\
&+ \mathbb P\Big(\Big|\sum_{t \leq T}|\Delta X_t |^2\ensuremath{\mathonebb{1}}_{\{|\Delta X_t |>\gamma\}}-J_A(\varepsilon)\Big| >\frac{K}{2}\Big)\notag\\
& \leq \kappa +\mathbb P\Big([X,X]_{\varepsilon}^{ucp}(T) > \frac{K}{2}; \Omega_{\varepsilon,\ell}\cap \Omega^0_{\varepsilon,\gamma}\Big)+\mathbb P\Big( (\Omega^0_{\varepsilon,\gamma})^c\Big)\notag\\
&+ \mathbb P\Big(\Big|\sum_{t \leq T}|\Delta X_t |^2\ensuremath{\mathonebb{1}}_{\{|\Delta X_t |>\gamma\}}-J_A(\varepsilon)\Big| >\frac{K}{2}\Big),
\end{align}
where we have used
\eqref{estbrackucp} in the latter inequality.
The first probability on the right-hand side of the latter inequality equals
$$
\mathbb P\Big(
[X,X]_{\varepsilon}^{ucp}(T)
\wedge \ell > \frac{K}{2}
; \Omega_{\varepsilon,\ell}\cap \Omega^0_{\varepsilon,\gamma}
\Big).
$$
Choosing $K=K(\ell, \kappa)$ so that $\frac{K}{2} \geq \ell$, this probability is zero. Therefore,
applying the $\limsup_{\varepsilon \rightarrow 0}$ in \eqref{Psumjumps}, and taking into account \eqref{unionOmega} and \eqref{JAconv}, we get
\begin{align}\label{f:gamma}
\mathbb P\Big(\sum_{t \leq T}|\Delta X_t |^2 \ensuremath{\mathonebb{1}}_{\{|\Delta X_t |>\gamma\}} > K\Big) \leq \kappa.
\end{align}
Notice that $\sum_{t \leq T}|\Delta X_t |^2 \ensuremath{\mathonebb{1}}_{\{|\Delta X_t |>\gamma\}}$ converges increasingly to $\sum_{t \leq T}|\Delta X_t |^2$, a.s., when $\gamma$ converges to zero.
Consequently, letting $\gamma \rightarrow 0$ in \eqref{f:gamma}, we obtain
$$
\mathbb P\Big(\sum_{t \leq T}|\Delta X_t |^2 > K\Big) \leq \kappa.
$$
Finally,
\begin{align*}
\mathbb P\Big(\sum_{t \leq T}|\Delta X_t |^2 = \infty\Big)\leq \mathbb P\Big(\sum_{t \leq T}|\Delta X_t |^2 > K\Big) \leq \kappa,
\end{align*}
so the conclusion follows.
\end{proof}
Below we give a significant generalization of Proposition 3.10 in \cite{gr}, where the result was proved for $X$ continuous and of finite quadratic variation.
When $X$ is c\`adl\`ag the result is new, even in the case when $X$ is a finite quadratic variation process.
Crucial tools to prove the result are the canonical decomposition stated in Proposition \ref{P:uniqdec} and Proposition \ref{P: weakfinconv}.
\begin{theorem}\label{P:3.10}
Let $X$ be an $\mathbb F$-weak Dirichlet process with weakly finite quadratic variation. Let $v\in C^{0,1}([0,T] \times \mathbb R)$. Then $Y_t = v(t, X_t)$ is an $\mathbb F$-weak Dirichlet process with continuous martingale component
\begin{align}\label{Yc}
Y^c = Y_0 + \int_{0}^{\cdot}\partial_x v(s, X_{s})\,dX^c_s.
\end{align}
\end{theorem}
Theorem \ref{P:3.10}, together with Theorems \ref{T:spw} and \ref{T:2.11bis} (recall Proposition \ref{R:th_chainrule}), provides the following result.
\begin{corollary}\label{C:new}
Let $X$ be an $\mathbb F$-weak Dirichlet process with weakly finite quadratic variation. Let $v\in C^{0,1}([0,T]\times \mathbb R)$
such that $(v, X)$ satisfies
condition \eqref{intY}. Then $Y_t = v(t,X_t)$ is an $\mathbb F$-special weak Dirichlet process, admitting the unique decomposition
\begin{align}\label{Spec_weak_chainrule2}
Y&= Y_0 + \int_{0}^{\cdot}\partial_x v(s, X_{s})\,dX^c_s + (v(s,X_{s-} + x)-v(s,X_{s-}))\,\star (\mu^X- \nu^X) +\Gamma(v),
\end{align}
with
$\Gamma(v)$ a predictable and $\mathbb F$-martingale orthogonal process.
\end{corollary}
\begin{remark}\label{R:3.10bis}
Corollary \ref{C:new} extends the chain rules previously given in this framework in Theorems 5.15 and 5.31 in \cite{BandiniRusso1}. More precisely, we have the following.
\begin{itemize}
\item In Theorem 5.15 in \cite{BandiniRusso1} we already proved that, if $X$ is an $\mathbb F$-weak Dirichlet process of finite quadratic variation, and $v$ is of class $C^{0,1}$, then $Y=v(\cdot, X)$ is again a weak Dirichlet process. However, the decomposition established there was not unique. Here, combining Theorem \ref{T:2.11} with Theorem \ref{P:3.10}, we provide the explicit form of its unique decomposition.
\item Theorem 5.31 in \cite{BandiniRusso1} focused on sufficient conditions on $(v, X)$ so that $Y$ is special weak Dirichlet.
Here, by Corollary \ref{C:new},
we give the necessary and sufficient condition \eqref{intY} on $(v, X)$ for $Y$ to be
a special weak Dirichlet process, and we provide its unique decomposition.
\end{itemize}
\end{remark}
\begin{remark}\label{R:BSDEid2}
Let us consider the BSDE \eqref{BSDE} introduced in
Remark \ref{R: BSDE}.
If $Y_t = v(t, X_t)$ is a solution to \eqref{BSDE}, it is a special weak Dirichlet process, and therefore $(v, X)$ satisfies condition \eqref{intY}. If moreover
$v\in C^{0,1}([0,T] \times \mathbb R)$,
Corollary \ref{C:new} also allows us to identify $Z$.
More precisely, we get
$$
Z_t =\partial_x v(t, X_t) \frac{d \langle X^c, M\rangle_t }{d \langle M\rangle_t}, \quad d\mathbb P \,d \langle M \rangle_t\textup{-a.e.}
$$
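The identification follows by computing $\langle Y^c, M \rangle$ in two ways (a sketch, assuming that the martingale part of \eqref{BSDE} is $\int_0^\cdot Z_s\, dM_s$): on one hand $d\langle Y^c, M\rangle_t = Z_t \,d\langle M \rangle_t$, while on the other hand, by \eqref{Yc}, $d\langle Y^c, M\rangle_t = \partial_x v(t, X_t)\, d\langle X^c, M\rangle_t$; the displayed expression for $Z$ is then the corresponding Radon--Nikodym derivative.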
\end{remark}
\noindent \emph{Proof of Theorem \ref{P:3.10}.}
We aim at proving that, for every $\mathbb F$-continuous local martingale $N$,
\begin{align}\label{toproveY}
[v(\cdot, X), N]_t = \int_0^t \partial_x v(s, X_s)\, d[X^c, N]_s, \quad t \in [0,\,T].
\end{align}
Indeed, if \eqref{toproveY} were true, it would imply that
$
A(v) := v(\cdot, X) - Y^c
$
is martingale orthogonal,
and therefore by additivity $v(\cdot, X)$ would be a weak Dirichlet process. Then \eqref{Yc} would follow by the uniqueness of the continuous martingale part of $Y$.
Let us thus prove \eqref{toproveY}.
The approximating sequence of the left-hand side of \eqref{toproveY} is
\begin{align*}
\int_0^t [v({(s+ \varepsilon) \wedge t}, X_{(s+ \varepsilon) \wedge t})-v(s, X_s)]\, \frac{N_{(s+ \varepsilon) \wedge t}- N_s}{\varepsilon} ds=
I_1(t, \varepsilon) + I_2(t, \varepsilon),
\end{align*}
with
\begin{align*}
I_1(t, \varepsilon) &:= \int_0^t [v({(s+ \varepsilon) \wedge t}, X_{(s+ \varepsilon) \wedge t})-v((s+ \varepsilon) \wedge t, X_s)]\, \frac{N_{(s+ \varepsilon) \wedge t}- N_s}{\varepsilon} ds,\\
I_2(t, \varepsilon) &:= \int_0^t [v({(s+ \varepsilon) \wedge t}, X_{s})-v(s, X_s)]\, \frac{N_{(s+ \varepsilon) \wedge t}- N_s}{\varepsilon} ds.
\end{align*}
Concerning $I_2(t, \varepsilon)$, by the stochastic Fubini theorem we get
\begin{align}\label{boudaryterm}
I_2(t, \varepsilon)
& =\frac{1}{\varepsilon}\int_0^t [v({(s+ \varepsilon) \wedge t}, X_{s})-v(s, X_s)]\,\int_s^{(s + \varepsilon) \wedge t} d N_u\notag\\
& =\int_0^t d N_u \int_{(u-\varepsilon)^+}^u [v({(s+ \varepsilon) \wedge t}, X_{s})-v(s, X_s)] \frac{ds}{\varepsilon}.
\end{align}
Since
$$
\int_0^T d [N,N]_u \Big( \int_{(u-\varepsilon)^+}^u [v({s+ \varepsilon}, X_{s})-v(s, X_s)] \frac{ds}{\varepsilon}\Big)^2 \underset{\varepsilon \rightarrow 0}{\rightarrow} 0\quad \textup{in probability},
$$
by Problem 2.27, Chapter 3, in \cite{ks}, this is enough to conclude that $I_2(\cdot, \varepsilon) \underset{\varepsilon \rightarrow 0}{\rightarrow} 0$ u.c.p. (since $N$ is continuous, it is clear that we can neglect the ``$\wedge t$'' in \eqref{boudaryterm}).
Concerning $I_1(t, \varepsilon)$, we have
\begin{align}\label{I1a+I1b}
&I_1(t, \varepsilon)
= \int_0^t [v({(s+ \varepsilon) \wedge t}, X_{(s+ \varepsilon)\wedge t})-v((s+ \varepsilon) \wedge t, X_s)]\, \frac{N_{(s+ \varepsilon) \wedge t}- N_s}{\varepsilon} ds\notag\\
&= \int_0^t \int_0^1 \partial_x v((s+ \varepsilon) \wedge t, X_s + a (X_{(s+ \varepsilon) \wedge t}-X_s))\, da\, (X_{(s+ \varepsilon) \wedge t}-X_s)(N_{(s+ \varepsilon) \wedge t}- N_s)\frac{ds}{\varepsilon}\notag \\
&= \int_0^t \partial_x v(s, X_s)\, (X_{(s+ \varepsilon) \wedge t}-X_s)(N_{(s+ \varepsilon) \wedge t}- N_s)\frac{ds}{\varepsilon}\notag \\
&+\int_0^t \int_0^1 [\partial_x v((s+ \varepsilon) \wedge t, X_s + a (X_{(s+ \varepsilon) \wedge t}-X_s))-\partial_x v(s, X_{s})]\, da\, (X_{(s+ \varepsilon) \wedge t}-X_s)(N_{(s+ \varepsilon) \wedge t}- N_s)\frac{ds}{\varepsilon} \notag\\
&=: I_{1,a}(t, \varepsilon)+ I_{1,b}(t, \varepsilon).
\end{align}
We set
$
\tilde I_{1,a}(t, \varepsilon) := \int_0^t \partial_x v(s, X_{s})\, (X_{s+ \varepsilon}-X_s)(N_{s+ \varepsilon}- N_s)\frac{ds}{\varepsilon}$.
By Proposition \ref{P: weakfinconv} with $g(s)= \partial_x v(s, X_{s-})$, we get
$$
\tilde I_{1,a}(\cdot, \varepsilon)\underset{\varepsilon \rightarrow0}{\rightarrow} \,\,\int_0^\cdot\partial_x v(s, X_s)\, d[X^c, N]_s, \quad \textup{u.c.p}.
$$
This also shows the convergence of $I_{1,a}(\cdot, \varepsilon)$ to the same limit,
since $I_{1,a}(\cdot, \varepsilon) - \tilde I_{1,a}(\cdot, \varepsilon) \underset{\varepsilon \rightarrow 0}{\rightarrow} 0$ u.c.p., $N$ being continuous.
It remains to prove that $I_{1,b}(\cdot, \varepsilon)\underset{\varepsilon \rightarrow 0}{\rightarrow} 0$ u.c.p.
Let $(\varepsilon_n)_n$ be a sequence converging to zero as $n$ goes to infinity. We fix $\kappa >0$. Recall that $X$ and $N$ have weakly finite quadratic variation. Then, there exists $\ell= \ell(\kappa)>0$ such that, for every $n$ (large enough), there is an event $\Omega_{n, \ell}$ such that
$\mathbb P(\Omega_{n, \ell}^c) \leq \kappa$, and for $\omega \in \Omega_{n, \ell}$,
\begin{align}\label{ellest}
&\Big(\sup_{s \in [0,T]}|X_s(\omega)|^2+\int_0^T (X_{(s+\varepsilon_n) \wedge T}(\omega)-X_s(\omega))^2\frac{ds}{\varepsilon_n} \Big)
\notag\\
&+\Big(\sup_{s \in [0,T]}|N_s(\omega)|^2+\int_0^T (N_{(s+\varepsilon_n) \wedge T}(\omega)-N_s(\omega))^2\frac{ds}{\varepsilon_n}\Big) \leq \ell.
\end{align}
We provide now some estimates which are valid only for $\omega \in \Omega_{n, \ell}$. To this end, we proceed as in the proof of Proposition 5.18 in \cite{BandiniRusso1} (estimate of the term $I_{13}$). We enumerate the jumps of $X(\omega)$ on $[0,T]$ by $(t_i) _{i \geq 0}$, and
\begin{equation}\label{KX}
\mathbb K^X= \mathbb K^X(\omega) \quad \textup{is the smallest convex compact set containing $\{X_t(\omega): t \in [0,\,T]\}$}.
\end{equation}
\end{equation}
We fix $\gamma >0$, and we choose $M= M(\gamma, \omega)$ such that
$
\sum_{i=M+1}^\infty |\Delta X_{t_i}|^2 \leq \gamma^2$.
For $\varepsilon>0$ small enough and depending on $\omega$, we introduce
$
B(\varepsilon,M) = \bigcup_{i=1}^{M} \, ]t_{i-1},t_i - \varepsilon]
$
and we decompose
$
I_{1,b}(t, \varepsilon_n)=J^A(t,\varepsilon_n)+ J^B(t,\varepsilon_n),
$
with
\begin{align*}
J^A(t,\varepsilon_n)&=\sum_{i = 1}^M \int_{t_i - \varepsilon_n}^{t_i}\frac{ds}
{\varepsilon_n}\,
\ensuremath{\mathonebb{1}}_{]0,\,t]}(s)\,(X_{(s+ \varepsilon_n)\wedge t}-X_s)(N_{(s+ \varepsilon_n)\wedge t}-N_s)\cdot \\
&\quad \cdot\int_0^1 (\partial_x v((s+ \varepsilon_n)\wedge t,\,X_s + a (X_{(s+ \varepsilon_n)\wedge t}-X_s))-\partial_x v((s+ \varepsilon_n)\wedge t,\,X_s))\,da,\nonumber\\
J^B(t,\varepsilon_n)&= \frac{1}
{\varepsilon_n}\int_{]0,\,t]}
(X_{(s+ \varepsilon_n)\wedge t}-X_s)(N_{(s+ \varepsilon_n)\wedge t}-N_s)\,R^B(\varepsilon_n,s,t,M)\,ds,
\end{align*}
where
$$
R^B(\varepsilon_n,s,t, M)= \ensuremath{\mathonebb{1}}_{B(\varepsilon_n,M)}(s)\int_0^1 [\partial_x v((s+ \varepsilon_n)\wedge t,\,X_s + a (X_{(s+ \varepsilon_n)\wedge t}-X_s))-\partial_x v((s+ \varepsilon_n)\wedge t,\,X_s)]\,da.
$$
Let $\delta(f, \cdot)$ denote the modulus of continuity of a function $f$.
By Remark 3.12 in \cite{BandiniRusso1} we have, for every $s, t \in [0,T]$,
\begin{align*}
R^B(\varepsilon_n,s,t,M) &\leq \delta\Big( \partial_x v \bigg|_{[0,\,T]\times \mathbb{K}^X}
,\,\sup_{i} \sup_{\underset{|r-a| \leq \varepsilon_n}{r, a \in [t_{i-1},\,t_{i}]}} |X_{a}-X_{r}|\Big),
\end{align*}
so that Lemma 2.12 in \cite{BandiniRusso1} applied successively to the intervals $[t_{i-1},\,t_{i}]$ implies
\begin{align} \label{Bill}
R^B(\varepsilon_n,s,t,M) &\leq \delta\big( \partial_x v \big|_{[0,\,T]\times \mathbb{K}^X}
,3\gamma\big).
\end{align}
This concludes the estimates restricted to $\omega \in \Omega_{n,\ell}$.
Since $N$ is continuous, by \eqref{ellest} (recall that $\ell$ is fixed),
\begin{equation}\label{convJa}
\sup_{t \leq T}|J^A(t,\varepsilon_n)| \ensuremath{\mathonebb{1}}_{\Omega_{n,\ell}} \leq \sqrt\ell \,\delta(N(\omega),\varepsilon_n) \,M(\gamma, \omega) \sup_{(t,x) \in [0,\,T] \times \mathbb K^X(\omega)} |\partial_x v|\underset{n \rightarrow \infty}{\rightarrow} \,\,0, \quad \textup{a.s.}
\end{equation}
On the other hand,
we remark that
\begin{align}\label{Xen}
\int_0^t (X_{(s+\varepsilon_n) \wedge t}(\omega)-X_s(\omega))^2\frac{ds}{\varepsilon_n}
&= \int_0^{t-\varepsilon_n} (X_{s+\varepsilon_n}(\omega)-X_s(\omega))^2\frac{ds}{\varepsilon_n} +\int_{t-\varepsilon_n}^t (X_{t}(\omega)-X_s(\omega))^2\frac{ds}{\varepsilon_n} \notag\\
&\leq \int_0^{T} (X_{(s+\varepsilon_n) \wedge T}(\omega)-X_s(\omega))^2\frac{ds}{\varepsilon_n} + \sup_{s \in [0,T]} |X_s(\omega)|^2.
\end{align}
A similar estimate holds with $X$ replaced by $N$. Consequently, recalling \eqref{Bill},
we have
\begin{align}\label{3.34}
&\sup_{t \in [0,\,T]} |J^B(t,\varepsilon_n)| \ensuremath{\mathonebb{1}}_{\Omega_{n,\ell}}\notag\\
&\leq \delta(\partial_x v|_{[0,\,T] \times \mathbb K^X(\omega)}, 3 \gamma)\sup_{t \in [0,\,T]}\sqrt{\int_0^t (X_{(s+\varepsilon_n) \wedge t}(\omega)-X_s(\omega))^2\frac{ds}{\varepsilon_n} \int_0^t (N_{(s+\varepsilon_n) \wedge t}(\omega)-N_s(\omega))^2\frac{ds}{\varepsilon_n} }\notag\\
&\leq \delta(\partial_x v|_{[0,\,T] \times \mathbb K^X(\omega)}, 3 \gamma)\cdot\notag\\
&\cdot \sqrt{\Big(\sup_{s \in [0,T]}|X_s(\omega)|^2+\int_0^T (X_{(s+\varepsilon_n) \wedge T}(\omega)-X_s(\omega))^2\frac{ds}{\varepsilon_n} \Big)\!\!\Big(\sup_{s \in [0,T]}|N_s(\omega)|^2+\int_0^T (N_{(s+\varepsilon_n) \wedge T}(\omega)-N_s(\omega))^2\frac{ds}{\varepsilon_n}\Big)}\notag\\
&\leq \ell\,\delta(\partial_x v|_{[0,\,T] \times \mathbb K^X(\omega)}, 3 \gamma),
\end{align}
where in the second inequality we have used \eqref{Xen}.
We continue now the proof of the u.c.p. convergence of $I_{1,b}(\cdot, \varepsilon_n)$. Let $K>0$. By \eqref{I1a+I1b} we have
\begin{align*}
&\mathbb P\Big(\sup_{t \in [0,\,T]}|I_{1,b}(t, \varepsilon_n)| >K\Big)\leq \mathbb P\Big(\Omega_{n, \ell}^c\Big) + \mathbb P\Big(\sup_{t \in [0,\,T]}|I_{1,b}(t, \varepsilon_n)| >K, \Omega_{n, \ell}\Big)\\
&\leq \kappa + \mathbb P\Big(\sup_{t \in [0,\,T]}|J^A(t, \varepsilon_n)| >\frac{K}{2}, \Omega_{n, \ell}\Big)+ \mathbb P\Big(\sup_{t \in [0,\,T]}|J^B(t, \varepsilon_n)| >\frac{K}{2}, \Omega_{n, \ell}\Big)\\
& \leq \kappa + \mathbb P\Big(\sup_{t \in [0,\,T]}|J^A(t, \varepsilon_n)| >\frac{K}{2}, \Omega_{n, \ell}\Big) + \mathbb P\Big(\frac{K}{2} < \ell\,\delta(\partial_x v|_{[0,\,T] \times \mathbb K^X(\omega)}, 3 \gamma)\Big),
\end{align*}
where in the latter inequality we have used \eqref{3.34}.
This holds true for fixed $\kappa$, $\ell(\kappa)$, $\gamma$, $K$.
So, taking into account \eqref{convJa},
\begin{align*}
\limsup_{n \rightarrow \infty} \mathbb P\Big(\sup_{t \in [0,\,T]}|I_{1,b}(t, \varepsilon_n)| >K\Big) \leq \kappa + \mathbb P\Big(\frac{K}{2} < \ell\,\delta(\partial_x v|_{[0,\,T] \times \mathbb K^X(\omega)}, 3 \gamma)\Big).
\end{align*}
We let now $\gamma \rightarrow 0$. We get $\delta(\partial_x v|_{[0,\,T] \times \mathbb K^X(\omega)}, 3 \gamma) \underset{\gamma \rightarrow 0}{\rightarrow} 0$ a.s., so that
$$
\mathbb P\Big(\frac{K}{2} < \ell\,\delta(\partial_x v|_{[0,\,T] \times \mathbb K^X(\omega)}, 3 \gamma)\Big) \underset{\gamma \rightarrow 0}{\rightarrow} 0.
$$
Consequently,
\begin{align*}
\limsup_{n \rightarrow \infty} \mathbb P\Big(\sup_{t \in [0,\,T]}|I_{1,b}(t, \varepsilon_n)| >K\Big)&\leq \kappa,
\end{align*}
and since $\kappa$ is arbitrary, this concludes the proof.
\hfill$\square$
The result below follows from
Theorem
\ref{P:3.10}, together with Remark \ref{R:3items}-2. and Propositions \ref{P:3.12} and \ref{R:th_chainrule}.
\begin{corollary}\label{C:spw2}
Let $X$ be an $\mathbb F$-weak Dirichlet process with weakly finite quadratic variation.
Let $v\in C^{0,1}([0,T]\times \mathbb R)$ be bounded, and set $Y_t = v(t,X_t)$.
Then $Y$ is an $\mathbb F$-special weak Dirichlet process.
\end{corollary}
\begin{remark}\label{R:pushforward}
Let $X$ be a c\`adl\`ag process, $v:[0,T] \times \mathbb R \rightarrow \mathbb R $ a continuous function, and set $Y=v(\cdot, X)$.
It is well-known that, for fixed $\omega \in \Omega$, $\mu^Y(\omega, \cdot)$ is the pushforward of $\mu^X(\omega, \cdot)$ under the map
$\mathcal H_{\omega}: (s,x) \mapsto (s,\, v(s,X_{s-}(\omega)+x)-v(s,X_{s-}(\omega)))$:
\begin{align}
\mu^Y(]0,\,t]\times A)&=\int_{ ]0,\,t]\times \mathbb R}\ensuremath{\mathonebb{1}}_{A \setminus \{0\}} (v(s,X_{s-}+ x)- v(s,X_{s-})) \, \mu^X(ds\, dx),\label{mubar}
\end{align}
for all $A \in \mathcal B(\mathbb R)$.
In particular,
$$
\Delta Y_t = \int_{ \mathbb R}y \, \mu^Y(\{t\}\times dy) =\int_{ \mathbb R} (v(t, X_{t-}+ x) -v(t, X_{t-})) \,\mu^X(\{t\} \times dx).
$$
Taking the dual predictable projection (compensator) in identity \eqref{mubar}, we get that
\begin{align}
\nu^Y(]0,\,t]\times A)&=\int_{ ]0,\,t]\times \mathbb R}\ensuremath{\mathonebb{1}}_{A \setminus \{0\}} (v(s,X_{s-}+ x)- v(s,X_{s-})) \, \nu^X(ds\, dx),\label{nubar}
\end{align}
for all $A \in \mathcal B(\mathbb R)$.
\end{remark}
\begin{remark}\label{R:332}
Let $X$ be a weak Dirichlet process of weakly finite quadratic variation with given characteristics $(B^k, C, \nu)$, and $v \in C^{0,1}([0,T] \times \mathbb R)$.
\begin{itemize}
\item[(i)]
By Theorem \ref{P:3.10}, $Y_t= v(t, X_t)$ is a weak Dirichlet process, so it admits characteristics $(\bar B^{k}, \bar C, \bar \nu)$.
Moreover, again by Theorem \ref{P:3.10}, recalling Remark \ref{R:3.27}-f),
\begin{align}
\bar C \circ Y = \langle Y^c, Y^c \rangle
&= \int_{]0,\,\cdot]}|\partial_x v(s, X_{s})|^2 d \langle X^c, X^c \rangle_s = \int_{]0,\,\cdot]}|\partial_x v(s, X_{s})|^2 d (C\circ X)_s.\label{Cbar}
\end{align}
If moreover $v(s, \cdot)$ is bijective for every $s$, from \eqref{Cbar}-\eqref{nubar} we get the explicit form of $\bar C$ and $\bar \nu$:
\begin{align}
&(\bar C \circ Y)_t
= \int_{]0,\,t]}|\partial_x v(s, v^{-1}(s,Y_{s}))|^2 d (C \circ v^{-1}(\cdot,Y))_s,\label{Cbardef}\\
&(\bar\nu \circ Y)(]0,\,t]\times A)\notag\\
&=\int_{ ]0,\,t]\times \mathbb R}\ensuremath{\mathonebb{1}}_{A \setminus \{0\}} \, (v(s,v^{-1}(s,Y_{s-})+ x)- v(s,v^{-1}(s,Y_{s-}))) \, (\nu \circ v^{-1}(\cdot,Y))(ds\, dx),\label{nubardef}
\end{align}
for all $A \in \mathcal B(\mathbb R)$.
\item[(ii)]
In general, instead, it does not seem possible to express the characteristic $\bar B^k$ of $Y$ explicitly in terms of the corresponding characteristic of $X$, even when $\bar B^k$ is a bounded variation process. This can however be done, for instance, when $v$ is a bijective homogeneous function of class $C^2$,
see Remark \ref{R:Rred}-(ii).
\end{itemize}
\end{remark}
\begin{remark}\label{R:Rred}
Let $X$ be a weak Dirichlet process with weakly finite quadratic variation.
Let $h: \mathbb R \rightarrow \mathbb R$ be bijective and of class $C^1$.
By Theorem \ref{P:3.10}, $Y_t := h(X_t)$ is a weak Dirichlet process.
\begin{itemize}
\item[(i)]
By Theorem \ref{T:2.11},
we have
\begin{align}\label{firstex}
Y&= Y^c + \Gamma^{k}(h)+ (h(X_{s-}+x)-h(X_{s-}))\frac{x-k(x)}{x}\star \mu^X \notag\\
&+ (h(X_{s-}+x)-h(X_{s-}))\frac{k(x)}{x}\star (\mu^X-\nu^X),
\end{align}
with $ \Gamma^{k}(h)$ a predictable and $\mathbb F$-martingale orthogonal process.
The characteristic $\bar B^{k}$ of $Y$ can be determined in terms of $\nu^X$ and of the map $\Gamma^k(h)$.
As a matter of fact,
by Corollary \ref{C:genchar}-(i) together with Remark \ref{R:cor}, recalling \eqref{mubar}-\eqref{nubar}, we have
\begin{align}\label{secondex}
Y&= Y^c + \bar B^{k,Y}+ (y-k(y))\star \mu^Y + k(y)\star (\mu^Y-\nu^Y)\notag\\
&=Y^c + \bar B^{k,Y}+ (h(X_{s-}+x)-h(X_{s-})-k(h(X_{s-}+x)-h(X_{s-})))\star \mu^X\notag\\
& + k(h(X_{s-}+x)-h(X_{s-}))\star (\mu^X-\nu^X)\notag\\
&=Y^c + \bar B^{k,Y}+ \Big[(h(X_{s-}+x)-h(X_{s-}))\frac{k(x)}{x}-k(h(X_{s-}+x)-h(X_{s-}))\Big]\star \mu^X\\
&+(h(X_{s-}+x)-h(X_{s-}))\frac{x-k(x)}{x}\star \mu^X + k(h(X_{s-}+x)-h(X_{s-}))\star (\mu^X-\nu^X).\notag
\end{align}
Subtracting \eqref{firstex} from \eqref{secondex}, we get
\begin{align*}
0&= \bar B^{k,Y}-\Gamma^{k}(h)+ \Big[(h(X_{s-}+x)-h(X_{s-}))\frac{k(x)}{x}-k(h(X_{s-}+x)-h(X_{s-}))\Big]\star \mu^X\\
&
+ \Big[k(h(X_{s-}+x)-h(X_{s-}))- (h(X_{s-}+x)-h(X_{s-}))\frac{k(x)}{x}\Big]\star (\mu^X-\nu^X)\\
&= \bar B^{k,Y}-\Gamma^{k}(h)
+ \Big[k(h(X_{s-}+x)-h(X_{s-}))- (h(X_{s-}+x)-h(X_{s-}))\frac{k(x)}{x}\Big]\star \nu^X,
\end{align*}
which yields
\begin{align*}
\bar B^{k,Y}
&= \Gamma^{k}(h)
- \Big[k(h(X_{s-}+x)-h(X_{s-}))- \frac{(h(X_{s-}+x)-h(X_{s-}))}{x}k(x)\Big]\star \nu^X.
\end{align*}
\item[(ii)]Assume moreover that $h \in C^2$, and that $X$ is a semimartingale with characteristics $(B^k, C, \nu)$. Then it is possible to express the characteristic $\bar B^k$ of $Y$ explicitly in terms of the characteristics of $X$. In particular, this involves a Lebesgue integral with respect to the bounded variation process $B^k$.
As a matter of fact, for every $f\in C^2 \cap C_b^0$, $f(Y)$ is a special semimartingale. By Theorem \ref{T: equiv_mtgpb_semimart} applied to $(f\circ h)(X)$, the predictable bounded variation part of $f(Y)$ is given by
\begin{align}\label{firstexsem}
&(f\circ h)(X_0) + \frac{1}{2} \int_0^{\cdot} (f\circ h)''(X_s) \,dC^X_s+ \int_0^{\cdot} (f\circ h)'(X_s) \,d B_s^{k,X}\notag\\
&+ \int_{]0,\cdot]\times \mathbb R} ((f\circ h)(X_{s-} + x) -(f\circ h)(X_{s-})-k(x)\,(f\circ h)'(X_{s-}))\,\nu^X(ds\,dx)\notag\\
&=f(Y_0) + \frac{1}{2} \int_0^{\cdot} [f''(h(X_{s}))(h'(X_s))^2+ f'(h(X_{s}))h''(X_s)] \,dC^X_s+ \int_0^{\cdot} f'(h(X_{s})) h'(X_s)\,d B_s^{k,X}\notag\\
&+ \int_{]0,\cdot]\times \mathbb R} [(f\circ h)(X_{s-} + x) -(f\circ h)(X_{s-})-k(x)\,f'(h(X_{s-}))h'(X_{s-})]\,\nu^X(ds\,dx).
\end{align}
On the other hand, again by Theorem \ref{T: equiv_mtgpb_semimart} applied to $f(Y)$, and recalling \eqref{Cbar} and \eqref{nubar}, the process above is equal to
\begin{align}\label{secondexsem}
&
f(Y_0) + \frac{1}{2} \int_0^{\cdot} f''(Y_s) \,dC^Y_s+ \int_0^{\cdot} f'(Y_s) \,d \bar B_s^{k,Y}\notag\\
&+ \int_{]0,\cdot]\times \mathbb R} (f(Y_{s-} + y) -f(Y_{s-})-k(y)\,f'(Y_{s-}))\,\nu^Y(ds\,dy)\notag\\
&= f(Y_0) + \frac{1}{2} \int_0^{\cdot} f''(h(X_{s})) \,(h'(X_{s}))^2\,dC^X_s+ \int_0^{\cdot} f'(h(X_s)) \,d\bar B_s^{k,Y}\notag\\
&+ \int_{]0,\cdot]\times \mathbb R} [k(x)\,h'(X_{s-})-k(h(X_{s-}+x)-h(X_{s-}))]\,f'(h(X_{s-}))\,\nu^X(ds\,dx)\notag\\
&+ \int_{]0,\cdot]\times \mathbb R} [(f \circ h)(X_{s-}+x) -(f\circ h)(X_{s-})-k(x)\,f'(h(X_{s-}))h'(X_{s-})]\,\nu^X(ds\,dx).
\end{align}
Subtracting \eqref{secondexsem} from \eqref{firstexsem}
we get
\begin{align}\label{charactLim}
& \frac{1}{2} \int_0^{\cdot} f'(h(X_{s}))h''(X_s) \,dC^X_s
+ \int_0^{\cdot} f'(h(X_{s})) [h'(X_s)d B_s^{k,X}-d \bar B_s^{k,Y}]\notag\\
&- \int_{]0,\cdot]\times \mathbb R} [k(x)\,h'(X_{s-})-k(h(X_{s-}+x)-h(X_{s-}))]\,f'(h(X_{s-}))\,\nu^X(ds\,dx)=0.
\end{align}
Define now a smooth cutoff function $\chi: \mathbb R \rightarrow \mathbb R$ equal to $1$ if $a \leq -1$, equal to $0$ if $a \geq 0$,
and such that $\chi(a) \in [0,\,1]$ for $a \in (-1, 0)$.
Set
\begin{align}
\chi_N(x):=\chi(|x|- (N+1))=
\left\{
\begin{array}{ll}
1 \quad \textup{if}\,\, |x| \leq N\\
0 \quad \textup{if}\,\, |x| \geq N+1\\
\in [0,\,1]\quad
\textup{otherwise}.
\end{array}
\right.\label{chiN}
\end{align}
Notice that each $\chi_N$ is a smooth function.
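For definiteness, one admissible choice of $\chi$ is the classical construction (any smooth function with the stated properties works equally well): set $\varphi(a) = e^{-1/a}$ for $a>0$ and $\varphi(a)=0$ for $a \leq 0$, and define
$$
\chi(a) = \frac{\varphi(-a)}{\varphi(-a)+\varphi(a+1)}.
$$
Then $\chi$ is smooth, $\chi(a)=1$ for $a \leq -1$ (where $\varphi(a+1)=0$), $\chi(a)=0$ for $a \geq 0$ (where $\varphi(-a)=0$), and $\chi(a) \in [0,1]$ everywhere, the denominator never vanishing since $\varphi(-a)$ and $\varphi(a+1)$ cannot both be zero.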
We apply \eqref{charactLim} with $f = f_N$, where $f_N(0)=0$ and $f'_N(x) = \chi_N(x)$.
Letting $N \rightarrow \infty$ in \eqref{charactLim}, we get
\begin{align*}
\bar B^{k,Y}&= \frac{1}{2} \int_0^{\cdot} h''(X_s) \,dC^X_s+ \int_0^{\cdot} h'(X_s)\,d B_s^{k,X}\notag\\
&- \int_{]0,\cdot]\times \mathbb R} [k(x)\,h'(X_{s-})-k(h(X_{s-}+x)-h(X_{s-}))]\,\nu^X(ds\,dx).
\end{align*}
\end{itemize}
\end{remark}
\section{Generalized martingale problems}
\subsection{Stochastic calculus related to martingale problems} \label{S:4.1}
Let $(\Omega, \mathbb F)$ be a measurable space and $\mathbb P$ a probability measure.
Suppose that $X$ is a weakly finite quadratic variation process, with canonical filtration $\mathbb F^X=(\mathcal F^X_t)$, and that, for every $v$ belonging to some linear dense subspace $\mathcal D^{\mathcal S}$ of $C^{0,1}([0,T] \times \mathbb R)$, $v(\cdot, X)$ is a weak Dirichlet process.
The theorem below provides necessary and sufficient conditions under which $v(\cdot, X)$ is weak Dirichlet for every $v\in C^{0,1}([0,T] \times \mathbb R)$.
In the sequel we will make use of the following property for the couple $(\mathcal D^S, X)$.
\begin{hypothesis}\label{newH}
For every $v \in \mathcal D^S$, $Y^v:=v(\cdot, X)$ is a weak Dirichlet process, with unique continuous local martingale component $Y^{v,c}$.
\end{hypothesis}
\begin{remark}\label{R:4.4}
Let $\mathcal D^S\subseteq C^{0,1}([0,T]\times \mathbb R)$, and $X$ be a c\`adl\`ag process satisfying \eqref{int_small_jumps}. If Hypothesis \ref{newH} holds true for $(\mathcal D^S, X)$, then $\mathcal D^S$ is contained in the set $\mathcal D$ introduced in Remark \ref{R:linmapGamma}. As a matter of fact, since $\mathcal D^S$ is contained in $C^{0,1}$, Hypothesis \ref{H:3.7-3.8} holds true; see Proposition \ref{R:th_chainrule}.
\end{remark}
\begin{theorem}\label{T:new4.4}
Let $\mathcal D^{\mathcal S}$ be a dense subspace of $C^{0,1}([0,T] \times \mathbb R)$, and $X$ be a weakly finite quadratic variation process.
The following are equivalent.
\begin{enumerate}
\item $X$ is a weak Dirichlet process.
\item
\begin{itemize}
\item [(i)]$v \mapsto Y^{v,c}$, $ \mathcal {D}^{\mathcal S}\rightarrow \mathbb D^{ucp}$, is continuous in zero.
\item [(ii)] Hypothesis \ref{newH} holds true for $(\mathcal D^S, X)$.
\end{itemize}
\item $v(t, X_t)$ is a weak Dirichlet process for every $v \in C^{0,1}([0,T] \times \mathbb R)$.
\end{enumerate}
\end{theorem}
\proof
3. $\Rightarrow$ 1. This follows by taking $v \equiv \textup{Id}$.
\noindent 1. $\Rightarrow$ 2. By Theorem \ref{P:3.10}, for every $v \in C^{0,1}([0,T]\times \mathbb R)$, $v(\cdot, X)$ is a weak Dirichlet process, and
$$
Y^{v,c} = Y_0 + \int_{0}^{\cdot}\partial_x v(s, X_{s})\,dX^c_s.
$$
By Problem 2.27, Chapter 3, in \cite{ks}, this implies the continuity stated in item (i). Moreover, item (ii) trivially holds.
\noindent 2. $\Rightarrow$ 3.
By item (ii), for every $v \in \mathcal D^S$, $v(\cdot, X)$ is a weak Dirichlet process, with unique continuous martingale component $Y^{v,c}$. Since by item (i) $v \mapsto Y^{v,c}$, $ \mathcal {D}^{\mathcal S}\rightarrow \mathbb D^{ucp}$, is continuous, it extends continuously to $C^{0,1}$.
We will denote in the same way the extended operator. Since the space of continuous local martingales is closed under the u.c.p. convergence, $Y^{v,c}$ is a continuous local martingale for every $v \in C^{0,1}([0,T]\times \mathbb R)$.
We denote $A^v:=v(\cdot, X)-Y^{v,c}$, for every $v \in C^{0,1}([0,T]\times \mathbb R)$. It remains to prove that $A^v$ is a martingale orthogonal process, namely that, for every continuous local martingale $N$,
\begin{equation}\label{cov_Y-Yc,N}
[v(\cdot, X_\cdot), N]=[Y^{v,c}, N].
\end{equation}
\noindent \emph{Step a)}.
Equality \eqref{cov_Y-Yc,N} holds true for every $v \in \mathcal D^{\mathcal S}$, since $v(\cdot, X_\cdot)$ is a weak Dirichlet process, and therefore $v(\cdot, X_\cdot) - Y^{v,c}$ is a martingale orthogonal process, see Proposition \ref{P:uniqdec}.
\noindent \emph{Step b)}. Let $(\varepsilon_n)$ be a sequence converging to zero. We need to show that (recall Proposition A.3 in \cite{BandiniRusso1})
\begin{equation}\label{Cov_approx}
\int_0^\cdot [v(s + \varepsilon_n, X_{s + \varepsilon_n})-v(s, X_s)] (N_{s + \varepsilon_n}- N_s) \frac{ds}{\varepsilon_n} \rightarrow [Y^{v,c}, N] \quad\textup{u.c.p. as }\,n \rightarrow \infty.
\end{equation}
Indeed, for this, it is enough to show the existence of a subsequence, still denoted by $(\varepsilon_n)$, such that \eqref{Cov_approx} holds.
Since $N$ is a martingale, $[N, N]$ exists, so that (again by Proposition A.3 in \cite{BandiniRusso1})
$$
[N,N]_{\varepsilon_n}:=
\int_0^\cdot (N_{s + \varepsilon_n}- N_s)^2 \frac{ds}{\varepsilon_n} \rightarrow [N, N] \quad\textup{u.c.p. as }\,n \rightarrow \infty.
$$
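The approximation above can be illustrated by discretizing it for a simulated Brownian path $N = B$, for which $[N,N]_t = t$; this is only an illustrative sketch, with grid and parameters of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(0)

T, dt = 1.0, 1.0e-3
n = int(T / dt)
# Brownian path on [0, 2T], so that s + eps stays inside the simulated range.
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), 2 * n))])

def bracket_approx(eps):
    # Riemann-sum discretization of int_0^T (N_{s+eps} - N_s)^2 ds / eps.
    k = int(round(eps / dt))
    incr = B[k:k + n] - B[:n]
    return float(np.sum(incr ** 2) * dt / eps)

# For Brownian motion [N, N]_T = T = 1, so for small eps the approximation
# should be close to 1, up to Monte Carlo fluctuation.
approx = bracket_approx(0.01)
```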
By extracting a subsequence, we can assume that the previous convergence holds uniformly almost surely.
Let us then prove \eqref{Cov_approx}. To this end, we introduce the maps
\begin{align*}
T_n : \,C^{0,1}([0,T] \times \mathbb R) &\rightarrow \mathbb D^{ucp}\\
v&\mapsto \int_0^\cdot [v(s + \varepsilon_n, X_{s + \varepsilon_n})-v(s, X_s)](N_{s + \varepsilon_n}- N_s)\frac{ds}{\varepsilon_n}.
\end{align*}
They are linear and continuous, and $\mathbb D^{ucp}$ is an $F$-space in the sense of \cite{ds}, Chapter 2.1.
Suppose that we exhibit a metric $d_{ucp}$ compatible with the topology of $\mathbb D^{ucp}$ such that,
\begin{equation}\label{FinalToprove}
\textup{for every fixed} \,\,v \in C^{0,1}([0,T] \times \mathbb R),\quad (T_n(v))_n \,\,\, \textup{is bounded in }\,\mathbb D^{ucp}.
\end{equation}
By Step a), for every $v \in \mathcal D^{\mathcal S}$ (dense subset of $C^{0,1}([0,T] \times \mathbb R)$) we already know that
\begin{equation}\label{Tn_conv}
T_n(v) \rightarrow [Y^{v,c}, N] \quad \textup{u.c.p. as } \,n \rightarrow \infty.
\end{equation}
Then the Banach-Steinhaus theorem would imply the existence of a linear and continuous map $T : \,C^{0,1}([0,T] \times \mathbb R) \rightarrow \mathbb D^{ucp}$ such that
$T_n (v) \rightarrow T(v)$ u.c.p. for all $v \in C^{0,1}([0,T] \times \mathbb R)$.
The map $v \mapsto [Y^{v,c},N]$ is continuous by Proposition \ref{P:App}. Then by \eqref{Tn_conv}, $T(v) \equiv [Y^{v,c}, N]$.
\noindent \emph{Step c)}. It remains to show \eqref{FinalToprove}. Let $v \in C^{0,1}([0,T] \times \mathbb R)$. We have
$T_n(v) = T_n^1(v) + T_n^2(v)$,
where
\begin{align*}
T_n^1(v)&:= \frac{1}{\varepsilon_n}\int_0^\cdot [v(s + \varepsilon_n, X_{s + \varepsilon_n})-v(s + \varepsilon_n, X_s)](N_{s + \varepsilon_n}- N_s)ds,\\
T_n^2(v)&:=\frac{1}{\varepsilon_n}\int_0^\cdot [v(s + \varepsilon_n, X_{s})-v(s, X_s)](N_{s + \varepsilon_n}- N_s)ds.
\end{align*}
By similar arguments as in the proof of Proposition 3.10 in \cite{gr},
$
T_n^2(v)\rightarrow 0$ u.c.p. as $n \rightarrow \infty$.
Indeed, since $v$ is continuous,
\begin{align*}
\frac{1}{\varepsilon_n^2}\int_0^T \Big|\int_{r-\varepsilon_n}^{r} [v(s + \varepsilon_n, X_{s})-v(s, X_s)]ds\Big|^2 d[N, N]_r
\end{align*}
converges to zero, so that
\begin{align}\label{KSconv}
\int_0^\cdot \frac{dN_r}{\varepsilon_n} \int_{r-\varepsilon_n}^{r} [v(s + \varepsilon_n, X_{s})-v(s, X_s)]ds
\end{align}
converges u.c.p. to zero by Problem 2.27, Chapter 3, in \cite{ks}. By the stochastic Fubini theorem, we observe that the processes in \eqref{KSconv} are equal to $(T_n^2(v))$ up to a sequence of processes converging u.c.p. to zero.
Concerning $(T_n^1(v))$, we have
\begin{align*}
T_n^1(v)(t)&:= \frac{1}{\varepsilon_n}\int_0^t ds \int_0^1 da\, \partial_x v(s + \varepsilon_n, X_{s}+ a(X_{s + \varepsilon_n}-X_{s}))(X_{s + \varepsilon_n}- X_s) (N_{s + \varepsilon_n}- N_s).
\end{align*}
\noindent \emph{Step d)}.
We choose
$
d_{ucp}(X^1, X^2)=\mathbb E\Big[\sup_{t \leq T}|X_t^1 - X_t^2|\wedge 1\Big]
$
if $X^1, X^2 \in \mathbb D^{ucp}$.
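For a discretized illustration (our own, not part of the argument), the metric $d_{ucp}$ can be estimated empirically from sampled trajectories on a common time grid:

```python
import numpy as np

def d_ucp(paths1, paths2):
    # paths1, paths2: arrays of shape (n_samples, n_times) holding sampled
    # trajectories of X^1 and X^2 on a common time grid on [0, T].
    # Empirical version of d_ucp(X^1, X^2) = E[sup_{t <= T} |X^1_t - X^2_t| /\ 1],
    # with the expectation replaced by the mean over the samples.
    sup_dist = np.max(np.abs(paths1 - paths2), axis=1)
    return float(np.mean(np.minimum(sup_dist, 1.0)))
```

Identical processes are at distance $0$, and the truncation by $1$ caps the contribution of far-apart trajectories.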
Since $(T^2_n(v))$ is a convergent sequence, it is necessarily bounded. To prove that $(T_n(v))$ is bounded it remains to prove the same for $(T^1_n(v))$. To this end,
let $\kappa >0$. We need to show the existence of $\delta$ such that
\begin{equation}\label{est_distance}
d_{ucp}(\delta T_n^1(v), 0)< \kappa \quad \forall n.
\end{equation}
Let $\mathbb K^X(\omega)$ be the random set introduced in \eqref{KX}.
Now, introducing the (finite) random variables
\begin{align*}
\tilde \Lambda(\omega) &:= \sup_{s \in [0,\,T], \, x \in
\mathbb K^X(\omega)}|\partial_x v|(s,x),\\
\Lambda(\omega) &:=\tilde\Lambda(\omega) \, \sup_n \Big[\int_0^T(N_{s + \varepsilon_n}(\omega)- N_s(\omega))^2 ds\Big]^{1/2},
\end{align*}
we get
\begin{align*}
\sup_{t \leq T} |T_n^1(v)|&\leq \Big[\frac{1}{\varepsilon_n}\int_0^T(X_{s + \varepsilon_n}- X_s)^2 ds\, \frac{1}{\varepsilon_n}\int_0^T(N_{s + \varepsilon_n}- N_s)^2 ds\Big]^{1/2}\tilde\Lambda\\
&\leq \Big[\frac{1}{\varepsilon_n^2}\int_0^T(X_{s + \varepsilon_n}- X_s)^2 ds\Big]^{1/2} \Lambda.
\end{align*}
Since $X$ is of weakly finite quadratic variation, we can introduce $M >0$ such that
\begin{align*}
\mathbb P\Big(\frac{1}{\varepsilon_n^2}\int_0^T(X_{s + \varepsilon_n}- X_s)^2 ds >M^2\Big)\leq \frac{\kappa}{4}, \qquad
\mathbb P(|\Lambda|>M) \leq \frac{\kappa}{4}, \qquad \forall n.
\end{align*}
Now, setting
$$
\Omega_{M,n}:= \Big(\Big\{\frac{1}{\varepsilon_n^2}\int_0^T(X_{s + \varepsilon_n}- X_s)^2 ds >M^2\Big\} \cup \{|\Lambda|>M\}\Big)^c,
$$
we notice that $\mathbb P(\Omega_{M,n}^c) \leq \frac{\kappa}{2}$.
We have
\begin{align*}
\mathbb E\Big[\sup_{t \leq T} \delta |T_n^1(v)| \wedge 1\Big]= \mathbb E\Big[1_{\Omega_{M,n}^c}\sup_{t \leq T} \delta |T_n^1(v)| \wedge 1\Big]+\mathbb E\Big[1_{\Omega_{M,n}}\sup_{t \leq T} \delta |T_n^1(v)| \wedge 1\Big]
\end{align*}
is bounded by
\begin{align*}
\frac{\kappa}{2} + \mathbb E\Big[1_{\Omega_{M,n}}\sup_{t \leq T} \delta |T_n^1(v)| \wedge 1\Big]\leq \frac{\kappa}{2} + \mathbb E[\delta M^2 \wedge 1] \leq \frac{\kappa}{2} + \delta M^2.
\end{align*}
Formula \eqref{est_distance} follows by choosing $\delta$ so that $\delta M^2 < \frac{\kappa}{2}$.
\endproof
\begin{remark}
In Section \ref{S:mtgpb} we will introduce a suitable notion of
path-dependent martingale problem with respect to some operator $\mathcal A$ and domain $\mathcal D_{\mathcal A}$.
In particular, if $X$ is a solution to the aforementioned martingale problem, $v(t, X_t)$ is a special semimartingale for every $v \in \mathcal D_{\mathcal A}$.
In that case, if $\mathcal D_{\mathcal A}$ is a dense subspace of $C^{0,1}$, Theorem \ref{T:new4.4} (with $\mathcal D^{S}=\mathcal D_{\mathcal A}$) will contribute to obtain a (weak Dirichlet) decomposition of $v(t, X_t)$ for every $v \in C^{0,1}$.
In many irregular situations, the identity function does not belong to $\mathcal D_{\mathcal A}$ but only to $C^{0,1}$; this allows, among other things, to obtain a sort of Doob-Meyer type decomposition for the process $X$ itself.
\end{remark}
The result below reformulates point 2.(i) of Theorem \ref{T:new4.4} in terms of the map $\Gamma^k$ in Theorem \ref{T:2.11}.
\begin{proposition}\label{T:3.30_bis}
Let $\mathcal D^{\mathcal S}$ be a dense subspace of $C^{0,1}([0,T] \times \mathbb R)$. Let $X$ be a weakly finite quadratic variation process.
Assume that Hypothesis \ref{newH} holds true for $(\mathcal D^S, X)$.
Then,
$v \mapsto Y^{v,c}$, $ \mathcal {D}^{\mathcal S}\subseteq C^{0,1}([0,T] \times \mathbb R) \rightarrow \mathbb D^{ucp}$, is continuous in zero
if and only if the map $\Gamma^k$ in Theorem \ref{T:2.11} restricted to $\mathcal D^{\mathcal S}$ is continuous in zero with respect to the $C^{0,1}$-topology.
\end{proposition}
\proof
The equivalence property follows by Lemma \ref{L:contmtg_d} and formula \eqref{dec_wD1} in Theorem \ref{T:2.11}.
\endproof
We end this section by relating the brackets of some martingales associated with a semimartingale of the type $v(\cdot, X)$, where $X$ is a c\`adl\`ag process, to the map $\Gamma(v)$ in Theorem \ref{T:2.11bis}.
\begin{proposition}\label{P:3.34}
Let
$X$ be a c\`adl\`ag process satisfying \eqref{int_small_jumps}.
Let $v \in C^{0,1}([0,T]\times \mathbb R)$ be bounded and such that $Y:=v(\cdot, X)$ is a semimartingale, with unique continuous local martingale component $Y^{c}$. Then,
$Y$ is a special semimartingale with unique martingale part $N$ satisfying
\begin{align}\label{bracketN_bis}
\langle
N,
N\rangle
&= \Gamma(v^2)-2 \int_0^\cdot v(s, X_{s-})d \Gamma_s(v) - \sum_{0 \leq s \leq \cdot} \left|\Delta \Gamma_s(v)
\right|^2.
\end{align}
Moreover,
\begin{align}
\langle Y^{c}, Y^{c}\rangle &=\Gamma(v^2) -2\int_0^\cdot v(s, X_{s-}) d \Gamma_s(v)
-[v(s,X_{s-}
+ x) -v(s,X_{s-})]^2
\star\nu^{X, \mathbb P}\label{bracketYc}\\
&= \Gamma^c(v^2)-2 \int_0^\cdot v(s, X_{s-}) d \Gamma_s^c(v) - [v(s, X_{s-}+x)-v(s, X_{s-})]^2 \star \nu^{X,\mathbb P,c}, \label{Ycn_sigmaA_bis}
\end{align}
where
$\nu^{X, \mathbb P,c}$ (resp. $\Gamma^c(v)$) is the continuous part of $\nu^{X, \mathbb P}$
(resp. of $\Gamma(v)$),
where $\Gamma(v)$ is the process appearing in Theorem \ref{T:2.11bis}.
\end{proposition}
\begin{remark}
Formula \eqref{bracketN_bis} implies in particular that if $\Gamma(v)$ and $\Gamma(v^2)$ are continuous, then $\langle N, N\rangle$ is continuous as well.
\end{remark}
\begin{remark}
By Theorem \ref{T: equiv_mtgpb_semimart}, $Y^{v^2}$ is a semimartingale. On the other hand, $Y^{v}:=v(\cdot, X)$ and $Y^{v^2}:=v^2(\cdot, X)$ are special weak Dirichlet processes by Theorem \ref{T:spw} and Remark \ref{R:boundspecfirst}, since $v$ and $v^2$ belong to $C^{0,1}$ and are bounded. This implies that $Y^{v}$ and $Y^{v^2}$ are special semimartingales.
\end{remark}
\noindent \emph{Proof of Proposition \ref{P:3.34}}.
We first prove identity \eqref{Ycn_sigmaA_bis}.
By Theorem \ref{T:2.11bis}, we get that
\begin{align}
N_t &:=
Y_t -Y_0 - \Gamma_t(v),\label{N1tA}\\
\bar N_t &:= Y_t^2 - Y_0^2
- \Gamma_t(v^2)\label{N2tA}
\end{align}
are local martingales under $\mathbb P$.
Applying the integration by parts formula, and taking into account \eqref{N1tA} and \eqref{N2tA}, we get
\begin{align}\label{bracketYA}
[Y, Y]_t &=Y_t^2- Y_0^2 -2 \int_0^t Y_{s-} d Y_s\notag\\
&= \Gamma_t(v^2) -2\int_0^t v(s, X_{s-}) d \Gamma_s(v) + \bar N_t - 2 \int_0^t v(s,X_{s-}) d N_s.
\end{align}
We now show that
\begin{align}
&[v(s,X_{s-}
+ x) -v(s,X_{s-})]^2
\star\nu^{X, \mathbb P}
\in \mathcal{A}_{\textup{loc}}^+.\label{int_v_Aloc}
\end{align}
As a matter of fact,
\begin{align}
&[v(s,X_{s-}
+ x) -v(s,X_{s-})]^2
\star\nu^{X, \mathbb P}\label{squarenu}
\\
&= [v(s,X_{s-}
+ x) -v(s,X_{s-})]^2 \frac{k^2(x)}{x^2}
\star\nu^{X, \mathbb P}+[v(s,X_{s-}
+ x) -v(s,X_{s-})]^2\frac{x^2 -k^2(x)}{x^2}
\star\nu^{X, \mathbb P}.\notag
\end{align}
The first term in the right-hand side of \eqref{squarenu} belongs to $\mathcal{A}_{\textup{loc}}^+$ by Proposition \ref{P:2.10}.
On the other hand, let $c>0$ such that $k(x)=x$ on $[-c,c]$. Then the second term in the right-hand side of \eqref{squarenu} belongs to $\mathcal{A}_{\textup{loc}}^+$ by
Lemma \ref{L:c} and the fact that $v$ is bounded.
At this point, by \eqref{QVC} and Proposition \ref{P:2.14},
\begin{align}\label{brackY}
&[Y, Y]= \langle Y^{c}, Y^{c}\rangle+ \sum_{s \leq \cdot} |\Delta Y_s|^2=\langle Y^{c}, Y^{c}\rangle + [v(s,X_{s-}+x)-v(s,X_{s-})]^2 \star \mu^X\\
&=\langle Y^{c}, Y^{c}\rangle + [v(s,X_{s-}+x)-v(s,X_{s-})]^2 \star \nu^{X, \mathbb P}
+ [v(s,X_{s-}+x)-v(s,X_{s-})]^2 \star (\mu^X-\nu^{X, \mathbb P}).\notag
\end{align}
Since \eqref{bracketYA} and \eqref{brackY} provide two decompositions of the same special semimartingale $[Y,Y]$, we get \eqref{bracketYc}.
Denoting by $C$ the right-hand side of \eqref{bracketYc}, we remark that $C$ is a continuous bounded variation process.
Therefore, taking into account Remark \ref{R:3items}-1., \eqref{bracketYc} implies \eqref{Ycn_sigmaA_bis}.
On the other hand,
Theorem \ref{T:2.11bis} implies the unique decomposition \eqref{N1tA}
with
\begin{align}
N_t
&:= Y^{c}_t -Y_0 + [v(s,X_{s-} + x) -v(s,X_{s-})]
\star(\mu^X-\nu^{X, \mathbb P})_t.\label{barNtA}
\end{align}
Now we notice that, by formula \eqref{int_v_Aloc},
the stochastic integral appearing in the right-hand side of \eqref{barNtA} is a locally square integrable martingale,
and its predictable bracket yields
\begin{align*}
&[v(s,X_{s-}
+ x) -v(s,X_{s-})]^2
\star\nu^{X, \mathbb P}\\
&- \sum_{0 \leq s \leq \cdot} \Big|\int_{\mathbb R} [v(s,X_{s-}
+ x) -v(s,X_{s-})]
\nu^{X, \mathbb P}(\{s\}\times dx)\Big|^2,
\end{align*}
see the end of Section \ref{SPrelim}.
Since this purely discontinuous martingale is a martingale orthogonal process,
\begin{align}\label{predbracketNbarA}
\langle
N,
N\rangle
&=\langle Y^{c}, Y^{c}\rangle + [v(s,X_{s-}
+ x) -v(s,X_{s-})]^2
\star\nu^{X, \mathbb P}\notag\\
&- \sum_{0 \leq s \leq \cdot} \Big|\int_{\mathbb R} [v(s,X_{s-}
+ x) -v(s,X_{s-})]
\nu^{X, \mathbb P}(\{s\}\times dx)\Big|^2.
\end{align}
Identity \eqref{bracketN_bis} follows by plugging \eqref{bracketYc} in \eqref{predbracketNbarA}.
\endproof
\subsection{Definition and main properties of the martingale problem}\label{S:mtgpb}
Given $\eta \in D_{-}(0,\,T)$ (resp. $\zeta \in D(0,\,T)$), we will use the notation
\begin{align*}
\eta^{t}(s) :=
\left\{
\begin{array}{ll}
\eta(s) \quad \textup{if}\,\,s <t,\\
\eta(t)\quad \textup{if}\,\,s \geq t.
\end{array}
\right.
\end{align*}
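The stopped-path operation $\eta \mapsto \eta^t$ can be sketched on a discretized path as follows; the grid representation and names are ours, chosen purely for illustration.

```python
def stopped_path(eta, times, t):
    # eta^t(s) = eta(s) if s < t, and eta^t(s) = eta(t) if s >= t.
    # eta: path values on the increasing grid `times`; for this sketch we
    # assume t is one of the grid points, so eta(t) is directly available.
    eta_at_t = eta[times.index(t)]
    return [eta[i] if s < t else eta_at_t for i, s in enumerate(times)]
```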
We denote by $C^{NA}(D_{-}(0,\,T); B(0,T))$ the subspace of $F \in C(D_{-}(0,\,T); B(0,T))$ such that $F(\eta)(s)= F(\eta^s)(s)$ for every $\eta \in D_{-}(0,\,T)$ and $s \in [0,T]$. From now on, for simplicity, we will write $F(s,\eta):=F(\eta)(s)$.
We consider the following hypothesis for a triplet $(\mathcal D, \mathbb Lambda, \gamma)$.
\begin{hypothesis}\label{H:nonant}
$\mathcal D\subseteq C^{0,1}([0,T] \times \mathbb R)$, $\Lambda: \mathcal D \rightarrow C^{NA}(D_{-}(0,\,T); B(0,T))$ is a linear map. $ \gamma: [0,\,T] \times D_{-}(0,\,T)\rightarrow \mathbb R$ is such that
for every $\eta \in D_{-}(0,\,T)$, $\gamma(\cdot, \eta)$ is of bounded variation, and fulfills the non-anticipating property,
i.e., for every $\eta \in D_{-}(0,\,T)$,
$
\gamma(t,\eta) = \gamma(t, \eta^{t})$.
\end{hypothesis}
\begin{definition}\label{D:newA}
Fix $N \in \mathbb N$.
Let $\mathcal D_{\mathcal A}\subseteq C^{0,1}([0,T] \times \mathbb R)$, $\Lambda_i: \mathcal D_{\mathcal A} \rightarrow C^{NA}(D_{-}(0,\,T); B(0,T))$ and $ \gamma_i: [0,\,T] \times D_{-}(0,\,T) \rightarrow \mathbb R$, such that $(\mathcal D_{\mathcal A}, \Lambda_i, \gamma_i)$ fulfill Hypothesis \ref{H:nonant} for all $i=1,\dots, N$.
For every $v \in \mathcal {D}_{\mathcal A}$, $\eta \in D_{-}(0,\,T)$, we set
\begin{equation}\label{newA}
(\mathcal{A} v)(ds,\eta):= \sum_{i=1}^N (\Lambda_i v)(s,\eta)\,\gamma_i(ds, \eta).
\end{equation}
\end{definition}
\begin{remark}
We have the decomposition $(\mathcal{A} v)(ds,\cdot)= (\mathcal{A} v)((ds)^c,\cdot)+ (\mathcal{A} v)(\Delta s,\cdot)$, with
\begin{align}
(\mathcal{A} v)(\Delta s,\cdot)&:= \sum_{i=1}^N (\Lambda_i v)(s,\cdot)\, \gamma_i(\Delta s, \cdot),\label{newAjump}\\
(\mathcal{A} v)((ds)^c,\cdot)&:= \sum_{i=1}^N (\Lambda_i v)(s,\cdot)\,\gamma_i^c(ds, \cdot).\label{Acont}
\end{align}
\end{remark}
\begin{definition}\label{D:mtpb1}
Let $\mathcal{A}$ and $\mathcal D_{\mathcal A}$ be as in Definition \ref{D:newA}.
A process $X$ is said to solve the martingale problem (under a probability $\mathbb P$)
with respect to
$\mathcal A$, $\mathcal D_{\mathcal A}$ and $x_0 \in \mathbb R$, if
\begin{itemize}
\item [(i)] condition \eqref{int_small_jumps} holds under $\mathbb P$;
\item[(ii)]
for any bounded
$v \in \mathcal {D}_{\mathcal A}$,
\begin{align}\label{mtg_pb_general}
M^v_t := v(t,X_{t}) - v(0,x_0) - \int_0^{t} (\mathcal{A} v)(ds,X^{-})
\end{align}
is an $(\mathcal F^X_t)$-local martingale.
\end{itemize}
\end{definition}
\begin{remark}
Let $\mathcal{A}$ and $\mathcal D_{\mathcal A}$ be as in Definition \ref{D:newA}, and set
\begin{equation}
\mathcal X = \{
M^v
:\,\,v \in \mathcal D_{\mathcal A}
\},\label{chi_ex}
\end{equation}
with $M^v$
defined in \eqref{mtg_pb_general}.
Then $(X,\mathbb P)$ solves the martingale problem with respect to $\mathcal A$, $\mathcal D_{\mathcal A}$ and $x_0$ if and only if
$\mathbb P$ is solution of the martingale problem in Definition 1.3, Chapter III, in \cite{JacodBook}
associated to
$(\Omega, \mathcal F, \mathbb F)$,
$ \mathcal X$ in \eqref{chi_ex}, $ \mathcal H=\{B \in \mathcal F: \exists B_0 \in \mathcal B(\mathbb R) \textup{ such that } B= \{\omega \in \Omega: \omega(0) \in B_0\}\}$ and $\mathbb P_H $ corresponds to $ \delta_{x_0}$ in the sense that, for any $B \in \mathcal F$, $\mathbb P_H(B) = \delta_{x_0}(B_0)$ with $B_0=\{\omega(0) \in \mathbb R: \,\, \omega \in B\}$.
\end{remark}
\begin{definition}[Existence]
We say that the martingale problem related to $\mathcal A$, $\mathcal D_{\mathcal A}$ and $x_0$ meets existence if there exists a couple $(X,\mathbb P)$ on some $(\Omega, \mathcal F)$, solution to the martingale problem in the sense of Definition \ref{D:mtpb1}.
\end{definition}
\begin{definition}[Uniqueness]
We say that the martingale problem related to $\mathcal A$, $\mathcal D_{\mathcal A}$ and $x_0$ admits uniqueness if,
given two spaces $(\Omega_1, \mathcal F_1)$ and $(\Omega_2, \mathcal F_2)$, and two solutions $(X_1, \mathbb P_1)$ and $(X_2, \mathbb P_2)$ to the martingale problem in the sense of Definition \ref{D:mtpb1}, then $\mathcal L(X_1)|_{\mathbb P_1} = \mathcal L(X_2)|_{\mathbb P_2}$.
\end{definition}
\begin{remark}\label{3.2}
Let $(\check \Omega, \check {\mathcal F})$ be the canonical space, and $\mathbb F$ be the canonical filtration of the canonical process $\check X$.
$(X, \mathbb P)$ is a solution to the martingale problem related to $\mathcal A$, $\mathcal D_A$ and $x_0$,
if and only if $(\check X, \mathcal L(X))$ is a solution to the martingale problem related to $\mathcal A$, $\mathcal D_A$ and $x_0$.
\end{remark}
\subsection{Time-homogeneous towards time-inhomogeneous martingale problem}\label{S:mtpb_hom}
\begin{definition}\label{D:mtpb_hom}
Let $\mathcal D_{\mathcal L}\subseteq C^0_b$ and $\mathcal L: \mathcal D_{\mathcal L} \rightarrow C^{NA}(D_{-}(0,\,T); B(0,T))$ be a linear map.
We say that $(X, \mathbb P)$ fulfills the time-homogeneous martingale problem with respect to $\mathcal {D}_{\mathcal L}$, $\mathcal L$ and $x_0$, if for any $f \in \mathcal {D}_{\mathcal L}$ and bounded,
the process
\begin{align}\label{mtg_pb_timehom}
M^f_t := f(X_{t}) - f(x_0) - \int_0^{t} (\mathcal{L} f)(s,X^{-}) ds
\end{align}
is an $(\mathcal F^X_t)$-local martingale under $\mathbb P$.
From here on we adopt this notation.
\end{definition}
Let $C_{BUC}^{NA}(D_{-}(0,\,T); B(0,T))$ be the set of functions $G \in C^{NA}(D_{-}(0,\,T); B(0,T))$ bounded and uniformly continuous on closed balls $B_M \subset D_{-}(0,T)$ of radius $M$.
$C_{BUC}^{NA}(D_{-}(0,\,T); B(0,T))$ is a Fr\'echet space equipped with the distance generated by the seminorms
$$
\sup_{ \eta \in B_M} ||G(\eta)||_\infty, \quad M \in \mathbb N.
$$
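A standard way to turn such a countable family of seminorms into a metric (one possible choice, not prescribed by the text) is $d(G,H) = \sum_M 2^{-M}\, p_M/(1+p_M)$; a minimal sketch, with names of our own choosing:

```python
def frechet_distance(seminorm_values):
    # seminorm_values: the sequence p_M(G - H), M = 1, 2, ..., where
    # p_M(G) = sup_{eta in B_M} ||G(eta)||_infty as in the text (truncated
    # here to finitely many M for the sketch).
    # Standard metric generated by a countable family of seminorms:
    #   d(G, H) = sum_M 2^{-M} p_M / (1 + p_M).
    return sum(2.0 ** -(M + 1) * p / (1.0 + p)
               for M, p in enumerate(seminorm_values))
```

The weights $2^{-M}$ make the sum converge and bound the distance by $1$, while $d(G,H)=0$ forces every seminorm of $G-H$ to vanish.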
\begin{theorem}\label{T:passage}
Assume that $D_{-}(0,\,T)$ is equipped with the metric topology of the uniform convergence on closed balls.
Let $\mathcal D_{\mathcal L}$ be a Fr\'echet space, topologically included in $C^0_b$, and equipped with some metric $d_{\mathcal L}$. Let
$\mathcal L: \mathcal D_{\mathcal L} \rightarrow C^{NA}_{BUC}(D_{-}(0,\,T); B(0,T))$ be a linear continuous map.
$(X, \mathbb P)$ fulfills the time-homogeneous martingale problem in Definition \ref{D:mtpb_hom} with respect to ${\mathcal D}_{\mathcal L}$, $\mathcal L$ and $x_0$,
if and only if $(X, \mathbb P)$ fulfills the time-inhomogeneous martingale problem in Definition \ref{D:mtpb1} with respect to $x_0$,
\begin{equation}\label{D:DAnew}
{\mathcal D}_{\mathcal A}:=C^1([0,\,T]; {\mathcal D}_{\mathcal L}),
\end{equation}
and
\begin{equation}\label{Adistrib1}
(\mathcal{A} v)(dt,\eta):= (\partial_t v(t,\eta)+ (\mathcal L v(t, \cdot))(t,\eta))d t, \quad v \in \mathcal D_{\mathcal A}, \,\,\eta \in D_{-}(0,\,T).
\end{equation}
\end{theorem}
\begin{remark}
\begin{itemize}
\item[(i)]
A typical example of metric $d_{\cal L}$ comes from the graph topology.
Let $\mathcal D_{\mathcal L} \subseteq C^0_b$, and $\mathcal L: \mathcal D_{\mathcal L} \rightarrow C^{NA}_{BUC}(D_{-}(0,\,T); B(0,T))$ be a measurable map. Assume that $\mathcal D_{\mathcal L}$ is equipped with the graph topology of $\mathcal L$: $v_n \rightarrow 0$ in $\mathcal D_{\mathcal L}$ if and only if $v_n \rightarrow 0$ in $C_b^0$ and $\mathcal L v_n \rightarrow 0$ in $C^{NA}_{BUC}(D_{-}(0,\,T); B(0,T))$. Then $\mathcal L$ is obviously a continuous map.
\item [(ii)] ${\mathcal D}_{\mathcal A}$ in \eqref{D:DAnew} consists of bounded functions whose time derivative is also bounded.
\item [(iii)]
Since $\mathcal L$ is continuous, for every closed ball $B_M \subset D_{-}(0,T)$ of radius $M$ there exists a constant $C$ (depending on $M$ and $v$) such that
$$
\sup_{t \in [0,T]}\sup_{\eta \in B_M}||(\mathcal L v(t, \cdot))(t,\eta)||_{\infty} \leq C.
$$
\end{itemize}
\end{remark}
\proof
The \emph{if} implication is trivial: it suffices to choose $v$ not depending on time.
Let us now prove the \emph{only if} implication. We need to show that, for every $v \in {\mathcal D}_{\mathcal A}$,
\begin{align}\label{mtgpb_L}
M^v_t :=v(t, X_t) -v(0,x_0)- \int_0^t \left(\partial_s v(s, X_s) + (\mathcal{L} v(s, \cdot))(s,X^{-})\right)ds
\end{align}
is a local martingale.
If $v = f \in \mathcal D_{\mathcal L}$ then \eqref{mtgpb_L} is obviously a local martingale. Suppose that $v(t,x)= a(t) f(x)$, with $a \in C^1(0,\,T)$, and $f \in \mathcal D_{\mathcal L}$. By integrating by parts, we get
\begin{align}\label{intbyparts}
v(t, X_t)&= a(0) f(x_0) + \int_0^t a'(s) f(X_s) ds + \int_0^t a(s)df(X_s) \notag\\
&=a(0) f(x_0) + \int_0^t a'(s) f(X_s)ds + \int_0^t a(s)(\mathcal L f)(s,X^{-})ds+ \int_0^t a(s)dM_s^f\notag\\
&=v(0, x_0) + \int_0^t (\mathcal A v)(ds, X^{-}) + M^v_t,
\end{align}
where $M^v$ is a local martingale.
So \eqref{mtgpb_L} is a local martingale when $v \in \hat {\mathcal D}_{\mathcal A}$, with $\hat {\mathcal D}_{\mathcal A}$ the set in Lemma \ref{L:hatD}.
Let now $v \in \mathcal D_{\mathcal A}$. By Lemma \ref{L:hatD}, there is a
sequence $(v_n) \subset \hat {\mathcal D}_{\mathcal A}$
such that $v_n \underset{n \rightarrow \infty}{\rightarrow} v$ with respect to $C^1([0,\,T]; \mathcal D_{\mathcal L})$.
By \eqref{intbyparts} we have that
\begin{align}\label{uneq}
v_n(t, X_t)=v_n(0, x_0) + \int_0^t (\partial_s v_n(s, X_{s-})+ (\mathcal L v_n(s, \cdot))(s,X^{-}))ds + M^{v_n}_t,
\end{align}
where $M^{v_n}$ is a local martingale.
Since the maps $(t, \eta) \mapsto (\mathcal L v_n(t, \cdot))(t, \eta)$
converge to $(t, \eta) \mapsto (\mathcal L v(t, \cdot))(t, \eta)$
uniformly on compacts in $[0,T] \times D_{-}(0,T)$, and $\partial_s v_n \underset{n \rightarrow \infty}{\rightarrow} \partial_s v$ in $C([0,T] \times \mathbb R)$, it follows that $M^{v_n}\underset{n \rightarrow \infty}{\rightarrow} M^{v}$ u.c.p.
It remains to show that $M^v$ is a local martingale.
Since $X_{s-}$ is a c\`agl\`ad process, it is locally bounded, see Remark \ref{R:pred}-1. So, for every $\ell >0$, we define
$\tau^\ell := \inf\{t \in [0,T] :\,\,|X_{t-}| \geq \ell\},
$
with the usual convention that $\inf \emptyset = +\infty$. Clearly, $\tau^\ell \uparrow + \infty$ a.s.
Then, on $\Omega_\ell:= \{\tau^\ell \leq T\}$,
$\sup_{s \leq T}|X_{(s \wedge \tau^\ell)^-}|\leq \ell$ a.s.,
and
$\sup_{s \leq T}||(X^{^-})^{(s \wedge \tau^\ell)}||_\infty\leq \ell$ a.s.
It is thus enough to show that, for every $\ell$, $(M^v)^{\tau^\ell}$ is a martingale.
Now, by \eqref{uneq},
\begin{align*}
(M^{v_n})^{\tau^\ell}_t = v_n(t\wedge \tau^\ell, X_{t\wedge \tau^\ell})-v_n(0, x_0) - \int_0^{t \wedge \tau^\ell}(\partial_s v_n(s, X_{s-})+(\mathcal L v_n(s, \cdot))(s,X^{-}))ds,
\end{align*}
so
\begin{align}\label{numR}
\sup_{t \leq T}|(M^{v_n})^{\tau^\ell}_t|&\leq 2\sup_{n}\,\sup_{t \in [0,T],\,|x|\leq \ell}|v_n(t,x)|+ T\sup_{n}\,\sup_{t \in [0,T],\,|x|\leq \ell}|(\partial_t v_n)(t,x)|\notag\\
&+ T \sup_{n}\,\sup_{t \in [0,T],\,||\eta||_\infty\leq \ell} |(\mathcal L v_n(t, \cdot))(t,\eta)|=:R,
\end{align}
where we recall that $C^{NA}_{BUC}(D_{-}(0,\,T); B(0,T))$ is equipped with the metric topology of the uniform convergence on closed balls.
On the other hand, recalling that $M^{v_n}\rightarrow M^{v}$ u.c.p. and therefore $(M^{v_n})^{\tau^\ell}\rightarrow (M^{v})^{\tau^\ell}$ u.c.p., by \eqref{numR} we have in particular that
$\sup_{t \leq T}|(M^{v})^{\tau^\ell}_t|\leq R$.
It follows that $(M^v)^{\tau^\ell}$ is a martingale.
This concludes the proof.
\endproof
The following is a consequence of Theorem \ref{T:passage} in the Markovian context.
\begin{theorem}\label{R:passage}
Let $\mathcal D_{\mathcal L_M}$ be a Fr\'echet space, topologically included in $C^0_b$, and equipped with some metric $d_{\mathcal L_M}$. Let
$\mathcal L_M: \mathcal D_{\mathcal L_M} \rightarrow C^0$ be a continuous map.
Set $\mathcal D_{\mathcal L}:=\mathcal D_{\mathcal L_M}$ equipped with the same topology of $\mathcal D_{\mathcal L_M}$,
\begin{equation}\label{D:DAnewMark}
{\mathcal D}_{\mathcal A}:=C^1([0,\,T]; {\mathcal D}_{\mathcal L_M})
\end{equation}
and, for every $f \in \mathcal D_{\mathcal L_M}$, $v \in {\mathcal D}_{\mathcal A}$, $\eta \in D_{-}(0,\,T)$,
\begin{align}
(\mathcal L f) (t,\eta) := (\mathcal L_M f)( \eta_t), \quad
(\mathcal A v) (dt,\eta) :=
(\partial_t v(t,\eta_t) +(\mathcal L_M v(t,\cdot))(\eta_t))d t.\label{4.16}
\end{align}
Then the following are equivalent.
\begin{itemize}
\item [(i)]
$(X, \mathbb P)$ fulfills the time-homogeneous martingale problem in Definition \ref{D:mtpb_hom} with respect to ${\mathcal D}_{\mathcal L}$, $\mathcal L$ and $x_0$.
\item [(ii)] For any $f \in \mathcal D_{{\mathcal L}_M}$,
\begin{align}\label{mtg_pb}
M^f &:= f(X_{\cdot}) - f(x_0) - \int_0^{\cdot} {\mathcal L}_M f(X_{s}) ds
\end{align}
is an $(\mathcal F_t^X)$-local martingale under $\mathbb P$.
\item[(iii)]
$(X, \mathbb P)$ fulfills the time-inhomogeneous martingale problem in Definition \ref{D:mtpb1} with respect to $\mathcal D_{\mathcal A}$, $\mathcal A$ and $x_0$.
\end{itemize}
\end{theorem}
\proof
By
\cite{BandiniRusso_DistrDrift}, $\mathcal L: \mathcal D_{\mathcal L} \rightarrow C^{NA}_{BUC}(D_{-}(0,\,T); B(0,T))$ is continuous, where $C^{NA}_{BUC}(D_{-}(0,\,T); B(0,T))$ is equipped with the topology of uniform convergence on closed balls.
Items (i) and (ii) are equivalent by construction.
Items (i) and (iii) are equivalent by Theorem \ref{T:passage}.
\endproof
\subsection{Weak Dirichlet characterization of the solutions to the martingale problem}
In the sequel we show that the stochastic calculus framework of Section \ref{S:4.1} fits the concrete examples of martingale problems.
Suppose that $(X,\mathbb P)$ is a solution to the martingale problem in Definition \ref{D:mtpb1} related to $\mathcal A$, $\mathcal {D}_{\mathcal A}$, and $x_0$.
First of all, we set $\mathcal D^S= \mathcal D_{\mathcal A}$. By definition of the martingale problem, for every bounded $v \in \mathcal D^{S}$, $v(\cdot, X)$ is a special semimartingale. In addition, by Proposition \ref{R:th_chainrule}, Hypothesis \ref{H:3.7-3.8} holds true since $v\in C^{0,1}([0,T] \times \mathbb R)$. We can therefore apply Theorem \ref{T:2.11bis}, which provides the decomposition \eqref{Spec_weak_chainrule1} and $\Gamma(v)$ in \eqref{GammaY}. By the uniqueness of the decomposition of a special semimartingale, we get that
\begin{equation}\label{Gammav}
\Gamma(v) = \int_0^\cdot (\mathcal A v)(ds, X^{-})
\end{equation}
and
\begin{equation}\label{Gammavk}
\Gamma^k(v) =\int_0^\cdot (\mathcal A v)(ds, X^{-}) - (v(s, X_{s-}+x)-v(s, X_{s-})) \frac{x - k(x)}{x}\star \nu^{X, \mathbb P}.
\end{equation}
Taking into account Remark \ref{R:3items}-1., we get in particular that, for every $v \in \mathcal D_{\mathcal A}$,
\begin{equation}\label{rel_jumps}
(\mathcal{A} v)(\Delta t,X^{-}) = \int_{\mathbb R}(v(t, X_{t-}+x)-v(t, X_{t-})) \, \nu^{X, \mathbb P}(\{t\} \times dx), \quad t \in [0,\,T].
\end{equation}
\begin{corollary}\label{C:3.30bis}
Let $\mathbb P$ be a probability on $(\Omega, \mathcal F)$, and $X$ be a c\`adl\`ag process with weakly finite quadratic variation.
Let $ \mathcal {D}_{\mathcal A}$ be a dense subset of $C^{0,1}([0,T] \times \mathbb R)$, and
$\mathcal{A} $ be the operator in Definition \ref{D:newA}.
For every $v \in \mathcal {D}_{\mathcal A}$, set $Y^v_t=v(t, X_t)$, and denote by $Y^{v,c}$ its unique continuous martingale component. If $(X, \mathbb P)$ is a solution to the martingale problem in Definition \ref{D:mtpb1} related to $\mathcal D_{\mathcal A}$, $\mathcal A$ and $x_0$,
then the following are equivalent.
\begin{itemize}
\item [(i)]
$v \mapsto Y^{v,c}$, $ \mathcal {D}_{\mathcal A}\subseteq C^{0,1}([0,T] \times \mathbb R)\rightarrow \mathbb D^{ucp}$, is continuous in zero.
\item[(ii)] The map
$$
v \mapsto \int_0^\cdot (\mathcal A v)(ds, X^{-}) - (v(s, X_{s-}+x)-v(s, X_{s-})) \frac{x - k(x)}{x}\star \nu^{X, \mathbb P}
$$
is continuous in zero with respect to the $C^{0,1}$-topology.
\item [(iii)] $X$ is a weak Dirichlet process.
\end{itemize}
\end{corollary}
\proof
The equivalence of items (i) and (iii) comes from Theorem \ref{T:new4.4}.
The equivalence of items (i) and (ii) follows from Proposition \ref{T:3.30_bis} together with \eqref{Gammavk}.
\endproof
\begin{corollary}\label{C:3.34_bis}
Let $ \mathcal {D}_{\mathcal A}$ be a dense subset of $C^{0,1}([0,T] \times \mathbb R)$, and
$\mathcal{A} $ be the operator in Definition \ref{D:newA}.
Let $(X, \mathbb P)$ be a solution to the martingale problem in Definition \ref{D:mtpb1} with respect to $\mathcal D_{\mathcal A}$, $\mathcal A$, and $x_0 \in \mathbb R$.
Then, for every bounded $v \in \mathcal {D}_{\mathcal A}$, the unique martingale part $N$ of the special semimartingale $Y_t:=v(t, X_t)$ is given by \begin{align}\label{bracketN}
\langle
N,
N\rangle
&= \int_0^\cdot (\mathcal A v^2)(ds,X^{-})-2 \int_0^\cdot v(s, X_{s-}) (\mathcal A v)(ds,X^{-})
- \sum_{0 \leq s \leq \cdot} \left|(\mathcal{A} v)(\Delta s,X^{-})
\right|^2.
\end{align}
Moreover,
\begin{align}\label{Ycn_sigmaA}
\langle Y^{v,c}, Y^{v,c} \rangle &= \int_0^\cdot (\mathcal A v^2)(ds,X^{-})-2 \int_0^\cdot v(s, X_{s-}) (\mathcal A v)(ds,X^{-}) - [v(s, X_{s-}+x)-v(s, X_{s-})]^2 \star \nu^{X,\mathbb P},
\end{align}
where $Y^{v,c}$ is the unique continuous martingale part of $Y^v=v(\cdot, X)$, and
$(\mathcal A v)(\Delta s,\cdot)$
is introduced in \eqref{newAjump}.
\end{corollary}
\proof
The result follows from Proposition \ref{P:3.34} together with \eqref{Gammav}.
\endproof
\subsection{Examples of martingale problems}
\label{SSExamples}
\subsubsection{Semimartingales}
\label{E:4.18}
Let $X$ be an adapted c\`adl\`ag real semimartingale with characteristics $(B^k, C, \nu)$, with decomposition $\nu(\omega, ds\,dx) = \phi_s(\omega, dx)d\chi_s(\omega)$. For further notations we refer to Section \ref{A:D}.
$X$ verifies condition \eqref{int_small_jumps}, since it is a semimartingale, see Remark \ref{R:equiv}.
Then $(X, \mathbb P)$ is a solution of the martingale problem in Definition \ref{D:mtpb1} with respect to $\mathcal A$, $\mathcal D_{\mathcal A}$ and $x_0$, with $\mathcal D_{\mathcal A}$ the set of functions in $C^{1,2}_b$ restricted to $[0,T] \times \mathbb R$ and, for every $v \in \mathcal D_{\mathcal A}$,
\begin{align*}
(\mathcal A v)(ds, \eta)&:= \partial_s v(s, \eta_s)ds + \frac{1}{2} \partial_{xx}^2 v(s, \eta_s) \,d(C\circ \eta)_s + \partial_x v(s, \eta_s) \,d (B^{k}\circ \eta)_s\notag\\
&+ \int_{\mathbb R} (v(s, \eta_s + x) -v(s, \eta_s)-k(x)\,\partial_x v(s, \eta_s))\,\phi_s(\eta, dx)d\chi_s(\eta), \quad \eta \in D_{-}(0,\,T).
\end{align*}
This follows by Remark \ref{R:D2}.
\subsubsection{Weak Dirichlet processes derived from semimartingales}
\label{E:4.17}
Let $X$ be a weak Dirichlet process with characteristics $(B^k, C, \nu)$, with $\nu(\omega, ds\,dx) = \phi_s(\omega, dx)d\chi_s(\omega)$. For further notations we refer to Section \ref{S:genchar}.
Assume that there exists $h\in C^{0,1}$, with $h(t, \cdot)$ bijective, such that $Y_t := h(t, X_t)$ is a semimartingale with characteristics $(\bar B^k, \bar C, \bar \nu)$. Since $h^{-1} \in C^{0,1}$, we have
$$
\Delta X_s = h^{-1}(s, Y_{s-} + \Delta Y_s)-h^{-1}(s, Y_{s-})= \Delta Y_s \int_0^1 \partial_x h^{-1}(s, Y_{s-} + a \Delta Y_s)da,
$$
so condition \eqref{int_small_jumps} for $Y$ (see previous subsection) implies the same for $X$.
Then by Example \ref{E:4.18}, $(X, \mathbb P)$ is a solution of the martingale problem in Definition \ref{D:mtpb1} with respect to $\mathcal A$, $\mathcal D_{\mathcal A}$ and $x_0$, with $\mathcal D_{\mathcal A}$ the set of functions $v \in C^{0,1}_b$ such that $v \circ h^{-1} \in C^{1,2}$, restricted to $[0,T] \times \mathbb R$, and, for every $v \in \mathcal D_{\mathcal A}$,
\begin{align*}
&(\mathcal A v)(ds, \gamma)\\
&= \partial_s (v \circ h^{-1})(s, h(s,\gamma_s))ds + \frac{1}{2} \partial_{xx}^2 (v \circ h^{-1})(s, h(s,\gamma_s))\,(\partial_x h(s,\gamma_s))^2\,d(C \circ \gamma)_s\\
& + \partial_x (v \circ h^{-1})(s, h(s,\gamma_s)) \,d (\bar B^{k} \circ h(\cdot, \gamma))_s\notag\\
&+\int_{\mathbb R}(v(s, \gamma_s +x) -v (s, \gamma_s)-k(h(s, \gamma_s +x)-h(s, \gamma_s))\,\partial_x h^{-1} (s, h(s,\gamma_s))\,\partial_x v(s, \gamma_s))\, \phi_s(\gamma, dx)d\chi_s(\gamma)
\end{align*}
for every $\gamma \in D_{-}(0,\,T)$.
\subsubsection{Discontinuous Markov processes with distributional drift}\label{S:433}
In \cite{BandiniRusso_DistrDrift} we study existence and uniqueness for a time-homogeneous martingale problem with distributional drift in a discontinuous Markovian framework.
In this section, we will consider a fixed $\alpha \in [0,1]$. If $\alpha \in ]0,1[$,
$C^\alpha_{\textup{loc}}$ denotes the space of functions locally $\alpha$-H\"older continuous.
By $C^{1+\alpha}_{\textup{loc}}$ we will denote the functions in $C^1$ whose derivative is $\alpha$-H\"older continuous.
For convenience, we set $C^0_{\textup{loc}}:=C^0$, $C^1_{\textup{loc}}:=C^1$, $C^2_{\textup{loc}}:=C^2$.
Let $k \in \mathcal K$ be a continuous function. Let $\beta=\beta^k: \mathbb R \rightarrow \mathbb R$ and $\sigma: \mathbb R \rightarrow \mathbb R$ be continuous functions, with $\sigma$ not vanishing at zero. We consider formally the PDE operator of the type
\begin{equation}\label{Lbeta}
L
\psi = \frac{1}{2}\sigma^2 \psi'' + \beta' \psi'
\end{equation}
in the sense introduced by \cite{frw1, frw2}.
For a mollifier $\phi \in \mathcal S(\mathbb R)$ with $\int_\mathbb R \phi(x) dx =1$, we set
$$
\phi_n(x) := n \,\phi(n x), \quad \beta'_n := \beta' \ast \phi_n, \quad \sigma_n := \sigma \ast \phi_n.
$$
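For concreteness (this particular choice is only an illustration; any $\phi \in \mathcal S(\mathbb R)$ with unit integral works), one may take the Gaussian mollifier
\begin{equation*}
\phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}, \qquad \phi_n(x) = \frac{n}{\sqrt{2\pi}}\, e^{-n^2 x^2/2},
\end{equation*}
so that $\phi_n$ converges weakly to the Dirac mass at zero and $\beta'_n$, $\sigma_n$ are smooth regularizations of $\beta'$ and $\sigma$.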
\begin{hypothesis}\label{H:Sigma}
\begin{enumerate}
\item We assume the existence of the function
\begin{equation}\label{Sigma}
\Sigma(x) := \lim_{n \rightarrow \infty} 2 \int_0^x \frac{\beta'_n}{\sigma_n^2} (y) dy
\end{equation}
in $C^0$, independently of the mollifier.
\item The function $\Sigma$ in \eqref{Sigma}
is upper and lower bounded, and belongs to $C^{\alpha}_{\textup{loc}}$.
\end{enumerate}
\end{hypothesis}
The following proposition and definition are given in \cite{frw1}, see respectively Proposition 2.3 and the definition on page 497.
\begin{proposition}
\label{P:equiv} Hypothesis \ref{H:Sigma}-1 is equivalent to asking that
there is a solution $h \in C^1$ to $L h=0$ such that $h(0) =0$ and
\begin{equation}\label{h'}
h'(x) := e^{-\Sigma(x)}, \quad x \in \mathbb R.
\end{equation}
In particular, $h'(0) =1$, and $h'$ is strictly positive, so
the inverse function $h^{-1}: \mathbb R \rightarrow \mathbb R$ is well defined.
\end{proposition}
Let $\mathcal D_{L}$ be the set of functions $f\in C^1$ such that there is $\phi \in C^1$ with
\begin{equation}\label{Lf_bis}
f' = e^{-\Sigma} \phi.
\end{equation}
For any $f \in \mathcal D_{L}$, we set
\begin{equation}\label{Lf}
L f= \frac{\sigma^2}{2} (e^{\Sigma} f')' e^{-\Sigma}.
\end{equation}
This defines $L : \mathcal D_{L} \subset C^1 \rightarrow C^0$ without ambiguity.
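As a consistency check (a formal computation, valid when $\beta'$ is a genuine continuous function, $\sigma$ does not vanish and $\Sigma' = 2\beta'/\sigma^2$ classically), \eqref{Lf} reduces to the formal operator \eqref{Lbeta}: for $f \in C^2$,
\begin{equation*}
L f = \frac{\sigma^2}{2}\, e^{-\Sigma}\bigl(e^{\Sigma} f'\bigr)' = \frac{\sigma^2}{2}\bigl(f'' + \Sigma' f'\bigr) = \frac{\sigma^2}{2} f'' + \beta' f'.
\end{equation*}
Moreover, the function $h$ of Proposition \ref{P:equiv} satisfies $h' = e^{-\Sigma}$, i.e. $\phi \equiv 1$ in \eqref{Lf_bis}, so $h \in \mathcal D_L$ and \eqref{Lf} gives $L h = \frac{\sigma^2}{2}(e^{\Sigma} e^{-\Sigma})'\, e^{-\Sigma} = 0$, in agreement with $Lh = 0$.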
We also define
\begin{align}
\mathcal{D}_{\mathcal L_M}&:=
\mathcal{D}_{L} \cap C^{1+ \alpha}_{\textup{loc}} \cap C_b^0,
\label{barD}
\end{align}
equipped with the graph topology of $L
$, the natural topology of $C^{1+ \alpha}_{\textup{loc}}$ and the uniform convergence topology.
Then we consider
a transition kernel $Q(\cdot,dx)$ from
$(\mathbb R, \mathcal{B} (\mathbb R))$ into $(\mathbb R,\mathcal{B} (\mathbb R))$, with $Q(y,\{0\})=0$,
satisfying the following condition.
\begin{hypothesis}\label{H:Kmeas}
For all
$B \in \mathcal B(\mathbb R)$, the map
$
y \mapsto \int_{B} (1 \wedge |x|^{1+\alpha}) \, Q(y,dx)
$
is bounded and the measure-valued map
$y \mapsto (1 \wedge |x|^{1+\alpha}) \, Q(y,dx)=: \tilde Q(y, dx)$
is continuous in the total variation topology.
\end{hypothesis}
For every $f\in \mathcal{D}_{\mathcal L_M}$, we finally introduce the operator
\begin{equation}\label{Ldistrib}
\mathcal L_M f(y) := L f(y)
+ \int_{\mathbb R \setminus \{0\}} (f(y + x) -f(y)
-k(x)f'(y)) Q(y,dx).
\end{equation}
Under Hypothesis \ref{H:Kmeas}, the operator above takes values in
$C^0$, see \cite{BandiniRusso_DistrDrift}.
In \cite{BandiniRusso_DistrDrift} we study the following Markovian martingale problem.
\begin{definition}\label{D:mtpb_hom_Mark}
We say that $(X, \mathbb P)$ fulfills the time-homogeneous Markovian martingale problem with respect to $\mathcal {D}_{\mathcal L_M}$ in \eqref{barD}, $\mathcal L_M$ in \eqref{Ldistrib} and $x_0 \in \mathbb R$, if for any $f \in \mathcal {D}_{\mathcal L_M}$,
the process
\begin{align*}
f(X_{\cdot}) - f(x_0) - \int_0^{\cdot} \mathcal{L}_{M} f(X_{s}) ds
\end{align*}
is an $(\mathcal F^X_t)$-local martingale under $\mathbb P$.
\end{definition}
\begin{remark}\label{R:twomartgpb}
Under Hypotheses \ref{H:Sigma} and \ref{H:Kmeas}, in \cite{BandiniRusso_DistrDrift} we provide existence and uniqueness for the Markovian martingale problem in Definition \ref{D:mtpb_hom_Mark}.
Moreover, the solution $(X, \mathbb P)$ is a finite quadratic variation process, with $\nu^{X,\mathbb P}(ds\, dx)= Q(X_{s-},dx)ds$.
\end{remark}
We can state the following theorem.
\begin{theorem}\label{T:thmpassage}
Assume that Hypotheses \ref{H:Sigma} and \ref{H:Kmeas} hold true. If $(X, \mathbb P)$ is a solution to the martingale problem in Definition \ref{D:mtpb_hom_Mark}, then it is a solution to the martingale problem in Definition \ref{D:mtpb1} with respect to $x_0$,
\begin{equation}\label{DAm}
\mathcal D_{\mathcal A}:=C^1([0,\,T]; {\mathcal D}_{\mathcal L_M}),
\end{equation}
and
\begin{align}\label{Adistrib2}
&(\mathcal{A} v)(dt,\eta):= \partial_t v(t,\eta_{t})dt\\
&+L v(t,\eta_{t})dt
+ \int_{\mathbb R} (v(t,\eta_{t} + x) -v(t,\eta_{t})
-k(x)\partial_y v(t,\eta_{t})) Q(\eta_{t},dx)d t, \quad v \in \mathcal D_{\mathcal A},\,\,\eta \in D_{-}(0,\,T).\notag
\end{align}
Moreover, uniqueness holds for that martingale problem.
\end{theorem}
\begin{remark}\label{R:densityDA}
By \cite{BandiniRusso_DistrDrift}, $\mathcal D_{\mathcal L_M}$ in \eqref{barD} is dense in $C^1$.
Then, by Lemma \ref{L:density},
$\mathcal D_{\mathcal A}
$ in \eqref{DAm}
is dense in $C^{0,1}([0,T] \times \mathbb R)$.
\end{remark}
\noindent\emph{Proof of Theorem \ref{T:thmpassage}.}
A solution $(X, \mathbb P)$ to the martingale problem in Definition \ref{D:mtpb_hom_Mark} exists by Remark \ref{R:twomartgpb}.
By the equivalence (ii)-(iii) in Theorem \ref{R:passage}, $(X, \mathbb P)$ fulfills the time-inhomogeneous martingale problem in Definition \ref{D:mtpb1} with respect to $x_0$, $\mathcal D_{\mathcal A}$ in \eqref{DAm} and $\mathcal A$ in \eqref{Adistrib2}.
Concerning uniqueness, given a solution $(X, \mathbb P)$ to the martingale problem of Definition \ref{D:mtpb1}, we know that, for every $v \in \mathcal D_{\mathcal A}$, $M^v$ is a local martingale. In particular this holds for every $v$ not depending on time, which implies that $(X, \mathbb P)$ also solves the time-homogeneous martingale problem in the sense of Definition \ref{D:mtpb_hom_Mark}, for which uniqueness holds, see Remark \ref{R:twomartgpb}.
{
\hbox{\enspace${ \mathchoice\sqr54\sqr54\sqr{4.1}3\sqr{3.5}3}$}}
Below we discuss some other properties of the solution to our martingale problem.
We evaluate first the quadratic variation of the martingale component of a function of $X$ belonging to $\mathcal D_{\mathcal{A}}$. We can in particular apply
Corollary \ref{C:3.34_bis} to the case of the operator $\mathcal{A}$ in \eqref{Adistrib2}.
\begin{proposition}\label{L:418}
Let $(X, \mathbb P)$ be a solution to the martingale problem in Definition \ref{D:mtpb_hom_Mark}.
Then, for every $v \in \mathcal D_{\mathcal A}$ in \eqref{DAm} we have
\begin{equation}\label{Ycn_sigma}
\langle Y^{v,c}, Y^{v,c} \rangle_t = \int_0^t \sigma^2(X_s) (\partial_x v(s, X_s))^2 ds,
\end{equation}
where $Y^{v, c}_t$ denotes the unique continuous martingale part of $v(t, X_t)$.
In particular, $v \mapsto Y^{v,c}$, $\mathcal D_{\mathcal A} \subseteq C^{0,1}([0,T] \times \mathbb R)\rightarrow \mathbb D^{ucp}$, is continuous in zero.
\end{proposition}
\noindent \emph{Proof of Proposition \ref{L:418}.}
By Theorem \ref{T:thmpassage}, $(X, \mathbb P)$ is a solution to the martingale problem in Definition \ref{D:mtpb1} with respect to $x_0$, $\mathcal D_{\mathcal A}$ in \eqref{DAm} and $\mathcal A$ in \eqref{Adistrib2}.
Assume that \eqref{Ycn_sigma} holds.
Let $v_n \in \mathcal D_{\mathcal A}$ such that $v_n \rightarrow 0$ in $C^{0,1}([0,T] \times \mathbb R)$. By \eqref{Ycn_sigma}, $\langle Y^{v_n,c}, Y^{v_n,c} \rangle_T$ converges to zero in probability.
Then, by Problem 5.25 in \cite{ks}, Chapter 1,
it follows that,
\begin{equation}\label{supYfn}
Y^{v_n, c} \rightarrow 0 \quad \textup{u.c.p.}\,\,\textup{as}\,\,n \rightarrow \infty.
\end{equation}
It remains to prove \eqref{Ycn_sigma}.
Take $v \in \mathcal D_{\mathcal A}$. Then formula
\eqref{Ycn_sigmaA}
in Corollary \ref{C:3.34_bis} applied to $\mathcal A$ in \eqref{Adistrib2} (taking into account Remark \ref{R:twomartgpb})
yields
\begin{align*}
\langle Y^{v,c}, Y^{v,c} \rangle_t &= \int_0^t (\mathcal A v^2 - 2 v \mathcal A v)(ds,X^{-}) - \int_{]0,t]\times \mathbb R}[v(s,X_{s-}+x)-v(s,X_{s-})]^2 Q(X_{s-},dx)ds\\
&= \int_0^t(\partial_s v^2(s,X_s)- 2 v(s,X_s) \partial_s v(s,X_s))ds+\int_0^t (L v^2 - 2 v L v)(s,X_s) ds\\
&+ \int_{]0,t]\times \mathbb R} (v^2(s,X_{s-} + x) -v^2(s,X_{s-})-2 k(x)\, v(s,X_{s-})\, \partial_x v(s,X_{s-})\,) Q(X_{s-},dx)ds\\
&- 2\int_{]0,t]\times \mathbb R}v(s,X_s) (v(s,X_{s-} + x) -v(s,X_{s-})-k(x)\,\partial_x v(s,X_{s-})\,) Q(X_{s-},dx)ds\\
& - \int_{]0,t]\times \mathbb R}[v(s,X_{s-}+x)-v(s, X_{s-})]^2 Q(X_{s-},dx)ds\\
&= \int_0^t (L v^2 - 2 v L v)(s,X_s) ds = \int_0^t \sigma^2(X_s) (\partial_x v(s,X_s))^2 ds,
\end{align*}
where the latter equality follows from the fact that $L v^2 = 2 v L v + (\sigma \partial_x v)^2$, see
Proposition 2.10 in \cite{frw1}.
{
\hbox{\enspace${ \mathchoice\sqr54\sqr54\sqr{4.1}3\sqr{3.5}3}$}}
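The identity $L v^2 = 2 v\, L v + (\sigma\, \partial_x v)^2$ can also be verified directly in the smooth case (a formal check, assuming $\beta'$ is a genuine function, so that $L$ acts as in \eqref{Lbeta}): for fixed $s$,
\begin{align*}
L v^2 &= \frac{\sigma^2}{2}\bigl(2 (\partial_x v)^2 + 2 v\, \partial^2_{xx} v\bigr) + 2 \beta'\, v\, \partial_x v\\
&= 2 v \Bigl(\frac{\sigma^2}{2}\, \partial^2_{xx} v + \beta'\, \partial_x v\Bigr) + \sigma^2 (\partial_x v)^2 = 2 v\, L v + (\sigma\, \partial_x v)^2.
\end{align*}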
\begin{corollary}\label{C:newcor}
Let $(X, \mathbb P)$ be a solution to the martingale problem in Definition \ref{D:mtpb_hom_Mark}.
\begin{itemize}
\item [(i)]For every $v \in C^{0,1}$, $Y^v:=v(\cdot, X)$ is a weak Dirichlet process. In particular, $X$ is a weak Dirichlet process.
\item [(ii)]\eqref{Ycn_sigma} holds for every $v \in C^{0,1}$.
\end{itemize}
\end{corollary}
\proof
By Theorem \ref{T:thmpassage}, $(X, \mathbb P)$ is a solution to the martingale problem in Definition \ref{D:mtpb1} with respect to $x_0$, $\mathcal D_{\mathcal A}$ in \eqref{DAm} and $\mathcal A$ in \eqref{Adistrib2}.
(i)
By Remark \ref{R:densityDA},
$\mathcal D_{\mathcal A}
$ in \eqref{DAm}
is dense in $C^{0,1}([0,T] \times \mathbb R)$.
Thanks to Proposition \ref{L:418}, $v \mapsto Y^{v,c}$ is a continuous map. Since $X$ is a finite quadratic variation process (see Remark \ref{R:twomartgpb}), we can apply Corollary \ref{C:3.30bis}, which states that $X$ is a weak Dirichlet process. Theorem \ref{P:3.10} concludes the proof of item (i).
(ii) Let $v \in C^{0,1}$. Since $\mathcal D_{\mathcal A}$ is dense in $C^{0,1}$ (see Remark \ref{R:densityDA}), there exists a sequence $(v_n)\in \mathcal D_{\mathcal A}$ converging to $v$ in $C^{0,1}$. Since $v \mapsto Y^{v,c}$ is continuous, $Y^{v_n, c}\rightarrow Y^{v,c}$, u.c.p. By Proposition \ref{P:App}, $\langle Y^{v_n, c}, Y^{v_n, c}\rangle \rightarrow \langle Y^{v, c}, Y^{v, c} \rangle$.
The result follows from Proposition \ref{L:418} since $v \mapsto \int_0^t \sigma^2(X_s) (\partial_x v(s, X_s))^2 ds$ is continuous.
\endproof
\begin{theorem}\label{T:G}
Let $(X, \mathbb P)$ be a solution to the martingale problem in Definition \ref{D:mtpb_hom_Mark}.
Then there exists an $(\mathcal F_t)$-Brownian motion $W^X$ such that
\begin{align}\label{decompX}
X&= x_0+ \int_0^\cdot \sigma(X_s) dW^X_s + \int_{]0,\cdot]\times \mathbb R}
k(x)\,(\mu^X(ds\,dx)- Q(X_{s-},dx)d s)
+\lim_{n \rightarrow \infty}\int_0^\cdot L f_n(X_{s})ds\notag\\
&+ \int_{]0,\cdot]\times \mathbb R} (x - k(x))\mu^X(ds\,dx),
\end{align}
for every sequence $(f_n)_n \subseteq \mathcal D_{\mathcal L_M}$ such that $f_n \underset{n \rightarrow \infty}{\rightarrow} Id$ in $C^{1}$. The limit appearing in \eqref{decompX} holds in the u.c.p. sense.
\end{theorem}
\proof
By Theorem \ref{T:thmpassage}, $(X, \mathbb P)$ is a solution to the martingale problem in Definition \ref{D:mtpb1} with respect to $x_0$, $\mathcal D_{\mathcal A}$ in \eqref{DAm} and $\mathcal A$ in \eqref{Adistrib2}.
By Corollary \ref{C:genchar} we have
\begin{align*}
X= X^{ c} +
k(x)\,\star (\mu^X- \nu^{X, \mathbb P}) + \Gamma^{k}(Id) + (x - k(x))\star \mu^X,
\end{align*}
where $\Gamma^k$ is the operator defined in Theorem \ref{T:2.11}. By Proposition \ref{R:th_chainrule} and Remark \ref{R:linmapGamma}, $\Gamma^k$ is well defined in particular on $C^{0,1}$.
By Proposition \ref{L:418}, we can apply Proposition \ref{T:3.30_bis} with $\mathcal {D}^S = \mathcal D_{\mathcal A}$, which yields that $v \mapsto \Gamma^{k}(v)$ restricted to $\mathcal D_{\mathcal A}$ is continuous. Since $\mathcal D_{\mathcal A}$ is dense in $C^{0,1}$,
$\Gamma^{k}(v)$ is the continuous extension of the map defined in \eqref{Gammavk}.
At this point we evaluate $\Gamma^k(Id)$. Take $(f_n)_n \subseteq \mathcal D_{\mathcal L_M}$ such that $f_n \rightarrow Id$ in $C^{1}$.
By \eqref{Gammavk} together with \eqref{Adistrib2}, we get
\begin{align*}
\Gamma^{k}(Id)&= \lim_{n \rightarrow \infty}\Gamma^{k}(f_n)\\
&=\lim_{n \rightarrow \infty}\Big(\int_0^\cdot (\mathcal A f_n)(ds, X^{-}) - \int_{]0,\cdot]\times \mathbb R}(f_n( X_{s-}+x)-f_n( X_{s-})) \frac{x - k(x)}{x}Q(X_{s-},dx)d s\Big)\\
& =\lim_{n \rightarrow \infty}\Big(\int_0^\cdot L f_n(X_{s-})ds
+ \int_{]0,\cdot]\times \mathbb R} (f_n(X_{s-} + x) -f_n(X_{s-})
-k(x)\,f_n'(X_{s-})\,) Q(X_{s-},dx)d s\\
&- (f_n( X_{s-}+x)-f_n( X_{s-})) \frac{x - k(x)}{x}\star \nu^{X, \mathbb P}\Big)\\
&=\lim_{n \rightarrow \infty}\int_0^\cdot L f_n(X_{s})ds.
\end{align*}
In order to get \eqref{decompX} it remains to identify $X^{c}$.
Setting
$
W^X_t := \int_0^t \frac{1}{\sigma(X_s)}d X^{c}_s
$,
we have
$$
\langle W^X, W^X\rangle_t= \int_0^t \frac{1}{\sigma^2(X_s)}d \langle X^{c}, X^{c}\rangle_s.
$$
On the other hand, Corollary \ref{C:newcor}-(ii) with $v \equiv \textup{Id}$ yields
$
\langle X^{c}, X^{c}\rangle=\int_0^\cdot \sigma^2(X_s) ds,
$
which implies that
$\langle W^X, W^X\rangle_t= t$,
and by L\'evy representation theorem, $W^X$ is a Brownian motion. Finally, by construction we have
$X^{c} = \int_0^\cdot \sigma(X_s) dW^X_s$.
\endproof
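To illustrate \eqref{decompX} (an illustrative special case, under assumptions stronger than those of the theorem), suppose $\beta \in C^1$ and $\sigma$ nowhere vanishing, so that $\Sigma \in C^1$ and, formally, $L\, Id = \beta'$. Then the singular term in \eqref{decompX} reduces to an ordinary drift:
\begin{equation*}
\lim_{n \rightarrow \infty}\int_0^\cdot L f_n(X_{s})\, ds = \int_0^\cdot \beta'(X_s)\, ds,
\end{equation*}
and \eqref{decompX} becomes the canonical decomposition of a jump diffusion with diffusion coefficient $\sigma$ and drift $\beta'$.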
\subsubsection{Continuous path-dependent SDEs with distributional drift}
Let $\sigma$, $\beta$ be continuous real functions and $L: \mathcal D_L \rightarrow C^0$ in \eqref{Lf},
with $\mathcal D_L$ the subset of $C^1$ introduced above. We only assume item 1 of Hypothesis \ref{H:Sigma}.
Let $G^d:[0,\,T] \times D_{-}(0,\,T)\rightarrow \mathbb R$ be a Borel functional, uniformly continuous on closed balls,
and define
$$
\tilde G^d(t,\eta) = \frac{G^d(t,\eta)}{\sigma(\eta(t))}, \quad (t, \eta) \in [0,T] \times D_{-}(0,T).
$$
We suppose moreover that $\tilde G^d$ is bounded.
We set
$G:[0,\,T] \times C(0,\,T)\rightarrow \mathbb R$ as the restriction of $G^d$.
In \cite{ORT1_PartI} one investigates the martingale problem related to a path-dependent SDE of the type
\begin{equation}\label{eq1}
d X_t = \sigma(X_t) dW_t + (\beta'(X_t) + G(t, X^t))dt.
\end{equation}
We set $ \mathcal D_{L,b}:= \mathcal D_L \cap C^0_b$.
\begin{definition}\label{D:ORT}
$(X, \mathbb P)$ is a solution to the (non-Markovian) path-dependent martingale problem related to \eqref{eq1} with initial condition $x_0$ if, for every $f \in \mathcal D_{L,b}$,
\begin{equation}\label{ContdistrM}
M^f_t :=f(X_t) - f(x_0) - \int_0^t ((Lf) (X_s)+ f'(X_s) G(s, X^s)) ds
\end{equation}
is an $(\mathcal F^X)$-local martingale under $\mathbb P$.
\end{definition}
\begin{proposition} \label{P432}
Let $(X, \mathbb P)$ be a solution to the martingale problem
in the sense of Definition \ref{D:ORT}. Then
$X$ is necessarily a continuous process.
\end{proposition}
\proof
Let $h$ be the function introduced in Proposition \ref{P:equiv}, and set $Y=h(X)$.
For every $\phi \in C^2$, we denote by $L^0$ the classical PDE operator
$
L^0\phi(y) =\frac{(\sigma h')^2(h^{-1}(y))}{2} \phi''(y)
$.
By Proposition 2.13 in \cite{frw1},
$\phi \in \mathcal D_{L^0}(=C^2)$ if and only if $\phi \circ h \in \mathcal D_{L}$. This implies that $\phi \in C^2_b$ if and only if $\phi \circ h \in \mathcal D_{L,b}$. Moreover,
$L (\phi \circ h) = (L^0 \,\phi)\circ h$
for every $\phi \in C^2$.
Since $(X, \mathbb P)$ fulfills the martingale problem in Definition \ref{D:ORT},
for every $f \in \mathcal {D}_{L,b}$,
\begin{align*}
f(X_{\cdot}) - f(x_0) - \int_0^{\cdot} ((Lf) (X_s)+ f'(X_s) G(s, X^s)) ds
\end{align*}
is an $(\mathcal F_t^X)$-local martingale under $\mathbb P$.
Setting $y_0 = h^{-1}(x_0)$, this yields that, for every $\tilde f \in C^2_b$,
$$
\tilde f(Y_\cdot) - \tilde f(y_0)-\frac{1}{2}\int_0^{\cdot} (\sigma h')^2(h^{-1}(Y_{s})) \tilde f''(Y_{s}) ds -\int_0^{\cdot}h' (h^{-1}(Y_s))G(s, h^{-1}(Y^s))\tilde f'(Y_{s})ds
$$
is an $(\mathcal F_t^Y)$-local martingale under $\mathbb P$.
It follows from Theorem \ref{T: equiv_mtgpb_semimart} that $Y$ is a semimartingale with characteristics
$B = \int_0^\cdot b(s,\check Y) ds$, $ C = \int_0^\cdot c(s,\check Y) ds$, $\nu(ds\,dz) = 0$, where $b(s,\eta):=h' (h^{-1}(\eta(s)))G(s, h^{-1}(\eta^s))$ and
$
c(s,\eta) := (\sigma h')^2(h^{-1}(\eta(s)))
$. Consequently $\mu^Y(ds\,dy)=0$, so $Y=h(X)$
is necessarily a continuous process, and the same holds for $X$.
\endproof
\begin{lemma}\label{L:densityC0b}
$ \mathcal D_{L,b}$ is dense in $ \mathcal D_L$ equipped with its graph topology. In particular, $ \mathcal D_{L,b}$ is dense in $C^1$.
\end{lemma}
\proof
We consider the sequence $(\chi_N)$ introduced in \eqref{chiN}.
Let $f \in \mathcal D_L$.
We define a sequence $(f_N)\subset C^1$ such that $f_N(0)=f(0)$ and $f'_N=\chi_N f'$.
$f_N \in \mathcal D_L$ since $f_N' = (\phi \chi_N)e^{-\Sigma}$ and $\phi \chi_N \in C^1$, where $\phi$ has been defined in \eqref{Lf_bis}. Now, each $f_N$ is a bounded function since $f_N'$ has compact support. Clearly, $f_N \rightarrow f$ in $C^1$.
Moreover, making use of \eqref{Lf} we get
\begin{align*}
L f_N = \frac{\sigma^2}{2}(e^\Sigma \chi_N f')' e^{-\Sigma}=\frac{\sigma^2}{2}(\phi \chi_N)' e^{-\Sigma} \rightarrow \frac{\sigma^2}{2}\phi' e^{-\Sigma}= Lf \quad \textup{in}\,\,C^0.
\end{align*}
This concludes the proof.
\endproof
\begin{corollary}
Existence and uniqueness of a solution to the martingale problem in Definition \ref{D:ORT} holds true.
\end{corollary}
\proof
By Theorem 4.23 in \cite{ORT1_PartI}, there is a solution $(X,\mathbb P)$
in the sense of Definition \ref{D:ORT}
which is even continuous. This shows existence.
Concerning uniqueness, let $(X,\mathbb P)$ be a solution
of the martingale problem in the sense of Definition \ref{D:ORT}.
By Proposition \ref{P432}, $f(X)$ is necessarily continuous for every $f \in \mathcal {D}_{L,b}$.
Moreover
the process in \eqref{ContdistrM} is a martingale
also for every $f \in \mathcal D_{L}$
not necessarily bounded.
Indeed, let $f \in \mathcal D_L$. By Lemma \ref{L:densityC0b} there is
a sequence $f_N \in \mathcal D_{L,b}$ converging to $f$ in $\mathcal D_{L}$.
This implies that $M^{f_N}$ converges to $M^f$ u.c.p.
We remark that the space of continuous local martingales is closed with respect to the u.c.p. convergence topology, so $M^f$ is again a continuous local martingale.
The conclusion follows by
Proposition 4.24 in \cite{ORT1_PartI} which
states uniqueness in the framework of continuous processes.
\endproof
\begin{proposition}\label{P:4.35}
Let $(X, \mathbb P)$ be a solution to the martingale problem in Definition \ref{D:ORT}. Then $(X, \mathbb P)$ is a solution to the martingale problem in Definition \ref{D:mtpb1} with respect to $x_0$,
\begin{equation}\label{DAG}
{\mathcal D}_{\mathcal A}:=C^1([0,\,T];\mathcal D_{L,b})
\end{equation}
and
\begin{equation}\label{AG}
(\mathcal A v)(ds , \eta) = (\partial_s v(s, \eta(s)) + L v(s, \eta(s)) + G^d(s, \eta^s) \partial_x v(s, \eta(s)))ds.
\end{equation}
\end{proposition}
\proof
We apply Theorem \ref{T:passage} with $\mathcal D_{\mathcal L}:=\mathcal D_{L,b}$ equipped with its graph topology.
\endproof
\begin{remark}
Consider the martingale problem in Definition \ref{D:mtpb1} with respect to $\mathcal D_{\mathcal{A}}$ in \eqref{DAG}, $\mathcal{A}$ in \eqref{AG}, and $x_0 \in \mathbb R$.
Replacing $G^d$ in \eqref{AG} with another Borel extension of $G$ one gets the same solution to the martingale problem.
\end{remark}
Proceeding analogously to the proof of Theorem \ref{T:G}, we can prove the following result.
\begin{proposition}
Let $(X, \mathbb P)$ be a solution to the martingale problem in Definition \ref{D:ORT}.
Then we have the following.
\begin{itemize}
\item [(i)]$X$ is a weak Dirichlet process.
\item [(ii)]There exists an $(\mathcal F_t)$-Brownian motion $W^X$ such that
\begin{align}\label{decompX2}
X&= x_0+ \int_0^\cdot \sigma(X_s) dW^X_s + \int_0^\cdot G^d(s, X^s) ds+\lim_{n \rightarrow \infty}\int_0^\cdot L f_n(X_{s})ds,
\end{align}
for every sequence $(f_n)_n \subseteq \mathcal D_{L}$ such that $f_n \underset{n \rightarrow \infty}{\rightarrow} Id$ in $C^{1}$. The limit in \eqref{decompX2} holds in the u.c.p. sense.
\end{itemize}
\end{proposition}
\subsubsection{The PDMPs case}
\label{S:The PDMPs case}
Let $X$ be a piecewise deterministic Markov process (PDMP)
generated by a marked point process $(T_n, \zeta_n)$, where $(T_n)_n$ are increasing random times in $]0,\,\infty[$ such that either there is a finite number of times $(T_n)_n$ or $\lim_{n \rightarrow \infty} T_n = + \infty$, and the marks $\zeta_n$ are random variables in $[0,1]$.
We will follow the notations in \cite{Da-bo}, Chapter 2, Sections 24 and 26. The behavior of the PDMP $X$ is described by a triplet of
local characteristics $(h,\lambda,Q)$:
$h: ]0,\,1[ \rightarrow \mathbb R$ is a Lipschitz continuous function,
$\lambda: ]0,1[ \rightarrow \mathbb R$ is a measurable function such that
$\sup_{x \in ]0,1[}|\lambda(x)| < \infty$,
and $Q$ is a transition probability measure on $[0,1]\times\mathcal{B}(]0,1[)$. Some other technical assumptions, which we do not recall here, are specified in the above-mentioned reference.
Let us denote by $\Phi(s,x)$ the unique solution of $
{g}'(s) = h(g(s))$, $g(0)= x$. Then
\begin{equation} \label{X_eq}
X(t)=
\left\{
\begin{array}{ll}
\Phi(t,x),\quad t \in [0,\,T_{1}[\\
\Phi(t-T_n, \zeta_n),\quad t \in [T_n,\,T_{n+1}[,\,\, n \in \mathbb N,
\end{array}
\right.
\end{equation}
and, for any $x_0 \in [0,1]$, satisfies the equation (provided the second integral on the right-hand side is well defined)
\begin{equation}\label{PDP_dynamic_estended}
X_t=x_0 + \int_{0}^t h(X_s)\,ds + \int_{]0,\,t]\times \mathbb R}x\,\mu^X(ds\,dx)
\end{equation}
with
\begin{equation}\label{muPDMPS}
\mu^X(ds\,dx)
=\sum_{n \geq 1} 1_{\{\zeta_{n}\in ]0,1[\}} \delta_{( T_n,\,\zeta_n- \zeta_{n-1})}(ds\,dx).
\end{equation}
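As an illustration of \eqref{X_eq} (with a hypothetical choice of flow, not taken from the cited references), take $h(y) = a(c - y)$ with constants $a > 0$ and $c \in ]0,1[$. The flow equation ${g}'(s) = h(g(s))$, $g(0) = x$, is then solved explicitly by
\begin{equation*}
\Phi(t,x) = c + (x - c)\, e^{-a t},
\end{equation*}
so that $X_t = c + (\zeta_n - c)\, e^{-a (t - T_n)}$ for $t \in [T_n,\, T_{n+1}[$: between consecutive jump times the path relaxes exponentially towards $c$, and at each $T_n$ it restarts from the mark $\zeta_n$.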
Moreover, we introduce the predictable process counting the number of jumps of $X$ from the boundary of its domain:
\begin{equation}\label{p_ast}
p^{\ast}_t = \sum_{0 <s \leq t}\,1_{\{X_{s-} \in \{0,1\}\}}.
\end{equation}
The knowledge of $(h,\,\lambda,\,Q)$ completely specifies the law of $X$, see Section 24 in \cite{Da-bo}, and also Proposition 2.1 in \cite{BandiniPDMPs}. In particular, let $\mathbb P$ be the unique probability measure under which
the compensator of $\mu^X$ has the form
\begin{equation}\label{nuPDPs}
\nu^X(ds\, dx) = \tilde Q(X_{s-},\,dx)\,(\lambda(X_{s-})\,ds + d p^{\ast}_s),
\end{equation}
where $\tilde Q(y, dx)= Q(y, y+ dx)$, and $\lambda$ is trivially extended to $[0,1]$ by the zero value.
Notice that $X$ is a finite variation process, so \eqref{int_small_jumps} holds true.
According to Theorem 31.3 and subsequent Section 31.5 in \cite{Da-bo}, for every measurable absolutely continuous function $v:\mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ such that $(v(t, X_{t-}+x)-v(t, X_{t-})) \star\mu^X \in \mathcal A_{\textup{loc}}^+$,
\begin{align*}
&v(t, X_t)- v(0, x_0)\\
&- \int_{]0,t]} \Big(\partial_s v(s, X_{s-})+ h(X_{s-})\partial_x v(s, X_{s-})+\lambda(X_{s-})\int_{\mathbb R}(v(s, X_{s-}+x)-v(s, X_{s-}))\tilde Q(X_{s-}, dx)\Big)ds\\
&- \int_{]0,t]\times \mathbb R}(v(s, X_{s-}+x)-v(s, X_{s-}))\tilde Q(X_{s-}, dx)dp^\ast_s
\end{align*}
is an $(\mathcal F^X)$-local martingale under $\mathbb P$.
Therefore, $(X, \mathbb P)$ solves the martingale problem in Definition \ref{D:mtpb1} with respect to $\mathcal A$, $\mathcal D_{\mathcal A}:=C^{1}([0,\,T] \times \mathbb R)$ and $x_0$,
with
$$
(\mathcal A v)(ds, \eta):= \Lambda_1 v(s,\eta_{s-})\,\gamma_1(ds, \eta_{s-})+\Lambda_2 v(s,\eta_{s-})\,\gamma_2(ds, \eta_{s-}), \quad v \in \mathcal D_{\mathcal A},\,\eta \in D(0,\,T),
$$
with, for any $y \in \mathbb R$, $\gamma_1(ds, y)=ds$, $\gamma_2(ds,y)= dp^\ast_s(y)$, and
\begin{align*}
\Lambda_1 v(s,y)&= \partial_s v(s, y) + h(y)\partial_y v(s, y)+
\lambda(y)\int_{\mathbb R} (v(s, y+x)-v(s, y))\,\tilde Q(y, dx),\quad y \in ]0,1[,\\
\Lambda_2 v(s,y)&= \int_{\mathbb R} (v(s, y+x)-v(s, y))\,\tilde Q(y, dx), \quad y \in \{0,1\}.
\end{align*}
\appendix
\renewcommand\thesection{Appendix}
\section{}
\renewcommand\thesection{\Alph{subsection}}
\renewcommand\thesubsection{\Alph{subsection}}
\subsection{Some technical results on the (weak) finite quadratic variation}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\begin{proposition}\label{P:ucpequivbrackets}
Let $Y=(Y(t))_{t \in [0,T]}$ and $X=(X(t))_{t \in [0,T]}$ be respectively a c\`adl\`ag and a continuous process.
Then
$$
[X,Y]_\varepsilon^{ucp}(t) = C_\varepsilon(X,Y)(t) + R(\varepsilon, t)
$$
with
$
R(\varepsilon, t)\underset{\varepsilon \rightarrow 0}{\rightarrow} 0 \,\,\textup{u.c.p.}
$
\end{proposition}
\proof
See Proposition A.3 in \cite{BandiniRusso1}.
\endproof
\begin{lemma}\label{L:3.15}
Let $G_n : [0,\,T] \rightarrow \mathbb R$, $n \in \mathbb N$, be a sequence of continuous functions such that
\begin{itemize}
\item [(i)] $\sup_n ||G_n||_{var} \leq M \in [0,\,+ \infty[$,
\item [(ii)] $G_n \underset{n \rightarrow \infty}{\rightarrow} 0$ uniformly.
\end{itemize}
Then, for every $g: [0,\,T] \rightarrow \mathbb R$ c\`agl\`ad,
$
\int_0^\cdot g \, d G_n \underset{n \rightarrow \infty}{\rightarrow} 0$, uniformly.
\end{lemma}
\proof
For every $n \in \mathbb N$, let us define the operator
$T_n: D_{-}(0,\,T)\rightarrow C(0,\,T)$,
$g \mapsto \int_0^\cdot g\, d G_{n}$.
We denote by $\mathcal E_{-}(0,\,T)$ the linear space of c\`agl\`ad step functions of the type
$\sum_i c_i \ensuremath{\mathonebb{1}}_{]a_i, b_i]}$.
We first notice that,
by (ii), if $g \in \mathcal E_{-}(0,\,T)$, then
$T_n(g) \underset{n \rightarrow \infty}{\rightarrow} 0$, uniformly.
On the other hand, $\mathcal E_{-}(0,\,T)$ is dense in $ D_{-}(0,\,T)$, see Lemma 1, Chapter 3, in \cite{Bil99} (that lemma is stated for c\`adl\`ag functions; however, the same holds for c\`agl\`ad functions, since the time reversal of a c\`adl\`ag function is c\`agl\`ad).
Let now $g \in D_{-}(0,\,T)$. Since $g$ is bounded, we have
$
\sup_{s \in [0,\,T]} |g(s)| \leq m
$
for some constant $m$.
We get
$
||T_n (g)||_\infty \leq M m.
$
The conclusion follows by the Banach-Steinhaus theorem, see e.g. Chapter 1.2 in \cite{ds}.
\endproof
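The mechanism of Lemma \ref{L:3.15} can be checked numerically. The sketch below is a hypothetical example, not taken from the text: $G_n(t)=\sin(nt)/n$ satisfies both hypotheses ($\sup_n ||G_n||_{var}\leq T$ and uniform convergence to zero), and the sup norm of $\int_0^\cdot g\, dG_n$ against a c\`agl\`ad step integrand $g$ indeed shrinks as $n$ grows.

```python
import numpy as np

# Numerical check of Lemma L:3.15 on a hypothetical example:
# G_n(t) = sin(n t)/n satisfies |G_n'| <= 1, so sup_n ||G_n||_var <= T,
# and G_n -> 0 uniformly; hence int_0^. g dG_n -> 0 uniformly.

def sup_integral(n, T=1.0, m=20000):
    s = np.linspace(0.0, T, m + 1)
    g = np.where(s <= 0.5, 1.0, -2.0)        # a caglad step integrand
    dG = np.diff(np.sin(n * s) / n)          # exact increments of G_n on the grid
    path = np.concatenate([[0.0], np.cumsum(g[:-1] * dG)])  # ~ int_0^t g dG_n
    return np.abs(path).max()

vals = [sup_integral(n) for n in (10, 100, 1000)]
print(vals)
```

The printed suprema decrease roughly like $1/n$, in accordance with the lemma.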
\begin{proposition}\label{P: weakfinconv}
Let $X$ be an $\mathbb F$-weak Dirichlet process with weakly finite quadratic variation. Let $g$ be a c\`agl\`ad process, and let $N$ be a continuous $\mathbb F$-local martingale. Then
$$
\int_0^t g(s) (X_{s+\varepsilon}-X_s)(N_{s+\varepsilon}-N_s)\frac{ds}{\varepsilon} \,\,\underset{\varepsilon \rightarrow0}{\rightarrow} \,\,\int_0^t g(s)\, d[X^c, N]_s, \quad t \in [0,T], \quad \textup{u.c.p}.
$$
\end{proposition}
\proof
For $\varepsilon >0$, we set
$F_\varepsilon(t) := C_{\varepsilon}(X,N)(t)$, $t \in [0,T]$.
By Proposition \ref{P:uniqdec}, $X= X^c+ A$ with $A$ a martingale orthogonal process. Therefore, recalling Proposition \ref{P:ucpequivbrackets},
$$
F_\varepsilon(t)\underset{\varepsilon \rightarrow0}{\rightarrow} F(t):= [X^c, N]_t, \quad \textup{u.c.p}.
$$
Let $\varepsilon_n$ be a sequence converging to zero as $n \rightarrow \infty$. It is sufficient to show the existence of a subsequence, still denoted by $\varepsilon_n$, such that
\begin{equation}\label{gFconv}
\int_0^\cdot g(s) \,d F_{\varepsilon_n}(s) \,\,\underset{n \rightarrow \infty}{\rightarrow} \,\,\int_0^\cdot g(s)\, d F(s), \quad \textup{u.c.p}.
\end{equation}
By extracting a sub-subsequence, there exists a null set $\mathcal N$ such that
\begin{equation}\label{ConvFeps}
F_{\varepsilon_n}(t) \underset{n \rightarrow \infty}{\rightarrow} F(t), \quad \textup{uniformly for all}\,\, \omega \notin \mathcal N.
\end{equation}
We remark that $N$ also has weakly finite quadratic variation. Let $\kappa >0$. For any $\ell > 0$, we denote by $\Omega_{n, \ell}$ the subset of $ \omega \in \Omega$ such that
\begin{align}\label{estquadr}
&\int_0^T (X_{(s+\varepsilon_n)\wedge T}(\omega)-X_{s}(\omega))^2\frac{ds}{\varepsilon_n} + \int_0^T (N_{(s+\varepsilon_n)\wedge T}(\omega)-N_{s}(\omega))^2\frac{ds}{\varepsilon_n} \leq \ell,\notag\\
& \langle X^c, X^c\rangle_T + \langle N, N\rangle_T\leq \ell.
\end{align}
In particular, we can choose $\ell$ such that $\mathbb P(\Omega_{n, \ell}^c) \leq \kappa$. Moreover, since $g$ is locally bounded, without loss of generality we can take
\begin{equation}\label{estg}
\sup_{s \in [0,\,T]} |g(s)| \leq \ell, \quad \forall \omega \in \Omega_{n, \ell}.
\end{equation}
Collecting \eqref{estquadr} and \eqref{estg}, on $\Omega_{n, \ell}$ we have
\begin{align}\label{estell}
&\sup_{0 \leq t \leq T} \left|\int_0^t g(s) \,d F_{\varepsilon_n}(s)\right| \leq \ell
||F_{\varepsilon_n}||_{\textup{var}}\notag\\
&\leq \ell \sqrt{\int_0^T (X_{s+\varepsilon_n}(\omega)-X_s(\omega))^2\frac{ds}{\varepsilon_n}} \sqrt{\int_0^T (N_{s+\varepsilon_n}(\omega)-N_s(\omega))^2\frac{ds}{\varepsilon_n}}\notag\\
&= \ell \sqrt{\int_0^T (X_{(s+\varepsilon_n) \wedge T}(\omega)-X_s(\omega))^2\frac{ds}{\varepsilon_n}} \sqrt{\int_0^T (N_{(s+\varepsilon_n)\wedge T}(\omega)-N_s(\omega))^2\frac{ds}{\varepsilon_n}}\leq \ell^2,\notag\\
&\sup_{0 \leq t \leq T} \left|\int_0^t g(s) \,d F(s)\right| \leq \ell
||F||_{\textup{var}}\leq \ell \sqrt{\langle X^c, X^c\rangle_T
\langle N, N\rangle_T
}\leq \ell^2.
\end{align}
Let us come back to prove \eqref{gFconv}. We set
$$
\chi_n(g) := \sup_{0 \leq t \leq T} \Big| \int_0^t g(s) \,d F_{\varepsilon_n}(s) - \int_0^t g(s)\, d F(s) \Big|.
$$
Let $K >0$.
Using \eqref{estell}, together with Chebyshev's inequality, we get
\begin{align}\label{ineqchin}
\mathbb P\left(\chi_n(g)> K\right)
&\leq \mathbb P(\Omega_{n, \ell}^c)+\mathbb P\left(\left \{\chi_n(g)> K\right\} \cap \Omega_{n, \ell} \cap \mathcal N^c\right)\notag\\
&\leq \kappa +\mathbb P\left(\left \{\chi_n(g)> K\right\} \cap \Omega_{n, \ell} \cap \mathcal N^c\right)\notag\\%\label{secondprob2}
&= \kappa +\mathbb P\left(\left \{\chi_n(g) \wedge 2 \ell^2 > K\right\} \cap \Omega_{n, \ell} \cap \mathcal N^c\right)\notag\\
&= \kappa +\mathbb P((\chi_n(g) \wedge 2 \ell^2)\ensuremath{\mathonebb{1}}_{ \Omega_{n, \ell}\cap \mathcal N^c} > K)\notag\\
&\leq \kappa +\frac{\mathbb E\left[(\chi_n(g) \wedge 2 \ell^2) \ensuremath{\mathonebb{1}}_{ \Omega_{n, \ell}\cap \mathcal N^c} \right]}{K}.
\end{align}
To prove that the previous expectation goes to zero as $n$ goes to infinity, by Lebesgue's dominated convergence theorem it remains to show that
\begin{equation}\label{chiconv}
\chi_n(g)\ensuremath{\mathonebb{1}}_{ \Omega_{n, \ell}\cap \mathcal N^c} \underset{n \rightarrow \infty}{\rightarrow} 0 \quad \textup{a.s.}
\end{equation}
To this end, let us set $G_n:= \ensuremath{\mathonebb{1}}_{\Omega_{n,\ell}}(F_{\varepsilon_n}-F)$. Since
by \eqref{estquadr} we have
$
||G_n||_{\textup{var}} \leq \ell$,
the convergence in \eqref{chiconv} follows from Lemma \ref{L:3.15}.
Consequently, by \eqref{ineqchin},
$\limsup_{n \rightarrow \infty} \mathbb P\left(\chi_n(g)> K\right)
\leq \kappa$.
Since $\kappa>0$ is arbitrary, the proof is concluded.
\normalcolor
\endproof
\begin{proposition}\label{P:App}
Let $(M^n(t))_{t \in [0,T]}$ (resp. $(N^n(t))_{t \in [0,T]}$) be a sequence of continuous local martingales, converging u.c.p. to $M$ (resp. to $N$). Then
\begin{align*}
[M^n,N^n] \underset{n\rightarrow \infty}{\longrightarrow} [M,N]\quad \textup{u.c.p}.
\end{align*}
\end{proposition}
In order to prove Proposition \ref{P:App} we first give a technical result.
\begin{lemma}\label{L:D6}
Let $A, \delta >0$. Let $M$ be a continuous local martingale vanishing at zero. We have
\begin{equation}\label{Eq:lemma}
\mathbb P\Big([M,M]_T \geq A\Big) \leq \frac{\mathbb E\Big[ \max_{t \in [0,T]}|M_t|^2\wedge \delta^2 \Big]}{A} + \mathbb P\Big(\max_{t \in [0,T]}|M_t| \geq \delta\Big).
\end{equation}
\end{lemma}
\proof
We bound the left-hand side of \eqref{Eq:lemma} by
$$
\mathbb P\Big([M,M]_T \geq A, \max_{t \in [0,T]}|M_t|\leq \delta\Big) +
\mathbb P\Big(\max_{t \in [0,T]}|M_t|\geq \delta\Big):= I_1 + I_2.
$$
Let $\tau:= \inf\{s \in [0,T]: \,\,|M_s|\geq \delta\}$.
We notice that on $\Omega_0 :=\{\omega \in \Omega: \max_{t \in [0,T]}|M_t(\omega)|\leq \delta\}$ we have $M=M^\tau$. Therefore, by the definition of covariation, $[M, M]=[M^\tau, M^\tau]$ on $\Omega_0$, so that
$$
I_1 = \mathbb P\Big([M^\tau,M^\tau]_T \geq A, \max_{t \in [0,T]}|M_t|\leq \delta\Big).
$$
Using Chebyshev's inequality and the bound $\mathbb E[[M^\tau, M^\tau]_T]\leq \mathbb E\big[\sup_{t \in [0,T]}|M^\tau_t|^2\big]$ (which follows by localization and Fatou's lemma),
we get
\begin{align*}
I_1
&\leq \mathbb P\left([M^\tau,M^\tau]_T \geq A\right)
\leq \frac{\mathbb E[[M^\tau, M^\tau]_T]}{A}
\leq \frac{\mathbb E\Big[\sup_{t \in [0,T]}|M_t^\tau|^2\Big]}{A}
= \frac{\mathbb E\Big[\sup_{t \in [0,T]}|M_t^\tau|^2 \wedge\delta^2\Big]}{A}\\
&\leq \frac{\mathbb E\Big[\sup_{t \in [0,T]}|M_t|^2 \wedge\delta^2\Big]}{A},
\end{align*}
where the equality holds since $\sup_{t \in [0,T]}|M_t^\tau| \leq \delta$ by the continuity of $M$.
\endproof
\noindent \emph{Proof of Proposition \ref{P:App}.}
Without loss of generality, we can assume $M\equiv N \equiv 0$.
By polarization, it is enough to consider the case $(M^n) \equiv (N^n)$ and to prove that $[M^n, M^n]_T \rightarrow 0$ in probability. Let $\varepsilon>0$, and let $N_0$ be such that, for every $n \geq N_0$,
$
\mathbb P(\sup_{t \in [0,T]}|M_t^n|\geq 1 )\leq \varepsilon.
$
Let $A>0$. For $n \geq N_0$, Lemma \ref{L:D6} gives
\begin{equation*}
\mathbb P\left([M^n,M^n]_T \geq A\right) \leq \frac{\mathbb E\Big[ \max_{t \in [0,T]}|M^n_t|^2\wedge 1\Big]}{A} + \varepsilon.
\end{equation*}
By taking $n \rightarrow \infty$ we get
\begin{equation*}
\limsup_{n \rightarrow \infty}\mathbb P\left([M^n,M^n]_T \geq A\right) \leq \varepsilon
\end{equation*}
and the result follows from the arbitrariness of $\varepsilon$.
\endproof
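As a sanity check of Proposition \ref{P:App}, one can discretize the simplest example (an assumption of this sketch, not part of the statement): $M^n = W/n$ for a standard Brownian motion $W$, so that $M^n \rightarrow 0$ u.c.p. and $[M^n,M^n]_T = T/n^2 \rightarrow 0$.

```python
import numpy as np

# Toy check of Proposition P:App: the continuous local martingales M^n = W/n
# converge u.c.p. to 0, and their realized quadratic variation over a fine
# grid behaves like [M^n, M^n]_T = T / n^2 -> 0.

rng = np.random.default_rng(0)
T, m = 1.0, 100000
dW = rng.normal(0.0, np.sqrt(T / m), size=m)   # Brownian increments
W = np.cumsum(dW)

def realized_qv(path):
    # sum of squared increments of the discretized path
    return float(np.sum(np.diff(path, prepend=0.0) ** 2))

qv = [realized_qv(W / n) for n in (1, 10, 100)]
print(qv)  # approximately [T, T/100, T/10000]
```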
\subsection{Some results on densities of martingale problems' domains}
\setcounter{equation}{0}
\setcounter{theorem}{0}
We have the following density results.
\begin{lemma}\label{L:C1}
Let $M >0$ and $E$ be a topological vector $F$-space with some metric $d$. We introduce the distance
$$
\bar d(e_1, e_2) := \sup_{t \in [0,M]}d(t e_1, t e_2), \quad e_1, e_2 \in E.
$$
Then $d$ and $\bar d$ are equivalent in the following sense: if $\varepsilon >0$, there is $\delta >0$ such that
\begin{equation}\label{dequiv1}
d(e_1, e_2)\leq \delta \Rightarrow \bar d(e_1, e_2) \leq \varepsilon, \quad e_1, e_2 \in E,
\end{equation}
and
\begin{equation}\label{dequiv2}
\bar d(e_1, e_2)\leq \delta \Rightarrow d(e_1, e_2) \leq \varepsilon, \quad e_1, e_2 \in E.
\end{equation}
\end{lemma}
\proof
\eqref{dequiv2} is immediate since $d \leq \bar d$.
\eqref{dequiv1} follows from the continuity of the map $e \mapsto \bar d(e,0)$: indeed, $(t, e) \mapsto t e$ is continuous, hence so is $(t,e) \mapsto d(te, 0)$, and taking the supremum over the compact interval $[0,M]$ preserves continuity.
\endproof
\begin{definition}
An $F$-space $E$ is said to be generated by a countable sequence of seminorms $(||\cdot||_\alpha)_{\alpha\in \mathbb N}$ if it can be equipped with the distance
\begin{equation}\label{distFspace}
d_E(x,y):= \sum_\alpha \frac{||x-y||_\alpha}{1+||x-y||_\alpha}2^{-\alpha}.
\end{equation}
\end{definition}
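To make the distance \eqref{distFspace} concrete, the toy computation below takes the hypothetical choice $E=\mathbb R^{10}$ with seminorms $||x||_\alpha = |x_\alpha|$, $\alpha$ indexed from $1$; each summand is bounded by $2^{-\alpha}$, so $d_E$ stays bounded even for vectors with very large coordinates.

```python
import numpy as np

# Toy model of the distance (distFspace): E = R^10 with the seminorms
# ||x||_alpha = |x_alpha| (a hypothetical choice; alpha runs from 1 here).

def d_E(x, y):
    diff = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    weights = 2.0 ** (-np.arange(1, len(diff) + 1))
    return float(np.sum(weights * diff / (1.0 + diff)))

x = [0.0] * 10
y = [1.0] + [0.0] * 9          # differs from x by 1 in the first coordinate
z = [1000.0] + [0.0] * 9       # differs from x by 1000 in the first coordinate
d1, d2 = d_E(x, y), d_E(x, z)
print(d1, d2)  # each summand is < 2^{-alpha}, so both values stay below 1
```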
The space $C^0([0,T]; E)$ will be equipped with the distance
$
d(f,g) := \sup_{t \in [0,T]}d_E(f(t),g(t)).
$
\begin{remark}
$C^1$, $C^2$, $C^0$, and $\mathcal D_L \cap C^\alpha_\textup{loc} \cap C_b^0$ (see \eqref{barD}) are $F$-spaces generated by a countable sequence of seminorms.
\end{remark}
\begin{definition}\label{D:D4}
Let $E$ be an $F$-space generated by a countable sequence of seminorms.
Let $d_E$ be the distance in \eqref{distFspace} related to $E$.
A function $f : [0,T] \rightarrow E$ is said to be $C^1([0,T];E)$ if there exists $f':[0,T] \rightarrow E$ continuous such that
\begin{equation}\label{C1def}
\lim_{\varepsilon \rightarrow 0}\sup_{t \in [0,T]}d_E\Big (\frac{f(t + \varepsilon)- f(t)}{\varepsilon}, f'(t)\Big )=0.
\end{equation}
\end{definition}
We remark that by Definition \ref{D:D4} $f'$ is continuous.
Since we are not aware of a notion of Bochner integrability for functions taking values in $E$, we make use of Riemann integrability. The following lemma relies on classical arguments, exploiting the fact that $f$ is uniformly continuous.
\begin{lemma}\label{L:C5}
Let $f :[0,T] \rightarrow B$ be a continuous function, where $B$ is a seminormed space. We denote by
$$
s_n(f)(t) = \sum_{k=0}^{2^n-1}f(k t 2^{-n})\, t\, 2^{-n}, \quad t \in [0,T],
$$
the Riemann sequence related to the dyadic partition. Then $(s_n(f))$ is Cauchy in $C^0([0,T]; B)$, namely
$$
\sup_{t \in [0,T]} ||(s_n(f)- s_m(f))(t)||_B\rightarrow 0 \quad \textup{as}\,\,n,m \rightarrow \infty.
$$
\end{lemma}
\begin{remark}\label{R:C6}
Let $E$ be an $F$-space generated by a countable sequence of seminorms $(||\cdot||_\alpha)_{\alpha\in \mathbb N}$.
Let $d_E$ be the distance in \eqref{distFspace} related to $E$. Let $f_n, f_m$ (resp. $f$) be sequences of functions (resp. a function) from $[0,T]$ to $E$.
\begin{itemize}
\item[(i)]$\sup_{t \in [0,T]} d_E(f_n(t) , f(t)) \underset{n \rightarrow \infty}{\rightarrow} 0$ is equivalent to $\sup_{t \in [0,T]} ||f_n(t)-f(t)||_\alpha\underset{n \rightarrow \infty}{\rightarrow} 0$ for every $\alpha \in \mathbb N$;
\item[(ii)]$\sup_{t \in [0,T]} d_E(f_n(t) , f_m(t)) \underset{n,m \rightarrow \infty}{\rightarrow} 0$ is equivalent to $\sup_{t \in [0,T]} ||f_n(t)-f_m(t)||_\alpha \underset{n,m \rightarrow \infty}{\rightarrow} 0$ for every $\alpha \in \mathbb N$.
\end{itemize}
\end{remark}
\begin{remark}[Riemann integral]\label{R:RiemannIntegral}
Let $E$ be an $F$-space generated by a countable sequence of seminorms $(||\cdot||_\alpha)_{\alpha\in \mathbb N}$.
By Remark \ref{R:C6},
$C^0([0,T];E)$ is an $F$-space.
Let $f\in C^0([0,T]; E)$. By Lemma \ref{L:C5}, $(s_n(f))$ is Cauchy with respect to all the seminorms $||\cdot||_\alpha$, $\alpha \in \mathbb N$. By Remark \ref{R:C6},
the sequence $(s_n(f))$ is Cauchy with respect to $d$.
Since $C^0([0,T];E)$ is an $F$-space, it is complete, and $(s_n(f))$
converges to an element of $C^0([0,T]; E)$ that we denote by $\int_0^\cdot f(s) ds$. For $a, b \in [0,T]$ we write $\int_a^b f(s) ds=\int_0^b f(s) ds-\int_0^a f(s) ds$. It is not difficult to show that the Riemann integral satisfies the following properties.
\begin{enumerate}
\item $\int_a^b f(t) dt =\int_{a-h}^{b-h} f(t+h) dt$, $h\in \mathbb R$ (by definition of the integral).
\item $\lim_{\varepsilon \rightarrow 0}\frac{1}{\varepsilon}\int_t^{t+\varepsilon} f(s) ds = f(t)$ and $\lim_{\varepsilon \rightarrow 0}\frac{1}{\varepsilon}\int_{t-\varepsilon}^{t} f(s) ds = f(t)$ in $C^{0}([0,T];E)$ (by Remark \ref{R:C6}-(i) and the fact that $f$ is uniformly continuous).
\item If $f \in C^0([0,T];E)$, then $f(\cdot)=f(0) + \lim_{\varepsilon \rightarrow 0}\frac{1}{\varepsilon}\int_0^{\cdot} (f(s+ \varepsilon)-f(s)) ds $ (by items 1. and 2. above).
\item If $f \in C^1([0,T];E)$, then $f(\cdot)=f(0) + \int_0^{\cdot} f'(s) ds $ (by item 3. above and Definition \ref{D:D4}).
\end{enumerate}
\end{remark}
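The dyadic Riemann sums of Lemma \ref{L:C5} can be tested in the scalar case $B=\mathbb R$, with the absolute value as norm; the choice $f=\cos$ below is purely illustrative.

```python
import numpy as np

# Dyadic Riemann sums s_n(f)(t) = sum_{k=0}^{2^n - 1} f(k t 2^{-n}) t 2^{-n},
# here for B = R with the absolute value as norm; they converge uniformly
# on [0, T] to t -> int_0^t f(s) ds  (= sin(t) for f = cos).

def s_n(f, t, n):
    k = np.arange(2 ** n)
    return float(np.sum(f(k * t * 2.0 ** (-n))) * t * 2.0 ** (-n))

ts = np.linspace(0.0, 1.0, 50)
errs = [max(abs(s_n(np.cos, t, n) - np.sin(t)) for t in ts) for n in (4, 8, 12)]
print(errs)  # the sup-norm error decreases roughly like 2^{-n}
```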
\begin{lemma}\label{L:hatD}
Let $\mathcal D_{\mathcal L}$ be a topological vector $F$-space generated by a countable sequence of seminorms, equipped with the metric $d_{\mathcal L}(=d_E)$ in \eqref{distFspace} related to $\mathcal D_{\mathcal L}(=E)$.
We suppose that $\mathcal D_{\mathcal L}$ is topologically embedded in $C^0$. Let
$\mathcal L: \mathcal D_{\mathcal L} \rightarrow C^0([0,\,T]\times D_{-}(0,\,T))$ be a continuous map.
Let $\hat {\mathcal D}_{\mathcal A}$ be the subspace of $ {\mathcal D}_{\mathcal A}:=C^1([0,\,T]; \mathcal D_{\mathcal L})$
consisting of functions of the type
\begin{equation}\label{formun}
u(t,x)= \sum_{k} a_k(t) u_k(x), \quad a_k \in C^1(0,\,T), \,\,u_k \in \mathcal D_{\mathcal L}.
\end{equation}
Then $\hat {\mathcal D}_{\mathcal A}$ is dense in ${\mathcal D}_{\mathcal A}$ equipped with the metric $d_{\mathcal A}$
governing the following convergence: \\$u_n \rightarrow 0$ in ${\mathcal D}_{\mathcal A}$ if
\begin{align*}
u_n \rightarrow 0 \,\, \textup{in}\,\,C^0([0,\,T]; \mathcal D_{\mathcal L})\,\,\textup{and}\,\,
\partial_t u_n \rightarrow 0 \,\, \textup{in}\,\,C^0([0,\,T]; \mathcal D_{\mathcal L}).
\end{align*}
\end{lemma}
\proof
Let $u \in {\mathcal D}_{\mathcal A}$.
We have to prove that there is a
sequence $u_n \in \hat {\mathcal D}_{\mathcal A}$ such that $u_n \rightarrow u$ in ${\mathcal D}_{\mathcal A}$.
We denote by $v := u'$ the time derivative of $u$.
We divide the proof into two steps.
\noindent \emph{First step: approximation of $v $.}
In this step we only use the fact that $E= \mathcal D_{\mathcal L}$ is an $F$-space.
Notice that $v \in C^0([0,\,T]; \mathcal D_{\mathcal L})$. Let $\varepsilon >0$. Since $v$ is uniformly continuous, by Lemma \ref{L:C1}, there exists $\delta >0$ such that
\begin{equation}\label{C3}
|t-s|\leq\delta \mathbb Rightarrow \bar d_{\mathcal L}(v(t), v(s))< \frac{\varepsilon}{2},
\end{equation}
where $\bar d_{\mathcal L}$ is the metric related to $d_{\mathcal L}$ introduced in Lemma \ref{L:C1} with $M=1$.
We consider a dyadic partition of $[0,\,T]$ given by $t_k =2^{-n} kT$,
$k \in \{0,..., 2^n\}$, $n \in \mathbb N$.
We define the open covering of $[0,\,T]$ given by
\begin{equation}
U_k^n =
\left\{\begin{array}{l}
[t_0, t_1[\quad k=0,\\
]t_{k-1}, t_{k+1}[ \cap[0,T]\quad k \in \{1,..., 2^{n}\}.
\end{array}\right.
\end{equation}
We also introduce a smooth partition of unity $\varphi_k^n$, $k \in \{0,..., 2^{n}\}$, subordinated to the covering $(U_k^n)_k$. In particular,
$\sum_{k=0}^{2^n} \varphi_k^n =1$, $ \varphi_k^n \geq 0$, and $ \supp \varphi_k^n \subset U_k^n$.
We define
\begin{align*}
v_n(t)&:= \sum_{k=0}^{2^n}v(t_k) \varphi_k^n(t).
\end{align*}
We notice that, if $t \in [t_{k-1}, t_k[$, then $v_n(t) =v(t_{k-1}) \varphi_{k-1}^n(t)+v(t_k) \varphi_k^n(t)$ and $v(t) =v(t) \varphi_{k-1}^n(t)+v(t) \varphi_k^n(t)$.
Since $d_{\mathcal L}$ is a homogeneous distance, using the triangle inequality, we have
\begin{align}\label{bard}
d_{\mathcal L}(v_n(t), v(t))&= d_{\mathcal L}((v(t_{k-1})-v(t)) \varphi_{k-1}^n(t)+(v(t_k)-v(t)) \varphi_k^n(t),0)\notag\\
&\leq d_{\mathcal L}((v(t_{k-1})-v(t)) \varphi_{k-1}^n(t),0)+d_{\mathcal L}((v(t_k)-v(t)) \varphi_k^n(t),0)\notag\\
&\leq d_{\mathcal L}(v(t_{k-1}) \varphi_{k-1}^n(t),v(t) \varphi_{k-1}^n(t))+d_{\mathcal L}(v(t_k) \varphi_k^n(t),v(t) \varphi_k^n(t))\notag\\
&\leq \bar d_{\mathcal L}(v(t_{k-1}),v(t))+ \bar d_{\mathcal L}(v(t_k),v(t)).
\end{align}
Then we choose $N$ such that $2^{-N} T\leq \delta$. Recalling \eqref{C3}, we obtain from \eqref{bard} that
\begin{align*}
n > N \Rightarrow \sup_{t \in [0,T]} d_{\mathcal L}(v_n(t), v(t))
&\leq \varepsilon.
\end{align*}
This shows that $v_n \rightarrow v=\partial_t u \,\, \textup{in}\,\,C^0([0,\,T]; \mathcal D_{\mathcal L})$.
\noindent \emph{Second step: approximation of $u$.}
Now we set
\begin{align*}
u_n(t)&
:=
u(0) + \sum_{k=0}^{2^n}v(t_k) \bar \varphi_k^n(t),
\end{align*}
where $\bar \varphi_k^n(t) := \int_0^t \varphi_k^n(s) ds$.
We remark that
$
u_n(t) = u(0) + \int_0^t v_n(s) ds
$
in the sense of Remark \ref{R:RiemannIntegral}.
We have to show that
$
\sup_{t \in [0,T]} d_{\mathcal L}(u_n(t), u(t))
$
converges to zero. To this end, by Remark \ref{R:C6}, it is enough to show that, for every fixed $\alpha$,
\begin{equation}\label{convde}
\sup_{t \in [0,T]} ||u_n(t)-u(t)||_\alpha\underset{n \rightarrow \infty}{\rightarrow} 0.
\end{equation}
Since $u$ is uniformly continuous with respect to $d_{\mathcal L}$, it has the same property with respect to the seminorm $||\cdot||_\alpha$.
For every $t \in [0,T]$,
\begin{align*}
||u_n(t)-u(t)||_\alpha \leq \int_{0}^t ||v_n(s)-v(s)||_\alpha\, ds \leq T \sup_{s \in [0,T]} ||v_n(s)-v(s)||_\alpha,
\end{align*}
which converges to zero by the first step of the proof together with Remark \ref{R:C6}-(i).
This implies \eqref{convde} and concludes the proof.
\endproof
\begin{lemma}\label{L:density}
Let $\mathcal D_{\mathcal L_M}$ be a topological vector $F$-space
generated by a countable sequence of seminorms, and let ${\mathcal D}_{\mathcal A}$ be defined in \eqref{D:DAnewMark}. If $\mathcal D_{\mathcal L_M}$ is dense in $C^1$, then ${\mathcal D}_{\mathcal A}$ is dense in $C^{0,1}$.
\end{lemma}
\proof
We start by noticing that $C^{0,1}=C^0([0,\,T]; C^1)$.
Then we divide the proof into two steps.
\noindent \emph{First step.} $C^0([0,\,T]; {\mathcal D}_{\mathcal L_M})$ is dense in $C^0([0,\,T]; C^1)$.
In this step we only use the fact that $E:= C^1$ is an $F$-space.
Let $d_E$ be the distance in \eqref{distFspace} related to $E$.
Let $f \in C^0([0,\,T]; C^1)$. Let $\delta >0$.
We need to show the existence of $f^\varepsilon \in C^0([0,\,T]; \mathcal D_{\mathcal L_M})$ such that
$$
d_E(f(t), f^\varepsilon(t))\leq \delta, \quad \forall t \in [0,\,T].
$$
Let $0=t_0 < t_1<\cdots < t_n=T$ be a subdivision with mesh $\varepsilon$. Since $\mathcal D_{\mathcal L_M}$ is dense in $C^1$, using Lemma \ref{L:C1}, for every $i=0,..., n$, there is $f^\varepsilon_{t_i} \in \mathcal D_{\mathcal L_M}$ such that
\begin{equation}\label{bard2}
\bar d_E(f(t_i), f^\varepsilon_{t_i}) \leq \frac{\delta}{6},
\end{equation}
where $\bar d_E$ is the metric in Lemma \ref{L:C1} with $M=1$.
The candidate now is
\begin{align*}
f^\varepsilon(t) = f^\varepsilon_{t_{i-1}}+ \frac{t-t_{i-1}}{t_i - t_{i-1}}(f^\varepsilon_{t_i}-f^\varepsilon_{t_{i-1}}), \quad t \in [t_{i-1}, t_{i}[.
\end{align*}
To prove that $f^\varepsilon$ is a good approximation of $f$, we define $f^\pi: [0,\,T] \rightarrow C^1$ as
$$
f^\pi(t)= f(t_{i-1}) + \frac{t-t_{i-1}}{t_i - t_{i-1}}(f(t_i)-f(t_{i-1})), \quad t \in [t_{i-1}, t_{i}[.
$$
We evaluate the difference between $f^\pi$ and $f^\varepsilon$.
Setting $A:=\frac{t-t_{i-1}}{t_i - t_{i-1}}\in [0,\,1]$, taking into account the homogeneity of $d_E$ and the triangle inequality, we have
\begin{align*}
d_E(f^\pi(t),f^\varepsilon(t))&=d_E\Big(f(t_{i-1}) + A(f(t_i)-f(t_{i-1})),f^\varepsilon_{t_{i-1}}+A(f^\varepsilon_{t_i}-f^\varepsilon_{t_{i-1}})\Big)\\
&\leq d_E(f(t_{i-1}),f^\varepsilon_{t_{i-1}})+ \bar d_E(f(t_i),f^\varepsilon_{t_{i}})+\bar d_E(f(t_{i-1}),f^\varepsilon_{t_{i-1}})\leq 3 \frac{\delta}{6}= \frac{\delta}{2},
\end{align*}
where in the latter inequality we have used \eqref{bard2}.
This shows that
$$
\sup_{t \in [0,\,T]} d_E(f^\pi(t), f^\varepsilon(t)) \leq \frac{\delta}{2}.
$$
On the other hand, since $f$ is uniformly continuous, choosing the mesh $\varepsilon$ small enough we also have $\sup_{t \in [0,\,T]} d_E(f(t), f^\pi(t)) \leq \frac{\delta}{2}$; by the triangle inequality, $\sup_{t \in [0,\,T]} d_E(f(t), f^\varepsilon(t)) \leq \delta$, as required.
\emph{Second step}. $C^1([0,\,T]; {\mathcal D}_{\mathcal L_M})$ is dense in $C^0([0,\,T]; {\mathcal D}_{\mathcal L_M})$.
In this step we set $E:= {\mathcal D}_{\mathcal L_M}$. Let $f \in C^0([0,\,T]; {\mathcal D}_{\mathcal L_M})$, and set
$
f_\varepsilon(t):=\frac{1}{\varepsilon} \int_{t- \varepsilon}^t f(s) ds$, $t \in [0,T]$,
where we extend $f$ by setting $f(s):=f(0)$ for $s<0$.
This integral is well-defined; one can check that $f_\varepsilon$ belongs to $C^1([0,\,T]; {\mathcal D}_{\mathcal L_M})$,
and it converges to $f$ as $\varepsilon \rightarrow 0$ by Remark \ref{R:RiemannIntegral}-2.
\endproof
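The backward averages used in the second step of the proof of Lemma \ref{L:density} can be sketched numerically in a scalar toy model (replacing $\mathcal D_{\mathcal L_M}$ by $\mathbb R$, a simplifying assumption): the averaged functions converge uniformly even when $f$ has a kink.

```python
import numpy as np

# Backward averages f_eps(t) = (1/eps) int_{t-eps}^t f(s) ds, with f extended
# by f(0) for s < 0, converge uniformly to f (scalar toy model of the second
# step of Lemma L:density; f(t) = |t - 1/2| is continuous but not C^1).

def f(t):
    return np.abs(t - 0.5)

def f_eps(t, eps, m=2000):
    s = np.linspace(t - eps, t, m)
    return float(f(np.maximum(s, 0.0)).mean())   # uniform-grid average

ts = np.linspace(0.0, 1.0, 101)
errs = [max(abs(f_eps(t, eps) - f(t)) for t in ts) for eps in (0.2, 0.05, 0.01)]
print(errs)  # the sup-norm error shrinks with eps (roughly eps/2)
```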
\subsection{Some further technical results}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\begin{proposition}\label{P:new}
Condition \eqref{int_small_jumps} is equivalent to requiring that
$$
(1 \wedge |x|^2) \star \mu^X \in \mathcal A_{\textup{loc}}^+.
$$
\end{proposition}
\proof
We start by noticing that
\begin{align*}
(1 \wedge |x|^2) \star \mu^X &= \int_{]0,\,\cdot]\times\mathbb R} \ensuremath{\mathonebb{1}}_{\{|x| >1\}}\,\mu^X(ds\,dx)+\int_{]0,\,\cdot]\times\mathbb R} \ensuremath{\mathonebb{1}}_{\{|x| \leq1\}}\,x^2\,\mu^X(ds\,dx)\\
&=\sum_{s \leq \cdot }\ensuremath{\mathonebb{1}}_{\{|\Delta X_s|>1\}}+\sum_{s \leq \cdot} |\Delta X_s|^2
\,\ensuremath{\mathonebb{1}}_{\{|\Delta X_s| \leq 1\}}.
\end{align*}
The process $\sum_{s \leq \cdot }\ensuremath{\mathonebb{1}}_{\{|\Delta X_s|>1\}}$ is locally bounded, having bounded jumps (see Remark \ref{R:pred}-2.),
and therefore it is locally integrable.
On the other hand, since there is almost surely a finite number of jumps above one, condition \eqref{int_small_jumps} is equivalent to
\begin{equation}\label{Sfinite}
S:=\sum_{s \leq \cdot} |\Delta X_s|^2
\,\ensuremath{\mathonebb{1}}_{\{|\Delta X_s| \leq 1\}}
< \infty \,\,\,\,\textup{a.s.}
\end{equation}
Since $S$ is a process with bounded jumps, \eqref{Sfinite} is equivalent to saying that $S$ is locally bounded, and therefore locally integrable. This concludes the proof.
\endproof
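A deterministic toy example may help to visualize condition \eqref{int_small_jumps} (the jump sizes below are a hypothetical choice): a path with jumps of size $1/k$ has square-summable small jumps, although the jump sizes themselves are not summable.

```python
import math

# Toy check: jump sizes x_k = 1/k give sum_k |x_k|^2 1_{|x_k| <= 1} < infinity
# (the partial sums approach pi^2/6), while sum_k |x_k| diverges like the
# harmonic series.

jumps = [1.0 / k for k in range(1, 200001)]
sum_sq = sum(x * x for x in jumps if abs(x) <= 1.0)
sum_abs = sum(abs(x) for x in jumps)
print(sum_sq, sum_abs)
```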
In the remainder of this section we consider a c\`adl\`ag process $X$ satisfying condition \eqref{int_small_jumps}.
\begin{lemma}\label{L:c}
For all $c>0$,
$$
\ensuremath{\mathonebb{1}}_{\{|x| >c\}}\star \mu^X \in \mathcal{A}_{\textup{loc}}^+.
$$
\end{lemma}
\proof
It is enough to prove the result for $c \leq 1$. We have
\begin{align*}
\ensuremath{\mathonebb{1}}_{\{|x|>c\}}\star \mu^X = \frac{1}{c^2} c^2 \, \ensuremath{\mathonebb{1}}_{\{|x|>c\}}\star \mu^X \leq \frac{1}{c^2} (|x|^2 \wedge 1) \star \mu^X
\end{align*}
and the result follows by \eqref{int_small_jumps} and Proposition \ref{P:new}.
\endproof
\begin{lemma}\label{L:3.32}
Let $v:\mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ be a locally bounded function. Then for all $0 <a_0 \leq a_1$,
$$
|v(s,X_{s-}+x)-v(s,X_{s-})| \,\ensuremath{\mathonebb{1}}_{\{a_0 <|x| \leq a_1\}} \star \mu^X \in \mathcal{A}^+_{\textup{loc}}.
$$
\end{lemma}
\proof
Without loss of generality, we can take $a_0 < 1 \leq a_1$.
Since $X_{s-}$ is c\`agl\`ad, it is locally bounded, and therefore we can consider the localizing sequence $(\tau_n)_n$ defined,
for every $n \in \mathbb N$, by $\tau_n=\inf\{s \in \mathbb R_+:\,\,|X_{s-}| \geq n\}$. Then, for every $n \in \mathbb N$, on $[0,\,\tau_n]$,
\begin{align*}
\ensuremath{\mathonebb{1}}_{[0,\,\tau_n]}(s) |v(s,X_{s-}+x)-v(s,X_{s-})| \,\ensuremath{\mathonebb{1}}_{\{a_0 <|x| \leq a_1\}} \star \mu^X
&\leq 2 \sup_{y \in (-n-a_1, n+a_1)}|v(s, y)|\,\ensuremath{\mathonebb{1}}_{\{a_0 <|x| \leq a_1\}}\star \mu^X\\
&\leq 2 \sup_{y \in (-n-a_1, n+a_1)}|v(s, y)|\,\ensuremath{\mathonebb{1}}_{\{|x|> a_0\}}\star \mu^X
\end{align*}
that belongs to $\mathcal{A}^+$ by \eqref{int_small_jumps} and Lemma \ref{L:c}.
\endproof
We now recall the following fact, which is a generalization of Proposition 4.5 (formula (4.4) and the considerations below it) in \cite{BandiniRusso1}, obtained by replacing $ \ensuremath{\mathonebb{1}}_{\{|x|\leq 1\}}$ with $\frac{k(x)}{x}$, where $k\in \mathcal K$.
\begin{proposition}\label{P:2.10}
Let $v:\mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ be a function of class $C^{0,1}$. Then, for every $k \in \mathcal K$,
\begin{align*}
&|v(s,X_{s-} + x)-v(s,X_{s-})|^2\,\frac{k^2(x)}{x^2}\star \mu^X \in \mathcal{A}^+_{\rm loc}.
\end{align*}
In particular, the process
$
(v(s,X_{s-} + x)-v(s,X_{s-}))\frac{k(x)}{x}\,\star (\mu^X- \nu^X)
$
is a square integrable purely discontinuous local martingale.
\end{proposition}
We continue by giving a basic lemma.
\begin{lemma}\label{L:contmtg_d}
Let $k \in \mathcal K$.
The maps
\begin{align*}
(i)\quad &v \mapsto (v(s, X_{s-}+x)-v(s, X_{s-})) \frac{x-k(x)}{x}\star \mu^X=:D^{v},\\
(ii)\quad &v \mapsto (v(s, X_{s-}+x)-v(s, X_{s-})) \frac{k(x)}{x}\star (\mu^X- \nu^X)=:M^{v,d},
\end{align*}
from $C^{0,1}$ with values in $\mathbb D^{ucp}$ are well-defined and continuous.
\end{lemma}
\proof
Let $T >0$.
\noindent (i) For every $v \in C^{0,1}$, the process $D^v$ is defined pathwise. Indeed, let $a_0 >0$ be such that $k(x) = x$ for $x \in [-a_0, a_0]$. Then
\begin{align}\label{supDv}
&\sup_{t \in [0,\,T]} \Big|\int_{]0,\,t] \times\mathbb R}(v(s, X_{s-}(\omega)+x)-v(s, X_{s-}(\omega))) \frac{x-k(x)}{x} \mu^X(ds\,dx)\Big|\notag\\
&\leq C\sum_{0 < s\leq T}|v(s, X_{s-}+\Delta X_s)-v(s, X_{s-})|\,\ensuremath{\mathonebb{1}}_{\{|\Delta X_s| >a_0\}},
\end{align}
for some constant $C= C(k)$.
Since the number of jumps of $X$ larger than $a_0$ is finite, the previous quantity is finite a.s.
Let $v_\ell \rightarrow 0$ in $C^{0,1}$ as $\ell\rightarrow \infty$.
The continuity follows since, replacing $v$ by $v_\ell$ in \eqref{supDv}, the right-hand side converges to zero a.s., taking into account that $v_\ell$ converges to zero uniformly on compact sets.
\noindent (ii) For every $v \in C^{0,1}$,
$M^{v,d}$ is a square integrable local martingale by Proposition \ref{P:2.10}. Moreover, again by Proposition \ref{P:2.10}, taking $v=Id$, we have $|k(x)|^2 \star \nu^X \in \mathcal A^+_{\textup{loc}}$.
Let
$
\tau_n^1 := \inf\{t \geq 0:\,\,|X_{t-} |\geq n\}
$
and $\tau_n^2 \uparrow \infty$ be an increasing sequence of stopping times such that
$\int_{]0,\tau^2_n] \times \mathbb R} |k(x)|^2 \nu^X(ds\,dx) \in \mathcal A^+$ and $M^{v,d}_{\tau^2_n \wedge \cdot} $ is a square integrable martingale.
Take
$\tau_n:= \tau_n^1 \wedge \tau_n^2$.
At this point, let us fix $\varepsilon >0$. We have
\begin{align*}
\mathbb P\Big(\sup_{t \in [0,\,T]}|M^{v,d}_t|> \varepsilon\Big)&= \mathbb P\Big(\sup_{t \in [0,\,T]}|M^{v,d}_t|> \varepsilon, \tau_n \leq T\Big) + \mathbb P\Big(\sup_{t \in [0,\,T]}|M^{v,d}_t|> \varepsilon, \tau_n > T\Big)\\
&\leq \mathbb P\Big(\tau_n \leq T\Big)+\mathbb P\Big(\sup_{t \in [0,\,T]}|M^{v,d}_{\tau_n \wedge t} |> \varepsilon\Big).
\end{align*}
Let $\delta >0$ be fixed, and let us choose $n_0 \in \mathbb N$ such that $\mathbb P(\tau_n \leq T) \leq \delta$ for all $n \geq n_0$; this is possible since $\tau_n \uparrow \infty$.
For $n \geq n_0$, applying the Chebyshev inequality, the previous inequality gives
\begin{align}\label{Mvest}
\mathbb P\Big(\sup_{t \in [0,\,T]}|M^{v,d}_t|> \varepsilon\Big)
&\leq \delta+\mathbb P\Big(\sup_{t \in [0,\,T]}|M^{v,d}_{\tau_n \wedge t} |> \varepsilon\Big)\leq \delta + \frac{\mathbb E\Big[\sup_{t \in [0,\,T]}|M^{v,d}_{\tau_n \wedge t}|^2\Big]}{\varepsilon^2}.
\end{align}
By Doob's inequality we have
\begin{equation}\label{Doob}
\mathbb E\Big[\sup_{t \in [0,\,T]}|M^{v,d}_{\tau_n \wedge t}|^2\Big]\leq
4 \mathbb E[|M^{v,d}_{\tau_n \wedge T}|^2]
=4\,\mathbb E\left[\langle M^{v,d},M^{v,d}\rangle_{\tau_n \wedge T}\right].
\end{equation}
Denoting by $[-m, m]$ the compact support of $k$, we have
\begin{align}\label{Mvdb}
\langle M^{v,d},M^{v,d}\rangle_{\tau_n \wedge T} &\leq \int_{]0,\,\tau_n \wedge T] \times\mathbb R} \int_0^1 da\, |\partial_x v(s, X_{s-}+ a x)|^2 |k(x)|^2 \nu^X(ds\,dx)\notag\\
&\leq \sup_{y \in [-(m+n), m+n], s \in [0,\,T]} |\partial_x v(s, y)|^2 \int_{]0,\tau_n \wedge T] \times \mathbb R}|k(x)|^2 \nu^X(ds\,dx).
\end{align}
Collecting \eqref{Doob} and \eqref{Mvdb}, inequality \eqref{Mvest} becomes
\begin{align*}
\mathbb P\Big(\sup_{t \in [0,\,T]}|M^{v,d}_t|> \varepsilon\Big)
\leq \delta + \frac{4}{\varepsilon^2}\sup_{y \in [-(m+n), m+n], s \in [0,\,T]} |\partial_x v(s, y)|^2\,\mathbb E\Big[ \int_{]0,\tau_n \wedge T] \times \mathbb R}|k(x)|^2 \nu^X(ds\,dx)\Big].
\end{align*}
Let us now show the continuity with respect to $v$.
Let $v_\ell \rightarrow 0$ in $C^{0,1}$ as $\ell\rightarrow \infty$.
The previous estimate shows that
\begin{align*}
&\limsup_{\ell \rightarrow \infty} \mathbb P\Big(\sup_{t \in [0,\,T]}|M^{v_\ell,d}_t|> \varepsilon\Big)\leq \delta.
\end{align*}
By the arbitrariness of $\delta$ this shows that $(M^{v_\ell, d})$ converges to zero u.c.p.
\endproof
\begin{lemma}\label{L:D4}
Let $k_1, k_2 \in \mathcal K$. Then $|k_1(x)-k_2(x)| \star \nu^X
\in \mathcal{A}^+_{\textup{loc}}
$.
\end{lemma}
\proof
Let $a_0>0$ be such that $k_1(x) = k_2(x) = x$ on $[-a_0, a_0]$. Then
\begin{align*}
|k_1(x)-k_2(x)| \star \nu^X &= |k_1(x)-k_2(x)|\ensuremath{\mathonebb{1}}_{\{|x| >a_0\}} \star \nu^X\leq (||k_1||_{\infty} + ||k_2||_{\infty})\ensuremath{\mathonebb{1}}_{\{|x| >a_0\}} \star \nu^X
\end{align*}
that belongs to $\mathcal{A}^+_{\textup{loc}}$ by Lemma \ref{L:c}.
\endproof
\begin{lemma}\label{R:B}
Let $v:\mathbb R_+ \times \mathbb R \rightarrow \mathbb R$ be a locally bounded function.
Condition \eqref{G1_cond} is equivalent to
\begin{align*}
&(v(s,X_{s-} + x)-v(s,X_{s-}))\,\frac{k(x)}{x}\in {\mathcal G}^1_{\textup{loc}}(\mu^X), \qquad \forall k \in \mathcal K.
\end{align*}
\end{lemma}
\proof
Let $k_1, k_2 \in \mathcal K$. It is enough to show that
$\left|(v(s,X_{s-} + x)-v(s,X_{s-}))\,\frac{k_1(x)-k_2(x)}{x}\right|\star \mu^X\in \mathcal A_{\textup{loc}}^+$.
Let $a_0, a_1>0$
be such that $k_1(x) = k_2(x) = x$ for $|x| \leq a_0$, and $k_1(x)=k_2(x) =0$ for $|x| > a_1$.
We have
\begin{align*}
&\Big|(v(s,X_{s-} + x)-v(s,X_{s-}))\,\frac{k_1(x)-k_2(x)}{x}\Big| \star \mu^X \\
&= \Big|(v(s,X_{s-} + x)-v(s,X_{s-}))\,\frac{k_1(x)-k_2(x)}{x}\Big| \ensuremath{\mathonebb{1}}_{\{a_0 < |x| \leq a_1\}}\star \mu^X\\
&\leq \frac{||k_1||_{\infty} + ||k_2||_{\infty}}{a_0}\ensuremath{\mathonebb{1}}_{\{a_0 < |x| \leq a_1\}}\star \mu^X
\end{align*}
that belongs to $\mathcal A_{\textup{loc}}^+$ by
Lemma \ref{L:3.32}.
\endproof
\subsection{Recalls on discontinuous semimartingales and related Jacod's martingale problems}\label{A:D}
\setcounter{equation}{0}
\setcounter{theorem}{0}
We recall that a special semimartingale is a semimartingale $X$ which admits a decomposition $X=M+V$, where $M$ is a local martingale and $V$ is a predictable process of finite variation such that $V_0=0$, see Definition 4.21, Chapter I, in \cite{JacodBook}. Such a decomposition is unique, and is called the canonical decomposition of $X$, see respectively Proposition 3.16 and Definition 4.22, Chapter I, in \cite{JacodBook}.
In the following we set $\tilde{\mathcal K}:=\{k:\mathbb R \rightarrow \mathbb R \textup{ bounded: } k(x) =x \textup{ in a neighborhood of } 0\}$.
Assume now that $X$ is a semimartingale
with jump measure $\mu^X$.
Given $k \in \tilde{\mathcal K}$, the process
$
X^k= X- \sum_{s \leq \cdot} [\Delta X_s - k(\Delta X_s)]
$
is a special semimartingale with unique decomposition
\begin{equation}\label{dec_Xk_semimart}
X^k= X^c + M^{k,d} + B^{k, X},
\end{equation}
where $M^{k,d}$ is a purely discontinuous local martingale such that $M^{k,d}_0=0$, $X^c$ is the unique continuous martingale part of $X$ (it coincides with the process $X^c$ introduced in Proposition \ref{P:uniqdec}), and $B^{k, X}$ is a predictable process of bounded variation vanishing at zero.
Let now $(\check \Omega, \check {\mathcal F}, \check {\mathbb F})$ be the canonical filtered space, and $\check X$ the canonical process.
According to Definition 2.6, Chapter II in \cite{JacodBook}, the characteristics of $X$ associated with $k \in \tilde {\mathcal K}$
are then the triplet $(B^k,C,\nu)$ on $(\check \Omega, \check {\mathcal F}, \check {\mathbb F})$ such that
the following items hold.
\begin{itemize}
\item[(i)] $B^k$ is $\check {\mathbb F}$-predictable, with finite variation on finite intervals, and $B_0^k=0$, i.e., $B^{k, X}= B^k \circ X$ is the process in \eqref{dec_Xk_semimart};
\item[(ii)] $C$ is
an $\check {\mathbb F}$-adapted continuous process of finite variation with $C_0=0$, i.e., $C^X:= C \circ X= \langle \check X^c, \check X^c\rangle$;
\item[(iii)] $\nu$ is an $\check {\mathbb F}$-predictable random measure on $\mathbb R_+ \times \mathbb R$, i.e., $\nu^X: = \nu \circ X$ is the compensator of $\mu^X$.
\end{itemize}
\begin{theorem}[Theorem 2.42, Chapter II, in \cite{JacodBook}]\label{T: equiv_mtgpb_semimart}
Let $X$ be an adapted c\`adl\`ag process. Let $B^k$ be an $\check{\mathbb F}$-predictable process, with finite variation on finite intervals, and $B_0^k=0$,
$C$ be
an $\check{\mathbb F}$-adapted continuous process of finite variation with $C_0=0$, and $\nu$ be an $\check{\mathbb F}$-predictable random measure on $\mathbb R_+ \times \mathbb R$.
There is equivalence between the two following statements.
\begin{itemize}
\item [(i)] $X$ is a real semimartingale with characteristics $(B^k, C, \nu)$.
\item [(ii)] For each bounded function $f$ of class $C^2$, the process
\begin{align}\label{f:ito}
&f(X_{\cdot}) - f(X_0) - \frac{1}{2} \int_0^{\cdot} f''(X_s) \,dC^X_s- \int_0^{\cdot} f'(X_s) \,d B_s^{k,X}\notag\\
&- \int_{]0,\cdot]\times \mathbb R} (f(X_{s-} + x) -f(X_{s-})-k(x)\,f'(X_{s-}))\,\nu^X(ds\,dx)
\end{align}
is a local martingale.
\end{itemize}
\end{theorem}
\begin{remark}\label{R:D2}
Assuming item (i) in Theorem \ref{T: equiv_mtgpb_semimart}, if $f$ is a bounded function of class $C^{1,2}$, formula \eqref{f:ito} generalizes to
\begin{align*}
&f(t,X_{t}) - f(0,X_0)- \int_0^{t}\frac{\partial }{\partial s}f(s, X_s)ds - \frac{1}{2} \int_0^{t} \frac{\partial^2 }{\partial x^2}f(s, X_s) \,dC^X_s- \int_0^{t} \frac{\partial}{\partial x}f(s, X_s) \,d B_s^{k,X}\notag\\
&- \int_{]0,t]\times \mathbb R} \Big(f(s, X_{s-} + x) -f(s, X_{s-})-k(x)\,\frac{\partial }{\partial x}f(s, X_{s-})\Big)\,\nu^X(ds\,dx), \quad t \in [0,\,T].
\end{align*}
\end{remark}
\small
\paragraph{Acknowledgements.}
The work of the first named author was partially supported by PRIN 2015 \emph{Deterministic and Stochastic Evolution equations}.
The work of the second named author was partially supported by a public grant as part of the
{\it Investissement d'avenir project, reference ANR-11-LABX-0056-LMH,
LabEx LMH,}
in a joint call with Gaspard Monge Program for optimization, operations research and their interactions with data sciences.
\addcontentsline{toc}{chapter}{Bibliography}
\end{document}
\begin{document}
\title{space{-3cm}
\thankyou{1}{Partially supported by CONACYT Research Grant 102783.}
\begin{abstract} \noindent
The Stiefel manifold $V_{m+1,2}$ of 2-frames in $\mathbb{R}^{m+1}$ is acted upon by the orthogonal group $\mathrm{O}(2)$. By restriction, there are corresponding actions of the dihedral group of order 8, $D_8$, and of the rank-2 elementary 2-group $\mathbb{Z}_2\times\mathbb{Z}_2$. We use the Cartan-Leray spectral sequences of these actions to compute the integral homology and cohomology groups of the configuration spaces $B(\mathrm{P}^m,2)$ and $F(\mathrm{P}^m,2)$ of (unordered and ordered) pairs of points on the real projective space $\mathrm{P}^m$.
\end{abstract}
\noindent \thanks{\it{2010 Mathematics Subject Classification:}
{\small 55R80, 55T10, 55M30, 57R19, 57R40.}
\\
\it{Keywords and phrases:}
$2$-point configuration spaces, dihedral group of order~$8$,
twisted Poincar\'e duality, torsion linking form.
}
\section{Introduction}
The integral cohomology rings of
the configuration spaces $F(\mathrm{P}^m,2)$ and $B(\mathrm{P}^m,2)$ of two distinct points,
ordered and unordered respectively,
in the $m$-dimensional real projective space $\mathrm{P}^m$
have recently been computed in~\cite{D8}. The method in that paper
relies on a rather technical bookkeeping in the corresponding
Bockstein spectral sequences. As a consequence, a
reader following the details in that
work might miss part of the geometrical insight of the problem
(in Definition~\ref{inicio1Handel} and subsequent considerations).
To help remedy such a situation, we offer in this paper an
alternative approach to the additive structure.
The basic results are presented in Theorems~\ref{descripcionordenada} and
\ref{descripciondesaordenada} below, where
the notation $\langle k\rangle$ stands for
the elementary abelian 2-group of rank $k$, $\mathbb{Z}_2\oplus\cdots\oplus\mathbb{Z}_2$ ($k$ times),
and where we write $\{k\}$ as a
shorthand for $\langle k\rangle\oplus\mathbb{Z}_4$.
\begin{teorema}\label{descripcionordenada}
For $n>0$,
$$
H^i(F(\mathrm{P}^{2n},2)) = \begin{cases}
\mathbb{Z}, & i=0\mbox{ ~or~ }i=4n-1;\\
\left\langle\frac{i}2+1\right\rangle, & i\mbox{ ~even,~ }1\leq i\leq2n;\\
\left\langle\frac{i-1}2\right\rangle, & i\mbox{ ~odd,~ }1\leq i\leq2n;\\
\left\langle2n+1-\frac{i}2\right\rangle, & i\mbox{ ~even,~ }
2n<i<4n-1;\rule{6mm}{0mm}\\
\left\langle2n-\frac{i+1}2\right\rangle, & i\mbox{ ~odd,~ }2n<i<4n-1;\\
0, & \mbox{otherwise}.
\end{cases}
$$
\noindent For $n\geq0$,
$$
H^i(F(\mathrm{P}^{2n+1},2)) = \begin{cases}
\mathbb{Z}, & i=0;\\
\left\langle\frac{i}2+1\right\rangle, & i\mbox{ ~even,~ }1\leq i\leq2n;\\
\left\langle\frac{i-1}2\right\rangle, & i\mbox{ ~odd,~ }1\leq i\leq2n;\\
\mathbb{Z}\oplus\langle n\rangle, & i=2n+1;\\
\left\langle2n+1-\frac{i}2\right\rangle, & i\mbox{ ~even,~ }2n+1<i\leq4n+1;\\
\left\langle2n+1-\frac{i-1}2\right\rangle, & i\mbox{ ~odd,~ }2n+1<i\leq4n+1;\\
0, & \mbox{otherwise}.
\end{cases}
$$
\end{teorema}
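The odd case of the theorem above can be tabulated mechanically. The following standalone sketch (the helper name \texttt{H\_F\_odd} is ours, not the paper's) encodes the statement verbatim and also checks that the signed free ranks sum to zero, as they must for the closed odd-dimensional manifold $F(\mathrm{P}^{2n+1},2)$:

```python
# H_F_odd(n, i) = (free rank, rank of the elementary abelian 2-group
# summand) of H^i(F(P^{2n+1},2)), read off from the theorem above.
def H_F_odd(n, i):
    if i == 0:
        return (1, 0)
    if 1 <= i <= 2 * n:
        return (0, i // 2 + 1) if i % 2 == 0 else (0, (i - 1) // 2)
    if i == 2 * n + 1:
        return (1, n)
    if 2 * n + 1 < i <= 4 * n + 1:
        if i % 2 == 0:
            return (0, 2 * n + 1 - i // 2)
        return (0, 2 * n + 1 - (i - 1) // 2)
    return (0, 0)

# F(P^{2n+1},2) is a closed (4n+1)-manifold, so its Euler characteristic,
# the alternating sum of the free ranks, must vanish.
for n in range(1, 6):
    assert sum((-1) ** i * H_F_odd(n, i)[0] for i in range(4 * n + 2)) == 0
```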
\begin{teorema}\label{descripciondesaordenada}
Let $0\leq b\leq3$. For $n>0$,
$$
H^{4a+b}(B(\mathrm{P}^{2n},2)) = \begin{cases}
\mathbb{Z}, & 4a+b=0 \mbox{ or } 4a+b=4n-1;\\
\{2a\}, & b=0<a,\;\,4a+b\leq2n;\\
\left\langle2a\right\rangle, & b=1,\;\,4a+b\leq2n;\\
\left\langle2a+2\right\rangle, & b=2,\;\,4a+b\leq2n;\\
\left\langle2a+1\right\rangle, & b=3,\;\,4a+b\leq2n;\\
\{2n-2a\}, & b=0,\;\,2n<4a+b<4n-1;\\
\langle2n-2a-1\rangle, & b=1,\;\,2n<4a+b<4n-1;\\
\langle2n-2a\rangle, & b=2,\;\,2n<4a+b<4n-1;\\
\langle2n-2a-2\rangle, & b=3,\;\,2n<4a+b<4n-1;\\
0, & \mbox{otherwise}.
\end{cases}
$$
\noindent For $n\geq0$,
{\footnotesize $$
H^{4a+b}(B(\mathrm{P}^{2n+1},2)) = \begin{cases}
\mathbb{Z}, & 4a+b=0;\\
\{2a\}, & b=0<a,\;\,4a+b<2n+1;\\
\left\langle2a\right\rangle, & b=1,\;\,4a+b<2n+1;\\
\left\langle2a+2\right\rangle, & b=2,\;\,4a+b<2n+1;\\
\left\langle2a+1\right\rangle, & b=3,\;\,4a+b<2n+1;\\
\mathbb{Z}\oplus\langle n\rangle, & 4a+b=2n+1;\\
\{2n-2a\}, & b=0,\;\,2n+1<4a+b\leq4n+1;\\
\langle2n+1-2a\rangle, & b=1,\;\,2n+1<4a+b\leq4n+1;\\
\langle2n-2a\rangle, & b\in\{2,3\},\;\,2n+1<4a+b\leq4n+1;\\
0, & \mbox{otherwise}.
\end{cases}
$$}
\end{teorema}
\
As noted in~\cite{D8},
Theorems~\ref{descripcionordenada} and~\ref{descripciondesaordenada}
can be coupled with the Universal Coefficient Theorem (UCT),
expressing homology in
terms of cohomology (e.g.~\cite[Theorem~56.1]{munkres}), in order to give
explicit descriptions of the corresponding integral homology groups.
Another immediate consequence is
that, together with Poincar\'e duality (in its not necessarily orientable
version, cf.~\cite[Theorem~3H.6]{hatcher} or~\cite[Theorem~4.51]{ranicki}),
Theorems~\ref{descripcionordenada} and~\ref{descripciondesaordenada}
give a corresponding explicit description
of the $w_1$-twisted homology and cohomology groups of $F(\mathrm{P}^m,2)$ and $B(\mathrm{P}^m,2)$.
Details are given in Section~\ref{linkinsection}---a
second contribution not discussed in~\cite{D8}.
\begin{nota}\label{m1}
Note that, after inverting 2, both $B(\mathrm{P}^m,2)$ and $F(\mathrm{P}^m,2)$ are homology
spheres. This assertion can be considered as a partial
generalization of the fact that both $F(\mathrm{P}^1,2)$ and $B(\mathrm{P}^1,2)$ have
the homotopy type of a circle; for $B(\mathrm{P}^1,2)$ this follows from
Lemma~\ref{inicio2Handel} and Example~\ref{V22} below, while the situation for
$F(\mathrm{P}^1,2)$ comes from the fact that $\mathrm{P}^1$ is a Lie group---so that
$F(\mathrm{P}^1,2)$ is in fact diffeomorphic to $S^1\times(S^1-\{1\})$. In particular,
any product of positive dimensional classes
in either $H^*(F(\mathrm{P}^1,2))$ or $H^*(B(\mathrm{P}^1,2))$ is trivial. The
trivial-product property also
holds for both $H^*(F(\mathrm{P}^2,2))$ and $H^*(B(\mathrm{P}^2,2))$ in view of the $\mathrm{P}^2$-case
in Theorems~\ref{descripcionordenada} and~\ref{descripciondesaordenada}.
For $m\geq 3$, the multiplicative structure of $H^*(F(\mathrm{P}^m,2))$ and $H^*(B(\mathrm{P}^m,2))$ was first worked out in~\cite{tesiscarlos}.
\end{nota}
\begin{definicion}\label{inicio1Handel}
Recall that $D_8$ can be expressed as the usual wreath
product extension
\begin{equation}\label{wreath}
1\to\mathbb{Z}_2\times\mathbb{Z}_2\to D_8\to\mathbb{Z}_2\to1.
\end{equation}
Let $\rho_1,\rho_2\in D_8$ generate the normal subgroup
$\mathbb{Z}_2\times\mathbb{Z}_2$, and let (the class of)
$\rho\in D_8$ generate the quotient group $\mathbb{Z}_2$ so that, via conjugation,
$\rho$ switches $\rho_1$ and $\rho_2$. $D_8$ acts
freely on the Stiefel
manifold $V_{n,2}$ of orthonormal $2$-frames in $\mathbb{R}^n$
by setting
{\small $$\rho(v_1,v_2)=(v_2,v_1),\quad
\rho_1(v_1,v_2)=(-v_1,v_2), \quad\mbox{and}\quad\rho_2(v_1,v_2)=(v_1,-v_2).
$$}
This describes a group inclusion $D_8\hookrightarrow\mathrm{O}(2)$
where the rotation
$\rho\rho_1$ is a generator for $\mathbb{Z}_4=D_8\cap\mathrm{SO}(2)$.
\end{definicion}
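As a quick sanity check on this definition (a brute-force sketch, not part of the paper), one can realize $\rho,\rho_1,\rho_2$ as signed $2\times2$ permutation matrices and verify that they generate a group of order $8$ in which $\rho\rho_1$ has order $4$ and conjugation by $\rho$ swaps $\rho_1$ and $\rho_2$:

```python
# rho swaps the two frame vectors; rho1, rho2 each negate one of them.
def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

rho  = ((0, 1), (1, 0))
rho1 = ((-1, 0), (0, 1))
rho2 = ((1, 0), (0, -1))

# Close {rho, rho1, rho2} under multiplication.
G = {rho, rho1, rho2}
while True:
    bigger = G | {mul(a, b) for a in G for b in G}
    if bigger == G:
        break
    G = bigger
assert len(G) == 8                       # dihedral group of order 8

I = ((1, 0), (0, 1))
r = mul(rho, rho1)                       # the rotation rho*rho1
assert mul(r, r) != I and mul(mul(r, r), mul(r, r)) == I   # order 4
assert mul(mul(rho, rho1), rho) == rho2  # rho conjugates rho1 to rho2
```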
\begin{notacion}\label{convenience}
Throughout the paper the letter $G$ stands for either
$D_8$ or its subgroup $\mathbb{Z}_2\times\mathbb{Z}_2$ in~(\ref{wreath}). Likewise,
$E_m=E_{m,G}$ denotes the orbit space
of the $G$-action on $V_{m+1,2}$ indicated in Definition~\ref{inicio1Handel},
and $\theta\colon V_{m+1,2}\to E_{m,G}$ represents the canonical projection.
Our interest lies in the (kernel of the) morphism induced in
cohomology by the map
\begin{equation}\label{lap}
p=p_{m,G}\colon E_m\to BG
\end{equation}
that classifies the $G$-action on $V_{m+1,2}$.
\end{notacion}
\begin{lema}[{\cite[Proposition~2.6]{handel68}}]\label{inicio2Handel}
$E_m$ is a strong deformation retract of $B(\mathrm{P}^m,2)$
if $G=D_8$, and of $F(\mathrm{P}^m,2)$ if $G=\mathbb{Z}_2\times\mathbb{Z}_2$.
\cajita
\end{lema}
Thus, the cohomology properties of the configuration spaces we are
interested in---and of~(\ref{lap}), for that matter---can be approached
via the Cartan-Leray spectral sequence (CLSS) of the $G$-action on $V_{m+1,2}$.
Such an analysis yields:
\begin{proposicion}\label{HFpar}
Let $m$ be even. The map $p^*\colon H^i(BG)\to H^i(E_m)$ is:
{\em\begin{enumerate}
\item {\em an isomorphism for $i\leq m;$}
\item {\em an epimorphism with nonzero kernel for $m<i<2m-1;$}
\item {\em the zero map for $2m-1\leq i$.}
\end{enumerate}}
\end{proposicion}
\begin{proposicion}\label{HF1m4}
Let $m$ be odd. The map $p^*\colon H^i(BG)\to H^i(E_m)$ is:
{\em \begin{enumerate}
\item {\em an isomorphism for $i<m;$}
\item {\em a monomorphism onto the
torsion subgroup of $H^i(E_m)$ for $i=m;$}
\item {\em an epimorphism with nonzero kernel
for $m<i\leq 2m-1;$}
\item {\em the zero map for $2m-1<i$.}
\end{enumerate}}
\end{proposicion}
Kernels in the above two results are carefully described
in~\cite{D8}. The approach in this paper allows us to prove
Propositions~\ref{HFpar} and~\ref{HF1m4},
except for item 3 in Proposition~\ref{HF1m4} if $G=D_8$ and $\,m\equiv3\bmod4$.
Since the ring $H^*(BG)$ is well known
(see Theorem~\ref{modulestructure} and the comments following
Lemma~\ref{kunneth}), the multiplicative
structure of $H^*(E_m)$ through dimensions at most $m$
follows from the four results stated in this section. Of course,
the ring structure in larger dimensions
depends on giving explicit generators for the ideal Ker$(p^*)$.
In this direction we note that the methods in this paper also yield:
\begin{proposicion}\label{mono4}
Let $G=D_8$. Assume $m\not\equiv3\bmod4$ and consider the
map in~{\em(\ref{lap})}. In dimensions at most $2m-1$, every
nonzero element in $\mathrm{Ker}(p^*)$ has order $2$,
i.e.~$2\cdot\mathrm{Ker}(p^*)=0$ in those dimensions.
In fact, every $4\ell$-dimensional integral cohomology class in $BD_8$
generating a $\mathbb{Z}_4$-group maps under
$p^*$ into a class which also generates a $\mathbb{Z}_4$-group provided
$\ell<m/2$---otherwise the class maps trivially for dimensional reasons.
\end{proposicion}
\begin{nota}\label{F2dimension}
By Lemma~\ref{kunneth} below,
$\mathrm{Ker}(p^*)$ is also killed by multiplication by 2
when $G=\mathbb{Z}_2\times \mathbb{Z}_2$ (any $m$, any dimension).
Our approach allows us to explicitly describe
the (dimension-wise) 2-rank of $\mathrm{Ker}(p^*)$ in the cases where we
know this is an $\mathbb{F}_2$-vector space (i.e.~when either $G=\mathbb{Z}_2\times\mathbb{Z}_2$ or
$m\not\equiv3\bmod4$,
see Examples~\ref{muchkernel1} and~\ref{muchkernel2}).
Unfortunately the methods used in the proofs of
Propositions~\ref{HFpar}--\ref{mono4} break down for $E_{4n+3,D_8}$,
and Section~\ref{antesultima}
discusses a few such aspects focusing attention on the case $n=0$.
\end{nota}
The spectral sequence methods in this paper are similar
in spirit to those in~\cite{idealvalued}
and~\cite{FZ}. In the latter reference, Feichtner and Ziegler
describe the integral cohomology rings of {\it ordered} configuration spaces
on spheres by means of a full analysis of the Serre
spectral sequence (SSS) associated
to the Fadell-Neuwirth fibration $\pi\colon F(S^k,n)\to S^k$ given by
$\pi(x_1,\ldots, x_n)=x_n$ (a similar study is carried out
in~\cite{FZ02}, but in the context of {\em
ordered} orbit configuration spaces).
One of the main achievements of the present
paper is a successful calculation of cohomology groups
of {\it unordered} configuration spaces (on real projective spaces),
where no Fadell-Neuwirth
fibrations are available---instead we rely on
Lemma~\ref{inicio2Handel} and the CLSS\footnote{Our
CLSS calculations can also be done in terms of the SSS of the fibration
$V_{m+1,2}\stackrel{\theta}\to E_{m,G}\stackrel{p}{\to} BG$.}
of the $G$-action on $V_{m+1,2}$. Also worth stressing is the fact
that we succeed in computing cohomology groups with {\it integer} coefficients,
whereas the Leray spectral sequence (and its $\Sigma_k$-invariant version)
for the inclusion $F(X,k)\hookrightarrow X^k$ has proved to be
effectively computable mainly when {\it field} coefficients are used
(\cite{FeTa,Totaro}).
A major obstacle we have to confront (not
present in~\cite{FZ}) comes from the fact that the spectral sequences
we encounter often have non-simple systems of local coefficients.
This is also the situation in~\cite{idealvalued}, where
the two-hyperplane case of Gr\"unbaum's mass partition
problem~(\cite{grunbaum}) is studied from the Fadell-Husseini
index theory viewpoint~\cite{FH}. Indeed, Blagojevi\'c and Ziegler
deal with twisted coefficients in their main SSS, namely the one associated to
the Borel fibration
\begin{equation}\label{FHindex}
S^m\times S^m \to ED_8\times_{D_8}(S^m\times S^m)
\stackrel{\overline{p}}\to BD_8
\end{equation}
\noindent where the $D_8$-action on $S^m\times S^m$ is the obvious extension of
that in Definition~\ref{inicio1Handel}. Now, the main goal in~\cite{idealvalued}
is to describe the kernel of the map induced by $\overline{p}$ in integral
cohomology---the so-called Fadell-Husseini ($\mathbb{Z}$-)index of $D_8$
acting on $S^m\times S^m$, Index$_{D_8}(S^m\times S^m)$.
Since $D_8$ acts freely on $V_{m+1,2}$, Index$_{D_8}(S^m\times
S^m)$ is contained in the kernel of the map induced in integral
cohomology by the map $p\colon E_m\to BD_8$
in Proposition~\ref{mono4} (whether or not $m\equiv3\bmod4$).
In particular, the work in~\cite{idealvalued} can be used to
identify explicit elements in Ker$(p^*)$ and, as observed
in Remark~\ref{F2dimension}, our approach allows us to assess, for $m\not
\equiv3\bmod4$ (in Examples~\ref{muchkernel1} and \ref{muchkernel2}), how
much of the actual kernel is still lacking
description: \cite{idealvalued} gives just a bit less than half
the expected elements in Ker$(p^*)$.
\section{Preliminary cohomology facts}\label{HBprelim}
As shown {in~\cite{AM} (see also~\cite{handel68}
for a straightforward approach)}, the mod 2 cohomology of $D_8$ is
a polynomial ring on three generators $x,x_1,x_2\in H^*(BD_8;\mathbb{F}_2)$,
the first two of dimension 1, and the last one of dimension 2,
subject to the single relation $x^2=x\cdot x_1$. The classes $x_i$ are
the restrictions of the universal Stiefel-Whitney classes $w_i$
($i=1,2$) under the map corresponding to the group inclusion
$D_8\subset\mathrm{O}(2)$ in Definition~\ref{inicio1Handel}.
On the other hand, the class $x$ is not characterized by the relation
$x^2=x\cdot x_1$, but by the requirement that, for all $m$, $x$
pulls back, under the map $p_{m,D_8}$ in~(\ref{lap}), to the map
$u\colon B(\mathrm{P}^m,2)\to\mathrm{P}^\infty$ classifying the obvious double cover
$F(\mathrm{P}^m,2)\to B(\mathrm{P}^m,2)$---see~\cite[Proposition~3.5]{handel68}.
In particular:
\begin{lema}\label{handel68D}
For $i\geq0$, $H^i(BD_8;{\mathbb{F}_2})=\langle i+1\rangle$.
\cajita
\end{lema}
\begin{corolario}\label{handel68B}
For any $m$, $$H^i(B(\mathrm{P}^m,2);{\mathbb{F}_2})=\begin{cases}\langle
i+1\rangle,& 0\leq i\leq m-1;\\
\langle2m-i\rangle,& m\leq i\leq 2m-1;\\0,&\mbox{otherwise}.\end{cases}$$
\end{corolario}
\begin{proof}
The assertion for $i\geq2m$ follows from Lemma~\ref{inicio2Handel} and
dimensional considerations.
Poincar\'e duality implies that
the assertion for $m\leq i\leq 2m-1$ follows from that for $0\leq i\leq m-1$.
Since $V_{m+1,2}$ is ($m-2$)-connected,
the assertion for $0\leq i\leq m-1$ follows from Lemma~\ref{handel68D},
using the fact (a consequence
of~\cite[Proposition~3.6 and~(3.8)]{handel68})
that, in the mod 2 SSS for the fibration $V_{m+1,2}\stackrel{\theta}{\to}
E_{m,D_8}\stackrel{p}{\to} BD_8$,
the two indecomposable elements in $H^*(V_{m+1,2};\mathbb{F}_2)$
transgress to nontrivial elements.
\end{proof}
Let $\mathbb{Z}_\alpha$ denote the $\mathbb{Z}[D_8]$-module
whose underlying group is free on a generator $\alpha$ on which
each of $\rho,\rho_1,\rho_2\in D_8$ acts via multiplication
by $-1$ (in particular, elements in $D_8\cap\mathrm{SO}(2)$ act
trivially). Corollaries~\ref{HBD} and~\ref{HBDT} below
are direct consequences of the following description, proved
in~\cite{handeltohoku} (see also~\cite[Theorem~4.5]{idealvalued}),
of the ring $H^*(BD_8)$ and of the $H^*(BD_8)$-module $H^*(BD_8;
\mathbb{Z}_\alpha)$:
\begin{teorema}[Handel~\cite{handeltohoku}]\label{modulestructure}
$H^*(BD_8)$ is generated by classes $\mu_2$, $\nu_2$, $\lambda_3$,
and $\kappa_4$ subject to the relations
$2\mu_2=2\nu_2=2\lambda_3=4\kappa_4=0$, $\nu_2^2=\mu_2\nu_2$, and
$\lambda_3^2=\mu_2\kappa_4$.
$H^*(BD_8;\mathbb{Z}_\alpha)$ is the free $H^*(BD_8)$-module on
classes $\alpha_1$ and $\alpha_2$ subject to the relations
$2\alpha_1=4\alpha_2=0$, $\lambda_3\alpha_1=\mu_2\alpha_2$, and
$\kappa_4\alpha_1=\lambda_3\alpha_2$. Subscripts in the notation of these
six generators indicate their cohomology dimensions.
\cajita
\end{teorema}
The notation $a_2$, $b_2$, $c_3$, and $d_4$
was used in~\cite{handeltohoku}
instead of the current $\mu_2$, $\nu_2$, $\lambda_3$, and $\kappa_4$.
The change is made in order to avoid confusion with the generic notation $d_i$
for differentials in the several spectral sequences considered
in this paper.
\begin{corolario}\label{HBD}
For $a\geq0$ and $0\leq b\leq3$,
$$
H^{4a+b}(BD_8)=\begin{cases}
\mathbb{Z}, & (a,b)=(0,0);\\
\{2a\}, & b=0<a;\\
\langle 2a\rangle, & b=1;\\
\langle 2a+2\rangle, & b=2;\\
\langle 2a+1\rangle, & b=3.
\end{cases}
$$
\
\cajita
\end{corolario}
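As a consistency check (a sketch with helper names of our own choosing), the integral groups of Corollary~\ref{HBD} must be compatible with Lemma~\ref{handel68D} via the universal coefficient theorem: $\dim_{\mathbb{F}_2}H^i(BD_8;\mathbb{F}_2)$ equals the free rank of $H^i(BD_8)$ plus the number of cyclic $2$-power summands of $H^i(BD_8)$ and of $H^{i+1}(BD_8)$.

```python
def torsion_summands(i):
    # Number of cyclic 2-power summands of H^i(BD_8; Z), read off the
    # corollary above; recall {2a} stands for <2a> + Z/4, i.e. 2a+1
    # cyclic summands.
    if i == 0:
        return 0
    a, b = divmod(i, 4)
    return {0: 2 * a + 1, 1: 2 * a, 2: 2 * a + 2, 3: 2 * a + 1}[b]

def mod2_dim(i):
    # Universal coefficients: H^i(X;F_2) = H^i(X;Z) (x) F_2  +  Tor(H^{i+1}(X;Z), F_2)
    free_rank = 1 if i == 0 else 0
    return free_rank + torsion_summands(i) + torsion_summands(i + 1)

# Lemma handel68D asserts dim H^i(BD_8; F_2) = i + 1.
assert all(mod2_dim(i) == i + 1 for i in range(64))
```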
\begin{corolario}\label{HBDT}
{For $a\geq0$ and $0\leq b\leq3$,}
$$
{H^{4a+b}(BD_8;\mathbb{Z}_\alpha)=\begin{cases}
\langle 2a\rangle, & b=0;\\
\langle 2a+1\rangle, & b=1;\\
\{2a\},& b=2;\\
\langle 2a+2\rangle, & b=3.
\end{cases}}
$$
\
\cajita
\end{corolario}
We show that, up to a certain symmetry condition (exemplified in
Table~\ref{tabla} at the end of Section~\ref{linkinsection}),
the groups explicitly described
by Corollaries~\ref{HBD} and~\ref{HBDT} delineate the
additive structure of the graded group $H^*(B(\mathrm{P}^m,2))$.
The corresponding situation for $H^*(F(\mathrm{P}^m,2))$ uses the
following well-known analogues of Lemma~\ref{handel68D} and
Corollaries~\ref{handel68B},~\ref{HBD} and~\ref{HBDT}:
\begin{lema}\label{wellknown}
For $i\geq0$, $H^i(\mathrm{P}^\infty\times\mathrm{P}^\infty;\mathbb{F}_2)=\langle i+1\rangle$.
\cajita
\end{lema}
\begin{lema}\label{initialF}
For any $m$, $$H^i(F(\mathrm{P}^m,2);{\mathbb{F}_2})=\begin{cases}\langle
i+1\rangle,& 0\leq i\leq m-1;\\
\langle2m-i\rangle,& m\leq i\leq 2m-1;\\0,&\mbox{otherwise}.\end{cases}$$
\
\cajita
\end{lema}
\begin{lema}\label{kunneth} For $i\geq0$,
\begin{eqnarray*}
H^i(\mathrm{P}^\infty\times\mathrm{P}^\infty)&=&\begin{cases}
\mathbb{Z}, & i=0; \\ \left\langle\frac{i}2+1\right\rangle, &
i \mbox{ even }, i>0; \\ \left\langle\frac{i-1}2\right\rangle,
& \mbox{otherwise}.
\end{cases} \\
H^i(\mathrm{P}^\infty\times\mathrm{P}^\infty;\mathbb{Z}_\alpha)&=&\begin{cases}
\left\langle\frac{i}{2}\right\rangle, & i \mbox{ even};\\ \left\langle
\frac{i+1}{2}\right\rangle, & i \mbox{ odd}. \end{cases}
\end{eqnarray*}
Here $\mathbb{Z}_\alpha$
is regarded as a $(\mathbb{Z}_2\times\mathbb{Z}_2)$-module via the restricted structure
coming from the inclusion $\mathbb{Z}_2\times\mathbb{Z}_2\hookrightarrow D_8$.
\cajita
\end{lema}
Here are some brief comments on the proofs of
Lemmas~\ref{wellknown}--\ref{kunneth}.
Of course, the ring structure $H^*(\mathrm{P}^\infty\times\mathrm{P}^\infty;\mathbb{F}_2)=
\mathbb{F}_2[x_1,y_1]$ is standard (as in Theorem~\ref{modulestructure}, subscripts
for the cohomology classes in this paragraph indicate dimension).
On the other hand, it is easily shown~(see for instance~\cite[Example~3E.5
on pages~306--307]{hatcher}) that $H^*(\mathrm{P}^\infty\times\mathrm{P}^\infty)$
is the polynomial ring over the integers on three classes $x_2$, $y_2$,
and $z_3$ subject to the four relations
\begin{equation}\label{relacionesenteras}
2x_2=0,\;\; 2y_2=0,\;\; 2z_3=0,\;\mbox{and}\;\; z_3^2=x_2y_2(x_2+y_2).
\end{equation}
These two facts yield Lemma~\ref{wellknown} and
the first equality in Lemma~\ref{kunneth}.
Lemma~\ref{initialF} can be proved with the argument given for
Corollary~\ref{handel68B}---replacing $D_8$ by its subgroup
$\mathbb{Z}_2\times\mathbb{Z}_2$ in~(\ref{wreath}).
Finally, both equalities in Lemma~\ref{kunneth} can be obtained as immediate
consequences of the K\"unneth exact sequence (for the second
equality, note that $\mathbb{Z}_\alpha$ arises
as the tensor square of the standard twisted coefficients for a single
factor $\mathrm{P}^\infty$).
\begin{nota}\label{mapdereduccion}
For future reference we recall (again from Hatcher's
book) that the mod 2 reduction map $H^*(\mathrm{P}^\infty\times\mathrm{P}^\infty)\to
H^*(\mathrm{P}^\infty\times\mathrm{P}^\infty;\mathbb{F}_2)$, a monomorphism in positive dimensions,
is characterized by $x_2\mathrm{Map}sto x_1^2$,
$y_2\mathrm{Map}sto y_1^2$, and $z_3\mathrm{Map}sto x_1y_1(x_1+y_1)$.
\end{nota}
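Since the reduction map of Remark~\ref{mapdereduccion} is injective in positive dimensions, the relation $z_3^2=x_2y_2(x_2+y_2)$ in~(\ref{relacionesenteras}) can be certified mod $2$. The following standalone sketch (our encoding: a polynomial over $\mathbb{F}_2$ is the set of its monomials, addition being symmetric difference) verifies it:

```python
# A polynomial in F_2[x_1, y_1] is the set of its monomials (i, j) = x^i y^j;
# addition is XOR of sets, so coefficients add mod 2.
def mul(p, q):
    out = set()
    for (a, b) in p:
        for (c, d) in q:
            out ^= {(a + c, b + d)}
    return out

x1, y1 = {(1, 0)}, {(0, 1)}
x2, y2 = mul(x1, x1), mul(y1, y1)      # images of x_2, y_2 under reduction
z3 = mul(mul(x1, y1), x1 ^ y1)         # image of z_3: x_1 y_1 (x_1 + y_1)

# z_3^2 = x_2 y_2 (x_2 + y_2) holds after reduction mod 2.
assert mul(z3, z3) == mul(mul(x2, y2), x2 ^ y2)
```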
\section{Orientability properties of some quotients of $V_{n,2}$}
\label{orientability}
Proofs in this section will be postponed
until all relevant results have been presented.
Recall that all Stiefel manifolds $V_{n,2}$ are orientable (actually
parallelizable, cf.~\cite{sutherland}).
Even if some of the elements of a given subgroup $H$
of $\mathrm{O}(2)$ fail
to act on $V_{n,2}$ in an orientation-preserving way, we could still use the
possible orientability of the quotients $V_{n,2}/H$ as an indication
of the extent to which $H$,
as a whole, is compatible with the orientability
of the several $V_{n,2}$.
For example, while every element of $\mathrm{SO}(2)$
gives an orientation-preserving diffeomorphism on each $V_{n,2}$,
it is well known that the Grassmannian $V_{n,2}/\mathrm{O}(2)$
of unoriented 2-planes in $\mathbb{R}^n$
is orientable if and only if $n$ is even (see for instance
\cite[Example~47 on page~162]{prasolov}).
We show that a similar---but {\it shifted}---result holds
when $\mathrm{O}(2)$ is replaced by $D_8$.
\begin{notacion}\label{loscocientes}
For a subgroup $H$ of $\mathrm{O}(2)$,
we will use the shorthand $V_{n,H}$ to denote the quotient
$V_{n,2}/H$. For instance $V_{m+1,G}=E_{m,G}$, the space
in Notation~\ref{convenience}.
\end{notacion}
\begin{proposicion}\label{orientabilidadB}
For $n>2$, $V_{n,D_8}$ is orientable if and only if $n$ is odd.
Consequently, for $m>1$, the top dimensional cohomology group of
$B(\mathrm{P}^m,2)$ is $$H^{2m-1}(B(\mathrm{P}^m,2))=\begin{cases}
\mathbb{Z}, & \mbox{{for even $\,m$}};\\\mathbb{Z}_2, &
\mbox{for odd $\,m$.}
\end{cases}$$
\end{proposicion}
\begin{nota}\label{analogousversionsF}
Proposition~\ref{orientabilidadB} holds (with the same proof) if $D_8$ is
replaced by its subgroup $\mathbb{Z}_2\times\mathbb{Z}_2$, and
$B(\mathrm{P}^m,2)$ is replaced by $F(\mathrm{P}^m,2)$.
It is interesting to compare both versions of
Proposition~\ref{orientabilidadB} with the fact that, for $m>1$,
$B(\mathrm{P}^m,2)$ is non-orientable, while $F(\mathrm{P}^m,2)$ is orientable only for odd
$m$~(\cite[Lemma~2.6]{SadokCohenFest}).
\end{nota}
\begin{ejemplo}\label{V22}
The cases with $n=2$ and $m=1$ in Proposition~\ref{orientabilidadB}
are special (compare
to~\cite[Proposition~2.5]{SadokCohenFest}):
Since the
quotient of $V_{2,2}=S^1\cup S^1$ by the action of
$D_8\cap\mathrm{SO}(2)$ is diffeomorphic to
the disjoint union of two copies of $S^1/\mathbb{Z}_4$,
we see that $V_{2,D_8}\cong S^1$.
\end{ejemplo}
If we take the same orientation for both circles
in $V_{2,2}=S^1\cup S^1$, it is clear that
the automorphism $H^1(V_{2,2})\to H^1(V_{2,2})$
induced by an element $r\in D_8$
is represented by the matrix
$\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$
if $r\in \mathrm{SO}(2)$, but by the matrix
$\left(\begin{smallmatrix}0&-1\\-1&0\end{smallmatrix}\right)$
if $r\not\in \mathrm{SO}(2)$. For larger values of $n$,
the method of proof of Proposition~\ref{orientabilidadB}
allows us to describe the action of
$D_8$ on the integral cohomology ring of $V_{n,2}$.
The answer is given in terms of the generators $\rho,\rho_1,\rho_2\in D_8$
introduced in Definition~\ref{inicio1Handel}.
\begin{teorema}\label{D8actionV}
The three automorphisms
{\small $\rho^*\!,\rho_1^*,\rho_2^*\colon H^q(V_{n,2})\to
H^q(V_{n,2})$} agree.
For $n>2$, this common morphism is the identity
except when $n$ is even and $q\in\{n-2,2n-3\}$,
in which case the common morphism is multiplication by $-1$.
\end{teorema}
Theorem~\ref{D8actionV} should be read keeping in mind the well-known
cohomology ring $H^*(V_{n,2})$. We recall its simple description
after proving Proposition~\ref{orientabilidadB}. For the time being it
suffices to recall, for the purposes of Proposition~\ref{SSOGimp}
below, that $H^{n-1}(V_{n,2})=\mathbb{Z}_2$ for odd $n$, $n\geq3$.
We use our approach to Theorem~\ref{D8actionV}
in order to describe the integral cohomology ring of the
oriented Grassmannian $V_{n,\mathrm{SO}(2)}$ for odd $n$, $n\geq3$.
Although the result might be well known ($V_{n,\mathrm{SO}(2)}$ is
a complex quadric of complex dimension $n-2$), we include the details
(an easy step from the constructions in this section)
since we have not been able to find an explicit reference
in the literature.
\begin{proposicion}\label{SSOGimp}
Assume $n$ is odd, $n=2a+1$ with $a\geq1$. Let $\widetilde{z}\in
H^2(V_{n,\mathrm{SO}(2)})$ stand for the Euler class of the
smooth principal $S^1$-bundle
\begin{equation}\label{smoothfibrfix}
S^1\to V_{n,2}\to V_{n,\mathrm{SO}(2)}.
\end{equation}
There is a class $\widetilde{x}\in
H^{n-1}(V_{n,\mathrm{SO}(2)})$ mapping under the projection
in~{\em(\ref{smoothfibrfix})} to the nontrivial element in $H^{n-1}(V_{n,2})$.
Furthermore, as a ring $$H^*(V_{n,\mathrm{SO}(2)})=\mathbb{Z}
[\widetilde{x},\widetilde{z\hspace{.5mm}}]
\hspace{.5mm}/I_n$$ where $I_n$ is the ideal
generated by
\begin{equation}\label{tresrelaciones}
\widetilde{x}^{\,2},\mbox{ \ }
{\widetilde{x}\,\widetilde{z}^{\,a}}, \mbox{ \ and \ \ }
\widetilde{z}^{\,a}-2\hspace{.4mm}\widetilde{x}.
\end{equation}
\end{proposicion}
It should be noted that the second generator of $I_n$ is superfluous. We
include it in the description since it will become clear, from the proof of
Proposition~\ref{SSOGimp}, that the first two terms in~(\ref{tresrelaciones})
correspond to the two families of differentials in the SSS of the fibration
classifying~(\ref{smoothfibrfix}),
while the last term corresponds to the family of
nontrivial extensions in the resulting $E_\infty$-term.
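Indeed, writing $g_1=\widetilde{x}^{\,2}$ and $g_2=\widetilde{z}^{\,a}-2\widetilde{x}$, one has $\widetilde{x}\,\widetilde{z}^{\,a}=\widetilde{x}\,g_2+2g_1$ and $\widetilde{z}^{\,2a}=(\widetilde{z}^{\,a}+2\widetilde{x})\,g_2+4g_1$, so both the second generator and the dimensionally forced relation $\widetilde{z}^{\,2a}=0$ follow from the other two. A sketch of this formal check using sympy (not part of the paper):

```python
from sympy import symbols, expand

x, z = symbols('x z')
a = 5                       # any sample exponent works; the identity is formal
g1 = x**2                   # first generator of I_n
g2 = z**a - 2*x             # third generator of I_n

# x*z^a lies in the ideal (g1, g2): the second generator is superfluous.
assert expand(x * z**a - (x * g2 + 2 * g1)) == 0

# The dimensionally forced relation z^{2a} = 0 is likewise implied.
assert expand(z**(2 * a) - ((z**a + 2 * x) * g2 + 4 * g1)) == 0
```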
\begin{nota}\label{compat1}
It is illuminating to compare Proposition~\ref{SSOGimp} with
H.~F. Lai's computation of the cohomology ring
$H^*(V_{n,\mathrm{SO}(2)})$ for even $n$, $n\geq4$. According
to~\cite[Theorem~2]{lai}, $H^*(V_{2a,\mathrm{SO}(2)})=\mathbb{Z}
[\kappa,\widetilde{z\hspace{.5mm}}]\hspace{.5mm}/I_{2a}$ where
$I_{2a}$ is the ideal generated by
\begin{equation}\label{laisdescription}
\kappa^2-\varepsilon \kappa\widetilde{z}^{\,a-1}\mbox{ \ \ and \ \ }\,
\widetilde{z}^{\,a}-2\kappa\widetilde{z}.
\end{equation}
Here $\varepsilon=0$ for $a$ even, and $\varepsilon=1$ for $a$ odd, while
the generator $\kappa\in H^{2a-2}(V_{2a,\mathrm{SO}(2)})$
is the Poincar\'e dual
of the homology class represented by the canonical (realification) embedding
$\mathbb{C}\mathrm{P}^{a-1}\hookrightarrow V_{2a,\mathrm{SO}(2)}$ (Lai also proves
that $(-1)^{a-1}\kappa \widetilde{z}^{\,a-1}$ is
the top dimensional cohomology class in $V_{2a,\mathrm{SO}(2)}$
corresponding to the canonical orientation of this manifold).
The first fact to observe in Lai's description of
$H^*(V_{2a,\mathrm{SO}(2)})$
is that the two dimensionally forced relations $\kappa\widetilde{z}^{\,a}=0$
and $\widetilde{z}^{\,2a-1}=0$ can be algebraically
deduced from the relations implied by~(\ref{laisdescription}). A similar
situation holds for $H^*(V_{2a+1,\mathrm{SO}(2)})$, where
the first two relations in~(\ref{tresrelaciones}), as well as the
corresponding algebraically implied relation $\widetilde{z}^{\,2a}=0$,
are forced by dimensional considerations. But it is more interesting
to compare Lai's result with Proposition~\ref{SSOGimp}
through the canonical inclusions $\iota_n\colon
V_{n,\mathrm{SO}(2)}\hookrightarrow
V_{n+1,\mathrm{SO}(2)}$ ($n\geq3$). In fact, the
relations given by the last element both in~(\ref{tresrelaciones})
and~(\ref{laisdescription}) readily give
\begin{equation}\label{compatibilidadxk}
\iota_{2a}^*(\widetilde{x})=\kappa\widetilde{z}\mbox{ \ \ and \ \ }
\iota^*_{2a+1}(\kappa)=\widetilde{x}
\end{equation}
for $a\geq2$. Note that the second equality in~(\ref{compatibilidadxk})
can be proved, for all $a\geq1$, with the following alternative
argument: From~\cite[Theorem~2]{lai}, $2\kappa-\widetilde{z}^{\,a}\in
H^{2a}(V_{2a+2,\mathrm{SO}(2)})$ is the Euler class of the canonical
{\it normal} bundle of $V_{2a+2,\mathrm{SO}(2)}$ and, therefore, maps
trivially under $\iota_{2a+1}^*$. The second equality
in~(\ref{compatibilidadxk}) then follows from
the relation implied by the last element in~(\ref{tresrelaciones}).
Needless to say, the usual cohomology ring $H^*(B\mathrm{SO}(2))$
is recovered as the inverse limit of the maps $\iota_n^*$
(of course $B\mathrm{SO}(2)\simeq\mathbb{C}\mathrm{P}^\infty$).
\end{nota}
\begin{proof}[Proof of Proposition~{\em\ref{orientabilidadB}}
from Theorem~{\em\ref{D8actionV}}]
Since the action of every element in $D_8\cap\mathrm{SO}(2)$ preserves
orientation in $V_{n,2}$, and since two elements in
$D_8-\mathrm{SO}(2)$ must
``differ'' by an orientation-preserving element in $D_8$,
the first assertion in Proposition~\ref{orientabilidadB}
will follow once we argue that (say) $\rho$
is orientation-preserving precisely when $n$ is odd. But such a fact
is given by Theorem~\ref{D8actionV} in view of the UCT.
The second assertion in Proposition~\ref{orientabilidadB}
then follows from
Lemma~\ref{inicio2Handel},~\cite[Corollary~3.28]{hatcher}, and the
UCT (recall $\dim(V_{n,2})=2n-3$).
\end{proof}
We now start working toward the proof of Theorem~\ref{D8actionV},
recalling in particular the cohomology ring $H^*(V_{n,2})$.
Let $n>2$ and think of $V_{n,2}$ as the sphere bundle of the
tangent bundle of $S^{n-1}$. The (integral cohomology) SSS for the fibration
$S^{n-2}\stackrel{\iota}\to V_{n,2}\stackrel{\pi}\to S^{n-1}$
(where $\pi(v_1,v_2)=v_1$ and $\iota(w)=(e_1,(0,w))$ with
$e_1=(1,0,\ldots,0)$) starts as
\begin{equation}\label{SSS}
{E}^{p,q}_2=\begin{cases}
\mathbb{Z}, & (p,q)\in\{(0,0),(n-1,0),(0,n-2),(n-1,n-2)\};
\\0, & \mbox{otherwise;}\end{cases}
\end{equation}
and the only possibly nonzero differential is multiplication by the Euler
characteristic of $S^{{n-1}}$ (see for
instance~\cite[pages 153--154]{mccleary}).
At any rate, the only possibilities for a
nonzero cohomology group $H^q(V_{n,2})$ are $\mathbb{Z}_2$ or $\mathbb{Z}$.
In the former case,
any automorphism must be the identity. So the real task is to
determine the action of the three elements in Theorem~\ref{D8actionV}
on a cohomology group $H^q(V_{n,2})=\mathbb{Z}$.
\begin{proof}[Proof of Theorem~{\em\ref{D8actionV}}]
The fact that $\rho^*=\rho_1^*=\rho_2^*$ follows by observing that
the product of any two of the elements $\rho$, $\rho_1$, and $\rho_2$ lies
in the path connected group $\mathrm{SO}(2)$, and therefore determines
an automorphism $V_{n,2}\to V_{n,2}$ which is homotopic to the identity.
The analysis of the second assertion of Theorem~\ref{D8actionV}
depends on the parity of $n$.
\noindent
{\bf Case with $n$ even, $n>2$.}
The SSS~(\ref{SSS}) collapses, giving that $H^*(V_{n,2})$
is an exterior algebra (over $\mathbb{Z}$)
on a pair of generators $x_{n-2}$ and $x_{n-1}$ (indices
denote dimensions). The spectral sequence also gives that $x_{n-2}$
maps under $\iota^*$
to a generator of $H^{n-2}(S^{n-2})$, whereas $x_{n-1}$ is the image under $\pi^*$
of a generator of $H^{n-1}(S^{n-1})$. Now, the (obviously) commutative diagram
\begin{picture}(0,75)(-63,0)
\put(0,60){$S^{n-2}$}
\put(8,55){\vector(0,-1){15}}
\put(84,55){\vector(0,-1){15}}
\put(3,47){\scriptsize $\iota$}
\put(87,47){\scriptsize $\iota$}
\put(20,63){\vector(1,0){54}}
\put(27,66){\scriptsize antipodal map}
\put(43.5,36){\scriptsize $\rho_2$}
\put(20,33){\vector(1,0){54}}
\put(76,60){$S^{n-2}$}
\put(79,30){$V_{n,2}$}
\put(3,30){$V_{n,2}$}
\put(38,5){$S^{n-1}$}
\put(15,24){\vector(2,-1){22}}
\put(21,13.5){\scriptsize $\pi$}
\put(78,25){\vector(-2,-1){22}}
\put(68,14){\scriptsize $\pi$}
\end{picture}
\noindent
implies that $\rho_2^*$ (and therefore $\rho_1^*$ and $\rho^*$) is the
identity on $H^{n-1}(V_{n,2})$, and that $\rho_2^*$ (and therefore
$\rho_1^*$ and $\rho^*$) act by multiplication by $-1$ on
$H^{n-2}(V_{n,2})$.
The multiplicative structure then implies that the last assertion holds
also on $H^{2n-3}(V_{n,2})$.
\noindent
{\bf Case with $n$ odd, $n>2$.}
The description in~(\ref{SSS})
of the start of the SSS implies that the only nonzero
cohomology groups of $V_{n,2}$ are $H^{n-1}(V_{n,2})=\mathbb{Z}_2$ and
$H^i(V_{n,2})=\mathbb{Z}$ for $i=0,2n-3$. Thus,
we only need to make sure that
\begin{equation}\label{mostrar}
\mbox{$\rho^*\colon H^{2n-3}(V_{n,2})
\to H^{2n-3}(V_{n,2})$ is the identity morphism.}
\end{equation}
Choose generators
$x\in H^{n-1}(V_{n,2})$, $y\in H^{2n-3}(V_{n,2})$, and
$z\in H^2(\mathbb{C}\mathrm{P}^\infty)$, and let
$V_{n,\mathrm{SO}(2)}\to\mathbb{C}\mathrm{P}^\infty$ classify the circle
fibration~(\ref{smoothfibrfix}).
Thus, the $E_2$-term of the SSS for the fibration
\begin{equation}\label{fibrationOG}
V_{n,2}\to V_{n,\mathrm{SO}(2)}\to\mathbb{C}\mathrm{P}^\infty
\end{equation}
takes the simple form
\begin{picture}(0,94)(-2,-14)
\qbezier[100](0,0)(100,0)(218,0)
\qbezier[30](0,0)(0,37)(0,70)
\multiput(-2,-2)(15,0){4}{$\mathbb{Z}$}
\multiput(88,-2)(15,0){3}{$\mathbb{Z}$}
\multiput(163,-2)(15,0){3}{$\mathbb{Z}$}
\multiput(-2,29)(15,0){4}{$\bullet$}
\multiput(88,29)(15,0){3}{$\bullet$}
\multiput(163,29)(15,0){3}{$\bullet$}
\multiput(-2,58)(15,0){4}{$\mathbb{Z}$}
\multiput(88,58)(15,0){3}{$\mathbb{Z}$}
\multiput(163,58)(15,0){3}{$\mathbb{Z}$}
\put(-1,-8){\scriptsize$1$}
\put(14,-8){\scriptsize$z$}
\put(29,-8){\scriptsize$z^2$}
\put(44,-8){\scriptsize$z^3$}
\put(87,-8){\scriptsize$z^{a-1}$}
\put(103,-8){\scriptsize$z^{a}$}
\put(117,-8){\scriptsize$z^{a+1}$}
\put(162,-8){\scriptsize$z^{n-2}$}
\put(177,-8){\scriptsize$z^{n-1}$}
\put(192,-8){\scriptsize$z^{n}$}
\put(210,-7){$\dots$}
\put(210,30){$\dots$}
\put(210,60){$\dots$}
\put(140,-7){$\dots$}
\put(140,30){$\dots$}
\put(140,60){$\dots$}
\put(65,-7){$\dots$}
\put(65,30){$\dots$}
\put(65,60){$\dots$}
\put(-9,29.5){\scriptsize$x$}
\put(-9,58){\scriptsize$y$}
\end{picture}
\noindent where $n=2a+1$, and a bullet represents a copy of $\mathbb{Z}_2$.
The proof of Proposition~\ref{SSOGimp} below gives two rounds
of differentials, both originating on the top horizontal line;
the element $2y$ is a cycle in the first round of differentials,
but determines the second round of differentials by
\begin{equation}\label{differential}
d_{2n-2}(2y)=z^{n-1}.
\end{equation}
The key ingredient
comes from the observation that $\rho$ and the
involution $\tau\colon V_{n,\mathrm{SO}(2)}
\to V_{n,\mathrm{SO}(2)}$
that reverses orientation of an oriented $2$-plane
fit into the pull-back diagram
\begin{equation}\label{conjugation}
\begin{picture}(0,36)(50,32)
\put(0,60){$V_{n,2}$}
\put(8,55){\vector(0,-1){14}}
\put(84,55){\vector(0,-1){14}}
\put(20,63){\vector(1,0){52}}
\put(45,66){\scriptsize $\rho$}
\put(45,36){\scriptsize $\tau$}
\put(45,6){\scriptsize $c$}
\put(24,33){\vector(1,0){46}}
\put(76,60){$V_{n,2}$}
\put(74,30){$V_{n,\mathrm{SO}(2)}$}
\put(-5,30){$V_{n,\mathrm{SO}(2)}$}
\put(2,0){$\mathbb{C}\mathrm{P}^\infty$}
\put(78,0){$\mathbb{C}\mathrm{P}^\infty$}
\put(24,3){\vector(1,0){50}}
\put(8,25){\vector(0,-1){14}}
\put(84,25){\vector(0,-1){14}}
\end{picture}
\end{equation}
\noindent
where $c$ stands for conjugation. [Indeed, thinking of
$V_{n,\mathrm{SO}(2)}
\to \mathbb{C}\mathrm{P}^\infty$ as an inclusion, $\tau$ is the restriction of $c$,
and $\rho$ becomes the equivalence induced on (selected) fibers.]
Of course $c^*(z)=-z$ in $H^2(\mathbb{C}\mathrm{P}^\infty)$, so that
\begin{equation}\label{par}c^*(z^{n-1})=z^{n-1}\end{equation}
(recall $n$ is odd). Thus, in terms of the map of spectral sequences
determined by~(\ref{conjugation}),
conditions~(\ref{differential}) and~(\ref{par}) force the relation
$\rho^*(2y)=2y$. This gives~(\ref{mostrar}).
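[Explicitly, and using only the facts just stated: naturality of the
differentials with respect to the map of spectral sequences induced
by~(\ref{conjugation}) yields
$$
d_{2n-2}\bigl(\rho^*(2y)\bigr)=c^*\bigl(d_{2n-2}(2y)\bigr)
=c^*(z^{n-1})=z^{n-1}=d_{2n-2}(2y),
$$
and $d_{2n-2}$ is injective on the infinite cyclic subgroup generated
by $2y$, so indeed $\rho^*(2y)=2y$.]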
\end{proof}
The proof of~(\ref{mostrar}) we just gave (for odd $n$)
can be simplified by working over the rationals
(see Remark~\ref{transfer} in the next paragraph). We have chosen the
spectral sequence analysis of~(\ref{fibrationOG}) since it
leads us to Proposition~\ref{SSOGimp}.
\begin{nota}\label{transfer}
It is well known that whenever a finite group $H$ acts freely
on a space $X$, with $Y = X/H$, the rational cohomology of $Y$ maps
isomorphically onto the $H$-invariant elements in the rational
cohomology of $X$ (see for instance~\cite[Proposition~3G.1]{hatcher}).
We apply this fact to the $8$-fold covering projection
$\,\theta\colon V_{n,2}\to V_{n,D_8}$. Since the only nontrivial groups
$H^q(V_{n,2};\mathbb{Q})$ are $\mathbb{Q}$ for $q=0,2n-3$ (this is where
we use that $n$ is odd), we get that
the rational cohomology of $V_{n,D_8}$ is $\mathbb{Q}$ in dimension $0$,
vanishes in positive dimensions below $2n-3$, and is either $\mathbb{Q}$
or $0$ in the top dimension $2n-3$.
But $V_{n,D_8}$ is a manifold of odd dimension, so its Euler
characteristic is zero; this forces the top rational cohomology
to be $\mathbb{Q}$. Thus, every element in $D_8$ acts as the identity
on the top rational (and therefore integral) cohomology group
of $V_{n,2}$. This gives in particular~(\ref{mostrar}), the
real content of Theorem~\ref{D8actionV} for an odd $n$.
\end{nota}
As in the notation introduced right after~(\ref{mostrar}),
let $z\in H^2(\mathbb{C}\mathrm{P}^\infty)$ be a generator so that
the element $\widetilde{z}\in H^2(V_{n,\mathrm{SO}(2)})$ in
Proposition~\ref{SSOGimp} is the image of $z$ under the projection
map in~(\ref{fibrationOG}).
\begin{proof}[Proof of Proposition~{\em\ref{SSOGimp}}]
The $E_2$-term of the SSS for~(\ref{fibrationOG}) has been indicated
in the proof of Theorem~\ref{D8actionV}. In that picture,
the horizontal $x$-line consists of permanent cycles; indeed,
there is no nontrivial target in a $\mathbb{Z}$ group
for a differential originating at a $\mathbb{Z}_2$ group.
Since $\dim(V_{n,\mathrm{SO}(2)})=2n-4$, the term $xz^a$ must be killed
by a differential, and the only way this can happen is by means of
$d_{n-1}(y)=xz^a$. By multiplicativity, this settles a whole family of
differentials killing off the elements $xz^i$ with $i\geq a$. Note that
this still leaves groups $2\cdot\mathbb{Z}$ in the $y$-line (rather,
the $2y$-line). Just as before, dimensionality forces the
differential~(\ref{differential}), and multiplicativity determines
a corresponding family of differentials. What remains in the SSS after
these two rounds of differentials---depicted below---consists of
permanent cycles, so the spectral sequence collapses from this point on.
\begin{picture}(0,64)(-17,-14)
\qbezier[90](0,0)(100,0)(180,0)
\qbezier[20](0,0)(0,20)(0,40)
\multiput(-2,-2)(15,0){4}{$\mathbb{Z}$}
\multiput(88,-2)(15,0){3}{$\mathbb{Z}$}
\multiput(163,-2)(15,0){1}{$\mathbb{Z}$}
\multiput(-2,29)(15,0){4}{$\bullet$}
\multiput(88,29)(15,0){1}{$\bullet$}
\put(-1,-8){\scriptsize$1$}
\put(14,-8){\scriptsize$z$}
\put(29,-8){\scriptsize$z^2$}
\put(44,-8){\scriptsize$z^3$}
\put(87,-8){\scriptsize$z^{a-1}$}
\put(103,-8){\scriptsize$z^{a}$}
\put(117,-8){\scriptsize$z^{a+1}$}
\put(162,-8){\scriptsize$z^{n-2}$}
\put(140,-7){$\dots$}
\put(65,-7){$\dots$}
\put(65,30){$\dots$}
\put(-9,29.5){\scriptsize$x$}
\end{picture}
\noindent Finally, we note that all possible extensions
are nontrivial. Indeed, orientability of
$\hspace{.5mm}V_{n,\mathrm{SO}(2)}$ gives
$H^{2n-4}(V_{n,\mathrm{SO}(2)})=\mathbb{Z}$, which implies
a nontrivial extension involving $xz^{a-1}$ and $z^{n-2}$. Since
multiplication by $z$ is monic in total dimensions
less than $2n-4$ of the $E_\infty$-term,
the $5$-Lemma (applied recursively) shows that
the same assertion is true in $H^{*}(V_{n,\mathrm{SO}(2)})$.
This forces the corresponding nontrivial
extensions in degrees lower than
$2n-4$: an element of order 2 in low dimensions would produce,
after multiplication by $z$, a corresponding element of order $2$ in the
top dimension. The proposition follows.
\end{proof}
Lai's description of the ring $H^*(V_{2a,\mathrm{SO}(2)})$
given in Remark~\ref{compat1} can be used to understand
the full pattern of differentials and extensions in
the SSS of~(\ref{fibrationOG}) for
$n=2a\hspace{.5mm}$. Due to space limitations, details are not given
here---but they are discussed in Remark~3.10 of the
preliminary version~\cite{v1} of this paper.
We close this section with an argument
that explains, in a geometric way, the switch in parity of $n$
when comparing the orientability properties of $V_{n,\mathrm{O}(2)}$
to those of $V_{n,D_8}$. Let $\pi$ stand for the projection map in
the smooth fiber bundle~(\ref{smoothfibrfix}). The tangent bundle
$T_{n,2}$ to $V_{n,2}$ decomposes as the Whitney sum $$T_{n,2}\cong\pi^*
(T_{n,\mathrm{SO}(2)})\oplus\lambda$$ where \hspace{0.6mm}$T_{n,\mathrm{SO}(2)}$
\hspace{0.6mm}is the tangent bundle to \hspace{0.6mm}$V_{n,\mathrm{SO}(2)}$, and $\lambda$
is the $1$--dimensional bundle of tangents to the fibers---a trivial bundle
since we have the nowhere vanishing vector field obtained by differentiating
the free action of $S^1$ on $V_{n,2}$. Note that $\rho\colon V_{n,2}\to V_{n,2}$
reverses orientation on all fibers and so reverses a given orientation
of $\lambda$. Hence, $\rho\,$ {\it preserves\hspace{.5mm}}
a chosen orientation of $T_{n,2}$
precisely when the involution $\tau$ in~(\ref{conjugation})
{\it reverses} a chosen orientation of $T_{n,\mathrm{SO}(2)}$. But,
as explained in the proof of Proposition~\ref{orientabilidadB},
$V_{n,D_8}$ is orientable precisely when
$\rho$ is orientation-preserving. Likewise, $V_{n,\mathrm{O}(2)}$
is orientable precisely when $\tau$ is orientation-preserving.
\section{Torsion linking form and Theorems~\ref{descripcionordenada}
and \ref{descripciondesaordenada}}\label{linkinsection}
In this short section we outline an argument, based on the
classical torsion linking form, that allows us to compute the cohomology
groups described by Theorems~\ref{descripcionordenada}
and~\ref{descripciondesaordenada} in all but three critical dimensions.
The totality of dimensions (together with the proofs of
Propositions~\ref{HFpar}--\ref{mono4})
is considered in the next three sections---the first
two of which represent the bulk of spectral sequence
computations in this paper.
For a space $X$ let $TH_i(X;A)$ (respectively, $TH^i(X;A)$) denote
the torsion subgroup of
the $i^{\mathrm{th}}$ homology (respectively, cohomology)
group of $X$ with (possibly twisted)
coefficients $A$. As usual, omission of $A$ from the notation
indicates that a simple system of
$\mathbb{Z}$-coefficients is used. We are interested in the twisted
coefficients $\widetilde{\mathbb{Z}}$ arising from the orientation character
of a closed $m$-manifold $X=M$ for, in such a case, there are non-singular
pairings
\begin{equation}\label{linkingtorsion}
TH^i(M)\times TH^j(M;\widetilde{\mathbb{Z}})\to\mathbb{Q}/
\mathbb{Z}
\end{equation}
(for $i+j=m+1$),
the so-called torsion linking forms, constructed from the UCT
and Poincar\'e duality. Although~(\ref{linkingtorsion})
seems to be best known for an orientable $M$ (see for
instance~\cite[pages 16--17
and 58--59]{collins}), the construction works just as well in a
non-orientable setting.
We briefly recall the details (in cohomological terms) for completeness.
Start by observing that for a finitely generated abelian group
$H=F \oplus T$ with $F$ free abelian and $T$ a finite group, the group
$\mathrm{Ext}^1(H,\mathbb{Z})\cong\mathrm{Ext}^1(T,\mathbb{Z})$
is canonically isomorphic
to $\mathrm{Hom}(T,\mathbb{Q}/\mathbb{Z})$, the Pontryagin dual of $T$
(verify this by using the exact
sequence $0 \to \mathbb{Z} \to \mathbb{Q} \to \mathbb{Q}/\mathbb{Z} \to 0$,
and noting that $\mathbb{Q}$ is injective while $\mathrm{Hom}(T,\mathbb{Q})
=0$). In particular, the canonical isomorphism $TH^i(M)\cong\mathrm{Ext}^1
(TH_{i-1}(M),\mathbb{Z})$ coming from the UCT yields a non-singular
pairing $TH^i(M)\times TH_{i-1}(M)\to \mathbb{Q}/\mathbb{Z}$.
The form in~(\ref{linkingtorsion}) then follows by using
Poincar\'e duality (in its not necessarily orientable version,
see~\cite[Theorem~3H.6]{hatcher} or~\cite[Theorem~4.51]{ranicki}).
As explained by Barden in \cite[Section~0.7]{barden} (in the orientable case),
the resulting pairing can be interpreted geometrically
as the classical torsion linking number~(\cite{KM,ST,wall}).
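Spelled out, the verification suggested above reads: applying
$\mathrm{Hom}(T,-)$ to the coefficient sequence
$0\to\mathbb{Z}\to\mathbb{Q}\to\mathbb{Q}/\mathbb{Z}\to0$ gives the exact sequence
$$
0=\mathrm{Hom}(T,\mathbb{Q})\longrightarrow\mathrm{Hom}(T,\mathbb{Q}/\mathbb{Z})
\stackrel{\partial}\longrightarrow\mathrm{Ext}^1(T,\mathbb{Z})
\longrightarrow\mathrm{Ext}^1(T,\mathbb{Q})=0,
$$
where the outer groups vanish because $T$ is finite and $\mathbb{Q}$ is
injective, so the connecting homomorphism $\partial$ is the asserted
canonical isomorphism.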
Recall the group $G$ and orbit space $E_m$ in
Notation~\ref{convenience}.
We next indicate how the isomorphisms
\begin{equation}\label{linkisos}
TH^i(M)\cong TH^j(M;\widetilde{\mathbb{Z}}),\quad i+j=2m,
\end{equation}
coming from~(\ref{linkingtorsion}) for $M=E_m$
can be used for computing
most of the integral cohomology groups of $F(\mathrm{P}^m,2)$
and $B(\mathrm{P}^m,2)$.
\noindent Since $V_{m+1,2}$ is
($m-2$)-connected\footnote{Low-dimensional cases with $m\leq3$ are given special attention
in Example~\ref{babyexample}, Remark~\ref{elm1}, and~(\ref{casoespecial})
in the following sections.},
the map in~(\ref{lap})
is ($m-1$)-connected. Therefore it induces an isomorphism
(respectively, monomorphism) in cohomology with any---possibly
twisted, in view of \cite[Theorem~$6.4.3^*$]{whitehead}---coefficients
in dimensions $i\leq m-2$ (respectively, $i=m-1$). Together
with Corollary~\ref{HBD} and Lemmas~\ref{inicio2Handel} and~\ref{kunneth},
this leads to the explicit
description of the groups in Theorems~\ref{descripcionordenada}
and~\ref{descripciondesaordenada}\hspace{.5mm}
in dimensions at most $m-2$. The corresponding
groups in dimensions at least $m+2$ can then be obtained from
the isomorphisms~(\ref{linkisos})
and the full description in Section~\ref{HBprelim}
of the twisted and untwisted cohomology groups of $BG$.
Note that the last step requires knowing that, when
$E_m$ is non-orientable (as determined in
Proposition~\ref{orientabilidadB} and Remark~\ref{analogousversionsF}), the
twisted coefficients $\widetilde{\mathbb{Z}}$ agree with those
$\mathbb{Z}_\alpha$ used in Theorem~\ref{modulestructure}. But such a
requirement is a direct consequence of Theorem~\ref{D8actionV}.
Since the torsion-free subgroups of $H^*(E_m)$ are easily identifiable from
a quick glance at the $E_2$-term of the
CLSS for the $G$-action on $V_{m+1,2}$,
only the torsion subgroups in
Theorems~\ref{descripcionordenada} and~\ref{descripciondesaordenada}
in dimensions
\begin{equation}\label{faltantes}
\mbox{$m-1$, $\;m$, $\;$and $\,\;m+1$}
\end{equation}
are left undescribed by this argument.
A deeper analysis of the CLSS of the $G$-action on $V_{m+1,2}$
(worked out in Sections~\ref{HB} and~\ref{problems3} for $G=D_8$,
and discussed briefly in Section~\ref{HF} for $G=\mathbb{Z}_2\times\mathbb{Z}_2$)
will give us (among other things) a detailed description of
the three missing cases in~(\ref{faltantes}) {\it except}
for the ($m+1$)-dimensional group when $G=D_8$
and $m\equiv3\bmod4$. Note that this apparently
singular case cannot be handled directly with the torsion linking form
argument in the previous paragraph because the connectivity
of $V_{m+1,2}$ only gives
the injectivity, but not the surjectivity, of the first map in the composite
\begin{equation}\label{excepcional}
H^{m-1}(BD_8;\mathbb{Z}_\alpha)\stackrel{p^*}\longrightarrow
H^{m-1}(B(\mathrm{P}^m,2);\mathbb{Z}_\alpha)\cong H^{m+1}(B(\mathrm{P}^m,2)).
\end{equation}
To overcome the problem, in Section~\ref{problems3} we perform
a direct calculation in the first two pages of the Bockstein spectral
sequence (BSS) of $B(\mathrm{P}^{4a+3},2)$ to prove that~(\ref{excepcional}) is
indeed an isomorphism for $m\equiv3\bmod4$---therefore
completing the proof of Theorems~\ref{descripcionordenada}
and~\ref{descripciondesaordenada}.
\begin{table}[h]
\centerline{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\rule{0pt}{15pt}$*={}$& \makebox[2mm]{\small 2} & \makebox[2mm]{\small 3} & \makebox[2mm]{\small 4} & \makebox[2mm]{\small 5} & \makebox[2mm]{\small 6} & \makebox[2mm]{\small 7} & \makebox[2mm]{\small 8} & \makebox[2mm]{\small 9} & \makebox[2mm]{\small 10} &
\makebox[2mm]{\small 11} & \makebox[2mm]{\small 12} & \makebox[2mm]{\small 13} & \makebox[2mm]{\small 14}\\[0.5ex]
\hline
\rule{0pt}{15pt}{\scriptsize $H^*(E_{2,D_8})$} &\makebox[2mm]{\scriptsize $\langle2\rangle$}&&&&&&&&&&&&\\ [0.5ex]
\hline
\rule{0pt}{15pt}{\scriptsize $H^*(E_{4,D_8})$} &\makebox[2mm]{\scriptsize $\langle2\rangle$}&\makebox[2mm]{\scriptsize $\langle1\rangle$}
&\makebox[2mm]{\scriptsize \{2\}}&
\makebox[2mm]{\scriptsize $\langle1\rangle$}&\makebox[2mm]{\scriptsize $\langle2\rangle$}&&&&&&&&\\ [0.5ex]
\hline
\rule{0pt}{15pt}{\scriptsize $H^*(E_{6,D_8})$} &\makebox[2mm]{\scriptsize $\langle2\rangle$}&\makebox[2mm]{\scriptsize $\langle1\rangle$}&
\makebox[2mm]{\scriptsize \{2\}}
&\makebox[2mm]{\scriptsize $\langle2\rangle$}&\makebox[2mm]{\scriptsize $\langle4\rangle$}&\makebox[2mm]{\scriptsize $\langle2\rangle$}&\makebox[2mm]{\scriptsize \{2\}}&
\makebox[2mm]{\scriptsize $\langle1\rangle$}&\makebox[2mm]{\scriptsize $\langle2\rangle$}&&&&\\ [0.5ex]
\hline
\rule{0pt}{15pt}{\scriptsize $H^*(E_{8,D_8})$} &\makebox[2mm]{\scriptsize $\langle2\rangle$}&\makebox[2mm]{\scriptsize $\langle1\rangle$}&
\makebox[2mm]{\scriptsize \{2\}}&\makebox[2mm]{\scriptsize $\langle2\rangle$}&\makebox[2mm]{\scriptsize $\langle4\rangle$}&
\makebox[2mm]{\scriptsize $\langle3\rangle$}&\makebox[2mm]{\scriptsize \{4\}}
&\makebox[2mm]{\scriptsize $\langle3\rangle$}&\makebox[2mm]{\scriptsize $\langle4\rangle$}&\makebox[2mm]{\scriptsize $\langle2\rangle$}
&\makebox[2mm]{\scriptsize \{2\}}&\makebox[2mm]{\scriptsize $\langle1\rangle$}&\makebox[2mm]{\scriptsize $\langle2\rangle$}\\ [0.5ex]
\hline
\end{tabular}}
\caption{$H^*(E_{m,D_8})\cong H^*(B(\mathrm{P}^m,2))\,$ for
$m=2,4,6$, and $8$
\label{tabla}}
\end{table}
The isomorphisms in~(\ref{linkisos}) yield a
(twisted, in the non-orientable case)
symmetry for the torsion groups of $H^*(E_m)$. This is
illustrated (for $G=D_8$ and in the orientable case)
in Table~\ref{tabla} following the conventions set
in the very first paragraph of the paper.
\section{Case of $B(\mathrm{P}^m,2)$ for
$m\not\equiv3\bmod4$}\label{HB}
This section and the next one contain
a careful study of the CLSS of the $D_8$-action on
$V_{m+1,2}$ described in Definition~\ref{inicio1Handel};
the corresponding (much simpler) analysis for the restricted
$(\mathbb{Z}_2\times\mathbb{Z}_2)$-action is outlined in Section~\ref{HF}. The CLSS
approach will yield, in addition, direct proofs of
Propositions~\ref{HFpar}--\ref{mono4}.
The reader is assumed to be familiar with the properties of the CLSS
of a regular covering space, complete details of which first appeared
in~\cite{cartan}.
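For reference, and in the form we use it throughout, the CLSS of a free
action of a finite group $G$ on a space $X$ (here $X=V_{m+1,2}$) reads
$$
E_2^{p,q}=H^p\bigl(G;H^q(X)\bigr)\Longrightarrow H^{p+q}(X/G),
$$
where the $E_2$-term is group cohomology with (possibly twisted)
coefficients in $H^q(X)$, regarded as a $\mathbb{Z}[G]$-module as
described in Theorem~\ref{D8actionV}.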
We start with the less involved situation of an even $m$ and,
as a warm-up, we consider first the case $m=2$.
\begin{ejemplo}\label{babyexample}
Lemmas~\ref{inicio2Handel} and~\ref{handel68D}, Corollary~\ref{HBD}, and
Theorem~\ref{D8actionV}
imply that, in total dimensions at most $\dim(V_{3,D_8})=3$,
the (integral cohomology) CLSS for the $D_8$-action on $V_{3,2}$
starts as
\begin{picture}(0,88)(-70,-14)
\qbezier[50](0,0)(45,0)(90,0)
\qbezier[40](0,0)(0,33)(0,66)
\multiput(-2.5,-3)(25,0){1}{$\mathbb{Z}$}
\put(22.5,-11){\scriptsize$1$}
\put(47.5,-11){\scriptsize$2$}
\put(72.5,-11){\scriptsize$3$}
\put(-10,17.5){\scriptsize$1$}
\put(-10,37.5){\scriptsize$2$}
\put(-10,57.5){\scriptsize$3$}
\put(43.5,-3){$\langle2\rangle$}
\put(68.5,-3){$\langle1\rangle$}
\put(-5.35,37.1){$\langle1\rangle$}
\put(19,37.1){$\langle2\rangle$}
\put(-3,56.8){$\mathbb{Z}$}
\end{picture}
\noindent The only possible nontrivial differential in this range
is $d_3^{\,0,2}\colon E_2^{\,0,2}\to E_2^{\,3,0}$, which must be an
isomorphism in view of the second assertion in
Proposition~\ref{orientabilidadB}. This yields the $\mathrm{P}^2$-case in
Theorem~\ref{descripciondesaordenada} and
Propositions~\ref{mono4} and~\ref{HFpar} (with $G=D_8$ in the latter one).
As indicated in Table~\ref{tabla}, the symmetry
isomorphisms are invisible in the current situation.
It is worth noticing that the $d_3$-differential
originating at node $(1,2)$ must be injective. This observation
will be the basis of our argument in the general situation, where
2-rank considerations will be the catalyst. Here and in what follows,
by the {\it {\rm 2}-rank} (or simply {\it rank})
of a finite abelian 2-group $H$ we mean the rank
($\mathbb{F}_2$-dimension) of $H\otimes\mathbb{F}_2$.
\end{ejemplo}
\noindent{\em Proof of Proposition {\em\ref{HFpar}}
for $G\hspace{.75mm}{=}\hspace{.75mm}D_8$, and of Proposition {\em\ref{mono4}},
both for even $m\geq4$.}
The assertion in Proposition~\ref{HFpar} for
\begin{itemize}
\item $i\geq2m$ follows from
Lemma~\ref{inicio2Handel}
and the fact that $\dim(V_{m+1,2})=2m-1$, and for
\item $i=2m-1$ follows from the fact that $H^{2m-1}(BD_8)$
is a torsion group (Corollary~\ref{HBD}) while $H^{2m-1}(B(\mathrm{P}^m,2))=\mathbb{Z}$
(Proposition~\ref{orientabilidadB}).
\end{itemize}
\noindent
We work with the (integral cohomology)
CLSS for the $D_8$-action on $V_{m+1,2}$
in order to prove Proposition~\ref{mono4}
and the assertions in Proposition~\ref{HFpar} for $i<2m-1$.
In view of Theorem~\ref{D8actionV}, the spectral sequence
has a simple system of coefficients and, from the
description of $H^*(V_{m+1,2})$ in the proof of Theorem~\ref{D8actionV},
it is concentrated in the three horizontal lines with $q=0,m,2m-1$. We can
focus on the lines with $q=0,m$ in view of the range under
current consideration. At the start of the CLSS
there is a copy of
\begin{itemize}
\item $H^*(BD_8)$ (described by
Corollary~\ref{HBD}) at the line with $q=0$;
\item $H^*(BD_8,\mathbb{F}_2)$
(described by Lemma~\ref{handel68D}) at the line with $q=m$.
\end{itemize}
\noindent
Note that the assertion in Proposition~\ref{HFpar}
for $i<m$ is an obvious consequence of the
above description of the $E_2$-term of the CLSS. The case $i=m$
will follow once we show that the ``first'' potentially nontrivial
differential $d_{m+1}^{\hspace{.25mm}0,m}
\colon E_2^{0,m}\to E_2^{m+1,0}$ is injective.
More generally, we show in the paragraph
following~(\ref{sharp}) below that all differentials
\begin{equation}\label{difiny}
\mbox{$d_{m+1}^{\hspace{.25mm}m-\ell-1,m}
\colon E_2^{m-\ell-1,m}\to E_2^{2m-\ell,0}$
with $0<\ell<m$ are injective.}
\end{equation}
From this, the assertion in Proposition~\ref{HFpar}
for $m<i<2m-1$ follows at once.
The information we need about differentials is
forced by the ``size'' of their domains and codomains.
For instance, since $H^{2m-1}(B(\mathrm{P}^m,2))$ is torsion-free, all of
$E_2^{2m-1,0}=H^{2m-1}(BD_8)=\langle m-1\rangle$ must be killed
by differentials. But the only possibly nontrivial
differential landing in
$E_2^{2m-1,0}$ is the one in~(\ref{difiny}) with $\ell=1$.
The resulting surjective $d_{m+1}^{m-2,m}$ map
must be an isomorphism since its domain, $E_2^{m-2,m}=H^{m-2}(BD_8;{\mathbb{F}_2})=
\langle m-1\rangle$, is isomorphic to its codomain.
The extra input we need in order
to deal with the rest of the differentials in~(\ref{difiny})
comes from the short exact sequences
\begin{equation}\label{spliteadas}
0\to\mbox{Coker}(2_{i})\to H^i(B(\mathrm{P}^m,2);{\mathbb{F}_2})\to\mbox{Ker}(2_{i+1})\to0
\end{equation}
obtained from the Bockstein long exact sequence
$$
\cdots\leftarrow H^i(B(\mathrm{P}^m,2);{\mathbb{F}_2})\stackrel{\pi_i}\leftarrow
H^i(B(\mathrm{P}^m,2))
$$
$$
\stackrel{2_i}\leftarrow
H^i(B(\mathrm{P}^m,2))\stackrel{\partial_i}\leftarrow H^{i-1}(B(\mathrm{P}^m,2);{\mathbb{F}_2})
\leftarrow\cdots.
$$
From the $E_2$-term of the spectral sequence we easily see
that $H^1(B(\mathrm{P}^m,2))=0$ and that $H^i(B(\mathrm{P}^m,2))$
is a finite $2$-torsion group for $1<i<2m-1$; let $r_i$ denote its $2$-rank. Then
Ker$(2_i)\cong{}$Coker$(2_i)\cong\langle r_i\rangle$, so
that~(\ref{spliteadas}),
Corollary~\ref{handel68B}, and an easy induction
(grounded by the fact that $\mbox{Ker}(2_{2m-1})=0$, in view
of the second assertion in
Proposition~\ref{orientabilidadB}) yield
\begin{equation}\label{ranks}
r_{2m-\ell}=\begin{cases}a+1,&\ell=2a;\\a,&\ell=2a+1;\end{cases}
\end{equation}
for $2\leq\ell\leq m-1$. Under these conditions, the $\ell$-th differential
in~(\ref{difiny}) takes the form
\begin{equation}\label{sharp}
\langle m-\ell\rangle\hspace{.5mm}{=}\hspace{.5mm}H^{m-\ell-1}(BD_8;{\mathbb{F}_2})
\to H^{2m-\ell}(BD_8)
\end{equation}
where $$H^{2m-\ell}(BD_8)
\,{=}\begin{cases}
\left\{m-\frac{\ell}2\right\},&\ell\equiv0\bmod4;\\
\left\langle m-\frac{\ell-2}2\right\rangle,&\ell\equiv2\bmod4;\\
\left\langle m-\frac{\ell+1}2\right\rangle,&\mbox{otherwise}.\\
\end{cases}
$$
But the cokernel of this map, which is a subgroup of $H^{2m-\ell}(B(\mathrm{P}^m,2))$,
must have 2-rank at most $r_{2m-\ell}$. An easy counting argument
(using the right exactness of the tensor product) shows that this is
possible only with an injective differential~(\ref{sharp})
which, in the case of $\ell\equiv0\bmod4$, yields an injective
map even after tensoring\footnote{This amounts to the fact that
twice the generator of the $\mathbb{Z}_4$-summand in~(\ref{sharp})
is not in the image of~(\ref{sharp})---compare to the proof of
Proposition~\ref{algebraico}.} with $\mathbb{Z}_2$.
Note that, in total dimensions at most $2m-2$,
the $E_{m+2}$-term of the spectral sequence is concentrated
on the base line ($q=0$). Thus, for $2\leq\ell\leq m-1$,
$H^{2m-\ell}(B(\mathrm{P}^m,2))$ is the cokernel of the
differential~(\ref{sharp})---which yields the surjectivity
asserted in Proposition~\ref{HFpar} in the range $m<i<2m-1$.
Furthermore, the kernel of $p^*\colon
H^{2m-\ell}(BD_8)\to H^{2m-\ell}(B(\mathrm{P}^m,2))$ is the elementary abelian 2-group
specified on the left hand side of~(\ref{sharp}). In fact, the observation in
the second half of the final assertion in the previous paragraph proves
Proposition~\ref{mono4}.
$\Box$
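As a quick consistency check of~(\ref{ranks}) against~(\ref{sharp}):
for $\ell=2$ the differential~(\ref{sharp}) is an injection
$\langle m-2\rangle\to\langle m\rangle$ of elementary abelian
$2$-groups, so its cokernel has $2$-rank $m-(m-2)=2$, which agrees with
$r_{2m-2}=a+1=2$ (take $\ell=2a$ with $a=1$ in~(\ref{ranks})).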
As indicated in the last paragraph of the previous proof, for
$2\leq\ell\leq m-1$ the CLSS
analysis identifies the group $H^{2m-\ell}(B(\mathrm{P}^m,2))$
as the cokernel of~(\ref{sharp}). Thus, the following algebraic
calculation of these groups not only gives us
an alternative approach to that
using the non-singularity of the torsion linking form, but it also allows
us to recover (for $m$ even and $G=D_8$) the three missing cases
in~(\ref{faltantes})---therefore completing the proof of
the $\mathrm{P}^{\mathrm{even}}$-case of Theorem~\ref{descripciondesaordenada}.
\begin{proposicion}\label{algebraico}
For $2\leq\ell\leq m-1$,
the cokernel of the differential~{\em(\ref{sharp})} is isomorphic to
$$
H^{2m-\ell}(B(\mathrm{P}^m,2))=\begin{cases}
\left\{\frac{\ell}2\right\},&\ell\equiv0\bmod4;\\
\left\langle\frac{\ell}2+1\right\rangle,&\ell\equiv2\bmod4;\\
\left\langle\frac{\ell-1}2\right\rangle,&\mbox{otherwise}.
\end{cases}
$$
\end{proposicion}
\begin{proof}
Cases with $\ell\not\equiv0\bmod4$ follow from a simple count,
so we only offer an argument for $\ell\equiv0\bmod4$. Consider the diagram
with exact rows
\begin{picture}(0,65)(6.5,-6)
\put(0,40){$0$}
\put(8,42){\vector(1,0){15}}
\put(26,40){$\langle m-\ell\rangle$}
\put(55,42){\vector(1,0){22}}
\put(80,40){$\left\{ m-\frac{\ell}2\right\}$}
\put(115,42){\vector(1,0){22}}
\put(140,40){$H^{2m-\ell}(B(\mathrm{P}^m,2))$}
\put(205,42){\vector(1,0){15}}
\put(224,40){$0$}
\put(0,3){$0$}
\put(8,5){\vector(1,0){15}}
\put(26,3){$\langle m-\ell\rangle$}
\put(55,5){\vector(1,0){17}}
\put(75,3){$\left\langle m-\frac{\ell}2+1\right\rangle$}
\put(122,5){\vector(1,0){33}}
\put(158,3){$\left\langle\frac{\ell}2+1\right\rangle$}
\put(185,5){\vector(1,0){35}}
\put(224,3){$0$}
\put(39,14){\line(0,1){20}}
\put(40.5,14){\line(0,1){20}}
\put(95,14){\vector(0,1){20}}
\put(93.5,14){\oval(3,3)[b]}
\put(172,14){\vector(0,1){20}}
\put(170.5,14){\oval(3,3)[b]}
\end{picture}
\noindent
where the top horizontal monomorphism is~(\ref{sharp}), and where the middle
group on the bottom is included in the top one as the elements annihilated by
multiplication by~$2$. The lower right group is $\langle\frac{\ell}2+1\rangle$
by a simple counting. The snake lemma shows that the right-hand-side vertical
map is injective with cokernel $\mathbb{Z}_2$; the resulting extension is nontrivial in
view of~(\ref{ranks}).
\end{proof}
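For the record, here is the simple count covering the cases
$\ell\not\equiv0\bmod4$ in the proof above: there both sides
of~(\ref{sharp}) are elementary abelian and the differential is
injective, so the cokernel is elementary abelian of rank
$$
\left(m-\tfrac{\ell-2}{2}\right)-(m-\ell)=\tfrac{\ell}{2}+1
\quad\mbox{for }\ell\equiv2\bmod4,
\qquad
\left(m-\tfrac{\ell+1}{2}\right)-(m-\ell)=\tfrac{\ell-1}{2}
\quad\mbox{for odd }\ell.
$$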
\begin{ejemplo}\label{muchkernel1}
For $m$ even,~\cite[Theorem~1.4~(D)]{idealvalued} identifies
three explicit elements in the kernel of $p^*\colon H^i(BD_8)\to
H^i(B(\mathrm{P}^m,2))$: one for each of $i=m+2$, $i=m+3$, and $i=m+4$.
In particular, this produces at most four basis elements in
the ideal Ker$(p^*)$ in dimensions at most $m+4$. However
we have just seen that, for $m+1\leq i\leq 2m-1$, the kernel of
$p^*\colon H^i(BD_8)\to H^i(B(\mathrm{P}^m,2))$ is an $\mathbb{F}_2$-vector space of dimension
$i-m$. This means that through dimensions at most $m+4$ (and with $m>4$)
there are at least six more basis elements remaining
to be identified in Ker$(p^*)$.
\end{ejemplo}
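The count in the example above can be spelled out as a quick arithmetic sketch. The following lines are ours (not part of the original argument) and simply restate the dimensions from the example:

```python
# Bookkeeping sketch (ours, not from the paper): for even m the kernel of
# p^* has dimension i - m in degree i (for m+1 <= i <= 2m-1), so degrees
# m+1, ..., m+4 contribute 1 + 2 + 3 + 4 = 10 basis elements in total;
# subtracting the (at most) four produced by [idealvalued] leaves >= 6.
dims = list(range(1, 5))        # the values i - m for i = m+1, ..., m+4
assert sum(dims) == 10
print("at least", sum(dims) - 4, "further basis elements in Ker(p^*)")
```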
We next turn to the case when $m$ is odd
(a hypothesis in force throughout the rest of the section)
assuming, from Lemma~\ref{d2m1} on, that
$m\equiv1\bmod4$.
\begin{nota}\label{elm1}
Since the $\mathrm{P}^1$-case in Proposition~\ref{mono4} and
Theorems~\ref{descripciondesaordenada} and~\ref{HF1m4} is elementary
(in view of Remark~\ref{m1} and Corollary~\ref{HBD}), we will implicitly
assume $m\neq1$.
\end{nota}
The CLSS of the $D_8$-action on $V_{m+1,2}$
now has a few extra complications that turn the analysis of
differentials into a
harder task. To begin with, we find a twisted system of
local coefficients (Theorem~\ref{D8actionV}). As a $\mathbb{Z}[D_8]$-module,
$H^{q}(V_{m+1,2})$ is:
\\
\begin{itemize}
\item $\mathbb{Z}$ for $q=0,m$;
\item $\mathbb{Z}_\alpha$ for $q=m-1,2m-1$;
\item the zero module otherwise.
\end{itemize}
\noindent
Thus, in total dimensions at most $2m-2$ the CLSS
is concentrated on the three horizontal lines with $q=0,m-1,m$.
[This is in fact the case in total dimensions at most $2m-1$, since
$H^0(BD_8;\mathbb{Z}_\alpha)=0$; this observation is not relevant for
the actual group $H^{2m-1}(B(\mathrm{P}^m,2))=\mathbb{Z}_2$---given in
the second assertion in Proposition~\ref{orientabilidadB}---,
but it will be relevant for the claimed surjectivity
of the map $p^*\colon H^{2m-1}(BD_8)\to H^{2m-1}(B(\mathrm{P}^m,2))$.]
In more detail, at the start of the CLSS
we have a copy of $H^*(BD_8)$ at
$q=0,m$, and a copy of $H^*(BD_8;\mathbb{Z}_\alpha)$ at $q=m-1$.
It is the extra horizontal
line at $q=m-1$ (not present for an even $m$) that leads to
potential $d_2$-differentials---from the ($q=m$)-line to the ($q=m-1$)-line.
Sorting these differentials out is the main difficulty (which we
have been able to overcome
only for $m\equiv1\bmod4$). Throughout the remainder of the section we
work in terms of this spectral sequence, making free use of the
description of its $E_2$-term coming from Corollaries~\ref{HBD} and~\ref{HBDT},
as well as of its $H^*(BD_8)$-module structure. Note that the latter property
implies that much of the global structure of the spectral sequence
is dictated by differentials on the three elements
\begin{itemize}
\item $x_m\in E_2^{0,m}=H^0(BD_8;H^m(V_{m+1,2}))=
H^0(BD_8;\mathbb{Z})=\mathbb{Z}$;
\item $\alpha_1\in E_2^{1,m-1}=
H^1(BD_8;H^{m-1}(V_{m+1,2}))=H^1(BD_8;\mathbb{Z}_\alpha)=\mathbb{Z}_2$;
\item $\alpha_2\in E_2^{2,m-1}=H^2(BD_8;H^{m-1}(V_{m+1,2}))=H^2(BD_8;
\mathbb{Z}_\alpha)=\mathbb{Z}_4$;
\end{itemize}
each of which is
a generator of the indicated group (notation is inspired by
that in Theorem~\ref{modulestructure} and in the proof of
Theorem~\ref{D8actionV}---for even $n$).
\begin{lema}\label{d2m1}
For $m\equiv1\bmod4$ and $m\geq 5$, the nontrivial $d_2$-differentials are
given by $d_2^{\hspace{.25mm}4i,m}(\kappa_4^i x_m)=2\kappa_4^i\alpha_2$
for $i\geq0$.
\end{lema}
\begin{proof}
The only potentially nontrivial $d_2$-differentials
originate at the ($q=m$)-line and, in view of the module structure, all
we need to show is that
\begin{equation}\label{dif2}
d_2\colon E_2^{0,m}\to E_2^{2,m-1}\,\mbox{ has }\,d_2(x_m)=2\alpha_2
\end{equation}
(here and in what follows we omit superscripts of differentials).
Let $m=4a+1$. Since $H^{2m-1}(B(\mathrm{P}^m,2))=\langle 1\rangle$,
most of the elements in $E_2^{2m-1,0}=\langle 4a\rangle$ must be
wiped out by differentials. The only differentials landing in a
$E_r^{2m-1,0}$ (that originate at a nonzero group) are
\begin{equation}\label{injdifs1}
d_m\colon E_m^{m-1,m-1}\to E_m^{2m-1,0}\mbox{ \ \ and \ \ }
d_{m+1}\colon E_{m+1}^{m-2,m}\to E_{m+1}^{2m-1,0}.
\end{equation}
But $E_2^{m-1,m-1}=\langle 2a\rangle$ and $E_{2}^{m-2,m}=\langle 2a-1\rangle$,
so that rank considerations imply
\begin{equation}\label{cps1}
E_{2}^{m-2,m}=E_{m+1}^{m-2,m},
\end{equation}
with the two differentials in~(\ref{injdifs1})
injective. In particular we get that
\begin{equation}\label{infty1}
H^{2m-1}(B(\mathrm{P}^m,2))=\langle 1\rangle \mbox{ comes from }
E_\infty^{2m-1,0}=\langle 1\rangle.
\end{equation}
Furthermore,~(\ref{cps1}) and
the $H^*(BD_8)$-module structure in the spectral sequence
imply that the differential in~(\ref{dif2}) cannot be surjective.
It remains to show that the differential in~(\ref{dif2}) is nonzero.
We shall obtain a contradiction by assuming that
$d_2(x_m)=0$, so that every element in the
($q=m$)-line is a $d_2$-cycle.
Since $H^{2m}(B(\mathrm{P}^m,2))=0$, all of $E_2^{2m,0}=\langle 4a+2\rangle$ must be
wiped out by differentials, and under the current hypothesis the only possible
such differentials would be $$d_m\colon E_m^{m,m-1}=E_2^{m,m-1}=\langle
2a+1\rangle\to E_m^{2m,0}=E_2^{2m,0}$$ and $$d_{m+1}\colon E_{m+1}^{m-1,m}=
E_2^{m-1,m}=\langle 2a\rangle\oplus\mathbb{Z}_4\to
E_{m+1}^{2m,0}\,$$---indeed, $E_2^{0,2m-1}=H^0(BD_8;\mathbb{Z}_\alpha)=0$.
Thus, the former differential would have to be injective while the
latter one would have to be surjective with a $\mathbb{Z}_2$ kernel. But there
are no further differentials that could kill the resulting
$E_{m+2}^{m-1,m}=\langle1\rangle$, in contradiction to~(\ref{infty1}).
{\cal E}nd{proof}
\begin{nota}\label{criticals}
In the preceding proof we
made crucial use of the $H^*(BD_8)$-module
structure in the spectral sequence in order to handle $d_2$-differentials.
We show next that, just as in the proof of Proposition~\ref{HFpar}
for $G=D_8$, many of the properties of all
higher differentials in the case $m\equiv1\bmod4$ follow from
the ``size'' of the resulting $E_3$-term.
\end{nota}
\noindent{\em Proof of Theorem~{\em\ref{HF1m4}}
for $G=D_8$, and of Proposition~{\em\ref{mono4}}, both for
$m\hspace{.5mm}{\equiv}\hspace{.5mm}1\bmod4$.}
The $d_2$-differentials in Lemma~\ref{d2m1} replace every instance of a
$\mathbb{Z}_4$-group in the ($q=m-1$) and ($q=m$)-lines of the $E_2$-term
by a $\mathbb{Z}_2$-group. This describes the $E_3$-term, the starting
stage of the CLSS in the following considerations
(note that the $E_3$-term agrees with the $E_m$-term).
With this information the idea of the proof
is formally the same as that in the case of an even $m$,
namely: a little input from the
Bockstein long exact sequence for $B(\mathrm{P}^m,2)$ forces the injectivity of
all relevant higher differentials
(we give the explicit details for the reader's benefit).
Let $m=4a+1$ (recall we are assuming $a\geq1$).
The crux of the matter is showing that the differentials
\begin{equation}\label{difiny2}
d_m \colon E_3^{m-\ell,m-1}\to E_3^{2m-\ell,0}\;\mbox{ with }\;
\ell=0,1,2,\ldots,m
\end{equation}
and
\begin{equation}\label{difiny3}
d_{m+1} \colon E_3^{m-\ell-1,m}\to E_{m+1}^{2m-\ell,0}\;
\mbox{ with }\;\ell=0,1,2,\ldots,m-1
\end{equation}
are injective and never hit the generator of a $\mathbb{Z}_4$-group twice.
This assertion has already
been shown for $\ell=1$ in the paragraph containing~(\ref{injdifs1}).
Likewise, the assertion for $\ell=0$ follows from~(\ref{infty1}) with
the same counting argument as the one
used in the final paragraph of the proof of
Lemma~\ref{d2m1}. Furthermore, the case $\ell=m$ in~(\ref{difiny2}) is obvious
since $E_3^{0,m-1}=H^0(BD_8;\mathbb{Z}_\alpha)=0$. However, since $E_3^{0,m}=
H^0(BD_8)=\mathbb{Z}$ and $E_3^{m+1,0}=H^{m+1}(BD_8)=\langle2a+2\rangle$,
the injectivity assertion needs to be suitably interpreted for $\ell=m-1$
in~(\ref{difiny3}); indeed, we will prove that
\begin{equation}\label{sutilezainjectiva}
d_{m+1}\colon E_3^{0,m}\to E_{m+1}^{m+1,0}
\end{equation}
\mbox{ \ yields an injective map {\it after} tensoring with $\mathbb{Z}_2$.}
\\
\indent From the $E_3$-term of the spectral sequence
we easily see that $$H^m(B(\mathrm{P}^m,2))$$ is the direct
sum of a copy of $\mathbb{Z}$ and a finite $2$-torsion group, while
$H^i(B(\mathrm{P}^m,2))$ is a finite $2$-torsion group for $i\neq0,m$.
We consider the analogue of~(\ref{spliteadas}), the
short exact sequences
\begin{equation}\label{spliteadas2}
0\to\mbox{Coker}(2_{i})\to H^i(B(\mathrm{P}^m,2);\mathbb{F}_2)\to\mbox{Ker}(2_{i+1})\to0,
\end{equation}
working here and below in the range $m+1\leq i\leq 2m-2$.
Let $r_i$ denote the $2$-rank of (the torsion subgroup of) $H^i(B(\mathrm{P}^m,2))$,
so that Ker$(2_i)\cong{}$Coker$(2_i)\cong\langle r_i\rangle$.
Then Corollary~\ref{handel68B},~(\ref{spliteadas2}),
and an easy induction (grounded by the fact that
$\mbox{Ker}(2_{2m-1})=\langle1\rangle$, which in turn comes from
the second assertion in
Proposition~\ref{orientabilidadB}) yield that
\begin{equation}\label{rangoagain}
\mbox{$r_{2m-\ell}$ is the integral part of
$\frac{\ell+1}2$ for $2\leq\ell\leq m-1$}.
\end{equation}
\indent Now, in the range of~(\ref{rangoagain}), Lemma~\ref{d2m1}
and Corollaries~\ref{HBD} and~\ref{HBDT} give
\begin{eqnarray*}
E_3^{m-\ell,m-1} & = & \begin{cases}
\left\langle 2a+1-\frac\ell2\right\rangle,&
\ell\mbox{ even};\\
\left\langle 2a-\frac{\ell-1}2\right\rangle,&\ell\mbox{ odd};
\end{cases}\\
E_3^{m-\ell-1,m} & = & \begin{cases}
\mathbb{Z}\,,&\ell=m-1;\\
\left\langle 2a+1-\frac\ell2\right\rangle,&
\ell\mbox{ even},\,\ell<m-1;\\
\left\langle 2a-\frac{\ell+1}2\right\rangle,&\ell\mbox{ odd};
\end{cases}\\
E_3^{2m-\ell,0} & = & \begin{cases}
\left\langle 4a+2-\frac\ell2\right\rangle,&\ell\equiv0\bmod4;\\
\left\{ 4a+1-\frac{\ell}2\right\},&
\ell\equiv2\bmod4;\\
\left\langle 4a-\frac{\ell-1}2\right\rangle,&\mbox{otherwise};
\end{cases}
\end{eqnarray*}
and since $E_{m+2}^{2m-\ell,0}$ has $2$-rank at most $r_{2m-\ell}$
(indeed, $E_{m+2}^{2m-\ell,0}=E_\infty^{2m-\ell,0}$, which is a subgroup of
$H^{2m-\ell}(B(\mathrm{P}^m,2))$), an easy counting argument (using, as in the case
of an even $m$, the right exactness of the tensor product)
gives that the differentials in~(\ref{difiny2}) and~(\ref{difiny3}) must yield
an injective map after tensoring with $\mathbb{Z}_2$. In particular they
\begin{itemize}
\item[(a)] must be injective on the nose, except for the case discussed
in~(\ref{sutilezainjectiva});
\item[(b)] cannot hit the generator of a $\mathbb{Z}_4$-summand twice.
\end{itemize}
The already observed equalities $E_2^{0,2m-1}=H^0(BD_8;\mathbb{Z}_\alpha)=0$
together with~(a) above imply that, in total dimensions $t$ with $t\leq2m-1$
and $t\neq m$, the $E_{m+2}$-term of the spectral sequence is concentrated
on the base line ($q=0$), while at higher lines ($q>0$)
the spectral sequence only has a $\mathbb{Z}$-group---at node $(0,m)$.
This situation yields Theorem~\ref{HF1m4}, while~(b) above yields
Proposition~\ref{mono4}.
$\Box$
A direct calculation (left to the
reader) using the proved behavior of the differentials in~(\ref{difiny2})
and~(\ref{difiny3})---and using (twice) the analogue of
Proposition~\ref{algebraico} when $\ell\equiv2\bmod4\,$---gives
$$
H^{2m-\ell}(B(\mathrm{P}^m,2))=\begin{cases}
\left\langle\frac\ell2\right\rangle,&\ell\equiv0\bmod4;\\
\left\{ \frac\ell2-1 \right\},&\ell\equiv2\bmod4;\\
\left\langle\frac{\ell+1}2\right\rangle,&\mbox{otherwise;}
\end{cases}
$$
for $2\leq\ell\leq m-1$. Thus, as the reader can easily check using
Corollaries~\ref{HBD} and~\ref{HBDT}, instead of the symmetry
isomorphisms exemplified in Table~\ref{tabla}, the cohomology groups
of $B(\mathrm{P}^m,2)$ are now formed (as predicted by the
isomorphisms~(\ref{linkisos})
of the previous section) by a combination of $H^*(BD_8)$ and
$H^*(BD_8;\mathbb{Z}_\alpha)$---in the lower and upper halves, respectively.
Once again, the CLSS analysis not only offers an alternative
to the (torsion linking form) arguments in the previous section,
but it allows us to recover, under the present hypotheses, the
torsion subgroup in the three missing dimensions in~(\ref{faltantes}).
\begin{ejemplo}\label{muchkernel2}
For $m\equiv1\bmod4$,~\cite[Theorem~1.4~(D)]{idealvalued} identifies
two explicit elements in the kernel of $p^*\colon H^i(BD_8)\to
H^i(B(\mathrm{P}^m,2))$: one for each of $i=m+1$ and $i=m+3$.
In particular, this produces at most three basis elements in
the ideal Ker$(p^*)$ in dimensions at most $m+3$. However it follows from
the previous spectral sequence analysis that, for $m+1\leq i\leq 2m-1$,
the kernel of $p^*\colon H^i(BD_8)\to H^i(B(\mathrm{P}^m,2))$ is an $\mathbb{F}_2$-vector space
of dimension $i-m+(-1)^i$.
This means that through dimensions at most $m+3$ (and with $m\geq5$)
there are at least four more basis elements remaining
to be identified in Ker$(p^*)$.
\end{ejemplo}
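As with the earlier example, the count above can be restated as a short arithmetic sketch. The lines below are ours (not part of the original argument); the sample value of $m$ is just an illustration:

```python
# Bookkeeping sketch (ours, not from the paper): for m = 4a+1 the kernel of
# p^* has dimension i - m + (-1)^i in degree i, so degrees m+1, m+2, m+3
# (with m odd) contribute 2 + 1 + 4 = 7 basis elements; subtracting the
# (at most) three identified in [idealvalued] leaves at least four.
m = 13                                    # any m = 4a+1 with m >= 5 works
dims = [i - m + (-1) ** i for i in range(m + 1, m + 4)]
assert dims == [2, 1, 4]
print("at least", sum(dims) - 3, "further basis elements in Ker(p^*)")
```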
\section{Case of $B(\mathrm{P}^{4a+3},2)$}\label{problems3}
We now discuss some aspects of the spectral sequence of the previous section
in the unresolved case $m\equiv3\bmod4$.
Although we are unable to describe the pattern of differentials
for such $m$, we show that
enough information can be collected to not only resolve
the three missing cases in~(\ref{faltantes}), but
also to conclude the proof of Theorem~\ref{HF1m4} for $G=D_8$.
Unless explicitly stated otherwise,
the hypothesis $m\equiv3\bmod4$ will be in force throughout the section.
\begin{nota}\label{lam3}
The main problem that has prevented us from fully
understanding the spectral sequence of this section comes from the
apparent fact that the algebraic input coming from the
$H^*(BD_8)$-module structure in the CLSS---the crucial
property used in the proof of Lemma \ref{d2m1}---does not
give us enough information in order to determine the
pattern of $d_2$-differentials. New geometric insights
seem to be needed instead. Although
it might be tempting to conjecture the validity
of Lemma~\ref{d2m1} for $m\equiv 3\bmod 4$, we have not found
concrete evidence supporting such a possibility.
In fact, a careful analysis of the possible
behaviors of the spectral sequence for $m=3$ (performed in
Section~\ref{antesultima}) does not even give an aesthetic
reason for leaning toward the possibility that
Lemma~\ref{d2m1} holds in the current congruence. A second problem
arose in~\cite{v1} when we noted that,
even if the pattern of $d_2$-differentials were known for
$m\equiv3\bmod4$, there would seem to be a slight indeterminacy
either in a few higher differentials (if Lemma~\ref{d2m1} holds for
$m\equiv3\bmod4$), or in a few possible extensions among the $E_\infty^{p,q}$
groups (if Lemma~\ref{d2m1} actually fails for $m\equiv3\bmod4$).
Even though we cannot resolve
the current $d_2$-related ambiguity, in~\cite[Example~6.4]{v1} we note that,
at least for $m=3$, it is possible to overcome the above mentioned problems
about higher differentials or possible extensions
by making use of the explicit description of $H^4(B(\mathrm{P}^3,2))$---given later
in the section (considerations previous to Remark~\ref{expander})
in regard to the claimed surjectivity
of~(\ref{excepcional}); see also~\cite{taylor},
where advantage is taken of the fact
that $\mathrm{P}^3$ is a group. The explicit possibilities in the case of
$\mathrm{P}^3$ are discussed in Section~\ref{antesultima}.
\end{nota}
In the first result of this section,
Theorem~\ref{HF1m4} for $G=D_8$ and $m\equiv3\bmod4$, we show that,
despite the previous comments, the spectral sequence approach
can still be used to compute $H^*(B(\mathrm{P}^{4a+3},2))$ just beyond the middle
dimension (i.e., just before the first problematic $d_2$-differential
plays a decisive role). In particular, this computes the corresponding
groups in the first two of the three missing cases in~(\ref{faltantes}).
\begin{proposicion}\label{dimm}
Let $m=4a+3$. The map $H^i(BD_8)\to H^i(B(\mathrm{P}^m,2))$
induced by~{\em(\ref{lap})} is:
{\em \begin{enumerate}
\item {\em an isomorphism for $i<m;$}
\item {\em a monomorphism onto the torsion subgroup of $H^i(B(\mathrm{P}^m,2))=
\langle 2a+1\rangle\oplus\mathbb{Z}$ for $i=m;$}
\item {\em the zero map for $\,2m-1<i$.}
\end{enumerate}}
\end{proposicion}
\begin{proof}
The argument parallels that used in the analysis of the CLSS when
$m\equiv1\bmod4$. Here is the chart of the current $E_2$-term through
total dimensions at most $m+1$:
\begin{picture}(0,91)(-17,-16)
\qbezier[100](0,0)(100,0)(200,0)
\qbezier[40](0,0)(0,35)(0,70)
\multiput(-2.5,-3)(25,0){1}{$\mathbb{Z}$}
\put(22.5,-13){\scriptsize$1$}
\put(47.5,-13){\scriptsize$2$}
\put(-25,37.5){\scriptsize$m-1$}
\put(-18,57.5){\scriptsize$m$}
\put(43.5,-3){$\langle2\rangle$}
\put(19.5,37.1){$\langle1\rangle$}
\put(45,37.1){$\mathbb{Z}_4$}
\put(-3,56.8){$\mathbb{Z}$}
\put(87,-11){\scriptsize$\cdots$}
\put(127,-13){\scriptsize$m-1$}
\put(157.5,-13){\scriptsize$m$}
\put(179.5,-13){\scriptsize$m+1$}
\put(131,-4){\Huge$\star$}
\put(157,-3){\Large$\bullet$}
\put(185,-1.7994){$\rule{2.2mm}{2.2mm}$}
\end{picture}
\noindent The star at node $(m-1,0)$ stands for $\langle2a+2\rangle$;
the bullet at node $(m,0)$ stands for $\langle2a+1\rangle$;
the solid box at node $(m+1,0)$ stands for
$\{2a+2\}$. In this range there are only three possibly nonzero
differentials:
\begin{itemize}
\item a $d_2$ from node $(0,m)$ to node $(2,m-1)$;
\item a $d_{m}$ from node $(1,m-1)$ to node $(m+1,0)$;
\item a $d_{m+1}$ from node $(0,m)$ to node $(m+1,0)$.
\end{itemize}
Whatever these $d_2$ and $d_{m+1}$ are, there will be a resulting
$E_\infty^{0,m}=\mathbb{Z}$. On the other hand, the argument about 2-ranks
in~(\ref{spliteadas}) and in~(\ref{spliteadas2}), leading respectively
to~(\ref{ranks}) and~(\ref{rangoagain}), now yields that the torsion 2-group
$H^{m+1}(B(\mathrm{P}^m,2))$ has 2-rank $2a+1$. Since $E_\infty^{m+1,0}$ is a subgroup of
$H^{m+1}(B(\mathrm{P}^m,2))$, this forces the two differentials $d_{m}$ and $d_{m+1}$ above
to be nonzero, each one with cokernel of 2-rank one less than the 2-rank of its
codomain. In fact, $d_{m}$ must have cokernel isomorphic to $\{2a+1\}$,
whereas the cokernel of $d_{m+1}$ is either $\{2a\}$ or $\langle2a+1\rangle$
(Remark~\ref{expander}, and especially~\cite[Example~6.4]{v1},
expand on these
possibilities). What matters here is the forced injectivity of $d_m$, which
implies $E_\infty^{1,m-1}=0$ and, therefore, the second assertion of the
proposition---the first assertion is obvious from the
CLSS, while the third
one is elementary.
\end{proof}
We now start work on the only groups in
Theorem~\ref{descripciondesaordenada} not
yet computed, namely $H^{m+1}(B(\mathrm{P}^m,2))$ for $m=4a+3$.
As indicated in the previous proof,
these are torsion 2-groups of 2-rank $2a+1$.
Furthermore,~(\ref{excepcional}) and Corollary~\ref{HBDT}
show that each such group contains a copy of $\{2a\}$,
a 2-group of the same 2-rank as that of $H^{m+1}(B(\mathrm{P}^m,2))$. In showing that
the two groups actually agree (thus completing the proof of
Theorem~\ref{descripciondesaordenada}), a key fact
comes from Fred Cohen's observation
(recalled in the paragraph previous to Remark~\ref{m1}) that {\it there
are no elements of order $8$}. For instance,
\begin{eqnarray}\label{casoespecial}
\raisebox{1.5ex}{\mbox{when $m=3$ the two groups must agree since}}
\\\noalign{\vskip -2.5mm}
\raisebox{2ex}{\mbox{both are cyclic (i.e., have 2-rank 1).}\hspace{15.5mm}}
\nonumber
\end{eqnarray}
\noindent
In order to deal with the situation for positive values of $a$,
Cohen's observation is coupled with
a few computations in the first two pages of
the Bockstein spectral sequence (BSS) for $B(\mathrm{P}^m,2)$: we will show that
there is only one copy of $\mathbb{Z}_4$ (the one coming from the
subgroup $\{2a\}$) in the decomposition of $H^{m+1}(B(\mathrm{P}^m,2))$ as a sum of
cyclic 2-groups---forcing $H^{m+1}(B(\mathrm{P}^m,2))=\{2a\}$.
\begin{nota}\label{expander}
Before undertaking the BSS calculations (in Proposition~\ref{sq1} below),
we pause to observe that, unlike the Bockstein input in all the
previous CLSS-related proofs, the use of the BSS
does not seem to give quite enough information in order to understand the
pattern of $d_2$-differentials in the current CLSS. Much of the problem
lies in being able to decide the actual cokernel of the
$d_{m+1}$-differential in the previous proof and, consequently,
understand how the $\mathbb{Z}_4$-group in $H^{m+1}(B(\mathrm{P}^m,2))$ arises
in the current CLSS; either entirely at the $q=0$ line
(as in all cases of the previous---and the next---section), or as a nontrivial
extension in the $E_\infty$ chart. The final section of the paper discusses
in detail these possibilities in the case $m=3$---which should be compared
to the much simpler situation in Example~\ref{babyexample}.
\end{nota}
Recall from~\cite{feder,handel68} that the mod 2 cohomology ring of
$B(\mathrm{P}^m,2)$ is polynomial on three classes $x$, $x_1$, and $x_2$, of respective
dimensions 1, 1, and 2, subject to the three relations
\begin{itemize}
\item[(I)] $\quad x^2=xx_1$;
\item[(II)] $\quad\hspace{1.5mm} \displaystyle\sum_{0\leq i\leq \frac{m}{2}}
\binom{m-i}{i}x_1^{m-2i}x_2^i=0$;
\item[(III)] $\quad \displaystyle\sum_{0\leq i\leq \frac{m+1}{2}}\binom{m+1-i}{i}
x_1^{m+1-2i}x_2^i=0$.
\end{itemize}
Further, the action of $\mathrm{Sq}^1$ is determined by (I) and
\begin{equation}\label{sq1y}
\mathrm{Sq}^1 x_2=x_1x_2.
\end{equation}
[The following observations---proved
in~\cite{feder,handel68}, but not needed in this paper---might help
the reader to assimilate the facts just described:
The three generators $x$, $x_1$, and $x_2$
are in fact the images under the map $p_{m,D_8}$
in~(\ref{lap}) of the corresponding classes at the beginning of
Section~\ref{HBprelim}. In turn, the latter generators $x_1$ and $x_2$
come from the Stiefel-Whitney classes $w_1$ and $w_2$ in $B\mathrm{O}(2)$
under the classifying map for the inclusion $D_8\subset\mathrm{O}(2)$.
In these terms,~(\ref{sq1y}) corresponds to the (simplified in $B\mathrm{O}
(2)$) Wu formula $\mathrm{Sq}^1(w_2)=w_1w_2$. Finally, the two relations (II) and
(III) correspond to the fact that the two dual Stiefel-Whitney classes
$\overline{w}_m$ and $\overline{w}_{m+1}$ in $B\mathrm{O}(2)$ generate the
kernel of the map induced by the Grassmann inclusion $G_{m+1,2}\subset
B\mathrm{O}(2)$.]
Let $R$ stand for the subring generated by $x_1$
and $x_2$, so that there is an additive splitting
\begin{equation}\label{splitting}
H^*(B(\mathrm{P}^m,2);\mathbb{F}_2)=R\oplus x\cdot R
\end{equation}
which is compatible with the action of $\mathrm{Sq}^1$ (note that multiplication by $x$
determines an additive isomorphism $R\cong x\cdot R$).
\begin{proposicion}\label{sq1}
Let $m=4a+3$. With respect to the differential $\mathrm{Sq}^1:$
\begin{itemize}
\item $H^{m+1} (R; \mathrm{Sq}^1)=\mathbb{Z}_2$.
\item $H^{m+1} (x\cdot R; \mathrm{Sq}^1) = 0$.
\end{itemize}
\end{proposicion}
Before proving this result, let us indicate how it can be used to show
that~(\ref{excepcional}) is an isomorphism for $m=4a+3$. As explained in
the paragraph containing~(\ref{casoespecial}), we must have
\begin{equation}\label{laerre}
2\cdot H^{4a+4}(B(\mathrm{P}^{4a+3},2))=\langle r\rangle\quad\mbox{with}\quad r\geq1
\end{equation}
and we need to show that $r=1$ is in fact the case.
Consider the Bockstein exact couple
\begin{picture}(0,50)(41,-37)
\put(33.5,0){$H^*(B(P^{4a+3},2))$}
\put(96,3){\vector(1,0){110}}
\put(150,6){\scriptsize$2$}
\put(208,0){$H^*(B(P^{4a+3},2))$}
\put(207,-4){\vector(-2,-1){30}}
\put(195,-15){\scriptsize$\rho$}
\put(113,-30){$H^*(B(P^{4a+3},2);\hbox{$\mathbb{F}$}_2).$}
\put(120,-20){\vector(-2,1){30}}
\put(100,-18){\scriptsize$\delta$}
\end{picture}
\noindent
In the (unravelled) derived exact couple
{\small
\begin{eqnarray*}
\cdots \to2\cdot H^{4a+4}(B(P^{4a+3},2)) \stackrel{2}{\rightarrow} 2\cdot H^{4a+4} (B(P^{4a+3},2))\to\quad\ \\[2mm]
\rightarrow H^{4a+4} (H^*(B(P^{4a+3},2); \mathbb{F}_2);\mathrm{Sq}^1)\rightarrow 2\cdot H^{4a+5} (B(P^{4a+3},2))\rightarrow \cdots
\end{eqnarray*}
}
we have $2\cdot H^{4a+5}(B(P^{4a+3},2))=0$ since $H^{4a+5} (B(P^{4a+3},2))=
\langle 2a+1\rangle$---argued in Section~\ref{linkinsection} by means of the
(twisted) torsion linking form. Together with~(\ref{laerre}), this implies
that the map
\begin{equation}\label{lacentral}
\langle r\rangle=2\cdot H^{4a+4} (B(P^{4a+3},2))\to
H^{4a+4} (H^*(B(P^{4a+3},2);\mathbb{F}_2);\mathrm{Sq}^1)
\end{equation}
in the above exact sequence is
an isomorphism. Proposition~\ref{sq1} and~(\ref{splitting}) then imply the
required conclusion $r=1$.
\begin{proof}[Proof of Proposition~{\em\ref{sq1}}] Note that every binomial
coefficient in (II) with $i\not\equiv0\bmod4$ is congruent to zero mod $2$.
Therefore relation (II) can be rewritten as
\begin{equation}
\label{laI}
x_1^{4a+3} = \sum^{a/2}_{j=1} \binom{a-j}{j}x_1^{4(a-2j)+3} x_2^{4j}.
\end{equation}
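As a quick independent check of the parity claim behind this rewriting of relation (II), the following sketch (ours, not part of the proof) tests the coefficients directly:

```python
from math import comb

# Independent sanity check (ours): for m = 4a+3, every coefficient
# binom(m-i, i) appearing in relation (II) with i != 0 (mod 4) is even,
# which is exactly what allows relation (II) to be rewritten as above.
def claim_II_holds(a):
    m = 4 * a + 3
    return all(comb(m - i, i) % 2 == 0
               for i in range(m // 2 + 1) if i % 4 != 0)

assert all(claim_II_holds(a) for a in range(50))
print("parity claim for relation (II) verified for a < 50")
```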
Likewise, every binomial coefficient in (III) with $i\equiv 3\bmod4$ is
congruent to zero mod $2$. Then, taking into account~(\ref{laI}), relation
(III) becomes
{\small
\begin{eqnarray}\label{laII}
x_2^{2a+2} &=& x_1^{4a+4} +\sum_{i\in\Lambda}
\binom{4a+4-i}{i}x_1^{4a+4-2i} x_2^i \\
&=& \sum^{a/2}_{j=1} \binom{a-j}{j}x_1^{4(a-2j)+4} x_2^{4j} +
\sum_{i\in\Lambda}\binom{4a+4-i}{i}x_1^{4a+4-2i}x_2^i \nonumber
\end{eqnarray}}
where $\Lambda$ is the set of integers $i$ with $1\leq i\leq2a+1$ and
$i\not\equiv3\bmod4$. Using (\ref{laI}) and (\ref{laII}) it is a simple
matter to write down a basis for $R$ and $x\cdot R$ in dimensions $4a+3$,
$4a+4$, and $4a+5$. The information is summarized (under the assumption $a>0$,
which is no real restriction in view of~(\ref{casoespecial}))
in the following chart,
where elements in a column form a basis in the indicated dimension, and where
crossed-out terms can be expressed as linear combinations of the others
in view of~(\ref{laI}) and~(\ref{laII}).
{\unitlength=0.79cm\scriptsize
\begin{picture}(0,9.2)(0.76,-7.6)
\put(0,.65){\normalsize$4a+3$}
\put(0,0){$x_1^{4a+3}$}
\put(-.05,-.1){\line(2,1){.9}}
\put(0,-.5){$x_1^{4a+1}x_2$}
\put(0,-1){$x_1^{4a-1}x_2^2$}
\put(0,-1.5){$x_1^{4a-3}x_2^3$}
\put(.5,-2){$\vdots$}
\put(0,-2.5){$x_1^3x_2^{2a}$}
\put(0,-3){$x_1x_2^{2a+1}$}
\put(1.5,-.4){\vector(1,0){1}}
\put(2.73,-.5){$0$}
\put(1.5,-1.4){\vector(1,0){1}}
\put(2.73,-1.5){$0$}
\put(1.5,-2.9){\vector(1,0){1}}
\put(2.73,-3){$0$}
\put(1.5,-.9){\vector(1,0){5}}
\put(1.5,-2.4){\vector(1,0){5}}
\put(8.5,-1.4){\vector(1,0){4.5}}
\put(8.5,-2.9){\vector(1,0){4.5}}
\put(7,.65){\normalsize$4a+4$}
\put(7,0){$x_1^{4a+4}$}
\put(6.95,-.1){\line(2,1){.9}}
\put(6.95,-3.6){\line(2,1){.9}}
\put(7,-.5){$x_1^{4a+2}x_2$}
\put(7,-1){$x_1^{4a}x_2^2$}
\put(7,-1.5){$x_1^{4a-2}x_2^3$}
\put(7.5,-2){$\vdots$}
\put(7,-2.5){$x_1^4x_2^{2a}$}
\put(7,-3){$x_1^2x_2^{2a+1}$}
\put(7,-3.5){$x_2^{2a+2}$}
\put(13.5,.65){\normalsize$4a+5$}
\put(13.5,0){$x_1^{4a+5}$}
\put(13.45,-.1){\line(2,1){.9}}
\put(13.5,-.5){$x_1^{4a+3}x_2$}
\put(13.45,-.6){\line(4,1){1.2}}
\put(13.45,-3.57){\line(4,1){1.2}}
\put(13.5,-1){$x_1^{4a+1}x_2^2$}
\put(13.5,-1.5){$x_1^{4a-1}x_2^3$}
\put(13.5,-2){$x_1^{4a-3}x_2^4$}
\put(14,-2.5){$\vdots$}
\put(13.5,-3){$x_1^3x_2^{2a+1}$}
\put(13.5,-3.5){$x_1x_2^{2a+2}$}
\multiput(0,-3.85)(.1,0){147}{\circle*{.05}}
\end{picture}
\begin{picture}(0,0)(0.76,-7.6)
\put(0,-4){$xx_1^{4a+2}$}
\put(0,-4.5){$xx_1^{4a}x_2$}
\put(0,-5){$xx_1^{4a-2}x_2^2$}
\put(.5,-5.8){$\vdots$}
\put(0,-6.5){$xx_1^2x_2^{2a}$}
\put(0,-7){$xx_2^{2a+1}$}
\put(1.5,-4.9){\vector(1,0){5}}
\put(1.5,-4.4){\vector(1,0){1}}
\put(1.5,-6.4){\vector(1,0){5}}
\put(2.73,-4.5){$0$}
\put(1.5,-6.9){\vector(1,0){1}}
\put(2.73,-7){$0$}
\put(8.5,-4.4){\vector(1,0){4.5}}
\put(8.5,-5.4){\vector(1,0){4.5}}
\put(8.5,-6.9){\vector(1,0){4.5}}
\put(6.95,-4.07){\line(4,1){1.2}}
\put(7,-4){$xx_1^{4a+3}$}
\put(7,-4.5){$xx_1^{4a+1}x_2$}
\put(7,-5){$xx_1^{4a-1}x_2^2$}
\put(7,-5.5){$xx_1^{4a-3}x_2^3$}
\put(7.5,-6){$\vdots$}
\put(7,-6.5){$xx_1^3x_2^{2a}$}
\put(7,-7){$xx_1x_2^{2a+1}$}
\put(13.45,-4.07){\line(4,1){1.1}}
\put(13.45,-7.57){\line(4,1){1.1}}
\put(13.5,-4){$xx_1^{4a+4}$}
\put(13.5,-4.5){$xx_1^{4a+2}x_2$}
\put(13.5,-5){$xx_1^{4a}x_2^2$}
\put(13.5,-5.5){$xx_1^{4a-2}x_2^3$}
\put(14,-6.3){$\vdots$}
\put(13.5,-7){$xx_1^2x_2^{2a+1}$}
\put(13.5,-7.5){$xx_2^{2a+2}$}
\end{picture}}
\noindent
The top and bottom portions of the chart (delimited
by the horizontal dotted line) correspond to $R$ and
$x\cdot R$, respectively. Horizontal arrows indicate
$\mathrm{Sq}^1$-images, which are easily computable
from~(\ref{sq1y}) and (I): $$\mathrm{Sq}^1(x^i x_1^{i_1}x_2^{i_2})=0$$ when
$i+i_1+i_2$ is even, while $$\mathrm{Sq}^1(x^i x_1^{i_1}x_2^{i_2})=
x^i x_1^{i_1+1}x_2^{i_2}$$ when $i+i_1+i_2$ is odd---here $i\in\{0,1\}$ in
view of (I) above. There are only two basis elements,
in dimensions $4a+3$ and $4a+4$, whose
$\mathrm{Sq}^1$-images are not indicated in the chart: $xx_1^{4a+2}\in (x\cdot R)^{4a+3}$
and $x_1^{4a+2}x_2 \in R^{4a+4}$. The second conclusion in the proposition is
evident from the bottom part of the chart---no matter what the $\mathrm{Sq}^1$-image of
$xx_1^{4a+2}$ is. On the other hand, the top portion of the
chart implies that, in dimension
$4a+4$, Ker$(\mathrm{Sq}^1)$ and Im$(\mathrm{Sq}^1)$ are elementary $2$-groups whose ranks satisfy
$$
\mathrm{rk}(\mathrm{Ker} (\mathrm{Sq}^1)) =\mathrm{rk}(\mathrm{Im}(\mathrm{Sq}^1))+\varepsilon
$$
with $\varepsilon =1$ or $\varepsilon =0$ (depending on whether or not
$\mathrm{Sq}^1(x_1^{4a+2}x_2)$ can be written down as a linear combination of the
elements $x_1^{4a-1}x_2^3$, $x_1^{4a-5}x_2^5,\ldots,$ and $x_1^3
x_2^{2a+1}$---this of course depends on the actual binomial coefficients
in~(\ref{laI})). But the possibility $\varepsilon =0$ is ruled out
by~(\ref{laerre}) and~(\ref{lacentral}), forcing $\varepsilon=1$ and,
therefore, the first assertion of this proposition.
\end{proof}
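The closed form for $\mathrm{Sq}^1$ on monomials used in the proof above can also be confirmed mechanically. The following sketch is ours (the exponent-tuple encoding and the names `SQ1_GEN`, `sq1` are not from the paper): it computes $\mathrm{Sq}^1$ over $\mathbb{F}_2$ via the derivation (Cartan) rule, using relation (I) to rewrite $\mathrm{Sq}^1 x=x^2$ as $xx_1$, and compares the result with the stated parity rule:

```python
# Monomials x^i x1^{i1} x2^{i2} are encoded as exponent tuples (i, i1, i2);
# a polynomial over F_2 is a set of such tuples.  Generator images:
SQ1_GEN = {0: (1, 1, 0),    # Sq^1 x  = x^2 = x*x1, by relation (I)
           1: (0, 2, 0),    # Sq^1 x1 = x1^2
           2: (0, 1, 1)}    # Sq^1 x2 = x1*x2

def sq1(mono):
    """Sq^1 of a monomial via the derivation rule, over F_2."""
    result = set()
    for g, e in enumerate(mono):
        if e % 2:           # even exponents contribute 0 mod 2
            lowered = list(mono)
            lowered[g] -= 1
            term = tuple(a + b for a, b in zip(lowered, SQ1_GEN[g]))
            result ^= {term}        # symmetric difference = addition over F_2
    return result

# Compare with the closed form: 0 when i+i1+i2 is even,
# and x^i x1^{i1+1} x2^{i2} when i+i1+i2 is odd.
for i in range(2):          # i in {0, 1} by relation (I)
    for i1 in range(8):
        for i2 in range(8):
            expected = set() if (i + i1 + i2) % 2 == 0 else {(i, i1 + 1, i2)}
            assert sq1((i, i1, i2)) == expected
print("Sq^1 closed form verified")
```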
\section{The CLSS for $B(\mathrm{P}^3,2)$}\label{antesultima}
Here is the chart for the $E_2$-term of the spectral sequence for $m=3$
through filtration degree $13$:
{\unitlength=1pt
\begin{picture}(0,135)(-13,-21.3)
\put(0,-12){\scriptsize$0$}
\put(20,-12){\scriptsize$1$}
\put(40,-12){\scriptsize$2$}
\put(60,-12){\scriptsize$3$}
\put(80,-12){\scriptsize$4$}
\put(100,-12){\scriptsize$5$}
\put(120,-12){\scriptsize$6$}
\put(140,-12){\scriptsize$7$}
\put(160,-12){\scriptsize$8$}
\put(180,-12){\scriptsize$9$}
\put(198.3,-12){\scriptsize$10$}
\put(218.3,-12){\scriptsize$11$}
\put(238.3,-12){\scriptsize$12$}
\put(258.3,-12){\scriptsize$13$}
\put(-.6,-3){$\mathbb{Z}$}
\put(36.5,-3){$\langle2\rangle$}
\put(56.5,-3){$\langle1\rangle$}
\put(75.6,-3){$\{2\}$}
\put(96.5,-3){$\langle2\rangle$}
\put(116.5,-3){$\langle4\rangle$}
\put(136.5,-3){$\langle3\rangle$}
\put(155.6,-3){$\{4\}$}
\put(176.5,-3){$\langle4\rangle$}
\put(196.5,-3){$\langle6\rangle$}
\put(216.5,-3){$\langle5\rangle$}
\put(235.6,-3){$\{6\}$}
\put(256.5,-3){$\langle6\rangle$}
\put(276,-3){$\cdots$}
\put(-10,-2){\scriptsize$0$}
\put(-10,18){\scriptsize$1$}
\put(-10,38){\scriptsize$2$}
\put(-10,58){\scriptsize$3$}
\put(-10,78){\scriptsize$4$}
\put(-10,98){\scriptsize$5$}
\put(16.5,37){$\langle1\rangle$}
\put(35.6,37){$\{0\}$}
\put(56.5,37){$\langle2\rangle$}
\put(76.5,37){$\langle2\rangle$}
\put(96.5,37){$\langle3\rangle$}
\put(115.6,37){$\{2\}$}
\put(136.5,37){$\langle4\rangle$}
\put(156.5,37){$\langle4\rangle$}
\put(176.5,37){$\langle5\rangle$}
\put(195.6,37){$\{4\}$}
\put(216.5,37){$\langle6\rangle$}
\put(236.5,37){$\langle6\rangle$}
\put(256.5,37){$\langle7\rangle$}
\put(276,37){$\cdots$}
\put(-.6,57){$\mathbb{Z}$}
\put(36.5,57){$\langle2\rangle$}
\put(56.5,57){$\langle1\rangle$}
\put(75.6,57){$\{2\}$}
\put(96.5,57){$\langle2\rangle$}
\put(116.5,57){$\langle4\rangle$}
\put(136.5,57){$\langle3\rangle$}
\put(155.6,57){$\{4\}$}
\put(176.5,57){$\langle4\rangle$}
\put(196.5,57){$\langle6\rangle$}
\put(216.5,57){$\langle5\rangle$}
\put(235.6,57){$\{6\}$}
\put(256.5,57){$\langle6\rangle$}
\put(276,57){$\cdots$}
\put(16.5,97){$\langle1\rangle$}
\put(35.6,97){$\{0\}$}
\put(56.5,97){$\langle2\rangle$}
\put(76.5,97){$\langle2\rangle$}
\put(96.5,97){$\langle3\rangle$}
\put(115.6,97){$\{2\}$}
\put(136.5,97){$\langle4\rangle$}
\put(156.5,97){$\langle4\rangle$}
\put(176.5,97){$\langle5\rangle$}
\put(195.6,97){$\{4\}$}
\put(216.5,97){$\langle6\rangle$}
\put(236.5,97){$\langle6\rangle$}
\put(256.5,97){$\langle7\rangle$}
\put(276,97){$\cdots$}
\end{picture}}
\noindent Since $H^5(B(\mathrm{P}^3,2))=\mathbb{Z}_2$ (Corollary~\ref{orientabilidadB}), there
must be a nontrivial differential landing at node $(5,0)$. The only such
possibility is
\begin{equation}\label{d3}
d_3^{2,2}\colon E_3^{2,2}=\mathbb{Z}_4\left/\mathrm{Im}(d_2^{0,3})\right.
\to E_3^{5,0}=\mathbb{Z}_2\oplus\mathbb{Z}_2
\end{equation}
which, up to a change of basis, is the composition of the canonical
projection $\mathbb{Z}_4\left/\mathrm{Im}(d_2^{0,3})\right.\to\mathbb{Z}_2$ and the
canonical inclusion $\iota_1\colon \mathbb{Z}_2\hookrightarrow\mathbb{Z}_2\oplus\mathbb{Z}_2$. In
particular, as in the conclusion of the second paragraph of the proof of
Lemma~\ref{d2m1}, the differential $d_2^{0,3}\colon E_2^{0,3}=\mathbb{Z}\to
E_2^{2,2}=\mathbb{Z}_4$ cannot be surjective (otherwise~(\ref{d3})
would be the zero map) and, therefore, its only options are:
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\mbox{$d_2^{0,3}$ is trivial, or}\label{(a)}\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mbox{as in~(\ref{dif2}),
$d_2^{0,3}$ is twice the canonical projection.} \label{(b)}
\end{eqnarray}
\indent The goal in this example is to discuss how neither of these
two options leads to an apparent contradiction in the behavior of the
spectral sequence.
As a first task we consider the situation where~(\ref{(a)}) holds,
noticing that if $d_2^{0,3}$ vanishes, then the $H^*(BD_8)$-module
structure in the spectral sequence implies that the whole ($q=3$)-line
consists of $d_2$-cycles, so the above chart actually gives the $E_3$-term.
Furthermore, using again the $H^*(BD_8)$-module structure, we note
that every $d_3$-differential from the ($q=2$)-line to the ($q=0$)-line
would have to repeat vertically as a $d_3$-differential from the
($q=5$)-line to the ($q=3$)-line.
Under these conditions,
let us now analyze $d_3$-differentials. The proof
of Proposition~\ref{dimm} already discusses the $d_3$-differential (and its
cokernel) from node $(1,2)$ to node $(4,0)$. On the other hand, the
$d_3$-differential from node $(2,2)$ to node $(5,0)$ is~(\ref{d3}) and
has been fully described. Note that the behavior of these two initial
$d_3$-differentials can be summarized by remarking that they yield
monomorphisms after tensoring with $\mathbb{Z}_2$. We now show, by means of a
repeated cycle of three steps, that this is also the
case for all the remaining $d_3$-differentials.
\noindent {\bf Step 1}. To begin with, observe that
the argument in the final paragraph of the proof of Lemma~\ref{d2m1} does not
lead to a contradiction:~it only implies that
both differentials $d_3\colon E_3^{3,2}\to E_3^{6,0}$ and
$d_4\colon E_4^{2,3}\to E_4^{6,0}$ must be injective---this time wiping out
$E_\infty^{2,3}$, $E_\infty^{3,2}$, and $E_\infty^{6,0}$.
\noindent {\bf Step 2}. In view of our discussion of the first
nontrivial $d_3$-differential, the last assertion in
the paragraph following~(\ref{(b)}) implies that
the group $\langle1\rangle$ at node $(1,5)$ does not survive to $E_4$; indeed,
the differential $$d_3\colon E_3^{1,5}=\langle1\rangle
\to E_3^{4,3}=
\{2\}$$ is injective with cokernel
$E_4^{4,3}=
\{1\}$.
Such a situation has two consequences. First, that the discussion in
the previous step applies word for word when the three nodes $(2,3)$,
$(3,2)$, and $(6,0)$ are respectively replaced by $(3,3)$, $(4,2)$, and
$(7,0)$. Second, that there is no room for a
nonzero differential landing in
$E_i^{5,2}$ or $E_j^{4,3}$ for $i\geq3$ and $j\geq4$ (of course we have
detected the nontrivial differential $d_3$ landing at node $(4,3)$),
so that both $d_3^{5,2}$ and $d_4^{4,3}$
must be injective (recall $H^7(B(\mathrm{P}^3,2))=0$). Actually, the only way for this
to (algebraically) hold is with an injective $d_3^{5,2}\otimes\mathbb{Z}_2$.
\noindent {\bf Step 3}. Note that the differential
$d_3^{6,2}\colon E_3^{6,2}=\{2\}\to E_3^{9,0}=
\langle4\rangle$ has at least a $\mathbb{Z}_2$-group in its kernel. But the kernel
cannot be any larger: the only nontrivial differential landing at node
$(6,2)$ starts at node $(2,5)$ and, as we already showed, $E_4^{2,5}=\mathbb{Z}_2$.
Consequently, $d_3^{6,2}\otimes\mathbb{Z}_2$ is injective.
The arguments in these three steps repeat,
essentially word for word, in a periodic
way, each time accounting for the (${-}\otimes\mathbb{Z}_2$)-injectivity of
the next block of four
consecutive $d_3$-differentials. This leads to the following
chart of the resulting $E_4$-term (again through filtration degree 13):
{\unitlength=1pt
\begin{picture}(0,131)(-13,-19.3)
\put(0,-12){\scriptsize$0$}
\put(20,-12){\scriptsize$1$}
\put(40,-12){\scriptsize$2$}
\put(60,-12){\scriptsize$3$}
\put(80,-12){\scriptsize$4$}
\put(100,-12){\scriptsize$5$}
\put(120,-12){\scriptsize$6$}
\put(140,-12){\scriptsize$7$}
\put(160,-12){\scriptsize$8$}
\put(180,-12){\scriptsize$9$}
\put(198.3,-12){\scriptsize$10$}
\put(218.3,-12){\scriptsize$11$}
\put(238.3,-12){\scriptsize$12$}
\put(258.3,-12){\scriptsize$13$}
\put(-.6,-3){$\mathbb{Z}$}
\put(36.5,-3){$\langle2\rangle$}
\put(56.5,-3){$\langle1\rangle$}
\put(75.6,-3){$\{1\}$}
\put(96.5,-3){$\langle1\rangle$}
\put(116.5,-3){$\langle2\rangle$}
\put(136.5,-3){$\langle1\rangle$}
\put(155.6,-3){$\{1\}$}
\put(176.5,-3){$\langle1\rangle$}
\put(196.5,-3){$\langle2\rangle$}
\put(216.5,-3){$\langle1\rangle$}
\put(235.6,-3){$\{1\}$}
\put(256.5,-3){$\langle1\rangle$}
\put(276,-3){$\cdots$}
\put(-10,-2){\scriptsize$0$}
\put(-10,18){\scriptsize$1$}
\put(-10,38){\scriptsize$2$}
\put(-10,58){\scriptsize$3$}
\put(-10,78){\scriptsize$4$}
\put(-10,98){\scriptsize$5$}
\put(36.5,37){$\langle1\rangle$}
\put(116.5,37){$\langle1\rangle$}
\put(196.5,37){$\langle1\rangle$}
\put(276,37){$\cdots$}
\put(-.6,57){$\mathbb{Z}$}
\put(36.5,57){$\langle2\rangle$}
\put(56.5,57){$\langle1\rangle$}
\put(75.6,57){$\{1\}$}
\put(96.5,57){$\langle1\rangle$}
\put(116.5,57){$\langle2\rangle$}
\put(136.5,57){$\langle1\rangle$}
\put(155.6,57){$\{1\}$}
\put(176.5,57){$\langle1\rangle$}
\put(196.5,57){$\langle2\rangle$}
\put(216.5,57){$\langle1\rangle$}
\put(235.6,57){$\{1\}$}
\put(256.5,57){$\langle1\rangle$}
\put(276,57){$\cdots$}
\put(36.5,97){$\langle1\rangle$}
\put(116.5,97){$\langle1\rangle$}
\put(196.5,97){$\langle1\rangle$}
\put(276,97){$\cdots$}
\end{picture}}
At this point further differentials are forced just from the fact that
$H^i(B(\mathrm{P}^3,2))=0$ for $i\geq 6$. Indeed, all possibly nontrivial
differentials $d_4^{p,q}$ must be isomorphisms for $p\geq 2$,
whereas the $H^*(BD_8)$-module structure implies that the image of the
differential $d_4^{0,3}\colon E_4^{0,3}=\mathbb{Z}\to E_4^{4,0}=
\{1\}$ is generated by an element of order $4$. Thus,
the whole $E_5$-term reduces to the chart:
\begin{picture}(0,90)(-60,-18.5)
\put(0,-12){\scriptsize$0$}
\put(20,-12){\scriptsize$1$}
\put(40,-12){\scriptsize$2$}
\put(60,-12){\scriptsize$3$}
\put(80,-12){\scriptsize$4$}
\put(100,-12){\scriptsize$5$}
\put(-.6,-3){$\mathbb{Z}$}
\put(36.5,-3){$\langle2\rangle$}
\put(56.5,-3){$\langle1\rangle$}
\put(76.5,-3){$\langle1\rangle$}
\put(96.5,-3){$\langle1\rangle$}
\put(-10,-2){\scriptsize$0$}
\put(-10,18){\scriptsize$1$}
\put(-10,38){\scriptsize$2$}
\put(-10,58){\scriptsize$3$}
\put(36.5,37){$\langle1\rangle$}
\put(-.6,57){$\mathbb{Z}$}
\end{picture}
\noindent
This is also the $E_\infty$-term for dimensional reasons, and the resulting
output is compatible with the known structure of $H^*(B(\mathrm{P}^3,2))$---note
that the only possibly nontrivial extension (in total degree $4$)
is actually nontrivial, in view of~\cite[Theorem~1.5]{taylor}. This
concludes our discussion of the first task in this section, namely,
that~(\ref{(a)}) leads
to no apparent contradiction in the behavior of the spectral sequence
(alternatively: the breakdown in the proof of Lemma~\ref{d2m1} for $m=3$,
already observed in Step 1 above, does not seem to be fixable with the
present methods).
The second and final task in this section is to explain how,
just as~(\ref{(a)}) does, option~(\ref{(b)}) leads to no apparent
contradiction in the behavior of the spectral sequence. Thus, for
the remainder of the section we assume~(\ref{(b)}). In particular,
the $H^*(BD_8)$-module structure in the spectral sequence implies
that the conclusion of Lemma~\ref{d2m1} holds. Then, as explained in the
paragraph following Remark~\ref{criticals}, the resulting $E_3$-term now
takes the form
{\unitlength=1pt
\begin{picture}(0,135)(-13,-21.3)
\put(0,-12){\scriptsize$0$}
\put(20,-12){\scriptsize$1$}
\put(40,-12){\scriptsize$2$}
\put(60,-12){\scriptsize$3$}
\put(80,-12){\scriptsize$4$}
\put(100,-12){\scriptsize$5$}
\put(120,-12){\scriptsize$6$}
\put(140,-12){\scriptsize$7$}
\put(160,-12){\scriptsize$8$}
\put(180,-12){\scriptsize$9$}
\put(198.3,-12){\scriptsize$10$}
\put(218.3,-12){\scriptsize$11$}
\put(238.3,-12){\scriptsize$12$}
\put(258.3,-12){\scriptsize$13$}
\put(-.6,-3){$\mathbb{Z}$}
\put(36.5,-3){$\langle2\rangle$}
\put(56.5,-3){$\langle1\rangle$}
\put(75.6,-3){$\{2\}$}
\put(96.5,-3){$\langle2\rangle$}
\put(116.5,-3){$\langle4\rangle$}
\put(136.5,-3){$\langle3\rangle$}
\put(155.6,-3){$\{4\}$}
\put(176.5,-3){$\langle4\rangle$}
\put(196.5,-3){$\langle6\rangle$}
\put(216.5,-3){$\langle5\rangle$}
\put(235.6,-3){$\{6\}$}
\put(256.5,-3){$\langle6\rangle$}
\put(276,-3){$\cdots$}
\put(-10,-2){\scriptsize$0$}
\put(-10,18){\scriptsize$1$}
\put(-10,38){\scriptsize$2$}
\put(-10,58){\scriptsize$3$}
\put(-10,78){\scriptsize$4$}
\put(-10,98){\scriptsize$5$}
\put(16.5,37){$\langle1\rangle$}
\put(36.5,37){$\langle1\rangle$}
\put(56.5,37){$\langle2\rangle$}
\put(76.5,37){$\langle2\rangle$}
\put(96.5,37){$\langle3\rangle$}
\put(116.5,37){$\langle3\rangle$}
\put(136.5,37){$\langle4\rangle$}
\put(156.5,37){$\langle4\rangle$}
\put(176.5,37){$\langle5\rangle$}
\put(196.5,37){$\langle5\rangle$}
\put(216.5,37){$\langle6\rangle$}
\put(236.5,37){$\langle6\rangle$}
\put(256.5,37){$\langle7\rangle$}
\put(276,37){$\cdots$}
\put(-.6,57){$\mathbb{Z}$}
\put(36.5,57){$\langle2\rangle$}
\put(56.5,57){$\langle1\rangle$}
\put(76.5,57){$\langle3\rangle$}
\put(96.5,57){$\langle2\rangle$}
\put(116.5,57){$\langle4\rangle$}
\put(136.5,57){$\langle3\rangle$}
\put(156.5,57){$\langle5\rangle$}
\put(176.5,57){$\langle4\rangle$}
\put(196.5,57){$\langle6\rangle$}
\put(216.5,57){$\langle5\rangle$}
\put(236.5,57){$\langle7\rangle$}
\put(256.5,57){$\langle6\rangle$}
\put(276,57){$\cdots$}
\put(16.5,97){$\langle1\rangle$}
\put(35.6,97){$\{0\}$}
\put(56.5,97){$\langle2\rangle$}
\put(76.5,97){$\langle2\rangle$}
\put(96.5,97){$\langle3\rangle$}
\put(115.6,97){$\{2\}$}
\put(136.5,97){$\langle4\rangle$}
\put(156.5,97){$\langle4\rangle$}
\put(176.5,97){$\langle5\rangle$}
\put(195.6,97){$\{4\}$}
\put(216.5,97){$\langle6\rangle$}
\put(236.5,97){$\langle6\rangle$}
\put(256.5,97){$\langle7\rangle$}
\put(276,97){$\cdots$}
\end{picture}}
\noindent where again only dimensions at most $13$ are shown.
At this point it is convenient to observe that the last statement
in the paragraph following~(\ref{(b)}) fails under the current hypothesis.
Indeed, the generator of $E_3^{0,3}$ is twice the generator of $E_2^{0,3}$,
breaking up the vertical symmetry of $d_3$-differentials
holding under~(\ref{(a)})---of course, the groups in the current $E_3$-term
already lack the vertical symmetry we had in the case of~(\ref{(a)}).
In order to deal with such an asymmetric situation we need to make a
{\it differential-wise} measurement of all the groups
involved in the current $E_3$-term (we will simultaneously analyze
the possibilities for the two horizontal families of $d_3$-differentials).
To begin with, note that the arguments dealing, in the
case of~(\ref{(a)}), with the two differentials $E_3^{1,2}\to E_3^{4,0}$
and $E_3^{2,2}\to E_3^{5,0}$ apply without change under the current
hypothesis to yield that these two differentials are injective, the
former with cokernel $E_4^{4,0}=\mathbb{Z}_2\oplus\mathbb{Z}_4$ (i.e., both yield
injective maps after tensoring with $\mathbb{Z}_2$). Note that any other group
not appearing as the domain or codomain of
these two differentials must be eventually wiped out in the
spectral sequence, either because $H^i(B(\mathrm{P}^3,2))=0$ for $i\geq6$, or else
because the already observed $E_4^{5,0}=\mathbb{Z}_2$ accounts for all there is in
$H^5(B(\mathrm{P}^3,2))$ in view of Corollary~\ref{orientabilidadB}. This observation
is the key in the analysis of further differentials, which uses
repeatedly the following three-step argument (the reader is advised
to keep handy the previous chart in order to follow the details):
\noindent {\bf Step 1}. The groups $E_3^{p,q}$ not yet considered
and having smallest $p+q$ are $E_3^{3,2}$ and $E_3^{2,3}$.
Both are isomorphic to $\langle2\rangle$, and neither can be hit by a differential.
Since $E_3^{6,0}=\langle4\rangle$,
we must have injective differentials $d_3\colon E_3^{3,2}\to E_3^{6,0}$ and
$d_4\colon E_4^{2,3}\to E_4^{6,0}$, clearing the $E_\infty$-term at nodes
$(2,3)$, $(3,2)$, and $(6,0)$. Look now at the groups not yet considered
and in the next smallest
total dimension $p+q$. These are $E_3^{1,5}$,
$E_3^{3,3}=\langle1\rangle$, and $E_3^{4,2}=\langle2\rangle$. Again the last two
cannot be hit by a differential and, since $E_3^{7,0}=\langle3\rangle$, the
two differentials $d_3\colon E_3^{4,2}\to E_3^{7,0}$ and
$d_4\colon E_4^{3,3}\to E_4^{7,0}$ must be injective, now clearing the
$E_\infty$-term at nodes $(3,3)$, $(4,2)$, and $(7,0)$.
\noindent {\bf Step 2}. The only case remaining to consider with
$p+q=6$ is $E_3^{1,5}=\langle1\rangle$. We have seen that there is nothing
left in the spectral sequence for this group to hit with a $d_6$-differential,
so it must hit either $E_3^{4,3}=\langle3\rangle$ or $E_3^{5,2}=
\langle3\rangle$. Therefore, in these two positions there are $2^5$ elements
that will have to inject into (quotients of) $E_3^{8,0}=\{4\}$, a group with
cardinality $2^6$. The outcome of this situation is two-fold:
\begin{itemize}
\item[(i)] the $E_\infty$-term is now cleared at positions
$(1,5)$, $(4,3)$, and $(5,2)$;
\item[(ii)] there is a $\mathbb{Z}_2$ group at node $(8,0)$ that still
needs a differential matchup.
\end{itemize}
But (i) implies that the only way to kill the element in (ii)
is with a $d_6$-differential originating at node
$(2,5)$, where we have $E_3^{2,5}=\mathbb{Z}_4$.
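The bookkeeping behind (i) and (ii) can be recorded as an order count. By the
chart, $E_3^{4,3}$ and $E_3^{5,2}$ each have $2^3$ elements while, as noted
above, $E_3^{8,0}=\{4\}$ has cardinality $2^6$, so
\[
\frac{|E_3^{4,3}|\cdot|E_3^{5,2}|}{|E_3^{1,5}|}
=\frac{2^{3}\cdot 2^{3}}{2}=2^{5}
\qquad\mbox{and}\qquad
\frac{|E_3^{8,0}|}{2^{5}}=\frac{2^{6}}{2^{5}}=2,
\]
which accounts precisely for the single $\mathbb{Z}_2$ at node $(8,0)$
awaiting the $d_6$-differential from node $(2,5)$.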
\noindent {\bf Step 3}. The above analysis leaves only one element
at node $(2,5)$ still without a differential matchup. Since everything at node
$(8,0)$ has been accounted for, the element in question at node $(2,5)$
must be cleared up at either of the stages $E_3$ or $E_4$
with a corresponding nontrivial
differential landing at nodes $(5,3)$ or
$(6,2)$, respectively. But $E_3^{5,3}=\langle2\rangle$ while $E_3^{6,2}=
\langle3\rangle$. Thus, the last differential will {\it leave} $2^4$ elements
that need to be mapped injectively by {\it previous}
differentials landing at node $(9,0)$.
Since $E_3^{9,0}=\langle4\rangle$, our bookkeeping analysis has now
cleared up every group $E_\infty^{p,q}$ with either
\begin{itemize}
\item $q=0$ and $p\leq9$;
\item $q=2$ and $p\leq6$;
\item $q=3$ and $p\leq5$;
\item $q=5$ and $p\leq2$.
\end{itemize}
These three steps now repeat to cover the next four cases
of $p$. For instance, one starts by looking at $E_3^{3,5}=\langle2
\rangle$, whose two basis elements are forced to inject with
differentials landing either at node $(6,3)$ or $(7,2)$. Since
$E_3^{6,3}\cong E_3^{7,2}\cong\langle4\rangle$, this leaves $2^6$
elements that must be mapping into node $(10,0)$ through injective
differentials. But $E_3^{10,0}=\langle6\rangle$, clearing the appropriate
nodes---the situation in Step 1. At the end of this
three-step inductive analysis
we find that there is just the right number of elements, at the right nodes,
to match up through differentials---the opposite of the situation that
we successfully exploited in the previous section to deal with cases where
$m\not\equiv3\bmod4$.
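For instance, in the case just described the order count reads
\[
\frac{|E_3^{6,3}|\cdot|E_3^{7,2}|}{|E_3^{3,5}|}
=\frac{2^{4}\cdot 2^{4}}{2^{2}}=2^{6}=|E_3^{10,0}|,
\]
so the elements surviving at nodes $(6,3)$ and $(7,2)$ fill
$E_3^{10,0}=\langle6\rangle$ exactly, through injective differentials, with
nothing left over.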
From the chart we note that $d_4\colon E_4^{0,3}=\mathbb{Z}\to E_4^{4,0}=\mathbb{Z}_2
\oplus\mathbb{Z}_4$ is the only undecided differential, and that its cokernel
equals $H^4(B(\mathrm{P}^3,2))$---since $E_\infty^{2,2}=0=E_\infty^{1,3}$. The two
possibilities (indicated at the end of the proof of
Proposition~\ref{dimm}) for this cokernel are $\mathbb{Z}_2$ and
$\mathbb{Z}_4$, but~\cite[Theorem~1.5]{taylor} implies that
the latter option must be the right one under the present
hypothesis~(\ref{(b)}).
\begin{nota}\label{fredtorsion}
The previous paragraph suggests that, if our methods are to be used to
understand the CLSS in the remaining case with
$m\equiv3\bmod4$, then it will be convenient to keep in mind
the type of $2^e$-torsion Theorem~\ref{descripciondesaordenada} describes for
the integral cohomology of $B(\mathrm{P}^{4a+3},2)$.
\end{nota}
\section{Case of $F(\mathrm{P}^m,2)$}\label{HF}
The CLSS analysis in the previous two sections can be
applied---with $G=\mathbb{Z}_2\times\mathbb{Z}_2$ instead of $G=D_8$---in
order to study the cohomology groups
of the ordered configuration space
$F(\mathrm{P}^m,2)$. The explicit details are similar but much easier
than those for unordered configuration spaces, and this time
the additive structure of differentials can be fully understood for any $m$.
Here we only review the main differences, simplifications, and results.
For one, there is no $4$-torsion to deal with (e.g.~the arithmetic
Proposition~\ref{algebraico} is not needed); indeed, the role of $BD_8$ in
the situation of an unordered configuration space $B(\mathrm{P}^m,2)$ is played by
$\mathrm{P}^\infty\times\mathrm{P}^\infty$ for ordered configuration spaces $F(\mathrm{P}^m,2)$.
Thus, the use of Corollaries~\ref{HBD} and~\ref{HBDT} is
replaced by the simpler Lemma~\ref{kunneth}. But the most important
simplification in the calculations relevant to the
present section comes from the absence of
problematic $d_2$-differentials, the obstacle
that prevented us from computing the CLSS of the $D_8$-action on
$V_{m+1,2}$ for $m\equiv3\bmod4$.
[This is why in Lemma~\ref{kunneth} we do not insist on describing
$H^*(\mathrm{P}^\infty\times\mathrm{P}^\infty;\mathbb{Z}_\alpha)$ as a module over
$H^*(\mathrm{P}^\infty\times\mathrm{P}^\infty)$---compare to Remark~\ref{criticals}.]
As a result, the integral cohomology CLSS of the
$(\mathbb{Z}_2\times\mathbb{Z}_2)$-action on $V_{m+1,2}$ can be fully understood,
without restriction on $m$, by means of the
counting arguments used in
Section~\ref{HB}, now forcing the injectivity of all relevant differentials
from the following two ingredients:
\begin{itemize}
\item[(a)] The size and distribution of the groups in the CLSS.
\item[(b)] The $\mathbb{Z}_2\times\mathbb{Z}_2$ analogue of Corollary~\ref{orientabilidadB}
in Remark~\ref{analogousversionsF}---the input triggering the determination
of differentials.
\end{itemize}
In particular, when $m$ is odd, the $\mathbb{Z}_2\times\mathbb{Z}_2$ analogue of
Lemma~\ref{d2m1} does not arise and, instead, only
the counting argument in the proof following Remark~\ref{criticals}
is needed.
We leave it for the reader
to supply details of the above CLSS and verify that
this leads to Propositions~\ref{HFpar} and~\ref{HF1m4} in the case
$G=\mathbb{Z}_2\times\mathbb{Z}_2$, as well as to the computation of all the
cohomology groups in Theorem~\ref{descripcionordenada}.
\
{\footnotesize
\parbox{5cm}{Jes\'us Gonz\'alez\\
{\it Departamento de Matem\'aticas},\\
Centro de Investigaci\'on y de Estudios Avanzados del IPN,\\
Apartado Postal 14-740,\\
07000 Mexico City, Mexico\\
{\sf [email protected]}}\
{
}\
\parbox{5cm}{Peter Landweber\\
{\it Department of Mathematics},\\
Rutgers University,\\
Piscataway, NJ 08854, USA,\\
{\sf [email protected]}}}\
\begin{thebibliography}{10}
\bibitem{AM} A.~Adem and R.~J.~Milgram,
{\it Cohomology of Finite Groups,} second edition.
Grund\-lehren der mathematischen Wissenschaften, 309.
Springer-Verlag, Berlin, 2004.
\bibitem{barden} D.~Barden,
``Simply connected five-manifolds'',
{\em Ann. of Math. (2)} {\bf 82} (1965) 365--385.
\bibitem{idealvalued} P.~V.~M.~Blagojevi\'c and G.~M.~Ziegler,
``The ideal-valued index for a dihedral group action,
and mass partition by two hyperplanes'', {\em Topology Appl.} {\bf 158} (2011)
1326--1351. A longer preliminary version is available as
arXiv:0704.1943v4 [math.AT].
\bibitem{cartan} H.~Cartan, ``Espaces avec groupes d'op\'erateurs.~I:~Notions
pr\'eliminaires;~II: La suite spectrale;~applications'',
{\em S\'eminaire Henri Cartan}, tome \textbf{3},
expos\'es 11 (1--11) and 12 (1--10)
(1950-1951), both available at http://www.numdam.org.
\bibitem{tesiscarlos} C.~Dom\'{\i}nguez,
``Cohomology of pairs of points in real projective spaces and applications'',
Ph.D.~thesis, Department of Mathematics, Cinvestav, 2011.
\bibitem{D8} C.~Dom\'{\i}nguez, J.~Gonz\'alez, and Peter S. Landweber,
``The integral cohomology of configuration spaces of pairs of points in real
projective spaces'', to appear in Forum Mathematicum.
\bibitem{FH} E.~Fadell and S.~Husseini,
``An ideal-valued cohomological index theory with applications to
Borsuk-Ulam and Bourgin-Yang theorems'',
{\em Ergod. Th. and Dynam. Sys.} {\bf 8$^*$} (1988) 73--85.
\bibitem{feder} S.~Feder,
``The reduced symmetric product of projective spaces
and the generalized Whitney theorem'',
{\em Illinois J. Math.} {\bf 16} (1972) 323--329.
\bibitem{FZ} E.~M.~Feichtner and G.~M.~Ziegler,
``The integral cohomology algebras of ordered configuration spaces of spheres'',
{\em Doc. Math.} {\bf 5} (2000) 115--139.
\bibitem{FZ02} E.~M.~Feichtner and G.~M.~Ziegler,
``On orbit configuration spaces of spheres'',
{\em Topology Appl.} {\bf 118} (2002) 85--102.
\bibitem{FeTa} Y.~F\'elix and D.~Tanr\'e,
``The cohomology algebra of unordered configuration spaces'',
{\em J.~London Math.~Soc.} {\bf 72} (2005) 525--544.
\bibitem{taylor} J.~Gonz\'alez,
``Symmetric topological complexity as the first
obstruction in Goodwillie's Euclidean embedding
tower for real projective spaces'', to appear in
{\em Trans. Amer.~Math.~Soc.} (currently available at
arXiv:0911.1116v4 [math.AT]).
\bibitem{v1}J.~Gonz\'alez and P.~Landweber,
``The integral cohomology groups of configuration spaces of pairs
of points in real projective spaces'', initial version of
the present paper available at arXiv:1004.0746v1 [math.AT].
\bibitem{grunbaum} B.~Gr\"unbaum,
``Partitions of mass-distributions and of convex bodies by hyperplanes'',
{\em Pacific J. Math.} {\bf 10} (1960) 1257--1261.
\bibitem{handel68} D.~Handel,
``An embedding theorem for real projective spaces'',
{\em Topology} {\bf 7} (1968) 125--130.
\bibitem{handeltohoku} D.~Handel,
``On products in the cohomology of the dihedral groups'',
{\em T\^ohoku Math. J. (2)} {\bf 45} (1993) 13--42.
\bibitem{hatcher} A.~Hatcher,
{\it Algebraic Topology.}
Cambridge University Press, Cambridge, 2002.
\bibitem{SadokCohenFest} S.~Kallel,
``Symmetric products, duality and homological dimension of configuration spaces'',
{\em Geom. Topol. Monogr.}, {\bf 13} (2008) 499--527.
\bibitem{KM}
M.~A.~Kervaire and J.~W.~Milnor,
``Groups of homotopy spheres:~I'',
{\em Ann. of Math. (2)} {\bf 77} (1963) 504--537.
\bibitem{lai}
H.~F.~Lai,
``On the topology of the even-dimensional complex quadrics'',
{\em Proc. Amer. Math. Soc.} {\bf 46} (1974) 419--425.
\bibitem{mccleary}
J.~McCleary,
{\it A User's Guide to Spectral Sequences},
second edition. Cambridge Studies in Advanced Mathematics, \textbf{58}.
Cambridge University Press, Cambridge, 2001.
\bibitem{munkres}
J.~R.~Munkres,
{\it Elements of Algebraic Topology.}
Addison-Wesley Publishing Company, Menlo Park, CA, 1984.
\bibitem{prasolov}
V.~V.~Prasolov,
{\it Elements of homology theory.}
Translated from the 2005 Russian original by Olga Sipacheva.
Graduate Studies in Mathematics, \textbf{81}. AMS, Providence, RI, 2007.
\bibitem{ranicki}
A.~Ranicki,
{\it Algebraic and Geometric Surgery.}
Oxford Mathematical Monographs, Oxford Science Publications.
Oxford University Press, 2002.
Electronic version (August 2009) available at
http://www.maths.ed.ac.uk/$\sim$aar/books/surgery.pdf.
\bibitem{ST}
H.~Seifert and W.~Threlfall,
{\it A Textbook of Topology,}
translated from the German 1934 edition by Michael A.~Goldman,
with a preface by Joan S.~Birman.
Pure and Applied Mathematics, 89. Academic Press, Inc. New York-London, 1980.
\bibitem{sutherland}
W.~A.~Sutherland,
``A note on the parallelizability of sphere-bundles over spheres'',
{\em J. London Math. Soc.} {\bf 39} (1964) 55--62.
\bibitem{collins}
P.~Teichner,
{\it Slice Knots:~Knot Theory in the $4^{\mathrm th}$ Dimension.}
Lecture notes by Julia Collins and Mark Powell.
Electronic version (October 2009) available at
http://www.maths.ed.ac.uk/$\sim$s0681349/\#research.
\bibitem{Totaro}
B.~Totaro, ``Configuration spaces of algebraic varieties'',
{\em Topology} {\bf 35} (1996) 1057--1067.
\bibitem{wall}
C.~T.~C.~Wall,
``Killing the middle homotopy groups of odd dimensional manifolds'',
{\em Trans. Amer. Math. Soc.} {\bf 103} (1962) 421--433.
\bibitem{whitehead}
G.~W.~Whitehead,
{\it Elements of Homotopy Theory.}
Graduate Texts in Mathematics, \textbf{61}. Springer-Verlag, New York-Berlin, 1978.
\end{thebibliography}
\end{document}
\begin{document}
\title{Spectral Analysis of the Non-self-adjoint Mathieu-Hill Operator}
\author{O. A. Veliev\\{\small Depart. of Math., Dogus University, Ac\i badem, Kadik\"{o}y, \ }\\{\small Istanbul, Turkey.}\ {\small e-mail: [email protected]}}
\date{}
\maketitle
\begin{abstract}
We obtain asymptotic formulas, uniform with respect to $t$, for the eigenvalues
of the operators generated in $[0,1]$ by the Mathieu-Hill equation with a
complex-valued potential and by the $t$-periodic boundary conditions. Then,
using them, we investigate the non-self-adjoint Mathieu-Hill operator $H$
generated in $(-\infty,\infty)$ by the same equation and establish
necessary and sufficient conditions on the potential under which $H$ has no
spectral singularity at infinity and is an asymptotically spectral operator.
Key Words: Hill operator, Spectral singularities, Spectral operator.
AMS Mathematics Subject Classification: 34L05, 34L20.
\end{abstract}
\section{Introduction}
Let $L(q)$ be the Hill operator generated in $L_{2}(-\infty,\infty)$ by the expression
\begin{equation}
l(y)=-y^{\prime\prime}+qy,
\end{equation}
where $q$ is a complex-valued summable function on $[0,1]$ and $q(x+1)=q(x)$.
It is well-known that (see [8-10]) the spectrum $S(L(q))$ of the operator
$L(q)$ is the union of the spectra $S(L_{t}(q))$ of the operators $L_{t}(q)$
for $t\in(-\pi,\pi],$ where $L_{t}(q)$ is the operator generated in
$L_{2}[0,1]$ by (1) and the boundary conditions
\begin{equation}
y(1)=e^{it}y(0),\text{ }y^{\prime}(1)=e^{it}y^{\prime}(0).
\end{equation}
The spectrum of $L_{t}(q)$ consists of the eigenvalues that are the roots of
\begin{equation}
F(\lambda)=2\cos t,
\end{equation}
where $F(\lambda)=\varphi^{\prime}(1,\lambda)+\theta(1,\lambda)$, $\varphi$
and $\theta$ are the solutions of the equation $l(y)=\lambda y$ satisfying the
initial conditions $\theta(0,\lambda)=\varphi^{\prime}(0,\lambda)=1$
and $\theta^{\prime}(0,\lambda)=\varphi(0,\lambda)=0$.
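As a consistency check, in the free case $q=0$ everything is explicit:
\[
\theta(x,\lambda)=\cos\sqrt{\lambda}\,x,\qquad
\varphi(x,\lambda)=\frac{\sin\sqrt{\lambda}\,x}{\sqrt{\lambda}},\qquad
F(\lambda)=2\cos\sqrt{\lambda},
\]
so (3) becomes $\cos\sqrt{\lambda}=\cos t$ and the eigenvalues of $L_{t}(0)$
are exactly $(2\pi n+t)^{2}$ for $n\in\mathbb{Z}$, which is the leading term
of the asymptotic formulas obtained below.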
The operators $L_{t}(q)$ and $L(q)$ are denoted by $H_{t}$ and $H$
respectively when
\begin{equation}
\text{ }q(x)=ae^{-i2\pi x}+be^{i2\pi x},
\end{equation}
where $a$ and $b$ are nonzero complex numbers. In the cases $t=0$ and
$t=\pi$ the operator $H_{t}$ was investigated by Djakov and Mitjagin [2--5].
In [16] we have found the conditions on the potential (4) such that all
eigenvalues of the periodic, antiperiodic, Dirichlet, and Neumann boundary
value problems are simple. In this paper we consider the operators $H$ and
$H_{t}$ for all values of $t\in(-\pi,\pi].$ First, we obtain the asymptotic
formulas, uniform with respect to $t$ in some intervals, for the eigenvalues
of the operators $H_{t}$. (Note that the formula $f(k,t)=O(h(k))$ is said to
be uniform with respect to $t$ in a set $I$ if there exist positive constants
$M$ and $N,$ independent of $t,$ such that $\mid f(k,t)\mid<M\mid h(k)\mid$
for all $t\in I$ and $\mid k\mid\geq N$.) Then using these asymptotic
formulas, we investigate the spectral singularities and the asymptotic
spectrality of the operator $H$.
Note that the spectral singularities of the operator $L(q)$ are the points of
its spectrum in neighborhoods of which the projections of $L(q)$ are not
uniformly bounded (see [7] and [12]). McGarvey [9] proved that $L(q)$ is a
spectral operator if and only if the projections of the operators $L_{t}(q)$
are bounded uniformly with respect to $t$ in $(-\pi,\pi]$. Recently, Gesztesy
and Tkachenko [6,7] proved two versions of a criterion for the Hill operator
$L(q)$ with $q\in L_{2}[0,1]$ to be a spectral operator of scalar type, in the
sense of Dunford, one analytic and one geometric. The analytic version was
stated in terms of the solutions of Hill's equation. The geometric version of
the criterion uses algebraic and geometric properties of the spectra of the
periodic/antiperiodic and Dirichlet boundary value problems.
The problem of describing explicitly for which potentials $q$ the Hill
operators $L(q)$ are spectral operators appears to have been open for about 50
years. Moreover, the papers discussed above show that the set of potentials $q$
for which $L(q)$ is spectral is a small subset of the periodic functions, and
it is very hard to describe the required subset explicitly. In [14] we found
explicit conditions on the potential $q$ under which $L(q)$ is an
asymptotically spectral operator, and in [17] we constructed the spectral
expansion for the asymptotically spectral operator. In this paper we find a
criterion for the asymptotic spectrality of $H$ stated in terms of the potential (4).
The paper consists of 5 sections. In Section 2 we present some preliminary
facts from [13, 14, 3] which are needed in what follows. In Section 3 we
obtain some general results for $L_{t}(q)$ with a locally integrable potential
$q.$ In Section 4, using the results of Section 3, we obtain the uniform
asymptotic formulas for the operators $H_{t}$. In Section 5, as the main
result of this paper, we find the necessary and sufficient conditions on the
numbers $a$ and $b$ under which $H$ has no spectral singularity at infinity
and is an asymptotically spectral operator.
\section{Preliminary Facts}
In this section we present some results of [13, 14, 3] which are used for the
proof of the main results of the paper. We use the following results of [13].
\textbf{Theorem 2 of [13].}\textit{ The eigenvalues }$\lambda_{n}(t)$\textit{
and eigenfunctions }$\Psi_{n,t}$\textit{ of the operators }$L_{t}
(q)$\textit{ for }$t\neq0,\pi$\textit{ satisfy the following asymptotic
formulas}
\begin{equation}
\lambda_{n}(t)=(2\pi n+t)^{2}+O(n^{-1}\ln\left\vert n\right\vert ),\text{
}\Psi_{n,t}(x)=e^{i(2\pi n+t)x}+O(n^{-1})
\end{equation}
\textit{as }$n\rightarrow\infty.$\textit{ For any fixed number }$\rho
\in(0,\pi/2),$\textit{ these asymptotic formulas are uniform with respect to
}$t$\textit{ in }$[\rho,\pi-\rho]$\textit{. Moreover, there exists a positive
number }$N(\rho),$\textit{ independent of }$t,$\textit{ such that the
eigenvalues }$\lambda_{n}(t)$\textit{ for }$t\in\lbrack\rho,\pi-\rho
]$\textit{ and }$\left\vert n\right\vert >N(\rho)$\textit{ are simple.}
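As a numerical sanity check (not part of the argument of [13]), the first asymptotic formula of the quoted theorem can be tested by truncating $L_{t}(q)$ in the Fourier basis $\{e^{i(2\pi k+t)x}\}$; the two-mode potential, truncation size $K$ and tolerance below are purely illustrative choices.

```python
import numpy as np

def hill_matrix(a, b, t, K):
    """Truncation of L_t(q) = -d^2/dx^2 + q with q(x) = a e^{-i 2 pi x} + b e^{i 2 pi x}
    (a potential of the two-mode form assumed here) in the basis e^{i(2 pi k + t)x}, |k| <= K."""
    ks = np.arange(-K, K + 1)
    M = np.diag(((2 * np.pi * ks + t) ** 2).astype(complex))
    for j in range(2 * K):
        M[j, j + 1] = a   # coupling from mode k+1 down to mode k
        M[j + 1, j] = b   # coupling from mode k-1 up to mode k
    return M

a, b, t, K = 1.0, 2.0, 1.0, 40   # t = 1.0 lies well inside (0, pi)
ev = np.linalg.eigvals(hill_matrix(a, b, t, K))
# the eigenvalue nearest to (2 pi n + t)^2 deviates by a quantity decaying in n
deviations = [np.min(np.abs(ev - (2 * np.pi * n + t) ** 2)) for n in (3, 5, 8)]
```

The deviations are small and shrink as $n$ grows, in agreement with the $O(n^{-1}\ln\left\vert n\right\vert )$ remainder.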
In the paper [14] we obtained the uniform asymptotic formulas in the more
complicated case $t\in\lbrack0,\rho]\cup\lbrack\pi-\rho,\pi],$ when the
potential $q$ satisfies the following conditions:\textit{ }
\[
q\in W_{1}^{p}[0,1],\mathit{\ }q^{(k)}(0)=q^{(k)}(1),\text{ }q_{n}\sim
q_{-n},\text{ }(q_{n})^{-1}=O(n^{s+1})
\]
for $k=0,1,...,s-1$\textit{ }with some\textit{ }$s\leq p$ and at least one of
the inequalities $\operatorname{Re}q_{n}q_{-n}\geq0$ and $\mid
\operatorname{Im}q_{n}q_{-n}\mid\geq\varepsilon\mid q_{n}q_{-n}\mid$ hold for
some $\varepsilon>0,$ where
\begin{equation}
q_{n}=:(q,e^{i2\pi nx})=:\int_{0}^{1}q(x)e^{-i2\pi nx}dx
\end{equation}
is the Fourier coefficient of $q$, and $a_{n}\sim b_{n}$ means that
$a_{n}=O(b_{n})$ and $b_{n}=O(a_{n}).$ It is clear that these results of
[14] cannot be used for the potential (4). However, we use Remark
2.1 and many formulas of [14], which are listed in Remark 1 and as formulas (10)-(25).
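For concreteness, definition (6) can be checked numerically. Formula (4) itself lies outside this excerpt, so the two-mode potential below is an assumption matching formula (44) of Section 4.

```python
import numpy as np

def fourier_coeff(q, n, num_points=4096):
    # left Riemann sum for q_n = \int_0^1 q(x) e^{-i 2 pi n x} dx;
    # exact for trigonometric polynomials of degree < num_points
    x = np.arange(num_points) / num_points
    return np.mean(q(x) * np.exp(-2j * np.pi * n * x))

a, b = 2.0, 3.0
q = lambda x: a * np.exp(-2j * np.pi * x) + b * np.exp(2j * np.pi * x)

# q_{-1} = a, q_{1} = b, and every other coefficient vanishes
coeffs = {n: fourier_coeff(q, n) for n in (-2, -1, 0, 1, 2)}
```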
\begin{remark}
In Remark 2.1 of [14] we proved that there exists a positive integer $N(0)$
such that the disk $U(n,t,\rho)=:\{\lambda\in\mathbb{C}:\left\vert
\lambda-(2\pi n+t)^{2}\right\vert \leq15\pi n\rho\}$ for $t\in\lbrack0,\rho],$
where $15\pi\rho<1,$ and $n>N(0)$ contains two eigenvalues (counting with
multiplicities) denoted by $\lambda_{n,1}(t)$ and $\lambda_{n,2}(t)$, and these
eigenvalues can be chosen as continuous functions of $t$ on the interval
$[0,\rho].$ In addition to these eigenvalues, the operator $L_{t}(q)$ for
$t\in\lbrack0,\rho]$ has only $2N+1$ eigenvalues, denoted by $\lambda_{k}(t)$
for $k=0,\pm1,\pm2,...,\pm N$ (see Remark 2.1 of [14]). Similarly, there
exists a positive integer $N(\pi)$ such that the disk $U(n,t,\rho)$ for
$t\in\lbrack\pi-\rho,\pi]$ and $n>N(\pi)$ contains two eigenvalues (counting
with multiplicities), denoted again by $\lambda_{n,1}(t)$ and $\lambda
_{n,2}(t)$, that are continuous functions of $t$ on the interval $[\pi-\rho
,\pi].$
Thus for $n>N=:\max\left\{ N(\rho),N(0),N(\pi)\right\} ,$ the eigenvalues
$\lambda_{n,1}(t)$ and $\lambda_{n,2}(t)$ are continuous on $[0,\rho
]\cup\lbrack\pi-\rho,\pi]$, and for $\left\vert n\right\vert >N$ \textit{the eigenvalue
}$\lambda_{n}(t),$\textit{ defined by (5), is continuous on }$[\rho,\pi
-\rho].$\textit{ Moreover, by Theorem 2 of [13] there exist only two
eigenvalues }$\lambda_{-n}(\rho)$ and $\lambda_{n}(\rho)$ of the operator
$L_{\rho}(q)$ lying in the disk $U(n,\rho,\rho).$ Therefore these two
eigenvalues coincide with the eigenvalues $\lambda_{n,1}(\rho)$ and
$\lambda_{n,2}(\rho).$ By (5), $\operatorname{Re}(\lambda_{-n}(\rho
))<\operatorname{Re}(\lambda_{n}(\rho)).$ Let $\lambda_{n,2}(\rho)$ be the
eigenvalue whose real part is larger. Then
\begin{equation}
\lambda_{n,1}(\rho)=\lambda_{-n}(\rho),\text{ }\lambda_{n,2}(t)=\lambda
_{n}(\rho).
\end{equation}
In the same way we obtain that
\begin{equation}
\lambda_{n,1}(\pi-\rho)=\lambda_{n}(\pi-\rho),\text{ }\lambda_{n,2}(\pi
-\rho)=\lambda_{-(n+1)}(\pi-\rho)
\end{equation}
if $\lambda_{n,2}(\pi-\rho)$ is the eigenvalue whose real part is larger. Let
$\Gamma_{-n}$ be the union of the following continuous curves: $\left\{
\lambda_{n-1,2}(t):t\in\lbrack\pi-\rho,\pi]\right\} ,$ $\left\{ \lambda
_{-n}(t):t\in\lbrack\rho,\pi-\rho]\right\} $ and $\left\{ \lambda
_{n,1}(t):t\in\lbrack0,\rho]\right\} .$ By (7) and (8) these curves are
connected, and $\Gamma_{-n}$ is a continuous curve. Similarly, the curve
$\Gamma_{n}$, which is the union of the curves $\left\{ \lambda_{n,2}
(t):t\in\lbrack0,\rho]\right\} ,$ $\left\{ \lambda_{n}(t):t\in\lbrack
\rho,\pi-\rho]\right\} $ and $\left\{ \lambda_{n,1}(t):t\in\lbrack\pi
-\rho,\pi]\right\} ,$ is a continuous curve.
Let us relabel $\lambda_{n,1}(t)$ and $\lambda_{n,2}(t)$ as $\lambda
_{-n}(t)$ and $\lambda_{n}(t)$ respectively for $n>N$ and $t\in\lbrack
0,\rho].$ Similarly, relabel $\lambda_{n,1}(t)$ and $\lambda_{n,2}(t)$ as
$\lambda_{n}(t)$ and $\lambda_{-n-1}(t)$ respectively for $n>N$ and
$t\in\lbrack\pi-\rho,\pi].$ In this notation we have $\Gamma_{n}=\{\lambda
_{n}(t):t\in\lbrack0,\pi]\}$ for $\left\vert n\right\vert >N.$ In this paper
we use both notations $\lambda_{n}(t)$ and $\lambda_{n,j}(t)$.
\end{remark}
One can readily see that
\begin{equation}
\left\vert \lambda-(2\pi(n-k)+t)^{2}\right\vert >\left\vert k\right\vert
\left\vert 2n-k\right\vert ,\text{ \ }\forall\lambda\in U(n,t,\rho)
\end{equation}
for $k\neq0,2n$ and $t\in\lbrack0,\rho]$, where $n>N$.
In [14], to obtain the asymptotic formulas for the eigenvalues
$\lambda_{n,j}(t),$ uniform with respect to $t\in\lbrack0,\rho],$ we used (9) and the
iteration of the formula
\begin{equation}
(\lambda_{n,j}(t)-(2\pi n+t)^{2})(\Psi_{n,j,t},e^{i(2\pi n+t)x})=(q\Psi
_{n,j,t},e^{i(2\pi n+t)x}),
\end{equation}
where $\Psi_{n,j,t}$ is any normalized eigenfunction corresponding to
$\lambda_{n,j}(t).$ Iterating (10) infinitely many times, we obtained the following
formula
\begin{equation}
(\lambda_{n,j}(t)-(2\pi n+t)^{2}-A(\lambda_{n,j}(t),t))u_{n,j}(t)=(q_{2n}
+B(\lambda_{n,j}(t),t))v_{n,j}(t),
\end{equation}
where $u_{n,j}(t)=(\Psi_{n,j,t},e^{i(2\pi n+t)x}),$ $v_{n,j}(t)=(\Psi
_{n,j,t},e^{i(-2\pi n+t)x}),$
\begin{equation}
A(\lambda,t)=\sum_{k=1}^{\infty}a_{k}(\lambda,t),\text{ }B(\lambda
,t)=\sum_{k=1}^{\infty}b_{k}(\lambda,t),
\end{equation}
\begin{equation}
a_{k}(\lambda,t)=\sum_{n_{1},n_{2},...,n_{k}}q_{-n_{1}-n_{2}-...-n_{k}}
{\textstyle\prod\limits_{s=1}^{k}}
q_{n_{s}}\left( \lambda-(2\pi(n-n_{1}-..-n_{s})+t)^{2}\right) ^{-1},
\end{equation}
\begin{equation}
b_{k}(\lambda,t)=\sum_{n_{1},n_{2},...,n_{k}}q_{2n-n_{1}-n_{2}-...-n_{k}}
{\textstyle\prod\limits_{s=1}^{k}}
q_{n_{s}}\left( \lambda-(2\pi(n-n_{1}-..-n_{s})+t)^{2}\right) ^{-1}
\end{equation}
for $\lambda\in U(n,t,\rho)$ (see (37) of [14]).
Similarly, we obtained the formula
\begin{equation}
(\lambda_{n,j}(t)-(-2\pi n+t)^{2}-A^{\prime}(\lambda_{n,j}(t),t))v_{n,j}
(t)=(q_{-2n}+B^{\prime}(\lambda_{n,j}(t),t))u_{n,j}(t),
\end{equation}
where
\begin{equation}
A^{\prime}(\lambda,t)=\sum_{k=1}^{\infty}a_{k}^{\prime}(\lambda,t),\text{
}B^{\prime}(\lambda,t)=\sum_{k=1}^{\infty}b_{k}^{\prime}(\lambda,t),
\end{equation}
\begin{equation}
a_{k}^{\prime}(\lambda,t)=\sum_{n_{1},n_{2},...,n_{k}}q_{-n_{1}-n_{2}
-...-n_{k}}
{\textstyle\prod\limits_{s=1}^{k}}
q_{n_{s}}\left( \lambda-(2\pi(n+n_{1}+..+n_{s})-t)^{2}\right) ^{-1},
\end{equation}
\begin{equation}
b_{k}^{\prime}(\lambda,t)=\sum_{n_{1},n_{2},...,n_{k}}q_{-2n-n_{1}
-n_{2}-...-n_{k}}
{\textstyle\prod\limits_{s=1}^{k}}
q_{n_{s}}\left( \lambda-(2\pi(n+n_{1}+..+n_{s})-t)^{2}\right) ^{-1}
\end{equation}
for $\lambda\in U(n,t,\rho)$ (see (38) of [14]).
The sums in (13), (14) and (17), (18) are taken under conditions $n_{1}
+n_{2}+...+n_{s}\neq0,2n$ and $n_{1}+n_{2}+...+n_{s}\neq0,-2n$ respectively,
where $s=1,2,...$
Moreover, it was proved in [14] that the equalities
\begin{equation}
a_{k}(\lambda,t),\text{ }b_{k}(\lambda,t),\text{ }a_{k}^{\prime}
(\lambda,t),\text{ }b_{k}^{\prime}(\lambda,t)=O\left( (n^{-1}\ln\left\vert
n\right\vert )^{k}\right)
\end{equation}
hold uniformly for $t\in\lbrack0,\rho]$ and $\lambda\in U(n,t,\rho)$ (see
(34) and (36) of [14]), and the derivatives of these functions with respect to
$\lambda$ are $O(n^{-k-1})$ (see the proof of Lemma 2.5), which implies that the
functions $A(\lambda,t),$ $A^{\prime}(\lambda,t),$ $B(\lambda,t)$ and
$B^{\prime}(\lambda,t)$ are analytic on $U(n,t,\rho).$ Moreover, there exists
a constant $K$ such that
\begin{equation}
\mid A(\lambda,t)\mid<Kn^{-1},\mid A^{\prime}(\lambda,t)\mid<Kn^{-1},\mid
B(\lambda,t)\mid<Kn^{-1},\mid B^{\prime}(\lambda,t)\mid<Kn^{-1},
\end{equation}
\begin{equation}
\mid A(\lambda,t)-A(\mu,t)\mid<Kn^{-2}\mid\lambda-\mu\mid,\mid A^{\prime
}(\lambda,t)-A^{\prime}(\mu,t)\mid<Kn^{-2}\mid\lambda-\mu\mid,
\end{equation}
\begin{equation}
\mid B(\lambda,t)-B(\mu,t)\mid<Kn^{-2}\mid\lambda-\mu\mid,\mid B^{\prime
}(\lambda,t)-B^{\prime}(\mu,t)\mid<Kn^{-2}\mid\lambda-\mu\mid,
\end{equation}
\begin{equation}
\mid C(\lambda,t)\mid<tKn^{-1},\text{ }\mid C(\lambda,t)-C(\mu,t)\mid
<tKn^{-2}\mid\lambda-\mu\mid
\end{equation}
for all $n>N,$ $t\in\lbrack0,\rho]$ and $\lambda,\mu\in U(n,t,\rho),$ where
$N$ and $U(n,t,\rho)$ are defined in Remark 1, and $C(\lambda,t)=\frac{1}
{2}(A(\lambda,t)-A^{\prime}(\lambda,t))$ (see Lemma 2.3 and Lemma 2.5
of [14]).
In this paper we also use the following equalities from [14] (see (26)-(28) of
[14]), uniform with respect to $t\in\lbrack0,\rho],$ for the normalized
eigenfunction $\Psi_{n,j,t}$:
\begin{equation}
\Psi_{n,j,t}(x)=u_{n,j}(t)e^{i(2\pi n+t)x}+v_{n,j}(t)e^{i(-2\pi n+t)x}
+h_{n,j,t}(x),
\end{equation}
\begin{equation}
(h_{n,j,t},e^{i(\pm2\pi n+t)x})=0,\text{ }\left\Vert h_{n,j,t}\right\Vert
=O(n^{-1}),\text{ }\left\vert u_{n,j}(t)\right\vert ^{2}+\left\vert
v_{n,j}(t)\right\vert ^{2}=1+O(n^{-2}).
\end{equation}
Besides, we use formula (55) of [3] concerning estimates of $B(\lambda,0)$ and
$B^{\prime}(\lambda,0)$ as follows:
\textit{Let the potential }$q$\textit{ have the form (4), }$\lambda=(2\pi
n)^{2}+z,$\textit{ where }$\left\vert z\right\vert <1,$\textit{ and}
\begin{equation}
p_{n_{1},n_{2},...,n_{k}}(\lambda,0)=q_{2n-n_{1}-n_{2}-...-n_{k}}
{\textstyle\prod\limits_{s=1}^{k}}
q_{n_{s}}\left( \lambda-(2\pi(n-n_{1}-..-n_{s}))^{2}\right) ^{-1}
\end{equation}
\textit{be the summands of }$b_{k}(\lambda,t)$\textit{ for }$t=0$\textit{ (see
(14)). Then using }(55) of [3] \textit{with }$q\geq2$\textit{ and the
estimate}
\[
\sum_{q\geq2}\binom{n+2q}{q}\left( \frac{\left\vert
ab\right\vert }{n^{2}}\right) ^{q}=O(n^{-2})
\]
\textit{of [3] (see the estimate after formula (55) of [3]), and taking into
account that if }$k$\textit{ runs from }$2n+3$\textit{ to }$\infty
,$\textit{ then the number }$q$\textit{ of indices }$n_{1},n_{2},...,n_{k}
$\textit{ in (26) that are equal to }$-1$\textit{ runs from }$2$\textit{ to
}$\infty,$\textit{ we obtain}
\begin{equation}
\sum_{k=2n+3}^{\infty}\sum_{n_{1},n_{2},...,n_{k}}\left\vert p_{n_{1}
,n_{2},...,n_{k}}(\lambda,0)\right\vert =b_{2n-1}(\lambda,0)O(n^{-2}).
\end{equation}
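The binomial tail estimate quoted above can also be observed numerically; the value standing in for $|ab|$ below is an arbitrary illustrative choice.

```python
import math

def tail_sum(n, x, qmax=200):
    # sum_{q >= 2} binom(n + 2q, q) (x / n^2)^q, truncated at qmax
    # (the terms decay extremely fast once n^2 > 4x)
    return sum(math.comb(n + 2 * q, q) * (x / n ** 2) ** q for q in range(2, qmax))

x = 6.0  # plays the role of |ab|
scaled = [n ** 2 * tail_sum(n, x) for n in (20, 40, 80, 160)]
# n^2 * (tail sum) stays bounded as n grows, consistent with O(n^{-2})
```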
\section{Some General Results for $L_{t}(q)$ with $q\in L_{1}[0,1]$}
First, in Theorem 1, we consider the cases $t=0$ and $t=\pi,$ which correspond
to the periodic and antiperiodic boundary conditions (see (2)). These cases
were considered in detail by Djakov and Mitjagin in [2, 3] and [4, 5] for $q\in
L_{2}[0,1]$ and $q\in H^{-1}[0,1].$ We obtain similar results by the
methods of our papers [1, 11] for $q\in L_{1}[0,1]$ and use the following
terminology of [11]. If the set of the Jordan chains of $L_{0}(q)$ is
infinite, then we consider the Riesz basis property of the normal system of
eigenfunctions and associated functions (EAF), defined in [11] as follows. In
the case when a large eigenvalue has geometric multiplicity $2$, we choose
the pair of normalized eigenfunctions so that they are mutually orthogonal. In
the case when only one eigenfunction $\varphi$ corresponds to the double
eigenvalue $\lambda$, we assume that $\left\Vert \varphi\right\Vert =1$ and
choose the associated function to be orthogonal to $\varphi$ (it is uniquely
determined by this condition). If the set of the Jordan chains is finite, then
one does not need to consider the specially chosen normal system of EAF.
Therefore, in this case, instead of the normal system of EAF we use the system
of root functions.
For brevity, we discuss only the periodic problem and denote $\lambda
_{n,j}(t),$ $A(\lambda_{n,j}(t),t),$ $B(\lambda_{n,j}(t),t),$ $A^{\prime
}(\lambda_{n,j}(t),t),$ $B^{\prime}(\lambda_{n,j}(t),t)$ for $t=0$ (see (11)
and (15)) by $\lambda_{n,j},$ $A(\lambda_{n,j}),$ $B(\lambda_{n,j}),$
$A^{\prime}(\lambda_{n,j}),$ $B^{\prime}(\lambda_{n,j})$ respectively. The
antiperiodic problem is similar to the periodic problem. One can readily see
from (11), (15), (20) and Remark 1 that
\begin{equation}
\lambda_{n,j}(t)\in d^{-}(r(n),t)\cup d^{+}(r(n),t)\subset U(n,t,\rho),
\end{equation}
for all $\ n>N,$ $t\in\lbrack0,\rho],$ where $r(n)=\max\{\left\vert
q_{2n}\right\vert ,\left\vert q_{-2n}\right\vert \}+2Kn^{-1}$ and $\ d^{\pm
}(r(n),t)$ is the disk with center $(\pm2\pi n+t)^{2}$ and radius $r(n).$
Indeed if $\left\vert u_{n,j}(t)\right\vert \geq\left\vert v_{n,j}
(t)\right\vert ,$ then using (11) (if $\left\vert v_{n,j}(t)\right\vert
>\left\vert u_{n,j}(t)\right\vert ,$ then using (15)) and (20) we get (28).
In the case $t=0$ the disks $d^{-}(r(n),t)$ and $d^{+}(r(n),t)$ are the same
and are denoted by $d(r(n)).$ The set of indices $n>N$ for which the periodic
eigenvalues lying in $d(r(n))$ are simple (double) is denoted by
$\mathbb{N}_{1}$ ($\mathbb{N}_{2}$). If $n\in\mathbb{N}_{2},$ then
$\lambda_{n,1}=\lambda_{n,2}$ and this common eigenvalue is denoted by
$\lambda_{n}.$
\begin{theorem}
Let $q\in L_{1}[0,1]$ and let $N$ be the large number defined in Remark 1.
$(a)$ If $n\in\mathbb{N}_{2}$ and the inequality
\begin{equation}
\mid q_{2n}+B(\lambda)\mid+\mid q_{-2n}+B^{\prime}(\lambda)\mid\neq0
\end{equation}
holds for $\lambda=\lambda_{n}$, then the geometric multiplicity of the
eigenvalue $\lambda_{n}$ is $1$.
$(b)$ Suppose that for $n>N$ there exists an eigenvalue of $L_{0}(q)$
lying in $d(r(n)),$ denoted for simplicity of notation by $\lambda_{n,1},$
such that (29) holds for $\lambda=\lambda_{n,1}$. Then the
normal system of EAF of $L_{0}(q)$ forms a Riesz basis if and only if
\begin{equation}
q_{2n}+B(\lambda_{n,1})\sim\mathit{\ }q_{-2n}+B^{\prime}(\lambda_{n,1}).
\end{equation}
$(c)$ If (29) holds for $\lambda=\lambda_{n,1},$ where $n>N,$ and (30) holds, then
the large periodic eigenvalues are simple and the system of root functions of
$L_{0}(q)$ forms a Riesz basis.
\end{theorem}
\begin{proof}
$(a)$ Suppose that there exist two eigenfunctions corresponding to
$\lambda_{n}$. Then one can choose an eigenfunction $\Psi_{n}$ such that
$\left( \Psi_{n},e^{i2\pi nx}\right) =0.$ This with (11) and (25) implies
that $q_{2n}+B(\lambda_{n})=0$. In the same way we prove that $q_{-2n}
+B^{\prime}(\lambda_{n})=0.$ The last two equalities contradict (29).
$(b)$ If (29) for $\lambda=\lambda_{n,1}$ and (30) hold, then one can readily
see that
\begin{equation}
\text{ }q_{2n}+B(\lambda_{n,1})\neq0,\text{ }q_{-2n}+B^{\prime}(\lambda
_{n,1})\neq0.
\end{equation}
These with formulas (11), (15) and (25) imply that
\begin{equation}
u_{n,1}(t)v_{n,1}(t)\neq0
\end{equation}
for $t=0.$ Indeed, if $u_{n,1}(0)=0$ then by (25) $v_{n,1}(0)\neq0$ and by
(11) $q_{2n}+B(\lambda_{n,1})=0$ which contradicts (31). Similarly, if
$v_{n,1}(0)=0$ then by (25) and (15) $q_{-2n}+B^{\prime}(\lambda_{n,1})=0$
which again contradicts (31). By (31) and (32) the right-hand sides of (11)
and (15) for $t=0$ are not zero. Therefore, dividing (11) and (15) side by
side and using the equality $A(\lambda_{n,1})=A^{\prime}(\lambda_{n,1})$ (see
the proof of Lemma 3 of [11]), we get
\begin{equation}
\frac{q_{-2n}+B^{\prime}(\lambda_{n,1})}{q_{2n}+B(\lambda_{n,1})}
=\frac{v_{n,1}^{2}(0)}{u_{n,1}^{2}(0)}.
\end{equation}
Then, by (30) and (25) we have
\begin{equation}
u_{n,1}(0)\sim v_{n,1}(0)\sim1
\end{equation}
which implies that the set of the Jordan chains is finite (see the end of page
118 of [1]). Thus, if (29) and (30) hold, then the large periodic eigenvalues
are simple. Moreover, by Theorem 1 of [11] the relation (34) implies that the
normal system of EAF of $L_{0}(q)$ forms a Riesz basis.
Now suppose that (29) for $\lambda=\lambda_{n,1}$ holds and the normal system
of EAF of $L_{0}(q)$ forms a Riesz basis. By Theorem 1 of [11] the set of the
Jordan chains is finite and (34) holds. On the other hand, at least one of the
summands in (29) for $\lambda=\lambda_{n,1}$ is not zero. Suppose, without
loss of generality, that $q_{2n}+B(\lambda_{n,1})\neq0.$ Then, using (11),
(15) for $t=0$ and (34), we see that equality (33) holds (see the proof of
(33)). Therefore, using (34), we obtain (30).
$(c)$ The simplicity of the large periodic eigenvalues is proved in $(b).$
Therefore it is enough to note that in this case the Riesz basis property of
the root functions follows from the Riesz basis property of the normal system
of EAF.
\end{proof}
Now, we consider the case $t\in\lbrack0,\rho].$
\begin{theorem}
A number $\lambda\in U(n,t,\rho)$ is an eigenvalue of $L_{t}(q)$ for
$t\in\lbrack0,\rho]$ and $n>N,$ where $U(n,t,\rho)$ and $N$ are defined in
Remark 1, if and only if
\begin{equation}
(\lambda-(2\pi n+t)^{2}-A(\lambda,t))(\lambda-(2\pi n-t)^{2}-A^{\prime
}(\lambda,t))=(q_{2n}+B(\lambda,t))(q_{-2n}+B^{\prime}(\lambda,t)).
\end{equation}
Moreover, $\lambda\in U(n,t,\rho)$ is a double eigenvalue of $L_{t}(q)$
if and only if it is a double root of (35).
\end{theorem}
\begin{proof}
If $u_{n,j}(t)=0,$ then by (25) we have $v_{n,j}(t)\neq0$. Therefore, (11)
and (15) imply that $q_{2n}+B(\lambda_{n,j}(t),t)=0$ and $\lambda
_{n,j}(t)-(-2\pi n+t)^{2}-A^{\prime}(\lambda_{n,j}(t),t)=0,$ that is, the
right-hand side and the left-hand side of (35) vanish when $\lambda$ is
replaced by $\lambda_{n,j}(t)$. Hence $\lambda_{n,j}(t)$ satisfies (35). In
the same way we prove that if $v_{n,j}(t)=0$ then $\lambda_{n,j}(t)$ is a
root of (35). It remains to consider the case $u_{n,j}(t)v_{n,j}(t)\neq0.$
In this case, multiplying (11) and (15) side by side and canceling
$u_{n,j}(t)v_{n,j}(t),$ we get the equality obtained from (35) by replacing
$\lambda$ with $\lambda_{n,j}(t).$ Thus, in any case, $\lambda_{n,j}(t)$ is a
root of (35).
Now we prove that the roots of (35) lying in $U(n,t,\rho)$ are the
eigenvalues of $L_{t}(q).$ Let $F(\lambda,t)$ be the left-hand side minus the
right-hand side of (35). Using (20), one can easily verify that the inequality
\begin{equation}
\mid F(\lambda,t)-G(\lambda,t)\mid<\mid G(\lambda,t)\mid,
\end{equation}
where $G(\lambda,t)=(\lambda-(2\pi n+t)^{2})(\lambda-(2\pi n-t)^{2}),$ holds
for all $\lambda$ on the boundary of $U(n,t,\rho).$ Since the function
$G(\lambda,t)$ has two roots in the
set $U(n,t,\rho),$ by Rouch\'{e}'s theorem we obtain from (36) that
$F(\lambda,t)$ has two roots in the same set. Thus $L_{t}(q)$ has two
eigenvalues (counting with multiplicities) lying in $U(n,t,\rho)$ (see Remark
1) that are the roots of (35). On the other hand, (35) has precisely two
roots (counting with multiplicities) in $U(n,t,\rho).$ Therefore $\lambda\in
U(n,t,\rho)$ is an eigenvalue of $L_{t}(q)$ if and only if (35) holds.
If $\lambda\in U(n,t,\rho)$ is a double eigenvalue of $L_{t}(q),$
then by Remark 1 $L_{t}(q)$ has no other eigenvalues in
$U(n,t,\rho)$ and hence (35) has no other roots. This implies that $\lambda$
is a double root of (35). By the same argument one can prove that if $\lambda$
is a double root of (35) then it is a double eigenvalue of $L_{t}(q).$
\end{proof}
One can readily verify that equation (35) can be written in the form
\begin{equation}
(\lambda-(2\pi n+t)^{2}-\frac{1}{2}(A+A^{\prime})+4\pi nt)^{2}=D,
\end{equation}
where
\begin{equation}
D(\lambda,t)=(4\pi nt)^{2}+q_{2n}q_{-2n}+8\pi ntC+C^{2}+q_{2n}B^{\prime
}+q_{-2n}B+BB^{\prime}
\end{equation}
and, for brevity, we denote $C(\lambda,t),$ $B(\lambda,t),$ $A(\lambda,t)$
etc. by $C,$ $B,$ $A$ etc. It is clear that $\lambda$ is a root of (37) if
and only if it satisfies at least one of the equations
\begin{equation}
\lambda-(2\pi n+t)^{2}-\frac{1}{2}(A(\lambda,t)+A^{\prime}(\lambda,t))+4\pi
nt=-\sqrt{D(\lambda,t)}
\end{equation}
and
\begin{equation}
\lambda-(2\pi n+t)^{2}-\frac{1}{2}(A(\lambda,t)+A^{\prime}(\lambda,t))+4\pi
nt=\sqrt{D(\lambda,t)},
\end{equation}
where
\begin{equation}
\sqrt{D}=\sqrt{\left\vert D\right\vert }e^{i(\arg D)/2},\text{ }-\pi<\arg
D\leq\pi.
\end{equation}
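The branch of the square root fixed in (41) is the principal branch, which in floating-point arithmetic coincides with `cmath.sqrt`; the sketch below illustrates this on a few arbitrary sample values of $D$.

```python
import cmath

def sqrt_branch(D):
    # sqrt(|D|) * exp(i * arg(D) / 2) with -pi < arg(D) <= pi, as in (41)
    return abs(D) ** 0.5 * cmath.exp(1j * cmath.phase(D) / 2)

samples = (3 + 4j, -4 + 0j, 2 - 5j, -1 - 1e-12j)
errors = [abs(sqrt_branch(D) - cmath.sqrt(D)) for D in samples]
```

In particular, a negative real $D$ (with $\arg D=\pi$) is sent to the positive imaginary axis.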
\begin{remark}
It is clear from the construction of $D(\lambda,t)$ that this function is
continuous with respect to $(\lambda,t)$ for $t\in\lbrack0,\rho]$ and
$\lambda\in U(n,t,\rho).$ Moreover, by Remark 1 the eigenvalues $\lambda
_{n,1}(t)$ and $\lambda_{n,2}(t)$ continuously depend on $t\in\lbrack0,\rho].$
Therefore $D(\lambda_{n,j}(t),t)$ for $n>N$ and $j=1,2$ is a continuous
function of $t\in\lbrack0,\rho].$ By (38), (23), (12), (16) and (19) we have
\[
D(\lambda_{n,j}(\rho),\rho)=(4\pi n\rho)^{2}+o(1),\text{ }A(\lambda_{n,j}
(\rho),\rho)+A^{\prime}(\lambda_{n,j}(\rho),\rho)=o(1)
\]
as $n\rightarrow\infty.$ Therefore by (7) and Theorem 2 of [13] the
eigenvalues $\lambda_{n,1}(\rho)$ and $\lambda_{n,2}(\rho)$ are simple,
$\lambda_{n,1}(\rho)$ satisfies (39) and $\lambda_{n,2}(\rho)$ satisfies
(40). If $\lambda_{n,1}(t)$ and $\lambda_{n,2}(t)$ are simple for $t\in\lbrack
t_{0},\rho],$ where $0\leq t_{0}\leq\rho,$ then these functions are analytic
on $[t_{0},\rho]$ and $\lambda_{n,1}(t)\neq\lambda_{n,2}(t)$ for all
$t\in\lbrack t_{0},\rho]$.
\end{remark}
\begin{theorem}
Suppose that $\sqrt{D(\lambda_{n,j}(t),t)}$ depends continuously on $t$ on
$[t_{0},\rho]$ and
\begin{equation}
D(\lambda_{n,j}(t),t)\neq0,\text{ }\forall t\in\lbrack t_{0},\rho]
\end{equation}
for $n>N$ and $j=1,2,$ where $\rho$ and $N$ are defined in Remark 1,
$\sqrt{D}$ is defined in (41) and $0\leq t_{0}\leq\rho$. Then for $t\in\lbrack
t_{0},\rho]$ the eigenvalues $\lambda_{n,1}(t)$ and $\lambda_{n,2}(t)$ defined
in Remark 1 are simple, $\lambda_{n,1}(t)$ satisfies (39) and $\lambda
_{n,2}(t)$ satisfies (40). That is,
\begin{equation}
\lambda_{n,j}(t)=(2\pi n+t)^{2}+\frac{1}{2}(A(\lambda_{n,j},t)+A^{\prime
}(\lambda_{n,j},t))-4\pi nt+(-1)^{j}\sqrt{D(\lambda_{n,j},t)}
\end{equation}
for $t\in\lbrack t_{0},\rho],$ $n>N$ and $j=1,2.$
\end{theorem}
\begin{proof}
By Remark 2, the eigenvalues $\lambda_{n,1}(\rho)$ and $\lambda_{n,2}(\rho)$
are simple, $\lambda_{n,1}(\rho)$ satisfies (39) and $\lambda_{n,2}(\rho)$
satisfies (40). Let us prove that $\lambda_{n,1}(t)$ satisfies (39) for all
$t\in\lbrack t_{0},\rho]$. Suppose to the contrary that this claim is not
true. Then there exist $t\in\lbrack t_{0},\rho)$ and sequences
$p_{n}\rightarrow t$ and $q_{n}\rightarrow t,$ where one of them may be a
constant sequence, such that $\lambda_{n,1}(p_{n})$ and $\lambda_{n,1}(q_{n})$
satisfy (39) and (40) respectively. Using the continuity of $\sqrt
{D(\lambda_{n,j}(t),t)}$, we conclude that $\lambda_{n,1}(t)$ satisfies both
(39) and (40). However, it is possible only if $D(\lambda_{n,1}(t),t)=0$ which
contradicts (42). Hence $\lambda_{n,1}(t)$ satisfies (39) for all $t\in\lbrack
t_{0},\rho]$. In the same way we prove that $\lambda_{n,2}(t)$ satisfies (40)
for all $t\in\lbrack t_{0},\rho]$. If $\lambda_{n,1}(t)=\lambda_{n,2}(t)$ for
some value of $t\in\lbrack t_{0},\rho]$, that is, if $\lambda_{n,j}(t)$ is a
double eigenvalue, then it satisfies both (39) and (40), which again contradicts (42).
\end{proof}
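The algebra behind (37)-(40) can be illustrated on a finite toy model (it is not the operator $L_{t}(q)$ itself): for the $2\times2$ matrix with diagonal entries $(2\pi n\pm t)^{2}$ and off-diagonal entries playing the role of $q_{2n}$ and $q_{-2n}$, so that $A=A^{\prime}=B=B^{\prime}=C=0$, the two branches (39) and (40) are exactly its eigenvalues.

```python
import numpy as np

n, t = 5, 0.1
c, d = 0.3 + 0.2j, -0.1 + 0.4j           # stand-ins for q_{2n}, q_{-2n}
p2, m2 = (2 * np.pi * n + t) ** 2, (2 * np.pi * n - t) ** 2
M = np.array([[p2, c], [d, m2]])

D = (4 * np.pi * n * t) ** 2 + c * d     # model of (38) with B = B' = C = 0
branches = np.array([p2 - 4 * np.pi * n * t - np.sqrt(D),   # model of (39)
                     p2 - 4 * np.pi * n * t + np.sqrt(D)])  # model of (40)
eigenvalues = np.linalg.eigvals(M)
```

Here $p2 - 4\pi nt = (2\pi n)^{2}+t^{2}$ is the mean of the two diagonal entries, and $4\pi nt$ is half their difference.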
\section{On the Operator $H_{t}$ for $t\in(-\pi,\pi].$}
In this section we study the operator $H_{t}$ for $t\in\lbrack0,\rho].$ Note
that we consider only the case $t\in\lbrack0,\rho]$ for the following
reason. The case $t\in\lbrack\rho,\pi-\rho]$ was considered in [13]. The
case $t\in\lbrack\pi-\rho,\pi]$ is similar to the case $t\in\lbrack0,\rho]$
and we explain it in Remark 3. Besides, the eigenvalues of $H_{-t}$ coincide
with the eigenvalues of $H_{t}.$
If the potential $q$ has the form (4), then by (6)
\begin{equation}
q_{-1}=a,\text{ }q_{1}=b,\text{ }q_{n}=0,\text{ }\forall n\neq\pm1
\end{equation}
and hence formulas (11), (15), (37) and (38) have the form
\begin{equation}
(\lambda_{n,j}(t)-(2\pi n+t)^{2}-A(\lambda_{n,j}(t),t))u_{n,j}(t)=B(\lambda
_{n,j}(t),t)v_{n,j}(t),
\end{equation}
\begin{equation}
(\lambda_{n,j}(t)-(-2\pi n+t)^{2}-A^{\prime}(\lambda_{n,j}(t),t))v_{n,j}
(t)=B^{\prime}(\lambda_{n,j}(t),t)u_{n,j}(t),
\end{equation}
\begin{equation}
(\lambda-(2\pi n+t)^{2}-\frac{1}{2}(A(\lambda,t)+A^{\prime}(\lambda,t))+4\pi
nt)^{2}=D(\lambda,t),
\end{equation}
\begin{equation}
D(\lambda,t)=(4\pi nt+C(\lambda,t))^{2}+B(\lambda,t)B^{\prime}(\lambda,t).
\end{equation}
Moreover, by Theorem 2,\textit{ }$\lambda\in U(n,t,\rho)$ is a double
eigenvalue of $H_{t}$ if and only if it satisfies (47) and the equation
\begin{equation}
2(\lambda-(2\pi n+t)^{2}-\frac{1}{2}(A+A^{\prime})+4\pi nt)(1-\frac{1}
{2}\frac{\partial}{\partial\lambda}(A+A^{\prime}))=\frac{\partial}
{\partial\lambda}D(\lambda,t).
\end{equation}
By (28) and (44), $\lambda_{n,j}(t)\in d^{-}(2Kn^{-1},t)\cup d^{+}
(2Kn^{-1},t)\subset U(n,t,\rho).$ Therefore the formula
\begin{equation}
\lambda_{n,j}(t)=(2\pi n)^{2}+O(n^{-1})
\end{equation}
holds uniformly with respect to $t\in\lbrack0,n^{-2}]$ for $j=1,2,$ i.e.,
there exist positive constants $M$ and $N$ such that $\mid\lambda
_{n,j}(t)-(2\pi n)^{2}\mid<Mn^{-1}$ for $n\geq N$ and $t\in\lbrack
0,n^{-2}].$
Let us consider the functions taking part in (45)-(48). From (44) we see that
the indices in formulas (13), (14) for the case (4) satisfy the conditions
\begin{equation}
\{n_{1},n_{2},...,n_{k}\}\subset\{-1,1\},\text{ }n_{1}+n_{2}+...+n_{s}
\neq0,2n,
\end{equation}
\begin{equation}
\{n_{1},n_{2},...,n_{k},2n-n_{1}-n_{2}-...-n_{k}\}\subset\{-1,1\},\text{
}n_{1}+n_{2}+...+n_{s}\neq0,2n
\end{equation}
for $s=1,2,...,k$ respectively. Hence, by (44) $q_{-n_{1}-n_{2}-...-n_{k}}=0$
if $k$ is an even number. Therefore, by (13) and (17)
\begin{equation}
a_{2m}(\lambda,t)=0,\text{ }a_{2m}^{\prime}(\lambda,t)=0,\text{ }\forall
m=1,2,...
\end{equation}
Since the indices $n_{1},n_{2},...,n_{k}$ take two values (see (51)) the
number of the summands in the right-hand side of (13) is not more than
$2^{k}.$ Clearly, these summands for $k=2m-1$ have the form
\[
a_{k}(\lambda,n_{1},n_{2},...,n_{k},t)=:(ab)^{m}
{\textstyle\prod\limits_{s=1,2,...,k}}
\left( \lambda-(2\pi(n-n_{1}-n_{2}-...-n_{s})+t)^{2}\right) ^{-1}
\]
(see (13) and (51)). Therefore, we have
\begin{equation}
\text{ }a_{2m-1}(\lambda_{n,j}(t),t)=(4ab)^{m}O(n^{-2m+1}).
\end{equation}
If $t\in\lbrack0,n^{-2}],$ then one can readily see that
\[
a_{1}(\lambda_{n,j},t)=\frac{ab}{(2\pi n)^{2}+O(n^{-1})-(2\pi(n-1))^{2}}
+\frac{ab}{(2\pi n)^{2}+O(n^{-1})-(2\pi(n+1))^{2}}
\]
\[
=\frac{ab}{4\pi^{2}(2n-1)}-\frac{ab}{4\pi^{2}(2n+1)}+O\left( \frac{1}{n^{3}
}\right) =O\left( \frac{1}{n^{2}}\right) .
\]
The analogous estimates hold for $a_{2m-1}^{\prime}(\lambda_{n,j}(t),t)$ and
$a_{1}^{\prime}(\lambda_{n,j}(t),t)$ respectively. Thus, by (12), (16),
(19) and (53), we have
\begin{equation}
A(\lambda_{n,j}(t),t)=O(n^{-2}),\text{ }A^{\prime}(\lambda_{n,j}
(t),t)=O(n^{-2}),\text{ }\forall t\in\lbrack0,n^{-2}].\text{ }
\end{equation}
Now we study the functions $B(\lambda,t)$ and $B^{\prime}(\lambda,t)$ (see
(12), (14) and (16), (18)). First let us consider $b_{2n-1}(\lambda,t).$ If
$k=2n-1,$ then by (52) $n_{1}=n_{2}=\cdots=n_{2n-1}=1.$ Using this and (44) in
(14) for $k=2n-1,$ we obtain
\begin{equation}
b_{2n-1}(\lambda,t)=b^{2n}
{\textstyle\prod\limits_{s=1}^{2n-1}}
\left( \lambda-(2\pi(n-s)+t)^{2}\right) ^{-1}.
\end{equation}
If $k<2n-1$ or $k=2m,$ then, by (44), $q_{2n-n_{1}-n_{2}-...-n_{k}}=0$ and by
(14)
\begin{equation}
\text{ }b_{k}(\lambda,t)=0.
\end{equation}
In the same way, from (18) we obtain
\begin{equation}
b_{2n-1}^{\prime}(\lambda,t)=a^{2n}
{\textstyle\prod\limits_{s=1}^{2n-1}}
\left( \lambda-(2\pi(n-s)-t)^{2}\right) ^{-1},\text{ }b_{k}^{\prime
}(\lambda,t)=0
\end{equation}
for $k<2n-1$ or $k=2m.$ Now, (19), (57) and (58) imply that the equalities
\begin{equation}
B(\lambda,t)=O\left( n^{-5}\right) ,\text{ }B^{\prime}(\lambda,t)=O\left(
n^{-5}\right)
\end{equation}
hold uniformly for $t\in\lbrack0,\rho]$ and $\lambda\in U(n,t,\rho).$ From
(45) and (46) (using (45) if $\left\vert u_{n,j}(t)\right\vert \geq\left\vert
v_{n,j}(t)\right\vert $ and (46) if $\left\vert v_{n,j}(t)\right\vert
>\left\vert u_{n,j}(t)\right\vert $), together with (55) and (59), we
obtain that the formula
\begin{equation}
\lambda_{n,j}(t)=(2\pi n)^{2}+O(n^{-2})
\end{equation}
holds uniformly with respect to $t\in\lbrack0,n^{-3}]$ for $j=1,2.$
More detailed estimates of $B$ and $B^{\prime}$ are given in the following lemma.
\begin{lemma}
If $q$\ has the form (4), then the formulas
\begin{equation}
B(\lambda,t)=\beta_{n}\left( 1+O(n^{-2})\right) ,\text{ }B^{\prime}
(\lambda,t)=\alpha_{n}\left( 1+O(n^{-2})\right) ,
\end{equation}
\begin{equation}
\frac{\partial}{\partial\lambda}(B^{\prime}(\lambda,t)B(\lambda,t))\sim
\alpha_{n}\beta_{n}n^{-1}\ln\left\vert n\right\vert
\end{equation}
hold uniformly for
\begin{equation}
t\in\lbrack0,n^{-3}],\text{ }\lambda=(2\pi n)^{2}+O(n^{-2}),
\end{equation}
where $\beta_{n}=b^{2n}\left( (2\pi)^{2n-1}(2n-1)!\right) ^{-2}$ and
$\alpha_{n}=a^{2n}\left( (2\pi)^{2n-1}(2n-1)!\right) ^{-2}.$
\end{lemma}
\begin{proof}
Using (56) and (58), by direct calculation we get
\begin{equation}
b_{2n-1}((2\pi n)^{2},0)=\beta_{n},\text{ }b_{2n-1}^{\prime}((2\pi
n)^{2},0)=\alpha_{n}.
\end{equation}
If $1\leq s\leq2n-1$, then for any $(\lambda,t)$ satisfying (63) there exist
$\lambda_{1}=(2\pi n)^{2}+O(n^{-2})$ and
$\lambda_{2}=(2\pi n)^{2}+O(n^{-2})$ such that
\begin{equation}
\mid\lambda_{1}-(2\pi(n-s))^{2}\mid<\mid\lambda-(2\pi(n-s)+t)^{2}\mid
<\mid\lambda_{2}-(2\pi(n-s))^{2}\mid.
\end{equation}
Therefore from (56) we obtain that
\begin{equation}
\left\vert b_{2n-1}(\lambda_{1},0)\right\vert <\left\vert b_{2n-1}
(\lambda,t)\right\vert <\left\vert b_{2n-1}(\lambda_{2},0)\right\vert .
\end{equation}
On the other hand, differentiating (56) with respect to $\lambda,$ we conclude
that
\begin{equation}
\frac{\partial}{\partial\lambda}b_{2n-1}((2\pi n)^{2},0)=-b_{2n-1}((2\pi
n)^{2},0)\sum_{s=1}^{2n-1}\frac{1}{4\pi^{2}s(2n-s)}.
\end{equation}
Now taking into account that the last sum is of order $n^{-1}
\ln\left\vert n\right\vert $ and using (64), we get
\begin{equation}
\frac{\partial}{\partial\lambda}b_{2n-1}((2\pi n)^{2},0)\sim\beta_{n}
n^{-1}\ln\left\vert n\right\vert .
\end{equation}
Arguing as above one can easily see that the $m$-th derivative, where
$m=2,3,...,$ of $b_{2n-1}(\lambda,0)$ is $O(\beta_{n}).$ Hence using the
Taylor series of $b_{2n-1}(\lambda,0)$ for $\lambda=(2\pi n)^{2}+O(n^{-2})$
about $(2\pi n)^{2},$ we obtain $b_{2n-1}(\lambda_{i},0)=\beta_{n}
(1+O(n^{-2})),$ $\forall i=1,2.$ This with (66) yields
\begin{equation}
b_{2n-1}(\lambda,t)=\beta_{n}(1+O(n^{-2}))
\end{equation}
for all $(\lambda,t)$ satisfying (63). In the same way, we get
\begin{equation}
\frac{\partial}{\partial\lambda}b_{2n-1}^{\prime}((2\pi n)^{2},0)\sim
\alpha_{n}n^{-1}\ln\left\vert n\right\vert ,\text{ }b_{2n-1}^{\prime
}(\lambda,t)=\alpha_{n}(1+O(n^{-2})).
\end{equation}
Now let us consider $b_{2n+1}(\lambda,t).$ By (52) the indices $n_{1}
,n_{2},...,n_{2n+1}$ taking part in $b_{2n+1}(\lambda,t)$ are all equal to $1$
except one, say $n_{s+1}=-1,$ where $s=2,3,...,2n-1.$ Moreover, if $n_{s+1}=-1,$ then
$n_{1}+n_{2}+...+n_{s+1}=n_{1}+n_{2}+...+n_{s-1}=s-1$ and
$n_{1}+n_{2}+...+n_{s+2}=n_{1}+n_{2}+...+n_{s}=s.$ Therefore, by (14),
$b_{2n+1}(\lambda,t)$ for
\begin{equation}
\lambda=(2\pi n)^{2}+O(n^{-1}),\text{ }t\in\lbrack0,n^{-3}]
\end{equation}
has the form
\[
b_{2n-1}(\lambda,t)
{\textstyle\sum\limits_{s=2}^{2n-1}}
\frac{ab}{\left( (2\pi n)^{2}-(2\pi(n-s+1))^{2}+O(n^{-1})\right) \left(
(2\pi n)^{2}-(2\pi(n-s))^{2}+O(n^{-1})\right) }.
\]
One can easily see that the last sum is $O(n^{-2}).$ Thus we have
\begin{equation}
b_{2n+1}(\lambda,t)=b_{2n-1}(\lambda,t)O(n^{-2})=\beta_{n}O(n^{-2})
\end{equation}
for all $(\lambda,t)$ satisfying (71).
Now let us estimate $b_{k}(\lambda,t)$ for $k>2n+1$. Since the sums in (14)
are taken under conditions (52), we conclude that $1\leq n_{1}+n_{2}
+\cdots+n_{s}\leq2n-1.$ Using this instead of $1\leq s\leq2n-1$ and repeating
the proof of (66) we obtain that for any $(\lambda,t)$ satisfying (71) there
exists $\lambda_{3}=(2\pi n)^{2}+O(n^{-1})$ and $\lambda_{4}=(2\pi
n)^{2}+O(n^{-1})$ such that
\[
\left\vert p_{n_{1},n_{2},...,n_{k}}(\lambda_{3},0)\right\vert <\left\vert
p_{n_{1},n_{2},...,n_{k}}(\lambda,t)\right\vert <\left\vert p_{n_{1}
,n_{2},...,n_{k}}(\lambda_{4},0)\right\vert ,\text{ }\forall k\geq2n+1,
\]
where $p_{n_{1},n_{2},...,n_{k}}(\lambda,0)$ is defined in (26). This with
(27) and (72) implies that
\begin{equation}
\sum_{k=2n+1}^{\infty}\left\vert b_{k}(\lambda,t)\right\vert =\beta
_{n}O(n^{-2})
\end{equation}
for all $(\lambda,t)$ satisfying (71). In the same way, we obtain
\begin{equation}
\sum_{k=2n+1}^{\infty}\left\vert b_{k}^{\prime}(\lambda,t)\right\vert
=\alpha_{n}O(n^{-2}).
\end{equation}
Thus (61) follows from (69), (70), (73) and (74).
Now we prove (62). It follows from (73), (74) and the Cauchy inequality that
\begin{equation}
\frac{\partial}{\partial\lambda}(\sum_{k=2n+1}^{\infty}b_{k}(\lambda
,t))=\beta_{n}O(n^{-1}),\text{ }\frac{\partial}{\partial\lambda}(\sum
_{k=2n+1}^{\infty}b_{k}^{\prime}(\lambda,t))=\alpha_{n}O(n^{-1}).
\end{equation}
Therefore (62) follows from (68) and (70).
\end{proof}
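The Cauchy inequality invoked in the last step of the proof above is the standard Cauchy estimate for the derivative of an analytic function. Our reading (a reconstruction, not spelled out in the text) is that it is applied on a circle of radius $r$ comparable to $n^{-1}$, which stays inside the region (71) where the bounds (73) and (74) hold, so that the factor $1/r$ costs one power of $n$:

```latex
\left\vert \frac{\partial f}{\partial\lambda}(\lambda)\right\vert
\leq\frac{1}{r}\sup_{\left\vert z-\lambda\right\vert =r}\left\vert
f(z)\right\vert ,\qquad r\sim n^{-1}.
```

Applied to $f(\lambda)=\sum_{k=2n+1}^{\infty}b_{k}(\lambda,t)$, a uniform bound $\beta_{n}O(n^{-2})$ thus yields the bound $\beta_{n}O(n^{-1})$ for the $\lambda$-derivative, and similarly for the sum of the $b_{k}^{\prime}$ with $\alpha_{n}$ in place of $\beta_{n}$.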
In the case (4), formula (61) together with (44) implies that the inequality
(29) holds for $\lambda=\lambda_{n,j}(0)$ and that the relation (30) holds if
and only if $\mid a\mid=\mid b\mid.$ On the other hand, the large eigenvalues
of $H_{0}$ and $H_{\pi}$ are simple (see, for example, Theorems 11 and 12 of
[16] and Theorem 1 of [15]). Therefore, taking into account that if the number
of multiple eigenvalues is finite, then the Riesz basis property of the root
functions follows from the Riesz basis property of the normal system of EAF,
we get the following consequence of Theorem 1.
\begin{corollary}
The root functions of $H_{0}$ form a Riesz basis if and only if $\mid
a\mid=\mid b\mid.$ The statement continues to hold if $H_{0}$ is replaced by
$H_{\pi},$ which can be proved in the same way.
\end{corollary}
These results were obtained by Djakov and Mitjagin [3] by a different method.
The following statement also follows easily from Lemma 1.
\begin{proposition}
If $\lambda_{n,j}(t)$ for $t\in\lbrack0,\rho]$ is a multiple eigenvalue of
$H_{t}$ then
\begin{equation}
(4\pi nt)^{2}=-\beta_{n}\alpha_{n}\left( 1+O(n^{-2})\right) .
\end{equation}
\end{proposition}
\begin{proof}
If $\lambda_{n,1}(t)=\lambda_{n,2}(t)=:\lambda_{n}(t)$ is a multiple
eigenvalue, then, as noted at the beginning of this section, it satisfies
(47) and (49), from which we obtain
\begin{equation}
4D(\lambda_{n}(t),t)\left( 1-\frac{1}{2}\frac{\partial}{\partial\lambda
}(A(\lambda_{n}(t),t)+A^{\prime}(\lambda_{n}(t),t))\right) ^{2}=\left(
\frac{\partial}{\partial\lambda}D(\lambda_{n}(t),t)\right) ^{2}.
\end{equation}
By (21) and (23) we have
\begin{equation}
\frac{\partial}{\partial\lambda}(A(\lambda_{n}(t),t)+A^{\prime}(\lambda
_{n}(t),t))=O(n^{-2}),
\end{equation}
\begin{align}
(4\pi nt+C(\lambda_{n}(t),t))^{2} & =(4\pi nt)^{2}(1+O(n^{-2})),\text{ }\\
\frac{\partial}{\partial\lambda}(4\pi nt+C(\lambda_{n}(t),t))^{2} & =(4\pi
nt)^{2}(1+O(n^{-2}))O(n^{-3})
\end{align}
for $t\in\lbrack0,\rho]$. On the other hand it follows from (59) and (22) that
\begin{equation}
B(\lambda_{n}(t),t)B^{\prime}(\lambda_{n}(t),t)=O(n^{-10}),\text{ }
\frac{\partial}{\partial\lambda}(B^{\prime}(\lambda_{n}(t),t)B(\lambda
_{n}(t),t))=O(n^{-7}).
\end{equation}
Therefore from (48) and (79)-(81) we obtain
\begin{equation}
D(\lambda_{n}(t),t)=(4\pi nt)^{2}(1+O(n^{-2}))+O(n^{-10})
\end{equation}
and
\begin{equation}
\frac{\partial}{\partial\lambda}(D(\lambda_{n}(t),t))=(4\pi nt)^{2}
(1+O(n^{-2}))O(n^{-3})+O(n^{-7}).
\end{equation}
Using the equalities (78), (82) and (83) in (77) we get
\begin{equation}
4(4\pi nt)^{2}(1+O(n^{-2}))=(4\pi nt)^{2}O(n^{-4})+O(n^{-8}).
\end{equation}
Hence, we have $t\in\lbrack0,n^{-3}].$ Then by (60), $t$ and $\lambda
=:\lambda_{n}(t)$ satisfy (63) and by Lemma 1
\begin{equation}
B(\lambda_{n}(t),t)=\beta_{n}\left( 1+O(n^{-2})\right) ,\text{ }B^{\prime
}(\lambda_{n}(t),t)=\alpha_{n}\left( 1+O(n^{-2})\right) ,
\end{equation}
\begin{equation}
\frac{\partial}{\partial\lambda}(B^{\prime}(\lambda_{n}(t),t)B(\lambda
_{n}(t),t))\sim\alpha_{n}\beta_{n}n^{-1}\ln\left\vert n\right\vert .
\end{equation}
Therefore by (48), (79) and (80) we have
\begin{equation}
D(\lambda_{n}(t),t)=(4\pi nt)^{2}(1+O(n^{-2}))+\beta_{n}\alpha_{n}\left(
1+O(n^{-2})\right)
\end{equation}
and
\begin{equation}
\frac{\partial}{\partial\lambda}(D(\lambda_{n}(t),t))=(4\pi nt)^{2}
(1+O(n^{-2}))O(n^{-3})+O(\alpha_{n}\beta_{n}n^{-1}\ln\left\vert n\right\vert
).
\end{equation}
Now using (78), (87) and (88) in (77) we obtain
\[
(4\pi nt)^{2}(1+O(n^{-2}))+\beta_{n}\alpha_{n}\left( 1+O(n^{-2})\right)
=(4\pi nt)^{2}O(n^{-4})+\left( O(\alpha_{n}\beta_{n}n^{-1}\ln\left\vert
n\right\vert )\right) ^{2},
\]
which implies (76).
\end{proof}
Now we are ready to prove the main result of this section by using Theorem 3.
In formula (87) the terms $O(n^{-2})$ do not depend on $t$, that is, there
exists $c>0$ such that these terms satisfy the inequality
\begin{equation}
\left\vert O(n^{-2})\right\vert <cn^{-2}.
\end{equation}
\begin{theorem}
Let $\mathbb{S}$ be the set of integers $n>N$ such that
\begin{equation}
-\pi+3cn^{-2}\leq\arg(\beta_{n}\alpha_{n})\leq\pi-3cn^{-2}
\end{equation}
and let $\left\{ t_{n}:n>N\right\} $ be a sequence defined as follows: $t_{n}=0$
if $n\in\mathbb{S}$ and
\begin{equation}
(4\pi nt_{n})^{2}(1-cn^{-2})=-(1+cn^{-2}+n^{-3})\operatorname{Re}(\beta
_{n}\alpha_{n})
\end{equation}
if $n\notin$ $\mathbb{S},$ where $c$ is defined in (89). Then the eigenvalues
$\lambda_{n,1}(t)$ and $\lambda_{n,2}(t)$ defined in Remark 1 are simple and
satisfy (43) for $t\in\lbrack t_{n},\rho].$
\end{theorem}
\begin{proof}
Let $n\notin\mathbb{S}.$ It follows from (48), (59) and (79) that if $t\geq
n^{-3}$ then
\begin{equation}
\operatorname{Re}D(\lambda_{n,j}(t),t)>0.
\end{equation}
If $t\in\lbrack0,n^{-3}]$ then we have formula (87). Since the terms
$O(n^{-2})$ in (87) satisfy (89) we have the following estimate for the real
part of the first term in the right-hand side of (87):
\begin{equation}
\operatorname{Re}((4\pi nt)^{2}(1+O(n^{-2})))>(4\pi nt)^{2}(1-cn^{-2}
)\geq(4\pi nt_{n})^{2}(1-cn^{-2})
\end{equation}
for $t\in\lbrack t_{n},n^{-3}].$ On the other hand, if $n\notin\mathbb{S}$
then by the definition of $\mathbb{S}$ (90) does not hold, which implies that
$\beta_{n}\alpha_{n}=-\left\vert \beta_{n}\alpha_{n}\right\vert e^{i\theta
},$ $\left\vert \theta\right\vert <3cn^{-2},$ and hence
\begin{equation}
\operatorname{Im}(\beta_{n}\alpha_{n})=O(n^{-2})\operatorname{Re}(\beta
_{n}\alpha_{n}).
\end{equation}
Using this and (89), we obtain the following estimate for the real part of the
second term in the right-hand side of (87):
\[
\left\vert \operatorname{Re}(\beta_{n}\alpha_{n}\left( 1+O(n^{-2})\right)
)\right\vert <(1+cn^{-2}+n^{-3})\left\vert \operatorname{Re}(\beta_{n}
\alpha_{n})\right\vert .
\]
Therefore it follows from (93), (91) and (87) that (92) holds for $t\in\lbrack
t_{n},n^{-3}],$ $n>N$ and $n\notin\mathbb{S}.$ Thus (92) is true for all
$t\in\lbrack t_{n},\rho]$. Hence $\sqrt{D(\lambda_{n,j}(t),t)}$ is
well-defined and by Remark 2 it depends continuously on $t$. Therefore the
proof follows from Theorem 3.
Now consider the case $n\in\mathbb{S}.$ By (89) we have
\[
-cn^{-2}-n^{-3}<\arg(1+O(n^{-2}))<cn^{-2}+n^{-3}.
\]
Using (94) and (89) we obtain
\[
-\pi+2cn^{-2}-n^{-3}<\arg(\beta_{n}\alpha_{n}\left( 1+O(n^{-2})\right)
)<\pi-2cn^{-2}+n^{-3},
\]
\[
-cn^{-2}-n^{-3}<\arg((4\pi nt)^{2}(1+O(n^{-2})))<cn^{-2}+n^{-3}
\]
and the angle between the vectors $(4\pi nt)^{2}(1+O(n^{-2}))$ and
$\beta_{n}\alpha_{n}\left( 1+O(n^{-2})\right) $ is less than $\pi.$
Therefore by the parallelogram law of vector addition we have
\[
-\pi<\arg(D(\lambda_{n,j}(t),t))<\pi,\text{ }D(\lambda_{n,j}(t),t)\neq0
\]
for $t\in\lbrack0,\rho].$ Thus the proof again follows from Theorem 3.
\end{proof}
\begin{corollary}
Suppose that
\begin{equation}
\inf_{q,p\in\mathbb{N}}\{\mid2q\alpha-(2p-1)\mid\}\neq0,
\end{equation}
where $\alpha=\pi^{-1}\arg(ab).$ Then for all $n>N$ the eigenvalues
$\lambda_{n,1}(t)$ and $\lambda_{n,2}(t)$ defined in Remark 1 are simple and
satisfy (43) for $t\in\lbrack0,\rho]$.
\end{corollary}
\begin{proof}
By (95) there exists $\varepsilon>0$ such that $-\pi+\varepsilon
<\arg((ab)^{2n})<\pi-\varepsilon$ for all $n\in\mathbb{N}.$ Hence by the
definition of $\beta_{n}$ and $\alpha_{n}$ (see Lemma 1)
\begin{equation}
-\pi+\varepsilon<\arg(\alpha_{n}\beta_{n})<\pi-\varepsilon,
\end{equation}
that is, (90) holds for all $n>N.$ Therefore the proof follows from Theorem 4.
\end{proof}
\begin{remark}
Let $\widetilde{A},$ $\widetilde{B},$ $\widetilde{A}^{\prime}$, $\widetilde
{B}^{\prime}$ and $\widetilde{C}$ be the functions obtained from $A,$ $B,$
$A^{\prime},$ $B^{\prime}$ and $C$ by replacing $a_{k},a_{k}^{\prime}
,b_{k},b_{k}^{\prime}$ $\ $with $\widetilde{a}_{k},\widetilde{a}_{k}^{\prime
},\widetilde{b}_{k},\widetilde{b}_{k}^{\prime}$ , where $\widetilde{a}
_{k},\widetilde{a}_{k}^{\prime},\widetilde{b}_{k},\widetilde{b}_{k}^{\prime}$
differ from $a_{k},a_{k}^{\prime},b_{k},b_{k}^{\prime}$ respectively, in the
following sense. The sums in the expressions for $\widetilde{a}_{k}
,\widetilde{a}_{k}^{\prime},\widetilde{b}_{k},\widetilde{b}_{k}^{\prime}$ are
taken under condition $n_{1}+n_{2}+...+n_{s}\neq0,\pm(2n+1)$ instead of the
condition $n_{1}+n_{2}+...+n_{s}\neq0,\pm2n$ for $s=1,2,...,k.$ In
$\widetilde{b}_{k},\widetilde{b}_{k}^{\prime}$ the factor $q_{\pm
2n-n_{1}-n_{2}-...-n_{k}}$ of $b_{k},b_{k}^{\prime}$ is replaced by
$q_{\pm(2n+1)-n_{1}-n_{2}-...-n_{k}}$. To consider the case $t\in\lbrack
\pi-\rho,\pi],$ instead of (11), (15) we use
\begin{equation}
(\lambda_{n,j}(t)-(2\pi n+t)^{2}-\widetilde{A}(\lambda_{n,j}(t),t))u_{n,j}
(t)=(q_{2n+1}+\widetilde{B}(\lambda_{n,j}(t),t))v_{n,j}(t),
\end{equation}
\begin{equation}
(\lambda_{n,j}(t)-(-2\pi(n+1)+t)^{2}-\widetilde{A}^{\prime}(\lambda
_{n,j}(t),t))v_{n,j}(t)=(q_{-2n-1}+\widetilde{B}^{\prime}(\lambda
_{n,j}(t),t))u_{n,j}(t)
\end{equation}
and repeat the investigation of the case $t\in\lbrack0,\rho]$. Note that,
using instead of (9) for $k\neq0,2n$ the same inequality for $k\neq0,2n+1$ and
$t\in\lbrack\pi-\rho,\pi],$ from (10) we obtain (97) and (98) instead of (11)
and (15). In the case $t\in\lbrack\pi-\rho,\pi],$ instead of (43) we
obtain
\begin{equation}
\lambda_{n,j}(t)=(2\pi n+t)^{2}-2\pi(2n+1)(t-\pi)+\frac{1}{2}(\widetilde
{A}^{\prime}+\widetilde{A})+(-1)^{j}\sqrt{\widetilde{D}(\lambda
_{n,j}(t),t)},
\end{equation}
where $\widetilde{D}=\left( 2\pi(2n+1)(t-\pi)+\widetilde{C}\right)
^{2}+\widetilde{B}\widetilde{B}^{\prime}.$ Similarly, instead of (61),
(76), (91) and (95) we obtain respectively the following relations
\[
\widetilde{B}(\lambda,t)=\widetilde{\beta}_{n}\left( 1+O(n^{-2})\right)
,\text{ }\widetilde{B}^{\prime}(\lambda,t)=\widetilde{\alpha}_{n}\left(
1+O(n^{-2})\right) ,
\]
\[
\left( 2\pi(2n+1)(t-\pi)\right) ^{2}=-\widetilde{\beta}_{n}\widetilde
{\alpha}_{n}\left( 1+O(n^{-2})\right) ,
\]
\[
(2\pi(2n+1)(\widetilde{t}_{n}-\pi))^{2}(1-cn^{-2})=-(1+cn^{-2}+n^{-3}
)\operatorname{Re}(\widetilde{\beta}_{n}\widetilde{\alpha}_{n}),
\]
\[
\text{ }\inf_{q,p\in\mathbb{N}}\{\mid(2q+1)\alpha-(2p-1)\mid\}\neq0,
\]
where $\widetilde{\beta}_{n}=b^{2n+1}\left( (2\pi)^{2n}(2n)!\right) ^{-2}$,
$\widetilde{\alpha}_{n}=a^{2n+1}\left( (2\pi)^{2n}(2n)!\right) ^{-2},$
$\widetilde{t}_{n}\in\lbrack\pi-\rho,\pi]$ and Proposition 1, Theorem 4,
Corollary 2 continue to hold under the corresponding replacement.
As we noted in Section 2 (see Theorem 2 of [13] and Remark 1), the
large eigenvalues of $H_{t}$ for $t\in\lbrack\rho,\pi-\rho]$ consist of the
simple eigenvalues $\lambda_{n}(t)$ for $\left\vert n\right\vert >N$
satisfying the asymptotic formula (5) uniformly with respect to $t$ in
$[\rho,\pi-\rho]$. Thus by Theorem 4 and by the similar investigation just
noted, the eigenvalues $\lambda_{n,j}(t)$ for $n>N,$ $j=1,2$ and $t\in$
$[t_{n},\rho]\cup$ $[\pi-\rho,\widetilde{t}_{n}]$ and the eigenvalues
$\lambda_{n}(t)$ for $t\in\lbrack\rho,\pi-\rho]$ and $\left\vert n\right\vert
>N$ are simple. These eigenvalues satisfy (43), (5) and (99) for $t\in$
$[t_{n},\rho],$ $t\in\lbrack\rho,\pi-\rho]$ and $t\in\lbrack\pi-\rho
,\widetilde{t}_{n}]$ respectively.
\end{remark}
\section{Asymptotic Analysis of $H$}
In this section we investigate the operator $H$ generated in $L_{2}
(-\infty,\infty)$ by (1) when the potential $q$ has the form (4). Since the
spectrum $S(H)$ of $H$ is the union of the spectra $S(H_{t})$ of the operators
$H_{t}$ for $t\in(-\pi,\pi],$ in the notation of Remark 1 the components
$\Gamma_{n}$ for $n>N$ form the part of $S(H)$ lying in a neighborhood of
infinity. Here we consider only this part of the spectrum.
Following [7, 12], we define the projections and the spectral singularities
of $H$ as follows. A closed arc $\gamma=:\{z\in\mathbb{C}:z=\lambda
(t),t\in\lbrack\alpha,\beta]\}$ with $\lambda(t)$ continuous on the closed
interval $[\alpha,\beta]$, analytic in an open neighborhood of $[\alpha
,\beta]$ and $F(\lambda(t))=2\cos t,$
\[
\text{ }\frac{\partial F(\lambda(t))}{\partial\lambda}\neq0,\text{ }\forall
t\in\lbrack\alpha,\beta],\text{ }\lambda^{^{\prime}}(t)\neq0,\text{ }\forall
t\in(\alpha,\beta)
\]
is called a regular spectral arc of $H,$ where $F(\lambda)$ is defined in (3).
The projection $P(\gamma)$ corresponding to the regular spectral arc $\gamma$
is defined by
\begin{equation}
P(\gamma)f=\frac{1}{2\pi}
{\textstyle\int\limits_{\gamma}}
(\Phi_{+}(x,\lambda)F_{-}(\lambda,f)+\Phi_{-}(x,\lambda)F_{+}(\lambda
,f))\frac{\varphi(1,\lambda)}{p(\lambda)}d\lambda,
\end{equation}
where $\Phi_{\pm}(x,\lambda)=:\theta(x,\lambda)+(\varphi(1,\lambda
))^{-1}(e^{\pm it}-\theta(1,\lambda))\varphi(x,\lambda)$ are the Floquet
solutions,
\[
F_{\pm}(\lambda,f)=\int_{\mathbb{R}}f(x)\Phi_{\pm}(x,\lambda)dx,
\]
$p(\lambda)=\sqrt{4-F^{2}(\lambda)}$ and the functions $\theta(x,\lambda)$ and
$\varphi(x,\lambda)$ are defined in (3). Moreover, for the norm of the
projections $P(\gamma)$ corresponding to the regular spectral arc $\gamma$
defined in (100) we use the formula
\begin{equation}
\parallel P(\gamma)\parallel=\sup_{t\in\lbrack\alpha,\beta]}\mid
d(\lambda(t))\mid^{-1}
\end{equation}
of [12] (see Theorem 2 of [12] or Proposition 1 of [17]), where $d(\lambda
(t))=(\Psi_{\lambda(t)},\Psi_{\lambda(t)}^{\ast}),$ $\Psi_{\lambda(t)}$ and
$\Psi_{\lambda(t)}^{\ast}$ are the normalized eigenfunctions of $H_{t}$ and
$H_{t}^{\ast}$ corresponding to $\lambda(t)$ and $\overline{\lambda(t)}$
respectively. In [14] and [17] the following definitions were given.
\begin{definition}
We say that $\lambda\in S(H)$ is a spectral singularity of $H$ if for all
$\varepsilon>0$ there exists a sequence $\{\gamma_{n}\}$ of regular
spectral arcs $\gamma_{n}\subset\{z\in\mathbb{C}:\mid z-\lambda\mid
<\varepsilon\}$ such that
\begin{equation}
\lim_{n\rightarrow\infty}\parallel P(\gamma_{n})\parallel=\infty.
\end{equation}
\end{definition}
\begin{definition}
We say that the operator $H$ has a spectral singularity at infinity if there
exists a sequence $\{\gamma_{n}\}$ of the regular spectral arcs such that
$d(0,\gamma_{n})\rightarrow\infty$ as $n\rightarrow\infty$ and (102) holds,
where $d(0,\gamma_{n})$ is the distance from the point $(0,0)$ to the arc
$\gamma_{n}.$
\end{definition}
\begin{definition}
The operator $H$ is said to be an asymptotically spectral operator if there
exists a positive constant $C$ such that
\[
\sup_{\sigma\in R(C)}(\operatorname*{ess\,sup}_{t\in(-\pi,\pi]}\parallel e(t,\sigma
)\parallel)<\infty,
\]
where $R(C)$ is the ring consisting of all sets which are finite unions of
half-closed rectangles lying in $\{\lambda\in\mathbb{C}:\mid\lambda
\mid>C\}$ and $e(t,\sigma)$ is the spectral projection defined by
contour integration of the resolvent of $H_{t}$ over $\sigma.$
\end{definition}
\begin{remark}
Since the large eigenvalues of $H_{0}$ and $H_{\pi}$ are simple, Theorem 1 of
[17] for the operator $H$ can be written as follows. The following statements
are equivalent:
$(a)$ The operator $H$ has no spectral singularity at infinity.
$(b)$ $H$ is an asymptotically spectral operator.
$(c)$ There exist constants $c_{1}$ and $N$ such that for all $\mid n\mid>N$
and $t\in(-\pi,\pi]$ the eigenvalues $\lambda_{n}(t)$ are simple and
\begin{equation}
\mid d_{n}(t)\mid^{-1}<c_{1},
\end{equation}
where $d_{n}(t)=(\Psi_{n,t},\Psi_{n,t}^{\ast}),$ $\Psi_{n,t}$ and $\Psi
_{n,t}^{\ast}$ are the normalized eigenfunctions of $H_{t}$ and $H_{t}^{\ast}$
corresponding to the eigenvalues $\lambda_{n}(t)$ and $\overline{\lambda
_{n}(t)}$ respectively.
Note that if $\lambda_{n}(t)$ is a simple eigenvalue then the normalized
eigenfunctions $\Psi_{n,t}$ and $\Psi_{n,t}^{\ast}$ are determined uniquely up
to a constant of modulus $1.$ Therefore $\mid d_{n}(t)\mid$ is uniquely defined
and is the norm of the projection defined by integration of the resolvent of
the operator $H_{t}$ over a closed contour containing only the simple
eigenvalue $\lambda_{n}(t)$. Moreover, $\mid d_{n}(t)\mid$ is continuous at $t$
if $\lambda_{n}(t)$ is a simple eigenvalue.
\end{remark}
The following proposition follows from Remark 4, Definition 2 and (101).
\begin{proposition}
The operator $H$ has a spectral singularity at infinity if and only if there
exists a sequence of pairs $\{(n_{k},t_{k})\}$ such that $\lambda_{n_{k}
}(t_{k})$ is a simple eigenvalue and
\begin{equation}
\lim_{k\rightarrow\infty}d_{n_{k}}(t_{k})=0,
\end{equation}
where $n_{k}\in\mathbb{Z}$ and $t_{k}\in(-\pi,\pi]$.
\end{proposition}
As was noted in [3] (see page 539 of [3]), if $\mid a\mid\neq\mid b\mid,$
then it follows from [3] and [7] that $H$ is not a spectral operator. This
fact, and the following more general one, easily follow from the formulas of
Section 4.
\begin{proposition}
If $\mid a\mid\neq\mid b\mid,$ then the operator $H$ has a spectral
singularity at infinity and hence is not an asymptotically spectral operator.
\end{proposition}
\begin{proof}
Suppose, without loss of generality, that $\mid a\mid<\mid b\mid.$ As was
noted in Remark 4, the large periodic eigenvalues $\lambda_{n}(0)$ are simple.
Due to (44) the formulas (11), (15) and (25) for $t=0$ have the forms
\begin{align*}
(\lambda_{n}(0)-(2\pi n)^{2}-A(\lambda_{n}(0),0))u_{n} & =B(\lambda
_{n}(0),0)v_{n},\\
(\lambda_{n}(0)-(2\pi n)^{2}-A^{\prime}(\lambda_{n}(0),0))v_{n} &
=B^{\prime}(\lambda_{n}(0),0)u_{n},\text{ }\left\vert u_{n}\right\vert
^{2}+\left\vert v_{n}\right\vert ^{2}=1+O(n^{-2}),
\end{align*}
where $u_{n}=(\Psi_{n,0},e^{i2\pi nx}),$ $v_{n}=(\Psi_{n,0},e^{-i2\pi nx}).$
Moreover, by (61), $B(\lambda_{n}(0),0)$ and $B^{\prime}(\lambda_{n}(0),0)$
are nonzero numbers and
\[
\frac{B(\lambda_{n}(0),0)}{B^{\prime}(\lambda_{n}(0),0)}=O(\left( \mid
a\mid/\mid b\mid\right) ^{n})=O(n^{-2}).
\]
Using these equalities and arguing as in the proof of (33) we obtain
\[
u_{n}=v_{n}O(n^{-1}),\text{ }\Psi_{n,0}(x)=ce^{-i2\pi nx}+O(n^{-1}),
\]
where $\mid c\mid=1$ and $\Psi_{n,0}(x)$ is the normalized eigenfunction
corresponding to $\lambda_{n}(0).$ Replacing $a$ and $b$ by $\overline{b}$ and
$\overline{a}$ respectively, in the same way we obtain
\[
\Psi_{n,0}^{\ast}(x)=\overline{c}e^{i2\pi nx}+O(n^{-1}).
\]
Thus $(\Psi_{n,0},\Psi_{n,0}^{\ast})\rightarrow0$ as $n\rightarrow\infty$
and hence the proof follows from Proposition 2.
\end{proof}
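Proposition 3 lends itself to a numerical illustration. The sketch below is ours and rests on an assumption about the form (4) of the potential, namely $q(x)=ae^{-i2\pi x}+be^{i2\pi x}$, under which $H_{0}$ truncates in the Fourier basis $\{e^{i2\pi kx}:\left\vert k\right\vert \leq K\}$ to a non-self-adjoint tridiagonal matrix; $d_{n}$ is then the normalized pairing of a right eigenvector with the corresponding eigenvector of the adjoint matrix.

```python
import numpy as np

# Illustrative sketch (not from the paper): ASSUMES q(x) = a e^{-i2 pi x} + b e^{i2 pi x},
# so that on the Fourier basis e^{i2 pi k x}, |k| <= K, the operator H_0 = -d^2/dx^2 + q
# is approximated by a non-self-adjoint tridiagonal matrix M with
# M[k, k] = (2 pi k)^2, M[k-1, k] = a, M[k+1, k] = b.

def d_n(a, b, n, K=8):
    ks = np.arange(-K, K + 1)
    M = np.diag((2 * np.pi * ks) ** 2).astype(complex)
    for i in range(2 * K):
        M[i, i + 1] = a  # q shifts e^{i2 pi k x} down to e^{i2 pi (k-1) x} with weight a
        M[i + 1, i] = b  # ... and up to e^{i2 pi (k+1) x} with weight b
    lam, X = np.linalg.eig(M)            # right eigenvectors of M
    mu, Y = np.linalg.eig(M.conj().T)    # eigenvectors of the adjoint M*
    i = np.argmin(abs(lam - (2 * np.pi * n) ** 2))  # eigenvalue nearest (2 pi n)^2
    j = np.argmin(abs(mu - lam[i].conj()))          # matching adjoint eigenvalue
    x, y = X[:, i], Y[:, j]
    return abs(np.vdot(y, x)) / (np.linalg.norm(x) * np.linalg.norm(y))

print(d_n(4.0, 4.0, 2))  # |a| = |b|: the matrix is symmetric and d_n stays close to 1
print(d_n(2.0, 8.0, 2))  # |a| < |b|: d_n is already small for n = 2
```

In this model $d_{n}$ is of order $(\mid a\mid/\mid b\mid)^{n}$ when $\mid a\mid<\mid b\mid$ (by a diagonal-scaling argument), consistent with $(\Psi_{n,0},\Psi_{n,0}^{\ast})\rightarrow0$ in the proof above.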
Thus if $\mid a\mid\neq\mid b\mid,$ then $H$ is not a spectral operator. The
converse statement is not true. If $\mid a\mid=\mid b\mid,$ then in Theorem
6 we will find necessary and sufficient conditions on $ab$ for $H$ to
be an asymptotically spectral operator. To prove this main result of the
paper we first prove some preliminary propositions.
\begin{theorem}
Suppose $\mid a\mid=\mid b\mid$ and
\begin{equation}
\inf_{q,p\in\mathbb{N}}\{\mid q\alpha-(2p-1)\mid\}\neq0,
\end{equation}
where $\alpha=\pi^{-1}\arg(ab)$. Then there exists $N$ such that for
$\left\vert n\right\vert >N$ the component $\Gamma_{n}$ of the spectrum $S(H)$
of the operator $H$ is a separated simple analytic arc with the end points
$\lambda_{n}(0)$ and $\lambda_{n}(\pi).$ These components do not contain
spectral singularities. Moreover, the number of spectral
singularities of $H$ is finite.
\end{theorem}
\begin{proof}
Corollary 2, Theorem 2 of [13], and Remark 3 immediately imply that
the eigenvalues $\lambda_{n}(t)$ for $\left\vert n\right\vert >N$ and
$t\in(-\pi,\pi]$ are simple. Therefore for $\left\vert n\right\vert >N$ the
component $\Gamma_{n}$ of the spectrum of the operator $H$ is a separated
simple analytic arc with the end points $\lambda_{n}(0)$ and $\lambda_{n}
(\pi)$. It is well-known that the spectral singularities of $H$ are contained
in the set of multiple eigenvalues of $H_{t}$ (see [7,12]). Hence, $\Gamma
_{n}$ for $\left\vert n\right\vert >N$ does not contain the spectral
singularities. On the other hand, the multiple eigenvalues are the zeros of
the entire function $\frac{dF(\lambda)}{d\lambda},$ where $F(\lambda)$ is
defined in (3). Since a nonzero entire function has a finite number of zeros
on bounded sets, the number of spectral singularities of $H$ is finite.
\end{proof}
Now using Theorems 4 and 5, Corollary 2, Proposition 2 and the following
equalities we prove the main results of the paper. By Theorem 4, $\lambda
_{n,j}$ satisfies (43) for $j=1,2$ and $t\in\lbrack t_{n},\rho].$ Using it
in (45) and (46) we obtain
\begin{equation}
(-C(\lambda_{n,1}(t),t)-4\pi nt-\sqrt{D(\lambda_{n,1}(t),t)})u_{n,1}
(t)=B(\lambda_{n,1}(t),t)v_{n,1}(t),
\end{equation}
\begin{equation}
(-C(\lambda_{n,2}(t),t)-4\pi nt+\sqrt{D(\lambda_{n,2}(t),t)})u_{n,2}
(t)=B(\lambda_{n,2}(t),t)v_{n,2}(t),
\end{equation}
\begin{equation}
(C(\lambda_{n,1}(t),t)+4\pi nt-\sqrt{D(\lambda_{n,1}(t),t)})v_{n,1}
(t)=B^{\prime}(\lambda_{n,1}(t),t)u_{n,1}(t),
\end{equation}
\begin{equation}
(C(\lambda_{n,2}(t),t)+4\pi nt+\sqrt{D(\lambda_{n,2}(t),t)})v_{n,2}
(t)=B^{\prime}(\lambda_{n,2}(t),t)u_{n,2}(t)
\end{equation}
for $t\in\lbrack t_{n},\rho].$
Since the boundary condition (2) is self-adjoint we have $(H_{t}(q))^{\ast}=$
$H_{t}(\overline{q}).$ Therefore, all formulas and theorems obtained for
$H_{t}$ are true for $H_{t}^{\ast}$ if we replace $a$ and $b$ by $\overline
{b}$ and $\overline{a}$ respectively. For instance, (24) and (25) hold for the
operator $H_{t}^{\ast}$ and hence we have
\begin{equation}
\Psi_{n,j,t}^{\ast}(x)=u_{n,j}^{\ast}(t)e^{i(2\pi n+t)x}+v_{n,j}^{\ast
}(t)e^{i(-2\pi n+t)x}+h_{n,j,t}^{\ast}(x),
\end{equation}
\begin{equation}
(h_{n,j,t}^{\ast},e^{i(\pm2\pi n+t)x})=0,\text{ }\left\Vert h_{n,j,t}^{\ast
}\right\Vert =O(\frac{1}{n}),\text{ }\left\vert u_{n,j}^{\ast}(t)\right\vert
^{2}+\left\vert v_{n,j}^{\ast}(t)\right\vert ^{2}=1+O(\frac{1}{n^{2}}).
\end{equation}
Note that if (105) holds then $t_{n}=0,$ that is, (106)-(109) are satisfied
for $t\in\lbrack0,\rho].$ For $\mid n\mid>N$ it follows from (24), (25), (110)
and (111) that
\begin{equation}
(\Psi_{n,j,t},\Psi_{n,j,t}^{\ast})=u_{n,j}(t)\overline{u_{n,j}^{\ast}
(t)}+v_{n,j}(t)\overline{v_{n,j}^{\ast}(t)}+O(n^{-1}).
\end{equation}
Moreover, by Theorem 4, $\lambda_{n,1}(t)\neq\lambda_{n,2}(t)$ for
$t\in\lbrack t_{n},\rho]$ which imply that
\begin{equation}
(\Psi_{n,2,t},\Psi_{n,1,t}^{\ast})=u_{n,2}(t)\overline{u_{n,1}^{\ast}
(t)}+v_{n,2}(t)\overline{v_{n,1}^{\ast}(t)}+O(n^{-1})=0.
\end{equation}
\begin{theorem}
(Main Result) If $\mid a\mid=\mid b\mid$ and $\alpha=\pi^{-1}\arg(ab),$ then:
$(a)$ The operator $H$ has no spectral singularity at infinity and is an
asymptotically spectral operator if and only if (105) holds.
$(b)$ Let $\alpha$ be a rational number, that is, $\alpha=\frac{m}{q},$ where
$m$ and $q$ are relatively prime integers. The operator $H$ has no spectral
singularity at infinity and is an asymptotically spectral operator if and only
if $m$ is an even integer.
Let $\alpha$ be an irrational number. Then $H$ has a spectral singularity at
infinity and is not an asymptotically spectral operator if and only if there
exists a sequence of pairs $\{(q_{k},p_{k})\}\subset\mathbb{N}^{2}$ such
that$\ $
\begin{equation}
\mid\alpha-\frac{2p_{k}-1}{q_{k}}\mid=o(\frac{1}{q_{k}}),
\end{equation}
where $2p_{k}-1$ and $q_{k}$ are relatively prime integers.
\end{theorem}
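For rational $\alpha$, condition (105) reduces to elementary arithmetic, which explains the parity dichotomy in part $(b)$: for $\alpha=m/q$ in lowest terms the infimum vanishes precisely when $m$ is odd (take the index equal to $q$ and $2p-1=m$), while for even $m$ the number $q^{\prime}m/q$ is never an odd integer, so the infimum is at least $1/q$. A brute-force check (our illustration, not from the paper):

```python
from fractions import Fraction

# Brute-force approximation of inf_{q', p} | q' * alpha - (2p - 1) | over a finite
# range, illustrating condition (105) for rational alpha = m/q in lowest terms:
# the infimum is 0 iff m is odd; for even m it is bounded below by 1/q.

def approx_inf(alpha, Q=500):
    best = None
    for qp in range(1, Q + 1):
        x = qp * alpha
        p = max(1, round((x + 1) / 2))   # the p making 2p - 1 closest to x
        val = abs(x - (2 * p - 1))
        if best is None or val < best:
            best = val
    return best

print(approx_inf(Fraction(2, 3)))  # m = 2 even: infimum 1/3 > 0
print(approx_inf(Fraction(3, 5)))  # m = 3 odd: infimum 0 (q' = 5, 2p - 1 = 3)
```

The exact arithmetic of `Fraction` avoids floating-point noise, so the two regimes of Theorem 6$(b)$ are cleanly separated even in a finite search.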
It is clear that $(b)$ follows from $(a).$ The sufficiency of Theorem 6$(a)$
follows from Remark 4 and the following lemma.
\begin{lemma}
If $\mid a\mid=\mid b\mid$ and (105) holds, then (103) is satisfied.
\end{lemma}
\begin{proof}
If $t\in\lbrack n^{-3},\rho],$ then by (79), (48) and (59) the coefficient of
$u_{n,1}(t)$ in (106) is essentially greater than the coefficient of
$v_{n,1}(t)$. Therefore from (24) and (25) we get
\[
\Psi_{n,1,t}(x)=e^{-i(2\pi n+t)x}+O(n^{-1}).
\]
In the same way we obtain that $\Psi_{n,1,t}^{\ast}$ satisfies the same
formula and hence the equality
\begin{equation}
(\Psi_{n,j,t},\Psi_{n,j,t}^{\ast})=1+O(n^{-1})
\end{equation}
holds uniformly for $t\in\lbrack n^{-3},\rho]$ and $j=1.$
Now suppose that $t\in\lbrack0,n^{-3}].$ First consider the case $nt\geq
\mid\alpha_{n}\mid,$ where $\alpha_{n}$ is defined in Lemma 1. Then using
(79), (85), (87), and taking into account that $\mid\alpha_{n}\mid=\mid
\beta_{n}\mid$ when $\mid a\mid=\mid b\mid,$ from (106) we get $\mid
u_{n,1}(t)\mid<\frac{1}{6}\mid v_{n,1}(t)\mid.$ This with (25) gives $\mid
u_{n,1}(t)\mid<\frac{1}{5},$ $\mid v_{n,1}(t)\mid>\frac{4}{5}.$ Similarly,
$\mid u_{n,1}^{\ast}(t)\mid<\frac{1}{5}$ and $\mid v_{n,1}^{\ast}(t)\mid
>\frac{4}{5}.$ Therefore, by (112) we have
\begin{equation}
\mid(\Psi_{n,j,t},\Psi_{n,j,t}^{\ast})\mid>\frac{1}{2}
\end{equation}
for $j=1.$ In the same way we prove (115) and (116) for $j=2$.
Now, consider the case $nt<\mid\alpha_{n}\mid.$ Using (79) and (87) one can
easily see that if $nt=o(\mid\alpha_{n}\mid)$ then both of the numbers
-C(\lambda_{n,1},t)-4\pi nt-\sqrt{D(\lambda_{n,1},t)},\text{ }C(\lambda
_{n,1},t)+4\pi nt-\sqrt{D(\lambda_{n,1},t)},
\end{equation}
are of order $\alpha_{n},$ and if $nt\sim\beta_{n}$ then at least one of the
numbers in (117) is of order $\alpha_{n}.$ Hence (106) and (108) imply that
$u_{n,1}(t)\sim v_{n,1}(t).$ In
the same way we obtain $u_{n,2}(t)\sim v_{n,2}(t).$ Thus, by (25)
\begin{equation}
u_{n,1}(t)\sim v_{n,1}(t)\text{ }\sim u_{n,2}(t)\sim v_{n,2}(t)\sim1.
\end{equation}
Similarly
\begin{equation}
u_{n,1}^{\ast}(t)\sim v_{n,1}^{\ast}(t)\text{ }\sim u_{n,2}^{\ast}(t)\sim
v_{n,2}^{\ast}(t)\sim1.
\end{equation}
Using (118), (79) and (87) from (108) and (109) we obtain that
\begin{equation}
\frac{u_{n,1}}{v_{n,1}}=\frac{C(\lambda_{n,1},t)+4\pi nt-\sqrt{D(\lambda
_{n,1},t)}}{B^{\prime}(\lambda_{n,1}(t),t)}=\frac{4\pi nt[1]-\sqrt{(4\pi
nt)^{2}[1]+\alpha_{n}\beta_{n}[1]}}{\alpha_{n}}[1],
\end{equation}
\begin{equation}
\frac{u_{n,2}}{v_{n,2}}=\frac{C(\lambda_{n,2},t)+4\pi nt+\sqrt{D(\lambda
_{n,2},t)}}{B^{\prime}(\lambda_{n,2}(t),t)}=\frac{4\pi nt[1]+\sqrt{(4\pi
nt)^{2}[1]+\alpha_{n}\beta_{n}[1]}}{\alpha_{n}}[1],
\end{equation}
where, for brevity, $1+O(n^{-2})$ is denoted by $[1].$ On the other hand, using
(112) and (113) and taking into account (118) and (119) we get
\begin{equation}
(\Psi_{n,1,t},\Psi_{n,1,t}^{\ast})=v_{n,1}(t)\overline{v_{n,1}^{\ast}
(t)}(1+\frac{u_{n,1}(t)\overline{u_{n,1}^{\ast}(t)}}{v_{n,1}(t)\overline
{v_{n,1}^{\ast}(t)}})+O(n^{-1})=
\end{equation}
\[
v_{n,1}(t)\overline{v_{n,1}^{\ast}(t)}(1-\frac{u_{n,1}(t)v_{n,2}(t)}
{v_{n,1}(t)u_{n,2}(t)})+O(n^{-1}).
\]
This with (120) and (121) implies that
\begin{equation}
(\Psi_{n,1,t},\Psi_{n,1,t}^{\ast})=v_{n,1}(t)\overline{v_{n,1}^{\ast}
(t)}\left( 1-\frac{4\pi nt[1]-\sqrt{(4\pi nt)^{2}[1]+\alpha_{n}\beta_{n}[1]}
}{4\pi nt[1]+\sqrt{(4\pi nt)^{2}[1]+\alpha_{n}\beta_{n}[1]}}[1]\right) =
\end{equation}
\[
v_{n,1}(t)\overline{v_{n,1}^{\ast}(t)}\left( \frac{2\sqrt{(4\pi
nt)^{2}[1]+\alpha_{n}\beta_{n}[1]}}{4\pi nt[1]+\sqrt{(4\pi nt)^{2}
[1]+\alpha_{n}\beta_{n}[1]}}[1]\right) .
\]
If $(4\pi nt)^{2}=o(\alpha_{n}\beta_{n})$ then the last fraction is $2+o(1)$
and hence (103) holds.
It remains to consider the case $(4\pi nt)^{2}\sim(\alpha_{n}\beta_{n}).$ If
(105) holds then we have inequality (96). Therefore we have $(4\pi
nt)^{2}[1]+\alpha_{n}\beta_{n}[1]\sim\alpha_{n}\beta_{n}.$ Moreover, one can
easily verify that
\[
4\pi nt[1]+\sqrt{(4\pi nt)^{2}[1]+\alpha_{n}\beta_{n}[1]}=4\pi nt(1+\sqrt
{1+(4\pi nt)^{-2}\alpha_{n}\beta_{n}}+o(1))\sim\sqrt{\alpha_{n}\beta_{n}}.
\]
Using these relations in (123) we get (103) in this case. Thus (103) for
$t\in\lbrack0,\rho]$ is proved. In the same way, by using Remark 3, we prove
(103) for $t\in\lbrack\pi-\rho,\pi],$ and for $t\in\lbrack\rho,\pi-\rho]$ it
follows from (5).
\end{proof}
The proof of the necessity of Theorem 6$(a)$ follows from Proposition 2 and
the following lemma.
\begin{lemma}
If $\mid a\mid=\mid b\mid$ and
\begin{equation}
\inf_{q,p\in\mathbb{N}}\{\mid q\alpha-(2p-1)\mid\}=0,
\end{equation}
where $\alpha=\pi^{-1}\arg(ab),$ then there exists a sequence of pairs
$\{(n_{k},t_{k})\}$ satisfying (104), where $n_{k}\in\mathbb{Z},$ $t_{k}
\in\lbrack0,\pi]$ and $\lambda_{n_{k}}(t_{k})$ is a simple eigenvalue.
\end{lemma}
\begin{proof}
If (124) holds then there exists a sequence of pairs $\{(q_{k},p_{k})\}$ such
that $q_{k}\alpha-(2p_{k}-1)\rightarrow0.$ First suppose that the sequence
$\{q_{k}\}$ contains infinitely many even numbers. Then one can easily
verify that there exists a sequence $\{n_{k}\}$ satisfying
\begin{equation}
\operatorname{Im}((ab)^{2n_{k}})=o((ab)^{2n_{k}})
\end{equation}
and
\begin{equation}
\lim_{k\rightarrow\infty}\operatorname{sgn}(\operatorname{Re}((ab)^{2n_{k}}))=-1.
\end{equation}
By Theorem 4, for the sequence $\{t_{n_{k}}\}$ defined by (91) and, for
simplicity, now denoted by $\{t_{k}\},$ the eigenvalues $\lambda_{n_{k},j}
(t_{k})$ are simple and the following relations hold:
\[
(4\pi n_{k}t_{k})^{2}=-\operatorname{Re}(\beta_{n_{k}}\alpha_{n_{k}
})(1+o(1))=-(\beta_{n_{k}}\alpha_{n_{k}})(1+o(1)),\text{ }
\]
\begin{equation}
(4\pi n_{k}t_{k})^{2}+(\beta_{n_{k}}\alpha_{n_{k}})=o(\beta_{n_{k}}
^{2}),\text{ }4\pi n_{k}t_{k}\sim\beta_{n_{k}}\sim\alpha_{n_{k}}.
\end{equation}
This with (87) implies that $\sqrt{D(\lambda_{n_{k},j},t_{k})}=o(\beta_{n_{k}
})$ for $j=1,2.$ Then by (79) we have
\begin{equation}
C(\lambda_{n_{k},j},t_{k})+4\pi n_{k}t_{k}\pm\sqrt{D(\lambda_{n_{k},j},t_{k}
)}=4\pi n_{k}t_{k}(1+o(1)).
\end{equation}
Using (61), (127) and (128) in (106) and (107) and taking into account (25) we
obtain
\[
u_{n_{k},1}(t)\sim v_{n_{k},1}(t)\sim u_{n_{k},2}(t)\sim v_{n_{k},2}(t)\sim1,
\]
\[
\lim_{k\rightarrow\infty}\frac{v_{n_{k},1}(t_{k})}{u_{n_{k},1}(t_{k})}
=\lim_{k\rightarrow\infty}\frac{v_{n_{k},2}(t_{k})}{u_{n_{k},2}(t_{k})}.
\]
This with (112) and (113) implies that (104) holds. In the same way we prove
(104) when $\{q_{k}\}$ contains infinitely many odd numbers.
\end{proof}
\begin{remark}
The main result of this paper shows that the asymptotic spectrality of $H$
depends on $\arg(ab),$ while we have proved in [15] that its spectrum depends
on $ab.$
\end{remark}
\end{document}
\begin{document}
\title{A simple proof of Jordan normal form}
\author{
Yuqun
Chen \\
{\small \ School of Mathematical Sciences, South China Normal
University}\\
{\small Guangzhou 510631, P.R. China}\\
{\small [email protected]}}
\date{}
\maketitle \noindent\textbf{Abstract:} In this note, a simple proof of
the Jordan normal form and the rational form of matrices over a field is
given.
\ \
Let $F$ be a field, $M_n(F)$ the set of $n\times n$ matrices over
$F$, $\lambda_i\in F,\ n_i$ a natural number, $i=1,2,\cdots,r$. A
matrix of the form
\begin{eqnarray}\label{e1}
\left( \begin{array}{cccc}
J_{\lambda_1,n_1} & 0 & \dots & 0 \\
0 & J_{\lambda_2,n_2} & \dots & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & 0 & \dots & J_{\lambda_r,n_r}
\end{array} \right)
\end{eqnarray}
is called a Jordan normal form, where for each $i$,
\begin{eqnarray*}
J_{\lambda_i,n_i}=\left( \begin{array}{cccccc}
\lambda_i & 1 & 0 & \dots & 0 & 0\\
0 & \lambda_i & 1 & \dots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \dots & \lambda_i & 1 \\
0 & 0 & 0 & \dots & 0 & \lambda_i
\end{array} \right)
\end{eqnarray*}
is an $n_i\times n_i$ matrix.
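As a small illustration (an example added here, not part of the original note), a $3\times 3$ Jordan normal form with blocks $J_{2,2}$ and $J_{3,1}$ is
\begin{eqnarray*}
\left( \begin{array}{ccc}
2 & 1 & 0 \\
0 & 2 & 0 \\
0 & 0 & 3
\end{array} \right),
\end{eqnarray*}
corresponding to $r=2$, $\lambda_1=2$, $n_1=2$, $\lambda_2=3$, $n_2=1$.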
Let $A,B\in M_n(F)$. We say that $A$ is similar to $B$ in $M_n(F)$ if there
exists an invertible matrix $P\in M_n(F)$ such that $A=PBP^{-1}$.
It is known that an $n\times n$ matrix over the complex field is
similar to a Jordan normal form, see \cite{Ayres}. In this paper, a
short proof of this result is given.
\begin{theorem}\label{t1} (Jordan)
Let $F[\lambda]$ be the polynomial ring in one variable $\lambda$,
$A\in M_n(F)$, and $\lambda E-A$ the characteristic matrix of $A$. Suppose that
there exist invertible $\lambda$-matrices $P(\lambda),\ Q(\lambda)$ such
that
\begin{eqnarray}\label{e2}
P(\lambda)(\lambda E-A)Q(\lambda)=\left( \begin{array}{cccccc}
1 & \dots & 0 & 0 & \dots & 0 \\
\vdots & \ddots & \vdots & \vdots & & \vdots \\
0 & \dots & 1 & 0 & \dots & 0 \\
0 & \dots & 0 & (\lambda-\lambda_1)^{n_1} & \dots & 0 \\
\vdots & & \vdots & \vdots & \ddots & \vdots \\
0 & \dots & 0 & 0 & \dots & (\lambda-\lambda_r)^{n_r}
\end{array} \right).
\end{eqnarray}
Then in $M_n(F)$, $A$ is similar to (\ref{e1}).
\end{theorem}
\noindent{\bf Proof:} Let $V$ be an $n$-dimensional vector space over
$F$ with a basis $e_1,\dots,e_n$. Let $\cal A$ be a linear
transformation of $V$ such that
\begin{eqnarray*}
({\cal A}E-A)\left( \begin{array}{cccccc}
e_1 \\
\vdots \\
e_n
\end{array} \right)=0.
\end{eqnarray*}
Since $Q(\lambda)$ is an invertible $\lambda$-matrix, $\det Q(\lambda)$ is a
nonzero element of $F$, so each entry of $Q(\lambda)^{-1}$ is a polynomial in
$\lambda$; hence each entry of $Q({\cal A})^{-1}$ is a polynomial in ${\cal A}$.
Now, by (\ref{e2}), we have
\begin{eqnarray*}
\left( \begin{array}{cccccc}
1 & \dots & 0 & 0 & \dots & 0 \\
\vdots & \ddots & \vdots & \vdots & & \vdots \\
0 & \dots & 1 & 0 & \dots & 0 \\
0 & \dots & 0 & ({\cal A}-\lambda_1)^{n_1} & \dots & 0 \\
\vdots & & \vdots & \vdots & \ddots & \vdots \\
0 & \dots & 0 & 0 & \dots & ({\cal A}-\lambda_r)^{n_r}
\end{array} \right)
\left( \begin{array}{cc}
\ast \\
\vdots \\
\ast\\
y_1 \\
\vdots \\
y_r
\end{array} \right)=0,
\end{eqnarray*}
where
\begin{eqnarray*}
\left( \begin{array}{cc}
\ast \\
\vdots \\
\ast\\
y_1 \\
\vdots \\
y_r
\end{array} \right)
=Q({\cal A})^{-1} \left( \begin{array}{cccccc}
e_1 \\
\vdots \\
e_n
\end{array} \right).
\end{eqnarray*}
Then we have $({\cal A}-\lambda_1)^{n_1}y_1=0,\dots,({\cal
A}-\lambda_r)^{n_r}y_r=0$.
For any $\alpha\in V$, suppose that
\begin{eqnarray*}
\alpha=(a_1,\dots,a_n)\left( \begin{array}{cccccc}
e_1 \\
\vdots \\
e_n
\end{array} \right), \ a_i\in F,\ i=1,\dots,n.
\end{eqnarray*}
Then
\begin{eqnarray*}
\alpha &=&(a_1,\dots,a_n)Q({\cal A})\left( \begin{array}{cc}
\ast \\
\vdots \\
\ast\\
y_1 \\
\vdots \\
y_r
\end{array} \right)
=(\star,\dots,\star,f_1({\cal A}),\dots,f_r({\cal A})) \left(
\begin{array}{cc}
\ast \\
\vdots \\
\ast\\
y_1 \\
\vdots \\
y_r
\end{array} \right)\\
&=&f_1({\cal A})y_1+\dots+f_r({\cal A})y_r.
\end{eqnarray*}
Noting that $n_1+\dots+n_r=n$, it is easy to see that
$$
y_1,\ ({\cal A}-\lambda_1)y_1,\dots, ({\cal
A}-\lambda_1)^{n_1-1}y_1,\dots\dots,y_r,\ ({\cal
A}-\lambda_r)y_r,\dots, ({\cal A}-\lambda_r)^{n_r-1}y_r
$$
form a basis of $V$, and the matrix of ${\cal A}$ with respect to this basis
is (\ref{e1}).
The proof is completed.
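As a worked illustration of Theorem \ref{t1} (an example added here, not part of the original note), take $F$ to be the field of rational numbers and
\begin{eqnarray*}
A=\left( \begin{array}{cc}
0 & -1 \\
1 & 2
\end{array} \right),\qquad
\lambda E-A=\left( \begin{array}{cc}
\lambda & 1 \\
-1 & \lambda-2
\end{array} \right).
\end{eqnarray*}
Elementary row and column operations over $F[\lambda]$ reduce $\lambda E-A$ to
$\mathrm{diag}\,(1,(\lambda-1)^{2})$, since the greatest common divisor of the
entries is $1$ and $\det(\lambda E-A)=\lambda(\lambda-2)+1=(\lambda-1)^{2}$.
Hence $r=1$, $\lambda_1=1$, $n_1=2$, and $A$ is similar to
$J_{1,2}=\left( \begin{array}{cc}
1 & 1 \\
0 & 1
\end{array} \right)$.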
\ \
The following known theorem can be similarly proved.
\begin{theorem} Let the notation be as in Theorem \ref{t1}.
Suppose that there exist invertible $\lambda$-matrices $P(\lambda),\
Q(\lambda)$ such that
\begin{eqnarray}\label{e4}
P(\lambda)(\lambda E-A)Q(\lambda)=\left( \begin{array}{cccccc}
1 & \dots & 0 & 0 & \dots & 0 \\
\vdots & \ddots & \vdots & \vdots & & \vdots \\
0 & \dots & 1 & 0 & \dots & 0 \\
0 & \dots & 0 & g_1(\lambda) & \dots & 0 \\
\vdots & & \vdots & \vdots & \ddots & \vdots \\
0 & \dots & 0 & 0 & \dots & g_r(\lambda)
\end{array} \right)
\end{eqnarray}
where
$g_i(\lambda)=\lambda^{n_i}-a_{in_i-1}\lambda^{n_i-1}-\dots-a_{i1}\lambda-a_{i0}\in
F[\lambda],\ i=1,\dots,r$ and $n_1+\dots+n_r=n$.
Then in $M_n(F)$, $A$ is similar to a rational form
\begin{eqnarray}\label{e3}
\left( \begin{array}{ccccc}
T_1 & 0 & \dots & 0 \\
0 & T_2 & \dots & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & 0 & \dots & T_r
\end{array} \right)
\end{eqnarray}
where for each $i$,
\begin{eqnarray*}
T_i=\left( \begin{array}{cccccc}
0 & 1&0 & \dots & 0 & 0\\
0 & 0 &1& \dots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0& \dots & 0& 1 \\
a_{i0} & a_{i1} & a_{i2} &\dots& a_{in_i-2} & a_{in_i-1}
\end{array} \right).
\end{eqnarray*}
\end{theorem}
\noindent{\bf Proof:} By the same argument as in the proof of Theorem \ref{t1},
$$
y_1,\ {\cal A}y_1,\dots, {\cal A}^{n_1-1}y_1,\dots\dots,y_r,\ {\cal
A}y_r,\dots, {\cal A}^{n_r-1}y_r
$$
form a basis of $V$, and the matrix of ${\cal A}$ with respect to this basis
is (\ref{e3}). The proof is completed.
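For a small illustration of the rational form (an example added here, not part of the original note), take $r=1$, $n_1=2$ and $g_1(\lambda)=\lambda^{2}-\lambda-2$, so $a_{10}=2$, $a_{11}=1$ and
\begin{eqnarray*}
T_1=\left( \begin{array}{cc}
0 & 1 \\
2 & 1
\end{array} \right),\qquad
\det(\lambda E-T_1)=\lambda(\lambda-1)-2=\lambda^{2}-\lambda-2=g_1(\lambda),
\end{eqnarray*}
so the characteristic polynomial of the block $T_1$ recovers $g_1(\lambda)$, as the sign convention for the last row of $T_i$ requires.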
\ \
\noindent{\bf Remark:} If $F$ is algebraically closed, then one can
find invertible $\lambda$-matrices $P(\lambda),\ Q(\lambda)$ such
that (\ref{e2}) holds. In general, one has (\ref{e4}).
\end{document}